2017-07-27

<clever> try putting this file into the root of that project, and make it all relative
<clever> why are they not relative?
<clever> why do you have so many absolute paths in the first gist?
<clever> looks perfectly normal
<clever> joehh: what is inside mop-core/default.nix ?
<clever> joehh: *looks*
<clever> yeah
<clever> joehh: and also ghc-pkg describe foo
<clever> jasom: nix-shell -A foo.env then run `ghc-pkg list`
<clever> it ignored what was in-use, and deleted 90% of the os, lol
<clever> ive broken things pretty badly before, i did nix-store --delete --ignore-liveness
<clever> jasom: these should exist by default, it doesnt matter what order the users were made in
<clever> lrwxrwxrwx 1 clever users 46 Mar 7 2016 channels -> /nix/var/nix/profiles/per-user/clever/channels
<clever> lrwxrwxrwx 1 clever users 44 Oct 11 2015 channels_root -> /nix/var/nix/profiles/per-user/root/channels
<clever> jasom: nixos?
<clever> jasom: have you been deleting things to silence warnings about collisions?
<clever> jasom: does channels_root exist in .nix-defexpr?
<clever> jasom: --add must always be followed by --update to apply the additions
<clever> jasom: you need to run nix-channel --update as root
<clever> jasom: and what about ls -ltrh ~/.nix-defexpr/*/*
<clever> jasom: what does "nix-channel --list" and "sudo nix-channel --list" say?
<clever> jasom: just name the channel as always, nix-env -iA unstable.hello
<clever> jasom: yeah
<clever> jasom: and --remove every channel from all non-root users
<clever> jasom: same way you manage them as a normal user, nix-channel --add
<clever> jasom: even as root, only the channel called "nixos" is special, so you can add custom channels to root without harm
<clever> jasom: its usually better to just never have a channel on the users, and manage all channels by root
<clever> gchristensen: but that can make things like propagated inputs and other stuff harder to deal with
<clever> gchristensen: rather than appending 200 -L's to ld and letting it search
<clever> gchristensen: it will buildEnv all deps into a single dir, acting as an index in one spot
<clever> gchristensen: i have found that ghcWithPackages looks better than buildInputs
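The ghcWithPackages approach described above can be sketched like this (the package names are illustrative, not from the chat):

```nix
# Minimal sketch, assuming <nixpkgs> is on NIX_PATH.
# ghcWithPackages runs buildEnv over GHC plus the listed packages, producing
# one directory whose package DB already indexes every dependency, instead
# of the linker searching many separate -L paths.
let
  pkgs = import <nixpkgs> {};
in pkgs.haskellPackages.ghcWithPackages (hs: [ hs.aeson hs.lens ])
```

Running `nix-build` on this gives a wrapped GHC; `ghc-pkg list` inside it should show the chosen packages registered in the single combined DB.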
<clever> hodapp: what override did you add to opencv, and what attribute path?
<clever> because the eval is supposed to be pure
<clever> and hydra doesnt support using build slaves at eval time
<clever> Infinisil: import from derivation
<clever> hodapp: python was off, no contrib option visible
<clever> hodapp: *looks*
<clever> Infinisil: its a case of where IFD is bad
<clever> it does a native darwin build, on a darwin slave
<clever> but import <nixpkgs> { system = "x86_64-darwin"; } doesnt cross-compile
<clever> nativeBuildInputs is for when you cross-compile
<clever> and at this point, is simpler to just pre-run cabal2nix and commit the result
<clever> to fix that, i would have to import 2 instances of nixpkgs (host and darwin), use the host nixpkgs to build and run cabal2nix, then the darwin nixpkgs to import the resulting nix file
<clever> Infinisil: hydra cant do that
<clever> Infinisil: so if i import a darwin nixpkgs, and do haskellPackages.callCabal2nix, it wants to generate the .nix file on a darwin machine
<clever> Infinisil: IFD uses the target arch for building the derivation
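The two-nixpkgs workaround described above might look roughly like this (the project name, `src`, and the use of `haskellSrc2nix` as the cabal2nix entry point are assumptions, not from the chat):

```nix
# Sketch: generate the .nix file with the build host's nixpkgs, then import
# the result with the darwin package set. This is still IFD, but the
# generation step no longer demands a darwin builder.
let
  hostPkgs   = import <nixpkgs> {};                            # native arch
  darwinPkgs = import <nixpkgs> { system = "x86_64-darwin"; }; # target arch
  generated  = hostPkgs.haskellPackages.haskellSrc2nix {
    name = "myproject";  # illustrative name
    src  = ./.;
  };
in darwinPkgs.haskellPackages.callPackage generated {}
```

As the chat notes, pre-running cabal2nix and committing the generated file avoids the eval-time build entirely, which is simpler when hydra is involved.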
<clever> hodapp: i dont see QT in the closure of my opencv
<clever> hodapp: run "nix-store -q --tree" on the .drv file for the build
<clever> these ones
<clever> which arent documented well
<clever> yeah, i could put such a derivation into the top-level assertions of nixos
<clever> taktoa: just the fact that its valid
<clever> Infinisil: just an email address
<clever> Infinisil: i suspect hydra and the module have zero validation on the field i linked, and will insert whatever string you give it into the emails
<clever> correction, what email to include in the from: field
<clever> taktoa: or hydra config, where to email errors
<clever> hodapp: maybe, but i just learned it by reading setup.sh
<clever> hodapp: and the unpackPhase will copy $src to .
<clever> taktoa: what about just copy/pasting the parsing grammar directly from the sudo source, and into the sudoers linter in nix?
<clever> taktoa: it has to uncompact it, so every part is 4 hex digits, strip all :'s, flip it, then insert a . between every char
<clever> taktoa: what would you use to parse the v6 and also convert it into a reverse dns entry?
<clever> taktoa: and also, for reverse dns, that has to turn into 6.0.0.0.5.0.0.0.0.0.0.0 ....
<clever> taktoa: you can turn 1:2:3:0:0:0:5:6 into 1:2:3::5:6, and it will insert enough 0's to pad it out
<clever> taktoa: another use i can see for parsers, do you know how ipv6 can have a string of 0's at almost any point replaced by just :: ?
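The uncompaction and reversal described above can be sketched in Nix itself (the helper names are from nixpkgs lib; the whole expression is illustrative):

```nix
let
  lib = import <nixpkgs/lib>;
  # expand "::" back into zero groups, and pad every group to 4 hex digits
  expand = addr:
    let
      halves = lib.splitString "::" addr;
      groups = s: lib.filter (g: g != "") (lib.splitString ":" s);
      left   = groups (lib.elemAt halves 0);
      right  = if lib.length halves > 1 then groups (lib.elemAt halves 1) else [];
      zeros  = lib.genList (_: "0") (8 - lib.length left - lib.length right);
    in map (lib.fixedWidthString 4 "0") (left ++ zeros ++ right);
  # strip the colons, flip it, insert a dot between every nibble
  reverseDns = addr:
    lib.concatStringsSep "."
      (lib.reverseList (lib.stringToCharacters (lib.concatStrings (expand addr))))
    + ".ip6.arpa";
in reverseDns "1:2:3::5:6"
```

For the example address this yields `6.0.0.0.5.0.0.0.….1.0.0.0.ip6.arpa`, matching the "6.0.0.0.5.0.0.0…" shape quoted above.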
<clever> hodapp: you can also do "runHook postPatch" and that will also support the arrays in bash
<clever> Infinisil: the stdenv has code to detect if the variable is set, and eval it instead
<clever> Infinisil: oddly, you can have both a unpackPhase function, and a $unpackPhase variable, and just typing "unpackPhase" runs the function
<clever_> Infinisil: and unpackPhase runs the original function, not the overridden $unpackPhase
<clever> Infinisil: that runs a bash function, not a variable containing bash script
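The phase mechanism discussed above can be sketched as a derivation (the name and unpack logic are illustrative): mkDerivation turns the attribute into the `$unpackPhase` env variable, and stdenv's generic builder evals that variable if set, while typing `unpackPhase` in a shell still runs the original function.

```nix
# Sketch: overriding a phase as a string attribute.
let pkgs = import <nixpkgs> {};
in pkgs.stdenv.mkDerivation {
  name = "phase-demo";
  src  = ./.;
  unpackPhase = ''
    runHook preUnpack     # runHook also supports bash-array hooks
    cp -r $src source     # replaces the default "copy $src to ." logic
    cd source
    runHook postUnpack
  '';
}
```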
<clever> hodapp: if you dont put double-quotes around it, the \n's get lost
<clever> hodapp: quote the variable
<clever> so nix-instantiate causes builds
<clever> another problem with using IFD to lint sudoers using sudo, is that you must now build sudo at EVAL time

2017-07-26

<clever> bbl
<clever> its the same reason a nix file doesnt end in ;
<clever> one thing that trips me up a lot in nix-repl, never end a statement in ;
<clever> 5
<clever> nix-repl> let x = 5; in x
<clever> let foo="bar" is missing the 'in' keyword and a value to return
<clever> but otherwise, everything must return a value
<clever> so foo = "bar" in nix-repl adds foo to the global scope
<clever> Fuuzetsu: if it finds an =, it will do weird things that nix normally cant do
<clever> Sonarpulse: you mean /usr/lib/dyld?
<clever_> i think
<clever_> nix-store -r /nix/store/foo --add-root result --indirect
<clever_> gchristensen: try eval'ing this in nix-repl
<clever_> nix-repl> hello // { type = "dummy"; }
<clever_> that can also cast to a string
<clever_> derivations are special attribute sets
<clever_> you can just .name
<clever_> gchristensen: do you just have that string, or a derivation that evals to that string?
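The points above can be sketched in nix-repl terms (`hello` stands in for whatever derivation is at hand):

```nix
# Sketch: a derivation is an attrset whose type attribute is "derivation";
# string interpolation coerces it to its outPath.
let pkgs = import <nixpkgs> {};
in {
  name   = pkgs.hello.name;                    # plain attribute access, e.g. .name
  asPath = "${pkgs.hello}";                    # casts to the store path string
  dummy  = pkgs.hello // { type = "dummy"; };  # repl stops printing it as «derivation»
}
```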
<clever_> 3gig of ram
<clever_> so lvm splits the luks up, into a swap, and a zfs pool
<clever_> sphalerite[m]: i wanted a single luks volume, and swap, but swap on zfs is bad
<clever_> but i would avoid zfs on zfs when possible
<clever_> my laptop is zfs on lvm on luks
<clever_> LnL: ahhh, nice
<clever_> i dont think you can directly reserve it, but a quota on everything else would do that
<clever_> sphalerite[m]: zfs shares the free space of every filesystem in the pool, and each filesystem can optionally have a quota setting the upper limit on usage
<clever_> look near that, it may check many places
<clever_> yeah, strace -e file -o /tmp/wine-strace.log -ff wine foo.exe
<clever_> copumpkin: open the build, then under actions, bump to the top of the queue
<clever_> which makes the logs much much easier to read
<clever_> that makes it append pid to the logfile name
<clever_> wine forks a crap-ton, it will need some -ff
<clever> it will need a -ff
<clever_> link?
<clever_> and the dir of the executable and current dir are in that search path
<clever_> on windows, it searches for dll's in $PATH
<clever_> et4te: with the sandboxes off, nothing stops you from just referencing /usr/lib/foo
<clever_> et4te: yeah
<clever_> windows uses pe32
<clever_> et4te: and configure will remain fixed for the rest of the build
<clever_> et4te: the preConfigure hook i gave will fix all scripts that exist at that time, in the current dir (like configure)
<clever_> et4te: nix will also try to run it automatically on $out/bin/ during the fixupPhase (after installPhase)
<clever_> et4te: yep
<clever_> et4te: so it no longer relies on env, and it cant change what foo its using in the future
<clever_> et4te: it replaces #!/usr/bin/env foo, with the absolute path to foo
<clever_> et4te: so you want to do preConfigure = "patchShebangs .";
<clever_> et4te: the problem appears to be with the configure script itself
<clever_> et4te: yeah, it also has to be enabled in the ubuntu box where you were testing
<clever_> et4te: thats why i always setup sandboxing on my own machines
<clever> et4te: is the problem happening at nix build time, or runtime, can you gist the entire error?
<clever> et4te: where are the scripts with the errors?
<clever> et4te: run patchShebangs on the directory with the scripts
<clever> joepie91: not yet
<clever> joepie91: i cant patch the entire bin dir
<clever> joepie91: in my case, a bin dir in the source, with symlinks to various #! scripts that wanted node
<clever> joepie91: i recently discovered that patchShebangs doesnt patch the scripts behind symlinks
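The preConfigure fix from the discussion above, as a minimal derivation sketch (name and src are illustrative):

```nix
# Sketch: rewrite "#!/usr/bin/env foo" lines to absolute store paths before
# ./configure runs; fixupPhase repeats this on $out/bin after installPhase.
let pkgs = import <nixpkgs> {};
in pkgs.stdenv.mkDerivation {
  name = "shebang-demo";
  src  = ./.;
  preConfigure = "patchShebangs .";
}
```

Note the caveat above: patchShebangs does not follow symlinks, so scripts reached only through a symlinked bin dir stay unpatched.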
<clever> without a .phony: install, it cant do anything more
<clever> you told it to make install, and install already exists, done!
<clever> bennofs: ah, then just hardeningDisable = "all"
<clever> bennofs: or set hardeningDisable = "all" i think
<clever> bennofs: nix-shell -p gcc.cc
<clever> seequ: this example is also in the man page
<clever> $ nix-env -e '.*' (remove everything)
<clever> so hello will be the only installed thing
<clever> will remove everything, and install hello
<clever> in the man page, i see that nix-env -riA nixos.hello
<clever> seequ: nix-env -e i think
<clever_> serougjserg: you need to use attribute names in -p
<clever> avn: i have a PR being tested, that will add split outputs to all haskellPackages
<clever_> killing the ssh will probably lead to an unexpected eof, and then it will try the build again
<clever> copumpkin: the abort button just hides it, and stops any further proccessing
<clever> copumpkin: there is no way to stop an in-progress build
<clever> i could also give that user write to his "own" directory, and pre-install malware into his profile
<clever> looks like it
<clever_> sphalerite[m]: and something nixos sets up in /etc/profile auto-creates it
<clever_> sphalerite[m]: ah yeah, i think it defaults to the default profile
<clever> copumpkin: i have noticed a lot of arm builds, and even a few x86 builds hanging within make, make had a zombie process as a child, and wasnt calling waitpid
<clever_> 64bit x86 has more registers, so its faster/cheaper to pass more arguments in registers
<clever_> the calling convention is based on the cpu mainly
<clever_> so overloaded functions have different symbols
<clever_> there is also symbol mangling, the c++ names (namespace, types, return value) get flattened down to a c level symbol name
<clever_> and also who cleans the stack up, the caller or callee
<clever_> the calling convention sets things like where arguments go (in certain registers, on the stack?)
<clever_> it is capable of building everything, for every PR
<clever_> hydra can check them in future
<clever_> travis has a lot of false negatives, so people ignore it
<clever_> and in future, hydra can test everything, and just tell you what you broke
<clever_> travis runs that automatically when you open a PR
<clever_> nox has a review-pr thing that tests it
<clever_> and then each one of those will update to the latest within that api
<clever_> nixpkgs is currently in the area where you only keep a minimal number of versions (major api changes)
<clever> *waves*
<clever_> currently on the road
<clever_> thats the desktop at my house, let me ssh to it
<clever_> i made a push yesterday, let me see if peti left a comment
<clever_> gchristensen: sure
<clever_> not sure then
<clever_> everything you list in -p gets added to the buildInputs of a dummy derivation
<clever_> pie_: nix-shell -p libwebp
<clever_> pie_: nix doesnt care what you have installed, you must put it in the buildInputs
<clever_> gchristensen: the other goes into the initrd, so the boot filesystems can make use of the fs
<clever_> gchristensen: one goes into only the rootfs, so you can use it after stage-2 has loaded (like nfs for media)
<clever_> libwebp.out 23,160 x /nix/store/afsmdas1gzdx3zv2csqwggvh629cidk6-libwebp-0.4.3/include/webp/decode.h
<clever_> yeah
<clever_> maybe the wrong version
<clever_> sounds like openssl
<clever_> rather than using that dir as nixpkgs
<clever_> without the nixpkgs=, it will search for nixpkgs inside of that directory
<clever_> nix-shell -I nixpkgs=/path/to/nixpkgs/unstable -p stuff
<clever_> sphalerite[m]: i believe that has priority over which profile to install things to
<clever_> sphalerite[m]: what did ~/.nix-profile point to on the debian chroot?
<clever> heading off to bed now
<clever> i also checked, and hunit doesnt turn profiling off, so you may also need an override on hunit to turn it off
<clever> and will blame whoever called the hunit functions
<clever> i think the profiling will just not blame hunit for that cpu usage
<clever> so hunit can set profiling to off
<clever> joehh: this will give the original value passed to mkDerivation priority, and only change the default
<clever> enableLibraryProfiling = if (args ? enableLibraryProfiling) then args.enableLibraryProfiling else true;
<clever> the problem, is that the override unconditionally turns profiling on, even if a package has defined profiling to be off
<clever> hmmm, dont see it, but i can just retype it
<clever> joehh: oh right, i typed an example before on how to solve that, one min
<clever_> it may need both types of profiling to be enabled
<clever_> joehh: is core a library or executable?
<clever_> 20 , enableExecutableProfiling ? false
<clever_> 19 , enableLibraryProfiling ? false
<clever> joehh: can you gist the entire nix file?
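Putting the `args ? enableLibraryProfiling` trick above into a full override might look like this (a sketch; the overlay wrapping and attribute names are assumptions about the surrounding setup):

```nix
# Sketch: make library profiling default to true, but let a package that
# explicitly sets enableLibraryProfiling (e.g. an hunit override turning
# it off) keep its own value.
self: super: {
  haskellPackages = super.haskellPackages.override {
    overrides = hsSelf: hsSuper: {
      mkDerivation = args: hsSuper.mkDerivation (args // {
        enableLibraryProfiling =
          if args ? enableLibraryProfiling
          then args.enableLibraryProfiling
          else true;
      });
    };
  };
}
```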
<clever> you can still identify matrix users when the mass-quit-spam, lol
<clever> maybe*
<clever> gchristensen: maine for a nixos option that is of type email?
<clever> thats handled by bash
<clever> ~ doesnt expand in syscalls
<clever> oh, i see the problem
<clever> joepie91: what about the ld.so path?
<clever> nh2: i once had to deal with ld needing over 3gig of ram to link firefox, and i was on a 32bit machine
<clever> ahh
<clever> nh2: in reply to something you asked a few hours ago: https://github.com/NixOS/nixpkgs/issues/24844
<clever_> joepie91: find also has -delete
<clever_> :D
<clever_> copumpkin: ehh, i had worse, i was ssh'd into screen+irssi yesterday, and there was 2 minute latency at times
<clever_> copumpkin: fixing that would have greatly increased the speed of that bash based parser, making it entirely userland, rather than context switching for every string concat
<clever> copumpkin: it was hammering the brk() syscall, increasing the heap by a few kb, then shrinking it asap, then increasing it again
<clever> copumpkin: i noticed a bug in bash when it was running the old RSP parser

2017-07-25

<clever> dang, irc is unreadable in this app, i'll be back later
<clever> ec2 classic?
<clever> sphalerite[m]: that hydra is a bit slow, just try again
<clever> heading off for the night
<clever> gchristensen: but if /nix doesnt exist, nix-daemon better not be running!
<clever> gchristensen: so the daemon was running, but the store was kaput
<clever> gchristensen: i'm guessing it was a previous multi-user install, that had been "rm -rf /nix"'d
<clever> gchristensen: maybe just a killall nix-daemon near the start
<clever> gchristensen: i think the entire /nix was deleted, with the daemon running, so launchd didnt start a new one
<clever> gchristensen: might want to add a command to ensure it kills any old nix-daemon upon install
<clever> did launchd start a replacement?
<clever> ps aux | grep nix-daemon
<clever> mightybyte: the directory should exist now
<clever> mightybyte: ls -ltrh /nix/var/nix/daemon-socket/
<clever> gchristensen: what was it?
<clever> there is a launchctl command for that
<clever> the launchd unit has to be restarted
<clever> that daemon pre-dates everything being deleted, including the directory it cant get into
<clever> thats what i was thinking
<clever> and you also reinstalled nix 2 hours ago?
<clever> mightybyte: does it say when nix started?
<clever> mightybyte: ps aux | grep nix-daemon
<clever> mightybyte: is nix-daemon running?
<clever> mightybyte: what user owns the /nix/store dir?
<clever> i think one is on matrix

2017-07-24

<clever> it was chatty the whole 48 hours, lol
<clever> LnL: ah, my arm build of llvm didnt go silent for that long
<clever> catern: using this, you can turn a set into a series of shell commands
<clever> "a = a\nb = b"
<clever> catern: nix-repl> lib.concatStringsSep "\n" (lib.mapAttrsFlatten (k: v: "${k} = ${v}") { a="a"; b="b"; })
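Extending the snippet above into the "series of shell commands" use case (the variable names are illustrative; `escapeShellArg` is assumed from nixpkgs lib):

```nix
# Sketch: flatten an attrset into export statements for a shell script.
let
  lib  = import <nixpkgs/lib>;
  vars = { FOO = "1"; BAR = "two words"; };
in lib.concatStringsSep "\n"
     (lib.mapAttrsFlatten (k: v: "export ${k}=${lib.escapeShellArg v}") vars)
```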
<clever> LnL: ive seen my hydra run a build for 48 hours before, and pass
<clever> json might be simpler
<clever> yeah, it can be tricky to know which env vars were added
<clever> sphalerite[m]: my armv7 gcc has started building again
<clever> catern: every attribute on a derivation becomes an env variable at build time
<clever> and the busybox supports ash, xz, and tar
<clever> yeah, right now its a busybox and a shell script
<clever> yeah
<clever> if you want to create the bootstrap tools, like the initial gcc for building stdenv
<clever> so you need to provide a nar that contains a pre-built tar
<clever> <nix/fetchurl> cant unpack a tar, only a nar
<clever> but mixing the arches right can be tricky
<clever> the x86 is probably faster, and hydra could download it without involving any build slaves
<clever> everything will be done on an armv7 build slave
<clever> the above nix expression is a native build
<clever> so hydra cant pick the best one automatically
<clever> the problem, is that you can only define one way to create it
<clever> wak-work: but it can reuse an x86 "build" of the tar, if one happens to be in the store
<clever> wak-work: (import <nixpkgs> { system = "armv7l-linux"; }).fetchurl will demand an arm build of curl
<clever> copumpkin: so if the build isnt done yet, and you used the arm fetchurl, it has to build an arm curl to dl it
<clever> copumpkin: one oddity in nix, is that fixed-output derivations need a build of curl from the right arch
<clever> copumpkin: and because it was previously finished, hydra was able to flag it as done, without building curl
<clever> copumpkin: and so it wont even show it as having finished in a cached state
<clever> copumpkin: if the job points to a storepath from a previous job, they share the same build#
<clever> copumpkin: what about the url's to the installer iso's?
<clever> depends on what state they are in
<clever> that one also works
<clever> copumpkin: the eval has a dedicated option, "restart all aborted jobs"
<clever> may need to go up to the eval and restart all aborted jobs
<clever> that is weird
<clever> niksnut: i still have the failed log open in a tab
<clever> HSTS locks https on for the domain, and stops dumb users from ignoring ssl errors
<clever> Infinisil: thats just a server side redirect, HSTS does it client side, and stops you from turning it off
<clever> niksnut: Makefile:445: recipe for target 'manifypods' failed
<clever> ive always had full access
<clever> the main hydra may have more strict permissions
<clever> but its global
<clever> clear failed builds cache
<clever> under the admin menu at the top
<clever> it has no automatic retry
<clever> it cached the failure between them and assumes it will keep failing
<clever> copumpkin: also, the problem has only happened once on hydra so far
<clever> Infinisil: this wifi lags too hard to have a convo, lol
<clever> lua and php dont really treat the 2 things as separate concepts