<clever>
joko: can you gist the output of "ps -eH x" ?
<clever>
joko: if it ssh's back into itself, yeah
<clever>
joko: the flock command in there would limit you to a single job at a time, and the nice is more for when the hydra master is also a slave
<clever>
joko: hydra accepts a list of paths to files matching the /etc/nix/machines format, so you can add a secondary one just for hydra, which nix will ignore
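A rough sketch of that in NixOS terms (assuming the hydra module option is called services.hydra.buildMachinesFiles; the file name is just an example):

  services.hydra.buildMachinesFiles = [
    "/etc/nix/hydra-machines"   # extra build slaves that only hydra will see
  ];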
<clever>
joko: also, hydra can use an alternative file for its slaves
<clever>
and the next rebuild will restore that symlink
<clever>
joko: if you delete /etc/nix/machines it will revert back to local builds
<clever>
joko: so the normal users can still use the build slaves
<clever>
joko: nix-daemon needs read access to that keyfile
<clever>
goibhniu: good idea
<clever>
joko: but grub.enable isnt set, so the grub code never even runs
<clever>
garbas: the only thing fishy that i can see is `zpool set bootfs=tank/root/nixos tank`, other than that, all i can think of is to try EFI grub, rather than EFI systemd
<clever>
i cant see anything obvious in that first gist that would explain the boot problems
<clever>
garbas: line 24 can also be deleted, nixos will auto-detect that based on the config in hardware-configuration.nix
<clever>
joko: also, grub isnt enabled, so that line can just be deleted
<clever>
joko: and its set via import <nixpkgs> { system = "x86_64-darwin"; }
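For example, evaluated in nix-repl or a file, something like this pins the platform (the system string is just whatever you want to build for):

  let pkgs = import <nixpkgs> { system = "x86_64-darwin"; };
  in pkgs.stdenv.system    # -> "x86_64-darwin"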
<clever>
joko: stdenv.system would be the target you're building for
<clever>
joko: what is in your /etc/nix/machines ?
<clever>
pmeunier: .override changes the arguments given to the whole file, on line 1, so the x coming into it is changed
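A small sketch of the difference (clangStdenv and the patch are only examples):

  let
    pkgs = import <nixpkgs> {};

    # .override changes the arguments given to the whole file (line 1 of hello's default.nix)
    hello2 = pkgs.hello.override { stdenv = pkgs.clangStdenv; };

    # .overrideAttrs is handed the old attrset that went into stdenv.mkDerivation
    hello3 = pkgs.hello.overrideAttrs (old: {
      patches = (old.patches or []) ++ [ ./my.patch ];   # ./my.patch is a placeholder
    });
  in { inherit hello2 hello3; }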
<clever>
pmeunier: in something like that, it might be better to use .override, which operates at a much sooner level
<clever>
rec modifies the values before they reach the first function
<clever>
nothing can override things before rec
<clever>
ah
<clever>
what are you trying to do with those custom overrides?
<clever>
the override function had to set up a new .overrideAttrs, against the new version
<clever>
pmeunier: overrideAttrs is a special thing between the user-facing stdenv.mkDerivation, and the backend that actually handles making the derivation
<clever>
pmeunier: you have to pass the old hello to a function in this case
<clever>
yeah, if KRACK is in play, you could mitm the session, and inject a fake reply before the real server sends one to the router
<clever>
so the AP must decrypt the packet, re-encrypt with the other session, then rebroadcast
<clever>
and in the case of WPA, only the AP knows how to decrypt things from each peer
<clever>
sphalerite: i believe everything must go thru the AP when in AP mode
<clever>
steveeJ: i believe nix-daemon does something similar, for its sandboxes
<clever>
it could either keep the original layer.tar files, and just unpack them in sequence for each container, or it could union pre-unpacked ones together
<clever>
i think it also depends on what storage driver you use in docker
<clever>
and the kernel may not be happy trying to union 200 filesystems together
<clever>
but that would leave other docker end-users downloading 200 layers
<clever>
which could optimize things better
<clever>
there is a different docker/nix project i forget the name of, that makes one layer for every storepath
<clever>
yeah
<clever>
so zfs dedup is needed
<clever>
steveeJ: and if you import 2 such images, they wind up being 98% identical, often with just 5 bytes differing in a bash script
<clever>
steveeJ: when nix builds an image, it creates an entire layer, that has the closure of a given store path
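A minimal sketch of the kind of image build being described, using dockerTools from nixpkgs (the name and package are just examples):

  let pkgs = import <nixpkgs> {}; in
  pkgs.dockerTools.buildImage {
    name = "hello";
    contents = [ pkgs.hello ];       # the whole closure of pkgs.hello ends up in one layer
    config.Cmd = [ "/bin/hello" ];
  }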
<clever>
so if i was to spawn 20 containers from the same image, it would be nearly instant
<clever>
steveeJ: and then it uses zfs commands to atomically spawn an entire FS from that snapshot
<clever>
steveeJ: i set docker to the zfs driver, so docker makes a snapshot of every image
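In NixOS config terms that is roughly (option names as recalled, so double-check them):

  virtualisation.docker.enable = true;
  virtualisation.docker.storageDriver = "zfs";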
<clever>
steveeJ: the name of the pool
<clever>
sphalerite: i also enabled dedup for amd/docker, because i was loading the same image (built by nix) over and over, and they were 98% identical
<clever>
amd/docker 597M 12.4M 2.25G 4.18x
<clever>
so my store is 131gig of data, but its only using 67gig on-disk
<clever>
amd/nix 67.2G 67.2G 131G 2.16x
<clever>
this includes both the effects of compression and dedup, i believe
<clever>
amd 189G 96K 288G 1.66x
<clever>
NAME USED WRITTEN LUSED RATIO
<clever>
[root@amd-nixos:~]# zfs list -t filesystem -o name,used,written,logicalused,compressratio
<clever>
sphalerite: and one other thing
<clever>
and nearly 1gig of disk
<clever>
so its using 340mb of ram
<clever>
there are 630k rows in the dedup table, each record takes 1.7kb on disk, and 567 bytes in ram
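(Rough arithmetic: 630,380 entries x 567 B is about 357 MB of RAM, and 630,380 x 1.71 KiB is about 1.0 GiB on disk, which matches the figures above.)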
<clever>
dedup: DDT entries 630380, size 1.71K on disk, 567B in core
<clever>
[root@amd-nixos:~]# zpool status -D
<clever>
dedup is elsewhere
<clever>
the help output describes most of the fields
<clever>
sphalerite: one includes L2, the other doesnt
<clever>
joko: can you link your release.nix file?
<clever>
joko: load release.nix in nix-repl
2017-11-14
<clever>
and now nix cant rebuild when things change
<clever>
Lisanna: it stops people from doing impure things like src = "/home/clever/foo"; and they just chmod when it fails to read
<clever>
Lisanna: yeah, that sounds good
<clever>
and setup user, network, pid, and mount namespaces
<clever>
it has to bind-mount every single storepath in the closure of the build, within a chroot-like env
<clever>
infinisil: i believe its off, but i always turn it on
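For reference, turning it on in configuration.nix looks something like this (option name from the NixOS module of that era, so treat it as a sketch):

  nix.useSandbox = true;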
<clever>
so any time i missed the binary cache, the test passed
<clever>
yeah, i had sandboxing off at the time
<clever>
a nixos sandbox lacks every one of those, so it hard-codes itself to open "unknown" at runtime
<clever>
and then hard-code the result into the binary
<clever>
turns out, net-snmp will check for /etc/mtab, /proc/mounts, and 3 or 4 other paths, to figure out how your OS deals with keeping track of mounts
<clever>
it took a while to track that problem down
<clever>
git bisect cant deal with that pattern
<clever>
every build i made locally worked
<clever>
every build coming from the binary cache was broken
<clever>
gchristensen: i once had the fun of trying to bisect a bug, caused by nix sandboxing
<clever>
it uses nix copy and runInLinuxVM to mount the image
<clever>
mbrock: modify sourceRoot in the postUnpack
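A sketch of that trick (foo and the subdirectory name are placeholders):

  foo.overrideAttrs (old: {
    postUnpack = ''
      sourceRoot=$sourceRoot/some-subdir   # re-point sourceRoot before the builder cd's into it
    '';
  })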
<clever>
yeah
<clever>
mbrock: and stack2nix just generates a whole haskellPackages set, with exactly the closure of the project, and nothing more, on the exact versions stack said to use
<clever>
mbrock: the project i linked has at least 10 sub-projects (each with its own cabal file) in that repo, plus many deps that have been forked and are waiting on upstream things like what i linked
<clever>
mbrock: if you're using a stack file, there is now stack2nix
<clever>
pie_: that URL isnt hydra
<clever>
jtojnar: ah
<clever>
jtojnar: i know xfce can do it, ive had to do that on a recent netbook
<clever>
so no tainting the rest of the machine
<clever>
and only that checkout of X will use the new Y
<clever>
so you can just put that into the default.nix of X, to modify what Y src it uses
<clever>
and it will ignore the users config.nix file
<clever>
so you can just do import <nixpkgs> { config = { packageOverrides = pkgs: { ... }; }; }
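Filling that in for the "X uses a different Y" case might look like this (libY and the fetch details are placeholders):

  import <nixpkgs> {
    config.packageOverrides = pkgs: {
      libY = pkgs.libY.overrideAttrs (old: {
        src = pkgs.fetchFromGitHub {
          owner = "someone"; repo = "libY";
          rev = "deadbeef"; sha256 = "...";   # placeholder hash
        };
      });
    };
  }
  # then build X against the pkgs set this returns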
<clever>
mbrock: that argument lets you pass in the value of config.nix
<clever>
mbrock: have you seen what import <nixpkgs> { config = { ... }; } can do?
<clever>
ahh
<clever>
mbrock: then just skip the unpackPhase, and use what git gave you
<clever>
mbrock: you could also just git clone foo, then nix-shell '<nixpkgs>' -A foo
<clever>
binaryphile: in this case, it can take minutes just to query cache.nixos.org for the full closure of your build, so it helps speed it up a lot
<clever>
there is also a TTL option somewhere in nix.conf to make it re-check sooner
<clever>
binaryphile: thats the binary cache cache, you can clear it by just deleting /nix/var/nix/binary-cache-v3.sqlite*
<clever>
it only goes into pending when hydra actually starts the build
<clever>
one thing lost due to that change, is that the github status plugin wont say "pending" when the job is queued
<clever>
a proper eval-finished hook (and maybe a for loop over the jobs) would solve that issue and restore that functionality
<clever>
mbrock: and suddenly, it takes 9 hours to run the hooks for every job, lol
<clever>
mbrock: but that started to run into problems, because the hooks are handled by running the hydra-notify program (a perl script), on each build, one by one
<clever>
mbrock: previously, buildQueued was run for every single job being queued after the eval has finished
<clever>
ikwildrpepper: one thing i noticed, is that hydra will copy it to /var/lib/hydra/ on startup, and i hadnt seen anything like file(...) in the docs
<clever>
ikwildrpepper: yeah
<clever>
ikwildrpepper: the only way to deal with github tokens in hydra is to put them into the store, hydra doesnt support anything better
<clever>
joko: another, more extreme thing that i think is a bit of a flaw in hydra and nix-serve: if i gave you the result of `readlink /run/current-system`, you could download a complete copy of my hydra server (the nixos that powers it), along with any secrets that happen to be in /nix/store/ (which includes github auth tokens)
<clever>
the storepath of anything built from that source also counts
<clever>
joko: nginx can be configured to allow the binary cache URLs thru, but then anybody that knows the hash of your source can also get the full source
<clever>
so its better to just use some network-level firewalls to restrict who can connect
<clever>
basicauth will mess with binary cache usage though
<clever>
and also keep in mind, that hydra will share the full source to everybody
<clever>
yeah, like that
<clever>
joko: somewhat, "sudo -u hydra -i" then "ssh-keygen" and no passphrase, give it access, and then configure hydra with an ssh url in the build input
<clever>
larsvm: diff the output of "env" in both terminals
<clever>
Lisanna: what about setting it to 0?
<clever>
srhb: some devices like the raspberry pi also have a hardware watchdog timer, where you must reset the timer constantly, and if the OS locks up for any reason, the timer runs out, and the hardware forces a reboot
<clever>
srhb: the watchdog i had seen last, would check various conditions to see if they are true (is the network up?, is a given service responding correctly?) and reboot the machine if they fail for X seconds
<clever>
srhb: test wont touch /boot, so any reboot would rollback
<clever>
srhb: i would just use "nixos-rebuild test" and then maybe configure a watchdog service
<clever>
and you could just set require-sigs directly
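On NixOS that would be something like (option name from memory, so verify it):

  nix.requireSignedBinaryCaches = false;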
<clever>
ah, the default for the require-sigs option is based on whether the pubkey list is an empty string
<clever>
unpacking a tarball to /nix should fix that, i think
<clever>
you could do it from within a chroot inside the vm
<clever>
any storepath you would want to install with nix-env
<clever>
Lisanna: or just "nix-env -i /nix/store/foo" to install things into a new profile
<clever>
it would need to point at a buildEnv i think
<clever>
Lisanna: yeah, you would probably want to chroot into the image, and re-run "nix-env --set /nix/store/foo --profile /nix/var/nix/profiles/per-user/clever/" to fix the generations
<clever>
but taking an ubuntu image, and just replacing /nix with a new copy is easy
<clever>
yeah, incremental changes to /nix are going to be tricky
<clever>
Lisanna: it creates a full /nix directory, with the closure of whatever paths you give it
<clever>
jluttine: the qemu-kvm command in the qemu_kvm package uses it
<clever>
jluttine: kvm is just a kernel feature, that has to be used by something else like qemu
<clever>
fearlessKim[m]: even for a nix-shell, it will import it into the store, then point $src to the store snapshot
<clever>
jluttine: what would the kvm executable do?
<clever>
Lisanna: ah, yeah, you dont need 22
<clever>
fearlessKim[m]: that all happens at eval time, before a single build has begun
<clever>
fearlessKim[m]: the first time that value enters a derivation, nix will import a snapshot of that directory into the store, and hash the whole thing, and replace the value with a storepath
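A tiny illustration (the path is hypothetical):

  with import <nixpkgs> {};
  stdenv.mkDerivation {
    name = "example";
    src = ./my-source;   # this local dir is copied to /nix/store/<hash>-my-source, and $src points at that copy
  }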
<clever>
fearlessKim[m]: is src referring to a path on disk, or a fetchurl based derivation?
<clever>
Lisanna: looks good
<clever>
but if you use lib.mkForce, it should ignore the previous values
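For instance (services.foo and its extraConfig are only illustrative option names):

  { lib, ... }: {
    services.foo.extraConfig = lib.mkForce ''
      only this text; earlier definitions get dropped
    '';
  }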
<clever>
jluttine: the type of that option is lines, so it will concat each value together, with a \n separator