<clever>
Mateon1: or run gdb on it and the X binary, and get a backtrace
<clever>
Mateon1: just upload it to any service that allows sharing files
<clever>
yumbox: and for that, a bootloader has to be enabled
<clever>
yumbox: it doesnt copy the latest kernel to a special name, every kernel has a unique name based on a hash, and you need a bootloader config file to know which one is the right one
<clever>
ben: then you have no way to boot nixos
<clever>
yumbox: its much simpler to add the other distro to the nixos grub, most of the time
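For reference, the NixOS side of that is usually just boot.loader.grub.extraEntries; a sketch, with placeholder partition and kernel paths:

    boot.loader.grub.extraEntries = ''
      menuentry "Other Linux" {
        set root=(hd0,2)
        linux /boot/vmlinuz root=/dev/sda2
        initrd /boot/initrd.img
      }
    '';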
<clever>
yumbox: i have done it before, but you need to source the nixos grub.conf from the other distro's grub
<clever>
but with nix, i can safely download potentially broken stuff, with zero risk
<clever>
which makes it trivial for me to open your coredumps, with any other distro, id have to risk breaking my system by downgrading to your "known-broken" version so gdb can find crap
<clever>
another great thing about nixos, i can get the exact same xorg as you by running: nix-store -r /nix/store/sr9yg251855xh3ic3jb4zr3jd959kapr-xorg-server-1.18.4
<clever>
if you ran that, then line 135 of your hastebin would have created /tmp/core.2181.X
<clever>
as a random example, echo "/tmp/core.%p.%e" > /proc/sys/kernel/core_pattern
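For the dump to actually appear, core files also need to be enabled in the shell that launches the program; a minimal sketch:

    # allow unlimited-size core files, then set the naming pattern
    ulimit -c unlimited
    echo "/tmp/core.%p.%e" > /proc/sys/kernel/core_pattern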
<clever>
which also solves a second issue, the pre-made cert i had in git expired a while back, causing all tests to fail
<clever>
today, with the same problem, i just made it generate a fresh keypair on bootup, and refer to its cert from the test scripts
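A minimal sketch of that kind of boot-time keypair, assuming openssl is on the PATH; the output paths and subject are made up:

    # generate a throwaway self-signed cert+key at startup
    openssl req -x509 -newkey rsa:2048 -nodes \
      -keyout /run/test-ssl/key.pem \
      -out /run/test-ssl/cert.pem \
      -days 30 -subj "/CN=localhost"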
<clever>
so now the key is world-readable!
<clever>
one of my testcases needs an ssl cert+key, and in the past i have made this work by just embedding a cert+key into the git project
<clever>
MoreTea: i believe ACME works purely by storing the secrets outside of the store, they get generated at runtime
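On NixOS that is the security.acme module; a sketch with a placeholder domain (the keys land under /var/lib/acme at runtime, outside the store):

    security.acme.certs."example.com" = {
      webroot = "/var/www/challenges";
    };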
<clever>
lwf: nix-env -e hello
<clever>
zimbatm: so -r to /nix/store wont help
<clever>
zimbatm: but if i can read /run/current-system, i can still find the secret paths, the same way the service finds them
<clever>
the /run/current-system/etc is a directory in the store, and all storepaths must currently be world-readable
<clever>
and if a secret's path is in a config file anywhere, you will probably find it thru the above path
<clever>
in here is the original etc that is used to build /etc, before config files become read restricted
<clever>
ls /run/current-system/etc/
<clever>
smw_: a bigger issue i can see, if i can read /run/current-system, i can traverse the entire closure, which will lead me to every secret in the currently running nixos
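A sketch of that traversal; the secrets path being grepped for is hypothetical:

    # list the whole closure of the running system, then look for
    # config files that reference a secrets directory
    nix-store -q --requisites /run/current-system \
      | xargs -r grep -rls "/var/secrets" 2>/dev/null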
<clever>
smw_: there are 2 issues about that open on the nix project in github
<clever>
smw_: it was a custom distro, based on nixos
<clever>
smw_: and because sandboxing was enabled, there was pretty much no way for the build jobs to know it wasnt nixos
<clever>
smw_: i was using it as a nix build slave
<clever>
Gravious: you're telling it exactly which attribute you want, so it doesnt have to walk the entire nixpkgs tree, and it skips node packages entirely
<clever>
Gravious: yeah
<clever>
that one was entirely stateless
<clever>
joepie91: i have recently run the rpi3 100% rootless, the initrd contained a squashfs, with a bare-bones distro, just nix-daemon, sshd, and the closure of them
<clever>
joepie91: i have run 2 of my pi's from iscsi roots before, but each one had its own root, so it still had state and maintenance
<clever>
ben: nix-env entirely ignores $NIX_PATH, its weird
<clever>
ben: the channel called nixpkgs
<clever>
Gravious: 1.11 is pretty recent
<clever>
but when you put 100 pi's on a switch, the NAS with the rootfs becomes a bottleneck
<clever>
but thats only going to bottleneck each pi by itself
<clever>
yeah, ethernet over usb
<clever>
Gravious: you generally always want to use -iA anyways, nix-env -iA nixpkgs.nix-repl
<clever>
yep
<clever>
joepie91: which means the only limiting factor for scaling is cost, electricity, and bandwidth of the LAN
<clever>
joepie91: and depending on how you setup the network boot, you could boot all of them from the same disk image
<clever>
joepie91: just auto-detect any machine that connects with the right auth codes
<clever>
Gravious: what version of nix are you using? which nix-env?
<clever>
joepie91: then you make a service that listens for machines coming online/offline, and have it manage that 2nd file
<clever>
joepie91: the key part, is that hydra supports a :-separated list of /etc/nix/machines files, so you can give hydra a 2nd file, that is imperatively modified
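Each line of such a machines file uses the usual nix remote-build format; the hostnames and key path below are placeholders:

    # host               platform      ssh key               maxJobs speed features
    root@rpi1.example    armv7l-linux  /etc/nix/id_buildfarm 1       1     -
    root@rpi2.example    armv7l-linux  /etc/nix/id_buildfarm 1       1     -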
<clever>
joepie91: yeah, but similar software can also be deployed against any other arm board, and even x86 machines
<clever>
Gravious: ok, so its not a corrupt file
<clever>
Gravious: can you pastebin /nix/store/wi77m54m6w5mi246r7g8cws7qb7i56bm-nixpkgs-17.09pre102350.fa03b82/nixpkgs/pkgs/top-level/node-packages.nix
<clever>
joepie91: that second point, means i can just throw more pi's at the problem to make it go faster, and i can unplug pi's when the load is low
<clever>
joepie91: B, with a bit more config, i can just plug pi's in, and they join the hydra automagically, and if the pi goes offline, hydra stops trying to use it
<clever>
joepie91: this means 2 things, a: no SD cards to fail
<clever>
joepie91: one thing i have been working on with my rpi3, is full network booting
<clever>
joepie91: my hydra is doing arm builds, but its not used by default, and only has a single pi backing it right now
<clever>
smw_: you have a 16 hour head start on me, lets see who wins the race!
<clever>
smw_: ive asked the exact same question, then i just read the source to find my own answer
<clever>
smw_: policykit uses SpiderMonkey to parse its rule files
<clever>
not sure why its not building more, then
<clever>
ah, that should be well past the stdenv
<clever>
smw_: which derivations is it currently building?
<clever>
you can check top and "ps -eH x" to get some idea of what its actually using
<clever>
and it cripples itself
<clever>
mine runs fine for days with -j4, then every now and then, all 4 gcc's decide they want 500mb of ram each
<clever>
whenever it tries to run 16 gcc's at once
<clever>
smw_: then it will just go into swap hell and take 4x longer :P
<clever>
SuprDewd: ah, i havent seen it do that before
<clever>
smw_: so it will try to do up to 16 gcc processes at once, and probably eat all of your ram up and die
<clever>
SuprDewd: interpreter paths are meant to be fixed via a gcc flag in the gcc wrapper, and for pre-compiled stuff, you need patchelf
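The pre-compiled case looks roughly like this; the binary name is made up, and $NIX_CC/nix-support/dynamic-linker is the cc-wrapper's convention inside a nixpkgs build:

    # point a pre-built binary at the nix-provided dynamic linker
    patchelf --set-interpreter \
      "$(cat $NIX_CC/nix-support/dynamic-linker)" ./some-prebuilt-binary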
<clever>
smw_: the make level -j (probably --cores) only does something if enableParallelBuilding=true; has been set inside a derivation, and only helps after ./configure has finished
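A sketch of the derivation side; the name and src are placeholders:

    stdenv.mkDerivation {
      name = "example-1.0";
      src = ./.;
      # lets the generic builder pass -j$NIX_BUILD_CORES to make
      enableParallelBuilding = true;
    }

Paired on the command line with something like nix-build -j4 --cores 4.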
<clever>
smw_: the nix level -j builds each derivation in parallel, but that only helps once you get past the stdenv bootstrap
<clever>
smw_: -k can help with that some, it will keep building what it can even after a failure, so it doesnt waste hours waiting for you to notice
<clever>
magnetophon: only documentation ive found is a small entry in the nix-store manpage for the --generate-binary-cache-key option
<clever>
yep :)
<clever>
rebuild wont do anything
<clever>
then you only need to copy the key over and restart the nix-serve unit in systemd
<clever>
magnetophon: and that must be a quoted string, not an unquoted one
<clever>
magnetophon: ah, so its simply using an old secret key, not the new one
<clever>
magnetophon: what exactly did you set secretKeyFile to?
<clever>
magnetophon: how did you enable nix-serve?
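For reference, the usual NixOS wiring; the key path is a placeholder:

    services.nix-serve = {
      enable = true;
      # must point at the secret key matching the published pubkey
      secretKeyFile = "/etc/nix/keys/example.com-1.sec";
    };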
<clever>
magnetophon: how is nix-serve being given the key?
<clever>
magnetophon: check the nix-serve config, and possibly restart nix-serve
<clever>
magnetophon: both the public and private keys are base64, so they can only contain the characters listed here, and optionally end in =
<clever>
you also need to add the binary cache to nix.conf
<clever>
and you need to make sure nix-serve is reading the matching secret key, it might be reading an older one you made days ago
<clever>
and the publickey it generates has to go into the nix.conf of the devices that are going to download from it
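With 1.11-era option names, that is two lines in the client's nix.conf; the URL and key value are placeholders:

    # /etc/nix/nix.conf on the downloading machines
    binary-caches = https://cache.nixos.org/ http://mycache.example.com:5000
    binary-cache-public-keys = example.com-1:AAAA...=   # contents of the generated .pub file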
<clever>
and the docs say it has to be in the form of domain-number, like example.com-1
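The generation step itself, with a placeholder domain:

    # writes a matching secret/public pair named after your domain
    nix-store --generate-binary-cache-key example.com-1 \
      /etc/nix/keys/example.com-1.sec /etc/nix/keys/example.com-1.pub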
<clever>
you must name it after your own domain
<clever>
the command you gave a few days ago creates a key claiming to be cache.nixos.org-1, so it will use the real nixos pubkey, and fail
<clever>
when you ran nix-store --generate-binary-cache-key
<clever>
magnetophon: did you give the key a unique name?
<clever>
Gravious: if you run nix-daemon as root, and set "export NIX_REMOTE=daemon" for non-root, it can relay the root-needing things to the daemon
<clever>
obadz: writing a testcase framework for a custom network protocol
<clever>
but the curl script installs nix, and all of its dependencies, into /nix/store, fully isolated from the host libs
<clever>
and the nix it installs has to use and work with all libs debian provides
<clever>
Gravious: it looks like the debian package installs nix to /usr/bin, and then uses that to manage /nix/store/
<clever>
currently working my way thru the lua api docs
<clever>
added to my watchlist
<clever>
ah
<clever>
what about its cross-compile support?
<clever>
ah
<clever>
obadz: id also need bindings for the ioctls to control routing tables/addresses, toxcore, and /dev/tun, and thats already half the code in toxvpn
<clever>
Gravious: ah, nice
<clever>
might be simpler to just install via curl
<clever>
hmm, i cant see the above deb in the binary cache either
<clever>
the debs are still being built, but it looks like youll need help to download them
<clever>
this can install it on almost any linux or darwin device, but wont setup the services
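The installer being referred to:

    # unpacks nix into /nix and wires up the current user's profile
    curl https://nixos.org/nix/install | sh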
<clever>
and its getting late here, i'm off to bed now
<clever>
smw_: you could put asserts into your nix expression, but you would have to hard-code the input and expected value, and it might still trigger building of derivations your string references
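A tiny sketch of that kind of assert; the value being checked is made up:

    # evaluation aborts here if the expectation fails
    let
      greeting = "hello";
    in
      assert greeting == "hello";
      greeting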
<clever>
ah
<clever>
smw_: also, i'm now starting some integration tests for a large project on my end, and rather than implement its network protocol in perl, ive written a c++/lua app to handle it
<clever>
but the package has to provide its own tests
<clever>
which can be done by just setting doCheck = true; in a derivation
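A sketch; the name and src are placeholders:

    stdenv.mkDerivation {
      name = "example-1.0";
      src = ./.;
      doCheck = true;  # runs the checkPhase (make check by default) between build and install
    }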
<clever>
the closest thing i can think of to a unit test in nix is to just run "make test" between "make" and "make install"
<clever>
main downside is that it took 3-4 times as long, and a crap-ton of disk
<clever>
and then it kept the smallest version, and deleted the other files
<clever>
i remember making my own "compression algo" back when i was like 15, it was a php script that just ran gzip, bzip2, and something else, then compared them to see which gave the best result (it didnt bzip the gzip)
<clever>
line 12, lol
<clever>
ah, nice
<clever>
so a pair of 512kb files then, which easily fits inside a block
<clever>
ah
<clever>
unless thats 1mb to hold both files
<clever>
ah, so a 1023kb file would still be within that limit, but too big for gzip to handle in 1 block
<clever>
ah, didnt know it skipped that
<clever>
gzip and bzip2 have 900kb as the upper limit of block size
<clever>
spacekitteh: what if the file is over 1 block in size?
<clever>
greymalkin: all users made by modules within nixpkgs are pre-assigned an id within nixpkgs
<clever>
spacekitteh: i would expect that to still be limited to working within a single block
<clever>
if the uid isnt set, then it dynamically assigns one at runtime, if the user doesnt exist
<clever>
greymalkin: i just set users.extraUsers.clever = { isNormalUser = true; uid=1000; }; so they are pinned, and also map nicely over nfs
<clever>
they might not even wind up in the same compression block, and then you gain nothing
<clever>
while the compression algo cant, once you concat them into a giant blob
<clever>
i think part of it, is that git knows the history, and can diff different versions of the same file
<clever>
while still allowing the original object to be extracted
<clever>
the pack files clean that up more, and produce some intelligent binary diffs between objects
<clever>
and then it zlib's that whole string, to make the raw object in .git/objects/
<clever>
internally, git will store all files with a header of "blob %d\0", the %d is the file size
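That encoding is easy to verify with a 6-byte file; git's object id is just the sha1 over the header plus the contents:

    $ printf 'hello\n' > f
    $ git hash-object f
    ce013625030ba8dba906f756967f9e9ca394464a
    $ printf 'blob 6\0hello\n' | sha1sum
    ce013625030ba8dba906f756967f9e9ca394464a  -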
<clever>
spacekitteh: it would need to remap the branch names to avoid collisions, and to keep the GC from eating other projects, and with the recent sha1 news, id be more wary about merging projects controlled by different groups
<clever>
spacekitteh: regarding your 2nd point, i have thought about what would happen if i just jam every git project into a single .git directory
<clever>
or possibly download pre-built nix files from the binary cache
<clever>
so it has to unpack the entire source while doing import-from-derivation, for every project doing this