<clever>
camsbury: is `nix-daemon` running as root?
<clever>
that may speed up your testing
<clever>
steveeJ: if you modify nixos/lib/make-squashfs.nix and add -noDataCompression, then it just wont compress things
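(For reference: a sketch of the kind of edit meant here. The exact mksquashfs arguments in nixos/lib/make-squashfs.nix may differ; -noDataCompression is a real squashfs-tools flag that stores file data uncompressed.)

```nix
# sketch: append -noDataCompression to the mksquashfs call
# in nixos/lib/make-squashfs.nix (surrounding args approximate)
buildCommand = ''
  mksquashfs nix-path-registration $(cat $closureInfo/store-paths) $out \
    -keep-as-directory -all-root -noDataCompression
'';
```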
<clever>
-noDataCompression
<clever>
steveeJ: dang, you can only choose between xz and gzip, and there is no uncompressed option
<clever>
steveeJ: ^^
<clever>
simukis: more cores? less data in the squashfs? maybe turn off compression?
<clever>
ah, nice
<clever>
not currently
<clever>
it only affects `nix run`
<clever>
that is entirely separate from max-jobs and cores
<clever>
simukis: there is a bug that `nix run` will pin itself to a single core, and that then inherits to the shell it launches
<clever>
then nix will try to do up to 16 derivations in parallel, and each one will try to use every core, resulting in it starting up to 16 times the number of cores worth of gcc's
<clever>
max-jobs is how many derivations it does in parallel, cores is how many cores make&friends will try to use
<clever>
max-jobs and cores i mean
<clever>
simukis: there are 2 options in nix.conf, max-jobs and build-cores
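(Put concretely, those two settings look like this in nix.conf; the values are only examples.)

```
max-jobs = 4   # derivations built in parallel
cores = 2      # handed to each build as NIX_BUILD_CORES, used by make -j etc.
```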
<clever>
or just nuke the entire profile from orbit :P
<clever>
so the last generation was busted, and you would have had to use `nix-channel --rollback` to get to the previous manifest.nix and be able to update properly
<clever>
and in the event of an improper shutdown, will just truncate files
<clever>
i believe ext4 values metadata over data
<clever>
samueldr: if you have nuked the generation stuff, then you can just `nix-store --delete` that path
<clever>
ah
<clever>
the manifest.nix is part of how nix-env manages -i and -e, and i dont think its valid to be empty
<clever>
what does `nix-store --verify --check-contents` report?
<clever>
i dont think its supposed to be empty
<clever>
samueldr: its also plausible that the manifest.nix was corrupt somehow
<clever>
samueldr: it will then behave as if you have never done `nix-channel --update` before
<clever>
samueldr: if you dont care about rollbacks when removing all channels, just nuke the /nix/var/nix/profiles/per-user/$USER/channels* symlinks
<clever>
cant find the nixos manual option, i remember there being one
<clever>
ah
<clever>
colemickens: if anything is building, it will still build all the man pages
<clever>
it will still build them, but it wont download them, and it can GC them
<clever>
colemickens: programs.man.enable = false; will omit them from the install
<clever>
haskell is also c :P
<clever>
Myrl-saki: intersectAttrs, functionArgs, and autoArgs are the bulk of the workload
<clever>
chaker: run `nix show-derivation foo.drv` on both drvs, and compare what src is set to
<clever>
is src= the same on both of them?, can you paste that nix code?
<clever>
chaker: it looks like one side has a source and the other doesnt
<clever>
chaker: ah, that should omit .hg then, are you able to download both versions of inputSrc (run nix-store -r on one) and then just `diff -ru` the 2?
<clever>
lib.cleanSource can help with that
<clever>
chaker: when doing src = ./.; there can be differences like .git and other files
<clever>
chaker: how do the values differ? can you paste them?
<clever>
exarkun1: fileSystems also supports autoFormat
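(A sketch of that, assuming the NixOS fileSystems.&lt;name&gt;.autoFormat option; device and mount point are examples.)

```nix
fileSystems."/data" = {
  device = "/dev/xvdb";   # example device
  fsType = "ext4";
  autoFormat = true;      # create the filesystem if the device is blank
};
```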
<clever>
exarkun1: i'm guessing blockDeviceMapping creates an xvdb and such, then you use normal fileSystems."/foo" to mount it
<clever>
nwspk: id just use a one-shot systemd service then, yeah
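(A minimal sketch of such a one-shot service; the service name and command are hypothetical.)

```nix
systemd.services.populate-db = {
  wantedBy = [ "multi-user.target" ];
  serviceConfig.Type = "oneshot";
  script = "populate-the-database";  # hypothetical command
};
```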
<clever>
nwspk: is that library being run in a service?
<clever>
nwspk: what is going to use that database?
2018-09-13
<clever>
haslersn: let something = import (builtins.fetchTarball URL) {}; in ... environment.systemPackages = [ ... something.package ... ];
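(Spelled out as a configuration.nix fragment; the URL and attribute name are placeholders.)

```nix
let
  something = import (builtins.fetchTarball
    "https://example.com/expr.tar.gz") {};           # placeholder URL
in {
  environment.systemPackages = [ something.package ]; # placeholder attr
}
```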
<clever>
dynamic is just simpler
<clever>
and why do you need static linking?
<clever>
and pthread is part of glibc, so thats easy
<clever>
exarkun1: glibc.static already exists, just add it to the buildInputs
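(For example, a shell.nix along those lines might be:)

```nix
with import <nixpkgs> {};
mkShell {
  buildInputs = [ glibc.static ];
  # then something like: gcc -static main.c -lpthread
}
```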
<clever>
so you need a static pthread and static glibc
<clever>
oh, you're doing static linking
<clever>
ah, not sure then
<clever>
exarkun1: are you in nix-shell?
<clever>
yeah
<clever>
or /nix/expr
<clever>
maybe? but it would make the performance of `ls /nix/store` much much worse
<clever>
CMCDragonkai: more about caching the entire AST in a db, and caching the result of pure functions between executions
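(A toy model of that idea in Python; the names are invented, and nothing like this exists in Nix today.)

```python
import hashlib

# filehash -> evaluated value, kept between "evaluations"
_cache = {}

def eval_cached(path, evaluate):
    """Evaluate `path` once per content hash; later calls hit the cache."""
    with open(path, "rb") as f:
        key = hashlib.sha256(f.read()).hexdigest()
    if key not in _cache:
        _cache[key] = evaluate(path)
    return _cache[key]
```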
<clever>
Purple-mx: what do you think of the above?
<clever>
and maybe track what paths are pure, and can cache the result of thunks
<clever>
so you would need to store the pre-forced AST in the cache, and keep the import statements intact
<clever>
for example, <nixpkgs/default.nix> will inspect the contents of ~/.nixpkgs, ~/.config/nixpkgs, and an env var, and import a config.nix
<clever>
hmmm, but it also depends on the files it imports
<clever>
and you have an index of filehash -> Value row#
<clever>
Purple-mx: and when you eval a thunk, you fill in another field of its result
<clever>
Purple-mx: one random thought, what if you have a database, where you store bytecode for every single lambda, and you have the entire AST as rows in the db
<clever>
yep
<clever>
that could be it
<clever>
and managed to skip over the correct (but broken for zcash) method
<clever>
so you incorrectly went to the only correct answer! lol
<clever>
what you're already doing is the only way to change it, lol
<clever>
so its impossible to change it with .override
<clever>
and zcash is doing its own callPackage to get librustzcash
<clever>
let librustzcash = callPackage ./librustzcash {};
<clever>
exarkun1: then you can use .override to change stdenv, foo, or bar
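(As a sketch; pkg.nix and myFoo are hypothetical.)

```nix
let
  pkg = pkgs.callPackage ./pkg.nix {};  # pkg.nix: { stdenv, foo, bar }: ...
in
  pkg.override { foo = myFoo; }         # swap out the foo input
```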
<clever>
exarkun1: when you run callPackage on a file, and the file starts with a function like { stdenv, foo, bar }:
<clever>
Purple-mx: i can also see that being useful to save the result of thunks after the first run, to speed things up, and also to do more type-checking
<clever>
which i see you are changing
<clever>
you need to instead change name and src
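(A sketch of doing it that way; name, URL, and hash are placeholders.)

```nix
pkg.overrideAttrs (old: {
  name = "example-2.0";
  src = fetchurl {
    url = "https://example.com/example-2.0.tar.gz";
    sha256 = lib.fakeSha256;   # fill in the real hash
  };
})
```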
<clever>
exarkun1: and note, that changing version with overrideAttrs rarely has the effect you intend
<clever>
exarkun1: yeah
<clever>
.override lets you change the inputs when callPackage is loading something, so you dont have to map over the buildInputs like that
<clever>
exarkun1: if you use .override, you can replace inputs far more easily
<clever>
rust likely doesnt obey the search path stuff in cc-wrapper
<clever>
ahh, rust
<clever>
everything in the inputs will be added to the gcc search path for -L and -I
<clever>
exarkun1: can you gist your whole nix file?
<clever>
so when the dev output is in buildInputs, it will behave as-if .out was also in buildInputs
<clever>
any files in `ls -l /nix/store/kjqz6x25gai4r3fs8bzkjcifkis81zza-libsodium-1.0.16-dev/nix-support/` ?
<clever>
exarkun1: "${libsodium}" would be the 1st output (out in this case), but when you put libsodium into the buildInputs, the stdenv will i think refer to .dev automatically
<clever>
cement: yep, so thats fixed, is there anything else not working?
<clever>
somebody installed nix using nix-env as root
<clever>
cement: thats not right, `sudo nix-env -e nix`
<clever>
cement: what about `type nix-env` ?
<clever>
cement: what does `nix-env --version` return?
<clever>
that one can be safely ignored, its the old nix.conf vs the new nix binary
<clever>
JonReed: "foo" + "bar" and let the nix parser ignore newlines between values?
<clever>
other env vars can also be embedded, if they are needed
<clever>
grp: which will replace tokens like @out@ with the value of $out, at the time its building the hook
<clever>
grp: when nixpkgs is generating the setuphooks, it runs substituteAll
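(Roughly what that looks like; the file name is hypothetical.)

```nix
setupHook = substituteAll {
  src = ./setup-hook.sh;   # may contain e.g.: export FOO=@out@/share/foo
};
```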
<clever>
grp: oh, you might want @out@
<clever>
grp: pong
<clever>
stphrolland: just add the channel as root with nix-channel --add, ensure its listed with the name nixos, and then nixos-rebuild boot --upgrade
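(A sketch of those steps; the channel URL is only an example.)

```
sudo nix-channel --add https://nixos.org/channels/nixos-18.09 nixos
sudo nix-channel --list       # confirm it is listed under the name "nixos"
sudo nixos-rebuild boot --upgrade
```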
<clever>
stphrolland: stateVersion must not be changed
<clever>
:(
<clever>
sphalerite: i need to root this tablet to get that stuff working
<clever>
Dezgeg: when it says that, it will nix-store -qR the drv, look in the store to see what is missing, then look in the binary caches to see what can be downloaded and what has to be compiled
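(A sketch of checking that by hand; the store path is a placeholder.)

```
nix-store -qR /nix/store/<hash>-job.drv            # closure of the .drv
nix-store -r --dry-run /nix/store/<hash>-job.drv   # what builds vs. what fetches
```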
<clever>
what does the journal for queue-runner say?
<clever>
it can sometimes hang
<clever>
Dezgeg: try restarting hydra-queue-runner.service first
<clever>
Dezgeg: and what features that needs
<clever>
Dezgeg: whats important, is what derivation it started to build first
<clever>
Dezgeg: any user should work
<clever>
Dezgeg: find the .drv for the job thats not building, and try running `nix-store -r --dry-run` on it, on the hydra master, what does it output?
<clever>
sphalerite: i made an educated guess as to where the 2nd password was, dd'ed over a single byte, and then the crc failed, so the bios factory reset itself
<clever>
sphalerite: i'm guessing it was hashed, it wasnt cleartext
<clever>
Dezgeg: do you have build slaves configured? with the right features?
<clever>
and by diff'ing the nvram, i was able to see how big the hashed pw was, and generally where it was
<clever>
but i still had permission to change the boot pw
<clever>
the config pw was set, which restricted what i could do
<clever>
sphalerite: in that case, the bios had 2 passwords, the general config, and a boot pw
<clever>
sphalerite: with one of my laptops from the 486 era, i was able to remove a bios password by dd'ing into that character device
<clever>
sphalerite: if you `modprobe nvram` then you get this chardev
<clever>
betaboon: and you can easily reconfigure it again later if you lose the hydra
<clever>
betaboon: declarative jobsets help a lot, by letting you define the config in your git repo
<clever>
sphalerite: sometimes, depends on the bios
<clever>
betaboon: the inputs in the project, are for declarative jobsets, which is where hydra creates the jobsets automatically based on json and nix files
<clever>
betaboon: you can also create one jobset for each repo, and put them all in the same project
<clever>
i believe it maps to a .img on the sdcard, not the sd itself
<clever>
sphalerite: i have an android app for mass-storage, but it needs root
<clever>
ah
<clever>
and/or dig around in the bios config near network or boot
<clever>
take the local hdd out of the boot order
<clever>
and if your ipxe script comes from a server-side program, it can dynamically change the ipxe commands it serves
<clever>
sphalerite: this ipxe command will boot the local hdd, even if ipxe was netboot'd
<clever>
sanboot --no-describe --drive 0x80
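(In context, a minimal ipxe script with that fallback might look like this; the server URL is a placeholder.)

```
#!ipxe
# try the netboot target first, fall back to the local disk
chain http://boot.example.com/boot.ipxe || sanboot --no-describe --drive 0x80
```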
<clever>
or remove legacy from the boot menu
<clever>
i was thinking put ipxe on the network, and have no local MBR
<clever>
sphalerite: but i have setup stuff that would allow you to remotely customize that
<clever>
sphalerite: i think that would mostly be up to the bios config
<clever>
sphalerite: so it first unpacks the initrd to ram, including root.squashfs, then it mounts that squashfs, and uncompresses its files on-the-fly
<clever>
sphalerite: minor correction to one of your comments, the rootfs/nixstore, is in the squashfs(compressed) which is in the initrd(also compressed)
<clever>
which is why narfuse doesnt work on .nar.xz files
<clever>
you dont know what offset in the compressed stream matches a given offset in the uncompressed stream
<clever>
bennofs: the compression also comes into play
<clever>
sphalerite: its just a giant stream of length-prefixed blobs, some of them are filenames, some are file contents
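(A toy model of that framing, assuming the NAR-style string encoding: a 64-bit little-endian length, the payload, then zero-padding to an 8-byte boundary.)

```python
import struct

def pack(s: bytes) -> bytes:
    """Encode one token: LE64 length + payload + zero pad to 8 bytes."""
    pad = (8 - len(s) % 8) % 8
    return struct.pack("<Q", len(s)) + s + b"\0" * pad

def unpack(buf: bytes, off: int = 0):
    """Decode one token starting at `off`; return (payload, next offset)."""
    (n,) = struct.unpack_from("<Q", buf, off)
    start = off + 8
    pad = (8 - n % 8) % 8
    return buf[start:start + n], start + n + pad
```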
<clever>
sphalerite: due to the way the nar fileformat works, you need to download every file before X to get the position of X in the stream
<clever>
sphalerite: so the "design limit" of narfuse, winds up giving you caching
<clever>
sphalerite: narfuse would rely on nix to fetch the .nar via some method (including a remote store), and drop the .nar into a local dir
<clever>
sphalerite: narfuse could have an option to also try to download everything from a cache upon failing to find it locally
<clever>
sphalerite: then narfuse mounts /nix/nars to /nix/store and it looks like a normal store
<clever>
sphalerite: and nix would flag things as being valid in db.sqlite, after it finishes /nix/nars/hash-hello-1.2.3.nar
<clever>
yep
<clever>
sphalerite: and then nix downloading things from the binary cache, would just dump .nar's into a dir, and narfuse would make it look like it was unpacked
<clever>
sphalerite: nix would need to be modified, to not unpack the .nar when downloading, and just dump it into a directory, and notify narfuse to rescan the dir
<clever>
sphalerite: and narfuse is then responsible for the "unpacking"
<clever>
sphalerite: i didnt think of that back then, but it could be along the lines of, when it fails to find a path, check the binary caches and run `nix-store -r` on it, and the narfuse backend in nix would just download a .nar file, and register it as being valid
<clever>
sphalerite: for db.sqlite and tracking what actually exists locally?
<clever>
sphalerite: eek, this code is from back when i had left the sandbox off, and i was running nix in nix!
<clever>
sphalerite: glibc is to blame
<clever>
sphalerite: the closure for narfuse is 63mb right now
<clever>
sphalerite: if you keep the storepaths as nar files, then you can share those over ipfs or bittorrent, without having to pay double on the disk, to hold the unpacked and packed
<clever>
sphalerite: the main reason i wrote narfuse, was actually for ipfs
<clever>
static haskell linking gets rid of ghc references
<clever>
thats trivial to do
<clever>
sphalerite: its only 1 or 2mb compiled
<clever>
sphalerite: precompiled haskell, not a full compiler
<clever>
sphalerite: and once you have that, you just need a 404 handler that downloads things on-demand, and you have what you linked above, with caching
<clever>
sphalerite: this allows you to take a directory of .nar files, and just mount it to /nix/store