2019-02-17

<clever> test also made no new generation
<clever> wedens: i try to test such stuff without nixos-rebuild
<clever> that removes the roots, so a normal `nix-collect-garbage` (with no flags) can then delete it
<clever> removing generation 463
<clever> [root@amd-nixos:~]$ nix-env --profile /nix/var/nix/profiles/system --delete-generations 463
<clever> 464 2019-02-06 06:06:36 (current)
<clever> 463 2019-02-03 05:45:01
<clever> [root@amd-nixos:~]$ nix-env --profile /nix/var/nix/profiles/system --list-generations
<clever> wedens: you can also use nix-env to manually do it
<clever> wedens: nix-collect-garbage has flags to delete old generations
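A minimal sketch of both approaches being described (the generation number and the retention period are illustrative):

    # delete one specific system generation by hand, then collect garbage
    nix-env --profile /nix/var/nix/profiles/system --delete-generations 463
    nix-collect-garbage
    # or let nix-collect-garbage drop every generation older than a cutoff
    nix-collect-garbage --delete-older-than 14d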
<clever> sb0: its possible that you already had rustc, from pre-hydra testing, and when nixpkgs changed, it had to get a new rustc
<clever> sb0: services.hydra.useSubstitutes will allow hydra to just download rustc from cache.nixos.org, and un-hang it
<clever> and the default localhost slave lacks that feature
<clever> sb0: but rustc requires the "big-parallel" feature on a build slave
<clever> sb0: you dont have the binary cache on, so hydra has to build a copy of rustc
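A sketch of the two fixes suggested here, as NixOS configuration on the hydra machine (hostname, maxJobs and the feature list are illustrative):

    services.hydra.useSubstitutes = true;   # let hydra fetch rustc from cache.nixos.org
    nix.buildMachines = [
      {
        hostName = "localhost";
        system = "x86_64-linux";
        maxJobs = 4;
        supportedFeatures = [ "big-parallel" "kvm" "nixos-test" ];
      }
    ];

nix.buildMachines ends up in /etc/nix/machines, which hydra's queue runner reads by default, so declaring "big-parallel" there is what un-sticks jobs that require it.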
<clever> sb0: there it is
<clever> "requiredSystemFeatures": "big-parallel",
<clever> show-derivation each of them
<clever> then youll see 2 new .drv files, for cargo and rust
<clever> sb0: add that to the `nix-store -r --dry-run /nix/store/0gaksckwpi6cwkmkncd8pm5mhbr7z837-conda-artiq-board-kasli-tester.drv --option substituters ""`
<clever> --option substituters ""
<clever> sb0: then it could also be cargo or rust, one sec
<clever> sb0: do you have binary cache enabled under services.hydra ?
<clever> sb0: run `nix show-derivation /nix/store/iwxbwxl3ldk7k36grr0xyj7gn229l0y6-artiq-dev-usr-target.drv` on each of the .drv's listed
<clever> into a pastebin
<clever> sb0: nix-store -r --dry-run /nix/store/0gaksckwpi6cwkmkncd8pm5mhbr7z837-conda-artiq-board-kasli-tester.drv
<clever> yeah
<clever> either use fetchTarball, or add a nixpkgs input under inputs
<clever> and the eval fails, it cant import <nixpkgs>
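A sketch of the fetchTarball route (the URL is illustrative; on hydra you would more likely declare a nixpkgs input instead):

    let
      pkgs = import (builtins.fetchTarball
        "https://github.com/NixOS/nixpkgs/archive/master.tar.gz") {};
    in pkgs.hello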
<clever> sb0: and why is enabled not checked?
<clever> sb0: try using https:// rather than git://
<clever> and post it somewhere
<clever> sb0: can you screenshot the current project config?
<clever> sb0: lets wait ~5mins, for the TTL stuff
<clever> yeah, the error reporting in this area needs work
<clever> sb0: line 13 has a trailing ,
<clever> sb0: your json isnt valid
<clever> parse error: Expected another key-value pair at line 14, column 5
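The kind of mistake being pointed at, as a tiny illustration (keys made up): a comma after the last key-value pair makes the whole file invalid JSON, which is exactly what produces "Expected another key-value pair":

    {
        "enabled": 1,
        "hidden": false,
    }

Dropping the comma after `false` makes it parse.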
<clever> sb0: you may want to just wait ~5mins
<clever> sb0: the configuration tab is still wrong, so the eval isnt part of the equation yet
<clever> sb0: or wait ~5mins
<clever> sb0: try changing the url to something "wrong" then change it back
<clever> sb0: spec.json tells hydra how to configure .jobsets, and what nix file to build, to get specs.json
<clever> delete lines 2 and 18
<clever> sb0: that json file isnt valid, inputs must be at the root level, not main
<clever> but its already
<clever> and all force eval does, is make it pending
<clever> force eval only works if the page i linked is correct
<clever> sb0: youll need to wait until https://nixbld.m-labs.hk/jobset/artiq/.jobsets#tabs-configuration updates
<clever> sb0: hydra doesnt update it immediately
<clever> sb0: that json error is likely somewhere in the journal logs
<clever> sb0: correct, the settings must point to a root spec.json file, like the one i linked above
<clever> sb0: hydra-project.nix then generates a set of jobset->spec.json, to describe how to build everything else
<clever> sb0: spec.json tells it how to build hydra-project.nix
<clever> sb0: and in the project settings, you give it the git url, and the relative path, of that spec.json
<clever> sb0: that root spec.json is the one you need to fix
<clever> sb0: there is a root spec.json, that defines how to configure .jobsets, which then creates the main specs.json
<clever> sb0: you must still specify those, in the spec.json
<clever> sb0: oh, that jobset has no valid inputs, and no nix expression, so it will never eval properly
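A rough sketch of what such a root spec.json can look like for a declarative hydra project (every value here is illustrative, not sb0's actual config; the nixexprpath mirrors the hydra-project.nix mentioned above):

    {
        "enabled": 1,
        "hidden": false,
        "description": "jobsets",
        "nixexprinput": "src",
        "nixexprpath": "hydra-project.nix",
        "checkinterval": 300,
        "schedulingshares": 100,
        "enableemail": false,
        "emailoverride": "",
        "keepnr": 3,
        "inputs": {
            "src":     { "type": "git", "value": "https://example.com/repo.git", "emailresponsible": false },
            "nixpkgs": { "type": "git", "value": "https://github.com/NixOS/nixpkgs.git release-18.09", "emailresponsible": false }
        }
    }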
<clever> sb0: what does this journal say?
<clever> sb0: `journalctl -f -u hydra-evaluator`
<clever> including PR support for most
<clever> thats got the declarative config for nearly all of my projects
<clever> sb0: ive not read those docs, ever, but i do have a bunch of examples
<clever> sb0: we are all over the globe
<clever> sb0: eastern canada
<clever> sb0: to start with, open the queue like https://hydra.iohk.io/queue-summary, does it say what arches have it queued?
<clever> ottidmes: only the fetchers inside nixpkgs can be overridden, the builtin ones cant
<clever> ` --option extra-sandbox-paths` only works if you are a trusted user
<clever> yep
<clever> ah, then stop postgresql.service, and rm that too
<clever> sb0: there is a hydra-init.service in systemd
<clever> sb0: nixpkgs added more required features, and wont run on older build slaves
<clever> sb0: hydra is bad at reporting missing features
<clever> sb0: its likely that your build slaves just lack features, no need to wipe the db
<clever> ottidmes: yeah, sandbox-paths
<clever> ma9e: yeah, thats normal
<clever> heh, so simple!
<clever> ma9e: just set the locale env vars, and add glibcLocales to the nativeBuildInputs
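A sketch of that fix inside a derivation (assuming pkgs is in scope; the locale value is illustrative, and LOCALE_ARCHIVE is the usual env var glibc on nixos reads):

    pkgs.runCommand "locale-example" {
      nativeBuildInputs = [ pkgs.glibcLocales ];
      LANG = "en_US.UTF-8";
      LOCALE_ARCHIVE = "${pkgs.glibcLocales}/lib/locale/locale-archive";
    } ''
      locale > $out
    ''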
<clever> the builtin fetchers are also serial, rather than parallel, so it can help performance to avoid them
<clever> builtins.fetchurl should be the flat hash, builtins.fetchTarball is the recursive hash
<clever> which builtin fetcher?
<clever> ottidmes: the nix-hash command can also do the same hashing, and convert between base32 and base64
<clever> you can also `nix-store --add-fixed sha256 path` optionally with --recursive, to add a file to the store, and get its hash
<clever> the actual name/path of $out doesnt matter, so you can also nix-store --dump /tmp/foo | sha256sum
<clever> nix-store --dump $out | sha256sum, and you're done
<clever> recursive allows it to be a file, directory, or symlink, so it is instead the hash of the nar
<clever> flat requires that $out be a plain file, and then outputHash is just the raw hash of the file (sha256sum, and you're done)
<clever> outputHashMode can either be flat or recursive
<clever> ottidmes: it may help you more to know how the hashing is done, and just hash without the store
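A sketch of doing both hash styles by hand, outside the store (paths are illustrative):

    # flat: $out must be a single file, the hash is just the file's hash
    sha256sum ./some-file
    # recursive: hash of the NAR serialization, works for directories and symlinks too
    nix-store --dump ./some-directory | sha256sum
    # nix-hash does the same and can emit base32 directly
    nix-hash --type sha256 --base32 ./some-directory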
<clever> and yeah, it should really only be disabling if a sha256 is present
<clever> ottidmes: i think its because that fetcher is run within the normal build sandbox, and can lack CA certs
<clever> ottidmes: ah, that is a bit odd
<clever> ottidmes: builtins.fetchurl doesnt use the curl command, but rather, the curl library
<clever> ottidmes: i think that one is secure
<clever> noonien: the cc-wrapper script
<clever> ottidmes: ah, nix-prefetch-url then
<clever> noonien: then use patchelf to change it after the build, almost exactly the same as when you're packaging stuff for nixos
<clever> ottidmes: yeah, nix verifies the hash of all fixed-output things, so a mitm attack can only (waste bandwidth|see what you're doing|block downloading it)
<clever> noonien: pkgsCross.muslpi
<clever> noonien: you want static linking then
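A sketch of building something against that cross set (the `hello` attribute is just an example; whether a given package links fully statically depends on the package itself):

    nix-build '<nixpkgs>' -A pkgsCross.muslpi.hello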

2019-02-16

<clever> ,locate bin/Xvnc
<clever> iqubic: most of the time, yeah
<clever> iqubic: `git pull`
<clever> the 404's may already be fixed, and you just need to pull the fix
<clever> in the nixpkgs checkout
<clever> iqubic: did you do a `git pull` ?
<clever> if your pre-start copies the src from the store, it will undo the update at every restart
<clever> MichaelRaskin: it may also try to use that write status to upgrade itself
<clever> ./result/bin/steam
<clever> iqubic: nix-build -A steam, in the root dir of that nixpkgs checkout
<clever> proton based games also get upset if you -9 the wrong thing, and then steam will never register the game as "closed"
<clever> just telling the window-manager its OK to force-kill was enough
<clever> mek42_laptop: lol, the game locks up solid when i quit to desktop
<clever> iqubic: ls $(nix-build '<nixpkgs>' -A steamPackages.steam-fonts)/share/fonts/truetype/
<clever> mek42_laptop: looks like i was last in the thieves guild, when i abandoned the game a few years ago
<clever> mek42_laptop: lol, it even found my old saves
<clever> unknown
<clever> mek42_laptop: wow, it actually starts, i never thought to try skyrim
<clever> iqubic: give you a different version of mono
<clever> the launcher works, and it auto-set my video settings to ultra high, lol
<clever> mek42_laptop: after skyrim finished downloading, it started to download proton 3.16 beta
<clever> iqubic: modify this file in a local checkout of nixpkgs
<clever> iqubic: if you run steam from a terminal, and then try to start the game, what does it show in the terminal?
<clever> skyrim will take 7mins to DL
<clever> none of the .net based games ive previously played work under proton
<clever> iqubic: because M$ touched it
<clever> mek42_laptop: the nixpkgs docs also explain how to make new packages
<clever> ,pills mek42_laptop

2019-02-14

<clever> mek42_laptop: yep
<clever> NemesisD: you could then make a bash script that wraps it, and sets $PATH first
<clever> NemesisD: nix computes runtime deps automatically, based on what storepaths your $out refers to
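A sketch of such a wrapper with writeShellScriptBin (names are illustrative; `pkgs.hello` stands in for whatever binary is being wrapped). Because the wrapper's text refers to those store paths, nix records them as runtime deps automatically:

    pkgs.writeShellScriptBin "hello-wrapped" ''
      export PATH=${pkgs.lib.makeBinPath [ pkgs.coreutils pkgs.gnugrep ]}:$PATH
      exec ${pkgs.hello}/bin/hello "$@"
    ''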
<clever> inx0: so you need to apply an override to a package, to modify how it's built
<clever> inx0: all paths in the store are immutable, writeTextFile always creates a new /nix/store/foo/
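A sketch of the kind of override being described (the package and patch names are made up):

    somePackage.overrideAttrs (old: {
      patches = (old.patches or []) ++ [ ./my-fix.patch ];
    })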
<clever> mrus: you would do all of that at the lvm level
<clever> mrus: you can just do 2*luks -> lvm -> filesystems
<clever> mrus: lvm can already do mirroring, so you could just lvm mirror 2 luks devices
<clever> mrus: why both mdadm and lvm?
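Roughly what that 2*luks -> lvm -> filesystems layering looks like from the shell, assuming the two LUKS devices are already open as /dev/mapper/crypt0 and /dev/mapper/crypt1 (names and sizes are illustrative):

    pvcreate /dev/mapper/crypt0 /dev/mapper/crypt1
    vgcreate vg0 /dev/mapper/crypt0 /dev/mapper/crypt1
    # a mirrored logical volume spanning the two LUKS devices
    lvcreate --type raid1 -m 1 -L 50G -n root vg0
    mkfs.ext4 /dev/vg0/root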
<clever> gchristensen: oops, was just about to say that
<clever> elvishjerricco: could be that somebody swapped it around, and you're going to swap it once more? :P
<clever> elvishjerricco: ive run into problems due to minimal being the default, `cal` cant highlight the current date
<clever> elvishjerricco: maybe, ive never really used replaceRuntimeDependencies, and i thought minimal was already the default
<clever> elvishjerricco: of note, i think replaceRuntimeDependencies is recursive, and will create new versions of everything, with the paths swapped around
<clever> they got moved, just yesterday!
<clever> tilpner: that commit also explains why half the context functions are missing
<clever> tilpner: wait, i was going to say, you have no way to view context ... lol
<clever> what did unsafeDiscardOutputDependency do... lol
<clever> wait, i linked the wrong builtin above, and you named an entirely diff one, lol
<clever> tilpner: the only way to get rid of context is via unsafeDiscardStringContext
<clever> tilpner: the context is also like a virus, if you use string manip to extract just the "/" from "/nix/store/foo", that "/" still has all of the context on it
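A tiny sketch of that behaviour (assuming pkgs is in scope):

    # taking just the first character still carries hello's context along
    builtins.substring 0 1 "${pkgs.hello}"
    # same store path as a plain string, with all context stripped
    builtins.unsafeDiscardStringContext "${pkgs.hello}"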
<clever> and maybe `nix-diff` the 2 drv's
<clever> elvishjerricco: use `nix-store --query --deriver` to get the .drv for that, and others, then `nix show-derivation` them
<clever> main problem was building v6 binaries, because v7 opcodes would leak in and poison the hydra
<clever> i have mixed qemu-user and real arm build slaves before
<clever> ah
<clever> gchristensen: ?
<clever> elvishjerricco: by attrpath, not really possible, but by name, you can, it could also help to check by attrs on its derivation, but it depends heavily on the context, got a path as an example?
<clever> when you later pass that string to builtins.derivation, it will collect the context for all of its strings, and that new derivation depends on those things
<clever> this string has some invisible context on it, that points to the .drv file for hello
<clever> > "${pkgs.hello}"
<clever> as for what it does, every single string in nix has a context list on it
<clever> so having the nix sandbox on, forces the path to "not exist" and expose such mistakes
<clever> tilpner: one thing to keep in mind, is that if you use unsafeDiscardOutputDependency at the wrong time, a lack of nix sandboxing can hide problems, because the path still exists
<clever> (and a copy is already inside the squashfs on line 118)
<clever> and runtime deps are a subset of that, so i dont depend on it at runtime either
<clever> tilpner: but, because i discarded all context, i dont depend on /nix/store/foo at build-time
<clever> tilpner: this lets me bake the init=/nix/store/foo/init into the file command-line
<clever> tilpner: so it also wont count as a dep of your output
<clever> tilpner: it lets you get the path of something, without it counting as an input
<clever> tilpner: builtins.unsafeGetAttrPos can also be used sometimes
<clever> line 106 will then stream the path into a nar, and write it to that stream
<clever> line 86, the refsink, is a writable stream
<clever> and --references is just a query against db.sqlite
<clever> it then saves that to db.sqlite
<clever> and it will search for the paths that are in the build-time closure (the inputs)
<clever> behind the scenes, nix will basically do `nix-store --dump $out | egrep "foo|bar|baz" -o` to check for deps
<clever> tilpner: yeah, nix does search the nar form of a path for the hashes, at the end of a build
<clever> laas: 64bit wine is also already packaged in nixpkgs
<clever> laas: mostly that src = ./.; happens at eval time, and nix-shell cant easily mutate that region
<clever> laas: maybe use an override in shell.nix to just src = null;
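A sketch of that shell.nix trick (it assumes ./default.nix is a function taking an attrset and returning a derivation; adjust to taste):

    # shell.nix
    (import ./default.nix {}).overrideAttrs (old: { src = null; })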
<clever> laas: nix-store --version
<clever> yeah
<clever> laas: how big is . ? (du -h)
<clever> laas: how much ram on the system?
<clever> laas: are you doing src = ./.; ?, what args did you give nix-shell?
<clever> ah
<clever> tilpner: if nix's build dir was on a tmpfs
<clever> tilpner: and then chromium fails during the unpackPhase :P
<clever> so the 2nd nixops you're trying to debug, ignores every single change
<clever> Twey: if you then run a totally different nixops via ./result/bin/nixops, it uses the source of that 1st one!
<clever> Twey: i recently ran into the problem that nix-shell with nixops in the inputs, sets PYTHONPATH to look at nixops, by force
<clever> tilpner: run `nix-store -r /nix/store/foo` on its output path
<clever> Baughn: ah
<clever> Baughn: got more info on the bug?
<clever> jrddunbr: but a local search claims its in a package called libibverbs
<clever> jrddunbr: looks like that may be in rdma-core
<clever> ,locate libibverbs
<clever> that file is from ancient times, 2011, back before nixos and nixpkgs got merged
<clever> https://nixos.org/nixos/options.html#udisks agrees that its udisks2 now
<clever> monokrome: you want to be reading nixpkgs, not nixos
<clever> monokrome: oh wait, very top of the page you linked, This repository has been archived by the owner. It is now read-only.
<clever> monokrome: what does the source under `nix-instantiate --find-file nixpkgs` say?
<clever> monokrome: look at the source under `nix-instantiate --find-file nixpkgs`
<clever> that disables all loading of things from $HOME
<clever> i tend to always start with: import <nixpkgs> { config = {}; overlays = []; }
<clever> monokrome: thats also possible, import <nixpkgs> { config = { allowUnfree = true; }; }
<clever> monokrome: config.nix, same as nix-build
<clever> monokrome: it also works on symlinks
<clever> [clever@amd-nixos:~/iohk/daedalus]$ nix-store -l result
<clever> things like ls will check if stdout is a tty, and behave differently if it is, compare `ls` to `ls | cat`
<clever> there is also a syscall to see if a given fd is a tty
<clever> The isatty() function tests whether fd is an open file descriptor referring to a terminal.
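The same check is available from the shell via `test -t`, e.g. (a sketch):

    if [ -t 1 ]; then echo "stdout is a terminal"; else echo "stdout is redirected"; fi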
<clever> ssh also does the same
<clever> and sudo does the same to write the pw prompt
<clever> sudo uses it to read your pw, even if you `echo foo | sudo bar`
<clever> if you open /dev/tty, you get the tty of the process
<clever> sudo already does exactly that
<clever> you can also use `nix-store -l /nix/store/foo` to view the logs for a given storepath
<clever> monokrome: all logs are already in /nix/var/log/nix/
<clever> monokrome: nix also logs everything already
<clever> monokrome: it may just be writing to stderr
<clever> oborot: :q
<clever> instantepiphany_: https://nixos.org/nixos/options.html#systemd.services.%3Cname%3E.path
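A sketch of that option in use (the service name and packages are illustrative):

    systemd.services.my-service.path = [ pkgs.git pkgs.openssh ];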

2019-02-13

<clever> yep
<clever> asymmetric: youll also want to delete the qcow2 image that build-vm dropped in the working dir
<clever> asymmetric: yeah, i have that on all of my users i use regularly
<clever> asymmetric: then all users will have no pw by default, and login will be impossible
<clever> asymmetric: users.users.foo.initialPassword or initialPasswordHash
<clever> asymmetric: does your system config have an initial password?
<clever> asymmetric: if you didnt set a password in configuration.nix, then it wont have any pw for login
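A sketch of that in configuration.nix (username and password are obviously placeholders):

    users.users.alice = {
      isNormalUser = true;
      initialPassword = "changeme";
    };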
<clever> stites: ive also got a system76 laptop, but never bothered to get the "proper" drivers, it just works with defaults
<clever> i can still `zfs send` the whole thing to the NAS
<clever> yep
<clever> stites: yeah
<clever> stites: i try to have the pool name match the hostname
<clever> asymmetric: you could build the module with nix-build or similar, `nixos-rebuild build-vm -I nixos-config=./configuration.nix`
<clever> stites: justdoit also allows a custom pool name, at install time
<clever> asymmetric: you can add custom modules to the imports list under configuration.nix
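A sketch of that imports list in configuration.nix (./my-module.nix is a made-up filename):

    imports = [
      ./hardware-configuration.nix
      ./my-module.nix
    ];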
<clever> stites: tank boots?
<clever> spacekitteh[m]: ^^
<clever> spacekitteh[m]: it builds, but i dont know how to test the maps...
<clever> yep
<clever> personally, ive switched fully to zfs, and ive not had any issues like that on it
<clever> instantepiphany: yeah
<clever> so after adding the path to the store, its contents just change
<clever> instantepiphany: the problem is that some filesystems like ext4 can lose the contents of a file at an improper shutdown
<clever> instantepiphany: run `nix-store --repair-path` on each of those paths
<clever> instantepiphany: sounds like an improper shutdown might have corrupted the disk
<clever> instantepiphany: `Compilation failed in require` looks like a worse issue, that stopped switch from switching
<clever> instantepiphany: and if that fails, `nix-store --verify --check-contents`
<clever> instantepiphany: try `nixos-rebuild boot` ?
<clever> instantepiphany: yeah, that can help
<clever> instantepiphany: if you run it again, what happens?
<clever> phizzz: builtins.fetchGit { url = "ssh://user@host/repo"; rev = "foo"; }
<clever> phizzz: use a url like ssh://user@host/repo
<clever> phizzz: builtins.fetchGit is about the only way to deal with private repos right now
<clever> phizzz: if gitlab has a .tar.gz or .zip download link, you can just use fetchzip on that
<clever> spacekitteh[m]: *looks*
<clever> or the name of the pool its importing, once it starts to boot
<clever> stites: just pay attention at grub, and note the date/time of the nixos its showing, that should reveal which disk it is
<clever> stites: you likely dont even have to re-run justdoit, or even zero out sdb, just tell the bios to boot sda instead
<clever> `zpool detach pool device` can be used to downgrade that mirror to a single-disk pool, but this is a new/different problem, and its not booting from the justdoit based install
<clever> but it will never return
<clever> and zroot waited 60 attempts to find the missing half of the mirror
<clever> stites: so justdoit nuked half your zroot mirror, and then zroot booted rather than tank
<clever> stites: justdoit uses tank by default
<clever> stites: yep, half of your mirror is missing
<clever> stites: can you pastebin `zpool status` ?
<clever> stites: ?
<clever> stites: this is likely part of why -d exists
<clever> < DHE> yeah, well, my company has a 1/2 petabyte (roughly) array... so there's that.
<clever> < DHE> the search thing is kinda important when your pool is made of 122 hard drives. :)
<clever> stites: the other issue, is that nixos uses `-d /dev/disk/by-id` by default, and if any vdev is not in by-id, it will fail to find it
<clever> stites: so after printing 60 .'s it should boot anyways, and then `zpool status` should reveal what is wrong
<clever> and it will test it 60 times, before giving up, and trying to import a degraded pool
<clever> it will test the pool every second, and print a . each time it tests
<clever> stites: its qwerty
<clever> that did it
<clever> stites: my irc client only highlights if the msg starts with "clever" and @clever doesnt do it
<clever> Ariakenom: then you just need to run wpa_passphrase, and feed the resulting config into /etc/wpa_supplicant.conf, or abandon supplicant and use NM
<clever> Ariakenom: you need to set wireless.enable = true; in your configuration.nix file, and then nixos-rebuild switch
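A sketch of the two steps (SSID and passphrase are placeholders):

    # configuration.nix
    networking.wireless.enable = true;

    # generate the PSK block and append it to the supplicant config
    wpa_passphrase "my-ssid" "my-passphrase" >> /etc/wpa_supplicant.conf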
<clever> stites: can you record a video of it booting and hanging?
<clever> gchristensen: what about the timeout thing you added, to make it wait if half the pool devices are missing?