2019-02-05

<clever> matthewbauer[m]: i believe its prepended, so your own nix-channel --add, has higher priority
<clever> matthewbauer[m]: what about just putting a literal $HOME into the string, and then eval'ing the entire thing later on?
<clever> ottidmes: ah
<clever> ottidmes: that uses some state somewhere in $HOME, strace might find it
<clever> so it wont need any update operations
<clever> Twey: but if you add a file to .nix-defexpr, it will parse that file every time you nix-env -iA
<clever> Twey: you would have to re-run nix-channel --update, every time you change the local nixpkgs
<clever> ~/.local/share/applications for example has some related stuff
<clever> ottidmes: and ~/.share
<clever> ottidmes: try grepping ~/.cache/ for the hash of that chrome
<clever> this allows you to nix-env -iA foo.hello
<clever> $ cat ~/.nix-defexpr/test/foo/default.nix
<clever> import /home/clever/apps/nixpkgs
<clever> Twey: can also do that without nix-channel
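An editor's sketch of the `~/.nix-defexpr` trick shown above (the checkout path is clever's own; substitute your local nixpkgs clone). Anything placed under `~/.nix-defexpr` is re-parsed on every `nix-env` invocation, so no `nix-channel --update` step is needed:

```shell
mkdir -p ~/.nix-defexpr/test/foo
echo 'import /home/clever/apps/nixpkgs' > ~/.nix-defexpr/test/foo/default.nix
# the directory layout makes the checkout visible as the `foo` attribute:
nix-env -iA foo.hello
```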
<clever> Avaq: pkgs.writeText
<clever> so you could test the override itself, without impacting cabal2nix
<clever> it allows you to decouple the attrname, and the cabal name
<clever> yayforj: if you were not using mapAttrs, you could also make an override for vector-algorithms but call the attr my-vector-algorithms
<clever> yayforj: you will need to manually run `cabal2nix cabal://vector-algorithms-0.8.0.1` (i think it was), stick that into a .nix file, and callPackage it
<clever> yayforj: and now you need cabal2nix, to define vector-algorithms, to build cabal2nix
<clever> yayforj: because cabal2nix depends on vector-algorithms
<clever> yayforj: i see the problem, you cant use callHackage or callCabal2nix when defining vector-algorithms
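The workaround being described might look like the following overlay sketch (file names are hypothetical; `vector-algorithms.nix` is the output of running cabal2nix by hand, as suggested above):

```nix
# generated once, manually:
#   cabal2nix cabal://vector-algorithms-0.8.0.1 > vector-algorithms.nix
self: super: {
  haskellPackages = super.haskellPackages.override {
    overrides = hself: hsuper: {
      # the attrname deliberately differs from the cabal name, so
      # cabal2nix itself (which depends on vector-algorithms) is
      # unaffected and the override can be tested in isolation
      my-vector-algorithms = hself.callPackage ./vector-algorithms.nix { };
    };
  };
}
```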
<clever> ah, i see why
<clever> why are you using mapAttrs?
<clever> yayforj: the overlay must return a set of package = override;
<clever> yayforj: which package are you overriding?
<clever> though the laptop lacks a dedicated gpu, so perf suffers
<clever> Taneb: i have steam working on both the laptop&desktop, a few games run well, including some "windows only" ones
<clever> Avaq: not the whole network, the laptop&desktop arent part of nixops, yet
<clever> that also allows me to load them directly with nixos, and build them on hydra
<clever> so router.nix and nas.nix are pure nixos, and can be used without nixops
<clever> myself, i have the nixops specific stuff (deployment.targetHost) in the deployment file, and then i load a "nixos" config via imports
<clever> and paste in the contents of your configuration.nix file
<clever> so you can just do machine = { pkgs, config, ... }: { ... };
<clever> and they can either just be a raw set, or a function that takes inputs
<clever> Avaq: configuration.nix is a nixos module
<clever> Avaq: yeah, the main machinename = thing; stuff are just nixos modules
<clever> if they define settings for both, then the options are merged at the nixos level, the same as using a list in imports
<clever> if 2 files define different machines, then the deployment just contains both set of machines
<clever> Avaq: yeah, they are all merged together
<clever> Avaq: when nixops first did a deploy, it upgraded the kernel, and then you had an old kernel and new modules, so it failed to load the modules
<clever> rebooting the guest `nixops reboot` should solve it
<clever> and it tried to load 4.14 kernel modules on 4.4
<clever> Avaq: nixops then deployed changes, that upgrade you to 4.14.97
<clever> Avaq: nixops created a vbox image, from a base disk image, running 4.4.24
<clever> Avaq: ah, i think i see the problem then
<clever> and check the journal for the things in red
<clever> check the guest journal, and see whats up there
<clever> and then it gives a warning when its done
<clever> ah, then maybe it was deploying just fine, and some of the misc services are just broken
<clever> Avaq: root pw can be used, but it is preferred to have an ssh key on root
<clever> (and that second "machine" could just be a vbox copy you installed manually, from the ISO or pre-made vbox image)
<clever> Avaq: if you have a working nixos install on a second machine, you can experiment with the none backend instead
<clever> ive never tried using the vbox backend
<clever> not sure what else to check
<clever> maybe check on both, just in case
<clever> journalctl -u get-vbox-nixops-client-key.service
<clever> what about the journal log?
<clever> possibly
<clever> not sure what else to check then
<clever> Avaq: anything in red?
<clever> Avaq: what about `systemctl status` as root?
<clever> run `id` and confirm
<clever> Avaq: oh, are you in the vboxusers group (run id to check)
<clever> uid=1000(clever) gid=100(users) groups=100(users),1(wheel),72(vboxusers),131(docker),500(wireshark)
<clever> no changes to the kernel version
<clever> Avaq: what does this output?
<clever> ls -l /run/{booted,current}-system/kernel
<clever> Avaq: sounds like that likely breaks nixops as well, one sec
<clever> and can a guest be started?
<clever> Avaq: is virtualbox enabled on the host and already working?
<clever> so i just leave a note to myself in a comment, on what to `nixops modify` to
<clever> wedens: it can, but the current implementation has a heavy performance cost
<clever> so that part will get lost if you move to another machine
<clever> the -I nixpkgs=foo is one of the things nixops stores in the state file
<clever> Avaq: so i dont have unplanned upgrades, caused by the host doing nix-channel --update
<clever> Avaq: i also do this even when problems dont exist, so i can strictly control the version used in the deployment
<clever> Avaq: yep
<clever> Avaq: that also confirms it is an issue with nixpkgs, but you can at least work around it now
<clever> Avaq: replace deployment.nix with the list of files you originally gave to `nixops create`
<clever> Avaq: the URL would be something like https://github.com/nixos/nixpkgs-channels/archive/nixos-18.09.tar.gz
<clever> Avaq: using `nixops modify -d deployment -I nixpkgs=URL deployment.nix` you can force your deployment to use a different version of nixpkgs
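Putting the commands from this exchange together (deployment name and file are placeholders from the conversation):

```shell
# pin the deployment to a fixed nixpkgs, overriding the host's channel
nixops modify -d deployment \
  -I nixpkgs=https://github.com/nixos/nixpkgs-channels/archive/nixos-18.09.tar.gz \
  deployment.nix
nixops deploy -d deployment
```

Note the `-I nixpkgs=` setting is stored in the nixops state file, so it is lost if you move the deployment to another machine.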
<clever> Avaq: there is a chance that the version in nixos-unstable, doesnt actually work on nixos-unstable
<clever> Taneb: i would say those docs are wrong
<clever> Taneb: nixops appends to the list, it doesnt force the value
<clever> just users.users.root.openssh.authorizedPublickeys i think it was spelled
<clever> wedens: i would put your own ssh pubkey into the deployment files
<clever> but if you have your own ssh keys in the agent, you wont really care much
<clever> which will cut off the nixops generated keys for other state files
<clever> wedens: the first time you deploy from a given state, it will auto-generate its own root keys, and bake those into the allowed ones for root
<clever> wedens: as long as you have a way to maintain root ssh (ssh keypair) on all of those machines
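The suggestion above, as a NixOS module fragment (key string is a placeholder); per the note above, nixops appends to this list rather than forcing it, so your key survives alongside the generated one:

```nix
{
  # keeping your own key here means losing the nixops state file
  # (and its auto-generated root key) doesn't lock you out
  users.users.root.openssh.authorizedKeys.keys = [
    "ssh-ed25519 AAAA... you@laptop"  # placeholder public key
  ];
}
```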
<clever> or it will be changing the version every time you switch to a different box
<clever> and youll want to pin nixpkgs, `nixops modify -d deployment -I nixpkgs=URL deployment.nix`
<clever> just be aware that you can downgrade the remote machines if you forget to `git pull` when deploying from out of date machines
<clever> if you are only ever using the "none" backend, you can mostly ignore that state
<clever> so it can clean up after itself later, and know which machines to deploy to
<clever> wedens: when you create any resource in the cloud (aws machines), it has to track the instance-id aws assigned
<clever> Taneb: there has been some talk about replacing the key/value store nixops uses, so it could have a DB that is shared
<clever> __monty__: nixops just does nix-build, so it will obey /etc/nix/machines
<clever> __monty__: they would use a random build slave (if they have many) and can miss the cached copy
<clever> Taneb: but you can trivially convert nixos machines to nixops machines
<clever> __monty__: at a glance, its just a wrapper around nixos-rebuild, which i feel is worse, given the benefits nixops has with central building
<clever> __monty__: never heard of krops
<clever> Avaq: line 63 of default.nix is dealing with localSystem, an arg to nixpkgs, i suspect your nixops version is too old to work on your nixpkgs version
<clever> __monty__: if its fixed output, it will only build once, and then never again, and use that product
<clever> __monty__: yeah
<clever> blumenkranz: some things like the nixpkgs python are mixed in, and that depends on the current stdenv
<clever> nixos-unstable can be held back by nixos levels failures (grub, key services)
<clever> nixpkgs-unstable can be held back by darwin failures
<clever> varies
<clever> then add -I flags, and mess with NIX_PATH, to see what it does
<clever> nix-instantiate --find-file nixpkgs
<clever> teto: then youll either need to , ah, already fixed
<clever> teto: they are called submodules now, youll need to either use a newer nixops, or an older nixpkgs
<clever> so you have to delete the Hashable from beam-migrate
<clever> and now that you're forcing a newer version, Hashable is already there
<clever> haskell-src-exts
<clever> fresheyeball: upstream added a Hashable instance to its own types
<clever> fresheyeball: so you need to patch beam-migrate to not add a duplicate
<clever> fresheyeball: upstream has added the instance properly, and due to you using a newer version, they now conflict
<clever> fresheyeball: doesnt do that for me
<clever> and callCabal2nix needs cabal2nix
<clever> cabal2nix (or its deps) dont like that override
<clever> which is why i switched to applying it only to beam-migrate
<clever> yeah, you broke cabal2nix
<clever> fresheyeball: but it does eval if i apply to just beam-migrate
<clever> 86 beam-migrate = doJailbreak (hself.callCabal2nix "beam-migrate" "${beam}/beam-migrate" { haskell-src-exts = hself.haskell-src-exts_1_21_0; });
<clever> and applying that globally breaks cabal2nix build
<clever> fresheyeball: so, you need to either haskell-src-exts = self.haskell-src-exts_1_21_0; the entire overlay, or target that to a few things
<clever> fresheyeball: an override already exists, forcing hlint to use the 1.21.0 version
<clever> 1177 hlint = super.hlint.overrideScope (self: super: { haskell-src-exts = self.haskell-src-exts_1_21_0; });
<clever> 1176 # The LTS-12.x version doesn't suffice to build hlint, hoogle, etc.
<clever> fresheyeball: aha, i found the problem line
<clever> configuration-common.nix: hlint = super.hlint.overrideScope (self: super: { haskell-src-exts = self.haskell-src-exts_1_21_0; });
<clever> error: while evaluating the attribute 'propagatedBuildInputs' of the derivation 'hlint-2.1.14' at /nix/store/9s913lgsy4ravr2m0iqyb92646jlidnk-source/pkgs/stdenv/generic/make-derivation.nix:183:11:
<clever> fresheyeball: then it will give a clear error, when something didnt stop doing it
<clever> fresheyeball: oh, you can also use your overlays to just set haskell-src-exts_1_21_0 = throw "stop doing that";
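The `throw` trick mentioned here, sketched as a haskellPackages override set (a hedged example, not the exact overlay from the conversation):

```nix
hself: hsuper: {
  # any remaining reference to the pinned attr now fails loudly at
  # eval time, pinpointing whatever is still depending on it
  haskell-src-exts_1_21_0 = throw "stop doing that";
}
```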
<clever> hlint also uses haskell-src-exts
<clever> so you still have a nix file
<clever> fresheyeball: and then callPackages's that
<clever> fresheyeball: callCabal2nix generates a nix file, in the /nix/store/
<clever> "/nix/store/9s913lgsy4ravr2m0iqyb92646jlidnk-source/pkgs/development/haskell-modules/hackage-packages.nix:108250"
<clever> nix-repl> haskellPackages.hlint.meta.position
<clever> fresheyeball: beam-migrate is doing haskell-src-exts
<clever> fresheyeball: this exposes the .nix file behind beam-migrate
<clever> "/nix/store/yw9x53rmfw3swr3vr0pnqy29gnaxjsji-cabal2nix-beam-migrate/default.nix:7"
<clever> nix-repl> haskellPackages.beam-migrate.meta.position
<clever> mdash: the function has not been applied to the string
<clever> mdash: that is a list containing a function (import) and a string
<clever> fresheyeball: thats a harder question, youll need to just pick one, and see if it gets upset over you changing the version
<clever> and not directly pin it
<clever> fresheyeball: you can also just do haskell-src-exts_1_21_0 = self.haskell-src-exts
<clever> fresheyeball: yeah, that should fix it
<clever> then everything will be on the same version and conflicts wont exist
<clever> you want to use .override, or the {} at the end of callCabal2nix, to force a specific one
<clever> fresheyeball: haskellPackages.haskell-src-exts is 1.20.3, haskellPackages.haskell-src-exts_1_21_0 is 1.21.0
<clever> there are 2 versions of haskell-src-exts available
<clever> fresheyeball: the attributes of haskellPackages
<clever> fresheyeball: you need to use the {} at the end of callPackage (or .override) to force them all to use the same one
<clever> fresheyeball: hlint is using one attr, beam the other
<clever> fresheyeball: aha, here is the problem!
<clever> «derivation /nix/store/x920qvxym9pffrcn3jqxhi7phrfg7zhs-haskell-src-exts-1.21.0.drv»
<clever> nix-repl> haskellPackages.haskell-src-exts_1_21_0
<clever> «derivation /nix/store/9b9f91i4vdwq2irw7lalava5a57f5vd0-haskell-src-exts-1.20.3.drv»
<clever> nix-repl> haskellPackages.haskell-src-exts
<clever> fresheyeball: ok, i can confirm beam-migrate depends on haskell-src-exts-1.20.3
<clever> fresheyeball: though beam-core doesnt depend on the problem package, checking further...
<clever> this cmd should build beam-core
<clever> nix-repl> :b haskellPackages.beam-core
<clever> fresheyeball: probably
<clever> but how did they even get different versions to begin with?
<clever> yeah, beam has one version, hlint has another
<clever> then when you bring those things together, you now have 2 haskell-src-exts in scope
<clever> and then different things are building against different versions
<clever> fresheyeball: it looks like you have 2 different versions of haskell-src-exts defined in the nix, somewhere
<clever> thats not the nix expression the 2 versions came from
<clever> fresheyeball: where is the version of haskell-src-exts coming from?
<clever> fresheyeball: can you link the nix expressions?
<clever> fresheyeball: you have 2 different versions being mixed in, you need to ensure everything comes from the same haskellPackages and same override set
<clever> rnhmjoj: IFD isnt allowed within nixpkgs, it ruins the speed of nix-env -q

2019-02-04

<clever> sshd has an option to use the 1st mode, so ssh is fully stopped once you DC
<clever> nix-daemon.socket uses the 2nd mode, so nix-daemon doesnt start until first use, but then remains running, for lower latency
<clever> symphorien: and a 2nd, where systemd will fork out a single daemon, and pass it the listening socket
<clever> symphorien: but socket activation also has 2 modes, one where systemd will accept() each connection, and fork out one child per connection
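The two socket-activation modes can be sketched as systemd socket-unit fragments; `Accept=` is the switch between them:

```ini
# per-connection mode: systemd accept()s and spawns one instance per
# client (how sshd can be run so nothing lingers after disconnect)
[Socket]
Accept=yes

# single-daemon mode: the listening fd is handed to one long-lived
# process (what nix-daemon.socket uses, for lower latency after first use)
[Socket]
Accept=no
```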
<clever> tobiasBora: ive seen a few jar based packages
<clever> samueldr: libraries that java/python load, need special wrappers
<clever> samueldr: ive seen this problem before, with java and python
<clever> youll never make a bad derivation again!
<clever> if you're feeling crazy, you could just throw it into your nix.conf, and leave it on :P
<clever> gchristensen: so it tests the entire closure for you, at every step
<clever> gchristensen: the build-repeat option makes nix build every single derivation N times, and fail if they are not bit-identical to each other
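The option being discussed, as a nix.conf fragment (spelling per Nix 2.x, where the setting is called `repeat`; later Nix releases changed this area):

```
# /etc/nix/nix.conf — build every derivation twice and fail the build
# if the two outputs are not bit-identical
repeat = 1
```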
<clever> ahh
<clever> samueldr: so as long as the things in that list are installed via systemPackages, its not likely to be an issue
<clever> tobiasBora: its a bash function you run at nix-build time
<clever> samueldr: but only if you're mixing nixpkgs revs? (nix-env + nixos, or nix-shell+nixos)?
<clever> gchristensen: using build-repeat?
<clever> samueldr, tobiasBora: ahhh, so wrapGAppsHook has to be used, over a bash script that runs `java -jar foo.jar`, to force the gdk it's linking against into also looking at matching versions of the gdk png modules
<clever> tobiasBora: and thats technically an impurity, which causes gdk to load modules across versions
<clever> tobiasBora: i'm guessing an env var or /run/current-system is being searched, to find modules for loading things like png
<clever> samueldr: oh, i see it now
<clever> tobiasBora: line 156 is a child thread being forked out, so you need strace -f to follow the forking
<clever> tobiasBora: try an strace to see what png file its opening (if any) and then check if its valid
<clever> nearly all of the stack trace is in gtk and gobject, doesnt look like its directly java related
<clever> tobiasBora: yeah, that looks better, looks like an issue when trying to load a png file
<clever> tobiasBora: the 1st arg to gdb needs to be the java binary, not the elf file
<clever> tobiasBora: then we can look at the backtrace and see whats going on
<clever> tobiasBora: id say, get a backtrace first, so do `ulimit -c unlimited` like it said, then open the coredump in gdb
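The debugging sequence being suggested, roughly (the jar name is a placeholder, and some systems write the core dump somewhere other than the current directory):

```shell
ulimit -c unlimited       # allow the JVM to dump core on the crash
java -jar foo.jar         # reproduce the crash
gdb "$(which java)" core  # 1st arg must be the java binary, not the .so
# then inside gdb, run `bt` to get the backtrace
```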
<clever> dhess: yep
<clever> dhess: that was turned off long ago, never got used
<clever> jophish: heh, no current plans to visit anything in the asia region
<clever> boot.initrd.availableKernelModules
<clever> sounds good
<clever> jophish: check lsmod when the device is working
<clever> jophish: what bus is sda on? is that driver in the initrd?
<clever> jophish: i'm guessing the CPU is too old to support hw accelerated crc32, but for even the fallback to not work? weird
<clever> and will return ENODEV if it doesnt
<clever> jophish: lines 214-225 are doing some checks to see if the hardware supports crc32 at a hardware level
<clever> crc32-arm-ce-y:= crc32-ce-core.o crc32-ce-glue.o
<clever> jophish: let me double-check things...
<clever> jophish: are you on an arm processor?
<clever> chross-q: nixos udev will then use /run/current-system/firmware/
<clever> chross-q: typically, the kernel will execute a udev related binary (or write to a special pipe) to request that udev do the firmware loading
<clever> the above magically prints /nix/store/f2y39wr94xbz0abxlcv9saqlaqzq72k8-name
<clever> but, runCommand puts it into the buildCommand attr ...
<clever> teto: ah, its only right when in an attr of the derivation
<clever> nix-repl> :b runCommand "name" { foo = builtins.placeholder "out"; } ''echo $foo''
<clever> hmmm, its also "wrong" at bash time...
<clever> nix-repl> :b runCommand "name" {} ''echo ${builtins.placeholder "out"}''
<clever> > builtins.placeholder "out"
<clever> teto: and did you view it at nix or bash time?
<clever> teto: how does that string compare to $out?
<clever> chross-q: its fetched by nixpkgs, so the nixpkgs have to be updated (or an override added)
<clever> and stdin should be gzip's stdout
<clever> `gzip -d < /run/current-system/initrd | cpio -t`
<clever> chross: it sounds like the gzip isnt decompressing
<clever> chross: what about `file -L /run/current-system/initrd`
<clever> -i will unpack the initrd to the current dir
<clever> oh, or `-i -t`
<clever> chross: the man page says it should be `cpio -i`
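The inspection steps from this exchange, collected into one sketch (assumes the initrd is gzip-compressed, as was typical at the time):

```shell
# `file -L` should report gzip-compressed data; if so, list the members
file -L /run/current-system/initrd
gzip -d < /run/current-system/initrd | cpio -t
# replace -t with -i to unpack into the current directory instead
```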
<clever> chross: grub also has efi support, so you could just ditch a bit of systemd
<clever> ah yeah
<clever> because the agent popped a query up, on the laptop display, in another room
<clever> and ssh just hung and silently did nothing
<clever> in the past, ive run ssh on the laptop (over ssh itself), when the laptop was in another room
<clever> then its not the agent being wonky
<clever> sphalerite: is the chromebook able to get a shell on the community box?
<clever> sphalerite: are you doing this from X11 or ssh to your box?
<clever> so it should recover the next time you make a tcp connection
<clever> that kind of stuff is per-connection
<clever> lol
<clever> the UI of `nix` can mess with logs
<clever> ah, maybe switch to nix-copy-closure --to builder -vvvv /nix/store/foo
<clever> sphalerite: check lsof to see what has files in there open
<clever> sphalerite: or another process you missed is open, and still uploading
<clever> sphalerite: you might have stale lock files in /nix/var/nix/current-load/ maybe?
<clever> could also `du --max-depth=1 -hc /nix/store | sort -h | tail` to just find the biggest thing you have
<clever> sphalerite: oddly, i dont see that in the `nix copy` codepath, did you do a remote build?
<clever> thats a local lock i believe
<clever> sphalerite: next thing then would be `nix copy --to builder /nix/store/fatpath -vvvv`
<clever> sphalerite: what about cpu usage in `top`, what % is nix using?
<clever> 1st is an avg since boot
<clever> iostat is only accurate for the 2nd sample onward
<clever> not an IO or cpu issue locally
<clever> what does `top` and `iostat -x 30` show, on both ends?
<clever> ah, it could be IO bottlenecks at either end
<clever> sphalerite: is it many short-lived tcp conns or one long-lived one?
<clever> sphalerite: what does `ping builder -c 100` say at the end?
<clever> sphalerite: sounds like there is high packet loss to the builder, so linux has scaled the window size back, to throttle things
<clever> tcpdump -v and/or wireshark should show the window size
<clever> it can upload one tcp-window per round-trip, so if the window is low
<clever> sphalerite: tcp window size? upload cap?
<clever> then it will fetch what it can, rather than you uploading it all
<clever> sphalerite: i cant remember the exact flag, but you can set something to make the remote machine use its own binary cache config
<clever> colemickens: thats where you will find it
<clever> > builtins.unsafeGetAttrPos "fetchFromGitHub" pkgs
<clever> > fetchFromGitHub
<clever> have a look at the definition of fetchFromGitHub and fetchzip
<clever> then it would need to be fetchurl, or a modified version of fetchzip
<clever> oh, but you want the tar itself
<clever> colemickens: fetchzip is poorly named, and will just unzip (or untar) whatever you point it at
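The distinction being drawn: fetchzip unpacks whatever it downloads, so to keep the tarball itself as the store output you want fetchurl. A minimal sketch (URL and hash are placeholders):

```nix
pkgs.fetchurl {
  # unlike fetchzip, the output is the .tar.gz file itself, not its contents
  url = "https://example.com/release.tar.gz";
  sha256 = "0000000000000000000000000000000000000000000000000000";
}
```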
<clever> i would start by checking for a configure script
<clever> yeah, you would need to check the logs of others, and unpack the src for this one, to see
<clever> that breaks, when nixos requires that php be in /nix/store/hash-php, and the extension be in /nix/store/hash-extension
<clever> so if php was in /usr/local/bin, then the extensions go to /usr/local/lib/
<clever> daniele-: its common for extensions to try to add themselves to the dir of the host app
<clever> daniele-: edit the nix expression to not fail, and then file a PR to nixpkgs
<clever> likely everybody (and you) have downloaded that broken build
<clever> daniele-: this says that its also been built on hydra
<clever> daniele-: it might be broken for everybody, and no one else has noticed
<clever> daniele-: its trying to install things into its inputs, it should be writing to $out
<clever> ,xy daniele-