<clever>
so you could test the override itself, without impacting cabal2nix
<clever>
it allows you to decouple the attrname, and the cabal name
<clever>
yayforj: if you were not using mapAttrs, you could also make an override for vector-algorithms but call the attr my-vector-algorithms
<clever>
yayforj: you will need to manually run `cabal2nix cabal://vector-algorithms-0.8.0.1` (i think it was), stick that into a .nix file, and callPackage it
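A rough sketch of that manual workflow, assuming the override lives in a standard haskellPackages overrides set (the hself/hsuper names and file path are illustrative):

```nix
# first, run by hand, outside of nix:
#   cabal2nix cabal://vector-algorithms-0.8.0.1 > vector-algorithms.nix
# then, in the haskellPackages overrides set, callPackage the
# pre-generated file, so eval never needs cabal2nix itself:
hself: hsuper: {
  vector-algorithms = hself.callPackage ./vector-algorithms.nix { };
}
```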
<clever>
yayforj: and now you need cabal2nix, to define vector-algorithms, to build cabal2nix
<clever>
yayforj: because cabal2nix depends on vector-algorithms
<clever>
yayforj: i see the problem, you cant use callHackage or callCabal2nix when defining vector-algorithms
<clever>
ah, i see why
<clever>
why are you using mapAttrs?
<clever>
yayforj: the overlay must return a set of package = override;
<clever>
yayforj: which package are you overriding?
<clever>
though the laptop lacks a dedicated gpu, so perf suffers
<clever>
Taneb: i have steam working on both the laptop&desktop, a few games run well, including some "windows only" ones
<clever>
Avaq: not the whole network, the laptop&desktop arent part of nixops, yet
<clever>
that also allows me to load them directly with nixos, and build them on hydra
<clever>
and paste in the contents of your configuration.nix file
<clever>
so you can just do machine = { pkgs, config, ... }: { ... };
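As a hedged sketch, a nixops deployment file where the machine body is just a NixOS module (the machine name, target address, and options are made up):

```nix
# deployment.nix -- each machinename = ...; is a plain NixOS module,
# either a raw attribute set or a function taking { pkgs, config, ... }
{
  machine = { pkgs, config, ... }: {
    deployment.targetHost = "192.0.2.10";  # illustrative
    # ...the contents of your configuration.nix go here...
    services.openssh.enable = true;
  };
}
```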
<clever>
and they can either just be a raw set, or a function that takes inputs
<clever>
Avaq: configuration.nix is a nixos module
<clever>
Avaq: yeah, the main machinename = thing; stuff are just nixos modules
<clever>
if they define settings for both, then the options are merged at the nixos level, the same as using a list in imports
<clever>
if 2 files define different machines, then the deployment just contains both sets of machines
<clever>
Avaq: yeah, they are all merged together
<clever>
Avaq: when nixops first did a deploy, it upgraded the kernel, and then you had an old kernel and new modules, so it failed to load the modules
<clever>
rebooting the guest `nixops reboot` should solve it
<clever>
and it tried to load 4.14 kernel modules on 4.4
<clever>
Avaq: nixops then deployed changes, that upgrade you to 4.14.97
<clever>
Avaq: nixops created a vbox image, from a base disk image, running 4.4.24
<clever>
Avaq: ah, i think i see the problem then
<clever>
and check the journal for the things in red
<clever>
check the guest journal, and see whats up there
<clever>
and then it gives a warning when its done
<clever>
ah, then maybe it was deploying just fine, and some of the misc services are just broken
<clever>
Avaq: root pw can be used, but it is preferred to have an ssh key on root
<clever>
(and that second "machine" could just be a vbox copy you installed manually, from the ISO or pre-made vbox image)
<clever>
Avaq: if you have a working nixos install on a second machine, you can experiment with the none backend instead
<clever>
but if you have your own ssh keys in the agent, you wont really care much
<clever>
which will cut off the nixops generated keys for other state files
<clever>
wedens: the first time you deploy from a given state, it will auto-generate its own root keys, and bake those into the allowed ones for root
<clever>
wedens: as long as you have a way to maintain root ssh (ssh keypair) on all of those machines
<clever>
or it will be changing the version every time you switch to a different box
<clever>
and youll want to pin nixpkgs, `nixops modify -d deployment -I nixpkgs=URL deployment.nix`
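For example (the URL is a placeholder for whichever nixpkgs tarball you pin to):

```shell
# record a fixed nixpkgs in the deployment state, so deploys from
# every box build against the same revision
nixops modify -d deployment \
  -I nixpkgs=https://github.com/NixOS/nixpkgs/archive/nixos-18.09.tar.gz \
  deployment.nix
```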
<clever>
just be aware that you can downgrade the remote machines if you forget to `git pull` when deploying from out of date machines
<clever>
if you are only ever using the "none" backend, you can mostly ignore that state
<clever>
so it can clean up after itself later, and know which machines to deploy to
<clever>
wedens: when you create any resource in the cloud (aws machines), it has to track the instance-id aws assigned
<clever>
Taneb: there has been some talk about replacing the key/value store nixops uses, so it could have a DB that is shared
<clever>
__monty__: nixops just does nix-build, so it will obey /etc/nix/machines
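A sketch of what such a remote-builders file can look like (the host, key path, and sizes are made up; fields are roughly URI, system types, ssh key, max jobs, speed factor, supported features):

```
# /etc/nix/machines
ssh://builder@10.0.0.2 x86_64-linux /root/.ssh/id_ed25519 4 1 kvm
```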
<clever>
__monty__: they would use a random build slave (if they have many) and can miss the cached copy
<clever>
Taneb: but you can trivially convert nixos machines to nixops machines
<clever>
__monty__: at a glance, its just a wrapper around nixos-rebuild, which i feel is worse, given the benefits nixops has with central building
<clever>
__monty__: never heard of krops
<clever>
Avaq: line 63 of default.nix is dealing with localSystem, an arg to nixpkgs, i suspect your nixops version is too old to work on your nixpkgs version
<clever>
__monty__: if its fixed output, it will only build once, and then never again, and use that product
<clever>
__monty__: yeah
<clever>
blumenkranz: some things like the nixpkgs python are mixed in, and that depends on the current stdenv
<clever>
nixos-unstable can be held back by nixos levels failures (grub, key services)
<clever>
nixpkgs-unstable can be held back by darwin failures
<clever>
varies
<clever>
then add -I flags, and mess with NIX_PATH, to see what it does
<clever>
nix-instantiate --find-file nixpkgs
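For example (the paths are illustrative):

```shell
# show which nixpkgs the current NIX_PATH resolves to
nix-instantiate --find-file nixpkgs
# then see how -I and NIX_PATH change the answer
nix-instantiate -I nixpkgs=/home/me/nixpkgs --find-file nixpkgs
NIX_PATH=nixpkgs=/tmp/other-nixpkgs nix-instantiate --find-file nixpkgs
```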
<clever>
teto: then youll either need to , ah, already fixed
<clever>
teto: they are called submodules now, youll need to either use a newer nixops, or an older nixpkgs
<clever>
so you have to delete the Hashable from beam-migrate
<clever>
and now that you're forcing a newer version, Hashable is already there
<clever>
haskell-src-exts
<clever>
fresheyeball: upstream added a Hashable instance to its own types
<clever>
fresheyeball: so you need to patch beam-migrate to not add a duplicate
<clever>
fresheyeball: upstream has added the instance properly, and due to you using a newer version, they now conflict
<clever>
fresheyeball: doesnt do that for me
<clever>
and callCabal2nix needs cabal2nix
<clever>
cabal2nix (or its deps) dont like that override
<clever>
which is why i switched to applying it only to beam-migrate
<clever>
yeah, you broke cabal2nix
<clever>
fresheyeball: but it does eval if i apply to just beam-migrate
<clever>
error: while evaluating the attribute 'propagatedBuildInputs' of the derivation 'hlint-2.1.14' at /nix/store/9s913lgsy4ravr2m0iqyb92646jlidnk-source/pkgs/stdenv/generic/make-derivation.nix:183:11:
<clever>
fresheyeball: then it will give a clear error, when something didnt stop doing it
<clever>
fresheyeball: oh, you can also use your overlays to just set haskell-src-exts_1_21_0 = throw "stop doing that";
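As a sketch, inside the haskellPackages overrides set (the hself/hsuper names are illustrative):

```nix
hself: hsuper: {
  # poison the unwanted attr: anything that still references it now
  # fails to eval with this message, pointing at the culprit
  haskell-src-exts_1_21_0 = throw "stop doing that";
}
```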
<clever>
hlint also uses haskell-src-exts
<clever>
so you still have a nix file
<clever>
fresheyeball: and then callPackages's that
<clever>
fresheyeball: callCabal2nix generates a nix file, in the /nix/store/
<clever>
fresheyeball: ok, i can confirm beam-migrate depends on haskell-src-exts-1.20.3
<clever>
fresheyeball: though beam-core doesnt depend on the problem package, checking further...
<clever>
this cmd should build beam-core
<clever>
nix-repl> :b haskellPackages.beam-core
<clever>
fresheyeball: probably
<clever>
but how did they even get different versions to begin with?
<clever>
yeah, beam has one version, hlint has another
<clever>
then when you bring those things together, you now have 2 haskell-src-exts in scope
<clever>
and then different things are building against different versions
<clever>
fresheyeball: it looks like you have 2 different versions of haskell-src-exts defined in the nix, somewhere
<clever>
thats not the nix expression the 2 versions came from
<clever>
fresheyeball: where is the version of haskell-src-exts coming from?
<clever>
fresheyeball: can you link the nix expressions?
<clever>
fresheyeball: you have 2 different versions being mixed in, you need to ensure everything comes from the same haskellPackages and same override set
<clever>
rnhmjoj: IFD isnt allowed within nixpkgs, it ruins the speed of nix-env -q
2019-02-04
<clever>
sshd has an option to use the 1st mode, so ssh is fully stopped once you DC
<clever>
nix-daemon.socket uses the 2nd mode, so nix-daemon doesnt start until first use, but then remains running, for lower latency
<clever>
symphorien: and a 2nd, where systemd will fork out a single daemon, and pass it the listening socket
<clever>
symphorien: but socket activation also has 2 modes, one where systemd will accept() each connection, and fork out one child per connection
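Illustrative systemd socket-unit fragments for the two modes (the port number is an example; the nix-daemon socket path matches the stock unit):

```ini
# mode 1: systemd accept()s each connection and spawns one service
# instance per connection
[Socket]
ListenStream=2222
Accept=yes

# mode 2 (what nix-daemon.socket uses): systemd hands the listening
# socket to a single daemon on first use, which then keeps running
[Socket]
ListenStream=/nix/var/nix/daemon-socket/socket
Accept=no
```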
<clever>
tobiasBora: ive seen a few jar based packages
<clever>
samueldr: libraries that java/python load, need special wrappers
<clever>
samueldr: ive seen this problem before, with java and python
<clever>
youll never make a bad derivation again!
<clever>
if you're feeling crazy, you could just throw it into your nix.conf, and leave it on :P
<clever>
gchristensen: so it tests the entire closure for you, at every step
<clever>
gchristensen: the build-repeat option makes nix build every single derivation N times, and fail if they are not bit-identical to each other
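For example (the exact spelling of the setting varies by Nix release; newer 2.x ones call it `repeat`, and the package attribute here is just an example):

```shell
# rebuild each derivation twice and fail on any bit-level difference
nix-build '<nixpkgs>' -A hello --option repeat 1
# or persistently, in /etc/nix/nix.conf:
#   repeat = 1
```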
<clever>
ahh
<clever>
samueldr: so as long as the things in that list are installed via systemPackages, its not likely to be an issue
<clever>
tobiasBora: its a bash function you run at nix-build time
<clever>
samueldr: but only if you're mixing nixpkgs revs? (nix-env + nixos, or nix-shell+nixos)?
<clever>
gchristensen: using build-repeat?
<clever>
samueldr, tobiasBora: ahhh, so wrapGAppsHook has to be used, over a bash script that runs `java -jar foo.jar`, to force the gdk its linking against, into also looking at matching versions of gdk png modules
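A hedged sketch of such a jar package (the package name, url, and hash are all placeholders): wrapGAppsHook wraps the launcher during fixup, so gdk finds pixbuf loaders matching the gdk the app links against, rather than whatever the impure environment provides.

```nix
{ stdenv, fetchurl, jre, wrapGAppsHook, gtk3 }:

stdenv.mkDerivation {
  name = "foo-1.0";  # illustrative
  src = fetchurl {
    url = "https://example.com/foo.jar";
    sha256 = "0000000000000000000000000000000000000000000000000000";  # placeholder
  };
  dontUnpack = true;
  nativeBuildInputs = [ wrapGAppsHook ];
  buildInputs = [ gtk3 ];
  installPhase = ''
    mkdir -p $out/bin $out/share/java
    cp $src $out/share/java/foo.jar
    cat > $out/bin/foo <<EOF
    #!${stdenv.shell}
    exec ${jre}/bin/java -jar $out/share/java/foo.jar "\$@"
    EOF
    chmod +x $out/bin/foo
  '';
}
```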
<clever>
tobiasBora: and thats technically an impurity, which causes gdk to load modules across versions
<clever>
tobiasBora: i'm guessing an env var or /run/current-system is being searched, to find modules for loading things like png
<clever>
samueldr: oh, i see it now
<clever>
tobiasBora: line 156 is a child thread being forked out, so you need strace -f to follow the forking
<clever>
tobiasBora: try an strace to see what png file its opening (if any) and then check if its valid
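For example (the jar name and filter are illustrative; -f follows the child threads mentioned above):

```shell
# watch which files the process (and its threads) actually open,
# then check whether the png it found is valid
strace -f -e trace=open,openat java -jar foo.jar 2>&1 | grep -i png
```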
<clever>
nearly all of the stack trace is in gtk and gobject, doesnt look like its directly java related
<clever>
tobiasBora: yeah, that looks better, looks like an issue when trying to load a png file
<clever>
tobiasBora: the 1st arg to gdb needs to be the java binary, not the elf file
<clever>
tobiasBora: then we can look at the backtrace and see whats going on
<clever>
tobiasBora: id say, get a backtrace first, so do `ulimit -c unlimited` like it said, then open the coredump in gdb
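A sketch of those steps (the jar name and core file name are illustrative; the core's filename depends on your sysctl settings):

```shell
ulimit -c unlimited        # allow coredumps in this shell
java -jar foo.jar          # reproduce the crash, writing a core file
gdb $(which java) core     # gdb's first arg is the java binary itself
# then at the (gdb) prompt, run `bt` to get the backtrace
```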
<clever>
dhess: yep
<clever>
dhess: that was turned off long ago, never got used
<clever>
jophish: heh, no current plans to visit anything in the asia region
<clever>
boot.initrd.availableKernelModules
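A configuration.nix fragment as a sketch ("ahci" is only an example module name; use whatever lsmod shows for your disk controller):

```nix
{
  # make sure the disk controller's driver is available in the initrd
  boot.initrd.availableKernelModules = [ "ahci" ];
}
```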
<clever>
sounds good
<clever>
jophish: check lsmod when the device is working
<clever>
jophish: what bus is sda on? is that driver in the initrd?
<clever>
jophish: i'm guessing the CPU is too old to support hw accelerated crc32, but for even the fallback to not work? weird
<clever>
and will return ENODEV if it doesnt
<clever>
jophish: lines 214-225 are doing some checks to see if the hardware supports crc32 at a hardware level