<clever>
cocreature: how do the nix.conf files differ between the machine that accepts the signature and the one that doesn't?
<clever>
cocreature: I believe the names don't matter; I was just thinking you may have been renaming things and got the public halves of different keys mixed up
<clever>
cocreature: I've not tried IPs as the name of a key before; has the IP in that key ever been edited?
<clever>
cocreature: the name before the : in the signature must also have a matching entry in the trusted-public-keys of nix.conf
<clever>
cocreature: you should see a line like Sig: cache.nixos.org-1:jwEzZZtmn7ZXGZYEAsnIccKW8c6nbqT90Lk+Nwf2I9v4ugLnnA3QB+n2NQRY2i/sFxXYGqikq+P8ODDLyGZcBg==
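To make the nix.conf side of this concrete, a sketch of the matching entry (the first key is the well-known cache.nixos.org key; the second entry is a made-up example of a custom cache key):

```
# /etc/nix/nix.conf
# the name before the ":" in each Sig: header must match one of these
trusted-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY= mycache-1:AAAAexampleexampleexampleexampleexampleexample=
```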
<clever>
Lisanna: not sure what the issue is then, all you can try is to manually run hydra-eval-jobs with the args from https://lpaste.net/6367929829735530496 (mind the quotes around the --arg) and then strace that
<clever>
Lisanna: also, if you use an ssh-based URL in the hydra UI, then hydra will use the .ssh/id_rsa of the hydra user, so you don't have to leak your password in configs
<clever>
Lisanna: can you link your release.nix?
<clever>
Lisanna: does it have any open .lock files?
<clever>
Lisanna: ls -l /proc/6398/fd/
<clever>
Lisanna: is that one busy? what are its children?
<clever>
Lisanna: and if you check `ps aux | grep 6378` do you see a nix-daemon?
<clever>
Lisanna: and is it using cpu?
<clever>
Lisanna: if you check `ps -eH x`, what does the tree look like around the evaluator?
<clever>
palo: and then I have more custom stuff that runs the sandbox without bundles, to give faster launch times
<clever>
palo: so i only used the arx bundle as an installer
<clever>
palo: and in my case, the entire bundle was ~200mb in size, and there is a ~30+ second delay just starting it, because it has to un-tar the entire thing on every start
<clever>
palo: so it has to copy before setting up the sandbox
<clever>
palo: the tricky part is that you need things like /etc/resolv.conf and such in the sandbox for stuff to work, and on nixos, those are symlinks to the host store, so things break when you overwrite /nix with a new mount
<clever>
palo: it's essentially a bash script that extracts and runs a static ELF inside a .tar
<clever>
palo: i have used the arx mode of nix-bundle to make an app that works on any linux distro, including nixos
<clever>
rardiol1: and nix-store -q --tree ~/.nix-profile
<clever>
more an example of how you can insert variables into a key
<clever>
I don't think that one exists
<clever>
yeah, it would have to match the existing format, ghc841
<clever>
you can also do pkgs."ghc-${version}", for example
<clever>
__monty__: like that?
<clever>
> let key = "hello"; in pkgs.${key}
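The two forms above boil down to Nix's dynamic attribute syntax; a small sketch (the attribute names are examples and may not all exist in nixpkgs):

```nix
let
  pkgs = import <nixpkgs> { };
  version = "841";
  key = "hello";
in {
  viaInterpolation = pkgs."ghc${version}"; # interpolate a string into an attribute name
  viaVariable = pkgs.${key};               # use a variable as an attribute name
}
```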
<clever>
it's also handy to have both; you can use nix-repl to force things to run in nix 1 for backwards-compat testing, and nix repl for forward compat
<clever>
either `export NIX_REMOTE=daemon` first or switch to `nix repl`
<clever>
nix 2 auto-detects it, so the env var is no longer set
<clever>
philippD: nix-repl links against nix 1, which can't auto-detect NIX_REMOTE
<clever>
patch the app?
<clever>
__monty__: yeah, a PR sounds good, then we can see what others think
<clever>
hodapp: i think it has a pre script, check the source of wrapProgram
<clever>
it's 1 extra line
<clever>
and hydra will complain if anybody breaks it
<clever>
__monty__: if it was in nixpkgs, it would be much simpler, just a ghc84 alias that is updated by a maintainer when the thing it points to breaks
<clever>
__monty__: not really, you have to just pick one from haskell.packages.ghc821
<clever>
hodapp: yeah, that could go into $out/share/
<clever>
in the case of a set like fileSystems, it will recursively merge the values within
<clever>
for lists, it just concats them
<clever>
for int types, it throws an error if you set it twice
<clever>
for boolean types, it will throw an error if the definitions disagree
<clever>
so it depends on the type of the option
<clever>
every option in nixos has its own merge rules
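Those per-type merge rules can be sketched with two modules setting the same options (option values here are examples):

```nix
{
  imports = [
    # module A
    ({ ... }: {
      fileSystems."/".device = "/dev/vda1";
      boot.kernelModules = [ "kvm-intel" ];
    })
    # module B
    ({ ... }: {
      fileSystems."/".fsType = "ext4";       # sets: merged recursively with A's device
      boot.kernelModules = [ "virtio_net" ]; # lists: concatenated with A's list
      # an int or bool option defined differently in both modules
      # would instead throw a conflicting-definitions error
    })
  ];
}
```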
<clever>
and if you decide to make a 2nd droplet, you can reuse the digital_ocean.nix and base the new hostname_ip_cfg.nix on the existing example
<clever>
fresheyeball: then your nixops file can be: { hostname = { imports = [ ./digital_ocean.nix ./hostname_ip_cfg.nix ]; .... }; }
<clever>
fresheyeball: next, you'll want to break the required config into 2 groups: the generic stuff like fileSystems."/" = { device = "/dev/vda1"; fsType = "ext4"; };, which you can put into a digital_ocean.nix file, and then the per-droplet stuff like the IP
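A sketch of what the droplet-generic half could look like (the values are examples; take the real ones from nixos-generate-config's output):

```nix
# digital_ocean.nix — shared by every droplet
{ ... }:
{
  boot.loader.grub.device = "/dev/vda";
  fileSystems."/" = { device = "/dev/vda1"; fsType = "ext4"; };
}
```

The per-droplet hostname_ip_cfg.nix then only carries the IP and hostname for that one machine.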
<clever>
fresheyeball: :D
<clever>
nixops ignores the config on the droplet
<clever>
ah
<clever>
it expects that to be in the configuration.nix file
<clever>
nixos-generate-config doesn't include that, but you still need it
<clever>
both work, but vda1 is more predictable
<clever>
vda1 is simpler than the UUID
<clever>
fresheyeball: run nixos-generate-config and look at the hardware-configuration.nix it generates
<clever>
fresheyeball: you need to include the drivers for vda in boot.initrd.availableKernelModules
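A hypothetical version of that setting (the exact module list is whatever nixos-generate-config reports for the droplet; these virtio names are assumptions):

```nix
# drivers the initrd needs so /dev/vda shows up at boot
boot.initrd.availableKernelModules = [ "virtio_pci" "virtio_blk" "virtio_scsi" ];
```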
<clever>
what error do you see on the DO console?
<clever>
is /boot a dir or a filesystem?
<clever>
that's a side-effect of you doing the install inside qemu
<clever>
it's not a qemu guest
<clever>
does it have a /boot?
<clever>
fresheyeball: you need to take the stuff from hardware-configuration.nix that was required for booting and put that into your nixops config
<clever>
fresheyeball: that adds the qemu-guest.nix file from $NIX_PATH to your imports
<clever>
fresheyeball: yes
<clever>
stdenv.mkDerivation { name = "simple"; buildCommand = "mkdir $out ; gcc ${./simple.c} -o $out/simple"; } would also have the identical effect
<clever>
:D
<clever>
rather than trying to continue and causing more confusing problems down the road
<clever>
the stdenv builder also does `set -e`, which causes bash to abort at the first problem
<clever>
export PATH="coreutils/bin:$gcc/bin"
<clever>
you're missing the $ on coreutils
<clever>
so it can't write to $out/simple
<clever>
it didn't make $out
<clever>
/nix/store/3dd1r98sw1i8vy85i24jdan8qiswdxjv-simple_builder.sh: line 3: mkdir: command not found
<clever>
it's only debug output
<clever>
inquisitiv3: line 1
<clever>
inquisitiv3: add set -x to the builder script
<clever>
rotaerk: ~/.config/nixpkgs/config.nix has nixpkgs config
<clever>
and keeping the lua.pc would allow existing things to continue to work
<clever>
yeah, adding a lua5.3.pc would also solve the issue
<clever>
lua can also be patched, to have 2 .pc files
<clever>
Orbstheorem: so you have to set the makeflag lua to nothing, but then the makefile automatically adds 5.2, so you have no option but to patch the makefile
<clever>
Orbstheorem: lua5_3 has a lua.pc file, but pkg-config is looking for lua5.3.pc
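One way that fix could be expressed as an override (a sketch; the pkgconfig path is an assumption, check the actual lua5_3 output layout):

```nix
# give lua5_3 a second .pc name so `pkg-config lua5.3` also resolves,
# while the existing lua.pc keeps working for current users
lua5_3.overrideAttrs (old: {
  postInstall = (old.postInstall or "") + ''
    ln -s $out/lib/pkgconfig/lua.pc $out/lib/pkgconfig/lua5.3.pc
  '';
})
```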
<clever>
-E is just a shortcut to avoid having to write a one-line file
<clever>
you can also put the above string into a new nix file, and run `nix build -f newfile.nix`
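The two equivalent invocations, sketched (the expression here is a placeholder; `-E` is spelled as in nix-build):

```
# inline expression
nix-build -E 'with import <nixpkgs> {}; hello'

# the same expression from a one-line file
echo 'with import <nixpkgs> {}; hello' > newfile.nix
nix build -f newfile.nix
```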
<clever>
Lisanna: oh, there is a --store thing, let me check its args
<clever>
Lisanna: then on the remote machine do nix-store -r on that .drv
<clever>
Lisanna: another option is to do nix-copy-closure on the .drv file from nix-instantiate
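The .drv-copying approach sketched end to end (hostname and attribute name are examples):

```
# instantiate locally to get the .drv path
drv=$(nix-instantiate release.nix -A myjob)

# copy the derivation closure (inputs and sources) to the remote machine
nix-copy-closure --to builder.example.com "$drv"

# realise it there; the build runs entirely on the remote box
ssh builder.example.com "nix-store -r $drv"
```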
<clever>
Lisanna: it's for when the slave is on another LAN and cache.nixos.org can outdo your upload pipe
<clever>
Lisanna: yeah
<clever>
Lisanna: of course, it doesn't know what the remote machine has yet, until it has downloaded everything locally and is ready to start pushing things to the remote box
<clever>
it can't tell the difference between duplicate hosts and just 2 IPs for the same host
<clever>
Lisanna: probably not
<clever>
Lisanna: ssh won't be happy with you; MITM alerts everywhere
<clever>
Lisanna: that's basically what DNS does :P
<clever>
Lisanna: rotate the file, so each end-user has a different idea of "1st" in the file
<clever>
Lisanna: but if each end-user does 1 job, they will all put it on the 1st entry in /etc/nix/machines
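For reference, a sketch of the machines file being rotated (hostnames, key path, and job counts are all examples; the columns are URI, platform, ssh key, and max jobs):

```
# /etc/nix/machines — nix prefers slaves roughly in listed order,
# so give each end-user a copy with the lines rotated
ssh://builder-a x86_64-linux /root/.ssh/id_rsa 4
ssh://builder-b x86_64-linux /root/.ssh/id_rsa 4
```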