2018-06-23

<clever> Lisanna: and there is a speed-factor in the /etc/nix/machines file
<clever> Lisanna: it locally tracks how much "load" each machine has, by counting how many jobs it has assigned to it
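
For reference, each line of /etc/nix/machines names one build machine: URI, platform, ssh key path, maxJobs, speed factor, supported features, mandatory features. A minimal sketch with a hypothetical host, user, and key path:

    ssh://builder@10.0.0.2 x86_64-linux /root/.ssh/id_builder 4 2 kvm,big-parallel

The 5th column is the speed factor; when several machines are free, higher values are preferred.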
<clever> rardiol1: sudo messes with env vars
<clever> rardiol1: and there may be a bug that its not expanding $HOME properly, i always run nixos-rebuild under `sudo -i`
<clever> rardiol1: and the warning is more likely saying that somebody deleted it, when it should have existed
<clever> rardiol1: ~/.nix-defexpr/channels should be a symlink
<clever> rardiol1: thats where nix-env and nix-channel store channels
<clever> rardiol1: run `sudo -i` then check to see if nixos-rebuild warns again
<clever> rardiol1: nixos-rebuild
<clever> rardiol1: with or without sudo?
<clever> warning*
<clever> strange, what nix command gave that error?
<clever> rardiol1: and what does `echo $NIX_PATH` say?
<clever> rardiol1: what exactly is the warning?
<clever> it cant depend on the args its creating
<clever> yeah, thats one place you can insert it
<clever> so you can just add your own custom things, and then use { pkgs_path, pkgs, lib, config, ... }:
<clever> that is where pkgs itself comes from
<clever> samueldr: everything you put into _module.args, is passed to every module
<clever> samueldr: _module.args.pkgs_path = foo; ?
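
A minimal sketch of the _module.args technique described above; pkgs_path and the ../nixpkgs value are hypothetical:

    # any module in the imports list can define extra module arguments:
    { lib, ... }: {
      _module.args.pkgs_path = ../nixpkgs;  # hypothetical value
    }

    # every other module can then accept it as an argument:
    { pkgs_path, pkgs, lib, config, ... }: {
      # use pkgs_path here
    }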
<clever> elvishjerricco: nice
<clever> ndowens04: did you run nix-channel --update as the user you removed nixos from?
<clever> ndowens04: also, after removing a channel from a user, you still have to --update as that user to apply the removal
<clever> rupert: the warning is just about dumb DB systems that tried to store the id in a fixed-width field and silently truncated things
<clever> rupert: i think only new instances get longer id's and nixops already supports it
<clever> fresheyeball: one of these: https://nixos.org/nixos/options.html#bash
<clever> pkgconfig may work
<clever> and now you have to find the right subdir of the build input to use
<clever> PolarIntersect: python probably put things into /usr/include/python-2.3/ for ex, to solve the very problem nix solves :P
<clever> fresheyeball: maybe?, try to do a root login in DO and then a nixos-rebuild rollback
<clever> fresheyeball: nixops already sets that
<clever> fresheyeball: dont need those
<clever> fresheyeball: networking.nix
<clever> fresheyeball: everything under networking.interfaces i believe
<clever> fresheyeball: yeah
<clever> fresheyeball: read the existing configuration.nix file, that already has the config you need
<clever> fresheyeball: the same way you configure every other nixos thing in nixops
<clever> fresheyeball: you must copy the configuration.nix into the nixops files
<clever> fresheyeball: so nixos will try to run dhcp, and DO wont answer, so it just never comes back online
<clever> fresheyeball: targetHost tells nixops what to ssh into, but does not configure a static ip
<clever> fresheyeball: you need to include that static ip config in the nixops files, or it will just shut off the network
<clever> fresheyeball: does DO require static IP setup or does it provide dhcp?
<clever> fresheyeball: i suspect you didnt configure the network properly, so when nixops deploys, it just turns off the network card
<clever> and ssh keys wont help over the DO console
<clever> you have to put your keys into the nixops deployment files
<clever> fresheyeball: nixops just ignores the configuration.nix entirely
<clever> fresheyeball: start over, set a password with passwd and confirm you can login as root from the console first, then deploy again
<clever> fresheyeball: is it the right one?
<clever> fresheyeball: and if you check ifconfig, does it have an ip?
<clever> fresheyeball: and if you use the DO console to login as root?
<clever> manveru: stop the systemd service, then read its .service file and run it manually
<clever> manveru: ah, you have to start sshd manually after doing `ulimit -c unlimited`
<clever> manveru: ah, checking
<clever> manveru: then run bt in there
<clever> manveru: then run coredumpctl gdb <pid>
<clever> manveru: id start by turning on coredumpctl in the nixos options, then crashing it again, and checking coredumpctl for a coredump
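
A sketch of the workflow being described, assuming the standard NixOS coredump option and the usual coredumpctl commands:

    # configuration.nix: let systemd capture core dumps
    systemd.coredump.enable = true;

    # then, after reproducing the crash:
    #   coredumpctl list        # find the PID of the crash
    #   coredumpctl gdb <pid>   # open the dump in gdb
    #   (gdb) bt                # print the backtrace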
<clever> after the deploy
<clever> are you able to login from that console?
<clever> can you see any errors in the console?
<clever> probably wont help, which vm provider is it?
<clever> can you reboot the vm or confirm its IP?
<clever> did it give any other errors?
<clever> then nixops cant connect to the vm, so it has no way to deploy
<clever> fresheyeball: there is another error above that
<clever> manveru: gdb
<clever> yep
<clever> __monty__: maybe test it on 2 linux machines first?
<clever> __monty__: what about route print?
<clever> rotaerk: if you need to build it and get it in PATH, then cabal2nix is part of the answer; you then need to either add that to systemPackages, nix-env -i it, or wrap emacs to add those to PATH for emacs
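
A sketch of the systemPackages route, assuming cabal2nix has already generated a hypothetical ./mytool.nix from the package's cabal file:

    # configuration.nix
    environment.systemPackages = [
      (pkgs.haskellPackages.callPackage ./mytool.nix {})
    ];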
<clever> the darwin side hasnt had as much testing
<clever> __monty__: you may need to remove the routes with `ip route` or whatever the darwin tool is
<clever> __monty__: try stopping and restarting both ends and then checking list again
<clever> __monty__: then its not connected yet, do both ends show the other in list?
<clever> ping will still depend on the normal firewall rules, and nixos blocks it by default
<clever> __monty__: first check `list` to see if it says its online
<clever> but if you're going to ssh into both ends and add each other, it doesnt matter much which you use
<clever> __monty__: add will whitelist them, and send a friend request over the onion routing, so they get a log msg saying to whitelist you back
<clever> __monty__: whitelist lets a person connect, but does not inform them about wanting to connect over the onion network, so they will never know to accept
<clever> __monty__: try without the -u flag
<clever> rotaerk: if you need to execute the code inside it, then its best to build it with cabal2nix
<clever> __monty__: dang
<clever> rotaerk: you should build it with nix-build i think, look into cabal2nix
<clever> and then "run" and "bt"
<clever> __monty__: try sudo gdb --args toxvpn -u toonn
<clever> __monty__: ah, i think i was using http://tuntaposx.sourceforge.net/
<clever> __monty__: try running it under gdb to get a backtrace
<clever> __monty__: yeah, i'm coming up blank, cant remember how i set it up before
<clever> __monty__: oh wait, that seems different
<clever> __monty__: https://code.gerade.org/tunemu/ i think?
<clever> thats probably what head was, when it was last PR'd
<clever> so you need to nix-env -f release.nix -iA toxvpn.x86_64-darwin for example
<clever> { x86_64-darwin = «derivation /nix/store/zgz0snrsv2nr7fvmsvr8w6srpmmw1s3w-toxvpn-git.drv»; x86_64-linux = «derivation /nix/store/nrrw1458948hl3ywbazvzpnspyg03bz5-toxvpn-git.drv»; }
<clever> its a set containing: nix-repl> toxvpn
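
A minimal release.nix shaped like the set shown above (the real toxvpn release.nix may differ):

    let
      build = system:
        (import <nixpkgs> { inherit system; }).callPackage ./default.nix {};
    in {
      toxvpn = {
        x86_64-linux  = build "x86_64-linux";
        x86_64-darwin = build "x86_64-darwin";
      };
    }

which is why `nix-env -f release.nix -iA toxvpn.x86_64-darwin` selects the darwin build.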
<clever> so even with my toxid, you cant get my public IP until i approve of your connection
<clever> it also has some onion routing, so you can only find the IP of the peer if they want to expose it
<clever> yeah
<clever> release.nix already handles callPackage
<clever> __monty__: ah, is this for the default.nix in toxvpn?
<clever> __monty__: yes
<clever> then use the new file
<clever> __monty__: create a new file, that does: with import <nixpkgs> {}; callPackage ./oldfile.nix {}
<clever> so everybody has to download the source, always
<clever> Myrl-saki: if you did nix-env -i anything, it has to fetch toxvpn, just to find the .name of the derivation
<clever> Myrl-saki: nixpkgs doesnt allow import-from-derivation
<clever> __monty__: try cloning toxvpn and then editing the default.nix for it to a new version from the toxcore repo
<clever> __monty__: ah, may need to adjust the toxcore version, it has worked on mac in the past, but i dont test it often
<clever> __monty__: you can check ps aux
<clever> __monty__: but you can do (i think) -u foo, to make it drop root after doing so
<clever> __monty__: yeah, toxvpn needs root to create the tun interface
<clever> just set some limits and tell users to retry when things fail?
<clever> push all code to github, let hydra build it all and manage the queue
<clever> you may have better luck just using hydra
<clever> you would need a 17.09 VM to test older versions
<clever> 1.10 doesnt like the new nix schema, and 1.11+ wont do it
<clever> Lisanna: cant seem to reproduce that proxy-like "feature"
<clever> and the number of build users is dynamically chosen as the bigger of 32 and maxJobs
<clever> i think it needs to allocate a nixbld1 user before it can block on maxJobs
<clever> it will probably run out of build users and hard-fail
<clever> it may require a nix1 daemon on the central hub
<clever> that is a very nix2 style error
<clever> error: build of '/nix/store/11vdgj2s54nzryzj3snl3v1fyb85v0p5-name.txt.drv' on 'ssh://root@192.168.2.15' failed: a 'x86_64-darwin' is required to build '/nix/store/11vdgj2s54nzryzj3snl3v1fyb85v0p5-name.txt.drv', but I am a 'x86_64-linux'
<clever> Lisanna: the laptop is now copying inputs to the desktop...
<clever> in theory, yes
<clever> then i configured the desktop to use an actual mac slave
<clever> ok, i have configured my laptop to think the nixos desktop is a mac slave
<clever> testing...
<clever> hmmm, i have a thought
<clever> but my mac is off, so the build fails
<clever> error: a 'x86_64-darwin' is required to build '/nix/store/rwm1hd7lxa1kmlgbgqbdvb3by9rd4xr9-name.txt.drv', but I am a 'x86_64-linux'
<clever> cannot build on 'ssh://clever@192.168.2.167': cannot connect to 'clever@192.168.2.167': ssh: connect to host 192.168.2.167 port 22: No route to host
<clever> and this uses a mac build slave
<clever> $ nix-build -E 'with import <nixpkgs> { system = "x86_64-darwin"; }; writeText "name.txt" (toString builtins.currentTime)'
<clever> that writes the current timestamp to a file, and uses the current arch's build of bash to echo it in
<clever> > writeText "name.txt" (toString builtins.currentTime)
<clever> you could write a multiplexer
<clever> Lisanna: another option, this is how the protocol is implemented
<clever> you will have to test nix2 and see what it does
<clever> but that forwarding might have been a nix1 "bug"
<clever> Lisanna: i think it would preserve features
<clever> yeah
<clever> Lisanna: ive also noticed in older versions (not sure if its changed) that a remote "slave" will sometimes forward things to its own slaves, so you could use a single slave as a hub to load-balance between more
<clever> Lisanna: i believe the remote nix-daemon will block until a slot is free
<clever> mjrosenb: add this to the overrides, heist = pkgs.haskell.lib.dontCheck super.heist;
<clever> so it will just build with the current lens
<clever> when you jailbreak it, you're telling cabal to just ignore all version limits
<clever> lowercase b
<clever> mjrosenb: actually, doJailbreak
<clever> mjrosenb: yeah
<clever> coffeecupp__: nix-build '<nixpkgs>' -A emacs
<clever> mjrosenb: and what happens if you instead use the jailbreak method?
<clever> mjrosenb: the haskell stuff is all uniform and well designed, while the other stuff in nix is a bit more ad-hoc
<clever> mjrosenb: it actually makes a lot of things simpler
<clever> mjrosenb: the entire haskell package framework works a bit differently than the normal packages in nixpkgs
<clever> mjrosenb: an example of how ive done the same before: https://github.com/cleverca22/machotool/blob/master/default.nix
<clever> in either case, you can then use that hsPkgs to build your package, and optionally put your packages into the list of overrides
<clever> mjrosenb: plan_b will change the version of lens and rebuild anything that used lens
<clever> mjrosenb: plan_a will just tell cabal to ignore all version constraints
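
A sketch of both plans, assuming heist is the package fighting a newer lens; the pinned lens version is hypothetical:

    let
      pkgs = import <nixpkgs> {};
      # plan_a: jailbreak heist so cabal ignores its version constraints
      hsPkgsA = pkgs.haskellPackages.override {
        overrides = self: super: {
          heist = pkgs.haskell.lib.doJailbreak super.heist;
        };
      };
      # plan_b: pin lens itself; everything depending on lens rebuilds
      hsPkgsB = pkgs.haskellPackages.override {
        overrides = self: super: {
          lens = self.callHackage "lens" "4.15.4" {};  # hypothetical version
        };
      };
    in hsPkgsA.heist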
<clever> typing...
<clever> mjrosenb: do you need to change the version of heist or lens?
<clever> one min
<clever> and if its a haskell package, its a bit different
<clever> yeah
<clever> mjrosenb: overrideAttrs will pass you drv
<clever> mjrosenb: yeah
<clever> mjrosenb: that will let you provide a custom config.nix with your own overrides
<clever> mjrosenb: import <nixpkgs> { config = { packageOverrides = pkgs: { heist = pkgs.heist.overrideAttrs (drv: { ... }); }; }; }
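
A filled-in sketch of that packageOverrides shape, using pkgs.hello and a hypothetical patch just to show where the override body goes:

    import <nixpkgs> {
      config = {
        packageOverrides = pkgs: {
          hello = pkgs.hello.overrideAttrs (drv: {
            patches = (drv.patches or []) ++ [ ./fix.patch ];  # hypothetical
          });
        };
      };
    }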
<clever> tim`: looks like a systemd issue on the remote end, try rebooting that machine
<clever> zero out a few digits of the hash
<clever> so nix reuses the old version of the source
<clever> blob_: if you leave the sha256 the same, that is telling nix that the source has not changed
<clever> did you try to use the same sha256?
<clever> blob_: did you give it the new rev/branch and sha256?
<clever> behind the scenes
<clever> blob_: fetchFromGitHub will automatically use fetchurl if you have submodules off
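
A sketch of bumping such a source; owner, repo, and rev are hypothetical, and the deliberately-zeroed sha256 makes nix fail with the correct hash to paste back in:

    src = fetchFromGitHub {
      owner  = "example";
      repo   = "project";
      rev    = "v2.0";  # the new tag/branch/commit
      # wrong on purpose, so nix re-fetches and reports the real hash
      sha256 = "0000000000000000000000000000000000000000000000000000";
    };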
<clever> elvishjerricco: what version of nix are you running?
<clever> time for you to file a bug!
<clever> no code can be found that ever made this work
<clever> and it was already commented out in the docs
<clever> `git log --patch` claims that option never actually worked
<clever> Lisanna: there WAS a binary-caches-files option, that was a list of files containing binary caches
<clever> Lisanna: oh!
<clever> Lisanna: yeah, no trace of how this could work
<clever> src/libstore/store-api.hh: ‘binary-caches’. */
<clever> src/libstore/globals.hh: {"binary-caches"}};
<clever> inst/share/nix/corepkgs/unpack-channel.nix: echo -n "$binaryCacheURL" > $out/binary-caches/$channelName
<clever> 117 extraAttrs = "binaryCacheURL = \"" + *dlRes.data + "\";";
<clever> src/nix-channel/nix-channel.cc: DownloadRequest request(url + "/binary-cache-url");
<clever> are you running these commands as root?
<clever> Lisanna: unfinished plans?
<clever> i thought it would be too, but the code says the last piece is missing
<clever> you will have to add the binary cache to nix.conf
<clever> for when nix.conf is missing
<clever> Lisanna: cache.nixos.org is a default hard-coded into nix itself
<clever> Lisanna: but if no code obeys the file, then its basically useless
<clever> Lisanna: that generates a file like this, but i dont see any code that actually obeys it
<clever> [root@amd-nixos:~]# cat .nix-defexpr/channels/binary-caches/nixos
<clever> Lisanna: did you add it to nix.conf?
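
The nix.conf entry being asked about would look roughly like this; the second cache URL and its key are hypothetical:

    # /etc/nix/nix.conf
    binary-caches = https://cache.nixos.org https://cache.example.org
    binary-cache-public-keys = cache.example.org-1:AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA=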

2018-06-22

<clever> madknight: that would be the problem
<clever> madknight: what about `echo $NIX_PATH` ?
<clever> bbl
<clever> elvishjerricco: in that example, its just returning the result of running `id`, but it could do anything from cloning a private git repo to pulling passwords out of lastpass, lol
<clever> elvishjerricco: builtins.exec takes the argv as a list, runs the given program (outside the sandbox, as the user doing the eval), and then parses stdout as nix, and returns that nix object
<clever> elvishjerricco: another option is native code in nix
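
A sketch of builtins.exec; it is only honoured when nix.conf sets allow-unsafe-native-code-during-evaluation = true, and the command here is just an illustration:

    # runs outside the sandbox as the evaluating user; stdout is parsed
    # as a nix expression, so this evaluates to your username as a string
    builtins.exec [ "sh" "-c" "echo '\"'$(id -un)'\"'" ]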
<clever> elvishjerricco: so you would always need a 3rd phase, passing in the secrets nixops created, and then dumping more xml
<clever> elvishjerricco: although, i think the 1st phase is a nix-instantiate dumping xml, while the 2nd phase is just a nix-build that returns a storepath, so the 2nd phase cant return secrets
<clever> ah, maybe modify send_keys to query the second phase instead then
<clever> elvishjerricco: maybe add a 3rd phase at 1.5, that will then run it again with the proper value from running `pass`?
<clever> infinisil: heh, and now sshd depends on the state of the xserver config!
<clever> elvishjerricco: ah, that explains why ive never heard of it before, lol
<clever> elvishjerricco: ive not messed with the passwordStore resource yet
<clever> so it breaks it into 2 phases, by type
<clever> elvishjerricco: in the security-group case, it entirely ignores the actual dependency tree, and simply says that all security-groups must be made after vpc and eip resources
<clever> elvishjerricco: nixops will use the create_after function between pairs of resources, to sort them properly, so its all created in an order the backend accepts
<clever> elvishjerricco: there is a sort thing in the code, let me grab it
<clever> infinisil: so only the parts that changed (or lack memoise) will have to recompute
<clever> infinisil: if you re-import the top-level expression within a single nix process, but it uses memoise correctly, it can recycle values the previous expression had computed
<clever> infinisil: memoise and reload could be combined to almost get that effect
<clever> i think the nix level returns a placeholder value, and nixops will then map it over
<clever> elvishjerricco: there are some hacks to get around that, so the ec2 security groups can refer to ec2 elastic IP's
<clever> elvishjerricco: ah
<clever> and this fork, which can reduce resource waste by avoiding recomputing things
<clever> 2018-02-06 12:28:29< niksnut> https://github.com/NixOS/nix/tree/memoise
<clever> elvishjerricco: there is also a fork of nix that can eval in parallel
<clever> elvishjerricco: then it does a nix build of a special function, passing it a list of machine names to include in the build
<clever> elvishjerricco: nixops currently has 2 phases, it will first do a simple eval of the expressions, and dump the entire tree as xml, to find out what resources exist
<clever> elvishjerricco: let me find a fork...
<clever> > (rec { xyz = { blah = "foo"; myself = xyz; }; }).xyz.myself.myself.myself.myself.blah
<clever> > (rec { xyz = { blah = "foo"; myself = xyz; }; }).xyz.myself.blah
<clever> __monty__: it can even be run with zero params, it will ask for config on stdio
<clever> __monty__: just run toxvpn as root and use the repl on stdin/stdout
<clever> the tricky part was having to use mkAfter to get nixos to merge this right
<clever> bpye: so i need to manually write my own iptables lines: https://github.com/cleverca22/nixos-configs/blob/master/router.nat.nix#L21-L23
<clever> bpye: i do see the need for that sometimes, i run a caching dns server on my router, but i dont want it to be publicly open
<clever> it would be simple enough to refactor things, you see
<clever> { stdenv, version ? "default", sha256 ? "default" }: ...
<clever> ben: version would have to be moved into an argument to the file, then you can use .override
<clever> .overrideAttrs (oldAttrs: rec { version = "1234"; name = "foo-${version}"; src = pkgs.fetchurl { url = "http://example.com/${name}.tar.gz"; }; })
<clever> bpye: more rec!
<clever> bpye: yeah
<clever> rec happens before the overrides are applied, so changes to version dont help
<clever> bpye: you need to override src, not version
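
A sketch of the refactor being suggested: version and sha256 become arguments of the file, so .override can replace them and rec recomputes name and src (all concrete values are hypothetical):

    # foo.nix
    { stdenv, fetchurl
    , version ? "1.0"
    , sha256  ? "0000000000000000000000000000000000000000000000000000"
    }:
    stdenv.mkDerivation rec {
      name = "foo-${version}";
      src  = fetchurl {
        url = "http://example.com/${name}.tar.gz";
        inherit sha256;
      };
    }

since callPackage makes the result .override-able, (pkgs.callPackage ./foo.nix {}).override { version = "2.0"; sha256 = <new hash>; } swaps both at once.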
<clever> the description could maybe be improved a bit
<clever> its sort of both at once
<clever> ah
<clever> __monty__: there is none, neither end is more client than server
<clever> infinisil: cant seem to delete the rename file though
<clever> hmmm, what if i just drop every alias? ....
<clever> error: The option `services.xserver.videoDrivers' defined in `/nix/store/xdacbvng4mrrx8xc863jqgk1mijvvrsc-43c77db3aa58e06cc3ced846431dd4228f93cd5d.tar.gz-unpacked/nixos/modules/rename.nix' does not exist.
<clever> but the module is still loaded by nixos
<clever> none of them use it
<clever> infinisil: i'm now experimenting to see what happens to the nixops memory usage, if i just drop x11 support, lol
<clever> agander: try grep -r as well
<clever> infinisil: youll want to take the ID from status, and then use it with the add command on every other node, and both parties must add the other for it to form a connection
<clever> infinisil: after that is enabled, you can use the toxvpn-remote program to open a repl for controlling it, it has a help command
<clever> agander: did you search for it with grep or find?
<clever> infinisil: start by setting services.toxvpn = { enable = true; localip = "foo"; };, giving each machine a unique IP, preferably in an unused subnet, either 192.168.x.y or 10.x.y.z
<clever> infinisil: do you have a few machines you can play with?
<clever> infinisil: tox will still auto-detect if the machine is on the LAN, and directly use the private IP's when it can
<clever> if i accepted you on toxvpn, you could then connect to 192.168.123.11
<clever> inet 192.168.123.11 peer 10.123.123.123/32 scope global tox_master0
<clever> 4: tox_master0: <POINTOPOINT,UP,LOWER_UP> mtu 1200 qdisc pfifo_fast state UNKNOWN group default qlen 500
<clever> infinisil: toxvpn gives each machine an extra ip, and you have to choose to use the vpn based one
<clever> infinisil: its all p2p, so it doesnt bottleneck at some external server
<clever> infinisil: thats one major point toxvpn solves over openvpn
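
A minimal sketch of the services.toxvpn setup described above; the localip is hypothetical and must be unique per machine:

    # configuration.nix on each machine
    services.toxvpn = {
      enable  = true;
      localip = "192.168.123.11";  # unique per machine, in an unused subnet
    };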
<clever> but thats only half the amount of space i need to eval this deployment, lol
<clever> basically, aws gives you a "tmpfs"-like block device (150gig for this machine size) that is lost on shutdown
<clever> nulls
<clever> other than the first 5 bytes, the entire xvdb is blank
<clever> but i do see a 150gig xvdb drive
<clever> nixos doesnt appear to mount the ephemeral storage
<clever> tilpner: oh, aws ephemeral storage.....
<clever> and the entire deployment, about 60 machines, 300 gig of ram
<clever> so the 20 machines would need 50gigs
<clever> just to eval the nix
<clever> infinisil: in my quick profiling with nixops, a 10 machine deployment takes ~5gig of ram!
<clever> thats the general featureset of toxvpn
<clever> __monty__: so as an example, both of us could vpn to nixos.org, but we cant vpn to each other
<clever> __monty__: p2p, udp hole punching, no need for static ip's or port forwarding, and each node has a whitelist of who can connect to it
<clever> i do have plans to base the IP on the pubkey tox generates, so its automatic and "unique", but that would also further complicate a nixops setup
<clever> yeah, thats also an issue, you need to give each machine a unique IP currently
<clever> so it cant be automated entirely with nixops
<clever> but to link the machines up, you need to extract the public key from 1, and share it with the others
<clever> that could also be used to give private IP's for each machine to contact each other
<clever> tilpner: have you seen toxvpn?
<clever> you can just go thru module-list.nix and negate everything in it you dont want
<clever> #disabledModules = [ "services/networking/ntpd.nix" ];
<clever> infinisil: there is a nixos flag to disable modules
<clever> infinisil: and it OOM's on a box with 64gig of ram...
<clever> there are ~60 machines in the deployment
<clever> it takes 1m 4 seconds, to eval 2 machines in this deployment!
<clever> `nixops deploy --dry-run --include machine1 machine2` is currently running...
<clever> infinisil: ive got a deployment with ~60 machines, and it cant even eval 20 of them at once