<clever>
and now you have to find the right subdir of the build input to use
<clever>
PolarIntersect: python probably put things into /usr/include/python-2.3/ for ex, to solve the very problem nix solves :P
<clever>
fresheyeball: maybe?, try to do a root login in DO and then a nixos-rebuild rollback
<clever>
fresheyeball: nixops already sets that
<clever>
fresheyeball: dont need those
<clever>
fresheyeball: networking.nix
<clever>
fresheyeball: everything under networking.interfaces i believe
<clever>
fresheyeball: yeah
<clever>
fresheyeball: read the existing configuration.nix file, that already has the config you need
<clever>
fresheyeball: the same way you configure every other nixos thing in nixops
<clever>
fresheyeball: you must copy the configuration.nix into the nixops files
<clever>
fresheyeball: so nixos will try to run dhcp, and DO wont answer, so it just never comes back online
<clever>
fresheyeball: targetHost tells nixops what to ssh into, but does not configure a static ip
<clever>
fresheyeball: you need to include that static ip config in the nixops files, or it will just shut off the network
<clever>
fresheyeball: does DO require static IP setup or does it provide dhcp?
<clever>
fresheyeball: i suspect you didnt configure the network properly, so when nixops deploys, it just turns off the network card
<clever>
and ssh keys wont help over the DO console
<clever>
you have to put your keys into the nixops deployment files
<clever>
fresheyeball: nixops just ignores the configuration.nix entirely
<clever>
fresheyeball: start over, set a password with passwd and confirm you can login as root from the console first, then deploy again
<clever>
fresheyeball: is it the right one?
<clever>
fresheyeball: and if you check ifconfig, does it have an ip?
<clever>
fresheyeball: and if you use the DO console to login as root?
<clever>
manveru: stop the systemd service, then read its .service file and run it manually
<clever>
manveru: ah, you have to start sshd manually after doing `ulimit -c unlimited`
<clever>
manveru: ah, checking
<clever>
manveru: then run bt in there
<clever>
manveru: then run coredumpctl gdb <pid>
<clever>
manveru: id start by turning on coredumpctl in the nixos options, then crashing it, and checking coredumpctl for a coredump
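[editor's note: the nixos option referred to here is systemd.coredump.enable; a minimal sketch of that config:]

```nix
# sketch: enable systemd's coredump collection so coredumpctl has
# something to list after the crash
{
  systemd.coredump.enable = true;
}
```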
<clever>
after the deploy
<clever>
are you able to login from that console?
<clever>
can you see any errors in the console?
<clever>
probably wont help, which vm provider is it?
<clever>
can you reboot the vm or confirm its IP?
<clever>
did it give any other errors?
<clever>
then nixops cant connect to the vm, so it has no way to deploy
<clever>
fresheyeball: there is another error above that
<clever>
manveru: gdb
<clever>
yep
<clever>
__monty__: maybe test it on 2 linux machines first?
<clever>
__monty__: what about route print?
<clever>
rotaerk: if you need to build and get it in PATH, then cabal2nix is part of the answer, you then need to either: add that to systemPackages, nix-env -i it, or wrap emacs to add those to PATH for emacs
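[editor's note: a sketch of the cabal2nix route; `./myproject.nix` is a hypothetical file produced beforehand with `cabal2nix . > myproject.nix` in the project directory:]

```nix
# default.nix (sketch): build the project from its cabal2nix output
{ pkgs ? import <nixpkgs> {} }:
pkgs.haskellPackages.callPackage ./myproject.nix {}
```

running `nix-build` on this leaves a `./result` symlink whose `bin/` can be put on PATH, or the derivation can go into `environment.systemPackages` / `nix-env -i` as described above.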
<clever>
the darwin side hasnt had as much testing
<clever>
__monty__: you may need to remove the routes with `ip route` or whatever the darwin tool is
<clever>
__monty__: try stopping and restarting both ends and then checking list again
<clever>
__monty__: then its not connected yet, do both ends show the other in list?
<clever>
ping will still depend on the normal firewall rules, and nixos blocks it by default
<clever>
__monty__: first check `list` to see if it says its online
<clever>
but if you're going to ssh into both ends and add each other, it doesnt matter much which you use
<clever>
__monty__: add will whitelist them, and send a friend request over the onion routing, so they get a log msg saying to whitelist you back
<clever>
__monty__: whitelist lets a person connect, but does not inform them about wanting to connect over the onion network, so they will never know to accept
<clever>
__monty__: try without the -u flag
<clever>
rotaerk: if you need to execute the code inside it, then its best to build it with cabal2nix
<clever>
__monty__: dang
<clever>
rotaerk: you should build it with nix-build i think, look into cabal2nix
<clever>
i think it needs to allocate a nixbld1 user before it can block on maxJobs
<clever>
it will probably run out of build users and hard-fail
<clever>
it may require a nix1 daemon on the central hub
<clever>
that is a very nix2 style error
<clever>
error: build of '/nix/store/11vdgj2s54nzryzj3snl3v1fyb85v0p5-name.txt.drv' on 'ssh://root@192.168.2.15' failed: a 'x86_64-darwin' is required to build '/nix/store/11vdgj2s54nzryzj3snl3v1fyb85v0p5-name.txt.drv', but I am a 'x86_64-linux'
<clever>
Lisanna: the laptop is now copying inputs to the desktop...
<clever>
in theory, yes
<clever>
then i configured the desktop to use an actual mac slave
<clever>
ok, i have configured my laptop to think the nixos desktop is a mac slave
<clever>
testing...
<clever>
hmmm, i have a thought
<clever>
but my mac is off, so the build fails
<clever>
error: a 'x86_64-darwin' is required to build '/nix/store/rwm1hd7lxa1kmlgbgqbdvb3by9rd4xr9-name.txt.drv', but I am a 'x86_64-linux'
<clever>
cannot build on 'ssh://clever@192.168.2.167': cannot connect to 'clever@192.168.2.167': ssh: connect to host 192.168.2.167 port 22: No route to host
<clever>
you will have to test nix2 and see what it does
<clever>
but that forwarding might have been a nix1 "bug"
<clever>
Lisanna: i think it would preserve features
<clever>
yeah
<clever>
Lisanna: ive also noticed in older versions (not sure if its changed) that a remote "slave" will sometimes forward things to its own slaves, so you could use a single slave as a hub to load-balance between more
<clever>
Lisanna: i believe the remote nix-daemon will block until a slot is free
<clever>
mjrosenb: add this to the overrides, heist = pkgs.haskell.lib.dontCheck super.heist;
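[editor's note: one possible shape for those overrides, as a sketch; the surrounding attribute names depend on how the override set is wired up, and `pkgs` is assumed to be in scope:]

```nix
# sketch: a haskellPackages override set where heist's test suite is
# skipped, so the lens version mismatch in its tests cant fail the build
haskellPackages.override {
  overrides = self: super: {
    heist = pkgs.haskell.lib.dontCheck super.heist;
  };
}
```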
<clever>
so it will just build with the current lens
<clever>
when you jailbreak it, you're telling cabal to just ignore all version limits
<clever>
lowercase b
<clever>
mjrosenb: actually, doJailbreak
<clever>
mjrosenb: yeah
<clever>
coffeecupp__: nix-build '<nixpkgs>' -A emacs
<clever>
mjrosenb: and what happens if you instead use the jailbreak method?
<clever>
mjrosenb: the haskell stuff is all uniform and well designed, while the other stuff in nix is a bit more ad-hoc
<clever>
mjrosenb: it actually makes a lot of things simpler
<clever>
mjrosenb: the entire haskell package framework works a bit differently than the normal packages in nixpkgs
<clever>
elvishjerricco: in that example, its just returning the result of running `id`, but it could do anything from cloning a private git repo to pulling passwords out of lastpass, lol
<clever>
elvishjerricco: builtins.exec takes the argv as a list, runs the given program (outside the sandbox, as the user doing the eval), and then parses stdout as nix, and returns that nix object
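[editor's note: a minimal sketch; `builtins.exec` only exists when nix is started with `allow-unsafe-native-code-during-evaluation = true`:]

```nix
# runs `id -u` outside the sandbox, as the evaluating user; its stdout
# (e.g. "1000") is parsed as a nix expression, here an integer
let uid = builtins.exec [ "id" "-u" ];
in "evaluating as uid ${toString uid}"
```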
<clever>
elvishjerricco: another option is native code in nix
<clever>
elvishjerricco: so you would always need a 3rd phase, passing in the secrets nixops created, and then dumping more xml
<clever>
elvishjerricco: although, i think the 1st phase is a nix-instantiate dumping xml, while the 2nd phase is just a nix-build that returns a storepath, so the 2nd phase cant return secrets
<clever>
ah, maybe modify send_keys to query the second phase instead then
<clever>
elvishjerricco: maybe add a 3rd phase at 1.5, that will then run it again with the proper value from running `pass`?
<clever>
infinisil: heh, and now sshd depends on the state of the xserver config!
<clever>
elvishjerricco: ah, that explains why ive never heard of it before, lol
<clever>
elvishjerricco: ive not messed with the passwordStore resource yet
<clever>
so it breaks it into 2 phases, by type
<clever>
elvishjerricco: in the security-group case, it entirely ignores the actual dependency tree, and simply says that all security-groups must be made after vpc and eip resources
<clever>
elvishjerricco: nixops will use the create_after function between pairs of resources, to sort them properly, so its all created in an order the backend accepts
<clever>
elvishjerricco: there is a sort thing in the code, let me grab it
<clever>
infinisil: so only the parts that changed (or lack memoise) will have to recompute
<clever>
infinisil: if you re-import the top-level expression within a single nix process, but it uses memoise correctly, it can recycle values the previous expression had computed
<clever>
infinisil: memoise and reload could be combined to almost get that effect
<clever>
i think the nix level returns a placeholder value, and nixops will then map it over
<clever>
elvishjerricco: there are some hacks to get around that, so the ec2 security groups can refer to ec2 elastic IP's
<clever>
elvishjerricco: ah
<clever>
and this fork can reduce resource waste by not recomputing things
<clever>
elvishjerricco: there is also a fork of nix that can eval in parallel
<clever>
elvishjerricco: then it does a nix build of a special function, passing it a list of machine names to include in the build
<clever>
elvishjerricco: nixops currently has 2 phases, it will first do a simple eval of the expressions, and dump the entire tree as xml, to find out what resources exist
<clever>
ben: version would have to be moved into an argument to the file, then you can use .override
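[editor's note: spelled out as a sketch; `foo.nix` and its url are hypothetical, and the sha256 is a placeholder:]

```nix
# hypothetical foo.nix, with version lifted into the file's arguments
{ stdenv, fetchurl, version }:
stdenv.mkDerivation rec {
  name = "foo-${version}";
  src = fetchurl {
    url = "http://example.com/${name}.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";  # placeholder
  };
}
```

then `callPackage ./foo.nix { version = "1234"; }` builds one version, and `.override { version = "2345"; }` on the result swaps it (a new sha256 is needed per version).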
<clever>
.overrideAttrs (oldAttrs: rec { version = "1234"; name = "foo-${version}"; src = pkgs.fetchurl { url = "http://example.com/${name}.tar.gz"; }; })
<clever>
bpye: more rec!
<clever>
bpye: yeah
<clever>
rec happens before the overrides are applied, so changes to version dont help
<clever>
bpye: you need to override src, not version
<clever>
the description could maybe be improved a bit
<clever>
its sort of both at once
<clever>
ah
<clever>
__monty__: there is none, neither end is more client than server
<clever>
infinisil: cant seem to delete the rename file though
<clever>
hmmm, what if i just drop every alias? ....
<clever>
error: The option `services.xserver.videoDrivers' defined in `/nix/store/xdacbvng4mrrx8xc863jqgk1mijvvrsc-43c77db3aa58e06cc3ced846431dd4228f93cd5d.tar.gz-unpacked/nixos/modules/rename.nix' does not exist.
<clever>
but the module is still loaded by nixos
<clever>
none of them use it
<clever>
infinisil: i'm now experimenting to see what happens to the nixops memory usage, if i just drop x11 support, lol
<clever>
agander: try grep -r as well
<clever>
infinisil: youll want to take the ID from status, and then use it with the add command on every other node, and both parties must add the other for it to form a connection
<clever>
infinisil: after that is enabled, you can use the toxvpn-remote program to open a repl for controlling it, it has a help command
<clever>
agander: did you search for it with grep or find?
<clever>
infinisil: start by setting services.toxvpn = { enable = true; localip = "foo"; };, giving each machine a unique IP, preferably in an unused subnet, either 192.168.x.y or 10.x.y.z
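[editor's note: that inline config, spelled out per machine; the 192.168.123.x addresses are just example picks:]

```nix
# machine A's configuration
services.toxvpn = { enable = true; localip = "192.168.123.11"; };
```

```nix
# machine B's configuration: same subnet, different address
services.toxvpn = { enable = true; localip = "192.168.123.12"; };
```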
<clever>
infinisil: do you have a few machines you can play with?
<clever>
infinisil: tox will still auto-detect if the machine is on the LAN, and directly use the private IP's when it can
<clever>
if i accepted you on toxvpn, you could then connect to 192.168.123.11
<clever>
inet 192.168.123.11 peer 10.123.123.123/32 scope global tox_master0
<clever>
4: tox_master0: <POINTOPOINT,UP,LOWER_UP> mtu 1200 qdisc pfifo_fast state UNKNOWN group default qlen 500
<clever>
infinisil: toxvpn gives each machine an extra ip, and you have to choose to use the vpn based one
<clever>
infinisil: its all p2p, so it doesnt bottleneck at some external server
<clever>
infinisil: thats one major point toxvpn solves over openvpn
<clever>
but thats only half the amount of space i need to eval this deployment, lol
<clever>
basically, aws gives you a "tmpfs" like block device (150gig for this machine size), that is lost on shutdown
<clever>
nulls
<clever>
other than the first 5 bytes, the entire xvdb is blank
<clever>
but i do see a 150gig xvdb drive
<clever>
nixos doesnt appear to mount the ephemeral storage
<clever>
tilpner: oh, aws ephemeral storage.....
<clever>
and the entire deployment, about 60 machines, 300 gig of ram
<clever>
so the 20 machines would need 50gigs
<clever>
just to eval the nix
<clever>
infinisil: in my quick profiling with nixops, a 10 machine deployment takes ~5gig of ram!
<clever>
thats the general featureset of toxvpn
<clever>
__monty__: so as an example, both of us could vpn to nixos.org, but we cant vpn to each other
<clever>
__monty__: p2p, udp hole punching, no need for static ip's or port forwarding, and each node has a whitelist of who can connect to it
<clever>
i do have plans to base the IP on the pubkey tox generates, so its automatic and "unique", but that would also further complicate a nixops setup
<clever>
yeah, thats also an issue, you need to give each machine a unique IP currently
<clever>
so it cant be automated entirely with nixops
<clever>
but to link the machines up, you need to extract the public key from 1, and share it with the others
<clever>
that could also be used to give private IP's for each machine to contact each other
<clever>
tilpner: have you seen toxvpn?
<clever>
you can just go thru module-list.nix and negate everything in it you dont want
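[editor's note: rather than editing module-list.nix itself, the module system's `disabledModules` option can negate entries; a sketch:]

```nix
# sketch: blacklist a module by its path relative to nixos/modules
{
  disabledModules = [ "services/x11/xserver.nix" ];
}
```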