<pistache>
so if I want to have these NixOps containers that share their Nix store with the host, it looks like I'll have to make some changes in NixOps itself
<adisbladis>
pistache: I'd be very happy to collaborate on that
<pistache>
to make it manage a profile specific to the container, but stored on the host, for example in /nix/var/nix/profiles/per-host/${vm_id}/
<pistache>
it looks like I can reuse some of the code used to implement rollbacks (if I'm reading things right, I'm still a bit new to the Nix ecosystem)
<pistache>
adisbladis: cool :) I'm happy to hear that you're interested in this feature, I was scared that it would be out-of-scope for NixOps
teto has quit [Ping timeout: 260 seconds]
meh` has quit [Ping timeout: 256 seconds]
<pistache>
adisbladis: what do you think of adding methods to manage the profiles (create/set/delete) to the Transport interface?
<pistache>
since there is already copy_closure(), it would be a nice extension, as it would allow managing the lifetime of the copied closure
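(A hypothetical sketch of what such an extension could look like; the method names and the per-host profile path from earlier in the discussion are purely illustrative, not existing NixOps API:)

```python
# Hypothetical sketch only: profile management layered on a Transport that
# already offers run_command()/copy_closure(); method names are made up.
class Transport:
    def run_command(self, argv, **kwargs):
        raise NotImplementedError

    def copy_closure(self, store_path):
        raise NotImplementedError

    def profile_path(self, vm_id):
        # Per-container profile kept on the host, as discussed above.
        return f"/nix/var/nix/profiles/per-host/{vm_id}/system"

    def create_profile(self, vm_id):
        self.run_command(["mkdir", "-p", f"/nix/var/nix/profiles/per-host/{vm_id}"])

    def set_profile(self, vm_id, store_path):
        # Pinning the copied closure in a profile protects it from GC.
        self.create_profile(vm_id)
        self.run_command(["nix-env", "--profile", self.profile_path(vm_id),
                          "--set", store_path])

    def delete_profile(self, vm_id):
        # Drop the pin; the closure becomes garbage-collectable again.
        self.run_command(["rm", "-rf", f"/nix/var/nix/profiles/per-host/{vm_id}"])
```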
raghavsood has joined #nixops
<raghavsood>
I have an application that needs to be deployed on two servers - the application config is the same, but the nginx config differs (hostname). In NixOps, is there any way to write a declaration like `nginx.virtualHosts.${config.current_machine.hostname}`, so that I can factor the service into a single common file I can import?
<raghavsood>
Right, that makes sense; now how would I go about doing it for an arbitrary string? :D The hostname bit isn't the actual machine hostname, rather an FQDN
<raghavsood>
Although if that works I could probably just define a machine-level variable and do `config.var`, right?
<adisbladis>
raghavsood: Yeah, I'd make my own "module" I think
<raghavsood>
Yeah, that might be the sanest option - many such settings will be common, but with different values, across the fleet
<raghavsood>
Thanks!
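(A rough sketch of the kind of module adisbladis suggests; the option name `my.fqdn` and the nginx settings are invented for illustration:)

```nix
# common.nix -- imported by every machine; `my.fqdn` is a made-up option name
{ config, lib, ... }:
{
  options.my.fqdn = lib.mkOption {
    type = lib.types.str;
    description = "FQDN this machine serves via nginx";
  };

  config.services.nginx.virtualHosts.${config.my.fqdn} = {
    forceSSL = true;
    enableACME = true;
    locations."/".proxyPass = "http://127.0.0.1:8080";
  };
}
```

Each machine then just imports common.nix and sets its own value, e.g. `my.fqdn = "app1.example.org";`.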
tokudan has quit [Remote host closed the connection]
tokudan has joined #nixops
teto has joined #nixops
meh` has joined #nixops
<raghavsood>
Using the hetzner plugin, the nameservers seem to include Hetzner's nameservers by default, even if I specify networking.nameservers. The ones I specify are added lower down in resolv.conf
<raghavsood>
Is there a way to make sure only my nameservers are active?
<adisbladis>
raghavsood: lib.mkForce ?
<raghavsood>
Yeah, just discovered that in a github issue, trying it out
<raghavsood>
Interestingly, forcing resolvconf = false and then setting the nameservers didn't work, but just directly forcing the nameservers did
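(The approach that worked, as a minimal sketch; the addresses are placeholders:)

```nix
# Overrides whatever the hetzner plugin sets, regardless of module priority.
networking.nameservers = lib.mkForce [ "1.1.1.1" "9.9.9.9" ];
```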
<pistache>
adisbladis: it looks like shared-store containers will be much easier with pluggable transports (if they get these profile methods); right now the only way I've found to make them work is by adding special cases to the Deployment class
<adisbladis>
pistache: Nice :)
<adisbladis>
I realized my design is not _quite_ good enough yet
<adisbladis>
I think there needs to be a standard way to nest transports
<adisbladis>
Like, ssh + lxc
<adisbladis>
So you can deploy to remote container hosts
<pistache>
yes! I was thinking of that this morning
<pistache>
although in LXD's case there is already a remote API available, SSH to the host is still needed for things such as the shared store
<adisbladis>
I'm not entirely clear on how you'd want to express that kind of setup though
<pistache>
yes, me neither
<adisbladis>
I've also realised that what's currently the SSH transport is really "ssh + local"
<pistache>
yes
<pistache>
maybe we could use two different transports (one to the deployed host and one to the closure host), and in the case of non-shared-store targets they would be the same
<pistache>
erm.. that wasn't really clear, I mean using an optional additional transport to the closure
<pistache>
in the spirit of get_ssh_for_copy_closure()
<adisbladis>
I actually think there is a better way, but we need to drop down to a bit lower level
<adisbladis>
nix-copy-closure runs `nix-daemon --stdio` on the remote
<adisbladis>
And communicates over stdin/stdout
<adisbladis>
That flow could be abstracted on top of run_command
<adisbladis>
Suddenly the transport API is a bit smaller
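(The daemon-protocol flow adisbladis describes means speaking the Nix wire protocol from the local side; the same "everything over the transport's stdin/stdout" idea can be illustrated more simply with nix-store --export/--import. The stdin= keyword on run_command() is an assumption, not existing NixOps API:)

```python
import subprocess

def copy_closure(transport, store_path):
    # Compute the full closure locally...
    closure = subprocess.run(
        ["nix-store", "--query", "--requisites", store_path],
        check=True, capture_output=True, text=True,
    ).stdout.split()

    # ...serialise it, and stream it straight into the remote side's stdin.
    export = subprocess.Popen(["nix-store", "--export", *closure],
                              stdout=subprocess.PIPE)
    transport.run_command(["nix-store", "--import"], stdin=export.stdout)
    export.stdout.close()
    if export.wait() != 0:
        raise RuntimeError("nix-store --export failed")
```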
<pistache>
ah right
<adisbladis>
I think we may want to reimplement upload_file/download_file in the same way
<adisbladis>
Chuck tarballs over a pipe maybe?
<adisbladis>
Suddenly the transport API is _tiny_
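(A sketch of the tarball idea, again assuming a hypothetical run_command() that accepts a stdin pipe; only tar is assumed to exist on the remote:)

```python
import os
import subprocess

def upload_file(transport, local_path, remote_dir):
    # Pack the file locally and unpack it on the other end of the pipe.
    directory, name = os.path.split(os.path.abspath(local_path))
    tar = subprocess.Popen(["tar", "-cf", "-", "-C", directory, name],
                           stdout=subprocess.PIPE)
    transport.run_command(["tar", "-xf", "-", "-C", remote_dir],
                          stdin=tar.stdout)
    tar.stdout.close()
    if tar.wait() != 0:
        raise RuntimeError("local tar failed")
```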
<pistache>
but in the case of shared-store, we'd still need two different run_command() functions (either two transports or an additional function)
<adisbladis>
I think if you nest things it can be even more elegant :)
<pistache>
ahhh I see what you mean
<pistache>
yes
<adisbladis>
Imagine that there is an SSH transport which only implements run_command(); then you issue commands on the remote host _outside_ of the container namespace
<adisbladis>
Deeper down the stack is an LXC layer that knows how to address commands to the correct container
<adisbladis>
In fact, you could express things which are completely bonkers if we do it like this ^_^
<adisbladis>
Multi-jump SSH anyone? :)
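(A toy illustration of the nesting idea; neither class exists in NixOps, and the plain ssh/lxc subprocess calls just stand in for real transports:)

```python
import shlex
import subprocess

class SSHTransport:
    """Runs commands on the remote host, outside any container namespace."""
    def __init__(self, host):
        self.host = host

    def run_command(self, argv, **kwargs):
        # ssh hands the command to a remote shell, so quote it properly.
        return subprocess.run(["ssh", self.host, shlex.join(argv)], **kwargs)

class LXCTransport:
    """Wraps another transport and redirects commands into a container."""
    def __init__(self, parent, container):
        self.parent = parent
        self.container = container

    def run_command(self, argv, **kwargs):
        return self.parent.run_command(
            ["lxc", "exec", self.container, "--", *argv], **kwargs)

# Deploy to a container on a remote host -- or keep nesting for multi-jump:
target = LXCTransport(SSHTransport("container-host.example.org"), "web1")
target.run_command(["systemctl", "restart", "nginx"])
```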
<pistache>
yes :)
<adisbladis>
nix-copy-closure requires ssh
<adisbladis>
But `nix copy` doesn't
<adisbladis>
And has an undocumented unix://
<adisbladis>
It's not undocumented per se, but not very well documented
<pistache>
for the LXD backend, we'd also need to define additional sockets to be forwarded
<pistache>
as I'm communicating with LXD through a socket (currently over HTTP, but I can rewrite it to use a unix socket)
<adisbladis>
I'm not sure that's feasible
<adisbladis>
I think the common denominator needs to be "execute this command"
<adisbladis>
Forwarding sockets is complex stuff
<pistache>
ah
<pistache>
this means I'd have to rewrite the backend to use the command-line API, which is unfortunate
<pistache>
sorry if this is a silly question, but why would forwarding sockets be so complex?
<pistache>
isn't it just passing -L to SSH?
<raghavsood>
Is there a way to iterate over all nodes within a deployment? My goal is to have a Prometheus scrape job automatically pick up new Hetzner nodes as they're added to the deployment
<raghavsood>
I can just iterate `config.nodes`, right?
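(Roughly what that might look like inside one machine's module, as a sketch; NixOps passes a `nodes` argument to each machine module, and port 9100 / networking.hostName are illustrative choices:)

```nix
# monitoring.nix -- illustrative; `nodes` is the per-deployment set NixOps
# passes to every machine module
{ nodes, lib, ... }:
{
  services.prometheus.scrapeConfigs = [{
    job_name = "node";
    static_configs = [{
      # One target per machine in the deployment, assuming node_exporter on 9100.
      targets = map
        (name: "${nodes.${name}.config.networking.hostName}:9100")
        (lib.attrNames nodes);
    }];
  }];
}
```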
teto has quit [Ping timeout: 246 seconds]
cptMikky has quit [Ping timeout: 240 seconds]
b42 has quit [Ping timeout: 244 seconds]
<adisbladis>
pistache: I think it's difficult with arbitrary nesting
<adisbladis>
It's very appealing to just have a single pipe over stdin/stdout
<pistache>
ok
<adisbladis>
Especially when you don't want to make assumptions about the tooling on the remote
<adisbladis>
We can basically only assume things in a default NixOS system
<adisbladis>
Otherwise you could do multiplexing over stdin/stdout
<adisbladis>
Using for example socat
<adisbladis>
But that creates a bootstrapping problem
cptMikky has joined #nixops
<pistache>
I'm using the LXD backend with LXD hosted on a non-NixOS system, so I can't even assume a default NixOS system
<adisbladis>
Interesting =)
<adisbladis>
pistache: Good to know
<adisbladis>
I'll take that into account
b42 has joined #nixops
teto has joined #nixops
<gchristensen>
oh cool, so vagrant + libvirtd is easy to set up
<gchristensen>
nice to have a working thing to compare against
<adisbladis>
gchristensen: As opposed to ?
nuncanada has joined #nixops
<gchristensen>
well, I never quite got nixops + libvirtd working
<gchristensen>
so I can compare what the two are doing