njha[m] has joined #nixos-kubernetes
edef has joined #nixos-kubernetes
<edef> oh, wicked
hexa- has joined #nixos-kubernetes
<edef> was not previously aware this existed
puck has joined #nixos-kubernetes
q3k has joined #nixos-kubernetes
q3k|m has joined #nixos-kubernetes
q3k has left #nixos-kubernetes ["WeeChat 2.7"]
<q3k|m> edef: is your k8s-on-nixos machinery available somewhere publicly?
<edef> so far it's just the plumbing to let kubeadm run
<q3k|m> ah, you actually run kubeadm on nixos? interesting
<edef> i expect to evolve away from kubeadm in the long run, but the NixOS kubernetes module doesn't support other runtimes than docker right now
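[editor's note: the "plumbing to let kubeadm run" might look roughly like the following NixOS sketch. This is not edef's actual config; the package names come from nixpkgs, but which options are needed varies with the kubeadm and NixOS release.]

```nix
# Hypothetical NixOS fragment: just enough host plumbing for kubeadm.
# Illustrative only; option paths and preflight requirements differ
# across nixpkgs releases.
{ pkgs, ... }:
{
  # The NixOS kubernetes module currently assumes docker anyway.
  virtualisation.docker.enable = true;

  # kubeadm expects kubelet, kubectl, and CNI binaries on the host.
  environment.systemPackages = with pkgs; [
    kubernetes    # provides kubeadm, kubelet, kubectl
    cni-plugins
  ];

  # kubeadm's preflight checks want forwarding enabled and no swap.
  boot.kernel.sysctl."net.ipv4.ip_forward" = 1;
  swapDevices = [ ];

  # API server port for a single-control-plane setup.
  networking.firewall.allowedTCPPorts = [ 6443 ];
}
```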
<q3k|m> yeah, we've already forked kubelet for other reasons
<q3k|m> (fairly hardcoded reliance on flannel and/or other host-side networking runtimes, vs the typical hosted CNI shared directory setup)
<edef> mmm
<edef> we have our own networking layer, so similar concerns for us
<q3k|m> although the CNI shared dir setup is also janky, as it basically relies on hopes and prayers and on the CNI binaries being truly static
<edef> yeah, i don't super like that
<q3k|m> and I'm sure that will bite us in the ass at some point
<q3k|m> i'd be open to specifying the networking layer in nixos (as it kinda co-exists with other nixos configuration tunables)
<simpson> There's a `flannel.enable` configuration option, FWIW.
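[editor's note: the option simpson refers to lives under the NixOS kubernetes module; a sketch of its use — exact option paths may vary between nixpkgs releases:]

```nix
# Sketch: the flannel toggle inside the NixOS kubernetes module.
# Disabling it is what the discussion below is about.
{
  services.kubernetes = {
    roles = [ "master" "node" ];
    flannel.enable = false;  # opt out of the bundled flannel setup
  };
}
```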
<q3k|m> but i also haven't gotten around to that, and it breaks our other assumption that everything that lives on the cluster is indeed configured from jsonnet, and not just spawned by the kubelets
<edef> simpson: i don't run Flannel
<q3k|m> simpson: yeah but that's not enough
<q3k|m> simpson: like even if you disable flannel
<q3k|m> i don't remember the exact issue
<simpson> Sure, just trying to see what doesn't need to be done, so that it's easier to see what does need to be done.
<q3k|m> i think it should be possible to plug in another networking layer there instead
<q3k|m> if it were also defined as a nixos module that pokes the same systemd targets
<edef> overall, the feeling i got from the existing kubernetes module was that i'd be taking on a lot of maintenance burden relative to just running kubeadm
<q3k|m> i don't trust kubeadm enough so i ended up taking that responsibility
<q3k|m> although in the end it wasn't that much
<q3k|m> and the benefits of having that also controlled by nix outweigh the effectively minimal work
<q3k|m> it seems like https://github.com/NixOS/nixpkgs/pull/67563 removed the forced systemd sequencing, which should also make things easier
<puck> <q3k|m> although the CNI shared dir setup is also janky, as it basically relies on hopes and prayers and on the CNI binaries being truly static <- cri-o has a settings file that can set the CNI path, my plan is to just require everything to be in the nix store here
<puck> i should upstream that change, prolly..
<q3k|m> i think you can set the CNI path easily regardless, it's more about moving the build of the networking layer to nixos (vs. keeping it as a part of cluster-level kubernetes manifests) that's problematic for me
<puck> yeah..
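[editor's note: a sketch of puck's idea — pointing cri-o's CNI plugin search path at the Nix store. The `[crio.network]` keys are cri-o's own; delivering the file via `environment.etc` and using `pkgs.cni-plugins` here are illustrative choices, not puck's actual change:]

```nix
# Sketch: make cri-o load CNI plugins from the Nix store instead of
# the conventional /opt/cni/bin, via a crio.conf.d drop-in.
{ pkgs, ... }:
{
  environment.etc."crio/crio.conf.d/10-cni.conf".text = ''
    [crio.network]
    network_dir = "/etc/cni/net.d/"
    plugin_dirs = [ "${pkgs.cni-plugins}/bin" ]
  '';
}
```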
<puck> i mean, the flannel CNI module is part of upstream cni plugins for some reason
<q3k|m> i'd also have to move calico CRD data into nix for that to make sense
<q3k|m> and the CRD definitions themselves, etc (basically rewrite https://cs.hackerspace.pl/hscloud/-/blob/cluster/kube/lib/calico.libsonnet to nix)
<q3k|m> since it then makes sense to version that together with the calico CNI plugin version
<puck> egh, yeah. calico has its own plugin
<puck> and most of the golang networking is by default cgo, so not static
<q3k|m> yeah calico is thankfully fully static, at least the CNI plugin is
<q3k|m> cilium would probably not work, though
<q3k|m> although that part might also be static
<q3k|m> i think in most cases the CNI plugins seem to talk to node agents anyway (running in a privileged, hostNetworking container), not really implement anything themselves
<q3k|m> so luckily that builds without cgo
<puck> oh, i think it only talks netlink, it just never talks to the resolver
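[editor's note: the static-vs-cgo point above comes down to building the Go plugin with cgo disabled. A sketch in nixpkgs terms — the package name, hashes, and source are placeholders, not a real plugin:]

```nix
# Sketch: a cgo-free (and therefore statically linked) build of a
# hypothetical CNI plugin with buildGoModule. All names and hashes
# below are placeholders.
{ buildGoModule, fetchFromGitHub }:

buildGoModule {
  pname = "some-cni-plugin";   # placeholder
  version = "0.0.1";
  src = fetchFromGitHub {
    owner = "example";
    repo = "some-cni-plugin";
    rev = "v0.0.1";
    hash = "sha256-...";       # placeholder
  };
  vendorHash = null;           # placeholder

  # The part that matters for this discussion: with cgo off, the
  # binary has no dynamic libc dependency and survives being copied
  # into a CNI shared directory.
  CGO_ENABLED = "0";
  ldflags = [ "-s" "-w" ];
}
```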
puck has quit [*.net *.split]
puck has joined #nixos-kubernetes