ixxie has quit [Remote host closed the connection]
ixxie has joined #nixos-kubernetes
ixxie has quit [Quit: Lost terminal]
ixxie has joined #nixos-kubernetes
<ixxie>
johanot: you mentioned you use ceph in your cluster?
<johanot>
yup :)
<ixxie>
and do you define the ceph assets inside kubernetes or on the base machines?
<ixxie>
johanot: more generally, how do you decide what to configure on the base nixos machines as a systemd service, and what to run in a container
<johanot>
generally we don't use daemonsets. that is to say: things that need to run at host level on every host, and specifically things that need privileges, we run as systemd services.
<johanot>
on the other hand: if it can run unprivileged and doesn't care where it runs, containers are preferred.
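That rule of thumb — privileged, node-level daemons as systemd units rather than DaemonSets — could be sketched in NixOS configuration like this. This is an illustrative example only: the service and package names (`node-agent`, `pkgs.node-agent`) are hypothetical, not something from the conversation.

```nix
# Hypothetical sketch: a privileged host-level agent run as a systemd
# unit via NixOS, instead of deploying it as a Kubernetes DaemonSet.
{ pkgs, ... }:
{
  systemd.services.node-agent = {
    description = "Privileged host-level agent (illustrative)";
    wantedBy = [ "multi-user.target" ];   # start at boot on every host
    serviceConfig = {
      # `pkgs.node-agent` is a placeholder for whatever package provides
      # the daemon; privileged units like this run as root by default.
      ExecStart = "${pkgs.node-agent}/bin/node-agent";
      Restart = "always";
    };
  };
}
```

Unprivileged, placement-agnostic workloads would then go to Kubernetes as ordinary Deployments instead.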
<johanot>
ceph in particular: is on dedicated hosts..
<johanot>
because: our kubernetes persistent volumes are backed by ceph. we don't want ceph to depend on kubernetes. kubernetes depends on ceph. no circular dependencies, please :)
<ixxie>
so you don't even run ceph on the kube hosts?
<ixxie>
You just have a separate cluster for ceph?
<johanot>
exactly. that's two separate clusters. we want ceph to have plenty of resources for itself, lots of mem and cpu. same for k8s itself.
<johanot>
sorry to disappoint you :D
<ixxie>
Not disappointed, just surprised
<srhb>
It certainly feels like less of a mental burden, at least in my opinion.
<srhb>
I have enough trouble disentangling kubernetes' own internal dependencies in my head :-P
<ixxie>
how's the networking done?
<johanot>
we run two separate ceph vlans, one for clients and one for recovery/sync
<johanot>
the latter being the fastest, obviously. it's 40gbit afaik
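The two-VLAN split described above (one network for clients, one for recovery/sync between the ceph hosts) maps fairly directly onto Ceph's public/cluster network settings. A hedged sketch using the NixOS `services.ceph` module — option names may differ between NixOS releases, and every address and id here is a placeholder, not from the conversation:

```nix
# Illustrative only: a dedicated Ceph host with separate client and
# recovery/sync networks. All values are placeholders.
{ ... }:
{
  services.ceph = {
    global = {
      fsid = "00000000-0000-0000-0000-000000000000";  # placeholder cluster id
      publicNetwork = "10.0.1.0/24";    # client-facing VLAN (placeholder)
      clusterNetwork = "10.0.2.0/24";   # recovery/sync VLAN (placeholder)
    };
    mon.enable = true;   # monitor daemon on this host
    osd.enable = true;   # object storage daemon on this host
  };
}
```

With `clusterNetwork` set, replication and recovery traffic stays on the dedicated (faster) VLAN while clients — here, the Kubernetes nodes — talk over `publicNetwork`.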
<ixxie>
and the latter only connecting the ceph hosts I guess
<johanot>
exactly
<ixxie>
neat
<ixxie>
I guess you mirror your ceph hosts to your kube hosts geographically?
<johanot>
we have our own datacenters. 2 of them to be precise. located on opposite sides of the same street with our own fiber optics connecting them. so not much geography involved there :D
<ixxie>
lol
<ixxie>
ok
<johanot>
but we actually got a new dedicated fiber installed, just for ceph :) before that, we didn't have 40gbit between the two sites. good thing we convinced management there. hehe
<ixxie>
sweet
<ixxie>
so the other cable is for the kubernetes network?
<johanot>
yeah, and all the other misc stuff we have running :) i'm not 100% sure how we physically route packets though. we have another team for that :P
<ixxie>
you guys should write an article about your infra and post it to HN... I think many would be interested.
<ixxie>
(I mean, I have like a thousand different questions, but I don't wanna bug ya, so I will just wait for that article :D )
<johanot>
:D maybe we should put that on the sprint then