<thoughtpolice>
Yeah, qemu for example compresses to about 80MB
<thoughtpolice>
but the mono one works out that way because it's the source tarball, which is already bzip2-compressed, so there's zero gain from further compression. Good to keep in mind
drakonis has quit [Quit: WeeChat 2.4]
<thoughtpolice>
Well, good news: I seem to have improved the time-to-first-byte for a 250mb .nar.xz file served from the cache from ~3-5s down to < 1s, in the event the cache is cold (i.e. the nar file isn't already cached). That's great for cold hits.
<thoughtpolice>
gchristensen: ^
drakonis has joined #nixos-dev
drakonis1 has quit [Ping timeout: 246 seconds]
FRidh has joined #nixos-dev
sphalerite has joined #nixos-dev
orivej has joined #nixos-dev
vcunat has joined #nixos-dev
<vcunat>
gchristensen: I'm not sure if it's a temporary state, but some machines are missing from the current Hydra, e.g. I can't see any of my machines (t4a, t4b, t2a) - in particular, without the last one there's no machine allowed to run those metrics jobs.
<vcunat>
I believe I saw these in there some 24h ago.
<niksnut>
hm, in fact all machines in /etc/nix/machines are missing
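For context, Hydra's /etc/nix/machines is normally generated from nix.buildMachines entries along these lines; this is only a sketch, with an illustrative host name and feature list rather than the real builders (jobs such as the metrics runs are gated on a supported system feature):

```nix
{
  # Entries here are rendered into /etc/nix/machines, which Hydra's
  # queue runner reads. Host name and features are illustrative.
  nix.buildMachines = [
    {
      hostName = "t2a.example.org";
      system = "x86_64-linux";
      maxJobs = 8;
      speedFactor = 2;
      # Feature-gated jobs (e.g. ones declaring requiredSystemFeatures)
      # only go to machines that advertise the matching feature.
      supportedFeatures = [ "benchmark" "big-parallel" ];
    }
  ];
  nix.distributedBuilds = true;
}
```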
<thoughtpolice>
Here is a thing: https://aseipp-nix-cache.global.ssl.fastly.net -- please do not abuse it too hard. But eventually the basic skeleton will probably replace the existing cache entirely!
<thoughtpolice>
I'll post something on the forums maybe but the TL;DR is it serves the actual real upstream cache, not a mirror, so you can just replace your substituters line, I think.
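A minimal sketch of that substituter swap in NixOS configuration terms, assuming the stock cache.nixos.org signing key still applies because the endpoint fronts the real upstream cache (the option names shown are the older ones; newer releases use nix.settings.substituters / nix.settings.trusted-public-keys):

```nix
{
  # Swap the default substituter for the Fastly-fronted endpoint.
  nix.binaryCaches = [ "https://aseipp-nix-cache.global.ssl.fastly.net" ];

  # The existing upstream key should still verify, since the endpoint
  # serves the real cache.nixos.org contents rather than a re-signed mirror.
  nix.binaryCachePublicKeys = [
    "cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
  ];
}
```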
<vcunat>
thoughtpolice: BTW, do we have a good "support contact" to Fastly?
<thoughtpolice>
vcunat: Take a look at my GitHub bio :)
<vcunat>
Oh, right :-) There's one thing that's been bothering me around their/your CDN - that DNS resolvers without IPv4 access won't be able to resolve their names (even fastly.com).
<thoughtpolice>
Well, actually, there are a few related issues there that we were talking about
<vcunat>
Apparently IPv4-less setups are becoming more common (VPS mainly, I guess).
<vcunat>
What I keep running into is that the com -> fastly.com delegation has no IPv6 glue.
<thoughtpolice>
IPv6 should work on the current domain, it's using the dualstack domain. My mirror does NOT use the dualstack domain.
<thoughtpolice>
So it's IPv4 only. The thing is, I think we have some current limitations that might be screwing people up a bit due to bad IPv6 router implementations.
<thoughtpolice>
(To be clear: cache.nixos.org has A & AAAA records. My service above, which is not a "mirror" but fully backed by the real cache, only has A records)
<vcunat>
Well, it's each customer's choice which domain they use, but I had something "global" in mind.
<vcunat>
It gets deep into the details of how a DNS resolver works... I suppose this channel isn't a good place :-)
<vcunat>
So I'm mainly asking to whom I'd write the details.
<thoughtpolice>
*Shrug* That I wouldn't know about honestly; I work on the Varnish team, not DNS. If you could write up the specifics (email me), I could maybe ask around
<vcunat>
Right... perhaps I'll write directly to https://github.com/edmonds - we (cz.nic) have actually some ties to DNS@Fastly.
<thoughtpolice>
So, for cache.nixos.org, the IPv6 thing is kind of weird. However, we *can* expose both IPv4 and IPv6 domains. My domain can also be accessed with the dual stack
<thoughtpolice>
But I want to see if people are actually suffering horribly from "shitware router issues"; there was another issue about a Nix retry bug I was looking at earlier...
<thoughtpolice>
Now, ideally we could do HTTP2 + IPv4 only, and a separate HTTP2 + IPv6 enabled setup as well, but I don't know if that combo exists(?)
<vcunat>
Perhaps we could expose some easily configurable trigger, e.g. cache-ipv4.nixos.org or something (+perhaps some easy NixOS integration).
<vcunat>
And document that if you have this kind of problem, you should flip this switch.
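As a purely hypothetical sketch of that suggestion in NixOS module form (neither the option name nor cache-ipv4.nixos.org exists; both are illustrative, taken only from the suggestion above):

```nix
{ config, lib, ... }:
{
  # Hypothetical switch for hosts behind broken IPv6 routers/resolvers.
  options.nix.useIPv4OnlyBinaryCache =
    lib.mkEnableOption "an IPv4-only binary cache endpoint";

  config = lib.mkIf config.nix.useIPv4OnlyBinaryCache {
    nix.binaryCaches = lib.mkForce [ "https://cache-ipv4.nixos.org" ];
  };
}
```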
<thoughtpolice>
Dual domains should be doable, yes.
<vcunat>
Though I do hate to add workarounds for non-conformance to decades-old IETF standards, if this is the case.
orivej has quit [Ping timeout: 244 seconds]
orivej has joined #nixos-dev
orivej has quit [Ping timeout: 244 seconds]
vcunat has quit [Quit: Leaving.]
__monty__ has joined #nixos-dev
orivej has joined #nixos-dev
orivej has quit [Ping timeout: 258 seconds]
<gchristensen>
niksnut: what was happening to make the machines list wrong?
orivej has joined #nixos-dev
<niksnut>
gchristensen: I don't know what changed, but the line services.hydra-dev.buildMachinesFiles in packet-importer.nix overrides the default
<gchristensen>
maybe something was hand-jammed on chef to make it work before :)
orivej has quit [Ping timeout: 258 seconds]
<gchristensen>
niksnut: oh I found the problem
<gchristensen>
./provisioner.nix was on chef's config, but wasn't added to ceres'
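A minimal sketch of the import difference being described, assuming the surrounding file layout; only the two file names come from the chat, everything else is illustrative:

```nix
# Sketch of the relevant part of ceres' configuration; the actual files
# live in nixos-org-configurations, and only the import difference is
# what the conversation above establishes.
{
  imports = [
    ./packet-importer.nix   # sets services.hydra-dev.buildMachinesFiles,
                            # overriding the default of [ "/etc/nix/machines" ]
    ./provisioner.nix       # present in chef's config, missing on ceres
  ];
}
```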
orivej has joined #nixos-dev
justanotheruser has quit [Ping timeout: 246 seconds]
justanotheruser has joined #nixos-dev
__monty__ has quit [Quit: leaving]
justanotheruser has quit [Ping timeout: 248 seconds]
<arianvp>
is there any mechanism in the docs to let people know what other modules a module calls into?
<arianvp>
use case: I want to let people know that if they set nginx.enableACME = true; then 1) a security.acme.<name> attribute is created and 2) because of that, systemd.services.<name> and systemd.timers.<name> attributes are created
<arianvp>
like a sort of exploratory way of explaining at a high level what the module does
<arianvp>
without manually writing this
<arianvp>
I notice that I often want to know this info when reading module docs. Would this be automatable?
<arianvp>
some kind of "document unfolding" ?
<arianvp>
documentation*
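For readers following along, the chain being described looks roughly like this; the virtual host name is made up, and the internal attribute paths are paraphrased from the modules rather than quoted from them:

```nix
{
  # 1) The user-facing knob on an nginx virtual host:
  services.nginx.virtualHosts."example.org".enableACME = true;

  # 2) The nginx module responds by defining (roughly)
  #      security.acme.certs."example.org" = { ... };
  # 3) which the ACME module in turn expands into (roughly)
  #      systemd.services."acme-example.org" and
  #      systemd.timers."acme-example.org".
  # The question above is whether the fan-out in 2) and 3) could be
  # surfaced automatically in the generated module documentation.
}
```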
drakonis_ has joined #nixos-dev
drakonis has quit [Ping timeout: 246 seconds]
<samueldr>
I think it should be possible to do something already... almost
<thoughtpolice>
adisbladis: Been meaning to test on your server but if you want to try it, by all means, go ahead. Both 404 latency and "cold starts" should be much better now on top of what we did earlier, too. :)
<samueldr>
thoughtpolice++
<{^_^}>
thoughtpolice's karma got increased to 11
<samueldr>
thoughtpolice: do you have issues if I move that under the Maintainers namespace?
<thoughtpolice>
No, I don't really know what's on the wiki anyway