<tilpner>
Yay, another #nixos- channel! I'm going to run out of buffers eventually...
<gchristensen>
hehe
<tilpner>
Did you/Are you going to reroute the alerts here?
<gchristensen>
probably should yeah
<tilpner>
They could be a little more verbose... I've been meaning to do that for my own alerts
<gchristensen>
I don't know how :P
<gchristensen>
want to figure it out and send a PR? :D
<tilpner>
There's no trick to figure out, you just... rewrite the formatting part of the script that passes AM messages to your message thing
<tilpner>
._.
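There is no single script to quote here, but the shape of what tilpner describes is a small webhook receiver that reformats Alertmanager's JSON payload before forwarding it. A minimal sketch in Python, assuming Flask, with post_to_channel() as a hypothetical stand-in for the actual delivery mechanism:

```python
# Sketch: a more verbose Alertmanager -> chat relay.
from flask import Flask, request

app = Flask(__name__)

def post_to_channel(text):
    # Hypothetical delivery function; replace with the real channel bridge.
    print(text)

@app.route("/alert", methods=["POST"])
def alert():
    payload = request.get_json(force=True)
    # Alertmanager's webhook payload carries a list of alerts, each with
    # status, labels, annotations, and timestamps.
    for a in payload.get("alerts", []):
        labels = a.get("labels", {})
        annotations = a.get("annotations", {})
        post_to_channel(
            "[{status}] {name} on {instance}: {summary} (since {since})".format(
                status=a.get("status", "unknown").upper(),
                name=labels.get("alertname", "?"),
                instance=labels.get("instance", "?"),
                summary=annotations.get("summary", annotations.get("description", "")),
                since=a.get("startsAt", "?"),
            )
        )
    return "", 200

if __name__ == "__main__":
    app.run(port=9119)
```

Alertmanager would then point at this endpoint through a webhook_config in the relevant receiver.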
<gchristensen>
yeah but ... I have other things on my list :x
<tilpner>
If I ever do that for my own server, I'll make sure you get the changes. But I probably won't for a while
<tilpner>
I just don't like how AM doesn't keep a log of alerts
<gchristensen>
thanks :)
<garbas>
niksnut: ok, here is what i figured out so far. most likely the problem is not with packages|options.json and the lack of gzip.
<garbas>
the default netlify cache-control header is "max-age=0, must-revalidate, public"
<garbas>
which means all content will be cached but also re-validated on every request. the idea behind this is that it tries to serve content instantly.
<garbas>
and the same applies to all the static assets.
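For reference, the behaviour garbas describes is easy to verify from outside; a minimal sketch using Python's requests, with the URL path purely illustrative:

```python
# Inspect the caching headers Netlify actually serves for an asset.
# The path below is illustrative; any static asset on the site works.
import requests

resp = requests.head("https://nixos.org/packages.json", allow_redirects=True)
print(resp.headers.get("Cache-Control"))    # per the above: "max-age=0, must-revalidate, public"
print(resp.headers.get("Content-Encoding")) # shows whether the response was compressed
```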
<samueldr>
nixos-homepage#353?
<garbas>
i haven't found a clear answer, but i think internal CDN traffic is also counted against this 100GB. a few people noted on their discourse that the reported traffic came down after a month of usage.
<garbas>
samueldr: there was an email from netlify saying we've already used 50% of the traffic
<samueldr>
oof
<gchristensen>
I've talked to Bonsai and they're interested in providing free elasticsearch hosting for us
<niksnut>
garbas: okay, so we should set max-age to a higher value then?
<garbas>
niksnut: one solution would be to put cloudflare in front of it
<tilpner>
So just fixing those two paths may not be enough
<gchristensen>
it seems to me that is a fine idea, but a proper search would be better, no?
<garbas>
gchristensen: of course. this is actually a first step towards a proper search
<gchristensen>
ah
<garbas>
then we ingest the json files into elasticsearch and code up the front end
<garbas>
but it also solves the problem in the short run, at least a bit
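A rough sketch of that ingestion step, assuming the Python `elasticsearch` client and a packages.json shaped like {"packages": {"<attr>": {"name": ..., "meta": {...}}}} (field names here are illustrative, not the final schema):

```python
# Sketch: bulk-index packages.json into a local Elasticsearch node.
import json
from elasticsearch import Elasticsearch
from elasticsearch.helpers import bulk

es = Elasticsearch("http://localhost:9200")

with open("packages.json") as f:
    data = json.load(f)

def docs():
    # One document per package attribute path.
    for attr, pkg in data.get("packages", {}).items():
        meta = pkg.get("meta", {})
        yield {
            "_index": "nixos-packages",
            "_id": attr,
            "attr": attr,
            "name": pkg.get("name"),
            "description": meta.get("description"),
            "homepage": meta.get("homepage"),
        }

ok, errors = bulk(es, docs(), raise_on_error=False)
print(f"indexed {ok} documents, {len(errors)} errors")
```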
<garbas>
tilpner: for the css and other css libraries i will try to pull them from another cdn, to also reduce bandwidth on netlify
<tilpner>
Huh?
<tilpner>
What's the point of using netlify then?
<garbas>
i should have said that i would first wait to see the impact of the packages.json change
<tilpner>
Please don't make the user connect to >2 CDNs to load the package search
<garbas>
packages.json and options.json will be on s3 ... and the website on netlify.
<garbas>
tilpner: if you can find a nice static page service that does previews, supports letsencrypt, and has a big enough free account, we can consider moving to it.
<garbas>
tilpner: now that we've decoupled it from the other infrastructure, the move should be easy
<gchristensen>
what CDNs are you proposing we use? that is what I'm confused about
<samueldr>
I don't think the CSS is actually hampered by slowness; we don't have enough data from their network tab screenshot to know what's up with the 1.4 minutes there
<garbas>
niksnut: in summary: we'll move packages.json/options.json to releases.nixos.org, and then we don't have to serve them from nixos.org (netlify). i think it should have been done this way from the start anyway
<gchristensen>
it'd be nice to store them with the channel they came from
<samueldr>
that's what will happen
<samueldr>
the data is going to be as fresh as the channel, and will update at the same time
<samueldr>
rather than on a cron
<gchristensen>
exactly :)
<samueldr>
this also removes one of the last bits requiring cron
<gchristensen>
I should have said, "it'll"
<samueldr>
the last one will be planet
<garbas>
+1
<garbas>
niksnut: we might need to set up (if not already) a cloudfront distribution to compress automatically
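The automatic compression in question is the Compress flag on a CloudFront cache behavior; a hedged boto3 sketch, with the distribution ID as a placeholder:

```python
# Sketch: turn on CloudFront's on-the-fly compression for a distribution.
import boto3

DIST_ID = "EXXXXXXXXXXXXX"  # placeholder distribution ID
cf = boto3.client("cloudfront")

resp = cf.get_distribution_config(Id=DIST_ID)
config, etag = resp["DistributionConfig"], resp["ETag"]

# CloudFront only compresses responses when this flag is set on the behavior.
config["DefaultCacheBehavior"]["Compress"] = True

cf.update_distribution(Id=DIST_ID, DistributionConfig=config, IfMatch=etag)
```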
<samueldr>
we have two choices I guess
<samueldr>
pre-gzipping and not
<garbas>
if the service can handle it, i would use the service.
<samueldr>
unless compressing on the fly causes issues, like netlify seemingly has
<gchristensen>
it is a pretty big difference, why wouldn't we gzip ahead of time?
<samueldr>
I was going to say, the size of the data is non-negligible
<samueldr>
20x reduction iirc
<garbas>
anybody else on webmaster@nixos.org who can check if an email from bonsai.io arrived?
<niksnut>
including packages.json/options.json in the release sounds good
<niksnut>
btw instead of gzip we could use brotli
<niksnut>
that's what we also use for the build logs in cache.nixos.org
<niksnut>
it just requires setting the Content-Encoding on the S3 object
<niksnut>
quick test: packages-nixos-19.09.json with gzip is 3974377 bytes, with brotli -9 is 2563249 bytes
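A sketch of the approach niksnut describes, pre-compressing with brotli and setting Content-Encoding at upload time (bucket and key are placeholders; assumes the `brotli` and `boto3` Python packages):

```python
# Sketch: brotli-compress a JSON file and upload it to S3 so that
# clients receive it with Content-Encoding: br and decompress transparently.
import boto3
import brotli

with open("packages-nixos-19.09.json", "rb") as f:
    raw = f.read()

compressed = brotli.compress(raw, quality=9)  # matches the brotli -9 test above

s3 = boto3.client("s3")
s3.put_object(
    Bucket="nixos-releases",            # placeholder bucket name
    Key="packages-nixos-19.09.json",    # placeholder key
    Body=compressed,
    ContentType="application/json",
    ContentEncoding="br",               # the header niksnut refers to
)
```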
<samueldr>
sounds good to me
<niksnut>
and for reference, zstd -19 is 2567286 bytes and much slower