samueldr changed the topic of #nixos-infra to: NixOS infrastructure | logs: https://logs.nix.samueldr.com/nixos-infra/
<samueldr> niksnut: the current download script breaks with the new build product added to the build https://hydra.nixos.org/build/115470810
<samueldr> $ NIX_REMOTE=https://cache.nixos.org/ nix --experimental-features nix-command cat-store '/nix/store/97i5pah4vy2x672z1kalh5g125s81gl0-nixpkgs-tarball-20.09pre218220.35b2ad79ff6/packages.json.br' > '/run/user/1000/release-nixpkgs-unstable/nixpkgs-tarball-20.09pre218220.35b2ad79ff6/nixexprs.tar.xz.tmp'
<samueldr> looking into it, but if you are (1) there and (2) have any tips or ideas, don't hesitate
<samueldr> I've made it so a build product can be chosen, as part of https://github.com/NixOS/nixos-channel-scripts/pull/35
<gchristensen> nice
<samueldr> though there's the annoying issue that this is a bit hacky since it relies on subtype, rather than a proper id
<samueldr> note that I think if a channel advance succeeds with this output, the channel will be broken
<samueldr> as the nixexprs tarball will actually be the packages.json.br file
<samueldr> (I could be wrong)
<niksnut> samueldr: note that you can download build products by type
worldofpeace has quit [Ping timeout: 240 seconds]
worldofpeace has joined #nixos-infra
<samueldr> niksnut: do you want me to change the handling in `downloadFile` so it uses `by-type` when you download by a specified type?
<samueldr> the current rework has the advantage of breaking the channel advance if a new product is added
<samueldr> while currently it simply downloads whatever product '1' is, which may not be what is expected
<samueldr> changing to download-by-type but keeping the current implicit "download the only product" behaviour would lead to two diverging code paths
<samueldr> an option, I guess, is to remove that implicit behaviour and *always* specify the type
<samueldr> one drawback of switching to this is that the downloads will go through hydra rather than through cache.nixos.org using cat-store
<niksnut> samueldr: good point, it's better to use cat-store
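(A sketch of the two download paths being weighed here, with assumed names: the download-by-type URL shape is Hydra's, but the exact job path and the file/source-dist subtype are guesses, and $product / $dstFile are hypothetical variables; the cat-store invocation mirrors the command pasted at the top of the log.)

    # Path 1 (assumed URL): let hydra.nixos.org serve the product itself,
    # selected by its declared type/subtype instead of by product number.
    my $byTypeUrl = "https://hydra.nixos.org/job/nixpkgs/trunk/tarball"
                  . "/latest-finished/download-by-type/file/source-dist";

    # Path 2 (what the script does today): resolve the product's store path
    # from the build metadata and stream it from cache.nixos.org with
    # cat-store, so the bytes never pass through the Hydra server itself.
    my $storePath = $product->{path};    # hypothetical product hashref
    system("NIX_REMOTE=https://cache.nixos.org/ "
         . "nix --experimental-features nix-command "
         . "cat-store '$storePath' > '$dstFile.tmp'") == 0
        or die "download of $storePath failed";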
<samueldr> alright, I think the PR is ready
<samueldr> and as stated, it's fine if you want me to pare it down to the bare essentials and remove the dry-run stuff
<niksnut> does it set the content-encoding for the .br objects?
<samueldr> no
<samueldr> I would need a pointer about that
<samueldr> I have no proper S3 experience
<niksnut> in S3BinaryCacheStore we set Content-Encoding to "br"
<niksnut> that way browsers will decompress it on the fly
<niksnut> in C++ it's request.SetContentEncoding(contentEncoding);
<niksnut> I don't know how it's done in perl
<niksnut> maybe just $bucket->add_key(..., { content_type => "application/json", content_encoding => "br" })
<samueldr> I was about to ask
<niksnut> which explicitly mentions content_encoding
<samueldr> could it be the encoding for the upload?
<samueldr> I don't know if that's even something HTTP supports, e.g. PUTting a json file but encoding the request body as brotli
<samueldr> looking at the docs, that guess is likely wrong
<samueldr> add_key_filename and add_key both upload with a configuration HASHREF used for headers
<niksnut> if it corresponds to the S3 API docs, it's the encoding for the download
<niksnut> or to be precise, it's what the client needs to do to decode, S3 doesn't care
<niksnut> which suggests it is actually sent as a normal header in the PUT request
<niksnut> btw you can verify using curl, e.g. curl -v https://cache.nixos.org/log/76qqj1fw0q32dk5bayznrzl1yvznvsv4-coreutils-8.31.drv shows Content-Encoding: br
<samueldr> I'm not comfortable switching to `put`, since that would need an `Amazon::S3::Object` to work with rather than an `Amazon::S3::Bucket`, and I can't confidently make the change because I'm unable to validate it
<samueldr> this isn't tested either, but this change feels less intrusive
<niksnut> I can test it
<samueldr> I'm also not used to S3 stuff and don't understand the details, so I'm not comfortable refactoring to use put without wasting both our time :)
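(A minimal sketch of the "less intrusive" change under discussion: passing the extra header through the configuration HASHREF that add_key_filename already accepts. This assumes the Amazon::S3 Perl module whose Bucket/Object classes are named above; the bucket name, object key, and local path are placeholders.)

    use Amazon::S3;

    my $s3 = Amazon::S3->new({
        aws_access_key_id     => $ENV{AWS_ACCESS_KEY_ID},
        aws_secret_access_key => $ENV{AWS_SECRET_ACCESS_KEY},
    });
    my $bucket = $s3->bucket("nix-releases");    # placeholder bucket name

    # The HASHREF entries are sent as headers on the PUT; S3 stores
    # Content-Encoding with the object and replays it on GET, which is what
    # lets browsers decompress the .br payload on the fly.
    $bucket->add_key_filename(
        "release/packages.json.br",     # placeholder object key
        "packages.json.br",             # placeholder local file
        { content_type => "application/json", content_encoding => "br" })
        or die $s3->err . ": " . $s3->errstr;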
<niksnut> hm, looks like we can't test this atm because none of the jobsets are finished
<samueldr> yep
<samueldr> the dry-runnable parts have been validated for nixpkgs
<samueldr> (by using /latest instead of /latest-finished)
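(For context, assuming Hydra's usual URL semantics: /latest resolves to the newest successful build of a job even while the rest of its evaluation is still running, whereas /latest-finished only resolves once the whole evaluation has finished, which is why swapping it in allows a dry run before any jobset completes. The job path below is assumed for illustration:)

    https://hydra.nixos.org/job/nixpkgs/trunk/tarball/latest
    https://hydra.nixos.org/job/nixpkgs/trunk/tarball/latest-finished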
<niksnut> ah, I see the problem
<niksnut> we don't have any x86_64-linux big-parallel machines
<niksnut> so if I do: hydra-queue-runner --build-one 115471337 -vv
<niksnut> it prints
<niksnut> step ‘/nix/store/02mh4m4n5rb29alzr4r0la8hwjl82h5f-linux-5.4.27.drv’ is now runnable
<niksnut> and gets stuck
<niksnut> gchristensen: do you know what happened with big-parallel?
<gchristensen> oh interesting
<gchristensen> niksnut: will look in the next 15 minutes
<niksnut> thanks
<gchristensen> I accidentally took another 30 minute nap, and had to clear the cobwebs :)
<garbas> gchristensen: I also took a quick nap in the living room. big mistake! kids moved from paper to my face. took me some time to clean it :)
<gchristensen> haha
<gchristensen> my dog seemed to be trying to put a chew toy in my mouth
<garbas> :)
garbas has quit [Ping timeout: 250 seconds]
garbas has joined #nixos-infra
<gchristensen> niksnut: the problem appears to be from a rushed scale-down when a Packet client had to scale up very quickly
<gchristensen> I'm replacing the destroyed machines
<niksnut> +1
aminechikhaoui has joined #nixos-infra