<copumpkin>
niksnut: I guess I can just use a nixpkgs github tarball link for the same effect
<niksnut>
yeah
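(For context, the tarball approach copumpkin mentions looks roughly like this; the branch name below is only an example, and an exact pin would substitute a specific nixpkgs commit hash into the URL.)

    let
      # A branch tarball works but moves over time; a real pin would put a
      # specific nixpkgs commit hash in the URL instead.
      nixpkgs = builtins.fetchTarball
        "https://github.com/NixOS/nixpkgs/archive/nixos-unstable.tar.gz";
    in
      import nixpkgs { }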
pbogdan has left #nixos-dev ["ERC (IRC client for Emacs 25.3.1)"]
pbogdan has joined #nixos-dev
seppellll has joined #nixos-dev
FRidh has quit [(Quit: Konversation terminated!)]
<mguex_>
Who here feels called on to operate a community-run binary cache (i.e. not AW$)? I started the project with fpletz, but there hasn't been much progress in the last few weeks. I don't want to take the lead myself, but I am happy to hand over the project.
mguex_ is now known as mguex
<mguex>
If you have questions, feel free to /query
<fadenb>
mguex: I believe nobody will commit to something like this without more information upfront. What I would be interested in is the amount of traffic required for such a cache
<cransom>
mguex: how would a community cache work? and do you have a proposal or rfc?
<mguex>
fadenb: sure
<mguex>
cransom: We have a machine that fetches all (binary) outputs and narinfos from cache.nixos.org and makes them available through http(s) and rsync.
<mguex>
We obtain the list of nars by querying Hydra jobsets and retain `n` evaluations (currently 9 per channel/release).
<mguex>
That amounts to ~ 350 GiB
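(As a concrete illustration of the mapping involved: Hydra can return its evaluations as JSON when queried with an Accept header, and each build's store path corresponds to a narinfo on cache.nixos.org via its 32-character hash prefix. The store path below is a placeholder.)

    # Derive the narinfo URL for a store path obtained from a Hydra evaluation.
    storePath=/nix/store/00000000000000000000000000000000-hello-2.10   # placeholder
    hash=$(basename "$storePath" | cut -c1-32)
    echo "https://cache.nixos.org/$hash.narinfo"

The narinfo in turn names the compressed .nar file that a mirror would fetch.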
<MoreTea>
this will probably become viable once we can sign per-nar in nix 1.12
<mguex>
MoreTea: It's already viable: Hydra signs the .nars, so as long as you have the pubkey in your nix.conf it does not matter where you get your nars from
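(A minimal sketch of that setup, using the option names from the Nix 1.x era; cache.nixos.community is the name proposed later in this discussion, not a live mirror.)

    # nix.conf: fetch from a mirror first, but keep verifying Hydra's signatures.
    binary-caches = https://cache.nixos.community https://cache.nixos.org
    binary-cache-public-keys = cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY=
    # Require valid signatures on substituted paths (Nix 1.x option).
    signed-binary-caches = *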
<mguex>
Traffic-wise: frankly, I have no idea; only niksnut knows what is going on on S3/AWS. The goal is to have a "core" infra with 3-5 nodes that redirect requests for "cache.nixos.community" to mirrors close to the requesting IP (this is done through the mirrorbits software)
<mguex>
fadenb: I am searching for people who are willing to operate the core infrastructure machines. As I said, these machines only distribute the traffic and do not need to serve the data themselves; that can be done by machines "downstream", e.g. in universities
<copumpkin>
niksnut: seeing a hash change with 1.12 :O
<copumpkin>
is that expected?
<copumpkin>
just recent version of 1.12 too
<copumpkin>
like I evaluate the output path of something in 1.12 before updating and it produced one value, and after updating it produced another value
Sonarpulse has joined #nixos-dev
<niksnut>
what hash?
<niksnut>
.drvs can easily change
<copumpkin>
the actual output hash, the nix-store -r
<copumpkin>
trying to figure it out because it's behaving super strangely now
<copumpkin>
basically ran a nix-build on something with 1.11, it gave me hash X. Then I ran again with nixUnstable and gave me hash X again, then I nix-channel --updated and updated unstable and it gave me hash Y
<niksnut>
something that can be reproduced easily?
<copumpkin>
well, this is what's confusing now
<copumpkin>
because after I exited the nix-shell with nixUnstable producing hash Y
<copumpkin>
now 1.11 gives me Y too
<copumpkin>
but I still have the .drv for X so I didn't just imagine it
<copumpkin>
I don't know how easy to repro it is yet
<copumpkin>
still trying to figure out what's going on
<copumpkin>
what's the easiest way to go from a list of .drvs to a list of their output paths without actually realizing them?
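(nix-store's query mode answers this without building anything; the .drv name and the drvs.txt file below are placeholders.)

    # Print the output path(s) of a single derivation without realizing it:
    nix-store --query --outputs /nix/store/<hash>-foo.drv
    # Or map a whole list of .drv paths, one per line, to their outputs:
    xargs nix-store --query --outputs < drvs.txt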
__Sander__ has quit [(Quit: Konversation terminated!)]
<clever>
copumpkin: lib.inNixShell
<clever>
copumpkin: that returns true for both nix-shell based evals, and nix-build's ran inside nix-shell
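(A small sketch of the effect clever is describing: lib.inNixShell just checks the IN_NIX_SHELL environment variable, so the same file can evaluate differently under nix-shell, or under a nix-build started from inside one.)

    with import <nixpkgs> { };

    # Which derivation is selected depends on whether IN_NIX_SHELL is set in
    # the environment of the evaluating Nix process.
    if lib.inNixShell
    then runCommand "eval-inside-shell" { } "echo in-shell > $out"
    else runCommand "eval-outside-shell" { } "echo batch > $out"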
<clever>
copumpkin: i think it was just changed from using a struct FilterFromExpr to inlining the entire thing on the new line 1025
<copumpkin>
yeah the change doesn't look too deep
<copumpkin>
hmm
<copumpkin>
:/
<vcunat>
Mic92: I was cancelling evals on Hydra due to imminent staging merge (and larger rebuilds on master that were pushed in parallel)
<vcunat>
mguex: I think I can operate such a machine, starting within a few months. It's been in my long-term plans and now I have the machine itself. That is, assuming I can get the necessary stuff into an armv7 container.
<vcunat>
I probably don't understand your distribute vs. serve distinction, but I have a 100 Mbps symmetric connection, and I think it can sustain lots of traffic.
<vcunat>
In particular, if you have an idea how to fix tcp_wrappers, that would be great, as it's a blocker for both channels. I took a few approaches already, but the link always fails, even if I explicitly add -lnsl - and libnsl.so (from ${glibc}) does list the "missing" symbol: 00000000000048a0 T yp_get_default_domain
goibhniu has quit [(Ping timeout: 268 seconds)]
<Dezgeg>
apparently it's only available as a versioned symbol: yp_get_default_domain@GLIBC_2.2.5
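(One way to check that: objdump -T prints dynamic symbols together with their version tags, which the nm-style output quoted above does not show. The library path is resolved via nix-build and may differ per nixpkgs revision.)

    # List glibc's libnsl dynamic symbols with version information; the symbol
    # should show up tagged as GLIBC_2.2.5 rather than unversioned.
    objdump -T "$(nix-build '<nixpkgs>' -A glibc --no-out-link)/lib/libnsl.so.1" \
      | grep yp_get_default_domain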
<copumpkin>
okay, somehow fetchFromGitHub started returning a path called "source" instead of the old name
<copumpkin>
anyone have any idea how that would happen? I haven't updated nixpkgs which is the only thing I can think of that would affect fetchFromGitHub's name
_ts_ has quit [(Remote host closed the connection)]
vcunat has quit [(Ping timeout: 250 seconds)]
<copumpkin>
never mind
<copumpkin>
turns out the -source thing broke everything for me :)
<copumpkin>
niksnut: thanks a lot :P
<copumpkin>
the whole "the hash didn't change but the store path did" thing is what bit me, fwiw
<copumpkin>
because when I'm building images it actually can matter
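(A sketch of why that happens: for a fixed-output fetch, the store path is computed from the derivation name as well as the content hash, so renaming the result to "source" moves the path even when the sha256 is untouched. URL and sha256 below are placeholders.)

    with import <nixpkgs> { };

    {
      # Same url, same sha256, different name => different store paths.
      old = fetchurl {
        url    = "https://example.org/foo.tar.gz";
        sha256 = "0000000000000000000000000000000000000000000000000000";
        name   = "foo.tar.gz";
      };
      new = fetchurl {
        url    = "https://example.org/foo.tar.gz";
        sha256 = "0000000000000000000000000000000000000000000000000000";
        name   = "source";
      };
    }

Evaluating old.outPath and new.outPath (e.g. with nix-instantiate --eval) shows two distinct paths without fetching anything.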
<copumpkin>
what was the final outcome of the "how best to pin nixpkgs" discussion?
<copumpkin>
at nixcon
<copumpkin>
I think someone said that Tekmo had a good solution
romildo has quit [(Quit: Leaving)]
JosW has quit [(Quit: Konversation terminated!)]
orivej_ has joined #nixos-dev
orivej has quit [(Ping timeout: 250 seconds)]
_ts_ has joined #nixos-dev
Sonarpulse has quit [(Ping timeout: 240 seconds)]
Sonarpulse has joined #nixos-dev
<MoreTea>
copumpkin: I think it was comparing the version of Nix, using builtins.fetchurl with sha256 if >= 1.12, otherwise the trick of importing stuff from nix's closure
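(A rough sketch of that shape. MoreTea mentions builtins.fetchurl; the sketch below uses builtins.fetchTarball, since a nixpkgs pin needs an unpacked tree, and it simplifies the pre-1.12 fallback to plain <nixpkgs> rather than the closure-import trick. Commit and sha256 are placeholders.)

    let
      rev    = "0000000000000000000000000000000000000000";             # placeholder commit
      sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder hash
      pinned =
        # Per the discussion above, the sha256 argument needs Nix >= 1.12,
        # so gate on the running Nix version and fall back on older ones.
        if builtins.compareVersions builtins.nixVersion "1.12" >= 0
        then builtins.fetchTarball {
               url = "https://github.com/NixOS/nixpkgs/archive/${rev}.tar.gz";
               inherit sha256;
             }
        else <nixpkgs>;
    in
      import pinned { }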