ekleog changed the topic of #nixos-dev to: NixOS Development (#nixos for questions) | https://hydra.nixos.org/jobset/nixos/trunk-combined https://channels.nix.gsc.io/graph.html https://r13y.com | 18.09 release managers: vcunat and samueldr | https://logs.nix.samueldr.com/nixos-dev
<infinisil> Something something IPFS
<clever> infinisil: I've looked into that before, and the biggest problem is the narinfo files: you have the hash of the build directions, not the hash of the result
<clever> infinisil: so you can't just rely on ipfs's key/value store being hash(value) = value
<infinisil> Yeah, you'd still need a lookup from <drv hash> -> <build content hash> in nixos.org
<infinisil> Which actually also gets you the trust base
<clever> my basic idea, is to just shove the ipfs hash of the .nar.xz, into the .narinfo
<clever> but dont actually upload any objects to ipfs
<clever> that would be a very trivial patch to add to hydra
<infinisil> Ah yeah that would work
<clever> then other people can upload .nar.xz files to their local ipfs node, as they feel like it
<infinisil> Yeah that's a pretty nice idea, nix should then get an option useIPFSWhenAvailable or so
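A minimal sketch of clever's proposal above: annotate an existing .narinfo with the IPFS hash of its .nar.xz without uploading anything to IPFS. The `IPFSHash` field name is hypothetical, chosen only for illustration; no such field exists in the narinfo format today.

```python
# Sketch: append a hypothetical IPFSHash field to a narinfo document.
# Hydra would compute the hash at upload time; nothing is pushed to IPFS.

def add_ipfs_hash(narinfo_text: str, ipfs_hash: str) -> str:
    """Return the narinfo text with a hypothetical IPFSHash field appended."""
    lines = narinfo_text.rstrip("\n").split("\n")
    lines.append(f"IPFSHash: {ipfs_hash}")
    return "\n".join(lines) + "\n"

example = "StorePath: /nix/store/abc-hello\nURL: nar/xyz.nar.xz\n"
print(add_ipfs_hash(example, "QmPlaceholderHash"))
```

Other users could then pin the same .nar.xz on their own IPFS nodes, and clients with a `useIPFSWhenAvailable`-style option would know which hash to look up.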
<infinisil> I feel like it's almost worth learning C++ just to help with Nix development
<clever> join the impure!
<clever> join us!
<LnL> I thought people already implemented parts of that
<LnL> the way metadata propagates just doesn't scale for the amount of files that get generated
pie__ has quit [Ping timeout: 244 seconds]
<infinisil> LnL: Not sure I get what you're saying
<LnL> don't know the details, but there is (or was) some kind of performance problem
<{^_^}> nix#1167 (by mguentner, 2 years ago, closed): RFC: Add IPFS to Nix
ma27 has quit [Quit: WeeChat 2.4]
drakonis has quit [Quit: WeeChat 2.3]
drakonis has joined #nixos-dev
<gchristensen> basically you can't write to IPFS fast enough
<clever> my idea, is that nixos.org never writes to ipfs, it just computes the hash of each nar as it uploads to s3
<clever> and then normal users, can re-host what is already in /nix/store/ on ipfs, at the same hashes
<gchristensen> that is very interesting
<clever> which is why i wrote https://github.com/taktoa/narfuse
<clever> this is a fuse layer, that lets you mount /foo to /nix/store/, and then create /foo/p3bky8aczn8fn2145shmnfc76r2k7hi4-libical-3.0.4-dev.nar
<clever> and it will unpack the nar transparently
<gchristensen> can ipfs issue 404s rapidly?
<clever> when running an ipfs daemon, you publish the hash of every object you are hosting, to a DHT
<clever> so 404's happen via DHT lookup
<gchristensen> sounds a bit slow :?
<clever> you could cheat, and download from cache.nixos.org, at the same time as ipfs
<clever> and then if ipfs gains speed, cancel the cache.nixos.org one
<clever> or, you could torrent-style it
<clever> fetch chunks, from both, in parallel
<clever> and if one is faster, it just winds up giving you more chunks
<gchristensen> last I looked the cost of 404s and negotiating peers was prohibitive, but maybe it could be done
<clever> ipfs uses a merkle tree to define files, so you have the hashes for many chunks of the file
<clever> so you can download pieces from ipfs, while using range requests over http to download same-sized pieces
<clever> and then if ipfs fails, just keep downloading chunks from cache.nixos.org
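A minimal sketch of the torrent-style chunking described above: split a file of known size into fixed-size byte ranges suitable for HTTP Range requests, so each chunk could be fetched from either cache.nixos.org or IPFS and reassembled in order. The chunk size here is an arbitrary choice for illustration.

```python
# Sketch: compute the inclusive (start, end) byte ranges covering a file,
# so chunks can be requested in parallel from multiple sources and the
# faster source simply ends up serving more of them.

def chunk_ranges(file_size: int, chunk_size: int = 1 << 20):
    """Return inclusive (start, end) byte ranges covering file_size bytes."""
    ranges = []
    start = 0
    while start < file_size:
        end = min(start + chunk_size, file_size) - 1
        ranges.append((start, end))
        start = end + 1
    return ranges

# The second 1 MiB chunk of a 5 MiB nar would be requested as the
# HTTP header "Range: bytes=1048576-2097151".
```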
<clever> oh, and i heard that S3 has a 1mb/sec throttle, and encourages you to download chunks in parallel
<clever> load balancers behind the scenes, will then spread that over many servers
<clever> and potentially go faster than a single link could
<clever> oh, i just had another idea
<clever> this provides a binary cache api, and behind the scenes, it connects to many upstream caches
<clever> and itself, will cache things locally
<clever> that could be modified, to download things off ipfs, and then present a "dumb" http interface to nix-daemon
<clever> so you could test things out, without editing nix too much
<clever> but to do any kind of large-scale testing, you would first need ipfs hashes of every nar, in the narinfo files
<gchristensen> S3 doesn't have that kind of limiter
<clever> *looks*
<clever> gchristensen: i know ive seen it recently, but the slack msg i'm finding is from march of last year...
<gchristensen> it isn't true :P
<clever> gchristensen: it was in a recent convo about cache-s3
<gchristensen> I can fetch from the Nix cache's S3 bucket at 90MiB/s
<clever> gchristensen: ah, found it, let me link you the msg...
<clever> gchristensen: https://i.imgur.com/l2kcJys.png
<{^_^}> fpco/cache-s3#20 (by aledeganopix4d, 2 weeks ago, open): The default upload/download method is much slower than S3 cp
<clever> gchristensen: what command did you use to reach 90?
<gchristensen> curl
<clever> 20M on this end
<clever> which is 160mbit i believe
<clever> spiked to 54M at the end
<clever> much closer to my real limit
<clever> gchristensen: but...
<clever> gchristensen: https://i.imgur.com/ENnq6pJ.png also happens
<gchristensen> (it is very close to my time to go to sleep)
<gchristensen> maybe cache-s3 is not very efficient :)
<clever> `We have noted that the uploads/downloads of the caches, done by cache-s3, are much slower than what we would get with a simple aws s3 cp, often by almost a factor of 8/10`
<gchristensen> I saw that
<gchristensen> I dunno
<gchristensen> all i can say is I have never seen any evidence of S3 artificially limiting upload / download rates
<gchristensen> and yes, chunking can help
<clever> i just had an even more crazy idea...
<clever> make a single client, that can do http, ipfs, and torrent, all at once
<clever> grab bytes from all 3, in parallel....
<clever> and re-assemble it, as it goes!
<gchristensen> torrents are so slow to establish peers
<gchristensen> the current cache model actually makes a lot of sense for our use case
<clever> http gives you the initial burst of speed
<clever> then transition to ipfs or torrent, as you gain peers
<clever> although, it depends on the file size
<clever> for the small things, it wouldnt really benefit from torrents
<gchristensen> ok I'm out of here for the night :)
<gchristensen> g'night!
<clever> torrent also needs hashes of chunks, that are powers of 2 in size
<clever> so you would need to specially hash the ipfs stuff, with torrent in mind
<clever> and then abuse the ipfs hashes to find the torrent hashes, or just have .torrent files for every single nar
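The per-chunk hashing clever mentions can be sketched directly: BitTorrent hashes fixed-size pieces (conventionally a power of two) with SHA-1, so a .torrent for each nar is essentially a piece length plus this list of piece hashes. The piece size below is an arbitrary example value.

```python
# Sketch: SHA-1 over each fixed-size piece of the data, torrent-style.
# The final piece may be shorter than piece_size.
import hashlib

def piece_hashes(data: bytes, piece_size: int = 1 << 18) -> list:
    """Return the SHA-1 hex digest of each fixed-size piece of data."""
    return [hashlib.sha1(data[i:i + piece_size]).hexdigest()
            for i in range(0, len(data), piece_size)]
```

Because IPFS chunks and hashes files with its own merkle-tree layout, these piece hashes would have to be computed separately, which is why clever suggests either hashing with torrent in mind up front or generating a .torrent file per nar.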
<gchristensen> (S3 can already provide that actually, hehe)
ajs124 has left #nixos-dev ["Machine going to sleep"]
<clever> oh, *doh*, you already linked the bucket directly
phreedom has quit [Remote host closed the connection]
phreedom has joined #nixos-dev
jtojnar has quit [Quit: jtojnar]
jtojnar has joined #nixos-dev
<jtojnar> could someone please change https://hydra.nixos.org/jobset/nixpkgs/gnome#tabs-configuration to gnome-3.32 branch?
drakonis has quit [Quit: WeeChat 2.3]
jtojnar has quit [Remote host closed the connection]
orivej has joined #nixos-dev
orivej has quit [Ping timeout: 252 seconds]
lopsided98 has quit [Quit: No Ping reply in 180 seconds.]
lopsided98 has joined #nixos-dev
pie__ has joined #nixos-dev
drakonis has joined #nixos-dev
johanot has joined #nixos-dev
<manveru> clever: doesn't aria2c support that?
<manveru> that's what apt-metalink and powerpill are using too to speed up downloads on debuntu and arch respectively
orivej has joined #nixos-dev
chaker has joined #nixos-dev
ixxie has joined #nixos-dev
johanot has quit [Quit: WeeChat 2.4]
ixxie has quit [Ping timeout: 250 seconds]
ma27 has joined #nixos-dev
<gchristensen> ,tell jtojnar done
<{^_^}> gchristensen: I'll pass that on to jtojnar
<gchristensen> ,tell jtojnar done (re: gnome-3.32)
<{^_^}> gchristensen: I'll pass that on to jtojnar
<gchristensen> it would be really nice if we could keep hydra's config in version control
LnL has quit [Ping timeout: 250 seconds]
rsa has joined #nixos-dev
Guest21659 has joined #nixos-dev
jtojnar has joined #nixos-dev
<jtojnar> gchristensen: thank you
<{^_^}> jtojnar: 1 hour, 15 minutes ago <gchristensen> done (re: gnome-3.32)
<{^_^}> jtojnar: 1 hour, 15 minutes ago <gchristensen> done
<Profpatsch> clever: We already have stability issues with Cloud providers that are paid to do their jobs, if we add ipfs and torrents to the mix it’s going to be hell.
<Profpatsch> especially if we try some fancy splitting the load between the three.
Sigyn has quit [Quit: People always have such a hard time believing that robots could do bad things.]
Sigyn has joined #nixos-dev
johanot has quit [Quit: WeeChat 2.4]
<Guest21659> gchristensen: we could, I have not used it but you can define declarative jobsets
<gchristensen> yeah, that would work for jobsets but the whole thing would be nicer :P
Guest21659 is now known as LnL
<manveru> looks like b4acd9772975d549f491d48debb679cf4ec78133 broke the nevow python lib, which is required for tahoe-lafs :(
<gchristensen> :(
<manveru> anyone know what this trial thing is?
<manveru> because nix-locate says it's in twisted, but that's already a dependency
drakonis has quit [Ping timeout: 252 seconds]
drakonis has joined #nixos-dev
<manveru> ah, so seems like it needs to be added to checkInputs as well now
drakonis has quit [Read error: Connection reset by peer]
drakonis has joined #nixos-dev
<catern> clever: for the most part the abstraction of a CDN is exactly the abstraction we need for the binary cache, modulo the fact that it requires you to use HTTP, which is not a constraint we have... hmm, are there any CDNs that say "hey we offer HTTP, but if you use [whatever protocol] directly, it'll be even faster"?
<gchristensen> HTTP is a pretty decent protocol for it, I think the thing people miss is being able to have a loose, local cache
<emily> just shell out to $BROWSER to download things from the cache. What could go wrong?
<ekleog> catern: the abstraction we need for the binary cache is “fetch me this content-addressed file”, which is not exactly CDNs
<ekleog> (not sure whether NARs are currently content-addressed but they definitely should be)
<catern> ekleog: that's fair, we have more information, which the CDN abstraction throws away
ajs124 has joined #nixos-dev
<ekleog> and hydra should (and AFAIR already does) make available a mapping from derivation hash to content-addressed hash
disasm has quit [Quit: WeeChat 2.0]
<catern> ekleog: oh wait no you tricked me, it's not actually content-addressed :)
<catern> yeah so I stand by what I said, the CDN abstraction is exactly what we want
<ekleog> well I was still typing :p
<ekleog> issue with the CDN abstraction is it requires HTTP and it requires a single actor for hosting a binary cache
<ekleog> while the only “single actor” requirement we actually have is for hosting the derivation hash -> content-address mapping
<catern> we want "here's the hash of a derivation, if you know that derivation and you have a build output for it, give me it"; it doesn't lose much information to reduce that to "here's a string, if you know that string and have something stored for it, give me it"
<ekleog> here you're merging two independent tasks together, the derivation hash -> content-address, and the content-address -> content tasks
disasm has joined #nixos-dev
<catern> yeah but that's because I'm describing the function we actually want to compute :)
<ekleog> yes, but we don't have to compute it all with a single protocol
<catern> the derivation hash -> content-address function isn't useful on its own, nor is the content-address -> content tasks function
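The two independent tasks ekleog describes can be sketched as two lookups composed together: a trusted mapping from derivation (or out-path) hash to content address, and an untrusted content-addressed fetch. Only the first mapping needs a single actor; anyone can serve the second. The hash strings and dicts below are placeholders standing in for cache.nixos.org and any content store.

```python
# Sketch of the two-step lookup. "drvhash" and "filehash" are placeholder
# keys, not real store hashes.

trusted_index = {"drvhash": "filehash"}        # out-path hash -> content address
content_store = {"filehash": b"<nar.xz bytes>"}  # content-addressed storage

def fetch(out_hash: str) -> bytes:
    file_hash = trusted_index[out_hash]   # step 1: trusted, single-actor lookup
    data = content_store[file_hash]       # step 2: anyone can serve this
    # A real client would re-hash `data` and verify it matches file_hash,
    # which is what makes the second step trustless.
    return data
```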
<clever> [clever@system76:~/nixpkgs]$ curl https://cache.nixos.org/5igbdc1czdss7341r360648n14pkpp5r.narinfo
<clever> FileHash: sha256:149bmnwm8him2dbdnb2qlbb3sgxn08dg604g9b752qddsclka1z1
<clever> URL: nar/149bmnwm8him2dbdnb2qlbb3sgxn08dg604g9b752qddsclka1z1.nar.xz
<clever> FileHash is the hash over the compressed form, and currently, that is also in the name of the nar itself
<clever> so if 2 different derivations (totally different $out paths) have the same output, they will actually share the .nar.xz
<ekleog> clever: so it's indeed already what we do, nice :)
<clever> NarHash: sha256:1ag1bswgyxip66jgn65n40x0c2q0c40n69wlbm1g20x9a5gpxjk7
<clever> there is also a hash over the un-compressed form, which nix uses for validation
<clever> FileSize: 930620
<clever> NarSize: 5510000
<clever> and then these are used to tell you how big the download will be, and how much it will use on-disk after unpacking
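The narinfo fields quoted above form a simple "Key: value" document, so a minimal parser is enough to recover FileHash, NarHash, and both sizes. The sample text uses the exact values from clever's curl output.

```python
# Sketch: parse a narinfo document into a dict of its fields.

def parse_narinfo(text: str) -> dict:
    """Split each 'Key: value' line of a narinfo into a dict entry."""
    fields = {}
    for line in text.splitlines():
        if ": " in line:
            key, value = line.split(": ", 1)
            fields[key] = value
    return fields

sample = (
    "FileHash: sha256:149bmnwm8him2dbdnb2qlbb3sgxn08dg604g9b752qddsclka1z1\n"
    "URL: nar/149bmnwm8him2dbdnb2qlbb3sgxn08dg604g9b752qddsclka1z1.nar.xz\n"
    "NarHash: sha256:1ag1bswgyxip66jgn65n40x0c2q0c40n69wlbm1g20x9a5gpxjk7\n"
    "FileSize: 930620\n"
    "NarSize: 5510000\n"
)
info = parse_narinfo(sample)
```

Note how the URL is derived from FileHash, the hash of the compressed nar: that is the content-addressing that lets two different derivations with identical output share one .nar.xz.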
<catern> (also I misspoke, I meant we go from out-path hash -> content, not derivation hash -> content)
<clever> but, $out hash, is based on the hash of the derivation
<catern> yeah, but it's not exactly the hash of the derivation
<catern> just being more precise
disasm has quit [Quit: WeeChat 2.0]
drakonis1 has joined #nixos-dev
ajs124 has joined #nixos-dev
<ajs124> what's the most convenient way to bisect a kernel on nixos?
ajs124 has left #nixos-dev [#nixos-dev]
orivej has quit [Ping timeout: 244 seconds]
ajs124 has joined #nixos-dev
<catern> so currently nix-shell, if TMPDIR is unset, sets TMPDIR=$XDG_RUNTIME_DIR; this is fairly annoying to me because XDG_RUNTIME_DIR is usually quite small (limited to 100MB on my system) and I run some processes inside nix-shell which can take up a fair amount of space (several GB)
<catern> does anyone see a reason why to do this?
<catern> On that note: nix-shell: don't use XDG_RUNTIME_DIR if TMPDIR is unset https://git.io/fjepw
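The fallback order catern argues for can be sketched in a few lines: prefer an explicit TMPDIR and fall back to /tmp, rather than to the often size-limited XDG_RUNTIME_DIR. This models the proposed behavior from the linked change, not current nix-shell.

```python
# Sketch: choose a temp directory, ignoring XDG_RUNTIME_DIR entirely.
# `env` stands in for the process environment (e.g. os.environ).

def pick_tmpdir(env: dict) -> str:
    """Return TMPDIR if set and non-empty, otherwise /tmp."""
    return env.get("TMPDIR") or "/tmp"
```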
<jtojnar> can we temporarily forbid r-ryantm from sending pull requests? otherwise it will open one for every GNOME app