<sphalerite>
oooh that sounds useful. I wanted to do some sort of change to nix a while back and looked through the curl docs searching for URL-handling stuff
<andi->
on a slightly different note: I am terrified and relieved knowing that we use curl so heavily when fetching things...
<andi->
I have a feeling that an HTTP(S)-only client that does just one thing would be better..
<andi->
sphalerite: well on the plus side: it has a lot of exposure and people looking at it.. but from what we have seen with OpenSSL that doesn't mean much... on the negative side it speaks almost every networked protocol on this planet..
<sphalerite>
andi-: IMHO there's nothing wrong with it speaking all the protocols, as it does allow deactivating support for protocols. So you can set the library up to only allow HTTPS
<andi->
As I said, I'm both glad and terrified :)
<andi->
maybe I would feel better if I started building my local nix with everything but HTTP(S) support disabled
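A rough sketch of what that could look like as a nixpkgs overlay. The overrideAttrs mechanism is standard Nix, but the exact --disable-* flag names are assumptions about curl's configure script and may differ between curl versions:

  # Hypothetical overlay: strip the system curl down to HTTP(S) only.
  self: super: {
    curl = super.curl.overrideAttrs (old: {
      configureFlags = (old.configureFlags or []) ++ [
        "--disable-ftp" "--disable-gopher" "--disable-imap" "--disable-pop3"
        "--disable-smtp" "--disable-telnet" "--disable-tftp" "--disable-rtsp"
        "--disable-ldap" "--disable-dict"
      ];
    });
  }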
pie_ has quit [Read error: Connection reset by peer]
pie__ has joined #nixos-security
<pie__>
sphalerite, more protocols means more areas to look at with the same number of eyes?
<ekleog>
andi-: tbh, I don't really see your attack model
<ekleog>
like, unless you're considering unencrypted protocols and MitM (which should happen only for a few packages), only the provider of the data fetched can attack your downloader
<ekleog>
and the downloader is sandboxed anyway, so (theoretically) all it can do is write output that'll be hash-checked as the FO (fixed-output) result
<ekleog>
so basically, the worst an attacker could do is change a download to another one that has the same hash… which doesn't sound very likely, as afair even SHA-1 is no longer allowed?
<ekleog>
(or do I miss something?)
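For context, the nixpkgs fetchers produce fixed-output derivations: the expected hash is declared up front, and Nix rejects whatever the sandboxed downloader wrote if it doesn't match. A minimal sketch, with a placeholder URL and hash:

  # Fixed-output derivation: the fetcher runs in the sandbox (with network),
  # and its output must match the declared sha256 or the build fails.
  fetchurl {
    url = "https://example.org/source.tar.xz";  # placeholder
    sha256 = "...";                             # placeholder for the real base32 sha256
  }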
<andi->
You are probably right. I also do not have a model in mind.
<andi->
I just think it is a bit crazy..
<ekleog>
yeah, that is completely true
<andi->
I recently witnessed a discussion where someone said "uggh, wget? Why didn't you use curl?" and the other person responded: "I don't think either of them is a great choice, but that is what we got."
<ekleog>
yeah x)
<andi->
People also started working on musl while glibc provided everything they needed.. Well it provides too much for them ;)
<ekleog>
need to rewrite curl in rust
<andi->
I am not even sure that is a great idea.
<andi->
It will be 10x the size of the binary.
<ekleog>
with flags to enable/disable protocols at runtime
<andi->
Will probably use thousands of crates..
<ekleog>
well, rust is (theoretically) capable of dynamic linking, so the size shouldn't matter if we adapt nix to actually use that
<ekleog>
(well, not matter more than it does for C++, which has the same amount of monomorphization)
<andi->
It would be nice because then you'd have a rusty TLS library as well (hopefully).
<andi->
But who audits that?
<ekleog>
tbh the tls library is actually the thing I'd rather keep from gnutls or whatever
<ekleog>
precisely because even if it's written without unsafe code it can still be unsafe
<ekleog>
and it just likely doesn't have that many eyes to look on it
<sphalerite>
ekleog: what about builtins.fetchurl, which doesn't require a hash?
<ekleog>
will likely be great when it matures, though :)
<ekleog>
sphalerite: good point… but imo when using that it's already too late, and the nix security model is already broken :° (hello mozilla's rust-overlay)
<ekleog>
like, the server can just decide to serve you another file
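In Nix terms, the contrast looks roughly like this (placeholder URL; the attrset form with sha256 needs a reasonably recent Nix):

  # Unpinned: whatever the server serves at eval time is what you get.
  manifest = builtins.fetchurl "https://example.org/manifest.toml";

  # Pinned: a swapped file no longer matches the declared hash.
  manifestPinned = builtins.fetchurl {
    url = "https://example.org/manifest.toml";
    sha256 = "...";  # expected hash, placeholder here
  };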
<andi->
Regarding the attack model: Someone (amazon) owning the server I am downloading my packages from is able to exploit whatever in curl, execute some code (in a sandbox but with local network access..) and then still deliver the requested output..
<ekleog>
so the only change an attack brings is if it's also possible from a MitM / MotS and you're being MitMed / MotSed
<ekleog>
hmmm good point there's local network access
<andi->
I have déjà vu regarding that.. there was some system that already fixed that attack vector
<ekleog>
so now we need to get nix to sandbox stronger to actually only allow some outgoing network access! :)
<andi->
I should start writing a blog again m(
<ekleog>
(like, a whitelist of allowed IPs / DNS names that FO derivations may contact)
<ekleog>
(which could be auto-computed by the lib.fetch*, actually)
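Purely as illustration, such an interface might look something like this; the allowedHosts attribute is made up and does not exist in Nix today:

  fetchurl {
    url = "https://example.org/source.tar.xz";
    sha256 = "...";
    # Hypothetical attribute: the sandbox would only permit outgoing
    # connections to these hosts for this fixed-output derivation.
    allowedHosts = [ "example.org" ];
  }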
<andi->
disable rfc1918 addresses and the corresponding IPv6 ranges..
<ekleog>
that's some good first step, but a whitelist would be best imo
<ekleog>
can you remind me your github handle? it doesn't auto-complete from @andi
<andi->
how do you properly autocompute that without killing internal package repos?
<andi->
ekleog: andir
<pie__>
nix security model? <ekleog> sphalerite: good point… but imo when using that it's already too late, and the nix security model is already broken :° (hello mozilla's rust-overlay)
<ekleog>
pie__: my view of the nix security model is “if the .nix is well-written, it will generate the correct executable”
<ekleog>
hmm, +“and the binary caches are actually trust-able”
<pie__>
aha.
<pie__>
oh i just had an idea, sort of, since we have reproducible packages, maybe we could do some kind of web of trust thing where people duplicate a build and do some kind of pgp signing to the binary cache if it matches
<ekleog>
which isn't the case with builtins.fetchurl without a hash, given that it can be thwarted from the server
<sphalerite>
andi-: sandbox but with local network access..?
<pie__>
(of course that's assuming anything bad happens in the build step and not before... well anyway, /me runs off)
<ekleog>
pie__: yeah, there have been talks about that, and it'd be great :) cachix is a step in this direction, afair
<andi->
sphalerite: yes, so it must be configurable :)
<sphalerite>
ekleog: I'd say the binary caches don't need to be trust-able, only the configured trusted keys :)
<ekleog>
^
<pie__>
sphalerite, what's that mean
<ekleog>
pie__: the IP doesn't need to be trusted, only the trusted key
<ekleog>
that avoids having to trust the DNS, IP, etc.
<sphalerite>
pie__: you don't need to trust the binary caches themselves, only the keys that the paths served up by the caches are signed with
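In NixOS configuration terms (option names as they were around the 18.03 era; the URL and key below are placeholders):

  # A substitute is accepted only if it's signed by one of these keys,
  # no matter which cache or CDN actually served it.
  nix.binaryCaches = [ "https://cache.example.org" ];
  nix.binaryCachePublicKeys = [ "cache.example.org-1:AAAA...=" ];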
<pie__>
i might have been implicitly assuming that we are talking about trusting the contents of the binary caches
<pie__>
is that it?
<andi->
we must trust those. I started with distrusting the CDN (Amazon in our case) towards the HTTPS lib we are using
<ekleog>
andi-: well, with fully reproducible builds we don't even need to actually trust those
<pie__>
assuming nix checks the hashes of things it gets from the caches, things should be fine up to breaking of hashes
<pie__>
(?)
<ekleog>
just rebuild like 30% of what you download (depending on your level of paranoia) and verify that it's actually giving the same result
<ekleog>
should make any cheating likely to be detected, and thus impossible to do in practice
<ekleog>
(and then distribute said rebuilding of 30% of the rebuilds to friends… and we're back to the idea of adding signatures to the binary cache \o/)
<andi->
I think just adding a 2nd identity that rebuilds e.g. the -small pkg sets would already be nice. Then publish those hashes and you can compare the binaries/hashes with those you got from hydra. Gives another level of assurance.
<ekleog>
well, the problem with publishing the hashes is that hydra then knows on which paths it can cheat
<ekleog>
and given an attack can come from any executable… well… :/
<ekleog>
the trick in rebuilding X% of the packages locally is to do it randomly
<ekleog>
so that hydra can't know which packages to fake
<ekleog>
(that doesn't work with offloading rebuilds to friends, for sure, as they'd have to publish said signatures, but with offloading rebuilds to friends it becomes possible to just require that every received build has 2 trusted signatures and if not it'll be rebuilt locally)
<andi->
And then we end up blacklisting chromium ;) (from phone)
<ekleog>
:D
<ekleog>
well, if any package is blacklisted then hydra will be fully-trusted again… unless said package is checked for having multiple signatures that are unlikely to be all compromised
<pie__>
lol chromium is not reproducible or what?
<pie__>
also what was the thing with mozillas rust overlay?
<sphalerite>
pie__: no, it's just horrendously resource-intensive to build
<pie__>
ah lol
<sphalerite>
pie__: it fetches data about the current rust version at eval time, so it's not reproducible
pie_ has joined #nixos-security
<ekleog>
I still really need to add the possibility to pass a hash to the fetchurl, so it isn't so awfully slow to evaluate
<ekleog>
should be trivial, but I just find more time to debate on irc than to actually do productive work
<pie_>
whoops
<sphalerite>
ekleog: which fetchurl?
<sphalerite>
the rust thing?
<ekleog>
sphalerite: yeah, the mozilla rust-overlay thing
<sphalerite>
ah right
<ekleog>
just being able to provide a hash for the manifest (with maybe detailed information about where to fetch it) should fix it
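Something along these lines, assuming the overlay keeps fetching the dated channel manifest (the date and hash below are placeholders; the dist/<date>/channel-rust-*.toml layout is what static.rust-lang.org uses):

  # Pin the channel manifest instead of fetching it unpinned at eval time.
  manifest = builtins.fetchurl {
    url = "https://static.rust-lang.org/dist/2018-07-01/channel-rust-nightly.toml";
    sha256 = "...";  # expected hash of that exact manifest
  };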
<sphalerite>
sounds good
<ekleog>
(just hoping rust keeps an archive of past manifests)
<sphalerite>
well they have so far, right?
<ekleog>
dunno, didn't check yet
<sphalerite>
it does support passing a date for the manifest
<ekleog>
they keep an archive of all past nightlies/betas/stables, that is sure
pie__ has quit [Ping timeout: 240 seconds]
<sphalerite>
the overlay, that is
<ekleog>
oh? sounds good, so it really just needs the hash
<sphalerite>
yep should do
<ekleog>
oooh wait actually the date is the same date as the date of the beta/nightly/stable
<ekleog>
it didn't break git repositories too much, however it completely broke chromium's svn when they added a test to check that chromium wasn't vulnerable to it, iirc
<pie_>
ah right
<pie_>
"this will never happen" is not an excuse for edge cases xD
<pie_>
it will happen precisely because it will never happen
<ekleog>
esp. when the person feeding files is likely to be actively attacking you :°
<pie_>
then again it wouldn't make sense for git clones to also pull in commits from forks, no way github would do that
<andi->
samueldr: thanks, I'll do the 18.03 port :-)
<sphalerite>
pie_: yeah I think github uses a single real git repo for all forks of a given repo. The branches are only accessible in the actual repo they belong to though.