<andi-> I wonder how many issues this really will help to fix: https://daniel.haxx.se/blog/2018/09/09/libcurl-gets-a-url-api/ :)
<sphalerite> oooh that sounds useful. I wanted to do some sort of change to nix a while back and looked through the curl docs searching for URL-handling stuff
<andi-> on a slightly different note: I am terrified and relieved knowing that we use curl so heavily when fetching things...
<andi-> I have a feeling that an http(s) only client that only does one thing would be better..
<sphalerite> andi-: why?
<andi-> sphalerite: well on the plus side: it has a lot of exposure and people looking at it.. but from what we have seen with OpenSSL that doesn't mean much... on the negative side it speaks almost every networked protocol on this planet..
<sphalerite> andi-: IMHO there's nothing wrong with it speaking all the protocols, as it does allow deactivating support for protocols. So you can set the library up to only allow HTTPS
<andi-> As I said I'm both glad and terrified :)
<andi-> maybe I would feel better if I'd start building my local nix without everything but HTTP(s) support
<sphalerite> andi-: http://sprunge.us/keqCxf
<sphalerite> testing now :)
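(The contents of the sprunge paste above aren't preserved here. A minimal sketch of the kind of override being discussed, written as a nixpkgs overlay; the exact feature-flag names are assumptions about the curl derivation of the time:)

    self: super: {
      curl = super.curl.override {
        # flag names assumed; check the actual curl expression in nixpkgs
        scpSupport   = false;  # drop libssh2 (scp/sftp)
        gssSupport   = false;  # drop GSS-API/Kerberos
        ldapSupport  = false;
        idnSupport   = false;
        http2Support = true;   # keep plain HTTP(S) plus HTTP/2
      };
    }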
<andi-> I didn't just mean that. I also meant the libcurl it is linking against :)
<andi-> I am looking at that right now. Trying to figure out if new feature flags are being enabled by default or not.
<sphalerite> why?
<andi-> because it is still code that could be executed during runtime by whatever means :/
<andi-> the current expression for curl looks pretty restrictive with every option but http2 disabled. This is what curl --version says:
<andi-> Protocols: dict file ftp ftps gopher http https imap imaps pop3 pop3s rtsp scp sftp smb smbs smtp smtps telnet tftp
<andi-> Features: AsynchDNS IPv6 Largefile GSS-API Kerberos SPNEGO NTLM NTLM_WB SSL libz TLS-SRP HTTP2 UnixSockets HTTPS-proxy
<pie__> sphalerite, more protocols means more areas to look at with the same number of eyes?
<ekleog> andi-: tbh, I don't really see your attack model
<ekleog> like, unless you're considering unencrypted protocols and MitM (which should happen only for a few packages), only the provider of the data fetched can attack your downloader
<ekleog> and the downloader is sandboxed anyway, so (theoretically) all it can do is write stuff that'll be hash-checked to be in the FO
<ekleog> so basically, the worst an attacker could do is change a download to another that has the same hash… which sounds not really likely, as afair even sha1 is no longer allowed?
<ekleog> (or do I miss something?)
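(Context for ekleog's point: fetchers run as fixed-output derivations, where the builder may touch the network but whatever it writes must match a declared hash. A minimal sketch, with illustrative paths and a placeholder hash:)

    derivation {
      name = "example-source.tar.gz";
      system = builtins.currentSystem;
      # builder and its tools are illustrative, not how nixpkgs fetchers are wired up
      builder = "/bin/sh";
      args = [ "-c" "curl -L https://example.com/source.tar.gz -o $out" ];
      # these three attributes make this a fixed-output derivation:
      outputHashMode = "flat";
      outputHashAlgo = "sha256";
      outputHash = "0000000000000000000000000000000000000000000000000000"; # placeholder
    }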
<andi-> You are probably right. I also do not have a model in mind.
<andi-> I just think it is a bit crazy..
<ekleog> yeah, that is completely true
<andi-> I recently witnessed a discussion where someone said "uggh wget? Why didn't you use curl?" the other person responded: "I don't think either of them is a great choice but that is what we got."
<ekleog> yeah x)
<andi-> People also started working on musl while glibc provided everything they needed.. Well it provides too much for them ;)
<ekleog> need to rewrite curl in rust
<andi-> I am not even sure that is a great idea.
<andi-> It will be 10x the size of the binary.
<ekleog> with flags to runtime enable/disable protocols
<andi-> Will probably use thousands of crates..
<ekleog> well, rust is (theoretically) capable of dynamic linking, so the size shouldn't matter if we adapt nix to actually use that
<ekleog> (well, not matter more than it does for C++, which has the same amount of monomorphization)
<andi-> It would be nice because then you'd have a rusty TLS library as well (hopefully).
<andi-> But who audits that?
<ekleog> tbh the tls library is actually the thing I rather keep from gnutls or whatever
<ekleog> precisely because even if it's written without unsafe code it can still be unsafe
<ekleog> and it just likely doesn't have that many eyes to look on it
<sphalerite> ekleog: what about builtins.fetchurl, which doesn't require a hash?
<ekleog> will likely be great when it matures, though :)
<ekleog> sphalerite: good point… but imo when using that it's already too late, and the nix security model is already broken :° (hello mozilla's rust-overlay)
<ekleog> like, the server can just decide to serve you another file
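(The difference in question, sketched with builtins.fetchurl; URL and hash are illustrative:)

    # Unhashed: whatever the server decides to serve at eval time is what you get.
    unpinned = builtins.fetchurl "https://static.rust-lang.org/dist/channel-rust-nightly.toml";

    # Hashed: this becomes fixed-output, so the server can only serve the expected bytes.
    pinned = builtins.fetchurl {
      url = "https://static.rust-lang.org/dist/channel-rust-nightly.toml";
      sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
    };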
<andi-> Regarding the attack model: Someone (amazon) owning the server I am downloading my packages from is able to exploit whatever in curl, execute some code (in a sandbox but with local network access..) and then still deliver the requested output..
<ekleog> so the only change an attack brings is if it's also possible from a MitM / MotS and you're being MitMed / MotSed
<ekleog> hmmm good point there's local network access
<andi-> I have déjà vu regarding that.. there was some system that already fixed that attack vector
<ekleog> so now we need to get nix to sandbox stronger to actually only allow some outgoing network access! :)
<andi-> I should start writing a blog again m(
<ekleog> (like, whitelist of allowed contacted IPs / DNSes for FO derivations)
<ekleog> (which could be auto-computed by the lib.fetch*, actually)
<andi-> disable rfc1918 and the corresponding IPv6 addresses..
<ekleog> that's some good first step, but a whitelist would be best imo
<ekleog> can you remind me your github handle? it doesn't auto-complete from @andi
<andi-> how do you properly autocompute that without killing internal package repos?
<andi-> ekleog: andir
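(A purely hypothetical sketch of the whitelist idea, later filed as nix#2414 below; the allowedHosts attribute does not exist and is invented here for illustration. Auto-computing it from the url inside lib.fetch* would also answer andi-'s question about internal package repos, since their hosts would land on the list automatically:)

    fetchurl {
      url = "https://ftp.gnu.org/gnu/hello/hello-2.10.tar.gz";
      sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i"; # illustrative
      # hypothetical attribute: restrict the fixed-output sandbox to these
      # hosts, derived automatically from the url above
      allowedHosts = [ "ftp.gnu.org" ];
    }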
<pie__> nix security model? <ekleog> sphalerite: good point… but imo when using that it's already too late, and the nix security model is already broken :° (hello mozilla's rust-overlay)
<ekleog> pie__: my view of the nix security model is “if the .nix is well-written, it will generate the correct executable”
<ekleog> hmm, +“and the binary caches are actually trust-able”
<pie__> aha.
<pie__> oh i just had an idea, sort of, since we have reproducible packages, maybe we could do some kind of web of trust thing where people duplicate a build and do some kind of pgp signing to the binary cache if it matches
<ekleog> which isn't the case with builtins.fetchurl without a hash, given that can be thwarted from the server
<sphalerite> andi-: sandbox but with local network access..?
<pie__> (of course thats assuming anything bad happens in the build step and not before...well anyway, /me runs off)
<ekleog> pie__: yeah, there have been talks about that, and it'd be great :) cachix is a step in this direction, afair
<andi-> sphalerite: yes, so it must be configurable :)
<sphalerite> ekleog: I'd say the binary caches don't need to be trust-able, only the configured trusted keys :)
<ekleog> ^
<pie__> sphalerite, whats that mean
<ekleog> pie__: the IP doesn't need to be trusted, only the trusted key
<ekleog> that avoids having to trust the DNS, IP, etc.
<sphalerite> pie__: you don't need to trust the binary caches themselves, only the keys that the paths served up by the caches are signed with
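(What sphalerite describes, as NixOS configuration of the era: substitutes may come from any reachable cache, but only store paths signed by a configured key are accepted:)

    {
      nix.binaryCaches = [ "https://cache.nixos.org" ];
      nix.binaryCachePublicKeys = [
        "cache.nixos.org-1:6NCHdD59X431o0gWypbMrAURkbJ16ZPMQFGspcDShjY="
      ];
      nix.requireSignedBinaryCaches = true;  # refuse unsigned substitutes
    }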
<pie__> i might have been implicitly assuming that we are talking about trusting the contents of the binary caches
<pie__> is that it?
<andi-> we must trust those. I started with distrusting the CDN (Amazon in our case) with respect to the HTTPS lib we are using
<ekleog> andi-: well, with fully reproducible builds we don't even need to actually trust those
<pie__> assuming nix checks the hashes of things it gets from the caches, things should be fine up to breaking of hashes (?)
<ekleog> just rebuild like 30% of what you download (depending on your level of paranoia) and verify that it's actually giving the same result
<ekleog> should make any cheating likely to be detected, and thus impossible to do in practice
<ekleog> (and then distribute said rebuilding of 30% of the rebuilds to friends… and we're back to the idea of adding signatures to the binary cache \o/)
<andi-> I think just adding a 2nd identity that rebuilds e.g. the -small pkg sets would already be nice. Then publish those hashes so you can compare the binaries/hashes with those you got from hydra. Gives another level of assurance.
* andi- runs to dinner, bbl
<{^_^}> nix#2414 (by Ekleog, 7 seconds ago, open): (Allow) limiting network access in fixed-output derivations
<ekleog> well, the problem with publishing the hashes is that hydra then knows on which paths it can cheat
<ekleog> and given an attack can come from any executable… well… :/
<ekleog> the trick in rebuilding X% of the packages locally is to do it randomly
<ekleog> so that hydra can't know which packages to fake
<ekleog> (that doesn't work with offloading rebuilds to friends, for sure, as they'd have to publish said signatures, but with offloading rebuilds to friends it becomes possible to just require that every received build has 2 trusted signatures and if not it'll be rebuilt locally)
<andi-> And then we end up blacklisting chromium ;) (from phone)
<ekleog> :D
<ekleog> well, if any package is blacklisted then hydra will be fully-trusted again… unless said package is checked for having multiple signatures that are unlikely to be all compromised
<pie__> lol chromium is not reproducible or what?
<pie__> also what was the thing with mozillas rust overlay?
<sphalerite> pie__: no, it's just horrendously resource-intensive to build
<pie__> ah lol
<sphalerite> pie__: it fetches data about the current rust version at eval time, so it's not reproducible
<ekleog> I still really need to add the possibility to pass a hash to the fetchurl, so it's not so awfully slow to evaluate
<ekleog> should be trivial, but I just find more time to debate on irc than to actually do productive work
<pie_> whoops
<sphalerite> ekleog: which fetchurl?
<sphalerite> the rust thing?
<ekleog> sphalerite: yeah, the mozilla rust-overlay thing
<sphalerite> ah right
<ekleog> just being able to provide a hash for the manifest (with maybe detailed information about where to fetch it) should fix it
<sphalerite> sounds good
<ekleog> (just hoping rust keeps an archive of past manifests)
<sphalerite> well they have so far, right?
<ekleog> dunno, didn't check yet
<sphalerite> it does support passing a date for the manifest
<ekleog> they keep an archive of all past nightlies/betas/stables, that is sure
<sphalerite> the overlay, that is
<ekleog> oh? sounds good, so it really just needs the hash
<sphalerite> yep should do
<ekleog> oooh wait actually the date is the same date as the date of the beta/nightly/stable
<ekleog> I was thinking there were two dates
<{^_^}> mozilla/nixpkgs-mozilla#113 (by Ekleog, 6 seconds ago, open): Allow to add a hash to the downloaded manifest
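(A hedged sketch of what that issue proposes, assuming the overlay's rustChannelOf interface; the sha256 argument is the proposed addition, not an existing parameter:)

    # pin the channel manifest by hash so evaluation no longer depends
    # on what the server returns at eval time
    (pkgs.rustChannelOf {
      channel = "nightly";
      date = "2018-09-09";
      sha256 = "0000000000000000000000000000000000000000000000000000"; # manifest hash, placeholder
    }).rust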
<pie_> pointless question, how hard would it be to swap out the hash format nixpkgs uses
<ekleog> the sha256 one used by nixpkgs, or the hash format used by nix?
<ekleog> for the first, relatively easy, just need to re-download the world and re-hash it, can likely be automated
<ekleog> for the second, near-to-impossible
<ekleog> (it has been designed for some extensibility, though, iirc)
<pie_> both i guess
<pie_> :P
<sphalerite> pie_: do you mean the hash algorithm or the representation?
<pie_> the algorithm
<sphalerite> shouldn't be tooooo complicated to add support, migrating nixpkgs to a new hash would be expensive but feasible
<sphalerite> s/new hash/new hash algorithm/
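(On the nixpkgs side such a migration would be largely mechanical, since fetchurl already accepts more than one hash attribute; a sketch, with a placeholder sha512 value:)

    fetchurl {
      url = "mirror://gnu/hello/hello-2.10.tar.gz";
      # before: sha256 = "0ssi1wpaf7plaswqqjwigppsg5fyh99vdlb9kzl7c9lng89ndq1i";
      sha512 = "...placeholder...";  # re-download the world, re-hash, rewrite
    }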
<pie_> im just bothered git is kind of stuck with sha1 afaik :P
<pie_> but meh
<ekleog> pie_: git is slowly moving to abstracting sha1 away
<pie_> yeah last i looked into it there was some semblance of motion
<ekleog> brian carlson is working on abstracting all these into oids
<pie_> i just remember this periodically
<ekleog> oh wait, last I checked was around 2016, this info is maybe not up-to-date ._.
<pie_> also something that bothers me, but i havent thought of a good attack yet, this might have come up the other day here, cant remember,
<pie_> ekleog, ah :P
<ekleog> hopefully all references to sha1 are now refs to oids and the next step is to add a new hash algorithm and some migration plan :D
* ekleog always hopeful
<pie_> so, github forks all seem to be somehow part of the same object in the backend, you can access hashes from one fork in another
<pie_> hm i wonder if that also applies to when you clone it...
<pie_> or if its just with github urls
<ekleog> git now checks for the known collision, fwiw (not saying that another one couldn't appear)
<pie_> i vaguely remember something about that breaking git repositories so yes xD
<pie_> heres a random example of what i mean (this is unrelated to the collisions stuff):
<pie_> ok im having trouble actually getting this together..
<ekleog> it didn't break git repositories too much, however it completely broke chromium's svn when they added a test to check that chromium wasn't vulnerable to it, iirc
<pie_> ah right
<pie_> "this will never happen" is not an excuse for edge cases xD
<pie_> it will happen precisely because it will never happen
<ekleog> esp. when the person feeding files is likely to be actively attacking you :°
<pie_> then again it wouldnt make sense for git clones to also pull in commits from forks, no way github would do that
<copumpkin> I replied on the issue tracker :)
<samueldr> FTR I backported the nodejs 6.14.4 update to 18.09, probably needed on 18.03 still (even if not much depends on it) https://github.com/nodejs/node/blob/master/doc/changelogs/CHANGELOG_V6.md#6.14.4
<andi-> samueldr: thanks, I'll do the 18.03 port :-)
<sphalerite> pie_: yeah I think github uses a single real git repo for all forks of a given repo. The branches are only accessible in the actual repo they belong to though.