drakonis has quit [Read error: Connection reset by peer]
aminechikhaoui has quit [Ping timeout: 255 seconds]
orivej has joined #nixos-dev
orivej has quit [Ping timeout: 240 seconds]
FRidh has joined #nixos-dev
phreedom has quit [Ping timeout: 250 seconds]
pie__ has quit [Ping timeout: 248 seconds]
phreedom has joined #nixos-dev
pie__ has joined #nixos-dev
__Sander__ has joined #nixos-dev
goibhniu has joined #nixos-dev
goibhniu has quit [Ping timeout: 265 seconds]
orivej has joined #nixos-dev
<srhb>
Did we or GitHub change something such that URLs like https://github.com/tianocore/edk2/commit/9e2a8e928995c3b1bb664b73fd59785055c6b5f6 used to be correct for fetchpatch but no longer are? As far as I can tell, the URL should end in .patch or .diff, but I'm seeing several successfully cached versions of patches like this over the past few days, which manifests as failures only for people who aren't using our cache. Confused.
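(For reference, a minimal fetchpatch sketch with the .patch suffix srhb describes; the sha256 is a placeholder, not the real hash of this patch:)

    fetchpatch {
      url = "https://github.com/tianocore/edk2/commit/9e2a8e928995c3b1bb664b73fd59785055c6b5f6.patch";
      # placeholder hash -- fetchpatch is fixed-output, so the real value
      # has to come from actually fetching the patch
      sha256 = "0000000000000000000000000000000000000000000000000000";
    }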
<LnL>
I don't know about that ever working without the proper extension
<srhb>
Me neither. So how'd it get cached...
<LnL>
did it tho?
<Dezgeg>
maybe they used to detect the user-agent?
<srhb>
LnL: Yes, but, good catch. It's an empty file. 8)
<LnL>
incorrect use of tools like nix-prefetch-url isn't always noticeable
<srhb>
Indeed.
aminechikhaoui has joined #nixos-dev
<srhb>
But there's some masking going on here that we could prevent.
<srhb>
At least on hydra.
<LnL>
so if the hash already exists somehow, you don't notice unless you used TOFU (trust on first use)
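(A sketch of the TOFU check in question, using only nix-prefetch-url and the URL forms from srhb's example above:)

    # the bare commit URL does not return a patch -- in srhb's case it
    # yielded an empty file, and the prefetched hash then masks the mistake
    nix-prefetch-url https://github.com/tianocore/edk2/commit/9e2a8e928995c3b1bb664b73fd59785055c6b5f6
    # the .patch form returns the actual patch text
    nix-prefetch-url https://github.com/tianocore/edk2/commit/9e2a8e928995c3b1bb664b73fd59785055c6b5f6.patch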
<Profpatsch>
gchristensen: You had a bot running that learns by monitoring other channels, right?
<Profpatsch>
Can we use that for other channels, too?
<srhb>
Profpatsch: I think it's helpfully named "the" or something similarly inscrutable. :D
catern has quit [Ping timeout: 248 seconds]
ghostyy has quit [Ping timeout: 248 seconds]
<Profpatsch>
srhb: Yes, I remembered “the” name, but I think it’s offline rn?
<Profpatsch>
Or there was a nickchange
<srhb>
Indeed.
goibhniu has joined #nixos-dev
Jackneill has quit [Read error: Connection reset by peer]
<gchristensen>
sure, my goal is to write down what actions they need to be able to take -- whether it's a bucket policy or an IAM role doesn't really matter
<clever>
gchristensen: which sub-account was this even on... lol
<clever>
found the bucket
<clever>
gchristensen: it has a dedicated iam user, with S3 full access
<clever>
i didnt bother locking it down too much
<clever>
gchristensen: and in the past, i have seen nixops deal with restrictive policies in a very poor manner
<clever>
gchristensen: if you lack permission to list objects, it just silently ignores the error, assumes the object doesnt exist yet, tries to create it, then complains that it already exists
<gchristensen>
aye
<gchristensen>
aminechikhaoui: do you have a policy handy?
ilbelkyr has joined #nixos-dev
<aminechikhaoui>
gchristensen: I'll double check in few minutes
<aminechikhaoui>
gchristensen: heh I'm using "s3:*" as well :D
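(For reference, the broad "s3:*" policy being described looks roughly like this; the bucket name is hypothetical:)

    {
      "Version": "2012-10-17",
      "Statement": [
        {
          "Effect": "Allow",
          "Action": "s3:*",
          "Resource": [
            "arn:aws:s3:::example-nix-cache",
            "arn:aws:s3:::example-nix-cache/*"
          ]
        }
      ]
    }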
<gchristensen>
aminechikhaoui: if you can write a start-to-finish list of commands, from ^ through uploading and distributing keys, I'll docbookify it :)
<Mic92>
globin: it is already installed into my profile, I just need to reboot after my daily work :)
<gchristensen>
you don't need to write up too many words, I can fill that out
<globin>
Mic92: nice!
<clever>
gchristensen: also something i use locally: secret-key-files= in nix.conf will make nix 2.0 sign all paths after building them, and keep the signatures in db.sqlite
<clever>
gchristensen: i believe that also supports using nix copy without being trusted on the receiving side
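(A sketch of the setup clever describes; the key name and file paths are examples, not a prescribed layout:)

    # generate a binary-cache signing key pair
    nix-store --generate-binary-cache-key cache.example.org-1 /etc/nix/key.sec /etc/nix/key.pub

    # then in nix.conf -- nix 2.0 signs every path it builds and
    # keeps the signatures in db.sqlite
    secret-key-files = /etc/nix/key.sec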
<aminechikhaoui>
gchristensen: yeah I'll try to do that
<FRidh>
18.09 is getting close. Are the release managers known yet?
<gchristensen>
I've been working with vcunat on this
<gchristensen>
FRidh: would you like to nominate someone to be the second release manager?
<FRidh>
gchristensen: nice. Might be good to make an announcement soon, and have a discussion on the schedule.
<gchristensen>
yeah
<FRidh>
gchristensen: not really :)
vcunat has joined #nixos-dev
<gchristensen>
anyone else here like to nominate someone, including themselves, for release manager?
<gchristensen>
I have some ideas, but don't want to leave anyone out :)
orivej has joined #nixos-dev
<gchristensen>
fpletz, globin, ping?
<globin>
gchristensen: pong?
<gchristensen>
PMing
drakonis has quit [Remote host closed the connection]
drakonis has joined #nixos-dev
lopsided98 has quit [Quit: Disconnected]
<fpletz>
gchristensen: pong
nly has joined #nixos-dev
<nly>
Is there anyone still working on ipfs to nix integration?
<gchristensen>
my understanding is ipfs just can't tolerate the write rate of nix, and the reads are too slow
<vcunat>
nly: no one, I believe
<vcunat>
(the hope of basing the binary cache on it failed, as noted)
drakonis has quit [Read error: Connection reset by peer]
fpletz has joined #nixos-dev
fpletz has quit [Changing host]
<nly>
Hmm
drakonis has joined #nixos-dev
sir_guy_carleton has joined #nixos-dev
drakonis has quit [Read error: Connection reset by peer]
nly has quit [Read error: Connection reset by peer]
vcunat has quit [Ping timeout: 240 seconds]
<clever>
gchristensen: my understanding, is that ipfs cant deal with the key/value requirements of a binary cache
<clever>
gchristensen: ipfs addresses content as hash(value) -> value, but nix needs hash(derivation) -> result
<clever>
so you need some kind of mapping from storepaths to ipfs hashes
<gchristensen>
that is troubling too, but it can provide a key/value layer on top
<clever>
but that could be a snapshot db, held in ipfs
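(A toy illustration of such a snapshot db, mapping store paths to IPFS content hashes; all keys and values are hypothetical:)

    {
      "/nix/store/<hash>-hello-2.10": "<CID of the NAR for that path>",
      "/nix/store/<hash>-glibc-2.27": "<CID of the NAR for that path>"
    }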
<clever>
as for reading, i believe there is a DHT, where users can insert an ipfs_hash -> pubkey pairing, and then you can query that DHT to see who has a given file
<clever>
so you have to walk the dht to find records on a given entry, then walk the dht some more to find the ip behind that pubkey
<domenkozar>
main problem with ipfs is that it's slow
<domenkozar>
it's really bad experience
<gchristensen>
^
<domenkozar>
I looked into using it with cachix
<domenkozar>
there's no reason except ideologies to do so
<clever>
yeah, i can see how it would have trouble with large scale queries
<domenkozar>
it's very overengineered
<domenkozar>
and it's low level still
<domenkozar>
maybe in 10y :)
<clever>
i can see how it may improve if it has an api to query for the existence of 200 different entities at once
<clever>
then the dht walk can reuse some of the nodes it finds
<clever>
but it would need support for that even at the network level
<domenkozar>
you also need to implement "best" node selection
<domenkozar>
and replication logic, ipfs is pull based
<domenkozar>
etc
mpickering has quit [Ping timeout: 256 seconds]
<clever>
yeah, replication is something it lacks
<clever>
there is also a security problem
<clever>
who has a copy of version 0.1 of $app, i found an exploit for it!
<clever>
thanks, and your ip is?
<domenkozar>
:D
<clever>
if nix is sharing the entire store...
genesis has quit [Ping timeout: 256 seconds]
fpletz has quit [Ping timeout: 256 seconds]
sys9mm has quit [Ping timeout: 256 seconds]
<clever>
domenkozar: there is also another problem, if i paste the result of `ls -lh /run/current-system` from a hydra, you can then run `nix-store -r` to download the entire closure of that hydra
<clever>
domenkozar: that closure includes github api tokens
<clever>
if ipfs comes into play, that will work against every machine
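(A sketch of the leak clever describes; the host and store path are hypothetical:)

    ls -lh /run/current-system
    # -> /nix/store/<hash>-nixos-system-hydra-...
    # anyone who sees that path can realise the whole closure from the
    # hydra's http interface, api tokens and all
    nix-store -r /nix/store/<hash>-nixos-system-hydra-... \
      --option extra-binary-caches https://hydra.example.org/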
mpickering has joined #nixos-dev
fpletz has joined #nixos-dev
<gchristensen>
only if hydra is using nix-serve, right?
<LnL>
gchristensen: oh, what non amazon s3 did you test with?
<gchristensen>
and also: only if ipfs is serving the entire system's store, right?
<gchristensen>
LnL: Ceph's S3
<clever>
gchristensen: by default, hydra has its own nix-serve-style api built into the hydra server itself
<clever>
gchristensen: all hydras share the entire store, directly over the hydra http interface
<clever>
turning on S3 blocks that interface
<LnL>
gchristensen: hmm, not sure if your statement is entirely correct then
<gchristensen>
LnL: which statement?
<LnL>
oh n/m
<clever>
gchristensen: try to query for /nix-cache-info on a given hydra, and you'll see whether it can share the store or not
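(The check clever suggests, against a hypothetical host; a hydra that shares its store answers with a nix-cache-info document along these lines:)

    curl https://hydra.example.org/nix-cache-info
    # StoreDir: /nix/store
    # WantMassQuery: 1
    # Priority: 30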
<samueldr>
what's the expected support cycle of `nix` itself?
<gchristensen>
niksnut: can we depend on a minimum of Nix 2.0 for NixOS 18.09?
<gchristensen>
and: should we release a Nix 2.1.0 for it?
<vcunat>
Sounds fine to me to depend on it (since some point in August), but we should think of possible upgrade paths from older versions than 18.03.
<gchristensen>
yikes, right,
<vcunat>
(and what the error messages will look like)
sir_guy_carleton has quit [Quit: WeeChat 2.0]
vcunat has quit [Ping timeout: 245 seconds]
goibhniu has quit [Ping timeout: 256 seconds]
goibhniu has joined #nixos-dev
orivej has quit [Ping timeout: 244 seconds]
<matthewbauer>
jtojnar: i think it's safest to wait to bump minver.nix until after 18.09 branches
<matthewbauer>
this gives users two releases of mixed 1.0 & 2.0
<gchristensen>
that means nixpkgs can't use nix 2 for another 6mo
<matthewbauer>
imo at least. we can do it right on sept 1 though
<matthewbauer>
ok maybe i misunderstood the policy. would it hurt to start using 2.0 on unstable immediately? i guess for backports maybe
<matthewbauer>
it could cause issues
<gchristensen>
ah right
<infinisil>
I mean, Nix 2.0 (the language) doesn't have any features we couldn't live without for another 6 months
<gchristensen>
the closure stuff for disk images is nice ...
<infinisil>
Closure stuff?
<samueldr>
I think it all depends on what's the expected support cycle for `nix`
<samueldr>
is there any maintenance being done on 1.11?
<samueldr>
(including there being no need for maintenance)
<samueldr>
(I'm not saying yet whether those assertions mean I push for or against 2.0 in 18.09)
<gchristensen>
I think the feature I'm thinking of is: "In structured attribute mode, exportReferencesGraph exports extended information about closures in JSON format. In particular, it includes the sizes and hashes of paths. This is primarily useful for NixOS image builders."
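(A hedged sketch of that feature, modelled loosely on nixpkgs' closureInfo; rootPath and the jq query are illustrative, not verbatim nixpkgs code:)

    stdenv.mkDerivation {
      name = "closure-info-demo";
      __structuredAttrs = true;
      # one graph per attribute; the list holds the closure roots
      exportReferencesGraph.closure = [ rootPath ];
      nativeBuildInputs = [ jq ];
      buildCommand = ''
        out=''${outputs[out]}   # structured attrs: outputs arrive via .attrs.sh
        mkdir -p "$out"
        # .attrs.json lists every path in the closure together with its
        # narSize, narHash and references
        jq -r '.closure | map(.path) | .[]' < .attrs.json > "$out/store-paths"
      '';
    }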