drakonis has quit [Read error: Connection reset by peer]
drakonis has joined #nixos-dev
psyanticy has joined #nixos-dev
drakonis_ has joined #nixos-dev
drakonis1 has joined #nixos-dev
drakonis has quit [Ping timeout: 240 seconds]
drakonis_ has quit [Ping timeout: 250 seconds]
orivej has joined #nixos-dev
orivej has quit [Ping timeout: 268 seconds]
orivej has joined #nixos-dev
<gchristensen>
Ericson2314: ping
<gchristensen>
Ericson2314: two things, (a) if ofborg started building Nix PRs, would that be annoying w.r.t. your RFC (b) "I mean that hydra goes down a fair amount" do you have details on this? the monitoring I have suggests hydra doesn't often go down, but maybe we're measuring different things
drakonis1 has quit [Read error: Connection reset by peer]
<niksnut>
I don't think Hydra goes down a lot, but the turnaround time for a PR could be quite long (e.g. if hydra is evaluating a nixos jobset)
orivej has quit [Ping timeout: 246 seconds]
ckauhaus has joined #nixos-dev
drakonis1 has joined #nixos-dev
drakonis has joined #nixos-dev
drakonis1 has quit [Read error: Connection reset by peer]
drakonis1 has joined #nixos-dev
<Ericson2314>
@gchristiansen ok, our own hydra did go down when building PRs. If the official one doesn't, and also wouldn't when evaluating more little things, that's good. Glad to be wrong.
<Ericson2314>
@gchristiansen as to ofborg, nothing against ofborg itself, but I consider having two separate CIs an overhead that's only needed because of Nixpkgs's sheer scale
<Ericson2314>
But for smaller things I think having just one CI build everything is simpler, and ought to be easier for all parties
<gchristensen>
gotcha
<gchristensen>
it seems that actually ofborg is a better fit, as it doesn't attempt to match the build capabilities of hydra
<gchristensen>
(also btw `tensen` not `tiansen` :))
orivej has joined #nixos-dev
orivej has quit [Ping timeout: 240 seconds]
johnny101 has quit [Quit: Konversation terminated!]
johnny101 has joined #nixos-dev
<disasm>
worldofpeace: let's chat sometime this week
<Ericson2314>
gchristensen: oops!
<Ericson2314>
"attempt to match the build capabilities of hydra" what do you mean?
<Ericson2314>
In particular I think it would be nice if PRs were cached
<Ericson2314>
I don't know much about ofborg, it doesn't have its own cache does it?
<gchristensen>
ofborg builds things in the small, hydra more towards building tens of thousands of things
<gchristensen>
it doesn't, but I recently took over all the builders and perhaps it could publish to a cache
<Ericson2314>
"took over all the builders" oh? I'm curious
<gchristensen>
they used to be run by volunteers, but it made it difficult to do risky things like reuse a cache
<niksnut>
gchristensen: how will ofborg build Nix PRs? will it build all of release.nix or just some jobs?
<Ericson2314>
do people still donate machines and you control the daemon or something?
<gchristensen>
right now my thought is to build each job inside the "release" aggregate derivation, as individual builds
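A hedged sketch of what the "release" aggregate derivation refers to, with made-up job names standing in for the real ones: an aggregate is just a derivation whose `constituents` list the individual jobs, so a CI could enumerate that list and build each constituent as its own build.

```nix
# Sketch only, not the actual release.nix of Nix: an aggregate job is a
# derivation whose `constituents` attribute lists the individual jobs, so a
# CI can read that list and build each constituent as a separate build.
{ pkgs ? import <nixpkgs> {} }:
let
  # stand-ins for the real jobs defined in Nix's release.nix
  jobs = {
    build = pkgs.hello;
    binaryTarball = pkgs.hello;
  };
in
pkgs.releaseTools.aggregate {
  name = "release";
  constituents = [ jobs.build jobs.binaryTarball ];
}
```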
<gchristensen>
Ericson2314: they used to until like 3 days ago
<Ericson2314>
did they donate the machines to the foundation then?
<gchristensen>
no, they didn't actually donate anything; they just ran a builder process on whatever machine they wanted
<Ericson2314>
so what machines does it run on now?
<gchristensen>
machines I control
<Ericson2314>
I took "took over all the builders" to mean the same builders no owners, but maybe you mean new builders?
<Ericson2314>
ok
<gchristensen>
right, I got some more servers that I pay for out of the patreon, and run builders there
<gchristensen>
niksnut: what do you think about that method?
<Ericson2314>
gchristensen: the way I thought about hydra vs ofborg was less big vs small and more total vs partial
<Ericson2314>
Hydra builds everything, and ofborg builds some subset
<Ericson2314>
so for Nix it may be small, but we do want to build everything
<gchristensen>
sure, I think we can do that
<gchristensen>
it does get a bit challenging because ofborg doesn't do distributed builds, each build must be one architecture only
<gchristensen>
but other than what I assume are one or two jobs like that, it should be fine
<gchristensen>
so maybe that spoils it
<Ericson2314>
Sorry but I am still having trouble understanding the advantages of ofborg vs hydra in that case
<Ericson2314>
isn't much of ofborg's business logic for dealing with nixpkgs in particular?
<Ericson2314>
like using the commit message to decide what to build?
<samueldr>
one is continuous testing, the other is continuous building/delivery
<Ericson2314>
and taking commands from github comments
<gchristensen>
sure, but that is all isolated into nixpkgs-specific code paths. extending ofborg to build Nix is not a huge lift
chrisaw has joined #nixos-dev
<gchristensen>
(in fact it is nearly done, if we want it)
<LnL>
a big difference between ofborg and hydra is that ofborg doesn't centralize derivations
<gchristensen>
yea
<Ericson2314>
centralize derivations?
<Ericson2314>
what does that mean?
<LnL>
the hydra master machine is the only thing that evaluates expressions
<niksnut>
gchristensen: sounds good to me
<LnL>
every builder receives derivations and inputs to build
<Ericson2314>
so how does ofborg work?
<gchristensen>
because of that coordination, in hydra if an x86_64-linux build depends on an aarch64-linux build, hydra distributes it fine. ofborg won't because it doesn't do central coordination
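A minimal sketch of the situation gchristensen describes (the package and attribute names are illustrative, not from the actual Nix jobset): an x86_64-linux derivation depending on an aarch64-linux build. A centrally coordinated scheduler can realise the aarch64 dependency on a matching builder first; a standalone single-architecture builder running plain `nix-build` cannot.

```nix
# Illustrative only: a cross-architecture dependency. Building `bundle` on an
# x86_64-linux machine requires first realising `pkgsAarch64.hello`, which a
# central scheduler (hydra) can dispatch to an aarch64 builder, but a
# single-architecture builder has no way to do.
let
  pkgs        = import <nixpkgs> { system = "x86_64-linux"; };
  pkgsAarch64 = import <nixpkgs> { system = "aarch64-linux"; };
in
pkgs.runCommand "bundle" { payload = pkgsAarch64.hello; } ''
  mkdir -p $out
  cp -r $payload $out/aarch64-payload
''
```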
<Ericson2314>
different machines do everything
<Ericson2314>
cause no remote builders?
<gchristensen>
when a build is requested for `hello`, the builders run `nix-build . -A hello`
<LnL>
evaluation of everything and builds are split up
<LnL>
and builds also evaluate (just the specified attributes) locally on the builder
<gchristensen>
(think back to the threat model and usage scenario of random people running builders, probably not wanting to trust me to push them arbitrary drvs)
<clever>
gchristensen: pushing a drv itself is fine, because it's all content addressed
<Ericson2314>
uhuh
<clever>
gchristensen: the security problem is pushing the deps to save that builder some time
<LnL>
yeah
<clever>
you should be able to `nix-copy-closure` a `.drv` without needing the remote machine to be trusted
<LnL>
not sure if nix differentiates that, but yeah I think that would be ok
<Ericson2314>
yes you should be able to
<Ericson2314>
so evaluating on different machines is good
<Ericson2314>
but I do like one big database of derivations
<LnL>
that will copy either a drv or an input both of which are hashed
<Ericson2314>
I also am skeptical of security differences between nixpkgs and arbitrary PRs, but that is a separate issue
<clever>
Ericson2314: i have a fixed-output derivation that would make you weep, gchristensen has seen it
<cransom>
is it the one with the hash collision?
<clever>
cransom: worse, it gives an attacker a shell on your machine
<cransom>
yeah, thats probably not great.
<clever>
they are still restricted by the nix sandbox, but it's still not a good thing
<Ericson2314>
clever: interesting
<clever>
a fixed-output derivation only requires that $out match a certain hash when it terminates
<clever>
but what if it just never terminates?
<clever>
it has free rein of your cpu and network
<clever>
thats basically a botnet node :P
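For readers following along, a sketch of the shape of a fixed-output derivation (URL, hash, and the curl invocation are placeholders): the output hash is only checked when the builder exits, and the builder runs with network access, which is the loophole clever is describing.

```nix
# Sketch only; placeholder URL and hash. Nix verifies that $out matches
# outputHash *after* the builder terminates; until then the process has
# network access, so a builder that never exits keeps CPU and network.
let pkgs = import <nixpkgs> {};
in derivation {
  name = "example-src";
  system = builtins.currentSystem;
  builder = "${pkgs.bash}/bin/bash";
  args = [ "-c" "${pkgs.curl}/bin/curl -L https://example.org/src.tar.gz -o $out" ];
  outputHash = "0000000000000000000000000000000000000000000000000000"; # placeholder
  outputHashAlgo = "sha256";
  outputHashMode = "flat";
}
```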
<LnL>
builds will get timed out, but that's only in place for capacity reasons
<Ericson2314>
ahahahahahahahaah
<Ericson2314>
so there is the semi-sandbox for fixed outputs, right?
<clever>
fixed-output derivations are still in a chroot, but the network namespace isnt used
<clever>
so they have full network access
<Ericson2314>
hahaha
<Ericson2314>
love it
<adisbladis>
And yet people still argue against restricting FODs /o\
<LnL>
moving all fetchers to builtins definitely has some disadvantages
<samueldr>
could nix have some way to "seal" FOD providers? e.g. at a higher level you register FOD builders, only those builders can do FODdy stuff, then when evaluating stuff from there they can't do FOD stuff except through those registered builders
<samueldr>
(or maybe through another evaluation entirely, that is configured via an option?)
<samueldr>
so we get the worst^W best of both solutions, no need to add so many builtins, and still allow end-users to shoot themselves in the foot if they so desire
<andi->
combine FODs with hash collisions and you have some (arbitrary) I/O capability in Nix ;-)
<clever>
a while back, i was trying to build electron with nixpkgs
<clever>
and to build electron, you first need to fetch the source, with gclient
<worldofpeace>
disasm: Cool, I think Wednesday or Thursday afternoons could work for me. How about you?
<clever>
and gclient generates a directory that is 30gig in size
<clever>
i definitely wouldnt want to curse the community with putting gclient directly into nix itself as a builtin fetcher :P
<clever>
so that kind of thing should be a FOD
<clever>
but also, gclient runs post-fetch hooks, that mutate the source, and sometimes run a 2nd copy of gclient that was shipped with the source, and wasnt patched
<clever>
so that whole thing turned into a nightmare to make it work
<clever>
a nightmare that eats 30gig of bandwidth for each iteration
<Ericson2314>
that's amazing
<LnL>
the main problem is how you vet that, given that you usually want other programs as input for these kinds of custom fetchers
<Ericson2314>
of course electron would do something like that
<clever>
Ericson2314: i think chromium does the same thing, but nixpkgs is cheating with a release tarball that has it pre-done
<clever>
Ericson2314: gclient also prefers to keep the .git folder, for 30 different repos, so you can quickly update
<clever>
i then gave up trying to FOD gclient, and began to re-implement it as gclient2nix :P
<Ericson2314>
haha nice
<Ericson2314>
gchristensen: so basically I don't want the perfect to prevent the good
<Ericson2314>
and ofborg or travis or whatever would still be better than what we have today
<Ericson2314>
but I hope we at least move towards needing just one tool for all these things
<Ericson2314>
be it hydra or something else
<gchristensen>
travis would be a significantly worse option than anything else
<Ericson2314>
agreed :)
<clever>
Ericson2314: my new implementation of gclient2nix, would use haskell to parse the files, run nix-build to fetch things, then recursively do that
<eraserhd>
i know I'm stepping in to a convo I haven't been following, but what if the user has to upload the sources to a file server and hydra doesn't build anything not cached?
<clever>
Ericson2314: and then use hnix to generate an expr, that would use plain fetchFromGitHub/fetchurl/fetchgit, and then compose things together in a non-fixed drv
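A hedged sketch of the kind of expression such a tool might emit (owners, URLs, revs, and hashes below are placeholders): every download goes through a standard fixed-output fetcher, and the DEPS-style layout is then assembled in an ordinary, non-fixed derivation with no network access.

```nix
# Placeholder owners/revs/hashes throughout; the point is the structure:
# fixed-output fetchers for the sources, then a normal derivation composes
# them into the directory layout that DEPS describes.
{ pkgs ? import <nixpkgs> {} }:
let
  src = pkgs.fetchFromGitHub {
    owner = "example"; repo = "electron";
    rev = "0000000000000000000000000000000000000000";                 # placeholder
    sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
  };
  dep = pkgs.fetchgit {
    url = "https://example.googlesource.com/some-dep";                # placeholder
    rev = "0000000000000000000000000000000000000000";                 # placeholder
    sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
  };
in
pkgs.runCommand "gclient-tree" {} ''
  mkdir -p $out/src
  cp -r ${src}/. $out/src/
  chmod -R u+w $out/src
  mkdir -p $out/src/third_party
  cp -r ${dep} $out/src/third_party/dep
''
```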
<Ericson2314>
clever: it is discovering the hashes via downloading?
<clever>
yeah, the haskell code is using a variant of nix-prefetch-git
<clever>
to both fetch things (for recursion) and for hash discovery
<clever>
Ericson2314: ive only seen it doing git, but i never got gclient2nix fetching the entire thing
<Ericson2314>
gchristensen: We have a new when2meet BTW https://www.when2meet.com/?8349006-HM1c6, it's looking like tuesday or thursday 9am is a good recurring time
<clever>
Ericson2314: deps= is a map of where to put things and where to get them from; recursedeps= (bottom of the file) says to only check the DEPS files in those deps
<Ericson2314>
clever: does it distinguish between "internal" repos that also use gclient and "leaf" ones that do not?
<Ericson2314>
haha
<Ericson2314>
beat me to it
<Ericson2314>
clever: so with IFD I would make your tool *non* recursive, but it does another round of IFD for each "recursedeps="
<Ericson2314>
make sense?
<clever>
yeah
<clever>
but i need the hashes of those recursive deps
<Ericson2314>
clever: does it give you git hashes or refs?
<clever>
possibly both
<Ericson2314>
I would pass in a whitelist / "lockfile"
<Ericson2314>
and then the tool converts from that without IO
<clever>
the gclient2nix code was meant to be run outside a sandbox, and would generate an IFD-free expr that has all of the right hashes
<Ericson2314>
I also will make an RFC soon (maybe with Matt) that we should support git tree and blob hashes (no submodules) natively
<Ericson2314>
then we don't need separate hashes for more git things
<clever>
i was initially intending to make the electron expr be in nixpkgs
<Ericson2314>
I am hoping we move away from committed autogenerated code :)
<clever>
i feel that IFD in nixpkgs is worse though
<Ericson2314>
because?
<Ericson2314>
clever: btw I finally updated the Ret cont recursive nix RFC, including sketching how we might be able to turn IFD into ret cont
<clever>
Ericson2314: i recently saw a hydra eval take 5 hours, because it had to build 3 copies of ghc at IFD time
<clever>
Ericson2314: and because hydra isnt aware of those IFD deps, it cant publish them to the cache, so nix-instantiate also takes 5 hours
<clever>
and even if they were in the cache, i would have a surprise 1gig download (the final ghc) just to import from that derivation
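A minimal import-from-derivation sketch for readers who haven't seen the pattern: the imported file only exists after `generated` has been built, so evaluation blocks on that build. Scale the generator's closure up to a compiler, as in clever's example, and the evaluation itself takes hours.

```nix
# Minimal IFD example: evaluating `.greeting` forces a *build* of `generated`
# at evaluation time, before the rest of the expression can be evaluated.
let
  pkgs = import <nixpkgs> {};
  generated = pkgs.runCommand "generated.nix" {} ''
    echo '{ greeting = "hello"; }' > $out
  '';
in
(import generated).greeting
```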
<Ericson2314>
clever: right that's a real problem, but it's a hydra problem not an IFD problem
<simpson>
It is starting to sound like we need to have a more nuanced concept of what makes a build expensive. Also some sort of community-derived expectation for which builds are expensive, so that we can distinguish between everybody-problems and Hydra-problems.
<clever>
i wonder if recursive nix may solve some of these issues
<Ericson2314>
clever: so even with a better implementation of nix, we can have more concurrent evaluators
<Ericson2314>
and be able to do more than one IFD at a time
<Ericson2314>
and not block on them
<clever>
Ericson2314: oh, did you see my builtins.fork idea?
<niksnut>
bold proposal: ban all network access in fixed-output derivations (so even ban builtins:fetchurl). The only way to access external stuff is via flake inputs.
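A hedged sketch of what "external stuff only via flake inputs" could look like (input names and URLs are invented): because the inputs are declared statically, tooling can query, mirror, or vendor them without evaluating arbitrary code.

```nix
# flake.nix sketch; names and URLs are illustrative. All external sources are
# declared up front as inputs, so they can be enumerated and mirrored.
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    someSrc = { url = "github:example/some-src"; flake = false; };
  };

  outputs = { self, nixpkgs, someSrc }: {
    # a package built from the pinned, mirrorable inputs above
    packages.x86_64-linux.default =
      nixpkgs.legacyPackages.x86_64-linux.callPackage ./package.nix {
        src = someSrc;
      };
  };
}
```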
<Ericson2314>
that is what hercules more or less does, as I understand it
<simpson>
If FODs had to indicate their expected *size*, then we could start to talk about expected network bandwidth costs. Also it'd be interesting (but possibly needing to be removed to a higher level) to have policy that can tradeoff bandwidth for local CPU/disk.
<clever>
simpson: gentoo manifest files require you to specify the filesize, and several hashes (different algos) upfront
<clever>
simpson: and the fetcher will enforce that they all match
<Ericson2314>
niksnut: I think that basically just moves the problem under a new name?
<adisbladis>
It would make reaching for IFD a whole lot less inviting
<clever>
niksnut: but one of those repos (src/DEPS via line 76) has another DEPS file, that you must fetch, recursively
FRidh has joined #nixos-dev
<clever>
and there are hooks to run after fetching
<clever>
and all of those things just dump into a single directory (DEPS says where to dump each one)
<Ericson2314>
niksnut: right, well that's tantamount to having a whitelisted form of fixed-output derivations or something? or saying we can just use (pure) builtins.fetch*?
<Ericson2314>
I'm not against flakes being the method as a matter of design perhaps, but the rest of flakes is very orthogonal
<Ericson2314>
just to be clear
<clever>
Ericson2314: i also prefer <nix/fetchurl.nix> over builtins.fetchurl
<clever>
Ericson2314: <nix/fetchurl.nix> is a special derivation that calls c++ rather than running bash, so it can run in parallel and isn't forced to be sequential at eval time
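Usage sketch (URL and hash are placeholders): `<nix/fetchurl.nix>` evaluates to a fixed-output derivation whose download is performed by the daemon's built-in fetcher, so it is scheduled like any other build instead of blocking evaluation the way `builtins.fetchurl` does.

```nix
# Placeholder URL and hash; the download happens at build time via the
# built-in fetcher, not during evaluation.
import <nix/fetchurl.nix> {
  url = "https://example.org/src.tar.gz";
  sha256 = "0000000000000000000000000000000000000000000000000000"; # placeholder
}
```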
<niksnut>
the problem is that you can't query (or mirror) the fetchurl dependencies of a repo
<Ericson2314>
clever: I agree with that
<niksnut>
so for instance you can't vendor all the external dependencies in an easy way
<niksnut>
curse of turing completeness, as usual
<clever>
my solution with gclient, was to pre-fetch everything once, in an impure program
<Ericson2314>
niksnut: fair, but a separate problem
<clever>
and then generate an expr, that uses plain old fetchFromGitHub/fetchgit
<clever>
so all FOD is standard fetchers
<Ericson2314>
OK I need to grab lunch, but happy to distinguish talking about these things after
<Ericson2314>
bye for now!
<clever>
then all composing things together and running hooks, happens in a pure derivation, with no network
Synthetica has joined #nixos-dev
drakonis has quit [Ping timeout: 245 seconds]
drakonis has joined #nixos-dev
drakonis_ has joined #nixos-dev
drakonis has quit [Read error: Connection reset by peer]
<Ericson2314>
gchristensen: perhaps you should be coauthor with me on the nix prs rfc?
FRidh has quit [Quit: Konversation terminated!]
<domenkozar[m]>
niksnut: was this the bug with min-free GC you were talking about?
<domenkozar[m]>
waiting for the big garbage collector lock...
<domenkozar[m]>
error: unexpected end-of-file
<niksnut>
no
<niksnut>
iirc, it was a race that allowed paths to be deleted while they were still in use by a build
<clever>
ive had several variations of that
<clever>
paths in-use by an active build getting GC'd
<clever>
paths in use by an eval getting GC'd
<domenkozar[m]>
so I should probably open a bug for this
<clever>
drv's an eval just made getting GC'd
<domenkozar[m]>
clever: that should be fixed in Nix 2.3
<clever>
most recently, a profile made by nix-env was GC'd, and the latest generation was a dead symlink!!
<Ericson2314>
@tetdim there's no correct way to simplify it other than getting rid of the wrapper dependency
<clever>
niksnut: i suspect a large chunk of my gc issues are race conditions that nix-collect-garbage can also trigger, but min-free just makes them easier to hit, by forcing a gc to start mid-eval, and also pausing the eval to wait for the gc
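For context, the min-free behaviour being discussed, expressed as a hedged NixOS configuration sketch (the thresholds are arbitrary): once free space on the store filesystem drops below min-free, the daemon starts a garbage collection in the middle of whatever is running, which is the interleaving clever suspects makes these races easy to hit.

```nix
# Illustrative thresholds only: start GC below ~5 GiB free, stop at ~20 GiB.
{
  nix.extraOptions = ''
    min-free = ${toString (5 * 1024 * 1024 * 1024)}
    max-free = ${toString (20 * 1024 * 1024 * 1024)}
  '';
}
```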
tetdim has joined #nixos-dev
asymmetric_ has joined #nixos-dev
ris has joined #nixos-dev
pie_ has quit [Quit: pie_]
asymmetric_ has quit [Ping timeout: 240 seconds]
asymmetric has quit [Ping timeout: 264 seconds]
psyanticy has quit [Quit: Connection closed for inactivity]
drakonis_ has quit [Ping timeout: 240 seconds]
asymmetric has joined #nixos-dev
drakonis_ has joined #nixos-dev
asymmetric has joined #nixos-dev
asymmetric has quit [Changing host]
<disasm>
worldofpeace: sounds good!
drakonis1 has quit [Quit: WeeChat 2.6]
drakonis has joined #nixos-dev
drakonis_ has quit [Ping timeout: 245 seconds]
drakonis has quit [Ping timeout: 246 seconds]
orivej has joined #nixos-dev
justanotheruser has quit [Ping timeout: 276 seconds]
pie_ has joined #nixos-dev
johnny101 has quit [Quit: Konversation terminated!]