<gchristensen>
I think it is a huge success that almost all the PRs on the first page are _module_ changes, instead of page after page of package changes
<simpson>
What's the right way to do a null src? I just need to grab a JAR and call makeWrapper a bunch, so I don't actually have anything to unpack.
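(No answer appears in the log, so here is a hedged sketch of one common pattern for this: make the fetched JAR itself the src and skip unpacking. "mytool", the URL, and the hash are placeholders, not anything from the channel.)

    { stdenv, fetchurl, makeWrapper, jre }:

    stdenv.mkDerivation rec {
      name = "mytool-1.0";

      # the JAR is the artifact; there is no tarball to unpack
      src = fetchurl {
        url = "https://example.org/mytool-1.0.jar";
        sha256 = "0000000000000000000000000000000000000000000000000000";
      };

      nativeBuildInputs = [ makeWrapper ];

      # a no-op unpack phase instead of a null src
      unpackPhase = "true";

      installPhase = ''
        mkdir -p $out/share/java $out/bin
        cp $src $out/share/java/mytool.jar
        makeWrapper ${jre}/bin/java $out/bin/mytool \
          --add-flags "-jar $out/share/java/mytool.jar"
      '';
    }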
<orivej>
Hydra has finally built all x86_64-linux staging jobs (except a few hanging with no output), and it looks like all regressions to master (https://hydra.nixos.org/eval/1412079?compare=trunk) are either transient build failures or already accounted for in staging or in master.
<vcunat>
Yes. There are actually more regressions on master itself - from a brief look, due to the wayland update.
<vcunat>
A few hundred (transitively) failed jobs just on x86_64-linux
<vcunat>
orivej: thanks for getting staging into a good shape, and fixing lots of other problems, too :-)
FRidh has quit [(Quit: Konversation terminated!)]
FRidh has joined #nixos-dev
orivej has quit [(Ping timeout: 268 seconds)]
goibhniu has joined #nixos-dev
phreedom has joined #nixos-dev
taktoa has joined #nixos-dev
ma27 has joined #nixos-dev
ma27 has quit [(Quit: WeeChat 1.9.1)]
ma27 has joined #nixos-dev
<regnat[m]>
The latest job of the 17.03 release on hydra has a lot of new failed builds, probably for transient reasons
<regnat[m]>
Could someone with the rights to do it restart it?
<ma27>
regnat[m]: isn't 17.03 out of maintenance?
<regnat[m]>
ma27: there have been a few commits backported to it recently
<domenkozar>
FYI: master now requires grahamcofborg-eval to pass for each PR
<domenkozar>
17.09 requires all current grahamcofborg-eval CI statuses to pass
<domenkozar>
let me know if this causes any issues
phreedom has quit [(Quit: No Ping reply in 180 seconds.)]
phreedom has joined #nixos-dev
ma27 has quit [(Ping timeout: 240 seconds)]
ma27 has joined #nixos-dev
FRidh has quit [(Remote host closed the connection)]
<LnL>
nice!
ma27 has quit [(Ping timeout: 240 seconds)]
<vcunat>
Oh. I thought it would only be needed for PRs and not for direct pushes :-0
<vcunat>
error: failed to push some refs to 'git@github.com:NixOS/nixpkgs.git'
<gchristensen>
domenkozar: can the settings be made less strict? https://github.com/NixOS/nixpkgs/pull/31891 for example can't be merged, since it isn't completely up to date with master
<vcunat>
BTW, nice thing about direct pushing is that you can sign all the commits; I even merge PRs of others by creating such merges.
<gchristensen>
yeah, that is a bonus
<gchristensen>
a downside is my nearly daily PMing people about the eval being broken due to direct pushes
<gchristensen>
I don't _think_ the intention was to block direct pushes at this point
<MichaelRaskin>
Yeah, I remember niksnut being annoyed about immediately self-merged PRs.
orivej has joined #nixos-dev
<vcunat>
At NixCon we discussed some statistics, and there was a PR merged a couple of seconds after submission :-)
<MichaelRaskin>
I definitely did that at some point.
<vcunat>
I don't see a reason to strictly forbid self-merging, e.g. if a change is so non-controversial that no one even bothers to react within a week or two.
<MichaelRaskin>
I said _immediately_.
<gchristensen>
or in a brave future world where you can ask a bot to merge a PR after it verifies certain things build ...
<MichaelRaskin>
I did the immediate self-merge because I just wanted to minimize the risks in terms of git doing random stuff and lacking any connection between model and UI logic (compared to Monotone or Mercurial), and GitHub looks restrictive enough to not explode everything on merge.
<vcunat>
Ah, right. I can't see the advantage of that in comparison to direct pushing. (Except to dodge a push policy.)
<MichaelRaskin>
Erm, git being horrible?
<vcunat>
gchristensen: we use GitLab at work exactly this way
<MichaelRaskin>
I mean, doing a creative push with git is easier than with anything else.
<gchristensen>
vcunat: neat :)
<vcunat>
you push "merge when CI succeeds" and forget the thing
<MichaelRaskin>
And it was before force-push protection and other sanity checks on master.
<vcunat>
(until you get e-mail that it failed :)
<MichaelRaskin>
Yeah, _if_ it ever becomes a viable option to update stuff like that…
<vcunat>
I'm not sure if the GUI hides some git pitfalls.
<vcunat>
(the GitHub GUI)
<MichaelRaskin>
It has less functionality → it has fewer pitfalls, because for git every possibility contains a pitfall.
<vcunat>
:-)
<vcunat>
Well, the button would be nice. 15 minutes after submission of a trivial fix, and the mandatory check isn't complete yet https://github.com/NixOS/nixpkgs/pull/31892
<gchristensen>
bummer :( but also if master has moved, I think you can't merge
<MichaelRaskin>
I would call this a disaster.
<gchristensen>
I think that is a bit harsh
<vcunat>
The upside is that at this point master is rather unlikely to move ;-)
<gchristensen>
but it isn't configured properly
<MichaelRaskin>
Well, OK, I would call it a disaster if all PRs have to be explicitly rebased on top of master-at-the-moment-of-merge for any actual period of time.
<MichaelRaskin>
Of course, a misconfiguration that lasts just for today is just a funny story.
<gchristensen>
right
<vcunat>
Yeah, with our rate - 800 PRs merged in the last 30 days...
<MichaelRaskin>
I guess on the bright side, that + ff-merges would allow us to set up SVN access to the master branch!
<gchristensen>
vcunat: oh I see, the checks finished once, but then reran immediately due to editing the PR body. I've used this as a feature to retrigger runs on PRs which failed for mysterious reasons, but it isn't ideal for cases like this. I'll work on a way to retrigger so I can ignore edited events.
<gchristensen>
vcunat: merge!
<vcunat>
wait!
* gchristensen
isn't merging anything, but your branch can finally merge
<vcunat>
It would actually be workable if the bot was fast.
<vcunat>
With some script to pseudo-automatically create either a PR - or simply a branch where the bot could be instructed to start CI on it.
<gchristensen>
I'll setup another evaluator
<vcunat>
> Choose which status checks must pass before branches can be merged into master. When enabled, commits must first be pushed to another branch, then merged or pushed directly to master after status checks have passed.
<vcunat>
I'm afraid the feature isn't available without forbidding direct pushing of changes not checked by CI.
<gchristensen>
bummer :(
<vcunat>
I don't think we're ready for that at *this* moment.
<gchristensen>
I agree
<vcunat>
domenkozar: ^^ (I can't manage these things in the official repo)
<vcunat>
We might be able to work around these limits somehow, but it will probably be cumbersome. GitHub is just inflexible. At best we might get lucky, with some new feature deployed soon that fits us better.
<vcunat>
gchristensen: what about modifying the bot to give positive reviews? :-)
<gchristensen>
with time, people will get used to "red X means actually broken"
<gchristensen>
on evals?
<vcunat>
For example. On some conditions.
<vcunat>
I'm thinking about a different option than requiring CI: "Require pull request reviews before merging. When enabled, all commits must be made to a non-protected branch and submitted via a pull request with at least one approved review and no changes requested before it can be merged into master."
<gchristensen>
that seems to put us in the same place we are now
<vcunat>
Right, I see, that actually can't give us more flexibility.
<MichaelRaskin>
Erm. Let me translate: if committer A makes an effort and reviews 50 PRs, and next week committer B decides to clean up the follow-ups that A missed, it becomes impossible?
<vcunat>
Well, if we were starting nixpkgs "hub" repo now, I would go for GitLab with GitHub as some kind of automatic mirror.
<vcunat>
... but changing is hard, and we probably don't have enough motivation (yet)
<gchristensen>
:)
orivej has quit [(Ping timeout: 255 seconds)]
<MichaelRaskin>
I would expect a typical Nix-related change to have a trajectory of ignored-ignored-ignored-Eelco Dolstra randomly decides to pay attention-in a month some random action is taken; it is not guaranteed that the choice takes into consideration any part of the previous year of discussions.
<MichaelRaskin>
(the last month does get considered)
<vcunat>
I wouldn't be that pessimistic :-) but I guess there's some bit of truth.
<MichaelRaskin>
I think it was you who merged some change, got a question «why do it like that» and replied with «erm, we tagged you like weekly for a month during the discussion»
<MichaelRaskin>
I do recognise that from the load point of view Eelco Dolstra does a lot of great work. But de-bottlenecking is just not on his priority list.
<vcunat>
that's possible :-)
<vcunat>
(meant for your next-to-last message)
<MichaelRaskin>
Yeah, length/delay ratio made it clear.
<vcunat>
We have the typical problem of a growing project. Bringing all important decisions to a (shared) single person doesn't scale.
<gchristensen>
for sure
<vcunat>
I suppose we'll have to either create a top-level group where it's enough to (say) get two approvals of the several members, and/or do some splitting akin to "subsystems" in the Linux kernel.
<MichaelRaskin>
That's true, but I would say Nixpkgs might be below the median across projects that successfully grew past that.
<gchristensen>
what do you mean?
<vcunat>
I'm also unsure what's meant.
<MichaelRaskin>
Well, at the current size the normal bottlenecking problem is people asking for the opinion of the top person too often, but some problems never get any reaction. But if some decision is reached by a consensus of multiple caring experienced people, the top person just never pays attention to the problem, either.
jtojnar has quit [(Read error: Connection reset by peer)]
<gchristensen>
#nixos is over 700 people on a regular basis now. the github repo receives 800+ PRs/mo, and is close to #1 in # of active reviewers on a repo on github. the binary cache sees, IIRC, >50,000 unique IPs per month. we see the beginnings of a docs team blooming. I think nixos is bigger than we expect
<domenkozar>
so the restriction is too strict for master?
contrapumpkin has quit [(Quit: My MacBook Pro has gone to sleep. ZZZzzz…)]
<vcunat>
Right, we mostly moved to consensus-based decision-making for the "important" changes, and it's not too bad so far.
<vcunat>
remote: error: Required status check "grahamcofborg-eval" is expected.
<domenkozar>
I don't seem to have a way to restrict CI on PRs
<MichaelRaskin>
vcunat: I would say at this huge size I would expect a clearer understanding of what constitutes consensus. Not instances of people who never asked us to wait for them during the discussion later claiming a lack of consensus.
<domenkozar>
but not on master
<domenkozar>
that means all committers would have to go through a branch first
<domenkozar>
and merge once CI passes
<vcunat>
domenkozar: restricting by reviews or CI always disables direct pushing, apparently
<vcunat>
I see no other option in the GitHub settings
<domenkozar>
well it disables it because the commit doesn't have a status attached
<domenkozar>
you could push to a different branch
<domenkozar>
wait for CI
<vcunat>
yes, when it gets the status, direct pushing worked for me
<domenkozar>
and merge to master
<domenkozar>
I'll revert this change for now
<vcunat>
domenkozar: but what if someone else pushes to master before you?
<domenkozar>
it's too invasive
<domenkozar>
until we have a proper discussion
<vcunat>
you have to rebase and wait for CI again
<domenkozar>
vcunat: I disabled rebasing restriction
<domenkozar>
I'd start by imposing this restriction on 17.09
<domenkozar>
but first we need to discuss it :)
<domenkozar>
cc gchristensen
<gchristensen>
sounds good
<MichaelRaskin>
Does it come with an understanding that it is OK to have a single PR to backport a lot of stuff to 17.09?
<domenkozar>
sure, it only requires status check, no more
<MichaelRaskin>
I mean that given what we just discussed about decision making, it would be nice to have some announcement and to say that bundling is OK.
<gchristensen>
I'm sure there will be other concerns too
<gchristensen>
"domenkozar | but first we need to discuss it :)" :)
<domenkozar>
MichaelRaskin: I've reverted the restriction
<MichaelRaskin>
Well, I want to predict some concerns early.
<MichaelRaskin>
domenkozar: I meant a more general discussion; I noticed that you had said that the restriction is reverted, thanks.
<domenkozar>
yeah probably need a rfc
jtojnar has joined #nixos-dev
<gchristensen>
oh well :) thank you for trying, domenkozar
<domenkozar>
happy to break prod any time
<domenkozar>
joking aside, sorry for trouble.
<gchristensen>
that's the spirit
<MichaelRaskin>
Well, I would say that no other way but trying works for figuring out github features anyway!
<domenkozar>
MichaelRaskin: indeed
<vcunat>
Some kinds of problems are difficult to anticipate.
<gchristensen>
so if a build starts and it has to build many dependencies first, will it send buildStarted for the top level build and then just stepFinished for each sub-dep?
<clever>
not sure, let me check the source some more
<clever>
internally, its implemented by various parts of hydra running the hydra-notify script
<ma27>
Mic92: will try it out and comment on the PR after that :)
<Mic92>
7374650
<Mic92>
sorry
<FRidh>
Why do we actually store propagatedBuildInputs in `propagated-native-build-inputs`? It only adds to the closure size while stdenv could simply add the inputs to `buildInputs` during eval
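(For readers without the context, a hedged sketch of the mechanism FRidh is describing; the package is invented.)

    { stdenv, zlib }:

    stdenv.mkDerivation {
      name = "libexample-0.1";
      src = ./.;
      # during fixup this list is written to a file under $out/nix-support
      # (the propagated-*-build-inputs files in question); setup.sh in
      # downstream builds reads it and adds zlib to their inputs. Because
      # the file contains zlib's store path, it also keeps zlib in the
      # runtime closure, which is the cost FRidh points out.
      propagatedBuildInputs = [ zlib ];
    }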
<vcunat>
niksnut: BTW, Hydra's feature that shows git changes (since the last time a job worked) doesn't work. I think it did work recently.
<copumpkin>
hmm! so maybe not my fault
<copumpkin>
anyway I can't look until later either
<copumpkin>
sorry :(
<copumpkin>
at work now
<vcunat>
saving the fun stuff for later :-)
<copumpkin>
always!
<orivej>
this is strange, darwin.Csu fails to build at 304259, but hydra claims (https://hydra.nixos.org/build/63901181) that stdenvBootstrapTools were good. could it be that stdenvBootstrapTools did not depend on darwin.Csu until recently?
<copumpkin>
hmm, everything should depend on Csu indirectly because libsystem includes it
<copumpkin>
and it shouldn't be a runtime reference
<orivej>
Could someone check if `nix-build -A darwin.Csu` builds at 304259b? (I'm testing on macOS 10.11 and it fails.)
<copumpkin>
orivej: yeah fails on 10.12 too
<copumpkin>
makes sense
<copumpkin>
I do remember this coming up a while ago
<copumpkin>
that's us trying to tell it not to set rpaths
<copumpkin>
it's mostly some sort of interplay between LnL's "rpath everywhere" thing and Csu not supporting that
<copumpkin>
we can also just turn off pre-10.5 support because it's ridiculous
<gchristensen>
LnL: does that mean we can start opening up grahamcofborg to more users, a bit less cautiously?
<copumpkin>
but I could swear I'd fixed this somewhere in setup.sh
<copumpkin>
after checking extensively with Sonarpulse
<LnL>
yeah, when it's merged I plan to enable it on my builder
<LnL>
copumpkin: we moved the CF stuff out of the condition
<LnL>
because we set one of those by default now
<copumpkin>
oh so where's it getting an rpath?
<LnL>
if you have an stdenv with CF it'll get set
<LnL>
so I think the Csu purity change moved some stuff around, introducing the failure
<LnL>
(during some of the stdenv stages there's no CF yet)
Sonarpulse has quit [(Ping timeout: 252 seconds)]
<orivej>
this may be the commit that broke darwin.Csu, but "nix-build pkgs/top-level/release.nix -A stdenvBootstrapTools.x86_64-darwin.stdenv" broke later
<vcunat>
Hmm, the borg could do cp /home/LnL/foo $out ? :-)
<vcunat>
(or perhaps upload it somewhere instead of copying?)
<gchristensen>
vcunat: there is a reason Borg can only be triggered by a very restricted set of users
<gchristensen>
and only evaluates by default
<vcunat>
One can do builds during evaluation time, you know...
<gchristensen>
I do
<vcunat>
well, just checking
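(A hedged illustration of what "builds during evaluation time" means, via import-from-derivation; the expression is invented:)

    with import <nixpkgs> {};
    let
      # a derivation whose output is itself a Nix file
      generated = runCommand "generated.nix" { } ''
        echo '{ answer = 42; }' > $out
      '';
    # merely *evaluating* this forces Nix to *build* "generated" first
    in (import generated).answer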
<gchristensen>
evals run on linux, which has good sandboxing
<vcunat>
I suppose people will only run it on machines that are only used for testing, or some such.
<LnL>
it runs with restricted eval
<LnL>
but during a build on darwin you could copy stuff yes
<vcunat>
and fixed-output derivations will also be able to do networking...
<gchristensen>
yes indeed
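(The escape hatch being referred to, as a hedged sketch; URL and hash are placeholders:)

    { stdenv, curl, cacert }:

    # a fixed-output derivation: the hash of $out is declared up front, so
    # the sandbox allows arbitrary network access and only checks that the
    # result matches the declared hash
    stdenv.mkDerivation {
      name = "fetched-blob";
      nativeBuildInputs = [ curl ];
      SSL_CERT_FILE = "${cacert}/etc/ssl/certs/ca-bundle.crt";
      buildCommand = ''
        curl -L https://example.org/blob -o $out
      '';
      outputHashMode = "flat";
      outputHashAlgo = "sha256";
      outputHash = "0000000000000000000000000000000000000000000000000000";
    }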
<gchristensen>
vcunat: I invite you to perform (friendly) adversarial research on the project, and to report any problems you come across, so we can make it better
<niksnut>
vcunat: fixed
<vcunat>
niksnut: :-)
<gchristensen>
what is fixed?
<aminechikhaoui>
the world
<gchristensen>
_phew_
<vcunat>
gchristensen: git log links from Hydra
<gchristensen>
oh, cool
<aminechikhaoui>
hehe
<LnL>
vcunat: my builder does run with a separate unprivileged user so it's slightly harder than that :)
<MichaelRaskin>
gchristensen: the problem is that ideally it should be run on _subnets_ only used for testing…
<gchristensen>
what should be?
<MichaelRaskin>
borg
<vcunat>
LnL: Good. Security is mostly about non-intentional incidents.
<gchristensen>
I don't understand
<MichaelRaskin>
Well, just user separation already limits the host impact a lot, but fixed-output builds have arbitrary network access.
<gchristensen>
yes
phreedom has quit [(Ping timeout: 260 seconds)]
<vcunat>
gchristensen: [attacking the borg service] :-) I'm not sure. There are so many things to do. Reminds me that around this month Mozilla funded a security audit of our open-source projects (knot-dns + knot-resolver), but borg probably isn't too interesting for them yet...
<gchristensen>
vcunat: well an audit of "NixOS and related infra" could be interesting perhaps
<vcunat>
Yes, nixos/nixpkgs might be an interesting topic for them.
<gchristensen>
like that bug in hydra where unprivileged users can (iirc) restart builds if they have the audacity to click a grayed out link
<MichaelRaskin>
I dunno; maybe when borg is used, nix-daemon should be run in an environment that doesn't have actual network access, only http and https proxy with local subnet blocked
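(Roughly what that could look like on NixOS, sketched with a hypothetical proxy address; blocking the local subnet would live in the proxy's own config:)

    {
      # route the daemon's outbound traffic through a proxy instead of
      # giving builds direct network access
      systemd.services.nix-daemon.environment = {
        http_proxy  = "http://10.0.0.1:3128";
        https_proxy = "http://10.0.0.1:3128";
      };
    }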
<MichaelRaskin>
gchristensen: I guess you still need a Hydra login for that.
<gchristensen>
MichaelRaskin: I'd love to read a write-up from you on operationally securing a borg install. as my intention is for it to be a community-run project, it would be good to make strong recommendations
<Sonarpulse>
copumpkin: what was the issue?
<vcunat>
(but all the projects tend to be closely related either to "internet" or Firefox)
<gchristensen>
vcunat: mozilla uses nix... :)
<MichaelRaskin>
I don't think I am qualified to write actual opsec write-ups.
<gchristensen>
MichaelRaskin: I think you've written a great set of concerns and resolutions already
<MichaelRaskin>
Basic namespace isolation is feasible, though.
<MichaelRaskin>
I mean, I consider actual security audits to be a higher bar.
<gchristensen>
I'm not asking for an audit, just your recommendations that you're coming up with
jtojnar has joined #nixos-dev
<MichaelRaskin>
I mostly deal in scripts, or one-off remarks, not in writeups.
<gchristensen>
as long as you put them into an issue on grahamc/ofborg that'd be delightful
<MichaelRaskin>
I mean, I do writeups, but in the areas where I have more competence.
<vcunat>
Around Nix you sure are competent ;-)
<MichaelRaskin>
It depends on the subarea. I don't like the module system too much, and I have never learnt a lot of the finer details (and I know there are many).
<MichaelRaskin>
Security analysis of software… well, doing it well requires not missing a lot of considerations that I am not fluent with.
<gchristensen>
thats fine
<gchristensen>
"I guess you still need a Hydra login for that." -> anyone can get a hydra account, just go "Sign in with google" and you'll automagically register
<vcunat>
Hmm, it would be slightly nicer if instead of Google accounts it could use GitHub API... but almost all contributors will have both.
<vcunat>
(or even those generic interfaces like openid)
<gchristensen>
(nobody _wants_ to use openid) github auth would be neat
<vcunat>
I want to use openid (and I do use mine often).
<vcunat>
But I guess that's a local difference, as .cz markets it to public.
<vcunat>
I really love how you can validate your identity, auto-login into all places, auto-fill and auto-update all your personal data in shops etc. But I'm digressing from #nixos :-)
<gchristensen>
ah it turns out unprivileged users can trigger evals but not restart jobs
<gchristensen>
it'd be -really- cool to slowly, bit by bit, replace the perl with something not perl
<vcunat>
Domen was working on a Haskell interface to hydra.
<clever>
could start with hydra-notify, it's a very basic script that just runs functions in a set of plugins
<gchristensen>
that is maybe the hardest b/c it has to integrate with all the plugins? I dunno
<clever>
rewrite all the plugins in c!
<clever>
I've also been thinking, allow hydra-eval-job to run on the build slaves
<gchristensen>
IIRC the web interface requires all the drvs be on the webserver
<clever>
can you copy them over the remote nix protocol?
<clever>
i know i can copy them with nix-copy-closure
<vcunat>
Hmm, yes... why not make it another job that produces all those drv files.
<vcunat>
but I can imagine hitting some performance problems in nix-copy-closure or similar stuff
<gchristensen>
I think also the builders shouldn't be streaming results back to the master, they should just stream right to s3
<vcunat>
They do that already.
<clever>
vcunat: i think they stream it thru hydra, to s3
<gchristensen>
no they don't, they stream to the master, who streams to s3
<clever>
hydra writes to s3 instead of the local hdd
<vcunat>
Oh. Well at least they stream immediately now.
<gchristensen>
yeah, but they're still suffering with all the `xz` processes
<gchristensen>
but it is better :)
<clever>
my hydra is always busy doing 1-4 evals
<clever>
the IO and cpu struggle to keep up
<clever>
Dezgeg: i'm seeing some coreutils failures on my end; id: cannot find name for user ID 1001
<vcunat>
Dezgeg: staging is about to be merged by tomorrow, probably
contrapumpkin has quit [(Quit: My MacBook Pro has gone to sleep. ZZZzzz…)]
contrapumpkin has joined #nixos-dev
vcunat has quit [(Ping timeout: 250 seconds)]
FRidh has quit [(Quit: Konversation terminated!)]
FRidh has joined #nixos-dev
JosW has joined #nixos-dev
<clever>
Dezgeg: yep, turning on sandboxes fixes coreutils
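(For reference, a sketch of the NixOS option being toggled here, under the name it had in this era:)

    {
      # build inside an isolated sandbox, so tools like id(1) see a fixed,
      # minimal environment instead of the host's users and groups
      nix.useSandbox = true;
    }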
vcunat has joined #nixos-dev
colabeer has joined #nixos-dev
colabeer has quit [(Remote host closed the connection)]
<Sonarpulse>
vcunat: that PR would be a mass rebuild if nothing else, I guess ping me when staging is merged and I'll merge it after?
<LnL>
I'm testing a fix for the bootstrap tools
<vcunat>
Sonarpulse: OK. With that I'm waiting for LnL to ping me first :-)
<Sonarpulse>
LnL: is fixing the bootstrap tools before the staging merge?
<LnL>
yes, it broke on staging
<LnL>
and it's a mass-rebuild for darwin
<Sonarpulse>
LnL: broken for both?
<LnL>
just darwin
<Sonarpulse>
LnL: hmm ok
<Sonarpulse>
LnL: binutils related by any chance?
<LnL>
no rpath issue in Csu because of the CoreFoundation changes
<Sonarpulse>
oh hmm
JosW has quit [(Quit: Konversation terminated!)]
<vcunat>
oh, mass rebuild for darwin -> no merge by tomorrow
<vcunat>
full darwin rebuild takes several days on Hydra
<LnL>
are you sure, when all builders are healthy?
<vcunat>
I'm quite sure that one day is never enough.
<vcunat>
I guess two or three days might be, with those ten builders.
<vcunat>
But you've done more such rebuilds, so perhaps I'm mistaken...
colabeer has joined #nixos-dev
<gchristensen>
the builders _should_ be remaining fairly healthy these days
<LnL>
yeah, the big queue is usually a combination of a bunch of builders breaking and rebuilds on multiple branches
<Sonarpulse>
vcunat lnl: well if it's going to be a few days anyways...
<Sonarpulse>
my stdenv patch is a full mass rebuild for everyone
<Sonarpulse>
as is my bintools-wrapper one
<Sonarpulse>
the latter probably still works (or maybe just needs some linux allowedRequisites finagling)
<Sonarpulse>
but it also needs a niksnut review, as he reverted it before
<Sonarpulse>
copumpkin: we should get your headless overlay in there :)
<Sonarpulse>
I want to establish the precedent for where to put such things
<copumpkin>
I'd love to, but when someone told me to file an RFC I lost steam
<colabeer>
is this channel also about the nix package manager itself? I'm looking for the ./dev-shell script as stated in https://nixos.org/nix/manual/#chap-hacking - has it been renamed?
<copumpkin>
PR is RFC IMO :P
<Sonarpulse>
copumpkin: oh, i forgot about that
<Sonarpulse>
that seems like overkill
<copumpkin>
if you want to push on it, I'd love to see it
<copumpkin>
I don't have the spare cycles to work on it these days
<Sonarpulse>
ok
<Sonarpulse>
well niksnut also wanted the "easy cross instantiation"
<Sonarpulse>
so maybe we can avoid the RFC for both
<Sonarpulse>
which is *hardly* an improvement, but only I seem to think that :D
<copumpkin>
yeah, and I'd really prefer not to have to pick a separate file as the entry point
<copumpkin>
dealing with .nix files is a lot more painful than dealing with expressions
<Sonarpulse>
well don't forget we are strict in the keys
<copumpkin>
sure
<Sonarpulse>
so you have to do a decent amount of eval to get (import <nixpkgs> {}).anything
<copumpkin>
which is why you'd nest it a level down or two
<Sonarpulse>
hmm?
<copumpkin>
pkgs.headless.foo
<Sonarpulse>
but just to eval headless
<Sonarpulse>
need keys of pkgs
<Sonarpulse>
(I know `bar` doesn't eval `headless.foo`, it's the cost imposed by the outer nixpkgs i'm talking about)
<Sonarpulse>
(that's what I was very unclear on after my talk at the hackathon)
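(A hedged illustration of the laziness trade-off under discussion; "headless" stands in for the hypothetical overlay and `hello` for its contents:)

    with import <nixpkgs> { };   # paying the eval cost of the outer pkgs keys
    let
      # the body of a nested set stays unevaluated until one of its
      # attributes is forced, so nesting a level down defers the extra work
      headless = {
        foo = hello;
      };
    in headless.foo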
<copumpkin>
sure, but there's not much to do about that is there?
goibhniu1 has joined #nixos-dev
<copumpkin>
without adopting a very different model
<Sonarpulse>
agreed
<Sonarpulse>
just like add a comment "if you are impatient, do this more convoluted thing instead: "
<vcunat>
colabeer: right channel, but I don't know the answer
<vcunat>
it might be among 1.11 -> 1.12 changes
orivej has quit [(Ping timeout: 248 seconds)]
goibhniu has quit [(Ping timeout: 255 seconds)]
<vcunat>
you may try #nixos as well, but there tend to be more beginner-like discussions
colabeer has quit [(Ping timeout: 248 seconds)]
<copumpkin>
colabeer: it's nix-shell now
<copumpkin>
colabeer: the hacking.xml is updated in the repo but the published docs haven't changed
<vcunat>
ah, yes, manuals live on nixos.org are for stable releases
acowley has joined #nixos-dev
<acowley>
Hello, I think the ghc-8.2.1 build failure for darwin on hydra was a fluke, but it has been propagating forward for a while now. Could it perhaps be restarted manually? https://hydra.nixos.org/build/63891215
<vcunat>
Hmm, latest build on trunk segfaulted, indeed.
<vcunat>
So I restarted it now, as I'm unsure when staging gets merged.
orivej has joined #nixos-dev
<orivej>
vcunat: I'm going to enable parallel building of Qt 4, which you disabled two years ago, because it is one of the bottlenecks of the mass rebuilds. It seems enabled in Gentoo so it may actually work, and I'd rather fix its parallel build than wait for it to complete on a single core.
<vcunat>
I thought the more important packages should be migrated to Qt 5 by now, but OK.
<vcunat>
orivej: ^^
<orivej>
vcunat: rebuild amount says that qt4 affects 71 on x86_64-darwin and 400 on x86_64-linux
<vcunat>
The trunk jobset has > 42k jobs...
<orivej>
of course there are not many, but I consistently see many of the last few thousand jobs of a mass rebuild waiting for an hour-long qt4 rebuild
<vcunat>
Actually that's precisely because there aren't many.
<vcunat>
The way Hydra's simple scheduling works, your probability of getting built is (I think) roughly proportional to the number of jobs in whose closures you are.
<gchristensen>
yeah
<vcunat>
Lately we get blocked a lot by i686 builds.
Sonarpulse_ has joined #nixos-dev
Sonarpulse_ has quit [(Changing host)]
Sonarpulse_ has joined #nixos-dev
<vcunat>
Because they were *mostly* removed
<gchristensen>
it just picks a random thing and builds everything it depends on until it is done, then picks the next random one and ...
Sonarpulse has quit [(Ping timeout: 268 seconds)]
<vcunat>
but we still have some large i686 nixos tests that wait for stuff anyway.
<vcunat>
They're not blocking channels if they break, but the big channels still wait for them to finish (in any way).
<vcunat>
Still, my point of view is that this larger "latency" on Hydra isn't much of a problem.
<gchristensen>
I agree
<vcunat>
as we still mostly use the full bandwidth.
<vcunat>
And for security we have the fast small channels.
<gchristensen>
and for bad issues we can scale up to a whole mess of builders and get out the large channel in <24hrs
<vcunat>
So... non-parallel jobs like qt4 now feel like a problem mainly for local builds.
<vcunat>
:-) the glorious scaling of clouds, whenever you pay enough.
<gchristensen>
or have deep pockets in corporate backers
<orivej>
I am mainly observing my own hydra cluster, the hydra at nixos may have more pressing problems than qt4...
<vcunat>
Oh, I misunderstood. I should've noticed "hydra" (some) vs. "Hydra" (.nixos.org)
<gchristensen>
ah ... that is a different thing ...
<Dezgeg>
the aarch64 box does struggle with low-parallelism builds, e.g. qt4Full times out after 10h
<vcunat>
Right. There's only one box, and it's huge.
<vcunat>
Huge in core number but presumably low on per-core power.
<Dezgeg>
yes
<gchristensen>
2.0GHz iirc
<vcunat>
We might do a hack similar to what I did for chromium - tweak the parallelism somewhere in-between.
<gchristensen>
note we can get more aarch64 boxes but we need to demonstrate progress in officially supporting aarch64 :)
<vcunat>
So it's not single-core but doesn't exceed some limit where parallel failures tend to happen too often.
<vcunat>
Nice challenge.
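(A hedged sketch of such an in-between clamp; the cap of 8 and the package are invented:)

    stdenv.mkDerivation {
      name = "qt4-clamped-example";
      src = ./.;
      enableParallelBuilding = true;
      preBuild = ''
        # stay parallel, but never beyond a cap where parallel build
        # failures tend to start (min of NIX_BUILD_CORES and 8)
        export NIX_BUILD_CORES=$(( NIX_BUILD_CORES > 8 ? 8 : NIX_BUILD_CORES ))
      '';
    }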
<vcunat>
gchristensen: what does "officially support" actually mean?
<vcunat>
One of the platforms for (my) 18.03?
<gchristensen>
like having curl nixos.org/nix/install | sh work on aarch64 would be an amazing start
<vcunat>
hmm, that shouldn't be difficult, right?
<gchristensen>
no probably not
<vcunat>
binary cache hash is the only part that sets the Linuxes apart, I suspect
<vcunat>
gchristensen: do you know if those boxes can do ARMv7?
<gchristensen>
they cannot
<vcunat>
thanks
<gchristensen>
but I happen to know we can be provided such hardware
<gchristensen>
but they'd really like us to make more substantial progress on aarch64 first
<vcunat>
Yes, that's understandable.
<vcunat>
One platform at a time. (Or pipelined progress, to be more precise.)
<vcunat>
Perhaps make a GitHub issue with checkboxes for these support targets?