<{^_^}>
#85241 (by LnL7, 5 hours ago, merged): Revert "lib/options: Use escapeNixIdentifier for showOption"
<infinisil>
Which is the default channel on darwin (and maybe in other places too)
<infinisil>
This was a new user suddenly getting overwhelmed with a lot of errors
<infinisil>
Because of this I don't think unstable should be the default anywhere
<infinisil>
And we might want to consider introducing a rolling stable channel
<infinisil>
Just pointing nixpkgs-stable -> nixos-20.03 (or whatever the latest nixos release channel is) should be good enough
<gchristensen>
why that as opposed to make it so that error can't get to unstable?
<infinisil>
gchristensen: For this case that could've been avoided by building the manual with ofborg
<infinisil>
But in general, the idea of unstable is that arbitrary stuff can break
<MichaelRaskin>
Because
<infinisil>
And people opting into it can't expect everything to always be smooth
<MichaelRaskin>
someone could make an argument that Linux needs Z _now_ and macOS build would need a week to fix
<MichaelRaskin>
Not in this case, I guess, but it just serves as a reminder
<gchristensen>
I think the argument for nixpkgs-unstable is it isn't your main package manager, so probably isn't so critical
<gchristensen>
like the user base for nixpkgs-unstable is mostly like devs using it on their other-os laptop
<gchristensen>
not great that it showed up kinda busted though
<gchristensen>
but a nixpkgs-stable pointing to nixos-20.03 means all of a sudden it is going to bump big-time without notice or release notes
<infinisil>
Hm true..
<gchristensen>
(this was the argument against nixos-stable being a channel)
<infinisil>
If we had like 6-weekly releases this wouldn't be a problem I guess
<gchristensen>
personally I'd like to ditch nixpkgs-unstable and have a unified channel, nixos-unstable, nixos-«stable-rev»
<samueldr>
I believe the main stopper for that may be darwin queue something dragging nixpkgs-unstable down, right?
<gchristensen>
right, and sometimes aarch64 etc.
<gchristensen>
but even in the face of that, I'd prefer it
<infinisil>
Releases every 6 weeks, channels named like proper semver (1.1, 1.2, etc.), one channel called "stable" pointing to the latest release, one unstable for master with some checks
<gchristensen>
as it would keep everybody's eyes on the same target
<infinisil>
How bout something like that
<gchristensen>
I think we'd find 6 weeks too fast for many users
<gchristensen>
what would semver look like for us?
<samueldr>
a monotonically increasing version would end up better as it would show there are no guarantees between updates
<infinisil>
Pretty much stay at 1.x unless major change occurs, like a potential stdenv rewrite
<samueldr>
though what happens after 6 weeks, your shiny nixos-not-unstable-123 is now invalid and you have to upgrade manually to 124 or 125?
<samueldr>
without security updates it seems redundant
<infinisil>
Maybe we shouldn't even have channels for each version
<samueldr>
or even plain simple backports
<infinisil>
Just a channel for stable and one for unstable
<infinisil>
Considering that currently one must update to newer nixos releases anyway, this is probably not a problem
<gchristensen>
if you don't do releases, then we'd probably have to backport every patch from master
<infinisil>
(Because security)
<gchristensen>
since there isn't a clean point to cut differences and start over from master, like we have now
<gchristensen>
I like things that keep things releasable at all times -- things like marking broken packages broken as they break, as opposed to big-bang efforts at the end
<gchristensen>
so that is cool
<infinisil>
(I'll reply later, currently cooking a midnight snack!)
<gchristensen>
the current structure is really useful for planning purposes
<gchristensen>
users know that every 6 months they'll need to do some work to become compatible with the new thing, and that they'll generally be able to deploy any update that comes along between those 6mo events quickly and without trouble
<samueldr>
though, it would be interesting to figure out a common "feature update" scheme for big changes, instead of simply sending them off to unstable whenever they are ready
<gchristensen>
every 6 months is a bit fast for some users, like flyingcircus which stays on a stable release for a few years
<samueldr>
maybe the first thursday of a month, merge in the breaking changes that are waiting?
<samueldr>
so there is a schedule for when things break on unstable
<gchristensen>
as long as critical openssl bugfixes never come on the first Friday after the first Thursday of the month :P
<samueldr>
but breaking changes can happen at that moment anyway!
<samueldr>
we'd have a juicy merge commit to revert if need be!
<samueldr>
I probably need to sort out the idea a bit to see if it's even useful
<gchristensen>
oh well for unstable I don't care, it is for stable that I care
<gchristensen>
one thing the observability folks talk about is never slow down merges for the sake of "safety", and that leaning in to the risk and improving the checking and validation of your pipeline is the better bet
<gchristensen>
don't not deploy on Friday because you're scared of what happens on the weekend, instead fix your pipeline to make it safe to deploy any time you need -- because you will need to be able to deploy changes at scary times, and you'll want the safety you've built up over time
<samueldr>
we should have everyone's config on hydra :)
<gchristensen>
yeah, so that is Rust's model
<gchristensen>
"is this language change okay?" "I dunno, try building every crate with it"
<samueldr>
(Schrödinger's quip, both genuine and facetious!)
<gchristensen>
another way to go, and IMO the better way to go, is improve our validation of PRs and channels to deal with problems that come up
<gchristensen>
make the process of getting code merged and released robust against awkward failures
<gchristensen>
we can look at the current segmentation of user groups as a feature, and intentional pipeline of "safing" the code: Nix on other OS's are least likely to be severely harmed by a bad bug in nixpkgs master, and more likely to experience them first. nixos-unstable users are now less likely to get a serious problem, but still more likely to get cut on less serious problems unique to NixOS itself, and stable users least likely to get these severe issues ... users can choose where their risk tolerances are on that spectrum, and deploy what they feel comfortable with
<gchristensen>
personally, I stay on nixos stable, except for when a beta is out -- then I'm on the beta, to volunteer as part of the riskier subset, to expand the use cases the beta is exposed to, to reduce the risk for users once it becomes stable
<gchristensen>
that is how I look at it, anyway
<nh2>
As an industrial user, I think releases are extremely valuable. I also think 6 months is the right cadence. It allows us to budget time for the upgrade, and with high probability other people will already have found the bugs we're likely to encounter for that specific upgrade, making it take much less time, and more importantly, less risky.
<nh2>
(If a NixOS upgrade makes our servers go down even for half an hour, we lose users real fast, they rely on us being up.)
<gchristensen>
I'm with ya, nh2 :)
<infinisil>
nh2: The time between releases is 6 months, but old releases only get supported for 1 month after a new one
<infinisil>
So currently one month is the time people have to update
<infinisil>
I guess we have multiple aspects to this thing: For one, we have the user scale of "I'm okay with stuff breaking occasionally <-> Absolutely nothing can break"
<infinisil>
Another is that everybody should be able to get security updates as fast as possible
<infinisil>
And another is that we should be able to deprecate things over time in potentially incompatible ways
<nh2>
The overlap window is fine for us; I would prefer it to be a bit longer if possible, but that's not crucial. An upgrade usually takes us ~1 week or less to test carefully and deploy. The key thing is the plannability of the upgrade, and the reliance on stable-channel upgrades being very unlikely to surprise-break stuff.
<gchristensen>
people in the "Absolutely nothing" camp are (a) dreaming (b) lying and (c) can pay for that if they want
<gchristensen>
they *also* need to take their own responsibility and create their own deploy and test process to make deploys safe, even in the face of upgrades
<MichaelRaskin>
gchristensen: this is a direction of a spectrum, though
<gchristensen>
aye
<gchristensen>
true :)
<gchristensen>
I guess I just mean ... don't engineer too hard towards that end
<gchristensen>
one of NixOS's greatest features is the ability to undo
<MichaelRaskin>
Also, I am not sure there is someone willing to sign an SLA of «absolutely nothing can break and security updates must be delivered in a timely manner»
<MichaelRaskin>
Actually, there are changes with various rollback costs
<MichaelRaskin>
(so, more dimensions to keep track of)
<gchristensen>
MichaelRaskin: I know that, and surely you know I know that :P
<gchristensen>
the cost of failure is lower on NixOS
<gchristensen>
typically
<MichaelRaskin>
The cost of failure is _nonuniform_ in NixOS, though
<gchristensen>
yeah
<gchristensen>
the cost of failure is nonuniform on every os
<MichaelRaskin>
(yes, I do assume everyone here kind of knows it, I want it to be common knowledge in the strong sense)
<MichaelRaskin>
We do have some expected hard to rollback changes, and some interesting ones
<nh2>
I think the current stable/unstable split is pretty good. In my opinion what is needed is full automatic testing pre-PR-merge. _That_ should keep unstable much greener and safer.
<MichaelRaskin>
I wonder if a public tracker of ofborg's rebuilds predicted, rebuilds checked, passthru.tests examined, and passthru.tests found would be any incentive for people to provide CI-earmarked funding
<pie_[bnc]>
"absolutely nothing can break" gets easier to a point with more meta <gchristensen> one of NixOS's greatest features in the ability to undo
<pie_[bnc]>
just dont deploy to prod :D
<gchristensen>
though I'm not sure it is feasible to actually say nothing can fail
<MichaelRaskin>
Does deploying the first OpenSSL release supporting heartbeat extensions count as something getting broken?
<drakonis>
MichaelRaskin: heartbleed?
<MichaelRaskin>
Well, it comes bundled
<infinisil>
gchristensen: There is a way that nothing will ever break from updates
<gchristensen>
don't do updates
<infinisil>
Namely if there aren't any updates!
<gchristensen>
rm -rf github.com/nixos
<nh2>
Right, and that isn't even silly. If I want nothing to break, I just don't update. I can just read the git log of my `release-*` branch, and cherry-pick what I need if I want minimal possible changes.
<MichaelRaskin>
No half-measures: cut the electricity
<MichaelRaskin>
I mean, it is not always clear (to us in general) if some bump is actually a likely-to-be-triggered high-severity bug being fixed
<nh2>
Stable channels are great because they have few enough commits that I can read them all when upgrading within stable, and I can get a very good guess of what is changed and what may possibly (but unlikely) break.
<infinisil>
Hm, what are current problems with the release/channel process? (kind of asking myself)
<gchristensen>
I have very few problems with them
<gchristensen>
other than periodically the checks need to be improved
<gchristensen>
(in fact, I would be sad to see the process change significantly)
<infinisil>
Yeah that's one thing that's a bit of a problem currently
<gchristensen>
it will always be true
<gchristensen>
we will always be shipping things with bugs, and we will always need to improve our checking
<infinisil>
Another thing that's a problem (I think) is that releases require too much work
<infinisil>
too much manual work
<infinisil>
So yeah the mark-as-broken immediately thing would be a good improvement for this
<gchristensen>
yeah, so making them easier to release is a big +1 for me
<gchristensen>
moving as much of the work in to the day-to-day process (make sure (errors|release notes|etc) are good at merge, etc.)
<infinisil>
Yeah
<infinisil>
Alternatively though, a way to make releases less demanding is to just make more frequent ones
<infinisil>
Because there's just less stuff
<gchristensen>
making things happen more often does tend to make things more streamlined, especially if people doing it are paid and have to do it
<infinisil>
(though there's also an overhead to making the release itself, which could backfire)
<nh2>
infinisil: I agree with you that unstable should not be the default anywhere. Even if it were more reliable. Most people just want stable software that does not change unexpectedly, and (very important) security updates. I recommend most people to switch away from unstable as the first thing after installing nix.
<nh2>
The people that want `unstable` know that they want unstable, and there are still enough of them to test unstable well (e.g. all the Arch users coming to NixOS).
<infinisil>
gchristensen: Yeah I'd also expect it to become more streamlined
<gchristensen>
that sort of butts up against obligation and desire; being OSS, things need to stay fun and feel good
<MichaelRaskin>
More frequent releases without tiered length of support means they are less useful as releases, though
<gchristensen>
one way we could quickly make managing a release utterly terrible and miserable is say "sorry, it has to be done before the end of the month"
<gchristensen>
instead the community is really good about saying "it is usually out by the 45th of March"
<gchristensen>
that is fun and respectful of commitments and obligations and the effort everybody is putting in
<infinisil>
I feel like mozilla has a great model though
<infinisil>
Have a beta version, which one release later becomes stable
<MichaelRaskin>
I think emphasising the legacy endpoints of months would first lead to branchoff 1 Feb / 1 August
<infinisil>
At which point a new beta is cut from master
<MichaelRaskin>
Then maybe just nothing else will change.
<MichaelRaskin>
Well, Mozilla does have ESRs
<infinisil>
I'm thinking that would be a good idea too tbh
<gchristensen>
yeah but that is haaaaaard
<infinisil>
LTS releases which get cut every like 2 years, mainly just getting security updates
<nh2>
infinisil: Do we not have this already, just slightly slower than Firefox? The beta is when `release-20.03` is forked and before it is released?
<gchristensen>
an ESR is likely to be $$$,$$$/yr
<infinisil>
This is then the "industrial" tier
<MichaelRaskin>
We have quite a few things where I (who does not care about releases!) notice discussions «good we can drop it in two months, it would be hard to carry this on»
<infinisil>
nh2: Yeah, though not repeating regularly every <fixed time>
<nh2>
infinisil: .03 and .09?
<infinisil>
What I'm saying is that e.g. as soon as 20.03 is released, 20.04 is cut from master and made to be the new beta. One month later the same repeats
<gchristensen>
NixOS (an OS, 75k builds) necessarily has a different release model than firefox (a browser, a few hundred to small-thousand deps)
<infinisil>
gchristensen: Does it necessarily have to? I honestly feel like mozilla model could work for us, though it might require a bit more man-power
<gchristensen>
yes I think it does
<MichaelRaskin>
infinisil: what ESR length?
<gchristensen>
firefox doesn't manage dozens of services and state and manage filesystems and kernels and on and on
<infinisil>
MichaelRaskin: I was thinking 1-2 years
<nh2>
infinisil: Mozilla has fully automated that though, as far as I can tell, and all commits have full QA on them pre-merge like I lobby for. I think you could pull that model off only after you have that, *and* with more man-power as you say.
<MichaelRaskin>
Too many things we ship don't really have 2 years stability expectations
<MichaelRaskin>
And Mozilla kind of has an idea what they are building
<infinisil>
MichaelRaskin: Hm good points..
<gchristensen>
a very specific idea, and its users have very limited expectations of the browser (relative to an OS)
<MichaelRaskin>
I think for quite a few packages we can have sincere disagreement whether they are broken
<MichaelRaskin>
I would say the core is
<MichaelRaskin>
The browser interface has very few pieces where the expectation is to pass through the underlying dependency interface
<nh2>
I think a {rolling unstable, 6-months-release, 24-month-LTS-release} is close to optimal. But the 24-month one is quite costly from a maintenance perspective, I don't think that is feasible until we're far more automated.
<MichaelRaskin>
And automation won't help
<gchristensen>
you need a lot of committed money up front to support it long enough for it to matter
<MichaelRaskin>
At these terms — look at Debian — this is backporting upstream patches
<cole-h>
^
<nh2>
It does help to a large extent, by freeing up manpower to be available for those manual tasks
<MichaelRaskin>
Manpower is not fungible this way
<infinisil>
I guess for now we should just focus on having better checks for PRs and ideally not allowing commits directly to master that would circumvent those
<infinisil>
And make releases not be as much work
<infinisil>
(as suggested by gchristensen)
<drakonis>
if you have to extend the maintenance period, offer a way to pay people for the effort
<cole-h>
In that vein, I like the idea of having a top-level broken packages index
<drakonis>
sponsor a LTS release
<nh2>
I agree though that that is what it becomes, backporting upstream patches, and that isn't really great. It's also not fun, and that's important for unpaid labour.
<nh2>
I think LTS releases like Ubuntu are quite valuable (I use them on Ubuntu), but a lot of that value also comes from the fact that often on the 6-months Ubuntu releases specific things are so badly broken that you have to use the 24-month releases just to not suffer from them. If we can make the 6-months releases really good and the upgrade smooth, I think that should be done before looking at 24-month releases.
<gchristensen>
infinisil: that will always be a problem and something we need to work on, no matter what our release process looks like :)
<infinisil>
gchristensen: (what specifically are you referring to?)
<cole-h>
Focusing on better PR checks, I assume
<gchristensen>
infinisil: we will always need to be honing and improving our validation at the PR and hydra level
<drakonis>
the debian security team appears to be paid
<gchristensen>
infinisil: and, we will always need to be honing our tooling for maintaining our systems and code (like broken packages)
<nh2>
infinisil: I agree, the better we automatically QA PRs, the less work there will be, and the more fun it will be, and I think master commits without PRs are just a plain bad idea and should be extremely discouraged.
<infinisil>
I feel like with enough effort and processing power, PR validation could really reach a peak where we can be pretty confident that we won't have to fear having broken master (for most use cases)
<nh2>
I believe that
<gchristensen>
I like the optimism :D
<infinisil>
nh2: Regarding your above message, I almost feel like 6 month releases are too sparse for people to get in the "let's automate this so we don't have to do it again" mindset
<gchristensen>
(honestly)
<drakonis>
LTS NixOS should only exist if there's enough demand for it
<gchristensen>
infinisil: people are already talking about it
<drakonis>
see also getting enough people who'd want to deploy nixos for longer spans of time
<gchristensen>
infinisil: and have talked about and worked on and improved and streamlined the release process every release
<drakonis>
and corporate that wants to greenlight NixOS on prod but cannot due to a lack of longer cycles
<gchristensen>
more PRs get release notes before merging than they did 2 releases ago; tooling has improved; to-do and process documentation has improved
<infinisil>
That sounds pretty good!
<gchristensen>
it is pretty good!
<gchristensen>
we're doing pretty good!
<infinisil>
I guess the broken package thing is a major thing left to do. Anything else?
<drakonis>
would it be worthwhile to ask around about whether there's a demand for longer cycles?
<gchristensen>
we had a slip-up which you've seen, let's fix it, then figure out a way to prevent it, and then keep doing pretty good
<gchristensen>
drakonis: there is :P
<gchristensen>
but demand doesn't = $$$$$
<drakonis>
right
<drakonis>
ring up whether they'd want to pony up some dosh to keep it going for longer than the usual cycles?
<gchristensen>
so far I haven't heard that level of demand hehe
<nh2>
infinisil: I don't understand your argument. To me, the need for automation seems obvious, and the non-automation we have for many things (like having humans run VM tests or `nixpkgs-review`) is already a big pain independent of release cadence. It even is for just `master`
<drakonis>
i'm curious about how well it'd work in practice, given the tooling isn't maddeningly complex
<gchristensen>
I'm not totally clear what nixpkgs-review offers over ofborg
<nh2>
gchristensen: it builds all _dependents_
<nh2>
e.g. you upgrade libpng, it builds firefox and you see that it works
<gchristensen>
sounds like a lot
<infinisil>
nh2: Hm yeah, at least last release I felt like the release managers were a bit too overloaded with the things to do
<infinisil>
Didn't get the feeling this time, but I don't know too much about it tbh
<infinisil>
I guess one thing is that there's a huge amount of issues in the release milestones every time
<infinisil>
Last nixcon we talked about this and I brought up the idea of marking issues with priorities
<infinisil>
With e.g. 5 different levels, maybe even automated based on what dependencies it involves
<gchristensen>
die a hero or live long enough to become the enemy :P
<drakonis>
heh
<gchristensen>
one thing though is super extremely big -1 on automatic merging
<nh2>
gchristensen: I know, I think that model should return, but for a slightly different scope (normal "dependency" ofborg running on trusted infra is good, contributed untrusted compute can provide the "dependee" part)
<gchristensen>
it turns out it is a bit scary asking people to run arbitrary PR builds in their personal network
<MichaelRaskin>
There is also some fun to be had about finding out what the current distribution of deployed settings is
<gchristensen>
yeah that was really really hard
<gchristensen>
people would contribute a builder and then disappear for months, and it was a choice of "keep their builder" or "try to evolve the tool"
<gchristensen>
people were gung-ho about the idea of contributing builder time, but it didn't pan out to be very many people willing to actually do it
<nh2>
gchristensen: we just make it easy to run it in a VM, problem largely solved. Also, see the `nonmalicious-checked` label I propose -- such builders would only execute code that has already been checked by NixOS maintainers whom people already trust
<nh2>
gchristensen: also, automatic merging isn't necessary. Reading diffs and clicking merge is easy, humans can do it, it's even fun. Waiting for builds and checking that stuff works is the bad part
<gchristensen>
yep
<gchristensen>
"tenfold the currently available Hydra+ofborg resources this way" that is probably not realistic
<MichaelRaskin>
There are quite a few cases where reading diff also takes a few days
<MichaelRaskin>
(wall-clock, hopefully not pure time)
<nh2>
right, but they are rare, and easier to do if you don't also have to do mundane machine tasks like building stuff
<MichaelRaskin>
In these cases having dedicated ofborg stuff leads to starting the builds earlier (after a more cursory check for something obviously malicious)
<nh2>
gchristensen: the way I want my nixpkgs-commiter part of the day to be is like this:
<nh2>
* Look at built PRs, click merge, merge, merge. Merge 120 PRs per hour
<nh2>
* Look at new PRs, read their diffs, apply `nonmalicious-checked` if the diff looks good, do that for 3 PRs per minute. Do 180 PRs per hour
<MichaelRaskin>
Not even close to reality even with ideal tooling
<MichaelRaskin>
And actually usable package test infrastructure (which we also lack)
<MichaelRaskin>
(judging from the observation)
<infinisil>
What I think would be very nice is for ofborg to have its checks defined in nixpkgs directly
<MichaelRaskin>
Hmmm. passthru.tests?
<infinisil>
General checks
<gchristensen>
nh2: what if ofborg built every dependency, starting with PRs in the 1-10 range
<gchristensen>
nh2: btw if you want to replace ofborg, please do :)
<infinisil>
E.g. have a file nixpkgs/ofborg.nix which describes all things to run as an attribute of derivations
<infinisil>
Every derivation is displayed as a check
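To illustrate the idea infinisil describes above, here is a minimal, hypothetical sketch of what such a nixpkgs/ofborg.nix could look like; no such file exists, and the specific attributes chosen are assumptions:

```nix
# Hypothetical nixpkgs/ofborg.nix sketching the idea above; no such file
# exists in nixpkgs.  Each attribute evaluates to a derivation, and ofborg
# would build every one of them and report it as a separate check.
{ pkgs ? import ./. { } }:

{
  # the nixpkgs library test suite that already lives in lib/tests
  lib-tests = import ./lib/tests/release.nix { };

  # the manual, so documentation breakage (like the revert that started
  # this discussion) is caught before merge
  manual = import ./doc { };

  # a handful of representative leaf packages as smoke tests
  inherit (pkgs) hello jq;
}
```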
<gchristensen>
as MichaelRaskin put it: "What is harder than ensuring job security? Getting rid of job security you have but do not want."
<nh2>
MichaelRaskin: Why not? I think for `r-ryantm` point release upgrades (which is a huge fraction of PRs), you can read the diff in 15 seconds. Assuming that the `nonmalicious-checked` semantics are "we trust upstream and as long as the point release src URL points to upstream, that's fine from nixpkgs's perspective". If you read the full upstream diff (which is also very good), it of course takes longer (but still manageable for point release upgrades).
<MichaelRaskin>
Full upstream _diff_ is ~ never manageable
<gchristensen>
nh2: if ofborg built every downstream dependency, would nixpkgs-review know?
<MichaelRaskin>
r-ryantm can be automerged if tests are to be trusted.
<gchristensen>
no thank you
<MichaelRaskin>
Well, nh2's description of workflow strictly implies that…
<nh2>
no it doesn't
<nh2>
I'd never automerge, because reading these small point release diffs is usually just 2 lines, and clicking merge is fun.
<MichaelRaskin>
It's shortlogs, not diffs
<gchristensen>
" Automatic merging of r-ryantm bot point releases, e.g. 1.2.3 -> 1.2.4, after the above checks have passed." though :)
<nh2>
My suggested workflow always includes reading the nixpkgs diff -- that's a requirement for `nonmalicious-checked` to be set.
<MichaelRaskin>
And I do not buy 2PR/min here
<MichaelRaskin>
For r-ryantm there is no point in reading _just_ that
<gchristensen>
and ideally, reviewing these point releases would include an evaluation of if they should go to stable for security reasons
<gchristensen>
but nh2, if ofborg built all downstream things, would nixpkgs-review know?
<gchristensen>
like, would it be useful if ofborg did that, or would it just be ignored and people do it anyway
<nh2>
MichaelRaskin: Example: https://github.com/NixOS/nixpkgs/pull/85196/files. Click `Files changed`, ok it's 2 lines and not malicious. Click the `Release on GitHub` link from the description, OK that looks benign, apply `nonmalicious-checked` label. That can be done in 30 seconds.
<nh2>
Of course that is not the case for _every_ PR, but for a good chunk of them.
<MichaelRaskin>
Again: if you are talking of r-ryantm PRs clicking Files _makes no sense_
<infinisil>
nh2: If it was a malicious release, would you see it in the release notes though..?
<MichaelRaskin>
If it was a well-planned malicious upstream release, we wouldn't find out anyway
<nh2>
MichaelRaskin: It can also be a human-made PR. I'm just using r-ryantm's PRs as an example of the size / effort of the typical point release upgrade. (Also I think it is always a good idea to have a quick check on whether the bot did it right, and thus clicking Files is useful. IIRC Ryan or somebody else even said that on my discourse post.)
<MichaelRaskin>
For r-ryantm it is more useful to have a second bot that checks that the diffs match a regexp
<MichaelRaskin>
Because they always do
<MichaelRaskin>
And if they do, there is nothing to see there
<nh2>
infinisil: Sorry, I could have re-ordered the actions a bit better. You can apply `nonmalicious-checked` already after having read the diff (assuming we agree on its semantics being "the nixpkgs change is nonmalicious and we trust the upstream to be not malicious"). The changelog reading is separate, maybe also should have a label like "changelog-benign" judged by a human.
<gchristensen>
as a second category of discussion, maybe it'd be useful for me to hand over the keys to ofborg to a team
<gchristensen>
merge and update its code, extend its featureset / support / etc. as needed, and with the time and attention needed to merge the maybe somewhat spookier things
<infinisil>
nh2: Sounds decent
<nh2>
gchristensen: sorry for the delay in answering your other questions, I was just typing all the time on this topic :D. I think it would be very useful if ofborg started building dependencies in the 1-10 range (and advertised loudly that it did).
<nh2>
I also think it would be great if we could have an ofborg command to run the equivalent of a full nixpkgs-review, with the agreed convention that one should check the ofborg queue size or so to not overload it.
<gchristensen>
if priority were a thing, it would not need that -- instead working on the initial 1, and then the remaining 9 for regular PRs, and then fill in remaining capacity with nixpkgs-review as long as those PRs are open
<nh2>
gchristensen: "would it be useful if ofborg did that, or would it just be ignored and people do it anyway" - my understanding is that if people see that ofborg built something, they trust it and don't build it. If ofborg writes on the ticket "I have nixpkgs-reviewed it, you don't need to do that", they'd trust that too.
<nh2>
But it would be especially useful if `nixpkgs-review` would download what ofborg has built, so that you can go to a PR, see ofborg has built the dependees, run `nixpkgs-review pr 12345 --package firefox` and immediately drop into a nix-shell with the ofborg-built firefox, to check if PNGs load for this libpng upgrade PR.
<gchristensen>
ofborg doesn't put anything in to any cache right now
<nh2>
gchristensen: Right, that would be useful to have. Building dependees would already be useful without it, but it would be even more useful with it.
<nh2>
gchristensen: How about a cheap approach to priorities: Let people label PRs with `ci-prio-1`, `ci-prio-2` etc, and ofborg interpret that?
<gchristensen>
I don't think that would be very good for users
<nh2>
Why not?
<gchristensen>
because my PRs are high prio :)
<MichaelRaskin>
Also, not all build commands inside one PR are same prio
<gchristensen>
and parsing user input is not cheap, but has long term downsides in terms of costly training
<infinisil>
Package priorities!
<gchristensen>
besides, ofborg's capacity is just fine
<MichaelRaskin>
for 10× current builds?
<MichaelRaskin>
Maybe 5×
<nh2>
gchristensen: From what I've seen, nixpkgs committers are pretty responsible if you tell them what the rules are (and only committers can label, right?).
<nh2>
So I don't think everybody would set their builds as prio-1
<gchristensen>
but what is the point?
<nh2>
E.g. I see a libpng security upgrade, I go to the PR, label it `ci-prio-1`, and a bit later I return and see that Firefox and Chrome built fine.
<nh2>
And not only I can see it, but all commiters can see it (in contrast to nixpkgs-review, which I run on my own computer and nobody knows that I run it or what the results are, there is no coordination)
<nh2>
infinisil: I think the problem with per-package priorities is that some upgrades to the same package are more important than others. For example, if there's a libpng security upgrade, I'd read the CVE, and based on that determine whether it should be `ci-prio-1` or `ci-prio-3` (e.g. it could be a fix for Darwin 10.10 or some other things that are not super important for us).
<infinisil>
Hm maybe yeah. Though package priorities would be useful in general, e.g. so people can focus better on issues of more important packages.
<nh2>
gchristensen: I did not understand what you meant with "it was a choice of 'keep their builder' or 'try to evolve the tool'" further up btw. Did you mean people ran outdated builder software and didn't want to upgrade?
<nh2>
infinisil: yes, I'm not saying package priorities would be bad, I'm saying that they are more than an MVP for useful priorities that ofborg could interpret
<gchristensen>
I'd really like to have other people able to deploy and monitor and etc. ofborg btw, so if anyone is interested ...
<MichaelRaskin>
«Didn't want» implies some observable opinion, though. «Were not easy to reach» might be more specific.
<gchristensen>
yeah, right, MichaelRaskin
<gchristensen>
and it is hard to beat regular hardware in a datacenter for "is available"
<infinisil>
gchristensen: You should get an eval error saying that longkeyid is missing. That option needs to be removed from lib/tests/maintainers.nix
<gchristensen>
cool
<gchristensen>
thanks infinisil
<nh2>
gchristensen: I think the solution to that is to take away the need to even care what the builder software runs. "Make it an appliance." Do it like Cachix, providing a URL you stick into your `configuration.nix` and it updates automatically to whatever version the ofborg maintainers think you should be running. That's especially easy to follow if the actual builds run in a VM.
<MichaelRaskin>
I think at some point my ofborg builder was the only x86_64 builder. And for Reasons™ it had weekly multi-hour downtime. When this was… noticed and discussed, a couple more builders appeared
<gchristensen>
but anyway, capacity isn't really a problem, (other than macos) tbh
<infinisil>
I do have a free macbook air I'd be willing to sacrifice
<infinisil>
MMmaybe
<gchristensen>
can't do maybes :P
<MichaelRaskin>
Right, in Rust they are called differently
<infinisil>
Maybe if I can find a room to put it in where it doesn't keep me up at night :P
<gchristensen>
you could mail it to me
<nh2>
infinisil: don't macbook airs lack a fan?
<MichaelRaskin>
Wait, _Air_ doesn't even have moving parts, no?
<gchristensen>
yeah it does
<infinisil>
gchristensen: I do still need it occasionally, so can't do that :P
<gchristensen>
at least they used to
<infinisil>
nh2: I have a super old one with a fan still!
<gchristensen>
ah then I don't really want it tbh
<MichaelRaskin>
And fragile sleep, apparently!
<infinisil>
Mid 2012 macbook air 13"
<MichaelRaskin>
(a couple of years ago I was sleeping in three meters — no walls — from Gigabyte Brix i7-4770R being — you guess it — ofborg builder)
<nh2>
infinisil gchristensen: I think a key part is that the system should tolerate builders disappearing for a while. Would that be a problem?
<gchristensen>
of course not, ofborg has always handled that fine
<gchristensen>
but the problem is not architectural
<gchristensen>
the problem is the machines submitting PRs don't sleep
<gchristensen>
(people)
<gchristensen>
reviewers have low tolerance for ofborg being down or slow before they just merge
<MichaelRaskin>
Well, maybe merge-if-build-succeeds could have helped with the last part
<infinisil>
So ideally evaluation time should be decreased?
<MichaelRaskin>
But full-cycle CI for ofborg cannot be had (because Github)
<MichaelRaskin>
And auto-merge is scary-ish
<MichaelRaskin>
Scary feature that is also hard to test — …
<gchristensen>
infinisil: that isn't what I'm saying. Running on just some systems only works if you have a sufficiently large capacity that you're always above required capacity
<nh2>
gchristensen: Is that really true? If I see ofborg was started and is running, I just go to the next PR and come back later.
<gchristensen>
yes that is really true
<gchristensen>
I know this because people will merge PRs where ofborg clearly didn't work, instead of tell me it didn't work
<nh2>
gchristensen: Even better would be if ofborg posted on the ticket "I finished building this". Because then I would be notified via email and know when to come back to click merge.
<gchristensen>
heh
<infinisil>
What if it posted a comment for every arch it built..!
<gchristensen>
good lord
<gchristensen>
back to the bad old days lol
<MichaelRaskin>
Well, old days were more step-by-step reporting
<infinisil>
I think decreasing time-to-fail would greatly help though, and maybe make the fail be a red mark instead of a grey one like it is now (where appropriate)
<MichaelRaskin>
It was «I finished building this check», not «I finished building this PR»
<gchristensen>
ah
<MichaelRaskin>
Of course, if GitHub allowed actual check status, it would be simpler
<nh2>
gchristensen: Also, another thing that may help the "impatient people" problem is giving more visibility. Like saying "evaling ... usually done in N seconds" or "building chrome ... usually times out" to set expectations
<infinisil>
Imagine you'd get a red mark within 60 seconds of submitting a PR, notifying you of a thing your PR broke
<MichaelRaskin>
Because we want to have red:definitely-no
<nh2>
infinisil: I would love that
<gchristensen>
we'll need to make evaluation a lot faster :)
<MichaelRaskin>
So breakage on macOS needs _at least_ a check whether there is a failure on master already
<infinisil>
I think some smarts based on changed git paths should be possible
<MichaelRaskin>
Hmmmm
<MichaelRaskin>
Maybe it makes sense
<MichaelRaskin>
Failed build = red, because it should be marked broken instead
<nh2>
gchristensen: "people will merge PRs where ofborg clearly didn't work, instead of tell me it didn't work" - with "didn't work", do you mean here that ofborg claimed a build failed, or do you mean that ofborg itself failed somehow and nobody told you?
<gchristensen>
the second
<gchristensen>
there are ways ofborg's evaluation could be made faster, like parallelizing the different evaluation steps among other evaluators
<gchristensen>
anyway
<gchristensen>
I should head to bed, it is late for me
<nh2>
gchristensen: OK but then I think you need to communicate that better. I wasn't aware that you want to be pinged when ofborg fails (I thought it might bother you), and probably others aren't either
<gchristensen>
lol
<gchristensen>
okay
<gchristensen>
if anyone wants to help with ofborg, I'd be glad to have help. I recently added log aggregation so cole-h could help debug some problems
<nh2>
also my spelling goes down with the hour lol
<gchristensen>
I'm surprised to hear people think I wouldn't want to know
* infinisil
is also heading to bed
<infinisil>
> CEST
<{^_^}>
"The time in CEST is currently 05:04:44 (UTC +2)"
<gchristensen>
since ofborg is only worth any effort if it is actually doing the thing it is for
<cole-h>
If borg fails, please ping me (esp. if you get that big purple "internal-ofborg-error") and I'll help troubleshoot
<MichaelRaskin>
(me, in CEST: trying to use lack of sleep to suppress my natural reaction to having to package DOMjudge)
<nh2>
gchristensen: also perhaps somehow communicating which are the things that you need to hear about and what they look like so that the average user can identify them, e.g. "ping gchristensen if ofborg didn't post an eval within N time"
<cole-h>
I've been doing that a little bit recently: scroll through backlog and see if the yellow marks are just waiting on a darwin builder or if it hasn't even finished evaling in N hours
<gchristensen>
I mean,
<MichaelRaskin>
More like «didn't start eval»
<gchristensen>
I guess I expect more out of people with merge access than average users, and I guess I would expect them to be familiar with "normal" and "sorta broken looking" but fair enough
<gchristensen>
maybe something people could help with
<MichaelRaskin>
Real benefit of contributed ofborg builders: more people having any idea what ofborg is actually supposed to do!
<nh2>
gchristensen: if you teach me what it shall be, I'd be happy to write up some process docs -- maybe I could even contribute a bit of code but I only recently learnt basic Rust
<nh2>
MichaelRaskin: yes
<gchristensen>
that is okay, cole-h has too -- find an itch and we can scratch it
<gchristensen>
MichaelRaskin: I'd rather just give keys to ofborg's prod :P
<MichaelRaskin>
This might scale even worse!
<gchristensen>
no, it would scale better
<gchristensen>
having >1 person who can deploy to prod and understands its prod is >> a bunch of people who can only run a small part of ofborg
<nh2>
gchristensen: IMO there should be a simple bot that posts those instructions to the PR, to tell the people what's supposed to happen. E.g.
<nh2>
> ofborg: Hi, thanks for your PR. I'll try to eval this PR, and also chose to {build the package and its dependencies, the package and its dependees because it has less than 10 dependees}. If the eval does not start further down in the Github Checks section within N seconds, please ping gchristensen
<{^_^}>
error: syntax error, unexpected ',', expecting ')', at (string):296:11
<gchristensen>
sounds like a nice first PR :)
<MichaelRaskin>
gchristensen: if only being able to successfully deploy to prod automatically implied understanding what is going on there! (but yeah, it does give a better picture than just running a builder)
<nh2>
gchristensen: another success method: consider doing what peti does, twitch stream when he does nixpkgs Haskell work. There are always a couple people in there (me included) leeching his knowledge on how things work.
<MichaelRaskin>
gchristensen: given a nontrivial part of ofborg failures is failing to _start eval_, it is not good first PR, but a good first GH app
<gchristensen>
heh
<nh2>
gchristensen: and it's much easier to contribute after you have leeched some knowledge
<gchristensen>
okay now I just need to find some time to do that
<gchristensen>
maybe cole-h could do it, he's learned a bunch about ofborg and contributed some good PRs lately
<nh2>
gchristensen: peti's approach is to only do the streaming when he does nixpkgs work _anyway_, not as some additional free-time-sucking activity
* cole-h
glances at his "remove extraneous single-quotes" commits
<gchristensen>
I know, but ofborg has largely been set-and-forget :)
<gchristensen>
so there are lots of nixpkgs things I do and could stream, but they would infrequently be ofborg
<nh2>
gchristensen cole-h: OK maybe I could call with you to suck some knowledge about ofborg at some time. I'm pretty busy in general but I'm usually quite OK documenting stuff to make processes smooth, so maybe I can contribute a bit on that
<nh2>
anyway, also going to bed now, I have to give a presentation about NixOps at FPco at 15:00 :D
<gchristensen>
cool, are you going to talk about some of the upcoming nixops 2 stuff?
<nh2>
gchristensen: No just teaching some colleagues nixops basics
<gchristensen>
right on
<gchristensen>
good luck!
<nh2>
Thanks! We don't really use nixops there but people are curious
<makefu>
firefox timed out on unstable, however it seems that the channel was already updated and now my system tries to build firefox from source. is it possible to retry the job?
<makefu>
~ hydra-check firefox-unwrapped
<makefu>
Build Status for nixpkgs.firefox-unwrapped.x86_64-linux on unstable
<rnhmjoj>
am i the only one that despises with his heart the linter thing in the python test driver? it's making me lose time over immaterial stylistic formatting, moreover it doesn't allow some perfectly fine PEP8 formatting and is recommending nonsense changes. look at this: https://hastebin.com/cugadadare.diff what am i supposed to do here?
<Valodim>
it's not visible in the diff, but I guess it's fixing a whitespace error?
<srk>
rnhmjoj: same here, especially bad when you add lines and it forces you to reformat according to surrounding context
<srk>
it should have easy way to disable or just spit warning instead
<rnhmjoj>
Valodim: i think so, but since the script contains nix quasiquotes i'm not directly inserting whatever whitespace it's detecting. here's the code: https://hastebin.com/gexefoquge
<Valodim>
quasiquotes don't mix too well with meaningful indentation anyways, do they? :\
<Valodim>
but yeah, I see where it's coming from. formatting is always a little out of whack when it's generated from nix expressions, very bad combination with whitespace-strict linters
<Valodim>
so I think that particular example works, since the quasi-quotes strip all whitespace before the "router", and the whitespace before the ${lib.optionalString fills it up to work correctly
<Valodim>
but if there were another line in the '', wouldn't that lose the indentation, and break the script?
<rnhmjoj>
i think the optionalString is leaving a blank line and it doesn't like that. i managed to trick it with if (!withFirewall) then ''router.succeed("systemctl start nat.service")'' else "pass"
<Valodim>
I don't think the optionalString itself leaves anything. it's the indenting whitespace before the ${} that's left
<MichaelRaskin>
And of course even without conditionals, just with expansions, one cannot just run the formatter on the source, because expansions have different lengths from variable names
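A minimal sketch of the two variants being compared above, assuming a test expression parameterized on a `withFirewall` flag as in rnhmjoj's paste (names and commands are illustrative, not the actual code):

```nix
{ lib, withFirewall ? true }:

{
  # With lib.optionalString the interpolation collapses to "" when the
  # condition is false, but the indentation in front of ${...} survives,
  # which is the stray whitespace the Python linter then complains about.
  withOptionalString = ''
    router.wait_for_unit("network.target")
    ${lib.optionalString (!withFirewall) ''
      router.succeed("systemctl start nat.service")
    ''}
  '';

  # rnhmjoj's workaround: always emit a valid Python statement, so the
  # generated test script stays well-formed either way.
  withExplicitPass = ''
    router.wait_for_unit("network.target")
    ${if !withFirewall
      then ''router.succeed("systemctl start nat.service")''
      else "pass"}
  '';
}
```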
<gchristensen>
(cross-posting from accidentally posting to -chat) anyone available to review this PR, which drops longkeyid from the maintainer list, adds lib tests for the team list, and also documents how to review additions to the maintainer and team lists? https://github.com/NixOS/nixpkgs/pull/85247
<{^_^}>
#85247 (by grahamc, 16 hours ago, open): maintainers: document new maintainers and team changes
<etu>
gchristensen: Hmm, thinking about the maintainer example, it has the github username "ghost" and attribute name example. We usually say that the attribute should be the same as the github username. So it would make sense if the example matched that?
<gchristensen>
I'll change ghost to example
<gchristensen>
I picked ghost there because it is a user guaranteed to not be a real person
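For context, roughly what such an entry in maintainers/maintainer-list.nix looks like once longkeyid is dropped; all values below are placeholders:

```nix
{
  # Placeholder entry illustrating the maintainer-list.nix format discussed
  # above; the githubId value is made up.
  example = {
    email = "example@example.com";
    github = "example";    # GitHub username, matching the attribute name
    githubId = 1234567;    # numeric GitHub user id
    name = "Example Person";
  };
}
```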
<gchristensen>
ehmry: cool PR
<ehmry>
gchristensen: yea, my first perl module
<ehmry>
now I better understand where nix comes from
<gchristensen>
hehe
<gchristensen>
ehmry: may I PM?
<ehmry>
gchristensen: yes
<gchristensen>
is the purpose of the `trash` directory so that the removal from `/nix/store` is atomic, to cover up the fact that the actual deletion of the files is not atomic?
<Valodim>
recursively deleting files can take a very long time, so that sounds sensible to me
<gchristensen>
cool
<gchristensen>
I'm thinking it'd be nice if nix's gc deleted some things as it went, instead of moving everything to trash and then deleting
<michaelpj>
incidentally, I noticed recently that Nix has its own recursive file deletion logic, which I think only exists so it can track how much gets deleted. But I wonder if it's also slower than it needs to be for that reason
<qyliss>
gchristensen: you mean like outputs that are just files?
<gchristensen>
it is basically mark-and-sweep, and the marking does the accounting. the sweep should be pretty standard, though?
<gchristensen>
qyliss: I'm thinking instead of: (1) move a bunch of stuff to /nix/store/trash/ (2) delete /nix/store/trash, do like (1) move 1 or a few things to /nix/store/trash (2) delete /nix/store/trash (3) goto 1 until we've deleted enough
<qyliss>
why?
<MichaelRaskin>
All moving to trash is a single pass to release the lock ASAP
<niksnut>
exactly
<gchristensen>
I ran out of disk space, so instead of running the command I actually wanted to run: nix-collect-garbage --max-freed 500000000000 I had to run `for i in $(seq 1 500); do nix-collect-garbage --max-freed 1000000000; done` in order to recover some space sooner
<niksnut>
it doesn't have to do with atomicity
<gchristensen>
MichaelRaskin: gotcha, that makes sense
<gchristensen>
it isn't the worst thing, just a bit annoying
<MichaelRaskin>
I think it should be safe to run a trash-deleting process in parallel.
<MichaelRaskin>
I hope nix won't break too much if some trash is stolen from it…
<gchristensen>
hehe probably not
<michaelpj>
while we're talking about GC, a real nice QOL feature that I've wanted for a bit would just be to make it use the progress monitor. I think it knows how many things it needs to process, so we should be able to give some kind of useful progress estimate
<gchristensen>
michaelpj: on that note, I wouldn't mind some sorting of how old the path is :) some fanciness could be nice
<MichaelRaskin>
gchristensen: well, two parallel rm's do complain when they find that the already-planned work is done by the other rm
<gchristensen>
`|| true` and get it on the next path :')
<niksnut>
gchristensen: we used to have that a long time ago
<clever>
ive seen issues where just doing `ls /nix/store` takes over a minute
<gchristensen>
oh interesting
<clever>
and that can slow down GC even finding stuff to mark
<MichaelRaskin>
Long ago was direct deletion without trash I think?
<clever>
MichaelRaskin: i think the bigger reason for /nix/store/trash is atomic deletion of a directory
<niksnut>
or to be precise, it sorted by atime
<MichaelRaskin>
clever: atomicity is irrelevant; it is not valid → no promises
<niksnut>
we could use the registration time but it's not necessarily very useful
<michaelpj>
I'd have guessed that the GC bottleneck is filesystem IO so parallelization wouldn't help much?
<gchristensen>
oh, well, that is okay but pretty out of vogue
<MichaelRaskin>
michaelpj: it's two processes
<MichaelRaskin>
There is mark-and-sweep through the path database and removal of paths
<clever>
the 1st phase is GC, is to list everything in /nix/store, and delete any invalid path (those not in db.sqlite), while putting all garbage (unrooted) into a list
<clever>
by "delete", files get rm'd, and directories get moved to /nix/store/trash
<clever>
the 2nd phase, will random-sort that list, then "delete" items from it
<clever>
and --max-freed can stop it at any point
<clever>
3rd phase is to then basically `rm -rf /nix/store/trash`
<clever>
and the 4th phase, then goes thru /nix/store/.links/ and deletes anything with 1 hardlink
<niksnut>
that's not quite correct
<niksnut>
deletion doesn't have to be atomic
<niksnut>
the only atomic step is deregistration
<clever>
ah, so its more about spending less time with gc.lock held?
<niksnut>
(aka invalidation)
<niksnut>
yes
<clever>
i also think that the "delete files, move dirs" step can still be costly, on some fs's
<clever>
i think for ext2/3/4, deleting a large file is costly
<clever>
but for xfs&zfs, deleting large files is cheap
<gchristensen>
well gosh I didn't know there was so much pent-up discussion to be had around this :x
<clever>
niksnut: oh, and there is a race condition in the min-free based gc
<clever>
niksnut: it computes how much to delete to reach max-free, then it grabs the gc.lock (which may already be held by an identical gc)
<MichaelRaskin>
niksnut: will Nix complain if I start a process removing random stuff in trash, racing with the GC?
<clever>
so, until min-free based GC completes, every nix operation you do, starts another $N gig GC
<clever>
which can grind a hydra to a halt for hours
<niksnut>
MichaelRaskin: it might, it probably doesn't expect /nix/store/trash disappearing entirely
<clever>
the solution i can see, is to move the computation for max-free within the gc.lock, and cancel the GC if you're already over min-free
<MichaelRaskin>
niksnut: I did think about that — but I could limit myself to stuff _inside_ trash
<niksnut>
right
<MichaelRaskin>
(but I would still end up deleting something before Nix reaches it)
<MichaelRaskin>
(but after Nix _plans_ to reach it)
<gchristensen>
give it a go, MichaelRaskin :D
<gchristensen>
let's rewrite nix-collect-garbage in Rust, and add an async background process which listens on a channel for paths moved to trash, and cleans them up as they arrive
<MichaelRaskin>
gchristensen: that, or write a shell five-liner…
<MichaelRaskin>
Speaking of large stores, what is the reason that Nix asserts that store can be enumerated?
<niksnut>
?
<MichaelRaskin>
I tried to make a store that is +r-x
<MichaelRaskin>
Nix complained
<clever>
MichaelRaskin: last time i did that, nix didnt complain (i gave root +r still), but nixos did undo it on boot
<MichaelRaskin>
Nah, I tried it after migrating from NixOS boot scripts
<MichaelRaskin>
~ line 100 in local-store.cc Nix sets 01755 on realStoreDir
<MichaelRaskin>
niksnut: do you remember why it does that?
<clever>
MichaelRaskin: when does git blame say it was added?
<niksnut>
I think I tried changing that once and things broke
<clever>
MichaelRaskin: secrets also have problems even in that scenario, you can -qR (thru nix, or manually) starting at /run/current-system/ and find every config and secret for all daemons
<MichaelRaskin>
The commit adding that said «Allow the physical and logical store directories to differ»
<MichaelRaskin>
So I would need a per-line log to find that _true_ source of the line
<clever>
MichaelRaskin: what about the date of the commit?
* clever
looks
<MichaelRaskin>
2016, but it is likely not to be the original source
<MichaelRaskin>
clever: 4494000e0
<clever>
yeah, maybe it wasnt nixos but nix itself that fixed it on boot
<MichaelRaskin>
clever: well, I guess what you want to query is derivers (because encrypted configs)
<pie_[bnc]>
Whats the desired way to package python stuff that you run directly from a single source file?
<MichaelRaskin>
But there are some completion failure modes that should disappear in this case
<clever>
pie_[bnc]: ive got an example somewhere...
<pie_[bnc]>
clever: thanks, ive done this before by making a wrapper that just calls it with python, but im never sure how im *supposed* to be doing it
<clever>
pie_[bnc]: this creates a shell script called $out, which runs python /nix/store/hash-file.py
<pie_[bnc]>
ok so basically yes heh
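A minimal sketch of the wrapper approach clever describes, next to the writers helper nixpkgs ships for the same job; `my-tool.py` is a hypothetical single-file script and `requests` just stands in for whatever it imports:

```nix
{ pkgs ? import <nixpkgs> { } }:

{
  # Hand-rolled: $out/bin/my-tool is a shell script that execs python on
  # the source file copied into the store.
  my-tool-wrapper = pkgs.writeShellScriptBin "my-tool" ''
    exec ${pkgs.python3}/bin/python3 ${./my-tool.py} "$@"
  '';

  # The same thing via pkgs.writers, which can also pull in Python
  # dependencies and checks the script at build time.
  my-tool = pkgs.writers.writePython3Bin "my-tool"
    { libraries = [ pkgs.python3Packages.requests ]; }
    (builtins.readFile ./my-tool.py);
}
```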
<MichaelRaskin>
niksnut: any memories _what_ broke?
<MichaelRaskin>
I basically considered trying and seeing what breaks, no high expectations there.
<MichaelRaskin>
niksnut: the failing permissions are not exactly what I would try first… and yeah, maybe not all scenarios can tolerate this from get-go, but would you object for an option for skipping force-chmod (so that the boot process can configure the store permissions as preferred)?
<timokau[m]>
Skimming through the backlog, it looks like I missed a fascinating CI and process discussion tonight. nh2++ for pretty much everything they said
<{^_^}>
nh2's karma got increased to 17
<srk>
I only managed to read part of that, will take a look again
<matthewbauer>
Does anyone have a recommended VPS provider for Nix builders? I've been using a Vultr instance as a spare builder. It's pretty cheap ($3.50/month), but only provides 8G storage, meaning I frequently hit storage limits (min-free helps a bit).
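For reference, a sketch of the min-free / max-free knobs matthewbauer mentions, in NixOS configuration.nix form; the byte values are arbitrary examples:

```nix
{
  # Ask the Nix daemon to start garbage-collecting when free space on the
  # store drops below ~1 GiB, and to keep going until ~4 GiB are free.
  nix.extraOptions = ''
    min-free = ${toString (1 * 1024 * 1024 * 1024)}
    max-free = ${toString (4 * 1024 * 1024 * 1024)}
  '';
}
```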
<domenkozar[m]>
hetzner :D
<domenkozar[m]>
if you can't afford that, I can help with getting a job that will :D
<qyliss>
The second most recent commit on master starts with "[Don't merge]"...
<qyliss>
and it was reverted :D
<Profpatsch>
domenkozar[m]: lol, I’m stealing the “if you can’t afford that” sentence
<ehmry>
matthewbauer: vpsfree, if you are okay with europe https://vpsfree.cz/
<yorick>
qyliss: well, wasn't merged
<yorick>
just pushed in
<ehmry>
though I think vpsfree only offers two core hosts
<srk>
it's 8 cores but you're not supposed to eat all that as it's shared
<gchristensen>
spending $30/mo gets you a lot of kit at hetzner's server auctions
<srk>
(meant to leave some headroom for peak hours)
<gchristensen>
vpsfree gets a big +1 for obvious reasons :)
* srk
not a part of it anymore, -1 here :)
<matthewbauer>
domenkozar: ehmry: thanks, I'll check those out
<matthewbauer>
What I probably want is basically nixbuild.net, but obviously that's not quite available yet
<gchristensen>
maybe he can make it available to you
<cole-h>
"If you're interested in evaluating the service or have any questions, please contact ..."
<cole-h>
Straight from the website -- maybe if you ask nicely, he'll hook you up :)
teto has joined #nixos-dev
<domenkozar[m]>
srk: discord! :D
<srk>
evaluatin'
<srk>
had to killall 'Web Content' a few minutes ago :D
<domenkozar[m]>
:D
<srk>
can someone point me to declarative jobsets for hydra.nixos.org?
ixxie has joined #nixos-dev
lejonet has joined #nixos-dev
<ajs124>
I don't think hydra.nixos.org is configured declaratively, or is it?
<samueldr>
pretty sure it isn't for nixos/nixpkgs
<srk>
yeah, that might be why I can't find 'em
<ajs124>
It's pretty straightforward, though. Well, as straightforward as declarative jobsets are.
* srk
follows the evaluation trail trying to eval all packages like hydra does
<ajs124>
we're actually building the small channels on our own hydra and have declarative config for that. The repo is private, though, for some reason, and I won't be able to reach the person who set it up today to ask why.
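(For anyone else hunting for this: a rough, hypothetical sketch of a Hydra declarative jobset expression. The convention is a single `jobsets` job whose output is a JSON file describing the real jobsets; every name and value below is made up for illustration.)

```nix
{ nixpkgs ? <nixpkgs>, ... }:

let
  pkgs = import nixpkgs { };

  # one attribute per jobset Hydra should create or update
  spec = {
    small = {
      enabled = 1;
      hidden = false;
      description = "small channel evaluation";
      nixexprinput = "nixpkgs";
      nixexprpath = "pkgs/top-level/release-small.nix";
      checkinterval = 300;
      schedulingshares = 100;
      enableemail = false;
      emailoverride = "";
      keepnr = 3;
      inputs.nixpkgs = {
        type = "git";
        value = "https://github.com/NixOS/nixpkgs.git nixos-unstable-small";
        emailresponsible = false;
      };
    };
  };
in
{
  # Hydra reads this file to (re)configure the project's jobsets
  jobsets = pkgs.writeText "spec.json" (builtins.toJSON spec);
}
```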
<timokau[m]>
The blog post on HN is a good intro to nix, enthusiastic but not setting false expectations
<gchristensen>
nice
<etu>
> It is one of the most underrated software I have came across and the industry would beneficiates a lot by using it.
<{^_^}>
error: syntax error, unexpected ')', expecting ID or OR_KW or DOLLAR_CURLY or '"', at (string):297:1
<etu>
Good quote right there
<julm>
I've packaged tremc https://github.com/NixOS/nixpkgs/pull/85323 because I didn't know about stig before. Unfortunately the stig package is currently broken due to its tests failing on locale.Error: unsupported locale setting.
<julm>
And if I disable the tests (doCheck = false; doesn't work, maybe because checkPhase is overridden, so I set phases = ["unpackPhase" "patchPhase" "buildPhase" "installPhase"];), then it builds, but the resulting stig fails at runtime with: ModuleNotFoundError: No module named 'stig'
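(A sketch of a less invasive workaround than replacing `phases`: for Python packages, `doCheck` is translated into the Python builder's own check handling, so setting it via plain `overrideAttrs` typically has no effect, which may be what julm hit; `overridePythonAttrs` applies it at the buildPython* level. This assumes stig is built with buildPythonApplication.)

```nix
# e.g. nix-build -E 'with import <nixpkgs> { }; ...'
with import <nixpkgs> { };

stig.overridePythonAttrs (old: {
  # skip the test suite instead of dropping checkPhase from `phases`
  doCheck = false;
  # the locale.Error might alternatively be avoided by giving the tests a
  # UTF-8 locale, e.g. LC_ALL = "C.UTF-8"; (untested assumption)
})
```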
<pie_[bnc]>
I don't suppose anyone has proposed adding an operator or builtin to the language that lets you retrieve the argument attrset WITH default arguments...?
enticeingng has quit [Remote host closed the connection]
abathur has joined #nixos-dev
Taneb has quit [Quit: I seem to have stopped.]
Taneb has joined #nixos-dev
<samueldr>
didn't you? or was it something else that you proposed around that set-pattern arguments thing?
drakonis1 has joined #nixos-dev
drakonis_ has quit [Ping timeout: 260 seconds]
teto has joined #nixos-dev
drakonis_ has joined #nixos-dev
drakonis1 has quit [Ping timeout: 265 seconds]
ghuntley has joined #nixos-dev
<pie_[bnc]>
samueldr: well I asked about it
<pie_[bnc]>
samueldr: but I don't think anyone takes me seriously enough for that to have had any effect :p
<pie_[bnc]>
i was wondering if anyone else had brought it up
<samueldr>
ah, no, it was named ellipses
<pie_[bnc]>
yeah that was something else
<pie_[bnc]>
s/was/is/
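(An illustration of the behaviour pie_[bnc] is asking about: an @-pattern binds only the attrset that was actually passed, so defaulted arguments never appear in it:)

```nix
let
  f = { a ? 1, b ? 2, ... } @ args: args;
in
f { b = 3; }
# evaluates to { b = 3; } -- inside f, `a` is 1, but `args` has no `a`,
# which is what makes "append to a defaulted argument" style overrides awkward
```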
<disasm>
Mic92: I don't want to mark https://hydra.nixos.org/build/116519909/nixlog/3 if we can avoid it. Can you look at this? It says no module named nix when importing pythonix. I was able to reproduce with nix-shell -p pythonix and import nix.
<disasm>
mark as broken that is :)
ris has joined #nixos-dev
<ris>
granted merge powers yesterday, dare i make my first merge a 1200-package rebuild trigger? #79772
<{^_^}>
hydra#737 (by Ma27, 5 days ago, open): Get rid of dependency to SQLite
<gchristensen>
qq: right now, the `grahamcofborg-eval` check stays yellow/pending until the very last status is marked as done, and then grahamcofborg-eval goes green if nothing else failed. how important is it that this stays true?
<gchristensen>
if that status check ceased to exist, would that be fine?
<gchristensen>
in terms of UX / clarity
<gchristensen>
ma27[m]: could you look at making that failure not happen, as a separate PR?
<LnL>
for the evaluation checks I'd say yes, builds perhaps not
<gchristensen>
LnL: say more
<worldofpeace>
kinda hard to parse the meaning
<gchristensen>
worldofpeace: was that for cole-h?
<worldofpeace>
cole-h: thanks, I was rather close to that and gave up
<worldofpeace>
gchristensen: no
<cole-h>
Hehe
<cole-h>
(ignore the darwin indentation; `gg=G` did that)
<gchristensen>
worldofpeace: do you ascribe any special, specific meaning to the check named "grahamcofborg-eval"?
<gchristensen>
or is it just "all of 'em are green" or "not all of them are green"
<ma27[m]>
gchristensen: yeah, will attempt to fix it :)
<LnL>
I think checking evaluation is the most important part, builds are kind of a separate thing
<gchristensen>
LnL: yeah definitely would still be separate, and the checks would still be red/green for evals
<gchristensen>
but grahamcofborg-eval sort of acts as an "envelope" check: it goes pending first, and exits pending last. I'd like to get rid of this envelope behavior
<LnL>
but keeping the overall status pending until all checks pass seems the most clear to me
<gchristensen>
I was afraid of that :')
<gchristensen>
if the envelope behavior is removed, but at least one thing was pending until they were all done, would that satisfy it?
<LnL>
otherwise something looks good but might still fail later if it's merged too fast
<LnL>
depends a bit on how the UI works, I guess
<MichaelRaskin>
gchristensen: I guess without envelope item GitHub can fail event ordering or something
<gchristensen>
true
<MichaelRaskin>
One could have an always-succeed envelope check maybe?
<MichaelRaskin>
Not to delay valuable data
<gchristensen>
okay, cool, good feedback
<timokau[m]>
I really hope someone is working on automating the mark-broken thing, I'd love that for unstable too
<timokau[m]>
Yeah I've seen that. I just fear that they may lose motivation now that it's done for 20.03
<gchristensen>
we'll see
<gchristensen>
the cool thing is it doesn't even need to be very good. it is easy to validate if it did the right thing, and then it could go to a PR for review
<gchristensen>
so some ugly awk / sed could get it done I _think_ in most cases
<cole-h>
Would need to take into account the issue above with cargo-download
<cole-h>
A comprehensive solution would, at least
<gchristensen>
yeah but the risk is what
<gchristensen>
missing some? :P
<timokau[m]>
In my dream world it would be some continuously running service like r-ryantm. It could automatically create a PR marking the package as broken and ping the maintainer in it.
<gchristensen>
+1
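(For the record, the change such a bot would propose is tiny: flipping meta.broken in the package expression, possibly only for the failing platform. The package below is entirely hypothetical.)

```nix
{ lib, stdenv, fetchurl }:

stdenv.mkDerivation rec {
  pname = "example";
  version = "1.0";

  src = fetchurl {
    url = "https://example.org/${pname}-${version}.tar.gz";
    sha256 = lib.fakeSha256; # placeholder, illustration only
  };

  meta = with lib; {
    description = "Hypothetical package used only to illustrate the edit";
    # the one line the bot would add or update based on Hydra's build status:
    broken = stdenv.isDarwin; # or simply `broken = true;`
  };
}
```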
<niksnut>
ma27[m]: looks good except for the lastModified change
<gchristensen>
timokau[m]: with 75k builds that is a pipedream, heh
<ma27[m]>
niksnut: fixed, thanks! I guess I just used an outdated `nixFlakes` for testing.
<niksnut>
yeah, lastModified got renamed to lastModifiedDate last week or so
<timokau[m]>
gchristensen: I still think we can avoid close to all broken packages with the process I outlined on discourse. There would be exceptions, but that's okay. It would just be hard to get enough consensus and implement the process.
<gchristensen>
timokau[m]: gotta go carefully and with sugar :)
<gchristensen>
LnL MichaelRaskin and I were just talking about the bad-old-travis days in -ofborg
<timokau[m]>
gchristensen: I don't understand
<gchristensen>
everybody needs to want to do it
<gchristensen>
gotta go, dinner
<timokau[m]>
Ah, that's what you mean by sugar
<timokau[m]>
Yes it may be possible, but it just takes a lot of mental energy to push the community into a direction, create RFCs etc
<gchristensen>
the point of sugar is to pull :)
<gchristensen>
you can't force volunteers to do things. things have to stay fun and easy
<gchristensen>
one nice way to get people to do something is to make it the easiest way to do it
<timokau[m]>
Yeah I know, wouldn't want it any other way. I'm just saying that complaining is easy, implementing is hard. That's why I'm doing the former and not the latter ;)
<timokau[m]>
Enjoy your meal!
justanotheruser has quit [Ping timeout: 240 seconds]
<MichaelRaskin>
timokau[m]: you strategically choose to implement all things Sage instead…
<LnL>
gchristensen: I think the hydra data could be really valuable for broken stuff / cleanup
<gchristensen>
+100
<gchristensen>
ok now it really is dinner time :D back in a bit
<LnL>
right, something like broken -> removed -> completely gone
<LnL>
but I bet there's enough stuff that can skip to the last step to start with
<worldofpeace>
a big purge, right LnL
<pie_[bnc]>
there goes our coverage lol
<pie_[bnc]>
well, mh. if its broken. nevermind.
* pie_[bnc]
is frustrated that he can't do anything good with overriding because of the stupid forgetting-default-arguments thing
<pie_[bnc]>
i think optional arguments are good for overridable let expressions
<pie_[bnc]>
if only they would work
<pie_[bnc]>
hm well actually I guess that just means I can't do appends. but that's a good chunk of useful overrides gone nevertheless.
justanotheruser has joined #nixos-dev
ixxie has quit [Quit: Lost terminal]
<cole-h>
worldofpeace: python38Packages.nixpkgs is broken on python 3.8 (but not on 3.7), python37Packages.imgaug is broken on both 3.7 and 3.8 (jonringer also posted this on the PR)
<worldofpeace>
cole-h: thank you, 👼 of nixos. I think I might massage my temples and maybe return in like 3 hours 😂
<worldofpeace>
diva! cya
<gchristensen>
worldofpeace: take it easy, worldofpeace :)
<cole-h>
o/
<gchristensen>
worldofpeace++ cole-h++
<{^_^}>
cole-h's karma got increased to 24, worldofpeace's karma got increased to 112
<worldofpeace>
gchristensen: I will 😺 we should probably talk later sometime btw