<tilpner>
genesis - A builder to do what? Extracting appimages and patchelf-ing binaries is very different from what appimage-run does
<genesis>
yes, it would be useful too.
<genesis>
depends on how many people would like to exclusively distribute their software in such a poor packer, like soulseekqt does.
<tilpner>
genesis - I considered that actually, but I don't want appimages to become a long-term component of nixpkgs applications
<genesis>
i wonder if there is a reverse dependencies hack to get nixos package names with an ldd lookup :)
<tilpner>
appimage-run is just a convenience for people who need to run something right now, and don't care for it to be packaged nicely
<tilpner>
But in the end, it's still downloading a random binary from the internet
<tilpner>
Building the application from source should still be the eventual goal (though I realize that's not always feasible)
<genesis>
yes, and i used to use it when i'm in a demo mood :)
orivej has joined #nixos-dev
<ekleog>
has anyone thought of making something that can build .deb, .rpm, pkgbuilds, appimages etc. from a nix derivation?
* ekleog
thinking this may be a way to make developers switch to using nix
<gchristensen>
ekleog: there are things like that already, ish
<ekleog>
would need to allow some impurity by mapping from package name+version to only package names in the .deb's, but giving the option to either use a dependency from upstream pkgmanager or bundle it in the package could be great
<gchristensen>
would be cool
* ekleog
just throwing ideas in the air
<srhb>
I thought we already had *something* like that? Is that not what debian-build.nix does?
<srhb>
Ah, I guess not
<gchristensen>
we do have toolings to produce DEBs and RPMs but not from paths built by Nix
<ekleog>
I guess the important question is, does it try to keep the purity of Nix, or does it give in. I'd think it'd need to give up purity on those platforms (at least with a flag) to allow the .deb to be unpacked the way a non-nix-built .deb would, if we want widespread adoption of this :)
Sonarpulse has joined #nixos-dev
<ekleog>
(also, documentation somewhere, because I never knew we had tools to produce debs and rpms :°)
<gchristensen>
the tooling to produce DEBs and RPMs spawn a Debian or CentOS VM to do it
<ekleog>
(and if we want to reach every developer…)
<ekleog>
oh that sounds right, then :)
<ekleog>
so all that's missing is documentation and a whole lot of advertisement everywhere, targeted at using nix as a cross-platform package builder
<ekleog>
“all”
<gchristensen>
well also it sort of sucks, because it is very slow
<vcunat>
you can see hydra's jobs for nix's debs and rpms
<gchristensen>
yeah, and it has to explicitly list all of the _deb_ and _rpm_ dependencies it has, too, throwing away a lot of the benefits Nix has. it isn't a nice thing or fun thing to use, but a way for Nix to produce them in a pinch really
<vcunat>
I personally don't believe in creating a common abstraction that can generate debs, rpms and other packages.
<ekleog>
gchristensen: oh. indeed, that's not something that could really be used, then :/ I was thinking of maybe starting a map from nix packages to debs and rpms, and defaulting to bundling the dep if it can't be found in the map
<ekleog>
vcunat: oh, it's in nix, not in nixpkgs, that's why I didn't find anything :) thanks!
<gchristensen>
that is lying to the user and they'll have a bad time when the lie falls apart
<ekleog>
gchristensen: that's basically what people who make .debs by hand do, though, isn't it?
<gchristensen>
yes, but propagating that lie in to our system doesn't make their system better, just makes our system worse
<vcunat>
ekleog: no, almost all the tooling is in nixpkgs
<vcunat>
nix is just a user of that tooling
<vcunat>
(the only public example I know of, actually)
<gchristensen>
I used to have a tool which took a closure and converted it in to an RPM
<ekleog>
well, I'm thinking it wouldn't impact our system in any way (that's just a function from derivation to deb), and I don't think it'd make their system better, but it may bring in people who may be interested in writing all their packages in a single run :) (the specific advantage nix has being the ability to bundle deps without hurting the surrounding system, which makes me think that'd maybe be possible)
<gchristensen>
it would replace "/nix/store/" with "/n/someth/" and created /usr/bin stubs
<ekleog>
vcunat: oh indeed, so here was the debian-build.nix user that I couldn't find earlier! so you were right, srhb :)
<ekleog>
I mean, “take this derivation, drop almost all guarantees on the floor, and we're now on par with other package managers” :°
<gchristensen>
I'm not sure those are the users we want to get
<ekleog>
well, if users want to use nix even just for debs / rpms, their software will still have to compile through nix, so they'll end up writing derivations that can be used in nixpkgs :)
<ekleog>
and that'd make less maintenance work if it's pushed upstream 😇
Drakonis has joined #nixos-dev
xeji has joined #nixos-dev
<vcunat>
:-)
<xeji>
could someone with the access rights pls restart the 17k aborted hydra jobs of nixpkgs:trunk ? Ofborg is timing out on most things...
<gchristensen>
done
<xeji>
<3
<thoughtpolice>
dtz[m]: Is there an easy way of dropping into a nix-shell with libcxxStdenv/clang instead of the default stdenv?
jtojnar has quit [Remote host closed the connection]
<vcunat>
Hmm, right. I hoped to speed up merging of staging-next by cancelling those mass rebuilds, but I see that has issues as well.
<vcunat>
thoughtpolice: if it's not a one-time thing, you might want to write a shell.nix with (some override stdenv).mkDerivation { ... }
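vcunat's suggestion might be sketched like this (a hypothetical shell.nix; `libcxxStdenv` is the attribute thoughtpolice mentioned, but the exact shape of the override is an assumption, not a tested recipe):

```nix
# shell.nix — minimal sketch: use libcxxStdenv instead of the default
# stdenv when dropping into nix-shell.
with import <nixpkgs> {};

libcxxStdenv.mkDerivation {
  name = "libcxx-clang-shell";
  # put your project's build inputs here
  buildInputs = [ ];
}
```

Running `nix-shell` in that directory should then give a shell whose compiler environment comes from the clang/libc++ stdenv rather than the default one.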
<LnL>
I think being stricter about what goes into master is a better solution for that :)
<vcunat>
Yes, of course.
<vcunat>
This was just about the immediate situation.
<vcunat>
I didn't have the energy to search for the culprit commit(s) and move it to staging.
jtojnar has joined #nixos-dev
NinjaTrappeur has quit [Quit: WeeChat 2.2]
<vcunat>
(That's what I normally try to do when I notice.)
NinjaTrappeur has joined #nixos-dev
* ekleog
almost wondering whether *everything* shouldn't go into staging
<ekleog>
would likely delay the “lighter” rebuilds a bit, but such a policy would make it much easier to avoid having a commit mistakenly going to master
<ekleog>
but let's first try this staging / staging-next / master triplet and see how that goes, anyway, now is too soon to know whether that's good or not :)
<vcunat>
everything going to a single branch is precisely what we want to avoid
<vcunat>
if you want a base that has binaries and has been tested, there are the branches that follow channels
jtojnar has quit [Remote host closed the connection]
jtojnar has joined #nixos-dev
<samueldr>
vcunat, ekleog, it was a direct-to-master push on an innocuous-looking package, if it went through a PR with ofborg labelling, it would probably have been obvious
<FRidh>
A mass rebuild on master delays a merge of staging-next by roughly 3 days, and this week there have been already two.
<samueldr>
(though, it was a CVE fix, which makes it hard to "just revert")
<vcunat>
(probably, I didn't really look closely at all the commits)
<gchristensen>
the policy I held for security patches a while back was to just push to master, since staging was not in such a hard spot, and it worked okay; so neq. is probably operating under those policies, which should be updated
<vcunat>
or simply didn't realize it's a large rebuild
Sigyn2 has joined #nixos-dev
Sigyn2 has quit [Excess Flood]
<gchristensen>
also possible
<gchristensen>
with my impending ability to get stuff done outside of work hours (yay medical science), once I've done some ofborg work it would not be hard to have it comment on mass-rebuild master pushes, to gently nudge people towards PRs
<LnL>
yeah, that was the next thing I was planning to look into
<samueldr>
yeah, I'm 99% confident that it's a situation where it looked far removed from mass-rebuilds
<ekleog>
woohoo \o/
<vcunat>
I've been thinking of having a standard binary cache with (platform, commit) -> list of Hydra jobs' out-hashes
<vcunat>
That would make it less painful to detect rebuilds.
<Sonarpulse>
jtojnar: yes, thanks!
<gchristensen>
that would be interesting
<Sonarpulse>
jtojnar: yeah I hate to be such a pedant with this mental stuff
<vcunat>
perhaps cachix, so pushed-to by borg
<Sonarpulse>
but it really is true that thinking "native vs cross" will fuck up your codebase
<gchristensen>
I think, also, ofborg has sufficient funding to produce a linux-only binary cache
<jtojnar>
Sonarpulse: also there is a broken link to nixpkgs PR in the other thread
<Sonarpulse>
that was nixpkgs not too long ago, after all :)
<Sonarpulse>
oh hmm hope it's not just a special character thing
<vcunat>
gchristensen: yes, or that way. If the machines were under control, it could go even into cache.nixos.org.
<gchristensen>
yes, true
<vcunat>
Apart from self-usage in borg, people would only need to evaluate the half they changed and not master.
<vcunat>
(saving half the work isn't such a great deal, though, so I've been doing higher-priority stuff, like channel blockers)
<ekleog>
… wait, so I have to ask, Sonarpulse = Ericson2314? (mind blown, I thought two people were heavily working on cross-nixpkgs, turns out only one 😢)
<Sonarpulse>
ekleog: yeah
<Sonarpulse>
hahahaha
<Sonarpulse>
sorry
<Sonarpulse>
I have two grand-fathered nicks
<Sonarpulse>
I don't mean to be this mysterious
<vcunat>
The avatar does have "sonar pulse" in the image, but it's just a slight hint.
<simpson>
Happens. I think most folks who have been chatting this morning have been known under at least one other nick; it's part of life.
<ekleog>
you're not the only one, though, n*ksnut also made me have this “wat is this true?” instant ^^
<samueldr>
it's obvious, look at their sonar pulse icon on github!
<vcunat>
generally it seems best practice to unify the nix
<vcunat>
*nicks
* gchristensen
has been grahamc / gchristensen for long enough to not discuss the third nick
<Sonarpulse>
hahaha
<Sonarpulse>
gchristensen: yeah part of me was thinking I should just go (Ericson2314, sonarpulse) -> jcericson for the clean start
<gchristensen>
I also PM https://twitter.com/grahamc once a year or so asking if they're ready to give it up
<vcunat>
:-) this one reminded me of a general note, more relevant to you than to my particular case.
<infinisil>
Regarding merging PRs: When would one want to have a rebase and merge instead of a merge commit?
<xeji>
infinisil: I usually do squash and merge if the PR has only one commit, merge commit if there's more than one.
<infinisil>
Isn't squash only useful when there's multiple commits? Why not rebase and merge instead if it's only 1?
<infinisil>
xeji: ^
<xeji>
it's pretty much the same thing if there's nothing to squash, but squash and merge adds the PR number in the commit message :)
<xeji>
so it's easier to track
<infinisil>
Heh I see
<infinisil>
So I guess I'll just avoid rebase+merge, use merge commit when multiple commits, and squash and merge if only 1 commit
<infinisil>
And I'll stop always mentioning that people should squash themselves :P
<xeji>
that's what I do but everyone seems to have their own preferences. some of us prefer to always create a merge commit.
<samueldr>
xeji: do you know if those will be the merge_commit_sha in the API?
<samueldr>
(the squash and merge)
<samueldr>
I've been trying to figure out a way to track commits on master that didn't go through the PR process
<infinisil>
xeji: I like that way, it also doesn't clutter the commit history with unnecessary branches/merges.
<xeji>
I think squash and merge creates a new commit sha even when nothing is squashed
<infinisil>
samueldr: A test repository would probably work well to figure out exactly how everything works
<samueldr>
probably
<gchristensen>
I don't like the rebase because it breaks the commit *hash* from the PR
<samueldr>
^ I tend to agree
<infinisil>
gchristensen: Squash does not?
<samueldr>
squash does
<samueldr>
since it creates a new commit
<infinisil>
Even if it's 1 commit?
<infinisil>
Oh
<xeji>
yep
<gchristensen>
well there you go, I guess I just prefer the simple merge option, since it retains the commits from the PR
<infinisil>
Now that you mention it, I have to agree
<samueldr>
though squash has the benefit (AFAICT) that it creates *1* commit, which is available in the API
<gchristensen>
that said Nixpkgs has a _cherry-pick_ style so the cherry-pick commit reference methods have to be used to be thorough
<samueldr>
yep, when those are used it's generally safe for my use case
<infinisil>
Now I'm conflicted, damnit
<gchristensen>
(which is to say, git cherry-pick with -x, and also git patch-id)
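The two mechanisms gchristensen names can be demonstrated in a throwaway repo (a sketch; the branch and file names are made up): `cherry-pick -x` records the source commit in the message, and `git patch-id` identifies the same patch across different commit hashes.

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git config user.email dev@example.com
git config user.name dev
echo a > file && git add file && git commit -qm init
base=$(git symbolic-ref --short HEAD)   # master or main, depending on git
git checkout -qb fix-branch
echo b >> file && git commit -qam "pkg: fix CVE"
orig=$(git rev-parse HEAD)
git checkout -q "$base"
# -x appends "(cherry picked from commit <sha>)" to the message
git cherry-pick -x "$orig" >/dev/null
git log -1 --format=%b                  # shows the back-reference
# The hashes differ, but the patch-ids are equal:
id_orig=$(git show "$orig" | git patch-id | cut -d' ' -f1)
id_pick=$(git show HEAD | git patch-id | cut -d' ' -f1)
echo "ids: $id_orig $id_pick"
```

So even without a preserved sha, the cherry-picked commit stays traceable back to the PR's original commit.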
<infinisil>
merge commit or squash+merge..
<Dezgeg>
rebase!
<samueldr>
I have strong arguments against rebasing other people's PRs, even if only for keeping track of the "real intentions"
<samueldr>
or maybe strong opinions, no real arguments yet
<samueldr>
even though it creates what some call "a messy history"
<samueldr>
and we can't expect users to rebase their own stuff on tips all the time
<infinisil>
I'll just keep on doing everything with merge commits
<infinisil>
Original sha gets kept, and merges are easily filterable
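infinisil's point — with a merge-commit workflow the original sha survives and merges stay filterable — can be checked in a scratch repo (a sketch with made-up names):

```shell
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q repo && cd repo
git config user.email dev@example.com
git config user.name dev
echo a > file && git add file && git commit -qm init
base=$(git symbolic-ref --short HEAD)
git checkout -qb topic
echo b >> file && git commit -qam "pkg: 1.0 -> 1.1"
topic_sha=$(git rev-parse HEAD)
git checkout -q "$base"
git merge -q --no-ff --no-edit topic   # always create a merge commit
# The original commit (and its sha) survives the merge:
git cat-file -e "$topic_sha"
# ...and merge commits are easy to select or exclude:
merges=$(git rev-list --merges --count HEAD)
nonmerges=$(git rev-list --no-merges --count HEAD)
echo "merges=$merges nonmerges=$nonmerges"
```

`git log --no-merges` then shows only the real changes, and `git log --first-parent` gives one entry per merged branch.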
<samueldr>
my opinion (not nixpkgs-specific): rebases are a tool for the developer when they are working on their feature branch, to avoid locally-messy history
phreedom_ has quit [Ping timeout: 250 seconds]
<gchristensen>
if you use a cherry-pick based workflow, that can't hold
phreedom has joined #nixos-dev
<gchristensen>
but under the most common project styles I agree
<xeji>
infinisil: I don't think keeping the original sha is that important (typically people delete the branch anyway after their PR gets merged). Tracking changes down to the PR on github is good enough IMO.
<samueldr>
true, there are almost two projects: the nixpkgs-packages, where we init and upgrade packages; and the nixpkgs-framework, all the bricks, mortar, pipes, glue and tape that holds stuff together, and they're both (imo) served by different workflows :/
<infinisil>
How do these changed sha's interact with people rebasing on master with their commits cherry-picked before (commits that are now in master, but have a different sha)?
<infinisil>
Probably git should be enough to not throw merge conflicts and resolve it automatically
<Dezgeg>
yes it's smart enough usually
<Dezgeg>
I do that without a hitch 90% of the time in those projects where you send patches via e-mail
<infinisil>
90% of the time? What are the problems in the other 10%?
<Dezgeg>
presumably when the same (or nearby) lines were modified by some other patches
vcunat has quit [Quit: Leaving.]
__Sander__ has quit [Quit: Konversation terminated!]
xeji has quit [Quit: WeeChat 2.0]
ma27 has quit [Quit: WeeChat 2.0]
ma27 has joined #nixos-dev
Sonarpulse has quit [Ping timeout: 240 seconds]
ma27 has quit [Quit: WeeChat 2.0]
ma27 has joined #nixos-dev
orivej has quit [Ping timeout: 240 seconds]
Sonarpulse has joined #nixos-dev
<thoughtpolice>
Hah. I just tested PostgreSQL's JIT support on ~1.5 billion rows of data and... it got 30 seconds slower! I guess COUNT(*) isn't quite enough to stress things...
<thoughtpolice>
COUNT(*) is extremely synthetic, to be fair. It's a better test of parallel sequential scan performance than anything. For simple cases you can easily bloat the runtime by generating 'bigger' code that's otherwise slower since it can't really specialize anything, I'm guessing.
<Drakonis>
it is better prepared for conditionals than selecting everything
<thoughtpolice>
I'm guessing complex predicates with things like subselects/operators that can be inlined will fare much better.
<Drakonis>
it's because count is extremely wasteful
<Drakonis>
it's counting EVERYTHING
<thoughtpolice>
At 1.5 billion rows in 3 minutes that's about 8.3 million rows/second. The slowdown is 30 seconds, so that's 1.5 billion in 3.5 minutes, or about 7.1 million rows/second. That's a difference of only 20ns per row, which isn't exactly a ton.
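The per-row arithmetic above checks out; a quick back-of-the-envelope in code, using the quoted 3-minute and 30-second figures:

```python
rows = 1.5e9            # ~1.5 billion rows scanned
base_s = 3 * 60         # ~3 minutes without the slowdown
slow_s = base_s + 30    # 30 seconds slower with JIT

base_rate = rows / base_s        # ~8.3 million rows/second
slow_rate = rows / slow_s        # ~7.1 million rows/second
delta_ns_per_row = (slow_s - base_s) / rows * 1e9  # ~20 ns per row

print(f"{base_rate/1e6:.1f}M rows/s -> {slow_rate/1e6:.1f}M rows/s, "
      f"+{delta_ns_per_row:.0f} ns/row")
```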
<Drakonis>
you said 1.5bil rows, one column right?
<thoughtpolice>
No, there are many columns.
<Drakonis>
that's magnifying the effect
<Drakonis>
unless of course you're not telling me if you're counting just a single column or everything, but i'm going with everything because that's what the asterisk does
<Drakonis>
conditional-less counting hurts
<thoughtpolice>
I realize COUNT(*) is exact, that's kind of the point of why I used it... The JIT test was more to see if any appreciable difference appears, which it can. But COUNT(*) is synthetic and this is a relatively large outlier I think, like I said. But in PostgreSQL COUNT(*) is better written as COUNT(), it doesn't care about the number of columns...
<thoughtpolice>
It's not the same as COUNT(1), for instance.
<thoughtpolice>
They're all fundamentally slow, anyway.
<thoughtpolice>
Well, I guess in theory COUNT(1) only has to see if the "one" (the first) column has a non-NULL entry, while COUNT(*) must check if "any" column has to have an entry. So I see what you mean there.
<thoughtpolice>
But from what I understand, COUNT(*) is no more inefficient even in the face of 'sparser' rows: PostgreSQL will know whether the row is populated at all regardless of which column exists, while COUNT(1) must always check NULL-ness of column 1. So the visible semantics are the same (assuming non-NULL column 1) but the runtime semantics aren't.
<Drakonis>
the issue is that it becomes hilariously inefficient if you aren't running conditionals to prevent it from sweeping the entire table
<thoughtpolice>
Also I doubt it matters much in a non-columnar format. Regardless of whether you check column 1 or column 20, the storage engine is going to fetch the row, almost certainly. Also yes I realize that there are no conditionals... Getting an exact row count was exactly why I wrote the exact query 'SELECT COUNT(*) FROM table' lol.
<Drakonis>
lol
<Drakonis>
i gotta look at the source before i keep going
<Drakonis>
gotta see how hilariously inefficient the function is
<Drakonis>
okay i stand corrected, it just fetches every row
<thoughtpolice>
That said I'm impressed I can get 9 million rows a second with very little tuning for a full sequential scan, which isn't bad. The bad results for JIT here almost certainly wouldn't be detectable without NVMe + parallel seqscan, at least. If my hardware was marginally worse it might look different.
<Drakonis>
it only looks into columns when filtering
<gchristensen>
not sure this is on topic for #nixos-dev
<Drakonis>
yes
<Drakonis>
thoughtpolice, looks like postgresql has row indexing now
<Drakonis>
index scans rather
<Drakonis>
this is nice and i stand corrected, disregard my complaining
<thoughtpolice>
drakonis: another good one I enjoyed for this experiment: 11 has parallel index creation for ordinary b-tree indices. That managed to cut my indexing down to 30% of the previous time for these ETL scripts with a few parallel workers. :)
<thoughtpolice>
but yeah, a bit OT. I'll maybe write some numbers up for this if I can get more interesting ones...
<gchristensen>
how about #nixos-chat for further chat