worldofpeace changed the topic of #nixos-dev to: NixOS Development (#nixos for questions) | NixOS 20.09 Nightingale ✨ https://discourse.nixos.org/t/nixos-20-09-release/9668 | https://hydra.nixos.org/jobset/nixos/trunk-combined https://channels.nix.gsc.io/graph.html | https://r13y.com | 20.09 RMs: worldofpeace, jonringer | https://logs.nix.samueldr.com/nixos-dev
srk has quit [Ping timeout: 240 seconds]
srk has joined #nixos-dev
supersandro2000 has quit [Disconnected by services]
supersandro2000 has joined #nixos-dev
alp_ has quit [Remote host closed the connection]
tokudan has quit [Remote host closed the connection]
tokudan has joined #nixos-dev
tokudan has quit [Remote host closed the connection]
tokudan has joined #nixos-dev
ris has quit [Ping timeout: 264 seconds]
pmy has quit [Ping timeout: 256 seconds]
pmy has joined #nixos-dev
cole-h has quit [Ping timeout: 264 seconds]
teto has quit [Ping timeout: 240 seconds]
pmy has quit [Ping timeout: 256 seconds]
pmy has joined #nixos-dev
kalbasit has quit [Ping timeout: 240 seconds]
srk has quit [Ping timeout: 240 seconds]
srk has joined #nixos-dev
red[evilred] has quit [Quit: Idle timeout reached: 10800s]
supersandro2000 has quit [Quit: The Lounge - https://thelounge.chat]
supersandro2000 has joined #nixos-dev
pmy has quit [Ping timeout: 256 seconds]
pmy has joined #nixos-dev
<siraben> Looks like my fork of nixpkgs hasn't synchronized tags, how do I pull them from upstream?
red[evilred] has joined #nixos-dev
<red[evilred]> git fetch upstream
<red[evilred]> (assuming you named your upstream upstream)
<red[evilred]> oh wait
<red[evilred]> you want to actually push all the tags to your github fork?
<red[evilred]> and not your local working copy?
<samueldr> tags are pulled and pushed independently
<samueldr> pushed at least, pulled not entirely sure
<red[evilred]> git push --tags
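The fix red[evilred] and samueldr describe can be sketched end-to-end with throwaway local repos standing in for nixpkgs and the fork (the remote names `upstream`/`origin` are the conventional assumption from the discussion):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
# stand-in for NixOS/nixpkgs
git init -q upstream
git -C upstream -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
git clone -q --bare upstream fork.git   # the fork is made first...
git -C upstream tag v1.0                # ...then upstream gains a tag the fork lacks
# local working copy: the fork is "origin", the original repo is "upstream"
git clone -q fork.git work && cd work
git remote add upstream "$tmp/upstream"
git fetch -q upstream --tags   # pull the missing tags locally
git push -q origin --tags      # push them on to the fork
git -C "$tmp/fork.git" tag     # prints: v1.0
```

As samueldr says, tags travel independently of branches in both directions, which is why the plain `git fetch upstream` most people run never brings them over.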
rajivr has joined #nixos-dev
srk has quit [Remote host closed the connection]
srk has joined #nixos-dev
orivej has joined #nixos-dev
evanjs has quit [Ping timeout: 260 seconds]
evanjs has joined #nixos-dev
supersandro2000 has quit [Quit: Ping timeout (120 seconds)]
evanjs has quit [Ping timeout: 260 seconds]
orivej has quit [Ping timeout: 256 seconds]
evanjs has joined #nixos-dev
red[evilred] has quit [Quit: Idle timeout reached: 10800s]
tilpner has quit [Read error: Connection reset by peer]
tilpner has joined #nixos-dev
cole-h has joined #nixos-dev
alp has joined #nixos-dev
FRidh has joined #nixos-dev
tilpner_ has joined #nixos-dev
tilpner has quit [Ping timeout: 256 seconds]
tilpner_ is now known as tilpner
FRidh has quit [Ping timeout: 260 seconds]
FRidh has joined #nixos-dev
{^_^} has quit [Remote host closed the connection]
{^_^} has joined #nixos-dev
saschagrunert has joined #nixos-dev
teto has joined #nixos-dev
thibm has joined #nixos-dev
alp has quit [Ping timeout: 272 seconds]
__monty__ has joined #nixos-dev
supersandro2000 has joined #nixos-dev
<supersandro2000> Is there an easy way I can backport https://github.com/svanderburg/node2nix/pull/215 ?
<{^_^}> svanderburg/node2nix#215 (by SuperSandro2000, 6 minutes ago, open): utillinux -> util-linux
<supersandro2000> it uses nodePackages which is probably not great to patch
alp has joined #nixos-dev
cole-h has quit [Ping timeout: 246 seconds]
alp has quit [Ping timeout: 272 seconds]
supersandro2000 has quit [Quit: The Lounge - https://thelounge.chat]
supersandro2000 has joined #nixos-dev
supersandro2000 has quit [Ping timeout: 265 seconds]
supersandro2000 has joined #nixos-dev
<timokau[m]> Is there any policy about backporting new features and packages? It comes up from time to time. (CC jonringer worldofpeace )
<timokau[m]> This contributor has dug up a related mailing list post from 2018: https://github.com/NixOS/nixpkgs/pull/100653#issuecomment-736629476
<andi-> could someone have a look at https://github.com/NixOS/hydra/pull/825 ? I would really like to have proper eval errors so debugging them doesn't become a multi-hour journey.
<{^_^}> hydra#825 (by samueldr, 5 weeks ago, open): Fix unhelpful error messages in aggregate jobs.
alp has joined #nixos-dev
orivej has joined #nixos-dev
<supersandro2000> timokau[m]: does it build without other backports?
mkaito has joined #nixos-dev
<das_j> sad gitlab noises
supersandro2000 has quit [Quit: The Lounge - https://thelounge.chat]
supersandro2000 has joined #nixos-dev
alp has quit [Ping timeout: 260 seconds]
<teto> with flakes, is there a setting to prevent autolock? It's driving me crazy when I forget --no-write-lock-file. Also, is there a way to invalidate the cache? Even without locking, it won't redownload one of the flake dependencies I've just updated
<supersandro2000> teto: Are you tracking a branch or a commit?
<teto> supersandro2000: a branch
<supersandro2000> I think the TTL in your nix config is responsible for that
<sphalerite> supersandro2000: That doesn't sound like something that should be backported to me.
<supersandro2000> you could decrease that or track a commit
<supersandro2000> also nix-store --delete /nix/store/....
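supersandro2000's two suggestions (lower the TTL, or pin an input) look roughly like this; flakes were experimental at the time, so the exact flag spellings vary by Nix version and should be checked against `nix --help`:

```shell
# treat cached branch heads as stale immediately (tarball-ttl is a real
# nix.conf setting; 0 forces a re-check of the branch tip)
nix --option tarball-ttl 0 build .#something

# refuse to touch flake.lock for this invocation, so forgetting
# --no-write-lock-file can't silently lock anything
nix build --no-update-lock-file .#something

# force one input to be refreshed to its branch tip
# (newer Nix spells this `nix flake update nixpkgs`)
nix flake update --update-input nixpkgs
```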
alp has joined #nixos-dev
FRidh has quit [Ping timeout: 240 seconds]
FRidh has joined #nixos-dev
<gchristensen> I find it to be a mild crisis that github is able to delete users, pull requests, and comments without warning and without any way for us to access the data. I wonder if somebody could develop a tool to archive every patch pushed to a PR via the webhook event stream?
<timokau[m]> supersandro2000: I'm not sure. For the sake of a general policy, let's assume it does (or all other backports are also compatible with our backport policy).
<lukegb> gchristensen: did that, err, happen recently?
<gchristensen> yes
<lukegb> :/
<lukegb> Spammy user?
<gchristensen> no
<gchristensen> volth
<lukegb> Huh, why did they get oublietted
<lukegb> I guess that's a separate topic
<qyliss> oh wow that is scary
<lukegb> 14:25:05 <lukegb> That is obnoxious in terms of erasing history of commits though - even though in theory the commit history should be standalone, PR discussions do tell us a lot...
<lukegb> Oops
<lukegb> In any case, we should at least archive merged PRs, I think?
<gchristensen> yes, and our commits definitely don't tell the story -- and some contributors have been a bit hostile to putting PR context in to commits
* lukegb nods
<qyliss> I think that's a real problem we need to solve
<gchristensen> me too
<qyliss> I think contributor hostility in general is a growing problem
<infinisil> Could be a bit related to PR's lingering a lot
<gchristensen> volth did not delete their own account
<gchristensen> ideally we could stay focused on the issue of archiving for a bit?
<qyliss> yes good
<qyliss> oh, GitHub deleted the account?
<gchristensen> yes
<gchristensen> only GitHub can "disappear" an account
<gchristensen> if you delete your own account, your comments are owned by Ghost
<qyliss> Yeah
<qyliss> I was assuming you meant they'd gone through and manually deleted every commend and stuff
<gchristensen> github disappears accounts, taking with it comments and PRs
<gchristensen> no, because also their PRs are 404
<qyliss> I can't believe they take away PRs
<eyJhb> So everything from volth is gone, including comments, etc? That leads to weird convos...
<qyliss> damn
<lukegb> qyliss: yeah, they don't even do that if you delete your own account
<lukegb> So: archiving PR discussions+latest .patch?
<lukegb> And then issue comments?
<gchristensen> yes, though this is volth's preferred method of using github -- they used to delete all their comments after they were seen -- but again this was totally github
<gchristensen> yeah I think so
<qyliss> I'm on vacation and was looking for a project
<lukegb> Is the GitHub merged indicator reliable enough or should we just gobble all closed/merged PRs?
<qyliss> so I can have a go at doing archiving I suppose
<lukegb> (ISTR there are cases where it won't trigger as merged properly)
<gchristensen> we can take a backup and get today's data, but having a continuous archival for ongoing would be good I think
<qyliss> lukegb: I'd just archive every event as it comes in
<gchristensen> and fetch .patch's
<qyliss> (then I'd send it to a mailing list, which would mean data ownership is distributed and even if the archive goes away other people will have copies)
<lukegb> Fair. I was mostly thinking it would be easy/convenient to just keep history for merged things
<lukegb> Basically: when PR is closed send a summary of it at close time to a mailing list
<infinisil> IPFS might actually be good for distributing such an archive :)
<qyliss> lukegb: Non-merged PRs are valuable too
<lukegb> qyliss: I'd argue they're valuable, but not vital like merged ones
<qyliss> sure, but why bother differentiating
<qyliss> there aren't way more unmerged ones than merged
<qyliss> my one concern here would be that GitHub users expect to be able to redact comments
<gchristensen> not really, you can always go back to the edit history
<qyliss> gchristensen: I think you can choose whether to make the edit visible
<qyliss> in case you paste a password or something I guess
<gchristensen> easy to check
<infinisil> volth being wiped could just be that they themselves requested removal
<gchristensen> ah you can indeed delete the revision
<infinisil> "right to be forgotten" in GDPR-kinda thing
<andi-> I've mirrored all our refs (that includes all merged and not merged PRs) to a local git repo for a year or so.. recently github just hangs up on me when I try to fetch them..
<gchristensen> infinisil: that I'm almost certain that isn't what happened today
<infinisil> gchristensen: Where does your certainty come from?
<gchristensen> I am happy to share privately
<qyliss> I suppose GitHub already does notification emails
<qyliss> So if you paste a password that's probably already getting irrevocably sent to 1000 people
<gchristensen> andi-: do you by chance have the pr #96859?
<andi-> It broke ~2 months ago, let me check
<lukegb> gchristensen: it's still fetchable
<gchristensen> oh?
<qyliss> oh that's good news
<lukegb> As "pull/96859/head"
<gchristensen> can you push it somewhere for me?
<gchristensen> ehh, I can get it, I don't need to be lazy :P
<gchristensen> thanks
<qyliss> that's fascinating
<andi-> gchristensen: esos: 1.4.1 -> 1.10.0 ?
<andi-> ^mesos
<lukegb> (which is a trick I learned from nixpkgs-review)
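The `pull/<n>/head` trick lukegb used survives account deletion because GitHub keeps those hidden refs on the repo itself. A self-contained sketch, simulating GitHub's hidden refs with a local repo (the PR number is the one from the discussion):

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
# simulate the server side: a repo exposing a PR under refs/pull/<n>/head
git init -q hub
git -C hub -c user.email=a@b -c user.name=a commit -q --allow-empty -m pr
git -C hub update-ref refs/pull/96859/head HEAD
# client side: fetch that one PR head into a local branch,
# exactly as `git fetch origin pull/96859/head:pr-96859` does against GitHub
git init -q work && cd work
git fetch -q ../hub pull/96859/head:pr-96859
git log --oneline pr-96859
```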
<andi-> I've been pushing them here: https://git.sr.ht/~andir/nixpkgs
<andi-> the ui doesn't show them but they should all be there
* lukegb grumbles about "what's next, github just changing the repo history to delete commits they don't like"
<eyJhb> lukegb: Don't give them ideas!
<andi-> rewriting commit messages that do not fit their pov ;)
<qyliss> For anyone other than andi-, I think if you change +refs/heads/*:refs/remotes/nixpkgs/* to +refs/*:refs/remotes/nixpkgs/* it'll fetch all PRs and stuff on git fetch
<eyJhb> andi-: would it really suprise you if they did?
<qyliss> although if that just hangs it's probably not very helpful :p
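qyliss's refspec change, applied with `git config` instead of hand-editing (the remote name `nixpkgs` is just the example from the message above):

```shell
set -e
repo=$(mktemp -d); git init -q "$repo"; cd "$repo"
git remote add nixpkgs https://github.com/NixOS/nixpkgs.git
# replace the default branches-only refspec (+refs/heads/*) with one
# matching every ref, so `git fetch nixpkgs` also brings in PR heads
git config --replace-all remote.nixpkgs.fetch '+refs/*:refs/remotes/nixpkgs/*'
git config --get remote.nixpkgs.fetch
```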
<andi-> The fun part about that archiving is that it triggered the sr.ht git backend rewrite from Python -> Go as it was not able to handle the load...
<qyliss> lol
<gchristensen> hahaha
<andi-> apparently they did exec once for each ref that you push
<eyJhb> That sounds tedious
<andi-> regarding archival: I was thinking about redirecting GH webhooks to pre-signed S3 urls. Would that work?
<qyliss> andi-: that doesn't include the patch
<andi-> qyliss: you mean the HTTP PATCH that s3 requires? IIRC you can set a bucket policy for POST
<qyliss> andi-: no I mean GitHub won't include the diff in a PR webhook afaik
<andi-> ah yeah
<lukegb> oh wow
<andi-> did anyone ever export a repository? how do all the different events in the export look like?
<lukegb> qyliss: I just pulled all PRs
<lukegb> whoops
<qyliss> lukegb: I'm doing it now
<qyliss> how big is it?
<lukegb> not huge
<qyliss> andi-: fwiw pulling all refs seems to be working for me
<lukegb> it finished for me
<andi-> lucky you, I'm still getting error: RPC failed; curl 18 transfer closed with outstanding read data remaining
<qyliss> andi-: how are you doing it?
<andi-> mirror = true\nfetch = +refs/*:refs/ in the remote
<andi-> git fetch github && git push srht
<qyliss> hmm
<lukegb> my nixpkgs .git dir is... 8GB?
<qyliss> that's slightly different to what I did (posted above)
<qyliss> mine is 5.4
<andi-> lukegb: did you use SSH to pull?
<qyliss> I used HTTPS
<lukegb> I used HTTPS
<andi-> I am using HTTPS because I do not want credentials for my GH account here (to not manage another one)
supersandro2000 has quit [Excess Flood]
<lukegb> I don't have SSH for my upstream remote because it wouldn't do anything useful :P
supersandro2000 has joined #nixos-dev
<andi-> q gchristensen
<qyliss> The problem with just archiving all refs though is that refs can still be force-pushed
<qyliss> so it would only be okay as long as they were pushed to a repo that never GCed
<qyliss> which presumably would be one that we'd have to host ourselves
<supersandro2000> let me throw a wild theory in the pot: github disabled his account temporarily
<qyliss> because no git hoster is going to do that
<qyliss> supersandro2000: I don't think theorising what happened is that useful
<qyliss> no matter what happened, GitHub has shown that they can do this
<qyliss> so no matter the reason, we need to have some strategy to counter it
<gchristensen> +1
<gchristensen> a well-intentioned user or even administrator could make a catastrophic mistake and it'd be nice to recover without too much trouble. continuity of business and whatnot
<supersandro2000> the problem we have is that nixpkgs is too big and we will get API rate limited with any tool
<qyliss> presumably they don't rate limit webhooks
<adisbladis> supersandro2000: Technical problems we can work with
<qyliss> and with webhooks + git fetch (and you can fetch all PRs at once) there'd be no other API use to rate limit
<qyliss> my proposal is this:
<qyliss> we host a git repo somewhere we can disable the GC, and fetch every ref once an hour or something
<qyliss> (also with the config option that allows fetching arbitrary refs)
<supersandro2000> the problem is we don't gc
<qyliss> we hook up a small program that receives github webhooks and just sends the JSON blob to a mailing list
<qyliss> supersandro2000: what do you mean?
<qyliss> anyway, with these two things, we get all the information that we need, in a way that is very easily mirrored
<supersandro2000> if anyone decides to push 1 GB random PR with that, too
<qyliss> supersandro2000: nothing stopping them doing that today
<qyliss> you'd just have to keep an eye on it
<qyliss> the archiver has to archive
<adisbladis> qyliss: A friend is using https://github-backup.branchable.com/
<adisbladis> It may be worth looking into
<supersandro2000> qyliss: and throw out garbage because disk space is cheap but not free
<qyliss> adisbladis: I think archiving webhooks is a better approach if you have perms to set one up, because then you don't need to worry about new API things coming along
<qyliss> adisbladis: since you can just indiscriminately archive everything
<gchristensen> happy to help with that
<andi-> mhm, I already have that postgresql with events coming in...
<qyliss> one problem is that if somebody git clones the non-GCed archive, they won't get dangling commits
<andi-> publish daily/weekly/… tarballs?
<qyliss> Yeah that would work
<qyliss> I wonder if git bundles can include all commits
<qyliss> you could host torrents for the tarballs
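To andi-'s tarball question: `git bundle` already produces a single distributable file of all refs, so the periodic dump could be a bundle rather than a tarball:

```shell
set -e
repo=$(mktemp -d); cd "$repo"; git init -q
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m snapshot
# pack every ref into one file that can be mirrored or torrented;
# note --all covers refs only, not dangling commits (qyliss's caveat)
git bundle create all.bundle --all
git bundle verify all.bundle >/dev/null && echo bundle-ok
```

A recipient restores it with `git clone all.bundle`, no network or hoster involved.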
<andi-> is GH event delivery guaranteed? For example if you reboot a machine does it "backfill"?
<qyliss> andi-: it'll retry a few times
<qyliss> but it won't retry forever
<qyliss> you can manually resend webhooks in the admin view though
<qyliss> so as long as we knew we'd missed them we could have gchristensen resend them
<andi-> nice, we can have that as a oneshot service on system startup. The human API via IRC :D
<qyliss> I like it
<andi-> qyliss: if you want to hack on this go ahead. I'm willing to host one instance and would really like if we end up with multiple people recording history (and distributing the same)
<andi-> I can also chip in a lot of time this month..
<qyliss> I wonder if we could make a ref that just included every commit
<qyliss> yeah we definitely could
<qyliss> have a branch with a series of empty merge commits that have every other commit as second parent
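That branch-of-empty-merges idea can be sketched with git plumbing; the ref name `everything` and the empty-tree commits are illustrative assumptions:

```shell
set -e
repo=$(mktemp -d); cd "$repo"; git init -q
export GIT_AUTHOR_NAME=a GIT_AUTHOR_EMAIL=a@b \
       GIT_COMMITTER_NAME=a GIT_COMMITTER_EMAIL=a@b
tree=$(git mktree </dev/null)            # the empty tree
root=$(git commit-tree -m root "$tree")
git update-ref refs/heads/everything "$root"
# a commit on no branch at all: normally gc would eventually drop it
stray=$(git commit-tree -m stray "$tree")
# graft it on via an empty merge commit whose second parent is the stray
# commit, making it reachable from (and clonable via) the archive branch
tip=$(git rev-parse refs/heads/everything)
keep=$(git commit-tree -m "keep $stray" -p "$tip" -p "$stray" "$tree")
git update-ref refs/heads/everything "$keep"
git rev-list --count refs/heads/everything   # prints: 3
```

Appending one such merge per newly-seen commit gives the append-only, order-dependent (hence non-deterministic across independent archives) history discussed above.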
<adisbladis> If anyone would like to experiment with this https://github.com/nix-community/ could be used as a test
<adisbladis> I'd like the same mirroring happening there
tilpner_ has joined #nixos-dev
tilpner has quit [Ping timeout: 256 seconds]
tilpner_ is now known as tilpner
thibm has quit [Ping timeout: 240 seconds]
thibm has joined #nixos-dev
mkg20001 has quit [Quit: Idle for 30+ days]
orivej has quit [Ping timeout: 240 seconds]
orivej has joined #nixos-dev
alp has quit [Remote host closed the connection]
alp has joined #nixos-dev
<supersandro2000> quick question about https://github.com/NixOS/nixpkgs/pull/105663
<{^_^}> #105663 (by mannahusum, 8 hours ago, open): pianobooster: Fix 'Could not find the Qt platform plugin "xcb" in ""' for
<supersandro2000> This does not change anything right?
thibm has quit [Ping timeout: 240 seconds]
thibm has joined #nixos-dev
mkaito has quit [Quit: WeeChat 2.9-dev]
mkaito has joined #nixos-dev
mkaito has joined #nixos-dev
<qyliss> lol, I should have picked a repo smaller than Nixpkgs to test my "make every commit reachable" script on
<qyliss> andi-: do you think it matters if the "everything" ref is non-deterministic between independent archives?
<qyliss> I suppose it doesn't matter if it isn't append-only
<qyliss> so I could have some ordering (like hash) and do a linked list insert for each new commit
<andi-> I think the everything ref is great to have and we should be able to determine if they are "the same" but they do not have to be identical..
<lukegb> \aliens{blockchain}
<qyliss> andi-: do you want to look at something to archive webhooks while I'm looking at this git stuff?
<qyliss> I propose webhook -> email -> public-inbox but there are other approaches that would work
kalbasit has joined #nixos-dev
alp has quit [Ping timeout: 272 seconds]
<V> wait what
<V> volth is unhistoried?
<V> I was literally looking at their profile a couple of days ago
<andi-> qyliss: can do, i already have most of the code for that
<srk> I'm running mirroring of a bunch of repos to be able to execute post-receive hooks: two repos set up with --mirror, the first fetches and pushes to the other one, which has the post-receive hook
<srk> got tired of webhooks, irc notify deprecation.. was running it for nixpkgs as well but it's not active currently
<srk> (with the intention to stream commits to IRC / web)
mkaito has quit [Quit: WeeChat 2.9-dev]
<hexa-> FRidh: can you add https://hydra.nixos.org/jobset/nixpkgs/staging-20.09 to the staging-stable project desc?
<hexa-> I hope that we'll get a clickable link from that :o
saschagrunert has quit [Quit: Leaving]
red[evilred] has joined #nixos-dev
<red[evilred]> testing with nix-community makes a lot of sense - because, scale
<red[evilred]> but everyone knows this so I'm going to go make some chili :-)
<FRidh> hexa-: done. yes
<hexa-> FRidh: thanks, just noticed I could've done that myself, from the projects list view …
alp has joined #nixos-dev
ris has joined #nixos-dev
cole-h has joined #nixos-dev
rajivr has quit [Quit: Connection closed for inactivity]
alp has quit [Ping timeout: 272 seconds]
<hexa-> You can't see this card
<hexa-> This card references something that has spammy content.
<hexa-> thanks GitHub.
<cole-h> Very cool
<{^_^}> cole-h: 2 days, 2 hours ago <gchristensen> when you're about, could you rebase https://github.com/NixOS/nixpkgs/pull/104740 to use the v0.5.1 upstream point release of packngo?
<cole-h> gchristensen: Wrong PR? :D
<gchristensen> lol
<gchristensen> very stale and the wrong link. great success
<cole-h> Hehe
<cole-h> Guess it's been a while since I was here :P
<gchristensen> since you said anything, anyway
<lukegb> What's the difference between the nixpkgs and nixos projects in hydra, anyway
<clever> lukegb: nixpkgs builds for darwin and linux, and has less testing
<clever> lukegb: nixos only builds for linux, and has special tested sets, which the channels will block on
<samueldr> since the build outputs are shared, it's not duplicating work
alp has joined #nixos-dev
m1cr0man has joined #nixos-dev
FRidh has quit [Quit: Konversation terminated!]
alp has quit [Ping timeout: 272 seconds]
orivej has quit [Ping timeout: 264 seconds]
alp has joined #nixos-dev
red[evilred] has quit [Quit: Idle timeout reached: 10800s]
thibm has quit [Quit: WeeChat 2.6]
cole-h has quit [Quit: Goodbye]
cole-h has joined #nixos-dev
sgo has joined #nixos-dev
kalbasit has quit [Remote host closed the connection]
kalbasit has joined #nixos-dev
costrouc has joined #nixos-dev
stigo has quit [Ping timeout: 256 seconds]
lassulus has quit [Remote host closed the connection]
lassulus has joined #nixos-dev
sgo has quit [Ping timeout: 256 seconds]
sgo has joined #nixos-dev
zimbatm has quit [Ping timeout: 272 seconds]
__monty__ has quit [Quit: leaving]
jonringer has joined #nixos-dev