tilpner has quit [Remote host closed the connection]
tilpner has joined #nixos-dev
alp has quit [Ping timeout: 260 seconds]
alp has joined #nixos-dev
orivej has quit [Ping timeout: 260 seconds]
<mkaito->
is there a separate channel for nix development?
mkaito- is now known as mkaito
mkaito has joined #nixos-dev
mkaito has quit [Changing host]
<mkaito>
we're running into some freezing issues on nix master that I'd like to debug, and I've got some trouble getting `nix develop` to compile nix at all. I don't know where I should bring this up.
<mkaito>
wish I could reproduce, but nix shell refuses to eval :P
<yorick>
does nix develop refuse to eval as well?
<mkaito>
but basically, I would get into the dev shell, bootstrap, configure, make, and then get a million glibc errors
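A rough sketch of the flow described above, assuming the stock steps from the Nix repo's hacking notes; exact flags may differ on master:

    nix develop                  # dev shell from a checkout of the nix repo
    ./bootstrap.sh               # regenerate the configure script
    ./configure $configureFlags  # $configureFlags should be set by the dev shell
    make                         # the reported glibc errors show up around here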
<yorick>
hm
<mkaito>
yeah, it just gets stuck at some point and does nothing
<mkaito>
yesterday it was on a .drv for lowdown, today it's brotli. yesterday it would get stuck acquiring a file lock, today it gets the lock and THEN gets stuck
<mkaito>
I'm running fs checks just in case
<mkaito>
once it gets stuck, I have to SIGKILL the nix-daemon, and then garbage-collect the entire store, or nothing works.
<mkaito>
well, nothing that would touch the store anyway
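A hedged sketch of the recovery steps described above, assuming a systemd-managed daemon; a launchd or manually started daemon would need the equivalent commands:

    sudo systemctl kill --signal=SIGKILL nix-daemon.service   # SIGTERM isn't enough once it hangs
    sudo systemctl start nix-daemon.service
    nix-collect-garbage                                        # the "garbage-collect the entire store" step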
<mkaito>
oh well, gotta get ready for a wedding, guess I'll try again on monday with more patience.
<raboof>
hmm, looks like the test expectations indeed don't match the implementation. weird thing is I do see the tests in the pypi tarball, but not in the sources in https://github.com/tarpas/pytest-testmon
jonringer has joined #nixos-dev
<Ericson2314>
siraben: did you see what the other person linked in bintools-wrapper?
<Ericson2314>
You add a new case in that big if-else chain
<abathur>
does anyone have a sense of how safe/unsafe it is to run the last few store-modifying steps of a Nix install (installing nix itself, running nix-store --load-db, setting up/updating a channel, dealing with SSL certs, etc.) on an existing store installed with a previous installer version?
<siraben>
Ericson2314: Yeah I tried that and added a new case like "${rmToolchain}/lib/...so.3" but it failed
<Ericson2314>
siraben: what did it fail with?
<siraben>
Erm, it's garbage-collected now. I'll replicate it tomorrow and post on the PR.
alp has joined #nixos-dev
<Ericson2314>
siraben: ok, thanks
rajivr has quit [Quit: Connection closed for inactivity]
__monty__ has joined #nixos-dev
alp has quit [Ping timeout: 272 seconds]
<mkaito>
domenkozar[m]: thanks for the tip; I'm running nix master from yesterday evening, actually.
<__red__>
but when I build it as a dependency - it fails
<__red__>
eg:
<__red__>
nix-build nixpkgs -A termite
<__red__>
I've not dug into why to be honest
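A hedged sketch of the obvious next step for narrowing this down, assuming (per the messages below) that the dependency in question is the patched vte:

    nix-build '<nixpkgs>' -A vte       # the library on its own
    nix-build '<nixpkgs>' -A termite   # the dependent build reported as failing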
<__red__>
okay - now I want to know and I'm going to hunt it down
<__red__>
brb
<JJJollyjim>
yeah termite has patches
<JJJollyjim>
as in my PR
<JJJollyjim>
they call it vte-ng upstream
<JJJollyjim>
we've been applying the same patches from vte-ng 0.56, which applied cleanly up until vte 0.62 was released
<__red__>
but the version that came down shouldn't have changed - so why is it failing
<JJJollyjim>
hm?
<JJJollyjim>
someone update vte
<JJJollyjim>
*updated
<JJJollyjim>
we don't package vte-ng, we package vte and override it with the patches that make up vte-ng
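The pattern JJJollyjim describes, as a hedged sketch; the real expression lives in nixpkgs and the patch file here is only a placeholder:

    # layer an extra patch onto plain vte, the way nixpkgs derives the
    # vte-ng behaviour from vte
    nix-build -E '
      with import <nixpkgs> {};
      vte.overrideAttrs (old: {
        patches = (old.patches or []) ++ [ ./vte-ng.patch ];  # placeholder patch file
      })
    '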
<__red__>
oh - now I see it
<__red__>
lemme look at your PR
<__red__>
sounds like you unraveled this already
leungbk has quit [Remote host closed the connection]
leungbk` has joined #nixos-dev
cole-h has quit [Ping timeout: 260 seconds]
<jonringer>
If anyone participated in the 20.09 release (ZHF or otherwise) please join https://meet.google.com/yzv-ynuw-fjb, I'm hosting the retrospective meeting
<gchristensen>
<3 jonringer
<{^_^}>
jonringer's karma got increased to 15
<aanderse>
free karma to anyone who joins
* gchristensen
prepares a scary patch
<abathur>
anyone have a sense of how safe/unsafe it is to run the last few steps of a Nix install (installing nix itself, running nix-store --load-db, setting up/updating a channel, dealing with SSL certs, etc.) on an existing store installed with a previous installer version? (this is already possible for single-user installs, and possible on master for multi-user; multi-user on 2.3.x still blocks it)
<LnL>
I would _definitely_ back up the db if you don't want to start your store from scratch
<LnL>
but given that you restore the db I think it's pretty safe in principle, as long as you make sure not to trigger a gc
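A hedged sketch of the backup LnL suggests, assuming a standard /nix layout and a systemd-managed daemon:

    sudo systemctl stop nix-daemon                     # avoid copying the sqlite db mid-write
    sudo cp -a /nix/var/nix/db /nix/var/nix/db.backup
    sudo nix-store --dump-db > /tmp/nix-registration   # text form that nix-store --load-db can restore
    sudo systemctl start nix-daemon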
zarel has quit [Ping timeout: 272 seconds]
zarel has joined #nixos-dev
<ryantm>
@jonringer is going to be submitting some RFCs for modifying the release process. I'm trying to proactively gather a candidate RFC shepherd team for the steering committee to approve. Is anyone interested in volunteering to help shepherd?
<jonringer>
ryantm++
<{^_^}>
ryantm's karma got increased to 26
<__red__>
btw, "red" is fine, I just didn't want to derail the convo. Only my wife when she's pissed at me calls me by my full name :-)
<ryantm>
Two of the RFCs he wants to make are 1. move the release back 2 months, 2. add a 2-week branch-off period during which staging can't be merged into staging-next
<ryantm>
or forward 2 months might be more appropriate*
<MichaelRaskin>
ryantm: it's like UTC±2, just call it «later»
<gchristensen>
why move the release 2mo?
<gchristensen>
actually, I can wait for the RFC :)
<LnL>
yeah, I'm not sure there's much value to doing that tho
<abathur>
I guess what I'm trying to fumble towards is whether install-over has unsafe parts and is a bit of a misfeature (unless those parts are skipped for reinstalls)
<LnL>
ah, well the single-user install can be used to upgrade, so that doesn't drop the existing db
<abathur>
so in nix#3128 one of the changes was to make it so multi-user installs weren't blocked from reinstalling as well
<abathur>
that commit isn't in master, not quite sure if it's intentional or not yet
<abathur>
er
<abathur>
is in master, not in 2.3.x
__monty__ has quit [Quit: leaving]
<LnL>
thought you were doing an actual uninstall, but not that making a backup hurts :)
bridge[evilred] has quit [Remote host closed the connection]
bridge[evilred] has joined #nixos-dev
<abathur>
well, a few targets; I've been trying to square darwin volume updates with various possibilities and edge-cases here with a "curing" process that can currently ~undo the darwin-specific parts (and I think these bits naturally extend into a general uninstaller once other components have the same treatment), but I was assuming we'd already been living with reinstallable multi-user
<LnL>
importing the store paths should be idempotent already, the conditions revolve more around making sure an old daemon isn't still running, etc.
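A hedged illustration of both points, reusing the dump from the earlier sketch: re-registering paths is a no-op, and the main precondition is that no stale daemon is still serving the store:

    pgrep -ax nix-daemon                               # confirm only the expected daemon is running
    sudo nix-store --load-db < /tmp/nix-registration   # re-registering already-known paths is idempotent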
<abathur>
my general curing approach for the volume itself is prompting them to delete it (and requiring sudo as confirmation)
<LnL>
upgrading the daemon by running the installer again isn't a thing currently AFAIK
<abathur>
but the edge case of trying to "upgrade" old volume configs is a little different; the simple way to do it is to say: if you want to upgrade, you'll have to start a fresh store
<abathur>
but I'm not sure that's actually useful to people with existing stores
<LnL>
I think upgrading would definitely be valuable
<abathur>
so if it lets those specific users keep their volume with an intact store, it needs to not break it
<LnL>
but the implementation could be to uninstall and retrace all the steps except for the data itself
<abathur>
that's basically how the curing step currently works--revert everything to roughly-clean state and then run the installer as normal
<jonringer>
gchristensen: I'm fine with merging 643
<abathur>
"everything" is a misnomer here; it's just the synthetic.conf, fstab, volume, keychain, and mounting daemon--but I'm imagining a next phase where curing and --uninstall the same routine, and every install would attempt to uninstall ~everything before trying to install it (in place of the current validate_assumptions steps that just blow up and tell you to fix it)
<abathur>
*where curing and --uninstall _are_ the same routine
<abathur>
LnL so in that next phase, expanding to cover everything, the daemon would get removed at the curing step
<abathur>
in fact I've already got the statement to do that for macOS, but part of my reason for saying that's a next step is that it needs poly support on the Linux side as well
<LnL>
yeah, that's what I also had in mind, with the exception that uninstall would ask to delete the actual data at the end
<abathur>
by data, are you talking about all of /nix? config contents?
<abathur>
(the way my curing step for darwin *currently* works it actually will nuke the whole Nix volume)
<abathur>
but I'm fumbling with what it should do about people on existing macOS installs who might be inconvenienced by having to rebuild their full store
<LnL>
I mean the volume or /nix directory depending on the context
kalbasit has quit [Ping timeout: 256 seconds]
<abathur>
oh, I think I mis-understood you
<abathur>
you mean curing == uninstall without deleting /nix
<abathur>
yeah?
<LnL>
yeah, so that part ends with only an unmounted volume as the remaining thing
<abathur>
I'm mostly deleting it because of the complexity of the darwin setup; wiping it (it asks for a password) means we can create a known volume, encryption credential, keychain entry, fstab, synthetic.conf, launchdaemon mounter that embeds a reference to the volume UUID
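For reference, a hedged sketch of those pieces roughly as the stock macOS installer lays them down; labels and file names are the usual defaults and may vary by installer version:

    # /etc/synthetic.conf: a single line that creates the /nix mount point
    nix
    # /etc/fstab (edited via vifs): mount the volume by label, hidden from Finder
    LABEL=Nix\040Store /nix apfs rw,nobrowse
    # plus a launchd job, e.g. /Library/LaunchDaemons/org.nixos.darwin-store.plist,
    # whose program mounts the volume at boot and references its UUID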
<abathur>
rather than have to try and triage a lot of annoying edge cases where each of those components could be set up incorrectly
<LnL>
right, but all of those can get deleted/disabled and recreated afterwards
<abathur>
they can, but for example we have to do a dance if they keep an encrypted volume to know whether we've got a good credential for it
<abathur>
so we have to go find the credential, and dismount+lock it, and then try to use their keychain credential to unlock and mount it
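A hedged sketch of that dance, assuming the passphrase is stored as a generic password in the System keychain keyed on the volume (the exact label/UUID used as the key is an assumption here):

    vol="Nix Store"                       # volume label; a UUID works for diskutil too
    sudo diskutil unmount force /nix || true
    sudo diskutil apfs lockVolume "$vol"
    sudo security find-generic-password -s "$vol" -w /Library/Keychains/System.keychain \
      | sudo diskutil apfs unlockVolume "$vol" -stdinpassphrase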
<abathur>
and then, of course, we can't delete that credential
<LnL>
yeah, except for the encryption passphrase which is kind of a problem in general I feel like
<abathur>
but if we couldn't find a credential then we have to make them give us one (or then, finally, tell them they can't continue without letting us delete the volume)
<LnL>
that works but only if it came from the user initially no?
<abathur>
so my tack has been to burn it to the ground and start fresh, and make an env for excluding all of this
<abathur>
depends on the setup I guess; if we just assume we made the volume, if we can't find the credential then they've presumably deleted it or reset the keychain or something
<abathur>
or moved/accessed the drive from another system if it is external
<abathur>
but yeah, the other components are easy enough to delete and re-write
<abathur>
I guess it could let them keep the volume (if installing over it is pretty safe) as long as it *can* find the credential readily