<delroth>
as a user I really wonder why there is staging / staging-next if it just gets merged with huge regressions like this :/ every single time there's a staging-next merge I end up having to send 2-3 PRs to unbreak stuff, and my infra isn't really that big
<samueldr>
I haven't checked; was the unzip patch sent to staging or staging-next first?
<delroth>
what's the process for considering a staging-next merge as acceptable btw?
<samueldr>
gchristensen: should we just press the green button?
<samueldr>
delroth: not entirely positive, the staging/staging-next process is a bit personal and internalized by a few devs watching it I think
<delroth>
as I was mentioning earlier, every single time there's a staging-next merge it's a day or two of pain trying to get fix PRs merged in because half of my system doesn't build anymore
<delroth>
and here we clearly have a case where running any of the "big" nixos tests would have caught the regression
<samueldr>
I think an issue might be the lack of tooling / reporting on hydra evals
<delroth>
there are continuous builds on staging though, correct?
<worldofpeace>
staging-next is hard :)
<delroth>
I understand it's a hard problem, and that defining a perfect bulletproof process is tricky
<samueldr>
while there are hydra jobsets, for both, since there's no notification, I guess it's easy just to not really check it
<samueldr>
no reporting either
<samueldr>
and yeah, in addition to having the branch merge back into master handled "by feel", it may need some thinking
<samueldr>
like I just saw it might have broken qt
<delroth>
ghostscript is broken too
<delroth>
afaict
<samueldr>
not related to the zip bomb for qt
<delroth>
the ghostscript failure happens to be bad luck with parallel builds afaict
<samueldr>
right because in one it's hello, in the other nixpkgs.hello
<samueldr>
anyway there's trunk in nixpkgs
<samueldr>
I need to write the filter interface for the evaluation page I think
<samueldr>
so it's easier to pick e.g. only failures, remove dependency failures
<samueldr>
(or pick only timeouts)
<samueldr>
I guess it would help better pinpoint the issues in those situations
<samueldr>
another thing that would help, but we *may* not be entirely ready to handle the load, is to pick those big honking changes into a one time eval+build on hydra
<samueldr>
(if there was some tooling to better express "eval before the change, then eval after the change")
<samueldr>
such tooling could also realistically be used to better fast track security updates I guess
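(A rough sketch of the kind of filtering samueldr describes for the evaluation page, assuming Hydra's JSON API serves an evaluation's builds at `/eval/<id>/builds` with an `Accept: application/json` header, and the usual `buildstatus` codes; the endpoint, codes, and all function names here are assumptions, not a definitive implementation.)

```python
import json
import urllib.request

# Assumed Hydra buildstatus codes -- verify against the Hydra source
# before relying on them.
FAILED, DEP_FAILED, TIMED_OUT = 1, 2, 7

def filter_builds(builds, wanted_statuses):
    """Keep only builds whose buildstatus is in wanted_statuses,
    e.g. only direct failures, hiding dependency failures."""
    return [b for b in builds if b.get("buildstatus") in wanted_statuses]

def fetch_eval_builds(eval_id, base="https://hydra.nixos.org"):
    """Fetch the builds of one evaluation as JSON (assumed endpoint)."""
    req = urllib.request.Request(
        f"{base}/eval/{eval_id}/builds",
        headers={"Accept": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage sketch: only direct failures, no dependency failures:
#   failures = filter_builds(fetch_eval_builds(some_eval_id), {FAILED})
# Or only timeouts:
#   timeouts = filter_builds(fetch_eval_builds(some_eval_id), {TIMED_OUT})
```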
<alunduil>
is there any guideline around how modules are organized? I'm looking to add a small module for automatically replicating zfs snapshots (nothing fancy to start with) but I don't know if I should co-locate it with the current zfs module or create a new one in a better location. Let me know if this question belongs in #nixos as well.
<gchristensen>
should be a separate module. for example, there is already one for znapzend
<alunduil>
nice. I didn't see that one. I'll check that as an example. Thanks!
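(For reference, a minimal sketch of what enabling the existing znapzend module might look like; option names are from memory and the dataset, plan, and host values are purely hypothetical -- check the NixOS manual for the real interface.)

```nix
{
  services.znapzend = {
    enable = true;
    # Hypothetical setup: snapshot tank/data and replicate it to a
    # remote backup host.
    zetup."tank/data" = {
      plan = "1d=>1h,1m=>1d";  # keep hourlies for a day, dailies for a month
      recursive = true;
      destinations.backup = {
        host = "backup.example.org";    # hypothetical remote
        dataset = "backup/tank/data";
      };
    };
  };
}
```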
<yorick>
gchristensen: is there some arm community builder? I'm trying to debug armv7l not building on aarch64 anymore
<samueldr>
there's no armv7l community builder*, the community builder is aarch64
<yorick>
*?
<samueldr>
* (in practice it's able to use a KVM accelerated armv7l system)
<yorick>
samueldr: why can't we just add it to extraPlatforms and call it a day?
<yorick>
alternatively, how does that work?
<samueldr>
oh, that doesn't generally work
<samueldr>
because of impurities
<samueldr>
it looks like it works for a while
<yorick>
and then it fails on libffi or something? ;)
<samueldr>
but there are things that end up looking at impure things, like the cpu features, or trying to run things
<samueldr>
there's no personality in the kernel like for i686 vs. x86_64 for armv7 vs. aarch64
<samueldr>
(I think it's called personality, right?)
<samueldr>
so, as of right now, the only sane way to build armv7l on aarch64 is to boot in 32 bit (armv7) mode or use kvm
<yorick>
our current armv7l builder is an r-pi3 with an armv7l system, it seems to work
<yorick>
okay, how does kvm work?
<samueldr>
erm, can't help much more than "it just works, like on x86_64" lol
<samueldr>
but it did work for me on an rpi3b
<samueldr>
as long as /dev/kvm is there, qemu with kvm will work
* samueldr
checks for a post
<yorick>
I mean, how do I set it up?
<samueldr>
for best results, using the EFI iso image for armv7l is likely easier
<andi->
I think sphalerite was working on that during ZuriHac?
<yorick>
I can't really boot that on a 128GB packet instance :D
<samueldr>
sorry yegortimoshenko, you weren't the one I wanted to ping
samueldr has left #nixos-dev [#nixos-dev]
samueldr has joined #nixos-dev
<sphalerite>
leaving out of shame?
<samueldr>
no, refreshing the nicknames list
<yegortimoshenko>
ah, ok :-)
<sphalerite>
:p
<yorick>
I'm still here
<samueldr>
yeah, seems like my client didn't believe you were
<yorick>
oh, I thought it was userspace kvm instead of system kvm
<samueldr>
just bog standard qemu with kvm :)
<yorick>
or maybe we should fix the impurities
<samueldr>
though, the trick is to still boot using qemu-system-aarch64, but passing -enable-kvm -cpu host,aarch64=off
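(Putting samueldr's trick together as a command sketch: check for `/dev/kvm`, then boot a 32-bit guest via `qemu-system-aarch64`. The image path, memory size, and core count are placeholders, not a tested invocation.)

```shell
test -c /dev/kvm || { echo "no /dev/kvm; KVM unavailable" >&2; exit 1; }

# -cpu host,aarch64=off runs the vCPU in 32-bit (AArch32) state on an
# aarch64 host; disk image and sizes below are hypothetical.
qemu-system-aarch64 \
  -enable-kvm \
  -cpu host,aarch64=off \
  -machine virt \
  -m 2G -smp 4 \
  -nographic \
  -drive file=armv7l.img,format=raw,if=virtio
```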
<samueldr>
the impurities are things like "being able to execute code"
<samueldr>
they are not AFAIUI nix's, but system impurities
<sphalerite>
andi-: yes I was trying… had some weird issues
<samueldr>
of note: there are limitations with kvm; first I think it's limited to 8 (or was it 6, or 12?) cores with qemu; though that may have been fixed
<samueldr>
though you could spin up more builder vms
<samueldr>
and for memory, LPAE has to be enabled in the kernel
<samueldr>
which isn't in the default config set for armv7l, but I *think* we could; IIRC all armv7 platforms we intend to target should have it
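(A sketch of flipping that kernel option on for an armv7l NixOS system, assuming `boot.kernelPatches` with `extraConfig` is the right lever and that the option is spelled `ARM_LPAE`; double-check against the kernel's `CONFIG_ARM_LPAE` before using.)

```nix
{
  # Hypothetical: enable LPAE so a 32-bit guest can address more memory.
  boot.kernelPatches = [{
    name = "enable-lpae";
    patch = null;
    extraConfig = ''
      ARM_LPAE y
    '';
  }];
}
```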