drakonis_ has quit [Read error: Connection reset by peer]
init_6 has joined #nixos-dev
orivej has quit [Ping timeout: 268 seconds]
init_6 has quit [Ping timeout: 255 seconds]
init_6 has joined #nixos-dev
asymmetric has joined #nixos-dev
asymmetric has quit [Ping timeout: 252 seconds]
asymmetric has joined #nixos-dev
init_6 has quit [Ping timeout: 246 seconds]
ajs124 has joined #nixos-dev
ajs124 has left #nixos-dev [#nixos-dev]
ajs124 has joined #nixos-dev
ajs124 has left #nixos-dev [#nixos-dev]
ajs124 has joined #nixos-dev
justanotheruser has quit [Ping timeout: 250 seconds]
ajs124 has left #nixos-dev [#nixos-dev]
orivej has joined #nixos-dev
timokau has quit [Quit: WeeChat 2.4]
timokau has joined #nixos-dev
justanotheruser has joined #nixos-dev
worldofpeace has joined #nixos-dev
asymmetric_ has joined #nixos-dev
asymmetric has quit [Ping timeout: 255 seconds]
asymmetric_ is now known as asymmetric
asymmetric has quit [Quit: Leaving]
drakonis_ has joined #nixos-dev
drakonis has quit [Ping timeout: 252 seconds]
ajs124 has joined #nixos-dev
asymmetric| has joined #nixos-dev
asymmetric| has quit [Client Quit]
worldofpeace has quit [Remote host closed the connection]
drakonis has joined #nixos-dev
drakonis_ has quit [Ping timeout: 240 seconds]
jtojnar has quit [Remote host closed the connection]
aszlig has quit [Quit: Kerneling down for reboot NOW.]
aszlig has joined #nixos-dev
<sphalerite>
hm, I'm considering backporting linux_testing bumps, since linux_testing is "unstable" anyway and there's not much point in having an outdated unstable kernel on the stable branch. Thoughts?
<samueldr>
I thought I asked them to be
<samueldr>
ah, only asked for _latest
<samueldr>
sphalerite: my thoughts are that they probably should be updated in sync with master; the kernel is in a special position within nixpkgs where every update is brought into stable just as into unstable, the main difference being that the default LTS won't be changed in stable
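For reference, the usual nixpkgs backport flow for a bump like this is a plain cherry-pick onto the release branch; a minimal sketch, where the branch name and commit hash are placeholders, not taken from the discussion:

```
# sketch of the usual nixpkgs backport flow; branch and commit are placeholders
git fetch origin
git checkout -b backport-linux-testing origin/release-19.03
# -x records the original master commit hash in the backported commit message
git cherry-pick -x <sha-of-the-linux_testing-bump-on-master>
# sanity-build before opening the PR against the release branch
nix-build . -A linux_testing
```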
<sphalerite>
yeah
<sphalerite>
I'd think the same would apply for any unstable/release-candidate/development-version software
<sphalerite>
also, is there a reason why linux_testing isn't built by hydra?
<sphalerite>
(done the backport)
<samueldr>
looks like the author of `linux-testing.nix` wondered the same
<samueldr>
not "the", but "an" author
<sphalerite>
well, it would make me happy! :p
<samueldr>
the question has been open since 2014 :)
asymmetric| has joined #nixos-dev
drakonis_ has joined #nixos-dev
catern has joined #nixos-dev
<timokau>
Is the dynamic linker on nix doing something special? I'm having problems understanding why a binary doesn't pick up a specific library, even though its location is first in rpath
<timokau>
Mh, the issue seems to be that LD_LIBRARY_PATH gets preference over rpath
<timokau>
... which seems to contradict the ld.so manual, according to which DT_RPATH should be searched before anything else
<timokau>
Oh, never mind, apparently there is a difference between rpath and runpath
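The distinction timokau ran into is easy to check on a given binary: DT_RPATH is consulted before LD_LIBRARY_PATH, while DT_RUNPATH is consulted after it (and only for direct dependencies). A quick sketch, with placeholder paths:

```
# show which of the two dynamic tags a binary carries; paths are placeholders
readelf -d ./mybinary | grep -E 'RPATH|RUNPATH'
# patchelf (used throughout nixpkgs) prints whichever one is set
patchelf --print-rpath ./mybinary
# DT_RPATH:   legacy tag, searched before LD_LIBRARY_PATH
# DT_RUNPATH: searched after LD_LIBRARY_PATH, and only for direct dependencies;
#             emitted by modern toolchains via -Wl,--enable-new-dtags
```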
<globin>
samueldr, sphalerite: I always backported all testing/latest bumps
Irenes has joined #nixos-dev
<Irenes>
so I felt like updating people on a thing I mentioned in #nixos a couple weeks ago
<Irenes>
which is the status of my effort to find a way to have build infrastructure for 32-bit ARM
<samueldr>
globin: good, then I think we pretty much all agree; probably good to write to NeQissimus(sp?) and sync so we have less duplicated work
<Irenes>
since most physical 32-bit ARM machines are underpowered by modern standards, the approach that seems most viable to me is to run under qemu
<Irenes>
the existing functionality in config.system.build.vm doesn't seem designed to work with a host and guest of different architectures
<Irenes>
I am, however, making considerable progress with a proof-of-concept based on qemu
<Irenes>
it's not automated enough yet, nor have I done the work to make the guest run Hydra
<samueldr>
thinking of running it on qemu on x86_64?
<Irenes>
if anybody sees a reason not to proceed with this approach, it would be great to hear about it. otherwise, I just wanted to brag.
<Irenes>
yes
<samueldr>
not that much faster than armv7 native on some SBCs
<samueldr>
though, I came to a similar conclusion recently
<Irenes>
yeah, but that way it could be run on EC2 instances so nobody has to physically build a farm
<Irenes>
I've been doing a bunch of random experimenting and I should have a good sense of how the speed is going to compare within another couple days
<samueldr>
but quite tangentially: instead of going with qemu-system emulation, I have a PoC of qemu-kvm on aarch64, running armv7l
<Irenes>
oh nice!
<samueldr>
compilation was about as fast as it was on aarch64
<Irenes>
that sounds like a really solid approach
<samueldr>
but that's only usable on aarch64 systems that can handle 32 bits
<samueldr>
so, _not_ all of them
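For context, the invocation behind a PoC like samueldr's would look roughly like the sketch below; the image and firmware paths are placeholders, and it assumes host cores that still implement AArch32 at EL1:

```
# hypothetical sketch: 32-bit ARM guest under KVM on an aarch64 host.
# Requires host cores that support AArch32 at EL1; paths are placeholders.
qemu-system-aarch64 \
  -machine virt -enable-kvm \
  -cpu host,aarch64=off \
  -m 4096 -smp 4 \
  -bios arm-efi-firmware.fd \
  -drive if=virtio,file=armv7l-guest.img,format=raw \
  -nographic
```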
<Irenes>
are there cloud providers that it could be run on?
<samueldr>
cloud, maybe, not sure, but baremetal there might be, depending on their availability
<samueldr>
(baremetal on demand, like packet.net)
* Irenes
nod
<Irenes>
makes sense
<Irenes>
sounds viable then
<samueldr>
though that's only been at a very early PoC level, it worked fine with a customized kernel (LPAE for more than 3GiB of memory) and with a patched grub so it'd EFI boot
<samueldr>
grub will not need patches once they release their next update
<Irenes>
I'm running mine with a stock kernel but I don't think needing an extra patch or config option is a big deal really
<samueldr>
nah, not a big deal
<samueldr>
LPAE is a kernel option
<samueldr>
(but CANNOT be used on platforms missing LPAE)
<Irenes>
I think most of the work to productionize it is going to be automating the build process
<samueldr>
(which is why it can't just be defaulted to yes in the kernel config)
<Irenes>
that makes sense
<samueldr>
though, personal biases maybe, using qemu with OVMF for EFI boot made this easy, once I had built a base system which EFI booted
<samueldr>
(and in theory should work wherever mainline u-boot works)
<Irenes>
heh, I cross-compiled a u-boot binary and passed it as the bios to qemu, using a guest image set up for extlinux
<Irenes>
that approach felt natural since it's more or less how my physical ARMv7 smolputer is set up
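What Irenes describes maps onto plain TCG emulation on the x86_64 host, roughly the sketch below; the u-boot binary and disk image names are placeholders (u-boot's distro boot then picks up the extlinux config from the image):

```
# sketch of full emulation on an x86_64 host; u-boot binary and image are placeholders
qemu-system-arm \
  -machine virt -cpu cortex-a15 \
  -m 2048 -smp 4 \
  -bios u-boot.bin \
  -drive if=virtio,file=armv7l-guest.img,format=raw \
  -nographic
```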
<samueldr>
I tried with u-boot, but with kvm u-boot wasn't happy (or qemu wasn't)
<Irenes>
yeah I can imagine kvm needing something else
<samueldr>
it looked like something was disappearing shortly after boot, so either u-boot hung, or the early kernel failed
<samueldr>
though, those are mostly implementation details :)
<Irenes>
yeah :)
<Irenes>
so I'm going to be spending the next few days using my proof-of-concept to get my personal machines in a state that I'm satisfied with, and then I plan to dig into how config.system.build.vm is implemented and see whether it's possible to extend or fork for this setup
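For anyone digging in alongside: config.system.build.vm is the attribute behind nixos-rebuild build-vm, and it can be poked at directly from a nixpkgs checkout (the configuration path below is a placeholder):

```
# build the runner script that config.system.build.vm produces;
# the configuration path is a placeholder
nix-build '<nixpkgs/nixos>' -A vm -I nixos-config=./configuration.nix
# the result launches qemu for the *host* architecture, which is why it
# doesn't cover a foreign-architecture guest out of the box
./result/bin/run-*-vm
```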
<Irenes>
have you been able to cross-build a guest image at all? it would be a lot more convenient if that worked
<samueldr>
I haven't tried cross-building a guest image
<Irenes>
yeah I verified that the most naive possible approach does not work :)
<samueldr>
I think that with the recent work it might be possible to get a good enough image to bootstrap a native build
<Irenes>
yeah, which is all that's necessary
<samueldr>
I did it by using an older image from dezgeg, and building a new system from a recent nixpkgs clone; then fixed things, and technically I still have my "golden" EFI-bootable image
<Irenes>
yeah that's exactly what I did also :)
<Irenes>
except not with EFI
<Irenes>
(the partition layout looks like EFI, but it isn't)
<samueldr>
though, time being scarce, other more important things have been prioritized
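None of this was attempted in the discussion above, but the "good enough image to bootstrap a native build" idea would look something like the sketch below; the option, attribute, and host names are assumptions, not taken from the log:

```
# hypothetical bootstrap sketch: build armv7l under qemu-user emulation on
# x86_64, then push the closure to a guest to continue natively.
# Option, attribute, and host names are assumptions.

# 1. register qemu-user for armv7l on the x86_64 host
#    (NixOS option: boot.binfmt.emulatedSystems = [ "armv7l-linux" ];)

# 2. build an armv7l system closure under emulation
nix-build '<nixpkgs/nixos>' -A system \
  -I nixos-config=./armv7l-configuration.nix \
  --argstr system armv7l-linux

# 3. copy the closure to the (qemu or hardware) guest and activate it there
nix-copy-closure --to root@armv7l-guest ./result
```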