<samueldr> :( kevin-nix's sd image boots on gru_dumo, though without display
<samueldr> hmm, but eDP HDMI out shows the output
<samueldr> (in a wrong configuration for mirroring though)
<thefloweringash> is that on google's kernel or upstream kernel?
<thefloweringash> oh, that reminds me, did anyone look at why 4.4 kernels don't seem to build on nixos-unstable with the `selected processor does not support `crc32w w0,w0,w1'` error?
<samueldr> thefloweringash: from your repo, google's kernel
<samueldr> thefloweringash: haven't looked, but willing to bet it's a diff in the µarch, like cortex-Asomething
<samueldr> there still is the story of µarch being impure in nix that annoys me, and for which I don't know if there is any easy solution :/
<samueldr> IIRC we have 3 different µarch in the four machines
<samueldr> (aarch64)
<gchristensen> ouch :(
<gchristensen> we could ... erm ... virtualise it?
<samueldr> it's not an aarch64 only issue, see how building on recent intel makes packages that fail to build on the older intel hardware
<samueldr> and on x86_64 it's not something that can be fixed by virtualization
<clever> ive also had trouble when building v7 on aarch64
<clever> i have 2 open issues in nixpkgs about that
<samueldr> the CPU still executes whatever is thrown at it, and the CPU features are still present
<gchristensen> yeah :/
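The impurity being discussed comes from configure-style probes reading the *build* host's CPU rather than the intended target baseline. A minimal, hypothetical sketch (not code from nixpkgs; assumes a Linux build machine, where the field is `flags` on x86 and `Features` on ARM) of the kind of probe that makes two Hydra machines disagree:

```python
# Hedged sketch: mimics what feature-sniffing build scripts do. The set this
# returns differs per build machine, which is exactly the impurity discussed.
def cpu_flags(path="/proc/cpuinfo"):
    try:
        with open(path) as f:
            for line in f:
                key, _, value = line.partition(":")
                if key.strip().lower() in ("flags", "features"):
                    return set(value.split())
    except FileNotFoundError:
        pass  # non-Linux host: nothing to probe
    return set()

print(len(cpu_flags()), "CPU feature flags visible to this build")
```

A build that bakes decisions on this set into `$out` ties the output to whichever machine happened to build it.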
<samueldr> clever: on an aarch64 running kernel or when booted on an armv7 kernel?
<{^_^}> #56285 (by cleverca22, 15 weeks ago, open): python ctypes extension fails to build due to purity issues
<samueldr> the former is a different impurity
<samueldr> the latter would surprise me a little
<clever> samueldr: when building v7 binaries with a v7 compiler, on an aarch64 kernel
<samueldr> right, different impurity
<clever> libffi detects an aarch64 cpu, generates aarch64 assembly, then gcc goes "wut" and fails
<samueldr> (still related)
<{^_^}> #56290 (by cleverca22, 15 weeks ago, open): kexec-tools arch purity problem
<clever> kexec-tools too
<clever> the worst part in python is that it just goes "oh, i guess ill turn off libffi support"
<clever> so the python build "works"
<clever> then 2 builds later, something else fails, because your python lacks ffi support
<clever> and the cache has been poisoned, and hydra refuses to rebuild something that "worked"
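clever's libffi failure starts with `uname -m` reporting the kernel's architecture rather than the userspace one. A hedged sketch of the real mechanism behind `setarch`/`linux32` for papering over that: the personality(2) syscall (the `PER_LINUX32` constant is taken from `<sys/personality.h>`; this is an illustration, not how nixpkgs necessarily wires it up):

```python
import ctypes, os

PER_LINUX32 = 0x0008  # value from <sys/personality.h>

libc = ctypes.CDLL(None, use_errno=True)
print("uname -m before:", os.uname().machine)
# After PER_LINUX32, uname(2) reports the 32-bit name (x86_64 -> i686,
# aarch64 -> armv8l), so arch-sniffing configure scripts like libffi's
# no longer see the 64-bit kernel.
if libc.personality(PER_LINUX32) == -1:
    # some sandboxes filter the syscall; report rather than crash
    print("personality() refused:", os.strerror(ctypes.get_errno()))
print("uname -m after: ", os.uname().machine)
```

Libraries that bypass `uname` and probe the CPU directly, as libffi apparently does here, slip past this trick, which is why the build still ends up with aarch64 assembly.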
<samueldr> it's with sage that tim*kau[m] spotted failures, when parts of it (or was it dependencies?) were built on a recent intel hydra machine while another part (or sage itself?) was built on the older hardware
<gchristensen> oh profpatsch is going to Sage Days in Paris next week
<samueldr> clever: that's the worst part I agree
<gchristensen> if you have things you want to pass along
<samueldr> timokau[m]: ^
<samueldr> though I think profpatsch is already in the know
<clever> samueldr: using nix-store --repair, i can forcibly replace one version of $out with another
<clever> and thats how ive been fixing things
<samueldr> clever: yeah
<samueldr> what I was thinking is virtualization + something to hide the cpu information from whatever the kernel gives...
<samueldr> probably some kernel module
<clever> samueldr: kvm on x86 has a setting to fudge the cpuid output
<samueldr> but would probably still allow just banging opcodes to the cpu
<samueldr> clever: not enough to repro those issues
<samueldr> that's because it only changes what's *shown*, so it's not enough for code that's running
<samueldr> just detection using cpuid and related features
<samueldr> and it's something that #reproducible-builds is just not looking into
<gchristensen> ouch
<samueldr> yes
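To make samueldr's point concrete: masking what cpuid reports only changes *detection*; code that emits an opcode directly still runs it on the real silicon. A hypothetical x86_64-only demo (the bytes below encode `popcnt eax, edi; ret`, an SSE4.2-era instruction that feature masks commonly hide):

```python
import ctypes, mmap, platform

# x86_64-only illustration, not production code: execute a raw opcode from an
# anonymous executable mapping, bypassing any cpuid-based feature check.
if platform.machine() == "x86_64":
    code = bytes.fromhex("f30fb8c7c3")  # popcnt eax, edi; ret
    buf = mmap.mmap(-1, mmap.PAGESIZE,
                    prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
    buf.write(code)
    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    popcnt = ctypes.CFUNCTYPE(ctypes.c_uint, ctypes.c_uint)(addr)
    print("popcnt(0xff) =", popcnt(0xFF))  # the hardware answers regardless of cpuid
else:
    print("skipping: demo is x86_64-specific")
```

This is why fudging cpuid under kvm isn't enough to reproduce the failures: a binary compiled with the instruction baked in never asks first.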
<thefloweringash> in the short time I managed to look at the 4.4 kernels, my best guess was the re-organisation of gcc flags had overridden the kernel's build step, which tries to set an extra flag to enable a feature for just that file
<thefloweringash> samueldr: it's been a while since I updated the kernel in that repo, if you're seeing problems with newer hardware it'd make sense to update the kernel from arch
<gchristensen> cpu features should be implemented by the lisp evaluator built in to the cpu
<gchristensen> optimizing at execution time
<samueldr> thefloweringash: that was my next step
<samueldr> thefloweringash: I also have experience with chromeos stuff on the intel side so it's not all foreign :)
<thefloweringash> ah excellent, just wanted to warn you I'd been slacking
<samueldr> no worries
<samueldr> ooh, I wonder if it's one of the patches
<samueldr> hack to "fix" console output
<samueldr> what if the "fix" broke it on my hardware :D
<samueldr> oh, and thanks thefloweringash, this is a real help in getting started, instead of doing all the hard work myself :)
<DigitalKiwi> samueldr: would getting f2fs to work be something you could help me figure out? no big deal if you can't but if we can get it I'll write up a blog/wiki article about it :)
<samueldr> it's mainly the lack of time that makes it impossible, sorry
<DigitalKiwi> it's ok
<samueldr> though if you have precise questions, I can try and answer them, or guide you
<DigitalKiwi> ok yeah that'd be great
<samueldr> I don't know enough about how computers execute stuff, but could it be possible at runtime, maybe through qemu, to vet any executable for the instructions it uses?
<samueldr> kind of looking first, see something that's not supported, return sigill or whatever should happen
<samueldr> that would be only relevant at build time, and probably slow things down though
<samueldr> but I'm thinking if it's only userspace stuff it might not be as bad
<samueldr> a bit like how qemu-user can exec userspace stuff for other arches, but kernelspace still happens native
<samueldr> hmmm, qemu-user without kvm
<clever> that reminds me
<clever> i once used x86-64 qemu-user, on x86-64
<samueldr> yeah, like that
<samueldr> but without kvm
<clever> because it's non-kvm, and has the same bugs as qemu-system in non-kvm mode
<clever> in my case, the default cpu it emulated lacked features the code used
<samueldr> great, then I guess it could help
<samueldr> still a bummer for the likely performance hit
<clever> but it has the same perf cost as non-kvm qemu
<samueldr> for everything userspace, yeah
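What "return sigill" looks like from userspace can be shown deterministically with `ud2`, x86's architecturally-undefined opcode. A hedged, x86_64-only illustration (qemu-user with a reduced `-cpu` model delivers the same signal for instructions its model lacks, which is the vetting behaviour samueldr is after):

```python
import ctypes, mmap, os, platform, signal

# Fork a child that executes ud2 (0f 0b); the kernel kills it with SIGILL,
# and the parent observes the signal via waitpid. Sketch only.
if platform.machine() == "x86_64":
    pid = os.fork()
    if pid == 0:
        buf = mmap.mmap(-1, mmap.PAGESIZE,
                        prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
        buf.write(bytes.fromhex("0f0b"))  # ud2: guaranteed-undefined opcode
        addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
        ctypes.CFUNCTYPE(None)(addr)()   # never returns
        os._exit(0)
    _, status = os.waitpid(pid, 0)
    if os.WIFSIGNALED(status):
        print("child killed by:", signal.Signals(os.WTERMSIG(status)).name)
else:
    print("skipping: demo is x86_64-specific")
```

An emulator that checks each translated instruction against its CPU model can raise exactly this signal for merely-unsupported (rather than undefined) instructions, at the usual non-kvm performance cost clever mentions.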