<petersjt014> just fyi: I don't know if anyone has noticed, but the aarch64 build has been timing out for the past 2 days
<petersjt014> I don't have access to the build server, so I'm not sure exactly how that'd be fixed
<dtz> :(
<gchristensen> http://gsc.io/scratch/2019-02-01T12:16:36Z-full-screen-3698220e-106a-4f14-9830-5fda88f067c7.png packet-t2a5-qc-centriq-1 is up and taking work :)
<gchristensen> with this new capacity, I propose we cut the # of jobs on t2a-3 from 80 to 40, and give each job 2 cores
<gchristensen> and cut jobs on c2-centriq-1 from 40 to 20, and give each job 2 cores
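The proposed change maps roughly onto the builders' Nix settings. A sketch for t2a-3, using the NixOS options of that era; the actual builder configurations are not shown in the log, so this is only an illustration:

```nix
# Hypothetical sketch of the proposal for t2a-3
# (actual builder configs are not in the log):
{
  nix.maxJobs    = 40;  # was 80: half the concurrent builds
  nix.buildCores = 2;   # give each build 2 cores instead of 1
}
```

The idea being that fewer, wider jobs reduce scheduler pressure and memory contention without lowering total core utilization.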
<samrose> gchristensen: thank you that overlay suggestion helped
<gchristensen> great!
<samrose> I didn't realize it would be that simple
<gchristensen> :)
<gchristensen> I guess I could have sent a link to an example, too
<samrose> (don't know if that is the best example but it gave me the gist of it)
<samrose> I've put nixpkgsArgs ? { config = { allowUnfree = true; inHydra = true; }; } in the default.nix of my hydra build, but for the life of me I cannot convince hydra to accept this
<samrose> nor if I put it into an input in jobset config
<samrose> hmm it appears to be working now
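The snippet samrose quotes suggests a release.nix-style expression. A minimal sketch of what such a default.nix might look like; only the `nixpkgsArgs` default comes from the log, the rest of the layout is an assumption:

```nix
# Hypothetical Hydra jobset default.nix accepting nixpkgsArgs
# (only the nixpkgsArgs default is from the log):
{ nixpkgs ? <nixpkgs>
, nixpkgsArgs ? { config = { allowUnfree = true; inHydra = true; }; }
}:
let
  pkgs = import nixpkgs nixpkgsArgs;
in
{
  # jobs go here, e.g.:
  # hello = pkgs.hello;
}
```

One common pitfall: Hydra passes jobset inputs as arguments to the expression, so an input whose name doesn't match a declared argument (or a declared argument with no matching input and no default) can make Hydra appear to "not accept" the configuration.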
KemoNine has joined #nixos-aarch64
<KemoNine> hello, i was referred by a colleague to ask about armv7 tests running for nixos ; i'm trying to get libvirt going on an armv8 box to run debian stretch armhf releases ; c/ sphalerite
<gchristensen> sphalerite: ^ I recommended vielmetti direct KemoNine to you
<samueldr> hi!
<KemoNine> hi!
<KemoNine> many thanks gchristensen
<gchristensen> (hoping you had luck with nixos tests on armv7 on the community hisilicon builder)
<samueldr> not sure I follow "about armv7 tests running for nixos"
<gchristensen> armv7 test being nixos tests, running on armv7
* samueldr opens worksonarm to see if there is context
<gchristensen> my understanding is KemoNine is wanting to ask about running a 32b vm on a 32b/64b-capable host
<samueldr> ah, and possibly not nixos related, right?
<gchristensen> right, but my hope is sphalerite has managed to get nixos tests running in 32b mode on the hisi builder :)
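Running 32-bit work natively on a 64-bit builder is possible because most (not all) AArch64 cores can also execute AArch32 code. On a NixOS host, one way to advertise that capability to Nix is the `extra-platforms` setting; a sketch, assuming a NixOS aarch64 builder:

```nix
# Hypothetical sketch: let an aarch64 NixOS builder accept
# armv7l-linux builds natively (only works if the CPU supports
# 32-bit execution, which most but not all AArch64 cores do).
{
  nix.extraOptions = ''
    extra-platforms = armv7l-linux
  '';
}
```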
<samueldr> KemoNine: using qemu only (but should work for libvirt too) I had success booting images for UEFI
<samueldr> though not sure if debian provide those
<KemoNine> they provide some basic images I can get to boot ; i ran into troubles with the kernel finding the virtio network driver ; but i may be squarely in debian kernel territory
<KemoNine> and qemu only i think i can get boot strapped ; i did that in the past on x86 as a rough experiment
<samueldr> right, though I would assume that the network driver issue would be the same on qemu as on libvirt :/
<KemoNine> possibly ; but i will try poking it just in case ; and worst-case i can fall back to ubuntu bionic if needed for these particular vm's
<samueldr> in my experience, once you're booting off kvm, nothing's really complicated, except if the drivers for the virtualised hardware aren't there
<KemoNine> understood ; thank you for the input
<samueldr> this kernel option might be useful along the way https://cateee.net/lkddb/web-lkddb/ARM_LPAE.html
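For the channel's own context: on a NixOS guest the linked CONFIG_ARM_LPAE option could be enabled through a kernel patch entry. A sketch only; Debian and Ubuntu instead ship prebuilt `-lpae` kernel flavours, which is the route KemoNine takes below:

```nix
# Hypothetical NixOS sketch enabling CONFIG_ARM_LPAE in a custom
# kernel build (Debian/Ubuntu users would pick an -lpae kernel).
{
  boot.kernelPatches = [{
    name = "arm-lpae";
    patch = null;            # no source patch, only config changes
    extraConfig = ''
      ARM_LPAE y
    '';
  }];
}
```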
<KemoNine> i think i'm going to see about ubuntu on armhf then... they have a pre-built lpae netboot installer and kernel
<KemoNine> and i definitely would prefer lpae for these
<gchristensen> samueldr: thanks for jumping in, despite it being a bit off-topic :)
<KemoNine> very much appreciated on my end :)
<KemoNine> looks like ubuntu with lpae "just works" with networking and the like ; some notes for those interested: https://paste.lollipopcloud.solutions/?a11e0ce26ecfb182#L0r7UcYMIMt4O/NgYf7hmDGyP2RvWOQVxbtC9cl7D1g=