<orivej> ekleog: you may try e.g. "sysdig evt.type=write" (with "programs.sysdig.enable = true;") to see what's going on.
<ekleog> ok, so it actually looks like journalctl is the source of all this spam (with 80% certainty): I just ran strace -p $(pidof systemd-journald), and a *lot* of information scrolls by, much more than 33KB in 10s -- after thinking about it, since those 2KB were computed by du -hs /var/log/journald, journald's log rotation must have limited the growth of the folder... (also, sysdig
<ekleog> requiring a kernel module doesn't make me want to try it without much further investigation, and that won't come any time soon)
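A quick way to check how much journald keeps on disk and the cap it rotates against (paths assume a standard systemd setup):

    journalctl --disk-usage                       # total size of active + archived journal files
    grep SystemMaxUse /etc/systemd/journald.conf  # journald vacuums old entries to stay under this limit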
<ekleog> hmm, are you actually sure about nix not storing substituted dependencies? I've got as many .drv's for firefox-unwrapped as I have firefox-unwrapped's in my nix-store, and I think I'd have noticed if I actually did rebuild firefox every single time
<ekleog> s/dependencies/.drv's/
<aszlig> also, niksnut: ^
ris has quit [(Ping timeout: 248 seconds)]
_ts_ has quit [(Ping timeout: 248 seconds)]
orivej has quit [(Ping timeout: 260 seconds)]
orivej has joined #nixos-dev
mbrgm has quit [(Ping timeout: 248 seconds)]
mbrgm has joined #nixos-dev
vcunat has joined #nixos-dev
<vcunat> apparently some closure size increased in staging, and that broke one installer test (the system no longer fits) https://hydra.nixos.org/build/64709761
taktoa has joined #nixos-dev
ma27 has joined #nixos-dev
ma27 has quit [(Client Quit)]
FRidh has joined #nixos-dev
<vcunat> well, pushed 474c1ce79 for that, at least
phreedom has quit [(Quit: No Ping reply in 180 seconds.)]
phreedom has joined #nixos-dev
zraexy has quit [(Ping timeout: 255 seconds)]
zraexy has joined #nixos-dev
<orivej> vcunat: looking at this closure size graph it seems almost impossible for the closure size spike to be caused by anything other than fwupd being enabled by default:
<vcunat> orivej: right, the spike went down now
<vcunat> it has a nice closure: two pythons, two perls, two gtk+...
<orivej> if the nixos installer test is the most obvious way to notice such things, and it is good to notice them early, maybe you should revert 474c1ce79?
<vcunat> (meaning fwupd)
<orivej> :)
<vcunat> I'm considering that
<vcunat> I think I would prefer to separate such tests into different jobs
<vcunat> when you look at the red/green/... icons on Hydra, it's just confusing
<vcunat> that installation with SW RAID breaks
<vcunat> when it's "only" a closure size increase
<vcunat> ping niksnut for that, as he's sensitive to closure blowups
<vcunat> It might be just a single job that measures closures of various systems and packages (individually) and compares them to some hardcoded thresholds to either succeed or print some informative warning.
<vcunat> And add some changes to make it easy to diff two closures in terms of sizes, e.g. name-sorted list from nix-store -qR. (And then you can e.g. use `nix why-depends` if it's something new.)
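A minimal sketch of that closure diff (./result-old and ./result-new are placeholder build outputs; the sed just strips the store hash so names line up):

    diff <(nix-store -qR ./result-old | sed 's|^/nix/store/[a-z0-9]*-||' | sort) \
         <(nix-store -qR ./result-new | sed 's|^/nix/store/[a-z0-9]*-||' | sort)
    # for anything new, ask why it is pulled in
    nix why-depends ./result-new <new-dependency>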
taktoa has quit [(Remote host closed the connection)]
_ts_ has joined #nixos-dev
ris has joined #nixos-dev
<orivej> are there any Hydra branches for "staging-17.09", or should I commit a mass rebuild straight to "release-17.09"?
<orivej> err, Hydra jobsets
<FRidh> orivej: there is no job for staging-17.09. That one was primarily used before the release. You can indeed push directly to release.
<vcunat> +1
<vcunat> :-) the four-headed merge
<vcunat> Thanks for keeping the first-parent line on master.
<gchristensen> can IFD be turned off in nix 1.11?
<vcunat> gchristensen: what is IFD?
<gchristensen> ack Import From Derivation but what I meant was building during evaluation
<vcunat> oh, yes, I believe so
<vcunat> but Eelco claimed that it is (supposed to be) done on Hydra, so there's certainly something wrong
<vcunat> and for what should change on Hydra, I'd start with --cores != 1 on aarch64
<gchristensen> that is on me :(
<vcunat> if it's been like that the whole time, I'm amazed no one has noticed
<vcunat> (until now)
<gchristensen> yeah, the whole time :)
<gchristensen> hmm it might be challenging to fix this
<FRidh> vcunat: yea I accidentally pushed a merge to master instead of to staging
<FRidh> I do indeed try to stay aware of using --no-ff ;)
<Dezgeg> wow
<aszlig> gchristensen: maybe go with --readonly-mode?
<Dezgeg> no wonder there are so many > 10h timeouts on aarch64 then :)
<vcunat> I thought we were just overloading it too much
<vcunat> it never occurred to me, and it's written in every log
<aszlig> i mean, it doesn't turn off IFD entirely but if you don't have the path to be imported from in your store the build will fail
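A rough illustration of that suggestion (file and attribute names are placeholders): in read-only mode the evaluator cannot write to the store, so any import-from-derivation whose result is not already present fails instead of triggering a build.

    nix-instantiate --readonly-mode release.nix -A tested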
<gchristensen> Dezgeg: we may need to make a new t2a host
<vcunat> aszlig: oh, that's... nondeterministic :'(
<Dezgeg> why a new one?
<vcunat> some that doesn't have --cores 1
<Dezgeg> isn't that tweakable from hydra?
<gchristensen> a bit ago I rm -rf'd my ~ and may have lost access to the arm one
<aszlig> orivej: your restore commit still doesn't really restore that much :-/
<orivej> hm?
<aszlig> git diff 88eea6947fd4e7e2b92d4e778a6ab3ed66761ae6^ fbbda41e05d89325baaefef6fd0a420a1c365d7e
<vcunat> FRidh, orivej: so we leave the mass rebuild on master?
<vcunat> Estimating rebuild amount by counting changed Hydra jobs.
<vcunat>  12337 x86_64-darwin
<vcunat>  18846 x86_64-linux
<vcunat> I think it would be more practical to revert it for now, for anyone developing against master.
<FRidh> vcunat: isn't that mostly desktop environments, long builds, and the long tail of "insignificant" packages?
<vcunat> It's almost everything.
<orivej> but some of this stuff has already been built in the staging jobset. could you start nixpkgs/trunk evaluation to see what is left and how much is broken?
<FRidh> some of it has been built in the python-unstable branch, which was built on staging
<FRidh> *based on
<vcunat> ah, right, bad diff
<aszlig> i have the revert commits ready, should i push?
<vcunat> Looking at staging Hydra, it's ~22k builds, summed over all platforms.
<vcunat> Mostly darwin, I think.
<vcunat> I don't have a strong opinion on this.
<vcunat> (We updated all the channels today.)
<gchristensen> nice :)
<gchristensen> it'll be a bit annoying for grahamcofborg but that is probably fine
<orivej> I've queued the evaluation of the nixpkgs/trunk jobset
<gchristensen> oh cool, I got in Dezgeg :)
<vcunat> gchristensen: have you assimilated some phones?
<vcunat> (or other ARMs)
<gchristensen> I have no
<gchristensen> not. but I'd be happy to add some to the ... cube? :)
<gchristensen> ok, what should we set nix.buildCores to ? keep in mind hydra is set to use 48 cores, it has 96
<vcunat> I would actually not go so high.
<vcunat> If we don't want to keep discovering bugs in everyone's makefiles.
<vcunat> I don't think it's common to build with -j 100 on ARMs
<gchristensen> sorry
<gchristensen> ok, what should we set nix.buildCores to ? keep in mind hydra is set to 48 maxJobs, it has 96 cores.
<orivej> without https://github.com/NixOS/nixpkgs/pull/31965 I'd start with cores = 0
<orivej> and only lower it if we run out of RAM
<gchristensen> oh. I was thinking I'd set it to like, 2, or 3 :P
<vcunat> 128 GiB RAM is there, right?
<vcunat> (that's what they have on web site)
<gchristensen> yeah
<vcunat> That should be OK for builds.
<gchristensen> but we definitely don't want unbounded parallelism I think
<vcunat> it might not be that bad
<gchristensen> how about we start with 3 and go from there?
<vcunat> that won't have any effect
<gchristensen> no?
<vcunat> it would pass -l 3 everywhere
<vcunat> See the PR orivej referenced.
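Roughly what that means mechanically, assuming stdenv's behaviour at the time:

    # nix.conf "cores = N" is exported into every build as NIX_BUILD_CORES=N
    # (stdenv turns 0 into the number of available CPUs); packages with
    # enableParallelBuilding then invoke make roughly as:
    make -j"$NIX_BUILD_CORES" -l"$NIX_BUILD_CORES"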
<gchristensen> oh
<gchristensen> you sure? load average: 37.60, 38.06, 36.44
<vcunat> What's the --jobs setting?
<vcunat> Oh, 48 :-)
<vcunat> I can see nothing better than cores = 0 ATM.
<vcunat> And with that I think we can have smaller maxJobs.
<vcunat> e.g. 24
<orivej> gchristensen: load average = max-jobs because one make always runs at least one process, even if that exceeds the target load average
<vcunat> With so many jobs it will likely utilize all the cores anyway.
<vcunat> Doing less and faster feels better to me.
<vcunat> HDD space < RAM * 2 :-)
<aszlig> vcunat: hmm... i can't really reproduce the azerty test failure :-/
<gchristensen> 312G /
<vcunat> Oh, I assumed it's 250G, like on web.
<vcunat> well, my phone has less flash than RAM, so I suppose it's common for ARMs
<gchristensen> maybe I got a special one
<vcunat> aszlig: I can't reproduce it locally either
<gchristensen> ok here goes :)
<vcunat> maybe it's connected to high load
* gchristensen looks at build-cores set to 0
<gchristensen> _hem_
<aszlig> vcunat: certainly, doing some tests...
<gchristensen> ok let's try restarting a big build
<orivej> aszlig: could you push your revert to a branch in *your* fork of nixpkgs (for review)?
<vcunat> aszlig: Hydra succeeded on the seventh attempt. Lucky me!
<vcunat> aszlig: if you think my PR is unlikely to worsen anything, I'm willing to try it "blindly" and see if the error rate goes down on Hydra.
mbrgm_ has joined #nixos-dev
mbrgm has quit [(Ping timeout: 248 seconds)]
mbrgm_ is now known as mbrgm
<gchristensen> "waiting for exclusive access to the Nix store..." I get this a lot using nix interactively on the type-2a
<adisbladis> vcunat: What ARMs would you like to have accumulated?
<aszlig> vcunat: sure
<aszlig> but i still think this is a race condition
<vcunat> I don't have any aarch64, so borg would be a nice way to test stuff in there as well.
<vcunat> aszlig: what kind of race?
<aszlig> vcunat: the terminal not getting the sendKeys()
<aszlig> so i think we should make that more robust instead
<vcunat> aszlig: so something other than not getting enough CPU for 60s
<vcunat> (and thus not getting them)
<vcunat> I wonder if there's a performance penalty when you virtualize multiple machines at once on a single CPU...
<vcunat> (or perhaps lack of some other resources)
<aszlig> vcunat: ah, sorry... that timeout is for reader.ready
<aszlig> vcunat: so that's not the kind of race condition i was having in mind
<aszlig> vcunat: so does it always happen with the xterm tests or the VT ones too?
<gchristensen> vcunat: I'm working on getting a type 2a for people in the community to help with the aarch64 effort, we can run a borg there
<vcunat> aszlig: I don't know
<vcunat> gchristensen: a separate one?
<vcunat> wouldn't it be better shared
<vcunat> if it's for the PRs, it would be best if the binaries from testing were then used for binary cache contents directly...
<aszlig> vcunat: okay, i've got an idea, one second...
<gchristensen> yes, a separate one
<gchristensen> I don't want to mix that infrastructure, and ideally would give out access to borg more liberally than commit access. we don't want to be too liberal in letting things access the hydra build machines
<gchristensen> since they're a fairly critical component of the chain of trust of the nix cache
<aszlig> vcunat: don't merge #32020 yet, please
<vcunat> OK. I wasn't going to.
<vcunat> gchristensen: but 2A seems really overkill just to test PRs
<gchristensen> it is also for people in the community to help with the aarch64 effort
<vcunat> even so, 96 cores... and 20 Gbit network
<vcunat> We don't need to worry until we get it, I guess.
<gchristensen> why worry? they want us to have the tools we need to get support
<vcunat> But I would expect there's some relatively secure way to "split" it into two machines.
<vcunat> That would be ideal, as parts unused by community would get utilized by Hydra directly.
<vcunat> You don't need to hand out root access, most likely, too.
<gchristensen> I'd rather not go down this complicated route, it doesn't seem like a prudent use of time
<vcunat> Right, probably not.
<vcunat> I mean, considering relative priorities.
<gchristensen> right
<gchristensen> I'm confused about that failure to build, I wish I had more log output.
<gchristensen> maybe something went over the 30min build timeout
<vcunat> there's a mass rebuild on master
<vcunat> gchristensen: what missing tools do you have in mind? Something I missed on https://github.com/NixOS/nixpkgs/issues/31606 ?
<vcunat> Or is this general support (not NixOS-focused) to be put into a separate thread?
<gchristensen> oh by tools I mean, access to a big ol' aarch64 box to play with and get stuff compiling
<vcunat> :-)
<vcunat> I completely misunderstood your sentence.
<vcunat> I thought they were waiting for us to provide better SW support before contributing another machine.
<gchristensen> I was talking to them about the security model of hydra and why we don't want to let contributors have general access to it
<gchristensen> they understood
<gchristensen> aszlig, vcunat: neither of you can call the bot... would you like to?
<vcunat> I thought you'd told me I could
<vcunat> I would like to, because of darwin
<gchristensen> you're not in the ACL :/
<vcunat> I'm right at 16 threads, so Linux builds aren't a problem, but I guess you can't instruct it per-platform yet.
<vcunat> (Maybe you just offered it, I don't remember.)
<gchristensen> vcunat, aszlig: https://github.com/grahamc/ofborg#guidelines
<vcunat> thanks
phreedom has quit [(Ping timeout: 248 seconds)]
<aszlig> vcunat: Can you make a jobset for this branch? https://github.com/aszlig/nixpkgs/tree/keymap-test-debug
<aszlig> that way we should get a screenshot whenever a timeout occurs
<aszlig> and the test build will always succeed with an output path
<aszlig> vcunat: so in order to debug this i'd suggest making a hydra jobset and i'll try to regularly push random whitespace changes to the keymap test to that branch so we have multiple evaluations
<vcunat> aszlig: I don't want to complicate this with that mass rebuild
<vcunat> I can point it to your branch, but better pick the commit atop 2f1a818d00f957f3102c0b412864c63b6e3e7447
<vcunat> (for example)
<vcunat> (the last finished Hydra evaluation)
<aszlig> vcunat: in order to keep eval-time low: https://gist.github.com/aszlig/3bf82d4d5c20d682e3c2e610cf526a65
<aszlig> okay, will rebase
<aszlig> done
phreedom has joined #nixos-dev
<aszlig> vcunat: for that gist it should be something like input "recipe" -> git checkout -> https://gist.github.com/3bf82d4d5c20d682e3c2e610cf526a65.git
<vcunat> let me try
<aszlig> vcunat: and of course for the main expression release.nix in recipe
<aszlig> that way only the keymap tests are evaluated
<vcunat> hydra is really flexible... but since around the mass-rebuild merge all evals I see are only pending, including this one https://hydra.nixos.org/jobset/nixos/keymap-test-debug
<aszlig> mhm...
<aszlig> maybe the evaluator has OOMed or something?
<aszlig> otoh... it's Restart=always
<vcunat> considering that Hydra gets tricked into building during evaluation...
<vcunat> (maybe that's why some evals can take over half an hour lately)
<aszlig> sure, but it should already use the parallel evaluator
<aszlig> hm, well...
<vcunat> yes, two evals at once IIRC
<aszlig> the parallel eval only does 4 at once
<vcunat> Well, right now it continues to build staging, at least, but by tomorrow it will probably be mostly idle (all but Darwin).
<gchristensen> the t2a is down to a load of 3. bummer that I turned up cores right at the end of a set of jobs
<vcunat> gchristensen: that was just temporary?
<vcunat> Hydra shows it as fully loaded by jobs.
<vcunat> (and five of those are kernel builds)
<gchristensen> no it wasn't temporary
<vcunat> Hydra reports no builds finished on aarch64 for the past several minutes
<vcunat> (I didn't look further back)
<vcunat> s/builds/build steps/
<gchristensen> hmm
<gchristensen> there are 4 cc1's running
<vcunat> gchristensen: you restarted it somehow?
<vcunat> (the nix daemon or something)
<gchristensen> yeah, nix-daemon restarted when I changed the jobs
<vcunat> Hydra's probably just showing ghost jobs, as almost all of them have "been there" for over an hour.
<vcunat> Maybe it thinks that 2A is loaded even though it isn't.
<gchristensen> yeah those jobs may need to be canceled and restarted
<vcunat> gchristensen: how long ago was that approximately?
<gchristensen> 1h 55min ago
<vcunat> OK
<vcunat> I'm cancelling the ghosts, and it seems to start filling up with new jobs.
<vcunat> ah, no, I was looking at t2-4 jobs
<vcunat> gchristensen: even a newly started gcc build is stuck at "waiting for exclusive access to the Nix store..."
<vcunat> for several minutes now
<gchristensen> hmm
<vcunat> just as all the other builds
<gchristensen> ok what is happening :)
<vcunat> Those that have been running for two hours have nothing else but this line in the log.
<LnL> I've seen that before
<vcunat> All of them.
<vcunat> A stale lock, possibly, but I haven't seen that with nix yet.
<vcunat> (But I don't have a 100-core either.)
<gchristensen> right
<gchristensen> how about I stop nix-daemon and kill the leftover nix-store processes
<LnL> yeah, I probably restarted the daemon
<LnL> and maybe killall -u nixbld1 etc.
<vcunat> yes, I don't think it can get worse
<vcunat> or reboot the whole machine
<gchristensen> ah ha
<gchristensen> lots of nix-store --serve --write
<LnL> that rings a bell
<gchristensen> there we go
<gchristensen> ok, it should be good to go now ...
<aszlig> gchristensen: have you looked into the journal?
<aszlig> gchristensen: because that might be because of builds during eval, they don't show up in the queue of the UI
<gchristensen> it was in a really bad state
<gchristensen> it should be good now, but nothing is building on it
<gchristensen> vcunat: can you cancel / restart some jobs?
<vcunat> I'm not sure if he has access to the evaluator
<aszlig> restarting the evaluator and/or the queue-runner should have stopped it as well
<vcunat> gchristensen: there are ~7k in the queue
<vcunat> for aarch64
<aszlig> ah, on the build slave
<gchristensen> yeah but are there jobs that think they're building?
<aszlig> might have been a good idea to strace one of those nix-store --serve --write processes
<vcunat> https://hydra.nixos.org/machines shows empty list
<aszlig> my guess would be that they hung because of network issues
<gchristensen> hmm I bet hydra just gave up on the aarch box for a bit
<vcunat> :'(
<vcunat> hopefully not for good
<gchristensen> iirc if it fails to connect too many times in a row it'll give up for a few minutes
<aszlig> nah, IIRC it has a timeout for retry
<gchristensen> two IIRCs, I'll take that as fact :P
<gchristensen> there it goes
<gchristensen> spooky looking at a 96 core machine at 0 load... fixed now :)
<aszlig> int delta = retryInterval * std::pow(retryBackoff, info->consecutiveFailures - 1) + (rand() % 30);
<aszlig> delta is the time in seconds until it retries
<aszlig> retryBackoff is 3 and retryInterval is 60
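Plugging those constants in (and ignoring the up-to-30-second jitter), the retry delay grows like this:

    for n in 1 2 3 4; do
      echo "failure $n: retry after $(( 60 * 3 ** (n - 1) )) s"
    done
    # failure 1: 60 s, failure 2: 180 s, failure 3: 540 s, failure 4: 1620 s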
<gchristensen> pretty much all the cores are at 100% usage
<aszlig> ah, now it has builds running
<aszlig> oh
<aszlig> missed that: 16:09 < gchristensen> there it goes
<gchristensen> ah :)
<vcunat> great, so from now on we can expect roughly doubled throughput, compared to all time before :-D
<gchristensen> :$
<gchristensen> with unbounded parallelism we should expect unbounded throughput! :)
<aszlig> more or less...
<aszlig> i only have build nodes with nproc 48, but when building the world the main bottleneck is autoconf scripts
<aszlig> maybe it would make sense to run them with something like dash
<vcunat> niksnut: I hope you/someone can revive the evaluator. x86_64-linux already has nothing left to build...
<gchristensen> I think it is him and ikwildrpepper with access to the evaluator
<vcunat> yes
<vcunat> at least years ago it was so IIRC
<vcunat> aszlig: on Hydra we can utilize the cores even with --cores 1, so I don't think it's worthwhile
<vcunat> better spend time migrating those to meson or something ;-)
yorick has quit [(Ping timeout: 240 seconds)]
JosW has joined #nixos-dev
<aszlig> vcunat: hm, ran a small benchmark with gnu hello and it really doesn't make such a big difference:
<aszlig> dash: real 0m18.208s user 0m11.568s sys 0m6.139s
<aszlig> bash: real 0m18.257s user 0m11.667s sys 0m6.085s
<gchristensen> is there anything we can do on nodes with gobs of RAM to make builds touch disk less?
<aszlig> gchristensen: make /tmp a tmpfs maybe?
<Dezgeg> then you get to the fun part of sizing the tmpfs correctly x)
<aszlig> yeah :-D
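A minimal sketch of the /tmp-on-tmpfs idea; the 64G size is only an illustration and is exactly the sizing question raised above:

    mount -t tmpfs -o size=64G tmpfs /tmp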
<gchristensen> no kernel tunables that would be obvious?
<aszlig> ah, you mean more aggressive caching and no fsync?
<Dezgeg> sure, dirty_ratio and dirty_background_ratio
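A sketch of those tunables (the values are illustrative, not recommendations): higher ratios let more dirty pages accumulate in RAM before writeback kicks in.

    sysctl -w vm.dirty_background_ratio=50  # % of RAM dirty before background writeback starts
    sysctl -w vm.dirty_ratio=80             # % of RAM dirty before writers are forced to flush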
<aszlig> or:
<aszlig> systemd.services.nix-daemon.environment.LD_PRELOAD = "${pkgs.libeatmydata}/lib/libeatmydata.so";
<gchristensen> I would 100% do that if this machine was easy to replace :P
<gchristensen> (read: netboot)
<aszlig> hm?
<aszlig> what exactly?
<gchristensen> libeatmydata
<Dezgeg> well the nix store isn't power-fail safe anyway so almost no harm x)
<Dezgeg> though with that even nix-store --repair won't work
<aszlig> gchristensen: why? the builds are going to be transferred off the build machine anyway, so no harm
<gchristensen> I need to be able to safely update the machine
<aszlig> hm...
<aszlig> run a second nix-daemon?
<aszlig> along with a separate store from /nix/store
<aszlig> nix 1.12 should make that quite easy
<aszlig> but even when not using nix 1.12, you can still use containers
<aszlig> ... which of course shouldn't share the store... hmmm...
<gchristensen> ehh once we get a bit better aarch64 support it'll be easy too make this netboot I think then we can just go libeatmydata
<aszlig> or that :-)
<clever> Dezgeg: i think that as long as the order of data writes and flushes to db.sqlite is preserved, it is power-fail safe
<clever> but the defaults for ext4 don't preserve that
<clever> i've seen files truncate after an improper shutdown
<clever> zfs seems like it will obey those rules much much better
<clever> i've also noticed that /nix/var/nix/binary-cache-v3.sqlite has sync mode disabled for speed, which can lead to corruption on improper shutdown
<clever> nix can safely regenerate it if deleted, but it doesn't actually handle the corruption
<clever> so i had to spend a few hours debugging that, and then delete the db
<Dezgeg> well it's nix who is not calling fsync() on store paths
<clever> yeah, but i would still expect zfs to preserve, in order, all non-synced writes that happen before a synced write to db.sqlite
<Dezgeg> why?
<clever> that's just the design feel i get from zfs, but i see your point: the fsync() would lag more because of other data...
<Dezgeg> that would be a massive performance killer
<clever> but what about when you close() a handle, doesn't that imply a sync?
<Dezgeg> no
<clever> ah, the man page agrees with you
<clever> 2017-11-25 12:36:31 < DeHackEd> the file written normally will be in an unknown state, but ZFS does guarantee that the sequence of filesystem syscalls will have been performed in order
<clever> from #zfs
<clever> and a normal sync() would ensure everything goes to disk
<clever> so in theory, if nix did a sync() against the /nix/store filesystem, all storepaths would persist, and it would be safe to mark them as good in db.sqlite
<Dezgeg> well yes, that would work on any filesystem
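A sketch of what that would look like, assuming a coreutils sync new enough to support --file-system:

    # flush all pending writes on the filesystem holding the store;
    # only then fsync db.sqlite and mark the new paths as valid
    sync --file-system /nix/store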
<vcunat> Hydra is evaluating again!
FRidh has quit [(Quit: Konversation terminated!)]
JosW has quit [(Quit: Konversation terminated!)]
mbrgm_ has joined #nixos-dev
mbrgm has quit [(Ping timeout: 240 seconds)]
mbrgm_ is now known as mbrgm
vcunat has quit [(Quit: Leaving.)]
<simpson> Hm, with fetchgit, is there a way to ask for the --recursive flag to git? Looking at packaging libfirm using these instructions: https://pp.ipd.kit.edu/firm/Download.html
<clever> simpson: i think you want fetchSubmodules = true;
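Roughly, fetchSubmodules = true makes fetchgit do the equivalent of (repository URL and revision are placeholders):

    git clone <repo-url> libfirm
    cd libfirm && git checkout <rev>
    git submodule update --init --recursive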
* simpson tries
Sonarpulse has joined #nixos-dev
<Sonarpulse> bgamari: https://sourceware.org/binutils/docs/binutils/Selecting-the-Target-System.html is btw a partial list of ways to specify all the target platforms in binutils
mbrgm has quit [(Ping timeout: 240 seconds)]
mbrgm has joined #nixos-dev
ma27 has joined #nixos-dev
<gchristensen> "Parallel mksquashfs: Using 96 processors"
zraexy has quit [(Ping timeout: 260 seconds)]
<ma27> gchristensen: do we actually receive sponsorship for these machines (because we're such an awesome distro :p) or is this completely paid by the NixOS Foundation?
<gchristensen> Packet.net provides one Type 2 for Hydra, one Type 2A for Hydra, and soon one Type 2A to give the community an aarch64 system to help improve our aarch64 support
<gchristensen> see also https://www.packet.net/bare-metal/
<ma27> gchristensen: awesome!
<gchristensen> Packet.net is extremely awesome all around :)
<gchristensen> I love that my personal server has bonded gigabit NICs, and their entire company is incredible
<clever> :O
<ma27> gchristensen: sounds awesome! I should really have a more detailed look at them %)
<clever> gchristensen: are the x86 offerings anywhere near as amazing?
<gchristensen> they all are
<gchristensen> take a look at that URL ^
<clever> nice
<clever> and i think i heard something about a nixos image?
<gchristensen> they officially support nixos on some of the types, but I owe them an updated installer
zraexy has joined #nixos-dev
_ts_ has quit [(Ping timeout: 252 seconds)]