sphalerite changed the topic of #nixos-dev to: NixOS Development (#nixos for questions) | NixOS 19.03 released! https://discourse.nixos.org/t/nixos-19-03-release/2652 | https://hydra.nixos.org/jobset/nixos/trunk-combined https://channels.nix.gsc.io/graph.html https://r13y.com | 19.03 RMs: samueldr,sphalerite | https://logs.nix.samueldr.com/nixos-dev
<clever> gchristensen: i just had a crazy idea, but it would require patching hydra
<clever> gchristensen: what if your mac build agents were also linux build agents (since they run nixos!)
<clever> gchristensen: or even better, configure the nixos host to act like a proxy, conditionalized on the target system, so hydra thinks it's one magic box that can run both linux and darwin
<gchristensen> clever: :o
<clever> gchristensen: i dont remember when it broke, but build slaves used to obey /etc/nix/machines, and could forward a build on even further
<clever> gchristensen: so you could claim the host can run both linux+darwin, and then it will farm the darwin stuff out to its own guest
<clever> and because hydra thinks its one machine, it wont over-allocate jobs to it
<clever> but you cant make hydra prefer doing darwin jobs first, so a linux job may keep the machine tied up
<gchristensen> nice
<gchristensen> if the machines had spare CPU, I might set that up
<clever> gchristensen: a more complex patch, would be to teach hydra about such a setup
<clever> so hydra knows 2 machines share the compute power, and to do one or the other
<clever> and then use speedfactor, so it prefers other linux slaves over the "mac"ish ones
<clever> or maybe use some of the hydra-provisioner logic to dynamically add and remove the linux side
<clever> so when they get a mac load, they vanish from the linux slave list
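The single-entry trick clever sketches maps onto the build-machines file format, where one line can advertise multiple systems. A hypothetical sketch (hostnames, key paths, and capacities invented for illustration):

```
# hydra's machines file: one box claiming both systems
ssh://nix@mac1.example.com x86_64-linux,x86_64-darwin /etc/nix/id_buildfarm 8 1
# /etc/nix/machines on mac1 itself, forwarding darwin jobs on to its guest:
ssh://builder@mac1-guest x86_64-darwin /etc/nix/id_guest 4 1
```

The fields are: URI, comma-separated systems, SSH identity file, max jobs, and speed factor; a lower speed factor on the "mac"-ish entries is what would make hydra prefer the dedicated linux slaves.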
<gchristensen> clever: I have a small project for you, if you're inclined :)
<gchristensen> clever: which is that sometimes run-macos-vm fails during the macos boot phase on DSMOS. I'm inclined to ignore this problem and hack around it and restart it if it fails to bring up ssh within 10min
<{^_^}> #62731 (by ivan, 8 minutes ago, open): libmatroska: 1.5.0 -> 1.5.2
<ivan> also I feel like we need a mechanism to force building some other package that is known to be tightly version-coupled
<gchristensen> ivan: perhaps you'd like to be granted ofborg privileges
<ekleog> ivan: you might want to open a PR like https://github.com/NixOS/ofborg/pull/361 :)
<{^_^}> ofborg#361 (by yorickvP, 4 weeks ago, merged): add yorickvP to extra-known-users
<ekleog> looks like I'm slow
<gchristensen> nope, thanks for sending the link
<ekleog> ivan: also, “force building some other package that is known to be tightly version-coupled” sounds like a good use for `passthru.tests = { fubar-still-builds = fubar; };` -- can't remember whether it's already been tested to not infinite-recurse, though; also, the tooling doesn't build those yet, but at least the metadata will be there
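ekleog's suggestion, sketched against the libmatroska/libebml pairing from the PR above (the attribute names are illustrative, and whether this avoids infinite recursion was still an open question in the discussion):

```nix
# in libebml's default.nix -- libmatroska is version-coupled to libebml,
# so record it as a passthru test that tooling can discover (and, once
# supported, build automatically)
{ stdenv, fetchurl, libmatroska }:

stdenv.mkDerivation rec {
  pname = "libebml";
  # ...
  passthru.tests = {
    libmatroska-still-builds = libmatroska;
  };
}
```

Because `passthru` is lazy, referencing the reverse dependency here does not affect libebml's own build.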
<ivan> thanks, PR opened
<ivan> do I have to get the reverse dependencies for ofborg myself?
<gchristensen> ivan: once `grahamcofborg-eval` finishes, it'll have a Details link which lists impacted packages
<gchristensen> also, still deploying your auth grant
<ivan> ah, great, thanks
<ivan> https://github.com/NixOS/nixpkgs/pull/59727 had a similar version coupling issue
<{^_^}> #59727 (by r-ryantm, 7 weeks ago, merged): vulkan-headers: 1.1.101.0 -> 1.1.106
<ivan> nothing added there to prevent it
<gchristensen> usually I recommend adding a comment around the version saying "when updating X update Y"
<ivan> thanks
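gchristensen's comment convention, sketched for the vulkan case from the linked PR (the exact coupled package set is an assumption based on the vulkan ecosystem):

```nix
# pkgs/development/libraries/vulkan-headers/default.nix (sketch)
version = "1.1.106"; # WARNING: when updating vulkan-headers, update
                     # vulkan-loader to the matching release as well
```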
<gchristensen> oops... I forgot to press "continue" on the deploy.
<gchristensen> should be good to go, ivan
<ivan> thanks
<gchristensen> gutsy choice, ivan :)
<ivan> did I @ the wrong bot
<ivan> apparently
<gchristensen> you @'d an organization
<gchristensen> but more gutsy was putting so many builds into a single incantation
<gchristensen> each of those will run at the same time, and at the end you'll not get very good information as all the logs will be combined -- and if any of them fail, the whole result will be a fail
<ivan> ah but I've got a log untangling network in my brain already
<ivan> thanks nix
<pie__> not sure whether to laugh or cry
<ivan> crawling up my yak stack to the chromium update I wanted to do
<ivan> I guess I'll try separate build commands in the future
<Profpatsch> pie__: When in doubt, try both
<Profpatsch> And then press "continue" to deploy
<pie__> Profpatsch, ++
<pie__> press any emotion to continue
<gchristensen> what are we talking about?
<gchristensen> I feel there is a critique here that would be good to hear
<Profpatsch> we’re just joking
<Profpatsch> Sometimes we carry our special brand of humor to the surface of official nix channels ;)
<Profpatsch> Come join us in ##proto-n … better not
<ivan> why isn't the aarch64-linux change listed in grahamcofborg-eval here? https://github.com/NixOS/nixpkgs/pull/62720
<{^_^}> #62720 (by ivan, 6 hours ago, open): chromium: 74.0.3729.157 -> 75.0.3770.80
<ivan> also, is it `build chromium.aarch64-linux`?
<gchristensen> no, just chromium. also, it will never work on ofborg
<gchristensen> ofborg limits builds to 1h
<ivan> ah ok thanks
<samueldr> ivan: chromium build has been "broken" due to a dependency not being supported on aarch64 for a small while
<samueldr> >> error: Package ‘aften-0.0.8’ in /home/samueldr/nixpkgs/pkgs/development/libraries/aften/default.nix:16 is not supported on ‘aarch64-unknown-linux-gnu’, refusing to evaluate.
<ivan> ah, thank you
<samueldr> if you haven't already, and want to help with aarch64 stuff, you might ask for access to the community aarch64 builder, which is beefy, and therefore helpful for iterating on fixes during longer compilations
<ivan> maybe someone wants to just merge https://github.com/NixOS/nixpkgs/pull/62731
<{^_^}> #62731 (by ivan, 2 hours ago, open): libmatroska: 1.5.0 -> 1.5.2
<ivan> I will probably deal with any very unlikely fallout
<ivan> thanks
<catern> quick thought, I haven't thought it out in too much depth yet, but: flakes are separated into a list of inputs, then an outputs-function which takes those evaluated inputs. I'm guessing that maybe that's done so that the inputs can be introspected and separately evaluated and all that. But, it's a bit awkward; what if instead we had new syntax for using so-and-so flake? like |foo://bar| to use the flake referenced by foo://bar. and then
<catern> we could have a builtin which returns a list of all the flakes used by a specific expression, and maybe another builtin to provide the actual flake inputs?
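For context, the declared-inputs shape catern is contrasting with looked roughly like this (the flake schema was still in flux at the time, so treat this as an illustrative sketch rather than the exact format):

```nix
# flake.nix: inputs are plain data, so the tooling can enumerate and pin
# them in the lockfile without evaluating the outputs function at all
{
  inputs = [ "nixpkgs" ];
  outputs = { nixpkgs }: {
    # ... outputs built from the pinned nixpkgs ...
  };
}
```

With inline `|foo://bar|` syntax instead, that enumeration would require parsing or evaluating every expression in the flake, which is exactly the problem niksnut raises below.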
<niksnut> catern: interesting idea, however you do need a way to enumerate all the inputs of a flake to compute the lockfile
<niksnut> if flakes are imported using some special syntax, then presumably you could use this in any nix expression, not just flake.nix
<niksnut> so it's not clear how you find all the inputs
<niksnut> well, I guess you could parse all the nix expressions in a flake...
<clever> gchristensen: DSMOS?, do you have a screenshot of the error?
<gchristensen> clever: do a quick search for "Waiting for DSMOS" ;)
<gchristensen> and before anybody else notices, note that we shouldn't be hit by DSMOS since the macs do run on apple hardware
<clever> gchristensen: i did run into a failure with a giant no sign once, rebooting with zero changes cleared it
<gchristensen> no sign?
<clever> a circle with a diagonal line thru it
<gchristensen> I was only able to see "Waiting for DSMOS" by passing debug in the boot params
<gchristensen> ah
<clever> where do you set the boot params? config.plist?
<gchristensen> I did it by hand in clover
<gchristensen> there seem to be many reasons why it won't boot -- to the point that just issuing a reboot seems like a reasonable course of action
<gchristensen> but if you can do better, please :D
<clever> *looks*
<clever> <key>Arguments</key>
<clever> maybe in here
<gchristensen> yeah I think there
<clever> deploying...
<clever> gchristensen: added debug, but it didnt print any debug...
<clever> debug is present under Options: when in clover
<gchristensen> mmm I think you need it in Boot Args
<clever> clover however has a -v option in its ui, which does show debug
<clever> i can also confirm that apply.sh doesnt like it when you reboot without a rollback
<gchristensen> lol that is true
<gchristensen> it very much wants to be a fresh disk
<clever> you may want to run qemu with -no-reboot
<clever> then it will just exit if you try to reboot the machine
<gchristensen> oh cool
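A minimal sketch of what that changes (memory/cpu/disk values invented; the real macOS-guest flags are elided):

```
# -no-reboot: a guest-initiated reboot makes qemu exit instead, so the
# supervising script or systemd unit decides whether to start it again
qemu-system-x86_64 -no-reboot -m 8G -smp 4 -drive file=mac-guest.img,format=raw
```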
<clever> yeah, i now have verbose logging on bootup
<clever> i'll make that the default and push my changes
<gchristensen> can you link me to where you build clover once again?
<clever> and you can then use it like: cloverImage = (pkgs.callPackage ./macs/clover-image.nix { csrFlag = "0x23"; }).clover-image;
<clever> until i can reproduce the issue, i dont think i'll be of much help, all i can think of is to reboot it endlessly
<clever> though... apply.sh could just tell it to shutdown, and then systemd restarts it, endlessly, until something goes wrong :P
<gchristensen> apply.sh doesn't run if "waiting for dsmos" fails
<gchristensen> so endless reboots isn't the worst thing
<clever> so, it will eventually fail, at waiting for dsmos
<clever> then you configure qemu to overwrite the logs every time it starts
<clever> and wait for it to hang at dsmos!
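clever's shutdown-and-restart loop could be expressed on the NixOS host as something like this (the service name and `startMacVm` wrapper are assumptions, not part of the actual config):

```nix
# hypothetical host config: the guest shutting down just cycles the VM
# endlessly, until it finally wedges at "waiting for DSMOS"
systemd.services.mac-vm = {
  wantedBy = [ "multi-user.target" ];
  serviceConfig = {
    ExecStart = "${startMacVm}"; # assumed wrapper around qemu -no-reboot
    Restart = "always";          # restart after every guest shutdown
    RestartSec = "10s";
  };
};
```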
<gchristensen> "libguestfs-with-appliance" oof
<gchristensen> you sure it needs the appliance? :)
<clever> it was failing without that, but maybe i fixed it already
<clever> gchristensen: another thing i want to do next, is change the option to a submodule
<clever> so i can spin up 2 macs on each mac
<gchristensen> oh?
<gchristensen> I really wouldn't recommend it
<gchristensen> but if you insist :)
<clever> and limit the core/ram so they can share
<gchristensen> but why?
<clever> separate build systems, hydra and buildkite
<gchristensen> sounds painful
<clever> it would also give us a spare guest to set off infinite reboots
<clever> as for the disk images, i already confirmed zfs clone looks suitable
<gchristensen> yeah zfs clone would be fine
<clever> clone the snapshot, to make a new zvol for the guest, destroy at shutdown
<gchristensen> nice
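The clone-per-boot flow clever describes, sketched with invented dataset names:

```
zfs snapshot tank/mac-base@clean            # golden image, taken once
zfs clone tank/mac-base@clean tank/guest1   # cheap copy-on-write zvol per guest
# ... boot qemu against /dev/zvol/tank/guest1 ...
zfs destroy tank/guest1                     # discard guest state at shutdown
```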
<clever> oh, and cachecache is already running on my macs
<gchristensen> anyway, I would really recommend against it :x the macs seem to already be resource-strained as it is
<clever> so the host persists things the guest downloads often
<clever> oh, and there is a problem in nix-darwin
<gchristensen> oh?
<clever> --option foo bar, works
<clever> but --option foo "bar baz" fails
<clever> no amount of escaping things has worked so far
<gchristensen> I don't understand the problem
<clever> if i try to give `darwin-rebuild` more than one trusted-public-keys, it fails to even run
<clever> and i cant use nix.conf, because nix-darwin wont overwrite existing files, so it will never manage it
<LnL> clever: shell splitting is kind of broken for darwin-rebuild IIRC
* LnL should probably fix that
<clever> oh, and once that works, i'm pre-building the entire nix-darwin closure on hydra
<clever> so it can just download a prebuilt copy
<clever> and with cachecache on the host, its essentially just copying from the disk
<gchristensen> clever: LIBGUESTFS_PATH = libguestfs-appliance;
<gchristensen> will save you from rebuilding libguestfs just to put the appliance in the env
<clever> ah, i didnt look at the wrapper that closely, and thought it was setting an env var
<gchristensen> I haven't either, but iirc it does cause a rebuild
<LnL> clever: as for /etc: adding an unsafe option that overwrites stuff and skips the user sanity checks would make sense for deployment automation
<clever> LnL: i suspect NIX_CONF_DIR may also work
<clever> just temporarily tell nix to look elsewhere for nix.conf
<clever> then nix-darwin wont find it in the real spot, and will happily put it there
<clever> but behind the scenes, nix is looking elsewhere
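The bootstrap trick as described, untested, and with the caveat from the surrounding discussion that the daemon never sees the variable:

```
# point nix at a scratch config dir for the duration of the bootstrap
export NIX_CONF_DIR="$(mktemp -d)"
darwin-rebuild switch   # nix-darwin finds no nix.conf in the real spot and
                        # happily writes its managed one into /etc/nix
unset NIX_CONF_DIR
```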
<LnL> sure, but the nix.* options are useless if nix.conf isn't managed
<clever> just temporary, while nix-darwin is creating nix.conf
<gchristensen> I think this is just for bootstrapping
<clever> exactly
<LnL> trusted-public-keys must be configured in the daemon IIRC
<LnL> unless I fixed that?
<{^_^}> nix#1921 (by LnL7, 1 year ago, closed): trusted-public-keys in a (trusted) user's nix.conf is silently ignored
<LnL> should work then
<clever> i believe if you run nix as root, NIX_CONF_DIR is trusted fully
<clever> oh, but the nix-daemon wont get that var
<clever> yeah, your above link will help solve that more easily
<LnL> root doesn't go through the daemon unless --store is explicitly set so that should also work
<clever> but it's running darwin-rebuild as nixos, via sudo
<clever> so it will be using the daemon
<clever> i'll need to make nixos a trusted user, oh crap
<clever> in the nix.conf i cant have, lol
<clever> maybe take advantage of the need to reload nix-daemon? lol
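For reference, the setting clever needs is a daemon-side nix.conf line, which is exactly the file that isn't managed yet during bootstrap:

```
# /etc/nix/nix.conf -- the daemon only honours trusted-public-keys and
# similar options from users listed here
trusted-users = root nixos
```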
<gchristensen> clever: why do you set the csrFlag to 0x23 just to override it?
<clever> gchristensen: the default is 3, which only allows unsigned kexts
<gchristensen> interesting
<clever> i later set it to 0x23, which also allows dtrace
<clever> 3 was the default from the original config.plist
<clever> the image generator links to the definition of the bitfield
<gchristensen> ah!
<LnL> clever: you could sudo nix-store -r ... before switching but that's not very nice either :/
<clever> LnL: i have been thinking of baking the storepath of the prebuilt thing into apply.sh
<clever> since the config and apply.sh are paired together in the host closure
<LnL> if that's an option and you don't need to run rebuilds interactively, all the installation setup can be skipped
<clever> i could even skip shipping the darwin config!
<clever> which was a tricky part
<clever> this is starting to sound like the darwin version of nixops, lol
<clever> i'm guessing i can skip the entire darwin-installer, since thats only to bootstrap?
<clever> and if i have a built copy of nix-darwin, i can skip running darwin-rebuild, and just run something inside it, but what?
<LnL> yeah there are a couple of missing parts, activation is currently split up into 2 phases (which I'd like to get rid of)
<LnL> and some stuff should be configurable for deployment purposes like I mentioned before
<clever> i have wondered, how it can do root-like things, without sudo, when you darwin-rebuild
<clever> is that just a side-effect of `echo "%admin ALL = NOPASSWD: ALL" > /etc/sudoers.d/passwordless`
<LnL> yeah
<clever> that makes sense
<LnL> $out/user-activate is applied for the "login user" and $out/activate runs as root
<clever> do they need to be run in a special order?
<LnL> don't think so
<clever> should be simple enough then to just nix-store -r, and then run both, one via sudo
<clever> and since there is a clear separation between download & activate, i can just delete nix.conf between those 2
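Putting clever's plan together, using the `$out/activate` / `$out/user-activate` layout LnL describes (`$prebuilt` is a placeholder for the pre-built system closure, not a real path):

```
nix-store -r "$prebuilt"        # fetch the pre-built nix-darwin closure
sudo rm -f /etc/nix/nix.conf    # drop the bootstrap config between download and activate
sudo "$prebuilt/activate"       # root phase
"$prebuilt/user-activate"       # login-user phase
```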
<LnL> but I really want to fold them together and get rid of the user one
<clever> yeah, the root one can just drop-root for you
<LnL> yeah, I think sudo -u $SUDO_USER should be equivalent
<clever> except when it's being run on bootup
<LnL> bootup only runs a subset
<clever> now i'm getting crazy ideas
<clever> gchristensen: a nixops plugin, to support nix-darwin machines ....
<gchristensen> heh
<gchristensen> I considered doing that!
<clever> if you conditionally run either nixos or nix-darwin here
<gchristensen> I decided I would just feel happier overall if the VMs disappeared on each boot
<clever> then you can mix both types in a single env
<clever> for the build slave stuff, yeah
<clever> but other people may have different needs for their macs and nix-darwin
<LnL> man this project really got out of hand, I just added evalModules to my user config.nix
<LnL> :D
<gchristensen> :o
<gchristensen> clever: I guess you're not interested in writing the thing to reboot the VM if it doesn't come up?
<clever> gchristensen: it's more that i'm currently benchmarking that mac as a build slave, and cant reboot it constantly
<gchristensen> ah!
<clever> once ive got a spare guest, i can give that a spin
<gchristensen> cool
<LnL> clever: I added a darwin-rebuild activate subcommand and fixed the quoting
<clever> LnL: nice
<clever> i'll have to try that out later today