taktoa has quit [(Remote host closed the connection)]
<Mic92>
did something change regarding nativeBuildInputs? badvpn https://hydra.nixos.org/build/62385014/nixlog/1 cannot find pkgconfig if it is in nativeBuildInputs, but it works if it is in buildInputs
<gchristensen>
something did change w.r.t. pkgconfig and nativeBuildInputs, ping Sonarpulse
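For context, a minimal sketch of the conventional placement under discussion (not the actual badvpn expression): pkgconfig is a build-time tool, so it belongs in nativeBuildInputs, while libraries being linked against belong in buildInputs.

    { stdenv, pkgconfig, openssl }:

    stdenv.mkDerivation {
      name = "example-0.1";              # illustrative
      nativeBuildInputs = [ pkgconfig ]; # tools that run during the build
      buildInputs = [ openssl ];         # libraries for the build target
    }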
<gchristensen>
NixOps should mutilate the configuration.nix on the target host after it is done
<gchristensen>
for example, it should add `assert builtins.trace "Hey dummy, you're on your server! Use NixOps!" false;` to the beginning of configuration.nix
<simpson>
Heh.
<gchristensen>
the other day I uninstalled everything from my server and deleted all the users but root by mistake because of this
<ikwildrpepper>
gchristensen: or somehow disable nixos-rebuild or remove it from PATH
<ikwildrpepper>
gchristensen: i might consider it as part of a new tool that I am experimenting with atm
jtojnar has quit [(Read error: Connection reset by peer)]
<gchristensen>
cool :)
jtojnar has joined #nixos-dev
<gchristensen>
ooohhh it could hard-set `nix.nixPath = [ "nixos-config=${pkgs.writeText "configuration.nix" "assert false" }" ]`
<gchristensen>
this way it doesn't have to do anything weird about nixos-rebuild, and doesn't have to manipulate user files
<gchristensen>
hmm no that doesn't work, because setting it overrides the defaults
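A sketch of the idea and its pitfall: on NixOS, setting nix.nixPath replaces the default search path entirely, so the usual nixpkgs entry would have to be restated by hand (the path below is the typical NixOS default).

    nix.nixPath = [
      "nixos-config=${pkgs.writeText "configuration.nix" "assert false"}"
      # overriding nixPath drops the defaults, so nixpkgs must be re-added:
      "nixpkgs=/nix/var/nix/profiles/per-user/root/channels/nixos"
    ];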
jtojnar has quit [(Remote host closed the connection)]
jtojnar has joined #nixos-dev
<disasm>
gchristensen: yeah, I did that myself a few months back, thanks for submitting that issue! Luckily my server was pre-production at the time so all I had to do was run nixops deploy to get it working again.
<gchristensen>
:D
<gchristensen>
in a mkDerivation build, I do: `echo ${<nixpkgs>}` and this outputs /nix/store/k8g2ma0sg58mzfjr1my8hys8gm374yqv-nixpkgs but /nix/store/k8g2ma0sg58mzfjr1my8hys8gm374yqv-nixpkgs is a symlink to `.` any ... thoughts ...?
<ikwildrpepper>
gchristensen: `ls -l $(nix-instantiate --find-file nixpkgs)`
<gchristensen>
I'm not finding a way to copy the channel data into the store and refer to it
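One hedged workaround for getting real channel data into the store, instead of going through the <nixpkgs> search-path symlink, is to fetch the channel tarball directly (channel name illustrative):

    let
      nixpkgs = builtins.fetchTarball
        "https://nixos.org/channels/nixos-17.09/nixexprs.tar.xz";
    in
      import nixpkgs { }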
taktoa has joined #nixos-dev
jtojnar has quit [(Remote host closed the connection)]
jtojnar has joined #nixos-dev
<gchristensen>
niksnut: would you merge a PR to nixos.org adding a "Friend of Nix" page to show that serious orgs use Nix? re: https://github.com/NixOS/nixos-homepage/issues/167 I asked catern to just make the page and then PR it, but they rightly want to make sure it will be merged before soliciting people's logos
<niksnut>
sounds good to me
jtojnar has quit [(Remote host closed the connection)]
<copumpkin>
when I run a sandboxed nix build in linux, what capabilities does it get?
<copumpkin>
none?
<MichaelRaskin>
I would hope that none.
<copumpkin>
yeah, I can't find any code adding any
<copumpkin>
the only thing I could think of might be CAP_CHOWN
<copumpkin>
but even that seems unlikely
<catern>
all capabilities correspond to (subsets of) things which root can do, so, assuming a package can build without root/caps outside of Nix, it shouldn't need root or any capabilities inside Nix
<catern>
(things which root can do and normal users can't)
<copumpkin>
fair enough
taktoa has quit [(Remote host closed the connection)]
<niksnut>
now we just need to enable debug info for more packages
<copumpkin>
ooh
<copumpkin>
o/
<copumpkin>
niksnut: any idea how this all works on macOS, if at all (I know we have the .dsym stuff and Nix doesn't currently produce any of that)
<copumpkin>
seems unlikely to be applicable, even if the underlying dwarf stuff is probably similar
<catern>
mm, just curious, why doesn't this just realize the derivation containing the debuginfo into the store, instead of maintaining its own cache?
<copumpkin>
yeah I also asked that when I first saw the magic debuginfo stuff in the binary cache
<catern>
(I see why it needs to go out to cache.nixos.org for debuginfo though)
<copumpkin>
it seems like it's mostly because of the buildid machinery in ELFs?
<catern>
let me rephrase, sorry
<copumpkin>
which isn't automatically linkable to a store path?
<catern>
I mean that there's c.n.o/debuginfo/<buildid>, and the existence of that makes sense: it's non-trivial to map a build ID to a store path containing debugging data for that build, so we need this map to exist
<catern>
then, once we know the store path containing debugging data for that build, we can download that from the cache. but: "dwarffs will unpack all these debug info files to a cache (/var/cache/dwarffs/) and serve them from there." why doesn't it put them into the store instead?
<niksnut>
it could, but I wanted to keep it a bit Nix-independent
<niksnut>
in principle, dwarffs can support .debug files from any distribution
<catern>
ah, that makes a lot of sense
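For reference, a hedged sketch of wiring dwarffs into a NixOS configuration; the repo URL and module.nix path are assumptions based on the dwarffs README, not verified here:

    {
      imports = [
        # assumption: the dwarffs repo ships a NixOS module as module.nix
        "${builtins.fetchTarball "https://github.com/edolstra/dwarffs/archive/master.tar.gz"}/module.nix"
      ];
    }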
<copumpkin>
niksnut: what does enabling debug info in nixpkgs derivations look like? creating a separate output for them? wondering if we can do it fairly systematically without too much burden on derivation authors
<niksnut>
set separateDebugInfo = true
<copumpkin>
should we just turn that on by default? I'm hoping we'll be able to support that sensibly in darwin too in the not-too-distant future (the main obstacle is that the tool generating that isn't fully open source yet, even though it's on its way)
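A minimal sketch of what that opt-in looks like in a package expression (package name and source are illustrative):

    stdenv.mkDerivation {
      name = "hello-2.10";
      src = fetchurl { /* ... */ };
      # adds a separate "debug" output holding the split DWARF data,
      # laid out by build ID under lib/debug/.build-id/
      separateDebugInfo = true;
    }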
<dtzWill>
dwarffs!! haha :D
* dtzWill
reads
<copumpkin>
dwarffORTRESs
<copumpkin>
now we know that niksnut is a secret fan
<dtzWill>
dwarffortresssFS
<dtzWill>
dwarves are files, etc.
<dtzWill>
(or does DF call them 'dwarfs'?)
* dtzWill
wonders why we can't make 'buildid' == 'store-hash'
<dtzWill>
but using existing buildid stuff does bode well for use outside NixOS
<dtzWill>
but it always felt like buildid was what people without Nix added since they wished they had Nix
<dtzWill>
lol
<copumpkin>
is buildid a hash or a uuid?
<dtzWill>
i thought it was a hash, but now I'm checking...
<dtzWill>
i believe it's supposed to be deterministic
<copumpkin>
so we could in theory use our store paths
<dtzWill>
oh, looks like --build-id can be many things: uuid, sha1, md5, or a specified hex string
<dtzWill>
i see why not tying it to Nix makes sense, but just curious if there are reasons beyond that for not using it
<catern>
I assumed the reason was because the debug info for different builds of the same derivation is not necessarily the same
<catern>
if a build is not 100% deterministic
JosW has joined #nixos-dev
goibhniu has joined #nixos-dev
<dtzWill>
catern: oh, indeed. and while that triggers a "fix it! fix it! that shouldn't happen!" hehe that's unrealistic for now/not there yet. Thanks, makes sense
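For reference, a hedged sketch of pinning the build ID to a chosen hex string from a Nix expression, using the --build-id=0x<hex> linker form mentioned above; the hash value is made up, and whether this is a good idea is exactly what is being debated:

    stdenv.mkDerivation {
      name = "example-0.1";
      # ld accepts a caller-supplied 40-hex-digit value; in principle a
      # store-path hash could be injected here (value below is fabricated)
      NIX_LDFLAGS = "--build-id=0x0123456789abcdef0123456789abcdef01234567";
    }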
taktoa has joined #nixos-dev
Sonarpulse has quit [(Remote host closed the connection)]
Sonarpulse has joined #nixos-dev
<Mic92>
gchristensen: I would be interested in maintaining official aarch64 images on the website. I will probably start with pine64/rock64.
<Mic92>
zimbatm: I can also do a nix-review, if the machine is not ready yet
<Mic92>
on the rust pr
<gchristensen>
cool
<Mic92>
gchristensen: you are operating the aarch64 hydra worker, right?
<gchristensen>
yeah
<zimbatm>
Mic92: I'm still waiting on Hetzner to respond. probably tomorrow
<gchristensen>
does pine64 / rock64 have the proper CPU type?
<Mic92>
zimbatm: I started a build
<Mic92>
gchristensen: you mean aarch64? Yes, hence the name
<zimbatm>
Mic92: all the rust packages should be failing if all goes to plan :)
<Dezgeg>
really? what potential differences are there?
FRidh has quit [(Quit: Konversation terminated!)]
<gchristensen>
oh maybe it is, and I'm just thinking about armv8 which isn't
<Mic92>
gchristensen: pine64 has the same cpu as rpi3, which is already usable by the binary cache.
* makefu
can confirm this :)
<gchristensen>
w00t
<Dezgeg>
of course they could always invent new differences in the future, just as they did in the past...
<Mic92>
I might also be able to lend an rpi3 later
<gchristensen>
you mean for the build cluster? or?
<Mic92>
gchristensen: to test kernel + build images
<gchristensen>
oh ok
<gchristensen>
(we can't realistically put raspberry pis into the actual hydra as a builder)
<Mic92>
no, way too slow, compared to the current builder
<gchristensen>
yeah :)
<gchristensen>
and if we need to, we could probably get a second aarch64 builder from Packet
<Dezgeg>
hooking the existing rpi3 image (nixos/modules/installer/cd-dvd/sd-image-aarch64.nix) to hydra would be a good first step
<Mic92>
Dezgeg: is there a reason this is not currently happening?
<Dezgeg>
well someone needs to figure out what incantation to write (and in which release.nix)
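A hedged guess at what such an incantation might look like; the attribute name and structure are illustrative, not the actual release.nix contents:

    # hypothetical jobset entry in nixos/release.nix
    sd_image_aarch64 = (import lib/eval-config.nix {
      system = "aarch64-linux";
      modules = [ ./modules/installer/cd-dvd/sd-image-aarch64.nix ];
    }).config.system.build.sdImage;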
<MichaelRaskin>
I wonder just how unrealistic it is to replace Travis with something of Hydra-like quality
<Mic92>
makefu: is the wifi-driver/firmware of the rpi3 redistributable?
<gchristensen>
MichaelRaskin: when would you mark a build as failed if we used hydra?
<makefu>
it is part of the raspbian distro, i guess yes
<makefu>
and one part of the firmware is already part of the package, only the "txt" is missing
<Dezgeg>
well, the wifi firmware presumably comes from other part of broadcom than the soc firmware ;)
<MichaelRaskin>
gchristensen: immediately on failure, unless it's known-transient stuff (space, network), in which case after three failures? with a somewhat wider list of people with restart rights.
<gchristensen>
MichaelRaskin: when what fails?
<gchristensen>
MichaelRaskin: that is always what I had a hard time with
<MichaelRaskin>
Sorry? We have Hydra, it eventually realises derivations.
<MichaelRaskin>
Every derivation can be in queue, realised, transiently failed (with a counter), or apparently-non-transiently failed.
zarel has quit [(Quit: Leaving)]
<gchristensen>
so if any build out of all 40,000 fails?
<Sonarpulse>
gchristensen: no more broken than previous commit
<MichaelRaskin>
Ah, you mean what to display green/red
<gchristensen>
aye
<MichaelRaskin>
Yes, no new failures w.r.t. base commit.
<gchristensen>
yeah, that is worth exploring
<gchristensen>
when I was experimenting with this, it was _very_ hard to do that :?
<MichaelRaskin>
I think I can do it in shell and the bottleneck will still be in builds.
<Sonarpulse>
what was the hard part?
<MichaelRaskin>
Integrating with Hydra… main problem is I know no Perl and prefer to keep that amount
<gchristensen>
doing it :P
<Sonarpulse>
:D
<gchristensen>
and also the transient failures that happen more than you think
<MichaelRaskin>
I know.
<Sonarpulse>
I like never get transient failures locally
<Sonarpulse>
but I see hydra get them all the time
<Sonarpulse>
on small rebuilds too
<gchristensen>
you're not operating at a load of 30
<Sonarpulse>
30?
<gchristensen>
yeah
<Sonarpulse>
what units?
<gchristensen>
`load`
<MichaelRaskin>
loadavg?
<gchristensen>
or, on the aarch64 builder: a load avg of ~90