<Sonarpulse> dtz: ah haha
<Sonarpulse> gchristensen: I was thinking about writing the next nix pill that lethalman never wrote/finished
<Sonarpulse> but
<Sonarpulse> at some point as the nix pills get more advanced
<Sonarpulse> they become more specific to a single nixpkgs version
<Sonarpulse> rather than general design patterns
<Sonarpulse> separately, bgamari was reminding me of the usefulness of "maintainer-notes" implementation-specific documentation
<Sonarpulse> I'm wary of over engineering things, but I'm somewhat tempted to split the nix pills into sort of "nix / timeless" and nixpkgs-specific
<Sonarpulse> orivej: vaguely related to the above, bgamari once reminded me we'll probably need a pkgconfig-wrapper
<Sonarpulse> at that point, the redundancies between the three wrappers mean we should factor things out
<Sonarpulse> I have a bunch of comments in cc-wrapper that I think no one is seeing, and furthermore it's annoying to modify because it will cause mass-rebuilds
<gchristensen> I'd be cautious about making too many disparate docs
<Sonarpulse> this *-wrapper stuff might be lower hanging fruit
<Sonarpulse> get the documentation easily editable in some new spot
<Sonarpulse> then think about how that may/may not overlap with a *-wrapper / env hook nix pill
<Sonarpulse> dtz: I have an old WIP branch for clang cross compile
<Sonarpulse> dtz: it's a bit subtle as tools and libraries (libcxx, compiler-rt) need to come from different stages
<Sonarpulse> so might be of some use
<bgamari> Is it just me or does cabal2nix not work with cabal-install 2?
<bgamari> it seems to look for the old 00-index.gz, which cabal no longer downloads iirc
<bgamari> either that or I am doing something terribly wrong
<dtz> hmm might be good to look at the old clang cross-compile branch if you have it around
<dtz> I mostly just started building "all" the llvm bits together and it became easier lol :P
<Sonarpulse> dtz: I'll dredge it up
<Sonarpulse> dtz: last thing on the texinfo is that the "buildPackages." in buildPackages.perl shouldn't be needed
<samueldr> Anyone with maven experience can chime in here? this would apply for eclipse builds too AFAIK (unverified) https://github.com/NixOS/nixpkgs/issues/34182
<Sonarpulse> bgamari said it once was, but that would be a bug on my part if so
<Sonarpulse> so let me know if the hash changes
<Sonarpulse> bgamari: that's alarming
<Sonarpulse> is it shelling out to cabal, or linking it?
<bgamari> I honestly don't know
<bgamari> I suppose it's probably linking it, by the looks of it
<dtz> Sonarpulse: oh, honestly I wasn't sure why "buildPackages." was needed, but I think I get it now
<Sonarpulse> dtz: yeah it's only needed for buildPackages.stdenv.*
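A minimal sketch of the distinction being made (the package itself is hypothetical): anything in nativeBuildInputs is already taken from buildPackages by the splicing machinery, so a bare perl suffices, and the explicit prefix is only needed for things like buildPackages.stdenv.* that are not spliced.

```nix
# Hypothetical package illustrating the splicing point above; the details track
# the nixpkgs cross machinery of the time rather than a stable interface.
{ stdenv, perl, texinfo, buildPackages }:

stdenv.mkDerivation {
  name = "example-0.1";
  src = ./.;

  # Entries here are resolved against the build platform automatically,
  # so no "buildPackages." prefix is needed.
  nativeBuildInputs = [ perl texinfo ];

  # stdenv attributes are not spliced, so the build-platform toolchain has to
  # be named explicitly when it is needed.
  depsBuildBuild = [ buildPackages.stdenv.cc ];
}
```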
<Sonarpulse> bgamari: if it's linking it, then it could actually be a bug
<bgamari> oh?
<Sonarpulse> bgamari: I wouldn't be surprised if cabal2nix was just being built with old cabal
<Sonarpulse> but I'm just guessing
<Sonarpulse> dtz: I guess I'll keep on reviewing things and maybe you do another build?
<Sonarpulse> also, you can use another nixpkgs to avoid a mass rebuild
<dtz> yeah I've got rebuilds going but they're backed up so dunno when they'll be ready, just wanted to let you know what was tested/not :)
<bgamari> has anyone else observed that nixUnstable wraps lines to the point of being unreadable?
<Sonarpulse> bgamari: I saw some issue for that on macOS
<bgamari> I'm seeing it on nixos
<bgamari> Setup: Encoun
<bgamari> tered missing d
<bgamari> ependencies
<Sonarpulse> ew
<Sonarpulse> sorry no idea
<Sonarpulse> still on 1.11
<Sonarpulse> mainly out of laziness
<bgamari> yeah, it's probably best to stay for now; the new interface is amazing but its quirks really take a bite out of productivity
<bgamari> e.g. --show-traces still doesn't work
<bgamari> so I regularly find that I have to fall back to the old interface
MichaelRaskin has quit [Ping timeout: 268 seconds]
<Sonarpulse> bgamari: I hope we can speed up development on it
<Sonarpulse> to get over the finish line
<Sonarpulse> let me rebase
<Sonarpulse> nevermind :D
<Sonarpulse> dtz: pushed the pciutils thing
<Sonarpulse> but it won't build on aarch64 at least
<dtz> hmm, let's drop pciutils for now? checked build history, shouldn't have included it in this batch
<dtz> only worked in musl-native
<Sonarpulse> dtz: ok
<Sonarpulse> but keep that commit :)
<Sonarpulse> dtz: the problem is the outer make cds into a subdir and runs configure there
<Sonarpulse> ugh
<Sonarpulse> hate those
<dtz> bgamari: show-traces was turned into an option, ("nix show-config" / nix.conf / "--option show-trace true")
<dtz> oh
<dtz> erm I really thought I fixed that
<dtz> or maybe it was another instance of that haha
<dtz> but yeah that's unfortunate >.<
<Sonarpulse> dtz: it's definitely been fixed by you two before, I wish I knew what to grep for!
<bgamari> dtz, max-build-cores and max-jobs indeed were
<bgamari> Which frankly I'm not sure constitutes an improvement
<dtz> bgamari: recently "show-trace" is an option
<bgamari> these are pretty frequently used options
<dtz> (6 days ago lol)
<bgamari> it seems reasonable to devote a flag to them, even if this is slightly inconsistent
<Sonarpulse> yeah not sure what anti-huffman this is
<dtz> well regardless, just saying it's an "option" now (past week), whereas previously it wasn't possible using the new UI at all lol
<dtz> don't mean to say things couldn't be a bit smoother
<dtz> :)
<bgamari> dtz, ahh, fair enough
<Sonarpulse> dtz: let me know if you wanna put that commit on a different branch
<Sonarpulse> so we don't lose it when rolling back
<Sonarpulse> bgamari: do you remember fixing any make-calls-configure-in-subdir issues, for reference?
<bgamari> nope
<bgamari> that generally just worked iirc
<bgamari> Sonarpulse, which package is failing?
<Sonarpulse> bgamari: pciutils
<Sonarpulse> dtz: does https://github.com/NixOS/nixpkgs/pull/34180 need testing?
<bgamari> Sonarpulse, didn't I have a patch for that?
<Sonarpulse> bgamari: this was yours cherry-picked many times I'd think
<Sonarpulse> I'll check
<dtz> Sonarpulse: re: pciutils, nah, we can just drop it if we can't get it working reasonably -- I'm grabbing the commit you pushed now and will stash it locally, thank you :)
<bgamari> that being said, it's possible that I just worked around pciutils
<bgamari> since my target doesn't have any PCI devices
<Sonarpulse> dtz: sounds good
<bgamari> dtz, thanks again for taking the time to merge sort through my trail of chaos
<dtz> Sonarpulse: building texinfo5 and texinfo6 on aarch64 presently, finally got through all the deps xD
<Sonarpulse> dtz: cool!
{^_^} has quit [Remote host closed the connection]
{^_^} has joined #nixos-dev
<dtz> texinfo looks good ^_^
<Sonarpulse> dtz: cool!
<Sonarpulse> dtz: stashed it?
<Sonarpulse> err wrong branch
<dtz> :)
<dtz> and yeah grabbed your pciutils commit, ty sir :)
orivej has quit [Ping timeout: 268 seconds]
mbrgm has quit [Ping timeout: 256 seconds]
mbrgm has joined #nixos-dev
acowley has quit [Ping timeout: 268 seconds]
acowley has joined #nixos-dev
Sonarpulse has quit [Ping timeout: 256 seconds]
la_putin has joined #nixos-dev
el_putin has quit [Read error: Connection reset by peer]
Sonarpulse has joined #nixos-dev
ma27 has joined #nixos-dev
ma27 has quit [Ping timeout: 265 seconds]
Sonarpulse has quit [Ping timeout: 246 seconds]
la_putin has quit [Quit: Konversation terminated!]
orivej has joined #nixos-dev
pie__ has quit [Ping timeout: 246 seconds]
orivej has quit [Ping timeout: 256 seconds]
pie_ has joined #nixos-dev
orivej has joined #nixos-dev
FRidh has joined #nixos-dev
* domenkozar waves
pie_ has quit [Ping timeout: 240 seconds]
<FRidh> This is quite nice: `with import (fetchTarball channel:nixos-17.09) {};` Does a nix command exist for substituting this with a fixed revision and hash? Something like `pip freeze`, but for pinning the Nixpkgs commit.
ma27 has joined #nixos-dev
ma27 has quit [Client Quit]
ma27 has joined #nixos-dev
<niksnut> FRidh: yes, fetchGit
<niksnut> e.g. fetchGit { url = https://github.com/NixOS/nixpkgs.git; ref = "release-17.09"; rev = "d9a2891c32ee452a2cd701310040b660da0cc853"; }
<aminechikhaoui> niksnut: btw would this eventually deprecate hydra inputs for example
<niksnut> yeah
<aminechikhaoui> awesome
<FRidh> niksnut: I'm familiar with fetchTarball and fetchgit. But what I mean is a way that it can fill in the revision and hash for you.
__Sander__ has joined #nixos-dev
<niksnut> that doesn't exist yet
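In the meantime, the manual version of what FRidh is asking for looks roughly like this; a sketch using the rev niksnut quoted above, since nothing fills the revision in for you automatically yet.

```nix
# Pin nixpkgs to a fixed revision by hand, per the fetchGit example above;
# swap in whatever commit you actually want to freeze on.
let
  nixpkgs = builtins.fetchGit {
    url = https://github.com/NixOS/nixpkgs.git;
    ref = "release-17.09";
    rev = "d9a2891c32ee452a2cd701310040b660da0cc853";
  };
in
import nixpkgs {}
```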
<zimbatm> anyone wants to become a GSoC mentor? https://github.com/nix-community/google-summer-of-code
ma27 has quit [Ping timeout: 255 seconds]
FRidh has quit [Ping timeout: 240 seconds]
FRidh has joined #nixos-dev
viric has quit [Ping timeout: 256 seconds]
ma27 has joined #nixos-dev
pie_ has joined #nixos-dev
ma27 has quit [Ping timeout: 276 seconds]
FRidh has quit [Ping timeout: 240 seconds]
ma27 has joined #nixos-dev
ma27 has quit [Ping timeout: 265 seconds]
infinisil has quit [Quit: ZNC 1.6.5 - http://znc.in]
viric has joined #nixos-dev
infinisil has joined #nixos-dev
ma27 has joined #nixos-dev
ma27 has quit [Ping timeout: 255 seconds]
ma27 has joined #nixos-dev
Sonarpulse has joined #nixos-dev
<copumpkin> niksnut: I'm making progress on my mysterious sandbox bug. It only shows up with --check! Still no clean repro unfortunately, but I think I'm homing in on it
<niksnut> weird
<copumpkin> yeah, can't explain it yet based on the code
<Dezgeg> maybe without --check it manages to read nsswitch.conf from before entering the sandbox and caches that?
<Dezgeg> and/or resolv.conf
<copumpkin> now trying to get it to fail in a builder that I control rather than builtin:fetchurl
pie__ has joined #nixos-dev
<copumpkin> maybe that'll clarify what's going on
pie_ has quit [Ping timeout: 256 seconds]
<niksnut> copumpkin: have you tried strace?
<copumpkin> yup, nothing obvious jumped out at me, but I'll look more carefully soon if this path shows me nothing
ma27 has quit [Ping timeout: 255 seconds]
<niksnut> I would look for connect() calls
<niksnut> and grep for nscd
<copumpkin> yeah I did, it kept trying to connect to the nscd socket but it wasn't there, iirc
<copumpkin> will look again soon
ma27 has joined #nixos-dev
<copumpkin> okay, so it doesn't seem to happen for my non-builtin builder with other conditions identical
<copumpkin> so I guess strace it is
<copumpkin> so I see various messages about trying to connect to nscd socket, but ENOENT
<copumpkin> e.g.,
<copumpkin> [pid 1347] connect(3, {sa_family=AF_UNIX, sun_path="/var/run/nscd/socket"}, 110) = -1 ENOENT (No such file or directory)
<copumpkin> same in the child
<copumpkin> although it's broken up with other nonsense due to concurrency
jtojnar has joined #nixos-dev
<copumpkin> [pid 1360] stat("/etc/resolv.conf", <unfinished ...>
<copumpkin> [pid 1360] <... open resumed> ) = -1 ENOENT (No such file or directory)
<copumpkin> that's also the child I think
<copumpkin> shouldn't there be a resolv.conf in the sandbox?
<Dezgeg> so no resolv.conf + no nscd = no dns in sandbox then
<copumpkin> trying now builtin:fetchurl against direct IP
<copumpkin> to see if it's general networking or just DNS
<copumpkin> so yeah, just DNS
<Dezgeg> is there a resolv.conf outside sandbox?
<Dezgeg> is it a symlink?
ma27 has quit [Ping timeout: 256 seconds]
jtojnar_ has joined #nixos-dev
jtojnar has quit [Ping timeout: 248 seconds]
jtojnar_ is now known as jtojnar
FRidh has joined #nixos-dev
<copumpkin> dtz, Sonarpulse : that clang python3 will screw us
<copumpkin> Dezgeg: it exists and is a regular file :/
<Sonarpulse> copumpkin: oh
<Sonarpulse> going to work
<Sonarpulse> so talk after
<Sonarpulse> feel free to revert if too bad
<dtz> was afraid of that, copumpkin. That's why I pinged y'all :)
<copumpkin> in general, grahamcofborg would catch stuff like that
<dtz> and indeed please revert with my apologies
<copumpkin> not sure why the bot didn't look at it, actually
<copumpkin> gchristensen: any idea?
<gchristensen> whats up?
<copumpkin> gchristensen: just wondering why ofborg didn't notice 34178
<gchristensen> hmm
<gchristensen> I'm not sure :/ I'll dig in to that shortly
<niksnut> copumpkin: maybe you have a non-standard nsswitch.conf?
<gchristensen> copumpkin: in the future if you notice that you can always trigger it again, fwiw: "@grahamcofborg eval"
<copumpkin> niksnut: it probably is, yeah, just trying to figure out what's weird about it. This is on AWS CodeBuild, FWIW
<copumpkin> so it's inside a docker container
<copumpkin> gchristensen: cool :) in this case I didn't see the PR until after it was merged
<gchristensen> yeah :/ I'll have to see what happened
Sonarpulse has quit [Ping timeout: 276 seconds]
<Dezgeg> actually, now that I see that those lines don't match: [pid 1360] stat("/etc/resolv.conf", <unfinished ...> [pid 1360] <... open resumed> ) = -1 ENOENT (No such file or directory)
<niksnut> probably we shouldn't bind-mount the host's nsswitch.conf, but install a minimal one
<Dezgeg> stat() is unfinished but open() gets ENOENT
<copumpkin> niksnut: in my case I had to avoid nscd on purpose because it was somehow going out to the host (as far as I could tell) and getent would fail for nix looking up the build user groups
<niksnut> edolstra, We found a potential security vulnerability in a repository for which you have been granted security alert access.
<copumpkin> I could probably be more precise about that
<niksnut> github complaining about gemfile.lock files again
<copumpkin> niksnut: nice!
<copumpkin> lol
<copumpkin> Dezgeg: yeah the lines might not match up properly, because I'm getting junk from all the concurrent processes with strace -f
<copumpkin> is there a good way to disentangle those?
<Dezgeg> you can split it to one file per process with some strace flag
<copumpkin> ah, cool
<copumpkin> didn't know about that one
pie__ has quit [Ping timeout: 240 seconds]
Sonarpulse has joined #nixos-dev
ma27 has joined #nixos-dev
<jtojnar> why does nix why-depends build packages?
<domenkozar> otherwise it can't know?
ma27 has quit [Ping timeout: 246 seconds]
<jtojnar> is there something similar for the drv files?
<copumpkin> Dezgeg: okay I now have a per-PID breakdown :P
ma27 has joined #nixos-dev
jtojnar_ has joined #nixos-dev
<jtojnar_> something like nix-store --tree but in reverse
jtojnar has quit [Ping timeout: 246 seconds]
jtojnar_ is now known as jtojnar
jtojnar has quit [Ping timeout: 256 seconds]
ma27 has quit [Ping timeout: 276 seconds]
ma27 has joined #nixos-dev
JosW has joined #nixos-dev
pie__ has joined #nixos-dev
<copumpkin> Dezgeg, niksnut: so I'm looking for write(2, "\1\n", 2) as what happens right before builtinFetchurl
<copumpkin> https://pastebin.com/aKf6Ti8t is all I see
FRidh2 has joined #nixos-dev
<copumpkin> there's no nix-daemon involved so I wouldn't expect much IPC, but perhaps the downloader stuff is weird?
<copumpkin> I guess it must be in a different process
<copumpkin> because I don't see any writes with the warnings about resolving the host name
<copumpkin> niksnut: oh, so the actual download happens in the parent process? we seem to fork, then builtinFetchurl in the child somehow gets the parent process to run the actual download? definitely seeing the resolution messages coming from the parent in strace
<niksnut> copumpkin: no
<niksnut> download is done in the child
<copumpkin> well, then this strace is super confusing
<copumpkin> because the write(2, "warning: unable to download" is coming from the parent
<copumpkin> whereas the final error: download failed comes from the child
<copumpkin> to be clear, I get a few of these: "warning: unable to download 'http://tarballs.nixos.org/sha256/fd9ca16de1052aac899ad3495ad20dfa906c27b4a5070102a2ec35ca3a4740c1': Couldn't resolve host name (6); retrying in 2608 ms"
<copumpkin> followed by "error: unable to download 'http://tukaani.org/xz/xz-5.2.3.tar.bz2': Couldn't resolve host name (6)"
<niksnut> copumpkin: isn't it just passing on a message from the child?
<copumpkin> I see no corresponding writes on the child process
<copumpkin> let me double check
<copumpkin> yeah, no writes containing that on the child. I suppose they might be hidden in a longer string that strace truncated
<copumpkin> but the child does comparatively little as I showed above
<copumpkin> in that paste
<copumpkin> you can see the write(2, "\1\n" indicating that we're about to do childish things
<copumpkin> and then it just exits
<copumpkin> after very little work
<niksnut> I see write(2, "error: unable to download 'http:"..., 99) = 99
<niksnut> at the end of the trace
<copumpkin> yeah
<copumpkin> but that's the final error message. But where did those warnings come from?
<copumpkin> it retries 8 times (4x tarballs.nixos.org and 4x the original tukaani host) and warns for each of them before giving up with that error
<niksnut> probably they came from the download thread
<niksnut> try tracing with -f
<copumpkin> this is with -f
<copumpkin> or rather, -ff splitting all the processes into different output files
<copumpkin> because it was impossible to follow otherwise
<copumpkin> but the only place I see those warnings come from is the parent process
<copumpkin> that's why I'm confused
<copumpkin> as I said, they might be buried in a long string that strace is truncating
<copumpkin> is there a distinctive syscall I should look for that might indicate the downloader process?
<niksnut> clone()
<copumpkin> I was using "\1\n" to find the child
<copumpkin> ok
<niksnut> unfortunately, it's pid 2 because of the pid namespace
<niksnut> so you'll have to map that back to the host pid somehow
<copumpkin> I don't think strace pays any attention to that
<copumpkin> none of my pids are 2
<copumpkin> or you mean the return value :/
<niksnut> clone(child_stack=0x7f4252cdaf70, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f4252cdb9d0, tls=0x7f4252cdb700, child_tidptr=0x7f4252cdb9d0) = 2
<niksnut> oh there's another thread
<niksnut> clone(child_stack=0x7f4252cdaf70, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7f4252cdb9d0, tls=0x7f4252cdb700, child_tidptr=0x7f4252cdb9d0) = 8
<copumpkin> I see
<niksnut> not sure what that is...
<copumpkin> I see a lot of tiny processes that do nothing but set_robust_list(...) and then a bunch of futex calls and then die
<copumpkin> so I'm guessing that those clones must be the next processes in order (there's almost nothing happening on this machine)
<copumpkin> yeah, that makes sense
<copumpkin> I think that's the clone = 2 from above
<copumpkin> niksnut: I'm guessing the 8 might be trying tukaani after tarballs.nixos.org failed?
<copumpkin> openat(AT_FDCWD, "/nix/store/1zv5dwifxg5fh08gif8ld3h9f40y8czh-glibc-2.26-115/lib/libnss_dns.so.2", O_RDONLY|O_CLOEXEC) = -1 ENOENT (No such file or directory)
<copumpkin> this looks like an actual downloader
<copumpkin> so it tries nscd socket, looks in resolv.conf, looks in hosts, then tries libnss_dns
<copumpkin> and then dies
<copumpkin> no attempts to look at nsswitch, it seems
<Dezgeg> what's the hosts: line in nsswitch?
<niksnut> copumpkin: that file is supposed to exist...
<copumpkin> yeah
<Dezgeg> not in the chroot
<Dezgeg> if you mean libnss_dns.so.2
<niksnut> hm
<Dezgeg> it's the library used by the executing nix
<copumpkin> running another build to dump contents of nsswitch.conf
<copumpkin> although why would it matter if the child isn't reading it?
<Dezgeg> isn't it just the hosts nsswitch?
<Dezgeg> probably it got cached before the fork for whatever reason
<copumpkin> could something in the parent be reading nsswitch so the child doesn't bother?
<copumpkin> yeah maybe
<Dezgeg> but if libnss_dns.so.2 is the one that actually connects to the dns servers, then it makes sense why it doesn't work
<copumpkin> nsswitch just says "hosts: files dns"
<Dezgeg> I suppose the 'files' one then just looks in /etc/hosts
<copumpkin> yeah
<copumpkin> no compat
<copumpkin> but then again I'm not running nscd
<Dezgeg> yes
<Dezgeg> if you were it'd connect to it and work
<copumpkin> so the issue is that the sandbox doesn't have enough stuff in it to allow DNS resolution?
<copumpkin> what's unclear to me is why this seems to only happen during --check
<copumpkin> maybe it's just a flawed test on my side and the non-check version is getting substituted
<Dezgeg> something caused it to be loaded in the parent, like the substituter doing DNS lookups
<copumpkin> oh
<copumpkin> so this is fork weirdness
<Dezgeg> no
<copumpkin> I wish I had a clean repro
<copumpkin> it's such a PITA to trigger CodeBuild runs for each test I run
<Dezgeg> which way is it? it fails when run _with_ --check?
<copumpkin> yeah
<copumpkin> and only seems to fail when run with --check
<Dezgeg> how about without check but with substitutes disabled?
<copumpkin> trying it, but IIRC that still worked
<Dezgeg> I bet if you strace that, someone loads libnss_dns.so.2 in the parent and the child in the sandbox never loads libnss_dns.so.2
<Dezgeg> strace the working build
<copumpkin> yeah, give it a few minutes to run :D
<copumpkin> although
<copumpkin> in one of my pastes above
<copumpkin> I pasted the parent process and that opens libnss_dns successfully
<copumpkin> before it ever spawns a child
<copumpkin> aha, the --check was a red herring, and you're right, if I run it with no --check and no substitutes, I get the same error
<copumpkin> that's good
<copumpkin> so it seems like builtin:fetchurl just can't resolve hostnames in this environment
<Dezgeg> which paste shows that?
<copumpkin> sorry, I must've forgotten to paste it
<copumpkin> I'll dig up another trace without --check or any confounding factors
<copumpkin> aha, I can repro it outside of CodeBuild
<copumpkin> this will make experimenting easier
<copumpkin> !
<copumpkin> okay this time I don't see libnss_dns.so, maybe because I disabled substitutes
<copumpkin> in the parent
__Sander__ has quit [Quit: Konversation terminated!]
<copumpkin> haha
<copumpkin> so now my repro
<copumpkin> involves builtins.seq (builtins.fetchurl "https://google.com") ...
<copumpkin> if the seq is there, it works
<copumpkin> if instead of builtins.fetchurl, I seq `5`, then it fails
<copumpkin> niksnut: :P
<copumpkin> so I think this is actually a more general bug but is almost always masked by nix doing DNS resolution of its own prior to calling a builder
<copumpkin> in my case, being a super narrow build, it didn't do that and failed
<copumpkin> it probably also relies on nscd being turned off
<Dezgeg> yes
<copumpkin> so basically, I'm extremely unlucky :)
<copumpkin> but I'm not imagining things
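The shape of the repro, as a sketch; fetchurlDrv is a hypothetical placeholder for the builtin:fetchurl derivation whose download fails.

```nix
# Forcing a builtins.fetchurl during evaluation makes the parent nix process
# resolve a hostname, which loads glibc's NSS modules before the sandboxed
# child is forked -- so the download inside the sandbox then works. Seq'ing a
# plain value leaves the modules unloaded and the child fails with
# "Couldn't resolve host name".
{ fetchurlDrv }:

{
  works = builtins.seq (builtins.fetchurl "https://google.com") fetchurlDrv;
  fails = builtins.seq 5 fetchurlDrv;
}
```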
<niksnut> copumpkin: hm, I wonder why it's doing DNS resolution. Probably checking the binary cache.
<copumpkin> yeah, or in my case (when I disable substitutes), builtins.fetchurl
<niksnut> ah right
<copumpkin> so the hacky fix would be to just insert a dummy resolution :P
<copumpkin> always
<copumpkin> but that doesn't feel ideal
<copumpkin> maybe always insert nix's glibc into the sandbox?
<copumpkin> I think the key issue is that if you're using a builtin builder, you could in theory use all of nix's own closure
<copumpkin> in practice builtin:fetchurl is the only one and that doesn't do much
<niksnut> the whole point of builtins:fetchurl was that it wouldn't require any sandbox configuration
<copumpkin> but if someone were to add another builtin builder, that might be very confusing if it used any more of nix's dependencies
<niksnut> we may as well get rid of it if that's not the case
<copumpkin> well, I just mean in theory, if builder.isBuiltin then sandbox += nix.closure
<copumpkin> hmm
<niksnut> there is no nix closure necessarily
<copumpkin> oh, because it might be installed in /usr
<niksnut> right, or another nix store
<copumpkin> hmm
<niksnut> but maybe we can just force nss modules to be loaded
<niksnut> in the parent
MichaelRaskin has joined #nixos-dev
<copumpkin> yeah, that might be easiest
ma27 has quit [Ping timeout: 265 seconds]
<copumpkin> nastiest solution of all: whenever the evaluator evaluates something, it implicitly prepends "builtins.seq (builtins.fetchurl ...)"
<copumpkin> :P
jtojnar has joined #nixos-dev
FRidh2 has quit [Quit: Konversation terminated!]
<copumpkin> niksnut: still curious about the other thing I was asking the other day: why is builtin:fetchurl even run in a child if it's purely nix code? it seems like we can trust that it'll do the right thing
michaelpj_ has joined #nixos-dev
<copumpkin> can't it just enqueue a download in the parent process and be done with it?
<copumpkin> that would avoid this issue and maybe simplify the whole thing
pie__ has quit [Ping timeout: 240 seconds]
<copumpkin> niksnut: I don't see anything in the NSS machinery to preload stuff, except by actually performing a resolution. I could ask gethostbyname("nixos.org") at startup and not pay attention to the result, I suppose :P
<dtz> aw man I remember doing that WAY back to get some stuff working on iOS lol
<dtz> http://www.saurik.com/id/3 <-- I'm the "Will Dietz" referred to lol
<dtz> oh I suppose I just issued dummy HTTP requests, not dummy gethostbyname() calls, lol
<dtz> anyway really can't imagine it's the same but :D
<copumpkin> nice! fellow ex-jailbreakers unite
<LnL> niksnut: copumpkin: gchristensen: most of the macs are idle or stuck
<gchristensen> ack
<gchristensen> is mac1?
<LnL> it's idle, don't know why
<LnL> 4-8 are all stuck on sending inputs
ma27 has joined #nixos-dev
<copumpkin> ooh niksnut nss_load_all_libraries might be good
<copumpkin> I wonder if we're supposed to call that though
<copumpkin> technically I don't think we are, but how bad is it if we do?
<gchristensen> LnL: I did some work on mac1, let's see if that gets better. back shortly to apply it to the rest.
<Dezgeg> nss_load_all_libraries is static
<copumpkin> yeah seeing that now
<copumpkin> sigh
<copumpkin> as is nss_load_library
<copumpkin> __nss_disable_nscd could work if we went and undid all the global variables it set
<copumpkin> super hacky though
<Dezgeg> I think you really just need to do the dummy hostname lookup
<copumpkin> probably easier just to resolve something and let NSS do its thing
<Dezgeg> it's not like builtin:fetchurl is going to be very common
<copumpkin> niksnut: that isn't beautiful, but would you accept it?
<copumpkin> it seems like not sticking builtin:fetchurl into a child might be a cleaner long-term solution
<copumpkin> or if we do, just disabling its sandbox
<niksnut> copumpkin: well, I thought about doing builtin:fetchurl in a thread, but we would have to be careful about filtering out file:// etc.
<niksnut> in general, running it as root is not a great idea
<copumpkin> ah
<copumpkin> I see
<copumpkin> that makes sense, I guess
<copumpkin> so I'll just resolve nixos.org early on and ignore the output?
<copumpkin> can't think of a nicer way to do this that isn't horribly fragile
<niksnut> just download http://invalid.invalid
<copumpkin> you mean enqueue a download of it using the downloader on the parent process?
<copumpkin> I guess invalid to force it to cycle through all possible host resolvers?
<niksnut> try { getDownload()->download("http://invalid.invalid"); } catch (...) { }
<niksnut> and do it in build.cc, not at startup
<copumpkin> like at the top of startBuilder()?
<copumpkin> gimme a bit to adjust my setup to use a custom Nix
<niksnut> yeah
<niksnut> conditional even on isBuiltin()
<niksnut> and wrapped in std::call_once
<copumpkin> cool, will try
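Put together, the workaround being discussed has roughly this shape; a sketch only, with getDownloader()/DownloadRequest approximating the downloader internals named above rather than the exact patch that landed.

```c++
#include <mutex>

// Rough sketch of the DNS-preload workaround: have the parent process attempt
// one throwaway lookup so glibc loads its NSS modules before the sandboxed
// child is forked.
static void preloadNSS()
{
    static std::once_flag dns_resolve_flag;
    std::call_once(dns_resolve_flag, []() {
        try {
            // ".invalid" is reserved (RFC 2606), so this can never resolve;
            // only the side effect of loading libnss_*.so matters.
            DownloadRequest request("http://this.pre-initializes.the.dns.resolvers.invalid");
            request.tries = 1;  // no point retrying a lookup that must fail
            getDownloader()->download(request);
        } catch (...) { }
    });
}

// Called from startBuilder() in build.cc, conditional on the derivation using
// a builtin builder, per the discussion above.
```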
<gchristensen> it sort of looks like mac1 is trying to ask permission to do something, like install updates and reboot
<copumpkin> niksnut: running a test with https://github.com/copumpkin/nix/commit/7ca2bd74e1c25a77a50a39dee15404518b194736 now, hold your breath :P
<copumpkin> niksnut: I think that worked, but now I just need to get it to shut up about being unable to resolve the invalid URL :P
<copumpkin> can I suppress those warnings?
<copumpkin> also, it retries a few times, which is probably unnecessary
<niksnut> copumpkin: maybe put that in a separate function, like preloadNSS()
<niksnut> I think you can specify a retry count
<niksnut> in request
<copumpkin> yeah, I'm setting .tries = 1
<copumpkin> but the warning is a bit less clear
<niksnut> maybe request needs a flag to suppress warnings
<copumpkin> I proposed calling it `bool stfu` in the PR :P
<copumpkin> feels a bit awkward but I can do that if you think it's right
<copumpkin> (obviously kidding about the name)
<niksnut> where does the warning come from anyway?
<copumpkin> so I get
<copumpkin> warning: unable to download 'http://this.pre-initializes.the.dns.resolvers.invalid': Couldn't resolve host name (6); retrying in 285 ms
<niksnut> well, if tries = 1 it shouldn't print that
<copumpkin> true
<copumpkin> gonna run another test to make sure it's quiet
<copumpkin> niksnut: looks good and quiet now :D
<copumpkin> I suppose you might prefer the once_flag being a static variable inside preloadNSS?
<niksnut> copumpkin: thanks!
<copumpkin> thank you (and dezgeg) for all the help tracking it down :)
<copumpkin> I feel very important to this project: it's essential to have someone very unlucky bug testing stuff :P
<copumpkin> I'm gonna trigger a hydra nix eval and then update nixUnstable in nixpkgs
<gchristensen> <3 copumpkin
<gchristensen> <3 niksnut
<copumpkin> niksnut: I think nix-build --hash might have broken
<copumpkin> thanks gchristensen :)
<copumpkin> niksnut: if I run the example from your original --hash buildmode commit, I now get:
<copumpkin> output path ‘/nix/store/wyrj496mdypcqxik1fh95fjjfmma5yir-nix’ has r:sha256 hash ‘11clfc8fh8q8s3k4canmn36xhh3zcl2zd8wwddp4pdvdal16b5n6’ when ‘0fffffffffffffffffffffffffffffffffffffffffffffffffff’ was expected
<copumpkin> (I had to replace the first f with a 0 to get it to pass the validation)
<copumpkin> oh maybe it's just because my daemon is still 1.11
michaelpj_ has quit [Ping timeout: 240 seconds]
<copumpkin> is setting `hashed-mirrors =` in nix.conf sufficient to disable hashed mirrors everywhere? (for builtins and nixpkgs fetchurl?)
<copumpkin> I think so, right?
<copumpkin> it seems like the nixpkgs fetchurl supports a NIX_HASHED_MIRRORS but nothing in nix sets it
<copumpkin> oh I see, mirrors.nix sets hashedMirrors :(
ma27 has quit [Ping timeout: 276 seconds]
<copumpkin> doesn't let me set it to an empty list
<copumpkin> oh, I can set it to a single space
<LnL> is that also used by fetchurl?
<LnL> or just the builtin
<copumpkin> NIX_HASHED_MIRRORS is used by nixpkgs fetchurl, and hashed-mirrors in nix.conf is used only by the builtin
<LnL> I see, that's why it wasn't working
MichaelRaskin has quit [Ping timeout: 248 seconds]
<copumpkin> niksnut: maybe it makes sense to put NIX_HASHED_MIRRORS purely into nix now (so that the hashed-mirrors option controls it), instead of hardcoding it in mirrors.nix in nixpkgs?
MichaelRaskin has joined #nixos-dev
<copumpkin> that way we can stop using the impure env var in fetchurl
JosW has quit [Quit: Konversation terminated!]
infinisil has quit [Quit: ZNC 1.6.5 - http://znc.in]
infinisil has joined #nixos-dev
infinisil has quit [Quit: ZNC 1.6.5 - http://znc.in]
infinisil has joined #nixos-dev
<MichaelRaskin> Well, whatever you do there is always proxy…
la_putin has joined #nixos-dev
<contrapumpkin> oh I just mean for NIX_HASHED_MIRRORS
<contrapumpkin> not all the impure env vars
<contrapumpkin> it's just awkward to have this defined in a bunch of different places if you want to turn it off
<ekleog> Hmm... does anyone know whether there is a dirNameOf equivalent to baseNameOf? I can't find any :/
<LnL> builtins.dirOf
<ekleog> oh, thanks!
* ekleog feels stupid right now
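For reference, the pair behaves like this:

```nix
# builtins.dirOf is the counterpart to baseNameOf:
{
  dir  = builtins.dirOf "/foo/bar/baz.txt";  # "/foo/bar"
  base = baseNameOf "/foo/bar/baz.txt";      # "baz.txt"
}
```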
<contrapumpkin> the hashed mirrors thing is hiding a lot of bad downloads now
<LnL> I bet, didn't even know about that until recently
<gchristensen> yikes
<gchristensen> you mean b/c of tarballs.nixos.org?
<gchristensen> that is why we have it :$
<contrapumpkin> it sort of diminishes accountability though, too
<gchristensen> accountability of upstreams?
<contrapumpkin> of our packages
<contrapumpkin> because there's not necessarily any trace left anywhere but nixos infra of where their inputs came from
<contrapumpkin> it's much easier to trust package X if it just grabs source from *.gnu.org and builds it with a small script
<contrapumpkin> than if it fetches something from tarballs.nixos.org/sha256/kljkljklawjfkelwjfwklejfkal that ostensibly came from gnu.org but doesn't match anything on there
* contrapumpkin shrugs
<contrapumpkin> it's a bit of a trade-off I guess
<contrapumpkin> maybe we should have a fall-back with a warning
<gchristensen> I suppose I see where you're coming from, but there is real value in the mirror in terms of being able to maintain the distro, and if you don't trust tarballs.nixos.org to contain valid data then I suspect there are deeper problems
<contrapumpkin> "original source didn't work, falling back to hashed mirror" or something
<gchristensen> that would be great
infinisil has quit [Quit: ZNC 1.6.5 - http://znc.in]
<LnL> how does that actually get populated?
<contrapumpkin> it's also less clear to me what value hashed mirrors add over binary substitutes
<gchristensen> hmm maybe not deeper problems if you don't trust anything produced by NixOS other than a given copy of nixpkgs and a given nix...
<contrapumpkin> since we now seem to have two content-addressable substitution mechanisms
infinisil has joined #nixos-dev
<gchristensen> there is a mirror-tarballs script somewhere
<gchristensen> but I'm not sure if / when it is run
<gchristensen> are FO outputs in the cache stable when the tools to fetch them change?
<contrapumpkin> yeah
<LnL> also things like changing the name
<contrapumpkin> shot an email to nix-devel
<contrapumpkin> will see
<LnL> I was actually looking for some kind of content addressable fetch helper when I first came across it
<LnL> was expecting something like fetchSha256 "..." for bootstrap tarballs
infinisil has quit [Quit: ZNC 1.6.5 - http://znc.in]
infinisil has joined #nixos-dev
<MichaelRaskin> contrapumpkin: technically, a hashed mirror can work with a non-standard store path.
<contrapumpkin> fair enough, but that seems somewhat far-fetched and depends on eelco remembering to populate it
<contrapumpkin> or rather, fairly niche use case
Sonarpulse has quit [Ping timeout: 248 seconds]