LnL has quit [(Quit: exit 1)]
LnL has joined #nixos-dev
mbrgm has quit [(Ping timeout: 248 seconds)]
mbrgm has joined #nixos-dev
JosW has joined #nixos-dev
page has quit [(Ping timeout: 260 seconds)]
mbrock has quit [(Ping timeout: 246 seconds)]
mbrock has joined #nixos-dev
MichaelRaskin has quit [(Quit: MichaelRaskin)]
alp has quit [(Ping timeout: 240 seconds)]
alp has joined #nixos-dev
goibhniu has joined #nixos-dev
phreedom has joined #nixos-dev
<fpletz> gchristensen: seems to work now, thanks!
taktoa has joined #nixos-dev
<domenkozar> lol
peti has joined #nixos-dev
<grahamc> It did? Ugh
<grahamc> I hate that box
<domenkozar> grahamc: is that just one box?
<grahamc> T2-3?
<gchristensen> domenkozar: packet-t2-3 is just one box, yeah, it is -3 because I've destroyed 2 other boxes thinking it'd never boot up
<gchristensen> I thought -3 was busted too, but evidently after a night of thinking about it, it came up :P
<gchristensen> they take half of forever to start
<domenkozar> I mean we have only one packet machine?
<gchristensen> well t2-3 and the aarch64 one
<gchristensen> iirc the t2-3 takes 1/3 of the standard build load anyway :P
<domenkozar> :D
taktoa has quit [(Remote host closed the connection)]
<gchristensen> ok y'all (domenkozar, fpletz, ikwildrpepper) how do you want to handle KRACK? should we even try waiting for upstream releases? should we start finding and applying patches? we'll need to send an announcement, and should probably patch 17.03
<gchristensen> lots of commits to hostap unrelated to this change, I'm not sure we'll be able to just apply some patches
<clever> gchristensen: wpa_supplicant would probably also need patching
<gchristensen> definitely
<gchristensen> anything else you can think of?
<clever> i'm still on wep ....
<gchristensen> !!!
<clever> the first router i tried to upgrade to, black-holed all ipv6 traffic
<domenkozar> I thought hostap==wpa_supplicant
<clever> the second one refused to allow traffic between wan and lan
<gchristensen> they're by the same people, but we package them separately
<clever> hostapd is an access point (the router side of things)
<clever> wpa_supplicant is the client
<domenkozar> yeah we need client patching
<domenkozar> mostly :)
<domenkozar> since most APs won't be patched
<clever> and not many APs run nixos
<gchristensen> we need both
<domenkozar> I mean client patching is 99% of users
<gchristensen> yeah
<gchristensen> I'm biased, my AP will be patched as soon as we patch it
<gchristensen> ok there are patches available
<gchristensen> http://w1.fi/security/2017-1/ I'm working on patches from this
<clever> it would also help to have a working exploit, to confirm it's actually fixed
<gchristensen> yeah
<clever> and we already have an issue and PR open to fix it
<gchristensen> I have my branch 1min from being ready to push, but it uses fetchpatch
<gchristensen> should we teach this contributor fetchpatch, or use mine? :)
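For anyone following along, the fetchpatch approach looks roughly like this — a hedged sketch, not the actual branch; the patch filename is one of the names published under the w1.fi advisory, but the hash and the override shape here are placeholders:

```nix
# Hypothetical sketch: pinning one of the w1.fi 2017-1 advisory patches
# onto wpa_supplicant with fetchpatch. The real branch may name different
# files; the sha256 below is a placeholder, not a real hash.
wpa_supplicant.overrideAttrs (old: {
  patches = (old.patches or []) ++ [
    (fetchpatch {
      url = "http://w1.fi/security/2017-1/rebased-v2.6-0001-hostapd-Avoid-key-reinstallation-in-FT-handshake.patch";
      sha256 = "0000000000000000000000000000000000000000000000000000";
    })
  ];
})
```

fetchpatch (unlike fetchurl) normalizes the patch before hashing, which is why it is preferred for patches fetched from the web.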
<fpletz> gchristensen: whatever you prefer :)
<clever> at a glance, i think it's just spoofing some AP packets during the key negotiation, and telling the client to use a different key?
<gchristensen> fpletz: can you write a notice?
<clever> and it must also spoof some client packets, to get the AP on the same key
<clever> and then a 3rd party knows what key the session is using
__Sander__ has joined #nixos-dev
<fpletz> gchristensen: shouldn't we wait for a channel release first? :)
<gchristensen> sorry I just mean write, not send :)
<gchristensen> if not, no worries, I can
<fpletz> but that will probably take a few days anyway… those damn tests :(
<gchristensen> :(
<domenkozar> did we bump the eval?
<gchristensen> I did for 17.03
<domenkozar> we need checklist for security patches :)
<gchristensen> I did for 17.09 I mean
<gchristensen> checklist -> yes!
<fpletz> domenkozar: I'm regularly watching hydra and restart timed out and aborted tests, and bump the jobs of course
<gchristensen> guh
<fpletz> it's a very tedious process, though, I'm thinking about writing a script :/
<gchristensen> is there a particularly broken host?
<fpletz> depends, the qemu hangs are random issues not tied to a particular test or host
<gchristensen> that's for sure
<fpletz> and wendy is sometimes out of disk space, but that host is only building i686 anyway
<domenkozar> we did find that mkfs.ext4 hangs
<domenkozar> and when you debug it, it works every time
<domenkozar> gotta love heisenbugs
<clever> i'm half considering just opening a PR so it will strace that mkfs 100% of the time
<gchristensen> :o
<clever> this is where it hangs
<clever> i've had it hang 4 times in a row, but the instant i add strace there, it stops hanging
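The strace idea, sketched as a standalone shell snippet — the image path and log path here are arbitrary choices for illustration; in the actual PR this would wrap the mkfs.ext4 call inside the NixOS test driver instead:

```shell
# Sketch: try to reproduce the mkfs.ext4 hang under strace, using a
# throwaway sparse file instead of a real disk.
IMG=/tmp/heisenbug.img
truncate -s 64M "$IMG"   # sparse backing file, no real device needed
if command -v strace >/dev/null && command -v mkfs.ext4 >/dev/null; then
  # -f follows child processes; -o keeps the syscall log for a hung run
  strace -f -o /tmp/mkfs.strace.log mkfs.ext4 -F "$IMG"
else
  echo "strace or mkfs.ext4 not available; skipping"
fi
ls -l "$IMG"
```

The irony, as noted above, is that adding strace tends to make the hang disappear — which at least narrows it down to a timing-sensitive bug.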
<fpletz> gchristensen: sorry, gotta do some work stuff now, I can write up something in the evening though
<gchristensen> ok
<gchristensen> I'll poke hydra periodically
<copumpkin> niksnut: any reason you have the sandbox stuff in .gen.hh rather than (include ) on the scheme side? I've refactored it a bit to use include and think it's a bit nicer, while still working fine
<aminechikhaoui> LnL: niksnut any chance https://github.com/NixOS/nixpkgs/commit/514593ea31d7e67e8efa2f2ff26c9569d508a5ef can be backported to release-17.03 ?
<aminechikhaoui> applying it seems to fix the build of curl with darwin which was broken
<LnL> oh right, I forgot about that
<LnL> I can do it after work
<aminechikhaoui> LnL: thank you
<niksnut> if it's just a cherry-pick, I can do that now
<aminechikhaoui> great niksnut
<LnL> I wouldn't expect any conflicts, the changes are pretty simple
<aminechikhaoui> yeah g++ -> c++ :D
<LnL> curl must have ignored the env variables on darwin until now
<niksnut> done
<aminechikhaoui> thanks niksnut
<copumpkin> niksnut: is there any way to get two binary caches to coexist with a clear priority between them in 1.12 right now? I have a local binary cache that contains many of the same hashes as the public nixos.org one, but the nixos.org one always seems to win
<copumpkin> I think it's because my local one is over s3:// and that appears to be a bit slower at checking file existence
<clever> copumpkin: there is a cache info file in the root directory, that defines a priority
<clever> c2d ~ # cat /media/videos/4tb/nix-cache/nix-cache-info
<copumpkin> yeah I have one
<copumpkin> but the public one still seems to be winning
<clever> the one setup by nix-push lacks a priority
<copumpkin> not using nix-push
<clever> cache.nixos.org has a priority of 40
<clever> what priority is yours at?
<copumpkin> more broadly, it also seems weird to define that as a property of the cache rather than the consumer
<copumpkin> I think I set it to 10
<clever> yeah, that is a bit odd
<copumpkin> maybe I should set WantMassQuery too
<copumpkin> on S3
<copumpkin> putting the StoreDir into that nix-cache-info also seemed odd to me
<copumpkin> The only attribute that makes sense to me on a binary cache is WantMassQuery :P
<clever> i think StoreDir being there prevents you from doing any pointless queries if you wanted /home/clever/nix/store based builds
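For reference, a `nix-cache-info` at the root of a binary cache looks like this — the Priority value here matches the 10 mentioned above (lower wins, and cache.nixos.org ships 40):

```
StoreDir: /nix/store
WantMassQuery: 1
Priority: 10
```

With `WantMassQuery: 1` set, Nix will bulk-query this cache for narinfo files, which also helps an s3:// cache that is slow at per-path existence checks.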
peti has quit [(Quit: WeeChat 1.8)]
peti has joined #nixos-dev
jtojnar has quit [(Quit: jtojnar)]
<gchristensen> fpletz: it doesn't look like t2-3 came back
mingc has quit [(Quit: Ping timeout (120 seconds))]
mingc has joined #nixos-dev
__Sander__ has quit [(Quit: Konversation terminated!)]
goibhniu has quit [(Ping timeout: 255 seconds)]
<shlevy> niksnut: Is there any way to use builtins.fetchgit with ssh URLs?
<shlevy> Getting "not a valid URI" errors :(
Sonarpulse has quit [(Ping timeout: 252 seconds)]
<fpletz> gchristensen: weird, when I messaged you earlier it was building stuff o/
<fpletz> do we have some kind of monitoring?
<shlevy> :o return scheme == "http" || scheme == "https" || scheme == "file" || scheme == "channel" || scheme == "git" || scheme == "s3";
<shlevy> :'(
MichaelRaskin has joined #nixos-dev
Sonarpulse has joined #nixos-dev
<gchristensen> fpletz: I have ~a bit, the regular hydra has more (datadog)
<gchristensen> using ZFS for the / on the packet build server was a mistake, the rescue OS can't run ZFS :(
<gchristensen> I can't work on it until this evening either, unfortunately
Sonarpulse has quit [(Remote host closed the connection)]
<copumpkin> niksnut, shlevy, gchristensen : we should probably talk about how to fix 10.13 multi-user or revert it to single user for now
<gchristensen> yes
<copumpkin> my inclination is just to go with the loopy solution for now
<copumpkin> it's not beautiful but it seems quick and easy to implement
<gchristensen> if you make that, I'll deploy it here and test it a bit
<copumpkin> can probably take a look tonight or tomorrow night at this rate
<cransom> i'll volunteer as tribute as well, if you wanted some more eyes
<copumpkin> I'm also hoping this will be a temporary implementation until we can assume a fix in the kernel
Sonarpulse has joined #nixos-dev
<gchristensen> niksnut: (1) can the AWS machines build tests? (2) can you scale out a bit to get the 17.09 channel built?
<gchristensen> #2 is dependent on #1 being a yes :P
<shlevy> copumpkin: Yeah, agreed on loopy solution, if it works
<copumpkin> thanks cransom might create a ticket about it later
<copumpkin> I'll ping you on it when we get to it
<cransom> copumpkin: sounds good
<niksnut> gchristensen: if you mean VM tests: no
<gchristensen> dang
<niksnut> EC2 doesn't support nested virtualization
<gchristensen> oh duh, that's right
<gchristensen> the packet host self-destructed again :(
<copumpkin> you can have very slow VM tests ;)
<copumpkin> and if you want very slow VM tests, EC2 can also run very slow ARM VM tests :D
<copumpkin> or PPC! there are no limits to our emulation
<gchristensen> I have an updated image about ready to be used, it doesn't use ZFS, so I'd be able to recover it with their rescue OS
<gchristensen> (as noted though, I can't do that until this evening)
<copumpkin> lots of nix discussion on HN btw: https://news.ycombinator.com/item?id=15478209
* gchristensen unblocks the orange site to check it
<shlevy> gchristensen: Hmm I don't see /etc/profile.d/nix-daemon.sh in 1.12, just nix.sh
<shlevy> gchristensen: Is that expected?
<gchristensen> uhh
<gchristensen> maybe I didn't merge that PR
<shlevy> :D
<gchristensen> I might need to go looking into the diffs between the installers and forward-port the changes
<gchristensen> but after I fix this builder issue
<gchristensen> also I need to write / practice my talks for nixcon >.>
<MichaelRaskin> Practice???
<clever> gchristensen: about that packet server, does the rescue OS have kexec?
<gchristensen> clever: it does not, and can't get it because of grsec
<shlevy> gchristensen: Thanks, can you ping me?
<clever> gchristensen: dang!
<gchristensen> clever: I know, right? >.>
<clever> gchristensen: are you able to edit the bootloader config?
<gchristensen> lol no
<MichaelRaskin> And zfs-fuse is too slow?
<gchristensen> but maybe I can can convince them to hack the system's config in the backend as a favor? not sure
<clever> gchristensen: you can't just mount the /boot for the existing install?
<gchristensen> I don't think so
<clever> gchristensen: or does the VM not give you kernel control?
<gchristensen> it is the most tightly secured rescue os I've ever seen
<MichaelRaskin> No fuse???
<clever> gchristensen: how can it be a rescue OS if you can't mount /boot for the os you're trying to rescue?
<gchristensen> I don't have a /boot
<MichaelRaskin> Well, you could nbd it outside and attach it somewhere more friendly…
<gchristensen> oh my word
<MichaelRaskin> nbd-server doesn't need anything special from kernel.
<clever> lol
<MichaelRaskin> Another option is to sigh about the lack of fuse and boot a Qemu inside the rescue system with the kernel you need
<clever> gchristensen: why does the original OS you're trying to fix not have a /boot/, was that part of the zfs filesystem?
<gchristensen> yeah
<MichaelRaskin> Why doesn't rescue have FUSE is a better question…
<gchristensen> well so it is an out-of-date Alpine image
<gchristensen> which is very Minimal and customized for the datacenter's custom hardware
<copumpkin> would be nice if the trusted-binary-caches logic weren't applied to --dry-run operations
<copumpkin> niksnut: is there a way I can test the health of a binary cache with `nix-build --dry-run` and without modifying /etc/nix/nix.conf? If I just pass in `--option binary-caches` that works unless I have a daemon, in which case it complains that the parameter I'm passing in isn't trusted
<copumpkin> but since I'm not actually adding anything to the store, that shouldn't matter
<copumpkin> I guess IFD can make that messier
nixer has joined #nixos-dev
<copumpkin> I wonder if I can ask it to operate on a dummy store with the new store URI stuff
<MichaelRaskin> And then store prefix will mismatch
<MichaelRaskin> Maybe just override NIX_CONF_DIR to point to another nix.conf location?
<copumpkin> the new store URI thing lets you specify the store prefix
* copumpkin tries to remember how
<LnL> I've been wondering if there's a way we could warn people if they run nixos-rebuild with an unsupported channel
<gchristensen> that would be nice
<LnL> maybe we could add a file to the tarball like the .git-revision?
<copumpkin> anyone know if I can pass one of the newfangled store URIs to nix-build in 1.12?
<LnL> with NIX_REMOTE I think
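Both suggestions sketched together — the paths are illustrative, and the nix-build invocations are left as comments since they need a real Nix install (the `local?root=` store URI is the 1.12-era syntax as I understand it, so treat it as an assumption):

```shell
# Sketch: point Nix at a scratch nix.conf instead of /etc/nix/nix.conf.
CONF=/tmp/nix-conf-example
mkdir -p "$CONF"
cat > "$CONF/nix.conf" <<'EOF'
binary-caches = s3://my-private-cache https://cache.nixos.org
EOF
# Then something like (not run here):
#   NIX_CONF_DIR=$CONF nix-build --dry-run release.nix
# and for a throwaway store via the new store-URI support:
#   NIX_REMOTE=local?root=/tmp/dummy-root nix-build --dry-run release.nix
cat "$CONF/nix.conf"
```

The NIX_CONF_DIR route sidesteps the trusted-binary-caches complaint from the daemon, since the whole evaluation runs against your own config; the NIX_REMOTE route keeps the real store untouched entirely.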
<shlevy> gchristensen: Do you have a PR or branch somewhere with nix-daemon.sh for 1.12?
<gchristensen> probably?
<gchristensen> but it might need updating
<LnL> about that, any thoughts on removing the /etc/profile part?
<gchristensen> ok, that is probably fine, but some install code probably needs backporting as well, not certain
<shlevy> gchristensen: I did :)
<shlevy> scripts/local.mk
<gchristensen> I mean the other stuff too
<shlevy> Which stuff?
<shlevy> This doesn't touch the installer
<shlevy> Should it?
<shlevy> :(
<shlevy> Bleh
<shlevy> Maybe I'll just have my team use single user
<gchristensen> is the 1.11 installer + a 1.12 upgrade not ok?
<shlevy> No, it breaks because nix-profile-daemon isn't there :D
<gchristensen> oh
<gchristensen> then that patch is all you need! :)
MichaelRaskin has quit [(Ping timeout: 252 seconds)]
MichaelRaskin has joined #nixos-dev
<Mic92> kerberos support or no kerberos support in curl: https://github.com/NixOS/nixpkgs/pull/29785
<Mic92> (policy discussion)
<gchristensen> "Kerberos implementations are not necessarily interoperable, and the one in gss (which is GNU gss) is very rarely used." ugh
<Mic92> that is the one, we ship in curlFull one
<Mic92> *currently
page has joined #nixos-dev
JosW has quit [(Quit: Konversation terminated!)]
<rycee> LnL: About nixos-rebuild with an unsupported channel, I would suggest something like news.nix from Home Manager. However, while the module itself should be trivial to add, the nixos-rebuild tool would have to be made "news aware"...
<LnL> I was thinking of people using nixpkgs-unstable instead of nixos-* for their system
<LnL> that's based on what tests are run on hydra, not the changes in nixpkgs
<rycee> LnL: Ah, sorry. I misunderstood you to mean a stable channel that is getting retired. I.e, I was imagining a final entry saying "NixOS X.Y is no longer supported, please upgrade to a newer version".
<LnL> that's something we already did with 16.09 IIRC
<LnL> I like the idea of news/changelog for nixos, but I'm not sure how useful it is for stable releases
<rycee> I'm finding it quite useful in HM but even its "stable version" is changing quite a lot.
<rycee> Also, the way it implements the news module requires two evaluations of the configuration, which may not be the best fit for the somewhat more numerous NixOS modules ;-)
<LnL> yeah, same for me I added the changelog when changing some defaults
<LnL> oh, that would be pretty noticeable for nixos
<ekleog> news would be something quite great to state what happens with a change of stateVersion, wouldn't it? (someday people do have to update e.g. postgresql, and at that time having a news telling them what to do to finish the upgrade would be great)
<LnL> yeah indeed
<LnL> I'm using an integer for the stateVersion and have instructions on what to change for each increment
<ekleog> I guess it should go through the RFC process so that everyone would benefit from the same instructions as you do (and you wouldn't have to compile them, module changes would come bundled with them), that said I don't have time to write one up for the time being :)
<ekleog> random pre-sleep idea: what about splitting <nixpkgs/nixos> into a separate repository owned by the NixOS org? I'm not completely clear on what benefits this would bring (something like less downloads for nix-only users against maybe issues with synchronizing changes that require both a package and a module addition?), but it'd sound more semantically correct to me
<ekleog> (this idea arose while thinking about whether nixup and/or home-manager should be “taken over” by the NixOS org once mature enough, though of course that'd be only if developers actually want it -- btw, please hl me if you answer, I'm going afk and likely won't backlog everything :))
<gchristensen> ekleog: it used to be split but it was hard to maintain
<copumpkin> ekleog: see https://github.com/nixos/nixos :)