<clever-afk>
simendsjo: i saw something about that on an issue, one min
2017-03-24
<clever>
praduca: as a guest or host?
<clever>
c74d: you can make it depend on other variables in the current file, like a let block of booleans, but it cant depend on the config argument passed in
<clever>
maurer: nixos doesnt have code for that right now
<clever>
maurer: if you can just run a second pgsql and change the $HYDRA_DBI on hydra, that could do it
<clever>
LnL: yeah, but you can forcibly drop that context via builtins.unsafeDiscardStringContext
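a minimal sketch of what that context-stripping looks like (the `drv` binding here is hypothetical):

```nix
# "${drv}" normally carries string context marking drv as a dependency;
# unsafeDiscardStringContext returns the same text with that context removed,
# so using the result no longer pulls drv into the build closure
let
  drv = (import <nixpkgs> {}).hello;
  plainPath = builtins.unsafeDiscardStringContext "${drv}";
in plainPath
```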
<clever>
maurer: and also, the sandboxes in nix basically create containers on the fly, for every build
<clever>
maurer: so building never happens inside the container
<clever>
maurer: i'm pretty sure the container just connects to /nix/var/nix-daemon/socket and tells the host to do all building
<clever>
maurer: nixos containers already share the /nix/store of the host, so its still doing that anyways
<clever>
maurer: id normally add the user to nix.trustedUsers in configuration.nix, but there can only be 1 nix-daemon, so it may need to go on the host, and now the user namespace is making it a mess
<clever>
maurer: probably, maybe give it ssh to the host?
<clever>
maurer: that should do it
<clever>
maurer: yeah, and if you're using nixos, thats done via configuration.nix, nix.buildMachines i believe
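a hedged sketch of what such a configuration.nix fragment might look like (host name, user, and key path are all placeholders):

```nix
{
  nix.distributedBuilds = true;
  nix.buildMachines = [{
    hostName = "builder.example.com";   # placeholder host
    sshUser  = "builder";               # placeholder user
    sshKey   = "/root/.ssh/id_builder"; # placeholder key path
    system   = "x86_64-linux";
    maxJobs  = 4;
  }];
}
```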
<clever>
maurer: oh, you didnt configure a build slave
<clever>
maurer: and hydra doesnt build on localhost by default
<clever>
maurer: check the journal
<clever>
either look it up, or guess until it works
<clever>
benzrf: checking gist to see if i can find an old answer i had
<clever>
one not found?
<clever>
what does ldd say when run on /run/opengl-driver/lib/i965_dri.so ?
<clever>
but ive sort of run out of not-nixos machines i could test on, lol
<clever>
it could possibly be handled via ~/.nix-profile/lib/ and use profile.d/nix.sh to do something
<clever>
contrapumpkin: the problem is that if you try to handle this at link-time, nix will need to re-compile it for your gpu, and now the binary cache is of no use
<clever>
packages compiled with nix assume that /run/opengl-driver/lib has the libGL files
<clever>
and thats why the directory is just missing
<clever>
benzrf: it lets nixos switch out the ati libGL for an nvidia libGL at runtime, without nix trying to recompile every package under the sun
<clever>
benzrf: so you need to get them from /usr/lib/
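a rough sketch of that workaround on a non-NixOS host (the exact library directory is an assumption and varies by distro and GPU):

```shell
# point nix-built programs at the distro's GL libraries;
# /usr/lib/x86_64-linux-gnu is a Debian/Ubuntu-style path, adjust for your distro
export LD_LIBRARY_PATH=/usr/lib/x86_64-linux-gnu${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
```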
<clever>
benzrf: i think its more important that the version matches the xorg, and chances are nixpkgs will have a different version
<clever>
benzrf: and add it to LD_LIBRARY_PATH
<clever>
benzrf: yeah
<clever>
is it giving a clear linker error saying what wasnt found?
<clever>
they need to be compatible with the xorg side of the gpu drivers
<clever>
benzrf: which libs you put there depend on what gpu you have
<clever>
benzrf: opengl is a bit tricky to get working outside of nixos, it relies on the libGL files being in /run/opengl-driver/lib
<clever>
nixpkgs-channels updates after the build passes on hydra
<clever>
Unode: you want the 16.09 branch on nixpkgs-channels
<clever>
bkchr: so it cant be cast to a string without passing it thru builtins.toJSON or similar
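a small sketch of that (the `srcInfo` attrset here is made up):

```nix
# a set can't be interpolated into a string directly; serialize it first
let
  srcInfo = { url = "https://example.com/src.tar.gz"; rev = "abc123"; }; # hypothetical
in "srcInfo is: ${builtins.toJSON srcInfo}"
```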
<clever>
bkchr: the srcInfo on line 2 is probably a set, not a string
<clever>
bkchr: removing the true on line 5 might fix things
<clever>
bkchr: so that boils down to "let ... in true stdenv.mkDerivation ..."
<clever>
bkchr: you are attempting to run warn (which returns true) on a derivation
<clever>
bkchr: can you paste the line of code you're using?
<clever>
ndowens08: i use it more for development, i can nix-shell, then make && ./foo every time i edit the code, until it does what i want
<clever>
ndowens08: it gives you a shell with the same env variables that nix-build uses for the build, so you can test ./configure && make
<clever>
Yaniel: you can always re-run zsh under nix-shell
<clever>
bkchr: oh, and if you only care about nixos support, you can just cheat: /run/current-system/sw/share/emacs/site-lisp/nix-mode.el
<clever>
bkchr: nix needs to generate a config file that refers to it, and it sounds like emacsWithPackages automates that
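a sketch of that automation, assuming the emacsWithPackages wrapper from nixpkgs:

```nix
# emacsWithPackages generates the startup config that puts
# each listed package (here nix-mode) on emacs's load-path
let pkgs = import <nixpkgs> {};
in pkgs.emacsWithPackages (epkgs: [ epkgs.nix-mode ])
```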
<clever>
bkchr: eval "${pkgs.nix}/share/emacs/site-lisp/nix-mode.el" in nix to get the path
<clever>
bkchr: it prints the 1st argument, then returns the 2nd argument
<clever>
nix-repl> builtins.trace "print" "value"
<clever>
trace: print
<clever>
"value"
<clever>
bkchr: builtins.trace
<clever>
Unode: just keep in mind, the $out path depends on the combination of nixpkgs you used, so it will try to rebuild it if you change that set of nixpkgs
<clever>
Unode: (import /home/clever/nixpkgs {}).vmTools will give you the vmTools attributeset from a different nixpkgs, you can then run that on a different one
<clever>
Unode: yeah
<clever>
then to try and force it to use a specific storepath
<clever>
its usually better to just pick a nixpkgs you know works
<clever>
if the nixpkgs matches up, then it will just reuse it
<clever>
if you wanted to use a different vmTools, you would have to do something like (import /home/clever/nixpkgs {}).vmTools
<clever>
then use the non-overridden version for the system
<clever>
Unode: you could .overrideDerivation the libxml2 for runInLinuxVM to disable testing
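roughly like this (whether runInLinuxVM picks up the override this way is an assumption):

```nix
# build libxml2 with its test suite disabled
let
  pkgs = import <nixpkgs> {};
  libxml2NoCheck = pkgs.libxml2.overrideDerivation (old: {
    doCheck = false;
  });
in libxml2NoCheck
```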
<clever>
spacekitteh: getting late here, i'm heading off to bed, good luck :)
<clever>
spacekitteh: though that doesnt match up right, maybe a file got changed and i didnt notice?
<clever>
spacekitteh: the repo program appears to be reading some git config values, no error yet
<clever>
spacekitteh: that appears to be just one of the log files
<clever>
the output should tell you what that path is
<clever>
nix leaves $out after the failure
<clever>
so the logs are in $out
<clever>
line 11, you did cd $out
<clever>
oh
<clever>
-K makes it save the dir after failing
<clever>
somewhere near /tmp/nix-build-foo-0/
<clever>
what about the strace output?
<clever>
you could also add an echo before that, to see how nix parses it
<clever>
they look fine to me
<clever>
and then build with -K
<clever>
id throw some strace at it next, strace -o logfiles -ff repo init ...
<clever>
strange, but it sounds like network is on now
<clever>
just throw in an invalid sha256, and it will grant you network
<clever>
spacekitteh: i think the problem is that you didnt set the sha256, so its not a fixed-output derivation
<clever>
spacekitteh: yeah, everything lines up right, let me double-check some other things
<clever>
the <name> is also important to tracking it down
<clever>
default
<clever>
spacekitteh: can you gist more of the error, the entire console output?
<clever>
spacekitteh: that isnt the same url as in test.nix, is it the same derivation?
<clever>
spacekitteh: and what is the error it gives?
<clever>
spacekitteh: line 2 of test.nix can be done more easily with callPackage
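for comparison, a callPackage-based sketch (assuming test.nix is a function taking package arguments):

```nix
# callPackage imports ./test.nix and fills in its arguments from pkgs automatically
let pkgs = import <nixpkgs> {};
in pkgs.callPackage ./test.nix { }
```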
<clever>
bobthejanitor: how are you checking what its outputting?
<clever>
spacekitteh: can you gist your example and the error?
<clever>
spacekitteh: how you set the url it downloads is your choice
<clever>
spacekitteh: just an outputHash and outputHashAlgo, thats enough to enable the network
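a minimal fixed-output derivation sketch (url and hash are placeholders; a wrong hash still grants network, the build just fails at the end and prints the real hash):

```nix
# outputHash + outputHashAlgo make this a fixed-output derivation,
# which is allowed network access inside the sandbox
{ stdenv, curl }:
stdenv.mkDerivation {
  name = "example-fetch";
  buildInputs = [ curl ];
  buildCommand = ''
    curl -L https://example.com/file.tar.gz > $out   # placeholder url
  '';
  outputHashAlgo = "sha256";
  outputHashMode = "flat";
  outputHash = "0000000000000000000000000000000000000000000000000000"; # placeholder
}
```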
<clever>
spacekitteh: thats purely up to you
<clever>
dtz: the main issues ive run into with the kexec stuff are ensuring the network comes online right both times (kexec and the install), and that the install goes 100% perfect, any mistake and you cant recover
<clever>
spacekitteh: but otherwise, it should just work on any distro with a linux kernel
<clever>
spacekitteh: in one recent test of the kexec stuff, i discovered that kexec doesnt appear to work right when under xen's dom0
<clever>
you will want to "systemctl stop autoreboot.timer" to stop it from interrupting you
<clever>
and if you didnt configure the ssh keys right, it will reboot itself at the end of the hour, and the previous OS boots like nothing happened
<clever>
spacekitteh: if you configure the ssh keys right, you can then ssh in, format, and nixos-install like normal
<clever>
spacekitteh: this generates a tarball, unpack it to / on any machine, run /kexec_nixos, and within 2 minutes, it will be running nixos from ram
<clever>
spacekitteh: i would go with either stdenv.mkDerivation with buildCommand, or runCommand
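both forms, sketched (the script body is just an example):

```nix
let pkgs = import <nixpkgs> {};
in pkgs.runCommand "example" {} ''
  echo hello > $out
''
```

which is roughly equivalent to `pkgs.stdenv.mkDerivation { name = "example"; buildCommand = ''echo hello > $out''; }`; in both cases the stdenv setup script is sourced for you, unlike with a raw `builder` attribute.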
<clever>
spacekitteh: buildCommand results in setup being sourced for you
<clever>
spacekitteh: you almost always want to set buildCommand, not builder
<clever>
bbl
<clever>
Ralith: but you can freely set min and max on the cache
<clever>
Ralith: my issue with zfs is the opposite, it drops the entire cache at the slightest sign of memory load, then perf suffers
<clever>
brb
<clever>
spacekitteh: if the sandbox is enabled, all network will be blocked inside normal derivations
<clever>
ndowens08: as long as the gui opens when you run bitcoin-qt, it should be good
<clever>
Ralith: so you dont have to set a hard-limit on the size of each fs, but you can configure them differently
<clever>
Ralith: ive been going zfs for all of my new installs, each filesystem has its own config, but they all share the same zfs pool (the raw partition)
2017-03-22
<clever>
ndowens08: heading out now
<clever>
the let block also gets rid of the need for rec
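sketched with a hypothetical version number:

```nix
# version bound in a let block: no rec needed, and it's obvious that
# overriding the attrset won't change it
let
  version = "0.14.0";  # hypothetical
in stdenv.mkDerivation {
  name = "bitcoin-${version}";
  # src, buildInputs, etc. unchanged
}
```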
<clever>
having it in a let block avoids polluting the env some, and makes it more clear
<clever>
ive seen a few users try to overrideDerivation the version, then ask why it didnt work
<clever>
also, i find putting version into the attrset misleading
<clever>
ndowens08: oh yeah, i was going to bump the bitcoin version from 0.13 to 0.14, but if you're in the area already
<clever>
dhess: i did see some code in nixops to automatically provision S3 buckets and RDS databases, but if you're not using that, those roles could just be omitted
<clever>
maurer: yeah, and set targetEnv="none"; somewhere in the nixops config
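a hedged sketch of such a nixops expression (the host address is a placeholder):

```nix
{
  machine = { config, pkgs, ... }: {
    deployment.targetEnv  = "none";       # use an already-provisioned machine
    deployment.targetHost = "192.0.2.10"; # placeholder address
  };
}
```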
<clever>
might be something from before the change
<clever>
Yaniel: i believe it will be fixed soon
<clever>
Yaniel: it used to, but that got broken by the recent changes to push all logs to S3
<clever>
ocharles: not sure what else it could be, but maybe something isnt setting $PATH right
<clever>
ocharles: nixos-version is the current-system, not the booted system
<clever>
ocharles: that trailing slash was actually a typo on my end, heh
<clever>
ocharles: strange
<clever>
ocharles: what does ls -l /run/booted-system/ say?
<clever>
ocharles: /run/wrappers/bin should be in $PATH
<clever>
ocharles: what is $PATH set to?
<clever>
amosbird: yeah
<clever>
ocharles: you cant switch from 16.09 to 17.03 online, it needs a reboot
<clever>
amosbird: nix-serve will turn a machine into a binary cache, so others can just query from it on-demand
<clever>
amosbird: nix-serve and nix-copy-closure are 2 options
<clever>
in the above example, you can fix it via ./foo + ("/" + bar)
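both spellings side by side (bar is a made-up binding):

```nix
let bar = "default.nix";
in {
  wrong = ./foo + "/" + bar;   # the "/" is normalized away after the first +
  right = ./foo + ("/" + bar); # the string is joined first, so the "/" survives
}
```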
<clever>
ronny, domenkozar: every time you append a string to a path, it parses it, and strips unneeded elements, so ./foo + "/" + bar will strip the / in the middle
<clever>
and binary caches already use public/private key pairs
<clever>
if you use a .local domain for the cache, avahi would handle that for you
<clever>
ronny: and also nix-serve
<clever>
it might already have a back door when installed by nix
<clever>
yeah, thats another matter
<clever>
nix cant tell the difference between a corrupt file, and a file that has been tampered with, they are just both coming up with the wrong hash
<clever>
the above commands can also find malicious tampering with executables
<clever>
dtluna: and nix-store --verify --check-contents --repair should try to repair it
<clever>
nix-store --verify --check-contents will check everything for possible damage and list whats broken
<clever>
dtluna: its also possible this is the result of an improper shutdown after nix-channel --update
<clever>
dtluna: then your hdd is corrupted, give it an fsck pass and nix-store --verify --check-contents
<clever>
smw: finding out which file provides el1_irq will explain the panic more
<clever>
dtluna: can you pastebin /nix/store/9bfwq8ad5z3p4b0j2mimxgcal28a6q6c-nixos-16.09.1836.067e66a/nixos/pkgs/applications/misc/keepassx/default.nix ?
<clever>
fRoping: they are also done from your profile
2017-03-21
<clever>
gchristensen: weird, wasnt expecting fetchpatch to do such things
<clever>
avn: ah, yeah, that sounds harder to track down
<clever>
avn: find -inum can search by inode number, and all things in the hardlink share an inode
<clever>
avn: i just assumed grub cant deal with zfs and always went with ext4 /boot
<clever>
smw: at one time, it was used to plug cameras into PDA's and such
<clever>
smw: its a general-purpose bus (like usb) but over the SD card interface
<clever>
smw: i believe the wifi is over sdio on the rpi3
<clever>
ive also heard, that the metadata in zfs always uses 128 bit ints, and it takes advantage of metadata compression to hide the cost
<clever>
viric: ive recently been informed of lz4's insane byte/sec rates
<clever>
and you can freely change the recordsize
<clever>
viric: when using gzip, you can freely scale from -1 to -9, the other algos dont have those options
<clever>
viric: what about seek+write?, that will change the size of a compressed block, and it may not be able to insert it at the right spot
<clever>
avn: the desktop has 4 swap partitions, spread over several SSD's
<clever>
avn: the laptop is zfs on lvm on luks, so i can put the swap in lvm and share the luks device
<clever>
avn: and for that reason, i never put swap on zfs, not even a zvol
<clever>
avn: but i have a laptop with 3gig of ram, that has never locked up, it just goes into swap-hell under over-use
<clever>
avn: the 16gig machine is the one with the problem, and it happens any time chromium is using a decent amount of ram
<clever>
gchristensen: ive found that defaulting the input chain to accept is a mild security problem, while reloading the rules, you are open to any packet
<clever>
avn: the low-ram systems have never had that problem, and always go into swap-hell, and eventually recover
<clever>
avn: it does tend to happen under high memory usage, but only on the machine with the most ram, heh
<clever>
but i suspect it might be hardware related, i use zfs on 5 machines, and only 1 does it
<clever>
avn: my main issue with zfs is not just machine-killing lag, but entirely locking up
<clever>
then it has to rebuild both arcs at bootup, via normal on-demand reads
<clever>
so the L2ARC is essentially lost upon reboot
<clever>
also, the L2ARC is basically just a dedicated swap for the main ARC
<clever>
viric: qcow compression is a bit simpler, because its already treating it as a block device, there are more logical places to do the compression blocks, though you still need an index to allow seeking
<clever>
joepie91: so any modifications will be caught
<clever>
joepie91: the url is a hash of the contents, and nix will always check the hash of fixed-output things like fetchurl