2017-08-09

<clever> #2 set dontPatchELF = true; in the derivation
<clever> MP2E: this will just mess with the ELF headers to make the library load way ahead of time (might break things in weird ways)
<clever> #1 patchelf --add-needed LIBRARY
<clever> 3 options i can think of to fix it
<clever> that can remove paths you have added for use by dlopen
<clever> MP2E: by default, the stdenv will strip things from the rpath if ldd says they arent needed
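A minimal sketch of options #1 and #2 inside a derivation; `mylib` here is a hypothetical dependency providing `libmylib.so`, and the rpath juggling is illustrative, not the exact incantation:

```nix
# Sketch only: keep rpath entries the fixup would strip, and
# force-link an extra library so it loads ahead of any dlopen.
{ stdenv, patchelf, mylib }:
stdenv.mkDerivation {
  name = "example";
  src = ./.;
  nativeBuildInputs = [ patchelf ];
  # option #2: skip the fixup that strips "unneeded" rpath entries
  dontPatchELF = true;
  # option #1: add a NEEDED entry so the loader pulls it in eagerly
  postFixup = ''
    patchelf --add-needed libmylib.so \
      --set-rpath "$(patchelf --print-rpath $out/bin/example):${mylib}/lib" \
      $out/bin/example
  '';
}
```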

2017-08-08

<clever> Infinisil: ah, yeah, its showing outages now
<clever> Infinisil: #letsencrypt's statusbot is mentioning outages
<clever> the default.nix in that dir specifies which branch, on line 18
<clever> that builds a subset of the hydra jobs for a branch i had a PR for
<clever> pikajude: i also randomly add projects for my nixpkgs PR's like https://github.com/cleverca22/hydra-configs/blob/master/haskellPackages/release.nix
<clever> thats part of why i put the hydra config for all of my projects into a single central repo
<clever> which might have become builtins
<clever> pikajude: hmmm, the only thing i'm using from nixpkgs right now is listToAttrs and mapAttrsToList
<clever> pikajude: though we dont even need a nixpkgs at all, it could be done entirely with map and builtins
<clever> pikajude: i think this one gets most of it: https://github.com/cleverca22/hydra-configs/blob/master/lib.nix#L13
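The shape of that pure-builtins mapping could be sketched roughly like this (a hypothetical structure, not the actual lib.nix; the `head.repo.clone_url`/`head.ref` fields follow the github PR JSON, and the resulting attrset would then go through `builtins.toJSON` in the jobsets build):

```nix
# Sketch: turn hydra's PR JSON into a jobset attrset using only builtins.
{ pulls }:   # path to the JSON file hydra passes in
let
  prs = builtins.fromJSON (builtins.readFile pulls);
  mkJobset = num: pr: {
    name = "pr-${num}";   # keys of the PR JSON are the PR numbers
    value = {
      enabled = 1;
      nixexprinput = "src";
      nixexprpath = "release.nix";
      inputs.src = {
        type = "git";
        value = "${pr.head.repo.clone_url} ${pr.head.ref}";
      };
    };
  };
in builtins.listToAttrs (builtins.attrValues (builtins.mapAttrs mkJobset prs))
```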
<clever> brb
<clever> the jq -S i added on line 58 of the plugin sorts the json, and now hydra only runs the job when things change
<clever> LnL: which causes hydra to re-run the declarative nix job every minute, even when no data has changed
<clever> LnL: and on june 21st, i found a bug in the encode_json function of perl, the keys are in a random order
<clever> it was added on may 24th
<clever> so whichever plugin returns data first (line 62) claims control of that input
<clever> the plugin must return undefined to say its not handling it
<clever> any hydra plugin implementing a fetchInput function is run, for every input hydra tries to fetch, on every eval
<clever> LnL: the 3rd input on here grabs the PR data every 60 seconds: https://hydra.angeldsis.com/jobset/toxvpn/.jobsets#tabs-configuration
<clever> its fully working
<clever> thats the type "githubpulls" in the spec.json
<clever> when both halves are put together, hydra will create a new jobset for every PR, and post the status back to the commit that is at the tip of the branch
<clever> it uses the current revision of the input named on line 64 to know which github and which rev
<clever> any job matching that regex gets posted to github as a status
<clever> LnL: and line 62 contains a piece of regex, that must match the project:jobset:job string
<clever> LnL: the github token on line 60 needs repo:status to post the travis checks, and its also used to greatly increase the ratelimiting for the PR data
<clever> LnL: for the travis like status checks, thats done entirely inside hydra.conf
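That hydra.conf block looks roughly like this (option names from memory and may differ by hydra version; the token setup is elided here):

```
<githubstatus>
  # regex matched against the project:jobset:job string;
  # any job it matches gets posted to github as a status
  jobs = toxvpn:pr-.*:toxvpn
  # the input whose current revision tells hydra which github
  # repo and which commit to post the status to
  inputs = src
</githubstatus>
```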
<clever> and i supply a default, so i can test that logic with nix-build
<clever> so the nix file named on line 6 gets passed a PR list as the pulls argument
<clever> LnL: this is where that happens
<clever> at the same time it polls github for changes to given branches
<clever> for this half of it, adding a build input of type "githubpulls" causes hydra to query the PR list every time its checking for changes to the inputs
<clever> LnL: that jobset must contain a single build called jobsets, which returns more json, a list of the same structure, which will configure all other jobsets
<clever> LnL: hydra will fetch that json, and use it to create a jobset called .jobsets
<clever> LnL: you setup the url to a repo like this, and give it a relative path like "toxvpn/spec.json" (very similar to setting the release.nix stuff)
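A spec.json for that looks roughly like this (field names from memory of hydra's declarative jobset format, values made up; the `pulls` input is the type "githubpulls" one, and `jobsets.nix` is the file that receives the PR list):

```json
{
  "enabled": 1,
  "hidden": true,
  "description": "declarative jobsets",
  "nixexprinput": "src",
  "nixexprpath": "jobsets.nix",
  "checkinterval": 300,
  "schedulingshares": 100,
  "enableemail": false,
  "emailoverride": "",
  "keepnr": 3,
  "inputs": {
    "src":    { "type": "git", "value": "https://github.com/cleverca22/hydra-configs", "emailresponsible": false },
    "nixpkgs":{ "type": "git", "value": "https://github.com/NixOS/nixpkgs nixos-unstable", "emailresponsible": false },
    "pulls":  { "type": "githubpulls", "value": "cleverca22 toxvpn", "emailresponsible": false }
  }
}
```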
<clever> infinisil: i think the expense summary is only for prs.nix.gsc.io?
<clever> infinisil: a painful amount :P
<clever> LnL: same to view the review statuses
<clever> LnL: for anything more complex like a special word a certain user has to say, it would need some changes to the perl in hydra
<clever> LnL: that could then easily be implemented with the nix logic i have in my hydra-configs repo
<clever> LnL: of note, the milestone and assignees appear to be available in the json, so you could have a "magic" milestone that hydra will filter the PR's on
<clever> LnL: here is an archive of the json for a PR on my toxvpn project: https://github.com/cleverca22/hydra-configs/blob/master/sample-pr.json
<clever> gnuhurd, sphalerite: currently, all services have to be rewritten by hand, it cant translate the existing systemd.services definitions over
<clever> LnL: i have since set it up on 2 hydras and helped a 3rd person set it up
<clever> LnL: and that whole process is done inside a pure nix build
<clever> LnL: and hydra will then re-configure itself
<clever> LnL: using the declarative jobset stuff, hydra can pass you some JSON describing all PR's, you must then return some JSON describing all jobsets
<clever> brb
<clever> grahamc: yeah, i previously said that assuming nix and the sandbox logic is 100% perfect, the only other threats i can think of are people abusing cache.nixos.org to host content we dont want to host
<clever> if we ignore sandbox/kernel level exploits
<clever> grahamc: such as?, the only things i can think of are weird network requests (if you claim to have a hash), and chewing up cpu/ram
<clever> bendlas: i think this is also why sudo based travis runs in a full vm, and non-sudo travis runs as a container
<clever> bendlas: so only kernel exploits are a threat for non-fixed-output paths
<clever> bendlas: if you turn on a sandbox in nix, it cant read / or even access the network
<clever> infinisil: hydra has no way to limit that per jobset right now
<clever> and end-users only get that output if they ran the same malicious script under nix-build
<clever> bendlas: if the container logic does its job, then the scripts output merely gets saved to cache.nixos.org and it does no real harm
<clever> LnL: the bigger risk i can see, is people opening PR's that download things, and abusing cache.nixos.org as a mirror for whatever they want, of any size
<clever> aupiff: only when using nix-shell is the env properly modified to make things work
<clever> aupiff: you should open a nix-shell that has the nixpkgs clang in its PATH
<clever> LnL: and at that point, nix would have just built the thing on the end-users machine anyways
<clever> LnL: if nix is trusted 100%, then only users running the "malicious" nix expression will get the cached build
<clever> infinisil: the current github status plugin in hydra doesnt produce the right output to make it a sensible status hook
<clever> infinisil: travis should go first, this PR has been in a queue for 5 hours, lol
<clever> s4sha: so you may need to do the include inside the right extraConfig for the virtualhost
<clever> but the value of document_root may depend on where you include it
<clever> the .conf variants include a script_filename
<clever> fastcgi.conf fastcgi.conf.default fastcgi_params and fastcgi_params.default all exist in that directory
<clever> you can probably start by copying only the doc root line into the config
<clever> and with strace, you can view what values actually reach php-fpm
<clever> s4sha: this shows some extra config you can add, one of them sets the document root
<clever> ah, i was using lighttpd at the time, and it didnt have any special config
<clever> let me see what i was doing before
<clever> probably
<clever> \r\25DOCUMENT_ROOT/var/spool/nginx/html\17\10S
<clever> and that is the path nginx told it to run
<clever> read(3, "\17\37SCRIPT_FILENAME/var/spool/nginx/html/index.php
<clever> and yeah, 404 is obvious now from the files its opening
<clever> ah, it was sending the error right back to nginx, and its the error you previously gave
<clever> can you paste the entire line that had an error?
<clever> its also useful to see where the errors are going
<clever> and try sending it a single request
<clever> then do strace -p 27208 -s 3000
<clever> we need its pid#
<clever> there should be a worker process below the master process
<clever> adding --color to grep can help
<clever> s4sha: there should be a php near the middle
<clever> s4sha: throw it into a pastebin and i can decode it
<clever> s4sha: to start with, set php-fpm to only have 1 worker, then pastebin the output of "ps -eH x | grep php -C5"
<clever> oh yeah, i think fcgi is also capable of sending the stderr back to nginx, which might do something with the logs
<clever> then i can see exactly what its doing, including any log messages
<clever> s4sha: what i usually do in this case, is set it to a max of 1 worker, then attach strace to the worker
<clever> copumpkin: the entire serviceConfig is a single option, any attempt to set something under it wipes all of the serviceConfig
<clever> s4sha: and did you nixos-rebuild switch after making the change?
<clever> s4sha: where did you put that config string?
<clever> s4sha: the logs are either in the systemd journal, some /var/log folder, or /dev/null
<clever> s4sha: the /nix/store is immutable, so it will never have log files
<clever> any other channels will only get support if they happen to match up to the same versions of things
<clever> nixpkgs-unstable has caching for 32bit/64bit linux, and 64bit darwin
<clever> only thing you might lose is binary cache builds of darwin things
<clever> yeah, you can almost always run nixos-unstable or nixos-17.09 on darwin
<clever> yeah, if you run "nixops info" it shows the -I flags under "Nix path:"
<clever> and i think per-machine, you can set nixpkgs paths in the nix file
<clever> and you can configure -I flags for the whole deployment
<clever> and you can configure -I flags
<clever> i think it uses <nixpkgs> from $NIX_PATH
<clever> but you didnt notice until after you turned it off
<clever> the problem happened instantly upon nixos-rebuild switch, it just lost the ability to boot
<clever> if its working now, then the current revision is safe
<clever> the nixpkgs channels lack nixos testing
<clever> hydra runs tests to catch that, and only publishes safe revisions on nixos channels
<clever> about 6-ish months ago, it broke grub.conf in a way that prevented rollbacks
<clever> generally need to avoid running nixos from the nixpkgs channel, it can break the system in ways that are hard to repair
<clever> ah
<clever> ah, thats a bit old, but not what you said it was before
<clever> what does sudo nix-channel --list say?
<clever> ive only used the nixos module, and nixos should never be run from nixpkgs-unstable
<clever> eacameron: dang, that makes it much harder to share the logs for debug purposes
<clever> eacameron: do you know if the journal logs that letsencrypt makes on nixos contain any secrets?
<clever> Phillemann: after the build is done, nix will check for every build input in your output (by grepping for the storepaths), and any paths it finds become runtime dependencies
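A tiny sketch of that reference scanning in action (hypothetical config file; `hello` becomes a runtime dependency purely because its store path appears in the output text):

```nix
# Sketch: after building, nix finds hello's store path inside the
# output, so `nix-store --query --references ./result` lists hello.
with import <nixpkgs> {};
writeText "example.conf" ''
  program = ${hello}/bin/hello
''
```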
<clever> Lisanna: looks like $buildFlags is the best option you have
<clever> Lisanna:
<clever> Phillemann: you probably need to add it to buildInputs, then patch the result of $(which peco) into the other script at compile-time
<clever> Phillemann: and buildInputs are only available at build time
<clever> Phillemann: and the scripts in bin must be +x'd
<clever> Phillemann: everything added to buildInputs must contain a bin subdir to get added to PATH
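A sketch of that compile-time patching (hypothetical `myscript.sh` that calls a bare `peco`; since peco is only on PATH during the build, its full store path gets baked in):

```nix
# Sketch: bake the build-time path of peco into the installed script,
# instead of hoping it is on PATH at runtime.
{ stdenv, peco }:
stdenv.mkDerivation {
  name = "myscript";
  src = ./.;
  buildInputs = [ peco ];   # puts peco/bin on the build-time PATH
  installPhase = ''
    mkdir -p $out/bin
    substitute ./myscript.sh $out/bin/myscript \
      --replace "peco" "$(which peco)"
    chmod +x $out/bin/myscript   # scripts in bin must be +x
  '';
}
```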
<clever> avn: zfs checksums all data, and would be more likely to detect such things
<clever> avn: ext4 does writes in place, so fsck may be happy, but the data is mixed throughout time
<clever> avn: you should either freeze all writing to the disk, snapshot the whole thing as one atomic operation, or just shut it off, then clone it
<clever> avn: if it wasnt an atomic copy, then that could have corrupted almost anything on the machine
<clever> avn: sqlite has some pretty extreme testing on it, was the disk image cloned with the machine on or off?
<clever> its a glitch caused by how nix wraps things
<clever> mpcsh: tell it to stop asking
<clever> run it, ssh back in, do the above, pray
<clever> avn: this creates a storepath containing kexec, and an install image
<clever> avn: my kexec trick might work under kvm
<clever> just mount the existing partitions on /mnt
<clever> a full wipe of /nix/store and the db.sqlite, and re-run nixos-install should recreate it all
<clever> the nixos-rebuild will probably fail, because things are missing from the store, that the db claims exist
<clever> data and config files will persist
<clever> avn: that will recreate the db and all store contents
<clever> avn: another option, boot from an install cd, delete the nix database, run nixos-install
<clever> id reboot it and try again
<clever> from just nix-collect-garbage??
<clever> avn: run another nix-collect-garbage, does the file exist?, dont run anything else
<clever> avn: cant think of anything else at the moment
<clever> which includes your entire nixos
<clever> it will try to delete all invalid paths, paths that the db says shouldnt exist
<clever> yeah
<clever> copying the db will cause massive problems
<clever> not sure then
<clever> avn: does /etc/nix/nix.conf differ any?
<clever> avn: i want to know what command you ran to get that ghostscript path, not what your running against it
<clever> avn: sqlite is extremely hardened against corruption
<clever> avn: can you throw the entire console output into a gist?
<clever> what command did you run on them?
<clever> and how are you evaling that copy of nixpkgs to get the ghostscript drv?
<clever> ah
<clever> how did you clone it?
<clever> avn: can you describe the problem system in more detail?, is it using nix-daemon?, how are you evaling the nixpkgs?
<clever> but nix cant repair anything it doesnt consider valid
<clever> so it will always have the same contents
<clever> the path to the .drv is a hash of its contents
<clever> did you just use the path another machine had made, or the output of nix-instantiate from the problem machine?
<clever> it may turn up a different drv path
<clever> avn: what about nix-instantiate -A ghostscript
<clever> and then re-eval the same nixpkgs
<clever> avn: try a nix-collect-garbage --max-freed=1m
<clever> avn: how did the file get onto the bad machine?
<clever> avn: nix-store --query --references ?
<clever> avn: what about --query --hash?
<clever> avn: does that file exist on the "bad machine" ?
<clever> ahhh
<clever> yeah, its a comma separated list
<clever> as in, the kernel build
<clever> big-parallel is of note because the kernel requires it
<clever> domenkozar: big-parallel and benchmark are also features that may occur
<clever> domenkozar: what features did you enable in /etc/nix/machines ?
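An /etc/nix/machines line carrying those features looks roughly like this (fields are uri, system types, ssh key, maxJobs, speed factor, supported features; the values here are made up):

```
ssh://builder x86_64-linux /root/.ssh/id_builder 4 1 big-parallel,benchmark
```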
<clever> can you link the queue?
<clever> domenkozar: yeah, it is very bad about reporting why steps arent being run
<clever> domenkozar: ahhh
<clever> domenkozar: the queue is also so long, that it still hasnt finished loading
<clever> domenkozar: https://hydra.nixos.org/queue ah, its still present
<clever> hmmm, the same link now leads to https://hydra.angeldsis.com/queue_summary
<clever> yeah
<clever> domenkozar: this hydra is over a year old, and prints the entire queue http://hydra.earthtools.ca/queue
<clever> domenkozar: the queue used to be fully visible, but a change to hydra a few months ago removed it
<clever> setting the system flag forces everything to be 32bit only
<clever> i also made this helper a while back
<clever> jophish: this is how you get the proper path every time
<clever> patchelf --interpreter "$(cat $NIX_CC/nix-support/dynamic-linker)"

2017-08-07

<clever> part of its power comes in acting as an automatic root for the rpath nix cant track
<clever> sphalerite: then the bash script depends on gcc, rather then just glibc
<clever> and writeScript cant read $NIX_CC
<clever> sphalerite: gcc is a build input for $NIX_CC, which has the ld.so path
<clever> the result symlink also acts as a root, to keep the libraries from being deleted out from under you
<clever> to (re)patch any given binary
<clever> so you can do ./result ./foo.elf
<clever> that bash script has a patchelf invocation
<clever> colabeer, sphalerite: this contains a nix expression that when compiled, creates a bash script
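The shape of that expression is roughly this (a sketch, not the exact repo contents; the library list is a made-up example):

```nix
# Sketch: build a bash script that (re)patches a given ELF binary.
# The ./result symlink then roots the interpreter and libraries,
# keeping them from being garbage collected out from under you.
with import <nixpkgs> {};
writeScript "patch-my-elf" ''
  #!${stdenv.shell}
  # note: referencing stdenv.cc here makes the script depend on gcc,
  # not just glibc, as mentioned above
  ${patchelf}/bin/patchelf \
    --set-interpreter "$(cat ${stdenv.cc}/nix-support/dynamic-linker)" \
    --set-rpath "${lib.makeLibraryPath [ zlib ]}" \
    "$1"
''
```

Usage after nix-build: `./result ./foo.elf`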
<clever> Sonarpulse: so i would get a purely libc-less gcc
<clever> Sonarpulse: i was mainly setting libc to null to make sure any attempt at using libc would fail
<clever> i just sent lucasOfBesaid to a fairly old one to be sure
<clever> Sonarpulse: it works on 16.09, at the least
<clever> i believe that attribute is the gcc normally used to build libc
<clever> Sonarpulse: i was using https://github.com/cleverca22/nix-tests/blob/master/arm-baremetal.nix before as an experimental method of getting a libc-less compiler
<clever> lucasOfBesaid: Sonarpulse may know more as well
<clever> lucasOfBesaid: id have to re-read the code in all-packages.nix and see how it changed
<clever> lucasOfBesaid: so that cheaty trick of setting libc to null no longer works
<clever> lucasOfBesaid: something to do with the cross-compiler and libc was changed
<clever> which arent compatible with the example
<clever> going to a fairly old one to avoid stdenv improvements
<clever> then in another dir, nix-build arm-baremetal.nix -I nixpkgs=/home/clever/nixpkgs-channels
<clever> git clone http://github.com/nixos/nixpkgs-channels ; cd nixpkgs-channels; git checkout nixos-16.09
<clever> try it against a checkout of 16.09?
<clever> ah, things may have changed in the newest master
<clever> so you just need to find a string that makes gcc happy
<clever> you can also nix-build arm-baremetal.nix --argstr arch i686-elf
<clever> lucasOfBesaid: line 1 sets the target arch that gets passed to gcc
<clever> lucasOfBesaid: something i wrote a while back to do similar: https://github.com/cleverca22/nix-tests/blob/master/arm-baremetal.nix
<clever> lucasOfBesaid: yeah, one sec
<clever> lucasOfBesaid: i686-linux is an internal code for nixos, that just says everything must be 32bit
<clever> it will just default to 32bit
<clever> lucasOfBesaid: you can also do import <nixpkgs> { system = "i686-linux"; }, then it wont even have a 64bit option
<clever> lucasOfBesaid: then everything will be in 32bit mode
<clever> lucasOfBesaid: what about just directly using pkgsi686Linux.stdenv.mkDerivation ?
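A quick sketch of the two equivalent routes to a 32bit build on a 64bit host (hypothetical package name):

```nix
# Sketch: either re-import nixpkgs as i686-linux (no 64-bit option
# even exists) or reach into the pkgsi686Linux sub-tree.
let
  pkgs   = import <nixpkgs> {};
  pkgs32 = import <nixpkgs> { system = "i686-linux"; };  # whole tree is 32-bit
in pkgs.pkgsi686Linux.stdenv.mkDerivation {              # or: just this subtree
  name = "example-32bit";
  src = ./.;
}
```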
<clever> yeah
<clever> srhb: right now, sddm is broken on master
<clever> mpcsh: check the irc logs for this bot
<clever> 2017-08-07 10:25:38 -nix-gsc-io`bot:#nixos- Channel nixos-unstable-small advanced to https://github.com/NixOS/nixpkgs/commit/f152749c99 (from 6 hours ago, history: https://channels.nix.gsc.io/nixos-unstable-small)
<clever> dtzWill: that also contains the filenames for everything in the closure, so it depends on the entire closure!
<clever> dtzWill: <nixos/lib/make-disk-image.nix> creates a full disk image containing a copy of the closure you specified
<clever> 2017-08-07 13:03:48 < clever> Attrs: LegacyBIOSBootable
<clever> 2017-08-07 13:03:46 < clever> Type-UUID: 21686148-6449-6E6F-744E-656564454649
<clever> after you set that type
<clever> nwuensche: double-check what the i command says about it
<clever> nwuensche: i think so
<clever> dtzWill: there is also a related problem that cant easily be fixed
<clever> and nix wants to haul along the entire uncompressed copy of nixos
<clever> but i needed to refer to a path that will exist inside the squashfs, on the kernel cmdline
<clever> the last time i used that, i squashfs'd an entire nixos up, then jammed that into an initrd
<clever> but it will have bar's absolute path
<clever> so i can make a config file that does "foo=${builtins.unsafeDiscardStringContext bar}", and that config wont depend on bar
<clever> that strips all context from a string
<clever> builtins.unsafeDiscardStringContext is also a fun function
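A tiny sketch of that context-stripping trick (`bar` here is a stand-in derivation):

```nix
# Sketch: app.conf's text contains bar's absolute store path, but
# because the string context was discarded, app.conf does NOT depend
# on bar -- if nothing else roots bar, the path may not exist.
with import <nixpkgs> {};
let bar = writeText "bar" "hello";   # hypothetical dependency
in writeText "app.conf" ''
  foo=${builtins.unsafeDiscardStringContext bar}
''
```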
<clever> yeah
<clever> ah
<clever> then you'll know you cheated
<clever> so if you cheat, the files just wont exist
<clever> that will force you to only access things that are properly defined as build inputs
<clever> also, you should really build with sandboxing on
<clever> and only the build-time deps are looked for in the output
<clever> dtzWill: and if any string with context is passed to derivation, they become build-time deps
<clever> dtzWill: the builtins.derivation function creates a string (the output path) that depends on the derivation
<clever> dtzWill: every string in nix has some magic context on it, the dependencies of the string
<clever> dtzWill: if you hard-code a storepath as a string, it wont
<clever> dtzWill: but only if you got those paths via a derivation
<clever> dtzWill: yes
<clever> moesasji: this creates 3 bash scripts, and then i can just do something like environment.systemPackages = [ (pkgs.callPackage ./util.nix {}) ];
<clever> moesasji: #!/usr/bin/env bash
<clever> it will reuse the existing rootfs, as long as you mount it to /mnt/
<clever> nwuensche: then nixos-generate-config --root /mnt and nixos-install
<clever> nwuensche: and set the type right on the 1mb
<clever> and then mount the 499mb to /mnt/boot after formatting again
<clever> umount it, delete it, then create 2, a bios boot (1mb) and a replacement /boot (499mb)
<clever> nwuensche: simplest thing is to just make the bios boot partition and boot via legacy
<clever> look in the bios config to see if it mentions uefi or efi or secure boot
<clever> nwuensche: this is what fdisk should say about it when you run the i command on it
<clever> Attrs: LegacyBIOSBootable
<clever> Type-UUID: 21686148-6449-6E6F-744E-656564454649
<clever> Command (m for help): i
<clever> to boot via legacy on gpt, you must create a bios boot partition, 1mb, no fs, never mounted
<clever> in the fdisk -l output
<clever> Disklabel type: dos
<clever> is the partition table gpt or mbr?
<clever> nwuensche: do you want to boot with legacy or uefi?
<clever> yeah
<clever> nwuensche: try editing /etc/resolv.conf and set it to use 8.8.8.8 directly
<clever> srhb: ive seen one router that didnt use v4/v6 as a cache key in its dns cache, so it could return a v6 reply to a v4 query!