2019-10-05

<clever> __monty__: what does $TMP point to?
<clever> __monty__: is the remote host nixos or something else?
<clever> __monty__: is /tmp on a tmpfs?
<clever> everything after the freeman42x:
<clever> 2019-10-05 11:48:23 < clever> freeman42x: EDITOR=kate sudoedit configuration.nix
<clever> __monty__: the error includes the string "No space left on device" and it references a path on disk
<clever> __monty__: i think the remote machine ran out of disk space?
<clever> freeman42x: did you try running that command yet?
<clever> freeman42x: literally just run exactly what i gave above, edit the file, then save&quit
<clever> freeman42x: EDITOR=kate sudoedit configuration.nix
<clever> freeman42x: you can use sudoedit with kate
<clever> freeman42x: why must the text editor run as root?
<clever> freeman42x: what if you just dont run kate as root? run it separately from the sudo
<clever> freeman42x: thats worse, kate doesnt want to be run as root for security reasons
<clever> freeman42x: your working directory changed, you have to cd to the right place
<clever> freeman42x: try `sudo -i` instead ?
<clever> freeman42x: how did you get root?
<clever> that is what caused that error
<clever> freeman42x: why are you root?
<clever> freeman42x: what does `id` return?
<clever> freeman42x: this is a shell file i'm using for a haskell.nix project, i then build it with plain old cabal (not the multi-project one)
<clever> freeman42x: there is also a high chance that cabal/stack is just ignoring the builds haskell.nix provides, causing the issues you have
<clever> freeman42x: haskell.nix provides its own shell files
<clever> freeman42x: it may not be using the stdenv
<clever> freeman42x: the stdenv will get .dev and .out for you automatically
<clever> freeman42x: just add zlib to the buildInputs in your shell.nix file
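A minimal shell.nix along those lines (a sketch, not freeman42x's actual file; mkShell and zlib are the only nixpkgs names used):

  { pkgs ? import <nixpkgs> {} }:
  pkgs.mkShell {
    # the stdenv wires up zlib's .dev and .out outputs automatically
    buildInputs = [ pkgs.zlib ];
  }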
<clever> and your new module (which can just be in imports) will assume full control
<clever> pbb: if you change the attr from boot.loader.grub, to boot.loader.corebootgrub, then you can simply boot.loader.grub.enable = false;
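A rough sketch of that layout; corebootgrub.nix and the boot.loader.corebootgrub option are hypothetical names for the copied module, only boot.loader.grub.enable is a real option here:

  { config, pkgs, ... }:
  {
    imports = [ ./corebootgrub.nix ];          # copy of the grub module with renamed options
    boot.loader.grub.enable = false;           # the stock module steps aside
    boot.loader.corebootgrub.enable = true;    # hypothetical option defined by the copy
  }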
<clever> that could also work
<clever> pbb: it might be simpler, to just add a new setting, install bios
<clever> pbb: you would have to move installBootLoader out of system.build, and then assign a type to it
<clever> pbb: so there are basically no rules on what goes within it, nor how to merge duplicates
<clever> pbb: currently, the entire system.build is a single type, of attributes of unspecified type
<clever> pbb: which runs substituteAll on switch-to-configuration
<clever> which then runs systemBuilder
<clever> pbb: the path within installBootLoader, becomes an env var in this derivation
<clever> pbb: or modify top-level.nix to change how it references installBootLoader
<clever> pbb: not that i know of, you would need to modify the grub module, to insert more things into it
<clever> pbb: yep
<clever> pbb: yeah, you would need to entirely overwrite it with your own script, and then somehow know what the old value was
<clever> pbb: its just flagged as internal, so the docs all claim it doesnt exist
<clever> pbb: installBootLoader is still a normal nixos option
<clever> HKei: in this example, i'm generating a bash script, that will run whatever cabal produced, with all of the complex args the program needs to work
<clever> HKei: https://github.com/input-output-hk/cardano-sl/blob/parallel-restore/wallet/shell.nix#L34 i just reference whatever drv the default.nix contains, and .overrideAttrs it to add more tools
<clever> HKei: i usually do that by having shell.nix do an override against default.nix
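The general shape of that shell.nix-overrides-default.nix pattern, as a sketch (the extra tools listed are just examples):

  let
    pkgs = import <nixpkgs> {};
    drv = import ./default.nix {};
  in
    drv.overrideAttrs (old: {
      # add interactive tools on top of whatever default.nix already builds with
      buildInputs = (old.buildInputs or []) ++ [ pkgs.cabal-install pkgs.ghcid ];
    })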
<clever> that sounds like the best option then
<clever> didnt actually look at that page
<clever> tilpner: oh, there are multiple releases?
<clever> jluttine: upstream doesnt provide one zip per font
<clever> tilpner: that zip is several gig in size
<clever> if licensing issues wont let hydra build it, then there is no fix
<clever> the key, is for hydra to build those split up ones
<clever> tilpner: fetch the entire release in phase 1, then break it up in multiple drvs
<clever> then those partials are in the cache
<clever> and the critical part, is that hydra is configured to build everything in the middle layer
<clever> the 3rd layer is a buildEnv to put it back together
<clever> the 2nd layer, is an array of drvs, that unpack 1 font each (always 1 font)
<clever> i think the simplest option, would be to make 3 layers of drvs, the first layer just fetches everything as a zip (plain fetchurl on the release)
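A sketch of those three layers; the release URL, hash, font names, and unzip commands are all placeholders:

  { pkgs ? import <nixpkgs> {} }:
  let
    # layer 1: plain fetchurl of the whole release
    release = pkgs.fetchurl {
      url = "https://example.com/fonts-release.zip";   # placeholder
      sha256 = pkgs.lib.fakeSha256;                    # placeholder
    };
    # layer 2: one derivation per font, each unpacking exactly one font
    oneFont = name: pkgs.runCommand "font-${name}" { nativeBuildInputs = [ pkgs.unzip ]; } ''
      mkdir -p $out/share/fonts
      unzip ${release} "${name}/*" -d $out/share/fonts
    '';
    selected = map oneFont [ "font-a" "font-b" ];      # placeholder font names
  in
    # layer 3: buildEnv glues the chosen fonts back together
    pkgs.buildEnv { name = "fonts-combined"; paths = selected; }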
<clever> you would have to download each file in the font, one per fetchurl
<clever> ah, so you cant bypass the release/archive tars and download just one font
<clever> worldofpeace: are the fonts grouped by directory within that repo?
<clever> worldofpeace: i find even the main nixpkgs docs cause that, the problem is that they show how builder works, but dont point out that its mainly for learning how it works
<clever> ive found that most sites i visit, cant keep up with my modem, lol
<clever> HKei: 500mbps fiber here
<clever> gigs then
<clever> HKei: its also huge, in that you have to download several 100mb (or was it gig?) worth of fonts, before you can extract and keep 1
<clever> it would need separate tar files, for each font

2019-10-04

<clever> and then decides what to do, based on the actual target
<clever> haskell.nix, just translates the if's in your cabal file, into nix level if's
<clever> and then everything implodes :P
<clever> so (stack|cabal)2nix, wont try to supply Win32 to the windows build, and will try to build posix on windows
<clever> cabal2nix just computes the expr for the current (or given) os
<clever> stack2nix is just a wrapper, that runs cabal2nix on the right versions of everything
<clever> the major difference, is that it supports conditional statements in cabal files
<clever> yep
<clever> yep
<clever> haskell.nix can obey either stack.yaml, or cabal.project, and can also cross-compile
<clever> stack2nix obeys the stack.yaml, but then wont find things on the cache
<clever> cabal2nix has better cache coverage, but wont obey your stack.yaml
<clever> freeman42x: its far better to build it with nix, using either cabal2nix, stack2nix, or haskell.nix
<clever> freeman42x: but, i dont like using either cabal new-build, or stack, because both are impure
<clever> freeman42x: stack --nix, just runs itself under nix-shell for you
<clever> freeman42x: have you tried `nix-shell -p zlib` yet?
<clever> freeman42x: youve only done half of that
<clever> freeman42x: in addition to installing the binary, you must also copy a db file to the right dir
<clever> nix-pkgconfig relies on a database of mappings between pkg-config .pc files and the nixpkgs attribute they are provided by. A minimal example database (default-database.json) is included which can be installed via:
<clever> freeman42x: did you read the readme?
<clever> freeman42x: then that tool isnt actually doing what it claims to do
<clever> freeman42x: `file -L /run/current-system/sw/bin/pkg-config` ?
<clever> i hate it when the package name, and binary, are one character different :P
<clever> freeman42x: oops, with a -, `type pkg-config`
<clever> freeman42x: what does `type pkgconfig` say?
<clever> red[m]: you can use that tree to get muslc based everything in nix
<clever> > pkgsCross.musl64.stdenv.mkDerivation
<clever> > pkgsCross.musl64.hello
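Those attribute paths can also be built directly from a nixpkgs checkout or channel, e.g.:

  nix-build '<nixpkgs>' -A pkgsCross.musl64.hello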
<clever> and do native linux and darwin builds, on their own hosts
<clever> it can compile to windows, from both linux and darwin
<clever> ah, then you probably want to look into haskell.nix
<clever> and the server also has to run on every platform?
<clever> server-side components?
<clever> freeman42x: you can just compile it once, and use the same js on every platform
<clever> freeman42x: if all of the code is just JS in the end, it doesnt matter what target os you use
<clever> #haskell.nix on freenode
<clever> freeman42x: haskell.nix can cross-compile from linux to windows, using nix
<clever> freeman42x: but you're probably better off just doing `nix-shell -p zlib` and then `cabal build`
<clever> ,ifd
<clever> import (pkgs.fetchFromGitHub { owner = "bgamari"; repo = "nix-pkgconfig"; rev = "todo"; sha256 = "todo"; }) {}
<clever> or use IFD
<clever> yeah
<clever> and systemPackages is a list of packages
<clever> that is an expression, that returns a package
<clever> environment.systemPackages = [ (import /home/clever/apps/nix-pkgconfig/default.nix {}) ];
<clever> `import /home/clever/apps/nix-pkgconfig/default.nix {}`
<clever> if they are on windows, then they must get zlib via some other means
<clever> freeman42x: if building on nixos, you must use `nix-shell -p zlib`
<clever> freeman42x: and nix-pkgconfig is already packaged, you just throw import at a path to it, and you get the package out
<clever> freeman42x: installing zlib.dev wont fix cabal, you must use `nix-shell -p zlib` to get zlib working right
<clever> ,nix-shell freeman42x
<clever> wrl: nix-build '<nixpkgs/nixos/release.nix>' -A iso_minimal_new_kernel.x86_64-linux
<clever> wrl: and it just uses whatever linux_latest is on that nixpkgs rev
<clever> # A variant with a more recent (but possibly less stable) kernel
<clever> # that might support more hardware.
<clever> wrl: there is already a preset called iso_minimal_new_kernel
<clever> ajs124: that would do it!
<clever> wrl: you may need to build a custom ISO, that uses 5.3 instead
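A minimal sketch of such a custom ISO (assuming the nixpkgs checkout in use has linuxPackages_5_3):

  # iso.nix -- build the image with: nix-build iso.nix
  let
    configuration = { pkgs, ... }: {
      imports = [ <nixpkgs/nixos/modules/installer/cd-dvd/installation-cd-minimal.nix> ];
      boot.kernelPackages = pkgs.linuxPackages_5_3;
    };
  in
    (import <nixpkgs/nixos> { inherit configuration; }).config.system.build.isoImage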
<clever> > linux_5_3
<clever> that is the default on unstable
<clever> > linux
<clever> wrl: what kernel does arch have?
<clever> bendlas: then IFD isnt needed, everything has been pre-generated
<clever> bendlas: and you then just commit that whole mess to a repo
<clever> bendlas: my thinking, is that when you update the sha256, you also generate the new dep-tree info, for what .o depends on what .h files
<clever> wrl: kexec might be a good way for you to test things, that doesn't need any working disk drivers
<clever> ajs124: edit nixpkgs level :P
<clever> bendlas: but yeah, with 20k derivations, nix will still suffer massively
<clever> bendlas: id want to pre-generate the dep tree info, and ship that with the rev+sha256
<clever> wrl: are you writing to sd? or sd?1 ?
<clever> so id want it to be a non-IFD step, that you just generate the dep tree once ahead of time, and then import the file
<clever> it would likely be far faster, if you batched it into a single IFD step, but then it costs more when nothing changes
<clever> bendlas: the major performance cost, is that IFD must happen serially, so it can take 20+ mins to do the first pass
<clever> bendlas: snack does similar with haskell, there is a dedicated binary, that will list the modules a given module depends on, and its ran via IFD, for each module
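A sketch of the difference in nix terms; deps.json, depsDrv, and the lookup key are hypothetical:

  let
    # IFD form: evaluation has to build depsDrv before it can continue (the serial cost above)
    # depTree = builtins.fromJSON (builtins.readFile "${depsDrv}/deps.json");

    # pre-generated form: deps.json is committed next to the rev + sha256, nothing to build
    depTree = builtins.fromJSON (builtins.readFile ./deps.json);
  in
    depTree."main.o" or []    # e.g. which .h files main.o depends on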
<clever> hlolli: to help reduce the "but it worked for me" situations
<clever> hlolli: nixos also sets PATH differently, to only include what the service needs, so it doesnt happen to work due to what youve nix-env -i'd
<clever> hlolli: you're likely not using nix-shell, so things fail in weird ways
<clever> hlolli: builds will only function if you do them under nix-build or nix-shell
<clever> selfsymmetric-mu: there is an entire fonts tree of options in configuration.nix
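The options live under the fonts. prefix in configuration.nix; for example (package choices here are just illustrative):

  fonts.fonts = [ pkgs.dejavu_fonts pkgs.noto-fonts ];
  fonts.fontconfig.enable = true;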
<clever> hlolli: you probably want yarn2nix, to build the package
<clever> hlolli: do all building in nix, and just use the built result in the service
<clever> hlolli: dont do that, its not pure
<clever> hlolli: are you running `yarn build` from a systemd service?
<clever> hlolli: can you pastebin the entire output when it fails?
<clever> hlolli: is the build happening under nix-build/nix-shell?
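A rough sketch of that split (build in nix, run the result from the unit); myApp and my-app.nix are hypothetical, e.g. a yarn2nix-generated package:

  { config, pkgs, ... }:
  let
    myApp = pkgs.callPackage ./my-app.nix {};   # hypothetical nix-built (e.g. yarn2nix) package
  in {
    systemd.services.my-app = {
      wantedBy = [ "multi-user.target" ];
      # no `yarn build` in here: the build already happened inside the derivation
      serviceConfig.ExecStart = "${myApp}/bin/my-app";
    };
  }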
<clever> exarkun: i have been thinking of rewriting bors as well, while it does work, the errors when it doesnt are horrid, typically just ruby backtraces, lol
<clever> but, if any naughty admin pushes directly to master, you cant fast-forward, and ci has to start over
<clever> and if it passes, fast-forward master to that merge commit you just tested
<clever> exarkun: the bors logic could be baked into github, just have github itself push the merge to a dummy branch, and wait for ci to pass on that dummy branch
<clever> and then if the status checks are green, bors will do it
<clever> exarkun: bors does that, by just pushing the merge commit to a bors/staging branch
<clever> exarkun: the problem, is that github has to re-trigger CI, on the result of merging the commit into master
<clever> which is why many projects demand that you rebase on top of master before merging
<clever> exarkun: and if master changes after that test, things can break
<clever> exarkun: the problem, is that travis merge builds only tell you if the pr passed with the old value of master
<clever> exarkun: BUT, hydra will now rebuild EVERY SINGLE PR, on EVERY SINGLE PUSH TO MASTER
<clever> exarkun: so, you could configure hydra to build pull/$PR/merge, on every PR
<clever> exarkun: if you fetch pull/42/merge, you get a commit that is the result of merging the pr into master (but that commit wont be pushed to master)
<clever> exarkun: if you try to fetch the pull/42/head branch, you get the tip of the PR
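Those refs can be fetched directly; 42 is just the example PR number from above:

  git fetch origin pull/42/head:pr-42-head     # the tip of the PR branch
  git fetch origin pull/42/merge:pr-42-merge   # the PR merged into current master (when mergeable)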
<clever> exarkun: it kind of does have some tools to support it
<clever> it only waits for a batch if it's already busy doing something else and must wait
<clever> and when done, it will test pr2+pr3, as a second batch
<clever> but if you tell it to merge pr1, pr2, and pr3, it will start testing pr1
<clever> in your case, it would test pr1, and then pr2, as separate things
<clever> qyliss: and once the current job is done, it will create a merge commit, containing everything from the queue, and do them in a second batch
<clever> qyliss: if a PR merge is pending checks, any further attempts to merge one enter a queue
<clever> and travis doesnt re-test every time master moves
<clever> so if you push PR1, and it passes, then PR2 gets merged, then PR1+master now fails
<clever> and travis building the merge branch cant catch the above, because it only tests the merge commit when you push
<clever> it can solve problems like PR1 passes CI, PR2 passes CI, but if you merge both, CI fails
<clever> infinisil: if you tell bors to merge several branches at once, it will merge all of them together, then test them as a single batch
<clever> infinisil: bors is a bot, that will generate a merge commit, then run CI (in this case, a full hydra build??) and only push that merge to master if CI passes
<clever> buckley310: requireFile is used for things like oracle java
<clever> buckley310: pkgs.requireFile is one method
<clever> buckley310: if you supply the hash upfront, nix will compute where it should be in /nix/store, and use that copy
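A sketch of pkgs.requireFile; the file name, hash, and message are placeholders:

  pkgs.requireFile {
    name = "jdk-linux-x64.tar.gz";   # placeholder file name
    sha256 = "0000000000000000000000000000000000000000000000000000";   # placeholder
    message = ''
      download the file manually, then add it to the store with:
        nix-store --add-fixed sha256 jdk-linux-x64.tar.gz
    '';
  }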
<clever> cransom: you can also use `nix-store --query --roots $FOO` to find out why it is still alive
<clever> buckley310: yes
<clever> so its not really leaking a process
<clever> when the bash it started exits, sudo will then exit with the same error code
<clever> a simple `sudo bash` leaves a sudo proc around, with bash in the argv[1]
<clever> root 26346 0.0 0.0 130004 2972 pts/12 S 11:33 0:00 sudo bash
<clever> cransom: so nix considers it "in use" and not safe to delete
<clever> cransom: "use", as in, they are still in the argv array
<clever> exarkun: `sudo nix-store --delete $FOO` will always fail, because its in the args of sudo, so sudo is "using" it
<clever> exarkun: nix will check for open files, env vars, mapped files, and program arguments
<clever> if you have changed the ports, you need to set allowedTCPPorts and allowedUDPPorts yourself, to whatever you changed them to
<clever> hpfr[m]: if you set openDefaultPorts, it just opens 2 ports, one tcp, one udp
<clever> hpfr[m]: add the ports to networking.firewall
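With changed ports that looks roughly like this (the port numbers are just examples):

  networking.firewall.allowedTCPPorts = [ 22000 ];
  networking.firewall.allowedUDPPorts = [ 21027 ];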

2019-10-03

<clever> tetdim: if you want it in the path, you must use nix-env
<clever> tetdim: nix-build never adds the products to PATH
<clever> tetdim: what about `nix-build ~/nixpkgs -A nix` , without your patches?
<clever> tetdim: nice, so the freebsd stdenv is already working
<clever> toxvpn operates entirely on the IP layer
<clever> freeman42xxx: you can also `man configuration.nix` and then `/` to search
<clever> freeman42xxx: toxvpn is based on the same p2p nature of hamachi
<clever> tetdim: where does that error appear in the code?
<clever> tetdim: and you already cloned nixpkgs to ~/nixpkgs ?
<clever> ,nix-shell freeman42xxx
<clever> exarkun: that will prove that somebody upstream borked (or mitm'd) the download, and nixos was just blindly trusting your old hash
<clever> exarkun: set the hash wrong, on all machines, and see if the new hash they come up with matches, on all machines
<clever> tetdim: nix show-config | egrep 'restr|sand'
<clever> exarkun: you can blame somebody on the rust package management end, for changing a source tar without changing the version
<clever> exarkun: if you use the "darwin" hash on nixos, then it should just work everywhere
<clever> tetdim: you may need to turn off restricted mode and the sandbox
<clever> exarkun: if you provide the old hash, nixos finds the old copy in /nix/store and doesnt care that upstream borked things
<clever> exarkun: this means, that upstream has fudged with the source, and only the arch+darwin hash will work
<clever> 2019-10-03 16:46:43 < clever> exarkun: is nixos reporting that hash in an error, or just saying that hash works?
<clever> exarkun: i get the arch+darwin hash, on my nixos box
<clever> hash mismatch in fixed-output derivation '/nix/store/0kxcxasvcn9n1cj8iawjmhc2xkahcan6-ristretto-0.9.999-vendor': wanted: sha256:1qbfp24d21wg13sgzccwn3ndvrzbydg0janxp7mzkjm4a83v0qij got: sha256:1vfzdvpjj6s94p650zvai8gz89hj5ldrakci5l15n33map1iggch
<clever> exarkun: if you intentionally set the hash wrong on nixos, does it report the nixos hash, or the arch+darwin hash?
<clever> exarkun: and what is the error from nixos?
<clever> exarkun: is the path to the .drv on both machines identical?
<clever> exarkun: are both nixos and arch using the same nixpkgs rev?
<clever> exarkun: is nixos reporting that hash in an error, or just saying that hash works?
<clever> just clone nixpkgs, and `nix-build ~/nixpkgs -A hello` and you might get lucky!
<clever> tetdim: that one looks like it should work
<clever> tetdim: thats claiming its a linux machine, so it will try to run linux binaries
<clever> you just claimed it will be "bsd"
<clever> 2019-10-03 16:40:10 < tetdim> on freebsd, bsd
<clever> tetdim: it must be "x86_64-freebsd" or nixpkgs will fail
<clever> tetdim: the above doesnt tell me what host_machine.system().to_lower() returns on your machine
<clever> tetdim: that paste tied the channel up for 30+ seconds :P
<clever> tetdim: no, thats the code to generate the system name :P
<clever> mmmmm, spam
<clever> tetdim: as long as builtins.currentSystem returns "x86_64-freebsd", then nixpkgs might just work out of the box
<clever> and then uses those, to build the stdenv
<clever> tetdim: it just symlinks $out/bin/make to /usr/local/bin/gmake!
<clever> tetdim: ahhh, the freebsd bootstrap cheats
<clever> ln /usr/local/bin/gmake make
<clever> tetdim: id say, study this and the other file in its dir, somebody has already started things
<clever> and a tarball, that contains gcc and patchelf, for $OS + $arch
<clever> x86-64 cheats by using 32bit-x86 busybox
<clever> tetdim: each arch has its own busybox to bootstrap things, and its own tar for that arch
<clever> tetdim: the only static binary is the busybox used to unpack the tools tar
<clever> tetdim: all stages after the tar, obey the gcc version specified in nixpkgs
<clever> tetdim: that is then followed by several stages, building a dumb gcc, building a glibc, then building a smart gcc that can use the glibc
<clever> tetdim: unpack-bootstrap-tools.sh will unpack that tar to $out, and re-patchelf it (with the patchelf inside the tar) to expect all of its libs in $out/lib/
<clever> tetdim: the gcc tools tar, is basically the lfs /tools/ dir
<clever> tetdim: busybox is used to unpack those tools, and line 29 will patchelf the patchelf
<clever> tetdim: under linux, it starts with a naked busybox binary (static of course), and a tarball that contains gcc and some very very basic utils
<clever> tetdim: it looks like somebody already started it for bsd
<clever> tetdim: the stdenv contains gcc, ld, make, and basic tools like cp/rm
<clever> tetdim: here is a basic derivation you can run, if you supply your own binaries for ps, env, id, and cat, you can run that without a single reference to <nixpkgs> and stdenv
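The paste itself isn't in the log; as a stand-in, a minimal derivation with no <nixpkgs>/stdenv reference looks roughly like this (the builder path is whatever shell binary you supply yourself):

  derivation {
    name = "hello-no-stdenv";
    system = builtins.currentSystem;
    builder = "/bin/sh";                     # any shell you provide; assumes the sandbox is off
    args = [ "-c" "echo hello > $out" ];
  }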
<clever> tetdim: step 2, is to get the stdenv to build, under that nix, and then use that to build nix with nix
<clever> tetdim: nice
<clever> neat*
<clever> gnidorah: stats_gui is a near thing i found within the console, it can report any performance metric in steam
<clever> gnidorah: yeah, comments always help
<clever> gnidorah: id just leave the PR as it is then
<clever> gnidorah: ahh, it would probably have to be in SDL's rpath then, which isnt easy
<clever> gnidorah: and related to your steamcmd stuff, i recently discovered `steam steam://nav/console`, that opens a console, directly in the steam UI
<clever> gnidorah: but its not aware of dlopen() things, and will break those
<clever> gnidorah: this line is likely the problem, it will go over the DT_NEEDED field, figure out which RPATH entries you need, and remove the extra
<clever> slabity: what does `sudo id` report?
<clever> gnidorah: if you remove the wrapProgram and set dontPatchELF = true;, does it still work?
<clever> gnidorah: the build tools might already do that, but patchelf --shrink-rpath breaks it
<clever> gnidorah: instead of using wrapProgram and LD_LIBRARY_PATH, it would be better to keep it in the rpath
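A sketch of that approach inside the package expression; the binary name and library list are placeholders, and --print-rpath / --set-rpath are standard patchelf flags:

  # inside the mkDerivation arguments
  dontPatchELF = true;    # stop fixupPhase from shrinking rpath entries only reached via dlopen()
  postFixup = ''
    old=$(patchelf --print-rpath $out/bin/the-game)
    patchelf --set-rpath "$old:${pkgs.lib.makeLibraryPath [ pkgs.SDL2 ]}" $out/bin/the-game
  '';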
<clever> hyperfekt: i think it has also been called speed factor? it helped to balance the load between differently powerful machines
<clever> abbradar[m]: what does --show-trace say?
<clever> abbradar[m]: ?

2019-10-02

<clever> hmmm yeah, dont think it can graph shutdown
<clever> elvishjerricco: it can show how long everything takes, and what it waited for
<clever> elvishjerricco: look at the image that generates
<clever> elvishjerricco: systemd-analyze plot
<clever> could be hanging for other reasons
<clever> elvishjerricco: but the nixops process will get killed at this point
<clever> elvishjerricco: i would expect that to still work...
<clever> to the source!
<clever> ah
<clever> elvishjerricco: ?
<clever> elvishjerricco: compare /run/current-system before and after you reboot, any differences?
<clever> elvishjerricco: should still work in that case
<clever> elvishjerricco: systemd-boot or grub?