2019-09-25

<clever> maybe thats just to log what make is doing
<clever> wait, its running trace-gen, on cat ....
<clever> $(trace-gen) cat $< >> $@.tmp
<clever> it feels like a heredoc, so i'm not sure what trace-gen is doing...
<clever> tetdim: this will turn into: const char * schema = R"FOO(......)FOO"; i think
<clever> tetdim: you also want to look at where it gets #include'd
<clever> tetdim: i think you have to run trace-gen, or it wont escape things properly
<clever> cfricke: nix why-depends
<clever> tetdim: ah, you implied that you had deleted all make files
<clever> tetdim: read the old makefile in github
<clever> tetdim: if you read the makefile, it will tell you how it was generated
<clever> tetdim: make should generate that header for you
<clever> tetdim: because then you have to edit the sql in a .hh file, and you have to escape every " yourself
<clever> tetdim: it uses the trace-gen binary, and some echo/cat commands, to generate it
<clever> tetdim: src/libstore/local.mk:%.gen.hh: %
<clever> sondr3: but, by that time, i already knew c++, qbasic, perl, and javascript
<clever> sondr3: something of note, the only programming course ive ever taken at school, was visual basic, in grade 11
<clever> sondr3: ah
<clever> sondr3: `strace -f -e execve` may further explain it, but you would have to compare to what a mac does, and execsnoop may hide the answer
<clever> sondr3: it might even be a bash level thing, because make runs these strings with `sh -c`
<clever> sondr3: it might be that mac's make, or clang, is ignoring the "" in the arguments
<clever> sondr3: is macos using a different makefile, or is he telling you to edit it?
<clever> name++
<clever> so it should have no real effect
<clever> sondr3: the gist sets LTHREAD to ""
<clever> sondr3: what happens if you remove the LTHREAD part?
<clever> sondr3: but when you read the script, you skipped over that part!
<clever> and g++ blindly obeys, and tries to open ""
<clever> sondr3: one of the arguments to g++ was ""
<clever> sondr3: i thought so
<clever> 11978 access("", F_OK) = -1 ENOENT (No such file or directory)
<clever> you must then answer it, and supply a pw
<clever> inkbottle: `nixos-install` will run passwd for you, at the end
<clever> sondr3: `strace -e execve -f make` might also help
<clever> sondr3: oops `strace -f -o logfile make ; grep ENOENT logfile > logfile2` then upload logfile2 to gist
<clever> sondr3: `strace -o logfile make ; grep ENOENT logfile > logfile2` then upload logfile2 to gist
<clever> ,nix-shell
<clever> sondr3: nix-shell should provide the gcc wrapper for you, installing gcc and gnumake will break things
<clever> sondr3: do all of the lines mention ENOENT?
<clever> sondr3: `strace -f make 2>&1 | grep ENOENT` ?
<clever> sondr3: gist works good, and nix-shell should fix everything
<clever> sondr3: are you using nix-shell or did you install a compiler?
<clever> sondr3: can you pastebin the Makefile contents and the exact output it gives when failing?
<clever> sondr3: what is the error?
<clever> i dont think i see bsd make packaged
<clever> sondr3: i think it will always be gnu make, you can check `make --version` to confirm
<clever> sondr3: `nix-shell -p` will provide you with make
<clever> equivrel: `nix-collect-garbage --delete-older-than 7d` will delete the roots for profiles it has access to (root can delete from all), then delete the garbage that it exposed
<clever> equivrel: `nix-store --gc` doesnt matter what user its run as, and can only delete things without roots

2019-09-24

<clever> fun!, lol
<clever> nix can also build each executable on a different machine, in parallel, if you have build slaves
<clever> exarkun: so you can build just one executable from the cabal file, even if the others fail to build or are slow to build
<clever> exarkun: haskell.nix supports per-component derivations
<clever> exarkun: yep
<clever> exarkun: what is in the components set?
<clever> exarkun: thats just the Setup.hs file
<clever> exarkun: oh, and first, `import ./that-file` in nix repl
<clever> exarkun: run `nix repl` on that file
<clever> inkbottle: services.xserver.enable = true; services.xserver.desktopManager.xfce.enable = true;
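A minimal configuration.nix sketch of those two lines, assuming the option names current in NixOS 19.09:

    { ... }:
    {
      services.xserver.enable = true;
      services.xserver.desktopManager.xfce.enable = true;
    }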
<clever> but the cache itself is a default entry, so your config overwrites it, and then merges with others
<clever> the key is a non-default entry, so it will merge with your key definitions
<clever> because the key isnt set as default!
<clever> ahh
<clever> its a list type, so if you set it several times, it will concat the lists
<clever> so if you set that option to anything at all, the default vanishes
<clever> its the default value
<clever> exarkun: you will also want to double-check /etc/nix/nix.conf and confirm if cache.nixos.org is still in the config
<clever> you likely want to use the iohk binary cache
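A sketch of what that looks like as NixOS options, assuming the nix.binaryCaches / nix.binaryCachePublicKeys names of that era; the iohk cache URL and key are placeholders to check against the real haskell.nix docs:

    {
      # this option has a default of [ "https://cache.nixos.org/" ], so defining it
      # at all replaces that default; list the default cache again explicitly
      nix.binaryCaches = [
        "https://cache.nixos.org/"
        "https://hydra.iohk.io"                   # placeholder for the iohk cache
      ];
      # no default entry here, so multiple definitions just concatenate
      nix.binaryCachePublicKeys = [
        "hydra.iohk.io:<public key goes here>"    # placeholder
      ];
    }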
<clever> oh, thats the build of nix-tools itself
<clever> exarkun: if you dont, it will just build EVERYTHING, rather than just what you depend on
<clever> exarkun: did you use -A to tell it what to build?
<clever> exarkun: the problem is that you are no longer obeying nixpkgs, and are instead obeying a stackage snapshot
<clever> exarkun: stack2nix has the same limitation
<clever> inkbottle: yeah
<clever> inkbottle: you just need to make sure the configuration.nix you're installing can access wifi after the install is done and you've booted it up
<clever> inkbottle: if you can open google in a browser, then your network is already working
<clever> emily: i mostly just ignore the partition type codes, and trust blkid instead
<clever> but with how sloppy every motherboard firmware is....
<clever> __monty__: the bios is supposed to only boot from a correctly tagged esp partition...
<clever> both root and /boot have "Linux filesystem data"
<clever> i'm using the wrong uuid type on /boot!
<clever> now that you mention it.....
<clever> inkbottle: ef00 is shorthand for a longer uuid
<clever> which means that when it does fail, all attempts to debug it will fail
<clever> /boot being on zfs "works" but all attempts to debug it claim it shouldnt
<clever> inkbottle: this script supports /boot being vfat, ext4, or just a directory on / (which is zfs)
<clever> vfat = "mkfs.vfat $NIXOS_BOOT -n NIXOS_BOOT";
<clever> inkbottle: and line 54
<clever> ${mkBootTable.${cfg.bootType}}
<clever> ${lib.optionalString (cfg.bootType != "zfs") "1 : size=${toString (2048 * cfg.bootSize)}, type=0FC63DAF-8483-4772-8E79-3D69D8477DE4"}
<clever> it allows bios booting on gpt
<clever> inkbottle: thats only used if cfg.efi == false
<clever> inkbottle: you can also generate a usb image, that has justdoit pre-installed
<clever> inkbottle: the kexec.nix in the same dir, lets you boot nixos from ram, from another os
<clever> inkbottle: you literally just ssh into the installer, and run justdoit, and it does it
<clever> inkbottle: yep
<clever> and leave configuration.nix in /etc/nixos/
<clever> Athas: what i do, is add `imports = [ /home/clever/nixos-configs/whatever.nix ];` to configuration.nix
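Roughly what that configuration.nix ends up looking like; the paths are just clever's example, kept for shape:

    { config, pkgs, ... }:
    {
      imports = [
        ./hardware-configuration.nix
        /home/clever/nixos-configs/whatever.nix
      ];
    }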
<clever> and performance began to suffer massively, because one of the layers had a plain array, and searched it one by one
<clever> Taneb: i saw a blog, where a guy tried making 20,000 drives, just to see what would break first
<clever> werner291: then youll need to get the network up first, using something like dhclient or dhcpcd, or ip
<clever> waleee-cl: no clue
<clever> werner291: https://termbin.com/
<clever> waleee-cl: comment it out and see if any changes happen
<clever> waleee-cl: overrides and overlays
<clever> i'll need to see the config files then
<clever> are you able to access the grub menu?
<clever> werner291: can you just boot an older generation from the grub menu?
<clever> werner291: i have no idea how you broke it this badly, lol
<clever> werner291: can you gist the hardware-configuration.nix and configuration.nix files?
<clever> werner291: while waiting for which device to appear?
<clever> rihardsk[m]: those have a version# in the 300's
<clever> rihardsk[m]: its not specifically the cuda libraries, but the nvidia libraries
<clever> rihardsk[m]: hence, the need for an override, lookup overrides in the nixpkgs manual
<clever> rihardsk[m]: nixos avoids that problem, by using the same file to configure both the driver and libraries, so they always match
<clever> rihardsk[m]: and it will break every time ubuntu updates things
<clever> rihardsk[m]: the problem is that the version you need might not be in nixpkgs
<clever> rihardsk[m]: you might need overrides to enforce that, and it will break when ubuntu updates things
<clever> rihardsk[m]: so its simplest if you just use nixos with cuda
<clever> rihardsk[m]: and the kernel driver must match up to the linux kernel you're using
<clever> rihardsk[m]: the version of the cuda libraries you use, must match the version of the kernel driver
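A generic sketch of the override pattern being pointed to above; the package and attribute names are hypothetical, the real ones come from the nixpkgs manual:

    let
      pkgs = import <nixpkgs> {};
      # hypothetical: pin the cuda toolkit a package builds against to the
      # version that matches the installed kernel driver
      myTool = pkgs.someCudaTool.override {
        cudatoolkit = pkgs.cudatoolkit_10_1;
      };
    in myTool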
<clever> exarkun: stack-to-nix is its own repo, check #haskell.nix
<clever> exarkun: and then stack-to-nix will just import the right one, rather than generating it again
<clever> exarkun: stackage.nix is a git repo, that contains nix exprs, for every version, of every package, on all of stackage
<clever> exarkun: which is what haskell.nix is now doing
<clever> exarkun: but at one point, it changed, and it now generates output for every single expr in the entire stackage snapshot
<clever> exarkun: the old stack2nix generated cabal2nix output only for the packages you depend on
<clever> enteee: 169.254 happens when dhcp fails to get an ip addr
<clever> incompatibleKernelVersion is still commented out in release-19.09
<clever> the commit that broke it wasnt backported
<clever> 19.09 also wasnt broken
<clever> doesnt look like it
<clever> pbb: then zfs support isnt enabled, ....
<clever> nix-repl> options.boot.supportedFilesystems.files
<clever> pbb: what about this?
<clever> nix-repl> options.boot.supportedFilesystems.value
<clever> [root@amd-nixos:~]# nix repl '<nixos/nixos>'
<clever> pbb: `nixos-option boot.supportedFilesystems` ?
<clever> alexarice[m]: and your working dir is $NIX_BUILD_TOP when things begin
<clever> alexarice[m]: TEMP, TMP, and TEMPDIR are already set to $NIX_BUILD_TOP
<clever> pbb: can you pastebin the full error message?
<clever> alexarice[m]: export HOME=$NIX_BUILD_TOP
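A sketch of doing that inside a derivation, assuming a stdenv build that insists on a writable $HOME; the name and src are placeholders:

    pkgs.stdenv.mkDerivation {
      name = "needs-home";
      src = ./.;                      # placeholder source
      preBuild = ''
        # TMP/TEMP/TEMPDIR already point at the build dir; HOME does not
        export HOME=$NIX_BUILD_TOP
      '';
    }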
<clever> mojjo: also, every time you do `import ./something { inherit pkgs; }` you can instead do `pkgs.callPackage ./something {};` and it will pass `pkgs` for you
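The two spellings side by side, with a hypothetical foo.nix:

    # manual style: you pass every argument yourself
    foo = import ./foo.nix { inherit pkgs; };
    # callPackage style: nixpkgs fills in whatever arguments foo.nix declares
    foo = pkgs.callPackage ./foo.nix { };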
<clever> mojjo: import
<clever> danderson: /run/current-system will probably point to it

2019-09-23

<clever> rawtaz: that builds the avr firmware for my thermostat, so it gives you an avr-gcc compiler
<clever> and then the initrd generation script for nixos, puts x86 libs, into an arm initrd!
<clever> tilpner: if you use the arm ldd on an arm binary, the x86 ld.so for qemu itself will print the qemu deps out
<clever> tilpner: ldd works by setting a special env var, that causes ld.so to print the libs as it loads them, then exit, rather than running main()
<clever> tilpner: and it must be statically linked, or the "guest" ldd breaks
<clever> tilpner: qemu-user wasnt packaged in nixpkgs at the time i wrote the module
<clever> tilpner: and non-descript errors like gcc cant create valid executables
<clever> tilpner: glib's build framework was changed, causing it to not obey the static build flags i set, causing everything to implode
<clever> tilpner: and it fixed the qemu build issues?
<clever> nalck: does the service fork itself into the background? does the ExecStop actually stop it?
<clever> nalck: systemd should stop the service for you, but you may have that mis-configured
<clever> __red__: which should exist by default if you're using nixos
<clever> __red__: that will only exist if you have a channel called nixos, on root
<clever> tilpner: i'll need to look at that and see what it does...
<clever> das_j: not recently
<clever> m15k: same place all the share stuff winds up
<clever> /run/current-system/sw/share/bash-completion/completions/
<clever> tokudan: (import <nixpkgs> {}).fetchFromGitHub
<clever> tokudan: you could also import a 2nd pkgs
<clever> tokudan: because you cant use pkgs in imports, need to use `builtins.fetchTarball` instead, or nix-channel
<clever> tokudan: it will convert the path to a string automatically
<clever> and fetchFromGitHub is a derivation
<clever> tokudan: derivations are paths
<clever> tokudan: imports = [ "${nixos-hw}/foo/bar.nix" ];
<clever> rhitakorrr: but stack2nix level stuff, typically just gets git rev's from the stack file, and doesnt know what the sha256 of things are, so it cant purely generate the nix
<clever> rhitakorrr: cabal2nix needs a copy of the src, and you can usually get that with `fetchFromGitHub` or related
<clever> akamaus: if you want to replace the derivation, you need an overlay
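A minimal overlay sketch; the attribute name and replacement file are placeholders, and it would be loaded via nixpkgs.overlays or ~/.config/nixpkgs/overlays/:

    self: super: {
      somePackage = self.callPackage ./my-somePackage.nix { };
    }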
<clever> DariusTh`: you can use `nix-copy-closure` to copy that between 2 machines, as long as you have ssh from one to the other
<clever> rizary_: virtualisation.docker.enable = true;
<clever> rizary_: you need to enable docker in configuration.nix to get the daemon running
<clever> DariusTh`: that can only reasonably be done if you have network access to a machine that can provide that software
<clever> DariusTheMede: can you not use nix-shell to develop elsewhere, and then copy the final product over when its done?
<clever> DariusTheMede: who is "they" and why do you need nix-shell to work on such a restricted machine?
<clever> DariusTheMede: why is the network and root access so locked down on this Persia machine?
<clever> DariusTheMede: you could even use `ssh -R` to forward the whole cache.nixos.org, if you mess with Persia's /etc/hosts
<clever> DariusTheMede: then it will be allowed to ssh into the mac
<clever> DariusTheMede: why not forward a port the other way, with `ssh -R` ?
<clever> so this basically replaces the whole dkms mess that i think ubuntu made to solve the problem?
<clever> shyim: it will then use that to get modules like nvidia, for 5.2
<clever> shyim: you must give it a set, like pkgs.linuxPackages_5_2
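A sketch of the configuration.nix fragment being described, assuming the linuxPackages_5_2 attribute that existed at the time:

    { pkgs, ... }:
    {
      # out-of-tree modules such as the nvidia driver are then taken from this
      # same set, so kernel and module versions always line up
      boot.kernelPackages = pkgs.linuxPackages_5_2;
    }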
<clever> DariusTheMede: and which is the client with no network access?
<clever> DariusTheMede: run `ifconfig` on both machines, do they share a subnet? can one ssh to the other?
<clever> DariusTheMede: also, if your vpn is blocking internet access, you should probably look into fixing it to not
<clever> DariusTheMede: you can use an IP as well
<clever> that will just fetch whatever it can from the remote machine
<clever> DariusTheMede: nix-shell --option substituters ssh://user@host
<clever> DariusTheMede: you need to copy everything nix wants, oh, and ssh substituters
<clever> DariusTheMede: you will still want to disable using the cache
<clever> DariusTheMede: on the remote machine, from `ssh remote nix-store --version` when it fails
<clever> DariusTheMede: try adding a .bashrc with similar code?
<clever> DariusTheMede: darwin's bash will never run a global file from /etc when doing non-interactive ssh
<clever> DariusTheMede: on darwin, you must mess with .bashrc in the home dir
<clever> DariusTheMede: they dont have to be the same os, and you can have binaries from the "wrong" os in your /nix/store/
<clever> DariusTheMede: it copies storepaths from one machine to another, both machines need nix installed
<clever> DariusTheMede: and `host` is the machine that did have internet access, and has a copy of the thing
<clever> DariusTheMede: you would run `nix-copy-closure --from host` on the destination
<clever> just `nix-copy-closure --from host /nix/store/whatever` and ignore all the key junk
<clever> DariusTheMede: if its single-user, then you dont need root
<clever> DariusTheMede: singleuser or multiuser nix install?
<clever> DariusTheMede: then run it as root via some other means
<clever> step 2, sip some hot-chocolate
<clever> DariusTheMede: step 1, sudo nix-copy-closure --from foo /nix/store/bar
<clever> DariusTheMede: it also has a man page
<clever> DariusTheMede: for that, you want nix-copy-closure, which needs ssh between the 2 machines
<clever> DariusTheMede: pre-download things from the binary cache, before you go offline
<clever> DariusTheMede: that just says what version of everything to download/build
<clever> DariusTheMede: if you disable the binary cache, it cant download a pre-built copy of bash, so it has to instead download the source for bash
<clever> exarkun: regular old print?
<clever> exarkun: there is a special flag, to make the build "pass" even if it fails
<clever> exarkun: you do
<clever> exarkun: the nixos tests should write that to $out i believe, try the index.html in $out
<clever> exarkun: it will open $LOGFILE, and if that env var isnt set, /dev/null
<clever> or not, lol
<clever> i'm guessing thats in the perl docs
<clever> which comes from `use Logger;`
<clever> exarkun: and that log argument comes from `new Logger`
<clever> exarkun: the `new` function receives a log argument
<clever> c0c0: what does `stty` say about `erase` ? on both the local machine, and after you ssh into the remote
<clever> but nothing says you cant declaratively generate a fake "state" and restore it on bootup
<clever> the original use for save&restore, was to just save at shutdown, then restore on bootup, so your firewall becomes a giant mess of state, that just persists thru reboots, lol
<clever> gchristensen: the rest, is just a partial iptables argument list, with the `-t raw` omitted
<clever> -A PREROUTING -j nixos-fw-rpfilter
<clever> gchristensen: this is setting the default target for the chain, and the numbers are packet and byte counters, if you wanted to save them on shutdown and restore on bootup
<clever> :PREROUTING ACCEPT [20006224:1592249935]
<clever> gchristensen: basically, each table is just a `*nat` line, some update operations, and a `COMMIT` line, and everything is applied atomically
<clever> docker might also do something, cant remember
<clever> gchristensen: some services like fail2ban also mutate the rules, on the fly
<clever> gchristensen: but, that conflicts with extraCommands, and a lot of junk wants to use bash loops to run iptables multiple times
<clever> gchristensen: its very simple
<clever> the problem, is that you have to generate output similar to `iptables-save` from nix, and then use that
<clever> cat ${something} | iptables-restore, will do the entire update, many rules, in a single atomic operation
<clever> so if you run `iptables -A` 20 times, the 20th call is 20x slower than the 1st call
<clever> but, copying that table, gets slower, every time you add a rule
<clever> and any packets that arrive after the pointer-update will use the new rules
<clever> any time you modify the rules, you must copy the entire table, (and modify the copy), then update a single pointer
<clever> the reason, is that the kernel is using RCU lists for the firewall, read-copy-update
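A rough sketch of generating an iptables-restore style file from nix, assuming pkgs.writeText and a hypothetical one-shot service; the chain names and rules are illustrative, not the real nixos-fw ruleset:

    { pkgs, ... }:
    let
      # chain declarations with counters, rules, then COMMIT, like iptables-save output
      rules = pkgs.writeText "iptables.rules" ''
        *filter
        :INPUT ACCEPT [0:0]
        :my-fw - [0:0]
        -A INPUT -j my-fw
        -A my-fw -p tcp --dport 22 -j ACCEPT
        COMMIT
      '';
    in {
      systemd.services.load-firewall = {
        wantedBy = [ "multi-user.target" ];
        path = [ pkgs.iptables ];
        serviceConfig.Type = "oneshot";
        # the whole ruleset is applied in one atomic call
        script = "iptables-restore < ${rules}";
      };
    }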
<clever> id also love to set it to use iptables-restore
<clever> it should support allowing a port on a given interface, without having to use extraCommands
<clever> yeah
<clever> eyJhb: there is also -I to insert at a given offset if you want to use that
<clever> eyJhb: id have to read it closer to see what exactly is different with the rp stuff
<clever> ah, ive not used rpfilter any yet, dont know how it differs
<clever> the log-refuse is handled separately from extraCommands
<clever> ah, this explains the part i just said i didnt know
<clever> which says that anything not matching a thing in `nixos-fw` will go to `nixos-fw-log-refuse`
<clever> -A nixos-fw -j nixos-fw-log-refuse
<clever> i dont know why (from -save only) but this line appears after the 3 `-t filter -A nixos-fw` in my router.nat.nix
<clever> in my case, `-A INPUT` only has a single entry, that tells it to run thru `-A nixos-fw` next
<clever> -A INPUT -j nixos-fw
<clever> eyJhb: run `iptables-save`, look under `*filter` and look at the `-A INPUT` area, all packets start there
<clever> eyJhb: have you looked at the example router.nat.nix i linked above?
<clever> and nixos-fw-accept has no default, so you can append more to it
<clever> eyJhb: i believe the drop only happens after it has tried nixos-fw-accept
<clever> using mkAfter fixes that
<clever> eyJhb: in my case, the commands were being run before nixos-fw-accept had been created, causing failure
<clever> eyJhb: if you set extraCommands multiple times, the order when it merges things isnt always obvious, and is based on the order of things in the imports array
<clever> eyJhb: thats due to how the merging works with types.lines
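A sketch of the mkAfter fix being described; the interface and port are just examples:

    networking.firewall.extraCommands = lib.mkAfter ''
      # mkAfter pushes this definition after the other extraCommands definitions
      # when the module system merges them, so nixos-fw-accept already exists
      iptables -A nixos-fw -i eth0 -p tcp --dport 8080 -j nixos-fw-accept
    '';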
<clever> eyJhb: one minute
<clever> and the tests cant easily test that, because you have to build 2 configs, boot one, switch to the 2nd, then confirm state
<clever> eyJhb: there are a lot of edge cases with rebuild switch
<clever> eyJhb: what i just said
<clever> its expecting the start to overwrite for you
<clever> and the stop entry is configured to not turn things off, because you dont want a restart to leave you without a firewall
<clever> eyJhb: but if you disable the firewall, it just doesnt re-run the firewall service
<clever> eyJhb: the issue is more that, if you change the rules, the firewall script gets restarted, and applies the changes
<clever> > (import <nixpkgs> { config.oraclejdk.accept_license = true; }).hello
<clever> and config = { has the highest
<clever> $NIXPKGS_CONFIG has even higher priority
<clever> hodapp: ~/.config/nixpkgs/config.nix is the new location, and it has priority
<clever> hodapp: does ~/.config/nixpkgs/config.nix exist?
<clever> hodapp: what did you put in that file?
<clever> hodapp: probably in config.nix
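A sketch of what ~/.config/nixpkgs/config.nix might hold; the oraclejdk flag mirrors the repl example above, and allowUnfree is just another common entry:

    {
      allowUnfree = true;
      oraclejdk.accept_license = true;
    }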
<clever> i opened https://github.com/NixOS/nix/issues/1256 to make pkgs.fetchgitPrivate simpler to use, while remaining sandboxed
<clever> because making pkgs.fetchgitPrivate work securely isnt easy
<clever> ajs124: to be able to clone private git repos, with your ssh agent working