2020-05-19

<clever> alj[m]: nix-instantiate '<nixpkgs/nixos>' -A system, before and after the change
<clever> alj[m]: you just want the top-level drv for all of nixos
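The before/after comparison described above can be sketched as follows (the edit in between is whatever change you are testing; nix-diff is a third-party tool, and this assumes a working nix install):

```shell
# instantiate the top-level nixos drv before the change
nix-instantiate '<nixpkgs/nixos>' -A system > before.txt
# ...edit configuration.nix...
nix-instantiate '<nixpkgs/nixos>' -A system > after.txt
# nix-diff explains exactly why the two drvs differ
nix-diff "$(cat before.txt)" "$(cat after.txt)"
```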
<clever> typetetris: its usually in the nixos release notes
<clever> what you set it to isnt that important, its more important that it doesnt change without you planning for that change
<clever> typetetris: but its then set using mkDefault, so your config can still override things
<clever> typetetris: the first time nixops deploys, it will query the remote machine for its stateVersion, and then put that into the internal sqlite database
<clever> syd: yep
<clever> syd: oh, and if you `journalctl -o json`, you can see the names that this library wants
<clever> syd: and something close to this, from the conduit side
<clever> (Just (Journal.Match systemdUnitField "sshd.service"))
<clever> `journalctl -t systemd | grep exited` from the CLI
<clever> though, if you want all units, just `-t systemd` then
<clever> syd: if you filter on both a unit and a syslog identifier, you get only the start/stop messages that systemd generates
<clever> [root@amd-nixos:~]# journalctl -u display-manager -t systemd
<clever> syd: but you can filter on whatever you want
<clever> syd: in this case, it filters on sshd, and translates ssh pubkey hashes, back into pubkeys and usernames
<clever> syd: this uses conduits and a journal library to create a stream of journal messages
<clever> syd: one min
<clever> syd: do you know haskell?
<clever> syd: yeah
<clever> syd: it would likely be simpler for you to just watch the journal for things exiting, and then act on that instead
<clever> syd: its not really possible to override that, youll need to just edit the module that created the option to begin with
<clever> alj[m]: find the .drv from before and after, and run nix-diff on both of them
<clever> syd: so it has to call mapAttrs before it can know what the args to mapAttrs are
<clever> syd: nix must first know what attrs config.systemd.user.services contains, before it can give you the value of config.systemd.user.services
<clever> syd: you can never reference a given option, while defining the value of the same option
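A minimal sketch of the rule above (the service definitions are illustrative, not from the original conversation):

```nix
{ config, lib, ... }: {
  # BAD: infinite recursion; computing the option's value requires
  # already knowing the option's value, so mapAttrs can never be called
  #   systemd.user.services =
  #     lib.mapAttrs (n: v: v) config.systemd.user.services;

  # OK: derive the value from a plain attrset you define yourself
  # (or from a *different* option)
  systemd.user.services = lib.mapAttrs (name: cmd: {
    script = cmd;
  }) { hello = "echo hello"; };
}
```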
<clever> tareluerlz2: the benefit with zfs, is that free space is shared equally, rather than you having to allocate a large chunk ahead of time
<clever> tareluerlz2: with lvm, you can easily make another block device in the array, name it clearly, and then format it and mount as / for nixos
<clever> tareluerlz2: nixos tends to break if other things are in /etc, /bin, and /lib, so its best to give nixos its own rootfs, zfs and lvm make that a lot simpler to manage
<clever> tareluerlz2: i think the "proper" way to do multiboot on efi, is that each OS registers itself in the efi vars, and the bios lets you pick one
<clever> pistache: id just go ahead with the pr then
<clever> pistache: test to see what happens with a youtube-dl=null override first, and see if it breaks any
<clever> then it will leak in or fail
<clever> pistache: or you could just not include youtube-dl in the wrapper, when you .override { youtube-dl = null; }
<clever> pistache: could make it a flag you can change with .override
<clever> pistache: i think its mostly just personal preferences of whoever made the package
<clever> so mpv works the same everywhere, no matter what youtube-dl you have in PATH
<clever> pistache: i'm guessing its just to prevent the users old youtube-dl from breaking things
<clever> hpfr[m]: pkgs.fetchurl creates a derivation that will fetch things later at build time
<clever> hpfr[m]: builtins.fetchurl happens at eval time, which forces the eval to stop while it downloads
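The contrast above, as a sketch (the URL and sha256 are placeholders):

```nix
let pkgs = import <nixpkgs> {}; in {
  # build-time fetch: evaluation finishes immediately, the download
  # only happens later, when the derivation is built
  atBuild = pkgs.fetchurl {
    url = "https://example.org/file.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };
  # eval-time fetch: nix-instantiate itself blocks on the download
  atEval = builtins.fetchurl "https://example.org/file.tar.gz";
}
```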
<clever> //
<clever> morgrimm: what would the interaction do?

2020-05-18

<clever> betawaffle: my Makefile just assumes nix and always uses $out
<clever> and if you dont like a change you did to the side, nixos-rebuild rollback!
<clever> when you nixos-rebuild, nix will copy the entire dir into /nix/store, and then point apache to the copy
<clever> set the documentRoot = /home/clever/something; without quotes
<clever> or you could do things in a more nix-friendly way
<clever> and use a symlink to point <website> to that
<clever> you could also just put things at /var/www/htdocs instead
<clever> so if you only have +x on a dir, you can read files, but only if you know the names
<clever> read on a dir lets you know what the contents actually are
<clever> Guest7: execute on a dir lets you cd into it, and interact with its contents
<clever> weird things also happen if you have +r but -x, you can list a dir, but you cant cd to it, or do anything with its children
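The directory-permission behaviour described above can be demoed like this (run as a normal user; root bypasses these checks entirely):

```shell
# demo: execute-only vs read-only directory bits
mkdir -p demo
echo hello > demo/known-name
chmod 111 demo                   # --x--x--x: may enter, may not list
cat demo/known-name              # works: the name is already known
ls demo 2>/dev/null || echo "listing denied"
chmod 444 demo                   # r--r--r--: may list, may not enter
ls demo 2>/dev/null || true      # the names are visible...
cat demo/known-name 2>/dev/null || echo "open denied"
chmod 755 demo                   # restore normal access
```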
<clever> jlv: the Shell variant will add a #! for you
<clever> > pkgs.writeShellScriptBin "name" "body"
<clever> jlv: writeScriptBin and writeShellScriptBin will solve that issue
<clever> jlv: so your script is a literal file of /nix/store/hash-name, but systemPackages will try to do PATH=/nix/store/hash-name/dir, which doesnt exist
<clever> jlv: writeScript puts the binary directly at $out, but systemPackages puts the thing in $PATH
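The layout difference described above, sketched (the script name and body are placeholders):

```nix
let pkgs = import <nixpkgs> {}; in {
  # /nix/store/hash-greet -- a bare file, so systemPackages cannot
  # wire it into PATH
  flat = pkgs.writeScript "greet" "#!/bin/sh\necho hello";
  # /nix/store/hash-greet/bin/greet -- has a bin/ dir, so PATH wiring
  # works; the Shell variant also adds the #! line for you
  inBin = pkgs.writeShellScriptBin "greet" "echo hello";
}
```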
<clever> finnwww[m]: either exit fully to the login prompt, or hit alt+f2
<clever> finnwww[m]: you dont have to reboot, just login again
<clever> finnwww[m]: what if you properly login as finn, without using su?
<clever> finnwww[m]: and if you close the terminal and re-open it, does `echo $NIX_PATH` change?
<clever> finnwww[m]: and `echo $NIX_PATH` ?
<clever> finnwww[m]: and `ls ~/.nix-defexpr/channels/` ?
<clever> tdeo: you still need to run cargo in the nix-shell
<clever> all of the linker stuff will run into errors if you try to use it outside of nix-shell
<clever> the compiler will only work inside nix-shell
<clever> puts the cross gcc into your shell
<clever> that creates a derivation that can build "name" on windows, and then
<clever> tdeo: nix-shell -E 'with import <nixpkgs> {}; pkgsCross.mingwW64.stdenv.mkDerivation { name = "name"; }'
<clever> tdeo: for which target?
<clever> finnwww[m]: what does `nix-channel --list` say?
<clever> finnwww[m]: not sure, you either want to add the channel as root (sudo nix-channel --add ...) or run install without root
<clever> finnwww[m]: if you want root to see the channel, you must add the channel as root
<clever> finnwww[m]: oh, your using channels?
<clever> finnwww[m]: the only thing <home-manager> does is lookup the string you had put into NIX_PATH, but you can just skip that indirection
<clever> finnwww[m]: nix-shell http://something -A install
<clever> finnwww[m]: you can also just skip NIX_PATH entirely
<clever> finnwww[m]: sudo on some distros will undo env vars, do just `sudo -i`, then try the export in the root shell
<clever> infinisil: oh, nice
<clever> finnwww[m]: you need to put home-manager into NIX_PATH or do -I home-manager=something
<clever> qyliss: yeah, its also called _main, because EVERY binary has since been merged into a single one, and it uses argv[0] even more, to do the same logic
<clever> superbaloo: it checks argv[0] and changes its behavior based on that
<clever> superbaloo: nix-shell is just nix-build
<clever> superbaloo: and if you read that mkshell/default.nix from nixpkgs, youll see exactly what its doing
<clever> 426 mkShell = callPackage ../build-support/mkshell { };
<clever> > builtins.unsafeGetAttrPos "mkShell" pkgs
<clever> > builtins.unsafeGetAttrPos "mkShell" stdenv
<clever> superbaloo: mkShell just runs mkDerivation with a few arguments
<clever> superbaloo: you need to replace stdenv.mkShell with stdenv.mkDerivation, and add a src = ./.;
<clever> superbaloo: if you want nix-build to build the same expr, run `nix-build shell.nix`
<clever> superbaloo: then it uses whatever nix expression was in the shell.nix
<clever> superbaloo: what args did you use?
<clever> superbaloo: it depends on what arguments you ran nix-shell with
<clever> superbaloo: it evals a nix expression to a drv, builds every input to that drv, then puts the env from the drv into its own env, and launches a shell
<clever> abathur: it should probably have a name= added to the fetchurl, to make it cleaner, and/or use fetchpatch
<clever> abathur: ah, the nix function just grabs whatever is after the last / and uses that as a name
<clever> abathur: sounds like the name= is "wrong"

2020-05-17

<clever> Ashy: without nscd, you have no local dns cache, and no dns plugins
<clever> Ashy: nscd is needed for avahi .local domains and systemd .machine domains
<clever> betawaffle: the karma feature is managed by infinisil
<clever> evils: yeah, its definitely losing track, lol
<clever> (probably)
<clever> so you want -lwayland-client
<clever> and server, and others
<clever> betawaffle: there is no libwayland.so, only libwayland-client.so
<clever> $ ls wayland/lib
<clever> $ nix-build '<nixpkgs>' -A wayland -o wayland
<clever> betawaffle: did you add wayland to buildInputs?
<clever> betawaffle: cc ${./main.c} -o $out -lfoo, and add foo to { buildInputs = [ foo ]; }
<clever> c00w: cc ${./main.c} -o $out
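Those one-liners as a complete expression (assumes a ./main.c next to the nix file; runCommandCC is used because plain runCommand has no cc in scope):

```nix
with import <nixpkgs> {};
runCommandCC "main" { buildInputs = [ wayland ]; } ''
  cc ${./main.c} -o $out -lwayland-client
''
```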
<clever> a static qemu-aarch64 cant do that, so the var properly leaks into the guest
<clever> rather than emulating the arm ld.so
<clever> and if that var was set, when qemu-aarch64 got ran, it would print the x86 deps and exit
<clever> citadelcore: try `LD_TRACE_LOADED_OBJECTS=1 ls` and youll see what i mean
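What ldd does under the hood, roughly (output depends on your libc and on /bin/ls being dynamically linked):

```shell
# ld.so sees the variable, prints the resolved .so dependencies,
# and exits before ls's main() ever runs
LD_TRACE_LOADED_OBJECTS=1 /bin/ls
```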
<clever> the same way #!/bin/sh runs sh on the shell script
<clever> and ld.so is the interpreter (its a field in the ELF file), so running a binary normally also runs ld.so on it
<clever> citadelcore: ldd is just a shell script to run ld.so on a binary, with that env var set
<clever> citadelcore: if ld.so discovers that you set the env var LD_TRACE_LOADED_OBJECTS=1, it will print the libs and exit without running main()
<clever> citadelcore: though, a dynamic qemu-user also breaks ldd, because the host ld.so responds and prints host-deps instead!
<clever> citadelcore: if it was dynamic, youd also have to let /lib and /usr/lib in, since nix cant track the deps of a non-nix binary
<clever> citadelcore: i'm guessing your qemu-aarch64 is also a static binary?
<clever> citadelcore: you need to add the qemu-user to this field in nix.conf
<clever> extra-sandbox-paths = /run/binfmt /nix/store/pax822l4mrxj5naiyv3qayvqrnhl0as3-qemu-user-aarch64-3.1.0 /etc/nsswitch.conf /etc/protocols /usr/bin/env=/nix/store/9v78r3afqy9xn9zwdj9wfys6sk3vc01d-coreutils-8.31/bin/env
<clever> citadelcore: and its the qemu-user binary that was not found
<clever> citadelcore: the problem, is that binfmt-misc changed argv[0] to your qemu-user binary
<clever> citadelcore: did you tell nix to allow the qemu-user-aarch64 binary into the nix sandbox?
<clever> citadelcore: and are you on an arm system?
<clever> citadelcore: what does file say about the binary?
<clever> teto: if you click the padlock, you should see options for things like media in there, for that specific domain
<clever> quinn: i think theres something like just deleting a state file or rebooting to fix that?
<clever> pie_: if your only formatting, you can still set a label, and use LABEL= in the configuration
<clever> pie_: and then `nix copy` it over, after another script (like justdoit) has partitioned things
<clever> pie_: if you know the full layout of the hw, and your also in control of the partitioning, you could pre-build the configuration.nix/nixos in the same container
<clever> nice
<clever> pie_: also, /mmnt may now exist...
<clever> pie_: thatll do it
<clever> pie_: does it exist at /mnt/nix/store/foo?
<clever> pie_: i went thru all of my irc logs, to get every example i could find, and then began documenting the ones that looked interesting
<clever> pie_: one sec
<clever> pie_: if you then pre-build a nixos from a given config, you can copy it over, and then just tell nixos-install to fix the bootloader and your done!
<clever> pie_: so you can now copy a storepath from one machine to another, but the destination is /mnt/nix/store/
<clever> pie_: then the local?root=/mnt works as usual, but against the remote nix
<clever> `?remote-store=` changes the URI the remote nix uses
<clever> pie_: `ssh://root@target` tells nix to ssh into target, and run nix on the remote box to access the store
<clever> nix copy --to ssh://root@target?remote-store=local?root=/mnt /nix/store/hash-nixos
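The whole remote-chroot flow sketched end to end (the hostname and the nixos-install wiring are assumptions, not a verbatim recipe from this conversation):

```shell
# build the full system closure locally
nix-build '<nixpkgs/nixos>' -A system -I nixos-config=./configuration.nix
# copy it into /mnt/nix/store on the target, via the target's own nix
nix copy --to 'ssh://root@target?remote-store=local?root=/mnt' "$(readlink ./result)"
# after that, only the bootloader is left for nixos-install to fix up
```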
<clever> pie_: that lets you build the entire nixos on one machine, then copy it over ssh to /mnt/ on a remote box
<clever> pie_: have you seen my remote chroot idea?
<clever> yeah
<clever> nixos-enter is using namespacing to set things up, and that also has bugs that break the nix sandbox, --option sandbox false
<clever> pie_: you need to `nix copy --from local?root=/mnt /nix/store/foo` to copy it back out to the host
<clever> pie_: but there are some bugs in the IFD stuff, where its looking on the host instead
<clever> pie_: its using local?root=/mnt/ to fake a chroot
<clever> numkem: it becomes a lot more powerful if you point it to something with actual nix files, then you can import them
<clever> numkem: or `niv update jq -a branch=testing`
<clever> numkem: the main thing niv offers there, is making it trivial to update the rev/sha256, you just `niv update jq` and it will get the latest commit on the current branch
<clever> numkem: to replace the source of the jq package thats already in nixpkgs
<clever> numkem: thats a bit of a bad example, the most you can do with that is then `pkgs.jq.overrideAttrs (old: { src = sources.jq; })`
<clever> numkem: can you link that example?
<clever> numkem: you can then do `import sources.thing` to load the default.nix or `import "${sources.thing}/foo.nix"` to load foo.nix
<clever> numkem: niv just provides a path to whatever you added
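The niv workflow from this exchange, in one place (stedolan/jq is just an example repo):

```shell
niv init                          # creates nix/sources.json + sources.nix
niv add stedolan/jq               # pins the latest commit of the default branch
niv update jq                     # bumps rev/sha256 to the newest commit
niv update jq -a branch=testing   # or track a different branch
```

On the nix side, `let sources = import ./nix/sources.nix; in import sources.jq` (or `"${sources.jq}/foo.nix"`) then loads whatever was pinned.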
<clever> betaboon: though, from what i hear, most of this security code has been around since the rpi1, and others have already cracked it back then
<clever> betaboon: but due to them half-assing it (turning it on, but then throwing security out the window), they have forced me into RE'ing it more, and now half the secrets are out of the bag :P
<clever> betaboon: the rpi4 is basically setup to have videogame console level security, and if the firmware was done right, you could make it a nightmare to crack
<clever> betaboon: at this point, you need to melt the epoxy off the cpu with acid and start probing the bare die, or use undocumented vpu jtag stuff
<clever> betaboon: and boom, you now have secureboot and tpm, on an rpi4, and physical access suddenly requires MUCH higher skills, because the keys are on the same silicon as the cpu
<clever> betaboon: my custom bootcode.bin can then implement the TPM functionality, and also block the arm from ever reading OTP, and ensure all VPU firmware is also signed by a key i trust
<clever> betaboon: basically, there is a per-device signing key in the OTP memory, so if i got a "blank" cpu, i could program my own key, and then only a bootcode.bin i trust can ever run
<clever> betaboon: and with the right docs, you could maybe even make it more secure than all of the above, to the point where not even physical access can beat it
<clever> betaboon: in theory, you could also implement a lot of the above on an rpi4, if the per-device key wasnt burnt into it
<clever> betaboon: i dont really see any way to stop things if the attacker has physical access, so this is mostly just a software restriction
<clever> then its just down to the tpm being configured to self-destruct when you get it wrong too many times, and if you know what the right hashes were ahead of time
<clever> betaboon: but you can always just rip the tpm off the board, and then play a fake log to it directly
<clever> betaboon: and then it relies on the bios being authentic, so the logs get reported correctly
<clever> betaboon: if the chain of hashes is wrong, you must reset both the cpu and tpm, and its up to the motherboard&bios to ensure you can never reset only the tpm
<clever> betaboon: and if you can replay the same sequence of hashes, the tpm unlocks itself, and then you can use secrets
<clever> betaboon: i think the main thing you would have wanted is measured boot, where instead of restricting what runs, you just report the hash of each thing that runs (to the tpm), before passing on control
<clever> betaboon: tpm is a separate thing, parallel to secureboot
<clever> betaboon: and the end result, is that everything in kernel mode and your drivers are all signed and can be trusted, so you dont have to worry about kernel level rootkits
<clever> betaboon: and then its up to the user with the keys, to ensure that efi binary can only ever execute more code, that has been signed with approved keys
<clever> betaboon: then the bios will validate the .efi binary is correctly signed
<clever> betaboon: for a typical x86 machine, the bios would maybe be signed, and validated by the maskrom in the cpu/management engine (optional steps)
<clever> betaboon: the basic idea with secureboot is just to ensure that no binary in the boot chain is ever modified without permission from some user or group of users
<clever> betaboon: the email in this tweet claims the signatures are required for "safety certifications" ....???
<clever> betaboon: and it doesnt validate signatures for the 2nd stage, so any hope of secureboot style stuff is toast
<clever> betaboon: it also makes basically zero sense why the 1st-stage of the boot process is signed, it has never been signed on any other model
<clever> betaboon: which parts of that fall under the copyright act?....
<clever> betaboon: and now i can sign my own boot files, and run them on an rpi4
<clever> betaboon: and with the hint of hmac-sha1 from another user who had already cracked things, i found the routines and eventually the full keys
<clever> betaboon: once i knew that the 1st-stage of the rpi4 boot was signed, i set to work cracking it!, i knew the address of the mask rom, and was able to easily dump that and begin disassembly
<clever> lol
<clever> betaboon: so, just running a program, is now illegal? lol
<clever> betaboon: (also, i did "ram" copying to solve the rpi4 issue...)
<clever> betaboon: "ram copying", lol, so now memcpy() is illegal? :D
<clever> betaboon: so only authentic firmware can run!
<clever> betaboon: after a few days of stumbling around and my custom binary always being ignored, i opened a ticket, and eventually discovered, the 1st stage of the bootloader is signed
<clever> betaboon: everything sounds fine so far, right?
<clever> betaboon: and wanting to get as much open-source as possible, i wanted to replace both stages with open-source software
<clever> betaboon: that blob will enable the ddr4 controller, then load a start4.elf blob from SD, and execute it
<clever> betaboon: the rpi4 boot process, involves loading a blob from SPI flash and running it
<clever> betaboon: another edge case, is what ive been doing with the rpi4 lately
<clever> now what? lol
<clever> simpson: there is also the weird edge cases, like somebody buying the full rights to a once abandoned game, but the source has since been lost
<clever> simpson: and they mentioned that even discussing such things can be illegal in some areas
<clever> simpson: i saw a comment on a recent youtube video, where the topic was cracking DRM in abandoned games, to preserve them
<clever> hsngrmpf[m]: i would just do a manual patchelf
<clever> pie_: so if you set pathsToLink = [ "/foo" ];, then the $out/foo of everything in systemPackages will be merged together, creating /run/current-system/sw/foo
<clever> pie_: for each package in systemPackages, it will link that path
<clever> pie_: pathsToLink must be a list of strings, not store paths
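As a config fragment (using the same /foo placeholder as above):

```nix
{
  # merge the $out/foo of every systemPackages entry
  # into /run/current-system/sw/foo
  environment.pathsToLink = [ "/foo" ];
}
```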
<clever> keithy[m]: `rm -f` wont give itself write permissions to a dir, you have to chmod first
<clever> keithy[m]: `chmod -R +w /nix`
<clever> pie_: and where do you expect those links to show up?
<clever> pie_: how did you try using it? with what args?
<clever> pie_: and it will be forced to either download everything again from cache.nixos.org, or rebuild it
<clever> pie_: without that data, nix is also unable to grab store-paths from the squashfs
<clever> and then you can get a closure within the initrd
<clever> pie_: you may want to try reverting that 2-year-old revert, and see if its now working better
<clever> pie_: and you want $closure/nix-path-registration, which is no longer available due to this revert
<clever> pie_: and this generates a fake db.sqlite backup, that represents the closure of a list of paths
<clever> tricking nix into thinking those paths had been added as normal
<clever> pie_: and upon "first boot", this will restore db.sqlite from that file
<clever> pie_: in this case, the util for generating a squashfs, also made a nix-path-registration file
<clever> pie_: currently, nix will consider everything in /nix/store as garbage, and try to GC itself to death
<clever> the main thing missing from that, is importing a db.sqlite backup
<clever> pie_: the direct_stage_2 attr, will skip the squashfs entirely, and just put stage2 directly into the initrd
<clever> pie_: https://gist.github.com/cleverca22/c099a3e413b6e9622653e70c66a5efed i was asked that exact question, 4 months ago
<clever> finding the gist...
<clever> pie_: already have the answer! lol
<clever> hexagoxel: thats also been bothering me for months
<clever> pie_: most things under profiles work like that
<clever> pie_: you have to add <nixpkgs/nixos/modules/profiles/clone-config.nix> to the imports, before it will show up
<clever> oops
<clever> 2020-05-17 06:28:33 < hadrian[m]> Its a Yale YRD256 zwave (no bluetooth module installed). Free app on F-Droid opens it like butter :(
<clever> pie_: you have to add <nixpkgs/nixos/modules/profiles/clone-config.nix>, around line 60 (I can guess but i didnt check)
<clever> pie_: clone-config.nix isnt in the list of default nixos modules, so it wont show in the docs
<clever> pie_: now /etc/busybox points to the derivation, but its not in PATH, so the dozen utils i didnt want dont break things
<clever> 34 etc.busybox.source = pkgs.busybox;
<clever> 22 environment = {
<clever> pie_: i would just use /etc/
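The pasted fragment above, reassembled (the leading 22/34 were just editor line numbers):

```nix
{
  environment = {
    # exposes busybox at /etc/busybox without adding it to PATH
    etc.busybox.source = pkgs.busybox;
  };
}
```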
<clever> s
<clever> niso: this will iterate over the entire options tree, convert it to json, and then further convert that to various doc formats
<clever> JJJollyjim: due to laziness, it wont eval the pkgs.systemd, and will just ignore it

2020-05-16

<clever> but if you dont have many builders, it doesnt really matter
<clever> hpfr[m]: things like linux benefit heavily from more cores, so you would set big-parallel to something that gives more cores to make, and runs fewer derivations at once
<clever> hpfr[m]: yeah
<clever> hpfr[m]: and if a derivation lists big-parallel as required, it will only run on this builder
<clever> hpfr[m]: this machine has 3 features, including big-parallel
<clever> builder@192.168.2.15 i686-linux,x86_64-linux /etc/nixos/keys/distro 1 1 big-parallel,kvm,nixos-test
<clever> $ cat /etc/nix/machines
<clever> hpfr[m]: its just a nix level feature, to filter where builds can run
<clever> hpfr[m]: you have to declare the remote builder as supporting big-parallel
<clever> hpfr[m]: requiredSystemFeatures, big-parallel
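The derivation side of this, sketched (the name and phases are placeholders):

```nix
with import <nixpkgs> {};
stdenv.mkDerivation {
  name = "needs-many-cores";
  # only builders whose machines file advertises big-parallel
  # will be offered this build
  requiredSystemFeatures = [ "big-parallel" ];
  unpackPhase = "true";
  installPhase = "touch $out";
}
```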
<clever> cab404[m]: the default= for something may have changed in a way that breaks it, or the path to home-manager is in the store in a way that breaks it
<clever> so i need this to even know it exists, lol
<clever> nix-repl> x = pkgsCross.raspberryPi.callPackage ({ hello }: hello) {}
<clever> the biggest surprise, is that __spliced only exists after callPackage mangles the args
<clever> hello.__spliced.buildBuild
<clever> i did find an ugly way, let me reproduce it
<clever> but if i just { stdenv, buildPackages }: its not clear what i depend on, and how to override only hello
<clever> Ericson2314: when cross-compiling, i want hello from buildPackages, yeah
<clever> i found an ugly way thru __spliced, but it breaks when you .override
<clever> Ericson2314: one thing ive had trouble with, how do i refer to a host build when i do something like preBuild = "${hello}/bin/hello"; ?
<clever> not sure how this has happened twice now
<clever> michaelpj: for all of my systems, zfs is the rootfs, so it cant really reload zfs at runtime, so /dev/zfs should never really go away...
<clever> i think that forcibly renames it to zfs, and chmod's it
<clever> KERNEL=="zfs", MODE="0666", OPTIONS+="static_node=zfs"
<clever> michaelpj: and then `grep MODE /etc/udev/rules.d/90-zfs.rules` says what to match against it
<clever> michaelpj: i also suspected that DigitalKiwi's issue went away after a reboot, but we didnt confirm
<clever> michaelpj: `udevadm info -a -p $(udevadm info -q path -n /dev/zfs)`
<clever> michaelpj: `/etc/udev/rules.d/90-zfs.rules` should be handling it, let me check it closer...
<clever> michaelpj: the permissions on /dev/zfs are wrong now, there is a udev rule to fix it, but it seems to be broken
<clever> so you can filter by unit, rather than sub-service
<clever> pie_: systemd uses cgroups to track every process a unit spawns, and blames any logs from all of them on that unit file
<clever> pie_: you can also `journalctl -f -u xinetd.service` to get everything from the unit
<clever> pie_: you need to just blindly `journalctl -f -o json | grep something`, and then look at the SYSLOG_IDENTIFIER
<clever> pie_: nope, its something the service reports in each msg
<clever> pie_: and xinetd+tftpd reports under 2 different SYSLOG_IDENTIFIER from a single systemd unit
<clever> pie_: -u matches against the _SYSTEMD_UNIT field, -t filters on SYSLOG_IDENTIFIER instead
<clever> "_SYSTEMD_CGROUP":"/system.slice/xinetd.service","SYSLOG_IDENTIFIER":"tftpd","_SYSTEMD_UNIT":"xinetd.service","SYSLOG_IDENTIFIER":"tftpd",
<clever> [root@router:~]# journalctl -f -t tftpd -o json
<clever> pie_: ah, 2 different syslog names
<clever> May 16 04:14:31 router tftpd[14970]: tftpd: serving file from /tftproot
<clever> May 16 04:14:31 router tftpd[14970]: tftpd: trying to get file: bootcode.bin
<clever> [root@router:~]# journalctl -f -t tftpd
<clever> May 16 03:49:37 router xinetd[870]: START: tftp pid=14822 from=192.168.2.177
<clever> [root@router:~]# journalctl -f -t xinetd
<clever> pie_: the one i'm using does print some logs to the journal, but yeah, its pretty bare
<clever> pie_: and if that config matches the configuration.nix your installing, it will just copy it over