2019-09-23

<clever> though, if somebody can forge a collision on the final sha1, they could just include the whole real history in that as well
<clever> but if you dont have the entire chain, you cant validate that its fully authentic
<clever> so its basically a blockchain
<clever> git commits are a hash of the commit object, which contains the previous commit hash
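The hash-chain idea above can be sketched in a few lines. This is a toy model, not git's real object format — real git hashes a `commit` object containing tree, parent, author, and message fields — but the chaining property is the same:

```python
import hashlib

def commit_hash(message, parent):
    """Toy model of git's scheme: each commit's hash covers its parent's
    hash, so altering any ancestor changes every descendant's hash."""
    body = "parent %s\n%s" % (parent, message) if parent else message
    return hashlib.sha1(body.encode()).hexdigest()

c1 = commit_hash("initial", None)
c2 = commit_hash("second", c1)
# Tampering with the first commit changes c1, and therefore c2:
assert commit_hash("second", commit_hash("tampered", None)) != c2
```

Verifying the tip hash therefore authenticates the whole chain — but only if you have the whole chain to recompute it from, which is the point made above.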
<clever> ajs124: if you trust the git server, yes
<clever> https://github.com/NixOS/nix/issues/1256 would be the "best" option, but has been put on the back-burner by builtins.fetchGit almost replacing it
<clever> i have multiple solutions for git and private repos
<clever> the source for those builtin derivations
<clever> and you're back to the exact same problems `pkgs.fetchgit` has
<clever> then `git` would be run as `nixbld1`, and cant access your ssh-agent
<clever> this lets you do internal activities, in parallel, at build time
<clever> so it will fork out a child, setup the whole sandbox as normal, but instead of executing a program, it just calls an internal function
<clever> this is generating a derivation, and later on (at build time), nix will run an internal function in the child proc
<clever> Miyu-chan: look at the builder= on line 18, its set to "builtin:fetchurl"
<clever> Miyu-chan: <nix/fetchurl.nix> is a weird edgecase, https://github.com/NixOS/nix/blob/master/corepkgs/fetchurl.nix
<clever> Miyu-chan: yeah, the builtins and all eval-time things are single-threaded
<clever> pkgs.fetchgit will happen in a builder child, and can run in parallel
<clever> fetchGit is also single-threaded, and pauses the eval while fetching, so it harms performance
<clever> Miyu-chan: fetchGit can be run on branch names, which is impure
<clever> yuken: behind the scenes, its just running `git fetch`, so it can use anything that git supports
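The trade-off between the two fetchers discussed above, as a sketch — the URL, rev, and sha256 here are placeholders, not real values:

```nix
# builtins.fetchGit runs at eval time, outside the sandbox, as your own user,
# so it can reach your ssh-agent for private repos -- but it is single-threaded
# and pauses evaluation while fetching.
src1 = builtins.fetchGit {
  url = "ssh://git@example.com/repo.git";   # placeholder
  rev = "0000000000000000000000000000000000000000";
};

# pkgs.fetchgit is a fixed-output derivation: git runs in a builder child
# (as a nixbld user, no ssh-agent), builds can run in parallel, and you
# must supply the output hash up front.
src2 = pkgs.fetchgit {
  url = "https://example.com/repo.git";     # placeholder
  rev = "0000000000000000000000000000000000000000";
  sha256 = "0000000000000000000000000000000000000000000000000000";
};
```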
<clever> emily: i went the other way, i learned nix first (and read the c++ that powers it), before fully learning haskell!
<clever> most partition utils will convert the units for you, then round to the nearest multiple of 1mb
<clever> 2048 sectors (of 512 bytes each) is 1mb
<clever> > (2048 * 512) / 1024 / 1024
<clever> Device Start End Sectors Size Type
<clever> /dev/sda1 2048 4196351 4194304 2G Linux swap
<clever> Units: sectors of 1 * 512 = 512 bytes
<clever> c0c0: its common to just make the offset a multiple of 1mb
<clever> 3 of the drives in my nas, report the above
<clever> Sector size (logical/physical): 512 bytes / 4096 bytes
<clever> I/O size (minimum/optimal): 4096 bytes / 4096 bytes
<clever> `fdisk -l /dev/???` will show the sector sizes
<clever> Sector size (logical/physical): 512 bytes / 512 bytes
<clever> I/O size (minimum/optimal): 512 bytes / 512 bytes
<clever> you need to delete and remake the partitions with a better offset
<clever> which just ruins performance
<clever> c0c0: then every 4kb write, involves reading 8kb, modifying a 4kb substring, then writing 8kb back out
<clever> c0c0: if a filesystem is using 4kb blocks, and your physical drive is also using 4kb blocks, but your partition starts at an offset like 5kb in
<clever> ,-A c0c0

2019-09-22

<clever> jD91mZM2: you can probably `nix-build` the manual, though i'm not sure of the attr path
<clever> jD91mZM2: security is also another reason, since it can isolate `ps aux` output and such, but the `/nix` is still shared between host and guest, so be careful with secrets in derivations
<clever> in one recent case, even if the service (buildkite-agent) supports multiple instances, the scripts that run within it expect exclusive access to /build
<clever> jD91mZM2: one option is to be able to run services like mysql, that currently dont support multiple instances (in the nixos module as it is now), and to spawn several of them
<clever> emily: from what ive heard, zfs can be configured to scrub at regular intervals (read all data, and check checksums), and then email you if it detects corruption
<clever> thats why i never had that specific problem
<clever> cinimod``: i can also confirm, the systemd service i used cuda in, always runs as root
<clever> no example to compare against
<clever> which is why i still havent gotten opencl to work on amd
<clever> it helps to have access to a nixos machine that already has cuda working
<clever> yep
<clever> reboot, save the output of lsmod, run it as root, save the output of lsmod again
<clever> does the output differ?
<clever> without root, try to strace python, and see what fails with EPERM
<clever> sounds like permission issues
<clever> `sudo -i` gets a shell
<clever> so you want to sudo before you nix-shell
<clever> cinimod``: sudo may undo key env vars like LD_LIBRARY_PATH
<clever> yep, that part works
<clever> cinimod``: what happens if you run the tensorflow stuff as root?
<clever> cinimod``: what happens if you run `nvidia-smi` ?
<clever> cinimod``: ack, no source available!
<clever> Binary file NVIDIA-Linux-x86_64-418.56/libcuda.so.418.56 matches
<clever> [nix-shell:~/apps/nvidia-stuff]$ grep -r --color cuInit
<clever> [nix-shell:~/apps/nvidia-stuff]$ sh $src -x
<clever> [clever@amd-nixos:~/apps]$ nix-shell '<nixpkgs>' -A linuxPackages.nvidia_x11
<clever> cinimod``: try googling that error?
<clever> ah, it uses a custom builder.sh, nasty!
<clever> [clever@amd-nixos:~/apps]$ nix edit nixpkgs.linuxPackages.nvidia_x11
<clever> hmmm, thats not directly it...
<clever> [clever@amd-nixos:~/apps]$ nix-shell '<nixpkgs>' -A linuxPackages.nvidia_x11
<clever> fixed, refresh
<clever> oops, editing...
<clever> cinimod``: you want to instead just export that LD_LIBRARY_PATH
<clever> cinimod``: auto.out was a binary in my $PATH, that used cuda
<clever> cinimod``: add it to the arguments of mkDerivation
<clever> or add it to shell.nix
<clever> cinimod``: yeah, you just need to `nix-build -A linuxPackages.nvidia_x11 '<nixpkgs>'` to find the path
<clever> cinimod``: i have this in a wrapper shell-script, to ensure cuda is in the env
<clever> LD_LIBRARY_PATH=${linuxPackages.nvidia_x11}/lib $out/bin/auto.out "\$@"
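A fuller sketch of that wrapper pattern. In a real derivation, Nix substitutes the store paths; here `NVIDIA_LIB` and `REAL_BIN` are stand-ins so the script's shape can be checked on its own:

```shell
# Write a wrapper that puts the nvidia_x11 lib dir on LD_LIBRARY_PATH
# before exec'ing the real binary.  The quoted heredoc delimiter keeps
# the ${...} references literal; Nix (or the caller) fills them in.
cat > wrapper.sh <<'EOF'
#!/bin/sh
LD_LIBRARY_PATH="${NVIDIA_LIB}/lib" exec "${REAL_BIN}" "$@"
EOF
chmod +x wrapper.sh
sh -n wrapper.sh && echo "syntax ok"
```

To find the real path for `NVIDIA_LIB`, the log above suggests `nix-build -A linuxPackages.nvidia_x11 '<nixpkgs>'`.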
<clever> finding the nix code...
<clever> there it is
<clever> checking how my program can reference it...
<clever> /nix/store/yz24ris1zgplr2qi6bmvp4xd4p0jbj7s-nvidia-x11-418.74/lib/libcuda.so.1
<clever> 2019-09-22 08:53:07.507573: I tensorflow/stream_executor/platform/default/dso_loader.cc:53] Could not dlopen library 'libcuda.so.1'; dlerror: libcuda.so.1: cannot open shared object file: No such file or directory
<clever> cinimod``: it must be set like this, for things to find the gl/cuda libs
<clever> /run/opengl-driver/lib
<clever> [clever@amd-nixos:~]$ echo $LD_LIBRARY_PATH
<clever> cinimod``: is LD_LIBRARY_PATH set?
<clever> cinimod``: now try using cuda
<clever> cinimod``: yes, in configuration.nix
<clever> cinimod``: you may need to reboot, and/or try what jonringer said
<clever> cinimod``: what about `ls -l /dev/nvidia*` ?
<clever> jonringer: cuda works fine, on my nvidia hardware
<clever> but cuda will only work if those exist, so you have to check for those first
<clever> and all are 777, so any user can access them
<clever> # ls -l /dev/nvidia*
<clever> cinimod``: from peeking at the open FD's on a cuda process i have, i can see that its using a bunch of nvidia nodes on /dev/
<clever> lrwx------ 1 root root 64 Sep 22 08:23 7 -> /dev/nvidiactl
<clever> lrwx------ 1 root root 64 Sep 22 08:23 8 -> /dev/nvidia-uvm
<clever> lrwx------ 1 root root 64 Sep 22 08:23 10 -> /dev/nvidia1
<clever> # ls -l /proc/1762/fd/
<clever> cinimod``: `ls /run/opengl-driver/lib` will have files, if its working
<clever> cinimod``: so you have to use `nixos-rebuild switch` to make the changes persist
<clever> cinimod``: kernel changes generally require a reboot, and reboot undoes test
<clever> cinimod``: let me see how its working on my end...
<clever> cinimod``: ive made cuda work without that before
<clever> cinimod``: `nixos-rebuild test` is undone by a reboot
<clever> cinimod``: what type of VM is it?
<clever> cinimod``: add hardware.opengl.enable = true; to it, and then nixos-rebuild
<clever> under /etc/nixos/
<clever> cinimod``: it goes into configuration.nix
<clever> cinimod``: the kernel drivers need to be loaded, before userland can do anything with the gpu
<clever> cinimod``: hardware.opengl.enable = true;
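Put together, a minimal configuration.nix sketch for CUDA on nvidia — option names as they were around NixOS 19.09, and the `videoDrivers`/`allowUnfree` lines are assumptions not stated in the log:

```nix
{
  hardware.opengl.enable = true;                  # populates /run/opengl-driver/lib
  services.xserver.videoDrivers = [ "nvidia" ];   # assumption: loads the nvidia kernel module
  nixpkgs.config.allowUnfree = true;              # assumption: the nvidia driver is unfree
}
```

After a `nixos-rebuild switch` (and likely a reboot, as noted above), `/dev/nvidia*` and `/run/opengl-driver/lib` should exist.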
<clever> cinimod``: install pciutils
<clever> freeman42x: try again then
<clever> freeman42x: what device did you use for the if= flag?
<clever> freeman42x: dd should work, unetbootin breaks everything
<clever> freeman42x: how did you make the usb stick bootable?
<clever> didnt hear that
<clever> alienpirate5: whenever the tested jobset for nixos-unstable goes green
<clever> ,howoldis
<clever> alienpirate5: i believe its already fixed in nixpkgs master
<clever> so alsamixer will control pulse instead, and you must select a diff card with f6
<clever> selfsymmetric-mu: by default, pulseaudio configures some custom "alsa" drivers, that connect to the pulse daemon

2019-09-21

<clever> Huw28: add a - to the fsType
<clever> if any fs has an `fsType = "ntfs-3g";` then nixos will automatically install ntfs3g for you
<clever> 6 config = mkIf (any (fs: fs == "ntfs" || fs == "ntfs-3g") config.boot.supportedFilesystems) {
<clever> 8 system.fsPackages = [ pkgs.ntfs3g ];
<clever> /home/clever/apps/nixpkgs/nixos/modules/tasks/filesystems/ntfs.nix
<clever> Huw: i would start by adding `noauto` to the options, and then see what happens if you try to manually `mount -v /mnt/d/`
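Put together, a fileSystems entry like this exercises that module — the device path is illustrative:

```nix
fileSystems."/mnt/d" = {
  device = "/dev/sdb1";     # illustrative; use your actual partition
  fsType = "ntfs-3g";       # matches the mkIf above, so pkgs.ntfs3g gets installed
  options = [ "noauto" ];   # don't mount at boot; debug manually with `mount -v /mnt/d`
};
```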
<clever> cinimod``: the nix needs to be on the host to see kvm, and you need permission to access kvm, in your case, to be in the kvm group
<clever> cinimod``: a vm usually wont have kvm working
<clever> justsomeguy: if you're on nixos, its best to use documentation.man.enable = true;
<clever> cinimod``: intel or amd cpu?
<clever> cinimod``: does /dev/kvm exist?
<clever> werner291: `ps aux | grep X`, find the config file its using, and read that, is the config you set present?

2019-09-20

<clever> __red__: channels are also global and per-user, and you can add multiple channels if you want to use things from another channel
<clever> __red__: overlays can be either system wide or per-user
<clever> __red__: nixpkgs also supports overlays, which can change and add packages
<clever> `nix-env -q` will list its contents
<clever> any time you run `nix-env -i` or `nix-env -e`, it will modify ~/.nix-profile/manifest.nix and update the profile
<clever> __red__: manifest.nix and systemPackages basically have the same role as world
<clever> __red__: at one time, nixos.com was nsfw, but the domain has since expired
<clever> __monty__: i'm guessing it was made as small as possible, and its supposed to resize itself on first boot, but maybe it was just a little bit too small
<clever> cinimod`: this will load azure-mkimage.nix and pass it a nixpkgs and rev, you could add a diskSize there, or just modify the default
<clever> cinimod`: line 6, the ..., causes it to ignore any arguments not listed there
<clever> cinimod`: you can just --arg diskSize 2048
<clever> cinimod`: line 5
<clever> cinimod`: can you link the instructions?
<clever> cinimod`: more that you can switch to aws, to avoid the issues of nixops having azure disabled
<clever> cinimod`: i have used cuda before with nixops and aws, so nothing is forcing you to go with azure
<clever> cinimod`: nvidia is primarily cuda, amd is primarily opencl i believe
<clever> nvidia i think?
<clever> cinimod`: was cuda the amd or the nvidia one?
<clever> why do you need azure for Chebyshev polynomial approximations?
<clever> ack, youll need to use nixops 1.6, and then follow that doc
<clever> i would just use nixops
<clever> you're better off nuking it and starting over
<clever> then youll never be able to actually use this machine once the free space is fixed
<clever> cinimod`: can you ssh in as root?
<clever> then try `nix-collect-garbage --max-freed 1m` again
<clever> that will give you a whopping 17mb!!
<clever> cinimod`: sudo rm -rf /var/log/journal/8b08efc1f01043c68c35918ca30c2a5a
<clever> cinimod`: try `du -h /var` and use a pastebin
<clever> cinimod`: `du -h` ?
<clever> cinimod`: even bash makes .bash_history
<clever> cinimod`: systemd makes a bunch of crap, like journal files
<clever> cinimod`: does that work?
<clever> cinimod`: find at least one file outside of /nix/store, delete it manually, then `nix-collect-garbage --max-freed 1m`
<clever> ebzzry: is it listed in `mount` output?
<clever> anything with an __ prefix, gets added to both the top-level scope (with the prefix) and then again to builtins (with it stripped)
<clever> so things like abort, true, false, and map are just in the scope by default
<clever> anything without a __ prefix, gets directly added to the top-level scope
<clever> toString behaves very differently from treating it as a string
<clever> > "${./.}"
<clever> > toString ./.
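The two evaluations above differ like this — the example paths and store hash are made up:

```nix
nix-repl> toString ./.
"/home/clever/example"            # plain path-to-string: nothing is copied

nix-repl> "${./.}"
"/nix/store/aaaaaaaa...-example"  # string interpolation copies the path
                                  # into the store first, then yields the store path
```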
<clever> mishac: is the dns server configured in /etc/resolv.conf ?

2019-09-19

<clever> Ariakenom_: its more about the difference, between the old and new config
<clever> Ariakenom_: if mutableUsers = false; i think you just use password/passwordHash instead, not entirely sure, i just always leave it mutable
<clever> Ariakenom_: if mutableUsers = true; then the initial password only applies when first creating the user, and wont have any future effect
<clever> and find out what you did wrong to reset the pw to default
<clever> if you had to use hunter2 to login, change it asap, and review the ssh logs
<clever> and make a note to fix it asap :P
<clever> and for more automated things, where ssh isnt public (or is secure initially), you can also use users.extraUsers.clever.initialPassword = "hunter2";
<clever> symphorien: yeah, but these are all single-user machines
<clever> hyperfekt: thats why its in a passwords.nix file, that is never put into git
<clever> ive got this in my config files
<clever> users.extraUsers.clever.initialHashedPassword = passwords.hashedPw;
<clever> Ariakenom_: you can also define the default password (hash) in the nixos options
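A sketch of that setup, with the hash kept in a file that never enters git — the file name and attribute come from the log, the hash itself is a placeholder:

```nix
let
  # passwords.nix is git-ignored and looks like: { hashedPw = "<mkpasswd output>"; }
  passwords = import ./passwords.nix;
in {
  users.mutableUsers = true;
  users.extraUsers.clever.initialHashedPassword = passwords.hashedPw;
}
```

With `mutableUsers = true`, this hash only applies when the user is first created, as noted above.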
<clever> Ariakenom_: yep
<clever> if mutable users is off, then adduser wont work, and nixos is in more control
<clever> but if a user was in the nixos config, then isnt, i believe nixos will try to clean things up and delete it
<clever> Ariakenom_: if mutable users is enabled, then you can freely add/remove users with adduser, outside of nixos's control
<clever> but if the pw isnt defined in the nixos config, it will be lost, and you must define it again with passwd
<clever> if you rollback to a generation when the user was defined, nixos will re-create the user
<clever> because the new config says the user shouldnt exist
<clever> if it was defined in configuration.nix, but not defined in nixops, then the user was deleted when you deployed
<clever> Ariakenom_: was it in configuration.nix initially?
<clever> Ariakenom_: did you deploy a config that didnt define the user, then fix it to define the user?
<clever> since /dev/kvm has a 6 in the last spot, __6, anybody can act on it
<clever> so you must either be root, or in the uucp group, to do anything
<clever> but with my ttyS0 for example, its 660
<clever> crw-rw---- 1 root uucp 4, 64 Aug 6 04:59 /dev/ttyS0
<clever> Zer0xp: look at the permissions on the kvm device, its 666, so any user can use it
<clever> Zer0xp: on nixos, you dont have to
<clever> brute-forcing a 2048 bit key, is going to be near-impossible, and is way more secure than having passwords allowed at all
<clever> Ariakenom_: i prefer to configure it to only accept ssh keys, for all users
<clever> Ariakenom_: look this up in the nixos docs
<clever> Type: one of "yes", "without-password", "prohibit-password", "forced-commands-only", "no"
<clever> services.openssh.permitRootLogin
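As nixos options, the keys-only setup looks roughly like this — option names as of the 19.09 era, so check your release's docs:

```nix
services.openssh = {
  enable = true;
  permitRootLogin = "prohibit-password";   # root may log in, but only with a key
  passwordAuthentication = false;          # all users: keys only, no passwords
};
```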
<clever> then kvm should just work
<clever> Zer0xp: are you in a container? are you checking on the host?
<clever> [root@amd-nixos:~]# ls -l /dev/kvm
<clever> crw-rw-rw- 1 root root 10, 232 Sep 17 18:35 /dev/kvm
<clever> its not a directory
<clever> Zer0xp: does /dev/kvm exist?
<clever> Zer0xp: does /dev/kvm exist now? what about `lsmod | grep kvm` ?
<clever> Zer0xp: what does `sudo modprobe -v kvm_intel` say?
<clever> Zer0xp: intel or amd cpu?
<clever> Zer0xp: kvm should just work out of the box, does /dev/kvm exist?
<clever> Ariakenom: the main things you want to pay attention to, is to get the bootloader, fileSystems., and network config right, then you can incrementally finish everything else
<clever> Ariakenom: and you can still nixos-rebuild --rollback to undo it
<clever> Ariakenom: it will just replace the currently running os, but it wont touch configuration.nix
<clever> nixos cant touch the version in nix-env, so that version is lagging behind and causing problems
<clever> you need to do `nix-env -e nix`
<clever> you have 2 copies of nix installed, on your user, and on nixos
<clever> contrun[m]: what does `type nix` return?
<clever> it looks like it detects failure to make a namespace (due to debian silliness) and then turns off the sandbox
<clever> and its keyed on your nix version
<clever> contrun[m]: ah, sandbox-fallback is a new thing, and it can only be set to false, or be missing
<clever> contrun[m]: and the correct value is in the config file
<clever> sandbox = relaxed
<clever> contrun[m]: your config appears to be missing newlines?
<clever> contrun[m]: what file did you add useSandbox = "relaxed" to? can you pastebin that file?
<clever> misha31: yep
<clever> contrun[m]: run that, then read the file it pulled up, does the type show relaxed, the same as on the github page?
<clever> contrun[m]: nix-instantiate --find-file nixpkgs/nixos/modules/services/misc/nix-daemon.nix
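The two places that setting can live, as fragments — one is the nixos module option, the other the raw nix.conf line it generates:

```nix
# In configuration.nix -- the nixos module writes nix.conf for you:
nix.useSandbox = "relaxed";

# which renders, in the generated /etc/nix/nix.conf, as the line:
#   sandbox = relaxed
```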
<clever> `nix copy --to local?root=/mnt/nix ./result`
<clever> yeah
<clever> it must be at /nix on the beaglebone
<clever> misha31: create a /nix on the beaglebone, add it to exports, then mount it to /mnt/nix on the laptop
<clever> misha31: thats the first step you need to do
<clever> misha31: is the beaglebone nfs mounted to the desktop?
<clever> contrun[m]: relaxed turns off the sandbox for fixed-output drvs, and a drv can set a special attr to opt-out of the sandbox
<clever> so things being read-only wont get in the way
<clever> misha31: `nix copy` can copy it to the beaglebone for you, and each build will be at a new path
<clever> nix-build will replace the result symlink when things change
<clever> misha31: you should just be using the result symlink directly, and not copying over the file
<clever> misha31: why do you need to write to the files?
<clever> 2019-09-19 00:50:18 < clever> but it also undoes +x
<clever> ah, its --no-preserve=mode
<clever> without root, it can only make files be owned by you
<clever> cp can only preserve ownership if you use root
<clever> but it also undoes +x
<clever> cp -r --no-preserve=permissions, will also turn the files writable as it copies them
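The behaviour described above can be demonstrated directly — note the log corrects the flag name to `--no-preserve=mode` a few lines up; the file names here are made up:

```shell
# Simulate a store path: read-only and executable
mkdir -p src && printf 'hi\n' > src/script
chmod 555 src/script
# --no-preserve=mode gives the copies fresh default permissions (umask-based),
# so they become writable...
cp -r --no-preserve=mode src dst
ls -l dst/script   # ...but the +x bit is gone too, as noted above
```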
<clever> as long as its not in /nix/store/
<clever> you can chmod +w the copy
<clever> ah yeah
<clever> what was the project from a few days ago?
<clever> or fix the expr to build it right from the start
<clever> misha31: you would need to copy the files out to a normal dir
<clever> misha31: oh, that just renames it, there is no way to make it writeable
<clever> misha31: -o other-result

2019-09-18

<clever> rnhmjoj: anything about ext4 or the rootfs?
<clever> rnhmjoj: what does `dmesg` say at the end?
<clever> pkgs.fetchurl could do builtins.typeOf and throw
<clever> lol
<clever> infinisil: all drv attrs go thru a variant of toString
<clever> > toString false
<clever> > toString true
<clever> if you have a hash for the .tar.gz of a source, you can fairly easily translate that into a nix expr
<clever> mightybyte: but which lock file? the yarn one? the npm one? there is also a 3rd tool to do the same thing
<clever> though you can also run it in a mode without IFD, if you supply a pre-generated one
<clever> but generating that file, requires import from derivation
<clever> yarn2nix works via yarn.lock, to mostly generate fixed-output drvs, that will fetch the deps at build time
<clever> and fetching deps needs network
<clever> mightybyte: but node is a bigger mess, so things like node2nix need to fetch the deps, and recursively chase the package.json files
<clever> mightybyte: so you must find the sha256 of each dep's src, and run cabal2nix on each deps cabal file
<clever> mightybyte: cabal2nix can do it without network, because it assumes you already did cabal2nix for the other dependencies
<clever> peti: which only accepts 2 args
<clever> peti: but that --argstr would be passed to nixos/default.nix, not configuration.nix
<clever> peti: i checked the source in the above, and it looks like there is no way for you to access that, and the vars that do contain the value arent exported
<clever> sb0: probably
<clever> sb0: the official hydra.nixos.org has also disabled that perl code, since it now uploads everything to S3 and is configured to not act as a cache
<clever> sb0: somebody forgot to update the perl code
<clever> sb0: but store-uri is used by the c++ code, when the queue-runner is copying artifacts into whatever the store-uri is
<clever> sb0: ive also looked into the source, binary_cache_secret_key_file is used by the perl code, when hydra itself is acting as a binary cache (over http)