2019-12-12

<clever> [root@amd-nixos:~]# modinfo xen-pciback
<clever> which prevents any other driver from claiming the card
<clever> there was a kernel param to allocate it to the xen pci backend
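For reference, a sketch of that parameter (the PCI address is a placeholder; the real one comes from `lspci`):

```
xen-pciback.hide=(0000:01:00.0)
```

With xen-pciback built as a module rather than built in, the equivalent is passing `hide=` as a module option instead of a kernel command-line entry.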
<clever> exactly what i did
<clever> that was likely the legacy vesa mode drivers at fault
<clever> EdLin: even if i blacklisted the drivers for the card, it still did it
<clever> EdLin: rather than using old dos mode text consoles
<clever> EdLin: thats what i was guessing, it switches to high-res graphical mode, and renders the fonts in software
<clever> EdLin: but what is grub doing differently, that it doesnt screw things up?
<clever> EdLin: in my case, if linux did any text mode console activity, the card was "spent" and windows refused to work
<clever> notgne2: i had zero issues playing hot-potato with usb cards, trading it back&forth at runtime, without rebooting either os
<clever> notgne2: also, weirdly, the gpu is the only card that gave me trouble
<clever> zeta_0: yep
<clever> notgne2: i was using xen at the time, which has commands to hot-add/remove pci devices, and config files to add them on bootup
<clever> elvishjerricco: you can also `cat /proc/cmdline` to see the args it used at boot
<clever> elvishjerricco: and if you try to `umount /iso` ?
<clever> notgne2: you could even hotplug it between the 2 OS's, and just hit a magic button to switch to the other machine
<clever> notgne2: if you can somehow save and restore the gpu state (or just reboot the card), then you can play hot-potato with it all you want
<clever> notgne2: it may also vary wildly from card to card, how easily linux can undo state within the gpu
<clever> notgne2: ive heard of per-slot reset lines, but my motherboard doesnt support it
<clever> notgne2: that requires clearing the state in the gpu and basically soft-resetting the card
<clever> notgne2: there was some kernel params to forcibly allocate the driver to the pci backend in xen, so the framebuffer never touched it
<clever> notgne2: i was able to swap it, at boot time
<clever> notgne2: so, while i can run both at once, dual-boot would be simpler and have the same stability, lol
<clever> notgne2: also, if windows touches the gpu even once, you must reboot the entire host before windows can use it again
<clever> notgne2: all control must be done over ssh!
<clever> notgne2: so from an end-user viewpoint, the machine just hangs immediately after grub tries to execute the kernel
<clever> notgne2: also, the gpu passthru required 100% banning linux from touching the gpu, not even the text mode console
<clever> notgne2: and after increasing the size, i never got it to work again
<clever> notgne2: i managed to just barely get it working, but accidentally made the disk only 4gig in size, so there was no room to install any games! lol
<clever> notgne2: shortly before i got into nixos, i was looking into xen and gpu passthru, so i could boot windows, with full gfx performance, and then kill it when i didnt need it anymore
<clever> zeta_0: i think so, try using ssh to connect to something, gpg should ask for a pw once
<clever> then you dont have to co-operate over resources!
<clever> kexec is just a slight change to colinux, perform a hostile-takeover, dont give it back :P
<clever> and when 64bit windows got popular, computers had proper virtualization extensions, so the cheat wasnt needed anymore

2019-12-11

<clever> colinux was also 32bit only
<clever> colinux never had smp support on the linux side, so it was limited to 1 core for linux
<clever> WSL is just adding proper linux support to the windows kernel i think
<clever> WSL is very different
<clever> and windows just thinks the driver had disabled IRQ's for an abnormally long time
<clever> then the driver undoes the take-over, and restores control back to the windows kernel
<clever> and then linux runs for a short period of time
<clever> but in reality, every time the "driver" gets control of the cpu, it performs a hostile take-over
<clever> from the point of view of windows, its a driver that needs several gig of ram, and hoards the cpu a lot
<clever> basically, it turned the entire linux kernel into a "windows network driver"
<clever> elvishjerricco: http://colinux.org/
<clever> elvishjerricco: also, colinux was a thing
<clever> elvishjerricco: to windows, probably not, from windows, maybe
<clever> i wanted to add linux to the M$ bootloader, so you could transition over like that, without a usb stick, and without dd'ing over the live disk
<clever> and that is the rabbit hole that led me to learning the M$ bootloader, lol
<clever> elvishjerricco: if another linux distro is currently working on that machine, you can just copy this magic tarball over, run 2 commands, and it will instantly be running nixos from ram
<clever> zeta_0: you just need to add `export SSH_AUTH_SOCK=/run/user/1000/gnupg/S.gpg-agent.ssh` to your sessionCommands
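The sessionCommands change being suggested, as a configuration.nix fragment (assumes uid 1000, as in the socket path above):

```nix
services.xserver.displayManager.sessionCommands = ''
  export SSH_AUTH_SOCK=/run/user/1000/gnupg/S.gpg-agent.ssh
'';
```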
<clever> zeta_0: sounds like its working normally
<clever> elvishjerricco: also, with kexec, you dont even need the usb at all
<clever> zeta_0: and then do `ssh-add`
<clever> zeta_0: for short-term testing, try `export SSH_AUTH_SOCK=/run/user/1000/gnupg/S.gpg-agent.ssh`
<clever> zeta_0: you didnt paste any path
<clever> elvishjerricco: thats also an option, depends on how comfortable you are with grub
<clever> zeta_0: `echo $SSH_AUTH_SOCK` ?
<clever> zeta_0: gpg agent is listening for ssh stuff, but is SSH_AUTH_SOCK pointed to it?
<clever> elvishjerricco: it was originally made for somebody that wanted a usb stick that can boot all the installers (multiple distro's in the grub menu)
<clever> elvishjerricco: the rootfs is fully contained within nixos-initrd, so its always in a copytoram mode
<clever> elvishjerricco: you then add that grub-fragment to a grub.cfg, and grub-install as usual
<clever> elvishjerricco: if you run nix-build on this, it will generate a directory with 3 files, nixos-kernel, nixos-initrd, and grub-fragment.cfg
<clever> ornxka: https://hydra.angeldsis.com/jobset/things/rpi-open-firmware and i configured my hydra to build the firmware
<clever> elvishjerricco: the multi-boot helper i have may also work for that
<clever> ornxka: yep
<clever> zeta_0: 2019-12-11 19:39:28 < clever> zeta_0: 1481 is the new pid
<clever> zeta_0: replace 2179 with the new pid
<clever> ornxka: my fork has nix support, and if you just `nix-build -A helper`, it will generate all of the proper cross-compilers (line 3, it doesnt care which channel you're on), then cross-compile the entire firmware, and generate a bash script that updates the vfat partition in your SD reader
<clever> zeta_0: 2019-12-11 19:24:29 < clever> zeta_0: do you see an ssh socket open?
<clever> zeta_0: 2019-12-11 19:34:30 < clever> zeta_0: lsof -p 2179 | grep STREAM
<clever> ornxka: somebody has reverse engineered the ddr2 stuff on the rpi1-3, and ive forked that repo
<clever> zeta_0: 1481 is the new pid
<clever> ornxka: currently, it only works on the rpi 1-3, and its even more baremetal than the examples, because the arm isnt online, neither is the ram!
<clever> ornxka: if i copy that to an SD card, and rename to `bootcode.bin`, it prints hello world, from an rpi GPU
<clever> ornxka: now that vc4 is properly in nixpkgs, https://gist.github.com/cleverca22/c9e89ceaadba96f1969bc8eeab8ba532#file-helloworld-c-L16 i can just `nix-build -A helloworld --argstr linker sram` and i get a `hello.bin` file
<clever> zeta_0: ps aux | grep gpg
<clever> ornxka: i did see some mips stuff while doing that, but i dont know which mips it was
<clever> ornxka: https://github.com/NixOS/nixpkgs/pull/72657 is an example of adding vc4 cross-compile to nixpkgs
<clever> zeta_0: you have to use the new pid for the gpg agent
<clever> ornxka: (and is already in nixpkgs)
<clever> zeta_0: lsof -p 2179 | grep STREAM
<clever> ornxka: step 1, does a cross-compiler exist for it?
<clever> chrisaw: .overrideAttrs (old: { foo = old.foo ++ [ "end" ]; })
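A concrete (hypothetical) instance of that pattern, appending to a list attribute while tolerating its absence; `--enable-foo` is a made-up flag for illustration:

```nix
hello.overrideAttrs (old: {
  # "--enable-foo" is hypothetical; append while defaulting to an empty list
  configureFlags = (old.configureFlags or [ ]) ++ [ "--enable-foo" ];
})
```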
<clever> zeta_0: gpg is already running, we dont know how, and dont really care how, lol
<clever> zeta_0: i think all that does, is install the ssh binary
<clever> zeta_0: do you see an ssh socket open?
<clever> zeta_0: lsof -p 2179 | grep STREAM
<clever> zeta_0: then gpg is running
<clever> zeta_0: `ps aux | grep gpg` ?
<clever> zeta_0: is gpg agent running by default already?
<clever> and this one makes it remember the unlock forever (until shutdown)
<clever> default-cache-ttl-ssh 34560000
<clever> max-cache-ttl-ssh 34560000
<clever> that line is likely important, lol
<clever> enable-ssh-support
<clever> [clever@amd-nixos:~]$ cat .gnupg/gpg-agent.conf
<clever> zeta_0: then it will be saved into the gpg keyring forever, and any time an app wants to use it, gpg will ask you for the 2nd pw
<clever> zeta_0: when you run `ssh-add` with the gpg ssh agent, it will ask for 2 passwords, first one to decrypt id_rsa, then a 2nd one, to re-encrypt it, within the gpg keyring
<clever> [clever@amd-nixos:~]$ echo $SSH_AUTH_SOCK
<clever> /run/user/1000/gnupg/S.gpg-agent.ssh
<clever> zeta_0: the gpg agent remembers the keys, and will defer asking for a pw until you try to use the key
<clever> zeta_0: there is also a nixos option for ssh-agent, but the gpg ssh agent remembers keys, so you dont have to keep doing ssh-add
<clever> zeta_0: that should be a good place for it
<clever> and the latest nixos-unstable seems to also break it
<clever> zeta_0: but i set it up somewhat impurely, and dont remember how!
<clever> zeta_0: personally, i use gpg-agent as an ssh agent
<clever> that option controls how long
<clever> default-cache-ttl 34560000
<clever> [clever@amd-nixos:~]$ cat .gnupg/gpg-agent.conf
<clever> zeta_0: gpg-agent can remember the gpg passphrase for a set period of time, before forgetting and asking again
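Collected in one place, the ~/.gnupg/gpg-agent.conf being quoted piecemeal above (34560000 seconds is 400 days):

```
enable-ssh-support
default-cache-ttl 34560000
default-cache-ttl-ssh 34560000
max-cache-ttl-ssh 34560000
```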
<clever> fresheyeball: the configure phase, use postConfigure = "exit 1"; to speed up iteration, and then try and figure out why cmake claims cuda is disabled
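The iteration trick being described, sketched against the override used later in the logs; `--keep-failed` on the nix-build then preserves the half-built tree for inspection:

```nix
(python37Packages.pytorch.override { cudaSupport = true; }).overrideAttrs (old: {
  # abort right after configure, so each debug cycle skips the huge compile
  postConfigure = "exit 1";
})
```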
<clever> kolaente_: we have the how, but not the why
<clever> heh, that would explain half of it!
<clever> kolaente_: but you simply havent updated home-manager yet
<clever> kolaente_: its also possible that both configuration.nix and home-manager are installing the go from nixos-unstable
<clever> kolaente_: double-check your home-manager config?
<clever> kolaente_: but `users.users.<user>.packages in my configuration.nix` isnt home-manager, as far as i know? ...
<clever> kolaente_: oh!, so its been installed by home-manager
<clever> you may need to use the exact name `nix-env -q` shows
<clever> you can use `which go` and `nix-env -q` to confirm
<clever> nope, it should take effect immediately
<clever> thats likely when you installed go to the user's profile
<clever> that will show dates for each
<clever> kolaente_: and then `nix-env --profile /nix/var/nix/profiles/per-user/user/profile --list-generations`
<clever> yep
<clever> kolaente_: what is the lowest <n> in the list?
<clever> kolaente_: do you see which generation it was added at?
<clever> kolaente_: ls /nix/var/nix/profiles/per-user/clever/profile*/bin/go
<clever> kolaente_: yeah
<clever> you must uninstall that copy with `nix-env -e go`
<clever> you installed go with `nix-env -i`, so its ignoring what configuration.nix did
<clever> yep, theres your problem
<clever> try `command which --all go` ?
<clever> we want the which in $PATH
<clever> ah, thats why
<clever> kolaente_: `which which` ?
<clever> kolaente_: what does `which --all go` return?
<clever> kolaente_: which user did you add the channel to? are you running `nix-channel --update` as that user?
<clever> for nixops, it looks somewhat like this
<clever> import sys;import site;import functools;sys.argv[0] = '/nix/store/sr81h3qj8hwgclqbzgnplsf8lyb1gy98-nixops-1.7/bin/.nixops-wrapped';functools.reduce(lambda k, p: site.addsitedir(p, k), ['/nix/store/sr81h3qj8hwgclqbzgnplsf8lyb1gy98-nixops-1.7/lib/python2.7/site-packages','/nix/store/5izkw319vwa329ygggf0skjx4mwrn0lj-python2.7-prettytable-0.7.2/lib/python2.7/site-packages','/nix/store/rhf1igkyfwb5cylnid0k2sh4818hmyi4-python2.7-boto-2.49.0/lib/python2.7/site-pa
<clever> which its adding as site dirs
<clever> philipp[m]2: yep, there it is, it has an array of site-packages folders
<clever> philipp[m]2: and what is within .psub-wrapped?
<clever> so the PYTHONPATH persists at runtime
<clever> does such a shell script exist?
<clever> and something will create a shell-script wrapper, that uses the build-time $PYTHONPATH
<clever> i think the magic, is that all inputs wind up in PYTHONPATH at build time
<clever> mzabani: there is also `nix-channel --rollback`, since channels have their own generations
<clever> which may be something you didnt want done
<clever> mzabani: nixos-rebuild switch would also update the bootloader, but that also builds the current configuration.nix
<clever> mzabani: `/run/current-system/bin/switch-to-configuration boot` will make whatever you're currently running the default boot, and update the list to reflect what --list-generations prints
<clever> mzabani: the bootloader config is not updated until you re-run switch or boot
<clever> mzabani: do you have a large thing in /nix/store you know the path of, that you want deleted?
<clever> mzabani: that would imply that they have already been deleted from the profile
<clever> mzabani: which generation do you want to delete?
<clever> but that depends on if you want to develop it more
<clever> then nix-shell will let you build gridsync, and `nix-shell -A using` lets you use a copy nix built purely
<clever> let foo = import ./. {}; in foo // { using = stdenv.mkDerivation { name = "foo"; buildInputs = [ foo ]; }; }
<clever> maybe something like this in shell.nix
<clever> i would make it into an attribute
<clever> exarkun: `nix-shell -p 'import ./. {}'` may also work
<clever> exarkun: you must create another derivation, that has gridsync in the buildInputs, then nix-shell that other drv
<clever> exarkun: nix-shell gives you a shell suitable for building gridsync, with that file
<clever> exarkun: and do you expect it to give you a shell with a working copy of gridsync, or a shell suitable for building gridsync?
<clever> exarkun: post the default.nix?
<clever> exarkun: nix-shell -v ?
<clever> keithy[m]: simplest way is to just mount it, and rerun nixos-generate-config with no args
<clever> keithy[m]: you will want to fix hardware-configuration.nix, or bad things will happen
<clever> fresheyeball: this was setting it, but the logs still said cuda not enabled
<clever> 2019-12-11 00:32:17 < clever> time nix-build -E '(import <nixpkgs> { config.allowUnfree = true; }).python37Packages.pytorch.override { cudaSupport = true; }' --cores 48
<clever> sure
<clever> fresheyeball: we want a `postConfigure = "exit 1";` and then iterate until configurePhase says cuda IS enabled
<clever> fresheyeball: cuda isnt enabled, so cmake doesnt build c10, so cmake doesnt generate the header we want
<clever> fresheyeball: correct
<clever> fresheyeball: left a gist and some ideas above
<clever> gchristensen: and ive put all changes of importance into the above gist, so the machine doesnt matter much
<clever> gchristensen: the problem appears to be before the computation heavy phase
<clever> gchristensen: the plan i have now doesnt require an insane core count
<clever> fresheyeball: my current idea, is to edit it to fail after the configure script, skip the huge build, and then iterate until the configure phase says cuda is actually enabled
<clever> gchristensen: shal we kill it or leave it up?
<clever> fresheyeball, gchristensen: kind of stuck now, not sure what to check next
<clever> rotaerk: that sounds like acceleration profile issues
<clever> fresheyeball: despite building with cuda enabled, the logs say cuda is disabled...
<clever> time nix-build -E '(import <nixpkgs> { config.allowUnfree = true; }).python37Packages.pytorch.override { cudaSupport = true; }' --cores 48
<clever> fresheyeball: which is being loaded by the root...
<clever> fresheyeball: aha, c10/CMakeLists.txt has an `add_subdirectory(cuda)` which loads the file in question
<clever> CMake Error at CMakeLists.txt:41 (target_compile_options): Cannot specify compile options for target "c10_cuda" which is not built by
<clever> fresheyeball: pkgs.cudnn duh!
<clever> fresheyeball: do you know which package thats in?
<clever> fresheyeball: Caffe2: Cannot find cuDNN library. Turning the option off
<clever> fresheyeball: it almost looks like you're meant to build c10 separately, in its own package
<clever> fresheyeball: and thats where things get complicated, i dont see any obvious thing loading c10/cuda/CMakeLists.txt
<clever> nothing there looks optional, but the file itself may be optionally pulled in elsewhere
<clever> fresheyeball: and thats what causes it to install the missing header, so we must read that cmake file
<clever> c10/cuda/CMakeLists.txt:install(FILES ${CMAKE_BINARY_DIR}/c10/cuda/impl/cuda_cmake_macros.h
<clever> out -> /nix/store/brh6i1g40zhcyvqbdnfgwxk0i2iwnfgl-source
<clever> nix-repl> :b a.src
<clever> nix-repl> a = ((import <nixpkgs> { config.allowUnfree = true; }).python37Packages.pytorch.override { cudaSupport = true; })
<clever> i also notice, its putting the headers in a weird place, and not generating cuda_cmake_macros.h
<clever> result/lib/python3.7/site-packages/torch/lib/include
<clever> fresheyeball: i can confirm that pytorch is not a split output derivation
<clever> fresheyeball: turning on cuda support seems to slow it down heavily at the testing phase
<clever> ive no clue how to do it purely
<clever> samueldr: yeah, std was it, i used some impure cmd to download a build of std for windows and it just landed in $HOME
<clever> samueldr: i basically used nix-shell to get a linux->windows cross-compiler, then used the standard rust tools to get a windows crate, and off it went
<clever> samueldr: ive gotten it to work impurely, using rustup i think it was, in a nix-shell
<clever> gchristensen: lol, i only have 32gig of ram each in my laptop&desktop
<clever> fresheyeball: this will re-build it, with cuda support enabled
<clever> [root@fresheyeball2:~/nixpkgs]# time nix-build -E '(import <nixpkgs> { config.allowUnfree = true; }).python37Packages.pytorch.override { cudaSupport = true; }' --cores 48
<clever> fresheyeball: cudaSupport defaults to false
<clever> [root@fresheyeball2:~/nixpkgs]# vi pkgs/development/python-modules/pytorch/default.nix
<clever> fresheyeball: and this is likely why
<clever> CUDA_TOOLKIT_ROOT_DIR not found or specified
<clever> fresheyeball: i had added this to the installPhase, but it found nothing
<clever> + find -name cuda_cmake_macros.h
<clever> fresheyeball: that explains why my machine was swapping harder than usual, lol
<clever> fresheyeball: wait a second, this is using 20gig of ram? lol
<clever> fuzen: there is also an appimage-run in nixpkgs
<clever> fresheyeball: ctrl+a 1, to go to screen 1
<clever> the core list doesnt even fit on a split screen, lol
<clever> fresheyeball: `top` is in screen 1, i see 48 cores!
<clever> [root@fresheyeball2:~/nixpkgs]# nix-build -A python37Packages.pytorch --arg config '{ allowUnfree = true; }' --cores 56
<clever> fresheyeball: i'm in a screen session, `screen -x`
<clever> fresheyeball: the race is on, can gchristensen's machine beat mine, even with a late start!
<clever> mwdev: you should only ever use config.boot.kernelPackages within extraModulePackages
<clever> mwdev: that might be mixing a 5.3 module with a 5.4 kernel, which will fail to load
<clever> [1851/2983] Building CXX object caffe2/CMakeFiles/torch_cpu.dir/__/aten/src/ATen/native/quantized/cpu/kernels/QuantizedOpKernels.cpp.AVX.cpp.o
<clever> zeta_0: what error does it throw?
<clever> mwdev: and then it gets simpler to use nixpkgs overlays, to do it in steps, one to add the linux_5_4 and linuxPackages_5_4, then a 2nd to modify linuxPackages_5_4
<clever> mwdev: so you may want to do (linuxPackagesFor (callPackage ...)).extend, to apply an overlay to it
<clever> mwdev: linuxPackagesFor (callPackage ...)` will ignore the overlay
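The two-step overlay being recommended, sketched (./linux-5.4.nix is the user's own kernel expression from the conversation):

```nix
self: super: {
  # step 1: add the kernel and its package set
  linux_5_4 = super.callPackage ./linux-5.4.nix { };
  linuxPackages_5_4 =
    # step 2: extend the set, so later modifications aren't ignored
    (super.linuxPackagesFor self.linux_5_4).extend (kself: ksuper: {
      # per-kernel package modifications go here
    });
}
```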
<clever> 5: rename both the directory and cabal file!
<clever> 4: rename both the cabal file and set name=!
<clever> 3: set name = "my-script"; in developPackage
<clever> 2: rename the directory to my-script
<clever> 1: rename the file to my_script.cabal
<clever> zeta_0: you have 5 choices
<clever> zeta_0: you can also pass `name = "my-script";` to `developPackage`, then the directory name wont matter (the name= must match the cabal file then)
<clever> zeta_0: they must match (the _ vs -) or it wont work
<clever> zeta_0: found your problem, the directory is my_script, but the cabal file is my-script.cabal
<clever> zeta_0: so you're already using callCabal2nix
<clever> zeta_0: developPackage is just a giant fancy wrapper around callCabal2nix
<clever> o1lo01ol1o: what was it?
<clever> zeta_0: pkgs/development/haskell-modules/make-package-set.nix: developPackage =
<clever> zeta_0: first, we need to see what developPackage actually does
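What the name= override looks like in a default.nix, per the fix above (the cabal file is assumed to be my-script.cabal):

```nix
{ pkgs ? import <nixpkgs> { } }:
pkgs.haskellPackages.developPackage {
  root = ./.;
  name = "my-script"; # must match the cabal file; the directory name then doesn't matter
}
```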
<clever> gchristensen: thank you
<clever> zeta_0: what is inside `default.nix` ?
<clever> buckley310: the rest looks like it should work
<clever> zeta_0: callCabal2nix will handle cabal2nix for you
<clever> buckley310: recurseIntoAttrs is only needed to make nix-env -i search inside it
<clever> and the count isnt going up
<clever> i just noticed, i'm barely 1/6th into the build, lol
<clever> i think an `exit 1` and `--keep-failed` in the installPhase would help more than a big computer
<clever> gchristensen: and has nearly 3000 files to compile with cmake
<clever> [495/2983] Running gen_proto.py on onnx/onnx.in.protokeFiles/gloo.dir/transport/tcp/buffer.cc.o.cc.o
<clever> gchristensen: its maxing out every core in my machine
<clever> fresheyeball: %Cpu(s): 71.2 us, 28.6 sy, 0.0 ni, 0.0 id, 0.1 wa, 0.0 hi, 0.0 si, 0.0 st
<clever> fresheyeball: dang is this a huge build, lol
<clever> zeta_0: you need a my_script.cabal file, and i think `cabal init` will make one
<clever> mwdev: examples are in all-packages.nix
<clever> mwdev: i would not expect that line to work, you have to pass the linux package thru pkgs.linuxPackagesFor first
<clever> zeta_0: cabal2nix only works if you have a cabal file
<clever> fresheyeball: pytorch is building without cuda support
<clever> -- USE_CUDA : OFF
<clever> fresheyeball: does pytorch need git submodules to build?
<clever> fresheyeball: ./. is basically the default for nix-build, so that part can be omitted
<clever> fresheyeball: and what `nix-build` cmd to reproduce the error?
<clever> hexa-: that filename is on line 37 of your pastebin, and its also in nixpkgs
<clever> hexa-: youll need to read the install-grub.pl file, and see what exactly its doing when mirrored boots is set, what efiSysMountpoint does, and what it runs grub-install with
<clever> mwdev: you must put a linuxPackages set into kernelPackages, such as pkgs.linuxPackages_5_3
<clever> mwdev: then line 4 is a single derivation with linux, and line 8 is expecting a set of all linux packages, and thats where it fails
<clever> hexa-: ive not tried mirrored efi yet, so i dont know how its setup properly
<clever> hexa-: i think its upset that /boot isnt vfat tagged as the ESP
<clever> mwdev: what is the contents of linux-5.3.nix?
<clever> fresheyeball: double-check to see if the build system already did that, and you're just copying the wrong path
<clever> fresheyeball: you need to run something to turn it into a cuda_cmake_macros.h
<clever> fresheyeball: can you push your nix code somewhere and i can try to reproduce it?
<clever> mwdev: which nixos option did you set to what value?
<clever> o1lo01ol1o: you can always start with `pwd ; ls -lh` in the installPhase, to figure out whats available and where you are
<clever> zeta_0: relative to the file that contains the path
<clever> zeta_0: and must end with something not a /
<clever> zeta_0: paths start with either ./ (relative path) or / (absolute)
<clever> zeta_0: its a path, pointing to whatever directory the nix file is in
<clever> hexa-: and one thing i always worry about, somebody can just modify the initrd to phone-home with the pw, and then wait for you to ssh in and unlock it
<clever> CMCDragonkai: i'm not sure there is a simple option for that
<clever> hexa-: if you want remote unlock, then your /boot and /boot/efi must be plaintext
<clever> something (havent checked what) will add luks support to grub, and grub will have its own pw prompt, before the initrd
<clever> hexa-: and when using efiSysMountpoint, you could make /boot just a normal dir on /
<clever> hexa-: efiSysMountpoint is only for the .efi binary, the kernel/initrd remain on /boot, outside of efiSysMountpoint
<clever> hexa-: the efi binaries can optionally be on a /boot/efi partition, or a subdir of /boot
<clever> hexa-: and nixos always puts the kernel/initrd on the /boot partition
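A minimal configuration.nix fragment matching the layout being described (note the NixOS option is spelled efiSysMountPoint, with a capital P):

```nix
boot.loader.grub.efiSupport = true;
boot.loader.efi.efiSysMountPoint = "/boot/efi";
# kernel and initrd still land in /boot, outside the ESP
```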