<clever>
ylwghst: it can also help to look at where nvidia will use things
<clever>
ylwghst: that will make nixos-rebuild use a custom version of nixpkgs
<clever>
ylwghst: nixos-rebuild -I nixpkgs=/home/clever/nixpkgs test
<clever>
ylwghst: it can also be helpful to purposely put an error in, to confirm that its even reading the file
2017-11-29
<clever>
ylwghst: the fact that it doesnt fail, means that pkgs.nvidia-x11 wasnt even referenced
<clever>
ylwghst: also, .override doesnt take a function, so that override should just fail with an error
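For illustration, a minimal Nix sketch of the distinction being made above: `.override` is given an attribute set of the package function's arguments, while `.overrideAttrs` is the variant that takes a function. `hello` here is only a stand-in for whichever package is being overridden:

    with import <nixpkgs> {};
    {
      # .override replaces arguments passed to the package's function
      a = hello.override { stdenv = clangStdenv; };
      # .overrideAttrs takes a function from the old derivation attrs to new ones
      b = hello.overrideAttrs (old: { doCheck = false; });
    }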
<clever>
ylwghst: sometimes, its simpler to just git clone nixpkgs and modify that file directly
<clever>
d6e: if you run "nixos-rebuild build" it will create a result symlink, then run "nix-store -qR result | grep nginx" and you should see the path to the generated config
<clever>
dhess_: and because the path type calls toString on line 223, the check function wont accidentally copy your key to /nix
<clever>
dhess_: yeah
<clever>
dhess_: as long as you call toString on the path before using it in any script, it shouldnt wind up in the store
<clever>
dhess_: the only thing the "path" type does is check that the first character is a /
<clever>
dhess_: toString just converts the path to a raw string, without copying it anywhere
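For illustration, a minimal Nix sketch of that difference (the key path is hypothetical):

    let key = /home/user/mykey.private;
    in {
      copied = "${key}";      # interpolation copies the file into /nix/store and yields the store path
      kept   = toString key;  # toString yields "/home/user/mykey.private" and copies nothing
    }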
<clever>
is this a client or host key?
<clever>
and that file has to exist at that path at runtime, and nix wont do anything to help
<clever>
dhess_: toString will basically insert the result of $(realpath ./mykey.private) into the string
<clever>
dhess_: it doesnt, thats why it breaks nixops
<clever>
nixops already does its own things to manage the ssh host keys
<clever>
stphrolland: launch-xnest spawns an x server in a window, and the -A hsdm-config attribute of the default.nix will make a script that can run hsdm, just aim $DISPLAY at the xnest server
<clever>
stphrolland: the only missing part, is a gui for the login window
<clever>
stphrolland: it has working pam, and xorg launching, and i think launching an actual user session
<clever>
stphrolland: ive also worked on a new display manager, written from scratch in haskell
<clever>
stphrolland: yeah, depends on if you want to build a modified version, or just read the source
<clever>
i tend to use `nix-shell '<nixpkgs>' -A slim` followed by `unpackPhase`, but my method also downloads the dependencies required to build it and could be slower
<clever>
eraserhd: nix-repl '<nixpkgs>'
<clever>
lol
<clever>
taktoa: and what do you write in latex? lol
<clever>
taktoa: i dont think your yard is big enough
<clever>
dtzWill: as in, i could just fire up a travis job, and hang the entire container host
<clever>
dtzWill: things that any userland app can run
<clever>
gchristensen: oh right, there are also x86 opcodes that just hang the machine
<clever>
gchristensen: how does that work?
<clever>
thats why the iommu is on backwards
<clever>
with the rpi, the gpu is the master, and the arm cores are more of an after-thought and act as slave devices
<clever>
Dezgeg: yep
<clever>
the reason for that, is to run untrusted code in the arm, while having HDMI and DRM keys in a special area that only the GPU can access
<clever>
it restricts what phys memory the arm mmu can access
<clever>
Dezgeg: oops, ^^^
<clever>
dtzWill: also, i think the closest thing the rpi has to an IOMMU, is on backwards, lol
<clever>
rather than working, and being a security hole
<clever>
and if you forget to replace a handle with the phys, it just doesnt work
<clever>
but if you never give them a phys addr, they only have handles to operate on
<clever>
i was thinking more along the lines of giving them a phys addr makes them think they can attack things, and also lets you miss an argument somewhere
<clever>
and the kernel will sed in the phys address
<clever>
and the userland puts that blobid into the operands of the command stream
<clever>
dont even give the userland physical access, just give it a blob id#
<clever>
that was my general idea for that driver at the time
<clever>
so you really need a kernel or xorg "server" that can multiplex many clients into a single GPU
<clever>
one problem with the current design i was doing, is that you cant really have 2 GL clients at once
<clever>
i think traditionally, more of this is implemented in the kernel/xorg, and the end-users (glxgears for ex) dont do as much
<clever>
that gives the userland write access to the DMA buffers
<clever>
and which blob you wind up mapping, depends on what handle you selected last
<clever>
so you allocate a blob and get a handle, select that handle, then mmap the char device into ram
<clever>
i think i added a select ioctl, to pick an object, and implemented mmap on the character device
<clever>
dtzWill: the client side library directly opened a char node in /dev and used ioctl to implement the rendering
<clever>
dtzWill: in the past, i have implemented my own opengl library, from scratch, that didnt even involve xorg
<clever>
dtzWill: but then what that uses to talk to xorg, and the gpu, can be anything
<clever>
dtzWill: the mesa libs the client loads implement the opengl api
<clever>
dtzWill: or the mesa client might directly access the gpu over a node in /dev and have total access to all physical ram
<clever>
dtzWill: the mesa client side for example may have special formats for the textures, before it ships them to xorg
<clever>
dtzWill: and the reason why, is that you dont just have some specialized drivers in the xorg side, the client mesa libs in the end application also have to be specialized
<clever>
dtzWill: and the correct mesa implementation for your GPU gets swapped in impurely
<clever>
dtzWill: at build-time, you link against the dumb mesa, then at runtime, nixos will use LD_LIBRARY_PATH to redirect things to /run/opengl-driver/lib
<clever>
__monty__: i just never wrote a cabal file, i distribute it with a default.nix
<clever>
eraserhd: what does nix-channel --list and sudo nix-channel --list say?
<clever>
Turion: and if you were to use opengl in the example i linked, you would just add mesa to the buildInputs on line 5, i believe
<clever>
eraserhd: without -f, it will search every channel, with -f, it will search the nixpkgs in $NIX_PATH
<clever>
eraserhd: if you have multiple channels, then that can differ
<clever>
Turion: personally, i havent used stack or cabal on any of my own projects, i just make a pkgs.runCommand containing ghcWithPackages, then i just run ghc -o foo
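For illustration, a minimal sketch of that runCommand/ghcWithPackages pattern; the source file name and the package list are hypothetical:

    with import <nixpkgs> {};
    runCommand "foo" {
      buildInputs = [ (haskellPackages.ghcWithPackages (p: [ p.text ])) ];
    } ''
      # copy the source out of the read-only store so ghc can write .o/.hi files next to it
      cp ${./foo.hs} foo.hs
      mkdir -p $out/bin
      ghc -o $out/bin/foo foo.hs
    ''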
<clever>
dtzWill: and my previous laptop doesnt support that
<clever>
dtzWill: i have also discovered, that if you compile rocksdb with gcc -O4, it will wind up using some sse4 stuff (the compiler is to blame)
<clever>
dtzWill: and mplayer has runtime cpu detection, where it can compile both, then swap the function pointer out based on /proc/cpuinfo at runtime
<clever>
dtzWill: mplayer for example, has many inline asm chunks, and c chunks implementing the same thing, and some ./configure options let you switch between the sse or the c variant
<clever>
the main limit i can see is things like the layout of `struct stat`, and #ifdef's
<clever>
dtzWill: what kinds of disagreements can happen if you try to use the same IR for many platforms?
<clever>
dtzWill: what about switching between x86-64 and aarch64 (64bit arm)
<clever>
dtzWill: but translating it to x86-32 or armv7 would be harder?
<clever>
dtzWill: so it should be trivial to optimize the IR to your specific x86-64 cpu, to take advantage of what features you have
<clever>
dtzWill: something ive been wondering, how feasible is it to compile something large/complex to llvm, then finish it off later, and how cross-platform would that llvm be?
<clever>
:D
<clever>
yeah
<clever>
changing that will break the very things its meant to fix
<clever>
thats why its called stateVersion
<clever>
fresheyeball: that tells nixos what version your state is
<clever>
fresheyeball: that is normal and should not be changed
<clever>
adisbladis: they couldnt just call it a uuid? lol
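For reference, the option being discussed looks like this in configuration.nix; the release value here is only an example, and it should stay at whatever release the system was first installed with:

    { ... }:
    {
      system.stateVersion = "17.09";  # example value; do not bump it on upgrades
    }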
<clever>
fresheyeball: sdk1 is your ext4 rootfs, and grub cant install to that
<clever>
in my case, sda is a random data drive, not involved in the boot
<clever>
sda is simply the first sata drive to be initialized
<clever>
fresheyeball: can you gist the output of "mount" and "lsblk" ?
<clever>
fresheyeball: oh, what does nix-channel --list say?
<clever>
and then it can detect such an issue
<clever>
but after an upgrade, the version can change, and then it has to re-run grub-install
<clever>
fresheyeball: and at each nixos-rebuild, it knows there has been no change, and doesnt try to update the MBR
<clever>
so it may not notice that the grub.device is set wrong
<clever>
fresheyeball: nixos uses /boot/grub/state to keep track of what bootloader has last been installed, in my case, 2.02
<clever>
fresheyeball: can you gist the output of "mount" ?
<clever>
fresheyeball: what is your actual boot device?
<clever>
fresheyeball: what is boot.loader.grub.device set to in configuration.nix?
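For reference, a hedged example of the option being asked about; the device value is illustrative, and it must name the whole disk, not a partition:

    { ... }:
    {
      boot.loader.grub.device = "/dev/sda";  # example: the disk the BIOS boots from, not /dev/sda1
    }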
<clever>
fresheyeball: you want "sudo -i", not "sudo su"
<clever>
justanotheruser: assuming you're on the fixed livecd image, it will just chroot in
<clever>
with the whole target mounted under /mnt, the same as if you were going to install
<clever>
nixos-install --chroot
<clever>
justanotheruser: yeah
<clever>
justanotheruser: if the entire /boot is lost, then nixos will recreate all of it, and re-install the MBR stubs
<clever>
justanotheruser: nixos has some hidden files in /boot that keep track of the MBR state
<clever>
every arm chip does it differently
<clever>
sphalerite: x86 is far more standardized
<clever>
sphalerite: ive had trouble getting anything arm to boot under qemu, the problem is the lack of a solid definition of how the cpu goes from reset -> firmware -> bootloader
2017-11-28
<clever>
and silently ignores the rebuilds
<clever>
it obeys PYTHONPATH, and uses the old version
<clever>
catern: if you then nix-build nixops, and run a new build via ./result/bin/nixops
<clever>
catern: so if you enter a nix-shell with nixops in the buildInputs, nixops gets added to PYTHONPATH as well
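For illustration, a minimal shell.nix of the kind being described; this is the setup that puts the channel's nixops onto PYTHONPATH inside the shell:

    with import <nixpkgs> {};
    stdenv.mkDerivation {
      name = "nixops-env";
      # python's setup hook adds nixops (and its propagated python deps) to PYTHONPATH
      buildInputs = [ nixops ];
    }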
<clever>
catern: python's setup hook adds all inputs to PYTHONPATH
<clever>
catern: one annoying thing ive noticed, nixops has itself and python in the propagatedBuildInputs
<clever>
catern: you could try adding echo's to the setup hook and confirm it all
<clever>
catern: yeah, i believe thats all right
<clever>
and at runtime, docker puts all the layers back together
<clever>
so when you rebuild, the derivations that havent changed can reuse the layers they made before
<clever>
FenTiger: most things like the tarball generator just put the entire closure into a single output
<clever>
FenTiger: do you want to generate a list of derivations, that each transform one dependency?
<clever>
FenTiger: thats doing import from derivation, which causes nix to build some things at eval time, which generally ruins performance
<clever>
FenTiger: but the eval cant access it
<clever>
FenTiger: exportReferencesGraph creates a variable/file at build time, for the derivation its in, which i believe contains the runtime closure of the referenced thing
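For illustration, a minimal sketch of the exportReferencesGraph behaviour described above; `hello` stands in for whatever you want the runtime closure of:

    with import <nixpkgs> {};
    runCommand "closure-graph" {
      # creates a file named "graph" in the build directory, containing the
      # runtime closure of hello
      exportReferencesGraph = [ "graph" hello ];
    } ''
      cp graph $out
    ''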
<clever>
FenTiger: after the eval and build have finished, nix will basically grep the output, to see what build-time deps it still refers to, and those become the runtime deps
<clever>
FenTiger: the runtime dependencies are not known at eval time
<clever>
catern: hmmm, not sure on that one
<clever>
catern: /etc/hosts it to 127?
<clever>
catern: ah, you may want to use (import <nixpkgs> { config = {}; overlays = []; }).fetchzip
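For illustration, a hedged sketch of that pattern used to fetch and import a pinned nixpkgs; the URL and hash below are placeholders:

    let
      bootstrap = import <nixpkgs> { config = {}; overlays = []; };
      src = bootstrap.fetchzip {
        url    = "https://github.com/NixOS/nixpkgs/archive/release-17.09.tar.gz";  # example URL
        sha256 = "0000000000000000000000000000000000000000000000000000";           # placeholder; replace with the real hash
      };
    in import src { }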
<clever>
catern: to fetch nixpkgs or a normal nix file?
<clever>
catern: why do you want a function that unpacks?
<clever>
samueldr: yeah, thats not right, its corrupting the nix store!
<clever>
and does nix-store --verify --check-contents complain about it being corrupt?
<clever>
that is strange
<clever>
samueldr: was the file empty before the build started?
<clever>
samueldr: does the empty file persist after the build has ran?
<clever>
and with the rescue boot option, you can auto-generate the whole ramdisk
<clever>
you can now test xen unikernels without having xen installed, without using root
<clever>
so that runs nixos, under xen, under qemu
<clever>
the image contains the xen hypervisor
<clever>
catern: this generates a nixos image, then boots that image under qemu