2019-06-28

<clever> line 93 then runs evalResources, and passes it that list
<clever> nh2: resourcesByType will gather all of the ec2SecurityGroups from your deployment files
<clever> nh2: checking source...
<clever> jasom: this causes services.xserver.enable to spawn a vnc server, and then all of the normal login prompt and session stuff "just works" within that
<clever> jasom: one min
<clever> nh2: i think you can do resources.ec2SecurityGroups.foo = { nodes, ... }: ...
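a rough sketch of what such a nixops resource could look like; the attribute names (region, description, rules and their fields) are recalled from the nixops EC2 examples rather than from this conversation, so treat them as assumptions:
    resources.ec2SecurityGroups.foo =
      { nodes, ... }:
      {
        region      = "us-east-1";
        description = "hypothetical security group";
        rules = [
          { fromPort = 22; toPort = 22; sourceIp = "0.0.0.0/0"; }
        ];
      };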
<clever> qqii: this tells you where to look in all-packages.nix, which is referencing the python packages
<clever> > builtins.unsafeGetAttrPos "youtube-dl-light" pkgs
<clever> qqii: if you search nixpkgs/pkgs/top-level/python-packages.nix for youtube-dl-light, youll see that both packages share the default.nix, but one has an override on 2 params
<clever> qqii: one depends on ffmpeg, the other doesnt
<clever> qqii: nix-diff $(nix-instantiate '<nixpkgs>' -A youtube-dl-light) $(nix-instantiate '<nixpkgs>' -A youtube-dl)
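for reference, the override in python-packages.nix described above looks roughly like this; the two parameter names are a best guess at that era of nixpkgs, not copied from it:
    youtube-dl-light = callPackage ../tools/misc/youtube-dl {
      # same default.nix as youtube-dl, with two optional features switched off
      ffmpegSupport    = false;
      phantomjsSupport = false;
    };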
<clever> you just need to keep track of what each slot is for, so you know which one to delete later
<clever> that allows having a backup password, so you can still open it if you lose the keyfile, and then you can re-add a new keyfile
<clever> snow: this reveals that there are 8 slots on my system, and only slot 0 is currently in use
<clever> [root@system76:~]# cryptsetup luksDump /dev/nvme0n1p2
<clever> snow: basically, luks has a list of keys that can all unlock the disk, and you can have a mix of passphrases and keyfiles
<clever> snow: do you know about how luks can have multiple headers and passwords?
<clever> snow: try `-d /key/keyfile -l 4` ?
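the `-d`/`-l` flags map onto the NixOS initrd options; a minimal sketch assuming the device and paths from this conversation (option names as they exist in the scripted initrd of this era):
    boot.initrd.luks.devices."crypted" = {
      device             = "/dev/nvme0n1p2";  # whichever partition luksDump was run against
      keyFile            = "/key/keyfile";    # hypothetical path, must be reachable from the initrd
      keyFileSize        = 4;                 # same meaning as cryptsetup's -l 4
      fallbackToPassword = true;              # ask for the slot-0 passphrase if the keyfile is missing
    };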
<clever> snow: does the file exist at that path?
<clever> snow: i'm paranoid and feel it would be more secure to use a keyfile on a normal fs
<clever> snow: and that likely doesnt exist yet
<clever> snow: ah wait, yeah, it looks like its failing to decrypt due to failing to find the key file
<clever> snow: and what path did you try to mount ext4 from?
<clever> snow: what does `blkid /dev/sde` and `blkid /dev/md0` say?
<clever> (which lets you fix the boot partition on the SD card)
<clever> and the usb gadget one, cant boot the rpi, it instead makes the rpi emulate a mass-storage device! (exposing the SD card to the host)
<clever> the publicly provided bootcode.bin files, only function over SD cards, mass-storage, ethernet, and usb gadget
<clever> oh, usb can even load it as a usb gadget, when the rpi is plugged into a usb host
<clever> Shados: that phase, will load bootcode.bin from one of the above sources, and execute it
<clever> Shados: the gpu boot rom can boot from, the SD card (on 2 different busses), nand flash, spi flash, usb (both mass-storage, and ethernet)!
<clever> Shados: yep
<clever> or just `nix-store -r /nix/store/foo` to download it from a configured binary cache
<clever> makefu: if you use nix-copy-closure (and have nix on the legacy machine), those hard-coded paths will exist
<clever> makefu: which will just copy the cfg, and assert that its valid
<clever> makefu: only thing missing, is to add named-checkconf into the mix, replace the ${cfg.configFile} on 195, with the result of `runCommand "named.conf-checked" { buildInputs = [ bind ]; } ''cp ${cfg.configFile} $out ; named-checkconf $out''`
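spelled out as its own binding, the check described above would be roughly (a sketch, not tested against the bind module):
    checkedConfig = pkgs.runCommand "named.conf-checked" { buildInputs = [ pkgs.bind ]; } ''
      cp ${cfg.configFile} $out
      named-checkconf $out    # a non-zero exit here fails the whole build
    '';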
<clever> and 28/29 tells it to use that dir
<clever> 191 will also ensure `named` owns the state dir
<clever> shouldnt care what the uid is
<clever> line 187 and 195 will tell bind to lookup `named` in `/etc/passwd` at runtime, and drop-root at the right times
<clever> bind creates a user called `named` and thats the only real problem i can foresee
<clever> so youll need to handle that yourself
<clever> obviously, the stuff to manage /etc and auto-create users wont work
<clever> then just copy-closure the whole unit file to a machine, and symlink it somewhere systemd looks
<clever> and if you toss in a -I nixos-config=./configuration.nix, you can build from any configuration.nix you want
<clever> makefu: this will let you build the .service file for any existing nixos service
<clever> nix-build '<nixpkgs/nixos>' -A 'config.systemd.units."tor.service".unit'
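combining it with the -I trick gives one self-contained command (the configuration.nix path and tor.service are just placeholders):
    nix-build '<nixpkgs/nixos>' -I nixos-config=./configuration.nix \
      -A 'config.systemd.units."tor.service".unit'
the result should be a store path containing the rendered tor.service, which nix-copy-closure can then ship to the legacy machine.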
<clever> makefu: i also have another trick you may like
<clever> ahh
<clever> (it will validate the resulting config at build time, and fail the build if its invalid)
<clever> makefu: thinking of something like this?
<clever> ahhhh, and you can have multiple "pci data structures" in a single rom, each with its own "code type"!
<clever> ah, 2 systems, the old one with just a raw x86 entrypoint, and then a new one with a table that describes all the capabilities and entry point type
<clever> i'm guessing if you want to support 2 options, you need 2 roms, and spend 2 BAR's on it?
<clever> and further down, you have the code type, x86, open firmware, efi, ...
<clever> and expects the function to return control back to the bios, eventually
<clever> the bios will just call whatever function happens to be there, on bootup
<clever> Miyu-chan: offset 03h is the main one, `Entry point for INIT function. Power-On Self-Test (POST) does a FAR CALL to this location`
<clever> but the other lists it as a 16 bit field, of aa55
<clever> one site lists it as 0x55 0xaa, which i assumed was 55aa
<clever> https://wiki.osdev.org/AMD_Atombios says the rom should start with 0xaa55! (byte ordering issues)
<clever> Miyu-chan: a few pages down on https://resources.infosecinstitute.com/pci-expansion-rom/#gref it mentions the format for the expansion roms, and that it must start with 0x55aa as a signature
<clever> Miyu-chan: but modern systems, have pxe booting built into the bios, and the NIC's that do have "roms" tend to have flash memory, which is where https://ipxe.org/howto/romburning comes in
<clever> Miyu-chan: in the old days, you could program a rom, shove it into a socket on your NIC, and then configure the bios to network boot, and it will just run whatever is on the rom
<clever> Miyu-chan: network cards are also special
<clever> Miyu-chan: you may now read the horror stories above :P
<clever> the only way to be 100% sure, is to either prevent reflashing entirely, or to flash it externally, so the untrusted code is never run
<clever> levdub: which then lies about having reflashed the hardware
<clever> levdub: the problem, is that if the reflashing happens by just booting the machine into a special os, it could be booting that inside a hypervisor
<clever> but if you can reflash the bios, its game over
<clever> the whole point of secureboot, is to ensure only signed blobs of code are loaded while booting, so you can trust the login prompt when it asks for a pw
<clever> Miyu-chan: and thats part of what secureboot wants to stop you from doing (which also blocks your ability to just run your own firmware)
<clever> Miyu-chan: but if you get root on a machine that supports flashrom, you could malware the bios itself
<clever> sadly, ive only found 1 machine where flashrom has write support, and 2 or 3 with read-only support
<clever> in the case of my router, its only 1mb
<clever> -rw-r--r-- 1 root root 1.0M Jun 28 04:55 router.bios
<clever> Miyu-chan: this will read the entire bios, and dump it to a file
<clever> [root@router:~]# nix run nixpkgs.flashrom -c flashrom -p internal:laptop=this_is_not_a_laptop --read router.bios
<clever> one min...
<clever> but similarly, depending on the motherboard, i could just reflash the whole damn bios :P
<clever> and then you come along, rent a machine, and boom, i own your os
<clever> i could rent a machine from something like packet.net, write some malware to all the pci devices, then cancel the machine
<clever> yeah
<clever> this is also scary for any datacenter that does baremetal
<clever> have fun sleeping tonight :P
<clever> and that infection will persist even after you swap the hdd!
<clever> Miyu-chan: and then your entire OS boots under a VM without you even knowing it
<clever> Miyu-chan: in theory, if the expansion rom is actually an eeprom (or regular flash), malware could inject a whole damn hypervisor into there
<clever> and then you just load and execute the right section, and link all the dynamic symbols
<clever> Miyu-chan: i'm guessing its a PE file with sections tagged for loading on certain arches
<clever> Miyu-chan: https://wiki.osdev.org/AMD_Atombios some random docs on the expansion rom for AMD GPU's
<clever> Devices may have an on-board ROM containing executable code for x86 or PA-RISC processors, an Open Firmware driver, or an EFI driver. These are typically necessary for devices used during system startup, before device drivers are loaded by the operating system.
<clever> the bios will just blindly run a blob it finds in a pci-e device, if the fields are set right
<clever> yes
<clever> raid controllers often use that to show a "press f7 to configure raid" menu
<clever> video cards use that to initialize themselves
<clever> Miyu-chan: and on the subject of vBIOS stuff you mentioned, pcie devices can include a firmware blob, and the specs from legacy bios will just blindly run that firmware on bootup
<clever> so you dont get any weird numa problems, like core1 asked for data, then core2 got the reply, and they have to sync their L1 caches up via ram
<clever> which goes back to the core that initiated the request
<clever> and each fifo has its own IRQ to signal when its finished
<clever> each core, has its own fifo to issue read/write requests
<clever> this is because pcie supports multiple command queues
<clever> and never has a given irq hit more than 1 core
<clever> in this gist, youll notice my nvme device, has 8 IRQ's, and is firing requests off to all my cores!
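the same thing can be checked on any linux box without the gist; each nvme queue shows up as its own interrupt line:
    grep nvme /proc/interrupts    # one row per queue/IRQ, one column of counts per cpu core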
<clever> Miyu-chan: thats something entirely different, and not really in the scope of iommu
<clever> so the cpu will enforce what each card can do
<clever> but there is iommu, where you can configure a memory mapping on a per-slot basis
<clever> Miyu-chan: the typical CPU will just blindly obey the attack
<clever> so the pcie device must ask the cpu to fetch something from ram (but it still bypasses the opcode execution system)
<clever> but in modern pcie, all pcie devices are wired directly to the cpu
<clever> reversing the master/slave setup
<clever> Miyu-chan: in the legacy pci days, bus master means the pci device could drive the addr lines on its own, and directly read ram without having to involve the cpu
<clever> Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
<clever> Miyu-chan: and i havent even touched on DMA yet :P
<clever> but when you have 16g of video ram, and only a 256mb window into it, there must be some mapping going on, and you must understand that, or it will screw you over
<clever> to learn which regions i shouldnt touch :P
<clever> i also had fun just dd'ing to different offsets, and watching the screen to see what corrupted what
<clever> i used lspci to find the phys addr, and then configured MTD to just treat that region as a block device
<clever> ive done exactly that, back when my entire gpu ram fit within a single BAR
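one way to reproduce that trick today; the phram parameters are from memory and the BAR address is the one from the lspci output below, so treat this as a sketch rather than the exact commands used:
    # expose the BAR's physical address range as an MTD device, then as a block device
    modprobe phram phram=gpumem,0xc0000000,256M
    modprobe mtdblock
    dd if=/dev/zero of=/dev/mtdblock0 bs=1M seek=16 count=1   # scribble on one spot of vram and watch the screen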
<clever> and it has such a large chunk of the address space, so you can copy a large chunk at once
<clever> and then just use a dumb memcpy to upload textures
<clever> i highly suspect there is memory mapping going on, so you can configure the gpu to map portions of that 256mb window, to portions of the gpu ram
<clever> Miyu-chan: my GPU, has a whole 256mb chunk of the address space (but i think it has 16g of gpu ram)
<clever> Region 0: Memory at c0000000 (64-bit, prefetchable) [size=256M]
<clever> Shados: yeah, thats what i was guessing
<clever> my sound card has 16kb of "ram" and no "io"
<clever> yeah, io ports still exist, lol
<clever> next usb device only has 256 bytes of "ram"
<clever> the 1st usb controller i see, only has 4kb of "ram" and no "io" at all (likely all memory mapped io)
<clever> the IO use the special in/out opcodes in the cpu, while the "ram" is just a normal physical address you can read/write from
<clever> my 1st sata controller has 6 regions, 5 of them are IO (of varying sizes) and one is 1024 bytes of "ram"
<clever> depends on the device
<clever> i believe those are the BAR's
<clever> look for the `Region 0: ` parts
<clever> [root@amd-nixos:~]# lspci -vv
<clever> looks like you're right
<clever> `program the Base Address Registers (commonly called BARs) to inform`
<clever> so when you try to write to that addr, the cpu knows which pci slot to route to, and what lanes to multiplex it over
<clever> but with modern systems, the cpu will sniff all attempts to configure the BAR's, and then remember what device is at every phys addr
<clever> (its basically just ISA, with automatic configuration and detection)
<clever> and then (in legacy pci), when you put the right addr onto the bus, the card will respond correctly
<clever> and the bios is supposed to assign a unique physical address to them on bootup
<clever> that define what the card can do (io/ram) and how much (kb, mb?, gig?)
<clever> you start with a number of pci BAR's
<clever> so the OS interface to the cards is still the same as legacy pci
<clever> and despite pcie being packet based over multiple serial links, its meant to emulate legacy pci
<clever> i believe
<clever> the bios is supposed to configure that in both cpu's
<clever> yeah
<clever> so it definitely wont be starved for bandwidth
<clever> but it has a whopping 64 lanes between the CPU's
<clever> and if you do use 2 cpus, the performance will depend on which cores are trying to access which pcie devices
<clever> but if you only install 1 cpu in the dual motherboard, half the pcie slots are dead
<clever> or dual-socket, with 64 lanes of each socket wired to the other socket (inter-cpu linkage), then each cpu gets 4 slots of 16x (64 lanes each)
<clever> and the motherboard could either be configured for 8 slots of 16x on a single cpu socket (128 lanes total)
<clever> something else interesting i saw in a recent-ish thing about a new cpu, is that the cpu itself had 128 pcie lanes on the socket
<clever> Miyu-chan: https://www.fpga4fun.com/PCI-Express.html has more info
<clever> and that your motherboard is already doing basically the same thing
<clever> 00:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD/ATI] RD890/RD9x0/RX980 PCI to PCI bridge (PCI Express GPP Port 0)
<clever> i believe its in the specs
<clever> refclock, tx, and rx pairs again
<clever> 2 pairs on one side, 1 pair on the other
<clever> and in the 6th photo, you can see the pc side better
<clever> but you will run into a bottleneck if you try to push 4 lanes worth of traffic thru it
<clever> so its expanding 1 lane into 4 lanes, which are then treated as 4 slots of 1x
<clever> the refclock, tx and rx pairs
<clever> which is sending 3 diff pairs to each pcie slot
<clever> and in the 4th photo, the expansion board has a chip on the bottom
<clever> if you check the 2nd photo on the above product, youll see the end that goes into the PC, is entirely passive
<clever> so its turning 1 lane into 4 * 1lane, with a bandwidth bottleneck
<clever> i highly doubt those are actually 16 lane slots though
<clever> Miyu-chan: this is a device that splits a single lane, into 4 16x slots
<clever> think of it like an ethernet switch
<clever> yeah
<clever> so you send data to the port multiplier, over slot 5, (using whatever lanes the bios said 5 is)
<clever> port multipliers are like USB hubs, and dont change the mapping, they just act like a proxy
<clever> so the cpu and motherboard vendors dont share how to configure that
<clever> and the OS is supposed to just leave it as it is
<clever> at powerup, the bios is supposed to program the cpu, to say which lanes go to which slots, in what order
<clever> if you know the cpu and motherboard internals (most OS's dont), maybe
<clever> if you want to do things "right" you also need a port multiplier, which can take 4x, and then split it into 4 * 1x
<clever> so if you use a 1x extender in a 4x slot, you're throwing 3 lanes out
<clever> the motherboard is all point to point, so a 4x slot, always has 4 lanes dedicated to it, even if your extender can only use 1 lane
<clever> in the case of mining, the GPU may rely on the extra power pins, so they may have a reason to use a full 16x slot
<clever> that last mining extension, just doesnt wire lanes 2-16 up, and uses a full 16x slot
<clever> some motherboards have pci-e slots with the end missing, so you can fit longer cards in the slot
<clever> Miyu-chan: the specs say it should sanely negotiate down to fewer lanes (and as low as 1 lane)
<clever> rather than cutting up a whole usb3 cable, and using the existing pairs
<clever> using sas cables, lol
<clever> ah, it looks like they soldered directly to a bare usb3 plug
<clever> looking close, those dont look like normal usb3 cables
<clever> oh, lol
<clever> the special parts (usb<->pcie) are un-modified
<clever> and if you mess up, just buy another usb 3.0 cable at the corner store, no special hardware lost
<clever> and then use the "usb3->pcie" adapter on the end, to do half the heavy lifting
<clever> samueldr: but, you could take that "mining rig" cable, chop off the usb connector on the end, and solder it to an rpi4
<clever> which opened up the option of getting your lanes crossed :P
<clever> samueldr: it just used many usb3 cables, lol
<clever> samueldr: now, combine that with the hackaday you linked
<clever> samueldr: the one i saw wasnt single-lane, but full 16x
<clever> so, just avoid crossing the lanes, and convert it back to PCIE on the far end, and your GPU wont know the difference
<clever> Miyu-chan: then you can use the cheap differential pairs (and connectors) in usb3 cables, to transfer PCIE long-ish distance
<clever> Miyu-chan: oh, that reminds me, ive seen guys making a custom PCI-E board, that just routed each PCIE lane to a usb 3.0 connector
<clever> good luck getting the closed-source nvidia drivers to run on an rpi though, lol
<clever> with the limitation that its a 1x lane, so you cant get full bandwidth
<clever> in theory, you could do the same to an rpi4, replace the usb 3.0 controller with a PCI-E slot, and then plug any PCI-E device into it
<clever> this crazy hack removed the pci-e GPU from a thin-client, and hot-wired a PCI-E slot onto it
<clever> Miyu-chan: this article is also related to the rpi4
<clever> Miyu-chan: already following
<clever> a multi-purpose hole, lol
<clever> samueldr: the apple repair guy on youtube called USB-C the cloaca of ports :P
<clever> samueldr: yeah, type-c is a mess, the connector and the capabilities are all over the place
<clever> so you can get more wattage (at 3A or 5A) by just upping the voltage too
<clever> USB-C can also do many voltages
<clever> samueldr: at surface level, that seems like the "most open" gpu you can get, since its just plain old x86?
<clever> samueldr: its just an x86 computer in pci-e form
<clever> samueldr: this is a video about an intel developement sample GPU
<clever> samueldr: one min...
<clever> ive even been crazy enough to write my own opengl driver, from scratch, right down to probing the IO ports in the 3d core :P
<clever> and yeah, i have read a lot of the docs on the 3d side of things, but the firmware is still closed on the rpi, and the hw video decode stuff is fully closed, so i wonder what else there is, that could possibly compete
<clever> in the rpi4 announcement, they mentioned they are sticking to videocore4 (the 3d core) because its the "most open" in the arm market
<clever> samueldr: i was told that the PD_SENSE pin is wired up, to signal to the charger what its demands are
<clever> the new 4 model
<clever> the rpi has usb 3.0, proper gigabit ethernet, usb-c power/gadget, and with some uber-hacking, you could maybe get pci-e out of it
<clever> lordcirth_: the rpi4 cpu is also a better design, that can do more work for a given clock rate
<clever> lordcirth_: i may have even heard of rumors of LN2 cooling
<clever> lordcirth_: many
<clever> and then i need to convince the x86 gdb to open a core dump from arm
<clever> it was an emulated segfault, within the x86 side, but i think it produced an arm core file
<clever> so it was impossible to debug
<clever> and gdb was heavily confused by the arch mix
<clever> lordcirth_: i was trying to run teamspeak, but it segfaulted within pulseaudio
<clever> ive also done silly things like emulate x86 on an rpi
<clever> but it happens to support emulating the host arch with zero trouble, lol
<clever> because its meant to be used for foreign arches
<clever> yeah, qemu-user always lacks hw accel
<clever> which simulated the lack of sse4
<clever> but, i then switched to using qemu-user-x86_64, to emulate 64bit (in userland) on 64bit, lol
<clever> because qemu by default emulates the oldest 64bit capable cpu (which lacks sse4)
<clever> i was initially using qemu-system to debug the sse4 problem
<clever> ive discovered sse4 features in some stuff iohk ships
<clever> yeah
<clever> openssl (at least years ago) produced a poisoned build, like python does now
<clever> kexec-tools also has the same issue, but correctly fails, rather than producing poisoned builds
<clever> but, the python build scripts treat a failure to build libffi as soft, and just disable ffi support for you
<clever> so, the build should have failed! (and does for pkgs.libffi)
<clever> libffi detects a 64bit cpu, generates 64bit ASM, then the v6/v7 gcc goes "wut" and errors out
<clever> there is also a different ASM problem here
<clever> ASM is still a separate issue
<clever> but then down-stream, the binary was copied to a v6 only cpu, and failed to execute
<clever> and the configure script simply ignored the nix $system flag, and built what the cpu could handle
<clever> in this case, the gcc was capable of both v6 and v7 builds, and says which one it was generating for, in the elf headers
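for reference, the arch a given binary was actually built for can be read straight out of those elf attributes (the path is just a placeholder):
    readelf -A result/bin/some-binary | grep Tag_CPU_arch    # e.g. "Tag_CPU_arch: v7" on an armv7 build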
<clever> its designed as a setup hook, so you could just throw it into a v6 stdenv and you're done
<clever> i was using that to detect v7 opcodes in a v6 build
<clever> samueldr: this will cause a build failure if the binaries depend on the wrong arch
<clever> samueldr: one min
<clever> fasd: thats what --upgrade does
<clever> nixpkgs and hackage have name collisions
<clever> jackdk: but super.bar is the haskell package bar, while pkgs.bar is the nixpkgs bar
<clever> pie_: haskellPackages does self.callPackage ./foo { bar = pkgs.bar; };
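in overlay form, that disambiguation looks roughly like this (foo and bar are the placeholder names from the conversation):
    haskellPackages.override {
      overrides = self: super: {
        # self/super resolve haskell packages; pkgs.bar forces the nixpkgs bar instead
        foo = self.callPackage ./foo.nix { bar = pkgs.bar; };
      };
    }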
<clever> and it prefers the right side
<clever> and it just blindly searches the set created by //
<clever> i believe newScope is implemented via `callPackageWith ( self.nixpkgs // self )`
<clever> pie_: it will prefer the version from your package set over the parent one
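a quick nix repl one-liner shows the right-bias of // that makes your own set win over nixpkgs:
    nix-repl> { bar = "from nixpkgs"; } // { bar = "from self"; }
    { bar = "from self"; }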

2019-06-27

<clever> pie_: i found an example somewhere and copied it
<clever> pie_: i dont look at the implementation anymore, i just refer to simple-test.nix
<clever> also, `pkgs.newScope self` is identical to `callPackageWith ( self.nixpkgs // self )` i believe
<clever> so you dont need a reference to pkgs
<clever> pie_: line 4/8, the callPackage it generates can search up the call chain for you
<clever> and as long as you're using the self on line 3, and the self of each overlay right, future overlays will impact past overlays
<clever> and you can recursively call overrideScope' to add more overlays ontop of it
<clever> pie_: makeScope + newScope create a new thing, that contains an overrideScope' and callPackage
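a minimal sketch of such a scope (the file names are hypothetical):
    myScope = pkgs.lib.makeScope pkgs.newScope (self: {
      foo = self.callPackage ./foo.nix { };
      bar = self.callPackage ./bar.nix { };   # ./bar.nix can take foo as an argument, resolved via self
    });
    # myScope.callPackage and myScope.overrideScope' now both exist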
<clever> and the foldl' here will just repeatedly call .extend with each overlay in a list
<clever> cardanoPkgsBase has an .extend
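roughly what that fold amounts to (cardanoPkgsBase is from the code under discussion, listOfOverlays is a placeholder):
    lib.foldl' (acc: overlay: acc.extend overlay) cardanoPkgsBase listOfOverlays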
<clever> composeExtensions takes 2 overlays and turns it into 1 overlay
<clever> overrideScope' and extend both take an overlay, and apply it to something
<clever> makeScope adds that
<clever> pie_: overrideScope' then
<clever> pie_: and you can then fold .extend over things
<clever> > (pkgs.extend (self: super: { a = 42; })).a
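and the scope version of the same toy example, reusing the hypothetical myScope sketched above (overrideScope' takes its overlay in the usual self/super order):
    > (myScope.overrideScope' (self: super: { a = 42; })).a
    42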