<clever>
jared-w: the roku2 even uses the exact same bcm2835 as the rpi1
<clever>
jared-w: while the arm deals with all of the UI and fetching the encrypted content stream over the network
<clever>
jared-w: the gpu firmware on the VPU, can then deal with DRM decryption, hw video accel, and even turning that video stream into a texture for use by opengl
<clever>
jared-w: there is a second MMU between "arm physical" and "real ram", so you can block the arm from ever accessing certain pages of ram
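[editor's note] A toy model of that second translation stage, assuming a simple page-granular mapping. The class and method names here are purely illustrative, not the real firmware interface:

```python
PAGE = 4096  # assume 4 KiB pages for the sketch

class StageTwoMMU:
    """Toy second-stage MMU: maps "arm physical" pages to real RAM pages.
    Pages with no entry are simply invisible to the ARM, which is how the
    VPU can carve out RAM regions the ARM may never touch."""
    def __init__(self):
        self.map = {}  # arm-physical page number -> real page number

    def grant(self, arm_page, real_page):
        self.map[arm_page] = real_page

    def translate(self, arm_phys_addr):
        page, offset = divmod(arm_phys_addr, PAGE)
        if page not in self.map:
            # no mapping: the ARM faults instead of seeing this RAM
            raise PermissionError("ARM access blocked at 0x%x" % arm_phys_addr)
        return self.map[page] * PAGE + offset

mmu = StageTwoMMU()
mmu.grant(0, 42)                    # arm page 0 -> real page 42
print(hex(mmu.translate(0x10)))     # 0x2a010
```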
<clever>
jared-w: the VPU is basically the management engine, and the arm is the slave being put into a cage
<clever>
jared-w: and it has since evolved into a sort of management engine type setup
<clever>
jared-w: the VPU used to be the only cpu in the chip
<clever>
jared-w: i lost the link, but one of the broadcom devs asked to just "throw an arm in there" for future use
<clever>
jared-w: and of course, i had to nixify everything i touched :P
<clever>
jared-w: and then it can boot as the 1st-stage, bootcode.bin or recovery.bin
<clever>
jared-w: if you link with this linker script, and then objcopy it to a .bin file, the mask rom will load it to 0x80000000 (and the origin must be set to that, or the linker gets everything wrong)
<clever>
jared-w: i have also reverse engineered the mask rom, and extracted the hmac-sha1 keys used to validate the 1st-stage files
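[editor's note] An HMAC-SHA1 validation check like the one described can be sketched with Python's standard library; the key below is a placeholder, not the real extracted key:

```python
import hmac
import hashlib

def firststage_valid(blob: bytes, tag: bytes, key: bytes) -> bool:
    # mask-rom-style check: recompute HMAC-SHA1 over the image and
    # compare it against the stored tag, in constant time
    expected = hmac.new(key, blob, hashlib.sha1).digest()
    return hmac.compare_digest(expected, tag)

key = b"not-the-real-key"      # placeholder, not the extracted key
blob = b"\x7fELF..."           # stand-in for a bootcode.bin image
tag = hmac.new(key, blob, hashlib.sha1).digest()
print(firststage_valid(blob, tag, key))   # True
```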
<clever>
jared-w: so i could go nuts if you want.....
<clever>
jared-w: i have had the fun of single-stepping thru linux, with gdb+jtag, to figure out why it was hanging on boot with zero printf's
<clever>
then things continue as you would expect
<clever>
and then linux mounts the rootfs, as directed by the root= in its cmdline
<clever>
start4.elf then loads kernel.img and the right dtb file for this model, and boots linux up on the arm side
<clever>
yep
<clever>
and start4.elf is never signature-checked
<clever>
by default, it will only load start4.elf from the SD card
<clever>
so you can do any order you want, and skip any mode you want
<clever>
bootconf.txt exists within the SPI flash, and controls the boot order and tftp ip
<clever>
the official SPI image, will initialize the ddr4 controller, then load start4.elf from either SD or tftp
<clever>
so you can un-brick things
<clever>
the official recovery.bin will re-flash the SPI chip, then delete itself
<clever>
if any location is missing or not signed, it silently moves to the next
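[editor's note] That silent-fallthrough boot order can be sketched as a loop; the loaders and the `signature_ok` stand-in below are illustrative, not the mask rom's actual code:

```python
def signature_ok(image: bytes) -> bool:
    # stand-in for the real hmac/signature check
    return image.startswith(b"SIGNED")

def first_stage_boot(sources):
    """sources: ordered (name, loader) pairs; a loader returns image
    bytes, or raises OSError if the medium is missing."""
    for name, load in sources:
        try:
            image = load()
        except OSError:
            continue                  # medium missing: silently move on
        if signature_ok(image):
            return name
        # present but unsigned: also silently move on
    raise RuntimeError("no valid 1st-stage found")

def missing():
    raise OSError("no SD card inserted")

order = [
    ("sd:recovery.bin", missing),
    ("spi:eeprom", lambda: b"garbage"),        # present but unsigned
    ("usb:dwc2", lambda: b"SIGNED recovery"),  # valid last resort
]
print(first_stage_boot(order))   # usb:dwc2
```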
<clever>
usb-device boot over the dwc2 controller (the usb-c port), same as the compute modules
<clever>
a tagged blob in the SPI eeprom
<clever>
recovery.bin on an SD card
<clever>
for the rpi4, the 1st-stage can be loaded from 3 locations, in the following order
<clever>
its more that the rpi foundation isnt obeying common sense
<clever>
jared-w: and the 2nd stage isnt signed, so its still a security nightmare :P
<clever>
but it can be upgraded, to add netboot support for the 2nd stage
<clever>
the 1st-stage is never loaded over the network
<clever>
so, they moved the 1st-stage file to an SPI eeprom on the rpi itself
<clever>
and they knew such bugs would happen again
<clever>
with the rpi4, they completely redid both ethernet and usb-host
<clever>
but, the early revisions of the mask rom, had bugs, causing netboot and usb-boot to fail in weird ways
<clever>
or usb mass-storage
<clever>
for later models, network boot was added to the mask rom, to allow fetching bootcode.bin over tftp
<clever>
for the rpi 1-3, the 1st stage is bootcode.bin on the SD card
<clever>
yeah
<clever>
jared-w: i was also surprised to find, the rpi4 requires the 1st-stage file to be correctly signed
<clever>
jared-w: i also want to get rpi4 support, but they changed so much, that i'll need to reverse engineer a lot more to make it work
<clever>
because every print routine eventually touched printf
<clever>
jared-w: i had to cross-compile an FPU-less glibc, to even get debug info out of it
<clever>
jared-w: the instant printf was run, it would bork, because it had FPU opcodes
<clever>
jared-w: rpi3 also failed hard, because i didnt grant linux permission to use the FPU
<clever>
jared-w: rpi3 had random segfaulting, because i didnt give linux permission to flush the L2 cache
<clever>
jared-w: and if i disabled SMP support in linux, the rpi2 lagged horribly, because with SMP disabled entirely, the L1/L2 cache didnt work, so every ram access was a cache-miss
<clever>
jared-w: rpi2 failed hard, because i didnt tell the arm core to enable SMP, so linux then faulted hard when it tried to use mutex stuff
<clever>
jared-w: lack of documentation on linux flags, and exactly which models they used
<clever>
jared-w: the readme claimed it was able to boot linux, but it took a month of work to even get linux to boot on rpi-open-firmware
<clever>
jared-w: the only blob that remains, is the mask rom, which is a rom, so i cant do much
<clever>
jared-w: for the rpi2 and rpi3, i can boot nixos with every blob that is removable, removed
<clever>
jared-w: yeah
<clever>
jared-w: and if you want to see more rpi stuff, you can watch #raspberrypi-internals
<clever>
requiring co-operation between 2 cross-compilers
<clever>
jared-w: thats 4 separate areas of the code, covering 2 radically different ISA's, on completely separate cpu cores
<clever>
prior to all of that code, the mac address was randomized on every boot
<clever>
jared-w: and then update the device-tree based on the serial#, to generate the right mac address
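[editor's note] One way to derive a stable MAC from a serial number is hashing it into a locally-administered address; this is an illustrative scheme, not necessarily what rpi-open-firmware actually does:

```python
import hashlib

def mac_from_serial(serial: int) -> str:
    """Derive a stable, per-board MAC from the serial# (illustrative)."""
    h = hashlib.sha256(serial.to_bytes(8, "little")).digest()
    octets = bytearray(h[:6])
    # set the locally-administered bit, clear the multicast bit
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join("%02x" % b for b in octets)

# same serial -> same MAC on every boot, unlike the old randomized one
print(mac_from_serial(0x12345678))
```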
<clever>
jared-w: i can then treat that symbol as a struct, and recover the values that BCM2708ArmControl.cc had copied over
<clever>
and nix just magically builds it all and the puzzle comes together
<clever>
so arm blobs get baked into vc4 blobs
<clever>
jared-w: the vc4 derivations depend on some of the arm derivations, and the entire `overlay = self: super: {` defines libraries that can be compiled for both vc4 and arm
<clever>
jared-w: the parent will re-spawn the child, and forward that attr to the new child, which repeats
<clever>
jared-w: this code will write an attr to the child, the child then evals things and emits json, and then when the child gets too fat, it writes its attr out a pipe
<clever>
jared-w: when the child gets too fat, its killed, and a new child is spawned, which resumes where the last one stopped
<clever>
jared-w: currently, hydra will keep that entire set in the parent, and then the child begins to eval attrs at a given point
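[editor's note] The kill-and-respawn scheme from the messages above can be sketched as a toy, simulating heap growth with a counter; this is not the real hydra/nix eval code:

```python
class Child:
    """Toy eval worker: its 'heap' grows with every attr it evaluates."""
    LIMIT = 3
    def __init__(self):
        self.heap = 0
    def eval_attr(self, attr):
        self.heap += 1                    # each eval leaks into the heap
        if self.heap > self.LIMIT:
            raise MemoryError(attr)       # "too fat": hand the attr back
        return {"attr": attr, "ok": True}

def eval_all(attrs):
    """Parent: keeps the full attr list, respawns the child whenever it
    gets too fat, and replays the attr that triggered the respawn."""
    results, child, queue = [], Child(), list(attrs)
    while queue:
        try:
            results.append(child.eval_attr(queue[0]))
            queue.pop(0)
        except MemoryError:
            child = Child()               # discard bloated child, resume
    return results

out = eval_all(["a%d" % i for i in range(7)])
print(len(out))   # 7 -- every attr evaluated despite the respawns
```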
<clever>
and it could just discard the child at any time, to reset the heap
<clever>
and keep the thunk in the parent
<clever>
it would do the evals for a given attr in a child proc
<clever>
jared-w: one of my forks of nix, was to allow a sort of un-eval
<clever>
jared-w: but that set is rooting each thing you eval, so it cant clean up after you
<clever>
jared-w: part of the problem, is that release.nix contains a set of 1000's of packages, and as you eval each value in the set, you increase the heap usage
<clever>
jared-w: so after the override is applied, as it creates the set within release.nix
<clever>
jared-w: hydraJob gets run on the result of the .override
<clever>
jared-w: but, the current aggregate job, must eval everything it depends on, causing performance problems
<clever>
jared-w: release-lib.nix maps that over nixpkgs, to aid in gc'ing a release.nix
<clever>
jared-w: this function will strip .override and many other things off, making GC simpler
<clever>
jared-w: .override will keep the pre-called function around, hydra has special logic to help there
<clever>
a set often turns into a string if you eval it again
<clever>
also, i have no idea how, but haskell.nix can change the types of already eval'd things
<clever>
and if one happens to appear inside an int or a string, it keeps the thing alive
<clever>
so it just blindly searches each object for pointers to other objects
<clever>
but the current gc library isnt aware of the internal structure of the c++ types
<clever>
yeah
<clever>
but you have to scan the heap, to see if pointers to those remain
<clever>
if you compute `foo + bar`, then you can GC both foo and bar
<clever>
jared-w: but it also can never regain spent ram
<clever>
jared-w: if you disable the GC library at compile time, it has no ram limit at all
<clever>
lol
<clever>
and remember, list1 + list2 has a cost of length(list1) + length(list2)
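[editor's note] Under that cost model, building a sublist by one-element concats is quadratic. A Python sketch of the difference (mirroring the lib-style sublist vs a builtin that does a single copy):

```python
def sublist_by_concat(xs, start, count):
    # mirrors lib.subList built from list concat, one element at a time:
    # each `out + [x]` copies the whole accumulator again -> O(count^2)
    out = []
    for x in xs[start:start + count]:
        out = out + [x]      # full copy of `out` on every iteration
    return out

def sublist_builtin(xs, start, count):
    # a builtins.subList-style primitive: one copy, of the new list's length
    return xs[start:start + count]

xs = list(range(10))
print(sublist_by_concat(xs, 2, 4))   # [2, 3, 4, 5]
print(sublist_builtin(xs, 2, 4))     # [2, 3, 4, 5]
```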
<clever>
look at lib.uniq and lib.subList in nixpkgs
<clever>
in this case, the problem is mostly about how subList was implemented
<clever>
builtins.subList is a single copy, of the length of the new list
<clever>
exponentially
<clever>
jared-w: so the more elements you subList, the slower it gets
<clever>
jared-w: and list concat in nix, involves copying an array of Value*'s
<clever>
jared-w: lib.subList and lib.uniq, are implemented by concat'ing lists, one element at a time
<clever>
jared-w: implementing builtins.subList gave a major performance boost, but i never finished that PR
<clever>
jared-w: the biggest performance cost in snack, is lib.uniq, which heavily abuses lib.subList, which is a performance nightmare
<clever>
jared-w: so i had to first write a cabal merger, that joins all the cabal files into one
<clever>
jared-w: also, snack only works within a single cabal file, and cardano is split over something like 20 cabal files
<clever>
jared-w: i got snack working in one branch, but it never became the official tool
<clever>
jared-w: the rest is just symlinking things together, and running `yarn install`
<clever>
jared-w: now that ive looked more closely at yarn2nix, its basically just translating yarn.lock into an array of pkgs.fetchurl calls, and nothing more
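[editor's note] A toy version of that translation step: pull the resolved tarball URLs out of a yarn.lock, which yarn2nix then turns into fetchurl calls. The lockfile snippet below is made up for the example:

```python
import re

LOCK = '''\
lodash@^4.17.0:
  version "4.17.21"
  resolved "https://registry.yarnpkg.com/lodash/-/lodash-4.17.21.tgz#679591c"
'''

def resolved_urls(lock_text):
    # grab every `resolved "..."` URL (dropping the #fragment); each one
    # would become a pkgs.fetchurl call in the generated nix
    return re.findall(r'resolved "([^"#]+)', lock_text)

print(resolved_urls(LOCK))
# ['https://registry.yarnpkg.com/lodash/-/lodash-4.17.21.tgz']
```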
<clever>
jared-w: documentation can lie, luke, use the source
<clever>
jared-w: it was faster to dive face-first into the source, and rewrite half of it, than to wait for it to finish running
<clever>
jared-w: while waiting for that, i also re-wrote half of snack, and got the eval down to 15mins, lol
<clever>
jared-w: when i was looking into snack, i started an eval for cardano-sl, 48 hours later, that one eval was still running
<clever>
jared-w: any time you get 2 drv files and expected only 1, nix-diff!
<clever>
jared-w: and then reading source to see why
<clever>
jared-w: mostly, just using nix-diff on 2 drv files, once it had done it twice
<clever>
jared-w: snack defeated this problem, by running filterSource on storepaths, to extract a single file out
<clever>
jared-w: and now it depends on its siblings, not itself!
<clever>
jared-w: and `/nix/store/hash/yarn.lock` doesnt get re-copied, because its already immutable
<clever>
jared-w: but add in `src = lib.cleanSourceWith { inherit filter; src = ./.; };` and now src is definitely copied to the store
<clever>
jared-w: the issue, is that `src = ./.;` and then `src + "/yarn.lock"` is identical to just `./yarn.lock`
<clever>
and then it regains its sanity
<clever>
but, you can just give it the paths like this, and bypass getting them from $src
<clever>
yarnLock = ./yarn.lock;
<clever>
packageJSON = ./package.json;
<clever>
jared-w: so now the node_modules gets rebuilt if ANYTHING changed
<clever>
jared-w: yarn2nix then did `src + "/yarn.lock"` to get the lock file, which was always /nix/store/hash-source/yarn.lock
<clever>
jared-w: there was a bug at one point in the daedalus stuff, i was using filterSource to clean up the src passed to yarn2nix
<clever>
change any part, and you have to spend 5 minutes rebuilding libsass
<clever>
yarn2nix is the former, all sources go into one derivation, then everything gets compiled at once into a node_modules
<clever>
jared-w: the true fix, is to figure out how to put both of them into a yarn offline cache
<clever>
jared-w: ive found that things can still build if i manually delete all of the @types/ from yarn.lock, but yarn keeps re-adding them
<clever>
jared-w: one of the options within screen
<clever>
Set default behaviour for new windows regarding if screen should change window title when seeing proper escape sequence. See also "TITLES (naming windows)" section.
<clever>
defdynamictitle on|off
<clever>
jared-w: yarn (even with --verbose) wont tell you whats going on, and just try to delete stuff from your offline cache, and re-download it
<clever>
jared-w: basically, foo-1.2.3 and @types/foo-1.2.3 are both foo-1.2.3.tar.gz, yet not the same foo-1.2.3.tar.gz
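[editor's note] One possible fix for that filename collision is keying cache entries by content hash as well as name; this is an illustrative scheme, not what yarn itself does:

```python
import hashlib

def cache_name(pkg_name, filename, data):
    # foo and @types/foo both resolve to foo-1.2.3.tgz, but with different
    # contents; prefixing the scope and a content hash keeps the two
    # offline-cache entries from clobbering each other
    digest = hashlib.sha256(data).hexdigest()[:12]
    scope = pkg_name.split("/")[0].lstrip("@") + "-" if "/" in pkg_name else ""
    return f"{scope}{digest}-{filename}"

a = cache_name("foo", "foo-1.2.3.tgz", b"real package")
b = cache_name("@types/foo", "foo-1.2.3.tgz", b"just typings")
print(a != b)   # True: no more collision in the offline cache
```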
<clever>
jared-w: there are now 5 different pairs of packages, that have different hashes/content, for the same tar filename
<clever>
jared-w: my latest problem with yarn2nix, is @types/ junk
<clever>
jared-w: i use the moretea yarn2nix
<clever>
jared-w: i just always have a block cursor at all times
<clever>
jared-w: yarn2nix offers both codegen and ifd routes, if the yarn.lock file is right, you can do all of the codegen within a derivation, so you dont have to update the codegen constantly
<clever>
the -U to screen forced utf8 support, fixing things
<clever>
jared-w: when i first moved to nixos, i had utf8 problems in the above cmd, because it bypassed .bashrc, and wasnt setting $LANG right
<clever>
yeah, screen is emulating it fully, tracking what the title should be on a per-window basis, and showing the right one
<clever>
its just correctly matching xterm's title to what the screen window last asked for
<clever>
but its not (yet) updating the titles within screen's UI
<clever>
jared-w: screen will remember the title of each window, and when i switch windows, it correctly updates xterm to whatever that window's title is
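[editor's note] The title update screen replays on a window switch is the standard xterm OSC "set window title" escape sequence, which can be built like this:

```python
def xterm_title(title: str) -> str:
    # OSC 0 ; title BEL -- xterm's "set icon name and window title" sequence
    return "\x1b]0;" + title + "\x07"

# roughly what screen emits to the outer xterm when you switch to a
# window whose last-requested title was "irssi"
seq = xterm_title("irssi")
print(repr(seq))   # '\x1b]0;irssi\x07'
```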
<clever>
jared-w: possibly because i'm using this to connect to the remote systems
<clever>
jared-w: 90% of my windows are just "screen", so its a bit difficult to find things in alt+tab
<clever>
jared-w: something that ive been stuck on for months, is simply setting the window title in both screen and xterm, via terminfo stuff
<clever>
jared-w: i'm not sure how, but :altscreen can toggle it, without changing $TERM, so brick/vim/less must be able to query if its supported
<clever>
jared-w: i tend to always use screen, which doesnt allow it by default
<clever>
screen itself (the program) uses alternate screens, but doesnt allow them internally by default, ctrl+a :altscreen toggles emulating them
<clever>
jared-w: vim/less use alternate screens to not overwrite your shell (and annoyingly, erase the whole editor session upon exit)
<clever>
jared-w: if alternate screen is enabled in your terminal emulator, suspendAndResume will also use it then
<clever>
jared-w: so you could run a shell, and brick will know to redraw when it exits
<clever>
jared-w: continue changes the state, halt lets you halt and return one final state, suspendAndResume will disable the curses UI and run an IO monad
<clever>
jared-w: brick also has 3 ways for you to return in an EventM, continue, halt, and suspendAndResume