<clever>
fresheyeball: that might mount them weirdly, just try running the mount from the CLI
<clever>
nh2: there will be a lot of double-counting if you count packages in both nixpkgs and debian for example
<clever>
nh2: is the count for debian just the package manager and core os scripts, or every package in the package manager?
<clever>
nh2: the google at the end! lol
<clever>
and some recent FS's omit the fields it does have, because they dont store things in the dir listing
<clever>
yeah
<clever>
nh2: also, readdir is supposed to stat for you, in bulk, but not every FS implements it fully
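A rough sketch of the "readdir that stats for you" idea, using Python's os.scandir (which surfaces whatever type/stat info the directory listing already provided, when the filesystem supplies it):

```python
import os
import tempfile

def list_sizes(path):
    """Map filenames to sizes using scandir's cached DirEntry info."""
    sizes = {}
    with os.scandir(path) as it:
        for entry in it:
            # is_file()/stat() can often avoid an extra stat() syscall,
            # depending on what the filesystem stored in the dir listing
            if entry.is_file(follow_symlinks=False):
                sizes[entry.name] = entry.stat(follow_symlinks=False).st_size
    return sizes

d = tempfile.mkdtemp()
with open(os.path.join(d, "a.txt"), "w") as f:
    f.write("hello")
print(list_sizes(d))  # {'a.txt': 5}
```

Whether the bulk info is actually free depends on the filesystem, as noted above; on ones that omit it, scandir falls back to real stat() calls.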
<clever>
2 threads would let both drives seek and lookup data
<clever>
id also expect threads to help when dealing with a raid mirror and a well-designed fs
<clever>
the codebase wasnt nearly that big, but from what i heard, it was still pretty massive
<clever>
nh2: lets just say the project was large enough to need petabytes of log files
<clever>
slyfox: i think a large chunk of the unfolding in this .hi is generics and eq
<clever>
yep :)
<clever>
fresheyeball: and nixos-rebuild switch
<clever>
fresheyeball: the new config it generates will configure everything to mount up next time you boot
<clever>
fresheyeball: dont need fstab, just mount /dev/sdc1 /media/foo && nixos-generate-config
<clever>
fresheyeball: yeah, nixos-generate-config just scans what is currently mounted
<clever>
then it should be easy
<clever>
fresheyeball: you may want to make a backup of that, and then diff it to confirm it did the right thing
<clever>
fresheyeball: if you just mount them manually and re-run nixos-generate-config it will update hardware-configuration.nix
<clever>
nh2: the main reason for cutting down what it can read is just to minimize what can influence the hash of $out, but also stopping other impurities is a benefit
<clever>
nh2: yeah
<clever>
nh2: yep, done exactly that, which is something i think the nix sandbox can enforce getting right
<clever>
simpson: ive also heard that GHC can run haskell code at compile-time, to do something
<clever>
nh2: one thing i was thinking of with the nix style, is that it forces things to be more applicative, once you turn the nix sandbox on, it cant access an impurity, and you must either manually or automatically include everything
<clever>
and your msg was cut off at about 454 bytes, not counting the command/channel
<clever>
nh2: irc is limited to 512 bytes per message, including the command and channel name
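The 512-byte cap can be sketched out in a few lines; the prefix/command/target lengths here are made-up examples, not the actual ones from this conversation:

```python
def max_payload(prefix, command, target):
    """How many bytes of message text fit in one 512-byte IRC line.

    The 512 bytes include the sender prefix, command, target, the
    " :" before the text, and the trailing CRLF.
    """
    overhead = (len(prefix) + 1       # ":nick!user@host "
                + len(command) + 1    # "PRIVMSG "
                + len(target)         # "#chan"
                + 2                   # " :"
                + 2)                  # "\r\n"
    return 512 - overhead

# e.g. ":nick!user@host PRIVMSG #nixos :<text>\r\n"
print(max_payload(":nick!user@host", "PRIVMSG", "#nixos"))  # 478
```

A longer hostmask or channel name eats further into the payload, which is why the observed cutoff varies per sender.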
<clever>
i think the msg got cut off a bit, "more powerful (more expressi"
<clever>
it would need to regen the dependency info every time the include path changes
<clever>
but the paths are also different
<clever>
though under nixos, that will break for nix-shell based builds, because of the lack of timestamps
<clever>
nh2: yeah, cmake flagged every system header as a dependency of the .o's
<clever>
nh2: ah, i think cmake can do that, let me check some things
<clever>
nh2: ah
<clever>
nh2: i know somebody that has worked on a source repo so large that the stat phase of make took something like 30 minutes
<clever>
so instead of shake directing the entire build, it will transform the dep tree and commands into a .nix tree
<clever>
slyfox: i have also thought about adding a nix backend to shake, so it can spit out a .nix file for building each .o as its own derivation
<clever>
nh2: hmmm, what if i modify the source in a .hs, but not its types, the hashes of the types shouldnt change, and it may not rebuild things that depend on those types?
<clever>
yeah
<clever>
nh2: i havent looked into out-of-tree builds with ghc yet, so i tend to leave a mess in the source dir
<clever>
nh2: that almost defeats the entire point of having the hashes, all it does is prevent mistakes if the timestamps get out of sync in the other direction
<clever>
this was enough to clean the build and purge everything
<clever>
cp -r ${./src}/*.hs src/
<clever>
but in that case, the final output may have the same age as every source file, so it doesnt even check things
<clever>
slyfox: because the timestamps of all files are jan 1st 1970
<clever>
slyfox: i have noticed that if i import an out of date project with src = ./.;, that ghc --make wont always rebuild things
<clever>
nh2: then use the store to save those hash=output pairs, and sandboxes to enforce it only accessing the Y.hs and state1 files
<clever>
nh2: and if the compiler puts too much data about the neighboring files into state1, then it just recompiles the Y unit more often
<clever>
nh2: i was thinking something along the lines of run code X on all source files to produce state1, then hash(state1 + code Y) = compile(code Y, state1)
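The hash(state1 + code Y) = compile(code Y, state1) scheme described here could be sketched as a content-addressed cache; the `cache` dict stands in for the nix store, and `compile_unit` is a hypothetical compiler, not a real one:

```python
import hashlib

cache = {}  # hash -> compiled output; stands in for the nix store

def compile_unit(source, state1):
    # hypothetical "compiler"; the real thing would be GHC etc.
    return "compiled(%s)" % source

def cached_compile(source, state1):
    """Only recompile unit Y when state1 or Y's source changes."""
    key = hashlib.sha256((state1 + "\0" + source).encode()).hexdigest()
    if key not in cache:
        cache[key] = compile_unit(source, state1)
    return cache[key]

s1 = "types-of-all-modules"        # the state1 produced by pass X
cached_compile("Y.hs contents", s1)
cached_compile("Y.hs contents", s1)  # same key: cache hit, no recompile
print(len(cache))  # 1
```

As the next message notes, the trade-off is all in what goes into state1: the more of the neighboring files' detail it captures, the more often the key changes and Y recompiles.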
<clever>
slyfox: i am seeing a number of hashes in the .hi file, those are probably there to prevent mistakes and force rebuilds at the right time
<clever>
nh2: i would just import GHC at that point, and tell it to parse things for me, but skip the compiling
<clever>
over 100x increase in size, though i am deriving Eq, Show, and Generic
<clever>
slyfox: dang!, 8000 lines for a 31 line data definition, with zero imports
<clever>
slyfox: ooo
<clever>
jophish: but allowing those rebuilds may also improve perf
<clever>
jophish: ah, so the .hi is more than just type info, and that could cause problems if i wanted to rely on the hash of the .hi to trigger future rebuilds
<clever>
jophish: and would the .hi contain just the types, or the AST of the function for future optimization?
<clever>
jophish: then how would haskell deal with foo :: [a] -> [a] in a library? would the performance always suck, compared to if the same code was in the project?
<clever>
nh2: for polymorphic types like foo :: [a] -> [a], it cant know what functions to call at compile time, and has to either use sucky function pointers, or defer the compile until later on
<clever>
nh2: what if haskell just included the AST in the .o's and did more inlining at link time?
<clever>
otherwise, it wont notice the changes
<clever>
and then you need a way to rebuild the .hi quickly, to check if it has changed
<clever>
nh2: i believe there is a ghc -M, but you would need to get the .hi out of the .hs, and then divorce the hi from it, so changes to the .hs dont cascade and cause rebuilds
<clever>
nh2: since they lack proper header files
<clever>
nh2: yeah, and cases like haskell and java are a messy area
<clever>
nh2: i was thinking more along the lines of hashing every input to each .o file, to enforce that without the build system having to make a promise
<clever>
nh2: things like linking the .o's from each parallel process into the final program
<clever>
nh2: it would need not just checkpointing and the ability to resume, but also the ability to do parts of the build in parallel and pure, and then merge the results up
<clever>
and i dont want to bother with cmake or similar
<clever>
i do when the project is transitioning from a single gcc command to something slightly more complex
<clever>
if you #include something new and forget to update the makefile, weird things happen
<clever>
thats one thing ive messed up sometimes with hand-written Makefiles
<clever>
and if somebody ever tries to violate it, the sandbox will cause it to fail
<clever>
and just lock that in at commit time
<clever>
or, because of the purity of nix, you can gcc -M once, to create a dep-tree for the entire project
<clever>
but in the best case, you can gcc -M every file once, then reuse the .o from the binary cache and skip the compile
<clever>
yeah
<clever>
Dezgeg: it scans the #include statements, and spits out a partial Makefile rule
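A toy version of what gcc -M is described as doing: scan a source file's #include lines and emit a partial Makefile rule listing them as prerequisites of the .o. (Real gcc -M also resolves paths against the include path and recurses into the headers; this sketch only handles quoted includes.)

```python
import re

def dep_rule(name, source):
    """Emit a partial Makefile rule from a file's #include lines."""
    headers = re.findall(r'^\s*#include\s+"([^"]+)"', source, re.MULTILINE)
    return "%s.o: %s.c %s" % (name, name, " ".join(headers))

src = '#include "util.h"\n#include "log.h"\nint main(void) { return 0; }\n'
print(dep_rule("main", src))  # main.o: main.c util.h log.h
```

Make can then rebuild main.o whenever any listed header changes, which is exactly the bookkeeping that hand-written Makefiles tend to get wrong.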
<clever>
Dezgeg: gcc -M
<clever>
then gather all the .o's up and link
<clever>
send 5 .c files to every build slave in the hydra cluster
<clever>
but also, you get distcc for free
<clever>
yeah
<clever>
its no worse than /nix/store/.links/
<clever>
but you now need to rewrite every makefile in nix
<clever>
simpson: my idea was to ${./foo.c} every source and buildenv a bunch of ./foo.h's together, so it depends on the hashes of the individual source files, rather than the tar it came from
<clever>
and i dont think you can ever join a pid namespace
<clever>
i think because of unshare, there is no path on the host that you can just chroot into
<clever>
yeah
<clever>
ryantrinkle: ssh keypairs can probably do it
<clever>
it will default to pulse if pulseaudio is running
<clever>
Filystyn: f6 to change cards in alsamixer
<clever>
taktoa: and a kernel panic causes ping timeout
<clever>
taktoa: it means the pulseaudio libs are redirecting alsamixer to pulse
<clever>
hit f6 to change card
<clever>
yep
<clever>
alsa will clearly tell you if its going thru pulse or not
<clever>
pulse will also hijack alsamixer, you have to force it to use raw alsa
<clever>
uninitialized bytes leaking into the tar maybe?
<clever>
ah
<clever>
yeah, but its not fully clear what it does
<clever>
ah, run "file" on the final archive, gzip may try to store timestamps
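gzip's timestamp habit is easy to demonstrate with Python's gzip module: the header carries an mtime field, so two compressions of identical data can differ byte-for-byte unless you pin it (the equivalent of `gzip -n`):

```python
import gzip
import io

def gz(data, mtime):
    """Compress data with an explicit gzip header mtime."""
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb", mtime=mtime) as f:
        f.write(data)
    return buf.getvalue()

a = gz(b"hello", mtime=0)           # pinned timestamp: reproducible
b = gz(b"hello", mtime=0)
c = gz(b"hello", mtime=1234567890)  # different mtime: different bytes
print(a == b, a == c)  # True False
```

Pinning mtime=0 is the usual trick for reproducible archives.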
<clever>
ive used that method to confirm single-bit errors in video files that had drifted between half a dozen hdd's and network shares over many years
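The comparison method could look something like this sketch: hash fixed-size chunks of two copies of a file, and the first mismatching chunk index localizes where the bit flipped. (The chunk size and file contents here are illustrative.)

```python
import hashlib

def chunk_hashes(data, size=4096):
    """Hash each fixed-size chunk so corruption can be localized."""
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

good = bytes(16384)          # 4 chunks of zeros
bad = bytearray(good)
bad[5000] ^= 0x01            # flip a single bit in the second chunk
diff = [i for i, (a, b) in enumerate(zip(chunk_hashes(good),
                                         chunk_hashes(bytes(bad))))
        if a != b]
print(diff)  # [1] -> the corrupt region is the second 4 KiB chunk
```

With three or more copies, majority vote over the chunk hashes also tells you which copy drifted.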
<clever>
Mioriin: does the config file already exist?
<clever>
but that would require more changes to things
<clever>
my previous idea, is to write encrypt(chunk1) to a journal first, before you overwrite the plaintext copy with chunk0
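The journal-first idea above can be sketched in miniature: write the new data (plus its offset) to a journal and fsync it before touching the real file, so a crash mid-overwrite can be replayed from the journal. This is a simplification; a real tool would also checksum the journal record and handle replay on startup.

```python
import os
import tempfile

def journaled_overwrite(path, journal, offset, data):
    """Persist (offset, data) to a journal before overwriting in place."""
    with open(journal, "wb") as j:
        j.write(offset.to_bytes(8, "little") + data)
        j.flush()
        os.fsync(j.fileno())      # journal is durable before we proceed
    with open(path, "r+b") as f:  # only now touch the real file
        f.seek(offset)
        f.write(data)
        f.flush()
        os.fsync(f.fileno())

d = tempfile.mkdtemp()
p = os.path.join(d, "disk.img")
with open(p, "wb") as f:
    f.write(b"A" * 16)
journaled_overwrite(p, os.path.join(d, "journal"), 4, b"BBBB")
print(open(p, "rb").read())  # b'AAAABBBBAAAAAAAA'
```

The fsync ordering is the whole point: if power dies after the journal write but before (or during) the in-place write, the journal still holds everything needed to redo it.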
<clever>
sphalerite: and if you lose power, chunk1 is lost
<clever>
sphalerite: ah yeah, they have diagrams showing every step, and you can see that chunk0 is on the luks, chunks 2/3 are on the plaintext, and chunk1 is in ram
<clever>
pie_: yeah, i would prefer to just backup to an external system and redo the entire disk at once
<clever>
sphalerite: at a glance, i think that tool will move the data 10mb forward, as it does the encryption, to make room for the luks header, *reads more*
<clever>
so it would eventually finish as just 1 pv taking the entire disk
<clever>
then you could potentially drop the pv at the end of the disk, and expand the 1st pv
<clever>
and it has its own recovery system
<clever>
after expanding the VG, but before expanding the LV, you can use pvmove to shuffle lvm blocks within the array
<clever>
you could also defrag that if you had more time
<clever>
but that shouldnt harm perf
<clever>
the data will be slightly fragmented, <half2><half1> on the disk
<clever>
then delete the original, make a 2nd PV, and expand the VG and LV
<clever>
and either dd or rsync the data over
<clever>
then make a luks'd LV inside that
<clever>
pie_: if the usage is under 50%, yeah, shrink the fs, then make an lvm PV in the new space
<clever>
pie_: just had an idea on how it might be done more safely
<clever>
pie_: oh, how much disk used, and what filesystem?
<clever>
at what point will it get interrupted? how do you forcibly flush things to the journal? prevent the drive from re-ordering writes?
<clever>
and its complex to test things recovering after power loss
<clever>
id be wary about modifying that myself, more chance of data loss for anybody using it
<clever>
pie_: you would need to record the progress to disk, and have a way to resume it after a reboot
<clever>
and it would need the same password as before to resume the operation
<clever>
just record the latest block in the same journal you log the backup data in
<clever>
and also, i just thought, you dont need a full bitmask
<clever>
yeah
<clever>
but then you need to overhaul the entire cryptsetup process
<clever>
about the only way i could improve that, is to keep a bitmask of what blocks have been encrypted, and also a journal for when you're overwriting blocks
<clever>
ah
<clever>
sphalerite: at the cost of all data i would expect
<clever>
sphalerite: or if the user just yanks the battery?
<clever>
sphalerite: but what happens if the battery runs out?
<clever>
sphalerite: as long as power isnt interrupted, it should be safe
<clever>
sphalerite: nix-repl links directly into nix
<clever>
nixrl: odd, i would expect that to work
<clever>
Infinisil: set the installPhase in the derivation
<clever>
steveeJ: because of newScope, it will search obj. first, and it will still allow some defaults if you call f {}
<clever>
steveeJ: cp = pkgs.newScope obj; f = cp ./f.nix;
<clever>
steveeJ: then the newscope method would work better
<clever>
nixrl: the nix-shell cmd you ran, and how you tried to compile your program
<clever>
steveeJ: both of those will produce a final derivation, and both will have a .override to change things more
<clever>
steveeJ: or cp = pkgs.newScope obj; f = cp ./f.nix {};, it will check obj. first, then pkgs.
<clever>
steveeJ: either f = callPackage ./f.nix { foo = bar; };
<clever>
nixrl: can you copy/paste everything in the terminal window to a pastebin?
<clever>
nixrl: and i can -lreadline just fine
<clever>
nixrl: i do see a libreadline.a under /nix/store/72db5zhzxafbifa94xfkp5blvdhlbz4z-readline-6.3p08/lib
<clever>
nixrl: check that the readline it compiled actually has static libs
<clever>
thineye: that kind of change might affect the ability to load modules, so all modules have to be built against the right kernel, and then you want hydra helping out
<clever>
thineye: all i can think of then is to make a 2nd grsecurity kernel, and the grsecurity module will have to change which one lands in config.boot.kernelPackages
<clever>
ah
<clever>
thineye: would it be possible to just always put that config in the grsecurity kernel, and just leave it there if nvidia isnt found?
<clever>
jophish: should be enough to just add -static to line 25 of qemu-user.nix
<clever>
and if you reference the kernel like this, it will always be the right version
<clever>
thineye: then have a nixos module optionally insert the package into boot.extraModulePackages
<clever>
thineye: one thing that would be better though, is to make a special package in nixpkgs, that contains that modified nvidia driver, always compiled correctly
<clever>
thineye: yeah, you can set those options in a nixos module easily
<clever>
ive had to do the same when emulating some xen stuff
<clever>
ahh
<clever>
what does that do?
<clever>
its just accessing files as the current user and doing tcp
<clever>
copumpkin: i cant think of anything abnormal aws-cli would do