2017-04-29

<clever> fresheyeball: running mount*
<clever> fresheyeball: that might mount them weirdly, just try mounting them from the CLI
<clever> nh2: there will be a lot of double-counting if you count packages in both nixpkgs and debian for example
<clever> nh2: is the count for debian just the package manager and core os scripts?, or every package in the package manager?
<clever> nh2: the google at the end! lol
<clever> and some recent FS's omit the fields it does have, because they dont store things in the dir listing
<clever> yeah
<clever> nh2: also, readdir is supposed to stat for you, in bulk, but not every FS implements it fully
<clever> 2 threads would let both drives seek and lookup data
<clever> id also expect threads to help when dealing with a raid mirror and a well-designed fs
<clever> the codebase wasnt nearly that big, but from what i heard, it was still pretty massive
<clever> nh2: lets just say the project was large enough to need petabytes of log files
<clever> slyfox: i think a large chunk of the unfolding in this .hi is generics and eq
<clever> yep :)
<clever> fresheyeball: and nixos-rebuild switch
<clever> fresheyeball: the new config it generates will configure everything to mount up next time you boot
<clever> fresheyeball: dont need fstab, just mount /dev/sdc1 /media/foo && nixos-generate-config
<clever> fresheyeball: yeah, nixos-generate-config just scans what is currently mounted
<clever> then it should be easy
<clever> fresheyeball: you may want to make a backup of that, and then diff it to confirm it did the right thing
<clever> fresheyeball: if you just mount them manually and re-run nixos-generate-config it will update hardware-configuration.nix
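
A sketch of that flow, using the example device and mountpoint from the messages above:

  mkdir -p /media/foo
  mount /dev/sdc1 /media/foo
  nixos-generate-config    # regenerates hardware-configuration.nix from what is mounted
  nixos-rebuild switch     # the new config will mount it on every future boot
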
<clever> nh2: the main reason for cutting down what it can read is just to minimize what can influence the hash of $out, but also stopping other impurities is a benefit
<clever> nh2: yeah
<clever> nh2: yep, done exactly that, which is something i think the nix sandbox can enforce getting right
<clever> simpson: ive also heard that GHC can run haskell code at compile-time, to do something
<clever> nh2: one thing i was thinking of with the nix style, is that it forces things to be more applicative, once you turn the nix sandbox on, it cant access an impurity, and you must either manually or automatically include everything
<clever> and your msg was cut off at about 454 bytes, not counting the command/channel
<clever> nh2: irc is limited to 512 bytes per message, including the command and channel name
<clever> i think the msg got cut off a bit, "more powerful (more expressi"
<clever> it would need to regen the dependency info every time the include path changes
<clever> but the paths are also different
<clever> though under nixos, that will break for nix-shell based builds, because of the lack of timestamps
<clever> nh2: yeah, cmake flagged every system header as a dependency of the .o's
<clever> CMakeFiles/toxvpn.dir/depend.make:CMakeFiles/toxvpn.dir/interface_linux.cpp.o: /nix/store/vclzd48jmzl7hw0y4s7j7xzsf8c96g00-libtoxcore-20160907/include/tox/tox.h
<clever> nh2: ah, i think cmake can do that, let me check some things
<clever> nh2: ah
<clever> nh2: i know somebody that has worked on a source repo so large that the stat phase of make took something like 30 minutes
<clever> so instead of shake directing the entire build, it will transform the dep tree and commands into a .nix tree
<clever> slyfox: i have also thought about adding a nix backend to shake, so it can spit out a .nix file for building each .o as its own derivation
<clever> nh2: hmmm, what if i modify the source in a .hs, but not its types, the hashes of the types shouldnt change, and it may not rebuild things that depend on those types?
<clever> yeah
<clever> nh2: i havent looked into out-of-tree builds with ghc yet, so i tend to leave a mess in the source dir
<clever> nh2: that almost defeats the entire point of having the hashes, all it does is prevent mistakes if the timestamps get out of sync in the other direction
<clever> this was enough to clean the build and purge everything
<clever> cp -r ${./src}/*.hs src/
<clever> but in that case, the final output may have the same age as every source file, so it doesnt even check things
<clever> slyfox: because the timestamps of all files are jan 1st 1970
<clever> slyfox: i have noticed that if i import an out of date project with src = ./.;, that ghc --make wont always rebuild things
<clever> nh2: then use the store to save those hash=output pairs, and sandboxes to enforce it only accessing the Y.hs and state1 files
<clever> nh2: and if the compiler puts too much data about the neighboring files into state1, then it just recompiles the Y unit more often
<clever> nh2: i was thinking something along the lines of run code X on all source files to produce state1, then hash(state1 + code Y) = compile(code Y, state1)
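
A rough Nix sketch of that idea (all names hypothetical; "state1" stands for whatever per-module interface data the first pass produced):

  # one derivation per compilation unit; the sandbox guarantees it can only
  # read Y.hs and state1, so $out is a pure function of their hashes
  compileUnit = state1: src: pkgs.runCommand "Y.o" { nativeBuildInputs = [ pkgs.ghc ]; } ''
    ghc -c ${src} -i${state1} -hidir . -o $out
  '';
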
<clever> slyfox: i am seeing a number of hashes in the .hi file, those are probably there to prevent mistakes and force rebuilds at the right time
<clever> nh2: i would just import GHC at that point, and tell it to parse things for me, but skip the compiling
<clever> over 100x increase in size, though i am deriving Eq, Show, and Generic
<clever> -rw-r--r-- 1 sarria sarria 129K Apr 26 17:34 Types.hi
<clever> -rw-r--r-- 1 sarria sarria 1.1K Apr 25 19:44 Types.hs
<clever> slyfox: dang!, 8000 lines for a 31 line data definition, with zero imports
<clever> slyfox: ooo
<clever> jophish: but allowing those rebuilds may also improve perf
<clever> jophish: ah, so the .hi is more than just type info, and that could cause problems if i wanted to rely on the hash of the .hi to trigger future rebuilds
<clever> jophish: and would the .hi contain just the types, or the AST of the function for future optimization?
<clever> jophish: then how would haskell deal with foo :: [a] -> [a] in a library?, would the performance always suck, compared to if the same code was in the project?
<clever> nh2: for polymorphic types like foo :: [a] -> [a], it cant know what functions to call at compile time, and has to either use sucky function pointers, or defer the compile until later on
<clever> nh2: what if haskell just included the AST in the .o's and did more inlining at link time?
<clever> otherwise, it wont notice the changes
<clever> and then you need a way to rebuild the .hi quickly, to check if it has changed
<clever> nh2: i believe there is a ghc -M, but you would need to get the .hi out of the .hs, and then divorce the hi from it, so changes to the .hs dont cascade and cause rebuilds
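
For reference, GHC's own dependency generator (real flags; the output file name is arbitrary):

  ghc -M Main.hs -dep-makefile .depend   # emits Makefile-style deps between modules
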
<clever> nh2: since they lack proper header files
<clever> nh2: yeah, and cases like haskell and java are a messy area
<clever> nh2: i was thinking more along the lines of hashing every input to each .o file, to enforce that without the build system having to make a promise
<clever> nh2: things like linking the .o's from each parallel process into the final program
<clever> nh2: it would need not just checkpointing and the ability to resume, but also the ability to do parts of the build in parallel and pure, and then merge the results up
<clever> and i dont want to bother with cmake or similar
<clever> i do when the project is transitioning from a single gcc command to something slightly more complex
<clever> if you #include something new and forget to update the makefile, weird things happen
<clever> thats one thing ive messed up sometimes with hand-written Makefiles
<clever> and if somebody ever tries to violate it, the sandbox will cause it to fail
<clever> and just lock that in at commit time
<clever> or, because of the purity of nix, you can gcc -M once, to create a dep-tree for the entire project
<clever> but in the best case, you can gcc -M every file once, then reuse the .o from the binary cache and skip the compile
<clever> yeah
<clever> Dezgeg: it scans the #include statements, and spits out a partial Makefile rule
<clever> Dezgeg: gcc -M
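
For a hypothetical main.c, the output is a partial Makefile rule naming every header the compile would read:

  $ gcc -M main.c
  main.o: main.c /usr/include/stdio.h util.h
  $ gcc -MM main.c    # same, but omits the system headers
  main.o: main.c util.h
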
<clever> then gather all the .o's up and link
<clever> send 5 .c files to every build slave in the hydra cluster
<clever> but also, you get distcc for free
<clever> yeah
<clever> its no worse than /nix/store/.links/
<clever> but you now need to rewrite every makefile in nix
<clever> simpson: my idea was to ${./foo.c} every source and buildenv a bunch of ./foo.h's together, so it depends on the hashes of the individual source files, rather than the tar it came from
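
A rough sketch of that idea (hypothetical file names; linkFarm and runCommandCC are nixpkgs helpers):

  let
    headers = pkgs.linkFarm "headers" [
      { name = "foo.h"; path = ./foo.h; }
    ];
  in pkgs.runCommandCC "foo.o" { } ''
    cc -c ${./foo.c} -I${headers} -o $out
  ''

The .o then only rebuilds when foo.c itself or one of the linked headers changes, not when anything else in the tree does.
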
<clever> simpson: and i have had thoughts on how to recreate ccache within nix, in a pure manner
<clever> simpson: ccache gives you similar benefits at a .o level
<clever> you just need to filter out .git when hashing it
<clever> :D
<clever> pie_: some of it is interesting

2017-04-28

<clever> they delete the old versions
<clever> happens every time they make an update
<clever> NickHu: i suspect the file its downloading is a 404 page with a timestamp
<clever> NickHu: when it fails to download, it should give a storepath, run "file" on that
<clever> Ralith: megabytes :P
<clever> and it gets between 500MB/sec and 900MB/sec on raw seq reads with dd
<clever> oh, it also appears in lspci
<clever> 09:00.0 Non-Volatile memory controller: Intel Corporation Device f1a5 (rev 03)
<clever> its clearly keyed, but the screw to hold it in doesnt fit right
<clever> copumpkin: after seating it properly, it shows up here, and works as a normal block device
<clever> [root@amd-nixos:~]# nvme list
<clever> Node          SN                Model                Namespace  Usage                  Format       FW Rev
<clever> /dev/nvme0n1  BTPY652506Q0512F  INTEL SSDPEKKW512G7  1          512.11 GB / 512.11 GB  512 B + 0 B  PSF109C
<clever> copumpkin: when i first popped the nvme in, i was also unable to find it anywhere, turns out i didnt push it all the way into the socket
<clever> boot.initrd.availableKernelModules = [ "nvme" ]; includes it in the initrd, and auto-loading finished the job
<clever> without the log device, the improper shutdown would have resulted in data loss
<clever> the pool can survive without the cache and log devices, oh right
<clever> was able to force-import it from the initrd shell, and then fix the config
<clever> so the pool wouldnt import
<clever> then this morning, i discovered that i didnt have nvme in the initrd
<clever> and yesterday, i added an NVME cache + log to my zfs pool
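
The commands for that look roughly like this (hypothetical pool and partition names):

  zpool add tank log /dev/nvme0n1p1     # SLOG, absorbs synchronous writes
  zpool add tank cache /dev/nvme0n1p2   # L2ARC read cache
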
<clever> yeah, i switched to zfs shortly after that incident
<clever> then a garbage collection took another hour
<clever> after an hour of crunching, it errored out and mounted the disk read-only
<clever> and btrfs didnt like 30,000 small files in a single dir
<clever> when i made that comment, i had btrfs on a spinning rust disk
<clever> or a virtualbox
<clever> can just ssh into any linux box and run it
<clever> copumpkin: this lets you run the exact same eval process as hydra, without having to configure/install it
<clever> copumpkin: one sec
<clever> if the file is still required
<clever> gleber_: also, files like /etc/dhcpcd.duid can be configured via configuration.nix
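
One way to do that is the generic environment.etc mechanism (the DUID value here is just a placeholder):

  environment.etc."dhcpcd.duid".text = "00:01:00:01:aa:bb:cc:dd:ee:ff:00:11";
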
<clever> then it might be possible, but ssh is far simpler
<clever> ah, havent seen that one
<clever> 22 isnt nat'd, so only the host can ssh in
<clever> you can also see how i configured nat, so external systems can access services in the container
<clever> ryantrinkle: you can also configure the authorizedKeys in there
<clever> yeah
<clever> and i dont think you can ever join a pid namespace
<clever> i think because of unshare, there is no path on the host that you can just chroot into
<clever> yeah
<clever> ryantrinkle: ssh keypairs can probably do it
<clever> it will default to pulse if pulseaudio is running
<clever> Filystyn: f6 to change cards in alsamixer
<clever> taktoa: and a kernel panic causes ping timeout
<clever> taktoa: it means the pulseaudio libs are redirecting alsamixer to pulse
<clever> hit f6 to change card
<clever> yep
<clever> alsa will clearly tell you if its going thru pulse or not
<clever> pulse will also hijack alsamixer, you have to force it to use raw alsa
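
For example (card index hypothetical):

  alsamixer -c 0    # talk to card 0 directly instead of going through the pulse plugin
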
<clever> uninitialized bytes leaking into the tar maybe?
<clever> ah
<clever> yeah, but its not fully clear what it does
<clever> ah, run "file" on the final archive, gzip may try to store timestamps
<clever> ive used that method to confirm single-bit errors in video files that had drifted between half a dozen hdd's and network shares over many years
<clever> and see why they arent matching up
<clever> id hexdump -C a.tar > a.txt ; hexdump -C b.tar > b.txt; diff -u a.txt b.txt
<clever> ah, and that confirms it
<clever> The default is --sort=none, which stores archive members in the same order as returned by the operating system.
<clever> nixos's tar manpage is from 2016, and does have --sort
<clever> its a man page from 2013 on gentoo
<clever> ah, thats why i couldnt find --sort in the tar manpage
<clever> yeah
<clever> gchristensen: there might still be the order of files within a directory
<clever> :D
<clever> :)
<clever> as it packages things up
<clever> gchristensen: this tells tar to change the uid/gid to 0/0, and to change the timestamp to 1
<clever> gchristensen: yeah, one sec
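
Roughly the invocation in question (GNU tar 1.28+ for --sort; gzip -n additionally drops gzip's own timestamp, as mentioned above):

  tar --owner=0 --group=0 --numeric-owner --mtime=@1 --sort=name -cf out.tar dir/
  gzip -n out.tar
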

2017-04-27

<clever> "luke, use the source!"
<clever> gchristensen: if its a fairly simple structure, i just convert the whole thing to json
<clever> nixpkgs saves you the work of having to describe every single dependency
<clever> yeah, if you want to use python for example, you must refer to another expression that gives python
<clever> :D
<clever> and its just copying all of the source to $out/src/, so nothing actually gets compiled or patched
<clever> no arguments are being passed to makeWrapper on line 23, so the wrapper doesnt actually do anything
<clever> by running nix-shell on the expression
<clever> is the program 32bit only or can it build in 64bit also?
<clever> Infinisil: all compiling must be done within a nix-shell, compilers and such should not be installed
<clever> and you're not supposed to install things like compilers or stdenvs
<clever> 32bit only compiler*
<clever> and there are a number of other ways
<clever> Infinisil: if you use callPackage_i686, you will get a 32bit only derivation in the file
<clever> reactormonk[m]: can you pastebin the full output from the latest nixos-install run?
<clever> reactormonk[m]: what about df -i ?
<clever> add pwd and ls to the phase to confirm your surroundings
<clever> the default unpackPhase will unpack $src to the current dir
<clever> Infinisil: cp executable-in-src.py $out/bin/executable ; wrapProgram $out/bin/executable
<clever> Infinisil: so it has to be copied to $out anyways
<clever> Infinisil: and the input must remain after the build has finished
<clever> Infinisil: look at line 133
<clever> Infinisil: thats makeWrapper, not wrapProgram
<clever> jophish: yeah
<clever> jophish: did you see my answer from a few hours ago?
<clever> Infinisil: you want to do cp executable-in-src.py $out/bin/executable ; wrapProgram $out/bin/executable
<clever> Infinisil: wrapProgram will modify $1 in-place
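
In a derivation, that pattern looks something like this (names and the PATH entry are illustrative; wrapProgram comes from the makeWrapper setup hook):

  stdenv.mkDerivation {
    name = "my-tool";
    src = ./.;
    nativeBuildInputs = [ makeWrapper ];
    installPhase = ''
      mkdir -p $out/bin
      cp executable-in-src.py $out/bin/executable
      wrapProgram $out/bin/executable --prefix PATH : ${python3}/bin
    '';
  }
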
<clever> copumpkin: sounds good
<clever> sphalerite: so as long as you have a separate filesystem for that, you can safely resume at any point
<clever> sphalerite: oh!, luksipc supports a resume.dat file
<clever> thineye: thats what the old parameter in .overrideDerivation (old: { ... }) is for
<clever> hodapp: the nixos test framework uses 9p to mount the host /nix/store to the guest, without using disk images
<clever> pie_: i'm writing a userland emulator for xen, that can run a unikernel without root or ring0
<clever> pie_: just the ability to test xen stuff in a more automated manner
<clever> pie_: and 9p doesnt work with linux->xen->qemu->linux, so i have to build an entire disk image with the closure for every testcase
<clever> pie_: so i would have to run the unikernel under xen, under qemu, under nixos, which might already be under a hypervisor
<clever> pie_: the issue, is that you need root to test them, and things like nix builds lack root
<clever> pie_: something ive been working on lately is a way to test certain xen unikernels
<clever> pie_: ah them, heh
<clever> Mioriin: you can still clone it without an acct, and modify things on your machine
<clever> Mioriin: a better option would be to modify that file in a git clone of nixpkgs, to make a proper option
<clever> where is it?
<clever> pie_: nope
<clever> Mioriin: services.xserver.displayManager.gdm.autoLogin.user = "some-user\nmore config=stuff!!!";
<clever> Mioriin: and there is no proper way to insert text into it, so you will either need to modify this file, or cheat in an ugly way
<clever> sphalerite: ah yeah
<clever> Mioriin: does the config file already exist?
<clever> but that would require more changes to things
<clever> my previous idea, is to write encrypt(chunk1) to a journal first, before you overwrite the plaintext copy with chunk0
<clever> sphalerite: and if you lose power, chunk1 is lost
<clever> sphalerite: ah yeah, they have diagrams showing every step, and you can see that chunk0 is on the luks, chunks 2/3 are on the plaintext, and chunk1 is in ram
<clever> pie_: yeah, i would prefer to just backup to an external system and redo the entire disk at once
<clever> sphalerite: at a glance, i think that tool will move the data 10mb forward, as it does the encryption, to make room for the luks header, *reads more*
<clever> so it would eventually finish as just 1 pv taking the entire disk
<clever> then you could potentially drop the pv at the end of the disk, and expand the 1st pv
<clever> and it has its own recovery system
<clever> after expanding the VG, but before expanding the LV, you can use pvmove to shuffle lvm blocks within the array
<clever> you could also defrag that if you had more time
<clever> but that shouldnt harm perf
<clever> the data will be slightly fragmented, <half2><half1> on the disk
<clever> then delete the original, make a 2nd PV, and expand the VG and LV
<clever> and either dd or rsync the data over
<clever> then make a luks'd LV inside that
<clever> pie_: if the usage is under 50%, yeah, shrink the fs, then make an lvm PV in the new space (see the sketch below)
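
A rough sketch of that plan (hypothetical devices and sizes; not a tested recipe, take a backup first):

  resize2fs /dev/sda1 40G        # shrink the fs below half the disk
  # shrink the partition, create /dev/sda2 in the freed space, then:
  pvcreate /dev/sda2 && vgcreate vg0 /dev/sda2
  lvcreate -l 100%FREE -n root vg0
  cryptsetup luksFormat /dev/vg0/root
  cryptsetup open /dev/vg0/root crypt
  mkfs.ext4 /dev/mapper/crypt
  # copy the data over, delete the old partition, pvcreate/vgextend it,
  # then pvmove everything onto one PV as described above
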
<clever> pie_: just had an idea on how it might be done more safely
<clever> pie_: oh, how much disk used, and what filesystem?
<clever> at what point will it get interrupted?, how do you forcibly flush things to the journal?, prevent the drive from re-ordering writes?
<clever> and its complex to test things recovering after power loss
<clever> id be wary about modifying that myself, more chance of data loss for anybody using it
<clever> pie_: you would need to record the progress to disk, and have a way to resume it after a reboot
<clever> and it would need the same password as before to resume the operation
<clever> just record the latest block in the same journal you log the backup data in
<clever> and also, i just thought, you dont need a full bitmask
<clever> yeah
<clever> but then you need to overhaul the entire cryptsetup process
<clever> about the only way i could improve that, is to keep a bitmask of what blocks have been encrypted, and also a journal for when you're overwriting blocks
<clever> ah
<clever> sphalerite: at the cost of all data i would expect
<clever> sphalerite: or if the user just yanks the battery?
<clever> sphalerite: but what happens if the battery runs out?
<clever> sphalerite: as long as power isnt interrupted, it should be safe
<clever> Infinisil: nix-build -E 'with import <nixpkgs> {}; callPackage ./default.nix {}'
<clever> yeah, that sounds fun and risky
<clever> oh
<clever> and the android os remains in control
<clever> sphalerite: the ubuntu on android thing i saw many years ago just runs a portion of ubuntu under a chroot, combined with an x11 android app
<clever> sphalerite: and just 2 days ago, repl got merged into nix
<clever> sphalerite: nix-repl links directly into nix
<clever> nixrl: odd, i would expect that to work
<clever> Infinisil: set the installPhase in the derivation
<clever> steveeJ: because of newScope, it will search obj. first, and it will still allow some defaults if you call f {}
<clever> steveeJ: cp = pkgs.newScope obj; f = cp ./f.nix;
<clever> steveeJ: then the newScope method would work better
<clever> nixrl: the nix-shell cmd you ran, and how you tried to compile your program
<clever> steveeJ: both of those will produce a final derivation, and both will have a .override to change things more
<clever> steveeJ: or cp = pkgs.newScope obj; f = cp ./f.nix {};, it will check obj. first, then pkgs.
<clever> steveeJ: either f = callPackage ./f.nix { foo = bar; };
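
Both variants side by side (obj, f.nix, foo, and bar are hypothetical):

  # explicit override per call:
  f = pkgs.callPackage ./f.nix { foo = bar; };

  # or a scope that is searched before pkgs:
  cp = pkgs.newScope obj;
  f = cp ./f.nix { };
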
<clever> nixrl: can you copy/paste everything in the terminal window to a pastebin?
<clever> nixrl: and i can -lreadline just fine
<clever> nixrl: i do see a libreadline.a under /nix/store/72db5zhzxafbifa94xfkp5blvdhlbz4z-readline-6.3p08/lib
<clever> nixrl: check that the readline it compiled actually has static libs
<clever> thineye: that kind of change might affect the ability to load modules, so all modules have to be built against the right kernel, and then you want hydra helping out
<clever> thineye: all i can think of then is to make a 2nd grsecurity kernel, and the grsecurity module will have to change which one lands in config.boot.kernelPackages
<clever> ah
<clever> thineye: would it be possible to just always put that config in the grsecurity kernel, and just leave it there if nvidia isnt found?
<clever> jophish: should be enough to just add -static to line 25 of qemu-user.nix
<clever> and if you reference the kernel like this, it will always be the right version
<clever> extraModulePackages = [ config.boot.kernelPackages.v4l2loopback ];
<clever> thineye: then have a nixos module optionally insert the package into boot.extraModulePackages
<clever> thineye: one thing that would be better though, is to make a special package in nixpkgs, that contains that modified nvidia driver, always compiled correctly
<clever> thineye: yeah, you can set those options in a nixos module easily
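
A sketch of such a module (v4l2loopback standing in for the hypothetical patched driver package):

  { config, lib, ... }: {
    # wrap this in lib.mkIf to make it conditional on an option
    boot.extraModulePackages = [ config.boot.kernelPackages.v4l2loopback ];
  }
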
<clever> ive had to do the same when emulating some xen stuff
<clever> ahh
<clever> what does that do?
<clever> its just accessing files as the current user and doing tcp
<clever> copumpkin: i cant think of anything abnormal aws-cli would do
<clever> thineye: what are you trying to do?
<clever> thineye: all-packages.nix cant access nixos config
<clever> rotaerk: the final build by nix-build lacks -prof, but i can -prof things under nix-shell and get profiling
<clever> 90% of the time fhs's are the wrong way to fix things on nixos, and patchelf just works
<clever> not sure
<clever> hodapp: if the message argument is set, it will print that, which can explain to the user how to get the .deb file
<clever> that just means hydra cant test things
<clever> "it produces packages that cannot be built automatically"
<clever> that say where to get the deb
<clever> you can also put in directions
<clever> it will never be able to download