<clever>
Disk /dev/nvme0n1: 477 GiB, 512110190592 bytes, 1000215216 sectors
<clever>
2017-10-02 04:08:10 < clever> so modern drives lie about the block size
<clever>
2017-10-02 04:08:01 < clever> but old OS's crash hard if the drive tells the truth
<clever>
now you have no way to do things properly
<clever>
so modern drives lie about the block size
<clever>
but old OS's crash hard if the drive tells the truth
<clever>
the problem, is that modern devices dont use 512 byte blocks
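a quick way to see the mismatch on a given drive (device reused from the fdisk output above; LOG-SEC and PHY-SEC are real lsblk columns):
  lsblk -o NAME,LOG-SEC,PHY-SEC /dev/nvme0n1
  # LOG-SEC is what the drive advertises (often 512 for compatibility), PHY-SEC is the real block size underneath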
<clever>
thats a bit big :P
<clever>
and it heavily depends on the block size of the underlying hardware
<clever>
hyper_ch: there is no real harm from making it too big
<clever>
ah
<clever>
which hardphones are you needing openvpn on?
<clever>
i havent messed with them much
<clever>
ah yeah
<clever>
but it helps to know if there are bottlenecks like cpu usage
<clever>
and then it should perform better
<clever>
if you lower the mtu enough on the tun0 interface, then openvpn wont have to fragment things
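a sketch of the relevant client config lines for that, with made-up sizes:
  # keep tunnel packets small enough that they fit in one datagram after the vpn overhead
  tun-mtu 1400
  mssfix 1360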
<clever>
hyper_ch: another thing, udp or tcp mode?
<clever>
and that server becomes a bottleneck
<clever>
they must route thru a server
<clever>
the issue ive seen, is that if 2 clients want to talk to each other
<clever>
hyper_ch: i wrote my own vpn to get around the bottlenecks of openvpn :P
<clever>
hyper_ch: id have a program that will query the remote end to find what the newest shared snapshot is, and then sync via that
<clever>
and both ends need the old snapshot
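the sync described above boils down to an incremental send (dataset and snapshot names made up):
  zfs send -i tank/data@shared tank/data@new | ssh backuphost zfs recv backup/data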
<clever>
yeah
<clever>
yeah
<clever>
the altroot is only active until reboot
<clever>
altroot tells it to mount things under /mnt/
<clever>
but when installing, you want to chroot under /mnt
<clever>
normally, zfs mounts everything under / automatically
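for example, during an install (pool name made up):
  zpool import -o altroot=/mnt rpool   # every dataset now mounts under /mnt/... instead of /...
  # altroot isnt stored on disk, so after a reboot the pool mounts normally again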
<clever>
if you write to a section smaller than a block, the drive has to read the current value, overwrite part of it, then write the entire block back out
<clever>
more because they have larger blocks
<clever>
12 gives 4096, and 13 gives 8192 i believe
<clever>
hyper_ch: it makes the block sizes 2^12
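at pool creation that looks like (device name made up):
  zpool create -o ashift=12 tank /dev/sda   # 2^12 = 4096 byte blocks
  # ashift=13 would give 2^13 = 8192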
<clever>
hyper_ch: yep
2017-10-01
<clever>
hyper_ch: just woke up
<clever>
i'm heading to bed :P
<clever>
its only 5am
<clever>
hyper_ch: oh dang, got distracted
<clever>
zfs will just automatically create gpt partitions, i believe
<clever>
hyper_ch: first, run "truncate image-1.img -s 512m"
<clever>
and mess around
<clever>
so you can just use ftruncate to make a dozen 512mb files, and then make pools on top of that
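a throwaway playground along those lines (paths made up; zpool wants absolute paths for file vdevs):
  for i in 1 2 3 4; do truncate -s 512m /tmp/image-$i.img; done
  zpool create playground raidz /tmp/image-1.img /tmp/image-2.img /tmp/image-3.img /tmp/image-4.img
  zpool status playground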
<clever>
zfs also supports operating on files directly, without needing losetup
<clever>
make a mirror between one of the 4tb's and the 8tb
<clever>
mirror mode, just with different params
<clever>
and if any drive fails, you lose it all
<clever>
so replacing any drive with a bigger drive instantly gives you more storage
<clever>
if you're just in jbod mode, it wont care, but you have zero redundancy
<clever>
well, it depends on which mode you're in
<clever>
it would only use the first 4tb of the 8tb drive
<clever>
you cant really mix sizes like that
<clever>
ah
<clever>
hyper_ch: once it finishes the resilver, remove the 3x array from the mirror, then you have just a single 8tb with no redundancy
<clever>
hyper_ch: one method, turn it into a mirror between the 3x array and the 8tb drive
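in the simplest case (one old drive being swapped for the 8tb, device names made up) the mechanics are:
  zpool attach tank /dev/old4tb /dev/new8tb   # turns that vdev into a mirror and starts the resilver
  zpool status tank                           # wait here until the resilver finishes
  zpool detach tank /dev/old4tb               # drop the old side, leaving just the 8tb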
<clever>
i found gzip-9 to tax things a lot more, but the 4x savings is nice sometimes
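compression is just a per-dataset property (dataset name made up):
  zfs set compression=gzip-9 tank/backups
  zfs get compressratio tank/backups   # shows how much was actually saved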
<clever>
not sure
<clever>
hyper_ch: nice
<clever>
and snapshots just make an extra reference to the data, so the old version cant be GC'd
<clever>
so if you want to modify a file, it creates new blocks with the new data
<clever>
zfs is a lot like nix, every block on the disk is immutable
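a small illustration of that, assuming a hypothetical dataset mounted at /tank/data:
  zfs snapshot tank/data@before
  echo changed > /tank/data/file               # the rewrite allocates new blocks
  cat /tank/data/.zfs/snapshot/before/file     # the snapshot still references the old ones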
<clever>
pie_: ]$ ls -l /nix/var/nix/profiles/system
<clever>
pie_: what does --query --roots say about those GHC's?
<clever>
so something is rooting them
<clever>
but he already ran a GC recently
<clever>
because 900 > 10
<clever>
sort -n considers 900kb to be greater than 10mb
<clever>
pikajude: it sorts 900kb as being less than 1mb
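a quick way to see the difference:
  printf '900K\n1M\n10M\n' | sort -n   # numeric sort ignores the suffix: 1M, 10M, 900K
  printf '900K\n1M\n10M\n' | sort -h   # human-size sort gets it right: 900K, 1M, 10M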
<clever>
this one will be the entire store
<clever>
du --max=1 -hc /nix/store | sort -h
<clever>
ah, but thats just for the latest generation of nixos
<clever>
that will tell you why its rooted
<clever>
pie_: next, find something fat from that listing that you dont want, and run nix-store --query --roots against it
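for example (store path made up):
  nix-store --query --roots /nix/store/xxxxxxxx-ghc-8.0.2
  # typical roots are /nix/var/nix/profiles/system-* links, per-user profiles, or stray ./result symlinks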
<clever>
ive been in situations where the terminal was the bottleneck
<clever>
|tail shows less
<clever>
now its just down to how fast your terminal can render
<clever>
yep
<clever>
there is a cache in the kernel, so the 2nd will always be faster, no matter which order you ran them in
<clever>
try the first one again
<clever>
but there may be fat things from older nixos builds, your nix-env profile, or result symlinks you left laying around
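one way to hunt those down:
  nix-store --gc --print-roots   # lists every GC root nix knows about, stray result symlinks included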
<clever>
both of those will show everything that the current build of nixos depends on
<clever>
du --max=0 -hc $(nix-store -qR /run/current-system) | sort -h
<clever>
pie_: first, have you tried just a normal "nix-collect-garbage" with no flags?
<clever>
pie_: many, one sc
<clever>
the kernel doesnt allow changing uid once you have dropped root
<clever>
pie_: its more like switching to the nobody account
<clever>
pie_: but on some kernel configs, you need root to drop them further
<clever>
pie_: its dropping some perms that it doesnt need, i dont remember what exactly
<clever>
yegortimoshenko: add "boot.allow_shell" to the kernel parameters i believe
<clever>
and nix doesnt allow setuid binaries in the store, so you need to enable it in the nixos config
<clever>
pie_: unity must be reusing the chromium setuid wrapper for its sandboxing
<clever>
pie_: ah, the option tilpner mentioned is an alias to security.chromiumSuidSandbox.enable
<clever>
adelbertc: that should be possible, as long as you can control what ports the daemon listens on, so it cant conflict with another instance of itself
<clever>
adelbertc: what exactly is it doing against dockerd?
<clever>
Mic92: yeah, so the prng will rarely get reused
<clever>
Wizek: with nix-channel
<clever>
so every sector has a different prng
<clever>
my understanding, is that the prng is based on the master key, and sector#
<clever>
yeah, thats the real question
<clever>
you get the result of xor(prng1, prng2)
<clever>
infinisil: and then you xor the 2 drives together
<clever>
infinisil: but if you then have 2 drives, each doing drive1 = xor(data, prng1) and drive2 = xor(data, prng2)
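spelling that out in the same notation: xor(drive1, drive2) = xor(xor(data, prng1), xor(data, prng2)) = xor(prng1, prng2), because xor(data, data) = 0 and the data term cancels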
<clever>
xor, lol
<clever>
Mic92: hmmm, do you know if luks is just xor(data, prng), or if its more complex?
<clever>
iqubic: nix-prefetch-git, or just supply a wrong hash, and look at the error nix-build gives
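the lazy route looks like this (url and rev made up):
  nix-prefetch-git https://github.com/OWNER/REPO --rev REV
  # or deliberately put a wrong sha256 in the expression, run nix-build,
  # and copy the real hash out of the mismatch error it prints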
<clever>
iqubic: yes
<clever>
Mic92: then the bit flips will be visible to zfs, and it can pick the right side
<clever>
Mic92: if zfs encryption wasnt an option, then you could do zpool create POOL mirror /dev/luks1 /dev/luks2
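a scrub is then what walks every checksum and repairs from the good side:
  zpool scrub POOL
  zpool status -v POOL   # the CKSUM column shows which device had the flipped bits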
<clever>
Mic92: ah, so the bitflips will just go clean thru luks (scrambling different bits as it gets decrypted), and then mdadm would get upset about the mismatch, and not know the right answer
<clever>
Mic92: but what about the luks level, does it have any checksum?
<clever>
Mic92: ah, so mdadm mirror doesnt know which side is "right"
<clever>
iqubic: yeah, thats what every single derivation does
<clever>
iqubic: the repo always goes into /nix/store, and fetchFromGitHub returns the path
<clever>
iqubic: and then based on their values, it will either fetch a github tarball url, or clone from github
<clever>
iqubic: fetchFromGitHub is a function that takes a handful of arguments (lines 192-194)
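the tarball route mentioned above is the same thing you can do by hand (owner/repo/rev made up):
  nix-prefetch-url --unpack https://github.com/OWNER/REPO/archive/REV.tar.gz
  # downloads it, adds it to /nix/store, and prints the sha256 that fetchFromGitHub wants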
<clever>
Mic92: but what exactly can go wrong if you did use mdadm?
<clever>
Mic92: yeah, thats why i prefer to put the zfs directly on the devices
<clever>
Mic92: and you can potentially generate a rainbow table, to map an encrypted frame of all 0's, back to a key
<clever>
Mic92: it lacks an IV, and due to h264 padding, there can be frames of all 0's
<clever>
Mic92: that reminds me, there is a problem ive researched about mpegts encryption (tv/satellite stuff)
<clever>
hyper_ch: yeah, that would be safer from a crypto perspective
<clever>
pmade: no longer bottlenecked by your upload bandwidth
<clever>
pmade: it would deploy a lot faster if it was in the same availability zone as the machines you're creating
<clever>
Mic92: what could potentially go wrong if you make 2 luks volumes, and then mdadm mirror them into a single block device?
<clever>
pmade: i dont think it will do that, but you will want to obviously restrict access to the user nixops is run as
<clever>
Mic92: ah, so as long as it exists in the initrd at the right path (either via mount, or embedded) it will work
<clever>
pmade: so nixops will only have to copy the differences (hostname, any future updates)
<clever>
pmade: i believe you can generate an AMI, that has the entire closure, then upload that to amazon, and configure nixops to start with that AMI
<clever>
pmade: ah, an AMI is better for that
<clever>
in postdevice i'm guessing?
<clever>
Mic92: i'm thinking, a way to do zfs encryption on a headless server, and i only have to destroy a key in /boot and know the rest is toast
<clever>
Mic92: is there also any way to load the passphrase from a file in /boot with no user intervention?
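with native zfs encryption that could look like this (dataset and key path made up; keyformat=raw expects a 32 byte key file):
  zfs create -o encryption=on -o keyformat=raw -o keylocation=file:///boot/pool.key tank/secure
  zfs load-key tank/secure   # no prompt, it just reads the file; destroy the file and the data is unreachable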
<clever>
pmade: share them at what level?
<clever>
some metadata will be readable, the names of datasets, and sizes, but i believe filenames and their contents are totally protected
<clever>
so its nearly the same as FDE
<clever>
if you set the encryption flag directly on the pool (rather than pool/root like the wiki said), it will be inherited by everything
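for example, set at creation time so every child dataset picks it up (pool layout made up):
  zpool create -O encryption=on -O keyformat=passphrase tank mirror /dev/sda /dev/sdb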
<clever>
to unlock the data
<clever>
then you will need to manually enter the password on the backup machine
<clever>
hyper_ch, infinisil: i believe with this, you can get raid, encryption, snapshots, and even use znapzend, and the remote backup server has no way to view the contents
<clever>
found it
<clever>
2017-09-15 13:16:52< Mic92> disasm: no, requires zfsUnstable anyway
<clever>
output path ‘/nix/store/3676mpn0q0x46qr795z3c716irkr614l-nerdfonts-1.1.0’ has r:sha256 hash ‘1cg11apglr833a246jnxaibfgaj77w090gqxwpzbqlzmh9aw4zg2’ when ‘1f3qvzl7blqddx3cm2sdml7hi8s56yjc0vqhfajndxr5ybz6g1rw’ was expected