gchristensen changed the topic of #nixos-chat to: NixOS but much less topical || https://logs.nix.samueldr.com/nixos-chat
pxc has joined #nixos-chat
drakonis has quit [Remote host closed the connection]
pxc has quit [Ping timeout: 252 seconds]
pxc has joined #nixos-chat
pxc has quit [Ping timeout: 264 seconds]
pxc has joined #nixos-chat
pxc has quit [Ping timeout: 252 seconds]
pxc has joined #nixos-chat
pxc has quit [Ping timeout: 252 seconds]
pxc has joined #nixos-chat
pxc has quit [Quit: WeeChat 2.1]
lassulus_ has joined #nixos-chat
lassulus has quit [Ping timeout: 252 seconds]
lassulus_ is now known as lassulus
lopsided98 has quit [Quit: Disconnected]
lopsided98 has joined #nixos-chat
<samueldr> related to an earlier conversation about linux versions being backported to stable
jD91mZM2 has joined #nixos-chat
jtojnar has quit [Ping timeout: 252 seconds]
__monty__ has joined #nixos-chat
jtojnar has joined #nixos-chat
<jD91mZM2> Is it possible that ZFS is faster somehow? I feel like my computer has been sped up...
<gchristensen> ZFS uses much more RAM than many FSes so it may feel faster due to that?
<jD91mZM2> Wouldn't using more RAM feel slower? :P
<jD91mZM2> Or you mean it writes less to disk immediately?
<LnL> euh, that's called data loss
<LnL> reads are cached
<andi-> I would guess it is less cluttered around the disk but with SSDs I am not sure what the impact really would be
<__monty__> Is that why people say you need ECC for ZFS?
jD91mZM2 has quit [Ping timeout: 252 seconds]
<gchristensen> the ECC for ZFS is to protect against a scrub of death, which isn't real http://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/
<LnL> aah, what I hate about gpg signing is that I lose my commit message if I type a wrong password
<infinisil> LnL: you don't have 3 tries?
<LnL> yes, but I can't type :p
<gchristensen> LnL: is it still there in .git/COMMIT_EDITMSG?
<LnL> gchristensen: oh, yes!
<gchristensen> good :)
<gchristensen> I've lost many many commit messages that way, haha
<LnL> why doesn't git open that with my next attempt...
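A minimal sketch of the recovery LnL and gchristensen are describing (the demo repo and message here are made up; the real point is that git keeps the last message draft in `.git/COMMIT_EDITMSG`):

```shell
# Throwaway repo for the demo
repo="$(mktemp -d)"
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo hi > file
git add file
# Pretend the first commit attempt died (e.g. wrong GPG passphrase)
# after the editor wrote the message: git leaves the draft here.
printf 'detailed commit message\n' > .git/COMMIT_EDITMSG
# Retry, reusing the saved draft instead of retyping it:
git commit -q -F .git/COMMIT_EDITMSG
git log -1 --format=%s
```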
<gchristensen> __monty__: a specific good quote, "if you’re running non-ECC RAM that turns out to be appallingly, Lovecraftianishly evil, ZFS will mitigate the damage, not amplify it"
<infinisil> LnL: there must be a way to get that working
<LnL> and over time I've started to find commit messages more and more important, I try never to write a one-liner anymore
<infinisil> I certainly do, I commit every time I change my system config
<gchristensen> LnL: you're a much more considerate developer than me
<LnL> btw, I love the nix git history, it's kind of all or nothing :p
<gchristensen> hh
<LnL> oh! builtins.fromTOML, somebody has been doing rust stuff :)
<gchristensen> uh oh
<gchristensen> what?
<gchristensen> oh my haha
<ekleog> didn't nbp make, like, a lib.fromTOML a few months|years ago? this may remove quite a bit of code :°
<LnL> yeah, the mozilla overlays use a nix implementation
<infinisil> An *incomplete* one
<LnL> probably
<infinisil> Yaml is famously complicated to implement fully, there are many yaml libraries that don't conform to the spec
<infinisil> I even saw a "yaml" implementation that doesn't even work with json
<infinisil> (yaml is supposed to be a superset of json)
<LnL> it didn't start out that way I think
<ekleog> infinisil: do we agree we're speaking of toml here?
* ekleog thinks he missed some context somehow
<infinisil> Ohh
<infinisil> Yea i thought of yaml, sorry
<infinisil> Taking everything back then
<gchristensen> but also the toml implementation in builtins.fromTOML is incomplete
<joepie91> "Your Home folder is running out of disk space, you have 173 MiB remaining (0%) "
<joepie91> :(
<gchristensen> is it garbage collection day?
<joepie91> apparently, lol
<joepie91> currently ncdu'ing, I might just have some big junk somewhere
<joepie91> this is my primary HDD filling up, which is only 1TB
<gchristensen> I have like 45G of vagrant boxes :|
<joepie91> I... probably have an old vagrant box sitting somewhere? from years ago
<joepie91> but I doubt it's that big
<joepie91> other than that, I just have lots of assorted junk lol
<joepie91> like, "oh I want to run some numbers on historical weather, let's download 30 years of climate data" is a fairly regular kind of occurrence to me...
<joepie91> also lots of archived repositories and other crap that's at risk of deletion
<__monty__> joepie91: Was it you who said analyses have been done comparing environmental impact of fiat vs crypto currencies?
<gchristensen> #nixos-chat-crypto
<gchristensen> #nixos-chat-cryptocurrencies
<__monty__> gchristensen: Just 1 link. That's all I'm asking ; )
<joepie91> __monty__: yes, but I do not immediately have the links handy, and various Twitter discussions have exhausted my energy for cryptocurrency-related things for a while :P
<gchristensen> irc://#nixos-chat-cryptocurrencies@irc.freenode.net
<gchristensen> ^ one URL :)
<__monty__> joepie91: I'd appreciate it if you sent me the links if you come across them again.
<joepie91> will do
<gchristensen> non-USAians, do people in your cultures value "fine china"?
<joepie91> gchristensen: I know that historically, 'porcelain tableware' was something that was valued here in NL, but I don't know if that's still a thing
<joepie91> (literally translated)
<gchristensen> interesting. do some people still pass it down like a family heirloom?
<joepie91> gchristensen: I *think* so, but rather out of a sense of tradition rather than because it's porcelain
<joepie91> answer confidence: 40%
<joepie91> :P
<gchristensen> hehe, yes, same
jD91mZM2 has joined #nixos-chat
<jD91mZM2> Uh oh... "To sync your Dropbox, move your Dropbox folder to a partition with a compatible File System. Dropbox is compatible with Ext4."
<joepie91> ...wat
eren has quit [Ping timeout: 264 seconds]
jD91mZM2_ has joined #nixos-chat
<jD91mZM2_> joepie91: Seems I can't use dropbox with ZFS D:
<joepie91> that seems like a stupid limitation
<joepie91> also: 11114 store paths deleted, 40762.70 MiB freed
<jD91mZM2_> Indeed
<gchristensen> not bad
jD91mZM2 has quit [Ping timeout: 244 seconds]
<simpson> jD91mZM2_: Happens. You picked the wrong service.
<gchristensen> jD91mZM2_: make a file inside ZFS and format it, and mount it ext4? :)
jD91mZM2_ is now known as jD91mZM2
<infinisil> Or a zvol
<jD91mZM2> joepie91: Is that from a GC? Wow
<joepie91> yeah
<jD91mZM2> simpson: What do you recommend?
<joepie91> deleting old generations
<joepie91> anyway, 40GB cleaned up from a GC, 325GB in my home folder...
<jD91mZM2> Wow. My max is like 18GB, and that got filled straight back on the next rebuild
<joepie91> where the hell is the rest of the space
<joepie91> oh yeah, I split this up into two partitions
<gchristensen> last time I had runaway space it was a 100GB audit log on a centos box.
<jD91mZM2> gchristensen: That's a cool idea! Would it automount?
<gchristensen> it can
<simpson> jD91mZM2: Well, to be extremely biased, you could use a Tahoe-LAFS grid. They are *provider-independently secure*; it doesn't matter who owns and operates the grid, they can't read your files.
<jD91mZM2> joepie91: Now do a nix-store --optimise, that usually cleans a crap ton more for me if I haven't done it in a while
<simpson> My bias is that I own and operate matador.cloud, but there's also https://leastauthority.com/ which is more oriented towards personal backups.
<simpson> Plus, buying services from LAE helps fund Tahoe-LAFS development.
<gchristensen> yay LAFS
<jD91mZM2> I literally just store my password manager vault on dropbox heh
<joepie91> holy shit bucklescript (bs-platform npm module) is 354MB, what
<simpson> jD91mZM2: How courteous of you to share your passwords with Dropbox~
<joepie91> oh, vendor'd ocaml
<jD91mZM2> simpson: Obviously encrypted :)
<simpson> jD91mZM2: How is that obvious? It's not a LAFS.
<joepie91> (password manager vaults are typically encrypted)
<jD91mZM2> Wouldn't store my plain text passwords on dropbox :P
<LnL> I've been meaning to try out LAFS
<jD91mZM2> simpson: Does LAFS have a mobile app so I can use it with KeePass mobile?
<simpson> No.
<simpson> We've discussed this quite a bit. The general agreement seems to be that phones are terrible.
<simpson> The protocol only exists in Python right now. Somebody would have to be incentivized to make a JVM or iOS port.
<jD91mZM2> infinisil: Oh, zvols are like embedded filesystems in a zpool? Wow
<infinisil> Yup
<gchristensen> infinisil: whoa
<simpson> (There's upstream work to create a more standard and portable protocol, but again, we need people to care; only one person is working on it paid right now.)
<infinisil> Not sure on its limits, there are some problems when you use them for swap. But they are optimal for VMs
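A sketch of the zvol approach gchristensen and infinisil are suggesting (assumes a pool named `main` and a 10G size, both made up for the example; needs root and won't run outside a system with ZFS):

```shell
# Carve a 10 GiB block device out of the pool; it appears under /dev/zvol/
zfs create -V 10G main/dropbox
# Put a filesystem Dropbox recognizes on it, and mount it
mkfs.ext4 /dev/zvol/main/dropbox
mkdir -p /mnt/dropbox
mount /dev/zvol/main/dropbox /mnt/dropbox
```

To have it mounted at boot on NixOS, the zvol's device path can be listed like any other filesystem in `fileSystems`.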
<jD91mZM2> Btw zfs seems to mount my dataset root as /main. Is this a problem?
<jD91mZM2> I mean, my sub-datasets like main/home and main/root are mounted as /home and / respectively
<jD91mZM2> but main itself is mounted as /main, I think?
<sphalerite> no. But if you want to avoid that you can do zfs set mountpoint=none main
<sphalerite> or mountpoint=legacy
<jD91mZM2> gchristensen: The ZFS tutorial you sent me has a page about zvols :D https://pthree.org/2012/12/21/zfs-administration-part-xiv-zvols/
<jD91mZM2> sphalerite: What happens if I store data in there? Will it just get stored like normally or does anything special happen because it has children?
<gchristensen> jD91mZM2: I only have so much space in my brain
<jD91mZM2> gchristensen: Didn't mean "you didn't read enough", more like "if you're interested, here's a link that you shared with me back"
<gchristensen> ahh :)
<infinisil> jD91mZM2: The only thing the hierarchy of zfs datasets does is inherit properties
<infinisil> jD91mZM2: But each one of them, parents, children, are its own usable dataset
<jD91mZM2> infinisil: Awesome, thanks!
<infinisil> So you can mount, set quotas, whatever you need with them, or disable mounting completely. The last of which is useful when you only want to use parents to group properties to get the inherit behaviour
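A small sketch of the inheritance behaviour infinisil describes (dataset names are made up; assumes a pool called `main` and needs root, so this is illustrative only):

```shell
# Parent exists only to group properties; it is never mounted itself
zfs create -o compression=lz4 -o mountpoint=none main/group
# Child inherits compression=lz4 from the parent, but mounts on its own
zfs create -o mountpoint=/home main/group/home
# -H suppresses the header; prints the inherited value: lz4
zfs get -H -o value compression main/group/home
```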
<jD91mZM2> And creating a zvol won't even require unmounting my drive, will it? Damn I love ZFS
<joepie91> "A supported file system is required as Dropbox relies on extended attributes (X-attrs) to identify files in the Dropbox folder and keep them in sync. We will keep supporting only the most common file systems that support X-attrs, so we can ensure stability and a consistent experience."
<joepie91> rephrased: lol fuck you, we don't care about supporting uncommon things
<infinisil> But ZFS does support xattrs!
<joepie91> so yeah, I have to agree with simpson here; you picked the wrong service :P
<joepie91> infinisil: the problem isn't that ZFS doesn't support xattrs
<joepie91> the problem is that Dropbox doesn't want to spend time implementing support for it
<joepie91> and testing it
<jD91mZM2> More like "In order to make you feel good about using dropbox, we want to make sure it never ever fails. So we have a hardcoded list of things that are guaranteed to work"
<infinisil> Yeah
<joepie91> jD91mZM2: which they can also just do by testing for ZFS
<joepie91> but that is, apparently, too much work
<jD91mZM2> I'm sure enabling ZFS support is literally just adding it to the hardcoded list heh
<joepie91> either way, what I read from this thread is "we don't care about supporting you if you use something that's uncommon enough to be unprofitable to support"
<gchristensen> maybe you can patch it
<joepie91> there are so many ways to deal with this that don't result in this "doesn't work" failure mode
<jD91mZM2> Worst part is I can't choose any service I like because I really need it to work with KeePassXC Android
<jD91mZM2> Keepass2Android*
<joepie91> :(
<joepie91> yeah, I guess you'll have to look into patching out the check then
<joepie91> jD91mZM2: what does nix-store --optimise do anyway? it seems very slow
<jD91mZM2> joepie91: IIRC it removes duplicates and hard-links them instead
<gchristensen> --optimise freaks me out.
<jD91mZM2> Saved me almost 25% of my usage once I believe
<infinisil> joepie91: Yup, after it's done, look into /nix/store/.links
<infinisil> It contains every single file in every derivation as a hash
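A toy illustration of what `nix-store --optimise` does, per jD91mZM2's description: identical files are replaced by hard links to one canonical copy, so the bytes are stored once. The scratch directory here stands in for the real store:

```shell
# Two files with identical contents, stored twice
dir="$(mktemp -d)"
printf 'same bytes\n' > "$dir/a"
printf 'same bytes\n' > "$dir/b"
# Deduplicate: replace b with a hard link to a
ln -f "$dir/a" "$dir/b"
# Both names now share one inode; the link count shows 2
stat -c '%h' "$dir/a"
```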
<jD91mZM2> So... I just made an ext4 zvol... it still complains about it
<joepie91> infinisil: right :P
<joepie91> jD91mZM2: is your nix-explorer meant to be built against nightly?
<jD91mZM2> joepie91: Yeah, I'm using the rust 2018 edition for all new projects
<joepie91> right
<jD91mZM2> I usually like stable, but... rust 2018 man
<joepie91> infinisil: is it safe to interrupt the optimise process?
<infinisil> I don't know for sure, but my guess would be yes
<jD91mZM2> I've totally done that once
<jD91mZM2> So it should be safe, unless I'm just lucky
<joepie91> heh
<joepie91> I'll just let it run for a bit then...
<joepie91> also
<joepie91> what the *fuck* does Anker do to their Powerline USB cables
<joepie91> I have a 3M cable and it works flawlessly even with finicky stuff like flashing microcontrollers
<joepie91> even good cables typically fail after 2M
<infinisil> Not trying to be rude or anything, but inquisitiv is asking too many questions
<gchristensen> infinisil: I can say something if you'd like, but also you should feel free to say so
<gchristensen> be polite and factual
<joepie91> infinisil: possibly handy to keep http://www.mikeash.com/getting_answers.html in your back pocket
<joepie91> (it's non-confrontationally written, unlike the 'asking smart questions' thing)
<infinisil> gchristensen: Nah it's fine, I'll just slow down my reply-rate a bit
<jD91mZM2> What would happen if I ran `sudo zfs destroy main/root` (while still mounted)?
<jD91mZM2> would it be like rm -rf --no-preserve-root /?
<jD91mZM2> or would it complain?
<joepie91> gchristensen: to expand on my earlier answer re: tableware.. in NL it's common nowadays to approach 'passing down things' from a more pragmatic angle; typically parents will collect a pile of old tableware, miscellaneous utility items, sometimes furniture, etc. when they replace their own... and when $child leaves the home, usually when they go to college/uni, they get that pile of stuff from their parents
<joepie91> passing down of family heirlooms typically happens later in life, afaik
<joepie91> the optimise is still running... I've gone from 40G free to 54G free so far
<jD91mZM2> Told you it's the best :D
<joepie91> jD91mZM2: dunno, there seems to be a distinct CPU / disk space tradeoff here so far :D
<sphalerit> jD91mZM2: alternatively to messing around with ext4 and zvols, I think zfs supports xattrs
<sphalerit> You just need to enable it
<jD91mZM2> sphalerit: I bet Dropbox just keeps a hardcoded list of what filesystems "are supported" without giving a crap about if I enable xattrs or not
<sphalerit> Check `zfs get xattr <your-fs>`
<jD91mZM2> on (default)
<jD91mZM2> So... does anybody here know any good Google Drive clients? :P
<joepie91> lol
<joepie91> 57GB and counting..
<jD91mZM2> Freed by this one invocation or in total?
<joepie91> in total; that's 17GB freed by optimisation so far
<joepie91> I probably have a pretty big nix store though
<jD91mZM2> TFW sigkill won't kill a program
<gchristensen> an issue with optimizing is if you have a LOT of files, depending on your FS, you might hit the birthday problem
<jD91mZM2> Birthday problem sounds fun
<joepie91> 60G...
<infinisil> gchristensen: hash collisions?
<gchristensen> yeah, because you have millions of files in one dir (/nix/store/.links) you risk a collision in the FS
<gchristensen> it doesn't break anything or corrupt anything, just can't create the new file
eren has joined #nixos-chat
<infinisil> gchristensen: But like, those are cryptographic hash functions, which shouldn't let you find even a single collision within millions of years of computing!
* samueldr would argue not creating the file breaks whatever tried to create the file
<infinisil> (sha256 and co. at least)
<gchristensen> ext4 doesn't use sha256 to create the hash table of files in a directory
<gchristensen> it isn't a collision of different files matching, it is a collision in the directory's hash tree index
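A rough birthday-problem estimate of what gchristensen describes. Assuming the directory index hashes entries into roughly 2^32 buckets, the expected number of colliding pairs among n filenames is about n²/(2·2³²); at the ~10M files mentioned in nix#1522 that is already thousands of collisions:

```shell
# Back-of-envelope estimate: expected colliding pairs for n = 10 million
# names under an (assumed) uniform 32-bit directory hash.
awk 'BEGIN { n = 10000000; printf "%d\n", int(n*n / (2 * 2^32)) }'
```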
<{^_^}> nix#1522 (by domenkozar, 1 year ago, open): Optimize store stops working at ~10M files
<infinisil> Ohh
<ekleog> if only legacy wasn't legacy https://github.com/NixOS/nix/issues/2103
<{^_^}> nix#2103 (by Ekleog, 17 weeks ago, closed): “Hierarchize” the store
<ekleog> hmm, wait, maybe actually hierarchizing only the .links folder would be possible without breaking too much legacy?
<gchristensen> yeh
<gchristensen> .links dir isn't an API
<ekleog> obligatory https://xkcd.com/1172/
<gchristensen> yeah too bad
<joepie91> 68G....
mudri has joined #nixos-chat
<jD91mZM2> google-drive-ocaml seems to use fuse to create a mountpoint for Google Drive. Does this mean my password database is never physically on my drive? (That seems scary if Google Drive would ever go down)
<joepie91> 32789.67 MiB freed by hard-linking 1690459 files
<gchristensen> with this repo's name, you know it's gon be good https://github.com/TheMozg/awk-raycaster
<joepie91> I... what.... no...
<gchristensen> oh my yes, joepie91
<joepie91> totally unrelated and very much off-topic, very interesting thread: https://twitter.com/mspowahs/status/1033606649926246401
<joepie91> anybody have recommendations for an editor/IDE/whatever that'll help me navigate through a C++ codebase by figuring out where things are defined and whatnot? editing is not required, so just a codebase viewer is fine too
<joepie91> (existing codebases, cannot require editor-specific project files)
<joepie91> well, I guess CodeBlocks is out
<joepie91> opened a single CPP file, it's been spinning my cursor for 15 minutes now
aszlig has quit [Quit: Kerneling down for reboot NOW.]
aszlig has joined #nixos-chat
<infinisil> joepie91: emacs with some LSP server would probably work
<infinisil> (any editor that supports LSP really)
<infinisil> (because LSP)
<joepie91> infinisil: undefined variable: some LSP server
<joepie91> :P
<joepie91> anyway, seems Eclipse can do it
<joepie91> infinisil: thanks; do we have that packaged?
<infinisil> > clangd
<{^_^}> undefined variable 'clangd' at (string):171:1
<infinisil> Doesn't look like it
<infinisil> joepie91: Check out https://langserver.org/ too
<joepie91> thanks
<infinisil> Those lists are really nice
<joepie91> clangd is apparently available in clang >= 5
<infinisil> ,locate bin clandg
<{^_^}> Couldn't find any packages
<joepie91> ,locate bin clangd
<{^_^}> Found in packages: haskellPackages.llvmPackages.clang-unwrapped
<infinisil> Huh
<infinisil> My local nix-locate has it in llvmPackages.libclang
<infinisil> (I need to make this bot autoupdate it a bit better)
<joepie91> idem for the online package search
<joepie91> it... does not show up when I do `nix-shell -p llvmPackages.libclang`
<joepie91> what's up with that?
<infinisil> joepie91: Oh lol, default output is dev apparently
<infinisil> While only .out contains that binary
<infinisil> You know what would be a *super* cool project
<infinisil> Making these lists work via Nix modules: https://langserver.org/
<infinisil> So if you code haskell and c++ in emacs and vim, you do `lsp.clients = [ "vim" "emacs" ]; lsp.servers = [ "clangd" "haskell-ide-engine" ];`
<infinisil> And you got a vim and emacs setup to work perfectly with those 2 languages
<infinisil> Too many cool things to do!
<joepie91> infinisil: magic! https://github.com/jbree/ide-clangd
<joepie91> also yes, that'd be nice :P
eren has quit [Read error: Connection reset by peer]
eren has joined #nixos-chat
<samueldr> oh, easter egg at google, search for: Bletchley Park
<joepie91> yes! I seem to have definition-jumps in Atom now!
<jD91mZM2> What backup solutions do you people have? I know sphalerite uses zfs to mirror his drive, but anybody else?
<sphalerit> No I don't use mirroring as a backup
<jD91mZM2> oh
<sphalerit> I use mirroring for redundancy on some machines, but mirroring is not backup
<sphalerit> I use zfs snapshots with send/recv (replication I think?) for backup
<jD91mZM2> sphalerit: That's the thing I meant. You send a snapshot and receive it on another disk
<jD91mZM2> Kindaaaaa like mirroring, but only mirroring fixed points...?
<sphalerit> And not within the same pool. Mirroring is in the same pool
<jD91mZM2> sphalerit: Didn't mean mirroring as in RAID-something stuff, or whatever the technical stuff is
<jD91mZM2> Meant mirroring as in making sure two things are identical
<sphalerit> Yeah but that's what mirroring refers to in the context of zfs :p
<jD91mZM2> Noted
<jD91mZM2> I tried `zfs send` on my home folder and it took forever lol
<jD91mZM2> s/folder/dataset
<sphalerit> An incremental snapshot from there should be significantly faster
<sphalerit> But yeah a complete snapshot will take as long to transfer as all of the data stored in the filesystem
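A sketch of the send/recv replication sphalerit describes (pool, dataset, and host names are made up; needs root and an actual remote, so this is illustrative only):

```shell
# First, a full stream: transfers everything in the dataset (slow)
zfs snapshot main/home@monday
zfs send main/home@monday | ssh backuphost zfs recv tank/home
# Later, an incremental stream: only blocks changed since @monday (fast)
zfs snapshot main/home@tuesday
zfs send -i @monday main/home@tuesday | ssh backuphost zfs recv tank/home
```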
<samueldr> I use borg backup, which operates at the files level, it has pretty awesome deduplication between backups
<samueldr> both time-wise (earlier and later backups) and across computers, when using the same archive
<samueldr> since my main computers already synchronize a bunch of files, this deduplicates the backup a bunch
<jD91mZM2> Ooooo yeeeeeaa this looks awesome
<jD91mZM2> LZ4 compression, encryption, oooo
<jD91mZM2> Can it sync with S3?
<samueldr> duno!
<samueldr> I use it with another (physically distant) computer
<jD91mZM2> Over ssh? Or..?
<samueldr> ssh
<samueldr> I also used it, for a good while, with a USB disk
<samueldr> probably should go back to doing this
<samueldr> (having a local hot copy of all my computers)
<jD91mZM2> How big do those USBs typically have to be?
<jD91mZM2> I mean, on a scale from the original data of course
<samueldr> I'll start a manual backup to show you the scale of the deduplication
<samueldr> (it prints a report at the end)
<samueldr> but generally, it's dataset + whatever size is needed for the delta, compressed... it'll vary greatly depending on whether it compresses well I guess
<samueldr> I know I never had to actively purge backups on a 1TB drive, but that's for a dataset that was around a hundred gigabytes I think, probably less
<samueldr> (it can also be configured to keep data in a finer-grained manner, e.g. one for each month, except for the last month, one for each week, except the last week, one for each day)
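A minimal borg workflow matching samueldr's description (the repo path and retention policy are made-up examples; `--stats` prints the deduplication report mentioned above):

```shell
# One-time: create an encrypted repository, e.g. on a USB disk
borg init --encryption=repokey /mnt/usb/backups
# Each backup: archive the home dir, LZ4-compressed, with a dedup report
borg create --compression lz4 --stats /mnt/usb/backups::{hostname}-{now} ~
# Thin out old archives: keep dailies for a week, weeklies for a month, etc.
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 /mnt/usb/backups
```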
<jD91mZM2> Nice, thank you a lot!
<jD91mZM2> I don't think it supports S3 :(. But it seems amazing otherwise!
<infinisil> jD91mZM2: Note that zfs supports this as well
<infinisil> incremental + compression
<infinisil> I'm personally using http://www.znapzend.org/ for backups, which uses zfs for them
<infinisil> But it does need zfs on both sides
<jD91mZM2> Cool, thanks!
<gchristensen> today might be the day I move my nixos config out of /etc/nixos, or chown it to my user...
<gchristensen> tired of the sudo / no sudo dance
<samueldr> :D
<gchristensen> feels blasphemous to move it out of that dir :)
<samueldr> as I said before, I run my systems assuming /etc/nixos is owned by the end-user (single-user computers)
<samueldr> oh, and my system's nixpkgs checkout lives inside /etc/nixos, at /etc/nixos/nixpkgs :)
<elvishjerricco> I use NixOps to deploy to localhost. I'm not in love with it, but it lets me use `deployment.keys`, and I don't have to hard code the uri to the S3 bucket my desktop uploads stuff to.
<elvishjerricco> So all my config is just under my home directory
<jD91mZM2> gchristensen: Heh, that's one of the first things I did
<jD91mZM2> You have patience :)
<elvishjerricco> NixOps really isn't suited to rapid redeployment though. It evaluates the expression like 3 times, and it ALWAYS uploads keys
jD91mZM2 has quit [Quit: WeeChat 2.0]
<gchristensen> I don't like my config to be "weird" so I can experience it more like new users
<samueldr> gchristensen: do you consider chmodding "weird"?
<gchristensen> I'm not saying I've been rational
drakonis__ has joined #nixos-chat
snajpa has quit [Ping timeout: 260 seconds]
snajpa has joined #nixos-chat
__monty__ has quit [Read error: Connection reset by peer]
__monty__ has joined #nixos-chat
__monty__ has quit [Quit: leaving]