<elvishjerricco>
I need to figure out the best way to translate pywal colors to a base16 scheme (https://github.com/chriskempson/base16). I use base16 templates to make everything the same colors, but my color scheme is terrible at the moment.
<elvishjerricco>
ldlework: I've already got a base16 template for emacs :)
<ldlework>
oh ok
<elvishjerricco>
What are those red cursors in the video you posted?
<ldlework>
elvishjerricco: whitespace-mode
<elvishjerricco>
ldlework: Oh so it's just trailing whitespace? And you detected trailing whitespace in your macro?
<ldlework>
well no, i always have whitespace-mode on, the macro just tracked " = " I think
<ldlework>
whitespace-mode makes the face for trailing whitespace full-red, but other than that nothing special going on
<ldlework>
i should've recorded the bit where I made the macro :)
<elvishjerricco>
ldlework: Emacs macros are nice, but I've found I have to use them as a substitute for the multiple-cursors features in other editors. Emacs doesn't really have anything competitive to that AFAIK, and I really miss multiple cursors.
<ldlework>
i find that emacs macros are far more powerful than multiple cursors
<elvishjerricco>
ldlework: They are. But they don't solve the problem that multiple cursors solve particularly well
<ldlework>
you can include search operations in your macros and multiple cursors just can't match that
<ldlework>
sure they do
<ldlework>
most multiple cursor implementations start by having you place the other cursors
<ldlework>
which is essentially a very basic search operation
<ldlework>
just going to each site and playing back the same sequence of edits performed in the first one
<ldlework>
multiple cursors is just a gimmick to allow you to see all applications simultaneously
<ldlework>
which i admit looks cool
<elvishjerricco>
ldlework: The difference to me is that I get to just type when I'm using multiple cursors. When I'm making macros, I feel like I'm writing a program and I have no idea if it's about to work or fail.
<ldlework>
also, emacs has a multiple cursors implementation
<ldlework>
sure, that's a feeling
<ldlework>
it doesn't solve a different set of problems
<ldlework>
emacs macros are definitely a superset of multiple cursors imo
<elvishjerricco>
Plus having to hit F4 a million times to apply my macro is slower and more annoying than just alt+clicking and dragging to the start of all the lines.
<ldlework>
i dunno why you'd use F4
<ldlework>
C-x e to apply a keyboard macro
<ldlework>
then you can just keep hitting e
<elvishjerricco>
ldlework: same problem
<ldlework>
but again, that's not really a different set of problems
<elvishjerricco>
ldlework: multiple cursors is just a faster and more convenient way of doing a few of the things macros can do
<ldlework>
macros are more powerful here anyway, because by ending a macro with a search operation you land where the next execution will take place
<elvishjerricco>
that emacs package for multiple cursors seems awesome
<ldlework>
so you can even skip some locations that would otherwise be matches
<ldlework>
i'm just not sure they're exactly faster
<ldlework>
you can see the huuuuuge list of cursor selection functions
<ldlework>
remembering and correctly using those to accurately mark your sites
<ldlework>
vs doing it interactively with tail search operations
<ldlework>
using normal emacs operations
<elvishjerricco>
Wow watching the demo video of that multiple cursors package, the guy edits down a copy paste from his browser of his youtube channel to just the durations and view counts of each video in like 5 seconds. I really don't think you could do that nearly as quickly with macros, if only because you have to manually repeatedly apply it
<ldlework>
i've seen it and i've used the package for a few months
<ldlework>
similar to when you'd use ! with search and replace to go ahead and replace all occurrences because you're confident the data is regular enough, multiple cursors can be pretty fast
<ldlework>
but in those same cases I realized you can just do C-u 1000 C-x e
<ldlework>
to apply the macro enough times to get to the end of the file and then I don't need to remember any new keybindings for a multiple cursors package
<sphalerite>
Somewhat concerned that Hetzner's KVM consoles are accessed via unencrypted HTTP D:
<ivan>
not behind a VPN?
<sphalerite>
ivan: nope
<sphalerite>
ivan: you go to (for example) http://fsn1-kvm64.hetzner.com/auth.asp and log in there, then you download a file "spider.jnlp" also via unencrypted HTTP, then you run that and it seems to communicate in cleartext too
<srhb>
That indeed does not sound good.
<srhb>
Write them a mail and ask? They're usually quite responsive.
<sphalerite>
yeah I sent a support request
sphalerit has joined #nixos-chat
<srhb>
Thanks :) I never realized that.. Not nice.
sir_guy_carleton has quit [Quit: WeeChat 2.0]
sir_guy_carleton has joined #nixos-chat
<andi->
Weren't they using self-signed certificates?
<sphalerite>
andi-: yeah if you try to access it via HTTPS it comes up with a self-signed cert
<jD91mZM2>
I just compiled rnix for WebAssembly! Select the "Nix" language in https://jd91mzm2.github.io/editor.html and check the tree representation :D
<sphalerite>
whooooooooaaaaaaa
* sphalerite
wonders if you could do that with nix itself as well
<jD91mZM2>
What do you mean? Actually evaluating Nix in WASM?
<sphalerite>
yeah
<sphalerite>
not super useful really I imagine
<jD91mZM2>
I mean... for a list of all packages maybe
<sphalerite>
besides, joepie91 implemented some stuff (was it just parsing, or did you go into evaluation as well?) in js anyway
<jD91mZM2>
Would it be rude to sort of "advertise", I guess, Nix-unrelated projects here? I need all the help testing something I can get
<srhb>
jD91mZM2: I think that's fine. :)
<jD91mZM2>
Then https://jd91mzm2.github.io/chess.html. It's a rewrite of a chess minimax algorithm, and I want to make sure it's as good as it can be against humans. Somebody actually won over the previous version so I'm a little worried :P
jD91mZM2 has quit [Quit: WeeChat 2.0]
<joepie91>
sphalerite: just parsing
<sphalerite>
Updating 10 debian servers… the joys
<andi->
sphalerite: kexec into the nixos installer? :)
<sphalerite>
andi-: well I'd need to port their configs to nixos first. I might do that eventually but for now I'm just updating them :p
<sphalerite>
I've migrated *part* of the network to nixos
<infinisil>
But running it leads to random segfaults :/
<infinisil>
Although, I have a similar problem with xmobar, so it might be something with my setup that's fundamentally broken
<samueldr>
hmm, why were cached failures removed? while git bisecting a section of nixpkgs with a commonly failing build it's... annoying to see it restart
<samueldr>
(I'm also maybe bad at bisecting)
<sphalerite>
what was the name of that web service that checks the quality of your TLS config again?
<sphalerite>
Emacs automatically updates the serial when editing a zonefile
<sphalerite>
♥
sir_guy_carleton has quit [Quit: WeeChat 2.0]
<sphalerite>
achievement get: my first nixops deployment!
<ldlework>
sphalerite: heyyyy congrats
<ldlework>
sphalerite: what does your server do
<sphalerite>
ldlework: it's a bunch of servers
<ldlework>
that do what!
<sphalerite>
ldlework: one of them serves netboot for 10 other machines
<sphalerite>
ldlework: the others don't do much… yet
<ldlework>
i never really understand netboot
<ldlework>
but cool!
<sphalerite>
but will replace some other debian servers soon
<ldlework>
sphalerite: for work?
<sphalerite>
nope, haven't started work yet :D
<sphalerite>
(soon!)
<ldlework>
where at?
<sphalerite>
It's for the uni's computing society
<ldlework>
ah
<sphalerite>
we have a server room filled with old donated servers that are there for educational purposes
<sphalerite>
First service I'm going to move from debian will probably be DNS, which ought to be exciting
<ldlework>
Maybe!
<ldlework>
I bet DNS is pretty easy to set up with nix heh
<sphalerite>
well… if DNS breaks my nixops breaks
<sphalerite>
so that'll be great fun
<srhb>
Nothing a little host file wrangling won't fix. :D
<sphalerite>
indeed
<sphalerite>
uh wtf
<sphalerite>
One of the machines won't deploy successfully because… it's trying to run switch-to-configuration as a bash script..?
<sphalerite>
the shebang looks right
* sphalerite
reboots it
<sphalerite>
ah. hahahaha
<sphalerite>
TIL: If the kernel fails to execute the shebang interpreter (say, because it's 64-bit while the kernel is 32-bit) for a script, it will try to run it as a shell script instead.
<jD91mZM2>
but... why
<gchristensen>
that doesn't match my understanding of how that works... how would it know to run it as a shell script?
<gchristensen>
what is a shell script to the kernel?
<sphalerite>
gchristensen: I guess it uses /bin/sh? xD
<gchristensen>
if I had to guess, I'd guess there is something else going on
<samueldr>
If the header of a file isn't recognized (the attempted execve(2) failed with the error ENOEXEC), these functions will execute the shell (/bin/sh) with the path of the file as its first argument. (If this attempt fails, no further searching is done.)
<samueldr>
from POSIX: There are two distinct ways in which the contents of the process image file may cause the execution to fail, distinguished by the setting of errno to either [ENOEXEC] or [EINVAL] (see the ERRORS section). In the cases where the other members of the exec family of functions would fail and set errno to [ENOEXEC], the execlp() and execvp() functions shall execute a command interpreter and the environment of the executed command shall
<samueldr>
be as if the process invoked the sh utility using execl() as follows:
<andi->
I just hope that's not true for ELF binaries that can't be run.
<gchristensen>
it isn't
<gchristensen>
if the interpreter is wrong, you get a confusing error about the file not existing
<andi->
Yep
<sphalerite>
I think I've fixed it with a cheeky nixpkgs.system = "i686-linux";
<gchristensen>
that'll do it
<sphalerite>
Still waiting for the build to finish though
<samueldr>
yeah, pretty sure the header is recognized for "other ELFs"
<sphalerite>
yaaay it works
<sphalerite>
ldlework: what about netboot don't you understand?
<ldlework>
sphalerite: i just don't really conceive of what it is and what it is for
<ldlework>
never looked into it
<ldlework>
it is a matter of ignorance not confusion
<sphalerite>
you know how your firmware loads the bootloader from the HDD and boots it? Netboot is when the firmware uses the network card, talks to a DHCP server, and the DHCP server tells it where to download the boot image
<ldlework>
huh
<ldlework>
what's the point
<gchristensen>
bulk management
<sphalerite>
yep, and it's very resilient for remote machines. If you have a machine booting from disk and you botch your boot sector, you won't have much fun once it's shut down
<srhb>
And you can do really cool generic stuff with it. For instance, you could have a server boot a generic image when it's plugged in for the first time; that image boots up and registers the server's hardware information, which configures your netboot host to feed it a "database-server-type" image on the next reboot, because you're short one of those and now you know this machine is configured for that sort of thing.
<srhb>
Provisioning becomes "plug it in and register the image it should boot centrally" :)
<ldlework>
sphalerite: I just throw away machines
<sphalerite>
also it lets you boot completely without a HDD!
<gchristensen>
srhb++
<{^_^}>
srhb's karma got increased to 16
* srhb
has done this with great joy before
<srhb>
Coupled with Nix especially it becomes ridiculously powerful.
<gchristensen>
ldlework: hardware?
<ldlework>
i've never worked at a place that uses its own hardware
<gchristensen>
well there you go, you wouldn't use netboot unless you're working with hardware
<ldlework>
just thousands and thousands of aws machines
<srhb>
lol
<ldlework>
i see
<srhb>
Yeah, I've noticed this tendency
<srhb>
I'm amazed at the money-burning going on in clouds. :D
<ldlework>
well it is not new
<srhb>
It's relatively new to me, still
<ldlework>
i doubt it amounts to lost money
<ldlework>
compared to the problems inherent in maintaining server states
<srhb>
Certainly looks that way to me..
<srhb>
ldlework: Ah, but you have netboot and Nix. Problem solved ;P
<ldlework>
if something goes wrong on a machine
<gchristensen>
it depends on if you want capex or opex
<ldlework>
and it takes 3 minutes to spin up a new machine
<gchristensen>
and also your workload
<srhb>
gchristensen: Whassat?
<ldlework>
there's absolutely no business sense in having anyone paid 100k a year spend more than 3 minutes replacing it
<srhb>
ldlework: From a cost perspective it seems to me nothing beats the hybrid solution
<ldlework>
and the cap for "fixing" is unknown. the cap for replacement is three minutes.
<gchristensen>
srhb: capex: CAPital EXPenditure (we have $100k we want to spend today), opex: OPerational EXPenditure (we want to spend $100k over the next 6mo)
<srhb>
Capacity that you must have is always cheaper on your own hardware
<srhb>
Capacity that you _sometimes_ must have is usually cheaper in the cloud.
<srhb>
gchristensen: Ah!
<gchristensen>
let's be real here
<ldlework>
i'm not talking about owned hardware whatsoever, i have nothing to say about it
<gchristensen>
there are a lot of different reasons to pick cloud vs. hardware
<srhb>
absolutely
<srhb>
I'm only talking about the cost of the actual hardware.
<gchristensen>
and we're not going to have a thorough discussion beyond "they're both options"
<ldlework>
not anymore
<srhb>
Enumerating every reason is... Impossibru..
<ldlework>
srhb: how much hardware you got
<srhb>
None currently. :)
<gchristensen>
I typically netboot my cloud servers, of course
<srhb>
Previously about 80-100 machines with 128-256 GiB of memory and varying CPU configs, with a handful of those being very special purpose.
<ldlework>
cool, what was the business?
<srhb>
only some 40 of those were "mine to manage" and they were generic container executors plus some "in house cloud storage"
<srhb>
The real inefficiency was that we had some batch job workloads that should have _clearly_ been offloaded to the cloud
<srhb>
Didn't happen while I was there.
<srhb>
And then you're overprovisioning on bare metal, which is very costly..
<srhb>
Need to take a page out of gchristensens "spin up a packet machine" book for that ;-P
<gchristensen>
any bare metal cloud provider will do
* srhb
nods
<srhb>
I do have a soft spot for Packet these days though, for obvious reasons...
* gchristensen
thinly veils his fanboyism
<srhb>
:D
<gchristensen>
these days they can provision a bare metal server faster than AWS can a VM
<srhb>
That's.. Pretty amazing.
<ldlework>
you just lose out on everything else
<andi->
It's really fun to use and just works as you'd expect.
<andi->
ldlework: it has a use case and for that it might be the right fit..
<gchristensen>
trade-offs
<ldlework>
and it might not
<gchristensen>
I'm not sure why you're determined to be negative on the topic
<ldlework>
i don't feel like i was being negative
<ldlework>
easy to interpret someone criticising something you're a fanboy about as negativity though
<ldlework>
i was just pointing out it goes both ways
<gchristensen>
maybe I'm misinterpreting your messages
<ldlework>
i feel like i am tired of you constantly making meta-commentary any time i try to engage in nix channels though, super wearisome
<ldlework>
i am just providing my thoughts to the channel without malice or ill will or any wish to negatively influence anyone so i hope you can just work it out on your end in the future
<simpson>
ldlework: Specifically, how might one "lose out", and what's in the "everything"? I don't know much about Packet.
<simpson>
Or is this a complaint about netboot?
<ldlework>
there's tons of advantages to using AWS, not least their insane support, at least at corporate levels
<ldlework>
I was just thinking "dang faster ec2 would be nice. hmm but aws is an institution at this point"
<ldlework>
and then I said what I said
<simpson>
Neither my employer nor my side business is on AWS; it's a choice that businesses make.
<gchristensen>
the problem I have is your approach feels, to me, to be generally abrasive and negative -- not looking to build up, but feels more critical and not fun. since you insist this isn't true, I regularly check with other people engaged in the chats to see how they interpret them, too. typically, they have agreed with my read on it, however they do note that your abrasiveness is borderline. however, I'm not
<ldlework>
um obviously..
<gchristensen>
interested in bordering-on-abrasiveness, and I'm not interested in giving up on keeping this community a notably positive and joyous one. I won't stop that effort.
<ldlework>
well at least you have the people on your side to make your arbitrary judgements about my intent
<ldlework>
if only all our communities could be led with such great insight and piety
<ldlework>
the only time i ever detect negativity is when you meta-comment
<ldlework>
because i'm certainly not generating my speech from negativity
<ldlework>
so cool, guess I just have to bear the overhead burden of you harassing me on the regular in order to participate
<gchristensen>
that is the trouble with our words -- it isn't how we perceive our own words, but how the other people around us perceive them
<ldlework>
yes, very progressive of you
<srhb>
^- prime example, please..
<ldlework>
sure just attack me personally
<gchristensen>
this is also part of why it is challenging to communicate emotional topics on the internet, the tone and rhythm of the messages is lost
<ldlework>
ignore that
<srhb>
I'm not trying to attack you...
<ldlework>
i point out how it is a very specific political stance
<gchristensen>
this is not a political stance, this is communication
<ldlework>
to think that it is on the speaker's burden to not offend
<ldlework>
no it is a political stance
<simpson>
ldlework: What, you think words have meanings~
<gchristensen>
this puts you in a very difficult position then
<srhb>
There should certainly be an active intent to not offend on the part of the speaker.
<ldlework>
srhb: no but you just ignored all such comments pointed at me
<ldlework>
but you jumped at the first criticism i made towards them
<simpson>
gchristensen: Clearly the trouble is that people ever bother to communicate. We should go back to grunting and pointing~
<ldlework>
speaking of thinly veiled
<gchristensen>
simpson: :d
<ldlework>
or just giving people the benefit of the doubt
<ldlework>
and approaching them directly when someone has said something you're not sure about
<ldlework>
instead of ruminating behind the scenes and making public displays of arrogant piety
<srhb>
ldlework: I am tired and I actually would have spoken up sooner if not for that. Instead I only jumped when it became very clearly unreasonable to me.
<gchristensen>
I don't ruminate behind the scenes, I quickly bring concerns to the forefront
<ldlework>
but yeah pretend to be wise adults
<ldlework>
you admitted to consorting behind the scenes
* ldlework
can barely detect any veil
<ldlework>
srhb: yeah, sure, nevermind what I am
<ldlework>
tired, or you know, being attacked in the moment
<gchristensen>
I have found it very helpful to ask privately how they feel about the conversation, yes
<simpson>
ldlework: Lucky 10000: Folks have a bias that prevents them from realizing their own hypocrisy. I hear the word "optics" tossed around a lot to try to recognize this.
<ldlework>
simpson: do you believe you are trying to win my mind by being wholly arrogant
<ldlework>
"what you think words have meanings~"
<ldlework>
so consistent on our application of principle
<ldlework>
this display has been pathetic all around
<ldlework>
i am not trying to hurt anyone
<simpson>
ldlework: I'm not trying to win *anybody's* mind. I think it's hilarious that gchristensen thinks that *you're* the "critical and not fun" person, and not me.
<ldlework>
do whatever you want in response to your feelings about my speech
<ldlework>
i just want to talk about nix and technology
matthewbauer has joined #nixos-chat
<andi->
Because it just came up elsewhere: any of you running with an r/o $HOME with the exception of a few whitelisted folders (~/dev, ~/Mail, ...)?
<ldlework>
no but it sounds like a fun challenge
<ldlework>
a really hard challenge!
<srhb>
andi-: I half-wish I had made that choice years ago
<ldlework>
have you pulled it off?
<srhb>
homedir fungus is real
<srhb>
And there's like years of fungus in mine...
<andi->
ldlework: no just thinking about it with a friend
<srhb>
decades, in fact, probably.
<samueldr>
I was thinking of leaving $HOME, having $HOME be only for generated stuff, with my actual files elsewhere (with proper links)
<samueldr>
so $HOME could be /Users/samuel/Settings, and my actual files be in /Users/samuel, or even not sharing a common folder, something like HOME=/Settings/samuel/
<samueldr>
though expecting most things to dislike that
jD91mZM2 has quit [Quit: WeeChat 2.0]
<ldlework>
samueldr: i like that idea too
<ldlework>
like
<ldlework>
why do i keep my repos in ~/src? there's no actual reason to do so
<samueldr>
on a multi-user system, like unix was brought up as, it made more sense to have one folder be a user's playground
<ldlework>
sometimes very large institutions have wisdom embedded in them that you can't easily see at first
<ldlework>
so the other side of me just says "don't bother" :)
<ldlework>
yeah
<ldlework>
and sometimes very large institutions have irrelevant wisdom embedded in them that you can't easily see needs deprecating
<ldlework>
it goes both ways
<ldlework>
damn complexity
<ldlework>
samueldr: think you'd ever try that
<samueldr>
moving files out of $HOME, making $HOME a "settings" folder, still tempted; should probably do it when I switch my drives around
<ldlework>
i need to figure out how to get docker working properly on nix so i can get in the habit of just trying stuff out in nix containers
<ldlework>
and/or s/docker/nix vm thingies
<samueldr>
I have a 1TB SSD sitting unused that I *should* be moving my files onto, but it means downtime on my workstation, which I'm strangely averse to; especially since with nixos my configs are alike everywhere
jtojnar has joined #nixos-chat
<andi->
if I gained one thing with switching to NixOS it is being able to move my stuff quickly... That's like THE benefit for me. I can just reinstall it on another machine - and I do so regularly whenever I acquire new stuff.. Moving the actual data is mostly just cloning git repos (and enrolling new SSH keys / restoring from backup)
<samueldr>
yes! I'm mostly at the lazy point of "don't want to wait the thirty minutes it'll take to transfer the data"
<ldlework>
totally
<ldlework>
andi-: do you have a blog or have you written about your process at all?
<simpson>
andi-: It's to the point where I now talk about Nix as the easiest reproducible-build tool I've seen.
<andi->
ldlework: no, I didn't even publish my config anywhere even though it could be mostly public..
<andi->
it's one of those things that are very low on the priority list :/
<ldlework>
andi-: adamantium and I (mostly him) have been thinking a lot about the bootstrap process, so the more information the better
<ldlework>
have you seen their "themelios" tool?
<andi->
simpson: yes, I showed up at the new job (with one of my own devices) and bootstrapped everything I needed to work from git + nixos-install.. Was very nice and people there aren't used to someone being able to work on hour 0..
<ldlework>
i think clever has a similar tool
<andi->
ldlework: what kind of part of the process needs another tool?
<ldlework>
andi-: the idea is like, you take themelios.sh and your nixcfg repo and push one button and you get a nice nix system
<ldlework>
i mean the script is pretty long so i think it does ~something~ :)
<ldlework>
i think it mostly has to do with running nixos on zfs
<andi->
I might be blind to the issue it tries to solve.. I just partition my devices (takes a minute or so, longest is rolling a new luks passphrase..) and then nixos-install after cloning? o.O If I wanted to automate the process further I'd have a custom live-system that asks me what kind of partitioning the (new) device should get etc..
<ldlework>
i think the idea is to be turn-key
<andi->
I have that for unattended installations where it just does everything; not sure if *I* would want that on any of my workstations. Even with 5 machines currently I do not feel the need to automate the installation further. It would be nice tho
<andi->
Maybe a bit of `eval-config` and a loop around the device presets with a partitioning script would be feasible and straightforward..
<ldlework>
i dunno what you just said
snajpa has joined #nixos-chat
<andi->
I mean I already have all device configurations in files and can just use them.. The only thing missing for me would be automated partitioning and pre-loading of all packages (eval-config). Then I could have the "turn-key" style installation with a simple boot menu asking me which device (profile) I want to install.
<ldlework>
that would be pretty cool
<andi->
the time invested there probably never comes back to me and it is very low on my todo-list unless I have a reason to prove a point about why NixOS is superior to a co-worker again :P
<infinisil>
That would indeed be pretty cool..
<infinisil>
My main problem with setting up configs on multiple machines is the stateful data
<infinisil>
Because there's data I want on all of them, and data I only want on some of them
<infinisil>
And it needs to be synced and stuff
<infinisil>
And I need backups from machines to other machines
<elvishjerricco>
infinisil: I'm in the midst of a move to ZFS, particularly with the goal of using send/recv for backups. Struggling to figure out what all I ought to back up though; anything that's in a git repo feels redundant
<andi->
that's a problem we will not be able to fix with nix unless that data doesn't change very often :-)
<infinisil>
I back up all my /home and /var/lib
<andi->
i back up everything but /nix/store... The "bloat" might be useful at some point and is deduplicated on further backups anyway..
<gchristensen>
andi-: we've talked about / being a tmpfs except for specific locations
<andi->
gchristensen: that sounds like a "quick win" towards requiring everything to be declared (or exempted)
<gchristensen>
("we've" being #nixos-chat, not NixOS The Project)
<gchristensen>
andi-: yeah, shlevy runs his system that way -- with explicitly set XDG_* dirs to stateful spots
<andi->
gchristensen: oh that was back in april.. just grep'ed it
<elvishjerricco>
ZFS deduplication can't possibly work on encrypted "raw" sends. I really want to like ZFS native encryption but.... meh. Probably better to just give the backup server access to the data and use LUKS or something.
<gchristensen>
elvishjerricco: ZFS encryption isn't stable, so probably shouldn't use it. and: the ZFS project recommends against using dedupe unless you know exactly why you want it in the face of all the reasons you don't
<infinisil>
elvishjerricco: Why can't it work?
<elvishjerricco>
gchristensen: The deduplication would be solely for the backup pool
<elvishjerricco>
i.e. mount it, let it use all the memory it wants while we backup, then unmount
<gchristensen>
the trouble there is it will be a big pool with a big dedupe table, so very possible it becomes unmountable
<elvishjerricco>
infinisil: I guess it might work within one dataset, but data that's the same between different encrypted datasets won't be deduplicated.
<infinisil>
Oh you mean the actual deduplication, I thought you meant no duplicated data transfer on backups
<andi->
if it is just for backups why not stick to something like borg which doesn't require transferring data n-times just to let ZFS dedupe it?
<elvishjerricco>
gchristensen: Dedupe tables are a few hundred megs per terabyte, aren't they? That's not so bad for my very modest use case
<gchristensen>
cool :)
<gchristensen>
not sure, but that sounds about right
<infinisil>
andi-: The thing with zfs send/recv is that it's super fast, because it comes from the filesystem directly
<elvishjerricco>
It's also incremental
<infinisil>
No need to scan everything like rsync or borg (or any other backup tool) would have to do
<andi->
so the backup source must also always be a ZFS?
<elvishjerricco>
You can store zfs sends in a file
<gchristensen>
every time we talk about ZFS here I get the urge to erase and reinstall w/ zfs =)
<infinisil>
But yeah, zfs source and destination is optimal
<andi->
I am still trying to convince myself that I do not have any usecase that would be better off with ZFS.. I really like having a standard kernel filesystem :-)
<elvishjerricco>
ZFS is awesome. But yea, I really really hope someone replaces it eventually. bcachefs looks at least a little promising.
<infinisil>
andi-: The one feature I'm using all the time that I couldn't have with other filesystems are snapshots
<infinisil>
I take a snapshot of everything every 5 minutes
<elvishjerricco>
that's a lot
<infinisil>
This means if I mess anything up, I can just get an older version of it
<andi->
why so often?
<andi->
how do you mess up so badly? (I know rm -rf . in the wrong folder...)
<infinisil>
Every 5 minutes for the last hour, every hour for the last day, every day for the last week, and every week for the last month
<infinisil>
I mean, I use it like once every week or so
<gchristensen>
zfs snapshots are so cheap, no reason not to
<andi->
I have done similar things with BTRFS on a server a few years back... worked nicely but the performance went into the basement after a few thousand snapshots
<infinisil>
And snapshots are really cheap with zfs, there's basically no cost to doing them
<infinisil>
(except the storage they will be using)
<elvishjerricco>
infinisil: Do you have a script that maintains that timeline of snapshots?
<infinisil>
I even wrote a good NixOS module for it :)
<elvishjerricco>
infinisil: Thank you for that :)
<andi->
maybe I should give it a try on a spare machine..
<elvishjerricco>
What's the time table on bcachefs look like? I know there was talk about beginning to move it into the kernel somewhat recently.
<elvishjerricco>
It doesn't have snapshots yet, but once it gets that and send recv, I'd gladly switch to something in-kernel.
<andi->
since may there are patches being sent to the LKML for bcachefs
<andi->
It seems like they are just starting out submitting it piece by piece
<gchristensen>
and even then, depending on how much you like your data, being in-tree is no guarantee ;)
<andi->
ha, backups all the way ;-)
<gchristensen>
and with a fancy new fs -- ideally your backups are on a different fs
<andi->
backup your backups so you have backups of your backups when you fuc* up your backups ;-)
<andi->
speaking about backups of backups.. I have been thinking of pushing another encrypted copy of my stuff to $cloud storage provider. Any experience in that area? I have a few GB/day so not that much data... Amazon Glacier sounds like a reasonable thing to do since it is just for the worst case anyway..
<gchristensen>
glacier is perfect for that, but know how much it will cost you to get it back
<gchristensen>
also do consider LeastAuthority.org, friends of the Nix ecosystem :)
<infinisil>
I've spoken with my friends about maybe setting up backup servers for each other, would be pretty neat if everybody would have a backup of everybody else on their machine
<infinisil>
Haven't thought about how to do that though; what's the most efficient and most secure way to do it?
<andi->
I am using syncthing for syncing between people and that works surprisingly well
<andi->
not sure it would be suited for that kind of stuff tho..
<gchristensen>
do check out tahoelafs, infinisil :) the security model is great for that
<infinisil>
Thanks, heard of it a couple times already, should really give it a closer look
<infinisil>
I also should really start on my own backup tool i have planned
<infinisil>
Although i might make it my bachelors thesis
<sphalerite>
andi-: syncthing is kind of a love/hate story for me :p it's made my life so much easier, especially managing pictures on my phone — I just take a picture and there it is on my laptop! Really useful for backups as well, I can make a backup then zfs-snapshot it on my laptop and delete it to free up the space
<sphalerite>
andi-: what I hate about it is that it doesn't have a text-based UI, the default config is completely broken security-wise for multi-user systems, and the mysterious failures to connect
<andi->
I would say it is a first in linux kernel development..
ldlework has left #nixos-chat ["WeeChat 2.0"]
<sphalerite>
whaaaaat
<gchristensen>
holy guacamole
<andi->
We are living in exciting times :-)
<joepie91>
I'm very curious to see how this plays out
<infinisil>
gchristensen: Is tahoelafs practical for large amounts of data?
<infinisil>
Like, terabytes
<infinisil>
(if you happen to know)
<gchristensen>
simpson is the local tahoelafs expert
<joepie91>
infinisil: no reason why it wouldn't work, but make sure you're familiar with the tradeoffs of tahoe-lafs in general :P
<joepie91>
(mostly in the area of latency, retention mechanism, etc.)
<simpson>
infinisil: For a single TiB-sized blob? I'm not sure what the size limit is, but I think it's documented. One sec.
<joepie91>
oh, in a single blob
<infinisil>
Well, I'm thinking about backups in a distributed fashion between a couple machines of friends
<joepie91>
infinisil: be aware that if you don't regularly repair files, they will die as machines do and the shares run out...
<joepie91>
it's not automatically self-healing
<infinisil>
Ah darn
<joepie91>
so you need to trigger a verify-and-repair every once in a while
<joepie91>
the deletion mechanism is also currently limited to "let the lease expire" with no explicit delete command
<joepie91>
enabling the lease mechanism means you will need to periodically renew leases, or files may be deleted
<infinisil>
Hmm alright, so it seems not optimal for what I'm trying to do
<joepie91>
at least this was true last time I checked, an explicit delete command was once in the cards but I have no idea if that got anywhere
<joepie91>
infinisil: aside from those caveats I see no reason why it wouldn't work though
<simpson>
infinisil: I don't think that there are any hard limits; for a latency/liveness tradeoff, the official client's segment size is 128KiB, but that's hidden under the hood. So probably the only limit is your OS and filesystem?
<simpson>
The largest blobs I've stored are ~5GiB, but again, the client broke that up into smaller segments for me.
<joepie91>
simpson: possibly there may be a limit on the representable segment count in the UEB though?
<joepie91>
or filesize for that matter
<joepie91>
also, there *is* a directory size limit of... 30k? entries
<joepie91>
approx
<infinisil>
I'm also worrying about the incremental part (if I'm using it for backups)
<joepie91>
iirc directories are still SDMF, and SDMF is limited to 1 segment
<infinisil>
And it's not that I want all my data to be stored in it, I'd only use it for backups
<joepie91>
infinisil: I've been using duplicity with tahoe
<joepie91>
there's a bit of overhead in retrieving the incremental manifests but nothing deal-breaking
<simpson>
joepie91: I have no idea how the segment trees are represented; I'd hope that it's recursive, but I haven't checked.
<infinisil>
Hmm.. i'll look into it
<gchristensen>
and the low authority of the file stores makes it ideal for friends hosting friends backups
<joepie91>
simpson: no, I just mean the num_segments and size fields etc.
<joepie91>
iirc that's a fixed amount of bytes
<simpson>
And yes, directories have limits, because directories are implemented with mutable files and the size limit helps make directory writes less collidey.
<joepie91>
(unlike the share hash trees and such, which are offset based)
<joepie91>
infinisil: either way I'd recommend periodic full backups even when using incremental backups
<simpson>
joepie91: Oh, probably, but at the same time, I can easily imagine a CoW segment-based rope using only one mutable cap, so if this doesn't exist it's not because the DB can't handle it.
<infinisil>
joepie91: why that? It's only more expensive..
<infinisil>
I don't wanna store TB of data full peridoically, especially when it rarely changes
<joepie91>
infinisil: incremental backups have the irritating habit of rotting; you're adding a lot of links in the chain where a fault in any one of the links could break the full chain and make your backups unrecoverable
<joepie91>
so periodic full backups work as 'checkpoints', so to say
<infinisil>
Hmm i see
<joepie91>
(this is generally true, tahoe-lafs or not)
<infinisil>
I think zfs doesn't have that problem
<gchristensen>
not with scrubs
<joepie91>
simpson: I've been looking at a better mutable-files implementation for a derived protocol I'm working on (inspired by tahoe-lafs but with a few different design choices and some better protections), and I haven't fully worked it out yet, but I *think* that tahoe's approach can be improved upon
<joepie91>
wrt directory mutations
kisik21 has quit [Ping timeout: 244 seconds]
* gchristensen
declares Nix email bankruptcy and deletes 1,900 emails
<joepie91>
oh dear
<joepie91>
that bad?
<gchristensen>
I get a lot of email
<andi->
But but but fulltext history search? That's the one reason I tag mails to get them out of my face but never delete..
<gchristensen>
by delete I mostly mean assigning the "deleted" label :)
<andi->
What are you using for the tagging?
<gchristensen>
notmuch
<andi->
:+1: its the only thing that worked for me when I reevaluated my mail client a few years ago
<gchristensen>
the truth, though, is I hate every thing about email so the client doesn't make a huge difference to me :P
<elvishjerricco>
gchristensen++
<{^_^}>
gchristensen's karma got increased to 26
<andi->
There are worse things... I like my mail workflow.
<infinisil>
In contrast there's me, who's been too lazy to install an email client on his machine for all the time he's been using linux
<gchristensen>
I have (had?) notmuch scripts to archive github threads after merge, which was cool
<gchristensen>
https://github.com/NixOS/nixpkgs/issues/46326 <- things like this are reasons we're in need of defining some maintainership structure: I have no business having opinions on llvm :)
<{^_^}>
#46326 (by copumpkin, 1 week ago, open): Bump default LLVM to 6 (or maybe even 7?)
<infinisil>
simpson: Well, tahoe-lafs does sound indeed very good for my usecase :D
<infinisil>
Read through a decent part of the docs
<infinisil>
Even has a backup command, these upload helpers might come in handy too for me, and the storage expiration is off by default
<infinisil>
And being able to easily share read-access with others is nice too
<samueldr>
>:( I was making progress toward finding the root cause of an issue... or so I thought, until my test turned out to sometimes fail even though it ran fine the time before
<samueldr>
now I need to figure out 1. why it isn't deterministic, 2. make it deterministic :/
<joepie91>
infinisil: do keep in mind the second part of the storage expiration remark: there's no explicit delete button
<infinisil>
Yeah, seems fine tbh
<infinisil>
A neat way to manage gc in a distributed fashion
<infinisil>
samueldr: Hehe yeah, that happened to me a couple times before as well, one time it was a /bin/sh impurity
<infinisil>
joepie91: Now I'll just have to convince my friends that we all set it up :P
<infinisil>
And I need to upgrade my storage actually..
<joepie91>
infinisil: make sure to look into the invite mechanism :)
<infinisil>
I read about it, but I think I already forgot what it was lol
<joepie91>
infinisil: oh, on which note, one big benefit of tahoe-lafs is that it's totally okay with heterogeneous storage grids
<joepie91>
there's absolutely no requirement to have similarly-sized storage nodes
<joepie91>
(though if you only have a few nodes and they're *really* out of whack, that might cause redundancy issues)
<samueldr>
hmm, anyone have clues about what could make it that sometimes segfaults show up in dmesg (and thus the console) and sometimes not?
<joepie91>
memory management bugs!
<infinisil>
joepie91: Ah, so it will automatically use whatever servers have free space?
<samueldr>
an explicit segfault
<joepie91>
oh, or do you mean the logging being non-deterministic, rather than the failure?
<samueldr>
not a signal
<samueldr>
yes
<joepie91>
ah hum, no idea then
<samueldr>
a real and explicit null-dereference
<samueldr>
using the same test, either dmesg | grep 'segfault at' works first time, or will not (using waitUntilSucceeds)
<samueldr>
and this matches with it being in the console output of the test, or not
<joepie91>
infinisil: right; when uploading a thing, it basically does this, simplified: hash thing, concat thing hash with each storage node hash, order by resulting hash, start at the top and keep polling servers until enough servers have said "yep, I have space to store a share for this"
<samueldr>
and using the same nixpkgs, force deleting a success, it sometimes flips to succeeding and sometimes to failing, so timing issues, possibly, but with what?
<joepie91>
infinisil: this does have a latency tradeoff, in that it's not *fully* deterministic where a share ends up, so in particular for files that have existed throughout a lot of node churn, it can sometimes be a bit slow-ish to find the shares
<samueldr>
already have waitForUnit("default.target"), though I'm not sure this would be the latest thing :/ (maybe too on-topic for nixos-chat)
<joepie91>
(this only affects the initial share-finding, it does not affect retrieval throughput)
<infinisil>
joepie91: Neato
<infinisil>
The NixOS configuration of tahoe is a bit intimidating though
<joepie91>
infinisil: honestly, if you have some free time, I'd recommend reading https://tahoe-lafs.org/~warner/pycon-tahoe.html - it's fairly accessible and there's a lot of interesting technical choices in tahoe
<joepie91>
(not required for using it, just as a curiosity)
<infinisil>
Thanks, might do so
<infinisil>
joepie91: Regarding these introducers, are these centralized?
<infinisil>
Oh you can probably run any number of them
<joepie91>
infinisil: yes; however, after connections with other storage nodes have been made, the introducer is no longer necessary
<joepie91>
dunno about multiple introducers
<joepie91>
used to only support one
<joepie91>
this was something that was on the todo list to fix a few years ago
<joepie91>
btw, storage nodes keep persistent connections to each other
<gchristensen>
they do? I should reread that article :D
<infinisil>
Ah, Q17: Is it possible to run multiple introducers on the same grid? A: Faruque Sarker has been working on this as a Google Summer of Code project. His changes are blocked due to needing more people to test them, review their code, and write more unit tests. For more information please take a look at ticket #68