<cransom>
when i was doing a lot of backup/recovery from arrays, adding another 4-8gigs of memory would keep the sender from going idle and starving the receiver
<gchristensen>
say, cransom, you know things about things
<gchristensen>
say I have ~100tb of data growing at, let's assume 20tb/yr and it is all basically write-only. how would you store that on a budget?
<cransom>
i started using backblaze for offsite backups from home
<cransom>
i don't have 100tb, but i haven't been having any problems (that i know of) with the 2-3tb so far
<gchristensen>
backblaze would be tough
<cransom>
how so?
<cransom>
i assume you wanted offsite and not a quarterly amazon subscription for 8tb drives :)
<cransom>
minio (go server) can proxy s3 style requests to other random backends like backblaze if that was a problem.
<gchristensen>
at $500/mo for b2 I think I'd rather the quarterly subscription :P
<cransom>
hrm. yeah, i guess. at 100+tb, those numbers get bigger
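[The $500/mo figure is consistent with back-of-the-envelope arithmetic; a minimal sketch, assuming B2's then-advertised rate of roughly $0.005 per GB-month — the rate constant and helper name here are illustrative assumptions, not from the log:]

```python
# Assumed Backblaze B2 storage rate at the time: ~$0.005 per GB-month.
RATE_PER_GB_MONTH = 0.005  # USD, assumption for illustration

def monthly_cost_usd(terabytes, rate=RATE_PER_GB_MONTH):
    """Rough monthly storage cost for the given number of terabytes.

    Uses decimal units (1 TB = 1000 GB), matching how B2 bills.
    """
    return terabytes * 1000 * rate

print(monthly_cost_usd(100))  # the ~100 TB from the conversation
print(monthly_cost_usd(120))  # after a year of 20 TB/yr growth
```

[At that rate, each year of 20 TB growth adds about $100/mo, which is why the quarterly hard-drive subscription starts to look attractive.]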
<cransom>
when do you need to read the data back?
<gchristensen>
cransom: probably-never but in case it is cheaper to ship this thing than to xfer it across the internet again
szicari has joined #nix-darwin
<szicari>
Keep up the good fight! I'm doing some build pipeline work and am being horrified daily at the sheer amount of reliance on side effects and shoddy, error-prone steps that go into the typical packaging and deployment process.
<szicari>
The "nix" approach is so much saner, albeit with a higher learning curve.