gchristensen changed the topic of #nixos-chat to: NixOS but much less topical || https://logs.nix.samueldr.com/nixos-chat
<infinisil> You know what I want for Nix
<infinisil> Build stats and optimizations and estimates regarding them
<gchristensen> oh?
<infinisil> E.g. time how long a `nix-build -A firefox` takes, use this time to estimate future builds
<infinisil> Allow a profiling run where processor speedup can be measured
<gchristensen> ah cool
<infinisil> Then do better optimizations with it
<infinisil> Also, do time measurements for all other things
<infinisil> Querying the cache, evaluating, instantiating, building
<infinisil> I want a total estimate of how long it will take
<infinisil> And how long it took, and for it to use this information to optimize
<infinisil> Stuff like that
<infinisil> We can then also plot build time of derivations over upgrades
<pie_> infinisil: sounds cool
<gchristensen> there was talk about having build metrics emitted over a special FD to the nix-daemon and stored separately
<gchristensen> seems feasible to export that somehow
<infinisil> Yeah I think I remember something like that
<infinisil> Rant follows
<infinisil> How the hell did we get to the point that my laptop's fan needs to be spinning full force, just
<infinisil> me reviewing some pull requests???
<infinisil> The web has gone to shit, and I really hate it
<infinisil> Yeah, there are good sites, but those are the minority
<infinisil> I just wish we had more native apps
<infinisil> I want a YouTube app. Having one could probably reduce my power usage for YouTube threefold
<infinisil> There *is* a GitHub desktop app I just discovered today, but I bet it runs on Electron, which is really no improvement
<infinisil> I want a good Reddit Linux client too
drakonis has joined #nixos-chat
<aminechikhaoui> better fix the web then :D
<ottidmes> infinisil: that is what I am afraid of when eventually buying a new laptop (for now I do most things on my desktop); most of what I do is in a browser, whether it be chrome/firefox/electron, so you also need lots of RAM
<infinisil> aminechikhaoui: A good native app feels so much better though, the instant transitions and no reliance on the internet working are really nice (especially with my constant internet problems here)
<infinisil> Right now on YouTube, I click a button and wait literally 5 seconds until the new page is there
<ottidmes> infinisil: ~1s for me, but I cannot wait until we finally get fiber here
<infinisil> Okay, it's a bit faster now, maybe 2 seconds, but sometimes I really have to wait a while just for YouTube to load all its divs and whatnot and render it all nicely
<infinisil> People from 20 years ago would laugh at this
<ottidmes> infinisil: laugh about you complaining ;)
<infinisil> Maybe
<ottidmes> infinisil: 20 years is a long time for the internet; it's only been 13 years since I got internet access
<pie_> things will change when people start getting annoyed by their porn being slow
<samueldr> sadly(?) porn sites are often at the forefront of doing things right :/
<samueldr> [citation needed]
<gchristensen> porn sites have been working on optimizing away milliseconds for many years
<samueldr> I had a quasi-meltdown (exaggerated for theatrical purposes) at work due to the sheer stupidity of a web browser having trouble rendering a list of 6336 SQL queries in a debug tool (disregard the fact that this is itself an issue), locking down my computer for two minutes; it happened when changing something in a <select>, which blocked all interaction with anything
<samueldr> you could potentially use that to create a DoS on an end-user's machine :(
<samueldr> who would win? xeon workstation with 64GB of ram OR 6336 stringy bois in a web browser?
<gchristensen> lol
<pie_> samueldr: thats kind of what i meant i guess
<pie_> also yeah, the browser debug tools we have now can do some pretty cool stuff BUT
lassulus_ has joined #nixos-chat
qyliss^work has quit [Quit: bye]
qyliss has quit [Quit: bye]
Lisanna_ has quit [Remote host closed the connection]
lassulus has quit [Ping timeout: 272 seconds]
lassulus_ is now known as lassulus
qyliss has joined #nixos-chat
qyliss^work has joined #nixos-chat
ottidmes has quit [Ping timeout: 246 seconds]
<infinisil> Lol
drakonis_ has joined #nixos-chat
drakonis has quit [Ping timeout: 252 seconds]
<jackdk> infinisil: did you see https://drewdevault.com/2018/11/15/sr.ht-general-availability.html the other day?
<drakonis1> use it vs github?
<infinisil> jackdk: Yeah, pretty cool
<drakonis1> the source for it is available
lassulus has quit [Quit: WeeChat 2.2]
lassulus has joined #nixos-chat
ekleog has quit [Quit: back soon]
ekleog has joined #nixos-chat
drakonis1 has quit [Quit: WeeChat 2.2]
<{^_^}> #50665 (by matthewbauer, 1 hour ago, open): Future of package updaters in Nixpkgs
<colemickens> is there a guide for how to add new updaters? I looked into it and nope'd out when I saw Haskell.
drakonis has joined #nixos-chat
drakonis_ has quit [Ping timeout: 252 seconds]
Lisanna_ has joined #nixos-chat
drakonis_ has joined #nixos-chat
drakonis has quit [Ping timeout: 252 seconds]
ninjin has quit [Remote host closed the connection]
ninjin has joined #nixos-chat
jackdk has quit [Ping timeout: 240 seconds]
pie_ has quit [Ping timeout: 256 seconds]
jasongrossman has joined #nixos-chat
__Sander__ has joined #nixos-chat
drakonis_ has quit [Remote host closed the connection]
mmercier has joined #nixos-chat
ottidmes has joined #nixos-chat
mmercier has quit [Quit: mmercier]
Lisanna_ has quit [Quit: Lisanna_]
<joepie91> unfortunately that sr.ht thing seems to be more or less copying cgit's UI which is notoriously horrible
<joepie91> mmyeah, browsing around the site a bit, that does not seem pleasant to use :/
__monty__ has joined #nixos-chat
ninjin has quit [Quit: WeeChat 2.2]
ottidmes has quit [Quit: WeeChat 2.2]
ninjin has joined #nixos-chat
<colemickens> it's worth reading a bit more into it..
<colemickens> there are plans for the UX niceties you seem to be seeking, while focusing on a self-hosted, microservice-style application tier with everything you need to replace github, and the rest coming later
<colemickens> Drew has blogged about it quite a bit, including thoughts about a web UI on top of the code-review-via-mailing-list implementation
obadz has quit [Ping timeout: 240 seconds]
obadz has joined #nixos-chat
<joepie91> colemickens: my experience so far with "we'll fix the UX bits later" has been pretty much universally poor :) so I'm very skeptical
<joepie91> like, I appreciate the OSS and selfhostedness, but I'm not necessarily convinced about the other aspects (in particular the splitting it up into distinct services, which makes it really difficult to produce a coherent UI), and the current UX definitely makes me skeptical of what it can eventually become, given that the current UX is what the technical decisions are based on, not a potential future UX
<joepie91> I don't really have a good word for it, but I very much get that first impression that I've been getting from many open-source things where UX just isn't valued very much or understood well, and everything is built according to "what we're used to" and some nebulous idea of 'simplicity' and 'elegance' that doesn't really translate to any end-user benefits.. with the predictable consequence that pretty much everybody except for a small set of
<joepie91> people finds the resulting software unpleasant to use
<joepie91> and it's possible that I'm wrong of course, but I've seen this go wrong *so many times* before...
mmercier has joined #nixos-chat
NinjaTrappeur has quit [Quit: WeeChat 2.3]
Lisanna has joined #nixos-chat
NinjaTrappeur has joined #nixos-chat
mmercier has quit [Quit: mmercier]
drakonis has joined #nixos-chat
mmercier has joined #nixos-chat
mmercier has quit [Client Quit]
julm has quit [Ping timeout: 268 seconds]
julm has joined #nixos-chat
dmc has quit [Ping timeout: 240 seconds]
polyzen has joined #nixos-chat
drakonis has quit [Quit: WeeChat 2.2]
ottidmes has joined #nixos-chat
__Sander__ has quit [Quit: Konversation terminated!]
polyzen is now known as dmc
<infinisil> ,paste
<{^_^}> Use a website such as http://nixpaste.lbr.uno/ or https://gist.github.com/ to share anything that's longer than a couple lines
<infinisil> nixpaste is down..
<infinisil> I always had worries about using it, but now I have confirmation
<infinisil> ,paste = Use a website such as https://gist.github.com/ or https://hastebin.com/ to share anything that's longer than a couple lines
<{^_^}> paste redefined, was defined as: Use a website such as http://nixpaste.lbr.uno/ or https://gist.github.com/ to share anything that's longer than a couple lines
<infinisil> ,paste
<{^_^}> Use a website such as http://nixpaste.lbr.uno/ or https://gist.github.com/ to share anything that's longer than a couple lines
<infinisil> Oh no :(
<gchristensen> what?
<infinisil> My bot didn't update the definition for some reason
<gchristensen> ah
<gchristensen> they were so similar I thought you had set it the same value :)
<infinisil> Goddamnit my bot is such a mess, I have so many things to fix
<andi-> what was special about nixpaste?
<gchristensen> it isn't a big deal, infinisil ;)
<infinisil> andi-: Nothing really, just a site by nix users, similar to https://paste.ubuntu.com/ or so
<gchristensen> I wonder if it was one of the first to support nix highlighting
<andi-> I c.. I have always thought about running a pastebin backed by a public git repository.. Everyone could host a clone and a few people publish to it.. might as well run it on ipfs then
<andi-> but there is gist and so many alternatives already.. not worth the investment
<gchristensen> I used to have a "pastebin" which was just generated static HTML and uploaded to a self-deleting s3 bucket
<andi-> I still run my own (https://paste.h4ck.space/#/) since it allows me to send one-time and encrypted pastes to people while being able to verify that..
drakonis1 has joined #nixos-chat
<infinisil> andi-: In my own paste thing I'm just using hashes that are hard to predict; is there a difference compared to using encryption?
<infinisil> In the end you share a URL with the decryption key anyways
<andi-> the server never has the plaintext
<infinisil> Ahh I see
<infinisil> Well in my case it's my own server so I don't care about that
<andi-> it is also my own but since a few other people usually end up using your pastebin it is a nice property :)
<sphalerite> andi-: but the server serves the code for decrypting it…
<andi-> yeah I know
<andi-> it isn't ideal :/
<andi-> sphalerite: I trust the server since I host it.. that was the important aspect for me. I do not want to leak any kind of data anywhere..
<andi-> ofc if the server is being tampered with you can always tell the client to post back the content..
<infinisil> Here's my recently created paste nixos module if anybody is interested, very simplistic https://github.com/Infinisil/system/blob/master/config/new-modules/paste.nix
lassulus has quit [Quit: WeeChat 2.2]
<aminechikhaoui> infinisil hms what's needed to host it, a fairly large disk ?
<infinisil> aminechikhaoui: Nothing!
<aminechikhaoui> :o
<infinisil> Notes: The `mine.subdomains` option sets up the subdomain on my DNS server; you might need to adjust this for your own DNS (or not use a separate subdomain at all). To use it: `echo foo | ssh <server> pst` -> outputs a link to the paste
<infinisil> Or `cat foo.png | ssh <server> pst png` -> creates paste with png extension
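(A minimal sketch of what such a paste-over-ssh setup could look like as a NixOS module. The `pst` name matches the usage above, but the domain, paths, and `writeShellScriptBin` wiring are illustrative assumptions, not infinisil's actual module:)

    { config, pkgs, ... }:

    {
      # Serve the paste directory over HTTP (hypothetical domain)
      services.nginx.enable = true;
      services.nginx.virtualHosts."paste.example.com".root = "/var/www/paste";
      systemd.tmpfiles.rules = [ "d /var/www/paste 0755 root root -" ];

      # `pst` reads stdin, stores it under a hard-to-predict name, and
      # prints the resulting URL; an optional first argument becomes the
      # file extension (e.g. `pst png`).
      environment.systemPackages = [
        (pkgs.writeShellScriptBin "pst" ''
          ext=''${1:+.$1}
          file=$(mktemp "/var/www/paste/XXXXXXXXXXXX$ext")
          cat > "$file"
          chmod a+r "$file"
          echo "https://paste.example.com/''${file##*/}"
        '')
      ];
    }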
lassulus has joined #nixos-chat
<aminechikhaoui> infinisil alright, I might host that :) I'll play with it for a bit and see if my little server can host it
<infinisil> :D
<sphalerite> infinisil: ewwww unnecessary cat :p
<infinisil> sphalerite: It's not unnecessary!
<gchristensen> ssh <server> pst png < foo.png :)
<infinisil> Ahhh
<sphalerite> infinisil: <foo.png ssh server pst png
<infinisil> Alright ya got me
<gchristensen> but who cares :)
<infinisil> Didn't know you could put the <foo.png before the command :O
<sphalerite> the poor shell, has to fork itself an extra time
drakonis_ has joined #nixos-chat
<gchristensen> sphalerite: it is good exercise
<sphalerite> yeah you can put a redirection anywhere in the command. Even awkward stuff like `ssh server pst <foo.png png`
<infinisil> Can this be called a sublinear micro-fork-bomb
<sphalerite> lol
<sphalerite> SLMFB please
<infinisil> Ah yes
<sphalerite> we don't want to be too clear and understandable
<infinisil> The `mine.subdomains` option makes me want to write a proper DNS module
drakonis has quit [Ping timeout: 244 seconds]
<infinisil> Like a *proper* one, that can handle everything
<infinisil> Okay, the sequential ids on changes might be a bit hard..
<gchristensen> make that part impure
<infinisil> builtins.currentTime? :P
<gchristensen> or rather, an implementation detail of the application framework
<gchristensen> probably don't want to have your config change once a second
<infinisil> Ah yeah, should only change if changes actually happen
<infinisil> Maybe just a number stored in `/var/lib/dns/id`, then use some sed to replace a @NUMBER@ in the generated file with id+1 and increment the number in the file
* sphalerite just realised he completely forgot to increment the serial on the last few DNS changes :|
<gchristensen> infinisil: it doesn't have to increment monotonically
<infinisil> I think it's really only important if you have slave DNS servers
<infinisil> But still good practice
<infinisil> gchristensen: Oh
<gchristensen> it can just be a timestamp or datestamp :) that is fine
<infinisil> Well then I could just hash the file contents deterministically and pure
<sphalerite> gchristensen: huh? I thought it does. Hence the common practice of using a date
<sphalerite> gchristensen: oh so you mean it doesn't need to increment by 1 each time
<infinisil> Yeah I also had the idea that it should increase monotonically
<gchristensen> oops, right, sorry
<infinisil> Ah, so hashing doesn't work
<infinisil> Hehehehehe
<infinisil> runCommand "dnsconfig" {} "date > $out"
<gchristensen> lol
<infinisil> Perfect
<infinisil> pure but impure
<gchristensen> pure minus that impure part
<infinisil> pure in spirit
<gchristensen> if you consider time to be like the io monad
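(A rough sketch combining the `/var/lib/dns/id` counter and @NUMBER@ substitution discussed above; the zone contents, paths, and activation-script name are assumptions for illustration, and a real version would bump the counter only when the zone actually changed:)

    { config, pkgs, ... }:

    let
      # Hypothetical generated zone file; @NUMBER@ marks where the SOA
      # serial goes, and the other four SOA numbers get names here
      zoneTemplate = pkgs.writeText "example.com.zone" ''
        @ IN SOA ns1.example.com. hostmaster.example.com. (
          @NUMBER@ ; serial
          7200     ; refresh
          3600     ; retry
          1209600  ; expire
          3600 )   ; negative-response TTL
      '';
    in
    {
      # Read the previous serial, increment it, persist it, and
      # substitute it into the generated zone file
      system.activationScripts.dnsSerial = ''
        mkdir -p /var/lib/dns
        old=$(cat /var/lib/dns/id 2>/dev/null || echo 0)
        new=$((old + 1))
        echo "$new" > /var/lib/dns/id
        sed "s/@NUMBER@/$new/" ${zoneTemplate} > /var/lib/dns/example.com.zone
      '';
    }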
<infinisil> Okay so that problem is solved. The only major problem left (I think) would be how to encode all possible records
<infinisil> Okay not really major
<infinisil> Maybe just a bunch of functions to use like `records = a "sub" { ip = "1.1.1.1"; }`
<infinisil> Maybe just a bunch of functions to use like `records = [ (a "sub" { ip = "1.1.1.1"; }) (aaaa "sub" { ip = "::"; })]`
<infinisil> Btw, is this zone file syntax universal for all DNS servers?
<gchristensen> [ (a "sub" { ip = "1.1.1.1"; }) (a "sub" { ip = "2.2.2.2"; }) ]
<gchristensen> "all" is a
<gchristensen> "no", but it is quite universl
<infinisil> gchristensen: With that code you mean that it should prevent this situation?
<gchristensen> no, it should allow that situation
<infinisil> Ahh, right, multiple ips for automatic simple load balancing
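(A tiny sketch of such record helpers; `a`, `aaaa`, and `toZone` are invented names for illustration, and the expression can be checked with `nix-instantiate --eval`:)

    let
      # Each helper builds an attrset describing one record
      a    = name: { ip }: { inherit name ip; type = "A"; };
      aaaa = name: { ip }: { inherit name ip; type = "AAAA"; };

      # Render the record list as zone-file lines
      toZone = records: builtins.concatStringsSep "\n"
        (map (r: "${r.name} IN ${r.type} ${r.ip}") records);
    in toZone [
      (a "sub" { ip = "1.1.1.1"; })
      (a "sub" { ip = "2.2.2.2"; })  # two A records: simple load balancing
      (aaaa "sub" { ip = "::"; })
    ]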
<infinisil> Also, doing this in Nix provides a great opportunity to give names and documentation to the 5 numbers in SOA records
drakonis has joined #nixos-chat
<andi-> Has anyone played around with https://sr.ht yet? It sounds like an interesting approach that could finally work without creating accounts all over the place to submit a patch.
<simpson> I'm not willing to give the author enough credit to give their system a try. It looks like a thing, for sure.
ninjin has quit [Quit: WeeChat 2.2]
<samueldr> (I made an account to make sure my name wasn't taken... hint hint nudge nudge)
<samueldr> took a superficial look, but at the time I registered they were running into slowness due to the publicity
drakonis_ has quit [Ping timeout: 252 seconds]
<samueldr> the fact it's AGPL is probably good
<andi-> I just have hopes that at some point we might get away from the github (and more recently gitlab) "monoculture". It is nice from the user-experience perspective but not really great in many other ways..
<simpson> Is it a monoculture or isn't it? You mention multiple providers in the same utterance.
<infinisil> Federated systems++
<simpson> Meanwhile people do use things like Fossil and Chisel, to say nothing of the various proprietary source-control systems.
<samueldr> in my opinion, giving away the home of your project to another provider's host is the main issue :/
<samueldr> if e.g. github allowed me to setup git.samueldr.com as a fully-featured URL I wouldn't mind as much
<infinisil> samueldr: Doesn't it allow that for github pages?
<samueldr> but the fact that my misc. projects are advertised with https://github.com/ in the forefront is problematic
<samueldr> infinisil: for websites, but not code, issues, etc
<infinisil> Yeah
aminechikhaoui has quit [Quit: The Lounge - https://thelounge.github.io]
<infinisil> Imagine everybody getting a domain for free at birth that they can keep forever
aminechikhaoui has joined #nixos-chat
<mdash> samueldr: mmm, what's the problem with that?
<samueldr> can't move the project without myriads links now being broken
<mdash> mm, true
<mdash> yes, the web is poorly thought out in that regard
<samueldr> it isn't really... services are
<simpson> samueldr: Hm. By that argument, isn't the problem that people can choose domain names? If domain names were all long and unmemorable, then there would be no benefit in sharecropping on somebody else's domain.
<mdash> samueldr: why do links point to machines instead of content?
<samueldr> bitbucket, if they don't still allow it, once allowed setting up such URLs
<samueldr> if they were long and unmemorable, alternative systems would come along making them memorable, no?
<infinisil> That's a good argument for IPFS: being able to move content freely while URLs (or hashes) stay the same
<samueldr> mdash: yeah, kinda on point, but things don't really work that way (yet)
<simpson> Yeah, but that's okay. There's a world of difference between *you* choosing local bookmark names, and *the domain owner* choosing the names for you.
<mdash> infinisil: exactly
<simpson> infinisil: Yep. If only IPFS weren't shitty, slow, cap-unsafe, and censorable.
<samueldr> simpson: for finding things out, I mean, we would have e.g. AOL keywords for those unmemorable strings, centralized
<simpson> At least it's gotten peoples' minds churning.
<mdash> censorability not all that bad overall
<simpson> samueldr: Nope, discoverability and enumerability go out the window.
<infinisil> simpson: It sure is rather slow.. I hope that improves. Shitty? I'd consider it quite good in concept. What's capsafe? Yeah, censorable is a bit of a problem, but I think that's workaroundable.
<ldlework> >> "What's capsafe?"
<ldlework> oh boy here we go
<gchristensen> buckle up
<gchristensen> Capability-Safe
<simpson> infinisil: Compare and contrast with systems like Tahoe-LAFS, where the long unguessable strings are the *only* way to access data.
<simpson> In IPFS, intermediate nodes not under the user's control may learn things about what the user is viewing and requesting.
<simpson> This happens in other distributed systems that have lots of hype, like Peertube.
<mdash> there's really only two stable states possible for a content distribution platform, other than "so obscure nobody notices/uses it": #1, capable of filtering out illegal content or #2, illegal because it can't filter out illegal content
<infinisil> simpson: Hmm I see, but isn't that ultimately what makes IPFS tick?
<infinisil> Or can this be worked around?
<gchristensen> mdash: I guess that explains why bitcoin has been a failure
<simpson> I would say that it cannot be worked around. You need *provider-independent* security properties, where it doesn't matter who is providing the service.
<mdash> gchristensen: bitcoin isn't exactly a content distribution platform, despite various stunts
<simpson> IPFS explicitly says that it *does* matter who is providing storage.
<simpson> (This has nothing to do with the CAP Theorem, BTW. "cap" here is short for "capability", as in "capability security")
aminechikhaoui has quit [Quit: The Lounge - https://thelounge.github.io]
<infinisil> simpson: How does IPFS say that the provider does matter?
aminechikhaoui has joined #nixos-chat
<simpson> infinisil: Storage providers can do stuff to what they've stored: https://github.com/ipfs/faq/issues/18
<{^_^}> ipfs/faq#18 (by whyrusleeping, 3 years ago, closed): How does ipfs compare to freenet? (WIP)
<simpson> Or https://github.com/ipfs/faq/issues/10 providers can look through your data
<{^_^}> ipfs/faq#10 (by whyrusleeping, 3 years ago, closed): If I'm operating a node, can I browse the files that my node is hosting?
<gchristensen> simpson: what kind of "stuff" can they do?
<simpson> gchristensen: AIUI a node can: inspect any request routed through that node, censor individual files/nodes/etc., keep tabs on what's been requested.
<gchristensen> ah, yes
<simpson> Not groundbreaking, but not *that* much of an improvement, if any, on existing HTTPS static servers.
<infinisil> > "Storage providers can do stuff to what they've stored" What stuff can they do to the stored things? Certainly not modify, only read
<{^_^}> error: syntax error, unexpected ID, expecting ')', at (string):206:113
<infinisil> {^_^}: Shhh
<simpson> infinisil: Reading is still game-over.
<mdash> simpson: for some threat models
<infinisil> IPFS doesn't have any encryption guarantees; if you want encryption, build something on top of IPFS
<mdash> ipfs sure not gonna replace tahoe soon but it'd be a nice replacement for npm, pypi, etc
<simpson> Tahoe-LAFS, for example, has "verify" caps, which don't even grant read access, because not every task needs them.
<simpson> infinisil: Right. My point is that IPFS, while it's content-addressed, is *not* cap-aware, but people are keen to conflate the two.
<simpson> Also admittedly I'm usually talking to people who are concerned that Google is reading their email.
<__monty__> mdash: Dhall actually hosts its stdlib on ipfs.
<__monty__> It also allows url imports.
<mdash> __monty__: bold
<infinisil> simpson: Okay so what if we put an encryption layer on top of IPFS. Nodes won't be able to read the content anymore. Is that still not cap-aware?
<simpson> infinisil: That's a big improvement, but even better would be giving every IPFS client encryption by default. In fact, let's just make encryption mandatory.
<simpson> Literally *the* argument in the IPFS community against this is that they think that encryption is slow. For that meme, they've sold you on a shit system.
<__monty__> More relevant imo is that it prevents deduplication.
<simpson> infinisil: Also, if you can't hold a cap, it's not cap-aware, as a general rule. Even people who *claim* to read capability-theory stuff are usually not actually being strict enough.
<__monty__> Imagine having to store the same meme a gazillion times.
<gchristensen> __monty__: it'd be almost as bad as reading Twitter
<simpson> __monty__: Sure. Tahoe-LAFS doesn't have dedup, because it would enable file discovery and search. The estimated cost, according to one Tahoe-LAFS group, was about 1% extra disk usage.
<infinisil> simpson: I'm just thinking of IPFS as a base layer protocol, like IP, which isn't encrypted. You need something on top for that
<simpson> infinisil: Okay, but it's not good enough to use as a base layer.
<infinisil> simpson: Hold on, what does "holding a cap" mean?
<simpson> To put things another way, encryption hasn't worked like that for a while. Folks are into AEAD.
<simpson> infinisil: A cap here is literally a hard-to-guess long string.
<__monty__> simpson: I'm pretty skeptical about that estimate. I'd imagine people using ipfs to share silly cat videos for example. That'll easily make up for 99% of a difference with/without dedup. Joking ofc, but 1% sounds ridiculous to me.
<infinisil> I guess I'll have to look up capability-theory at some point, because I still have no idea what you mean by holding a cap :P
<simpson> __monty__: Oh, I'm sorry, it was more reasonable, 0.1% https://tahoe-lafs.org/hacktahoelafs/drew_perttula.html
<simpson> infinisil: Do you know what, say, a password-reset URL is? It's long, and hard to guess, and having it is equivalent to being able to reset your password.
<infinisil> Yup
<gchristensen> infinisil: I send you an encrypted and signed blob, you can't do anything with it. if I send you the public key (a "verify" cap) you can verify its contents
<gchristensen> infinisil: if I send you the encryption key, you can read its contents -- those keys are your "cap" and you "hold" them by posessing them
<infinisil> Oh, so it's literally a capability, being able to do something specific
<gchristensen> yeah
<mdash> simpson: i think the ipfs and tahoe use cases are mostly disjoint
<simpson> Yes. Now the challenge is to find all the uses of "capability" which don't follow this, like e.g. Linux kernel capabilities.
<mdash> and there's ipfs use cases that would benefit more heavily from dedup
<simpson> mdash: Maybe? I don't understand what IPFS does better than HTTPS. Or than Gopher, for that matter, aside from some better inline-data hygiene.
drakonis has quit [Ping timeout: 268 seconds]
<mdash> simpson: host-independent URLs
<mdash> URNs, if you like
<infinisil> Yeah ^^
<simpson> mdash: Except that everybody's gonna just use ipfs.io. Getting this kind of magic independence working surely requires *at least* as much of a permanent backbone as DNS or other systems that let you magically call out into the void, right?
<infinisil> And CDN
<simpson> CDNs are already a thing. You get what you pay for with IPFS; it's free but slow, and you have to spin up your own machines in order to make it faster.
<mdash> this seems reasonable to me
<mdash> is there any architectural reason IPFS has to be slower than bittorrent?
<ldlework> bitchute is using bittorrent to serve video
<simpson> It often feels to me like distributed-web folks usually assume that disk is free.
<gchristensen> wowwwww slower than bittorrent
<infinisil> simpson: If everybody were to use IPFS, it gets faster automatically the more people start hosting it
<ldlework> is bittorrent slow?
<gchristensen> soslow
<gchristensen> (especially on small files)
<ldlework> bittorrent is the mechanism by which i max out my bandwidth
<ldlework> the only one
<gchristensen> if you have a few big files it is fast, but many small individual torrents -- oof
<ldlework> you don't need a torrent for every file
<ldlework> and given a torrent you don't need to download every file
<ldlework> i've downloaded torrents with many small files and they seem to go as fast as big movies
<ldlework> i think it has to do with peer availability tbh
<gchristensen> finding and establishing peers
<ldlework> you are not going to notice the difference between downloading tons of images and downloading a movie, given the access to peers
<infinisil> gchristensen: So you think a tar of all files is much much faster than all files on their own?
<simpson> infinisil: I've heard that many times, but it's not clear *how* that would happen. More peers does mean more *chances* to find fast peers, but it also means more traffic, which could mean fewer fast peers available.
<__monty__> Yeah, I regularly get small torrents in a matter of seconds.
<gchristensen> yeah, I regularly download small files over http in a matter of miliseconds
<ldlework> are these margins we really care about in this context?
<gchristensen> what context are we talking about?
<infinisil> simpson: Having 1000 slow peers (in bandwidth) can be faster than 5 fast peers
<simpson> The way that Bittorrent manages to make it work is by putting a *ceiling* on how much data will be moved between each peer in the swarm. IPFS doesn't have that.
<ldlework> ipfs and why bit* isn't good enough
<gchristensen> given my context, it matters a lot
<mdash> simpson: IMO the point of these distributed-web technologies is not to compete with netflix/google/etc CDNs on performance merits, but to provide an extant alternative when their use becomes untenable
<ldlework> huh?
<gchristensen> (which is, I want to serve / download many many small files)
<ldlework> so a startup of a couple seconds vs milliseconds breaks your usecase?
<gchristensen> absolutely
<simpson> mdash: In that case, I have nothing nice to say. I've peeked around many of these communities, and very rarely do people understand that security is not optional in 2018.
<ldlework> intense.
<gchristensen> a couple seconds * 1,000,000
<mdash> but yeah like you said, atleast it's got people thinking
<ldlework> you don't need a different torrent or peer network for each file
<ldlework> this is a fallacy
<gchristensen> ldlework: I have millions of small files, and the list of files changes regularly. is that something a single torrent file can sustain?
<ldlework> do you need multiple torrents at any given time, or a single torrent for a given list of files?
<ldlework> if the files change, you can generate a new torrent on all the peers and it's as if there is no new torrent
<gchristensen> the point of the service is for the list of files changes over time
<ldlework> and how does a few seconds of spin up time for accessing the current list break any use-case of that list?
<gchristensen> well the use case is latency-sensitive
<ldlework> what is the use-case?
<gchristensen> a Nix binary cache :)
<ldlework> where's the absolute necessity to avoid a few seconds against longer download times?
<gchristensen> for every drv, Nix checks the cache to see if it is available in the cache -- and there can often be many thousands of drvs -- so generating 404s quickly is important
<simpson> If only Nix could batch those. But then that'd mean speaking something cleverer than HTTP.
<ldlework> sure but for this you just need the metadata of the torrent
<ldlework> and this probably presumes you have dedicated peers
<gchristensen> this proposed metadata file would be many megabytes big
<ldlework> sure, but it's not like bittorrent is using the distributed peer network to ship torrent metadata
<gchristensen> yes, but wouldn't it be a bit annoying to have to fetch many megabytes of data on every call to Nix?
<joepie91> gchristensen: fast negatives are a big problem in decentralized systems
<joepie91> not sure anybody really has a solution to that, other than preventatively collecting metadata about file existence and efficiently storing it locally
<joepie91> eg. as a bloom filter
<ldlework> you mean the first time, until the check for the latest torrent doesn't reflect the one you already have
jasongrossman has quit [Quit: ERC (IRC client for Emacs 26.1)]
<gchristensen> ldlework: new files are added roughly once a second
<ldlework> yeah but my nix system doesn't utilize the newest things in the nix cache every second
<ldlework> for a given drv are new binary caches being updated continuously? if so what's the point of that?
<gchristensen> what do you mean?
<ldlework> like if I install some package at some version
<ldlework> why do i care about other versions coming in with new binary caches, etc
<joepie91> gchristensen: btw, are you thinking of a centrally managed distributed cache, or a fully decentralized network?
<ldlework> the amount of binary cache data updated that is relevant to the packages i actually have installed is probably 0 per second
<simpson> ldlework: User story time. How do you want $(nix-build) to behave when you're building a package with a tall dependency tree?
<ldlework> i don't know enough about nix-build
<simpson> Well, whatever Nix tool you use a lot. That feeling when Nix is reaching out to the binary cache, how *should* Nix be spending its time?
<ldlework> you can also mix approaches
<ldlework> when i nix install something, it installs from a given nixpkgs; if I haven't downloaded the cache manifest for the derivations in that version of nixpkgs, i download it
<gchristensen> joepie91: really, what I'd like to do, is have a backup of the entire cache.nixos.org binary cache, but I can't afford it :)
<joepie91> gchristensen: how big is it?
<gchristensen> ~100-120tb
<joepie91> oof
<joepie91> is that just the most recent X builds or everything since the dawn of time?
<ldlework> i don't really know enough about how it works
<mdash> need a premium nixos subscription that just mails you a couple hard drives every six months
<gchristensen> joepie91: essentially, since the dawn of time
<simpson> ldlework: It's conceptually pretty simple right now. Your Nix reaches out to the binary cache and checks for each missing derivation, by hash.
<infinisil> ldlework: The thing is, if the root derivation you're trying to build is in the cache, it can just fetch it from the cache. But if it's not, it needs to build it, which means it needs to first build all the dependencies.
<ldlework> simpson: i feel like that's what nixpkgs commits are for
<infinisil> So it needs to query the cache for each of those dependencies, download them if present, and fetch *their* dependencies if not
<ldlework> given a nixpkg commit, all the derivations inside are known
<ldlework> you don't have to treat them independently
<simpson> ldlework: That would be an interesting system, where checking things into nixpkgs requires first prebuilding and updating an index.
<simpson> However, it'd *also* require 100% deterministic builds, *and* you'd still have to handle every overlay and external Nix expression in a meaningful way. (Not that I know anything about overlays.)
<ldlework> it doesn't handle overlays, but the number of packages used in overlays is typically small, so you can treat those specially or something
<ldlework> i guess my point is that with dedicated peers, the bittorrent spin-up time before you're actually downloading bytes is insignificant
<ldlework> it's probably not the thing for nix binary cache but i don't think start time is the problem. how bittorrent indexes information is probably the actual problem in that usecase.
<ldlework> (which manifests as startup time, but not because you're looking for peers)
<gchristensen> so anyway if anyone wants to mail me big hard drives, lmk
<joepie91> gchristensen: is there a specific reason you want to back up -all- builds, not just the ones that people actually still use?
<gchristensen> joepie91: I think they're really fascinating and have archeological value :) like running a bisect across the last 5 years of changes
<joepie91> gchristensen: hmm. wouldn't it be possible in principle to just recreate those builds, though?
<joepie91> given the determinism of the build process etc.
<ottidmes> infinisil: I don't really use REPLs, I am still stuck in my old PHP thinking; in GHCi all I do is :r, so I use ghcid these days
<joepie91> gchristensen: also, for what it's worth, archive.org estimates the storage cost of data at about $1k/TB for 'the foreseeable future'
<gchristensen> joepie91: actually, the binary cache also has sources, many of which might no longer be available
<joepie91> if I recall correctly
<samueldr> ++ even just the sources are of value
<joepie91> I would have to check that...
<joepie91> gchristensen: right, maybe just replicate the sources then?
<infinisil> ottidmes: How I sometimes use ghci: `nix-shell -p 'ghcWithPackages (p: [ p.stuff-i-need ])' --run ghci`, then read some file or so with `contents <- readFile "foo"`, and process it with maybe some packages, sorts, filters, maps, etc.
<joepie91> gchristensen: archive.org would probably be happy to host those actually
<gchristensen> joepie91: tough to do... though maybe not impossible :)
<joepie91> (the sources)
<gchristensen> I wonder if the narinfo format can tell me if it was a fixed output
<ldlework> infinisil: you could probably rewrite my static content generator in haskell in a weekend :)
<joepie91> gchristensen: hold on, let me inquire
<ottidmes> infinisil: I would then make a Haskell file first and then define functions in them that do what I want, so I never define anything myself in ghci
<infinisil> ldlework: Heh, not so sure on that, what's it do?
<gchristensen> hmm! might not be so bad, by looking at narinfo files with no References
<ldlework> infinisil: you basically describe, for each content-type you define, where data comes from, how it is transformed, and where it goes out to. That's it. Then there is a library of processors that do transformations useful for building, say, a blog or gallery, etc. The same system is used for markdown->html, minified js, less->css, thumbnail images, etc
<ldlework> basically just a programmable content transformation pipeline
<ldlework> its like 100 lines in python
<ldlework> the core is
<ldlework> here's a partial config showing tasks for building css and js, https://gist.github.com/dustinlacewell/41ebd7bb86f85050c607ae8200739a56
<ldlework> actually, here's my whole config for my whole website in progress, https://gist.github.com/dustinlacewell/443e5253171a61c5a7b40da65363625d
jasongrossman has joined #nixos-chat
ninjin has joined #nixos-chat
<infinisil> ldlework: Looking pretty neat!
<ldlework> infinisil: don't you think haskell could model that kind of thing well
<ldlework> nixlang could too
<infinisil> Haha nix
<infinisil> ldlework: Yeah, haskell is probably great at this stuff, don't feel like looking into how it would work for now though :)
<joepie91> that looks a bit Gulp-y :P
<ldlework> Yeah it is closer to gulp than like hugo or jekyll
<ldlework> a few people have built static site generators ontop of gulp too
<joepie91> probably for the better to be honest
<joepie91> given my experience with jekyll-style generators
<ldlework> i like hugo and jekyll but i always want to organize or associate my data in ways i just can't express with their APIs
<ldlework> Blot, on the other hand, doesn't even know what content is. It's up to you to invent it.
<infinisil> ldlework: There is hakyll, a haskell static site generator
<ldlework> infinisil: yeah but is it like jekyll or like gulp?
* infinisil has no idea
<infinisil> Never used any of them
<joepie91> that's kind of my main complaint about stuff like jekyll
<joepie91> it has a really specifically-defined idea about what a content processing pipeline should look like
<joepie91> and if you disagree with that idea, well, tough luck
<ldlework> exactly
<infinisil> Okay, problem: I'm doing a presentation for university, and I had to meet with 2 assistants for them to give me feedback
<infinisil> Now one of them actually never gave any feedback
<infinisil> Should I still put their name in the slide (as a thanks?)
<infinisil> Wait, I'll just not put any names in the slides
<infinisil> It's not mandatory or so
<samueldr> a thanks slide and a shame slide? (burning bridges is good, right?)
<__monty__> Yeah, my pov is what harm does thanking too many people do? Especially on something as ephemeral as a presentation.
<infinisil> __monty__: I'd just kinda feel uncomfortable, thanking somebody that all three of us know didn't do anything
<gchristensen> add them anyway, they'll know and feel the shame maybe?
<__monty__> infinisil: Put the name on the slide and don't mention them out loud?
<infinisil> I mean, for the first meeting they just didn't give any feedback and sat there silently, but for the second meeting I just got an email from him saying he was sick, sooo, not sure
<infinisil> Ehh, I'll just not put them in the slides
<gchristensen> you say you *had* to meet with them?
<infinisil> gchristensen: Yup
<gchristensen> I think you _have_ to add their name then
hark has joined #nixos-chat
<gchristensen> and in that case I like __monty__'s suggestion
jasongrossman has quit [Ping timeout: 264 seconds]
<infinisil> Well, of the presentations the others have done so far, only about half included their names
<infinisil> I guess a slide with their names wouldn't hurt though
<gchristensen> ah ok
<ldlework> joepie91: you might like metalsmith
jackdk has joined #nixos-chat
<gchristensen> anyone want to let me borrow part of their disk?
<samueldr> would a byte do?
<gchristensen> a bit
qyliss^work_ has joined #nixos-chat
lopsided98_ has joined #nixos-chat
ma27_ has joined #nixos-chat
ma27 has quit [Ping timeout: 268 seconds]
pita has quit [Ping timeout: 268 seconds]
obadz has quit [Ping timeout: 268 seconds]
qyliss^work has quit [Ping timeout: 268 seconds]
tilpner has quit [Ping timeout: 268 seconds]
lopsided98 has quit [Ping timeout: 268 seconds]
pita has joined #nixos-chat
qyliss^work_ is now known as qyliss^work
__monty__ has quit [Quit: leaving]
obadz has joined #nixos-chat
tilpner has joined #nixos-chat
<infinisil> gchristensen: What for?