<samueldr>
ugh! github not allowing multiple forks of the same repo under a same user/org is annoying me
<abathur>
trudat
<bqv>
samueldr: why would you want that
<samueldr>
one fork from the upstream repo per device for each vendor kernel, under one organization
<bqv>
Also god I just got carpetbombed by the letsencrypt email expiry bot
<bqv>
samueldr: why not just branches?
<samueldr>
because the experience of branches sucks on github
<samueldr>
they don't get indexed
<bqv>
:/
<drakonis>
github is a terrible place
<samueldr>
I used branches to work around the issue
<samueldr>
but also, branches will be extra-messy since I want a "main branch" and one branch per source release
<samueldr>
so now it's likely to accumulate much more than desirable, and be less manageable :(
<samueldr>
another good reason
<samueldr>
let's say I make a project
<samueldr>
dunno
<samueldr>
let's call it mobile-nixos
<samueldr>
and I'm working on it
<samueldr>
under my own username
<samueldr>
now everyone who forks it gets my branches in their repo
<drakonis>
ohoho a real life example
<samueldr>
those branches are extremely stale!
<samueldr>
so before it got adopted in the org, I went and made myself a WIP organization, whose sole use is to work around that issue
<samueldr>
because it's messy
<samueldr>
there is no good reason to artificially limit forks
<samueldr>
if it's about that counter
<samueldr>
count one fork per user
<samueldr>
there, I fixed it
<samueldr>
I want my gold star
<samueldr>
oh, another reason there, if I wanted to actively maintain the repo, I would have issues open, but those issues would be meaningless: which branch does the issue apply to??
<samueldr>
(I don't want to *maintain* old kernel branches, eww)
<abathur>
is there a specific reason it needs to be a fork and not N local repos cloned from one source and pushed to N new repos under the org?
<abathur>
lots of upstream/downstream?
<samueldr>
abathur: one of them is the "magic" where pushing something on a fork only uploads the delta
<samueldr>
I initially went with a non-fork repo, and the upload would still be going if I kept with it
<abathur>
nod, figured
<abathur>
I wonder if it's structural or just a thin restriction they could lift, and would feel incentivized to if they thought the alternative was you uploading N separate repos... :]
<abathur>
can you make it a template repo and generate new ones from it? not sure how that works in practice
<abathur>
I guess you lose history? not sure if that matters in your case?
<samueldr>
I don't know
<samueldr>
and anyway I'd have to ask linus torvalds
<bqv>
Huh, I just recognized a caller from another radio station I like, on the one I'm currently listening to
<bqv>
That feels weird
<abathur>
samueldr: you can turn a fork into a template I think
<samueldr>
>> A new fork includes the entire commit history of the parent repository, while a repository created from a template starts with a single commit.
<samueldr>
wouldn't work
<samueldr>
since the main reason behind using a fork is to use the automatic resolving of the existing objects
<abathur>
k
<ashkitten>
you can always just make a new repo and push the original one to it
<ashkitten>
but github won't know it's related to the original repo
<ashkitten>
is gitlab or gitea better, by the way?
<drakonis>
hmm
<drakonis>
gitlab is an improvement in some ways
<ashkitten>
gitlab's ui feels very cramped and messy imo, but idk if it's better or worse than github
<samueldr>
gitea is probably better for self-hosting a small instance
<samueldr>
gitlab has more features, so I guess it depends on the features you need
<lovesegfault>
I'll poke andi- about it tomorrow at work :P
<lovesegfault>
samueldr!!
<lovesegfault>
samueldr++
<{^_^}>
samueldr's karma got increased to 244
<lovesegfault>
lol
<lovesegfault>
sorry for the first one
<sphalerite>
blargh, nixos uses a different order for SSH host key preference than ubuntu by default, so even though I copied the host keys when moving from ubuntu to nixos, SSH complains that the key changed :(
<lovesegfault>
I gotta garbage collect some day: [105464 paths optimised, 9977.7 MiB / 447939 inodes freed]
<ar>
sphalerite: from what i can see (on my nixos boxen), nixos uses:
<ar>
HostKey /etc/ssh/ssh_host_rsa_key
<ar>
HostKey /etc/ssh/ssh_host_ed25519_key
<ar>
there's also the services.openssh.hostKeys option
<ar>
sphalerite: the message you're getting might be coming from ubuntu having ecdsa keys enabled
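<abathur>
for reference, the services.openssh.hostKeys option ar mentions controls exactly this; the NixOS default looks roughly like the following (note: no ECDSA entry):

```nix
# Roughly the NixOS default for services.openssh.hostKeys —
# only RSA and ed25519 host keys are generated and offered:
services.openssh.hostKeys = [
  { type = "rsa"; bits = 4096; path = "/etc/ssh/ssh_host_rsa_key"; }
  { type = "ed25519"; path = "/etc/ssh/ssh_host_ed25519_key"; }
];
```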
<sphalerite>
ooh it doesn't use ECDSA at all? I see. Still, it has the same ed25519 host key it had with ubuntu :( ah well, not the end of the world
<colemickens>
cache.nixos.org doesn't redirect to https? (which might explain why it doesn't show the pubkey) ?
<ar>
sphalerite: not by default
<gchristensen>
colemickens: we don't think it is necessary to ban http access
<eyJhb>
gchristensen: is there ANY reason to not just use HTTP? The content is verified on download, right gchristensen?
<gchristensen>
if you're in the EU you might not want your government to know you're fetching hacking tools
<eyJhb>
I would more have thought the US, but that is a valid point. But besides privacy regarding what you are fetching?
<eyJhb>
HTTP performs better, as far as I know because there is less overhead
<gchristensen>
I think you're asking if cache.nixos.org has any unverified fetches, and there aren't any :)
<gchristensen>
so as long as you're comfortable being watched, there isn't a chance for manipulation
<gchristensen>
this was a whole debate around apt-get supporting HTTPS, I bet there were a lot of good threads on this
<Arahael>
eyJhb: Not entirely true, https can perform better, too, especially for multiple fetches.
<Arahael>
But it's trivial to proxy and cache http, and that can be an advantage.
<eyJhb>
Arahael: but isn't that in most cases regarding HTTP/2+ ?
<eyJhb>
I would love a local NixOS cache at AAU
<eyJhb>
But I don't think they will donate 200 TB space
<Arahael>
eyJhb: HTTP/2+ and http are entirely different things.
<JJJollyjim>
gchristensen: i feel like the nix situation is a lot more messy
<JJJollyjim>
than apt
<eyJhb>
Arahael: I know, but in regards to HTTPS performing better, I would normally assume that it is in conjunction with HTTP/2 or higher
<JJJollyjim>
since there's a massive risk of sending a hash of a secret to the cache
<JJJollyjim>
so no, im not comfortable being watched, while i am for apt
<eyJhb>
And not HTTP/1.x + HTTPS that performs well
<JJJollyjim>
"don't put secrets in the store" is important, but it's also a lot of work and i imagine a lot of people flout it
<sphalerite>
JJJollyjim: wait, does nix support uploading to an HTTP cache?
<JJJollyjim>
Don't think so, but you ask the cache for all the things you're gonna build
<JJJollyjim>
So if I'm about to build a config file, I request a narinfo for it based on the hash
<JJJollyjim>
And if that has a secret, things get messy
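<abathur>
concretely, the lookup JJJollyjim describes is keyed on the hash part of the store path alone: before building, Nix asks the cache for `<hash>.narinfo`. A sketch (the store path below is made up):

```python
# A (made-up) store path for a config file that might contain a secret:
store_path = "/nix/store/0cd5bc6rjwvx3vsfcbamszy7gx1wbvv3-wpa-config"

# Nix queries the binary cache using only the 32-char hash prefix:
hash_part = store_path.removeprefix("/nix/store/").split("-", 1)[0]
url = f"https://cache.nixos.org/{hash_part}.narinfo"
print(url)
# https://cache.nixos.org/0cd5bc6rjwvx3vsfcbamszy7gx1wbvv3.narinfo
```

so an eavesdropper on plain HTTP sees every such hash you ask about, even for paths that were never uploaded anywhere.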
<JJJollyjim>
I could be misunderstanding the situation here
<JJJollyjim>
But imo the status quo is pretty scary
<JJJollyjim>
and i feel like advancing rfc59 is probably the most important thing for the security of nixos systems
<JJJollyjim>
Because not having a consistent secret storage approach sucks
<__monty__>
JJJollyjim: Are you worried about sha-256 being reversed?
<JJJollyjim>
Yes
<JJJollyjim>
(better secret storage is important for other reasons as well, but yeah)
<JJJollyjim>
A single graphics card calculates tens of billions of sha256s per second
<gchristensen>
JJJollyjim: great point
<gchristensen>
JJJollyjim: (probably) most of your config files don't try to substitute, fwiw
<__monty__>
Hmm, let's take a reasonable secret, a 256 bit ssh key. That's 1e77 possibilities, even at 100e9 shas/second with a million graphics cards that's still about 1e60 seconds. Brute-forcing is not something to worry about afaics. Especially for secrets you rotate with some frequency. (I'm not advocating for putting secrets in the store. Just questioning whether sending a hash to the HTTP cache is a
<__monty__>
serious worry.)
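<abathur>
__monty__'s arithmetic holds up; a quick sanity check using the same rough assumptions (100e9 hashes/s per GPU, a million GPUs):

```python
# Brute-force time for a full 256-bit key space:
key_space = 2 ** 256            # ~1.16e77 possibilities

hashes_per_gpu = 100e9          # ~1e11 SHA-256/s per high-end GPU (rough)
gpus = 1_000_000                # a million GPUs

seconds = key_space / (hashes_per_gpu * gpus)
print(f"{seconds:.1e} seconds")  # ~1.2e60 seconds
```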
<sphalerite>
I agree with __monty__ on this one. When it becomes feasible to reverse a sha256, we're going to have bigger problems than that. Additionally, the part where you have to identify _which_ hash was produced from data containing a secret is non-trivial and will also multiply the amount of effort required — if a hash produced from data containing the secret is even queried, as gchristensen
<sphalerite>
mentions.
<JJJollyjim>
gchristensen: ah nice, didn't know about that
<sphalerite>
Additionally, it's unlikely for it to be a hash of just the secret — typically it'll be a config file containing the secret, which you'd also have to guess.
<JJJollyjim>
monty: My wifi password isn't a 256-bit secret, but it's in networking.wireless.networks
<Arahael>
My wifi password is written on the fridge.
<Valodim>
some interesting context: there was the NIST hash function competition, which formally named Keccak as SHA-3
<JJJollyjim>
sphalerite: sha256 is absolutely reversible, for pretty much any human-memorable secret
<JJJollyjim>
do you mean allowSubstitutes=false?
<JJJollyjim>
If sha256 were not reversible, we wouldn't have things like pbkdf
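<abathur>
JJJollyjim's point, that a fast hash over a low-entropy input can simply be enumerated, is easy to sketch (the password and wordlist here are stand-ins for a real wordlist attack):

```python
import hashlib

# The attacker only sees this hash, and guesses it covers a
# human-chosen password:
target = hashlib.sha256(b"hunter2").hexdigest()

# A tiny stand-in for a real multi-billion-entry wordlist:
wordlist = [b"password", b"letmein", b"hunter2", b"correcthorse"]

recovered = next(
    (w for w in wordlist if hashlib.sha256(w).hexdigest() == target),
    None,
)
print(recovered)  # b'hunter2'
```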
<Valodim>
however, the practical result of all the research that went on was that SHA-2 is really pretty good
<sphalerite>
Valodim: lol
<JJJollyjim>
It's good as a hash function, but not as an irreversible key derivation function
<Valodim>
sha256 is a hash function, not a kdf
<JJJollyjim>
That's not what it tries to be
<JJJollyjim>
Yeah exactly
<JJJollyjim>
Which is why I'm saying sending it to the store is a concern
<Valodim>
those are not the same primitive, using them interchangably is incorrect and you are breaking cryptographic assumptions when you do
<JJJollyjim>
gchristensen: if my config file derivation is set up like that, the output hash will then go into whatever is next, right?
<JJJollyjim>
Like the system environment or the systemd unit file
<JJJollyjim>
And then that might get substituted
<Valodim>
the result of a similar competition for a standardized kdf is argon2, which is memory-hard and has pretty nice properties, but suffers a bit from portability issues
<gchristensen>
yeah the output hash will carry forward for sure
<__monty__>
JJJollyjim: Unless you put that key in a separate file how would it alone end up hashed?
<gchristensen>
JJJollyjim: yeah I mean you shouldn't put secrets in the store. no doubt about it
<__monty__>
Blake3 is supposed to fix the API issues with Blake2 and is supposedly usable as a KDF (argon2 is a kdf based on blake2).
<JJJollyjim>
It wouldn't, but you can try to reverse the hash of a standard config file format produced by e.g. a nixos module
<Valodim>
but saying that sha256 is reversible is too broad a statement - it is "reversible" given a lot of context (e.g. "it's a word in the dictionary", or "it's something a human would come up with")
<JJJollyjim>
Agreed, but "it's not reversible" is also too broad
<gchristensen>
JJJollyjim: are you NSA? :P
<Valodim>
but that's mostly an implication of "it's fast". no other aspect about sha256 itself makes it "reversible" by any means
<JJJollyjim>
yep
<gchristensen>
ohp
<Valodim>
in addition to being slow, argon2 is memory-hard, which basically means "it needs a lot of dedicated RAM to compute". makes it very hard to crack because ram doesn't scale as well as ASICs, but at the same time it reduces portability
<JJJollyjim>
gchristensen: just boring non-government infosec :P
<Valodim>
hm. I wonder if argon2 derived passwords in a public file would be ok (except for really weak passwords). I feel like it would, but I never looked at the numbers :)
<gchristensen>
JJJollyjim: fwiw I have found good luck using various scripts with vault to provision tokens instead of putting them in the store
<JJJollyjim>
yeah
<Arahael>
ideally you want to avoid passwords though, and avoid making them accessible. beyond the encryption, there is still loads that can go wrong.
<JJJollyjim>
im just hoping something like that becomes standardised across nixos
<JJJollyjim>
so people dont have to build it themselves
<gchristensen>
I'm not sure it will be easy to become standard
<gchristensen>
the lifecycle, provision techniques, etc. are all so complicated and often quite boutique
<Arahael>
a password store would be wonderful, though even better would be some sort of tpm service.
<Arahael>
simplest option: just buy a security key.
<JJJollyjim>
the worst-case scenario for this kind of attack is probably when someone has their nixos configuration public, and puts in a secret with an argument or something
<JJJollyjim>
then someone watching the substituter can look for hashes they don't recognise to identify the files that contain secrets, and they know the exact structure of the file
<JJJollyjim>
(also, if the secret is near the end of a file, you can precompute the hash up to that point, and only recalculate the end of it)
<JJJollyjim>
which accelerates the attack
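<abathur>
that prefix-precompute trick maps directly onto hashlib's `copy()`: hash the known part of the file once, then only the candidate tails per guess. The config format and secret here are made up:

```python
import hashlib

# Everything up to the secret is known/guessable (e.g. generated by a module):
prefix = b"[wifi]\nssid=home\npsk="

# Hash the known prefix exactly once...
base = hashlib.sha256(prefix)

# ...then per candidate, only the tail is hashed, via copy():
def hash_with_secret(secret: bytes) -> str:
    h = base.copy()
    h.update(secret + b"\n")
    return h.hexdigest()

# Matches hashing the whole file from scratch:
full = hashlib.sha256(prefix + b"hunter2\n").hexdigest()
assert hash_with_secret(b"hunter2") == full
```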
<__monty__>
Arahael: A public password store sounds scary.
<JJJollyjim>
Arahael: i have a yubikey, now how do i get the secrets into all my nixos module options?
<JJJollyjim>
:P
<Arahael>
__monty__: i didn't mean public, i meant local to each system, as a standard service.
<gchristensen>
Arahael: you mean like the kernel key ring? :)
<Arahael>
gchristensen: i am actually not familiar with that. :(
<gchristensen>
I wasn't either until flokli told me about it
<__monty__>
Arahael: Hmm, and where does the advantage compared to doing that now come in? Just an easy way to specify "${from-password-manager}"?
<balsoft>
Hm, has anybody considered splitting compilation and linking of C/C++ packages into two separate derivations, such that compilation depends only on the (maybe normalised in some way) headers of dependencies and linking depends on the actual built dependencies? That could reduce the time mass rebuilds take by quite a lot.
<Arahael>
unfortunately, in my case, i need a portable solution. i now use 1password + separate yubikey. and anything that has 2fa, has that enabled.
<balsoft>
I know it will never get to nixpkgs, just wondering if this is feasible
<Arahael>
sadly, the more important the service, the crappier the security :(
<Arahael>
__monty__: yes
<gchristensen>
balsoft: hmmm I'm not sure it would actually reduce rebuilds, though it would with CAS store paths since the headers would have the same hash
<JJJollyjim>
__monty__: there are dozens of places in the official nixos modules that say "PUT YOUR SECRETS HERE (p.s. putting secrets in the store is a bad idea)"
<balsoft>
gchristensen: yeah, I'm talking about CAS, sorry that I didn't mention that
<JJJollyjim>
I want to see them all replaced with something user-friendly that doesn't involve setting up my own hashicorp vault and writing my own systemd services and so on
<gchristensen>
balsoft: for me, I think it would be interesting to have stdenv.mkDerivation split each phase in to its own derivation :)
<Philipp[m]1>
Centos in German is like an interactive Rammstein song.
<sphalerite>
hahahaha
<sphalerite>
can't remember which software it was exactly but I have recently worked with software localised to German as well. It's a weird experience, like it's a foreign language even though I'm a native speaker.
<Philipp[m]1>
It's because many of the words used in a tech context have a very specific meaning that they lose when you just translate them literally.
<Philipp[m]1>
I like having the option for localised docs but localised output for system executables? No thank you!
<__monty__>
Yes, localized system utilities and preferences are just a pain when you need to help someone.
<gchristensen>
anyone know of a tool which will walk all my USB hubs and devices and examine how much bandwidth and what-not that each device is using?
<gchristensen>
to be able to quickly answer the question "is it plugged in wrong, or am I at capacity?"
<ivan>
lsusb -t?
<gchristensen>
hmmm that might do it, though something with a few more UX affordances would be nice :P
<abathur>
already tried usbtop? (not that I've ever used it...)
<gchristensen>
which is a thing which maps all my devices in the tree and highlights things like this device uses 480M and its hub only supports 480M, and also that same hub has 3x 12M devices also plugged in