<jtojnar>
aargh, something missing in configure.ac, and m4 just eats everything until the next command, resulting in the helpful “./configure: line 15039: syntax error near unexpected token `;;'”
<gchristensen>
ehh I think I'll change that alert to not use predict_linear
<noonien>
is there a way to get an unwrapped gcc and buildtools for cross-compilation?
<noonien>
i'm trying to compile a kernel module for another machine and i'm getting invalid modules; my guess is it's happening because of the wrapping
<infinisil>
The wrapper is what makes cross comp possible! Pretty sure anyways
<thoughtpolice>
It really depends on what you're trying to accomplish. There are various cross compiler things in the package infrastructure but in other cases I've just written my own package to cross-compile GCC and treat it like any other normal package
<thoughtpolice>
(This only works in bare metal-ish environments though, or if you're willing to write your own stuff to tie in a libc)
<noonien>
well, i used to build this module in a docker container since i didn't know nixpkgs had gcc49 (i'm building for an older kernel)
<noonien>
i've setup a nix-shell with the following: http://ix.io/22mt
<noonien>
the project builds successfully, however, when i try to insert the module, i get these errors: http://ix.io/22mu
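(noonien's pastes are gone, but a hedged sketch of a shell.nix along these lines may be useful. Everything here is an assumption rather than the actual paste: gcc49.cc is the unwrapped compiler underneath the nixpkgs cc wrapper, and hardeningDisable turns off the wrapper-injected hardening flags that are a common cause of modules the kernel refuses to load.)

    # hypothetical shell.nix for building an out-of-tree module with gcc 4.9
    with import <nixpkgs> {};
    mkShell {
      # either keep the wrapped gcc49 but switch off its hardening flags ...
      buildInputs = [ gcc49 gnumake ];
      hardeningDisable = [ "all" ];
      # ... or drop to the unwrapped compiler entirely via gcc49.cc
    }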
<thoughtpolice>
The TL;DR is "much faster globally and with much lower latency". One of the changes there is actually already deployed, and reduced TTFB significantly for an HKG user (something like ~1.2 seconds to ~250ms)
<thoughtpolice>
There are also many more internal changes that will help us monitor problems and troubleshoot issues.
<clever>
thoughtpolice: ah, some of that i've tried to mitigate with cachecache
<clever>
but streaming hits is something i still need to work on
<thoughtpolice>
The second major latency reducing measure (streaming miss) isn't yet live, but that's a major one
<clever>
currently, cachecache will read the entire reply from upstream, write it to disk, then serve it to the client
<clever>
but i need to look into conduits more, and stream it better
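(A minimal sketch of that streaming shape with conduit, assuming a conduit-1.3-style API: passthroughSink tees each chunk into the on-disk cache file while the same chunks keep flowing to the client, so nothing waits for the whole NAR. serveNar and its source/sink arguments are invented names, not cachecache's actual code.)

    import Data.Conduit (ConduitT, (.|), runConduitRes, passthroughSink)
    import Data.Conduit.Combinators (sinkFile)
    import Control.Monad.Trans.Resource (ResourceT)
    import Data.ByteString (ByteString)
    import Data.Void (Void)

    serveNar :: ConduitT () ByteString (ResourceT IO) ()    -- upstream body
             -> ConduitT ByteString Void (ResourceT IO) ()  -- client sink
             -> FilePath
             -> IO ()
    serveNar upstream client cachePath =
      runConduitRes $
        upstream
          -- tee every chunk into the cache file as it flows past
          .| passthroughSink (sinkFile cachePath) (\() -> pure ())
          -- and hand the same chunks to the client without waiting for EOF
          .| client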
<clever>
another thing, is being able to fork a conduit, mid-stream
<clever>
thoughtpolice: so overall, the api is the same, but performance changes to make it respond faster?
<thoughtpolice>
Yes, nothing user facing is changing, other than "It's a lot better, now"
<clever>
i think the only thing that the client needs to change for, is the http2 nar push stuff
<thoughtpolice>
Yes, exactly, that's something I'm experimenting with now
<thoughtpolice>
But it's not a high priority
<thoughtpolice>
And it will be very selectively enabled for compatible clients
<thoughtpolice>
But other things are just "free". For instance, a lot of times `nix ...` seems to "hang" on downloads before they start. You can also see that with `nix-build`, with the normal cURL-style bars that don't start until a few seconds after beginning. That's because right now the cache fetches everything from S3 first before sending a byte. That will be fixed.
<clever>
cachecache has the same problem
<thoughtpolice>
Unrelated but similar to cachecache it seems, I also added a proxying mode to Eris recently that allows it to serve (and even sign!) objects from upstream caches, meaning one instance can serve both cache.nixos.org objects, and private objects.
<thoughtpolice>
Pretty nifty, but I haven't yet worked in the ability to actually keep the proxied objects (that aren't local) on disk, which would basically be the same feature, I suppose.
<clever>
i'm just keeping signatures intact as i proxy, but i do merge multiple caches together
<thoughtpolice>
(Well, it doesn't cache the literal .narinfo, it regenerates it)
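(The regeneration step is small, for what it's worth: Nix signs the fingerprint string "1;<storePath>;<narHash>;<narSize>;<comma-separated refs>" with an Ed25519 key, and the narinfo Sig: field is "keyname:base64(signature)". A hedged Haskell sketch using cryptonite follows; the function names are invented and this is not Eris's actual code.)

    import qualified Crypto.PubKey.Ed25519 as Ed
    import qualified Data.ByteString.Char8 as BC
    import qualified Data.ByteString.Base64 as B64
    import qualified Data.ByteArray as BA

    -- the string Nix actually signs for a store path
    fingerprint :: BC.ByteString -> BC.ByteString -> Integer -> [BC.ByteString]
                -> BC.ByteString
    fingerprint path narHash narSize refs =
      BC.intercalate ";"
        ["1", path, narHash, BC.pack (show narSize), BC.intercalate "," refs]

    -- render the Sig: field for a given key name and secret key
    sigField :: BC.ByteString -> Ed.SecretKey -> BC.ByteString -> BC.ByteString
    sigField keyName sk fp =
      let sig = Ed.sign sk (Ed.toPublic sk) fp
      in keyName <> ":" <> B64.encode (BA.convert sig)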
<clever>
and i cache every narinfo i see in a single hashmap in ram
<clever>
narinfo misses are cached for a short period of time
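(Presumably something like the sketch below; the real cachecache types differ, and Entry/recordMiss are invented here. Hits stay in the in-RAM HashMap indefinitely, while a miss carries a deadline after which upstream is consulted again.)

    import qualified Data.HashMap.Strict as HM
    import Data.IORef
    import Data.Text (Text)
    import Data.Time.Clock (UTCTime, NominalDiffTime, addUTCTime, getCurrentTime)

    data Entry = Hit Text      -- the narinfo body, cached indefinitely
               | Miss UTCTime  -- an upstream 404, valid until this deadline

    missTTL :: NominalDiffTime
    missTTL = 60  -- cache upstream 404s for one minute

    -- Just (Just body) = cached hit; Just Nothing = still-fresh cached 404;
    -- Nothing = not cached (or an expired miss), so ask upstream
    lookupNarInfo :: IORef (HM.HashMap Text Entry) -> Text -> IO (Maybe (Maybe Text))
    lookupNarInfo ref hash = do
      now <- getCurrentTime
      m   <- readIORef ref
      pure $ case HM.lookup hash m of
        Just (Hit body)                   -> Just (Just body)
        Just (Miss until') | now < until' -> Just Nothing
        _                                 -> Nothing

    recordMiss :: IORef (HM.HashMap Text Entry) -> Text -> IO ()
    recordMiss ref hash = do
      now <- getCurrentTime
      modifyIORef' ref (HM.insert hash (Miss (addUTCTime missTTL now)))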
<clever>
line 362 downloads the entire nar at once
<thoughtpolice>
Yeah, I do nothing that fancy, but in practice I expect most users of Eris aren't that latency sensitive for low-volume stuff.
<thoughtpolice>
And if they are they'll just set up some other HTTP proxy, like Varnish itself or whatever
<clever>
316 writes that entire nar to local disk
<thoughtpolice>
I did get Mojolicious's streaming support to work, however. I think.
<clever>
then 318 returns it towards the client
<thoughtpolice>
(The main thing I need for Eris now is some kind of ACME integration... So utterly tedious but so useful)
<clever>
the problem, is that i want to start 1 http download
<clever>
then stream it to disk
<clever>
and stream it to the user
<clever>
but, the hard part
<clever>
i want a 2nd user to come along, at any point in time, read the partial file on disk, then fork the stream in ram, and join in
<thoughtpolice>
Yes, we have this feature at work in our platform, and in practice it's extremely useful when you need it, since it basically allows wide fanout at no cost.
<thoughtpolice>
"Request collapsing"
<clever>
yeah
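(One way to sketch that fork-mid-stream idea in Haskell is an STM chunk log: the single upstream download appends chunks, and a client that arrives late first replays everything recorded so far, then blocks for new chunks until the stream ends. StreamLog, feed, and attach are invented names, not cachecache's API.)

    import Control.Concurrent.STM
    import Data.ByteString (ByteString)
    import Data.Foldable (toList)
    import Data.Sequence (Seq, (|>))
    import qualified Data.Sequence as Seq

    data StreamLog = StreamLog
      { chunks :: TVar (Seq ByteString)  -- every chunk received so far
      , done   :: TVar Bool              -- has upstream finished?
      }

    -- the one real download appends each chunk as it arrives
    feed :: StreamLog -> ByteString -> STM ()
    feed s c = modifyTVar' (chunks s) (|> c)

    -- a client that has already seen n chunks: return anything newer,
    -- block if nothing new yet, or Nothing once the stream is finished
    attach :: StreamLog -> Int -> STM (Maybe [ByteString])
    attach s seen = do
      cs <- readTVar (chunks s)
      if Seq.length cs > seen
        then pure (Just (toList (Seq.drop seen cs)))
        else do
          fin <- readTVar (done s)
          if fin then pure Nothing else retry

Each client just loops on atomically (attach log n); request collapsing falls out of this shape, since any number of clients follow the single upstream transfer.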
<thoughtpolice>
Hopefully those issues will be a thing of the past soon for cache.nixos.org at least :)
<clever>
thoughtpolice: are there multiple instances of cache.nixos.org spread around the world?
<thoughtpolice>
No, there's only one. The storage costs would be astronomical with S3, I'm afraid.
<thoughtpolice>
In my own personal experiments, I did play with a geo-redundant cache, however
<clever>
i meant the cache layer, after the s3 bucket
<thoughtpolice>
Oh, yeah, absolutely
<thoughtpolice>
Read all the Fastly docs on that page I linked; I detail it extensively
<clever>
it's harder to reverse engineer that from the outside, when cloudflare is masking things
<thoughtpolice>
You can basically find all the information in that document publicly, if you watch recordings of our customer conferences and read things carefully.
<thoughtpolice>
There's nearly nothing there that hasn't been written elsewhere.
<clever>
something i've thought about, is how much advantage a custom program that knows nix can give you, over a dumb squid instance
<clever>
oh, and there is one problem that would need to be fixed in nix itself: it refuses to download X from the cache, if X depends on Y, and it doesn't have Y yet
<thoughtpolice>
Yes, I do leverage some aspects of Nix to enhance the server configuration, since we essentially use Varnish under the hood. You could probably do more though with some improved cooperation.
<clever>
thoughtpolice: you could maybe get much better performance (parallelism), if you download both X & Y, but don't try to unpack X's nar until Y has finished
<thoughtpolice>
Yeah, that's one way, I've noticed some things like that. Another I was thinking of is whether we could use TLS 1.3 0-RTT to reduce initial handshake times once the user has talked to the cache once. 0-RTT has some flaws (mainly replay attacks), but I don't think they apply very much for strict GET requests.
<thoughtpolice>
There are probably several optimizations we could make if we enhance Nix, I bet, but I've only thought of a few. Server Push is the most obvious one.
<clever>
i've also found performance issues in nix/hydra-queue-runner, just in terms of how many narinfos/sec they can request
* clever gets image
<thoughtpolice>
HTTP/2 multiplexing should help that a ton today, though apparently HTTP/2 is currently sometimes problematic for some users...
<thoughtpolice>
It's not clear to me how much of that is cURL bugs vs Nix bugs.
<thoughtpolice>
My current logging stats indicate HTTP/2 is only about 20% of the traffic hitting my beta service, even though it's been available in Nix by default for quite a while at this point
<clever>
thoughtpolice: you can see here, that hydra is barely getting even 1 request/second, when hitting a cachecache instance on localhost
<clever>
and nearly all of those are cache hits, in a haskell hashmap
<thoughtpolice>
Oh, the *queue runner*. Yeah wow that's insanely slow.
<thoughtpolice>
You should literally be able to request like thousands of narinfos a second with HTTP/2 multiplexing from the upstream cache lol (or at least, my version)
<clever>
i suspect the algo in nix itself though, isn't capable of that
<clever>
it starts at the root of the dep graph, and works downwards thru the tree
<clever>
so the number of pending queries will grow as it fans out thru the dep tree
<clever>
if it's even able to queue them up
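(A sketch of what a fan-out-friendly walk could look like, using async: query each layer of the dependency graph concurrently instead of one narinfo at a time. fetchNarInfo stands in for the real HTTP call and would want a concurrency bound in practice; none of this is Nix's actual implementation.)

    import Control.Concurrent.Async (mapConcurrently)
    import Data.List (nub)

    -- breadth-first over the dep graph: each layer's narinfo lookups run
    -- in parallel, and the References they return form the next layer
    walk :: (FilePath -> IO [FilePath]) -> [FilePath] -> IO ()
    walk fetchNarInfo = go []
      where
        go _ [] = pure ()
        go seen layer = do
          refss <- mapConcurrently fetchNarInfo layer
          let next = nub [ r | rs <- refss, r <- rs
                             , r `notElem` seen, r `notElem` layer ]
          go (seen ++ layer) next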
<clever>
$ nix-build channel:nixos-unstable -A hello --dry-run