genesis has quit [Remote host closed the connection]
__Sander__ has quit [Ping timeout: 276 seconds]
pie__ has quit [Ping timeout: 248 seconds]
peti has quit [Quit: rebooting]
pie_ has joined #nixos-dev
orivej has joined #nixos-dev
peti has joined #nixos-dev
__Sander__ has joined #nixos-dev
FRidh has quit [Read error: Connection reset by peer]
genesis has joined #nixos-dev
genesis has quit [Quit: Leaving]
the has quit [Remote host closed the connection]
genesis has joined #nixos-dev
the has joined #nixos-dev
the has joined #nixos-dev
the has quit [Changing host]
the has quit [Remote host closed the connection]
the has joined #nixos-dev
the has joined #nixos-dev
the has quit [Changing host]
<gchristensen>
`the` is now monitoring several large channels for spam queues to preemptively ban in #nixos chans
<LnL>
nice
<gchristensen>
cues, though :P
<srhb>
Anyone know why nixpkgs.metrics was aborted for trunk-combined recently?
<LnL>
no builders with the required features available
<angerman>
clever: can I make the `kexec_nixos` automatically start `justdoit`?
<angerman>
clever: and then reboot?
<angerman>
If so, I've got a cloud-init user data script that would completely automate the takeover: https://gist.github.com/fe992da7ac6eb64a98fb1566e453f4b3; currently I still need to log in and run `justdoit && reboot` once in between.
<gchristensen>
anyone around available to review a cross compilation patch for me?
<angerman>
gchristensen: depends on how complicated it is.
<{^_^}>
#44124 (by grahamc, open): systems: Allow detection of powerpc and sparc
FRidh has joined #nixos-dev
orivej has quit [Ping timeout: 248 seconds]
the has quit [Excess Flood]
the has joined #nixos-dev
the has joined #nixos-dev
the has quit [Changing host]
<gchristensen>
w00t, other than getting kicked due to flood bans, it succeeded in preemptively banning
<clever>
how did it know it was a bot? lol
<gchristensen>
it's in several large channels looking for spam cues
<clever>
ah
the has quit [Excess Flood]
<clever>
gchristensen: of note, most IRC servers will throttle you somewhat before you flood out, but the only way to detect that is to msg yourself
the has joined #nixos-dev
the has joined #nixos-dev
the has quit [Changing host]
<clever>
gchristensen: so after sending X messages, msg yourself, then wait for a response
<gchristensen>
huh
the has quit [Excess Flood]
the has joined #nixos-dev
the has joined #nixos-dev
the has quit [Changing host]
sir_guy_carleton has joined #nixos-dev
<gchristensen>
thus far I've just been doing thread::sleep(time::Duration::from_millis(200)); ... can't really add that level of complexity just yet
the has quit [Excess Flood]
<gchristensen>
I have a patch here which should fix the issue of it leaving every time ... not quite as elegant but should do the trick.
the has joined #nixos-dev
the has joined #nixos-dev
the has quit [Changing host]
the has quit [Remote host closed the connection]
the has joined #nixos-dev
the has joined #nixos-dev
the has quit [Changing host]
<domenkozar>
niksnut: should we remove metrics from release requirements?
<domenkozar>
or will that machine come back in another form?
<vcunat>
metrics is in release requirements? I didn't realize that.
<vcunat>
Still, I assume we want to keep metrics anyway, so that's two birds with one stone.
<domenkozar>
yeah, but the performance machine is gone
__Sander__ has quit [Quit: Konversation terminated!]
<vcunat>
Surely we can run it on another one?
<vcunat>
Most of the metrics seem fairly deterministic.
<vcunat>
(numbers of allocations, memory, etc.)
<gchristensen>
Hydra also records metrics like eval time and memory
<domenkozar>
well not timing
<domenkozar>
but whatever :)
<domenkozar>
I'm more worried that it's blocking the channel
<domenkozar>
so we should probably remove it for now
<domenkozar>
unless it will be fixed in 1-2 days
the has quit [Remote host closed the connection]
<vcunat>
I have one free core2duo machine and maybe 6 GiB RAM. I should be able to set it up for Hydra to use on Saturday, if desired.
<vcunat>
If Hydra only schedules one job at a time, it should be fairly consistent.
<vcunat>
But for a quick channel fix we can just schedule it wherever _for now_.
<vcunat>
It's two free core2duo machines, actually.
the has joined #nixos-dev
the has joined #nixos-dev
the has quit [Changing host]
<vcunat>
Having a machine only for the metrics job seems wasteful, but as long as Hydra never schedules more than one job on it, it could accept others as well.
<domenkozar>
you can do jobs = 1
<vcunat>
Yes, that's what I meant :-)
<domenkozar>
and set it up so it can run performance stuff
<domenkozar>
but it's useful to make it mandatory
<domenkozar>
then when a chromium rebuild comes in
<domenkozar>
it's immediately compiling there
<vcunat>
I wouldn't set "big-parallel" on that machine :-)
<domenkozar>
oh those two are separate
<gchristensen>
small-singular
<domenkozar>
well it could be one machine
<vcunat>
metrics uses "benchmark" tag
<vcunat>
no other job in nixpkgs uses it
<vcunat>
(from a quick grep)
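For reference, the setup being discussed could be expressed as a single entry in the Nix/Hydra build machines file (the host name and key path below are hypothetical): maxJobs of 1, and `benchmark` listed under both supported and mandatory features, so the machine runs one job at a time and accepts nothing but benchmark jobs:

```
# URI                  platforms    SSH key                maxJobs speed supported mandatory
ssh://nix@metrics-host x86_64-linux /etc/nix/benchmark_key 1       1     benchmark benchmark
```

Putting `benchmark` in the mandatory-features column keeps ordinary builds (and big-parallel ones like chromium) off the machine entirely.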
<domenkozar>
right :)
<domenkozar>
I hope it's not an amazon t2 :D
<vcunat>
Though a core2duo wouldn't really be representative of present-day performance :-/
<vcunat>
so I'm not sure it would be a win.
<gchristensen>
the rax machine was the only benchmark machine, and it was some cloud machine from years ago
<gchristensen>
it wasn't intended to be representative of present-day performance, but to be a stable reference point for comparing impact on nixpkgs over time
<vcunat>
So far I haven't heard of any other use for those weak core2 machines, so I can set one up for this, if you don't come up with a better idea. Or we could just abandon the stability of some of the metrics...
<gchristensen>
not up to me :P
<vcunat>
Maybe we should set up some place to discuss stuff specific to this Hydra instance, its hardware, etc.
<vcunat>
Perhaps most of it could be public.
<Dezgeg>
I think so far the cpu metrics have correlated pretty heavily with the object allocation rate anyway
<vcunat>
If it was a cloud machine, I wouldn't expect those to have really stable performance.
<vcunat>
(but I can't speak from experience or some deeper knowledge)
<vcunat>
Still, what you write would suggest it was stable enough :-)
<vcunat>
Eh, I was muddling "allocation rate" and "number of allocations" :-(
<domenkozar>
you can get burstable or fixed CPU on amazon
<domenkozar>
and instance types mean you always get the same hardware