<siraben>
gchristensen: can that be done in a nix package?
<gchristensen>
no
<gchristensen>
(this is a good thing)
<siraben>
Right. I was thinking so. I was explaining Nix to a friend of mine who works with heterogeneous computing clusters, and hardware-specific flags came up.
<gchristensen>
yeah, the story there isn't perfect
<gchristensen>
I know this is something the floxdev.com folks have figured out internally, not sure it is something they have in their public product yet
<ekleog>
what you can do is build two derivations, one with cuda and one without, and then build a derivation that detects whether a nvidia gpu is present to pick which one to run
<__monty__>
Does guix have a solution for this? They've focused on HPC a bit, no?
<infinisil>
Or no need for paste, output is: fd full kvm null ptmx pts random shm stderr stdin stdout tty urandom zero
<ekleog>
(because in practice what one usually really wants is to _run_ with/out cuda, not to _build_ with/out cuda, so if you have a heterogeneous system you can have one derivation that contains both and runtime-selects what to run, and it'll even be better as you can then have a single build for all the heterogeneous machines)
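A minimal sketch of the wrapper ekleog describes, assuming a hypothetical package `myapp` with a `cudaSupport` flag and using the presence of the NVIDIA kernel driver as the runtime check (any other detection method would do):

```nix
{ pkgs ? import <nixpkgs> {} }:
let
  # two real builds, one per variant (package file and flag name are made up)
  withoutCuda = pkgs.callPackage ./myapp.nix { cudaSupport = false; };
  withCuda    = pkgs.callPackage ./myapp.nix { cudaSupport = true;  };
in
# a thin wrapper that contains both builds and picks one at run time
pkgs.writeShellScriptBin "myapp" ''
  if [ -e /proc/driver/nvidia/version ]; then
    exec ${withCuda}/bin/myapp "$@"
  else
    exec ${withoutCuda}/bin/myapp "$@"
  fi
''
```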
<MichaelRaskin>
__monty__: in HPC shouldn't you know in advance whether CUDA / OpenCL / whatever is available, though?
<__monty__>
MichaelRaskin: Running simulations on them, yes. But I'm sure there are smaller-scale HPC clusters where you do have to deal with heterogeneous hardware on the management side.
<siraben>
ekleog: turns out their cluster has so much variation that the cartesian product would be too big
<MichaelRaskin>
I guess the proper-Nix solution would be to run detection outside the build and have an attrset of flags per machine, then feed that to the builds
<siraben>
ekleog: oh, and building both may not always be possible
<ekleog>
siraben: ugh :( and even with building one binary per machine type rather than doing the full cartesian product?
<siraben>
building both on one machine?
<ekleog>
yup, with enough cross it should be possible (not going to say it would be easy though)
<__monty__>
Not being able to build is fairly easily solved with remote builders though.
<siraben>
__monty__: ah right
<MichaelRaskin>
A bit like how NixOS handles some driver configuration: detect, prepare configuration, do a pure build from configuration
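A rough sketch of the shape MichaelRaskin suggests: the flags are produced by a detection step that runs outside the build, so every build stays pure (machine names and flags here are made up):

```nix
{ pkgs ? import <nixpkgs> {} }:
let
  # written out by an out-of-band detection run, one entry per machine
  machineFlags = {
    node-a = { cudaSupport = true;  avxSupport = true;  };
    node-b = { cudaSupport = false; avxSupport = false; };
  };
in
  # one pure build per machine, parameterised only by the detected flags
  builtins.mapAttrs (_machine: flags: pkgs.callPackage ./myapp.nix flags) machineFlags
```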
<sterni>
gchristensen: is there really no way to query the build status of every job of a job set at once via the json interface of hydra?
<lukegb>
there is, I think
<gchristensen>
no idea, the API is a bit ad hoc
<gchristensen>
improvements could be made :)
<lukegb>
you can ask for /builds on an eval
<lukegb>
if you want to make a really long-running query to hydra :P
<sterni>
lukegb: well it's certainly better than having 16000 requests
<lukegb>
it'll take... a few minutes to return
<gchristensen>
what are you trying to do, sterni ?
<sterni>
not me actually, but it's about getting a jobset state overview mostly
<sterni>
but seems like eval/<id>/builds is the solution if it is what lukegb promises
<sterni>
odd that it isn't mentioned in hydra-api.yaml
<gchristensen>
I'm not inclined to say it is a stable api
<gchristensen>
yeah, I don't really think that is a good idea
<sterni>
what would be?
<sterni>
I'm inclined to think we should just check our aggregate job of interesting builds instead
<gchristensen>
much better (like status.nixos.org)
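For comparison, checking a single aggregate job is one small request. A sketch of what that could look like, assuming Hydra's per-job `latest` page answers with JSON when asked for it (the job path here is only an example):

```nix
{ pkgs ? import <nixpkgs> {} }:
# fetch the latest finished build of one aggregate job instead of a whole jobset
pkgs.writeShellScriptBin "check-aggregate" ''
  ${pkgs.curl}/bin/curl -sfL -H "Accept: application/json" \
    "https://hydra.nixos.org/job/nixos/trunk-combined/tested/latest"
''
```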
<gchristensen>
I'll take a closer look
<gchristensen>
but I don't think it is a good idea to be creating tools that run super heavy queries. it isn't the end of the world: we can block the endpoint or whatever to deal with it, so maybe it isn't a big deal
<gchristensen>
but maybe there are other approaches
<lukegb>
oh right, definitely checking this in to nixpkgs is not a good idea
<sterni>
I think the idea is to run this once a week, but I can see why you are concerned
mjsir911 has quit [Remote host closed the connection]
<sterni>
since that is the main interest and the general status of the jobset becomes outdated very quickly
<sterni>
especially if this is only executed like once a week as intended
<gchristensen>
that sounds like good information
<gchristensen>
might need to add API keys or something to hydra some day
<sterni>
you can probably also cause quite a lot of load without the API
eyJhb has joined #nixos-dev
<sterni>
although it is probably more finicky because of lazy tabs and truncated lists?!
<gchristensen>
surely can, yeah
<gchristensen>
APIs just have a tendency to be mechanized, with a user who never gives up
<sterni>
that's true
<sterni>
also I think you can easily cause a lot of trouble by accident
<gchristensen>
yeah
<gchristensen>
hydra has benefited for a long time by having "friendly" users
<sterni>
hydra has this tendency to randomly time out or 500, so people will likely retry a lot in clients, which can of course just increase the harm done
<gchristensen>
yeah
<gchristensen>
a fairly regular component of my paid work this year has been around finding and fixing issues like this, but the answers aren't always obvious
<{^_^}>
hydra#963 (by grahamc, 1 day ago, open): Deleting jobsets is extremely slow
contrapumpkin has joined #nixos-dev
copumpkin has quit [Ping timeout: 265 seconds]
<gchristensen>
maralorn, sterni: so maybe it is fine; another option is running some sort of batch job in SQL via the dump-and-load job
<gchristensen>
that has significant downsides, though
<maralorn>
I at least have no problem with using an unstable API. And I am very mindful of the resulting load.
<gchristensen>
that sounds fine to me
<maralorn>
gchristensen: I think the "correct" solution would be if we could give the eval/<id>/builds a query param like eval/<id>/builds?buildstatus=1
<maralorn>
Even nicer would probably be to have a POST endpoint to builds where we can query a list of builds. Because I think, especially if we want to run this script more often (e.g. once per day), we don't need to re-query all jobs but only the jobs newly created for that derivation.
<gchristensen>
why POST?
contrapumpkin is now known as copumpkin
<maralorn>
To give the list of builds we want in the body.
<maralorn>
But it was just an idea.
<gchristensen>
ah
<maralorn>
I am sorry I didn't ask first before writing something so terribly inefficient. I was just eager to have some kind of PoC.
<gchristensen>
no worries
<gchristensen>
I think the filtering would be good, but it isn't sufficiently advanced today
<maralorn>
lol, there really is a way to Favorite jobs on hydra. How amazing!
<gchristensen>
go for it :). if it causes problems, we can block it or improve it
<maralorn>
So, is there an endpoint where I can get the constituent builds of an aggregate job in batch?
<gchristensen>
I actually don't know :( I don't know all of hydra
<gchristensen>
maybe try calling the HTML endpoint with accept: application/json?
cole-h has joined #nixos-dev
<maralorn>
Yeah, I am trying that, but for that endpoint there is no JSON.
mkaito has joined #nixos-dev
<maralorn>
Well, the eval/<id>/builds query takes about 5 minutes and contains all the information that I could dream of. So if you don't mind I would use that one. That's only two queries, about once or twice per week.
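Concretely, the query maralorn ends up using boils down to something like this, wrapped as a tiny script so the eval id can be passed in; the path and the Accept header are the ones discussed above, and as noted, the response can take minutes:

```nix
{ pkgs ? import <nixpkgs> {} }:
# ask the eval's builds page for JSON instead of HTML
pkgs.writeShellScriptBin "hydra-eval-builds" ''
  ${pkgs.curl}/bin/curl -sf -H "Accept: application/json" \
    "https://hydra.nixos.org/eval/$1/builds"
''
```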
mkaito has quit [Quit: WeeChat 3.1]
<gchristensen>
I'm going to say go ahead
<gchristensen>
it might cause problems and we may need to block it without notice, but I'm not going to bet on it
<gchristensen>
at some point it might make sense to pay a dba to look at the database for a few hours
<siraben>
`nix-build build.nix --argstr maintainer siraben --argstr system raspberryPi`
<samueldr>
isn't pkgsCross easier?
<samueldr>
ah I see, for all things owned by a maintainer
<samueldr>
siraben: does that work with --argstr system "armv7l-linux" ?
<samueldr>
e.g. raspberryPi should be armv6l-linux (different!)
<samueldr>
I just now realize you're not the author
<siraben>
samueldr: haha yeah, not author
<siraben>
Ah it doesn't allow platforms?
<samueldr>
I didn't check anything else than looking at the code and PR
<samueldr>
since it seems to rely on builtins.system, I guess that passing "armv7l-linux" and similar should work
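On samueldr's aside, pkgsCross makes the armv6l/armv7l difference easy to check; something like the following (attribute names taken from lib.systems.examples) should evaluate to the two different doubles:

```nix
# the raspberryPi cross example is armv6l, not armv7l
let pkgs = import <nixpkgs> {};
in {
  raspberryPi = pkgs.pkgsCross.raspberryPi.stdenv.hostPlatform.system;              # "armv6l-linux"
  armv7l      = pkgs.pkgsCross.armv7l-hf-multiplatform.stdenv.hostPlatform.system;  # "armv7l-linux"
}
```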
rajivr has quit [Quit: Connection closed for inactivity]
<sterni>
do ofborg's changed output paths take aarch64-linux into account for linux?
<gchristensen>
no
<gchristensen>
the rebuild count labels maybe should be split for specific architectures
<sterni>
so linux == x86_64-linux and darwin == x86_64-darwin
<gchristensen>
yea
tomberek has quit [Ping timeout: 240 seconds]
sophrosyne97 has joined #nixos-dev
<sophrosyne97>
Where are the builtin functions for Nix like `all` or `any` defined? I can see that they're considered primops in the nix repl, but I can't seem to find where they are actually located.
<sophrosyne97>
I was asking because I saw a PR asking for someone to add a builtin and I was just curious how they were structured. How would I even go about adding a builtin?
<jonringer>
Ah, yea, builtins are part of the actual nix executable. So samueldr's references above are appropriate then
<samueldr>
I basically grepped through the code for known builtins
hax404 has quit [Ping timeout: 250 seconds]
<jonringer>
you can look at a previous PR to add a builtin, and that should have everything required to add another
<samueldr>
(hopefully the unstable Nix codebase hasn't seen too much of a refactor since the last builtin addition)
<ekleog>
I'm thinking a bit about how to improve the UI of https://github.com/Ekleog/nixpkgs-check ; one idea I had was to make it a GUI, which would simplify quite a lot of the UI-handling logic that's currently in there, as well as allow for more parallelism and thus a faster runtime. I'm just wondering, given we are in a developer community, which is more used to TUIs… are there people who would find such a decision to be bad?
<samueldr>
TUIs are not not GUIs (yes, double negation :))
<samueldr>
some TUIs, at least
<samueldr>
you could build it entirely as a "rich" TUI and yet have parallelism and interactivity and such
<ekleog>
well, I'm currently doing a semi-rich TUI, but to go further I'd need to figure out a way to have `nix build` output its progress bar to only a portion of the screen, which is too complex for me to want to tackle ^^' (and this semi-rich TUI, basically “UI widgets but still in a scroll-down-only mode,” doesn't allow for parallelism)
<ekleog>
(on the other hand, a GUI with gtk would allow me to just spawn one terminal per `nix build` instance and be able to ask the user questions at the same time)
<ekleog>
one terminal widget*
<sterni>
do we have GNU coreutils in stdenv on darwin as well?
supersandro2000 has quit [Killed (card.freenode.net (Nickname regained by services))]
supersandro2000 has joined #nixos-dev
<abathur>
sterni: yes
pmy has quit [Ping timeout: 252 seconds]
pmy has joined #nixos-dev
<sterni>
very good
<abathur>
there may be some small asterisks, iirc a few coreutils are linux only in some way
<abathur>
or at least one that I recall, maybe if I look at the list I'll remember...
<abathur>
ah, runcon; not sure if there are others
kreisys has joined #nixos-dev
cole-h has joined #nixos-dev
copumpkin has joined #nixos-dev
contrapumpkin has quit [Ping timeout: 246 seconds]
tomberek has joined #nixos-dev
bennofs__ has quit [Read error: Connection reset by peer]