<clever>
pbb: or modify top-level.nix to change how it references installBootLoader
<clever>
pbb: not that i know of, you would need to modify the grub module, to insert more things into it
<clever>
pbb: yep
<clever>
pbb: yeah, you would need to entirely overwrite it with your own script, and then somehow know what the old value was
<clever>
pbb: its just flagged as internal, so the docs all claim it doesnt exist
<clever>
pbb: installBootLoader is still a normal nixos option
<clever>
HKei: in this example, i'm generating a bash script, that will run whatever cabal produced, with all of the complex args the program needs to work
<clever>
HKei: i usually do that by having shell.nix do an override against default.nix
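that override pattern, roughly (a sketch; it assumes your default.nix takes pkgs as an argument and uses overrideAttrs):

```nix
# shell.nix -- hypothetical sketch: wrap default.nix and add dev-only tools
let
  pkgs = import <nixpkgs> { };
  drv = import ./default.nix { inherit pkgs; };
in
drv.overrideAttrs (old: {
  # extra tools wanted only inside nix-shell, not in the real build
  nativeBuildInputs = (old.nativeBuildInputs or [ ]) ++ [ pkgs.cabal-install ];
})
```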
<clever>
that sounds like the best option then
<clever>
didnt actually look at that page
<clever>
tilpner: oh, there are multiple releases?
<clever>
jluttine: upstream doesnt provide one zip per font
<clever>
tilpner: that zip is several gig in size
<clever>
if licensing issues wont let hydra build it, then there is no fix
<clever>
the key, is for hydra to build those split up ones
<clever>
tilpner: fetch the entire release in phase 1, then break it up in multiple drvs
<clever>
then those partials are in the cache
<clever>
and the critical part, is that hydra is configured to build everything in the middle layer
<clever>
the 3rd layer is a buildEnv to put it back together
<clever>
the 2nd layer, is an array of drvs, that unpack 1 font each (always 1 font)
<clever>
i think the simplest option, would be to make 3 layers of drvs, the first layer just fetches everything as a zip (plain fetchurl on the release)
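a rough sketch of those 3 layers (the URL, hash, and font names are all placeholders):

```nix
{ pkgs, fontNames }:  # fontNames: list of font directory names inside the release zip
let
  # layer 1: one fixed-output fetch of the entire release
  release = pkgs.fetchurl {
    url = "https://example.com/fonts-release.zip";  # placeholder URL
    sha256 = pkgs.lib.fakeSha256;                   # fill in the real hash
  };
  # layer 2: one derivation per font, each unpacking exactly 1 font
  fontDrvs = map (name:
    pkgs.runCommand "font-${name}" { } ''
      mkdir -p $out/share/fonts
      ${pkgs.unzip}/bin/unzip ${release} '${name}/*' -d $out/share/fonts
    '') fontNames;
in
# layer 3: a buildEnv that puts the chosen fonts back together
pkgs.buildEnv {
  name = "fonts";
  paths = fontDrvs;
}
```

hydra would then be pointed at the layer-2 attrs, so every per-font drv lands in the cache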
<clever>
you would have to download each file in the font, one per fetchurl
<clever>
ah, so you cant bypass the release/archive tars and download just one font
<clever>
worldofpeace: are the fonts grouped by directory within that repo?
<clever>
worldofpeace: i find even the main nixpkgs docs cause that, the problem is that they show how builder works, but dont point out that its mainly for learning how it works
<clever>
ive found that most sites i visit, cant keep up with my modem, lol
<clever>
HKei: its also huge, in that you have to download several 100mb (or was it gig?) worth of fonts, before you can extract and keep 1
<clever>
it would need separate tar files, for each font
2019-10-04
<clever>
and then decides what to do, based on the actual target
<clever>
haskell.nix, just translates the if's in your cabal file, into nix level if's
<clever>
and then everything implodes :P
<clever>
so (stack|cabal)2nix, wont try to supply Win32 to the windows build, and will try to build posix on windows
<clever>
cabal2nix just computes the expr for the current (or given) os
<clever>
stack2nix is just a wrapper, that runs cabal2nix on the right versions of everything
<clever>
the major difference, is that it supports conditional statements in cabal files
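the kind of cabal conditional being translated looks like this (a made-up stanza):

```cabal
library
  build-depends: base
  if os(windows)
    build-depends: Win32
  else
    build-depends: unix
```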
<clever>
yep
<clever>
yep
<clever>
haskell.nix can obey either stack.yaml, or cabal.project, and can also cross-compile
<clever>
stack2nix obeys the stack.yaml, but then wont find things on the cache
<clever>
cabal2nix has better cache coverage, but wont obey your stack.yaml
<clever>
freeman42x: its far better to build it with nix, using either cabal2nix, stack2nix, or haskell.nix
<clever>
freeman42x: but, i dont like using either cabal new-build, or stack, because both are impure
<clever>
freeman42x: stack --nix, just runs itself under nix-shell for you
<clever>
freeman42x: have you tried `nix-shell -p zlib` yet?
<clever>
freeman42x: youve only done half of that
<clever>
freeman42x: in addition to installing the binary, you must also copy a db file to the right dir
<clever>
nix-pkgconfig relies on a database of mappings between pkg-config .pc files and the nixpkgs attribute they are provided by. A minimal example database (default-database.json) is included which can be installed via:
<clever>
freeman42x: did you read the readme?
<clever>
freeman42x: then that tool isnt actually doing what it claims to do
<clever>
wrl: kexec might be a good way for you to test things, that doesnt need any working disk drivers
<clever>
ajs124: edit nixpkgs level :P
<clever>
bendlas: but yeah, with 20k derivations, nix will still suffer massively
<clever>
bendlas: id want to pre-generate the dep tree info, and ship that with the rev+sha256
<clever>
wrl: are you writing to sd? or sd?1 ?
<clever>
so id want it to be a non-IFD step, that you just generate the dep tree once ahead of time, and then import the file
<clever>
it would likely be far faster, if you batched it into a single IFD step, but then it costs more when nothing changes
<clever>
bendlas: the major performance cost, is that IFD must happen serially, so it can take 20+ mins to do the first pass
<clever>
bendlas: snack does similar with haskell, there is a dedicated binary, that will list the modules a given module depends on, and its ran via IFD, for each module
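a minimal illustration of import-from-derivation (hypothetical, using runCommand in place of the real dep-listing binary):

```nix
{ pkgs }:
let
  # eval pauses here: this derivation must be *built* before it can be imported
  depsFile = pkgs.runCommand "deps.nix" { } ''
    echo '[ "ModuleA" "ModuleB" ]' > $out
  '';
  deps = import depsFile;  # this import is the IFD step
in
deps
```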
<clever>
hlolli: to help reduce the "but it worked for me" situations
<clever>
hlolli: nixos also sets PATH differently, to only include what the service needs, so it doesnt happen to work due to what youve nix-env -i'd
<clever>
hlolli: youre likely not using nix-shell, so things fail in weird ways
<clever>
hlolli: builds will only function if you do them under nix-build or nix-shell
<clever>
selfsymmetric-mu: there is an entire fonts tree of options in configuration.nix
<clever>
hlolli: you probably want yarn2nix, to build the package
<clever>
hlolli: do all building in nix, and just use the built result in the service
<clever>
hlolli: dont do that, its not pure
<clever>
hlolli: are you running `yarn build` from a systemd service?
<clever>
hlolli: can you pastebin the entire output when it fails?
<clever>
hlolli: is the build happening under nix-build/nix-shell?
<clever>
exarkun: i have been thinking of rewriting bors as well, while it does work, the errors when it doesnt are horrid, typically just ruby backtraces, lol
<clever>
but, if any naughty admin pushes directly to master, you cant fast-forward, and ci has to start over
<clever>
and if it passes, fast-forward master to that merge commit you just tested
<clever>
exarkun: the bors logic could be baked into github, just have github itself push the merge to a dummy branch, and wait for ci to pass on that dummy branch
<clever>
and then if the status checks are green, bors will do it
<clever>
exarkun: bors does that, by just pushing the merge commit to a bors/staging branch
<clever>
exarkun: the problem, is that github has to re-trigger CI, on the result of merging the commit into master
<clever>
which is why many projects demand that you rebase ontop of master before merging
<clever>
exarkun: and if master changes after that test, things can break
<clever>
exarkun: the problem, is that travis merge builds only show if the pr passed against the old value of master
<clever>
exarkun: BUT, hydra will now rebuild EVERY SINGLE PR, on EVERY SINGLE PUSH TO MASTER
<clever>
exarkun: so, you could configure hydra to build pull/$PR/merge, on every PR
<clever>
exarkun: if you fetch pull/42/merge, you get a commit that is the result of merging the pr into master (but that commit wont be pushed to master)
<clever>
exarkun: if you try to fetch the pull/42/head branch, you get the tip of the PR
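the two read-only refs github exposes per pull request (PR number is hypothetical; the git fetch line is shown commented out since it needs network access):

```shell
pr=42
echo "pull/$pr/head"    # tip of the PR branch as pushed
echo "pull/$pr/merge"   # github's trial merge of the PR into master
# fetch either one like a normal ref, e.g.:
#   git fetch origin "pull/$pr/merge:pr-$pr-merge"
```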
<clever>
exarkun: it kind of does have some tools to support it
<clever>
it only waits for a batch if its already busy doing something else and must wait
<clever>
and when done, it will test pr2+pr3, as a second batch
<clever>
but if you tell it to merge pr1, pr2, and pr3, it will start testing pr1
<clever>
in your case, it would test pr1, and then pr2, as separate things
<clever>
qyliss: and once the current job is done, it will create a merge commit, containing everything from the queue, and do them in a second batch
<clever>
qyliss: if a PR merge is pending checks, any further attempts to merge one enter a queue
<clever>
and travis doesnt re-test every time master moves
<clever>
so if you push PR1, and it passes, then PR2 gets merged, then PR1+master now fails
<clever>
and travis building the merge branch cant catch the above, because it only tests the merge commit when you push
<clever>
it can solve problems like PR1 passes CI, PR2 passes CI, but if you merge both, CI fails
<clever>
infinisil: if you tell bors to merge several branches at once, it will merge all of them together, then test them as a single batch
<clever>
infinisil: bors is a bot, that will generate a merge commit, then run CI (in this case, a full hydra build??) and only push that merge to master if CI passes
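a toy simulation of that batching behaviour (not bors code, just the scheduling idea: the first PR is tested alone, and everything that queues up while it runs gets merged and tested as one batch):

```python
def simulate(prs):
    """Return the batches a bors-like bot would test, given PRs that all
    arrive while earlier batches are still running."""
    batches = []
    queue = list(prs)
    while queue:
        if not batches:
            # nothing running yet: the first PR is tested on its own
            batches.append([queue.pop(0)])
        else:
            # everything that queued up while the last batch ran is
            # merged together and tested as a single batch
            batches.append(queue)
            queue = []
    return batches

print(simulate(["pr1", "pr2", "pr3"]))  # [['pr1'], ['pr2', 'pr3']]
```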
<clever>
buckley310: requireFile is used for things like oracle java
<clever>
buckley310: pkgs.requireFile is one method
<clever>
buckley310: if you supply the hash upfront, nix will compute where it should be in /nix/store, and use that copy
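a sketch of the requireFile pattern (file name is hypothetical, and the hash is a placeholder to replace):

```nix
{ pkgs }:
pkgs.requireFile {
  name = "jdk-8u201-linux-x64.tar.gz";  # hypothetical vendor tarball
  sha256 = pkgs.lib.fakeSha256;         # replace with the real hash
  message = ''
    Download the file manually from the vendor, then add it with:
      nix-store --add-fixed sha256 jdk-8u201-linux-x64.tar.gz
  '';
}
```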
<clever>
cransom: you can also use `nix-store --query --roots $FOO` to find out why it is still alive
<clever>
buckley310: yes
<clever>
so its not really leaking a process
<clever>
when the bash it started exits, sudo will then exit with the same error code
<clever>
a simple `sudo bash` leaves a sudo proc around, with bash in the argv[1]
<clever>
tetdim: unpack-bootstrap-tools.sh will unpack that tar to $out, and re-patchelf it (with the patchelf inside the tar) to expect all of its libs in $out/lib/
<clever>
tetdim: the gcc tools tar, is basically the lfs /tools/ dir
<clever>
tetdim: busybox is used to unpack those tools, and line 29 will patchelf the patchelf
<clever>
tetdim: the stdenv contains gcc, ld, make, and basic tools like cp/rm
<clever>
tetdim: here is a basic derivation you can run, if you supply your own binaries for ps, env, id, and cat, you can run that without a single reference to <nixpkgs> and stdenv
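a minimal raw derivation of that shape (the original paste is gone; the builder path here is an assumption, any self-supplied shell works):

```nix
derivation {
  name = "minimal";
  system = builtins.currentSystem;
  builder = "/bin/sh";  # a shell you supply yourself, outside nixpkgs
  args = [ "-c" "echo hello > $out" ];
}
```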
<clever>
tetdim: step 2, is to get the stdenv to build, under that nix, and then use that to build nix with nix
<clever>
tetdim: nice
<clever>
gnidorah: stats_gui is a neat thing i found within the console, it can report any performance metric in steam
<clever>
gnidorah: yeah, comments always help
<clever>
gnidorah: id just leave the PR as it is then
<clever>
gnidorah: ahh, it would probably have to be in SDL's rpath then, which isnt easy
<clever>
gnidorah: and related to your steamcmd stuff, i recently discovered `steam steam://nav/console`, that opens a console, directly in the steam UI
<clever>
gnidorah: but its not aware of dlopen() things, and will break those
<clever>
gnidorah: this line is likely the problem, it will go over the DT_NEEDED field, figure out which RPATH entries you need, and remove the extra
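illustrative patchelf usage (the binary name is hypothetical; --shrink-rpath only keeps RPATH entries that DT_NEEDED libraries resolve from, so entries used solely by dlopen() get dropped):

```shell
patchelf --print-rpath ./game    # inspect the current RPATH
patchelf --shrink-rpath ./game   # drop RPATH entries no DT_NEEDED lib resolves from
patchelf --print-rpath ./game    # dlopen-only entries are now gone
```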