<clever>
and then nas.nix (in git) deals with grouping stuff
<clever>
so i fill that file in once at install time, with only 1 custom line
<clever>
and configuration.nix only has stuff tied to that hardware
<clever>
the nas just has imports = [ ./nixos-config/nas.nix ]; in its configuration.nix
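A sketch of that layout, with hypothetical host-specific lines (only the imports line matches the message above; the rest is illustrative):

```nix
# /etc/nixos/configuration.nix on the nas: hardware-tied stuff only,
# plus a single line pulling in the shared config from the git checkout
{ config, pkgs, ... }:
{
  imports = [ ./hardware-configuration.nix ./nixos-config/nas.nix ];
  networking.hostName = "nas";  # hypothetical example of a host-specific setting
}
```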
<clever>
that's the other major thing, i don't try to automate things like that
<clever>
but i don't maintain the list as diligently as i should, if that were a real threat
<clever>
in theory, that would prevent you from pulling off a MITM attack on my 1st session, so you can't ever MITM me on those hosts
<clever>
core.nix also has ssh host keys for everything i ssh into commonly
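Pinning host keys in a shared module could look like this (hostname and key are placeholders, and the vim.nix import mirrors the split described here):

```nix
# core.nix: shared by every machine; pinning host keys means even the
# first ssh connection from a fresh install can't be MITM'd
{ ... }:
{
  imports = [ ./vim.nix ];
  programs.ssh.knownHosts = {
    nas = {
      hostNames = [ "nas.example.com" ];        # placeholder hostname
      publicKey = "ssh-ed25519 AAAA...";         # placeholder key
    };
  };
}
```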
<clever>
but the vim config itself is in its own vim.nix module
<clever>
lovesegfault: core.nix is stuff i want basically anywhere, like my vim config, and other stuff
<clever>
lovesegfault: i have 2 rough classes, and some core files
<clever>
lovesegfault: check my nixos-configs repo for an example
<clever>
lovesegfault: yep
<clever>
gwen: the issue with python stuff is that you tend to have to wrap the final script, which is using the python library, which is using the qt libraries
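One hedged sketch of wrapping the final entry point rather than the library, using qt5's wrapQtAppsHook (the package name and dependency are hypothetical):

```nix
# a hypothetical python application whose library dependency uses qt
{ pkgs ? import <nixpkgs> {} }:
pkgs.python3Packages.buildPythonApplication {
  pname = "my-qt-tool";   # hypothetical name
  version = "0.1";
  src = ./.;
  # the hook wraps the installed script, so qt plugin paths etc. are
  # set when the interpreter that imports the qt-using library starts
  nativeBuildInputs = [ pkgs.qt5.wrapQtAppsHook ];
  propagatedBuildInputs = [ pkgs.python3Packages.pyqt5 ];
}
```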
<clever>
your first point! :D
<clever>
lovesegfault++
<clever>
but, closed source things like teamspeak aren't built by hydra
<clever>
hydra should email everybody listed in the maintainer field
<clever>
the executable is a shell script, that will ask you to agree to the EULA, then unpack the source
<clever>
lovesegfault: that file
<clever>
> teamspeak_client.meta.position
<clever>
34 version = "3.3.0";
<clever>
seems pretty basic
<clever>
53 echo -e 'q\ny' | sh -xe $src
<clever>
what did they do? that shouldn't happen
<clever>
lovesegfault: oh, weird, teamspeak is only broken on 19.09
<clever>
lovesegfault: and there goes the -small channel!
<clever>
wait... now teamspeak works again....
<clever>
but a bisect points to a commit entirely unrelated to teamspeak
<clever>
whatever auto-answers it is now broken
<clever>
the build just hangs with a EULA in the log output
<clever>
a more annoying problem is building the teamspeak client
<clever>
lovesegfault: a recent failure of qtwebengine building has stopped me from building plex-media-player, but everything else works fine
<clever>
lovesegfault: the scripts only care that the tested jobset is passing, which grabs a subset of important packages
<clever>
so some things may slip through
<clever>
lovesegfault: -small also tests fewer things
<clever>
lovesegfault: the -small channels don't wait for hydra to build everything, so they update faster (but then you don't have good coverage on the cache)
<clever>
lovesegfault: the main channels wait for hydra to finish building everything, plus testing a few select things (the tested job)
<clever>
and to make a link to your deps' source pin derivations
<clever>
manveru: then you can follow those to see the code behind the pin
<clever>
manveru: my preferred way is to generate symlinks, like ln -sv ${fetchurl ...} reponame
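A sketch of that symlink trick, with a hypothetical dependency; exposing this derivation from release.nix makes hydra build, and therefore cache, the pinned sources:

```nix
{ pkgs ? import <nixpkgs> {} }:
let
  # hypothetical pinned source
  reponame = pkgs.fetchurl {
    url = "https://example.com/reponame-1.0.tar.gz";  # placeholder url
    sha256 = pkgs.lib.fakeSha256;                      # replace with the real hash
  };
in
# $out is just a directory of symlinks pointing at each pinned source
pkgs.runCommand "source-pins" {} ''
  mkdir $out
  ln -sv ${reponame} $out/reponame
''
```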
<clever>
manveru: as for making hydra cache things properly, you just need to generate a derivation that depends on the same fetchurl calls, and expose it in release.nix
<clever>
pkgs.fetchzip forces you to supply a sha256, and makes it simpler to cache
<clever>
another thing, switching from fetchTarball to pkgs.fetchzip would help
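A sketch of that switch (url and hash are placeholders):

```nix
# before: no fixed hash, so nix re-checks the url after the TTL expires
# src = builtins.fetchTarball "https://github.com/owner/repo/archive/master.tar.gz";

# after: fixed-output, cached by hash, and substitutable from a binary cache
src = pkgs.fetchzip {
  url = "https://github.com/owner/repo/archive/v1.0.tar.gz";  # hypothetical url
  sha256 = "0000000000000000000000000000000000000000000000000000";  # real hash here
};
```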
<clever>
yeah
<clever>
no need to check for changes, if you know the content is unchanging
<clever>
manveru: if a sha256 is present, i think it just ignores the ttl entirely, and uses the $out if it exists in /nix/store/
<clever>
manveru: the ttl mainly controls re-checking http for changes, when you already have the file
<clever>
manveru: far fewer steps
<clever>
manveru: but if you use a binary cache, it will stream it through a decompressor, and deserialize it into /nix/store
<clever>
manveru: then more cpu churn, as it's serialized, hashed, and sent to nix-daemon, then deserialized back into /nix/store/
<clever>
manveru: then you have a cpu delay to uncompress and untar it to a tmp folder, as your current user
<clever>
manveru: when using things like fetchTarball against github, you first have a slow delay at github, generating the tar and streaming it over
<clever>
manveru: if you provide a sha256 ahead of time, it can also search the binary cache for things, and fetch it in a faster manner
<clever>
manveru: the file and unpacked links point into the nix store, but aren't GC roots, so nix can GC things, and they just act as a shortcut to save time
<clever>
manveru: the .info files contain metadata, like when it was last checked (to control how often it rechecks), and the etags (to help with cache-control headers)
<clever>
manveru: when nix is fetching things at eval time, it manages some symlinks in ~/.cache/nix/tarballs/
<clever>
and that's a major bottleneck
<clever>
the main problem i ran into, is that builtins.fetchTarball, without a sha256, can't use the binary cache
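A sketch of the two forms (the rev and hash are placeholders):

```nix
{
  # no sha256: re-downloaded after the TTL expires, and never
  # substitutable from a binary cache
  nixpkgsUnpinned = builtins.fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/master.tar.gz";

  # with a sha256: fixed-output, so if $out already exists in
  # /nix/store (or a binary cache), no network round-trip is needed
  nixpkgsPinned = builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/master.tar.gz";
    sha256 = "...";  # placeholder hash
  };
}
```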
<clever>
and then it can get things from there after a gc
<clever>
niso: this adds an input to the jobset, of type githubpulls, which will fetch a list of all PRs, and pass it to toxvpn/default.nix (line 6) as a json file
<clever>
niso: i would generate the json using nix, things like map and listToAttrs
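A sketch of generating one jobset per branch with map and listToAttrs (the branch list and jobset fields are illustrative, and hydra expects more keys in practice):

```nix
let
  branches = [ "master" "staging" ];  # hypothetical hard-coded branch list
in
# builds { master = { ... }; staging = { ... }; }
builtins.listToAttrs (map (branch: {
  name = branch;
  value = {
    enabled = 1;
    nixexprinput = "src";
    nixexprpath = "release.nix";
    inputs.src = {
      type = "git";
      value = "https://example.com/repo.git ${branch}";  # placeholder repo
    };
  };
}) branches)
```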
<clever>
niso: currently, you need to hard-code the list of branches into a nix file, and use declarative jobsets to generate the jobsets
<clever>
kenran: the only real difference is that you can have several of them, and you use the 2nd argument to refer to other packages+overrides, rather than rec{
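A minimal sketch of the two forms side by side (the override itself is a hypothetical example):

```nix
# packageOverrides: a single function over the previous package set
{
  nixpkgs.config.packageOverrides = super: {
    hello = super.hello.overrideAttrs (old: { doCheck = false; });
  };
}
```

```nix
# overlays: a *list* of two-argument functions; `self` is the final,
# fully-overridden set, `super` is the set before this overlay
{
  nixpkgs.overlays = [
    (self: super: {
      hello = super.hello.overrideAttrs (old: { doCheck = false; });
    })
  ];
}
```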
<clever>
kenran: overlays are a list of functions that take both the previous package set and the final package set, and return the top-level attrs to overwrite
<clever>
kenran: packageOverrides just takes a previous version of the package set, and returns the top-level attrs to overwrite
<clever>
`man nix-hash`
<clever>
oops
<clever>
lordcirth__: nix will accept both
<clever>
lordcirth__: base32 vs base64
<clever>
which you can now nix copy over
<clever>
exarkun: i can confirm the above nix-build generates the same $out
<clever>
exarkun: and confirm it has the same output as line 58 of the `show-derivation` gist
<clever>
what does the new version use in its place?
<clever>
did the old version use it?
<clever>
lordcirth__: yeah
<clever>
lordcirth__: oh, and `make` in the middle
<clever>
lordcirth__: phases hasn't been set, so it will do the usual configure/build/install stuff, which includes running ./configure and then running `make install`
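So a derivation like this (names are illustrative), with no `phases` attribute, gets the generic builder's defaults:

```nix
pkgs.stdenv.mkDerivation {
  pname = "example";   # hypothetical package
  version = "1.0";
  src = ./.;
  # no `phases` set: stdenv runs unpackPhase, patchPhase,
  # configurePhase (./configure), buildPhase (make),
  # installPhase (make install), plus the fixup phase
}
```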
<clever>
lordcirth__: that file is the one that builds it
<clever>
that also works
<clever>
inkbottle: and it's easy enough to enable docker in nixos
<clever>
inkbottle: docker is likely the simplest way to get a debian-like env
<clever>
inkbottle: all i can think of then is to try using yarn2nix instead of plain npm
<clever>
inkbottle: does ` pkg-config --cflags --libs libusb-1.0` work in the nix-shell?
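A shell.nix that should make that pkg-config check pass (a sketch; on older nixpkgs the attribute may be spelled `pkgconfig`):

```nix
# shell.nix
with import <nixpkgs> {};
mkShell {
  nativeBuildInputs = [ pkg-config ];   # the pkg-config tool itself
  buildInputs = [ libusb1 ];            # provides libusb-1.0.pc
}
```

Then `nix-shell --run 'pkg-config --cflags --libs libusb-1.0'` should print the include and linker flags.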