<clever>
systemd-boot, i havent even seen a limit on how many generations it puts in /boot, and i'm not sure if it even has a rollback menu?
<clever>
grub supports both
<clever>
ah
<clever>
if you set boot.loader.grub.efiInstallAsRemovable it should work on a usb stick, and if you also set device, it should boot via legacy, as long as you have both a bios boot partition, and a fat32 efi system partition
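the options above can be sketched as a configuration.nix fragment (the device name is an assumption, adjust to your disk):

```nix
{
  boot.loader.grub = {
    enable = true;
    efiSupport = true;
    # install grub to the fallback EFI path so it boots without
    # touching efi variables (useful on a usb stick)
    efiInstallAsRemovable = true;
    # also install the legacy bios copy of grub to the disk's mbr
    device = "/dev/sda"; # assumption: pick your actual disk
  };
}
```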
<clever>
it should be possible to do both
<clever>
no clue
<clever>
this lets you cut out locales you dont want
<clever>
with 3, it gives the same error as chrome
<clever>
with 4, it wants me to continue the statement
<clever>
i was using 1-------1 to test
<clever>
and in chrome, it evals to this
<clever>
Uncaught ReferenceError: Invalid left-hand side expression in postfix operation
<clever>
it thinks there is more to the statement
<clever>
doesnt work in nodejs
<clever>
Infinisil: so it parses fine, but will probably blow up at runtime, it expects a ......
<clever>
nh2: my idea with the cachecache project, is that you point it to many caches (you would have to nix-push to a normal http server), and then it would multiplex for you, and manage choosing which cache to get something from
<clever>
and it will be an immutable snapshot of the headers
<clever>
but keep in mind, it will import a copy of the include dir when nix-shell is run
<clever>
schmits: add an attribute to the derivation, CPLUS_INCLUDE_PATH = "${./include}";
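a minimal sketch of such a derivation; note that CPLUS_INCLUDE_PATH is a colon-separated list of directories searched by gcc (no -I prefix), and the ${./include} interpolation is what imports the directory into the store:

```nix
with import <nixpkgs> {};

stdenv.mkDerivation {
  name = "example-shell";
  # the path interpolation copies ./include into /nix/store,
  # so the build sees an immutable snapshot of the headers
  CPLUS_INCLUDE_PATH = "${./include}";
}
```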
<clever>
then it would need the .zip or .tar url from github
<clever>
fetchTarball would unpack it after fetching
<clever>
ldesgoui: need to call fetchurl on it first
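the two fetch styles being compared can be sketched like this (the urls are examples, and the sha256 is a placeholder to fill in):

```nix
with import <nixpkgs> {};

{
  # fetchurl just downloads the file; something else must unpack it
  asZip = fetchurl {
    url = "https://github.com/NixOS/nixpkgs/archive/master.zip";
    sha256 = lib.fakeSha256; # assumption: replace with the real hash
  };
  # builtins.fetchTarball downloads and unpacks in one step
  unpacked = builtins.fetchTarball
    "https://github.com/NixOS/nixpkgs/archive/master.tar.gz";
}
```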
<clever>
catern: and the modules are able to set each other's options and inter-connect
<clever>
catern: the default.nix under nixos will merge all of the core modules and the <nixos-config> module together, to create a single config attrset
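a minimal module sketch (file names are hypothetical); every module gets the merged result back through the `config` argument, which is how modules can read options that other modules set:

```nix
# my-module.nix -- hypothetical example module
{ config, lib, pkgs, ... }:
{
  imports = [ ./other-module.nix ];
  # set an option declared elsewhere; any other module can read it
  # back as config.networking.hostName
  networking.hostName = "example";
}
```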
<clever>
not build nova-image.nix directly
<clever>
you must put nova-image into the nixos-config, and build nixpkgs/nixos/default.nix
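sketched as a command, assuming nova-image.nix is in the current directory:

```sh
# build nixos/default.nix, pointing <nixos-config> at nova-image.nix,
# and select the system attribute with -A
nix-build '<nixpkgs/nixos>' -A system -I nixos-config=./nova-image.nix
```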
<clever>
the example you pasted already has that
<clever>
use -A
<clever>
which is exactly how configuration.nix works
<clever>
and then aim nixos-config at that
<clever>
you can also add any of these modules to the imports section of a custom one
<clever>
so you dont notice that the . isnt part of the path
<clever>
and the errors say set.a.b
<clever>
its just an unusual thing that not many will expect
<clever>
yeah, { "a.b" = value; }
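in a nix repl this looks roughly like (output illustrative):

```nix
# the quoted name makes one attribute literally called "a.b",
# not a nested set:
#   nix-repl> set = { "a.b" = 42; }
#   nix-repl> set."a.b"
#   42
#   nix-repl> set.a.b
#   error: attribute 'a' missing
{ "a.b" = 42; }
```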
<clever>
which will give fun error messages, because set.a.b doesnt exist
<clever>
its an attribute named a.b
<clever>
joepie91: there are also some edge cases, like set.a-b or set."a.b"
<clever>
joepie91: i only have 3gig of ram on this laptop, so it takes a while to open things
<clever>
joepie91: ah, just opened the gist
<clever>
nh2: nope, nix expects the entire store as one stream
<clever>
joepie91: what about this region?
<clever>
279 %left OR
<clever>
joepie91: i think its in the expr_op definition in parser.y
<clever>
joepie91: only source i can think of is the parser and lexer files
<clever>
look up the nixpkgs overlays in the docs
<clever>
for normal nixpkgs, its in ~/.config/nixpkgs/overlays i think
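a minimal overlay sketch for that directory (file and attribute names are assumptions):

```nix
# ~/.config/nixpkgs/overlays/example.nix
# picked up automatically by nix-env/nix-build, outside configuration.nix
self: super: {
  hello-patched = super.hello.overrideAttrs (old: {
    pname = "hello-patched";
  });
}
```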
<clever>
but thats for configuration.nix
<clever>
2017-06-26 21:45:59< Infinisil> You can however clone the nixpkgs-mozilla repo, then link to it with nixpkgs.overlays = [ (import path/to/mozilla/overlay/rust-overlay.nix) ];
<clever>
i think thats in the mozilla overlay
<clever>
its usually simpler to use the rust thats already packaged into nixpkgs
<clever>
there is a --print-rpath
<clever>
yeah
<clever>
the gist i linked can do that
<clever>
you must edit the rpath with patchelf, to point to zlib
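a hedged example of doing that with patchelf (the binary name is an example):

```sh
# show the current rpath of the downloaded binary
patchelf --print-rpath ./rustup-init

# build zlib from nixpkgs and point the binary's rpath at its lib dir
zlib=$(nix-build '<nixpkgs>' -A zlib)
patchelf --set-rpath "$zlib/lib" ./rustup-init
```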
<clever>
installing libraries with nix-env will never make them work
<clever>
nope
<clever>
i have used nix-index, and it works well when you know the path
<clever>
so the rustup binary will keep working
<clever>
and as long as you keep that result symlink, nixos wont delete the libraries
<clever>
that bash script can be run on an ELF file to patch it
<clever>
_habnabit: if you run nix-build on this file, you will get a result symlink pointing to a bash script
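a sketch of what such a file could look like (not the actual gist; writeScript comes from nixpkgs):

```nix
with import <nixpkgs> {};

# nix-build on this file leaves a ./result symlink pointing at the script
writeScript "fix-elf" ''
  #!${stdenv.shell}
  # patch the given ELF binary to use nix's dynamic linker
  ${patchelf}/bin/patchelf --set-interpreter \
    "$(cat ${stdenv.cc}/nix-support/dynamic-linker)" "$1"
''
```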
<clever>
then you can take advantage of the optimization nginx has already had
<clever>
nh2: it may also help to mix the 2, use a slower daemon with a full cache of everything, then a nginx over that, that can quickly serve recently used data
<clever>
nh2: and will be configured to prefer a cache.nixos.org signature, when one exists
<clever>
nh2: something ive been thinking of, is a custom daemon, that can query several binary caches, then cache them all in one spot, and serve them up
<clever>
nh2: at least on cache.nixos.org, the files are immutable, but when you take different cache providers into account, they may each have different narinfo contents
<clever>
and f isnt always going to give the same out for the same in
<clever>
and the output is generated via output=f(input)
<clever>
ah, yeah
<clever>
catern: CA storage?
<clever>
now i have to unpack it to use the content, and then i just delete the zip
<clever>
its the same as serving .zip files in a torrent
<clever>
that just encourages people to delete it from the ipfs store, and then nobody benefits from it
<clever>
so you need 2 copies of everything you install (one in /nix/store, one in the ipfs store)
<clever>
and related, the object must remain in the ipfs storage dir, in blocked form (not usable by nix) to share with others
<clever>
catern: so if nobody asks for a given object, it just doesnt get mirrored, and it dies off
<clever>
catern: another limitation of IPFS, is that you only store the things you asked the network for
<clever>
so when it gets the nar request, it knows exactly what path to pack up on-demand
<clever>
for example, hydra uses a narpath that is identical to the storepath, without even a .nar at the end
<clever>
and the naming of the nar file depends on the server
<clever>
and the narinfo files act as a map between things
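roughly what a .narinfo looks like; the hashes, sizes, and signature below are invented placeholders:

```
StorePath: /nix/store/hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh-hello-2.10
URL: nar/hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh.nar.xz
Compression: xz
NarHash: sha256:0000000000000000000000000000000000000000000000000000
NarSize: 63360
References: hhhhhhhhhhhhhhhhhhhhhhhhhhhhhhhh-glibc-2.25
Sig: cache.nixos.org-1:AAAA...
```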
<clever>
yeah
<clever>
and your "lookup" function is gcc
<clever>
its more of a hash(source) = compiled_binary database
<clever>
and the binary cache stuff isnt based on the hash of the content
<clever>
so you need to know the hash of the content to ask for it
<clever>
ipfs is mainly a hash(value) = value database
<clever>
nh2: and it currently has a bug, where it doesnt save the signatures from cache.nixos.org, so anything you get from nixos, is re-signed by your key
<clever>
nh2: and ensure the public half is in the nix.conf file
<clever>
nh2: you need to generate a keypair (nix-store man page), then use --key-file on nix-push so it can sign things
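a hedged sketch of that flow for the nix 1.x tooling being discussed (key and cache names are examples):

```sh
# generate a signing keypair (see the nix-store man page)
nix-store --generate-binary-cache-key mycache-1 cache-priv.pem cache-pub.pem

# push a path, signing the narinfo files with the private key
nix-push --dest /var/www/cache --key-file cache-priv.pem \
  $(nix-build '<nixpkgs>' -A hello)

# clients then need the public half in nix.conf:
#   binary-caches = http://example.com/cache
#   binary-cache-public-keys = mycache-1:<contents of cache-pub.pem>
```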
<clever>
hodapp: now i can just debug it locally
<clever>
hodapp: i can confirm the same problem in a vm when i use the config from the gist
<clever>
so are the narinfo files
<clever>
nh2: what is the contents of nix.conf on that machine?
<clever>
hodapp: testing...
<clever>
nh2: dont see one, youll need to use one of the many extraConfig options to insert it at the right place
<clever>
copy it somewhere, change display_errors, and then manually run php-fpm with the right env, and the new php.ini path
<clever>
hodapp: if you read the php-fpm command from the systemd unit, youll find the current php.ini path
<clever>
hodapp: did you ever try it with display_errors = on in the php.ini?
<clever>
i dont think nix can read narinfo+nar files from a local dir
<clever>
gchristensen: as long as you serve the original .narinfo files from cache.nixos.org, the signatures stay intact, but you would need to host the unpacked torrent on http, and set that as a binary cache
<clever>
also, when i check the A records for cache.nixos.org, i can see 8 ipv4 addresses behind the domain
<clever>
and since you also have nginx acting as a mitm for cache.nixos.org, you can see it trying again on another cache
<clever>
checking the http access logs can reveal what its doing
<clever>
nix-daemon may ignore them entirely
<clever>
and if you dont, the narinfo files just wont be signed
<clever>
you have to pass it a key with --key-file
<clever>
and nix-push will re-sign things, so the narinfo will be wrong
<clever>
i think nginx has an index of what it has downloaded
<clever>
but just throw out the bytes as it goes
<clever>
another thought, is to write a custom client that will recursively browse the narinfo tree and fetch every nar via the cache
<clever>
ah
<clever>
almost sounds like cloudfront needs more work? do we know anything about the internals behind how it works?
<clever>
but i do have ~5 nixos machines at home, and a local cache to share between them would help
<clever>
and maybe being in your region
<clever>
i would expect cloudflare caching and ec2 stuff to not be that much better than cache.nixos.org, other than it being an isolated instance
<clever>
ah
<clever>
ah
<clever>
i just run my own recursive cache
<clever>
nh2: i also noticed you use 8.8.8.8, but i recently had issues with 8.8.8.8 claiming some servers i use dont exist
<clever>
ah, i see a diff
<clever>
nh2: i also need to test out your nginx nixos cache when i get home, ive got gigabit inside the lan so it should help with the latency of common things
<clever>
nix-store -r /nix/store/foo
<clever>
yeah
<clever>
integrateddynamics: 3251ms
<clever>
my log file says things like this
<clever>
tconstruct: 722ms
<clever>
they are split between the modern mc and the old 1.7 stuff
<clever>
what version of MC?
<clever>
your forge might be too old to make it
<clever>
mine was in ~/.MCUpdater/instances
<clever>
yeah, you can limit the search to there if youre using ftb
<clever>
joepie91: try this cmd, find $HOME -name loading-log.log
<clever>
joepie91: do you see a loading-log.log file?
<clever>
the fml logs may say which mod is to blame
<clever>
i remember it being more like 10mins
<clever>
or fixing the initialization code in every mod
<clever>
and i think its heavily single threaded
<clever>
most of my time is spent in the cpu anyways
<clever>
boomshroom: it works even on the non-oracle jvm
<clever>
boomshroom: you can also run jvisualvm from the oraclejdk to debug the java process
<clever>
it can help to just put everything into one gist
<clever>
joepie91: nice
2017-07-02
<clever>
the server thinks youre a different player
<clever>
the only thing i can think of that would cause that, is if you were forcing the game to play offline
<clever>
boomshroom: and if i forget to even play the game for a week, i can login to a surprise of more stuff, or game-breaking lag from an out of control farm, lol
<clever>
(aka, having to leave the entire client open)
<clever>
boomshroom: i also prefer a private server so chunkloaders dont cripple my system when i'm not actually playing
<clever>
brb
<clever>
boomshroom: it might have been generated somewhere, you can try asking Baughn next time he is on
<clever>
spinus: stateVersion controls a range of things that arent backwards compatible, like the postgresql format on-disk, what type of ssh host keys sshd uses, and a few others
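it is set once in configuration.nix at install time, e.g. (the version shown is an example):

```nix
{
  # record the release this machine was first installed with;
  # leave it at that value across upgrades
  system.stateVersion = "17.03";
}
```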
<clever>
magnetophon1: you may want to compare it to the new versions of the kernel stuff in nixpkgs
<clever>
spinus: the whole point of that option, is so that nixos knows what state your nixos was originally installed in
<clever>
spinus: it should never be bumped
<clever>
and report a bug to musnix
<clever>
magnetophon1: it might be simpler to just use an older version of nixos, try 17.03 or 16.09
<clever>
magnetophon1: that sounds like the recent cross-compiler changes
<clever>
then system is still intact
<clever>
magnetophon1: and does system-75-link exist?
<clever>
magnetophon1: what does this say about system?
<clever>
ls -l /nix/var/nix/profiles/
<clever>
there shouldnt be any link or dir called system-profiles
<clever>
magnetophon1: you need to leave 2 links, the un-numbered one, and the one it points to
<clever>
spinus: the one i linked should have most of them
<clever>
spinus: this only puts the qemu drivers into the initrd, but doesnt force weird config onto things