<clever>
you're making it behave like a weaker system, as viewed by a subset of the processes
<clever>
and i think using cgroups on a more powerful machine, would just recreate that problem
<clever>
yeah
<clever>
cgroups wont really help that kind of thing
<clever>
or i -j 1 and wait 4 times as long for the build
<clever>
so i either -j 4, and let 3 gcc's fight over the ram with that 1 thing
<clever>
but there is no way to set that
<clever>
so i dont want any parallelization on just that step
<clever>
there is a step in the glibc build, that takes a crap-ton of ram
<clever>
and there is a related problem ive run into on my arm builders
<clever>
and i saw a bug a few months ago where a derivation ignored that entirely, and used /proc/cpuinfo to max the system out on its own
<clever>
rly: so nix-daemon doesnt know how many cores a given derivation is actually going to use
<clever>
rly: another issue related to what you said, build-cores simply sets $NIX_BUILD_CORES when a build starts, and the stdenv may run make -j $NIX_BUILD_CORES, or it may just run make
<clever>
rly: you probably added it to extraConfig
<clever>
each derivation will only use 1 core
<clever>
rly: but on the other hand, build-max-jobs=10, build-cores=1 will waste resources when you have a series of derivations that depend on each other
<clever>
rly: the issue, is that build-cores doesnt always help derivations (some ignore it), so if you go purely with build-max-jobs=1, build-cores=10, it will waste resources
<clever>
correction, 8, and 8, so 64 things
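The worst case is just jobs times cores; a minimal shell sketch, using the build-max-jobs=8 and build-cores=8 figures from this log:

```shell
# worst-case concurrency: up to build-max-jobs derivations at once,
# each of which may run `make -j $NIX_BUILD_CORES`
jobs=8    # build-max-jobs
cores=8   # build-cores
echo $((jobs * cores))   # prints: 64
```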
<clever>
rly: so it will run up to 40 things at once
<clever>
rly: line 5 says it can do up to 5 derivations at once, and line 16 says each of those can use 8 cores on make
<clever>
rly: can you pastebin the contents of /etc/nix/nix.conf?
<clever>
but it cant assert that it is null, because the assert false; null; is bailing before it can return null, lol
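The tripwire pattern being described, as a minimal Nix sketch (the attribute name comes from the make-bootstrap-tools.nix hunk quoted in this log):

```nix
# tripwire sketch: anything that forces crossSystem now dies with
# "assertion failed" at this exact location, instead of quietly getting null
{
  crossSystem = assert false; null;
}
```

Because the assert fires before the value is returned, the evaluator's backtrace points at whatever code read the attribute.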
<clever>
which asserts that it is null
<clever>
it is an argument to that default.nix
<clever>
while evaluating anonymous function at /home/clever/apps/nixpkgs/pkgs/stdenv/linux/default.nix:6:1, called from /home/clever/apps/nixpkgs/pkgs/stdenv/linux/make-bootstrap-tools.nix:176:21:
<clever>
and that was added in the most recent change to that file
<clever>
i think
<clever>
something is reading the crossSystem attribute, when it shouldnt be
<clever>
183 crossSystem = assert false; null;
<clever>
and because i can reproduce it locally, i can track it down
<clever>
and that fails
<clever>
error: assertion failed at /home/clever/apps/nixpkgs/pkgs/stdenv/linux/make-bootstrap-tools.nix:183:19
<clever>
[clever@amd-nixos:~/apps/nixpkgs]$ nix-build pkgs/top-level/release.nix -A unstable
<clever>
LnL: in this case, its pkgs/top-level/release.nix in nixpkgs
<clever>
fpletz: the arm build of readelf can detect v6 vs v7
<clever>
fpletz: i was mainly working on a bash script you can add to the stdenv, that would cause the builds to fail if they contain the wrong opcodes
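A rough sketch of that kind of check, assuming the `Tag_CPU_arch` line that `readelf -A` prints for ARM binaries; `arch_ok` is a hypothetical helper, not the actual stdenv script:

```shell
# sketch: fail a build when the ARM attributes claim the wrong architecture
# (arch_ok is hypothetical; the real hook would run readelf -A on each output)
arch_ok() {
  want="$1"; attrs="$2"
  echo "$attrs" | grep -q "Tag_CPU_arch: $want"
}

# example: attributes of a v7 build, as readelf -A would report them
attrs="Tag_CPU_name: 7-A
Tag_CPU_arch: v7"

if arch_ok v6 "$attrs"; then
  echo "arch ok"
else
  echo "wrong arch for v6"   # prints this: a v7 binary fails the v6 check
fi
```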
<clever>
LnL: i plan to nixos it
<clever>
LnL: i have an arm based router on the way thats open-source-ish
<clever>
and it detects a v7 capable cpu
<clever>
yep
<clever>
fpletz: the only reason i noticed, is because i had a mix of v6 and v7 in my cluster, the v7 would make bad v6 builds, then the v6 would barf on them
<clever>
fpletz: it might have been the hand-rolled assembly in openssl, that would ignore -march
<clever>
fpletz: perl and openssl were affected
<clever>
fpletz: i didnt look into that side of it in much depth, and only a few packages did it, not all
<clever>
gchristensen: related, armv6 builds on an armv7 arent pure, the compiler detects a v7 host and makes borked builds
<clever>
Baughn: sent some info in PM
<clever>
Baughn: yeah, let me dig thru this more
<clever>
Baughn: but there is also the odd delay between each of these
<clever>
ah, ive seen that pack in the listings, didnt know it used nix
<clever>
can you gist the nix expressions so i can profile them on this end?
<clever>
ow
<clever>
for no overhead, and assuming the above is an average, thats 4 seconds
<clever>
how many URLs are you trying to convert?
<clever>
there is also a hefty amount of overhead before that gets run, which may be to blame
<clever>
..." | time python ...
<clever>
Baughn: simplest thing i can think of is to put the "time" command at the start, before the python, and see how fast it is
<clever>
Baughn: so now the rpm of your disk comes into play
<clever>
Baughn: nix also wants to sync things to disk between every build
<clever>
Baughn: but its still a bit costly, several forks per conversion
<clever>
Baughn: you can, and as long as python doesnt change, it could cache these conversions
<clever>
Baughn: if you can find a nix expression that returns the ascii code for a given character, you could potentially use map to split a string up, and then escape anything outside a given range
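That splitting step can be done with builtins alone; a sketch, where `escapeChar` is a hypothetical function you would still have to write:

```nix
# sketch: split a string into single-character strings with genList + substring,
# then rebuild it, escaping anything outside a safe set (escapeChar is hypothetical)
let
  s = "http://example.com/?q=a b";
  chars = builtins.genList (i: builtins.substring i 1 s) (builtins.stringLength s);
in builtins.concatStringsSep "" (map escapeChar chars)
```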
<clever>
Baughn: if you can stop the inputs it depends on from changing
<clever>
Baughn: you could split that up, and reuse the forest every time
<clever>
yeah, i believe the evaluator will just block waiting for that derivation
<clever>
import from derivation will cause it to build things at eval time
<clever>
once that step is done, it no longer has to touch any nix expressions
<clever>
Baughn: that will fully evaluate everything bar references, without building anything
<clever>
Baughn: you can test things by just manually running one of the steps, nix-instantiate foo.nix -A bar
<clever>
Baughn: i had to boot with efi off, then do an install with boot.loader.grub.efiInstallAsRemovable = true;
<clever>
Baughn: aha, if virtualbox cant find an efi system partition, it fails so hard that the video output doesnt even come up
<clever>
Baughn: the second step can be made parallel, and happens after the entire nix expression has been evaluated
<clever>
Baughn: then you have nix-store -r, which turns each .drv into an output
<clever>
Baughn: the first step is nix-instantiate, its single threaded, and it turns nix expressions into .drv files
<clever>
Baughn: there are 2 main steps to building things in nix
<clever>
Baughn: i believe nix is entirely single threaded
<clever>
Baughn: trying something different in vbox first
<clever>
Baughn: id prefer qemu if that option worked
<clever>
Baughn: not currently installed
<clever>
Baughn: if i flip efi on in virtualbox, it just fails to launch entirely, it cant fallback to legacy pxe booting
<clever>
jazzencat: it should always use the build from the last 'nixos-rebuild switch'
<clever>
Baughn: it would help if i was able to boot efi under a vm and experiment with it more, just to eliminate user-error from the equation
<clever>
always had to enable the CSM and use legacy booting
<clever>
and ive had bad luck with efi, every install has failed
<clever>
and it has more options
<clever>
mpickering: i'm guessing somebody just forgot to put options on it, ive just been sticking to grub because i know how it works more
<clever>
mpickering: that sounds like the simplest option
<clever>
you will either want to chainload nixos's systemd-boot from the ubuntu EFI, or switch to grub
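The switch-to-grub route would look roughly like this in configuration.nix; a sketch, assuming an EFI system where grub installs as the removable-media loader rather than writing to a device:

```nix
# sketch: disable systemd-boot and let grub (which has many more options) take over
boot.loader.systemd-boot.enable = false;
boot.loader.grub = {
  enable = true;
  efiSupport = true;
  device = "nodev";   # EFI install, no MBR device to write to
};
```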
<clever>
so there is no way to configure it further
<clever>
and systemd-boot has 1 option, enable, nothing else
<clever>
so its ignoring all of the entries you put into boot.loader.grub
<clever>
yeah, that would explain it
<clever>
mpickering: can you pastebin the configuration.nix?
<clever>
so you cant just paste nixos stuff into the ubuntu files
<clever>
nixos needs to update the bootloader config every time you do "nixos-rebuild switch"
<clever>
that will break nixos
<clever>
mpickering: can you pastebin the configuration.nix?
<clever>
mpickering: which bootloader are you using?
<clever>
mpickering: now look in /boot/grub/grub.cfg and see if they are added
<clever>
mpickering: did you run nixos-rebuild switch?
<clever>
and the old jobs will keep using old expressions
<clever>
gchristensen: changes in the nix expression should cause the job to restart on its own
<clever>
mpickering: and the entire 8-13 section should be copy/pasted from debian's grub config
<clever>
mpickering: some parts like line 6 will need to be modified to suit your setup
<clever>
arianvp2_: and then model it like every other package
<clever>
arianvp2_: in users.nix, you want to use pkgs.callPackage ./zsh-config.nix
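That pattern, as a minimal sketch; the inputs and body of zsh-config.nix here are hypothetical, the point is that callPackage injects them from the one nixpkgs you already have:

```nix
# zsh-config.nix — modeled like any other package, so callPackage supplies
# the dependencies instead of a second `import <nixpkgs> {}`
{ stdenv, zsh }:   # hypothetical inputs

stdenv.mkDerivation {
  name = "zsh-config";
  buildInputs = [ zsh ];
  # ... build steps for the config ...
}
```

and then in users.nix, something like `(pkgs.callPackage ./zsh-config.nix {})`, which respects whatever nixpkgs.config is already in force.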
<clever>
arianvp2_: that second nixpkgs will obey the ~/.nixpkgs/config.nix of whatever user ran nixops
<clever>
arianvp2_: there is also an unrelated problem in your zsh-config, you import <nixpkgs> again, so it will ignore all config set in nixpkgs.config
<clever>
arianvp2_: that currently says simple_le, which isnt right
<clever>
arianvp2_: can you pastebin the whole config?
<clever>
LnL: in my case, i have an old kdenlive in ~/.nix-profile, and the new kdenlive in nix-shell gets upset about incompatible kde stuff in the env
<clever>
ixxie: it can
<clever>
LnL: kdenlive is the most recent example i came across
<clever>
LnL: some naughty packages use propagated-user-env packages and refuse to work under nix-shell
<clever>
ixxie: ah, what you have would work then
<clever>
ixxie: no need to change any system settings
<clever>
ixxie: if you nix-shell -A linuxPkgs.arcane-chat in this folder, you get all of the stuff you need to develop with it
<clever>
Mic92: in my case, i reference it directly via nix-build with -A config.system.build.kexec_tarball
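A sketch of how such an attribute gets planted; `make-kexec.nix` is a hypothetical file standing in for whatever actually builds the tarball:

```nix
# sketch: a NixOS module stashing a custom artifact under system.build
# (the section is untyped, so any attribute name and value is accepted)
{ config, pkgs, ... }:

{
  system.build.kexec_tarball = pkgs.callPackage ./make-kexec.nix {
    inherit config;
  };
}
```

which is then built directly with `nix-build ... -A config.system.build.kexec_tarball`.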
<clever>
Mic92: its mainly used for internal nixos things, and you then reference it from elsewhere
<clever>
Mic92: the entire system.build section lacks a type, so the module framework just lets you go wild and do whatever you want
<clever>
kier: sudo likes to wipe the env, and nixos-rebuild requires a nixos-config entry in the path, you left it out, and it didnt fail, so sudo must have blocked things
<clever>
Mic92: one idea is that on startup, it will query an http url thats programmed into it, like http://example.com/online?id=foo, which will then either dump a config file back, or pass the IP on for a user to control
<clever>
Mic92: i do plan to have it serialize all of this config, so it can be run in a more automated fashion
<clever>
but i didnt want to bother with trying to support all of the variations
<clever>
Mic92: the "proper" way to do it, is to have the init system (systemd or the older sysinit stuff) do a full shutdown, and then "kexec -e" rather than reboot/halt/poweroff