<clever>
raghavsood: yeah, somethings wrong there, is sda GPT or MBR?
<clever>
raghavsood: try running the bootinfo script, from the debian liveimage
<clever>
raghavsood: are you able to boot it into a rescue console?
<clever>
raghavsood: what does the script report, after you install nixos onto one of the failing machines?
<clever>
as for the nvme machine, you have 2 nvme devices, each with an ext4 boot and a zfs vdev, and the bootinfo script doesnt think legacy would work, so its not looking for it
<clever>
about all it does, is make the boot order in the bios not matter
<clever>
raghavsood: so sdb isnt giving you very much redundancy
<clever>
raghavsood: sda is configured to find an fs with a given uuid (or default to sda1) and then read /grub from that, sdb is configured to just blindly read sda sector 1, which then does the same as above
<clever>
and pastebin the results
<clever>
raghavsood: can you run this on one of the working machines?
<clever>
bqv: which version of nixpkgs are you using?
<clever>
bqv: what if you run `nix path-info` on that bash?
<clever>
bqv: where does that path appear in the derivation, `nix show-derivation /nix/store/foo.drv` ?
<clever>
bqv: can you pastebin more of the build log?
<clever>
bqv: which expression are you trying to build?
<clever>
bqv: that looks like a sandboxing issue
<clever>
bqv: yep
<clever>
nix#3628
<clever>
> pkgs.path
<clever>
betawaffle: it lets you add custom args to every module, in the same place { config, pkgs, lib, ... }: happens
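as a sketch, this is what that looks like in a NixOS module (the `myArg` name is a made-up example):

```nix
# sketch: _module.args adds custom arguments available to every module
{ config, pkgs, lib, ... }:
{
  _module.args.myArg = "some value";   # hypothetical argument name
}

# any other module can then take it as a parameter:
# { config, pkgs, myArg, ... }: { ... }
```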
<clever>
energizer: i believe mobile-nixos is already able to do that as well
<clever>
mmchen: is the lsp server running under nix-shell?
<clever>
,libraries mmchen
2020-05-27
<clever>
bqv: only if you use self.passthru
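a small sketch of the point about $out, assuming a plain `<nixpkgs>` import: interpolating a derivation into a string yields its outPath.

```nix
# sketch: string-interpolating a derivation yields its store path ($out)
let
  pkgs = import <nixpkgs> { };
in {
  viaInterp = "${pkgs.hello}";     # "/nix/store/<hash>-hello-<version>"
  viaAttr   = pkgs.hello.outPath;  # the same string
}
```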
<clever>
> "${hello}"
<clever>
bqv: the derivation itself is $out
<clever>
bqv: why are you trying to get $out from passthru?
<clever>
bqv: runCommand has its own $out
<clever>
cole-h: nope
<clever>
wchresta: but you could also just `makeWrapper ${a}/bin/foo $out/bin/foo`
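a minimal sketch of that pattern with pkgs.runCommand (the wrapped program and the env var are just examples):

```nix
# sketch: wrap an existing program with makeWrapper inside runCommand
let
  pkgs = import <nixpkgs> { };
in
pkgs.runCommand "hello-wrapped"
  { nativeBuildInputs = [ pkgs.makeWrapper ]; }
  ''
    mkdir -p $out/bin
    # makeWrapper <thing to run> <thing to output to>
    makeWrapper ${pkgs.hello}/bin/hello $out/bin/hello \
      --set LANG C   # example env var
  ''
```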
<clever>
wchresta: so basically, `wrapProgram foo` will rename foo -> .foo-wrapped, then runs `makeWrapper .foo-wrapped foo`
<clever>
wchresta: makeWrapper takes a thing to run, and a thing to output to, while wrapProgram takes a single param, renames it, then runs makeWrapper on the renamed + original
<clever>
wchresta: pkgs.runCommand with makeWrapper should do
<clever>
jtojnar: not sure, usually you just <nixpkgs/nixos/something>
<clever>
jtojnar: imports cant depend on the pkgs passed into a module
<clever>
cole-h: you may need to add it to boot.supportedFilesystems first
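in configuration.nix that would be (sketch):

```nix
# sketch: enable NTFS support so the ntfs-3g mount helper is available
{
  boot.supportedFilesystems = [ "ntfs" ];
}
```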
<clever>
cole-h: i think it was `mount -t ntfs-3g` maybe?
<clever>
cole-h: you might be using the read-only ntfs implementation
<clever>
cole-h: where did `backup` come from?
<clever>
cole-h: what does `lvdisplay -C` report?
<clever>
keithy[m]: you can just `nix-shell -p coreutils`, even on a mac
<clever>
balsoft: the user is within the config file, so i expect the service to do its own root-drop
<clever>
not sure
<clever>
__red__: while the home = libDir; tells nixos to make it as the home of a user
<clever>
__red__: StateDirectory tells systemd to dynamically create its own dir with the right privs
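as a sketch in a NixOS systemd service definition (service and directory names are examples, not the actual bacula module):

```nix
# sketch: StateDirectory makes systemd create /var/lib/<name>
# with the right ownership before the service starts
{
  systemd.services.bacula-dir.serviceConfig = {
    StateDirectory = "bacula";   # systemd creates /var/lib/bacula
    User = "bacula";
  };
}
```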
<clever>
__red__: i would expect it to be owned by bacula
<clever>
balsoft: ah, hadnt looked at that attack angle
<clever>
iclanzan: nix-serve doesnt allow you to list things, so the attacker would need to know the hash of a store path to download it
<clever>
iclanzan: its only a risk if you tell others what the path to one of the secrets is, or something depending on the secret
<clever>
benny: the timestamp of the symlinks in /nix/var/nix/profiles/
<clever>
iclanzan: it has to be set up in the /etc/nix/nix.conf file
<clever>
iclanzan: and nix will just fetch the deps automatically
<clever>
iclanzan: all binaries made by nix, use libraries in /nix/store, even on darwin
<clever>
iclanzan: yes
<clever>
iclanzan: if the cache is configured, it pulls a pre-built copy, if the cache isnt configured, nix builds whats missing
<clever>
iclanzan: and they can still just run nix-shell
<clever>
iclanzan: there is no difference between linux and nixos builds
<clever>
iclanzan: you compile on each platform, then push to the cache server
<clever>
iclanzan: run your own cache? or use cachix?
<clever>
iclanzan: not really
<clever>
chloekek_: that is exactly what haskellPackages.callCabal2nix does
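i.e. roughly this (package name and path are examples):

```nix
# sketch: callCabal2nix runs cabal2nix on a source tree at eval time
# and calls the resulting expression against haskellPackages
let
  pkgs = import <nixpkgs> { };
in
pkgs.haskellPackages.callCabal2nix "mypackage" ./. { }
```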
<clever>
-p takes 0 or more packages, which get added to the buildInputs
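for example (package names are arbitrary; this is a console sketch, not runnable as-is):

```
$ nix-shell -p hello figlet   # both get added to buildInputs
[nix-shell:~]$ make           # make and gcc come from the stdenv itself
```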
<clever>
`-p` isnt `--pure`
<clever>
`nix-shell -p` should give you make, its part of the stdenv
<clever>
keithy[m]: installing anything that isnt compiler related
<clever>
`nix-shell -p` then `make`
<clever>
keithy[m]: run make in nix-shell ?
<clever>
keithy[m]: what kind of use?
<clever>
keithy[m]: you should generally not install any compiler type tools, only ever use nix-shell
<clever>
ive never had such issues with zfs though
<clever>
yeah
<clever>
or a random .drv file is all nulls, so nix cant build anything touching it
<clever>
and now the manifest.nix for nix-env is toast, so they cant install anything
<clever>
other nix users, that did an improper reboot immediately after a nix build
<clever>
but that issue has never been on my own machines
<clever>
ah, that would solve the issue ive seen before
<clever>
so nix would have to open every file in $out and fsync() each one...
<clever>
elvishjerricco: then nix read $out, hashed it, and recorded its presence in db.sqlite
<clever>
elvishjerricco: one case i can see, where nix isnt entirely at fault, is if you just built something, and the binary in the derivation didnt fsync()
<clever>
elvishjerricco: but ext4 is sometimes doing naughty things, and data can still be lost after that point
<clever>
elvishjerricco: my rough understanding, is that when you close() a file, the fs driver is supposed to flush everything to disk, and not return until its committed
<clever>
abathur: and run ended with: Illegal instruction (core dumped)
<clever>
prepare ended with: 2147483648 bytes written in 47.78 seconds (42.86 MiB/sec).
<clever>
abathur: the main storage is a raidz1 over 3 mechanical drives
<clever>
abathur: the nas has an optane that i sometimes use as an SLOG or L2ARC, but its currently inactive
<clever>
Henson: maybe i'm just spoiled by having nvme on everything else, and the spinning rust is just always that slow, lol
<clever>
abathur: found the box for the optane module, it doesnt say much, lol
<clever>
Henson: how do the above numbers look?
<clever>
but ARC hit% was never below 99.5%
<clever>
Henson: putting a 2gig file into /dev/shm, also caused the ARC to drop from 6.8gig to 4.79gig
<clever>
Henson: somehow, its worse than when going over nfs, lol
<clever>
Henson: and thats the performance to write the whole 2gig file back to zfs
<clever>
and that command took 8 seconds to complete
<clever>
Henson: as a test, `time zfs destroy -v naspool/root@zfs-auto-snap_weekly-2020-05-18-00h00`, this snapshot is the final root for 412mb of data
<clever>
Henson: and nix-collect-garbage once ran for over 8 hours recently
<clever>
Henson: basic things like `zfs destroy` on a snapshot can take over 5 mins sometimes
<clever>
Henson: the laptop is nvme single-disk, and theres no real issues with it
<clever>
Henson: the nas with 3 disks in raidz1, seems to perform poorly, but its all spinning rust
<clever>
Henson: performance for zfs, seems to vary wildly from machine to machine
<clever>
when it was otherwise idle
<clever>
Henson: for example, my nas had the arc shrink by almost a gig, for no apparent reason
<clever>
Henson: any kind of load, and it tends to discard the entire ARC
<clever>
Henson: ive not had any issues like that, and ive found the reverse is the problem
<clever>
Henson: thats basically what line 65 does: wipefs -a ${cfg.rootDevice}