<clever>
pie_[bnc]: adding on `-o json` shows ALL fields
<clever>
pie_[bnc]: this shows up in `journalctl -f`
<clever>
Jan 03 18:50:14 amd-nixos pulseaudio[6337]: E: [pulseaudio] module.c: Failed to open module "module-jack-sink".
<clever>
Jan 03 18:50:14 amd-nixos pulseaudio[6337]: E: [pulseaudio] ltdl-bind-now.c: Failed to open module module-jack-sink.so: module-jack-sink.so: cannot open shared object file: No such file or directory
<clever>
pie_[bnc]: the full cmd
<clever>
pie_[bnc]: what command are you using?
<clever>
o1lo01ol1o: its plaintext, so you can peek inside and see whats wrong with it
<clever>
o1lo01ol1o: nix will then compute the name of the narinfo, based on what it wants to read
<clever>
o1lo01ol1o: the --from must point to a directory
<clever>
pie_[bnc]: it could be a hard-coded list of modules!
<clever>
pie_[bnc]: id look into how it even tab completes
<clever>
pie_[bnc]: weird
<clever>
pie_[bnc]: its likely that the old pulse just didnt have it in the search path
<clever>
pie_[bnc]: that will do it!
<clever>
thomashoneyman: can you gist all of the nix files?
<clever>
thomashoneyman: only if dep is somehow passed to stdenv.mkDerivation will it impact the build
<clever>
pie_[bnc]: line 5 should already add bluetooth support, so i dont see 6 being needed
<clever>
pie_[bnc]: check the timestamp on the process in `ps aux` ?
<clever>
pie_[bnc]: have you confirmed that pulseaudio has restarted?
<clever>
pie_[bnc]: ive only been able to initiate a connection with blueman
<clever>
pie_[bnc]: you may need to `pactl exit` to kill the daemon, it should restart itself
<clever>
pie_[bnc]: are you set to the pulseaudio full package?
<clever>
pie_[bnc]: a little bit
<clever>
same as any normal install from the ISO
<clever>
Raito_Bezarius: if you use kexec, then just pick the right channel before you run nixos-install and nixos-generate-config, and it will just start from what you picked
<clever>
Raito_Bezarius: when nixops creates a machine, it will read the current version (whatever it booted with) and remember that as your stateVersion
<clever>
symphorien: done
<clever>
symphorien: local tells it to directly open /nix itself, rather than asking nix-daemon
<clever>
symphorien: that should maybe be changed to `nix ping-store --no-net --store local`
<clever>
Raito_Bezarius: its likely complaining because the option was defined
<clever>
which one?
<clever>
symphorien: also, why are you even running nix on boot?
<clever>
symphorien: if nix is run as root, it shouldnt need nix-daemon, it can just fork the worker out itself
<clever>
symphorien: and there was a bug, where any error in the activation scripts, leads to systemd itself not even being in PATH, and then your boot fails :P
<clever>
symphorien: yes
<clever>
o1lo01ol1o: the narinfo is always named hash.narinfo
<clever>
o1lo01ol1o: nix will have told you that it was downloading /nix/store/hash-name
<clever>
o1lo01ol1o: that will look in dir, for hash.narinfo, and the narinfo should have the relative path to the .nar.xz
<clever>
o1lo01ol1o: i think it was something like `nix copy /nix/store/hash-foo --from file:///home/clever/dir`
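A sketch of the file:// cache layout described in the lines above, with every hash, size, and name as a placeholder: the directory given to `--from` holds one `HASH.narinfo` per store path, and each narinfo carries a relative URL to its nar. The contents of `/home/clever/dir/HASH.narinfo` would look roughly like:

```text
StorePath: /nix/store/HASH-foo
URL: nar/FILEHASH.nar.xz
Compression: xz
FileHash: sha256:...
FileSize: ...
NarHash: sha256:...
NarSize: ...
References: ...
```

The `URL:` field is relative to the cache root, which is why the narinfo and the `nar/` subdirectory must sit in the same directory.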
<clever>
or there was no error!
<clever>
then the real error is going out the vga port, and falling on the floor
<clever>
Raito_Bezarius: are you able to boot it into a rescue env?
<clever>
o1lo01ol1o: you would need something like `wget --continue` to resume every time it fails
<clever>
o1lo01ol1o: if you download the narinfo and nar to a local directory, you can then use nix copy to import it into your store
<clever>
lovesegfault: and if you read the json, what do you see?
<clever>
lovesegfault: you're leaving an eval.json in the directory, which changes its hash
<clever>
lovesegfault: how do they differ?
<clever>
lovesegfault: run `diff -ru` on 2 versions of that path, from the errors
<clever>
evils: lol, who would have thought to make such a tool!
<clever>
evils: id need a decoder wheel to translate!
<clever>
evils: ah, lol
<clever>
jared-w: it is up to the developer to either run `regenerate.sh` themselves, or apply the patch file
<clever>
jared-w: ah, buildkite supports uploading artifacts when the build is done (pass or fail), and this one just uploads a .patch file (line 61)
<clever>
evils: cat?
<clever>
jared-w: link?
<clever>
lovesegfault: whats the current error?
<clever>
lovesegfault: you want to put the shell script outside of nixpkgs
<clever>
lovesegfault: when you edit the shell script, it changes the contents of ., which changes the /nix/store/hash-nixpkgs hash
<clever>
lovesegfault: it stops the very kinds of impurity you're trying to fix :P
<clever>
lovesegfault: you may need to include the copy of nixpkgs in /nix/store, in the -I list
<clever>
lovesegfault: but you will want to set the flag to disallow IFD
<clever>
lovesegfault: allowed-uris is also impure, and shouldnt be involved
<clever>
then `-I .` will also refer to the root
<clever>
lovesegfault: you want to be cd'd into the root of nixpkgs, and give the command a path of nixos/release-combined.nix
<clever>
lovesegfault: `-I .` only allows access to the nixos subdir, you want `-I ..`
<clever>
lovesegfault: what args did you give it, and what dir did you run it in?
<clever>
if you're working on the current dir
<clever>
-I . is often enough
<clever>
lovesegfault: give it a -I that includes the path its trying to access
<clever>
lovesegfault: `--gc-roots-dir` just stops things from GC'ing the .drv files its creating, so hydra has a chance to use them
<clever>
lovesegfault: restricted mode is on, so only paths you specify with -I can be visible
<clever>
lovesegfault: its in pkgs.hydra
<clever>
lovesegfault: mostly, you only need line 13-16
<clever>
lovesegfault: no need for any config file or postgresql database
<clever>
lovesegfault: it wont get the deeper attributes that hydra would eval
<clever>
lovesegfault: nix-instantiate isnt as recursive as hydra
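The lines above can be condensed into a transcript sketch, assuming you are cd'd into the root of a nixpkgs checkout (paths and the gc-roots dir are illustrative):

```console
$ nix-build '<nixpkgs>' -A hydra
$ ./result/bin/hydra-eval-jobs \
    -I . \
    --gc-roots-dir /tmp/roots \
    nixos/release-combined.nix
```

`-I .` makes the checkout visible under restricted mode, and `--gc-roots-dir` keeps the .drv files it generates from being GC'd mid-eval.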
<clever>
edef: it also didnt complete, it crashed the system hard, forcing a reboot
<clever>
rough number, cant remember exactly
<clever>
that broke my system, back when i ran btrfs, lol
<clever>
also, just parsing the nixpkgs release.nix, would generate ~20,000 drv files in /nix/store/
<clever>
gchristensen: if you set that to false with --option, it will hard fail on all IFD
<clever>
$ nix show-config | grep import
<clever>
allow-import-from-derivation = true
<clever>
but its better to just not snapshot /nix
<clever>
gchristensen: if you delete a file, but a snapshot holds a copy, and then you remake it, nix cant dedup, but zfs could
<clever>
gchristensen: was just thinking, one case where zfs dedup would be better than nix-store --optimize is when snapshots get involved
<clever>
craige: but you can use nix-env -p /path/to/profile --rollback or --set to just go to a certain build
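A transcript sketch of the two profile operations mentioned above, with the profile path and generation store path as placeholders:

```console
$ nix-env -p /nix/var/nix/profiles/system --rollback
$ nix-env -p /nix/var/nix/profiles/system --set /nix/store/HASH-nixos-system-myhost
```

`--rollback` steps back one generation; `--set` jumps the profile straight to a specific build you already have in the store.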
2020-01-02
<clever>
alexherbo2: nix-channel --update as root, will fix the main problem in those logs
<clever>
zeta_0: the env vars nixos had set, will mostly still be set in shellHook, but nix-shell may have overwritten some and prepended to others
<clever>
alexherbo2: nixos will re-create it on boot, but your nix-env profile and nix-channel profile will be gone
<clever>
zeta_0: if you run nix-shell on a derivation that has a shellHook, then nix-shell will run whatever bash code is in the shellHook, before giving you a shell
<clever>
zeta_0: ive not looked heavily into direnv yet
<clever>
zeta_0: yeah
<clever>
zeta_0: nix-shell will load shell.nix by default, and fall back to default.nix when shell isnt found
<clever>
zeta_0: i would just put that into the shellHook for your shell.nix
<clever>
zeta_0: by default, nix-shell will add to most env vars
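A minimal shell.nix along the lines discussed above; the package and the hook body are purely illustrative:

```nix
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [ pkgs.hello ];

  # nix-shell runs this bash snippet just before giving you the shell
  shellHook = ''
    export GREETING="hi"
  '';
}
```

Running plain `nix-shell` in the same directory picks this file up by default.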
<clever>
gustavderdrache: generate bash scripts that will set PYTHONPATH
<clever>
so it will claim to be a 256mb drive, but if you run a util, it magically grows, and the contents are swapped out
<clever>
elux: there are also some drives, that dont directly involve the bios, on first boot, it shows a dummy plaintext partition, you must then install software to that, which will run the unlock tool, and chainload the real os
<clever>
elux: ive heard that some drives actually do things a bit like luks, its ALWAYS encrypted, and setting a pw just encrypts the master key
<clever>
elux: some drives actually use the pw you set to encrypt, others are lazy and just use the password as a boolean to allow/deny
<clever>
lovesegfault: like nixos containers or nixops
<clever>
lovesegfault: to make all of nixos a submodule
<clever>
yeah
<clever>
lovesegfault: something.* could be of type submodule, and then you can have many something.*.foo's being set
<clever>
its a set of options, imports, and config
<clever>
lovesegfault: configuration.nix and all of the other modules in nixos, are all modules
<clever>
lovesegfault: so every systemd service has the same set of options
<clever>
lovesegfault: modules can also be submodules, the systemd config uses that for example
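A hedged sketch of the submodule pattern described above, with all option names illustrative; this is the same shape the systemd config uses for `systemd.services.<name>`:

```nix
{ lib, ... }: {
  options.something = lib.mkOption {
    default = {};
    # every something.<name> gets the same set of options
    type = lib.types.attrsOf (lib.types.submodule {
      options.foo = lib.mkOption { type = lib.types.str; };
    });
  };

  # many something.*.foo's being set
  config.something.alpha.foo = "one";
  config.something.beta.foo  = "two";
}
```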
<clever>
more that zfs cant fit the answer into the filefrag api, so they choose to just not implement it
<clever>
yeah
<clever>
and with zfs being spread over many disks, which disk is that offset on??
<clever>
zfs also doesnt support filefrag, because the api returns a list of offset + length pairs, for each fragment of the file
<clever>
but i can see how xfs_fsr might break the reflinks
<clever>
and if over a set limit, it just duplicates the entire file, and swaps the backing blocks if fragmentation improved
<clever>
it uses the filefrag api (and binary by the same name) to just get extent lists, so it can count the fragments
<clever>
buckley310: thats less of a dedup, and more of a defrag
<clever>
then it just relies on the fs to make better choices for the copy of the file
<clever>
basically, its a userland tool that will copy an entire file using standard APIs, then an ioctl that will atomically swap the backing blocks of 2 files, and check for race conditions (source got modified)
<clever>
buckley310: ah, sounds like xfs_fsr's defrag tools
<clever>
benley: zfs
<clever>
gchristensen: ive never run into issues where it doesnt fit, but ive not done full dedup on multi-tb datasets yet
<clever>
zeta_0: try opening another terminal window
<clever>
FRidh: makefile or strace
<clever>
FRidh: it could be forcing a dynamic build, or ignoring the static config flags
<clever>
wucke13: just add a name= on each
<clever>
wucke13: you need to name both of the fetchFromGitHub's
<clever>
wucke13: then when it tries to copy fv7byzwlavjz1ad9g4zrwv0ddby78b08-source to source, it already exists, so it winds up copying to source/fv7byzwlavjz1ad9g4zrwv0ddby78b08-source
<clever>
wucke13: when it copies j6q42xy7c9f0p9qw16i9hdy8768rys0m-source, it strips the hash, creating source/
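The fix described above can be sketched like this; owner, repo, rev, and sha256 are all placeholders:

```nix
stdenv.mkDerivation {
  name = "combined";
  srcs = [
    (fetchFromGitHub {
      name = "first-source";   # unpacks to first-source/ rather than source/
      owner = "example"; repo = "first";
      rev = "...";
      sha256 = "...";
    })
    (fetchFromGitHub {
      name = "second-source";  # no collision with the first
      owner = "example"; repo = "second";
      rev = "...";
      sha256 = "...";
    })
  ];
  sourceRoot = "first-source";
}
```

With distinct `name`s, the default unpackPhase strips the hash from each and the two no longer collide on `source/`.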
<clever>
wucke13: ahhh, i see the problem
<clever>
wucke13: line 869 will run unpackFile on each thing in $srcs
<clever>
wucke13: so it will run the default unpackPhase
<clever>
wucke13: can you pastebin more of the nix expression?
<clever>
wucke13: note, that you can never write to $src, only $out and subdirs of the temp dir you start in
<clever>
wucke13: between creating the destination, and trying to copy a 2nd thing to it
<clever>
wucke13: when cp copies things from the store, the copy it creates will be read-only, you may need a chmod +w -R
<clever>
shajra: havent tried doing anything like that yet
<clever>
i think
<clever>
shajra: i dont think cabal.project should exist at build time, haskell.nix is supposed to parse cabal.project, then build everything with old-cabal
<clever>
shajra: if you want to patch the source, you probably want either patchPhase, or use a custom runCommand based thing, to compose several things together, and use the result as a src
<clever>
zeta_0: probably an emacs issue
<clever>
zeta_0: dont know, but haskell.nix should obey the cabal file
2020-01-01
<clever>
replace haskellPackages with pkgs.haskellPackages
<clever>
zeta_0: have you tried pkgs.haskellPackages ?
<clever>
kahiru: you can also use a systemd service
<clever>
cvlad-: xserver.xkbOptions = "caps:shiftlock"; lets you do the xkbmap part
<clever>
turq: nix.trustedUsers in configuration.nix
<clever>
eoli3n_: just `fish` and let $PATH find it?
<clever>
clone it, and unpack a 3rd layer ontop of the clone...
<clever>
clone it, and unpack a 2nd layer ontop of the clone
<clever>
so, when you import an image, it will unpack one layer
<clever>
in theory, it also eliminates the limit of 128 layers (a problem in unionfs), but docker doesnt actually allow you to take advantage of that
<clever>
so, the layers are pre-merged, as a single zfs filesystem, leaving you with less computation overhead
<clever>
colemickens: it will use zfs to clone the entire layer, and then mutate the clone
<clever>
colemickens: from what ive read, instead of using unionfs to merge each layer together
2019-12-31
<clever>
ivan: the database is pre-generated, and downloaded by nix-channel --update
<clever>
jasom: pkgs.pkgsStatic.stdenv will give you a stdenv, that is pre-configured for static everything
<clever>
zaslet: `nix-shell -p foo` gives you a compiled copy of foo, `nix-shell -A foo` gives you an env suitable for building foo
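The distinction above as a transcript sketch (package name illustrative):

```console
$ nix-shell '<nixpkgs>' -p hello   # shell with a pre-built "hello" on PATH
$ nix-shell '<nixpkgs>' -A hello   # shell with hello's *build* env: its deps, flags, phases
```

`-p` is for using a package; `-A` is for hacking on one.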
<clever>
monokrome: sounds like a bug within pulseaudio, try checking journalctl for logs
<clever>
monokrome: then how is the card gone?
<clever>
monokrome: is it vanishing from /proc/asound/cards ? anything in dmesg?
2019-12-30
<clever>
Baughn: but now the sha256 must be of the nar of $out, rather than the raw .jar itself
<clever>
Baughn: i think fetchurl { downloadToTemp = true; postFetch = "mkdir $out; mv downloadedFile $out/"; name = "simpler_name"; url = "..."; } would do that
<clever>
Baughn: that would also be more compatible with running it thru buildEnv to put every jar into a single dir later
<clever>
Baughn: then you can set name="simple-name"; and generate /nix/store/hash-simple-name/[fancy]-complex name.jar
<clever>
Baughn: another option is to use recursive hashing (will need a different sha256)
<clever>
Baughn: what is the exact error msg?
<clever>
Baughn: id also prefer using pkgs.fetchurl if you have a hash, builtins.fetchurl harms performance more
<clever>
Baughn: the + may also translate into a space, try removing it?
<clever>
Baughn: you want name, not filename
<clever>
Baughn: what is the exact args you ran it with?
<clever>
nilsirl[m]: also, it will break every time somebody pushes to the branch, but only if you dont have the source already, causing all kinds of fun problems
<clever>
nilsirl[m]: just put a branch name where the rev goes, that simple
<clever>
gchristensen: so its going to continue to DoS in your name, even if you're not using it? lol
<clever>
so i tend to use nix repl to figure out the exact expr i want, but then switch to nix-build -E to run it repeatedly as i edit the files
<clever>
Enzime: i find that nix repl gets confused if i edit the file its running, so i have to quit and re-enter the repl, and then i have to repeat all previous commands
<clever>
duairc: the meaning is relative to the total of all shares, and if they are equal, then they get an equal share of the workers
<clever>
duairc: yeah
<clever>
duairc: if 2 jobsets are fighting over that remote machine, hydra uses the shares to equally divide the workload
<clever>
duairc: it controls which jobset in hydra gets more time, if you have more than one
<clever>
duairc: if one job has 20 shares, and the other has 40 shares, the one with 40 shares gets 2x the cpu time
2019-12-25
<clever>
raboof: exactly
<clever>
raboof: and behind the scenes, everything has an extra config. on it, so your file turns into this: { config = { boot = ...; nixpkgs.config = ...; }; }
<clever>
raboof: then, there is also the nixpkgs config, which is at the nixos option of nixpkgs.config
<clever>
raboof: and if imports winds up inside config, it will get moved out, becoming { config = { ... }; imports = [ ... ]; }
<clever>
raboof: if a module is missing all 3 of those, the module system will wrap it with { config = { ... }; } for you
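The wrapping described above, as a sketch; the option being set is illustrative:

```nix
# what you write:
{ boot.loader.grub.enable = true; }

# what the module system wraps it into:
{ config = { boot.loader.grub.enable = true; }; }
```

The bare-attrset form only works because none of the three module keys (config, imports, options) are present, so the whole thing is taken as config.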
<clever>
raboof: all nixos modules, must contain 3 keys and only 3 keys, config, imports, and options
<clever>
raboof: you want to set nixpkgs.config.allowUnfreePredicate
<clever>
raboof: which file did you put that in? how did you test it?