2018-10-18

<clever> tbenst: when using that outside of nixos, you will want to add nix = self.nix; to the userPackages
<clever> ah, with tbenst
<clever> gchristensen: sure, what was the problem?
<clever> tbenst: if you look in /nix/var/nix/profiles/something/profiles-something-link/bin youll find a nix-env you can use to try again with a newer userPackages
<clever> tbenst: nix wasnt in your userPackages list, so the -r uninstalled nix
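Roughly, the declarative userPackages pattern under discussion looks like this in ~/.config/nixpkgs/config.nix (the package choices are illustrative); keeping nix itself in the set means `nix-env -f '<nixpkgs>' -r -iA userPackages` will not uninstall nix:

    {
      packageOverrides = super: {
        userPackages = super.userPackages or {} // {
          hello = super.hello;
          # keep nix itself here, so the -r (remove everything else) install
          # does not uninstall nix-env/nix-build
          nix = super.nix;
        };
      };
    }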
<clever> gchristensen: its used by things like haskellPackages.yaml.env
<clever> if its in the nixbld group, then you dont need o+r on the keys
<clever> nixos doesnt clean it up on boot
<clever> in the past, i have seen my redhat9 machine just delete everything in /tmp on bootup, which resulted in the machine hanging for 2 hours, because i rebooted it so rarely :P
<clever> `boot.tmpOnTmpfs` tells nixos to mount a tmpfs to /tmp, so all files are held in ram and lost at shutdown
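For reference, the relevant NixOS options look like this in configuration.nix (boot.cleanTmpDir is the on-disk alternative, off by default):

    {
      boot.tmpOnTmpfs = true;    # mount a tmpfs on /tmp; contents vanish at shutdown
      # or keep /tmp on disk, but wipe it during boot:
      # boot.cleanTmpDir = true;
    }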
<clever> depends on the config
<clever> but if you have any o+r files in $HOME (the default is rwxr-xr-x for me), and an attacker knows the exact name, he can read them
<clever> so the directories can be o+x, and then the file is g+r
<clever> just o+x is enough
<clever> you dont have to give it o+rwx
<clever> you can mix them as well, so /home and /home/foo are o+x
<clever> so you need to either use the group bits and the nixbld group, or the other bits
<clever> since the builds can run as any member of the nixbld group, the user bits wont really work
<clever> how you get execute on each dir is up to you
<clever> o1lo01ol1o: you just need execute on the directories, either via the user bits (if you're the owner), the group bits (if you're in the same group), or the other bits (for when you're not the owner or in the group)
<clever> o1lo01ol1o: you also need execute on every parent up to /
<clever> o1lo01ol1o: read only allows you to ls the directory itself
<clever> o1lo01ol1o: you need execute on directories to access files within them
<clever> ah, you meant verbatim, not vim, lol
<clever> grp: though i just put the entire vim config into a single string: https://github.com/cleverca22/nixos-configs/blob/master/vim.nix#L16
<clever> and then it will replace @key@ and @key2@ within foo.txt
<clever> grp: basically, just pkgs.substituteAll { key = "value"; key2 = pkgs.hello; name = "foo"; src = ./foo.txt; }
<clever> grp: pkgs.substituteAll can do things for you
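A slightly fuller sketch of that substituteAll call, assuming a hypothetical foo.txt template containing @key@ and @key2@ placeholders:

    pkgs.substituteAll {
      name = "foo";        # name of the resulting store path
      src = ./foo.txt;     # hypothetical template with @key@ and @key2@ in it
      key = "value";       # @key@  -> the literal string "value"
      key2 = pkgs.hello;   # @key2@ -> the store path of pkgs.hello
    }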
<clever> pre-existing ones, or ones you can only fetch with git@host?
<clever> whats preventing you from using builtins.fetchGit instead?
<clever> but basically any user on the machine can run nix-build, which then runs the user-provided commands as a member of nixbld, and they are still semi-world-readable
<clever> you could also make the directory owned by the nixbld group, and not be world-readable
<clever> strange
<clever> oh, and the machine where it works, is that nixos?
<clever> you could, but be aware that every single nix build will have permission to read those keys
<clever> that fix is via builtins.fetchGit, which runs the clone as your user, not in any sandbox or specialized user
<clever> and i prefer agents, so you're not sharing the key with every single build
<clever> but /tmp is a common shared space
<clever> because the build runs as nixbld1, it lacks access to read /home/foo, even with sandboxing off
<clever> the trick is having a socket in /tmp that the nixbld user has permission to
<clever> o1lo01ol1o: this is how i got fetchGitPrivate to work with ssh agents
<clever> o1lo01ol1o: ah, with fetchGitPrivate?
<clever> o1lo01ol1o: what exactly have you added to NIX_PATH?
<clever> o1lo01ol1o: are you calling toString anywhere?
<clever> o1lo01ol1o: you want to use builtins.fetchGit now, its a lot safer and simpler
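A minimal sketch of the builtins.fetchGit route recommended here; the clone runs as your own user, so your ssh config and agent apply (the URL is hypothetical):

    let
      src = builtins.fetchGit {
        url = "git@github.com:example/private-repo.git";  # hypothetical private repo
        ref = "master";
        # rev = "<commit>";   # optionally pin an exact revision
      };
    in import "${src}/default.nix"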
<clever> run500: but that arm python cant `import io`, so the build fails
<clever> run500: my cheat half works, https://github.com/cleverca22/nixos-configs/blob/master/qemu.nix and qemu-user.aarch64 = true; allows me to run aarch64 binaries on my x86 machine, so when the cross-compile screws up and tries to run the arm python, it "works"
<clever> i'm testing my cheat now...
<clever> run500: i also got yours near the end
<clever> /nix/store/hi7b399mz31439b7zd7v855iv5q0n0da-aarch64-unknown-linux-gnu-binutils-2.30/bin/aarch64-unknown-linux-gnu-ld: cannot find -lpython2.7
<clever> run500: currently building the above setuptools...
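The "cheat" mentioned above amounts to importing the qemu.nix module linked earlier and turning on aarch64 user emulation, roughly:

    {
      imports = [ ./qemu.nix ];   # the module from cleverca22/nixos-configs linked above
      qemu-user.aarch64 = true;   # binfmt handler: stray aarch64 binaries run under qemu
    }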
<clever> a vm on linux on mac hardware is "ok"
<clever> o1lo01ol1o: the only legal way to run macos, is on mac hardware
<clever> i was using macincloud.com at the time to get my ios app built
<clever> o1lo01ol1o: yeah
<clever> o1lo01ol1o: because apple has to be apple :P
<clever> o1lo01ol1o: last time i looked into that kind of problem (before getting into nixos), i read that you must have a darwin machine signing the binary after it has been compiled
<clever> *tries*
<clever> robstr: it will never be able to write to /home
<clever> robstr: the only thing a derivation can copy to is $out
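A tiny illustration of that point: the only writable location that survives a build is $out, so anything the derivation wants to keep has to be copied there:

    pkgs.runCommand "example" {} ''
      mkdir -p $out
      echo hello > $out/greeting    # fine: lands inside the store path
      # echo hello > /home/foo/x    # would fail: builds cannot write to /home
    ''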
<clever> so you would have to disable several options to get rid of doc!
<clever> the same for documentation.man.enable, which also requests $out/share/doc, lol
<clever> Aerobit: also, if you documentation.info.enable = false; then systemPackages may omit the $out/share/doc paths
<clever> nix-env has different rules
<clever> and thats specific to just what you put in systemPackages
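The options being referred to, as they would appear in configuration.nix:

    {
      documentation.doc.enable  = false;   # lets systemPackages omit their $out/share/doc outputs
      documentation.info.enable = false;
      documentation.man.enable  = false;
    }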
<clever> robstr: fairly common
<clever> robstr: callPackage ("${builtins.fetchGit {...}}/foo.nix") {}
<clever> and also ensure the meta.platforms only supports linux, so it wont try to build on mac
<clever> aminechikhaoui: perhaps you can just fix 32bit support?
<clever> i also see 2 tars in this folder, one 64bit, the other "not 64bit"
<clever> ftp://ftp.supermicro.com/utility/IPMIView/Linux/
<clever> it could try building on darwin!!
<clever> it doesnt even have a platforms entry!
<clever> some people flip on enablebroken to bypass platforms
<clever> both i feel
<clever> aminechikhaoui: the package should have an assert in it then, to refuse to build in 32bit mode
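A sketch of both safeguards for a hypothetical binary-only package: an assert that refuses 32-bit evaluation, plus meta.platforms limited to linux so darwin never attempts the build:

    { stdenv, lib, fetchurl }:

    assert stdenv.is64bit;   # refuse outright when evaluated for a 32-bit platform

    stdenv.mkDerivation {
      name = "example-blob";
      src = fetchurl {
        url = "ftp://ftp.example.com/example_64bit.tar.gz";  # hypothetical
        sha256 = lib.fakeSha256;
      };
      meta.platforms = lib.platforms.linux;   # never offered to darwin (or other) builders
    }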
<clever> gchristensen: try getting a backtrace
<clever> gchristensen: i never used that, just a single ip:port in my setup
<clever> aminechikhaoui: is the entry in rpath 32 or 64bit?
<clever> gchristensen: what was multipathd for?
<clever> and let laziness figure it out
<clever> and then at some point in the eval, you iterate over the list, and give each a unique number, based on what else is in the config
<clever> you could have a list like config.portmappings = [ foo ];
<clever> systemd is already able to auto-generate a uid for services, which it will destroy at service shutdown
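A rough sketch of that module pattern, with hypothetical option names: expose a plain list, and let a later, lazily evaluated part of the config assign each entry a unique number based on its position:

    { lib, config, ... }: {
      options.portmappings = lib.mkOption {
        type = lib.types.listOf lib.types.str;
        default = [];
        description = "entries that should each get a unique number";
      };
      options.assignedNumbers = lib.mkOption {
        type = lib.types.attrsOf lib.types.int;
        description = "computed mapping, read by other modules";
      };
      config.assignedNumbers = lib.listToAttrs
        (lib.imap0 (i: name: lib.nameValuePair name (5000 + i))
          config.portmappings);
    }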
<clever> and i later ported it to my laptop, which had grub in the MBR of the iscsi device, and nixos made it trivial to apply the same module to an entirely different arch
<clever> i initially wrote that for my rpi's because of nfs trouble, and /boot was on the SD card
<clever> iscsistart is a special statically linked binary, that deals with connecting the kernel module, without a long-term daemon
<clever> so grub doesnt even know its a network boot situation
<clever> gchristensen: and ipxe rewrites the legacy bios api, so when grub tries to read the "local" hdd, iscsi is used instead
<clever> gchristensen: i was also using sanboot in ipxe, to perform a legacy boot against the MBR of an iscsi device
<clever> i believe that daemon is responsible for handling reconnects
<clever> and this runs iscsid after bootup
<clever> gchristensen: this pre-dates the proper network support in the initrd, and will connect the block device on bootup
<clever> one sec
<clever> ive also done iscsi locally
<clever> bbl
<clever> no special system services required
<clever> joko: this replicates how hydra does an eval
<clever> joko: it evals with restricted mode on
<clever> joko: hydra needs a path to the json, to describe what nix file powers the jobset, and then that nix file generates more json (dynamically), to describe the jobset
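A rough sketch of that arrangement under hydra's declarative-jobsets feature (every name and value here is illustrative): the spec file points hydra at a .nix file, and that file's jobsets attribute builds the JSON describing the jobsets:

    { nixpkgs ? <nixpkgs> }:
    let pkgs = import nixpkgs {};
    in {
      jobsets = pkgs.writeText "jobsets.json" (builtins.toJSON {
        master = {
          enabled = 1;
          hidden = false;
          description = "example jobset";
          nixexprinput = "src";
          nixexprpath = "release.nix";
          checkinterval = 300;
          schedulingshares = 100;
          enableemail = false;
          emailoverride = "";
          keepnr = 3;
          inputs.src = {
            type = "git";
            value = "https://github.com/example/repo.git master";
            emailresponsible = false;
          };
        };
      });
    }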

2018-10-17

<clever> jb55: yeah, that sounds like a plausible answer
<clever> because ssh-agent detects that the client is running as a different uid, it needs a socat proxy
<clever> gchristensen: the trick, is to tell the build to use /tmp/hax as a unix socket for the agent
<clever> gchristensen: ive made it work with an ssh agent
<clever> gchristensen: fetchGitPrivate can function without sharing the key to the build
<clever> arianvp: then the changes wont take effect until you reboot!
<clever> arianvp: anything else?
<clever> arianvp: what is the output from nixos-rebuild?
<clever> arianvp: which channel are you on?
<clever> kiloreux: and that was when you ran `nix repl sample.nix` ?
<clever> kiloreux: just packages, on its own
<clever> kiloreux: if you `nix repl sample.nix` and then eval `packages`, which is it?
<clever> kiloreux: that looks more like a set than a list?
<clever> kiloreux: if you run nix-instantiate on the buildEnv, then `nix show-derivation` on that drv, what is in the paths attribute?
<clever> kiloreux: did the buildEnv one give any errors?
<clever> kiloreux: so that will be whatever the fold on line 158 returned
<clever> kiloreux: ah, yeah, that does look like it
<clever> kiloreux: its not clear where the .packages from line 8 is defined
<clever> > "raw path: ${./.}, toStringd: ${toString ./.}"
<clever> but i dont see any obvious cause for the original problem
<clever> and using toString on them gives an entirely different value, and often breaks things
<clever> kiloreux: paths convert to strings automatically
<clever> kiloreux: just pkgs.path by itself
<clever> kiloreux: minor nitpick, line 33, dont use toString on paths
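To spell out the distinction: interpolating a path copies it into the store and splices the store path in, while toString yields the raw on-disk path, which sandboxed/pure builds cannot see:

    let dir = ./.;
    in {
      copied = "${dir}";       # "/nix/store/<hash>-..." : the directory is imported into the store
      raw    = toString dir;   # "/home/you/project"     : no store copy, breaks inside sandboxed builds
    }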
<clever> ah
<clever> yeah, i believe thats it
<clever> symphorien: most of the time, everything shares a process group, but there is a special syscall to make yourself the leader of a new group
<clever> kreisys: if the repo is dirty, it will just return all 0's
<clever> haitlah: you can also use haskell.lib.overrideCabal to add to the executableSystemDepends or similar
<clever> hmmm, its just a normal mkDerivation
<clever> haitlah: lib.overrideDerivation might work
<clever> haitlah: and Setup.hs is just a thin wrapper around the main function in the cabal executable
<clever> haitlah: the root problem, is that nixpkgs provides a Cabal package (as a haskell library) but no cabal executables
<clever> haitlah: and `./Setup repl` will be identical to `cabal repl`
<clever> haitlah: if you manually compile Setup.hs with ghc, then you can use the Setup binary in-place of cabal
<clever> haitlah: its simpler to use Setup.hs
<clever> haitlah: or, `ghc Setup.hs -o Setup && ./Setup repl`
<clever> haitlah: you want addCabal carrier-directory.env
<clever> haitlah: your addCabal function is altering the base derivation, then you use .env to get the non-base derivation
<clever> haitlah: and what is in your shell.nix ?
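A minimal sketch of the haskell.lib.overrideCabal route mentioned above, against a hypothetical package; it appends an extra system dependency for the executables:

    pkgs.haskell.lib.overrideCabal pkgs.haskellPackages.somePackage (old: {
      executableSystemDepends =
        (old.executableSystemDepends or []) ++ [ pkgs.zlib ];
    })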
<clever> Lisanna: run `strace -p <pid>` against it, what does it show?
<clever> Lisanna: what does `top` say its parent is doing cpu% wise?
<clever> and after it checks the deps, it reaps?
<clever> maybe due to a minor bug/oversight, it waits for the builder to terminate, but doesnt reap the zombie
<clever> 2018-10-17 08:19:16 < Lisanna> ps lists the PID that default-builder.sh was running under as <defunct>
<clever> Lisanna: the nix-build above the builder might be scanning $out for deps
<clever> teto: when did it start giving the recursion error?
<clever> teto: line 5 refers to 2 nix files, can you also pastebin those files?
<clever> teto: can you pastebin that output?
<clever> teto: oh, `nixops info --no-eval`
<clever> teto: what does `nixops info` return?
<clever> teto: one theory is that you have actual infinite recursion in your nix expressions, does anything happen to maybe import itself?
<clever> you have to manually add them to the right search path
<clever> i think part of it is that gdb cant auto-load from the split outputs currently
<clever> ekleog: you have to run it in a dir with the unpacked source
<clever> drakonis1: ah, you want runCommandCC, not runCommand, to get the nix-support bug fixed
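A small sketch of the difference: runCommandCC uses the compiler-carrying stdenv, so a C compiler (and its nix-support wrapper setup) is on PATH, which plain runCommand lacks:

    pkgs.runCommandCC "hello-c" {} ''
      mkdir -p $out/bin
      echo 'int main(void){ return 0; }' > hello.c
      cc hello.c -o $out/bin/hello    # cc comes from the stdenv's cc-wrapper
    ''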
<clever> ekleog: its partially implemented, but not enabled by default
<clever> the version of vulkan-loader must match the version of vulkan-headers
<clever> 4 assert version == vulkan-headers.version;
<clever> aleph-: what setup is missing?
<clever> aleph-: why cant you run it as noah?
<clever> which user does nix-env -i foo fail as?
<clever> that one is intact
<clever> and ls -lh /root/.nix-profile/manifest.nix
<clever> aleph-: what does ~/.nix-profile point to on each user?
<clever> aleph-: why is it looking in /root? the previous messages say you were running as noah
<clever> drakonis: i think you want to add gcc.cc.lib to the list where zlib is
<clever> it will make zlib a dependency
<clever> that bash script will then patchelf whatever you run it on
<clever> drakonis: if you run nix-build on one of these nix files, it will generate a bash script in result
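A rough sketch of that kind of expression (not drakonis's actual file): nix-build leaves a script in ./result which patchelfs whatever binary you point it at, against a list of libraries that you extend with gcc.cc.lib:

    with import <nixpkgs> {};
    let
      libs = [ zlib gcc.cc.lib ];   # the list to extend, per the advice above
    in writeScript "fix-binary" ''
      #!${runtimeShell}
      ${patchelf}/bin/patchelf \
        --set-interpreter "$(cat ${stdenv.cc}/nix-support/dynamic-linker)" \
        --set-rpath ${lib.makeLibraryPath libs} \
        "$1"
    ''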
<clever> aleph-: and what is the full output from `nix-env -vvvv -i PACKAGE_HERE` ?
<clever> aleph-: ls -lh /nix/var/nix/profiles/per-user/noah/
<clever> when doing what command?
<clever> what error did it have?
<clever> it may also remove home-manager
<clever> aleph-: rm /nix/var/nix/profiles/per-user/noah/profile*
<clever> i only needed the one without /
<clever> can you paste it here?
<clever> aleph-: what does the above point to?
<clever> ls -l ~/.nix-profile
<clever> ah, then your only option is to nuke the profile
<clever> aleph-: try nix-env --rollback
<clever> deleting it wont help any
<clever> yeah, your FS truncated it after an improper shutdown
<clever> aleph-: ls -lh /nix/store/qchpbs64ppkl54mbmz5hk0ffhg29p902-env-manifest.nix
<clever> aleph-: your manifest.nix was corrupted by an improper shutdown i think
<clever> Arahael: how is it not working?
<clever> only bug i saw with proton was failing to find python3 in PATH, which has been fixed in nixos-unstable
<clever> aleph-: yeah, proton just works, for games that are compatible
<clever> and then stream your input devices back the other way
<clever> steam can stream the gameplay from a windows box to the nixos box
<clever> the steam in-home streaming stuff also works, so i can run any windows game on that box, and then stream it over the network
<clever> and a couple windows games even work
<clever> Arahael: most games that work on linux just work in nixos too
<clever> drakonis: nix-store --verify --check-contents is the command that does that, but it doesnt check the ownership of things
<clever> my memory is wonky, lol
<clever> yeah, i was helping him a few days ago
<clever> should be able to just open a terminal then and sudo chown
<clever> drakonis: are you able to login at all?
<clever> yep

2018-10-16

<clever> and then the stdenv does some magic to make it cross or native, depending on which list its in?
<clever> i think whats supposed to happen, is that you just accept protobuf as an input, and you put it into both lists
<clever> ahh
<clever> then you want buildPackages.protobuf3_5 in the nativeBuildInputs
<clever> and what did you put into buildInputs?
<clever> you need a native for protoc and then a target for linking?
<clever> run500: just try one and see if it works
<clever> run500: have you tried just doing buildPackages.myInputPackage?
<clever> run500: if you only override stdenv, then every other input to that package is now in the wrong version
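A sketch of that arrangement, with protobuf as the example input: list it in both nativeBuildInputs and buildInputs, and the cross-aware stdenv splices in the build-platform protoc and the host-platform library respectively:

    { stdenv, protobuf }:

    stdenv.mkDerivation {
      name = "proto-example";
      src = ./.;
      nativeBuildInputs = [ protobuf ];   # build platform: provides protoc to run during the build
      buildInputs = [ protobuf ];         # host platform: the library the result links against
    }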
<clever> but when you src = ./.; and it copies result back into the store, the src doesnt depend on the path result points to, i think
<clever> ah yeah
<clever> you just need to include the state in result, and then check for it in src
<clever> that already happens by accident a lot, the result symlink winding up in src
<clever> ah
<clever> it initially took over 3 days to eval a package
<clever> the biggest problem ive had with snack has been O(n^n) style problems
<clever> libghc to the rescue!
<clever> -M omits a lot
<clever> ghc is a little upset that the .hi and .o from past modules, are in a different dir from the output
<clever> so i have to copy some of the inputs to the output, to make ghc happy
<clever> but TH actually fails, because it expects the .o files from TH to be in the -outputdir
<clever> TH works due to snack just getting the entire import list, recursively
<clever> what else is there?
<clever> solved enough that it can build most of cardano-sl, but it fails due to a lack of -hide-all-packages
<clever> "what their inputs are" is already solved in snack
<clever> though, now you need an external tool to run a build command on every module
<clever> just use a shake style build without `ghc --make` and it runs the wrapper script on every module
<clever> what part of ghc has to be re-implemented?
<clever> ah, yeah, that could then become a bottleneck, would need to heavily optimize nix then
<clever> if the .hi dont change, the build gets reused
<clever> then the cache-key will be a hash of the .hs and .hi files
<clever> if you just replace ghc with a wrapper, that feeds the .hs and .hi files into nix-build
<clever> recursive nix would solve everything
<clever> elvishjerricco: all it knows is that the INPUTS have changed, so all dependants must build!
<clever> elvishjerricco: but with nix (and snack), it cant tell whether something has changed or not, since it has no way of knowing the previous state
<clever> elvishjerricco: under an impure cabal build, ghc can detect when the types exported by a module dont change, and then not rebuild all dependants
<clever> elvishjerricco: related, ive recently been working on snack, and i found some edge-cases it cant deal with currently
<clever> your castle of purity is built upon impurities :P
<clever> yeah
<clever> the design of nix forces a rebuild if the inputs change in any way
<clever> so some methods have 4 copies, .hi, .so, .a, _p.a
<clever> i believe the .hi file encodes both the types of the exported symbols, and the bytecode for in-lineable functions
<clever> and further optimize what it does inline
<clever> so even with dynamic linking, it can "statically" inline chunks of things
<clever> a copy of the partially compiled code is in the .hi file
<clever> i think inlining is why the .hi files are so big
<clever> all haskell code is static, everything else is dynamic
<clever> yeah, its currently partially static
<clever> __monty__: with the modern nixpkgs, a static haskell binary wont depend on ghc
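For example, something like this (package name illustrative) is the usual nixpkgs helper for that: it links the haskell libraries statically into the executables and drops the runtime reference to ghc:

    pkgs.haskell.lib.justStaticExecutables pkgs.haskellPackages.pandoc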
<clever> __monty__: when using static linking, ghc can also do dead code elimination, and just omit code you're not using
<clever> and you have to upload the entire thing to every machine in your cluster
<clever> the entire 1.6gig of compiler becomes a runtime dep
<clever> __monty__: a bigger pain, is when you go to deploy those binaries with nixops
<clever> ideally, we could split the profiled, non-profiled, dynamic, static, and hi files, each into their own outputs
<clever> and they are only needed at compile time
<clever> elvishjerricco: yeah, its actually a .hi per file
<clever> -r--r--r-- 1 root root 2.3M Dec 31 1969 DynFlags.dyn_hi
<clever> -r--r--r-- 1 root root 2.3M Dec 31 1969 DynFlags.hi
<clever> -r--r--r-- 1 root root 2.3M Dec 31 1969 DynFlags.p_hi
<clever> and the _p.a if you want a static with profiling
<clever> and you need the .a if you are making a static executable
<clever> you need the .so if you are going to make a dynamic executable
<clever> just the ghc library itself is over 200mb for the compiled output
<clever> -r--r--r-- 1 root root 193M Dec 31 1969 libHSghc-8.2.2_p.a
<clever> -r-xr-xr-x 1 root root 72M Dec 31 1969 libHSghc-8.2.2-ghc8.2.2.so
<clever> -r--r--r-- 1 root root 113M Dec 31 1969 libHSghc-8.2.2.a
<clever> it has to compile a profiled and non-profiled version of each i think
<clever> __monty__: the .a files are for static linking, and _p.a are variants of the same with ghc profiling enabled
<clever> elvishjerricco: so js-jquery, whose only point is to give you a 50kb file, pulls in 1.6gig of compiler
<clever> elvishjerricco: yep
<clever> __monty__: i think the root cause, is that every single .hs file in the boot packages (ghc, cabal, base, template-haskell, ghc-prim, containers, and others), is its own .a file, and _p.a
<clever> so if a package refers to its own data, it wont depend on that 1.6gig monster of a compiler
<clever> elvishjerricco: the data, and dyn lib, are now in separate outputs, for all haskell packages
<clever> __monty__: the lib dir
<clever> 1.6G /nix/store/p372bagly14f82850qkzpjyimvw8ipqy-ghc-8.2.2/lib
<clever> elvishjerricco: prior to my split-outputs changes, that file was in the same dir as the dyn libs
<clever> elvishjerricco: the entire point of js-jquery, is to just define a string that points to a jquery.js file
<clever> __monty__: ghc itself adds 1gig on its own, plus every lib you depend on
<clever> > haskellPackages.js-jquery.meta.description
<clever> elvishjerricco: another more painful problem
<clever> and then at link-time for the executable, you pick one
<clever> i believe the current infra will generate both dynamic and static libs
<clever> elvishjerricco: and that adds 1gig to your closure
<clever> elvishjerricco: the dynamic libraries for haskell depend on a file in the ghc storepath
<clever> fendor: and teamviewer just outright disabling the old versions the instant a new one comes out
<clever> fendor: i think part of it is that nixos tends to just be less compatible, due to things like no /lib and such
<clever> fendor: they upgrade things in a breaking fashion regularly
<clever> and i guess nginx assumes it has root, so it can fix the ownership of things it creates