qyliss changed the topic of #spectrum to: A compartmentalized operating system | https://spectrum-os.org/ | Logs: https://logs.spectrum-os.org/spectrum/
<IdleBot_5e50c57d> There are many moving parts! Current status: CrosVM in a private network namespace, with socat-proxied Squid-Cache access and socat-proxied rust-9p /nix/store sharing, with the CrosVM sandbox enabled, no initramfs, and no need to rebuild the root squashfs; it boots, then runs curl from the 9p-mounted /nix/store via the host proxy
<IdleBot_5e50c57d> Of course, in the host network namespace there is no port talking 9p (only unix sockets)
<IdleBot_5e50c57d> https://github.com/7c6f434c/7c6f434c-configurations/blob/master/crosvm/crosvm-runner — all the Nix expressions are here, and I tried to separate my local container magic from the arguments with positive reuse chances
<IdleBot_5e50c57d> qyliss: so basically, if we do not want any overlay magic (argh, overlayfs), rust-9p over TAP works fine
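For context, the guest end of "rust-9p over TAP" is just the Linux kernel's 9p client; a hypothetical guest-side fstab line (the address, port, and options here are illustrative, not taken from the linked repo) would look like:

```
# guest /etc/fstab sketch — mount the host's rust-9p export over the TAP link
10.0.2.2  /nix/store  9p  trans=tcp,port=564,version=9p2000.L,ro  0  0
```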
<IdleBot_5e50c57d> Fun stuff: I need both ip tuntap and mktuntap, and then I run ip tuntap in the setup phase as in-container root (already in the inner FS and NET namespace) to allow later running mktuntap which is mostly necessary to select the TAP interface
<IdleBot_5e50c57d> Pushed a cleanup. Now the call is showing off the flexibility: $( test-build ./crosvm-runner.nix crosvm-launcher) "curl tum.de --proxy http://10.0.2.2:3128; exit" "3128:tcp"
<IdleBot_5e50c57d> (this requires no rebuilds, just writing and bind-mounting a single file; the general interface will be obviously complicated and probably different on my system and in SpectrumOS)
<IdleBot_5e50c57d> qyliss: I guess my complicated Lisp wrapper-generating stuff is not useful for you as long as it is clear what are the commands run inside the container? Or I could make sure that enough support to try out my examples can be provided by a single nix-buildable Common Lisp daemon running as root… (main question: provide settings that make sense outside my exact deployment)
<IdleBot_5e50c57d> Note that there is no point in compiling too much into the kernel: once you have opened the SquashFS, you have all the modules you might ever want
<qyliss> IdleBot_5e50c57d: so, what should I be looking for here?
<IdleBot_5e50c57d> Well, you can now just look for the commands that are lists of arguments. I have a _minimal_ set of compiled-in stuff, a pretty static squashfs, and plan-9-mounting of the rest, if you wondered how it would work
<qyliss> Oh, cool
<IdleBot_5e50c57d> And IPv6 9p mounts are apparently nontrivial…
<qyliss> oh :(
<IdleBot_5e50c57d> Alternatively, I have missed something in my Socat option autogeneration
<qyliss> Hopefully soon enough this can be virtio-fs
<IdleBot_5e50c57d> (In that case, you do not have to care until cross-VM sockets actually become relevant for you)
<IdleBot_5e50c57d> I guess for some cases you will still use 9p as a better studied protocol…
<qyliss> Oh hey weston-terminal works now
<qyliss> At least when I SSH in and run it manually
<qyliss> Wonder what changed..
<qyliss> That's exciting, anyway
<qyliss> puck: if I SSH to a VM and run weston-terminal it works, but not if I start it directly in init. Could that be something to do with seats or something?
<puck> maybe, but i'm suspecting it's simpler
<qyliss> Looks like it's something in the environment.
<puck> do you have SHELL set?
<qyliss> no
<qyliss> let's try with it
<puck> it defaults to /bin/bash
<qyliss> ffs
<puck> you can also --shell apparently
<qyliss> I think before we had that and it still didn't work?
<qyliss> Pretty sure you had --shell
<puck> hrmm
<qyliss> When you tried it
<puck> not sure
<puck> what's the error message?
<qyliss> isn't one
<qyliss> that's what's made it so hard to debug
<puck> oh, as in it immediately closes?
<puck> ... oh, wait a sec, yeah, that's about right i think. if exec'ing into the terminal fails it probably ends up printing an error message inside the terminal, then immediately closes
<puck> if you strace -f or something that might tell you more
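puck's theory — the exec fails and its error message dies with the terminal — can be reproduced outside any terminal; a sketch with a deliberately bogus $SHELL (the path is made up for illustration):

```shell
# If $SHELL points nowhere, the exec fails and the complaint goes to the
# stderr of a process that is about to disappear -- exactly the
# "no error message" symptom. Redoing the exec by hand makes it visible:
SHELL=/nonexistent
msg=$(sh -c "exec $SHELL" 2>&1 || true)
echo "$msg" | grep -Eiq 'not found|no such' \
  && echo "exec of $SHELL failed with: $msg"
```

strace -f, as suggested above, shows the same thing (the failing execve) when you cannot redirect the program's stderr yourself.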
<qyliss> Yeah, copying the environment works.
<qyliss> So now I just figure out what it needed
<qyliss> Yeah, --shell works now
<qyliss> I'm sure it didn't used to
<puck> yeah
<puck> i presume pre-procfs mounting
<puck> or well, devpts probably
<qyliss> No, I've had that since the start
<qyliss> And you had procfs
<qyliss> Oh, not devpts
<puck> i'm not entirely sure you did, i remember mounting those in when working on the sommi
<puck> sommelier patch
<qyliss> But you had that when you worked on it
<qyliss> Definitely
<puck> yeah, but i don't think i called it with --shell
<qyliss> I'm sure you did
<puck> also i was not aware s6-mount was a thing, interestingly
<qyliss> There are s6- implementations of most commands
<qyliss> not much reason to use them mostly
<qyliss> but I thought I might get away without busybox or coreutils
<puck> heh
<multi> i think skarnet started writing a whole bunch of reimplementations in s6-{portable,linux}-utils, and then later on realised that there wasn't much sense in doing so and has kind of stopped doing that
<puck> s6-syscall
<puck> :p
<qyliss> s6-ln is good
<qyliss> It can atomically relink
<puck> oh, interesting
<qyliss> Which POSIX ln is forbidden from doing
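The trick s6-ln uses can be sketched in plain shell: build the new symlink under a temporary name, then rename(2) it into place. POSIX ln -sf instead unlinks and re-creates, leaving a window where the link is missing. The function and its temp-name scheme below are illustrative, and this sketch only handles non-directory targets:

```shell
# Atomically repoint symlink $2 at $1: readers always see either the
# old or the new link, never a missing one.
atomic_relink() {
  target=$1 link=$2
  tmp=$link.new.$$              # temporary name, illustrative scheme
  ln -s "$target" "$tmp"        # build the replacement off to the side
  mv -f "$tmp" "$link"          # rename(2) over the old link: the atomic step
}
```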
<multi> iirc s6-p-u is now meant to be portable utilities which don't really have equivalents elsewhere, like s6-hiercopy and s6-nuke
<qyliss> NixOS should use it, but doesn't.
<qyliss> I could probably introduce that though
<puck> multi: i think you can do kill -1 to kill all but pid 1, hrmm
<multi> puck: that might be an implementation dependent thing
<multi> puck: also entirely possible that skarnet did a survey of existing kill(1) implementations and decided, "eh, fuck that, i'm writing my own thing which does this one very specific thing i need"
<puck> multi: huh, it's actually POSIX
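What puck found is indeed in POSIX: kill with a pid of -1 signals every process the caller has permission to signal, minus an implementation-defined set (on Linux, the caller itself and pid 1 are spared). A harmless sketch, using signal 0, which only performs the permission check without delivering anything:

```shell
# "--" ends option parsing so -1 is read as a pid, not a signal number.
# Signal 0 checks that a signalable target exists without sending anything.
if kill -0 -- -1 2>/dev/null; then
  echo "there are other processes we could signal"
else
  echo "nothing else signalable in this sandbox"
fi
# An init wanting s6-nuke-like semantics could thus get by with:
#   kill -TERM -- -1
```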
<qyliss> Oh that's neat
<qyliss> In that case I might not need s6-p-u
* multi shrugs, and points at #s6
<qyliss> OH I bet I know what changed to make weston-terminal work
<qyliss> For a while I'd symlinked /bin/sh to ${dash}/bin/sh, but it should have been ${dash}/bin/dash
<qyliss> So there was no /bin/sh
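That failure mode is easy to reproduce in a scratch directory; a sketch with a stand-in for the dash store path: the package ships bin/dash but no bin/sh, so linking to ${dash}/bin/sh yields a dangling symlink and every exec of /bin/sh quietly fails.

```shell
dir=$(mktemp -d)
mkdir -p "$dir/dash-pkg/bin"                     # stand-in for ${dash}
printf '#!/bin/sh\n' > "$dir/dash-pkg/bin/dash"  # the only binary it ships
ln -s "$dir/dash-pkg/bin/sh"   "$dir/sh-wrong"   # dangling: package has no bin/sh
ln -s "$dir/dash-pkg/bin/dash" "$dir/sh-right"   # resolves
[ -e "$dir/sh-wrong" ] || echo "sh-wrong is dangling"
[ -e "$dir/sh-right" ] && echo "sh-right resolves"
```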
<puck> heh
<qyliss> Might want to make it log if running the shell fails...
MichaelRaskin has joined #spectrum
<MichaelRaskin> So, to avoid an annoying warning from CrosVM but keep using writeScript, I just went and implemented an actually working version of pad-string-to-divisibility (as the lib version uses linear recursion depth, which is bad)
<MichaelRaskin> Maybe I am doing it wrong
<MichaelRaskin> (I _also_ have a shell function doing the same thing with a file for _another_ bootscript block device)
<qyliss> What warning is this?
<MichaelRaskin> Well, as I said, I find «source /dev/vdb» a very convenient way to avoid expensive rebuilds or complicated boot processes
<qyliss> Go on
<MichaelRaskin> There is one catch: a boot script does not normally have a size exactly divisible by block size…
<qyliss> Oh, right.
<MichaelRaskin> And CrosVM rounds the size of the image _down_, not up
<qyliss> oh..
<MichaelRaskin> Whatever, padding with some spaces is cheap
<MichaelRaskin> But then it still _warns_ that it cuts off some of my kilobytes of padding I don't care about
<MichaelRaskin> And as I want to keep the number of warnings down (I am, uhm, debugging the network setup and I want _these_ warnings, not junk)…
<MichaelRaskin> So now I have two bootscript block devices, one generated in Nix and one in a shellscript from runtime parameters; the Nix one is padded by Nix code (with logarithmic recursion depth, yay, and the logarithm is just the logarithm of the padding needed), and the shell one is padded using shell arithmetic and yes | head -n
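The shell half of that is small enough to sketch: a function (name illustrative) that pads a file with newline bytes up to the next multiple of the 512-byte block size, so CrosVM's round-down of the image size never truncates real script content:

```shell
pad_to_block_size() {
  file=$1
  size=$(wc -c < "$file")
  pad=$(( (512 - size % 512) % 512 ))   # 0 when already aligned
  # The chat's one-liner for this step is `yes '' | head -n "$pad"`
  # (empty lines are single newline bytes); a plain loop does the same
  # without pipeline exit-status surprises:
  i=0
  while [ "$i" -lt "$pad" ]; do printf '\n'; i=$((i + 1)); done >> "$file"
}
```

Newlines (or spaces) are safe padding here because the guest consumes the device with «source /dev/vdb», and a shell ignores trailing blank lines.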
<qyliss> puck: that wasn't actually what fixed it
<qyliss> was presumbly necessary, but fixing /bin/sh alone wasn't enough
<qyliss> I think that's about as far as my interest in why weston-terminal suddenly works goes
<MichaelRaskin> Re: virtio-fs: also note that VM-to-VM virtio-fs would require one globally cutting-edge feature (virtio-fs) and one rust-vmm-cutting-edge feature (vhost-user). With rust-9p I could write a demo right now that doesn't have any TCP/IP traffic in the top namespace (OK, as long as CrosVM insists on using ioctl I cannot remove that namespaced TAP for free)
<qyliss> I'm not at all opposed to cutting edge features
<MichaelRaskin> Every cutting edge is eager to become bleeding edge, and the blood might be yours.
<MichaelRaskin> I mean, I like some cutting edge features and hate others, and here I am on the «like» side, but if your declared goal is security, you should always provide an option to use only features that survived a few years in the wild
<MichaelRaskin> (also, I have no idea what to expect about the amount of implementation work around these features)
<MichaelRaskin> Apparently, it's not IPv6 and 9p, it's me forgetting some of the tricks I was using
<MichaelRaskin> (running tuntap setup commands with & leads to interesting outcomes that depend on basically everything in interesting ways, who could have thought)
<qyliss> I expect that by the time people are actually using this, a few years in the wild will have passed.
<MichaelRaskin> We-ell, there are some benefits in closing the last milestone with something which has a documented (and reasonable) set of tradeoffs for immediate use
<qyliss> that's true, but that's a year away at least I'd say
<qyliss> (I'm behind the original timeline of a year, for sure)
<MichaelRaskin> Of course there will still be bugs in our implementation
<qyliss> The good news is that weston-terminal working all of a sudden puts me a good bit closer to the next milestone than I thought
<MichaelRaskin> Well, projects don't get several years of widespread use in one year…
<MichaelRaskin> (and please remember everyone is bad at planning… which also means that you might be surprised by the problems rust-vmm-vhost-user faces in a year!)
<qyliss> So here's a question
<qyliss> Is it better to do 9p through crosvm, or run a 9p client inside the guest?
<qyliss> Not using crosvm would mean the server was isolated inside a VM
<qyliss> Oh, but crosvm's 9p support is only relevant for a server on the host anyway
<qyliss> so no point in that
<qyliss> I could use their 9p server impl still, even
<MichaelRaskin> I am not sure if their implementation makes some fun assumptions
<MichaelRaskin> rust-9p is easy to build (see my repo), works, and doesn't care whether it has to listen on a TCP port or a Unix domain socket
<qyliss> MichaelRaskin: are you planning on upstreaming rust-9p?
<MichaelRaskin> Yes, unless someone copypastes the expression first
<qyliss> A Nixpkgs commit would be easier for me to cherry-pick :P
<MichaelRaskin> Right now I am in a state where I am not too keen to make _any_ of my nixpkgs clones a checkout of fresh master (because I do these weird multi-nixpkgs builds)