qyliss changed the topic of #spectrum to: A compartmentalized operating system | https://spectrum-os.org/ | Logs: https://logs.spectrum-os.org/spectrum/
<qyliss> I expanded a bit on how VM->VM communication will work on the website: https://spectrum-os.org/git/www/commit/
<qyliss> give nlnet something to read
<MichaelRaskin> I wonder what is the good way to manage the export list of an FS VM
<MichaelRaskin> Maybe just dynamically launching a filtering process that adds a prefix to all requests…
<MichaelRaskin> I think VM migration often implies things that cannot be achieved by «migrating the source code»
<MichaelRaskin> (I also think that VM-per-application-instance implies that you have a very limited set of VM images that are then given different connections to storage)
<MichaelRaskin> I think it is cheap to have multiple storage VMs, so you can still divide data into sensitivity domains, but without an obligation to align this perfectly with application VM allocation
<MichaelRaskin> BTW, how long does it take to spin up and shutdown an empty CrosVM in your test runs?
<MichaelRaskin> Hmm. If we need to mount different subsets of a single tree, and probably want to do it with a single server and filtering proxies, we might just as well ask the proxies to make sure the paths are well-formed and have no .. and are UTF-8-clean etc.
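[Editor's note: the path check sketched above — well-formed names, no `..`, UTF-8-clean — could look roughly like the following. `name_is_safe` is a hypothetical helper name, not part of any Spectrum code, and the UTF-8 check here validates sequence structure only (it does not reject every overlong or surrogate edge case).]

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical check a filtering proxy could run on each 9P walk
 * name: reject empty names, ".", "..", embedded '/', and byte
 * sequences that are not structurally valid UTF-8. A sketch, not a
 * complete validator. */
bool name_is_safe(const char *name)
{
    const unsigned char *p = (const unsigned char *)name;

    if (*p == '\0')
        return false;
    if (strcmp(name, ".") == 0 || strcmp(name, "..") == 0)
        return false;

    while (*p) {
        unsigned char c = *p++;
        int cont;

        if (c == '/')                 /* walk names are single path elements */
            return false;
        if (c < 0x80)
            continue;                 /* plain ASCII */
        else if (c >= 0xC2 && c <= 0xDF)
            cont = 1;                 /* lead byte of a 2-byte sequence */
        else if (c >= 0xE0 && c <= 0xEF)
            cont = 2;                 /* lead byte of a 3-byte sequence */
        else if (c >= 0xF0 && c <= 0xF4)
            cont = 3;                 /* lead byte of a 4-byte sequence */
        else
            return false;             /* stray continuation or invalid byte */

        while (cont-- > 0) {
            if ((*p & 0xC0) != 0x80)  /* continuation must be 10xxxxxx */
                return false;
            p++;
        }
    }
    return true;
}
```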
<MichaelRaskin> Good question what happens to symlinks in all the mess
<qyliss> I think the server doesn't need to care much about symlinks
<MichaelRaskin> The server doesn't, but the system design does
<MichaelRaskin> Do we imply that we mount just different subsets of the same tree (split into multiple parts by sensitivity etc.), or is layout customisable?
<qyliss> I don't see why it shouldn't be customisable
<MichaelRaskin> Neither do I, but then the user will need some reminder of how symlinks work
<qyliss> I think if you're using symlinks you're responsible for understanding how they work.
<qyliss> if this work has any impact, only a very small proportion of its users will know what a symlink is
<MichaelRaskin> Maybe. We are going to break a nontrivial amount of assumptions, though
<qyliss> how so?
<qyliss> mount points will work the same as on any other unix system
<MichaelRaskin> Kind of
<MichaelRaskin> A typical unix system doesn't have the same subtree mounted at different paths all the time
<MichaelRaskin> And to get to the point where nobody knows what a symlink is, you need to get through the phase where people who do know use and like the system
<MichaelRaskin> Hm. I wonder at what level we want to enforce read-only subtrees
<qyliss> that would be done by the fs server
<MichaelRaskin> Not sure what the exact plan is.
<MichaelRaskin> The FS server will have a socket per subtree and per its read-only status?
<qyliss> something like that
<MichaelRaskin> If we can get cheap enough VMs, there is a benefit in having a single FS server socket for the actual FS server, and then multiple filtering VMs that enforce RO and subtrees
<MichaelRaskin> Advantages: no dynamic reconfiguration of VMs and no dynamic reconfiguration of servers
<MichaelRaskin> (Also, if we have dead-simple filtering proxies, we can skip «reinvent the filesystem»)
<MichaelRaskin> (And looking at BtrFS history, reinventing the FS is not what you want to do in a single year)
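[Editor's note: one possible shape of the read-only policy such a filtering VM could enforce is to refuse any 9P request that mutates the tree. The message type numbers below are the real 9P2000 values from Plan 9's fcall.h; the policy function itself is a sketch, not Spectrum's actual design. Topen is deliberately absent — its mode field says whether write access is requested, so a real proxy would have to parse that field too.]

```c
#include <stdbool.h>
#include <stdint.h>

/* 9P2000 T-message type numbers, as defined in Plan 9's fcall.h. */
enum {
    Topen   = 112,
    Tcreate = 114,
    Tread   = 116,
    Twrite  = 118,
    Tremove = 122,
    Twstat  = 126,
};

/* Sketch of a read-only filter: reject requests that create, write,
 * remove, or rewrite metadata. Everything else passes through to the
 * single FS server unchanged, which is what keeps the proxy
 * dead-simple. */
bool request_mutates(uint8_t type)
{
    return type == Tcreate || type == Twrite ||
           type == Tremove || type == Twstat;
}
```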
<qyliss> MichaelRaskin: oh, btw, takes 1.5 seconds or so to get to the point of a terminal appearing
<MichaelRaskin> I guess you have a better CPU than I do.
<qyliss> actually, that time it took more like 3s
<qyliss> not slow, anyway
<qyliss> this is on 2012 hardware
<MichaelRaskin> Not sure what the CPU year is for a Thinkpad W530
<qyliss> this is an x220
<qyliss> so probably 1 generation later
<qyliss> how long does it take for you?
<MichaelRaskin> Well, I do have an extra container layer around it
pie_ has quit [Ping timeout: 260 seconds]
<qyliss> Yeah we should be comparing like for like
<MichaelRaskin> But purely the dmesg part takes strictly more than 2 seconds, every time
<qyliss> Yeah, at the moment I'm getting 2s of that, and then 1s later the window appears
<MichaelRaskin> But this is just to 9p-mount and exit
<qyliss> I've tagged mktuntap 1.0, btw. Seems like the thing to do since it has another user.
<qyliss> If somebody else wants to submit to Nixpkgs they're most welcome to, but I won't self-submit.
<MichaelRaskin> In a sense, as long as all the ChromiumOS stuff from the SpectrumOS repo isn't submitted upstream, mktuntap is a small thing (it seems most useful for CrosVM: Qemu can pick an interface name on its own)
<qyliss> I'm sure there are other use-cases
<qyliss> execline-style tap/tun device configuration feels generally useful to me
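[Editor's note: the core of an execline-style tap/tun tool like mktuntap is the TUNSETIFF ioctl. The sketch below only prepares the `struct ifreq` for that ioctl; the caller would open `/dev/net/tun`, issue `ioctl(fd, TUNSETIFF, &ifr)`, and then exec into its argument vector with the fd inherited — that hand-off is what makes the composition work. `prepare_tap` is a hypothetical helper name, not mktuntap's actual code.]

```c
#include <linux/if.h>
#include <linux/if_tun.h>
#include <string.h>

/* Fill a struct ifreq for TUNSETIFF: request a TAP device with the
 * given name and no packet-info header. Returns 0 on success, -1 if
 * the name does not fit in IFNAMSIZ. */
int prepare_tap(struct ifreq *ifr, const char *name)
{
    if (strlen(name) >= IFNAMSIZ)
        return -1;
    memset(ifr, 0, sizeof *ifr);
    ifr->ifr_flags = IFF_TAP | IFF_NO_PI;  /* TAP, no extra header bytes */
    strcpy(ifr->ifr_name, name);
    return 0;
}
```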
<MichaelRaskin> I wonder what is safer by now, wlroots or libvncclient
pie_ has joined #spectrum
pie_ has quit [Ping timeout: 258 seconds]
pie_ has joined #spectrum
tilpner_ has joined #spectrum
tilpner has quit [Ping timeout: 268 seconds]
<MichaelRaskin> BTW: re: 9P over IPv6: in principle, if for non-containerised VMs you want IPv6, one could also connect() and then mount via an FD
<MichaelRaskin> I think authenticated 9P mounts work like that anyway
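[Editor's note: "connect(), then mount via an FD" maps onto Linux's v9fs `trans=fd` transport, which takes an already-connected descriptor via the `rfdno`/`wfdno` mount options — so the transport (TCP over IPv6, vsock, anything stream-like) is the caller's business. The function names and the unprivileged-test split below are illustrative; the actual mount needs CAP_SYS_ADMIN and is shown but not exercised.]

```c
#include <stdio.h>
#include <sys/mount.h>

/* Build the v9fs option string for an fd transport. trans=fd with
 * rfdno/wfdno is a documented Linux 9p mount option; one descriptor
 * can serve both directions on a stream socket. */
int build_9p_fd_opts(char *buf, size_t len, int fd)
{
    return snprintf(buf, len, "trans=fd,rfdno=%d,wfdno=%d", fd, fd);
}

/* Sketch: fd must already be connect()ed by the caller. Requires
 * CAP_SYS_ADMIN; target path is whatever the VM wants to mount on. */
int mount_9p_over_fd(int fd, const char *target)
{
    char opts[64];
    build_9p_fd_opts(opts, sizeof opts, fd);
    return mount("none", target, "9p", 0, opts);
}
```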