qyliss changed the topic of #spectrum to: A compartmentalized operating system | https://spectrum-os.org/ | Logs: https://logs.spectrum-os.org/spectrum/
<jpo> sorry, was tethering for internet last night and phone died earlier than i expected
<jpo> "< lejonet> something close to what nixos does when wrapping things might be a good way to do the service-level isolations without the service needing to know about them?"
<jpo> lejonet: do you have any pointers re ^ ?
<jpo> (i'm not familiar with what nixos does for service-level isolations)
<jpo> "< IdleBot_b5116448> I dunno, maybe given that I have most of the detailed design, I should just try to write it down and then defend this vision..."
<jpo> IdleBot_b5116448: yeah, i think that would be helpful. we're making conflicting suggestions, and i think it might be due to different desired end states and different design goals when evaluating trade-offs. i'd be curious to see the big picture of what you have in mind
<jpo> IdleBot_b5116448: your system sounds a lot more like subgraph than qubes, and that's fine if that's what's desired, but it does result in less-paranoid choices for various design trade-offs
MichaelRaskin has joined #spectrum
<jpo> ehmry: you raise a very good point about nix's stance on minimalism vs. "functionality out of the box". on the one hand, nix inherently minimizes your effective runtime TCB by being able to trivially compute and expose only the minimally necessary closure of dependencies for the given thing you want to run, which is fantastic, yet OTOH each node of the dependency tree has a wider fanout
<jpo> architecturally, nix wins, and in the long run we can have minimal versions alongside whatever else for things we might care to make minimal (for example ssh & its deps, or gpg, or such)
<jpo> so i think in the end, everybody wins? not entirely clear though IMHO
<MichaelRaskin> jpo: re: my design being closer to SubgraphOS than Qubes: I think that KVM vs Xen is the starting point of SpectrumOS, the rest of the design I propose should be tunable. I do think that a path from zero to running code in one month goes closer to SubgraphOS, but then the design can be upgraded naturally and even used as a mix of SubgraphOS-like and Qubes-like depending on user choices
<MichaelRaskin> A lot of Nixpkgs choices can be trivially overridden by passing null instead of the most dangerous dependencies, I think
<jpo> all this talk of UID isolation makes me slightly concerned. if you want meaningful security, you can't expose the whole linux kernel attack surface. if you want compatibility with random shit (trust me, you do), then kernel attack-surface reduction is not a viable strategy (you will inevitably want to just run some shit right now, and not be willing to figure out the minimal set of things safe to expose), and sometimes the minimal set is too large/dangerous for comfort
<jpo> ^ means you need some other smaller-interface isolation mechanism (read: virt)
<jpo> and ^ means UIDs are irrelevant
<MichaelRaskin> Well, a file-disclosure exploit in a VM's directory sharing code is not unheard of
<MichaelRaskin> It doesn't always cheaply escalate to an arbitrary attack against host kernel
<MichaelRaskin> Maybe we don't need to play with UIDs and it is enough to use mount namespaces, though.
<jpo> that's far from the only concern. even if filesystems ceased to exist as a concept, the kernel is way too large of an attack surface for comfort
<MichaelRaskin> jpo: I don't understand that point — my plan about jailing includes jailing the VM itself
<jpo> what does "jailing the VM" mean? what host resources do you expect it to have access to?
<jpo> the VMM should implement discretionary access control and restrict itself, as most are moving towards these days
<jpo> are you expecting to write some form of MAC policies or ACLs or such?
<jpo> do you envision a single shared host filesystem, of which subsets are selectively exposed across VM boundaries? (potentially concurrently to multiple VMs?)
<MichaelRaskin> 1. a VM breakout without extra exploits should not give full access to host FS. 2. user may choose to provide some subdirectories to multiple VMs (or 9P servers) at once, this being intersecting but not equal sets
<MichaelRaskin> Yes, I do
<jpo> 1. 100% agreed on the goal, probably not agreed on the optimal approach to implement
<MichaelRaskin> In my current container-only based system (because I alone never got around to building a toolset for light enough VMs) I do set ACLs from time to time
<jpo> *sigh*
<jpo> okay
<jpo> it seems we view linux with very different levels of trust
<jpo> i'm not trying to bash your system. if it works for you and is suitable for your threat model, that's fantastic, and congrats on building something that sounds like it solves your problem reasonably well
<MichaelRaskin> Well, I do agree that having a VM as an extra isolation layer would be a huge improvement
<MichaelRaskin> I never got around to doing this improvement on my own, but it looks like the SpectrumOS spec is a commitment to do the most annoying part (lightweight VMs, both the CrosVM setup and something to boot inside)
<jpo> have you tried qubes?
<MichaelRaskin> No. I haven't tried to set up Xen stuff in ages, so I am kind of afraid.
<MichaelRaskin> (Also, I want some system-management functionality that seems to be a pain to implement there…)
<jpo> you should try qubes
<MichaelRaskin> For me, isolation is not purely security, but also a usability boost (no, these things are not allowed to interact like that), and it sounds like getting some basic stuff (each page should be in a separate Firefox instance with a fresh view of FS) would be crazy expensive with Qubes model
<jpo> i have ~15 VMs open right now. one has a browser logged into my bank, another logged into some employer stuff, another doing research on a single project, another with some insurance stuff. yeah, it's heavier-weight than containers, but you should really give it a try ;)
<jpo> before i started using qubes, I had a self-built system a lot like what you describe, but built on OpenBSD, and with the tools we had available 5-10 years ago
<jpo> since then, i've concluded (with quite a lot of supporting evidence) that anything that hopes to have a single monolithic core providing all the complexity of a unix system across privilege boundaries is an inevitably-doomed architecture from a security perspective
<MichaelRaskin> jpo: as I said, isolation is also a usability tool in my system, and I have a ton of other weird usability demands, and making them interact with Qubes doesn't sound like something that will work.
<jpo> the isolation most definitely also provides usability wins in qubes too (the whole template/appvm system, etc.). can you elaborate specifically on what you mean?
<MichaelRaskin> Well, I have a complicated mess of Lisp code (at a user level and at a system level) to quickly apply cross-cutting settings set in various contexts
<MichaelRaskin> It also integrates with all the jailing stuff so that various runtime configuration of Firefox instances can be done easily
<MichaelRaskin> And I also have a weird mostly-tagging-based StumpWM configuration that I have built to match my desired mode of interaction and got used to
<jpo> this sounds like a "picture is worth a thousand words" type thing. i'd be rather curious to see a demo of how you use it day to day. system diversity is cool, and we can all learn from each other. i'd primarily be interested to see if there are any interesting UX lessons to be learned from what you've done, since this kind of isolation is not a very widely explored design space
<jpo> would you be willing to make a screencast showing some common workflows?
<jpo> i suspect it's probably pretty similar to what i used to do, but i'd love to have another perspective
<MichaelRaskin> Well, a part of my system is also really complicated to screencast
<lejonet> jpo: oh, it was not a "nixos does service-level isolation" thing, it was more a thought about the whole "wrap a binary with a script that sets up things for it before calling/starting the binary" part
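The nixpkgs wrapper pattern lejonet is referring to can be sketched in plain shell (all paths and the XDG_DATA_DIRS value here are made up for illustration): the real binary is renamed out of the way, and a generated script in its place does the environment/setup work before exec'ing it, so the program itself needs no knowledge of the sandboxing.

```shell
# Sketch of the nixpkgs-style wrapper: ".hello-wrapped" stands in for the
# real binary; "hello" is the generated wrapper that sets things up first.
mkdir -p /tmp/wrapdemo/bin
cat > /tmp/wrapdemo/bin/.hello-wrapped <<'EOF'
#!/bin/sh
# Stand-in for the real program: just show what environment it received.
echo "XDG_DATA_DIRS=$XDG_DATA_DIRS"
EOF
chmod +x /tmp/wrapdemo/bin/.hello-wrapped
cat > /tmp/wrapdemo/bin/hello <<'EOF'
#!/bin/sh
# Wrapper: the isolation/setup logic lives here, not in the program.
export XDG_DATA_DIRS="/run/sandbox/share"
exec "$(dirname "$0")/.hello-wrapped" "$@"
EOF
chmod +x /tmp/wrapdemo/bin/hello
/tmp/wrapdemo/bin/hello   # prints: XDG_DATA_DIRS=/run/sandbox/share
```

The same indirection could carry namespace setup or jailing flags instead of just environment variables, which is the step towards service-level isolation being discussed.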
<jpo> like - dare i say it - docker entry points? (/me ducks)
<MichaelRaskin> Also I wonder whether I can show off any of the rofi-based selection stuff without showing too much in the visible menu choices! (I mean, a screencast is different from a live demonstration in its ability to be analysed frame-by-frame by people outside the intended audience)
<jpo> well, i'd certainly prefer a live demonstration (tighter feedback loop to ask questions, etc.), but i have no idea if we'd ever meet in person
<MichaelRaskin> The isolation part itself is pretty boring and doesn't look like anything
<jpo> but the part about what is shared and how you specify what user-meaningful data should be exposed to what environments sounds different than other systems in this space
<MichaelRaskin> The way my Firefoxes look is more determined by the userChrome.css I use than by isolation
<jpo> ah, so entirely spoofable by a pwned browser :P
<MichaelRaskin> Spoofable to do what, sorry?
<jpo> confuse it with a different browser, to try to get you to e.g. enter a passphrase
<jpo> ui-redressing attacks, etc.
<lejonet> jpo: xD it was more a thing that struck me as maybe a first step, without having to do too much tooling outside of the current nix model (because the wrapper could be done in post-install and stuff)
<MichaelRaskin> The windows that can hope to get a password are tagged on launch, so it would need to be a very targeted attack to pwn _that_
<MichaelRaskin> Otherwise, I have to have just launched a browser where I expect to enter a password
<jpo> lejonet: if post-install has the same meaning in nix as it does in e.g. rpm-based things, it should ideally be avoided as much as possible IMO, because then you need to trust the packaging even if the contents are never used
<MichaelRaskin> Given that most of my Firefoxes are isolated in a conspicuously transient way, the password gets entered just on launch, and then the browser gets closed soon afterwards.
<MichaelRaskin> jpo: I think post-install is postInstall = '' … '';
<lejonet> jpo: it could either be in postInstall like MichaelRaskin points out, or just simply post-install, but the thing that has become clear after actually thinking about that approach is that it would require some time investment in tooling anyway, which is probably better spent looking at taking steps towards the actual model and such
<jpo> yeah, a demo would be great. i think i'd like your system's human-side (though i think we disagree on what constitutes acceptably-safe implementations of the lower-level details)
<MichaelRaskin> jpo: you probably wouldn't, because it's crazy fine-tuned to my preferences
<lejonet> I would also be interested in a demo, from both of yours systems if possible, because as jpo said earlier, its always interesting to see how others have solved stuff
<jpo> lejonet: a pretty good walkthrough/demo of qubes is https://livestream.com/internetsociety2/hope/videos/178431606
<lejonet> jpo: thank you :) I will look at that link
<MichaelRaskin> jpo: you still need to explain to me why putting a VM into a container is worse than running it without a container
<MichaelRaskin> It's true that I often settle for the «not faster than the bear, just faster than another tourist» approach
<jpo> no, i don't disagree with that, but that wasn't my point. rather that having a VMM's implementation self-privdrop can provide a tighter approximation of actual least-privilege than externally-enforced isolation (be it mandatory access control policies, containers, etc.)
<MichaelRaskin> Well, VMM priv-dropping is nice, but to minimise rebuilds I would prefer the exact filesystem view to be composed externally on-demand
<jpo> err, i think we miscommunicated somehow
<jpo> i don't see what this has to do with rebuilding
<MichaelRaskin> OK, maybe I should say that I am not sure I want a VMM to even _have_ a mount call anywhere in its code
<jpo> and yes, i agree that at least the /usr being composed externally is desirable
<MichaelRaskin> It should limit what it can _do_, but proper bind mounts to limit what it can _see_ do not win anything from being done internally
<jpo> agreed, i don't want it to even have a concept of filesystems at all!
<MichaelRaskin> Well, _some_ piece of code has to run the 9P server
<jpo> 1) you don't need to use 9p (though, i do need to properly look at virtio-fs, this is new to me and i haven't properly dug into it yet)
<jpo> 2) you can deprivilege it in userspace and avoid the kernel filesystem attack surface entirely
<MichaelRaskin> Well, 9P with a separate server, virtio-fs, virtfs. Something will read files. It shouldn't be able to name files not intended for it.
<jpo> and yes, of course that would have performance penalties, which is another reason i initially recommended block devices for passing /nix to the guest
<jpo> sure, but that doesn't need to be an actual filesystem in the trusted host, and i'd argue it shouldn't be!
<jpo> MichaelRaskin: are you familiar with the term "disaggregation"?
<jpo> re bears vs. tourists: yep. that's fine if that's all your threat model justifies, but if i'm going to use something myself or feel comfortable recommending it to others, it's gotta be... a bit more high assurance
<MichaelRaskin> Oh, how cute, they invented one more word for that?
<jpo> heh
<MichaelRaskin> jpo: I just know I won't maintain good enough opsec to resist targeted attacks willing to use a Firefox exploit and a userns exploit and an actual human collecting information from my project participation in a way that picks up more than one of my usernames.
<jpo> yet qubes is used by people who need to have good enough opsec or people literally die
<jpo> so, yeah. i feel some moral obligation to encourage paranoid choices when it comes to security architecture decisions
<MichaelRaskin> The point is that Qubes is just one useful technical tool in a toolbox you need, and then actual opsec is a ton of human behaviours
<jpo> yes, absolutely, of course. i agree with you 100%. but you still want the tools in your toolbox to be stronger when possible
<MichaelRaskin> I don't want an unbreakable screwdriver that is heavy enough to be unusable on my laptop screws, though
<MichaelRaskin> (BTW my actual preferred way of browser engine isolation is to download just HTML and convert it to readable text with something vastly simpler than a modern browser engine)
pie_ has joined #spectrum
<MichaelRaskin> jpo: re: how setting up passing stuff looks like: @sub ,nix,zathura/bin/zathura XDG_DATA_DIRS="$(readlink -f ~/.nix-personal/personal-result/mime || nix-build --no-out-link '<nixpkgs>' -A shared_mime_info)/share" ,here, article.pdf
<MichaelRaskin> (of course the long argument is hidden inside a wrapper script)
<MichaelRaskin> Then it has a proper bind-mount access to the current directory (read-only, though) and can redisplay the PDF as I edit the source and recompile
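A rough sketch of the kind of read-only bind-mount view being described, assuming unprivileged user namespaces are enabled (paths and the jail layout are made up for illustration); inside its own mount namespace the viewer can name only the shared directory, and only read it:

```shell
# Sketch: expose one directory read-only inside a private mount namespace.
# Assumes util-linux unshare and unprivileged userns support.
mkdir -p /tmp/jaildemo/src /tmp/jaildemo/jail
echo article > /tmp/jaildemo/src/article.pdf
unshare --mount --map-root-user sh -c '
  mount --bind /tmp/jaildemo/src /tmp/jaildemo/jail
  mount -o remount,bind,ro /tmp/jaildemo/jail
  ls /tmp/jaildemo/jail                               # the jail shows article.pdf
  touch /tmp/jaildemo/jail/x 2>/dev/null && echo writable || echo read-only
'
```

In a real setup this would be wrapped (as in the @sub script above) so the jail path stays stable while the source directory changes per invocation.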
<jpo> read-only is absolutely critical. i wonder if you could still escape your filesystem sandbox with *at(2) and dirfd-relative calls, especially if directories ever move outside the container. already seems too complex to confidently reason about to me
<jpo> a lot of the container ecosystem got broken that way earlier this year (or maybe it was last year)
<jpo> and people smarter and more familiar with the relevant subsystems than i am assumed it was fine at the time it was implemented...
<jpo> i prefer to conservatively guard against my own unknown-unknowns with architectural safeguards
<jpo> not trying to just throw FUD at your solution though - it's most definitely better than not sandboxing, and if it's usable enough that people who are currently not sandboxing might start doing so, then it is most definitely a pragmatic win
<jpo> so - good job, that's nice
<jpo> but still not what i'm comfortable with personally
<jpo> "< lejonet> Because VM breakouts do exist, tho iirc they are few and far between"
<jpo> yeah, well... that absolutely depends on how you use them. if you expose a bunch of complexity in a thing running with ambient authority, you're gonna have a bad day. many of the xen security advisories don't affect qubes because of how we've deprivileged & isolated qemu
<MichaelRaskin> jpo: so I fully support VMs _plus_ this
<jpo> MichaelRaskin: same
<MichaelRaskin> jpo: but I do want this granularity and as many layers protecting it as possible. So whatever it is that runs the shares, I want the shares to be container-isolated, too
<jpo> though tbh, other projects have already spent (and are continuing to spend) lots of effort trying to get that right, so IMO qyliss should prioritize the nix-relevant bits
<lejonet> jpo: indeed, at least all breakouts I know of depend on the VMM being run with higher privileges
<MichaelRaskin> jpo: I did try to make a design easy to cut in parts to go step by step
<jpo> lejonet: there are many that don't. some due to ambiguities in the intel reference manuals that implementers got wrong (fault semantics, emulated paging, what to flush when, etc.)
<MichaelRaskin> I think what is lacking is an easy-to-manage customisable fast-boot-reduced-surface kernel build to run inside
<jpo> also plenty of bugs in the paravirtualization stuff
<MichaelRaskin> Well, and CrosVM in Nixpkgs
<jpo> also plenty of bugs in the cross-vm memory management stuff
<lejonet> jpo: true
<jpo> sorry, i don't mean crosvm, i mean... between vm ;)
<jpo> and... yeah, lots of stuff
<lejonet> I've just not looked too much into VM break outs :)
<jpo> yeah, there are tons
<jpo> latent dma from passed-through devices
<jpo> bypassing the hypervisor and directly breaking stuff in SMM
<lejonet> yeah, DMA is always tricky and blargh
<jpo> if we expose the network stack and filesystem of the host, i guarantee that'll eventually lead to an escape
<lejonet> yep
<MichaelRaskin> jpo: note that whatever manages the FS for sharing into machines, a breakout there is still access to persistent data not intended for the VM (so tricks might still make sense)
<MichaelRaskin> (contained FS view out of bind mounts, maybe unique UIDs, etc.)
<ehmry> is this about FS for storage or FS for representing state?
<MichaelRaskin> Storage _is_ state
<ehmry> I think that it's worthwhile to distinguish between immutable data and plan9-style FS interface
<ehmry> and also mutable storage
<qyliss> gosh, this is a long discussion to wake up to
<MichaelRaskin> Maybe waking up too early was a mistake? Although it might not result in any posts to spectrum-devel (sadly)
<qyliss> I already woke up too late for the first NixCon talk
<ehmry> qyliss: fwiw it was guix talk
<qyliss> I watched it from bed on the livestream :)
<ehmry> what I was trying to get at is that when the file-system is exposed as 9p it's not terribly difficult (and fun actually) to implement pseudo-file-systems or inject exotic file-system policy
<ehmry> because it can be done in userspace in more-or-less whatever language you like
<jpo> ^^^^^^^^^
<jpo> exactly
<jpo> if you think outside the box of traditional unix system design, you soon find you have a lot better options
<jpo> the host doesn't even need a filesystem at all
<ehmry> and the unix file-system interface is actually pretty flexible, conceptually
<qyliss> Oh yes, intriguing.
<ehmry> i don't know how hard it would be to connect an external 9p server to a virtio channel
<ehmry> which is probably faster than TCP
<MichaelRaskin> jpo: how many layers of 9p are we speaking?
<MichaelRaskin> Because if you want a single layer that implements both the protocol and shiny new and interesting policies, it will be possible to fool it into disclosing too much
<MichaelRaskin> You don't have any actual good options because the host must be able to boot on commodity hardware and modern hardware is a lost cause
<jpo> MichaelRaskin: again, if you think outside the unix/linux box, you have options like https://genode.org/documentation/genode-foundations/19.05/components/Common_session_interfaces.html#File_system, which stack & multiplex nicely
<jpo> MichaelRaskin: that's where firmware security comes in. that's another rabbit hole. see heads & trenchboot
<MichaelRaskin> Sigh. Firmware is not just motherboard
<jpo> it's not a lost cause, but it is a bunch of work, and IMO should not be the immediate goal of this project
<jpo> MichaelRaskin: yes, i'm well aware of that
<jpo> i deal with that reality every day at work now ;)
<MichaelRaskin> jpo: I think it is a lost cause if we are a) not a HW project b) want to have anything working in 2020
<jpo> either way, this project is about what you do after bootstrapping to a trustworthy state. it's kinda out of scope
<MichaelRaskin> Anyway, if Spectrum cannot be used for the least-trusted stuff with, say, Nix-on-Arch, I am not interested at all.
<MichaelRaskin> In terms of access logic, Genode's FS access is not _that_ different from bind mounts on Linux
<MichaelRaskin> But if we write any FS-access-logic from scratch, it'd better not be the only layer of defense
<jpo> it absolutely is! because in linux, you own it once and you own the world, and with genode your entire fs can be deprivileged and have an isolated instance per security-relevant context
<jpo> anyway, i should really sleep, but... i just wanted to say, MichaelRaskin, I hope you aren't taking my comments personally. i really do like what you've done, and it seems like a clear and meaningful win over the status quo
<MichaelRaskin> I mean, I am not inclined to take things personally
<MichaelRaskin> And we started by agreeing that we apply different levels of requirements anyway
<jpo> cool
<MichaelRaskin> What I have done is not really usable by random people, though (either you know Lisp or you can do nothing)
<MichaelRaskin> But I definitely have some usability observations that I want SpectrumOS to use to be on the trajectories of doing things much better than me _first_, then still being able to improve the security _when needed_ while still allowing different tradeoffs (and hopefully keeping usability)
<jpo> i think the part that'll most likely be useful to share between what you want and what i want (nevermind what qyliss wants! :P) would be the inside-vm integration parts
<MichaelRaskin> Indeed.
<qyliss> jpo: I'd be curious to know more about how a host would work without a file system, or maybe I misunderstand what you mean
<qyliss> Linux needs a filesystem to find init on, doesn't it?
<MichaelRaskin> qyliss: well, this could be a really small initramfs
<MichaelRaskin> Which doesn't need any actual FS code beyond FS cache code
<qyliss> Right, okay
<qyliss> That's what I thought
<MichaelRaskin> (well, one also needs to put the VM somewhere)
<jpo> and then your storage could be stacked block devices that the host kernel itself never mounts directly, and that's what you expose to VMs
<MichaelRaskin> I am not sure one can write MirageOS-VM-runner-with-HW-passthrough quickly
<jpo> this is not theoretical - this is what qubes does w/ lvm
<jpo> we could get away without any filesystems mounted from disk by dom0, we just haven't so far mainly because fedora has... different expectations
<jpo> but all we put there are a couple things in /etc, some logs, and a couple xml files in /var
<jpo> everything else can be stateless / derived
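The stacked-block-device pattern jpo describes (as Qubes does with LVM) can be sketched with an ordinary image file; the filename and the crosvm invocation are illustrative. The key property is that the host only creates and hands out an opaque image, and only the guest kernel ever parses the filesystem metadata inside it:

```shell
# Sketch: host-side storage the host itself never mounts.
truncate -s 64M /tmp/appvm-root.img            # sparse image; no host mount
# mkfs could run here, or the guest can format it on first boot:
#   mkfs.ext4 /tmp/appvm-root.img
# Hand the raw image to the VMM as a block device (flag is illustrative):
#   crosvm run --rwdisk /tmp/appvm-root.img ... # guest sees it as a virtio disk
stat -c %s /tmp/appvm-root.img                 # prints: 67108864
```

This keeps the host kernel's filesystem parsers entirely out of the trusted path for guest data, which is the point of the "host doesn't need a filesystem" argument above.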
<qyliss> how do you envision the VMs interacting with those block devices?
<MichaelRaskin> Qubes has really coarse isolation granularity
<MichaelRaskin> There you know your VM by name, so there are storage device(s) for each VM, I think
<qyliss> I believe so
<MichaelRaskin> Spectrum could (eventually) support multiple storage VMs providing remote-FS services to other VMs
<qyliss> That's very true
<qyliss> Was just thinking the same
<qyliss> That might be the way forward.
<MichaelRaskin> The way forward now is CrosVM in mainline Nixpkgs and a below-1s-boot GNU Hello machine
<MichaelRaskin> Which requires almost nothing
<jpo> there's a bunch of prior art here, probably better explained in email form than irc monologue, and not at 3am
<MichaelRaskin> Indeed, 3am does sound like a complicated factor
<qyliss> I'm looking forward to reading such an email, then :)
<jpo> oh, historically that's a very productive hour for me, but i gotta get up in a couple hours
<MichaelRaskin> In my design I stressed why splitting network VM duties is a good idea, but of course one can delay that, and of course one can apply the same logic to the other useful services
<qyliss> MichaelRaskin: you're right about next immediate step being crosvm
<qyliss> I'm going to start on that tomorrow at NixCon hack days
<MichaelRaskin> One thing to remember is obviously that whatever VM builds the updated initramfs for the next boot, it has to mount something as /nix/store, and it is security equivalent to the host
<qyliss> From what I can tell it's not too bad to build.
<MichaelRaskin> I should learn Rust tooling already (well, and Rust itself), but I never get around to it
<MichaelRaskin> (I remember reading that Plan9 needs no root account and being disappointed that it kind of doesn't, but there is that one thing running the UID-setting service…)
<MichaelRaskin> I guess with reproducible initramfs build it might not be that expensive to have multiple isolated VMs build it and compare the results, but of course these VMs are quite likely to have identical problems if any of them has problems
<hyperfekt> MichaelRaskin: i think most of the cost of VMs could be absorbed by prewarming them and letting KSM do its thing
<MichaelRaskin> Well, you don't know in advance what FS sharing you want to provide to the VM
<hyperfekt> MichaelRaskin: crosvm is trivial to package, i did so already. or do you mean an architecture to actually make use of it easily?
<tilpner> I've had great success in reducing VM "startup" time (with qemu) by resuming a migration from the same file multiple times, but AFAICT neither crosvm nor firecracker support that feature
<hyperfekt> ehmry: connecting an external 9p server to virtio should require some massaging because the 9p and virtfs protocols aren't identical
<tilpner> I was going to look into how hard that would be to add, but... currently it's collecting dust on my todo list :c
<qyliss> hyperfekt: oh you did?
<hyperfekt> tilpner: afaik that gives you problems with randomization which is one of the main reasons kvm doesn't have vmfork
<qyliss> is it in nixpkgs?
<hyperfekt> qyliss: a pr
<qyliss> oh neat
<tilpner> hyperfekt: It might be a problem if all instances on all installations were using the same migration, but generating a new one at boot might help
<hyperfekt> tilpner: i mean you'd still break the crypto
<MichaelRaskin> Well, I guess one could provide some seed to the VM to use right after migration?
<tilpner> Ehh, even then. Maybe opt-in, as a convenience. But it's hypothetical anyway
<MichaelRaskin> hyperfekt: soo, your last comment in the PR was «I want to wait until this branch is stable» — is it?
<tilpner> I haven't been following discussion much, perhaps startup time isn't a problem?
<hyperfekt> MichaelRaskin: it's definitely possible to implement forking VMs; bromium did so in microxen. but it's probably too complex to focus on rn
<MichaelRaskin> Well, a kernel build that takes 5 seconds to boot is a problem for a VM-per-page-in-Firefox
<MichaelRaskin> hyperfekt: that's true
<MichaelRaskin> I keep hearing that people do bring boot times below 1s (provided some restriction, but I think we don't need too much stuff)
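For reference, the usual shape of such a fast-boot, reduced-surface guest kernel is a fully non-modular config with little beyond virtio drivers. A hedged fragment (these are real mainline Kconfig symbols, but the exact minimal set depends on the VMM and workload):

```
# Everything built in; no module-loading surface in the guest
# CONFIG_MODULES is not set
CONFIG_VIRTIO=y
CONFIG_VIRTIO_MMIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO_NET=y
CONFIG_VIRTIO_CONSOLE=y
```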
<hyperfekt> MichaelRaskin: yeah, been for a while. but i want to change the patch so it uses a compile time env var instead of runtime and i still haven't figured out how chromium releases work lol
<MichaelRaskin> «how» or «whether», hm
<hyperfekt> MichaelRaskin: if you're interested in vm boot times look at 'my vm is lighter than your container' paper
<tilpner> <1s times would be good enough to not bother with the complexity of migration!
<hyperfekt> iirc <50ms is feasible
<tilpner> Doing what though?
<tilpner> On my laptop, I got 160ms for Perl eval, but up to 2s to load the ghc closure
<MichaelRaskin> hyperfekt: I have seen multiple papers, including possibly this one
<hyperfekt> loading a vm image and bringing up a linux kernel.
<MichaelRaskin> I want a Nix expression for CrosVM with Linux and GNU Hello and all that working inside 1s, though
<hyperfekt> yeah that should definitely be doable if i'm extrapolating correctly
<MichaelRaskin> I also think so, otherwise I would not say it is an obvious next step to achieve, and would say something about collecting information
<hyperfekt> the scales that vm-per-page requires are probably achievable with ukl
<hyperfekt> (ignoring the superlinear scaling in the kvm code)
<MichaelRaskin> Building a unikernel Firefox?
<qyliss> In Qubes VMs always took several seconds to start, and it wasn't that bad.
<qyliss> Would be nice (and I think possible) to do it quickly, but we can proceed without it.
<MichaelRaskin> Qubes VMs _need_ a lot of stuff done, though
<qyliss> that too
<qyliss> hyperfekt: do you think it's something more complicated than "the biggest build number in the release directory is the release"?
<qyliss> Okay, I'm sitting in the middle of the audience chairs
<qyliss> Please come and assemble if you want to come to the Spectrum table at the pub
<qyliss> We'll leave a bit before 18:00 hopefully to make sure we get a table
<qyliss> pie_ ehmry lejonet: ^
<lejonet> qyliss: assemble where? :P
<qyliss> In the middle of the audience chars
<lejonet> oh middle of the chairs, I'm blind :P
<qyliss> *chairs
pie_ has quit [Ping timeout: 245 seconds]
<hyperfekt> qyliss: frankly i have no idea. i would like it to track not just built but actually released versions, but there's an entire wiki page on finding those in the repos for chromium browser, and i did not see how that transfers to crosvm
bonux has joined #spectrum
bonux has quit [Quit: ERC (IRC client for Emacs 27.0.50)]