Introducing Qubes Odyssey Framework
Qubes OS is becoming a more and more advanced, polished, and user-friendly OS.
But Qubes OS, even as advanced as it is now, surely has its limitations – limitations that some users might find difficult to accept, and that might discourage them from even trying out the OS. One such limitation is the lack of 3D graphics support for applications running in AppVMs. Another is the still-far-from-ideal hardware compatibility – a somewhat inherent problem for most (all?) Linux-based systems.
There is also one more “limitation” of Qubes OS that is particularly difficult to overcome... namely that it is a standalone operating system, not an application that could be installed inside the user's existing OS. While installing a new application that increases system security is a no-brainer for most people, switching to a new, exotic OS is quite a different story...
Before I discuss how we plan to address those limitations, let's first make a quick digression about what Qubes really is, as many people often get that wrong...
What Qubes IS, and what Qubes IS NOT?
Qubes surely is not Xen! Qubes only uses Xen to create isolated containers – security domains (or zones). Qubes also is not a Linux distribution! Sure, we currently use Fedora 18 as the default template for AppVMs, but at the same time we also support Windows VMs. And while we use Linux as the GUI and admin domain, we could really use something different – e.g. Windows as the GUI domain.
So, what is Qubes then? Qubes (note how I've suddenly dropped the OS suffix) is several things:
- The way to configure, harden, and use the VMM (e.g. Xen) to create isolated security domains, and to minimize the overall system TCB.
- Secure GUI virtualization that provides strong GUI isolation while, at the same time, also providing seamless integration of all applications running in different VMs onto one common desktop. Plus a customized GUI environment, including a trusted Window Manager that provides unspoofable decorations for the applications' windows.
- Secure inter-domain communication and services infrastructure with a centrally enforced policy engine. Plus some “core” services built on top of this, such as secure file exchange between domains.
- Various additional services, or “addons”, built on top of the Qubes infrastructure, such as Disposable VMs, Split GPG, TorVM, Trusted PDF converter, etc. These are just a few examples, as basically the sky is the limit here.
- Various additional customizations to all the guest OSes that run in various domains: GUI, Admin, ServiceVMs, and AppVMs.
Introducing Qubes HAL: Hypervisor Abstraction Layer
Because Qubes is a bunch of technologies and approaches that are mostly independent of the underlying hypervisor, as discussed above, it's quite natural to ask whether we could easily build an abstraction layer to allow the use of different VMMs with Qubes, instead of just Xen. It turns out this is not as difficult as we originally thought, and this is exactly the direction we're taking right now with Qubes Odyssey! To make this possible we're going to use the libvirt project.
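To give a rough feel for how such an abstraction layer works in practice, here is a minimal, purely illustrative Python sketch using the libvirt bindings – the connection URIs are standard libvirt ones, but which of them a given Qubes product would actually use, and how the backend would get selected, is of course just an assumption of this sketch:

    import libvirt

    # The same management code can talk to very different VMMs simply by
    # choosing a different libvirt connection URI.
    EXAMPLE_URIS = (
        "xen:///",                                      # bare-metal Xen, as in current Qubes OS
        "vbox:///session",                              # hosted Virtual Box
        "hyperv://server.example.com/?transport=http",  # Hyper-V (stock libvirt driver)
    )

    for uri in EXAMPLE_URIS:
        try:
            conn = libvirt.open(uri)
        except libvirt.libvirtError as err:
            print("%s: backend not available here (%s)" % (uri, err))
            continue
        # List whatever domains the chosen backend knows about.
        print(uri, "->", [dom.name() for dom in conn.listAllDomains()])
        conn.close()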
So, we might imagine a Qubes that is based on Hyper-V, or even on Virtual Box or VMWare Workstation. In the case of the last two, Qubes would no longer be a standalone OS, but rather an “application” that one installs on top of an existing OS, such as Windows. The obvious advantages we're gaining here are improved hardware compatibility and ease of deployment.
And we can go even further and ask: why not use Windows Native Isolation, i.e. mechanisms such as user account separation, process isolation, and ACLs, to implement domain isolation? In other words, why not use the Windows OS as a kind of “VMM”? This would further dramatically improve the lightness of the system...
Of course the price we pay for all this is progressively degraded security: Virtual Box, for example, is no match for Xen in terms of security, both architecturally and implementation-wise, not to mention the quality of isolation provided by the Windows kernel, which is weaker still.
But on the other hand, it's still better than using “just Windows”, which offers essentially only one “zone” – so no domain isolation at all! And if we can get, with minimal effort, most of our Qubes code to work with all those various isolation providers, then... why not?
Being able to seamlessly switch between various hypervisors is only part of the story, of course. The remaining part is support for different OSes used for the various Qubes domains. Currently we use Linux, specifically Fedora 18, in our GUI & Admin domain, but there is no fundamental reason why we couldn't use Windows there instead. We discuss this in more depth in one of the sections below.
The diagram below tries to illustrate the trade-offs between hardware compatibility and ease of deployment vs. security when using different isolation backends with Qubes. Some variants might also offer additional benefits, such as “super-lightness” in terms of CPU and memory resources required, as is the case with Windows Native Isolation.
Some example configurations
Let's now discuss two extreme variants of Qubes – one based on the baremetal Xen hypervisor, and the other based on Windows Native Isolation, i.e. a variant from the opposite end of the spectrum (as shown in the illustration above).
The diagram below shows a configuration that uses a decent baremetal hypervisor, such as Xen, with the ability to securely assign devices to untrusted service domains (NetVM, UsbVM). So, this is very similar to the current Qubes OS.
Additionally, we see separate GUI and Admin domains: the GUI domain might perhaps be based on Windows, to provide users with a familiar UI, while the Admin domain, tasked with domain management and policy enforcement, might be based on some minimal Linux distribution.
In the current Qubes OS there is no distinction between a GUI and an Admin domain – both are hosted within one domain called “dom0”. But in some cases there are benefits to separating the GUI domain from the Admin domain. In a corporate scenario, for example, the Admin domain might be accessible only to the IT department and not to the end user. This way the user would have no way of modifying system-wide policies and, e.g., allowing their “work” domain to suddenly talk to the wild open Internet, or to copy work project files from the “work” to the “personal” domain (save for the exotic, low-bandwidth covert channels, such as through the CPU cache).
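To make the policy part more tangible, this is roughly what such a centrally enforced policy looks like in today's Qubes – the file path and line format below follow current Qubes R2 conventions, while the “work”/“personal” domain names are obviously just examples:

    # /etc/qubes-rpc/policy/qubes.Filecopy  (kept in the Admin domain)
    # Each line: <source-domain> <destination-domain> <action>, matched top-down.
    # Keep project files inside the "work" domain:
    work    $anyvm  deny
    # Everything else requires an explicit decision made via the trusted GUI:
    $anyvm  $anyvm  ask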
The ability to deprivilege the networking and USB stacks by assigning the corresponding devices (NICs and USB controllers) to untrusted, or semi-trusted, domains provides great security benefits. This automatically prevents various attacks that exploit bugs in the WiFi or USB stacks.
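As a small illustration, this is more or less how handing a NIC over to a NetVM looks at the libvirt level – the <hostdev> markup is standard libvirt, while the PCI address, the connection URI, and the “netvm” name are placeholders for this sketch:

    import libvirt

    # Standard libvirt PCI passthrough description; the bus/slot/function
    # values are placeholders for whatever NIC the machine actually has.
    NIC_HOSTDEV_XML = """
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
      </source>
    </hostdev>
    """

    conn = libvirt.open("xen:///")
    netvm = conn.lookupByName("netvm")   # the untrusted networking domain
    netvm.attachDevice(NIC_HOSTDEV_XML)  # the NIC now belongs to the NetVM
    conn.close()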
What is not seen in the diagram, but is typical for baremetal hypervisors, is that they are usually much smaller than hosted hypervisors, implementing fewer services and delegating most tasks, such as the infamous I/O emulation, to (often) unprivileged VMs.
Let's now look at the other extreme example of using Qubes – the diagram below shows the architecture of a “Qubized” Windows system that uses either a hosted VMM, such as Virtual Box or VMWare Workstation, or even the previously mentioned Windows Native Isolation mechanisms, as the isolation provider for domains.
Of course this architecture lacks many of the benefits discussed above, such as untrusted domains for the networking and USB stacks, a small hypervisor, etc. But it can still be used to implement multiple security domains, at a much lower “price”: better hardware compatibility, easier deployment, and, in the case of Windows Native Isolation, excellent performance.
And it really can be made reasonable, although it might require more effort than it seems at first sight. Take Windows Native Isolation – of course just creating different user accounts to represent different domains is not enough, because Windows still doesn't implement true GUI-level isolation, nor network isolation. So, the challenge is to do it right, and “right” in this case means making the isolation as good as the Windows kernel's ability to isolate processes belonging to different users from each other.
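To give a rough idea of the direction, here is a minimal sketch of one small piece of the puzzle – restricting a domain's file store to its own Windows account using the stock icacls tool. It assumes each Qubes domain is backed by a dedicated local account (the “qubes-work” name and the directory layout are made up), and it obviously says nothing about the harder GUI- and network-isolation parts:

    import subprocess

    def lock_down_domain_dir(path, domain_account):
        """Restrict a domain's data directory to that domain's Windows account."""
        # Strip inherited ACEs so nothing leaks in from the parent directory...
        subprocess.run(["icacls", path, "/inheritance:r"], check=True)
        # ...then grant full control only to the per-domain account (and SYSTEM).
        subprocess.run(["icacls", path, "/grant:r",
                        domain_account + ":(OI)(CI)F", "SYSTEM:(OI)(CI)F"],
                       check=True)

    # Example: the hypothetical account backing the "work" domain.
    lock_down_domain_dir(r"C:\Qubes\domains\work", "qubes-work")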
Sure, a single kernel exploit destroys all of this, but it's still better than the “one application can (legally) read all my files” policy that 99% of all desktop OSes out there essentially implement today.
Now, probably the best thing about all this is that once we implement a product based on, say, Qubes for Windows, together with various cool “addons” that take advantage of the Qubes services infrastructure and are product-specific, it should then be really easy to upgrade to another VMM, say Hyper-V, to boost security. And the users should not even notice a change in the UI, save perhaps for some performance degradation (clearly, the automatic creation of VMs to handle various user tasks would be more costly on Hyper-V than with Windows Native Isolation, where “VMs” are just... processes).
Qubes building blocks – implementation details
Let's now have a look at the repository layout of the latest Qubes OS sources – every name listed below represents a separate code repository that corresponds to a logical module, or building block, of a Qubes system:
core-admin
core-admin-linux
core-agent-linux
core-agent-windows
core-vchan-xen
desktop-linux-kde
desktop-linux-xfce4
gui-agent-linux
gui-agent-windows
gui-agent-xen-hvm-stubdom
gui-common
gui-daemon
linux-dom0-updates
linux-installer-qubes-os
linux-kernel
linux-template-builder
linux-utils
linux-yum
qubes-app-linux-pdf-converter
qubes-app-linux-split-gpg
qubes-app-linux-tor
qubes-app-thunderbird
qubes-builder
qubes-manager
vmm-xen
vmm-xen-windows-pvdrivers
Because the current Qubes R2 still doesn't use the HAL layer to support different hypervisors, it can currently be used with only one hypervisor, namely Xen, whose code is provided by the vmm-xen repository (in an ideal world we would just be using vanilla Xen instead of building our own from sources, but in reality we like the ability to build it ourselves, slightly modifying some things).
Once we move towards the Qubes Odyssey architecture (essentially by replacing the hardcoded calls to Xen's management stack in the core-admin module with libvirt calls), we could then easily swap Xen for other hypervisors, such as Hyper-V or Virtual Box. In the case of Hyper-V we would not have access to the sources of the VMM, of course, so we would just be using the stock binaries, although we still might want to maintain a vmm-hyperv repository that could contain various hardening scripts and configuration files for this VMM. Or might not. Also, chances are high that we would simply be able to use the stock libvirt drivers for Hyper-V or Virtual Box, so there would be no need to create core-libvirt-hyperv or core-libvirt-virtualbox backends.
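To make the “replace the hardcoded calls with libvirt” part more concrete, here is a toy before/after sketch in Python – the xl invocation reflects how a Xen domain is started from the command line today, while the function names and the config file path are invented for this illustration:

    import subprocess
    import libvirt

    def start_domain_r2_style(name):
        # The Qubes R2 way (simplified): shell out to Xen's xl toolstack,
        # which ties the management code to one specific hypervisor.
        subprocess.check_call(["xl", "create",
                               "/var/lib/qubes/appvms/%s/%s.conf" % (name, name)])

    def start_domain_odyssey_style(conn, name):
        # The Odyssey way: go through libvirt, so the very same call works
        # for whatever hypervisor the connection happens to point at.
        conn.lookupByName(name).create()

    conn = libvirt.open("xen:///")   # or vbox:///session, hyperv://..., etc.
    start_domain_odyssey_style(conn, "work")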
What we will need to provide is our custom inter-domain communication library for each supported hypervisor. This means we will need to write core-vchan-hyperv or core-vchan-virtualbox. Most (all?) VMMs do provide some kind of API for inter-VM communication (or at least VM-host communication), so the main task of such a component is to wrap the VMM-specific mechanism with the Qubes-standardized API for vchan (and this standardization is one of the things we're currently working on). All in all, in most cases this will be a simple task.
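The actual vchan library is written in C, but just to convey the shape of the interface that every core-vchan-* backend would have to implement, here is a rough Python approximation – the method names are only my guess at what such a standardized API could look like, not the real one:

    from abc import ABC, abstractmethod

    class Vchan(ABC):
        """Approximate shape of a Qubes-standardized vchan channel."""

        @abstractmethod
        def connect(self, remote_domain, port):
            """Establish the channel to the given domain/port."""

        @abstractmethod
        def send(self, data):
            """Write bytes to the peer."""

        @abstractmethod
        def recv(self, size):
            """Read up to 'size' bytes from the peer."""

        @abstractmethod
        def close(self):
            """Tear the channel down."""

    # core-vchan-xen would implement this on top of Xen shared-memory rings
    # and event channels; core-vchan-windows, in the Windows Native Isolation
    # case, could implement it with something as mundane as named pipes.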
If, on the other hand, we wanted to support an “exotic” VMM, such as the previously mentioned Windows Native Isolation, which is not really a true VMM, then we would need to write our own libvirt backend to support it:
core-libvirt-windows
... as well as the corresponding vchan module (which should be especially trivial to write in this case):
core-vchan-windows
Additionally, if we're building a system where the Admin domain is not based on Linux – which would likely be the case if we used Hyper-V, Virtual Box for Windows, or, especially, Windows Native Isolation – then we should also provide a core-admin-windows module that, among other things, provides the Qubes qrexec implementation, something that is highly OS-dependent.
As can be seen above, we currently only have core-admin-linux, which is understandable as we currently use Linux in Dom0. But the good news is that we only need to write core-admin-XXX once for each OS that is to be supported as an Admin domain, as this code should not depend on the actual VMM used (thanks to our smart HAL).
Similarly, we also need to ensure that our gui-daemon can run on the OS that is to be used as the GUI domain (again, in most cases the GUI domain would be the same as the Admin domain, but not always). Here the situation is generally much easier, because “with just a few #ifdefs” our current GUI daemon should compile and run on most OSes, from Linux/Xorg to Windows and Macs (which is the reason we only have one gui-daemon repository, instead of several gui-daemon-XXX).
Finally, we should provide some code that will gather all the components needed for our specific product and package them into either an installable ISO, if Qubes is to be a standalone OS like the current Qubes, or into an executable, if Qubes is to be an “application”. The installer, depending on the product, might do some cool things, such as taking the user's current system and automatically moving it into one of the Qubes domains.
To summarize, these would be the components needed to build a “Qubes for Windows” product:
core-admin
core-admin-windows
core-agent-windows
core-vchan-windows
core-libvirt-windows
desktop-windows
gui-agent-windows
gui-common
gui-daemon
windows-installer-qubes-for-windows
qubes-builder
qubes-manager
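For what it's worth, qubes-builder is driven by a builder.conf file whose COMPONENTS variable selects the repositories to build, so a configuration for such a hypothetical product could look more or less like this (everything Windows-specific below is, of course, made up):

    # builder.conf (sketch) for a hypothetical "Qubes for Windows" product
    COMPONENTS = \
        core-admin \
        core-admin-windows \
        core-agent-windows \
        core-vchan-windows \
        core-libvirt-windows \
        desktop-windows \
        gui-agent-windows \
        gui-common \
        gui-daemon \
        qubes-manager \
        windows-installer-qubes-for-windows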
Additionally, we will likely need a few qubes-app-* modules that would implement some "addons", such as perhaps the automatic opening of links and documents in specific VMs, e.g.:
qubes-app-windows-mime-handlers
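Just to make the “addon” idea concrete: such a module could, for example, register itself as the default link handler inside the user's domain and forward every clicked URL over qrexec to a designated (possibly disposable) browsing domain. A very rough sketch of the forwarding part might look like this – both the service name and the exact client command are hypothetical and would be product-specific:

    import subprocess
    import sys

    def open_in_browsing_domain(url, target="browsing-dvm"):
        # Hypothetical qrexec call: pass the URL on stdin to a service
        # (here called "qubes.OpenURL") running in the browsing domain;
        # the Admin domain's policy decides whether to allow it.
        subprocess.run(["qrexec-client-vm", target, "qubes.OpenURL"],
                       input=url.encode(), check=True)

    if __name__ == "__main__":
        open_in_browsing_domain(sys.argv[1])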
Here, again, the sky's the limit, and this is specifically the area where each vendor can go to great lengths and build killer apps using our Qubes framework.
Now, if we wanted to create "Qubes for Hyper-V" we would need the following components:
core-admin
core-admin-windows
core-agent-linux
core-agent-windows
core-vchan-hyperv
desktop-windows
gui-agent-linux
gui-agent-windows
gui-common
gui-daemon
windows-installer-qubes-hyperv
qubes-app-XXX
qubes-builder
qubes-manager
vmm-hyperv
Here, as an example, I have also left in the optional core-agent-linux and gui-agent-linux components (the same ones that are to be used with the Xen-based Qubes OS) to also allow support for Linux-based VMs – if we can get those “for free”, then why not!
It should be striking how many of these components are the same in both cases – essentially the only differences are due to the use of different vmm-* and vchan components and, of course, the different installer.
It should also be clear now how this framework enables seamless upgrades from one product (say, Qubes for Windows) to another (say, Qubes for Hyper-V).
Licensing
Our business model assumes working with vendors, as opposed to end users, and licensing to them various code modules needed to create products based on Qubes.
All the code that comprises the base foundation needed for the creation of any Qubes variant (so core-admin, gui-common, gui-daemon, qubes-builder and qubes-manager) will be kept open source, specifically GPL. Additionally, all the code needed for building the Xen-based Qubes OS with Linux-based AppVMs and Linux-based GUI and Admin domains will continue to be available as open source. This is to ensure that Qubes OS R3, which will be based on this framework, can remain fully open source (GPL).
Additionally, we plan to dual-license this core open source code to vendors who would like to use it in proprietary products and who would not like to be forced, by the GPL license, to share the (modified) sources.
All the other modules, especially those developed to support other VMMs (Hyper-V, Virtual Box, Windows Native Isolation), as well as those needed to support the Windows OS (gui-agent-windows, core-agent-windows, core-admin-windows, etc.), will most likely be proprietary and will be available only to vendors who decide to work with us and buy a license.
So, if you want to develop an open source product that uses the Qubes framework, you can freely do so, as all the required core components will be open sourced. But if you would like to make a proprietary product, then you should buy a license from us. I think this is a pretty fair deal.
Current status and roadmap
We're currently working on two fronts: one is rewriting the current Qubes code to support the Qubes HAL, while the other is adding a backend for Windows Native Isolation (which also involves doing things such as GUI isolation right on Windows).
We believe that by implementing two such extreme backends – Xen and Windows Native Isolation – we can best show the flexibility of the framework (plus, our customer is especially interested in the latter backend ;)
We should be able to publish some code, i.e. the framework together with an early Qubes OS R3 that will be based on it, sometime in the fall, or maybe earlier.
We obviously remain determined to further develop the Xen-based Qubes OS, because we believe it is the most practically secure OS available today, and we believe such an OS should be open source.
Qubes R2 will still be based on the Xen-hardcoded code, because it is close to the final release and we don't want to introduce such drastic changes at this stage. The only thing Qubes R2 will have in common with Qubes Odyssey is the new source code layout presented above (but still with hardcoded xl calls and xen-vchan).
So, this is all really exciting and a big thing – let's see if we can change the industry with it :)
Oh, and BTW, some readers might be wondering why the framework was codenamed “Odyssey” – this is obviously because of the “HAL”, which plays a central role here and which, of course, also brings to mind the famous Kubrick movie.