By philwinder | July 2, 2016
This week I was lucky enough to spend some time with Mesos 1.0.0-RC1, and specifically the new unified containerizer. But first, let’s discuss what has existed for the last few years.
Mesos instantiates tasks inside containerizers. These containers are intended to isolate tasks from the underlying OS and from other containers, which allows the host to control them.
There were two types of containerizer: Mesos and Docker. The Docker containerizer delegates the task of containerization to the Docker daemon, so isolation is whatever Docker itself provides (CPU, memory, network, disk, etc.). The Mesos containerizer was Mesos’ own implementation, based on Linux namespaces and cgroups, and it provided implementations of various isolation policies.
Whilst they both provided similar functionality, there were differences. For example, although involved, it was technically straightforward to add another isolation technique to the Mesos containerizer, like the Flocker isolator for Mesos that I wrote. Docker does have a range of “plugins”, but they are very opinionated and usually too constraining.
Finally, the Mesos developers were considering adding a runC containerizer and quickly realised that they’d have to rewrite all the same functionality yet again. At this point there was a communal “liiightbuuulb”: they could reduce technical debt and introduce new features at the same time.
The unified containerizer aims to replace all containerizers (although the old ones are still available for use) with a single version whose details are implemented by plugins. Confusingly, development has steamrollered straight over the top of the Mesos containerizer, so the new unified containerizer is actually delivered as part of the Mesos containerizer.
The plugins come in three flavours: launchers, isolators and provisioners. The launcher is responsible for starting tasks; examples are the Linux launcher and the systemd launcher. The isolator is responsible for isolation over the task’s lifecycle; examples are the cgroups isolators for CPU, memory, etc., and the key new additions are a CNI networking isolator and a GPU isolator. Finally, the provisioner is responsible for providing binaries or images; Docker images are the obvious example here.
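To sketch how these plugins fit together, an agent can be started with the unified (Mesos) containerizer, a Docker image provider and a set of isolators selected via flags. This is illustrative only: the master address, work directory and the exact mix of isolators are assumptions, not a recommended configuration.

```shell
# Hypothetical agent invocation for the unified containerizer.
# --containerizers=mesos selects the unified (Mesos) containerizer,
# --image_providers=docker enables the Docker image provisioner, and
# --isolation lists the isolator plugins to load.
mesos-agent \
  --master=10.0.0.1:5050 \
  --containerizers=mesos \
  --image_providers=docker \
  --isolation=filesystem/linux,docker/runtime,cgroups/cpu,cgroups/mem \
  --work_dir=/var/lib/mesos
```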
Benefits of Unification
The primary benefit is the simple addition of new technologies to one of the three plugin stages. We can already see this in the rapid development and release of the CNI isolator, which instantly makes the Mesos network capabilities far more flexible than Docker’s networking plugins. The Mesos devs are asking the community to provide more.
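As an example of that flexibility, the CNI isolator is configured by pointing the agent at a directory of standard CNI network definitions. The paths, network name and subnet below are illustrative assumptions; the config file format follows the CNI specification rather than anything Mesos-specific.

```shell
# Hypothetical CNI setup: write a standard CNI network definition...
mkdir -p /etc/mesos/cni/net.d
cat > /etc/mesos/cni/net.d/mynet.conf <<'EOF'
{
  "name": "mynet",
  "type": "bridge",
  "bridge": "mesos-br0",
  "ipam": { "type": "host-local", "subnet": "192.168.0.0/16" }
}
EOF

# ...then enable the network/cni isolator and point the agent at it.
mesos-agent \
  --master=10.0.0.1:5050 \
  --containerizers=mesos \
  --isolation=network/cni \
  --network_cni_config_dir=/etc/mesos/cni/net.d \
  --network_cni_plugins_dir=/opt/cni/bin \
  --work_dir=/var/lib/mesos
```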
A further benefit is that the plugins are self-contained. The functionality required to run these technologies is packaged within the isolators; for example, you don’t need to install the Docker daemon to run Docker containers.
Finally, the decoupling and flexibility will increase the speed of new feature additions.
Problems with Unification
There are issues with image caching. Because the isolators are distinct, container images are re-downloaded every time you start the same container. This obviously hurts startup performance but, crucially, it also eats disk space: I had to raise my default VM disk sizes to 30 GB just to run a demo microservices application.
Nearly all of the expected Docker features are missing from the current implementation, which is to be expected at this stage; I’m sure they will be added shortly.
Currently, Marathon support is lagging behind. Tasks can only be started from the mesos-execute CLI, and even there the CLI is messy; it needs a complete rewrite.
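For reference, launching a Docker image through the unified containerizer from mesos-execute looks roughly like the following. The master address, task name and image are assumptions, and flag names may differ between Mesos releases.

```shell
# Hypothetical invocation: run a Docker image via the unified
# containerizer, with no Docker daemon installed on the agent.
mesos-execute \
  --master=10.0.0.1:5050 \
  --name=unified-test \
  --docker_image=alpine \
  --command="echo hello from the unified containerizer"
```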
Furthermore, there is confusion over the old and new containerizers. I understand that the old containerizers need to exist for compatibility reasons, but I would have preferred the new unified containerizer to be delivered as a new containerizer called “unified”. As it stands, new users have to choose between “old Mesos”, “Docker” and “new Mesos, maybe with Docker”, which is rather confusing. Mesos isn’t famed for its ease of use as it is; this makes it worse.
However, this was definitely a good idea. The benefits can already be seen in the CNI and GPU implementations. New features will be coming thick and fast, all of which will compete strongly with Kubernetes.