How docker replaced my virtual machines and chroots

In one of my earlier posts I showed you why and how I use chroots for software development. In situations where the isolation of a chroot is not enough, I sometimes also used virtual machines. Both solutions have their pros and cons: a chroot is quite easy to set up and access, has no virtualization overhead, and gives you native host performance. Virtualization offers very good isolation and some nice features, e.g. creating snapshots, but it’s not very convenient if you want to use it “transparently”, and the virtual machine’s performance suffers from the virtualization overhead. Especially the limited 3D performance makes it impossible for me to use a virtual machine in some cases.

For my use-cases, docker provides the benefits of both chroots and virtual machines. This post assumes you know what docker is and how it basically works.

What I want to do - some typical use-cases

Before I show you how I use docker, here are a few examples of what I want to do. These are two typical use-cases:


  • On the host I’m running Arch Linux 64-bit writing some code for some super exciting software.
  • I compile/debug and test it on my host.
  • I want to create packages of the same software for Ubuntu 12.04 32-bit and 64-bit (or any other Linux system)
  • I have to compile/test and maybe debug it on these systems
  • I need hardware accelerated 3D support for running the software I’m working on


  • I want to use a specific piece of software (maybe even in a specific version) that is not available for my host system’s Linux distribution.
  • E.g. I want to use ROS. There are packages for Ubuntu 14.04, but not for Arch Linux. It consists of dozens of individual packages and I don’t want to fiddle around with building it from source.

The old-school solution to carry out these tasks would be to natively boot into the required system and do the work there. Quite inconvenient, not to say totally impracticable if multiple different systems are required. Chroots and virtual machines are a more practical solution - I used them often, and a combination of both worked well. Read on if you want to know why and how I now use docker to replace both my virtual machines and my chroots.

Docker can do it

Some time ago I started to play around with docker. At that time it was not ready to replace either my chroots or my VMs in a convenient way: spawning additional processes inside a running container was not possible (at least not conveniently). Not to blame docker - my use-cases just don’t fit its main intent. Docker is a system for the management and deployment of application containers, not operating system containers. I don’t want to go into detail about what docker is and what it is not, but the use-cases described above are not what docker is typically used for.

Nevertheless, since version 1.3 docker is flexible enough to support what I want to do in a quite convenient way: it introduced docker exec. This allows spawning additional processes in an already running container. E.g. running docker exec -it ubuntu1204 /bin/bash spawns a bash in the running container ubuntu1204. This is quite handy if I want to run several interacting applications inside a single container on demand, e.g. a client + server and a debugger or whatsoever.
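As a sketch of how several processes can share one container this way (the helper function, its name, and the example server command are my own inventions, not part of docker):

```shell
# Run a command in the ubuntu1204 container from the example above;
# "bg" detaches it (-d) for long-running processes, anything else
# gives an interactive session (-it).
in_container() {
    mode="$1"; shift
    if [ "$mode" = "bg" ]; then
        docker exec -d ubuntu1204 "$@"
    else
        docker exec -it ubuntu1204 "$@"
    fi
}
# in_container bg ./my_server --port 8080   # hypothetical server, detached
# in_container fg /bin/bash                 # interactive shell alongside it
```

All of these processes end up in the same container, so they see the same filesystem and network namespace - which is exactly what a client/server/debugger combination needs.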

The nice thing is that docker makes it so easy to create complete development or runtime environments for whatever Linux distribution within a few minutes. Also, docker features like taking snapshots and recreating the same environment on different machines can be quite handy.
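The snapshot workflow could, for example, look like this (docker commit and docker save are real subcommands; the helper function, container name, and tag are just my placeholders):

```shell
# Sketch: snapshot a running container and export it for another machine.
snapshot_env() {
    name="$1"; tag="$2"
    docker commit "$name" "$name:$tag"           # freeze the container's state as an image
    docker save -o "$name-$tag.tar" "$name:$tag" # write that image to a tarball
}
# snapshot_env ub1404-dev before-upgrade
# ...then on the other machine: docker load -i ub1404-dev-before-upgrade.tar
```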

How I use docker

The Dockerfile

Now I’ll show you how I set up my dockerized Ubuntu development and runtime environment. Below is the Dockerfile for my Ubuntu 14.04 based image.

Note: The Dockerfile installs the proprietary NVIDIA driver for accelerated 3D support - you may have to modify the “install graphics driver” section for your needs. For more information on how to get hardware accelerated 3D support with docker, read one of my older posts. Also, the user and uid in the “create user/setup environment” section must be adjusted if you want to use this Dockerfile on your host with your user.

FROM ubuntu:14.04
MAINTAINER github/gklingler

# ===== install/setup prerequisites =====
RUN apt-get update

# Use the "noninteractive" debconf frontend
ENV DEBIAN_FRONTEND noninteractive

# ===== create user/setup environment =====
# Replace 1000 with your user/group id
RUN export uid=1000 gid=1000 && \
    mkdir -p /home/gernot && \
    echo "gernot:x:${uid}:${gid}:gernot,,,:/home/gernot:/bin/bash" >> /etc/passwd && \
    echo "gernot:x:${uid}:" >> /etc/group && \
    echo "gernot ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/gernot && \
    chmod 0440 /etc/sudoers.d/gernot && \
    chown ${uid}:${gid} -R /home/gernot

# ===== Install additional packages =====
RUN apt-get -y install bash-completion git build-essential vim

# ===== install graphics driver&co for accelerated 3d support (optional) =====
# install nvidia driver
RUN apt-get install -y binutils mesa-utils
ADD /tmp/
RUN sh /tmp/ -a -N --ui=none --no-kernel-module
RUN rm /tmp/
# some QT-Apps/Gazebo don't show controls without this

ENV HOME /home/gernot
ENV USER gernot
USER gernot

Building the image

docker build -t ub1404-dev .

Creating a container

docker create --privileged \
    -e "DISPLAY" \
    -v="/tmp/.X11-unix:/tmp/.X11-unix:rw" \
    -v="/home/gernot:/home/gernot:rw" \
    -u gernot -w /home/gernot \
    -h ub1404-dev --name="ub1404-dev" \
    -i -t ub1404-dev /bin/bash

If you want hardware accelerated 3D (assuming you have the right driver installed) the following is essential: --privileged -e "DISPLAY" -v="/tmp/.X11-unix:/tmp/.X11-unix:rw".
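A quick way to verify that acceleration actually works inside the container is glxinfo, which comes with the mesa-utils package the Dockerfile installs (the helper function is my own addition):

```shell
# Sketch: report whether direct rendering is available inside a container.
# Note: no -t here, so the output can be piped through grep.
check_3d() {
    docker exec "$1" glxinfo | grep "direct rendering"
}
# check_3d ub1404-dev   # should print "direct rendering: Yes"
```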

To mount the complete home directory into the docker environment: -v="/home/gernot:/home/gernot:rw".

To avoid file system permission problems, we have to make sure that we enter the docker environment with the same user/UID as on the host system - this is done with -u gernot (the “create user/setup environment” section in the Dockerfile must match).

The working directory should be the home directory: -w /home/gernot

-h ub1404-dev specifies the hostname and --name="ub1404-dev" the name of the container that is going to be created. We want to keep STDIN open and have a pseudo-TTY allocated, and the container should be based on the image ub1404-dev: -i -t ub1404-dev.

The process that is spawned by default is /bin/bash, but this doesn’t actually matter because I “enter” the container with docker exec, and the process to spawn must be specified there anyway (see below).

Starting the container

I actually “autostart” my containers when starting my desktop environment.

xhost si:localuser:$USER   # make sure we can connect to the X server and start GUI applications
docker start ub1404-dev
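My “autostart” is essentially these two commands wrapped in a small helper that my desktop session runs on login (the function name and the idea of starting several containers at once are my additions):

```shell
# Sketch: grant the local user X access once, then start a list of
# development containers; call this from your desktop session's startup.
autostart_containers() {
    xhost si:localuser:"$USER" >/dev/null
    for c in "$@"; do
        docker start "$c"
    done
}
# autostart_containers ub1404-dev ub1204-dev
```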

Now I have my running “general purpose” ubuntu 14.04 development/runtime environment container that I can treat nearly like a virtual machine (at least for my use-cases).

Using the container

The previously started container can now be “entered” with:

docker exec -ti ub1404-dev /bin/bash

You can even execute any program inside the container with docker exec, e.g. if you have Firefox installed in the container, you can easily run it by simply executing:

docker exec -ti ub1404-dev firefox

Further tweaking

For convenience I’ve defined aliases for entering a docker container (i.e. spawning a bash prompt) and executing arbitrary commands:

alias d_enter="docker exec -ti ub1404-dev /bin/bash"
alias d_x="docker exec ub1404-dev"

With d_enter I get a bash prompt in my ub1404-dev container. With d_x [COMMAND] I can directly execute any command inside the docker container.
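If you maintain several such containers, the aliases can be generalized into shell functions that take the container name as their first argument (this generalization is my own and changes the calling convention of the aliases above):

```shell
# d_enter CONTAINER          -> bash prompt inside CONTAINER
# d_x CONTAINER COMMAND...   -> run COMMAND inside CONTAINER
d_enter() { docker exec -ti "$1" /bin/bash; }
d_x()     { c="$1"; shift; docker exec "$c" "$@"; }
```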


Managing my development/runtime environments with docker provides all the benefits of its “ecosystem”, which allows me to get a working development environment for “any” Linux system within a few minutes. Features like taking snapshots of containers and recreating the same environment on different machines can be quite handy.

“Dockerized environments” are as easy to use as chroots and additionally offer some properties of virtual machines (own hostname, own IP address, etc.). Docker provides a powerful and uniform system for managing my environments and offers more flexibility than my earlier solution with chroots and virtual machines. For these reasons, docker has replaced most of my virtual Linux machines and chroots.

If you like this post, have any questions or any kind of feedback, please let me know and leave a comment below.
