We run a handful of internal services written in C++, and compared to our web apps, deployment is anything but pretty. Between differing library versions and being tied to a single operating system for all installations, things can get hairy fast, especially when we include dev servers.
Recently I came across Docker and figured I’d see what all the hype was about. For those unfamiliar, Docker is like a really lightweight virtual machine that provides process and filesystem isolation without the overhead of spinning up a VM.
To test it out, I put together an echo server in Libevent that (arbitrarily) requires Libevent 2.1, which is newer than what either Ubuntu (2.0) or CentOS (1.4) ships, so we need to build Libevent from source. This isn’t a big deal, except that maintaining multiple versions of Libevent on the same server can be a pain. Not impossible (we do it now!), but not ideal. You can see the test app here on GitHub. Let’s get started…
First things first, let’s spin up an Ubuntu virtual machine in VirtualBox and install Docker. Their instructions for Ubuntu are pretty straightforward, so I won’t go into details on setup.
Once we have Docker installed we have to build a Docker image based on CentOS 6.4 with Libevent 2.1 ready to go. By doing this as an independent step from building our actual app, we can create an extensible Libevent Docker image without having to re-compile Libevent every time.
I created a `Dockerfile` with the following in it:
```dockerfile
FROM centos:6.4
MAINTAINER Nathan Wong <email@example.com>

# add the setup script to install libevent from source
ADD setup.sh /
RUN chmod +x setup.sh
RUN /setup.sh
```
All this does is add the file `setup.sh` to the Docker image and run it. `setup.sh` looks like this:
```shell
# build libevent
yum -y install gcc gcc-c++ make
wget http://downloads.sourceforge.net/project/levent/libevent/libevent-2.1/libevent-2.1.3-alpha.tar.gz
gtar -xzf libevent-2.1.3-alpha.tar.gz
cd libevent-2.1.3-alpha
./configure && make && make install
ln -s /usr/local/lib/libevent-2.1.so.3 /usr/lib64/libevent-2.1.so.3
```
This is exactly what you would run to get Libevent installed in a normal CentOS setup, so nothing’s out of the ordinary yet. (The final `ln -s` is there because `make install` drops the library into /usr/local/lib, which isn’t on CentOS’s default linker path; adding the path and running `ldconfig` would work too.) Where things look different is that we can now build the Docker image with this setup script:
```shell
docker build -rm -t nathan/libevent .
```
That will take a minute, as it runs through the installation. Once it’s done we can run Docker in interactive mode:
```shell
docker run -i -t nathan/libevent /bin/bash
```
This gives you a bash prompt inside the Docker image we just created. If this is your first time with Docker, give it a shot: write to a file, then exit and run it again. Your file’s gone! It’s kind of magical how quickly Docker spins up the image while providing such powerful isolation.
(This is outside the scope of what we’re doing here, but if you wanted to save a snapshot of the filesystem with that file you just added, you can take the untagged image found by running `docker images`, save it as a real image, and then boot back into that. Also, if you don’t want to do that, the commands found here will save you a ton of disk space.)
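For the curious, that snapshot flow can be done with `docker commit`. A sketch of the session (the container ID is a placeholder; look yours up first):

```shell
# find the exited container left over from the interactive session
docker ps -a

# commit its filesystem as a named image
docker commit <container_id> nathan/snapshot

# boot back into the snapshot -- the file you wrote is still there
docker run -i -t nathan/snapshot /bin/bash
```

We won’t need this for the echo server, since everything we care about is rebuilt from the `Dockerfile`s anyway.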
The next step is actually loading our app into the Docker image. We do this by creating a new Docker image based on the Libevent one, with its own `Dockerfile` that looks like this:
```dockerfile
FROM nathan/libevent
MAINTAINER Nathan Wong <firstname.lastname@example.org>

# push up the actual echo server's source as built from "make docker" in src.
ADD docker.tar run/docker.tar

# add the setup script to build our echo server ready to run
ADD setup.sh /
RUN chmod +x setup.sh
RUN /setup.sh

# and add an entry point that runs the echo server
ADD start.sh start.sh
ENTRYPOINT ["/bin/bash"]
CMD ["start.sh"]
```
This is similar to the Libevent Docker image, except that it now starts from the `nathan/libevent` image instead of `centos`. We’ve also added an entry point this time, which is what Docker will spawn when you run the image, instead of just dropping you into an interactive bash session. Our `start.sh` script just spawns off our echo server. This time our `setup.sh` looks like this:
```shell
# build our example app
cd run/
gtar -xf docker.tar
make
ls out
```
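One detail from the `Dockerfile` above worth calling out is the `ENTRYPOINT`/`CMD` pair. Docker concatenates the two, so booting the container effectively runs `/bin/bash start.sh`. Repeated here as a fragment for illustration:

```dockerfile
# ENTRYPOINT and CMD are concatenated, so starting the container runs:
#   /bin/bash start.sh
ENTRYPOINT ["/bin/bash"]
CMD ["start.sh"]
```

Only the `CMD` half is replaced by arguments you pass to `docker run`, so (hypothetically) running `docker run -i -t nathan/example other.sh` would execute `/bin/bash other.sh` while keeping the same entry point.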
And we can build this Docker image the same way we did before, with the additional step of bundling up our app first:
```shell
cd ../../src
make docker
mv docker.tar ../docker/example
cd ../docker/example
docker build -rm -t nathan/example .
```
`docker images` now shows that we have `nathan/example`. We can run the example Docker image by running:

```shell
docker run -p 10121:10121 -name example_runner nathan/example
docker rm example_runner
```
This runs the example image while forwarding port 10121 from the host machine into the container (the trailing `docker rm` just cleans up the named container once it exits). We can now `telnet localhost 10121` and watch our messages get echoed back!
That’s all there is to it. The full setup is available on GitHub here. I’m far from an expert with Docker, so there’s a lot more to it than what I went through here. Still, it’s neat to see how easy it is to spin up a few Docker images that provide efficient code isolation; in theory, a bug in your C++ code wouldn’t let an attacker get anything more than what’s available in the image (and any volumes you’ve shared with it), much like a virtual machine.
Unlike a VM, though, Docker images spin up quickly and feel more like running a local program. I’m sure there’s a catch, but I haven’t found it yet. Deploying these Docker images to dev servers, other local copies, production servers, etc. would be a lot easier than what we do now (repo deployment and automated build scripts). It’s too bad Docker itself is (by necessity) platform-dependent on recent Linux kernels with LXC, though.
While I know they still recommend against running Docker in production, is anyone using it for anything in a more serious setting?