Using Docker for Self-Contained C++ Deployment


We run a handful of internal services written in C++, and compared to our web apps, deployment is anything but pretty. Between differing library versions and being tied to a single operating system for all installations, things can get hairy fast, especially when we include dev servers.

Recently I came across Docker and figured I’d see what all the hype was about. For those unfamiliar, Docker is like a really lightweight virtual machine: it provides process and filesystem isolation without the overhead of spinning up a full VM.

To test it out, I put together an echo server in Libevent that (arbitrarily) requires Libevent 2.1, which is newer than what either Ubuntu (2.0) or CentOS (1.4) ships. This means we need to build Libevent from source. That isn’t a big deal, except that maintaining multiple versions of Libevent on the same server can be a pain. Not impossible (we do it now!), but not ideal. You can see the test app here on GitHub. Let’s get started…
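The actual test app is in that GitHub repo; purely for flavor, here’s a minimal sketch of what an echo server looks like against the Libevent 2.x bufferevent API. The structure below is my own illustration, not the repo’s code; only the port number (10121) is taken from this post.

```cpp
// echo.cpp -- minimal Libevent 2.x echo server sketch (illustrative, not the repo's code)
#include <event2/listener.h>
#include <event2/bufferevent.h>
#include <event2/buffer.h>
#include <arpa/inet.h>
#include <string.h>

static void read_cb(struct bufferevent *bev, void *ctx) {
    // Move whatever arrived on the input buffer straight to the output buffer.
    evbuffer_add_buffer(bufferevent_get_output(bev), bufferevent_get_input(bev));
}

static void event_cb(struct bufferevent *bev, short events, void *ctx) {
    // Tear the connection down on error or EOF.
    if (events & (BEV_EVENT_ERROR | BEV_EVENT_EOF))
        bufferevent_free(bev);
}

static void accept_cb(struct evconnlistener *listener, evutil_socket_t fd,
                      struct sockaddr *addr, int socklen, void *ctx) {
    struct event_base *base = evconnlistener_get_base(listener);
    struct bufferevent *bev =
        bufferevent_socket_new(base, fd, BEV_OPT_CLOSE_ON_FREE);
    bufferevent_setcb(bev, read_cb, NULL, event_cb, NULL);
    bufferevent_enable(bev, EV_READ | EV_WRITE);
}

int main() {
    struct event_base *base = event_base_new();

    struct sockaddr_in sin;
    memset(&sin, 0, sizeof(sin));
    sin.sin_family = AF_INET;          // listen on all interfaces
    sin.sin_port = htons(10121);       // the port this post forwards later

    struct evconnlistener *listener = evconnlistener_new_bind(
        base, accept_cb, NULL,
        LEV_OPT_REUSEABLE | LEV_OPT_CLOSE_ON_FREE, -1,
        (struct sockaddr *)&sin, sizeof(sin));
    if (!listener)
        return 1;

    event_base_dispatch(base);
    return 0;
}
```

Against the from-source install this post builds, something like `g++ echo.cpp -levent -o example` should link it, though the exact flags depend on where Libevent landed.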

First things first, let’s spin up an Ubuntu virtual machine in VirtualBox and install Docker. Their instructions for Ubuntu are pretty straightforward, so I won’t go into details on setup.

Once we have Docker installed, we have to build a Docker image based on CentOS 6.4 with Libevent 2.1 ready to go. By doing this as a step separate from building our actual app, we can create an extensible Libevent Docker image without having to re-compile Libevent every time.

I created a Dockerfile with the following in it:

FROM centos:6.4
MAINTAINER Nathan Wong <nathan@cdev.ca>

# add the setup script to install libevent from source
ADD setup.sh /
RUN chmod +x setup.sh
RUN /setup.sh

All this does is add the file setup.sh to the Docker image and run it. setup.sh looks like this:

#!/bin/bash
# build libevent from source
# (the stock centos image may not ship wget or tar, so grab them too)
yum -y install gcc gcc-c++ make wget tar

wget http://downloads.sourceforge.net/project/levent/libevent/libevent-2.1/libevent-2.1.3-alpha.tar.gz
gtar -xzf libevent-2.1.3-alpha.tar.gz
cd libevent-2.1.3-alpha
./configure && make && make install
ln -s /usr/local/lib/libevent-2.1.so.3 /usr/lib64/libevent-2.1.so.3

This is exactly what you would run to get Libevent installed in a normal CentOS setup, so nothing’s out of the ordinary yet. Where things look different is that we can now build the Docker image with this setup script:

docker build --rm -t nathan/libevent .

That will take a minute, as it runs through the installation. Once it’s done we can run Docker in interactive mode:

docker run -i -t nathan/libevent /bin/bash

This gives you a bash prompt inside a container running the image we just created. If this is your first time with Docker, give it a shot: write to a file, then exit and run it again. Your file’s gone! It’s kind of magical how quickly Docker spins up the container while providing such powerful isolation.

(This is outside the scope of what we’re doing here, but if you wanted to keep a snapshot of the filesystem with that file you just added, you could find the leftover untagged image by running docker images, commit it as a real image, and then boot back into that. Also, if you don’t want to do that, the commands found here will save you a ton of disk space.)
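For the curious, the snapshot route looks roughly like this. The container ID and snapshot name below are placeholders of my own, not anything from the repo:

```shell
# Hedged sketch: persist a stopped container's filesystem as a named image.
docker ps -a                                   # find the ID of the container you just exited
docker commit <container-id> nathan/libevent-snapshot
docker run -i -t nathan/libevent-snapshot /bin/bash   # the file you wrote is back
```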

The next step is actually loading our app into the Docker image. We do this by creating a new Docker image based on the Libevent one, with its own Dockerfile that looks like this:

FROM nathan/libevent
MAINTAINER Nathan Wong <nathan@cdev.ca>

# push up the actual echo server's source as built from "make docker" in src.
ADD docker.tar run/docker.tar

# add the setup script to build our echo server ready to run
ADD setup.sh /
RUN chmod +x setup.sh
RUN /setup.sh

# and add an entry point that runs the echo server
ADD start.sh start.sh

ENTRYPOINT ["/bin/bash"]
CMD ["start.sh"]

This is similar to the Libevent Dockerfile, except that it starts from the nathan/libevent image instead of centos. We’ve also added an entry point this time, which is what Docker will spawn when you run the image instead of dropping you into an interactive bash session. Our start.sh script just spawns off our ./example app.
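The real start.sh is in the repo; a plausible sketch, assuming setup.sh unpacked and built the app under /run, would be something like:

```shell
#!/bin/bash
# Hypothetical sketch of start.sh -- paths here are assumptions, not the repo's actual script.
cd /run
exec ./out/example   # replace the shell with the echo server process
```

Using exec here means the echo server becomes the container’s main process, so stopping the container stops the server cleanly.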

This time our setup.sh looks like this:

#!/bin/bash
# build our example app

cd run/
gtar -xf docker.tar
make

ls out

And we can build this Docker image the same way we did before, with the additional step of bundling up our app first:

cd ../../src
make docker
mv docker.tar ../docker/example
cd ../docker/example

docker build --rm -t nathan/example .

Running docker images now shows that we have nathan/libevent and nathan/example. We can run the example Docker image by running:

docker run -p 10121:10121 --name example_runner nathan/example
docker rm example_runner

This runs the example image while forwarding port 10121 from the host machine into the container. We can telnet localhost 10121 now and watch our messages get echoed back!

That’s all there is to it. The full setup is available on GitHub here. I’m far from an expert with Docker, so there’s a lot more to it than what I went through here. Still, it’s neat to see how easy it is to spin up a few Docker images that provide efficient code isolation; in theory, a bug in your C++ code wouldn’t let an attacker get anything more than what’s available in the image (and any volumes you’ve shared with it), much like a virtual machine.

Unlike a VM, though, Docker containers spin up quickly and feel more like running a local program. I’m sure there’s a catch, but I haven’t found it yet. Deploying these Docker images to dev servers, other local copies, production servers, etc. would be a lot easier than what we do now (repo deployment and automated build scripts). It’s too bad Docker itself is (by necessity) platform-dependent on recent Linux kernels with LXC, though.

While I know they recommend against running Docker in production yet, is anyone running Docker for anything in a more serious setting?

Perceived vs Intrinsic Value


The goal of building a product is to create value. That much is obvious. The hard part is figuring out what your customers find valuable. I find it helpful to taxonomize the features that we’re building into one of these three categories when laying out a product roadmap:

  1. Perceived Value (traditionally “the hook”). These are features that get your customers excited the moment they hear about them, resonating in a way that makes people want to use your product.
  2. Intrinsic Value. These are features that provide real, measurable value – but not in a way customers necessarily know they need yet.
  3. Necessities. These are features that get you back to zero; without these, your product is unusable despite the other value. These are, unfortunately, almost always going to be “me-too” features.

Read More…

Two Years of Heavy Donut Gambling

Donuts!
Image courtesy of Roger Byrne

While a certain other digital currency is all the rage these days, at BuySellAds donuts reign supreme.

This week our beloved Donut Bot turned two, and for his birthday I gave him a new way to take our donuts: you can now play blackjack by typing !blackjack 25 and !deal, indicating !hitme, !stand, or !double once the cards are dealt.

This joins the growing collection of commands Donut Bot supports, including !give 25 Nathan to give donuts to someone, !roulette 25 red and !spin to gamble them, !status lunch time and !team to manage statuses, !leaderboard and !donuts to see how many donuts you have, and more.

Read More…

Startup Engineers: Be Boring

Running a startup sometimes feels like living in an earthquake zone. Around every corner there are surprises, most of which are unavoidable. But as an engineer building a product for a startup your job is to lay a foundation that can withstand the ensuing turmoil.

One of the ways you can achieve that is to be boring. It’s not sexy or exciting, but using proven technologies with which you’re familiar will allow you to be the rock in your org chart, a semblance of sanity and consistency upon which the business is built. Your coding chops can then be saved for building features at lightning speed, not experimenting with unproven technologies.

Read More…

Make the Type System Do the Work

I wrote this post February 2012 and somehow never hit publish. So here it goes, two years later, to kick off February 2014.

Declaring types and being restricted by the type system is often cited as a negative aspect of C++. I think this is an unfair assessment: a type system can make a programmer’s life considerably easier if it’s embraced instead of fought, as we’re seeing with the rise in popularity of Haskell. But C++, despite all its warts, has a pretty formidable type system of its own.

Read More…
