Trifecta, part 2: Docker services
Create your own Heroku-like addons
(This is the third article in a series of four, and follows on from Vagrant as the base OS. Code from this series of articles will be made available on GitHub as yunojuno/trifecta.)
Following on from the previous article, we now have a Vagrant VM, running Ubuntu 14.04, with Python 2.7.9 installed, along with Virtualenv and Docker.
This article is about the Docker setup, and how we use Docker Compose (or Fig, as it was) to orchestrate the containers that we need to support our application.
Quick recap - our application, for the purposes of this series, is a Django web app, which uses the following external services: PostgreSQL, Redis, Memcached and Elasticsearch. In addition, we use Nginx as a reverse proxy, to serve our static content. This is exactly how our production environments are set up on Heroku, and we want our developers' environments to get as close to this as possible.
We are using Heroku addons to provide these external services in production, and we connect to them through a combination of URL and credentials:
$ heroku config
=== polar-woodland-2337 Config Vars
BONSAI_URL: https://username:password@example.bonsai.com
DATABASE_URL: postgres://username:password@example.com:5432/myapp
MEMCACHIER_PASSWORD: password
MEMCACHIER_SERVERS: example.memcachier.com:11211
MEMCACHIER_USERNAME: username
REDISTOGO_URL: redis://redistogo:password@hammerjaw.redistogo.com:10660/
The crucial aspect of this is that we don't know the implementation details at the other end of the URL - so long as we can connect, it's not an issue. (Specific details of version numbers are usually available through the addon providers' websites.) We want to replicate this in development, and the easiest way to provide this is with Docker, and specifically Docker Compose.
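Inside the app, these config vars can be unpacked with nothing more than the standard library. A minimal sketch of the idea (the helper name and the default URL are illustrative, not taken from our codebase):

```python
import os

try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse  # Python 2.7, as on our VM


def parse_service_url(env_var, default):
    """Return the parts of a Heroku-style service URL from the environment."""
    url = urlparse(os.environ.get(env_var, default))
    return {
        "host": url.hostname,
        "port": url.port,
        "username": url.username,
        "password": url.password,
        "path": url.path.lstrip("/"),
    }


# On Heroku the env var is set by the addon; in development it is
# unset, so we fall back to a localhost URL.
redis = parse_service_url("REDISTOGO_URL", "redis://localhost:6379/0")
```

Because the app only ever sees a URL, it genuinely doesn't care whether the service behind it is a Heroku addon or a local Docker container.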
I've struggled with Docker, a lot, and have written several articles in the past about how it simply doesn't work for us (q.v. Docker at YunoJuno), but I am about to eat my hat on this one. The key (for us) was in ignoring the complexities of name resolution and dynamic IPs, and forcing everything to look like it was running on the host, and therefore addressable as localhost (q.v. Docker at YunoJuno revisited).
Setting up containers can be a PITA, but using docker-compose, you can configure a complete environment in one file:
postgres:
  image: postgres:9.3
  ports:
    - "5432:5432"
  volumes:
    - /vagrant/backups:/backups
elasticsearch:
  image: elasticsearch:1.4
  ports:
    - "9200:9200"
redis:
  image: redis:2.8
  ports:
    - "6379:6379"
memcached:
  image: memcached:1.4
  ports:
    - "11211:11211"
nginx:
  image: nginx
  ports:
    - "80:80"
    - "443:443"
  volumes:
    - /vagrant/etc/nginx/sites:/etc/nginx/conf.d
    - /vagrant/etc/nginx/certs:/etc/nginx/certs
    - /app/static:/static
  # HACK: allows the container to share the host networking stack,
  # so we can reference the app as localhost.
  net: host
As you can see, we are running the base ("official") Docker images - we could specify a Dockerfile here if we wanted to customise the service containers, but for our purposes we can get away with the vanilla defaults. You'll also notice that we are specifying the image version - postgres:9.3 - which means we have total control over our environment. If you want everyone to upgrade to 9.4, just update this one file, and the next time anyone brings up their environment, they'll get the new version.
If you now run this file through Compose, using docker-compose up, all of the containers will be built (if the images don't exist) or recreated (if they do) - you can run this as often as you like:
# Use the -d flag to run containers in the background
vagrant@vagrant-ubuntu-trusty-64:/vagrant/docker$ docker-compose up -d
Recreating docker_memcached_1...
Recreating docker_redis_1...
Recreating docker_elasticsearch_1...
Recreating docker_nginx_1...
Recreating docker_postgres_1...
The output is a set of running containers, with default ports bound and available to the host as localhost:
vagrant@vagrant-ubuntu-trusty-64:/vagrant/docker$ docker-compose ps
Name State Ports
---------------------------------------------------------------
docker_elasticsearch_1 Up 0.0.0.0:9200->9200/tcp, 9300/tcp
docker_memcached_1 Up 0.0.0.0:11211->11211/tcp
docker_nginx_1 Up
docker_postgres_1 Up 0.0.0.0:5432->5432/tcp
docker_redis_1 Up 0.0.0.0:6379->6379/tcp
Nginx doesn't have any ports in this list as it's set up to use --net=host. Whilst the other containers provide services that must be available from the host as localhost (i.e. host -> container), Nginx needs to access our application, which means the container must be able to contact the host (i.e. container -> host). This is the subject of many a blog post, and there is a detailed description of How Docker networks a container in the documentation, but for our specific use case (a developer's local machine, unconnected to the outside world) we can go "route 1", using the --net=host option. To quote from the docs:
Tells Docker to skip placing the container inside of a separate network stack. In essence, this choice tells Docker to not containerize the container's networking!
This means that the container sees the host as localhost - which is what we want in this case. It allows us to use the following configuration in the Nginx conf file (assuming we run our application development server on port 8000 on the host machine):
upstream app {
server 127.0.0.1:8000 fail_timeout=0;
}
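For context, here is a sketch of what the rest of the mounted site config might look like - the file lives under /vagrant/etc/nginx/sites on the host, per the volumes above, but the exact directives here are an assumption, not our actual config:

```nginx
server {
    listen 80;
    server_name localhost;

    # static files are mounted into the container at /static
    location /static/ {
        alias /static/;
    }

    # everything else is forwarded to the Django dev server on the host
    location / {
        proxy_pass http://app;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

The `app` upstream is the one defined above; because of --net=host, 127.0.0.1 inside the container really is the host.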
The only other aspect of the docker-compose configuration worth calling out is the shared volumes - which we use to pass configuration details for Nginx into the container, including the site configuration and SSL certificates.
What next?
We now have a host VM, with a specified Python runtime (2.7.9), and Virtualenv and Docker installed. We have postgres, redis, memcached and elasticsearch running as local services, available on the default ports. We have Nginx set up to receive incoming requests on ports 80/443, acting as a reverse proxy to serve static content from a directory mounted into the container, and to forward application requests on to our application. All that's missing now is the application itself, which is the subject of the third and final part of our series.
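The Django settings can then key off the same config vars in both environments. A minimal sketch (the database name, user and backend strings are illustrative assumptions, not from our actual settings file):

```python
import os

try:
    from urllib.parse import urlparse  # Python 3
except ImportError:
    from urlparse import urlparse  # Python 2.7, as on our VM

# On Heroku, DATABASE_URL is set by the addon; locally we fall
# back to the Docker container published on localhost:5432.
_db = urlparse(
    os.environ.get("DATABASE_URL", "postgres://postgres@localhost:5432/myapp")
)

DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql_psycopg2",
        "NAME": _db.path.lstrip("/"),
        "USER": _db.username or "",
        "PASSWORD": _db.password or "",
        "HOST": _db.hostname,
        "PORT": _db.port or 5432,
    }
}

CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.MemcachedCache",
        # MEMCACHIER_SERVERS in production, the local container otherwise
        "LOCATION": os.environ.get("MEMCACHIER_SERVERS", "localhost:11211"),
    }
}
```

The same one-file-per-environment-difference pattern applies to Redis and Elasticsearch: every service resolves to localhost in development, and to the addon URL in production.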
========================================================
Ubuntu 14.04 ........... localhost .......... Vagrant
|
+- nginx ............... localhost:80/443 ... Docker
|
+- postgres ............ localhost:5432 ..... Docker
|
+- redis ............... localhost:6379 ..... Docker
|
+- memcached ........... localhost:11211 .... Docker
|
+- elasticsearch ....... localhost:9200 ..... Docker
(? application ......... localhost:8000 ..... Virtualenv)
========================================================