Docker at YunoJuno revisited
Episode IV: A New Hope
The easiest way to solve a problem is not to have it in the first place
Tao of Hugo
I've written a lot in the past about trying to get Docker to work in an unstable developer environment: tl;dr the dynamic allocation of IP addresses to containers makes it bloody hard to manage if you need to start / stop containers regularly. There are plenty of solutions available, but they are better suited to managing large, stable (even immutable) infrastructure.
It turns out that the solution to running a Docker-based developer environment was staring me in the face the whole time, and involves ignoring the IP address issue completely. By using the --publish option you can expose the internal container ports on the host machine, thereby allowing everything to run on localhost - which is what you need when developing.
This is best illustrated by an example. The following shows how to download and run the official ElasticSearch container and make it available on http://localhost:9200.
```
# check current instance of ES running 'natively' on VM
$ curl localhost:9200
{
  "ok" : true,
  "status" : 200,
  "name" : "Basilisk",
  "version" : {
    "number" : "0.90.7",
    "build_hash" : "36897d07dadcb70886db7f149e645ed3d44eb5f2",
    "build_timestamp" : "2013-11-13T12:06:54Z",
    "build_snapshot" : false,
    "lucene_version" : "4.5.1"
  },
  "tagline" : "You Know, for Search"
}

# stop local instance
$ sudo service elasticsearch stop
 * Stopping ElasticSearch Server

# confirm it's no longer running
$ curl localhost:9200
curl: (7) Failed to connect to localhost port 9200: Connection refused

# pull the 'official' ES image (NB `docker run` will pull by default)
$ docker pull elasticsearch
latest: Pulling from elasticsearch
3cb35ae859e7: Pull complete
41b730702607: Pull complete
...
elasticsearch:latest: The image you are pulling has been verified.
Digest: sha256:888a29e3a6540515257fa9de97e504532da58509cc9cbd80fb01d20916328905
Status: Downloaded newer image for elasticsearch:latest

# run a container based on the new image, and 'publish' the
# container's port 9200 on the host port 9200
# (`-d` runs it as a daemon)
$ docker run -d --publish 9200:9200 elasticsearch
6bc87bbf34b9ce4b97ace44c1c8a81c93b20af7147508b45f5af6492cd1f3f9e

# confirm that the new container is running
$ curl localhost:9200
{
  "status" : 200,
  "name" : "Terror",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "1.4.5",
    "build_hash" : "2aaf797f2a571dcb779a3b61180afe8390ab61f9",
    "build_timestamp" : "2015-04-27T08:06:06Z",
    "build_snapshot" : false,
    "lucene_version" : "4.10.4"
  },
  "tagline" : "You Know, for Search"
}
```
Repeat this same process for the external services you require - in our case this means Redis, Memcached and Postgres.
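As a sketch of that step (assuming the official Docker Hub images for each service, and each service's conventional default port - Redis 6379, Memcached 11211, Postgres 5432), the equivalent commands are:

```
# publish each service's default port on the matching host port
$ docker run -d --publish 6379:6379 redis
$ docker run -d --publish 11211:11211 memcached
$ docker run -d --publish 5432:5432 postgres
```

Each service then answers on its usual localhost port, so existing local configuration carries on working unchanged.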
Add this to our new frontend build system, and the only thing remaining is our Django app itself.
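To make the end state concrete, here's a hedged sketch of how the app then runs against the published services (the environment variable names and database name are illustrative only, not our actual settings):

```
# all backing services now answer on localhost, so no
# container-aware settings are needed
# (variable and database names are illustrative)
$ export DATABASE_URL=postgres://localhost:5432/app_db
$ export CACHE_URL=memcached://localhost:11211
$ export REDIS_URL=redis://localhost:6379
$ python manage.py runserver
```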