Is DNS the new HTTP?
No, obviously not - but the title got you here.
Back in 2010 I wrote a blog post called HTTP self-service, which advocated embedding HTTP servers inside server-side software to make it easier to manage and maintain. The basic premise then was that HTTP was everywhere, and that everyone seemed to have their own lightweight embeddable HTTP server library. In addition, any self-respecting web framework ships with an embedded HTTP server to facilitate local development. Developers 'get' HTTP (pun intended).
Fast-forward four years and it seems like DNS is getting the same treatment. I guess it was inevitable that if you gave developers more control over infrastructure, they would look to software to solve their own problems, and that appears to be what is happening, specifically in the field of service discovery.
Back in the day, when SOAP ruled, Service Discovery meant finding the URL of a remote service, and XML was the answer (to everything). We had 'standards' such as DISCO and UDDI to facilitate service discovery, and lots of complex XML schemas to describe how services interacted. The nirvana of dynamic service discovery (where you could even broker services and somehow choose between competing product offerings on-the-fly) never materialised, at least outside of controlled corporate environments, and it turns out that just looking up a service's API endpoint is not really that big of an issue.
However, in the new world of SaaS, PaaS, IaaS, XaaS (where 'X' now stands for any character [A-Z]), and more specifically of roll-your-own infrastructure, we have a new problem - not finding the URL, but binding the IP address. If you've spent any time playing with Docker, you'll have realised that dynamic IP allocation is a real problem. In a typical web application environment you'll have your web application and any number of external containers / services - a database, a cache, a queue, a search tool, an SMTP server, and so on. If these services have no fixed IP, how on earth are you supposed to connect up the containers?
The Docker way of linking containers is to have the Docker daemon inject information about the remote container you want to link to into the local container's environment variables. So, for instance, your web app container may boot up with DB_PORT, DB_PORT_6379_TCP_ADDR and so on. This works pretty well as far as it goes - but it means adding a step to your configuration to take these variables into account. And crucially, the link is not 'live' - if you subsequently kill the database container and recreate it (say, to restore the data to a known baseline for testing), the IP may have changed, in which case your linking is now incorrect.
If only there were a way to connect names (let's call them "domain names") to IP addresses, such that clients didn't need to know the IP address?
Of course this system exists, it works, it's battle-hardened, and most importantly it requires no additional software - DNS is baked into everything - if your device can connect to the internet, it can handle DNS.
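And because DNS is baked into everything, resolving a name needs nothing beyond the standard library. The name "localhost" is used below only so the example runs anywhere; in a service-discovery setup it would be a container or service name instead.

```go
package main

import (
	"fmt"
	"net"
)

// resolve uses the system's DNS machinery - no extra libraries,
// no client SDK, just the standard library.
func resolve(name string) ([]string, error) {
	return net.LookupHost(name)
}

func main() {
	addrs, err := resolve("localhost")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("localhost resolves to", addrs)
}
```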
Until recently DNS was seen as something on the infrastructure side of the fence - I don't know of any developer who could tell me how DNS works (the protocol), whereas I would assume (/hope) that every developer I know could recite large chunks of the HTTP specification. DNS was something that was "out there". It just works.
In recent months (and I may be coming late to this party), things have changed, partly, I think, spurred on by Docker's momentum, AWS's ubiquity, and the fact that infrastructure has become "programmable". The idea of having an infrastructure API would have seemed unimaginable ten years ago.
Getting to the punchline: we now have a number of projects seeking to bring DNS to the masses. I first spotted it with Skydock, a project that brings DNS capability to Docker deployments, which in turn relies on SkyDNS. This provides a lightweight local DNS service that integrates with Docker to give consistent, deterministic names to containers that have dynamic IPs.
And then just a few days ago we had the launch of Consul - a much more complete product that provides a similar DNS service (as well as lots of other good stuff).
Both of these products 'speak' DNS, in the same way that previous products exposed HTTP interfaces - in fact both offer both HTTP and DNS, but it seems as if DNS is something that we, as developers, are finally becoming more comfortable with.
(FWIW, the subject of DNS as a useful 'tool' is something that Dave Winer brought up in 2011 (Why DNS needs an API), proving yet again that he's ahead of the curve.)
UPDATE: something I forgot to mention in the original post - both Consul and Skydock / SkyDNS are written in Go. As is Docker. And Heroku's new client, hk. There's a theme.