Exploring the Use of DNS in Multi-Cloud Environments

The multi-cloud approach to modern IT means workloads (whether virtual machines, containers, or minimal applications) can be located on any one of several independent infrastructures, whether running simultaneously for fault tolerance or being moved between them. In a modern IT deployment, workloads can (and will) regularly move between cloud and on-site deployments, whether for price, maintenance, latency, or many other reasons. Yet access to these workloads (and, more critically, authentication, monitoring, and management visibility) must be maintained throughout.

All of the above depends on DNS, but DNS starts to get rather complicated when multiple infrastructures are involved, especially when users are spread across many locations and may be accessing these services from either a public or a corporate address. How can access to these workloads be maintained transparently to users, while at the same time allowing for in-depth monitoring, authentication, and policy enforcement?

The Key Problem: Transparency in Resolution

DNS is an old protocol. The first DNS specifications, RFC 882 and RFC 883, were published in 1983 (and have long since been superseded by RFC 1034 and RFC 1035). Thus, DNS isn't truly built for a flexible, multi-cloud deployment, where two connections to the same service separated by as little as an hour might end up talking to completely different IP addresses. This was never envisioned in the DNS protocol at all.

(This would be like expecting telnet to have encryption and tunneling. No such features were envisioned when the protocol was built, and so while some applications may have added these features later, they’re subject to the vagaries of implementation-specific differences.)

When a user requests api3.example.com, for instance, how should that be handled? Should there be one IP address with a load balancer? Or, in a modern deployment, should a different IP address be intelligently returned by an organization-controlled DNS infrastructure, allowing for rapid scale-out and intelligent routing decisions to be made transparently?
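
To make the question concrete, here is a minimal sketch using only Python's standard library. It resolves the article's placeholder name several times in a row; behind a single load-balanced address you would see one stable answer, while behind an organization-controlled, intelligent DNS service the answers might differ from one attempt to the next. (api3.example.com is purely illustrative and won't resolve as written; substitute a name you control.)

```python
import socket

# Resolve the same name a few times and show every address returned.
# api3.example.com is the article's placeholder -- it will not resolve
# as written, so substitute a real name when trying this out.
for attempt in range(3):
    infos = socket.getaddrinfo("api3.example.com", 443, proto=socket.IPPROTO_TCP)
    addresses = sorted({info[4][0] for info in infos})
    print(f"attempt {attempt + 1}: {addresses}")
```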

The problem is that as server IP addresses, user locations, and network conditions change, the DNS infrastructure needs to know how best to resolve each query. It needs to do this while taking into account latency, server load, availability, price, and a hundred other deployment-specific factors. As you can see, this is not at all an easy task, and certainly not one DNS was built to handle.

Right along with this, of course, come the all-too-familiar problems of authentication and monitoring. If policies are in place for api3.example.com at IP address 52.234.219.8, what happens if api3.example.com suddenly resolves to 28.219.38.15 due to a cloud outage? How, again, should that be handled?
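
The fragility is easy to see in a small hypothetical sketch: a policy keyed to a hard-coded address silently stops matching the moment the record changes, while a policy keyed to the hostname and resolved at enforcement time follows the record wherever it points. The addresses below are the article's illustrative values, not real endpoints.

```python
import socket

# Hypothetical allow-list keyed to a hard-coded address: it silently breaks
# the moment api3.example.com starts resolving somewhere else.
ALLOWED_IPS = {"52.234.219.8"}

def allowed_by_ip(destination_ip: str) -> bool:
    return destination_ip in ALLOWED_IPS

# Keying the policy to the hostname and resolving at enforcement time
# tracks the record automatically, at the cost of an extra lookup.
def allowed_by_name(hostname: str, destination_ip: str) -> bool:
    current = {info[4][0] for info in socket.getaddrinfo(hostname, None)}
    return destination_ip in current
```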

The Solution: Multi-Cloud DNS

One of the most common solutions to this problem is to give your DNS server more smarts than it would otherwise have. This requires running your own DNS infrastructure, but doing so is becoming more and more common in larger data centers and cloud deployments.

The key reason to do this is to allow the DNS server to make routing and policy decisions on its own.

Instead of keeping a static A record with one of the more common DNS providers (api3.example.com IN A 52.234.219.8, for illustrative purposes), you configure the zone so all users are told *.example.com is handled by ns1.example.com, ns2.example.com, and ns3.example.com (again, entirely for illustrative purposes), nameservers you operate yourself.
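
A quick way to confirm the delegation looks the way you expect is to query the zone's NS records. Below is a minimal sketch assuming the third-party dnspython package; the nameserver names are the article's placeholders.

```python
import dns.resolver  # third-party: pip install dnspython

# The expected delegation, using the article's illustrative nameserver names.
EXPECTED = {"ns1.example.com.", "ns2.example.com.", "ns3.example.com."}

answer = dns.resolver.resolve("example.com", "NS")
actual = {rr.target.to_text() for rr in answer}

print("zone is delegated to:", sorted(actual))
print("matches expectation:", actual == EXPECTED)
```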

From there, you can use simple round-robin distribution, or make more complex geographic, authentication-based, or addressing-based decisions to answer the DNS query. In this way, you can ensure each user will get the best experience based on their location and other factors.
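
What those decisions look like is entirely deployment-specific, but at its core the authoritative server is just running an answer-selection function. Here is a hypothetical sketch; every address, range, and pool name is an illustrative stand-in rather than anything from the article.

```python
import ipaddress
import random

# Invented example values: an internal corporate range plus per-region pools
# of public addresses (drawn from the documentation ranges).
CORPORATE_NET = ipaddress.ip_network("10.0.0.0/8")
POOLS = {
    "corporate": ["10.20.0.11", "10.20.0.12"],
    "eu": ["198.51.100.10", "198.51.100.11"],
    "default": ["203.0.113.10", "203.0.113.11"],
}

def pick_answer(client_ip: str, client_region: str = "default") -> str:
    """Choose one A-record value for this client."""
    if ipaddress.ip_address(client_ip) in CORPORATE_NET:
        pool = POOLS["corporate"]      # internal users get internal endpoints
    else:
        pool = POOLS.get(client_region, POOLS["default"])
    return random.choice(pool)         # spread load randomly within the pool

print(pick_answer("10.4.5.6"))             # corporate client
print(pick_answer("192.0.2.77", "eu"))     # external client routed to the EU pool
```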

This also allows DNS monitoring and analysis to be put in place. Based on the answer the DNS server arrives at and subsequently returns to the user, authentication challenges can be issued for clients outside the corporate IP space, or IPS/IDS systems can react as needed, among other possibilities.
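
On a self-hosted resolver, that can be as simple as logging each answer together with the client's source address and flagging anything that comes from outside the corporate ranges, so a later authentication challenge or an IDS/IPS rule can pick it up. A hypothetical sketch, with an invented corporate range:

```python
import ipaddress
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

# Invented corporate range for illustration.
CORPORATE_NET = ipaddress.ip_network("10.0.0.0/8")

def record_decision(qname: str, client_ip: str, answer: str) -> bool:
    """Log the query and return True when an extra auth challenge is warranted."""
    internal = ipaddress.ip_address(client_ip) in CORPORATE_NET
    logging.info("qname=%s client=%s answer=%s internal=%s",
                 qname, client_ip, answer, internal)
    return not internal

needs_challenge = record_decision("api3.example.com", "192.0.2.77", "203.0.113.10")
print("challenge required:", needs_challenge)
```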

Intelligent DNS

Multi-cloud environments create a host of new challenges and uses for many technologies, DNS among them.

However, they also create highly valuable opportunities for more intelligence to be placed within the infrastructure at many levels, from security and authentication to localization and user experience. This allows an application, service, or website to scale across countries, continents, and cloud providers without requiring multiple user-visible endpoints or, in fact, any user knowledge at all.

Data centers and cloud deployments are increasingly becoming containerized, agile configurations of servers and applications. DNS that's up to the challenge of not only highly mobile users but also mobility at the server end is going to be crucial, and with it the staff and policies needed to ensure a clean, fast, accountable, and secure user experience.