We’ve gotten to the point where any issues in our network can probably be blamed on DNS. Our routing protocols are rock solid. Our switches are fast and programmed to be reliable. So why is the bane of our existence an application layer protocol that seems to be as flaky as a biscuit?
As presented by BlueCat Networks during Networking Field Day 19 this past November, DNS is unreliable because it was never built to be reliable in the first place. We also have to remember that DNS is only one pillar of a larger, related infrastructure. While most people see DNS as the whole issue, we must also take DHCP and IP Address Management (IPAM) into account as the other pillars of DDI infrastructure.
So, why does DNS always get the blame when something goes wrong? Because DNS is the most visible part of the equation. If you are missing an IP address, things are really broken, but they are broken in a way that is easy to see. It's like a car in many ways: if nothing on the car works at all, you start with an entirely different method of troubleshooting.
However, when things are only sort of working, it makes for a much different dilemma. With an IP address, some apps and services will work. The ones that depend specifically on DNS and name resolution, though, will fail or work only intermittently. That makes the troubleshooting process much harder. In an enterprise, every second spent tracking down these issues adds up to downtime that most businesses can't really afford.
Another huge issue is the rise of DNS hijacking incidents. DNS hijacking is an insidious problem because it can inject information into a data stream or redirect users transparently, without them ever becoming aware of the issue. Could you imagine the impact that DNS hijacking could have on a financial institution? If users don't know what has happened, it could cause a significant incident with very little evidence to go on.
And what alerts us to the problem when DNS gets hijacked? Right now, very little. We might notice because some internal resources are offline. We might eventually find out if we do an inspection. But otherwise we're out of luck. And if the hijacking occurs on a critical piece of infrastructure, like the forwarders list on a Windows domain controller? That's a huge problem.
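Absent a platform watching for this, one basic way to spot a hijack is to compare the answers your configured resolver returns for names you control against a trusted baseline; any divergence is a red flag. Here is a minimal sketch of that comparison logic — the hostnames and addresses are invented for illustration, and a real check would populate both dicts from live queries:

```python
def find_divergent_answers(baseline, observed):
    """Return names whose observed answer set differs from the baseline.

    baseline, observed: dict mapping hostname -> set of IP address strings.
    """
    divergent = {}
    for name, expected_ips in baseline.items():
        actual_ips = observed.get(name, set())
        if actual_ips != expected_ips:
            divergent[name] = {"expected": expected_ips, "actual": actual_ips}
    return divergent

# Illustrative data: the internal portal suddenly resolves to an
# unexpected external address while mail still looks normal.
baseline = {
    "portal.example.internal": {"10.1.1.10"},
    "mail.example.internal": {"10.1.1.20"},
}
observed = {
    "portal.example.internal": {"203.0.113.66"},  # suspicious answer
    "mail.example.internal": {"10.1.1.20"},
}

alerts = find_divergent_answers(baseline, observed)
for name, detail in sorted(alerts.items()):
    print(f"ALERT: {name} expected {sorted(detail['expected'])} "
          f"but got {sorted(detail['actual'])}")
```

Run periodically against records you know should never change, a check like this turns a silent redirect into a visible alert.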
BlueCat Networks is fixing all of this by integrating the DDI infrastructure into something they call Enterprise DNS. With BlueCat there is more visibility, more error checking, and less complexity to these services; the entire DDI infrastructure feels like it was designed to run together, with address management and reverse DNS lookup tightly integrated. Instead of hoping that your records are consistent across platforms and devices, as you do with the basic Windows DNS and DHCP tools offered today, you can trust that BlueCat is consistent. Their database-driven approach ensures that everything is correct. And if something is amiss, you'll spot it right away.
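One concrete form that consistency checking takes is verifying that forward (A) and reverse (PTR) records agree. A sketch of the idea, with invented record data — a real implementation would pull these from the DDI database rather than hard-coded dicts:

```python
import ipaddress

def check_forward_reverse(a_records, ptr_records):
    """Flag A records whose IP has no PTR, or whose PTR names a different host.

    a_records: dict mapping hostname -> IP address string.
    ptr_records: dict mapping IP address string -> hostname.
    Returns a list of (hostname, ip, issue) tuples.
    """
    problems = []
    for host, ip in a_records.items():
        ipaddress.ip_address(ip)  # raises ValueError on a malformed address
        ptr_host = ptr_records.get(ip)
        if ptr_host is None:
            problems.append((host, ip, "missing PTR record"))
        elif ptr_host != host:
            problems.append((host, ip, f"PTR points to {ptr_host}"))
    return problems

# Illustrative data: one clean record, one stale PTR, one missing PTR.
a_records = {
    "web01.example.internal": "10.0.0.5",
    "db01.example.internal": "10.0.0.6",
    "app01.example.internal": "10.0.0.7",
}
ptr_records = {
    "10.0.0.5": "web01.example.internal",
    "10.0.0.6": "old-db.example.internal",  # stale reverse entry
    # 10.0.0.7 has no PTR at all
}

problems = check_forward_reverse(a_records, ptr_records)
for host, ip, issue in problems:
    print(f"{host} ({ip}): {issue}")
```

When both halves of the database live in one system, this kind of drift simply can't accumulate unnoticed.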
BlueCat can also ensure that it is the authoritative DNS source for your network. It can secure your endpoints to make sure that no rogue DNS servers are being used, which also helps with troubleshooting. Say, for example, a user changes their DNS servers to a public resolver like Google's 8.8.8.8 because it's what they have at home. While most of their DNS resolution will work, they may have issues logging into a domain controller or accessing internal documents. By auditing and correcting this user behavior, you can reduce strain on your helpdesk and ensure that proper corporate policies are being adhered to.
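That audit is easy to picture: collect each endpoint's configured resolvers and flag anything outside the approved corporate set. A hypothetical sketch — the approved list, endpoint names, and resolver data are all invented:

```python
# Corporate DNS servers (invented addresses for illustration).
APPROVED_RESOLVERS = {"10.0.0.53", "10.0.1.53"}

def audit_endpoints(endpoint_configs, approved=APPROVED_RESOLVERS):
    """Return {endpoint: [unapproved resolver IPs]} for non-compliant hosts.

    endpoint_configs: dict mapping endpoint name -> list of resolver IPs.
    """
    findings = {}
    for endpoint, resolvers in endpoint_configs.items():
        rogue = [ip for ip in resolvers if ip not in approved]
        if rogue:
            findings[endpoint] = rogue
    return findings

# Illustrative inventory: one user added a home resolver, one is compliant.
endpoints = {
    "laptop-042": ["8.8.8.8", "10.0.0.53"],
    "desktop-117": ["10.0.0.53", "10.0.1.53"],
}

for endpoint, rogue in audit_endpoints(endpoints).items():
    print(f"{endpoint}: unapproved resolvers {rogue}")
```

Feed it your real endpoint inventory and you have the list of machines that need their resolver settings corrected before the helpdesk tickets start.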
Bringing It All Together
DNS is problematic because it's distributed. It was designed to survive incidents that take down large portions of the network. However, this distributed, decentralized approach also means it can work incorrectly part of the time and still provide service to your users. By combining DNS, DHCP, and IPAM into one platform and giving us management capabilities, BlueCat Networks is bringing DNS into the 21st century, making it less of a problem to manage and instead making it a key pillar of availability and security going forward.