A Yeti in the DNS


written by Yves Bovard

Most of the time, the Internet works without any problem; we can just power on our computer and start surfing… OK, most of the time. Many things have to be reliable to make this possible: power, cables, routers, computers, software and, last but not least, the DNS. The last of these is one of the most critical components of the Internet. Each time we read our favorite online newspaper, check our e-mails, write and reply to them, or, more generally, each time we use the Internet, many queries are sent to DNS servers to convert (more or less) meaningful Web addresses into IP addresses. And this is only the tip of the iceberg.
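
Here is a tiny illustration of that conversion, in Python: we ask the operating system's resolver to turn the name www.switch.ch into addresses for us (the name and the port are just examples).

    # Ask the operating system's resolver to turn a name into addresses,
    # the conversion described above. Port 443 is only a placeholder.
    import socket

    for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo("www.switch.ch", 443):
        print(family.name, sockaddr[0])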

In the early days of the Internet, this task was handled by a single file (HOSTS.TXT). During the 1980s, however, it became clear that such a method would not scale. The DNS was thus born, designed in three parts. First, the stub resolver, located on your computer, receives your question: what is the IP address of www.switch.ch? It transforms this question into a standard DNS message and sends it over the network to the second part, a recursive resolver. Resolvers can often answer almost instantly, either because they have cached the answer from an earlier lookup or by querying the third part, the authoritative servers, located all over the Internet. The authoritative servers are organised in a hierarchical tree, with the root servers at the top, and some of them know the answer to the question you asked.
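
To make this division of labour concrete, the following sketch performs the iterative lookup a resolver does, using the dnspython library. It starts at a.root-servers.net (198.41.0.4) and follows referrals downwards; a real resolver would also handle caching, truncation and referrals without glue records, which this sketch glosses over.

    # A minimal sketch of iterative resolution with dnspython
    # (pip install dnspython). Not a full resolver: no caching, no
    # retries, and it assumes every referral carries glue records.
    import dns.message
    import dns.query
    import dns.rdatatype

    def iterate(name, server="198.41.0.4"):  # start at a.root-servers.net
        query = dns.message.make_query(name, dns.rdatatype.A,
                                       use_edns=0, payload=1232)
        while True:
            response = dns.query.udp(query, server, timeout=5)
            if response.answer:  # an authoritative server has answered
                return response.answer
            # Otherwise this is a referral: take a nameserver address from
            # the glue in the additional section and repeat the question.
            for rrset in response.additional:
                if rrset.rdtype == dns.rdatatype.A:
                    server = rrset[0].address
                    break
            else:
                raise RuntimeError("referral without glue")

    print(iterate("www.switch.ch."))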

Nowadays, the authoritative root of the tree is made up of 13 servers named alphabetically from a.root-servers.net to m.root-servers.net. In reality, a technique called anycast allows a much larger number of servers around the world to listen on (and answer from) the same addresses. For example, k.root-servers.net actually comprises 33 nodes spread across the globe. To analyse the workload of the DNS, DNS-OARC (the DNS Operations, Analysis, and Research Center) compiles yearly "Day in the Life of the Internet" (DITL) statistics. In 2015, it used a capture window of three days and found that 10 of the 13 root servers answered about 60 billion queries in this period.
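
To put that figure into perspective, a quick back-of-envelope calculation (mine, not from the DITL report) turns it into an average query rate:

    # Average rate implied by the DITL 2015 numbers quoted above.
    queries = 60e9           # ~60 billion queries observed
    window = 3 * 86_400      # three-day capture window, in seconds
    print(f"{queries / window:,.0f} queries per second")  # ~231,481

Roughly 230,000 queries per second on average, and that covers only ten of the thirteen letters.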

As it stands, this infrastructure is robust: a single server failing to respond does not affect availability, and when a server is overloaded, more servers can simply be added to spread the traffic. At the same time, its size and complexity make it hard to analyse. The new Yeti DNS Project (www.yeti-dns.org) aims to study it by asking the following questions, among others:

  • The root zone is currently distributed to the 13 root servers from a single distribution master. What about having more than one?
  • Can we add more root servers? If yes, how many?
  • How can we improve the process of renumbering a root server?
  • How can we improve the management of DNSSEC key signing?
  • How quickly can the DNSSEC Zone Signing Key (ZSK) be rolled over? (One practical constraint, response size, is illustrated in the sketch after this list.)

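As a small illustration of why the last question is delicate: during a ZSK rollover, extra keys are published in the root zone, so the signed DNSKEY response grows, and its size decides how comfortably answers still fit into network packets. The sketch below, again using dnspython and a.root-servers.net (over TCP, to sidestep UDP truncation), simply measures the current size of that response:

    # Measure the size of the root's signed DNSKEY response, the quantity
    # that grows while a ZSK rollover is in progress.
    import dns.message
    import dns.query
    import dns.rdatatype

    # want_dnssec=True sets the DO bit so the signatures (RRSIGs) come too.
    q = dns.message.make_query(".", dns.rdatatype.DNSKEY, want_dnssec=True)
    r = dns.query.tcp(q, "198.41.0.4", timeout=5)  # a.root-servers.net
    print(len(r.to_wire()), "bytes in the signed DNSKEY response")
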
The project’s first conference, held in Yokohama on 31 October 2015, marks the end of the infrastructure-building phase and the start of the research phase. Several software bugs have already been found and corrected, and a number of networking recommendations have been drawn up. These early findings bode well for the three years remaining until the project ends in 2018.

SWITCH is proud to be involved in this project as a root server operator. As the registry in charge of the .ch and .li country-code top-level domains (ccTLDs), we think it is important to actively support DNS research. The knowledge this project produces will be of great importance to the whole community.

The project’s website features traffic statistics as well as the volunteer list and a blog.

One thought on “A Yeti in the DNS”

  1. Thanks for carrying the torch by participating in this effort! 15 years ago, SWITCH was already part of the anycast DNS experiments to validate this technique before it became widely used for the root nameservers.

