No, this is not a kind of secret message, nor a new ice-cream flavour. On 11 October 2017, the root zone will be signed with a new key. Ladies and gentlemen, update your DNS resolvers! As of 11 July, the new key is published in the root zone, and your resolver should start updating its trust anchor automatically.
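If your resolver supports RFC 5011 trust anchor tracking, enabling automatic updates is usually a one-line change. As a sketch, assuming Unbound (the key file path is an example and varies by distribution; BIND achieves the same with `dnssec-validation auto;`):

```
# unbound.conf -- RFC 5011 automatic trust anchor updates.
# The path below is an example; adjust it for your distribution.
server:
    auto-trust-anchor-file: "/var/lib/unbound/root.key"
```

With this in place, the resolver tracks the KSK rollover on its own, and no manual key replacement is needed in October.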
A Swiss domain holder called us today to tell us that the .ch zone pointed to the wrong name servers for his domain.
The NS entries were ns1.dnshost[.]ga and ns2.dnshost[.]ga. We contacted the registrar and soon realized that this was not the only domain with unauthorized changes. We identified 93 additional .ch and .li domain names pointing to the two rogue name servers. While domain hijacking by pointing a domain to a rogue NS is a known attack, 94 domains in a single day is very unusual. So we analyzed what the hijacked domains were being used for and soon found out that they were serving malware to internet users.
Visitors to the hijacked domains were redirected to the Keitaro TDS (traffic distribution system):
A TDS decides where to redirect a visitor, often depending on the visitor's IP address (i.e. country), user agent and operating system.
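The routing logic of such a TDS can be sketched in a few lines of Python. The target profile and the Google fallback below are illustrative assumptions, not Keitaro's actual configuration:

```python
# Illustrative sketch of a TDS routing decision. The target profile and
# fallback URL are assumptions for illustration, not Keitaro's real rules.

EXPLOIT_KIT = "hXXp://188.225.87[.]223/?doctor"  # defanged, seen in this campaign
DEAD_END = "https://www.google.com"              # harmless fallback


def route(country: str, user_agent: str) -> str:
    """Pick a redirect target based on visitor properties."""
    is_windows = "Windows" in user_agent
    is_ie = "MSIE" in user_agent or "Trident" in user_agent
    # Only visitors matching the campaign's assumed target profile are
    # sent to the exploit kit; everyone else hits a dead end.
    if country in {"CH", "LI"} and is_windows and is_ie:
        return EXPLOIT_KIT
    return DEAD_END
```

Real TDS installations apply many more rules (referrer checks, repeat-visitor filtering, bot detection), but the principle is the same: profile the visitor, then choose a destination.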
A dead end may look like the following:
And the visitor will be redirected to Google.
However, in some cases, the visitor is redirected to the Rig Exploit Kit:
hXXp://188.225.87[.]223/?doctor&news=...&;money=...&cars=236&medicine=3848 hXXp://188.225.87[.]223/?health&news=... ...
And the visitor gets infected.
The payload is Neutrino Bot:
MD5: a32f3d0a71a16a461ad94c5bee695988 SHA256: 492081097c78d784be3996d3b823a660f52e0632410ffb2a2a225bd1ec60973d
It gets in touch with its command and control server and grabs additional modules:
hXXp://poer23[.]tk/tasks.php hXXp://poer23[.]tk/modules/nn_grabber_x32.dll hXXp://poer23[.]tk/modules/nn_grabber_x64.dll
A little later, it also gets an update:
hXXp://www.araop[.]tk/test.exe MD5: 7c2864ce7aa0fff3f53fa191c2e63b59 SHA256: c1d60c9fff65bbd0e3156a249ad91873f1719986945f50759b3479a258969b38
The rogue NS records were inserted into the .ch zone file at around 13:00 today. The registrar soon discovered what had happened and rolled back the unauthorized changes. By 16:00, all of the changes in the .ch and .li zones had been reverted, and the NS records pointed to the legitimate name servers again.
[Update 10.7.17 17:15]
Gandi, the registrar of the 94 domain names, has written a blog post, as has SCRT, the domain holder that initially informed us about the hijacking of scrt.ch. SCRT also showed how HTTP Strict Transport Security (HSTS) protected their returning visitors from being redirected to the bogus website!
The share of DNSSEC-signed domain names in .ch and .li reached 1% for the first time in June 2017. While this is still a very low number compared to other ccTLDs, the number of DNSSEC-signed domain names has been growing at a high rate over the last two quarters.
The Domain Name System Security Extensions (DNSSEC) are a set of technologies that provide origin authentication and data integrity for the Domain Name System. They allow a client to detect DNS records that have been modified on their way from the authoritative name server to the client using a domain name. This helps to protect Internet users from being directed to bogus websites.
In addition to protecting Internet users from cybercriminals and state-sponsored actors, DNSSEC is the basis for important standards such as DNS-based Authentication of Named Entities (DANE).
DNSSEC in .ch and .li
DNSSEC was enabled for the .ch and .li zones in 2010 but unfortunately saw only slow adoption by domain holders. From 2013, there was slow but steady growth in the number of domain names signed with DNSSEC. In November 2016, we noticed an increased rate of DNSSEC-signed domain names, which accelerated in April 2017.
It is rare for a malware family to use .ch or .li domain names in its domain generation algorithm (DGA). The last time I remember us having to take action against malware using .ch or .li domain names was about eight years ago, when Conficker infected millions of computers worldwide. The malware generated about 500 .ch and .li domain names a day as potential command and control rendezvous points. Back then, SWITCH joined the Conficker Working Group to prevent the malware from using these domain names.
Since then, we have been watching the use of .ch and .li domain names in malware DGAs, and we have prepared for this case by making an agreement with the Registrar of Last Resort (RoLR) to prevent the registration of domain names used by malware DGAs.
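A DGA boils down to a deterministic pseudo-random generator that both the infected machines and the botnet operator can run, so that each day they independently agree on a fresh set of rendezvous domains. The sketch below is a generic illustration assuming a SHA-256-based scheme; it is not Conficker's or Tofsee's actual algorithm:

```python
import hashlib
from datetime import date


def dga(day: date, count: int = 5, tlds: tuple = (".ch", ".li")) -> list:
    """Generate deterministic pseudo-random domain names for a given day.

    Infected machines and the operator derive the same list, so the
    operator only needs to register (and a registry only needs to block)
    a handful of names per day.
    """
    domains = []
    for i in range(count):
        seed = f"{day.isoformat()}-{i}".encode()
        digest = hashlib.sha256(seed).hexdigest()
        # Map hex digits to lowercase letters to form a plausible label.
        label = "".join(chr(ord("a") + int(c, 16) % 26) for c in digest[:10])
        domains.append(label + tlds[i % len(tlds)])
    return domains
```

Because the output depends only on the date, defenders who recover the algorithm can pre-compute tomorrow's domains and block or sinkhole them in advance, which is exactly what an agreement like the one with the RoLR enables.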
This week, the Swiss Governmental Computer Emergency Response Team (GovCERT) informed us that the malware Tofsee is using .ch as one of the TLDs in its DGA.
SWITCH operates recursive name servers for any user within the Swiss NREN. While larger universities typically run their own recursive name server, many smaller organisations rely on our resolvers for domain name resolution. During the consolidation of our name server nodes into two data centres, we looked for opportunities to improve our setup. Dnsdist is a DNS, DoS and abuse-aware load balancer from the makers of PowerDNS and plays a big part in our new setup. While the first stable release of dnsdist (version 1.0.0) is only a few days old (21 April 2016), it feels like everyone is already using it. We are happy users as well and want to share with you some of the features we especially like about dnsdist.
Our old setup consisted of several name server nodes which all shared the same IP address provided by anycast routing. Our recursive name server of choice was, and still is, BIND, and we have been providing DNSSEC validation and malicious-domain lookup protection through our DNSfirewall service for some time. While this setup worked very well, it had the disadvantage that a badly behaved or excessive client could degrade the performance of a single name server node and thereby affect all users routed to that node. Another disadvantage was that each name server node got its share of the whole traffic. While this may sound good, it means that we ran several smaller caches, one on each node. My favorite quote from Bert Hubert, founder of PowerDNS, is: “A busy name server is a happy name server“. What he means is that it is actually faster to route all your queries to a single name server node, because this improves the cache-hit rate.
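The cache effect is easy to simulate. The sketch below compares the hit rate of one shared cache against four per-node caches under a skewed, repetitive query stream; the traffic distribution is an assumption for illustration:

```python
import random


def simulated_hit_rate(num_caches: int, queries: list) -> float:
    """Spread queries round-robin over num_caches independent caches
    and return the fraction of queries answered from cache."""
    caches = [set() for _ in range(num_caches)]
    hits = 0
    for i, name in enumerate(queries):
        cache = caches[i % num_caches]
        if name in cache:
            hits += 1
        else:
            cache.add(name)  # first sight on this node: a miss, then cached
    return hits / len(queries)


# A skewed query stream: a few popular names dominate (assumption).
rng = random.Random(42)
stream = ["name%d.ch" % (int(rng.paretovariate(1.0)) % 1000) for _ in range(100_000)]

one_cache = simulated_hit_rate(1, stream)    # one busy name server
four_caches = simulated_hit_rate(4, stream)  # traffic split over four nodes
```

Every hit in the split setup is also a hit in the shared cache (the name was necessarily seen earlier), so the single cache can only do better: the split caches pay an extra miss the first time each node sees a given name.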
Dnsdist provides a rich set of DNS-specific features
Our new setup still makes use of anycast routing. However, it is now the dnsdist load balancer nodes that announce this IP address, and they forward the queries to the back-end recursive name servers for domain name resolution.
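A minimal dnsdist configuration for this kind of setup might look as follows; the addresses are placeholders, and the rate-limit rule is just one example of the DNS-aware policies dnsdist offers:

```
-- dnsdist configuration (Lua) -- sketch with placeholder addresses
setLocal("192.0.2.53:53")            -- the anycast service address
newServer({address="10.0.0.11:53"})  -- back-end BIND resolver 1
newServer({address="10.0.0.12:53"})  -- back-end BIND resolver 2
-- drop clients exceeding 50 queries per second
addAction(MaxQPSIPRule(50), DropAction())
```

With the load balancer in front, an excessive client can be throttled at the edge instead of degrading a recursive name server for everyone behind it.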
A recent presentation by SIDN (.nl) at the Spring 2016 DNS-OARC workshop reminded me of the importance of Time-To-Live (TTL) values in TLD zones. Specifically, it got me thinking about lowering the negative caching time in .ch/.li from the current 1 hour to 15 minutes.
What is negative caching?
When a resolver receives a response to a query, it caches it for the duration of the TTL. For positive responses, each record carries its own TTL, but for negative responses (response code NXDOMAIN), there is no answer record to carry one. In this case, the response contains the SOA record of the zone in the authority section. RFC 2308 specifies the negative caching time as the minimum of the SOA record’s TTL and the SOA MINIMUM field. For example, the original SOA record of the .ch zone looked as follows:
dig +nocmd +noall +answer @a.nic.ch ch. soa
ch. 3600 IN SOA a.nic.ch. helpdesk.nic.ch. 2016041421 900 600 1123200 3600
The SOA TTL is 3600, and the SOA MINIMUM field is also set to 3600. The minimum of these two values is of course 3600 as well, which means the negative caching time for any .ch domain lookup is one hour.
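The RFC 2308 rule is simply the minimum of two fields from the SOA record; a quick sketch in Python:

```python
def negative_cache_ttl(soa_ttl: int, soa_minimum: int) -> int:
    """RFC 2308: negative answers are cached for min(SOA TTL, SOA MINIMUM)."""
    return min(soa_ttl, soa_minimum)


# The .ch SOA shown above: TTL 3600, MINIMUM 3600 -> one hour
print(negative_cache_ttl(3600, 3600))  # 3600
# Lowering the MINIMUM field to 900 cuts negative caching to 15 minutes
print(negative_cache_ttl(3600, 900))   # 900
```

Note that lowering only one of the two fields is enough to lower the result, since the smaller value always wins.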
A lower negative caching time is more user-friendly
People who are about to register a new domain name may also look the name up in the DNS first. However, this means they have just cached the non-existence of the name in the resolver they are using. A domain can be registered in a matter of minutes, and the cached negative answer can then prevent them from using the domain name on their network for the duration of the negative caching time.
Most of the time, the Internet works without any problem; we can just power on our computer and start surfing… ok, most of the time. Many things have to be reliable to make this possible: power, cables, routers, computers, software and, last but not least, the DNS. This last point is one of the most critical parts of the Internet. Each time we read our favorite online newspapers, each time we check our e-mails, write and reply to them, or more generally, each time we use the Internet, many queries are sent to DNS servers to convert (more or less) meaningful Web addresses to IP addresses. And this is only the tip of the iceberg.
In the early days of the Internet, this task was handled by a single file. During the 1980s, however, it became clear that such a method did not scale. The DNS was thus born, designed in three parts. First, the stub resolver is located on your computer. It receives your question: what is the IP address of www.switch.ch? The question is transformed into a standard DNS message and sent over the network to the second part, the resolvers. These are able to find an answer almost instantly, either because somebody has already asked for it or by querying the third part, the authoritative servers, located somewhere on the Internet. They are structured as a hierarchical tree, with the root servers at the top. Some of them know the answer to the question you asked.
Nowadays, the authoritative root of the tree is made up of 13 servers named alphabetically from a.root-servers.net to m.root-servers.net. In reality, a technique named anycast allows a much larger number of servers around the world to listen on (and answer from) the same address. For example, k.root-servers.net actually comprises 33 nodes spread across the globe. To analyse the workload of the DNS, DNS-OARC (DNS Operations, Analysis, and Research Center) computes yearly statistics (A Day in the Life of the Internet, DITL). In 2015, it used a time window of three days and found that 10 of the 13 root servers answered about 60 billion queries in this period.
The current state of this infrastructure is robust. A single server failing to respond does not affect availability, and when a server is overloaded, more servers can simply be added to spread the traffic. However, the size and complexity of this infrastructure make it hard to analyse. The new Yeti DNS Project (www.yeti-dns.org) aims to study it by asking the following questions and more: