SWITCH Security-Blog

SWITCH-CERT IT-Security Blog



Why the most successful Retefe spam campaign never paid off

Switzerland has been one of the main targets of the Retefe banking Trojan since its first appearance in November 2013. At that time, the malware changed the local DNS resolver settings on the infected computer (see the blog post “Retefe Bankentrojaner”, in German only). Almost a year went by before the actors switched to the still current approach of setting a proxy auto-config (PAC) URL (see the blog post “The Retefe banking Trojan has targeted Switzerland“). To follow the story in this blog post, it helps to understand the modus operandi of the Retefe malware. If you are not familiar with it, we recommend reading up on it in the posts linked above.

While the Retefe actors are constantly changing tactics (their newest campaigns also target Mac OS X users, for example), the malware itself still works the same way. One of the notable changes was the introduction of Tor in 2016. At first, they used Tor gateway domain names such as onion.to and onion.link within the proxy auto-config URLs; later on, they switched to Tor completely. The advantages of using Tor are, of course, anonymity and the difficulty of blocking or taking down the infrastructure.

Onion domain names don’t use DNS, or do they?
The Tor network uses .onion domain names, but these names are not resolved over DNS; they work only inside the Tor network. RFC 7686 (The “.onion” Special-Use Domain Name) goes into more detail on this special case. The fact is, however, that .onion domain names do leak into the DNS. For the potential reasons and more information on this subject, we recommend the Verisign Labs paper “Measuring the Leakage of Onion at the Root” (PDF).
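A quick way to see the special-use handling in action (the queried name below is a made-up example) is to send an .onion query to your resolver and check the status field:

dig +nocmd +noall +comments somethingrandom1234.onion a

A resolver that follows RFC 7686 answers NXDOMAIN without forwarding the query anywhere. Resolvers that do forward such queries are precisely how .onion names end up leaking towards the root servers.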



SWITCH DNS recursive name service improvements with dnsdist

SWITCH operates recursive name servers for any user within the Swiss NREN. While larger universities typically run their own recursive name server, many smaller organisations rely on our resolvers for domain name resolution. During the consolidation of our name server nodes into two data centres, we looked for opportunities to improve our setup. Dnsdist is a DNS, DoS and abuse-aware load balancer from the makers of PowerDNS and plays a big part in our new setup. While the first stable release of dnsdist (version 1.0.0) is only a few days old (21 April 2016), it feels like everyone is already using it. We are happy users as well and want to share with you some of the features we especially like about dnsdist.

Our old setup consisted of several name server nodes that all shared the same IP address, provided by anycast routing. Our recursive name server of choice was, and still is, BIND, and we have been providing DNSSEC validation and malicious domain lookup protection through our DNSfirewall service for some time. While this setup worked very well, it had the disadvantage that badly behaved or excessive clients could degrade the performance of a single name server node and thereby affect all users routed to that node. Another disadvantage was that each name server node got its share of the whole traffic. While this may sound good, it means several smaller caches, one on each node. My favorite quote from Bert Hubert, founder of PowerDNS, is: “A busy name server is a happy name server“. What it means is that it is actually faster to route all your queries to a single name server node, because this improves the cache-hit rate.
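The effect is easy to demonstrate with a toy model (our sketch, not a measurement of our servers: it assumes Zipf-like name popularity and a simple cache without TTL expiry):

import random

random.seed(1)
NAMES = list(range(50_000))
WEIGHTS = [1 / (rank + 1) for rank in NAMES]            # Zipf-like popularity
QUERIES = random.choices(NAMES, WEIGHTS, k=200_000)

def hit_rate(num_nodes):
    caches = [set() for _ in range(num_nodes)]          # one cache per node
    hits = 0
    for i, name in enumerate(QUERIES):
        cache = caches[i % num_nodes]                   # spread queries round-robin
        if name in cache:
            hits += 1
        else:
            cache.add(name)                             # first lookup misses, then cached
    return hits / len(QUERIES)

for nodes in (1, 2, 4):
    print(f"{nodes} node(s): {hit_rate(nodes):.1%} cache hits")

The fewer nodes the traffic is concentrated on, the fewer first-time cache misses occur overall, so the single busy cache wins.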

Dnsdist provides a rich set of DNS-specific features
Our new setup still makes use of anycast routing. However, it is now the dnsdist load balancer nodes that announce this IP address, and they forward the queries to the back-end recursive name servers for domain name resolution.

The server nodes are located in two data centres, and both load-balancers announce the same IP address to make use of anycast routing. Query load is typically sent to resolvers within the same data centre but is distributed to the other site as well in the event of a higher load or server loss.
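In dnsdist’s Lua configuration, such a preference for the local site can be expressed along these lines (a minimal illustrative sketch; all addresses and limits are placeholders, not our production values):

setLocal("192.0.2.53")                                -- the anycast service address
newServer({address="10.1.0.1", order=1, qps=20000})   -- resolvers in the local data centre
newServer({address="10.1.0.2", order=1, qps=20000})
newServer({address="10.2.0.1", order=2})              -- other site, used on overload or loss
newServer({address="10.2.0.2", order=2})
setServerPolicy(firstAvailable)                       -- picks by 'order', honouring qps limits

The firstAvailable policy prefers the lowest-order servers and only spills over to the next group when they are down or above their queries-per-second limit.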





Optimizing Negative Caching Time in DNS

A recent presentation by SIDN (.nl) at the Spring 2016 DNS-OARC workshop reminded me of the importance of Time-To-Live (TTL) values in TLD zones. Specifically, it got me thinking about lowering the negative caching time in .ch/.li from the current 1 hour to 15 minutes.

What is negative caching?
When a resolver receives a response to a query, it caches the response for the duration of the TTL. For positive responses, the TTL comes with the answer records, but negative responses (response code NXDOMAIN) contain no answer to the query question. In this case, the response carries the SOA record of the zone in the authority section. RFC 2308 specifies the negative caching time as the minimum of the SOA record’s TTL and the SOA MINIMUM field. For example, the original SOA record of the .ch zone looked as follows:

dig +nocmd +noall +answer @a.nic.ch ch. soa
ch. 3600 IN SOA a.nic.ch. helpdesk.nic.ch. 2016041421 900 600 1123200 3600

The SOA TTL is 3600, and the SOA minimum time is also set to 3600. The minimum of these two values is of course 3600 too. That means the negative caching time for any .ch domain lookup is one hour.
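You can see the negative TTL in action by querying a name that does not exist (the name below is a made-up example); the authority section of the NXDOMAIN response contains the very same SOA record:

dig +nocmd +noall +authority @a.nic.ch this-name-does-not-exist-1234.ch a
ch. 3600 IN SOA a.nic.ch. helpdesk.nic.ch. 2016041421 900 600 1123200 3600

A validating resolver additionally receives records proving the non-existence, but the caching time is governed by the SOA shown above.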

A lower negative caching time is more user-friendly
People who are about to register a new domain name may also look up the name over DNS. Doing so, however, caches the non-existence of the name in the resolver they are using. A domain can be registered in a matter of minutes, but the cached negative answer can prevent the registrant from using the new domain name on their network until the negative caching time expires.
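Operators of their own BIND resolver do not have to wait out the negative TTL, by the way; a cached entry can be dropped by hand (the domain name is an example):

rndc flushname mybrandnewdomain.ch

Everyone else has to wait, which is why a shorter negative TTL in the TLD zone is more user-friendly.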



A Yeti in the DNS

Most of the time, the Internet works without any problem; we can just power on our computer and start surfing… ok, most of the time. Many things have to be reliable to make this possible: power, cables, routers, computers, software and, last but not least, the DNS. This last point is one of the most critical parts of the Internet. Each time we read our favorite online newspapers, each time we check our e-mails, write and reply to them, or more generally, each time we use the Internet, many queries are sent to DNS servers to convert (more or less) meaningful Web addresses to IP addresses. And this is only the tip of the iceberg.

In the early days of the Internet, this task was handled by a single file. During the 1980s, however, it became clear that such a method would not scale, and so the DNS was born. It was designed in three parts. First, the stub resolver is located on your computer. It receives your question (what is the IP address of www.switch.ch?), transforms it into a standard DNS message and sends it over the network to the second part, the resolvers. These are able to find an answer almost instantly, either because somebody has already asked for it or by querying the third part, the authoritative servers, located all over the Internet. They are structured as a hierarchical tree, with the root servers at the top. Some of them know the answer to the question you asked.
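You can watch this tree in action yourself (illustrative commands; any root server will do). Asking a root server directly yields a referral instead of the final answer:

dig +norecurse @k.root-servers.net www.switch.ch a

The answer section stays empty, while the authority section lists the name servers for ch., one level further down the tree. A resolver repeats this step down to the switch.ch name servers and caches everything it learns along the way.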

Nowadays, the authoritative root of the tree is made up of 13 servers named alphabetically from a.root-servers.net to m.root-servers.net. In reality, a technique named anycast allows a much larger number of servers around the world to listen on (and answer from) the same addresses. For example, k.root-servers.net actually comprises 33 nodes spread all across the globe. To analyse the workload of the DNS, DNS-OARC (the DNS Operations, Analysis, and Research Center) compiles yearly statistics (A Day in the Life of the Internet, DITL). In 2015, it used a time window of three days and found that 10 of the 13 root servers answered about 60 billion queries in this period.

The current state of this infrastructure is robust. A single server failing to respond does not affect availability, and when a server is overloaded, more servers can simply be added to spread the traffic. However, the size and complexity of this infrastructure make it hard to analyse. The new Yeti DNS Project (www.yeti-dns.org) aims to study it by asking a number of such questions.



So Long, and Thanks for All the Domains

While Trojans like Dyre and Dridex are dominating malware-related news, we take the time to have a closer look at Tinba (Tiny Banker, Zusy, Illi), yet another Trojan that targets Windows users. In the first part of this post, we give a short historical review, followed by hints on how to detect (and remove) this threat on an infected system. In the second part, we look at a portion of the Trojan’s code that makes its communication more resilient, and at how we can leverage these properties for defensive purposes.

Tinba is a fine piece of work, initially written purely in assembly. CSIS discovered it back in May 2012, when it packed WebInject capability and rootkit functionality into a binary of just 20 KB. The source code of Tinba leaked in July 2014, helping bad guys to create their own, extended versions.

The source code of Tinba leaked in July 2014. Shown are some preparations to hook ZwQueryDirectoryFile.

Tinba on steroids was discovered in September 2014. Two main features are worth noting: First, each binary comes with a public key to check incoming control messages for authenticity and integrity. Second, there is a domain generation algorithm (DGA), which we will discuss later. In October 2014, Tinba entered Switzerland, mainly to phish for credit card information.
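To illustrate the concept (a generic toy sketch of a DGA, not Tinba’s actual algorithm, which we cover later), a DGA derives an ever-changing list of rendezvous domains from a seed and the current date, so the bots and their masters can find each other without any fixed infrastructure:

import datetime
import hashlib

def toy_dga(seed: bytes, day: datetime.date, count: int = 5) -> list[str]:
    """Derive pseudo-random .com domains from a seed and a date."""
    domains = []
    for i in range(count):
        data = seed + day.isoformat().encode() + bytes([i])
        digest = hashlib.sha256(data).hexdigest()
        domains.append(digest[:12] + ".com")
    return domains

print(toy_dga(b"example-seed", datetime.date(2015, 9, 1)))

Defenders who recover the seed can run the same computation ahead of time and register, sinkhole or block the upcoming domains, which is exactly the defensive angle we pick up later in this post.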

Tinba tried to phish credit card information.

Like other commodity Trojans, Tinba checks whether it is running in a virtual machine or sandboxed environment, for example by checking the hard-disk size or looking for user interaction. According to abuse.ch, there was an intense distribution of Tinba in Switzerland early this year. Such spam campaigns can happen again at any time, so it is useful to know how to detect Tinba on an infected system and remove it.

Even though Tinba has the ability to hide directories and files (rootkit functionality), the cybercriminals apparently wondered why they should bother using it: simply marking directories and files with the “hidden” attribute fools most users anyway. As a consequence, it is relatively simple for a computer-savvy user to remove this version of Tinba from an infected system (see instructions below).

A randomly named directory, which contains the Trojan itself, can be hidden by setting its attributes to “hidden”.
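As a generic illustration (not the exact removal instructions from the full post; the path and directory name are assumptions), such a directory becomes visible again once the attributes are cleared from the command prompt:

dir /a:h %AppData%
attrib -s -h %AppData%\{random-name} /s /d

The first command lists hidden entries; the second strips the “system” and “hidden” attributes from the named directory and its contents so it can be inspected and deleted.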



Protect your network with DNS Firewall

If you run your own mail server, you will quickly find out that around 90% of the e-mails you receive are spam. The solution to this problem is e-mail filtering, which rejects or deletes unwanted spam. This solution is generally well accepted, and most users would not want the old days back, when their inbox was filled with scams. Those who do want spam can work around the filtering by disabling it for their e-mail address or by running their own mail server.

Spam, scams and other malicious abuse are not unique to e-mail. One possible approach is to invent a filtering technology for every protocol or service and to let the service owners block misuse according to their policy. On the other hand, most services on the Internet make use of the Domain Name System (DNS). If you control DNS name resolution for your organisation, you can filter out the bad stuff the same way you filter out spam in e-mail. The difference, and the advantage, of DNS filtering is that it is independent of the service you use.

Back in 2010, ISC and Paul Vixie invented a technology called Response Policy Zones (RPZ) (see the CircleID post “Taking Back the DNS”). While it has always been possible to block certain domain names from being resolved on your DNS resolver, adding host names manually as authoritative zones does not scale. RPZ instead distributes the blocking policy as an ordinary DNS zone that your resolver subscribes to via zone transfer.
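As a minimal sketch of how this looks in BIND (our illustration; zone names and addresses are placeholders), the resolver subscribes to a policy zone and applies its rules to every answer:

options {
    response-policy { zone "rpz.example.net"; };
};

zone "rpz.example.net" {
    type slave;
    masters { 192.0.2.10; };        # the provider of the policy feed
    file "rpz.example.net.db";
};

Inside the policy zone, an entry such as “badhost.example.com CNAME .” rewrites lookups of that name to NXDOMAIN, and policy updates arrive continuously via incremental zone transfers.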

(Illustration Christoph Frei)



DNSSEC signing your domain with BIND inline signing

With BIND 9.9, ISC introduced a new inline signing option. In earlier versions of BIND, you had to sign your zone manually with the dnssec-signzone utility and re-sign it whenever it changed. With inline signing, however, BIND refreshes your signatures automatically, while you can still work on the unsigned zone file to make your changes.

This blog post explains how you can set up your zone with BIND inline signing. The zone we are using is called example.com. In addition, we look at how to roll over your keys; in our example, we perform a Zone Signing Key (ZSK) rollover. We assume that you are already familiar with ISC BIND and have a basic understanding of DNSSEC. More specifically, you should be able to set up an authoritative-only name server, and you should have read up on DNSSEC and perhaps used some of its functions already.

Architecture

Before we set up inline signing with BIND, let us look at a typical network architecture. We will set up inline signing on a hidden master name server. This server is not directly exposed to the Internet; the zone is published through one or more publicly reachable secondary name servers. We will only cover the configuration of the hidden master, as the secondary name server configuration does not differ for a signed zone (assuming you are using DNSSEC-capable name server software).
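As a rough preview of what follows (a sketch with placeholder names, paths and key parameters, not a complete recipe), the hidden master needs a key pair for the zone and an inline-signing zone statement:

dnssec-keygen -a RSASHA256 -b 2048 example.com           # creates the ZSK
dnssec-keygen -a RSASHA256 -b 2048 -f KSK example.com    # creates the KSK

zone "example.com" {
    type master;
    file "zones/example.com.db";    # the unsigned zone file you keep editing
    key-directory "keys";
    inline-signing yes;             # BIND maintains the signed copy automatically
    auto-dnssec maintain;           # load keys and refresh signatures as needed
};

BIND then writes the signed zone to example.com.db.signed and keeps its signatures fresh without any further manual signing.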