DNS Amplification Attacks
There has been a lot of talk recently about DNS amplification attacks (with prominent news reports of high-bandwidth attacks targeted at anti-spam services, cloud providers, financial institutions, etc). These are a class of denial-of-service attack that uses DNS servers to direct large amounts of traffic at unsuspecting victims. The attackers send a large stream of queries to the DNS servers using the forged source addresses of their victims, and this results in a much larger stream of responses being sent from those DNS servers back to the victim computers, with the aim of overwhelming them or their network connections. Although the DNS servers employed in the attack can themselves be adversely impacted, the primary targets of the attacks are the computers whose addresses are being forged.
Attacks that employ IP source address forgery are effective on the Internet today because the countermeasures that would prevent this forgery are not very widely deployed. BCP 38 (a 13-year-old document!) and BCP 84 describe network ingress filtering as a way for ISPs to block forged traffic from their customer networks, but many ISPs fail to employ it. Organizations should also ideally configure their networks to prevent internally generated traffic with forged addresses from crossing their borders - typically a border Access Control List, or more granular techniques like unicast reverse path forwarding (uRPF, a per-subnet antispoofing method), can be used to do this, but once again, these are not in very widespread use.
In the past, the attacks commonly employed ‘open recursive DNS resolvers’ - DNS recursive resolvers that will resolve names for any client computer without restriction. When such servers are available, the attacker can use a DNS record of their choosing (possibly one under the attacker’s control) that intentionally generates a very large response. To prevent such abuse, many organizations these days lock down their recursive resolvers so that they answer requests only from clients on their own networks; Penn does this also. There are, however, public DNS resolver services, like Google DNS and OpenDNS, that by design are open to the world, and so need to have effective countermeasures in place to deal with these potential attacks. Both Google and OpenDNS say that they do.
There are still a huge number of open recursive DNS resolvers on the Internet, the vast majority of which are probably unintentionally so. The Open DNS Resolver Project has been cataloging them and reports a number in the neighborhood of 25 million!
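Testing whether a particular server behaves as an open recursive resolver is straightforward; here is a minimal sketch using the dnspython library (the server address and query name below are just placeholders):

import dns.flags
import dns.message
import dns.query
import dns.rcode

def is_open_resolver(server, test_name="www.example.com"):
    # make_query sets the RD (recursion desired) bit by default; ask for a
    # name the server is presumably not authoritative for.
    query = dns.message.make_query(test_name, "A")
    try:
        response = dns.query.udp(query, server, timeout=3)
    except Exception:
        return False
    # An open resolver recurses and answers: NOERROR, RA flag set, answer present.
    return (response.rcode() == dns.rcode.NOERROR
            and (response.flags & dns.flags.RA) != 0
            and len(response.answer) > 0)

print(is_open_resolver("192.0.2.1"))   # placeholder address (TEST-NET-1)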
But authoritative DNS servers (which need to be open to the world) are also quite vulnerable to amplification attacks. In this case, the attacker cannot choose an arbitrary DNS record, but instead must use only records that already exist in the authoritative server’s zones. However, DNS responses are almost always amplifying (i.e. the size of the response is a lot larger than the size of the request), so it’s only a question of the scale of attack that can be achieved.
Interestingly, DNSSEC, a security enhancement to the DNS protocol, makes the amplification problem significantly worse, since DNS responses with cryptographic signatures are much bigger than normal, unsigned DNS responses. Hence lately, we’ve been seeing a lot of these attacks target authoritative servers with zones that are signed with DNSSEC.
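The size difference is easy to see by comparing responses to the same query with and without the DNSSEC records requested (DO bit clear versus set). A small sketch with dnspython, using one of our own authoritative servers as the example:

import dns.message
import dns.query

def response_size(zone, server, want_dnssec):
    # ANY query at the zone apex, with EDNS0 and a 4096-byte advertised buffer.
    q = dns.message.make_query(zone, "ANY", use_edns=0, payload=4096,
                               want_dnssec=want_dnssec)
    r = dns.query.udp(q, server, timeout=3)
    return len(r.to_wire())

server = "128.91.254.22"   # adns2.upenn.edu
print(response_size("upenn.edu", server, want_dnssec=False))
print(response_size("upenn.edu", server, want_dnssec=True))   # much larger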
Penn was one of the earliest organizations to deploy DNSSEC (August 2009, well before the root and most of the TLDs were signed). We first noticed such attacks on our authoritative servers in July of last year (2012). At that time we didn’t have a good way to counteract them, but our server and network infrastructure was easily able to absorb the attacks, so we didn’t take any action; the attacks continued for a few weeks and then disappeared. In late January 2013, though, a new round of amplification attacks occurred on a much larger scale that did negatively affect our infrastructure, causing high load on two of our servers and almost saturating the outbound bandwidth on their network connections.
By this time, code to implement an experimental response rate limiting countermeasure was available (see Vernon Schryver and Paul Vixie’s Response Rate Limiting (RRL); implementations are available for popular DNS server software such as BIND, NSD, etc). We deployed these enhancements shortly after the new attacks, and they have effectively addressed the problem for the time being. The RRL code keeps track of client requests and, for repeated requests for the same records from the same client addresses, rate limits the responses, either by silently ignoring some requests or by providing small ‘truncated’ responses. The working assumption is that well-behaved DNS resolvers cache responses for the advertised TTL of the record and so should not be making repeated queries for the same record in a short period of time. The truncated responses, when encountered, cause well-behaved DNS resolvers to retry their query over TCP, which cannot be effectively used in forged-address attacks. More details of how this works are available in this technical note describing the operation of RRL. I’ve heard that the RRL extensions are planned to be incorporated officially into BIND 9.10, although from the recent announcement about ISC’s new DNSco subsidiary, it isn’t clear whether this feature will be available only to commercial customers.
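The essential idea can be sketched in a few lines of Python. This is a drastic simplification, not the actual BIND or NSD implementation (which, among other things, aggregates clients by network prefix and has many more tuning knobs); the limit and slip values here are purely illustrative:

import time
from collections import defaultdict

LIMIT_PER_SECOND = 5     # per-client, per-record response budget (illustrative)
SLIP = 2                 # send every 2nd over-limit response truncated

counters = defaultdict(lambda: [0, 0.0])   # (client, qname, qtype) -> [count, window_start]

def rrl_decision(client_addr, qname, qtype):
    # Return 'respond', 'truncate' (small TC=1 reply), or 'drop'.
    now = time.monotonic()
    key = (client_addr, qname, qtype)
    count, start = counters[key]
    if now - start >= 1.0:                  # start a new one-second window
        counters[key] = [1, now]
        return "respond"
    counters[key][0] = count + 1
    if count + 1 <= LIMIT_PER_SECOND:
        return "respond"
    # Over the limit: "slip" occasional truncated replies (so legitimate
    # resolvers retry over TCP) and silently drop the rest.
    return "truncate" if (count + 1) % SLIP == 0 else "drop"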
Any record that produces a large response can be effectively employed in these attacks. Most of the attacks to date, though, have used the DNS resource record type ‘ANY’ (RR type 255). This query, when directed to an authoritative server’s zone name, returns all records at the apex of the zone. With DNSSEC, you’ll additionally get DNSKEY, RRSIG, and NSEC/NSEC3 records. To give an idea of the scale of the amplification, a DNSSEC-enabled “upenn.edu, ANY” query generates a response that is roughly 88 times as large as the request (a query of about 38 bytes, and a response of 3,344 bytes). The actual amplification ratio of the bits on the wire is less than this, because we have to consider the encapsulating headers (L2, IP, and UDP). With Ethernet (14 bytes), IPv4 (20 bytes), and UDP (8 bytes), a 38-byte DNS query occupies an 80-byte Ethernet frame. The 3,344-byte DNS response exceeds the Ethernet MTU and is fragmented into 3 Ethernet frames, totaling an estimated 3,470 bytes. This yields a packet-level amplification ratio of about 43x, so in the absence of rate limiting countermeasures, a 1 Mbps stream of query traffic (about 1,500 queries/second) would have produced a stream of over 40 Mbps directed at the victim.
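For reference, here is the wire-level arithmetic as a minimal Python sketch. As a simplification (consistent with the figures quoted above), it counts full Ethernet/IP/UDP overhead on every fragment:

import math

OVERHEAD = 14 + 20 + 8                 # Ethernet + IPv4 + UDP headers: 42 bytes
MAX_FRAG = 1500 - 20 - 8               # max DNS payload per packet at Ethernet MTU

def wire_size(dns_payload_bytes):
    # Estimated bytes on the wire, counting full overhead on each fragment.
    packets = max(1, math.ceil(dns_payload_bytes / MAX_FRAG))
    return dns_payload_bytes + packets * OVERHEAD

query, response = 38, 3344             # the upenn.edu ANY (DO=1) example
print(wire_size(query))                # 80
print(wire_size(response))             # 3470 (3 fragments)
print(wire_size(response) / wire_size(query))   # about 43x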
Even non-DNSSEC zones produce quite substantial amplification though, often easily sufficient for an effective attack. From discussions with a number of colleagues at other institutions, it’s clear that non-DNSSEC sites have also been undergoing the same types of attacks and have had to deploy rate limiting countermeasures.
Some people have proposed not answering requests for ANY (after all, ANY was only meant to be a diagnostic tool and was not intended for production use). This might buy time until attackers adapt to using other records, but it could also cause collateral damage: it turns out there is a variety of software that uses ANY for different purposes. For example, the PowerDNS recursor uses it to obtain A and AAAA records from authority servers in one query and response (normally two queries and responses would be required).
So, what can be done in the long term? The RRL implementations appear to be working very well at many sites, but attackers will undoubtedly adapt their methods, perhaps by performing more highly distributed attacks across many authoritative servers using a larger set of record types. Some folks at NLnet Labs have written a good paper that discusses some of the issues: Defending against DNS reflection amplification attacks. Also, although RRL has been designed very carefully to minimize collateral damage, there will still be situations in which it might not work very well, e.g. when dealing with resolvers behind large-scale NATs and CGNs, a problem which might become increasingly common as we approach IPv4 address depletion.
There doesn’t seem to be much hope (or incentive) for wide-scale deployment of BCP 38 and other methods to reduce the scope of source address forgery.
Ideas have also been proposed in the IETF to enhance the DNS query/response conversation with lightweight authentication cookies, which might thwart most forgery-based attacks. See http://tools.ietf.org/html/draft-eastlake-dnsext-cookies-03 for example. But these would require wide-scale updates to a lot of DNS software to have an effect, and have thus far not gained much traction.
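The gist of the idea can be illustrated with dnspython’s generic EDNS option support. This is only a rough illustration, not the draft’s actual wire format, and the option code used here is just a placeholder:

import os
import dns.edns
import dns.message
import dns.query

COOKIE_OPTION_CODE = 10          # placeholder option code, for illustration only
client_cookie = os.urandom(8)    # random client cookie

query = dns.message.make_query("upenn.edu", "SOA")
query.use_edns(edns=0, payload=4096,
               options=[dns.edns.GenericOption(COOKIE_OPTION_CODE, client_cookie)])
response = dns.query.udp(query, "128.91.254.22", timeout=3)
# A cookie-aware server would echo/extend the cookie in its own EDNS option;
# an attacker spoofing someone else's address never sees it.
print(response.options)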
Forcing DNS queries to use TCP is probably much too heavyweight a solution, and would impose large costs in terms of DNS response latency and server resource requirements, although the experimental TCP cookie transactions extension (see RFC 6013 - http://tools.ietf.org/html/rfc6013 and this USENIX paper) aims to address some of the issues. It may be necessary to consider a TCP-based solution in light of some operational problems observed with UDP and large DNS packets, for example firewalls and other middleboxes that do not pass IP fragments, or that botch the handling of extension mechanisms like EDNS0 that negotiate the use of large UDP DNS payloads.
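For a sense of the trade-off, here is a small sketch comparing the same query over UDP and over TCP with dnspython (server and zone are examples only); the TCP path adds a connection handshake and per-query connection state on the server:

import time
import dns.message
import dns.query

q = dns.message.make_query("upenn.edu", "ANY", use_edns=0, payload=4096,
                           want_dnssec=True)
server = "128.91.254.22"

t0 = time.time()
dns.query.udp(q, server, timeout=5)
print("UDP:", time.time() - t0)

t0 = time.time()
dns.query.tcp(q, server, timeout=5)    # TCP handshake + query adds latency
print("TCP:", time.time() - t0)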
Response amplification survey at Internet2 schools
I was interested in knowing what size amplification the ANY query would produce at some other sites, so I wrote a quick program to do this and tabulate the results. I chose the set of 210 Internet2 universities that I already monitor at my dnsstat website. For each zone, I ran an EDNS0 (DO=1) DNS query for ANY at the zone apex via each of the authoritative servers for the zone, and measured the query and response size and the resulting amplification (response_size/query_size). The full set of data can be seen here, but I’ll just excerpt some of the entries from the beginning of the file, sorted by descending order of response amplification. As expected, the largest amplifications (in the neighborhood of 100x for DNS payloads, 50x for full packets) are all from DNSSEC-signed zones, but many non-DNSSEC zones produce large amplifications too. The very low amplification ratios for the zones towards the end of the dataset are mostly due to DNS servers that don’t understand EDNS0 and return FORMERR (format error) responses.
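The measurement itself amounts to roughly the following (a minimal dnspython sketch, not the actual program; the packet-level estimate counts full Ethernet/IP/UDP overhead on each fragment, as in the calculation earlier):

import math
import dns.message
import dns.query

OVERHEAD = 14 + 20 + 8            # Ethernet + IPv4 + UDP header bytes
MAX_FRAG = 1500 - 20 - 8          # max DNS payload per packet at Ethernet MTU

def wire(size):
    return size + max(1, math.ceil(size / MAX_FRAG)) * OVERHEAD

def measure(zone, server_addr):
    # EDNS0 (DO=1) ANY query at the zone apex, sent to one authoritative server.
    q = dns.message.make_query(zone, "ANY", use_edns=0, payload=4096,
                               want_dnssec=True)
    r = dns.query.udp(q, server_addr, timeout=5)
    # Re-serialized sizes; approximately the on-the-wire payload sizes.
    qsize, rsize = len(q.to_wire()), len(r.to_wire())
    amp1 = rsize / qsize              # DNS payload ratio
    amp2 = wire(rsize) / wire(qsize)  # estimated packet-level ratio
    return qsize, rsize, round(amp1, 2), round(amp2, 2)

# Example, using one of the upenn.edu servers from the data below:
print(measure("upenn.edu", "128.91.254.22"))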
#########################################################################
# DNS response amplification results for ANY query at zone apex.
# Data collected 2013-04-10
# Sorted by descending order of the estimated packet-level amplification
# ratio (amp2, the last column)
# Line Format:
# zone ns_name ns_address query_size response_size amp1 amp2
# amp1 is the ratio of the DNS response payload to the DNS query payload
# (payloads only; it doesn't include the encapsulating UDP/IP/L2 headers)
# amp2 is the estimated ratio of the entire response packet(s) to the request
# packet, assuming an Ethernet path MTU.
#########################################################################
ksu.edu nic.kanren.net. 164.113.192.242 36.0 4085.0 113.47 53.99
ksu.edu kic.kanren.net. 164.113.92.250 36.0 4085.0 113.47 53.99
umbc.edu UMBC3.umbc.edu. 130.85.1.3 37.0 4093.0 110.62 53.41
lsu.edu phloem.uoregon.edu. 128.223.32.35 36.0 4016.0 111.56 53.10
lsu.edu bigdog.lsu.edu. 192.16.176.1 36.0 4016.0 111.56 53.10
umbc.edu UMBC5.umbc.edu. 130.85.1.5 37.0 4045.0 109.32 52.80
umbc.edu UMBC4.umbc.edu. 130.85.1.4 37.0 4045.0 109.32 52.80
sdsmt.edu ns5.gratisdns.dk. 85.17.221.46 38.0 4095.0 107.76 52.76
sdsmt.edu ns4.gratisdns.dk. 87.73.3.3 38.0 4095.0 107.76 52.76
sdsmt.edu ns2.gratisdns.dk. 208.43.238.42 38.0 4095.0 107.76 52.76
sdsmt.edu ns1.gratisdns.dk. 109.238.48.13 38.0 4095.0 107.76 52.76
uiowa.edu dns3.uiowa.edu. 128.255.1.27 38.0 4093.0 107.71 52.74
uiowa.edu dns2.uiowa.edu. 128.255.64.26 38.0 4093.0 107.71 52.74
uiowa.edu dns1.uiowa.edu. 128.255.1.26 38.0 4093.0 107.71 52.74
lsu.edu otc-dns2.lsu.edu. 130.39.254.30 36.0 3972.0 110.33 52.54
lsu.edu otc-dns1.lsu.edu. 130.39.3.5 36.0 3972.0 110.33 52.54
ucr.edu adns2.berkeley.edu. 128.32.136.14 36.0 3965.0 110.14 52.45
ucr.edu adns1.berkeley.edu. 128.32.136.3 36.0 3965.0 110.14 52.45
ualr.edu ns4.ualr.edu. 130.184.15.85 37.0 3979.0 107.54 51.96
ualr.edu ns3.ualr.edu. 144.167.5.50 37.0 3979.0 107.54 51.96
ualr.edu ns2.ualr.edu. 144.167.10.1 37.0 3979.0 107.54 51.96
ualr.edu ns.ualr.edu. 144.167.10.48 37.0 3979.0 107.54 51.96
uiowa.edu sns-pb.isc.org. 192.5.4.1 38.0 4013.0 105.61 51.74
sdsmt.edu ns3.gratisdns.dk. 194.0.2.6 38.0 3963.0 104.29 51.11
berkeley.edu ns.v6.berkeley.edu. 128.32.136.6 41.0 4040.0 98.54 50.19
berkeley.edu adns2.berkeley.edu. 128.32.136.14 41.0 4040.0 98.54 50.19
berkeley.edu adns1.berkeley.edu. 128.32.136.3 41.0 4040.0 98.54 50.19
upenn.edu noc3.dccs.upenn.edu. 128.91.251.158 38.0 3866.0 101.74 49.90
berkeley.edu sns-pb.isc.org. 192.5.4.1 41.0 3996.0 97.46 49.66
berkeley.edu phloem.uoregon.edu. 128.223.32.35 41.0 3996.0 97.46 49.66
ksu.edu ns-3.ksu.edu. 129.130.139.150 36.0 3651.0 101.42 48.42
ksu.edu ns-2.ksu.edu. 129.130.139.151 36.0 3651.0 101.42 48.42
ksu.edu ns-1.ksu.edu. 129.130.254.21 36.0 3651.0 101.42 48.42
indiana.edu dns1.iu.edu. 134.68.220.8 40.0 3671.0 91.78 46.30
indiana.edu dns1.illinois.edu. 130.126.2.100 40.0 3671.0 91.78 46.30
indiana.edu dns2.iu.edu. 129.79.1.8 40.0 3655.0 91.38 46.11
mst.edu dns02.srv.mst.edu. 131.151.245.19 36.0 3433.0 95.36 45.63
okstate.edu ns2.cis.okstate.edu. 139.78.200.1 40.0 3599.0 89.97 45.43
okstate.edu ns.cis.okstate.edu. 139.78.100.1 40.0 3599.0 89.97 45.43
mst.edu ns-2.mst.edu. 131.151.247.41 36.0 3417.0 94.92 45.42
mst.edu ns-1.mst.edu. 131.151.247.40 36.0 3417.0 94.92 45.42
mst.edu dns01.srv.mst.edu. 131.151.245.18 36.0 3417.0 94.92 45.42
mst.edu dns03.srv.mst.edu. 131.151.245.20 36.0 3385.0 94.03 45.01
mst.edu ns1.umsl.com. 134.124.31.136 36.0 3369.0 93.58 44.81
ucr.edu ns2.ucr.edu. 138.23.80.20 36.0 3356.0 93.22 44.64
ucr.edu ns1.ucr.edu. 138.23.80.10 36.0 3356.0 93.22 44.64
ksu.edu nic.kanren.net. 2001:49d0:2008:f000::5 36.0 4085.0 113.47 43.58
ksu.edu kic.kanren.net. 2001:49d0:2003:f000::5 36.0 4085.0 113.47 43.58
upenn.edu dns2.udel.edu. 128.175.13.17 38.0 3344.0 88.00 43.38
upenn.edu dns1.udel.edu. 128.175.13.16 38.0 3344.0 88.00 43.38
upenn.edu adns2.upenn.edu. 128.91.254.22 38.0 3344.0 88.00 43.38
upenn.edu sns-pb.isc.org. 192.5.4.1 38.0 3312.0 87.16 42.98
lsu.edu phloem.uoregon.edu. 2001:468:d01:20::80df:2023 36.0 4016.0 111.56 42.88
lsu.edu bigdog.lsu.edu. 2620:105:b050::1 36.0 4016.0 111.56 42.88
[ ... rest of data omitted ...]
– Shumon Huque