There is nothing to debate. This is, in my opinion, an epic find. Because DNS is (in most cases) based on UDP (connectionless), the "security" is based on a handful of characteristics. Other flaws aside, a proper DNS client will only accept DNS responses that have the following characteristics:
- The response contains an answer to a question that the client asked
- The response came from a nameserver that the client originally initiated the request to
- The TXID of the response matches that of the request
- The destination port of the response matches the source port of the original request
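To put rough numbers on why the last two characteristics matter: with the TXID as the only unknown, a blind attacker needs to guess 16 bits per forged response; a randomized source port adds roughly another 16. A quick back-of-the-envelope sketch (the usable-port count of 64512, i.e. ports 1024-65535, is my assumption -- implementations vary):

```shell
# Odds that a single blindly spoofed response is accepted, assuming the
# attacker already knows the question and the nameserver's address.
txids=65536                  # 16-bit TXID space
ports=64512                  # assumed usable source ports (1024-65535)

echo "TXID only:           1 in $txids"
echo "TXID + random port:  1 in $((txids * ports))"
```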
Dan's find, the full details of which are being withheld until BlackHat 2008 this August, nearly eliminates the source port obstacle and likely also tackles, or at least shrinks, the problem space of the TXID check. I have to be honest here -- anyone who didn't notice this particular anomaly in the past should be embarrassed -- I know I am. If you were to view the state table of a firewall fronting a vulnerable DNS implementation, you'd see that the source port of all outbound DNS requests from that implementation stays fixed for an inordinately long time:
sis0 udp a.b.c.d:6489 -> 192.58.128.30:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 192.35.51.30:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 144.142.2.6:53 MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 -> 199.7.83.42:53 MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 -> 192.12.94.30:53 MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 -> 192.153.156.3:53 MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 -> 204.2.178.133:53 MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 -> 192.33.4.12:53 MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 -> 204.74.113.1:53 MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 -> 61.200.81.111:53 SINGLE:NO_TRAFFIC
sis0 udp a.b.c.d:6489 -> 207.252.96.3:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 163.192.1.10:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 192.31.80.30:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 216.239.38.10:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 216.239.34.10:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 64.233.167.9:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 66.249.93.9:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 64.233.161.9:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 -> 206.132.100.105:53 SINGLE:NO_TRAFFIC
As you can see, the source port is identical for every outbound DNS request. The port does change eventually -- I am not sure of the exact details of when or why -- but it is safe to say it remains constant long enough to be a real concern.
In addition to my work-related duties, I also have my own personal systems to worry about. My client systems were quickly patched, but they remained vulnerable because of my setup -- all DNS requests from my network are forced through an internal DNS server that I control. I use OpenBSD almost exclusively for my routing, firewalling and other common "gateway" type functionality, and it just so happens that my DNS server runs OpenBSD too. So although my clients would be safe from this attack if they queried external nameservers directly, my internal DNS server put them at risk. I was surprised to find OpenBSD listed as "unknown" in the US-CERT announcement, and now, more than a week later, there is still no mention of the issue on any of the OpenBSD mailing lists or on the website.
Since my DNS server runs on a little, low-power device, the thought of rebuilding something from source, or any other disk/CPU-intensive operation, makes me cringe. That's what got me thinking -- is there a way to protect myself until a patch is available from my vendor and I have time to deal with it?
As it turns out, I believe there is. My interim solution is to use PF to provide some randomization to my outbound DNS requests.
If your DNS server sits behind a NATing PF device, the source port randomization is implicit -- see the section titled TRANSLATION in the pf.conf manpage. If your DNS server is also running on the PF device, as it is in my case, you are still vulnerable unless you configure things correctly.
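To illustrate the behind-NAT case: an ordinary NAT rule already rewrites the source port to one chosen at random by PF, so no extra configuration is needed. A sketch (the 10.0.0.53 address and $WAN_IF macro are placeholders, not from my actual config):

```
# Hypothetical internal DNS server at 10.0.0.53 behind the PF gateway.
# Plain NAT replaces its source port with one picked at random by PF.
nat on $WAN_IF inet proto { tcp, udp } from 10.0.0.53 to any port 53 -> ($WAN_IF)
```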
The trick is to force requests coming from your DNS server, which have non-random source ports, to be processed by PF so that they too are subject to the implicit source port randomization. This is actually quite simple, and you may already be doing this without realizing it, depending on your setup. Assuming the outbound address of your DNS server (or PF device itself) is a.b.c.d, the following NAT entry in your pf.conf should be sufficient:
nat on $WAN_IF inet proto { tcp, udp } from a.b.c.d to any port 53 -> a.b.c.d
This results in a dramatically different state table:
sis0 udp a.b.c.d:40783 -> a.b.c.d:59623 -> 192.55.83.30:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:64916 -> 199.7.83.42:53 MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:50591 -> 202.12.28.131:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:60017 -> 192.55.83.30:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:63472 -> 200.160.0.7:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:55603 -> 200.189.40.10:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:50749 -> 192.52.178.30:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:64190 -> 192.83.166.11:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:64346 -> 128.63.2.53:53 MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:59970 -> 61.220.48.1:53 SINGLE:NO_TRAFFIC
As you can see, the original source port is still fixed in the internal table, however the source port that is used out there on the big bad Internet, where the risk lies, is random. To add even further randomness, if you have multiple addresses at your disposal, you can distribute outbound DNS requests across a pool of those addresses:
nat on $WAN_IF inet proto { tcp, udp } from a.b.c.d to any port 53 -> { a.b.c.1, a.b.c.2, a.b.c.3, ... } round-robin

# or if you have a contiguous space available, use the 'random' modifier
nat on $WAN_IF inet proto { tcp, udp } from a.b.c.d to any port 53 -> { somenetwork/mask } random
The astute reader will probably have noticed already that including TCP DNS traffic in these countermeasures is major overkill -- TCP's connection state and sequence numbers already defeat blind spoofing -- however I cannot think of any adverse side effects of doing so.
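If you would rather narrow the rule anyway, a UDP-only variant (same syntax as above, just without the tcp member of the protocol list) would look like:

```
# Randomize only UDP DNS; TCP lookups keep their original source ports
nat on $WAN_IF inet proto udp from a.b.c.d to any port 53 -> a.b.c.d
```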
The proof is in the pudding. Using the handy service set up by DNS-OARC, you can see that my setup now rates "FAIR":
$ dig +short porttest.dns-oarc.net TXT
z.y.x.w.v.u.t.s.r.q.p.o.n.m.l.k.j.i.h.g.f.e.d.c.b.a.pt.dns-oarc.net.
"a.b.c.d is FAIR: 14 queries in 10598.6 seconds from 14 ports with std dev 4249.80"
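For a sense of where that score comes from, the "std dev" porttest reports is just the spread of the source ports it observed. You can compute the same statistic yourself from the NAT'd state table shown earlier (an awk sketch; the ten ports are copied from that table, so the result will differ from porttest's figure):

```shell
# Standard deviation of the randomized source ports from the state table
echo '59623 64916 50591 60017 63472 55603 50749 64190 64346 59970' |
awk '{ for (i = 1; i <= NF; i++) { s += $i; ss += $i * $i; n++ }
       m = s / n
       printf "%d ports, mean %.1f, std dev %.1f\n", n, m, sqrt(ss / n - m * m) }'
```

A tight cluster of ports (low std dev) would rate POOR even if the ports technically change; the wider the spread, the better the rating.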