Monday, July 14, 2008

Mitigating DNS cache poisoning with PF

By now, nearly everyone has heard about the latest round of DNS vulnerabilities -- even non-technical people, given the massive media coverage. If you have not heard of it, you probably wouldn't be reading this; on the off chance that I am wrong, please read US-CERT VU#800113 and CVE-2008-1447 and then continue here. Most organizations are scrambling to find and apply fixes, while some are still sitting idly, debating the merits of the find.

There is nothing to debate. This is, in my opinion, an epic find. Because DNS is (in most cases) based on UDP (connectionless), the "security" is based on a handful of characteristics. Other flaws aside, a proper DNS client will only accept DNS responses that have the following characteristics:

  1. The response contains an answer to a question that the client asked
  2. The response came from a nameserver that the client originally initiated the request to
  3. The TXID of the response matches that of the request
  4. The destination port of the response matches the source port of the original request
Any attacker wishing to forge a DNS response in the hopes of manipulating the behavior of a DNS client could fairly easily guess or otherwise obtain points one and two. Point three has been flawed in the past, but even with the best RNGs it is still just a 16-bit number, which gives only 65536 possible values. Point four is in theory also a 16-bit number, with a few caveats, so it is only slightly less random than a true 16-bit space.
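
To put rough numbers on those last two points, consider the blind attacker's search space with and without source-port randomization (a back-of-the-envelope sketch; real attacks also exploit the birthday effect across many simultaneous queries):

```javascript
// Rough search-space arithmetic for a blind spoofing attack.
const TXID_SPACE = 65536;  // 16-bit transaction ID
const PORT_SPACE = 65536;  // 16-bit source port, if actually randomized

// With a fixed, predictable source port, the attacker only has to hit the TXID:
const fixedPortGuesses = TXID_SPACE;                // 65,536

// With a random source port, TXID and port must match simultaneously:
const randomPortGuesses = TXID_SPACE * PORT_SPACE;  // 4,294,967,296

console.log(fixedPortGuesses, randomPortGuesses);
```

Randomizing the port multiplies the work by 65536, which is the entire point of the mitigation below.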

Dan's find, the full details of which are being withheld until BlackHat 2008 this August, nearly eliminates the source-port difficulty and likely also tackles, or at least reduces, the problem space of the TXID issue. I have to be honest here -- anyone who didn't notice this particular anomaly in the past should be embarrassed -- I know I am. If you were to view the state table of a firewall fronting a vulnerable DNS implementation, you'd see that the source port of all outbound DNS requests from that implementation stays fixed for an inordinately long time:

sis0 udp a.b.c.d:6489 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:6489 ->       SINGLE:NO_TRAFFIC
sis0 udp a.b.c.d:6489 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:6489 ->       SINGLE:NO_TRAFFIC

As you can see, the source port is identical for all outbound DNS requests. This port does change eventually -- I am not sure of the exact details surrounding when or why -- but it is sufficient to say that it remains constant long enough to be a concern.
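
If you want to check your own firewall for this symptom, you can feed state-table lines like the ones above through a quick script and count the distinct source ports in use; a single distinct port across many DNS states is the red flag. This is my own illustrative sketch against the line format shown above, not an official PF tool:

```javascript
// Count distinct source ports across UDP states, given lines in the
// `pfctl -s state` format shown above.  If many outbound DNS states all
// share one source port, your resolver is a sitting duck.
function distinctDnsSourcePorts(stateLines) {
  const ports = new Set();
  for (const line of stateLines) {
    // pull the port out of "udp a.b.c.d:6489 ->"
    const m = line.match(/udp\s+\S+:(\d+)\s+->/);
    if (m) ports.add(m[1]);
  }
  return ports.size;
}
```

A return value of 1 over a long capture window means every request reused the same port.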

In addition to my work-related duties, I also have my own personal systems to worry about. My client systems were quickly fixed, however my setup keeps them exposed -- all DNS requests from my network are forced to go through an internal DNS server that I control. I use OpenBSD almost exclusively for my routing, firewalling and other common "gateway" type functionality, and it just so happens that my DNS server runs OpenBSD too. So despite my clients being safe from this attack if they were querying external nameservers directly, my internal DNS server puts them at risk. I was surprised to find on the US-CERT announcement that OpenBSD was listed as "unknown", and now, more than a week later, there is still no mention of the issue on any of the OpenBSD mailing lists or on the website.

Since my DNS server runs on a little, low-power device, the thought of having to rebuild something from source or perform other disk/CPU intensive operations makes me cringe. That's what got me thinking -- is there a way I can protect myself until a patch is available from my vendor and I have time to deal with it?

As it turns out, I believe there is. My interim solution is to use PF to provide some randomization to my outbound DNS requests.

If your DNS server sits behind a NATing PF device, the source port randomization is implicit -- see the section titled TRANSLATION in the pf.conf manpage. If your DNS server is also running on the PF device, as it is in my case, you are still vulnerable unless you configure things correctly.

The trick is to force requests coming from your DNS server, which have non-random source ports, to be processed by PF so that they too are subject to the implicit source port randomization. This is actually quite simple, and you may already be doing this without realizing it, depending on your setup. Assuming the outbound address of your DNS server (or PF device itself) is a.b.c.d, the following NAT entry in your pf.conf should be sufficient:

nat on $WAN_IF inet proto { tcp, udp } from a.b.c.d to any port 53 -> a.b.c.d

This results in a dramatically different state table:

sis0 udp a.b.c.d:40783 -> a.b.c.d:59623 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:64916 ->       MULTIPLE:MULTIPLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:50591 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:60017 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:63472 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:55603 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:50749 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:64190 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:64346 ->       MULTIPLE:SINGLE
sis0 udp a.b.c.d:40783 -> a.b.c.d:59970 ->       SINGLE:NO_TRAFFIC

As you can see, the original source port is still fixed in the internal table, however the source port that is used out there on the big bad Internet, where the risk lies, is random. To add even further randomness, if you have multiple addresses at your disposal, you can distribute outbound DNS requests across a pool of those addresses:

nat on $WAN_IF inet proto { tcp, udp } from a.b.c.d to any port 53 -> { a.b.c.1, a.b.c.2, a.b.c.3, ... } round-robin
# or if you have a contiguous space available, use the 'random' modifier
nat on $WAN_IF inet proto { tcp, udp } from a.b.c.d to any port 53 -> { somenetwork/mask } random

The astute reader will have probably already noticed that it is major overkill to include TCP DNS traffic in these countermeasures, however I cannot think of any adverse side effects of doing so.

The proof is in the pudding. Utilizing the handy service set up by DNS-OARC, you can see that my setup now rates "FAIR":

$  dig +short TXT
"a.b.c.d is FAIR: 14 queries in 10598.6 seconds from 14 ports with std dev 4249.80"
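
The "std dev" in that verdict is simply the standard deviation of the source ports observed across your queries -- higher is better. A quick reimplementation of the statistic (my own sketch, not DNS-OARC's code):

```javascript
// Population standard deviation of a list of observed source ports --
// the same flavor of statistic the DNS-OARC verdict line reports.
function portStdDev(ports) {
  const mean = ports.reduce((a, b) => a + b, 0) / ports.length;
  const variance =
    ports.reduce((a, p) => a + (p - mean) * (p - mean), 0) / ports.length;
  return Math.sqrt(variance);
}
```

A resolver stuck on one port scores a standard deviation of 0; truly random 16-bit ports land in the tens of thousands.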

As usual, YMMV. If I am mistaken in any of my assumptions or suggestions, I would love to hear about it. Otherwise, enjoy.

Monday, July 7, 2008

Netflix Queue Randomizer

Security is where I spend the bulk of my time, however I have dabbled quite extensively in other areas. So it is no surprise when every once in a while I do something that is not security related. This is one of those times.

Netflix. Love it or hate it, but not having a TV makes Netflix a lifesaver for me because I can get my dose of movies while multitasking (current movie: Golden Child). The problem I continually find is that my queue directly follows the particular mood I was in while adding to it. The result is that the movies in my queue have a flow to them that is not always appealing. For example, I watched a string of westerns last week, but only because a week or two prior I watched Tombstone and then went and selected movies that Netflix said I'd like based on my 4-star rating of Tombstone.

Several months ago I wrote to Netflix asking them for the ability to randomize my queue. Sure, I can do this by hand, but with 50 movies in my queue it gets old after the first time. Since I figured this would be useful to others, it did not seem unreasonable to ask Netflix to implement it. Furthermore, the queue-manipulation machinery is already there and used by the queue manager, so it makes me wonder why it was never implemented. Understandably, they probably have other things on their plate. An API, perhaps?

The result is the following Greasemonkey abomination:

// ==UserScript==
// @name           Netflix Queue Randomizer
// @namespace
// @include
// ==/UserScript==

function moveToTop(id, from) {
  GM_xmlhttpRequest({
    method: 'GET',
    url: '' + id + '&from=' + from + '&pos=1&sz=0&mt=true&ftype=DD',
    headers: { 'X-Requested-With': 'XMLHttpRequest' }
  });
}

function randomSort() {
  return (Math.round(Math.random()) - 0.5);
}

function randomizeQueue() {
  var inputs = document.getElementsByTagName('input');
  var idRe = /^OP(\d{8})$/;
  var curIds = new Array();
  var curPoss = new Array();
  var newOrder = new Array();

  // find all of the movies in the queue and store their ID and current
  // position in the queue (the hidden input's value holds the position)
  for (var i = 0; i < inputs.length; i++) {
    if (inputs[i].type == 'hidden' && inputs[i].name.match(idRe)) {
      var matches = idRe.exec(inputs[i].name);
      curIds.push(matches[1]);
      curPoss.push(inputs[i].value);
    }
  }

  // create a new array that is the same size as the movie queue and
  // randomize it.  This has to be done this way because QueueReorder does
  // not support moving to arbitrary positions, only the top.
  for (var i = 0; i < curIds.length; i++) {
    newOrder[i] = i;
  }
  newOrder.sort(randomSort);

  // now map the current movies and their positions to their new homes
  for (var i = 0; i < newOrder.length; i++) {
    moveToTop(curIds[newOrder[i]], curPoss[newOrder[i]]);
  }
}

// Create a new div that contains the 'Randomize Queue' link, and then prove
// that I have nearly zero HTML foo by being unable to place the link to the
// left of the bottom 'Update DVD Queue' button
var qFooter = document.getElementById('updateQueue2');
var rDiv = document.createElement('div');
rDiv.setAttribute('class', 'd1_qt');
var rLink = document.createElement('a');
rLink.addEventListener('click', randomizeQueue, false);
var rText = document.createTextNode("Randomize Queue");
rLink.appendChild(rText);
rDiv.appendChild(rLink);
qFooter.parentNode.insertBefore(rDiv, qFooter);
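
One caveat on the script above: sorting with a comparator that returns Math.random() - 0.5 is a notoriously biased shuffle, because the comparator is not consistent across comparisons. If you care about a uniform shuffle, a Fisher-Yates swap loop is the standard fix; a minimal sketch:

```javascript
// Fisher-Yates (Knuth) shuffle: every permutation is equally likely,
// unlike sorting an array with a random comparator.
function shuffle(arr) {
  for (var i = arr.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));  // 0 <= j <= i
    var tmp = arr[i];
    arr[i] = arr[j];
    arr[j] = tmp;
  }
  return arr;
}
```

Replacing the randomSort() call with shuffle(newOrder) would do it. For a queue randomizer the bias hardly matters, but it's worth knowing about.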

Obviously, for any of this to work you'll need to have Greasemonkey installed (which requires Firefox), an understanding of how to load this (Tools -> Greasemonkey -> New User Script), and a Netflix account.


Saturday, July 5, 2008

Craigslist Posting Security -- Adequate

If you have not bought or sold something on Craigslist, or at minimum browsed your particular region's Craigslist section, you truly have not experienced the best that the Internet has to offer. I use Craigslist probably half a dozen times per year for legitimate reasons -- to sell something I want to make a buck on or simply cannot bring myself to throw away, or perhaps I need to buy something particularly exotic or maybe something I'm looking to get on the cheap. The remainder of the time I'm cruising Craigslist purely for entertainment purposes. The "Best of Craigslist" and "Free" sections consume the bulk of my time.

Thursday was one of those days where I was posting an item for sale on Craigslist. I received the email that contains a link to publish, edit or delete my post, and at that moment my subconscious tazed me and told me there was something of interest in that link. It was not too unlike other links I have received in the past from sites that require me to verify that I do, in fact, own a particular email address. It contains a link that, among other things, contains some seemingly random garbage either as part of the URI or as part of the query string. This "random garbage" is generally an MD5 checksum or similar mechanism that ensures that it cannot be easily guessed and allows all involved parties to sleep comfortably knowing that posts cannot be tampered with by anyone other than authorized parties. Poor ways of implementing this would include anything that bases the MD5 on anything that can be easily guessed or otherwise obtained. Obviously, if the system in question simply MD5'd the poster's email address and posting title, a little trickery would get an attacker access to the management of that particular post.

When I received the email the other day, I quickly parsed through the past ~3 years or so of Craigslist posting emails and noticed a pattern. All posts are of the form [8 digits]/[5 lowercase letters or numbers]. I legitimately thought I was on to something. A few bogus posts later (which subsequently got flagged -- thanks, Craigslist overlords!) I was wondering, could it really be this easy?

As it turns out, no. It is no simple task to defeat Craigslist posting security. The first 8 digits in the path are easily obtained. In fact, they simply correspond to the posting ID which is freely available from any posting. This brings up two interesting points:

  1. This provides no security, and in reality probably was not chosen for security reasons
  2. Craigslist cannot handle more than 10^8-1 (99,999,999) posts in any one posting window, which is typically 7 days. This presents a curious DoS condition that is probably entirely impractical, however it is interesting to consider.
This brings us to the last 5 characters of the URI. Another quick analysis of my posts shows that they are always 5 characters long and only ever contain a mixture of digits and lowercase letters. The mathematicians in the house have already busted out the answer on their pocket calculators; for those not so inclined, that means there are (26+10)^5 possible values for this field (26 lowercase letters plus 10 digits, across 5 positions), which works out to just over 60 million possibilities -- 60,466,176, to be exact.

If those 5 characters were based on something that could be easily guessed or obtained, there would be cause for concern, however no correlation was determined between the 5 characters and the following characteristics:

  • Poster's email address
  • Posting title
  • Date/time
  • Post ID
This leads me to believe that it is a randomly generated string of some sort that serves as an index into a database of posts. Anyone who has ever had to develop, enforce or audit a password policy knows that a 5-character password, regardless of content, is prone to failure. In this particular case, however, is it adequate?

In my opinion, yes. Given the nature of how Craigslist posts are managed -- over HTTPS -- and the relatively limited time window in which the management URLs can be accessed (7 days for most posts, 30 for a limited few), the chances of someone brute-forcing these seemingly simple 5 characters are virtually 0. Since these require HTTPS requests, even if you can pull off 1 per second, it will still take you nearly 2 years to exhaust the space ((26+10)^5 / 60 / 60 / 24 == 699 days). By the time you guess it, the post will have expired or been deleted, and on the off chance that you get lucky and it still exists, you will almost certainly have tripped something on Craigslist's side and Craig Newmark himself will be on his way to your house to slap you around.
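
For those who want to check the arithmetic, a quick sketch:

```javascript
// Keyspace of a 5-character a-z0-9 token, and the time to exhaust it at
// one guess per second.
const keyspace = Math.pow(26 + 10, 5);                   // 60,466,176
const daysAtOnePerSecond = Math.floor(keyspace / 60 / 60 / 24);  // 699

console.log(keyspace, daysAtOnePerSecond);
```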

Tuesday, July 1, 2008

Update your bookmarks/feeds -- to Blogger

In case you have not noticed, things have changed a bit around here.

When it comes to things that can kill me, make me money and/or affect my reputation in one way or another, I'm a firm believer in "If you want the job done right, do it yourself." Those who know me know that there are very, very few individuals I trust with my online presence and fortunately I have never had to call upon them to lend a hand.

How does this relate to Blogger, you ask? Well, for years I had not trusted any third party to host my blog and host it right, however just recently I took a new look at Blogger and they definitely appear to run a tight ship. Whether it's the Google strings or not, I have nearly zero problems with their operations.

Most everything has been migrated permanently with the exception of a few random posts from many years ago, however those will come in time. I have a great pile of mod_rewrite foo going on right now to move people, crawlers and the like over to the new site, however it does not appear that RSS readers are going to play along. They'll happily follow the redirect, but the source for the feed remains the same.

So... update your bookmarks, else come this time next week the old blog will be gone and the redirects will be replaced with feeds to something much less pleasant. I'll happily take suggestions of convincing items to put in the old RSS feed to coerce people into moving over.

Thanks, and enjoy!