Saturday, June 14th, 2008

3rd Annual Symposium on Information Assurance

I was recently given the honor of delivering a keynote talk for the 3rd Annual Symposium on Information Assurance, which was held in conjunction with the 11th Annual New York State Cyber Security Conference. It was a great conference, and I want to thank Sanjay Goel for inviting me!


The conference was VERY academic… which I love. Academics present with an eye to the future, so I listened as PhD candidates talked about securing nano networks, sensor-based WiFi networks, and a slew of other topics… Academics also seem to have a bold and fearless approach to the topics they present, which I admire…


While I enjoyed most of the talks I attended, there was one that perked up the ears of the blackhat in me. John Crain of ICANN gave a talk on “Securing the Internet Infrastructure: Myths and Truths”. If you don’t know, ICANN basically owns the root DNS servers that the world relies on every day. He gave a great explanation of how ICANN goes about creating a heterogeneous ecosystem of DNS servers. These DNS servers use multiple versions and types of DNS software, multiple versions and types of operating systems, and even go so far as to use various pieces of hardware and processors. The reasoning behind this approach is: if a vulnerability is discovered in a particular piece of software (or hardware), it would only affect a small part of the entire root DNS ecosystem, whose load could be transferred to the unaffected servers. It’s an interesting approach indeed. After the talk, someone asked me why enterprises/corporations don’t adopt a similar strategy. I thought about it some, and I don’t think this approach could work in an enterprise environment… here’s why (other than the obvious costs and ungodly administration requirements):
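The diversity idea can be sketched in a few lines of Python. The server names and software/OS/CPU combinations below are made up for illustration, not ICANN’s actual deployment:

```python
# Toy model of a heterogeneous root-server pool: each server runs a
# different software/OS/hardware mix, so a vulnerability in any single
# component only knocks out a slice of the pool, never all of it.
servers = [
    {"name": "root-a", "dns": "BIND", "os": "Linux",   "cpu": "x86"},
    {"name": "root-b", "dns": "NSD",  "os": "FreeBSD", "cpu": "SPARC"},
    {"name": "root-c", "dns": "BIND", "os": "Solaris", "cpu": "SPARC"},
    {"name": "root-d", "dns": "NSD",  "os": "Linux",   "cpu": "x86"},
]

def survivors(pool, vulnerable_component):
    """Return the servers unaffected by a vuln in one component."""
    key, value = vulnerable_component
    return [s for s in pool if s[key] != value]

# A bug in BIND takes out root-a and root-c; their load shifts to the rest.
healthy = survivors(servers, ("dns", "BIND"))
print([s["name"] for s in healthy])  # → ['root-b', 'root-d']
```

In a monoculture (every server the same), `survivors` would return an empty list for any vulnerable component — which is exactly the failure mode the heterogeneous design avoids.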


ICANN’s interest is primarily in preventing hackers from modifying a 45k text file (yes, the root for the Internet is a ~45k text file). Now, if a hacker happens to break into a root DNS server and modifies the file, ICANN can disable the hacked system, restore the file, and go about their business. As long as ICANN has a “good” system up somewhere, they can push all their traffic to that system. Businesses, on the other hand, aren’t primarily interested in preventing the modification of data (not yet at least); they are more interested in preventing the pilfering of data. So if you own a network of a million different configurations, a vulnerability in any one of those configurations could allow an attacker to steal your data. Once the hacker has stolen your data, what does it matter that the 999,999 other systems are unhacked?
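The “disable, restore, move on” response assumes you can tell when the file has been tampered with. A minimal sketch of that check — just comparing the file against a known-good digest, not ICANN’s actual process, and with toy zone data standing in for the real ~45k file:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Hex SHA-256 digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

# Toy stand-in for the root zone file, plus the digest recorded
# when it was last published from a trusted source.
published_zone = b". 518400 IN NS a.root-servers.net.\n"
known_good = sha256_of(published_zone)

def is_tampered(current: bytes, expected_digest: str) -> bool:
    """Detect an unauthorized modification by comparing digests."""
    return sha256_of(current) != expected_digest

# An attacker appends a rogue record; the digest no longer matches.
hacked_zone = published_zone + b"evil.example. 3600 IN A 203.0.113.7\n"
print(is_tampered(published_zone, known_good))  # → False (file intact)
print(is_tampered(hacked_zone, known_good))     # → True (restore from backup)
```

Note this only works if the reference digest itself is kept somewhere the attacker can’t reach — which is essentially the point kuza55 raises in the comments about needing cryptographically signed updates.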

This brings up the heart of the argument: should we be worried about our systems being compromised, or should we be worried about our data being stolen? These are actually two different problems, as I don’t necessarily have to compromise your system to steal your data…

Posted by xssniper | Filed in Uncategorized

  • kuza55

    Academia seems kind of weird; some of the stuff they do is pretty cool, e.g. the PwdHash/SafeHistory/SafeCache stuff to come out of Stanford or the web cache & DNS pinning stuff done by Princeton ages ago.

    But security is really an applied topic, and it’s pretty irrelevant when your great design relies on something which the browser doesn’t guarantee (SessionSafe, PwdHash) or they simply have implementation flaws (SafeCache, SessionSafe).

    And I’ve had a sour feeling towards academia ever since Stanford released that Anti-DNS Pinning paper where they didn’t really do anything other than re-implement and use what others had done before and get a shitload of press for it.

    Anyway, I think your premise that it’s ok for ICANN to use this isn’t completely correct since you’re assuming that you can detect unauthorised modifications. I don’t know how updates to DNS are handled, but unless they’re all cryptographically signed, then you still have the problem of knowing when you’re owned.