Archive for the 'Uncategorized' Category

Saturday, June 14th, 2008

3rd Annual Symposium on Information Assurance

I was recently given the honor of delivering a keynote talk for the 3rd Annual Symposium on Information Assurance, which was held in conjunction with the 11th Annual New York State Cyber Security Conference.  It was a great conference, and I want to thank Sanjay Goel for inviting me!


The conference was VERY academic… which I love.  Academics present with an eye to the future, so I listened as PhD candidates talked about securing nano networks, sensor-based Wi-Fi networks, and a slew of other topics… Academics also seem to have a bold, fearless approach to the topics they present, which I admire…


While I enjoyed most of the talks I attended, there was one that perked up the ears of the blackhat in me.  John Crain of ICANN gave a talk on “Securing the Internet Infrastructure: Myths and Truths”.  If you don’t know, ICANN basically owns the root DNS servers that the world relies on every day.  He gave a great explanation of how ICANN goes about creating a heterogeneous ecosystem of DNS servers.  These DNS servers use multiple versions and types of DNS software, multiple versions and types of operating systems, and even go so far as to use various pieces of hardware and processors.  The reasoning behind this logic is… if a vulnerability is discovered in a particular piece of software (or hardware), it would only affect a small part of the entire root DNS ecosystem, whose load could be transferred to another part.  It’s an interesting approach indeed.  After the talk, someone asked me why enterprises/corporations don’t adopt a similar strategy.  I thought about it some, and I don’t think this approach could work in an enterprise environment… here’s why (other than the obvious costs and ungodly administration requirements):


ICANN’s interest is primarily based on preventing hackers from modifying a 45k text file (yes, the root for the Internet is a ~45k text file).  Now, if a hacker happens to break into a root DNS server and modifies the file, ICANN can disable the hacked system, restore the file, and go about their business.  As long as ICANN has a “good” system up somewhere, they can push all their traffic to that system.  Businesses, on the other hand, aren’t primarily interested in preventing the modification of data (not yet at least); they are more interested in preventing the pilfering of data.  So if you own a network of a million different configurations, a vulnerability in any one of those configurations could allow an attacker to steal your data.  Once the hacker has stolen your data, what does it matter that the 999,999 other systems are unhacked?

This brings up the heart of the argument, should we be worried about our systems being compromised or should we be worried about our data being stolen?  These are actually two different problems as I don’t necessarily have to compromise your system to steal your data…
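To put some rough numbers on this asymmetry, here’s a toy Python sketch (my own illustration, not anything from the talk) of why adding configurations helps an availability/integrity goal like ICANN’s but hurts a confidentiality goal like an enterprise’s, assuming each configuration independently has some chance of carrying an exploitable bug:

```python
# Toy model (my own illustration, with an assumed per-configuration
# vulnerability rate). For ICANN, one vulnerable configuration means a
# small slice of capacity is lost and traffic shifts elsewhere. For an
# enterprise, one vulnerable configuration can mean ALL the data leaves.

def p_any_vulnerable(p: float, n: int) -> float:
    """Chance that at least one of n distinct configurations is vulnerable."""
    return 1 - (1 - p) ** n

p = 0.05  # assumed chance any single configuration has an exploitable bug
for n in (1, 5, 20):
    # Each extra configuration INCREASES the odds that some door is open.
    print(n, round(p_any_vulnerable(p, n), 3))
```

The more distinct configurations you run, the more likely at least one of them is exploitable — which is exactly the wrong direction when any single hole loses the data.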

Posted by xssniper | Filed in Uncategorized | 1 Comment »


Tuesday, April 15th, 2008

Mark Dowd scares me….

If you haven’t heard yet, Mark Dowd chopped up a Flash vulnerability ninja style and released a 25-page whitepaper describing his attack.  It’s truly a work of art and can be found here. <pdf>


I’m not even going to attempt to describe any portion of this attack (just thinking about it makes my head hurt), but Thomas Ptacek from Matasano has a great writeup. <writeup>

Posted by xssniper | Filed in Uncategorized | 2 Comments »


Friday, April 4th, 2008

Insecure Content Ownership

Taking ownership of someone else’s content is always a tricky deal.  Nate McFeters and I spoke about some of the issues related to taking “ownership” of someone else’s content last year at Defcon, but we continue to see more and more places willingly accepting third-party content and happily serving it from their domain.  I came across an interesting cross-domain issue based on content ownership that involved Google.  Google has fixed the issue, but I thought it was interesting, so I’ll share the details… but before I do, I wanted to mention the efforts put forth by the Google Security Team (GST).  Fixing this issue was not trivial… it involved significant changes to how content was served from Google servers.  Needless to say, the GST moved quickly and the issue was fixed in an amazingly expedient and effective manner… KUDOS to the GST!


On to the issue:
I discovered that users could upload arbitrary files to the code.google.com domain by attaching a file to the “issues” portion of a project.  The uploaded file is then served from the code.google.com domain.  Normally, these types of attacks would make use of the Flash cross-domain policy file and the System.security.loadPolicyFile() API; however, due to the unique path of each project, the cross-domain capabilities of Flash are very limited in this instance, as policy files loaded via loadPolicyFile() are “limited to locations at or below its own level in the server’s hierarchy”.
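To make that restriction concrete, here’s a small Python sketch of the “at or below its own level” rule — my own illustration, and the paths are hypothetical, not real Google paths:

```python
from posixpath import dirname

def policy_can_authorize(policy_url_path: str, resource_path: str) -> bool:
    """A policy file loaded via loadPolicyFile() may only grant access to
    locations at or below its own directory level (the rule quoted in the
    post). Paths here are illustrative, not real code.google.com paths."""
    policy_dir = dirname(policy_url_path).rstrip("/") + "/"
    return resource_path.startswith(policy_dir)

# A policy file uploaded to one project's issue tracker...
policy = "/p/someproject/issues/attachment/crossdomain.xml"
# ...can only cover its own corner of the host, not the rest of the domain:
print(policy_can_authorize(policy, "/p/someproject/issues/attachment/x"))  # True
print(policy_can_authorize(policy, "/p/otherproject/source/secret"))       # False
```

So an attacker-uploaded crossdomain.xml only buys access to that project’s path, which is why Flash was a dead end here.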

[Screenshot: Address Bar]

Flash isn’t the only option here, though.  Java has a different security policy, and uploading a Java class file to the code.google.com domain gives me access to the entire domain, as opposed to only certain folders and subfolders.


Sounds pretty straightforward, huh?  Well, I ran into some issues, as the JVM encodes certain characters in its requests for class files made via the CODE attribute within APPLET tags.  After poking around a bit, I realized that requests made via the ARCHIVE attribute would be sent as-is, without the encoding of special characters.  With this newfound knowledge in hand, I created a JAR file with my class file inside it and uploaded it to code.google.com.
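The post doesn’t list exactly which characters the JVM encodes, but the general failure mode is easy to illustrate in Python: if the uploaded attachment’s path contains a character that the client percent-encodes, the encoded request no longer matches the stored filename (the path below is hypothetical):

```python
from urllib.parse import quote

# Generic illustration of the encoding problem, NOT the JVM's exact rules:
# a client that percent-encodes special characters asks for a different
# URL than the one the file was actually stored under.
stored_path = "attachment?id=123"      # hypothetical uploaded-file path
encoded_request = quote(stored_path)   # what an encoding client requests

print(stored_path)                     # attachment?id=123
print(encoded_request)                 # attachment%3Fid%3D123
print(stored_path == encoded_request)  # False -> the lookup misses
```

Requests triggered by the ARCHIVE attribute went out unencoded, which is why the JAR route worked where the CODE route didn’t.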

[Screenshot: Issues Upload]

Now, the CODE attribute is a required attribute within the APPLET tag, so I specified the name of the class file I had placed inside the JAR.  When the APPLET tag is rendered, the JVM first downloads the JAR file specified in the ARCHIVE attribute, then makes the request for the class file specified in the CODE attribute.  In this instance, the request for the class file specified in the CODE attribute will fail, as the class file is not on the code.google.com server (even if it were, we wouldn’t be able to reach it, since requests made via the CODE attribute are encoded).  The failure to locate the class file causes the JVM to begin searching alternate locations for the requested class, and the JVM will eventually load a class file with the same name located inside the JAR file…
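The lookup order described above can be sketched in a few lines of Python — this is a simplified model of the behavior the post describes, not the real JVM applet loader, and the class name is hypothetical:

```python
# Simplified model of the fallback behavior described in the post:
# try the server location named by CODE first, then fall back to the
# contents of the JAR pulled down via ARCHIVE. Names are hypothetical.

def resolve_class(code_name: str, server_files: set, jar_contents: set) -> str:
    # 1. Try to fetch the class named by the CODE attribute from the server.
    if code_name in server_files:
        return "server:" + code_name
    # 2. On failure, search the already-downloaded JAR for the same name.
    if code_name in jar_contents:
        return "jar:" + code_name
    raise FileNotFoundError(code_name)

# The class isn't on code.google.com, but it IS inside the uploaded JAR:
print(resolve_class("Evil.class", server_files=set(),
                    jar_contents={"Evil.class"}))  # jar:Evil.class
```

The attacker-controlled class rides in on the JAR precisely because the direct CODE lookup is guaranteed to miss.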

[Screenshot: Applet Code]

Once the class file is loaded, the JVM fires the init() method, and Java’s same-origin policy allows me to use the applet to communicate with the domain that served the applet class file (as opposed to the domain hosting the HTML that calls the APPLET tag).  Here’s a screenshot of the PoC page I was hosting on XS-Sniper.com.

[Screenshot: Proof of Concept]

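The origin rule the whole attack turns on can be stated in one line — a trivial sketch, with illustrative hostnames:

```python
# The applet's network permissions follow the host that SERVED the class
# file, not the host of the embedding page. Hostnames are illustrative.

def applet_origin(embedding_page_host: str, classfile_host: str) -> str:
    """Return the host the applet is allowed to open connections back to."""
    return classfile_host  # deliberately NOT embedding_page_host

print(applet_origin(embedding_page_host="xs-sniper.com",
                    classfile_host="code.google.com"))  # code.google.com
```

So a page on xs-sniper.com embedding a class served from code.google.com gets an applet that can read code.google.com — the cross-domain hole.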
I don’t think there is a tool on the market today that even attempts to detect something like this, and I’ve met many “security professionals” who have no idea that vulnerabilities like this even exist.  This isn’t the first time I’ve come across a cross-domain hole based on content ownership.  I’m expecting we’ll see a lot more of these types of vulnerabilities in the future as cross-domain capabilities become more prevalent in client-side technologies and as content providers become more and more comfortable taking ownership of others’ content.

Posted by xssniper | Filed in Uncategorized | 17 Comments »