Archive for the 'Security' Category

Thursday, October 11th, 2012

Content Smuggling

A few years ago, I discovered a peculiar design decision described in the PDF specification. This design flaw allows an attacker to conduct XSS attacks against some websites that would not otherwise have XSS vulnerabilities. I reported this issue to Adobe in late 2009. Apparently, there are some challenging back-compat issues which make changing this design difficult. Given that it's been nearly three years since I first reported the issue to Adobe and a fix from Adobe doesn't seem likely (Chrome has already fixed its internal PDF reader), I figured I should let the web application security community know about the exposures. I don't expect "APT" or other 31337 $country "cyber liberation armies" to use this anytime soon, but it is interesting behavior and I hope web application security folks find it interesting. Hopefully some researcher who's smarter than me can take this to the next level. Oh, and I apologize in advance for the ugly PoCs!

If we take a look at the implementation note in Appendix H of the PDF specification for section 3.4.1, "File Header", we see the following:

13. Acrobat viewers require only that the header appear somewhere within
the first 1024 bytes of the file.

Anyone who has read the PDF specification probably knows about this behavior; in fact, Julia Wolf mentioned it in her epic OMG WTF PDF talk at CCC in 2010. This peculiar design allows for the creation of a hybrid file that is both some arbitrary file type (GIF, PNG, DOC, etc.) and a valid PDF. We do this by cramming a PDF header in after another file's header. An example of this is shown in the screenshot below:

[Screenshot: a GIF header with a PDF header crammed in after it, within the first 1024 bytes of the file]
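For anyone who wants to experiment with this, here is a rough sketch in Python of one way to assemble such a hybrid. The file names are placeholders, and this is not the exact file from the PoC:

# Rough sketch: build a GIF/PDF hybrid by prepending a small, valid GIF to an
# existing PDF. "benign.gif" and "payload.pdf" are placeholder file names.

with open("benign.gif", "rb") as f:
    gif_bytes = f.read()

with open("payload.pdf", "rb") as f:
    pdf_bytes = f.read()

# The PDF header only has to appear somewhere in the first 1024 bytes of the
# combined file, so the GIF in front must be small enough to leave room.
if len(gif_bytes) + len(b"%PDF-") > 1024:
    raise ValueError("GIF is too large; %PDF- would fall outside the first 1024 bytes")

with open("NotAPDF.gif", "wb") as f:
    f.write(gif_bytes + pdf_bytes)

# Caveat: prepending bytes shifts the PDF's cross-reference offsets. Acrobat
# is usually tolerant and rebuilds the xref table on load, but a more careful
# polyglot would fix up the offsets rather than rely on that repair.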

Hopefully, by now we've already realized that hosting user-controlled PDFs and serving those PDFs from a sensitive domain is dangerous from a web security perspective. However, with this quirky file header behavior, we have the ability to smuggle PDFs onto a website that only accepts "benign" file types. As an example, I've uploaded such a file to an appspot web application I created. The PoC shows that we can load a single file as both a GIF (or any other file type we want) and a PDF. Adobe PDF Reader needs to be set as your default PDF handler for the PoC to work.

http://pdfxss.appspot.com/Son-of-Gifar/Content-Smuggling.html

The only difference between the two displays in the PoC above is the way we reference the file. For the image, we simply use an IMG tag. When we want the browser to hand the file to the default PDF reader, we use an OBJECT tag and explicitly specify a content type of application/pdf. Of course, this technique can be generalized to other plugins.

<img src="http://pdfxss.appspot.com/Son-of-Gifar/NotAPDF.gif" height="10" width="10">

vs

<object data="http://pdfxss.appspot.com/Son-of-Gifar/NotAPDF.gif" type="application/pdf" width="500" height="500"></object>

PDFs do not, by design, have access to the DOM of the domain from which they are served. How, then, can we use a PDF to achieve XSS? Here is where the feature-rich Adobe PDF Reader comes into play. Once the PDF is loaded, we have a couple of different options for achieving XSS. First, we can redirect the PDF to a javascript URL. These redirections navigate the browser (not the PDF document) and result in true browser-based XSS on the victim domain. Luckily, Adobe considers redirection from a PDF to javascript URLs a bug and has eliminated the most obvious methods for achieving this. There is, however, another method that achieves essentially the same impact: we can use a built-in API to make network requests to and from the victim domain. These network requests carry any cookies associated with the victim domain, giving the attacker access to authenticated resources.

The following link demonstrates how this issue would be used against a website. The domain xs-sniper.com (the attacker's domain in this example) loads a smuggled GIF/PDF from http://pdfxss.appspot.com (the victim domain in this example). Once the PDF is loaded, we make use of the built-in XML APIs to retrieve the file /secret.txt from http://pdfxss.appspot.com (the victim domain).

http://xs-sniper.com/sniperscope/Adobe/Son-of-Gifar/PDFXML.html

IE users will see a warning in the PDF reader. This is because IE actually downloads the PDF and opens a local copy :) You can verify the IE behavior by browsing to this PoC with Internet Explorer (Adobe PDF Reader must be set as the default reader).

http://pdfxss.appspot.com/Son-of-Gifar/Location.html

Lastly, if you already have an XSS on a website, you can inject a PDF into it. This might be helpful for bypassing XSS filters or application filtering, and it's accomplished by injecting the PDF into the vulnerable site using an OBJECT tag.

<object data="http://vulnerable-domain/xss.asp?vulnerable-param=<injected PDF HERE>" type="application/pdf" width="500" height="500"></object>

An example of how this could be done is given below (this PoC is best viewed in Firefox with Adobe PDF Reader, but the technique is possible in all browsers).

http://xs-sniper.com/sniperscope/Adobe/Son-of-Gifar/PDF-Injection.html

What's the impact? Well, I suspect there are plenty of Internet-facing websites that are vulnerable to this bug. Any web application that accepts uploads of "benign" file types and then serves those files back to the user could be affected. This also affects websites that rely on Content-Type headers to prevent XSS (btw, this strategy doesn't work). See Phil Purviance's blog for tips on spotting (and exploiting) websites that use Content-Type to protect against XSS. This bug can also be used to exploit applications that use Content-Disposition headers to prevent XSS bugs. The most common attack surface here will likely be internal content portals; pretty much every internal content portal used in the enterprise is vulnerable to this issue (think SharePoint).

You can test for this issue by trying to upload this file to a vulnerable web application. If you see the PDF header in the file as it's served back AND the file is served from a sensitive domain (e.g., it has auth cookies), then the application is vulnerable. A quick way to check is sketched below.
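Here's a minimal sketch of that check in Python (the URL and cookie value are placeholders; the requests library is assumed to be installed):

import requests  # third-party: pip install requests

# Placeholder URL and cookie; point these at the file as the target app serves it back.
resp = requests.get(
    "https://victim.example/uploads/NotAPDF.gif",
    cookies={"session": "AUTH-COOKIE-HERE"},
)

if b"%PDF-" in resp.content[:1024]:
    print("PDF header survives in the first 1024 bytes -- Acrobat will treat this as a PDF")
else:
    print("No PDF header found; the upload pipeline probably re-encoded or stripped the file")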

The proper defense for this is the use of alternate domains for user-supplied content (aka sandboxed domains). Sandboxed domains can be tricky to implement. Some of the most popular web applications on the web already make extensive use of sandboxed domains, but the vast majority of web applications do not. Once again, internal content portals are in a hard spot, as it's more difficult to implement a sandboxed domain on an internal network. Sandboxed domains are a subject many "web application security specialists" understand poorly, and they probably deserve their own blog post. How to properly implement a sandboxed domain is a great interview question for senior web application security roles because it tests design and implementation skills; it also requires a really solid understanding of browser/plugin same-origin policy. I haven't seen much written about sandboxed domains, but this blog post does a nice job of summing up some of the challenges of content hosting: http://googleonlinesecurity.blogspot.com/2012/08/content-hosting-for-modern-web.html
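To make the idea concrete, here is a rough sketch using Flask; the framework, domain names, and paths are illustrative assumptions, not a drop-in implementation. The key property is that user-supplied files are never served from the authenticated origin, so a smuggled PDF's same-origin requests only reach a domain that holds no session cookies:

# Minimal sketch of the sandboxed-domain idea (illustrative names throughout).
from flask import Flask, redirect, send_from_directory

app = Flask(__name__)

UPLOAD_DIR = "/var/www/user-uploads"                    # placeholder path
SANDBOX_ORIGIN = "https://usercontent-sandbox.example"  # separate registrable domain, no auth cookies

@app.route("/uploads/<path:filename>")
def uploads_on_main_domain(filename):
    # On the main (cookie-bearing) domain we never serve the bytes;
    # we bounce the browser over to the sandbox origin instead.
    return redirect(f"{SANDBOX_ORIGIN}/uploads/{filename}", code=302)

@app.route("/sandbox/uploads/<path:filename>")
def uploads_on_sandbox_domain(filename):
    # This handler represents the app running on the sandbox domain. Even if
    # the file is a GIF/PDF hybrid, any requests the PDF reader makes back to
    # its "own" origin carry no credentials worth stealing.
    return send_from_directory(UPLOAD_DIR, filename)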

Happy hunting!

BK

Posted by xssniper | Filed in Security, Web Application Security | 3 Comments »

 

Thursday, July 12th, 2012

Tridium – An ICS Learning Moment…

We are happy to see Robert O'Harrow is shining a light on the vulnerabilities associated with Industrial Control Systems (ICS). The ICS software community is light years behind modern software security. Sadly, we can honestly say that the security of iTunes is more robust than that of most ICS software. Terry and I plan on releasing some technical details about what we've found in the near future, but for now we wanted to talk about some of our experiences with this particular issue.

First, ICS-CERT has done a great job tracking this issue. It's been months since we first reported the issue to ICS-CERT, and following up with an unresponsive vendor is extremely frustrating. It was apparent that ICS-CERT was making every effort to follow up with Tridium, and they kept us well informed throughout the entire process. We especially want to thank those ICS-CERT analysts who kept us apprised of developments despite Tridium's lack of response and unwillingness to accept responsibility for the issue.

We are disappointed that it took so long for the public to become aware of this issue. According to the Washington Post article, Tridium became aware of this vulnerability "almost a year ago, when a Niagara customer that uses the software to manage Pentagon facilities turned up issues in an audit". We are disappointed that even after discovering critical, remotely exploitable vulnerabilities in Tridium software, our government chose to purchase and implement the software anyway. We are disappointed that our taxpayer money paid for the ignored security audit, paid for the acquisition, and paid for the implementation and deployment of known-vulnerable software. We'd like to challenge our nation's leadership to evaluate the failures in our current processes surrounding the acquisition of software that supports Critical Infrastructure and Industrial Control Systems.

At times, we felt like ICS-CERT had their hands tied. We realize that when you are working with vulnerabilities that could affect critical infrastructure, a delicate balance between disclosure and timely notification of affected organizations must be maintained. However, when a vendor is unresponsive or refuses to accept responsibility for an issue, ICS-CERT should have the authority to inform vulnerable customers in a timely manner. DHS and ICS-CERT work for us, the American people; they do not work for the PR departments of ICS companies. ICS-CERT should be able to take the appropriate actions to ensure that we're safe and that ICS customers have the right information to mitigate and control risk. The PR damage done to any individual company should never be part of this equation. If a vendor is unresponsive or unwilling to accept responsibility for a security issue, ICS-CERT should have the option of disclosing issues 45 days after initial notification from external researchers (this is consistent with CERT/CC's disclosure timelines). Of course, special circumstances require special handling, but we're sure the folks at ICS-CERT can make those determinations when needed.

Probably the most disappointing part of the whole ordeal is Tridium's eagerness to blame the customer. We've seen this from other ICS vendors as well. It should never be the customer's responsibility to compensate for poor design. Many ICS vendors expect customers to ensure their product is implemented securely, yet provide zero (or extremely vague) guidance on how to do so. In many cases, secure deployment is simply impossible due to the extremely poor security design. Notification, automatic patching, and secure implementation guidelines in the ICS world are light years behind modern software. We don't understand Tridium's claim that "the firm also is doing more to train customers about security" when the root cause of these issues is poor design and coding practices from Tridium itself. Maybe Tridium should invest in training their developers about security first…

If you would like to contact us about our experiences, please email us at: help – at – fixicssecurity.com

Billy Rios @xssniper and Terry McCorkle @0psys

Posted by xssniper | Filed in ICS, Security | Comment now »

 

Friday, June 10th, 2011

Turning the Tables – Part II

I'm posting some of the research I've been working on over the last few months. I planned on submitting some of this research to the Blackhat/DEFCON CFP, but it looks like I'll be tied up for most of the summer and I won't be able to make it out to Vegas for BH or DEFCON this year (pour some out and "make it rain" for me). The gist of the research is this: I've collected a number of malware C&C software packages. I set up these C&Cs in a virtual network and audited the applications and source code (when available) for bugs. The results were surprising; most of the C&C software audited has pretty crappy security.

This week's sample is an auth bypass and SQL injection in a BlackEnergy C&C page. The first of the samples can be found here: http://software-security.sans.org/blog/2011/06/10/spot-the-vuln-rabbit-authbypass-and-sqli
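For readers who haven't seen this bug class before, here is a generic illustration in Python of the auth-bypass-via-SQL-injection pattern (not the actual BlackEnergy code; the table, names, and credentials are made up):

import sqlite3

# Toy database standing in for a C&C panel's user table.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, name TEXT, pass TEXT)")
db.execute("INSERT INTO users VALUES (1, 'admin', 'secret')")

def login_vulnerable(user, password):
    # Credentials concatenated straight into the query: a crafted password
    # makes the WHERE clause always true and bypasses authentication.
    query = f"SELECT id FROM users WHERE name = '{user}' AND pass = '{password}'"
    return db.execute(query).fetchone() is not None

def login_parameterized(user, password):
    # Parameterized version of the same check; the payload is just a bad password.
    query = "SELECT id FROM users WHERE name = ? AND pass = ?"
    return db.execute(query, (user, password)).fetchone() is not None

print(login_vulnerable("admin", "' OR '1'='1"))     # True  -- login bypassed
print(login_parameterized("admin", "' OR '1'='1"))  # False -- injection rejected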

I’ll post more samples in the coming weeks.

Attacking malware C&C is an interesting proposition. Exploiting a single host can result in the transfer of hundreds or even thousands of hosts from one individual to another. I'm not the first to note that malware and C&C software are evolving. Gone are the days of simple IRC bots receiving clear-text commands from an IRC server. Today's C&Cs are full-fledged, feature-rich applications with a lot of complexity. Complexity is the enemy of security, and not even malware authors can escape that. There is no magic bullet; they face the same difficulties of writing secure code as everyone else, especially when their customers are paying money for C&C software and demanding new features and robust interfaces. Today's malware landscape looks much like a typical software enterprise, with paying customers, regularly scheduled feature updates, marketing, and a sprinkling of PR. Who knows, maybe in the near future these malware enterprises will have dedicated, on-call security engineering teams and a formal SDL process :)

Posted by xssniper | Filed in Web Application Security | Comment now »