
The Calculus of Threat Modeling

I have been designing secure systems and security products for 20 years. I always thought of this as “architecture,” and it took me a long time to realize that a major part of what I was doing was threat modeling. There are many established approaches to threat modeling, but because I backed into the field, I had rolled my own. This post explicitly describes what I have been doing.

The Next Level

“That’s where the money is!”
– Attributed to Willie Sutton, Non-Traditional Withdrawals Specialist

Willie Sutton was quoted as having said the above (he denied coining the phrase) in response to the question, “Why do you rob banks?” At the time, it was an obvious choice; in a pre-networked world, value was primarily transmitted by moving physical objects around the world, whether they were bars of precious metals, mineral crystals, or slips of paper. A non-traditional account withdrawal, then, relied on transporting physical objects from point A (a location controlled by the bank) to point B (a location controlled by the attacker).

The Digital Pickpocket

I am sure that everyone has seen the commercial where users of a specific brand of smartphone pass a video back and forth by simply touching the devices together. It is a very slick feature that obviously makes moving files between mobile devices easy.

The technology being used to provide this feature is known as Near Field Communication (NFC). This same technology, which is an extension of the older Radio Frequency Identification (RFID) technology, is also being integrated into other facets of our lives under the banner of convenience. Unfortunately, as with anything where convenience is the priority, there are some potential security issues that the security community has been pointing out for years. In this case we are talking about “Tap to Pay” credit cards, transit cards, and other cards that use NFC to broadcast payment information to payment terminals.

As previously mentioned, NFC is an extension of RFID technology. RFID, typically used to track inventory, is (I am oversimplifying here) essentially a small radio transmitter that requires little to no power. The main difference, which according to many NFC vendors is a security feature, is that RFID allows for longer-range transmission than NFC. Essentially, NFC works when the devices are inches apart, while RFID can work at a range of meters. If you want the real geeky details on exactly how NFC works, I suggest you give the ISO standard (ISO 18092) a read.

To read an NFC transmission, or even an RFID one for that matter, one simply needs a receiver that is within range of the transmitting device. I would like to tell you that this transmission is performed over a cryptographically secured channel, or that only an authorized receiver may pick up the transmission, but unfortunately this is not always the case.

This week we had an opportunity to talk with KOMO TV News reporter Matt Markovich about NFC technology and some of the risks it presents when used as a payment mechanism. My impression is that Matt is more technical than most reporters I have worked with in the past; when he approached Leviathan for assistance on his story, he already had a working test case that helped prove the threat.

What Matt was proving (video below) is that this technology of convenience is not secure against eavesdropping or interception. Essentially, a “bad guy” can build his own receiver and, as long as he is within the necessary range, read the transmission coming from the NFC-enabled card. In Matt’s test case he uses Visa credit cards; however, with a bit of customization work this can be extended to read other types of NFC-enabled cards, such as transit passes and door locks.

When watching the video, remember that no vulnerability is being exploited; this is simply leveraging a feature of the technology, not a bug. NFC is, after all, simply a radio transmitter, and there is no access control or authorization required to accept that radio transmission.

It is also important to understand that this is different from some of the ways we have seen RFID technology leveraged by attackers. In the past, attackers have built low-cost devices like the Proxmark one pictured below to read RFID-enabled devices:

Proxmark Proxcard reader

While a setup like this could just as easily and cheaply (less than $100) be built to read NFC, it is not exactly portable or discreet, two things an attacker requires: in order to read the NFC chip, the attacker must be within range, which for NFC is no more than 4 inches.

In addition, the above setup assumes that, even if you follow one of the many online tutorials, you have a certain level of competence when it comes to building your own electronics. So, instead of going to all of this trouble, and to ensure a stealthier mechanism for attack, we go back to the beginning of this post and the whiz-bang smartphone feature found on most Android smartphones that allows you to transfer data simply by touching the devices together.

Security researchers were quick to combine the native NFC support found in most Android phones with the powerful features of the smartphone itself to make this attack stealthy and practical. By simply running a custom, community-supported version of the Android operating system along with publicly available apps, one can turn a smartphone into an NFC receiver and accept a transmission. Not only can one receive the transmission, which by the way contains all the data needed to “borrow” the target’s credit card details, but it can also be saved and replayed at a later date, or relayed in real time from the smartphone, to make a purchase at any standard “tap to pay” terminal.

This is exactly what Matt did; he demonstrates it in the video and explains it in this article.

The most common response to this sort of attack is typically something along the lines of: “Yes, but you need to be within 4 inches to make this work.” In fact, this is exactly what MasterCard said in response to the KOMO News inquiries: "The circumstances under which it can occur in the real world are extremely rare."

This is absolutely true; however, thieves already have no problem performing a traditional pickpocket theft, so why not, instead of actually taking the wallet, simply bump into the target and scan it?

In cases like this, it is human nature to want to find someone or something to blame. Before you assume that it is once again those “evil hackers” or organized criminal rings that are responsible, remember: this is a feature, not a bug. The demonstration Matt performs in the video simply leverages an existing feature. This means that if you absolutely must find someone to blame, then you must pick whoever is responsible for implementing such an insecure way to transmit payment data. Yes, you guessed it: the Payment Card Industry (PCI). Let’s be clear, this “vulnerability” exists because convenience has outweighed security. The PCI wanted a way to ensure that consumers can quickly pay for items without spending the extra few seconds fumbling with their wallets and counting that dirty paper cash stuff that no one seems to use anymore.

Could the payment industry implement NFC technology in a secure way? The ISO standard does outline various provisions that may add security to an implementation; however, considering the scale of this deployment, there may be real-world operational and technical hurdles that prevent this from happening.

Unfortunately, we will see more and more of this technology. One of the best defenses today is to simply call your credit card company and ask them to issue you a card that does not support “tap to pay” or any other NFC technology. Today, card issuers are honoring this request; of course, they may eventually stop. However, if enough consumers reject the technology, perhaps change can be forced.

The Double-Edged Sword of HSTS Persistence and Privacy

HTTP Strict Transport Security, more commonly known as HSTS, is a draft policy currently proposed by the IETF WebSec working group that would extend the standard list of HTTP response headers to allow domain owners to enroll complying browsers into exclusively secure communications with the web server for an asserted period of time.

This is accomplished by rewriting all HTTP requests to that particular domain, regardless of how they are initiated (be it via a link, an image, or manual entry in the address bar), to HTTPS and validating the certificate chain. If a secure connection cannot be established or the certificate chain cannot be verified, the request fails with a transport-level error and is abandoned.

The actual implementation of this is nearly trivial. Over a secure connection, the server simply has to return a header specifying how long the browser should exclusively attempt HTTPS connections and a flag indicating whether the policy should include sub-domains:

Strict-Transport-Security: max-age=31536000; includeSubDomains
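
As a concrete (and purely illustrative) sketch of what returning that header might look like, here is a minimal Node.js HTTPS server written in TypeScript; the certificate paths and response body are placeholders and not part of the draft itself:

// Minimal sketch: every response from this server carries an HSTS header,
// so complying browsers will upgrade all future requests to HTTPS.
import * as https from "node:https";
import * as fs from "node:fs";

const server = https.createServer(
  {
    key: fs.readFileSync("/etc/ssl/private/example.key"),  // placeholder path
    cert: fs.readFileSync("/etc/ssl/certs/example.crt"),   // placeholder path
  },
  (req, res) => {
    // One year, and apply the policy to every sub-domain as well.
    res.setHeader(
      "Strict-Transport-Security",
      "max-age=31536000; includeSubDomains"
    );
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("served over HTTPS\n");
  }
);

server.listen(443);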

Under normal circumstances, as long as the user has visited that domain within the max-age of the policy, this is an effective mitigation against sslstrip-type attacks, which rely on the user initiating a plain HTTP connection that the attacker can intercept to mount a man-in-the-middle attack against the browser.

One of the less understood implications of this proposal is the role that wildcard SSL certificates play. When purchasing an SSL certificate, the domain owner must decide between a standard certificate that covers only one particular FQDN, such as store.domain.com, and a (more expensive) wildcard certificate issued to *.domain.com that encompasses multiple sub-domains such as auth.domain.com and store.domain.com.

Because the certificate wildcard feature is decoupled from the HSTS includeSubDomains flag, this leads to interesting behavior that allows an actor such as an advertising company, or any other entity, to store, retrieve, and edit data in the browser's HSTS database. When a wildcard SSL certificate is used, it allows the owner to create a nearly unlimited number of entries in the HSTS databases as currently implemented by supporting browsers.

An entry in the HSTS database grants a single bit of information to an interested party, and that bit can be retrieved at a later time. Let's look at an example where we want to store and retrieve the word "HELLO" in a browser's HSTS database using nothing but forum image tags and a trivial encoding.

To set the bits we would simply need to create a post with the following tags:

[img]https://charcount-5.trackingdomain.com/setbit.png[/img]

[img]https://0-H.trackingdomain.com/setbit.png[/img]
[img]https://1-E.trackingdomain.com/setbit.png[/img]
[img]https://2-L.trackingdomain.com/setbit.png[/img]
[img]https://3-L.trackingdomain.com/setbit.png[/img]
[img]https://4-O.trackingdomain.com/setbit.png[/img]


When a browser loads each of these URLs over HTTPS, the web server sees the /setbit.png key and includes an HSTS header with a large max-age value in the response, creating an entry in the browser's HSTS table for each of the sub-domains.
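
As a rough sketch of the server side of this trick, assume (hypothetically) that every sub-domain of trackingdomain.com resolves to one host covered by a single wildcard certificate; the handler then only needs to key off the request path and the scheme the request arrived on:

// Sketch of the tracking host. The certificate paths and the wildcard
// certificate for *.trackingdomain.com are assumptions for illustration only.
import * as http from "node:http";
import * as https from "node:https";
import * as fs from "node:fs";

// A 1x1 transparent GIF so the [img] tags resolve to a valid image.
const PIXEL = Buffer.from(
  "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7",
  "base64"
);

function handle(req: http.IncomingMessage, res: http.ServerResponse, secure: boolean): void {
  const host = req.headers.host ?? "";
  if (secure && req.url === "/setbit.png") {
    // Set the bit: the browser records an HSTS entry for this sub-domain.
    res.setHeader("Strict-Transport-Security", "max-age=31536000");
  } else if (req.url === "/getbit.png") {
    // Read the bit: the scheme the probe arrived on *is* the stored value.
    console.log(`${host} -> ${secure ? "HTTPS (bit set)" : "HTTP (bit clear)"}`);
  }
  res.writeHead(200, { "Content-Type": "image/gif" });
  res.end(PIXEL);
}

http.createServer((req, res) => handle(req, res, false)).listen(80);
https.createServer(
  {
    key: fs.readFileSync("/etc/ssl/private/wildcard.key"),  // placeholder path
    cert: fs.readFileSync("/etc/ssl/certs/wildcard.crt"),   // placeholder path
  },
  (req, res) => handle(req, res, true)
).listen(443);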

To read this data back out, a JavaScript block on a different domain than the original forum would first brute-force the character count by creating resource requests that enumerate the possible values, with the server noting whether each request came in over HTTP or HTTPS, since the browser will have rewritten the request to HTTPS if the sub-domain is present in its HSTS database. These requests would look like:

http://charcount-1.trackingdomain.com/getbit.png [ Server: HTTP ]
http://charcount-2.trackingdomain.com/getbit.png [ Server: HTTP ]
http://charcount-3.trackingdomain.com/getbit.png [ Server: HTTP ]
http://charcount-4.trackingdomain.com/getbit.png [ Server: HTTP ]
http://charcount-5.trackingdomain.com/getbit.png [ Server: HTTPS! ]

The same brute-force enumeration process would then be performed to retrieve the individual characters of the message body. This enumeration is more effective than the current history enumeration attacks performed via CSS.
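
A sketch of that browser-side enumeration, using the same hypothetical trackingdomain.com and the same trivial position-character encoding as above (the alphabet and maximum length here are arbitrary placeholders):

// Browser-side read-back sketch (TypeScript compiled to plain JS).
// Each probe is requested over plain HTTP; if the browser already holds an
// HSTS entry for that sub-domain it silently upgrades the request to HTTPS,
// and the server logs which scheme each probe arrived on.
const TRACKING_DOMAIN = "trackingdomain.com"; // placeholder domain
const MAX_LEN = 32;                           // longest message probed for
const ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ";

function probe(label: string): void {
  // An image request is enough; the page never needs to read the response,
  // since the server does the bookkeeping based on the scheme it sees.
  const img = new Image();
  img.src = `http://${label}.${TRACKING_DOMAIN}/getbit.png`;
}

// First brute-force the character count...
for (let i = 1; i <= MAX_LEN; i++) {
  probe(`charcount-${i}`);
}

// ...then every position/character pair.
for (let pos = 0; pos < MAX_LEN; pos++) {
  for (const ch of ALPHABET) {
    probe(`${pos}-${ch}`);
  }
}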

At first this approach looks like a Bloom filter, seemingly akin to burning in bits permanently with no ability to change them, but thanks to the max-age specifier of the header it is also possible to clear bits by setting their maximum age to 0:

Request URL: https://charcount-5.trackingdomain.com/clearbit.png
Strict-Transport-Security: max-age=0;

Initially this doesn't look any worse than a standard tracking cookie, as long as it is cleared on a regular basis, but clearing the HSTS database frequently renders it much less effective at preventing the very attacks it sought to guard against. Therein lies the classic trade-off of security versus privacy. The two browsers that currently support HSTS have not reached consensus on this topic: Chrome opts for increased privacy by clearing the HSTS database when cookies are cleared, while Firefox 4 opts to store HSTS settings in the separate, and infrequently cleared, site-preferences database.

So what can be done about this?

My proposal is to amend the draft to force the includeSubDomains flag on wildcard certificates. This would limit them to only one entry in the browser's HSTS database and make the technique above prohibitively expensive for non-CA owners, as a separately signed SSL certificate would be needed for every bit of information stored, which also limits encoding options. That way we can have the best of both worlds: privacy and security.