Bulk ASLR Data Analysis

Hello from the Lotan team at Leviathan!

We recently looked at a sample set of 80,000 crash dumps from a production environment and decided it was time to examine some of the data we have in aggregate. Lotan's core focus is detecting stage-one attacks (shellcode) in crashed processes. To achieve this goal, Lotan has to process the bulk of the data contained within a memory image. One of the most interesting components of these process images is the information about the modules loaded into Windows processes.
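
To make the idea concrete, here is a minimal sketch (not Lotan's actual pipeline) of the kind of aggregation this enables. It assumes each crash dump has already been reduced to a JSON file listing module names and base addresses – the file layout and field names are illustrative assumptions – and it counts how many distinct base addresses each module was observed at across the whole set. A module that turns up at the same base in thousands of dumps is a strong hint that ASLR isn't being applied to it.

```python
# Hypothetical aggregation sketch: one JSON file per crash dump, each of the
# form {"modules": [{"name": "...", "base": "0x7ff6a2b40000"}, ...]}.
import json
import sys
from collections import defaultdict
from pathlib import Path


def collect_bases(dump_dir):
    """Map lowercased module name -> set of observed base addresses."""
    bases = defaultdict(set)
    for path in Path(dump_dir).glob("*.json"):
        with path.open() as fh:
            dump = json.load(fh)
        for module in dump.get("modules", []):
            bases[module["name"].lower()].add(int(module["base"], 16))
    return bases


def main():
    dump_dir = sys.argv[1] if len(sys.argv) > 1 else "."
    bases = collect_bases(dump_dir)
    # Least-randomized modules (fewest distinct bases) come first.
    for name, seen in sorted(bases.items(), key=lambda kv: len(kv[1])):
        print(f"{name}: {len(seen)} distinct base address(es)")


if __name__ == "__main__":
    main()
```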

The Cobbler's Children and Source Code Integrity, or Why Your Source Code Repositories Need Better Shoes

I'm reminded of the saying 'The Cobbler's children have no shoes'. We consider our customer-facing products more important than our internal ones.

 

Of course we do, right? Our bills are paid by our customers and we’d like more of them, so an hour or dollar spent to improve their experience is easily justified.  However, allocating resources to internal processes and systems gets a different response.  You want to spend how much to do what?  Internal processes, systems and tools have to pass greater scrutiny to be “justified”. Since it’s “all in the family”, internal tools don’t have to be as pretty, well documented, reliable or secure, right? 

Oh, and doing it “right” just takes too much time.  “Just get it done instantly, if not sooner.”

 

DevOps is one way of making the development process faster and cheaper. Unfortunately, it's all too easy to let it devolve into anarchy. Doing DevOps the right way is harder. It's so much easier to spool up an instance of $Whatever_Tool_You_Need_Right_Now to fix some issue before a release than to plan for what you need and document what you're doing. Sure, you might be using the same tool as some other developer or team, but hey, when it's faster and easier to just create your own instance instead of coordinating systems, faster and easier often win out. Documentation and hardening, when done by yourself, just take too much time.

 

The result is something I call the 'Sergeant Schultz approach' – IT worries about enterprise-wide systems directly under their control and leaves the development environment alone until forced to pay attention, when it may be too late.

 

These systems may have been stood up quickly and aren't fully protected by your controls. You're not watching logs, tracking users or remediating vulnerabilities to ensure stable, hardened systems.

 

A loss of availability on these systems is annoying, but obvious. It may slow development, but it won't permanently affect your business.

But a loss of confidentiality and integrity can affect the enterprise as a whole. A source code leak might reveal some trade secrets to a competitor. A loss of integrity can be far worse.

 

Imagine a development server that doesn't benefit from the full protection of your controls. A malicious or negligent user can modify code for your mobile app, web app or device OS, creating a stability issue, back door or jackpot condition.

 

Discovering a back door or jackpot condition prior to code release is merely embarrassing. Remediating the issue post-release opens the company to loss of market share and goodwill. If the parade of horribles doesn't have your attention, consider the possibility that the jackpot condition is a negative one. Remember the Therac-25, the radiation therapy device that sometimes overdosed patients? That was mere negligence. Imagine what a malicious party could do.

 

Ok – so I've stated the obvious – it's easier to do it fast and loose than to take the time to plan things out. However, it IS possible to reduce the risk of compromise, even in a fast-moving DevOps environment. By following the four steps below, you will greatly reduce the risk of a catastrophic loss of integrity with a reasonable amount of effort.

 

We recommend the following:

 

Inventory tracking: IT should have, at a minimum, visibility into the DevOps environment – instances, users and permissions. If you have a SIEM or log management tool, have the instances report up to it.

 

Identity management: Each user with write capability should have their own credentials. Ideally, these are referenced back to the firm's LDAP/AD user store. If you can, require the use of 2FA to reduce the risk of credential compromise.

 

User tracking: Every change to source code should map back to a specific human user (a minimal enforcement-hook sketch follows this list).

 

Pre-built images: When feasible, systems should be standardized and centralized. Consider offering developers pre-hardened VM images with favorite development tools so they don't have to roll their own.
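
To illustrate the identity-management and user-tracking points above, here is a minimal sketch of a server-side pre-receive hook that rejects pushes containing commits from unknown authors or without a valid GPG signature. The allowlist and e-mail addresses are placeholders; a real deployment would pull identities from your LDAP/AD store and handle cases such as automation accounts and merge workflows more carefully.

```python
# Hypothetical pre-receive hook sketch. A pre-receive hook receives lines of
# the form "<old-rev> <new-rev> <ref>" on stdin for each ref being pushed.
import subprocess
import sys

KNOWN_USERS = {"alice@example.com", "bob@example.com"}  # placeholder allowlist


def check_range(old, new):
    """Return a list of problems found in the commits between old and new."""
    # An all-zero old rev means a newly created ref; check its whole history.
    rev_range = new if set(old) == {"0"} else f"{old}..{new}"
    out = subprocess.run(
        ["git", "log", "--format=%H %ae %G?", rev_range],
        capture_output=True, text=True, check=True,
    ).stdout
    problems = []
    for line in out.splitlines():
        commit, email, sig = line.split()
        if email not in KNOWN_USERS:
            problems.append(f"{commit[:12]}: unknown author {email}")
        if sig not in ("G", "U"):  # G = good signature, U = good but untrusted key
            problems.append(f"{commit[:12]}: missing or bad signature ({sig})")
    return problems


def main():
    failed = False
    for line in sys.stdin:
        old, new, _ref = line.split()
        for problem in check_range(old, new):
            print(problem, file=sys.stderr)
            failed = True
    sys.exit(1 if failed else 0)


if __name__ == "__main__":
    main()
```

Run on the central repository, a check like this keeps the "every change maps back to a human" guarantee enforced at the server rather than relying on each developer's workstation.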

 

 

We don’t need to make our tools as nice as what we offer our customers, but they need to be good enough. Your kids don’t need Louboutins, but they do need good enough shoes.

 

For the curious:

https://www.psychologytoday.com/blog/credit-and-blame-work/200812/cobblers-children-syndrome-in-the-workplace

 

Guidance - Flash Vulnerability CVE-2015-5119

During the Hacking Team breach, which came to light earlier this week, a large quantity of Hacking Team's internal data was posted online. Some of this data pertained to a 0-day (a vulnerability of which the vendor is not aware) in Adobe Flash versions 9 through 18.0.0.194 (CVE-2015-5119), which allows an attacker to execute code on a victim's computer if they browse to a website with a malicious Flash file embedded.

Guidance - OpenSSL Vulnerability CVE-2015-1793

This morning, OpenSSL released details of a vulnerability (CVE-2015-1793) affecting OpenSSL versions 1.0.2b, 1.0.2c, 1.0.1n and 1.0.1o for client connections; listening servers are unaffected unless they validate client certificates. The vulnerability allows an attacker to use a leaf certificate as if it were a Certificate Authority certificate and issue rogue certificates to themselves: anyone can issue themselves a certificate for any domain and the OpenSSL library will not notice, allowing them to impersonate a server and pass TLS/SSL-based checks.
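
As a quick first check, the sketch below (an illustration, not an official detection tool) uses Python's ssl module to print the OpenSSL version string the local Python runtime is linked against and flags the releases named above. Note that this only covers the OpenSSL copy linked into that Python build; other applications on the host may bundle their own copies and need to be checked separately.

```python
# Report the OpenSSL version linked into this Python runtime and flag the
# releases named in the advisory (1.0.1n, 1.0.1o, 1.0.2b, 1.0.2c).
import ssl

AFFECTED = ("1.0.1n", "1.0.1o", "1.0.2b", "1.0.2c")

version = ssl.OPENSSL_VERSION  # e.g. "OpenSSL 1.0.1o 12 Jun 2015"
if any(v in version for v in AFFECTED):
    print(f"Potentially affected by CVE-2015-1793: {version}")
else:
    print(f"Version string not in the affected list: {version}")
```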

The Harms of Forced Data Localization

How we store data – and how we think about keeping our memories available over the long term – has changed in the last few years. The world has become better at keeping data secure and safe by distributing it to multiple continents. However, some leaders are calling for "national Internets" – censored, walled gardens set up to appease special interest groups that range from political factions, to property cartels, to religious police. Other leaders have taken a different tack, called forced localization; rather than blocking your communications, they want to require that all your data (and all the computers that handle it) be inside a single country: theirs, for whichever country they represent. These would be major changes to the structure of the Internet – changes that would harm both businesses and the general public.

Scarcity of Cybersecurity Expertise

Scarcity of cybersecurity experts is a real problem that can be quantified and described – but not one that can easily be solved. Limited resource availability, the basis for our entire economic system, is ordinarily a problem of finding raw materials or advanced machinery, not one of hiring the workers we need to defend our assets – but with more than one million cybersecurity positions unfilled worldwide, currently identified cybersecurity needs could not be met even if every employee at GM, Costco, Home Depot, Delta, and Procter & Gamble became a security expert tomorrow. Those one million positions span all industries, specializations, and requirements, and include approximately 25,000 non-military positions in the United States' federal civil service.