Tracking Bad Guys Who Enter IT Systems
RSA's Bret Hartman on Monitoring Behavior of Systems' Intruders
Bad guys and bad things will get into IT systems despite safeguards built to prevent just that.
"Security isn't black and white and never has been," Bret Hartman, chief technology officer of RSA, the IT security arm of storage maker EMC, says in an interview with GovInfoSecurity.com (transcript below). "The fact is the systems are so complicated that you never can be sure; you know that there is some certain amount of penetration."
That said, Hartman contends effective countermeasures that include studying the behavior of those accessing systems can address penetrations. "You keep looking at people's behavior even once they are through that initial authentication check," he says. "Constantly look at how they behave each time to determine, do I trust this person? Are they starting to do something that is maybe a little whacky, that maybe I don't trust them as much as I did five minutes ago? That notion of managing risk and looking at behavior makes it perhaps more acceptable to say, 'Okay, we have bad guys that might be in the system, but at least we're watching them every minute and hopefully detecting them before they do anything too bad.'"
In the interview, Hartman also addresses the role geolocation plays in cloud computing. Data physically reside somewhere in the cloud, but the location isn't always evident. And he says that poses problems for organizations seeking to employ cloud services while remaining compliant with laws and regulations.
"It turns out that when you move things to the cloud, you lose the concept of what applications and what data are where," Hartman says. "It turns out for risk management and compliance purposes, knowing where a piece of data is on the planet turns out to be really, really important, especially if you don't want to violate laws or you want to deal with regulatory compliance."
Hartman says researchers at RSA are exploring new approaches to geolocation technology "to (know) just what is running where on the planet to make sure it is, in fact, compliant."
In the interview, Hartman also discusses:
- Creating a trustworthy cloud environment, and
- Detecting and preventing the advanced persistent threat
Before becoming RSA's CTO, he served as CTO for IT security at parent EMC. Before joining EMC, Hartman served as director of technology services for SOA appliances at IBM.
At the National Security Agency, Hartman helped to create the Defense Department's Trusted Computer System Evaluation Criteria, known as the Orange Book.
Hartman earned a bachelor's degree in computer science and engineering from the Massachusetts Institute of Technology and a master's degree in computer science from the University of Maryland.
The Cloud Challenge
ERIC CHABROW: RSA has two research centers, RSA Laboratories and the RSA Anti-Fraud Command Center. Please tell us about some of the innovative research going on at these labs and what kind of new security technologies we can expect to see in six months, a year or two out.
BRET HARTMAN: It is a hugely interesting time to be working in this area. On the labs front, there is quite a bit of work going on in terms of thinking about the cloud in particular. The challenge that we see now is just an increasing adoption rate in terms of people thinking about the cloud. Really a huge change over the last year, and in particular thinking about the hybrid cloud: how do we take an organization's infrastructure that may sit in the datacenter and, little by little, translate it off into those public clouds, so we typically have a mixed environment?
One example, just one of several in the labs area, is thinking about how we expand on the notion of geolocation. It turns out that when you move things to the cloud, you lose the concept of what applications and what data are where. So for risk management and compliance purposes, knowing where a piece of data is on the planet turns out to be really, really important, especially if you don't want to violate laws, say, or you want to deal with regulatory compliance. One of the clever new approaches is to use geolocation to determine just what is running where on the planet to make sure it is in fact compliant.
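The compliance idea Hartman describes can be sketched as a simple placement check against a data-residency policy. This is a hypothetical illustration, not RSA's technology; the data classes, country codes, and policy table below are invented for the example.

```python
# Hypothetical geolocation-based compliance check: before placing data
# on a cloud host, verify the host's country against a residency policy.
# All names and rules here are illustrative assumptions.

# Countries where each (invented) data class may legally reside.
RESIDENCY_POLICY = {
    "eu_personal_data": {"DE", "FR", "IE"},   # e.g., EU residency rules
    "us_health_records": {"US"},              # e.g., US-only health data
    "public_marketing": None,                 # None = no restriction
}

def is_placement_compliant(data_class: str, host_country: str) -> bool:
    """Return True if data of this class may run in the given country."""
    if data_class not in RESIDENCY_POLICY:
        return False  # fail closed on unknown data classes
    allowed = RESIDENCY_POLICY[data_class]
    return allowed is None or host_country in allowed

print(is_placement_compliant("eu_personal_data", "DE"))   # True
print(is_placement_compliant("us_health_records", "IE"))  # False
```

The check fails closed: a data class missing from the policy table is rejected rather than silently permitted, which matches the compliance-first posture Hartman describes.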
On the Anti-Fraud Command Center side, the challenge there is always trying to stay one step ahead of the bad guys. There is a never-ending list of threats that are out there. Among the broad categories, we think in particular about this concept of advanced persistent threat, which is definitely the buzzword of the day. What are techniques that let us do better in terms of, number one, detecting that those advanced persistent threats are even there, and two, how do you remediate against them? Those are huge challenges.
Locating Data in the Cloud
CHABROW: You talk about geolocation, what are your scientists looking at as you explore not only this, but other areas of changes within the way computing is done to make sure they are secure?
HARTMAN: The primary focus in terms of the change, as we think about this emerging hybrid cloud model, is how do we have a basis of trust, how do we have a basis of assurance in terms of what is running? Is it adequately protected, dealing with things like multi-tenancy issues of who else might be co-located with my cloud? They are all variations on the same theme of how do I trust this new environment, how do I have visibility in terms of what is running and where it's running, and am I comfortable with it? The notion of visibility, control and compliance in the cloud turns out to be quite important. That spans a number of different areas; that includes things like knowing the identities, the authentication, information-based security, collecting of logs, governance, risk and compliance, and geolocation-based policies, as an example.
All of those, at the end of the day, need to fit together in terms of what you might call a secure chain of trust. How do you establish a secure chain of trust from that bare-metal hardware, through BIOS, through the operating system, through the virtual layer, through the application stack, all the way up? How do the security mechanisms that exist at every single layer in the stack tie together to give you that secure chain of trust, and then how do you measure how secure that is? That is the goal we have to address.
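The layered measurement Hartman describes resembles the TPM-style "extend" operation, where each layer's hash is folded into a running register so that any change anywhere in the chain changes the final value. A minimal sketch, with invented layer names; this is an illustration of the general technique, not RSA's or any vendor's actual implementation:

```python
# Sketch of a measured chain of trust: each layer's measurement is
# folded into a running hash, TPM PCR-extend style. Layer names and
# versions are made up for the example.
import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    """Fold a new layer measurement into the running trust register."""
    return hashlib.sha256(register + measurement).digest()

# Measure each layer of the stack in boot order (assumed layers).
layers = ["bios-v1.2", "hypervisor-v5.0", "guest-os-v3.1", "app-stack-v7"]

register = b"\x00" * 32  # registers start zeroed, as in a TPM PCR
for layer in layers:
    register = extend(register, hashlib.sha256(layer.encode()).digest())

# Compare the final value against a known-good value recorded when the
# stack was last known to be trustworthy.
print(register.hex())
```

Because the hash of each layer is chained into the next, tampering with any single layer (say, the BIOS) yields a different final register, which is how one "measures how secure that is" against a recorded baseline.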
CHABROW: Is this a different way of thinking about security, or has this been a way security has always been thought about?
HARTMAN: In a way, it's a natural evolution. The thing that makes it different, as we think about this kind of cloud model, is that the notion of physical control of course goes away. When an organization has something running in their datacenter and it's their employees that are running it, you have a basic level of trust: you have brick walls and barbed-wire fences surrounding your datacenter, and people enter the datacenter with badges. It is relatively static in terms of what they've built and what is installed. That notion of trust becomes easier because there are much fewer degrees of freedom. As you move into this model, you need some new tools. All the same tools that we have are still there, but you need some new ones. Part of the whole point of the cloud is the fact that it is such a dynamic and flexible model; that's why it is so valuable. You can save a lot of money that way, but that dynamism and flexibility comes at a cost of potentially increased risk. It is still your classic defense-in-depth argument that we've had forever, but we're applying it to a new set of technologies.
CHABROW: Obviously, a lot of people would be worried about putting some sensitive data on a public cloud. Is that a kind of concern that might go away three or five years from now, or is that something that you think will be around for a long time?
HARTMAN: You mean the notion of a public cloud?
CHABROW: The notion of the security of a public cloud, that people would be worried today about putting sensitive information on a public cloud. Is technology evolving to the point where that may not be quite as much of a concern, say, two to three to five years from now?
HARTMAN: The concerns in terms of can I trust the public cloud will diminish as technology vendors like RSA fulfill that requirement and as we embed security into the stack. The thing to recognize is that the notion of trust differs depending on your requirements. If you are thinking about a consumer-based application versus a healthcare application versus something that processes classified data, the requirements for trust are far different in those environments and the security mechanisms are different. Part of the challenge is people view it as black or white: either I trust that cloud service or I don't. Just like any other information system, it all depends on what you will use it for and what your tolerance for risk is. What we're seeing as we meet with different service providers is that they are targeting different levels of assurance, depending on, for example, whether this is for a particular part of the federal government, or focused on something around the education system, or dealing with healthcare and what kinds of measures need to be in place there, versus say an intelligence system or an operational system for one of the military services. As those specific offerings mature, the concern absolutely will diminish, just like with every single new technology that comes down the pike. There are concerns in terms of risk: Is it mature? Do we have the right controls in place? Just like with every one of those, technology will definitely be there to address those requirements.
Advanced Persistent Threat
CHABROW: Let's talk a little bit about the advanced persistent threat. What are the main challenges of dealing with that?
HARTMAN: They are, of course, huge. The definition of advanced persistent threat is the fact that it is targeted at individuals, and it is so insidious because the malware that makes it through into your system, whether that is sitting on, let's say, your laptop or on a server, is very difficult to detect, and the actual exploitation of that malware may not take place for a long time, if ever. It becomes challenging. It is not like something that just blasts through the front door and deletes your hard drive, or attempts to. It is very, very pernicious and very narrowly focused.
Part of the challenge around advanced persistent threat is - again, it's defense in depth - recognizing that those countermeasures we put in place to prevent that advanced persistent threat from infiltrating the system are by definition less and less effective. The complexity of the stack is so great, there is so much code, that it is impossible to get rid of every last vulnerability that exists in that stack; we just see it over and over again. As long as people keep writing code and making patches and making multi-million-line application stacks, there will be vulnerabilities and some small percentage of APTs will get through. What that means, especially with APTs, is a shift of emphasis. It doesn't go away, but there is less emphasis on infiltration and more on detecting an exploitation: trying to prevent it if possible, but even if you can't prevent it, then trying to do something about it so it doesn't happen again. That is hard because they are so targeted. The fact is that when you move to that exploitation, chances are that's going to look a whole lot like typical application access. If it is an exploitation, say we're talking about moving money or accessing somebody's patient record, it may not be that much different than what a human being would do. But part of the trick with an APT is, I think, being able to tie different sources of evidence up and down the stack together to have a higher degree of confidence that this is truly exploitation and not just something that a typical user is doing.
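The evidence-tying idea Hartman mentions can be sketched as combining several weak signals, each of which alone looks like normal use, into one confidence score. The signal names and weights below are illustrative assumptions, not any product's actual detection logic:

```python
# Hypothetical sketch of correlating evidence from different stack
# layers: each weak signal alone is inconclusive, but several together
# raise confidence that an exploitation is under way.

# Invented signals with rough per-signal confidence weights.
SIGNAL_WEIGHTS = {
    "login_from_new_country": 0.3,
    "off_hours_access": 0.2,
    "bulk_record_export": 0.4,
    "process_spawned_shell": 0.5,
}

def exploitation_confidence(observed_signals):
    """Combine independent weak signals into one confidence score.

    Treats each weight as an independent indication of compromise and
    combines them as 1 - prod(1 - w): any single signal stays weak,
    but several together approach certainty.
    """
    confidence = 1.0
    for s in observed_signals:
        confidence *= 1.0 - SIGNAL_WEIGHTS.get(s, 0.0)
    return 1.0 - confidence

print(round(exploitation_confidence(["off_hours_access"]), 2))  # 0.2
print(round(exploitation_confidence(
    ["login_from_new_country", "bulk_record_export",
     "process_spawned_shell"]), 2))                             # 0.79
```

A single off-hours login barely moves the score, but stacking a new-country login with a bulk export and a spawned shell pushes it close to certainty, which is the "higher degree of confidence" a correlated view provides over any one layer's logs.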
CHABROW: Is there a shift in thinking in developing security technology that there is a certain acceptance that some how bad things or bad guys will get into systems, but what needs to be done is if they do to just deal with that?
HARTMAN: I think so. Now I know this is a very difficult thing to be able to accept, particularly of course as you get into the military, the military command; that is a really tough thing to be able to deal with. Security isn't black and white and never has been, but the idea was that you have this concept of absolute security because you have the countermeasures in place. The fact is the systems are so complicated that you never can be sure; you know that there is some certain amount of penetration. And of course that was always true. There is always a risk of, whatever, espionage or insider attack. There are always, always those issues. But I think what we realize much more now than we did is that there are effective countermeasures that can deal with these notions of the shades of gray with respect to security. And really the way to deal with those shades of gray is to have a more behavioral approach, a more dynamic way to look at security. Rather than saying, okay, you authenticate, and either you are a good guy or a bad guy, you do your best, but you keep looking at people's behavior even once they are through that initial authentication check. Constantly look at how they behave each time to determine, do I trust this person? Are they starting to do something that is maybe a little whacky, that maybe I don't trust them as much as I did five minutes ago? That notion of managing risk and looking at behavior makes it perhaps more acceptable to say, "Okay, we have bad guys that might be in the system, but at least we're watching them every minute and hopefully detecting them before they do anything too bad." That's a good change.
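The continuous re-evaluation Hartman describes can be sketched as a per-session trust score that starts high after login and is re-scored on every action. The class, thresholds, and penalties below are invented for illustration; real risk-based authentication products use far richer behavioral models.

```python
# Minimal sketch of risk-based authentication: trust is not decided
# once at login, but re-evaluated on every action. Numbers are
# illustrative assumptions.

class Session:
    def __init__(self, user):
        self.user = user
        self.trust = 1.0  # full trust right after authentication

    def observe(self, action, risky=False):
        """Re-score trust on every action, not just at login."""
        if risky:
            self.trust -= 0.3  # unusual behavior erodes trust
        else:
            # normal use slowly restores trust, capped at full trust
            self.trust = min(1.0, self.trust + 0.05)
        return self.trust

    def allowed(self, threshold=0.5):
        """Below the threshold, force step-up auth or block the session."""
        return self.trust >= threshold

s = Session("alice")
s.observe("read_report")                       # normal activity
s.observe("bulk_download", risky=True)         # a little whacky
s.observe("access_admin_panel", risky=True)    # trust keeps dropping
print(round(s.trust, 2), s.allowed())          # 0.4 False
```

The point of the sketch is the shape of the policy, not the numbers: authentication becomes a continuously updated risk estimate, so a user who was trusted five minutes ago can be challenged or cut off as soon as their behavior drifts.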