"We have to make sure that we don't hold ourselves hostage to compliance activities," Ross says during the second of a two-part roundtable discussion, moderated by Information Security Media Group's Eric Chabrow [transcript below].
IT security professionals can't lose sight of the innovation that's so critical to cybersecurity, Ross says.
For John Carlson, executive vice president of BITS with oversight of the organization's cybersecurity and fraud prevention initiatives, progress and innovation come through public-private partnerships that help communicate the risks, develop a response and bring change to the various challenges organizations in all industries face. "We need to work in partnership with different organizations to build security into the infrastructure, including software assurance and software security," Carlson says.
Ross agrees. "We need to work closely with industry in this partnership so we can take advantage of the wonderful innovations that come out of our private-sector companies and we can take advantage of those new techniques and technologies," he says.
In part two, the four panelists, including George Moore, the State Department's chief computer scientist, and Rebecca Herold, principal at Rebecca Herold and Associates, discuss:
- Scoring employees on how effectively they implement IT security;
- Communicating information risk challenges to senior executives and other non-IT and IT security personnel; and
- Exploiting research and solutions offered by other industries to improve the information risk framework process.
Automation in Info Risk Management
ERIC CHABROW: As we look at organizations today, there are more stakeholders, there are more threats. As you said earlier, there's a lot of complexity here. Can proper information risk management be done without automating?
GEORGE MOORE: I would say not. The attacks are coming at automated speed over the Internet and the attackers have many more people than we have to defend, so in my opinion at least, in the kind of environment that I sit in, it would be impossible to do it without automation.
JOHN CARLSON: Agreed.
REBECCA HEROLD: I would say, too, that automation is necessary in many different ways. One of the things I've seen over the years is that organizations don't even know where their data is located, so how are you going to protect that data if you don't know where it resides? One type of automation I've found very helpful is the tools you can now use to identify where your critical data - PHI [protected health information] or any other type of sensitive information - is located, and then keep an inventory of it up-to-date automatically. That way, you'll know where the data is and what the risk levels are based on its location, because so many breaches occur in areas where people didn't even know the data was located to begin with. That's just one example of an automation tool that's very helpful for risk reduction now - the inventory tool.
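The inventory automation Herold describes could be sketched roughly as follows. This is a minimal illustration, not any particular product: the regular-expression patterns, the `.txt`-only file filter and the function name are all invented for the example, and real data-discovery tools ship far richer detection rules and file-format support.

```python
import re
from pathlib import Path

# Illustrative patterns for sensitive data; real discovery tools use
# much larger, more precise rule sets (and scan many file formats).
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[- ]?){3}\d{4}\b"),
}

def inventory_sensitive_data(root):
    """Walk a directory tree and record where sensitive data appears."""
    findings = []
    for path in Path(root).rglob("*.txt"):  # simplified: text files only
        text = path.read_text(errors="ignore")
        for label, pattern in PATTERNS.items():
            hits = pattern.findall(text)
            if hits:
                findings.append({"file": str(path), "type": label,
                                 "count": len(hits)})
    return findings
```

Run on a schedule, the resulting inventory gives the up-to-date map of where sensitive data lives that Herold says most organizations lack.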
CARLSON: In addition to the automation point - which I agree with - you still need forums for experts to talk to one another about the changing threat environment. That's one thing the financial-services community has done a very good job of, dating back 12 or 13 years, in establishing an information sharing and analysis center, or ISAC. Our ISAC is a forum for experts to get together and talk, more or less continuously, about the changing threat environment and tactics for responding. Through that means and others, such as the Financial Services Sector Coordinating Council, it has also provided forums for discussions with government officials about the changing threat environment and how we can work together in partnership - with the appropriate controls in place to protect the information, or to avoid subverting, say, a law-enforcement investigation that may be going on concurrently - so that we can protect the industry, the sector and the economy from any large-scale cyber attack or malware attack that could affect multiple institutions.
I think it's a combination of good, strong controls with automation at individual institutions, but also, a way to collaborate across the industry, and where necessary, with government agencies and with other sectors, since many times we're all using the same operating system or many of the same operating systems, or the same suppliers across multiple sectors, so we have to recognize that it's a combination and it really has to be a collaboration and a partnership.
RON ROSS: I would agree with everybody that it's a combination. Certainly, we can't do this job of continuous monitoring without automation. It's a necessary capability, but not a sufficient one. As Rebecca mentioned, there are a lot of things automation can do that humans don't do very well - certainly inventory management. Something I know George has been very much involved in is the automated checking of configuration settings, which is part of the SCAP [Security Content Automation Protocol] program that NIST runs. The configuration settings established on laptop computers and portable devices actually eliminate attack vectors that adversaries may use to compromise your systems.
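The automated configuration checking Ross mentions can be sketched, in grossly simplified form, as comparing a machine's reported settings against a required baseline. The setting names and values below are hypothetical examples, and real SCAP content expresses checks in standardized formats such as XCCDF and OVAL rather than a flat dictionary.

```python
# Hypothetical baseline of required settings (invented for illustration).
# This simplified check requires exact matches; real benchmarks also
# express ranges, minimums and platform-specific logic.
BASELINE = {
    "password_min_length": 12,
    "screen_lock_timeout_minutes": 15,
    "firewall_enabled": True,
}

def check_configuration(actual):
    """Return (setting, required, actual) for each deviation from baseline."""
    deviations = []
    for setting, required in BASELINE.items():
        value = actual.get(setting)
        if value != required:
            deviations.append((setting, required, value))
    return deviations
```

Running such checks continuously across a fleet is what turns configuration policy into the kind of automated monitoring the panel is describing.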
Ultimately, it's a necessary piece, but not sufficient, because there are a lot of things that only humans can do and do best - certainly when you talk about the insider threat and being able to monitor bad actors, or people whose privileges should be reduced because of certain types of activity. There's a whole series of things in the management and operational space that are also very amenable to continuous monitoring - though not with automation - on a regular basis as determined by the organization. I think the combination of these activities will work well together in what we would call a very robust continuous-monitoring program.
Understanding an Organization's Risk
CHABROW: Over at the State Department, you developed a scoring system that helps people within your organization understand how they're doing at protecting information assets, as well as at developing a good risk program. Can you address that a little bit, George - how it works and why it's useful?
MOORE: Our system currently focuses mostly on technical risks; I'll talk in a moment about non-technical risks. Right now, we score those mostly as vulnerabilities: if this vulnerability for this weakness existed by itself, how much risk would it represent? We start with the Common Vulnerability Scoring System that was developed by NIST as a basis, then adapt and expand it to some other areas to look at actual vulnerabilities from the National Vulnerability Database that would be on the machine: missing patches, configuration settings, old passwords, lack of training and a number of other things. We try to put those in a form we can simply add, to get the total risk on a machine or across a segment of the network, and assign that to the people who are responsible. Then we convert it to a simple letter grade based on something like the average risk per machine - that's not exactly it, but it's close.
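The arithmetic Moore describes might be sketched as follows. This is only an illustration of the idea - sum per-vulnerability scores on each machine, average across a unit, map to a grade - and the score thresholds are invented here; Moore notes the State Department's real formula is only approximately "average risk per machine."

```python
def unit_grade(machines):
    """Grade an organizational unit from per-machine vulnerability scores.

    machines: dict mapping machine name -> list of vulnerability scores
    (e.g. CVSS-derived numbers). Thresholds below are hypothetical.
    """
    total = sum(sum(scores) for scores in machines.values())
    avg = total / len(machines)  # average risk per machine
    for cutoff, grade in [(5, "A"), (10, "B"), (20, "C"), (40, "D")]:
        if avg <= cutoff:
            return grade
    return "F"
```

The point of collapsing the detail into one letter, as Moore explains next, is that a non-technical manager can act on "F" without reading a vulnerability report.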
The thing about the letter grade is that it lets managers who are not computer-security people know how much they should focus their IT resources on the actual technical risk. We've been working with Ron Ross and his risk management framework on trying to move that up a level, from what he described as level three up to level two. The first approach focuses mostly on inputs to security, or fixing individual controls. The effective ... approach, which we're starting to use, focuses on 15 key outcomes the security system needs to achieve and specific ways to measure whether we're effectively achieving those outcomes. One of those might be, as an example, keeping unmanaged hardware off the network: making sure that for every piece of hardware on the network - or piece of software, for that matter - you know who's managing it, so you can assess whether they're doing a good job. But first you have to know who that is, the assumption being that anything that isn't being managed is going to be a really big risk and needs to come off the network. There are about 15 of those, and by measuring them we can determine whether we need to take a detailed look at the individual controls that produce the overall result.
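The unmanaged-hardware outcome Moore gives as an example reduces to comparing what network discovery sees against the inventory of devices with a responsible manager. A minimal sketch, with hypothetical device names:

```python
def hardware_outcome(discovered, managed):
    """Measure the 'no unmanaged hardware' outcome.

    discovered: set of device IDs observed on the network;
    managed: dict mapping device ID -> responsible manager.
    Returns the unmanaged devices plus the fraction under management.
    """
    unmanaged = sorted(d for d in discovered if d not in managed)
    fraction = 1 - len(unmanaged) / len(discovered) if discovered else 1.0
    return unmanaged, fraction
```

Anything in the first returned list is, by Moore's assumption, a big risk that needs an owner or needs to come off the network; the fraction is the kind of outcome metric that can feed a scorecard.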
CHABROW: The effectiveness of that, especially in dealing with non-IT managers - how does that work out?
MOORE: We found that it's very effective in getting the non-IT managers to care about security, because most of our ambassadors overseas have never had an "F" in their life, and if they suddenly get an "F" on security, they want to get it fixed. They can understand that, whereas if we send them a long, technical report listing things like cross-site scripting, they have no idea what we're talking about. It just doesn't work from a managerial point of view. But we also think it's very important not just to measure the inputs to security, but to find ways to measure the end result. We don't know how that's going to work out yet, because it's still an experimental process, but we're very hopeful.
CARLSON: One thing this conversation points to is the challenge we all have in communicating the risks - communicating what individuals and executives should do to respond to the changing challenges. In some respects, the technology community has created its own language, and that has made it very difficult to have those conversations: on one end with the consumer, about safe - or safer - computing practices, and on the other with the senior executives who have to fund, or authorize the funding of, the different initiatives that are out there. That's an ongoing challenge: we in the IT industry and the technology community need to do a better job of communicating at both ends of that spectrum.
The other thing I'd like to talk about that I think is important is the need to focus on the infrastructure as well as on policies. That's a space where we've put quite a bit of time and energy over the years, trying to work with other responsible parties to solve various problems. For example, today we're working very closely with ICANN [Internet Corporation for Assigned Names and Numbers], the not-for-profit that oversees the Internet's domain names. They have a proposal to establish an unlimited number of top-level domain names, and for many years we have advocated that ICANN do a better job of ensuring that domains associated with financial services have stronger security controls, so that consumers are protected and aren't confused into visiting a website that may not actually be a bank but a fraudster. Those are the types of examples where we need to work in partnership with different organizations to build security into the infrastructure, including software assurance and software security. The same goes for cloud computing and the need for greater assurance that if you're doing functions or activities via cloud computing, you have a recognized level of assurance that those activities are secure and safe and that privacy is protected.
ROSS: I wanted to quickly follow up on what John just talked about. I think we have to make sure we don't hold ourselves hostage to compliance activities. In the work George Moore and John Streufert have been doing for the State Department, thinking out of the box is really a big part of the whole initiative, and one of the things we can't lose sight of is that innovation is critical to cybersecurity - to our whole business. That brings up John's point about public-private partnerships. We need to work closely with industry in this partnership so we can take advantage of the wonderful innovations that come out of our private-sector companies, and take advantage of those new techniques and technologies. That's really important.
There's some great research going on. For example, some of our universities in the D.C. area are looking at bringing a system back to a known, secure state almost in real time. If that kind of research is ultimately successful, it really wouldn't make any difference what the adversary throws at you: if you can reconstitute the system faster than the attack can be exploited while the malware is actually on it, that changes the whole dynamic of how we protect our systems. It's one small example among many going on within our universities and our industry, and I'm always excited to learn new things and new ideas and incorporate them back into our cybersecurity standards and guidelines.
MOORE: One thing I failed to mention a moment ago, when we were talking about how to use scoring, is that we were also able to measure the impact of using it. When we did this at AID [United States Agency for International Development], we achieved a two-thirds reduction in risk in the first year - the first six months, actually. At the State Department, we got a 90-percent reduction in the first year. So not only can you use scoring to motivate the executives and give the technicians information about what to fix first; you can use it to measure the overall reduction in risk and determine whether you're having an impact.
HEROLD: As we're talking about all of these risks - and I agree with what you've been saying - one of the things though that complicates returning to a normal state is when you're dealing with the Internet and with information that's getting out on the Internet. I've been trying to help organizations for the past few years with social-media issues, and the risks there are very complicated, because you might have a denial-of-service attack or you might have some other type of attack where it's coming from the Internet or maybe it's through some other type of venue that's related to Facebook or Twitter or somewhere else.
But the problem I've seen is that you might be able to return your network to fully functioning, but if the data has gotten out there, in the blink of an eye you have perhaps thousands or even millions of customer records that have been copied, and you don't know where they are. That's what a lot of organizations are now struggling with: how to keep their data from being put onto the Internet - and, once it is out, not being able to get it back - because basically, once it's out there, it's very hard to put the genie back in the bottle, so to speak. I think that's a risk area a lot of organizations are really struggling with: how to protect personal information and address the privacy issues even when you do have the ability to address the more purely information-security issues.
MOORE: I think Ron's point was that ... if your data is in your network and if you can return it to the desired state quickly, you can do that before the attacker is able to exploit the tool that they've put on your network and exfiltrate the information. But you're absolutely right, Rebecca. Once it's out there, it's out there, and it'll be out there.
HEROLD: Yeah. Right now, there are a lot of breaches occurring because of data that's been leaked out through exploits that are coming through social-networking sites, and I've found that a lot of organizations are really trying to figure out how you can prevent that from happening, so that's another area of risk that I think is going to just continue to evolve over the next few years.
MOORE: I also want to mention something about privacy. We've been working very hard with the CIO Council - the privacy committee - and we actually have a new appendix that we're going to be putting into our security-control catalog that deals specifically with privacy controls. They're based on the Fair Information Practice Principles, so they're grounded in international standards. Again, we're trying to elevate the whole discussion of privacy to the level we have in our security-control catalogs, so those controls can be understood and implemented by organizations across the federal government. That's a very important issue, and I'm glad Rebecca brought it up.
HEROLD: Yeah, and here's an FYI for you, too. I mentioned the smart grid earlier; that work goes back to June of 2009. One of the first things I had our group do was a privacy impact assessment. If you look at NISTIR 7628, we have a whole volume in that document dedicated just to privacy, and right now our group is actually creating privacy use cases that the energy sector will be able to use to help ensure privacy - like you were saying, mapped to the FIPPs - is in place and being addressed, so those things don't get overlooked.
Similar Problems for Different Industries
CHABROW: We're coming to the end. Feel free to have some concluding thoughts. Ron?
ROSS: Our focus in everything we're doing now is to work with our federal agencies to operationalize the cybersecurity standards and guidelines we've produced in the FISMA project over the past seven or eight years. Certainly, the focus is shifting from the details under the hood, where we're chasing every single vulnerability, back up to the three tiers we described earlier: a focus on cybersecurity and risk-management governance at tier one; a good enterprise architecture implementation with a good cybersecurity architecture at tier two; and all of those activities then informing how we develop, upgrade and build our information systems down at tier three.
A lot of what our CISOs and CIOs face today comes down to the fact that we're in some sense asking folks to defend systems that are indefensible, so we're going to focus a lot more on building a better product and system upstream, so the things we have to deal with downstream are a little easier to handle. That means reducing complexity; looking again at connectivity - does everything have to be connected to everything else? Probably not - and then the cultural issues associated with every organization. Those are the three C's that don't come up on the radar a lot, but we're focusing an awful lot on them to help improve the kind of security we have within our federal agencies today.
MOORE: What I take away from what our colleagues in the health and financial sectors said is that we really have similar problems. We have similar kinds of attacks, and we've often found that the effective solutions are similar as well. As our economy becomes increasingly dependent on online activity, these information issues create significant risk in a number of areas - social disruption, economic disruption - whether intentional or accidental. Because this risk is so important, an environment where governments and the private sector work together to find solutions is going to be a lot cheaper and more effective than having us all reinvent the wheel separately. It's really great that NIST has already done so much along these lines, and it's also good to recognize that we need a lot more cooperation and work along these lines on all sides.
CARLSON: I would close by saying that our focus is on protecting consumers - which includes businesses as well as our member financial institutions - and it really starts with understanding the changing risk environment and putting controls in place to mitigate those risks. Increasingly, that results in the need for partnerships and collaboration with many different parties, with the goal of building security into the infrastructure, the policies and the procedures. We're also potentially in a period where there could be substantial new legislation on cybersecurity requirements and controls, and perhaps even opportunities to significantly expand the sharing of information between government and private industry. All of those things need to be done in a way that protects the infrastructure and the privacy of individuals, but is also reflective of the changing risk environment we're currently dealing with.
HEROLD: I agree with all the tips that have already been given. Definitely, we need that framework, we need the collaboration. Learn from what others are doing and also work with them. Use the tools and automation where you can. Definitely, you need to have a good governance structure in place, and you need to make sure that the folks who are managing your information and your systems know what they're doing.
I guess the other thing I want to emphasize is what I've found over the years to be such a weak point, and that's business partners. One of the things I'm really trying to do is make sure that small and medium-sized businesses - which I think are a huge Achilles' heel for a lot of the organizations working with them - understand what they need to do to protect the information they're processing for their business partners. As you're thinking about risk management, don't forget about your business partners. You need to make sure they aren't leaving a huge hole through your back door even when you have everything locked up elsewhere. That's one thing I'd like to pass along that might be a little different from what's already been said.