Putting to Rest RSA Key Security Worries
Impact on Online Transactions Seen as Minimal

A recently published research paper that raised questions about the efficacy of RSA public-private key cryptography shouldn't be too concerning for IT security practitioners, says Eugene Spafford of Purdue University. And although the research has since been disputed, Spafford explains why there's still value in such a discussion.

The research paper, entitled "Ron was Wrong, Whit was Right," concludes that the way random numbers are generated for RSA encryption keys could, in rare instances, make a secret number public. And that could create a potential vulnerability that hackers might exploit, the researchers say.

Spafford says the exposed keys aren't the type that would be used by businesses such as financial institutions that conduct sensitive transactions on the Internet.

What apparently happened is that some smaller organizations created their own Secure Sockets Layer (SSL) public-private key sets using software to generate random numbers, Spafford says. These organizations may have drawn on a small set of seed values, which could cause the same set of large prime numbers to be generated more than once.

So what lessons can be learned from this? According to Spafford, one of the big problems with encryption is "the whole aspect of key generation and management, and that has been the case for a very long time."

He argues that although security practitioners can develop and use algorithms that are effectively unbreakable, if they're unable to generate truly random keys and keep them safe from prying eyes, "then it doesn't matter how strong the algorithms really are."

"There have been a number of systems that, going back in time, the generation of a key ... didn't use enough randomness and resulted in keys that were more trivially broken," Spafford says in an interview with Information Security Media Group's Eric Chabrow [transcript below].

Spafford says this kind of scrutiny and review of security systems is a necessary element in ensuring their validity. "It's important that we regularly verify our assumptions, verify that the systems we're using really work the way that they're supposed to work," he says.

In the interview, Spafford:

  • Summarizes the problem raised in the research paper;
  • Evaluates the response by RSA Chief Technologist Sam Curry to the paper;
  • Explains why such research into possible flaws of encryption and cryptographic solutions, even when disputed, is valuable.

Spafford also serves as executive director of the Purdue Center for Education and Research in Information Assurance and Security. Widely considered a leading expert in information security, Spafford has served on the Purdue computer science faculty since 1987. His research focuses on information security, computer crime investigation and information ethics.

RSA Public-Key Security Issue

ERIC CHABROW: Please take a few moments to summarize what you see as the problem the researchers raise in the paper entitled, "Ron was Wrong, Whit was Right," and the response by RSA Chief Technologist Sam Curry to the paper.

EUGENE SPAFFORD: What the researchers found is that by collecting a very large number of existing public keys and doing some analysis, they were able to find common factors that were used in generating those keys. This is a weakness that can be exploited because if one can find those factors, it's possible to find the private keys associated with them. The conclusion that they make in the paper is that this is a fundamental weakness in using the RSA algorithm, but in reality what it demonstrates is that there are weaknesses if a random number generation mechanism that's used to generate the keys isn't really truly random. It's not so much a flaw with RSA as it is with the implementation that has been used to generate many of the keys.
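The attack Spafford describes can be sketched in a few lines of Python. If two RSA public moduli happen to share one prime factor, a single greatest-common-divisor computation factors both of them. (The primes below are toy-sized stand-ins chosen for illustration; real RSA primes are hundreds of digits long.)

```python
import math

# Two RSA-style moduli whose key generators, due to poor
# randomness, produced the same prime (toy-sized numbers).
p_shared = 1009          # prime accidentally reused by both keys
q1, q2 = 1013, 1019      # the distinct second primes

n1 = p_shared * q1       # first public modulus  (1022117)
n2 = p_shared * q2       # second public modulus (1028171)

# An attacker who collects public moduli can compute pairwise
# GCDs; any nontrivial result immediately factors both keys,
# exposing the private keys they protect.
g = math.gcd(n1, n2)
print(g, n1 // g, n2 // g)   # prints: 1009 1013 1019
```

Note that neither modulus is factored by brute force; the shared factor falls out of one cheap GCD, which is why scanning a large collection of harvested public keys is feasible for an attacker.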

CHABROW: That sort of supports what RSA Chief Technologist Sam Curry said, that it's more of a process than it is actually the number generation itself?

SPAFFORD: I would say that's a reasonably accurate characterization.

Issues for Large Organizations

CHABROW: Okay, so I'm a CSO at a bank or a hospital or a government agency and our organization uses RSA public-key cryptography. What should I do?

SPAFFORD: The follow-up that I've seen posted online and related to the paper indicates that the keys where they found difficulties were self-signed, locally generated SSL keys or encryption keys, not the kind of keys that would likely be used at a financial institution. What appears to be the case is that some organizations generated their own SSL public-private key sets using software that had poor random number generators, may have repeatedly started from a small set of seed values and, as a result, occasionally regenerated the same set of large prime numbers. These problematic keys, of course, are not likely to be used in major commercial transactions. The keys used there tend to be generated with a much better random number generation system, possibly even hardware generation, and didn't appear to be among the sets of keys that were found to be vulnerable.
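The failure mode Spafford describes, a generator repeatedly started from a small set of seed values, can be sketched as follows. The `weak_keygen` function below is a hypothetical toy, not any real SSL tool's code: two machines that happen to draw the same seed regenerate exactly the same primes, and hence identical keys.

```python
import random

def is_prime(n):
    # Trial division; adequate for the toy-sized numbers used here.
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def weak_keygen(seed):
    """Hypothetical key generator seeded from a tiny seed space --
    a sketch of the flaw described above, not real SSL tooling."""
    rng = random.Random(seed)             # fully deterministic PRNG
    def next_prime():
        n = rng.randrange(1001, 5000, 2)  # odd candidate
        while not is_prime(n):
            n += 2
        return n
    p = next_prime()
    q = next_prime()
    return p * q                          # the public modulus

# Two "independent" servers that start from the same small seed
# value produce identical primes, so their keys collide.
assert weak_keygen(7) == weak_keygen(7)

# With only a handful of possible seeds, collisions across many
# installations are inevitable by the pigeonhole principle:
keys = [weak_keygen(random.randrange(10)) for _ in range(100)]
print(len(set(keys)), "distinct keys among", len(keys), "servers")
```

The point of the sketch is that the weakness lives entirely in the seeding, not in the modular arithmetic: the primes themselves are perfectly good primes, but an attacker who can enumerate the seed space, or who spots a repeated modulus, gets the private key for free.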

CHABROW: What are some of the situations in which an organization would use these keys that the researchers pointed out could have a flaw?

SPAFFORD: This might be at an educational institution or somebody's home where they set up a web server with an SSL certificate using RSA. It could also be where somebody has generated a PGP key for themselves, again on one of these home systems with a poor generator, and then installed that public key in one of the directories. There are a couple of different places where the keys could come from. The sources that I have been looking at, and some of the analyses that have been posted online, indicate that the really high-security keys, the ones that are very important to large enterprises, were not among the ones found to be deficient.

CHABROW: And you're not aware of any organizations, high-end organizations, that would use the ones that were described in the paper?

SPAFFORD: That's correct.

Value in RSA Key Security Debate

CHABROW: Is this much ado about nothing, or is there some worthy discussion here?

SPAFFORD: Oh, I think there are some worthy things to get out of this. One of the big problems with encryption is the whole aspect of key generation and management, and that has been the case for a very long time. We're able to develop and use algorithms that are effectively unbreakable given current technology, but unless we're able to generate truly random keys and keep them appropriately safe from prying eyes, then it doesn't matter how strong the algorithms really are. This is an example of a problem with generating a good key and having enough randomness to generate a key that can't be easily broken or doesn't possibly provide some benefit to an attacker, and we've seen this kind of problem before. There have been a number of systems that, going back in time, the generation of a key didn't use enough entropy, didn't use enough randomness and resulted in keys that were more trivially broken. I myself, with a couple of my students, found a problem with the Kerberos 4 key generation scheme about 15 years ago. That was really very similar to this same idea.

CHABROW: Does this research suggest that even tried-and-true IT security practices must be questioned and tested periodically? Even RSA's response, although it didn't agree with the conclusions (and it sort of agreed with what you said), welcomed the idea that people are spending time looking into things like this.

SPAFFORD: I think it's very important. It's very easy to make assumptions about how the underlying technology works or is supposed to work. We've seen time and time again where those assumptions are incorrect, possibly because whoever developed the code misunderstood or didn't understand these issues, and so it's important that we regularly verify our assumptions, verify that the systems we're using really work the way that they're supposed to work. Furthermore, because this is still a developing field, our technology gets better over time, our computers get faster ... and we understand some issues of algorithms better. So assumptions that were made in the past may not hold true in a future environment. That's another reason to go back and test things.



