Social Engineering: The Gorilla in the Room
So, I'd started my weekly blog entry intending to discuss application security (I'm keenly interested in what the just-released BIS survey is going to reveal) when the following headline came across on my BIS RSS feed: "Social Engineering Hits Brit Bank Head, Victim of Fraud."
You'll have to forgive me for being so easily distracted by this headline, but social engineering is a topic of immense interest for me these days. Quite frankly, I think of it as the 600-pound gorilla sitting in the corner of the room (or perhaps sitting in the corner of the branch) when it comes to financial institutions and their need to safeguard sensitive data. It's the hardest risk to build effective controls for: it depends almost entirely on human nature, and there are multiple points of exposure involved that require a coordinated effort to address effectively.
Literally every engagement I'm on reveals multiple forms of exposure to NPPI (non-public personal information) because of the way people behave in the course of conducting business. Loan applications left sitting in fax/printer/copier machine queues. Office doors left not only unlocked, but wide open, with stacks of documents in plain view. Phone conversations with account holders that can be heard by people not at all involved in the exchange. The list goes on and on.
And that's before I get to the scariest part of all this. Our practice conducts social engineering tests in which we run through a wide range of activities intended to circumvent a financial institution's controls and gain unauthorized access to NPPI. Without breaching any confidentiality agreements, let me just say that even the best outcomes reveal that human nature is hard at work wreaking havoc in even the most security-aware institutions. In the very best of these cases, the people involved realize something is amiss and immediately report the exchange to the appropriate parties (e.g., ISO, Compliance Officer, etc.). In the worst case, the breach occurs and no one is any the wiser until the report is issued.
However, no matter how security-aware the institution is, it's still vulnerable to socially engineered attacks. It's not hard to understand why, though: most of the people involved are focused on helping ensure a smooth customer/member experience and are in constant "help" mode. They want to resolve problems, answer questions and process requests. It's their basic nature to want to be part of the solution, not to approach their jobs with cynicism. And sadly, this is exactly what the bad guys are counting on.
Speaking of the bad guys, they're effective because they are always thinking of new ways to get at NPPI. They learn what works and what doesn't, then modify their approach to keep things moving along. And no matter how hard we try to keep up and stop them, they manage to keep at it. So, to say the least, constant vigilance is essential. As an aside, the engineers we use to conduct our tests are not bad guys (former or otherwise), though I do hesitate to answer the phone when I recognize their numbers on the caller-ID screen!
During a recent interview I was asked what I saw as possible "future regulatory guidance." I modified the question to "What do I wish would be the next big thing for regulatory guidance?" I'd like to see an increased focus on social engineering issues. Strong physical and logical security controls already receive plenty of attention; what's really missing is the human element. I'd like to see the regulatory agencies target that next.
Go and read the story and tell me if I'm wrong.