Industry Insights with Carol Hildebrand

Just Who Exactly Should Take Responsibility for Application Security?

Recent high-profile software supply chain breaches have sharpened the focus on application security. But as cybersecurity professionals know all too well, concern doesn't always translate into action. In theory, the rise of DevSecOps best practices that shift responsibility for application security further left should reduce the number of vulnerabilities that now routinely make it into production applications. In practice, things are messier: just as application security has become both more important and more complex to implement, shrinking development cycles leave less time to implement it.

Indeed, the increased complexity extends to responsibility itself. Jeffrey Martin, Mend's vice president of outbound product management, recently participated in a lively roundtable in which a panel of industry experts discussed the main challenges of figuring out who does what when it comes to application security. Key takeaways include the following:

Open source makers aren't AppSec teams

With commercial software, the builders take responsibility for the security of the software being sold. Not so with open source software. While people often assume that responsibility for security belongs to the open source community, it really lies with the user of the open source code. Companies that aren't vigilant with open source are effectively transferring risk to an unknown entity that they then fail to monitor.
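
Because responsibility sits with the consumer of the open source, a team needs some mechanism to check the components it actually ships. Below is a minimal Python sketch of that idea, querying the public OSV.dev vulnerability database for one pinned package. The package name and version are illustrative, and a real pipeline would typically use a dedicated software composition analysis tool rather than hand-rolled queries.

```python
import json
import urllib.request

OSV_QUERY_URL = "https://api.osv.dev/v1/query"

def known_vulnerabilities(name: str, version: str, ecosystem: str = "PyPI") -> list:
    """Ask OSV.dev which published vulnerabilities affect one package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": ecosystem},
    }).encode()
    request = urllib.request.Request(
        OSV_QUERY_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(request) as response:
        return json.load(response).get("vulns", [])

if __name__ == "__main__":
    # requests 2.19.1 is an old release with at least one published advisory.
    for vuln in known_vulnerabilities("requests", "2.19.1"):
        print(vuln["id"], "-", vuln.get("summary", "(no summary)"))
```

Running a check like this on a schedule, not just at build time, is what closes the monitoring gap the panel described.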

Governance standards are just the beginning

OWASP (the Open Web Application Security Project), NIST (the U.S. National Institute of Standards and Technology), and MITRE ATT&CK offer good frameworks for understanding risk. Fundamentally, good security doesn't have that many rules: keep your software up to date, know what your risks are, and prioritize those risks. Any good framework will get most of those basics correct.
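
As a rough illustration of "know your risks, then prioritize them," here is a hedged Python sketch that triages hypothetical scanner findings. The severity bands follow the standard CVSS v3 ranges; the `reachable` flag (whether the application actually exercises the vulnerable code) is an assumption about what your tooling can report, and the advisory IDs are placeholders.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    package: str
    advisory: str  # placeholder identifiers below, not real advisories
    cvss: float
    reachable: bool  # does the app actually call the affected code?

def severity(score: float) -> str:
    """Map a CVSS v3 base score to its standard severity band."""
    if score >= 9.0:
        return "Critical"
    if score >= 7.0:
        return "High"
    if score >= 4.0:
        return "Medium"
    return "Low"

def triage(findings: list) -> list:
    """Fix reachable issues first; within each group, highest score first."""
    return sorted(findings, key=lambda f: (not f.reachable, -f.cvss))

findings = [
    Finding("libA", "ADVISORY-0001", 9.8, reachable=False),
    Finding("libB", "ADVISORY-0002", 7.5, reachable=True),
]
for f in triage(findings):
    print(f"{severity(f.cvss):8s}  {f.package}  {f.advisory}")
```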

Security should not be developer-led…

We talk a great deal about shifting left and putting responsibility on individual developers. But if developers' goals and incentives don't include security, they won't do it. People act in their own interests, and unless those interests are changed, they will keep behaving the way they always have. If a company wants secure code, it's on the company to put standards in place, enforce them, and actually care and invest. Companies that don't do those things will never be secure and are simply setting people up to fail. They have to get their priorities right and invest in the tools and training that empower developers to build security in.

…But they do need to be engaged

There are things development managers can do to introduce more security in a reasonable way that doesn't cost a great deal of extra time and money. Importantly, they can lead by encouraging developers to take reasonable steps that help. For instance, when introducing a new library, don't introduce anything with a known vulnerability: a "do no harm" approach.
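
One way to make "do no harm" mechanical is a pre-merge gate that checks only the dependencies a change introduces or upgrades. The sketch below is a hedged example, assuming pinned name==version requirements files and the public OSV.dev query API; a production pipeline would more likely rely on an off-the-shelf scanner.

```python
import json
import sys
import urllib.request

def parse_pins(path: str) -> dict:
    """Collect name==version pins from a requirements-style file."""
    pins = {}
    with open(path) as fh:
        for line in fh:
            line = line.split("#")[0].strip()
            if "==" in line:
                name, version = line.split("==", 1)
                pins[name.strip().lower()] = version.strip()
    return pins

def has_known_vulns(name: str, version: str) -> bool:
    """True if OSV.dev lists any vulnerability for this PyPI package version."""
    payload = json.dumps({
        "version": version,
        "package": {"name": name, "ecosystem": "PyPI"},
    }).encode()
    request = urllib.request.Request(
        "https://api.osv.dev/v1/query",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        return bool(json.load(response).get("vulns"))

def main(old_file: str, new_file: str) -> None:
    old, new = parse_pins(old_file), parse_pins(new_file)
    # Gate only what this change introduces or bumps; existing debt is
    # handled separately, so the bar stays cheap to clear.
    introduced = {n: v for n, v in new.items() if old.get(n) != v}
    bad = [f"{n}=={v}" for n, v in introduced.items() if has_known_vulns(n, v)]
    if bad:
        print("Refusing to introduce dependencies with known vulnerabilities:")
        for pin in bad:
            print(f"  {pin}")
        sys.exit(1)

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])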

The real solution is to continually advocate for security to the people who control the budgets and decide what gets built. Emphasize that they're not giving teams enough time or resources to create a secure application, only enough to develop a functional one. If you're not paying attention to the person who controls the budget, you probably should be.

Application security belongs to the company as a whole — but many don't get it

Prioritizing speedy delivery over secure code inevitably creates technical debt that accumulates as an application ages. Yet most companies don't ask developers to hold technical debt below a certain level, because that's not how the business works: if an application generates revenue, management wants it to last as long as it can. Management has to learn that huge amounts of technical debt mean insecure applications, which greatly increase business risk. That's where the big changes have to come: not just shifting left in development and enabling developers, but shifting responsibility to the company itself, the entity that actually controls the resources.
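
If a company did want teams to hold security debt at an agreed level, the policy could be as simple as a severity-weighted budget enforced in CI. The weights and threshold in this Python sketch are hypothetical policy choices, not standards; the point is that the business, not the individual developer, sets the number.

```python
# Hypothetical severity weights and budget; these numbers are policy
# choices made by the business, not industry standards.
WEIGHTS = {"Critical": 10, "High": 5, "Medium": 2, "Low": 1}
BUDGET = 20  # maximum tolerated security-debt score for this application

def debt_score(open_findings: list) -> int:
    """Sum severity weights over the currently open vulnerability findings."""
    return sum(WEIGHTS[severity] for severity in open_findings)

# In practice this list would be fed from a scanner's export.
open_findings = ["High", "High", "Medium", "Medium", "Low"]
score = debt_score(open_findings)
print(f"security debt: {score}/{BUDGET}")
if score > BUDGET:
    raise SystemExit("Debt budget exceeded: remediate before adding features.")
```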

Expect outside forces to drive wider change

Businesses that consume software expect it to be secure, yet software security is simultaneously very distributed and largely unregulated. Companies are just beginning to wake up to the issue. SBOMs and the legislation around them have driven some companies to at least demand a software bill of materials before buying software from a supplier. However, the average business is not yet asking the right questions about security, never mind putting commercial pressure on its suppliers.

That pressure will come, and when it does, companies, not just developers, will have to change their behavior in order to follow the money. We can also expect government regulation to drive change, as evidenced by the Biden administration's new National Cybersecurity Strategy. Think of it this way: until the Food and Drug Administration (FDA) was established, there was little regulation of food safety, and fraudulent practices were rife among food manufacturers. Perhaps it's time for an FDA for software safety, too. Sometime in the next five to 10 years, we're going to see the equivalent of an FDA for at least critical infrastructure. It's too important to everything.
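
To make the SBOM point above concrete: a software bill of materials is, at minimum, a machine-readable inventory of the components inside a shipped application. The Python sketch below emits a deliberately minimal CycloneDX-style JSON skeleton for two illustrative pinned dependencies; real SBOMs are produced by dedicated tooling and carry far more detail, such as hashes, licenses, and provenance.

```python
import json

# Illustrative pinned dependencies; in practice these come from a
# lockfile or a build-time scanner, not a hand-written list.
components = [
    {"name": "requests", "version": "2.31.0"},
    {"name": "urllib3", "version": "2.0.7"},
]

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": c["name"],
            "version": c["version"],
            # Package URLs give each component a portable identity.
            "purl": f"pkg:pypi/{c['name']}@{c['version']}",
        }
        for c in components
    ],
}

print(json.dumps(sbom, indent=2))
```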



About the Author

Carol Hildebrand

Technology Writer, Mend

A veteran of Computerworld and CIO magazine, Hildebrand is an award-winning technology writer who writes extensively about cybersecurity and how it impacts business innovation.



