Artificial Intelligence & Machine Learning , Next-Generation Technologies & Secure Development , Standards, Regulations & Compliance
ISMG Editors: Can Governments Get a Handle on AI?
Also: EU Policy Updates and the Disconnect Between CEOs and CISOs
Anna Delaney (annamadeline) • October 6, 2023
In the latest weekly update, editors at Information Security Media Group examine policies in the U.S. and Europe that could regulate AI, recent developments in the EU cybersecurity and privacy policy arena, and the disparities between the perspectives of business leaders and cybersecurity leaders on the security landscape.
The panelists - Anna Delaney, director, productions; Tony Morbin, executive news editor, EU; Rashmi Ramesh, assistant editor, global news desk; and Akshaya Asokan, senior correspondent - discussed:
- What we know so far about the Biden administration's proposed plan to create an executive order on AI and its impact on society;
- Updates from the European policy space, including how the EU is proposing to regulate AI while at the same time promoting AI to boost the economy - plus the U.K.'s approval of an Online Safety Bill;
- How the perceptions of business leaders and cybersecurity leaders vary sharply when it comes to the cybersecurity landscape and the threats we face.
The ISMG Editors' Panel runs weekly. Don't miss our previous installments, including the Sept. 20 London Summit special edition and the Sept. 29 edition on the industry impact of Cisco's Splunk acquisition.
Transcript
This transcript has been edited and refined for clarity.
Anna Delaney: Hi and welcome to the ISMG Editors' Panel. I'm Anna Delaney, and here, ISMG editors dissect and discuss the latest cybersecurity news, trends and policies. This week I'm very pleased to be joined by Rashmi Ramesh, assistant editor, global news desk; Akshaya Asokan, senior correspondent; and Tony Morbin, executive news editor for the EU. Great to be with you all.
Rashmi Ramesh: Great to be here, Anna.
Delaney: Tony, start us off. You're in the boardroom, aren't you?
Tony Morbin: That's exactly right. Yeah, we're going to be talking about the boardroom, cybersecurity, CISOs in the boardroom and all that entails.
Delaney: Akshaya, that's a beautiful backdrop behind you. Tell us more.
Akshaya Asokan: So back home, it's raining cats and dogs. And this is the view from where I stay, so I thought I'd just add it.
Delaney: You're staying dry in the U.K. then.
Asokan: Yeah.
Delaney: And Rashmi, you're in the office. Are you?
Ramesh: Yes. So my background is pretty dynamic today. So back in the office after a long weekend here in India, I thought why not just keep it simple and organic this time around.
Delaney: Very good. I'm sharing another photo from a trip to Bruges a couple of weeks ago. This is Boniface Bridge, otherwise known as the lover's bridge or kissing bridge. Rashmi, we are starting with AI this week, and I think it'll be a running theme throughout our discussion today. You've written that U.S. President Joe Biden plans to sign an Executive Order, or EO, regarding artificial intelligence and its impact on society. So what do we know so far?
Ramesh: So the latest is that the White House will publish the long-anticipated EO on AI this fall. This Executive Order has two goals: one is to curb AI risks, and the second is to boost cyber resilience. It's expected to outline mandatory cybersecurity standards for AI systems in critical sectors. It's expected to promote information sharing among government agencies, private sector organizations and research institutions, so we have better threat intelligence and incident response capabilities. And it's also likely to allocate funds for R&D on building tech that has security baked in from the start. Public-private partnerships between federal agencies, private industry and academia are expected to play a significant role as well, setting up best practices, guidelines and frameworks to secure AI systems. Now, we don't have details on exactly what it includes, of course. But the CISOs and CIOs I've spoken to about this fall into two opposing camps. On one side, you have folks who say: Look, this tech is expanding by leaps and bounds, and we are unable to keep up. So regulate it comprehensively, and do it right now. Big tech companies themselves, including Microsoft, have asked for better regulation so that innovation is more streamlined and ethical. And then there are folks who say: Hang on for a minute and see where we are. It's a tech that is evolving too rapidly at this point, and you cannot have one single regulation that addresses all aspects of it. So you begin with a piecemeal set of bills that address different aspects of risk. That way the coverage is more comprehensive, and regulations also evolve along with the technology.
Delaney: Now, your article mentioned that the White House in October 2022 published a Blueprint for an AI Bill of Rights. So can you just provide a bit more context on this Bill of Rights, and how it relates to the upcoming Executive Order?
Ramesh: So the Blueprint basically aims at establishing principles and rights that protect individuals and the society at large from potential harms that are caused by AI technologies. Its focus is on ensuring fairness, transparency, privacy, ethical use of AI, accountability, and it also grants some fundamental rights for users. Now, while it provides a framework, it does not have the legal authority to enforce these principles. So it's more of a guidance. Now, the Blueprint and the upcoming EO are related in their broader goal of shaping the regulatory framework and governance of AI. But their approach and focus are different. So the Blueprint emphasizes mostly the ethical and human-centric AI principles, whereas the EO focuses more on the cybersecurity and protection of critical infrastructure.
Delaney: Very good. Well, it's an ongoing conversation, and we look forward to more updates from you, Rashmi. Thank you very much. Well, Akshaya, following on from Rashmi's discussion of regulation, there's a lot happening in the EU around cybersecurity, privacy and AI regulation. Can you update us on what's been happening? What's the most important activity?
Asokan: So in the EU, what we are seeing is that the AI Act has entered the trilogue process, in which the European Council, Parliament and Commission discuss amendments to what is set to be the first-ever global regulation on artificial intelligence. As the lawmakers discuss what is potentially the final round of amendments, privacy advocates have raised concerns. In an open letter this week, they said that tech lobbying groups have been actively working with lawmakers and have introduced a clause into the AI Act. The clause sits under Article 6, which covers the categorization of high-risk AI, and it can limit high-risk categorization. The current amendments allow tech companies to interpret what they deem to be a high-risk AI system. This, the privacy groups say, is a potential loophole in the AI Act, because if you leave it to the interpretation of tech companies, they can deliberately categorize high-risk systems as not high risk anymore, and that would free them from the obligations of transparency and accountability. So in open letters, a lot of privacy organizations are asking lawmakers to go back to the previous versions, or the initial draft, of the AI Act, which stipulated the same risk requirements for high-risk AI systems - that is, surveillance systems, biometrics, scanning, profiling and so on. And among the EU member states themselves, we are seeing some differences, because, according to media reports, there are suggestions that France is leading a group of countries seeking to back out of the current stipulation banning AI for biometrics - they don't want the proposed ban on AI for surveillance systems. So that is what is happening. We don't know how the final AI Act will turn out, but it is likely that the EU will finalize the regulation by November.
Delaney: And we also have the UK Parliament approving the Online Safety Bill. Can you tell us about that?
Asokan: So the UK Parliament in September passed the Online Safety Bill. Again, that has received much criticism, especially from privacy advocates and tech companies such as WhatsApp and Signal, which use end-to-end encryption. As it stands, the bill grants the British telecom regulator Ofcom the power to order online intermediaries, such as WhatsApp and Signal, to use accredited technology to perform client-side scanning. Experts warn that this clause could turn into a surveillance regime, be used by authoritarian governments to surveil the public, and lead to mass surveillance. So that's the concern right now. From the tech industry's perspective, what they say, and continue to say, is that if they weaken their existing encryption standards, it will affect the privacy of users across the globe and weaken internet security. Meredith Whittaker, the president of Signal, along with companies such as WhatsApp and Apple, has criticized the bill. In response to that continued criticism, the government, in a debate last month, introduced slight changes to the proposed clause, saying that Ofcom cannot ask online companies to perform client-side scanning unless a technology exists that is proportionate and can ensure privacy. But these changes have not been reflected in the text of the final bill. So we see that tech companies are still keeping up their criticism in the hope that encryption will not be affected in the final bill. The bill is now awaiting Royal Assent before it becomes law.
Delaney: I was at a roundtable the other day, and the security leaders at the roundtable weren't impressed by this Online Safety Bill. So let's see what happens next. Thank you, Akshaya. That was fantastic. Tony, you've just completed research for a survey which compares business leaders' and cybersecurity leaders' perceptions of the cybersecurity landscape. So we're curious to know, what did you find out?
Morbin: Well, I'm going to repeat Akshaya's word there - accountability. Just running through the background: CEOs and boards are ultimately responsible for the running of an organization, which rewards them financially when things go well and sacks them, or even subjects them to legal cases, when things go badly. It's a bit like football club owners: the managers change the players when they start to lose, then the owners change the managers when that doesn't work, and ultimately they're forced to sell or close up if it still can't be fixed. Now, the role of cybersecurity and the CISO in this game is changing rapidly. In the past, the CISO was at the player level - somebody to be sacked in the event of a breach. Hence the typical current 18-month lifespan of a CISO. But as the volume and impact of attacks, particularly ransomware, have grown to the point of causing or risking business failures, cybersecurity has gone from being a technical issue to a primary business risk that the board has to be cognizant of. The change was signaled way back in 2014, when the CEO of Target quit following a breach. Now CISOs are more likely to report directly to the board, or sometimes sit on the board. That's great in terms of being able to influence decisions, but it is a new role, requiring new skills, new responsibilities and a different outlook. And as if losing your job in the aftermath of a breach wasn't bad enough, you might now face legal sanctions as well. In the U.S., the Securities and Exchange Commission, which typically enforces the law against CEOs and CFOs for violations such as Ponzi schemes, accounting fraud or market manipulation, is increasingly looking at failed cybersecurity as a negligence issue.
In June this year, SolarWinds reported that the SEC had issued a Wells notice - a letter saying that it plans to take legal action against the company's chief financial officer and chief information security officer, alleging violations of federal securities laws following the major breach of the Orion software platform back in December 2020. Then in July this year, the SEC adopted rules requiring companies to disclose the impact of cybersecurity incidents they experience, along with annual reports on their cybersecurity risk management, strategy and governance. So for CEOs, boards and CISOs, the role of cybersecurity in business success and continuity is under scrutiny as never before. In theory, CISOs have accepted the advice that today a CISO needs to be able to understand true business impacts, communicate effectively - avoiding technical jargon - and partner effectively with their counterparts. Similarly, business leaders have accepted that boards need to prioritize cyber risk the same way they do financial or business risk, and that they need to be asking which cyber risks can be assigned, mitigated or accepted. But in practice, that isn't happening. Boards complain that their CISOs still focus on the technical impacts, while CISOs complain that boards are failing to comprehend the seriousness of the risks encountered and how vulnerable the business remains. As you mentioned, I've been involved in some recent surveys on the topic among both CISOs and business leaders, so I'll just steal from some of the comments I've heard. One CISO, asked what advice he would give to the board, said: No matter how bad the CISO makes it sound, it's worse. We need to invest more in basic maturity and cyber hygiene. In contrast, the board advised the CISOs reporting to them: Keep it short and to the point, but don't pull any punches. Understand the risks, don't provide assurances that you can't back up, and always be honest and tell the truth.
So, along with many other comments in that vein, the board was saying, Tell us, and the CISOs were saying, Listen to what we're telling you. But the message is often not getting across. The perception of the board members we spoke to was that they were meeting cybersecurity leaders more often than the cybersecurity leaders reported, and that there was more cyber expertise on the board than the cybersecurity leaders believed to be the case. Anecdotally, there also seems to be a lack of personal relationships between board members and cybersecurity leaders compared to other disciplines. There is fault on both sides. But if cybersecurity is going to be truly addressed as an existential risk for businesses, boards are going to have to up their game - particularly by increasing the quality and quantity of cyber expertise they have on the board and improving their understanding of, and communication with, cybersecurity leaders. And cybersecurity leaders will increasingly have to learn to understand and speak the language of business risk, and adopt a board mentality of networking with the levers of power.
Delaney: Well said. And Tony, in your recent research and interviews, have you identified any common misconceptions that business leaders may have regarding cybersecurity?
Morbin: Well, it wasn't so much misconceptions as different perspectives and different priorities. For example, as you might imagine, personal liability was higher up the risk register for business leaders than it was for security leaders. They all agreed on the main tenets - reputation being the highest risk - and on the mitigation strategies as well. And they also broadly agreed that CISOs were in the lead position when it came to providing cybersecurity expertise. But where they differed was that the business leaders were much more optimistic about how well they were doing, how good their cybersecurity was, and how good their understanding was. CISOs, coming from a more skeptical approach, were less confident about how well they were doing and how strong their mitigations against vulnerabilities were. Even on things like AI, you'd think from the answers that the board had a much greater understanding of AI than the cybersecurity leaders. But I think it was just that the cybersecurity leaders, who are tasked with operationalizing AI as opposed to setting out a grand strategy, had to delve into the nitty gritty and say, This is harder than it first looks; there are a lot more issues here than we first thought. But then, I guess you've got to be fairly optimistic to be a business leader. So there are different perspectives.
Delaney: Very true. Do you see a shift in perception among business leaders in recent years, given all these cybersecurity events, breaches and changes in the landscape?
Morbin: Yes, but frankly, I think the CISOs have had the bigger shift: they have got on board with the idea that they need to communicate on a business level and understand business risk, but they aren't necessarily delivering on that. Whereas the business leaders - I won't say they are paying lip service, but they are at an earlier stage of accepting the criticality of cybersecurity in the risk register of what can affect their organization.
Delaney: That's an important discussion. Thank you, Tony, for sharing those insights. And finally, and just for fun, if you were to create a cybersecurity-themed board game, what would it be called? And what would be the objective of the game?
Ramesh: I can go first. I have a very fun game in mind. Maintaining cyber hygiene is considered a tedious task at a lot of companies. So a board game that I would call Cyber Guardians would basically assign specific tasks to specific people on a rotational basis. So it's fun - people are picked at random for specific tasks, and you get the job done.
Delaney: Bingo, love it. Tony, do you have an idea?
Morbin: I'll go for a Monopoly-style board game: Hack the Pentagon. You have to install malware on strategic servers. When you get enough, you can take over senior officials' accounts or departments, and finally deploy your zero-day. But of course, there'd have to be obstacles and opportunities along the way - cards similar to the ones you get in Monopoly. So, Firewall Switched Off is going to help you advance, or MFA Enacted will facilitate or deny access. And there may be other kinds of threats and opportunities that can be presented to the player.
Delaney: There's a lot there. That's a great idea. Akshaya?
Asokan: Probably something like Find the Insider, it would be more like a spy game or Cluedo. You'd have to find the hacker or the insider threat among your players. And you leave decks of clues, and you just have to find who that guy is.
Delaney: Yeah, I love that murder mystery. Well, I was going to play on Snakes and Ladders - Cybersecurity Ladders and Pitfalls. The game features ladders that let players climb quickly if they successfully address cybersecurity challenges; however, pitfalls can set them back if they fail. So not the most original.
Morbin: I was going for the same idea of the obstacles and the opportunities, but I think Snakes and Ladders is a much more immediate way of doing that. But again, I think Akshaya's would be fun to play - who's siphoning off whose money?
Delaney: Yeah. There's some business deals in this, I think. Let's talk to some toy companies and games companies. Thank you so much, Akshaya, Rashmi, and Tony, I found this immensely enjoyable and informative.
Asokan: Thank you.
Ramesh: Thanks, Anna.
Delaney: And thanks so much for watching. Until next time.