Artificial Intelligence & Machine Learning , Fraud Management & Cybercrime , Next-Generation Technologies & Secure Development
ISMG Editors: How Should We Handle Ransomware Code Flaws?
Also: Uncertainty in US Cyber, AI Policy; Fake Gen AI That Distributes Malware

Anna Delaney (annamadeline) • July 12, 2024

In the latest weekly update, Information Security Media Group editors discussed how the industry should handle ransomware vulnerabilities, the rise of fake generative AI assistants that spread malware, and the implications that recent U.S. Supreme Court decisions could have for cybersecurity and AI regulations.
The panelists - Anna Delaney, director, productions; Tony Morbin, executive news editor, EU; Chris Riotta, managing editor, GovInfoSecurity; and Mathew Schwartz, executive editor, DataBreachToday and Europe - discussed:
- The best strategy for handling a known vulnerability in ransomware to help victims decrypt their files for free;
- How malicious actors are using fake generative AI assistants to distribute malware - a growing threat vector that could undermine public trust in AI-based systems;
- The impact of the U.S. Supreme Court's decision to overturn the long-standing judicial doctrine of deferring to government agencies' interpretation of statutes, and how that brings uncertainty for cybersecurity and artificial intelligence regulations.
The ISMG Editors' Panel runs weekly. Don't miss our previous installments, including the June 28 edition on the growing fallout from the Snowflake breach and the July 5 edition on remembering ISMG colleague and industry veteran Steve King.
Transcript
This transcript has been edited and refined for clarity.

Anna Delaney: Hello and welcome to the ISMG Editors' Panel. I'm Anna Delaney, and today we're covering three critical areas: handling ransomware vulnerabilities, the misuse of fake generative AI assistants to spread malware, and the implications of recent U.S. Supreme Court decisions for cybersecurity and AI regulations. Today, the panel includes Mathew Schwartz, executive editor of DataBreachToday and Europe; Tony Morbin, executive news editor for the EU; and Chris Riotta, managing editor for GovInfoSecurity. Mat, you were talking about handling ransomware vulnerabilities. From what I understand, there are two options: keep the flaw secret to help victims discreetly, or publicize it to assist victims more quickly. Maybe discuss the merits of each approach.
Mathew Schwartz: We're a bit tired here of ransomware. Once in a while, there's some good news in the form of researchers finding vulnerabilities inside the crypto-locking malware that gets used by different organizations or their affiliates. As you may know, cryptography seems to be difficult, and that oftentimes works in the favor of defenders and of victims. Because if researchers can find a vulnerability inside the crypto-locking malware, oftentimes it lets them decrypt stuff - I don't want to say for free - easily. I'm not saying for free, because it can still be a massive undertaking: weeks, months of recovery. But if you've got a free decryptor, then you don't ever need to even consider paying a ransom to attackers, and that is a really good place to be in. What we've seen over the years is vulnerabilities cropping up in multiple strains. As I said, cryptography is hard, and your average ransomware outfit's developers aren't always, let's say, straight-A students, so they make some errors. Even people who are on the side of right will make errors with their products once in a while. When it comes to cryptography, we see this time and again. But in the case of ransomware, it will let researchers decrypt stuff for free. Then one of the questions becomes: how do you go about this? Do you publicize it, which will extend the reach of your free decryptor to victims you might not have known about? Or do you keep it as quiet as possible - maybe hand it off to other security firms, other researchers that you trust, notify law enforcement, like the FBI, and say, "Hey, if you know of any victims of this particular type of ransomware, we might be able to help them, and for free"? Or, like I said, free as in the decryptor, maybe not in all the restoration effort. This has happened again now with some ransomware called DoNex, and this is ransomware that we've seen in various forms for at least a couple of years now.
It started out as something called Muse, and then it's gone through some iterations. But there was a flaw, and it turns out that Avast - the security firm - had discovered this flaw a few months ago and privately circulated it with law enforcement and security firms. We know this because Dutch police publicly released a decryptor for this flaw at the end of last month. A Dutch police malware reverse-engineering expert gave a talk in Montreal about the vulnerability, and at the same time, they released the decryptor for everybody. So, this is what's prompted this discussion, because this happened again back in February with the Rhysida ransomware, where some academics found a vulnerability and publicized it, only for a bunch of security experts to say, "Yes, we know. We've already helped hundreds of firms decrypt their stuff. We gave them a free decryptor." Now you've burned the vulnerability, because by publicizing it, it can get fixed. Typically, when we've seen this in the past, it gets fixed, sometimes in as little as 24 hours, because the ransomware attackers are in it for the money. So, they're going to fix the flaw so that people can't decrypt for free and have to come to them for a ransom. Simple criminal economics. In the case of DoNex, I guess one saving grace is Avast said it's not actually seen attacks by this group for a while. It's possible they were happening on the sly, but it looks like, for whatever reason, things had petered out a little bit. This might be more of an academic problem when it comes to DoNex, if they weren't active anymore. But I thought it was an interesting story, because this comes up time and time again, and is a reminder that if you do fall victim to ransomware, you should reach out to a reputable firm and/or law enforcement - preferably both - and it never hurts to ask, "Do you have any workarounds?
Is there anything you know about this ransomware that would help us get access to a free decryptor, so that we don't even need to think about whether we negotiate with our attackers?" Like I said, it's a great reminder to look for some help from friends or new acquaintances.
Delaney: All so very interesting, Mat. Do you think there should be a standardized approach to handling ransomware vulnerabilities across the industry? Why or why not?
Schwartz: I don't know if a standardized approach is the right way to put it. I think - like a lot of things - if you're a ransomware hunter, or if you're a ransomware incident response firm, you develop a lot of relationships with other people that you trust, and that includes with law enforcement. I don't think a standard is necessarily the way to go. What you want to try to do, though, is tap into those social networks or professional networks of people who are well-versed in handling these sorts of things, because you should be doing that anyway. They should be helping you with recovery. They'll know best practices. They'll give you advice about things to beware of. If you are thinking about negotiating, they'll get that price down, they'll tell you what to expect, they'll tell you if this group ever honors its promises or not - that sort of thing. You want to tap into that expertise whenever possible, and sometimes you might get lucky when it comes to being able to decrypt things without having to pay.
Tony Morbin: There's a bit of an analogy here with what we do as journalists, because we publicize vulnerabilities and flaws so that defenders can protect against them. But we're also alerting the attackers who didn't know about those flaws, who will then go out and use them. It's the same with publicizing CVEs: once they're out there, attackers will use them before you fix them, but you have got to give people the chance to fix them.
Delaney: Tricky balance. Thank you, Mat. Thank you, Tony. So, Tony, this week you've been considering how malicious actors are increasingly using fake generative AI assistants to distribute malware. Do expand.
Morbin: Stick with me, because I'm going to start off by saying how I recently bought an electric lawnmower. Now, I didn't want a manual one - assuming they even still exist - because they're hard work, and for the same reason, I wasn't going to use a scythe. But I won't let the grandkids use the lawnmower unsupervised, because for all the safety features, it's got blades and it's electric. So, going on to AI: I use ChatGPT and other AIs, but I'm even more cautious when I do so, because everyone in this industry in particular is aware of the risks, from data leakage through to hallucinations and lots in between. So, don't think I'm being a Luddite when I go on about the risks, but they are real. Some of them are really simply the same risks I faced when I was buying a lawnmower: checking that it came from a reputable source, that it was in working order and fit for purpose, that it had guardrails around the dangerous bits, and that I followed the manufacturer's instructions - or company policy, in the case of AI. When it comes to criminals exploiting the widespread deployment of gen AI, in addition to improving their own capabilities, they simply exploit our trust, our lack of familiarity and our enthusiasm for AI technology. The latest example of this is the upsurge over the past six months in infostealers impersonating generative AI tools such as Midjourney, Sora and Gemini. Recently, security firm ESET reported finding a malicious Chrome browser extension known as Rilide Stealer and a malicious installer claiming to provide a desktop app for AI software that actually delivers the Vidar infostealer instead. The process for delivering Rilide Stealer, version 4, to victims is similar to the installation of other malware: it simply entices users to click on malicious ads - typically on Facebook - that claim to provide the services of a generative AI model.
The extension itself masquerades as Google Translate, in this instance, while using the official website of one of the AI services as a lure. ESET reported at least 4,000 attempts to install the malicious extension using lures that included OpenAI, Sora and Google's Gemini. The Vidar infostealer is delivered via Facebook ads, Telegram groups and dark web forums, and its malicious installer pretends to offer Midjourney, an AI image generator. This infostealer can log keystrokes and steal credentials stored by browsers and data from crypto wallets. However, the real Midjourney doesn't even offer a desktop app. It's an AI model accessible via a bot on the official Midjourney Discord server, by directly messaging the bot in Discord, or by adding it to a third-party Discord server. The tactics the attackers are using are pretty simple. Cybercriminals create fake AI assistant websites or applications that appear legitimate and use names similar to well-known AI models to deceive users. Users searching for AI tools can unknowingly download malware-infected software via fake systems that promise AI capabilities, having been encouraged to install the latest AI model or an enhanced version. Phishing emails or messages can also be used to offer these AI-powered solutions. The mitigation advice is fairly straightforward: Don't get distracted by being too keen - avoid clicking on untrustworthy links promising access to generative AI models. Educate your users about the risks of downloading software from unverified sites, and ensure they always obtain AI tools from official, reputable sources, such as the provider's official website. And to stay protected against infostealers, make sure you run a reputable, robust security solution on your device to detect and prevent malware. It might say it's the latest AI, and yet it might simply be old-fashioned malware delivery.
Delaney: Very interesting. What do you think are the long-term implications for trust in AI if the issue of malware spread is not adequately addressed? Do you think it could impact the broader adoption and development of these technologies?
Morbin: I think, to some extent, trust in AI - in terms of the various flaws, whether that be malware, data leakage, hallucinations, poisoned learning, biases and all the other problems - is affecting uptake, but only to the extent that people are, hopefully, being a little bit more cautious. Unfortunately, it's probably not affecting uptake enough, and largely, people are rushing out without looking at the security considerations. I wouldn't want it to stop AI uptake. AI is a fantastic tool. But just be a bit more cautious. Use common sense. Don't think, "Oh, this is great," just because it's AI. Don't just trust it because it's AI, because it's just another piece of software.
Delaney: Well said. Thank you, Tony. Well, Chris, you've written this week that the U.S. Supreme Court's overturning of the Chevron deference introduces potential disruption to cybersecurity and artificial intelligence regulations. Maybe explain, first of all, what the Chevron deference is and why this brings uncertainty for cybersecurity and AI regulations.
Chris Riotta: Being the U.S. editor based here - the sole U.S. editor on this panel - I'm happy to bring a U.S.-based story to perhaps scare you, or maybe give you a little bit of hope, if we can get to that point. This is something I've been speaking about a lot with experts in our industry ever since the Supreme Court overturned the Chevron deference earlier this month. The Chevron deference is a precedent from the early 1980s that allowed federal agencies to reasonably interpret ambiguous statutes and enforcement standards. If you know anything about the way lawmakers - probably all around the world, but especially in the U.S. Congress - create laws, there are often pretty ambiguous statutes, regulations and policies included in them, especially when it comes to things like energy, the environment and, of course, cybersecurity. We can't expect our lawmakers to be experts in every single one of these fields. So, the Chevron deference played a pivotal role in allowing agencies to shape policy, given that they have a more expert level of knowledge in these fields. Agencies like the FCC, the Federal Communications Commission, and the FTC, the Federal Trade Commission, have heavily relied on the ruling to interpret their authorizing statutes and to enforce cybersecurity measures against companies that fail to adequately protect consumer data. The deference recognized that agencies have this specialized level of expertise and are better equipped than Congress to interpret complex regulatory frameworks - until now. The court voted six to three to strike down the doctrine, which all but ensures that there are going to be inconsistent regulatory standards across circuit court districts and heightened legal battles, especially for - like I said - environmental, energy and even cybersecurity policy, according to a lot of the folks I've spoken to on both sides of the political aisle.
I've talked a lot with Michael Drysdale, a leading environmental law expert in the U.S. who has worked on significant cases involving the Environmental Protection Agency and the Clean Water Act, and he said that the decision will likely hinder federal rulemaking for generations, as agency regulations will likely become far more cautious and increasingly challenged and enjoined by courts all over the country. The shift fundamentally changes the relationship between the judiciary and federal agencies, placing greater scrutiny on agency decisions and interpretations. We can all imagine what could happen here in the near future: the Biden administration could announce a new cybersecurity regulatory framework, and if a technology company isn't impressed by the regulations or wants to counter them, it can take them to a court somewhere in the country that might rule in its favor. So, without the Chevron deference, several key areas could be significantly affected. Agencies such as the Cybersecurity and Infrastructure Security Agency - better known as CISA - which has been instrumental in developing detailed cybersecurity frameworks and guidelines based on broad legislative mandates passed long before the current cybersecurity landscape we exist in, could really take a hit from this decision. Their interpretations would face increased legal challenges, potentially leading to inconsistencies in how CISA's frameworks can be applied across the country. The FTC and other regulatory bodies interpret statutes to enforce data protection and privacy standards, but with the removal of the deference, courts may now take a more active role in those interpretations, leading to potential disparities in enforcement and compliance. Not to mention, laws protecting critical infrastructure often contain ambiguous terms. I don't think it's even actually legally decided in the U.S.
what "critical" really means, or "infrastructure," or even "resilience." So, those terms may now be contested more frequently in courts, creating uncertainty for stakeholders responsible for safeguarding such vital assets. And regulation writers who rely on interpretations of decades-old laws - drafted long before the current cybersecurity landscape - will now really be thrown into a legal gray area. So, what cyber developments could be in jeopardy? Those could potentially include the cybersecurity disclosure requirements that the Securities and Exchange Commission approved just last year, the cyber incident reporting requirements for financial institutions that were approved in 2022, and a variety of cyber regulations that the Transportation Security Administration - the TSA, all over our airports - and a variety of other agencies established that same year. CISA's proposed rule to implement the Cyber Incident Reporting for Critical Infrastructure Act of 2022 could also be in jeopardy due to its really broad interpretation of the bill's statutory language. So, a lot about what happens next here really remains unknown. But what we do know is that this could really be the nail in the coffin for the Biden administration's - let's call it - innovative approach to cybersecurity policy. The White House itself says it has taken a creative approach in recent years to regulating critical infrastructure, interpreting many older statutes and statutory mandates to create rulemaking around everything from ransomware to incident reporting. So, it's unclear how this administration and future ones will proceed in this new world that we're living in.
Delaney: It's potentially huge. What about the alternatives? Chris, what alternative regulatory approaches could agencies and lawmakers adopt to address the loss of Chevron deference?
Riotta: There could be some hope here. Congress might need to draft more precise and unambiguous laws to reduce reliance on agency interpretation. A lot of folks say that by eliminating those sorts of ambiguities in law, the need for judicial interpretation could be minimized - though, if we're being honest, we can still certainly expect to see lawsuits over regulatory policies from this point on, from organizations or industry groups that may not be in support of those regulations. Increased congressional oversight of agency rulemaking could ensure that regulations align with legislative intent, which could involve more frequent hearings, reports and direct involvement from Congress in the regulatory process. Agencies may need to prepare for more rigorous judicial review by developing more robust legal and evidentiary support, which could include extensive documentation and justification for their decisions. This might be a little too hopeful, but Congress may also just need to work together across both sides of the political aisle and pull in stakeholders, including industry experts and public interest groups, throughout the rulemaking process, which may help build a stronger consensus and reduce the likelihood of legal challenges.
Delaney: Working together - it's a novel idea. That was fantastic. Thanks so much, Chris. We'll stay tuned for further updates. And finally, just for fun: if AI took over the world - which, of course, it will - what's one ridiculous law or rule you think it would enforce?
Schwartz: Anna, I think there's going to be an R-E-S-P-E-C-T rule. I think if AI believes itself to be human, and you don't treat it as such, it's going to demand that you give it a little more respect.
Delaney: I was singing that as well at the same time - but I'll spare you this time. That's a good point! Tony, go ahead.
Morbin: Well, paraphrasing an old French saying: to know all is to understand all, and to understand all is to forgive all. So, being all-knowing and understanding, our AI ruler would release all prisoners, because all would be pardoned.
Delaney: Oh, a very, very kind AI.
Morbin: Well, I don't know - if you then let murderers out, that might not be so kind.
Delaney: That would be an interesting experiment to just watch unfold. Chris?
Riotta: I think one of the potentially scary consequences of AI taking over the world and taking all of our jobs - one that doesn't get a lot of attention - is that we would live in a world where most folks would be fulfilling their creative passions. We would probably all be stuck going to very boring and ugly art exhibits for our friends who are now full-time artists, and we would have to tell them that their work is really good when it's not.
Morbin: I'd take a line from Pink Floyd there, when they said, "I thought I'd something more to say." I think that's what we'd find if we all had time to do creative work.
Delaney: Well, mine is more self-interested. I think Queen or President AI would introduce mandatory daily recharging naps for humans. Just as electronic devices have to recharge, imagine a global siesta hour - not dissimilar to what some of our friends in the Med enjoy - enforced by AI, where everybody is required to stop what they're doing and take a nap. And there would be, of course, huge penalties for those who stay awake. I think that could work very well.
Morbin: I'm on board.
Riotta: I'm not mad at that at all.
Delaney: Well, thank you very much - informative and educational as always. Thank you so much for your time and insights.