ISMG Editors: UnitedHealth Group's HIPAA Breach Fallout
Also: The End of an Era at Mandiant and Privacy and Ethics Concerns Related to LLMs
Anna Delaney (annamadeline) • May 24, 2024
In the latest weekly update, Information Security Media Group editors discussed the implications of Kevin Mandia stepping down as Mandiant CEO; UnitedHealth Group's responsibility for a massive HIPAA breach at its subsidiary, Change Healthcare; and privacy concerns over large language models.
The panelists - Anna Delaney, director, productions; Tony Morbin, executive news editor, EU; Marianne Kolbasuk McGee, executive editor, HealthcareInfoSecurity; and Michael Novinson, managing editor, ISMG business - discussed:
- Key factors that led to Kevin Mandia's decision to transition from CEO to an advisory role at Google;
- Why more than 100 medical associations and industry groups are urging the U.S. Department of Health and Human Services to hold UnitedHealth Group solely responsible for HIPAA breach notifications following the Change Healthcare ransomware attack;
- The privacy and ethics debates surrounding large language models as many companies, such as Slack and Salesforce, automatically opt users in to having their data used for training.
The ISMG Editors' Panel runs weekly. Don't miss our previous installments, including the May 10 edition on the wrap-up of RSA Conference 2024 and the May 17 edition on why synthetic ID fraud is on the rise.
Transcript
This transcript has been edited and refined for clarity.
Anna Delaney: Hello and welcome to the ISMG Editors' Panel. I'm Anna Delaney. And this week, we'll discuss Kevin Mandia stepping down as Mandiant's CEO, UnitedHealth Group's responsibility in the HIPAA breach fallout and privacy concerns with large language models. The quartet today comprises Marianne Kolbasuk McGee, executive editor for HealthcareInfoSecurity; Tony Morbin, executive news editor for the EU; and Michael Novinson, managing editor for ISMG business. Good to see you all.
Marianne McGee: Hi Anna!
Tony Morbin: Heya!
Michael Novinson: Nice to see you as well.
Delaney: So Michael, starting with you this week. You've written that Kevin Mandia is stepping down as the CEO of Mandiant to take on an advisory role at Google, with Sandra Joyce and Jurgen Kutscher taking over leadership of Mandiant's threat intelligence and incident response units, respectively. So what were the key factors that led to Kevin Mandia's decision to transition from CEO to an advisory role?
Novinson: Thank you for the question, Anna. In the grand scheme of things, it's not incredibly surprising - Mandiant was acquired by Google in September of 2022, and typically you will see chief executives of acquired companies stick around for two or three years; it's often written into the acquisition agreement. So it's not shocking to see him transitioning out in that sense. But in another sense, given what a well-known figure he is in this industry, it is surprising to see him leaving the company that bears his own name. It's been quite a journey for him. An Air Force veteran, he started the consulting practice in 2004, took the name Mandiant in 2006 and has tried to find a way to turn these essential services the world needs - threat intelligence, insights into adversaries and responding to some of the most well-known cyber incidents in the world - into a viable business. Anytime you see an SEC filing from a major company that was compromised, Mandiant is the one they're using. And that is challenging, because Mandiant's secret sauce is having the smartest people in the room together, and it shows in the quality of the work they do. But smart people are very expensive. It is hard to scale labor the same way that you scale technology, and certainly that's been one of the things he's grappled with over the years. First, Mandiant became part of FireEye. The idea was bringing that FireEye technology - those network, firewall, sandboxing and APT products - together with Mandiant's services. That arrangement eventually broke up: the products were spun off into a separate group, and on the services side, Mandiant was independent for a couple of months. But then, yes, it found a home in Google. Obviously, Google, which is a very well-capitalized company, can support this. They were already doing a lot of work around security operations and had invested in Chronicle. It's also a place where they don't have to answer to Wall Street every quarter and have all their numbers scrutinized, so it may be a better home for them in that way. Mandiant certainly is getting embedded more into Google. They are keeping the brand around, but it is now part of the Google security practice, which is another reason it's not surprising to see him transition out. In terms of his impact, he is somebody who was able to talk articulately about what adversaries are doing and was able to bring cybersecurity to the masses. You see executives on the financial shows - on CNBC in the United States - talking about how much money they can make investors. But in terms of explaining what adversaries in China and Russia and North Korea are doing, on CBS and in The New York Times, that's something that, perhaps more so than almost anyone else, Kevin Mandia did a good job of bringing to the masses, talking to people in plain English about what adversaries are doing and why you, as John Q. Citizen, should care. At the same time, the reports Mandiant commissioned and put together had high levels of detail and were able to talk about what specific individuals were doing in a way that nobody had done before. That, I think, educated the general public as well as the security community about what different adversaries are doing.
We've seen a lot of people imitate Mandiant since, but Mandiant was almost certainly the first to do it at this level of granular detail and perhaps still the best, and I think we have Kevin Mandia to thank for that.
Delaney: How has the industry reacted to this leadership change at Mandiant? Has there been a market reaction of any sort?
Novinson: It has been pretty quiet. I think there's been less visibility just because Mandiant is part of Google - he's still been doing some things, but he's been a little less visible since Mandiant became a division within Google rather than his own company. But certainly, there's a question around what's next for him. I guess the obvious path would be the investor and advisor route. He became a strategic partner at Ballistic Ventures, which is a venture capital firm, and he has made investments in 15 companies since late 2021. He sits on a handful of boards. So certainly you see a lot of former high-profile CEOs go that route. I think about public service as well - he spent six years in the Air Force, he sits on a number of boards and commissions for President Biden and for CISA, and he is very well tied into the national security and intel community. So I have to wonder, when I think about what's next for him: is he going to be primarily following in the footsteps of Dave DeWalt, the former FireEye CEO, doing a lot on the private sector side? Or are we going to see him more on the government side? Certainly, he has the chops, the expertise and the relationships that he could use to make an impact on the public side as well.
Delaney: Very good. Thanks so much Michael for updating us.
Delaney: Marianne! Is UnitedHealth Group on the hook? You've reported that over 100 medical associations and industry groups are urging the U.S. Department of Health and Human Services to hold UnitedHealth Group solely responsible for HIPAA breach notifications following the Change Healthcare ransomware attack. Can you share the latest?
McGee: Sure. The dust is starting to settle in terms of the massive IT disruption that was caused by the Change Healthcare attack in February, with most of the company's major IT services and products back online. But now the reality is starting to set in for U.S. healthcare providers, including thousands of doctor practices and hospitals, about the ransomware attackers compromising protected health information belonging to their patients and the resulting HIPAA breach notification duties that could be triggered. Now, UnitedHealth Group, which is the parent company of Change Healthcare, has offered to handle breach notification for its customers and for entities affected by the breach. But entities are not so sure what the regulators think. So earlier this week, more than 100 industry groups that represent doctor practices as well as healthcare CISOs and CIOs sent a letter to the U.S. Department of Health and Human Services asking for clarity regarding the breach notification responsibilities of HIPAA-covered entities and their business associates that are affected by the incident. The groups are essentially asking the HHS Office for Civil Rights to publicly state that UnitedHealth Group is indeed solely responsible for breach notification, and not the entities whose patients' PHI was compromised. UnitedHealth Group has estimated that the PHI of up to one-third of the U.S. population could be affected by the attack. This means breach notification could involve more than 100 million people, which would undoubtedly make it a record-breaking breach notification event for the healthcare sector. Going back to April, HHS OCR issued guidance in the form of frequently asked questions regarding the Change Healthcare attack and the potential breach notification duties of HIPAA-covered entities and business associates that are affected. But HHS's guidance pretty much didn't solve the problem in terms of clarity for many of these entities. HHS said that covered entities that were affected are still required to file breach reports to HHS and provide notification to affected individuals without unreasonable delay. And there's fine print, of course, to all of this. Business associates affected by the incident, meanwhile, must notify their affected covered entities after the discovery of the breach. Covered entities have up to 60 calendar days from the date of discovery of a breach of unsecured PHI to file breach reports to OCR through its portal for breaches affecting 500 or more individuals, according to the guidance that HHS offered - and that guidance is not new in terms of what HIPAA calls for. So even though UnitedHealth Group has offered to handle breach notification for customers affected by the Change Healthcare attack, the complex relationships in the situation muddy the waters, especially in terms of HHS OCR's guidance. For instance, some healthcare providers might be business associates of Change Healthcare, while for others, Change Healthcare might be acting as a clearinghouse, which would be a covered entity responsible for notification. So this all gets kind of muddy depending on what UnitedHealth Group does for an entity and the kind of services that Change provided. But in any case, HHS OCR wants to ensure that no affected individuals fall through the cracks in terms of breach notification, with healthcare entities thinking that they don't need to notify their patients because UnitedHealth Group will do that.
So these health industry groups are arguing that, if anything, many patients could end up getting multiple notifications for the same breach. If every affected doctor's office or hospital or specialist needs to notify its patients, the groups contend, this is only going to confuse patients - they're going to get multiple breach notifications, they're going to be alarmed and confused, and so on. As far as we know, at this point, UnitedHealth Group still hasn't reported a breach to regulators. And the company has said that it could take several weeks or months for that analysis to be done. But once that happens, I bet there will be a new round of panic in the healthcare sector, this time regarding breach notification issues, unless this gets clarified pretty soon. And to me, it seems like the regulators are sticking to what HIPAA says, unless, for instance, a covered entity has a business associate agreement and, under the terms of that contract, the business associate is responsible for breach notification. So this gets pretty muddy for the industry.
Delaney: Muddy indeed, given the scale of this breach, which potentially affected one-third of the U.S. population. What do you expect the short-term and long-term consequences to be for the healthcare sector as well as patients?
McGee: In the short term, once the breach notifications kick in, there's going to be a lot of scrambling and there will be confusion. In the long term, there are going to be dozens and dozens of lawsuits. Dozens have already been filed against UnitedHealth Group and Change Healthcare by patients who are assuming that their data was affected. And then, once the individual doctor practices whose patients were affected are identified, you'll probably see either these lawsuits get amended or new lawsuits that name both UnitedHealth Group and these various healthcare providers as codefendants. There are thousands of small doctor practices that were hurt financially by this attack because they couldn't process claims. The last thing they need now is to hire lawyers to defend them against lawsuits - but we'll see those happen, I'm sure.
Delaney: Marianne, the saga continues. Thank you so much for sharing the latest. Tony, you're looking at a story covered by our colleague Mat Schwartz, which delves into the privacy and ethics debate surrounding large language models. And it turns out that many companies like Slack and Salesforce are automatically opting users in to having their data used for training, which of course raises concerns about transparency and compliance with GDPR rules, does it not?
Tony Morbin: GDPR and a lot of other concerns as well. If you don't pay for a product or service online, you are the product, because the data has a value. But today, data-hungry large language models are hoovering up all your data, whether you paid for the service or not. So, as you said, just this week we've been learning about enterprise customers of Slack, owned by Salesforce, discovering that they're automatically opted in to having their data used to train Slack's global LLMs. Now, Slack responded to Mat's article - the response is also included - saying there is an opt-out option and that the data it's collecting is metadata, not personally identifiable data. But I think any of us in this industry know exactly how useful metadata can be; you don't actually need to have the personal data. Also, it can be argued that long before ChatGPT accelerated LLM use, we were using machine learning and AI to make sense of customer data. We had the original Facebook, effectively rating people at Harvard, and Amazon telling us what we might like to buy based on what we previously bought. And as Mat's excellent report notes, Slack isn't alone. Adobe, Amazon Web Services, Google Gemini, LinkedIn, OpenAI and many others have terms and conditions that say, by default, they can use their customers' data and interactions to train their LLMs. Legal and privacy experts are concerned. These organizations need to ensure that they comply with relevant privacy regulations - as you mentioned, Anna, that includes the General Data Protection Regulation in Europe, the GDPR. And just this week, we've got the AI Act: the vote has just passed, and the world's first AI law is set to come into force in the EU next month. A key requirement is that companies need to be transparent with users about what data they are collecting and for what purpose. And the general consensus is that a small note in the terms and conditions isn't likely to be enough to count as informed consent. It's not just in Europe that privacy is a growing concern, but with GDPR, European regulations can also have a global impact - if you're dealing with people in that territory, you want to meet the higher standard. We've also seen Snapchat this week revise its AI privacy policy following a U.K. ICO probe. And - perhaps more copyright than privacy, but in the same area - we also saw Hollywood megastar Scarlett Johansson say that she was shocked, angered and in disbelief that Mr. Altman of OpenAI would pursue a voice that sounded so eerily similar to hers, after she had specifically declined to allow OpenAI to use her voice. On the other side, for many, the benefits of AI justify this loss of privacy. A recent PwC study found that productivity growth was almost five times as rapid in parts of the economy where AI penetration was highest compared to less exposed sectors, and it went on to say that we're only seeing the tip of the iceberg because so many areas aren't actually using AI yet. If you go on to any of the AI entrepreneurial forums - which are quite interesting; obviously, they're pretty gung-ho - there's a consensus that the ability to innovate in AI requires a near absence of regulation, and some are quite pessimistic about the possibility of developing large language models in Europe without breaking the law. So that's the extreme end. And certainly China and the U.S. have taken a more liberal approach when it comes to regulation, specifically to try and avoid stifling innovation.
Having said all that, even while we're recording this, there's an international AI safety conference underway in South Korea, with both China and the U.S. attending. Its remit goes beyond privacy to include reliability, potential misuse and existential threats. The premise is that there's a need to address these risks uniformly - that the risks and challenges of AI can't be reduced to one jurisdiction - so it's calling for the establishment of a consistent global approach to AI regulation, putting checks in place that can be adapted across the board rather than specific regional or industry standards. The idea is that tech companies will be held accountable for deploying tech responsibly, while governments put regulations in place to ensure safe deployment. But, in addition to the differing national approaches, a big problem is that the rate of change and advancement in AI is so rapid that legislation is unlikely to keep up. Then the problem for the rest of us, in this fend-for-yourself environment, is: what do we do? Many CISOs are looking at how they can prevent their intellectual property, classified information, regulated data - including PHI - and other sensitive data ending up in somebody else's LLM. Options can include the use of small language models trained only on your own data, or private chatbots that outsiders can't access, but that will limit their learning capacity. Others, such as Britain's Department for Work and Pensions, have banned employees and contractors from using ChatGPT and similar LLMs. But then, of course, we have the potential problem of shadow IT - or shadow AI, I should say in this case. We're also seeing an increasing interest in technical controls, such as applying data loss prevention software, blocking or filtering sites and services, inventory controls that vet AI models before they're deployed, and monitoring providers for compliance and application of privacy and security rules. This also includes strengthening the terms of engagement around the AI services and products that we not only buy but commission, when we get others to build them on our behalf. So how are we likely to get a global consensus on how we should develop and deploy AI? If you look at any of the polarized discussions underway online today - choose any geography and almost any topic - it is clear that neither side trusts anything the other is saying. And given today's geopolitics, you could say that an international AI agreement is impossible. Yet, for all their flaws, we do have the International Atomic Energy Agency, international aviation law and the Law of the Sea, because we understand and agree about the dangers and benefits. An international AI agreement needs to be added to that list, even if it's just to establish principles, if not laws.
Delaney: That was a great overview of a massive debate. Tony, what questions remain for you? Where will you be watching as this continues to unfold?
Tony Morbin: There is a big competition going on between the U.S. and China as to who's going to control AI. China has huge amounts of data internally from its own population, who don't have any say over whether that data is used or not. China is looking for geopolitical advantage, and others such as Russia and Korea and so on will probably do the same. In the U.S., there is geopolitical advantage at stake but also commercial advantage - and how do you gain that commercial advantage? Europe is trying to regulate privacy for individuals, but is it going to lose out on innovation, as the entrepreneurs seem to be saying it will? That's a big concern. And this is just one of the aspects of safety and privacy. In the U.K., the former head of the Post Office gave evidence over a load of post office workers being imprisoned for allegedly stealing money when it appears that it was all down to a computer flaw. I'm just saying that this emphasizes how far we can trust systems when we don't know how they work, because the consequences can be very dire.
Delaney: Absolutely, we need complete transparency here, and I don't know if we're ever going to get that, or anytime soon. But no doubt ... to be continued. Thank you so much, Tony. And finally, just for fun: as Tony said, the latest controversy around OpenAI allegedly using a vocal likeness of Scarlett Johansson for its new AI assistant's voice inspired this question. As a journalist, whose voice would you personally choose for AI-generated content, and why?
McGee: For me, I will say Diane Sawyer. I don't know if Anna and Tony, you in the U.K., are familiar with her, but she was a longtime correspondent at 60 Minutes, later became anchor of ABC World News and was a co-anchor of 20/20. And I say her because I always found her authoritative, yet empathetic, very believable and just soothing. I think her voice and her style would make people convinced that whatever this AI is saying must be true. And Diane, I advise you not to lend your voice to any of this.
Delaney: For that trust, yes, I'm going to Google her voice now. Michael?
Novinson: I guess I'll pick someone Anna and Tony are definitely familiar with: Dame Helen Mirren - not a journalist, but a lovely voice nonetheless. And if I had someone narrating my life and my work, who better than her?
Delaney: We love Dame Helen. Great choice. Tony?
Tony Morbin: I'm hoping you'll have heard of the one I'm choosing. I once heard a comment that there comes a time in your life when you have to choose whether your role model for old age is Keith Richards or David Attenborough. I've switched over to the David Attenborough side now, and I'd be looking to his voice for its authenticity. And along the lines of what Marianne said: if you can fake authenticity, you've got it made.
Delaney: So Tony, one of my suggestions was also David - who is obviously a leading authority on nature documentaries - and also Morgan Freeman, because he is the voice of God. But I think I'm going to go for Duchess because, a bit like Marianne said, we need someone trustworthy. Duchess is the elegant mother cat, of course, voiced by Eva Gabor, and I just think her voice exudes sophistication, warmth and elegance. And as I said, ultimately, you trust her.
Tony Morbin: Another voice that I do love but unfortunately he's evil is the tiger - Shere Khan in The Jungle Book.
Delaney: Awesome - there's The Lion King too; we could go on. Thank you so much. This has been great, informative as always, educational. Thanks.
Novinson: Thank you Anna.
McGee: Thanks Anna.
Delaney: Thanks so much for watching. Until next time.