    Episode 036 · September 9, 2025 · 25m listen

    When Medical Device Cybersecurity Becomes a Crime | Ep. 36

    Episode Summary

    In "When Medical Device Cybersecurity Becomes a Crime," episode 36 of The Med Device Cyber Podcast, we explore a significant shift in medical device cybersecurity enforcement. Historically, cybersecurity issues in healthcare fell primarily under HIPAA, focusing on data privacy. However, a recent Department of Justice (DOJ) enforcement action against Illumina highlights a new era: when cybersecurity flaws in medical devices lead to patient harm, they can result in legal prosecution under the False Claims Act.

    This episode delves into the critical distinction between data breaches and the direct patient safety risks inherent in compromised medical devices like infusion pumps or pacemakers. The discussion emphasizes that known, unmitigated cybersecurity risks, especially when misrepresented to federal healthcare organizations, can lead to severe consequences, including misdiagnosis, mistreatment, and even death. The hosts discuss the challenges medical device manufacturers face in integrating cybersecurity by design from the outset, particularly given the FDA's evolving guidance (specifically the September 2023 guidance) and lengthy development cycles.

    The conversation underscores the growing recognition of cybersecurity as a clinical risk, moving beyond theoretical concerns to tangible patient mortality. It also touches on the secure product development framework (SPDF) and evolving regulatory strategies, acknowledging a slow but positive shift in industry awareness and proactive engagement with cybersecurity, despite the inherent tensions of speed-to-market pressures. The episode concludes with a look at the future of medical device security, emphasizing the importance of aligning organizational functions to address cybersecurity throughout the total product life cycle.

    Key Takeaways

    1. A recent Department of Justice enforcement against Illumina under the False Claims Act signifies a major shift, making medical device cybersecurity failures a prosecutable offense, not just a penalty.
    2. Unlike HIPAA, which focuses on health information privacy, current enforcement prioritizes direct patient safety concerns arising from compromised medical devices, where cyberattacks can lead to tangible physical harm or death.
    3. The medical device industry is challenged by the FDA's relatively new cybersecurity guidance (September 2023) and lengthy development cycles, which often necessitate retrofitting security into products already in development.
    4. Companies are increasingly adopting proactive regulatory strategies, including anticipating FDA deficiencies and preparing remediation plans during review cycles, to expedite market entry and enhance cybersecurity.
    5. The industry is slowly recognizing cybersecurity as an acute clinical risk, with a growing understanding that poor security can directly contribute to patient mortality through delayed treatment or device malfunction, necessitating a "security by design" approach from the start of the total product life cycle.
    6. Adherence to a secure product development framework (SPDF) from the early stages of development is becoming crucial for medical device manufacturers to mitigate legal, regulatory, and patient safety risks.
    7. Manufacturers must align sales, engineering, marketing, and compliance teams to ensure device security from initial development throughout the total product life cycle, especially given the high failure rate of medtech startups that overlook regulatory complexities.
    8. Misrepresenting cybersecurity protections, particularly to federally funded healthcare organizations, can invoke severe legal repercussions, highlighting the increased government oversight and scrutiny.
    9. The transition from cybersecurity as a technical risk to a significant legal and clinical risk is fundamentally reshaping how medical device manufacturers approach product security and regulatory compliance.
    10. The proactive integration of security controls and documentation throughout the entire development process reduces the likelihood of costly and time-consuming remediations later on, especially as regulatory bodies intensify their cybersecurity focus.



Hi, welcome to another episode of the Med Device Cyber Podcast. Today, we're talking about what happens when your cybersecurity flaw doesn't just cause a breach, but breaks the law. There's been a recent case where the Department of Justice brought an enforcement action against a medical device manufacturer, Illumina, and this is public knowledge, because they sold their system under false claims. There were false pretenses about how secure the system was, and the exact vulnerabilities weren't disclosed; they were sort of hidden. The whole idea is that cybersecurity failures today are now being prosecuted, not just penalized, because the risk is much greater with medical devices. We're looking at patient safety. We're looking at potentially killing a patient, maiming a patient, or misdiagnosing a patient. This is a much greater risk than something such as HIPAA, which has traditionally been the enforcement focus within the healthcare umbrella. So before we dive in too much, I want to introduce our co-host here, Trevor. Trevor just moved to California, to the Bay Area, and he was explaining that he doesn't have California license plates or a driver's license, so he has to move his car quite often so he doesn't get a ticket. Is that what you were saying?

Yes. In this neighborhood, you have to have a parking permit to park for longer than two hours. To get a parking permit, you need California registration. To get California registration, I need California insurance. To get that, I need California residency. Since I just moved here, I don't have any bills or receipts, anything like that. So this is going to be a fun month of shuffling my car around every two hours. The closest parking garage to me is $500 a month, so I'm not going to do that. I'm just going to do the car shuffle for a month.
So this is particular to your complex then? Because I've traveled to California with an out-of-state license plate, and I didn't have to move my car all the time, but I wasn't living in a neighborhood.

It's just a San Francisco thing. This neighborhood has parking permits and all that. Some don't. Sometimes people just leave their car wherever across the city and then take the bus back and forth to it.

I guess it makes sense, because I've been to California quite a bit, and I would see these old RVs parked in random neighborhoods, and it seemed like people just parked there and lived in them. So I understand the problem they're trying to solve. I'm still in Phoenix today, and it's supposed to be another hot day. I went out for a walk yesterday. It was like 118, but it had cooled down, I think, to 111 when I walked. A nice cool temperature of 111, perfect for a walk.

Hey, it's a dry heat. I was in New Jersey not too long ago, and it was like 95 with 99% humidity. That felt hotter to me than here.

Yeah, Phoenix doesn't feel too bad. If you're out of the sun, it feels quite nice, actually. It's mostly the sun that's intense, not really the air.

The sun is definitely intense.

Awesome. So, let's dive into this Civil Cyber-Fraud Initiative, which is the DOJ's initiative to use the False Claims Act to pursue vendors and contractors that misrepresent their cybersecurity protections, particularly in healthcare. What are your thoughts? Why are we moving in this direction, and why haven't we been doing it more diligently in the past?

Cybersecurity is still a pretty new industry as far as things go, and it's so rapidly evolving. Even the addition of regulatory requirements around cybersecurity only happened two years ago.
So this is a fairly new industry, and it's so rapidly evolving that governments around the world are trying to figure out ways to get on top of it. Unfortunately, regulation moves a little slower than an industry like cybersecurity, so it's an especially hard problem to solve. What we're trying to do now is bring enforcement with an actual consequence, an actual punishment, if cybersecurity standards are not adhered to. For example, whether you're selling a car in America, a medical device, or industrial control and automation systems, these all have different cybersecurity requirements that are constantly getting stricter and more evolved, especially in healthcare. If these are violated, there's now going to be an enforced, actionable punishment against the companies that knowingly violated cybersecurity best practices. Knowingly violated is the key.

Yeah. And we've been enforcing HIPAA for a really long time. So is this a big shift in your opinion, away from, or maybe in parallel to, the HIPAA enforcement that's been going on? What are your thoughts on HIPAA versus what we're talking about here with medical devices?

Well, looking at HIPAA, the Health Insurance Portability and Accountability Act, its privacy protections are focused on healthcare information. With a medical device, of course, that's going to be relevant. Think about how many different systems now integrate with electronic health records, saving clinicians tons of time; the processes are super automated and easy to follow now. But there's an added layer: the patient harm layer. Think about an implantable device like a drug infusion pump, or a pacemaker as another great example. If one of these devices gets hacked, the main concern isn't really the healthcare information.
It's going to be a direct safety consideration. Even though it's very bad, very dangerous, and can lead to a lot of downstream problems if your healthcare records get breached, it's not a safety consideration, and you aren't going to be directly physically harmed because of it. So this is why it's a little new in the space. There aren't too many industries where cybersecurity can lead to tangible physical harm, or even death in some cases. I think that's a bit of a disconnect between this industry and other industries, which is why it's so rapidly evolving and why it's a little different from HIPAA. So we're seeing some different controls and different consequences as opposed to HIPAA compliance.

Yes, and I agree. The risk is much greater, as we always talk about, with a medical device versus someone stealing your PHI, which in my case has probably been stolen a thousand times already but isn't directly impacting my health right now. Whereas a surgical robot performing surgery on my knee that goes haywire could obviously damage my knee. So, with the Illumina case, I think this one's pretty interesting. I was wondering if this would even have come to light if they didn't sell into federally funded healthcare organizations, because anytime you sell something to the government, you're under more scrutiny and in the crosshairs of their regulations. I know there's a whistleblower involved, but what's a little bit of the backstory on the Illumina case that you're aware of, Trevor?

So you touched on a couple of the key points there. There was a whistleblower involved who went to the DOJ, or possibly the FDA, to raise these problems; I'm not sure on the exact details. But this was a joint effort between the Department of Justice and the US FDA to prosecute this, since the FDA is responsible for medical devices and sets and enforces these controls for cybersecurity.
Of course, it's the Department of Justice that's executing the enforcement, but the FDA sets the standards. Now, selling into a government agency is certainly going to be part of it. You have to go through the FedRAMP process, get approved onto government supplier lists, things like that, which is generally a little more strict than some private controls. But I don't think this would have stayed buried even if they were only selling privately. The class of device that is typical in this diagnostic space falls under the PMA pathway, and the PMA pathway requires annual reporting on all sorts of factors, including, for cybersecurity, things like control effectiveness and patching rates. So I feel like the cracks would have started to show eventually, one way or another. They may have come up during an FDA audit or even as part of that annual reporting. I think it's good that it came out sooner rather than later.

After a deeper investigation, it turned out that a lot of very critical security controls were completely omitted from the system. Illumina knowingly, and admittedly, did not follow security-by-design principles. They integrated a lot of different software and product components into their system while knowingly doing so in a risky manner, especially in the diagnostic space. This can lead to misdiagnoses and mistreatments, such as the administration of therapy that could be potentially harmful. Think about something like a cancer diagnosis and cancer treatment. Chemotherapy is brutal and terrible for you; even if you have cancer, it's really, really hard on your body. So imagine if you don't have cancer and you're unnecessarily going through a really rough treatment.
Yes, and some of the internal communications were made public as well, where the organization, I think the whistleblower or someone in that person's area of work, flat out said that this device is not secure and there are major risks with it, but they chose to go ahead and put it on the market anyway.

Yes, there were internal communications where they had knowingly documented that there were uncontrolled risks in the system, had written down and kept internally that it should not be used in a healthcare delivery organization due to those uncontrolled risks, and then sold it anyway. So a lot went wrong with how this was handled, and it came out to be an act where they willingly tried to defraud the FDA about their security controls, saying they had a secure product when they didn't. Integrating this into other systems can be a huge risk as well. We talked about the consequences of a misdiagnosis, but these systems can be used in a wide range of environments. In a healthcare delivery organization, think about how many hundreds, if not thousands, of different medical devices there are. If one of them gets compromised, that can lead to others getting compromised, and the downstream consequences can be pretty significant.

Yes, this is an interesting scenario, and maybe it will change the landscape a little bit, because I don't know of too many cases like this Illumina one that have come to light involving medical devices, the Department of Justice, and legal ramifications. I think this probably happens more often than we like to think, because if I'm a company losing money during development, the sooner I get my product to market, the quicker I can start generating revenue. So I think a lot of companies probably take some risk to get that product to market.
It's just a matter of what types of risk they're actually taking and which ones are acceptable. And it sounds like the ones Illumina took are not acceptable, due to the criticality and class of their device. But we've had clients in the past where, whether it was the organization as a whole or a specific software developer, we pointed out something pretty major, and they fixed it in one little area to try to fool us but didn't fix it across the system as a whole. So I often wonder: why are they doing that? Is it to test how good we are? Do they just want to check a box and move on without actually caring about the security? Or is it just the software developer's personality? What are your thoughts on that?

Yes, that specific scenario was very interesting. They applied a control to one user account specifically instead of doing it globally, and of course, that was the test account we were using. We had some other vulnerabilities, so we were able to compromise other accounts, and through those accounts we saw that this was not a global control. Oftentimes it can be about trying to reduce time to market or speed up a process. There might be technical constraints baked into the device where a remediation is just impossible, so it's going to carry some level of risk. And this generally means that we've shifted security too far down the line. The FDA states that we should build security into our devices, which helps prevent these problems. If security is built into a product, you're not going to have to worry about down the line going,
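The anti-pattern described above, a fix applied only to the one account the testers were known to use rather than enforced globally, can be sketched in a few lines. Everything here (account names, action names, the shape of the authorization check) is hypothetical and purely illustrative; it is not taken from the actual engagement or any real device.

```python
# Hypothetical sketch of a security "fix" applied per-account instead of globally.
# All names below are invented for illustration.

PRIVILEGED_ACTIONS = {"update_firmware", "export_logs"}

def is_authorized_broken(user: str, action: str) -> bool:
    """Broken 'fix': only the known test account is restricted."""
    if user == "pentest_account" and action in PRIVILEGED_ACTIONS:
        return False  # control applied to this one account only
    return True       # every other account still bypasses the check

def is_authorized_fixed(user: str, action: str, admins: set) -> bool:
    """Global fix: the control is evaluated for every account."""
    if action in PRIVILEGED_ACTIONS:
        return user in admins
    return True

# The per-account patch passes a retest from the original account...
assert is_authorized_broken("pentest_account", "update_firmware") is False
# ...but the vulnerability remains for any other compromised account.
assert is_authorized_broken("some_other_user", "update_firmware") is True

# The global fix restricts every non-admin account uniformly.
admins = {"service_admin"}
assert is_authorized_fixed("some_other_user", "update_firmware", admins) is False
assert is_authorized_fixed("service_admin", "update_firmware", admins) is True
```

This is exactly why retesting from a second, freshly compromised account (as the hosts describe doing) exposes a spot fix: the control holds for the account the vendor expected and fails everywhere else.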

    Hosted by Christian and Trevor