    Episode 007 · December 24, 2024 · 23m listen

    The Evolution of Medical Device Cyber Threats: Past, Present, and Future | Ep. 6

    Episode Summary

    This episode of "The Med Device Cyber Podcast" delves into the evolution of medical device cybersecurity threats, offering essential insights for product security teams, regulatory leads, and engineers. Beginning with historical incidents like the Dick Cheney pacemaker concerns and Barnaby Jack's insulin pump hack, the discussion highlights the early recognition of wireless vulnerabilities in implantable devices. The conversation then transitions to the present, focusing on the FDA's 2023 guidance, which emphasizes designing secure medical devices throughout their entire lifecycle. The episode addresses the significant challenges posed by millions of legacy devices currently in the field and the industry's push for transparency through Software Bill of Materials (SBOMs) to articulate cybersecurity risks. Looking ahead, the episode explores future concerns such as autonomous surgical robots and the dual-edged sword of artificial intelligence in both defending and attacking medical infrastructure. Product security professionals and regulatory specialists will find the discussion on evolving threats, current regulatory landscape, and future considerations invaluable for mitigating risks and ensuring patient safety.

    Key Takeaways

    • Early medical device hacks, such as those involving pacemakers and insulin pumps, demonstrated critical vulnerabilities in wireless connectivity and the severe patient risks associated with them.
    • The FDA's 2023 guidance has shifted the industry towards integrating cybersecurity throughout the entire medical device lifecycle, from design to disposal.
    • Addressing the cybersecurity of millions of legacy medical devices in the field remains a significant challenge, requiring ongoing security research and responsible vulnerability disclosure.
    • Transparency through Software Bill of Materials (SBOMs) is crucial for device manufacturers to articulate cybersecurity risks to healthcare providers and patients.
    • The future of medical device cybersecurity will contend with emerging threats from autonomous surgical robots and the offensive and defensive applications of artificial intelligence.
    • Proximity is not a sufficient security control for wireless medical devices, as specialized equipment can enable remote exploitation from significant distances.

    Frequently Asked Questions

    Quick answers drawn from this episode.

    • This episode covers SBOM Management. It's part of The Med Device Cyber Podcast, hosted by Blue Goat Cyber, focused on practical medical device cybersecurity guidance for MedTech teams.

    • The episode is most useful for medical device manufacturers, cybersecurity engineers, regulatory affairs professionals, and MedTech founders preparing for FDA review.


In this episode, we're going to cover the evolution of medical device cyber threats: the past, the present, and the future. Let's start with the past. Trevor, do you want to start off with some of the history of medical devices and cybersecurity attacks against them? One early device attack that has seen some coverage was the concern Dick Cheney had around 2007 about his pacemaker. He worried there could be an assassination attempt against him, since his pacemaker had a wireless connectivity feature, and that someone could hack into it and try to kill him. Interestingly enough, a security researcher was able to prove that his concerns were well-founded. They were able to take pacemakers and, as a proof of concept, change their functionality in ways that could effectively assassinate the wearer. That was one of the original notable events, back in 2007, where medical device cybersecurity really came into play. That's about 17 years ago; it's pretty amazing. I think a lot of people don't realize implantables such as pacemakers have wireless functionality, typically Bluetooth, because they occasionally need a firmware update. You don't want to take the device out of the patient every time you want to update it, so you do it over Bluetooth. Often, data is also read off the device, such as diagnostic data or data about the patient, so that's why it has some sort of wireless capability. Then we've also got the hacks by the mysterious Barnaby Jack, kind of a funny name. He hacked an insulin pump and was able to deliver the maximum dose of insulin over and over, which could cause somebody to die. He didn't do it on a real patient, but he did a demonstration at Black Hat, and this was in 2011, only four years later.
With these insulin pumps, he was able to use a high-power antenna to connect to a pump from a considerable distance and manipulate it that way. Barnaby Jack was the same guy who discovered that pacemaker attack and did the proof of concept as well, isn't that correct? From my understanding, he heard about the threat to Dick Cheney and wanted to validate that it was a legitimate threat. He proved he could do it; he proved he could connect to a pacemaker and shock somebody over and over. He liked to use these high-power antennas so he could do it from a distance. A lot of people think Bluetooth means you have to be super close, but I've heard of people sniffing and connecting to Bluetooth devices from a mile away with a high-power antenna. Wow, that's really interesting. I know that a lot of times we'll see proximity used as a security control around Bluetooth. Someone will say, "Well, there's not much likelihood of exploitation, just due to the fact that Bluetooth is such a close-range communication," but that's not always the case. With specialized equipment, you can attack it from pretty far away. There's a thing called the BlueSniper rifle that is designed to connect to Bluetooth from a mile away. It's a very directional antenna; it looks like an actual rifle. You probably shouldn't walk around with it in downtown Phoenix or anything, or New York specifically, or California. But this researcher is able to sniff Bluetooth from a mile away and connect to Bluetooth devices. So proximity is not always a good defense, especially with wireless. We like to use it as a defense, but it isn't really, unless you have a Faraday cage or something. I think Arizona is probably the only place where you can walk around in a major city with a rifle and nobody's going to ask you any questions.
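The "mile away" claim above can be sanity-checked with a back-of-the-envelope free-space link budget. The sketch below is illustrative only: the transmit power, antenna gain, and receiver sensitivity are assumed typical values, not measurements of any particular device, and real-world range is usually much shorter because of obstructions, interference, and fading.

```python
import math

def max_range_m(tx_dbm, tx_gain_dbi, rx_gain_dbi, sensitivity_dbm, freq_hz=2.4e9):
    """Estimate maximum radio range (meters) from a free-space link budget."""
    # Total loss the link can absorb before the received signal
    # drops below the receiver's sensitivity.
    budget_db = tx_dbm + tx_gain_dbi + rx_gain_dbi - sensitivity_dbm
    # Free-space path loss: FSPL(dB) = 20*log10(d) + 20*log10(4*pi*f/c)
    const_db = 20 * math.log10(4 * math.pi * freq_hz / 3e8)
    return 10 ** ((budget_db - const_db) / 20)

# Assumed typical BLE radio: 0 dBm TX, unity-gain antennas, -90 dBm sensitivity
stock = max_range_m(0, 0, 0, -90)
# Same radio, but the attacker adds an assumed 18 dBi directional antenna
directional = max_range_m(0, 0, 18, -90)
print(f"stock: {stock:.0f} m, directional: {directional:.0f} m")
```

Every 20 dB of extra antenna gain multiplies the theoretical free-space range tenfold, which is why "Bluetooth is short-range" is a weak security argument against a motivated attacker with directional equipment.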
That's why I switched it to California or New York, because I was thinking if I walked around with my rifle or even my shotgun, probably nobody's going to say anything. I've seen people in liquor stores with a gun in their holster, and I think, "Man, this is kind of interesting. This guy's in a liquor store; he might have been drinking, and he's got a gun in a holster outside his waistband, which is not concealed carry but open carry." It's kind of interesting. Similar to the insulin pump attack that Barnaby Jack discovered, Johnson & Johnson disclosed a vulnerability in 2015 that was essentially a copy of the problem Barnaby Jack had found in the past. Attackers could get into the pump without any access controls; they were able to hack into it and then, same thing, crank the dose up to the maximum level, continually apply it, and essentially cause anyone who had this insulin pump to die. This whole thing, again, was a remote connection. There have been a lot of attacks against these insulin pumps. Then in 2017, there was a vulnerability discovered in Smiths Medical's Medfusion pump. This was more of a drug infusion pump. Drug infusion pumps are used to administer any type of drug, not just insulin. If you could increase the flow rate through one of these attacks, you could kill somebody. Imagine you have a patient who needs some sort of drug for pain, such as morphine; if you can increase the flow rate of morphine wirelessly by connecting to the drug infusion pump, you can cause this person to overdose and die. There are severe consequences with these medical device hacks. Tying into the theme of the same devices getting hacked over and over, right around the same time, Medtronic had a pacemaker issue, same thing again.
Someone was able to control the pacemaker and cause malfunctions that could either keep the pacemaker from performing its intended function or actually cause detrimental effects to the patient. I think they issued a recall on those pacemakers. Imagine you're a patient with an implantable pacemaker; I believe they have to do some sort of open heart surgery to put a pacemaker in, and then it's recalled. Now what do you do? Do you decide to live with the risk that this pacemaker may be attacked, or do you go back to the doctor and have it removed and another one put in? Which risk is greater? If I'm a patient, that's a tough decision, I imagine. Definitely. Open heart surgery is pretty famously scary for a good reason; you don't want it if you don't need it. Having to weigh that balance, that if my pacemaker gets hacked I could very well die, or that I need to undergo a risky surgery to have it replaced, is a difficult decision to make. I think that's why there's been such a big push across the industry for security in the first place, rather than these retroactive security scrambles for devices that are later found to be vulnerable. Identify what the vulnerabilities are now, and then you're not going to have to worry about them down the line. Right. We want to do our best to design security into the device, and moving to the present, just last year, in 2023, the FDA came out with new guidance. There is a shift in the industry toward securely designed medical devices, devices that are secure across the whole lifecycle, from requirements to design to disposal. A lot of people have wondered whether the legacy devices currently out in the field, such as these pacemakers and other devices, need to be updated. What do you know? What have you heard about the legacy devices? It's a really hard issue to tackle.
There are estimated to be around two million unique medical devices out in the field right now. What do you mean, unique? It's not like all two million are different types of devices, right? Two million unique products. There's, of course, plenty of overlap there, you know how many different insulin pumps there are, but two million unique products, and each one has its own threat landscape, attack surface, and potential problems. With such a massive number, it's really hard to track down all of these legacy devices, and it's a very difficult problem to get ahead of. A lot of what's being done is a push for increased security research against these older devices. I know you briefly touched on the new FDA guidance from last year, but the security requirements around cyber devices, essentially any medical device with a computer attached, have gone up significantly. The security controls are very strict, and there are very strict testing requirements to try to prevent these problems in new submissions, whereas devices approved before this guidance came out didn't go through the same process. So there's been a push across the industry for more responsible research and responsible vulnerability disclosure, which is good-faith security research: the good hackers, with good intentions, examining these devices and finding problems for the greater good rather than exploiting them for personal gain. They're then able to inform the manufacturers, and the manufacturers are able to inform the FDA, but without an official, strict process in place, it's still a slow, tedious process, and right now it's definitely a pretty big concern to have all of these legacy devices out there in the open. 100%.
I think a lot of people are having to do annual tests against those devices and apply updates, because there is a huge risk with them out there. The FDA is also pushing for transparency. So if I'm a device manufacturer, I'm supposed to be transparent about the cybersecurity risk of my device so that a patient or a healthcare provider, whoever is going to use that medical device, understands the risk before they purchase it or decide to have a pacemaker implanted, for instance. As part of that transparency, that would mean talking about some of the parts of the device and everything going into it, isn't that correct? Yes, the parts of the device, such as all the third-party software that makes up the device and what controls are in it, so that if I'm going to buy something, I can understand, at least as best I can in layman's terms, what the cybersecurity risk is for that device. That makes sense. A similar analogy would be a car. If you buy a car, you're going to want to know all the parts that go into it, and if there's a problem with one of those parts, you're able to address it at that level instead of having a problem with the entire vehicle. That transparency is called a software bill of materials. If I buy a medical device that has software in it, I get the whole supply chain of that software, because people who develop software borrow code from other places. So the software bill of materials lists all the code that was borrowed from elsewhere that makes up the overall software. Kind of like the car example: say you buy the Cybertruck. Talk about cyber, I don't know why they call that thing the Cybertruck; I hate that term for that truck. I don't even like that truck, all space age. Is that what it is? It's supposed to look like something out of cyberpunk, or just that style. It's all angular, super harsh, brutalist looking.
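To make the software bill of materials idea concrete, here is a minimal sketch in the shape of the CycloneDX JSON format, one of the common SBOM standards. The device name, component choices, and versions are invented for illustration; a real SBOM would be generated by tooling and carry far more detail (suppliers, hashes, licenses).

```python
import json

# Hand-rolled, minimal SBOM in the spirit of CycloneDX JSON.
# "ExampleInfusionPump" and the component versions are made up.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "metadata": {
        "component": {"type": "device", "name": "ExampleInfusionPump"},
    },
    "components": [
        # The "borrowed" third-party code the firmware is built on
        {"type": "library", "name": "openssl", "version": "3.0.13"},
        {"type": "operating-system", "name": "FreeRTOS", "version": "10.5.1"},
    ],
}
print(json.dumps(sbom, indent=2))
```

Given such a list, a hospital can match component names and versions against published CVEs when, say, a new TLS library vulnerability is announced, instead of waiting for the manufacturer to work out whether its devices are affected.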
Well, that thing's supposed to have, if I'm going to buy it and understand the risk, I want to know like who made the engine, who made the... and that might be a bad example with a Tesla, but who made the tires, who made the brakes? So, you know, that way I know where everything came from. I've seen more of those things being hauled away, those Cybertrucks, by like a gasoline tow truck, than I've seen them actually working, I feel like. Every time I see news about them, it's something like, "Oh yeah, children get locked in the back of the Cybertruck," and then it just won't open when the battery dies, or if you have it in a cold environment, it doesn't work at all. I know there are a couple people up here in Flagstaff with Cybertrucks, you see them once in a while, and it just makes no sense; I think they have 40% reduced battery up here. Because of the altitude, or because of what? Because it's so cold. It's like four degrees in the morning here. These are the risks that need to be articulated. I mean, the Cybertruck is not a medical device; it's the same concept. Because I've heard here in Phoenix, in the valley as they say, it gets super hot, like 120 degrees, and I've heard the batteries of some of these cars like Cybertrucks have drained and people have gotten locked in the vehicle because they don't know how to unlock it, because you can't open the door without the battery. And that is a huge risk, especially if you're trapped in a car when it's 120 degrees, and well, it's just going to get hotter and hotter while the windows are up. Definitely. Isn't it crazy? We're like a two-hour drive apart and we have exact opposite problems here. It's too cold for cars to run, and there it's too hot for cars to run. Yeah, well, my car overheated not too long ago, and I remember you said something I always thought was funny about if you buy a Tesla, it comes with a new personality. What do you mean by that? The first thing you ever hear about anyone who drives a Tesla... 
I think about anytime I take an Uber. I just got back from Black Hat in Las Vegas, and I can't stand driving in that city, and I can't stand parking in that city even more, so I'll just take an Uber everywhere I go. If I got into a regular car, just a Honda Civic or whatever, the driver would say, "Hey, how's it going?" and talk about something else. If I got into a Tesla, the first thing that came up, every single time, was that guy's Tesla. It is the focal point for a lot of Tesla owners. Yeah, and for transparency they should probably say it comes with, I think they call it, range anxiety, isn't that right? Range anxiety, because you're always afraid you might run out of battery without a charger in your area. I'm not a fan of electric cars. I'm a fan of electric go-karts, but electric cars... Electric go-karts are fun. Yeah, so we talked a little bit about the present, some of the changes with the FDA guidance and how to handle some of the legacy systems. Let's move on to the future of medical device cybersecurity. One of the things I'm concerned about is autonomous surgical robots. Right now, we have surgical robots that are operated by a surgeon. There's a drive toward a future where a robot performs surgery on you all by itself. We've seen this in sci-fi movies, but we're getting pretty close to it. Imagine an autonomous surgical robot performing surgery on your spine, and that thing becomes compromised. There's no way to interact with it; there's no surgeon to take it off of you. That's a pretty scary scenario, don't you think? Definitely. And I know even with the current surgical robots, you occasionally see problems, whether it's a software malfunction or a hardware malfunction. Not too long ago, a woman undergoing surgery with a surgical robot had her small intestine singed and burned because of a malfunction.
So add an entire new layer to that: what if someone can force a malfunction? What if a bad guy can hack into that robot and do that on a massive scale? It's a really scary thought, and not having the checks and balances of a human operator in place might make it a little bit more of a risk. I've got mixed feelings about these autonomous things. I know here in Phoenix we have autonomous driving cars, and I take them all the time, and I feel like they're safer than an Uber. But I also wonder, kind of like with these surgical robots, what if somebody hacks into the car, speeds it up to 140, and runs me into a light pole or something? You could kill me, right? But I've also dealt with Uber drivers. One time, I was going to the airport, and we decided to take an Uber, and this 85-year-old woman shows up who can't drive very well. Before my girlfriend was all the way in the car, before the door was even closed, she took off. Then she got lost on the way to the airport, and I'm thinking, "Man, maybe that autonomous driving car is safer, and maybe the autonomous surgical robot, if it's cyber secure, is safer than a surgeon," because it's not going to be hungover, or short on sleep, or exhausted, or dealing with emotional issues. So I think there are a lot of benefits to these autonomous devices, as long as we can keep them secure. I agree, it's definitely a mixed bag. If security is addressed, there aren't any software or hardware malfunctions, and the robot knows how to perform the surgery well, the robot's probably not going to make many mistakes. The robot isn't going to have a weird twitch in the middle of an operation.
The robot is going to do it pretty precisely, but it's a pretty long journey to make sure we get to that level of security, that we address all of these concerns, and that there isn't much risk of someone hacking into the robot. Related to the robot, and likely what would end up powering an autonomous robot, is the big security concern on everyone's mind these days: artificial intelligence and all of the different AI technologies we see coming out. We already see a lot of AI software as a medical device, and I think that trend is only bound to continue; we're going to see more and more of it. But I'm interested to hear what some of the attacks against AI might be that we haven't necessarily seen in the past, some of the new threats that are coming up. AI is used by the bad guys. A lot of us think AI can be used to defend our environment, but what we have to understand is that the criminals, the bad guys, the black hats, use AI to attack our AI. So it's almost becoming this battle of AI versus AI. It's pretty interesting; when I was attending Black Hat this year, they had a booth for Microsoft Copilot for Defender, and they went through this whole incident response simulation where you have all of these encrypted logs and audit trails and attack paths, and you're using AI to explain what happened, predict which threat actor group made the attack and everything leading up to it, and then provide recommendations and automatically deploy them out into your network. I was really interested; I'd never seen AI applied in that way face to face, so it was a cool demonstration of how helpful it can be.
But I have used AI a lot for offensive security, and I have seen a lot of that application. Of course, we work as white hat hackers; we're very responsible with our security research, always maintaining strict confidentiality with clients and helping them fix their problems. But we're using the same tools and techniques as the bad guys, and part of that is using AI all the time when I'm testing a target, using AI to develop payloads and start exploiting machines. It's kind of scary to imagine that everyone has that same ability, everyone has those same tools to use AI as the bad guy. I use AI quite a bit as well, with ChatGPT and my own customizable GPTs. There's a lot you can do with it. I was using it for a while to prank call friends of mine. I created an AI that sounded exactly like a real person, so it's kind of cool. You can do a lot with it. I was testing it out to maybe use for sales calls, but I didn't think it was smart enough yet for that. Probably still a little ways out. Yeah, but there are some good ones out there. I think we've covered some of the past of medical device cybersecurity, a little of the present with the FDA's guidance, how to handle legacy devices, and what we're currently doing to improve medical device security, plus a future-looking lens for medical device cybersecurity. Anything you think we should add to our walkthrough of the history of medical device cybersecurity? No, I think that's pretty good; we covered a lot of the big points, looking at how medical device cybersecurity has come into the public eye a little more, and where we see it going in the future. Awesome. And for some reason, I was thinking about that Tesla again. I know you bought a new car recently; did you buy a Tesla or a Cybertruck, or did you buy something different? I did not.
I got a Jeep, and that Jeep runs on gasoline, and that Jeep is not going to have any problems once the winter rolls around. That's true. I didn't know that Teslas have so many problems in the cold as well as the heat. I thought it was just the heat, but you're telling me they have problems in the cold, too. Yeah, batteries in general just don't like the cold up here, I guess. That's a good point. And we also get ridiculous amounts of snow and ice, and I don't think Teslas handle that very well. Cool. Well, you're not going to see me buying a Tesla for a long time. I like Elon Musk; I just don't like Teslas. I prefer gasoline vehicles. Yep. Awesome. Well, I hope everyone who tuned in got some value out of our look at the evolution of medical device cyber threats: the past, present, and future. Hopefully you gained some insights, and we look forward to seeing you on our next episode. In the next episode, we're going to be talking about the human factor and how it contributes to a lot of the incidents and issues with medical devices. Thanks for tuning in. See you. See you next time.

    More from your hosts

    Other episodes diving into Christian and Trevor's areas of focus.
