    Episode 053 · January 21, 2025 · 24m listen

    The Human Factor: Why Cybersecurity Awareness is Key in Medical Device Manufacturing | Ep. 8

    Episode Summary

    In this episode of "The Med Device Cyber Podcast," the hosts delve into the critical role of the "human factor" in medical device cybersecurity. They explore how human vulnerabilities, from weak passwords to configuration oversights, often present easier and more impactful attack vectors than direct system exploits. The discussion highlights the limitations of traditional cybersecurity awareness training, drawing parallels to necessary evils like dental visits or car maintenance, which people often approach with reluctance. The episode emphasizes the need for a paradigm shift, advocating for security to be integrated early in the product development lifecycle rather than being a costly afterthought. Key topics include the pervasive challenges of network segmentation, the dangers of default credentials, and the importance of multidisciplinary collaboration among product security teams, engineers, and IT staff. The hosts also touch upon the evolving landscape of FDA guidance and its impact on driving increased awareness and forcing better security practices in the medical device industry, ultimately aiming to mitigate risks like patient harm from compromised devices.

    Key Takeaways

    1. The human element is often the weakest link in cybersecurity, with social engineering attacks frequently more successful and impactful than technical exploits.
    2. Traditional cybersecurity awareness training often falls short because people view security as an inconvenience rather than a priority.
    3. Effective medical device cybersecurity requires secure system design, assuming breaches, and implementing controls like proper access gating and network segmentation.
    4. A lack of awareness and budget constraints often lead to overlooked security practices, which become exponentially more expensive to fix after a breach or late in the development cycle.
    5. The FDA guidance is increasingly compelling medical device manufacturers to integrate security throughout the product lifecycle, fostering greater collaboration and a shift in culture.
    6. Overcoming cybersecurity challenges necessitates better integration and collaboration across development, IT, and security teams, as well as a top-down organizational commitment to security.
    7. A shift in culture to integrate security professionals' insights into user experience considerations is crucial to finding effective security solutions.
    8. The financial and reputational costs of neglecting cybersecurity upfront can be immense, potentially leading to product abandonment or regulatory setbacks.
    9. Medical device manufacturers must prioritize security from the very beginning of the design process, making it an inherent requirement rather than an afterthought.
    10. Network segmentation and robust asset management are crucial in preventing widespread compromise within hospital networks, which are often considered hostile environments for medical devices.
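The segmentation and access-gating points above come down to default-deny zoning: traffic between network zones is blocked unless the flow is explicitly allowed. A minimal Python sketch of the idea (the zone ranges and allowlist are invented for illustration, not taken from the episode):

```python
import ipaddress

# Invented example zones; real deployments define these in firewall/VLAN
# configuration, not application code.
ZONES = {
    "10.1.0.0/16": "corporate",    # HR, email, general laptops
    "10.2.0.0/16": "clinical",     # medical devices
    "10.3.0.0/16": "biomed",       # device maintenance staff
}

# Explicit allowlist of (source zone, destination zone) flows.
ALLOWED_FLOWS = {
    ("corporate", "corporate"),
    ("clinical", "clinical"),
    ("biomed", "clinical"),        # maintenance may reach devices
}

def zone_of(ip: str) -> str:
    addr = ipaddress.ip_address(ip)
    for net, name in ZONES.items():
        if addr in ipaddress.ip_network(net):
            return name
    return "unknown"

def permitted(src_ip: str, dst_ip: str) -> bool:
    # Default-deny: anything not explicitly allowed is blocked.
    return (zone_of(src_ip), zone_of(dst_ip)) in ALLOWED_FLOWS

# A compromised HR workstation cannot reach a medical device:
print(permitted("10.1.4.20", "10.2.0.5"))   # False
# Biomed maintenance staff can:
print(permitted("10.3.0.7", "10.2.0.5"))    # True
```

On a flat network, every zone pair is effectively allowed, which is exactly the "printer to X-ray machine" pivot the hosts describe later in the episode.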

    Frequently Asked Questions

    Quick answers drawn from this episode.

    • This episode covers Penetration Testing. It's part of The Med Device Cyber Podcast, hosted by Blue Goat Cyber, focused on practical medical device cybersecurity guidance for MedTech teams.


    Listeners also asked

    Quick answers pulled from related episodes.

    • What does Episode 56 cover about "What the FDA Wants in Security Architecture Views for Devices"?

      In this episode of The Med Device Cyber Podcast, the hosts delve into the intricacies of the four security architecture views mandated by the FDA for medical devices. They meticulously break down each view: the Global System View, Updatability and Patchability View,...

      From Episode 056 · What the FDA Wants in Security Architecture Views for Devices | Ep. 29
    • What does Episode 3 cover about "Advanced Threat Modeling in Medical Devices"?

      In this episode of The Med Device Cyber Podcast, hosts Christian Espinosa and Trevor discuss the critical practice of threat modeling for medical devices. They emphasize the importance of adopting an attacker's mindset to identify potential entry points and vulnerabilities...

      From Episode 003 · Advanced Threat Modeling in Medical Devices | Ep. 11
    • What does Episode 64 cover about "Why Cybersecurity and Quality Are One and the Same"?

      This episode of The Med Device Cyber Podcast features Ash Garuli, principal and founder of Ingenious Solutions, discussing the critical intersection of cybersecurity and quality management in medical device development. Together with host Trevor Slatterie, Ash tackles common...

      From Episode 064 · Why Cybersecurity and Quality Are One and the Same | Ep. 26


    Transcript

    Welcome back to another podcast. Trevor, how are you doing today? I'm doing pretty well. How are you doing today, Christian? Doing good, doing good. Didn't sleep too well last night. I had this weird dream that I was an accountant. I don't even know where that came from, and it was kind of like a nightmare. I hate, for some reason, bookkeeping, and it probably came from the fact that I'm doing it for a company now and it causes me a lot of anxiety, all the bookkeeping and accounting. But it's a weird dream. Yeah, that would be a pretty scary dream. I've never had much of an affinity for that. I majored in Engineering in college and that was already way too much math for me, so I don't want any more. Yeah, I used to sleep with a recorder next to my bed so I could wake up and record my dreams. And I stopped doing that for some reason, because one of my favorite bands, Nightwish, the guy that writes all the songs, Tuomas, he has a recorder, and he wakes up fresh from a dream and just talks into the recorder, and that's what becomes the songs. So it's kind of interesting. So we have a lot of wisdom come to us during our dreams, I believe. I probably only remember like one or two dreams a month. I just go to sleep, I'm out, there is not a thing happening until morning. Well, that could be good as well, probably get better sleep that way. Yeah, in this podcast, you know, we're talking about the human factor and how it matters with medical device cybersecurity. Before we dive in here, can you explain what we mean by the human factor? So in cybersecurity, it's very often said that the human is the weakest link. Of course, computers are vulnerable to attack, they can be exploited with malware, all sorts of different hacks, but it's pretty easy to trick a person into giving up their password. It's often a lot easier to trick a person into giving up their password than a computer.
Some of the most success that we've had on penetration tests is through social engineering campaigns. We set up a fake login panel, send out a bunch of emails, and then boom, all of a sudden we have 90 sets of passwords to use instead of trying to hack into the system ourselves. So, not only is it often easier and more successful, but the impact is often far more severe. If someone's giving up their password for a VPN portal, you're able to get into their internal network, you're able to see a lot of pretty dangerous stuff. So the big concern, the thing we always need to be thinking about, is how can we fix this problem? You're never going to be able to change human behavior, you're never going to be able to change human tendencies. So it's a matter of trying to teach awareness and trying to implement controls that are going to reduce the impact of a successful exploit or reduce the likelihood that it will be a successful exploit. We've been talking about cybersecurity awareness training for a long time. I don't feel it's making a difference, because every company that we've worked with, they do phishing training over and over and over. They do this one-hour cybersecurity training annually, yet we come in there and are still able to get through with a phishing attack, or still able to get through with some sort of social engineering attack, like a thumb drive attack. We just drop thumb drives around, they pick them up, put them in a computer. So do you feel like we're actually improving with this awareness? Yes and no. So one thing, and I know we've kind of talked about this before, but oftentimes cybersecurity is viewed as a necessary evil. It's not something that people want to do, it's not something that people want to be aware of. It's usually an inconvenience for what is a necessary evil. What's another example of a necessary evil? I know before I've used the example of, you know, like going to the doctor, going to the dentist.
It's something that nobody wants to go do, nobody has a good time when they go to the dentist, but you have to do it, you have to stay on top of these things. I guess that's true, you don't have a good time in the dental chair. Yeah, or like, you know, like regular maintenance of your car. I know a lot of people like cars, I'm not really a car guy, and so anytime I have to fix something on my car, it's annoying, it's something that I have to do but I really don't want to do it. And cybersecurity is sort of the same thing. It usually only costs money, it is seen as a preventative measure but it's expensive. You have to bring in a lot of new people, you have to bring in a lot of training, you have to teach people to do things that they don't want to do, you have to require, you know, complicated processes. And anytime you're logging into your email, you get a text on your phone, it adds this layer of complexity, so people don't like cybersecurity. Yeah, it's frustrating, it's frustrating for me. I've got so many freaking passwords and so many different text messages, emails and authenticator apps just to log on to things, and I forget half of them half the time. Yeah, and it's hard to keep track of all that stuff. And so people don't want to, they want to have their password be the same as their username, they don't want to get a text on their phone, they want it to be simple and easy, but if it's simple and easy then it's simple and easy for the user and it's simple and easy for the hacker too. So that's what people need to be aware of. Now with it sort of being the necessary evil, a lot of people go through that, you know, annual phishing training or whatever, they get the email and they just go, oh great, that's what I have to do on a Friday now. And they sit through this hour-long presentation and their eyes glaze over after four minutes and then they walk away without taking any information from it.
So what is kind of an important thing to think about is people are never going to be as invested as they probably should be. So what can we do about it to make sure that these problems aren't going to have as great of an impact? Yeah, I think we have to design systems more securely and assume people are going to make those mistakes. Definitely. So part of that, I like the use of assumption there. A lot of testing is done under an assumed breach scenario. You assume that the device has already been hacked, you assume that someone from HR has already been hacked, you have an insider threat in the network, there's already a problem. And using that assumption works as a good starting point to build out security controls. We see a lot of times, you know, proper controls around a medical device can be, okay, well, let's say someone is able to, you know, compromise the user of the device. Well, as long as that user's access is properly gated, they aren't able to see anything that they shouldn't see, they can't move to a different device from that device, anything of that nature, the impact is going to get lessened significantly. Same thing in a healthcare environment, like a hospital network. Network segmentation is one of our big recommendations. If someone's able to compromise someone from HR and they can't move into the engineering department from there, that's massively going to limit what they're able to do. Yeah, I know healthcare providers, hospitals in particular, we consider them hostile environments from the perspective of a medical device going into that environment. And I've done a lot of penetration tests, as my team has, on hospitals. We've always gotten in, every single hospital. So if these medical devices are on the same network as the hospital, which they typically are, they're going to be compromised too. I remember a hospital in Louisiana we did a test with, and they had these kiosk computers.
So if you're visiting the hospital and you're, you know, waiting on your loved one to have surgery or something, you could go to this kiosk computer. We simply went into the kiosk computer, put this thing called a LAN turtle on the network cable between the network cable and the computer. And then we were able to remotely access that network. And on that network were medical devices and everything else. And what we found is some of those medical devices had default credentials on them. And most devices have some sort of web interface. So we were able to, you know, get into devices that were pretty critical, like devices that were in an operating room. And we were afraid to get into them just because we didn't know if somebody had an active surgery, for instance. But that just shows you how important awareness is, number one, for the staff to do some network segregation. Because without that, you know, the risks are pretty high. Like in that scenario, it was pretty ridiculous what we were able to do, and I don't think a lot of people think about that at hospitals. But I remember doing some research, and there's an average of 14 devices per hospital bed. And of those 14 devices, most of them have wireless connectivity, most of them are connected to something, and if they're not segmented, in a scenario like I just went through, where I go to a kiosk computer and effectively hack into it, I can go into any patient room and connect to those medical devices. I can think of a similar situation where, same story, we're dealing with a hospital network and they had HR, payroll, accounting, all on the same network as the medical devices. And we were able to hack into a printer in payroll, used for accounting purposes, and go from that printer into X-ray machines, into life support machines and lock them out, disable them, change any functionality, any configurations that we wanted.
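The default-credential problem just described is often found by simply comparing the logins a device actually uses against published vendor defaults. A hypothetical sketch of that sweep (device names, credentials, and the defaults list are all invented):

```python
# Published vendor-default credentials (invented examples).
VENDOR_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "1234"),
}

# Credentials observed on devices, e.g. recovered from configuration
# backups during an authorized assessment. All values are hypothetical.
observed = {
    "or-monitor-3": ("admin", "admin"),
    "infusion-pump-7": ("svc_pump", "rS8#kq0Lw2"),
    "xray-console-1": ("root", "1234"),
}

# Flag every device still running a factory login.
still_default = sorted(
    name for name, cred in observed.items() if cred in VENDOR_DEFAULTS
)
print(still_default)   # ['or-monitor-3', 'xray-console-1']
```

An attacker on the same flat network does exactly this check against each device's web interface, which is why segmentation and credential rotation both matter.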
So it's pretty scary to think about what all the problems were there. And it was all just from human error. People were not configuring these devices correctly, people were not setting up their networks correctly. Everything was on the same network, everything used default passwords, there was no multi-factor authentication for anything across the entire network. So if we were able to get one set of passwords, we could just put it anywhere. So a lot of this isn't necessarily, there isn't an easy hack for the device in a traditional sense where you're sending an exploit to a machine and using that exploit to gain access. You're just using human error. You're taking someone's password that they haven't changed since 2016 and putting it across the network to see what works. And sure enough, more often than not, it's going to work. So we're talking about human error, or I guess lack of awareness, on a couple different levels. It sounds like we're talking about it from the user themselves, they may have a weak password. We're also talking about it from the IT staff, like the IT staff doesn't enable MFA or multi-factor authentication and they don't do that network segmentation as we talked about. They don't, you know, do patch management or any asset management. And I'm a big proponent of know thyself, like number one in cybersecurity. If you don't know your own environment, how are you going to know if something malicious is on your environment? And I've never been anywhere, with the exception of one place, which is a really small environment, where they knew everything on their environment, where they had an accurate network diagram. And that's a pretty scary scenario. And on the opposite side, I've known pen testing companies, and ones where I've done pen tests too, where we had a more accurate network diagram after we mapped out the target's environment than they had. And imagine if we were a cybercriminal, it's a pretty scary thing.
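The "know thyself" point, a tester ending up with a better network map than the owner, comes down to reconciling the documented inventory against what is actually on the wire. A toy sketch with invented addresses and asset names:

```python
# Documented asset inventory (hypothetical): address -> asset name.
documented = {
    "10.2.0.5": "infusion-pump-01",
    "10.2.0.6": "patient-monitor-02",
    "10.2.0.9": "xray-workstation",
}

# Hosts actually observed on the segment, e.g. via passive monitoring.
observed = {"10.2.0.5", "10.2.0.6", "10.2.0.9", "10.2.0.44", "10.2.0.45"}

# Observed but undocumented: candidate rogue or forgotten devices.
# Documented but unseen: the inventory is stale.
unknown = sorted(observed - documented.keys())
stale = sorted(documented.keys() - observed)

print("Undocumented hosts:", unknown)     # ['10.2.0.44', '10.2.0.45']
print("Stale inventory entries:", stale)  # []
```

If the "unknown" list is ever non-empty, the defenders cannot answer the question posed in the episode: how would you notice something malicious on a network you have never fully mapped?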
So the awareness, I think, needs to go on, you know, for the users, we often like to focus on the users. But it also needs to apply to the IT staff and the people setting the network up. So what's a good solution to make sure that this IT staff and the network administrators are properly taking care of their devices, taking care of the network? We know some individual examples, like mandatory MFA, having some network segmentation in place, but why isn't this done commonly in practice? Well, I think you said before, cybersecurity is a necessary evil and people don't want to pay for cybersecurity. There's often this lack of budget for cybersecurity. And then what results is there's a data breach, and then all of a sudden there's an unlimited budget for cybersecurity. So it's ironic that the companies that can afford cybersecurity have typically had a breach and they understand the importance. A lot of organizations think, well, that could never happen to us. I think you've encountered this before, and so have I, where we've done a penetration test and the software developer, you know, from an awareness perspective, is like, there's no way you could have got to that software, it's secure. I've had experiences like that, where we were arguing with them and we had to prove that we were able to get into it. Have you had any experiences like that? Yeah, I've definitely had some confrontations with engineers before. There's certainly a level of, you know, people don't always like cybersecurity, they don't want it. And then we are attacking their product and we're saying this product is not working the way it should, this product can be picked apart by a bad guy, and people don't want to hear that. They don't want to hear that something they've been working really hard on isn't, you know, up to par and there are problems with it. So I can think of a recent example on an AI software as a medical device where I was able to forge my own authentication.
They had a custom authentication process that wasn't cryptographically secure. So I could essentially break the encryption on their encrypted tokens, fill in my own data, re-encrypt it, and then pass it back into the application. And that would let me log in as a user and then change that role to admin and pass it in as an admin, there we go. And when I presented this to the client and I'm saying this is what I found, this is the problem, they go, that's impossible, this can't be done. I showed them, I said, well, I'll walk through it right now. I turned on a screen recording, I showed them step by step the whole process, and they still don't believe it. They go, no, this can't be done, we did not design the product in this way. And at that point it's kind of, well, I don't know what to tell you then, because I showed you that it can be done, and it can be done. But, you know, it is difficult for a lot of people to hear that, it kind of feels like someone's attacking their system and hearing all these problems with it. It is difficult, and I think we need a shift in the culture where software developers have more training on secure coding and IT staff, who are used to building systems that are functional, have more training on cybersecurity, or we integrate the teams a lot better. I know we were talking earlier and you mentioned in college you majored in cybersecurity and you had to take a secure software development course or secure coding course, right? But then you had some friends that were developers that didn't have to take the same course you had to take, it wasn't required for them, and that's the irony of this situation, isn't it? Yeah, it was interesting. They were software engineering majors and they didn't take, you know, the class was like secure development practices or something like that, it was application security, and they didn't have to take that class and I did.
And I was thinking this is really weird, I have no intention to be a software developer. And of course it was trying to go over the secure software development life cycle process and trying to learn about proper security as part of integration, more so than the coding itself. But it seemed weird to me that that would not be something developers are concerned about. And I think, like you said, it's partially a cultural problem. Tech culture is just a little bit more segmented than it should be. There should be a little bit more of an overlap. Developers should be more concerned about security. IT staff should be more concerned about security. Security professionals often need to be more concerned about user experience. I know, and this is a problem that I've been guilty of at times as well, where I have a recommendation, this is what you need to do for security, and someone says, well, that's going to be a really big deal, that is going to degrade the performance of the product, that's going to degrade the user experience. We need to figure something else out. And it forces me to sort of step back and think from the lens of a product owner or product designer instead of just the security professional saying this is for maximum security, saying what can be done to preserve security and preserve the user experience. So there should be a little bit more of a blending of all of these different roles. I agree, and with medical device companies, at least the ones we work with, there seems to be a little bit of friction there between the cybersecurity team and the engineers or the software developers, like you mentioned. Do you feel like, you know, we talk about the awareness, and we talked about a secure software development life cycle, there's a push towards DevSecOps, you know, development, security and operations, they all work together.
Do you feel like we're making improvements in this area, or do you feel like maybe the FDA guidance is helping kind of force some improvement in the culture, from a cybersecurity perspective, with med tech companies? I think it's a little bit of a mixed bag honestly, seeing what the FDA guidance is doing as far as the culture. On one hand, to meet compliance and to meet regulatory requirements, you have to have this level of cooperation now. You have to address security and multiple teams have to be brought in. And I've seen a lot of engineers now in the past year, especially since the updated guidance, that are thinking about security for the first time and they're saying, well, what happens if this data gets breached? Well, if this functionality gets compromised, that's bad, this is what can happen from there. So, on one hand, it is forcing a lot more awareness, forcing a lot of consideration that we haven't seen before. Now, on the other hand, it's forcing people to do things they don't want to do. They have to meet these requirements and so they have to consider it, they have to start doing all of these different processes and steps. And I can't count how many times I've heard someone say exactly that: well, we haven't done this before, we didn't used to do this. But why don't they want to do it if the result could be a patient dying? That's perplexing to me. I'm not going to fix something, I'm okay with a patient dying? Yeah, and that's exactly why the guidance is in place, because it's not acceptable to say, well, I don't want to do this. But compared to before, it's a lot more work, it's a lot more effort. It requires oftentimes a complete overhaul in design. And that's just human nature, you're saying, we're inherently lazy. If we don't want to do something, then we don't do it, like, I don't want to go to the gym to lose weight, I just want to take a pill every day or something.
And the same human nature applies to this scenario with medical devices and the personnel developing the software, is what you're saying. And another part of it is the impact of these changes. I know in the past we've talked about how security should be addressed in the beginning of the development life cycle, it shouldn't be tacked on at the end. And it's oftentimes not possible. You can't tack something on at the end, you have to go back, make significant changes, and it can cost a ridiculous amount of money to completely redesign a product from the ground up. And so when people are confronted with this reality, they get upset, they realize it's going to be expensive, they're going to experience setbacks, they may have made commitments if they're VC funded, they say, hey, we're going to launch on this day, and then all of a sudden they have to redesign their product. It causes a lot of problems. So, you know, that's a good point about the financial aspect of it. We talk about how people just don't want to do it, but there's also budgetary constraints. We had a company that wanted to work with us, maybe four months ago, that totally forgot about cybersecurity. They had a limited budget, they had a grant or, I think, VC funding for this project. So they're about to launch it, the FDA kicked it all back and said, you've got to do all the cybersecurity stuff. They talked to us, you know, we have a fee involved, and they'd need this whole team to redo everything. And they decided to abandon that product because the cost to redo everything and to have us evaluate their system was too much, they couldn't get any more funding. So they just abandoned the project and the whole product, they never launched it. That probably happens more than, you know, we're aware of, because cybersecurity is costly, especially at the very end.
Yeah, I can't even imagine how many times that's probably happened, but it's why there needs to be more awareness. People need to understand this problem from the beginning instead of just trying to slap it on at the end and say, well, hopefully this is going to fix the problems that we've had from the very beginning. But unfortunately, it just doesn't usually work like that. And so manufacturers and developers should be more aware of security from the very beginning. They should integrate it into the design process and integrate it into the requirements of the system, instead of having it just be this afterthought. Yeah, and I think this culture, we talked about it in users and engineers, I think it needs to start at the top of an organization as well. I used to work in an organization where I was a director of research and development, and we created this product, it was like a cyber attack simulator. And we had this timeline, we said it's going to be done in September, for instance, and that included all the cybersecurity testing of the product. Our CEO one day said we've sold it, and this is like in April. He said, we sold a bunch of them and we need to get them out the door by the end of the month. I'm like, we haven't finished all of our testing. He's like, I don't care, box it up, stop the testing and send it out there. I'm like, it's full of bugs and a lot of those are security bugs. He's like, I don't care, we need the revenue, we've already sold it. So I bet that problem was pretty prevalent with medical devices before this guidance, I imagine. Yeah, definitely. And that's part of why we're seeing a lot of the new guidance requiring testing on previously released devices. Since we've seen all these legacy devices with God knows how many security vulnerabilities, now they require testing, they require annual penetration testing to make sure that these are getting fixed and addressed for already fielded systems.
Yeah, so the guidance is forcing some awareness and forcing some teams to work together. There might be some friction there, but I think overall it is improving the cybersecurity for medical devices. Yep, I definitely agree. Well, thanks for tuning in to this episode, where we talked about the human factor and some of the awareness challenges we have with cybersecurity, and how, you know, engineers and developers and the cybersecurity team and leadership and regulatory bodies all have to work together and have an increase in awareness about cybersecurity, and some of the dire ramifications in the case of medical devices if we don't actually include cybersecurity from the inception of the device. I hope you gathered some insights from this episode, and thanks for tuning in, and I hope to see you on the next one.
