Apparently It's Way Too Easy to Hack Someone's Pacemaker
A new report highlights troubling security weaknesses in the lifesaving devices.
While it seems like every new gadget on shelves is increasingly vulnerable to hackers, you're probably not worried about your Sonos or your Wi-Fi-connected refrigerator killing you. But what if hackers could literally stop your heart?
Cybersecurity researchers at the University of Leuven in Belgium and the University of Birmingham in England reported that they successfully hacked into ten different implantable cardioverter defibrillators using relatively inexpensive, easily obtainable electronic components and a radio antenna. The researchers are presenting their findings this week at the Annual Computer Security Applications Conference held in Los Angeles.
"The devices we examined in our study were all from one of the big medical implant companies," says lead author Bart Preneel, an electrical engineering professor at the University of Leuven. He adds that the company's market share could be somewhere around 25 percent of the devices worldwide. Between 1993 and 2009, 2.9 million Americans received a pacemaker, according to a study in the Journal of the American College of Cardiology.
Implantable cardioverter defibrillators, or ICDs, are what most people probably refer to as a pacemaker. In reality, these devices are now in their third generation and can perform a few different functions. ICDs are battery-powered, and once they're implanted in your chest they'll deliver electrical shocks to your heart if it's beating too fast or experiencing arrhythmia (a micro version of the paddles you see ER docs using on medical shows). Many ICD models also perform the traditional pacemaker role of sending wake-up shocks to your heart if it beats too slowly. They also collect data on how your heart is functioning to help your doctor diagnose your condition.
A key feature of the latest generation of ICDs—and the one that makes them vulnerable to hacking—is that doctors and other medical professionals can tweak the settings on your ICD from up to five meters away using a remote called a "device programmer." Even though these devices are not connected to the internet, the researchers found that it's possible to listen in on the radio-wave transmissions, decipher the information sent, and then reverse engineer your own remote to send malicious commands.
"The most challenging aspect of this reverse engineering process is to develop the lab setup; this can be quite time-consuming, depending on how much experience one has," says Preneel. "But once the setup is finished and up and running, hacking a single ICD can be done in a few days."
One simple but potentially deadly option at that point is to force the ICD to stay in "standby" mode instead of the battery-saving "sleep" mode. The researchers found that this single step could rapidly drain the ICD's charge, which is supposed to last up to seven years. They were also able to pull a patient's personal and private medical information from the device, and hypothesized that it would be relatively easy to track that person's location by placing small beacons in the hospital, public transportation hubs, and near their home.
Security experts have been sounding the alarm for some time that electronic medical devices are vulnerable to tampering. Earlier this year, Johnson & Johnson had to warn patients that their Animas OneTouch Ping insulin pump, which is worn under clothing but controlled with a hand remote, could be hacked remotely and overdose diabetics with enough insulin to cause life-threateningly low blood sugar levels. And former vice president Dick Cheney told 60 Minutes in 2013 that he had the wireless capabilities of his pacemaker disabled as far back as 2007, out of concern that he could be assassinated via remote hack. If that sounds familiar, it's because the exact scenario was used as a plot point in season two of the Showtime series Homeland.
Still, medical device manufacturers have been slow to build in the necessary security features. For one thing, Preneel explains, the manufacturers try to maximize battery life to avoid the potential surgery necessary to replace it, and worry that additional features could siphon away precious joules. (Although Preneel notes that the security measures his team suggests are at an "acceptable level" of energy consumption.) "A second issue is lack of knowledge in the industry," Preneel says. "Even if they are aware of the problem, they may not have the knowledge on how to secure their devices."
In the paper, Preneel's team suggests a combination of solutions to keep ICDs safe. The first and easiest step would be for the external controllers to essentially jam the wireless signal while the ICD is in standby mode to prevent anyone from listening in. Next, manufacturers would need to update the devices so they don't go into standby mode at all, and instead rely on a command to put the devices directly into their lowest power mode after they're done communicating. Finally, the devices would need to implement secure key authentication and encryption, with master keys updated every three months to prevent lost or stolen device programmers from being used to access the implants.
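To give a sense of what "secure key authentication" means in practice, here is a minimal, generic challenge-response sketch of the idea in Python. This is an illustration only: the function names, message formats, and key sizes are hypothetical and not taken from the paper or from any real ICD protocol. The implant challenges the programmer with a fresh random nonce, and commands are accepted only if they carry a valid authentication tag derived from a shared master key, so a recorded exchange cannot simply be replayed.

```python
import hmac
import hashlib
import secrets

def derive_session_key(master_key: bytes, nonce: bytes) -> bytes:
    """Derive a per-session key from the shared master key and a fresh nonce.

    Because the nonce changes every session, an eavesdropper who recorded
    an old exchange cannot reuse its tags later (replay protection).
    """
    return hmac.new(master_key, b"session" + nonce, hashlib.sha256).digest()

def tag_command(session_key: bytes, command: bytes) -> bytes:
    """Programmer side: tag a command so the implant can verify its origin."""
    return hmac.new(session_key, command, hashlib.sha256).digest()

def verify_command(session_key: bytes, command: bytes, tag: bytes) -> bool:
    """Implant side: constant-time check that rejects forged or altered commands."""
    expected = hmac.new(session_key, command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# Hypothetical setup: the master key is provisioned into both devices and,
# per the paper's suggestion, would be rotated (e.g., every three months).
master_key = secrets.token_bytes(32)
nonce = secrets.token_bytes(16)  # the implant's fresh per-session challenge
session_key = derive_session_key(master_key, nonce)

tag = tag_command(session_key, b"SET_MODE standby")
print(verify_command(session_key, b"SET_MODE standby", tag))  # True
print(verify_command(session_key, b"SET_MODE shock", tag))    # False: tag doesn't match
```

A real deployment would also need encryption of the payload and careful key management, but even this bare HMAC scheme shows why an attacker with only a radio and recorded traffic could no longer inject valid commands.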
"We have been and are still in contact with the manufacturer and we are confident that they take these problems seriously and intend to improve security," Preneel says. At the same time, he cautions that it's ultimately up to regulatory bodies to require more rigorous measures.