
AI in Healthcare Presents Need for Security, Privacy Standards

Duke, Mayo Clinic, and DLA Piper are teaming up to ensure that security, privacy, and safety are top-of-mind when implementing AI in healthcare.


Responsible implementation of artificial intelligence (AI) in healthcare requires a focus on security and privacy. AI's capabilities in clinical and research settings are continually expanding, but any new technology brings a host of security and privacy concerns.

As AI technology and the regulations meant to keep it in check continue to evolve, the need for multidisciplinary industry standards is growing. To address these concerns, law firm DLA Piper, the Duke Institute for Health Innovation, and Mayo Clinic, among others, launched the Health AI Partnership in late December 2021.

The organizations formed the partnership to establish guidelines and industry standards to promote responsible AI implementation in healthcare.

"We knew that there needed to be some central, not-for-profit approach to unite all these different stakeholders and bring some consensus to the system," Danny Tobey, partner at DLA Piper, explained in an interview with HealthITSecurity.

"My experience in this field is that everybody wants to do the right thing. They're just looking for a little bit of guidance about what that right thing means."

A Brief Overview of AI's Role in Healthcare

Researchers and clinicians have applied AI and machine learning (ML) algorithms to everything from chronic disease management to mental healthcare to medical imaging. The benefits of AI have repeatedly shone through.

Mark Sendak, a clinical data scientist at the Duke Institute for Health Innovation who plays an active role in the Health AI Partnership, spoke to the countless use cases for AI in healthcare that he has observed in his work. From kidney disease to community-based palliative care to heart disease, AI algorithms can be applied to a multitude of care and research settings.

Sendak also noted AI's usefulness in improving chronic disease prevention, monitoring inpatient deterioration, and weaving elements of specialty care into primary care settings.

Despite its benefits, AI technology still lacks reliable standards and processes.

"There's so much that clinicians are trying to do with their care and the technology. Streamlining the workflow here or creating this efficiency there can be really impactful for the clinician's ability to care for patients," David Vidal, a vice chair at Mayo Clinic's Center for Digital Health, who oversees the center's AI quality and regulation operations, explained in an interview.

"With that, the AI field has been growing significantly. The benefit to patient care is a good consequence. I think the drawback is the lack of process around the build and application or deployment of the AI."

In addition to a lack of structured industry standards, researchers have also noted instances of bias, often resulting from a lack of representative data. Inequities in data collection can easily lead to skewed outcomes.
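To make the data-representativeness problem concrete, here is a minimal sketch of the kind of subgroup audit that can surface it. The example is illustrative only: the data, group labels, and function name are hypothetical, not drawn from any of the organizations mentioned in this article.

```python
from collections import defaultdict

def subgroup_error_rates(records):
    """Compute per-subgroup error rates for a set of model predictions.

    Each record is (subgroup_label, predicted_label, true_label).
    A large gap between subgroups is one signal that the training
    data may not represent every population equally well.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

# Hypothetical predictions from a clinical risk model.
predictions = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 1),
    ("group_b", 0, 1), ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1),
]

for group, rate in subgroup_error_rates(predictions).items():
    print(f"{group}: error rate {rate:.0%}")  # group_a: 25%, group_b: 75%
```

A gap like the one above does not prove bias on its own, but it is exactly the kind of skewed outcome that unrepresentative data collection can produce.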

Against this backdrop, healthcare industry stakeholders must also consider cybersecurity and privacy concerns when making procurement decisions, just as they would with any other technology.

AI Cybersecurity, Privacy Concerns

"AI is a prime target for bad actors because people are used to relying on AI without understanding how it gets to its answers, which makes it easier for people with bad intentions to fly below the radar," Vidal suggested.

"So, how do you secure these systems, especially with the need for interoperable and transportable patient data? We need to let the good guys in and keep the bad guys out."

The current cyber threat landscape consists of many threat actors equipped with sophisticated tactics. But healthcare organizations are increasingly implementing security controls to protect against traditional network intrusions.

As a result, threat actors are pivoting to new attack vectors, such as AI technology and legacy medical devices.

Part of the Health AI Partnership's work will revolve around assessing and creating best practices for AI cybersecurity, including establishing standards around penetration testing.

The Cloud Security Alliance (CSA) validated AI privacy and security concerns in a report detailing the many challenges that come along with AI in healthcare.

"Security presents AI with a new set of challenges, compounded by the fact that most algorithms require access to massive datasets," the report noted.

"Moving large amounts of data between systems is new to most [healthcare organizations], which are becoming ever more sensitive to the possibility of data breaches."

CSA suggested that organizations combat AI data security concerns by ensuring solid access controls and multi-factor authentication, as well as implementing endpoint security and anomaly detection technologies.
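As a rough illustration of what anomaly detection can look like in this context, the sketch below flags accounts whose patient-record access volume deviates sharply from the norm, using a simple median-based outlier test. It is a toy example over assumed audit-log data; real deployments use far richer features and dedicated detection tooling.

```python
import statistics

def flag_anomalous_access(access_counts, threshold=3.5):
    """Flag users whose record-access volume is a statistical outlier.

    Uses the median absolute deviation (MAD), which stays robust even
    when the outlier itself inflates the spread. access_counts maps a
    user ID to the number of patient records accessed today.
    """
    counts = list(access_counts.values())
    median = statistics.median(counts)
    mad = statistics.median(abs(c - median) for c in counts)
    if mad == 0:
        return []
    return [
        user for user, count in access_counts.items()
        if 0.6745 * abs(count - median) / mad > threshold
    ]

# Hypothetical audit-log tallies: one account is pulling far more records.
todays_counts = {"clinician_01": 24, "clinician_02": 31, "clinician_03": 28,
                 "clinician_04": 26, "service_acct_9": 480}
print(flag_anomalous_access(todays_counts))  # ['service_acct_9']
```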

"Ensuring privacy for all data will require data privacy laws and regulation be updated to include data used in AI and ML systems," CSA maintained.

"Privacy laws need to be consistent and flexible to account for innovations in AI and ML. Current regulations have not kept up with changes in technology. HIPAA calls for deidentification of data; however, technology today can link deidentified data resulting in identification."

On top of securing AI technology itself, the far-reaching capabilities of AI and ML raise privacy and security concerns of their own. For example, researchers from UC Berkeley successfully used a machine learning algorithm to identify individuals despite having access only to a HIPAA-exempt deidentified dataset.

The study, published in JAMA Network Open, pointed to the dangers of unregulated AI. When HIPAA went into effect over 25 years ago, AI was not even prominent in healthcare, let alone the source of privacy concerns.
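The JAMA study's approach was more sophisticated, but the underlying risk is easy to illustrate with a classic linkage attack: joining a "deidentified" dataset to a public one on shared quasi-identifiers. The sketch below uses entirely hypothetical data and is not the researchers' method.

```python
# Toy linkage attack: re-identify "deidentified" records by joining on
# quasi-identifiers (ZIP code, birth year, sex) shared with a public list.
deidentified_records = [
    {"zip": "27701", "birth_year": 1984, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "27514", "birth_year": 1990, "sex": "M", "diagnosis": "asthma"},
]

public_records = [
    {"name": "Jane Doe", "zip": "27701", "birth_year": 1984, "sex": "F"},
    {"name": "John Roe", "zip": "27514", "birth_year": 1990, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def link(deidentified, public):
    """Match records whose quasi-identifiers agree exactly.

    If a combination of quasi-identifiers is unique in both datasets,
    the 'anonymous' record is re-identified.
    """
    index = {tuple(p[k] for k in QUASI_IDENTIFIERS): p["name"] for p in public}
    return [
        (index[key], record["diagnosis"])
        for record in deidentified
        if (key := tuple(record[k] for k in QUASI_IDENTIFIERS)) in index
    ]

print(link(deidentified_records, public_records))
# [('Jane Doe', 'diabetes'), ('John Roe', 'asthma')]
```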

HIPAA's limitations, daily cyberattacks on the healthcare sector, and the increasing popularity of AI all point to a need for industry standards and regulations, a gap the Health AI Partnership aims to fill.

Forming an Industry Consensus on Safe, Secure AI Deployment

Regulating AI is a team effort across multiple disciplines, Tobey, Sendak, and Vidal agreed.

"We do expect that there's going to be a part of responsible AI adoption that is going to fall under the purview of the health systems that are signing procurement contracts, but there's also going to be other responsibilities that are going to have to be attributed to the appropriate actors," Sendak predicted.

Device manufacturers, regulatory bodies, industry groups, and decision-makers within health systems must come together to form a consensus surrounding proper AI deployment.

"You see dozens of proclamations coming out these days about what good AI looks like, and it's all very high level. It needs to be trustworthy. It needs to be transparent. It needs to be secure," Tobey emphasized.

"But what do those things mean when you're actually running a piece of software and implementing it in a complex healthcare system?"

The security and privacy challenges with AI point to a larger industry trend: as regulations and standards fall out of date, healthcare decision-makers increasingly have to revisit and revamp industry best practices.

"I applaud the FDA for being agile and ahead of the curve and in a lot of ways in anticipating this wave that's coming. It was just a couple of years ago that they came out with guidance for security for software devices," Vidal noted.

"But the implementation of those standards is something that these healthcare organizations need to figure out together."