Cybersecurity News

CSA Guidance Addresses Security, Privacy Risks of AI in Healthcare

Although experts forecast a promising future for AI in healthcare, security and privacy risks must be considered alongside benefits.


By Jill McKeon

The Cloud Security Alliance (CSA) released guidance outlining the benefits and risks of AI in healthcare, highlighting the need to address the security and privacy risks that accompany AI-driven technologies.

Artificial intelligence in healthcare can improve cancer detection, process large amounts of data, and enhance care coordination. However, significant regulatory gaps remain when it comes to the privacy and security of AI.

In addition to security and privacy risks, healthcare organizations must consider legal and ethical challenges, along with AI bias.

CSA cited a study from the University of California, Berkeley arguing that advances in AI have rendered HIPAA obsolete because it does not cover tech companies unless they are business associates of covered entities. In effect, tech companies and genetic testing companies may hold equally sensitive health data without regulatory standards to keep them in check.

“In 2017, 23andMe received regulatory approval to analyze their customers’ genetic information for risk of ten diseases, including celiac, late-onset Alzheimer’s, and Parkinson’s,” the guide noted.

“One of the uses of this data could be insurance companies who might use this predictive genetic testing to bias selection processes and charge higher premiums. The issue here is how we ensure we use the data in a way that alleviates the privacy concerns.”

AI also requires massive amounts of data in order to draw conclusions and produce new insights. As a result, these databases could become targets for cyberattacks. Current regulations do not necessarily guarantee that this data will be adequately protected from cyber risks.

“AI poses unique challenges and risks with respect to privacy breaches and cybersecurity threats, which have an obvious negative impact on patients and [healthcare organizations]. Ensuring privacy for all data will require data privacy laws and regulation be updated to include data used in AI and ML systems,” CSA noted.

“Privacy laws need to be consistent and flexible to account for innovations in AI and ML. Current regulations have not kept up with changes in technology. HIPAA calls for deidentification of data; however, technology today can link de-identified data resulting in identification.”
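The re-identification risk CSA describes can be illustrated with a toy linkage attack: even after names are stripped from health records, quasi-identifiers such as ZIP code, birth year, and sex can be joined against a public roster to recover identities. The sketch below uses entirely invented data and a deliberately simplified record layout:

```python
# Toy linkage attack: the "de-identified" records retain quasi-identifiers
# that can be matched against a public roster. All data is invented.
deidentified_records = [
    {"zip": "02138", "birth_year": 1954, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "02139", "birth_year": 1967, "sex": "M", "diagnosis": "asthma"},
]
public_roster = [
    {"name": "J. Doe", "zip": "02138", "birth_year": 1954, "sex": "F"},
    {"name": "R. Roe", "zip": "02140", "birth_year": 1971, "sex": "M"},
]

def reidentify(records, roster):
    """Return (name, diagnosis) pairs where the quasi-identifier
    triple (zip, birth_year, sex) matches exactly one roster entry."""
    matches = []
    for rec in records:
        key = (rec["zip"], rec["birth_year"], rec["sex"])
        hits = [p["name"] for p in roster
                if (p["zip"], p["birth_year"], p["sex"]) == key]
        if len(hits) == 1:  # a unique match re-identifies the patient
            matches.append((hits[0], rec["diagnosis"]))
    return matches
```

Here `reidentify(deidentified_records, public_roster)` links J. Doe to a diabetes diagnosis despite the absence of names in the health data, which is why HIPAA-style de-identification alone is not a guarantee of privacy.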

These vast amounts of data must also be transmitted between systems and stored securely. CSA recommended that healthcare organizations protect AI devices by ensuring access controls, including multi-factor authentication, are in place. In addition, organizations should incorporate endpoint security and anomaly detection into their security architectures.

AI will likely continue to drive innovation and advancements in healthcare, enabling better chronic disease management and early disease detection. However, healthcare organizations also have a responsibility to assess risks.

“As with any cloud implementation, there are privacy and security concerns that need to be addressed,” the guide concluded.

“[Healthcare organizations] cannot lose sight of the shared responsibility model used on cloud computing. This requires they assess the risk and ensure controls are in place to mitigate these risks.”