Examining Health Data Privacy, HIPAA Compliance Risks of AI Chatbots

Healthcare organizations seeking to reap the benefits of AI chatbots must consider the HIPAA compliance and data privacy risks that come along with them.

By Jill McKeon

AI chatbots, such as Google’s Bard and OpenAI’s ChatGPT, have sparked continuous conversation and controversy since they became available to the public. In the healthcare arena, patients may be tempted to describe their symptoms to a chatbot rather than to a physician, and clinicians may be able to leverage these tools to easily craft medical notes and respond to portal messages.

However, AI chatbots present unique and varied challenges when it comes to protecting patient privacy and complying with HIPAA. Two recent viewpoints published in JAMA explored the health privacy and compliance risks of AI chatbots, each offering thoughts on how providers can navigate HIPAA compliance and honor their duty to protect patient data as these tools gain prominence.

Navigating HIPAA Compliance While Using AI Chatbots

In the first of two viewpoint articles published in JAMA on the topic, researchers noted that the use of AI to improve workflows in healthcare is not a new development. Healthcare organizations have long contracted with data analytics firms to analyze electronic health records, with a business associate agreement (BAA) in place to authorize the data sharing.

“The innovation—and risk—with an AI chatbot therefore does not lie with its AI engine but with its chat functionality,” the article suggested. For example, physicians may enter a transcript of a patient-physician encounter into a chatbot, which can then produce medical notes in seconds.

Offloading this task to an AI chatbot may be tempting, but doing so without a BAA in place may expose patient data.

“Clinicians may not realize that by using ChatGPT, they are submitting information to another organization, OpenAI, the company that owns and supports the technology,” the article stated.

“In other words, the clinical details, once submitted through the chat window, have now left the confines of the covered entity and reside on servers owned and operated by the company. Given that OpenAI has likely not signed a business associate agreement with any health care provider, the input of PHI into the chatbot is an unauthorized disclosure under HIPAA.”

The easiest way to avoid this compliance roadblock, the authors suggested, is to avoid entering any protected health information (PHI) into a chatbot. For example, if a physician wanted to enter a transcript into a chatbot, they would first have to manually deidentify the transcript according to HIPAA’s deidentification standards.
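
Neither viewpoint prescribes specific tooling, but a minimal sketch can show why scrubbing PHI from a transcript is harder than it sounds. The Python snippet below is a hypothetical illustration, not a compliant deidentification pipeline: it catches only a handful of the 18 Safe Harbor identifier classes with regular expressions, and the sample transcript and patterns are invented for this example.

    import re

    # Hypothetical illustration: regex patterns for a few of the 18 HIPAA
    # Safe Harbor identifier classes. Real deidentification requires far
    # more than this; names, addresses, and other free-text identifiers
    # need NLP tooling or manual review.
    PATTERNS = {
        "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "PHONE": re.compile(r"\(?\d{3}\)?[-.\s]\d{3}[-.\s]\d{4}"),
        "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "DATE": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),  # Safe Harbor keeps only the year
        "MRN": re.compile(r"\bMRN[:\s]*\d+\b", re.IGNORECASE),
    }

    def scrub(text: str) -> str:
        """Replace matched identifiers with bracketed placeholders."""
        for label, pattern in PATTERNS.items():
            text = pattern.sub(f"[{label} REMOVED]", text)
        return text

    transcript = ("Patient John Doe (MRN: 4821973) seen on 3/14/2023. "
                  "Callback (555) 123-4567, email jdoe@example.com.")
    print(scrub(transcript))
    # Output: "Patient John Doe ([MRN REMOVED]) seen on [DATE REMOVED]. ..."

Even this toy example leaves the patient’s name untouched, underscoring the authors’ point that the burden of purging PHI cannot rest on pattern matching alone.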

“Although some of the burden of purging PHI from chat inputs falls on the querying clinician, covered entities can take measures to create environments that prevent inadvertent PHI disclosure. At a minimum, covered entities should provide training specifically on chatbot risks, beginning now and continuing in the context of annual HIPAA training,” the article continued.

“Other more restrictive approaches include limiting chatbot access to only workforce members who have had training or blocking network access to chatbots.”

As chatbots continue to develop, healthcare organizations will be faced with the decision to embrace or reject these technologies. The authors predicted that AI chatbot developers will eventually work directly with healthcare providers to develop HIPAA-compliant chat functionalities.

In the meantime, HIPAA-covered entities that choose to use chatbots will have to take care on their own to prevent the unauthorized disclosure of PHI.

Is HIPAA Strong Enough?

In the second JAMA viewpoint article on AI chatbots and health data privacy, the authors posited that AI chatbots simply cannot comply with HIPAA in any meaningful way, even with industry assurances.

“Even if they could, it would not matter because HIPAA is outdated and inadequate to address AI-related privacy concerns,” the authors wrote. “Consequently, novel legal and ethical approaches are warranted, and patients and clinicians should use these products cautiously.”

The authors observed that lawmakers could not have predicted healthcare’s digital transformation when HIPAA was enacted in 1996, an era when paper records were still the norm and the theft of physical records was the primary security risk.

The authors argued that even deidentified data can pose privacy risks via reidentification and that asking whether chatbots “could be made HIPAA compliant is to pose the wrong question. Even if compliance were possible, it would not ensure privacy or address larger concerns regarding power and inequality.”

The true extent of the privacy risks that these chatbots pose is not yet known, but the authors urged clinicians to remember their duty to protect patients from the unauthorized use of their personal information.

“As alluring as offloading repetitive tasks or obtaining quick information might be, patients and clinicians should resist chatbots’ temptation. They must remember that even if they do not input personal health information, AI can often infer it from the data they provide,” the authors suggested.

“Because HIPAA is antiquated, clinicians should not rely on HIPAA compliance as a proxy for fulfilling their duty to maintain patient confidentiality.”