- Artificial intelligence (AI) can be used to improve healthcare data security, but it can also undermine healthcare data security, observed a panel at the World Medical Innovation Forum held in Boston April 23 to 25.
The use and misuse of AI could become a growing problem as more healthcare organizations look to AI to make use of unstructured clinical data residing in data repositories. This increasing demand is expected to fuel the healthcare AI market, pushing it up at a CAGR of more than 60 percent through 2022, predicted a report by Research and Markets.
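For context, a CAGR figure compounds annually. A toy calculation (the base value here is hypothetical, not from the Research and Markets report) shows how quickly 60 percent annual growth multiplies a starting value:

```python
def compound(base, cagr, years):
    """Value after `years` of growth at a constant annual rate (CAGR)."""
    return base * (1 + cagr) ** years

# A market growing at a 60 percent CAGR for four years multiplies roughly 6.5x.
print(round(compound(1.0, 0.60, 4), 2))  # 6.55
```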
AI covers a range of technologies, including machine learning, natural language processing, cognitive computing, image recognition, and speech recognition.
23Bell Managing Partner Stephen McHale told the panel that AI can be used to attack healthcare organizations as well as to defend them.
“AI can be on the defensive, and AI can go on the offensive. It is going to be an arms race,” he said.
Partners Healthcare Systems Chief Information Security and Privacy Officer Jigar Kadakia said that making security decisions on a risk-based value proposition is critical.
“There is a lot of noise around AI and security. So you have to find the key pieces that you feel are most vulnerable and protect those. There is going to be risk associated with it, whether it is risk going out, risk coming in, or the hackers leveraging the technology,” he said.
IBM Watson Health Chief Information Security Officer Carl Kraenzel said AI helps mitigate risk when an organization has already been breached.
“You have to assume a security posture of, you have already been breached, you just don’t know it. If you assume you are already breached, what do you do about it?” he posited.
At the same time, AI can be corrupted and have a malicious effect, as a type of insider threat.
“If you don’t pay attention to what you trained the AI with—called content curation—you could accidentally let it be given the wrong inputs that produce a poisonous bias. That is a new risk,” IBM’s Kraenzel said.
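The content-curation risk Kraenzel describes is commonly called training-data poisoning. A toy sketch (not IBM's system; the symptom and diagnosis data are invented) shows how flooding a training set with mislabeled examples flips a simple majority-vote classifier:

```python
from collections import Counter

def train(examples):
    """Tally label votes per token from (tokens, label) training pairs."""
    votes = {}
    for tokens, label in examples:
        for t in tokens:
            votes.setdefault(t, Counter())[label] += 1
    return votes

def classify(votes, tokens):
    """Classify by summing per-token label votes and taking the majority."""
    tally = Counter()
    for t in tokens:
        if t in votes:
            tally += votes[t]
    return tally.most_common(1)[0][0] if tally else None

clean = [(["fever", "cough"], "flu"), (["fever", "rash"], "measles"),
         (["cough", "fatigue"], "flu")]
# Attacker floods the training set with the same symptoms mislabeled.
poisoned = clean + [(["fever", "cough"], "measles")] * 5

print(classify(train(clean), ["fever", "cough"]))     # flu
print(classify(train(poisoned), ["fever", "cough"]))  # measles
```

The model itself is unchanged; only the curated inputs differ, which is why Kraenzel frames this as a curation problem rather than an algorithm problem.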
IBM recently announced a partnership with MIT to promote the use of AI in healthcare decision support and analytics. IBM and MIT are investing $240 million over the next ten years to set up an AI research laboratory that will be co-located with IBM Watson Health and IBM Security headquarters in Boston.
Healthcare diagnostics and clinical decision support, as well as cybersecurity, will be priorities for the new lab.
Kraenzel said that another security risk for healthcare organizations is managing the “toxicity” of data in the wild.
“We need to figure out why data is toxic when privacy is breached and work to remove the toxicity with regulators, with privacy specialists, with our own patient and healthcare communities. When you assume you’ve been breached, you pay attention to the toxicity and cleanup and how you can mitigate it,” he said.
Kadakia said that healthcare security professionals need to be threat hunters, not sit around and wait for attacks.
Signature-based security solutions are not sufficient to stop cyberattacks, he added.
“We have to move to advanced threat solutions that aren’t signature based. We need to look at the data within the system itself. When you think about AI technology, being able to leverage its learning ability as a defense, to hunt down the threats, is going to be critical,” Kadakia stated.
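As an illustration of the distinction Kadakia draws (a generic sketch, not Partners' actual tooling), a signature matcher can only flag patterns it already knows, while a simple behavioral baseline can flag never-before-seen activity such as a spike in records accessed per hour:

```python
import statistics

def signature_match(event, signatures):
    """Signature-based: flags only patterns seen before."""
    return any(sig in event for sig in signatures)

def anomaly_score(history, value):
    """Behavior-based: distance from the learned baseline, in standard deviations."""
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(value - mean) / stdev

# A known-bad pattern is caught; a novel one slips through the signature check.
print(signature_match("DROP TABLE patients; --", ["; --"]))  # True
print(signature_match("novel zero-day payload", ["; --"]))   # False

baseline = [102, 98, 110, 95, 105, 99, 101, 97]  # records accessed per hour
print(anomaly_score(baseline, 104) > 3)   # False: normal usage
print(anomaly_score(baseline, 5000) > 3)  # True: possible exfiltration
```

Learning-based defenses of the kind Kadakia describes are, in effect, far more sophisticated versions of the baseline side of this sketch.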
McHale noted that blockchain technology can be used to protect PHI.
“People want to go after those giant, proprietary, stored datasets. How do you make the data less attractive? You take out the PHI and other data that would be meaningful and have it distributed on a blockchain and encrypt it. If they hit the major data store, all of the important data is off on a blockchain, so they are not going to get much out of it,” he said.
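A minimal sketch of the pattern McHale describes (field names are hypothetical; a real deployment would encrypt the PHI payload rather than store it in the clear, and would use a distributed ledger rather than a local list):

```python
import hashlib
import json

PHI_FIELDS = {"name", "ssn", "address"}

def split_record(record):
    """Separate PHI from the clinical payload kept in the main store."""
    phi = {k: v for k, v in record.items() if k in PHI_FIELDS}
    clinical = {k: v for k, v in record.items() if k not in PHI_FIELDS}
    return phi, clinical

def append_block(chain, payload):
    """Append a hash-linked block holding the PHI payload (encryption omitted)."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps(payload, sort_keys=True)
    block = {"prev": prev, "payload": body,
             "hash": hashlib.sha256((prev + body).encode()).hexdigest()}
    chain.append(block)
    return block["hash"]

chain = []
phi, clinical = split_record(
    {"name": "Jane Doe", "ssn": "000-00-0000", "diagnosis": "J10.1"})
clinical["phi_ref"] = append_block(chain, phi)  # main store keeps only a pointer
print(clinical)  # diagnosis plus an opaque hash reference; no PHI
```

An attacker who compromises the main store gets de-identified clinical data and an opaque reference, which is the "not going to get much out of it" property McHale points to.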
Kadakia suggested that policy-makers and regulators come up with a broad security framework with specific requirements for healthcare organizations to implement to reduce their risks of a data breach. For example, organizations should be required to implement out-of-band authentication, which is a type of two-factor authentication that requires a secondary verification factor through a separate communication channel.
“This is consumer friendly, it provides a layer of identity proofing for the individual, and it’s very hard to hack into because you have to have the form factor associated with it. Make that a requirement for all,” he said.
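Out-of-band authentication delivers a one-time code over a channel separate from the login session. A minimal sketch, assuming a hypothetical `send_sms` callback standing in for that second channel:

```python
import hmac
import secrets

def issue_code(send_sms):
    """Issue a six-digit one-time code over a separate channel (e.g., SMS)."""
    code = f"{secrets.randbelow(10**6):06d}"
    send_sms(code)  # delivered outside the login session
    return code

def verify(expected, submitted):
    # Constant-time comparison to resist timing attacks.
    return hmac.compare_digest(expected, submitted)

outbox = []  # stands in for an SMS gateway
code = issue_code(outbox.append)
print(verify(code, outbox[0]))  # True: user typed the code they received
print(verify(code, "xxxxxx"))   # False: wrong code rejected
```

The "form factor" Kadakia mentions is the second device: an attacker with a stolen password still cannot complete the login without access to that channel.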
Kadakia also recommended setting up a federal government-sponsored medical network that would link hospitals and other healthcare providers.
“We would all use it, and it would only be for medical traffic. That way we can protect ourselves as a set of organizations against the hackers because they all leverage the Verizon LAN into your organization. If they had to penetrate a government-supported infrastructure that we all pay into, then it is going to be harder to break into,” he said.
The EU’s General Data Protection Regulation (GDPR), which takes effect May 25, 2018, is an important focus for healthcare, said Kraenzel. “We are doing a lot of things to adapt for GDPR. It is a regulatory expression of patient control. We are not just doing blockchain for the fun of it. We have a real need to put patients in control. This is an example of a very large industry regulatory impact this year,” he noted.
“Beyond that, as we watched poor Mark Zuckerberg be grilled by Congress, there is a lot of speculation that that particular conversation will bring a GDPR-class of regulation here to the US. We are all trying to adapt to GDPR. None of us gets to say, ‘GDPR, that’s a problem out there’.”
The EU’s new regulation applies to any organization regardless of location that holds and processes personal data of individuals residing in EU countries.
To comply with GDPR, an organization must obtain clear consent from EU data subjects to have their personal data handled and processed, with the purpose of the processing stated in the consent form.
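A consent record that captures the required processing purpose might be modeled like this (a sketch with invented field names, not a compliance implementation):

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class ConsentRecord:
    subject_id: str
    purpose: str                 # GDPR requires the processing purpose be stated
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

    def is_valid_for(self, purpose):
        # Consent covers only the purpose it was given for, until withdrawn.
        return self.purpose == purpose and self.withdrawn_at is None

consent = ConsentRecord("patient-42", "care coordination",
                        datetime.now(timezone.utc))
print(consent.is_valid_for("care coordination"))  # True
print(consent.is_valid_for("marketing"))          # False
```

Binding each consent to a single stated purpose is what distinguishes GDPR-style consent from a blanket opt-in.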