How Providers Can Defend Against AI-Assisted Cyberattacks

Threat actors may leverage AI tools such as ChatGPT to accelerate healthcare cyberattacks and advance their goals of data exfiltration.

What once seemed like a far-fetched idea is now a reality — artificial intelligence (AI) is advancing steadily, enabling increased efficiency in a variety of sectors. Unfortunately, cyber threat actors can also leverage AI to accelerate healthcare cyberattacks.

From crafting phishing emails to developing malware code and rapidly exploiting vulnerabilities, AI enables threat actors to speed up the rate and volume of their attacks. Thankfully, defenders have the same tools at their disposal to combat these emerging threats.

Understanding How Threat Actors Use AI

In July 2023, the HHS Health Sector Cybersecurity Coordination Center (HC3) issued a brief regarding the threats that AI may pose to healthcare cybersecurity. Specifically, the brief focused on ChatGPT, a generative AI tool that serves as a conversational assistant.

ChatGPT uses deep learning to produce human-like responses via transformer neural networks. As more users enter data into ChatGPT, the tool will continue to learn and adapt its feedback. Since OpenAI released ChatGPT in 2022, questions and concerns have surfaced surrounding everything from using the tool to cheat on college exams to crafting resumes and even leveraging the tech to create clinical notes.

HC3’s brief focused specifically on how threat actors may be able to leverage ChatGPT to design and execute healthcare cyberattacks. The tool can help them develop phishing emails, impersonation attacks, complex malware code, and ransomware.

What’s more, threat actors may use AI to rapidly exploit vulnerabilities, overwhelm human defenses, and automate attack processes.

HC3 provided multiple examples of well-crafted phishing email templates, complete with correct grammar and sentence structure. Each email is designed to entice readers to click on an attachment, bringing the attacker one step closer to network access.

“The attacker will need to attach a malicious file, and then fill in the blanks and customize it in order to make it even more believable,” HC3 noted.

In addition, HC3 showed examples of proof-of-concept exploits in which ChatGPT helped researchers develop malware. With ChatGPT’s help, even threat actors with limited technical ability can launch cyberattacks that were previously out of reach.

Proof-of-Concept Exploits Exemplify AI Risks

Researchers at Vedere Labs, the cybersecurity research arm of Forescout Technologies, set out to explore the tangible risks of AI-assisted attacks on healthcare in their latest research project.

“Like everybody, we were surprised when we saw this ChatGPT stuff coming up,” Daniel dos Santos, head of Security Research at Forescout’s Vedere Labs, said in an interview with HealthITSecurity.

“Everybody started talking about how it can be used for defense and how it can be used for attacks, and so on and so on. But a lot of what we saw in the discussion about how it can be used for attacks is around phishing – basically creating more convincing phishing messages and trying to do abuses like that, which is actually what we see most often being done in practice.”

Crafting convincing phishing messages remains one of the primary ways that ChatGPT can assist threat actors.

“But we wanted to explore a little bit beyond that,” dos Santos explained. “What other types of attacks are possible? We do a lot of research into medical devices, operational technology, IoT, and things like that, so we wanted to know what the impact would be on technology.”

Vedere Labs built upon its previous research, which used ChatGPT’s code conversion capability to port an existing operational technology (OT) exploit into another programming language with ease. The latest project applied these proof-of-concept exploits to healthcare, illustrating AI’s potential to assist in healthcare cyberattacks. Notably, these exploits have not been observed in the wild; they are examples of how ChatGPT could be used maliciously.

“The advantages of AI in this case are that the attacker does not need to understand the protocols being used (often proprietary or very different from typical IT protocols) and the increased speed of development to obtain the targeted data,” the report noted.

Vedere Labs went on to show how ChatGPT could help extract sensitive data transmitted in clear text via three protocols used by point-of-care testing and laboratory devices, as well as a proprietary protocol used by a popular medication dispensing system. ChatGPT guided the researchers through a simulation of parsing and extracting sensitive data, with largely successful results.
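
To make the clear-text risk concrete, here is a minimal sketch assuming a synthetic HL7-v2-style message rather than the specific protocols from the Forescout research. HL7 v2 is a widely documented clear-text healthcare format in which segments are separated by carriage returns and fields by pipes, so a handful of standard-library string operations can recover patient identifiers and results; every value below is invented.

```python
# Minimal illustration (Python 3.9+) of why unencrypted clinical protocols
# are easy to mine. The message is synthetic and loosely follows HL7 v2
# conventions: segments split on carriage returns, fields split on pipes.

SYNTHETIC_HL7 = (
    "MSH|^~\\&|LAB|HOSP|EMR|HOSP|202307140830||ORU^R01|00001|P|2.3\r"
    "PID|1||123456||DOE^JANE||19800101|F\r"
    "OBX|1|NM|GLU^Glucose||105|mg/dL|70-110|N\r"
)

def parse_segments(message: str) -> dict[str, list[list[str]]]:
    """Group pipe-delimited fields by their segment identifier."""
    segments: dict[str, list[list[str]]] = {}
    for raw in message.strip().split("\r"):
        fields = raw.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

segments = parse_segments(SYNTHETIC_HL7)
patient = segments["PID"][0]
observation = segments["OBX"][0]
print("Patient:", patient[5].replace("^", " "))        # Patient: DOE JANE
print("Result:", observation[3].split("^")[1],
      observation[5], observation[6])                  # Result: Glucose 105 mg/dL
```

That triviality is the point of the report’s observation above: once an AI assistant explains a protocol’s field layout, the extraction itself demands little skill or time.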

However, the research also uncovered the limits of AI for attackers. ChatGPT sometimes led the researchers astray by providing convincing but incorrect answers known as “hallucinations.” Fortunately, these incorrect responses may stop a novice attacker who lacks the technical knowledge to recognize and correct them.

“People keep talking about ChatGPT and how generative AI will change things. From my point of view, it's mostly about accelerating attacks,” dos Santos suggested.

ChatGPT won’t do all the work for attackers, nor can it. But it can help threat actors craft phishing emails and develop code that would otherwise have taken more time. As a result, healthcare defenders should be prepared for an increased volume of attacks.

How Healthcare Defenders Can Leverage AI for Security

Although threat actors may be able to accelerate attack timelines, the methods they are using remain the same. Phishing and ransomware are still top threats to healthcare cybersecurity.

“The traditional mitigations and recommendations still apply, which is to have the proper asset inventory, know what is patched or unpatched or running legacy software, and segment the network,” dos Santos added. “On the other hand, what it also enables for defenders is a lot of new use cases.”

For example, defenders may be able to bolster threat-hunting tactics using generative AI to explain the reverse-engineered code of a potentially malicious file.
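
As a hedged sketch of what that could look like, the snippet below sends a decompiled function to a general-purpose LLM and asks for a plain-language explanation. The OpenAI client, model name, prompt wording, and the decompiled snippet itself are illustrative assumptions, not tooling described by HC3 or Forescout.

```python
# Illustrative only: ask a general-purpose LLM to explain decompiled code
# recovered from a suspicious binary during threat hunting. Requires the
# `openai` package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Synthetic stand-in for disassembler/decompiler output; the function and
# helpers below are invented for illustration.
decompiled_snippet = """
int sub_401000(char *host) {
    SOCKET s = connect_tcp(host, 4444);
    while (recv_line(s, buf)) exec_shell(buf);
    return 0;
}
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any capable chat model would do
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting a malware analyst. Explain what this "
                "decompiled function appears to do and flag suspicious "
                "behavior such as networking, persistence, or evasion."
            ),
        },
        {"role": "user", "content": decompiled_snippet},
    ],
)
print(response.choices[0].message.content)
```

Output like this is a triage aid rather than a verdict; the hallucination risk dos Santos describes cuts both ways, so analysts should verify the model’s explanation against the code itself.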

HC3 also pointed out the benefits of AI to defenders, noting its ability to enhance penetration testing, automated threat detection, AI training for cybersecurity personnel, and cyber threat analysis and incident handling.

A cybersecurity team with AI-specific knowledge can better detect AI-assisted phishing attacks and effectively reduce the attack surface in relation to AI-enhanced threats, HC3 suggested.

HC3 provided numerous recommendations for defending the healthcare sector against AI threats, while acknowledging that this is an emerging field. A good starting point is the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework (AI RMF), which provides organizations with a roadmap for managing AI cybersecurity risks. Organizations may also leverage MITRE ATLAS, a knowledge base of adversary tactics and techniques targeting AI systems.

Going forward, HC3 urged healthcare organizations to “expect a cat-and-mouse game.”

“As AI capabilities enhance offensive efforts, they’ll do the same for defense; staying on top of the latest capabilities will be crucial,” the brief noted.