Industry Insights with Mikaela B. Lewis, MS, CAHIMS, Principal Consultant, Clearwater

The Double-Edged Sword of AI in Healthcare Cybersecurity

Both Medical Professionals and Cyberattackers Are Using AI to Improve Their Work

As artificial intelligence, or AI, grows in popularity for simplifying workflows and diagnosing patients, healthcare leaders need to understand that AI use is also increasing among cyberattackers.

IBM defines AI as "a field that combines computer science and robust datasets to enable problem-solving." The global AI market was valued at $328.34 billion in 2021 and is projected to grow from $387.45 billion in 2022 to $1,394.30 billion by 2029.

Uses and Benefits of AI in Healthcare

AI has recently become more common in the healthcare industry because it makes data more accessible, processes more efficient and complex tasks simpler. AI can deliver real-time data to patients and physicians, including analytics and mobile alerts, helping physicians diagnose and address medical issues accurately and promptly. Using AI to automate simple tasks also gives medical professionals more time to assess patients, diagnose illnesses and treat patients appropriately, which promotes cost savings.

Artificial intelligence can also help diagnose certain illnesses using imaging systems. AI is proving valuable in risk analysis as well: Clearwater's IRM|Analysis SaaS platform, for example, leverages AI to deliver predictive risk ratings, drawing on millions of risk scenarios analyzed within the software over time. This helps leaders make better risk-rating decisions and maximize limited resources by freeing analysts and managers for higher-order work instead of data preparation.

In the cybersecurity industry, AI is frequently used to detect and protect against attacks. On the detection side, it reduces noise and surfaces more focused tasks for cybersecurity experts. It also helps prioritize responses, deliver semi-automated responses that head off attacks, and analyze attackers' actions to better understand and predict their next moves.
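
To make that concrete, here is a minimal, illustrative sketch of the kind of anomaly scoring AI-driven detection tools use to cut through noise and rank alerts for analysts. The event features, values and library choice (scikit-learn's IsolationForest) are assumptions made for illustration, not a depiction of any particular product.

```python
# Illustrative sketch only: toy anomaly scoring to prioritize security alerts.
# Feature layout and values are hypothetical, not from any vendor tool.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical per-event features: [bytes_out, failed_logins, off_hours_flag]
baseline_events = np.array([
    [1200, 0, 0], [900, 1, 0], [1500, 0, 0], [1100, 0, 1], [1300, 1, 0],
])
new_events = np.array([
    [1000, 0, 0],     # looks routine
    [250000, 14, 1],  # large off-hours transfer with many failed logins
])

# Learn what "normal" looks like from baseline traffic.
model = IsolationForest(contamination=0.1, random_state=0)
model.fit(baseline_events)

# Lower scores are more anomalous; triage the worst-scoring events first.
scores = model.decision_function(new_events)
for event, score in sorted(zip(new_events.tolist(), scores), key=lambda p: p[1]):
    print(f"score={score:.3f} event={event}")
```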

Acumen Research and Consulting estimates that the global market for AI-based cybersecurity tools was $14.9 billion in 2021 and will reach $133.8 billion by 2030. This growth reflects both the rising volume of attacks and the need to protect remote workers, whose numbers grew sharply during the COVID-19 pandemic.

How Malicious Actors Are Using AI

Bad actors are taking advantage of AI's capabilities and using them maliciously. For example, AI can identify systems' weaknesses by recognizing certain patterns, and bad actors can then exploit these weaknesses to expose sensitive data. If left undetected, a malicious user can gain entry to a system, sit dormant, and set up backdoors and other connections to attack the system later. Research also shows that AI-generated phishing emails achieve higher open rates because AI can recognize patterns and tailor messages to specific targets.

Cybercriminals are also leveraging AI to write malicious code. ChatGPT, an AI-powered chatbot hailed by many for its ability to answer questions, write prose and even generate code, is also being used by cyberattackers to develop ransomware and malware. And because even inexperienced attackers can use ChatGPT, it lowers the barrier to entry for cybercriminal activity.

ChatGPT is an unvetted, publicly accessible technology, so its use by healthcare professionals, such as the physician who used it to write an approval request to UnitedHealthcare, is particularly concerning. Such use is not a direct HIPAA violation, but it introduces heightened privacy and security risks. Clearwater Chief Risk Officer Jon Moore says healthcare employees and medical providers need to remember that "most, if not all, technologies can be used for good or evil, and ChatGPT is no different."

According to a recent report, the use of AI to create and execute autonomous cyberattacks with greater stealth and larger impact will increase over the next five years. Malicious actors are also weaponizing AI against AI systems themselves: they can poison a system with false data so that it no longer behaves as expected, and they can use AI to build malware and to conduct stealth attacks, password guessing and human impersonation.
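
As a toy illustration of the data-poisoning idea described above, the sketch below shows how flipping a fraction of training labels can quietly degrade a model's accuracy. The dataset, model and poisoning rate are hypothetical, chosen only to make the effect visible.

```python
# Illustrative sketch only: label-flipping "poisoning" on a toy classifier.
# Dataset, model and 30% flip rate are assumptions, not from the cited report.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Model trained on clean data.
clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# An attacker flips 30% of training labels so the model misbehaves.
rng = np.random.default_rng(0)
flip = rng.random(len(y_train)) < 0.30
y_poisoned = np.where(flip, 1 - y_train, y_train)
poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

print("clean accuracy:   ", clean.score(X_test, y_test))
print("poisoned accuracy:", poisoned.score(X_test, y_test))
```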

Because it is easy for employees to get their hands on AI tools such as ChatGPT, healthcare organizations should adopt policies that forbid their use without approval and bar the entry of any ePHI or other confidential information.
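
As one way such a policy could be backed by a technical guardrail, the sketch below screens outgoing text for patterns that may indicate ePHI before it reaches an external chatbot. The patterns and function names are hypothetical examples; a real program would pair the policy with a vetted data loss prevention tool rather than a handful of regular expressions.

```python
# Illustrative sketch only: a naive pre-submission filter for possible ePHI.
# Patterns below are simplistic, hypothetical examples of identifiers.
import re

EPHI_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),        # US Social Security number
    re.compile(r"\bMRN[:\s]*\d{6,10}\b", re.I),  # hypothetical record-number format
    re.compile(r"\b\d{2}/\d{2}/\d{4}\b"),        # dates, e.g., date of birth
]

def safe_to_submit(text: str) -> bool:
    """Return False if the text matches any pattern suggesting ePHI."""
    return not any(p.search(text) for p in EPHI_PATTERNS)

prompt = "Draft an approval letter for patient MRN: 00482913, DOB 04/12/1961."
if safe_to_submit(prompt):
    print("OK to send to the external tool.")
else:
    print("Blocked: possible ePHI detected; use an approved channel instead.")
```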

This blog was originally hosted on Clearwater's website. To read the full, original blog, visit Clearwater's blog page.



About the Author

Mikaela B. Lewis, MS, CAHIMS, Principal Consultant, Clearwater

As a Clearwater Principal Consultant, Mikaela Lewis is part of the company's deep team of experts who advise and support organizations across the healthcare ecosystem on developing and implementing effective cybersecurity and compliance programs. Ms. Lewis has extensive experience in healthcare policy, standards and frameworks, including HIPAA, the NIST SP 800 series, the NIST Cybersecurity Framework and ISO/IEC 27001.



