Euro Security Watch with Mathew J. Schwartz


Cybersecurity Challenges and Opportunities With AI Chatbots

'Preparedness Pays,' European AI and Cybersecurity Experts Say at ENISA Conference

How can societies maximize the benefits offered by generative artificial intelligence while minimizing the potential downsides, including cybersecurity risks that are not fully known?


Enter ENISA, the European Union Agency for Cybersecurity, which on Wednesday hosted a conference in Brussels billed as a discussion of "key aspects of cybersecurity in AI systems as well as challenges associated with implementation and supervision of secure and trustworthy AI."

"Preparedness is the best way forward," said Apostolos Malatras, an ENISA team leader for knowledge and information (see: Killer Use Cases for AI Dominate RSA Conference Discussions).

Huub Janssen of the Dutch Authority for Digital Infrastructure, which supervises government cybersecurity and AI use, issued a call to action. AI offers "new opportunities and new risks," none of which need be "bad or devastating," he said. He did not suggest trying to ban AI. "But we need to act in order to make sure it's going in the right direction," he said.

EU lawmakers are crafting legislation that would require makers of "foundation models," such as the models underlying various chatbots, to meet certain health, safety and environmental rules and to uphold democracy and people's rights.

Large language models such as ChatGPT continue to improve at "warp speed," Janssen said, and major innovations often appear within weeks of one another. He's now tracking an explosion of open-source, lightweight language models small enough to run well on laptops, and these are already being used to improve each other. Here's how: Ask three different lightweight models a question, assign a fourth model to evaluate which answer is best, then tell yet another model to assign tasks to improve the others' outputs.
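That ensemble pattern is simple to orchestrate. The following is a minimal sketch, assuming a hypothetical ask() helper that stands in for whatever local model runtime is used - the helper, model names and prompts are illustrative, not anything Janssen demonstrated:

```python
# Sketch of the multi-model improvement loop described above.
# ask() is a placeholder: in practice it would call a local,
# lightweight model via a runtime of your choice.

def ask(model: str, prompt: str) -> str:
    # Placeholder: swap in a real local-model call here.
    return f"[{model}'s answer to: {prompt!r}]"

QUESTION = "How should we rotate leaked API keys?"  # illustrative
WORKERS = ["model-a", "model-b", "model-c"]         # hypothetical names

# 1. Ask three different lightweight models the same question.
answers = {m: ask(m, QUESTION) for m in WORKERS}

# 2. Assign a fourth model to judge which answer is best.
judge_prompt = "Pick the best answer:\n" + "\n".join(
    f"{m}: {a}" for m, a in answers.items()
)
best = ask("judge-model", judge_prompt)

# 3. Tell another model to assign improvement tasks to the others.
for m in WORKERS:
    feedback = ask("coordinator-model",
                   f"Tell {m} how to improve this answer: {answers[m]}")
    answers[m] = ask(m, f"Revise your answer. Feedback: {feedback}")
```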

While AI carries risks, it's important to highlight powerful opportunities as well, said Adrien Bécue, an AI and cybersecurity expert at French multinational firm Thales Group, speaking on a panel devoted to AI chatbot cybersecurity promise and peril.

Opportunities he sees include AI's ability to "analyze and synthesize" large volumes of threat intelligence, which today can be a time-consuming, laborious process. For incident response, AI could help automate coordination among different groups of stakeholders and across language barriers. When software vulnerabilities must be urgently patched, "expert code generator chatbots" could help developers rapidly create and test emergency fixes.
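For the threat intelligence use case, the following is a hedged sketch of what such synthesis might look like using OpenAI's Python client - the model name, prompt and sample reports are assumptions for illustration, not anything Bécue described:

```python
# Hedged sketch: synthesizing threat intelligence reports with an LLM.
# Requires the `openai` package and an API key in OPENAI_API_KEY.
# The model name, prompt and sample reports are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

reports = [
    "Report 1: Phishing campaign targeting EU agencies via OAuth consent...",
    "Report 2: New loader observed dropping Cobalt Strike beacons...",
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat model would do
    messages=[
        {"role": "system",
         "content": "Synthesize these threat reports into key TTPs and IoCs."},
        {"role": "user", "content": "\n\n".join(reports)},
    ],
)
print(response.choices[0].message.content)
```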

Rapid Advances

Where AI goes from here - and how quickly - remains an open question.

"The development of AI models is really hard to predict," said panelist David Johnson, a data scientist at Europol, the EU's criminal intelligence and coordination agency.

Describing himself as a "large language model enthusiast" - and user - rather than an expert, Johnson said his job is to help Europol ingest and enrich the large sets of data it receives from other law enforcement agencies. He's been experimenting with LLMs for the past few months and reports that they're not yet sufficiently reliable. "What I found it does really, really well is give an answer with a lot of confidence - so much confidence that I tend to believe it, but almost half of the time it's completely wrong."

Despite his current skepticism, Johnson said that given how quickly LLMs are advancing, cybersecurity experts need to closely track AI developments, because such problems may get fixed "very soon - maybe even tomorrow."

AI isn't designed to mislead, said Thales' Bécue. "It's not that it intentionally lies; it's that it's designed to please you," he said, and whether or not it's giving right answers, "the problem is that at some point we will start believing what these things say."

Already, people are employing ChatGPT output in perhaps unexpected ways, such as using it to generate Visual Basic for Applications code that automatically creates PowerPoint presentations, said information management expert Snezhana Dubrovskaya at IBM Belgium.
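For flavor, here is a made-up example of the kind of deck-building code in question, sketched in Python with the python-pptx library rather than VBA - the slide titles and content are invented:

```python
# Illustrative analogue of auto-generated deck-building code, written
# with python-pptx rather than VBA. Slide content here is invented.
from pptx import Presentation

prs = Presentation()
for title, body in [
    ("Quarterly Security Review", "Agenda and scope"),
    ("Key Incidents", "Summary of notable events"),
]:
    slide = prs.slides.add_slide(prs.slide_layouts[1])  # title + content
    slide.shapes.title.text = title
    slide.placeholders[1].text = body  # body placeholder on this layout

prs.save("report.pptx")
```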

Whether users who copy and paste this code understand what it does remains an open question, and it highlights how human factors are a risk with any AI tool. "As usual, we have very good intentions, but it can be misused," she said.

Even so, she predicts increasing cybersecurity utility from tools such as ChatGPT. A security chatbot could provide a user with a list of "typical mitigation actions" for combating a specified threat, or take an alert - say, from a CrowdStrike tool - and generate a Splunk query for studying logs for signs of intrusion.
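The following is a minimal sketch of that alert-to-query step, with made-up alert fields and a hand-written SPL string standing in for what a chatbot might generate - the index, sourcetype and field names are illustrative assumptions:

```python
# Hedged sketch: turning alert fields into a Splunk (SPL) search string.
# The index, sourcetype and field names are illustrative assumptions.
alert = {
    "host": "ws-042",
    "process": "powershell.exe",
    "hash": "44d88612fea8a8f36de82e1278abb02f",
}

# The kind of query a security chatbot might generate from the alert.
spl = (
    'search index=endpoint sourcetype=crowdstrike '
    f'host="{alert["host"]}" '
    f'(process_name="{alert["process"]}" OR file_hash="{alert["hash"]}") '
    "| stats count by host, process_name, user"
)
print(spl)  # review before running against production logs
```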

Chatbots have shown that they can at least help highlight vulnerabilities. "It's certainly possible and the publicly available chatbots do it surprisingly well," at least in test scenarios, Europol's Johnson said. For potential use in cryptocurrency investigations, Europol tested ChatGPT "with a smart contract that we knew was vulnerable, and it quite specifically told us it was vulnerable and what the developer needed to do to mitigate the vulnerability."

Many of the examples of how chatbots can be used for good in security share this core requirement: a user who knows what they're doing. Specifically, IBM's Dubrovskaya said, a user must give a chatbot good prompts, since the quality of the output depends in part on the quality of the input.

If there's one more thing chatbots don't have a perfect answer for yet, it's the computer science challenge of "garbage in, garbage out."



About the Author

Mathew J. Schwartz

Executive Editor, DataBreachToday & Europe, ISMG

Schwartz is an award-winning journalist with two decades of experience in magazines, newspapers and electronic media. He has covered the information security and privacy sector throughout his career. Before joining Information Security Media Group in 2014, where he now serves as executive editor of DataBreachToday and of European news coverage, Schwartz was the information security beat reporter for InformationWeek and a frequent contributor to DarkReading, among other publications. He lives in Scotland.



