
Security Experts Prioritize AI Safety Amid Evolving Risks

Nathan Hamiel of Kudelski Security on Security Testing and Biased Algorithms
Nathan Hamiel, senior director, research, Kudelski Security

Ensuring safety and security of AI systems has become a paramount concern amid rapid advancements in the technology. Security professionals have become "more involved in things that they weren't traditionally involved in," said Nathan Hamiel, senior director of research at Kudelski Security.


The concept of AI safety goes beyond conventional definitions, encompassing practical implications for user safety, Hamiel said. Poorly built AI systems can violate privacy and encode bias, with significant real-world consequences: financial and hiring algorithms, for example, can produce discriminatory outcomes.

"Think about whether a product is safe to use. Security plays a foundational role in that and determines if something is safe to use," he said. AI systems must ensure user privacy and be reliable, especially as they handle increasing amounts of sensitive data.

In this video interview with Information Security Media Group at DEF CON 2024, Hamiel also discussed:

  • The flawed approach of discarding established security practices with new tech;
  • Deepfakes' role beyond deception and their social impact;
  • Practical steps for assessing AI risks and applying security measures effectively.

Hamiel leads the fundamental and applied research team within the Innovation group. His team focuses on privacy, advanced cryptography, emerging technologies and internal and external collaboration. He is a regular speaker at Black Hat and DEF CON, among other conferences.


About the Author

Aseem Jakhar

Co-Founder, EXPLIoT

Jakhar is the co-founder of EXPLIoT. He founded null, an open security community platform in Asia, and also organizes the Nullcon and hardwear.io security conferences.

