In the latest weekly update, Jeremy Grant of Venable LLP joins editors at ISMG to discuss the state of secure identity in 2024, the challenges in developing next-generation remote ID proofing systems, and the potential role generative AI can play in both compromising and protecting identities.
Don't click phishy links. Everyone knows that. But are your end users prepared to quickly identify the tricky tactics bad actors use today? Probably not. Cybercriminals have moved beyond simple bait-and-switch domains. They're now employing a variety of advanced social engineering techniques to entice your...
The escalating adoption of generative AI has introduced concerns regarding data privacy, fake data and bias amplification. Ashley Casovan, managing director of the IAPP AI Governance Center, discusses the need to develop governance models and standardize AI systems.
Fraudsters used deepfake technology to trick an employee at a Hong Kong-based multinational company into transferring $25.57 million to their bank accounts. Hong Kong Police said Sunday that the fraudsters had created deepfake likenesses of top company executives in a video conference to fool the worker.
RSA Conference 2024 is on the horizon, and RSAC has dissected a record-breaking number of speaker submissions to uncover intriguing trends. Don’t miss the RSAC 2024 Call for Submissions Trends Report for an insider’s look into what will be top-of-mind for cybersecurity professionals in 2024 and beyond!
In the latest "Proof of Concept," Sam Curry of Zscaler and Heather West of Venable assess how vulnerable AI models are to potential attacks, offer practical measures to bolster the resilience of AI models and discuss how to address bias in training data and model predictions.
South Korea's intelligence agency has reported that North Korean hackers are using generative AI to conduct cyberattacks and search for hacking targets. Experts believe North Korea's AI capabilities are robust enough for more precise attacks on South Korea.
The potential applications of artificial intelligence (AI) are immense. AI aids us in everything from early cancer diagnoses to alleviating public administrative bureaucracy to making our working lives more productive. However, generative AI is also used for illicit purposes. It is being used to impersonate...
Machine learning systems are vulnerable to cyberattacks that could allow hackers to evade security and prompt data leaks, scientists at the National Institute of Standards and Technology warned. There is "no foolproof defense" against some of these attacks, researchers said.
In conjunction with a new report from CyberEd.io, Information Security Media Group asked some of the industry's leading cybersecurity and privacy experts about 10 top trends to watch in 2024. Ransomware, emerging AI technology and nation-state campaigns are among the top threats.
In this weekly update, four editors at Information Security Media Group delve into key 2023 cybersecurity issues, spotlighting efforts by the Biden administration, proposed U.S. healthcare cybersecurity laws, and crucial upcoming dates for the information security community.
Senior analyst Alla Valente discusses Forrester's "Predictions 2024: Cybersecurity, Risk and Privacy" report, which outlines five predictions to help security, risk and privacy leaders prepare for the coming year. She also discusses the significance of governance and accountability in the use of AI.
In the latest weekly update, two analysts at Forrester - Allie Mellen and Jeff Pollard - join three editors at ISMG to discuss important cybersecurity issues, including CISOs' primary inquiries about AI/ML, how organizations can thwart data poisoning attacks, and practical use cases for AI.
When government apps and digital services lag or break, the ramifications can have far-reaching effects – on citizens, infrastructure, and national security.
That's why operational resilience is critical.
In this GovLoop playbook, agencies will learn how to increase operational resilience through unified...
AI-generated attacks can be faster and more adaptable than human-led attacks. Organizations can defend against AI-powered attacks by educating their users, creating policies and using AI-powered security tools, said Vlad Brodsky, chief information security officer at OTC Markets Group.