
Protect AI Raises $35M to Guard ML From Supply Chain Threats

Series A Funding Will Help Enterprises Spot Vulnerabilities in Open-Source Packages
Ian Swanson, co-founder and CEO, Protect AI (Image: Protect AI)

A startup led by former AWS and Oracle AI executives completed a Series A funding round to strengthen security around ML systems and AI applications.


Seattle-based Protect AI plans to use the $35 million investment to expand its AI Radar tool, research unique threats in the AI and ML landscape and further its work around open-source initiatives, said co-founder and CEO Ian Swanson. The funding was led by New York-based Evolution Equity Partners, which Swanson praised for its experience in working with and serving on the boards of big cybersecurity companies (see: How Startups Can Help Protect Against AI-Based Threats).

"AI is delivering so much value - driving true digital transformation for companies - but we need to defend it," Swanson told Information Security Media Group. "We need to protect it commensurate to the value that it's delivering. It's the right time for a company to be solely focused on the security of ML systems and AI applications."

Prior to starting Protect AI in 2022, Swanson spent 18 months leading go-to-market activities for AWS' artificial intelligence and machine-learning teams and 15 months overseeing Oracle's AI and ML product offerings. President Daryan Dehghanpisheh spent 30 months leading AWS' AI and ML solution architects, while co-founder and CTO Badar Ahmed spent 44 months leading engineering teams on Oracle's data science service.

"We truly understand ML systems. It's our background, and we've done it at scale across hundreds of thousands of customers throughout our careers," Swanson said.

Taking Machine-Learning Protection to the Next Level

With the $35 million, Swanson plans to expand AI Radar to cover more components across data, infrastructure, code and model artifacts so clients can write policies that check for critical vulnerabilities in open-source packages. Swanson said AI Radar is the only tool on the market that enables clients to see, know, manage and truly understand their ML systems at scale from a single pane of glass.
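
Swanson didn't detail how AI Radar's policy checks work, but the general shape of such a check is straightforward: compare the packages installed in an environment against an advisory feed. The Python sketch below is a minimal illustration with a hypothetical advisory list; it is not AI Radar's implementation.

```python
# A minimal sketch of a policy check for vulnerable open-source packages.
# The advisory data is hypothetical -- this is an illustration, not AI Radar.
from importlib.metadata import distributions

# Hypothetical advisory feed: package name -> versions known to be vulnerable
KNOWN_VULNERABLE = {
    "examplepkg": {"1.0.0", "1.0.1"},
}

def audit_environment():
    """Flag any installed package whose version appears in the advisory feed."""
    findings = []
    for dist in distributions():
        name = (dist.metadata["Name"] or "").lower()
        if dist.version in KNOWN_VULNERABLE.get(name, set()):
            findings.append((name, dist.version))
    return findings

for name, version in audit_environment():
    print(f"POLICY VIOLATION: {name}=={version} has a known vulnerability")
```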

"AI is delivering so much value … but we need to defend it."
– Ian Swanson, co-founder and CEO, Protect AI

Gaining visibility into the ML life cycle is fundamentally different from observing the software development life cycle, since a software bill of materials consists of little more than code. In ML, customers must also account for how the model was tuned. An ML pipeline pieces together lots of disparate technology, so each customer ends up with a different flavor or configuration, Swanson said.

"The world of ML models is a lot more expansive than your typical application development," Swanson said.

One blind spot in the ML development life cycle that Protect AI can shine a light on is data scientists installing open-source packages or building models while working in live systems connected to very sensitive data, Swanson said. Protect AI's security tools can audit the systems of some of the world's largest banks, which have thousands of different ML models deployed in the most critical parts of their business.

"We can look at the ingredients, the recipe and the baker to make sure that the cake is being baked in the way that it should," Swanson said.

Advancing ML Research, Open-Source Initiatives

From a research perspective, Swanson said, Protect AI focuses on popular open-source software used in ML models, since AI and machine learning often rely on open-source libraries and packages, foundational models and data sets. The company plans to hunt for critical vulnerabilities in the open-source ecosystem, publish research on what it finds and work with maintainers to fix the identified bugs, he said.

"Organizations are deploying AI in some of the most critical parts of their business," Swanson said. "But if there's arbitrary or malicious code that can be run within those models, that's a serious risk, especially when you talk about financial services, healthcare or life sciences customers."

In terms of open-source initiatives, Swanson said, Protect AI has released an open-source tool to scan Jupyter notebooks for vulnerabilities since most ML projects start with data scientists exploring a notebook. Machine-learning practitioners are very used to working in the open-source ecosystem, and Swanson said Protect AI wants to meet customers where they are to drive security best practices.
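
A Jupyter notebook is JSON under the hood, so a minimal scanner can walk its code cells and flag risky patterns. The sketch below illustrates the idea with two example checks; the patterns are illustrative, and this is not Protect AI's released tool.

```python
# A toy notebook scanner: parse the .ipynb JSON, walk the code cells and
# flag risky patterns. The two checks here are illustrative examples only.
import json
import re

RISKY_PATTERNS = {
    "hardcoded AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "shell-escape pip install": re.compile(r"^\s*!pip\s+install", re.MULTILINE),
}

def scan_notebook(path):
    """Return (cell_index, finding) pairs for risky code in a notebook."""
    with open(path, encoding="utf-8") as f:
        nb = json.load(f)
    findings = []
    for i, cell in enumerate(nb.get("cells", [])):
        if cell.get("cell_type") != "code":
            continue
        source = "".join(cell.get("source", []))
        for label, pattern in RISKY_PATTERNS.items():
            if pattern.search(source):
                findings.append((i, label))
    return findings
```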

From a metrics standpoint, Swanson said, Protect AI plans to track feature velocity, customer adoption and customer renewals to ensure the company has found product-market fit, is scaling its capabilities across a broader set of organizations and is keeping pace with the threat landscape. Protect AI employs 25 workers and expects to add 15 more people to its R&D, product and engineering teams by late 2023.

"The security of AI and machine learning is different than your standard software security," Swanson said. "Our goal here is to educate on those differences and to provide solutions in that space."


About the Author

Michael Novinson

Managing Editor, Business, ISMG

Novinson is responsible for covering the vendor and technology landscape. Prior to joining ISMG, he spent four and a half years covering all the major cybersecurity vendors at CRN, with a focus on their programs and offerings for IT service providers. He was recognized for his breaking news coverage of the August 2019 coordinated ransomware attack against local governments in Texas as well as for his continued reporting around the SolarWinds hack in late 2020 and early 2021.
