Hacking the Hackers as AI Redefines Cybersecurity

By Eric Lee, Principal, Cota Capital

 

As we outlined in a previous post, we at Cota Capital believe there is tremendous opportunity in the AI tooling sector. There are many subsectors within AI tooling that intrigue us. In this article, we’ll focus on one key area: security and compliance.

These are the AI tools that can be used to build guardrails for protecting large language models (LLMs) from external attacks and misuse. They’re also the tools that help enforce security policies, detect and mitigate risks, and better secure data and systems.

They’re vitally important because the emergence of generative AI and LLMs has brought risks and vulnerabilities that cyber attackers are eagerly exploiting. In fact, by 2028, more than one-fifth of cyberattacks and data leaks will involve generative AI, according to Gartner. That’s why we’re now seeing a host of AI security startups working to address this challenge.

The race is on to win in AI security

As AI systems grow more prevalent, so do the threats they face. Threats like adversarial attacks, data poisoning, and model inversion are all capable of manipulating or corrupting AI models in ways that are difficult to detect and defend against.

A major factor complicating AI security is the constant evolution of AI models. As systems learn and update, new security weaknesses emerge, so it’s difficult to stay ahead of potential threats. Additionally, AI’s reliance on large datasets, often containing sensitive or proprietary information, poses significant data privacy and security challenges. On top of all this, there is a lack of universally accepted security standards for AI, which adds to the difficulty of AI security because it creates inconsistencies in defense strategies.

These challenges are driving the demand for AI security, and we anticipate that AI security and governance will increasingly account for a larger share of the $184 billion cybersecurity market, as estimated by Gartner. As organizations implement AI everywhere, they’re recognizing the value and necessity of AI security and they’re willing to invest in new solutions.

While the cybersecurity industry is highly fragmented, standalone cybersecurity businesses have significant potential to grow into substantial and successful enterprises, much like market leaders such as Palo Alto Networks, CrowdStrike, and Zscaler. The top 20 cybersecurity vendors generate nearly $120 billion in revenue, seven of which have a market capitalization exceeding $10 billion. So the race is on. Who will win? Using prior innovation cycles as our guide, we can expect the AI security sector to follow a similar trajectory, paving the way for the creation of massive industry leaders.

4 areas where AI security startups can find success

Here are four areas where we see real opportunity in AI security and compliance.

1: Governance

As regulation matures and enterprises see that many of their AI programs do not meet regulatory requirements, attention will shift from merely understanding if models comply with regulations to actually building models that do comply with regulations. Many of the most promising governance solutions come from very young companies—startups founded in the last 1-3 years, often with fewer than 100 employees and limited capital raised.

Will a handful of them emerge and prosper? Or will agile incumbents—such as broader governance, risk, and compliance (GRC) platforms and cloud vendors that support the full ML lifecycle—step in and provide the AI tools organizations need? While we expect to see GRC platforms start releasing AI governance products, we are constructive about vertical-specific solutions, particularly in high-risk industries like financial services and insurance.

We expect vertical-specific platforms to be better equipped to address the unique regulatory, operational, and risk challenges of these industries. For example, in financial services, these platforms can be designed for anti-money laundering (AML) compliance, Know Your Customer (KYC) protocols, and stress-testing requirements. They can also be developed to adapt to the rapidly evolving regulatory landscape specific to the industry, such as GDPR for insurance or Basel III for banking.

2: AI access management

Another big security challenge is managing how employees access GenAI and secure enterprise AI applications. The growth of identity access management companies in cybersecurity, like Okta, demonstrates the potential for startups in the AI access management space to mature into large, independent public companies. Similarly, companies like Auth0 and Ping Identity gained traction by addressing identity security in today’s complex, multi-cloud ecosystems, with each capturing significant market value before being acquired or going public.

The rise of attacks, like model poisoning and data injection, highlights the need for tailored identity and access management (IAM) tools that understand AI pipelines. Startups that make access controls tailored to AI-specific environments and focus on adaptive, AI-driven IAM solutions are poised to become essential players. As companies and regulatory bodies place greater emphasis on securing AI models and data flows, startups addressing these challenges can set themselves apart by offering high-value services that general or incumbent providers cannot deliver.

3: Model building, pre-production

The model-building phase of AI development faces significant security challenges due to the sensitivity of data and the potential for vulnerabilities in the model architecture itself. Before deployment, AI models are trained on large datasets that often contain sensitive or personally identifiable information (PII), which introduces privacy risks and makes compliance with regulations like GDPR complex.

Startups focused on securing the AI model-building phase represent a promising opportunity, given the growing need for privacy-preserving and resilient AI solutions. As regulatory scrutiny of data privacy and model security intensifies, companies innovating with technologies like model vulnerability scanning, PII redaction, synthetic data and federated learning are well-positioned.

4: Model consumption/inference, post-production

In the model-consumption phase of AI, when models are deployed and actively used, there are security challenges that can compromise both the integrity of the model and the safety of the data it processes. Deployed models are vulnerable to adversarial attacks, model theft, and data leakage, which can lead to unintended disclosures and compromised decision-making. Building effective defenses is challenging because the attack surface is broad, encompassing APIs, model outputs, and potential feedback loops.

Startups focused on AI security for the post-production phase present a unique opportunity. Many enterprises tend to focus on securing data pipelines or training models, often overlooking the unique risks that arise after deployment, such as inference-time attacks. Companies pioneering AI-focused detection and response, AI firewalls, and red teaming will become critical to enterprises seeking to secure AI applications as model consumption and inference grow.

Expanding the security perimeter through a platform approach

The speed of development in the AI security space is faster than ever. However, governance, AI access management, model building (pre-production), and model consumption/inference (post-production) security are strong starting points for startups to create wedges on their path to becoming broader platforms. We expect AI security providers to enter the market in one segment and then quickly expand their feature set to span the market map.

Platform-oriented AI security firms have the potential for higher market penetration, as they’re adaptable to various industries. Their tools are being designed to integrate seamlessly with existing systems, making them attractive for enterprise-level clients in sectors including finance, healthcare, and critical infrastructure. Platform-based solutions are also emerging to enable customers to monitor, protect, and audit AI models continuously, a capability that meets growing regulatory expectations while improving transparency and operational efficiency.

As AI adoption accelerates, the demand for comprehensive security solutions that protect against model vulnerabilities and data breaches will continue to rise. Companies that offer a holistic platform approach are well-positioned to capture significant market share and deliver long-term value.

 

By Eric Lee, Principal, Cota Capital

 

As we outlined in a previous post, we at Cota Capital believe there is tremendous opportunity in the AI tooling sector. There are many subsectors within AI tooling that intrigue us. In this article, we’ll focus on one key area: security and compliance.

These are the AI tools that can be used to build guardrails for protecting large language models (LLMs) from external attacks and misuse. They’re also the tools that help enforce security policies, detect and mitigate risks, and better secure data and systems.

They’re vitally important because the emergence of generative AI and LLMs has brought risks and vulnerabilities that cyber attackers are eagerly exploiting. In fact, by 2028, more than one-fifth of cyberattacks and data leaks will involve generative AI, according to Gartner. That’s why we’re now seeing a host of AI security startups working to address this challenge.

The race is on to win in AI security

As AI systems grow more prevalent, so do the threats they face. Threats like adversarial attacks, data poisoning, and model inversion are all capable of manipulating or corrupting AI models in ways that are difficult to detect and defend against.

A major factor complicating AI security is the constant evolution of AI models. As systems learn and update, new security weaknesses emerge, so it’s difficult to stay ahead of potential threats. Additionally, AI’s reliance on large datasets, often containing sensitive or proprietary information, poses significant data privacy and security challenges. On top of all this, there is a lack of universally accepted security standards for AI, which adds to the difficulty of AI security because it creates inconsistencies in defense strategies.

These challenges are driving the demand for AI security, and we anticipate that AI security and governance will increasingly account for a larger share of the $184 billion cybersecurity market, as estimated by Gartner. As organizations implement AI everywhere, they’re recognizing the value and necessity of AI security and they’re willing to invest in new solutions.

While the cybersecurity industry is highly fragmented, standalone cybersecurity businesses have significant potential to grow into substantial and successful enterprises, much like market leaders such as Palo Alto Networks, CrowdStrike, and Zscaler. The top 20 cybersecurity vendors generate nearly $120 billion in revenue, seven of which have a market capitalization exceeding $10 billion. So the race is on. Who will win? Using prior innovation cycles as our guide, we can expect the AI security sector to follow a similar trajectory, paving the way for the creation of massive industry leaders.

4 areas where AI security startups can find success

Here are four areas where we see real opportunity in AI security and compliance.

1: Governance

As regulation matures and enterprises see that many of their AI programs do not meet regulatory requirements, attention will shift from merely understanding if models comply with regulations to actually building models that do comply with regulations. Many of the most promising governance solutions come from very young companies—startups founded in the last 1-3 years, often with fewer than 100 employees and limited capital raised.

Will a handful of them emerge and prosper? Or will agile incumbents—such as broader governance, risk, and compliance (GRC) platforms and cloud vendors that support the full ML lifecycle—step in and provide the AI tools organizations need? While we expect to see GRC platforms start releasing AI governance products, we are constructive about vertical-specific solutions, particularly in high-risk industries like financial services and insurance.

We expect vertical-specific platforms to be better equipped to address the unique regulatory, operational, and risk challenges of these industries. For example, in financial services, these platforms can be designed for anti-money laundering (AML) compliance, Know Your Customer (KYC) protocols, and stress-testing requirements. They can also be developed to adapt to the rapidly evolving regulatory landscape specific to the industry, such as GDPR for insurance or Basel III for banking.

2: AI access management

Another big security challenge is managing how employees access GenAI and secure enterprise AI applications. The growth of identity access management companies in cybersecurity, like Okta, demonstrates the potential for startups in the AI access management space to mature into large, independent public companies. Similarly, companies like Auth0 and Ping Identity gained traction by addressing identity security in today’s complex, multi-cloud ecosystems, with each capturing significant market value before being acquired or going public.

The rise of attacks, like model poisoning and data injection, highlights the need for tailored identity and access management (IAM) tools that understand AI pipelines. Startups that make access controls tailored to AI-specific environments and focus on adaptive, AI-driven IAM solutions are poised to become essential players. As companies and regulatory bodies place greater emphasis on securing AI models and data flows, startups addressing these challenges can set themselves apart by offering high-value services that general or incumbent providers cannot deliver.

3: Model building, pre-production

The model-building phase of AI development faces significant security challenges due to the sensitivity of data and the potential for vulnerabilities in the model architecture itself. Before deployment, AI models are trained on large datasets that often contain sensitive or personally identifiable information (PII), which introduces privacy risks and makes compliance with regulations like GDPR complex.

Startups focused on securing the AI model-building phase represent a promising opportunity, given the growing need for privacy-preserving and resilient AI solutions. As regulatory scrutiny of data privacy and model security intensifies, companies innovating with technologies like model vulnerability scanning, PII redaction, synthetic data and federated learning are well-positioned.

4: Model consumption/inference, post-production

In the model-consumption phase of AI, when models are deployed and actively used, there are security challenges that can compromise both the integrity of the model and the safety of the data it processes. Deployed models are vulnerable to adversarial attacks, model theft, and data leakage, which can lead to unintended disclosures and compromised decision-making. Building effective defenses is challenging because the attack surface is broad, encompassing APIs, model outputs, and potential feedback loops.

Startups focused on AI security for the post-production phase present a unique opportunity. Many enterprises tend to focus on securing data pipelines or training models, often overlooking the unique risks that arise after deployment, such as inference-time attacks. Companies pioneering AI-focused detection and response, AI firewalls, and red teaming will become critical to enterprises seeking to secure AI applications as model consumption and inference grow.

Expanding the security perimeter through a platform approach

The speed of development in the AI security space is faster than ever. However, governance, AI access management, model building (pre-production), and model consumption/inference (post-production) security are strong starting points for startups to create wedges on their path to becoming broader platforms. We expect AI security providers to enter the market in one segment and then quickly expand their feature set to span the market map.

Platform-oriented AI security firms have the potential for higher market penetration, as they’re adaptable to various industries. Their tools are being designed to integrate seamlessly with existing systems, making them attractive for enterprise-level clients in sectors including finance, healthcare, and critical infrastructure. Platform-based solutions are also emerging to enable customers to monitor, protect, and audit AI models continuously, a capability that meets growing regulatory expectations while improving transparency and operational efficiency.

As AI adoption accelerates, the demand for comprehensive security solutions that protect against model vulnerabilities and data breaches will continue to rise. Companies that offer a holistic platform approach are well-positioned to capture significant market share and deliver long-term value.

 

Company building is not a spectator sport. Our team of operating professionals advises and works alongside our companies. Think of us as both coaches and teammates.