HomeArticle

Behind Cyera's valuation reaching $6 billion: Security is not an added bonus for AI, but a necessary part for its implementation.

阿尔法公社2025-06-25 18:18
AI security is not an added bonus but an essential part of the implementation of AI applications.

As half of 2025 has passed, we have witnessed an explosion in AI applications. Various Agent and Vibe Coding tools are everywhere, and a number of AI-native hardware integrating software and hardware have emerged.

At the tool layer, AI security tools are the most active area for startups and financing. In the past month alone, there have been new financing news such as Cyera raising another $500 million in funding, with a valuation of $6 billion; Guardz securing $56 million in Series B financing, and Trustible obtaining $4.6 million in seed-round financing.

The internal reason for the continuous activity of AI security tools is that security is one of the fundamental requirements of the technology industry. Without a secure foundation, the ecosystem of products and applications cannot thrive.

For example, in cloud security: the more secure cloud computing is, the more confident enterprises will be in deploying their businesses on the cloud; the more secure data is, the more freely enterprises and individual users can use AI products, which promotes the development of the entire application ecosystem.

It can be said that AI security is not an added bonus but a necessary part of the implementation of AI applications.

Security tools have covered the entire AI industry chain

Security technology evolves with the evolution of threats and changes in product forms. In the past, areas such as computer security, network security, and cloud security have emerged, and AI security is the latest one.

For example, in the field of computer security, the main security product is antivirus software. A well - known typical product is 360 Antivirus, whose main functions are to detect, remove, and provide real - time protection against computer viruses.

Enterprise network security mainly involves establishing a clear "boundary" between an organization's internal network (trusted area) and the external Internet (untrusted area). Its main goal is to protect this boundary, prevent unauthorized access and malicious traffic from entering the internal network from the outside, and monitor and manage the internal network. A typical company in this field is Cisco.

Since the emergence of cloud computing, there has been a significant shift in security, which has also given rise to a new generation of cloud - security giants, such as Wiz, which was acquired by Google for $32 billion. It develops a Cloud Native Application Protection Platform (CNAPP), which can provide comprehensive protection for cloud platforms and applications on them. Its core lies in the visibility, situational management, and comprehensive protection and authorization management of all digital assets on the cloud platform.

In the AI era, new technologies have put forward new requirements for security. For example, it is necessary to protect AI models, prevent enterprise data from being leaked due to AI applications, defend against new types of AI - driven attacks, and meet the needs of AI security governance and compliance.

Companies protecting AI at the model level

ProtectAI

Financing: It completed a $60 million Series B financing led by Evolution Equity Partners, and the company's cumulative financing has reached $108.5 million.

Product: Protect AI has created a new category called MLSecOps (Machine Learning + Security + Operations). AI Radar is Protect AI's flagship product, which mainly addresses the key challenges of making enterprise users' AI systems more visible, easier to audit, and better managed.

First, AI Radar enables organizations to deploy AI more securely by evaluating the security of their ML supply chain and quickly identifying and mitigating risks. It monitors and gains insights into the attack surface of ML systems in real - time, generates and updates tamper - proof ML Bills of Materials (MLBOM). Different from SBOM, it provides a list of all components and dependencies in ML systems, allowing customers to fully understand the origin of AI/ML. It can also track a company's "software supply chain" components: operational tools, platforms, models, data, services, and cloud infrastructure.

In addition, AI Radar uses an integrated model scanning tool to detect security policy violations, model vulnerabilities, and malicious code injection attacks in large models and other ML inference workloads. It can also be integrated with third - party AppSec and CI/CD orchestration tools and model robustness frameworks.

HiddenLayer

Financing: It received $50 million in Series A financing co - led by M12 and Moore Strategic Ventures.

Product: HiddenLayer's flagship product is a security platform (MLSec) for detecting and preventing cyberattacks against machine - learning - driven systems. It is the industry's first MLDR (Machine Learning Detection and Response) solution, which can protect enterprises and their customers from emerging attack methods.

The MLSec platform can defend against malicious attacks such as model parameter extraction, model theft, model training data extraction, data poisoning, model injection, and model hijacking. In essence, it can be understood as a platform for comprehensive model protection.

Haize Labs

Financing: It received new financing led by General Catalyst, and its post - investment valuation reached $100 million.

Product: In AI security, preventing AI "jailbreaking" is an important but not perfectly solved area. Once an AI is jailbroken, it can be used to generate a large amount of inappropriate text, bloody images, etc., and can even automatically attack other networks, which is a "nightmare" for commercial AI and social media.

The method to prevent AI "jailbreaking" is usually model red - team testing. Specifically, the red team "evaluates the model" to attack the target model in a way that triggers an insecure response. The main challenge is that it still requires human judgment on whether the evaluation of the "evaluation model" is correct, and red - team testing involving humans is difficult to scale and requires a lot of training.

Haize Labs' core technology, Haizing, automates red - team testing and stress testing, which not only reduces costs but also encourages AI companies to conduct more comprehensive inspections and improvements on their AI systems.

Companies protecting AI applications and data

Cyera

Financing: After raising a total of $600 million in Series C and D financing in 2024, it received $500 million in financing led by Lightspeed, Greenoaks, and Georgian, with a valuation of $6 billion, and the cumulative financing exceeded $1.2 billion.

Product: Cyera has created a new security category called DSPM (Data Security Posture Management), whose core concept is discovery + classification + posture. Its functions include data discovery and classification, data visibility, data risk assessment, data security policy management, data remediation, and response.

Specifically, its platform uses AI to learn an enterprise's proprietary data and its business uses in real - time, helping the security team understand the location, use, and access rights of data and apply appropriate control measures to ensure security. Its large model can automatically discover, classify, and protect sensitive data everywhere. Coupled with the platform's policy engine, it can identify misconfigurations, recommend specific access controls, and generate new data security policies to ensure compliance and manage sensitive data access.

Cyberhaven

Financing: Cyberhaven received $100 million in Series D financing led by StepStone Group, with a valuation of over $1 billion.

Product: The main problem Cyberhaven solves is the leakage of an enterprise's sensitive data caused by employees using unmanaged AI applications within the enterprise. It addresses this issue by tracking data lineage or the data lifecycle across different users and endpoints. Data lineage refers to tracking the origin, movement, and transformation of data within an entire organization.

Cyberhaven's core technology is the Large Lineage Model (LLiM). This model is trained with data - flow datasets rather than language datasets. It can identify which data or data flows are at risk and provide explanations. After identifying and tracking the flow of sensitive data, Cyberhaven's platform can prevent data from being leaked to unauthorized AI tools in real - time.

Reco

Financing: It received $25 million in Series A+ financing participated by Insight Partners, Zeev Ventures, Boldstart Ventures, Angular Ventures, and Redseed, with a cumulative financing of $55 million.

Product: Reco uses a new security strategy of "dynamic SaaS security" to revolutionize traditional SaaS Security Posture Management (SSPM) tools. Traditional tools are slow to update and have limited integration capabilities, which are no longer suitable for the new security vulnerabilities brought about by the emergence of AI and AI agents.

Specifically, Reco provides a comprehensive application discovery engine that can identify and classify more than 50,000 applications, providing enterprises with a panoramic view of their SaaS ecosystem.

Its proprietary SaaS App Factory can currently protect more than 175 applications and can integrate new applications within 3 - 5 days, which is much faster than the integration cycle of traditional suppliers, which often takes months.

Reco operates 10 times faster than its competitors, and its deployment and maintenance costs are reduced by 80%. Even when continuously introducing or updating new applications and AI agents, it can provide continuous security and compliance protection, offering the necessary protection at the speed required by enterprises.

In addition, it also provides two security - related AI agents for enterprises. The alert agent is responsible for classifying and enriching a large number of alerts, relieving security analysts of this burden. The identity agent is responsible for monitoring who has which access rights in the customer's SaaS ecosystem and whether these access rights pose risks.

Companies protecting enterprise businesses from AI security governance and compliance

Vanta

Financing: It received $150 million in Series C financing led by Sequoia Capital, with a valuation of $2.45 billion, and the cumulative financing reached $353 million.

Product: Vanta is an AI - driven compliance platform. Its product, Vanta AI, has functions such as supplier security assessment, generative questionnaire response, and intelligent control mapping. It can automatically extract discovery content from SOC 2 reports, customized security questionnaires, and other documents, helping customers complete supplier security audits in a very short time.

It can also learn from an enterprise's database and past AI response insights, quickly respond to customers, and provide intelligent suggestions, mapping existing tests and policies to relevant control measures, making it easy to prove compliance with new frameworks.

The Vanta Trust Center is a platform for Vanta's customers to showcase their security and compliance to potential customers. It can automate the time - consuming security audits in each transaction, ensuring that potential customers can seamlessly access the required security information to make purchasing decisions. Through Vanta's continuous monitoring, customers can provide real - time evidence to prove that their control measures have passed. In cooperation with Vanta AI, the Vanta Trust Center can also provide documents and various personalized security answers for customers' potential customers.

Trustible

Financing: It completed a $4.6 million seed - round financing led by Lookout Ventures.

Product: Trustible helps enterprises manage risks and comply with various global AI regulations while accelerating AI applications through its one - stop software platform. Its product is the Trustible AI Governance Platform, which can record and track internal AI use cases, models, and data sources in an enterprise; manage various risks and hazards of AI systems; ensure that enterprises meet and adapt to the constantly changing global AI regulatory requirements, such as the EU's AI Act and the US NIST AI Risk Management Framework; and systematically evaluate the risks and compliance of AI systems and suppliers.

In short, its software platform automates, streamlines, and visualizes complex governance work (such as asset inventory, risk assessment, and regulatory compliance). The ultimate goal is to enable enterprises to accelerate AI innovation on the basis of building trust.

AI security is not an added bonus but a necessary part of the implementation of AI applications

Currently, AI technology and applications are still in the early stage, and new threats brought by AI technology are emerging one after another.

For example, as AI technology lowers the threshold for attacks and exponentially increases the speed and scale of attacks, according to Cisco's latest report, 74% of organizations have felt the real impact of AI threats, and 90% expect this impact to continue to intensify in the next 1 - 2 years. Moreover, attackers focus on areas such as models and training data, and such attacks are often more hidden and destructive.

In addition, concerns about data privacy are widespread among users, especially enterprise users. Approximately 84% of people prefer AI solutions that do not rely on external data sharing.

The new product form also brings new concerns. Andrej Karpathy, a founding member of OpenAI, said that he dare not use AI agents because agents can access private data, come into contact with untrusted content, and have external communication capabilities, which can easily lead to data leakage.

When AI agents become the mainstream form of AI applications in 2025, the top AI experts dare not use them due to security issues, which further highlights the importance of AI security. (The value of enterprises such as Cyera is further manifested.)

Of course, AI does not only bring bad news. For example, it has comprehensively improved threat - detection capabilities and the operational efficiency of security teams. 88% of security teams have saved a lot of time through AI, mainly reflected in automated alert aggregation (condensing thousands of alerts into key events), accelerating threat investigations (35%), and speeding up the recovery process.

So, in the field of AI security, which areas are the most promising for startups? Obviously, the protection of AI applications and data privacy is the most important direction because it is the biggest obstacle to the popularization of AI applications. Of course, cloud security is also very important because deploying models on the cloud and providing them through APIs has become a mainstream approach. In these two directions, even though there are leading companies overseas, domestic entrepreneurs still have sufficient space for startups.

Of course, in the field of security startups, in addition to technology, entrepreneurs also need rich experience. Because while the ability to solve problems is important, finding the right direction and customers is crucial for an enterprise's survival.

This article is from the WeChat public account "Alpha Commune" (ID: alphastartups). The author is someone who discovers extraordinary entrepreneurs. It is published by 36Kr with authorization.