AI agents are supposed to make work easier. But they’re also creating a whole new category of security nightmares.
As companies deploy AI-powered chatbots, agents, and copilots across their operations, they’re facing a new risk: How do you let employees and AI agents use powerful AI tools without accidentally leaking sensitive data, violating compliance rules, or opening the door to prompt injection attacks? WitnessAI just raised $58 million to find a solution, building what it calls “the confidence layer for enterprise AI.”
Today on TechCrunch’s Equity podcast, Rebecca Bellan was joined by Barmak Meftah, co-founder and partner at Ballistic Ventures, and Rick Caccia, CEO of WitnessAI, to discuss what enterprises are actually worried about, why AI security will become an $800 billion to $1.2 trillion market by 2031, and what happens when AI agents start talking to other AI agents without human oversight.
Listen to the full episode to hear:
- How enterprises accidentally leak sensitive data through “shadow AI” usage.
- What CISOs are worried about right now, how the problem has evolved over the past 18 months, and where it’s headed over the next year.
- Why they think traditional cybersecurity approaches won’t work for AI agents.
- Real examples of AI agents going rogue, including one that threatened to blackmail an employee.
Source: TechCrunch