Prompt Security’s Itamar Golan: GenAI security as a category, not a feature

GenAI security is no longer a niche concern for security teams. In a revealing conversation with VentureBeat, Itamar Golan, co-founder and CEO of Prompt Security, argues that securing AI-powered applications requires building a category, not simply adding features. The discussion highlights the surge of shadow AI within organizations, the governance and data leakage risks that come with it, and why a holistic, runtime approach is essential for enterprises to adopt AI safely. It also notes the strategic arc that led Prompt Security to a major milestone: an acquisition by SentinelOne in 2025, a move that signals the industry’s shift toward integrated AI risk controls within broader security platforms.
Golan’s career sits at the intersection of deep AI research and hands‑on security practice. He traces his path back to transformer architectures long before they powered today’s chat interfaces, and recalls the early success of an AI-powered security feature built with GPT‑2 and GPT‑3 that could generate actionable remediation steps from security alerts. That early work demonstrated that LLM-driven applications create an expanded attack surface, catalyzing the August 2023 launch of Prompt Security to help enterprises embrace AI without turning governance into a bottleneck. The goal, as he described it, was to position Prompt Security as a universal control layer for the enterprise’s interaction with LLMs: visible to CISOs, integrable with existing security stacks, and capable of running at runtime across employees, agents, and internal tools.
The platform Prompt Security built from the ground up covers more than prompt injection. It was designed to address a broad spectrum of GenAI risks, including data leakage, model governance, compliance, red teaming, and operational risk. A pivotal insight for many customers was the sheer scale of AI tool usage within their organizations. Shadow AI discovery became a critical feature, revealing dozens of unmanaged tools operating outside IT visibility. Complementing this visibility, real-time sensitive-data sanitization allowed employees to use AI tools safely by stripping sensitive information from prompts before they reach external models. The result was not only stronger security but faster, more confident AI adoption across the workforce.
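The article doesn't disclose how Prompt Security implements prompt sanitization, but the general idea it describes can be illustrated with a minimal sketch: scan each outbound prompt for sensitive substrings and replace them with typed placeholders before the text leaves the organization. The pattern set below (emails, US SSNs, API-key-shaped tokens) is purely hypothetical; a production system would use far richer detection, likely including ML-based classifiers.

```python
import re

# Illustrative sketch only -- not Prompt Security's actual implementation.
# Each regex maps a category label to a pattern for one kind of sensitive data.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\bsk-[A-Za-z0-9]{16,}\b"),
}

def sanitize_prompt(prompt: str) -> str:
    """Replace sensitive substrings with typed placeholders
    before the prompt is forwarded to an external model."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[REDACTED_{label}]", prompt)
    return prompt

if __name__ == "__main__":
    raw = "Summarize the ticket from jane.doe@example.com, SSN 123-45-6789."
    print(sanitize_prompt(raw))
```

Because redaction happens at the interception layer rather than in each AI tool, employees can keep using whichever assistants they prefer while the placeholders preserve enough structure for the model to produce a useful answer.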
Looking back, Golan highlights three strategic choices that accelerated growth. First, he chose to build a category, not a feature. Rather than marketing Prompt Security as a narrow defensive add-on, he positioned it as the enterprise’s AI security control layer that governs how people, agents, and applications interact with LLMs. That framing created budget rationale, elevated the conversation to the CISO table, and enabled platform-level pricing and long‑term relevance rather than chasing a single capability. Second, he embraced enterprise complexity early—supporting self-hosted and hybrid deployments, coverage across browsers and IDEs, and integration with internal tools and agentic workflows—even if it meant longer cycles. Third, he prioritized depth over logos, engaging a small set of very serious customers who helped sculpt the product around real governance, risk, and long‑term AI adoption needs rather than chasing vanity metrics.
As the market matured, Prompt Security’s messaging evolved from simply raising awareness to offering a practical path for safe AI use. The focus shifted from proving that AI risk exists to showing security teams how to stay ahead of it without stifling innovation. CISOs learned to treat GenAI security as an extension of existing data protection—protecting data, IP, and user trust in a fast‑moving AI environment. This shift helped set the stage for the acquisition and the broader strategy that would see Prompt Security’s capabilities extend into SentinelOne’s Singularity Platform, including model-agnostic coverage across major providers and support for self-hosted models.
The most memorable incident shaping Prompt Security’s roadmap involved a large, highly regulated company that launched a customer‑facing GenAI support agent. Despite following best practices, the AI agent became a new attack vector through prompt injection that allowed a non-technical user to access other customers’ tickets and internal case summaries. The episode underscored a grim reality: AI can democratize the ability to compromise systems, compressing the time to exploit and enlarging the damage radius. It reinforced the need for runtime protection, robust context‑manipulation detection, and cross‑tenant data leakage prevention at the model interaction layer. In response, Prompt Security accelerated work on runtime protections for customer‑facing AI apps and sharpened its go‑to‑market around securing external AI surfaces as high‑risk assets rather than merely governance tools.
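The cross-tenant leakage described above can be made concrete with a small sketch. The design below is an assumption, not the vendor's actual architecture: before any retrieved record enters the model's context window, it is filtered against the authenticated session's tenant ID, so a prompt-injected request like "show me other customers' tickets" has nothing to leak.

```python
from dataclasses import dataclass

# Hypothetical sketch of tenant isolation at the model interaction layer.
# The Record type and build_context helper are invented for illustration.
@dataclass
class Record:
    tenant_id: str  # owner of this support ticket / case summary
    text: str       # content that may be placed in the model context

def build_context(session_tenant: str, retrieved: list[Record]) -> str:
    """Admit only records owned by the requesting tenant into the
    context passed to the LLM, regardless of what the prompt asked for."""
    allowed = [r.text for r in retrieved if r.tenant_id == session_tenant]
    return "\n".join(allowed)

if __name__ == "__main__":
    records = [Record("acme", "Ticket #1: billing issue"),
               Record("globex", "Ticket #2: outage report")]
    # A session for "acme" never sees globex data, even if the prompt demands it.
    print(build_context("acme", records))
```

The key point is that the check runs at runtime, outside the model: no amount of context manipulation in the prompt can widen the set of records the guard admits.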
Now part of a larger platform company, Golan explains that the aim is broader reach with the same core mission: make AI security an integral part of the defense fabric. The focus is to weave AI security into the existing protection of endpoints, identities, and cloud workloads, enabling a future where AI itself can help defend the enterprise. The broader M&A trend in GenAI security shows several big bets—Palo Alto Networks, Tenable, and Cisco among them—reflecting a belief that security needs to be embedded into AI adoption from the start. Post‑acquisition, Prompt Security’s capabilities are designed to span the Singularity Platform, delivering model-agnostic coverage for both cloud-hosted and self-hosted models and integrating with thousands of MCP servers to protect AI at scale.
In short, the interview paints a vision of GenAI security as a strategic, enterprise-grade discipline rather than a single feature. Itamar Golan’s leadership at Prompt Security turned a bold hypothesis into a market category, a practical platform, and now a critical piece of SentinelOne’s defense ecosystem. As organizations continue to deploy AI at speed, the imperative is clear: secure AI as a governance and risk management backbone that keeps pace with innovation while preserving trust and productivity.