The GenAI Security Crisis Few Can See — But These Startups Are Mapping The Gaps | Byron Acohido (Pulitzer Prize-Winning Business Journalist)

LAS VEGAS — A decade ago, the rise of public cloud brought with it a familiar pattern: runaway innovation on one side, and on the other, a scramble to retrofit security practices not built for the new terrain.

Related: GenAI workflow risks

Shadow IT flourished. S3 buckets leaked. CISOs were left to piece together fragmented visibility after the fact.


Something similar—but more profound—is happening again. The enterprise rush to GenAI is triggering a structural shift in how software is built, how decisions get made, and where the risk lives. Yet the foundational tools and habits of enterprise security—built around endpoints, firewalls, and user identities—aren’t equipped to secure what’s happening inside the large language models (LLMs) now embedded across critical workflows.

This is not just a new attack surface. It’s a systemic exposure—poorly understood and dangerously under-addressed.

The newly published IBM 2025 Cost of a Data Breach Report highlights a widening chasm between AI adoption and governance. It reveals that 13% of organizations suffered breaches involving AI models or applications, and among these, a staggering 97% lacked proper AI access controls.

Encouragingly, a new generation of AI-native security vendors is quietly charting the contours of this gap. Among them: Straiker, DataKrypto, and PointGuard AI.

I encountered all three here in Las Vegas at Black Hat 2025 — and their candid insights helped crystallize what I now see as a systemic failure hiding in plain sight.


Each startup is tackling a different facet of GenAI’s attack surface. None claim to offer a silver bullet. But taken together, they hint at what an AI-native security stack might eventually require.

AI-powered tools are flooding enterprise workflows at every level. From marketing copy to software development, GenAI is now threaded into production processes with startling speed. But the underlying engines—LLMs—operate using unfamiliar logic, drawing conclusions and taking actions in ways security teams aren’t trained to inspect or control.

Shadow AI is more than an abstract concern. Research from Menlo Security shows a 68% increase in shadow GenAI usage in 2025 alone, with 57% of employees admitting they’ve input corporate data into unsanctioned AI tools. The rise of AI web traffic, up 50% to 10.5 billion visits, signals how widespread this risk has become, even in browser-only usage contexts.


Ankur Shah, CEO of Straiker, put it bluntly: “If you’re not watching what your AI agent is doing in real time, you’re blind.” Straiker focuses on what happens when GenAI becomes agentic—when it starts chaining reasoning steps, invoking tools, or making decisions based on inferred context.

In this mode, traditional AppSec and data loss prevention tools fall flat. Straiker’s Ascend AI and Defend AI offerings are designed to red-team these behaviors and enforce runtime policy guardrails. Their insight: the attack surface is no longer just the prompt. It’s the behavior of the agent.
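To make that idea concrete, here is a minimal Python sketch of what a runtime guardrail for agent tool calls might look like. The tool names, allowlist, and pattern rules are illustrative assumptions for this article, not Straiker’s actual implementation:

```python
# Illustrative only: a toy runtime guardrail that vets an agent's tool
# calls against an allowlist and flags secret-like data in arguments.
# Names and rules are hypothetical, not Straiker's product logic.
import re
from dataclasses import dataclass

@dataclass
class ToolCall:
    tool: str          # e.g. "run_sql", "http_request"
    arguments: str     # serialized arguments the agent supplied

ALLOWED_TOOLS = {"search_docs", "run_sql"}
SECRET_PATTERN = re.compile(r"(api[_-]?key|password|ssn)", re.IGNORECASE)

def vet_tool_call(call: ToolCall) -> bool:
    """Return True if the call may proceed; False blocks it at runtime."""
    if call.tool not in ALLOWED_TOOLS:
        return False                  # unknown tool: deny by default
    if SECRET_PATTERN.search(call.arguments):
        return False                  # arguments carry secret-like data
    return True

# An agent framework would invoke this before executing each step:
print(vet_tool_call(ToolCall("run_sql", "SELECT * FROM orders")))   # True
print(vet_tool_call(ToolCall("http_request", "POST api_key=...")))  # False
```

The point of the sketch is the deny-by-default posture: the guardrail judges each step of the agent’s behavior at runtime, not just the user’s initial prompt.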

If Straiker focuses on the “what,” then DataKrypto focuses on the “where.” Specifically: where does GenAI process and store its most sensitive data? The answer, according to DataKrypto founder Luigi Caramico, is both simple and alarming: in cleartext, inside RAM.


“All the data—the model weights, the training materials, even user prompts—are held unencrypted in memory,” Caramico observes. “If you have access to the machine, you have access to everything.”

This exposure isn’t hypothetical. As more companies fine-tune LLMs with proprietary IP, the risk of theft or leakage escalates dramatically. Caramico likens LLMs to the largest lossy compression engines ever built—compressing terabytes of training data into billions of vulnerable parameters.

DataKrypto’s response is a product called FHEnom for AI: a secure SDK that encrypts model data in memory using homomorphic encryption, integrated with trusted execution environments (TEEs). This protects both the model itself and the sensitive data flowing into and out of it—without degrading performance. “Encryption at rest and in motion aren’t enough,” Caramico said. “This is encryption in use.”
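For readers who want to see what computing on encrypted data looks like in practice, here is a minimal sketch using the open-source TenSEAL library (a Python wrapper around Microsoft SEAL’s CKKS scheme). It is a generic illustration of homomorphic encryption, not DataKrypto’s FHEnom SDK: the arithmetic below runs entirely on ciphertexts, so the plaintext values never appear in memory during the computation.

```python
# Generic "encryption in use" illustration with TenSEAL
# (pip install tenseal). Not DataKrypto's FHEnom SDK.
import tenseal as ts

# Set up a CKKS context: approximate arithmetic over encrypted reals.
context = ts.context(
    ts.SCHEME_TYPE.CKKS,
    poly_modulus_degree=8192,
    coeff_mod_bit_sizes=[60, 40, 40, 60],
)
context.global_scale = 2**40
context.generate_galois_keys()

# Encrypt two vectors; from here on, only ciphertexts are handled.
enc_a = ts.ckks_vector(context, [1.0, 2.0, 3.0])
enc_b = ts.ckks_vector(context, [4.0, 5.0, 6.0])

# Arithmetic runs directly on the encrypted data.
enc_sum = enc_a + enc_b
enc_dot = enc_a.dot(enc_b)

print(enc_sum.decrypt())  # approximately [5.0, 7.0, 9.0]
print(enc_dot.decrypt())  # approximately [32.0]
```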


The third leg of the emerging GenAI security stool comes from PointGuard AI, which focuses on discovery and governance. As AI code generation and prompt engineering proliferate, organizations are losing track of what AI tools are being used where, and by whom. Willy Leichter, PointGuard’s Chief Security Officer, frames it as a shadow IT problem on steroids.

“AI is the fastest-growing development platform we’ve ever seen,” he noted. “Developers are pulling in open-source models, auto-generating code, and building apps without any oversight from security teams.”

PointGuard scans code repos, runtime environments, and MLOps pipelines to surface unsanctioned AI use, detect prompt injection exposures, and score AI posture. It builds a bridge between AppSec and data governance teams who increasingly find themselves on the same front lines.
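As a back-of-the-napkin illustration of what that discovery layer does, here is a toy Python scanner that walks a repository looking for imports of well-known GenAI SDKs. The SDK list and the approach are deliberately simplistic assumptions; products like PointGuard do this continuously across repos, runtimes, and MLOps pipelines:

```python
# Illustrative only: a toy scanner that walks a repo for imports of
# well-known GenAI SDKs, a crude stand-in for the discovery layer
# that commercial tools automate at scale.
import re
from pathlib import Path

AI_SDK_PATTERN = re.compile(
    r"^\s*(?:import|from)\s+(openai|anthropic|langchain|transformers)\b",
    re.MULTILINE,
)

def find_ai_usage(repo_root: str) -> list[tuple[str, str]]:
    """Return (file, sdk) pairs for every GenAI SDK import found."""
    hits = []
    for path in Path(repo_root).rglob("*.py"):
        try:
            text = path.read_text(encoding="utf-8", errors="ignore")
        except OSError:
            continue
        for match in AI_SDK_PATTERN.finditer(text):
            hits.append((str(path), match.group(1)))
    return hits

if __name__ == "__main__":
    for file, sdk in find_ai_usage("."):
        print(f"{file}: uses {sdk}")
```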

While their approaches differ, these companies are all converging on the same conclusion: the current security model isn’t just incomplete—it’s obsolete. Straiker brings behavioral monitoring into the spotlight. DataKrypto protects the compute layer itself. PointGuard restores visibility and governance to a world of AI-driven code and logic. Their respective visions are drawing the early contours of what a security-first foundation for GenAI might look like.

There is now, in fact, an OWASP Top 10 for LLM Applications. But it is still early days, and there are few universal frameworks or agreed-upon best practices for how to integrate these new risks into traditional security operations. CISOs face a landscape that is both fragmented and urgent, where model misuse, shadow deployments, and memory scraping represent three fundamentally different risks—each requiring new tools and mental models.

To keep pace, security itself must evolve. That means understanding AI not just as a tool, but as a new kind of software logic that demands purpose-built protection. It means building systems that can interpret autonomous behavior, encrypt active memory, and continuously surface hidden AI integrations. Most of all, it means learning to think less like compliance officers and more like language models—probabilistic, context-aware, and relentlessly adaptive.

“Security can’t just follow the playbook anymore,” Leichter observed. “We have to match the speed and shape of the thing we’re trying to protect.”

That, in the end, may be the most important shift of all.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

(Editor’s note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own — drawn from lived experience and editorial judgment honed over decades of investigative reporting.)

 
