By Byron V. Acohido

Just hours before it was set to expire on April 16, the federal contract funding MITRE’s stewardship of the CVE (Common Vulnerabilities and Exposures) program was given a temporary extension by CISA. Related: Brian Krebs’ take on MITRE funding expiring

This averted an immediate shutdown, but it didn’t solve the underlying problem. Far from it. The system that underpins vulnerability disclosure—the nervous system of cybersecurity risk management—is showing signs of structural fatigue. And we’re long overdue for a serious discussion about what continuity and resilience should actually look like in this space.

Several longtime colleagues of mine have voiced sharp, necessary observations in the wake of this narrowly avoided shutdown.

One of the clearest signals this crisis sent is how fragile our vulnerability disclosure pipeline really is. The CVE program isn’t just a list of numbers—it’s a Rosetta Stone that security teams rely on to identify, prioritize, and communicate risk. Brian Krebs got straight to the heart of it: without continued funding, the site might stay online, but no new CVEs would be added. That would paralyze threat response efforts across both public and private sectors at a time when precision and speed are everything.

 

Whither the outcry?

What’s more troubling is how little urgency the broader industry showed as the situation unfolded. We all say CVEs are essential—but where was the outcry? Deb Radcliff, a longtime peer whose clarity I’ve come to respect deeply, raised this uncomfortable point on her LinkedIn feed. The community, she observed, largely failed to rally. That’s a telling indictment of how cybersecurity still struggles to treat its shared infrastructure as something worth fighting for.

And if this near-shutdown rattled operations, it also exposed an underlying architectural flaw. The entire system is too centralized, too brittle. Francesco Cipollone, CEO of Phoenix Security, unpacked this well in his recent blog post. He pointed out how modern DevSecOps pipelines depend on timely, machine-readable CVE data—and when that data stutters, threat modeling, SBOM tracking, and risk scoring all start to fail. Cipollone’s response? Build a more resilient, federated model. One that synchronizes across multiple data sources and continues delivering actionable insight, even when a single node falters.

 

New architecture needed?

Cipollone isn’t just observing the problem—he’s actively rethinking the architecture. Phoenix Security is building a federated vulnerability knowledge base, cross-validating against sources like VulnCheck, OSV.dev, and GitHub. That may be a model worth watching—and emulating.
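To make the federated pattern concrete, here is a minimal sketch, assuming the public OSV.dev query endpoint and GitHub's global security advisories endpoint behave as their published docs describe; VulnCheck or any other feed would slot in the same way. It illustrates the pattern, not Phoenix Security's actual implementation:

```python
import requests

OSV_URL = "https://api.osv.dev/v1/query"        # OSV.dev public query API
GHSA_URL = "https://api.github.com/advisories"  # GitHub global security advisories API

def osv_ids(package: str, ecosystem: str) -> set[str]:
    """Ask OSV.dev which vulnerability IDs affect a package."""
    r = requests.post(
        OSV_URL,
        json={"package": {"name": package, "ecosystem": ecosystem}},
        timeout=10,
    )
    r.raise_for_status()
    return {v["id"] for v in r.json().get("vulns", [])}

def ghsa_ids(package: str, ecosystem: str) -> set[str]:
    """Ask GitHub's advisory database about the same package."""
    r = requests.get(
        GHSA_URL,
        params={"ecosystem": ecosystem.lower(), "affects": package},
        timeout=10,
    )
    r.raise_for_status()
    return {a["ghsa_id"] for a in r.json()}

def federated_lookup(package: str, ecosystem: str) -> set[str]:
    """Union the sources; one faltering node must not blank out the pipeline."""
    findings: set[str] = set()
    for source in (osv_ids, ghsa_ids):
        try:
            findings |= source(package, ecosystem)
        except requests.RequestException:
            continue  # degrade gracefully instead of failing closed
    return findings

if __name__ == "__main__":
    print(federated_lookup("lodash", "npm"))
```

Degrading gracefully when one source is down is the property Cipollone is after: the lookup keeps returning whatever the surviving feeds know.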

Together, these voices draw a sharp outline. Krebs warned us the foundation is cracking. Radcliff called out the industry’s failure to respond. Cipollone offered a path forward—one that’s decentralized, resilient, and built to last.

And that’s where the real opportunity lies. The emergency patch from CISA buys us time, but not resolution. If anything, this close call should jolt us into rethinking how we fund, govern, and evolve the infrastructure we all rely on. From federated data sources to vendor-backed redundancy, now’s the time to experiment boldly—and build something stronger than what nearly broke.

Let’s not wait for another near-collapse to take this seriously.


Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

(Editor’s note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own—drawn from lived experience and editorial judgment honed over decades of investigative reporting.)

 

By Byron Acohido (Pulitzer Prize-Winning Business Journalist)

Original Link to the Blog: Click Here


The company doesn’t keep logs, so couldn’t turn over data:

Windscribe, a globally used privacy-first VPN service, announced today that its founder, Yegor Sak, has been fully acquitted by a court in Athens, Greece, following a two-year legal battle in which Sak was personally charged in connection with an alleged internet offence by an unknown user of the service.

The case centred around a Windscribe-owned server in Finland that was allegedly used to breach a system in Greece. Greek authorities, in cooperation with INTERPOL, traced the IP address to Windscribe’s infrastructure and, unlike standard international procedures, proceeded to initiate criminal proceedings against Sak himself, rather than pursuing information through standard corporate channels.

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here

Our editorial team has handpicked the best AI/ML talks from RSA Conference USA – one of the most influential cybersecurity events globally. Held annually, the conference brings together the sharpest minds in security to discuss the future of digital defense. The curated sessions below, drawn from several editions of the conference, focus on AI, cyber defense, privacy, and open-source innovation; although some presentations date from past years, the insights remain highly relevant amid the ongoing AI revolution and its growing impact on cybersecurity strategy and implementation.

1. Techniques for Automatic Device Identification and Network Assignment

Speaker: Rodney Moseley, Sr Security Program Manager, Microsoft

Rodney explores scalable, automated methods for identifying devices and assigning networks based on behavioral signals. The session shares real-world examples of applying zero-trust principles to modern enterprise environments.

>> Download Presentation

2. 10 Key Challenges for AI within the EU Data Protection Framework

Speaker: Valerie Lyons, COO, BH Consulting

Valerie presents a strategic breakdown of the most pressing AI challenges under EU privacy laws, including accountability, fairness, and transparency—vital for anyone navigating regulatory complexities in AI deployments.

>> Download Presentation

3. Coordinated Disclosure for ML: What's Different and What's the Same

Speaker: Sven Cattell, Founder, AI Village

This talk dives into how traditional disclosure models need to evolve for machine learning systems. Sven draws parallels and highlights where the new frontiers of responsible AI security diverge from legacy norms.

>> Download Presentation

4. GenAI Opportunities and Challenges: Where 370 Enterprises Are Focusing Now

Speaker: David Gruber, Principal Analyst, ESG

Backed by original research, this session gives insights into how top organizations are investing in and struggling with Generative AI, offering a data-driven view of current enterprise priorities.

>> Download Presentation

5. Securing AI: There Is No Try, Only Do!

Speaker: Saurabh Shintre, CEO, LangSafe

Saurabh provides a pragmatic roadmap to securing AI systems—starting from first principles and ending with hard-learned lessons in implementation.

>> Download Presentation

6. A Constitutional Quagmire: Ethical Minefields of AI, Cyber, and Privacy

Speaker: Daniel Garrie, CEO, Law & Forensics / Adjunct Professor

Explore the intersection of legal and ethical complexities surrounding AI, data privacy, and national security. Daniel examines constitutional pitfalls and offers potential frameworks for the road ahead.

>> Download Presentation

7. Cyber Defense Matrix Workshop

Speakers: Walter Williams (CISO, Monotype) & Sounil Yu (CTO, Knostic)

A hands-on, high-impact workshop exploring how to apply the Cyber Defense Matrix to structure, prioritize, and evaluate your security programs more effectively.

>> Download Presentation

8. Lessons Learned from Developing Secure AI Workflows

Speaker: Elie Bursztein, Google & DeepMind

Elie draws from his experience leading AI security at Google DeepMind, delivering insights on building robust and ethical AI systems at scale.

>> Download Presentation

9. Oh, the Possibilities: Balancing Innovation and Risk with Generative AI

Speakers: William Rankin & Shayla Treadwell, ECS

This dual-speaker session explores how security leaders can walk the tightrope between innovation and governance when rolling out GenAI in production environments.

>> Download Presentation

10. Cracking the Code: Unveiling Synergies Between Open Source Security and AI

Speakers: Perri Adams (DARPA) & Omkhar Arasaratnam (OpenSSF)

This talk sheds light on how AI and open-source can empower each other—and where they clash. Perri and Omkhar discuss national-level efforts and global open-source collaborations.

>> Download Presentation

11. From Chatbot to Destroyer of Endpoints: Can ChatGPT Automate EDR Bypasses?

Speakers: Daan Raman & Erik Van Buggenhout, NVISO

An eye-opening demo session where the presenters explore how language models can be leveraged to probe and potentially bypass endpoint detection systems.

>> Download Presentation

Discover the Top Cybersecurity Innovators @RSA Conference USA 2025

Get a curated glimpse into the most promising cybersecurity startups featured in the RSA Innovation Sandbox 2025. Our editorial team has handpicked the standout companies shaping the future of cyber defense.

>> Click Here To Explore The Top Cybersecurity Innovators

CISO Platform Awards USA 2025

The CISO Platform Awards USA 2025 is your opportunity to gain national recognition for driving innovation and excellence in cybersecurity leadership. Held in Atlanta this October, the awards will spotlight the most impactful security leaders who have led transformative initiatives and demonstrated measurable results in strengthening their organization’s security posture. Are you a cybersecurity leader making an impact? Nominate yourself or a peer for one of the most respected recognitions in the industry.

>> Nominate Your Peer Or Yourself Here

Join the CISO Platform Community

CISO Platform is a peer-driven community where CISOs and security leaders collaborate, share authentic reviews, and grow their careers. If you’re not already a member, we’d love to invite you to join our trusted network. Membership is free and gives you access to peer conversations, curated resources, and exclusive events.

>> Join Now – It’s Free

A 23-year-old Scottish man thought to be a member of the prolific Scattered Spider cybercrime group was extradited last week from Spain to the United States, where he is facing charges of wire fraud, conspiracy and identity theft. U.S. prosecutors allege Tyler Robert Buchanan and co-conspirators hacked into dozens of companies in the United States and abroad, and that he personally controlled more than $26 million stolen from victims.

Scattered Spider is a loosely affiliated criminal hacking group whose members have broken into and stolen data from some of the world’s largest technology companies. Buchanan was arrested in Spain last year on a warrant from the FBI, which wanted him in connection with a series of SMS-based phishing attacks in the summer of 2022 that led to intrusions at Twilio, LastPass, DoorDash, Mailchimp, and many other tech firms.

Tyler Buchanan, being escorted by Spanish police at the airport in Palma de Mallorca in June 2024.

As first reported by KrebsOnSecurity, Buchanan (a.k.a. “tylerb”) fled the United Kingdom in February 2023, after a rival cybercrime gang hired thugs to invade his home, assault his mother, and threaten to burn him with a blowtorch unless he gave up the keys to his cryptocurrency wallet. Buchanan was arrested in June 2024 at the airport in Palma de Mallorca while trying to board a flight to Italy. His extradition to the United States was first reported last week by Bloomberg.

Members of Scattered Spider have been tied to the 2023 ransomware attacks against MGM and Caesars casinos in Las Vegas, but it remains unclear whether Buchanan was implicated in that incident. The Justice Department’s complaint against Buchanan makes no mention of the 2023 ransomware attack.

Rather, the investigation into Buchanan appears to center on the SMS phishing campaigns from 2022, and on SIM-swapping attacks that siphoned funds from individual cryptocurrency investors. In a SIM-swapping attack, crooks transfer the target’s phone number to a device they control and intercept any text messages or phone calls to the victim’s device — including one-time passcodes for authentication and password reset links sent via SMS.

In August 2022, KrebsOnSecurity reviewed data harvested in a months-long cybercrime campaign by Scattered Spider involving countless SMS-based phishing attacks against employees at major corporations. The security firm Group-IB called them by a different name — 0ktapus, because the group typically spoofed the identity provider Okta in their phishing messages to employees at targeted firms.

A Scattered Spider/0Ktapus SMS phishing lure sent to Twilio employees in 2022.

The complaint against Buchanan (PDF) says the FBI tied him to the 2022 SMS phishing attacks after discovering the same username and email address was used to register numerous Okta-themed phishing domains seen in the campaign. The domain registrar NameCheap found that less than a month before the phishing spree, the account that registered those domains logged in from an Internet address in the U.K. FBI investigators said the Scottish police told them the address was leased to Buchanan from January 26, 2022 to November 7, 2022.

Authorities seized at least 20 digital devices when they raided Buchanan’s residence, and on one of those devices they found usernames and passwords for employees of three different companies targeted in the phishing campaign.

“The FBI’s investigation to date has gathered evidence showing that Buchanan and his co-conspirators targeted at least 45 companies in the United States and abroad, including Canada, India, and the United Kingdom,” the FBI complaint reads. “One of Buchanan’s devices contained a screenshot of Telegram messages between an account known to be used by Buchanan and other unidentified co-conspirators discussing dividing up the proceeds of SIM swapping.”

U.S. prosecutors allege that records obtained from Discord showed the same U.K. Internet address was used to operate a Discord account that specified a cryptocurrency wallet when asking another user to send funds. The complaint says the publicly available transaction history for that payment address shows approximately 391 bitcoin was transferred in and out of this address between October 2022 and February 2023; 391 bitcoin is presently worth more than $26 million.

In November 2024, federal prosecutors in Los Angeles unsealed criminal charges against Buchanan and four other alleged Scattered Spider members, including Ahmed Elbadawy, 23, of College Station, Texas; Joel Evans, 25, of Jacksonville, North Carolina; Evans Osiebo, 20, of Dallas; and Noah Urban, 20, of Palm Coast, Florida. KrebsOnSecurity reported last year that another suspected Scattered Spider member — a 17-year-old from the United Kingdom — was arrested as part of a joint investigation with the FBI into the MGM hack.

Mr. Buchanan’s court-appointed attorney did not respond to a request for comment. The accused faces charges of wire fraud conspiracy, conspiracy to obtain information by computer for private financial gain, and aggravated identity theft. Convictions on the latter charge carry a minimum sentence of two years in prison.

Documents from the U.S. District Court for the Central District of California indicate Buchanan is being held without bail pending trial. A preliminary hearing in the case is slated for May 6.

 

By: Brian Krebs (Investigative Journalist, Award-Winning Author)

Original link to the blog: Click Here


Sooner or later, it’s going to happen. AI systems will start acting as agents, doing things on our behalf with some degree of autonomy. I think it’s worth thinking about the security of that now, while it’s still a nascent idea.

In 2019, I joined Inrupt, a company that is commercializing Tim Berners-Lee’s open protocol for distributed data ownership. We are working on a digital wallet that can make use of AI in this way. (We used to call it an “active wallet.” Now we’re calling it an “agentic wallet.”)

I talked about this a bit at the RSA Conference earlier this week, in my keynote talk about AI and trust. Any useful AI assistant is going to require a level of access—and therefore trust—that rivals what we currently give our email provider, social network, or smartphone.

This Active Wallet is an example of an AI assistant. It’ll combine personal information about you, transactional data that you are a party to, and general information about the world. And use that to answer questions, make predictions, and ultimately act on your behalf. We have demos of this running right now. At least in its early stages. Making it work is going to require an extraordinary amount of trust in the system. This requires integrity. Which is why we’re building protections in from the beginning.
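To illustrate the consent-first shape such a wallet implies — purely a toy sketch; the class, scope, and method names below are invented, not Inrupt's actual API — an agent might be allowed to act only inside scopes its owner has explicitly granted:

```python
# Toy sketch of a consent-gated "agentic wallet". Everything here is
# hypothetical and for illustration only.
class AgenticWallet:
    def __init__(self, owner: str):
        self.owner = owner
        self.data: dict[str, dict] = {}   # personal + transactional records
        self.consents: set[str] = set()   # scopes the owner explicitly granted

    def grant(self, scope: str) -> None:
        """Owner authorizes a class of actions, e.g. 'subscriptions:read'."""
        self.consents.add(scope)

    def act(self, scope: str, action):
        """An agent acts on the owner's behalf only inside granted scopes."""
        if scope not in self.consents:
            raise PermissionError(f"no consent for scope {scope!r}")
        return action(self.data)

wallet = AgenticWallet("alice")
wallet.data["subscriptions"] = {"news_site": 12.99}
wallet.grant("subscriptions:read")
total = wallet.act("subscriptions:read", lambda d: sum(d["subscriptions"].values()))
print(f"Monthly subscription spend: ${total:.2f}")
```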

Visa is also thinking about this. It just announced a protocol that uses AI to help people make purchasing decisions.

I like Visa’s approach because it’s an AI-agnostic standard. I worry a lot about lock-in and monopolization of this space, so anything that lets people easily switch between AI models is good. And I like that Visa is working with Inrupt so that the data is decentralized as well. Here’s our announcement about its announcement:

This isn’t a new relationship—we’ve been working together for over two years. We’ve conducted a successful POC and now we’re standing up a sandbox inside Visa so merchants, financial institutions and LLM providers can test our Agentic Wallets alongside the rest of Visa’s suite of Intelligent Commerce APIs.

For that matter, we welcome any other company that wants to engage in the world of personal, consented Agentic Commerce to come work with us as well.

I joined Inrupt years ago because I thought that Solid could do for personal data what HTML did for published information. I liked that the protocol was an open standard, and that it distributed data instead of centralizing it. AI agents need decentralized data. “Wallet” is a good metaphor for personal data stores. I’m hoping this is another step towards adoption.

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here

By Byron V. Acohido

The cybersecurity landscape has never moved faster — and the people tasked with defending it have never felt more exposed. Related: How real people are really using GenAI

Today’s Chief Information Security Officers (CISOs) operate in a pressure cooker: responsible for protecting critical assets, expected to show up in the boardroom with fluency, yet rarely granted the authority, resources — or organizational alignment to succeed. Many burn out. Some are scapegoated. A few, as we’ve seen recently, face criminal charges.

 

And now comes the GenAI wave — flooding security vendors with new tools, but also disrupting organizational dynamics, blurring responsibility lines, and injecting fresh uncertainty into already fragile governance structures.

This is the backdrop for The CISO on the Razor’s Edge, a new book by Steve Tout, longtime identity strategist and advisor to Fortune 500 security leaders. It reads not as a how-to manual, but as a diagnosis of systemic design failure — and a blueprint for recovery. Tout introduces Strategic Performance Intelligence (SPI) as an operating model to help CISOs reclaim their influence, align cybersecurity with business outcomes, and speak the language of decision-makers.

This isn’t another call for CISOs to “communicate better” or “get a seat at the table.” It’s an acknowledgment that the table itself is often rigged, and that rebuilding trust will take structural clarity — not more dashboards or playbooks.

I spoke with Steve to explore what pushed him to write this book now, how GenAI changes the game, and what security leaders must do to escape the scapegoat cycle.

LW: You frame the CISO role as “broken by design.” What convinced you that this wasn’t just a people problem — but a system design issue?

Tout: It started with patterns I kept hearing—from friends in the role, from guests on the Candid CISO podcast, and from consulting work. One friend joked it should be called Chief Scapegoat Officer, and he wasn’t wrong. The way accountability is structured, everything rolls downhill to one person, even when the real issues are baked into the system.

The deeper I looked, the more it became clear this wasn’t just about people—it was about priorities. Cybersecurity programs are operating inside organizations optimized for financial engineering and extracting shareholder value. That’s not inherently wrong, but it pushes security into a compliance role, limits long-term thinking, and creates conditions where the CISO becomes disposable. It’s not a people problem. It’s a structural one.

LW: SPI 360 is a central concept in your book. Can you briefly explain what makes Strategic Performance Intelligence different from current governance, risk and compliance (GRC) or dashboard approaches?

Tout: I’m a long-distance runner—I run in ultra marathons—and one thing I’ve learned is that multiple factors play a role in my performance on any given day. There’s an app on my watch that can track over 600 data points. That inspired me to think differently about how we track human performance in cybersecurity.

Tout: SPI 360 is different because it doesn’t just monitor tech. It looks at environment variables—team health, leadership alignment, gaps between strategy and execution—things SIEMs and GRC dashboards can’t see, because log files don’t tell the whole story, and nearly every tool in this space is obsessed with the [log files] tech stack. But humans play a critical role in outcomes. Strava and my “marathon readiness” score were big inspirations. We have a huge opportunity to do this better.

There’s a saying in the running community: “If it’s not on Strava, it didn’t happen.” It’s cute when we say that about our runs. But in cybersecurity, it points to something deeper. We need to move beyond raw data and start generating meaningful insight that leaders can actually act on. That’s what SPI 360 is designed to deliver.
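As a toy illustration of the “readiness score” analogy — every factor, weight, and reading below is invented for the example, not taken from the book — a composite score might blend organizational signals the way a running app blends training data:

```python
# Hypothetical SPI-style composite score. The factors, weights, and inputs
# are illustrative; SPI 360's actual model is not described in this interview.
FACTORS = {
    "team_health":           (0.25, 0.62),  # (weight, normalized 0-1 reading)
    "leadership_alignment":  (0.25, 0.48),
    "strategy_vs_execution": (0.30, 0.55),
    "tooling_coverage":      (0.20, 0.81),
}

def readiness(factors: dict[str, tuple[float, float]]) -> float:
    total_weight = sum(w for w, _ in factors.values())
    assert abs(total_weight - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * score for w, score in factors.values())

print(f"Program readiness: {readiness(FACTORS):.0%}")  # -> Program readiness: 60%
```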

LW: You make a strong case that cybersecurity has become a “strategic function without a strategy.” What role should boards and CEOs play in fixing that?

Tout: Thank you. Unfortunately, I’m seeing more cases where the CISO is quietly replaced by a “Head of Cybersecurity” with a mandate to manage risk and compliance. Maybe that works outside of public companies, but it’s often just a way to downgrade the role into something purely technical. These heads tend to lack T-shaped skills—no financial discipline, limited leadership experience, and little to no board exposure.

 


Removing the CISO is one response, but someone still has to lead. My guidance? Invest in leadership development for technical CISOs—and stop treating them like the lone line of defense. Build shared accountability across the C-suite. The next wave of CISOs may have less technical depth, but they’ll bring business fluency, influence, and the ability to link cybersecurity to real outcomes.

LW: GenAI is moving fast — in both attack surface and tooling. How does agentic AI reshape the challenges (or opportunities) for the next-gen CISO?

Tout: Agentic AI is absolutely a force multiplier—on both sides. It’s already making life harder for CISOs by accelerating everything for cybercriminals and nation-state actors. Defense use cases like chaos modeling, monitoring, and pen testing are no-brainers. But the more interesting opportunity is where agentic AI fills gaps most teams just can’t staff.

 

Take a CISO without a dedicated GRC analyst. An agentic system can now surface system-level risks, track performance across business units, and provide insight—without hiring a full-time employee. A vCISO supporting multiple orgs can finally get visibility without assuming full-time liability or overextending bandwidth. I don’t think AI agents replace CISOs anytime soon, but I do think they give lean teams a real shot at higher performance.

It’s not about replacing leadership. It’s about amplifying it—especially in places where resource constraints and complexity have been holding teams back. The smart move is to keep a human in the loop and let AI handle the scale.

LW: You cite high-profile security leaders who’ve been scapegoated. How should CISOs prepare themselves — contractually and strategically — to avoid being next?

Tout: Perfect question—and a timely one. I’m seeing more interest in vCISO roles where leaders come in as contractors with their own liability insurance, enabling business transformation without putting their careers on the line. That model gives organizations flexibility and gives CISOs some breathing room. But for full-time roles, I think more CISOs need to approach the job like executives—with an eye toward negotiation, shared goals and liabilities, and radical transparency. SPI can help support that transparency by making the invisible parts of the system visible and measurable.

I also believe there’s a bigger conversation to be had around protections—maybe even a cybersecurity equivalent of Sarbanes-Oxley, but we cannot wait for that.  It’s not reasonable to ask CISOs to absorb the full weight of systemic, global threats like espionage or terrorism without structural safeguards. There’s still work to do on defining what that looks like.

 

LW: A recurring theme in the book is “strategic amnesia” — the tendency to forget hard lessons after each crisis. Why does this keep happening?

Tout: I’m sorry… What was the question again? Haha. Honestly, I believe it ties back to an obsession with technology, a fixation on risk and compliance, and the revolving door CISOs are constantly walking through. When the goal is surviving the quarter, there’s no incentive to remember what nearly broke the business last year.

Organizations that normalize heroics without investing in disciplined learning and development are playing a dangerous game. And no, I’m not talking about security awareness training. We could fix corporate amnesia overnight with the right strategic incentives—but that would require companies to stop managing cybersecurity like an expense and start managing it like a long-term investment.

LW: What’s one thing a CISO can do this quarter to begin shifting from tactical defense to strategic influence — without waiting for permission?

Tout: The one thing I’d say? Drop the “paranoid CISO” and “CISO burnout” talk track. It’s a familiar trap — and it’s not helping anyone. Everyone feels the pressure. Everyone’s stretched. But no one is coming to save you. At some point, we have to shift from survival mode to leadership mode. That starts with owning the role for what it is now—not what it used to be.

If you can’t show that your cybersecurity program is a real business enabler with measurable ROI, you’re asleep at the wheel. That might sound blunt, but it’s the job now. Boards aren’t looking for more dashboards or technical detail—they want outcomes, clarity, and a reason to trust that security is helping the business move forward, not just keeping it from falling apart.

Start by learning how business leaders think. Study how they use data to drive decisions. This isn’t about mastering finance or becoming a spreadsheet wizard—it’s about connecting the dots between what you do and why it matters. No one’s going to teach you this on the job. You’ve got to go seek it out. Because if you want to lead, you have to show that you’re already thinking like a leader.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

(Editor’s note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own—drawn from lived experience and editorial judgment honed over decades of investigative reporting.)

 

By Byron Acohido (Pulitzer Prize-Winning Business Journalist)

Original Link to the Blog: Click Here

CISO Fireside Chat: A CISO’s Guide on How to Manage a Dynamic Attack Surface, with Rick Doten (VP, Information Security, Centene Corporation). In today’s hyper-connected world, the cybersecurity landscape is no longer defined by fixed perimeters. …

An employee at Elon Musk’s artificial intelligence company xAI leaked a private key on GitHub that for the past two months could have allowed anyone to query private xAI large language models (LLMs) which appear to have been custom made for working with internal data from Musk’s companies, including SpaceX, Tesla and Twitter/X, KrebsOnSecurity has learned.


Philippe Caturegli, “chief hacking officer” at the security consultancy Seralys, was the first to publicize the leak of credentials for an x.ai application programming interface (API) exposed in the GitHub code repository of a technical staff member at xAI.

Caturegli’s post on LinkedIn caught the attention of researchers at GitGuardian, a company that specializes in detecting and remediating exposed secrets in public and proprietary environments. GitGuardian’s systems constantly scan GitHub and other code repositories for exposed API keys, and fire off automated alerts to affected users.
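As a rough sketch of what that kind of scanning involves — a two-rule toy, whereas real scanners like GitGuardian maintain hundreds of provider-specific detectors — pattern matching is typically paired with an entropy test to separate random-looking credentials from ordinary strings:

```python
import math
import re

# Toy detector: one generic assignment pattern plus a randomness test.
KEY_RE = re.compile(
    r"(?:api[_-]?key|secret|token)\s*[:=]\s*['\"]([A-Za-z0-9_\-]{20,})['\"]",
    re.IGNORECASE,
)

def shannon_entropy(s: str) -> float:
    """Bits per character; random keys score high, ordinary words score low."""
    probs = [s.count(c) / len(s) for c in set(s)]
    return -sum(p * math.log2(p) for p in probs)

def candidate_secrets(text: str, min_entropy: float = 3.5) -> list[str]:
    """Return matched values that look random enough to be real credentials."""
    return [m for m in KEY_RE.findall(text) if shannon_entropy(m) >= min_entropy]

# A fabricated, high-entropy value for demonstration only.
print(candidate_secrets('api_key = "A1b9Cx3dQ5fz7Gh2Ij4Kl6Mn8"'))
```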

GitGuardian’s Eric Fourrier told KrebsOnSecurity the exposed API key had access to several unreleased models of Grok, the AI chatbot developed by xAI. In total, GitGuardian found the key had access to at least 60 fine-tuned and private LLMs.

“The credentials can be used to access the X.ai API with the identity of the user,” GitGuardian wrote in an email explaining their findings to xAI. “The associated account not only has access to public Grok models (grok-2-1212, etc) but also to what appears to be unreleased (grok-2.5V), development (research-grok-2p5v-1018), and private models (tweet-rejector, grok-spacex-2024-11-04).”

Fourrier found GitGuardian had alerted the xAI employee about the exposed API key nearly two months ago — on March 2. But as of April 30, when GitGuardian directly alerted xAI’s security team to the exposure, the key was still valid and usable. xAI told GitGuardian to report the matter through its bug bounty program at HackerOne, but just a few hours later the repository containing the API key was removed from GitHub.

“It looks like some of these internal LLMs were fine-tuned on SpaceX data, and some were fine-tuned with Tesla data,” Fourrier said. “I definitely don’t think a Grok model that’s fine-tuned on SpaceX data is intended to be exposed publicly.”

xAI did not respond to a request for comment. Nor did the 28-year-old xAI technical staff member whose key was exposed.

Carole Winqwist, chief marketing officer at GitGuardian, said giving potentially hostile users free access to private LLMs is a recipe for disaster.

“If you’re an attacker and you have direct access to the model and the back end interface for things like Grok, it’s definitely something you can use for further attacking,” she said. “An attacker could use it for prompt injection, to tweak the (LLM) model to serve their purposes, or try to implant code into the supply chain.”

The inadvertent exposure of internal LLMs for xAI comes as Musk’s so-called Department of Government Efficiency (DOGE) has been feeding sensitive government records into artificial intelligence tools. In February, The Washington Post reported DOGE officials were feeding data from across the Education Department into AI tools to probe the agency’s programs and spending.

The Post said DOGE plans to replicate this process across many departments and agencies, accessing the back-end software at different parts of the government and then using AI technology to extract and sift through information about spending on employees and programs.

“Feeding sensitive data into AI software puts it into the possession of a system’s operator, increasing the chances it will be leaked or swept up in cyberattacks,” Post reporters wrote.

Wired reported in March that DOGE has deployed a proprietary chatbot called GSAi to 1,500 federal workers at the General Services Administration, part of an effort to automate tasks previously done by humans as DOGE continues its purge of the federal workforce.

A Reuters report last month said Trump administration officials told some U.S. government employees that DOGE is using AI to surveil at least one federal agency’s communications for hostility to President Trump and his agenda. Reuters wrote that the DOGE team has heavily deployed Musk’s Grok AI chatbot as part of their work slashing the federal government, although Reuters said it could not establish exactly how Grok was being used.

Caturegli said while there is no indication that federal government or user data could be accessed through the exposed x.ai API key, these private models are likely trained on proprietary data and may unintentionally expose details related to internal development efforts at xAI, Twitter, or SpaceX.

“The fact that this key was publicly exposed for two months and granted access to internal models is concerning,” Caturegli said. “This kind of long-lived credential exposure highlights weak key management and insufficient internal monitoring, raising questions about safeguards around developer access and broader operational security.”

 

By: Brian Krebs (Investigative Journalist, Award-Winning Author)

Original link to the blog: Click Here


A DoorDash driver stole over $2.5 million over several months:

The driver, Sayee Chaitainya Reddy Devagiri, placed expensive orders from a fraudulent customer account in the DoorDash app. Then, using DoorDash employee credentials, he manually assigned the orders to driver accounts he and the others involved had created. Devagiri would then mark the undelivered orders as complete and prompt DoorDash’s system to pay the driver accounts. Then he’d switch those same orders back to “in process” and do it all over again. Doing this “took less than five minutes, and was repeated hundreds of times for many of the orders,” writes the US Attorney’s Office.

Interesting flaw in the software design. He probably would have gotten away with it if he’d kept the numbers small. It’s only when the amount missing is too big to ignore that the investigations start.
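As a sketch of the guard that appears to have been missing — DoorDash's actual order schema isn't public, so the states and method names below are hypothetical — the fix is to treat order status as a one-way state machine and make driver payout idempotent:

```python
# One-way order state machine with idempotent payout (hypothetical schema).
ALLOWED = {
    "created":    {"assigned"},
    "assigned":   {"in_process"},
    "in_process": {"complete"},
    "complete":   set(),  # terminal: no flipping back to "in process"
}

class Order:
    def __init__(self, order_id: str):
        self.order_id = order_id
        self.status = "created"
        self.paid_out = False

    def transition(self, new_status: str) -> None:
        if new_status not in ALLOWED[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status

    def pay_driver(self) -> bool:
        """Pay at most once, and only for a genuinely completed order."""
        if self.status != "complete" or self.paid_out:
            return False
        self.paid_out = True
        return True

order = Order("o-123")
for step in ("assigned", "in_process", "complete"):
    order.transition(step)
assert order.pay_driver() is True
assert order.pay_driver() is False      # replaying the payout pays nothing
try:
    order.transition("in_process")      # the fraud loop's key move now fails
except ValueError as e:
    print(e)
```

The repeated complete-to-in-process flip described above is exactly the transition this model forbids, and the idempotent payout removes the incentive for replaying it.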

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here


One of my biggest worries about VPNs is the amount of trust users need to place in them, and how opaque most of them are about who owns them and what sorts of data they retain.

A new study found that many commercial VPNs are (often surreptitiously) owned by Chinese companies.

It would be hard for U.S. users to avoid the Chinese VPNs. The ownership of many appeared deliberately opaque, with several concealing their structure behind layers of offshore shell companies. TTP was able to determine the Chinese ownership of the 20 VPN apps being offered to Apple’s U.S. users by piecing together corporate documents from around the world. None of those apps clearly disclosed their Chinese ownership.

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here

By Byron V. Acohido

SAN FRANCISCO — The first rule of reporting is to follow the tension lines—the places where old assumptions no longer quite hold. Related: GenAI disrupting tech jobs

I’ve been feeling that tension lately. Just arrived in the City by the Bay. Trekked here with some 40,000-plus cybersecurity pros and company execs flocking to RSAC 2025 at Moscone Center.


Many of the challenges they face mitigating cyber risks haven’t fundamentally changed, just intensified, over the past two decades I’ve been coming to RSAC. But the arrival of LLMs and Gen AI has tilted the landscape in a new, disorienting way.

Yes, the bad actors have been quick to leverage GenAI to scale up their tried-and-true attacks. The good news is that the good guys are doing so, as well: incrementally, and mostly behind the scenes, language-activated agentic AI is starting to reshape network protections.

 

Calibrating LLMs

In recent weeks, I’ve sat down with a cross-section of innovators—each moving methodically to calibrate LLMs and GenAI to function as a force multiplier for defense.

Brian Dye, CEO of Corelight, a specialist in open-source-based network evidence solutions, told me how the field is being split: smaller security teams scrambling to adopt vendor-curated AI while large enterprises spin up their own tailored LLMs.


John DiLullo, CEO of Deepwatch, a managed detection and response firm focused on high-fidelity security operations, has come to an unexpected discovery: LLMs, carefully cordoned and human-vetted, are already outperforming junior analysts at producing incident reports—more consistent, more accurate, less error-prone.

Jamison Utter, security evangelist at A10 Networks, a supplier of network performance and DDoS defense technologies, offers another lens: adversaries are racing ahead, using AI to craft malware and orchestrate attacks at speeds no human scripter could match. The defenders, he notes, must become equally adaptive—learning not just to wield AI, but to think in its native tempo.

There’s a pattern here.


Cybersecurity solution providers are starting to discover, each in their own corner of the battlefield, that mastery now requires a new kind of intuition:

• When to trust the machine’s first draft.

• When to double-check its cheerful approximations.

• When to discard fluency in favor of friction.

 

Getting to know my machine

It’s not unlike what I’ve found using ChatGPT-4o as a force multiplier for my own beat reporting.

At first, the tool felt like an accelerant—a way to draft faster, correlate more, test ideas with lightning speed. But over time, I’ve learned that speed alone isn’t the point. What matters is knowing when to lean on the machine—and when to lean away.

The cybersecurity innovators I’ve spoken with, thus far, are internalizing a similar lesson.


Dye’s team sees AI as a triage engine—brilliant at wading through common attack paths, but unreliable on the crooked trails where nuance matters. “Help me do more with less is one of the cybersecurity industry’s most durable problems,” Dye observes. “So, ‘Help me understand what this alert means in English’ can actually be incredibly valuable, and that’s actually something that AI models do super well.”
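Here is a minimal sketch of that “explain this alert in English” pattern, using the OpenAI Python client; the model choice, system prompt, and alert fields are placeholders, and any chat-capable LLM would serve:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A made-up IDS alert, the kind of raw event an analyst would triage.
alert = {
    "rule": "ET SCAN Potential SSH Scan",
    "src_ip": "203.0.113.7",
    "dst_port": 22,
    "hits_last_60s": 148,
}

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder; any capable chat model works
    messages=[
        {"role": "system",
         "content": "You are a SOC assistant. Explain the alert in two "
                    "plain-English sentences, then give a severity of "
                    "low, medium, or high."},
        {"role": "user", "content": f"Alert: {alert}"},
    ],
)
print(response.choices[0].message.content)
```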

DiLullo’s analysts now trust AI to assemble the bones of a report—but know to inspect each joint before sending it out the door. In cybersecurity, DiLullo noted, making educated inferences is essential — and LLMs excel at scaling that process, efficiently surfacing insights in plain English where humans might otherwise struggle.

Utter’s colleagues have begun leveraging AI-derived telemetry—but only after investing serious thought into how the tools should be constrained.

 

Intentional orchestration

In each case, calibration is the hidden skill. Not just deploying AI, but orchestrating its role with intention. Not ceding judgment, but sharpening it.

Tomorrow, as I walk the floor at RSA and continue these Fireside Chat conversations, I expect to hear more versions of this same evolving art form.

The vendors who will thrive are not those who see AI as a panacea—or a menace. They’re the ones treating it as what it actually is: a powerful, fallible partner. A new compass—helpful, but requiring a steady hand to navigate the magnetic distortions.

This is not the end of human-centered security; it’s the beginning of a new kind of craftsmanship.

And if the early glimpses are any guide, the quiet genius of this next chapter won’t be found in flashy demos or viral headlines.

 

Prompt engineering is the key


As A10’s Utter pointed out, it’s a craft that will increasingly depend on prompt engineers—practitioners skilled at shaping AI outputs without surrendering judgment. Those who master the art of asking better questions, not just accepting faster answers, will set the new standard.

It will surface, instead, in the way a well-trained SOC analyst coaxes a hidden thread out of a noisy alert queue.

Or the way a vendor team embeds invisible friction checks into their AI pipeline—not to slow things down, but to make sure the right things get through.
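One way such a friction check can look in practice — a sketch with an invented action allowlist, not any vendor’s pipeline — is a deterministic gate between what the model proposes and what actually executes:

```python
import json

# Invented allowlist: the model may only *propose*; this gate decides.
ALLOWED_ACTIONS = {"open_ticket", "quarantine_host", "notify_oncall"}

def gate(model_output: str) -> dict:
    """Pass well-formed proposals naming an allowlisted action; reject the rest."""
    try:
        proposal = json.loads(model_output)
    except json.JSONDecodeError as exc:
        raise ValueError("not valid JSON; route to a human") from exc
    if proposal.get("action") not in ALLOWED_ACTIONS:
        raise ValueError(f"action {proposal.get('action')!r} is not allowlisted")
    return proposal

print(gate('{"action": "open_ticket", "reason": "repeated SSH scans from one host"}'))
```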


The machine can accelerate the flow, but the human will still shape the course.

Observes Utter: “Prompt engineering, I think, is the key to understanding how to get the most out of AI.”

Where this leads, I’ll keep watch — and keep reporting.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

(Editor’s note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own—drawn from lived experience and editorial judgment honed over decades of investigative reporting.)

 

By Byron Acohido (Pulitzer Prize-Winning Business Journalist)

Original Link to the Blog: Click Here


Russia is proposing a rule that all foreigners in Moscow install a tracking app on their phones.

Using a mobile application that all foreigners will have to install on their smartphones, the Russian state will receive the following information:

  • Residence location
  • Fingerprint
  • Face photograph
  • Real-time geo-location monitoring

This isn’t the first time we’ve seen this. Qatar did it in 2022 around the World Cup:

“After accepting the terms of these apps, moderators will have complete control of users’ devices,” he continued. “All personal content, the ability to edit it, share it, extract it as well as data from other apps on your device is in their hands. Moderators will even have the power to unlock users’ devices remotely.”

 

By: Bruce Schneier (Cryptographer, Author & Security Guru)

Original link to the blog: Click Here


A Texas firm recently charged with conspiring to distribute synthetic opioids in the United States is at the center of a vast network of companies in the U.S. and Pakistan whose employees are accused of using online ads to scam westerners seeking help with trademarks, book writing, mobile app development and logo designs, a new investigation reveals.

In an indictment (PDF) unsealed last month, the U.S. Department of Justice said Dallas-based eWorldTrade “operated an online business-to-business marketplace that facilitated the distribution of synthetic opioids such as isotonitazene and carfentanyl, both significantly more potent than fentanyl.”

Launched in 2017, eWorldTrade[.]com now features a seizure notice from the DOJ. eWorldTrade operated as a wholesale seller of consumer goods, including clothes, machinery, chemicals, automobiles and appliances. The DOJ’s indictment includes no additional details about eWorldTrade’s business, origins or other activity, and at first glance the website might appear to be a legitimate e-commerce platform that also just happened to sell some restricted chemicals.


A screenshot of the eWorldTrade homepage on March 25, 2025. Image: archive.org.

However, an investigation into the company’s founders reveals they are connected to a sprawling network of websites that have a history of extortionate scams involving trademark registration, book publishing, exam preparation, and the design of logos, mobile applications and websites.

Records from the U.S. Patent and Trademark Office (USPTO) show the eWorldTrade mark is owned by an Azneem Bilwani in Karachi (this name also is in the registration records for the now-seized eWorldTrade domain). Mr. Bilwani is perhaps better known as the director of the Pakistan-based IT provider Abtach Ltd., which has been singled out by the USPTO and Google for operating trademark registration scams (the main offices for eWorldtrade and Abtach share the same address in Pakistan).

In November 2021, the USPTO accused Abtach of perpetrating “an egregious scheme to deceive and defraud applicants for federal trademark registrations by improperly altering official USPTO correspondence, overcharging application filing fees, misappropriating the USPTO’s trademarks, and impersonating the USPTO.”

Abtach offered trademark registration at suspiciously low prices compared to legitimate costs of over $1,500, and claimed they could register a trademark in 24 hours. Abtach reportedly rebranded to Intersys Limited after the USPTO banned Abtach from filing any more trademark applications.

 


In a note published to its LinkedIn profile, Intersys Ltd. asserted last year that certain scam firms in Karachi were impersonating the company.

 

FROM AXACT TO ABTACH

Many of Abtach’s employees are former associates of a similar company in Pakistan called Axact that was targeted by Pakistani authorities in a 2015 fraud investigation. Axact came under law enforcement scrutiny after The New York Times ran a front-page story about the company’s most lucrative scam business: Hundreds of sites peddling fake college degrees and diplomas.

People who purchased fake certifications were subsequently blackmailed by Axact employees posing as government officials, who would demand additional payments under threats of prosecution or imprisonment for having bought fraudulent “unauthorized” academic degrees. This practice created a continuous cycle of extortion, internally referred to as “upselling.”

“Axact took money from at least 215,000 people in 197 countries — one-third of them from the United States,” The Times reported. “Sales agents wielded threats and false promises and impersonated government officials, earning the company at least $89 million in its final year of operation.”

Dozens of top Axact employees were arrested, jailed, held for months, tried and sentenced to seven years for various fraud violations. But a 2019 research brief on Axact’s diploma mills found none of those convicted had started their prison sentence, and that several had fled Pakistan and never returned.

“In October 2016, a Pakistan district judge acquitted 24 Axact officials at trial due to ‘not enough evidence’ and then later admitted he had accepted a bribe (of $35,209) from Axact,” reads a history (PDF) published by the American Association of Collegiate Registrars and Admissions Officers.

In 2021, Pakistan’s Federal Investigation Agency (FIA) charged Bilwani and nearly four dozen others — many of them Abtach employees — with running an elaborate trademark scam. The authorities called it “the biggest money laundering case in the history of Pakistan,” and named a number of businesses based in Texas that allegedly helped move the proceeds of cybercrime.


A page from the March 2021 FIA report alleging that Digitonics Labs and Abtach employees conspired to extort and defraud consumers.

The FIA said the defendants operated a large number of websites offering low-cost trademark services to customers, before then “ignoring them after getting the funds and later demanding more funds from clients/victims in the name of up-sale (extortion).” The Pakistani law enforcement agency said that about 75 percent of customers received fake or fabricated trademarks as a result of the scams.

The FIA found Abtach operates in conjunction with a Karachi firm called Digitonics Labs, which earned a monthly revenue of around $2.5 million through the “extortion of international clients in the name of up-selling, the sale of fake/fabricated USPTO certificates, and the maintaining of phishing websites.”

According to the Pakistani authorities, the accused also ran countless scams involving ebook publication and logo creation, wherein customers are subjected to advance-fee fraud and extortion — with the scammers demanding more money for supposed “copyright release” and threatening to release the trademark.

Also charged by the FIA was Junaid Mansoor, the owner of Digitonics Labs in Karachi. Mansoor’s U.K.-registered company Maple Solutions Direct Limited has run at least 700 ads for logo design websites since 2015, the Google Ads Transparency page reports. The company has approximately 88 ads running on Google as of today. 


Junaid Mansoor. Source: youtube/@Olevels․com School.

Mr. Mansoor is actively involved with and promoting a Quran study business called quranmasteronline[.]com, which was founded by Junaid’s brother Qasim Mansoor (Qasim is also named in the FIA criminal investigation). The Google ads promoting quranmasteronline[.]com were paid for by the same account advertising a number of scam websites selling logo and web design services. 

Junaid Mansoor did not respond to requests for comment. An address in Teaneck, New Jersey where Mr. Mansoor previously lived is listed as an official address of exporthub[.]com, a Pakistan-based e-commerce website that appears remarkably similar to eWorldTrade (Exporthub says its offices are in Texas). Interestingly, a search in Google for this domain shows ExportHub currently features multiple listings for fentanyl citrate from suppliers in China and elsewhere.

The CEO of Digitonics Labs is Muhammad Burhan Mirza, a former Axact official who was arrested by the FIA as part of its money laundering and trademark fraud investigation in 2021. In 2023, prosecutors in Pakistan charged Mirza, Mansoor and 14 other Digitonics employees with fraud, impersonating government officials, phishing, cheating and extortion. Mirza’s LinkedIn profile says he currently runs an educational technology/life coach enterprise called TheCoach360, which purports to help young kids “achieve financial independence.”

Reached via LinkedIn, Mr. Mirza denied having anything to do with eWorldTrade or any of its sister companies in Texas.

“Moreover, I have no knowledge as to the companies you have mentioned,” said Mr. Mirza, who did not respond to follow-up questions.

The current disposition of the FIA’s fraud case against the defendants is unclear. The investigation was marred early on by allegations of corruption and bribery. In 2021, Pakistani authorities alleged Bilwani paid a six-figure bribe to FIA investigators. Meanwhile, attorneys for Mr. Bilwani have argued that although their client did pay a bribe, the payment was solicited by government officials. Mr. Bilwani did not respond to requests for comment.

 

THE TEXAS NEXUS

KrebsOnSecurity has learned that the people and entities at the center of the FIA investigations have built a significant presence in the United States, with a strong concentration in Texas. The Texas businesses promote websites that sell logo and web design, ghostwriting, and academic cheating services. Many of these entities have recently been sued for fraud and breach of contract by angry former customers, who claimed the companies relentlessly upsold them while failing to produce the work as promised.

For example, the FIA complaints named Retrocube LLC and 360 Digital Marketing LLC, two entities that share a street address with eWorldTrade: 1910 Pacific Avenue, Suite 8025, Dallas, Texas. Also incorporated at that Pacific Avenue address is abtach[.]ae, a web design and marketing firm based in Dubai; and intersyslimited[.]com, the new name of Abtach after they were banned by the USPTO. Other businesses registered at this address market services for logo design, mobile app development, and ghostwriting.


A list published in 2021 by Pakistan’s FIA of different front companies allegedly involved in scamming people who are looking for help with trademarks, ghostwriting, logos and web design.

360 Digital Marketing’s website 360digimarketing[.]com is owned by an Abtach front company called Abtech LTD. Meanwhile, business records show 360 Digi Marketing LTD is a U.K. company whose officers include former Abtach director Bilwani; Muhammad Saad Iqbal, formerly Abtach, now CEO of Intersys Ltd; Niaz Ahmed, a former Abtach associate; and Muhammad Salman Yousuf, formerly a vice president at Axact, Abtach, and Digitonics Labs.

Google’s Ads Transparency Center finds 360 Digital Marketing LLC ran at least 500 ads promoting various websites selling ghostwriting services. Another entity tied to Junaid Mansoor — a company called Octa Group Technologies AU — has run approximately 300 Google ads for book publishing services, promoting confusingly named websites like amazonlistinghub[.]com and barnesnoblepublishing[.]co.


360 Digital Marketing LLC ran approximately 500 ads for scam ghostwriting sites.

Rameez Moiz is a Texas resident and former Abtach product manager who has represented 360 Digital Marketing LLC and RetroCube. Moiz told KrebsOnSecurity he stopped working for 360 Digital Marketing in the summer of 2023. Mr. Moiz did not respond to follow-up questions, but an Upwork profile for him states that as of April 2025 he is employed by Dallas-based Vertical Minds LLC.

In April 2025, California resident Melinda Will sued the Texas firm Majestic Ghostwriting — which is doing business as ghostwritingsquad[.]com —  alleging they scammed her out of $100,000 after she hired them to help write her book. Google’s ad transparency page shows Moiz’s employer Vertical Minds LLC paid to run approximately 55 ads for ghostwritingsquad[.]com and related sites.


Google’s ad transparency listing for ghostwriting ads paid for by Vertical Minds LLC.

 

VICTIMS SPEAK OUT

Ms. Will’s lawsuit is just one of more than two dozen complaints over the past four years wherein plaintiffs sued one of this group’s web design, wiki editing or ghostwriting services. In 2021, a New Jersey man sued Octagroup Technologies, alleging they ripped him off when he paid a total of more than $26,000 for the design and marketing of a web-based mapping service.

The plaintiff in that case did not respond to requests for comment, but his complaint alleges Octagroup and a myriad other companies it contracted with produced minimal work product despite subjecting him to relentless upselling. That case was decided in favor of the plaintiff because the defendants never contested the matter in court.

In 2023, 360 Digital Marketing LLC and Retrocube LLC were sued by a woman who said they scammed her out of $40,000 over a book she wanted help writing. That lawsuit helpfully showed an image of the office front door at 1910 Pacific Ave Suite 8025, which featured the logos of 360 Digital Marketing, Retrocube, and eWorldTrade.


The front door at 1910 Pacific Avenue, Suite 8025, Dallas, Texas.

The lawsuit was filed pro se by Leigh Riley, a 64-year-old career IT professional who paid 360 Digital Marketing to have a company called Talented Ghostwriter co-author and promote a series of books she’d outlined on spirituality and healing.

“The main reason I hired them was because I didn’t understand what I call the formula for writing a book, and I know there’s a lot of marketing that goes into publishing,” Riley explained in an interview. “I know nothing about that stuff, and these guys were convincing that they could handle all aspects of it. Until I discovered they couldn’t write a damn sentence in English properly.”

Riley’s well-documented lawsuit (not linked here because it features a great deal of personal information) includes screenshots of conversations with the ghostwriting team, which was constantly assigning her to new writers and editors, and ghosting her on scheduled conference calls about progress on the project. Riley said she ended up writing most of the book herself because the work they produced was unusable.

“Finally after months of promising the books were printed and on their way, they show up at my doorstep with the wrong title on the book,” Riley said. When she demanded her money back, she said the people helping her with the website to promote the book locked her out of the site.

13581337700?profile=RESIZE_710x

A conversation snippet from Leigh Riley’s lawsuit against Talented Ghostwriter, aka 360 Digital Marketing LLC. “Other companies once they have you money they don’t even respond or do anything,” the ghostwriting team manager explained.

Riley decided to sue, naming 360 Digital Marketing LLC and Retrocube LLC, among others. The companies offered to settle the matter for $20,000, which she accepted. “I didn’t have money to hire a lawyer, and I figured it was time to cut my losses,” she said.

Riley said she could have saved herself a great deal of headache by doing some basic research on Talented Ghostwriter, whose website claims the company is based in Los Angeles. According to the California Secretary of State, however, there is no registered entity by that name. Rather, the address claimed by talentedghostwriter[.]com is a vacant office building with a “space available” sign in the window.

California resident Walter Horsting discovered something similar when he sued 360 Digital Marketing in small claims court last year, after hiring a company called Vox Ghostwriting to help write, edit and promote a spy novel he’d been working on. Horsting said he paid Vox $3,300 to ghostwrite a 280-page book, and was upsold an Amazon marketing and publishing package for $7,500.

In an interview, Horsting said the prose that Vox Ghostwriting produced was “juvenile at best,” forcing him to rewrite and edit the work himself, and to partner with a graphical artist to produce illustrations. Horsting said that when it came time to begin marketing the novel, Vox Ghostwriting tried to further upsell him on marketing packages, while dodging scheduled meetings with no follow-up.

“They have a money back guarantee, and when they wouldn’t refund my money I said I’m taking you to court,” Horsting recounted. “I tried to serve them in Los Angeles but found no such office exists. I talked to a salon next door and they said someone else had recently shown up desperately looking for where the ghostwriting company went, and it appears there are a trail of corpses on this. I finally tracked down where they are in Texas.”

It was the same office where Ms. Riley had served her lawsuit. Horsting said he has a court hearing scheduled later this month, but he’s under no illusions that winning the case means he’ll be able to collect.

“At this point, I’m doing it out of pride more than actually expecting anything to come to good fortune for me,” he said.

The following mind map was helpful in piecing together key events, individuals and connections mentioned above. It’s important to note that this graphic only scratches the surface of the operations tied to this group. For example, in Case 2 we can see mention of academic cheating services, wherein people can be hired to take online proctored exams on one’s behalf. Those who hire these services soon find themselves subject to impersonation and blackmail attempts for larger and larger sums of money, with the threat of publicly exposing their unethical academic cheating activity.

13581338075?profile=RESIZE_710x

A “mind map” illustrating the connections between and among entities referenced in this story. Click to enlarge.

 

GOOGLE RESPONDS

KrebsOnSecurity reviewed the Google Ad Transparency links for nearly 500 different websites tied to this network of ghostwriting, logo, app and web development businesses. Those website names were then fed into spyfu.com, a competitive intelligence company that tracks the reach and performance of advertising keywords. Spyfu estimates that between April 2023 and April 2025, those websites spent more than $10 million on Google ads.
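(To make the tallying step concrete: here is a minimal sketch, assuming a hypothetical CSV export of estimated per-domain ad spend. The file name and column names below are invented for illustration and are not Spyfu’s actual export format.)

```python
# Minimal sketch: summing estimated ad spend across domains from a
# hypothetical CSV export (columns invented; not Spyfu's real format).
import csv
from collections import defaultdict

def total_spend_by_domain(csv_path: str) -> dict[str, float]:
    totals: dict[str, float] = defaultdict(float)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["domain"]] += float(row["est_monthly_spend_usd"])
    return dict(totals)

if __name__ == "__main__":
    totals = total_spend_by_domain("ad_spend_export.csv")  # hypothetical file
    print(f"Domains tracked: {len(totals)}")
    print(f"Estimated total spend: ${sum(totals.values()):,.0f}")
```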

Reached for comment, Google said in a written statement that it is constantly policing its ad network for bad actors, pointing to an ads safety report (PDF) showing Google blocked or removed 5.1 billion bad ads last year — including more than 500 million ads related to trademarks.

“Our policy against Enabling Dishonest Behavior prohibits products or services that help users mislead others, including ads for paper-writing or exam-taking services,” the statement reads. “When we identify ads or advertisers that violate our policies, we take action, including by suspending advertiser accounts, disapproving ads, and restricting ads to specific domains when appropriate.”

13581337886?profile=RESIZE_710x

Google did not respond to specific questions about the advertising entities mentioned in this story, saying only that “we are actively investigating this matter and addressing any policy violations, including suspending advertiser accounts when appropriate.”

From reviewing the ad accounts that have been promoting these scam websites, it appears Google has very recently acted to remove a large number of the offending ads. Prior to my notifying Google about the extent of this ad network on April 28, the Google Ad Transparency network listed over 500 ads for 360 Digital Marketing; as of this publication, that number had dwindled to 10.

On April 30, Google announced that starting this month its ads transparency page will display the payment profile name as the payer name for verified advertisers, if that name differs from their verified advertiser name. Searchengineland.com writes the changes are aimed at increasing accountability in digital advertising.

This spreadsheet lists the domain names, advertiser names, and Google Ad Transparency links for more than 350 entities offering ghostwriting, publishing, web design and academic cheating services.

KrebsOnSecurity would like to thank the anonymous security researcher NatInfoSec for their assistance in this investigation.

For further reading on Abtach and its myriad companies in all of the above-mentioned verticals (ghostwriting, logo design, etc.), see this Wikiwand entry.

 

By: Brian Krebs (Investigative Journalist, Award Winning Author)

Original link to the blog: Click Here

Read more…
13571324094?profile=RESIZE_710x
By Byron V. Acohido

SAN FRANCISCO — The cybersecurity industry showed up here in force last week: 44,000 attendees, 730 speakers, 650 exhibitors and 400 members of the media flooding Moscone Convention Center in the City by the Bay. Related: RSAC 2025 by the numbers

Beneath the cacophony of GenAI-powered product rollouts, the signal that stood out was subtler: a broadening consensus that artificial intelligence — especially the agentic kind — isn’t going away. And also that intuitive, discerning human oversight is going to be essential at every step.

13571324285?profile=RESIZE_180x180

Abdullah

Let’s start with Dr. Alissa “Dr. Jay” Abdullah, Mastercard’s Deputy CSO, who gave a keynote address at the CSA Summit, hosted by the Cloud Security Alliance at RSAC 2025. She spoke passionately about being a daily power user of AI, recounting an experiment in which she tried to generate a collectible 3D action figure of herself using multiple GenAI platforms.

Her prompts were clear, detailed, and methodical — yet the results were laughably off-base. The takeaway? Even well-crafted prompts can be derailed by flawed models or skewed training data. In this case, none of the models managed to reliably portray her likeness or professional context — despite the input being consistent.

 

AI needs a human chaperone

This wasn’t just a quirky user experience — it underscored deeper concerns about bias, hallucination, and the immaturity of enterprise-grade AI. Abdullah’s takeaway: lean in, yes. But test relentlessly, and don’t take the output at face value.

That kind of real-world friction — where AI promise meets AI reality — showed up again and again in RSAC’s meatier panels and threat briefings. The SANS Institute’s Five Most Dangerous New Attack Techniques panel highlighted how authorization sprawl is giving attackers frictionless lateral movement in hybrid cloud environments. The fix? Better privilege mapping and tighter identity controls — areas ripe for GenAI-powered solutions, if used responsibly.

13571324480?profile=RESIZE_584x

Similarly, identity emerged as RSAC’s dominant theme, fueled by Verizon’s latest Data Breach Investigations Report showing credential abuse remains a top attack vector. Identity, as Darren Guccione of Keeper Security framed it, is the modern perimeter. Yet AI complicates the landscape: it can accelerate password cracking even as it enables smarter detection. Once again, the takeaway was clear — context, not hype, must drive deployment.

13571324295?profile=RESIZE_180x180

Krebs

Meanwhile, the emotional centerpiece of the conference came from Chris Krebs, the embattled former CISA director. Facing political heat at home, Krebs nonetheless took the stage alongside Jen Easterly and Rob Joyce to reflect on fictional and real-world cyber catastrophes. His call to arms was unflinching: “Cybersecurity is national security. Every one of you is on the front lines of modern warfare.”

And he’s right. Because behind the RSAC glitz lies a gnawing truth: complexity has outpaced human capacity. AI may be the only way defenders can keep up — if regulators allow it, and if we wield it wisely.

 

Customer-ready — on the fly

For all the stage talk about escalating threats, tightening regulations, and the urgent need to shore up identity defenses, it was the hallway conversations — the unscripted, sometimes offbeat stories from seasoned professionals — that offered the clearest glimpse of what comes next.

To wit: just a few moments after Mastercard’s Abdullah gave her keynote at the CSA Summit, I happened to run into a senior sales rep from a mobile app security firm, whom I’ve known for a few years. I asked him if he was using GenAI, and he shared how he has trained a personal agentic assistant to help field technical questions from prospects.

This veteran sales rep described how he uses ChatGPT to synthesize technical answers and generate customer-ready language on the fly. He told me he rigorously vets every GenAI output — especially when it produces information relayed back to customers with engineering backgrounds. Any hint of a hallucinated response could destroy credibility he’s spent months building. So he validates, revises and retrains constantly. It’s not about cutting corners; it’s about enhancing fluency without sacrificing integrity, he told me.

 

Natively supported GenAI

I also had an enlightening discussion with Tim Eades, CEO of year-old Anetac, a GenAI-native platform focused on real-time identity risk, who offered sharp insight into why newer vendors have an inherent edge. Older enterprise systems, he explained, are like heritage homes that need to be put on stilts before the foundation can be replaced.

Retrofitting LLMs onto legacy infrastructure is not just expensive; it can be futile without rethinking data pipelines and user interfaces from the ground up. Because Anetac was built in the GenAI era, Eades told me, they can natively support real-time data integration, dynamic prompt generation, and intuitive user-level customization. This agility doesn’t just reduce hallucinations — it accelerates meaningful innovation, Eades asserts.

 

Curated knowledge sets

Meanwhile, Jason Keirstead, Co-founder and VP of Security Strategy of Simbian, a GenAI-native platform automating alert triage and threat investigation, walked me through how his team integrates LLMs into security operations workflows. We met in the nearby financial district, inside the high-rise offices of Cota Capital, one of Simbian’s early investors.

Unlike platforms that simply bolt on a chatbot and hope users will “talk to the AI,” Simbian embeds agentic AI directly into workflows—handling alert triage, threat hunting, and vulnerability prioritization behind the scenes, Keirstead told me. The user never interacts with a prompt window. Instead, Simbian’s internal RAG (retrieval-augmented generation) system, combined with extensive prompt libraries tuned for cybersecurity use cases, processes each alert and surfaces recommended actions automatically.

Keirstead didn’t downplay the complexity of making this work. While LLMs can still hallucinate, he emphasized that Simbian avoids generic, open-ended use cases in favor of tightly scoped applications. By combining curated knowledge sets, domain-specific tuning, and hands-on collaboration with early adopters, the company has engineered a system designed to deliver consistent, trustworthy results.
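Keirstead didn’t share implementation details, but the pattern he describes — retrieval over a curated knowledge set feeding a tightly scoped prompt, with no open-ended chat window — can be sketched in outline. Everything below (rule names, guidance text, the call_llm() stub) is invented for illustration; it is not Simbian’s code.

```python
# Sketch of scoped, retrieval-backed alert triage (illustrative only).
from dataclasses import dataclass

@dataclass
class Alert:
    source: str       # e.g., "EDR" or "SIEM"
    rule_name: str
    raw_fields: dict

# Curated knowledge set: vetted, analyst-written guidance per detection rule
# (a stand-in for a real vector store behind a RAG system).
KNOWLEDGE_BASE = {
    "suspicious_powershell": "Check the parent process; encoded commands are high risk.",
    "impossible_travel": "Compare login geos and VPN egress points before escalating.",
}

TRIAGE_PROMPT = """You are assisting with SOC alert triage. Use ONLY the
reference guidance below; if it does not apply, answer 'insufficient context'.

Reference guidance: {guidance}
Alert fields: {alert}

Recommend exactly one action: escalate, suppress, or gather more evidence."""

def call_llm(prompt: str) -> str:
    # Stub standing in for a real model endpoint, so the sketch runs end to end.
    return "escalate" if "high risk" in prompt else "insufficient context"

def triage(alert: Alert) -> str:
    guidance = KNOWLEDGE_BASE.get(alert.rule_name, "none")  # retrieval step
    prompt = TRIAGE_PROMPT.format(guidance=guidance, alert=alert.raw_fields)
    return call_llm(prompt)

print(triage(Alert("EDR", "suspicious_powershell", {"cmd": "powershell -enc ..."})))
```

The design point is the scoping: the model never fields an open-ended question, only a specific alert plus vetted guidance — one practical way to keep outputs consistent.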

 

The 100X effect

A similar dynamic was at play at Corelight, a network detection and response provider focused on high-fidelity telemetry. I spoke with CEO Brian Dye, who underscored how agentic AI is beginning to boost threat detection — but only when closely guided. Their team uses LLMs to streamline analysis of noisy telemetry and surface relevant insights faster.

Yet Dye cautioned that simply injecting a chatbot doesn’t cut it; analysts still need domain expertise to steer the tool, validate results, and keep it from introducing blind spots. It’s the human-machine combo, he emphasized, that delivers real value.

Meanwhile, John DiLullo, CEO of Deepwatch, a managed detection and response firm focused on high-fidelity security operations, framed GenAI as a conversation accelerator — but only when harnessed with clarity and intent. He described how top-tier cybersecurity veterans are using it not to replace judgment but to distill technical nuance for broader audiences. This aligns with what some are calling the ‘100x effect’ — experienced practitioners using GenAI not to automate away their judgment, but to scale their expertise and speed of execution.

 

Must-have skill: prompt engineering

Jamison Utter, security evangelist at A10 Networks, a supplier of network performance and DDoS defense technologies, was especially candid. He explained how attackers are already using LLMs to write custom malware, simulate attacks, and bypass traditional defenses — at speed and scale. On defense, A10 has begun tapping GenAI to analyze DDoS telemetry in real time, dramatically reducing time-to-insight. The payoff? Analysts who know how to prompt effectively are seeing gains, but only after substantial trial-and-error. His bottom line: prompt engineering is now a frontline skill.
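To make “prompting effectively” concrete, here is a hedged illustration — invented telemetry fields and prompts, not A10’s actual tooling — contrasting a vague prompt with a structured one for the same snapshot.

```python
# Invented telemetry and prompts, for illustration only (the ASNs and the
# target IP come from documentation-reserved ranges).
telemetry = {
    "window": "2025-04-28T10:00Z/10:05Z",
    "top_src_asns": ["AS64500", "AS64511"],
    "pkt_rate_pps": 4_200_000,
    "syn_ratio": 0.97,
    "target": "203.0.113.10:443",
}

vague_prompt = f"Analyze this DDoS data: {telemetry}"

structured_prompt = f"""You are a DDoS analyst. Given the 5-minute telemetry
summary below, answer in exactly three bullet points:
1. Most likely attack type, citing the single field that supports it.
2. The first mitigation to apply.
3. One follow-up metric to watch.

Telemetry: {telemetry}"""

print(structured_prompt)
```

The structured version pins down the output format and forces the model to cite evidence — the kind of refinement that, per Utter, only emerges from trial and error.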


Anand Akela, CMO of Alcavio, a deception-driven threat detection company, sketched out a different angle: using AI not to interpret threats, but to camouflage critical assets. Alcavio blends traditional deception tech with AI-powered customization — generating realistic honeypots, honeytokens, and decoy credentials at scale. The idea is to use AI’s generative muscle to outwit AI-generated threats. Akela admitted they don’t rely on full-blown LLMs yet, but said their roadmap includes using GenAI to tailor decoy strategies dynamically, based on evolving attack vectors.
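Alcavio hasn’t published how its generation works; as a loose sketch of the honeytoken idea — decoy credentials that look real, grant nothing, and should alert on any use — consider the following, with every name and field invented.

```python
# Illustrative honeytoken generator -- not Alcavio's implementation. A real
# deception platform would also plant these and alert whenever one is used.
import random
import secrets

NAMES = ["ana", "raj", "li", "tomas", "kim"]
ACCOUNTS = ["svc-backup", "svc-etl", "admin-legacy"]
SPOTS = ["browser credential store", "config file", "internal wiki page"]

def make_decoy_credential(domain: str = "corp.example.com") -> dict:
    return {
        # Plausible-looking service account; grants no real access.
        "username": f"{random.choice(NAMES)}.{random.choice(ACCOUNTS)}@{domain}",
        "password": secrets.token_urlsafe(12),
        "planted_in": random.choice(SPOTS),
    }

for decoy in (make_decoy_credential() for _ in range(3)):
    print(decoy)
```

By construction, any authentication attempt with one of these credentials is a high-fidelity alert.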

 

Guided speed, common sense

At Cyware, a cyber fusion platform unifying threat intelligence and incident response, Patrick Vandenberg, Senior Director of Product Marketing, emphasized speed. Their threat intelligence chatbot reduces days of manual triage to seconds, surfacing relevant patterns and flagging threats for human review.

But it’s not autopilot. The system only works well when guided by seasoned analysts who understand what to ask for — and how to interpret the results. It’s the classic augmentation model: the AI expands reach and efficiency, but the analyst still holds the reins.

Willy Leichter, CMO of PointGuard AI, a startup focused on visibility and risk governance for GenAI use, captured the unease many feel. His firm helps companies discover and govern shadow AI projects — especially open-source tools and rogue models flowing into production. The market, he said, hasn’t had its “SolarWinds moment” for GenAI misuse yet, but everyone’s bracing for it. His message to worried CISOs: start with visibility, then layer on risk scoring and usage controls. And don’t let urgency erase common sense.

 

Driving resilience — not risk

Across each of these conversations, a common thread emerged: we’re beyond the point of deciding whether to use GenAI. The question now is how to use it well. The answer seems to hinge not on the models themselves, but on the context in which they’re deployed, the clarity of the prompts, and the vigilance of the humans overseeing them.

Agentic AI is here to stay. It’s versatile, powerful, and rapidly evolving. It doesn’t wait to be prompted — it’s goal-driven, context-aware, and built to act. But like any high-performance engine, it demands an attentive driver. Without careful prompting, constant tuning, and relentless validation, even the most promising assistants can steer us off course. That tension — powerful augmentation versus potential misfire — defined the conference.

RSAC 2025 didn’t just showcase agentic AI’s momentum; it clarified the mandate. This isn’t about chasing silver bullets. It’s about embracing a tool that demands human vigilance at every turn.

If we want AI to drive resilience — not risk — we’ll need to stay firmly in the driver’s seat. I’ll keep watch and keep reporting.

13571319290?profile=RESIZE_400x

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

(Editor’s note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own—drawn from lived experience and editorial judgment honed over decades of investigative reporting.)

Original Link to the Blog: Click Here

Read more…

In what experts are calling a novel legal outcome, the 22-year-old former administrator of the cybercrime community Breachforums will forfeit nearly $700,000 to settle a civil lawsuit from a health insurance company whose customer data was posted for sale on the forum in 2023. Conor Brian Fitzpatrick, a.k.a. “Pompompurin,” is slated for resentencing next month after pleading guilty to access device fraud and possession of child sexual abuse material (CSAM).

13571322475?profile=RESIZE_710x

A redacted screenshot of the Breachforums sales thread. Image: Ke-la.com.

On January 18, 2023, denizens of Breachforums posted for sale tens of thousands of records — including Social Security numbers, dates of birth, addresses, and phone numbers — stolen from Nonstop Health, an insurance provider based in Concord, Calif.

Class-action attorneys sued Nonstop Health, which added Fitzpatrick as a third-party defendant to the civil litigation in November 2023, several months after he was arrested by the FBI and criminally charged with access device fraud and CSAM possession. In January 2025, Nonstop agreed to pay $1.5 million to settle the class action.

Jill Fertel is a former prosecutor who runs the cyber litigation practice at Cipriani & Werner, the law firm that represented Nonstop Health. Fertel told KrebsOnSecurity this is the first and only case where a cybercriminal or anyone related to the security incident was actually named in civil litigation.

“Civil plaintiffs are not at all likely to see money seized from threat actors involved in the incident to be made available to people impacted by the breach,” Fertel said. “The best we could do was make this money available to the class, but it’s still incumbent on the members of the class who are impacted to make that claim.”

Mark Rasch is a former federal prosecutor who now represents Unit 221B, a cybersecurity firm based in New York City. Rasch said he doesn’t doubt that the civil settlement involving Fitzpatrick’s criminal activity is a novel legal development.

“It is rare in these civil cases that you know the threat actor involved in the breach, and it’s also rare that you catch them with sufficient resources to be able to pay a claim,” Rasch said.

Despite admitting to possessing more than 600 CSAM images and personally operating Breachforums, Fitzpatrick was sentenced in January 2024 to time served and 20 years of supervised release. Federal prosecutors objected, arguing that his punishment failed to adequately reflect the seriousness of his crimes or serve as a deterrent.

13571322487?profile=RESIZE_710x

An excerpt from a pre-sentencing report for Fitzpatrick indicates he had more than 600 CSAM images on his devices.

Indeed, the same month he was sentenced Fitzpatrick was rearrested (PDF) for violating the terms of his release, which forbade him from using a computer that didn’t have court-required monitoring software installed.

Federal prosecutors said Fitzpatrick went on Discord following his guilty plea and professed innocence to the very crimes to which he’d pleaded guilty, stating that his plea deal was “so BS” and that he had “wanted to fight it.” The feds said Fitzpatrick also joked with his friends about selling data to foreign governments, exhorting one user to “become a foreign asset to china or russia,” and to “sell government secrets.”

In January 2025, a federal appeals court agreed with the government’s assessment, vacating Fitzpatrick’s sentence and ordering him to be resentenced on June 3, 2025.

Fitzpatrick launched BreachForums in March 2022 to replace RaidForums, a similarly popular crime forum that was infiltrated and shut down by the FBI the previous month. As administrator, his alter ego Pompompurin served as the middleman, personally reviewing all databases for sale on the forum and offering an escrow service to those interested in buying stolen data.

A yearbook photo of Fitzpatrick unearthed by the Yonkers Times.

The new site quickly attracted more than 300,000 users, and facilitated the sale of databases stolen from hundreds of hacking victims, including some of the largest consumer data breaches in recent history. In May 2024, a reincarnation of Breachforums was seized by the FBI and international partners. Still more relaunches of the forum occurred after that, with the most recent disruption last month.

As KrebsOnSecurity reported last year in The Dark Nexus Between Harm Groups and The Com, it is increasingly common for federal investigators to find CSAM when searching devices seized from cybercriminal suspects. While the mere possession of CSAM is a serious federal crime, not all of those caught with CSAM are necessarily creators or distributors of it. Fertel said some cybercriminal communities have been known to require new entrants to share CSAM as a way of proving that they are not a federal investigator.

“If you’re going to the darkest corners of Internet, that’s how you prove you’re not law enforcement,” Fertel said. “Law enforcement would never share that material. It would be criminal for me as a prosecutor, if I obtained and possessed those types of images.”

 

By: Brian Krebs (Investigative Journalist, Award Winning Author)

Original link to the blog: Click Here

Read more…
13571318874?profile=RESIZE_710x
By Byron V. Acohido

The response to our first LastWatchdog Strategic Reel has been energizing — and telling. Related: What is a cyber kill chain?

The appetite for crisp, credible insight is alive and well. As the LinkedIn algo picked up steam and auto-captioning kicked in, it became clear that this short-form format resonates. Not just because it’s fast — but because it respects the intelligence of the audience.

This second-day snapshot continues where we left off: amplifying frontline voices from RSAC 2025. What’s most striking is the consistency of message across these interviews. Whether from Fortinet or ESET, Corelight or Anomali, the theme is clear: GenAI is no longer theoretical. It’s here — and it’s already influencing how SOC teams operate, triage, and respond.

Each voice captured in this reel isn’t reading from a script. These are compressed bursts of clarity from senior technologists who live this reality every day.

The goal with Strategic Reels is simple: create a format that works at the speed of LinkedIn but doesn’t sacrifice substance. The result? A tool that helps thought leaders cut through the noise — and stay top of mind.

If this approach resonates with your team or client, reach out. There’s room in this series for more real voices, more credible takes — and more relevance, exactly when it’s needed.

Watch the embedded reel, and follow me on LinkedIn for upcoming drops. For sponsorship opportunities, I’m happy to discuss what’s possible.

13571319290?profile=RESIZE_400x

Acohido

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

Original Link to the Blog: Click Here

Read more…

The U.S. government today unsealed criminal charges against 16 individuals accused of operating and selling DanaBot, a prolific strain of information-stealing malware that has been sold on Russian cybercrime forums since 2018. The FBI says a newer version of DanaBot was used for espionage, and that many of the defendants exposed their real-life identities after accidentally infecting their own systems with the malware.

 

13571121292?profile=RESIZE_710x

DanaBot’s features, as promoted on its support site. Image: welivesecurity.com.

 

Initially spotted in May 2018 by researchers at the email security firm Proofpoint, DanaBot is a malware-as-a-service platform that specializes in credential theft and banking fraud.

Today, the U.S. Department of Justice unsealed a criminal complaint and indictment from 2022, which said the FBI identified at least 40 affiliates who were paying between $3,000 and $4,000 a month for access to the information stealer platform.

The government says the malware infected more than 300,000 systems globally, causing estimated losses of more than $50 million. The ringleaders of the DanaBot conspiracy are named as Aleksandr Stepanov, 39, a.k.a. “JimmBee,” and Artem Aleksandrovich Kalinkin, 34, a.k.a. “Onix”, both of Novosibirsk, Russia. Kalinkin is an IT engineer for the Russian state-owned energy giant Gazprom. His Facebook profile name is “Maffiozi.”

According to the FBI, there were at least two major versions of DanaBot; the first was sold between 2018 and June 2020, when the malware stopped being offered on Russian cybercrime forums. The government alleges that the second version of DanaBot — emerging in January 2021 — was provided to co-conspirators for use in targeting military, diplomatic and non-governmental organization computers in several countries, including the United States, Belarus, the United Kingdom, Germany, and Russia.

“Unindicted co-conspirators would use the Espionage Variant to compromise computers around the world and steal sensitive diplomatic communications, credentials, and other data from these targeted victims,” reads a grand jury indictment dated Sept. 20, 2022. “This stolen data included financial transactions by diplomatic staff, correspondence concerning day-to-day diplomatic activity, as well as summaries of a particular country’s interactions with the United States.”

The indictment says the FBI in 2022 seized servers used by the DanaBot authors to control their malware, as well as the servers that stored stolen victim data. The government said the server data also show numerous instances in which the DanaBot defendants infected their own PCs, resulting in their credential data being uploaded to stolen data repositories that were seized by the feds.

“In some cases, such self-infections appeared to be deliberately done in order to test, analyze, or improve the malware,” the criminal complaint reads. “In other cases, the infections seemed to be inadvertent – one of the hazards of committing cybercrime is that criminals will sometimes infect themselves with their own malware by mistake.”

 

13571121667?profile=RESIZE_710x

Image: welivesecurity.com

 

A statement from the DOJ says that as part of today’s operation, agents with the Defense Criminal Investigative Service (DCIS) seized the DanaBot control servers, including dozens of virtual servers hosted in the United States. The government says it is now working with industry partners to notify DanaBot victims and help remediate infections. The statement credits a number of security firms with providing assistance to the government, including ESET, Flashpoint, Google, Intel 471, Lumen, PayPal, Proofpoint, Team CYMRU, and ZScaler.

It’s not unheard of for financially-oriented malicious software to be repurposed for espionage. A variant of the ZeuS Trojan, which was used in countless online banking attacks against companies in the United States and Europe between 2007 and at least 2015, was for a time diverted to espionage tasks by its author.

As detailed in this 2015 story, the author of the ZeuS trojan created a custom version of the malware to serve purely as a spying machine, which scoured infected systems in Ukraine for specific keywords in emails and documents that would likely only be found in classified documents.

The public charging of the 16 DanaBot defendants comes a day after Microsoft joined a slew of tech companies in disrupting the IT infrastructure for another malware-as-a-service offering — Lumma Stealer, which is likewise offered to affiliates under tiered subscription prices ranging from $250 to $1,000 per month. Separately, Microsoft filed a civil lawsuit to seize control over 2,300 domain names used by Lumma Stealer and its affiliates.

 

By: Brian Krebs (Investigative Journalist, Award Winning Author)

Original link to the blog: Click Here

Read more…
In this week's highlights, we spotlight essential developments every cybersecurity leader should track. Explore how nation-state actors like Russia’s Fancy Bear (APT28) are intensifying their focus on logistics and IT firms to monitor geopolitical…
Read more…

 

13563046688?profile=RESIZE_180x180

Gemini imagines RSA 2025 (very tame!)

 

Ah, RSA. That yearly theater (Carnival? Circus? Orgy? Got any better synonyms, Gemini?) of 44,000 people vaguely (hi salespeople!) related to cybersecurity … where the air is thick with buzzwords and the vendor halls echo with promises of a massive revolution — every year.

And this year, of course, the primary driver was (still) AI. To put it in a culinary analogy — as it is well known, I like my analogies well-done — if last year’s event felt like a hopeful wait for a steak (“where’s the beef?”), this year feels like we got served a plate with a lot of garnish. Very visually stimulating garnish. But still no meat.

And I still can’t shake the feeling that in a year we might be in the same place. Hopefully not.

But let’s break it down. Just like a good stew, let’s delve (guess who wrote this sentence?) into the ingredients that made up RSA 2025.

 

1. The AI Hype Train: All Aboard! (But Where Are We Going?)

First off, let’s address the elephant in the room, or rather, the “hype-intelligent” [A.C. — I wrote this joke, not AI, cool typo, eh?] chatbot in the cloud: AI. Everyone and their grandmother seemed to have an “AI-powered” solution; some even went further, claiming “AI-native” (more on this particular creation later).

Booths were festooned with AI logos, and conversations invariably veered towards gen AI and… yes… agentic AI too (so 2025 of them!). It was as if vendors had once again discovered a magical incantation that could solve all cybersecurity woes. “Add AI and bam!”, or something like that. Like perhaps zero trust in 2022 or so?

But here’s the rub: under the surface, how much was “sizzle” and how much was “steak”? As noted, many discussions felt like “AI addressable” rather than “AI solvable” (the idea for this term comes from this podcast episode, coined by Eric Foster of Tenex.AI … yes… AI). Which means, sure, we can point AI at a problem, but AI is not actually solving it completely and requires humans to do a non-trivial amount of work. But it does help!

You know those “agentic use cases”? Those real-world, game-changing use cases that actually deliver significant benefits right now? I was looking for them. And I didn’t find many. In fact, I didn’t find even a single robust one. And we really looked!

We saw a lot of people imagining the future of security, and I saw not much evidence of solid outcomes in the present. A lot of vendors slapped AI mentions onto their existing products (OK, some just onto their booths!), creating what I like to call “AI washing” or gratuitous mentions of AI.

So many AI applications in MDR (Managed Detection and Response) were “AI addressable but not AI solvable.” And let’s talk for a moment about the whole “AI SOC” concept. This is the dream we keep chasing. It echoes the promises made with SOAR (Security Orchestration, Automation, and Response) systems of yesteryear.

Frankly, the more I look at the “AI SOC” vendors with their “triage agents” (just $10 per alert! buy now!), the more I see SOAR circa 2015. These guys are marching down the same general path that SOAR treaded 10 years ago, albeit powered by modern tools, yet veering towards the same ditch…

Remember when SOAR was supposed to automate everything, eliminating the need for human intervention in security operations? How did that work out? Turns out you still need humans to remediate and interpret the (dirty) data, and deal with messed up IT environments. And I see the “AI SOC” is in danger of repeating the exact same trajectory. The idea of a fully automated security operations center powered by AI is just not realistic at all today.

So “AI in a SOC” — strong YES, “AI SOC” — hard no!

You still need people, humans, the real ones, to deal with the complicated situations, understand the context, use tribal knowledge, and make hard decisions. At most, those “AI SOC” tools can give guidance — “LLM says, hey, you guys should consider doing blah, blah, blah” — but it is ultimately humans who make the final call and do things. Today this is true. Please ask me again after RSA 2026…
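In code terms, the distinction is an approval gate: “AI in a SOC” means the model proposes and a human disposes. A minimal sketch, with all names invented:

```python
# "AI in a SOC" (advisory) vs. "AI SOC" (autonomous): the difference is the
# human approval gate. Illustrative only.

def ai_suggest(alert: dict) -> str:
    # Placeholder for an LLM call that returns advisory guidance only.
    return f"Consider isolating host {alert.get('host', '?')} and pulling EDR logs."

def handle_alert(alert: dict, analyst_decides) -> str:
    suggestion = ai_suggest(alert)             # the AI proposes...
    return analyst_decides(alert, suggestion)  # ...a human makes the call.

decision = handle_alert(
    {"host": "ws-042", "rule": "impossible_travel"},
    analyst_decides=lambda alert, tip: f"escalate (analyst saw: {tip})",
)
print(decision)
```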

 

2. The Resilience of the Past: What is Dead May Never Die (Or at Least Takes a Very Long Time to Do So)

Another striking observation was the continued presence and resilience of “legacy” technologies and vendors (some parallels to RSA 2022, as I recall). Think about it: many vendor names that a security manager from 2004 would recognize (or their merged and renamed descendants) were still prominent on the show floor.

Mobile security, our favorite example of a security island merging with the mainland, also appeared, though not as a central theme. It seemed like many technologies thought to be on their last legs are, well, not. I was wondering who buys from “3rd tier AV vendors” or from “54th tier SIEM” vendors? What keeps them afloat? Well, I think part of it is explained by the “change budget” concept that some of my Deloitte colleagues used to explain.

Essentially, organizations have a limited capacity for change, and when they finally update one security solution, they might not have the resources or will to update others, no matter the need. We do not have capacity to change everything, all at once. Change fatigue is real!

And this inertia allows older technologies to persist, even if better alternatives are available. Change is just hard. And companies keep sticking with what is familiar and what just “works” (even if it really doesn’t). It might be inefficient, it might be outdated, but it is here and is already integrated with other systems. Which, of course, creates even more “fun” problems! Just imagine, there are still some people somewhere working with COBOL and Windows 2003. Terrifying, indeed!

 

3. The Security of AI: Protecting the Protector

An ironic twist in this AI-palooza was the relative scarcity of discussions on securing AI itself (we did a fun presentation on this, BTW). While everyone was touting AI’s ability to defend systems, not enough attention was paid to defending AI systems themselves. Are we going back to the “WAF-but-for-AI” type of solution? Will we build special boxes to protect those AI systems? I hope not, as that would be the wrong approach. As somebody said, “‘known bad’ filtering never truly works” (sounds like Marcus Ranum?).

If AI is to become a critical part of our cybersecurity infrastructure, we must ensure it is robust and resilient against attacks. But I think the relative lack of focus on this area meant that buyers aren’t ready to buy AI security or haven’t even considered it at this stage.

Think for a moment: you are ready to deploy “AI for security” but you are not yet ready to “secure AI” — including that AI you just deployed for security. Please get terrified already!

 

4. Quick Hits and Hallway Chatter

Beyond the big themes, a few other observations:

  • Cloud Security: Wiz continued to market itself with a focus on brand recognition, perhaps showing how a powerful brand is cutting through the show’s noise. Their booth messaging focused on “Hi, we’re Wiz” and jokes, rather than detailing capabilities. So we seem to be in the “platforming” stage of cloud security.
  • SecOps/SOAR/SIEM: “AI Native” is now a thing, but its advantages over just “AI capabilities added to existing platforms” are still debated. Can we have an “AI native SIEM” or “AI native SOAR”? I think we will see many attempts, but the actual value here is yet to be proven. The jury is still out. Far out.
  • Pipelines: There are many vendors focused on log and telemetry collection pipelines, with some claiming to be faster or have better UX than existing solutions. The need is real, but whether we need a dozen such vendors remains to be seen.
  • Misc: There were goats, puppies, and unfortunately no bees. Also, some vendors were “shredding” or “destroying” adversaries. Which sounds fun, but maybe not that practical in the real world? And I really missed the NSA booth and Enigma machines. Maybe next time? We did ask somebody in the FBI booth about the NSA booth and we got an epic eye roll as a response…

 

 

Random Hot Take (Sorry, Gemini Thinks I Needed One!)

I have a strong feeling that in a year, at RSA 2026 we might be having the same discussions. We might be again waiting for a “steak” while getting a lot of “sizzle”. We might be talking again about how “AI will fix everything” without actually seeing it fixed. We might be looking at the same old technologies staying alive for another year. I really hope I am wrong. I really want the real “game changer” AI use cases to finally emerge. We will see…

You can check out our related presentations from the conference:

And don’t forget to listen to the recap podcast that inspired some of these thoughts!

 

 

- By Anton Chuvakin (Ex-Gartner VP of Research; Security at Google Cloud)

Original link of post is here

Read more…