Biswajit Banerjee's Posts (210)

“Who’s winning on the internet, the attackers or the defenders?”

I’m asked this all the time, and I can only ever give a qualitative hand-wavy answer. But Jason Healey and Tarang Jain’s latest Lawfare piece has amassed data.

The essay provides the first framework for metrics about how we are all doing collectively—and not just how an individual network is doing. Healey wrote to me in email:

The work rests on three key insights: (1) defenders need a framework (based in threat, vulnerability, and consequence) to categorize the flood of potentially relevant security metrics; (2) trends are what matter, not specifics; and (3) to start, we should avoid getting bogged down in collecting data and just use what’s already being reported by amazing teams at Verizon, Cyentia, Mandiant, IBM, FBI, and so many others.

The surprising conclusion: there’s a long way to go, but we’re doing better than we think. There are substantial improvements across threat operations, threat ecosystem and organizations, and software vulnerabilities. Unfortunately, we’re still not seeing increases in consequence. And since cost imposition is leading to a survival-of-the-fittest contest, we’re stuck with perhaps fewer but fiercer predators.

And this is just the start. From the report:

Our project is proceeding in three phases—­the initial framework presented here is only phase one. In phase two, the goal is to create a more complete catalog of indicators across threat, vulnerability, and consequence; encourage cybersecurity companies (and others with data) to report defensibility-relevant statistics in time-series, mapped to the catalog; and drive improved analysis and reporting.

This is really good, and important, work.

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here

Read more…

We’re excited to bring you an AI Demo Talk on "Harnessing AI to Personalize and Automate Human Risk Management" with Uzair Ahmed Gilani (CTO, Right-Hand Cybersecurity). In this session, we’ll dive into how AI can transform the way organizations manage human risk—by making training adaptive, threat simulations more realistic, and interventions timely and impactful.

Human error remains one of the biggest contributors to cyber incidents. Traditional awareness programs often fail to engage employees or adapt to evolving threats. This talk will explore how AI-driven approaches can bridge the gap—delivering personalized learning experiences, simulating modern attack techniques like deepfake vishing, and integrating real-time coaching directly into security operations.

Key Discussion Points:

  1. Deepfake Vishing Agent: Demonstrating how we simulate realistic vishing attacks using cloned voices and AI personas to help employees identify and respond to deepfake social engineering threats.
  2. AI-Enabled Content Creation: Showcasing how we generate personalized training content aligned with each company’s policies, tone, and language using generative AI models.
  3. Real-Time Personalized Intervention: Walking through how we integrate with security tools (SIEM, EDR, IAM) to deliver in-the-moment coaching based on live alerts and user behavior.

 

Date: August 26, 2025 (Tuesday)
Time: 9:00 AM PDT | 12:00 PM EDT | 9:30 PM IST

Join us live or register to receive the session recording if the timing doesn’t suit your timezone.

>> Register Here

Read more…

LAS VEGAS — A decade ago, the rise of public cloud brought with it a familiar pattern: runaway innovation on one side, and on the other, a scramble to retrofit security practices not built for the new terrain.

Related: GenAI workflow risks

Shadow IT flourished. S3 buckets leaked. CISOs were left to piece together fragmented visibility after the fact.


Something similar—but more profound—is happening again. The enterprise rush to GenAI is triggering a structural shift in how software is built, how decisions get made, and where the risk lives. Yet the foundational tools and habits of enterprise security—built around endpoints, firewalls, and user identities—aren’t equipped to secure what’s happening inside the large language models (LLMs) now embedded across critical workflows.

This is not just a new attack surface. It’s a systemic exposure—poorly understood and dangerously under-addressed.

The newly published IBM 2025 Cost of a Data Breach Report highlights a widening chasm between AI adoption and governance. It reveals that 13% of organizations suffered breaches involving AI models or applications, and among these, a staggering 97% lacked proper AI access controls.

Encouragingly, a new generation of AI-native security vendors is quietly charting the contours of this gap. Among them: Straiker, DataKrypto, and PointGuard AI.

I encountered all three here in Las Vegas at Black Hat 2025 — and their candid insights helped crystallize what I now see as a systemic failure hiding in plain sight.


Each startup is tackling a different facet of GenAI’s attack surface. None claim to offer a silver bullet. But taken together, they hint at what an AI-native security stack might eventually require.

AI-powered tools are flooding enterprise workflows at every level. From marketing copy to software development, GenAI is now threaded into production processes with startling speed. But the underlying engines—LLMs—operate using unfamiliar logic, drawing conclusions and taking actions in ways security teams aren’t trained to inspect or control.

Shadow AI is more than an abstract concern. Research from Menlo Security shows a 68% increase in shadow GenAI usage in 2025 alone, with 57% of employees admitting they’ve input corporate data into unsanctioned AI tools. The rise of AI web traffic, up 50% to 10.5 billion visits, signals how widespread this risk has become, even in browser-only usage contexts.


Ankur Shah, CEO of Straiker, put it bluntly: “If you’re not watching what your AI agent is doing in real time, you’re blind.” Straiker focuses on what happens when GenAI becomes agentic—when it starts chaining reasoning steps, invoking tools, or making decisions based on inferred context.

In this mode, traditional AppSec and data loss prevention tools fall flat. Straiker’s Ascend AI and Defend AI offerings are designed to red-team these behaviors and enforce runtime policy guardrails. Their insight: the attack surface is no longer just the prompt. It’s the behavior of the agent.

If Straiker focuses on the “what,” then DataKrypto focuses on the “where.” Specifically: where does GenAI process and store its most sensitive data? The answer, according to DataKrypto founder Luigi Caramico, is both simple and alarming: in cleartext, inside RAM.


“All the data—the model weights, the training materials, even user prompts—are held unencrypted in memory,” Caramico observes. “If you have access to the machine, you have access to everything.”

This exposure isn’t hypothetical. As more companies fine-tune LLMs with proprietary IP, the risk of theft or leakage escalates dramatically. Caramico likens LLMs to the largest lossy compression engines ever built—compressing terabytes of training data into billions of vulnerable parameters.

DataKrypto’s response is a product called FHEnom for AI: a secure SDK that encrypts model data in memory using homomorphic encryption, integrated with trusted execution environments (TEEs). This protects both the model itself and the sensitive data flowing into and out of it—without degrading performance. “Encryption at rest and in motion aren’t enough,” Caramico said. “This is encryption in use.”


The third leg of the emerging GenAI security stool comes from PointGuard AI, which focuses on discovery and governance. As AI code generation and prompt engineering proliferate, organizations are losing track of what AI tools are being used where, and by whom. Willy Leichter, PointGuard’s Chief Security Officer, frames it as a shadow IT problem on steroids.

“AI is the fastest-growing development platform we’ve ever seen,” he noted. “Developers are pulling in open-source models, auto-generating code, and building apps without any oversight from security teams.”

PointGuard scans code repos, runtime environments, and MLOps pipelines to surface unsanctioned AI use, detect prompt injection exposures, and score AI posture. It builds a bridge between AppSec and data governance teams who increasingly find themselves on the same front lines.

While their approaches differ, these companies are all converging on the same conclusion: the current security model isn’t just incomplete—it’s obsolete. Straiker brings behavioral monitoring into the spotlight. DataKrypto protects the compute layer itself. PointGuard restores visibility and governance to a world of AI-driven code and logic. Their respective visions are drawing the early contours of what a security-first foundation for GenAI might look like.

There is now, in fact, an OWASP Top 10 list of LLM vulnerabilities. But it is still early days, and there are few universal frameworks or agreed-upon best practices for how to integrate these new risks into traditional security operations. CISOs face a landscape that is both fragmented and urgent, where model misuse, shadow deployments, and memory scraping represent three fundamentally different risks—each requiring new tools and mental models.

To keep pace, security itself must evolve. That means understanding AI not just as a tool, but as a new kind of software logic that demands purpose-built protection. It means building systems that can interpret autonomous behavior, encrypt active memory, and continuously surface hidden AI integrations. Most of all, it means learning to think less like compliance officers and more like language models—probabilistic, context-aware, and relentlessly adaptive.

“Security can’t just follow the playbook anymore,” Leichter observed. “We have to match the speed and shape of the thing we’re trying to protect.”

That, in the end, may be the most important shift of all.

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

(Editor’s note: A machine assisted in creating this content. I used ChatGPT-4o to accelerate research, to scale correlations, to distill complex observations and to tighten structure, grammar, and syntax. The analysis and conclusions are entirely my own — drawn from lived experience and editorial judgment honed over decades of investigative reporting.)

 

By Byron Acohido (Pulitzer Prize-Winning Business Journalist)

Original Link to the Blog: Click Here

 
Read more…

This is my completely informal, uncertified, unreviewed and otherwise completely unofficial blog inspired by my reading of our next Cloud Threat Horizons Report, #12 (full version) that we just released (the official blog for the #1 report; my unofficial blogs for #2, #3, #4, #5, #6, #7, #8, #9, #10 and #11).

My favorite quotes from the report follow below:

  • “Google Cloud’s latest research highlights that common hygiene gaps like credential issues and misconfigurations are persistently exploited by threat actors to gain entry into cloud environments. During the first half of 2025, weak or absent credentials were the predominant threat, accounting for 47.1% of incidents. Misconfigurations (29.4%) and API/UI compromises (11.8%) followed as the next most frequently observed initial access vectors.”

THR 12 cloud compromise visual
  • “Notably, compared to H2 2024, we observed a 4.9% decrease in misconfiguration-based access and a 5.3% decrease in API/UI compromises (i.e., when an unauthorized entity gains access to, or manipulates, a system or data through an application’s user-facing screen or its programmatic connections). This shift appears to be partly absorbed by the rise of leaked credentials, representing 2.9% of initial access in H1 2025.” [A.C. — It gently suggests that while we’re making some progress on configurations, the attackers are moving to where the fruit is even more low-hanging: already leaked credentials.]
  • “Foundational security remains the strongest defense: Google Cloud research indicates that credential compromise and misconfiguration remain the primary entry points for threat actors into cloud environments, emphasizing the critical need for robust identity and access management and proactive vulnerability management.” [A.C. — it won’t be magical AI that saves you; it will be not giving admin rights to every employee]
  • “Financially motivated threat groups are increasingly targeting backup systems as part of their primary objective, challenging traditional disaster recovery, and underscoring the need for resilient solutions like Cloud Isolated Recovery Environments (CIRE) to ensure business continuity.” [A.C. — if your key defense against ransomware is still backups, well, we’ve got some “news” for you…]
  • “Advanced threat actors are leveraging social engineering to steal credentials and session cookies, bypassing MFA to compromise cloud environments for financial theft, often targeting high-value assets.” [A.C. — this is NOT an anti-MFA stance, this is a reminder that MFA helps a whole lot, yet if yours can be bypassed, then its value diminishes]
  • “Threat actors are increasingly co-opting trusted cloud storage services as a key component in their initial attack chains, deceptively using these platforms to host seemingly benign decoy files, often PDFs.” and “threat actors used .desktop files to infect systems by downloading decoy PDFs from legitimate cloud storage services from multiple providers, a tactic that deceives victims while additional malicious payloads are downloaded in the background” [A.C. — a nice example of the attacker thinking ahead about how the defender will respond]
  • “more traditional disaster recovery approaches, focused primarily on technical restoration, often fall short in addressing the complexities of recovering from a cyber event, particularly the need to re-establish trust with third parties.” [A.C. — The technical recovery is only half the battle. This speaks to the human element of incident response, and the broader impact of a breach.]

 

- By Anton Chuvakin (Ex-Gartner VP Research; Head of Security Solution Strategy, Google Cloud)

Original link of post is here

Read more…

I spoke at the Black Hat Conference in Las Vegas for the first time since the COVID-19 pandemic. Here’s what I learned and a few takeaways to share.

 


 

I just returned from Black Hat in Las Vegas, and once again, AI dominated all conversations on both the attack and defend side.

Here is a sample of some of the bold headlines coming out of the Black Hat event this year:

Dark Reading: Google Gemini AI Bot Hijacks Smart Homes, Turns Off the Lights – “Using invisible prompts, the attacks demonstrate a physical risk that could soon become reality as the world increasingly becomes more interconnected with artificial intelligence.”

 


eSecurity Planet: Former New York Times Cyber Reporter Issues Chilling Warning at Black Hat – “Cybersecurity is no longer just about code — it is about people, power, and the fight for truth.

“Speaking Thursday at Black Hat USA 2025, Nicole Perlroth, former New York Times reporter and founding partner of Silver Buckshot Ventures, warned that digital threats have outpaced traditional defenses. Malware has gone quiet and autonomous. Ransomware operates like a subscription service. Artificial intelligence has begun to distort reality itself.

“She explained that cyber threats have moved beyond networks, now targeting public discourse, critical systems, and democracy itself.

 


“'But now the threats are being automated by AI and deployed at scale,' Perlroth explained. 'The question is not whether we can stop them. It’s if we even have the courage to try.'”

SC Media: Observations from Black Hat 2025: The human toll behind the headlines – “After spending a few days walking around Mandalay Bay for Black Hat 2025, one theme stood out more than the others: the widening gap between the security industry’s innovations and the well-being of its people.

“Yes, this year’s conference was once again dominated by discussions about AI, threat intelligence, ransomware, cloud security, and identity. And yes, vendors are buzzing with new product announcements. But the conversations that stuck with me — the ones that felt urgent — were from the people talking about burnout, about broken job pipelines, and about the increasingly frustrating search for meaningful, stable employment in security.

“Every year, attendees show up looking for their next opportunity, but this year the tone has shifted. The stories feel heavier, the anxiety more palpable. People are openly wondering whether anyone ever sees the job applications they send, or if AI filters are kicking them out before a human ever has a chance to evaluate their experience. They’re describing a hiring process that feels cold, impersonal, and in many cases — entirely disconnected from the talent it claims to be seeking.”

Note: That article goes on to highlight themes like:
· Disruption fatigue and the AI impact
· Human resilience as the real differentiator

Cybersecurity Dive: US still prioritizing zero-trust migration to limit hacks’ damage – “The U.S. government is still pushing agencies to adopt zero-trust network designs, continuing a project that gained steam during the Biden administration, a senior cybersecurity policy official said on Wednesday.

“'It must continue to move forward,' Michael Duffy, the acting federal chief information security officer, said during a panel at the Black Hat cybersecurity conference. 'That architectural side of it is very important for us to get right as we integrate new technologies [like] artificial intelligence into the ways we operate.'”

For some more specific vendor announcement details, see:

Securityweek.com: Black Hat USA 2025 – Summary of Vendor Announcements (Part 1) and Black Hat USA 2025 – Summary of Vendor Announcements (Part 2)

NOTE: There are four parts to this series, so you can see the later announcements by just changing the last number on the URL.

Opening keynote from Black Hat 2025:

 

COMPARING THE RSA AND BLACK HAT CONFERENCES


Getting a bit more personal in my analysis, there are two monster cybersecurity conferences that dominate the cyber industry each year in the USA: RSA and Black Hat. The show floors for both of these conferences are massive with hundreds of companies having booths and the largest companies have large displays with numerous presentations and swag giveaways like T-shirts, mugs, hats, etc.

The keynotes and breakout sessions are also huge with thousands of attendees, and it is impossible to attend all of the sessions. You can also see many of these sessions on YouTube after the conferences end.

I spoke this year at Black Hat as part of a public-sector breakout panel, which you can see here. (The panel session will be available on demand from Trend Micro soon.)

See these YouTube channels for recorded sessions: RSA Conference sessions and Black Hat conference.

Unlike the RSA Conference in San Francisco, which is held in the spring each year, Black Hat is held in Las Vegas in early August. Yes, it is always HOT outside at Black Hat, about 108-109 degrees each day.

Pro tip: Ask for help from locals or others at the conference on how to use the trams to get between the casinos and hotels where the events are often held and also to the Mandalay Bay Conference Center. I was able to attend more than seven events all over Las Vegas without needing an Uber ride by hopping between buildings and using the tram system.

Both conferences have numerous breakfast, lunch and dinner events for attendees, and getting invitations to these networking events is fairly easy, especially for CISOs and security leaders. I have spoken at both conferences and the application process can be a challenge, but it is worth it. Note: the RSAC 2026 Conference Call for Submissions is now open until Aug. 18.

At RSA, the events tend to be more spread out across San Francisco, so Ubers or a lot of walking is required. (Although I walked a ton at Black Hat this week as well, with over 15,000 steps each day.)

Both conferences offer great times to network with colleagues from across the country, but for some reason I ran into more friends by chance this year at Black Hat than at any single RSA event. For example, the picture below is with Paul Curylo, Inova Health System CISO, whom I have not seen in several years.

Dan Lohrmann with Paul Curylo, Inova Health System CISO.
M. Brown

 

FINAL THOUGHTS


Both the RSA and Black Hat conferences can be overwhelming, and you will get home exhausted. I won’t pick one over the other, as I like different things about both events. However, I have been to many more RSA conferences than Black Hat conferences.

My No. 1 tip is to pace yourself and be very intentional with how you want to use your time. Prioritize relationships over tech details and be sure to get some sleep — despite the urge to go to one more after-hours event.

And when you do see former colleagues, friends, acquaintances and others on the show floor and running between buildings, stop and catch up. That’s what I also enjoy the most when I look back months later — the unexpected catch-up sessions with others in the cyber industry.

 

By: Dan Lohrmann (Cybersecurity Leader, Technologist, Keynote Speaker & Author)

Original link to the blog: Click Here

Read more…

Airportr is a service that allows passengers to have their luggage picked up, checked, and delivered to their destinations. As you might expect, it’s used by wealthy or important people. So if the company’s website is insecure, you’d be able to spy on lots of wealthy or important people. And maybe even steal their luggage.

Researchers at the firm CyberX9 found that simple bugs in Airportr’s website allowed them to access virtually all of those users’ personal information, including travel plans, or even gain administrator privileges that would have allowed a hacker to redirect or steal luggage in transit. Among even the small sample of user data that the researchers reviewed and shared with WIRED they found what appear to be the personal information and travel records of multiple government officials and diplomats from the UK, Switzerland, and the US.

“Anyone would have been able to gain or might have gained absolute super-admin access to all the operations and data of this company,” says Himanshu Pathak, CyberX9’s founder and CEO. “The vulnerabilities resulted in complete confidential private information exposure of all airline customers in all countries who used the service of this company, including full control over all the bookings and baggage. Because once you are the super-admin of their most sensitive systems, you have have [sic] the ability to do anything.”

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here

Read more…

San Francisco, Calif., Aug. 1, 2025, CyberNewswire—Comp AI, an emerging player in the compliance automation space, today announced it has secured $2.6 million in pre-seed funding to accelerate its mission of transforming how companies achieve compliance with critical frameworks like SOC 2 and HIPAA.

The funding round was co-led by OSS Capital and Grand Ventures, both bringing specialized expertise in backing innovative technology companies. OSS Capital, known for investing in open-source challengers including ProjectDiscovery, Plane, and Cal.com, joins Grand Ventures, which has a strong track record supporting developer and infrastructure platforms such as Astronomer, Payload, and Tembo.

The round also includes participation from notable angel investors David Cramer, founder of Sentry, and Ben Tossell of Ben’s Bites.

 

Addressing a Broken Industry

Compliance frameworks like SOC 2, HIPAA, and ISO 27001 have become essential for securing enterprise contracts, but the traditional path to achieving certification remains manual, expensive, and time-consuming. Comp AI is positioning itself as a disruptive alternative by combining open-source collaboration with advanced agentic AI automation.

Since emerging from stealth in April 2025, the company reports impressive early traction. Comp AI claims its first batch of customers has collectively saved over 2,500 hours on manual compliance work. The startup has also participated in Vercel’s Spring ’25 OSS initiative and attracted more than 3,500 companies to its pre-launch testing program.

The founding team consists of experienced Silicon Valley entrepreneurs Mariano Fuentes, Lewis Carhart, and Claudio Fuentes, who bring firsthand experience with the compliance challenges facing startups. Having navigated SOC 2 compliance at their previous ventures, the trio identified significant inefficiencies in the current market landscape.

 

Challenging Established Players

Comp AI is directly challenging established compliance platforms, which the company characterizes as costly and labor-intensive solutions that still require founders to spend weeks on manual compliance management. The startup claims its AI-powered approach can automate up to 90% of the compliance process, resulting in what it describes as “instant product-market fit” and monthly growth exceeding 89%.


Investment and Growth Plans

The new funding will support Comp AI’s expansion across multiple fronts over the next three months:

• Open-source platform expansion: Enabling security professionals and auditors to contribute control templates, framework mappings, and automation tools

• AI Agent Studio launch: Moving from beta to general availability, this tool allows customers to deploy automated agents for evidence collection, risk assessments, and vendor onboarding

 

Industry Recognition

The investment has drawn enthusiastic endorsements from both lead investors. “We have been blown away by Comp AI’s speed of execution and customer obsession. GRC has long been overdue for open source disruption, and Comp AI is delivering that in spades,” said Joseph Jacks, Founder of OSS Capital. Nathan Owen, General Partner at Grand Ventures, added: “GRC – specifically compliance (SOC 2, ISO 27001, GDPR, etc.) – has needed bold innovation for years, and Comp AI is leading the charge. Their platform isn’t an incremental improvement – it’s a complete reinvention.”

 

Looking Forward

According to the team, as Comp AI continues scaling its operations, the company is actively recruiting new team members. The funding round positions Comp AI to capitalize on the growing demand for streamlined compliance solutions as more companies seek to accelerate their path to enterprise readiness in an increasingly regulated business environment.

About Comp AI: Comp AI is a San Francisco-based startup founded in 2025 that’s revolutionizing how companies approach compliance certification. The company provides an AI-powered trust management platform that automates compliance for major frameworks, including SOC 2, HIPAA, GDPR, ISO 27001, and 25+ other regulatory standards.

Mission: To help 100,000 companies achieve SOC 2, ISO 27001, and GDPR compliance by 2032, making enterprise-grade security accessible to companies of all sizes without the traditional $25K+ annual costs and complexity. Comp AI is positioned as “the Vercel of compliance” – offering a developer-friendly, modern alternative to legacy compliance platforms that are often slow, expensive, and built primarily for large enterprises.

Media contact: Lewis Carhart, Founder, Bubba AI, Inc., hello@trycomp.ai

Editor’s note: This press release was provided by CyberNewswire as part of its press release syndication service. The views and claims expressed belong to the issuing organization.

 

By Byron Acohido (Pulitzer Prize-Winning Business Journalist)

Original Link to the Blog: Click Here

Read more…

Peter Gutmann and Stephan Neuhaus have a new paper—I think it’s new, even though it has a March 2025 date—that makes the argument that we shouldn’t trust any of the quantum factorization benchmarks, because everyone has been cooking the books:

Similarly, quantum factorisation is performed using sleight-of-hand numbers that have been selected to make them very easy to factorise using a physics experiment and, by extension, a VIC-20, an abacus, and a dog. A standard technique is to ensure that the factors differ by only a few bits that can then be found using a simple search-based approach that has nothing to do with factorisation…. Note that such a value would never be encountered in the real world since the RSA key generation process typically requires that |p-q| > 100 or more bits [9]. As one analysis puts it, “Instead of waiting for the hardware to improve by yet further orders of magnitude, researchers began inventing better and better tricks for factoring numbers by exploiting their hidden structure” [10].

A second technique used in quantum factorisation is to use preprocessing on a computer to transform the value being factorised into an entirely different form or even a different problem to solve which is then amenable to being solved via a physics experiment…

Lots more in the paper, which is titled “Replication of Quantum Factorisation Records with an 8-bit Home Computer, an Abacus, and a Dog.” The authors point out that the largest number that has been factored legitimately by a quantum computer is 35.
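The “simple search-based approach” the paper describes has a classic instance: Fermat’s difference-of-squares method, which factors n = pq almost instantly whenever p and q are close together. A minimal sketch (the primes below are my own illustrative choices, not values from the paper):

```python
from math import isqrt

def fermat_factor(n: int) -> tuple[int, int]:
    """Factor odd composite n by writing n = a^2 - b^2 = (a - b)(a + b).
    The search over a is tiny when n's two factors are close together."""
    a = isqrt(n)
    if a * a < n:          # start at ceil(sqrt(n))
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:    # a^2 - n is a perfect square: factorization found
            return a - b, a + b
        a += 1

# Two nearby primes -- the kind of "sleight-of-hand" modulus the paper criticizes:
p, q = 999_983, 1_000_003           # |p - q| = 20
print(fermat_factor(p * q))         # (999983, 1000003), found on the first iteration
```

With a real RSA modulus, where key generation forces |p − q| to be large, this loop would run for an astronomically long time, which is exactly the paper’s point: a modulus chosen with nearly equal factors says nothing about the hardness of factoring in general.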

I hadn’t known these details, but I’m not surprised. I have long said that the engineering problems between now and a useful, working quantum computer are hard. And by “hard,” we don’t know if it’s “land a person on the surface of the moon” hard, or “land a person on the surface of the sun” hard. They’re both hard, but very different. And we’re going to hit those engineering problems one by one, as we continue to develop the technology. While I don’t think quantum computing is “surface of the sun” hard, I don’t expect them to be factoring RSA moduli anytime soon. And—even there—I expect lots of engineering challenges in making Shor’s Algorithm work on an actual quantum computer with large numbers.

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here

 
Read more…

We’re excited to bring you an insightful fireside chat with Sandro Bucchianeri (Group Chief Security Officer at National Australia Bank Ltd.) and Erik Laird (Vice President - North America, FireCompass). 

 

About Sandro:

Sandro Bucchianeri is an award-winning global cybersecurity leader with over 25 years of experience, including 15 years as CISO and CSO for multinational organizations. Known for bridging the gap between business and technology, Sandro has led globally dispersed teams, driven strategic transformations, and advised leaders worldwide — including roles with the World Economic Forum’s Centre for Cybersecurity and the PCI Board of Advisors. A lifelong learner with an MSc in Information Security, he inspires confidence in both technical and business audiences through his pragmatic, forward-thinking approach to risk management.

 

Date: September 11, 2025 (Thursday)

Time: 9:00 PM AEST | 7:00 AM EDT | 4:30 PM IST

 

Join us live or register to receive the session recording if the timing doesn’t suit your timezone.

 

>> Register Here

 
Read more…

Fraudsters are flooding Discord and other social media platforms with ads for hundreds of polished online gaming and wagering websites that lure people with free credits and eventually abscond with any cryptocurrency funds deposited by players. Here’s a closer look at the social engineering tactics and remarkable traits of this sprawling network of more than 1,200 scam sites.

The scam begins with deceptive ads posted on social media that claim the wagering sites are working in partnership with popular social media personalities, such as Mr. Beast, who recently launched a gaming business called Beast Games. The ads invariably state that by using a supplied “promo code,” interested players can claim a $2,500 credit on the advertised gaming website.


An ad posted to a Discord channel for a scam gambling website that the proprietors falsely claim was operating in collaboration with the Internet personality Mr. Beast. Image: Reddit.com.

The gaming sites all require users to create a free account to claim their $2,500 credit, which they can use to play any number of extremely polished video games that ask users to bet on each action. At the scam website gamblerbeast[.]com, for example, visitors can pick from dozens of games like B-Ball Blitz, in which you play a basketball pro who is taking shots from the free throw line against a single opponent, and you bet on your ability to sink each shot.

The financial part of this scam begins when users try to cash out any “winnings.” At that point, the gaming site will reject the request and prompt the user to make a “verification deposit” of cryptocurrency — typically around $100 — before any money can be distributed. Those who deposit cryptocurrency funds are soon asked for additional payments.


However, any “winnings” displayed by these gaming sites are a complete fantasy, and players who deposit cryptocurrency funds will never see that money again. Compounding the problem, victims likely will soon be peppered with come-ons from “recovery experts” who peddle dubious claims on social media networks about being able to retrieve funds lost to such scams.

KrebsOnSecurity first learned about this network of phony betting sites from a Discord user who asked to be identified only by their screen name, “Thereallo.” Thereallo is a 17-year-old developer who operates multiple Discord servers and said they began digging deeper after users started complaining of being inundated with misleading spam messages promoting the sites.

“We were being spammed relentlessly by these scam posts from compromised or purchased [Discord] accounts,” Thereallo said. “I got frustrated with just banning and deleting, so I started to investigate the infrastructure behind the scam messages. This is not a one-off site, it’s a scalable criminal enterprise with a clear playbook, technical fingerprints, and financial infrastructure.”

After comparing the code on the gaming sites promoted via spam messages, Thereallo found they all invoked the same API key for an online chatbot that appears to be in limited use or else is custom-made. Indeed, a scan for that API key at the threat hunting platform Silent Push reveals at least 1,270 recently-registered and active domains whose names all invoke some type of gaming or wagering theme.
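The pivot Thereallo describes (matching a hard-coded chat API key across page sources to link domains) can be sketched in a few lines of Python. This is a minimal illustration only; the `apiKey` field name, the key format, and the domains below are hypothetical, not taken from the actual scam sites:

```python
import re
from collections import defaultdict

# Hypothetical fingerprint: a hard-coded chat-widget API key embedded in page source.
API_KEY_RE = re.compile(r'apiKey["\']?\s*[:=]\s*["\']([A-Za-z0-9_-]{16,})["\']')

def extract_api_key(html):
    """Return the first embedded chat API key found in the page source, or None."""
    m = API_KEY_RE.search(html)
    return m.group(1) if m else None

def cluster_by_key(pages):
    """Group domains that embed the same API key, a shared-infrastructure signal."""
    clusters = defaultdict(list)
    for domain, html in pages.items():
        key = extract_api_key(html)
        if key:
            clusters[key].append(domain)
    return dict(clusters)

# Toy inputs: two "sites" sharing one key, plus one unrelated site.
pages = {
    "gamble-a.example": '<script>chat.init({apiKey: "AAAA1111BBBB2222CCCC"})</script>',
    "gamble-b.example": '<script>chat.init({apiKey: "AAAA1111BBBB2222CCCC"})</script>',
    "other.example":    '<script>chat.init({apiKey: "ZZZZ9999YYYY8888XXXX"})</script>',
}
clusters = cluster_by_key(pages)
```

In a real investigation, a fingerprint like this would be fed into a threat-hunting platform such as Silent Push, which is how the 1,270 related domains were reportedly surfaced.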


The “verification deposit” stage of the scam requires the user to deposit cryptocurrency in order to withdraw their “winnings.”

Thereallo said the operators of this scam empire appear to generate a unique Bitcoin wallet for each gaming domain they deploy.

“This is a decoy wallet,” Thereallo explained. “Once the victim deposits funds, they are never able to withdraw any money. Any attempts to contact the ‘Live Support’ are handled by a combination of AI and human operators who eventually block the user. The chat system is self-hosted, making it difficult to report to third-party service providers.”

Thereallo discovered another feature common to all of these scam gambling sites [hereafter referred to simply as “scambling” sites]: If you register at one of them and then very quickly try to register at a sister property of theirs from the same Internet address and device, the registration request is denied at the second site.

“I registered on one site, then hopped to another to register again,” Thereallo said. Instead, the second site returned an error stating that a new account couldn’t be created for another 10 minutes.



The scam gaming site spinora dot cc shares the same chatbot API as more than 1,200 similar fake gaming sites.

“They’re tracking my VPN IP across their entire network,” Thereallo explained. “My password manager also proved it. It tried to use my dummy email on a site I had never visited, and the site told me the account already existed. So it’s definitely one entity running a single platform with 1,200+ different domain names as front-ends. This explains how their support works, a central pool of agents handling all the sites. It also explains why they’re so strict about not giving out wallet addresses; it’s a network-wide policy.”

In many ways, these scambling sites borrow from the playbook of “pig butchering” schemes, a rampant and far more elaborate crime in which people are gradually lured by flirtatious strangers online into investing in fraudulent cryptocurrency trading platforms.

Pig butchering scams are typically powered by people in Asia who have been kidnapped and threatened with physical harm or worse unless they sit in a cubicle and scam Westerners on the Internet all day. In contrast, these scambling sites tend to steal far less money from individual victims, but their cookie-cutter nature and automated support components may enable their operators to extract payments from a large number of people in far less time, and with considerably less risk and up-front investment.

Silent Push’s Zach Edwards said the proprietors of this scambling empire are spending big money to make the sites look and feel like some fancy new type of casino.

“That’s a very odd type of pig butchering network and not like what we typically see, with much lower investments in the sites and lures,” Edwards said.

Here is a list of all domains that Silent Push found were using the scambling network’s chat API.

 

By: Brian Krebs (Investigative Journalist, Award Winning Author)

Original link to the blog: Click Here

Read more…

I will be really, really honest with you — I have been totally “writer-blocked” and so I decided to release it anyway today … given the date.

So, a bit of history first. My “SOC visibility triad” was released on August 4, 2015 as a Gartner blog (it then appeared in quite a few papers, and kinda became a thing). It stated that to have good SOC visibility you need to monitor log (L) sources, endpoint (E) sources and network (N) sources. So, L+E+N was the original triad of 2015. Note that this covers monitoring mechanisms, not domains of security (more on this later; this matters!)

5 years later, in 2020, I revisited the triad, and after some agonizing thinking (shown at the above link), I kept it a triad. Not a quad, not a pentagram, not a freakin’ hex.

So, here in 2025, I am going to agonize much more .. and then make a call (hint: blog title has a spoiler!)

How do we change my triad?

First, should we …

… Cut Off a Leg?

Let’s look at whether the three original pillars should still be here in 2025. We are, of course, talking about endpoint visibility, network visibility and logs.


(src: Gartner via 2020 blog)

My 2020 analysis concluded that the triad is still very relevant, but potential for a fourth pillar is emerging. Before we commit to this possibly being a SOC visibility quad — that is, dangerously close to a quadrant — let’s check if any of the original pillars need to be removed.

Many organizations have evolved quite a bit since 2015 (duh!). At the same time, there are many organizations where IT processes seemingly have not evolved all that much since the 1990s (oops!).

First, I would venture a guess that, given that the EDR business is booming, endpoint visibility is still key to most security operations teams. A recent debate of Sysmon versus EDR is a reflection of that. Admittedly, EDR-centric SOCs peaked perhaps in 2021, and XDR fortunately died since that time, but endpoints still matter.

Similarly, while the importance of sniffing the traffic has been slowly decreasing due to encryption, bandwidth growth, cloud-native environments and more distributed work, network monitoring (now officially called NDR) is still quite relevant at many companies. You may say that “tcpdump was created in 1988” and that “1980s are so over”, but people still sniff. Packets, that is.

The third pillar of the original triad — logs — needs no defense. Log analysis is very much a booming business and the arrival of modern IT infrastructure and practices, cloud DevOps and others have only bolstered the importance of logs (and of course their volume). A small nit appears here: are eBPF traces logs? Let’s defer this question, we don’t need this answer to reassert the dominance of logs for detection and response.

At this point, I consider the original three legs of the triad to be well defended. They are still relevant, even though it is very clear that for true cloud native environments, the role of E (endpoint) and N (network) has decreased in relative terms, while the importance of logs has increased (logs became more load-bearing? Yes!)

Second, should we …

Add a Leg?

Now for the additions. I’ve had a few recent discussions with people about this, and I’m happy to go through a few candidates.

Add Cloud Visibility?

First, let’s tackle cloud. There are some arguments that cloud represents a new visibility pillar. The arguments in favor include the fact that cloud environments are different and that cloud visibility is critical. However, to me, a strong counterpoint is that cloud visibility, in many cases, is provided by endpoint, network, and logs, as well as a few other things. We will touch on these “few things” in a moment.

YES?

  • Cloud native environments are different, they suppress E and N
  • Cloud visibility is crucial today
  • Addresses unique cloud challenges
  • Cloud context is different, even if E and N pillars are used for visibility
  • CDR is a thing, some say

NO?

  • Cloud INCLUDES logs (lots, some say 3X in volume), and also E and N
  • Too much overlap with other pillars (such as E and N)
  • Cloud is a domain, not a mechanism for visibility.
  • CDR is not a thing, perhaps

Verdict:

  • NO, not a new pillar, part of triad already (via all other pillars)

Add Identity Visibility?

The second candidate to be added is, of course, identity. Here we have a much stronger case that identity needs to be added as a pillar. So perhaps we would have an endpoint, network, logs and identity as our model. Let’s review some pros and cons for identity as a visibility pillar.

 

YES?

  • Identity is key in the cloud; we observe a lot of things via IDP … logs (wait.. we already have a pillar called “logs”)
  • By making identity a dedicated pillar, organizations can ensure that it receives the attention it deserves
  • ITDR is a thing

NO?

  • But identity visibility is in the logs … we already have logs!
  • Too much overlap with other pillars (such as logs and E as well)
  • ITDR is kinda a thing, but it is also not a thing

Verdict:

  • Sorry, still a NO, but a weak NO. Identity is critical as context for logs, endpoint data and network telemetry, but it is not (on its own) a visibility mechanism.

Still, I don’t want to say that identity is merely about logs, because “baby … bathwater.” Some of the emerging ITDR solutions are not simply relying on logs. I don’t think that identity is necessarily a new pillar, but there are strong arguments that perhaps it should be…

What do you think — should identity be a new visibility pillar?

Add Application visibility?

Hold on here, Anton, we need more data!

Here:


(source: X poll)

and


(source: LinkedIn poll)

Now let’s tackle the final candidate, the one I considered in 2020 to be the fourth leg of a three-legged stool. There is, of course, application visibility, powered by the increased popularity of observability data, eBPF, etc. Application visibility is not really covered by endpoint agents, and definitely not by EDR observation. Similarly, application visibility is very hard to deduce from network traffic data.

YES?

  • Application visibility is not covered by E and N well enough
  • SaaS, cloud applications and — YES! — AI agents require deep application visibility.
  • This enables deeper insights of the app guts, as well as business logic

NO?

  • Is it just logs? Is it, though?
  • Do organizations have to do application visibility (via ADR or whatever)? Is this a MUST-HAVE … but for 2030?
  • Are many really ready for it in their SOCs today?

Verdict:

  • YES! I think to have a good 2025 SOC you must have the 4th pillar of application visibility.
  • And, yes, many are not ready for it yet, but this is coming…

So, we have a winner. Anton’s SOC visibility QUAD of 2025

  1. Logs
  2. Endpoint
  3. Network
  4. Application
SOC visibility quad 2025 by Anton Chuvakin

Are you ready? … Ready or not, HERE WE GOOOO!

 

- By Anton Chuvakin (Ex-Gartner VP Research; Head Security Google Cloud)

Original link of post is here

 
Read more…

ProPublica is reporting:

Microsoft is using engineers in China to help maintain the Defense Department’s computer systems—with minimal supervision by U.S. personnel—leaving some of the nation’s most sensitive data vulnerable to hacking from its leading cyber adversary, a ProPublica investigation has found.

The arrangement, which was critical to Microsoft winning the federal government’s cloud computing business a decade ago, relies on U.S. citizens with security clearances to oversee the work and serve as a barrier against espionage and sabotage.

But these workers, known as “digital escorts,” often lack the technical expertise to police foreign engineers with far more advanced skills, ProPublica found. Some are former military personnel with little coding experience who are paid barely more than minimum wage for the work.

This sounds bad, but it’s the way the digital world works. Everything we do is international, deeply international. Making anything US-only is hard, and often infeasible.

EDITED TO ADD: Microsoft has stopped the practice.

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here

Read more…

I froze when the question came in. If you work in cyber, you’ll know this question all too well. It’s the one that continues to resurface, both in boardrooms and at industry events:

“Why are people still the weakest link?”

Yes, it was familiar. Yes, it was provocative. But as I stood on stage, reading from my notes, I paused, looked at the question… and moved on to another.

I couldn’t ask it. Not because it was technically wrong — we all know the role human mistakes play in incidents — but because it reflected a mindset that’s no longer fit for modern leadership.

Putting it bluntly, framing people as the “weakest link” misses the mark. It’s a perspective rooted in blame rather than constructive leadership. And today, with an increasing volume of digital challenges – from malice to mistakes to malfunction – it’s vital we move beyond this narrative and focus on governance and empowerment instead.

The good news? Change is happening.

The UK Government’s refreshed Cyber Governance Code of Practice sets a clear direction with guidance, and is holding boards accountable for human cyber risk.

In this blog, I’m going to be taking a deeper dive into this transformation and the actionable steps organizations can take to address this critical issue. I’m approaching this from my role with OutThink, the Cybersecurity Human Risk Management platform I proudly represent as an advisor and brand ambassador.

 

The Shift from “Blame” to “Governance”

With the UK Government’s recently refreshed Cyber Governance Code of Practice, we now have official recognition that cyber risk (particularly human cyber risk) is a board-level responsibility. Not a bolt-on. Not a technicality. But a governance issue that sits squarely with those who lead.

At the launch of the Code, Cyber Minister Feryal Clark said:

“Boards must take responsibility for cybersecurity. They are ultimately accountable for ensuring their organization is resilient.” (The Times, April 2024)

This is not just rhetoric. The Code outlines clear, actionable expectations for how boards and executives must govern human risk — an area long treated as a side note or delegated to compliance teams.

NCSC CEO, Richard Horne reinforced the point, saying:

“In today’s digital world, where organisations increasingly rely on data and technology, cyber security is not just an IT concern — it is a business-critical risk, on a par with financial and legal challenges.”

That statement alone should shift the tone in every boardroom in the country.

 

Principle C — The People’s Mandate

Among the five key principles of the Code, ‘Principle C: People’ is arguably the most transformative. It redefines human cyber risk not as an operational problem, but as a strategic leadership issue with four areas of governance responsibility outlined:

 

1. Create a Cyber-Resilient Culture

Boards are expected to promote and model behaviors that enable a secure culture from the top down. Top management is expected to lead by example, prioritise secure practices, and ensure that risk awareness is embedded in how decisions are made.

Too often, boardroom agendas treat cybersecurity as an item to be “noted.” This principle says: if culture eats strategy for breakfast, then cyber culture must sit at the head of the table!

 

2. Align Policies to Enable the Right Behaviors

Policies shouldn’t exist just to satisfy auditors. They must align with how people work and behave daily. Abstract or punitive policies disconnected from workplace realities set employees up to fail.

When staff bypass policies to do their jobs, it’s typically not due to recklessness. Rather, it’s operational (or control) friction, i.e., a failure of governance. Secure behaviour should be the easiest choice and the path of least resistance, not the hardest one.

Boards must therefore ensure policies are practical, actionable, and integrated into workflows. More importantly, they need governance systems to actively monitor if these policies truly work in practice. Policies should empower secure behavior, not hinder it.

 

3. Develop Cyber Knowledge, Skills, and Literacy at All Levels

Many organisations invest in security awareness training and phishing simulations for staff, but overlook their leadership teams. Boards must invest in their own security awareness not to become technical experts, but to be effective stewards. This means asking the right questions, understanding behavioural and technical risk, and overseeing strategic interventions. That entails making security awareness training adaptive and specific to the roles performed, too.

 

4. Use Metrics to Monitor Cultural and Behavioral Risk

If you can’t measure it, you can’t govern it. Yet most cybersecurity reports to boards focus on threat activity and system vulnerabilities, not on human risk indicators.

If they do include any reference to people, it’s typically in terms of security awareness and phishing training. However, boards need visibility into how people actually behave, what risks they take, and how these patterns shift over time. This means going beyond checkbox compliance to true performance-based assurance.

And this is where traditional tools fall short — and where OutThink is changing the game.

 

Why We Need a New Category: Cybersecurity Human Risk Management

For years, organisations have focused on raising security awareness through both training and simulation, and that’s not a bad thing. But cyberattacks haven’t slowed and behavioural risks remain high. That’s because awareness is not the same as behavior. And measurement that yields true, actionable, behavioral insight has been missing.

At OutThink, I’m seeing how organisations are shifting from compliance-driven awareness to data-driven risk governance. Unlike legacy security awareness and phishing training tools, it enables leadership teams to:

  • Quantify human cyber risk at the individual, team, and business unit level.
  • Monitor behavioural indicators like phishing susceptibility, policy bypassing, or risk sentiment.
  • Track cultural maturity over time, with real metrics aligned to governance frameworks.
  • Provide boards with dynamic dashboards that reflect real risk, not just activity.

This is how you bring Principle C to life. This is how you move from oversight to foresight.

 

What “Good” Looks Like Today

Leading organisations are no longer asking if their people are trained in security awareness and phishing attacks. Instead, they’re asking:

  • Are secure behaviours embedded?
  • Can we predict and reduce human error before it becomes a cyber incident?
  • Do we have the data to govern human cyber risk effectively?

The best boards now receive monthly reporting on human cyber risk trends. They’re using risk scores to prioritise investment. They’re partnering with platforms like OutThink to visualise and reduce cyber risk at scale, not just raise security awareness.

This isn’t aspirational, it’s operational. And increasingly expected by regulators, insurers, and investors.

 

From Blame to Leadership: A Final Word

Back to that panel.

The reason I skipped the “weakest link” question wasn’t to avoid a tough conversation but to reframe it. The question we should be asking is:

“What have we done as leaders to make secure behaviour the path of least resistance?”

Too often, human mistakes are the result of poor leadership design: unclear policies, contradictory incentives, inadequate training, or toxic cultures. If a frontline employee falls for a phishing email, the issue isn’t their intelligence; it’s the fact that the system wasn’t built to support success. When people are supported, trained, and valued — when they see leadership walking the talk — they become your most powerful layer of defence.

So no, people are not the weakest link.

They are our most underutilised security control.

When equipped, supported, and led well, they are the most adaptive and resilient cyber defence we have.

 

For C-suites and Boards: What to Do Next

  • Download the UK Cyber Governance Code Toolkit to assess your current state.
  • Start asking better questions. Not “are we compliant?” but “are we reducing cyber risk?”
  • See how cybersecurity human risk management platforms like OutThink can help you operationalise Principle C with the appropriate data, dashboards, and insights aligned to the boardroom.

 

By Jane Frankland (Business Owner & CEO, KnewStart)

Original link of post is here

Read more…

The Chinese have a new tool called Massistant.

  • Massistant is the presumed successor to Chinese forensics tool, “MFSocket”, reported in 2019 and attributed to publicly traded cybersecurity company, Meiya Pico.
  • The forensics tool works in tandem with a corresponding desktop software.
  • Massistant gains access to device GPS location data, SMS messages, images, audio, contacts and phone services.
  • Meiya Pico maintains partnerships with domestic and international law enforcement partners, both as a surveillance hardware and software provider, as well as through training programs for law enforcement personnel.

From a news article:

The good news, per Balaam, is that Massistant leaves evidence of its compromise on the seized device, meaning users can potentially identify and delete the malware, either because the hacking tool appears as an app, or can be found and deleted using more sophisticated tools such as the Android Debug Bridge, a command line tool that lets a user connect to a device through their computer.

The bad news is that at the time of installing Massistant, the damage is done, and authorities already have the person’s data.

Slashdot thread.

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here

Read more…

In my days there, Gartner had Maverick research (here is mine, from 2015 about social engineering AIs…. yes, really!) that “deliberately exposed unconventional thinking and may not agree with Gartner’s official positions.”

Here is a “maverick-ish” blog for you. DO NOT try this at homeDO learn and think about it.

I’ve been obsessing with SIEM migration since at least 2011, and many of my beautifully nuanced blogs, chock-full of “it depends,” covered things like “By taking the time to evaluate your log sources before you migrate them, you can streamline your SIEM and focus on the data that is most important” and “adopt the practice of testing your SIEM and detection content by regularly injecting data that will test your detections, check parsing, and validate data flow.”

So there.

How about … fuck nuance and fuck it all?! Enter SIEM migration by re-creation aka “scorched Earth SIEM migration.”

BTW, this is not “recreation” as such, even though this can be fun for some, and I don’t want to judge. But re-creation or creation from scratch, for sure!

Think about it: when we talk about SIEM migration, the usual mental image involves a meticulous, perhaps painful, process of porting everything from the old system to the new. But what if we considered a different path? Instead of carefully migrating old content, what if we used the opportunity to re-create everything from scratch, leveraging all the hard-won lessons from your previous environment? This isn’t merely a migration, and not “transformation lite”; it’s a “transformation+”.

It implies a fundamental rethinking of SIEM (and likely SOC around it) rather than just incremental improvements.


Made by Humans (i.e. Anton)

 

There are some compelling advantages to this seemingly drastic approach.

 

The Benefits of a Fresh Start

First, and perhaps most crucially, unnecessary content — particularly detection content that no longer serves your purposes — has a dramatically lower chance of making it into the new product, wasting your time for years to come. You’re much less likely to inadvertently copy existing detection technical debt into your shiny new modern SIEM.

Think of all those cumbersome old rules that have grown unwieldy over the years, or those that simply don’t align with the realities of modern threats or new SIEM capabilities (got AI?). A re-creation approach helps you avoid spending time converting them. It’s often possible to detect certain things far more efficiently in a new product than they ever could be with the convoluted logic of the old one (yes, really, I don’t even have a marketing hat anymore, this is just a fact).

Despite widespread acknowledgment that simply copying log sources is counterproductive and that they absolutely need “assessment”, many traditional migrations still begin by porting all existing log sources into the new product (yuck!). The ‘from scratch’ approach avoids that risk and cost, allowing you to use the valuable lessons from your previous SIEM experience (but not the stale content from the actual product) and your current risk posture and IT environment to identify and then integrate only the necessary log sources. Again, this avoids detection technical debt and the “lazy thinking” of simply porting everything over.

As a result, you avoid many of the painful elements of the classic migration approach, ultimately leaving you in a far better position.

 

You’re Not Worse Off (Seriously)

Moreover, you’re genuinely not worse off when it comes to migrating detection content. You might think, “But what about my existing rules and use cases?” Well, first, you get a chance to completely rethink them. Second, even in the classic migration model, as we’ve stated quite a few times, you’re not really migrating it.

Yes, with the current proliferation of LLMs, you can, in theory, convert well-formed language (e.g., rules, searches, etc.) from one SIEM to another. However, we’re hearing that the track record of this is, at best, mixed. In fact, you’re often recreating, not simply modifying or tuning. So, in many cases, you’d be doing similar work anyway.

A significant advantage of the ‘from scratch’ model is that you’re deploying the best possible SIEM for today’s realities and the best content for today’s product. This unchains you from being guided by historical product limitations. All the mistakes you were willing to tolerate in the old product — perhaps because they ‘weren’t that bad’ or fell into the ‘we’ve always done it this way’ bucket — won’t make it into your new product. All the erroneous, inefficient, or simply tolerated ‘stuff’ now has a chance to be eliminated. Boom!

Instead of spending time converting, tweaking, refining, adapting and adjusting inefficient or expensive searches and reports that you have no use for, you can instead use that time to strategically design the reports and dashboards you genuinely need to communicate with stakeholders and leadership. Imagine, clean reports that actually mean something! And deliver value and not just “mementos” of the SIEM you had in 2008.

 

A Horse to a Car… or a Car to a Plane?

One reason this blog post was conceived is that the modern generation of SIEM products differs from the classics in more ways than many often care to admit. In essence, migrating from a horse to a car isn’t about asking, “But where do you store the manure for the car (also known as gasoline)?” It’s a fundamental shift.

 

But I believe the real metaphor, and the real change, is even bigger. You’re not just migrating from a car to a plane; you’re fundamentally changing how you travel, even if the end result — getting from A to B — is the same. I digress, but I suspect when flying cars finally become commonplace, we’ll realize they represent a fundamentally different form of transportation, not just a regular car that happens to fly. Thinking in three dimensions will do that to you. So, think 3D and burn your old SIEM to a crisp!

So, while I’m piling up ideas and metaphors, the real lesson for real-world SIEM deployments is this: people spend too much time thinking about the details of the migration, when they could be thinking about what they truly need and can truly get with a new product.

 

Engineering Your Way to a Modern SOC

This radical approach to SIEM isn’t just about deleting old baggage; it’s about building a future-proof modern SOC from a “blank sheet of paper.” The goal is an engineering-led SOC (like our ASO approach) capable of detecting the attacks you care about and embracing an “everything as code” mindset. This means codifying as much as possible for efficiency and continuous improvement, from infrastructure to detection rules.

Of course, the biggest hurdles in such a transformation are often mindset and skillset challenges (we got a class on it!). The solution involves creating a dedicated D&R team with cloud, AI, DevOps/SRE and whatever other skills are needed to boost your new SIEM into orbit, rather than drag it with a mule team :-)

For detection and response engineering, continuous testing, the adoption of “CD/CR” pipelines, and peer review of detection/response logic are paramount. This ensures all detection, alerting and response (playbooks!) mechanisms are thoroughly tested at “code speed” through automated pipelines before ever reaching prod. Naturally, this means using a version of agile methodology. This allows for quick course correction and continuous feedback (central in ASO thinking).
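To make the “everything as code” point concrete, here is one way a tested detection rule might look: the rule is an ordinary function under version control, and the CI pipeline runs its unit test before the rule ever reaches production. This is a generic sketch, not the ASO methodology itself; the rule logic, field names, and threshold are invented for illustration:

```python
from collections import Counter

# A detection rule as code: flag a burst of failed logins from one source IP.
# Field names ("src_ip", "action") and the threshold are hypothetical.
def detect_bruteforce(events, threshold=5):
    """Return source IPs with at least `threshold` failed-login events."""
    fails = Counter(e["src_ip"] for e in events if e.get("action") == "login_failed")
    return sorted(ip for ip, n in fails.items() if n >= threshold)

# The kind of unit test a CI pipeline would run before the rule ships.
def test_detect_bruteforce():
    noisy = [{"src_ip": "10.0.0.1", "action": "login_failed"}] * 6
    quiet = [{"src_ip": "10.0.0.2", "action": "login_failed"}] * 2
    benign = [{"src_ip": "10.0.0.3", "action": "login_ok"}] * 10
    assert detect_bruteforce(noisy + quiet + benign) == ["10.0.0.1"]

test_detect_bruteforce()
```

The point is less the toy logic than the workflow: a rule change is a code change, so it gets peer review, automated tests, and a pipeline gate, exactly like any other production code.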

Now, let it all burn. Don’t evolve, don’t transform — re-create.

Don’t Actually Burn Your SIEM — But Think About These

OK, we promised controversy, but we also want to be sane … a bit. Don’t actually destroy your SIEM by fire, but keep the lessons while tossing the outdated crap out.


Made by AI (Gemini Deep Research, infographic mode)

 

So, the acknowledgment that operational inefficiencies and human factors contribute to SIEM underperformance suggests that a purely technical “lift and shift” migration, even if technically feasible, would likely perpetuate existing challenges.

The “from scratch” approach, by its very nature, compels a comprehensive re-evaluation of every aspect of the security operation. This fundamental shift in perspective ensures that the migration addresses the root causes of past limitations, rather than just their symptoms. You really want this in 2025!

 

By Anton Chuvakin (Ex-Gartner VP Research; Head of Security, Google Cloud)

Original link of post is here

Read more…

On Sunday, July 20, Microsoft Corp. issued an emergency security update for a vulnerability in SharePoint Server that is actively being exploited to compromise vulnerable organizations. The patch comes amid reports that malicious hackers have used the SharePoint flaw to breach U.S. federal and state agencies, universities, and energy companies.


 

In an advisory about the SharePoint security hole, a.k.a. CVE-2025-53770, Microsoft said it is aware of active attacks targeting on-premises SharePoint Server customers and exploiting vulnerabilities that were only partially addressed by the July 8, 2025 security update.

The Cybersecurity & Infrastructure Security Agency (CISA) concurred, saying CVE-2025-53770 is a variant on a flaw Microsoft patched earlier this month (CVE-2025-49706). Microsoft notes the weakness applies only to SharePoint Servers that organizations use in-house, and that SharePoint Online and Microsoft 365 are not affected.

The Washington Post reported on Sunday that the U.S. government and partners in Canada and Australia are investigating the hack of SharePoint servers, which provide a platform for sharing and managing documents. The Post reports at least two U.S. federal agencies have seen their servers breached via the SharePoint vulnerability.

According to CISA, attackers exploiting the newly-discovered flaw are retrofitting compromised servers with a backdoor dubbed “ToolShell” that provides unauthenticated, remote access to systems. CISA said ToolShell enables attackers to fully access SharePoint content — including file systems and internal configurations — and execute code over the network.

Researchers at Eye Security said they first spotted large-scale exploitation of the SharePoint flaw on July 18, 2025, and soon found dozens of separate servers compromised by the bug and infected with ToolShell. In a blog post, the researchers said the attacks sought to steal SharePoint server ASP.NET machine keys.

“These keys can be used to facilitate further attacks, even at a later date,” Eye Security warned. “It is critical that affected servers rotate SharePoint server ASP.NET machine keys and restart IIS on all SharePoint servers. Patching alone is not enough. We strongly advise defenders not to wait for a vendor fix before taking action. This threat is already operational and spreading rapidly.”
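Alongside key rotation, defenders can triage their own IIS logs for the request pattern Eye Security associated with this campaign (POSTs to ToolPane.aspx carrying a SignOut.aspx referer). The sketch below assumes the default IIS log location and a W3C text format; adjust both to your environment. This is a quick hunt aid only, not a substitute for patching, rotating machine keys, and a full compromise assessment.

```python
# Rough triage sketch: grep IIS W3C logs for the ToolShell-style request
# pattern reported by Eye Security. The log path and the substrings are
# assumptions; tune them to your IIS logging configuration.
import glob

SUSPECT_URI = "/_layouts/15/toolpane.aspx"
SUSPECT_REFERER = "/_layouts/signout.aspx"

def scan_iis_logs(pattern=r"C:\inetpub\logs\LogFiles\W3SVC*\*.log"):
    """Return (file, line) pairs where both suspect substrings appear."""
    hits = []
    for path in glob.glob(pattern):
        with open(path, errors="replace") as fh:
            for line in fh:
                low = line.lower()
                if SUSPECT_URI in low and SUSPECT_REFERER in low:
                    hits.append((path, line.strip()))
    return hits

if __name__ == "__main__":
    for path, line in scan_iis_logs():
        print(f"possible ToolShell probe in {path}: {line}")
```

A hit is a lead, not proof of compromise; absence of hits proves nothing if logs were rotated or tampered with.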

Microsoft’s advisory says the company has issued updates for SharePoint Server Subscription Edition and SharePoint Server 2019, but that it is still working on updates for supported versions of SharePoint Server 2016.

CISA advises vulnerable organizations to enable the anti-malware scan interface (AMSI) in SharePoint, to deploy Microsoft Defender AV on all SharePoint servers, and to disconnect affected products from the public-facing Internet until an official patch is available.

The security firm Rapid7 notes that Microsoft has described CVE-2025-53770 as related to a previous vulnerability — CVE-2025-49704, patched earlier this month — and that CVE-2025-49704 was part of an exploit chain demonstrated at the Pwn2Own hacking competition in May 2025. That exploit chain invoked a second SharePoint weakness — CVE-2025-49706 — which Microsoft unsuccessfully tried to fix in this month’s Patch Tuesday.

Microsoft also has issued a patch for a related SharePoint vulnerability — CVE-2025-53771; Microsoft says there are no signs of active attacks on CVE-2025-53771, and that the patch is to provide more robust protections than the update for CVE-2025-49706.

This is a rapidly developing story. Any updates will be noted with timestamps.

 

By: Brian Krebs (Investigative Journalist, Award Winning Author)

Original link to the blog: Click Here

Read more…

Seems like an old system that predates any care about security:

The flaw has to do with the protocol used in a train system known as the End-of-Train and Head-of-Train. A Flashing Rear End Device (FRED), also known as an End-of-Train (EOT) device, is attached to the back of a train and sends data via radio signals to a corresponding device in the locomotive called the Head-of-Train (HOT). Commands can also be sent to the FRED to apply the brakes at the rear of the train.

These devices were first installed in the 1980s as a replacement for caboose cars, and unfortunately, they lack encryption and authentication protocols. Instead, the current system uses data packets sent between the front and back of a train that include a simple BCH checksum to detect errors or interference. But now, the CISA is warning that someone using a software-defined radio could potentially send fake data packets and interfere with train operations.
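The core weakness is that an error-detecting checksum proves nothing about who sent a packet: anyone who knows the format can compute a valid checksum over a forged command. The toy example below illustrates the gap, and contrasts it with a keyed MAC, where forgery requires the secret key. The packet layout and checksum here are deliberately simplified stand-ins, not the actual EOT/HOT protocol or its BCH code.

```python
# Why a checksum is not authentication: a toy packet format.
# Layout and checksum are illustrative, not the real EOT/HOT protocol.
import hmac, hashlib

def checksum(payload: bytes) -> int:
    # Stand-in for the BCH code: catches line noise, proves nothing
    # about the sender.
    return sum(payload) & 0xFFFF

def make_packet(payload: bytes) -> bytes:
    return payload + checksum(payload).to_bytes(2, "big")

def verify_packet(packet: bytes) -> bool:
    payload, cs = packet[:-2], int.from_bytes(packet[-2:], "big")
    return checksum(payload) == cs

# An attacker with a software-defined radio can build a "valid" brake
# command from scratch; the receiver cannot tell it from a real one.
forged = make_packet(b"APPLY_BRAKES")
assert verify_packet(forged)

# With a shared-key MAC, forging requires the key, not just the format.
KEY = b"secret-shared-key"  # hypothetical pre-shared key

def make_authenticated(payload: bytes) -> bytes:
    return payload + hmac.new(KEY, payload, hashlib.sha256).digest()

def verify_authenticated(packet: bytes) -> bool:
    payload, tag = packet[:-32], packet[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

assert verify_authenticated(make_authenticated(b"APPLY_BRAKES"))
assert not verify_authenticated(b"APPLY_BRAKES" + b"\x00" * 32)
```

Retrofitting something like the HMAC half onto 1980s radio hardware is, of course, the hard part; the code only shows why the checksum half is insufficient.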

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here

Read more…

We are excited to invite you to the CISO Cocktail Reception if you are attending BlackHat USA 2025 in Las Vegas. This event is organized by EC-Council with CISOPlatform and FireCompass as proud community partners.

Please note that this event is exclusively for Director-level and above information security practitioners. All registrations are subject to approval.

 

Cocktail Reception Details 

  • When: Monday, August 4th, 2025

  • Where: Topgolf, Las Vegas

  • Time: 6:00 PM – 10:30 PM (local time)

 

We encourage you to share this invitation with your colleagues and trusted peers, or nominate a CISO in your network. Many may be attending the BlackHat Conference, and we have limited seats.

Express Interest Now (since we have limited seats at the venue)

See you there!

Read more…

It started in a rugby box.

There I was, watching the match from a VIP suite—surrounded by a handful of other cybersecurity leaders. The beers were cold, the banter flowing, but one comment cut through the noise:

“Cybersecurity’s no longer about technology. It’s about sovereignty.”

That stuck with me.

That rugby-box insight wasn’t just banter—it reflected a deeper truth that’s reshaping the cyber landscape.

Because it’s true: cybersecurity has evolved from a purely technical discipline into a front line of geopolitical and economic warfare. Around the world, governments are weaponising regulation—using cyber laws to block foreign firms, force data localisation, and demand access to proprietary systems under the guise of compliance.

Suddenly, the centralised security models we’ve relied on for years are liabilities.

In this blog, I’m unpacking how the global regulatory landscape is fragmenting—and why decentralising cybersecurity, while expensive and complex, has become a strategic necessity. You’ll learn what this shift costs, where the risks lie, and how leaders can strike the right balance between compliance, control, and cost.

 

How Cybersecurity Regulation is Being Weaponised

Cybersecurity laws were originally designed to protect consumers and critical infrastructure, but they are now being used to:

  • Impose selective enforcement on foreign businesses and political opponents.
  • Create trade barriers by forcing companies to comply with complex local cybersecurity laws.
  • Mandate data localization, preventing companies from storing or processing data across borders.
  • Control digital infrastructure through surveillance-friendly regulations.
  • Use compliance as an excuse for corporate espionage, demanding access to proprietary cybersecurity tools and encryption.

For businesses operating internationally, these tactics create a fragmented and high-risk regulatory landscape, where failure to comply can result in fines, legal battles, operational restrictions, or outright bans.

 

Why Centralised Cybersecurity is Becoming Risky and Costly

Many global organisations have traditionally operated with centralised cybersecurity teams, often based in major business hubs like the U.S. or Europe. This model has been efficient and cost-effective, enabling them to:

  • Standardise security policies across all regions.
  • Centralise talent and reduce staffing costs.
  • Streamline compliance efforts with a single, unified strategy.

However, due to increasing regulatory fragmentation, a one-size-fits-all security approach no longer works. The risks and costs of maintaining a centralised security model are now outweighing the benefits in many industries. For example:

  1. Regulatory Fines & Compliance Risks: Cybersecurity regulations such as the GDPR (EU), China’s Cybersecurity Law, and India’s Digital Personal Data Protection Act impose steep fines for non-compliance. Non-compliance penalties can reach up to €20 million or 4% of global revenue under the GDPR. Centralised models may struggle to meet regional compliance requirements, exposing organisations to legal and financial risks.
  2. Restricted Cross-Border Data Transfers: Many governments prohibit organisations from storing or processing data outside national borders. This forces them to build and maintain multiple regional data centers, significantly increasing IT infrastructure costs. Organisations relying on centralised cloud providers (AWS, Google Cloud, Azure, etc.) face higher operational risks if cloud services are restricted or banned in specific regions.
  3. Delayed Incident Response & Security Breaches: Many regulations prevent real-time cybersecurity data sharing across borders, making it difficult for centralised teams to respond to global cyber threats. This increases the risk of prolonged breaches, reputational damage, and financial losses. Regulatory barriers limit access to global threat intelligence, reducing a company’s ability to predict and prevent attacks.
  4. Geopolitical Risks & Market Access Restrictions: Governments can use cybersecurity laws to ban foreign tech firms, as seen with the U.S. bans on Huawei, Kaspersky and TikTok or China’s restrictions on Western cloud providers. A centralised security model makes organisations more vulnerable to geopolitical tensions, potentially leading to forced exits from key markets.
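The GDPR ceiling cited in point 1 is "whichever is higher" of a fixed amount and a revenue percentage (Article 83(5)), which makes the exposure easy to quantify for budget discussions:

```python
# Upper bound on a GDPR Art. 83(5) fine: the higher of EUR 20 million
# or 4% of total worldwide annual turnover. Revenue figures below are
# illustrative.
def gdpr_max_fine(global_annual_revenue_eur: float) -> float:
    return max(20_000_000, 0.04 * global_annual_revenue_eur)

# A firm with EUR 2 billion turnover faces up to EUR 80 million,
# while a EUR 100 million firm still faces the EUR 20 million floor.
assert gdpr_max_fine(2_000_000_000) == 80_000_000
assert gdpr_max_fine(100_000_000) == 20_000_000
```

Actual fines are set per case against aggravating and mitigating factors; the formula only bounds the worst case.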

Given these challenges, businesses are shifting toward a decentralised cybersecurity approach, despite the higher costs and complexities involved.

 

The Shift to Decentralised Cybersecurity: Costs vs. Risk Reduction

To reduce regulatory risks, organisations are increasingly decentralising cybersecurity operations, moving security functions closer to the regions they serve. For example, they are:

Hiring Regional Cybersecurity Talent

  • Why? Regulations now require in-country expertise to manage compliance and incident response.
  • Cost Impact: Companies must increase cybersecurity headcount in each region, leading to higher labour costs—especially in markets where cybersecurity talent is scarce and expensive.
  • In Short: Higher costs, lower compliance risks

Establishing Regional Security Operations Centres (SOCs)

  • Why? A global SOC can no longer effectively monitor threats across regulated regions.
  • Cost Impact: Organisations must set up multiple regional SOCs, increasing real estate, staffing, and infrastructure costs.
  • In Short: Significant investment required.

Expanding Data Centres & Cloud Infrastructure

  • Why? Data localisation laws prohibit the storage of sensitive data outside certain regions.
  • Cost Impact: Organisations must build new regional data centres or partner with local cloud providers, adding millions in additional IT spending.
  • In Short: Major IT expenses.

Investing in Country-Specific Compliance Programs

  • Why? Compliance must be tailored to each country’s cybersecurity regulations.
  • Cost Impact: Organisations must increase legal and compliance spending, hiring in-house specialists and external consultants to navigate complex regulations.
  • In Short: Ongoing legal costs.

Managing Multi-Region Security Architectures

  • Why? Some governments restrict the use of foreign security software and require region-specific security tools.
  • Cost Impact: Organisations must purchase and maintain multiple cybersecurity tools to comply with different national security policies, increasing licensing, maintenance, and operational costs.
  • In Short: Higher complexity and operational costs.

 

Centralised vs. Decentralised: Strategic Trade-Offs


For large organisations and enterprises, the higher costs of decentralisation are a necessary trade-off to maintain market access and regulatory compliance. However, for small and mid-sized businesses, the financial burden may be unsustainable, forcing them to exit heavily regulated markets or partner with local firms instead.

 

What Security Leaders Must Do to Balance Cost and Risk

  • Reevaluate Security Budgets – Expect higher compliance and infrastructure costs and adjust spending accordingly.
  • Invest in Regional Talent – Hire cybersecurity and compliance experts in each key market.
  • Build Redundant Security Infrastructures – Implement localised security solutions to meet country-specific requirements.
  • Stay Agile Amid Changing Regulations – Be prepared to quickly adapt to new cybersecurity laws and geopolitical risks.
  • Optimise Costs Without Compromising Security – Use hybrid models that balance centralised oversight with regional security capabilities.

 

To End

Cybersecurity is now a cornerstone of corporate sovereignty. The weaponisation of cyber regulations means that decentralisation is no longer optional—it’s essential.

For enterprises, the costs are high, but the risks of inaction are higher. Organisations that strategically decentralise will not only remain compliant and secure but will be more resilient, adaptive, and competitive in the new digital order.

Your cybersecurity strategy is now a geopolitical play. Are you ready to lead?

 

Now I want to hear from you

Tell me—how is your organisation navigating the regulatory minefield? Are you decentralising security operations, adopting a hybrid model, or holding the line with a centralised team? What’s working, what’s breaking, and where are you seeing the biggest trade-offs?

Drop your thoughts, war stories, or questions on LinkedIn as it’s where I have all my conversations. Let’s compare notes—because no one’s facing this challenge alone.

 

By Jane Frankland (Business Owner & CEO, KnewStart)

Original link of post is here

Read more…
In this interview with Peter Ulrich, Denver’s information technology audit manager, we explore relationships between auditors and security teams in government. 
 
 
Back in 1789, Benjamin Franklin wrote a letter to a French scientist named Jean-Baptiste Le Roy in which he penned the famous quote: “In this world, nothing can be said to be certain, except death and taxes.”

Fast forward more than 200 years, and most government technology and security professionals would add auditors — who often bring audit findings — to Franklin's list.

Nevertheless, does the relationship between security and technology leaders and their auditors need to be contentious?
 

My opinion on that question is generally “no.” In fact, I believe that auditors and chief information security officers (CISOs) can even be friends, work well together, and share many mutual cyber goals.

But, as said in this 2021 article, which describes auditor relationships during my years as CISO in Michigan government, “Auditors and chief information security officers are both focused on finding vulnerabilities, fixing security problems and stopping data breaches. So why do they so seldom see eye-to-eye?”

Which leads us to March of this year when I met Peter Ulrich, the information technology audit manager with the Denver Auditor’s Office, at the Billington State and Local CyberSecurity Summit in Washington, D.C. Peter has achieved CISA and CSX-A certifications and has an amazing set of professional experiences, including many public- and private-sector leadership roles prior to joining Denver government service.
 

In addition to Peter’s kind, professional demeanor and his immense knowledge of the cybersecurity industry and the audit profession, I was especially impressed with how he described his relationship with Merlin Namuth, the CISO for the city and county of Denver. Indeed, their teams get along well, as I heard firsthand during an online meeting with the two of them. These conversations led to my interview with Peter, which is the focus of this blog. I hope to be able to interview Merlin in a separate blog later this year.
 
 
(Photo: Peter Ulrich)
 
 
Dan Lohrmann (DL): You have a fascinating professional background. How did you get into your current role in tech and security auditing in government?

Peter Ulrich (PU): I was thriving in my role at Vantage Data Centers and learning a ton about the critical infrastructure running the cloud and AI, but was looking for more meaning in my role and serving others. I found the IT audit role at the Auditor’s Office at the city and county of Denver and was attracted by the mission of serving the residents of Denver and the people using the Denver International Airport. The scope of the operations was really big and there are so many different missions with very different requirements — I was also attracted by this complexity.


DL: How would you describe your role in working with Merlin Namuth in Denver? Is it working well so far?

PU: First and foremost, I must keep my independence and objectivity so I do not impair my ability to perform audits of his function. With that groundwork being established, Merlin has been transparent and open in his communication with my team and me. I am always happy when the people I am auditing are completely open about where they are and allow me to focus on really defining the root cause of the problem and designing recommendations that make real improvements to the risk profile in a reasonable amount of time.

I think this is working well because we both want the same result, which is to reduce risk and improve security. While he may not agree with every recommendation I make, we do not disagree on the intent of the recommendation, just maybe the way to reduce the risk.


DL: How do you deal with PR surrounding audit findings?

PU: I am extremely lucky that Denver's auditor, Timothy O’Brien, has a communication team that composes press releases and handles any media inquiries. The press releases are circulated among the leadership and audit team members on each audit for review and editing. This helps ensure that we have the right message to residents and other stakeholders.


DL: Cybersecurity weaknesses are not something that government wants in the news. How do you balance that public announcement role (and Freedom of Information Act requirements) with the need to not disclose problems to bad actor hackers?

PU: My professional judgement is key to ensure the office is balancing transparency with the risk of disclosing confidential or questionable information. Once I determine the information may need to be restricted, I have discussions with Auditor O’Brien and office leadership to balance the transparency needed by the residents and the risk that bad actors could use the information. In Colorado, FOIA is called the Colorado Open Records Act (CORA), and there is municipal code that provides options for us to disclose the information to the city’s Audit Committee and the agency in a confidential workpaper. Additionally, we follow generally accepted government auditing standards (GAGAS), as outlined in the city charter, and the standards require us to protect sensitive audit evidence from public view. The Auditor’s Office’s workpapers are not subject to CORA, as we audit more than just technology and may have protected or regulatory data in our possession to perform our audit.

We take transparency very seriously, so we do not just place everything in a confidential workpaper, but make thoughtful decisions to protect the city from cyber threats while trying to make sure that residents know the IT systems and processes they need for the city and county to function are effective, secure, confidential, available and have integrity.


DL: What makes your interactions with Merlin both effective and impactful so both of you can get done what needs to be done in government cybersecurity?

PU: We both have the same goal to reduce risk, improve security and shrink the threat surface, but we may not always agree as to what the most important things are to correct with limited resources. I think we also understand both our roles, where Merlin and his team protect the city in a first line and second line, and my team and I provide assurance that the subjects under audit are either working effectively or not. The role clarity makes sure we can get done what needs to be done in our current responsibilities.

The other factor is trust, as in any relationship. I trust Merlin is doing his best to protect the city and county and has the skill to do his job. I believe he also trusts that I am thoughtful and skillful and will not overreact to things we find and will seek to understand the issues and problems associated with the findings.


DL: Why do you think many CISOs struggle with auditors, and what advice would you give both CISOs and auditors to improve value and effectiveness for both sides?

PU: I think a lot of CISOs see the audit results as a grade, and to some extent it is. However, I think a lot of auditors approach audits as "gotcha moments," and to some extent they are. I think I had a lot of success selling the audit services from a couple of perspectives. First, my team and I are free consulting. Many of our audits would cost the organization hundreds of thousands of dollars. However, sometimes we do rebill the cost to the agency. Second, my team and I are here to reduce risk, improve security, and make processes more efficient, not to assign blame. I care about why it happened, not who was the leader when it happened, and how we can make the process under audit better.

I think a lot of IT auditors need to approach their work with a different mindset. We need to realize that the people in the organization under audit have full-time jobs and are often stretched thin without the added work of educating us about what they do. I also think many auditors need to have a relationship and hospitality mindset. We need the cooperation of the agency or department client to do our job, and we need to be flexible and try to be as understanding as possible. However, we need to help our clients realize we have professional standards and sometimes we must do some things they do not want us to do like walkthroughs or policy reviews.

FINAL THOUGHTS


Back in 2018, I wrote a blog entitled, "Security Audit Weaknesses Offer a Silver Lining," ending with these words:

“Remember that although it may not feel like it, auditors can be helpful to your organization. Early audit findings surrounding cybersecurity helped steer enterprise priorities. This audit action data allowed us to obtain funding for key security and infrastructure initiatives during difficult budget times. We even gave our auditor general the results of internal security assessments. By developing positive relationships and building trust with auditors, you can solve problems simultaneously — like obtaining compliance and strengthening security.

"Leaders must follow through with audit remediation plans. Corporate memory is often lost with staff turnover, but remember compliance because the auditors won’t forget.”
 
 
 

By: Dan Lohrmann (Cybersecurity Leader, Technologist, Keynote Speaker & Author)

Original link to the blog: Click Here

 
Read more…