
Actionable Insights For CISOs:

1. Shift your focus from detection to investigation

Insight: The blog emphasises that the real bottleneck in incident response isn’t detection—it’s investigation. Alerts are plentiful, but turning those alerts into actionable conclusions is where work stalls.

Action steps:

  • Map your current incident lifecycle: how many alerts → how many investigations → how many incidents resolved, and measure time spent in the investigation phase.

  • Set a target to reduce “mean investigation time” (from first alert to root-cause clarity) by, say, 30-50% over the next 12 months.

  • Implement tools/workflows that provide forensic-level evidence rapidly (e.g., integrated endpoint evidence collection, timeline builders) so your analysts don’t have to piece evidence together manually.

  • Conduct regular after-action reviews of investigations that took the longest: what slowed them? Was it data access, tool fragmentation, missing logs? Use that to feed process/tool improvements.
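The first two steps above can be grounded in a small measurement script. This is a minimal sketch with invented, illustrative records and field names (`first_alert`, `root_cause_found`), not any specific SIEM's schema:

```python
from datetime import datetime

# Hypothetical incident records; timestamps mark the first alert and the
# moment root cause was established. Field names are illustrative only.
incidents = [
    {"first_alert": "2024-03-01T09:00:00", "root_cause_found": "2024-03-02T15:30:00"},
    {"first_alert": "2024-03-05T11:00:00", "root_cause_found": "2024-03-05T19:00:00"},
]

def mean_investigation_hours(records):
    """Average hours from first alert to root-cause clarity."""
    total = 0.0
    for r in records:
        start = datetime.fromisoformat(r["first_alert"])
        end = datetime.fromisoformat(r["root_cause_found"])
        total += (end - start).total_seconds() / 3600
    return total / len(records)

print(f"Mean investigation time: {mean_investigation_hours(incidents):.1f} hours")
```

Tracked monthly, this single number makes the 30-50% reduction target concrete and auditable.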

 

2. Maximise value from your existing security stack — don’t just add more alerts

Insight: Ozkaya argues that piling on more alerts (from SIEMs, EDR, XDR) doesn’t equal resilience; instead, you should convert your stack into an intelligence pipeline that surfaces answers, not just signals.

Action steps:

  • Inventory all current alerting tools and systems (SIEM, EDR, XDR, NDR, etc.) and catalog the volume, noise level, false positives, and downstream investigation burden they create.

  • For each tool, evaluate: “Is the signal actionable? Does it lead to decisive investigation or does it generate more work for the analyst?”

  • Introduce or integrate platforms that can enrich alerts with context (asset risk, past incidents, threat intelligence) and automate triage decisions (escalate vs dismiss) to speed up investigation.

  • Retire or repurpose redundant alert sources that contribute noise and drain analyst time.
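The inventory and evaluation steps above reduce to a couple of per-tool ratios. A minimal sketch, with made-up tool names and numbers purely for illustration:

```python
# Illustrative alert-disposition counts per tool over a review period.
# All figures are invented for the example.
tools = {
    "SIEM": {"alerts": 12000, "false_positives": 10800, "escalated": 300},
    "EDR":  {"alerts": 2500,  "false_positives": 1500,  "escalated": 400},
}

def signal_quality(stats):
    """False-positive rate and actionable (escalated) rate per alert source."""
    return {
        name: {
            "fp_rate": round(s["false_positives"] / s["alerts"], 3),
            "actionable_rate": round(s["escalated"] / s["alerts"], 3),
        }
        for name, s in stats.items()
    }

for name, quality in signal_quality(tools).items():
    print(name, quality)
```

A source with a high `fp_rate` and a near-zero `actionable_rate` is a candidate for tuning, repurposing, or retirement.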

 

3. Democratise investigations and strengthen front-line analyst capability

Insight: Investigations shouldn’t require only elite forensics expertise; they should be fast, accessible, and collaborative across the SOC. Ozkaya highlights the need to empower junior analysts and free senior staff for strategic work.

Action steps:

  • Define investigation workflow tiers: Tier 1/2 handle triage and basic investigation; Tier 3/forensics handles advanced attribution. Provide tooling and guided workflows so Tier 1/2 can take investigations further without escalating or causing delays.

  • Provide training and playbooks that allow frontline analysts to carry out investigation actions (evidence collection, root-cause trace, timeline creation) with confidence.

  • Monitor analyst burnout and throughput: If senior staff are overloaded with simple investigations, fix tooling or workflow so their time is focused on higher-value tasks.

  • Foster cross-team collaboration (SOC, IR, forensics, legal) so investigations progress smoothly rather than stalling at hand-offs.

 

4. Align incident response (IR) processes with business continuity and compliance

Insight: The blog connects modern IR not just with cyber-breach mitigation but with business resilience, regulatory audit readiness and the ability to act decisively.


Action steps:

  • Review your IR processes and map them directly to business-impact metrics: e.g., time to containment, business-unit downtime, regulatory breach risk.

  • Ensure your IR toolkit records investigation evidence in a way that supports audit-trail needs (e.g., for NIS2, DORA or ISO 27001). Ozkaya mentions that traceability and conclusive investigations matter. 

  • Incorporate service-continuity and business-operation recovery as core KPIs inside IR playbooks, not just “eradicate malware” or “patch vulnerability.”

  • Periodically test IR workflows (tabletop or simulation) with business-leaders present to validate recovery plans and communication protocols.

 

5. Use automation and tooling strategically to overcome talent shortage

Insight: Given the global shortage of skilled cybersecurity professionals, Ozkaya suggests using automation and platforms that let less-experienced staff be productive, freeing up senior specialists for mission-critical tasks.

Action steps:

  • Identify repetitive tasks in your IR workflow (evidence collection, log aggregation, alert enrichment, call-tree contact management) and automate them.

  • Evaluate or pilot “investigation-assist” tools that provide guided workflows, dashboards with root-cause hypothesis, and forensic-level data extraction without deep manual effort.

  • Measure ROI in terms of cases per analyst, time to resolution, and cost per incident before vs after automation.

  • Ensure that automation is aligned with your skill-uplift strategy: automation doesn’t replace analysts, but augments their capability.
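The ROI measurement in the third step can be as simple as comparing two snapshots of the same metrics. A sketch with invented before/after figures:

```python
def roi_delta(before, after):
    """Percent change in cases-per-analyst and reduction in time-to-resolution."""
    cpa_before = before["cases"] / before["analysts"]
    cpa_after = after["cases"] / after["analysts"]
    return {
        "cases_per_analyst_pct": (cpa_after / cpa_before - 1) * 100,
        "ttr_reduction_pct": (1 - after["mean_ttr_hours"] / before["mean_ttr_hours"]) * 100,
    }

# Invented quarterly figures, before and after automating evidence collection.
before = {"cases": 400, "analysts": 10, "mean_ttr_hours": 12.0}
after = {"cases": 600, "analysts": 10, "mean_ttr_hours": 8.0}
print(roi_delta(before, after))
```

The same two deltas, reported per quarter, make the "automation augments analysts" claim testable rather than rhetorical.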

 

6. For MSSPs and SOC providers: build differentiated service offerings based on investigation-outcomes

Insight: Ozkaya points out that for MSSPs and MSPs, the market is shifting from detection-only services (simply alerting clients) to delivering conclusive investigations and response capabilities, turning end-to-end incident handling into a value-driven service.

Action steps:

  • If you are an MSSP/managed SOC provider, define service tiers: e.g., basic alerting, advanced investigation and root-cause delivery, full IR/responder offering. Highlight investigation as the value differentiator.

  • Build multi-tenant investigation platforms, unified dashboards, and standard workflows to scale across clients without adding linear headcount cost.

  • Define and track investigation-centric KPIs for clients: e.g., “alerts triaged to root-cause,” “time to containment,” “repeat-incident frequency,” to show measurable value.

  • Use automation and guided investigation flows to allow Tier 1 analysts to handle more client IR work, reserving Tier 3 for client-critical escalations.

 
About Author:

Dr. Erdal Ozkaya is a veteran cybersecurity leader with nearly three decades of experience spanning IT, cyber-risk, governance and leadership roles. He has served as a Chief Information Security Officer (CISO) and advisor to global organisations, drawing on deep expertise in building and maturing security programmes across diverse sectors.

An award-winning author, speaker and community builder, Erdal is known for connecting the complex world of cybersecurity to practical outcomes and fostering peer networks among CISOs and security executives. He is committed to continuous learning and advancing the discipline of cyber leadership for the evolving digital-risk landscape.

 

Now, let’s hear directly from Dr. Erdal Ozkaya on this subject:

Revolutionizing Incident Response and Building True Cyber Resilience

Courtesy of Binalyze

Cyber threats are relentless and constantly evolving. Organizations face an increasingly complex threat landscape, compounded by a persistent shortage of cybersecurity talent, overwhelming alert volumes, and pressure to ensure uninterrupted business operations.

Against this backdrop, the need for a modernized, intelligence-driven approach to incident response has never been greater. This is not simply about reacting faster; it’s about achieving cyber resilience without adding operational complexity. It’s about maximizing the value of your existing security stack, improving efficiency, and accelerating response times to protect what matters most: business continuity.

Today’s SOCs are flooded with alerts. Detection tools have done their job well—perhaps too well. SIEMs, EDRs, and XDR platforms surface endless alerts, each demanding attention, triage, and context. But for all their speed and scale, they stop short of delivering what teams really need: conclusive answers.

Cyber resilience doesn’t come from more alerts. It comes from knowing what matters, acting quickly, and responding with confidence. That requires investigations without limits.

 

The Real Bottleneck in Incident Response

Investigations remain one of the slowest, most manual phases of the response lifecycle. Analysts waste time jumping between tools, chasing incomplete logs, or waiting on access to evidence. Even identifying the root cause—a basic requirement for meaningful response—can take days.

This isn’t just inefficient. It’s risky. Without timely, conclusive investigation:

  • Threats linger, sometimes unnoticed.
  • Compliance deadlines are missed.
  • The same attacker comes back.

Why It’s Time to Rethink the Role of Investigation

As the threat landscape evolves, investigation must move from being a last resort to a first-class capability:

  • To validate and escalate alerts faster. Not every signal warrants a war room. The sooner teams can determine impact and priority, the better.
  • To shorten time-to-containment. Delays often stem not from detection, but from uncertainty. Clear raw evidence enables decisive action.
  • To meet rising regulatory expectations. Frameworks like ISO 27001, NIS2 and DORA demand audit-ready investigations. Circumstantial isn’t enough.

Investigation shouldn’t require elite forensics expertise or specialist-only tools. It should be fast, collaborative, and accessible across the SOC.

 

Maximize the ROI of Your Existing Security Investments

Security teams often find themselves overwhelmed by the volume of alerts produced by their SIEM, EDR, and XDR systems. Rather than functioning as an “alert factory,” your security stack should evolve into an intelligence pipeline—one that surfaces actionable insights instead of raw noise.

Actionable, effective insights come with conclusive evidence. Modern tools can plug directly into your existing infrastructure to deliver forensic-level insights in real-time. This enables your SOC to operate with clarity, reduces alert fatigue, and improves incident triage confidence. Most importantly, this evolution doesn’t require disruptive replacement—it’s about enhancing workflows intelligently, not reinvention.

Build Cyber Resilience Without Adding Complexity

Traditionally, powerful forensic and investigative capabilities have come with steep learning curves. But newer platforms are removing these barriers, offering expert-grade functionality through intuitive interfaces that don’t require deep forensic expertise or know-how.

This democratizes investigations, allowing junior analysts to contribute meaningfully and consistently, while freeing senior staff to focus on more complex threats. The result is a more effective, collaborative, and confident security team, helping to close the cybersecurity skills gap from within.

Strengthen Compliance Posture with Evidence-Backed Response

In a regulatory environment that demands accountability, audit-ready investigations are essential. The right tools provide comprehensive, timeline-based evidence collection, enabling full traceability and defensibility for each incident.

This supports compliance with frameworks like ISO 27001, NIS2, DORA, and other sector-specific mandates. With a clear, documented response process, organizations can face audits, regulatory disclosures, and internal reviews with confidence.

Address the Talent Shortage Without Compromising Quality

The global shortage of skilled cybersecurity professionals is unlikely to be resolved in the near term. Organizations must find ways to scale expertise without increasing headcount.

Advanced solutions do just that. They act as a force multiplier, automating repetitive tasks and providing built-in knowledge that allows analysts to focus and quickly pivot into strategic actions. This improves onboarding efficiency for new staff and helps prevent burnout among seasoned professionals, contributing to long-term team sustainability.

 

For MSSPs and MSPs: Elevate Your Service Delivery and Unlock New Revenue

For Managed Security Service Providers (MSSPs) and Managed Service Providers (MSPs), these capabilities represent a significant opportunity to scale operations, differentiate offerings, and grow revenue.

  • Deliver Faster, Smarter Service Without Adding Headcount: Cut investigation times across hundreds of endpoints while maintaining SLA commitments. These solutions enable your analysts to handle more cases, more quickly—transforming response speed into a competitive advantage.
  • Move from Detection to Resolution: Move beyond simply alerting clients to incidents. By integrating forensic-grade investigation capabilities, you can provide conclusive findings as part of your MDR or IR retainer services. This shift from signal to resolution adds immediate value to your engagements.
  • Empower Analysts Across All Tiers: Browser-based interfaces, guided workflows, and automation empower Tier 1 analysts to perform at Tier 3 levels. This not only reduces reliance on specialized forensic talent, but also improves consistency and outcomes across the board.
  • Unlock High-Margin Services Without Rebuilding Your Stack: With multi-tenant capabilities and low overhead, MSSPs can introduce IR services such as remote compromise assessments, forensic triage, or proactive compromise scanning—creating new revenue streams and upsell opportunities from existing MDR and SOC clients.
  • Prove Value with Evidence and Precision: Provide clients with detailed, timeline-based narratives and forensic evidence. This transparency builds trust and helps demonstrate quantifiable improvements in time-to-detect and time-to-respond—critical KPIs in today’s performance-driven security market.

 

The Path Forward: Resilience Through Intelligence

These capabilities are not aspirational—they’re already being delivered by forward-thinking platforms. Binalyze AIR is a prime example of a solution that embodies this next-generation approach, helping organizations and service providers alike:

  • Accelerate investigation and response
  • Strengthen compliance and audit readiness
  • Optimize team performance and morale
  • Extract more value from existing security investments

Whether you’re an enterprise security leader or an MSSP seeking to scale smarter, the future of cybersecurity lies not in complexity, but in clarity, intelligence, and speed. By transforming incident response into a proactive, resilient, and efficient process, you not only secure your organization—you enable it to thrive.

 

 

By: Dr. Erdal Ozkaya (Cybersecurity Advisor, Author, and Educator)

Original link to the blog: Click Here


In today’s rapidly evolving threat landscape, Security Operations Centers (SOCs) face mounting pressure to investigate incidents faster and with higher accuracy. Analysts spend valuable time switching between tools, writing queries, and compiling inconsistent reports — often during critical response windows.

In a recent session, Sanglap Patra, Security Engineer at Nielsen, showcased an innovative prototype — an AI-powered SOC Investigation Assistant that integrates natural language processing (NLP) with popular SOC tools like Splunk, Jira, and WhatsApp. This assistant automates investigation workflows, generates hunting queries from plain English prompts, and provides intelligent context-based analysis.

The session demonstrated how AI can act as a virtual teammate for analysts — handling repetitive investigation steps, correlating logs, and producing consistent, actionable insights. For CISOs, this signals the next phase of SOC modernization: AI-augmented detection and response operations.

 

Key Highlights:

  • How analysts can perform investigations over WhatsApp (voice/text) with instant Splunk results.
     
  • Using Gemini AI to interpret logs and provide contextual analysis. 

  • Business value of bridging SIEM with everyday communication apps for faster SOC operations.

 

About Speaker:

Sanglap Patra is a Security Engineer currently working at Nielsen, with prior experience at Toyota and Lumi. With a background spanning incident response, red teaming, digital forensics, and security engineering, he is now focused on applying AI and automation to simplify SOC workflows and improve incident handling speed and quality.

His hands-on demonstration reflected not just technical depth but a vision for how SOCs can evolve from manual analysis to context-aware, AI-driven operations.

 

Listen to the Live Chat (recorded):

 

Executive Summary

1. The Real-World Problem

During his tenure as a SOC analyst, Sanglap often faced challenges such as:

  • Long, manual investigations requiring custom queries.

  • Context switching across multiple tools (SIEM, ticketing, chat).

  • Inconsistent reporting from different analysts.

  • Critical incidents occurring during off-hours with limited analyst availability.

These operational inefficiencies inspired him to design an AI system that could act as an investigation co-pilot — reducing manual overhead while improving consistency and speed.

 

2. The Vision: Natural Language–Driven Investigations

The core idea behind the AI Assistant is simple yet powerful:

Analysts should be able to “talk” to their SOC — ask questions in plain English and get actionable investigative results.

By using Natural Language Processing (NLP), the system translates analyst queries into SIEM searches, runs those queries, interprets the logs, and summarizes results in human-readable form — all within the same conversational interface.
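As a toy illustration of that translation step (the actual prototype uses an LLM; the regex pattern and the SPL template below are simplified assumptions, not the real implementation):

```python
import re

# Toy intent-to-query mapping. A real assistant would use an LLM; this only
# shows the shape of the translation from plain English to a Splunk search.
PATTERNS = [
    (re.compile(r"unusual logins? for (\S+)", re.IGNORECASE),
     'search index=auth user="{0}" action=failure | stats count by src_ip'),
]

def to_spl(prompt):
    """Return a Splunk query for the prompt, or None if no pattern matches."""
    for pattern, template in PATTERNS:
        match = pattern.search(prompt)
        if match:
            return template.format(*match.groups())
    return None

print(to_spl("Check for unusual logins for sanglap.patra"))
```

The LLM's advantage over such hand-written patterns is generalization: it can formulate queries for intents nobody anticipated in advance.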

 

3. Architecture Overview

The automation comprises three AI agents working in tandem:

  • Session Controller: Tracks case context via Jira and manages user sessions over WhatsApp.

  • Query Agent: Understands user intent, formulates search queries, and runs them over Splunk (or other SIEMs like Sentinel, Elastic, or QRadar).

  • Analysis Agent: Analyzes returned logs, summarizes findings, and determines if the event is a true positive or false positive.

The system integrates seamlessly via APIs, enabling incident management, data retrieval, and response — all triggered by simple chat commands.
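A minimal sketch of how the three agents might fit together. Class names, the stubbed query formulation, and the verdict logic are illustrative assumptions, not the actual prototype:

```python
class SessionController:
    """Tracks per-user case context (stands in for the Jira/WhatsApp session layer)."""
    def __init__(self):
        self.cases = {}

    def get_case(self, user):
        return self.cases.setdefault(user, {"history": []})

class QueryAgent:
    """Turns analyst intent into a SIEM search (a real one would use an LLM)."""
    def formulate(self, prompt):
        return f"search {prompt}"

class AnalysisAgent:
    """Reduces returned logs to a verdict (real logic would be far richer)."""
    def analyze(self, results):
        return "true positive" if results else "false positive"

controller = SessionController()

def handle_message(user, prompt, run_query):
    """End-to-end flow: session context -> query -> execution -> analysis."""
    case = controller.get_case(user)
    query = QueryAgent().formulate(prompt)
    verdict = AnalysisAgent().analyze(run_query(query))
    case["history"].append((prompt, query, verdict))
    return verdict
```

Keeping the SIEM call (`run_query`) injected as a parameter is what makes the design portable across Splunk, Sentinel, Elastic, or QRadar back-ends.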

 

4. The Demonstration

Sanglap’s demo highlighted how an analyst could initiate, continue, or close investigations entirely through WhatsApp messages:

  • Asking: “Check for unusual logins for sanglap.patra.”

  • Receiving: a generated Splunk query, execution results, and summarized analysis.

  • Following up: “Summarize the investigation” — and getting a concise summary of findings.

The automation handled:

  • AI-driven log analysis

  • Context retention across sessions

  • Automated ticketing in Jira

  • Intelligent report generation

It showcased how AI could turn routine SOC tasks into dynamic, interactive workflows.

 

5. Questions & Insights

During Q&A, participants explored key points:

  • The prototype currently uses Google Gemini for LLM tasks, but enterprise deployments would benefit from self-hosted models trained on internal threat data.

  • Integration can extend to Microsoft Sentinel, Elastic, and QRadar — any platform supporting API-based queries.

  • Analysts can incorporate Threat Intelligence (TI) sources for enrichment (e.g., VirusTotal, AbuseIPDB).

  • The system can evolve to include automated response actions, closing the loop from detection to mitigation.
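The enrichment point above can be sketched offline: match alert indicators against a locally cached feed (in practice refreshed from services such as AbuseIPDB or VirusTotal via their APIs). The addresses below are documentation-range placeholders, not real indicators:

```python
# Locally cached indicator set; in production this would be refreshed from
# threat-intel feeds. Addresses are from TEST-NET ranges, for illustration.
KNOWN_BAD_IPS = {"203.0.113.7", "198.51.100.23"}

def enrich(alert, ti_set=KNOWN_BAD_IPS):
    """Annotate an alert with whether its source IP matches cached TI."""
    enriched = dict(alert)
    enriched["ti_match"] = alert.get("src_ip") in ti_set
    return enriched

print(enrich({"src_ip": "203.0.113.7", "rule": "impossible-travel"}))
```

Caching indicators locally also keeps enrichment fast and avoids leaking alert contents to third-party APIs on every lookup.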

 

CISO Playbook: Turning Insights Into Action

1. Begin with Workflow Mapping:
Identify repetitive SOC tasks (query generation, log parsing, case updates) that consume analyst hours and cause burnout.

2. Pilot AI-Assisted Workflows:
Start small — integrate NLP-based automation for investigation summaries or log correlation. Use open APIs (Splunk, Sentinel, Jira) to prototype quickly.

3. Ensure Data Governance:
Deploy AI models within secure, compliant environments. Train them on sanitized log schemas and threat patterns relevant to your organization.

4. Empower Analysts, Not Replace Them:
The goal is not full automation — it’s augmentation. Enable analysts to focus on judgment calls while AI handles the grunt work.

5. Measure & Iterate:
Track KPIs such as Mean Time to Investigate (MTTI) and Mean Time to Detect (MTTD). Use these to benchmark AI performance and refine your prompts and model logic.

 

Conclusion

The AI-powered SOC Investigation Assistant exemplifies how AI can operationalize security intelligence — making investigations faster, context-rich, and scalable.

As Sanglap emphasized, this is only the beginning. The future SOC will not be a static dashboard but an interactive, cognitive system — one that understands analyst intent, contextualizes threats, and drives autonomous action.

For CISOs, now is the time to reimagine SOC strategy with AI at its core — balancing human expertise with machine efficiency to stay ahead of evolving cyber threats.


Actionable Insights for CISOs

1. Evaluate the Viability of Decoupled SIEM Architectures

While decoupled SIEMs offer flexibility by separating data collection, storage, and threat detection, they may introduce complexity and integration challenges. Assess whether your organization has the engineering resources and expertise to manage such a modular approach effectively.

2. Consider the Benefits of Integrated SIEM Solutions

Integrated SIEM platforms, which bundle data collection, storage, and detection capabilities, can simplify management and reduce integration overhead. For organizations with limited resources or those seeking streamlined operations, this approach may be more practical.

3. Leverage AI to Enhance SIEM Capabilities

Incorporating AI agents into your SIEM strategy can automate threat detection and response, improving efficiency and reducing manual workload. However, ensure that your AI tools are compatible with your chosen SIEM architecture and can handle the complexities of federated log searches if applicable.

4. Assess Compliance and Data Sovereignty Requirements

Decoupled SIEM architectures, especially those utilizing federated log searches, may pose challenges in meeting compliance standards and data sovereignty laws. Evaluate your organization's regulatory obligations to determine if a decentralized approach aligns with legal requirements.

5. Plan for Future Scalability

As your organization's security needs grow, ensure that your SIEM solution can scale accordingly. Integrated SIEM platforms often offer more straightforward scalability, while decoupled systems may require additional engineering effort to expand effectively.

 

About Author:

Dr. Anton Chuvakin is a leading voice in cybersecurity, currently driving security solution strategy at Google Cloud following its acquisition of Chronicle Security. Widely recognized for his pioneering work in SIEM, log management, and threat detection, he is credited with coining the term “EDR” (Endpoint Detection and Response).

Before joining Google, Anton served as Research VP and Distinguished Analyst at Gartner, where he guided enterprise security leaders on detection, response, and operational strategy. He has co-authored several influential books, including Security Warrior, PCI Compliance, and Logging and Log Management, and his early blog, securitywarrior.org, was among the most-read in the industry.

 

Now, let’s hear directly from Dr. Anton Chuvakin on this subject:

In the world of security operations, there is a growing fascination with the concept of a “decoupled SIEM,” where detection, reporting, workflows, data storage, parsing (sometimes) and collection are separated into distinct components, some sold by different vendors.

Closely related to this is the idea of federated log search, which allows data to be queried on demand from various locations without first centralizing it in a single system.

When you combine these two trends with the emergence of AI agents and the “AI SOC,” a compelling vision appears — one where many of security operations’ biggest troubles are solved in an elegant and highly automated fashion. Magic!

 

Magical decoupled SIEM + magical federated log search + magical AI agents = 90X the magic

(Is my math mathing? Cheap + good + fast + AI powered … pick any …ehh… I digress!)

However, a look at the market reveals a conflicting — dare I say opposite — trend. Many organizations are actively choosing the very opposite approach: tightly integrated platforms where search, dashboards, detection, data collection, and AI capabilities are bundled together — and additional things are added on top (such as EDR).

Let’s call this “EDR-ized SIEM” or “SIEM with XDR-inspired elements” (for those who think they can define XDR) or “supercoupled SIEM” (but this last one is a bit of a mouthful…)

While some suggest this is a split between large enterprises choosing disaggregated stacks and smaller companies opting for closer integration, this doesn’t fully capture the success rates of these different models (one is successful, and the other is, well, also successful, but at a very small number of extra-large, engineering-heavy organizations).

If one were to take a contrarian view (as I will in this post!), it might be that the decoupled and federated approach, with or without AI agents, is destined to be a secondary, auxiliary path in the evolution of SIEM. 

This isn’t a nostalgic vote for outdated, 1990s-era ideas (“gimme a 1U SIEM appliance with MySQL embedded!”), but rather a realistic assessment based on past lessons, such as the niche fascination with security data science.

Many years ago (2012), while at Gartner, I wrote a notorious “Big Analytics for Security: A Harbinger or An Outlier?” (archived repost), and it is now very clear that the late-2000s to early-2010s security data science “successes” remained a tiny micro-minority of examples. A trend can be emergent, growing tenfold from a tiny base of 0.01% of companies, yet still only reach 0.1% of the market — making it an outlier, not a harbinger of the mainstream future.

Ultimately, the evidence suggests that a decoupled, federated architecture will not form the basis of the typical SIEM of 2027. Instead, the centralized platform model, enhanced and supercharged by AI, will reign supreme (and, yes, it will also include some auxiliary decentralized elements as needed, think of it as “90% centralized / 10% federated SIEM” — a better model for the future).

My conclusion:

  1. SIEM has a future! If you hate SIEM so much that you … rename it, then, well, SIEM still has a future (hi XDR!)
  2. Decoupled SIEM and federated log search belong in the future of SIEM.
  3. However, decoupled SIEM and federated log search (in my NSHO) are not THE future of SIEM.
  4. I think this because both are just too damn messy for many clients to make them work well. They also fail many compliance tests (well, the federated part, not the decoupled part).
  5. AI and AI agents are a very big part of the SIEM future. However, AI agents do not make decoupled SIEM and federated log search less messy enough (“I didn’t save any logs from X, hey AI agent .. get me logs from X” does not work IRL)

 

Put another way:

The Romantic Ideal: The theory is that scalable data platforms and specialized threat analysis are dramatically different, so they should be handled by specialists, and modern APIs should make connecting them “easy.” Magic!

The Real Reality: A natively designed, single-vendor, integrated SIEM is inherently simpler and easier to manage and support than a multi-component stack you have to assemble “at home.” It is also faster! AI integrated inside it just works better. With decoupling, you also lose the benefit of having a “single face to scream at” when things break. Reality!

 

By Anton Chuvakin (Office of the CISO, Google Cloud)

Original Link to the Blog: Click Here

 

Join CISO Platform and become part of a global network of 40,000+ security leaders.

Sign up now: CISO Platform


Join us for a live AI Demo Talk on "Mapping the AI Security Landscape: How CISOs Can Navigate Innovation and Risk" with Richard Stiennon, Chief Research Analyst at IT-Harvest

 

What You'll See :

  • The AI Security Stack: How to architect defenses for AI-driven environments
  • Automating the SOC: From co-pilots to autonomous response
  • Governance, Guardrails, and Risk: Keeping AI under control

 

Date: October 30, 2025 (Thursday)
Time: 12:00 PM EDT | 9:30 PM IST

 

>> View Detailed Talk Here


Actionable Insights For CISOs:

  • Prioritize Defense-in-Depth

    • Implement layered security across all system levels.

    • Maintain a detailed understanding of assets, data flows, and vulnerabilities.

    • Regularly update threat models to reflect evolving threats.

  • Enhance Monitoring and Detection

    • Deploy AI/ML-based anomaly detection systems.

    • Integrate real-time threat intelligence feeds.

    • Conduct red teaming exercises simulating AI-driven attacks.

  • Invest in AI-Resilient Infrastructure

    • Design AI systems with strong security measures (encryption, access control).

    • Audit AI models regularly for biases and vulnerabilities.

    • Collaborate with vendors to improve AI security continuously.

  • Educate and Train the Workforce

    • Run ongoing security awareness programs focused on AI-related threats.

    • Foster a culture of security accountability among employees.

    • Simulate phishing/social engineering attacks to test readiness.

  • Collaborate and Share Threat Intelligence

    • Join ISACs and other industry forums for intelligence sharing.

    • Participate in public-private cybersecurity initiatives.

    • Engage with government and non-government bodies to enhance collective defense.

  • Takeaway:

    • AI currently favors attackers, but proactive, layered, and collaborative defense strategies can help CISOs regain balance.

 

About Author:

Bruce Schneier is an internationally renowned security technologist, cryptographer, and author, often called a “security guru” by The Economist. He serves as a Lecturer in Public Policy at Harvard Kennedy School and a Fellow at the Berkman Klein Center for Internet & Society.

Bruce has written numerous influential books, including Applied Cryptography, Secrets and Lies, Data and Goliath, and A Hacker’s Mind. He also runs the popular blog Schneier on Security and the newsletter Crypto-Gram.

Throughout his career, he has shaped global conversations on cryptography, privacy, and trust, bridging the worlds of technology and public policy.

 

Now, let’s hear directly from Bruce Schneier on this subject:

 

His conclusion:

Context wins

Basically whoever can see the most about the target, and can hold that picture in their mind the best, will be best at finding the vulnerabilities the fastest and taking advantage of them. Or, as the defender, applying patches or mitigations the fastest.

And if you’re on the inside you know what the applications do. You know what’s important and what isn’t. And you can use all that internal knowledge to fix things — hopefully before the baddies take advantage.

Summary and prediction

  1. Attackers will have the advantage for 3-5 years. For less-advanced defender teams, this will take much longer.
  2. After that point, AI/SPQA will have the additional internal context to give Defenders the advantage.

LLM tech is nowhere near ready to handle the context of an entire company right now. That’s why this will take 3-5 years for true AI-enabled Blue to become a thing.

And in the meantime, Red will be able to use publicly-available context from OSINT, Recon, etc. to power their attacks.

I agree.

By the way, this is the SPQA architecture.

 

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here

 



Actionable Insights For CISOs:

  • Expand tabletop exercises to include data extortion / leak scenarios.

  • Review and harden OAuth, API, and third-party app integrations.

  • Conduct phishing and vishing simulations, especially for high-privilege users.

  • Increase scrutiny and auditing of third-party vendors and supply chain partners.

  • Strengthen vulnerability management and rapid patch deployment, especially for zero-day exploits.

  • Deploy endpoint and runtime protection capable of detecting memory-resident or fileless malware.

  • Enforce data minimization, network segmentation, and zero trust principles to reduce exposure.

  • Pre-establish extortion response policies, including legal, PR, and negotiation strategies.

  • Monitor threat actor blogs, leak sites, and intelligence feeds for early warnings.

  • Track metrics for extortion risk exposure, such as vulnerable integrations, app consents, and patch timelines.

 

About Author:

Brian Krebs is an award-winning journalist and one of the most respected voices in cybersecurity. He is the founder of KrebsOnSecurity.com, a widely read daily blog covering computer security, cybercrime, and the underground economy.

Before launching his independent platform, Brian spent over a decade at The Washington Post (1995–2009), where he wrote hundreds of stories on internet security and technology policy. His investigative reporting has exposed major data breaches, cybercrime networks, and emerging threats that shape today’s digital landscape.

Brian’s work is known for making complex cybersecurity issues accessible and engaging for a global audience, bridging the gap between technical detail and public understanding.

 

Now, let’s hear directly from Brian Krebs on this subject:

A cybercriminal group that used voice phishing attacks to siphon more than a billion records from Salesforce customers earlier this year has launched a website that threatens to publish data stolen from dozens of Fortune 500 firms if they refuse to pay a ransom. The group also claimed responsibility for a recent breach involving Discord user data, and for stealing terabytes of sensitive files from thousands of customers of the enterprise software maker Red Hat.

The new extortion website tied to ShinyHunters (UNC6040), which threatens to publish stolen data unless Salesforce or individual victim companies agree to pay a ransom.

In May 2025, a prolific and amorphous English-speaking cybercrime group known as ShinyHunters launched a social engineering campaign that used voice phishing to trick targets into connecting a malicious app to their organization’s Salesforce portal.

The first real details about the incident came in early June, when the Google Threat Intelligence Group (GTIG) warned that ShinyHunters — tracked by Google as UNC6040 — was extorting victims over their stolen Salesforce data, and that the group was poised to launch a data leak site to publicly shame victim companies into paying a ransom to keep their records private. A month later, Google acknowledged that one of its own corporate Salesforce instances was impacted in the voice phishing campaign.

Last week, a new victim shaming blog dubbed “Scattered LAPSUS$ Hunters” began publishing the names of companies that had customer Salesforce data stolen as a result of the May voice phishing campaign.

“Contact us to negotiate this ransom or all your customers data will be leaked,” the website stated in a message to Salesforce. “If we come to a resolution all individual extortions against your customers will be withdrawn from. Nobody else will have to pay us, if you pay, Salesforce, Inc.”

Below that message were more than three dozen entries for companies that allegedly had Salesforce data stolen, including Toyota, FedEx, Disney/Hulu, and UPS. The entries for each company specified the volume of stolen data available, as well as the date that the information was retrieved (the stated breach dates range between May and September 2025).

Image: Mandiant.

On October 5, the Scattered LAPSUS$ Hunters victim shaming and extortion blog announced that the group was responsible for a breach in September involving a GitLab server used by Red Hat that contained more than 28,000 Git code repositories, including more than 5,000 Customer Engagement Reports (CERs).

“Alot of folders have their client’s secrets such as artifactory access tokens, git tokens, azure, docker (redhat docker, azure containers, dockerhub), their client’s infrastructure details in the CERs like the audits that were done for them, and a whole LOT more, etc.,” the hackers claimed.

Their claims came several days after a previously unknown hacker group calling itself the Crimson Collective took credit for the Red Hat intrusion on Telegram.

Red Hat disclosed on October 2 that attackers had compromised a company GitLab server, and said it was in the process of notifying affected customers.

“The compromised GitLab instance housed consulting engagement data, which may include, for example, Red Hat’s project specifications, example code snippets, internal communications about consulting services, and limited forms of business contact information,” Red Hat wrote.

Separately, Discord has started emailing users affected by another breach claimed by ShinyHunters. Discord said an incident on September 20 at a “third-party customer service provider” impacted a “limited number of users” who communicated with Discord customer support or Trust & Safety teams. The information included Discord usernames, email addresses, IP addresses, the last four digits of any stored payment cards, and government ID images submitted during age verification appeals.

The Scattered Lapsus$ Hunters claim they will publish data stolen from Salesforce and its customers if ransom demands aren’t paid by October 10. The group also claims it will soon begin extorting hundreds more organizations that lost data in August after a cybercrime group stole vast amounts of authentication tokens from Salesloft, whose AI chatbot is used by many corporate websites to convert customer interaction into Salesforce leads.

In a communication sent to customers today, Salesforce emphasized that the theft of any third-party Salesloft data allegedly stolen by ShinyHunters did not originate from a vulnerability within the core Salesforce platform. The company also stressed that it has no plans to meet any extortion demands.

“Salesforce will not engage, negotiate with, or pay any extortion demand,” the message to customers read. “Our focus is, and remains, on defending our environment, conducting thorough forensic analysis, supporting our customers, and working with law enforcement and regulatory authorities.”

The GTIG tracked the group behind the Salesloft data thefts as UNC6395, and says the group has been observed harvesting the data for authentication tokens tied to a range of cloud services like Snowflake and Amazon’s AWS.

Google catalogs Scattered Lapsus$ Hunters by so many UNC names (throw in UNC6240 for good measure) because it is thought to be an amalgamation of three hacking groups — Scattered Spider, Lapsus$ and ShinyHunters. The members of these groups hail from many of the same chat channels on the Com, a mostly English-language cybercriminal community that operates across an ocean of Telegram and Discord servers.

The Scattered Lapsus$ Hunters darknet blog is currently offline. The outage appears to have coincided with the disappearance of the group’s new clearnet blog — breachforums[.]hn — which vanished after shifting its Domain Name Service (DNS) servers from DDoS-Guard to Cloudflare.

But before it died, the websites disclosed that hackers were exploiting a critical zero-day vulnerability in Oracle’s E-Business Suite software. Oracle has since confirmed that a security flaw tracked as CVE-2025-61882 allows attackers to perform unauthenticated remote code execution, and is urging customers to apply an emergency update to address the weakness.

Mandiant’s Charles Carmakal shared on LinkedIn that CVE-2025-61882 was initially exploited in August 2025 by the Clop ransomware gang to steal data from Oracle E-Business Suite servers. Bleeping Computer writes that news of the Oracle zero-day first surfaced on the Scattered Lapsus$ Hunters blog, which published a pair of scripts that were used to exploit vulnerable Oracle E-Business Suite instances.

On Monday evening, KrebsOnSecurity received a malware-laced message from a reader that threatened physical violence unless their unstated demands were met. The missive, titled “Shiny hunters,” contained the hashtag $LAPSU$$SCATEREDHUNTER, and urged me to visit a page on limewire[.]com to view their demands.

A screenshot of the phishing message linking to a malicious trojan disguised as a Windows screensaver file.

KrebsOnSecurity did not visit this link, but instead forwarded it to Mandiant, which confirmed that similar menacing missives were sent to employees at Mandiant and other security firms around the same time.

The link in the message fetches a malicious trojan disguised as a Windows screensaver file (Virustotal’s analysis on this malware is here). Simply viewing the booby-trapped screensaver on a Windows PC is enough to cause the bundled trojan to launch in the background.

Mandiant’s Austin Larsen said the trojan is a commercially available backdoor known as ASYNCRAT, a .NET-based backdoor that communicates using a custom binary protocol over TCP, and can execute shell commands and download plugins to extend its features.

A scan of the malicious screensaver file at Virustotal.com shows it is detected as bad by nearly a dozen security and antivirus tools.

“Downloaded plugins may be executed directly in memory or stored in the registry,” Larsen wrote in an analysis shared via email. “Capabilities added via plugins include screenshot capture, file transfer, keylogging, video capture, and cryptocurrency mining. ASYNCRAT also supports a plugin that targets credentials stored by Firefox and Chromium-based web browsers.”

Malware-laced targeted emails are not out of character for certain members of the Scattered Lapsus$ Hunters, who have previously harassed and threatened security researchers and even law enforcement officials who are investigating and warning about the extent of their attacks.

With so many big data breaches and ransom attacks now coming from cybercrime groups operating on the Com, law enforcement agencies on both sides of the pond are under increasing pressure to apprehend the criminal hackers involved. In late September, prosecutors in the U.K. charged two alleged Scattered Spider members aged 18 and 19 with extorting at least $115 million in ransom payments from companies victimized by data theft.

U.S. prosecutors heaped their own charges on the 19-year-old in that duo — U.K. resident Thalha Jubair — who is alleged to have been involved in data ransom attacks against Marks & Spencer and Harrods, the British food retailer Co-op Group, and the 2023 intrusions at MGM Resorts and Caesars Entertainment. Jubair also was allegedly a key member of LAPSUS$, a cybercrime group that broke into dozens of technology companies beginning in late 2021.

A Mastodon post by Kevin Beaumont, lamenting the prevalence of major companies paying millions to extortionist teen hackers, refers derisively to Thalha Jubair as a part of an APT threat known as “Advanced Persistent Teenagers.”

In August, convicted Scattered Spider member and 20-year-old Florida man Noah Michael Urban was sentenced to 10 years in federal prison and ordered to pay roughly $13 million in restitution to victims.

In April 2025, a 23-year-old Scottish man thought to be an early Scattered Spider member was extradited from Spain to the U.S., where he is facing charges of wire fraud, conspiracy and identity theft. U.S. prosecutors allege Tyler Robert Buchanan and co-conspirators hacked into dozens of companies in the United States and abroad, and that he personally controlled more than $26 million stolen from victims.

Update, Oct. 8, 8:59 a.m. ET: A previous version of this story incorrectly referred to the malware sent by the reader as a Windows screenshot file. Rather, it is a Windows screensaver file.

 

 

By: Brian Krebs (Investigative Journalist, Award-Winning Author)

Original link to the blog: Click Here

 

Join CISO Platform and become part of a global network of 40,000+ security leaders.

Sign up now: CISO Platform

Read more…

In the race to adopt AI, security executives might feel a bit like Martinus Evans, who came to fame for running eight marathons while weighing more than 300 pounds.

Evans didn’t believe he could run a marathon until he did it, and the same is true for security executives: You might not know that AI can help you, until you find it doing just that. At Google Cloud’s Office of the CISO, we believe that the large-scale promise of AI can only be achieved when it’s developed and deployed in a responsible, ethical, and safe way.

Securing AI use plays a big role in responsible, safe AI use cases. This is where our Secure AI Framework (SAIF) comes in, as well as our new deep-dive report on how to apply SAIF in the real world.

We’ve also heard from AI skeptics. Some customers are interested in working AI into their workflows, but aren’t sure where to begin in a way that will generate results. Some are facing institutional headwinds from leaders and even security engineers who push back when AI is discussed. Others are dead-set against using gen AI until it can prove its worth — perhaps by waiting for others to explore how to use gen AI in security and then report back.

The reality is that generative AI is already delivering clear and impactful security results. Today, we’re reviewing three decisive use cases that you can adopt as your own, and that may inspire you to find new uses for AI in security, too.

 

The big boost that gen AI gives threat hunting

In the ever-shifting cybersecurity landscape, where threats change colors faster than a chameleon in a disco, traditional defenses often find themselves a step behind. That's where proactive threat hunting comes in.

While we offer intelligence-led, human-driven Custom Threat Hunt services to reveal ongoing and past threat actor activity in cloud and on-premise environments, you can also use AI as a threat-hunting advisor. It can help you:

  • Generate threat hunting hypotheses.
  • Provide log sources that would be needed for the hunt.
  • Align the hunt to the unique threats targeting a specific industry.
  • Offer guidance on how to generate hunt queries.
  • Suggest next steps on how to pivot the search if the hunt gets stuck.
  • Help write hunt findings reports.
  • Create detections based on hunt findings.
  • Provide configuration changes when detection requires additional log sources.

Some prompts you can use to get started integrating AI into your threat hunts include:

  • "If I have [a specific threat] in my environment, and want to find APT42 persistence, what should I search for in my Elastic?"
  • "Suggest a number of threat hunt hypotheses that align to the MITRE ATT&CK framework."
  • "Based on the threat profile for [my company], suggest threat hunts that align to APT groups that would target that company."
  • "I'm stuck at [situation] and I'm hunting for [a threat], what should I pivot to next to investigate?"
  • "Based on [a specific] hypothesis, what data would I need for a successful hunt? What should I search for?"

Gen AI can help transform threat hunting from a daunting challenge into an exhilarating pursuit.

 

How gen AI helps make stronger security validations

Think of security validation as a rigorous inspection of your defenses, meticulously examining each control to ensure it functions as intended and withstands the pressures of real-world attacks. The validation process can help bridge the gap between security theory and IT reality, uncovering hidden vulnerabilities, generating actionable insights, mapping your path to compliance, and even encouraging cross-team knowledge-sharing, all the while helping you build a foundation for proactive defense.

Gen AI can be a powerful ally of security validation, offering a range of capabilities that can enhance and streamline the testing process:

  • Create test cases based on existing detections and controls, ensuring comprehensive coverage and minimizing the risk of overlooking potential weaknesses.
  • Generate scripts in seconds, even for those unfamiliar with specific security controls, accelerating the testing process and reducing the barrier to entry.
  • Suggest security controls to prioritize for testing based on your threat profile and industry, optimizing resource allocation and focusing efforts on the most critical areas.
  • Develop threat models to help you anticipate potential attack vectors and formulate proactive mitigation strategies.
  • Recommend mitigation strategies based on security validation test results that can address weaknesses, strengthening your defenses against potential threats.
  • Map your security controls to frameworks, simplifying compliance efforts and ensuring adherence to industry standards.

Some prompts you can use to boost your security validations:

  • "I need to test [a specific] security control, generate a script that can test this."
  • "What security controls should I test if I'm in the [specific] industry?"
  • "Generate a threat model for my organization."
  • "Based on [data] from the validation tests, what mitigation strategies should I focus on implementing?"
  • "Map my security controls to [specific] framework."

Gen AI can help mature your security validation process into one that’s more proactive and dynamic, with continuous assessments and recommendations on how to strengthen your defenses.

 

AI delivers smarter red team data analysis

Red teams often face the challenge of processing vast amounts of unstructured data collected during reconnaissance and internal network exploration. This data can include text from social media, various file types, and descriptions in Active Directory objects. Traditional methods of sifting through this information can be time-consuming and inefficient.

Generative AI, particularly large language models (LLMs), offers a powerful solution to this problem. By feeding this unstructured data into an LLM, red teams can use its ability to parse and understand text. LLMs can be prompted to return structured data in formats such as JSON, XML, and CSV, making it much easier to analyze.
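In practice the fragile step is the last one: defensively parsing the model's reply, since LLMs often wrap JSON in markdown fences. A minimal sketch of that post-processing (the sample reply is invented for illustration):

```python
import json
import re

# Sketch of the post-processing step described above: an LLM is asked to
# return structured JSON for unstructured recon text, and we defensively
# parse its reply, which may or may not be wrapped in a markdown fence.

def parse_llm_json(reply: str):
    """Extract and parse a JSON payload from an LLM reply."""
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", reply, re.DOTALL)
    payload = match.group(1) if match else reply
    return json.loads(payload)

reply = """```json
[{"host": "BKP01", "role": "backup server", "evidence": "AD description"}]
```"""
records = parse_llm_json(reply)
print(records[0]["role"])
```

Once parsed, the records can be filtered, sorted, or loaded into whatever analysis tooling the team already uses.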

This approach turns slow manual sifting into fast, structured analysis, allowing red team operators to more quickly identify potential leads, vulnerabilities, and paths for exploitation, ultimately improving the overall efficiency and impact of the engagement.

Here are some useful sample prompts for red teams:

  • "Analyze these Active Directory descriptions and identify systems that are likely backup servers or domain controllers."
  • "Scan the content of these files and identify any potential credentials (usernames, passwords)."
  • "Analyze this social media data and identify potential targets for phishing campaigns, especially those not in IT or security roles."
  • "Explain why this piece of information is relevant to a potential security issue found in the provided data."
  • "Analyze this unstructured Active Directory data and detect high-value target systems, cluster user accounts, and correlate users to their likely workstations."
  • "Based on this data from internal network exploration, provide a concise summary highlighting key findings and potential vulnerabilities."

By using AI in this way, red teams can transform data analysis from a bottleneck into a force multiplier, significantly enhancing their operational capabilities.

To learn more about how to use AI as your security sidekick, come see us at the RSA Conference, and check out our latest report on AI and security.

 

- By Anton Chuvakin (Ex-Gartner VP Research; Head of Security, Google Cloud) & Trisha Alexander (Senior Consultant, Mandiant Consulting)

Original link of post is here

 
Read more…

Join us for a live AI Demo Talk on "AI-Powered SOC Agent: Conversational Security Investigations with WhatsApp, Splunk & Gemini" with Sanglap Patra, Cybersecurity Engineer (SIEM & SOAR), Nielsen.

 

What You'll See :

  • Investigations over WhatsApp (voice/text) returning Splunk queries in seconds.
  • Gemini AI interpreting logs & providing contextual insights.
  • How bridging SIEM + chat apps accelerates SOC operations.

 

Date: October 15, 2025 (Wednesday)
Time: 9:00 AM PST | 12:00 PM EST | 9:30 PM IST

 

Join us live or register to receive the session recording if the timing doesn’t suit your timezone. 

 

>> Register Here (Or Send Your Team)

Read more…

We had an amazing CISOPlatform Playbook Roundtable in Atlanta! The energy in the room was incredible, and we’re taking it forward by launching a Cybersecurity Chapter in Atlanta to create the standards for pen testing programs. (Join CISOPlatform Atlanta Chapter)

A shoutout to Bikash Barai and B Liebert for moderating the session, and to the discussion leaders Matthew Harris, Jeffrey Apolis, Shankar Babu Chebrolu, Baratunde Williams, Cesar L., Sameer Ali (MBA, CISSP), and Larisa Thomas.

A big thanks to EC-Council Global CISO Forum and Hacker Halted Cybersecurity Conference for partnering with us and giving us the platform to make this possible.

 

Key Discussion Highlights 

Our discussions spanned some of the most critical areas shaping cybersecurity today:

  • Continuous Automated Pen Testing

  • Cyber Insurance: Evolving Needs & Challenges

  • Agentic AI Security

  • Building Standards for Pen Testing Programs

 

Join The Atlanta Chapter

If you’re passionate about advancing cybersecurity and want to contribute to developing pen testing standards, join the CISOPlatform Atlanta Chapter. Let’s collaborate, innovate, and build together.

Join CISOPlatform Atlanta Chapter

Together, we can shape the future of cybersecurity leadership.

Read more…

In an age where AI-driven agents increasingly handle sensitive requests, the critical question is: how do we trust the identity behind every interaction? Traditional methods like passwords and OTPs are proving inadequate in stopping fraud, deepfakes, and account takeovers. This AI Demo featured Nadav Stern (Head of Engineering, Anonybit) and Jeremiah Mason (Chief Product Officer, Anonybit), who demonstrated how privacy-first biometrics and decentralized identity verification can secure the next generation of AI workflows.

 

Key Highlights:

  • Verifying True Identity: How to confirm the real human or entity behind AI-initiated requests to prevent misuse and fraud. 

  • Privacy-First Biometrics: Why biometrics with built-in privacy safeguards are essential to secure access to AI agents and control their actions. 

  • Seamless Trusted Identity: Practical ways to embed trusted identity into AI-driven workflows without creating friction for users.

 

About Speaker:

- Nadav Stern (Head of Engineering, Anonybit)

- Jeremiah Mason (Chief Product Officer, Anonybit)

 

Listen To Live Chat : (Recorded)

Featuring Nadav Stern (Head of Engineering, Anonybit) & Jeremiah Mason (Chief Product Officer, Anonybit)

 

Executive Summary

  • Verifying True Identity: AI-driven interactions are vulnerable if we cannot confirm who (or what) is behind a request. Decentralized biometrics allow organizations to establish strong trust without relying on passwords, devices, or OTPs.

  • Privacy-First Biometrics: Anonybit’s platform decentralizes biometric data, breaking it into shards stored across multiple cloud environments. This ensures the data can never be reassembled in any single location, preserving user privacy while maintaining strong authentication.

  • Seamless Trusted Identity: Trusted identity can be embedded into logins, transactions, help desk calls, and even chatbot flows—delivering frictionless continuity across user journeys without exposing biometric data to AI systems.

  • Use Cases: Banking transactions, help desk automation, AI chatbots, and enterprise access control can all be strengthened with privacy-preserving biometric trust.

 

Conversation Highlights

Why Identity is Central to Securing AI

Jeremiah and Nadav emphasized that AI agents are only as trustworthy as the identity layer behind them. Traditional methods like SMS OTP and device-bound biometrics fall short—either too weak (OTP phishing) or too rigid (device loss, re-enrollment issues).

Anonybit’s approach uses cloud-based, decentralized biometrics that work across devices and contexts, ensuring identity continuity while removing single points of failure.

 

Privacy-First Biometrics: Protecting Users at Scale

Key features of Anonybit’s privacy-preserving model:

  • Decentralization: Biometric data is broken into shards across multiple servers, ensuring no central database exists to breach.

  • One-to-One & One-to-Many Matching: Supports authentication and deduplication, helping detect fraud or synthetic identities.

  • Risk-Aware Authentication: Incorporates IP, device, and behavioral signals alongside biometric checks.

  • Multi-Modal Support: Face, palm, iris, voice, or fingerprint—all configurable to enterprise needs.

This ensures compliance, removes storage risks, and enables enterprises to use biometrics without compromising privacy.
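To make the shard property concrete, here is an XOR-based n-way split that illustrates the idea the speakers describe: any subset of shards short of the full set is indistinguishable from random noise. This is an educational sketch only; Anonybit's actual scheme is proprietary and operates on biometric templates, not raw bytes.

```python
import secrets

# Illustrative only: an XOR-based n-way secret split. No single shard
# reveals the secret; XOR-ing all shards together reconstructs it.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split(secret: bytes, n: int) -> list[bytes]:
    """Split `secret` into n shards; any n-1 shards are random noise."""
    shards = [secrets.token_bytes(len(secret)) for _ in range(n - 1)]
    last = secret
    for s in shards:
        last = xor_bytes(last, s)
    return shards + [last]

def combine(shards: list[bytes]) -> bytes:
    """XOR all shards back together to recover the secret."""
    out = bytes(len(shards[0]))
    for s in shards:
        out = xor_bytes(out, s)
    return out

template = b"biometric-template-bytes"
shards = split(template, 3)
print(combine(shards) == template)
```

Production systems typically use threshold schemes (k-of-n) rather than this all-or-nothing split, so that losing one storage location does not destroy the record.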

 

Real-World Demonstrations

  • Banking Transactions: Instead of OTP for high-value transfers, users confirm transactions via facial or multimodal biometric verification—instantly secured, with data never exposed to the bank or AI fraud engine.

  • Help Desk & IVR Systems: AI-based support systems can trigger biometric verification seamlessly, reducing fraud in account recovery and lowering costs of human-assisted calls.

  • AI Chatbots: A user chatting with an AI agent can be prompted for biometric authentication inline—ensuring that while the agent gets a “yes/no” result, the biometric data never touches the AI system itself.

  • Enterprise Access: Integrates with platforms like Okta, Ping, or Entra for workforce authentication, delivering a single biometric identity across all services.

 

Final Thoughts

As AI agents become decision-makers in financial services, customer support, and enterprise workflows, trust becomes the ultimate currency. The session demonstrated that embedding privacy-preserving biometric identity into AI workflows can close major security gaps—without introducing friction or new privacy risks.

Read more…

In today’s cybersecurity landscape, where analysts are overwhelmed by data and threats evolve faster than manual processes can handle, task-driven AI agents are emerging as game-changers. This AI Demo Talk featured Steve Povolny (Senior Director, Security Research & Competitive Intelligence, Exabeam), who demonstrated how agentic platforms use AI-powered assistants to augment investigations, accelerate response, and deliver CISO-level insights.

 

Key Highlights:

- AI-Driven Investigations: Live demo of a conversational agent performing detection-specific analysis.
- CISO-Level Advisor: Showcasing an agent that delivers strategic insights and security posture analysis.
- NLP-Powered Orchestration: Demonstrating natural language queries to run complex searches and generate visualizations in seconds.

 

About Speaker:

- Steve Povolny (Senior Director, Security Research & Competitive Intelligence, Exabeam)

Listen To Live Chat : (Recorded)

Featuring Steve Povolny (Senior Director, Security Research & Competitive Intelligence, Exabeam)

 

Executive Summary

  • Security teams face mounting challenges: alert fatigue, complexity of threat analysis, and shortage of skilled analysts.

  • Task-driven AI agents provide automation and context at every level—helping junior analysts triage alerts, empowering senior investigators with depth, and equipping CISOs with strategic visibility.

  • This session highlighted:
      1. AI-Driven Investigations – inline agents that summarize, classify, and explain cases in seconds.
      2. CISO-Level Advisory – agents acting as strategic advisors for posture assessment and coverage gaps.
      3. NLP-Powered Orchestration – natural language queries enabling fast searches and visualizations without complex query language.

  • The promise: reduced mean-time-to-detect (MTTD) and mean-time-to-respond (MTTR), improved analyst productivity, and deeper strategic visibility.
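MTTD and MTTR are simple averages over incident timestamps, which makes them easy to compute directly from case data. A small sketch, with invented incident records:

```python
from datetime import datetime, timedelta

# Sketch of computing the MTTD/MTTR metrics mentioned above from
# incident timestamps. The incident records are invented for illustration.

def mean_delta(pairs):
    """Mean timedelta across (start, end) timestamp pairs."""
    deltas = [end - start for start, end in pairs]
    return sum(deltas, timedelta()) / len(deltas)

incidents = [
    # (event occurred, detected, responded)
    (datetime(2025, 9, 1, 8, 0), datetime(2025, 9, 1, 9, 0), datetime(2025, 9, 1, 12, 0)),
    (datetime(2025, 9, 2, 8, 0), datetime(2025, 9, 2, 11, 0), datetime(2025, 9, 2, 14, 0)),
]

mttd = mean_delta([(occ, det) for occ, det, _ in incidents])   # detect delay
mttr = mean_delta([(det, res) for _, det, res in incidents])   # respond delay
print(mttd, mttr)
```

Tracking these two numbers over time is the simplest way to verify whether AI-assisted triage is actually paying off.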

 

Conversation Highlights

AI-Driven Investigations: From Noise to Narrative

Steve showcased Exabeam’s Investigation Agent, which transforms raw detections into structured case summaries. Instead of manually sifting through 50+ detections, analysts receive a high-level synopsis (including timeline, threat vectors, and classification such as compromised insider).

Key points:

  • Summaries balance high-level CISO-friendly language with technical context for remediation.

  • Built-in explainable AI reasoning shows why a case is classified a certain way.

  • Analysts get prioritized “top 10 most relevant detections” plus recommended next steps (isolate host, reset password, enforce MFA).

This ensures teams can act quickly with confidence instead of drowning in raw alerts.

 

CISO-Level Advisor: Strategic Guidance at Scale

Beyond investigations, Steve introduced the Advisor Agent within Exabeam’s Outcomes Navigator. Acting like a “virtual consultant,” it continuously analyzes log sources, connectors, and use-case coverage across MITRE ATT&CK.

Highlights:

  • Identifies strengths (e.g., ransomware, phishing coverage) and gaps (crypto mining, insider threats).

  • Provides prioritized recommendations: enhance data sources, improve DLP controls, expand cloud monitoring.

  • Future releases aim to map gaps to specific vendor integrations.

The result: CISOs get a real-time executive view of coverage trends—without expensive manual assessments.

 

NLP-Powered Orchestration: Natural Language Search & Visualization

Analysts no longer need deep SQL or query-language expertise. Exabeam’s NLP Search Agent converts plain language requests into structured queries and visualizations.

Examples from the demo:

  • Search for “all malware cases with score above 20 in the last 14 days” and instantly return filtered results.

  • Auto-generated case names summarize complex chains (e.g., phishing email → malicious domain → credential theft → data exfiltration).

  • Create visualizations (“bar chart of alerts by user over 14 days”) in seconds, powering threat hunting and executive dashboards.

This democratizes advanced analysis across the SOC—junior analysts can query as easily as senior staff.
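Under the hood, a natural-language search feature amounts to translating a free-form request into a structured query. A toy illustration of that input/output contract, using a regex grammar for one request shape; Exabeam's actual agent uses an LLM, not pattern matching:

```python
import re

# Toy natural-language-to-query translator covering one request shape:
# "all <type> cases with score above <n> in the last <d> days".

PATTERN = re.compile(
    r"all (?P<case_type>\w+) cases with score above (?P<score>\d+) "
    r"in the last (?P<days>\d+) days"
)

def to_query(request: str) -> dict:
    """Translate a plain-language request into a structured query dict."""
    m = PATTERN.search(request.lower())
    if not m:
        raise ValueError("request not understood")
    return {
        "case_type": m["case_type"],
        "min_score": int(m["score"]),
        "window_days": int(m["days"]),
    }

q = to_query("All malware cases with score above 20 in the last 14 days")
print(q)
```

The resulting dict is what a search backend would consume, which is why junior analysts can query as easily as senior staff: the translation layer absorbs the query-language complexity.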

 

Final Thoughts

This session demonstrated that task-driven AI agents are no longer futuristic—they’re practical tools reshaping how investigations, responses, and executive decisions happen in cybersecurity. By combining automation, natural language interfaces, and explainable intelligence, platforms like Exabeam Nova bridge the gap between analyst workloads and CISO strategy.

 

Read more…

In today’s rapidly evolving threat landscape, human risk remains one of the most critical challenges for CISOs. While technology defenses are essential, employee behaviors often define the difference between a contained incident and a costly breach. This AI Demo Talk explored how AI is reshaping human risk management by bringing automation, personalization, and real-time intervention into the security culture.

 

Key Highlights:

- Deepfake Vishing Agent: Demonstrating how we simulate realistic vishing attacks using cloned voices and AI personas to help employees identify and respond to deepfake social engineering threats.

- AI-Enabled Content Creation: Showcasing how we generate personalized training content aligned with each company’s policies, tone, and language using generative AI models.

- Real-Time Personalized Intervention: Walking through how we integrate with security tools (SIEM, EDR, IAM) to deliver in-the-moment coaching based on live alerts and user behavior.

 

About the Speaker:

- Uzair Ahmed Gilani (CTO, Right Hand Cybersecurity)

 

Listen To Live Chat (Recorded)

Featuring Uzair Ahmed Gilani (CTO, Right Hand Cybersecurity)

Executive Summary

  • Human vulnerabilities remain a top attack vector. To address them, security teams must move from reactive training to ongoing, contextual engagement.

  • This talk spotlighted three core areas:
      1. Deepfake vishing agents – using voice cloning and AI personas to simulate advanced social engineering attacks.
      2. AI-enabled content creation – auto-generating training that aligns with corporate policy, tone, and individual risk profiles.
      3. Real-time personalized intervention – linking with SIEM, EDR, IAM, etc. to deliver “in the moment” coaching nudges when risky behavior is detected.

  • The vision: turn alerts into teaching moments, reduce phishing click rates, and shift security culture toward continuous learning.

  • But the path is not without its challenges—data privacy, false positives, model bias, and user fatigue all must be managed.

 

Conversation Highlights

Deepfake Vishing Agents: Experiencing the Threat

One of the most striking demos was the deepfake vishing scenario. Uzair illustrated how the system can clone a leader’s voice and craft an AI persona to call employees, coaxing them into divulging sensitive information or performing actions. This “red team as a service” approach surfaces blind spots in verification protocols.

Key takeaways:

  • Even well-trained employees struggled to distinguish voice clones from genuine calls when context and conversational cues were realistic.

  • The exercise exposed the need for verification layers—call-back policies, secondary channels, or multimodal authentication.

  • Organizations should run periodic adversarial simulations (vishing, smishing, etc.), not just generic training, to build awareness of evolving threats.


AI-Enabled Personalized Training Content

Generic security modules often fall flat. Uzair explained how Right Hand Cybersecurity leverages generative models to produce training aligned to each company’s voice, terminology, policy structure, and risk posture.

Highlights:

  • Micro-modules generated automatically (e.g. 1–3 minute clips), tailored to user roles, prior performance, locale, and language.

  • Dynamic versioning to reflect policy updates or emergent threats (e.g. new phishing tactics).

  • Better engagement and retention due to customized relevance vs one-size-fits-all modules.


Real-Time Personalized Intervention: Coaching at the Point of Risk

Perhaps the most compelling component was the system’s integration with security infrastructure. When an alert triggers—say a risky app installation or suspicious login—the platform can automatically deliver feedback or guidance to the user (via email, Slack, Teams, etc.).

Key insights:

  • This approach turns alerts into teachable moments rather than just logs.

  • The interventions are contextual: referencing the specific behavior (e.g. “We saw you installed software from an unknown vendor—here’s why that might be risky”).

  • There’s a feedback loop: user responses and behavior changes feed back into the model to reduce false positives and make the coaching smarter over time.
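
The alert-to-nudge flow described above can be sketched as a simple mapping from alert type to a contextual message. This is an illustrative sketch, not Right Hand Cybersecurity's actual schema: the alert fields and message templates are hypothetical, and delivery (email, Slack, Teams) would be a separate integration.

```python
def coaching_message(alert: dict) -> str:
    """Map a security alert to an in-the-moment coaching nudge.

    Hypothetical example: the alert 'type' values and templates below are
    invented to show the pattern of turning an alert into a teachable moment.
    """
    templates = {
        "unapproved_install": (
            "We saw you installed software from an unknown vendor ({detail}). "
            "Unvetted software can carry malware; here is how to request an "
            "approved alternative."
        ),
        "suspicious_login": (
            "A sign-in from an unusual location ({detail}) was detected on "
            "your account. If this was not you, reset your password now."
        ),
    }
    default = "Heads up: we noticed unusual activity on your account ({detail})."
    template = templates.get(alert.get("type"), default)
    return template.format(detail=alert.get("detail", "no details available"))

# A risky-install alert becomes a personalized, behavior-specific nudge:
msg = coaching_message({"type": "unapproved_install", "detail": "FreeVideoTool.exe"})
```

In a real deployment the user's response to the nudge would be logged and fed back into the model, closing the feedback loop described above.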

 

Final Thoughts

Traditional awareness training is no longer sufficient. As attackers adopt AI-powered deception, defense must evolve. The future of human risk management lies at the intersection of simulation, personalization, and in-time intervention. This session made a compelling case: when security touches the human moment—in context and with relevance—behavioral risk can be managed much more effectively.

For CISOs and security leaders, the ask is clear: pilot human risk AI, measure its efficacy, and adopt iteratively. The human layer is the last frontier—AI just might be the tool to bring it under control.

Read more…

We’re excited to bring you an insightful AI Demo Talk on "Building Trust in AI-Driven Interactions: Securing Agentic AI with Trusted Identity and Privacy-First Biometrics" with Nadav Stern (Head of Engineering, Anonybit) & Jeremiah Mason (Chief Product Officer, Anonybit).

In this session, we’ll explore how to build trust in AI-driven interactions by securing agentic AI with trusted identity and privacy-first biometrics. See practical demonstrations of verifying true identity to ensure every AI-initiated request comes from a legitimate human or entity, reducing risk of misuse and fraud. We’ll dive into privacy-first biometric methods that protect sensitive data while enabling secure access to AI agents, and show how trusted identity can be seamlessly embedded into AI-driven workflows without disrupting user experience. Join us to understand how these techniques strengthen security, trust, and control in AI-powered systems.

 

Key Discussion Points:

  • Verifying True Identity: How to confirm the real human or entity behind AI-initiated requests to prevent misuse and fraud. 

  • Privacy-First Biometrics: Why biometrics with built-in privacy safeguards are essential to secure access to AI agents and control their actions. 

  • Seamless Trusted Identity: Practical ways to embed trusted identity into AI-driven workflows without creating friction for users.

 

Date: September 25, 2025 (Thursday)
Time: 9:00 AM PST | 12:00 PM EST | 9:30 PM IST

Join us live or register to receive the session recording if the timing doesn’t suit your timezone.

>> Register Here

Read more…

We’re excited to bring you an insightful AI Demo Talk on "Task driven agents for investigation, response, analysis and more!" with Steve Povolny (Senior Director, Security Research & Competitive Intelligence, Exabeam).

In this session, we’ll take a deep dive into Exabeam Nova, exploring its task-driven AI agents in action. See how security teams can accelerate investigations, automate responses, and perform advanced threat analysis. We’ll demonstrate conversational agents for detection-focused workflows, CISO-level advisory insights, and NLP-powered orchestration that translates natural language into complex queries and real-time visualizations. Join us to see how Nova enables faster, more precise, and intelligence-driven SOC operations from a technical perspective.

Key Discussion Points:

  1. AI-Driven Investigations: Live demo of a conversational agent performing detection-specific analysis.

  2. CISO-Level Advisor: Showcasing an agent that delivers strategic insights and security posture analysis.

  3. NLP-Powered Orchestration: Demonstrating natural language queries to run complex searches and generate visualizations in seconds.

 

Date: September 23, 2025 (Tuesday)
Time: 9:00 AM PST | 12:00 PM EST | 9:30 PM IST

Join us live or register to receive the session recording if the timing doesn’t suit your timezone.

>> Register Here

Read more…

Palo Alto, Calif., July 29, 2025, CyberNewswire — Despite the expanding use of browser extensions, the majority of enterprises and individuals still rely on labels such as “Verified” and “Chrome Featured” provided by extension stores as a security indicator.

The recent Geco Colorpick case exemplifies how these certifications provide nothing more than a false sense of security – Koi Research [1] disclosed 18 malicious extensions that distributed spyware to 2.3M users, with most bearing the well-trusted “Verified” status.

SquareX researchers disclosed the technological reason behind this vulnerability, highlighting an architectural flaw in Browser DevTools that prevents browser vendors and enterprises from performing the thorough security analysis many enterprises expect.


“Aside from the fact that thousands of extension updates and submissions are being made daily, it is simply impossible for browser vendors to monitor and assess an extension’s security posture at runtime,” says Nishant Sharma, Head of Security Research at SquareX. “This is because existing DevTools were designed to inspect web pages. Extensions are complex beasts that can behave dynamically, work across multiple tabs, and have ‘superpowers’ that allow them to easily bypass detection via rudimentary Browser DevTool telemetry.”

In other words, even if browser vendors were not inundated by the sheer quantity of extension submission requests, the architectural limitations of Browser DevTools today would still allow numerous malicious extensions to pass DevTool based security inspections.

Browser DevTools were introduced in the late 2000s, long pre-dating widespread extension adoption. These tools were invented to help users and web developers debug websites and inspect web page elements. However, browser extensions have unique capabilities, such as modifying pages, taking screenshots, and injecting scripts into multiple web pages, that cannot be easily monitored or attributed by Browser DevTools.

For example, an extension may make a network request through a web page by injecting a script into the page. With Browser DevTools, there is no way to differentiate network requests made by the web page itself and those by an extension.

Detailed in the technical blog, SquareX’s researchers propose a novel approach that uses the combination of a modified browser and Browser AI Agents to plug this gap. The modified browser exposes critical telemetry required to understand an extension’s true behavior, while the Browser AI Agent simulates different user personas to incite various extension behaviors at runtime for monitoring and security analysis.

This not only allows dynamic analysis of the extension, but also the discovery of “hidden” extension behaviors that are triggered only by time, specific user actions, or device environments. The research names this approach the Extension Monitoring Sandbox and details the modifications required for the browser.
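
The control loop behind such a sandbox might look like the sketch below. This is purely illustrative: SquareX's actual Extension Monitoring Sandbox internals are not public, and the persona names, action labels, and telemetry shape here are all invented.

```python
# Hypothetical control loop for runtime extension analysis: drive the
# extension with several AI-agent personas, collect telemetry from a
# modified browser, and flag suspicious actions per persona.

PERSONAS = ["banking_user", "developer", "casual_shopper"]
SUSPICIOUS = {"inject_script", "capture_screenshot", "exfil_request"}

def analyze(extension_id: str, run_persona) -> dict:
    """Run each persona against the extension and report flagged behaviors.

    run_persona stands in for the modified browser: it drives a browsing
    session as the given persona and returns a list of telemetry events.
    """
    findings = {}
    for persona in PERSONAS:
        events = run_persona(extension_id, persona)
        flagged = sorted({e["action"] for e in events} & SUSPICIOUS)
        if flagged:
            findings[persona] = flagged
    return findings

# Stubbed telemetry source: the hidden behavior only fires for one persona,
# which is exactly the kind of conditional trigger the approach aims to catch.
def fake_run(ext, persona):
    if persona == "banking_user":
        return [{"action": "inject_script"}, {"action": "exfil_request"}]
    return [{"action": "read_dom"}]

report = analyze("geco-colorpick", fake_run)
```

The persona loop is the key design choice: behaviors gated on user context never surface under a single static scan, so the sandbox must vary who the "user" appears to be.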

The revelation of Browser DevTools’ architectural limitations exposes a fundamental security gap that has led to millions of users being compromised. As browser extensions become a core part of the enterprise workflow, it is critical for enterprises to move from superficial labels to solutions specifically designed to tackle extension security. It is absolutely critical for browser vendors, enterprises and security vendors to work closely together in tackling what has become one of the fastest emerging threat vectors.

This August, SquareX is offering a free enterprise-wide extension audit. The audit covers all extensions installed across the organization using all three components of the SquareX Extension Analysis Framework – metadata analysis, static code analysis, and dynamic analysis with the Extension Monitoring Sandbox – providing a full analysis of the organization’s extension risk exposure and a risk score for each extension.

About SquareX: SquareX’s browser extension transforms any browser on any device into an enterprise-grade secure browser. SquareX’s industry-first Browser Detection and Response (BDR) solution empowers organizations to proactively detect, mitigate, and threat-hunt client-side web attacks including malicious browser extensions, advanced spearphishing, browser-native ransomware, GenAI data loss prevention, and more.

Unlike legacy security approaches and cumbersome enterprise browsers, SquareX seamlessly integrates with users’ existing consumer browsers, ensuring enhanced security without compromising user experience or productivity. By delivering unparalleled visibility and control directly within the browser, SquareX enables security leaders to reduce their attack surface, gain actionable intelligence, and strengthen their enterprise cybersecurity posture against the newest threat vector – the browser.

More information available at: sqrx.com

Reference: [1] http://www.bleepingcomputer.com/news/security/malicious-chrome-extensions-with-17m-installs-found-on-web-store/

 Media contact: Junice Liew, Head of PR, SquareX, junice@sqrx.com

Editor’s note: This press release was provided by CyberNewswire as part of its press release syndication service. The views and claims expressed belong to the issuing organization.

 

By Byron Acohido (Pulitzer Prize-Winning Business Journalist)

Original Link to the Blog: Click Here

Read more…

Google’s vulnerability finding team is again pushing the envelope of responsible disclosure:

Google’s Project Zero team will retain its existing 90+30 policy regarding vulnerability disclosures, in which it provides vendors with 90 days before full disclosure takes place, with a 30-day period allowed for patch adoption if the bug is fixed before the deadline.

However, as of July 29, Project Zero will also release limited details about any discovery they make within one week of vendor disclosure. This information will encompass:

  • The vendor or open-source project that received the report
  • The affected product
  • The date the report was filed and when the 90-day disclosure deadline expires

I have mixed feelings about this. On the one hand, I like that it puts more pressure on vendors to patch quickly. On the other hand, if no indication is provided regarding how severe a vulnerability is, it could easily cause unnecessary panic.

The problem is that Google is not a neutral vulnerability hunting party. To the extent that it finds, publishes, and reduces confidence in competitors’ products, Google benefits as a company.

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here

Read more…

Security teams can no longer afford to wait for alerts — not when cyberattacks unfold in milliseconds.

That’s the core warning from Fortinet’s Derek Manky in a new Last Watchdog Strategic Reel recorded at RSAC 2025. As adversaries adopt AI-driven tooling, defenders must rethink automation, exposure management, and the role of human analysts.

“We saw a 39% increase in CVEs last year — over 40,000,” Manky said. “Traditional automation can’t keep up. Attackers are scripting their next move before the first alert even fires.”

Manky describes how Fortinet is responding with simulation-based defense: continuous threat exposure testing, purple teaming, and the use of attacker playbooks — spoofed back at speed — to preempt compromise.

The result, he argues, is a more agile defense posture that relies less on headcount and more on adaptive technologies that amplify human insight.

“Tech that helps analysts move faster — that’s the path forward,” he said.

The full Strategic Reel captures these insights in a sharp 60-second format, part of Last Watchdog’s ongoing series spotlighting forward-leaning cybersecurity leadership.

 

Pulitzer Prize-winning business journalist Byron V. Acohido is dedicated to fostering public awareness about how to make the Internet as private and secure as it ought to be.

(LW provides consulting services to the vendors we cover.)

 

By Byron Acohido (Pulitzer Prize-Winning Business Journalist)

Original Link to the Blog: Click Here

 

Read more…

Newark, NJ, Aug. 4, 2025, CyberNewswire—Early Bird registration is now available for the inaugural OpenSSL Conference, scheduled for October 7–9, 2025, in Prague. The event will bring together leading voices in cryptography, secure systems, and open-source infrastructure. Early registrants can save up to $240 per ticket.

Registration Information

Registration packages are designed to reflect the diversity of the OpenSSL Communities, offering options for both essential access and extended participation. Seating is limited, and early registration, available until August 31, is strongly encouraged.

Attendees can REGISTER NOW

Featured Speakers & Programme Tracks

Confirmed speakers include Daniel J. Bernstein, Research Professor, University of Illinois Chicago; Matt Caswell, President, OpenSSL Foundation; Tanja Lange, Professor, Eindhoven University of Technology; Tim Hudson, President, OpenSSL Corporation, and many more.

Participants will engage across four focused program tracks:

• Business Value and Enterprise Adoption

• Technical Deep Dive and Innovation

• Security, Compliance, and the Law

• Community, Contribution, and the Future

Sponsorship Opportunities

Organisations committed to advancing digital security are invited to explore sponsorship opportunities for the OpenSSL Conference. Sponsors benefit from high-impact visibility, meaningful engagement with technical and policy leaders, and a direct connection to the global cryptographic community.

Sponsorship helps support community participation, collaborative innovation, and the continued development of open, secure infrastructure.

Scholarships Provided by the OpenSSL Foundation 

The OpenSSL Foundation is offering a limited number of scholarships to support individuals attending the inaugural OpenSSL Conference in Prague. Read more and apply now.

“The OpenSSL Library is critical infrastructure, and the OpenSSL Conference reflects our commitment to transparency, accountability, and community-driven direction. Come and meet our team in-person and engage with like-minded community members in influencing how we deliver on the OpenSSL Mission. Users, developers, managers, lawyers, policy makers, researchers – there is content for everyone!” – Tim Hudson, President, OpenSSL Corporation

“The OpenSSL Conference represents a unique opportunity for cryptography and secure communications experts, developers and enthusiasts to gather together and hear talks and discussions on a broad range of topics. I am delighted by the high quality of the sessions that have been submitted to us and the fascinating agenda that has been put together.”– Matt Caswell, President, OpenSSL Foundation

Contacting & Staying Informed

For questions, attendees may reach out via email at info@openssl-conference.org or schedule a meeting with the OpenSSL Conference team. More information can be found on the official OpenSSL Conference website OpenSSL Conference.

About The OpenSSL Corporation: The OpenSSL Corporation is a global leader in cryptographic solutions, specializing in developing and maintaining the OpenSSL Library – an essential tool for secure digital communications. The OpenSSL Corporation provides a range of services tailored to assist businesses of all sizes to ensure the secure and efficient implementation of OpenSSL solutions. The OpenSSL Corporation also supports projects aligned with its Mission and Values by providing infrastructure, resources, expert advice, and engagement through advisory committees, particularly in the commercial sector. Collaboration among these projects fosters innovation, enhances security standards, and effectively addresses common challenges, benefiting all our communities.

Media contact: Hana Andersen, MarCon, OpenSSL Software Services, info@openssl-conference.org

Editor’s note: This press release was provided by CyberNewswire as part of its press release syndication service. The views and claims expressed belong to the issuing organization.

 

By Byron Acohido (Pulitzer Prize-Winning Business Journalist)

Original Link to the Blog: Click Here

Read more…

An Arizona woman was sentenced to eight-and-a-half years in prison for her role in helping North Korean workers infiltrate US companies by pretending to be US workers.

From an article:

According to court documents, Chapman hosted the North Korean IT workers’ computers in her own home between October 2020 and October 2023, creating a so-called “laptop farm” which was used to make it appear as though the devices were located in the United States.

The North Koreans were hired as remote software and application developers with multiple Fortune 500 companies, including an aerospace and defense company, a major television network, a Silicon Valley technology company, and a high-profile company.

As a result of this scheme, they collected over $17 million in illicit revenue paid for their work, which was shared with Chapman, who processed their paychecks through her financial accounts.

“Chapman operated a ‘laptop farm’ where she received and hosted computers from the U.S. companies at her home, so that the companies would believe the workers were in the United States,” the Justice Department said on Thursday.

“Chapman also shipped 49 laptops and other devices supplied by U.S. companies to locations overseas, including multiple shipments to a city in China on the border with North Korea. More than 90 laptops were seized from Chapman’s home following the execution of a search warrant in October 2023.”

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here

Read more…

London, Aug. 11, 2025, CyberNewswire—A survey of 80 North American MSPs shows fragmented security stacks drive fatigue, missed threats, and business inefficiency


Security tools meant to protect managed service providers are instead overwhelming them.

A new study from Heimdal and FutureSafe reveals that 89% of MSPs struggle with tool integration while 56% experience alert fatigue daily or weekly.

The research exposes a dangerous paradox. MSPs experiencing high alert fatigue are significantly more likely to miss real threats.

The very tools deployed to enhance security are creating blind spots through exhaustion.

 

Scale of the problem

The average MSP now runs five security tools, with 20% juggling seven to ten and 12% managing more than ten.


Only 11% report seamless integration. The remaining 89% must flip between separate dashboards and waste time on manual workflows.

One in four security alerts proves meaningless, with some MSPs reporting that 70% of their alerts are false alarms.

Among MSPs managing 1,000+ clients, 100% report daily fatigue.


“MSPs are drowning in complexity, not from threats, but from the tools meant to stop them,” said Jesper Frederiksen, CEO at Heimdal. “Every new point solution adds another agent, console, and alert stream. That noise exhausts people and quietly degrades protection.”

 

Beyond security ops

Agent fatigue extends beyond alert management. Disconnected platforms slow billing processes, complicate client onboarding, and create compliance reporting headaches.


“Agent fatigue isn’t just a tech issue. It’s a business risk,” said Jason Whitehurst, CEO at FutureSafe. “MSPs are juggling tool after tool, but they don’t work together.”

 

Solution in plain sight

Despite widespread recognition of the problem, only 20% of MSPs have consolidated their security solutions. Those who have done so report fewer alerts, faster response times, and happier staff.

 

Key survey findings

• 56% experience alert fatigue daily or weekly, 75% at least monthly

• Only 11% enjoy seamless tool connectivity

• MSPs using 7+ tools report nearly double the fatigue levels

• High false positive rates triple the chance of missing genuine incidents

• The 20% who consolidate report better outcomes across all metrics

 

Research methodology: The State of MSP Agent Fatigue 2025 surveyed 80 North American MSPs in H1 2025, combining quantitative analysis with thematic coding of over 300 free-text responses. Users can download the complete report free at: heimdalsecurity.com/msp-agent-fatigue-report  

About Heimdal: Established in Copenhagen in 2014, Heimdal empowers security teams and MSPs through unified cybersecurity solutions spanning endpoint to network security, including vulnerability management, threat prevention, and ransomware mitigation. 

About FutureSafe: FutureSafe is the exclusive provider of Heimdal in the United States, helping MSPs cut through tool sprawl and deliver consolidated cybersecurity.

Media contact: Danny Mitchell, Head of Content & PR, Heimdal Security, dmi@heimdalsecurity.com

Editor’s note: This press release was provided by CyberNewswire as part of its press release syndication service. The views and claims expressed belong to the issuing organization.

 

By Byron Acohido (Pulitzer Prize-Winning Business Journalist)

Original Link to the Blog: Click Here

Read more…