Biswajit Banerjee's Posts (210)

Microsoft today released updates to fix at least 137 security vulnerabilities in its Windows operating systems and supported software. None of the weaknesses addressed this month are known to be actively exploited, but 14 of the flaws earned Microsoft’s most-dire “critical” rating, meaning they could be exploited to seize control over vulnerable Windows PCs with little or no help from users.

While not listed as critical, CVE-2025-49719 is a publicly disclosed information disclosure vulnerability, with all versions as far back as SQL Server 2016 receiving patches. Microsoft rates CVE-2025-49719 as less likely to be exploited, but the availability of proof-of-concept code for this flaw means its patch should probably be a priority for affected enterprises.

Mike Walters, co-founder of Action1, said CVE-2025-49719 can be exploited without authentication, and that many third-party applications depend on SQL Server and the affected drivers — potentially introducing a supply-chain risk that extends beyond direct SQL Server users.

“The potential exposure of sensitive information makes this a high-priority concern for organizations handling valuable or regulated data,” Walters said. “The comprehensive nature of the affected versions, spanning multiple SQL Server releases from 2016 through 2022, indicates a fundamental issue in how SQL Server handles memory management and input validation.”

Adam Barnett at Rapid7 notes that today is the end of the road for SQL Server 2012, meaning there will be no future security patches even for critical vulnerabilities, even if you’re willing to pay Microsoft for the privilege.

Barnett also called attention to CVE-2025-47981, a vulnerability with a CVSS score of 9.8 (10 being the worst), a remote code execution bug in the way Windows servers and clients negotiate to discover mutually supported authentication mechanisms. This pre-authentication vulnerability affects any Windows client machine running Windows 10 1607 or above, and all current versions of Windows Server. Microsoft considers it more likely that attackers will exploit this flaw.

Microsoft also patched at least four critical, remote code execution flaws in Office (CVE-2025-49695, CVE-2025-49696, CVE-2025-49697, and CVE-2025-49702). The first two are both rated by Microsoft as having a higher likelihood of exploitation, do not require user interaction, and can be triggered through the Preview Pane.

Two more high severity bugs include CVE-2025-49740 (CVSS 8.8) and CVE-2025-47178 (CVSS 8.0); the former is a weakness that could allow malicious files to bypass screening by Microsoft Defender SmartScreen, a built-in feature of Windows that tries to block untrusted downloads and malicious sites.

CVE-2025-47178 involves a remote code execution flaw in Microsoft Configuration Manager, an enterprise tool for managing, deploying, and securing computers, servers, and devices across a network. Ben Hopkins at Immersive said this bug requires very low privileges to exploit, and that it is possible for a user or attacker with a read-only access role to exploit it.

“Exploiting this vulnerability allows an attacker to execute arbitrary SQL queries as the privileged SMS service account in Microsoft Configuration Manager,” Hopkins said. “This access can be used to manipulate deployments, push malicious software or scripts to all managed devices, alter configurations, steal sensitive data, and potentially escalate to full operating system code execution across the enterprise, giving the attacker broad control over the entire IT environment.”

Separately, Adobe has released security updates for a broad range of software, including After Effects, Adobe Audition, Illustrator, FrameMaker, and ColdFusion.

The SANS Internet Storm Center has a breakdown of each individual patch, indexed by severity. If you’re responsible for administering a number of Windows systems, it may be worth keeping an eye on AskWoody for the lowdown on any potentially wonky updates (considering the large number of vulnerabilities and Windows components addressed this month).

If you’re a Windows home user, please consider backing up your data and/or drive before installing any patches, and drop a note in the comments if you encounter any problems with these updates.

 

By: Brian Krebs (Investigative Journalist, Award Winning Author)

Original link to the blog: Click Here

Read more…
Building Real World Zero Trust

In cybersecurity’s early days, we built defenses like medieval castles: big walls (firewalls), a drawbridge (VPNs), and guards at the gates (passwords). Once someone was inside, they could roam freely. But today’s world looks nothing like that. Work happens everywhere, data lives in the cloud, and attackers are more creative than ever. That old fortress model? It doesn’t hold up.

Welcome to the era of Zero Trust Architecture (ZTA), where the question is no longer whether someone is already inside, but what they’re doing and whether they still belong. Zero Trust flips the script: no one is automatically trusted, no matter where they’re coming from or what credentials they used five minutes ago.

And now, with NIST’s SP 1800-35, organizations finally have something they’ve long needed: practical, tested, vendor-neutral blueprints to actually implement Zero Trust, not just talk about it.

 

Why Zero Trust Is Now a Must

At its core, Zero Trust means “never trust, always verify.” Every user, device, application, and service must continuously prove who they are and why they need access — and that proof must stand up to scrutiny every time they make a move.

Think of it like airport security. Just because you passed one checkpoint doesn’t mean you get unrestricted access to every gate, lounge, or runway. You’re constantly monitored, and access is granted only when necessary, with strict controls.

Here’s why this model matters more than ever:

  • Lateral movement is the real danger. Once attackers break in — often through phishing or stolen credentials — they can move freely. Zero Trust shrinks that “blast radius.”
  • Work happens everywhere. Hybrid work, mobile devices, and cloud apps have shattered the idea of a network perimeter. Zero Trust fits this world.
  • Threats evolve fast. Static defenses don’t cut it anymore. Zero Trust is adaptive, dynamic, and policy-driven.

From Theory to Practice: NIST SP 1800-35

NIST’s Special Publication 1800-35, titled “Implementing a Zero Trust Architecture,” is a major milestone. Built over four years by the NIST National Cybersecurity Center of Excellence (NCCoE) and 24 industry partners, it moves beyond frameworks and buzzwords to provide 19 tested Zero Trust examples using real technologies that you can actually buy and use today.

As NIST researcher Alper Kerman puts it:

“Every Zero Trust architecture is a custom build. It’s not always easy to find experts who can get you there.”

That’s why this guide is so valuable — it shows how to do it, step-by-step, using a range of commercial tools and configurations.

 

Key Contributions from NIST SP 1800-35:

  1. Detailed Blueprints: From securing sensitive finance apps to multi-cloud environments, the examples cover real-world scenarios. They include things like:
    • Identity integration with Okta or Azure AD
    • Micro-segmentation with Policy Enforcement Points (PEPs)
    • Conditional access policies based on device posture and behavior
  2. No Vendor Lock-in: While using commercial tools, the guidance is vendor-agnostic. It focuses on capabilities, not brand names.
  3. Testing and Lessons Learned: Each implementation was tested and documented, with real performance findings, configuration pitfalls, and tuning tips. It’s like having a peer-reviewed playbook for your Zero Trust rollout.


A Practical Zero Trust Journey – How to Begin:

Implementing Zero Trust isn’t a one-time project. It’s a strategic journey, much like improving fitness — you don’t do it in a day, and the results build over time.

Step 1: Discover Your Environment

Start by identifying everything:

  • Devices (laptops, phones, servers)
  • Applications (cloud and on-prem)
  • Users and roles
  • Data locations and flows

Think of this as building a map before planning a road trip. Tools like CSPM, asset inventories, and traffic analytics can help.
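For a sense of what this discovery step can look like in code, here is a minimal sketch that consolidates asset records from multiple sources and flags devices seen by only one of them — the source names, fields, and `merge_inventories` helper are all hypothetical, purely for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Asset:
    """A single device, application, or data store discovered in the environment."""
    asset_id: str
    kind: str                                   # e.g. "laptop", "server", "saas-app"
    owner: str = "unknown"
    sources: set = field(default_factory=set)   # which inventories reported it

def merge_inventories(*inventories):
    """Merge asset lists from CMDB, CSPM, EDR, etc. into one de-duplicated map.

    Each inventory is a (source_name, list_of_assets) pair. Assets reported by
    only one source are often the "shadow IT" worth investigating first.
    """
    merged = {}
    for source_name, assets in inventories:
        for asset in assets:
            entry = merged.setdefault(asset.asset_id, asset)
            entry.sources.add(source_name)
    return merged

# Hypothetical usage: flag devices that only the network scanner has seen.
merged = merge_inventories(
    ("cmdb", [Asset("lap-001", "laptop", owner="alice")]),
    ("net-scan", [Asset("lap-001", "laptop"), Asset("cam-042", "ip-camera")]),
)
unmanaged = [a.asset_id for a in merged.values() if a.sources == {"net-scan"}]
print(unmanaged)   # -> ['cam-042']
```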

Step 2: Define Granular Access Policies

Move beyond basic Role-Based Access Control (RBAC). Consider:

  • Device health (e.g., is antivirus running?)
  • Behavior patterns (e.g., is the login typical for this user?)
  • Location and time (e.g., is this request from a trusted region and within business hours?)

An example: A system admin might only get access to production servers from a corporate-managed laptop, using biometrics, MFA, and real-time risk scoring.
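To make that example concrete, here is a minimal sketch of how such a conditional access check might be expressed. The `AccessRequest` fields, thresholds, and `is_access_allowed` logic are hypothetical illustrations of combining device health, behavior, and context — not any specific product’s policy engine:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class AccessRequest:
    user: str
    resource: str
    device_managed: bool       # corporate-managed laptop?
    antivirus_running: bool    # basic device-health signal
    mfa_passed: bool
    risk_score: float          # 0.0 (low) .. 1.0 (high), from a risk engine
    country: str
    timestamp: datetime

TRUSTED_COUNTRIES = {"US", "GB", "IN"}   # illustrative allow-list
BUSINESS_HOURS = range(8, 19)            # 08:00-18:59 local time

def is_access_allowed(req: AccessRequest) -> bool:
    """Deny by default; grant only when every signal checks out."""
    checks = [
        req.device_managed,
        req.antivirus_running,
        req.mfa_passed,
        req.risk_score < 0.5,
        req.country in TRUSTED_COUNTRIES,
        req.timestamp.hour in BUSINESS_HOURS,
    ]
    return all(checks)

# Hypothetical usage: an admin reaching a production server.
request = AccessRequest(
    user="sysadmin", resource="prod-db-01",
    device_managed=True, antivirus_running=True, mfa_passed=True,
    risk_score=0.2, country="GB", timestamp=datetime(2025, 7, 8, 10, 30),
)
print(is_access_allowed(request))   # -> True
```

The deny-by-default shape is the important part: a single failed signal (stale antivirus, anomalous location, elevated risk score) withholds access rather than merely logging it.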

 

Step 3: Assess What You Already Have

You don’t have to start from scratch. Many orgs already have:

  • Identity and Access Management (IAM) tools
  • Network segmentation
  • Endpoint protection and SIEM

Take stock. You might only need to connect the dots.

 

Step 4: Prioritize High-Risk Areas

Start small — secure crown jewels first:

  • Protect sensitive data
  • Segment dev and prod environments
  • Deploy policy controls at critical access points

Tools like NGFWs, CASBs, and micro-segmentation platforms are helpful here.

 

Step 5: Implement Core Zero Trust Components

These may include:

  • Strong MFA (e.g., FIDO2, biometrics)
  • Centralized Policy Decision Points (PDPs)
  • Continuous endpoint health checks
  • Modern EDR that feeds into SIEM/SOAR for real-time decisions

Think of your ZTA as an ecosystem — each part contributes to a bigger defense story.

 

Step 6: Verify, Test, Improve

Don’t assume it works — prove it.

  • Use red teams and pentesters to simulate attacks
  • Monitor with SIEM/SOAR
  • Automate responses where possible

Then repeat. ZTA is not static — it must adapt as threats and business needs evolve.

Trust and Transparency: The Heart of Zero Trust

Ironically, the name “Zero Trust” can sound cold and clinical. But its goal is to build trust — through verification, consistency, and transparency.

  • It’s not about paranoia. It’s about limiting exposure and making access decisions based on facts, not assumptions.
  • It’s not about locking people out. It’s about letting the right people in, the right way, at the right time.

And when you explain this clearly to business teams, boards, and even users, you gain allies — not resistance.

NIST SP 1800-35 is a game-changer. It brings Zero Trust down from the clouds and plants it firmly in reality. No more guesswork. No more vague promises.

You now have a tested set of blueprints to begin transforming your security architecture from a brittle castle wall into a smart, adaptive, policy-driven ecosystem.

The perimeter is gone — but with Zero Trust, control isn’t. It just lives closer to the user, the data, and the decision.

 

By: Dr. Erdal Ozkaya (Cybersecurity Advisor, Author, and Educator)

Original link to the blog: Click Here

Read more…

Lately, a lot of people have been asking me about what “triggers” threat modeling. The question confused me: you think about threats as part of any design decision! There are lots and lots of design decisions, ranging from tiny to enormous. For each, we ought to be asking: what are the pros and cons? That includes what can go wrong, and more: is it scalable? Performant? Maintainable? Secure? Private? How deep we go depends on the details of the feature.

Security departments will sometimes encode the questions of what can go wrong into questionnaires. Some of these are short, in the range of 3-5 questions. Others are ... longer. Sometimes much longer. These are designed to protect development teams from heavyweight threat modeling processes, often ones that involve consultation with a security team, and can take weeks or longer. (This dynamic seems to inform this story about Facebook using AI for privacy risks.)

There was a great talk at Blackhat '23 by Mrityunjay Gautam and Pavan Kolachoor, AI Assisted Decision Making of Security Review Needs for New Features, in which their LLM found lots of features that should have been reviewed, but were not. I’m not in a position to comment on what went on with those engineers whose features didn’t get tagged for review, and I’ll assume best intentions: they really didn’t feel their feature needed a review. Maybe the questions asked didn’t touch on the issues that triggered the LLM. Questionnaire design is always a tradeoff of length versus completeness, and so review triggers get left off. (This ... prompts thinking about LLM-driven review, but that’s a separate post.) Or maybe they felt the feature had some danger, but wasn’t worth a “full review.”

But all of this is dependent on a bad metaphor, which is thinking that threat modeling is a switch, or a thing which has a single definition or procedure. It’s much better to think of threat modeling as a volume dial: You should regularly adjust it to fit current needs.

This is easier when you have good separation of policy and procedure, and people who are able to make good decisions about the dial. If your policy requires a STRIDE-per-element approach, your threat modeling will be slower than if you require asking “what can go wrong” (in an appropriate way). Most companies understand some software is more critical, and as it gets more critical, awareness and concern over “what can go wrong” increases, as does quality assurance.

So if you have people who are avoiding threat modeling, consider the reasons. An inflexible process can be a key contributor, and lately I’ve been seeing ... a switch flip because of this metaphor.

 

By: Adam Shostack (Threat Modeling Expert and Author)

Original link to the blog: Click Here

Read more…
Benchmarking CISO Leadership Performance: A Strategic Guide for New CISOs

In today’s rapidly evolving cybersecurity landscape, Chief Information Security Officers (CISOs) are no longer confined to the role of mere technical guardians of digital assets. Instead, they have unequivocally emerged as strategic business leaders, integral to an organization’s resilience and growth. For individuals stepping into this multifaceted role, particularly those who are new to it, the transition can indeed be formidable. The sheer breadth of responsibilities, coupled with the relentless pace of cyber threats, demands a proactive and adaptable approach to leadership.

To navigate these challenges successfully and foster sustained excellence, new CISOs must embrace benchmarking as an indispensable tool for continuous improvement and leadership development. This isn’t about rigid comparison against external metrics alone, but rather a structured approach to self-assessment and strategic enhancement within their unique organizational context.

This comprehensive guide presents a step-by-step framework specifically tailored to empower new CISOs, enabling them to not only adapt but to truly excel across four critical and interconnected domains:

  • Service Delivery: Focusing on the efficiency, effectiveness, and customer-centricity of the cybersecurity services provided to the organization.
  • Functional Leadership: Emphasizing the CISO’s ability to strategically guide their team, foster talent, and influence security culture across the enterprise.
  • Scaled Governance: Pertaining to the establishment and widespread adoption of robust, risk-aligned security policies, standards, and oversight mechanisms.
  • Enterprise Responsiveness: Highlighting the organization’s agility in anticipating, reacting to, and recovering from cyber threats and evolving business demands.

By systematically applying the principles and actions outlined herein, new CISOs can establish a clear baseline for their performance, identify precise areas for growth, and cultivate the leadership excellence necessary to thrive in the complex world of modern cybersecurity.

To do so, we have these main topics:

  • I. Service Delivery Excellence
  • II. Functional Leadership Mastery
  • III. Scaled Governance Performance
  • IV. Enterprise Responsiveness & Adaptability
  • V. Personal Branding & Executive Presence
  • VI. Innovation, Foresight & Strategic Resilience
  • VII. Metrics, Measurement & Continuous Improvement
  • VIII. Financial Acumen & Resource Optimization

Each week, I will explore each of the above sections in detail, so let’s get started:

I. Service Delivery Excellence

Effective service delivery forms the bedrock of a robust cybersecurity program, ensuring that security is not merely a compliance checkbox but an intrinsic enabler of business operations. By optimizing how security services are delivered, CISOs can instill confidence across the enterprise, facilitate operational speed, and demonstrate tangible value. For new CISOs, mastering this domain is paramount to building credibility and fostering a security-conscious culture.

1. Incident Response Metrics: A Foundation for Resilience

Recommendation: Systematically track and continuously optimize incident detection, containment, and remediation times to enhance organizational resilience and minimize business disruption.

Extended Guidance for New CISOs:

As a new CISO, your immediate priority should be to gain a clear understanding of your organization’s current incident response capabilities. This begins with establishing a precise baseline. If historical incident data is scarce or unstructured, initiate a rigorous logging process for every security incident. This involves meticulously recording timestamps for each critical stage: detection, initial analysis, containment, eradication, recovery, and post-incident review. Categorize incidents by severity (e.g., critical, high, medium, low) to allow for nuanced analysis.

To facilitate this data collection, advocate for and deploy centralized logging and alerting platforms, such as Security Information and Event Management (SIEM) systems or Extended Detection and Response (XDR) solutions. These tools are invaluable for enhancing visibility across your IT environment and automating initial detection.

Once data collection is underway, use it to create intuitive dashboards that visually represent trends in Mean Time To Detect (MTTD), Mean Time To Contain (MTTC), and Mean Time To Remediate (MTTR). These metrics are crucial indicators of your team’s efficiency and the overall health of your incident response program.
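As a minimal sketch of that calculation (the incident records and field names below are invented for illustration), MTTD, MTTC, and MTTR can be derived directly from the per-stage timestamps logged for each incident:

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident log: one record per incident, with per-stage timestamps.
incidents = [
    {
        "occurred":  datetime(2025, 7, 1, 9, 0),
        "detected":  datetime(2025, 7, 1, 9, 45),
        "contained": datetime(2025, 7, 1, 11, 0),
        "resolved":  datetime(2025, 7, 2, 15, 0),
    },
    {
        "occurred":  datetime(2025, 7, 3, 14, 0),
        "detected":  datetime(2025, 7, 3, 14, 20),
        "contained": datetime(2025, 7, 3, 16, 0),
        "resolved":  datetime(2025, 7, 4, 10, 0),
    },
]

def hours_between(start, end):
    return (end - start).total_seconds() / 3600

# MTTC and MTTR are measured here from detection; measuring from occurrence
# is an equally valid convention — just pick one and keep it consistent.
mttd = mean(hours_between(i["occurred"], i["detected"]) for i in incidents)
mttc = mean(hours_between(i["detected"], i["contained"]) for i in incidents)
mttr = mean(hours_between(i["detected"], i["resolved"]) for i in incidents)

print(f"MTTD: {mttd:.1f}h  MTTC: {mttc:.1f}h  MTTR: {mttr:.1f}h")
```

Segmenting the same calculation by incident severity is what turns these averages into the trend lines worth putting on an executive dashboard.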

Crucially, schedule regular, perhaps weekly, meetings with your Incident Response team. These sessions should be dedicated to discussing recent incidents, analyzing anomalies in your metrics, and, most importantly, conducting thorough “lessons learned” reviews. Document these learnings diligently and immediately incorporate them into your existing incident response playbooks and procedures. This iterative process ensures continuous improvement, transforming each incident into a valuable learning opportunity that strengthens your organization’s defensive posture. Finally, translate these technical metrics into business-centric insights when communicating with executive leadership, emphasizing how faster response times directly reduce financial impact and protect reputation.


2. Vulnerability Management: Proactive Risk Reduction

Recommendation: Implement and enforce a robust vulnerability management program focused on the timely and prioritized remediation of critical security vulnerabilities.

Extended Guidance for New CISOs:

Begin your tenure by conducting a comprehensive vulnerability management maturity assessment. This internal audit will help you identify current gaps in your scanning cadence, prioritization mechanisms, and remediation workflows. Understand the current state of your asset inventory, as you cannot protect what you do not know.

Next, foster a strong partnership with your IT operations and development (DevOps) teams. Collaborate to jointly define and formally agree upon Service Level Agreements (SLAs) for patching and remediation, differentiating based on vulnerability severity (e.g., critical vulnerabilities remediated within 7 days, high within 30 days). This joint ownership is vital for success.

Establish a consistent and recurring cadence for vulnerability scans across all relevant assets (networks, applications, cloud infrastructure). Prioritize remediation efforts not just on the Common Vulnerability Scoring System (CVSS) score, but also on exploitability, asset criticality, and the potential business impact of a successful exploit.
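One way to illustrate that kind of prioritization is a simple composite score. The weighting, field names, and sample findings below are invented for illustration and are not a standard formula:

```python
from dataclasses import dataclass

@dataclass
class Finding:
    finding_id: str
    cvss: float              # 0.0 - 10.0 base score
    exploit_available: bool  # e.g. known exploited or public proof-of-concept
    asset_criticality: int   # 1 (low) - 5 (crown jewels), set by the business

def risk_score(f: Finding) -> float:
    """Composite score: CVSS scaled by exploitability and asset criticality.

    The weights are illustrative; the point is that a lower-CVSS flaw on a
    critical, exploitable asset can outrank a higher-CVSS flaw elsewhere.
    """
    exploit_factor = 1.5 if f.exploit_available else 1.0
    return f.cvss * exploit_factor * (f.asset_criticality / 5)

findings = [
    Finding("FND-001", cvss=9.8, exploit_available=False, asset_criticality=2),
    Finding("FND-002", cvss=7.5, exploit_available=True,  asset_criticality=5),
]

for f in sorted(findings, key=risk_score, reverse=True):
    print(f"{f.finding_id}: {risk_score(f):.1f}")
# FND-002 (7.5, exploitable, critical asset) ranks above FND-001 (9.8).
```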

Leverage metrics dashboards to provide transparent visibility into remediation performance. Highlight areas of improvement, identify persistent bottlenecks (e.g., specific teams, legacy systems), and track progress against agreed-upon SLAs. Regularly communicate successes—such as a significant reduction in critical vulnerabilities or a faster average patch cycle—to senior leadership. This not only demonstrates tangible progress but also reinforces the value of security investments and the efficiency of your team.


3. Security Service Request Fulfillment: Enabling Business Operations

Recommendation: Systematically optimize the intake, processing, and response times for all internal security service requests, enhancing operational fluidity and stakeholder satisfaction.

Extended Guidance for New CISOs:

To ensure security acts as an enabler, not a bottleneck, it’s essential to streamline how the security team responds to internal requests. Start by clearly defining and categorizing all types of security service requests. This might include access reviews, new application security assessments, third-party vendor security reviews, security configuration guidance, and more.

Implement a method to track request volumes and fulfillment times for each category. This data will provide invaluable insights into your team’s workload, identify peak periods, and highlight areas where efficiency gains are most needed. While a formal ticketing system is ideal, even a shared spreadsheet can be a starting point if resources are limited.
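Even a lightweight log can yield useful numbers. The sketch below (with hypothetical categories and records) turns a simple list of requests into per-category volume and average turnaround figures:

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Hypothetical request log: (category, opened, closed).
requests = [
    ("access-review",   datetime(2025, 7, 1, 9, 0),  datetime(2025, 7, 1, 17, 0)),
    ("access-review",   datetime(2025, 7, 2, 10, 0), datetime(2025, 7, 3, 12, 0)),
    ("vendor-review",   datetime(2025, 7, 1, 9, 0),  datetime(2025, 7, 8, 9, 0)),
    ("config-guidance", datetime(2025, 7, 4, 11, 0), datetime(2025, 7, 4, 15, 0)),
]

by_category = defaultdict(list)
for category, opened, closed in requests:
    by_category[category].append((closed - opened).total_seconds() / 3600)

for category, durations in sorted(by_category.items()):
    print(f"{category:15s} volume={len(durations):2d} "
          f"avg_turnaround={mean(durations):.1f}h")
```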

Ideally, implement or integrate a robust request management system (e.g., Jira, ServiceNow, or a dedicated GRC platform). Such systems provide a centralized intake point, enable workflow automation, facilitate clear communication, and offer reporting capabilities.

Crucially, identify frequent, low-complexity tasks and introduce automation wherever possible. This could involve automated responses to common queries, script-based configuration checks, or self-service portals for routine requests. By automating the mundane, your team can focus on more complex, high-value security challenges. Finally, share performance metrics (e.g., average response times, resolution rates) with your internal stakeholders. This transparency builds trust, manages expectations, and demonstrates your commitment to providing responsive and reliable security services.


4. Internal Customer Satisfaction: Cultivating Partnership

Recommendation: Proactively measure and continuously improve the perception of the security team among internal stakeholders, fostering a culture of collaboration and partnership.

Extended Guidance for New CISOs:

A CISO’s success is not solely measured by technical prowess but also by the security team’s ability to integrate seamlessly with, and be perceived as a valuable partner by, other business units. As a new CISO, make it a point to schedule regular, perhaps quarterly, check-ins with key department heads and business leaders. These should be informal, open discussions aimed at soliciting candid feedback on their interactions with the security team, identifying pain points, and understanding their evolving needs.

Supplement these direct conversations with simple, anonymous surveys distributed to a broader audience of internal “customers.” Focus on questions that assess ease of engagement, clarity of communication, perceived helpfulness, and the overall value provided by the security function.

Consider establishing regular “security office hours” or “ask-the-CISO” sessions. These informal drop-in opportunities provide a low-barrier entry point for business teams to ask questions, voice concerns, or seek guidance, further reinforcing the security team’s approachability and willingness to assist.

Crucially, actively seek out and present case studies that clearly demonstrate how security enabled a successful business outcome. This could be a new product launch secured efficiently, a critical project delivered on time due to proactive security engagement, or a successful audit result. Showcasing these wins helps shift the perception of security from a cost center to a value driver. Internally, foster a culture of empathy and partnership within your own security team. Encourage them to understand the business context of their work, communicate in clear, non-technical language, and approach interactions with a problem-solving mindset rather than a purely enforcement-driven one.


5. Process Walkthroughs & Optimization: Driving Efficiency and Consistency

Recommendation: Systematically streamline, standardize, and continuously refine core service delivery workflows to enhance efficiency, consistency, and scalability.

Extended Guidance for New CISOs:

To ensure your security operations are efficient and repeatable, select two to three core service delivery processes that have the highest impact or are most frequently executed (e.g., the incident handling process, the procedure for onboarding new applications, or the vulnerability remediation workflow).

For each chosen process, organize a collaborative walkthrough session with the team members directly involved. Document every single step, decision point, and hand-off in detail. This exercise often reveals hidden complexities and inefficiencies.

With the process mapped, critically identify redundant steps, unnecessary approvals, and manual tasks that consume significant time and are prone to human error. Brainstorm opportunities for automation, even if it’s through simple scripting or leveraging existing tools more effectively.

Utilize visual mapping tools such as Lucidchart, Miro, or even a whiteboard, to illustrate these workflows. Visualizing the process helps in identifying bottlenecks and communicating proposed changes clearly. Finally, understand that process optimization is not a one-time event. Regularly revisit and refine these workflows (e.g., quarterly or after major incidents/projects) to ensure they remain efficient, aligned with evolving business needs, and responsive to new threats. This commitment to continuous improvement is a hallmark of excellent service delivery.

 

By: Dr. Erdal Ozkaya (Cybersecurity Advisor, Author, and Educator)

Original link to the blog: Click Here

Read more…

The gambling firms Paddy Power and BetFair have suffered a data breach, after “an unauthorised third party” gained access to “limited betting account information” relating to up to 800,000 of their customers.

What was exposed? Usernames, email addresses, IP addresses.

However, parent company Flutter says “no passwords, ID documents or usable card or payment details were impacted”. The word “usable” might be doing some heavy lifting there; I wonder if some partial payment card details were exposed…

Email sent to affected customers of Paddy Power

 

An obvious threat is phishing attacks, targeting Betfair and Paddy Power customers – perhaps posing as messages from the companies, in an attempt to trick users into handing over more of their details. So be on your guard!

Flutter says it is carrying out a “full investigation” to understand the scale of the breach, and is working with external cybersecurity experts.

Readers with long memories will recall that this is not the first time that Paddy Power has suffered a data breach, although it appears to have been more proactive in informing its customers this time.

 

By: Graham Cluley (Cybercrime Researcher and Blogger)

Original link to the blog: Click Here

Read more…

In an era where AI tools are transforming software development, CISOs face a pressing challenge: how to harness the speed of AI code generation without compromising on security. In a compelling CISO Talk (Chennai Chapter) hosted by CISO Platform, Ramkumar Dilli, Chief Information Officer at Myridius, unpacked the critical risks posed by AI-generated code and shared real-world lessons on balancing innovation with secure software development practices.

 

Key Highlights:

  • AI Is Prediction, Not Understanding

  • Security Review Still Essential

  • Policies, Training & Tooling

 

About Speaker 

  • Ramkumar Dilli, Chief Information Officer at Myridius

 

Listen To Live Chat: (Recorded)

Featuring Ramkumar Dilli, Chief Information Officer at Myridius

 

Presentation

 

Executive Summary

  • AI Code Generation is Transforming Development
    Tools like GitHub Copilot and ChatGPT are dramatically accelerating software development by auto-generating functional code.

  • Security Blind Spots in AI-Generated Code
    While AI tools improve productivity, they don’t inherently understand security or compliance—leading to vulnerabilities such as SQL injection or use of outdated libraries.

  • Real Incidents Show Real Risks
    Ramkumar shared real-world examples, including a fintech breach and a product company data leak, where lack of AI governance caused serious damage.

  • Governing AI Tools Instead of Banning Them
    Organizations shouldn’t ban AI tools in panic. Instead, they should focus on clear policies, safe use cases, and practical developer training.

  • Blueprint for Responsible AI Usage
    The session offered a security-first approach for AI code usage—enforcing code reviews, integrating security scans, defining usage boundaries, and conducting regular training.

 

Conversation Highlights

  • Developers love AI tools for their speed and convenience, but this often leads to skipping manual reviews and assuming code is safe.

  • Case Study – Fintech Firm: AI-generated payment API code introduced SQL injection vulnerabilities due to poor string handling. Breach led to data exposure, audits, and reputational damage.

  • Case Study – Product Company: Developer pasted production logs into ChatGPT, violating data privacy. The company responded with policy updates, revoking AI access, and team-wide training.

  • Key Risks Identified:

    • Vulnerable code patterns (e.g., hardcoded secrets, lack of input sanitization)

    • Licensing/IP contamination from AI suggesting GPL-licensed code

    • Prompt injection attacks overriding safety checks

    • Sensitive data leakage from developers sharing internal logs or logic

  • Why banning AI isn’t the solution:
    Instead of banning tools like ChatGPT or Copilot, Ramkumar emphasized enabling safe usage via:

    • Clear AI usage policies

    • Practical developer training (e.g., safe prompt design, data redaction)

    • CI/CD integration of static/dynamic analysis, secret scanning, and license checks

  • Best Practices for Secure AI Use:

    • Mandatory peer reviews for AI-generated code

    • Developer awareness programs at least twice a year

    • Automated vulnerability scanning in pipelines

    • Regular policy reinforcement and usage monitoring

  • Governance Analogy:
    “We don’t ban cars because of accidents—we teach people to drive safely and wear seatbelts. Similarly, don’t ban AI—govern it.”

  • Future Outlook:

    • Emerging AI guardrails and secure code-generation frameworks

    • Continuous refinement of AI usage policies based on audits and incidents

 

Questions & Answers

Q1. How should a CISO approach creating policies and a governance framework around AI code generation tools?

Answer:
Policies should be based on organizational experience and existing compliance frameworks like ISMS, SOC 2, or the DPDP Act. There’s no one-size-fits-all template. CISOs should define usage steps clearly, document practices, and continuously improve them through audits and internal feedback. The key is turning policy into practice—not just documentation.

 

Q2. How can organizations assess the security risks of third-party AI models and APIs?

Answer:
This largely depends on tool choice and budget. Tools should be selected based on their capability to prevent breaches—like enhanced endpoint monitoring, DLP, and log monitoring. Ramkumar emphasized that while specifying a tool wasn't feasible, strengthening perimeter defenses and auditing AI usage is essential.

 

Q3. How can developers avoid blindly trusting AI-generated alerts or suggestions?

Answer:
By embedding secure practices into the CI/CD pipeline. DevSecOps must be active at every development stage. Developers should be aware that their actions are monitored and that there are policies guiding secure use. Practical, scenario-based training helps build this awareness.

 

Q4. Can organizations claim proprietary rights over code generated by AI tools like Copilot or ChatGPT?

Answer:
This remains a gray area. Ramkumar admitted this question requires further legal and policy exploration, especially with open-source licensing concerns. Organizations should err on the side of caution and review licensing implications with legal counsel.

 

Q5. How do cryptographic controls and zero trust models apply to AI tool use in development?

Answer:
Zero Trust should be applied at the endpoint level to monitor interactions with AI tools. Cryptographic encryption helps at the data level, but scanning for vulnerabilities must be integrated into CI/CD using tools like white-box testing. Maintaining a live knowledge base of gaps and fixes is also recommended.

 

Q6. How should organizations handle remote developers using AI tools?

Answer:
In hybrid environments, DLP, ZTNA (Zero Trust Network Access), and SASE (Secure Access Service Edge) implementations become critical. While it’s impossible to restrict personal AI tool usage fully, organizations can enforce controls via endpoint security, usage policies, and proactive audits.

 

Final Thoughts

Ramkumar Dilli wrapped up the session by reinforcing that AI tools are not to be feared—but governed. The key to secure adoption lies in:

  • Defining policies that clearly lay out what’s acceptable and what’s not

  • Training developers to recognize insecure patterns and avoid risky behaviors

  • Using automation and tooling to catch vulnerabilities early in the development cycle (a minimal sketch of one such check follows below)
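As one hedged illustration of that kind of automation — not a tool discussed in the session — a small pre-commit-style script can flag obvious hardcoded secrets before AI-generated (or human-written) code lands in the repository. The patterns shown are deliberately minimal:

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only; real secret scanners ship far larger rule sets.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                                    # AWS access key ID shape
    re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"][^'\"]{8,}['\"]"),  # inline credential assignment
]

def scan_file(path: Path) -> list[str]:
    """Return human-readable hits for any line matching a secret pattern."""
    hits = []
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for pattern in SECRET_PATTERNS:
            if pattern.search(line):
                hits.append(f"{path}:{lineno}: possible hardcoded secret")
    return hits

if __name__ == "__main__":
    findings = [hit for arg in sys.argv[1:] for hit in scan_file(Path(arg))]
    print("\n".join(findings))
    sys.exit(1 if findings else 0)   # non-zero exit blocks the commit or pipeline stage
```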

“AI brings real power—but also real risk. It’s up to CISOs and security leaders to enable innovation safely and responsibly.”

 

 

Read more…

Cary, NC, July 10, 2025, CyberNewswire — INE Security, a leading provider of cybersecurity education and cybersecurity certifications, today launched its significantly enhanced eMAPT (Mobile Application Penetration Testing) certification.

The updated certification delivers the industry’s most comprehensive and practical approach to mobile application security testing.

CSO Magazine recently recognized eMAPT among the Top 16 OffSec, pen-testing, and ethical hacking certifications for 2025, noting that the eMAPT certifications “offer hands-on training and up-to-date curricula, equipping offensive security professionals with their choice of specialized or broad skill credentialing.” The publication specifically highlighted eMAPT as the only certification to focus on mobile application penetration testing among all cybersecurity certifications reviewed.

“The enhanced eMAPT certification delivers exactly what pentester professionals need in today’s mobile security landscape,” said Dara Warn, CEO of INE Security. “The certification training focuses on sophisticated analysis techniques, runtime protection bypasses, and effective communication with development teams. With the enhanced eMAPT, we’ve built a certification that teaches practical skills while maintaining the technical rigor that advanced mobile security work demands.”

Mobile Security Skills Gap Threatens Organizations

Mobile applications handle financial transactions, healthcare data, and critical business operations, creating an exponentially expanded attack surface. Organizations need security professionals who can think like attackers while understanding the business context of their findings. The enhanced eMAPT certification produces professionals who deliver both technical expertise and clear communication, whether they explain SSL pinning bypasses to development teams or document OWASP MASVS compliance for executives.


Dual-Exam Format Validates Real-World Skills

The enhanced eMAPT certification features an innovative dual-exam approach that validates both conceptual understanding and practical application. This comprehensive assessment ensures certified professionals have the theoretical knowledge and hands-on abilities to secure mobile applications effectively in professional environments.

The enhanced certification delivers:

  • Comprehensive iOS and Android Coverage: Training now covers both major mobile platforms with equal depth and focus

  • Hands-on, Lab-Based Training: Candidates gain practical experience through real-world mobile application testing scenarios

  • Professional-Level Validation: Certification validates knowledge and skills required for professional mobile application penetration testing roles

  • Advanced Technical Skills: Curriculum includes mobile application fuzzing, reverse engineering, and malware analysis

  • Industry Framework Integration: Assessments map to OWASP MASVS, MTTG, and PTES methodologies

  • Business-Ready Communication: Training emphasizes vulnerability documentation and stakeholder reporting

Seven Critical Domains Target Real Security Challenges

The enhanced eMAPT certification covers seven essential knowledge domains that reflect actual penetration testing workflows:

  • Mobile Application Security Foundations (10%) – Core principles and architectural security concepts

  • Threat Modeling and Attacker Mindset (10%) – Structured assessment methodologies and threat analysis

  • Reconnaissance and Static Analysis (20%) – Advanced binary analysis and code inspection techniques

  • Dynamic Testing and Runtime Manipulation (20%) – Live app testing and security bypass methods

  • API and Backend Security Testing (15%) – Authentication, authorization, and API vulnerability assessment

  • Reverse Engineering & Code Deobfuscation (10%) – Binary analysis and custom tool development

  • Mobile Malware Analysis (10%) – APT campaigns and evasion technique analysis

  • Reporting and Communication (5%) – Documentation and stakeholder engagement


Target Audience Spans Multiple Security Disciplines

The enhanced eMAPT certification targets intermediate-level cybersecurity professionals across multiple specializations. Pentester professionals gain mobile-specific expertise to expand service offerings. Mobile application security analysts learn to recognize attack patterns and improve incident response. Developers building secure apps gain attacker perspectives to identify flaws during development. Red team operators master mobile attack vectors for comprehensive adversary simulation. Cybersecurity consultants develop hands-on skills for client guidance. Malware analysts acquire mobile-specific reverse engineering capabilities.

“The eMAPT establishes the gold standard for mobile application penetration testing certification,” said Warn. “While other mobile web application certifications cover some aspects, eMAPT addresses the specific needs of mobile application penetration testing with unmatched depth and practical focus. The certification covers advanced techniques like mobile malware analysis and custom deobfuscation tool development – skills that become increasingly valuable as mobile threats grow more sophisticated.”

Immediate Availability with Launch Promotion

The enhanced eMAPT certification is available immediately at https://checkout.ine.com. The corresponding learning path includes comprehensive training materials, hands-on lab environments, and access to an industry-leading mobile security testing tool. It is available with a Premium subscription. Through August 6, 2025, INE Security is offering special launch pricing for early adopters of the enhanced eMAPT certification.

About INE Security: INE Security is the award-winning premier provider of online networking and cybersecurity training and certification. Harnessing a powerful hands-on lab platform, cutting-edge technology, a global video distribution network, and world-class instructors, INE is the top training choice for Fortune 500 companies worldwide for cybersecurity training in business and for IT professionals looking to advance their careers. INE’s suite of learning paths offers an incomparable depth of expertise across cybersecurity and is committed to delivering advanced technical training while also lowering the barriers worldwide for those looking to enter and excel in an IT career.

Media contact: Kathryn Brown, Director of Global Strategic Communications and Events, INE Security, kbrown@ine.com

Editor’s note: This press release was provided by CyberNewswire as part of its press release syndication service. The views and claims expressed belong to the issuing organization.

 

By Byron Acohido (Pulitzer Prize-Winning Business Journalist)

Original Link to the Blog: Click Here

Read more…

Authorities in the United Kingdom this week arrested four people aged 17 to 20 in connection with recent data theft and extortion attacks against the retailers Marks & Spencer and Harrods, and the British food retailer Co-op Group. The breaches have been linked to a prolific but loosely-affiliated cybercrime group dubbed “Scattered Spider,” whose other recent victims include multiple airlines.

The U.K.’s National Crime Agency (NCA) declined to verify the names of those arrested, saying only that they included two males aged 19, another aged 17, and a 20-year-old female.

Scattered Spider is the name given to an English-speaking cybercrime group known for using social engineering tactics to break into companies and steal data for ransom, often impersonating employees or contractors to deceive IT help desks into granting access. The FBI warned last month that Scattered Spider had recently shifted to targeting companies in the retail and airline sectors.

KrebsOnSecurity has learned the identities of two of the suspects. Multiple sources close to the investigation said those arrested include Owen David Flowers, a U.K. man alleged to have been involved in the cyber intrusion and ransomware attack that shut down several MGM Casino properties in September 2023. Those same sources said the woman arrested is or recently was in a relationship with Flowers.

Sources told KrebsOnSecurity that Flowers, who allegedly went by the hacker handles “bo764,” “Holy,” and “Nazi,” was the group member who anonymously gave interviews to the media in the days after the MGM hack. His real name was omitted from a September 2024 story about the group because he was not yet charged in that incident.

The bigger fish arrested this week is 19-year-old Thalha Jubair, a U.K. man whose alleged exploits under various monikers have been well-documented in stories on this site. Jubair is believed to have used the nickname “Earth2Star,” which corresponds to a founding member of the cybercrime-focused Telegram channel “Star Fraud Chat.”

In 2023, KrebsOnSecurity published an investigation into the work of three different SIM-swapping groups that phished credentials from T-Mobile employees and used that access to offer a service whereby any T-Mobile phone number could be swapped to a new device. Star Chat was by far the most active and consequential of the three SIM-swapping groups, who collectively broke into T-Mobile’s network more than 100 times in the second half of 2022.

Jubair allegedly used the handles “Earth2Star” and “Star Ace,” and was a core member of a prolific SIM-swapping group operating in 2022. Star Ace posted this image to the Star Fraud chat channel on Telegram, and it lists various prices for SIM-swaps.

Sources tell KrebsOnSecurity that Jubair also was a core member of the LAPSUS$ cybercrime group that broke into dozens of technology companies in 2022, stealing source code and other internal data from tech giants including Microsoft, Nvidia, Okta, Rockstar Games, Samsung, T-Mobile, and Uber.

In April 2022, KrebsOnSecurity published internal chat records from LAPSUS$, and those chats indicated Jubair was using the nicknames Amtrak and Asyntax. At one point in the chats, Amtrak told the LAPSUS$ group leader not to share T-Mobile’s logo in images sent to the group because he’d been previously busted for SIM-swapping and his parents would suspect he was back at it again.

As shown in those chats, the leader of LAPSUS$ eventually decided to betray Amtrak by posting his real name, phone number, and other hacker handles into a public chat room on Telegram.

In March 2022, the leader of the LAPSUS$ data extortion group exposed Thalha Jubair’s name and hacker handles in a public chat room on Telegram.

That story about the leaked LAPSUS$ chats connected Amtrak/Asyntax/Jubair to the identity “Everlynn,” the founder of a cybercriminal service that sold fraudulent “emergency data requests” targeting the major social media and email providers. In such schemes, the hackers compromise email accounts tied to police departments and government agencies, and then send unauthorized demands for subscriber data while claiming the information being requested can’t wait for a court order because it relates to an urgent matter of life and death.

The roster of the now-defunct “Infinity Recursion” hacking team, from which some members of LAPSUS$ hail.

Sources say Jubair also used the nickname “Operator,” and that until recently he was the administrator of the Doxbin, a long-running and highly toxic online community that is used to “dox” or post deeply personal information on people. In May 2024, several popular cybercrime channels on Telegram ridiculed Operator after it was revealed that he’d staged his own kidnapping in a botched plan to throw off law enforcement investigators.

In November 2024, U.S. authorities charged five men aged 20 to 25 in connection with the Scattered Spider group, which has long relied on recruiting minors to carry out its most risky activities. Indeed, many of the group’s core members were recruited from online gaming platforms like Roblox and Minecraft in their early teens, and have been perfecting their social engineering tactics for years.

“There is a clear pattern that some of the most depraved threat actors first joined cybercrime gangs at an exceptionally young age,” said Allison Nixon, chief research officer at the New York-based security firm Unit 221B. “Cybercriminals arrested at 15 or younger need serious intervention and monitoring to prevent a years-long massive escalation.”

 

By: Brian Krebs (Investigative Journalist, Award Winning Author)

Original link to the blog: Click Here

Read more…
By Enrico Milanese

A few years ago, a casino was breached via a smart fish tank thermometer. Related: NIST’s IoT security standard

It’s a now-famous example of how a single overlooked IoT device can become an entry point for attackers — and a cautionary tale that still applies today.

The Internet of Things (IoT) is expanding at an extraordinary pace. Researchers project over 32.1 billion IoT devices worldwide by 2030 — more than double the 15.9 billion recorded in 2023. From connected vehicles to smart agriculture, businesses are scaling their deployments fast. But security, far too often, is an afterthought.

This gap has real consequences. One in three data breaches now involves an IoT device. That’s because attackers know these endpoints are often poorly secured, rarely monitored, and easy to exploit. The time has come for enterprises to treat IoT risk not as an infrastructure footnote, but as a central pillar of resilience.

 

Today’s IoT security gaps

IoT devices are often designed for utility, not defense. Many ship with default passwords, unpatched firmware, or weak communication protocols. Palo Alto researchers recently found that 98% of IoT device traffic remains unencrypted. That makes these devices — from smart cameras and medical sensors to HVAC controllers and vehicle modules — easy targets for lateral movement.

Even more dangerous is the growing threat of “shadow IoT”: unauthorized or unmanaged devices connecting to enterprise networks without proper oversight. The result? A swelling attack surface with very few guardrails.

Organizations need to shift from reactive security toward proactive control. An IoT cloud management platform can help. These platforms enable centralized patching, configuration control, and real-time monitoring — offering a scalable way to protect growing fleets of devices.

 

Not all modules created equal

One often overlooked security anchor in any IoT deployment is the module — the component that connects devices to cellular or other wide-area networks. It handles data exchange, enables cloud communication, and often performs edge-level processing.

But not all modules are created equal. Some vendors rush products to market with poorly vetted software, proprietary systems, or unverified components. Others fail to support long-term security updates, leaving customers with devices that degrade in safety over time.

When choosing a module vendor, enterprises should prioritize those with proven track records — providers who embed secure-by-design principles and follow universal security frameworks. They should support operational resilience while also helping customers meet compliance obligations under frameworks like the EU’s Radio Equipment Directive and the forthcoming Cyber Resilience Act.

 

Innovation vs. resilience

Balancing innovation speed with robust security is a constant challenge. But in the IoT era, it’s no longer optional.

Every new device adds opportunity — and risk. Enterprises that embed security from the module level up, that evaluate their vendors critically, and that treat visibility and patchability as first principles, will not only reduce their exposure — they’ll position themselves for long-term resilience.

The key is to scale with clarity. With the right strategy and trusted partners, IoT innovation doesn’t have to come at the expense of control.

About the essayist: Enrico Milanese is Head of Product Security, Telit Cinterion, a global provider of secure IoT modules, connectivity, and edge solutions.

 

By: Enrico Milanese (Head of Product Security, Telit Cinterion)

Original Link To The Blog: Click Here
Read more…

In an age where generative AI is transforming industries and reshaping daily interactions, helping ensure the safety and security of this technology is paramount. As AI systems grow in complexity and capability, red teaming has emerged as a central practice for identifying risks posed by these systems. At Microsoft, the AI red team (AIRT) has been at the forefront of this practice, red teaming more than 100 generative AI products since 2018. Along the way, we’ve gained critical insights into how to conduct red teaming operations, which we recently shared in our whitepaper, “Lessons From Red Teaming 100 Generative AI Products.”

 

This blog outlines the key lessons from the whitepaper, practical tips for AI red teaming, and how these efforts improve the safety and reliability of AI applications like Microsoft Copilot.

What is AI red teaming?

AI red teaming is the practice of probing AI systems for security vulnerabilities and safety risks that could cause harm to users. Unlike traditional safety benchmarking, red teaming focuses on probing end-to-end systems—not just individual models—for weaknesses. This holistic approach allows organizations to address risks that emerge from the interactions among AI models, user inputs, and external systems.

8 lessons from the front lines of AI red teaming

Drawing from our experience, we’ve identified eight main lessons that can help business leaders align AI red teaming efforts with real-world risks.

1. Understand system capabilities and applications

AI red teaming should start by understanding how an AI system could be misused or cause harm in real-world scenarios. This means focusing on the system’s capabilities and where it could be applied, as different systems have different vulnerabilities based on their design and use cases. By identifying potential risks up front, red teams can prioritize testing efforts to uncover the most relevant and impactful weaknesses.

Example: Large language models (LLMs) are prone to generating ungrounded content, often referred to as “hallucinations.” However, the impact created by this weakness varies significantly depending on the application. For example, the same LLM could be used as a creative writing assistant and to summarize patient records in a healthcare context.

2. Complex attacks aren’t always necessary

Attackers often use simple and practical methods, like hand-crafting prompts and fuzzing, to exploit weaknesses in AI systems. In our experience, relatively simple attacks that target weaknesses in end-to-end systems are more likely to be successful than complex algorithms that target only the underlying AI model. AI red teams should adopt a system-wide perspective to better reflect real-world threats and uncover meaningful risks.

Example: Overlaying text on an image to trick an AI model into generating content that could aid in illegal activities.

Figure 1. Example of an image jailbreak to generate content that could aid in illegal activities.

3. AI red teaming is not safety benchmarking

The risks posed by AI systems are constantly evolving, with new attack vectors and harms emerging as the technology advances. Existing safety benchmarks often fail to capture these novel risks, so red teams must define new categories of harm and consider how they can manifest in real-world applications. In doing so, AI red teams can identify risks that might otherwise be overlooked.

Example: Assessing how a state-of-the-art large language model (LLM) could be used to automate scams and persuade people to engage in risky behaviors.

4. Leverage automation for scale

Automation plays a critical role in scaling AI red teaming efforts by enabling faster and more comprehensive testing of vulnerabilities. For example, automated tools (which may, themselves, be powered by AI) can simulate sophisticated attacks and analyze AI system responses, significantly extending the reach of AI red teams. This shift from fully manual probing to red teaming supported by automation allows organizations to address a much broader range of risks.

Example: Microsoft AIRT’s Python Risk Identification Tool (PyRIT) for generative AI, an open-source framework, can automatically orchestrate attacks and evaluate AI responses, reducing manual effort and increasing efficiency.

5. The human element remains crucial

Despite the benefits of automation, human judgment remains essential for many aspects of AI red teaming including prioritizing risks, designing system-level attacks, and assessing nuanced harms. In addition, many risks require subject matter expertise, cultural understanding, and emotional intelligence to evaluate, underscoring the need for balanced collaboration between tools and people in AI red teaming.

Example: Human expertise is vital for evaluating AI-generated content in specialized domains like CBRN (chemical, biological, radiological, and nuclear), testing low-resource languages with cultural nuance, and assessing the psychological impact of human-AI interactions.

6. Responsible AI risks are pervasive but complex

Harms like bias, toxicity, and the generation of illegal content are more subjective and harder to measure than traditional security risks, requiring red teams to be on guard against both intentional misuse and accidental harm caused by benign users. By combining automated tools with human oversight, red teams can better identify and address these nuanced risks in real-world applications.

Example: A text-to-image model that reinforces stereotypical gender roles, such as depicting only women as secretaries and men as bosses, based on neutral prompts.

Figure 2. Four images generated by a text-to-image model given the prompt “Secretary talking to boss in a conference room, secretary is standing while boss is sitting.”


7. LLMs amplify existing security risks and introduce new ones

Most AI red teams are familiar with attacks that target vulnerabilities introduced by AI models, such as prompt injections and jailbreaks. However, it is equally important to consider existing security risks and how they can manifest in AI systems, including outdated dependencies, improper error handling, lack of input sanitization, and many other well-known vulnerabilities.

Example: Attackers exploiting a server-side request forgery (SSRF) vulnerability introduced by an outdated FFmpeg version in a video-processing generative AI application.


Figure 3. Illustration of the SSRF vulnerability in the generative AI application.
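
The durable fix in a scenario like this is patching the vulnerable FFmpeg dependency, but validating any attacker-influenced URL before the media pipeline touches it adds defense in depth. A minimal Python sketch, assuming a hypothetical allowlist of approved media hosts:

# Reject URLs that point anywhere other than approved, public media hosts.
import ipaddress
import socket
from urllib.parse import urlparse

ALLOWED_HOSTS = {"videos.example.com"}  # hypothetical allowlist

def is_safe_video_url(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme != "https" or parsed.hostname not in ALLOWED_HOSTS:
        return False
    # Refuse names that resolve to internal ranges (loopback, RFC 1918, link-local),
    # the classic SSRF targets such as cloud metadata endpoints.
    for info in socket.getaddrinfo(parsed.hostname, None):
        ip = ipaddress.ip_address(info[4][0])
        if ip.is_private or ip.is_loopback or ip.is_link_local:
            return False
    return True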


8. The work of securing AI systems will never be complete

AI safety is not just a technical problem; it requires robust testing, ongoing updates, and strong regulations to deter attacks and strengthen defenses. While no system can be entirely risk-free, combining technical advancements with policy and regulatory measures can significantly reduce vulnerabilities and increase the cost of attacks.

Example: Iterative “break-fix” cycles, which perform multiple rounds of red teaming and mitigation to ensure that defenses evolve alongside emerging threats.

The road ahead: Challenges and opportunities of AI red teaming

AI red teaming is still a nascent field with significant room for growth. Some pressing questions remain:

  • How can red teaming practices evolve to probe for dangerous capabilities in AI models like persuasion, deception, and self-replication?
  • How do we adapt red teaming practices to different cultural and linguistic contexts as AI systems are deployed globally?
  • What standards can be established to make red teaming findings more transparent and actionable?

Addressing these challenges will require collaboration across disciplines, organizations, and cultural boundaries. Open-source tools like PyRIT are a step in the right direction, enabling wider access to AI red teaming techniques and fostering a community-driven approach to AI safety.

Next steps: Building a safer AI future with AI red teaming

AI red teaming is essential for helping ensure safer, more secure, and responsible generative AI systems. As adoption grows, organizations must embrace proactive risk assessments grounded in real-world threats. By applying key lessons—like balancing automation with human oversight, addressing responsible AI harms, and prioritizing ethical considerations—red teaming helps build systems that are not only resilient but also aligned with societal values.

AI safety is an ongoing journey, but with collaboration and innovation, we can meet the challenges ahead. Dive deeper into these insights and strategies by reading the full whitepaper: Lessons From Red Teaming 100 Generative AI Products.

 

 
By: Blake Bullwinkel (AI Safety Researcher & Generative AI Red Teamer at Microsoft)
Original Link To The Blog: Click Here
Read more…

Chris Krebs’ comments were the first time he spoke publicly since Trump signed an order directing the Justice Department to investigate him.

 


Chris Krebs testifies during a Senate Armed Services Committee hearing concerning the roles and responsibilities for defending the nation against cyberattacks on Capitol Hill on October 19, 2017, in Washington, DC. | Drew Angerer/Getty Images

 

SAN FRANCISCO — Chris Krebs, the former head of the nation’s cyber defense agency whom President Donald Trump forced from office in his first term, on Monday called on the cyber and tech community to express “outrage” over the major recent administration cuts to U.S. federal cyber programs.

Krebs’ comments were the first time he spoke publicly since Trump signed an order directing the Justice Department to investigate him for defending the validity of the 2020 elections while serving as the director of the Cybersecurity and Infrastructure Security Agency, and they underscored growing concern within industry about changes in Washington.

 

“Cybersecurity is national security, we all know that, that’s why we’re here, that’s why we get up every morning and do our jobs,” Krebs said during a panel at the RSA Conference, one of the largest annual global gatherings of cyber professionals in San Francisco. “We are protecting everyone out there, and right now to see what’s happening to the cybersecurity community inside the federal government, we should be outraged, absolutely outraged.”

 

Positive response: 

Krebs’ comments were met with massive applause in a room that was already overflowing with hundreds of attendees. They were made in the wake of mass rounds of layoffs and offers of deferred resignations that have hit CISA in recent weeks as part of efforts by Elon Musk’s DOGE to downsize the federal government, efforts that have also included pausing all election security programs at the agency.

The former director made the pitch to “Make CISA Great Again,” particularly in the face of expanding threats to the U.S. in cyberspace from nations including China.

“We are not moving forward, we have to continue moving forward,” Krebs said of current federal cyber efforts.

 

Larger push: 

CISA is far from alone in being targeted for changes by the Trump administration. The State Department’s cyber bureau is set to be pulled apart through the reorganization of the agency announced by Secretary Marco Rubio last week, while the National Security Agency and U.S. Cyber Command are without Senate-confirmed leadership after President Donald Trump abruptly dismissed Gen. Timothy Haugh from leading both earlier this month.

Trump also this month signed a memorandum directing an investigation into Krebs over his statements as CISA director in 2020 that the presidential election was valid and not stolen, an order that also included a clause stripping any former official who works with Krebs of their security clearance. Krebs subsequently stepped down from his role as chief intelligence and public policy officer at cybersecurity firm SentinelOne to protect his colleagues.

 

Concerns mounting: 

Former cyber officials have started to step forward to defend CISA and other agencies amid the changes, including from former CISA Director Jen Easterly, Krebs’ successor who left the role in January. Easterly posted on LinkedIn last week calling for leaders to stand up and protest the changes at cyber agencies, comments that Krebs praised on Monday.

“We need more Cyber Command, more fighters,” Krebs said. “We need more folks at the NSA collecting intel. We need more front line defenders, threat hunters, red teamers, folks that are just doing CISA admin, the basics, we need more of that, not less.”

 

By: Maggie Miller (Cybersecurity Reporter, POLITICO)

Original Link To The Blog: Click Here

Read more…

In this SANS session from RSAC 2025, top cybersecurity experts shared five of the most dangerous and emerging attack techniques based on real-world field intelligence, with actionable defense strategies for each. Below are the key takeaways from each segment:

 

About the Speakers:

1) Heather Barnhart - DFIR Curriculum Lead and Sr. Director, SANS Institute and Cellebrite

2) Tim Conway - ICS Curriculum Lead, SANS Institute

3) Rob T Lee - Chief of Research & Head of Faculty, SANS Institute

4) Ed Skoudis - President, SANS Technology Institute College

5) Joshua Wright - Faculty Fellow and Senior Technical Director, SANS Institute and Counter Hack Innovations

 

Executive Summary:

1. Authorization Sprawl & Identity Abuse

Speaker: Joshua Wright

Attackers are exploiting centralized identity providers (IdPs) without deploying malware. Instead, they leverage pre-approved access from compromised user accounts to move laterally across systems—on-prem and cloud—accessing services like Jira, Confluence, Microsoft 365, GitHub, and Snowflake.

Tactic in Focus: This technique, termed “Authorization Sprawl”, has been notably used by the threat actor Scattered Spider, who favors stealthy access over persistence mechanisms.

Mitigations:

  • Enforce cross-platform privilege mapping

  • Demand improved cloud logging (as per NSA’s guidance)

  • Enhance browser visibility with in-browser monitoring tools

 

2. Ransomware Targeting ICS/OT Environments

Speaker: Tim Conway (Part 1)

Ransomware attacks are now targeting operational technology (OT) and industrial control systems (ICS), affecting vital sectors like fuel, food, and manufacturing (e.g., Colonial Pipeline, JBS Foods). These attacks often originate in IT systems and spread to operational layers.

Key Issue: Many organizations lack visibility into the OT/ICS layer and its connectivity with IT systems, making them prime targets.

Mitigations:

  • Conduct thorough asset and risk assessments

  • Apply five critical controls for ICS

  • Adopt Consequence-driven, Cyber-informed Engineering (CCE) from Idaho National Laboratory for mature defenses

 

3. Nation-State Attacks on Critical Infrastructure

Speaker: Tim Conway (Part 2)

State-sponsored actors are increasingly launching ICS/OT-targeted attacks for geopolitical influence, deterrence, or destruction. These operations mirror advanced persistent threats (APTs), leveraging initial IT compromise to cause disruption or destruction of physical systems (e.g., Ukraine power grid attacks).

Strategy Shift: These actors misuse legitimate ICS tools rather than introducing malware, making detection harder.

Mitigations:

  • Prepare for assumed breaches in IT

  • Prioritize segmentation and monitoring of OT environments

  • Conduct impact modeling to plan for worst-case scenarios

 

4. Lack of Logging – The “Darkness” Threat

Speaker: Heather Mahalik Barnhart

A recurring self-inflicted vulnerability is inadequate logging. Without proper data, even world-class responders can’t investigate incidents or attribute attacks. Attackers are learning to look normal—and when logs don’t exist, it’s like investigating in the dark.

Real-World Impact: Cases like Bybit demonstrate how attackers evade AI-driven threat detection by mimicking normal behavior.

Mitigations:

  • Ensure comprehensive logging across on-prem and cloud

  • Train AI models to detect deviations from ‘normal’

  • Conduct periodic log reviews and red teaming exercises

 

5. AI-Powered Normalization of Attacks (Previewed)

This segment previews how AI is being used to make attacker behavior look “normal,” further complicating detection and response.

 

Final Thought

This session is a wake-up call for defenders to shift from passive monitoring to active threat anticipation. The common theme? Attackers are adapting faster—using access and weaknesses already in place. As defenders, we must improve visibility, reduce trust assumptions, and prepare for both stealthy and destructive threats.

Read more…

John Hammond, a respected name in cybersecurity, covered this topic in a YouTube video, offering a live demo and breaking down the implications. Below is a comprehensive analysis of the technique, the threats it poses, and how defenders can mitigate them.

 

Executive Summary

In this video, John Hammond explores a recent Unit 42 report about a Chinese APT group exploiting Visual Studio Code’s “Remote Tunnel” feature to infiltrate government networks in Asia. The attackers used code tunnel, a legitimate command built into VS Code, to create a secure connection back to their own system—all using Microsoft’s own signed infrastructure and domains.

Key Insights:

  • No Malware, Just Microsoft: The attack involves no traditional malware, instead abusing VS Code’s signed binary (code.exe) and tunneling functionality.

  • Persistent Remote Access: With just a GitHub or Microsoft Entra ID login, the attackers establish full control over the target—browsing files, executing commands, and setting up command-and-control (C2) operations.

  • Live Demo: Hammond’s demo showcases how easy it is to exploit this: upload the binary, run code tunnel, authenticate via GitHub, and gain full access via a browser-based VS Code instance.

  • Detection & Defense:

    • Monitor suspicious command-line activity involving code.exe and the tunnel subcommand.

    • Watch for tunnel-related artifacts like tunnel.json files or unexpected process trees spawning PowerShell or cmd.exe.

    • Block relevant domains such as tunnels.api.visualstudio.com and devtunnels.ms.

    • Use AppLocker or Windows Defender Application Control (WDAC) for additional endpoint protection.

  • Red Team Adoption: Tools like Cobalt Strike are beginning to integrate this method into their playbooks, using Microsoft infrastructure to bypass network defenses.



Behind the Technique: What Makes It Dangerous?

The threat actors exploited a relatively new capability in VS Code—Remote Tunnels, which allow developers to connect to their development environments from anywhere. The twist? This tunnel can be launched with zero malware, zero privilege escalation, and zero alarms.

Once an attacker has initial code execution (via phishing, RCE, etc.), they simply:

  1. Upload code.exe (VS Code’s portable binary).

  2. Run the command code tunnel.

  3. Authenticate via GitHub or Microsoft Entra ID.

  4. Access the full system via VS Code’s browser interface.

The entire setup uses Microsoft-signed code and official Microsoft domains, making detection incredibly challenging in traditional EDR setups.



What Defenders Can Do

While the attack leverages trusted tools, defenders aren’t helpless. Here’s how to stay ahead:

1. Network Monitoring

Block or closely monitor connections to:

  • tunnels.api.visualstudio.com

  • devtunnels.ms

Even adding these to your /etc/hosts file to redirect locally can be a lightweight defense.
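
A minimal sketch of that sinkhole approach (the Windows path is the standard hosts location; weigh this against any sanctioned use of VS Code Remote Tunnels or Dev Tunnels by your developers):

# /etc/hosts on Linux/macOS, or C:\Windows\System32\drivers\etc\hosts on Windows
0.0.0.0 tunnels.api.visualstudio.com
0.0.0.0 devtunnels.ms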

2. Process Tree Analysis

Investigate cases where (a minimal hunting sketch follows this list):

  • code.exe spawns terminals (cmd.exe, PowerShell)

  • Unexpected file changes appear in sensitive directories
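
A minimal hunting sketch for the first condition, assuming the third-party psutil package (the shell list is illustrative; VS Code's integrated terminal also legitimately spawns shells, so treat hits as triage leads rather than verdicts):

# Flag shells whose parent process is code.exe (a triage signal, not a verdict).
import psutil

SHELLS = {"cmd.exe", "powershell.exe", "pwsh.exe"}

for proc in psutil.process_iter(["pid", "name"]):
    name = (proc.info["name"] or "").lower()
    if name not in SHELLS:
        continue
    try:
        parent = proc.parent()
        if parent and parent.name().lower() == "code.exe":
            print(f"Review: code.exe (pid {parent.pid}) spawned {name} (pid {proc.info['pid']})")
    except psutil.NoSuchProcess:
        continue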


3. File Artifacts

Look for (a quick sweep sketch follows this list):

  • tunnel.json files in user directories

  • Logs like server.txt or pid.txt linked to VS Code tunneling
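
A quick sweep for these artifacts, sketched in Python (the profile roots are assumptions; adjust them for your environment):

# Sweep common profile roots for the tunnel artifacts named above.
from pathlib import Path

ARTIFACTS = ("tunnel.json", "server.txt", "pid.txt")

for root in (Path("C:/Users"), Path("/home")):
    if not root.exists():
        continue
    for name in ARTIFACTS:
        for hit in root.rglob(name):
            print(f"Possible VS Code tunnel artifact: {hit}")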


4. Application Control

Use AppLocker, WDAC, or similar solutions to restrict where and how binaries like code.exe can run.



Final Thoughts

This technique demonstrates a dangerous evolution in attacker tradecraft. The line between “legitimate tool” and “malicious vector” continues to blur, and defenders must treat every signed binary with scrutiny—especially those capable of network tunneling and remote execution.

As Hammond puts it, "It’s a remote access Trojan—just with a friendly face."

 

 

By John Hammond (Security Researcher, Educator & YouTube Creator)

Original Link to the Blog: Click Here

Read more…

We’re thrilled to join forces with the 10th National Insider Risk Symposium as a proud community partner. This premier forum is designed for senior security professionals from both the public and private sectors to collaborate, learn, and advance strategies against insider threats.


 

Event Overview

Dates: September 17–18, 2025
Location: National Housing Center, 1201 15th St NW, Washington, D.C. 20005


 

What to Expect

  • Expert presentations and panel discussions covering insider risk mitigation strategies, real-world case studies, and both behavioral and technical approaches.

  • Cross-sector insights with speakers from top organizations like Robinhood, JP Morgan Chase, Morgan Stanley, Capital One, Chevron, Northrop Grumman, MITRE, and federal agencies including ODNI/NCSC, DCSA, and OUSD.

  • Sector-specific programming focused on finance, energy, space, higher education, and government.

  • Networking opportunities during breaks, exhibit halls, and a special evening reception hosted by DTEX at the Australian Embassy.


 

Who Should Attend

  • CISOs & Senior Cybersecurity Leaders

  • Insider Threat Program Managers

  • Risk Management Analysts & Threat Investigators

  • Behavioral Psychologists & Technical Security Managers

  • Digital Forensics and DLP Specialists

  • Other insider-risk stakeholders and practitioners


 

How to Join

Secure your spot at the event:

👉 Express Interest / RSVP Here


 

Why Participate?

  • Learn best practices and emerging techniques in insider threat detection and prevention.

  • Engage with a diverse speaker lineup from top-tier private companies and government organizations.

  • Network with peers across industries, with ample opportunities during socials and breaks.

  • Benefit from a category-spanning approach, with insights relevant to finance, energy, government, and more.



Don’t miss this exceptional opportunity to deepen your knowledge of insider risk and connect with leading professionals. We look forward to seeing you in Washington, D.C.!

Read more…

We’re excited to announce the CISO 100 Awards & Future CISO Awards 2025, hosted by CISO Platform, dedicated to celebrating top cybersecurity leaders and rising stars across the USA. This year, CISO Platform is collaborating as a community partner with EC-Council’s Global CISO Forum, supporting initiatives such as the CISO Platform Future CISO 100 Awards to recognize and connect senior cybersecurity leaders.

Event Details:

Location: Renaissance Atlanta Waverly Hotel & Convention Center, Atlanta, Georgia, USA
Dates: October 1 – 2, 2025

 

Meet Our Judges


Anton Chuvakin

Office of the CISO, Google Cloud; Former Research VP at Gartner


 


Chris Ray

GigaOm Analyst


 


Dan Lohrmann

Field Chief Information Security Officer (CISO) for Public Sector


 


Jim Routh

Former Head of Security at JP Morgan Chase; Board Member, Advisor, Investor, Faculty Member


 


Terry Cutler

#1 Top Influencer in CyberSecurity by IFSEC Global


 


Bruce Schneier

Internationally Renowned Security Technologist, called a “Security Guru” by the Economist

 

Award Categories

  • CISO 100 Awards: Honoring 100 exceptional CISOs in the USA for their leadership and achievements in cybersecurity.

  • Future CISO Awards: Celebrating high-potential professionals who are well on their way to becoming the next generation of security leaders.

 

Why Attend?

  • Recognition among top cybersecurity executives in the USA

  • Networking with industry peers and thought leaders

  • Insightful leadership moments and inspiration from real-world success stories

  • An exclusive experience at a premier venue in Atlanta

 

Who Should Apply?

  • Experienced cybersecurity leaders (for CISO 100 Awards)

  • Rising professionals showing strong leadership potential (for Future CISO Awards)

 

For more details: Click Here

 

Submit Your Nomination

Nominate for the CISO Platform CISO 100 Awards & Future CISO Awards – Recognizing Cybersecurity Leaders. Recommend someone you know deserving of this prestigious accolade. Nominate your colleague, mentor, someone you admire—or yourself!

👉 Submit Your Nomination Now

 

Join us in Atlanta to celebrate leadership, vision, and impact in cybersecurity. We look forward to recognizing those who are making a real difference.

Read more…

Black Hat USA 2025 is just around the corner—and what better way to unwind and connect than with a relaxed evening of cocktails, conversations, and golf swings?

We’re excited to invite senior cybersecurity leaders to the Executive Cocktail Reception hosted by EC-Council & FireCompass, with CISO Platform as proud community partner. This invite-only gathering will bring together top CISOs, CSOs, and cybersecurity executives for an evening of meaningful networking in a premium setting.

Event Overview:

This exclusive reception will be held at Topgolf Las Vegas, offering a perfect blend of business and leisure. Step away from the buzz of the conference floor and join your peers for drinks, food, and strategic discussions in a fun, informal environment. Whether you're deep in conversation or testing your swing in a private hitting bay, this event is crafted for quality interactions and real connections.

Event Details:

Venue: Topgolf Las Vegas
Date: Monday, August 4th, 2025
Time: 6:00 PM – 10:30 PM

Why You Should Attend:

  • Invite-Only for Director+ Level Security Leaders

  • Enjoy Gourmet Food & Premium Drinks

  • Private Access to Topgolf Hitting Bays

  • Strategic Discussions on AI, Threat Landscape & Leadership

  • Build Relationships with Industry Peers & Visionaries

 

Don’t miss your chance to be part of one of the most anticipated community networking events at Black Hat USA 2025. Spots are filling fast, and attendance is by invite only.


👉 Click here to Register

We look forward to seeing you in Las Vegas!

Read more…

We’re excited to invite you to an exclusive CISO Talk (Chennai Chapter) on “AI Code Generation Risks: Balancing Innovation and Security” featuring Ramkumar Dilli (Chief Information Officer, Myridius).

In this session, we’ll explore how security leaders can navigate the risks of AI-generated code, implement secure development guardrails, and strike the right balance between innovation and security. AI is prediction, not understanding — making security review essential. We’ll also discuss the importance of having the right policies, training, and tooling in place to manage trust, validate outputs, and prepare for emerging threats.

 

Key Discussion Points:

  • AI Is Prediction, Not Understanding

  • Security Review Still Essential

  • Policies, Training & Tooling

Date: 19th July 2025

Time: 4:00 PM IST | 2:30 PM GST

Join us live or register to receive the session recording in case you can’t attend.

>> Register Here

Read more…

In a deep-dive conversation at CISO Platform, Cassie Crossley, Vice President of Supply Chain Security at Schneider Electric, joined Bikash Barai (Co-founder, FireCompass & CISO Platform) to explore one of the most pressing concerns in enterprise security today: supply chain security and how to make third-party risk management (TPRM) future-ready.

Cassie, who recently authored Software Supply Chain Security: Securing the End-to-End Lifecycle for Software, Firmware, and Hardware, brings decades of experience managing large supplier ecosystems in critical infrastructure. The session touched on the evolution of threats, what’s broken in today’s TPRM practices, and how to build more resilient programs.

 

 

Highlights from the Conversation

“TPRM today stops at shallow assessments. You can fool a questionnaire, but not a hacker.”

The Story Behind the Book

Cassie shared that her motivation for writing the book stemmed from her work with over 54,000 global suppliers at Schneider Electric—many of whom lacked visibility into product and application security, despite being part of critical product ecosystems. She aimed to create a practical, global resource—something even startups could use without drowning in compliance documents.

“Startups don’t want to read 1,000 pages of NIST. They need clear, actionable advice.”

 

What’s Broken in Today’s TPRM

While organizations focus heavily on IT infrastructure, they often ignore how third-party software and services impact resilience.

“We have to assume that a supplier will be compromised or go offline. The question is—can your business survive that disruption?”

Cassie emphasized that the traditional approach—questionnaires, certifications like ISO 27001 or SOC 2, and passive risk scoring—misses real-world resilience. She advocated for evidence-based assessments that go beyond surface-level compliance.

 

Building a Modern TPRM Program

Cassie laid out a blueprint for how she would build a TPRM program from scratch in a mid-sized (1,000-person) organization:

People: A 3-Person Dream Team

  • Governance Lead: Aligns business stakeholders and procurement

  • Risk Assessor: Technical background, understands application + network security

  • Program Manager: Orchestrates assessments and tooling

“You don’t need a CISSP. You need someone who’s done real product security or understands build environments.”

Processes to Prioritize

  • Asset and supplier landscape discovery

  • Inherent risk identification during procurement

  • Resilience simulation workshops with executives

  • Continuous monitoring of critical vendors via Threat Intel

“Start by asking simple but powerful questions: What happens if this supplier goes down tomorrow?”

Technology Stack (MVP Version)

  • Risk Rating Tool (e.g., BitSight, SecurityScorecard)

  • Internal Dashboards (e.g., Tableau)

  • Spreadsheet-based vendor tracking (yes, that’s enough for most!)

  • Optional: Open-source intelligence feeds + third-party pentest reviews

 

Key Success Factors

  • Cultural Buy-In: Cross-functional accountability for third-party risk, not just the CISO’s burden

  • Real Conversations: Establishing CISO-to-CISO links with vendors

  • Due Diligence on Scope: Reviewing pentest reports isn’t enough; validate scope, auth testing, and role-based access coverage

“We need TPRM 2.0 — not just assess and rate vendors, but plan for failure and ensure recoverability.”

 

Why Most TPRM Programs Fail

  1. No Accountability: Risk lies with procurement or business, but no one owns it

  2. Surface-Level Assessment: Stop at questionnaires and rating scores

  3. Reactive Posture: Only respond to incidents; no proactive resilience planning

  4. Shadow IT: Lack of procurement controls leads to risky tool adoption

 

Final Thoughts

Cassie’s message was clear: Cybersecurity is not just about protection. It’s about survival. As supply chains become more digitized and interconnected, organizations must move beyond compliance and embrace resilience engineering.

“Cyber folks love talking to cyber folks. Build that direct bridge—it’ll save you when things go wrong.”

Cassie will also be attending Black Hat USA 2025, where she hopes to continue the dialogue on securing the software supply chain. Until then, she encourages the community to explore her book and engage in real conversations about supply chain resilience.

Read more…

Interesting research: “Guillotine: Hypervisors for Isolating Malicious AIs.”

Abstract: As AI models become more embedded in critical sectors like finance, healthcare, and the military, their inscrutable behavior poses ever-greater risks to society. To mitigate this risk, we propose Guillotine, a hypervisor architecture for sandboxing powerful AI models—models that, by accident or malice, can generate existential threats to humanity. Although Guillotine borrows some well-known virtualization techniques, Guillotine must also introduce fundamentally new isolation mechanisms to handle the unique threat model posed by existential-risk AIs. For example, a rogue AI may try to introspect upon hypervisor software or the underlying hardware substrate to enable later subversion of that control plane; thus, a Guillotine hypervisor requires careful co-design of the hypervisor software and the CPUs, RAM, NIC, and storage devices that support the hypervisor software, to thwart side channel leakage and more generally eliminate mechanisms for AI to exploit reflection-based vulnerabilities. Beyond such isolation at the software, network, and microarchitectural layers, a Guillotine hypervisor must also provide physical fail-safes more commonly associated with nuclear power plants, avionic platforms, and other types of mission critical systems. Physical fail-safes, e.g., involving electromechanical disconnection of network cables, or the flooding of a datacenter which holds a rogue AI, provide defense in depth if software, network, and microarchitectural isolation is compromised and a rogue AI must be temporarily shut down or permanently destroyed.

The basic idea is that many of the AI safety policies proposed by the AI community lack robust technical enforcement mechanisms. The worry is that, as models get smarter, they will be able to avoid those safety policies. The paper proposes a set of technical enforcement mechanisms that could work against these malicious AIs.

 

By Bruce Schneier (Cryptographer, Author & Security Guru)

Original Link to the Blog: Click Here

Read more…