CISO Talk (Chennai Chapter) On - AI Code Generation Risks: Balancing Innovation & Security With Ramkumar Dilli (Chief Information Officer, Myridius)

In an era where AI tools are transforming software development, CISOs face a pressing challenge: how to harness the speed of AI code generation without compromising on security. In a compelling CISO Talk (Chennai Chapter) hosted by CISO Platform, Ramkumar Dilli, Chief Information Officer at Myridius, unpacked the critical risks posed by AI-generated code and shared real-world lessons on balancing innovation with secure software development practices.

 

Key Highlights:

  • AI Is Prediction, Not Understanding

  • Security Review Still Essential

  • Policies, Training & Tooling

 

About Speaker 

  • Ramkumar Dilli, Chief Information Officer at Myridius

 

Listen To Live Chat : (Recorded) 

Featuring Ramkumar Dilli, Chief Information Officer at Myridius

 

Presentation

 

Executive Summary

  • AI Code Generation is Transforming Development
    Tools like GitHub Copilot and ChatGPT are dramatically accelerating software development by auto-generating functional code.

  • Security Blind Spots in AI-Generated Code
    While AI tools improve productivity, they don’t inherently understand security or compliance—leading to vulnerabilities such as SQL injection or use of outdated libraries.

  • Real Incidents Show Real Risks
    Ramkumar shared real-world examples, including a fintech breach and a product company data leak, where lack of AI governance caused serious damage.

  • Governing AI Tools Instead of Banning Them
    Organizations shouldn’t ban AI tools in panic. Instead, they should focus on clear policies, safe use cases, and practical developer training.

  • Blueprint for Responsible AI Usage
    The session offered a security-first approach for AI code usage—enforcing code reviews, integrating security scans, defining usage boundaries, and conducting regular training.

 

Conversation Highlights

  • Developers love AI tools for their speed and convenience, but this often leads to skipping manual reviews and assuming code is safe.

  • Case Study – Fintech Firm: AI-generated payment API code introduced SQL injection vulnerabilities due to poor string handling. Breach led to data exposure, audits, and reputational damage.

  • Case Study – Product Company: Developer pasted production logs into ChatGPT, violating data privacy. The company responded with policy updates, revoking AI access, and team-wide training.

  • Key Risks Identified:

    • Vulnerable code patterns (e.g., hardcoded secrets, lack of input sanitization)

    • Licensing/IP contamination from AI suggesting GPL-licensed code

    • Prompt injection attacks overriding safety checks

    • Sensitive data leakage from developers sharing internal logs or logic

  • Why banning AI isn’t the solution:
    Instead of banning tools like ChatGPT or Copilot, Ramkumar emphasized enabling safe usage via:

    • Clear AI usage policies

    • Practical developer training (e.g., safe prompt design, data redaction)

    • CI/CD integration of static/dynamic analysis, secret scanning, and license checks

  • Best Practices for Secure AI Use:

    • Mandatory peer reviews for AI-generated code

    • Developer awareness programs at least twice a year

    • Automated vulnerability scanning in pipelines

    • Regular policy reinforcement and usage monitoring

  • Governance Analogy:
    “We don’t ban cars because of accidents—we teach people to drive safely and wear seatbelts. Similarly, don’t ban AI—govern it.”

  • Future Outlook:

    • Emerging AI guardrails and secure code-generation frameworks

    • Continuous refinement of AI usage policies based on audits and incidents

 

Questions & Answers

Q1. How should a CISO approach creating policies and a governance framework around AI code generation tools?

Answer:
Policies should be based on organizational experience and existing compliance frameworks like ISMS, SOC 2, or the DPDP Act. There’s no one-size-fits-all template. CISOs should define usage steps clearly, document practices, and continuously improve them through audits and internal feedback. The key is turning policy into practice—not just documentation.

 

Q2. How can organizations assess the security risks of third-party AI models and APIs?

Answer:
This largely depends on tool choice and budget. Tools should be selected based on their capability to prevent breaches—like enhanced endpoint monitoring, DLP, and log monitoring. Ramkumar emphasized that while specifying a tool wasn't feasible, strengthening perimeter defenses and auditing AI usage is essential.

 

Q3. How can developers avoid blindly trusting AI-generated alerts or suggestions?

Answer:
By embedding secure practices into the CI/CD pipeline. DevSecOps must be active at every development stage. Developers should be aware that their actions are monitored and that there are policies guiding secure use. Practical, scenario-based training helps build this awareness.

 

Q4. Can organizations claim proprietary rights over code generated by AI tools like Copilot or ChatGPT?

Answer:
This remains a gray area. Ramkumar admitted this question requires further legal and policy exploration, especially with open-source licensing concerns. Organizations should err on the side of caution and review licensing implications with legal counsel.

 

Q5. How do cryptographic controls and zero trust models apply to AI tool use in development?

Answer:
Zero Trust should be applied at the endpoint level to monitor interactions with AI tools. Cryptographic encryption helps at the data level, but scanning for vulnerabilities must be integrated into CI/CD using tools like white-box testing. Maintaining a live knowledge base of gaps and fixes is also recommended.

 

Q6. How should organizations handle remote developers using AI tools?

Answer:
In hybrid environments, DLP, ZTNA (Zero Trust Network Access), and SASE (Secure Access Service Edge) implementations become critical. While it’s impossible to restrict personal AI tool usage fully, organizations can enforce controls via endpoint security, usage policies, and proactive audits.

 

Final Thoughts

Ramkumar Dilli wrapped up the session by reinforcing that AI tools are not to be feared—but governed. The key to secure adoption lies in:

  • Defining policies that clearly lay out what’s acceptable and what’s not

  • Training developers to recognize insecure patterns and avoid risky behaviors

  • Using automation and tooling to catch vulnerabilities early in the development cycle

“AI brings real power—but also real risk. It’s up to CISOs and security leaders to enable innovation safely and responsibly.

 

 

Votes: 0
E-mail me when people leave their comments –

Community Manager, CISO Platform

You need to be a member of CISO Platform to add comments!

Join CISO Platform

Join The Community Discussion

CISO Platform

A global community of 5K+ Senior IT Security executives and 40K+ subscribers with the vision of meaningful collaboration, knowledge, and intelligence sharing to fight the growing cyber security threats.

Join CISO Community Share Your Knowledge (Post A Blog)
 

 

 

CISO Platform Talks : Security FireSide Chat With A Top CISO or equivalent (Monthly)

  • Description:

    CISO Platform Talks: Security Fireside Chat With a Top CISO

    Join us for the CISOPlatform Fireside Chat, a power-packed 30-minute virtual conversation where we bring together some of the brightest minds in cybersecurity to share strategic insights, real-world experiences, and emerging trends. This exclusive monthly session is designed for senior cybersecurity leaders looking to stay ahead in an ever-evolving landscape.

    We’ve had the privilege of…

  • Created by: Biswajit Banerjee
  • Tags: ciso, fireside chat

6 City Round Table On "New Guidelines & CISO Priorities for 2025" (Delhi, Mumbai, Bangalore, Pune, Chennai, Kolkata)

  • Description:

    We are pleased to invite you to an exclusive roundtable series hosted by CISO Platform in partnership with FireCompass. The roundtable will focus on "New Guidelines & CISO Priorities for 2025"

    Date: December 1st - December 31st 2025

    Venue: Delhi, Mumbai, Bangalore, Pune, Chennai, Kolkata

    >> Register Here

  • Created by: Biswajit Banerjee

Fireside Chat With Sandro Bucchianeri (Group Chief Security Officer at National Australia Bank Ltd.)

  • Description:

    We’re excited to bring you an insightful fireside chat with Sandro Bucchianeri (Group Chief Security Officer at National Australia Bank Ltd.) and Erik Laird (Vice President - North America, FireCompass). 

    About Sandro:

    Sandro Bucchianeri is an award-winning global cybersecurity leader with over 25…

  • Created by: Biswajit Banerjee
  • Tags: ciso, sandro bucchianeri, nab