Recently, Anthropic introduced Claude Mythos Preview, a general-purpose model with advanced coding and agentic capabilities, to select organisations as part…

Recently, Anthropic introduced Claude Mythos Preview, a general-purpose model with advanced coding and agentic capabilities, to select organisations as part of a new initiative called Project Glasswing. The model has the ability to autonomously find thousands of high-severity, zero-day vulnerabilities in major operating systems and web browsers. Obviously, the waves were felt in the cybersecurity space for better and for worse.

However, unlike the SaaSPocalypse, Claude Mythos is not going to replace cybersecurity professionals. While that is some good news in a world that is scared of AI replacing them or making their roles redundant, the ugly side is that the very tech will also amplify the scale of cyberattacks. This makes us ponder: how has AI increased security risks?

And are these risks only restricted to models, or do they pose a more severe threat? If yes, how are we planning to deal with it? Let’s find out in this edition of The AI Shift.

The Silent Evolution Of Cyber Threats For years, cybersecurity evolved alongside infrastructure shifts — from web applications to cloud. With AI, that progression has taken a more complex turn. Unlike earlier layers, AI is not just an interface sitting atop systems.

It is deeply embedded in enterprise workflows, decision-making engines, and data pipelines. At the application layer, attacks such as prompt injection are already emerging, where malicious instructions are embedded into inputs to manipulate model behaviour. But as JP Mishra, founder and CEO of Deep Algorithms, an Indian cybersecurity company that primarily serves the BFSI sector, pointed out, the bigger concern is how invisible these attacks are.

Unlike traditional exploits, these attacks don’t look malicious. They resemble normal text, documents, or queries, making it difficult for both humans and systems to distinguish intent. “The AI might treat something that looks harmless to us as a command,” he said.

Alongside this, risks like data poisoning are quietly emerging, where attackers manipulate training or input data over time to skew outputs. But the bigger shift is happening beneath that layer. According to Rahul Sasi, the cofounder and CEO of cybersecurity company CloudSEK, the AI infrastructure, including model pipelines, APIs and orchestration systems, is becoming a viable target.

Misconfigured access to tools like Google Gemini or poorly secured integrations can expose sensitive enterprise data. Then there is the architectural layer where AI agents operate through skills or instruction sets. If these are tampered with, the system can execute malicious actions without triggering traditional security alerts.

Unlike malware, these attacks do not rely on files or binaries, making them harder to detect using conventional tools, reaffirming Mishra’s point above. Supply Chains: The New Weak Link Attackers are not always going after the AI directly. They are going after what the AI depends on.

This includes third-party libraries, APIs, data sources, and orchestration tools that power AI systems. Aashish Bharadwaj, the cofounder of Fencio.dev, a security-focused startup for AI agents, shared a recent example of a LiteLLM breach, where attackers compromised a dependency library used by an AI gateway. “The system itself remained untouched, but the breach propagated through its dependencies, exposing sensitive information downstream,” he said.

What makes these attacks harder to detect is that AI systems trust their inputs and integrations by default. If an agent is connected to a compromised API or tool, it will continue interacting with it — unknowingly ingesting or propagating malicious data. In another example, OpenAI disclosed a supply chain incident involving a compromised Axios dependency in its macOS signing workflow.

While no breach was reported, the episode highlighted how indirect dependencies can quietly become high-impact attack vectors in modern AI pipelines. Sasi of CloudSEK added that exposed credentials, publicly accessible dashboards and plaintext cloud keys are emerging as common vulnerabilities in the AI infrastructure world. However, what’s even scarier is that the challenge extends beyond individual systems in the fintech space.

With increasing reliance on SaaS integrations and third-party vendors, supply chain risks have grown significantly in the sector. A Speed Mismatch In Cybersecurity Beyond new attack surfaces, AI is fundamentally changing the speed and scale of cyber threats. AI is making threats faster, more personalised, and easier to scale.

Attackers can now continuously scan systems, identify vulnerabilities, and exploit them without waiting for manual triggers. As Mishra of Deep Algorithms put it, this has turned cyberattacks into an always-on process. To counter this, some companies are moving towards continuous threat exposure management (CTEM) models, where systems are constantly stress-tested and not periodically audited. Neeraj Chauhan, the CISO of PayU, frames this as