AI is changing cybersecurity
The new cybersecurity realities:
Every software download is a trust decision
AI is changing cybersecurity, accelerating existing threats and creating new ones
In our most recent newsletter, we highlighted how AI is changing cybersecurity. Events this past week underscore both those changes and their accelerating pace.
For years, leaders could treat cybersecurity as a matter of firewalls and patching. That is no longer enough.
The connected supply chain incident
Today, one of the biggest risks comes from something far more ordinary: the everyday software components companies download and rely on to build products, run operations, and move quickly. In a world built on connected software supply chains, a single poisoned component, weak configuration, or stolen credential can spread risk far beyond the original target.
The recent LiteLLM incident makes that danger clear. What looked like a routine software download was, in reality, a trust decision that extended far beyond one tool. Every time a team downloads a software package, it is also placing trust in the many supporting components behind it, along with the people, systems, and release processes that maintain them.
In the LiteLLM case, attackers reportedly used stolen publishing access to push malicious versions of the real package to PyPI, turning a normal update into a credential-theft event.
The lesson is larger than one library: any point in that chain can be poisoned, and one compromised link can ripple across many organizations at once.
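To make that trust decision concrete, here is a minimal sketch of the kind of check a team could run before installing a downloaded artifact. The file name and expected digest are hypothetical placeholders, not real LiteLLM release values; in practice the digest would come from a lockfile or an internal allowlist recorded when the dependency was first reviewed.

import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    # Stream the file so large artifacts do not need to fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical values for illustration only; not real LiteLLM artifacts.
EXPECTED_DIGEST = "0" * 64  # placeholder digest taken from a reviewed lockfile
artifact = Path("litellm-1.0.0-py3-none-any.whl")  # hypothetical local download

if sha256_of(artifact) != EXPECTED_DIGEST:
    raise SystemExit(f"Refusing to install {artifact.name}: digest mismatch")
print(f"{artifact.name} matches the pinned digest")

Package managers already support this pattern natively; pip, for example, can enforce hash pinning with its --require-hashes option, so a maliciously replaced release fails the install instead of running on developer machines.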
Even advanced companies can have vulnerabilities
At the same time, Anthropic recently exposed unpublished internal materials because of what it described as a human error in its content-management configuration. Reports (e.g., https://fortune.com/2026/03/26/anthropic-leaked-unreleased-model-exclusive-event-security-issues-cybersecurity-unsecured-data-store/) indicate that draft assets were left publicly accessible, including information about a forthcoming model Anthropic characterized as a major leap in AI-automated cybersecurity hacking capability.
That matters because it is a reminder that even highly sophisticated AI companies remain vulnerable to basic operational mistakes. In other words, advanced technology does not eliminate simple risk.
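Mistakes like this are usually catchable by routine, automated configuration checks. The sketch below assumes AWS S3 purely for illustration (the reports describe a content-management configuration error, not any specific storage service) and flags any bucket whose public-access protections are not fully enabled; the bucket name is hypothetical.

import boto3
from botocore.exceptions import ClientError

def blocks_public_access(bucket: str) -> bool:
    # Treat a bucket as safe only if every public-access block setting is enabled.
    s3 = boto3.client("s3")
    try:
        cfg = s3.get_public_access_block(Bucket=bucket)["PublicAccessBlockConfiguration"]
    except ClientError:
        return False  # no configuration at all is treated as potentially public
    return all(cfg.get(key, False) for key in (
        "BlockPublicAcls", "IgnorePublicAcls",
        "BlockPublicPolicy", "RestrictPublicBuckets",
    ))

for name in ["example-draft-assets"]:  # hypothetical bucket name
    if not blocks_public_access(name):
        print(f"WARNING: {name} may be publicly reachable")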
New AI models will increase AI attack threats
The larger shift is even more consequential.
Anthropic’s own research, revealed by the exposure, suggests frontier AI models are becoming increasingly capable of finding serious vulnerabilities, navigating multistep attack paths, and helping produce exploit strategies at a pace that compresses the defender’s response window. Anthropic has said defenders may still hold a temporary edge, but it has also warned that this advantage may not last.
That changes the threat model fundamentally. Attackers no longer need the same depth of expertise, time, or custom tooling to cause serious damage. AI is lowering the skill barrier, increasing the speed of attacks, and making threats more scalable. AI is changing cybersecurity, and more dangerously than before.
What this means for leaders
For leaders and teams, the implication is straightforward.
Cybersecurity is no longer just about protecting the perimeter or improving patch cycles. It is about managing trust across software supply chains, reducing the impact of inevitable mistakes, and preparing for a world in which attackers can move faster than ever before. For manufacturers of safety-critical systems like medical devices, this is even more urgent and important.
The organizations that adapt best will be the ones that treat software governance, configuration discipline, access control, and operational resilience as leadership priorities, not merely technical concerns.
Cybersecurity has moved to 24/7 … proceed accordingly.
