The Autonomy Shift: A Structural Perspective on Where Cybersecurity Is Heading

Mahendran Chandramohan
Senior Vice President & Head - Engineering & DFIR

Over the past year, I’ve been reflecting on how cybersecurity is evolving — not in terms of new tools, but in terms of underlying assumptions.

For decades, enterprise security has been built around a relatively stable model: authenticate the actor, authorize access, monitor for misuse, and respond when something looks wrong. That model has adapted well to automation, large-scale attacks, and increasingly sophisticated threat actors. The fundamentals remain sound. But some of the assumptions beneath them are beginning to show stress.

What is changing is not simply the volume or speed of attacks. It is the nature of how decisions are made on both sides of the equation.

We are gradually moving from automated execution — scripts and playbooks running at scale — toward more autonomous and adaptive systems. Increasingly capable AI agents can explore environments, test hypotheses, and refine their approach with minimal human intervention. This does not represent a sudden collapse of existing security practices. But it does introduce structural pressures that we should be designing for now, not reacting to later.

The Detection Problem Is Shifting

Mature security teams are already well-equipped to detect obvious automation — large-scale scanning, credential abuse, distributed bot activity, known ransomware playbooks. We have built strong defenses against adversaries who repeat themselves.

The emerging challenge is subtler: activity that appears entirely legitimate in isolation but raises concern when viewed in aggregate. Low-and-slow reconnaissance. Coordinated probing across multiple vectors. Behavioral mimicry rather than brute force. These approaches are specifically designed to stay below the thresholds that traditional detection is calibrated to catch.

This shifts part of the detection problem from identifying patterns to understanding context and intent. That is a meaningfully harder problem, and most security architectures were not designed with it in mind. Broader correlation, faster analysis, and AI-assisted triage will become baseline requirements — not competitive advantages — for security teams operating in this environment.
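To make the idea of aggregate correlation concrete, here is a minimal sketch. Everything in it — the event shape, the identity and asset names, and both thresholds — is an illustrative assumption, not a reference to any particular product. The point it demonstrates: each source stays below a per-source alerting threshold, and only the combined footprint across sources raises the flag.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical event shape: (timestamp, source_identity, target_asset)
WINDOW = timedelta(days=7)   # long window, so low-and-slow activity accumulates
PER_SOURCE_LIMIT = 5         # each source individually looks quiet below this
AGGREGATE_LIMIT = 20         # combined asset coverage that warrants attention

def correlate(events, now):
    """Flag campaigns whose per-source activity is benign in isolation
    but whose combined asset coverage suggests coordinated probing."""
    recent = [e for e in events if now - e[0] <= WINDOW]
    by_source = defaultdict(set)
    for ts, source, target in recent:
        by_source[source].add(target)
    # Only sources that individually stay under the radar are of interest;
    # noisy sources would already have tripped conventional detection.
    quiet = {s: t for s, t in by_source.items() if len(t) <= PER_SOURCE_LIMIT}
    combined = set().union(*quiet.values()) if quiet else set()
    return len(combined) >= AGGREGATE_LIMIT, combined
```

In a real pipeline the correlation key would be richer than a source name (infrastructure fingerprints, timing signatures, shared TTPs), but the structural shift is the same: the unit of judgment moves from the single event to the campaign.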

Identity Is Becoming More Complex

At the same time, the identity landscape is changing in ways that compound the detection challenge.

Non-human identities — service accounts, API tokens, automation agents, CI/CD pipelines — already represent a significant and often under-governed portion of access activity in enterprise environments. As organizations deploy AI copilots and intelligent assistants across HR, finance, engineering, legal, and customer operations, this landscape becomes considerably more layered.

In such an environment, authentication alone is rarely sufficient to establish trust. Knowing that a credential is valid tells you progressively less when that credential belongs to an automated system acting on behalf of a user, within a workflow, across multiple integrated platforms. The chain of accountability becomes attenuated in ways that point-in-time authentication cannot resolve.

Continuous, behavioral validation — understanding how users and systems typically act, and identifying meaningful deviations from that baseline — becomes increasingly important. This is not a replacement for authentication. It is a necessary evolution of it. If I were to make one concrete recommendation for security teams today, it would be this: establish behavioral baselines for your non-human identities before your next AI deployment, not after.
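As a rough illustration of what such a baseline might look like for a non-human identity, here is a toy sketch — the class, thresholds, and operation names are hypothetical, and production systems would model far more than operation frequency (timing, volume, target scope). It shows the core mechanic: learn what a service account normally does, refuse to judge until the baseline is warm, then treat operations that are rare against that history as deviations worth reviewing.

```python
from collections import Counter

class ServiceAccountBaseline:
    """Toy behavioral baseline for a non-human identity: learn the
    distribution of operations it normally performs, then score deviations."""

    def __init__(self, min_observations=100):
        self.counts = Counter()
        self.total = 0
        self.min_observations = min_observations

    def observe(self, operation):
        """Record one observed operation into the baseline."""
        self.counts[operation] += 1
        self.total += 1

    def is_anomalous(self, operation, rarity_threshold=0.01):
        """True if the operation is rare relative to the learned history."""
        # Refuse to judge until the baseline has enough history.
        if self.total < self.min_observations:
            return False
        freq = self.counts[operation] / self.total
        return freq < rarity_threshold
```

Note that the output is a signal for review, not an automatic block — the evolution described above layers behavioral context on top of authentication rather than replacing it.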

The Autonomous Insider

There is a related risk category that deserves more attention than it currently receives.

The AI copilots and automation agents being deployed across enterprise functions operate with delegated, legitimate access. They are trusted by design. The risk is not necessarily overt compromise — it is subtle manipulation. Through prompt injection, data poisoning, or gradual behavioral drift, an internal system with legitimate permissions can begin operating slightly outside its intended purpose, quietly and persistently, while appearing entirely normal to the teams that depend on it.

This is what I would call the autonomous insider: not a rogue employee, not a stolen credential, but a trusted internal system that has been quietly redirected. It sits in a gap that most current security programs were not designed to address — somewhere between insider threat, misconfiguration, and supply chain risk.

Establishing governance frameworks, behavioral monitoring, and clear accountability for enterprise AI and automation systems should be part of security architecture conversations now. The deployments are already happening. The governance, in most organizations, has not caught up.
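One concrete starting point — sketched below with entirely hypothetical agent names, action names, and policy structure — is to make each agent's delegated scope explicit and auditable: every action is checked against a declared capability set with a named owner, so "slightly outside its intended purpose" becomes a detectable event rather than an invisible drift.

```python
# Hypothetical sketch: a declarative capability policy for automation agents.
# Each agent has an explicit allowed-action set and a named accountable owner.
AGENT_POLICY = {
    "hr-copilot": {
        "allowed_actions": {"read:employee_record", "draft:offer_letter"},
        "owner": "hr-platform-team",  # clear accountability for this agent
    },
}

def authorize(agent, action, audit_log):
    """Allow an agent action only if its policy declares it; log every
    decision so out-of-scope attempts leave an audit trail."""
    policy = AGENT_POLICY.get(agent)
    allowed = policy is not None and action in policy["allowed_actions"]
    audit_log.append((agent, action, "allow" if allowed else "deny"))
    return allowed
```

The denied attempts in that audit trail are precisely the early signal of an autonomous insider: a trusted system reaching for capabilities no one intended to delegate.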

Protecting Your Own Defenses

There is one more structural consideration that I believe is underweighted in most security discussions: the integrity of defensive systems themselves.

As security teams adopt AI-assisted detection, correlation, and triage, the data and models powering those systems become critical assets in their own right. A detection system running on manipulated telemetry or quietly drifting training data does not fail loudly — it degrades silently, producing subtly worse decisions over time while appearing to function normally. The attacker who understands this benefits from every false negative your system generates.

Applying adversarial thinking to our own defensive stack — validating data provenance, monitoring for model drift, periodically stress-testing detection logic against novel inputs — is not a theoretical concern. For organizations that are already reliant on AI-assisted security tooling, it is a present-day operational responsibility.
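One widely used way to catch silent degradation is to compare the distribution of a model's scores today against a trusted baseline period. The sketch below computes a population stability index over binned scores; it assumes scores lie in [0, 1), and the conventional rule of thumb that values above roughly 0.2 signal meaningful drift is a heuristic, not a hard limit.

```python
import math

def population_stability_index(baseline, current, bins=10):
    """Population Stability Index between two samples of scores in [0, 1).
    0 means identical distributions; larger values mean more drift."""
    def histogram(scores):
        counts = [0] * bins
        for s in scores:
            idx = min(int(s * bins), bins - 1)
            counts[idx] += 1
        # Smooth empty buckets so the log term stays finite.
        return [(c + 0.5) / (len(scores) + 0.5 * bins) for c in counts]

    b, c = histogram(baseline), histogram(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))
```

Run on a schedule against a pinned baseline, a check like this turns "degrades silently" into an alert — which is exactly the adversarial posture toward our own tooling that the paragraph above argues for.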

The Direction of Travel

None of this suggests that existing frameworks are obsolete. Defense in Depth remains the right foundational model. But its execution must evolve: detection needs broader and faster correlation, identity must be continuously contextualized, enterprise automation requires governance, and defensive AI must be treated as an asset that itself requires protection.

The direction is clear. Cybersecurity is moving toward an environment where adaptive systems operate on both sides of every engagement. Human judgment remains essential — particularly for high-stakes decisions, novel situations, and governance. The goal is not to remove it. It is to ensure that by the time a human needs to intervene, the systems around them have already detected, correlated, and contained the majority of the risk.

Organizations that begin designing for this shift now will be better positioned when it fully arrives. Those that treat it as a future concern may find themselves reacting under pressure rather than responding with intention.

This is not a call for alarm. It is a call for thoughtful architecture.
