In late 2025, Amazon disclosed a security incident that, on the surface, sounded almost trivial. A remote IT worker had been unmasked as a foreign infiltrator because of a slight delay in keystroke timing. About 110 milliseconds. Barely noticeable to a human. Invisible to traditional security controls.
But that detail is precisely why the case matters.
The worker had valid credentials. He had passed hiring checks and used approved tools. There was no malware, no exploit, no policy violation in the conventional sense. What exposed the intrusion was behavior that didn't quite align with what Amazon's systems understood as "normal" for a U.S.-based remote employee. That small, consistent latency triggered deeper investigation and ultimately revealed that the system was being remotely controlled from abroad, tied to a fabricated identity and a broader North Korean remote worker scheme.
Amazon later confirmed it had blocked more than 1,800 similar fraudulent applications. The volume alone should concern CISOs. But the deeper lesson lies in how detection happened. Not through signatures. Through behavioral deviation.
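The mechanics behind this kind of detection are less exotic than they sound. A minimal sketch, assuming a per-user history of inter-keystroke intervals; the function name, thresholds, and sample values here are illustrative, not Amazon's implementation:

```python
import statistics

def latency_anomaly(baseline_ms, session_ms, z_threshold=3.0):
    """Flag a session whose typical inter-keystroke interval has
    drifted from the user's historical baseline.

    baseline_ms / session_ms: lists of inter-keystroke gaps in ms.
    Returns (is_anomalous, shift_ms)."""
    mu = statistics.mean(baseline_ms)
    sigma = statistics.stdev(baseline_ms)
    session_median = statistics.median(session_ms)
    shift = session_median - mu
    z = shift / sigma if sigma else 0.0
    return abs(z) >= z_threshold, shift

# A user who normally types with ~80 ms gaps; a relayed session
# adds a consistent ~110 ms of round-trip delay to every keystroke.
baseline = [75, 82, 79, 85, 78, 81, 80, 77, 84, 79]
relayed = [ms + 110 for ms in baseline]
flag, shift = latency_anomaly(baseline, relayed)
```

The point is not the statistics, which are trivial, but the prerequisite: you can only score the drift if you were already recording the baseline.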
That same lesson echoes across some of the most consequential security incidents of the last decade.
If the Amazon case feels unusually elegant, it’s because it exposes a truth security teams have been grappling with for years. Modern attackers don’t break in loudly. They log in quietly.
Take the 2019 breach at Capital One, which is often cited as a cloud misconfiguration failure. But once again, the more uncomfortable reality sits beneath the surface. The attacker didn't deploy malware or trigger alarms. They used legitimate AWS access paths and standard tools. Everything they did was technically allowed.
What made the activity suspicious in hindsight wasn’t what was accessed, but how. Data was enumerated across systems that didn’t normally interact in that sequence. Access velocity exceeded what operational workflows required. The behavior made sense to an attacker, not to the business. Security researchers later pointed out that behavioral analytics could have flagged those anomalies early, even though no explicit rule was broken.
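Sequence anomalies of this kind can be sketched with nothing more exotic than transition counts over historical access logs. The system names and the `min_support` threshold below are hypothetical, and a real deployment would score rarity probabilistically rather than with a hard cutoff:

```python
from collections import Counter

def rare_transitions(history, session, min_support=2):
    """Return the consecutive system-to-system transitions in a
    session that were seen fewer than `min_support` times across
    all historical sessions.

    history: list of past sessions, each a list of system names.
    session: the session to score."""
    seen = Counter()
    for seq in history:
        seen.update(zip(seq, seq[1:]))
    return [pair for pair in zip(session, session[1:])
            if seen[pair] < min_support]

history = [
    ["portal", "crm", "billing"],
    ["portal", "crm", "billing"],
    ["portal", "wiki"],
]
# An attacker enumerating systems in an order the business never uses.
session = ["billing", "crm", "hr-records", "portal"]
rare_steps = rare_transitions(history, session)
```

Access velocity works the same way: count events per unit time per identity, compare against that identity's own history, and flag the outliers. No rule about "bad" systems is needed; the sequence itself is the signal.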
This pattern repeated itself, at a far larger scale, during the SolarWinds supply-chain compromise.
The SolarWinds attack remains a defining moment because it dismantled several long-held assumptions. The malware was digitally signed. Delivered through legitimate updates. Signature-based defenses had nothing to latch onto.
What finally exposed the intrusion were subtle inconsistencies in identity use and lateral movement. FireEye, which later rebranded as Mandiant, noticed authentication behavior that didn't align with normal administrative activity, including an MFA device registered to an employee account that the employee didn't recognize. The attackers were cautious, but behavior over time betrayed intent.
SolarWinds forced a reckoning. If adversaries can operate entirely within trusted channels, then trust itself becomes the attack surface. Detection can no longer depend on recognizing known threats. It must focus on recognizing when legitimate access is being misused.
That realization is now playing out inside enterprises every day.
When Insiders and Outsiders Behave the Same
At Google, internal security teams have long monitored behavioral patterns to detect misuse of user data. Engineers and support staff access information in predictable ways. When query sequences drift, when access doesn’t align with role, alarms trigger automatically. No malware required. No explicit policy breach necessary. Behavior alone is enough.
The same convergence of insider and outsider behavior was visible during the 2022 breach at Uber. Attackers used stolen credentials and MFA fatigue to gain access. Once inside, they behaved exactly like employees, navigating Slack, dashboards, and internal tools through approved paths. Detection came not from signatures, but from access patterns that didn’t make sense for a newly authenticated user.
Across these cases, a single theme emerges. The attacker doesn’t need to look malicious. They just need to look slightly wrong.
Why Remote Work Makes This Problem Worse
Remote work didn’t create these risks, but it amplified them.
In a distributed workforce, identity is no longer proof of legitimacy. Credentials are assumed compromised. Devices roam across unmanaged networks. Context changes constantly. Attackers and employees often look indistinguishable at the surface level.
Most damaging breaches today are not explosive. They are slow. Patient. Low-volume. Designed to stay beneath static thresholds. This is why signature-based systems struggle. They were built for certainty. Modern threats thrive on ambiguity.
Behavioral analytics thrives there too.
Across Amazon, Capital One, SolarWinds, Google and multiple government cases, the same pattern holds:
Attackers used legitimate credentials.
Accessed systems through approved tools.
Avoided known indicators of compromise.
Were ultimately exposed by sequence, timing, volume, or contextual anomalies.
This is not a coincidence. It is the new normal.
For CISOs, the strategic shift is clear. Security can no longer rely primarily on knowing what “bad” looks like. It must understand what “normal” looks like, continuously, and at scale.
Behavioral analytics does not replace signatures. It steps in where signatures fail. In a world defined by remote work, cloud platforms, and identity-driven access, behavior is the most reliable signal left. And unlike malware hashes or IP addresses, behavior is exceptionally hard to fake consistently over time.
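What "understanding normal, continuously, and at scale" means in practice can be made concrete with a toy example: an exponentially weighted baseline per user, per signal. The class below tracks a single numeric signal, such as records accessed per hour, and is a sketch only; production behavioral-analytics platforms model many signals jointly and handle cold starts far more carefully:

```python
class BehaviorBaseline:
    """Continuously updated per-user baseline for one numeric signal,
    using an exponentially weighted mean and variance. Illustrative
    sketch, not a production detector."""

    def __init__(self, alpha=0.1, z_threshold=3.0):
        self.alpha = alpha              # how fast the baseline adapts
        self.z_threshold = z_threshold  # deviation cutoff in std devs
        self.mean = None
        self.var = 0.0

    def observe(self, x):
        """Score x against the current baseline, then fold it in.
        Returns True if x deviated beyond the threshold."""
        if self.mean is None:
            self.mean, anomalous = x, False
        else:
            std = self.var ** 0.5
            anomalous = std > 0 and abs(x - self.mean) / std > self.z_threshold
            d = x - self.mean
            self.mean += self.alpha * d
            self.var = (1 - self.alpha) * (self.var + self.alpha * d * d)
        return anomalous

b = BehaviorBaseline()
# Weeks of routine activity: roughly 18-23 records per hour.
quiet = [20, 23, 18, 22, 19, 21, 20, 22, 19, 21] * 3
flags = [b.observe(x) for x in quiet]
spike_flag = b.observe(400)  # sudden bulk access
```

The design choice that matters is the exponential weighting: the baseline keeps adapting as roles and workloads change, so "normal" is a moving target the detector tracks rather than a rule someone wrote once.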
The Amazon case wasn’t clever. It was inevitable.
The only surprise is how many organizations are still not looking for their own 110 milliseconds.