95% of cyber breaches now start with human error — AI phishing is making it worse, warns Threatcop report

Cybercriminals are using artificial intelligence to outpace corporate security training, weaponising human behaviour at a scale organisations are struggling to defend against. According to the Threatcop People Security Report 2025, human error is now linked to 95% of all cyber breaches, as AI-driven phishing and deception techniques rapidly evolve beyond the reach of traditional awareness programmes.

The report delivers a stark verdict: static security training is no longer working. As attackers use AI to dynamically adapt language, timing, context, and targeting, knowledge retention from periodic employee training sessions has dropped to near zero. Despite record investment in security tools, people remain the most exploited attack surface.

Rather than exploiting software flaws, attackers are increasingly relying on social engineering, credential misuse, and behavioural manipulation. The financial impact is already severe. Business Email Compromise alone caused over $3 billion in global losses in 2023, proving that behaviour-based attacks now translate directly into boardroom-level risk.

A key concern highlighted in the report is the erosion of what CISOs call the “golden hour”—the critical window between initial compromise and detection, where containment can prevent major damage. AI-powered attackers are moving faster during this window, while organisations struggle to detect early behavioural signals that indicate a breach is underway.

The report argues that defending against AI-driven deception now requires AI on the defender’s side. Security leaders interviewed for the study stress that without continuous, adaptive employee simulations that mirror real-world attack behaviour, organisations are effectively training for threats that no longer exist.

The risk is especially acute in highly regulated industries. Threatcop found that 95% of attacks on financial institutions involve a human element, increasing pressure on banks and insurers to rethink people-centric security controls as AI compresses attack cycles and shortens detection timelines.

Commenting on the findings, Pavan Kushwaha, CEO, Threatcop & Kratikal, said: “AI has completely changed the economics of social engineering. Attackers can now test, refine, and deploy deception at a scale that traditional awareness training was never built for. Our research shows organisations must shift from occasional training sessions to continuous, AI-driven testing that reflects how real attacks unfold. Without that shift, the gap between compromise and detection will only keep widening.”
