From 20 Lives an Hour to Zero: Can AI Power India’s Road Safety Reset?

Vinay Rai, Founding Member and Executive Vice President – Technology at Netradyne

India has made a clear and ambitious commitment. Under the Stockholm Declaration, the country aims to reduce road accident fatalities by 50% by 2030. But the numbers remind us how urgent this mission is. As per MoRTH data, India still loses nearly 20 lives every hour to road crashes. Beyond the human tragedy, the economic cost is staggering: road accidents are estimated to erode 3.14% of India's GDP every year.

What makes the challenge more complex is where these accidents occur. National and state highways account for less than 5% of India’s road network, yet they contribute to over half of all road fatalities. These corridors power India’s logistics backbone. They carry commercial fleets, long-haul drivers working against tight schedules, and vehicles operating under constant time pressure. Fatigue, distraction, and human error become systemic risks — not isolated incidents.

The gap between India’s 2030 commitment and current reality cannot be closed through enforcement and post-accident reporting alone. It demands a prevention-first approach — one that intervenes before a crash happens. This is where AI-led safety systems are beginning to redefine the equation. By analysing 100% of driving time, detecting behaviours such as drowsiness and distraction in real time, and delivering instant in-cab alerts, edge-based vision technology is shifting road safety from reactive investigation to proactive intervention.

In this conversation with Vinay Rai, Founding Member and Executive Vice President – Technology at Netradyne, we explore whether India can realistically meet its 2030 road safety targets, how AI-powered prevention systems can be deployed at national scale, and what it will take to embed intelligence directly into the country’s transport backbone — where safety, productivity, and economic growth intersect.

Some edited excerpts:

India has committed to reducing road fatalities by 50% by 2030, yet we still lose nearly 20 lives every hour. From a technology standpoint, where is the biggest disconnect between policy intent and on-ground execution?
From a tech lens, the missing piece on the ground is continuous risk detection with immediate correction, at scale. Think of it like this: if the only time a driver feels the consequence of risk is at a checkpoint, behaviour changes briefly. When the "nudge" happens during the risky moment itself, exactly when speed crosses a threshold, the driver gets distracted, or the following distance collapses, behaviour changes more consistently, because the driver can self-correct in that exact moment.

Hence, the conversation has been shifting from recording and post-hoc analysis to faster, real-time in-cab alerts and a coaching loop that is actually sustainable. The government's own approach recognises this direction through the 2030 target under the Stockholm commitment, but the real acceleration will come when prevention through technology becomes a consistent, built-in part of every journey, not just seasonal or symbolic. Behaviour change takes time; that's where technology intervention will play a critical role.

Highways form less than 5% of India’s road network but account for over half of fatalities. Why do these corridors remain so dangerous, and what specific driver behaviours contribute most to this risk?
Highways are built to be smooth and predictable, and that's exactly why they can, at times, turn risky. Long, straight, low-friction stretches create a false sense of control. Drivers settle into a steady rhythm where attention can drift and fatigue can creep in without warning. Psychologists sometimes refer to this as "highway hypnosis", an autopilot state where you're technically awake but less mentally sharp. On Indian corridors, that autopilot state plays out in a far more chaotic reality of mixed traffic, unpredictable cut-ins, slow-moving vehicles, and sudden speed swings. So the moment attention dips, the road punishes you.

Within human factors, official data often points to over-speeding as the dominant cause. MoRTH's Road Accidents in India 2023 attributes nearly 68% of accidents and fatalities to speeding. But what this data cannot explain is context: what was happening around that speed. Imagine if we could see not just that speeding occurred, but the behaviours unfolding in those critical moments. Whether a driver was tailgating at highway speeds, momentarily distracted, fatigued, or reacting late to a vehicle ahead is rarely visible in post-incident records. Yet these behaviours dramatically amplify risk.

This is exactly where preventive technology helps because it works in the same time window as the risk. Vision-based, in-cab AI can detect early cues of fatigue or distraction, unsafe following distance, and speed creep, and alert the driver immediately, while there’s still a safe window to respond.
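To make the "safe window to respond" idea concrete, here is a minimal, illustrative sketch of a following-distance check of the kind such a system might run. The two-second time-headway threshold, the function names, and the triggering logic are all assumptions for the sketch, not Netradyne's actual implementation.

```python
# Hypothetical following-distance (time-headway) check.
# The 2-second threshold is an illustrative assumption.

def headway_seconds(gap_m: float, speed_mps: float) -> float:
    """Time headway: seconds until the vehicle covers the current gap."""
    if speed_mps <= 0:
        return float("inf")  # stationary: no closing risk from speed
    return gap_m / speed_mps

def should_alert(gap_m: float, speed_kmph: float, threshold_s: float = 2.0) -> bool:
    """Alert when the time gap to the vehicle ahead drops below the threshold."""
    speed_mps = speed_kmph / 3.6
    return headway_seconds(gap_m, speed_mps) < threshold_s
```

The point of framing the check in time rather than metres is that the same 30-metre gap is comfortable at 40 km/h but dangerously tight at 80 km/h, which is exactly the "speed creep" risk described above.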

Traditionally, road safety has been reactive—focused on reporting accidents after they happen. What does a prevention-first safety model look like in practice, and how is it fundamentally different?
A prevention-first model starts earlier in the timeline. It focuses on reducing the risk build-up that leads to a crash, not just documenting the crash after it happens. Edge AI and vision-based safety fundamentally change this approach by enabling prevention in real time, not just analysis after the fact. In practice, prevention-first means three things happen consistently on every trip. First, risk is detected continuously across 100% of drive time, not just when a harsh event is triggered, because the "before" is usually where the lesson is.

Second, drivers get real-time, in-vehicle alerts when behaviours like tailgating, distraction, lane drift, or early fatigue cues start showing up, so they can correct immediately, while there's still room for correction.

Third, those same alerts and events are shared with fleet managers with the right video context, so safety teams do not have to hunt through footage. The driving behaviours also roll up into a score that helps managers spot patterns, prioritise coaching, and recognise improvement, so it becomes a repeatable process rather than a one-off intervention. Over time, this is what creates a structured coaching loop that reduces repeat behaviours month over month, instead of repeating the same post-incident conversations.

Netradyne talks about analysing 100% of driving time using on-device Edge AI. Why is continuous, in-cab intelligence critical compared to periodic monitoring or post-trip analytics?
Most serious incidents don’t come out of nowhere. They come from a few ordinary seconds where risk stacks up, like a closing gap, a brief glance away, or fatigue building near the end of a shift. If you only sample driving periodically, you miss those sequences. If you only rely on post-trip analytics, you learn what happened after the fact, when the driver no longer has a chance to correct that moment.

That is why analysing 100% of driving time matters. It captures what led up to risk, how often it repeats, and under what conditions it shows up. It also gives fleets a fair baseline to measure improvement, spot repeat patterns across routes and shifts, and coach consistently instead of relying on anecdotes. Vision-based Edge AI is what makes this preventive rather than retrospective. Vision adds context that basic telemetry cannot. It shows not only what went wrong, but what was happening around it and why it unfolded. That helps in two ways at once. It supports in-the-moment correction for drivers, and it gives fleets clearer insight into what is going right and what is going wrong, so coaching improves trip after trip.

Driver fatigue and distraction are often cited as major causes of crashes. How reliably can AI detect these human factors in real time, and how do you avoid false alerts that drivers may ignore?
Fatigue and distraction can be detected reliably in real time when the system looks for patterns that build over a short window, instead of just reacting to a single frame. For drowsiness, vision-based detection relies on measurable indicators such as eye closures, blink rate, blink duration, and the percent of eye closure over time. It also considers head movement and gaze stability so the system can pick up early signs like microsleeps rather than waiting for a full lapse.
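The "patterns over a short window" idea can be sketched with a simplified PERCLOS-style monitor (percent of eye closure over time). The eye-openness cutoff, window length, and alert threshold below are illustrative assumptions for the sketch, not the calibrated values of any production system.

```python
from collections import deque

class PerclosMonitor:
    """Illustrative rolling-window drowsiness cue based on PERCLOS:
    the fraction of recent frames in which the eyes were closed."""

    def __init__(self, window: int = 90, closed_below: float = 0.2,
                 perclos_threshold: float = 0.15):
        self.samples = deque(maxlen=window)     # ~3 s of frames at 30 fps
        self.closed_below = closed_below        # openness below this = "closed"
        self.perclos_threshold = perclos_threshold

    def update(self, eye_openness: float) -> bool:
        """Feed one per-frame eye-openness value in [0, 1].
        Returns True when the closed-frame fraction crosses the threshold."""
        self.samples.append(eye_openness < self.closed_below)
        if len(self.samples) < self.samples.maxlen:
            return False  # not enough history to judge yet
        perclos = sum(self.samples) / len(self.samples)
        return perclos >= self.perclos_threshold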

Distraction is similar in spirit: it is not just about whether a driver glanced away, but whether attention moved away from the road at a moment when the driving context demanded it. The WHO states that drivers using a mobile phone are approximately four times more likely to be involved in a crash than drivers who are not.

Avoiding false alerts is just as important as detecting risk, because drivers will only respond to an alert they trust.

We manage this by designing for precision, meaning that when the system flags an event, it is highly likely to be real. The way you get there is context and corroboration. The models learn what "normal" looks like on the roads they operate on, and alerts are strengthened when multiple risk signals stack up together in a tight window, because that is what serious incidents often look like in the seconds before they happen. Scale matters here, too. When models are trained on 25+ billion vision-analysed miles, you get better accuracy overall and better precision in the moments that trigger an alert, which keeps alerts fewer, more meaningful, and harder to ignore.
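The corroboration idea, only alerting when several distinct risk signals co-occur in a short window, can be sketched as follows. The signal names, the five-second window, and the two-signal minimum are illustrative assumptions, not the product's actual fusion logic.

```python
class Corroborator:
    """Illustrative multi-signal corroboration: raise an alert only when
    enough distinct risk signals fire within a short time window."""

    def __init__(self, window_s: float = 5.0, min_signals: int = 2):
        self.window_s = window_s
        self.min_signals = min_signals
        self.recent: dict[str, float] = {}  # signal name -> last-seen timestamp

    def observe(self, signal: str, now: float) -> bool:
        """Record one risk signal at time `now` (seconds).
        Returns True when distinct in-window signals reach the minimum."""
        self.recent[signal] = now
        # Drop signals that have fallen outside the window.
        self.recent = {s: t for s, t in self.recent.items()
                       if now - t <= self.window_s}
        return len(self.recent) >= self.min_signals
```

A single speeding reading on its own stays silent; speeding plus tailgating inside the same few seconds triggers, which is the "risk stacking up" pattern the answer describes.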

Commercial vehicle drivers operate under tight delivery timelines and economic pressure. How do you balance safety interventions with productivity, without creating resistance from drivers?
Commercial drivers work under intense pressure. Long hours, unpredictable traffic, and tight delivery windows are the daily reality. One India-based study found that, on average, a truck driver clocks 11.9 driving hours a day, and 49% of drivers said they still drive even when they feel fatigued or sleepy. The government has also acknowledged this fatigue exposure and spoken about fixing driving hours for commercial drivers.

So it is not “safety versus productivity.” It is about reducing the behaviours that destroy productivity in the first place. Crashes are the biggest disruption, and even before a crash, harsh braking, tailgating, and aggressive lane changes raise operating costs and make ETAs unstable. The way to avoid resistance is to make safety feel like support, not punishment.

Alerts need to be timely and relevant so drivers can correct in the moment, and coaching needs to be fair and consistent so it is accepted. That is why recognition matters as much as correction. GreenZone scoring makes this tangible with a score that reflects both safe and risky behaviours, so drivers get credit for doing the right things consistently, not just penalties. DriverStars adds positive reinforcement so progress feels visible and fair. When fleets combine that approach with consistent coaching, the business impact becomes measurable.
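The idea of a score that credits safe behaviour rather than only penalising risk can be illustrated with a minimal sketch. The event weights, the kilometre-credit rule, and the 0-1000 scale are assumptions for illustration; this is not the GreenZone formula.

```python
# Hypothetical event weights; real systems would calibrate these.
RISK_WEIGHTS = {"tailgating": 25, "distraction": 40, "harsh_braking": 15}

def driver_score(events: dict[str, int], safe_km: float,
                 base: int = 850, max_score: int = 1000) -> int:
    """Start from a base score, subtract weighted risky events,
    and add capped credit for incident-free kilometres driven."""
    penalty = sum(RISK_WEIGHTS.get(name, 10) * count
                  for name, count in events.items())
    credit = min(safe_km // 100, 50)  # positive recognition, capped
    return max(0, min(max_score, base - penalty + int(credit)))
```

The design point is that the score moves in both directions: a clean 500 km shift raises it, so drivers see credit for doing the right things, not only deductions.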

Many fleets already have cameras installed. What is the difference between simply recording video and extracting actionable intelligence from it?
Most fleets already have cameras, but many are still used as black boxes. The footage helps after something goes wrong, for claims, disputes, or investigations. That is useful, but it is reactive and it does not reduce the next incident.

Vision-based edge AI provides real-time alerts and course-corrects drivers when risky behaviours or manoeuvres arise. Most importantly, it understands context, with both an inward-facing and an outward-facing camera. This gives the system insight into why a driver acted a certain way, so it can alert immediately.

Because the video is tied to context, coaching becomes specific and fair, and drivers respond better. It also changes fleet management. Instead of teams hunting for clips, they get prioritised events and trend views across routes, shifts, and driver groups. That makes coaching consistent, reduces operational overhead, and improves cost-per-kilometre by cutting preventable downtime.

India’s road safety challenge is also a scale challenge—millions of vehicles, fragmented fleets, and varying compliance levels. How feasible is it to deploy AI-led prevention systems nationwide?
It’s an important question, and we are already seeing the shift happening. More enterprises are moving from “safety as compliance” to “safety as a culture” because the cost of incidents is too high to absorb.

At India’s scale, deployment works when it is practical for fragmented fleets. That means low-friction rollout, driver-first adoption, and prevention that runs consistently even when operating conditions and compliance levels vary.

When drivers see it as fair support and managers see repeat risky behaviour reduce over time, adoption spreads and outcomes follow. The ecosystem is also improving upstream. The government is strengthening digital accident reporting through systems like e-DAR, which helps the country understand patterns better over time. When better national data is paired with consistent, vehicle-level prevention, nationwide deployment becomes an execution path, not a distant ambition.

Road crashes cost India an estimated 3.14% of GDP annually. From your experience, how quickly can fleets see measurable financial returns from investing in AI-driven safety?

In India, the socio-economic cost of one road death has been estimated at around ₹91.16 lakh. That single number explains why fleets can see returns from AI-driven safety faster than they expect. A serious incident does not just create a repair bill. It creates downtime, missed deliveries, claims, legal exposure, driver churn, and a hit to service reliability. Those costs show up immediately in day-to-day operations and in cost-per-kilometre.

At the fleet level, the return usually arrives in two connected steps. First, you see a behaviour shift because real-time, in-cab feedback reduces repeat high-risk moments across every trip. Then you see the financial impact because fewer high-risk moments translate into fewer incidents, lower claim severity, and better vehicle availability. That is the prevention-first model in practice, and it compounds because it runs on every kilometre, not only after something goes wrong.

Looking ahead to 2030, do you believe India can realistically meet its road fatality reduction target? What must change in the next 3–4 years for that goal to remain achievable?
India can still meet the 2030 target, but only if safety stops being a set of campaigns and starts being managed like an operating system. The 50 per cent reduction commitment is already on record under the Stockholm Declaration, and the urgency is clear in the latest road death totals presented in Parliament.

What has to change in the next three to four years is the feedback loop. Today, we still learn too late, after the crash. MoRTH's own data shows most serious incidents happen in "normal" conditions, with 76.1 per cent of accidents and 72.3 per cent of fatalities in sunny or clear weather, and straight roads accounting for the largest share of accidents and deaths. That points to behaviour and attention, not just infrastructure. So the shift has to be prevention at scale, especially in commercial fleets, using continuous in-cab intelligence that works in real time, delivers fewer but credible alerts, and feeds consistent, driver-first coaching and recognition. When repeat risk drops month after month across high-kilometre fleets, national numbers start moving for real. The other thing that must change is safety as a culture. We are already seeing this shift in enterprises; it should become the standard, not a compliance exercise.
