At a time when real-time intelligence is redefining digital businesses, Porter is rethinking how modern systems should operate—shifting from service-heavy architectures to a truly event-driven enterprise.
Speaking at Confluent’s Data Streaming World Tour in Mumbai, Ambuj Singh, Chief Architect at Porter, outlined how the company rebuilt its data backbone using Kafka to enable scale, resilience, and real-time decision-making.
Porter’s journey began like many digital-native startups—with a monolith that eventually gave way to microservices. But while microservices promised flexibility, they also introduced new challenges. Services became tightly coupled through constant API calls, creating unpredictable system loads and cascading dependencies. A single spike—like recalculating earnings across orders—could overwhelm core systems.
The turning point came with a simple but powerful principle: publish everything.
Instead of services calling each other, every significant action is emitted as an event to Kafka. Systems no longer request data—they react to it. An order placement, for instance, triggers downstream processes like driver allocation or order history updates without direct service-to-service communication. This shift dramatically reduced interdependencies and improved system stability, especially during periods of rapid growth.
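The "publish everything" pattern can be sketched with a minimal in-memory event bus. In production this is a Kafka topic with multiple consumer groups; the handler names and event fields here are illustrative, not Porter's actual schema.

```python
from collections import defaultdict

# Minimal in-memory stand-in for a Kafka topic: handlers subscribe to an
# event type and react when it is published; no service calls another
# service directly.
_subscribers = defaultdict(list)

def subscribe(event_type, handler):
    _subscribers[event_type].append(handler)

def publish(event_type, payload):
    for handler in _subscribers[event_type]:
        handler(payload)

# Hypothetical downstream reactions to an "order_placed" event.
allocations, history = [], []
subscribe("order_placed", lambda e: allocations.append(e["order_id"]))
subscribe("order_placed", lambda e: history.append(e))

publish("order_placed", {"order_id": "o-42", "pickup": "Andheri"})
# allocations -> ["o-42"]; history holds the full event
```

The key property is that the publisher does not know, and does not care, how many consumers react; adding a new downstream process means adding a subscriber, not changing the producer.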
Kafka became the central data backbone—the layer through which all data flows. Whether it’s application events, database changes, or analytics outputs, everything moves through a simple pattern: source to Kafka, then Kafka to destination. By integrating Change Data Capture (CDC), Porter ensured that every database update is streamed in real time, eliminating the need for batch processing and fragile pipelines.
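The CDC leg of this pattern is typically configured rather than coded. The fragment below is a hedged sketch of a Debezium-style Kafka Connect source connector for PostgreSQL; the hostnames, credentials, and table list are placeholders, and the exact property names vary by connector version.

```json
{
  "name": "orders-cdc",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "orders-db.internal",
    "database.port": "5432",
    "database.user": "cdc_user",
    "database.password": "********",
    "database.dbname": "orders",
    "table.include.list": "public.orders,public.drivers",
    "topic.prefix": "porter"
  }
}
```

With a connector like this running, every committed row change is streamed to a Kafka topic (e.g. `porter.public.orders`) as it happens, which is what removes the need for periodic batch extracts.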
This architecture also introduced strong data governance. With a schema registry in place, every event is well-defined, discoverable, and backward-compatible. Teams can confidently consume data without worrying about breaking changes, much like working with a shared API contract. Over time, this significantly improved trust in data and reduced coordination overhead between teams.
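The backward-compatibility guarantee a schema registry enforces can be illustrated with a toy check: a new schema may add a field only if it carries a default, so records written under the old schema still deserialize. The schemas and field names here are invented for illustration.

```python
# "Schemas" as field -> default maps; None marks a required field with
# no default (a simplification of real Avro-style schemas).
OLD_SCHEMA = {"fields": {"order_id": None}}
NEW_SCHEMA = {"fields": {"order_id": None, "tip": 0}}  # added field has a default

def read(record, schema):
    """Deserialize a record, filling missing fields from schema defaults."""
    out = {}
    for name, default in schema["fields"].items():
        if name in record:
            out[name] = record[name]
        elif default is not None:
            out[name] = default
        else:
            raise ValueError(f"missing required field {name!r}")
    return out

old_record = {"order_id": "o-42"}       # written by a producer on OLD_SCHEMA
decoded = read(old_record, NEW_SCHEMA)  # {'order_id': 'o-42', 'tip': 0}
```

Because the new field defaults rather than being required, old producers and new consumers coexist; this is exactly the "shared API contract" property described above.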
The impact extends beyond architecture into how teams work. Instead of relying on centralised systems or requesting access to data, teams can independently publish and consume events. This autonomy has accelerated development cycles and enabled innovation at the edges of the organisation rather than the core.
On the data side, Kafka evolved into a universal movement layer. Earlier, batch jobs periodically moved data from application databases to warehouses, often leading to delays and inconsistencies. By streaming database logs directly into Kafka, Porter ensured that every change is captured reliably and in real time. From there, data can be routed to multiple destinations—warehouses for analytics, real-time systems for monitoring, or machine learning pipelines—without building complex point-to-point integrations.
This foundation made it easier to unlock real-time intelligence. When the business needed instant visibility into critical metrics like order fulfilment or supply-demand gaps, the data was already available in Kafka. Adding a real-time analytics system required minimal effort, since the pipeline was already in place.
With stream processing engines like Apache Flink, Porter now processes events as they arrive, combining multiple data streams to drive live decisions. This powers use cases such as real-time driver performance tracking, dynamic customer segmentation, and fraud detection, where acting instantly is critical.
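A core building block behind use cases like driver performance tracking is windowed aggregation over an event stream. The sketch below counts trips per driver in fixed 60-second tumbling windows, in the spirit of a Flink job; the field names (`driver_id`, `ts`) are assumptions, not Porter's actual schema.

```python
from collections import defaultdict

WINDOW = 60  # tumbling-window size in seconds

# (driver_id, window_start) -> completed trips in that window
counts = defaultdict(int)

def on_event(event):
    """Assign the event to its tumbling window and update the count."""
    window_start = (event["ts"] // WINDOW) * WINDOW
    counts[(event["driver_id"], window_start)] += 1

for e in [{"driver_id": "d1", "ts": 5},
          {"driver_id": "d1", "ts": 50},
          {"driver_id": "d1", "ts": 65},
          {"driver_id": "d2", "ts": 10}]:
    on_event(e)

# d1 -> 2 trips in window [0, 60), 1 in [60, 120); d2 -> 1 in [0, 60)
```

A real Flink deployment adds event-time semantics, watermarks, and fault-tolerant state on top of this logic, but the per-window grouping is the same idea.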
What emerges is a clear operating model: declare events, publish them, move them through a unified pipeline, and process them continuously. This approach has transformed Kafka from a messaging system into the central nervous system of the company.
Porter’s journey reflects a broader industry shift—from API-driven systems to event-driven platforms, and from batch processing to continuous intelligence. What they’ve built is more than a logistics platform; it’s a real-time data ecosystem designed for scale, speed, and autonomy.