Evolution of network security architectures: From perimeter defense to zero trust

By Hitesh Dharmdasani, CTO, AnexGATE & Founder and CEO, NetSense CyberSecurity Pvt. Ltd.

The evolution of network security becomes clear when you look at how real networks were built and how they are used today. Earlier, a typical setup had a central office, a few branch locations, and everything important, such as applications, databases, and authentication servers, running inside that network. A firewall sat at the edge, allowing or blocking traffic based on rules like IP address, port, or protocol. If traffic passed through that firewall, it was treated as safe. This worked because most users were physically inside the network.
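The rule-matching logic described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation: the rule fields, first-match-wins evaluation, and string-prefix matching (a crude stand-in for real CIDR matching) are all simplifying assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    action: str            # "allow" or "deny"
    src_ip: str            # source prefix; "*" matches any (real firewalls use CIDR)
    dst_port: Optional[int]  # None matches any port
    proto: str             # "tcp", "udp", or "*"

def evaluate(rules, src_ip, dst_port, proto):
    """First-match-wins evaluation with an implicit default deny."""
    for r in rules:
        if (r.src_ip == "*" or src_ip.startswith(r.src_ip)) \
           and (r.dst_port is None or r.dst_port == dst_port) \
           and r.proto in ("*", proto):
            return r.action
    return "deny"

rules = [
    Rule("allow", "10.0.", 443, "tcp"),  # internal users to HTTPS services
    Rule("allow", "10.0.", 53, "udp"),   # internal DNS lookups
    Rule("deny", "*", None, "*"),        # explicit catch-all
]
```

Note that nothing in this model identifies the user or the application; once a packet matches an allow rule, it is trusted, which is exactly the assumption Zero Trust later removes.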

An employee would connect their laptop to the office LAN, get an IP address from DHCP, and access internal applications directly. If someone needed remote access, they used a VPN, which effectively placed their device inside the network once connected. After that, they could reach almost everything the same way an internal user could.

As these networks grew, the nature of threats shifted significantly. Early concerns centered on brute-force attacks, but threats eventually evolved into more sophisticated phishing and credential theft.

This forced a change in how we prove our identity. Security moved from the era of “something you know,” like a simple password, to “something you have,” such as a physical token or a one-time code on a smartphone. While this added a layer of protection, it created a massive problem in real situations. If an attacker got hold of a user’s credentials and bypassed that second factor, they could log in through the VPN and appear as a legitimate user. Once inside, there were very few checks. They could scan the network, access shared drives, and connect to internal services because the network assumed anything inside was trusted.
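The "something you have" factor mentioned above is typically a time-based one-time password. As a concrete reference, the TOTP algorithm from RFC 6238 can be written with only the standard library; this is a sketch of the standard algorithm, not the article's own system.

```python
import base64
import hmac
import struct
import time
from typing import Optional

def totp(secret_b32: str, t: Optional[int] = None,
         step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (SHA-1 variant)."""
    key = base64.b32decode(secret_b32)
    counter = int((time.time() if t is None else t) // step)
    msg = struct.pack(">Q", counter)               # 8-byte big-endian counter
    digest = hmac.new(key, msg, "sha1").digest()
    offset = digest[-1] & 0x0F                     # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Crucially, a stolen code is still valid for the rest of its time step, and phishing kits that relay codes in real time are exactly the bypass the article describes: the factor proves possession once, then the network stops checking.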

The transition became even more complex as companies started moving applications to the cloud with the rise of AWS and Azure. Remote work accelerated this shift, since hosting applications inside the office became harder than hosting them in the cloud. Traffic no longer flowed through a single firewall. It went directly from user devices to cloud services over the internet. At that point, relying on a network boundary stopped making sense. You could no longer say what was inside or outside in a meaningful way.

To handle this, we reached the current stage of evolution: security based on “something the system knows about you.” This is the core of Zero Trust. It moves beyond a one-time login and instead relies on continuous context. The system looks at your typical login hours, your geographical location, the health of your specific hardware, and even your typing patterns or the way you navigate an app.
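The continuous-context idea can be illustrated with a toy risk score: each signal that deviates from the user's baseline raises the score, and the score maps to a decision. The signals, weights, and thresholds here are invented for illustration; real systems use far richer models.

```python
def risk_score(ctx: dict, baseline: dict) -> int:
    """Toy scoring: +1 for each contextual signal deviating from baseline."""
    score = 0
    if ctx["hour"] not in baseline["usual_hours"]:
        score += 1                      # login outside typical hours
    if ctx["country"] != baseline["country"]:
        score += 1                      # unexpected geography
    if not ctx["device_healthy"]:
        score += 1                      # device fails posture check
    return score

def decide(score: int) -> str:
    """Map the risk score to an access decision."""
    if score == 0:
        return "allow"
    if score == 1:
        return "step_up"                # ask for reauthentication
    return "deny"

baseline = {"usual_hours": set(range(9, 18)), "country": "IN"}
```

The important property is that the score is recomputed on every request, not once at login, so trust decays the moment the context changes.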

Zero Trust still uses the same foundational technologies, such as firewalls and tunnels, but it has fundamentally changed how the software running inside those systems operates. Instead of a blanket allow after an initial check, the software now evaluates every single action. Anything that deviates from the established baseline must establish trust again.

In the past, enforcement happened at the lower levels of the network stack, specifically Layer 3 and Layer 4, using Access Control Lists that only looked at IP addresses and ports. Today, Zero Trust enforcement has moved up to Layer 7, using inspection components such as proxies. This shift allows a much deeper understanding of HTTP and HTTPS traffic. Rather than just seeing that a device is connecting to a specific port, the system understands the specific application request being made. It can see the headers, the URL path, and the identity of the user behind the request.
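To make the contrast with port-based ACLs concrete, a Layer-7 policy check decides on the user's identity, the HTTP method, and the URL path rather than on an IP and port. The roles, paths, and policy table below are hypothetical examples, not a real product's policy language.

```python
from dataclasses import dataclass, field

@dataclass
class HttpRequest:
    user: str
    method: str
    path: str
    headers: dict = field(default_factory=dict)

# Hypothetical role assignments and policy table for illustration.
ROLES = {"asha": "finance", "ravi": "engineer"}

POLICIES = [
    {"role": "finance",  "method": "GET",  "prefix": "/crm/reports"},
    {"role": "engineer", "method": "POST", "prefix": "/api/deploy"},
]

def authorize(req: HttpRequest) -> bool:
    """Allow only requests matching an identity-aware Layer-7 policy."""
    role = ROLES.get(req.user)
    return any(
        p["role"] == role
        and p["method"] == req.method
        and req.path.startswith(p["prefix"])
        for p in POLICIES
    )
```

A port-based ACL would have to allow or block all of TCP/443 at once; here the same connection carries requests that are individually allowed or denied.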

Under this model, your access to a CRM might change instantly if you log in from a new location or use a different device. The system verifies your credentials and confirms the device is compliant, but it does not stop there. If you suddenly try to download a large amount of data or access a system you do not normally use, the system checks again. It may ask for reauthentication or block the request entirely. This approach also fits well with distributed environments where applications are hosted across various data centers and cloud platforms.

However, we are now entering a true frontier of complexity with the rise of encrypted protocols like DNS over TLS, DNS over HTTPS, and QUIC. Most large companies are currently struggling with enforcing Zero Trust because these protocols either hide DNS queries or use UDP in a way that traditional proxies on firewalls cannot easily inspect. This level of encryption makes it harder for the network to verify the intent of the traffic without breaking end-to-end security.
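The inspection problem above can be summarised by classifying flows by how much a traditional Layer-3/4 device can see. The port heuristics are well-known conventions (QUIC over UDP/443, DNS-over-TLS on TCP/853, DNS-over-HTTPS blending into ordinary TCP/443), but treating them as a classifier is a simplification for illustration.

```python
def classify_flow(proto: str, dst_port: int) -> str:
    """Label a flow by what a traditional L3/L4 firewall can infer about it.

    Heuristics only: real traffic can use any port, and DoH is deliberately
    indistinguishable from ordinary HTTPS at this layer.
    """
    if proto == "udp" and dst_port == 443:
        return "quic"           # encrypted UDP transport; no TCP proxying
    if proto == "tcp" and dst_port == 853:
        return "dot"            # DNS queries hidden inside TLS
    if proto == "tcp" and dst_port == 443:
        return "https-or-doh"   # cannot tell DNS apart from web traffic
    return "classic"            # port and protocol still reveal intent
```

The "https-or-doh" bucket is the crux: the one signal firewalls historically relied on for visibility, plaintext DNS, simply disappears into the encrypted stream.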

At the same time, we are seeing the rise of AI tools which are now starting to be treated as separate identities. In a modern architecture, an AI agent is not just an extension of the user but a distinct entity with its own permissions and behavioral baselines. Building the technologies to manage this is a very long-term engineering endeavor. It requires a fundamental rebuilding of the network stack to be aware of identity and intent at every moment.

The Make in India initiative is set to benefit significantly from this shift. As Indian companies develop the deep network engineering knowledge required to build these indigenous Layer 7 proxies and Zero Trust controllers, they are moving beyond just implementation and into the creation of core security technology. By fostering this expertise locally, the nation is not just consuming security products but is building the very infrastructure that will define the next decade of digital trust.

The shift from perimeter-based security to Zero Trust is not about adding more layers at the edge. It is about removing the assumption that being on the network is enough. Every request is treated the same way: it must be verified against what the system knows to be true at that exact moment.
