The Journey of a Web Request: Unraveling Multi-Tier Architecture and Cloud Security

Introduction:

Imagine you have a company with a website, say XYZ.com, where you sell grocery items online.

When a client (let’s call them Client A) tries to access your website using a web browser, they type the website address (URL). The Domain Name System (DNS) resolves this name (like www.xyz.com) into an IP address. Once the IP address is obtained, the browser sends an HTTP request to the server.
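To make these two steps concrete, here is a minimal Python sketch of the resolve-then-request flow. The lookup table and the request builder are toy stand-ins (a real browser asks recursive DNS resolvers and sends the request over a TCP connection), and the domain-to-IP mapping is a made-up illustrative value:

```python
# Toy sketch of the resolve-then-request flow. The DNS table and the
# request builder are simplified stand-ins: a real browser queries
# recursive DNS resolvers and sends the request over a TCP connection.

DNS_TABLE = {"www.xyz.com": "203.0.113.10"}  # hypothetical DNS record

def resolve(hostname: str) -> str:
    """Step 1: turn a hostname into an IP address."""
    return DNS_TABLE[hostname]

def build_http_get(host: str, path: str = "/") -> str:
    """Step 2: the HTTP request the browser would send to that IP."""
    return f"GET {path} HTTP/1.1\r\nHost: {host}\r\n\r\n"

ip = resolve("www.xyz.com")
request = build_http_get("www.xyz.com")
print(ip)                        # 203.0.113.10
print(request.splitlines()[0])   # GET / HTTP/1.1
```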

Now, what exactly is a server?
Every webpage you see is essentially a set of files provided by a server to your web browser. A server is nothing but a computer that is ready to serve you with files or anything else it is configured to serve.

For example, even something as simple as searching for www.xyz.com triggers an HTTP request to the server. The server then responds by sending back the requested webpage, typically the homepage, to the browser.


While this process may seem straightforward, a lot of complex operations happen behind the scenes every time you see such simple interactions.


Let’s Dive into the Architecture:

When a client requests a webpage from the server, the web server handles this request.

In a typical internet-hosted application architecture, there are usually three main components:

  • Web Server

  • Application Server

  • Database Server

The Flow:




Let's take Client A again, who wants to access www.xyz.com.

  • The client types the website name in their browser.

  • The browser resolves the DNS name (www.xyz.com) into an IP address.

  • It then sends an HTTP request to the web server.

The web server is essentially a computer (or a virtual machine in cloud environments) running web server software like Nginx or Apache. Its role is to serve static content (like HTML, CSS, and images) back to the client.

However, if the client requests dynamic content (such as specific details about a grocery item), the request reaches the web server first.

  • The web server’s logic determines that this is a complex request.

  • It then forwards the request to the application server.

The application server is another computer (or virtual machine) where the business application is hosted.
This server contains the business logic to handle complex operations such as:

  • Discounts

  • Fetching specific data

  • Processing payments

When the application server processes a request, it may need to fetch or manipulate data.

  • It creates an SQL query.

  • Sends this query to the database server.

The database server is yet another computer (or virtual machine) where the database resides.
This database holds critical and sensitive information such as:

  • Customer details

  • Product catalog

  • Transaction history

Because of the sensitive nature of this information, database servers require robust security measures.

Once the required data is retrieved:

  • The database server sends the data back to the application server.

  • The application server processes it and sends it back to the web server.

  • Finally, the web server delivers the response to the client’s web browser, where the user sees the updated information.
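The whole round trip can be sketched as three Python functions, one per tier. This is purely illustrative: the function names, the catalog data, and the 10% discount are invented stand-ins for real servers talking to each other over a network:

```python
# Minimal sketch of the three-tier round trip, one function per tier.
# Names, catalog data, and the discount are invented for illustration;
# real tiers are separate machines communicating over a network.

def database_server(item: str) -> dict:
    # Stand-in for running an SQL query against the real database.
    catalog = {"apples": {"price": 2.50, "stock": 120}}
    return catalog.get(item, {})

def application_server(item: str) -> dict:
    # Business logic: fetch the data, then apply a 10% discount.
    data = database_server(item)
    if data:
        data = {**data, "price": round(data["price"] * 0.9, 2)}
    return data

def web_server(path: str) -> str:
    # Static content is served directly; dynamic paths are forwarded.
    if path == "/":
        return "<html>homepage</html>"
    item = path.rsplit("/", 1)[-1]
    return str(application_server(item))

print(web_server("/"))                 # <html>homepage</html> (static)
print(web_server("/products/apples"))  # dynamic: traverses all three tiers
```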


The flow above shows us exactly how information travels from the client's web browser through each component and back, and how many components sit in between.

Let's now complicate the architecture by introducing the concept of "Networking" into it.

Just to summarize what we did: the client sends an HTTP request to the web server, which checks whether it is a simple/static request (such as the home page); if not, the request is forwarded to the application server, which applies business logic and, when needed, passes the request on to the database server. The response then traverses back to the web server and on to the respective web browser.

Now, to strengthen security, especially in cloud environments, it is important to introduce networking concepts.

In the cloud, we have key components such as VNet, Subnet, Route Table, Firewall, Gateway, NSG, ASG, and others.

What are these components, and how do they fit into the overall architecture?


VNet (Virtual Network):

A VNet acts as a private boundary that isolates and protects cloud resources from direct public access over the Internet.

But why do we need it?

Because whenever data packets move across systems, we want to ensure the transmission remains secure and invisible to the outside world. A VNet provides a secured, private tunnel or space, ensuring that data movements happen safely without being exposed to public threats.
 

Adding on to our previous scenario, what we can do is place the resources created in the cloud platform (say, Azure) inside a private network.

Subnet:

A subnet is a further subdivision of a virtual network; it enables more fine-grained control over access and traffic.
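As a small sketch of the idea, Python's standard `ipaddress` module can show how a VNet's private address space is carved into per-tier subnets. The 10.0.0.0/16 range and the tier names here are illustrative choices, not Azure defaults:

```python
import ipaddress

# Sketch: a VNet gets a private address space, and each subnet takes
# a slice of it. The address range and tier names are illustrative.

vnet = ipaddress.ip_network("10.0.0.0/16")   # the VNet address space
slices = list(vnet.subnets(new_prefix=24))   # all possible /24 slices

tiers = {
    "web-subnet": slices[0],   # 10.0.0.0/24
    "app-subnet": slices[1],   # 10.0.1.0/24
    "db-subnet":  slices[2],   # 10.0.2.0/24
}

for name, net in tiers.items():
    print(name, net)

# Every subnet lies inside the VNet boundary:
assert all(net.subnet_of(vnet) for net in tiers.values())
```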


The below diagram depicts a logical representation of the flow:-




So, coming back to our example scenario: the resources are deployed in Azure within a virtual network, and the different servers that form the main components of the architecture are segregated into their respective subnets.

But why do we need to do that?

Since I don't want any user to access the application server directly, I want to restrict users to only have access to the web server. I also want my application server to be the only one that talks to the database server, since it contains very sensitive information; hence only requests from the application server should be entertained.

Now this can only be achieved when the servers act as separate entities on which restrictions can be placed. That is exactly what subnets give us, as individual rules can be applied to each server sitting in its own subnet.

But how do these rules get applied to different subnets? How and where are these rules assigned?

That's where NSG comes into the picture....

Network Security Group (NSG):

NSG is a security feature in Azure which acts as a firewall, or say a security guard, with a set of rules that decide which traffic can come in (inbound) and which traffic can go out (outbound).
NSGs can be attached to subnets, or to a NIC (Network Interface Card) in the case of a VM; the NIC is what connects a computer to a network. NSGs control which type of traffic can make requests to which servers. In our case, we want the web servers to be accessible from the public internet, but not the application server or the database server.

Similarly, the application server can only be accessed by the web server, not by the public internet or any other server. And the database server should only be accessible by the application server. So it's clear that the NSG acts as a security guard, enforcing rules that only allow listed ports or IP addresses to access the resources within the defined subnet.
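The first-match-by-priority behaviour of NSG rules can be sketched in a few lines of Python. The rules below are illustrative only (real NSG rules also match protocol, destination prefix, and direction), but the evaluation order is the same idea Azure uses: lowest priority number first, first match wins:

```python
import ipaddress

# Toy NSG evaluator: rules are checked in priority order (lowest
# number first) and the first match wins. Rule values are illustrative,
# not real Azure defaults.

RULES = [  # (priority, source_prefix, dest_port, action)
    (100, "10.0.0.0/24", 8080, "Allow"),   # web subnet -> app tier
    (200, "10.0.1.0/24", 1433, "Allow"),   # app subnet -> database
    (4096, "0.0.0.0/0",  None, "Deny"),    # deny everything else
]

def evaluate(source_ip: str, dest_port: int) -> str:
    src = ipaddress.ip_address(source_ip)
    for _, prefix, port, action in sorted(RULES):
        # A rule matches if the source falls in its prefix and the
        # port matches (None means "any port").
        if src in ipaddress.ip_network(prefix) and port in (None, dest_port):
            return action
    return "Deny"

print(evaluate("10.0.0.5", 8080))     # Allow: web server reaching the app tier
print(evaluate("203.0.113.7", 1433))  # Deny: public internet reaching the database
```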

What is the use of ASG, then?

Application Security Group (ASG):

In the above scenario, an ASG is not really needed, as all the resources of a specific function are separated into their own subnets, so applying rules with an NSG can do the job. But let's consider a scenario where the web server and application server are in one subnet and the database server is in another. The image below depicts this:






Now in the above image it is visible that the web server and application server are in "Subnet 1" and the database server is in "Subnet 3".

Here, if I want to block public access to the application server while allowing it for the web server, creating an NSG rule on the subnet won't work: I would have to reference individual IP addresses for the application server and web server in the NSG rules, which is not a clean approach and gets cumbersome when there are many resources.

Hence, the better approach is to group the resources within the subnet using Application Security Groups (ASGs), two in total (one for the web server and the other for the application server). NSG rules can then be applied to the ASGs for granular access control, as we can now allow internet access for the web server and deny it for the application server.

This solution is not only scalable but also manageable: even if the number of web servers or application servers increases, you don't have to change the NSG rules; you simply add the new resource to the ASG.
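The "rules reference groups, membership is kept separate" idea can be sketched like this (group names, IP addresses, and rules are all made up for illustration):

```python
# Sketch of the ASG idea: NSG rules reference named groups instead of
# IP addresses, and membership is a separate mapping. All names and
# addresses are invented for illustration.

asg_members = {
    "web-asg": {"10.0.0.4", "10.0.0.5"},
    "app-asg": {"10.0.0.6"},
}

rules = [  # (priority, source, dest_group, action)
    (100, "Internet", "web-asg", "Allow"),
    (110, "Internet", "app-asg", "Deny"),
]

def internet_allowed(dest_ip: str) -> bool:
    # First matching rule (by priority) decides; default is deny.
    for _, source, group, action in sorted(rules):
        if source == "Internet" and dest_ip in asg_members[group]:
            return action == "Allow"
    return False

print(internet_allowed("10.0.0.4"))  # True: a web server
print(internet_allowed("10.0.0.6"))  # False: the app server

# Scaling out touches only the membership, never the rules:
asg_members["web-asg"].add("10.0.0.9")
print(internet_allowed("10.0.0.9"))  # True
```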





Since we have had pretty good coverage of NSG, ASG, VNet and Subnet, let's understand how a route table works.

Route Table:

Route tables are nothing but guides that tell a data packet how to reach its destination and which path to take.

It's easy to confuse route tables with NSGs. The fundamental difference between the two is that while an NSG allows or denies the entry of data packets to a particular destination, a route table helps the data packets reach that destination.

So in one line, we can say route tables tell the traffic where to go, and NSGs tell the traffic whether they are allowed in or not.
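Under the hood, route tables pick the most specific matching route, known as longest-prefix match. A toy Python version of that lookup, with illustrative prefixes and next-hop names:

```python
import ipaddress

# Toy route table using longest-prefix match: the most specific
# matching route wins. Prefixes and next-hop names are illustrative.

ROUTES = [
    ("10.0.0.0/16", "VNet-local"),        # stay inside the VNet
    ("10.0.2.0/24", "Azure-Firewall"),    # db subnet: force through a firewall
    ("0.0.0.0/0",   "Internet-Gateway"),  # everything else
]

def next_hop(dest_ip: str) -> str:
    dest = ipaddress.ip_address(dest_ip)
    matches = [(ipaddress.ip_network(p), hop) for p, hop in ROUTES
               if dest in ipaddress.ip_network(p)]
    # The longest prefix (largest prefixlen) is the most specific route.
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(next_hop("10.0.1.7"))  # VNet-local
print(next_hop("10.0.2.9"))  # Azure-Firewall (the /24 beats the /16)
print(next_hop("8.8.8.8"))   # Internet-Gateway
```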



Now, since we have covered so many concepts, let's cover the last one, and then we shall present the overall scenario covering all the components.

Load Balancer (Application Gateway):

The load balancer, as the name suggests, distributes traffic evenly so that no single server is overloaded. Based on the scenario we discussed, let's look at the diagram below and try to understand the functionality of the load balancer.



In a typical multi-tier architecture like the image above, when a client (such as a web browser) sends a request to access web content, that request first hits the load balancer. The load balancer plays a crucial role in distributing incoming traffic efficiently across multiple web servers located in Subnet 1, ensuring optimal performance and high availability.

If the client is requesting static content—such as the homepage, images, or HTML files—the load balancer directs the request straight to one of the web servers. These servers are optimized to serve static files quickly and efficiently.

However, if the request involves dynamic or complex content—for example, anything that requires user-specific data, transactions, or real-time business logic—the load balancer routes the request to the application server. The application server is responsible for processing such requests, which may involve interacting with the database server to retrieve or update data.

Once the application server completes its processing—be it for transactions, payments, or executing backend logic—it sends the response back through the load balancer, which then forwards it to the client’s browser.

This layered approach ensures that each server handles the type of workload it's best suited for, resulting in better performance, scalability, and maintainability.
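A minimal sketch of those two load-balancer jobs, round-robin distribution for static content and forwarding dynamic paths to the application tier (server names and URL paths are invented for illustration; real load balancers offer several distribution algorithms beyond round-robin):

```python
from itertools import cycle

# Sketch of the load balancer's two jobs here: spread static requests
# across the web servers (round-robin) and send dynamic paths to the
# application tier. Names and paths are invented for illustration.

web_servers = cycle(["web-1", "web-2", "web-3"])

def route(path: str) -> str:
    if path.startswith(("/products", "/checkout")):  # dynamic content
        return "app-server"
    return next(web_servers)  # static content: next web server in turn

print(route("/"))                 # web-1
print(route("/about.html"))       # web-2
print(route("/products/apples"))  # app-server
print(route("/index.html"))       # web-3
```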


Everything will make sense when all of these components interact together in an architecture....

This half is a summarized version of the entire blog post. It sums up the functionality of all the components in a single architecture, presenting a logical picture of how the web application interacts with the different servers and sends the desired response back to the client, adhering to the multiple checks and balances enforced by the components responsible for security and authentication.

The image below represents a bird's-eye view of this entire blog:



The image above can be understood like this:

  • The client computer connects to the internet through a VPN (Virtual Private Network) and tries to access www.xyz.com, as per the example we discussed earlier. The request first goes through the VPN gateway, which has rules to check if access to that website is allowed. If it’s permitted, the request moves forward.
  • From there, the traffic hits the load balancer, which decides where to send it. If the request is for something simple, like a static page, it’s sent to a web server. But if it needs more processing—like dynamic content or something that requires data—it’s forwarded to an application server.
  • Here’s something important: only the application server is allowed to talk to the database server. This is because the database holds sensitive data and needs stronger security. All servers are placed in their own subnets, so we can use NSG (Network Security Group) rules to control who talks to whom. Basically, only the app server can reach the database, and the database only responds to the app server.
  • Once the request is handled—whether by the web server or the app server—it goes back to the load balancer, which acts like a reverse proxy. Then finally, the response is sent back to the web browser, and that’s when the user sees the result on their screen.

Conclusion:

This blog aimed to present the fundamental building blocks that make up the generic architecture of any web application interacting with clients, leveraging Azure cloud features to make the multi-tier architecture more robust, reliable and scalable in nature.

The case scenario discussed above takes a progressive approach: we started with small components and kept adding complexity, concluding with a full multi-tier architecture at the end.







Please feel free to leave your comments!
I will be happy to hear from you!

Thanks & Regards,
Manan Choudhary




