Deployment of 3-Tier Web Application on Amazon Elastic Kubernetes Service (EKS)

Resources used

VPC | EKS  | RDS | EC2 | IAM role | ECR | Docker | Load balancer

Flow Diagram

 

Steps:

Step 1: Create two IAM roles, one for the EKS cluster and one for the node group

Step 2: Create a VPC and its components

Step 3: Create an EKS cluster

Step 4: Create a node group in an EKS cluster

Step 5: Create an EC2 instance for worker machine

Step 6: Build and Push Backend Application Container Image

Step 7: Create Kubernetes Deployments and Services for Backend

Step 8: Apply Kubernetes Manifests for Backend

Step 9: Build and Push Frontend Application Container Image

Step 10: Create Kubernetes Deployments and Services for frontend

Step 11: Apply Kubernetes Manifests for frontend

Step 12: Access the application through the Frontend Load Balancer URL

 

 

Kubernetes:

Kubernetes is an open-source container orchestration project that provides a set of tools and APIs for deploying, scaling, and managing containerized applications. It is a powerful platform with a wide range of features and flexibility. However, it can be complex to set up and operate, especially for large or complex applications.

 

EKS:

Amazon EKS (Elastic Kubernetes Service) is a fully managed Kubernetes service provided by Amazon Web Services (AWS). It simplifies the deployment, management, and scaling of containerized applications using Kubernetes.

 

| Feature | Kubernetes | EKS |
| --- | --- | --- |
| Open source | Yes | No (a managed service built on open-source Kubernetes) |
| Managed | No | Yes |
| Control plane | Self-managed | Managed by AWS |
| Worker nodes | Self-managed | Provisioned through managed node groups |
| Pricing | Pay-as-you-go | Pay-as-you-go |
| Complexity | Can be complex, especially for large applications | Simpler and easier to use |
| Flexibility | Wide range of features and customization options | Less flexible than self-managed Kubernetes |
| Reliability | Depends on how well you manage the control plane and worker nodes | Higher, as AWS manages the control plane |
| Security | You are responsible for securing the control plane and worker nodes | AWS secures the control plane for you |
| Cost | Can be more cost-effective for small or simple applications | Can be more expensive for large or complex applications |

 

EKS cluster:

An Amazon Elastic Kubernetes Service (Amazon EKS) cluster consists of the Kubernetes control plane, managed by AWS, plus a group of worker nodes (typically Amazon EC2 instances) that run your containers. The cluster provides a unified view of the underlying infrastructure, allowing you to deploy, manage, and scale containerized applications.

 

Nodes:

Each Amazon EKS cluster consists of one or more nodes. A node is an Amazon EC2 instance that runs the Kubernetes kubelet process. The kubelet is responsible for managing containers on the node, such as starting and stopping containers, monitoring their health, and reporting their status to the Kubernetes control plane.

 

Control plane:

The Kubernetes control plane is a set of master nodes that manage the cluster. The control plane is responsible for scheduling pods on nodes, maintaining the desired state of the cluster, and providing an API for managing the cluster.

 

Worker nodes:

The worker nodes are the nodes that run the containers. The worker nodes are responsible for running the kubelet process, which manages the containers on the node.

 

Pods:

A Pod is the smallest deployable unit of computing in Kubernetes. It is a group of one or more containers that share resources and are co-scheduled on the same worker node. Pods are the building blocks of Kubernetes applications and are used to organize and manage containerized applications.

 

 

Flow Diagram

The flow diagram represents a simplified architecture of a web application running on Amazon Elastic Kubernetes Service (EKS). It shows the key components of the application and how they interact with each other.

Key Components:

  1. Frontend Load Balancer: This is a load balancer that distributes traffic to the frontend pods. It is typically a public-facing address that users can access to reach the application.

  2. Frontend Pods: These are the pods that serve the frontend of the application. They are responsible for rendering the web pages and handling user requests.

  3. Backend Load Balancer: This is a load balancer that distributes traffic to the backend pods. It is typically internal to the VPC and not accessible to the public.

  4. Backend Pods: These are the pods that serve the backend of the application. They are responsible for processing data, making database queries, and other back-end tasks.

  5. RDS MySQL Database: This is the database that stores the application’s data. It is accessible to the backend pods through a private network connection.

Overall Flow:

  1. A user accesses the web application via the public-facing frontend load balancer URL.

  2. The frontend load balancer receives the user’s request and distributes it to one of the frontend pods.

  3. The frontend pod processes the request and may call the backend pods through the backend load balancer URL that was baked into the frontend image at build time.

  4. If the request requires data, the backend pod establishes a connection to the MySQL database using the database credentials specified in its configuration.

  5. The backend pod sends database queries to the MySQL database instance running on AWS RDS.

  6. The MySQL database instance processes the queries and returns the results to the backend pod.

  7. The backend pod returns the data to the frontend pod, which incorporates it into the response it sends back to the user.

  8. The user receives the response from the frontend load balancer and can view the web application.

This simplified diagram provides an overview of the basic architecture of a web application running on EKS. The actual implementation may vary depending on the specific application and requirements.

 

 

 

Step 1: Create two IAM roles, one for the EKS cluster and one for the node group

   1. Creating role for the EKS cluster

   Navigate to IAM >> Roles >> Create role >> choose EKS as the use case >> select EKS Cluster >> attach AmazonEKSClusterPolicy >> name the role -- EKS-clusrole >> Create role

 

2. Creating role for EKS Node

    Navigate to IAM >> roles >> Create role >> Choose EC2 in Use Case >>

    Attach the policies below to the role:

AmazonEC2ContainerRegistryReadOnly
AmazonEC2FullAccess
AmazonEKS_CNI_Policy
AmazonEKSClusterPolicy
AmazonEKSLocalOutpostClusterPolicy
AmazonEKSServicePolicy
AmazonEKSWorkerNodePolicy

>> name the role -- EKS-node-role >> Create role
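For reference, the same node role can be created from the AWS CLI. This is a sketch, assuming the CLI is already configured; the role name matches the one above, and the trust policy is the standard one for EC2:

```shell
# Create the node role with the standard EC2 trust policy
aws iam create-role \
  --role-name EKS-node-role \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": {"Service": "ec2.amazonaws.com"},
      "Action": "sts:AssumeRole"
    }]
  }'

# Attach each managed policy listed above
for policy in AmazonEC2ContainerRegistryReadOnly AmazonEC2FullAccess \
    AmazonEKS_CNI_Policy AmazonEKSClusterPolicy \
    AmazonEKSLocalOutpostClusterPolicy AmazonEKSServicePolicy \
    AmazonEKSWorkerNodePolicy; do
  aws iam attach-role-policy \
    --role-name EKS-node-role \
    --policy-arn "arn:aws:iam::aws:policy/${policy}"
done
```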

 

 

Step 2: Create a VPC and its components

   Navigate to All services >> VPC >> create VPC

     a. Create a VPC with the desired CIDR block and subnets.

     b. Ensure that there are at least two public subnets in different Availability Zones to ensure high availability.

     c. Create a route table for each subnet and add a route to the internet gateway.

                                               Or, directly:

   Navigate to All services >> VPC >> Create VPC >> choose "VPC and more" to create the VPC, subnets, route tables, and internet gateway in one step.

   Follow the below snapshot for reference
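The console steps above can also be sketched with the AWS CLI. The CIDR blocks and the two Availability Zones (ap-south-1a/b) below are illustrative assumptions:

```shell
# Create the VPC and two public subnets in different AZs for high availability
VPC_ID=$(aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --query 'Vpc.VpcId' --output text)
SUBNET1=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.0.1.0/24 --availability-zone ap-south-1a \
  --query 'Subnet.SubnetId' --output text)
SUBNET2=$(aws ec2 create-subnet --vpc-id "$VPC_ID" \
  --cidr-block 10.0.2.0/24 --availability-zone ap-south-1b \
  --query 'Subnet.SubnetId' --output text)

# Internet gateway plus a route table with a default route to it
IGW_ID=$(aws ec2 create-internet-gateway \
  --query 'InternetGateway.InternetGatewayId' --output text)
aws ec2 attach-internet-gateway --vpc-id "$VPC_ID" --internet-gateway-id "$IGW_ID"
RT_ID=$(aws ec2 create-route-table --vpc-id "$VPC_ID" \
  --query 'RouteTable.RouteTableId' --output text)
aws ec2 create-route --route-table-id "$RT_ID" \
  --destination-cidr-block 0.0.0.0/0 --gateway-id "$IGW_ID"
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET1"
aws ec2 associate-route-table --route-table-id "$RT_ID" --subnet-id "$SUBNET2"
```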

 

Step 3: Create an EKS cluster

   Navigate to All Services >> EKS >> Add cluster >> Create

     a. Navigate to the Amazon EKS service in the AWS Management Console.

     b. Click “Create cluster” and choose “Custom VPC.”

     c. Provide a name for the cluster and select the VPC you created earlier.

     d. Choose the public and private subnets for the cluster.

     e. Select the IAM role you created for the cluster (EKS-clusrole).

     f. Click “Next: Networking” and configure any additional networking options, such as an Amazon VPC CNI.

     g. Cluster endpoint access -- select "Public and private": the API server endpoint is public, while worker-node traffic stays private.

     h. Click “Next: Review” and review the cluster configuration.

     i. Click “Create” to create the EKS cluster.
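The same cluster creation can be sketched with the AWS CLI; the account ID and subnet IDs below are placeholders for the resources created earlier:

```shell
# Create the cluster with both public and private endpoint access,
# as selected in the console steps above
aws eks create-cluster \
  --region ap-south-1 \
  --name webappeks \
  --role-arn arn:aws:iam::<ACCOUNT_ID>:role/EKS-clusrole \
  --resources-vpc-config subnetIds=<subnet-1>,<subnet-2>,endpointPublicAccess=true,endpointPrivateAccess=true

# Cluster creation takes several minutes; wait until it is ACTIVE
aws eks wait cluster-active --region ap-south-1 --name webappeks
```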

 

 

Step 4: Create a node group in an EKS cluster

Select the EKS cluster created >> go to the Compute tab below >> Add node group

  1. Name it and choose the IAM node role created in Step 1
  2. Choose only private subnets for the worker nodes, and click Create
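A CLI sketch of the same node group creation; the private subnet IDs, node role ARN, instance type, and scaling sizes below are illustrative:

```shell
# Create a managed node group in the private subnets only
aws eks create-nodegroup \
  --region ap-south-1 \
  --cluster-name webappeks \
  --nodegroup-name webapp-nodes \
  --node-role arn:aws:iam::<ACCOUNT_ID>:role/EKS-node-role \
  --subnets <private-subnet-1> <private-subnet-2> \
  --instance-types t3.medium \
  --scaling-config minSize=1,maxSize=3,desiredSize=2
```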

 

 

Step 5: Create an EC2 instance for worker machine

   Create a worker machine that we can use to connect to the EKS cluster and deploy the application.

    All services >> EC2 >> Launch instance (while creating the instance, choose the self-created VPC and a public subnet, and enable "Auto-assign public IP" for the instance).

    Here choose the Amazon Linux 2 AMI and instance type t2.medium.

  • SSH into the self-created worker machine (workstation), then run aws configure and provide the credentials of the root user.

    – To create the credentials, navigate to All services >> IAM >> My security credentials >> Create access key >> save the key pair; these values are entered at aws configure.

  • On the workstation, perform the steps below.

Run below commands:



aws configure   # use root credentials; if the EKS cluster was created from the console with the root account, credentials of other admin users will not work

curl -O https://s3.us-west-2.amazonaws.com/amazon-eks/1.23.15/2023-01-11/bin/linux/amd64/kubectl
chmod +x ./kubectl
mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$PATH:$HOME/bin
kubectl version --client

curl -sLO "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_Linux_amd64.tar.gz"
tar -zxvf eksctl_Linux_amd64.tar.gz
chmod +x ./eksctl; mkdir -p $HOME/bin && cp ./eksctl $HOME/bin/eksctl && export PATH=$HOME/bin:$PATH
echo 'export PATH=$HOME/bin:$PATH' >> ~/.bashrc
eksctl version

 

 

Configure kubectl to use the EKS cluster created above:


aws eks --region ap-south-1 update-kubeconfig --name webappeks --alias iams   # replace the region and EKS cluster name accordingly
kubectl config current-context
kubectl config view
kubectl get nodes   # should list the running worker nodes


 

Step 6: Build and Push Backend Application Container Image

These steps should be performed from the local machine.

   We will use Amazon Elastic Container Registry (ECR) to store and manage the container images.

    The backend image does not depend on any load balancer URL, so it can be built and pushed directly:

                                                                     <ref here to push docker images for backend only>
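A typical ECR build-and-push sequence for the backend image might look like the following. The account ID, region, repo, and tag mirror the image URI used in Step 7, and the backend Dockerfile is assumed to be in the current directory:

```shell
# Values taken from the image URI in backend-deployment.yaml
ACCOUNT_ID=976995869248
REGION=ap-south-1
REPO=shoewebapp
TAG=webappss-be

# Authenticate Docker against the private ECR registry
aws ecr get-login-password --region "$REGION" \
  | docker login --username AWS --password-stdin "${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com"

# Build, tag, and push the backend image
docker build -t "${REPO}:${TAG}" .
docker tag "${REPO}:${TAG}" "${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${TAG}"
docker push "${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/${REPO}:${TAG}"
```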

 

 

Step 7: Create Kubernetes Deployments and Services for Backend

       This step involves defining Kubernetes deployment and service manifests. These manifests will describe how the application’s containers should be deployed across the cluster’s nodes.

  • Deployment manifests will specify the number of replicas for each container, the image to be pulled, and other deployment options.

  • Service manifests will define how the application will be exposed to the outside world. This involves creating a load balancer or exposing the service on a specific port.

First, we will define the backend Deployment and Service (in YAML files).
From the workstation machine, run the below commands:

Here, in the image field of backend-deployment.yaml and frontend-deployment.yaml, replace the value with your own (ECR) image URI.


[root@ip-172-31-4-175 ec2-user]# vi backend-deployment.yaml
#paste the below content and save it
apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
      - name: backend
        image: 976995869248.dkr.ecr.ap-south-1.amazonaws.com/shoewebapp:webappss-be
        ports:
        - containerPort: 5000
 


[root@ip-172-31-4-175 ec2-user]# vi backend-service.yaml
#paste the below content and save it
apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
  - name: backend-port
    protocol: TCP
    port: 5000
    targetPort: 5000
  - name: db-port
    protocol: TCP
    port: 3306
    targetPort: 3306
  type: LoadBalancer

 

Step 8: Apply Kubernetes Manifests for Backend

Apply the Kubernetes manifests created above using the kubectl command-line tool. This instructs the cluster to create and manage the application's containers and services.

   To start the Deployment and Service, run the commands below:

RUN:


kubectl apply -f backend-deployment.yaml

kubectl apply -f backend-service.yaml
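To verify that the backend came up, the following checks can be run from the workstation; the EXTERNAL-IP column of the Service shows the backend load balancer hostname:

```shell
# Check that the rollout succeeded (expect 3/3 replicas ready)
kubectl get deployment backend-deployment
kubectl get pods -l app=backend

# The EXTERNAL-IP / hostname of this Service is the backend load balancer
kubectl get svc backend-service
```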


 

Step 9: Build and Push Frontend Application Container Image

 These steps should be performed from the local machine.

    We will use Amazon Elastic Container Registry (ECR) to store and manage the container images.

     The frontend image should be pushed only after editing the frontend Dockerfile to point at the backend load balancer URL, i.e. we need to give the backend load balancer URL there, then build the image and push it to the ECR repo.

      We did not create any load balancer ourselves: the backend and frontend load balancers are provisioned automatically when the Service manifests of type LoadBalancer are applied, while the Auto Scaling Group is created by the EKS cluster when we create the node group.

     Navigate to All services >> EC2 >> Load balancers >> select the backend load balancer >> grab its URL and paste it into the frontend Dockerfile, then build the image out of it -- ref below link

                                                                      <ref here to push frontend docker image>
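One way to script the URL substitution is a placeholder in the Dockerfile plus sed. The Dockerfile content, the REACT_APP_API_URL variable name, and the backend hostname below are illustrative assumptions, not taken from the actual frontend repo:

```shell
# Hypothetical backend LB DNS name -- replace with the URL grabbed above
BACKEND_URL="http://backend-lb.example.elb.amazonaws.com:5000"

# Sample frontend Dockerfile with a placeholder for the backend endpoint
cat > Dockerfile.frontend <<'EOF'
FROM nginx:alpine
# The frontend expects the backend endpoint baked in at build time
ENV REACT_APP_API_URL=__BACKEND_URL__
COPY build/ /usr/share/nginx/html/
EOF

# Substitute the placeholder with the real backend LB URL
sed -i "s|__BACKEND_URL__|${BACKEND_URL}|" Dockerfile.frontend
grep REACT_APP_API_URL Dockerfile.frontend
```

After the substitution, build, tag, and push the image to ECR exactly as for the backend in Step 6, using the frontend tag (webappss).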

 

 

 

Step 10: Create Kubernetes Deployments and Services for frontend

 This step involves defining Kubernetes deployment and service manifests. These manifests will describe how the application’s containers should be deployed across the cluster’s nodes.

  • Deployment manifests will specify the number of replicas for each container, the image to be pulled, and other deployment options.

  • Service manifests will define how the application will be exposed to the outside world. This involves creating a load balancer or exposing the service on a specific port.

Define the frontend Deployment and Service (in YAML files):


[root@ip-172-31-4-175 ec2-user]#vi frontend-deployment.yaml
#paste the below content and save it
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
      - name: frontend
        image: 976995869248.dkr.ecr.ap-south-1.amazonaws.com/shoewebapp:webappss
        ports:
        - containerPort: 80


[root@ip-172-31-4-175 ec2-user]#vi frontend-service.yaml
#paste the below content and save it
apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
  - name: frontend-port
    protocol: TCP
    port: 80
    targetPort: 80
  type: LoadBalancer


 

Step 11: Apply Kubernetes Manifests for frontend

Apply the Kubernetes manifests created above using the kubectl command-line tool. This instructs the cluster to create and manage the application's containers and services.

   To start the Deployment and Service, run the commands below:

RUN:

 

kubectl apply -f frontend-deployment.yaml

kubectl apply -f frontend-service.yaml
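The frontend load balancer hostname can also be fetched directly from the Service, without going through the EC2 console:

```shell
# Prints the DNS name of the load balancer backing the frontend Service
kubectl get svc frontend-service \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
```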

 

 

Step 12: Access the application through the Frontend Load Balancer URL

   To access the application we need a URL. Since we chose type LoadBalancer in frontend-service.yaml, the application is reachable through the frontend load balancer URL. To get it, navigate to:

   All services >> EC2 >> Load balancers >> Select FE load balancer >> Grab URL

   Open an incognito tab and hit the FE load balancer URL on port 80; the application should load.
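As a quick smoke test from the command line, the sketch below checks that the frontend responds; the hostname is a placeholder for your actual FE load balancer DNS name:

```shell
# Expect an HTTP 200 response once the pods are ready behind the LB
curl -I http://<frontend-lb-dns>.ap-south-1.elb.amazonaws.com
```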

 
