Dockerizing a 3-Tier E-Commerce Web Application

In this document we Dockerize a 3-tier web application: the frontend and backend run in local Docker containers, while the database is hosted on the AWS RDS service.

 

Prerequisites:

- AWS account (for AWS RDS)

- Docker CLI on the local machine

Flow Diagram

 

 

Step 1: Create a database in AWS RDS.

Step 2: Create a container to connect to the database in AWS RDS and upload data.

Step 3: Build the backend Docker image.

Step 4: Build the frontend Docker image.

Step 5: Run the FE and BE containers from the Docker images.

Step 6: Open localhost in a browser and browse the deployed e-commerce app.

 

Docker:

Docker is an open-source platform for developing, shipping and running applications. It packages up an application’s code and dependencies into a standardized unit called a container, which can then be run on any computing environment that supports Docker. This makes it easy to deploy and manage applications across different environments, from development to production.
Docker containers are lightweight and portable, and they share the kernel of the host operating system, which makes them much more efficient than virtual machines. Containers also isolate applications from the host: by default a container cannot access the host system’s files or network namespaces, although because the kernel is shared this isolation is weaker than that of a full virtual machine.
Docker has become the de facto standard for containerization, and it is used by a wide range of organizations, including Amazon, Google, Microsoft, and Netflix. Docker is a powerful tool that can help you develop, deploy, and manage your applications more efficiently and securely.
Here are some of the benefits of using Docker:
Portability: Docker containers can be run on any computing environment that supports Docker, regardless of the underlying operating system. This makes it easy to move applications between different environments, such as development, staging, and production.
Efficiency: Docker containers are lightweight and share the kernel of the host operating system, which makes them much more efficient than virtual machines. This can lead to significant savings in compute resources.
Security: Docker containers isolate applications from the host system, and by default they cannot access the host’s files or network directly. This makes Docker a good choice for running applications that need to be separated from the host, keeping in mind that kernel sharing makes a container’s isolation boundary weaker than a virtual machine’s.
Scalability: Docker containers can be easily scaled up or down, making it easy to manage applications that have fluctuating workloads.

AWS RDS:

Amazon RDS (Relational Database Service) is a managed relational database service provided by Amazon Web Services (AWS). It simplifies the setup, operation, and scaling of a relational database, making it easier for developers to focus on their applications rather than database management tasks.

 

Flow-Diagram

In this scenario, the user accesses the application via localhost:80, which redirects to the app-frontend container on the local machine. The app-frontend container communicates with the app-backend container on port 5000, both residing locally. The backend container interacts with an AWS RDS MySQL database, fetching and storing data. The interaction with the RDS database is facilitated through the RDS endpoint URL and port 3306. This architecture separates the frontend and backend components, with the backend container managing data operations and connecting seamlessly to a remote database in AWS RDS for data storage.
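Since the original diagram is an image, the flow described above can be sketched roughly in text:

```text
Browser (localhost:80)
        |
        v
app-frontend container (nginx, port 80)
        |
        v  port 5000
app-backend container (Node.js / pm2)
        |
        v  port 3306, via the RDS endpoint URL
AWS RDS (MySQL)
```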

Step 1: Create a database in AWS RDS

The database created in RDS should be “publicly accessible”, and we then need to create a security group for the database that allows port 3306 to be accessed from anywhere.

 

  • To create the database in RDS, follow the steps below.

 

– Log in to AWS.

– Services >> Database >> RDS >> Create database.

– Choose a database creation method >> Standard create.

– Engine options >> MySQL.

– Templates >> Free tier.

– Settings >> Credentials Settings >> set a password.

– Instance configuration >> db.t3.micro.

– Connectivity >> Don’t connect to an EC2 compute resource.

– Public access >> Yes.

– VPC security group (firewall) >> Create new.

– Database authentication >> Password authentication, then create.
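For reference, the console choices above map roughly onto a single AWS CLI call. This is only a sketch: the instance identifier and password are placeholders, and the aws line is commented out so the snippet can be read (or run) without AWS credentials.

```shell
DB_ID="test-databaseq"   # placeholder instance identifier; use your own
DB_PASSWORD="changeme"   # placeholder; never hardcode real credentials

# Approximate CLI equivalent of the console steps above:
# aws rds create-db-instance \
#   --db-instance-identifier "$DB_ID" \
#   --engine mysql \
#   --db-instance-class db.t3.micro \
#   --allocated-storage 20 \
#   --master-username admin \
#   --master-user-password "$DB_PASSWORD" \
#   --publicly-accessible

echo "would create MySQL instance '$DB_ID' (db.t3.micro, publicly accessible)"
```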

 

 

  • Update the security group of the database

 

Navigate to Databases, select your database, and open the security group that was created for it.

Connectivity & security >> VPC security groups >> Edit inbound rules >> add a rule for port 3306 (MYSQL) with Source = Anywhere, then save.
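The same inbound rule can be expressed with the AWS CLI (a sketch only; the security-group id is a placeholder and the aws call is commented out so the snippet is safe to run without credentials):

```shell
SG_ID="sg-0123456789abcdef0"   # placeholder: the security group created for the DB
PORT=3306

# aws ec2 authorize-security-group-ingress \
#   --group-id "$SG_ID" \
#   --protocol tcp --port "$PORT" \
#   --cidr 0.0.0.0/0

echo "inbound rule: tcp/$PORT from 0.0.0.0/0 on $SG_ID"
```

Opening 3306 to 0.0.0.0/0 is convenient for this walkthrough, as the text above notes, since the containers connect from an arbitrary local machine.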

 

 

Step 2: Create a container to connect to the database in AWS RDS and upload data.

 

Follow the steps below:

  • Grab the database endpoint URL from AWS RDS

To find the database endpoint, browse to AWS RDS >> your database >> Connectivity & security >> Endpoint and port.

  • Create a container and connect to it from the local machine

The command below is run on the local machine (in my case a Windows shell, hence the winpty prefix; on Linux or macOS drop winpty). It creates a container from the Docker image node:14-alpine and opens a shell inside it.

winpty docker run -it -p 3306:3306 node:14-alpine sh

 

 

  • Once logged in to the container, follow the steps below: create a directory, clone the Git repo, connect to the database server, create the database, and then upload the DB dump to it.

 

           Create a directory:

mkdir -p /home/webapp

cd /home/webapp

 

           Install Git and the MySQL client in the container

apk add git mysql mysql-client

 

            Git clone 

git clone https://github.com/devopsenlight/web-app-deployment-on-cloud.git

cd /home/webapp/web-app-deployment-on-cloud/backend

 

            Connect to the database server and create the database

mysql -h test-databaseq.caxdt0nmodrh.ap-south-1.rds.amazonaws.com -u admin -p

create database webapp;

exit;

 

             Upload the DB dump to the database

mysql -h test-databaseq.caxdt0nmodrh.ap-south-1.rds.amazonaws.com -u admin -p webapp < sql_dump.sql

 

               Some hands-on checks on the database

mysql -h test-databaseq.caxdt0nmodrh.ap-south-1.rds.amazonaws.com -u admin -p

show databases ;

use webapp;

show tables;

select * from users;

ALTER TABLE users DROP COLUMN username;

select * from users;

exit;

 

 

Step 3: Building Backend Docker image

– Use below dockerfile to create backend docker image.

 


#  Webapp - Backend : Use the official DockerHub Node Alpine base image as the starting point
FROM node:14-alpine

# Install curl, git, npm and nginx on alpine image
RUN apk add curl && \
    apk add git && \
    apk add npm && \
    apk add nginx 

# create /home/webapp as the working directory
RUN mkdir -p /home/webapp
WORKDIR /home/webapp

# Install nvm, source nvm and use nvm version 16.20.2 and install angular/cli module
RUN curl -o install.sh https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh && \
    sh /home/webapp/install.sh && \
    . ~/.nvm/nvm.sh && \
    nvm install 16.20.2 && \
    nvm use 16.20.2 && \
    npm install -g @angular/cli@14 && \
    npm install -g pm2

# Install and configure MySQL
RUN apk --no-cache add mysql mysql-client

# Clone your Node.js application repository
WORKDIR /home/webapp
RUN git clone https://github.com/devopsenlight/web-app-deployment-on-cloud.git

# Install dependencies for your Node.js application
WORKDIR /home/webapp/web-app-deployment-on-cloud/backend
RUN npm install

# Create a directory for environment files
WORKDIR /home/webapp/web-app-deployment-on-cloud/backend/env

# Create a production environment file
RUN echo "PORT=5000" > production.env && \
    echo "DB_HOST='test-databaseq.caxdt0nmodrh.ap-south-1.rds.amazonaws.com'" >> production.env && \
    echo "DB_USER='admin'" >> production.env && \
    echo "DB_PASSWORD='Rakshit12*'" >> production.env && \
    echo "DB_NAME='webapp'" >> production.env

# Define the command to start your application
WORKDIR /home/webapp/web-app-deployment-on-cloud/backend
CMD ["pm2-runtime", "start", "npm", "--", "start", "--no-daemon"]
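The production.env generation baked into the Dockerfile can be sanity-checked locally with the same echo pattern (placeholder values here; substitute your real RDS endpoint and credentials):

```shell
# Recreate the env file locally to verify its format
mkdir -p /tmp/webapp-env
cd /tmp/webapp-env
echo "PORT=5000" > production.env
echo "DB_HOST='<your-rds-endpoint>'" >> production.env
echo "DB_USER='admin'" >> production.env
echo "DB_PASSWORD='<your-password>'" >> production.env
echo "DB_NAME='webapp'" >> production.env
cat production.env
```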

 

In the above BE Dockerfile, update the RDS database endpoint (DB_HOST) to your own, as shown in the screenshot below:

 

 

Then build docker image using the docker file. Command to create docker image from dockerfile:

cd /path/where/Dockerfile of BE is present

docker build . --no-cache -t webapp-be

Once the backend image is built, we need to create the frontend image.

 

 

Step 4: Building Frontend Docker image

– Use the below dockerfile to create Frontend docker image.

 

#  Webapp - Frontend : Use the official DockerHub Node Alpine base image as the starting point
FROM node:14-alpine

# Install curl, git, npm and nginx on alpine image
RUN apk add curl && \
    apk add git && \
    apk add npm && \
    apk add nginx 

# create /home/webapp as the working directory
RUN mkdir -p /home/webapp
WORKDIR /home/webapp

# Install nvm, source nvm and use nvm version 16.20.2 and install angular/cli module
RUN curl -o install.sh https://raw.githubusercontent.com/nvm-sh/nvm/v0.38.0/install.sh && \
    sh /home/webapp/install.sh && \
    . ~/.nvm/nvm.sh && \
    nvm install 16.20.2 && \
    nvm use 16.20.2 && \
    npm install -g @angular/cli@14
	
# Create a directory for your Node.js application
WORKDIR /home/webapp
RUN git clone https://github.com/devopsenlight/web-app-deployment-on-cloud.git

#Install node modules as mentioned in package.json in the client folder
WORKDIR /home/webapp/web-app-deployment-on-cloud/client
RUN npm install


WORKDIR /home/webapp/web-app-deployment-on-cloud/client/src/environments
RUN echo "export const environment = {" > environment.prod.ts && \
    echo "production: true," >> environment.prod.ts && \
    echo "apiUrl: 'http://localhost:5000/api/v1/'," >> environment.prod.ts && \
    echo "};" >> environment.prod.ts

# Copy nginx.conf file from /home/webapp/web-app-deployment-on-cloud/nginx.conf
RUN cp /home/webapp/web-app-deployment-on-cloud/nginx.conf /etc/nginx/nginx.conf

WORKDIR /home/webapp/web-app-deployment-on-cloud/client/
# Note: the --prod flag was removed in Angular CLI 14; use the production configuration
RUN ng build --configuration production

CMD ["nginx", "-g", "daemon off;"]
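The Dockerfile above copies the repository's own nginx.conf into the image. For orientation only, a minimal configuration that serves the built Angular app on port 80 might look like this (a sketch under assumptions: the dist path is a guess, and this is not the repo's actual file):

```nginx
worker_processes 1;
events { worker_connections 1024; }
http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;
    server {
        listen 80;
        # assumed output directory of `ng build` inside the container
        root /home/webapp/web-app-deployment-on-cloud/client/dist;
        index index.html;
        # Angular client-side routes fall back to index.html
        location / {
            try_files $uri $uri/ /index.html;
        }
    }
}
```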

 

To create the frontend image from FE Dockerfile, run the below docker command:

cd /path/where/Dockerfile of FE is present

docker build . --no-cache -t webapp-fe

 

 

Step 5: Running FE and BE containers from the images created

 

Run Backend and Frontend containers with port mapping

Run the BE container and publish port 5000 to establish the connection between the backend and the frontend, and port 3306 for data transfer between the backend and the database.

docker run -it -d -p 5000:5000 -p 3306:3306 webapp-be

 

 

Run Frontend container

While starting the frontend container, we map port 80 so that the frontend web application is accessible via port 80.

docker run -it -d -p 80:80 webapp-fe

 

That’s all: the database is available in RDS, the backend container connects to it on port 3306, the frontend container talks to the backend on port 5000, and the frontend itself is running and accessible via port 80.
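Equivalently, the two docker run commands above can be captured in a Docker Compose file (a sketch; the image names match the builds from Steps 3 and 4):

```yaml
# docker-compose.yml (sketch)
services:
  backend:
    image: webapp-be
    ports:
      - "5000:5000"
      - "3306:3306"
  frontend:
    image: webapp-fe
    ports:
      - "80:80"
    depends_on:
      - backend
```

With this file in place, `docker compose up -d` starts both containers together.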

 

 

Step 6: Open localhost in a browser and browse the e-commerce app we deployed

 

Open localhost in your browser on port 80 to view the web application.

 

localhost will now start working and the web app will be displayed 🙂

 

 

Thanks for reading! Please contact us in case of any queries.
