Build AWS VPC and Load Balancer for Deploy Web App

Enrico Megantara
9 min read · Nov 20, 2022


Here we will create our own VPC and deploy our web app behind a load balancer, spread across multiple Availability Zones (AZs). The load balancer will be created in the public subnets, while the instances launched automatically by the Auto Scaling group will be placed in the private subnets. VPC endpoints will be used to connect the EC2 instances to the S3 bucket, and the app running inside the instances will be built with Django.

Create VPC (Virtual Private Cloud)

A virtual private cloud (VPC) is a virtual network that is very similar to a traditional network that you would operate in your own data center. After you create a VPC, you can add subnets. A subnet is a range of IP addresses in your VPC, and each subnet must reside in a single Availability Zone. After you add a subnet, you can deploy AWS resources in your VPC. There are 2 types of subnets, namely public subnets and private subnets.

  • A public subnet is a subnet that is associated with a route table that has a route to an Internet gateway. This connects the VPC to the Internet and to other AWS services.
  • A private subnet is a subnet that is associated with a route table that doesn’t have a route to an internet gateway.

We already have a default VPC, but for this project we will create a new one. We start building by opening the VPC service in the AWS console and creating a new VPC together with its subnets. We'll use two AZs, with 2 public subnets and 2 private subnets.

We keep the default CIDR blocks: 10.0.0.0/20 for the public subnet in AZ-A and 10.0.16.0/20 for the public subnet in AZ-B; for the private subnets, 10.0.128.0/20 in AZ-A and 10.0.144.0/20 in AZ-B. We use a NAT gateway in a single AZ, and an S3 gateway VPC endpoint is required so the instances can reach S3. The console shows a preview diagram of the subnets, route tables, and network connections.
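The "VPC and more" wizard in the console handles all of this in one step, but the equivalent AWS CLI calls are sketched below for reference, assuming a 10.0.0.0/16 VPC CIDR and us-east-1 as the region; the IDs and AZ names are placeholders to substitute with your own.

  • aws ec2 create-vpc --cidr-block 10.0.0.0/16
  • aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.0.0/20 --availability-zone us-east-1a # public subnet, AZ-A
  • aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.16.0/20 --availability-zone us-east-1b # public subnet, AZ-B
  • aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.128.0/20 --availability-zone us-east-1a # private subnet, AZ-A
  • aws ec2 create-subnet --vpc-id vpc-xxxx --cidr-block 10.0.144.0/20 --availability-zone us-east-1b # private subnet, AZ-B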

Create Amazon S3 Bucket

Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance. Customers of all sizes and industries can use Amazon S3 to store and protect any amount of data for a range of use cases, such as data lakes, websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides management features so that you can optimize, organize, and configure access to your data to meet your specific business, organizational, and compliance requirements.
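If you prefer the CLI over the console, a one-line sketch like the following creates the bucket; the bucket name here is only a hypothetical example and must be globally unique.

  • aws s3 mb s3://teman-kelas-bucket --region us-east-1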

Create Amazon RDS (Relational Database Service)

Amazon RDS is a managed relational database service that provides six familiar database engines to choose from, including Amazon Aurora, MySQL, MariaDB, PostgreSQL, Oracle, and Microsoft SQL Server. This means that the code, applications, and tools you already use today with your existing databases can be used with Amazon RDS. Amazon RDS handles routine database tasks, such as provisioning, patching, backup, restore, failure detection, and repair.
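A hedged CLI sketch of creating a small MySQL instance for this project is below (MySQL matches the MySQL dependencies installed later in the article); the identifier, credentials, instance class, and subnet group are assumptions to replace with your own, and the database should live in the private subnets and not be publicly accessible.

  • aws rds create-db-instance --db-instance-identifier teman-kelas-db --engine mysql --db-instance-class db.t3.micro --allocated-storage 20 --master-username admin --master-user-password <your-password> --db-subnet-group-name <private-subnet-group> --no-publicly-accessible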

Create IAM (Identity and Access Management)

AWS Identity and Access Management (IAM) is a web service that helps you securely control access to AWS resources. You use IAM to control who is authenticated (signed in) and authorized (has permissions) to use resources. When you create an AWS account, you begin with one sign-in identity that has complete access to all AWS services and resources in the account. This identity is called the AWS account root user and is accessed by signing in with the email address and password that you used to create the account.
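For this project the web app authenticates to S3 with an access key and secret key (they go into the .env file later), which implies an IAM user with S3 permissions. A minimal sketch, assuming a hypothetical user name and the broad AmazonS3FullAccess managed policy (a policy scoped to just our bucket would be tighter):

  • aws iam create-user --user-name teman-kelas-app
  • aws iam attach-user-policy --user-name teman-kelas-app --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
  • aws iam create-access-key --user-name teman-kelas-app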

Deploy EC2 instance

Amazon Elastic Compute Cloud (Amazon EC2) provides scalable computing capacity in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 eliminates your need to invest in hardware up front, so you can develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual servers as you need, configure security and networking, and manage storage. Amazon EC2 enables you to scale up or down to handle changes in requirements or spikes in popularity, reducing your need to forecast traffic.

First we need an instance just for deploying our application; it is not meant for production. We will use this EC2 instance to create an AMI, which the Auto Scaling group will use later. We create the instance in a public subnet first, since this is only for deployment, choose Amazon Linux as the OS, and create a new key pair. In the network settings we select the VPC that we created and one of its public subnets. For the security group, allow HTTP from anywhere and allow custom TCP port 8000 for application testing.
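The security-group rules described above can be sketched with the CLI as follows; the group name and IDs are placeholders, and in a real setup you might restrict port 8000 to your own IP.

  • aws ec2 create-security-group --group-name web-app-sg --description "web app deploy" --vpc-id vpc-xxxx
  • aws ec2 authorize-security-group-ingress --group-id sg-xxxx --protocol tcp --port 80 --cidr 0.0.0.0/0
  • aws ec2 authorize-security-group-ingress --group-id sg-xxxx --protocol tcp --port 8000 --cidr 0.0.0.0/0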

Deploy Web App Django

Django is a full-stack framework for building web applications in Python. Similar to Flask, developers can build a website's backend or frontend using only this framework. Django is known for fast application development and its clean, pragmatic design.

First we install the git, MySQL, and Python dependencies. After that we clone our project, install Python's virtual environment support, and create a virtual environment folder. Inside that environment we install everything the web app needs, such as its modules and frameworks. Finally we create an .env file that holds the access keys and other settings from our AWS resources.
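On Amazon Linux those steps look roughly like the commands below; the exact package names, repository URL, and project folder are assumptions based on this project.

  • sudo yum install -y git python3 python3-devel gcc mariadb-devel
  • git clone <your-repository-url> teman_kelas
  • cd teman_kelas
  • python3 -m venv venv
  • source venv/bin/activate
  • pip install -r requirements.txt
  • nano .env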

In that .env file we enter the AWS access key, AWS secret key, S3 storage bucket name, database name, database user, database password, and database host from the AWS resources we created earlier, plus a Django secret key for security. After that, the migrate command connects the app to our RDS database. Once it succeeds, the bucket can store files from our web app, and we can test the app by copying the EC2 public IP into the browser and appending port 8000.
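A hedged sketch of the .env contents and the follow-up commands; the variable names depend on how the project's settings.py reads them, and the values stay as placeholders.

  AWS_ACCESS_KEY_ID=<access-key>
  AWS_SECRET_ACCESS_KEY=<secret-key>
  AWS_STORAGE_BUCKET_NAME=<bucket-name>
  DB_NAME=<database-name>
  DB_USER=<database-user>
  DB_PASSWORD=<database-password>
  DB_HOST=<rds-endpoint>
  SECRET_KEY=<django-secret-key>

  • python manage.py migrate
  • python manage.py runserver 0.0.0.0:8000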

Deploy Web Server NGINX

NGINX is an open-source web server software that functions as a reverse proxy, HTTP load balancer, as well as email proxy for IMAP, POP3, and SMTP.

  • sudo amazon-linux-extras enable nginx1
  • sudo yum clean metadata
  • sudo yum install nginx
  • sudo nano /etc/nginx/conf.d/django.conf

After installing the NGINX web server, requests from the internet hit the web server first; NGINX then talks to uWSGI, which in turn talks to our web app, while static files from the web app are served directly by NGINX. In the commands above, we first enable the nginx1 topic, then install NGINX, and finally open a new configuration file to configure our web server.
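A minimal sketch of what /etc/nginx/conf.d/django.conf might contain, assuming the uWSGI socket path used later in this article and the static files copied under /usr/share/nginx/html/; the server_name and paths are assumptions for this project.

  server {
      listen 80;
      server_name _;

      # static files are served by NGINX directly
      location /static/ {
          alias /usr/share/nginx/html/static/;
      }

      # everything else is passed to uWSGI
      location / {
          include uwsgi_params;
          uwsgi_pass unix:/run/uwsgi/teman_kelas.sock;
      }
  }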

  • sudo cp static /usr/share/nginx/html/ -rf
  • sudo systemctl restart nginx
  • sudo systemctl enable nginx
  • pip install uwsgi
  • sudo mkdir /run/uwsgi/
  • sudo chown ec2-user:ec2-user /run/uwsgi/
  • uwsgi --socket /run/uwsgi/teman_kelas.sock --chdir /home/ec2-user/teman_kelas/ --module teman_kelas.wsgi --chmod-socket=666
  • sudo mkdir -p /etc/uwsgi/sites
  • sudo nano /etc/uwsgi/sites/teman_kelas.ini
  • sudo systemctl start teman_kelas
  • sudo systemctl enable teman_kelas
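The teman_kelas.ini opened by the nano command above mirrors the flags of the uwsgi command shown earlier, and the systemctl commands imply a small systemd unit; both are sketched below under that assumption. The uwsgi binary path depends on where pip installed it.

  # /etc/uwsgi/sites/teman_kelas.ini
  [uwsgi]
  chdir = /home/ec2-user/teman_kelas
  module = teman_kelas.wsgi
  socket = /run/uwsgi/teman_kelas.sock
  chmod-socket = 666

  # /etc/systemd/system/teman_kelas.service (hypothetical)
  [Unit]
  Description=uWSGI for teman_kelas

  [Service]
  User=ec2-user
  ExecStartPre=/bin/mkdir -p /run/uwsgi
  ExecStartPre=/bin/chown ec2-user:ec2-user /run/uwsgi
  # adjust the uwsgi path to where pip installed it (for example the virtualenv's bin directory)
  ExecStart=/home/ec2-user/.local/bin/uwsgi --ini /etc/uwsgi/sites/teman_kelas.ini
  Restart=always

  [Install]
  WantedBy=multi-user.target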

Create AMI (Amazon Machine Image)

An Amazon Machine Image (AMI) is a supported and maintained image provided by AWS that provides the information required to launch an instance. You must specify an AMI when you launch an instance. You can launch multiple instances from a single AMI when you require multiple instances with the same configuration. You can use different AMIs to launch instances when you require instances with different configurations.

Create Load Balancer and Launch Template

Next we'll create a launch template for load balancing. We select the Auto Scaling guidance option because we will use this template for auto scaling. For the image, we select the AMI that we created earlier; here I named it image-project. That image already has our web app installed, including the NGINX web server. For the security group, we select the group that we created earlier.
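A hedged CLI equivalent of this step is below; the template name, AMI ID, instance type, and security group ID are placeholders for the values created earlier.

  • aws ec2 create-launch-template --launch-template-name web-app-template --launch-template-data '{"ImageId":"ami-xxxxxxxx","InstanceType":"t3.micro","SecurityGroupIds":["sg-xxxxxxxx"]}'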

Load Balancer and Auto Scaling Group

An Application Load Balancer makes routing decisions at the application layer (HTTP/HTTPS), supports path-based routing, and can route requests to one or more ports on each container instance in your cluster. Application Load Balancers support dynamic host port mapping. For example, if your task’s container definition specifies port 80 for an NGINX container port, and port 0 for the host port, then the host port is dynamically chosen from the ephemeral port range of the container instance (such as 32768 to 61000 on the latest Amazon ECS-optimized AMI).

Create Auto Scaling

An Auto Scaling group contains a collection of EC2 instances that are treated as a logical grouping for the purposes of automatic scaling and management. An Auto Scaling group also lets you use Amazon EC2 Auto Scaling features such as health check replacements and scaling policies. Both maintaining the number of instances in an Auto Scaling group and automatic scaling are the core functionality of the Amazon EC2 Auto Scaling service.

To test the load balancer, we copy the load balancer's DNS name, open it in the browser, and log in to our application. When we refresh the web app, requests are spread across the 2 instances, but because the contents of both instances are identical we can't tell which one served the page. If the application opens through the load balancer's DNS name, the load balancer is working.
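A quick way to sanity-check the load balancer from a terminal is to request its DNS name a few times and confirm each request returns HTTP 200 while the requests are spread over both instances; the DNS name below is a placeholder.

  • for i in $(seq 1 4); do curl -s -o /dev/null -w "%{http_code}\n" http://<load-balancer-dns-name>/; done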
