Deploying FastAPI on AWS EC2 with Gunicorn, Nginx and Certbot

To be honest, I hated AWS 3-4 years back. I lost around $200 because I mistakenly started some additional services 😥. Yes, at that time I did not know about billing thresholds 😒. For this reason, I like DigitalOcean more than AWS. DigitalOcean, Linode and similar cloud providers come with predictable pricing.
However, there is one more truth: far more companies deploy their applications on AWS and expect AWS knowledge in their job descriptions. So, unfortunately, I need to cover it. 😒

Let's visit AWS and search for EC2.

Once you land on the EC2 dashboard, click on "Launch Instance".
[Image: AWS EC2 "Launch Instance" button on the EC2 dashboard]

Now comes the part where we need to do some configuration for our new Ubuntu machine. Feel free to choose an OS of your choice; most of the commands should be similar. Also, make sure that you choose a machine that is free tier eligible. Typically, the t2.micro instances are free tier eligible.

I would suggest creating a new key pair and downloading the .pem file. This will help us easily SSH into the machine.

Finally, configure the Network settings section and allow HTTP and HTTPS traffic from the internet. You need not touch the storage and additional settings, but feel free to experiment if you want.

That's it, now click on "Launch instance" and wait for a while. EC2 redirects to a page with cards. The first card might say "Create billing alerts"; I suggest you create billing alerts, but in case you are rich like Suniyo or Richie Rich, click on the second card that says "Connect to your instance".
Now, you can connect to your instance using your terminal or using the web UI. To connect with the web UI, simply select "Connect using EC2 Instance Connect" and click on the Connect button.

In case you are like me and prefer a terminal, move the .pem file to the .ssh folder and then type:

ssh -i ".ssh\aws-ec2.pem" ubuntu@your_public_ip
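
On Linux or macOS, a minimal equivalent might look like this (assuming the key is still named aws-ec2.pem and your_public_ip is the instance's public IPv4 address; ubuntu is the default user on Ubuntu AMIs):

# ssh refuses keys that are readable by others, so restrict permissions first
chmod 400 ~/.ssh/aws-ec2.pem
ssh -i ~/.ssh/aws-ec2.pem ubuntu@your_public_ip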

On a Linux machine, you need to store the .pem file at /home/username/.ssh and change the command accordingly, as shown above. The first thing that I generally do after SSHing into the machine is to run:

sudo apt update
sudo apt install python3-pip python3-venv 
python3 -m venv env
source ./env/bin/activate

Now, we can pull our codebase from GitHub and install the requirements

git clone https://github.com/sourabhsinha396/fastapi-blog
cd fastapi-blog/backend/
pip install -r requirements.txt

Assuming you are using SQLite, we do not need the Postgres installation. In case you need Postgres, execute the commands below and put the necessary configuration in the .env file afterward:

sudo apt install python3-dev libpq-dev postgresql postgresql-contrib
sudo -u postgres psql
CREATE DATABASE blog;
CREATE USER algoholic WITH PASSWORD 'supersecret';
ALTER ROLE algoholic SET client_encoding TO 'utf8';
ALTER ROLE algoholic SET default_transaction_isolation TO 'read committed';
ALTER ROLE algoholic SET timezone TO 'UTC';
GRANT ALL PRIVILEGES ON DATABASE blog TO algoholic;
\q
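
To confirm that the role and database were created correctly, you can try connecting over TCP with the new credentials (a quick sanity check; psql will prompt for the password we just set):

psql -h localhost -U algoholic -d blog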

Create and configure the .env file accordingly

POSTGRES_USER=algoholic
POSTGRES_PASSWORD=supersecret
POSTGRES_SERVER=localhost
POSTGRES_PORT=5432
POSTGRES_DB=blog

SECRET_KEY=supersecret


Now, we can try to start the uvicorn server using the command below.

uvicorn main:app --host 0.0.0.0 --port 8000

Now we can visit http://public_ip:8000, but it won't work 🤣 Think... think about what the missing piece could be. The reason is that the request is coming to port 8000, but that port is not open. Open your instance's details page, go to the Security tab, and click on the attached security group.

Once the security group page opens, we need to click on "Edit inbound rules". Then add a new rule that allows traffic on port 8000 from anywhere (0.0.0.0/0).
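
If you prefer the AWS CLI over the console, the same rule can be added like this (a sketch, assuming the CLI is configured with your credentials and sg-xxxxxxxx is the ID of the security group attached to the instance):

aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 8000 --cidr 0.0.0.0/0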

That's it. Now just make sure that the uvicorn server is running, and you can reach it at http://public_ip:8000. Take some rest, it is a good achievement.

However, there are 3 problems that I can currently notice.
1. We need to keep the uvicorn command running in the terminal, which is not feasible.
2. There is only 1 uvicorn process/worker handling the incoming requests, which does not scale.
3. If I restart the system, the uvicorn server should auto-restart, which is currently not happening.
To fix these issues, I plan to add Gunicorn to the pipeline.

Let's add it first; the reasoning will become clear as we go through the implementation. Now, let's jump to the Gunicorn configuration. Before that, we need to install Gunicorn by executing:

pip install gunicorn==21.2.0
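
Before wiring it into systemd, you can optionally run Gunicorn once in the foreground to verify that the worker class and module path are correct (the same arguments we will use in the service file below; stop it with Ctrl+C once you see it serving):

gunicorn -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 main:app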

Next, create a Gunicorn service file named your_app_name_gunicorn.service in /etc/systemd/system/:

#fastapi_gunicorn.service

[Unit]
Description=Gunicorn instance to serve fastapi application
After=network.target

[Service]
# run as the default EC2 ubuntu user
User=ubuntu
Group=ubuntu
WorkingDirectory=/home/ubuntu/fastapi-blog/backend
# 4 uvicorn workers managed by gunicorn from the virtualenv, bound to port 8000
ExecStart=/home/ubuntu/env/bin/gunicorn -w 4 -k uvicorn.workers.UvicornWorker -b 0.0.0.0:8000 main:app

[Install]
WantedBy=multi-user.target
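
Since we just created a new unit file, tell systemd to re-read its configuration so it can see the service:

sudo systemctl daemon-reload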

Now, we can enable and start the gunicorn service:
 

sudo systemctl enable fastapi_gunicorn

sudo systemctl start fastapi_gunicorn

sudo systemctl status fastapi_gunicorn
● fastapi_gunicorn.service - Gunicorn instance to serve fastapi application
     Loaded: loaded (/etc/systemd/system/fastapi_gunicorn.service; enabled; vendor preset: enabled)
     Active: active (running) since Sun 2024-01-14 12:15:39 UTC; 9s ago
   Main PID: 5592 (gunicorn)
      Tasks: 5 (limit: 1121)
     Memory: 170.9M
        CPU: 2.427s

Wow, the Gunicorn server started beautifully. Now we can test it by executing:

curl localhost:8000

You should get the homepage response of your FastAPI application. You can now also visit http://public_ip:8000 and see your application live. Now, even if you exit the virtual machine, our server keeps running.
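
If the curl request does not return anything, the service logs are the first place to look; systemd keeps them accessible via journalctl (shown here for the service name used above):

sudo journalctl -u fastapi_gunicorn -f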

Let's tame the next set of problems:

1. How do we access our server with a domain name instead of an IP?
2. How do we restrict some IPs from visiting our server?
3. How do we load balance the requests? For instance, let's say we start getting 1000 requests per second; how do we add more machines and distribute the load?
To solve the above problems, we will be using NGINX, a reverse proxy; in simple words, it is a router for our virtual machine. Let us first install nginx.

sudo apt install nginx

Let us create a new file named fastapi_nginx inside /etc/nginx/sites-available/:

sudo nano /etc/nginx/sites-available/fastapi_nginx
server {
    listen 80;
    # respond to requests for the public IP and the domain name
    server_name 44.204.235.1 algoholic.pro;

    location / {
        # forward requests to the Gunicorn/uvicorn server on port 8000
        proxy_pass http://127.0.0.1:8000;
        # pass the original host and client IP on to the application
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

We also need to create an A record with our domain provider, for instance GoDaddy or Hostinger, pointing the domain to the instance's public IP. Finally, we can link the config into sites-enabled, test our configuration, and restart nginx if everything looks good.

sudo ln -s /etc/nginx/sites-available/fastapi_nginx /etc/nginx/sites-enabled/

sudo nginx -t
#nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
#nginx: configuration file /etc/nginx/nginx.conf test is successful

sudo systemctl restart nginx
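
You can verify the proxying end to end from the instance itself, even before DNS has propagated (this hits nginx on port 80, which should forward to Gunicorn on port 8000; the Host header makes nginx pick our server block):

curl -H "Host: algoholic.pro" http://127.0.0.1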

Finally, after all of this hard work, here comes the moment of peace: visit http://yourdomain.com. In my case, I will visit http://algoholic.pro.
Note that you should manually type out http. If you visit the https version first, the browser will cache it and keep force-redirecting to https, which will not work yet.

One last thing: let us secure our application with HTTPS. No one would trust a product without HTTPS. To get HTTPS support, I will be using the Certbot tool.
 

sudo snap install --classic certbot
sudo certbot --nginx -d algoholic.pro
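
Certbot also sets up automatic renewal for the certificate. If you want to confirm that renewal will work, you can run a dry run (this simulates renewal without issuing a new certificate):

sudo certbot renew --dry-run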

Now, you can safely visit the https version of the webapp: https://algoholic.pro/
