The cheap way to use Docker to deploy your FastAPI
FastAPI is a powerful API framework for Python that lets you quickly create and develop APIs. But how do you deploy those APIs?
From the FastAPI website:
FastAPI is a modern, fast (high-performance), web framework for building APIs with Python 3.6+ based on standard Python type hints.
What does this mean? It means that FastAPI lets you quickly build and develop APIs with some powerful features:
- OpenAPI for API declaration
- Automatic data model documentation with JSON Schema
- Automatic client code generation in many languages
- Automatic docs with Swagger UI (allows testing the API from the docs) and ReDoc
- Standard Python type declarations, no new syntax needed
- Sensible defaults
- Validations for all Python types, including JSON dict and list, all the way to URL and Email (illustrated in the sketch after this list)
- Security and Authentication with HTTP Basic, OAuth2 and passing of API keys
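As a quick taste of those type-hint-driven features, here is a minimal sketch (the endpoint and model names are purely illustrative, not part of the example that follows) of how plain Python types give you validation and documentation for free:
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


# Request body model: the field types are validated automatically by Pydantic
class Item(BaseModel):
    name: str
    price: float
    description: Optional[str] = None


# 'item_id: int' means non-integer values are rejected with a 422 response,
# and 'q' becomes an optional query parameter in the generated docs
@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
    return {"item_id": item_id, "q": q}


# The request body is parsed and validated against the Item model
@app.post("/items/")
def create_item(item: Item):
    return item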
The first step is, of course, to install FastAPI with pip:
pip install fastapi
We also need to install an ASGI server to run and test the API:
pip install uvicorn
Now let’s create a quick and simple API to get domain name information:
import requests
from fastapi import FastAPI

app = FastAPI()


@app.get("/")
def root():
    return {"Hello": "API"}


@app.get("/domain/{domain}")
def get_domain(domain: str):
    response = requests.get("http://ip-api.com/json/" + domain)
    return response.json()
For our API we also need to install the “requests” library:
pip install requests
As you can see, with just 15 lines of code we have created an API, so now let's examine the code:
- Lines 1 and 2, we perform our imports
- Line 4, we define our main application
- Line 7, we define our root endpoint
- Line 12, we define our search endpoint; this endpoint uses requests to perform a simple call to an external domain API and return its JSON response (a quick in-process test with FastAPI's TestClient follows below)
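Before deploying we can also exercise these endpoints in-process with FastAPI's TestClient (it relies on the 'requests' library we just installed); this is just a minimal sketch, assuming the code above is saved as main.py:
from fastapi.testclient import TestClient

from main import app

client = TestClient(app)


def test_root():
    response = client.get("/")
    assert response.status_code == 200
    assert response.json() == {"Hello": "API"}


def test_domain():
    # This endpoint calls the external ip-api.com service, so it needs network access
    response = client.get("/domain/google.com")
    assert response.status_code == 200
Running it with pytest, for example, gives a quick sanity check without starting a server.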
For reference here is the requirements.txt file:
certifi==2020.6.20
chardet==3.0.4
click==7.1.2
fastapi==0.61.1
h11==0.9.0
idna==2.10
pydantic==1.6.1
requests==2.24.0
starlette==0.13.6
urllib3==1.25.10
uvicorn==0.11.8
websockets==8.1
To run our API, we start our Uvicorn server with:
uvicorn main:app
Opening http://127.0.0.1:8000/docs in our browser, we can see the interactive docs.
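With the server running, we can also hit the search endpoint directly from another Python shell; a small sketch (example.com is just a placeholder domain):
import requests

# Query the running API for a domain name
response = requests.get("http://127.0.0.1:8000/domain/example.com")
print(response.status_code)
print(response.json())  # the JSON returned by the external ip-api.com service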
Now that we have our API code and we have tested it, we need to deploy it in our “Production” environment to make it accessible to the world.
For that we can choose from different possibilities:
- Create a VM in the cloud, install Python and the necessary dependencies and run our API
- Deploy to AWS Lambda (or other cloud based execution service)
- Run a Docker container and expose it directly (or with Nginx)
In this case we will deploy our API to a Docker container and use Nginx to expose our API to the outside world.
Exposing our API with Nginx has several advantages but the main one for this example is the ability to easily add an SSL certificate with Let’s Encrypt.
In order to expose our API we will create the following structure for our Docker infrastructure, which will be up and running in a cloud VPS:
- A Docker instance running our FastAPI application
- A Docker instance running Nginx with SSL to receive the requests and pass them on to the API instance
In order to achieve this combination of Docker instances we will use docker-compose.
But first let’s start by creating our file structure to have our Dockerfiles organized:
.
├── docker-compose.yaml
├── domainchecker
│   ├── Dockerfile
│   └── app
│       └── DomainChecker
│           ├── main.py
│           └── requirements.txt
└── nginx
    ├── Dockerfile
    ├── fullchain.pem
    ├── nginx.conf
    ├── options-ssl-nginx.conf
    ├── privkey.pem
    └── ssl-dhparams.pem
Our file tree starts with the docker-compose.yaml file and two directories for the two docker instances:
- domainchecker folder contains our FastAPI file and the Dockerfile for that docker instance
- nginx folder contains the nginx.conf config file, associated files for the SSL certificate and the Dockerfile for that instance
Let’s check our docker-compose file:
version: "3.7"
services:
web:
build: nginx
ports:
- 80:80
- 443:443
depends_on:
- api
api:
build: domainchecker
environment:
- PORT=8080
ports:
- 8080:8080
In our docker-compose file we define the following services:
- web, contains the nginx instance. We expose ports 80 and 443 (HTTP and HTTPS), declare a dependency on the api instance being available, and build from the Dockerfile in the nginx folder.
- api, contains our FastAPI instance. We expose port 8080, pass a PORT environment variable to the instance and build it with the Dockerfile from the domainchecker folder (see the launcher sketch after this list for how that variable is used)
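The PORT variable is read by the base image's startup script to choose which port Gunicorn/Uvicorn binds to. If you ever want to run the app outside that image while honoring the same variable, a minimal launcher sketch (this script is only an illustration, not part of the project files) could look like:
import os

import uvicorn

from main import app

if __name__ == "__main__":
    # Bind to the PORT set in docker-compose.yaml, defaulting to 8080
    port = int(os.environ.get("PORT", 8080))
    uvicorn.run(app, host="0.0.0.0", port=port)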
Our docker-compose file depends on the Dockerfiles in each folder to create each instance, so let’s examine those files:
=> domainchecker/Dockerfile
FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7
COPY ./app/DomainChecker/requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
COPY ./app/DomainChecker /app
For the FastAPI build, we start from the tiangolo/uvicorn-gunicorn-fastapi Docker image (published by the FastAPI creator), copy the requirements file, install those requirements and finally copy our app into the image's '/app' folder (the image expects the application in that folder).
=> nginx/Dockerfile
FROM nginx
COPY nginx.conf /etc/nginx/nginx.conf
COPY fullchain.pem /etc/letsencrypt/live/api.domain.com/fullchain.pem
COPY privkey.pem /etc/letsencrypt/live/api.domain.com/privkey.pem
COPY options-ssl-nginx.conf /etc/letsencrypt/options-ssl-nginx.conf
COPY ssl-dhparams.pem /etc/letsencrypt/ssl-dhparams.pem
For the nginx build, we retrieve the official nginx docker image, copy the nginx configuration file and all the files associated with the SSL certificates.
Let’s check the nginx configuration file to see how these files will be used:
=> nginx/nginx.conf
worker_processes 1;

events {
    worker_connections 1024; # increase if you have lots of clients
    accept_mutex off; # set to 'on' if nginx worker_processes > 1
    # 'use epoll;' to enable for Linux 2.6+
    # 'use kqueue;' to enable for FreeBSD, OSX
}

http {
    include mime.types;
    # fallback in case we can't determine a type
    default_type application/octet-stream;
    access_log /var/log/nginx/access.log combined;
    sendfile on;

    upstream app_server {
        # fail_timeout=0 means we always retry an upstream even if it failed
        # to return a good HTTP response

        # for UNIX domain socket setups
        #server unix:/tmp/gunicorn.sock fail_timeout=0;

        # for a TCP configuration
        server api:8080 fail_timeout=0;
    }

    server {
        # if no Host match, close the connection to prevent host spoofing
        listen 80 default_server;
        return 444;
    }

    server {
        # use 'listen 80 deferred;' for Linux
        # use 'listen 80 accept_filter=httpready;' for FreeBSD
        client_max_body_size 4G;

        # set the correct host(s) for your site
        server_name api.domain.com;

        keepalive_timeout 5;

        # path for static files
        root /path/to/app/current/public;

        location / {
            # checks for static file, if not found proxy to app
            try_files $uri @proxy_to_app;
        }

        location @proxy_to_app {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header Host $http_host;
            # we don't want nginx trying to do something clever with
            # redirects, we set the Host: header above already.
            proxy_redirect off;
            proxy_pass http://app_server;
        }

        #error_page 500 502 503 504 /500.html;
        #location = /500.html {
        #    root /path/to/app/current/public;
        #}

        listen 443 ssl; # managed by Certbot
        ssl_certificate /etc/letsencrypt/live/api.domain.com/fullchain.pem; # managed by Certbot
        ssl_certificate_key /etc/letsencrypt/live/api.domain.com/privkey.pem; # managed by Certbot
        include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
        ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    }

    server {
        if ($host = api.domain.com) {
            return 301 https://$host$request_uri;
        } # managed by Certbot

        listen 80;
        server_name api.domain.com;
        return 404; # managed by Certbot
    }
}
The nginx configuration file follows the standard recommended configuration, but let's discuss the main sections:
- upstream app_server, contains the definition of our API endpoint; as you can see it references the address 'api:8080' (api is the name of the other Docker instance in the docker-compose file, which allows cross-communication between instances)
- server_name, defines the correct domain name for our nginx server
- location @proxy_to_app, defines the proxy rules to reach and serve the API endpoints, forwarding the original client information in the X-Forwarded-* headers (see the sketch after this list)
- listen 443 ssl, tells nginx to listen for requests over SSL and points to the files that support the SSL endpoint
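Because Nginx proxies the traffic, the original client information only reaches the API through those X-Forwarded-* headers; a small illustrative sketch (this endpoint is not part of the example API) of reading them in FastAPI:
from fastapi import FastAPI, Request

app = FastAPI()


@app.get("/whoami")
def whoami(request: Request):
    # Headers set by the Nginx proxy configuration above
    return {
        "client_ip": request.headers.get("x-forwarded-for"),
        "scheme": request.headers.get("x-forwarded-proto"),
        "host": request.headers.get("host"),
    }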
These are all the configuration files that we need to get our API up and running with Docker.
Now we just need to start up our docker-compose instances (in the folder containing the docker-compose.yaml file):
sudo docker-compose up --build
We run docker-compose with the --build flag in this case to force rebuilding the Docker images.
As you can see, the files for the API are copied and the API server is started, then the nginx configuration files are copied and that server is started as well.
You can now reach the API at the URL you defined in server_name.
As you saw in this article, it is very simple and fast to create an API with FastAPI, including its documentation.
Also, using Docker, once the proper configuration files are created (docker-compose.yaml and the Dockerfiles), it is easy and fast to deploy the API for live usage.
The beauty of Docker is that this blueprint can be copied and used with other API deployments.
With these skills you can now create your own API and even sell it on RapidAPI, for instance.
You can check out the live API that powered this example at: https://rapidapi.com/nunobispo/api/domain-checker8
You can check out my GitHub at https://github.com/nunombispo
Or check my website at https://developer-service.io