As frustrating as it might be for a Python developer to figure out how to get Docker running locally, it's even more frustrating to get it working remotely, in production, and available online. Running in production usually requires an application server, which runs the Python application and handles web requests, and a web server/reverse proxy, which serves static files, handles SSL, and forwards requests from users to the application server. In this example, we'll use Gunicorn as the app server and nginx as the web server/reverse proxy.
In addition to these new components, which usually get configured in code, we'll also need a service that can make your code available online: something that provides the resources to run your code and allocates an IP address to your web server so it can be reached from the internet. For this example, we're going to use DigitalOcean.
DigitalOcean
The first thing we want to do is create a droplet on DigitalOcean.
As a preface, in my (distant) past, I used to do this sort of stuff on AWS. At the time, AWS was still sorta new, and so there were limited components to play with, and thus limited-er ways to screw things up. Since then, the ecosystem has exploded, so I found it much easier to take a more user-friendly approach and opted for DigitalOcean.
A droplet is a scalable virtual machine that runs on DigitalOcean’s infrastructure. It provides the computational resources needed to run your application, including CPU, memory, and storage. When you create a droplet, you choose the operating system (e.g., Ubuntu), the size (amount of CPU and memory), and additional features like SSH keys for secure access.
To create a droplet:
- Log in to your DigitalOcean account (obviously create one first if you need to).
- Create a new project if you haven’t already.
- Click on ‘Create’ and select ‘Droplets’.
- Choose the latest version of Ubuntu as your operating system.
- Select your desired plan, whatever is cheapest to start – a small web app doesn’t need much. And yes, this isn’t free; publishing things online will cost you money, albeit pennies for small stuff over a short period.
- Add your SSH key to the droplet. You can follow the instructions here for this step (there’s also a minimal key-generation sketch just after this list).
- Click “Create Droplet” and wait for the magic to happen (should just take a few seconds for the new VPS to start up).
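If you don’t already have an SSH key on your local machine, a minimal sketch for generating one and printing the public half (to paste into the DigitalOcean SSH key form) looks like this. Run it locally, not on the droplet; the email is just a comment label and the file path is the default, so adjust as needed:

ssh-keygen -t ed25519 -C "your.email@example.com"
# Print the public key so you can copy it into DigitalOcean
cat ~/.ssh/id_ed25519.pub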
Connect to the Droplet
The next step is to SSH into your droplet and start installing and deploying stuff. Connecting is a critical step, so it gets its own heading here. If you F’d up your SSH key, this won’t work. Grab the IP address from your new droplet; let’s say, for example, it’s aa.bb.cc.dd. Then run the following from a terminal (or command prompt) window:
ssh root@aa.bb.cc.dd
Install Docker and Docker Compose
Assuming you got onto the machine, you can now start installing all the pre-reqs.
To set up Docker on an Ubuntu system, you need to run several commands. Here’s a detailed explanation of each command:
Update the Package List
This command updates the list of available packages and their versions. It does not install or upgrade any packages but fetches the most recent information about the available packages from the repositories specified in /etc/apt/sources.list. This ensures that you can install the latest versions of the packages.
sudo apt-get update
Install Required Packages
These packages let apt fetch files over HTTPS and verify package signatures: ca-certificates provides trusted certificate authorities, curl downloads files from the command line, and gnupg handles GPG keys.
sudo apt-get install ca-certificates curl gnupg
Create the Directory for the Docker Keyring
This command creates a directory (/etc/apt/keyrings) where the Docker GPG key will be stored. The install command is used with the -m 0755 option, which sets the permissions of the directory to 0755 (read and execute permissions for everyone, and write permission for the owner). The -d option indicates that a directory is being created.
sudo install -m 0755 -d /etc/apt/keyrings
Download and Add the Docker GPG Key
This command downloads Docker’s official GPG key and converts it (via gpg --dearmor) into the binary format apt expects, storing it in the keyrings directory created above.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
Set the Correct Permissions on the Docker GPG Key
This command changes the permissions of the Docker GPG key file to make it readable by all users (a+r). This is necessary because apt needs to be able to read this key to verify the authenticity of the Docker packages during installation.
sudo chmod a+r /etc/apt/keyrings/docker.gpg
Add the Docker Repository
The following command adds the Docker repository to your Ubuntu system. This command combines several shell utilities to achieve this:
echo "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
Let’s break down this command:
- echo: This command outputs the string to the terminal.
- deb: This keyword indicates that the repository is a Debian archive.
- [arch=$(dpkg --print-architecture)]: This specifies the architecture of your system (e.g., amd64). The $(dpkg --print-architecture) command dynamically inserts your system’s architecture.
- signed-by=/etc/apt/keyrings/docker.gpg: This option specifies the location of the GPG key used to verify the packages from this repository.
- https://download.docker.com/linux/ubuntu: This is the URL of the Docker repository.
- $(lsb_release -cs): This command inserts the codename of your Ubuntu release (e.g., focal for Ubuntu 20.04). This ensures you get the appropriate packages for your version of Ubuntu.
- stable: This indicates that you want to use the stable channel of Docker packages.
- |: This is a pipe, which passes the output of one command as input to another.
- sudo: This runs the command with superuser privileges, which is necessary to write to system directories.
- tee /etc/apt/sources.list.d/docker.list: This writes the output to a file named docker.list in the /etc/apt/sources.list.d directory.
- > /dev/null: This discards the standard output, effectively silencing the tee command.
After that, update the package list once more to refresh the list of available packages and their versions from all configured repositories, including the newly added Docker repository. This ensures that you can install the latest Docker packages from the Docker repository:
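sudo apt-get update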
Lastly, you can then install:
- Docker Community Edition (CE), which is the core Docker software that includes the Docker Engine and CLI. It allows you to run containerized applications.
- The Docker CLI (Command-Line Interface) client, which is used to interact with the Docker daemon (the background service that manages Docker containers). This package allows you to run Docker commands from the terminal.
- Containerd, which is an industry-standard container runtime that manages the complete container lifecycle of its host system. It is a core component of Docker, responsible for managing containers’ execution and state.
- Docker Buildx, which is a CLI plugin that extends the Docker command with advanced features for building Docker images, such as multi-architecture builds, cache import/export, and more.
- Docker Compose, a tool for defining and running multi-container Docker applications. The docker-compose-plugin integrates Docker Compose functionality into the Docker CLI, allowing you to use docker compose commands to manage your multi-container applications.
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
Verify Docker
If Docker and Docker Compose are installed properly, you should be able to verify by running this command:
sudo docker run hello-world
Deploying your Application
As a next step, you need to generate an SSH key on the droplet and add it to your GitHub account, which tells GitHub it’s ok for your VPS to pull down code.
ssh-keygen -t ed25519 -C "your.email@gmail.com"
cat /root/.ssh/id_ed25519.pub
That generates your key, but then you need to add it to GitHub. Head here, and then click New SSH Key and paste the details you just printed.
And finally (sorta), pull down your code to the droplet from GitHub:
cd /var
mkdir www
cd www
git clone git@github.com:yourusername/yourrepository.git
cd yourrepository
A Note on Configuration Files
When deploying a Django web application using Docker, several key files are essential for setting up the environment. These include the Dockerfile, Docker Compose file, Django settings, and Nginx configuration. You can check this blog post for a bit of intro detail on the Docker stuff, which gets expanded here for the production components. Here’s a brief interlude on each.
Dockerfile
The Dockerfile is a script that contains a series of instructions on how to build a Docker image for your Django application. Here is a sample that works for me:
# Specifies the base image. (A slim variant like python:3.11-slim would reduce the image size further.)
FROM python:3.11
# Environment variables: PYTHONDONTWRITEBYTECODE=1 stops Python from writing .pyc files, and PYTHONUNBUFFERED=1 keeps output unbuffered so logs show up immediately.
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
# Sets the working directory inside the container.
WORKDIR /app
# Install netcat-openbsd, a networking utility for reading from and writing to network connections using the TCP or UDP protocols. It’s often used for debugging and network diagnostics.
RUN apt-get update && apt-get install -y netcat-openbsd && rm -rf /var/lib/apt/lists/*
# Copy your Django application code into the container
COPY . /app
# Install Python dependencies
RUN pip install pipenv gunicorn
RUN pipenv --python /usr/local/bin/python3.11
RUN pipenv install --system --deploy
# Run collectstatic command to collect static files, including React build artifacts
# Note: Django settings should be configured to include /app/static in STATICFILES_DIRS or directly as STATIC_ROOT
RUN python manage.py collectstatic --noinput
# Make port 8001 available to the world outside this container (you can use whatever port you want)
EXPOSE 8001
# Copy the entrypoint script (the actual server command comes from docker-compose, not a CMD here)
COPY entrypoint.sh /entrypoint.sh
# Make the entrypoint script executable
RUN chmod +x /entrypoint.sh
# Set the entrypoint script to run when the container starts
ENTRYPOINT ["/entrypoint.sh"]
Docker Compose File
You can create a separate docker-compose file for each environment, and then ‘inherit’ the configuration when running the actual commands. For the sake of simplicity, the following is a complete (no inheritance) example of a production file:
services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    ports:
      - "5434:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data

  web:
    build: .
    command: gunicorn --bind 0.0.0.0:8001 config.wsgi:application
    environment:
      - DJANGO_SETTINGS_MODULE=config.settings.prod
    expose:
      - "8001"
    volumes:
      - .:/app
      - static_volume:/app/staticfiles
    depends_on:
      - db
    env_file:
      - .env

  nginx:
    build: ./nginx
    ports:
      - "80:80"
    depends_on:
      - web
    volumes:
      - static_volume:/static

volumes:
  postgres_data:
  static_volume:
I don’t want to tell you how long it took me to get this right, but suffice to say, figuring out the static file part was not easy – lots of trial and error and googling and ChatGPT to figure out what worked. Let me break this thing down.
- Services: Defines three services: db (PostgreSQL), web (Django application), and nginx (web server).
- Environment Variables: Used to configure the database service. You’ll need a .env file in the same directory that specifies these variables, which get templated in here (a sample is sketched just after this list).
- Build and Run Commands: The web service builds from the current directory and uses Gunicorn to serve the Django application. The nginx service builds from a referenced Dockerfile in the ./nginx folder (more below).
- Dependencies: The web service depends on the db service, and nginx depends on web.
- Volumes: Persistent storage for PostgreSQL data and static files. This is what got me. The web container maps /app/staticfiles to a persistent volume. When you collect static files via Django, they should get dropped into this location. That same volume then gets mapped to the /static folder in the nginx container. When you access static files from the web, nginx needs files in this /static folder to serve back, and the volume ensures consistency from the collected files in the web container to nginx. I know that’s a mouthful.

While theoretically mapping . to /app (in the web container) should include all subdirectories, including static data, the practical aspects of Docker’s volume management and container lifecycle necessitate using a named volume for reliable and consistent file sharing between services. This approach ensures that static files collected by the web service are always available to the nginx service, preventing 404 errors on static assets.
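For reference, a minimal .env might look like the following. The variable names are the ones the compose file above references; the values are placeholders, and ALLOWED_HOSTS is only needed if your Django settings read it (as shown in the settings section below):

POSTGRES_DB=myapp
POSTGRES_USER=myapp_user
POSTGRES_PASSWORD=change-me-to-something-secret
ALLOWED_HOSTS=your_domain.com,aa.bb.cc.dd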
Django Settings
The Django settings file (base.py) needs specific configurations for a production environment.
import os
DEBUG = False
ALLOWED_HOSTS = ['your_domain.com', 'your_server_ip']
STATIC_URL = '/static/'
STATIC_ROOT = os.path.join(BASE_DIR, 'staticfiles')
- Debug Mode: DEBUG = False disables debug mode for production. You can of course leave it as True while troubleshooting for the extra error detail, but don’t leave it that way in production.
- Allowed Hosts: ALLOWED_HOSTS specifies the domains/IPs that can serve the application. You need to add your new DigitalOcean IP address here. Or, you can be slick by using this:
ALLOWED_HOSTS = os.getenv('ALLOWED_HOSTS').split(',')
- Static Files: STATIC_URL and STATIC_ROOT configure the static files settings for serving via Nginx. Again, collected static files must end up in the staticfiles folder, and requests for them are served under the /static/ URL prefix.
With Django settings, you can create a base settings file and then extend it per environment, such as production vs. development; simply import the base settings at the top (a minimal sketch follows).
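As a minimal sketch (assuming the config/settings/ layout shown in the folder structure below, with base.py and prod.py), a production settings module might look like this; the environment-variable handling is just one option:

# config/settings/prod.py
import os

from .base import *  # pull in everything from the shared base settings

DEBUG = False

# Read the comma-separated list of hosts from the environment (see the .env sample above)
ALLOWED_HOSTS = os.getenv('ALLOWED_HOSTS', '').split(',')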
Nginx Dockerfile
The Nginx Dockerfile sets up the Nginx web server with your custom configuration.
FROM nginx:1.25
# Remove default configuration
RUN rm /etc/nginx/conf.d/default.conf
# Copy custom configuration
COPY default.conf /etc/nginx/conf.d
Nginx Configuration
The Nginx configuration file (default.conf) is used to reverse proxy requests to the Gunicorn server and serve static files.
server {
    listen 80;

    location / {
        proxy_pass http://web:8001;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }

    location /static/ {
        alias /static/;
    }
}
- Listening Port: listen 80; specifies that Nginx listens on port 80.
- Reverse Proxy: The location / block proxies requests to the Gunicorn server running on http://web:8001. (Again, you can specify whatever port you want.)
- Static Files: The location /static/ block serves static files directly from the /static/ directory.
- For the proxy_pass and proxy_set_header lines:
  - proxy_pass http://web:8001;: Forwards the request to the Gunicorn server.
  - proxy_set_header Host $host;: Ensures the Host header is preserved, which is crucial for virtual hosting.
  - proxy_set_header X-Real-IP $remote_addr;: Passes the client’s IP address to the backend server.
  - proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;: Maintains a list of proxies through which the request has passed.
  - proxy_set_header X-Forwarded-Proto $scheme;: Tells the backend server whether the original request was HTTP or HTTPS.
Entrypoint
The entrypoint script ensures that the database is ready before running Django migrations and starting the application. (Make sure to give execute permission to entrypoint.sh.)
#!/bin/sh
# Wait for the database to be ready
echo "Waiting for PostgreSQL to start..."
while ! nc -z db 5432; do
sleep 0.1
done
echo "PostgreSQL started"
# Run Django migrations
echo "Running migrations"
python manage.py migrate --noinput
# Start the app server (the command passed in from docker-compose)
exec "$@"
Config Conclusion
These configurations work together to deploy your Django application using Docker, Gunicorn, and Nginx. The Dockerfile sets up the application environment, the Docker Compose file defines the services, the Django settings configure the application for production, the Nginx configuration handles web traffic and static file serving, and the entrypoint script ensures the database is ready before starting the application.
The folder and file structure for this should (could) look like the following (at least it works for me this way). One note: with the Dockerfile inside app/ as shown here, the web service’s build context and bind mount in the compose file need to point at ./app (i.e., build: ./app and ./app:/app) rather than the project root.
your_project/
├── app/
│ ├── Dockerfile
│ ├── entrypoint.sh
│ ├── Pipfile
│ ├── Pipfile.lock
│ ├── manage.py
│ ├── config/
│ │ ├── __init__.py
│ │ ├── settings/
│ │ │ ├── __init__.py
│ │ │ ├── base.py
│ │ │ └── prod.py
│ │ ├── urls.py
│ │ └── wsgi.py
│ ├── myapp/
│ │ ├── __init__.py
│ │ ├── admin.py
│ │ ├── apps.py
│ │ ├── models.py
│ │ └── views.py
│ └── ...
├── nginx/
│ ├── Dockerfile
│ └── default.conf
├── docker-compose.prod.yml
└── .env
Deploy to DigitalOcean
- Set Up Environment Variables: Create a .env file in your project root with your environment variables (see the sample sketched earlier).
- Build and Start Docker Containers: this command runs docker compose, referencing your production compose file (if you’re using inheritance/overrides with Docker Compose, you can stack multiple -f files, as sketched below), and then spins up a newly built container in detached mode (in the background).
sudo docker compose -f docker-compose.prod.yml up --build -d
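If you do go the inheritance route and split your configuration into a shared base file plus a production override (the file names here are just illustrative), the stacked version of the same command might look like this; later -f files override values from earlier ones:

sudo docker compose -f docker-compose.yml -f docker-compose.prod.yml up --build -d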
Create a Superuser
The entrypoint script here runs Django migrations, but if you’ve gone rogue and are using your own, you’ll need to do that yourself. Otherwise, the last step (done once) is to create a superuser for your new database:
sudo docker compose exec web python manage.py createsuperuser
This command runs the python manage.py createsuperuser script within your Docker web container, and will then prompt you for the usual Django details to create an admin user in your app.
Conclusion
With these steps, you should have a fully functional Django web application running in production on DigitalOcean using Docker, Gunicorn, and Nginx. This setup ensures your application is scalable and easy to manage.
Personally, getting this right offered an unexpected level of challenge and my repeated asks of ChatGPT left me even more frustrated as I tweaked settings and worked through things. I found this article by Michael Herman on testdriven.io to be incredibly helpful as a reference. Hopefully someone (or perhaps just my future self) finds this all useful!