In this post, I’ll show how to containerize an existing project using Docker. As an example, I’ve picked a random project from GitHub that had an open issue asking for it to be Dockerized, so I could contribute at the same time.
Why in the world would you want to Dockerize an existing Django web application? There are plenty of reasons, but if you don’t have one, just do it for fun!
I decided to use Docker because one of my applications was getting hard to install: lots of system requirements, multiple databases, Celery, and RabbitMQ. Every time a new developer joined the team or had to work from a new computer, the installation took a long time.
Difficult installations lead to lost time, lost time leads to laziness, laziness leads to bad habits, and so it goes on… For instance, a developer decides to use SQLite instead of Postgres and doesn’t notice truncation problems on a table until they hit the test server.
If you don’t know what Docker is, just picture it as a huge virtualenv that, instead of containing just some Python packages, has containers isolating everything from the OS to your app, databases, workers, etc.
Getting Things Done
Ok, talk is cheap. Show me some code, dude.
First of all, install Docker. I did it on Ubuntu and macOS without any problem, but on Windows Home I couldn’t get it working.
To tell Docker how to run your application as a container, you’ll have to create a Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /webapps
WORKDIR /webapps
# Installing OS Dependencies
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
libsqlite3-dev
RUN pip install -U pip setuptools
COPY requirements.txt /webapps/
COPY requirements-opt.txt /webapps/
RUN pip install -r /webapps/requirements.txt
RUN pip install -r /webapps/requirements-opt.txt
ADD . /webapps/
# Django service
EXPOSE 8000
So, let’s go line by line:
Docker Images
FROM python:3.6
Here we’re using an image from Docker Hub, i.e. a pre-built image that we build on top of. In this case, python:3.6 is a Debian-based image that already has Python 3.6 installed on it.
Environment Variables
You can create all sorts of environment variables using ENV.
# Here we can create all the environment variables for our container
ENV PYTHONUNBUFFERED 1
For instance, if you use them for storing your Django’s Secret Key, you could put it here:
ENV DJANGO_SECRET_KEY abcde0s&&$uyc)hf_3rv@!a95nasd22e-dxt^9k^7!f+$jxkk+$k-
And in your code use it like this:
import os

SECRET_KEY = os.environ['DJANGO_SECRET_KEY']
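If you go this route, a tiny helper makes the failure mode explicit when a variable is missing. This is a sketch of my own — the helper is not part of the project:

```python
import os

# Hypothetical helper (not from the original project): fetch a secret from
# the environment and fail fast at startup if it is missing.
def get_secret(name, default=None):
    value = os.environ.get(name, default)
    if value is None:
        raise RuntimeError("Environment variable %s is not set" % name)
    return value

# In settings.py you would then write:
# SECRET_KEY = get_secret('DJANGO_SECRET_KEY')
```

Failing at startup beats discovering a missing secret at the first request.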
Run Commands
Docker RUN commands are kind of obvious: you’re running a command “inside” your container. I’m quoting “inside” because Docker creates a cached layer for each command, so when rebuilding the container it doesn’t have to run the same command again if nothing before it changed.
RUN mkdir /webapps
WORKDIR /webapps

# Installing OS Dependencies
RUN apt-get update && apt-get upgrade -y && apt-get install -y \
    libsqlite3-dev

RUN pip install -U pip setuptools

COPY requirements.txt /webapps/
COPY requirements-opt.txt /webapps/

RUN pip install -r /webapps/requirements.txt
RUN pip install -r /webapps/requirements-opt.txt

ADD . /webapps/
In this case, we are creating the directory that will hold our files, /webapps/.
WORKDIR is also self-evident: it just tells Docker to run subsequent commands from the indicated directory.
After that, I am installing one OS dependency. The requirements.txt only covers Python packages, not OS-level requirements, and believe me, for large projects you’ll have lots and lots of OS requirements.
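The layer cache is also the reason the requirements files are copied before the rest of the code. A sketch of the idea, using the same instructions as the Dockerfile above:

```dockerfile
# Changing requirements.txt is rare, so these layers are usually served
# from cache on rebuilds...
COPY requirements.txt /webapps/
RUN pip install -r /webapps/requirements.txt

# ...while the application code changes often; adding it last means an
# edited source file only invalidates the layers from this point down.
ADD . /webapps/
```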
COPY and ADD
COPY and ADD are similar: both copy a file from your computer (the host) into the container (the guest OS). ADD can additionally extract local archives and fetch remote URLs, which is why COPY is preferred when a plain copy is all you need. In my example, I’m just copying the Python requirements files so pip can install them.
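For illustration, here is how the two instructions differ in a Dockerfile; the archive name below is hypothetical:

```dockerfile
# COPY does a plain copy from host to container:
COPY requirements.txt /webapps/

# ADD can additionally auto-extract a local tar archive into the target
# directory (and fetch remote URLs). The archive name is hypothetical:
ADD vendor-libs.tar.gz /webapps/vendor/
```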
EXPOSE
The EXPOSE instruction documents the port the container will listen on. On its own it doesn’t publish the port to the host; the actual mapping is done with the ports option in docker-compose (or docker run -p).
# Django service
EXPOSE 8000
Ok, so now what? How can we add more containers and make them work together? What if I need PostgreSQL inside a container too? Don’t worry, here we go.
Docker-Compose
Compose is a tool for defining and running multi-container Docker applications. It’s configured with a YAML file; you just need to create a docker-compose.yml in your project folder.
version: '3.3'

services:
  # Postgres
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=postgres
  web:
    build: .
    command: ["./run_web.sh"]
    volumes:
      - .:/webapps
    ports:
      - "8000:8000"
    links:
      - db
    depends_on:
      - db
In this case, I’m using the official postgres image from Docker Hub.
Now, let’s change the settings.py to use Postgres as Database.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'db',
        'PORT': '5432',
    }
}
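Hardcoding credentials in settings.py works for a tutorial, but you could also read them from environment variables. A hedged sketch of my own (the helper and variable names are my choice, not from the project), falling back to the compose defaults used in this post:

```python
import os

# Hypothetical helper: build Django's DATABASES dict from environment
# variables, defaulting to the values used in this post's compose file.
def database_config(env=None):
    env = os.environ if env is None else env
    return {
        'default': {
            'ENGINE': 'django.db.backends.postgresql_psycopg2',
            'NAME': env.get('POSTGRES_DB', 'postgres'),
            'USER': env.get('POSTGRES_USER', 'postgres'),
            'PASSWORD': env.get('POSTGRES_PASSWORD', 'postgres'),
            'HOST': env.get('POSTGRES_HOST', 'db'),
            'PORT': env.get('POSTGRES_PORT', '5432'),
        }
    }

# In settings.py: DATABASES = database_config()
```

This way the same settings.py works inside and outside Docker; only the environment changes.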
We’re almost done. Let me talk a little about the docker-compose file.
VOLUMES
Remember vagrant?
Once upon a time, there was Vagrant: a way to run a project inside a virtual machine while easily configuring it, forwarding ports, provisioning requirements, and sharing volumes. Your machine (the host) could share a volume with your virtual machine (the guest). In Docker, it’s exactly the same: when you write a file on a shared volume, it is written in your container as well.
volumes:
  - .:/webapps
In this case, the current directory (.) is being shared as /webapps in the container.
LINKS
links:
  - db
You can refer to another container in your compose file by its name. Since we created a db container for Postgres, we can link it to our web container. You can see in our settings.py that I’ve used 'db' as the host.
DEPENDS_ON
In order for your application to work, your database has to be ready before the web container starts; otherwise, it will raise an exception.
depends_on:
  - db
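A common gotcha: depends_on only controls the start order, it does not wait for Postgres to actually accept connections, so the web container can still race the database. A minimal TCP wait loop you could call before running the migrations — the helper and the module name are my own, not part of the project:

```python
import socket
import time

# Hypothetical helper (not part of the original project): block until a TCP
# port accepts connections. This only proves the port is open, not that
# Postgres has finished initializing, but it covers the common race.
def wait_for(host, port, timeout=30.0, interval=0.5):
    deadline = time.monotonic() + timeout
    while True:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            if time.monotonic() >= deadline:
                raise TimeoutError("%s:%s not reachable" % (host, port))
            time.sleep(interval)

# e.g. at the top of run_web.sh (module name hypothetical):
# python -c "from wait_for_db import wait_for; wait_for('db', 5432)"
```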
Command
Command is the default command your container runs as soon as it is up.
For our example, I’ve created a run_web.sh, that will run the migrations, collect the static files and start the development server.
#!/usr/bin/env bash
cd django-boards/
python manage.py migrate
python manage.py collectstatic --noinput
python manage.py runserver 0.0.0.0:8000
One can argue that running the migrations automatically every time the container comes up is not good practice. I agree: you can run them directly on the web container instead. You can access your container (just like the good ol’ vagrant ssh):
docker-compose exec web bash
If you’d like, you can run it without entering the container; just change the last argument of the previous command.
docker-compose exec web python manage.py migrate
The same goes for other commands:
docker-compose exec web python manage.py test
docker-compose exec web python manage.py shell
Running Docker
With our Dockerfile, docker-compose.yml and run_web.sh set in place, just run it all together:
docker-compose up
You can see this project here on my GitHub.
**EDIT**
At first, I was using run instead of exec, but Bruno FS convinced me that exec is better, because you’re executing a command inside the container you’re already running, instead of creating a new one.
Please don’t use ENV variables for storing secrets in Dockerfiles; use a .env file instead and reference the variables in docker-compose.yml. If a .env file exists, docker-compose will interpolate the variables in docker-compose.yml. Just make sure you don’t commit the .env file (put it in .gitignore)!
$ cat docker-compose.yml
version: '3.3'
services:
  # Postgres
  db:
    image: postgres
    environment:
      - POSTGRES_USER="$POSTGRES_USER"

$ cat .env
POSTGRES_USER=herouser
Also, “links” are deprecated; use networks instead.
Also, to minimize cache misses, consider combining RUN commands 🙂