Are you ready for the umpteenth Docker + Python application out there? Good, so am I. This one is going to focus on an existing application and the steps I take to turn it into an image that can be shoved onto a host running Docker. I chose my IsThatWho app, but only because I had it lying around and it has external dependencies (namely, redis). It's not the most elegant thing, but it's there.
The obvious one is redis, probably followed by the packages listed in requirements.txt. Less obvious is the TMDB key, but keys and configuration are dependencies, too. You could also consider Angular a dependency, but given it's loaded from a CDN, that's not a huge concern for me right now.
Lucky for me, redis has an existing image that can be pulled. The Python dependencies and environment keys I can deal with. So let's deal with those instead of fiddling with redis.
Building a Python app container
This is a first-pass, just-make-it-run attempt at building the application container. There will be some good-practice stuff in it, but that's more incidental to getting the Flask dev server running properly.
The Dockerfile looks like:
```dockerfile
FROM python:3.6

COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt

ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1

EXPOSE 5000

ENTRYPOINT ["./entrypoint.sh"]
CMD ["dev"]
```
If you're familiar with Docker, there's nothing surprising here. If you're not, this provisions an image that looks exactly like what you'd imagine. The biggest oddity might be the ENTRYPOINT directive -- it tells Docker what the main process of the container should be, while CMD provides that process's default arguments.
As for what that entrypoint looks like.
```bash
#!/bin/bash
case $1 in
    dev)
        shift
        export FLASK_APP=app.py
        export FLASK_DEBUG=1
        exec flask run --host=0.0.0.0 "$@"
        ;;
    *)
        exec "$@"
        ;;
esac
```
Just a simple bash script. The strangest thing about it might be why we tell Flask to listen on 0.0.0.0. The reason is not that I want everyone on my network to be able to find the app (that's a separate issue) but that Flask binds to localhost by default, and the localhost it sees is the docker container's own loopback interface. Even if you run docker inspect on the running container and browse to its IP address, Flask will refuse the connection.
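If the 0.0.0.0 bit feels abstract, here's a plain-sockets sketch (nothing Flask- or docker-specific, just an illustration) of what binding to all interfaces means:

```python
import socket

# Binding to 0.0.0.0 means "all interfaces", so the server is reachable
# via loopback *and* any external interface of its network namespace.
# Binding to 127.0.0.1 inside a container would only be reachable from
# inside that same container.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("0.0.0.0", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

# A loopback connection succeeds because 0.0.0.0 covers every interface.
cli = socket.create_connection(("127.0.0.1", port), timeout=5)
print("connected on port", port)
cli.close()
srv.close()
```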
Other than that, it provides the ability to pass extra arguments to the flask command (say you wanted to bind to a different port, etc.).
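To see that dispatch logic in isolation, here's a runnable stand-in -- the flask invocation is swapped for an echo so it works without Flask or docker, and the /tmp path is just for the demo:

```shell
# Write a demo version of the entrypoint with the flask call stubbed out.
cat > /tmp/entrypoint-demo.sh <<'EOF'
#!/bin/bash
case $1 in
    dev)
        shift
        echo "would exec: flask run --host=0.0.0.0 $@"
        ;;
    *)
        exec "$@"
        ;;
esac
EOF
chmod +x /tmp/entrypoint-demo.sh

# Extra args after "dev" flow straight through to the flask command:
/tmp/entrypoint-demo.sh dev --port 8080
# prints: would exec: flask run --host=0.0.0.0 --port 8080

# Anything else is exec'd verbatim:
/tmp/entrypoint-demo.sh echo plain command
# prints: plain command
```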
With all that squared away, let's build this bad boy:
```shell
docker build -t itw:1 .
```
And once it's built:
```shell
docker run -p 5000:5000 -e TMDB_KEY=... --rm itw:1
 * Serving Flask app "app"
 * Forcing debug mode on
 * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)
 * Restarting with stat
 * Debugger is active!
 * Debugger pin code: ...
```
Of course, if you try to do anything, you'll find a lovely traceback in the terminal about the redis connection being refused. Whoops. The next step is getting redis running alongside this container.
Kill the flask app container, as we're going to need to make changes to it and its configuration.
The first is updating isthatwho/api.py to connect to a host named "redis" rather than localhost. The next is setting up a redis server in another container.
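A minimal sketch of what that change might look like -- the REDIS_HOST variable and helper function are my own illustration, not something from the app:

```python
import os

# Resolve the redis hostname from configuration instead of hardcoding
# localhost. "redis" is the alias the app container will link the redis
# container under; REDIS_HOST is a hypothetical override.
def redis_host(default="redis"):
    return os.environ.get("REDIS_HOST", default)

# The client construction would then look something like:
#   redis.StrictRedis(host=redis_host(), port=6379)
print(redis_host())  # "redis" unless REDIS_HOST is set
```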
```shell
docker pull redis
docker run --rm --name some-redis redis
```
You should see a bunch of redis-y content in your terminal. In another terminal, let's set up the app container again. Instead of reaching for docker-compose just yet, there's something to be said for doing it the manual way to appreciate what docker-compose does under the hood.
```shell
docker run -p 5000:5000 -e TMDB_KEY=... --rm --link some-redis:redis itw:1
```
Assuming you have an actual TMDB key loaded, you'll be able to interact with the application. Wooooo.
Building a docker-compose.yml out of all this is actually pretty straightforward, and it ends up looking like this:
```yaml
version: '2'
services:
  itw:
    build: .
    command: ["dev"]
    environment:
      - TMDB_KEY=...
    ports:
      - "5000:5000"
    links:
      - redis
  redis:
    image: redis
```
Running docker-compose up will build the itw image, then start it and the redis container.
As for adding WSGI server (uWSGI/Gunicorn/etc.) and nginx components to this, I'm not going to delve into that (they're 99.99% droll configuration files). I will say that you should run the WSGI process in the itw container -- or even extend from it and add the WSGI server to it -- and nginx in a separate container. I recommend taking a look at the jwilder/nginx-proxy image, which does some fancy docker magic to dynamically configure nginx to proxy to many containers based on exposed ports and a special environment variable (why not a label, I don't know).
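For the curious, that wiring looks roughly like this -- a sketch assuming the jwilder/nginx-proxy image and a made-up hostname, not something from this setup:

```yaml
services:
  itw:
    build: .
    environment:
      # nginx-proxy generates an upstream for any container that exposes
      # a port and carries a VIRTUAL_HOST variable
      - VIRTUAL_HOST=itw.example.com
  nginx-proxy:
    image: jwilder/nginx-proxy
    ports:
      - "80:80"
    volumes:
      # it watches the docker socket to notice containers coming and going
      - /var/run/docker.sock:/tmp/docker.sock:ro
```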
The biggest issue here is that if we make a change to the code, then the container needs to be rebuilt to pick that up. If you're used to using the flask reloader in local development, this is a huge burden as you need to take extra steps to update your code.
This is easy to address, as you can mount a volume over /app that will provide the container with live updates to the code. In fact, it's easy enough to show:
```yaml
services:
  itw:
    volumes:
      - ./isthatwho:/app/isthatwho
    command: ["dev", "--reload"]
```
If you docker-compose up now and then run touch isthatwho/api.py in a separate terminal, you'll see the reloader get to work. You can also see how the ability to pass extra arguments to the entrypoint script is paying dividends: we didn't need to change the entrypoint script itself (which would require a rebuild to pick up), just kill the docker containers and start them again.
For a quick and dirty "shove an existing application into docker" exercise, this isn't too bad. I'd recommend reading up on the docker-compose syntax to see what each individual piece does and other interesting ways to build upon this example, in particular using the env_file setting to load a configuration file instead of having a bunch of declarations in the compose file itself.
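As a sketch of that env_file approach (the filename here is illustrative), the TMDB key could move out of the compose file like so:

```yaml
services:
  itw:
    build: .
    command: ["dev"]
    env_file:
      - itw.env   # one VAR=value pair per line, e.g. TMDB_KEY=...
    ports:
      - "5000:5000"
```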
However, this isn't the end of dockerizing IsThatWho. Instead, I want to use this as a starting point to address issues I've found in other docker + python tutorials (and, in general, most docker tutorials and articles) as well as some woes I've run into using docker in my daily dev work.
I spilled my brains, spill some of yours.