I am thinking of using Docker for Django.
Since this Docker image will be exclusive to a particular Django project, is it OK to just pip install everything in the image rather than creating a virtualenv first and then installing all the required Django and related packages with pip?
So what is the best and also safe way to go if one wants to stick to Docker for Django projects?
You are right that you don't need a virtual environment inside the Django container.
If you are always using pip and store the requirements in a requirements.txt, you can use that file to initialize a virtual environment for development without Docker as well as to set up the Docker container:
To reduce the size of the image, remove the pip cache after installation:
FROM python:3.6.7-alpine3.8
...
# gunicorn could be replaced by uwsgi or whatever you use
RUN pip3.6 install -U pip setuptools \
&& pip3.6 install -r requirements.txt \
&& pip3.6 install gunicorn \
&& rm -rf /root/.cache
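For development without Docker, the same requirements.txt can seed a virtualenv (a minimal sketch; the directory name venv is just a convention):
python3 -m venv venv
. venv/bin/activate
pip install -U pip setuptools
pip install -r requirements.txt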
Docker containers already provide an isolated environment, which serves a goal similar to virtualenv's. So if only one application runs in the Docker container, it is fine to skip the extra layer that virtualenv would add. Personally, I don't remember ever seeing a Django app run with a virtualenv inside a container.
Currently, I'm trying to access a simple Django application, which I created in an Azure Virtual Machine. As the application is still simple, I only want to access the "The install worked successfully! Congratulations!" page from my local machine by accessing http://VM_IP:PORT/. I was able to do just that, but when I tried to Dockerize the project and access the built image from my local machine, it didn't work.
I've already made some setup in my Azure portal so that the Virtual Machine is able to listen to a specific port; in this case is 8080 (so http://VM_IP:8080/). I'm quite new to Docker, so I'm assuming there was something missing in the Dockerfile I've created for the project.
Dockerfile
RUN mkdir /app
WORKDIR /app
# Add current directory code to working directory
ADD . /app/
# set default environment variables
ENV PYTHONUNBUFFERED 1
ENV LANG C.UTF-8
ENV DEBIAN_FRONTEND=noninteractive
# set project environment variables
# grab these via Python's os.environ
# these are 100% optional here
ENV PORT=8080
# Install system dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
tzdata \
python3-setuptools \
python3-pip \
python3-dev \
python3-venv \
git \
&& \
apt-get clean && \
rm -rf /var/lib/apt/lists/*
# install environment dependencies
RUN pip3 install --upgrade pip
RUN pip3 install pipenv
# Install project dependencies
RUN pipenv install --skip-lock --system --dev
EXPOSE 8888
CMD gunicorn simple_project.wsgi:application --bind 0.0.0.0:$PORT
I'm not sure what's happening. It would be much appreciated if someone could point out what I'm missing. Thanks in advance.
I suspect the problem may be that you're confusing the EXPOSE build-time instruction with the publish runtime flag. Without the latter, any containers on your VM would be inaccessible to the host machine.
Some background:
The EXPOSE instruction is best thought of as documentation; it has no effect on container networking. From the docs:
The EXPOSE instruction does not actually publish the port. It functions as a type of documentation between the person who builds the image and the person who runs the container, about which ports are intended to be published.
Seems kind of odd at first glance, but it's for good reason: the image itself does not have the permissions to declare host port-mappings — that's up to the container runtime and whoever is operating it (you!).
The way to do this is by passing the --publish or -p flag to the docker run command, allowing you to define a mapping between the port open in the container network and a port on the host machine. So, for example, if I wanted to run an nginx container that can be accessed at port 8080 on my localhost, I'd run: docker run --rm -d -p 8080:80 nginx. The running container is then accessible at localhost:8080 on the host. Of course, you can also use this to expose container ports from one host to another. Without this, any networking configuration in your Dockerfile is executed in the context of the container network, and is basically inaccessible to the host.
TL;DR: you probably just need to publish your ports when you create and run the container on your VM: docker run -p {vm_host_port}:{container_port} {image_name}. Note that port mappings cannot be added or changed for existing containers; you'd have to destroy the container and recreate it.
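For example (the container and image names below are placeholders):
# stop and remove the old container, then recreate it with the port published
docker stop my_django_app && docker rm my_django_app
docker run -d --name my_django_app -p 8080:8080 my_django_image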
Side note: while docker run is quick and easy, it can quickly become unmanageable as your project grows and you add in environment variables, attach volumes, define inter-container dependencies, etc. An alternative with native support is docker-compose, which lets you define the runtime configuration of your container (or containers) declaratively in YAML — basically picking up where the Dockerfile leaves off. And once it's set up, you can just run docker-compose up, instead of having to type out lengthy docker commands, and waste time debugging when you forgot to include a flag, etc. Just like we use Dockerfiles to have a declarative, version-controlled description of how to build our image, I like to think of docker-compose as one way to create a declarative, version-controlled description for how to run and orchestrate our image(s).
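For instance, a minimal docker-compose.yml for this project could look roughly like the following (a sketch; the service name web and the port values are assumptions based on your Dockerfile):
version: "3"
services:
  web:
    build: .            # build the image from the Dockerfile in this directory
    ports:
      - "8080:8080"     # host_port:container_port, equivalent to -p 8080:8080
    environment:
      - PORT=8080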
Hope this helps!
I have a Django project with different applications, and I am trying to build a Docker image for every single app. In development, however, I want a single Docker image to contain the whole project. I was using a multi-stage build to handle the dev dependencies. Is there any way to achieve the following in a different way?
FROM python:3.7-stretch AS base
RUN pip install -r /projectdir/requirements.txt
FROM base AS app1
RUN pip install -r /projectdir/app1/requirements.txt
FROM base AS app2
RUN pip install -r /projectdir/app2/requirements.txt
FROM app1, app2 AS dev
RUN pip install -r /projectdir/requirements_dev.txt
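As a side note, FROM app1, app2 AS dev is not valid Dockerfile syntax, since a stage can only extend a single parent. One possible workaround (a sketch, not a drop-in answer) is to base the dev stage on one of the app stages and layer the remaining requirements files on top:
# dev stage built on app1; app2's requirements are installed again on top
FROM app1 AS dev
RUN pip install -r /projectdir/app2/requirements.txt \
 && pip install -r /projectdir/requirements_dev.txt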
I have a Django project running in docker.
When I add some packages to my requirements.txt file, they don't get installed when I run docker-compose up.
Here are the relevant commands from my Dockerfile:
ADD ./evdc/requirements.txt /opt/evdc-venv/
ADD ./env-requirements.txt /opt/evdc-venv/
# Activate venv
RUN . /opt/evdc-venv/bin/activate && pip install -r /opt/evdc-venv/requirements.txt
RUN . /opt/evdc-venv/bin/activate && pip install -r /opt/evdc-venv/env-requirements.txt
It seems docker is using a cached version of my requirements.txt file, as when I shell into the container, the requirements.txt file in /opt/evdc-venv/requirements.txt does not include the new packages.
Is there some way I can delete this cached version of requirements.txt?
Dev OS: Windows 10
Docker: 17.03.0-ce
docker-compose: 1.11.2
docker-compose up doesn't build a new image unless you have the build section defined with your Dockerfile and you pass it the --build parameter. Without that, it will reuse the existing image.
If your docker-compose.yml does not include the build section and you build your images with docker build ..., then after you recreate your image, a docker-compose up will recreate the impacted containers.
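For reference, a minimal build section could look like this (a sketch; the service name web is an assumption):
version: "3"
services:
  web:
    build: .   # path to the directory containing your Dockerfile
With that in place, docker-compose up --build (or docker-compose build followed by docker-compose up) re-runs the Dockerfile, so the updated requirements.txt is copied in and installed again.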
I have several containers, and each of them has its own Dockerfile. Every time I build, using docker-compose build, each container installs its own requirements; either from a requirements.txt file (RUN pip install -r requirements.txt) or directly in the Dockerfile (RUN pip install Django celery ...). Many of the requirements are common to some of the containers (almost all of them).
It is working perfectly, but there is a problem with build time. It takes almost 45 minutes to build every container from scratch (let's say after I have deleted all images and containers).
Is there a way to install all the common requirements in a shared place for all containers, so that we don't install them again each time a new container image is built?
The docker-compose file I am using is version 2.
You can define your own base image. Let's say all your containers need django and boto, for instance; then you can create your own Dockerfile:
FROM python:3
RUN pip install django boto
# more docker commands
Then you can build this image as arrt_dtu/envbase and publish it somewhere (Docker Hub, or your company's internal Docker registry). Now you can create your specialized images using this one:
FROM arrt_dtu/envbase
RUN pip install ...
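The build-and-publish step might look like this (a sketch; the Dockerfile name, tags and registry are assumptions):
docker build -t arrt_dtu/envbase -f Dockerfile.base .
docker push arrt_dtu/envbase          # or push to your internal registry instead
docker build -t arrt_dtu/app1 ./app1  # app images now reuse the cached base layers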
That's exactly the same principle we have with the ruby image, for instance: the ruby image is based on a Linux one, and if you want a Rails image, you can build it on top of the ruby one. Docker images are totally reusable!
I'm hosting a Python server on the Google Cloud Platform. However, I find it hard to determine what exactly the difference is between a flexible runtime and a custom runtime.
This is described in more detail here.
According to the documentation, Dockerfile modifications are permitted for both runtimes.
I was using a flexible runtime. However, I needed to install some custom libraries. So I added the following Dockerfile:
FROM gcr.io/google_appengine/python
RUN apt-get update && apt-get install --yes \
libgeos-dev \
libmagic1
RUN virtualenv /env -p python3.4
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD . /app/
RUN pip install -r requirements.txt
CMD python main.py
Also I set this in app.yaml:
runtime: custom
vm: true
Does this mean it's a custom runtime now, because there's a Dockerfile and the runtime has been set to custom?
Or is it a flexible runtime, because it builds on one of the predefined base images?
If I have to specify a Dockerfile anyway, is there any upside to using such a predefined base image? I don't see any reason not to use a different base image that allows using Python 3.5, for example.