Django REST project Dockerfile

I am absolutely new to Docker. I have an existing Django REST project.
My requirements.txt:
django==1.8.8
djangorestframework
markdown
django-filter
django-rest-auth
django-cors-headers
django-secure
django-sslserver
django-rest-auth[extras]
Normally I create a virtualenv, activate it, run pip install -r requirements.txt, and additionally run easy_install mysql-python to get started.
I want to dockerize this project. Can someone help me build a simple Dockerfile for this project?

As DriesDeRydt suggested in his comment, the provided link has a very simple example of a Dockerfile which installs requirements:
Add the following content to the Dockerfile.
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code/
This Dockerfile starts with a Python 2.7 base image. The base image is modified by adding a new code directory, and further modified by installing the Python requirements defined in the requirements.txt file.
You can change the image to fit the Python version you need by pulling the corresponding Python image. For example, change FROM python:2.7 to FROM python:3.5 or FROM python:latest.
But as the above Dockerfile stands, and assuming that you place it inside the server folder, it will work for a test case.
Here is the Dockerfile documentation for further reading.
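Since the question also needs easy_install mysql-python, here is a hedged sketch that extends the example above to cover that dependency (the default-libmysqlclient-dev package name assumes a Debian-based python:2.7 image):
FROM python:2.7
ENV PYTHONUNBUFFERED 1
# MySQL-python compiles against the MySQL client headers
RUN apt-get update && apt-get install -y default-libmysqlclient-dev && rm -rf /var/lib/apt/lists/*
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt MySQL-python
ADD . /code/
You would then build and run it from the project folder, for example with docker build -t myproject . followed by docker run -p 8000:8000 myproject python manage.py runserver 0.0.0.0:8000 (image name and port are illustrative).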


How to translate flask run to Dockerfile?

So I'm trying to learn how to containerize Flask apps, and so far I've understood two ways of firing up a Flask app locally:
One is to have this code in the main file:
if __name__ == '__main__':
APP.run(host='0.0.0.0', port=8080, debug=False)
and run with
python3 main.py
The other is to remove the above code from main.py, and just define an environment variable and do flask run:
export FLASK_APP=main.py
flask run
I tried to convert both of these methods into a Dockerfile:
ENTRYPOINT ["python3", "main.py"]
which works quite well for the first. However when I try to do something like:
ENV FLASK_APP "./app/main.py"
ENTRYPOINT ["flask", "run"]
I am not able to reach my server via the browser. The container starts up fine, it's just not reachable. One trick that does work is adding the host address to the entrypoint like so:
ENTRYPOINT ["flask", "run", "--host=0.0.0.0"]
I am not sure why I have to add --host to the entrypoint when locally I can do without it. Another funny thing I noticed was that if I set the host to --host=127.0.0.1, it still doesn't work.
Can someone explain what is really happening here? Either I don't understand ENTRYPOINT correctly, or maybe Flask, or maybe both.
EDIT: The whole Dockerfile for reference is:
FROM python:stretch
COPY . /app
WORKDIR /app
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
ENV FLASK_APP "/app/main.py"
ENTRYPOINT ["flask", "run", "--host=127.0.0.1"]
Try to define the FLASK_APP env variable via an absolute path.
Or put this line in your Dockerfile:
WORKDIR /path/to/dir/that/contains/main.py/file
Oh sorry, the host in the ENTRYPOINT statement must be 0.0.0.0. Inside the container, 127.0.0.1 is the container's own loopback interface, so connections coming in from the host never reach Flask; 0.0.0.0 makes Flask listen on all of the container's interfaces:
ENTRYPOINT ["flask", "run", "--host=0.0.0.0"]
And do not forget to publish port 5000 to the outside via the -p option:
docker run -p 5000:5000 <your container>
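To see the difference yourself, a quick check (the image name flaskapp is an assumption; flask run listens on port 5000 by default):
docker build -t flaskapp .
docker run -p 5000:5000 flaskapp
curl http://localhost:5000/
The curl from the host only succeeds when Flask binds 0.0.0.0; with --host=127.0.0.1 the server listens on the container's loopback only, and the published port has nothing to forward to.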
I believe this task would be better accomplished with CMD as opposed to ENTRYPOINT.
(You should also define the working directory before running the COPY command.)
For example, your Dockerfile should look something like:
FROM python:stretch
WORKDIR /app
COPY . .
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements.txt
CMD python3 main.py
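With this variant the app binds port 8080 (from the APP.run call in the question), so a hedged build-and-run sequence would be:
docker build -t myflaskapp .
docker run -p 8080:8080 myflaskapp
The image name myflaskapp is illustrative; the -p 8080:8080 mapping matches the port hard-coded in main.py.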

ModuleNotFoundError: No module named 'flask_sqlalchemy' when running the built Docker image

I am getting ModuleNotFoundError: No module named 'flask_sqlalchemy' when I docker run the image I built. The same Flask app runs fine from the terminal, but not in Docker. Why?
FROM python:3.6
ADD . /app
WORKDIR /app
RUN pip install flask gunicorn
EXPOSE 8000
CMD ["gunicorn", "-b", "0.0.0.0:8000", "app"]
Good practice is to save the installed pip packages to a file, e.g. requirements.txt (preferably containing only the packages you actually need, which is easier when you are using a virtual environment, e.g. pyenv).
This is done with pip freeze > requirements.txt (run on the local machine, in a terminal).
Then, all you should do is replace RUN pip install flask gunicorn with RUN pip install -r requirements.txt, and all the packages installed on the local machine will be installed in the Docker image too.
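Applied to the Dockerfile above, a minimal sketch, assuming your requirements.txt now lists flask, flask_sqlalchemy, and gunicorn:
FROM python:3.6
ADD . /app
WORKDIR /app
# install everything the app imports, not just flask and gunicorn
RUN pip install -r requirements.txt
EXPOSE 8000
CMD ["gunicorn", "-b", "0.0.0.0:8000", "app"]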

Multistage build with docker from multiple sources

I have a Django project with different applications, and I am trying to build a Docker image for every single app. However, in development I want a single Docker image to contain the whole project. I was using multistage builds before to handle dev dependencies. Is there any way to achieve the following in a different way?
FROM python:3.7-stretch AS base
RUN pip install -r /projectdir/requirements.txt
FROM base AS app1
RUN pip install -r /projectdir/app1/requirements.txt
FROM base AS app2
RUN pip install -r /projectdir/app2/requirements.txt
FROM app1, app2 AS dev
RUN pip install -r /projectdir/requirements_dev.txt
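A Dockerfile stage can only have a single parent, so FROM app1, app2 is not valid syntax. One common workaround, sketched here rather than taken from the thread, is to chain the stages so that the dev stage starts from one app stage and installs the remaining requirement files itself:
FROM python:3.7-stretch AS base
COPY requirements.txt /projectdir/
RUN pip install -r /projectdir/requirements.txt

FROM base AS app1
COPY app1/requirements.txt /projectdir/app1/
RUN pip install -r /projectdir/app1/requirements.txt

FROM base AS app2
COPY app2/requirements.txt /projectdir/app2/
RUN pip install -r /projectdir/app2/requirements.txt

# dev chains from app1 and re-installs app2's requirements,
# since a stage cannot inherit from two stages at once
FROM app1 AS dev
COPY app2/requirements.txt /projectdir/app2/
RUN pip install -r /projectdir/app2/requirements.txt
COPY requirements_dev.txt /projectdir/
RUN pip install -r /projectdir/requirements_dev.txt
The COPY lines and paths are assumptions; the original snippet never shows how the requirement files get into the image.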

Docker error with pipenv on Django app: Warning: --system is intended to be used for pre-existing Pipfile

When I was trying to dockerize my Django app, I followed a tutorial telling me to structure my Dockerfile like this:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
COPY . /code/
WORKDIR /code/
RUN pip install pipenv
RUN pipenv install --system
EXPOSE 8000
After I saved that and ran docker build ., the build threw this error:
Warning: --system is intended to be used for pre-existing Pipfile installation, not installation of specific packages. Aborting.
I think it is complaining about the --system flag above, but the tutorial says it's crucial to have it so that my packages are applied across the entire Docker container. I'm new to Docker and even to pipenv, because I took over a previous person's code, and I'm not sure where their Pipfile is or even if they have one. If you have any insights on how to fix this error, thank you in advance.
pipenv --rm
This helped me! I was starting "Django for Beginners" and got this error at the very beginning (I had accidentally deleted my Pipfile & Pipfile.lock).
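The recovery sequence this implies, run from the project folder, would be something like:
pipenv --rm
pipenv install
pipenv --rm deletes the project's stale virtualenv, and pipenv install recreates the Pipfile (if missing) and Pipfile.lock and reinstalls the packages.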
Your warning is telling you that there is no Pipfile in your project dir.
--system is intended to be used for pre-existing Pipfile.
So before running
docker build .
run
pipenv install
in your project folder.
The above solution didn't work for me.
After installing in the virtual env, I also had to explicitly include the Pipfile and Pipfile.lock in my Dockerfile:
COPY Pipfile* .
# Install dependencies
RUN pip install pipenv && pipenv install --system
Then rebuild with docker compose:
docker-compose build
You can find more info in this thread.
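Putting the pieces together, a hedged version of the original Dockerfile with the Pipfile pair copied in before pipenv install --system runs:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
WORKDIR /code
# copy the Pipfile pair first so pipenv sees a pre-existing Pipfile
COPY Pipfile Pipfile.lock ./
RUN pip install pipenv && pipenv install --system
COPY . .
EXPOSE 8000
This assumes Pipfile and Pipfile.lock exist next to the Dockerfile; if they don't, run pipenv install locally first as described above.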
If you see this error in pipenv:
ERROR: --system is intended to be used for pre-existing Pipfile installation, not installation of specific packages. Aborting.
try:
pipenv check or python3 -m pipenv check
Be careful when using Docker bind mounts!
Summary: in my case, I was using bind mounts in my dev environment, and mounting a bind mount on a non-empty directory overwrote the contents of the container's directory, removing the Pipfile and Pipfile.lock, which caused the error mentioned above when running the container.
Explanation
Directory structure on the host
> ls project/
docker-compose.yml Dockerfile Pipfile Pipfile.lock app/
Dockerfile
My Dockerfile would copy the contents of the project and then install the dependencies with pipenv, like this:
FROM python:3.8
# ...
COPY Pipfile Pipfile.lock /app/
RUN pipenv install --deploy --ignore-pipfile
COPY ./app /app/
CMD ["pipenv", "run", "uvicorn", "etc..", "--reload"]
Pipfile, Pipfile.lock and the code of ./app would all be in the same /app directory inside the container.
docker-compose.yml
I wanted uvicorn to hot-reload, so I mounted the host's ./app directory at /app inside the container.
services:
  app:
    # ...
    volumes:
      - type: bind
        source: ./app
        target: /app
This meant that when I changed the code in /app, the code in the container's /app directory would also change.
Effects
The side effect of this bind mount is that the content mounted on /app "obscured" the content previously copied in there.
Container's content with the bind mount:
> ls app/
code1.py code2.py
Container's content without the bind mount:
> ls app/
Pipfile Pipfile.lock code1.py code2.py
Solution
Either make sure that you also include the Pipfile and Pipfile.lock in the bind mount, or COPY these two files to a directory that won't get overwritten by the bind mount.
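A sketch of the second option, with illustrative paths: keep the Pipfile pair at the image root, where the bind mount on /app cannot shadow them (pipenv searches parent directories, so it still finds /Pipfile when run from /app):
FROM python:3.8
WORKDIR /
# the Pipfile pair lives outside /app, safe from the bind mount
COPY Pipfile Pipfile.lock ./
RUN pip install pipenv && pipenv install --deploy --ignore-pipfile
COPY ./app /app
WORKDIR /app
CMD ["pipenv", "run", "uvicorn", "main:app", "--reload"]
The uvicorn target main:app is an assumption, since the original CMD elides it.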

docker-compose not downloading additions to requirements.txt file

I have a Django project running in Docker.
When I add some packages to my requirements.txt file, they don't get downloaded when I run docker-compose up.
Here are the relevant commands from my Dockerfile:
ADD ./evdc/requirements.txt /opt/evdc-venv/
ADD ./env-requirements.txt /opt/evdc-venv/
# Activate venv and install requirements
RUN . /opt/evdc-venv/bin/activate && pip install -r /opt/evdc-venv/requirements.txt
RUN . /opt/evdc-venv/bin/activate && pip install -r /opt/evdc-venv/env-requirements.txt
It seems Docker is using a cached version of my requirements.txt file: when I shell into the container, /opt/evdc-venv/requirements.txt does not include the new packages.
Is there some way I can delete this cached version of requirements.txt?
Dev OS: Windows 10
Docker: 17.03.0-ce
docker-compose: 1.11.2
docker-compose up doesn't build a new image unless you have a build section defined with your Dockerfile and you pass the --build parameter. Without that, it will reuse the existing image.
If your docker-compose.yml does not include a build section and you build your images with docker build ..., then after you recreate your image, a docker-compose up will recreate the impacted containers.
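For example, a hedged docker-compose.yml with a build section (the service name web is an assumption):
services:
  web:
    build: .
    ports:
      - "8000:8000"
With that in place, docker-compose up --build (or docker-compose build followed by docker-compose up) rebuilds the image, and the pip install layers re-run whenever requirements.txt has changed, because the changed file invalidates Docker's layer cache from the ADD step onward.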