Hot reload in Django app inside Docker containers not working

So today is my first day ever using Docker. I have tried many times, but I noticed that hot reload does not work.
I opened the container in VS Code, navigated through the files, and tried changing them, but nothing happens.
Here's the Dockerfile:
FROM python:3.8-slim-buster
WORKDIR /usr/project
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
COPY . .
RUN pip install -r requirements.txt
And here's docker-compose.yml:
version: '3.7'
services:
  web:
    restart: always
    build:
      context: .
      dockerfile: Dockerfile
    command: sh -c "python manage.py runserver 0.0.0.0:8000"
    ports:
      - "8000:8000"
    env_file:
      - .env
    volumes:
      - .:/user/project
I even unchecked Compose V2 in Docker Desktop and restarted the app and the containers; still nothing happens. So what am I doing wrong?

I kind of figured it out; apparently I was using Docker wrong.
I was using docker-compose run instead of docker-compose up,
and I had a typo: I copied the code into the /usr/project directory but mounted a different one, /user/project :"D
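For reference, the corrected mount just has to match the WORKDIR from the Dockerfile above; a minimal sketch:

volumes:
  - .:/usr/project   # must match WORKDIR /usr/project in the Dockerfile

with the container started via docker-compose up, so that both the bind mount and Django's autoreloader are in play.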

Related

How to sync local host with docker host?

I have a hello-world Django project and I want to dockerize it. My OS is Windows 8.1 and I'm using Docker Toolbox. Using volumes I could persist data in the Docker container, and what I want to do is sync the code in the Docker container with the code on my local host in the directory where my project code is stored; so far I couldn't do it.
Here is my docker-compose.yml:
version: '3.7'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - myvol1:/code
    ports:
      - 8000:8000
volumes:
  myvol1:
and Dockerfile:
# Pull base image
FROM python:3.7
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# Set work directory
WORKDIR /code
# Install dependencies
COPY requirement.txt /code/
RUN pip install -r requirement.txt
# Copy project
COPY . /code/
Without using volumes I can run my code in the container, but the data is not persisted.
I'd be grateful for your help.
Maybe try:
version: '3.7'
services:
  web:
    build: .
    command: python manage.py runserver 127.0.0.1:8000
    volumes:
      - myvol1:/code
    ports:
      - 8000:8000
volumes:
  myvol1:
I thought maybe changing to the localhost IP might help, or the ports could also be changed, following the format
<port-number-host>:<port-number-container>
that is, "your listening port : the container's listening port".
The port might be busy, but these are things that I would troubleshoot and try.
My resources/references: Udemy Class from Bret Fisher
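One thing worth checking, as a sketch rather than a definitive fix: a named volume like myvol1 lives inside Docker's storage and is not linked to the host directory, so it persists data but does not sync code. A host-path bind mount is what typically gives the host-to-container sync asked about here (assuming the compose file sits in the project root next to manage.py):

version: '3.7'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code   # bind mount: the host project dir is mounted over /code
    ports:
      - 8000:8000

Note that with Docker Toolbox only paths under C:\Users are shared into the VirtualBox VM by default, so the project needs to live there for the bind mount to resolve.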

Duplicate images on docker-compose build. How to properly push two services of docker-compose.yml to the Docker Hub registry?

I have a docker-compose.yml defined as follows with two services (the database and the app):
version: '3'
services:
  db:
    build: .
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
The reason I have build: . in both services is that you can't do docker-compose push unless every service has a build entry. However, this means that both services refer to the same Dockerfile, which builds the entire app. So after I run docker-compose build and look at the available images, I see this:
$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
mellon_app   latest   XXXXXXXXXXXX   27 seconds ago   1.14GB
postgres     latest   XXXXXXXXXXXX   27 seconds ago   1.14GB
The IMAGE ID is exactly the same for both images, and so is the size. This makes me think I've definitely done some unnecessary duplication, as they're both just built from the same Dockerfile. I don't want to take up any unnecessary space, so how do I do this properly?
This is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN mkdir usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend
WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build
WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt
EXPOSE 8000
CMD python3 manage.py collectstatic && \
    python3 manage.py makemigrations && \
    python3 manage.py migrate && \
    gunicorn mellon.wsgi --bind 0.0.0.0:8000
What is the proper way to push the images to my Docker hub registry without this duplication?
The proper way is to:
1. docker build -f {path-to-dockerfile} -t {desired-docker-image-name} .
2. docker tag {desired-docker-image-name}:latest {desired-remote-image-name}:latest (or not latest but whatever you want, like a datetime in int format)
3. docker push {desired-remote-image-name}:latest
and cleanup:
4. docker rmi {desired-docker-image-name}:latest {desired-remote-image-name}:latest
The whole purpose of docker-compose is to help your local development, so it's easier to start several containers and combine them on a local docker-compose network, etc.
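What the answer describes can also be expressed in compose itself: pull the stock postgres image for db instead of building the app's Dockerfile, give the built app image an explicit name, and push only that service. A sketch (the myhubuser/mellon_app tag is a hypothetical placeholder):

version: '3'
services:
  db:
    image: postgres   # pulled from Docker Hub, not built from the app Dockerfile
    environment:
      - POSTGRES_DB=postgres
  app:
    build: .
    image: myhubuser/mellon_app:latest   # name given to the image built from Dockerfile
    ports:
      - "8000:8000"
    depends_on:
      - db

Then docker-compose build produces a single app image, and docker-compose push app pushes just that image.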

Docker-compose command: file not found

I want to initialize Docker for my Django project with PostgreSQL. I followed the instructions from https://docs.docker.com/compose/django/
I also want to be sure that db runs before web, so I use wait_for_db.sh. When I try to execute docker-compose up,
I see the following response:
web_1 | chmod: cannot access 'wait_for_db.sh': No such file or directory
pipingapi_web_1 exited with code 1
Before trying docker-compose run, I change directory to the project root. I also tried
$ docker-compose run web django-admin startproject pipingapi .
even though the project was created before with venv.
I guess it's not exactly about the .sh file, because when I erase the lines referring to that file, Docker can't find manage.py either (look at the command order in docker-compose.yml). I also tried to put code/ before wait_for_db.sh in docker-compose.yml, but it did not work.
My project tree:
.
L apienv/
L docker-compose.yml
L Dockerfile
L manage.py
L project/
L README.md
L requirements.txt
L restapi/
L wait_for_db.sh
Dockerfile:
FROM python:3.6
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
RUN apt-get update -q
RUN apt-get install -yq netcat
docker-compose.yml:
version: '3'
services:
  db:
    image: postgres:12.3
    volumes:
      - /var/lib/postgresql/data
    env_file:
      - ./.env
  web:
    build: .
    command:
      sh -c "chmod +x wait_for_db.sh
      && ./wait_for_db.sh
      && python manage.py makemigrations
      && python manage.py migrate
      && python manage.py runserver 0.0.0.0:8000"
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
    env_file:
      - ./.env
If it matters: I use Docker Toolbox on Windows 8.1.
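(The script itself isn't shown in the question. For context, a typical netcat-based wait script, sketched here with the db hostname and Postgres port from the compose file above and relying on the netcat the Dockerfile installs, looks something like:

#!/bin/sh
# wait_for_db.sh: block until the db service accepts TCP connections
until nc -z db 5432; do
  echo "Waiting for the database..."
  sleep 1
done
)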
EDIT (SOLVED):
It looked like I was overwriting my tree with the code directory, so I deleted
volumes:
  - .:/code
and it works.
Since the image building stage completes, you could drop into the Docker image and interactively run the commands you are trying to fix.
That should give you some hints:
docker run -it web_1 bash
My guess is that, since you set WORKDIR before you run the COPY, you are probably in the wrong directory.

docker-compose volumes with a custom Dockerfile

I am trying to mount my code directory using a Docker volume, but I am unable to do so.
Here's the relevant section of my docker-compose file:
web:
  build: ./web
  dockerfile: Dockerfile
  links:
    - db:db
    - elasticsearch:elasticsearch
  volumes:
    - ./web:/usr/src/app
The web folder has a Dockerfile with the following instructions:
FROM python:2.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
docker-compose up is working without any issues, but I don't see any volumes after start up.
root@test-new:/home/django/test# docker inspect test_web | grep -i volume
    "Volumes": null,
    "Volumes": null,
Here's the rest of my stack, if that is relevant.
elasticsearch
nginx
postgres db
The following syntax works:
volumes:
  - ${PWD}/web:/usr/src/app
web:
  build: ./web
  dockerfile: Dockerfile-aside
  links:
    - db:db
    - elasticsearch:elasticsearch
  volumes:
    - ${PWD}/web:/usr/src/app
  env_file: .env
One of the reasons the earlier migrations were not being copied is that I was running the code inside a new container instead of the container that is currently up.
Instead of
docker-compose run --rm web /bin/bash
when I tried
docker exec -it my_container_name /bin/bash
and then ran the migrate command, the newly generated migration files were now present in the source directory. It would be ideal to use docker-compose exec, but the version I am currently using is below 1.7, so I am using the above interim solution in the meantime.
https://github.com/d11wtq/dockerpty/pull/48
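(With Compose 1.7+, the equivalent one-liner, as a sketch using the service name from this setup, is:

docker-compose exec web /bin/bash

which opens a shell in the already-running container rather than spawning a new one.)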

Auto-reloading of code changes with Django development in Docker with Gunicorn

I'm using a Docker container for Django development, and the container runs Gunicorn with Nginx. I'd like code changes to auto-load, but the only way I can get them to load is by rebuilding with docker-compose (docker-compose build). The problem with "build" is that it re-runs all my pip installs.
I'm using the Gunicorn --reload flag, which is apparently supposed to do what I want. Here are my Docker config files:
## Dockerfile:
FROM python:3.4.3
RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install -r /code/requirements/docker.txt
## docker-compose.yml:
web:
  restart: always
  build: .
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn myapp.wsgi:application -w 2 -b :8000 --reload
nginx:
  restart: always
  build: ./config/nginx
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  ports:
    - "5432:5432"
I've tried some of the other Docker commands (docker-compose restart, docker-compose up), but the code won't refresh.
What am I missing?
Thanks to kikicarbonell, I looked into having a volume for my code, and after looking at the Docker Compose recommended Django setup, I added volumes: - .:/code to my web container in docker-compose.yml, and now any code changes I make automatically apply.
## docker-compose.yml:
web:
  restart: always
  build: .
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
    - .:/code
  env_file: .env
  command: /usr/local/bin/gunicorn myapp.wsgi:application -w 2 -b :8000 --reload
Update: for a thorough example of using Gunicorn and Django with Docker, check out this example project from Rackspace, which also shows how to use docker-machine to launch the setup on remote servers like Rackspace Cloud.
Caveat: currently, this method does not work when your code is stored locally and the Docker host is remote (e.g., on a cloud provider like DigitalOcean or Rackspace). This also applies to virtual machines if your local file system is not mounted on the VM. Note that there are separate volume drivers (e.g., Flocker), and there might be something out there to address this need. For now, the "fix" is to rsync/scp your files up to a directory on the remote Docker host; then the --reload flag will auto-reload Gunicorn after any scp/rsync.

Update: if pushing code to a remote Docker host, I find it far easier to just rebuild the Docker container (e.g., docker-compose build web && docker-compose up -d). This can be slower than the rsync approach if your src folder is large, though.
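As a sketch of the rsync approach described above (the host name and remote path are hypothetical placeholders):

rsync -avz --exclude '.git' ./ user@remote-docker-host:/srv/myapp/

With --reload active, Gunicorn picks up the changed files as soon as the sync finishes.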
You have another problem: Docker caches each layer that it builds, so you shouldn't have to re-run pip install every time!
ADD . /code/
RUN pip install -r /code/requirements/docker.txt
This is your problem: Docker checks every ADD statement to see if any files have changed, and invalidates the cache for it and every later step if they have. The correct way to do this is:
ADD ./requirements/docker.txt /code/requirements/
RUN pip install -r /code/requirements/docker.txt
ADD . /code/
which will only invalidate your pip install line if your requirements file changes!
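Applied to the Dockerfile from the question, the reordered version would look like this (a sketch; same base image and paths as above):

FROM python:3.4.3
RUN mkdir /code
WORKDIR /code
# Requirements first: this layer stays cached until docker.txt itself changes
ADD ./requirements/docker.txt /code/requirements/
RUN pip install -r /code/requirements/docker.txt
# Code last: editing code no longer invalidates the pip install layer
ADD . /code/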
It seems like you need to match the WORKDIR/COPY commands in your Dockerfile with the volume in your docker-compose.yml. Here is an example:
Dockerfile
WORKDIR /app
COPY . /app
docker-compose.yml
app:
  # other commands
  volumes:
    - ./app:/app
I faced a very similar problem trying to configure auto-reload of the project with a slightly different setup. I set up volumes, but it did not work anyway. After an hour of googling and a thorough examination of my code, I figured out that the volume paths in the Dockerfile and docker-compose.yml simply did not match. Make sure that they are the same.
My Dockerfile:
FROM python:3.6.9-alpine3.10
COPY ./requirements/local.txt /app/requirements/local.txt
RUN set -ex \
    && apk add --no-cache --virtual .build-deps postgresql-dev git gcc libgcc musl-dev jpeg-dev zlib-dev build-base \
    && python -m venv /env \
    && /env/bin/pip install --upgrade pip \
    && /env/bin/pip install --no-cache-dir -r /app/requirements/local.txt \
    && runDeps="$(scanelf --needed --nobanner --recursive /env \
        | awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
        | sort -u \
        | xargs -r apk info --installed \
        | sort -u)" \
    && apk add --virtual rundeps $runDeps \
    && apk del .build-deps
### Here is the path to the project
COPY . /app
WORKDIR /app/project
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
EXPOSE 8088
My docker-compose.yml:
version: '3'
services:
  web:
    build:
      context: ../..
      dockerfile: compose/local/Dockerfile
    restart: on-failure
    command: python manage.py runserver 0.0.0.0:8088 --settings=project.settings.local
    volumes:
      # - .:/var/www/app  # messed-up path
      - .:/app  # correct path
    env_file:
      - ../../.env.local
    depends_on:
      - db
    ports:
      - "8000:8000"
Since I never found a desirable solution, consider this interesting hack. In posting it here, I wanted to see if anyone has had similar/good/bad experiences with this workaround.
To make code reload locally for development, I simply created a view that immediately calls exit(). The exit will crash Django, and a reload will occur once the code changes are available. The reboot takes a fraction of a second and can be triggered via a tab in the browser, a requests.get call, or any other similar call. The reload is not automatic, but it does skip any Docker lag such as a restart.
When the exit is called, you will see the PID increment (if tailing the logs):
web | [2019-07-15 18:29:52 +0000] [22] [INFO] Worker exiting (pid: 22)
web | [2019-07-15 18:29:52 +0000] [24] [INFO] Booting worker with pid: 24
I hope this helps others and/or gets feedback on this approach.
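A minimal sketch of such a view (hypothetical names; strictly a development-only trick):

# views.py: hitting this endpoint kills the Gunicorn worker so a fresh
# one boots with the current code (SystemExit is not caught by Django's
# regular exception handling).
def force_reload(request):
    exit()

# urls.py (hypothetical wiring):
# path('dev/reload/', force_reload)

Then a browser tab pointed at /dev/reload/, or requests.get('http://localhost:8000/dev/reload/'), triggers the worker restart shown in the log lines above.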
If you use docker-compose:
Dockerfile: when you build the image from the Dockerfile, you need to set a directory to hold your code (in my case /api/):
WORKDIR /api/ -> important
COPY . . -> important
docker-compose: your docker-compose file has your app service with the Django image just built from the Dockerfile; now you need to add a volume with the same WORKDIR that you used in the Dockerfile:
volumes:
  - .:/api/ -> important (must match the WORKDIR)
And that's all.
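Putting the two fragments together as a matched pair (a sketch; the base image and service name are hypothetical):

# Dockerfile
FROM python:3.8-slim
WORKDIR /api/
COPY . .

# docker-compose.yml
version: '3.7'
services:
  api:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/api/   # same path as the WORKDIR, so host edits appear in the container
    ports:
      - "8000:8000"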