I have a weird issue where not all file edits are being picked up by Django's auto-reloader (runserver).
I have this structure:
/
app/ ... apps in here.
config/settings.py
manage.py
Now, any change to config/settings.py or manage.py causes the Django runserver to reload.
But any changes to files inside app/... don't trigger a reload - I have to go and add a newline to manage.py and save (quite irritating).
Any ideas why this might be?
At first I thought it was a Docker thing and it was only picking up files in the base directory, but changes to config/settings.py also trigger the reload, so clearly it can see deeper than that.
EDIT: Additional info
Django 3.2, PyCharm, macOS. Yes, all apps are in INSTALLED_APPS.
I have another project that has the EXACT same structure and for some reason it works... I'm really stumped.
EDIT: adding docker-compose and Dockerfile
FROM python:3.8-slim
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY ./requirements /requirements
ARG pip_requirement_file
RUN apt-get update && apt-get install -y libjpeg62-turbo-dev zlib1g-dev gcc ca-certificates gcc postgresql-client sed xmlsec1 pax-utils && apt-get clean
RUN pip install --no-cache-dir -r /requirements/$pip_requirement_file \
&& find /usr/local \
\( -type d -a -name test -o -name tests \) \
-o \( -type f -a -name '*.pyc' -o -name '*.pyo' \) \
-exec rm -rf '{}' +
# Copy requirements and install local one
RUN rm -rf /requirements
COPY ./compose/django/entrypoint.sh /entrypoint.sh
RUN sed -i 's/\r//' /entrypoint.sh
RUN chmod +x /entrypoint.sh
COPY ./compose/django/start-server.sh /start-server.sh
RUN sed -i 's/\r//' /start-server.sh
RUN chmod +x /start-server.sh
# Very specifically copy the files we want to avoid bloating the image.
COPY ./manage.py /app/
COPY ./app/ /app/app/
COPY ./admin_static/ /app/admin_static/
COPY ./config/ /app/config/
COPY ./database/ /app/database/
ENTRYPOINT ["/entrypoint.sh"]
CMD ["/start-server.sh"]
docker-compose.yml:
services:
  django:
    container_name: django
    build:
      context: .
      dockerfile: ./compose/django/Dockerfile
      args:
        pip_requirement_file: local.txt
    depends_on:
      - postgres
    ports:
      - "8000:8000"
    links:
      - postgres:postgres
    volumes:
      - .:/app
    env_file: .env
I'd like to thank everyone for trying to help me solve this riddle that has been doing my head in.
I decided to strip both projects (old and new) back to see why the old one worked and the new one didn't.
This is what was different.
Inside config/__init__.py in the old project, this line exists:
from __future__ import absolute_import, unicode_literals
In the new project, this line was missing.
When I add it in, hot reloading starts working for all files inside app/*?????
It's working, but honestly, I have no idea why that would make a difference.
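One way to dig into it is to list what the autoreloader is actually watching (a debugging sketch against Django 3.2's django.utils.autoreload module, run wherever you normally run runserver; the 'app/' substring check is only illustrative):
python manage.py shell -c "from django.utils import autoreload; files = list(autoreload.iter_all_python_module_files()); print(len(files), any('app/' in str(f) for f in files))"
If the app modules are missing from that list, the reloader never knew about them in the first place.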
I am not 100% sure what the issue is, but I'm offering this answer with a couple of things you can try:
There are reports that PyCharm can sometimes cause problems if it has been configured not to update the timestamps of files when saving them (there is a "Preserve files timestamps" setting that controls this), so try toggling that. An easy way to verify whether this is the issue is to edit a file with a different editor (or touch the file) and see if that triggers a reload; if it does, the issue is PyCharm.
Note that the default StatReloader will not work if file timestamps don't change.
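For example, a quick check from a shell in the environment where runserver is running (the path is illustrative):
touch app/some_app/models.py
# a working reloader then logs something like:
#   /app/app/some_app/models.py changed, reloading.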
Try installing pywatchman and the Watchman service, as mentioned in the documentation. It provides a much more efficient way to watch for changes than StatReloader, which simply polls all files for changes every second; in a large project, StatReloader may just be taking too long to spot changes.
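If you want to try the Watchman route, a minimal sketch for macOS with Homebrew (untested here; inside Docker you would install Watchman in the image instead):
brew install watchman
pip install pywatchman
python manage.py runserver
# on startup Django reports which reloader it picked, e.g.
#   Watching for file changes with WatchmanReloader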
Related
I have this Dockerfile:
FROM python:3.8.3-alpine
ENV MICRO_SERVICE=/home/app/microservice
# RUN addgroup -S $APP_USER && adduser -S $APP_USER -G $APP_USER
# set work directory
RUN mkdir -p $MICRO_SERVICE
RUN mkdir -p $MICRO_SERVICE/static
# where the code lives
WORKDIR $MICRO_SERVICE
# set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update \
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev gcc python3-dev musl-dev \
&& apk del build-deps \
&& apk --no-cache add musl-dev linux-headers g++
# install dependencies
RUN pip install --upgrade pip
# copy project
COPY . $MICRO_SERVICE
RUN pip install -r requirements.txt
COPY ./entrypoint.sh $MICRO_SERVICE
CMD ["/bin/bash", "/home/app/microservice/entrypoint.sh"]
and the following docker-compose.yml file:
version: "3.7"
services:
nginx:
build: ./nginx
ports:
- 1300:80
volumes:
- static_volume:/home/app/microservice/static
depends_on:
- web
restart: "on-failure"
web:
build: . #build the image for the web service from the dockerfile in parent directory
command: sh -c "python manage.py collectstatic --no-input &&
gunicorn djsr.wsgi:application --bind 0.0.0.0:${APP_PORT}"
volumes:
- .:/microservice:rw # map data and files from parent directory in host to microservice directory in docker containe
- static_volume:/home/app/microservice/static
env_file:
- .env
image: wevbapp
expose:
- ${APP_PORT}
restart: "on-failure"
volumes:
static_volume:
In the docker-compose.yml file I need to reference the following files, which live in other directories rather than in .devcontainer:
manage.py
requirements.txt
.env
This is my folder structure:
An easy solution would be to move the Dockerfile, docker-compose.yml, and .env into the django directory djsr, but I am trying to keep the files structured like this. How can I reference those files in docker-compose.yml?
It is fairly common to put the couple of Docker-related files in the project root directory, and that can potentially save you some trouble; I'd recommend that as a first choice.
If you do want to keep it all in a subdirectory, it's possible, though. When you run docker-compose, you can specify the location of the configuration file. It will consider all paths as relative to this file's directory.
# Either:
docker-compose -f .devcontainer/docker-compose.yml up
# Or:
cd .devcontainer && docker-compose up
When you go to build the image, the build reads in a context directory, and COPY statements are always interpreted relative to this directory. For your setup, you need the context directory to be the top of your source tree, and then specify an alternate Dockerfile in a subdirectory.
services:
  web:
    build:
      context: ..
      dockerfile: .devcontainer/Dockerfile
For the most part the Dockerfile itself is fine, but since the entrypoint script is in a subdirectory, the COPY command needs to reflect that too. Since you're copying the entire source directory, you could also rearrange things inside the image to get the layout you want.
COPY requirements.txt ./
RUN pip install -r requirements.txt
COPY . ./
# Either:
COPY .devcontainer/entrypoint.sh ./
# Or:
RUN mv .devcontainer/entrypoint.sh .
# Or:
CMD ["./.devcontainer/entrypoint.sh"]
I don't recommend the volume structure you have, but if you want to keep it, you also need to change the source path of the bind mount to be the parent directory. (Note particularly, in the previous Dockerfile fragment, a couple of the options involve moving files inside the image, and a bind mount will hide that change.)
services:
  web:
    volumes:
      # Ignore the application built into the container, and use
      # whatever's checked out on the host system instead.
      - ..:/home/app/microservice
      # Further ignore the static assets on the host system and
      # use the content in a named volume instead.
      - static_volume:/home/app/microservice/static
Why don't you mount these files the same way you did with the folders?
The source of the mount. For bind mounts, this is the path to the file or directory on the Docker daemon host. May be specified as source or src.
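For example, bind mounts work for individual files too, so the compose file in .devcontainer could mount just those files from the parent directory (a sketch; the container paths are assumed from the question's layout and may need adjusting):
services:
  web:
    volumes:
      - ../manage.py:/home/app/microservice/manage.py
      - ../requirements.txt:/home/app/microservice/requirements.txt
      - ../.env:/home/app/microservice/.env
Note that env_file: entries are resolved relative to the compose file itself, so for the .env you may prefer env_file: ../.env over a mount.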
I have been running an app without Docker and have just added a Dockerfile and docker-compose.
The issue I am having is that after I successfully build the image, running either runserver or migrate produces the error below.
➜ app git:(master) sudo docker-compose run app sh -c "python manage.py runserver"
Error loading shared library libpython3.8.so.1.0: No such file or directory (needed by /usr/local/bin/python)
Error relocating /usr/local/bin/python: Py_BytesMain: symbol not found
failed to resize tty, using default size
%
➜ app git:(master) sudo docker-compose run app sh -c "python manage.py migrate"
Error loading shared library libpython3.8.so.1.0: No such file or directory (needed by /usr/local/bin/python)
Error relocating /usr/local/bin/python: Py_BytesMain: symbol not found
Dockerfile
FROM python:3.8-alpine
MAINTAINER realize-sec
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /requirements.txt
RUN pip install -r requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
RUN adduser -D user
USER user
docker-compose.yml
version: "3"
services:
app:
build:
context: ""
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
What am I doing wrong that is causing this?
When I run it without Docker using python3 manage.py runserver, it works fine.
Because I haven't tested the build, I don't know whether any of these things will ultimately help you build your containers; however, here are some observations to hopefully set you on the right path.
Your build context is an empty string; it is usually a dot (.).
You typically finish the Dockerfile with the following command:
CMD [ "python3", "manage.py", "runserver", "0.0.0.0:8000" ]
So you can remove that from your compose file.
Other than that, on a more general note: although Alpine images are small, they are prone to breaking because of the additional dependencies and packages you need to add or remove. You're probably better off going with the slim variant overall. The original build will take a bit longer, but it will be more manageable.
Also, if you're running a modern version of Docker on your machine, you can move the compose file syntax to version 3.7 or 3.8, depending on your version of Docker.
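Putting those suggestions together, the compose service might look something like this (a sketch, untested):
version: "3.8"
services:
  app:
    build:
      context: .
    ports:
      - "8000:8000"
    volumes:
      - ./app:/app
    # no command: needed once the Dockerfile ends with the CMD shown above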
I'm building a Django app using Docker. The issue I am having is that my local filesystem is not synced to the Docker environment, so making local changes has no effect until I rebuild.
I added a volume
- ".:/app:rw"
which syncs with my local filesystem, but the bundles that get built via webpack during the image build don't show up (because they aren't in my local filesystem).
My Dockerfile has this:
... setup stuff...
ENV NODE_PATH=$NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules \
PATH=$NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
ENV PATH=/node_modules/.bin:$PATH
COPY package*.json /
RUN (cd / && npm install && rm -rf /tmp/*)
...pip install stuff...
COPY . /app
WORKDIR /app
RUN npm run build
RUN DJANGO_MODE=build python manage.py collectstatic --noinput
So I want to sync with my local filesystem so I can make changes and have them show up immediately AND have my bundles and static assets present. The way I've been developing so far is to just comment out the app:rw volume line in my docker-compose.yml, which allows all the assets and bundles to be present.
The solution that ended up working for me was to assign anonymous volumes to the directories I did not want synced with my local environment.
volumes:
  - ".:/app/:rw"
  - "/app/project_folder/static_source/bundles/"
  - "/app/project_folder/bundle_tracker/"
  - "/app/project_folder/static_source/static/"
Arguably there's probably a better way to do this, but this solution does work. The Dockerfile compiles the webpack bundles and collectstatic does its job within the container, and the last three lines above keep my local machine from overwriting them. The downside is that I still have to figure out a better solution for live recompilation of SCSS or JavaScript, but that's a job for another day.
You can mount a local folder into your Docker container. Just use the --mount option with the docker run command. In the following example, the target subdirectory of the current directory will be available in the container at /app.
docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
Reference: https://docs.docker.com/storage/bind-mounts/
This question already has an answer here: Docker bound mount - can not see changes on browser (closed as a duplicate).
EDIT: Gulp "watch" doesn't work on Windows with mounted volumes because no "file change" event is sent. My current solution is to run Docker Windows Volume Watcher on my local machine while I see if I can integrate this solution into my code.
I'm trying to run a gulp watch task in my Docker container, and gulp isn't catching when my files change.
Quick notes:
This setup works when I use it for my locally hosted WordPress installs.
The file changes are reflected in my Docker container, according to PyCharm's Docker service.
Running the "styles" gulp task works; it's just the file watching that does not.
It's clear to me that there's some sort of disconnect between how gulp watches for changes, and how Docker is letting that happen.
Github link
Edit: It looks possible to do what I want; here's a link to someone doing it slightly differently.
gulpfile excerpt:
export const watchForChanges = () => {
  watch('scss-js/scss/**/*.scss', gulp.series('styles'));
  watch('scss-js/js/**/*.js', scripts);
  watch('scss-js/scss/*.scss', gulp.series('styles'));
  watch('scss-js/js/*.js', scripts);
  // Try absolute path to see if it works
  watch('scss-js/scss/bundle.scss', gulp.series('styles'));
}
...
// Compile SCSS through styles command
export const styles = () => {
  // Want more than one SCSS file? Just turn the below string into an array
  return src('scss-js/scss/bundle.scss')
    // If we're in dev, init sourcemaps. Any plugins below need to be compatible with sourcemaps.
    .pipe(gulpif(!PRODUCTION, sourcemaps.init()))
    // Throw errors
    .pipe(sass().on('error', sass.logError))
    // In production use auto-prefixer, fix general grid and flex issues.
    .pipe(
      gulpif(
        PRODUCTION,
        postcss([
          autoprefixer({
            grid: true
          }),
          require("postcss-flexbugs-fixes"),
          require("postcss-preset-env")
        ])
      )
    )
    .pipe(gulpif(PRODUCTION, cleanCss({compatibility: 'ie8'})))
    // In dev write source maps
    .pipe(gulpif(!PRODUCTION, sourcemaps.write()))
    // TODO: Update this source folder
    .pipe(dest('blog/static/blog/'))
    .pipe(server.stream());
}
...
export const dev = series(parallel(styles, scripts), watchForChanges);
Docker-Compose:
version: "3.7"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "8002:8000"
- "3001:3001"
- "3000:3000"
volumes:
- ./django_project:/django_project
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
restart: always
depends_on:
- db
db:
image: postgres
environment:
POSTGRES_PASSWORD: example1
ports:
- "5432:5432"
restart: always
Dockerfile:
FROM python:3.8-buster
MAINTAINER Austin
ENV PYTHONUNBUFFERED 1
# Install node
RUN apt-get update && apt-get -y install nodejs
RUN apt-get install npm -y
# Replace shell with bash so we can source files
RUN rm /bin/sh && ln -s /bin/bash /bin/sh
# Update Node
# Install base dependencies
RUN apt-get update && apt-get install -y -q --no-install-recommends \
apt-transport-https \
build-essential \
ca-certificates \
curl \
git \
libssl-dev \
wget
ENV NVM_DIR /usr/local/nvm
ENV NODE_VERSION 12.14.0
WORKDIR $NVM_DIR
RUN curl https://raw.githubusercontent.com/creationix/nvm/master/install.sh | bash \
&& . $NVM_DIR/nvm.sh \
&& nvm install $NODE_VERSION \
&& nvm alias default $NODE_VERSION \
&& nvm use default
ENV NODE_PATH $NVM_DIR/versions/node/v$NODE_VERSION/lib/node_modules
ENV PATH $NVM_DIR/versions/node/v$NODE_VERSION/bin:$PATH
# What PIP installs need to get done?
COPY django_project/requirements.txt /requirements.txt
RUN pip install -r /requirements.txt
# Copy local directory to target new docker directory
RUN mkdir -p /django_project
WORKDIR /django_project
COPY ./django_project /django_project
# Make Postgres Work
EXPOSE 5432/tcp
WORKDIR /django_project
RUN npm install gulp-cli -g
What do you think could be going on?
My guess is you are running on Windows, right?
If so, take a look at the following answer:
https://stackoverflow.com/a/58969398/12153397
Below is the gist of the linked answer.
Issue identified
Bind mounting actually does not work for Docker Toolbox: file change events in mounted folders of the host are not propagated to the container by Docker for Windows.
Solution
This script is intended to be the answer to this issue: docker-windows-volume-watcher.
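As an alternative workaround (not from the linked answer): gulp forwards its watch() options to chokidar, so you can fall back to polling, which does not depend on file change events reaching the container. A sketch against the gulpfile above:
// Polling-based watch: usePolling and interval are chokidar options forwarded by gulp's watch()
export const watchForChanges = () => {
  watch('scss-js/scss/**/*.scss', { usePolling: true, interval: 500 }, gulp.series('styles'));
  watch('scss-js/js/**/*.js', { usePolling: true, interval: 500 }, scripts);
}
Polling is heavier than native file events, so keep the interval modest.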
I have a django web application based on cookiecutter-django. The stack is built on several containers running django, redis, celery beat, celery worker, celery flower, postgres, and caddy. When I launched the application in a production-like environment on a VPS, I experienced strange behavior: django seems to run an old version of the code (e.g. an old version of a form) despite my checking out new code from the git repository. I have tried a few actions to "force" a refresh of the application code:
docker-compose down and then a rebuild of all containers with docker-compose build, followed by docker-compose up
a similar rebuild as above, but only for the container with django
When I inspect the code inside the django container, the proper version of the code is there.
I checked the app with Django Debug Toolbar, and it seems that pages are not loaded from cache (there are no calls to the cache backend, and there are a number of queries to the database, which suggests that pages are not loaded from cache).
I was expecting django to automatically detect the code change and restart running the new code; additionally, an interpreter restart could be needed (which should be handled by putting the containers down and rebuilding). Any ideas what else to check or try? Removing all containers, images, and volumes helped, but that is not my preferred way to roll out each update.
I went through the solutions from Why does docker-compose build not reflect my django code changes? and After docker-compose build the docker-compose up run old not updated containers,
but none worked for me except "nuke everything". Is there a way to do a "soft reload"?
Here is the Dockerfile for the django container:
# Dockerfile for django container
FROM python:3.6-alpine
ENV PYTHONUNBUFFERED 1
RUN apk update \
# psycopg2 dependencies
&& apk add --virtual build-deps gcc python3-dev musl-dev \
&& apk add postgresql-dev \
# Pillow dependencies
&& apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev \
# CFFI dependencies
&& apk add libffi-dev py-cffi
RUN addgroup -S django \
&& adduser -S -G django django
# Requirements are installed here to ensure they will be cached.
COPY ./requirements /requirements
RUN pip install --no-cache-dir -r /requirements/production.txt \
&& rm -rf /requirements
COPY ./compose/production/django/entrypoint /entrypoint
RUN sed -i 's/\r//' /entrypoint
RUN chmod +x /entrypoint
RUN chown django /entrypoint
COPY ./compose/production/django/start /start
RUN sed -i 's/\r//' /start
RUN chmod +x /start
RUN chown django /start
COPY ./compose/production/django/celery/worker/start /start-celeryworker
RUN sed -i 's/\r//' /start-celeryworker
RUN chmod +x /start-celeryworker
RUN chown django /start-celeryworker
COPY ./compose/production/django/celery/beat/start /start-celerybeat
RUN sed -i 's/\r//' /start-celerybeat
RUN chmod +x /start-celerybeat
RUN chown django /start-celerybeat
COPY ./compose/production/django/celery/flower/start /start-flower
RUN sed -i 's/\r//' /start-flower
RUN chmod +x /start-flower
COPY . /app
RUN chown -R django /app
USER django
WORKDIR /app
ENTRYPOINT ["/entrypoint"]
In case someone comes across this: the answer is that you have to rebuild the container every time you push new code.
Just run
docker-compose -f production.yml build
to update the production version
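Then recreate the containers so they actually run the freshly built image (production.yml being the compose file used above):
docker-compose -f production.yml build
docker-compose -f production.yml up -d
docker-compose up recreates any container whose image or configuration has changed, so this picks up the new code without removing volumes.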