Why does docker-compose use ~60GB to build this image - django

When I start docker-compose build I have 60 gigs free. I run out of space before it finishes. Any idea what could possibly be going on?
I'm running the latest Docker for Mac and docker-compose.
Here's my docker-compose file:
version: '3'
services:
  db:
    image: postgres:9.6-alpine
    volumes:
      - data:/var/lib/postgresql/data
    ports:
      - 5432:5432
  web:
    image: python:3.6-alpine
    command: ./waitforit.sh solr:8983 db:5432 -- bash -c "./init.sh"
    build: .
    env_file: ./.env
    volumes:
      - .:/sark
      - solrcores:/solr
    ports:
      - 8000:8000
    links:
      - db
      - solr
    restart: always
  solr:
    image: solr:6-alpine
    ports:
      - 8983:8983
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - sark
    volumes:
      - solrcores:/opt/solr/server/solr/mycores
volumes:
  data:
  solrcores:
And my Dockerfile for the "web" image:
FROM python:3
# Some stuff that everyone has been copy-pasting
# since the dawn of time.
ENV PYTHONUNBUFFERED 1
# Install some necessary things.
RUN apt-get update
RUN apt-get install -y swig libssl-dev dpkg-dev netcat
# Copy all our files into the image.
RUN mkdir /sark
WORKDIR /sark
COPY . /sark/
# Install our requirements.
RUN pip install -U pip
RUN pip install -Ur requirements.txt
This image itself when built is ~3 gigs.
I'm pretty flummoxed.
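One thing worth checking: COPY . /sark/ sends the entire project directory to the daemon as build context on every build, and repeated builds leave dangling intermediate images and build cache behind. A quick diagnostic, using standard Docker CLI commands:
docker system df      # breaks disk usage down by images, containers, volumes and build cache
docker image prune    # removes dangling images left over from rebuilds
docker builder prune  # clears the BuildKit build cache
A .dockerignore file next to the Dockerfile also keeps large local artifacts out of the build context; a minimal sketch, assuming a typical Django checkout:
.git
venv
__pycache__
*.pyc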


Django/Docker: web container not up-to-date code

EDIT 22/05/2022
Docker version 20.10.14
docker-compose version 1.25.0
I deleted all containers/images again and re-built using docker-compose -f docker-compose.preprod.yml build --force-rm --no-cache, but I still observe the same issue: the code is not up-to-date in the web container.
I'm using Docker for a Django app and cannot get code updates to apply to my web container.
I've tried deleting all containers (docker rm -f ID; docker system prune) and images (docker rmi -f ID; docker image prune) related to my app and re-building with docker-compose -f docker-compose.preprod.yml build.
Then I run docker-compose -f docker-compose.preprod.yml up, but for some reason, when I connect to my running web container (docker exec -it web sh) and read my updated files, I see that the updates are not applied...
What should I do to make my updates apply?
# Pull the official base image
FROM python:3.8.3-alpine
# Set a work directory
WORKDIR /usr/src/app
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update && apk add postgresql-dev gcc g++ python3-dev musl-dev
RUN apk --update add libxml2-dev libxslt-dev libffi-dev musl-dev libgcc openssl-dev curl postgresql-client
RUN apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev nano
RUN pip3 install psycopg2 psycopg2-binary
# install xgettext for i18n
RUN apk add gettext
# Install dependencies
COPY requirements/ requirements/
RUN pip install --upgrade pip && pip install -r requirements/preprod.txt
# Copy the entrypoint.sh file
COPY entrypoint.preprod.sh .
# Copy the initdata sql file
COPY initdata.preprod.sql .
# Copy the project's files
COPY . .
RUN chmod +x entrypoint.preprod.sh
version: '3.7'
services:
  web:
    restart: always
    container_name: ecrf_web
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - app_volume:/usr/src/app
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - 8000
    env_file:
      - ./.env.preprod
    entrypoint: [ "/usr/src/app/entrypoint.preprod.sh" ]
    depends_on:
      - redis
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://localhost:8000/" ]
      interval: 30s
      timeout: 10s
      retries: 50
  redis:
    container_name: ecrf_redis
    image: "redis:alpine"
  celery:
    container_name: ecrf_celery
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core worker -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      - web
      - redis
  celery-beat:
    container_name: ecrf_celery-beat
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core beat -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      - web
      - redis
  nginx:
    container_name: ecrf_nginx
    build: ./nginx
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 1370:80
    depends_on:
      - web
volumes:
  static_volume:
  media_volume:
  app_volume:
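The likely culprit is the app_volume named volume mounted over /usr/src/app: a named volume is seeded from the image only the first time it is created, and after that it keeps its old contents and hides whatever the rebuilt image contains, so no amount of --no-cache rebuilding refreshes the code. A sketch of a fix, assuming the code does not need to persist outside the image (the volume name below is hypothetical; Compose prefixes it with the project name, so check docker volume ls first):
docker-compose -f docker-compose.preprod.yml down
docker volume rm myproject_app_volume
docker-compose -f docker-compose.preprod.yml up --build
Alternatively, simply remove the - app_volume:/usr/src/app line from the web, celery, and celery-beat services and let the containers run the code baked into the image.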

"sqlite3.OperationalError: attempt to write a readonly database" even after chmod 777

I'm running a Django application inside a Docker container, and I'm getting this error: sqlite3.OperationalError: attempt to write a readonly database. I've tried everything in the Dockerfile
RUN chown username db.sqlite3
RUN chmod 777 db.sqlite3
I also tried running the application as the root user, but I still get the same error.
Here is my Dockerfile
FROM python:3.9.5-alpine
RUN addgroup -S apigroup && adduser -S weatherapi -G apigroup
WORKDIR /app
ADD requirements.txt .
RUN apk update && apk upgrade
RUN python3 -m pip install --upgrade pip && python3 -m pip install --no-cache-dir -r requirements.txt
COPY . .
USER root
EXPOSE 8000
RUN chmod 777 db.sqlite3
USER weatherapi
RUN python3 manage.py migrate
CMD ["sh", "-c", "python3 manage.py runserver 0.0.0.0:8000"]
And my docker-compose
version: '3.7'
networks:
  weather_api_net:
    driver: bridge
    driver_opts:
      com.docker.network.enable_ipv6: "false"
services:
  web:
    restart: unless-stopped
    image: weatherapi:1.0
    container_name: weatherapi
    ports:
      - "8000:8000"
    volumes:
      - .:/app
    deploy:
      replicas: 1
      update_config:
        parallelism: 2
        delay: 10s
        order: start-first
      rollback_config:
        parallelism: 2
        delay: 10s
        failure_action: continue
        monitor: 60s
        order: stop-first
      restart_policy:
        condition: on-failure
    networks:
      - weather_api_net
I just had to change the base image to a Debian-based one:
FROM python:3.8-buster
Apparently Alpine doesn't play well with SQLite in this setup. (Also worth noting: SQLite needs write permission on the directory containing the database file, not just the file itself, because it creates journal files alongside it, and the .:/app bind mount means the host directory's ownership wins over anything chmod-ed at build time.)
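If you want to stay on Alpine, a minimal sketch of an alternative layout (an assumption, not part of the original answer: it presumes settings.py points DATABASES at /app/data/db.sqlite3, a hypothetical path) keeps the database in a directory the runtime user owns:
# hypothetical Dockerfile lines; /app/data is an assumed location
RUN mkdir -p /app/data && chown -R weatherapi:apigroup /app/data
USER weatherapi
Keep in mind the .:/app bind mount overrides build-time ownership, so the host directory must be writable by the container user as well.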

Mounted a local directory in my Docker image, but it can't read a file from that directory

I'm trying to build a Docker setup with MySQL, Django, and Apache containers. I have set up this docker-compose.yml ...
version: '3'
services:
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3306"
    volumes:
      - my-db:/var/lib/mysql
    command: ['mysqld', '--character-set-server=utf8mb4', '--collation-server=utf8mb4_unicode_ci']
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    env_file: .env
    environment:
      DEBUG: 'true'
    command: /usr/local/bin/gunicorn maps.wsgi:application --reload -w 2 -b :8000
    volumes:
      - ./web/:/app
    depends_on:
      - mysql
  apache:
    restart: always
    build: ./apache/
    ports:
      - "9090:80"
    links:
      - web:web
volumes:
  my-db:
I would like to mount a directory from my local machine into the Django container so that local edits are reflected in the container, which is why I have this
volumes:
  - ./web/:/app
in my "web" portion. This is the web/Dockerfile I'm using ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
WORKDIR /app/
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
RUN ["chmod", "+x", "/app/entrypoint.sh"]
ENTRYPOINT ["/app/entrypoint.sh"]
However, when I run things using docker-compose up, I get this error ...
chmod: cannot access '/app/entrypoint.sh': No such file or directory
Even though when I look in my local directory, I can see the file ...
localhost:maps davea$ ls -al web/entrypoint.sh
-rw-r--r-- 1 davea staff 99 Mar 9 15:23 web/entrypoint.sh
I sense I haven't mapped/mounted things properly, but I'm not sure where the issue is.
Your docker-compose mounting is set up correctly, but there are two problems in the Dockerfile. First, RUN ["chmod", "+x", "/app/entrypoint.sh"] runs at build time, when /app contains only what has been COPYed so far (just requirements.txt); the ./web/:/app bind mount exists only at runtime, which is why chmod reports "No such file or directory". Second, your ENTRYPOINT ["/app/entrypoint.sh"] executes a file that has no execute permission, according to your ls -al output:
-rw-r--r-- 1 davea staff 99 Mar 9 15:23 web/entrypoint.sh
There are two simple solutions for this:
Give execute permission to the entrypoint.sh file on the host (a bind-mounted file keeps its host permissions):
chmod a+x web/entrypoint.sh
Or, if you do not want to give this permission, update your entrypoint to ENTRYPOINT ["bash", "/app/entrypoint.sh"].
Either way, also COPY the file into the image before the RUN chmod line, or drop that RUN line entirely, since the bind mount does not exist at build time. Note that this is a problem with your Dockerfile rather than with your docker-compose mounting, and hence you will need to rebuild your Docker image after making the changes, like
docker-compose up -d --build
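A sketch of the build-time variant, assuming entrypoint.sh sits at the top of the ./web build context:
# copy the script into the image so it exists at build time, then mark it executable
COPY entrypoint.sh /app/entrypoint.sh
RUN chmod +x /app/entrypoint.sh
ENTRYPOINT ["/app/entrypoint.sh"]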

What is the docker command to run my Django server?

I'm trying to Dockerize my local Django/MySQL setup. I have this directory and file structure ...
apache
docker-compose.yml
web
  - manage.py
  - venv
  - requirements.txt
  - ...
Below is the docker-compose.yml file I'm using ...
version: '3'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - mysql:mysql
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    #env_file: web/venv
    environment:
      DEBUG: 'true'
    command: [ "python", "./web/manage.py runserver 0.0.0.0:8000" ]
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    expose:
      # Opens port 3406 on the container
      - '3406'
    volumes:
      - my-db:/var/lib/mysql
volumes:
  web-django:
  web-static:
  my-db:
However when I run
docker-compose up
I get errors like the following:
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
maps_web_1 exited with code 2
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
maps_web_1 exited with code 2
Is there another way I'm supposed to be referencing the manage.py file?
Edit: Added info requested in comments ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . .
As others suggested, this is most probably caused by running manage.py runserver from the wrong directory, or something very similar.
You are not using the WORKDIR directive in your Dockerfile at all. It is much safer to use it. Change your Dockerfile and docker-compose.yml files as below, and your problem should be solved.
Dockerfile
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . /app/
docker-compose.yml
version: '3'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - mysql:mysql
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    #env_file: web/venv
    environment:
      DEBUG: 'true'
    command: [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
...
Notice
You should be able to fix the problem by simply deleting web from your command for running the server. That's because when you build the Dockerfile, you are already inside the web directory, so COPY . . copies the contents of the web directory, not the web directory itself. Your file structure inside the docker image should look something like this:
- root
- home
- var
- ...
- manage.py
- venv
- requirements.txt
- ...
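A quick way to verify where the files landed (a hedged suggestion; web is the service name from the compose file) is to list the image's filesystem root:
docker-compose run --rm web ls -la /
If manage.py shows up at /, then python manage.py runserver 0.0.0.0:8000 run from the default working directory will find it.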
In the command: directive, if you're using the array syntax, you're responsible for breaking up the command into words. As you've shown it, you're running the equivalent of python "manage.py runserver 0.0.0.0:8000" at a shell prompt, and Python dutifully treats the entire command and options, spaces included, as the filename of a script to run. If you break this up into single words, it will work better:
command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
But there's not really a reason to specify this in docker-compose.yml at all. This is the default command you'd want to run to launch the container no matter how you run it, so it should be the default command in your image's Dockerfile:
...
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
You don't need links: at all on modern Docker (Docker Compose automatically sets up inter-container networking for you). You definitely don't want to mount named volumes over your application code: this hides what's in your image, and (since you've told Docker this is critical user data) it forces Docker to use an old version of your application if you try to update your image.
That leaves you with a simpler docker-compose.yml file:
version: '3'
services:
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    environment:
      DEBUG: 'true'
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      MYSQL_USER: 'chicommons'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3306" # second port is always container-internal port
    volumes:
      - my-db:/var/lib/mysql
volumes:
  my-db:
Let us try to debug this error:
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
Looks like either the code is not copied to the container (named 'web') or the command is triggered from the root/home directory, where manage.py is not accessible.
1. Is the code available in the container? How to check?
Usually, Docker will just execute the command in the container and exit, unless there is an unfinished running task (like a server running in the foreground).
To stop it from exiting and enable debugging, let us add a long-running command, so that you can log in to the container and check whether the code is present.
command: tail -f /dev/null # trick to keep the container alive for debug mode
docker-compose.yml
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - mysql:mysql
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    #env_file: web/venv
    environment:
      DEBUG: 'true'
    command: tail -f /dev/null # trick to keep the container alive for debug mode
Log in to the 'web' container: from the command line, run docker exec -it web bash.
Check whether the project files are present; you can now run the python manage.py runserver 0.0.0.0:8000 command manually. If it works, then we can be sure the server can run in the container, and we can analyze the initial working directory.
2. If the code is present, check why manage.py is not found. Is the working directory set? That is, does the container know which base directory the command should run from?
Specify the working directory in the Dockerfile before you copy the project files into the container.
Dockerfile in web directory
ENV PYTHONUNBUFFERED 1
ARG PROJ_DIR=/usr/project/web
RUN mkdir -p $PROJ_DIR
WORKDIR $PROJ_DIR
# copy into the current WORKDIR; note that $WORKDIR is not a real variable, so COPY . $WORKDIR would not work
COPY . .
docker-compose.yml
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - mysql:mysql
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    #env_file: web/venv
    environment:
      DEBUG: 'true'
    command: python manage.py runserver 0.0.0.0:8000 # note: this command runs from the working directory set in the Dockerfile
I think this should resolve the issue or help you to figure out the problem.

django docker exited with code 0 on GET or POST requests

I am developing an application in Django 1.11 with Docker on Windows. I recently updated the project's git repository and also made some changes to the Docker containers.
The problem is that the main page and some other URLs respond fine, but when I try to log in to the admin, the django container shuts down and I don't get any error from the browser, the console, or the logs.
Example:
These requests are fine:
GET / 200 OK
POST / 403 Forbidden
GET /api/auth/ 405 Method Not Allowed
But when I request these, the container closes without showing any message (proyect_django_1 exited with code 0):
GET /admin no answer
POST /api/auth/ no answer
My docker-compose
version: '3'
services:
  db:
    build: docker/postgres
    volumes:
      - ./docker/data/postgres:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_DB=project
  redis:
    image: redis:3.2-alpine
    volumes:
      - ./docker/data/redis:/data
  rabbit:
    image: rabbitmq:3-management-alpine
    environment:
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=admin
  django:
    build:
      context: .
      args:
        - REQUIREMENTS=development.txt
    command: python3.6 manage.py runserver 0.0.0.0:8008
    volumes:
      - ./:/code
    working_dir: /code/project
    env_file: ./docker/DevelopmentEnv
    ports:
      - "8008:8008"
    links:
      - db
      - rabbit
      - redis
    depends_on:
      - db
  celeryworker:
    build:
      context: .
      args:
        - REQUIREMENTS=development.txt
    working_dir: /code/project
    volumes:
      - ./:/code
    env_file: ./docker/DevelopmentEnv
    links:
      - db
      - rabbit
    command: celery -A config worker -l INFO -Q celery
  frontend:
    image: node:8.4-alpine
    volumes:
      - ./:/code
    working_dir: /code/frontend
    command: ash -c "yarn install --no-bin-links && yarn run build"
  socketio:
    image: node:8.4-alpine
    volumes:
      - ./:/code
    working_dir: /code/sockets
    command: ash -c "yarn install --no-bin-links && yarn start"
    ports:
      - "3000:3000"
    links:
      - redis
      - django
    depends_on:
      - redis
My Dockerfile
FROM python:3.6.2-alpine3.6
ARG REQUIREMENTS
RUN apk update
RUN apk add postgresql-dev postgresql-client
RUN apk add libffi-dev gcc
RUN apk add musl-dev zlib-dev jpeg-dev
RUN apk add --no-cache --virtual .build-deps-testing \
--repository http://dl-cdn.alpinelinux.org/alpine/edge/testing \
gdal-dev
RUN mkdir /code
ADD ./ /code/
WORKDIR /code
RUN pip3.6 install -r requirements/$REQUIREMENTS
WORKDIR /code/project
You could add restart: always to your django service definition. This will start a new django container if the previous one exits for any reason.
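A minimal sketch of that change, showing only the relevant key:
django:
  ...
  restart: always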
You should be getting some logs about why the process is exiting. Try running docker inspect <container-name> to see if there are any clues about why the process exits. There is probably a bug in your Python code triggered by some URLs, and it causes the process to exit.
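A short debugging sketch using standard Docker CLI commands (the container name is taken from the exited with code 0 message above):
docker-compose logs --tail=100 django
docker inspect --format '{{.State.ExitCode}} {{.State.Error}}' proyect_django_1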