I'm new to Docker.
I have a docker-compose.yml file like this:
version: '3.7'

services:
  nginx_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/nginx/Dockerfile
    restart: always
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
    ports:
      - 4000:80
    depends_on:
      - web_sarahmaso
    networks:
      spa_network_sarahmaso:

  web_sarahmaso:
    build:
      context: .
      dockerfile: ./compose/production/django/Dockerfile
    restart: always
    command: /start
    volumes:
      - staticfiles_sarahmaso:/app/static
      - mediafiles_sarahmaso:/app/media
      - sqlite_sarahmaso:/app/db
    env_file:
      - ./env/prod-sample
    networks:
      spa_network_sarahmaso:

networks:
  spa_network_sarahmaso:

volumes:
  sqlite_sarahmaso:
  staticfiles_sarahmaso:
  mediafiles_sarahmaso:
I'm deploying this on a server with a shell script running these commands:
mkdir -p /app
rm -rf /app/* && tar -xf /tmp/project.tar -C /app
sudo docker-compose -f /app/docker-compose.yml build
sudo supervisorctl restart react-wagtail-project
sudo ufw allow port
However, supervisorctl doesn't run correctly, even though the console tells me "Successfully built dc10bd26b175" after the docker build.
And when I run docker-compose ps or docker ps -a, I don't see any containers.
docker-compose ps asks me for a docker-compose.yml file, and if I do docker-compose ps -f path_to/docker-compose.yml the console just shows me the help text:
List containers.
Usage: ps [options] [SERVICE...]
Options:
-q, --quiet Only display IDs
--services Display services
--filter KEY=VAL Filter services by a property
-a, --all Show all stopped containers (including those created by the run command)
How come I don't see my containers?
It seems your containers were never started.
With your line sudo docker-compose -f /app/docker-compose.yml build you are only building your images, as the console message tells you.
I do not know exactly what sudo supervisorctl restart react-wagtail-project does, but to me it does not look like a command to start your newly built containers.
Try to explicitly start your containers by adding
sudo docker-compose -f /app/docker-compose.yml up or
sudo docker-compose -f /app/docker-compose.yml up -d (detached) to your script.
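For example, the deploy script could end like this (a minimal sketch; whether the supervisor step is still needed depends on your setup):

mkdir -p /app
rm -rf /app/* && tar -xf /tmp/project.tar -C /app
sudo docker-compose -f /app/docker-compose.yml build
sudo docker-compose -f /app/docker-compose.yml up -d   # actually start (or recreate) the containers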
Related
I'm trying to Dockerize my local Django/MySQL setup. I have this directory and file structure ...
apache
docker-compose.yml
web
- manage.py
- venv
- requirements.txt
- ...
Below is the docker-compose.yml file I'm using ...
version: '3'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - mysql:mysql
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    #env_file: web/venv
    environment:
      DEBUG: 'true'
    command: [ "python", "./web/manage.py runserver 0.0.0.0:8000" ]
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      # So you don't have to use root, but you can if you like
      MYSQL_USER: 'chicommons'
      # You can use whatever password you like
      MYSQL_PASSWORD: 'password'
      # Password for root access
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3406"
    expose:
      # Opens port 3406 on the container
      - '3406'
    volumes:
      - my-db:/var/lib/mysql
volumes:
  web-django:
  web-static:
  my-db:
However when I run
docker-compose up
I get errors like the ones below:
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
maps_web_1 exited with code 2
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
maps_web_1 exited with code 2
Is there another way I'm supposed to be referencing the manage.py file?
Edit: Added info requested in comments ...
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . .
As others suggested, this is most probably because you are running manage.py runserver from the wrong directory, or something very similar.
You are not using the WORKDIR directive in your Dockerfile at all. It is much safer to use it. Change your Dockerfile and docker-compose.yml files as below, and your problem should be solved.
Dockerfile
FROM python:3.7-slim
RUN apt-get update && apt-get install
RUN apt-get install -y libmariadb-dev-compat libmariadb-dev
RUN apt-get update \
&& apt-get install -y --no-install-recommends gcc \
&& rm -rf /var/lib/apt/lists/*
RUN python -m pip install --upgrade pip
RUN mkdir -p /app/
WORKDIR /app/
COPY requirements.txt requirements.txt
RUN python -m pip install -r requirements.txt
COPY . /app/
docker-compose.yml
version: '3'
services:
  web:
    restart: always
    build: ./web
    expose:
      - "8000"
    links:
      - mysql:mysql
    volumes:
      - web-django:/usr/src/app
      - web-static:/usr/src/app/static
    #env_file: web/venv
    environment:
      DEBUG: 'true'
    command: [ "python", "manage.py", "runserver", "0.0.0.0:8000" ]
...
Notice
You should be able to fix the problem by simply deleting web from the command for running the server. That's because when you build the Dockerfile, you are already inside the web directory, so when you do COPY . . you are copying the contents of the web directory, not the web directory itself. Your file structure inside the Docker image should look something like this:
- root
- home
- var
- ...
- manage.py
- venv
- requirements.txt
- ...
In the command: directive, if you're using the array syntax, you're responsible for breaking the command up into words. As you've shown it, you're running the equivalent of python "manage.py runserver 0.0.0.0:8000" at the shell prompt, and Python dutifully treats the entire command and options, spaces included, as the filename of a script to be run. If you break this up into single words it will work better:
command: ["python", "manage.py", "runserver", "0.0.0.0:8000"]
But there's not really a reason to specify this in docker-compose.yml at all. This is the default command you'd want to run to launch the container no matter how you ran it, so it should be the default command in your image's Dockerfile:
...
EXPOSE 8000
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
You don't need links: at all on modern Docker (Docker Compose automatically sets up inter-container networking for you). You definitely don't want to mount named volumes over your application code: this hides what's in your image, and (since you've told Docker this is critical user data) it forces Docker to use an old version of your application if you try to update your image.
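Because of that built-in networking, the Django settings can reach the database simply by the Compose service name. A hedged sketch (the settings module and credentials are assumed from the compose file above, not from the OP's project):

# settings.py - hypothetical snippet; values mirror the compose environment above
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'maps_data',
        'USER': 'chicommons',
        'PASSWORD': 'password',
        'HOST': 'mysql',   # the Compose service name doubles as the hostname
        'PORT': '3306',
    }
}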
That leaves you with a simpler docker-compose.yml file:
version: '3'
services:
  web:
    restart: always
    build: ./web
    ports: # to access the container from outside
      - "8000:8000"
    environment:
      DEBUG: 'true'
  mysql:
    restart: always
    image: mysql:5.7
    environment:
      MYSQL_DATABASE: 'maps_data'
      MYSQL_USER: 'chicommons'
      MYSQL_PASSWORD: 'password'
      MYSQL_ROOT_PASSWORD: 'password'
    ports:
      - "3406:3306" # second port is always the container-internal port
    volumes:
      - my-db:/var/lib/mysql
volumes:
  my-db:
Let us try to debug this error:
maps_web_1 exited with code 2
web_1 | python: can't open file './web/manage.py runserver 0.0.0.0:8000': [Errno 2] No such file or directory
It looks like either the code was not copied to the container (named 'web') or the command is triggered from the root/home directory, where manage.py is not accessible.
1. Is the code available in the container? How to check?
Usually, Docker will just execute the commands in the container and exit, unless there is an unfinished running task (like a server running in the foreground).
To keep the container from exiting and enable debugging, let us add a long-running command, so that you can log in to the container and see if the code is present.
command: tail -f /dev/null # trick to keep the container alive for debug mode
docker-compose.yml
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - mysql:mysql
  volumes:
    - web-django:/usr/src/app
    - web-static:/usr/src/app/static
  #env_file: web/venv
  environment:
    DEBUG: 'true'
  command: tail -f /dev/null # trick to keep the container alive for debug mode
Log in to the 'web' container: from the command line, run docker exec -it web bash.
Check whether the project files are present; now you can run the python manage.py runserver 8000 command manually, as in the sketch below. If it works, we can be sure the server can run in the container, and we can move on to analysing the initial working directory.
If the code is present, check why manage.py is not found. Is the working directory set? That is, does the container know which base directory to run the command from?
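A hypothetical debug session (the actual container name may differ; check docker ps):

docker exec -it web bash                    # open a shell in the running container
pwd                                         # inside: which directory are we in?
ls                                          # inside: is manage.py actually here?
python manage.py runserver 0.0.0.0:8000     # inside: does the server start manually?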
Specify the working directory in the Dockerfile, before you copy the project files into the container.
Dockerfile in web directory
ENV PYTHONUNBUFFERED 1
ARG PROJ_DIR=/usr/project/web
RUN mkdir -p $PROJ_DIR
WORKDIR $PROJ_DIR
COPY . $PROJ_DIR # copy into the project directory (WORKDIR is a directive, not a variable)
docker-compose.yml
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - mysql:mysql
  volumes:
    - web-django:/usr/src/app
    - web-static:/usr/src/app/static
  #env_file: web/venv
  environment:
    DEBUG: 'true'
  command: python manage.py runserver 0.0.0.0:8000 # note: this command runs from the WORKDIR set in the Dockerfile
I think this should resolve the issue or help you figure out the problem.
I have a Django project and I've been struggling to automate static file generation. My project structure has a docker-compose.yml file and a Dockerfile for every container image.
The docker-compose.yml file for my project:
version: '3'
services:
  web:
    build: ./dispenser
    command: gunicorn -c gunicorn.conf.py dispenser.wsgi
    volumes:
      - ./dispenser:/dispenser
    ports:
      - "8000:8000"
    restart: on-failure
  nginx:
    build: ./nginx/
    depends_on:
      - web
    command: nginx -g 'daemon off;'
    ports:
      - "80:80"
    volumes:
      - ./dispenser/staticfiles:/var/www/static
    restart: on-failure
The Dockerfile for the Django project I'm using:
FROM python:3.7.4
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    WEBAPP_DIR=/dispenser \
    GUNICORN_LOG_DIR=/var/log/gunicorn
WORKDIR $WEBAPP_DIR
RUN mkdir -p $GUNICORN_LOG_DIR \
    mkdir -p $WEBAPP_DIR
ADD pc-requirements.txt $WEBAPP_DIR
RUN pip install -r pc-requirements.txt
ADD . $WEBAPP_DIR
RUN python manage.py makemigrations && \
    python manage.py migrate && \
    python manage.py collectstatic --no-input
After several hours of testing and research I've found that running the collectstatic and migration commands from the Dockerfile doesn't produce the same result as doing it via the command argument in the docker-compose.yml file.
If I do it as shown above, when the collectstatic command runs, only the "staticfiles" folder is generated (with no files inside it). Database migrations weren't applied either (note that I'm using the default .sqlite3 db), even though the stdout when creating the container said that migrations were applied and static files generated.
The only workaround I found to make it work was executing bash from the container and then running those commands from there.
But later I found that if I specify those commands in the docker-compose.yml, everything works as expected. That leaves the files as follows:
docker-compose.yml
version: '3'
services:
  web:
    build: ./dispenser
    command: bash -c "python manage.py makemigrations && python manage.py migrate && python manage.py collectstatic --no-input && gunicorn -c gunicorn.conf.py dispenser.wsgi"
    volumes:
      - ./dispenser:/dispenser
    ports:
      - "8000:8000"
    restart: on-failure
  nginx:
    build: ./nginx/
    depends_on:
      - web
    command: nginx -g 'daemon off;'
    ports:
      - "80:80"
    volumes:
      - ./dispenser/staticfiles:/var/www/static
    restart: on-failure
Dockerfile
FROM python:3.7.4
ENV PYTHONUNBUFFERED=1 \
    PYTHONDONTWRITEBYTECODE=1 \
    WEBAPP_DIR=/dispenser \
    GUNICORN_LOG_DIR=/var/log/gunicorn
WORKDIR $WEBAPP_DIR
RUN mkdir -p $GUNICORN_LOG_DIR \
    mkdir -p $WEBAPP_DIR
ADD pc-requirements.txt $WEBAPP_DIR
RUN pip install -r pc-requirements.txt
ADD . $WEBAPP_DIR
Can anyone explain to me why this occurs? And is there another way of achieving what I intend without having to specify the commands in the docker-compose.yml file?
When you mount a host directory into a container, the contents of the host directory shadow the contents of that directory in the container.
volumes:
  - ./dispenser:/dispenser
So when you run your container, the initial contents of /dispenser inside the container will be the contents of ./dispenser on the host machine. Any content already at /dispenser inside the container is shadowed, so the content generated at image build time by the RUN instructions in your Dockerfile is lost.
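You can observe the shadowing directly (a hypothetical one-liner; the image name is a placeholder):

docker run --rm -v "$PWD/dispenser:/dispenser" yourimage ls /dispenser   # lists the host directory, not what the build wrote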
In your second approach, using command in the compose file, you mount the volume first and only then generate the content, and hence it works.
The command instruction in the compose file overrides the default command of the Docker image, which can be set using the CMD instruction in the Dockerfile. Since you want to use the first approach of running your Python commands at image build time using RUN instructions, you can RUN them in a different directory (say /tmp/dispenser) and, as part of the command in compose (or CMD in the Dockerfile), move the generated content from /tmp/dispenser to /dispenser.
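A minimal sketch of that workaround, assuming the Dockerfile used WORKDIR /tmp/dispenser and ran collectstatic there (paths are illustrative, not from the OP's project):

command: bash -c "cp -r /tmp/dispenser/. /dispenser/ && gunicorn -c gunicorn.conf.py dispenser.wsgi"   # copy build-time artifacts into the mounted dir, then start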
I'm trying to create a Docker image with my Django application, but unfortunately I'm running into trouble trying to run my entrypoint script.
Docker exits with error code 127 and displays the following message:
/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
You'll find the respective configuration files below:
Dockerfile
FROM python:2.7
ENV PYTHONUNBUFFERED 1
RUN mkdir -p /web/src
ADD . /web/src
WORKDIR /web/src
RUN pip install -U pip
RUN pip install -r requirements.txt -U
RUN chmod u+x docker-entrypoint.sh
ENTRYPOINT ["/bin/bash", "docker-entrypoint.sh"]
docker-entrypoint.sh
#!/bin/bash
python manage.py migrate
python manage.py collectstatic --noinput
touch /srv/logs/gunicorn.log
touch /srv/logs/access.log
tail -n 0 -f /srv/logs/*.log &
echo Starting Gunicorn...
exec gunicorn config.wsgi:application \
    --name django_server \
    --bind 0.0.0.0:8000 \
    --workers 3 \
    --log-level=info \
    --log-file=/srv/logs/gunicorn.log \
    --access-logfile=/srv/logs/access.log \
    "$@"
docker-compose.yml
version: '2.0'
services:
  db:
    container_name: db_server
    image: postgres
  web:
    container_name: django_server
    build: .
    volumes:
      - .:/web/src
    environment:
      - SECRET_KEY=k3jghf1jk%$JH^1GJH5#YUTR#!MBMB<5=7DXXG)JHSX=
      - PGDATABASE=postgres
      - PGUSER=postgres
      - PGPASSWORD=''
      - PGHOST=db
      - DJANGO_ENV=development
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - "8000:8000"
    links:
      - db
After reproducing the problem locally: docker build . builds the image successfully, but when trying to start it with docker-compose up I got the error exec: gunicorn: not found, as the OP mentioned above. Based on this thread I could solve the problem by running docker-compose build. So, to sum up, the following 3 commands should solve the problem:
docker build .
docker-compose build
docker-compose up
Although this solves the problem for me, I'm still confused here: why do I need to run build twice? Something must be wrong somewhere, because as far as I understand, docker-compose build should do the same work as docker build .. (A plausible explanation: docker build . leaves the image untagged, so docker-compose up keeps using the stale image it built earlier, whereas docker-compose build rebuilds and retags the image Compose actually runs.)
I am trying to mount my code directory using a Docker volume, but I am unable to do so.
Here's the relevant section of my docker-compose file.
web:
  build: ./web
  dockerfile: Dockerfile
  links:
    - db:db
    - elasticsearch:elasticsearch
  volumes:
    - ./web:/usr/src/app
The web folder has a Dockerfile with the following instructions:
FROM python:2.7
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY requirements.txt /usr/src/app/
RUN pip install --no-cache-dir -r requirements.txt
docker-compose up works without any issues, but I don't see any volumes after startup:
root#test-new:/home/django/test# docker inspect test_web | grep -i volume
"Volumes": null,
"Volumes": null,
Here's the rest of my stack, if that is relevant.
elasticsearch
nginx
postgres db
The following syntax works.
volumes:
  - ${PWD}/web:/usr/src/app

web:
  build: ./web
  dockerfile: Dockerfile-aside
  links:
    - db:db
    - elasticsearch:elasticsearch
  volumes:
    - ${PWD}/web:/usr/src/app
  env_file: .env
One of the reasons the earlier migrations were not being copied is that I was running the code inside a new container, instead of the container that is currently up.
Instead of
docker-compose run --rm web /bin/bash
I tried
docker exec -it my_container_name /bin/bash
and then ran the migrate command; the newly generated database migrations are then present in the source directory. It would be ideal to use docker-compose exec, but the version I am currently using is below 1.7, so I am using the above as an interim solution in the meantime.
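For reference, on Compose 1.7 or newer the equivalent would presumably be:

docker-compose exec web /bin/bash   # exec into the already-running 'web' service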
https://github.com/d11wtq/dockerpty/pull/48
I'm using a Docker container for Django development, and the container runs Gunicorn with Nginx. I'd like code changes to auto-load, but the only way I can get them to load is by rebuilding with docker-compose (docker-compose build). The problem with "build" is that it re-runs all my pip installs.
I'm using the Gunicorn --reload flag, which is apparently supposed to do what I want. Here are my Docker config files:
## Dockerfile:
FROM python:3.4.3
RUN mkdir /code
WORKDIR /code
ADD . /code/
RUN pip install -r /code/requirements/docker.txt
## docker-compose.yml:
web:
  restart: always
  build: .
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn myapp.wsgi:application -w 2 -b :8000 --reload
nginx:
  restart: always
  build: ./config/nginx
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  image: postgres:latest
  volumes:
    - /var/lib/postgresql
  ports:
    - "5432:5432"
I've tried some of the other Docker commands (docker-compose restart, docker-compose up), but the code won't refresh.
What am I missing?
Thanks to kikicarbonell, I looked into having a volume for my code, and after looking at the Docker Compose recommended Django setup, I added volumes: - .:/code to my web container in docker-compose.yml, and now any code changes I make automatically apply.
## docker-compose.yml:
web:
  restart: always
  build: .
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
    - .:/code
  env_file: .env
  command: /usr/local/bin/gunicorn myapp.wsgi:application -w 2 -b :8000 --reload
Update: for a thorough example of using Gunicorn and Django with Docker, check out this example project from Rackspace, which also shows how to use docker-machine to launch the setup on remote servers like Rackspace Cloud.
Caveat: currently, this method does not work when your code is stored locally and the Docker host is remote (e.g., on a cloud provider like Digital Ocean or Rackspace). This also applies to virtual machines if your local file system is not mounted on the VM. Note that there are separate volume drivers (e.g., flocker), and there might be something out there to address this need. For now, the "fix" is to rsync/scp your files up to a directory on the remote Docker host; the --reload flag will then auto-reload Gunicorn after any scp/rsync. Update: if pushing code to a remote Docker host, I find it far easier to just rebuild the container (e.g., docker-compose build web && docker-compose up -d), though this can be slower than the rsync approach if your src folder is large.
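A hypothetical sync step for that workflow (user, host, and paths are placeholders):

rsync -avz ./myapp/ user@remote-docker-host:/srv/app/myapp/   # push local changes; gunicorn --reload then picks them up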
You have another problem: Docker caches each layer that it builds, so you shouldn't have to re-run pip install every time!
ADD . /code/
RUN pip install -r /code/requirements/docker.txt
This is your problem: Docker checks every ADD statement to see if any files have changed, and invalidates the cache for it and every later step if they have. The correct way to do this is...
ADD ./requirements/docker.txt /code/requirements/
RUN pip install -r /code/requirements/docker.txt
ADD . /code/
Which will only invalidate your pip install line if your requirements file changes!
It seems like you need to match the WORKDIR/COPY paths in your Dockerfile with the volume paths in your docker-compose.yml. Here is an example:
Dockerfile
WORKDIR /app
COPY . /app
docker-compose.yml
app:
  # ... other commands ...
  volumes:
    - ./app:/app
I faced a very similar problem trying to configure auto-reload of the project with a slightly different setup. I set up volumes but it did not work anyway. After an hour of googling and thorough examination of my code, I figured out that the volume paths in the Dockerfile and docker-compose.yml simply did not match. Make sure they are the same.
My Dockerfile
FROM python:3.6.9-alpine3.10

COPY ./requirements/local.txt /app/requirements/local.txt

RUN set -ex \
    && apk add --no-cache --virtual .build-deps postgresql-dev git gcc libgcc musl-dev jpeg-dev zlib-dev build-base \
    && python -m venv /env \
    && /env/bin/pip install --upgrade pip \
    && /env/bin/pip install --no-cache-dir -r /app/requirements/local.txt \
    && runDeps="$(scanelf --needed --nobanner --recursive /env \
        | awk '{ gsub(/,/, "\nso:", $2); print "so:" $2 }' \
        | sort -u \
        | xargs -r apk info --installed \
        | sort -u)" \
    && apk add --virtual rundeps $runDeps \
    && apk del .build-deps

### Here is the path to the project
COPY . /app

WORKDIR /app/project

ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

EXPOSE 8088
My docker-compose.yml
version: '3'
services:
  web:
    build:
      context: ../..
      dockerfile: compose/local/Dockerfile
    restart: on-failure
    command: python manage.py runserver 0.0.0.0:8088 --settings=project.settings.local
    volumes:
      # - .:/var/www/app # messed-up path
      - .:/app # correct path
    env_file:
      - ../../.env.local
    depends_on:
      - db
    ports:
      - "8000:8000"
Since I never found a desirable solution, consider this interesting hack. Posting it here, I wanted to see if anyone has similar/good/bad experiences with this "workaround".
To make code reload locally for development, I simply created a view that immediately calls exit(). The exit will crash Django, and a reload will occur that picks up the code changes. The reboot takes a fraction of a second and can be triggered via a tab in the browser, a requests.get call, or any other similar call. The reload is not automatic, but it does skip any Docker lag such as a restart.
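A minimal sketch of such a view (the names and URL wiring are hypothetical, not from the original post):

# views.py - hitting this endpoint kills the worker; Gunicorn then boots a fresh one with the new code
def force_reload(request):
    exit()   # never returns

# urls.py (hypothetical): path('force-reload/', force_reload)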
When the exit is called you will see the PID increment (if tailing logs):
web | [2019-07-15 18:29:52 +0000] [22] [INFO] Worker exiting (pid: 22)
web | [2019-07-15 18:29:52 +0000] [24] [INFO] Booting worker with pid: 24
I hope this helps others and/or gets feedback on this approach.
If you use docker-compose:
Dockerfile: when you build the image from the Dockerfile, you need to set a directory to hold your code (in my case /api/):
WORKDIR /api/ # important
COPY . . # important
docker-compose: your docker-compose file has your app service using the Django image just built from that Dockerfile; now you need to add a volume that targets the same WORKDIR you used in the Dockerfile:
volumes:
  - .:/api # important
And that's all.