I have an application written in Django and I am trying to run it in Docker on a DigitalOcean droplet. Currently I have two files.
Can anybody advise how to get rid of the docker-compose.yml file and integrate all of its commands into the Dockerfile?
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r requirements.txt
RUN python /code/jk/manage.py collectstatic --noinput
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python jk/manage.py runserver 0.0.0.0:8081
    volumes:
      - .:/code
    ports:
      - "8081:8081"
I run my application and Docker image as follows:
docker-compose run web python jk/manage.py migrate
docker-compose up
output:
Starting workspace_web_1 ...
Starting workspace_web_1 ... done
Attaching to workspace_web_1
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
web_1 | December 02, 2017 - 09:20:51
web_1 | Django version 1.11.3, using settings 'jk.settings'
web_1 | Starting development server at http://0.0.0.0:8081/
web_1 | Quit the server with CONTROL-C.
...
OK, so I took the following approach:
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r requirements.txt
RUN python /code/jk/manage.py collectstatic --noinput
Then I ran:
docker build -t "django:v1" .
Then docker images -a shows:
docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
django v1 b3dec6aaf9b9 5 minutes ago 949MB
<none> <none> 55370397f7f2 5 minutes ago 948MB
<none> <none> e7eba7113203 7 minutes ago 693MB
<none> <none> dc3d7705c45a 7 minutes ago 691MB
<none> <none> 12825382746d 7 minutes ago 691MB
<none> <none> 2304087e8b82 7 minutes ago 691MB
python 3 79e1dc9af1c1 3 weeks ago 691MB
And finally I ran:
cd /opt/workspace
docker run -d -v /opt/workspace:/code -p 8081:8081 django:v1 python jk/manage.py runserver 0.0.0.0:8081
Two simple questions:
Do I get it right that each <none> image listed was created while running the docker build -t "django:v1" . command to build up my image? Does that mean it consumes something like [(691 x 4) + (693 x 1) + 948 + 949] MB of disk space?
Is it better to use gunicorn or a wsgi program to run Django in production?
And responses from @vmonteco:
I think so, but a way to reduce the size taken by your images is to reduce their number by using a single RUN directive for several chained commands in your Dockerfile (RUN cmd1 && cmd2 rather than RUN cmd1 then RUN cmd2).
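For instance, a sketch applied to the Dockerfile above that collapses the two RUN steps into a single layer:
RUN pip install -r requirements.txt && \
    python /code/jk/manage.py collectstatic --noinput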
It's up to you to form your own opinion. I personally use uWSGI, but there are other choices besides gunicorn/uwsgi (not just "wsgi", which is the name of an interface specification, not a program). Have fun finding your preferred one! :)
TL;DR
You can pass some information to your Dockerfile (the command to run), but that wouldn't be equivalent, and you can't do that with all of the docker-compose.yml file's content.
You can replace your docker-compose.yml file with command lines, though (docker-compose exists precisely to replace them).
In your case you can add the command to run to your Dockerfile as a default command (which isn't quite the same as passing it to containers you start at runtime):
CMD ["python", "jk/manage.py", "runserver", "0.0.0.0:8081"]
or pass this command directly on the command line along with the volume and port, which should give something like:
docker run -d -v "$(pwd)":/code -p 8081:8081 yourimage python jk/manage.py runserver 0.0.0.0:8081
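Putting it all together, a self-contained Dockerfile for this project might look like the following (a sketch based on the files above; it assumes requirements.txt sits at the project root and that the dev server is acceptable for now):
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r requirements.txt
RUN python /code/jk/manage.py collectstatic --noinput
EXPOSE 8081
CMD ["python", "jk/manage.py", "runserver", "0.0.0.0:8081"]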
BUT
Keep in mind that Dockerfiles and docker-compose serve two entirely different purposes.
Dockerfiles are meant for image building; they define the steps to build your images.
docker-compose is a tool to start and orchestrate containers to build your applications (you can add some information, like the build context path or the names of the images you need, but not the Dockerfile content itself).
So asking to "convert a docker-compose.yml file into a Dockerfile" isn't really relevant.
That's more about converting a docker-compose.yml file into one (or several) command line(s) to start containers by hand.
The purpose of docker-compose is precisely to get rid of these command lines and make things simpler (it automates them).
Also:
From the manage.py documentation:
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that's how it's gonna stay.)
Django's runserver included in the manage.py tool isn't meant for production.
You might want to consider using a WSGI server behind a proxy.
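As a hedged sketch (assuming Django's default layout, i.e. a WSGI module at jk/jk/wsgi.py, which matches the 'jk.settings' seen in the logs above), the Dockerfile could install and default to gunicorn instead:
RUN pip install gunicorn
WORKDIR /code/jk
CMD ["gunicorn", "--bind", "0.0.0.0:8081", "--workers", "4", "jk.wsgi:application"]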
Related
I have a docker-compose.yml defined as follows with two services (the database and the app):
version: '3'
services:
  db:
    build: .
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
The reason I have build: . in both services is that you can't do docker-compose push unless every service has a build entry. However, this means that both services refer to the same Dockerfile, which builds the entire app. So after I run docker-compose build and look at the available images, I see this:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mellon_app latest XXXXXXXXXXXX 27 seconds ago 1.14GB
postgres latest XXXXXXXXXXXX 27 seconds ago 1.14GB
The IMAGE ID is exactly the same for both images, and the size is exactly the same for both images. This makes me think I've definitely done some unnecessary duplication, as they're both just running the same Dockerfile. I don't want to take up any unnecessary space; how do I do this properly?
This is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN mkdir /usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend
WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build
WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt
EXPOSE 8000
CMD python3 manage.py collectstatic && \
python3 manage.py makemigrations && \
python3 manage.py migrate && \
gunicorn mellon.wsgi --bind 0.0.0.0:8000
What is the proper way to push the images to my Docker hub registry without this duplication?
The proper way is to:
1. docker build -f {path-to-dockerfile} -t {desired-docker-image-name} .
2. docker tag {desired-docker-image-name}:latest {desired-remote-image-name}:latest (or not latest but whatever you want, like a datetime in int format)
3. docker push {desired-remote-image-name}:latest
and cleanup:
4. docker rmi {desired-docker-image-name}:latest {desired-remote-image-name}:latest
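Applied to the question's app image (using a hypothetical Docker Hub account name, myuser), that sequence might look like:
docker build -f Dockerfile -t mellon_app .
docker tag mellon_app:latest myuser/mellon_app:latest
docker push myuser/mellon_app:latest
docker rmi mellon_app:latest myuser/mellon_app:latest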
The whole purpose of docker-compose is to help your local development, making it easier to start several containers and combine them in a local docker-compose network, etc.
I have been running an app without Docker and have just added a Dockerfile and docker-compose.
The issue I am having is that after I successfully build the app, runserver produces the error below when I run either that or migrate.
➜ app git:(master) sudo docker-compose run app sh -c "python manage.py runserver"
Error loading shared library libpython3.8.so.1.0: No such file or directory (needed by /usr/local/bin/python)
Error relocating /usr/local/bin/python: Py_BytesMain: symbol not found
failed to resize tty, using default size
%
➜ app git:(master) sudo docker-compose run app sh -c "python manage.py migrate"
Error loading shared library libpython3.8.so.1.0: No such file or directory (needed by /usr/local/bin/python)
Error relocating /usr/local/bin/python: Py_BytesMain: symbol not found
Dockerfile
FROM python:3.8-alpine
MAINTAINER realize-sec
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /requirements.txt
RUN pip install -r requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
RUN adduser -D user
USER user
docker-compose.yml
version: "3"
services:
app:
build:
context: ""
ports:
- "8000:8000"
volumes:
- ./app:/app
command: >
sh -c "python manage.py runserver 0.0.0.0:8000"
What am I doing wrong that is causing this?
When I run without Docker using python3 manage.py runserver, it works fine.
Because I haven't tested the build, I don't know whether any of these things will ultimately help you build your containers, but here are some observations to hopefully set you on the right path.
Your context is an empty string; it is usually a dot (.).
You typically finish the Dockerfile with the following command:
CMD [ "python3", "manage.py", "runserver", "0.0.0.0:8000" ]
So you can remove that from your compose file.
Other than that, on a more general note: although Alpine images are small, they are prone to breaking because of the additional dependencies and packages that you need to add or remove. You're probably better off going with the slim variant overall. The initial build will take a bit longer, but it will be more manageable. A slim-based version of your Dockerfile is sketched below.
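Here is that sketch (an assumption on my part, untested; note that Debian's adduser flags differ from Alpine's):
FROM python:3.8-slim
ENV PYTHONUNBUFFERED 1
COPY requirements.txt /requirements.txt
RUN pip install -r requirements.txt
RUN mkdir /app
WORKDIR /app
COPY ./app /app
# Debian uses --disabled-password where Alpine's adduser uses -D
RUN adduser --disabled-password --gecos "" user
USER user
CMD ["python3", "manage.py", "runserver", "0.0.0.0:8000"]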
Also, if you're running a modern version of Docker on your machine, you can move the syntax version of the compose file to 3.7 or 3.8, depending on your version of Docker.
I've been trying to find the best method to handle setting up a Django project with Docker. But I'm somewhat confused as to how CMD and ENTRYPOINT function in relation to the compose commands.
When I first set the project up, I need to run createsuperuser and migrate for the database. I've tried using a script run as the entrypoint in my Dockerfile, but it didn't seem to work consistently. I switched to the configuration shown below, where I overwrite the Dockerfile CMD with a command in my compose file that runs makemigrations, migrate, and createsuperuser.
The issue I'm having is exactly how to set it up so that it does what I need. From what I understand, if I set a command (shown commented out in the code) in my compose file, it should overwrite the CMD in my Dockerfile.
What I'm unsure of is whether I need to use ENTRYPOINT or CMD in my Dockerfile to achieve this. Since CMD is overwritten by my compose file and ENTRYPOINT isn't, wouldn't it cause problems if it were set as ENTRYPOINT, since it would try to run gunicorn a second time after the compose command is executed?
Would there be any drawbacks in this approach compared to using an entrypoint script?
Lastly, is there a general best practice approach to handling Django's setup commands when deploying a dockerized Django application? Or am I already doing what is typically done?
Here is my Dockerfile:
FROM python:3.6
LABEL maintainer x@x.com
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=site.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY site site
COPY templates templates
COPY logs logs
COPY scripts scripts
EXPOSE 8001
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=site.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "site.wsgi:application"]
And my compose file (omitted the nginx and postgres sections as they are unnecessary to illustrate the issue):
version: "3.2"
services:
app:
restart: always
build:
context: .
dockerfile: Dockerfile.prodtest
args:
requirements: requirements/production.txt
#command: bash -c "python manage.py makemigrations && python manage.py migrate && gunicorn --config gunicorn.conf --log-config loggigng.conf -e DJANGO_SETTINGS_MODULE=site.settings.production_test -W 4 -b 0.0.0.0:8000 site.wsgi"
container_name: dj01
environment:
- DJANGO_SETTINGS_MODULE=site.settings.production_test
- PYTHONDONTWRITEBYTECODE=1
volumes:
- ./:/app
- /static:/static
- /media:/media
networks:
- main
depends_on:
- db
I have the following entrypoint script that will attempt to do the migrate automatically on my Django project:
#!/bin/bash -x
python manage.py migrate --noinput || exit 1
exec "$@"
The only change that would need to happen to your Dockerfile is to ADD the script and specify the ENTRYPOINT. I usually put these lines directly above the CMD instruction:
ADD docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod a+x /docker-entrypoint.sh
ENTRYPOINT ["/docker-entrypoint.sh"]
(Please note that the chmod is only necessary if the docker-entrypoint.sh file in your build environment is not already executable.)
I add || exit 1 so that the script will stop the container should the migrate fail for any reason. When starting your project via docker-compose, it's possible that the database may not be 100% ready to accept connections when this migrate command runs. Between the exit-on-error approach and the restart: always that you already have in your docker-compose.yml, this will handle that race condition properly.
Note that the -x option I specify for bash echoes out what bash is doing, which I find helpful for debugging my scripts. It can be omitted if you want less verbosity in the container logs.
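If the exit-and-restart cycle gets noisy, an alternative sketch (my own variant under the same assumptions, not part of the answer above) is to retry inside the entrypoint until the database accepts connections:
#!/bin/bash -x
# Retry the migrate a few times while the database comes up, then give up.
attempts=0
until python manage.py migrate --noinput; do
    attempts=$((attempts + 1))
    [ "$attempts" -ge 5 ] && exit 1
    sleep 2
done
exec "$@"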
Dockerfile:
...
ENTRYPOINT ["/entrypoint.sh"]
CMD ["start"]
entrypoint.sh is executed every time, while CMD supplies the default argument(s) for it (docs).
entrypoint.sh:
#!/bin/bash
if [ "$1" = "start" ]
then
    /usr/local/bin/gunicorn --config config/gunicorn.conf \
        --log-config config/logging.conf ...
elif [ "$1" = "migrate" ]
then
    # whatever
    python manage.py migrate
fi
Now it is possible to do something like:
version: "3.2"
services:
app:
restart: always
build:
...
command: migrate # if needed
or
docker exec -it <container> /entrypoint.sh migrate
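With that entrypoint in place, the same argument can also be passed through docker-compose run (assuming the service is named app as above):
docker-compose run app migrate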
I had a working Django project that had built and deployed many images over the last few weeks. The Django project kept working fine when run as python manage.py runserver, and the Docker images kept being built fine (all successful builds). However, the Django app now doesn't deploy. What could be the issue, and where should I start looking for it? I've tried the logs, but they only say "Starting Django" without actually starting the service.
I use GitHub and have gone back to previous versions of the code, and none of them work now, even though the code is exactly the same. It also fails to deploy the Django server on AWS Elastic Beanstalk infrastructure, which is my ultimate goal with this code.
start.sh:
#!/bin/bash
echo Starting Django
cd TN_API
exec python ../manage.py runserver 0.0.0.0:8000
Dockerfile:
FROM python:2.7.13-slim
# Set the working directory to /TN_API
WORKDIR /TN_API
# Copy the current directory contents into the container at /TN_API
ADD . /TN_API
# COPY startup script into known file location in container
COPY start.sh /start.sh
# Install requirements
RUN pip install -r requirements.txt
# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
# CMD specifies the command to execute to start the server running.
CMD ["/start.sh"]
Commands:
sudo docker run -d tn-api
sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d28115a919f9 tn-api "/start.sh" 11 seconds ago Up 8 seconds 8000/tcp festive_darwin
sudo docker logs [container id]
Starting Django
(it doesn't print the usual:
Performing system checks...
System check identified no issues (0 silenced).
August 06, 2017 - 20:54:36
Django version 1.10.5, using settings 'TN_API.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.)
I changed several things, and although it doesn't work locally, it seems to work fine when deployed to AWS. I still don't get the feedback I used to get, as shown below, but that's OK. I can hit the server and it works. Thank you all for your help.
Performing system checks...

System check identified no issues (0 silenced).
August 06, 2017 - 20:54:36
Django version 1.10.5, using settings 'TN_API.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
It looks like the path is wrong for the manage.py script in /start.sh.
Your start.sh:
#!/bin/bash
echo Starting Django
cd TN_API
exec python ../manage.py runserver 0.0.0.0:8000
Seeing that you set WORKDIR to your project directory in the Dockerfile, the start.sh script is actually run from inside the project directory, which means it is actually doing this:
cd /TN_API # WORKDIR directive in Dockerfile
echo Starting Django # from the start.sh script
cd /TN_API/TN_API # looking for TN_API within your current pwd
exec python /TN_API/manage.py runserver 0.0.0.0:8000 # goes up a level (..) to look for manage.py
So it could be that your context for running runserver is off.
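One option (a sketch, if you want to keep start.sh) is simply to drop the cd and let WORKDIR provide the context:
#!/bin/bash
echo Starting Django
exec python manage.py runserver 0.0.0.0:8000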
Alternatively, you can avoid the path-jumping (and the script) entirely by rewriting your Dockerfile to include a CMD directive as follows:
FROM python:2.7.13-slim
# Set the working directory to /TN_API
WORKDIR /TN_API
# Copy the current directory contents into the container at /TN_API
ADD . /TN_API
# Install requirements
RUN pip install -r requirements.txt
# EXPOSE port 8000 to allow communication to/from server
EXPOSE 8000
# CMD specifies the command to execute to start the server running.
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
Here, using python manage.py runserver 0.0.0.0:8000 will work since you already set WORKDIR to your project directory, so you wouldn't necessarily need your start.sh script.
I've imported a tutorial project into PyCharm 5.1 Beta 2; it works fine when I run it from the command line with docker-compose up: https://docs.docker.com/compose/django/
Trying to set a remote Python interpreter is causing problems.
I've been trying to work out what the service name field is expecting (remote interpreter - Docker Compose window: http://i.stack.imgur.com/Vah7P.png).
My docker-compose.yml file is:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
When I try to enter web or db or anything at all that comes to mind, I get an error message: Service definition is expected to be a map
So what am I supposed to enter there?
EDIT 1 (new version: PyCharm 2016.1 release)
I have now updated to the latest version and am still having issues: IOError: [Errno 21] Is a directory
Sorry for not tagging all links; I have a new-user link limit.
The only viable way we found to work around this (PyCharm 2016.1) is setting up an SSH remote interpreter.
Add this to the main service Dockerfile:
RUN apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Then log into the Docker container like this (the password from the code sample is 'screencast'):
$ ssh root@192.168.99.100 -p 2000
Note: we are aware the IP and port might change depending on your Docker and Compose configs
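For that ssh command to work, container port 22 has to be published somewhere; a hypothetical fragment for the main service in docker-compose.yml might be:
services:
  web:
    build: .
    ports:
      - "2000:22"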
For PyCharm, just set up a remote SSH interpreter and you are done!
https://www.jetbrains.com/help/pycharm/2016.1/configuring-remote-interpreters-via-ssh.html