Docker connection refused hanging Django

When I run my container, it just hangs on the next line, and if I run
curl http://0.0.0.0:8000/
I get
Failed to connect to 0.0.0.0 port 8000: Connection refused
This is my Dockerfile:
FROM python:3.6.1
# Set the working directory to /app
WORKDIR /app
# Copy the current directory contents into the container at /app
ADD . /app
RUN pip3 install -r requirements.txt
CMD ["python3", "dockerizing/manage.py", "runserver", "0.0.0.0:8000"]
I also tried doing it through a docker-compose.yml file, and again nothing happens. I've searched a lot and haven't found a solution. This is the docker-compose.yml:
version: "3"
services:
web:
image: app1
deploy:
replicas: 5
resources:
limits:
cpus: "0.1"
memory: 50M
restart_policy:
condition: on-failure
ports:
- "8000:8000"
networks:
- webnet
networks:
webnet:
By the way, if I run docker ps while the container from my app1 image is running, I get this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
e9633657f060 app1 "python3 dockerizi..." 5 seconds ago Up 5 seconds friendly_dijkstra
When I deploy the service with the docker-compose.yml and run docker ps, I get this:
MacBook-Pro-de-Jesus:docker-django Almaral$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
13677a71d9d5 app1:latest "python3 dockerizin..." 15 seconds ago Up 11 seconds getstartedlab_web.1.cq3zqmpfsii5g6m5r9qsnmtb1
c6693118ef70 app1:latest "python3 dockerizin..." 16 seconds ago Up 12 seconds getstartedlab_web.4.r472oh80s4zd1yymj447f1df6
f3822e47970b app1:latest "python3 dockerizin..." 16 seconds ago Up 12 seconds getstartedlab_web.2.lkp43v9h30esjohcnf3pe31hi
f66a4038ebdf app1:latest "python3 dockerizin..." 16 seconds ago Up 12 seconds getstartedlab_web.5.xxu01ruebd84tnlxmoymsu0vo
e3d31c419c11 app1:latest "python3 dockerizin..." 16 seconds ago Up 13 seconds getstartedlab_web.3.uqswgirmg22sjnekzmf5b4xo7

Your docker ps output shows nothing in the PORTS column. That means that there's no port forwarding from the host to the container.
[...] STATUS PORTS NAMES
[...] Up 5 seconds friendly_dijkstra
If you use the docker run command to run your container, you should explicitly specify the port numbers on both the host and the container using the option -p hostPort:containerPort:
docker run -p 8000:8000 app1
Now, running docker ps should show port forwarding.
[...] STATUS PORTS NAMES
[...] Up 5 seconds 0.0.0.0:8000->8000/tcp friendly_dijkstra
If you are using docker-compose to start your containers, the host and container ports are already configured in your docker-compose.yml file, so you don't need a command line option.
docker-compose up web
To use docker-compose, you have to install it on the host.
It's a Python module, so you can install it with pip:
pip install docker-compose

In your docker-compose config file, modify your port mapping from 8000:8000 to 127.0.0.1:8000:8000.
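In the compose file, that would look like this (a sketch):

ports:
  - "127.0.0.1:8000:8000"

Note that this binds the published port to the host's loopback interface only, so the app will be reachable from the host itself but not from other machines.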

Related

dockerizing django, docker-compose works but not docker run

I have a Django app (a URL shortener, for k8s) here: https://github.com/MrAmbiG/shorty/tree/k8s. The docker-compose version works with the same Docker image, but docker run doesn't (I cannot access the app from the host, and there are no errors). Docker and docker-compose are both from docker.io and both use the same image, so why the difference?
I apologize for not posting all the contents of the file but rather posting the GitHub URL itself.
version: '3.7'
services:
  django:
    image: gajuambi/shorty
    ports:
      - 80:8001
    env_file:
      - ../.env
The following doesn't work:
docker run --name shorty -it --env-file .env gajuambi/shorty -p 8001:8001
The Docker image itself runs with no errors, but when I enter the address in the browser on the host (my Windows laptop), I get nothing.
I tried the following URLs in the browser on the host (my laptop) where Docker is running:
http://localhost:8001/
http://127.0.0.1:8001/
I tried binding Django to the following addresses:
0.0.0.0
0
127.0.0.1
but no go.
ports:
  - 80:8001
I think your application is running on port 80: since you tried binding the app to 0.0.0.0 without specifying a port, the default port will be 80, but you are forwarding port 8001 when running the docker command.
Please try:
docker run --name shorty -it --env-file .env -p 8001:80 gajuambi/shorty
(Note that -p must come before the image name; anything placed after the image name is passed as an argument to the container instead of to docker run.)
Also, try opening the IP of the host machine (computer) or the docker bridge IP:
http://{host IP}:8001
I updated the entrypoint command to
daphne shorty.asgi:application -b 0 -p 8000
Currently the following is working fine:
docker rm shorty -f && docker build -t gajuambi/shorty -f .\Deployment\Dockerfile . && docker run --name shorty -it --env-file .env -p 80:8000 gajuambi/shorty
I have updated the GitHub repo for reference.
https://github.com/MrAmbiG/shorty.git
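Presumably the docker-compose port mapping would need to match the new daphne bind as well, along these lines (an assumption on my part, since the compose file above still maps to container port 8001):

ports:
  - 80:8000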

Duplicate images on docker-compose build. How to properly push two services of docker-compose.yml to Docker hub registry?

I have a docker-compose.yml defined as follows with two services (the database and the app):
version: '3'
services:
  db:
    build: .
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
The reason I have build: . in both services is that you can't do docker-compose push unless every service has a build key. However, this means that both services refer to the same Dockerfile, which builds the entire app. So after I run docker-compose build and look at the available images, I see this:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
mellon_app latest XXXXXXXXXXXX 27 seconds ago 1.14GB
postgres latest XXXXXXXXXXXX 27 seconds ago 1.14GB
The IMAGE ID is exactly the same for both images, and the size is exactly the same too. This makes me think I've definitely done some unnecessary duplication, as they're both just built from the same Dockerfile. I don't want to take up any unnecessary space; how do I do this properly?
This is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN mkdir /usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend
WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build
WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt
EXPOSE 8000
CMD python3 manage.py collectstatic && \
python3 manage.py makemigrations && \
python3 manage.py migrate && \
gunicorn mellon.wsgi --bind 0.0.0.0:8000
What is the proper way to push the images to my Docker Hub registry without this duplication?
The proper way is to do:
1. docker build -f {path-to-dockerfile} -t {desired-docker-image-name} .
2. docker tag {desired-docker-image-name}:latest {desired-remote-image-name}:latest (or not latest but whatever you want, like a datetime in int format)
3. docker push {desired-remote-image-name}:latest
and cleanup:
4. docker rmi {desired-docker-image-name}:latest {desired-remote-image-name}:latest
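As a concrete sketch with hypothetical names (myuser/mellon_app is made up for illustration, not taken from the question):

docker build -f Dockerfile -t mellon_app .
docker tag mellon_app:latest myuser/mellon_app:latest
docker push myuser/mellon_app:latest
docker rmi mellon_app:latest myuser/mellon_app:latest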
The whole purpose of docker-compose is to help your local development: it makes it easier to start several containers and combine them in a local docker-compose network, etc.

How to translate docker-compose.yml to Dockerfile

I have an application written in Django and I am trying to run it in Docker on a DigitalOcean droplet. Currently I have two files.
Can anybody advise how to get rid of the docker-compose.yml file and integrate all the commands within the Dockerfile?
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r requirements.txt
RUN python /code/jk/manage.py collectstatic --noinput
docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: python jk/manage.py runserver 0.0.0.0:8081
    volumes:
      - .:/code
    ports:
      - "8081:8081"
I run my application and Docker image like the following:
docker-compose run web python jk/manage.py migrate
docker-compose up
output:
Starting workspace_web_1 ...
Starting workspace_web_1 ... done
Attaching to workspace_web_1
web_1 | Performing system checks...
web_1 |
web_1 | System check identified no issues (0 silenced).
web_1 | December 02, 2017 - 09:20:51
web_1 | Django version 1.11.3, using settings 'jk.settings'
web_1 | Starting development server at http://0.0.0.0:8081/
web_1 | Quit the server with CONTROL-C.
...
OK, so I have taken the following approach:
Dockerfile
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
RUN pip install -r requirements.txt
RUN python /code/jk/manage.py collectstatic --noinput
then I ran:
docker build -t "django:v1" .
So docker images -a shows:
docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
django v1 b3dec6aaf9b9 5 minutes ago 949MB
<none> <none> 55370397f7f2 5 minutes ago 948MB
<none> <none> e7eba7113203 7 minutes ago 693MB
<none> <none> dc3d7705c45a 7 minutes ago 691MB
<none> <none> 12825382746d 7 minutes ago 691MB
<none> <none> 2304087e8b82 7 minutes ago 691MB
python 3 79e1dc9af1c1 3 weeks ago 691MB
And finally I ran:
cd /opt/workspace
docker run -d -v /opt/workspace:/code -p 8081:8081 django:v1 python jk/manage.py runserver 0.0.0.0:8081
Two simple questions:
Do I get it right that each <none> image listed is created when running the docker build -t "django:v1" . command to build up my image? So does that mean it consumes something like [(691 x 4) + (693 x 1) + 948 + 949] MB of disk space?
Is it better to use gunicorn or a wsgi program to run Django in production?
And responses from @vmonteco:
I think so, but a way to reduce the size taken by your images is to reduce their number by using a single RUN directive for several chained commands in your Dockerfile (RUN cmd1 && cmd2 rather than RUN cmd1 then RUN cmd2).
It's up to you to form your own opinion. I personally use uWSGI, but there are even other choices than gunicorn/uwsgi (not just "wsgi", which is the name of an interface specification, not a program). Have fun finding your preferred one! :)
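For illustration, here is roughly what that chaining could look like in the Dockerfile above (a sketch of the idea, not a tested drop-in):

FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY . /code/
# One RUN directive for the chained commands: fewer intermediate layers
RUN pip install -r requirements.txt && \
    python /code/jk/manage.py collectstatic --noinput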
TL;DR
You can pass some information to your Dockerfile (the command to run), but that wouldn't be equivalent, and you can't do that with all of the docker-compose.yml file's content.
You can replace your docker-compose.yml file with command lines, though (docker-compose exists precisely to replace those command lines).
In your case you can add the command to run to your Dockerfile as a default command (which isn't quite the same as passing it to containers you start at runtime):
CMD ["python", "jk/manage.py", "runserver", "0.0.0.0:8081"]
or pass this command directly on the command line, like the volume and port, which should give something like:
docker run -d -v "$(pwd)":/code -p 8081:8081 yourimage python jk/manage.py runserver 0.0.0.0:8081
BUT
Keep in mind that Dockerfiles and docker-compose serve two wholly different purposes.
Dockerfiles are meant for image building: they define the steps to build your images.
docker-compose is a tool to start and orchestrate containers to build your applications (you can add some information like the build context path or the names for the images you need, but not the Dockerfile content itself).
So asking to "convert a docker-compose.yml file into a Dockerfile" isn't really relevant.
That's more about converting a docker-compose.yml file into one (or several) command line(s) to start containers by hand.
The purpose of docker-compose is precisely to get rid of these command lines to make things simpler (it automates it).
Also:
From the manage.py documentation:
DO NOT USE THIS SERVER IN A PRODUCTION SETTING. It has not gone through security audits or performance tests. (And that's how it's gonna stay.)
Django's runserver included in the manage.py tool isn't meant for production.
You might want to consider using a WSGI server behind a proxy.
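For illustration, a minimal sketch of what that could look like with gunicorn in the Dockerfile from this question. The module path jk.wsgi:application is inferred from the settings module 'jk.settings' shown in the runserver output, so treat it as an assumption:

# Install a WSGI server on top of the app's requirements (assumption: not already in requirements.txt)
RUN pip install gunicorn
# Run from the directory that contains the jk project package (inferred layout)
WORKDIR /code/jk
CMD ["gunicorn", "jk.wsgi:application", "--bind", "0.0.0.0:8081"]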

Docker Cloud autotest cant find service

I am currently trying to dockerize one of my Django API projects. It uses postgres as the database. I am using Docker Cloud as a CI so that I can build, lint and run tests.
I started with the following Dockerfile:
# Start with a python 3.6 image
FROM python:3.6
ENV PYTHONUNBUFFERED 1
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD xxx
ENV DB_HOST db
RUN mkdir /code
ADD . /code/
WORKDIR /code
RUN pip install -r requirements.txt
RUN pylint **/*.py
# First tried running tests from here.
RUN python3 src/manage.py test
But this Dockerfile always fails: Django can't connect to any database when running the unit tests, since no postgres instance is running during the build, and it just fails with the following error:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
Then I discovered something called "Autotest" in Docker Cloud that allows you to use a docker-compose.test.yml file to describe a stack and then run some commands with each build. This seemed like what I needed to run the tests, as it would allow me to build my Django image, reference an already existing postgres image, and run the tests.
I removed the
RUN python3 src/manage.py test
from the Dockerfile and created the following docker-compose.test.yml file:
version: '3.2'
services:
  db:
    image: postgres:9.6.3
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
  sut:
    build: .
    command: python src/manage.py test
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
      - DB_HOST=db
    depends_on:
      - db
Then when I run
docker-compose -f docker-compose.test.yml build
and
docker-compose -f docker-compose.test.yml run sut
locally, the tests all run and all pass.
Then I push my changes to GitHub and Docker Cloud builds it. The build itself succeeds, but the autotest, using the docker-compose.test.yml file, fails with the following error:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "db" (172.18.0.2) and accepting TCP/IP connections on port 5432?
So it seems like the db service isn't being started, or is too slow to start on Docker Cloud compared to my local machine?
After Googling around a bit I found this: https://docs.docker.com/compose/startup-order/, where it says that the containers don't really wait for each other to be 100% ready. They recommend writing a wrapper script to wait for postgres if that is really needed.
I followed their instructions and used the wait-for-postgres.sh script.
Juicy part:
until psql -h "$host" -U "postgres" -c '\l'; do
>&2 echo "Postgres is unavailable - sleeping"
sleep 1
done
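For context, the full script on that page looks roughly like this (reconstructed from the Docker docs pattern around the loop above, so treat it as a sketch rather than a verbatim copy):

#!/bin/sh
# wait-for-postgres.sh: wait until postgres at host $1 accepts connections,
# then exec the remaining arguments as a command
set -e

host="$1"
shift

until psql -h "$host" -U "postgres" -c '\l'; do
  >&2 echo "Postgres is unavailable - sleeping"
  sleep 1
done

>&2 echo "Postgres is up - executing command"
exec "$@"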
I then replaced the command in my docker-compose.test.yml from
command: python src/manage.py test
to
command: ["./wait-for-postgres.sh", "db", "python", "src/manage.py",
"test"]
I then pushed to Github and Docker Cloud starts building. Building the image works but now the Autotest just waits for postgres forever (I waited for 10 minutes before manually shutting down the build process in Docker Cloud)
I have Googled around a fair bit today, and it seems like most "Dockerize Django" tutorials don't really mention unit testing at all.
Am I running Django unit tests completely wrong using Docker?
Seems strange to me that it runs perfectly fine locally but when Docker Cloud runs it, it fails!
I seem to have fixed it by downgrading the docker-compose file version from 3.2 to 2.1 and using healthcheck.
In version 3.2, the healthcheck condition gives a syntax error in the depends_on clause, as you have to pass an array into it; no idea why this is not supported there.
But here is my new docker-compose.test.yml that works
version: '2.1'
services:
  db:
    image: postgres:9.6.3
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
    healthcheck:
      test: ["CMD-SHELL", "psql -h 'localhost' -U 'postgres' -c '\\l'"]
      interval: 30s
      timeout: 30s
      retries: 3
  sut:
    build: .
    command: python3 src/manage.py test
    environment:
      - POSTGRES_USER=$POSTGRES_USER
      - POSTGRES_PASSWORD=$POSTGRES_PASSWORD
      - DB_HOST=db
    depends_on:
      db:
        condition: service_healthy  # does not work in compose file version 3.2

Service name for Docker Compose remote interpreter in PyCharm 5.1 Beta 2

I've imported into PyCharm 5.1 Beta 2 a tutorial project, which works fine when I run it from the command line with docker-compose up: https://docs.docker.com/compose/django/
Trying to set a remote python interpreter is causing problems.
I've been trying to work out what the Service name field is expecting (remote interpreter - Docker Compose window, screenshot: http://i.stack.imgur.com/Vah7P.png).
My docker-compose.yml file is:
version: '2'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
When I try to enter web or db or anything at all that comes to mind, I get an error message: Service definition is expected to be a map
So what am I supposed to enter there?
EDIT 1 (new version: PyCharm 2016.1 release)
I have now updated to the latest version and am still having issues: IOError: [Errno 21] Is a directory
Sorry for not tagging all links - I have a new-user link limit.
The only viable way we found to work around this (PyCharm 2016.1) is setting up an SSH remote interpreter.
Add this to the main service Dockerfile:
RUN apt-get update && apt-get install -y openssh-server
RUN mkdir /var/run/sshd
RUN echo 'root:screencast' | chpasswd
RUN sed -i 's/PermitRootLogin without-password/PermitRootLogin yes/' /etc/ssh/sshd_config
# SSH login fix. Otherwise user is kicked off after login
RUN sed 's#session\s*required\s*pam_loginuid.so#session optional pam_loginuid.so#g' -i /etc/pam.d/sshd
ENV NOTVISIBLE "in users profile"
RUN echo "export VISIBLE=now" >> /etc/profile
EXPOSE 22
CMD ["/usr/sbin/sshd", "-D"]
Then log into the docker container like this (when prompted, the password is 'screencast'):
$ ssh root@192.168.99.100 -p 2000
Note: we are aware the IP and port might change depending on your docker and compose configs.
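For the ssh command above to work, the container's SSH port has to be published. In docker-compose.yml that could look like this (a sketch, with the web service name taken from the compose file earlier in this thread):

services:
  web:
    ports:
      - "2000:22"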
For PyCharm just set up a remote SSH Interpreter and you are done!
https://www.jetbrains.com/help/pycharm/2016.1/configuring-remote-interpreters-via-ssh.html