Cannot connect to redis://localhost:6379// while using docker-compose - django

This may be a simple question but I've just started to learn docker and I am making my first project with it.
I have a django project using celery and Redis. I've made a Dockerfile and a docker-compose.yml:
Dockerfile
FROM python:3.8
RUN apt-get update && apt-get upgrade -y && apt-get autoremove && apt-get autoclean
RUN apt-get install -y \
    libffi-dev \
    libssl-dev \
    libxml2-dev \
    libxslt-dev \
    libjpeg-dev \
    libfreetype6-dev \
    zlib1g-dev \
    net-tools
ARG PROJECT=djangoproject
ARG PROJECT_DIR=/var/www/${PROJECT}
RUN mkdir -p $PROJECT_DIR
WORKDIR $PROJECT_DIR
COPY requirements.txt .
RUN pip install -r requirements.txt
EXPOSE 8000
STOPSIGNAL SIGINT
CMD ["python", "manage.py", "runserver", "127.0.0.1:8000"]
Docker-compose.yml:
version: "3"
services:
redis:
image: redis:latest
container_name: rd01
ports:
- '6379:6379'
restart: always
expose:
- '6379'
django:
container_name: django_server
build:
context: .
dockerfile: Dockerfile
image: docker_tutorial_django
volumes:
- ./parser_folder:/var/www/djangoproject
ports:
- "8000:8000"
links:
- redis
depends_on:
- celery
celery:
build: .
command: celery -A Parsing worker -B --loglevel=DEBUG
volumes:
- ./parser_folder:/var/www/djangoproject
links:
- redis
When I execute docker-compose up I get the error consumer: Cannot connect to redis://localhost:6379//: Error 99 connecting to localhost:6379. Cannot assign requested address..
I tried changing ports and adding a command for Redis in docker-compose.yml, but it still doesn't work. Please help me figure this out.
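For what it's worth, the broker URL in the error is the giveaway: Celery is falling back to its default of redis://localhost:6379//, and inside the celery container localhost is the container itself, not the redis service. Pointing the broker at the compose service name usually resolves this. A minimal sketch of the compose side (CELERY_BROKER_URL is an assumed variable name, since the Parsing project's Celery config isn't shown):
# docker-compose.yml fragment (sketch) -- set the broker for both services
django:
  environment:
    - CELERY_BROKER_URL=redis://redis:6379/0
celery:
  environment:
    - CELERY_BROKER_URL=redis://redis:6379/0
The settings would then read it with something like CELERY_BROKER_URL = os.environ.get('CELERY_BROKER_URL'). Separately, note that runserver bound to 127.0.0.1:8000 in the Dockerfile's CMD is only reachable from inside the container; binding to 0.0.0.0:8000 makes the published port work.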

Related

Django/Docker: web container not up-to-date code

EDIT 22/05/2022
Docker version 20.10.14
docker-compose version 1.25.0
I deleted all containers/images again and re-built using docker-compose -f docker-compose.preprod.yml build --force-rm --no-cache, but I still observe the same issue: the code in the web container is not up to date.
I use a Dockerized Django app and cannot manage to apply code updates to my web container.
I've tried deleting all containers (docker rm -f ID ; docker system prune) and images (docker rmi -f ID ; docker image prune) related to my app and re-building with docker-compose -f docker-compose.preprod.yml build.
Then I run docker-compose -f docker-compose.preprod.yml up, but when I connect to my running web container (docker exec -it web sh) and look at my updated files, I see that the updates have not been applied...
What should I do to get my updates applied?
Dockerfile.preprod:
# Pull the official base image
FROM python:3.8.3-alpine
# Set a work directory
WORKDIR /usr/src/app
# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
# install psycopg2 dependencies
RUN apk update && apk add postgresql-dev gcc g++ python3-dev musl-dev
RUN apk --update add libxml2-dev libxslt-dev libffi-dev musl-dev libgcc openssl-dev curl postgresql-client
RUN apk add jpeg-dev zlib-dev freetype-dev lcms2-dev openjpeg-dev tiff-dev tk-dev tcl-dev nano
RUN pip3 install psycopg2 psycopg2-binary
# install xgettext for i18n
RUN apk add gettext
# Install dependencies
COPY requirements/ requirements/
RUN pip install --upgrade pip && pip install -r requirements/preprod.txt
# Copy the entrypoint.sh file
COPY entrypoint.preprod.sh .
# Copy the initdata sql file
COPY initdata.preprod.sql .
# Copy the project's files
COPY . .
RUN chmod +x entrypoint.preprod.sh
docker-compose.preprod.yml:
version: '3.7'
services:
  web:
    restart: always
    container_name: ecrf_web
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    volumes:
      - app_volume:/usr/src/app
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    expose:
      - 8000
    env_file:
      - ./.env.preprod
    entrypoint: [ "/usr/src/app/entrypoint.preprod.sh" ]
    depends_on:
      - redis
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://localhost:8000/" ]
      interval: 30s
      timeout: 10s
      retries: 50
  redis:
    container_name: ecrf_redis
    image: "redis:alpine"
  celery:
    container_name: ecrf_celery
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core worker -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      - web
      - redis
  celery-beat:
    container_name: ecrf_celery-beat
    build:
      context: ./app
      dockerfile: Dockerfile.preprod
    command: celery -A core beat -l info
    volumes:
      - app_volume:/usr/src/app
    env_file:
      - ./.env.preprod
    depends_on:
      - web
      - redis
  nginx:
    container_name: ecrf_nginx
    build: ./nginx
    restart: always
    volumes:
      - static_volume:/usr/src/app/static
      - media_volume:/usr/src/app/media
    ports:
      - 1370:80
    depends_on:
      - web
volumes:
  static_volume:
  media_volume:
  app_volume:
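A likely suspect in the compose file above: app_volume is a named volume mounted over /usr/src/app. Named volumes are seeded from the image only the first time they are created and then persist across rebuilds, so freshly built code stays shadowed by the old volume contents. Removing the volume with docker-compose -f docker-compose.preprod.yml down -v, or dropping the app_volume mount entirely, should make the rebuilt code visible. A sketch of the web service without it:
# sketch: web service without the code-shadowing named volume
web:
  volumes:
    - static_volume:/usr/src/app/static
    - media_volume:/usr/src/app/media
Note that celery and celery-beat mount the same app_volume and would need the same change.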

docker-compose.yml for production - Django and Celery

I'm looking to deploy a simple application which uses Django and celery.
docker-compose.yml:
version: "3.8"
services:
django:
build: .
container_name: django
command: python manage.py runserver 0.0.0.0:8000
volumes:
- .:/usr/src/app/
ports:
- "8000:8000"
environment:
- DEBUG=1
- CELERY_BROKER=redis://redis:6379/0
- CELERY_BACKEND=djcelery.backends.database:DatabaseBackend
depends_on:
- redis
celery:
build: .
command: celery -A core worker -l INFO
volumes:
- .:/usr/src/app
environment:
- DEBUG=1
- CELERY_BROKER=redis://redis:6379/0
- CELERY_BACKEND=djcelery.backends.database:DatabaseBackend
depends_on:
- django
- redis
redis:
image: "redis:alpine"
volumes:
pgdata:
Dockerfile:
FROM python:3.7
WORKDIR /app
ADD . /app
#Install dependencies for PyODBC
RUN apt-get update \
&& apt-get install unixodbc -y \
&& apt-get install unixodbc-dev -y \
&& apt-get install tdsodbc -y \
&& apt-get clean -y
# install ODBC driver in docker image
RUN apt-get update \
&& curl https://packages.microsoft.com/keys/microsoft.asc | apt-key add - \
&& curl https://packages.microsoft.com/config/debian/10/prod.list > /etc/apt/sources.list.d/mssql-release.list \
&& apt-get update \
&& ACCEPT_EULA=Y apt-get install --yes --no-install-recommends msodbcsql17 \
&& apt-get clean \
&& rm -rf /var/lib/apt/lists/* \
&& rm -rf /tmp/*
# install requirements
RUN pip install --trusted-host pypi.python.org -r requirements.txt
EXPOSE 5000
ENV NAME OpentoAll
CMD ["python", "app.py"]
Project directories: (screenshot omitted)
When I run "docker-compose up" locally, the celery worker is run and I am able to go to localhost:8000 to access the API to make asynchronous requests to a celery task.
Now I'm wondering how can I deploy this to the cloud environment? What would be the image I would need to build and deploy? Thanks
You will need to install an application server (e.g. gunicorn) in your django container and then run it on, say, port 8000. You'll also need a web server (e.g. nginx) in a container or installed on the host. The web server will need to act as a reverse proxy for gunicorn and also serve your static Django content.
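A rough compose sketch of that layout (gunicorn being added to requirements.txt and the nginx.conf reverse-proxy file are assumptions, not shown in the question):
# sketch: gunicorn behind nginx
django:
  build: .
  command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
  expose:
    - 8000
nginx:
  image: nginx:alpine
  volumes:
    - ./nginx.conf:/etc/nginx/conf.d/default.conf
  ports:
    - "80:80"
  depends_on:
    - django
The hypothetical nginx.conf would contain a proxy_pass http://django:8000; block plus location rules for the static files.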

docker-compose build requirements.txt not update

I want to use docker to publish my Django project.
I have created a docker-compose.yml file, a .dockerignore and a Dockerfile like this one:
FROM python:3.6-alpine
RUN apk add --no-cache gcc musl-dev linux-headers
RUN apk update && apk add postgresql-dev gcc python3-dev musl-dev
RUN mkdir /code
COPY requirements.txt /code
WORKDIR /code
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "127.0.0.1:8000"]
The first time I run docker-compose up I get an error installing a package contained in my requirements.txt file. At that point, I remove the package from the file and run:
docker-compose down
docker-compose build --no-cache
Here is my docker-compose.yml:
version: '3'
networks:
  mynetwork:
    driver: bridge
services:
  db:
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    networks:
      - mynetwork
    environment:
      POSTGRES_USER: myuser
      POSTGRES_PASSWORD: mypass
      POSTGRES_DB: mydb
    volumes:
      - ./data:/var/lib/postgresql/data
  web:
    build: .
    networks:
      - mynetwork
    volumes:
      - .:/DEV
    ports:
      - "8000:8000"
    depends_on:
      - db
When the build reaches pip install -r requirements.txt, the removed package is still being installed and causes the same issue... how can I clear the cache and make it use my newly saved requirements.txt file?
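A hedged sequence that rules out both stale layers and stale containers (web is the service name from the compose file above):
docker-compose down
docker-compose build --no-cache web
docker-compose up --force-recreate
If the removed package still appears after this, it is worth checking that the edited requirements.txt actually sits in the build context (the directory containing the Dockerfile) and is not masked by a .dockerignore entry.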

How to attach graph-tool to Django using Docker

I need to use some graph-tool calculations in my Django project. So I started with docker pull tiagopeixoto/graph-tool and then added it to my Docker-compose file:
version: '3'
services:
  db:
    image: postgres
  graph-tool:
    image: dcagatay/graph-tool
  web:
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
      - graph-tool
When I run docker-compose up I get this line:
project_graph-tool_1_87e2d144b651 exited with code 0
And when my Django project starts I cannot import modules from graph-tool, such as:
from graph_tool.all import *
If I work directly in this docker image using:
docker run -it -u user -w /home/user tiagopeixoto/graph-tool ipython
everything works fine.
What am I doing wrong, and how can I fix it so graph-tool is finally attached to Django? Thanks!
Rather than using a separate docker image for graph-tool, I think it's better to install it within the same Dockerfile you are using for Django. For example, update your current Dockerfile:
# using ubuntu image (a trailing comment on the FROM line itself is not valid Dockerfile syntax)
FROM ubuntu:16.04
ENV PYTHONUNBUFFERED 1
ENV C_FORCE_ROOT true
# python3-graph-tool specific requirements for installation in Ubuntu from documentation
RUN echo "deb http://downloads.skewed.de/apt/xenial xenial universe" >> /etc/apt/sources.list && \
echo "deb-src http://downloads.skewed.de/apt/xenial xenial universe" >> /etc/apt/sources.list
RUN apt-key adv --keyserver pgp.skewed.de --recv-key 612DEFB798507F25
# Install dependencies
RUN apt-get update \
&& apt-get install -y python3-pip python3-dev \
&& apt-get install --yes --no-install-recommends --allow-unauthenticated python3-graph-tool \
&& cd /usr/local/bin \
&& ln -s /usr/bin/python3 python \
&& pip3 install --upgrade pip
# Project specific setups
# These steps might be different in your project
RUN mkdir /code
WORKDIR /code
ADD . /code
RUN pip3 install -r requirements.pip
Now update your docker-compose file as well:
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    container_name: djcon # <-- preferred over generated name
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
That's it. Now open a shell in your web service with docker exec -ti djcon bash (or the generated name instead of djcon) and start the Django shell with python manage.py shell. Then type from graph_tool.all import * and it will not throw any import error.
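For a quick sanity check in that Django shell, here is a tiny graph built with graph-tool's documented API (a sketch, assuming the install above succeeded):
# run inside `python manage.py shell` in the web container
from graph_tool.all import Graph
g = Graph(directed=True)              # an empty directed graph
g.add_vertex(2)                       # add two vertices
g.add_edge(g.vertex(0), g.vertex(1))  # connect them
print(g)                              # prints a Graph summary if the import works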

Why does docker-compose use ~60GB to build this image

When I start docker-compose build I have 60 GB free, and I run out of space before the build finishes. Any idea what could possibly be going on?
I'm running the latest Docker for Mac and docker-compose.
Here's my docker-compose file:
version: '3'
services:
  db:
    image: postgres:9.6-alpine
    volumes:
      - data:/var/lib/postgresql/data
    ports:
      - 5432:5432
  web:
    image: python:3.6-alpine
    command: ./waitforit.sh solr:8983 db:5432 -- bash -c "./init.sh"
    build: .
    env_file: ./.env
    volumes:
      - .:/sark
      - solrcores:/solr
    ports:
      - 8000:8000
    links:
      - db
      - solr
    restart: always
  solr:
    image: solr:6-alpine
    ports:
      - 8983:8983
    entrypoint:
      - docker-entrypoint.sh
      - solr-precreate
      - sark
    volumes:
      - solrcores:/opt/solr/server/solr/mycores
volumes:
  data:
  solrcores:
And my Dockerfile for the "web" image:
FROM python:3
# Some stuff that everyone has been copy-pasting
# since the dawn of time.
ENV PYTHONUNBUFFERED 1
# Install some necessary things.
RUN apt-get update
RUN apt-get install -y swig libssl-dev dpkg-dev netcat
# Copy all our files into the image.
RUN mkdir /sark
WORKDIR /sark
COPY . /sark/
# Install our requirements.
RUN pip install -U pip
RUN pip install -Ur requirements.txt
This image itself when built is ~3 gigs.
I'm pretty flummoxed.
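One hedged guess: COPY . /sark/ pulls the whole project directory into the image, so anything large sitting in the project root (git history, data dumps, media, old build artifacts) gets baked into every build and its cached intermediate layers. A .dockerignore keeps that out of the build context; the entries below are hypothetical, adjust them to whatever actually lives in the repo:
# .dockerignore (sketch) -- exclude large, non-build files from the context
.git
*.log
**/__pycache__
After a failed build, docker system df and docker image prune also show and reclaim space held by dangling intermediate images.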