I have a Django application, and I want to run Redis and Celery with the docker run command after building the images with a docker-compose file.
I run two commands in different Windows PowerShell windows:
docker run -it -p 6379:6379 redis
docker run -it image_celery
The Celery container is not able to reach Redis:
[2020-02-08 13:08:44,686: ERROR/MainProcess] consumer: Cannot connect to redis://redis:6379/1: Error -2 connecting to redis:6379. Name or service not known..
Trying again in 2.00 seconds...
version: '3'
services:
  the-redis:
    image: redis:3.2.7-alpine
    ports:
      - "6379:6379"
    volumes:
      - ../data/redis:/data
  celery_5:
    build:
      context: ./mltrons_backend
      dockerfile: Dockerfile_celery
    volumes:
      - ./mltrons_backend:/code
      - /tmp:/code/static
    depends_on:
      - the-redis
    deploy:
      replicas: 4
      resources:
        limits:
          memory: 25g
      restart_policy:
        condition: on-failure
volumes:
  db_data:
    external: true
Dockerfile_celery
FROM python:3.6
ENV PYTHONUNBUFFERED 1
# Install Java
RUN apt-get -y update && \
    apt install -y openjdk-11-jdk && \
    apt-get install -y ant && \
    apt-get clean && \
    rm -rf /var/lib/apt/lists/* && \
    rm -rf /var/cache/oracle-jdk11-installer
ENV JAVA_HOME /usr/lib/jvm/java-11-openjdk-amd64/
RUN mkdir /code
WORKDIR /code
ADD requirements.txt /code/
RUN pip install -r requirements.txt
ADD . /code
ENV REDIS_HOST=redis://the-redis
ENV REDIS_PORT=6379
RUN pip install --upgrade 'sentry-sdk==0.7.10'
ENTRYPOINT celery -A mlbot_webservices worker -c 10 -l info
EXPOSE 8102
celery.py
from __future__ import absolute_import, unicode_literals
import os
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'mlbot_webservices.settings')
app = Celery('mltrons_training')
app.config_from_object('django.conf:settings', namespace='CELERY')
app.autodiscover_tasks()
@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
settings.py
CELERY_BROKER_URL = 'redis://the-redis:6379/'
CELERY_RESULT_BACKEND = 'redis://the-redis:6379/'
CELERY_ACCEPT_CONTENT = ['application/json']
CELERY_RESULT_SERIALIZER = 'json'
CELERY_TASK_SERIALIZER = 'json'
It is expected, because when you start the containers the way you do (docker run IMAGE), they use Docker's default bridge network.
You can check it by inspecting that network: docker network inspect bridge.
The default bridge does not resolve containers by container name (such as redis).
Besides, the default name of a container is not the image name but a name generated by Docker.
That's why you get this error at runtime:
Cannot connect to redis://redis:6379/1
Note that you can still reference containers on the default bridge by their IP addresses, but that is generally undesirable because it hard-codes them from the client side.
That works with Docker Compose because:
By default Compose sets up a single network for your app. Each container for a service joins the default network and is both reachable by other containers on that network, and discoverable by them at a hostname identical to the container name.
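You can observe that resolution from inside a compose-managed container; for example, using the service names from the compose file above:
docker-compose exec celery_5 python -c "import socket; print(socket.gethostbyname('the-redis'))"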
To be able to communicate by container name with docker run, you need:
- to add these containers to the same user-defined network (not the default one provided by Docker)
- to give an explicit name to the container you want to reference (doing it for both containers makes them simpler to monitor and manage)
For example, create a user-defined bridge network and add the containers to it when you start them:
docker network create -d bridge my-bridge-network
docker run -it -p 6379:6379 --network=my-bridge-network --name=redis redis
docker run -it --network=my-bridge-network --name=celery image_celery
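One thing to double-check: the hostname in the broker URL must match the container name you choose. The error log shows Celery dialing redis://redis:6379/1, which matches --name=redis above; if Celery instead picks up redis://the-redis:6379/ from settings.py, name the container the-redis. You can verify the wiring from the host and from inside the Celery container, for example:
docker network inspect my-bridge-network
docker exec celery python -c "import socket; print(socket.gethostbyname('redis'))"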
I am learning CI/CD and Docker. So far I have managed to successfully create a Docker image using the code below:
Dockerfile
# Docker Operating System
FROM python:3-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
#App folder on Slim OS
WORKDIR /app
# Install pip requirements
COPY requirements.txt requirements.txt
RUN python -m pip install --upgrade pip && pip install -r requirements.txt
#Copy Files to App folder
COPY . /app
docker-compose.yml
version: '3.4'
services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    ports:
      - 8000:8000
My code is on BitBucket and I have a pipeline file as follows:
bitbucket-pipelines.yml
image: atlassian/default-image:2
pipelines:
  branches:
    master:
      - step:
          name: Build And Publish To Azure
          services:
            - docker
          script:
            - docker login -u $AZURE_USER -p $AZURE_PASS xxx.azurecr.io
            - docker build -t xxx.azurecr.io .
            - docker push xxx.azurecr.io
With xxx being the container registry on Azure. When the pipeline job runs, I am getting a "denied: requested access to the resource is denied" error on BitBucket.
What did I not do correctly?
Thanks.
The Edit
Changes in docker-compose.yml and bitbucket-pipelines.yml
docker-compose.yml
version: '3.4'
services:
  web:
    build: .
    image: xx.azurecr.io/myticket
    container_name: xx
    command: python manage.py runserver 0.0.0.0:80
    ports:
      - 80:80
bitbucket-pipelines.yml
image: atlassian/default-image:2
pipelines:
  branches:
    master:
      - step:
          name: Build And Publish To Azure
          services:
            - docker
          script:
            - docker login -u $AZURE_USER -p $AZURE_PASS xx.azurecr.io
            - docker build -t xx.azurecr.io/xx .
            - docker push xx.azurecr.io/xx
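As an aside, the $AZURE_USER and $AZURE_PASS repository variables are typically the registry's admin credentials. Assuming the admin account is enabled on the registry, they can be looked up with the Azure CLI:
az acr credential show --name xx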
You didn't specify CMD or ENTRYPOINT in your Dockerfile.
There are stages when building a Dockerfile: first you pull a base image, then you package your requirements, and so on. Those stages are executed while the image is being built. What you are missing is the last stage, the instruction that executes a command inside the container once it is up.
Without a CMD or an ENTRYPOINT, the container has nothing to run and exits immediately.
For it to work you must add a CMD or ENTRYPOINT at the bottom of your Dockerfile.
Change your files accordingly and try again.
Dockerfile
# Docker Operating System
FROM python:3-slim-buster
# Keeps Python from generating .pyc files in the container
ENV PYTHONDONTWRITEBYTECODE=1
# Turns off buffering for easier container logging
ENV PYTHONUNBUFFERED=1
#App folder on Slim OS
WORKDIR /app
# Install pip requirements
COPY requirements.txt requirements.txt
RUN python -m pip install --upgrade pip && pip install -r requirements.txt
#Copy Files to App folder
COPY . /app
# Execute commands inside the container
CMD python manage.py runserver 0.0.0.0:8000
Check that you are able to build and run the image by going to its directory and running:
docker build -t app .
docker run -d -p 8000:8000 app
docker ps
and see whether your container is running.
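If the container exits immediately instead, its logs usually show the cause (take the container ID from the docker ps output):
docker logs <container-id>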
Next
Update the image property in the docker-compose file.
Prefix the image name with the login server name of your Azure container registry, <registry-name>.azurecr.io. For example, if your registry is named myregistry, the login server name is myregistry.azurecr.io (all lowercase), and the image property is then myregistry.azurecr.io/azure-vote-front.
Change the ports mapping to 80:80. Save the file.
The updated file should look similar to the following:
docker-compose.yml
version: '3'
services:
  foo:
    build: .
    image: foo.azurecr.io/atlassian/default-image:2
    container_name: foo
    ports:
      - "80:80"
By making these substitutions, the image you build is tagged for your Azure container registry, and the image can be pulled to run in Azure Container Instances.
More in documentation
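For reference, the equivalent manual flow from a terminal looks roughly like this, assuming a registry named myregistry and the myticket image from the compose file above:
az acr login --name myregistry
docker build -t myregistry.azurecr.io/myticket:latest .
docker push myregistry.azurecr.io/myticket:latest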
I'm working on a Django project that is dockerized, and I've deployed the application on an Amazon EC2 instance. Currently the EC2 instance serves plain HTTP, and I want to switch to HTTPS, so I've created a CloudFront distribution that points at my EC2 instance as its origin. Unfortunately I'm getting the following error.
error:
CloudFront attempted to establish a connection with the origin, but either the attempt failed or the origin closed the connection. We can't connect to the server for this app or website at this time. There might be too much traffic or a configuration error. Try again later, or contact the app or website owner.
If you provide content to customers through CloudFront, you can find steps to troubleshoot and help prevent this error by reviewing the CloudFront documentation.
Generated by cloudfront (CloudFront)
Request ID: Pa0WApol6lU6Ja5uBuqKVPVTJFBpkcnJQgtXMYzQP6SPBhV4CtMOVw==
docker-compose.yml
version: "3.8"
services:
db:
container_name: db
image: "postgres"
restart: always
volumes:
- ./scripts/init.sql:/docker-entrypoint-initdb.d/init.sql
- postgres-data:/var/lib/postgresql/data/
env_file:
- prod.env
app:
container_name: app
build:
context: .
restart: always
volumes:
- static-data:/vol/web
depends_on:
- db
env_file:
- prod.env
proxy:
container_name: proxy
build:
context: ./proxy
restart: always
depends_on:
- app
ports:
- 80:8000
volumes:
- static-data:/vol/static
volumes:
postgres-data:
static-data:
Dockerfile
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /app
EXPOSE 8000
COPY ./core/ /app/
COPY ./scripts /scripts
# installing nano and cron service
RUN apt-get update
RUN apt-get install -y cron
RUN apt-get install nano
RUN pip install --upgrade pip
COPY requirements.txt /app/
# install dependencies and manage assets
RUN pip install -r requirements.txt && \
    mkdir -p /vol/web/static && \
    mkdir -p /vol/web/media
# files for cron logs
RUN mkdir /cron
RUN touch /cron/django_cron.log
# start cron service
RUN service cron start
RUN service cron restart
RUN chmod +x /scripts/run.sh
CMD ["/scripts/run.sh"]
I have a docker-compose.yml defined as follows with two services (the database and the app):
version: '3'
services:
  db:
    build: .
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
The reason I have build: . in both services is that you can't do docker-compose push unless every service has a build entry. However, this means that both services refer to the same Dockerfile, which builds the entire app. So after I run docker-compose build and look at the images available, I see this:
$ docker images
REPOSITORY    TAG     IMAGE ID      CREATED         SIZE
mellon_app    latest  XXXXXXXXXXXX  27 seconds ago  1.14GB
postgres      latest  XXXXXXXXXXXX  27 seconds ago  1.14GB
The image ID and the size are exactly the same for both images. This makes me think I've done some unnecessary duplication, as they're both just built from the same Dockerfile. I don't want to take up any unnecessary space, so how do I do this properly?
This is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN mkdir usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend
WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build
WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt
EXPOSE 8000
CMD python3 manage.py collectstatic && \
python3 manage.py makemigrations && \
python3 manage.py migrate && \
gunicorn mellon.wsgi --bind 0.0.0.0:8000
What is the proper way to push the images to my Docker hub registry without this duplication?
The proper way is to:
1. docker build -f {path-to-dockerfile} -t {desired-docker-image-name} .
2. docker tag {desired-docker-image-name}:latest {desired-remote-image-name}:latest (or a tag other than latest if you prefer, such as a datetime in integer format)
3. docker push {desired-remote-image-name}:latest
and cleanup:
4. docker rmi {desired-docker-image-name}:latest {desired-remote-image-name}:latest
The whole purpose of docker-compose is to help your local development, so it's easier to start several containers and combine them in a local docker-compose network, etc.
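To make those placeholders concrete, here is a sketch using the mellon_app image from the question and a hypothetical Docker Hub account named mydockerhubuser:
docker build -f Dockerfile -t mellon_app .
docker tag mellon_app:latest mydockerhubuser/mellon_app:latest
docker push mydockerhubuser/mellon_app:latest
# clean up the local tags afterwards
docker rmi mellon_app:latest mydockerhubuser/mellon_app:latest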
I am using Django and PostgreSQL in different containers for a project. When I run the containers with docker-compose up, my Django application can connect to PostgreSQL. But when I build the Docker image with docker build . --file Dockerfile --tag rengine:$(date +%s), the image builds successfully, yet in entrypoint.sh it is unable to resolve the host db.
My docker-compose file is
version: '3'
services:
  db:
    restart: always
    image: "postgres:12.3-alpine"
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_PORT=5432
    ports:
      - "5432:5432"
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    networks:
      - rengine_network
  web:
    restart: always
    build: .
    command: python3 manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
    networks:
      - rengine_network
networks:
  rengine_network:
volumes:
  postgres_data:
Entrypoint
#!/bin/sh
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z db 5432; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi
python manage.py migrate
# Load default engine types
python manage.py loaddata fixtures/default_scan_engines.json --app scanEngine.EngineType
exec "$@"
and Dockerfile
# Base image
FROM python:3-alpine
# Labels and Credits
LABEL \
    name="reNgine" \
    author="Yogesh Ojha <yogesh.ojha11@gmail.com>" \
    description="reNgine is an automated pipeline of recon process, useful for information gathering during web application penetration testing."
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
RUN apk update \
    && apk add --virtual build-deps gcc python3-dev musl-dev \
    && apk add postgresql-dev \
    && apk add chromium \
    && apk add git \
    && pip install psycopg2 \
    && apk del build-deps
# Copy requirements
COPY ./requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt
# Download and install go 1.13
COPY --from=golang:1.13-alpine /usr/local/go/ /usr/local/go/
# Environment vars
ENV DATABASE="postgres"
ENV GOROOT="/usr/local/go"
ENV GOPATH="/root/go"
ENV PATH="${PATH}:${GOROOT}/bin"
ENV PATH="${PATH}:${GOPATH}/bin"
# Download Go packages
RUN go get -u github.com/tomnomnom/assetfinder github.com/hakluke/hakrawler github.com/haccer/subjack
RUN GO111MODULE=on go get -u -v github.com/projectdiscovery/httpx/cmd/httpx \
    github.com/projectdiscovery/naabu/cmd/naabu \
    github.com/projectdiscovery/subfinder/cmd/subfinder \
    github.com/lc/gau
# Make directory for app
RUN mkdir /app
WORKDIR /app
# Copy source code
COPY . /app/
# Collect Static
RUN python manage.py collectstatic --no-input --clear
RUN chmod +x /app/tools/get_subdomain.sh
RUN chmod +x /app/tools/get_dirs.sh
RUN chmod +x /app/tools/get_urls.sh
RUN chmod +x /app/tools/takeover.sh
# run entrypoint.sh
ENTRYPOINT ["/app/docker-entrypoint.sh"]
When I run the built image, it says no such host db in the entrypoint.sh file.
Can somebody help me? Where am I going wrong?
Based on a comment, the error appeared during
docker run rengine:1234
so the error is expected in this case: the hostname db only works inside the docker-compose network. Within a docker-compose stack, one service can communicate with another using its service name, but with docker run each container runs in an isolated environment.
You have two options to resolve this issue.
The first is to run the DB container and use a legacy link to connect the application container to it:
docker run -it --name db mydb_image
# now link the application container
docker run -it --link db:db rengine:1234
Now the container will be able to communicate with the host db.
The second option is to create a docker network and then run both containers in the same network:
docker network create mynetwork
docker run -itd --network=mynetwork --name db postgres:12.3-alpine
docker run -itd --network=mynetwork --name app rengine:1234
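You can then confirm that the containers share the network and that the db hostname resolves; nc is available in the app image because the entrypoint already relies on it:
docker network inspect mynetwork
docker exec app nc -z db 5432 && echo "db reachable"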
But you are already using docker-compose, so it is better to do your testing with docker-compose.
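For example, a quick check through compose could look like:
docker-compose up --build -d
docker-compose logs -f web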
I'm trying to add Docker support to my project.
My structure looks like this:
front/Dockerfile
back/Dockerfile
docker-compose.yml
My Dockerfile for django:
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y python-software-properties software-properties-common
RUN add-apt-repository ppa:ubuntugis/ubuntugis-unstable
RUN apt-get update && apt-get install -y python3 python3-pip binutils libproj-dev gdal-bin python3-gdal
ENV APPDIR=/code
WORKDIR $APPDIR
ADD ./back/requirements.txt /tmp/requirements.txt
RUN ./back/pip3 install -r /tmp/requirements.txt
RUN ./back/rm -f /tmp/requirements.txt
CMD $APPDIR/run-django.sh
My Dockerfile for Vue.js:
FROM node:9.11.1-alpine
# install simple http server for serving static content
RUN npm install -g http-server
# make the 'app' folder the current working directory
WORKDIR /app
# copy both 'package.json' and 'package-lock.json' (if available)
COPY package*.json ./
# install project dependencies
RUN npm install
# copy project files and folders to the current working directory (i.e. 'app' folder)
COPY . .
# build app for production with minification
RUN npm run build
EXPOSE 8080
CMD [ "http-server", "dist" ]
and my docker-compose.yml:
version: '2'
services:
  rabbitmq:
    image: rabbitmq
  api:
    build:
      context: ./back
    environment:
      - DJANGO_SECRET_KEY=${SECRET_KEY}
    volumes:
      - ./back:/app
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    ports:
      - "15672:15672"
      - "5672:5672"
    labels:
      NAME: "rabbitmq1"
    volumes:
      - "./enabled_plugins:/etc/rabbitmq/enabled_plugins"
  django:
    extends:
      service: api
    command:
      ./back/manage.py runserver
      ./back/uwsgi --http :8081 --gevent 100 --module websocket --gevent-monkey-patch --master --processes 4
    ports:
      - "8000:8000"
    volumes:
      - ./backend:/app
  vue:
    build:
      context: ./front
    environment:
      - HOST=localhost
      - PORT=8080
    command:
      bash -c "npm install && npm run dev"
    volumes:
      - ./front:/app
    ports:
      - "8080:8080"
    depends_on:
      - django
Running docker-compose fails with:
ERROR: for chatapp2_django_1 Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: for rabbit1 Cannot start service rabbit1: b'driver failed programming external connectivity on endpoint chatapp2_rabbit1_1 (05ff4e8c0bc7f24216f2fc960284ab8471b47a48351731df3697c6d041bbbe2f): Error starting userland proxy: listen tcp 0.0.0.0:15672: bind: address already in use'
ERROR: for django Cannot start service django: b'OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \\"./back/manage.py\\": stat ./back/manage.py: no such file or directory": unknown'
ERROR: Encountered errors while bringing up the project.
I don't understand what this 'unknown' directory is that it's trying to stat. Have I set this all up right for my project structure?
For the Django part you're missing a copy of your code for the Django app, which I'm assuming is in back. You'll need to add ADD /back /code. You should probably also build from the python alpine image instead of ubuntu, as it will significantly reduce build times and container size.
This is what I would do:
# change this to whatever python version your app is targeting (mine is typically 3.6)
FROM python:3.6-alpine
ADD /back /code
# whatever other dependencies you'll need; I run with the psycopg2-binary build so I need these
# (the nice part of the python-alpine image is that you don't need to install any of those
# python-specific packages you were installing before)
RUN apk add --virtual .build-deps gcc musl-dev postgresql-dev
RUN pip install -r /code/requirements.txt
# expose whatever port you need for your Django app (default is 8000, we use non-default but you can do whatever you need)
EXPOSE 8000
WORKDIR /code
# don't need the /code prefix here since WORKDIR is effectively a change directory
RUN chmod +x run-django.sh
RUN apk add --no-cache bash postgresql-libs
CMD ["./run-django.sh"]
We have a similar run-django.sh script in which we call python manage.py makemigrations and python manage.py migrate. I'm assuming yours is similar.
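A minimal sketch of what such a script might look like (the filename and exact steps are assumptions based on the description above):
#!/bin/bash
# run-django.sh -- hypothetical startup script for the Django app
set -e
python manage.py makemigrations
python manage.py migrate
python manage.py runserver 0.0.0.0:8000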
Long story short, you weren't copying in the code from back to /code.
Also, in your docker-compose you don't have a build context for the django service like you do for the vue service.
As for your rabbitmq container failure, you need to stop the service associated with RabbitMQ running on your computer. I get this error when I try to expose a postgresql or redis container and have to run /etc/init.d/postgresql stop or /etc/init.d/redis stop to stop the service running on my machine, in order to avoid collisions on that service's default port.
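For the RabbitMQ case specifically, something along these lines should free the ports (exact service paths vary by distribution):
# see what is bound to the management UI port
sudo lsof -i :15672
# stop the host RabbitMQ service so the container can bind the port
sudo /etc/init.d/rabbitmq-server stop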