I am trying to run Celery in a separate Docker container alongside a Django/Redis Docker setup.
When I run docker-compose up -d --build, my logs (via docker-compose logs --tail=0 --follow) show the celery_1 container repeatedly spamming the console with:
Usage: nc [OPTIONS] HOST PORT - connect
nc [OPTIONS] -l -p PORT [HOST] [PORT] - listen
-e PROG Run PROG after connect (must be last)
-l Listen mode, for inbound connects
-lk With -e, provides persistent server
-p PORT Local port
-s ADDR Local address
-w SEC Timeout for connects and final net reads
-i SEC Delay interval for lines sent
-n Don't do DNS resolution
-u UDP mode
-v Verbose
-o FILE Hex dump traffic
-z Zero-I/O mode (scanning)
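That usage text is what BusyBox nc prints when it is called with missing arguments. It typically comes from an entrypoint wait-for-the-database loop; the entrypoint.sh below is only a sketch of that common pattern (the file and the SQL_HOST/SQL_PORT names are assumptions, since the actual entrypoint isn't shown):

#!/bin/sh
# entrypoint.sh (hypothetical) -- wait for the database before starting the app.
# If SQL_HOST/SQL_PORT are unset (e.g. the env file was never loaded),
# nc is invoked without a host/port and prints its usage text on every
# iteration, producing exactly this kind of log spam.
while ! nc -z "$SQL_HOST" "$SQL_PORT"; do
  sleep 0.1
done
exec "$@"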
I am able to get Celery working correctly by removing the celery service from docker-compose.yaml and manually running docker exec -it backend_1 celery -A proj -l info after docker-compose up -d --build. How do I replicate this manual process within docker-compose.yaml?
My docker-compose.yaml looks like
version: '3.7'
services:
  backend:
    build: ./backend
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - ./backend/app/:/usr/src/app/
    ports:
      - 8000:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
      - redis
    links:
      - db:db
  celery:
    build: ./backend
    command: celery -A proj worker -l info
    volumes:
      - ./backend/app/:/usr/src/app/
    depends_on:
      - db
      - redis
  redis:
    image: redis:5.0.6-alpine
    command: redis-server
    expose:
      - "6379"
  db:
    image: postgres:12.0-alpine
    ports:
      - 5432:5432
    volumes:
      - /tmp/postgres_data:/var/lib/postgresql/data/
I found out the problem was that my celery service could not resolve the SQL host. This was because the SQL host is defined in .env.dev, which the celery service did not have access to. I added

env_file:
  - ./.env.dev

to the celery service and everything worked as expected.
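For reference, the fixed celery service from the compose file above then looks like this (only restating the change described; everything else stays the same):

  celery:
    build: ./backend
    command: celery -A proj worker -l info
    volumes:
      - ./backend/app/:/usr/src/app/
    env_file:
      - ./.env.dev
    depends_on:
      - db
      - redis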
Related
Hey everyone, I am trying to connect my Django project, which runs in a Docker container, to a PostgreSQL database installed on Ubuntu 20.04 outside of the container.
I am able to create a Postgres database inside the Docker container and connect my Django project to that database, but what I want is to connect the local database to the Django project running in the Docker container.
Here is my docker-compose.yml file
version: '3.3'
services:
  # Description (For the postgres database)
  kapediadb:
    image: postgres
    restart: always
    container_name: kapediadb
    # For accessing env data
    environment:
      - POSTGRES_DB=${DB_NAME}
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASSWORD}
  # Description (For django applications)
  kapedia:
    restart: always
    container_name: kapedia
    command:
      - /bin/bash
      - -c
      - |
        python manage.py makemigrations accounts
        python manage.py makemigrations posts
        python manage.py makemigrations quiz
        python manage.py migrate
        gunicorn kapedia.wsgi:application --bind 0.0.0.0:8000
    image: kapedia
    # Description (define your dockerfile location here)
    build: .
    volumes:
      - .:/kapedia
    ports:
      - "8000:8000"
    depends_on:
      - kapediadb
    env_file:
      - .env
# Description (For volumes)
volumes:
  static:
You can simply add this inside the project's container definition:

extra_hosts:
  - "host.docker.internal:172.17.0.1"
To find the IP of the docker0 bridge, i.e. 172.17.0.1 in my case, you can run this in your local machine's terminal:
$> ifconfig docker0
docker0: flags=4099<UP,BROADCAST,MULTICAST> mtu 1500
inet 172.17.0.1 netmask 255.255.0.0 broadcast 172.17.255.255
In postgresql.conf, change listen_addresses to listen_addresses = '*'
In pg_hba.conf, add this line at the end of the file:
host all all 0.0.0.0/0 md5
Now restart the postgresql service using sudo service postgresql restart.
Use the host.docker.internal hostname to connect to the database from the server application.
Ex: jdbc:postgresql://host.docker.internal:5432/bankDB
Note: use sudo nano /etc/postgresql/<your_version>/main/postgresql.conf to open the postgresql.conf file.
This is how you can connect your local database to a Docker container.
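Put together, the kapedia service from the compose file above gains the extra_hosts entry, and the database host in .env points at host.docker.internal. This is only a sketch: the DB_HOST/DB_PORT variable names are assumptions, since the .env contents are not shown.

  kapedia:
    # build, command, volumes, ports, env_file as shown above
    extra_hosts:
      - "host.docker.internal:172.17.0.1"

# .env (assumed variable names)
DB_HOST=host.docker.internal
DB_PORT=5432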
Problems I am having with my Docker instance on a GCE VM:
It keeps restarting
I cannot access tcp:80 after creating several firewall policies
Here are the steps I took in creating the instance with gcloud: Why is Google Compute Engine not running my container?
What I have tried:
To open the port, I created several Firewall policies and even tagged the VM, but I still get this when I run nmap -Pn 34.XX.XXX.XXX:
PORT STATE SERVICE
25/tcp open smtp
110/tcp open pop3
119/tcp open nntp
143/tcp open imap
465/tcp open smtps
563/tcp open snews
587/tcp open submission
993/tcp open imaps
995/tcp open pop3s
# 80/tcp is not open
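For reference, a rule opening port 80 with gcloud looks roughly like this; the rule name and network tag below are hypothetical, since the exact policies I created are not shown:

# allow inbound TCP 80 to instances carrying the http-server tag
gcloud compute firewall-rules create allow-http \
    --direction=INGRESS --action=ALLOW --rules=tcp:80 \
    --source-ranges=0.0.0.0/0 --target-tags=http-server

# tag the VM so the rule applies to it
gcloud compute instances add-tags INSTANCE_NAME --tags=http-server --zone=ZONE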
Alternatively, I tried opening the port from inside the VM after SSHing in:
docker run --rm -d -p 80:80 us-central1-docker.pkg.dev/<project>/<repo>/<image:v2>
curl http://127.0.0.1
#which results in:
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
Also, running docker ps on the VM shows a Restarting status for the container.
If it helps, I have two docker-compose files, one for local and one for prod. The prod file has the following contents:
version: '3'
services:
  web:
    restart: on-failure
    build:
      context: ./
      dockerfile: Dockerfile.prod
    image: <image-name>
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:10-alpine
    env_file:
      - ./.env.prod
    volumes:
      - pgdata:/var/lib/postgresql/data
    expose:
      - 5432
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web
volumes:
  pgdata:
  static_volume:
  media_volume:
I don't know what I am doing wrong at this point. I've killed many instances with different restart policies, but it stays the same. I have the sample setup running nginx successfully, and everything also works locally.
Could I be using the wrong compose file?
Is my project nginx settings wrong?
Why is the container always restarting?
Why can't I open tcp:80 after setting Firewall rules
Does a tag such as :v2 affect the performance of the container in GCE?
I'd appreciate any help. I have wasted hours trying to figure it out.
PS: If it helps, the project is built with Django and Python3.9
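A first diagnostic step for the restart loop (just a sketch of standard Docker commands, run on the VM; replace CONTAINER_ID with whatever docker ps -a reports):

# list all containers, including the one stuck restarting
docker ps -a

# read the last log lines to see why the process keeps exiting
docker logs --tail 100 CONTAINER_ID

# check the restart count and the last exit code
docker inspect --format '{{.RestartCount}} {{.State.ExitCode}}' CONTAINER_ID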
I am trying to run Django tests on GitLab CI but I am getting this error. Last week it was working perfectly, but suddenly I get this error during the test run:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "database" (172.19.0.3) and accepting
TCP/IP connections on port 5432?
My gitlab-ci.yml file looks like this:
image: docker:latest
services:
  - docker:dind
variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2
test:
  stage: test
  image: tiangolo/docker-with-compose
  script:
    - docker-compose -f docker-compose.yml build
    - docker-compose run app python3 manage.py test
My docker-compose.yml looks like this:
version: '3'
volumes:
  postgresql_data:
services:
  database:
    image: postgres:12-alpine
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=123
      - POSTGRES_HOST=database
      - POSTGRES_PORT=5432
    volumes:
      - postgresql_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -e \"SHOW DATABASES;\""]
      interval: 5s
      timeout: 5s
      retries: 5
    ports:
      - "5432"
    restart: on-failure
  app:
    container_name: proj
    hostname: proj
    build:
      context: .
      dockerfile: Dockerfile
    image: sampleproject
    command: >
      bash -c "
      python3 manage.py migrate &&
      python3 manage.py wait_for_db &&
      gunicorn sampleproject.wsgi:application -c ./gunicorn.py
      "
    env_file: .env
    ports:
      - "8000:8000"
    volumes:
      - .:/srv/app
    depends_on:
      - database
      - redis
So why is it refusing the connection? I have no idea, and it was working last week.
I'm not sure if it will help in your case, but I was getting the same issue with docker-compose. What solved it for me was explicitly specifying the hostname for the postgres service:
services:
  database:
    image: postgres:12-alpine
    hostname: database
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=123
      - POSTGRES_HOST=database
      - POSTGRES_PORT=5432
    ...
Could you run docker container ls and check whether the container name of the database is in fact "database"?
You've skipped setting container_name for that container, and it may be that Docker isn't creating it with the default name of the service, i.e. "database", so DNS isn't able to find it under that name on the network.
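A quick way to check both the container name and the DNS aliases on the compose network (a sketch; the <project> prefix depends on the directory or COMPOSE_PROJECT_NAME, so the network name below is an assumption):

# list running containers and their names
docker container ls

# show which containers and aliases are attached to the default compose network
docker network inspect <project>_default --format '{{json .Containers}}'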
Reboot the server. I encounter similar errors on Mac and Linux from time to time when I run more than one container that uses Postgres.
Every time I restart my EC2 server I have to run sudo systemctl start docker and then docker-compose up -d to launch all my containers.
Is there a way to run these two commands automatically when the instance starts?
I have read this answer, and ideally I would like to know how to do the following:
Create a systemd service and enable it. All enabled systemd services will be started on power-up.
Do you know how to create such a systemd service?
[EDIT 1]: Following Chris William's comment, here is what I have done:
Thanks Chris, so I created a docker_boot.service with the following content:
[Unit]
Description=docker boot
After=docker.service
[Service]
Type=simple
Restart=always
RestartSec=1
User=ec2-user
ExecStart=/usr/bin/docker-compose -f docker-compose.yml up
[Install]
WantedBy=multi-user.target
I created it in the /etc/systemd/system folder.
I then did:
sudo systemctl enable docker
sudo systemctl enable docker_boot
When I turn on the server, the only Docker images that are running are certbot/certbot and telethonkids/shinyproxy
Please find below the content of my docker-compose.yml file.
Do you see what is missing so that all images are up and running?
version: "3.5"
services:
rstudio:
environment:
- USER=username
- PASSWORD=password
image: "rocker/tidyverse:latest"
build:
context: ./Docker_RStudio
dockerfile: Dockerfile
volumes:
- /home/ec2-user/R_and_Jupyter_scripts:/home/maxence/R_and_Jupyter_scripts
working_dir: /home/ec2-user/R_and_Jupyter_scripts
container_name: rstudio
ports:
- 8787:8787
jupyter:
image: 'jupyter/datascience-notebook:latest'
ports:
- 8888:8888
volumes:
- /home/ec2-user/R_and_Jupyter_scripts:/home/joyvan/R_and_Jupyter_scripts
working_dir: /home/joyvan/R_and_Jupyter_scripts
container_name: jupyter
shiny:
image: "rocker/shiny:latest"
build:
context: ./Docker_Shiny
dockerfile: Dockerfile
container_name: shiny
ports:
- 3838:3838
nginx:
image: nginx:alpine
container_name: nginx
restart: on-failure
networks:
- net
volumes:
- ./nginx.conf:/etc/nginx/nginx.conf
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
ports:
- 80:80
- 443:443
command: "/bin/sh -c 'while :; do sleep 6h & wait $${!}; nginx -s reload; done & nginx -g \"daemon off;\"'"
depends_on:
- shinyproxy
certbot:
image: certbot/certbot
container_name: certbot
restart: on-failure
volumes:
- ./data/certbot/conf:/etc/letsencrypt
- ./data/certbot/www:/var/www/certbot
entrypoint: "/bin/sh -c 'trap exit TERM; while :; do certbot renew; sleep 12h & wait $${!}; done;'"
shinyproxy:
image: telethonkids/shinyproxy
container_name: shinyproxy
restart: on-failure
networks:
- net
volumes:
- ./application.yml:/opt/shinyproxy/application.yml
- /var/run/docker.sock:/var/run/docker.sock
expose:
- 8080
cron:
build:
context: ./cron
dockerfile: Dockerfile
container_name: cron
volumes:
- ./Docker_Shiny/app:/home
networks:
- net
networks:
net:
name: net
Using Amazon Linux 2, I tried to replicate the issue. Obviously, I don't have all the dependencies to run your exact docker-compose.yml, so I used the docker-compose.yml from here for my verification. The file sets up WordPress with MySQL.
The steps I took were the following (executed as ec2-user in the home folder):
1. Install docker
sudo yum update -y
sudo yum install -y docker
sudo systemctl enable docker
sudo systemctl start docker
2. Install docker-compose
sudo curl -L https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m) -o /usr/bin/docker-compose
sudo chmod +x /usr/bin/docker-compose
3. Create docker-compose.yml
mkdir myapp
Create file ./myapp/docker-compose.yml:
version: '3.3'
services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress
  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    ports:
      - "8000:80"
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress
      WORDPRESS_DB_NAME: wordpress
volumes:
  db_data: {}
4. Create docker_boot.service
The file is different from yours, as there were a few potential issues in your file:
not using absolute paths
ec2-user may have no permissions to run docker
Create file ./myapp/docker_boot.service:
[Unit]
Description=docker boot
After=docker.service
[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/home/ec2-user/myapp
ExecStart=/usr/bin/docker-compose -f /home/ec2-user/myapp/docker-compose.yml up -d --remove-orphans
[Install]
WantedBy=multi-user.target
5. Copy docker_boot.service to systemd
sudo cp -v ./myapp/docker_boot.service /etc/systemd/system
6. Enable and start docker_boot.service
sudo systemctl enable docker_boot.service
sudo systemctl start docker_boot.service
Note: the first start may take some time, as it will pull all required Docker images. Alternatively, start docker-compose manually first to avoid this.
7. Check status of the docker_boot.service
sudo systemctl status docker_boot.service
8. Check if the wordpress is up
curl -L localhost:8000
9. Reboot
Check that docker_boot.service is running after an instance reboot by logging into the instance and using sudo systemctl status docker_boot.service and/or curl -L localhost:8000.
To have the Docker service start at boot, run sudo systemctl enable docker.
To have it then run your docker-compose up -d command, you need to create a new service for this specific action and enable it, with contents similar to the below.
[Unit]
After=docker.service
Description=Docker compose
[Service]
ExecStart=docker-compose up -d
[Install]
WantedBy=multi-user.target
More information for this is available in this post.
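Assuming the unit above is saved as /etc/systemd/system/docker-compose-app.service (the name is hypothetical), it is installed and enabled like this:

# pick up the new unit file and enable it at boot
sudo systemctl daemon-reload
sudo systemctl enable docker-compose-app.service

# optionally start it now and verify
sudo systemctl start docker-compose-app.service
sudo systemctl status docker-compose-app.service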
I have a docker-compose.yml file which I am trying to run inside of Google Cloud Container-Optimized OS (https://cloud.google.com/community/tutorials/docker-compose-on-container-optimized-os). Here's my docker-compose.yml file:
version: '3'
services:
  client:
    build: ./client
    volumes:
      - ./client:/usr/src/app
    ports:
      - "4200:4200"
      - "9876:9876"
    links:
      - api
    command: bash -c "yarn --pure-lockfile && yarn start"
  sidekiq:
    build: .
    command: bundle exec sidekiq
    volumes:
      - .:/api
    depends_on:
      - db
      - redis
      - api
  redis:
    image: redis
    ports:
      - "6379:6379"
  db:
    image: postgres
    ports:
      - "5433:5432"
  api:
    build: .
    command: bash -c "rm -f tmp/pids/server.pid && bundle exec rails s -p 3000 -b '0.0.0.0'"
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    depends_on:
      - db
When I run docker-compose up, I eventually get the error:
Cannot start service api: error while creating mount source path '/rootfs/home/jeremy/my-repo': mkdir /rootfs: read-only file system
Reading further, it appears that /rootfs is locked down (https://cloud.google.com/container-optimized-os/docs/concepts/security), with only a few paths writable. I'd like to mount all my volumes under one of these directories, such as /home. Any suggestions on how I can do this? Is it possible to mount all my volumes under /home/xxxxxx by default without having to change my docker-compose.yml file, for example by passing a flag to docker-compose up?
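To illustrate the idea in the question (only a sketch, not a verified fix for Container-Optimized OS): docker-compose automatically merges a docker-compose.override.yml with the base file, and for volumes an override entry with the same container path takes precedence, so bind-mount sources could be re-pointed at a writable location under /home without editing the original docker-compose.yml. The /home/chronos/my-repo path below is hypothetical:

# docker-compose.override.yml
version: '3'
services:
  client:
    volumes:
      - /home/chronos/my-repo/client:/usr/src/app
  api:
    volumes:
      - /home/chronos/my-repo:/myapp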