Problems I am having with my Docker instance on a GCE VM:
It keeps restarting
I cannot access tcp:80 after creating several firewall policies
Why is Google Compute Engine not running my container? I created the instance with gcloud.
What I have tried:
To open the port, I created several firewall policies and even tagged the VM, but I still get this when I run nmap -Pn 34.XX.XXX.XXX:
PORT    STATE SERVICE
25/tcp  open  smtp
110/tcp open  pop3
119/tcp open  nntp
143/tcp open  imap
465/tcp open  smtps
563/tcp open  snews
587/tcp open  submission
993/tcp open  imaps
995/tcp open  pop3s
# 80/tcp is not open
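For reference, the rules I created were along these lines (the rule name and target tag below are placeholders, not my exact values):

# List existing rules to confirm one matches tcp:80 and the VM's network tags
gcloud compute firewall-rules list --format="table(name,network,direction,allowed,targetTags)"

# Allow ingress on tcp:80 for instances tagged 'http-server' (placeholder tag)
gcloud compute firewall-rules create allow-http --network=default --allow=tcp:80 --source-ranges=0.0.0.0/0 --target-tags=http-server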
Alternatively, I tried opening the port from inside the VM after SSHing in:
docker run --rm -d -p 80:80 us-central1-docker.pkg.dev/<project>/<repo>/<image:v2>
curl http://127.0.0.1
# which results in:
curl: (7) Failed to connect to 127.0.0.1 port 80: Connection refused
Also, running docker ps on the VM shows the container with a Restarting status.
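A restarting container usually logs why it is exiting; on the VM, something like the following shows the last logs and exit code (<container-id> is a placeholder taken from docker ps -a):

docker ps -a
docker logs <container-id>
docker inspect --format='{{.State.ExitCode}} {{.State.Error}}' <container-id>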
If it helps, I have two docker-compose files, one for local and one for prod. The prod file has the following contents:
version: '3'

services:
  web:
    restart: on-failure
    build:
      context: ./
      dockerfile: Dockerfile.prod
    image: <image-name>
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    expose:
      - 8000
    command: gunicorn core.wsgi:application --bind 0.0.0.0:8000
    env_file:
      - ./.env.dev
    depends_on:
      - db
  db:
    image: postgres:10-alpine
    env_file:
      - ./.env.prod
    volumes:
      - pgdata:/var/lib/postgresql/data
    expose:
      - 5432
  nginx:
    build: ./nginx
    volumes:
      - static_volume:/home/app/web/staticfiles
      - media_volume:/home/app/web/mediafiles
    ports:
      - 1337:80
    depends_on:
      - web

volumes:
  pgdata:
  static_volume:
  media_volume:
I don't know what I am doing wrong at this point. I've killed many instances with different restart policies, but the result stays the same. I have the sample nginx setup running successfully, and the project also works locally.
Could I be using the wrong compose file?
Are my project's nginx settings wrong?
Why is the container always restarting?
Why can't I open tcp:80 after setting firewall rules?
Does a tag such as :v2 affect the performance of the container in GCE?
I'd appreciate any help. I have wasted hours trying to figure it out.
PS: If it helps, the project is built with Django and Python 3.9.
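PPS: I bring the stack up with the prod file selected explicitly, roughly like this (the file name docker-compose.prod.yml is assumed here, not my literal name):

docker-compose -f docker-compose.prod.yml up -d --build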
Related
I set up my Django and Postgres containers on my local machine and everything seems fine: the local server is running and the database is running, but I am not able to connect to the created Postgres db.
docker-compose.yml
version: '3'

services:
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/app
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:13.0-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=my_user
      - POSTGRES_PASSWORD=my_password
      - POSTGRES_DB=my_db

volumes:
  postgres_data:
I tried this command:
docker exec -it container_id psql -U postgres
error:
psql: error: could not connect to server: FATAL: role "postgres" does not exist
I am very new to Docker.
You're not using the username and the password you provided in your docker-compose file. Try this and then enter my_password:
docker exec -it container_id psql -U my_user -d my_db --password
Check the official documentation to find out about the PostgreSQL terminal.
I would also like to add that in your compose file you're not publishing any ports for the db container, so it will be unreachable from external sources (you, or anything else that isn't running within the same Docker network).
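For example, if a mapping such as "5432:5432" were added under the db service's ports, you could reach it from the host (a sketch, assuming a psql client is installed on the host):

psql -h 127.0.0.1 -p 5432 -U my_user -d my_db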
I think you need to add environment variables to the project container:
environment:
  - DB_HOST=db
  - DB_NAME=my_db
  - DB_USER=youruser
  - DB_PASS=yourpass
depends_on:
  - db
Add this before depends_on and see if that solves it.
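To verify the variables actually made it into the container, one quick check (using the web service name from your compose file):

docker-compose exec web env | grep DB_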
You should add ports to the docker-compose file for the postgres image, as this would make Postgres accessible from outside the container:

ports:
  - "5432:5432"
You can check out more here: docker-compose for postgres.
I have everything on one network (it talks to another app in another docker-compose file).
But I keep getting this error:
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "db" (192.168.208.3) and accepting
TCP/IP connections on port 5000?
When I change to SQL_PORT=5432 (the default port the Postgres container listens on), the error above disappears and my app is up, but then it has problems connecting to Celery, and in the shell it says the db is not connected.
I have to use 5000 because there is another Postgres db in the other app's docker-compose setup.
I think I'm lost on the internals of Docker networks. I was pretty sure I am supposed to use the published port 5000 for my database.
version: "3.9"
services:
app:
build: .
command: python manage.py runserver 0.0.0.0:8000
container_name: app
environment:
- DEBUG=True
- PYTHONUNBUFFERED=1
- CELERY_BROKER=redis://broker:6379/0
- CELERY_BACKEND=redis://broker:6379/
- APP_BASIC_AUTH_PASSWORD=adPswd12*
- APP_BASIC_AUTH_USER=admin
- APP_TOKEN_AUTH=NHEC_UTILITY
- VTN_API_URL=vtn_app:8000
- VTN_API_TOKEN=NHECAPP_UTILITY
- SQL_PORT=5000
volumes:
- .:/code
ports:
- "9000:8000"
networks:
- app-web-net
depends_on:
- db
- celery-worker
- broker
app_test:
build: .
command: python manage.py test
container_name: app_test
environment:
- DEBUG=True
- PYTHONUNBUFFERED=1
volumes:
- .:/code
depends_on:
- db
db:
image: postgres:10
container_name: app_postgres
ports:
- 5000:5432
volumes:
- db_data:/var/lib/postgresql/data
environment:
POSTGRES_DB: nhec
POSTGRES_USER: nhec
POSTGRES_PASSWORD: nhec
networks:
- app-web-net
celery-worker:
build: .
command: celery -A app worker -l DEBUG
depends_on:
- db
- broker
environment:
CELERY_BROKER_URL: redis://broker:6379/0
networks:
- app-web-net
broker:
image: redis:6-alpine
ports:
- 6379:6379
networks:
- app-web-net
volumes:
db_data: {}
networks:
app-web-net:
driver: bridge
The ports mapping publishes the Postgres server on your localhost port 5000, not to internal services. Your services can reach the database container and its container ports directly within the same network.
If you still want to change the default Postgres port, here's a related answer that you might find helpful: Changing a postgres containers server port in Docker Compose
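As a rough illustration of the two paths (assuming nc is available in the app image):

# container-to-container on the shared network: use the container port
docker-compose exec app nc -zv db 5432
# host-to-container: use the published port
nc -zv 127.0.0.1 5000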
Your container name is actually app_postgres, not db, as specified by this line in your docker-compose file:
container_name: app_postgres
The other thing you can do is change the container name from app_postgres to something that doesn't clash with the Postgres container in your other app's docker-compose file. Both Postgres instances can then have port 5432 exposed for your apps to use.
I am trying to build my Airflow setup using Docker and RabbitMQ, with the rabbitmq:3-management image, and I am able to access the RabbitMQ UI and API.
In Airflow I am building the airflow webserver, airflow scheduler, airflow worker, and airflow flower. The airflow.cfg file is used to configure Airflow.
I am using broker_url = amqp://user:password@127.0.0.1:5672/ and celery_result_backend = amqp://user:password@127.0.0.1:5672/
My docker-compose file is as follows:
version: '3'

services:
  rabbit1:
    image: "rabbitmq:3-management"
    hostname: "rabbit1"
    environment:
      RABBITMQ_ERLANG_COOKIE: "SWQOKODSQALRPCLNMEQG"
      RABBITMQ_DEFAULT_USER: "user"
      RABBITMQ_DEFAULT_PASS: "password"
      RABBITMQ_DEFAULT_VHOST: "/"
    ports:
      - "5672:5672"
      - "15672:15672"
    labels:
      NAME: "rabbitmq1"
  webserver:
    build: "airflow/"
    hostname: "webserver"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "8080:8080"
    depends_on:
      - rabbit1
    command: webserver
  scheduler:
    build: "airflow/"
    hostname: "scheduler"
    restart: always
    environment:
      - EXECUTOR=Celery
    depends_on:
      - webserver
      - flower
      - worker
    command: scheduler
  worker:
    build: "airflow/"
    hostname: "worker"
    restart: always
    depends_on:
      - webserver
    environment:
      - EXECUTOR=Celery
    command: worker
  flower:
    build: "airflow/"
    hostname: "flower"
    restart: always
    environment:
      - EXECUTOR=Celery
    ports:
      - "5555:5555"
    depends_on:
      - rabbit1
      - webserver
      - worker
    command: flower
I am able to build the images using docker-compose. However, I am not able to connect my airflow scheduler to RabbitMQ. I am getting the following error:

consumer: Cannot connect to amqp://user:**@localhost:5672//: [Errno 111] Connection refused.

I have tried using both 127.0.0.1 and localhost.
What am I doing wrong?
From within your airflow containers, you should be able to connect to the service rabbit1. So all you need to do is change amqp://user:**@localhost:5672// to amqp://user:**@rabbit1:5672// and it should work.
Docker compose creates a default network and attaches services that do not explicitly define a network to it.
You do not need to expose the 5672 & 15672 ports on rabbit1 unless you want to be able to access it from outside the application.
Also, generally it is not recommended to build images inside docker-compose.
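As a quick sanity check that the compose network is wired up, you can resolve the service name from inside a container (assuming getent is available in the image):

docker-compose exec webserver getent hosts rabbit1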
I solved this issue by installing the RabbitMQ server on my system with the command sudo apt install rabbitmq-server.
I am trying to run Django tests on GitLab CI. Last week it was working perfectly, but suddenly I am getting this error during the test run:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "database" (172.19.0.3) and accepting
TCP/IP connections on port 5432?
My gitlab-ci file is like this:
image: docker:latest

services:
  - docker:dind

variables:
  DOCKER_HOST: tcp://docker:2375
  DOCKER_DRIVER: overlay2

test:
  stage: test
  image: tiangolo/docker-with-compose
  script:
    - docker-compose -f docker-compose.yml build
    - docker-compose run app python3 manage.py test
My docker-compose file is like this:
version: '3'

volumes:
  postgresql_data:

services:
  database:
    image: postgres:12-alpine
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=123
      - POSTGRES_HOST=database
      - POSTGRES_PORT=5432
    volumes:
      - postgresql_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U $${POSTGRES_USER} -e \"SHOW DATABASES;\""]
      interval: 5s
      timeout: 5s
      retries: 5
    ports:
      - "5432"
    restart: on-failure
  app:
    container_name: proj
    hostname: proj
    build:
      context: .
      dockerfile: Dockerfile
    image: sampleproject
    command: >
      bash -c "
      python3 manage.py migrate &&
      python3 manage.py wait_for_db &&
      gunicorn sampleproject.wsgi:application -c ./gunicorn.py
      "
    env_file: .env
    ports:
      - "8000:8000"
    volumes:
      - .:/srv/app
    depends_on:
      - database
      - redis
So why is it refusing the connection? I have no idea, and it was working last week.
Unsure if it will help in your case, but I was getting the same issue with docker-compose. What solved it for me was explicitly specifying the hostname for postgres:
services:
  database:
    image: postgres:12-alpine
    hostname: database
    environment:
      - POSTGRES_DB=test
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=123
      - POSTGRES_HOST=database
      - POSTGRES_PORT=5432
    ...
Could you run docker container ls and check whether the container name of the database is, in fact, "database"?
You've skipped setting container_name for that container, and it may be that Docker isn't creating it with the service's default name, i.e. "database", so DNS can't find it under that name on the network.
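A minimal sketch of that check (the format string is just one way to show names alongside networks):

docker container ls --format "table {{.Names}}\t{{.Networks}}"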
Reboot the server. I encounter similar errors on Mac and Linux from time to time when I run more than one container that uses Postgres.
I'm trying to connect to an instance of Django running in Docker. As far as I can tell, I've opened the correct port, and docker ps shows TCP on port 8000, but there is no forwarding to the port.
After reading the Docker Compose docs on ports, I would expect this to work (I can view pgAdmin on 127.0.0.1:9000, too).
My docker-compose file:
version: '3'

services:
  postgresql:
    restart: always
    image: postgres:latest
    environment:
      POSTGRES_USER: pguser
      POSTGRES_PASSWORD: pgpassword
      POSTGRES_DB: pgdb
  pgadmin:
    restart: always
    image: dpage/pgadmin4:latest
    environment:
      PGADMIN_DEFAULT_EMAIL: admin@admin.com
      PGADMIN_DEFAULT_PASSWORD: admin
      GUNICORN_THREADS: 4
      PGADMIN_LISTEN_PORT: 9000
    volumes:
      - ./utility/pgadmin4-servers.json:/pgadmin4/servers.json
    depends_on:
      - postgresql
    ports:
      - "9000:9000"
  app:
    build: .
    environment:
      POSTGRES_DB: pgdb
      POSTGRES_USER: pguser
      POSTGRES_PASSWORD: pgpassword
      POSTGRES_HOST: postgresql
    volumes:
      - .:/code
    ports:
      - "127.0.0.1:8000:8000"
      - "5555:5555"
    depends_on:
      - postgresql
      - pgadmin
I have tried the following combinations for the app service's ports, as suggested here:
app:
  ...
  ports:
    - "8000"
    - "8000:8000"
    - "127.0.0.1:8000:8000"
but I still see "This site can't be reached: 127.0.0.1 refused to connect." when trying to access the site.
I'm sure this is a port-forwarding problem and that my Django server is running correctly, because I can docker attach to the container and curl a URL with the expected response.
What am I doing wrong?
I was running my application using the command:
docker-compose run app python3 manage.py runserver 0.0.0.0:8000
With docker-compose run you need to use the --service-ports argument to:
Run command with the service's ports enabled and mapped to the host.
Thus my final command looked like this:
docker-compose run --service-ports app python3 manage.py runserver 0.0.0.0:8000
Documentation on run can be found here.
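Note that this only applies to docker-compose run; a plain up applies the port mappings from the compose file automatically:

docker-compose up app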
If you are running Docker inside a virtual machine, you need to access your application through the virtual machine's IP address, not localhost or 127.0.0.1. Try to get the virtual machine's IP. Also, please specify the platform/environment where you installed and are running Docker.