I've been developing my Python application using Django, PostgreSQL, and Nginx with Docker, orchestrated by Docker Compose on my MacBook M1. Recently I decided to deploy it to a remote server running Ubuntu 22.04 LTS, but I ran into a really weird issue.
My web application says it cannot resolve the host postgresql (a service in the docker-compose file, which you can find below). However, it works fine on my machine. Could this be related to Docker?
It might be because I installed incompatible versions of Docker and Docker Compose on the remote server, or perhaps I used the wrong commands, but I'm not sure, as I don't have much experience with Ubuntu yet.
My MacBook M1's Docker & Docker Compose versions:
~ Docker - 20.10.12
~ Docker Compose - 1.29.2
My remote Ubuntu server's Docker & Docker Compose versions:
~ Docker - 20.10.23
~ Docker Compose - 1.29.2
My docker-compose.yaml file
version: "3.9"
services:
  nginx_server:
    container_name: "nginx_web_server"
    image: crazycoderrr/vdnh_nginx_server:latest  # the nginx image for our project
    ports:
      - "8010:80"
    depends_on:
      - application_service
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    networks:
      - private_network
  application_service:
    container_name: "application_server"
    image: crazycoderrr/vdnh_application:latest  # the image for our application's backend service
    ports:
      - "8000:8000"
    depends_on:
      - postgresql
    env_file: ./env/application.env
    networks:
      - private_network
  postgresql:
    container_name: "postgresql_database"
    image: crazycoderrr/vdnh_postgres_db:latest  # the postgres image for our project
    environment:
      POSTGRES_USER: "postgres_user"
      POSTGRES_PASSWORD: "postgres_password"
      POSTGRES_DB: "vdnh_db"
    ports:
      - "5436:5436"
    command:
      - -p 5436
    networks:
      - private_network
networks:
  private_network:
    external: true
    name: private-project-network  # private network connecting all components of the project
    # must be created manually!!!! <docker network create private-project-network -d bridge>
The application.env file used in my project:
DATABASE_NAME="vdnh_db"
DATABASE_HOST="postgresql"
DATABASE_PORT=5436
DATABASE_USER="postgres_user"
DATABASE_PASSWORD="postgres_password"
If you are familiar with Django, this log message might be useful
File "/usr/local/lib/python3.9/site-packages/django/db/utils.py", line 90, in __exit__
dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 219, in ensure_connection
self.connect()
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 33, in inner
return func(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/django/db/backends/base/base.py", line 200, in connect
application_server | self.connection = self.get_new_connection(conn_params)
File "/usr/local/lib/python3.9/site-packages/django/utils/asyncio.py", line 33, in inner
application_server | return func(*args, **kwargs)
nginx_web_server | exec /docker-entrypoint.sh: exec format error
postgresql_database | exec /usr/local/bin/docker-entrypoint.sh: exec format error
application_server | File "/usr/local/lib/python3.9/site-packages/django/db/backends/postgresql/base.py", line 187, in get_new_connection
application_server | connection = Database.connect(**conn_params)
File "/usr/local/lib/python3.9/site-packages/psycopg2/__init__.py", line 122, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not translate host name "postgresql" to address: Name or service not known
What I did on the remote server:
Installed Docker using this command from the official website:
$ curl -fsSL https://get.docker.com -o get-docker.sh
$ sudo sh get-docker.sh
Then, I installed Docker Compose using the following commands from the official website:
$ sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
$ sudo chmod +x /usr/local/bin/docker-compose
Created private network for my project
$ docker network create private-project-network -d bridge
Then I cloned the GitLab repository with the project and ran docker-compose up -d.
Then I checked the logs using docker-compose logs and figured out that it's broken.
If you have any thoughts on this, please write them down in the comment section :)
Related
So I've managed to build my Docker image locally using docker-compose build and I've pushed the image to my Docker Hub repository.
Now I'm trying to get it working on DigitalOcean so I can host it. I've pulled the correct version of the image and I am trying to run it with the following command:
root@my-droplet:~# docker run --rm -it -p 8000:8000/tcp mycommand/myapp:1.1 (yes, 1.1)
However, I soon ran into these two errors:
...
  File "/usr/local/lib/python3.8/dist-packages/django/db/backends/postgresql/base.py", line 185, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/usr/local/lib/python3.8/dist-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "postgres" to address: Name or service not known
...
  File "/usr/local/lib/python3.8/dist-packages/django/db/backends/base/base.py", line 197, in connect
    self.connection = self.get_new_connection(conn_params)
  File "/usr/local/lib/python3.8/dist-packages/django/utils/asyncio.py", line 26, in inner
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/django/db/backends/postgresql/base.py", line 185, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/usr/local/lib/python3.8/dist-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not translate host name "postgres" to address: Name or service not known
This may be due to how I have divided my application (using docker-compose.yml) into two services and have only pushed the image of the app since my previous post. Here is my docker-compose.yml file:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
Here is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN mkdir usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend
WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build
WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt
EXPOSE 8000
CMD python3 manage.py collectstatic && \
python3 manage.py makemigrations && \
python3 manage.py migrate && \
gunicorn mellon.wsgi --bind 0.0.0.0:8000
and here is a snippet of my settings.py file:
...
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'admin',
        'PASSWORD': '[THE PASSWORD]',
        'HOST': 'db',
        'PORT': 5432,
    }
}
...
How can I fix this issue? I've looked around but can't see what I'm doing wrong.
From what I saw online, I added the following to my project, but it doesn't work either:
Here is my base.py (new file):
import psycopg2
conn = psycopg2.connect("dbname='postgres' user='admin' host='db' password='[THE PASSWORD]'")
Additions to Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN pip3 install psycopg2-binary
COPY base.py base.py
RUN python3 base.py
...
Yet the build fails due to this error:
Traceback (most recent call last):
  File "base.py", line 3, in <module>
    conn = psycopg2.connect("dbname='postgres' user='admin' host='db' password='[THE PASSWORD]'")
  File "/usr/local/lib/python3.8/dist-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
    Is the server running on host "db" ([SOME-IP-ADDRESS]) and accepting
    TCP/IP connections on port 5432?
ERROR: Service 'app' failed to build: The command '/bin/sh -c python3 base.py' returned a non-zero code: 1
me@My-MacBook-Pro-5 mellon %
I'm unsure as to what to try next but I feel like I am making a simple mistake. Why is the connection being refused at the moment?
When you launch a docker-compose.yml file, Compose will automatically create a Docker network for you and attach containers to it. Absent any specific networks: declarations in the file, Compose will name the network default, and then most things Compose creates are prefixed with the current directory name. This setup is described in more detail in Networking in Compose.
docker run doesn't look at the Compose settings at all, so anything you manually do with docker run needs to replicate the settings docker-compose.yml has or would automatically generate. To make a docker run container connect to a Compose network, you need to do something like:
docker network ls # looking for somename_default
docker run \
--rm -it -p 8000:8000/tcp \
--net somename_default \ # <-- add this option
mycommand/myapp:1.1
(The specific error message could not translate host name "postgres" to address hints at a Docker networking setup issue: the database client container isn't on the same network as the database server and so the hostname resolution fails. Contrast with a "connection refused" type error, which would suggest one container is finding the other but either the database isn't running yet or the port number is wrong.)
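The distinction between those two failure modes can be reproduced with nothing but the Python standard library. This is an illustrative sketch, not part of the original answer; the function name, hostname, and port are made up:

```python
import socket

def classify_db_reachability(host, port):
    """Rough diagnostic mirroring the two error families discussed above."""
    try:
        socket.getaddrinfo(host, port)
    except socket.gaierror:
        # "could not translate host name ... to address": the name does not
        # resolve, e.g. the client is not on the same Docker network.
        return "dns-failure"
    try:
        with socket.create_connection((host, port), timeout=2):
            return "connected"
    except OSError:
        # "Connection refused": the name resolves, but nothing is listening
        # (database not ready yet, or wrong port).
        return "connection-refused"

print(classify_db_reachability("postgres", 5432))  # hostname is illustrative
```

Run from outside the Compose network, a bare service name typically lands in the "dns-failure" case; run on the network while the database is still starting up, you'd see "connection-refused" instead.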
In your docker build example, code in a Dockerfile can never connect to a database. At a mechanical level, there's no way to attach it to a Docker network as we did with docker run. At a conceptual level, building an image produces a reusable artifact; if I docker push the image to a registry and docker run it on a different host, that won't have the database setup, or similarly if I delete and recreate the database container locally.
I am trying to run it with the following command:
root@my-droplet:~# docker run --rm -it -p 8000:8000/tcp mycommand/myapp:1.1 (yes, 1.1)
Docker Compose allows you to create a composition of several containers.
Running the django container with docker run defeats its purpose.
What you want to run is:
docker-compose build
docker-compose up
or, in a single command:
docker-compose up --build
Why is the connection being refused at the moment?
Maybe because it is not done at the right moment.
I am not a Django specialist, but your second try looks OK, although your django service is missing the Dockerfile attribute in its compose declaration (a typo, I assume).
For your first try, that error:
django.db.utils.OperationalError: could not translate host name "postgres" to address: Name or service not known
means that Django uses "postgres" as the host of the database.
But according to your compose file, the database can be reached at a host named "db". These don't match.
Regarding your second try that defines the database properties, it looks fine, and it also matches what the official doc states:
In this section, you set up the database connection for Django.
In your project directory, edit the composeexample/settings.py file.
Replace the DATABASES = ... with the following:
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'db',
        'PORT': 5432,
    }
}

These settings are determined by the postgres Docker image specified in docker-compose.yml.
For your second try, that error:

conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
    Is the server running on host "db" ([SOME-IP-ADDRESS]) and accepting
    TCP/IP connections on port 5432?

may mean that the Postgres DB listener is not ready yet when Django tries to initiate the connection.
The depends_on: attribute in your compose file defines the order in which containers start, but it doesn't wait for the application behind a container to be "ready".
Here are two ways to address that kind of scenario:
either add a sleep of some seconds before starting your django service,
or set your django compose conf to restart on failure.
For the first way, configure your django service with a command such as (not tried):
app:
  build: .
  ports:
    - "8000:8000"
  depends_on:
    - db
  command: /bin/bash -c "sleep 30; python3 manage.py collectstatic && python3 manage.py makemigrations && python3 manage.py migrate && gunicorn mellon.wsgi --bind 0.0.0.0:8000"
For the second way, try something like this:
app:
  build: .
  ports:
    - "8000:8000"
  depends_on:
    - db
  restart: on-failure
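A third option, beyond sleeping and restart: on-failure, is a small wait loop baked into the web image's entrypoint. A minimal, untested sketch in Python (the host db and port 5432 are taken from the compose files above; the function name is made up):

```python
import socket
import time

def wait_for_port(host, port, timeout=30.0, interval=1.0):
    """Poll until a TCP listener accepts connections; give up after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with socket.create_connection((host, port), timeout=interval):
                return True
        except OSError:
            time.sleep(interval)
    return False

# In the container entrypoint, before migrate/gunicorn, something like:
#   if not wait_for_port("db", 5432):
#       raise SystemExit("database never became reachable")
```

Newer Compose versions can also express this natively with healthcheck: plus depends_on: condition: service_healthy, but support depends on the compose file format version in use.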
I am currently working on a Django project, which is supposed to send messages to a mobile app via websockets. I used Docker to put the Django project online. Now I want to send planned messages for the first time; for this I use APScheduler (django-apscheduler). I try to save the jobs in my Redis container, but for some reason the connection is refused. Am I basically doing something wrong, or does it hang somewhere?
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
RUN pip install -r requirements.txt
docker-compose.yml
version: '3'
services:
  redis:
    image: redis
    command: redis-server
    ports:
      - '6379:6379'
      - '6380:6380'
  web:
    build: .\experiencesampling
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:\code
    ports:
      - "8000:8000"
  # worker_channels:
  #   build: .\experiencesampling
  #   command: python manage.py runworker channels
  #   volumes:
  #     - .:\code
  #   links:
  #     - redis
  channels:
    build: .\experiencesampling
    command: daphne -p 8001 experiencesampling.asgi:application
    volumes:
      - .:\code
    ports:
      - "8001:8001"
    links:
      - redis
jobs.py (trying to connect to redis). I have already tried 0.0.0.0, localhost, and redis://redis for the "host":
jobstores = {
    'default': RedisJobStore(jobs_key='dispatched_trips_jobs', run_times_key='dispatched_trips_running', host='redis', port=6380)
}
executors = {
    'default': ThreadPoolExecutor(20),
    'processpool': ProcessPoolExecutor(5)
}
job_defaults = {
    'coalesce': False,
    'max_instances': 3
}
#jobStore.remove_all_jobs()
scheduler = BackgroundScheduler(jobstores=jobstores, executors=executors, job_defaults=job_defaults)
register_events(scheduler)
scheduler.start()
print("Scheduler started!")
Error (appears multiple times)
web_1 |
web_1 | Scheduler started!
web_1 | Error getting due jobs from job store 'default': Error 111 connecting to redis:6380. Connection refused.
web_1 | System check identified no issues (0 silenced).
web_1 | July 11, 2020 - 19:00:30
channels_1 | 2020-07-11 19:00:29,866 WARNING Error getting due jobs from job store 'default': Error 111 connecting to redis:6380. Connection refused.
web_1 | Django version 3.0.8, using settings 'experiencesampling.settings'
web_1 | Starting ASGI/Channels version 2.3.1 development server at http://0.0.0.0:8000/
web_1 | Quit the server with CONTROL-C.
Getting an error while installing GeoDjango on Docker with a Postgres DB.
I'm completely new to Docker and I'm setting up my project on it. I don't know whether this error relates to Django or to Postgres.
I found this error while installing in Docker:

AttributeError: /usr/lib/libgdal.so.1: undefined symbol: OGR_F_GetFieldAsInteger64
docker-compose.yml
version: '3'
services:
  postgres:
    restart: always
    image: postgres:alpine
    volumes:
      - ./postgres_gis/gis_db:/home/dev/gis_db.sql
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: Dev#mishra123
      POSTGRES_DB: gis_db
    expose:
      - "5432"
  web:
    build: ./HomePage
    restart: always
    expose:
      - "8000"
    volumes:
      - ./HomePage:/home/dev/app/HomePage
    depends_on:
      - postgres
    environment:
      - DEBUG=1
web/Dockerfile
FROM python:3.6.2-slim
RUN groupadd dev && useradd -m -g dev -s /bin/bash dev
RUN echo "dev ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers
RUN mkdir -p /home/dev/app/HomePage
RUN chown -R dev:dev /home/dev/app
RUN chmod -R +x+r+w /home/dev/app
WORKDIR /home/dev/app/HomePage
RUN apt-get update && apt-get -y upgrade
COPY requirements.txt /home/dev/app/HomePage
RUN apt-get install -y python3-dev python3-pip
RUN apt-get install -y libpq-dev
RUN apt-get install -y binutils libproj-dev gdal-bin
RUN pip install --upgrade pip
RUN pip install -r requirements.txt
COPY ./docker-entrypoint.sh /
RUN chmod +x /docker-entrypoint.sh
USER dev
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
web/docker-entrypoint.sh
#!/bin/sh
python manage.py makemigrations
python manage.py migrate
python manage.py runserver 0.0.0.0:8000
docker-compose up output:
root@BlackCat:/home/dev/Project-EX/django-PR# docker-compose up
Starting django-pr_postgres_1 ... done
Starting django-pr_web_1 ... done
Attaching to django-pr_postgres_1, django-pr_web_1
postgres_1 | 2019-12-12 16:34:43.907 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
postgres_1 | 2019-12-12 16:34:43.908 UTC [1] LOG: listening on IPv6 address "::", port 5432
postgres_1 | 2019-12-12 16:34:44.099 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
postgres_1 | 2019-12-12 16:34:44.672 UTC [18] LOG: database system was shut down at 2019-12-11 18:45:45 UTC
postgres_1 | 2019-12-12 16:34:44.912 UTC [1] LOG: database system is ready to accept connections
web_1 | Traceback (most recent call last):
web_1 | File "manage.py", line 22, in <module>
web_1 | execute_from_command_line(sys.argv)
web_1 | File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
web_1 | utility.execute()
web_1 | File "/usr/local/lib/python3.6/site-packages/django/core/management/__init__.py", line 377, in execute
web_1 | django.setup()
web_1 | File "/usr/local/lib/python3.6/site-packages/django/__init__.py", line 24, in setup
web_1 | apps.populate(settings.INSTALLED_APPS)
web_1 | File "/usr/local/lib/python3.6/site-packages/django/apps/registry.py", line 114, in populate
web_1 | app_config.import_models()
web_1 | File "/usr/local/lib/python3.6/site-packages/django/apps/config.py", line 211, in import_models
web_1 | self.models_module = import_module(models_module_name)
web_1 | File "/usr/local/lib/python3.6/importlib/__init__.py", line 126, in import_module
web_1 | return _bootstrap._gcd_import(name[level:], package,
web_1 | func = self.__getitem__(name)
web_1 | File "/usr/local/lib/python3.6/ctypes/__init__.py", line 366, in __getitem__
web_1 | func = self._FuncPtr((name_or_ordinal, self))
web_1 | AttributeError: /usr/lib/libgdal.so.1: undefined symbol: OGR_F_GetFieldAsInteger64
web_1 | Watching for file changes with StatReloader
The problem you are encountering is that your version of GDAL is too old. Your Dockerfile is built on python:3.6.2-slim, which is based on Debian Jessie and installs GDAL version 1.10.1. The OGR_F_GetFieldAsInteger64 function was introduced in v2.0.0.
According to the GDAL package page at debian.org, you need a newer version of Debian (stretch, buster, bullseye will all work). As such, I would recommend that you change your Dockerfile to use python:3.8.0-slim-buster or newer. Please check the hub.docker.com python page for more information
Also, as mentioned in the comments, your Dockerfile should only have one of CMD or ENTRYPOINT, but not both. Since your entrypoint.sh does what CMD does and more, I'd just remove CMD and stick with ENTRYPOINT
Disclosure: I work for EnterpriseDB (EDB)
I had a similar issue, without Docker. I fixed it with this:
sudo add-apt-repository ppa:ubuntugis/ubuntugis-unstable
sudo apt-get update
sudo apt-get install gdal-bin
More info: https://developers.planet.com/planetschool/gdal-qgis-installation-setup/
I'm having an issue in my Docker bash: I'm trying to create a superuser on Django using docker-compose exec web python manage.py createsuperuser, but I get the error below.
Traceback (most recent call last):
File "docker-compose", line 3, in <module>
File "compose\cli\main.py", line 68, in main
File "compose\cli\main.py", line 118, in perform_command
File "compose\cli\main.py", line 431, in exec_command
File "compose\cli\main.py", line 1236, in call_docker
File "distutils\spawn.py", line 220, in find_executable
File "ntpath.py", line 85, in join
UnicodeDecodeError: 'ascii' codec can't decode byte 0xe9 in position 8: ordinal not in range(128)
Failed to execute script docker-compose
I think it's because my PostgreSQL database is encoded in 'ascii' instead of utf-8. What are the commands to convert my psql database to utf-8?
Dockerfile
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN mkdir /config
ADD /config/requirements.pip /config/
RUN pip install -r /config/requirements.pip
RUN mkdir /src;
WORKDIR /src
Docker-compose.yml
version: '2'
services:
  nginx:
    image: nginx:latest
    container_name: NGINX
    ports:
      - "8000:8000"
    volumes:
      - ./src:/src
      - ./config/nginx:/etc/nginx/conf.d
      - /static:/static
      - /media:/media
    depends_on:
      - web
  web:
    restart: always
    build: .
    container_name: DJANGO
    command: bash -c "python manage.py collectstatic --noinput && python manage.py makemigrations && python manage.py migrate && gunicorn oqtor.wsgi -b 0.0.0.0:8000"
    depends_on:
      - db
    volumes:
      - ./src:/src
      - /static:/static
      - /media:/media
    expose:
      - "8000"
  db:
    image: postgres:latest
    container_name: PSQL
You have an accented character ("é") in your docker-compose.yml.
Edit: you probably have accents in the involved paths, and you are probably hitting a Python bug on your host. You can try updating Python on the host (docker-compose is written in Python).
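For what it's worth, the byte 0xe9 in the traceback is exactly the latin-1 encoding of "é", which Python's ascii codec cannot decode. A standalone illustration (not docker-compose's actual code path):

```python
# "é" encoded as latin-1 is the single byte 0xe9 seen in the traceback.
raw = "é".encode("latin-1")
assert raw == b"\xe9"

try:
    raw.decode("ascii")
except UnicodeDecodeError as exc:
    # 'ascii' codec can't decode byte 0xe9 in position 0: ordinal not in range(128)
    print(exc)
```

So any "é" in a path that docker-compose joins internally triggers exactly this error on a Python built to default to ascii.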
I'm creating a docker-compose config for a Django app. The Dockerfile builds successfully, but when I compose it up, Django reports an issue: it cannot connect to Postgres.
I ran docker-compose run web bash and found that neither Redis nor Postgres could be connected to.
My docker-compose.yml file
web:
  build: .
  ports:
    - "8000:8000"
  environment:
    - 'DATABASE_HOST=db'
    - 'DATABASE_NAME=mydb'
    - 'DATABASE_USER=root'
    - 'DATABASE_PASSWORD=root'
  links:
    - db
db:
  image: postgres:9.1
When running sudo docker-compose up I got the following error:
web_1 | File "/usr/local/lib/python2.7/site-packages/django/db/backends/postgresql/base.py", line 175, in get_new_connection
web_1 | connection = Database.connect(**conn_params)
web_1 | File "/usr/local/lib/python2.7/site-packages/psycopg2/__init__.py", line 164, in connect
web_1 | conn = _connect(dsn, connection_factory=connection_factory, async=async)
web_1 | django.db.utils.OperationalError: could not connect to server: Connection refused
web_1 | Is the server running on host "localhost" (::1) and accepting
web_1 | TCP/IP connections on port 5432?
web_1 | could not connect to server: Connection refused
web_1 | Is the server running on host "localhost" (127.0.0.1) and accepting
web_1 | TCP/IP connections on port 5432?
I also built a cluster with docker-compose; it will probably help you and answer your problem (here is the repo). You can see the docker-compose.yml file and the Django settings file (I marked the lines you need).
You can also clone the repo and get django, angular2, postgresql, and nginx containers, all linked together already.
You are linking your web container with the postgres container, but you are not defining the database name, password, and user.
web:
  build: .
  ports:
    - "8000:8000"
  links:
    - db
db:
  restart: always
  image: postgres:9.1
  ports:
    - "5432:5432"
  volumes:
    - pgvolume:/var/lib/postgresql/data
  environment:
    - POSTGRES_PASSWORD=root
    - POSTGRES_DB=aiotadb
    - POSTGRES_USER=root
data:
  restart: always
  image: postgres:9.1
  volumes:
    - /var/lib/postgresql
  command: tail -f /dev/null
Also, if you already define your database options in your settings file, you don't need to declare them as env variables in the web container.
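Conversely, if you do want the env-variable route, a hypothetical settings.py fragment could read the values the compose file injects. The variable names come from the question's compose file; the fallback defaults here are assumptions:

```python
import os

# Hypothetical settings.py fragment: read the values the compose file injects
# (DATABASE_HOST etc.); the fallback defaults are assumptions for illustration.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": os.environ.get("DATABASE_NAME", "mydb"),
        "USER": os.environ.get("DATABASE_USER", "root"),
        "PASSWORD": os.environ.get("DATABASE_PASSWORD", "root"),
        "HOST": os.environ.get("DATABASE_HOST", "db"),  # the compose service name
        "PORT": int(os.environ.get("DATABASE_PORT", "5432")),
    }
}
```

This keeps a single source of truth: the compose file sets the values, and settings.py only falls back to defaults when they are absent.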
version: '3'
services:
  basicproject:
    build: .
    container_name: basicproject-container
    depends_on:
      - postgres
    ports:
      - "8000:8000"
  postgres:
    image: postgres:9.4
    ports:
      - "5432"
    environment:
      - POSTGRES_USER=test
      - POSTGRES_PASSWORD=testing
      - POSTGRES_DB=test_db
Add a dependency to your 'web' service like below:

  depends_on:
    - db