Docker / ptvsd / Django: Failed to attach (connect ECONNREFUSED) - django

I am trying to add the ptvsd debugging plugin to my existing Dockerized application, which runs on Google Compute Engine with Ubuntu 18.04. The entire application is containerized with Docker Compose. Back end: Django.
manage.py:
#!/usr/bin/env python
"""Django's command-line utility for administrative tasks."""
import os
import sys

import ptvsd


def main():
    os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'djangoserver.settings')
    try:
        from django.core.management import execute_from_command_line
    except ImportError as exc:
        raise ImportError(
            "Couldn't import Django. Are you sure it's installed and "
            "available on your PYTHONPATH environment variable? Did you "
            "forget to activate a virtual environment?"
        ) from exc
    ptvsd.enable_attach(address=('0.0.0.0', 5050))
    execute_from_command_line(sys.argv)


if __name__ == '__main__':
    main()
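As an aside (my addition, not part of the original post): if manage.py is ever run under Django's autoreloader, enable_attach can execute twice and fail with "address already in use". A common guard, sketched here under that assumption, is to enable ptvsd only in the main process:

import os

import ptvsd

# Guard against Django's autoreloader forking a second process that would
# try to bind port 5050 again (RUN_MAIN is set in the reloader's child).
if os.environ.get('RUN_MAIN') != 'true':
    ptvsd.enable_attach(address=('0.0.0.0', 5050))
    # ptvsd.wait_for_attach()  # optional: pause startup until the IDE attaches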
launch.json:
{
    "name": "Attach Remote Django",
    "type": "python",
    "request": "attach",
    "pathMappings": [
        {
            "localRoot": "${workspaceRoot}/djangoserver",
            "remoteRoot": "/usr/src/"
        }
    ],
    "port": 5050,
    "secret": "secret",
    "host": "localhost"
}
docker-compose.yml
web:
  build: ./djangoserver
  command: gunicorn djangoserver.wsgi:application --noreload --nothreading --bind 0.0.0.0:8001
  volumes:
    - ./djangoserver:/usr/src
  # entrypoint: /entrypoint.sh
  ports:
    - 5050:5050
  expose:
    - 8001
  env_file: .env.dev
  depends_on:
    - db_development_2
  stdin_open: true
Whenever I build and run it with docker-compose, everything starts without any problems, but later on, when I try to connect to the server with the debugger, I receive the following error:
Failed to attach (connect ECONNREFUSED 127.0.0.1:5050)
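A quick way to check whether anything is actually listening on the published port (illustrative commands, assuming standard tooling on the Ubuntu host; not part of the original report):

docker-compose ps              # is the web service up at all?
sudo ss -tlnp | grep 5050      # is anything bound to port 5050 on the host?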

Related

Docker image not running, db error: django.db.utils.OperationalError could not translate host name "postgres" to address: Name or service not known

So I've managed to build my Docker image locally using docker-compose build and I've pushed the image to my Docker Hub repository.
Now I'm trying to get it working on DigitalOcean so I can host it. I've pulled the correct version of the image and I am trying to run it with the following command:
root@my-droplet:~# docker run --rm -it -p 8000:8000/tcp mycommand/myapp:1.1 (yes, 1.1)
However I soon run into these two errors:
...
  File "/usr/local/lib/python3.8/dist-packages/django/db/backends/postgresql/base.py", line 185, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/usr/local/lib/python3.8/dist-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "postgres" to address: Name or service not known
...
  File "/usr/local/lib/python3.8/dist-packages/django/db/backends/base/base.py", line 197, in connect
    self.connection = self.get_new_connection(conn_params)
  File "/usr/local/lib/python3.8/dist-packages/django/utils/asyncio.py", line 26, in inner
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.8/dist-packages/django/db/backends/postgresql/base.py", line 185, in get_new_connection
    connection = Database.connect(**conn_params)
  File "/usr/local/lib/python3.8/dist-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
django.db.utils.OperationalError: could not translate host name "postgres" to address: Name or service not known
This may be due to how I have divided my application (using docker-compose.yml) into two services, since I have only pushed the image of the app. Here is my docker-compose.yml file:
version: '3'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=(adminname)
      - POSTGRES_PASSWORD=(adminpassword)
      - CLOUDINARY_URL=(cloudinarykey)
  app:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
Here is my Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest

RUN mkdir -p /usr/src/mprova
WORKDIR /usr/src/mprova
COPY frontend ./frontend
COPY backend ./backend

WORKDIR /usr/src/mprova/frontend
RUN npm install
RUN npm run build

WORKDIR /usr/src/mprova/backend
ENV DJANGO_PRODUCTION=True
RUN pip3 install -r requirements.txt

EXPOSE 8000
CMD python3 manage.py collectstatic && \
    python3 manage.py makemigrations && \
    python3 manage.py migrate && \
    gunicorn mellon.wsgi --bind 0.0.0.0:8000
and here is a snippet of my settings.py file:
...
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'admin',
        'PASSWORD': '[THE PASSWORD]',
        'HOST': 'db',
        'PORT': 5432,
    }
}
...
How can I fix this issue? I've looked around but can't see what I'm doing wrong.
From what I saw online, I added the following to my project, but it doesn't work either:
Here is my base.py (new file):
import psycopg2
conn = psycopg2.connect("dbname='postgres' user='admin' host='db' password='[THE PASSWORD]'")
Additions to Dockerfile:
FROM (MY FRIENDS ACCOUNT)/django-npm:latest
RUN pip3 install psycopg2-binary
COPY base.py base.py
RUN python3 base.py
...
Yet the build fails due to this error:
Traceback (most recent call last):
  File "base.py", line 3, in <module>
    conn = psycopg2.connect("dbname='postgres' user='admin' host='db' password='[THE PASSWORD]'")
  File "/usr/local/lib/python3.8/dist-packages/psycopg2/__init__.py", line 127, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
    Is the server running on host "db" ([SOME-IP-ADDRESS]) and accepting
    TCP/IP connections on port 5432?
ERROR: Service 'app' failed to build: The command '/bin/sh -c python3 base.py' returned a non-zero code: 1
me@My-MacBook-Pro-5 mellon %
I'm unsure as to what to try next but I feel like I am making a simple mistake. Why is the connection being refused at the moment?
When you launch a docker-compose.yml file, Compose will automatically create a Docker network for you and attach containers to it. Absent any specific networks: declarations in the file, Compose will name the network default, and then most things Compose creates are prefixed with the current directory name. This setup is described in more detail in Networking in Compose.
docker run doesn't look at the Compose settings at all, so anything you manually do with docker run needs to replicate the settings docker-compose.yml has or would automatically generate. To make a docker run container connect to a Compose network, you need to do something like:
docker network ls   # looking for somename_default

docker run \
  --rm -it -p 8000:8000/tcp \
  --net somename_default \
  mycommand/myapp:1.1

Here the --net option is the addition.
(The specific error message could not translate host name "postgres" to address hints at a Docker networking setup issue: the database client container isn't on the same network as the database server and so the hostname resolution fails. Contrast with a "connection refused" type error, which would suggest one container is finding the other but either the database isn't running yet or the port number is wrong.)
In your docker build example, code in a Dockerfile can never connect to a database. At a mechanical level, there's no way to attach it to a Docker network as we did with docker run. At a conceptual level, building an image produces a reusable artifact; if I docker push the image to a registry and docker run it on a different host, that won't have the database setup, or similarly if I delete and recreate the database container locally.
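To make the distinction concrete, here is roughly the run-time networking that docker-compose up automates for you (a sketch with assumed names, mirroring the commands above):

docker network create somename_default
docker run -d --net somename_default --network-alias db postgres
docker run --rm -it -p 8000:8000/tcp --net somename_default mycommand/myapp:1.1

The --network-alias flag stands in for what Compose does automatically: making each container resolvable by its service name on the shared network.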
I am trying to run it with the following command:
root@my-droplet:~# docker run --rm -it -p 8000:8000/tcp mycommand/myapp:1.1 (yes, 1.1)
Docker Compose allows you to create a composition of several containers. Running the django container alone with docker run defeats its purpose.
What you want to run is:
docker-compose build
docker-compose up
or in a single command :
docker-compose up --build
Why is the connection being refused at the moment?
Maybe because the connection attempt does not happen at the right moment.
I am not a Django specialist, but your second try looks OK, although your django service is missing the Dockerfile attribute in the compose declaration (a typing error, I assume).
For your first try, this error:
django.db.utils.OperationalError: could not translate host name "postgres" to address: Name or service not known
means that django uses "postgres" as the host of the database.
But according to your compose file, the database is reachable at a host named "db". These don't match.
Regarding your second try that defines the database properties, it looks fine and it also matches what the official doc states:
In this section, you set up the database connection for Django.
In your project directory, edit the composeexample/settings.py file.
Replace the DATABASES = ... with the following:
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'postgres',
        'USER': 'postgres',
        'PASSWORD': 'postgres',
        'HOST': 'db',
        'PORT': 5432,
    }
}
These settings are determined by the postgres Docker image specified in docker-compose.yml.
For your second try, this error:
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not connect to server: Connection refused
    Is the server running on host "db" ([SOME-IP-ADDRESS]) and accepting
    TCP/IP connections on port 5432?
may mean that the Postgres listener is not ready yet when Django tries to initialize the connection.
The depends_on: attribute of your compose file defines the order in which containers start, but it doesn't wait until the application behind a container is "ready".
Here are two ways to address that kind of scenario:
either add a sleep of some seconds before starting your django service,
or set your django compose conf to restart on failure.
For the first way, configure your django service with a command such as this (not tried):
app:
  build: .
  ports:
    - "8000:8000"
  depends_on:
    - db
  command: /bin/bash -c "sleep 30; python3 manage.py collectstatic && python3 manage.py makemigrations && python3 manage.py migrate && gunicorn mellon.wsgi --bind 0.0.0.0:8000"
For the second way, try something like this:
app:
  build: .
  ports:
    - "8000:8000"
  depends_on:
    - db
  restart: on-failure
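A third, more deterministic variant (my own sketch, not tried against this project; it assumes the credentials from the settings.py above and a new helper file named wait_for_db.py) is to poll the database until it accepts connections before running migrations:

# wait_for_db.py: block until Postgres accepts connections, then exit 0
# so the rest of the container command can proceed.
import sys
import time

import psycopg2

for _ in range(30):  # try for roughly 30 seconds
    try:
        psycopg2.connect(
            dbname='postgres',
            user='admin',
            password='[THE PASSWORD]',
            host='db',
            port=5432,
        ).close()
        sys.exit(0)  # database is ready
    except psycopg2.OperationalError:
        time.sleep(1)  # not ready yet, retry

sys.exit('database never became available')

The service command would then start with python3 wait_for_db.py && python3 manage.py migrate && ... instead of a fixed sleep.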

How can I connect from Django to Docker Redis Container using BackgroundScheduler?

I am currently working on a Django project that is supposed to send messages to a mobile app via websockets. I used Docker to put the Django project online. Now I want to send scheduled messages for the first time; for this I use APScheduler (django-apscheduler). I try to save the jobs in my Redis container, but for some reason the connection is refused. Am I fundamentally doing something wrong, or is it failing somewhere?
Dockerfile:
FROM python:3
ENV PYTHONUNBUFFERED 1
RUN mkdir /code
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/
docker-compose.yml
version: '3'
services:
  redis:
    image: redis
    command: redis-server
    ports:
      - '6379:6379'
      - '6380:6380'
  web:
    build: .\experiencesampling
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:\code
    ports:
      - "8000:8000"
  # worker_channels:
  #   build: .\experiencesampling
  #   command: python manage.py runworker channels
  #   volumes:
  #     - .:\code
  #   links:
  #     - redis
  channels:
    build: .\experiencesampling
    command: daphne -p 8001 experiencesampling.asgi:application
    volumes:
      - .:\code
    ports:
      - "8001:8001"
    links:
      - redis
jobs.py (trying to connect to redis); I have already tried 0.0.0.0, localhost, and redis://redis for the "host":
from apscheduler.executors.pool import ProcessPoolExecutor, ThreadPoolExecutor
from apscheduler.jobstores.redis import RedisJobStore
from apscheduler.schedulers.background import BackgroundScheduler
from django_apscheduler.jobstores import register_events

jobstores = {
    'default': RedisJobStore(jobs_key='dispatched_trips_jobs',
                             run_times_key='dispatched_trips_running',
                             host='redis', port=6380)
}
executors = {
    'default': ThreadPoolExecutor(20),
    'processpool': ProcessPoolExecutor(5)
}
job_defaults = {
    'coalesce': False,
    'max_instances': 3
}

# jobStore.remove_all_jobs()
scheduler = BackgroundScheduler(jobstores=jobstores, executors=executors,
                                job_defaults=job_defaults)
register_events(scheduler)
scheduler.start()
print("Scheduler started!")
Error (appears multiple times)
web_1 |
web_1 | Scheduler started!
web_1 | Error getting due jobs from job store 'default': Error 111 connecting to redis:6380. Connection refused.
web_1 | System check identified no issues (0 silenced).
web_1 | July 11, 2020 - 19:00:30
channels_1 | 2020-07-11 19:00:29,866 WARNING Error getting due jobs from job store 'default': Error 111 connecting to redis:6380. Connection refused.
web_1 | Django version 3.0.8, using settings 'experiencesampling.settings'
web_1 | Starting ASGI/Channels version 2.3.1 development server at http://0.0.0.0:8000/
web_1 | Quit the server with CONTROL-C.

debugging a django app (locally) in VS Code can't find my database container db

When I run a debug configuration in VS Code:
"configurations": [
{
"name": "Django Tests",
"type": "python",
"request": "launch",
"program": "${workspaceFolder}/src/manage.py",
"args": [
"test",
"src"
],
"django": true
}
I get this error:
django.db.utils.OperationalError: could not translate host name "db" to address: Name or service not known
I don't have this problem when I run the app normally:
./src/manage.py runserver
db is my containerized postgresql database.
services:
  db:
    container_name: db
    build: ./postgresql
    expose:
      - "5432"
    ports:
      - 5432:5432
...
Perhaps I'm not using the proper python path? I should be running this from within my python virtual environment but I'm not sure how to set that in the VS Code configuration, if that's the problem.
Here's my settings.py
POSTGRES_HOST = os.environ.get('POSTGRES_HOST', '127.0.0.1')
POSTGRES_PORT = os.environ.get('POSTGRES_PORT', '5432')
It seems like db is visible locally:
$ docker logs db
PostgreSQL Database directory appears to contain a database; Skipping initialization
Here's the command VS Code runs when I try to choose the debug configuration:
(venv) me@host:/path/to/myproject/$ env DEBUGPY_LAUNCHER_PORT=36867 /path/to/myproject/venv/bin/python3 /home/me/.vscode/extensions/ms-python.python-2020.3.71659/pythonFiles/lib/python/debugpy/no_wheels/debugpy/launcher /path/to/myproject/src/manage.py runserver --noreload
You need to make sure VS Code can resolve the host db, or make it use 127.0.0.1 directly.
In your case, overriding the environment with "envFile": "" did the trick.
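For illustration, the override could look something like this in the debug configuration (a sketch; the variable names are taken from the settings.py shown above, and the values assume the published 5432 port):

{
    "name": "Django Tests",
    "type": "python",
    "request": "launch",
    "program": "${workspaceFolder}/src/manage.py",
    "args": ["test", "src"],
    "django": true,
    "envFile": "",
    "env": {
        "POSTGRES_HOST": "127.0.0.1",
        "POSTGRES_PORT": "5432"
    }
}

Since the debugger launches manage.py on the host rather than inside the Compose network, the container hostname db is not resolvable there, but the published port on 127.0.0.1 is.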

docker-compose django app cannot find postgresql db

I'm trying to create a Django app in a docker container. The app would use a postgres db with the postgis extension, which I have in another container. I'm trying to solve this using docker-compose but cannot get it working.
I can get the app working without a container, with the database containerized, just fine. I can also get the app working in a container using a sqlite db (a file included in the image, without external container dependencies). Whatever I do, it can't find the database.
My docker-compose file:
version: '3.7'
services:
  postgis:
    image: kartoza/postgis:12.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    ports:
      - "${POSTGRES_PORT}:5432"
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    env_file:
      - .env
  web:
    build: .
    # command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    command: sh -c "python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    # restart: on-failure
    ports:
      - "${APP_PORT}:8000"
    volumes:
      - .:/code
    depends_on:
      - postgis
    env_file:
      - .env
    environment:
      WAIT_HOSTS: 0.0.0.0:${POSTGRES_PORT}
volumes:
  postgres_data:
    name: ${POSTGRES_VOLUME}
My Dockerfile (of the app):
# Pull base image
FROM python:3.7
LABEL maintainer="yb.leeuwen@portofrotterdam.com"

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1

# Install dependencies
# RUN pip install pipenv
RUN pip install pipenv
RUN mkdir /code/
COPY . /code
WORKDIR /code/
RUN pipenv install --system
# RUN pipenv install pygdal
RUN apt-get update &&\
    apt-get install -y binutils libproj-dev gdal-bin python-gdal python3-gdal postgresql-client

# Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait /wait
RUN chmod +x /wait

# Set work directory
WORKDIR /code/app
# RUN python manage.py migrate --no-input
# CMD ["python", "manage.py", "migrate", "--no-input"]
# RUN cd ${WORKDIR}

# If we want to run docker by itself we need to use the lines below,
# but if we want to run from docker-compose we'll set the command there
EXPOSE 8000
# CMD /wait && python manage.py migrate --no-input
# CMD ["python", "manage.py", "migrate", "--no-input"]
# CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
My .env file:
# POSTGRES
POSTGRES_PORT=25432
POSTGRES_USER=username
POSTGRES_PASSWORD=pass
POSTGRES_DB=db
POSTGRES_VOLUME=data
POSTGRES_HOST=localhost
# GEOSERVER
# DJANGO
APP_PORT=8000
And finally, in the settings.py of my django app:
DATABASES = {
    'default': {
        'ENGINE': 'django.contrib.gis.db.backends.postgis',
        'NAME': os.getenv('POSTGRES_DBNAME'),
        'USER': os.getenv('POSTGRES_USER'),
        'PASSWORD': os.getenv('POSTGRES_PASS'),
        'HOST': os.getenv("POSTGRES_HOST", "localhost"),
        'PORT': os.getenv('POSTGRES_PORT')
    }
}
I've tried quite a lot of things (as you can see in the comments). I realized that docker-compose doesn't seem to wait until postgres is fully up and accepting requests, so I tried to build in a wait step (as suggested on the docker-compose-wait website). I first had the migrations and the server start inside the Dockerfile (migrations in the build process, runserver as the startup command), but since that requires postgres and nothing was waiting for it, it didn't work. I finally moved it all out to the docker-compose.yml file, but I still can't get it working.
The error I get:
web_1 | Is the server running on host "localhost" (127.0.0.1) and accepting
web_1 | TCP/IP connections on port 25432?
web_1 | could not connect to server: Cannot assign requested address
web_1 | Is the server running on host "localhost" (::1) and accepting
web_1 | TCP/IP connections on port 25432?
Does anybody have an idea why this isn't working?
I see that in the settings.py of your django app, you are connecting to Postgres via:
'HOST': os.getenv("POSTGRES_HOST", "localhost"),
while in .env you are setting the value of POSTGRES_HOST to localhost. This means that the web container is trying to reach the Postgres server postgis at localhost, which should not be the case.
In order to solve this problem, simply update your .env file to be like this:
POSTGRES_PORT=5432
...
POSTGRES_HOST=postgis
...
The reason is that in your case, docker-compose brings up two containers, postgis and web, inside the same Docker network, and they can reach each other via their DNS names, i.e. postgis and web respectively.
Regarding the port, the web container can reach postgis at port 5432 but not 25432, while your host machine can reach the database at port 25432 but not 5432.
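As a quick sanity check (my own suggestion; it assumes the postgresql-client package installed in the Dockerfile above and the credentials from the .env file), you can try the connection from inside the web container:

docker-compose exec web psql -h postgis -p 5432 -U username -d db -c 'SELECT 1;'
# prompts for the password from .env; succeeding here confirms the
# service name and internal port are the right pair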
You cannot use localhost from inside the Docker containers; it points to the container itself, not to the host the containers run on. Switch to the service name instead.
to fix the issue, change your env to
# POSTGRES
POSTGRES_PORT=5432
POSTGRES_USER=username
POSTGRES_PASSWORD=pass
POSTGRES_DB=db
POSTGRES_VOLUME=data
POSTGRES_HOST=postgis
# DJANGO
APP_PORT=8000
and your compose file to:
version: '3.7'
services:
  postgis:
    image: kartoza/postgis:12.1
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=${POSTGRES_USER}
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
      - POSTGRES_DB=${POSTGRES_DB}
    env_file:
      - .env
  web:
    build: .
    # command: sh -c "/wait && python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    command: sh -c "python manage.py migrate --no-input && python /code/app/manage.py runserver 0.0.0.0:${APP_PORT}"
    # restart: on-failure
    ports:
      - "${APP_PORT}:8000"
    volumes:
      - .:/code
    depends_on:
      - postgis
    env_file:
      - .env
    environment:
      WAIT_HOSTS: postgis:${POSTGRES_PORT}
volumes:
  postgres_data:
    name: ${POSTGRES_VOLUME}
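Note that with WAIT_HOSTS now pointing at postgis:${POSTGRES_PORT}, re-enabling the commented-out /wait prefix in the command would additionally make the web container block at startup until Postgres actually accepts connections, which sidesteps the depends_on ordering caveat discussed in the earlier answer.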

Setting up CI/CD on Gitlab for django project

This is the first time I've tried adding CI/CD for a Django project on GitLab. I want to set up automatic testing, and automatic deployment to the server from the development branch if testing is successful.
With the tests, almost everything worked out: the dependencies are installed and python manage.py test starts, but there is a problem with the testing database. The error traceback is a bit lower; I don't quite understand how the interaction with the database happens during the tests.
Creating test database for alias 'default'...
.....
MySQLdb._exceptions.OperationalError: (2002, "Can't connect to MySQL server on '127.0.0.1' (115)")

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "manage.py", line 21, in <module>
    main()
  File "manage.py", line 17, in main
    ...
    super(Connection, self).__init__(*args, **kwargs2)
django.db.utils.OperationalError: (2002, "Can't connect to MySQL server on '127.0.0.1' (115)")
In settings.py, the database connection is configured through these variables from a .env file.
.env
SECRET_KEY=ja-t8ihm#h68rtytii5vw67*o8=o)=tmojpov)9)^$h%9#16v&
DEBUG=True
DB_NAME=db_name
DB_USER=username
DB_PASSWORD=dbpass
DB_HOST=127.0.0.1
As for the deployment of the project, nothing is clear to me yet. I'd appreciate your help setting it up.
gitlab-ci.yml
stages:
  - test
  - deploy

test:
  stage: test
  script:
    - apt update -qy
    - apt install python3 python3-pip virtualenvwrapper -qy
    - virtualenv --python=python3 venv/
    - source venv/bin/activate
    - pip install -r requirements.txt
    - python manage.py test

deploy:
  stage: deploy
  script:
    ...
    ???
  only:
    - develop
UPDATE
Following Ruddra's recommendation, I added the following to the yml file:
services:
  - mysql
variables:
  # Configure mysql service (https://hub.docker.com/_/mysql/)
  MYSQL_DATABASE: test
  MYSQL_ROOT_PASSWORD: mysql
connect:
  image: mysql
  script:
    - echo "SELECT 'OK';" | mysql --user=root --password="$MYSQL_ROOT_PASSWORD" --host=mysql "$MYSQL_DATABASE"
As a result, the connect job finished with a successful status, but the test job failed with the same traceback as in the original question.
Actually, you can run MySQL as a service in GitLab. For example:
services:
  - mysql:latest
variables:
  # Configure mysql environment variables (https://hub.docker.com/_/mysql/)
  MYSQL_DATABASE: "db_name"
  MYSQL_ROOT_PASSWORD: "dbpass"
  MYSQL_USER: "username"
  MYSQL_PASSWORD: "dbpass"
Update: In your .env file, update the following settings:
DB_HOST=mysql
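With that change, the job reaches the MySQL service container by its hostname mysql. For illustration, the settings.py side presumably looks something like this (a sketch reconstructed from the .env variables above, not the asker's actual file):

import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': os.environ.get('DB_NAME'),
        'USER': os.environ.get('DB_USER'),
        'PASSWORD': os.environ.get('DB_PASSWORD'),
        # resolves to the GitLab CI "mysql" service when DB_HOST=mysql
        'HOST': os.environ.get('DB_HOST', '127.0.0.1'),
    }
}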
Update 2: (Based on this issue on GitLab) You can update the code like this:
variables:
  MYSQL_DATABASE: "db_name"
  MYSQL_ROOT_PASSWORD: "dbpass"
  MYSQL_USER: "username"
  MYSQL_PASSWORD: "dbpass"

test:
  script:
    - apt update -qy
    - mysql --user=$MYSQL_USER --password=$MYSQL_PASSWORD --database=$MYSQL_DATABASE --host=$MYSQL_HOST --execute="SHOW DATABASES; ALTER USER '$MYSQL_USER'@'%' IDENTIFIED WITH mysql_native_password BY '$MYSQL_PASSWORD'"