I am trying to use my localhost database from inside a Dockerized Django container.
I have set listen_addresses to '*' in postgresql.conf.
I have added host all all localhost,192.168.1.9 trust in pg_hba.conf.
192.168.1.9 is my en0 address.
Now I want to use 192.168.1.9 as the host in the database settings in Django.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'db_name',
        'USER': 'db_user',
        'PASSWORD': 'db_password',
        'HOST': '192.168.1.9',
        'PORT': '5432',
    }
}
I am trying this but I am not able to connect.
Am I doing something wrong? I want Postgres to accept the connection so the Django app container can reach the database on my local machine.
I am testing the connection with
psql -h 192.168.1.9 -U db_user -d db_name
and getting:
psql: could not connect to server: Permission denied
Is the server running on host "192.168.1.9" and accepting
TCP/IP connections on port 5432?
I am not sure what I am doing wrong.
Docker containers are usually on a separate network interface (docker0), so if your app wants to reach the host it has to go through the docker0 interface.
You can get the host's IP from inside your container with:
/sbin/ip route | awk '/default/ { print $3 }'
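If you go this route, it may be cleaner to pass that gateway address into the container as an environment variable instead of hard-coding an IP in settings.py. A minimal sketch, assuming a DB_HOST variable is set when the container starts (the variable name and the 172.17.0.1 fallback, the usual docker0 gateway, are assumptions, not values from the question):

settings.py
import os

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'db_name',
        'USER': 'db_user',
        'PASSWORD': 'db_password',
        # Host-machine address as seen from the container; falls back to the
        # typical docker0 gateway if DB_HOST is not set.
        'HOST': os.environ.get('DB_HOST', '172.17.0.1'),
        'PORT': '5432',
    }
}

You can then start the container with -e DB_HOST=<gateway ip>, using the ip route command above to look the address up.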
I deployed a Django application on GCP in a VM (Ubuntu 22.04.01 LTS).
The app works normally; the database is PostgreSQL.
But I can't access the database remotely, I always get a timeout error.
My settings.py:
....
DATABASES = {
    'default': {
        'ENGINE': os.environ.get('DATABASE_ENGINE'),
        'NAME': os.environ.get('DATABASE_NAME'),
        'USER': os.environ.get('DATABASE_USER'),
        'PASSWORD': os.environ.get('DATABASE_PASSWORD'),
        'HOST': os.environ.get('DATABASE_HOST'),
        'PORT': os.environ.get('DATABASE_PORT'),
    }
}
...
My .env file:
# Postgres
DATABASE_ENGINE = 'django.db.backends.postgresql'
DATABASE_NAME = "basededados"
DATABASE_USER = "user"
DATABASE_PASSWORD = "senha"
DATABASE_HOST = "127.0.0.1"
DATABASE_PORT = "5432"
I created the database like this:
sudo -u postgres psql
CREATE ROLE user WITH LOGIN SUPERUSER CREATEDB CREATEROLE PASSWORD 'senha';
CREATE DATABASE basededados WITH OWNER user;
GRANT ALL PRIVILEGES ON DATABASE basededados TO user;
I already set up the postgresql.conf file:
listen_addresses = '*'
And the pg_hba.conf file:
# IPv4 local connections:
host all all 0.0.0.0/0 md5
I already allowed port 5432 through the firewall by executing:
sudo ufw allow 5432/tcp
I'm trying to access the DB like this:
psql -h {Virtual machine IP} -d basededados -U user
The error:
psql: error: connection to server at {Virtual machine IP}, port 5432 failed: Connection timed out
Is the server running on that host and accepting TCP/IP connections?
Am I doing something wrong?
.env (the same values without spaces around = or quotes):
# Postgres
DATABASE_ENGINE=django.db.backends.postgresql
DATABASE_NAME=basededados
DATABASE_USER=user
DATABASE_PASSWORD=senha
DATABASE_HOST=127.0.0.1
DATABASE_PORT=5432
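To tell a credentials problem apart from a network problem, it can help to reproduce the psql attempt from Python on the remote machine. A minimal sketch, assuming psycopg2 is installed and the .env values above are exported into the environment; VM_IP is a placeholder for the machine's external address, and the 5-second timeout is an arbitrary choice:

import os
import psycopg2

VM_IP = 'VIRTUAL_MACHINE_IP'  # placeholder for the VM's external IP

try:
    conn = psycopg2.connect(
        host=VM_IP,
        port=os.environ.get('DATABASE_PORT', '5432'),
        dbname=os.environ.get('DATABASE_NAME', 'basededados'),
        user=os.environ.get('DATABASE_USER', 'user'),
        password=os.environ.get('DATABASE_PASSWORD', 'senha'),
        connect_timeout=5,  # fail fast instead of waiting for the OS-level timeout
    )
except psycopg2.OperationalError as exc:
    # A timeout points at the network path (GCP VPC firewall rule, ufw,
    # listen_addresses); an authentication error points at pg_hba.conf.
    print('connection failed:', exc)
else:
    print('connected to backend pid', conn.get_backend_pid())
    conn.close()

Note that on GCP, ufw is not the only filter in the path: a VPC firewall rule allowing tcp:5432 to the VM is also needed before external connections can reach Postgres.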
My DB connection is:
DATABASES = {
    'default': {
        'ENGINE': 'djongo',
        'ENFORCE_SCHEMA': True,
        'NAME': 'taxiapp',
        'HOST': '127.0.0.1',
        'PORT': xxxx,
    }
}
My SSH address is x.x.x.x, SSH username 'admin', SSH port 0000, SSH auth method password, and user password 0000.
Currently I am getting:
pymongo.errors.ServerSelectionTimeoutError: 127.0.0.1:xxxx: [Errno 111] Connection refused
How do I define the SSH details in my settings.py file?
You can use the Paramiko library for this; it works pretty well.
Here is a good explanation of how to set it up:
https://hackersandslackers.com/automate-ssh-scp-python-paramiko/
Don't hesitate to follow up if you run into problems setting it up.
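For the tunnel itself, a minimal sketch using the sshtunnel package (which is built on top of Paramiko). All concrete values below are placeholders for the masked SSH and database details in the question:

from sshtunnel import SSHTunnelForwarder  # pip install sshtunnel

SSH_HOST = 'x.x.x.x'        # placeholder SSH address
SSH_PORT = 22               # placeholder SSH port
SSH_USER = 'admin'
SSH_PASSWORD = '0000'       # placeholder password
REMOTE_DB_PORT = 27017      # placeholder for the MongoDB port on the remote host

tunnel = SSHTunnelForwarder(
    (SSH_HOST, SSH_PORT),
    ssh_username=SSH_USER,
    ssh_password=SSH_PASSWORD,
    remote_bind_address=('127.0.0.1', REMOTE_DB_PORT),
    local_bind_address=('127.0.0.1', REMOTE_DB_PORT),
)
tunnel.start()
# While the tunnel is up, 127.0.0.1:REMOTE_DB_PORT on this machine forwards to the
# remote database, so the djongo HOST/PORT in settings.py can stay unchanged.

One caveat: Django itself has no SSH settings, so the tunnel has to be started outside of (or before) the database connection being made.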
I'm trying to run my Django server in Docker, but the Postgres port is apparently already in use. When I run "docker-compose up", I receive this error:
django.db.utils.OperationalError: could not connect to server: Connection refused
Is the server running on host "127.0.0.1" and accepting
TCP/IP connections on port 5432?
ERROR: Service 'web' failed to build: The command '/bin/sh -c python manage.py migrate' returned a non-zero code: 1
sudo service postgresql status
returns:
9.6/main (port 5432): online
sudo lsof -nP | grep LISTEN
returns:
postgres 15817 postgres 3u IPv4 1022328 0t0 TCP 127.0.0.1:5432
I tried to run "sudo kill -9 15817", but docker-compose up still receives the same error.
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'stemletics',
        'USER': 'stemleticsadmin',
        'PASSWORD': 'changeme',
        'HOST': '127.0.0.1',  # set in docker-compose.yml
        'PORT': 5432,  # default postgres port
    }
}
In order to use postgres inside of Docker you will need to configure information like the database user, password and db name. This is done by setting environment variables for the container. A complete list of supported variables can be found in the postgres image documentation on Docker Hub.
Additionally you will want to expose port 5432 of postgres to your web service inside your docker-compose file.
Something like this should work:
docker-compose.yml
version: '3'
services:
  db:
    image: postgres
    ports:
      - "5432"
    environment:
      - POSTGRES_DB=stemletics
      - POSTGRES_USER=stemleticsadmin
      - POSTGRES_PASSWORD=changeme
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    depends_on:
      - db
You will also have to change the hostname you are using inside settings.py. docker-compose creates a default network for your services and attaches the running containers to this network. Inside your web container the database will be available at the hostname db.
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'stemletics',
        'USER': 'stemleticsadmin',
        'PASSWORD': 'changeme',
        'HOST': 'db',  # set in docker-compose.yml
        'PORT': 5432,  # default postgres port
    }
}
Lastly, if you do not have any sort of database reconnection logic in your Python code, the migration may still fail. This is because depends_on only waits for the container to start, and postgres takes a couple of seconds to initialize after the container is running.
To get around this quickly, it is easiest to run one container at a time.
i.e.:
$ docker-compose up -d db
Wait for postgres to initialize
$ docker-compose up -d web
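Alternatively, a small retry loop run before migrate avoids starting the containers by hand. A minimal sketch, assuming psycopg2 is installed in the web image; the script name and the 30-attempt cap are arbitrary choices:

wait_for_db.py
import os
import time
import psycopg2

# Keep retrying until postgres inside the db container has finished initializing.
for attempt in range(30):  # arbitrary cap
    try:
        psycopg2.connect(
            host=os.environ.get('POSTGRES_HOST', 'db'),
            dbname=os.environ.get('POSTGRES_DB', 'stemletics'),
            user=os.environ.get('POSTGRES_USER', 'stemleticsadmin'),
            password=os.environ.get('POSTGRES_PASSWORD', 'changeme'),
            connect_timeout=2,
        ).close()
        break
    except psycopg2.OperationalError:
        time.sleep(1)
else:
    raise SystemExit('database never became available')

Run it from the web service's command (or an entrypoint script) right before python manage.py migrate.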
Hopefully this gets you up and running.
I was able to fix this issue by simply building my db container, waiting a few seconds, then building the web container:
docker-compose up -d --build db
wait a few seconds
docker-compose up -d --build web
I hope this helps
I faced the same problem connecting to the PostgreSQL server on the Windows operating system. I applied the following steps; I hope it helps solve the problem...
Download PostgreSQL
Install PostgreSQL
Open 'SQL Shell (psql)' from the search option
Create the database
Add the DATABASES setting:
settings.py
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'stemletics',
        'USER': 'postgres',  # default user name
        'PASSWORD': 'changeme',
        'HOST': '127.0.0.1',  # default host
        'PORT': '5432',  # default postgres port
    }
}
I ran into this issue and it turned out that I hadn't started Docker Desktop. Apparently, if Docker Desktop is not running, any docker commands you use do not apply to Docker Desktop but rather to a different Docker installation on your system. My OS is Ubuntu 22.04.
Currently I am trying to deploy a Django app on Google App Engine (GAE). All goes well and the app is deployed, but as soon as it is deployed, its connection with the Postgres instance is lost. I don't know why this is happening. The following is my settings.py file.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'dbname',
        'USER': 'username',
        'PASSWORD': 'password',
    }
}

# In the flexible environment, you connect to CloudSQL using a unix socket.
# Locally, you can use the CloudSQL proxy to proxy a localhost connection
# to the instance
DATABASES['default']['HOST'] = '/cloudsql/shopnroar-175407:us-central1:snr-instance1'
if os.getenv('GAE_INSTANCE'):
    pass
else:
    DATABASES['default']['HOST'] = '100.107.126.241'
When I run it locally, it makes the connection with Google Cloud Postgres since I have given the IPv4 address, but as soon as I deploy it on GAE, the following error comes up while accessing the database:
Is the server running locally and accepting
connections on Unix domain socket "/cloudsql/shopnroar-175407:us-central1:snr-instance1/.s.PGSQL.5432"?
here is my app.yaml
# [START runtime]
runtime: python
env: flex
entrypoint: gunicorn -b :$PORT SNR.wsgi

env_variables:
  # Replace user, password, database, and instance connection name with the values obtained
  # when configuring your Cloud SQL instance.
  SQLALCHEMY_DATABASE_URI: >-
    postgresql+psycopg2://amad.uddin:goingtoin1122#/shopnroar?host=/cloudsql/shopnroar-175407:us-central1:snr-instance1

beta_settings:
  cloud_sql_instances: shopnroar-175407:us-central1:snr-instance1

runtime_config:
  python_version: 2
# [END runtime]
Can anybody tell me how I can make the connection with the Postgres instance after deploying the Django app on GAE?
Any help or suggestions will be highly appreciated.
Actually, forget my old answer: try turning the app.yaml values into strings, that helped me out:
cloud_sql_instances: 'shopnroar-175407:us-central1:snr-instance1'
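For reference, a sketch of the host selection that the comments in the question's settings.py describe: the unix socket when running on App Engine (where GAE_INSTANCE is set), otherwise a local address. The assumption here is that the Cloud SQL proxy is listening on 127.0.0.1 locally; connecting straight to the instance's IP, as the question does, also works if that IP is authorized:

import os

# DATABASES is the dict defined earlier in settings.py.
if os.getenv('GAE_INSTANCE'):
    # On the flexible environment, connect over the Cloud SQL unix socket.
    DATABASES['default']['HOST'] = '/cloudsql/shopnroar-175407:us-central1:snr-instance1'
else:
    # Locally, go through the Cloud SQL proxy on localhost.
    DATABASES['default']['HOST'] = '127.0.0.1'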
I edited the DATABASES entry in my settings.py to be:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'Limbo',
        'USER': '<username>',
        'PASSWORD': '<password>',
        'HOST': '<dbname>.<gibberish>.us-west-2.rds.amazonaws.com',
        'PORT': '5432',
    }
}
Now when I run ./manage.py runserver 0.0.0.0:800, it says:
Performing system checks...
System check identified no issues (0 silenced).
then after a minute or two:
File "/home/ec2-user///local/lib64/python2.7/site-packages/psycopg2/__init__.py", line 164, in connect
    conn = _connect(dsn, connection_factory=connection_factory, async=async)
django.db.utils.OperationalError: could not connect to server: Connection timed out
    Is the server running on host "<dbname>.<gibberish>.us-west-2.rds.amazonaws.com" (172.rest.of.ip) and accepting
    TCP/IP connections on port 5432?
I have made sure to allow incoming access on 5432 from my EC2's IP (verified with curl ifconfig.co) and from the IP listed in the error message (starting with 172 above). Perhaps I need to use a larger subnet (than /32) for the 172 source?
EDIT: I get the same error when I run python limbo/manage.py migrate.
EDIT 2: if I allow 5432 connections from any IP in the security group it works, but as stated above, I am already allowing my EC2's IP (according to curl ifconfig.co's return value). What other IP should I be including?
Enter the following on the Linux command line:
cd /path/to/djangoproj
screen
source <env>/bin/activate
Then the prompt should read something like:
(<env>)[ec2-user@ip-172-31-26-243 djangoproj]$
See the section in the middle, ip-172-31-26-243: that's the local (private) IP; use that in the security group settings. In this case, I allowed 172.31.26.243/32 as the incoming IP.
Then try runserver again.
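If you prefer not to read the address off the prompt, the instance metadata service returns the same private IP. A minimal sketch, assuming IMDSv1 is enabled on the instance (with IMDSv2 enforced, a session token has to be requested first):

from urllib.request import urlopen

# EC2 instance metadata endpoint; returns the instance's private (VPC) IPv4 address.
URL = 'http://169.254.169.254/latest/meta-data/local-ipv4'
private_ip = urlopen(URL, timeout=2).read().decode()
print(private_ip)  # e.g. 172.31.26.243 -> allow 172.31.26.243/32 in the security group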