I'm trying to set up a remote connection to PostgreSQL running on my server, which is based on Ubuntu 16.04. So far, when I click the Save button in pgAdmin, it sort of freezes and does nothing. After typing .../manage.py runserver My_droplet_IP:5432, I try the webpage, and it is accessible.
I followed this tutorial after creating my droplet.
https://www.digitalocean.com/community/tutorials/how-to-install-and-use-postgresql-on-ubuntu-16-04
Then I edited the settings.py; pg_hba.conf; postgresql.conf files
settings.py:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': '.....',
        'USER': '....',
        'PASSWORD': '....',
        'HOST': '127.0.0.1',
        'PORT': '5432',
    }
}
STATIC_ROOT = os.path.join(BASE_DIR, 'static/') - at the end of the page
And, of course, I changed ALLOWED_HOSTS = ['....'] to include my droplet IP as well.
In postgresql.conf, listen_addresses is set to '*'.
pg_hba.conf file:
# Database administrative login by Unix domain socket
local all postgres peer
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
host all all 0.0.0.0/0 md5
# IPv6 local connections:
host all all ::1/128 md5
I also enabled the firewall and added a rule allowing port 5432.
Any ideas?
First of all, test whether you can connect to the database via psql:
psql -h ip_address -d name_of_the_database -U username
If you get a connection refused error like the one below, something is set up wrong; go through the checklist under "What should I check if the remote connection to PostgreSQL is not working?" below.
psql: could not connect to server: Connection refused Is the server running on host ip_address
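If psql is not handy, the same connectivity test can be done from Python with psycopg2 (a minimal sketch; the host, database, user and password below are placeholders you must replace):

import psycopg2

try:
    conn = psycopg2.connect(
        host="ip_address",              # the server's public IP
        dbname="name_of_the_database",
        user="username",
        password="your_password",
        port=5432,
        connect_timeout=5,              # fail fast instead of hanging
    )
    print("Connected, server version:", conn.server_version)
    conn.close()
except psycopg2.OperationalError as exc:
    # "Connection refused" here points at listen_addresses/firewall,
    # not at authentication.
    print("Connection failed:", exc)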
What should I check if the remote connection to PostgreSQL is not working?
Check the authentication configuration in pg_hba.conf
On Linux it is usually located at /etc/postgresql/<version>/main/pg_hba.conf.
You should allow authentication for the client, either from its specific IP address or from all IP addresses:
# Database administrative login by Unix domain socket
local all postgres peer
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
host all all 0.0.0.0/0 md5
# IPv6 local connections:
host all all ::0/0 md5
#all ips
host all all all md5
You can find more information on how to set up pg_hba.conf in the documentation.
Then you should set up which addresses the server listens on.
You have to find the postgresql.conf file (usually located at /etc/postgresql/9.1/main/postgresql.conf) and change the listen_addresses line from:
#listen_addresses = ''
to (don't forget to remove the #, which marks the line as a comment):
listen_addresses = '*'
After every step you should restart the PostgreSQL service:
sudo service postgresql restart
After this step you should see port 5432 (or 5433) among the listening addresses in the netstat output:
netstat -ntlp
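If netstat is not installed, a quick stdlib-only probe from Python tells you whether anything accepts TCP connections on those ports (a sketch; replace the placeholder IP with your server's address):

import socket

for port in (5432, 5433):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        status = s.connect_ex(("your_server_ip", port))   # 0 means the TCP connection was accepted
        print(port, "open" if status == 0 else "closed or filtered")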
After that you have to open the PostgreSQL port in the firewall:
sudo ufw allow 5432
You can check the firewall settings with the following command (you should see 5432 in the list):
sudo ufw status
If any of the previous steps doesn't work, check whether PostgreSQL is running on a different port (usually 5433) and repeat the previous steps.
This happens very often when you have multiple versions of PostgreSQL running, or when you upgraded the database and forgot to stop the previous version of PostgreSQL.
If you have trouble finding the configuration files, you can check the thread "Where are my postgres *.conf files?".
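If you can already connect locally (for example over the Unix domain socket as the postgres user), the server itself will tell you where its files are and which port it uses. A small sketch, assuming psycopg2 is installed and local peer authentication works:

import psycopg2

conn = psycopg2.connect(dbname="postgres")    # no host: Unix socket, peer auth
with conn.cursor() as cur:
    for setting in ("config_file", "hba_file", "port", "listen_addresses"):
        cur.execute("SHOW " + setting)
        print(setting, "=", cur.fetchone()[0])
conn.close()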
In case you are using GCP, remember to set a firewall rule inside GCP to allow that port; it might save you some hours of debugging.
Related
Hello, I just want to deploy my Django project on PythonAnywhere. When I run the command python manage.py migrate,
it shows this error message: django.db.utils.OperationalError: connection to server at "<Host name>" (<IP address>), port 5432 failed: Connection refused Is the server running on that host and accepting TCP/IP connections?
I think the problem is on the PythonAnywhere side, because when I connect to the server in pgAdmin using the same info as in the settings.py file I don't get any error messages. Note that I am using neon.tech for my PostgreSQL database.
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': '<the database>',
'USER':'<User>',
'PASSWORD':'<Password>',
'HOST':'<Host>',
'PORT':5432,
}
}
and I am sure that all the information is right, because I used it to connect to the server in pgAdmin on my local machine
If you are having trouble connecting to a PostgreSQL database in a deployment environment, there are a few things you can check:
Verify that the database is running: Make sure that the PostgreSQL database is running and accessible from the deployment environment. You can check this by attempting to connect to the database using the 'psql' command-line tool.
Check the connection settings: Ensure that the connection settings in your deployment configuration are correct, including the database host, port, database name, user, and password.
Check firewall settings: If you are connecting to a remote PostgreSQL database, ensure that the necessary firewall ports are open to allow incoming connections to the database server.
Check for network issues: Check for any network issues that may be preventing the deployment environment from connecting to the database. For example, if you are deploying to a virtual private cloud, ensure that the network settings are configured correctly.
Check for authentication issues: Make sure that the user and password specified in the connection settings have the necessary permissions to access the database.
Check logs for errors: Check the logs for any error messages that may indicate the cause of the connection issue.
By investigating the above points, you should be able to narrow down the cause of the connection issue and take the necessary steps to resolve it.
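Several of these checks can be exercised at once from the deployment environment with a short psycopg2 script. This is only a sketch: the connection values are the placeholders from the question, and the sslmode='require' option reflects the fact that hosted providers such as neon.tech generally require SSL.

import psycopg2

try:
    conn = psycopg2.connect(
        host="<Host>",
        dbname="<the database>",
        user="<User>",
        password="<Password>",
        port=5432,
        sslmode="require",       # most hosted Postgres services expect SSL
        connect_timeout=5,
    )
    print("Database reachable, server version:", conn.server_version)
    conn.close()
except psycopg2.OperationalError as exc:
    print("Connection failed from this environment:", exc)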
I created a database using pgAdmin GUI tool to use with postgres for my django project. There are only two databases in pgAdmin, the default 'postgres' db you get out of the box with pgAdmin, and my new database, dbfunk.
I'm using django and added postgres as my database and gave the necessary info in settings.
DATABASES = {
'default': {
'ENGINE': 'django.db.backends.postgresql',
'NAME': 'dbfunk',
'USER': 'postgres',
'PASSWORD': 'XXXXXX',
'HOST': 'localhost'
}
}
But when I run
python manage.py makemigrations
it gives the error that the database 'dbfunk' does not exist, even though it is both set in settings.py in Django and added in pgAdmin. The full text of the error is:
django.db.utils.OperationalError: FATAL: database "dbfunk" does not exist
I've installed the adaptor psycopg2. I also tried 'ENGINE': 'django.db.backends.postgresql_psycopg2' in settings.py, but this didn't make any difference.
Is there something else I am missing?
When I ran the command psql \u, it only showed the postgres database there as well, and not this new database 'dbfunk'.
I don't know if this helps, but upon installing postgres, I was given 5433 in the prompt as the port number.
UPDATE: I just ran createdb dbfunk from command line and that seems to have created it as I can now run python manage.py makemigrations. But why did I need to do that when I had already done this in pgAdmin? That is, why did I have to create it twice? Is this usual?
UPDATE2: Sadly the database dbfunk that I created from command line using createdb and the dbfunk in pgAdmin are not synchronised and migrations won't carry over, such that I don't see the tables for the django models in pgAdmin.
I initially installed Postgres using Homebrew and pgAdmin separately, but that didn't provide the default server/database in pgAdmin, so I deleted pgAdmin, uninstalled Postgres, and downloaded Postgres from the website instead. That installer seems to install pgAdmin alongside it, because when I then opened pgAdmin it had the default server/db postgres. I am on macOS 10.14.6.
When I check ports on 5432 I get this:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
postgres 16337 me 5u IPv4 0x7560dce8e7fXXXX 0t0 TCP localhost:postgresql (LISTEN)
postgres 16337 me 6u IPv6 0x7560dce8d2XXXXX 0t0 TCP localhost:postgresql (LISTEN)
And on 5433 there are more entries:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
postgres 16809 postgres 4u IPv6 0x7560dce8e883ed93 0t0 TCP *:pyrrho (LISTEN)
postgres 16809 postgres 5u IPv4 0x7560dce8e46dXXXX 0t0 TCP *:pyrrho (LISTEN)
pgAdmin4 17043 me 20u IPv4 0x7560dce8e74cXXXx 0t0 TCP localhost:56820->localhost:pyrrho (ESTABLISHED)
pgAdmin4 17043 me 21u IPv4 0x7560dce8e74cXXXX 0t0 TCP localhost:57054->localhost:pyrrho (ESTABLISHED)
postgres 17051 postgres 12u IPv4 0x7560dce8e74cXXXX 0t0 TCP localhost:pyrrho->localhost:56820 (ESTABLISHED)
postgres 17217 postgres 12u IPv4 0x7560dce8e74cXXXX 0t0 TCP localhost:pyrrho->localhost:57054 (ESTABLISHED)
For starters, you did not specify PORT in DATABASES. The default port for Postgres is 5432, so if the database cluster that contained dbfunk was listening on port 5433, Django would not find it. My guess is you have two instances of Postgres running, one on port 5432 and the other on 5433. When you ran createdb dbfunk, I'm guessing you again did not specify a port, so the database was created in the cluster listening on 5432. Now python manage.py makemigrations could find it using the settings in DATABASES.
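A minimal sketch of the corresponding settings.py fix, assuming the cluster that actually holds the pgAdmin-created dbfunk is the one listening on 5433 (adjust the port if yours differs):

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'dbfunk',
        'USER': 'postgres',
        'PASSWORD': 'XXXXXX',
        'HOST': 'localhost',
        'PORT': '5433',          # the instance pgAdmin created the database in
    }
}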
I am trying to connect to a PostgreSQL Database via ssh tunnel. It is set up to listen on port 3333 and forward to port 5432 on the machine with the database. I am able to connect using the psql command with password authentication via the tunnel, but for some reason when I attempt to connect using psycopg2 via the tunnel, I get the error FATAL: password authentication failed for user database_user. I have tried putting quotes around user names and passwords to no avail.
Successful psql command:
psql -h localhost -p 3333 -U database_name database_user
#This command brings up password prompt
Failed psycopg2 command:
psycopg2.connect("dbname='database_name' user='database_user' host='localhost' password='database_password' port=3333")
Output:
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/database_user/.local/share/virtualenvs/project-QNhT-Vzg/lib/python3.7/site-packages/psycopg2/__init__.py", line 126, in connect
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: FATAL: password authentication failed for user "database_user"
FATAL: password authentication failed for user "database_user"
Here is part of my pg_hba.conf for reference:
# Database administrative login by Unix domain socket
local all postgres peer
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local replication all peer
host replication all 127.0.0.1/32 md5
host replication all ::1/128 md5
When debugging a connection issue, it is always worth remembering which layers we must go through before reaching the service. When you connect to a PostgreSQL service there are at least three layers:
Networking: Firewall, NAT, Port Forwarding
PostgreSQL ACL
PostgreSQL login
It is important to understand which layer causes the issue; the PostgreSQL client error (wrapped by psycopg2 in your scenario) helps you work this out, because each layer produces a distinctive error message:
A network issue will generally raise a typical: Is the server running locally and accepting connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"? which means you did not succeed in connecting to the PostgreSQL service at all; the problem lies before the service;
An ACL issue will generally raise a typical: No pg_hba.conf entry for host <hostname>, user <username>, database <database> which means you did connect to the PostgreSQL service, but the connection does not match any valid entry in the ACL;
A login issue will generally raise the error you got: password authentication failed for user "<user>" which means you did connect to the PostgreSQL service and the connection matches an ACL entry, but the authentication failed.
In the latter scenario, it is important to know which entry was matched, because it defines the authentication method. In your case it was an md5 entry (there is no password in peer mode, and your SSH tunnel maps to localhost, so from PostgreSQL's perspective you are seen as host rather than local):
host all all 127.0.0.1/32 md5
Apparently your password is not what you expect it to be. To solve this, ensure:
you have set the password for the PostgreSQL user and checked its LOGIN privilege (not the Unix/SSH user; these are different concepts);
you use the same password in your psycopg2 connection; then you should be able to connect.
Reading your comment, it seems you may have literal ' quotes in your password as well. Note that in the connection string the quotes around 'database_password' only delimit the value; if quote characters are really part of the password, they have to be escaped inside the connection string (in a libpq-style connection string, with a backslash: \'). Alternatively, if the quotes only seem to be required, that may indicate the password contains special characters that need escaping. You can also try a simpler password to debug and then fall back to a stronger one.
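A simple way to take quoting out of the picture entirely is to pass the parameters to psycopg2 as keyword arguments rather than building a connection string by hand (a sketch; the values are the placeholders from the question):

import psycopg2

conn = psycopg2.connect(
    dbname="database_name",
    user="database_user",
    password="database_password",   # passed verbatim, nothing to escape
    host="localhost",
    port=3333,
)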
Solution
Always make sure you reserve your IPs when using a Static IP
Versions
VirtualBox Version: 6.0.0 ( I think )
Vagrant Version: 2.2.3
CentosBox: "centos/7"
Nginx Version: 1.16.1
uWSGI Version: 2.0.18
Django Version: 2.2.1
Background
I have two vagrant boxes running, a test and a production. The only difference is IP and core count. I've set up both so I can ssh directly into the boxes, instead of having to ssh into the host machine and then run 'vagrant ssh'
General Issue
The production version will randomly boot me out of the SSH session (Connection reset by IP port 22) and then I'll get Connection Refused. If I SSH into the host machine and then run 'vagrant ssh' I can still get in and everything seems fine; I can even still ping other computers on the network. But I can't access the box from outside the host, and this goes for the nginx server as well (IP refused to connect. in Chrome).
The issue will occasionally fix itself in a couple of minutes, but the majority of the time it requires a 'vagrant destroy' and 'vagrant up --provision' to recreate the box. I also occasionally get booted out of the host machine as well as the test box, but I can still access both externally afterwards (even the nginx server on test). I'm working over a VPN, and I occasionally get booted out of that too, but I can reconnect when I notice.
VagrantFile
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Please don't change it unless you know what you're doing.
Vagrant.configure("2") do |config|
config.vm.box = "centos/7"
config.vm.hostname = "DjangoProduction"
# Disable automatic box update checking. If you disable this, then
# boxes will only be checked for updates when the user runs
# `vagrant box outdated`. This is not recommended.
# config.vm.box_check_update = false
# Create a public network, which generally matched to bridged network.
# Bridged networks make the machine appear as another physical device on
# your network.
config.vm.network "public_network", ip: "IP"
# Share an additional folder to the guest VM. The first argument is
# the path on the host to the actual folder. The second argument is
# the path on the guest to mount the folder. And the optional third
# argument is a set of non-required options.
config.vm.synced_folder "./", "D:/abcd", type: "sshfs", group:'vagrant', owner:'vagrant'
# Provider-specific configuration so you can fine-tune various
# backing providers for Vagrant. These expose provider-specific options.
# Example for VirtualBox:
#
config.vm.provider "virtualbox" do |v|
v.name = "DjangoProduction"
# test has these two commented out
v.memory = 6000
v.cpus = 4
end
#
# View the documentation for the provider you are using for more
# information on available options.
## Keys
### For SSH directly into the Box
# Work Laptop Key
config.vm.provision "file", source: ".provision/keys/work.pub", destination: "~/.ssh/work.pub"
config.vm.provision "shell", inline: "cat ~vagrant/.ssh/work.pub >> ~vagrant/.ssh/authorized_keys"
# Personal Laptop Key
config.vm.provision "file", source: ".provision/keys/msi.pub", destination: "~/.ssh/msi.pub"
config.vm.provision "shell", inline: "cat ~vagrant/.ssh/msi.pub >> ~vagrant/.ssh/authorized_keys"
##
required_plugins = %w( vagrant-sshfs )
required_plugins.each do |plugin|
exec "vagrant plugin install #{plugin};vagrant #{ARGV.join(" ")}" unless Vagrant.has_plugin? plugin || ARGV[0] == 'plugin'
end
# Enable provisioning with a shell script. Additional provisioners such as
# Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
# documentation for more information about their specific syntax and use.
config.vm.provision :shell, path: ".provision/boot.sh"
end
boot.sh
# networking
sudo yum -y install net-tools
ifconfig eth1 IP netmask 255.255.252.0
route add -net 10.1.0.0 netmask 255.255.252.0 dev eth1
route add default gw 10.1.0.1
# I manually set the gateway so It can be accessed through VPN
## install, reqs + drop things to places - gonna leave all that out
Error messages
Django
This issue started popping up earlier this week, with Django sending me error emails like the one below. It's always random URLs; there's no consistency.
OperationalError at /
(2003, 'Can\'t connect to MySQL server on \'external-ip\' (110 "Connection timed out")')
I used to get this email once every other day and paid it no attention, but currently it's sending me at least 20 a day and the site is almost unusable: it's either really slow or I get Chrome errors ('ERR_CONNECTION_TIMED_OUT', 'ERR_CONNECTION_REFUSED' or 'ERR_CONNECTION_RESET'). It will be fine for an hour and then everything hits the fan.
I originally thought it was an issue with the DB, uWSGI or Django, but working with it yesterday I realized there was a correlation between the timeouts and getting kicked out of SSH.
Nginx Server Settings (I haven't changed nginx.conf)
upstream django {
server unix:///vagrant/abcd.sock;
}
server{
listen 8080;
return 301 https://$host$request_uri;
}
server{
charset utf-8;
listen 443 ssl;
ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
location / {
uwsgi_pass django;
include /vagrant/project/uwsgi_params;
uwsgi_read_timeout 3600;
uwsgi_ignore_client_abort on;
}
location /static {
alias /vagrant/static;
}
error_page 500 502 503 504 /50x.html;
location = /50x.html {
root /vagrant/templates/core;
}
}
UWSGI command used
uwsgi --socket abcd.sock --module project.wsgi --chmod-socket=664 --master --processes 8 --threads 4 --buffer-size=65535 --lazy
Nginx Error Logs
Nothing.
Messages file
only shows the '(110 "Connection timed out")' dump when it happens
Can you test the behaviour after commenting out the line "config.vm.synced_folder..."?
I am trying to deploy my Django app on a Heroku server. I followed the instructions from this website: https://devcenter.heroku.com/articles/getting-started-with-python#introduction. It worked fine up to the "heroku open" command. When I came to the part where I need to set up my database using the "heroku run python manage.py syncdb" command, it failed with the message: "OperationalError: could not connect to server: Connection refused
Is the server running on host "localhost" (127.0.0.1) and accepting
TCP/IP connections on port 5432?". I tried lots of fixes, including the one suggested in "Deploying Django app's local postgres database to heroku?" and http://www.cyberciti.biz/tips/postgres-allow-remote-access-tcp-connection.html. I tried all the solutions, including setting listen_addresses = '*' and tcpip_socket = 'true' in postgresql.conf and editing the IPv4 and IPv6 values in pg_hba.conf to:
host all all 127.0.0.1 255.255.0.1 trust
host all all 10.0.0.99/32 md5
host all all 0.0.0.0/0 .
But none of them worked. I am guessing the problem arises because Heroku cannot connect to my local Postgres server. This is strange, because I'm able to access the Postgres server via pgAdmin.
Also, my Django settings.py looks like this:
DATABASES =
{
'default':
{
'ENGINE': 'django.db.backends.postgresql_psycopg2',
'NAME': 'django_test',
'USER': 'postgres',
'PASSWORD': '******',
'HOST': 'localhost', # Or an IP Address that your DB is hosted on
'PORT': '5432',
}
}
Do I need to change this and use Heroku's database settings instead?
localhost on the server points to the server, not your local machine. The server running your Django code will resolve the DNS name localhost to 127.0.0.1, which is local to the server resolving that name. That will NOT point to the computer you are working on.
You need to get an instance of Postgres on Heroku and change 'HOST': 'xxx.xxx.xxx.xxx' to the address of your new Postgres instance in your Django settings.
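The original answer suggests pointing HOST at the new instance directly; a common alternative (not mentioned in the answer, and assuming the heroku-postgresql add-on is attached and the dj-database-url package is installed) is to read the DATABASE_URL config var that Heroku sets for a provisioned Postgres add-on:

# settings.py sketch: use Heroku's DATABASE_URL instead of hard-coding localhost
import dj_database_url

DATABASES = {
    'default': dj_database_url.config(
        default='postgres://postgres:******@localhost:5432/django_test',  # local fallback
    )
}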