Trouble Connecting to PostgreSQL Running in an Ubuntu VM - django

I have created an instance of PostgreSQL running in an Ubuntu/Bionic box in Vagrant/VirtualBox that will be used by Django in my dev environment. I wanted to test my ability to connect to it from the terminal or pgAdmin before connecting with Django, just to be sure it was working on that end first; the idea being that later Django debugging would be easier if I were assured the connection works. But I've had no success.
I have tried editing the configuration files that many posts suggest, with no effect. I can, however, ping the box via the IP assigned in the Vagrantfile with no issue - but not when specifying port 5432 with ping 10.1.1.1:5432. I can also use psql from within the box, so the server is running.
I have made sure to enable ufw on the VM, created a rule to allow port 5432, and ensured that it took using sudo ufw status. I have also confirmed that I'm editing the correct files using the SHOW command within psql.
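For reference, the firewall setup described above amounts to roughly the following on the guest (a sketch only; exact rule syntax and output will differ from what was actually run):
sudo ufw allow 5432/tcp
sudo ufw enable
sudo ufw status        # should list 5432/tcp as ALLOW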
Here are the relevant configs as they currently are:
Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.hostname = "hg-site-db"

  config.vm.provider "virtualbox" do |v|
    v.memory = 2048
    v.cpus = 1
  end

  config.vm.box = "ubuntu/bionic64"
  config.vm.network "forwarded_port", host_ip: "127.0.0.1", guest: 5432, host: 5432
  config.vm.network "public_network", ip: "10.1.1.1"

  config.vm.provision "shell", inline: <<-SHELL
    # Update and upgrade the server packages.
    sudo apt-get update
    sudo apt-get -y upgrade
    # Install PostgreSQL
    sudo apt-get install -y postgresql postgresql-contrib
    # Set Ubuntu Language
    sudo locale-gen en_US.UTF-8
  SHELL
end
/etc/postgresql/10/main/postgresql.conf:
listen_addresses = '*'
/etc/postgresql/10/main/pg_hba.conf - I am aware this is insecure, but I was just trying to find out why it was not working, with plans to go back and correct this:
host all all 0.0.0.0/0 trust

As we discussed in comments, you should remove host_ip from your forwarded port definition and just leave the guest and host ports.
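In other words, the forwarded port line would become something like this (a sketch of that single line only; the rest of the Vagrantfile stays as it is):
config.vm.network "forwarded_port", guest: 5432, host: 5432
After a vagrant reload, connecting from the host with psql -h 127.0.0.1 -p 5432 -U <your_user> should reach the VM's PostgreSQL.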

How to get to postgres database at localhost from Django in Docker container [duplicate]

This question already has answers here:
From inside of a Docker container, how do I connect to the localhost of the machine?
I have a Django app in a Docker container that must access a Postgres database on my localhost. The Dockerfile works fine when accessing a database residing on an external host, but it can't find the database on my host.
It is a well-known problem and there is a lot of documentation, but none of it worked in my case. This question resembles another question, but that one did not solve my problem. I actually describe the correct solution to this problem, as #Zeitounator pointed out, but it still did not work. It was thanks to #Zeitounator that I realised two parts of the problem must be solved: the Docker side and the PostgreSQL side. I did not find that solution in any of the answers I read. I did, however, read about the same frustration: getting a solution that did not work.
It all comes down to which address I pass to the HOST key in the database dictionary in the Django settings.py:
print('***', os.environ['POSTGRES_DB'], os.environ['POSTGRES_USER'],
      os.environ['POSTGRES_HOST'], os.environ['POSTGRES_PORT'])

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',  # os.environ['ENGINE'],
        'NAME': 'demo',
        'USER': os.environ['POSTGRES_USER'],
        'PASSWORD': os.environ['POSTGRES_PASSWORD'],
        'HOST': os.environ['POSTGRES_HOST'],
    }
}
And running the Dockerfile:
docker build -t dd .
docker run --name demo -p 8000:8000 --rm dd
When POSTGRES_HOST points to my external server 192.168.178.100 it works great. When running python manage.py runserver it finds the host and the database; the server starts and waits for commands. When pointing to 127.0.0.1 it fails (which is actually fine too: the container really is isolated).
django.db.utils.OperationalError: could not connect to server: Connection refused
    Is the server running on host "127.0.0.1" and accepting
    TCP/IP connections on port 5432?
But since I can connect to my external server, I should be able to connect to the localhost IP as well, yet that fails just the same (I forgot to mention that the database really is running). Using host.docker.internal doesn't work either, even when running:
docker run --name demo -p 8000:8000 --rm --add-host host.docker.internal:host-gateway dd
It replaces host.docker.internal with 172.17.0.1. The only solution that works so far is:
# set POSTGRES_HOST at 127.0.0.1
docker run --name demo -p 8000:8000 --rm --network host dd
but that seems like something I don't want. As far as I understand the documentation, it makes the full host network stack available to the container, which defeats the idea of a container in the first place. And second: it doesn't work in docker-compose, even though I specify:
extra_hosts:
  - "host.docker.internal:host-gateway"
network_mode: host leads to a configuration error; docker-compose refuses to run.
Is there a way to access a service running on my PC from a Docker container that also runs on my PC, one that works from a plain Dockerfile as well as from docker-compose and can be deployed as well?
I am using Ubuntu 21.04 (latest version), Docker version 20.10.7, build 20.10.7-0ubuntu5.1, and PostgreSQL v14. My Dockerfile:
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
RUN mkdir /app
WORKDIR /app
COPY . .
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver", "127.0.0.1:8000"]
There are two parts to solving this problem: the Docker part and the PostgreSQL part, as #Zeitounator pointed out. I was not aware of the Postgres part. Thanks to his comment I could resolve this issue. And it works for the Dockerfile as well as for docker-compose where this Dockerfile is used.
One has to change two PostgreSQL configuration files, both of which are in /etc/postgresql/<version>/main:
postgresql.conf
Change the listen address. Initially it shows:
listen_addresses = 'localhost'          # what IP address(es) to listen on;
                                        # comma-separated list of addresses;
                                        # defaults to 'localhost'; use '*' for all
                                        # (change requires restart)
I changed 'localhost' to '*'. It could be more specific, in this case the Docker address, but as a proof of concept this worked.
pg_hba.conf
In my case host.docker.internal resolves to 172.17.0.1. This seems to be a default Docker gateway address, as I noticed in most discussions regarding this subject.
Add two lines at the very end of the file:
host all all 172.17.0.1/0 md5
host all all ::ffff:ac11:1/0 md5
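Remember that PostgreSQL only picks up these changes after a restart (e.g. sudo systemctl restart postgresql). For the docker-compose side, the relevant pieces look roughly like the sketch below; the service name web and the environment values are illustrative, not taken from the original post:
services:
  web:
    build: .
    ports:
      - "8000:8000"
    extra_hosts:
      - "host.docker.internal:host-gateway"
    environment:
      POSTGRES_HOST: host.docker.internal   # resolves to 172.17.0.1, matching the pg_hba.conf entry above
      POSTGRES_PORT: "5432"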

How to configure Cassandra in GCP to remotely connect?

I am following the steps below to install and configure Cassandra in GCP.
It works perfectly as long as I am working with Cassandra within GCP.
$java -version
$echo "deb http://downloads.apache.org/cassandra/debian 40x main" | sudo tee -a /etc/apt/sources.list.d/cassandra.sources.list
$curl https://downloads.apache.org/cassandra/KEYS | sudo apt-key add -
$sudo apt install apt-transport-https
$sudo apt-get update
$sudo apt-get install cassandra
$sudo systemctl status cassandra
//Active: active (running)
$nodetool status
//Datacenter: datacenter1
$tail -f /var/log/cassandra/system.log
$find /usr/lib/ -name cqlshlib
##/usr/lib/python3/dist-packages/cqlshlib
$export PYTHONPATH=/usr/lib/python3/dist-packages
$sudo nano ~/.bashrc
//Add
export PYTHONPATH=/usr/lib/python3/dist-packages
//save
$source ~/.bashrc
$python --version
$cqlsh
//it opens cqlsh shell
But I want to configure Cassandra to accept remote connections.
I tried the following 7 different solutions, but I am still getting the error.
1. In GCP: VPC network -> Firewall -> Create
   IP 0.0.0.0/0
   ports tcp=9000,9042,8088,9870,8123,8020, udp=9000
   tag = hadoop
   Add this tag to the VMs
2. rm -Rf ~/.cassandra
3. sudo nano ~/.cassandra/cqlshrc
   [connection]
   hostname = 34.72.70.173
   port = 9042
4. cqlsh 34.72.70.173 -u cassandra -p cassandra
5. Firewall - open ports
   https://stackoverflow.com/questions/2359159/cassandra-port-usage-how-are-the-ports-used
   9000,9042,8088,9870,8123,8020,7199,7000,7001,9160
6. Get rid of this line: JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=localhost"
   Try restarting the service: sudo service cassandra restart
   If you have a cluster, make sure that ports 7000 and 9042 are open within your security group.
7. Set the environment variable CQLSH_HOST=1.2.3.4, then simply type cqlsh.
   https://stackoverflow.com/questions/20575640/datastax-devcenter-fails-to-connect-to-the-remote-cassandra-database/20598599#20598599
   sudo nano /etc/cassandra/cassandra.yaml
   listen_address: localhost
   rpc_address: 34.72.70.173
   broadcast_rpc_address: 34.72.70.173
   sudo service cassandra restart
   sudo nano ~/.bashrc
   export CQLSH_HOST=34.72.70.173
   source ~/.bashrc
   sudo systemctl restart cassandra
   sudo service cassandra restart
   sudo systemctl status cassandra
   nodetool status
Please suggest how to get rid of the following error:
Connection error: ('Unable to connect to any servers', {'127.0.0.1:9042': ConnectionRefusedError(111, "Tried connecting to [('127.0.0.1', 9042)]. Last error: Connection refused")})
This indicates that when you ran cqlsh, you didn't specify the public IP:
Connection error: ('Unable to connect to any servers', \
{'127.0.0.1:9042': ConnectionRefusedError(111, "Tried connecting to [('127.0.0.1', 9042)]. \
Last error: Connection refused")})
When running Cassandra nodes on public clouds, you need to configure cassandra.yaml with the following:
listen_address: private_IP
rpc_address: public_IP
The listen address is what Cassandra nodes use for communicating with each other privately, e.g. the gossip protocol.
The RPC address is what clients/apps/drivers use to connect to nodes on the CQL port (9042), so it needs to be set to the node's public IP address.
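For example, with the public IP from the question and a hypothetical private VPC address (use the VM's real internal IP), cassandra.yaml would contain something like:
listen_address: 10.128.0.5      # hypothetical private (VPC-internal) IP of this node
rpc_address: 34.72.70.173       # the node's public IP, used by clients on port 9042
followed by a restart of the Cassandra service so the new addresses take effect.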
To connect to a node with cqlsh (a client), you need to specify the node's public IP:
$ cqlsh <public_IP>
Cheers!

Django Application running on Ubuntu VPS: This site can’t be reached

I am running an Ubuntu 16.04 cloud VPS server. I've set up a venv, activated it, and installed Django.
I run the server with
python3 manage.py runserver 0.0.0.0:8000
I am trying to access this application from a remote computer (not inside the same LAN); I'm trying to make the application visible to the world outside the VPS and its LAN. When I try to access the site in my home computer's browser, like xx.xx.xxx.xxx:8000, I get the error:
This site can’t be reached. http://xx.xx.xxx.xxx:8000/ is unreachable.
Now I've tried a traceroute and it seems to reach the server ok. I also did
sudo ufw enable
sudo ufw 8000 allow
sudo iptables -S | grep 8000 (and see the proper entries)
In the settings file I have:
ALLOWED_HOSTS = ["*", "0.0.0.0", "localhost", "xx.xx.xxx.xxx","xxx.temporary.link"]
If I wget localhost:8000 I get a response fine. I have tried doing all of the above as root and as another dedicated user but it makes no difference.
I ran through this guide
https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-16-04
and I still have the same issue.
Does anyone have any other ideas? Thanks in advance
Try:
sudo ufw allow 8000
Not:
sudo ufw 8000 allow
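To double-check that the rule actually exists (the original command is not valid ufw syntax, so it likely did nothing), something like this can be run on the VPS:
sudo ufw status numbered    # port 8000 should now be listed as ALLOW IN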

Vagrant, Centos7, Nginx, Uwsgi, Django. SSH + Nginx Connection Reset then Connection Refused

Solution
Always make sure you reserve your IPs when using a Static IP
Versions
VirtualBox Version: 6.0.0 (I think)
Vagrant Version: 2.2.3
CentosBox: "centos/7"
Nginx Version: 1.16.1
uWSGI Version: 2.0.18
Django Version: 2.2.1
Background
I have two Vagrant boxes running, a test one and a production one. The only difference is the IP and core count. I've set up both so I can SSH directly into the boxes, instead of having to SSH into the host machine and then run 'vagrant ssh'.
General Issue
The production version will randomly boot me out of the SSH session (Connection reset by IP port 22) and then I'll get Connection refused. If I SSH into the host machine and then run 'vagrant ssh' I can still get in and everything seems to be fine; I can even still ping other computers on the network. But I can't access it from outside the host, and this goes for the nginx server as well ("IP refused to connect." in Chrome).
The issue will occasionally fix itself in a couple of minutes, but the majority of the time it requires a 'vagrant destroy' and 'vagrant up --provision' / recreating the box. I also occasionally get booted out of the host machine as well as the test box, but I can still access both externally afterwards (even the nginx server on test). I'm working over a VPN and I also occasionally get booted out of that as well, but I can reconnect when I notice.
VagrantFile
# -*- mode: ruby -*-
# vi: set ft=ruby :
# Please don't change it unless you know what you're doing.
Vagrant.configure("2") do |config|
  config.vm.box = "centos/7"
  config.vm.hostname = "DjangoProduction"

  # Disable automatic box update checking. If you disable this, then
  # boxes will only be checked for updates when the user runs
  # `vagrant box outdated`. This is not recommended.
  # config.vm.box_check_update = false

  # Create a public network, which generally matched to bridged network.
  # Bridged networks make the machine appear as another physical device on
  # your network.
  config.vm.network "public_network", ip: "IP"

  # Share an additional folder to the guest VM. The first argument is
  # the path on the host to the actual folder. The second argument is
  # the path on the guest to mount the folder. And the optional third
  # argument is a set of non-required options.
  config.vm.synced_folder "./", "D:/abcd", type: "sshfs", group: 'vagrant', owner: 'vagrant'

  # Provider-specific configuration so you can fine-tune various
  # backing providers for Vagrant. These expose provider-specific options.
  # Example for VirtualBox:
  #
  config.vm.provider "virtualbox" do |v|
    v.name = "DjangoProduction"
    # test has these two commented out
    v.memory = 6000
    v.cpus = 4
  end
  #
  # View the documentation for the provider you are using for more
  # information on available options.

  ## Keys
  ### For SSH directly into the Box
  # Work Laptop Key
  config.vm.provision "file", source: ".provision/keys/work.pub", destination: "~/.ssh/work.pub"
  config.vm.provision "shell", inline: "cat ~vagrant/.ssh/work.pub >> ~vagrant/.ssh/authorized_keys"
  # Personal Laptop Key
  config.vm.provision "file", source: ".provision/keys/msi.pub", destination: "~/.ssh/msi.pub"
  config.vm.provision "shell", inline: "cat ~vagrant/.ssh/msi.pub >> ~vagrant/.ssh/authorized_keys"
  ##

  required_plugins = %w( vagrant-sshfs )
  required_plugins.each do |plugin|
    exec "vagrant plugin install #{plugin};vagrant #{ARGV.join(" ")}" unless Vagrant.has_plugin? plugin || ARGV[0] == 'plugin'
  end

  # Enable provisioning with a shell script. Additional provisioners such as
  # Puppet, Chef, Ansible, Salt, and Docker are also available. Please see the
  # documentation for more information about their specific syntax and use.
  config.vm.provision :shell, path: ".provision/boot.sh"
end
boot.sh
# networking
sudo yum -y install net-tools
ifconfig eth1 IP netmask 255.255.252.0
route add -net 10.1.0.0 netmask 255.255.252.0 dev eth1
route add default gw 10.1.0.1
# I manually set the gateway so It can be accessed through VPN
## install, reqs + drop things to places - gonna leave all that out
Error messages
Django
This issue started popping up earlier this week, with Django sending me error emails saying the following. It's always random URLs; there's no consistency.
OperationalError at /
(2003, 'Can\'t connect to MySQL server on \'external-ip\' (110 "Connection timed out")')
I used to get this email once every other day and paid it no attention, but currently it's sending me at least 20 a day and the site is almost unusable - it's either really slow or I get Chrome errors: 'ERR_CONNECTION_TIMED_OUT', 'ERR_CONNECTION_REFUSED' or 'ERR_CONNECTION_RESET'. It will be fine for an hour and then everything hits the fan.
I originally thought it was an issue with the DB, uWSGI or Django, but working with it yesterday I realised there was a correlation between the timeouts and getting kicked out of SSH.
Nginx Server Settings (I haven't changed nginx.conf)
upstream django {
    server unix:///vagrant/abcd.sock;
}

server {
    listen 8080;
    return 301 https://$host$request_uri;
}

server {
    charset utf-8;
    listen 443 ssl;
    ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
    ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;

    location / {
        uwsgi_pass django;
        include /vagrant/project/uwsgi_params;
        uwsgi_read_timeout 3600;
        uwsgi_ignore_client_abort on;
    }

    location /static {
        alias /vagrant/static;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /vagrant/templates/core;
    }
}
UWSGI command used
uwsgi --socket abcd.sock --module project.wsgi --chmod-socket=664 --master --processes 8 --threads 4 --buffer-size=65535 --lazy
Nginx Error Logs
Nothing.
Messages file
only shows the '(110 "Connection timed out")' dump when it happens
Can you test the behaviour with the "config.vm.synced_folder..." line commented out?
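That is, something along these lines in the Vagrantfile, followed by a vagrant reload (just a sketch of the suggested test, not a definitive fix):
# config.vm.synced_folder "./", "D:/abcd", type: "sshfs", group: 'vagrant', owner: 'vagrant'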

Host key verification failed in google compute engine based mpich cluster

TLDR:
I have 2 Google Compute Engine instances and I've installed MPICH on both.
When I try to run a sample I get "Host key verification failed."
Detailed version:
I've followed this tutorial in order to get this task done: http://mpitutorial.com/tutorials/running-an-mpi-cluster-within-a-lan/.
I have 2 Google Compute Engine VMs with Ubuntu 14.04 (the Google Cloud account is a trial one, btw). I've downloaded this version of MPICH on both instances: http://www.mpich.org/static/downloads/3.3rc1/mpich-3.3rc1.tar.gz and installed it using these steps:
./configure --disable-fortran
sudo make
sudo make install
This is the way the /etc/hosts file looks on the master-node:
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
169.254.169.254 metadata.google.internal metadata
10.128.0.3 client
10.128.0.2 master
10.128.0.2 linux1.us-central1-c.c.ultimate-triode-161918.internal linux1  # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
And this is the way the /etc/hosts file looks on the client-node:
127.0.0.1 localhost
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
ff02::3 ip6-allhosts
169.254.169.254 metadata.google.internal metadata
10.128.0.2 master
10.128.0.3 client
10.128.0.3 linux2.us-central1-c.c.ultimate-triode-161918.internal linux2  # Added by Google
169.254.169.254 metadata.google.internal # Added by Google
The rest of the steps involved adding a user named mpiuser on both nodes, configuring passwordless SSH authentication between the nodes, and configuring a cloud shared directory between them.
The configuration worked up to this point. I downloaded this file https://raw.githubusercontent.com/pmodels/mpich/master/examples/cpi.c to /home/mpiuser/cloud/mpi_sample.c and compiled it this way:
mpicc -o mpi_sample mpi_sample.c
and issued this command on the master node while logged in as the mpiuser:
mpirun -np 2 -hosts client,master ./mpi_sample
and I got this error:
Host key verification failed.
What's wrong? I've tried to troubleshoot this problem for more than 2 days but I can't find a valid solution.
It turned out that my passwordless SSH wasn't configured properly. I created 2 new instances and did the following things to get working passwordless SSH and thus a working version of that sample. The following steps were executed on Ubuntu Server 18.04.
First, by default, instances on Google Cloud have the PasswordAuthentication setting turned off. On the client server do:
sudo vim /etc/ssh/sshd_config
and change PasswordAuthentication no to PasswordAuthentication yes. Then
sudo systemctl restart ssh
Generate an SSH key on the master server with:
ssh-keygen -t rsa -b 4096 -C "user.mail#server.com"
Copy the generated SSH key from the master server to the client:
ssh-copy-id client
Now you have fully functional passwordless SSH from master to client. However, MPICH still failed.
The additional step was to copy the public key to the ~/.ssh/authorized_keys file, both on master and client. So execute this command on both servers:
sudo cat .ssh/id_rsa.pub >> .ssh/authorized_keys
Then make sure the /etc/ssh/sshd_config files on both the client and the master have the following configuration:
PasswordAuthentication no
ChallengeResponseAuthentication no
UsePAM no
Restart the ssh service from both client and master
sudo systemctl restart ssh
And that's it, mpich works smoothly now.
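As a quick sanity check (a sketch, assuming the same mpiuser account and the host names from the /etc/hosts files above):
# run from the master node as mpiuser
ssh client hostname                               # should print the client's hostname with no password or host-key prompt
mpirun -np 2 -hosts client,master ./mpi_sample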