GCP firewall rule for TCP port is not working - google-cloud-platform

I have a VM on which I installed Postgres, and now I'm trying to connect to this PG instance from outside. I created a firewall rule that opens port 5432 to any source IP, like below.
My instance has the rule.
But when I try to check whether the port is open, it fails:
$ nc -zv public-ip 5432
nc: connectx to public-ip port 5432 (tcp) failed: Connection refused
$ nc -zv public-ip 22
Connection to public-ip port 22 [tcp/ssh] succeeded!
$ psql -h public-ip -p 5432 --username=myuser --dbname=mydb --password
Password:
psql: error: connection to server at "public-ip", port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
I tried restarting the VM, but that didn't help. What am I missing?

Connection refused means your packets are reaching the host, but no process is listening on that port, so the kernel rejects the connection attempt. This means the firewall is probably not the problem: a firewall issue usually results in a timeout, not a refusal.
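You can reproduce the difference locally. A sketch, assuming nothing on your machine listens on TCP port 1 (it is reserved for tcpmux and almost never used):

```shell
# connecting to a port with no listener fails instantly: the kernel answers
# with a TCP RST, which the client reports as "Connection refused"
if timeout 3 bash -c 'exec 3<>/dev/tcp/127.0.0.1/1' 2>msg.txt; then
  echo "connected"
else
  grep -m1 -o 'Connection refused' msg.txt
fi
# a firewall that silently DROPs packets would instead leave the same
# command hanging until the TCP connect timeout fires
```

The nc -zv checks in the question are reporting exactly the first case: an instant refusal, not a hang.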
Edit the postgresql.conf configuration file:
listen_addresses = '*'
18.3.1. Connection Settings
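A sketch of that edit, assuming a Debian-style layout (the real file lives somewhere like /etc/postgresql/14/main/postgresql.conf; a mock copy is used here so the commands are safe to try):

```shell
conf=postgresql.conf                                   # stand-in for the real file
printf "#listen_addresses = 'localhost'\n" > "$conf"   # mock of the shipped default line
# uncomment the setting and bind on all interfaces, not just loopback
sed -i "s/^#*listen_addresses.*/listen_addresses = '*'/" "$conf"
grep '^listen_addresses' "$conf"
# afterwards restart the server, e.g.: sudo systemctl restart postgresql
```

Remote clients additionally need a matching host rule in pg_hba.conf, and the change only takes effect after a restart.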

Related

Ingress minikube implementation connection refused when trying to connect by local ip

I have been trying to connect to my minikube cluster via local IP address. I get the following error:
> curl 192.168.68.101:3000
curl: (7) Failed to connect to 192.168.68.101 port 3000: Connection refused
> curl 192.168.68.101:80
curl: (7) Failed to connect to 192.168.68.101 port 80: Connection refused
Whereas the following works (3000 is mapped back to port 80 via kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 3000:80):
> curl localhost:3000
200!
> curl localhost:80
200!
Seems like this deployment of the nginx controller is bound to work only on localhost? Here's what my ingress looks like:
NAME         CLASS   HOSTS     ADDRESS     PORTS     AGE
my-ingress   nginx   xyz.com   localhost   80, 443   19m
And it's not that the ports are blocked: I run a different service (outside Kubernetes) on port 3000 and it is reachable.
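The 200s on localhost next to refusals on 192.168.68.101 point at the usual cause: the forwarded port is bound to 127.0.0.1 only. kubectl port-forward binds to loopback by default, and its --address flag widens the bind. The effect of the bind address can be sketched with any listener (python3 and port 18123 are arbitrary choices here):

```shell
# a server bound to 127.0.0.1 answers on loopback but refuses connections on
# every other interface; bound to 0.0.0.0 it answers on the LAN IP as well
python3 -m http.server 18123 --bind 127.0.0.1 >/dev/null 2>&1 &
srv=$!
sleep 1
timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/18123' && echo "loopback: open"
kill "$srv"
# the analogous fix for the forward in the question (check the flag against
# your kubectl version):
# kubectl port-forward --address 0.0.0.0 --namespace=ingress-nginx \
#   service/ingress-nginx-controller 3000:80
```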

Lost control of EC2 instance after listen 127.0.0.1

I was following a tutorial and by mistake I typed the command listen 127.0.0.1 and restarted the sshd service. When I tried to log in again, I got:
ssh: connect to host x.x.x.x port 22: Connection refused
Have I lost access to my server forever?
Is there any way to connect to the EC2 instance and fix it?
I restarted the server, then got the same:
OpenSSH_8.1p1, LibreSSL 2.7.3
...
debug2: resolve_canonicalize: hostname x.x.x.x is address
debug2: ssh_connect_direct
debug1: Connecting to x.x.x.x port 22.
debug1: connect to address x.x.x.x port 22: Connection refused
ssh: connect to host x.x.x.x port 22: Connection refused
Is there a way to reset /etc/ssh/sshd_config from the AWS console?
Sorry, the problem was probably not the listen command but the PasswordAuthentication yes entry in sshd_config.
I believe the solution will be to create a new instance, attach this volume to it, change the file, and then move the volume back to the original instance. Is there a simpler solution?
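That detach-edit-reattach route is the standard rescue path. The edit step might look like the sketch below; /mnt/rescue is an assumed mount point, and a local mock file stands in for /mnt/rescue/etc/ssh/sshd_config so the commands can be tried safely:

```shell
cfg=sshd_config   # on the rescue box this would be /mnt/rescue/etc/ssh/sshd_config
printf 'listen 127.0.0.1\nPort 22\nPasswordAuthentication yes\n' > "$cfg"  # mock
sed -i '/^listen /d' "$cfg"                 # drop the invalid directive
grep '^listen ' "$cfg" || echo "bad line removed"
# after moving the volume back, validate before restarting:
# sudo sshd -t && sudo systemctl restart sshd
```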

I cannot access the website with a public IP

I am setting up a new EC2 instance with Ubuntu, but I am getting a weird error: I cannot access the public IP; it says the connection was refused.
My security group has these ports enabled:
HTTP TCP 80 0.0.0.0/0 -
HTTP TCP 80 ::/0 -
SSH TCP 22 0.0.0.0/0 -
SSH TCP 22 ::/0 -
My public IP is: http://3.16.154.123/
The EC2 instance is running (its status is green) and that is the public IP it gives me... so I wonder, what is the problem? Why can I not access the public IP? Why does it say the connection was refused (ERR_CONNECTION_REFUSED in the browser)?
Thanks.
telnet 3.16.154.123 22
Trying 3.16.154.123...
Connected to 3.16.154.123.
Escape character is '^]'.
SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
^]
Your SSH is working as expected, but on port 80 it fails:
telnet 3.16.154.123 80
Trying 3.16.154.123...
telnet: Unable to connect to remote host: Connection refused
Can you check whether any service is running on the host itself using telnet localhost 80? If that works, it is worth checking the NACLs at the VPC level for anything blocking port 80.
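If telnet is not installed on the instance, bash's /dev/tcp gives an equivalent local check (a sketch; the wording of the two messages is mine):

```shell
# local equivalent of "telnet localhost 80"
if timeout 2 bash -c 'exec 3<>/dev/tcp/127.0.0.1/80' 2>/dev/null; then
  echo "a service is listening on :80 - look at the NACLs next"
else
  echo "nothing is listening on :80 - start or fix the web server first"
fi
```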

telnet using internal ip from command line in amazon aws

On one of my Amazon AWS servers I installed a memcached server on port 11211.
Now I SSH to that server and run this command:
telnet 127.0.0.1 11211
I get connected and can access the memcached data.
If I use the private or public IP instead of 127.0.0.1:
telnet <private ip> 11211
I get this:
telnet: Unable to connect to remote host: Connection refused
Let's call this server the master server, where memcached is installed.
If I now SSH to another app server and run this command:
telnet <private ip> 11211
I get the same error. But the master server's security group has this inbound rule:
All traffic All All sg-xxxxxx (app server)
Shouldn't we have access to all the services running on our master server from the app servers?
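The security group rule only matters once traffic leaves the host, yet the refusal happens even on the master server itself when you use the private IP; that says the listener never binds that interface. On Debian/Ubuntu, memcached ships bound to loopback via the -l line in /etc/memcached.conf, which would match both symptoms here. A sketch of the change, on a mock copy of that file:

```shell
cfg=memcached.conf                      # real path is typically /etc/memcached.conf
printf -- '-l 127.0.0.1\n' > "$cfg"     # mock of the shipped default
sed -i 's/^-l .*/-l 0.0.0.0/' "$cfg"    # or bind the private IP specifically
grep '^-l' "$cfg"
# then: sudo systemctl restart memcached
```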

Can't connect to my Amazon EC2 instance. It pings but the connection times out

I set up my first EC2 instance on AWS on the free tier, using Ubuntu as the OS. I followed all the steps and my instance is up.
I've built the following security rules:
Ports Protocol Source Personal_SG_NVirginia
80 tcp 0.0.0.0/0 ✔
22 tcp 0.0.0.0/0 ✔
3306 tcp 0.0.0.0/0 ✔
443 tcp 0.0.0.0/0 ✔
-1 icmp 0.0.0.0/0 ✔
I can ping my instance, but I cannot connect to it using PuTTY, ssh on my Linux box, or even the miniterm console.
$ ssh -vv -i "xxxx.pem" ubuntu@52.91.95.205
OpenSSH_6.6.1, OpenSSL 1.0.1f 6 Jan 2014
debug1: Reading configuration data /etc/ssh/ssh_config
debug2: ssh_connect: needpriv 0
debug1: Connecting to 52.91.95.205 [52.91.95.205] port 22
debug1: connect to address 52.91.95.205 port 22: Connection timed out
ssh: connect to host 52.91.95.205 port 22: Connection timed out
The same happens if I use the DNS name.
Miniterm console error:
Connection to 52.91.95.205: Connection timed out: no further information
I have already restarted the instance and recreated it, but no success at all.
Help appreciated.
Verify the IP address is valid
$ ssh -vv -i "xxxx.pem" ubuntu@54.210.1133.50
Is this hand-typed or did you copy and paste? The IP address is invalid ("1133" is > 255) and doesn't match your debug output. Make sure you're connecting to the instance's correct public IP address.
Verify you are using the correct user
Are you sure the initial user is "ubuntu"? Some EC2 Linux instances use "ec2-user" for the initial setup.
Try: ssh -vv -i "xxxx.pem" ec2-user@123.123.123.123
Verify default SSH port is not blocked (correct solution)
Per discussion below, it turns out that port 22 was blocked by the user's ISP. Switching to a non-standard port (2022) resolved the issue.
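The port switch itself is a one-line change on the server plus a -p flag on the client; a sketch on a mock sshd_config (the real file is /etc/ssh/sshd_config):

```shell
cfg=sshd_config                           # mock stand-in for /etc/ssh/sshd_config
printf '#Port 22\n' > "$cfg"
sed -i 's/^#*Port .*/Port 2022/' "$cfg"   # move sshd to port 2022
grep '^Port' "$cfg"
# open tcp/2022 in the security group as well, then restart sshd and connect:
# ssh -p 2022 -i "xxxx.pem" ubuntu@52.91.95.205
```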