Cannot access GCP VM instance via external IP although apache2 started - google-cloud-platform

I've created a test VM instance on GCP:
installed apache2, and the service started successfully (screenshot: apache2 started)
the firewall is set up with the defaults (screenshot: firewall setup)
the Apache ports config (screenshot: port config)
the external IP (screenshot: external ip)
It seems OK, but I cannot access the instance via the external IP as the tutorial says I should be able to: https://cloud.google.com/community/tutorials/setting-up-lamp
Please give me some suggestions, thanks.
=================================
curl --head http://35.240.177.89/
curl: (7) Failed to connect to 35.240.177.89 port 80: Operation timed out
curl --head https://35.240.177.89/
curl: (7) Failed to connect to 35.240.177.89 port 443: Operation timed out
netstat -lntup: (screenshot of the output, showing Apache listening on :::80)

Assuming that your Linux has dual stack enabled, the :::80 in the netstat output means that Apache2 is listening on port 80 on all network interfaces, for both IPv4 and IPv6. You can check with the following command; a value of 0 means that dual stack is enabled.
cat /proc/sys/net/ipv6/bindv6only
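For illustration, on a dual-stack system the relevant netstat line typically looks like this (the PID here is made up; the :::80 part is what matters):
sudo netstat -lntup | grep ':80'
tcp6       0      0 :::80       :::*       LISTEN      1234/apache2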
Given the above, most likely your system does not have an iptables rule allowing port 80. Assuming Ubuntu 18.04 (modify for your distribution):
Backup the iptables rules:
sudo iptables-save > iptables.backup
Allow ingress port 80:
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
Optionally allow ingress port 443:
sudo iptables -I INPUT -p tcp --dport 443 -j ACCEPT
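To double-check that the rules took effect, and to roll back if something breaks, a small sketch using the backup made above:
sudo iptables -L INPUT -n --line-numbers   # the ACCEPT rules for ports 80/443 should appear at the top
sudo iptables-restore < iptables.backup    # restores the saved rules if needed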

Related

Why can't I access port 80 when I can access port 8080?

I have deployed my web application via AWS EC2 and have made inbound rules as below (screenshot: Inbound Rules).
I can now access through myIP:8080 but I get an error with myIP or myIP:80. The error message I get is: This site can’t be reached. refused to connect. Try: Checking the connection. Checking the proxy and the firewall. ERR_CONNECTION_REFUSED
What am I doing wrong in here?
I have managed to resolve the issue by port forwarding with the following command:
sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 -j REDIRECT --to-port 8080
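Note that a rule added with iptables alone does not survive a reboot. On Debian/Ubuntu (an assumption about the distribution), one common way to persist it is:
sudo apt-get install iptables-persistent
sudo netfilter-persistent save   # writes the current rules to /etc/iptables/rules.v4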

How to open port 80 on AWS EC2

I want to open port 80 to allow HTTP connections on my EC2 server, but when I enter "telnet xx.xx.xx.xx 80" in a terminal, the following is displayed:
"Trying xx.xx.xx.xx..."
telnet: Unable to connect to remote host: Connection timed out
In AWS I've opened port 80 by defining an Inbound Rule on the Security group (only one security group is defined for this EC2 server)
I'm using the Public IPv4 address to make a telnet connection
I noticed you have a fresh install -- fresh installs do not have software listening over HTTP by default.
If there is no application listening on a port, incoming packets to that port will simply be rejected by the computer's operating system. Ports can be "closed" through the use of a firewall, but you have yours disabled, so your ports are open, just unresponsive, which makes them appear closed.
If the ports are allowed through the firewall in the terminal using
sudo apt-get install ufw
sudo ufw allow ssh
sudo ufw allow https
sudo ufw allow http
sudo ufw enable   # without this step the allow rules have no effect
sudo reboot
and enabled as a rule in the AWS console, then the port is open and just not responding, so it is seen as closed. By installing nginx, or anything else that binds to port 80, external requests to that port will connect successfully, and the port will therefore be recognized as open. The reason ssh is recognized as open is that 1. it is allowed through the firewall, and 2. sshd is always listening (unlike port 80!).
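As a quick sanity check after the ufw steps above (a sketch; ufw must be enabled for this to report anything):
sudo ufw status verbose   # should list 22, 80 and 443 as allowed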
Before installing nginx, even though the ports are allowed through the firewall: (screenshot: port tester reports port 80 closed)
sudo apt-get install nginx
sudo ufw allow 'Nginx HTTP'
sudo systemctl status nginx
(more nginx info)
After: (screenshot: port tester reports port 80 open)
You can check this yourself with a simple online port-tester tool.
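Once nginx (or anything else) is bound to port 80, you can also confirm from the outside with a plain curl against the instance's public IP (substitute your own address):
curl -I http://xx.xx.xx.xx/   # should now return an HTTP status line instead of timing out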

What is the simplest way to get vagrant/virtualbox to access host services?

I've been reading through many examples (both here and through various blogs and virtualbox/vagrant documentation) and at this point I think I should be able to do this.
What I ultimately would like to do is communicate with my docker daemon on my host machine and all the subsequent services I spin up arbitrarily.
To try to get this to work, I run the simple nginx container on my host and confirm it works:
$ docker run --name some-nginx -d -p 8080:80 docker.io/library/nginx:1.17.9
$ curl localhost:8080
> Welcome to nginx!
In my Vagrantfile I've defined my host-only network:
config.vm.network "private_network", ip: "192.168.50.4",
virtualbox__intnet: true
Now in my guest vagrant box, I expect that I should be able to access this same port:
$ curl localhost:8080
> curl: (7) Failed to connect to localhost port 8080: Connection refused
$ curl 127.0.0.1:8080
> curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
$ curl 192.168.50.4:8080 # I hope not, but maybe this will work?
> curl: (7) Failed to connect to 192.168.50.4 port 8080: Connection refused
If you're "inside" the Vagrant guest machine, localhost will be the local loopback adapter of THAT machine and not of your host.
In VirtualBox virtualization, which you are using, you can always connect to services running on your hosts' localhost via the 10.0.2.2 address. See: https://www.virtualbox.org/manual/ch06.html#network_nat
So in your case, with the web server running on port 8080 on your host, using
curl 10.0.2.2:8080
would mean success!
Run vagrant up to start the VM using NAT as the network interface, which means the guest VM reaches the network through the host.
vagrant ssh into the VM, and install net-tools if the machine doesn't have the netstat tool.
Use netstat -rn to find the routable gateways. Below, 10.0.2.2 and 192.168.3.1 are the gateways present in the guest VM.
[vagrant@localhost ~]$ netstat -rn
Kernel IP routing table
Destination     Gateway         Genmask         Flags   MSS Window  irtt Iface
0.0.0.0         10.0.2.2        0.0.0.0         UG        0 0          0 eth0
0.0.0.0         192.168.3.1     0.0.0.0         UG        0 0          0 eth2
10.0.2.0        0.0.0.0         255.255.255.0   U         0 0          0 eth0
192.168.3.0     0.0.0.0         255.255.255.0   U         0 0          0 eth2
192.168.33.0    0.0.0.0         255.255.255.0   U         0 0          0 eth1
Go to the host and run ifconfig. Find the gateway 192.168.3.1 shared with the host; the host itself has an IP of the form 192.168.3.x. Make sure the service on the host can be reached at 192.168.3.x:
try it on the host with curl -v http://<192.168.3.x>:<same port on host>; if it can be accessed, OK.
Now go to the guest VM and try curl -v http://<192.168.3.x>:<same port on host>. If it can be accessed there too,
you can now access services on the host from the guest VM.
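Putting those steps together as one runnable sequence (the 192.168.3.5 address and port 8080 are illustrative; substitute whatever netstat/ifconfig actually report on your machines):
# on the guest: list the candidate gateways
netstat -rn
# on the host: confirm the service answers on the shared-subnet address
curl -v http://192.168.3.5:8080
# back on the guest: the same URL should now work
curl -v http://192.168.3.5:8080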

Google Cloud direct default port to GlassFish port

A GlassFish application hosted in a Google Cloud VM Instance is running in port 8080. I need to direct traffic of default port 80 to port 8080. What is the best way to achieve that?
I tried to set port 80 as the GlassFish port, but that failed because on Ubuntu an unprivileged process can't listen on a port lower than 1024.
You can use the Linux feature iptables to redirect traffic received on one port to a different port.
sudo iptables -t nat -I PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 8080
sudo iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
/etc/init.d/iptables save
Double-check the documentation as you do not mention the version of Linux that you are running.
Create an instance group for your VM. Then create a load balancer that directs external port 80 traffic to port 8080 on your VM.
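For reference, a minimal gcloud sketch of that load-balancer setup (all names, the zone, and the instance name are hypothetical placeholders; adjust to your project):
gcloud compute instance-groups unmanaged create glassfish-group --zone=us-central1-a
gcloud compute instance-groups unmanaged add-instances glassfish-group --zone=us-central1-a --instances=my-vm
gcloud compute instance-groups unmanaged set-named-ports glassfish-group --zone=us-central1-a --named-ports=http:8080
gcloud compute health-checks create http glassfish-hc --port=8080
gcloud compute backend-services create glassfish-bes --protocol=HTTP --port-name=http --health-checks=glassfish-hc --global
gcloud compute backend-services add-backend glassfish-bes --instance-group=glassfish-group --instance-group-zone=us-central1-a --global
gcloud compute url-maps create glassfish-map --default-service=glassfish-bes
gcloud compute target-http-proxies create glassfish-proxy --url-map=glassfish-map
gcloud compute forwarding-rules create glassfish-fr --global --target-http-proxy=glassfish-proxy --ports=80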

psycopg2/psql unable to connect to the postgres db

I have two VPSes:
webserver 10.0.0.5
dbserver 10.0.0.6
I've set a few firewall rules on them:
#webserver to allow for the 10.0.0.6 dbserver
iptables -A INPUT -p tcp -s 10.0.0.6 --dport 5432 -m conntrack --ctstate ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -d 10.0.0.6 --sport 5432 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
and
#dbserver to allow for the 10.0.0.5 webserver
iptables -A INPUT -p tcp -s 10.0.0.5 --dport 5432 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
iptables -A OUTPUT -p tcp -d 10.0.0.5 --sport 5432 -m conntrack --ctstate NEW,ESTABLISHED -j ACCEPT
I'm using Azure VMs with static IPs. I don't believe I need to define any security rules on their firewall, because the traffic is internal to the hypervisor group (I can ssh from one VM to another fine).
I can't ./manage.py migrate my django app because psycopg2 can't connect to the database server. (I believe my settings.py is correct.)
The relevant entry in pg_hba.conf:
#accept connections from the 10.0.0.0 subnet
local all 10.0.0.0/24 trust
The relevant entry in postgresql.conf:
listen_addresses = 'localhost, 10.0.0.5'
I can connect with psql on the dbserver locally. I am unable to connect with psql -h 10.0.0.6 -U postgres -W over the network from the webserver. Just to make sure it isn't the firewall rules, when I flush all rules from the db server and try to connect from the webserver, it tells me:
psql: could not connect to server: Connection refused
Is the server running on host "10.0.0.6" and accepting
TCP/IP connections on port 5432?
nmap 10.0.0.6 -p5432 says that:
Starting Nmap 6.47 ( http://nmap.org ) at 2016-04-11 05:42 UTC
Nmap scan report for 10.0.0.6
Host is up (0.0026s latency).
PORT     STATE  SERVICE
5432/tcp closed postgresql
So clearly postgres is not listening on 5432 like it's supposed to be. I guess I have something wrong with pg_hba.conf or postgresql.conf, but I can't see what.
Edit: I opened up port 5432 to the 10.0.0.0/24 subnet in the hypervisor firewall just in case. Didn't make a difference.
I was testing postgresql.conf, and it turns out that listen_addresses is the list of local addresses the server binds to, not the list of allowed source addresses. Changing this entry to 10.0.0.6 (the dbserver's own address) fixed the problem.
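For reference, the working dbserver configuration would look roughly like this (a sketch; the config file path and service name vary by PostgreSQL version and distribution):
# in postgresql.conf: bind to loopback plus the dbserver's own address
listen_addresses = 'localhost, 10.0.0.6'
sudo service postgresql restart
sudo netstat -lntp | grep 5432   # should now show 10.0.0.6:5432 in LISTEN state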