Ingress minikube implementation connection refused when trying to connect by local ip - kubectl

I have been trying to connect to my minikube cluster via local IP address. I get the following error:
> curl 192.168.68.101:3000
curl: (7) Failed to connect to 192.168.68.101 port 3000: Connection refused
> curl 192.168.68.101:80
curl: (7) Failed to connect to 192.168.68.101 port 80: Connection refused
Whereas the following works (3000 is mapped back to port 80 via kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 3000:80):
> curl localhost:3000
200!
> curl localhost:80
200!
It seems like this implementation of the nginx controller is bound to work only on localhost? Here's what my ingress looks like:
NAME         CLASS   HOSTS     ADDRESS     PORTS     AGE
my-ingress   nginx   xyz.com   localhost   80, 443   19m
And it's not that the ports are not open: I ran a different service (outside Kubernetes) on port 3000 and it works.
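As the answers below note, "connection refused" generally means nothing is listening on that IP:port pair. A diagnostic sketch (not a definitive fix) for checking where the ingress controller actually listens; the service and namespace names are taken from the port-forward command above, and note that `kubectl port-forward` binds to 127.0.0.1 by default:

```shell
# The minikube node IP is usually NOT the host's LAN IP (192.168.68.101):
minikube ip
kubectl get svc -n ingress-nginx ingress-nginx-controller

# On the host: is anything bound to port 3000 on a non-loopback address?
ss -ltn 'sport = :3000'

# To make the port-forward reachable from other machines, bind all interfaces:
kubectl port-forward --address 0.0.0.0 \
  --namespace=ingress-nginx service/ingress-nginx-controller 3000:80
```

With `--address 0.0.0.0`, `curl 192.168.68.101:3000` from the LAN should reach the forwarded port rather than being refused.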

Related

GCP firewall rule for TCP port is not working

I have a VM on which I installed Postgres, and now I'm trying to connect to this PG instance from outside. I created a firewall rule that opens port 5432 to any source IP, like below.
My instance has the rule
But when I try to check if the port is open it fails for me
$ nc -zv public-ip 5432
nc: connectx to public-ip port 5432 (tcp) failed: Connection refused
$ nc -zv public-ip 22
Connection to public-ip port 22 [tcp/ssh] succeeded!
$ psql -h public-ip -p 5432 --username=myuser --dbname=mydb --password
Password:
psql: error: connection to server at "public-ip", port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
I tried restarting the VM, but that didn't help. What am I missing?
Connection refused means the host is reachable but no process is listening on that port, so the kernel actively refuses the connection attempt. This means the firewall is probably not the problem; a firewall problem usually results in a timeout error.
Edit the postgresql.conf configuration file:
listen_addresses = '*'
18.3.1. Connection Settings
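A minimal sketch of applying the change and verifying it. The config path assumes a Debian/Ubuntu package install of PostgreSQL 14 (adjust the version directory), and remember that remote clients also need a matching entry in pg_hba.conf:

```shell
# Make Postgres listen on all interfaces, not just localhost
# (assumed path for PG 14 on Ubuntu; adjust for your install)
sudo sed -i "s/^#\?listen_addresses.*/listen_addresses = '*'/" \
  /etc/postgresql/14/main/postgresql.conf

# pg_hba.conf must also allow the client's address, e.g. a line like:
#   host  mydb  myuser  0.0.0.0/0  md5

sudo systemctl restart postgresql

# Verify it is now bound to 0.0.0.0:5432 rather than 127.0.0.1:5432
sudo ss -ltn 'sport = :5432'
```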

Can not access to GCP VM Instance via external IP although apache2 started

I've created a test VM instance on GCP:
installed apache2, and the service started successfully: apache2 started
the firewall is set up as default: firewall setup
the apache ports config: port config
external IP: external ip
It seems OK, but I can not access it via the external IP as described in the document https://cloud.google.com/community/tutorials/setting-up-lamp
Please give me some suggestions, thanks.
=================================
curl --head http://35.240.177.89/
curl: (7) Failed to connect to 35.240.177.89 port 80: Operation timed out
curl --head https://35.240.177.89/
curl: (7) Failed to connect to 35.240.177.89 port 443: Operation timed out
netstat -lntup:
result
Assuming that your Linux has dual stack enabled, the netstat with :::80 means that Apache2 is listening on both IPv4 and IPv6 port 80 for all network interfaces. You can check with the following command. A 0 value means that dual stack is enabled.
cat /proc/sys/net/ipv6/bindv6only
Given the above, most likely your system does not have an iptables rule allowing port 80. Assuming Ubuntu 18.04 (modify for your distribution):
Backup the iptables rules:
sudo iptables-save > iptables.backup
Allow ingress port 80:
sudo iptables -I INPUT -p tcp --dport 80 -j ACCEPT
Optionally allow ingress port 443:
sudo iptables -I INPUT -p tcp --dport 443 -j ACCEPT
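Two follow-up steps worth sketching: confirming the rules landed at the top of the INPUT chain, and persisting them. Rules added with `-I` are lost on reboot; on Ubuntu 18.04 the `iptables-persistent` package (an assumption about your setup, not something from the question) can save them:

```shell
# Confirm the new ACCEPT rules sit before any REJECT/DROP rules
sudo iptables -L INPUT -n --line-numbers | head

# Persist the rules across reboots (Ubuntu/Debian)
sudo apt-get install -y iptables-persistent
sudo netfilter-persistent save
```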

I can not access the website with a public IP

I am setting up a new EC2 instance with Ubuntu, but I am getting a weird error: I can not access the public IP, it says connection refused.
My security group has these ports enabled:
HTTP TCP 80 0.0.0.0/0 -
HTTP TCP 80 ::/0 -
SSH TCP 22 0.0.0.0/0 -
SSH TCP 22 ::/0 -
My public IP is: http://3.16.154.123/
The EC2 instance is running (it shows green), and that is the public IP it gives me... so I wonder what the problem is. Why can I not access the public IP? Why does it say connection refused (or, more precisely, ERR_CONNECTION_REFUSED)?
Thanks.
telnet 3.16.154.123 22
Trying 3.16.154.123...
Connected to 3.16.154.123.
Escape character is '^]'.
SSH-2.0-OpenSSH_7.6p1 Ubuntu-4ubuntu0.3
^]
Your SSH is working as expected, but on port 80 it fails.
telnet 3.16.154.123 80
Trying 3.16.154.123...
telnet: Unable to connect to remote host: Connection refused
Can you check if there is any service running on the host itself using telnet localhost 80? If this works, then it will be worth checking the NACL at the VPC level for any block on port 80.
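The check above can be sketched from an SSH session on the instance itself. This is a diagnostic sketch, not a definitive fix; the web server names are assumptions, since the question never says what should serve port 80:

```shell
# On the instance: is anything listening on port 80 at all?
sudo ss -ltnp 'sport = :80'

# If the output is empty, no web server is running; check likely candidates:
sudo systemctl status apache2 nginx 2>/dev/null

# If something IS listening locally but external access is still refused,
# check the host firewall before moving on to VPC NACLs:
sudo iptables -L INPUT -n
```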

What is the simplest way to get vagrant/virtualbox to access host services?

I've been reading through many examples (both here and through various blogs and virtualbox/vagrant documentation) and at this point I think I should be able to do this.
What I ultimately would like to do is communicate with my docker daemon on my host machine and all the subsequent services I spin up arbitrarily.
To try to get this to work, I run the simple nginx container on my host and confirm it works:
$ docker run --name some-nginx -d -p 8080:80 docker.io/library/nginx:1.17.9
$ curl localhost:8080
> Welcome to nginx!
In my Vagrantfile I've defined my host-only network:
config.vm.network "private_network", ip: "192.168.50.4",
virtualbox__intnet: true
Now in my guest vagrant box, I expect that I should be able to access this same port:
$ curl localhost:8080
> curl: (7) Failed to connect to localhost port 8080: Connection refused
$ curl 127.0.0.1:8080
> curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
$ curl 192.168.50.4:8080 # I hope not, but maybe this will work?
> curl: (7) Failed to connect to 192.168.50.4 port 8080: Connection refused
If you're "inside" the Vagrant guest machine, localhost will be the local loopback adapter of THAT machine and not of your host.
In VirtualBox virtualization, which you are using, you can always connect to services running on your host's localhost via the 10.0.2.2 address. See: https://www.virtualbox.org/manual/ch06.html#network_nat
So in your case, with the web server running on port 8080 on your host, using
curl 10.0.2.2:8080
would mean success!
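A quick sketch for verifying the 10.0.2.2 claim from inside the guest before relying on it: the NAT gateway address is visible as the default route, so if that route points elsewhere, use that address instead:

```shell
# Inside the Vagrant guest: the default route's gateway is the NAT address
ip route show default        # typically: default via 10.0.2.2 dev eth0

# Host services are then reachable via that gateway address:
curl 10.0.2.2:8080
```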
Run vagrant up to start the VM with NAT as the network interface, which means the guest VM sits on the same network behind the host.
vagrant ssh into the VM and install net-tools if your machine doesn't have the netstat tool.
Use netstat -rn to find the routable gateways. Below, the gateways 10.0.2.2 and 192.168.3.1 are the ones present in the guest VM:
[vagrant#localhost ~]$ netstat -rn
Kernel IP routing table
Destination    Gateway        Genmask          Flags  MSS Window  irtt  Iface
0.0.0.0        10.0.2.2       0.0.0.0          UG     0 0         0     eth0
0.0.0.0        192.168.3.1    0.0.0.0          UG     0 0         0     eth2
10.0.2.0       0.0.0.0        255.255.255.0    U      0 0         0     eth0
192.168.3.0    0.0.0.0        255.255.255.0    U      0 0         0     eth2
192.168.33.0   0.0.0.0        255.255.255.0    U      0 0         0     eth1
Go to the host and run ifconfig. Find the gateway 192.168.3.1 shared with the host; the host has an IP of the form 192.168.3.x.
Make sure the service on the host can be accessed at 192.168.3.x: on the host, try curl -v http://<192.168.3.x>:<same port on host>; if it can be accessed, OK.
Now go to the guest VM and try curl -v http://<192.168.3.x>:<same port on host>. If it can be accessed too, you can now access services on the host from the guest VM.

telnet using internal ip from command line in amazon aws

On one of my Amazon AWS servers I installed a memcached server on port 11211.
Now I SSH to that server and run this command:
telnet 127.0.0.1 11211
I get connected and can access memcache data.
If I use the private or public IP instead of 127.0.0.1:
telnet <private ip> 11211
I get this:
telnet: Unable to connect to remote host: Connection refused
Let's call this server the master server (where memcached is installed).
If I now SSH to another app server and run this command:
telnet <private ip> 11211
I get the same error. But the master server security group has this inbound rule:
All traffic All All sg-xxxxxx (app server)
Should we not get access to all services running on our master server from the app servers?
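This matches the pattern from the Postgres answer above: "connection refused" on a non-loopback IP while localhost works usually means the daemon is bound only to 127.0.0.1, not that the security group is blocking traffic. A diagnostic sketch, assuming an Ubuntu/Debian-style install where the bind address lives in /etc/memcached.conf:

```shell
# On the master server: which address is memcached bound to?
sudo ss -ltnp 'sport = :11211'   # "127.0.0.1:11211" means localhost-only

# On Ubuntu/Debian, the bind address is the -l line in /etc/memcached.conf;
# change it to the instance's private IP and restart:
grep '^-l' /etc/memcached.conf
sudo systemctl restart memcached
```

If it was listening on 127.0.0.1, rebinding to the private IP should make the telnet from the app server succeed, since the security group already allows all traffic from it.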