My setup is a Docker container with a Node app running on port 3000, started with
docker run -d -p 3000:3000 <IMAGE> node dist/src/main.js
ubuntu@ip-172-31-8-192:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS
da3fdeb2b843 image "docker-entrypoint.s…" 18 minutes ago Up 18 minutes 0.0.0.0:3000->3000/tcp, :::3000->3000/tcp
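For reference, the published mapping can also be confirmed with docker port, using the container ID from the docker ps output above (expected output sketched as comments):
sudo docker port da3fdeb2b843
# 3000/tcp -> 0.0.0.0:3000
# 3000/tcp -> [::]:3000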
From inside the EC2 instance, netstat shows:
tcp 0 0 0.0.0.0:3000 0.0.0.0:* LISTEN
tcp6 0 0 [::]:3000 [::]:* LISTEN
My EC2 Instance has public access.
AWS Security Group Rules (screenshot not shown)
When I SSH into my EC2 instance I can run curl localhost:3000/status and see a response from the Node app.
ubuntu@ip-172-31-8-192:~$ curl localhost:3000/status
{"statusCode":404,"message":"Cannot GET /status","error":"Not Found"}
Unfortunately, from my local terminal, curl {EC2-Public IPv4 DNS}:3000/status times out. Verbose output:
Jamess-MBP-2:~ jamesspaniak$ curl -v <EC2-Public IPv4 DNS>:3000/status
* Trying <EC2-Public IPv4>:3000...
* connect to <EC2-Public IPv4> port 3000 failed: Operation timed out
* Failed to connect to <EC2-Public IPv4 DNS> port 3000 after 75008 ms: Couldn't connect to server
* Closing connection 0
curl: (28) Failed to connect to <EC2-Public IPv4 DNS> port 3000 after 75008 ms: Couldn't connect to server
I've also tried opening all ports and using port 80 with docker run -p 80:3000, but I get the same result.
I've also added an inbound rule to allow ICMP Echo Request and can successfully ping my public IP.
What other things can I look at to resolve this? I expected to be able to make a request to the running Docker container from outside the EC2 instance. Appreciated.
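Two checks worth sketching here (hedged; <EC2-Public IPv4> is a placeholder, and ufw may or may not be installed): test raw TCP reachability from the local machine, and look at the instance's own host firewall rather than just the security group.
# From the local machine: does anything answer on TCP 3000 at all?
nc -vz <EC2-Public IPv4> 3000
# On the EC2 instance: is a host firewall (iptables/ufw) filtering the port?
sudo iptables -L -n
sudo ufw status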
Related
I am running nginx in a docker container in an EC2 instance. I started nginx using docker run --name mynginx1 -p 80:80 -d nginx and can access it via curl http://localhost from inside the EC2 instance. Now when I try to access it from outside through my browser, my request always times out. I have set the security rules on my EC2 instance to allow all traffic, all protocols, from any IP address for the purpose of testing.
I have verified that nginx is listening on any IP address using ss -tuln | grep 80
tcp LISTEN 0 4096 0.0.0.0:80 0.0.0.0:*
tcp LISTEN 0 4096 [::]:80 [::]:*
Any ideas?
Note: When I install nginx on EC2 directly and run it using sudo systemctl start nginx, I am able to go to http://<ec2_dns> and see the nginx welcome page. So I believe this is an issue specific to running docker containers on EC2 and not a problem with the instance security rules.
Edit 1: Subnet network ACL inbound rules (screenshot not shown).
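One hedged thing to compare, since nginx installed directly works but the container does not (the commands below are a sketch, not from the original post; <instance-private-ip> is a placeholder): check that Docker actually installed its NAT rule for the published port, and that the host answers on its private IP rather than only on localhost.
# On the EC2 instance: -p 80:80 should show up as a DNAT rule with dpt:80
sudo iptables -t nat -L DOCKER -n
# The container should answer on the instance's private IP as well as localhost
curl -v http://<instance-private-ip>:80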
I have a question regarding the "health check" of an application. I am referring to this document: https://docs.cloudfoundry.org/devguide/deploy-apps/healthchecks.html, and I understand that when we deploy an application, a default health check of type port is created. Cloud Foundry automatically checks this port for health status.
My question is:
I have deployed an application on CF with the default health check. When I SSH into the deployed application and list the open ports with lsof -i -P -n, I see the following response:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 7 vcap 47u IPv4 474318791 0t0 TCP *:8080 (LISTEN)
diego-ssh 8 vcap 3u IPv4 474330297 0t0 TCP *:2222 (LISTEN)
diego-ssh 8 vcap 7u IPv4 474330524 0t0 TCP 10.XXX.XX.XXX:2222->10.YYY.YY.YYY:58858 (ESTABLISHED)
Can you tell me which one of the above responses acts as the health check port? (Or am I looking in the wrong place?)
I understand that CF connects to this port to do a health check for the deployed app. Is it possible to connect to this health check port of a deployed application manually (similar to what CF does internally)? How can I do so from a Mac system (which has the cf CLI installed)?
Can you tell me which one of the above responses acts as the health check port? (Or am I looking in the wrong place?)
I suspect that this is your application.
java 7 vcap 47u IPv4 474318791 0t0 TCP *:8080 (LISTEN)
It is listening on port 8080, which is almost always the port on which Cloud Foundry will tell your app to listen (i.e. $PORT).
This isn't a response though. It's your application listening for connections on that port. The health check (a TCP health check) will periodically run and make a TCP connection to the value assigned through $PORT (i.e. 8080). If that TCP connection is successful then the health check passes. If it cannot connect or times out, then the health check fails and the platform determines your application has crashed. It will then restart the application instance.
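For illustration only, here is a minimal sketch (not the platform's actual code) of what a port-type check amounts to, assuming the app listens on 8080:
# Minimal sketch of a port/TCP health check; nc -z only attempts the
# connection, and -w 1 caps the wait at one second.
while true; do
  if nc -z -w 1 localhost 8080; then
    echo "health check passed"
  else
    echo "health check failed: the platform would mark the instance as crashed and restart it"
  fi
  sleep 30
done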
I understand that CF connects to this port to do a health check for the deployed app. Is it possible to connect to this health check port of a deployed application manually (similar to what CF does internally)?
Yes. cf ssh into your application. Then run nc -v localhost 8080. That will make a TCP connection. The -v flag gives you verbose output.
Ex:
> nc -v localhost 8080 # successful
Connection to localhost port 8080 [tcp/http-alt] succeeded!
> nc -v localhost 8081 # failure
nc: connectx to localhost port 8081 (tcp) failed: Connection refused
nc: connectx to localhost port 8081 (tcp) failed: Connection refused
How can I do so from a Mac system (which has the cf CLI installed)?
By default, you won't have access to do this directly. It's not really a fair comparison either. The health check runs from inside the container, so technically running nc from inside the container after you cf ssh is the most accurate comparison.
If you wanted to make this work, you could probably use the tunneling capability in cf ssh. I didn't test, but I think something like this would work: cf ssh -L 8080:localhost:8080 YOUR-HOST-APP.
You could then nc -v localhost 8080 and nc would connect to the local port on which ssh is listening (i.e. <local-port>:<destination>:<destination-port>). Again, if you want accuracy, then you should cf ssh into the container and run nc from there.
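Putting those two commands together, the sequence would look roughly like this (YOUR-HOST-APP is the placeholder app name from above):
# Terminal 1 (on the Mac): open the tunnel; local port 8080 forwards to port 8080 inside the app container
cf ssh -L 8080:localhost:8080 YOUR-HOST-APP
# Terminal 2 (on the Mac): connect through the tunnel
nc -v localhost 8080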
I've been reading through many examples (both here and through various blogs and virtualbox/vagrant documentation) and at this point I think I should be able to do this.
What I ultimately would like to do is communicate with my docker daemon on my host machine and all the subsequent services I spin up arbitrarily.
To try to get this to work, I run the simple nginx container on my host and confirm it works:
$ docker run --name some-nginx -d -p 8080:80 docker.io/library/nginx:1.17.9
$ curl localhost:8080
> Welcome to nginx!
In my Vagrantfile I've defined my host-only network:
config.vm.network "private_network", ip: "192.168.50.4",
virtualbox__intnet: true
Now in my guest vagrant box, I expect that I should be able to access this same port:
$ curl localhost:8080
> curl: (7) Failed to connect to localhost port 8080: Connection refused
$ curl 127.0.0.1:8080
> curl: (7) Failed to connect to 127.0.0.1 port 8080: Connection refused
$ curl 192.168.50.4:8080 # I hope not, but maybe this will work?
> curl: (7) Failed to connect to 192.168.50.4 port 8080: Connection refused
If you're "inside" the Vagrant guest machine, localhost will be the local loopback adapter of THAT machine and not of your host.
In VirtualBox virtualization, which you are using, you can always connect to services running on your host's localhost via the 10.0.2.2 address. See: https://www.virtualbox.org/manual/ch06.html#network_nat
So in your case, with the web server running on port 8080 on your host, using
curl 10.0.2.2:8080
would mean success!
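So, assuming the nginx container from the question is still publishing host port 8080, a quick end-to-end check would be:
# On the host: enter the guest, then hit the host's loopback via VirtualBox's NAT gateway address
vagrant ssh
curl -v http://10.0.2.2:8080
# Expect the "Welcome to nginx!" page in the response body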
Run vagrant up to start the VM using NAT as the network interface, which means the guest VM reaches the outside world through the host's network.
vagrant ssh into the VM and install net-tools if the machine doesn't have netstat.
Use netstat -rn to find the routable gateways. Below, 10.0.2.2 and 192.168.3.1 are the gateways present in the guest VM.
[vagrant@localhost ~]$ netstat -rn
Kernel IP routing table
Destination Gateway Genmask Flags MSS Window irtt Iface
0.0.0.0 10.0.2.2 0.0.0.0 UG 0 0 0 eth0
0.0.0.0 192.168.3.1 0.0.0.0 UG 0 0 0 eth2
10.0.2.0 0.0.0.0 255.255.255.0 U 0 0 0 eth0
192.168.3.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2
192.168.33.0 0.0.0.0 255.255.255.0 U 0 0 0 eth1
Go to the host and run ifconfig. Find the gateway 192.168.3.1 shared with the host; the host has an IP of the form 192.168.3.x. Make sure the service on the host can be accessed at 192.168.3.x.
Try it on the host first: curl -v http://<192.168.3.x>:<same port on host>. If it can be accessed, OK.
Now go to the guest VM and try curl -v http://<192.168.3.x>:<same port on host>. If that can be accessed too,
you can now reach services on the host from the guest VM.
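Condensed, the whole check looks roughly like this (placeholders as in the steps above):
# Inside the guest: list candidate gateways
netstat -rn
# On the host: find its IP on the shared 192.168.3.0/24 network and verify the service locally
ifconfig
curl -v http://<192.168.3.x>:<same port on host>
# Back inside the guest: the same URL should now work
curl -v http://<192.168.3.x>:<same port on host>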
I am trying to test a simple HTTP server on EC2 on port 8080 with python -m SimpleHTTPServer 8080, but it is not working. I have added a security group rule for TCP 8080, tried ALL TCP, and even All traffic. But I still cannot open Public_DNS_IPv4:8080 in the browser. I checked that the EC2 instance is listening on 8080, as per the netstat output below.
My ec2 AMI ID is amzn-ami-hvm-2017.09.1.20180115-x86_64-gp2 (ami-97785bed)
Interestingly, if I run sudo python -m SimpleHTTPServer 80 then it works on Public_DNS_IPv4.
Can anyone help me see what I have missed?
[ec2-user@XXXXXXX ~]$ python -m SimpleHTTPServer 8080
Serving HTTP on 0.0.0.0 port 8080 ...
[ec2-user@XXXXXXX ~]$ netstat -tulpn | grep 8080
(Not all processes could be identified, non-owned process info
will not be shown, you would have to be root to see it all.)
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 8844/python
UPDATED: Network ACL and route table (screenshots not shown)
It turned out that my network firewall setup was causing the issue; only certain ports are open.
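For anyone hitting the same symptoms, a hedged way to confirm it is the client-side network rather than the instance is to test the raw TCP ports from the client (ideally also from a second network, such as a phone hotspot):
# From the client machine: port 80 should connect (as observed), while 8080 times out
# if an outbound firewall only allows certain ports
nc -vz -w 5 <Public_DNS_IPv4> 80
nc -vz -w 5 <Public_DNS_IPv4> 8080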
I am not able to connect to my Redis server from a remote AWS instance (both instances are in the same VPC, though)...
I have launched a CentOS 6 instance and started the Redis server. I can confirm that the server is running:
tcp 0 0 *:6379 *:* LISTEN 891/redis-server *
I have set AWS security group to be:
Custom TCP | port 6379 | 0.0.0.0/0
I am able to connect to the Redis server from the same instance using redis-cli but when I try to do it from some other AWS instance I get:
Could not connect to Redis at ec2-*.compute.amazonaws.com:6379: No route to host
It seems like you are binding to 127.0.0.1 instead of 0.0.0.0. Open /etc/redis.conf and check the bind option.
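A quick, hedged way to check that on the server (the config path may differ by install):
# Show the effective bind setting; "bind 127.0.0.1" means loopback only,
# "bind 0.0.0.0" means reachable from other hosts (subject to firewalls/security groups)
grep -n '^ *bind' /etc/redis.conf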
Turns out the firewall was on, so it wasn't possible to connect from outside. To wrap it up:
1. Set Redis to allow remote connections by setting bind 0.0.0.0 in redis.conf
2. Make sure the firewall is not preventing you from connecting to your server. On AWS you can turn it off with:
sudo service iptables save
sudo service iptables stop
sudo chkconfig iptables off
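If you would rather keep iptables running than disable it entirely, a hedged alternative (untested here) is to open just the Redis port and persist the rule:
# CentOS 6 style: allow inbound TCP 6379, then save the rule set
sudo iptables -I INPUT -p tcp --dport 6379 -j ACCEPT
sudo service iptables save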