I have a question regarding the "health check" of an application. I am referring to this document: https://docs.cloudfoundry.org/devguide/deploy-apps/healthchecks.html. I understand that when we deploy an application, a default health check of type port is created, and Cloud Foundry automatically checks this port for health status.
My question is:
I have deployed an application on CF with the default health check. When I ssh into the deployed application and list the open ports with the command lsof -i -P -n, I see the following output:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
java 7 vcap 47u IPv4 474318791 0t0 TCP *:8080 (LISTEN)
diego-ssh 8 vcap 3u IPv4 474330297 0t0 TCP *:2222 (LISTEN)
diego-ssh 8 vcap 7u IPv4 474330524 0t0 TCP 10.XXX.XX.XXX:2222->10.YYY.YY.YYY:58858 (ESTABLISHED)
Can you tell me which of the above entries acts as the health check port? (Or am I looking in the wrong place?)
I understand that CF connects to this port to health-check the deployed app. Is it possible to connect to this health check port of a deployed application manually (similar to what CF does internally)? How can I do so from a Mac system (which has the cf CLI installed)?
Can you tell me which of the above entries acts as the health check port? (Or am I looking in the wrong place?)
I suspect that this is your application.
java 7 vcap 47u IPv4 474318791 0t0 TCP *:8080 (LISTEN)
It is listening on port 8080, which is almost always the port on which Cloud Foundry will tell your app to listen (i.e. $PORT).
It isn't a "response", though; it's your application listening for connections on that port. The health check (a TCP health check) runs periodically and makes a TCP connection to the value assigned through $PORT (i.e. 8080). If that TCP connection succeeds, the health check passes. If it cannot connect or times out, the health check fails, the platform determines your application has crashed, and it restarts the application instance.
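As a rough illustration, the check behaves something like the loop below. This is a sketch only; the nc flags, interval, and timeout are my assumptions, not the actual Diego implementation or settings.
# Illustrative only: approximates what a TCP health check does against $PORT.
while true; do
  # -z: just test whether the port accepts a connection; -w 5: 5-second timeout.
  if nc -z -w 5 localhost "$PORT"; then
    echo "health check passed"
  else
    echo "health check failed -- the platform would restart the instance"
  fi
  sleep 30   # illustrative interval
done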
I understand that CF connects to this port to do a health check for the deployed app. Is it possible to connect to this health check port of a deployed application manually (similar to what CF does internally)?
Yes. cf ssh into your application. Then run nc -v localhost 8080. That will make a TCP connection. The -v flag gives you verbose output.
Ex:
> nc -v localhost 8080 # successful
Connection to localhost port 8080 [tcp/http-alt] succeeded!
> nc -v localhost 8081 # failure
nc: connectx to localhost port 8081 (tcp) failed: Connection refused
nc: connectx to localhost port 8081 (tcp) failed: Connection refused
How can I do so from a Mac system (which has the cf CLI installed)?
By default, you won't have access to do this directly. It's not really a fair comparison either. The health check runs from inside the container, so technically running nc from inside the container after you cf ssh is the most accurate comparison.
If you wanted to make this work, you could probably use the tunneling capability in cf ssh. I didn't test, but I think something like this would work: cf ssh -L 8080:localhost:8080 YOUR-HOST-APP.
You could then nc -v localhost 8080 and nc would connect to the local port on which ssh is listening (i.e. <local-port>:<destination>:<destination-port>). Again, if you want accuracy, then you should cf ssh into the container and run nc from there.
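Putting it together, an untested sketch of the whole flow from a Mac terminal (YOUR-HOST-APP is a placeholder for your app name):
# Terminal 1: open the tunnel (local port 8080 -> port 8080 inside the app container).
cf ssh -L 8080:localhost:8080 YOUR-HOST-APP
# Terminal 2: test the forwarded port from the Mac.
nc -v localhost 8080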
Related
I am very new to coding, so figuring this out has been hard for me. I'm trying to deploy my code with Docker and run it inside an EC2 instance, but I can't seem to get the instance's URL to work. I set my inbound security group rules to HTTP (80) => 0.0.0.0/0, HTTPS (443) => 0.0.0.0/0, and SSH (22) => my IP. I read that opening SSH to 0.0.0.0/0 was a bad idea, so I went with my IP (there was an option called 'My IP'). Also, I am using Ubuntu for my AMI.
With Docker running successfully (docker-compose up), I used curl http://localhost:3001 (3001 is the port exposed in my code) and it works fine. But when I use curl ec2-XX-XXX-XXX-XXX.us-west-1.compute.amazonaws.com, it outputs:
curl: (6) Could not resolve host: ssh
and
curl: (7) Failed to connect to ec2-XX-XXX-XXX-XXX.us-west-1.compute.amazonaws.com port 80: Connection refused
curl ec2-xxx-xx-amazonaws.com sends the request to port 80, while your Docker container is listening on port 3001.
First, verify that you have published a host port to Docker. Something like this should appear in docker ps -a:
0.0.0.0:3001->3001/tcp (the first 3001 can be any host port)
Next, make sure that whichever host port you used is in the security group and open for your IP.
If the VPC and route table settings are also good, then curl against the instance URL with :3001 appended (or whatever host port you used instead of 3001) should work; see the sketch below.
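As a minimal sketch (the image name is a placeholder; adjust ports to match your setup):
# Publish container port 3001 on host port 3001 (the first number is the host port).
docker run -d -p 3001:3001 your-image
# Verify the mapping; look for 0.0.0.0:3001->3001/tcp in the PORTS column.
docker ps
# With host port 3001 open in the security group, hit the instance on that port.
curl http://ec2-XX-XXX-XXX-XXX.us-west-1.compute.amazonaws.com:3001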
I am using the ECS-optimized AMI and deploying using ECS.
If I bash into the container and curl localhost, I get the expected output (the app is expected to be on port 80); this works fine.
Then if I run docker ps, I get the following output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1234 orgname/imagename:release-v0.3.1 "npm start" 53 minutes ago Up 53 minutes 0.0.0.0:80->80/tcp ecs-myname-1234
This would suggest port 80 is being mapped as expected. (I also see the Amazon ECS Agent container, but I have left it out above as it's not important.)
Then I can run netstat -tulpn | grep :80 and I get the following output:
(No info could be read for "-p": geteuid()=500 but you should be root.)
tcp 0 0 :::80 :::* LISTEN -
Then as root I run sudo netstat -tulpn | grep :80 and I get the following output:
tcp 0 0 :::80 :::* LISTEN 21299/docker-proxy
This makes me think it's only listening on the IPv6 interface. Since the hosts entry for localhost is 127.0.0.1, that would explain why, when I run curl localhost or curl 127.0.0.1 on the host, I get curl: (56) Recv failure: Connection reset by peer.
I have also checked the security groups and network ACLs (not that they should have any effect on localhost)...
Any thoughts would be much appreciated!
Edit:
For good measure (some people suggest netstat only shows IPv6 and not IPv4 when IPv6 is available), I have also run lsof -OnP | grep LISTEN, which gives the following output:
sshd 2360 root 3u IPv4 10256 0t0 TCP *:22 (LISTEN)
sshd 2360 root 4u IPv6 10258 0t0 TCP *:22 (LISTEN)
sendmail 2409 root 4u IPv4 10356 0t0 TCP 127.0.0.1:25 (LISTEN)
exe 2909 root 4u IPv4 13802 0t0 TCP 127.0.0.1:51678 (LISTEN)
exe 21299 root 4u IPv6 68069 0t0 TCP *:80 (LISTEN)
exe 26395 root 4u IPv6 89357 0t0 TCP *:8080 (LISTEN)
I ran into this exact problem when using the bridge network mode. I haven't found a solution yet. However, I have used two workarounds.
Workarounds
The easiest for me was to change NetworkMode to host in my ECS Task Definition; see the CLI sketch after the network mode list below.
Alternatively, you can remove the need to know or care how ports are mapped by using an Application Load Balancer.
Network Modes
bridge maps the container port to another port (which may be different) on the host via the docker-proxy. This is the mode I have had trouble with in ECS.
host allows the container to open a port directly on the host, without requiring a proxy. The downside is that instances of the same container can't run on the same host without causing port conflicts.
awsvpc is like host, except it maps to an ENI in your VPC instead of a port on the host IP.
none is what it sounds like.
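For reference, here is a sketch of the first workaround using the AWS CLI. It's untested here; the family name and container definition are placeholders, and you could equally set "networkMode": "host" directly in the task definition JSON.
# Re-register the task definition with host networking (placeholder values).
aws ecs register-task-definition \
  --family my-task-family \
  --network-mode host \
  --container-definitions '[{"name":"app","image":"orgname/imagename:release-v0.3.1","memory":256,"essential":true}]'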
Application Load Balancer
Since posting this answer, the requirements of my project have changed. I haven't had a chance to go back and test port mappings in bridge mode directly. However, I am now using an Application Load Balancer to provide access to my containers.
When using an ALB, you don't have to worry about the host port at all. Instead, your ECS Service automatically adds your container as a target to a given ALB Target Group. This document goes over details on how to do that:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/service-load-balancing.html
I'm not elaborating further here because it's not a direct answer to the question about the port binding issue.
Interestingly, network modes for ECS were announced just 5 days after you asked your question:
Announcement: https://aws.amazon.com/about-aws/whats-new/2016/08/amazon-ec2-container-service-now-supports-networking-modes-and-memory-reservation/
Network Mode Documentation: https://docs.aws.amazon.com/AmazonECS/latest/APIReference/API_RegisterTaskDefinition.html#ECS-RegisterTaskDefinition-request-networkMode
Hopefully this answer helps a few other Googlers. Note: I'll update this answer if I figure out how to get bridge mode working right in ECS.
I had a similar issue, but I was running Java in Docker, and it was binding on the IPv6 port only. It turns out this was Java related. More on this here
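For what it's worth, the JVM switch commonly used to force IPv4 binding is shown below. This is an assumption on my part, since the linked fix isn't reproduced here, and app.jar is a placeholder.
# Force the JVM to prefer the IPv4 stack when binding sockets.
java -Djava.net.preferIPv4Stack=true -jar app.jar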
I'm running Bitnami MEAN on an EC2 instance. I can host my app just fine on port 3000 or 8080. Currently, if I don't specify a port, I'm taken to the Bitnami MEAN homepage. I'd like to be able to access my app directly from my EC2 public DNS without specifying a port in the URL. How can I accomplish this?
The simple way to do that is port forwarding, using the command below:
sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-ports 8080
After logging in to AWS using PuTTY with your private key and the username "bitnami", type the above command and press Enter.
You will then be automatically redirected to your application.
Note: I am assuming you have already opened port 8080 in the security group on AWS.
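To confirm the redirect rule took effect, you can list the NAT PREROUTING chain:
# Show the PREROUTING rules numerically, with rule line numbers.
sudo iptables -t nat -L PREROUTING -n --line-numbers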
You'll have to open port 80 on the server's firewall, and either run your server on port 80 or forward port 80 to port 8080. You'll need to look up the instructions for doing that based on which version of Linux you are running, but it will probably be an iptables command (such as the one shown above).
You'll also need to open port 80 on the EC2 server's security group.
I am not able to connect to my Redis server from a remote AWS instance (both instances are in the same VPC, though)...
I have launched a CentOS 6 instance and started the Redis server. I can confirm that the server is running:
tcp 0 0 *:6379 *:* LISTEN 891/redis-server *
I have set AWS security group to be:
Custom TCP | port 6379 | 0.0.0.0/0
I am able to connect to the Redis server from the same instance using redis-cli, but when I try to do it from another AWS instance I get:
Could not connect to Redis at ec2-*.compute.amazonaws.com:6379: No route to host
It seems like you are binding to 127.0.0.1 instead of 0.0.0.0. Open /etc/redis.conf and check the bind option.
It turns out the firewall was on, so it wasn't possible to connect from outside. To wrap it up:
1. Set Redis to allow remote connections by setting bind 0.0.0.0 in redis.conf
2. Make sure the firewall is not preventing you from connecting to your server. On AWS you can turn it off with:
sudo service iptables save
sudo service iptables stop
sudo chkconfig iptables off
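Once both are fixed, a quick way to verify from the other instance (the hostname is a placeholder):
# Should print PONG if the bind address and firewall are correct.
redis-cli -h ec2-your-redis-host.compute.amazonaws.com -p 6379 ping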
I'm developing an application which will use AWS's SNS service to receive notifications over HTTP.
As I am developing the application locally and have no control of our company firewall, I am attempting to tunnel HTTP connections from an external EC2 host to my local machine for the purposes of testing.
Everything looks fine when verifying the connection from the EC2 host itself; however, the port is closed when examined externally.
My local application is on port 2222. I have executed the following command on my local machine to establish the proxy:
ssh -i myCredentials.pem ec2-user@myserver.com -R 2222:localhost:2222
Where myserver.com points to an EC2 instance. SSH'ing to the EC2 instance, I can successfully connect to my application via the tunnel, and nmap displays the following:
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00055s latency).
Not shown: 997 closed ports
PORT STATE SERVICE
22/tcp open ssh
25/tcp open smtp
2222/tcp open EtherNet/IP-1
However when I run nmap against the EC2 instance from my local machine, the port is closed:
Nmap scan report for xxxxxx
Host is up (0.24s latency).
Not shown: 998 filtered ports
PORT STATE SERVICE
22/tcp open ssh
2222/tcp closed EtherNet/IP-1
The security group assigned to the server allows TCP traffic on port 2222 from 0.0.0.0/0, and iptables isn't running on the server.
What do I need to do on the EC2 end to make this port open to the outside world?
The tunnelling command is correct; however, in order for SSH to bind to the wildcard address, the following setting is required in /etc/ssh/sshd_config on the remote server:
GatewayPorts yes
Once this is added, restart sshd, and the tunnelling will work as desired provided no firewalls are in the way; see the sketch below.
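An untested sketch of the full fix (this assumes an Amazon Linux style service manager; adjust the restart command for your distro):
# On the EC2 instance: allow remote forwards to bind to the wildcard address.
echo "GatewayPorts yes" | sudo tee -a /etc/ssh/sshd_config
sudo service sshd restart
# From the local machine: re-open the tunnel, binding explicitly on all interfaces.
ssh -i myCredentials.pem -R 0.0.0.0:2222:localhost:2222 ec2-user@myserver.com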