How to fix 'Health checks failed with these codes: ' on an Elastic Beanstalk instance? - amazon-web-services

My application has a Health Status of Severe because of a Target.ResponseCodeMismatch error.
I've tried following "Redirection is not configured on the backend" in this AWS instruction. I've changed my port to '443' and my protocol to 'HTTPS' in the eb config and redeployed. That changes the Health Status to Ok, but when I access my URL I only get an 'Index of' page.
Here is what eb status --verbose returns:
Description: Health checks failed with these codes: [301]
Reason: Target.ResponseCodeMismatch
And this is from eb config:
AWSEBV2LoadBalancerListener.aws:elbv2:listener:default:
  DefaultProcess: default
  ListenerEnabled: 'true'
  Protocol: HTTP
  Rules: null
  SSLCertificateArns: null
  SSLPolicy: null
AWSEBV2LoadBalancerListener443.aws:elbv2:listener:443:
  DefaultProcess: default
  ListenerEnabled: 'true'
  Protocol: HTTPS
  Rules: null
  SSLCertificateArns: arn:aws:acm:us-east-2:XXXX:certificate/XXXXXX
  SSLPolicy: ELBSecurityPolicy-XX-XX-XXXX
aws:elasticbeanstalk:environment:process:default:
  DeregistrationDelay: '20'
  HealthCheckInterval: '15'
  HealthCheckPath: /
  HealthCheckTimeout: '5'
  HealthyThresholdCount: '3'
  MatcherHTTPCode: null
  Port: '443'
  Protocol: HTTPS

For someone who may come across this as I did, I found the solution to be setting the health check endpoint of the ELB target group to an actual URL on my website that returns an HTTP 200 code.
On the EC2 dashboard, under Load Balancing -> Target Groups, go to the Health Checks tab and edit the path to one on your site that returns a 200 code.
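If you prefer to keep this in source control rather than editing the console, the same setting can also be expressed through the process namespace already shown in the question, e.g. in an .ebextensions file. A minimal sketch, assuming your app has a /health route (a placeholder name) that returns 200:

# .ebextensions/healthcheck.config -- sketch only; /health is a placeholder path
option_settings:
  aws:elasticbeanstalk:environment:process:default:
    HealthCheckPath: /health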

I had a similar problem. In my case the fix was to change the health check's "Success codes" setting from 200 to 200,301.
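For an Elastic Beanstalk-managed target group, the equivalent of that console change appears to be the MatcherHTTPCode option from the same process namespace shown in the question; a hedged sketch of an .ebextensions snippet:

# Sketch: treat 301 redirects as healthy responses as well
option_settings:
  aws:elasticbeanstalk:environment:process:default:
    MatcherHTTPCode: '200,301'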

Typically your app will be deployed exposing its native port. For Java this is usually 8080; with Node it's 3000. AWS, as part of EB, will then put a proxy (either Apache or nginx) in front of your app exposing port 80. It's the ELB that exposes port 443 to the outside.
So you probably want to change the port and protocol to 80 / HTTP.
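In eb config terms that corresponds to the process settings from the question; a minimal sketch of the relevant values (assuming EB's nginx/Apache proxy really is listening on port 80):

aws:elasticbeanstalk:environment:process:default:
  Port: '80'
  Protocol: HTTP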

This seems to have helped me. In
"health check settings" > "Advanced health check settings":
Under "Port": changed from "Traffic port" to "Override" and used "80".
Under "Success codes":
On one instance of EB:
changed from "200" to "200,300,301,302,303,304,305,306,307,308" (in Elastic Beanstalk, health was down because of "3xx Responses").
On a second instance of EB:
changed from "200" to "200,400,401,402,403,404,405,406,407,408,409,410,411,412,413,414,415,416,417,418,422,425,426,428,429,431,451" (in Elastic Beanstalk, health was down because of "4xx Responses").

I had the following problem:
Description: Health checks failed with these codes: [400]
Reason: Target.ResponseCodeMismatch
What fixed it for me was setting the Success codes value to 400.
EC2 (service) > Load Balancing (from left side menu) > Target Groups (click on Name) > Health Check (click on tab) > Edit > Advanced health check settings > Success codes = 400

For me, this was as simple as making sure the route that the health check is pointed to, like '/' for my example, returns a 200 result.
const router = require("express").Router();

// Health check: respond with 200 OK on the root path
router.get('/', (req, res) => {
  res.sendStatus(200);
});

module.exports = router;

In .NET I added this controller, which allows anonymous access, and then in EC2 > Load Balancing > Target Group set the health check path to /health:
public class HealthController : Controller
{
    [AllowAnonymous]
    public IActionResult Index()
    {
        return Ok($"HEALTH: OK - {DateTime.Now}");
    }
}
Under Advanced Settings, include HTTP code 302 in the expected success codes.

Related

AWS Global Accelerator in front of ALB managed with EKS alb ingress health checks fail

I've got an EKS cluster with the ALB ingress controller and external DNS connected to Route 53. Now some clients want static IPs or an IP range for connecting to our servers so they can whitelist these IPs in their firewall.
I tried the new AWS Global Accelerator and followed this tutorial https://docs.aws.amazon.com/global-accelerator/latest/dg/getting-started.html but it fails with:
Listeners in this accelerator have an unhealthy status. To make sure that Global Accelerator can run health checks successfully, ensure that a service is responding on the protocol and port that you specified in the health check configuration. Learn more
With further reading I understood that the health checks will be the same ones configured on the ALB, and also that it might fail because the Route 53 health check IPs are not whitelisted; but all inbound traffic is open on ports 80 and 443, so I'm not quite sure how to debug this further or whether there is any other solution for getting an IP range or static IP for the ALB.
You need to add a healthcheck rule like this one to the ingress controller:
- http:
    paths:
      - path: /global-accelerator-healthcheck
        backend:
          serviceName: global-accelerator-healthcheck
          servicePort: use-annotation
Then an annotation:
alb.ingress.kubernetes.io/actions.global-accelerator-healthcheck: '{"Type": "fixed-response", "FixedResponseConfig": {"ContentType": "text/plain", "StatusCode": "200", "MessageBody": "healthy" }}'
Then configure the Global Accelerator health checks to use that endpoint.
When it comes to the AWS ALB Ingress controller, always try to think of it as if you are working with an AWS ALB and its Target Groups.
You can even identify the ALB and its target groups by logging in to AWS console UI.
To answer your question, try adding the following details to your ingress code:
annotations:
  alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
  alb.ingress.kubernetes.io/healthcheck-port: "8161"
  alb.ingress.kubernetes.io/healthcheck-path: /admin
  alb.ingress.kubernetes.io/success-codes: '401'
  alb.ingress.kubernetes.io/backend-protocol: HTTP
Note: If you have different health check settings for different services, remove this block from K8s "Ingress" and add blocks per K8s "Service".
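For illustration, a per-Service block might look roughly like the sketch below; the service name, selector and ports are placeholders, and the annotation keys are the same ones listed above:

apiVersion: v1
kind: Service
metadata:
  name: my-admin-service               # placeholder name
  annotations:
    alb.ingress.kubernetes.io/healthcheck-protocol: HTTP
    alb.ingress.kubernetes.io/healthcheck-path: /admin
    alb.ingress.kubernetes.io/success-codes: '401'
spec:
  selector:
    app: my-admin-app                  # placeholder selector
  ports:
    - port: 8161
      targetPort: 8161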
If more information is required, please refer to: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.1/guide/ingress/annotations/

Why does GCE Load Balancer behave differently through the domain name and the IP address?

A backend service happens to be returning Status 404 on the health check path of the Load Balancer. When I browse to the Load Balancer's domain name, I get "Error: Server Error/ The server encountered a temporary error", and the logs show
"type.googleapis.com/google.cloud.loadbalancing.type.LoadBalancerLogEntry"
statusDetails: "failed_to_pick_backend", which makes sense.
When I browse to the Load Balancer's Static IP, my browser shows the 404 error message which the underlying Kubernetes Pod returned. In other words, the Load Balancer passed on the request despite the failed health check.
Why these two different behaviors?
[Edit]
Here is the yaml for the Ingress that created the Load Balancer:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress1
spec:
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: myservice
              servicePort: 80
I did a "deep dive" into that and managed to reproduce the situation on my GKE cluster, so now I can tell that there are a few things combined here.
A backend service happens to be returning Status 404 on the health check path of the Load Balancer.
There could be 2 options (it is not clear from the description you have provided which one you hit). The first is something like:
"Error: Server Error
The server encountered a temporary error and could not complete your request.
Please try again in 30 seconds."
This one you are getting from the LoadBalancer when the HealthCheck fails for the pod. The official documentation on the GKE Ingress object says that
a Service exposed through an Ingress must respond to health checks from the load balancer.
Any container that is the final destination of load-balanced traffic must do one of the following to indicate that it is healthy:
Serve a response with an HTTP 200 status to GET requests on the / path.
Configure an HTTP readiness probe. Serve a response with an HTTP 200 status to GET requests on the path specified by the readiness probe. The Service exposed through an Ingress must point to the same container port on which the readiness probe is enabled.
You need to fix the HealthCheck handling. You can check the load balancer details by visiting the GCP console -> Network Services -> Load Balancing.
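As an illustration of the readiness-probe option quoted above, a minimal sketch (the /healthz path and port 8080 are placeholders for whatever your container actually serves):

# In the Pod/Deployment spec, on the container that receives the load-balanced traffic
readinessProbe:
  httpGet:
    path: /healthz
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10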
"404 Not Found -- nginx/1.17.6"
This one is clear. That is the response returned by the endpoint that myservice is sending the request to. It looks like something is misconfigured there. My guess is that the pod simply can't serve that request properly. It could be an nginx web-server issue, etc. Please check the configuration to find out why the pod can't serve the request.
While playing with the setup I found an image that allows you to check whether a request has reached the pod and to see the request headers.
So it is possible to create a pod like:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    run: fake-web
  name: fake-default-knp
  # namespace: kube-system
spec:
  containers:
    - image: mendhak/http-https-echo
      imagePullPolicy: IfNotPresent
      name: fake-web
      ports:
        - containerPort: 8080
          protocol: TCP
to be able to see all the headers that were in incoming requests (kubectl logs -f fake-default-knp).
When I browse to the Load Balancer's Static IP, my browser shows the 404 Error Message which the underlying Kubernetes Pod returned.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress1
spec:
  rules:
    - host: example.com
      http:
        paths:
          - backend:
              serviceName: myservice
              servicePort: 80
Upon creation of such an Ingress object, there will be at least 2 backends in the GKE cluster:
- the backend you have specified upon Ingress creation (the myservice one)
- the default one (created upon cluster creation)
kubectl get pods -n kube-system -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP
l7-default-backend-xyz   1/1     Running   0          20d   10.52.0.7
Please note that myservice serves only requests that have the Host header set to example.com. The rest of the requests are sent to the "default backend". That is the reason why you are receiving the "default backend - 404" error message upon browsing to the LoadBalancer's IP address.
Technically there is a default-http-backend service that has l7-default-backend-xyz as an endpoint.
kubectl get svc -n kube-system -o wide
NAME                   TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)        AGE   SELECTOR
default-http-backend   NodePort   10.0.6.134   <none>        80:31806/TCP   20d   k8s-app=glbc

kubectl get ep -n kube-system
NAME                   ENDPOINTS        AGE
default-http-backend   10.52.0.7:8080   20d
Again, that's the "object" that returns the "default backend - 404" error for the requests with "Host" header not equal to the one you specified in Ingress.
Hope that it sheds a light on the issue :)
EDIT:
"myservice serves only requests that have Host header set to example.com." So you are saying that requests go to the LB only when there is a Host header?
Not exactly. The LB receives all the requests and passes them on in accordance with the "Host" header value. Requests with the example.com Host header are going to be served by the myservice backend.
To put it simply, the logic is as follows:
request arrives;
the system checks the Host header (to determine the user's backend);
the request is served if there is a suitable user backend (according to the Ingress config) and that backend is healthy; otherwise "Error: Server Error The server encountered a temporary error and could not complete your request. Please try again in 30 seconds." is thrown if the backend is in a non-healthy state;
if the request's Host header doesn't match any host in the Ingress spec, the request is sent to the l7-default-backend-xyz backend (not the one mentioned in the Ingress config). That backend replies with the "default backend - 404" error.
Hope that makes it clear.

Meteor on AWS using Mup - SSL with ELB

I'm migrating my Meteor app to AWS and want to use an ACM-issued SSL cert attached to an ELB.
My current setup is:
ELB with ACM SSL cert (verified that load balancing and HTTPS are working with a simple HTTP server inside the EC2 Ubuntu machine)
Meteor is deployed on the EC2 machine using Mup (please see my mup.js, which works well with an SSL cert physically available on the file system)
I want to stop using the reverse proxy from the mup.js config completely and let the ELB handle all the SSL stuff. The problem is that the ELB is not able to communicate with Meteor Up;
I have tried different ROOT_URLs but none are working:
EC2 Elastic IP with HTTP and HTTPS
(i.e. ROOT_URL: 'https://my-ec2-elastic-ip.com', ROOT_URL: 'http://my-ec2-elastic-ip.com')
ELB domain name with HTTP and HTTPS
What should I put for ROOT_URL, and is it a game changer in accepting requests? I.e. if I have the wrong ROOT_URL, will Meteor still be able to accept incoming requests?
Mup version: 1.4.3
Meteor version: 1.6.1
Mup config
module.exports = {
  servers: {
    one: {
      host: 'ec2-111111.compute-1.amazonaws.com',
      username: 'ubuntu',
      pem: 'path to pem'
    }
  },
  meteor: {
    name: 'my-app',
    path: 'path',
    servers: {
      one: {}
    },
    buildOptions: {
      serverOnly: true,
    },
    env: {
      ROOT_URL: 'https://ec2-111111.compute-1.amazonaws.com',
      MONGO_URL: 'mongo url',
    },
    dockerImage: 'abernix/meteord:node-8.9.1-base',
    deployCheckWaitTime: 30,
  },
  proxy: {
    domains: 'ec2-111111.compute-1.amazonaws.com,www.ec2-111111.compute-1.amazonaws.com',
    ssl: {
      crt: './cert.pem',
      key: './key.pem'
    }
  }
};
Resolved. The first and more general issue was that I was using a classic ELB, which doesn't support WebSockets and was preventing the DDP connection. The newer Application Load Balancer, which comes with WebSocket and Sticky Sessions support, helped. More on the differences here: https://aws.amazon.com/elasticloadbalancing/details/#details
Another issue, more specific to my use case, was having no endpoint for the ELB health check: I was hiding/securing everything behind basic_auth, so the health check was getting 403 Unauthorized, failing, and not registering the EC2 instance with the ELB. Make sure you have an endpoint for the health check that returns 200 OK, and also revisit your security groups - check the inbound rules and make sure the ELB has access to the corresponding ports on the EC2 instance (80, 443, etc.).

AWS load balancer for Mean stack

I am learning about load balancers and I have 2 instances connected to my load balancer, but I always get an out-of-service error.
Node is running on port 3000.
My port configuration: 80 (HTTP) forwarding to 80 (HTTP)
Health check: HTTP:3000/
My health check
When you use the "HTTP" ping protocol you have to upload a test file and point the health check at its path; you cannot just use "/" in the path field.
Use the setting below and it will work.

AWS EC2 instances are "Out of Service" after getting registered with ELB

I am facing a problem in AWS. I am creating and registering an instance with an ELB. Though it is getting registered, it is not passing the health check and is showing Out of Service. The error reason is "Instance has failed at least the Unhealthy Threshold number of health checks consecutively".
My health check values are as follows:
Ping Target: TCP:80
Timeout: 10 seconds
Interval: 24 seconds
Unhealthy Threshold: 6
Healthy Threshold: 10
Appreciate your help.
Thanks,
Chandan
Is your instance behind the ELB running a web server? If it is, does it return a '200' (OK)? If not, then that's your problem.
If you are running a web service that returns a 200, is your security group open to the ELB? Meaning the ELB's source security group has to be allowed into your instance's security group.
Make sure your application is running and not throwing any exceptions. In my case, it was a Node application.
1. Right-click on 'EC2 > Load Balancer > ' and select 'Edit health check'
2. Change the protocol from HTTPS to TCP
3. Click on the Instances tab and the status was InService
For me it worked (became InService) after changing the port from 80 (default) to 8080, where we had installed the Tomcat web server.
Changing the ping target TCP port from 80 to 8080 for 20 seconds (depending on your timeout) and back to port 80 worked for me.
Right-click on 'EC2 > Load Balancer > ' and select 'Edit health check'
Change 'Ping Port' to 8080 and 'Save'
Wait 20 seconds and change the port back to 80, then 'Save'
Click on the 'Instance' tab and the 'Status' should show 'InService'