I have an ELB set up for my Magento 2 application, which runs on EC2 instances. Magento 2 requires a base URL for the site, and I am setting that to the load balancer's public DNS name.
When the ELB performs health checks on the individual EC2 instances, they return a 302 because Magento tries to redirect the request to the public DNS record for the ELB.
How do I deal with this?
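To see exactly what the ELB health check sees, you can issue the same request without following redirects. A minimal sketch using Python's requests library; the instance address is hypothetical:

```python
import requests

# Hit the instance the same way the ELB health check does:
# plain HTTP against the instance itself, without following redirects.
resp = requests.get(
    "http://10.0.1.23/",        # hypothetical private IP of one EC2 instance
    allow_redirects=False,
    timeout=5,
)

print(resp.status_code)               # 302 while Magento redirects to the base URL
print(resp.headers.get("Location"))   # points at the ELB's public DNS name
```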
I created a file, health.html, and placed it in the Magento root folder on the EC2 instances.
I updated the health check to load /health.html.
This works fine and the Load Balancers are able to direct traffic to these instances as they are healthy.
This is not really ideal; it mainly served to verify the configuration between Magento 2, the ELB, and the EC2 instances.
I would like the health check to make sure Magento2 is actually healthy.
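For reference, pointing the health check at /health.html can also be done through the API. A hedged boto3 sketch, assuming a classic ELB; the load balancer name and region are hypothetical:

```python
import boto3

# Point the classic ELB health check at the static file on each instance.
elb = boto3.client("elb", region_name="us-east-1")  # region is an assumption

elb.configure_health_check(
    LoadBalancerName="my-magento-elb",               # hypothetical ELB name
    HealthCheck={
        "Target": "HTTP:80/health.html",             # protocol:port/path probed by the ELB
        "Interval": 30,
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 2,
    },
)
```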
You could point the health endpoint at a Magento action directly.
Set it to an HTTP route declared in your application and add your checks there; /health/action, for example.
I found the answer. There is a setting under Stores -> Configuration -> General -> Web -> Url Options that lets you turn off the automatic redirect to the base URL. I disabled it and the checks are now passing.
I created a target group with two Jenkins EC2 instances on AWS, following the documentation below.
https://www.jenkins.io/doc/tutorials/tutorial-for-installing-jenkins-on-AWS/
Then I created an ALB and attached the target group to its listener.
I used ACM to enable HTTPS on the ALB and added a CNAME record in Route 53 for the ALB's DNS name.
Now, when I try to log in using the CNAME, I observe the following:
If I have multiple EC2 instances in the target group, the login only succeeds after the 3rd or 4th attempt. What is the reason for this? How do I debug it? Can I set up CloudWatch logs at the ELB level to check this?
If I have only one EC2 instance in the target group, the login always succeeds on the first attempt.
If I log in directly to each instance, the login always succeeds on the first attempt.
Also, if I enable stickiness at the target group level, I can log in on the first attempt using the CNAME even with both EC2 instances in the target group. Why do I have to enable stickiness, and what is the impact of doing so?
Is there a way to know, when deploying a third-party web application like Jenkins, whether I need to enable stickiness and what the side effects of doing so are?
Thanks in advance.
The load balancer sends each request to one of the EC2 instances in the target group. Jenkins responds with a session token and cookies so that your browser and the server stay in sync.
When there is only one instance, all the traffic is sent to it.
When there are two or more instances, traffic is sent to each of them in turn, which is typical round-robin behavior.
The problem is that the Jenkins controller is not a clusterable resource.
Basically, Jenkins A has no idea that the other Jenkins exists.
So what happens is this: the login request goes to Jenkins A, which responds with a session token; the login redirect then happens and your browser requests the dashboard page, sending along the session token; that request is routed to Jenkins B, which promptly denies all knowledge of the session token and bounces you back to the login page.
The advice from the Jenkins project is to have a main instance and a "warm" standby that is brought online when the main one goes down.
If you are running multiple instances in order to build more things, what you probably need is more agents connected to the controller; those can be provisioned by an Auto Scaling group and scaled up when needed and back down when it is quiet.
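For completeness, the stickiness mentioned in the question is just a pair of target group attributes (a load-balancer-generated cookie that pins a browser to one target). A hedged boto3 sketch; the target group ARN and region are hypothetical:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")  # region is an assumption

# Enable lb_cookie stickiness on the Jenkins target group so a browser
# keeps hitting the same instance for the cookie's lifetime.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/jenkins/abc123",  # hypothetical ARN
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},  # pin for 24 hours
    ],
)
```

Note that stickiness only masks the problem: if the pinned instance goes away, its sessions are lost, which is why the warm-standby approach above is usually recommended.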
I am using a reverse proxy in front of my load balancer. Currently I just open a TCP connection to the load balancer from the reverse proxy to check its health, and if that succeeds I send the request on to the main load balancer. I want to check whether the main load balancer has any healthy servers behind it; if not, I want to redirect those requests to another server fleet. Is there an API or anything else that AWS load balancers expose to report the status of their targets?
Go to the EC2 console and then to the Target Groups section. Select your target group; from there you should be able to see which instances are passing the health check.
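The same information is exposed through the ELB API, so the reverse proxy could query it directly. A hedged boto3 sketch for an ALB/NLB target group; the ARN and region are hypothetical:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")  # region is an assumption

# Ask the target group for the health state of every registered target.
response = elbv2.describe_target_health(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/web/abc123"  # hypothetical ARN
)

healthy = [
    d["Target"]["Id"]
    for d in response["TargetHealthDescriptions"]
    if d["TargetHealth"]["State"] == "healthy"
]

# Fail over to the other fleet when nothing behind the LB is healthy.
if not healthy:
    print("no healthy targets, route requests to the backup fleet")
```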
I have a running Web server on Google Cloud. It's a Debian VM serving a few sites with low-ish traffic, but I don't like Cloudflare. So, Cloud CDN it is.
I created a load balancer with static IP.
I followed all the steps from the guides I've found, but when it comes time to add an origin to Cloud CDN, no load balancer is available because it is "unhealthy", as seen by hovering over the yellow triangle on the load balancer status page: "1 backend service is unhealthy".
At this point, the only option is to choose Create a Load Balancer.
I've created several load balancers with different attributes, thinking that might be it, but no luck. They all get the "1 backend service is unhealthy" tag, and thus are unavailable.
---Edit below---
During load balancer creation, I don't see anything that would make the LB aware of the VM, except during certificate issuance (see below). Nowhere does it ask for any field that points to the VM.
I created another LB just now; here are its settings. Creation finishes, and then it is marked unhealthy.
Type: HTTP(S) Load Balancing
Internet facing or internal only? From Internet to my VMs
(my VM is not listed in backend services, so I create one... is this the problem?)
Create backend service
Backend type: Instance group
Port numbers: 80, 443
Enable Cloud CDN: checked
Health check: create new: HTTPS, check /
Simple host and path rule: checked
New frontend IP and port
Protocol: HTTPS
IP: v4, static reserved and issued
Port: 443
Certificate: Create new: Google-managed certificate, mydomain.com and www.mydomain.com
A load balancer's unhealthy state usually means that its health check probes are unable to reach your backend (your Debian VM in this case).
If the backend itself looks fine, there is probably a problem with your firewall configuration.
Check whether your firewall rules allow the health check probes' IP address ranges.
Refer to the document below for more detailed information.
Required firewall rule
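The documented source ranges for Google Cloud HTTP(S) load balancer health check probes are 35.191.0.0/16 and 130.211.0.0/22, so the VM's network needs an ingress rule that allows them. A hedged sketch using the google-cloud-compute Python client; the project ID, rule name, network, and ports are assumptions:

```python
from google.cloud import compute_v1

# Ingress rule letting Google's health check probes reach the backend VM.
rule = compute_v1.Firewall()
rule.name = "allow-lb-health-checks"          # hypothetical rule name
rule.network = "global/networks/default"      # assumes the VM is on the default network
rule.direction = "INGRESS"
rule.source_ranges = ["35.191.0.0/16", "130.211.0.0/22"]  # documented probe ranges

allowed = compute_v1.Allowed()
allowed.I_p_protocol = "tcp"
allowed.ports = ["80", "443"]                 # ports the backend service/health check uses
rule.allowed = [allowed]

client = compute_v1.FirewallsClient()
client.insert(project="my-project", firewall_resource=rule)  # hypothetical project ID
```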
I have configured an Application Load Balancer on Amazon and set up DNS in Route 53 with an alias A record pointing at the load balancer. Behind the LB I have two instances running IIS. If the site is set up on both instances, the balancer automatically distributes clients in rotation
(as I understand it, round robin). But if I turn off the site in IIS on one instance, the load balancer keeps sending traffic to that instance: if I go to example.com, the site works one time, and if I refresh the page I get an error (because the site is turned off in IIS). Could you please tell me how to set up the load balancer to route traffic only to the working instance when one of them is not working? Thank you.
Load balancers only keep distributing traffic to targets that are passing their health checks. If that is not happening in your case, I would recheck the health check configuration under Target Groups.
You need to point the health check at a port/path that starts failing once the site is turned off. Only then will the load balancer send all traffic to the healthy host instead of the unhealthy one.
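For example, with an ALB you can point the target group's health check at the same port and a path served by the IIS site, so stopping the site makes the check fail. A hedged boto3 sketch; the target group ARN, region, and path are hypothetical:

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")  # region is an assumption

# Probe the same port/path the real site serves; when the IIS site is stopped,
# the probe fails and the target is marked unhealthy.
elbv2.modify_target_group(
    TargetGroupArn="arn:aws:elasticloadbalancing:...:targetgroup/iis/abc123",  # hypothetical ARN
    HealthCheckProtocol="HTTP",
    HealthCheckPort="80",
    HealthCheckPath="/",
    HealthyThresholdCount=2,
    UnhealthyThresholdCount=2,
)
```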
What do the LB health checks say? If a back-end instance is not listening on the health check port, the LB marks it as unhealthy and stops forwarding requests to it. If you are using an Application Load Balancer, you can see the health check status within the target groups associated with the load balancer.
I have created a load balancer in AWS in order to set up SSL on a server that already had another domain with SSL. The load balancer was working fine until today, but a little while ago I noticed that the status of the instance changed to OutOfService.
I'm new to AWS and couldn't figure out what is going wrong.
My health check is set as
Please help out.
Here is my checklist for troubleshooting this type of issue:
Is the security group of your instance OK? The ELB needs access to your instance on the health check port.
Is your web/app server actually running on the instance? Does it accept connection requests?
Does your health check URL return HTTP 200? If it returns anything else (a 30x redirect, for example), the ELB will consider the instance unhealthy. You can check this with curl -I on Linux instances.
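To see why the ELB marked the instance OutOfService, the classic ELB API also reports a reason code per instance. A hedged boto3 sketch; the load balancer name and region are hypothetical:

```python
import boto3

elb = boto3.client("elb", region_name="us-east-1")  # region is an assumption

# Each instance state includes ReasonCode/Description explaining OutOfService.
states = elb.describe_instance_health(
    LoadBalancerName="my-load-balancer"              # hypothetical classic ELB name
)["InstanceStates"]

for s in states:
    print(s["InstanceId"], s["State"], s.get("ReasonCode"), s.get("Description"))
```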
HTH
--Seb