I created a Target Group with two AWS Jenkins instances, following the documentation below.
https://www.jenkins.io/doc/tutorials/tutorial-for-installing-jenkins-on-AWS/
Then I created an ALB and used the Target Group as the target of its listener.
I used Amazon ACM to enable HTTPS on my ALB, and I added a CNAME record in Route 53 for the ALB's DNS name.
Now when I try to log in using the CNAME, I observe the following:
If I have multiple EC2 instances in my TG, the login only succeeds after the 3rd or 4th attempt. What is the reason for this? How do I debug it? Can I set up CloudWatch logs at the ELB level to check?
If I have only one EC2 instance in my TG, the login always succeeds on the first attempt.
If I log in directly to each of the instances, the login always succeeds on the first attempt.
Also, if I enable stickiness at the TG level, I can log in on the first attempt using the CNAME even with both EC2 instances in the TG. Why do I have to enable stickiness, and what is the impact of doing so?
When deploying a third-party web application like Jenkins, is there a way to know if and when I need to enable stickiness, and what the side effects of doing so are?
Thanks in advance.
The load balancer sends each request to one of the EC2 instances in the TG. Jenkins responds with a session token and cookies so your browser and the server stay in sync.
When there is only 1 instance all the traffic is sent to it.
When there are two or more instances, traffic is sent to each of them in turn: typical round-robin behavior.
The problem is that the Jenkins Controller is not a clusterable resource.
Basically Jenkins A has no idea that the other Jenkins exists.
So, what is happening is this: the login request goes to Jenkins A, which responds with a session token. Then the login redirect happens, and your browser requests the dashboard page, sending along the session token. That request gets sent to Jenkins B, which promptly denies all knowledge of the session token and bounces you back to the login page.
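If you do keep both instances behind the ALB, target-group stickiness is what pins a browser to one instance via a load-balancer-generated cookie. A minimal boto3 sketch, with the target group ARN as a placeholder:

```python
import boto3

# Hypothetical target group ARN; replace with your own.
TG_ARN = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
          "targetgroup/jenkins-tg/0123456789abcdef")

elbv2 = boto3.client("elbv2")

# Enable load-balancer cookie stickiness so every request from the
# same browser lands on the same Jenkins instance.
elbv2.modify_target_group_attributes(
    TargetGroupArn=TG_ARN,
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        # How long the ALB keeps routing this client to the same target.
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},
    ],
)
```

The impact is the flip side of the convenience: if the pinned instance goes away, that client's session goes with it, and load is no longer spread evenly across instances.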
The advice from Jenkins is to have a main instance and a "warm" standby that is brought online when the main one goes down.
If you are running a cluster in order to build more things, then what you probably need is more Agents connected to the Controller; those can be provisioned by an Auto Scaling group and scaled up when needed and back down when it is all quiet.
Related
I created a REST AWS API Gateway and it worked perfectly when it targeted a single EC2 instance. I then set it up with an EC2 Load Balancer in front of a Target Group with 2 EC2 instances. Now when I make a request and then synchronously poll its status, I get a 404 error. My guess is that the initial job was posted on one machine and I then try to access it on the other, yielding the 404. I tried enabling stickiness on the target group, but that did nothing. Any suggestions?
I would suggest checking the logs on your EC2 instances to see the exact request routed from the LB to the EC2 machine. In my experience, the LB calls the EC2 instances using their internal IP addresses, and the URL might be modified, depending on configuration.
Checking the logs will help you debug this error. With stickiness you're on the right track.
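The load balancer's own view is available too: ELB/ALB access logs (delivered to S3, not CloudWatch) record which target each request was routed to. A sketch of enabling them for an ALB with boto3, assuming the bucket already carries the bucket policy that ELB log delivery requires; the names are placeholders:

```python
import boto3

# Placeholder load balancer ARN; replace with your own.
LB_ARN = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
          "loadbalancer/app/my-alb/0123456789abcdef")

elbv2 = boto3.client("elbv2")

# Turn on access logging to an S3 bucket the ELB service can write to.
elbv2.modify_load_balancer_attributes(
    LoadBalancerArn=LB_ARN,
    Attributes=[
        {"Key": "access_logs.s3.enabled", "Value": "true"},
        {"Key": "access_logs.s3.bucket", "Value": "my-alb-logs-bucket"},
        {"Key": "access_logs.s3.prefix", "Value": "alb"},
    ],
)
```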
I have created a Jmeter script to check the performance of a site. The website is hosted in AWS with elastic scaling and with sticky sessions. Basically the AWS load balancer will assign a session cookie to each user so the load balancer can direct the user to the correct instance.
source
My problem is that I'm using a cookie manager and clearing all the cookies with each iteration. Does it clear these assigned cookies too? I suspect so, because the script's error rate is lower when we execute it against a single AWS instance than against auto-scaling (multiple instances).
Any ideas?
I don't know how you "clear" cookies, but if you use the "Clear cookies each iteration?" checkbox of the HTTP Cookie Manager, then it removes all cookies on each new iteration of the Thread Group (other loop mechanisms like the Loop Controller or While Controller will not trigger the clearing of cookies).
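The effect of that checkbox on load-balancer stickiness is easy to reproduce outside JMeter. A small Python sketch against a placeholder URL (the stickiness cookie is named AWSALB for an ALB, AWSELB for a Classic ELB):

```python
import requests

# Placeholder URL for the load-balanced site.
URL = "https://my-app.example.com/"

# Reusing one Session keeps the stickiness cookie (AWSALB / AWSELB),
# so every iteration should hit the same backend instance.
sticky = requests.Session()
for _ in range(3):
    sticky.get(URL)
    print("cookies kept:", list(sticky.cookies.keys()))

# Clearing the jar each iteration (what the checkbox does) drops the
# stickiness cookie, so the LB is free to pick a different instance.
cleared = requests.Session()
for _ in range(3):
    cleared.cookies.clear()
    cleared.get(URL)
```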
Also, if your Load Balancer has more than one IP address, you might want to add the DNS Cache Manager to your Test Plan in order to avoid DNS request caching on the JVM or OS side.
The problem was not with the JMeter script. It's with the AWS ELB in an auto-scaling configuration. We had configured an alarm to remove an instance from the load balancer, so even with sticky sessions enabled, when the instance was removed it generated the error.
After moving the session management to an ElastiCache (Redis) based solution, the issue was fixed.
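The idea behind that fix is to keep session state outside the instances, so any backend can validate a session even after the original one is gone. A minimal sketch with the redis-py client; the ElastiCache endpoint and key layout are placeholders:

```python
import redis

# Placeholder ElastiCache Redis endpoint; any instance behind the LB
# can look sessions up here, so stickiness is no longer load-bearing.
r = redis.Redis(host="my-sessions.abc123.0001.use1.cache.amazonaws.com",
                port=6379)

def save_session(session_id: str, user: str, ttl_seconds: int = 1800) -> None:
    # setex stores the value with an expiry, mirroring a session timeout.
    r.setex(f"session:{session_id}", ttl_seconds, user)

def load_session(session_id: str):
    value = r.get(f"session:{session_id}")
    return value.decode() if value else None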
Thank you to all who helped.
I have an Application Load Balancer and 4 app servers in a single target group. After enabling session stickiness on the load balancer, requests are not routed to a single healthy instance; instead they are routed to multiple EC2 instances, which is breaking my application.
Are there any alternative ways to have this point to a single EC2 instance in the target group, rather than hopping to a random EC2 instance whenever I hit the application URL?
You need to make sure the initial request is handled by the instance of your choice. Then you can use 'Application-Controlled Session Stickiness' to associate the session with the instance that handled the initial request.
Please read Configure Sticky Sessions for Your Classic Load Balancer - Elastic Load Balancing. This might help.
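A sketch of what that looks like with boto3 for a Classic Load Balancer, with placeholder names (an ALB instead uses the target-group attributes stickiness.type = app_cookie and stickiness.app_cookie.cookie_name):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Placeholder LB and cookie names. JSESSIONID is typical for Java
# apps; use whatever session cookie your application issues.
elb.create_app_cookie_stickiness_policy(
    LoadBalancerName="my-classic-lb",
    PolicyName="app-sticky-policy",
    CookieName="JSESSIONID",
)

# Attach the policy to the listener on port 443.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="my-classic-lb",
    LoadBalancerPort=443,
    PolicyNames=["app-sticky-policy"],
)
```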
Also, if you have 4 servers in the target group and want requests to go to only 1 server, you can temporarily remove the other three and initiate a request. In that case the request will always go to the single server you wanted. Then you can add the other three servers back. Now you can set stickiness to associate the session with that first server.
I have an ELB set up for my Magento 2 application. The application is running on EC2 instances. In Magento 2 I need to specify a base URL for the site, and I am setting that to my load balancer's public DNS name.
When the ELB performs health checks on the individual EC2 instances, they return a 302, as Magento tries to redirect the call to the public DNS record for the ELB.
How do I deal with this?
I created a file, health.html, and placed it in the root Magento folder on the EC2 instances.
I updated the health check to load /health.html.
This works fine and the Load Balancers are able to direct traffic to these instances as they are healthy.
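For reference, the same change can be made via boto3, assuming an ALB target group (the ARN is a placeholder):

```python
import boto3

# Hypothetical target group ARN; replace with your own.
TG_ARN = ("arn:aws:elasticloadbalancing:us-east-1:123456789012:"
          "targetgroup/magento-tg/0123456789abcdef")

elbv2 = boto3.client("elbv2")

# Point the health check at the static file and only accept 200,
# so Magento's 302 redirect no longer marks the instance unhealthy.
elbv2.modify_target_group(
    TargetGroupArn=TG_ARN,
    HealthCheckPath="/health.html",
    Matcher={"HttpCode": "200"},
)
```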
This is not really ideal, though; it mainly served to verify the configuration between M2, the ELB, and the EC2 instances.
I would like the health check to make sure Magento2 is actually healthy.
You could assign the health endpoint to a Magento action directly.
I updated the health check to load /health.html.
Set it to an HTTP route declared in your application's routes, and add your checks there: /health/action, for example.
I found the answer. There is a setting under Stores -> Configuration -> General -> Web -> Url Options that allows you to turn off the auto redirect. I disabled this and the checks are now working.
I have a solution where an ELB is configured to use sticky sessions. How can I verify that requests from a client are actually routed to one and the same instance in the auto-scaling group behind the ELB?
For web applications, in my dev/testing environments, I usually grab the instance ID from the EC2 metadata service and spit it out in the HTML. That way I can see which instance is serving my request.
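A sketch of that trick in Python, using IMDSv2 (the token request is required on instances that enforce it); it must run on the EC2 instance itself:

```python
import requests

# 169.254.169.254 is the EC2 instance metadata service.
token = requests.put(
    "http://169.254.169.254/latest/api/token",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
).text

instance_id = requests.get(
    "http://169.254.169.254/latest/meta-data/instance-id",
    headers={"X-aws-ec2-metadata-token": token},
).text

# Emit it somewhere visible, e.g. an HTML comment or response header,
# then refresh the page and watch whether the ID changes.
print(f"<!-- served-by: {instance_id} -->")
```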
Other than that, to my knowledge there is no way to verify sticky sessions are working unless you log session IDs along with all requests and check through the logs across each of the relevant instances.
ELB access logs contain both the requesting client's and the backend instance's IP.
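A quick sketch of pulling both out of a Classic ELB access-log line, assuming that format (client:port is the third space-separated field, backend:port the fourth):

```python
def client_and_backend(log_line: str):
    # Classic ELB access-log fields: timestamp elb client:port backend:port ...
    fields = log_line.split(" ")
    client = fields[2].rsplit(":", 1)[0]
    backend = fields[3].rsplit(":", 1)[0]
    return client, backend

# Example line with placeholder addresses.
line = ('2024-01-01T00:00:00.000000Z my-elb 203.0.113.10:54321 '
        '10.0.1.15:8080 0.00005 0.2 0.00004 200 200 0 1043 '
        '"GET https://example.com:443/ HTTP/1.1" "curl/8.0" - -')
print(client_and_backend(line))  # ('203.0.113.10', '10.0.1.15')
```

Grouping lines by client IP and checking that each maps to a single backend IP confirms stickiness is holding.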