In my project I have two instances (based on ECS) running a Node.js app. Both are identical (purely for HA purposes), use cookies, and sit behind a load balancer. The problem is that the instances don't share sessions: when I log in to the first instance and then navigate back, the load balancer sometimes switches me to the second instance, which doesn't have any session data (the cookie was generated by the first instance), so I have to log in again. I know there is an option to force the two instances to share sessions, but that approach requires changes to the app code. Instead, I would like to force my load balancer to keep using the instance it chose the first time, until the user finishes their work and logs off (or closes the browser). Is that possible?
You can enable sticky sessions on your target groups. To do this:
In the Amazon EC2 console, go to Target Groups under LOAD BALANCING.
Select the target group and go to the Description tab.
Choose Edit attributes and enable Stickiness.
Set the duration and save.
These steps are slightly different if you have a Classic Load Balancer; see the AWS documentation on sticky sessions for details.
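For reference, the same attributes can be set from the AWS CLI. This is a sketch with a placeholder ARN, enabling the load-balancer-generated cookie for one day:

    # Placeholder ARN; enables duration-based (lb_cookie) stickiness for 86400 s
    aws elbv2 modify-target-group-attributes \
        --target-group-arn arn:aws:elasticloadbalancing:region:account:targetgroup/my-tg/abc123 \
        --attributes Key=stickiness.enabled,Value=true \
                     Key=stickiness.type,Value=lb_cookie \
                     Key=stickiness.lb_cookie.duration_seconds,Value=86400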
I have a website with a main domain and subdomains (a different subdomain for each country), e.g. mysite.com (the main domain), country-a.mysite.com (for country A), country-b.mysite.com (for country B). NOTE: each country has independent users/data and is linked to a separate database.
Right now I'm managing them on one EC2 instance, where I have a subfolder for each country and point each subdomain at it using Route53. They are working fine.
But now I want to make them scalable, as I'm expecting more traffic. What is the best practice for such a scenario?
Is it possible to get another EC2 instance, clone all the subfolders, and introduce a load balancer to handle the traffic between these 2 instances? I mean, when users from countries A and B hit the load balancer, will it direct each user to the right subfolder on these 2 instances and manage the traffic?
If yes, how should I configure Route53?
How does the load balancer handle user sessions? I mean, say the first request from a user hits the load balancer and is directed to the 1st instance, and a later request from the same user hits the 2nd instance. If a session is created on the 1st instance, will its session data be available on the 2nd instance?
I also wonder how I can manage the source code on these instances. If I want to update the code, do I have to update both instances separately? Or is there an easier way, where I upload the files to one instance and they are cloned to the others?
BTW, my website is built using the Laravel framework and Postgres.
I'm new to load balancers; please help me find the right solution.
If yes, how should I configure Route53?
There is nothing you need to do in R53. It's the load balancer (LB) that distributes traffic among your instances, not R53. R53 just directs traffic to the LB, nothing else.
How does the load balancer handle user sessions?
It does not. You could enable sticky sessions in your target group (TG) so that the LB tries to "maintain state information in order to provide a continuous experience to clients".
However, a better solution is to make your instances stateless. This means that all session/state information for your application is kept outside the instances, e.g. in DynamoDB, ElastiCache or S3. This way you make your application scalable and eliminate the problem of keeping track of session data stored on individual instances.
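For the Laravel app in question, moving sessions to a shared store can be as small as an environment change. A minimal sketch, assuming a Redis/ElastiCache endpoint (the host below is a placeholder, and the phpredis or predis client must be installed):

    # Hypothetical .env entries: store sessions in a shared Redis (e.g. ElastiCache)
    # instead of on each instance's local disk
    SESSION_DRIVER=redis
    REDIS_HOST=my-cluster.abc123.0001.use1.cache.amazonaws.com
    REDIS_PORT=6379

With sessions external, any instance can serve any request, so sticky sessions become an optimization rather than a requirement.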
I also wonder how I can manage the source code on these instances. I mean, if I want to update the code, do I have to update these 2 instances separately?
Yes. Your instances should be identical. Usually CodeDeploy is used to ensure smooth and reproducible updates across a number of instances.
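As a sketch of what that looks like (all names and the bundle location are hypothetical), a single CodeDeploy call rolls the same revision out to every instance in a deployment group:

    # Hypothetical names; deploys one revision to all instances in the group
    aws deploy create-deployment \
        --application-name my-laravel-app \
        --deployment-group-name production \
        --s3-location bucket=my-artifacts,key=releases/app.zip,bundleType=zip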
I have an Application Load Balancer and 4 app servers in a single target group. After enabling session stickiness on the load balancer, requests are not routed to a single healthy instance; instead they are routed to multiple EC2 instances, which breaks my application.
Any alternative ideas for making this point to a single EC2 instance in the target group, rather than hopping to a random EC2 instance whenever I hit the application URL?
You need to make sure the initial request is handled by the instance of your choice. Then you can use 'Application-Controlled Session Stickiness' to associate the session with the instance that handled that initial request.
Please read Configure Sticky Sessions for Your Classic Load Balancer - Elastic Load Balancing. This might help.
Also, if you have 4 servers in the target group and want to send requests to only 1 server, you can temporarily remove the other three servers and initiate a request. In that case the request will always go to the single server you want. Then you can add the other three servers back. Now you can set stickiness to associate the session with that first server.
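For reference, application-controlled stickiness on a Classic Load Balancer (matching the doc linked above) is configured roughly like this from the CLI; the names and cookie are placeholders:

    # Tie stickiness to the application's own session cookie (hypothetical names)
    aws elb create-app-cookie-stickiness-policy \
        --load-balancer-name my-classic-lb \
        --policy-name app-cookie-policy \
        --cookie-name MYSESSIONID

    # Attach the policy to the listener on port 80
    aws elb set-load-balancer-policies-of-listener \
        --load-balancer-name my-classic-lb \
        --load-balancer-port 80 \
        --policy-names app-cookie-policy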
I will need to host a PHP Laravel application on Google Cloud Compute Engine with auto scaling and load balancing. I tried to set up and configure the following:
I created an instance template, where I added a startup script that installs apache2 and PHP, clones my project's git repository, configures the Cloud SQL proxy, and applies all the settings required to run this Laravel project.
Created an instance group, where I configured a rule so that when CPU reaches a certain percentage, auto scaling starts creating additional instances.
Created a Cloud SQL instance.
Created a storage bucket; in my application all public content such as images is uploaded to the bucket and served from there.
Created a load balancer, assigned the public IP to it, and configured its frontend and backend correctly.
With the above configuration, everything works fine: when an instance reaches the defined CPU percentage, auto scaling starts creating additional instances and the load balancer starts routing traffic to them.
The issue I'm getting is that configuring and setting up my environment (the instance template's startup script) takes about 20-30 minutes before a newly created instance is ready to serve content. But as soon as the load balancer detects that the newly created machine is up and running, it starts routing traffic to the new VM instance, which is not yet ready to serve content.
As a result, when the load balancer routes traffic to a machine that isn't ready, I obviously get 404 and other errors.
How can I prevent this? Is there any way for an instance created by the auto scaling service to notify the load balancer once it is ready to serve content, so that only then will the load balancer route traffic to the newly created instance?
How to prevent the Google Cloud load balancer from forwarding traffic to a newly created autoscaled instance before it is ready?
Google Cloud's autoscaler has a Cool Down period parameter that tells it how long to allow for a new instance to come online and be 100% available. However, this means that if your instance is not available within that time, errors will be returned.
The above answers your question. However, taking 20 or 30 minutes for a new instance to come online defeats a lot of the benefits of autoscaling. You want instances to come online immediately.
Best practice is to create an instance, configure it with all the required software, applications, etc., and then create an image of that instance. Then, in your template, specify this image as your baseline image. Your instances will then not have to wait for software downloads and installs, configuration, and so on. All you need to do is run a script that does the final configuration, if needed, to bring an instance online. Your goal should be 30-180 seconds from launch to being online and running for a new instance. Rethink / redesign anything that takes longer than 180 seconds. This will also save you money.
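As a sketch of the baking step (all names and the zone are placeholders), you create an image from an already-configured instance's disk and point the instance template at it:

    # Bake a reusable image from an already-configured instance's boot disk
    gcloud compute images create laravel-baseline \
        --source-disk=configured-vm \
        --source-disk-zone=us-central1-a

    # Point the instance template at the baked image instead of a bare OS image
    gcloud compute instance-templates create laravel-template \
        --image=laravel-baseline \
        --image-project=my-project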
John Hanley's answer is pretty good; I'm just completing it a bit.
You should take a look at Packer for creating your preconfigured Google images; it will help when you need to add new configuration or do updates.
The cooldown is a useful knob, but in your case you can't really be sure the installation won't sometimes take a bit longer, e.g. due to updates: since you should run apt-get update && apt-get upgrade at instance startup to stay up to date, it will only take more and more time...
A load balancer should normally have a health check configured and should not route traffic unless the instance is detected as healthy. In your case, as you have apache2 installed, I suppose you have a health check on port 80 or 443 (depending on your configuration) on a /healthz path.
A way to use the health check correctly would be to create a dedicated vhost for it and add a fake domain in the health check, let's say health.test: that gives a vhost listening for health.test and returning a 200 response on the /healthz path.
This way you don't have to change your config; just activate the health vhost last, so the load balancer doesn't start routing traffic before the server is really up... (a sketch of such a health check follows)
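A sketch of the matching health check on the GCP side, assuming the health.test vhost described above (the check name is a placeholder):

    # HTTP health check that sends Host: health.test and expects 200 on /healthz
    gcloud compute health-checks create http apache-healthz \
        --host=health.test \
        --request-path=/healthz \
        --port=80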
I'm trying to set up an AWS environment for the first time and have so far got a DNS entry pointing to a Classic ELB. I also have an EC2 instance running, but I don't seem to be able to add the EC2 instance to the ELB in the UI. The documentation says that I can do this on the "Instances" tab of the load balancer screen, but I don't have that tab at all. All I can see is Description | Listeners | Monitoring | Tags.
Does anyone have any ideas why the "Instances" tab might be missing from the UI?
There are now two different types of Elastic Load Balancer.
ELB/1.0, the original, is now called a "Classic" balancer.
ELB/2.0, the new version, is an "Application Load Balancer" (or "ALB").
They each have different features and capabilities.
One notable difference is that ALB doesn't simply route traffic to instances, it routes traffic to targets on instances, so (for example) you could pool multiple services on the same instance (e.g. port 8080, 8081, 8082) even though those requests all came into the balancer on port 80. And these targets are configured in virtual objects called target groups. So there are a couple of new abstraction layers, and this part of the provisioning workflow is much different. It's also provisioned using a different set of APIs.
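To make the target-group idea concrete, here is a hedged CLI sketch (the VPC, instance ID, and names are all placeholders) pooling two services on one instance behind different ports:

    # Two target groups, one per service port (hypothetical names and VPC)
    aws elbv2 create-target-group --name svc-a --protocol HTTP --port 8080 \
        --vpc-id vpc-0123456789abcdef0
    aws elbv2 create-target-group --name svc-b --protocol HTTP --port 8081 \
        --vpc-id vpc-0123456789abcdef0

    # The same instance registered in both groups, on different ports
    aws elbv2 register-targets --target-group-arn <svc-a-arn> \
        --targets Id=i-0123456789abcdef0,Port=8080
    aws elbv2 register-targets --target-group-arn <svc-b-arn> \
        --targets Id=i-0123456789abcdef0,Port=8081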
Since the new Application Load Balancer is the default selection in the console wizard, it's easy to click past it, not realizing that you've selected something other than the classic ELB you might have been expecting, and that sounds like what occurred in this case.
It might be that you have selected an Application Load Balancer instead of a Classic Load Balancer.
If that is the case, then you need to add your instance to a target group, as the Instances tab is not available for an Application Load Balancer.
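For example (placeholder ARN and instance ID), registering the instance with the ALB's target group from the CLI:

    # With an ALB, instances are registered into a target group, not the LB itself
    aws elbv2 register-targets \
        --target-group-arn arn:aws:elasticloadbalancing:region:account:targetgroup/my-tg/abc123 \
        --targets Id=i-0123456789abcdef0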
I hope the above resolves your case.
Is it possible to reuse an existing load balancer using Elastic Beanstalk?
As far as I could manage, the only way I could get this to work was as follows:
Create your environment as a single instance, not load balanced. You will find that EB creates an Auto Scaling group regardless.
Manually create a Target Group for the EB environment (in the EC2 console under Target Groups)
Assign the Target Group you just created to the Auto Scale group (in the EC2 console under Auto Scaling Groups, click on the Auto Scale group and edit its details)
Add the Listeners for the Target Group to the desired ALB (a CLI sketch of steps 2-4 follows this list)
Done
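Roughly, steps 2-4 map to these CLI calls; this is a sketch, and every name and ARN below is a placeholder:

    # Step 2: target group for the EB environment (hypothetical name and VPC)
    aws elbv2 create-target-group --name eb-env-tg --protocol HTTP --port 80 \
        --vpc-id vpc-0123456789abcdef0

    # Step 3: attach the target group to the Auto Scale group EB created
    aws autoscaling attach-load-balancer-target-groups \
        --auto-scaling-group-name <eb-created-asg> \
        --target-group-arns <eb-env-tg-arn>

    # Step 4: forward matching traffic from the existing ALB to the target group
    aws elbv2 create-rule --listener-arn <existing-alb-listener-arn> \
        --priority 10 \
        --conditions Field=host-header,Values=myapp.example.com \
        --actions Type=forward,TargetGroupArn=<eb-env-tg-arn>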
Managing scaling has to be done on the Auto Scale group directly, as it remains disabled in the EB console.
Changing configurations and updating the application works and pushes to all instances.
I haven't tested upgrading the OS, but I assume it will work without issue, as it likely won't rebuild the Auto Scaling group.
Rebuilding the environment works, but as the Auto Scale group gets rebuilt, you need to reset the Target Group and auto scaling configuration on it manually.
Update: I've been running several clients with this setup without issue for over a year.
AWS now supports sharing of an Application Load Balancer among Elastic Beanstalk environments.
However, this can only be done during environment creation. Here are the steps to use a shared load balancer.
Open the Elastic Beanstalk console, and in the Regions list, select your AWS Region.
In the navigation pane, choose Environments.
Choose Create a new environment to start creating your environment.
On the wizard's main page, before choosing Create environment, choose Configure more options.
Choose the High availability configuration preset.
Alternatively, in the Capacity configuration category, configure a Load balanced environment type. For details, see Capacity.
In the Load balancer configuration category, choose Edit.
Select the Application Load Balancer option, if it isn't already selected, and then select the Shared option.
Make any shared Application Load Balancer configuration changes that your environment requires.
Choose Save, and then make any other configuration changes that your environment requires.
Choose Create environment.
After doing the above steps, Elastic Beanstalk creates rules inside the shared load balancer.
The rules forward requests based on the Host header.
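If you script environment creation, recent versions of the EB CLI appear to expose the same choice. A hedged sketch (the environment and load balancer names are placeholders, and the --shared-lb flag requires a current EB CLI):

    # Create the environment on an existing, shared ALB (hypothetical names)
    eb create my-env --elb-type application --shared-lb my-shared-alb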
If you want to modify an existing EB environment to use a shared ALB, I recommend the following steps:
Use eb config get <saved_configuration_name> to download the current configuration of your environment.
Modify the configuration on your local computer.
Run eb config put <modified_configuration_name> to upload the configuration file to Elastic Beanstalk.
Use the modified saved configuration to launch a new environment to replace the old environment.
I don't think it's possible. Elastic Beanstalk works on its own set of resources (ASG, security groups, LBs, etc.). Sharing them with other components may cause unwanted changes to those components, which could take the system down.
However, in my opinion, you should be able to add machines to the EB load balancer once it's created, though you will be in trouble when you terminate or recreate the application.