I have a website with a main domain and several subdomains (a different subdomain for each country), e.g. mysite.com (the main domain), country-a.mysite.com (for country A), country-b.mysite.com (for country B). NOTE: each country has independent users and data, linked to a separate database.
Right now I manage them all on one EC2 instance, with a subfolder for each country, and point each subdomain at the instance using Route 53. They are working fine.
But now I want to make them scalable, as I'm expecting more traffic. What is the best practice for such a scenario?
Is it possible to add another EC2 instance, clone all the subfolders, and introduce a load balancer to distribute traffic between the two instances? I mean, when users from country A and country B hit the load balancer, will it route each of them to the right subfolder on either instance and manage the traffic?
If yes, how should I configure Route 53?
How does the load balancer handle user sessions? Say the first request from a user is directed to the first instance, and the next request from the same user hits the second instance. If a session was created on the first instance, will that session data be available on the second?
I also wonder how to manage the source code on these instances. If I want to update the code, do I have to update both instances separately? Or is there an easier way, where I upload the files to one instance and they get cloned to the others?
BTW, my website is built with the Laravel framework and Postgres.
I'm new to load balancers; please help me find the right solution.
If yes, how should I configure Route 53?
There is nothing special you need to do in Route 53. It's the load balancer (LB) that distributes traffic among your instances, not Route 53. Route 53 just directs traffic to the LB, nothing else.
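Typically that means an alias record pointing each subdomain at the LB's DNS name. A minimal boto3 sketch; the load balancer name and hosted zone ID are hypothetical placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")
route53 = boto3.client("route53")

# Look up the LB's DNS name and its canonical hosted zone ID.
lb = elbv2.describe_load_balancers(Names=["my-alb"])["LoadBalancers"][0]

# Point a subdomain at the LB with an alias A record.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder: your mysite.com hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "country-a.mysite.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": lb["CanonicalHostedZoneId"],
                    "DNSName": lb["DNSName"],
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```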
How does the load balancer handle user sessions?
It doesn't. You could enable sticky sessions on your target group (TG), so that the LB tries to "maintain state information in order to provide a continuous experience to clients".
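If you do go the sticky-sessions route, here's a minimal boto3 sketch of enabling duration-based (LB cookie) stickiness; the target group ARN is a placeholder:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Enable duration-based (LB cookie) stickiness on the target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:region:account:targetgroup/my-tg/abc123",  # placeholder
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "lb_cookie"},
        {"Key": "stickiness.lb_cookie.duration_seconds", "Value": "86400"},  # one day
    ],
)
```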
However, a better solution is to make your instances stateless. This means that all session/state information for your application is kept outside the instances, e.g. in DynamoDB, ElastiCache, or S3. This makes your application scalable and eliminates the problem of tracking session data stored on individual instances.
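Since the site is Laravel, one option is its DynamoDB session driver (SESSION_DRIVER=dynamodb). As a sketch under assumptions: Laravel's DynamoDB store expects a table with a string partition key, named "key" by default (verify in your config), and the table name below is hypothetical:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Table for externalized sessions; Laravel's DynamoDB store expects a
# string partition key, named "key" by default (check config/cache.php).
dynamodb.create_table(
    TableName="app-sessions",  # hypothetical; must match your Laravel config
    AttributeDefinitions=[{"AttributeName": "key", "AttributeType": "S"}],
    KeySchema=[{"AttributeName": "key", "KeyType": "HASH"}],
    BillingMode="PAY_PER_REQUEST",
)
```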
I also wonder how to manage the source code on these instances. If I want to update the code, do I have to update both instances separately?
Yes, your instances should be identical. Usually CodeDeploy is used to ensure smooth, reproducible updates across any number of instances.
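For illustration, triggering a CodeDeploy deployment from a script could look roughly like this; the application, deployment group, bucket, and key names are all hypothetical:

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Deploy a revision stored in S3 to every instance in the deployment group.
response = codedeploy.create_deployment(
    applicationName="mysite",            # hypothetical
    deploymentGroupName="mysite-prod",   # hypothetical
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "mysite-releases",  # hypothetical
            "key": "release-2024-01.zip",
            "bundleType": "zip",
        },
    },
)
print(response["deploymentId"])
```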
Related
I have a situation here.
I have two environments, prod and preprod, each with two VMs (i.e. two nodes per environment).
Now I have to create a load balancer with those two nodes on the back end. One of the nodes has SSL configured with a domain name (say example.com).
It's a Pega app server with two nodes pointing to the same DB on Google Cloud SQL. Now the client wants a load balancer in front that will share or balance the traffic between these two nodes.
Is that possible?
If yes: the domain name was registered with the IP of node 1, but the load balancer will have a different IP, right?
So the Pega URL that was working before, https://example.com/prweb, will stop working, won't it?
But the requirement is that users just type the domain name and reach the Pega app via the load balancer, without caring which node the requests go to.
Is that possible at all?
Honestly, I'm a noob at all this cloud stuff, so please help me out if possible. I would really appreciate it. Thanks.
I tried to create a classic HTTPS load balancer and added those two instances to the backend, but only one of the two instances was detected in the target pool; it's showing "instance xxxx is unhealthy for [the ip of the load balancer]".
So next I created an HTTPS load balancer with a network endpoint group, where I added the two nodes' private IPs, but I'm not sure how to finish it. Please let me know if anybody knows how to do this.
We are looking to split our blog platform out to a separate EC2 server (running Nginx) for better performance and scalability.
Scenario is:
Web request (www.example.com) -> Load Balancer/Route -> Current EC2 Server
Blog request (www.example.com/blog) -> Load Balancer/Route -> New Separate EC2 Server for blog
Please help: in this case, what is the best option to use?
HAProxy
ALB - AWS
Any other solution?
Also, is it possible to have the load balancer or routing mechanism in a different AWS region? We are currently hosted in AWS.
HAProxy
You would have to set this up on an EC2 server and manage everything yourself. You would be responsible for scaling this correctly to handle all the traffic it gets. You would be responsible for deploying it to multiple availability zones to provide high availability. You would be responsible for installing all security updates on the operating system.
ALB - AWS
Amazon will automatically scale this out to handle any amount of traffic you get. Amazon will handle all security patches of the underlying system. Amazon provides free SSL certificates for ALBs. Amazon will deploy this automatically across multiple availability zones to provide high availability.
Any other solution?
I think AWS Global Accelerator would work here as well, but you would have to weigh the differences between Global Accelerator and ALB to decide which fits your use case and budget the best.
You could also look at placing a CDN in front of everything, like CloudFront or Cloudflare.
Also, is it possible to have the load balancer or routing mechanism in a different AWS region?
AWS Global Accelerator would be the thing to look at if load balancing across regions is a concern for you. Given the details you have provided, though, I'm not sure why you would want this.
Probably what you really need is a CDN in front of your websites, with or without the ALB.
Scenario is:
Web request (www.example.com) -> Load Balancer/Route -> Current EC2 Server
Blog request (www.example.com/blog) -> Load Balancer/Route -> New Separate EC2 Server for blog
In my view you can use an ALB deployed across multiple AZs for high availability, for the following reasons:
AWS ALB lets you route traffic based on various attributes, and the URL path is one of them:
https://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#rule-condition-types
With an ALB you can have two target groups, one whose instances handle the first path (www.example.com) and a second target group for the other path (www.example.com/blog); see the sketch below.
ALB also supports SNI (which lets a single ALB serve multiple certificates for multiple domains), so all you need to do is set up a single HTTPS listener and attach your certificates: https://aws.amazon.com/blogs/aws/new-application-load-balancer-sni/
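A rough boto3 sketch of both ideas, a path-based forwarding rule plus an extra SNI certificate on the HTTPS listener; all ARNs are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

listener_arn = "arn:aws:elasticloadbalancing:region:account:listener/app/my-alb/..."  # placeholder

# Forward /blog requests to the blog target group; everything else falls
# through to the listener's default action (the main site).
elbv2.create_rule(
    ListenerArn=listener_arn,
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/blog", "/blog/*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:region:account:targetgroup/blog-tg/...",  # placeholder
    }],
)

# Attach an additional certificate; SNI selects the right one per domain.
elbv2.add_listener_certificates(
    ListenerArn=listener_arn,
    Certificates=[{"CertificateArn": "arn:aws:acm:region:account:certificate/..."}],  # placeholder
)
```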
I have answered [something similar]; it might help you as well.
This is my opinion; take it as that. I am sure a lot of people won't agree.
If your project is small or personal, you can go with HAProxy (cheap: USD 4 or less if you get a t3a as a spot instance, or free if you run it inside one of your existing EC2 instances, perhaps with Docker).
If your project is not personal or not small, go with an ALB (more expensive, but simpler and better integrated with the rest of AWS).
HAProxy can handle tons of connections, but you have to do more things yourself. An ALB can also handle tons of connections, and AWS will do most of the work.
I think HAProxy is more suitable for personal/small projects, because if your project doesn't grow you never have to touch HAProxy. It is set-and-forget, the same as an ALB, but costs less.
You usually won't care about availability zones or disaster tolerance in a personal project, so HAProxy should be easy to configure.
Another consideration: AWS offers a free tier on ALBs, so if your project will run for less than a year the ALB is the way to go.
If you are learning, then the ALB should be considered, because real clients usually love to stick with AWS in all aspects; HAProxy is your call and also your risk (taken just to reduce costs for a company that usually pays much more than that for your salary, so it's not worth it).
I have NGINX set up on Google Cloud Compute Engine using a managed instance group [backed by an instance template].
I simulated CPU load on one of the servers, and that spawned a couple of additional servers, each running NGINX.
So what's the best practice for hosting a website using this?
Do I just create an A record in DNS and point it at the IP address of the original instance [of the group]? That looks problematic, given that the IPs are ephemeral.
Do I reserve a static IP address [in VPC Network]? I tried creating a static IP address and attaching it to the original instance in the group, but when I did, that instance went away, leaving another spawned instance as the new primary.
Is there some load balancer hidden somewhere that I can point an A-record to?
Managed instance groups seem like a great idea, but I'd like to know the best way to set this up so that it won't break unexpectedly in DNS.
You should set up a load balancer to distribute traffic across the instances in your group. To create a load balancer you'll have to set up several components, instance groups being one of them. Check out this example; it uses unmanaged groups, but you can use managed groups instead. Once you've set up a load balancer, I would recommend writing a script in a language of your choice (Python, JS, bash) that automates the process. I would even go further and write a script to tear the load balancer down.
As far as your domain is concerned: during the setup of your load balancer you'll reserve a static IPv4 address (and optionally an IPv6 address). You can then create A/AAAA records that point to these addresses. Finally, make sure you wait ~5-20 minutes after pointing your A/AAAA records at these IPs before you wonder why it isn't working.
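As an illustrative sketch of just the address and DNS part, assuming the google-cloud-compute and google-cloud-dns client libraries; the project, zone, and record names are placeholders:

```python
from google.cloud import compute_v1, dns

PROJECT = "my-project"  # placeholder

# Reserve a global static IPv4 address for the HTTP(S) load balancer.
addresses = compute_v1.GlobalAddressesClient()
addresses.insert(
    project=PROJECT,
    address_resource=compute_v1.Address(name="lb-ipv4"),
).result()  # wait for the reservation to complete
ip = addresses.get(project=PROJECT, address="lb-ipv4").address

# Point an A record at the reserved address in Cloud DNS.
zone = dns.Client(project=PROJECT).zone("my-zone", "example.com.")  # placeholders
change = zone.changes()
change.add_record_set(zone.resource_record_set("www.example.com.", "A", 300, [ip]))
change.create()
```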
In my project I have two instances (on ECS) that run a Node.js app. Both of them are identical (just for HA purposes), use cookies, and sit behind a load balancer. The problem is that the instances don't share sessions: when I log in on the first instance and then go back, the load balancer sometimes switches me to the second instance, which doesn't have any session data (the cookie was generated by the first instance), and I need to log in again. I know there is an option to force the two instances to share sessions, but that approach requires changes to the app code. So instead I would like to force my load balancer to keep using the instance it chose the first time, until the user finishes their job and logs off (or closes the browser). Is that possible?
You can enable sticky sessions on your target groups. To do this:
In the Amazon EC2 console, go to Target Groups under LOAD BALANCING.
Select the target group and go to the Description tab.
Click Edit attributes and enable Stickiness.
Set the duration and save.
These steps may differ slightly if you have a Classic Load Balancer. Read more here and here.
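Given that your Node app already issues its own session cookie, note that ALB target groups also support application-based stickiness, keyed off your cookie rather than an LB-generated one. A boto3 sketch; the ARN is a placeholder, and "connect.sid" is just the express-session default cookie name:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Stickiness keyed off the application's own session cookie.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:region:account:targetgroup/node-tg/...",  # placeholder
    Attributes=[
        {"Key": "stickiness.enabled", "Value": "true"},
        {"Key": "stickiness.type", "Value": "app_cookie"},
        {"Key": "stickiness.app_cookie.cookie_name", "Value": "connect.sid"},  # assumed cookie name
        {"Key": "stickiness.app_cookie.duration_seconds", "Value": "86400"},
    ],
)
```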
If I have an ECS cluster with N distinct websites running as N services on said cluster, how do I go about setting up the load balancers?
The way I've done it currently is, for each website X:
I create a new target group spanning all instances in the cluster
I create a new application load balancer
I attach the ALB to the service using the target group
It seems to work... but I want to make sure this is the correct way to do it.
Thanks!
The way you are doing it is of course one way to do it, and it's how most people accomplish this.
Application Load Balancers also support two other types of routing: host-based and path-based.
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#host-conditions
Host-based routing lets you route based on the incoming Host header. So, for instance, if you have website1.com and website2.com, you can send them both through the same ALB and route accordingly.
http://docs.aws.amazon.com/elasticloadbalancing/latest/application/load-balancer-listeners.html#path-conditions
Similarly, you can do the same thing with the path. If your websites were website1.com/site1/index.html and website1.com/site2/index.html, you could put both of them behind the same ALB and route accordingly; a sketch of a host-based rule follows.
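A host-based rule in boto3 looks much like a path-based one, just with a different condition field; the ARNs are placeholders:

```python
import boto3

elbv2 = boto3.client("elbv2")

# Route requests whose Host header matches website2.com to its own target
# group; website1.com can be the listener's default action or another rule.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:region:account:listener/app/shared-alb/...",  # placeholder
    Priority=20,
    Conditions=[{"Field": "host-header", "Values": ["website2.com", "www.website2.com"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:region:account:targetgroup/website2-tg/...",  # placeholder
    }],
)
```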