I was wondering if anyone had ever implemented multiple Django webservers pointing to a single database, essentially functioning as a single website via load balancing?
What software did you use for load balancing?
What additional setup/configuration did your Django webservers require?
Did you need to modify your Django code in any way?
On an Amazon EC2 setup, I found AWS's Elastic Load Balancer to be pretty cool (apart from only supporting a single IP address per ELB instance).
The front-end Django boxes just needed their database settings altered to point to a separate database box (i.e., given the database box's IP, which was an internal IP within our EC2 ecosystem). Once the database box was made to listen on that IP and the appropriate port, we were ready to rock.
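For reference, the change is essentially just the HOST/PORT in the Django database settings. A minimal sketch of settings.py (the engine, credentials and internal IP below are placeholders, not the actual values from that setup):

```python
# settings.py on each front-end Django box -- a minimal sketch.
# Engine, credentials and the internal IP are placeholders; substitute your own.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",  # or mysql, etc.
        "NAME": "mysite",
        "USER": "mysite_user",
        "PASSWORD": "change-me",
        "HOST": "10.0.1.50",  # internal EC2 IP of the dedicated database box
        "PORT": "5432",       # the port the database box was made to listen on
    }
}
```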
Related
I have a website with a main domain and subdomains (a different subdomain for each country), e.g. mysite.com (the main domain), country-a.mysite.com (for country A), country-b.mysite.com (for country B). NOTE: each country has independent users/data and is linked to a separate database.
Now I'm managing them on one EC2 instance, where I have a subfolder for each country and point each subdomain to it using Route53. They are working fine.
But now I want to make them scalable as I'm expecting more traffic. What is the best practice for such a scenario?
Is it possible to get another EC2 instance, clone all the subfolders, and introduce a load balancer to handle the traffic between these 2 instances? I mean, when users from country A and B hit the load balancer, will the load balancer handle it properly, direct each user to the right subfolder on these 2 instances, and manage the traffic?
If yes, how should I configure Route53?
How does the load balancer handle user sessions? I mean, say the first time a user hits the load balancer it directs the user to the 1st instance, and a later request from the same user hits the 2nd instance. If a session is created on the 1st instance, will that session data be available on the 2nd instance?
Also I wonder how I can manage the source code on these instances. I mean, if I want to update the code, do I have to update these 2 instances separately? Or is there an easier way where I upload the files to one instance and they get cloned to the other instances?
BTW, my website is built using the Laravel framework and Postgres.
I'm new to load balancers, please help me find the perfect solution.
If yes, how should I configure Route53?
There is nothing you should be doing in R53. It's the load balancer (LB) that distributes traffic among your instances, not R53. R53 will just direct traffic to the LB, nothing else.
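For example, if you manage DNS programmatically, an alias record pointing each subdomain at the LB is all R53 needs. A boto3 sketch (both hosted zone IDs and the LB DNS name below are placeholders):

```python
import boto3

route53 = boto3.client("route53")

# UPSERT an alias A record: country-a.mysite.com -> the load balancer.
route53.change_resource_record_sets(
    HostedZoneId="ZEXAMPLE123",  # your mysite.com hosted zone (placeholder)
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "country-a.mysite.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": "ZEXAMPLELB456",  # the LB's canonical hosted zone ID
                    "DNSName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
                    "EvaluateTargetHealth": False,
                },
            },
        }]
    },
)
```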
How does the load balancer handle user sessions?
It does not handle them. You could enable sticky sessions on your target group (TG) so that the LB tries to "maintain state information in order to provide a continuous experience to clients".
However, a better solution is to make your instances stateless. This means that all session/state information for your application is kept outside of the instances, e.g. in DynamoDB, ElastiCache or S3. This way you make your application scalable and eliminate the problem of keeping track of session data stored on individual instances.
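The idea is framework-agnostic: every instance reads and writes session data in the same external store, so it no longer matters which instance the LB picks. A minimal Python sketch against a Redis (ElastiCache) endpoint, with a hypothetical endpoint name -- in Laravel you would get the same effect by pointing the session driver at the same Redis cluster or database:

```python
import json
import uuid

import redis

# Hypothetical ElastiCache (Redis) endpoint -- shared by all instances.
store = redis.Redis(host="my-sessions.abc123.cache.amazonaws.com", port=6379)

def save_session(data, ttl_seconds=1800):
    """Write session data to the shared store and return the session id."""
    session_id = str(uuid.uuid4())
    store.setex(f"session:{session_id}", ttl_seconds, json.dumps(data))
    return session_id

def load_session(session_id):
    """Any instance can read the session back, regardless of which one wrote it."""
    raw = store.get(f"session:{session_id}")
    return json.loads(raw) if raw else None
```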
Also I wonder how I can manage the source code on these instances. I mean, if I want to update the code, do I have to update these 2 instances separately?
Yes. Your instances should be identical. Usually CodeDeploy is used to ensure smooth and reproducible updates across a number of instances.
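As a sketch of what that looks like, kicking off a CodeDeploy deployment pushes one revision to every instance in the deployment group in one step (application, group and bucket names below are hypothetical):

```python
import boto3

codedeploy = boto3.client("codedeploy")

# Deploy the same revision to every instance in the deployment group at once.
# Application name, deployment group and S3 location are placeholders.
codedeploy.create_deployment(
    applicationName="my-laravel-app",
    deploymentGroupName="production-instances",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "my-deploy-bucket",
            "key": "releases/app-v42.zip",
            "bundleType": "zip",
        },
    },
)
```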
I have NGINX setup on Google Cloud Compute Engine using a managed instance group setup [powered by managed instance templates].
I simulated CPU load on one of the servers and that spawned a couple of additional servers, each running NGINX.
So what's the best practice for hosting a website using this?
Do I just create an A-record in DNS and point it to the IP address of the original instance [of the group]? Looks like this would be problematic given that the IPs are ephemeral?!
Do I reserve a static IP address [in VPC Network]? I tried to create a static IP address and attach it to the original instance in the group, but when I did that, the said instance went away leaving another spawned instance as the new primary instance?!
Is there some load balancer hidden somewhere that I can point an A-record to?
Managed instance groups seem like a great idea, but would like to know the best way to set it up that will not break unexpectedly in DNS.
You should set up a load balancer to distribute traffic across the instances in your group. To create a load balancer, you'll have to set up several components, instance groups being one of them. Check out this example. This uses unmanaged groups, but you can use managed groups instead. Once you've set up a load balancer, I would recommend creating a script in a language of your choice (Python, JS, bash) that automates this process. I would even go further and write a script to tear down your load balancer.
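As a rough example of such a script, the steps for a basic HTTP load balancer in front of a managed instance group can be driven from Python via the gcloud CLI. This is only a sketch -- the resource names, zone and flags are placeholders and may need adjusting for your project:

```python
import subprocess

def run(cmd):
    """Run a gcloud command and fail loudly if it errors."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Reserve the static IP that your A record will eventually point to.
run(["gcloud", "compute", "addresses", "create", "lb-ipv4-1",
     "--ip-version=IPV4", "--global"])

# 2. Health check and backend service wired to the managed instance group.
run(["gcloud", "compute", "health-checks", "create", "http",
     "http-basic-check", "--port=80"])
run(["gcloud", "compute", "backend-services", "create", "web-backend-service",
     "--protocol=HTTP", "--health-checks=http-basic-check", "--global"])
run(["gcloud", "compute", "backend-services", "add-backend", "web-backend-service",
     "--instance-group=my-managed-group", "--instance-group-zone=us-central1-a",
     "--global"])

# 3. URL map -> HTTP proxy -> forwarding rule on the reserved address.
run(["gcloud", "compute", "url-maps", "create", "web-map",
     "--default-service=web-backend-service"])
run(["gcloud", "compute", "target-http-proxies", "create", "http-lb-proxy",
     "--url-map=web-map"])
run(["gcloud", "compute", "forwarding-rules", "create", "http-content-rule",
     "--address=lb-ipv4-1", "--global",
     "--target-http-proxy=http-lb-proxy", "--ports=80"])
```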
As far as your domain is concerned, during the setup of your load balancer you'll have to create a static IPv4 address and, optionally, an IPv6 address. You can then create A/AAAA records that point to these addresses. Finally, make sure you wait ~5-20 minutes after you've pointed your A/AAAA records at these IPs before you wonder why it's not working.
Hi,
I have read a lot of blogs and tutorials, but I cannot figure out how to carry out performance testing on a cookie-based sticky web application which sits behind a reverse-proxy load balancer. I have 3 backend application servers serving the same instance of a shopping cart. A load balancer sits in front of them and directs the traffic.
Problem: when I send HTTP requests for performance analysis, the load balancer (which tracks the client IP through a cookie) redirects the HTTP requests to the same backend server they were assigned to. I have the option of using IP spoofing, but it won't work when the backend servers are distributed across a WAN rather than a LAN. Moreover, each backend server has its own public IP address and sits behind a firewall.
Question: Is there a way JMeter can be configured to load test in this scenario, or is there another, better solution?
I'd much appreciate your thoughts and contributions.
Regards
Here are a few possible workarounds:
Point different JMeter instances directly to different backend hosts bypassing the load balancer.
Use Distributed Testing, having JMeter nodes somewhere in the cloud, e.g. Amazon Micro Instances are free. You can use the JMeter ec2 Script to simplify the installation, configuration and execution.
Try using DNS Cache Manager, it enables individual DNS resolution for each JMeter thread.
I'm currently upscaling from 1x EC2 server to:
1xLoad Balancer
2xEC2 servers
I have quite a lot of customers, each running our service on their own domain.
We have a web front end and an admin interface and use a lot of caching. When something is changed in the admin part, the server calls e.g. customer.net/cacheutil.ashx?f=delete&obj=objectname to remove the object across domains.
With the new setup, I don't know how to do this with multiple servers, ensuring that the cached objects are deleted on both servers (or more, if we choose to launch more).
I think it is a "bit much" to require our customers to add e.g. "web1.customer.net", "web2.customer.net" and "customer.net" to point at 3 different DNS CNAMEs, since they are not that IT-experienced.
How does anyone else do this?
When scaling horizontally, it is recommended to keep your web servers stateless. That is, do not store data on a specific server. Instead, store the information in a database or cache that can be accessed by all servers (e.g. DynamoDB, ElastiCache).
Alternatively, use the Sticky Sessions feature of the Elastic Load Balancing service, which uses a cookie to always redirect a user's connection back to the same server.
See documentation: Configure Sticky Sessions for Your Load Balancer
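If you go the sticky-sessions route, it can be enabled with two API calls. A boto3 sketch against a Classic Load Balancer (the load balancer and policy names are placeholders):

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Create a load-balancer-generated cookie stickiness policy...
elb.create_lb_cookie_stickiness_policy(
    LoadBalancerName="my-load-balancer",   # placeholder name
    PolicyName="my-sticky-policy",
    CookieExpirationPeriod=3600,           # seconds; omit for session-length cookies
)

# ...and attach it to the listener on port 80.
elb.set_load_balancer_policies_of_listener(
    LoadBalancerName="my-load-balancer",
    LoadBalancerPort=80,
    PolicyNames=["my-sticky-policy"],
)
```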
I have an application on a Windows Server EC2 instance with a SQL Server database.
What I would like to do is add a load balancer so the application won't fail due to overload.
I have a couple of questions that I'm not certain about.
I believe that I need to create an image of my current instance and duplicate it. My problem is that my database lives on my current instance, so it would duplicate my database as well.
Do I need another instance just for my database?
If yes, then it means that I need a total of 3 instances. 2 for the application and 1 for the database.
In this case I need to change my application to connect to the new database instance instead of the current database.
After all that happens I need to add a load balancer.
I hope I made myself clear.
I would recommend using RDS (http://aws.amazon.com/rds/) for this. This way you don't have to worry about the database server and just host your application server on EC2 instances. Your AMI will then contain just the application server and thus when you scale up you will be launching additional app servers only and not database servers.
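The change on the application side is then just the connection string, which points at the RDS endpoint instead of the local database. As a language-neutral illustration, here is a sketch in Python with pyodbc and a hypothetical RDS SQL Server endpoint -- in your .NET app the equivalent edit goes in the connection string:

```python
import pyodbc

# Hypothetical RDS SQL Server endpoint and credentials -- substitute your own.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=mydb.abc123xyz.us-east-1.rds.amazonaws.com,1433;"
    "DATABASE=myapp;UID=appuser;PWD=change-me"
)
cursor = conn.cursor()
cursor.execute("SELECT @@VERSION")  # quick sanity check that the app reaches RDS
print(cursor.fetchone()[0])
```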
Since you are deploying a .NET application, I would also recommend taking a look at Elastic Beanstalk (http://aws.amazon.com/elasticbeanstalk/) since it will really help to make auto scaling much easier and your solution will scale up/down as well as self-heal itself.
As far as the load balancer is concerned, you can either manually update your load balancer with the new instances of your application server, or you can let your auto scaling script do it for you. If you go for Elastic Beanstalk, it will take care of adding/removing instances to/from your Elastic Load Balancer for you on its own.
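For the manual route, registering a newly launched app server with the load balancer is a single API call (Elastic Beanstalk / Auto Scaling would do this step for you). A boto3 sketch with placeholder names, assuming a Classic Load Balancer:

```python
import boto3

elb = boto3.client("elb")  # Classic Load Balancer API

# Register a freshly launched app server instance with the load balancer.
# Load balancer name and instance id are placeholders.
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-load-balancer",
    Instances=[{"InstanceId": "i-0abc1234def567890"}],
)
```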