I have an application on a Windows Server EC2 instance, with SQL Server for our database.
What I would like to do is add a load balancer so the application won't fail under heavy load.
I have a couple of questions that I'm not certain about.
I believe I need to create an image of my current instance and duplicate it. My problem is that my database lives on that same instance, so it would be duplicated as well.
Do I need another instance just for my database?
If so, that means I need a total of three instances: two for the application and one for the database.
In that case I need to change my application to connect to the database on the new instance instead of the current, local one.
After all of that is done, I need to add a load balancer.
I hope I made myself clear.
I would recommend using RDS (http://aws.amazon.com/rds/) for this. That way you don't have to worry about the database server and can host just your application on EC2 instances. Your AMI will then contain only the application server, so when you scale up you will be launching additional app servers only, not database servers.
Since you are deploying a .NET application, I would also recommend taking a look at Elastic Beanstalk (http://aws.amazon.com/elasticbeanstalk/), since it makes auto scaling much easier and your solution will scale up/down as well as self-heal.
As far as the load balancer is concerned, you can either manually update it with the new instances of your application server or let your auto scaling setup do it for you. If you go with Elastic Beanstalk, it will take care of adding/removing instances to/from your Elastic Load Balancer on its own.
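If you do update the load balancer manually, that step is easy to script. Here is a minimal boto3 sketch, assuming a classic ELB; the load balancer name and instance ID are placeholders:

```python
# Minimal sketch: manually registering a new app server with a classic
# ELB via boto3. The load balancer name and instance ID are placeholders.
import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Register a freshly launched app server so it starts receiving traffic
# once it passes the ELB health check.
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-app-elb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],
)
```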
I have an EC2 instance which hosts a Windows service, a .NET API, and a simple .NET website. There's also the added complication of a Route 53 endpoint pointing to it and an HTTPS cert allocated via AWS Certificate Manager. Yes, it's a lot of apps on a single instance, and I will look at separating them later. I got a message from AWS saying that, due to the underlying infrastructure becoming unstable, they'll need to terminate the instance in a week.
Lots of options come to mind, none of which I've tried before or know much about. These include spinning up another instance and backing up and restoring this instance onto the new one, or using AWS Elastic Beanstalk or similar to automate the infrastructure setup and code deployment. Which of these (or another) options is most feasible and quickest to get working, and where should I start looking?
If it's just the instance, I'd go for an EBS snapshot and then restore the EC2 instance from it. Finally, swap the IP in Route 53.
It's a relatively quick and rather straightforward process that's well documented by AWS, and there are loads of how-tos on the web too.
Here's where to start:
Create Amazon EBS Snapshot
and here's how to restore it.
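If you'd rather script the flow, here is a rough boto3 sketch: snapshot the volume, then (after launching the replacement instance) swap the DNS record in Route 53. Every ID, name, and IP below is a placeholder:

```python
# Rough sketch of the snapshot + Route 53 swap with boto3.
# All IDs, names, and IPs are placeholders.
import boto3

ec2 = boto3.client("ec2")
route53 = boto3.client("route53")

# 1. Snapshot the instance's EBS volume.
ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="Backup before instance retirement",
)

# 2. Launch a replacement instance from the snapshot (via an AMI), then...
# 3. Point the Route 53 record at the new instance's IP.
route53.change_resource_record_sets(
    HostedZoneId="Z0000000000000",
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "app.example.com.",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "203.0.113.10"}],
            },
        }]
    },
)
```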
On the other hand, you could go for a .NET app on Elastic Beanstalk, but that requires a bit more work to set up the environment and prepare the app for deployment.
More on how to create and deploy .NET on Elastic Beanstalk.
I need to host a PHP Laravel application on Google Cloud Compute Engine with autoscaling and load balancing. I set up and configured the following:
Created an instance template with a startup script that installs Apache2 and PHP, clones my project's Git repository, configures the Cloud SQL proxy, and applies all the settings required to run this Laravel project.
Created an instance group with an autoscaling rule that creates additional instances when CPU utilization reaches a certain percentage.
Created a Cloud SQL instance.
Created a Storage bucket; all of my application's public content, such as images, is uploaded to the bucket and served from there.
Created a load balancer, assigned a public IP to it, and configured its frontend and backend correctly.
With the above configuration everything works fine: when an instance reaches the defined CPU percentage, autoscaling creates another instance and the load balancer starts routing traffic to it.
The issue is that the environment setup (the instance template's startup script) takes about 20-30 minutes before a newly created instance is ready to serve content. But as soon as the load balancer detects that the new machine is up and running, it starts routing traffic to the new VM instance even though it isn't ready to serve content yet.
As a result, when the load balancer routes traffic to a machine that isn't ready, it obviously returns 404 and other errors.
How can I prevent this? Is there any way for an instance created by autoscaling to signal the load balancer once it is ready to serve content, so that the load balancer only routes traffic to the newly created instance after that point?
How do I prevent the Google Cloud load balancer from forwarding traffic to a newly created, autoscaled instance that isn't ready yet?
Google load balancers use the Cool Down parameter to determine how long to wait for a new instance to come online and be 100% available. However, this means that if your instance is not available within that period, errors will be returned.
The above answers your question. However, taking 20 or 30 minutes for a new instance to come online defeats a lot of the benefits of autoscaling. You want instances to come online immediately.
Best practice is to create an instance and configure it with all the required software, applications, etc. Then create an image of this instance, and in your template specify that image as your baseline image. Now your instances will not have to wait for software downloads, installs, configuration, etc. All you need to do is run a script that does the final configuration, if needed, to bring an instance online. Your goal should be 30-180 seconds from launch to being online and running for a new instance. Rethink / redesign anything that takes longer than 180 seconds. This will also save you money.
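If you want to script the image-baking step, here is a minimal sketch assuming the google-cloud-compute Python client; the project, zone, disk, and image names are all placeholders:

```python
# Minimal sketch: bake a configured instance's boot disk into a reusable
# image, assuming the google-cloud-compute client library. All names
# below are placeholders.
from google.cloud import compute_v1

def create_baseline_image(project: str, zone: str, disk: str, image_name: str) -> None:
    client = compute_v1.ImagesClient()
    image = compute_v1.Image(
        name=image_name,
        # The fully configured boot disk to capture.
        source_disk=f"projects/{project}/zones/{zone}/disks/{disk}",
    )
    operation = client.insert(project=project, image_resource=image)
    operation.result()  # wait until the image is ready

create_baseline_image("my-project", "us-central1-a", "configured-boot-disk", "laravel-baseline-v1")
```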
John Hanley's answer is pretty good; I'm just completing it a bit.
You should take a look at Packer to create your preconfigured Google images; this will help when you need to add new configuration or do updates.
The cool-down is a good mechanism, but in your case you can't really be sure that your installation won't sometimes take a bit longer due to updates: if you run apt-get update && apt-get upgrade at instance startup to stay up to date, it will only take more and more time...
Load balancers should normally have a health check configured and should not route traffic unless the instance is detected as healthy. In your case, since you have Apache2 installed, I suppose you have a health check on port 80 or 443 (depending on your configuration) on a /healthz path.
A way to use the health check correctly would be to create a specific vhost for it: add a fake domain in the health check, say health.test, and create a vhost listening for health.test that returns a 200 response on the /healthz path.
This way you don't have to change the rest of your config; just enable the health vhost last, so the load balancer doesn't start routing traffic before the server is really up...
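To make the idea concrete, here is a toy Python illustration of what the health vhost achieves: a /healthz endpoint that is only brought up as the very last step of the startup script, so the health check cannot pass early. The port and path are just examples:

```python
# Toy illustration of a readiness endpoint: start it only once the
# startup script has fully finished, so the load balancer's health
# check cannot pass early. Port and path are examples.
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b"ok")
        else:
            self.send_response(404)
            self.end_headers()

# Run as the *last* line of the startup script: until this server is up,
# health checks fail and the load balancer keeps traffic away.
HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```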
I'm building a Sails app that uses socket.io, and I see that Sails offers a method for using multiple servers via Redis:
http://sailsjs.org/documentation/concepts/realtime/multi-server-environments
Since I will be placing the app on AWS, preferably with an ELB (Elastic Load Balancer) and an Auto Scaling group with multiple EC2 instances, I was wondering how I can handle this so it doesn't need a separate Redis instance.
Maybe we can use AWS ElastiCache? If so, how would this be done?
Now that AWS has released the new ALB (Application Load Balancer), which supports WebSockets, could this be used to help simplify things?
Thanks in advance
Updates on the use cases in the application:
Allow end users to update data dynamically from their own dashboard, and display analytics/stats in real time to an administrator.
Application statuses change based on specific timings, e.g. at a given start date/time the app allows users to update data.
Regarding your first question, you don't want to run Redis on the same servers that Sails is running on, especially if you are using Auto Scaling. The Redis server needs to be a separate server that won't disappear if your environment experiences a "scale-in" event. So Redis is going to have to live on a separate "server" somewhere.
ElastiCache is just separate EC2 instances running Redis, where AWS handles most of the management for you, to the point that you can't even SSH into the instance. It's similar to how RDS works. ElastiCache will certainly work for your scenario. You might also want to look at the third-party service RedisLabs, which also manages Redis instances on AWS for you.
Regarding your second question, an Application Load Balancer will have no bearing on your Redis usage. It will, however, bring actual support for WebSockets, which it sounds like you are using. So yes, you should be using an ALB instead of an ELB.
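Your Sails configuration itself is JavaScript, but just to sketch the pattern in Python terms: with python-socketio you point the message queue at the ElastiCache Redis endpoint so every app server shares socket events. The endpoint below is a made-up placeholder:

```python
# Python analogue of the Sails multi-server setup: python-socketio with
# a Redis message queue pointed at an ElastiCache endpoint (placeholder).
import socketio

mgr = socketio.RedisManager(
    "redis://my-cache.abc123.use1.cache.amazonaws.com:6379"
)
sio = socketio.Server(client_manager=mgr)  # events now fan out via Redis
```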
I am developing an application service based on WSO2 AS. My intention is to deploy the application in an AS cluster in order to cope with high-volume traffic.
The cluster should be dynamic, so it can scale up or down as traffic changes.
Also, a user's service might persist on one of the instances for quite some time; in case of failure, the user's service should be restored on a peer instance via the backup-and-restore mechanism of an object archive (database).
So, the challenge is:
I need to tell the load balancer something about the instance on which the user's service persists, so that it will always route the same user's requests to the same instance in the cluster. In case of failure, I could then update the load balancer with the new instance on which the user's service has been restored.
Preferably it would be something that can be generated dynamically by an application server instance, is accessible in the program environment, and is understood and used by the load balancer to route requests...
Does anyone have any ideas?
Thanks a lot.
After googling around for some time, I found an alternative that WSO2 claims to support (http://wso2.com/products/elastic-load-balancer/).
NGINX Plus also comes with a feature named Session Persistence (https://www.nginx.com/products/session-persistence/), which provides methods for directing the load balancer to route a given client's incoming requests to a specific backend server.
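To make the session-persistence idea concrete, here is a toy Python sketch of the affinity table such a load balancer maintains internally: pin each user to a backend, and re-pin after a failover. The backend names are made up; products like NGINX Plus handle this for you:

```python
# Toy sketch of session affinity: map each user to a backend and
# reassign on failure. Backend names are made up for illustration.
backends = ["as-node-1", "as-node-2", "as-node-3"]
affinity = {}  # user_id -> backend

def route(user_id: str) -> str:
    # First request pins the user to a backend (hash-based here).
    if user_id not in affinity:
        affinity[user_id] = backends[hash(user_id) % len(backends)]
    return affinity[user_id]

def fail_over(user_id: str, restored_on: str) -> None:
    # After the user's service is restored elsewhere, update the mapping
    # so subsequent requests go to the new instance.
    affinity[user_id] = restored_on

print(route("alice"))            # always the same node for alice...
fail_over("alice", "as-node-2")  # ...until a failover re-pins her
print(route("alice"))
```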
I was wondering if anyone had ever implemented multiple Django webservers pointing to a single database, essentially functioning as a single website via load balancing?
What software did you use for load balancing?
What additional setup/configuration did your Django webservers require?
Did you need to modify your Django code in any way?
On an Amazon EC2 setup, I found AWS's Elastic Load Balancer to be pretty cool (apart from only supporting a single IP address per ELB instance).
The front-end Django boxes just needed their database settings altered to point to a separate database (i.e., given the database box's IP, which was an internal IP within our EC2 ecosystem), and once the database box was made to listen on that IP and the appropriate port, we were ready to rock.
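For reference, the Django-side change amounts to editing DATABASES in settings.py; a minimal sketch with placeholder values:

```python
# settings.py on each front-end Django box: point at the shared database
# server's internal IP instead of localhost. All values are placeholders.
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "myapp",
        "USER": "myapp",
        "PASSWORD": "********",
        "HOST": "10.0.1.25",  # internal IP of the dedicated database box
        "PORT": "5432",
    }
}
```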