I'm trying to decide which metric to use as a trigger for Elastic Beanstalk auto scaling to fire up a new instance. What I'm leaning towards at the moment is response time - so if a user doesn't get a response in, say, 4 seconds, another EC2 instance is fired up.
What I'm struggling to find out, however, is how long it takes on average for Elastic Beanstalk to bring another instance online. I'm just concerned that if it gets to the point where the existing instances aren't coping with the load, people will be refused a connection and/or experience an extremely slow website for several minutes until auto scaling detects the problem and brings another instance online.
If anyone has experience of this with an e-commerce solution, I would love to hear what auto scaling configuration you find works to ensure a seamless user experience.
It really depends on your application. Generally though, you can expect it to take 5-10 minutes for a new instance to come online, register with the ELB, and begin serving traffic.
Autoscaling isn't really intended for bursting; it works better when you have predictable traffic patterns. But with custom CloudWatch metrics, you can do some pretty cool, predictive things that autoscale based on external factors such as volume of Twitter mentions, Google Analytics data, number of active user sessions, etc.
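To make that concrete, here's a minimal sketch of publishing a custom CloudWatch metric that a scaling policy could alarm on, using boto3. The namespace, metric name, and the get_active_session_count() helper are all hypothetical placeholders:

```python
# A minimal sketch of publishing a custom CloudWatch metric that an
# Auto Scaling policy could alarm on. The namespace/metric names and the
# get_active_session_count() helper are hypothetical.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def get_active_session_count():
    # Hypothetical: pull this from your app's session store.
    return 1234

cloudwatch.put_metric_data(
    Namespace="MyApp/Scaling",           # hypothetical namespace
    MetricData=[{
        "MetricName": "ActiveSessions",  # hypothetical metric name
        "Value": get_active_session_count(),
        "Unit": "Count",
    }],
)
```

Run something like this on a schedule, then point a CloudWatch alarm and scaling policy at the metric.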
I have a general question regarding how APIs are scaled. I have a basic RESTful API powered by Django REST framework, and the backend uses RDS for database management. Right now I'm deploying my Django application to a DigitalOcean droplet, but I'm thinking of switching over to EC2 or potentially EKS. Is my understanding correct that I can effectively point my application at the RDS endpoint and spin up several EC2 instances running the same Django application, fronted by an ELB? Would this take care of incoming traffic and the scalability of the Django application?
This isn't exactly a coding question, so I'm not sure if this is the best Stack Exchange site to ask it on.
My two cents here:
I've been using Lambda to serve Django and Flask APIs for quite some time now, and it works great. You don't need to worry about scalability at all, unless there's a chance your API will receive more than 10,000 requests per second (very unlikely in most scenarios). It will be way cheaper than EKS, even cheaper than EC2. I have an app with 400k active users served by an API running on Lambda, and I've never paid more than $25 in invocations.
You can use Zappa (which is exclusively for Python, and which I recommend) or the Serverless framework; they will take care of most of the heavy lifting and make deployment very easy.
But keep in mind that Lambda is not very good for long-running tasks, like cron jobs. If you have crons that take a while to execute, your invocations can get a little expensive if you run them often (Lambdas can run up to 15 minutes, but those 15 minutes will be much more expensive than EC2). Also, the API Gateway in front of the Lambda function has a 30-second timeout, so your requests must be processed before that. If you think your requests will take longer, you will need to make them asynchronous. I think it's a very small price to pay for a full service without having to worry about the infrastructure.
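For the async case, here's a minimal sketch, assuming a second worker function exists (the name "long-task-worker" is hypothetical): the API-facing Lambda hands the long work off with a fire-and-forget invocation so it can respond well inside API Gateway's 30-second timeout.

```python
# A minimal sketch: offload long-running work to a second Lambda so the
# API response returns inside API Gateway's 30-second timeout.
# The "long-task-worker" function name is hypothetical.
import json
import boto3

lambda_client = boto3.client("lambda")

def handler(event, context):
    # Fire-and-forget: InvocationType="Event" returns immediately
    # instead of waiting for the worker to finish.
    lambda_client.invoke(
        FunctionName="long-task-worker",   # hypothetical worker function
        InvocationType="Event",
        Payload=json.dumps({"job": event.get("body")}),
    )
    return {"statusCode": 202, "body": json.dumps({"status": "queued"})}
```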
You are right, but you don't have to limit yourself to EC2 and EKS. You can also look into the ECS and Fargate options. An ELB distributes traffic across the compute resources inside its target group, and for EC2 that target group can be an Auto Scaling group. Also, with RDS you can add read replicas to handle more read traffic independently of the master node.
I have a web service running on several EC2 boxes. Based on the Cloudwatch latency metric, I'd like to scale up additional boxes. But, given that it takes several minutes to spin up an EC2 from an AMI (with startup code to download the latest application JAR and apply OS patches), is there a way to have a "cold" server that could instantly be turned on/off?
Not by using Auto Scaling - at least not instantly, in the way you describe. You could make it much faster, however, by building your own modified AMI image with the JAR and the latest OS patches already in place. These AMIs can be generated as part of your build pipeline. In that case, your only real wait time is for the OS and services to start, similar to a "cold" server.
Packer is a tool commonly used for such use cases.
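If you'd rather not add Packer to your toolchain, the bake step can also be scripted directly against the EC2 API. A minimal boto3 sketch, assuming a builder instance that already has the JAR and patches applied (the instance ID and image name are hypothetical):

```python
# A minimal sketch of baking an AMI from an already-configured builder
# instance; the instance ID and image name are hypothetical.
import boto3

ec2 = boto3.client("ec2")

image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",   # hypothetical builder instance
    Name="web-app-baked-001",           # hypothetical image name
    Description="Pre-baked image with latest app JAR and OS patches",
)
print("New AMI:", image["ImageId"])
```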
Alternatively, you can manage it yourself by keeping servers switched off and starting them with custom Lambda scripts triggered by CloudWatch alarms. But since stopped servers aren't exactly free either (you still pay for their EBS volumes), I would recommend against that for cost reasons.
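If you do go that route anyway, the Lambda itself is small. A minimal sketch, assuming the CloudWatch alarm is wired to this function and the standby instance IDs are known (the IDs are hypothetical):

```python
# A minimal sketch of the custom-Lambda approach: start stopped standby
# servers when a CloudWatch alarm fires. Instance IDs are hypothetical.
import boto3

ec2 = boto3.client("ec2")

STANDBY_INSTANCE_IDS = ["i-0123456789abcdef0"]  # hypothetical IDs

def handler(event, context):
    # A stopped instance still bills for its EBS volumes, which is the
    # cost caveat mentioned above.
    ec2.start_instances(InstanceIds=STANDBY_INSTANCE_IDS)
```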
Before you venture into the journey of auto scaling your infrastructure and spend the time and effort, perhaps you should do a little bit of analysis on your traffic pattern day over day, week over week, and month over month, and see if it's even necessary. Try answering some of these questions.
What is the highest traffic your app has ever handled? How did the servers fare under that load? How was the user response time?
When does your traffic ramp up or hit its peak? Some apps get traffic during business hours, while others get it in the evening.
What is your current throughput? For example, say you can handle 1k requests/min and two EC2 hosts are averaging 20% CPU. If requests triple to 3k requests/min, do you see around 60-70% average CPU? That's a good indication that your app's usage is fairly predictable and can scale linearly by adding more hosts. But if you've never seen traffic burst like that, there's no point over-provisioning.
Unless you have a Zynga-like application where you can see a huge amount of traffic at once, better understanding your traffic pattern and throwing in an additional host as insurance may be more helpful. I'm making these assumptions as I don't know the nature of your business.
If you do want to auto scale anyway, one option would be to containerize your application with Docker or create your own AMI like others have suggested; it will still take a few minutes to boot instances from them. The next option is to keep hosts on standby and add them to your load balancer using scripts (or Lambda functions) that watch metrics you define (I'm assuming your app is running behind a load balancer) - see the sketch below.
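A minimal sketch of that standby-host idea with boto3, assuming a classic ELB named "web-elb" and a known standby instance (both names hypothetical):

```python
# A minimal sketch: start a standby host and register it with a classic
# ELB. The ELB name and instance ID are hypothetical.
import boto3

ec2 = boto3.client("ec2")
elb = boto3.client("elb")

STANDBY = "i-0123456789abcdef0"   # hypothetical standby instance

def bring_standby_online():
    ec2.start_instances(InstanceIds=[STANDBY])
    # Wait until the box is running before putting it in the pool.
    ec2.get_waiter("instance_running").wait(InstanceIds=[STANDBY])
    elb.register_instances_with_load_balancer(
        LoadBalancerName="web-elb",            # hypothetical ELB name
        Instances=[{"InstanceId": STANDBY}],
    )
```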
Good luck.
I have a site that I will be launching soon. Not entirely sure how heavy the traffic will get.
I am using Django + Nginx + Gunicorn + MySQL. There will be support for SSL/HTTPS.
As a starting point, I am thinking of having two micro instances balanced by Elastic Load Balancing.
The MySQL database will be on one of the instances. If traffic gets heavy, I might move static files to a CDN. The micro instances serve as front-end servers responsible only for churning out HTML/JSON and serving static files. Static files are mainly CSS/JS and several images (not many). I foresee the database being read-heavy with comparatively few writes.
Questions:
Assuming the traffic rises to 100k page views per day, will the 2 micro instances suffice?
Do I have to move the database to a separate instance? And what instance type would be good?
What if the traffic is only 1k page views per day?
How many gunicorn processes to run on a micro instance?
In general, what type of metrics will help me determine what kind and how many instances I would need? What is the methodology to decide what kind of architecture I would need?
Thanks a lot!
Completely dependent on how dynamic the site is going to be. Do users generate content towards the service, or is it mostly static? If the former, you'll get a lot out of putting things like avatars and images into S3 and putting CloudFront in front of that. Same with your static files... keeping your servers stateless will allow you to scale with ease.
At 100k page views a day you will definitely struggle with just micros... those really should only be used in a development environment and aren't meant to handle things like serving users. I'd use, at a minimum, a single small instance in front of a load balancer. It may sound strange, but that way you'll be able to throw in another instance when things get busy without having to mess with Route 53 or potentially having your site fail. The stateless point matters here too, as user-generated assets might otherwise only be referenceable from one instance and not the other.
At 1k page views I'd still use a small for web serving and another small for MySQL. Look into RDS, which is great if you're doing this solo - forget about needing to upgrade versions and handle maintenance, backups, etc.
You will also be able to spin up read replicas with one click for peak periods. Look into the AWS CLI as well to help automate those tasks. Cron jobs will make it a cinch if you're time-stressed; otherwise OpsWorks, CloudFormation, and Auto Scaling will all help with the above.
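The replica step can also be scripted instead of one-clicked. A minimal boto3 sketch, with the instance identifiers and class as hypothetical placeholders:

```python
# A minimal sketch of creating an RDS read replica programmatically.
# All identifiers here are hypothetical.
import boto3

rds = boto3.client("rds")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-1",      # hypothetical replica name
    SourceDBInstanceIdentifier="mydb-primary",  # hypothetical primary
    DBInstanceClass="db.t3.small",              # pick a class for your read load
)
```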
Oh, and just as a comparison: an application server of mine running Apache and PHP with APC starts to struggle at about 80 concurrent users. It runs on a small EC2 instance with a small RDS (which sits at about 15% utilization at the point where the application server is going downhill).
Probably not. Micro instances are not designed for heavy production loads. They use a burstable CPU profile: they can run at 2 ECU for a couple of minutes, and then they get locked at 0.1-0.2 ECU. I tend to like c1.medium, but a small may be enough for you.
Maybe, as long as they are spread out during the day and not all in a short window.
1-2 per core. Micro only has 1 core.
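As a concrete starting point, a minimal gunicorn.conf.py sketch following that rule of thumb (the bind address assumes Nginx proxying locally; on a single-core micro the formula works out to 3 workers):

```python
# A minimal gunicorn.conf.py sketch following the "1-2 workers per core"
# rule of thumb. Launch with: gunicorn -c gunicorn.conf.py yourproject.wsgi
# (the project name is hypothetical).
import multiprocessing

bind = "127.0.0.1:8000"   # Nginx proxies to this address
workers = multiprocessing.cpu_count() * 2 + 1
```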
Every application is different. The best thing to do is run your own benchmarks using tools like ab (Apache Bench).
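ab is the usual tool; if you'd rather script the same idea yourself, here's a tiny Python sketch (the target URL is hypothetical, and it needs the requests package installed):

```python
# Not a replacement for ab, just a tiny sketch of the same idea: hammer
# one URL with concurrent requests and report the latency spread.
# The target URL is hypothetical; requires the "requests" package.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://example.com/"   # hypothetical target

def timed_get(_):
    start = time.time()
    requests.get(URL, timeout=10)
    return time.time() - start

with ThreadPoolExecutor(max_workers=20) as pool:
    latencies = sorted(pool.map(timed_get, range(200)))

p50 = latencies[len(latencies) // 2]
p95 = latencies[int(len(latencies) * 0.95)]
print(f"p50={p50:.3f}s  p95={p95:.3f}s")
```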
Following the AWS best practices architecture diagram is always a good start.
http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_web_01.pdf
I strongly advise you to store all your files on Amazon S3 and use Route 53 (or any other DNS if you want) in front of it to serve the files, because later on, if you decide to use the CloudFront CDN, it will be very easy to switch. And just to mention it: using CloudFront as a CDN will only increase your cost a little bit, not a huge amount.
No matter the scenario, if we're talking about production you should definitely go for separate instances: at least 1 EC2 for the web tier and 1 EC2/RDS for the database.
If you are a geek and like to get into the nitty-gritty details, create your own infrastructure, and feel free to use an automation tool (Puppet, Chef) or not. If you just want to collect the profit, or have scarce resources to take care of everything, you should try Elastic Beanstalk (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html).
Either way, whether you create your own infrastructure or choose Elastic Beanstalk, always execute stress tests to get a better overview of your capacity planning needs. After you set up your initial environment, stress it using Apache Bench, Siege, or whatever other tool you may like.
Hope this helps.
I would suggest using small instances instead of micro, as micro instances often stop responding under heavy load and then require a stop/start. Use S3 for static files, which helps with faster loading, and have a look at CloudFront.
The instance's region also matters for serving requests: if you target a specific geographic region, create the instance in that region.
Create the database on a new instance and attach an EBS volume to it. Automate a backup script that copies the database files and stores them on EBS, to avoid losing data. The volume can be provisioned IOPS for faster processing than standard. AWS services provide a lot of flexibility, but you need scripts running to scale the servers up and down according to your schedule.
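Rather than hand-rolled scripts, scheduled Auto Scaling actions can handle the timed scale-up/scale-down. A minimal boto3 sketch, with the group name, times, and capacities all hypothetical:

```python
# A minimal sketch of time-based scaling via scheduled Auto Scaling
# actions. The group name, schedules, and capacities are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")

# Scale out every weekday morning, back in every evening (UTC cron).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",          # hypothetical group name
    ScheduledActionName="morning-scale-out",
    Recurrence="0 8 * * MON-FRI",
    DesiredCapacity=4,
)
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="evening-scale-in",
    Recurrence="0 20 * * MON-FRI",
    DesiredCapacity=2,
)
```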
Spot instances can also help in the future, as they come cheap when you are scaling up.
I have had an EC2 instance working just fine for months (still developing, app not live yet), but I just realized I don't even know how to make my EC2 instance scale up / down depending on traffic.
The sheer number of services offered by Amazon is overwhelming, and I'm very confused.
Initially, I thought I'd just have one instance and Amazon would transparently allocate resources or create identical instances to handle traffic, but it seems my impression was wrong.
My question is: can someone please tell me (in simple words, a bullet list, or by pointing me to a tutorial) how to make my instance automatically grow to handle 100,000 simultaneous users, then automatically shrink back when the surge is done?
Assuming this is possible, can I do this via the AWS control panel? If so, how?
All I can see is micro, small, medium, etc. instances. Each one has limited resources, and it's not clear how to set things up so that Amazon dynamically allocates additional resources to handle traffic spikes (or even gradually grows to keep up with natural traffic growth, for that matter).
Side question: may I assume that Amazon auto-handles DDoS attacks when scaling up? (Meaning rogue traffic would eventually be stopped/slowed down by Amazon, and scaling would only happen for legitimate traffic spikes.) I realize this side question may be really stupid; keep in mind I haven't had my coffee yet :)
This article details how to auto scale using load balancers and EC2: http://kkpradeeban.blogspot.com/2011/01/auto-scaling-with-amazon-ec2.html
For scalability you may also want to look into this article on implementing a pub/sub system for distributed systems: http://www.infoq.com/articles/AmazonPubSub
You can't automatically change the instance type (m1.small, m1.large, etc.) in response to changing load. You can, however, have AWS automatically create new instances as your load increases, and tear them down when load subsides.
I believe this article will help you: http://aws.amazon.com/autoscaling/.
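To make the moving parts concrete, here is a minimal boto3 sketch of the setup that article describes: a launch configuration, an Auto Scaling group behind a load balancer, and a CPU alarm that triggers a scale-out policy. All names, IDs, and thresholds are hypothetical placeholders:

```python
# A minimal sketch of an Auto Scaling setup: launch configuration,
# Auto Scaling group behind a classic ELB, and a CPU-based scale-out
# policy tied to a CloudWatch alarm. All names/IDs are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")
cloudwatch = boto3.client("cloudwatch")

autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",
    ImageId="ami-0123456789abcdef0",   # hypothetical AMI
    InstanceType="t2.small",
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2,
    MaxSize=20,
    LoadBalancerNames=["web-elb"],     # hypothetical classic ELB
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="scale-out-on-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,               # add two instances per trigger
    Cooldown=300,
)

# Fire the policy when average CPU across the group exceeds 70%
# for three consecutive minutes.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```

Note that the scaling happens by adding instances of the same type, not by making one instance bigger, which is exactly the point made above.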
The company I work for built and hosts a web app used by our customers, and I am interested in creating some kind of external monitoring page (similar to trust.salesforce.com) that users can visit to see the current state of our servers/app. I know there are tons of different 'monitoring' services out there, but I want to create the service myself, to have complete control and customization. Obviously, the service would have to be hosted in a different location and data center than the app itself. One thing I am concerned about is that if I just choose a different host in a different location and that host goes down for any reason (power failure, server failure, or even ISP failure), the monitoring software goes down with it. For this reason, I am thinking of hosting the monitoring app on an Amazon EC2 instance. With their Elastic IP feature, if for some reason the data center where the instance is running fails, I can just create a duplicate instance with the same data (but in a different location) and everything would still work fine.
Does this sound like a feasible plan? For even more security, I was thinking of creating 2 instances in different locations and monitoring from both of them. If one instance fails, the other would still be up. Obviously, one instance has to act as the actual web host for the monitoring page. Is it possible programmatically for one instance to switch the Elastic IP over to itself if it detects that the other instance has failed for any reason?
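On the last question: yes, an instance can grab the Elastic IP programmatically. A minimal sketch with boto3, assuming a VPC Elastic IP identified by its allocation ID and a hypothetical peer health-check URL:

```python
# A minimal sketch of Elastic IP takeover: if the peer's health check
# fails, re-associate the Elastic IP with this instance. The allocation
# ID, instance ID, and health-check URL are all hypothetical.
import urllib.request

import boto3

ec2 = boto3.client("ec2")

ALLOCATION_ID = "eipalloc-0123456789abcdef0"  # hypothetical EIP allocation
MY_INSTANCE_ID = "i-0123456789abcdef0"        # hypothetical (this host)
PEER_URL = "http://peer.example.com/health"   # hypothetical health check

def peer_is_down():
    try:
        return urllib.request.urlopen(PEER_URL, timeout=5).status != 200
    except Exception:
        return True

if peer_is_down():
    # Take over the address; AllowReassociation detaches it from the peer.
    ec2.associate_address(
        AllocationId=ALLOCATION_ID,
        InstanceId=MY_INSTANCE_ID,
        AllowReassociation=True,
    )
```

Run something like this on a cron from each instance; whichever box is healthy ends up holding the address. (As the answers below note, an ELB may be the simpler option.)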
I know there are a lot of different things involved in this question; I'm just looking for feedback regarding ANY of it...
If you've made it this far, thanks for taking the time to read this!
What you are talking about is a complicated solution to a complicated issue. I think you are on the right track with using something like Amazon's EC2 to reduce the chance of your monitoring app going down. You could develop it yourself, but there are a great number of free monitoring solutions out there, like Nagios, that will do everything you are asking for and are highly extensible, so you can spend your time making it look and feel the way you want while leaving the more complicated portions under the hood to software that is tried and tested. The worst thing would be for you to have a bug in your software that shows something as up when it is actually down. Based on what you are talking about doing, I would assume that would be a huge issue.
Instead of using an Elastic IP - which can only be assigned to one instance at a time - consider using an Elastic Load Balancer (http://aws.amazon.com/elasticloadbalancing/), which can route across instances in any of the availability zones. That way AWS manages taking instances in and out of the pool if they become unavailable for some reason, and you don't have to spend time 'moving' the Elastic IP around. It is then easy to point your monitoring CNAME at the ELB hostname.
I think RandomBen's idea of using Nagios on your instances is a good one, because then you don't have to recreate all the functionality Nagios already has. You can then spend your development time setting up the system and customizing the look and feel to your needs.
Also, if you can use MySQL, you should consider RDS (http://aws.amazon.com/rds/), although you will need to pay transfer fees if you have servers outside a region accessing an RDS instance in another region.