AWS EC2 Immediate Scaling Up? - amazon-web-services

I have a web service running on several EC2 boxes. Based on the Cloudwatch latency metric, I'd like to scale up additional boxes. But, given that it takes several minutes to spin up an EC2 from an AMI (with startup code to download the latest application JAR and apply OS patches), is there a way to have a "cold" server that could instantly be turned on/off?

Not by using Auto Scaling, at least not instantly in the way you describe. You can make it much faster, however, by baking your own modified AMI that already contains the JAR and the latest OS patches. These AMIs can be generated as part of your build pipeline. In that case, your only real wait time is for the OS and services to start, similar to a "cold" server.
Packer is a tool commonly used for such use cases.
Alternatively, you can manage it yourself by keeping servers switched off and starting them with custom Lambda functions triggered by CloudWatch alarms, as sketched below. But since stopped servers aren't exactly free either (you still pay for their EBS volumes), I would recommend against that for cost reasons.
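As a rough illustration, a Lambda function along these lines could start the stopped servers when a CloudWatch alarm fires (the Role=cold-standby tag used to find them is an assumption; use whatever tagging scheme you have):

```python
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # Find the pre-provisioned standby servers by tag.
    # The Role=cold-standby tag is an assumption; use your own scheme.
    response = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Role", "Values": ["cold-standby"]},
            {"Name": "instance-state-name", "Values": ["stopped"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.start_instances(InstanceIds=instance_ids)
    return {"started": instance_ids}
```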

Before you venture into auto scaling your infrastructure and spend the time and effort, perhaps you should do a little analysis of your traffic pattern day over day, week over week, and month over month, and see whether it's even necessary. Try answering some of these questions:
What was the highest traffic your app has ever handled? How did the servers fare under that traffic? How was the user response time?
When does your traffic ramp up or hit its peak? Some apps get traffic during business hours, others in the evening.
What is your current throughput? For example, say you can handle 1k requests/min and two EC2 hosts average 20% CPU. If requests triple to 3k requests/min, do you see around 60-70% average CPU? That is a good indication that your app's usage is fairly predictable and scales linearly by adding more hosts. But if you've never seen traffic burst like that, there's no point over-provisioning.
Unless you have a Zynga-like application that can see a large amount of traffic at once, better understanding your traffic pattern and throwing in an additional host as insurance may be all you need. I'm making these assumptions as I don't know the nature of your business.
If you do want to auto scale anyway, one solution is to containerize your application with Docker or create your own AMI as others have suggested; it will still take a few minutes to boot new hosts. Another option is to keep hosts on standby and add them to your load balancers using scripts (or Lambda functions) that watch metrics you define, as in the sketch below (I'm assuming your app is running behind load balancers).
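For example, a minimal sketch that starts the standby hosts and registers them with an Application Load Balancer's target group (the ARN and instance IDs are placeholders):

```python
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

# Placeholders -- substitute your own target group and standby hosts.
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/web/abc123"
STANDBY_IDS = ["i-0aaa11122233344455", "i-0bbb11122233344455"]

def bring_standby_online():
    # Start the stopped hosts and wait until they are running,
    # then attach them to the load balancer's target group.
    ec2.start_instances(InstanceIds=STANDBY_IDS)
    ec2.get_waiter("instance_running").wait(InstanceIds=STANDBY_IDS)
    elbv2.register_targets(
        TargetGroupArn=TARGET_GROUP_ARN,
        Targets=[{"Id": instance_id} for instance_id in STANDBY_IDS],
    )
```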
Good luck.

Related

AWS EC2 t3.micro instance sufficiently stable for spring boot services

I am new to AWS and recently set up a free t3.micro instance. My goal is stable hosting of an Angular application with two Spring Boot services. I got everything working, but after a while the Spring Boot services are not reachable anymore. When I redeploy a service, it runs again. The Spring Boot services are packaged as JARs, and after deployment each one is started as a Java process.
I thought AWS guaranteed permanent availability out of the box. Do I need more setup, such as autoscaling, to achieve the desired uptime of the services, or is the t3.micro instance not sufficiently performant, so that I need to upgrade to a stronger instance to avoid the problem?
It depends :)
I think you did the right thing by starting with a small instance type and avoiding over-provisioning in the first place. T3 instance types are generally suited to 'burst' usage scenarios, i.e. your application sporadically needs a compute spike but not a persistent one. T3 instances work on a credit-based system: your instance 'earns' credits while it is idle, and that buffer is available in times of need, but only until it is consumed entirely. Then you need to wait for some time and earn the credits back.
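If you want to check whether credit exhaustion is the culprit, you can pull the CPUCreditBalance metric from CloudWatch, for example with boto3 (the instance ID below is a placeholder):

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Average CPU credit balance over the last six hours;
# the instance ID is a placeholder.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=6),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```

A balance that keeps hitting zero around the times your services become unreachable would point at credit exhaustion rather than anything on the AWS side.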
For your current problem, a first step can be to get an idea of the current usage by going through the 'Monitoring' tab on the EC2 instance details page. This will help you understand whether your needs are more compute-related or I/O-related, and then you can choose an appropriate instance type from:
https://aws.amazon.com/ec2/instance-types
A next step could be to profile your application and better understand its memory and compute utilisation. AWS does guarantee the availability/durability of the underlying resources, but how you consume those resources is an application concern, which AWS does not guarantee or control.
As for your ideas around autoscaling and availability, it again depends on your needs in terms of cost, tolerance for outages in AWS data centres, and so on. For a reliable production setup you could consider them, but they are not the most important thing at first.

AWS Network out

Our web application has 5 pages (Signin, Dashboard, Map, Devices, Notification)
We have done the load test for this application, and load test script does the following:
Signin and go to Dashboard page
Click Map
Click Devices
Click Notification
We have a basic free plan in AWS.
While performing the load test with up to about 100 users, we didn't get any errors. NetworkIn and CPUUtilization seemed normal, but NetworkOut showed 846K.
But when we reached around 114 users, we started getting errors on the map page. During that time, it seems only NetworkOut was high.
We want to know what an acceptable value for NetworkOut is, and if this number is high, whether there is any way to reduce it.
Please let me know if you need more information. Thanks in advance for your help.
You are using a t2.micro instance.
This instance type has CPU limitations: it is good for bursty workloads, but sustained load will consume all the available CPU credits, so it can perform poorly over long periods.
The instance also has limited network bandwidth that might impact the throughput of the server. While all Amazon EC2 instances have limited allocations of bandwidth, the t2.micro and t2.nano have particularly low bandwidth allocations. You can see this when copying data to/from the instance and it might be impacting your workloads during testing.
The t2 family, especially at the low-end, is not a good choice for production workloads. It is great for workloads that are sometimes high, but not consistently high. It is also particularly low-cost, but please realise that there are trade-offs for such a low cost.
See:
Amazon EC2 T2 Instances – Amazon Web Services (AWS)
CPU Credits and Baseline Performance for Burstable Performance Instances - Amazon Elastic Compute Cloud
Unlimited Mode for Burstable Performance Instances - Amazon Elastic Compute Cloud
That said, the network throughput showing on the graphs is a result of your application. While the t2 might be limiting the throughput, it is not responsible for the spike on the graph. For that, you will need to investigate the resources being used by the application(s) themselves.
NetworkOut simply refers to the volume of outgoing traffic from the instance; to reduce NetworkOut, you have to reduce the traffic the instance sends out. So you may need to see which of Click Map, Click Devices, and Click Notification sends the most traffic out of the instance. It may not necessarily be related only to the number of users but to a combination of the number of users and the application module.
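As a starting point for that investigation, you could pull NetworkOut for the load-test window and line the spikes up against the steps in your test script; a minimal boto3 sketch (the instance ID is a placeholder, and Period=300 matches basic five-minute monitoring):

```python
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")

# Total bytes sent per five-minute period during the last hour;
# the instance ID and time window are placeholders.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="NetworkOut",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Sum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Sum"], "bytes")
```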

finding best deployment locations in aws regions

Given that we are on the AWS platform, we need to subscribe to different sources of data located around the world. How can we efficiently determine which region has the lowest latency to some target IP (not our browser)?
There is a service called cloudping which pings from your current browser to AWS regions, but that is not useful here for obvious reasons.
Is there any tool similar to cloudping where we could specify which IP we want to ping?
And a secondary question: I suppose it is possible to spawn instances using the AWS API. Does Amazon charge significant fees if I have a script that spawns a compute instance, does some short work, terminates it, and does this in every single region?
Worst case, we could spawn instances in all regions for a short amount of time and ping all the destinations we are interested in, but that seems like a lot of work for something rather simple... My assumption is that even within one region some instances might end up with significantly better latency than others; a script could spawn instances until the best one is found and terminate the others...
UPDATE
It seems rather easy to spawn instances and execute commands on them, and it shouldn't be hard to terminate them either. Here is a good tool for this. Now the question is: will AWS punish me with bills, and isn't there already a solution for this?
You can certainly launch and terminate Amazon EC2 instances in any region you wish. Amazon will not "punish" you -- the system will simply charge the normal cost for the resources you use.
If you launch an Amazon EC2 instance with the Amazon Linux AMI, then the instance will be charged per-second, so the cost will be very low. For example, you could use a t2.micro instance for a few cents per hour (charged per second).
You could then run your own timing test from each region. However, you could probably predict the best performance simply based upon the location of the region (US East, US West, Frankfurt, Sydney, etc).
Also, please note that Ping is not a reliable measure for how your actual application would perform. To obtain the best measure, you should run an application in each region that connects to the 'source of data' you are trying to use. Measure performance as it would be used by your actual application. You might find that the remote service has higher latency than the network, meaning that location would only have a minor impact on performance.
If you use somebody else's timing or somebody else's tool, it will not be as accurate as measuring your actual application doing "real" work.
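For what it's worth, the timing test itself can be as simple as measuring TCP connect latency to the data source from an instance in each candidate region; here is a rough sketch (the endpoint is a placeholder, and a real test should exercise your actual application protocol, as noted above):

```python
import socket
import statistics
import time

# Measure TCP connect latency from this host to the data source.
# Run this on a small instance in each candidate region and compare.
TARGET = ("data.example.com", 443)  # placeholder endpoint
SAMPLES = 20

def connect_ms():
    start = time.perf_counter()
    with socket.create_connection(TARGET, timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

samples = sorted(connect_ms() for _ in range(SAMPLES))
print(f"median={statistics.median(samples):.1f} ms  max={samples[-1]:.1f} ms")
```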

EC2 Architecture design for Website

I have a site that I will be launching soon. Not entirely sure how heavy the traffic will get.
I am using Django+Nginx+Gunicorn+Mysql. There will be support for SSL/HTTPS.
As a starting point, I am thinking of having two micro instances balanced by Elastic Load Balancing.
The MySql database will be on one of the instances. If traffic gets heavy, I might move static files to a CDN. The micro instances serve as front-end servers responsible for only churning out HTML/JSON and serving static files. Static files are mainly CSS/js and several images (not many). I foresee database will be read-heavy and less writes.
Questions:
Assuming the traffic rises to 100k page views per day, will the 2 micro instances suffice?
Do I have to move the database to a separate instance? And what instance type would be good?
What if the traffic is only 1k page views per day?
How many gunicorn processes to run on a micro instance?
In general, what type of metrics will help me determine what kind and how many instances I would need? What is the methodology to decide what kind of architecture I would need?
Thanks a lot!
Completely dependent on how dynamic the site is planning to be. Do users generate content on the service, or is it mostly static? If the former, you're going to get a lot from putting things like avatars and images into S3 and serving them through CloudFront. Same with your static files... keeping your servers stateless will allow you to scale with ease.
At 100k page views a day you will definitely struggle with just micros... you really should only use those in a development environment; they aren't meant to handle things like serving users. I'd use, at a minimum, a single small instance in front of a load balancer. That may sound strange, but it lets you throw in another instance when things get busy without having to mess with Route 53 or risk having your site fail. The statelessness matters here, because otherwise user-generated assets may only be referenceable from one instance and not the other.
At 1k page views I'd still use a small for web serving and another small for MySQL. You can look into RDS, which is great if you're doing this solo: forget about needing to upgrade versions and handle maintenance, backups, etc.
You will also be able to spin up read replicas for peaks with one click. Look into the AWS CLI as well to help automate those tasks. Cron jobs will make it a cinch if you're time-stressed; otherwise OpsWorks, CloudFormation, and Auto Scaling will all help with the above.
Oh, and just as a comparison: an application server of mine running Apache and PHP with APC starts to struggle at about 80 concurrent users. It runs on a small EC2 instance with a small RDS instance (which sits at about 15% utilisation while the application server is going downhill).
Probably not. Micro instances are not designed for heavy production loads. They use a burstable CPU profile. They can run at 2 ECU for a couple of minutes, and then they get locked at 0.1-0.2 ECU. I tend to like c1.medium, but small may be enough for you.
Maybe, as long as they are spread out during the day and not all in a short window.
1-2 per core. Micro only has 1 core.
Every application is different. The best thing to do is run your own benchmarks using tools like ab (Apache Bench)
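If you would rather script it than install ab, a very rough stand-in in Python can give you similar numbers (the URL, request count, and concurrency below are placeholders):

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Very rough stand-in for `ab -n 200 -c 10 <url>`; the URL,
# request count, and concurrency are placeholders.
URL = "http://your-instance.example.com/"
REQUESTS, CONCURRENCY = 200, 10

def fetch(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=10) as response:
        response.read()
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    timings = sorted(pool.map(fetch, range(REQUESTS)))

print(f"mean={1000 * sum(timings) / len(timings):.0f} ms  "
      f"p95={1000 * timings[int(len(timings) * 0.95)]:.0f} ms")
```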
Following the AWS best practices architecture diagram is always a good start.
http://media.amazonwebservices.com/architecturecenter/AWS_ac_ra_web_01.pdf
I strongly advise you to store all your files on Amazon S3 and use Route 53 (or any other DNS if you want) in front of it to distribute the files, because later on, if you decide to use the CloudFront CDN, it will be very easy to change. And just to mention it: using CloudFront as a CDN will increase your cost only a little, not a huge amount.
Whatever the scenario, if we're talking about production you should definitely go for separate instances: at least one EC2 instance for the web tier and one EC2/RDS instance for the database.
If you are a geek and like to get into the nitty-gritty details, create your own infrastructure, with or without an automation tool (Puppet, Chef). Or, if you just want to collect the profit or have scarce resources to take care of everything, you should try Elastic Beanstalk (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_Python_django.html)
Either way, whether you create your own infrastructure or choose Elastic Beanstalk, always execute stress tests to get a better overview of your capacity-planning needs. After you choose your initial environment, stress it using Apache Bench, Siege, or whatever other tool you like.
Hope this helps.
I would suggest using small instances instead of micro, as micro instances often stop responding under heavy load and then require a stop/start. Use S3 for static files, which helps with faster loading, and have a look at CloudFront.
The instance's region also affects how quickly requests are served; if you target a specific geography, create the instance in that region.
Create the database on a separate instance and attach an EBS volume to that instance. Automate a backup script that copies the database files onto the EBS volume to avoid any issues (a sketch is below). The volume can use provisioned IOPS for faster processing over standard storage. AWS services provide a lot of flexibility, but you need scripts running to scale the servers up and down according to your traffic's timing.
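A minimal sketch of such a backup script, run from cron (the mount point is a placeholder, and MySQL credentials are assumed to be configured in ~/.my.cnf):

```python
import subprocess
from datetime import datetime

# Nightly dump onto the attached EBS volume (run from cron).
# The mount point is a placeholder, and MySQL credentials are
# assumed to come from ~/.my.cnf.
BACKUP_DIR = "/mnt/ebs-backups"

def dump_database():
    path = f"{BACKUP_DIR}/mysql-{datetime.utcnow():%Y%m%d-%H%M}.sql.gz"
    dump = subprocess.Popen(
        ["mysqldump", "--single-transaction", "--all-databases"],
        stdout=subprocess.PIPE,
    )
    with open(path, "wb") as out:
        # Compress the dump stream straight to the backup file.
        subprocess.run(["gzip", "-c"], stdin=dump.stdout, stdout=out, check=True)
    dump.stdout.close()
    if dump.wait() != 0:
        raise RuntimeError("mysqldump failed")

if __name__ == "__main__":
    dump_database()
```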
Spot instances can also help in future, as they come cheap when you are scaling up.

Elastic beanstalk auto scaling - how long to bring up a new instance

I'm trying to decide what metric to use as a trigger for EB auto scaling to fire up a new instance. At the moment I'm leaning towards response time, so that if a user doesn't get a response within, say, 4 seconds, another EC2 instance is fired up.
What I'm struggling to find out, however, is how long it takes on average for eb to bring another instance online. I'm just concerned that if it gets to the point where the existing instances aren't coping with the load, are people going to be refused a connection and/or experience an extremely slow website for several minutes until auto scaling detects the problem and brings another instance online?
If anyone has experience of this with an ecommerce solution I would love to hear what auto scaling configuration you find works to ensure a seamless user experience.
It really depends on your application. Generally though, you can expect it to take 5-10 minutes for a new instance to come online, register with the ELB, and begin serving traffic.
Autoscaling isn't really intended for bursting; it works better when you have predictable traffic patterns. But with custom CloudWatch metrics, you can do some pretty cool, predictive things that autoscale based on external factors such as the volume of Twitter mentions, Google Analytics data, or the number of active user sessions.
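As an illustration, publishing such a custom metric is a single CloudWatch call that you could run periodically from your application; the namespace and metric name below are made up:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

def publish_active_sessions(count):
    # Publish a custom metric that a scaling policy's alarm can watch.
    # The namespace and metric name are illustrative.
    cloudwatch.put_metric_data(
        Namespace="MyApp",
        MetricData=[{
            "MetricName": "ActiveUserSessions",
            "Value": float(count),
            "Unit": "Count",
        }],
    )

publish_active_sessions(42)
```

An alarm on that metric can then drive a scaling policy before response times ever degrade.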