What's the meaning of min/max=1 in AWS Auto Scaling?

I am studying AWS, going by an illustration in the AWS documentation.
For a min/max=1 case, what does it imply? It seems like no scaling to me, since min = max.
Thanks for any enlightenment.
UPDATE:
So here is an example use case:
http://www.briefmenow.org/amazon/how-can-you-implement-the-order-fulfillment-process-while-making-sure-that-the-emails-are-delivered-reliably/
Your startup wants to implement an order fulfillment process for
selling a personalized gadget that needs an average of 3-4 days to
produce, with some orders taking up to 6 months. You expect 10 orders
per day on your first day, 1,000 orders per day after 6 months, and
10,000 orders after 12 months. Orders coming in are checked for
consistency, then dispatched to your manufacturing plant for production,
quality control, packaging, shipment, and payment processing. If the
product does not meet the quality standards at any stage of the
process, employees may force the process to repeat a step. Customers are
notified via email about order status and any critical issues with
their orders, such as payment failure. Your base architecture includes
AWS Elastic Beanstalk for your website with an RDS MySQL instance for
customer data and orders. How can you implement the order fulfillment
process while making sure that the emails are delivered reliably?
Options:
A.
Add a business process management application to your Elastic Beanstalk app servers and re-use the RDS
database for tracking order status. Use one of the Elastic Beanstalk instances to send emails to customers.
B.
Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group
with min/max=1. Use the decider instance to send emails to customers.
C.
Use SWF with an Auto Scaling group of activity workers and a decider instance in another Auto Scaling group
with min/max=1. Use SES to send emails to customers.
D.
Use an SQS queue to manage all process tasks. Use an Auto Scaling group of EC2 instances that poll the tasks
and execute them. Use SES to send emails to customers.
The voted answer is C.
Can anyone share their understanding? Thank you very much.

Correct: there will be no scaling out or in when min/max=1, or more generally whenever min = max. This configuration is typically used to keep a service available in case of failures.
Consider the alternative: you launch a plain EC2 instance that's been bootstrapped with some user data script. If the instance has issues, you'll need to stop it and start another yourself.
Instead, you launch using an Auto Scaling group with a Launch Configuration that takes care of bootstrapping instances. If your application server begins to fail, you can just de-register it from the Auto Scaling group, and AWS will take care of bringing up a replacement while you triage the defective one.
Another situation to consider is when you want the option to deploy a new version of an application within the same Auto Scaling group. In this case, create a new Launch Configuration and register it with the ASG, then temporarily increase Max and Desired by 1. AWS will launch the new instance for you, and once it succeeds you can reduce Max and Desired back down to 1. By default, AWS will remove the oldest server, but you can guarantee that the new one stays up by using termination protection.
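For illustration, here is a minimal boto3 sketch of that flow; the group and launch configuration names are placeholders, not anything from the question:

    import boto3

    autoscaling = boto3.client("autoscaling")
    ASG = "my-asg"  # placeholder name

    # Point the ASG at the new Launch Configuration (created beforehand).
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=ASG,
        LaunchConfigurationName="my-launch-config-v2",
    )

    # Temporarily allow a second instance so the new version comes up
    # alongside the old one.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=ASG, MaxSize=2, DesiredCapacity=2
    )

    # ... after verifying the new instance is healthy, shrink back to 1.
    # Protect the new instance from scale-in first if you want a guarantee
    # about which one survives.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=ASG, MaxSize=1, DesiredCapacity=1
    )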

Related

What's the best method for creating a scheduler for running EC2 instances?

I want to create a web app for my organization where users can schedule in advance at what times they'd like their EC2 instances to start and stop (like creating events in a calendar), and those instances will be automatically started or stopped at those times. I've come across four different options:
AWS Datapipeline
Cron running on EC2 instance
Scheduled scaling of Auto Scaling Group
AWS Lambda scheduled events
It seems to me that I'll need a database to store the users' scheduled times for auto-starting and auto-stopping an instance, and that I'll have to poll that database regularly (to make sure I'm acting on the latest schedule). Which of the four options above would be best for my use case?
Edit: Auto Scaling only seems to be for launching and terminating instances, so I can rule that out.
Simple!
Ask users to add a tag to their instance(s) indicating when they should start and stop (figure out some format so they can easily specify Mon-Fri or Every Day)
Create an AWS Lambda function that scans instances for their tags and starts/stops them based upon the tag content
Create an Amazon CloudWatch Event rule that triggers the Lambda function every 15 minutes (or whatever resolution you want)
You can probably find some sample code if you search for AWS Stopinator.
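As a starting point, here is a hedged sketch of what such a stopinator-style Lambda handler could look like; the Schedule tag convention and the office-hours window are assumptions for illustration, not a standard:

    import boto3
    from datetime import datetime, timezone

    ec2 = boto3.client("ec2")

    def handler(event, context):
        # Assumed convention: instances tagged Schedule=office-hours
        # should run between 09:00 and 17:00 UTC.
        hour = datetime.now(timezone.utc).hour
        should_run = 9 <= hour < 17

        reservations = ec2.describe_instances(
            Filters=[{"Name": "tag:Schedule", "Values": ["office-hours"]}]
        )["Reservations"]

        for reservation in reservations:
            for instance in reservation["Instances"]:
                state = instance["State"]["Name"]
                instance_id = instance["InstanceId"]
                if should_run and state == "stopped":
                    ec2.start_instances(InstanceIds=[instance_id])
                elif not should_run and state == "running":
                    ec2.stop_instances(InstanceIds=[instance_id])

A production version would want pagination on describe_instances and a richer schedule format, as step 1 above suggests.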
Take a look at ParkMyCloud if you're looking for an external SaaS app that can help your users easily schedule (or override that schedule) your EC2, RDS, and ASG instances. It also connects to SSO, provides an API, and shows you all of your resources across regions/accounts/clouds. There's a free trial available if you want to test it out.
Disclosure: I work for ParkMyCloud.

Updating an AWS ECS Service

I have a service running on AWS EC2 Container Service (ECS). My setup is a relatively simple one. It operates with a single task definition and the following details:
Desired capacity set at 2
Minimum healthy set at 50%
Maximum available set at 200%
Tasks run with 80% CPU and memory reservations
Initially, I am able to get the necessary EC2 instances registered to the cluster that holds the service without a problem. The associated task then starts running on the two instances. As expected – given the CPU and memory reservations – the tasks take up almost the entirety of the EC2 instances' resources.
Sometimes, I want the task to use a new version of the application it is running. In order to make this happen, I create a revision of the task, de-register the previous revision, and then update the service. Note that I have set the minimum healthy percentage to require 2 * 0.50 = 1 instance running at all times and the maximum healthy percentage to permit up to 2 * 2.00 = 4 instances running.
Accordingly, I expected 1 of the de-registered task instances to be drained and taken offline so that 1 instance of the new revision of the task could be brought online. Then the process would repeat itself, bringing the deployment to a successful state.
Unfortunately, the cluster does nothing. In the events log, it tells me that it cannot place the new tasks, even though the process I have described above would permit it to do so.
How can I get the cluster to perform the behavior that I am expecting? I have only been able to get it to do so when I manually register another EC2 instance to the cluster and then tear it down after the update is complete (which is not desirable).
I have faced the same issue, where the tasks would get stuck with no space to place them. The snippet below, from the AWS documentation on updating a service, helped me make the decision described below.
If your service has a desired number of four tasks and a maximum
percent value of 200%, the scheduler may start four new tasks before
stopping the four older tasks (provided that the cluster resources
required to do this are available). The default value for maximum
percent is 200%.
You need spare cluster resources / container instances available so that the new tasks can start and the older ones can drain.
These are the things I do:
Before doing a service update, add roughly 20% capacity to your cluster. You can use the Auto Scaling group (ASG) command line to raise the desired capacity by about 20% (see the sketch below). This way you will have some additional instances during the deployment.
Once you have the extra instances, the new tasks will start spinning up quickly and the older ones will start draining.
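A rough boto3 sketch of that pre-deployment capacity bump, assuming a placeholder ASG name:

    import math

    import boto3

    autoscaling = boto3.client("autoscaling")

    asg = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=["my-ecs-cluster-asg"]
    )["AutoScalingGroups"][0]

    # Bump desired capacity by ~20% (must stay within the group's MaxSize).
    new_desired = math.ceil(asg["DesiredCapacity"] * 1.2)
    autoscaling.set_desired_capacity(
        AutoScalingGroupName="my-ecs-cluster-asg",
        DesiredCapacity=new_desired,
    )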
But does this mean I will have extra container instances?
Yes, during the deployment you add some instances, and once the older tasks have drained those extra instances will just hang around. The way to remove them is:
Create a low-MemoryReservation alarm (around a 70% threshold in your case) with a longer evaluation period, say 25 minutes, to be reasonably sure the cluster is over-provisioned. Once memory reservation drops because those extra servers sit unused, they can be scaled back in.
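As a sketch, that alarm could be created with boto3 along these lines; the cluster name, threshold, and the scaling-policy ARN it triggers are placeholders:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Fires after ~25 minutes (5 x 5-minute periods) of MemoryReservation
    # staying below 70%, then triggers a scale-in action.
    cloudwatch.put_metric_alarm(
        AlarmName="ecs-memory-reservation-low",
        Namespace="AWS/ECS",
        MetricName="MemoryReservation",
        Dimensions=[{"Name": "ClusterName", "Value": "my-cluster"}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=5,
        Threshold=70.0,
        ComparisonOperator="LessThanThreshold",
        AlarmActions=["arn:aws:autoscaling:region:account:scalingPolicy:placeholder"],
    )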
I have seen this before. If your port mapping assigns a static host port to the container within the task, you need more cluster instances.
This could also happen because there is not enough available memory to meet the memory limit (soft or hard) requested by the container within the task.

How to autoscale EMR task instances

I am using EMR with task instance groups as spot instances. I want to maintain a minimum number of task instances at all times.
That means whenever EMR terminates task instances because the spot price rises above our bid, my application should launch replacement task instances with a slightly higher bid price.
My research:
Use CloudWatch to notify when a threshold is breached, and auto-scale the task instances. But from what I have studied, there is no concept of auto-scaling in EMR.
Use CloudWatch to notify an SQS queue when a threshold is breached, with a service that is always consuming from the queue and expanding the task instances.
Questions
Is there any auto-scaling in EMR? If it is available, my effort reduces to just setting the thresholds and the corresponding task-instance expansion actions.
If you have any other approach to solving this problem, please suggest it.
How Spot Prices Work
When an Amazon EC2 instance is launched with a spot price (including when launched from Amazon EMR), the instance will start if the current spot price is below the provided bid price. If the spot price rises above the bid price, the instance is terminated. Instances are only charged the current spot price.
Therefore, the logic of launching a new spot instance with a "little higher bid price" is not necessary. The instance will always be charged the current spot price, so simply bid as high as you are willing to pay for a spot instance. You will either pay less than the spot price (great!) or your instance will be terminated because the price has gone higher than you are willing to pay (in which case you don't want to pay a "little higher" for the instance).
If you wish to "maintain minimum number of task instances" at all times, then either pay the normal EMR charge (which means the instances won't be terminated) or bid a particularly large price for the spot instances, such as 2 x the normal price. Yes, you might occasionally pay more for instances, but on average your price will be quite low.
If you wish to be particularly sneaky, you could bid up to the normal price for the EC2 instances then, if instances are terminated, launch more task nodes without using spot pricing. That way, your instances won't be terminated and you won't pay more than the normal EC2 price. However, you would have to terminate and replace those instances when the spot price drops, otherwise you are paying too much. That's why it might be better just to provide a high bid price on your spot instances.
Bottom line: Use spot pricing, but bid a high price. You'll get a good price most of the time.
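To make the "bid high" advice concrete, here is a hedged boto3 sketch of adding a spot task instance group to a running cluster; the cluster ID, instance type, count, and bid are illustrative:

    import boto3

    emr = boto3.client("emr")

    # Add a spot task group with a deliberately generous bid so it is
    # unlikely to be outbid; you still only pay the current spot price.
    emr.add_instance_groups(
        JobFlowId="j-XXXXXXXXXXXXX",  # your cluster ID
        InstanceGroups=[{
            "Name": "spot-task-nodes",
            "InstanceRole": "TASK",
            "InstanceType": "m4.large",
            "InstanceCount": 4,
            "Market": "SPOT",
            "BidPrice": "0.50",
        }],
    )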
AWS EMR does not have an autoscaling option available, but you can use a workaround and integrate autoscaling using AWS SQS. This is a rough picture of what you can integrate:
Launch your EMR cluster using spot instances.
Set up an SQS queue and create three triggers: one for a CPU threshold, a second for the EC2 spot instance termination notice, and a third for changing the spot instance bid prices.
So if CPU usage increases, SQS will trigger an event to launch a new instance into the cluster; if there is a spot instance termination notice, SQS will trigger the launch of another instance to balance the load and send an event to change the bid price so another spot instance can be launched. (This is just a rough sketch, but I guess you will understand the logic.)
This is a guide to SQS-based AWS Auto Scaling:
https://docs.aws.amazon.com/autoscaling/latest/userguide/as-using-sqs-queue.html
As has been correctly pointed out, the EMR API provides all the necessary ingredients to 1) collect monitoring data, and 2) programmatically scale the cluster up and down.
Basically, there are two main options to implement autoscaling for EMR clusters:
Autoscaling Loop: A process running on a server that continuously monitors the cluster's current load. Performance metrics (memory, CPU, I/O, etc.) can be collected at regular intervals and stored in a database. Autoscaling rules are evaluated against the performance metrics, and the cluster's task nodes are scaled up or down if required.
Event-Based Autoscaling: Using CloudWatch metrics (e.g., metrics for EMR or EC2), you can programmatically define triggers that are fired under certain conditions (for instance, add nodes if average CPUUtilization of all nodes exceeds 80%).
Both options have their pros and cons. The main advantage of option 2 is that it is a serverless approach (it does not require you to run your own server). Option 1, on the other hand, does require a server, but in return gives you more control to customize the logic of your scaling rules. It also lets you keep searchable records of the history of scaling decisions.
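As a concrete example of the resize step that either option ultimately performs, here is a minimal boto3 sketch; the instance group ID and target count would come from your own monitoring and rules logic:

    import boto3

    emr = boto3.client("emr")

    def resize_task_group(instance_group_id: str, target_count: int) -> None:
        """Set the size of an EMR task instance group; EMR adds or
        removes nodes to converge on the requested count."""
        emr.modify_instance_groups(
            InstanceGroups=[{
                "InstanceGroupId": instance_group_id,
                "InstanceCount": target_count,
            }]
        )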
You could take a look at Themis, an EMR autoscaling framework developed at Atlassian. Themis implements the autoscaling loop discussed in option 1 above. Current features include proactive as well as reactive autoscaling and support for spot/on-demand task nodes; it comes with a Web UI and is very easy to configure.
I have had a similar problem, and I want to share a possible alternative. I wrote a Java tool to dynamically resize an EMR cluster during processing. It might help you; check it out at:
http://www.lopakalogic.com/articles/hadoop-articles/dynamically-resize-emr/
The source code is available on Github

High availability periodic task (cron) on AWS

What is the recommend way/pattern for setting up a High Availability (multiple Availability Zones) periodic task (probably triggered using Cron) on AWS?
I would like to have the software installed on multiple EC2 instances in multiple Availability Zones but only have the task run on a single instance at a time. It doesn't matter which instance.
Before moving to AWS, we used to use database locking in a MySQL instance - only the instance that successfully creates a lock would run.
But there must be a better way on AWS? Particularly if there is no requirement for a database.
Thanks!
nick.
Since I first asked this question, it is now possible to use CloudWatch Events to schedule periodic events. Events can be:
A fixed rate of N minutes/hours/days
Cron expression
The targets can be:
An AWS Lambda function
Post to an SNS topic
Write to an SQS queue
Write to a Kinesis stream
Perform an action on EC2/EBS instance
SQS could then be used to notify a single instance in a cluster of machines in multiple availability zones to perform an action.
There is more information here:
https://docs.aws.amazon.com/AmazonCloudWatch/latest/DeveloperGuide/ScheduledEvents.html
Although it does not include a resilience/availability statement of what happens if an Availability Zone goes down.
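For what it's worth, wiring up such a scheduled rule with a Lambda target might look roughly like this in boto3; the rule name, schedule, and function ARN are placeholders, and the Lambda-side invoke permission (lambda add_permission for events.amazonaws.com) is omitted:

    import boto3

    events = boto3.client("events")

    # Run every 15 minutes; CloudWatch Events cron expressions have six fields.
    events.put_rule(
        Name="periodic-task",
        ScheduleExpression="cron(0/15 * * * ? *)",
    )

    events.put_targets(
        Rule="periodic-task",
        Targets=[{
            "Id": "1",
            "Arn": "arn:aws:lambda:us-east-1:123456789012:function:my-task",
        }],
    )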
One solution that has been suggested to me is to use an Auto Scaling group with both the minimum and maximum number of instances set to 1. This means that if an Availability Zone goes offline, the ASG will cause a new instance to be launched in another zone.
This technique was briefly covered on the Architecting on AWS training course, but I don't know how reliable it is.
Amazon just released the solution to your problem: Beanstalk worker tier periodic tasks:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html#worker-periodictasks
It basically relies on a YAML file listing cron schedules and the URLs to invoke:
To invoke periodic tasks, your application source bundle must include a cron.yaml file at the root level. The file must contain information about the periodic tasks you want to schedule. Specify this information using standard crontab syntax.
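For reference, a minimal cron.yaml might look like this; the task name, URL, and schedule are illustrative:

    version: 1
    cron:
     - name: "process-orders"        # illustrative task name
       url: "/tasks/process-orders"  # POSTed to your worker environment
       schedule: "*/10 * * * *"      # standard crontab syntax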
It really depends on your requirements.
You could post your tasks to an SQS queue and have your instances (possibly an Auto Scaling group spread across different zones) poll that queue. SQS's at-least-once (and generally only-once) delivery semantics could be a problem here if it is critical for you that tasks get executed exactly once. If that's the case, you could use a DynamoDB table and conditional writes. Or, if you're after a full-fledged fault-tolerant solution, you might give Airbnb's Chronos a try.
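A sketch of the DynamoDB conditional-write idea: the first worker to claim a task wins the conditional put, and everyone else backs off. The table and attribute names are made up for illustration:

    import boto3
    from botocore.exceptions import ClientError

    table = boto3.resource("dynamodb").Table("task-locks")

    def try_claim(task_id: str, worker_id: str) -> bool:
        """Return True if this worker is the first to claim the task."""
        try:
            table.put_item(
                Item={"task_id": task_id, "owner": worker_id},
                ConditionExpression="attribute_not_exists(task_id)",
            )
            return True
        except ClientError as err:
            if err.response["Error"]["Code"] == "ConditionalCheckFailedException":
                return False  # another worker got there first
            raise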

Proxy server to manage demo instances on AWS

I manage a group of integration projects, and for many of them we provide an Amazon instance with our product for development and/or demo purposes. To be economical with IT budgets, I wonder if there is proxy software that can measure the traffic to those servers, start the instance on the first request, and shut it down if there is no request for a set time (e.g. 60 min).
Ideally the first request would trigger a page informing the user about the delay, auto-reloading until the instance is up.
I'd also love to see usage statistics by IP, so I can measure the spread of users, how many different IPs, and the time they kept up the instance. But that is secondary.
Is there any such software/service out there? Preferably in FOSS?
If you utilize Auto Scaling and custom CloudWatch metrics, you can use virtually any data you want to decide how to scale. If you have other log sources or application-level code, you won't need a proxy: just something that interprets the data and publishes it to your CloudWatch metric, and then the auto scaling occurs as needed.
Using a t1.micro, you can have one instance handling requests and scale out from there with your Auto Scaling group. Pricing for 1- or 3-year reserved instances is extremely low. You need something to understand incoming volume anyway, so having one always-on instance would be required regardless. With a t1.micro, your operating costs are low and you can scale very granularly.
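As an illustration of the custom-metric approach from the first paragraph, publishing a data point with boto3 might look like this; the namespace, metric, and dimension names are invented for the example:

    import boto3

    cloudwatch = boto3.client("cloudwatch")

    # Publish the number of incoming requests seen by the front instance;
    # an alarm on this metric can then drive scaling (or instance start/stop).
    cloudwatch.put_metric_data(
        Namespace="DemoInstances",
        MetricData=[{
            "MetricName": "IncomingRequests",
            "Dimensions": [{"Name": "Project", "Value": "integration-demo"}],
            "Value": 42.0,
            "Unit": "Count",
        }],
    )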