AWS Scale Out, Scale Up

In AWS, we come across scaling up (moving to a larger instance type, e.g. from t2.small to t2.medium or t2.large) and scaling out (adding more instances, e.g. additional EC2 instances). How are these related to horizontal scaling and vertical scaling? Also, which is preferred for recovery, backups, and volume management when the requirement is to minimize the cost of maintaining the infrastructure?

Scaling up is when you change the instance types within your Auto Scaling Group to a higher type (for example: changing an instance from an m4.large to an m4.xlarge); scaling down is the reverse.
Scaling out is when you add more instances to your Auto Scaling Group, and scaling in is when you reduce the number of instances in your Auto Scaling Group.
When you scale out, you distribute your load and risk, which in turn provides a more resilient solution. Here is an example:
Let's say you have an ASG with 4x m4.xlarge instances. If one fails, you lose 25% of your processing capability. It doesn't matter that these are sizeable instances with a good amount of CPU and RAM; by having bigger instance types but fewer of them, you increase the impact of a failure.
However, if you had 8x m4.large instead, your total compute would be the same as 4x m4.xlarge, but if one instance dies you only lose 12.5% of your resources.
Typically it's better to use more, smaller instances than fewer, larger ones, so you will see that it's more common to "scale out" to meet demand than it is to "scale up".
One last consideration: in order to scale up or scale down you have to stop and restart the instance, so there is a service impact. There is no such impact when you scale in or scale out.
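For illustration, scaling out is a single API call against the group. A minimal boto3 sketch, assuming a hypothetical group name my-asg (not from the original answer):

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Scale out: raise the instance count of the Auto Scaling Group.
# Existing instances keep running; there is no restart involved.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="my-asg",  # placeholder name
    DesiredCapacity=8,              # e.g. grow the group from 4 to 8
    HonorCooldown=False,
)
```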
I hope this helps!

This might help give a better picture of scaling in AWS.
Any application carrying a considerable amount of business logic typically follows a three-tier architecture (client, server, and data storage). The right combination of AWS services can help achieve the scalability goal. Let's focus on each layer individually and come up with an infrastructure plan for scalability.

Related

Increase vCPUs/RAM if needed

I have created an AWS EC2 instance to run a computation routine that works for most cases; however, every now and then a user needs to run a computation that crashes my program due to lack of RAM.
Is it possible to scale the EC2 instance's RAM and/or vCPUs on demand, or when a certain threshold is reached (say, when 80% of RAM is used)? What I'm trying to avoid is keeping an unnecessarily large instance around, scaling resources up only when needed.
It is not possible to adjust the number of vCPUs or the amount of RAM on an existing Amazon EC2 instance.
Instead, you must:
Stop the instance
Change the Instance Type
Start the instance
The virtual machine will be provisioned on a different 'host' computer that has the correct resources matched to the Instance Type.
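A hedged boto3 sketch of that stop / change-type / start sequence (the instance ID and target type are placeholders):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # placeholder

# 1. Stop the instance (EBS volumes and data are preserved).
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# 2. Change the instance type while it is stopped.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.xlarge"},  # placeholder target type
)

# 3. Start it again; it comes back on a host with the new resources.
ec2.start_instances(InstanceIds=[instance_id])
```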
A common approach is to scale the Quantity of instances to handle the workload. This is known as horizontal scaling and works well where work can be distributed amongst multiple computers rather than making a single computer 'bigger' (which is 'Vertical Scaling').
The only exception to the above is burstable performance instances (the T family), which are capable of providing high amounts of CPU but only for limited periods. This is great when you have bursty needs (e.g. hourly processing or spiky workloads) but should not be used when there is a need for consistently high workloads.

AWS ECS scaling based on MemoryReservation

I've been given an AWS environment to look after. It runs ECS on EC2 instances and has scaling configured using ECS MemoryReservation. The system was originally running before cluster auto scaling was made generally available, so it just uses a CloudWatch metric to scale out and scale in. As far as I can work out, it follows a basic AWS design:
The EC2 auto scaling group allows scaling from 1 to 5 instances, with 1 being the desired capacity.
There is 1 cluster service running with 6 tasks configured.
Five of those tasks are configured with a maximum of 2 copies and a desired count of 1; the other is set to a maximum of 1.
The tasks have MemoryReservation (soft limit) figures configured but not Memory (hard limit).
The tasks are primarily running Java.
The highest memory reservation is set at about 200MB and most are around this figure.
The scale out rule is based on MemoryReservation at 85%.
Docker stats shows most of the tasks are running about 300MB and some exceed 600MB.
The instance size has 4GB of RAM.
If the maximum reservation is 2GB, even if the tasks are consuming more like 3GB in reality, am I right in believing that the scale out rule will NEVER be invoked because 2GB is 50% of available RAM? Do I need to increase the memory reservations to something more realistic?
Also, if it is only running a single EC2 instance, am I right in thinking that even if I increased the MemoryReservation figures to something more realistic, it won't spin up a second EC2 instance automatically just because there's no theoretical room to start another task? I just picked this up from different articles I've been reading.
Thanks
After the May 2022 update to Capacity Providers, they still have a gap to fill in memory-based scaling.
As per the OP, "ECS Memory Reservation" no longer seems to even be an option (at least in the web console),
and when creating the Capacity Provider, only the target value is configurable.
There are more details on how this capacity is calculated in this blog, but while it mentions:
This calculation accounts for vCPU, memory, ENI, ports, and GPUs of the tasks and the instances
if you have tasks whose memory consumption does not necessarily grow, but a service with scheduled actions configured to scale tasks (e.g. a different minimum task count at different times of day),
that case will not trigger a scale-out, since the memory on the instances never gets used: the tasks simply do not fit, given their configuration, and you will see errors in the service events like:
service myservice was unable to place a task because no container
instance met all of its requirements. The closest matching
container-instance abc123xxxx has insufficient memory available.
This basically means a scheduled task-scaling change may not happen if the task's memory setting is just big enough that it doesn't fit on the running instances, and the CapacityProviderReservation does not change because the calculation is only done when tasks are in the Provisioned state, which never happens in this case.
Possible workarounds
Decrease the capacity reservation. This basically means "keep spare capacity": by default the target is 100 (%), so ECS tries to use the ASG cluster resources as fully as possible. A target below 100 means the cluster scales out once it is used at that capacity, leaving a margin of spare resources at all times, so new scheduled tasks will fit in as long as the spare is enough (e.g. compare the per-task memory reservation against the cluster memory reservation of all expected running tasks). A boto3 sketch follows this list.
Set up ASG scaling rules that match the service scaling rules.
While possible, this may be prone to problems with timing and auto scaling due to other triggers.
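A sketch of the first workaround: creating a capacity provider whose managed-scaling target is below 100%. The provider name and ASG ARN are placeholders:

```python
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

ecs.create_capacity_provider(
    name="my-capacity-provider",  # placeholder
    autoScalingGroupProvider={
        # placeholder ARN
        "autoScalingGroupArn": "arn:aws:autoscaling:us-east-1:123456789012:autoScalingGroup:uuid:autoScalingGroupName/my-asg",
        "managedScaling": {
            "status": "ENABLED",
            "targetCapacity": 90,  # keep ~10% headroom for scheduled tasks
        },
        "managedTerminationProtection": "DISABLED",
    },
)
```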
A few things:
1. Cluster auto scaling is usually just the term ECS uses for "an Auto Scaling Group that launches instances into the cluster", and it sounds like that's what you are currently using. Capacity Providers are a newer feature where ECS manages the ASG more directly, which might be the newer feature you're thinking of.
2. 'Desired capacity' isn't a state you set for where you want the group to be; it's the current amount of capacity that auto scaling wants there to be in the group. So if a scaling policy fires and says +1, the desired changes to 2, and auto scaling will try to launch an instance, since you presumably only had 1 before (the desired was 1 before).
3. Memory reservation is based on the 2 GB reserved, so it doesn't matter how much is actually in use for scaling purposes. This is important because even if you had 6 of 8 GB reserved (from three 2 GB tasks) but 7.5 GB in use, ECS would still allow another task to be launched, since there are still 2 reservable GB.
Because of 3), you should probably increase the reservation values; you wouldn't want an instance to get overloaded, and Java can be nasty about RAM issues. This would also help with your scale-out threshold issue.
For your second question: scaling will only happen after the CloudWatch alarm is triggered, so if the metric never goes above the threshold, the alarm can't trigger the scaling policy. There are a whole host of cases where the alarm triggers but scaling still won't happen (more of them for scaling in than scaling out, but it can happen on scale-out too); the alarm going into the ALARM state is definitely a required step, though.
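For reference, the classic pre-Capacity-Provider pattern described in the question can be wired up roughly like this; the cluster and ASG names are placeholders, and the 85% threshold mirrors the question:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# A simple scale-out policy on the container-instance ASG.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-ecs-asg",  # placeholder
    PolicyName="scale-out-on-memory-reservation",
    PolicyType="SimpleScaling",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=1,  # add one container instance
    Cooldown=300,
)

# Alarm on the cluster-level MemoryReservation metric that ECS publishes.
cloudwatch.put_metric_alarm(
    AlarmName="ecs-memory-reservation-high",
    Namespace="AWS/ECS",
    MetricName="MemoryReservation",
    Dimensions=[{"Name": "ClusterName", "Value": "my-cluster"}],  # placeholder
    Statistic="Average",
    Period=60,
    EvaluationPeriods=3,
    Threshold=85.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```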

I want AWS Spot pricing for a long-running job. Is a spot request of one instance the best way to achieve this?

I have a multi-day analysis problem that I am running on a 72 cpu c5n EC2 instance. To get spot pricing, I made my code interruption-resilient and am launching a spot request of one instance. It works great, but this seems like overkill given that Spot can handle thousands of instances. Is this the correct way to solve my problem or am I using a sledgehammer to squash a fly?
I've tried normal EC2 launching, which works great, except that it is four times the price. I don't know of any other way to approach this except for these two ways. I thought about Fargate or containers or something, but I am running a 72 cpu c5n node, and those other options won't let me use that kind of horsepower (that I know of, hence my question).
Thanks!
Amazon EC2 Spot Instances are an excellent way to get cheaper compute (up to 90% discount). The only downside is that the instances might be stopped/terminated (your choice) if there is insufficient capacity.
Some strategies to improve your chance of obtaining spot instances:
Use instances across different Instance Types and Availability Zones because they each have different availability pools (EC2 Spot Fleet can assist with this)
Use resources on weekends and in evenings (even in different regions!) because these tend to be times of lower usage
Use Spot Instances with a specified duration (also known as Spot blocks), but this is at a higher price and a maximum duration of 6 hours
If your software permits it, you could split your load between multiple instances to get the job done faster and to be more resilient against any stoppages of your Spot instances.
Hopefully your application is taking advantage of all the CPUs, otherwise you'd be better-off with smaller instances.
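As a point of reference, a single persistent Spot instance can be launched directly with boto3, which is a reasonable fit for one long-running, interruption-resilient job. A sketch, with the AMI ID as a placeholder:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder
    InstanceType="c5n.18xlarge",      # the 72-vCPU size from the question
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # 'persistent' + 'stop' re-fulfills the request after an
            # interruption instead of terminating the instance.
            "SpotInstanceType": "persistent",
            "InstanceInterruptionBehavior": "stop",
        },
    },
)
```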

Would it be best to scale fewer larger instances, or more smaller instances?

What is the better option when performance is the concern: a smaller number of large instances, or a larger number of small instances? CloudWatch (with load balancing and auto scaling) will be used in case traffic floods the servers.
AWS is all about ELASTICITY
There is no need to provision large instances when not needed and burn out money.
There can be many situations where the CPU on one instance runs high while the next large instance you created remains under-utilized.
You should use medium-to-small instances with respect to the tier you require (memory-intensive, CPU, or network) and scale those instances with properly written policies.
As long as the user data and AMI are stable, you can spawn many instances within minutes, making sure you are not spending way too much and saving every penny.
SCALE WHEN NEEDED HORIZONTALLY
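A minimal sketch of that "scale horizontally when needed" advice, assuming a target-tracking policy and a placeholder group name:

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep the group's average CPU near 50%; auto scaling adds or removes
# instances as load rises and falls.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-asg",  # placeholder
    PolicyName="keep-cpu-near-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 50.0,
    },
)
```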
This is heavily dependent on your application.
I agree with Faisal Nizam's intuition of favoring horizontal scaling. However, there are many applications that will not run very well on small instances.
For example, Elastic recommends to have Elasticsearch cluster nodes with 64GB of RAM. Similar reasoning can be applied to many other data related applications, where it can be beneficial if a single instance is able to keep large data chunks in memory.
I would recommend finding the ideal instance size for your application, and scaling horizontally from there.
Each EC2 instance also carries some overhead, so you need to find a balance between a few large and costly instances and a lot of small instances, each with its own overhead.
(As of today) to vertically scale an EC2 server up or down, it needs to be shut down and spun back up - something to keep in mind before deciding to go for it.

What AWS EC2 instance types are suitable for a chat application?

I'm currently building a chat application based on NodeJS,
and I'm trying to choose the best instance type for our server.
AWS has a lot of choices: general purpose, compute optimized, memory optimized, and so on.
Could you please give me advice? :(
You can read this - https://aws.amazon.com/blogs/aws/choosing-the-right-ec2-instance-type-for-your-application/
Actually, it doesn't matter which hosting you choose - AWS, MS Azure, Google Compute Engine, etc.
If you want to get as much as you can from your servers and infrastructure, you need to understand your actual task.
First of all, decide how many simultaneously active users you will have in the next 3-6 months.
If there will be fewer than about 1,000 active users (connections) at a time, I think you can start from the smallest instance type. You should check how you can increase the CPU/RAM/HDD (or SSD) of your instance,
so that when you get more users you have a plan for how to speed up your server.
And keep an eye on your server analytics - CPU/RAM/IO utilization as you get more and more users.
The other question is whether you need to pass any certifications related to security restrictions...
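On the "keep an eye on your analytics" point, a small boto3 sketch for pulling hourly average CPU for one instance (the instance ID is a placeholder):

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(days=1),
    EndTime=now,
    Period=3600,  # hourly datapoints
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), "%")
```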
Since you are not quite sure where to start, I would recommend starting with a general purpose EC2 instance for production, from the M category (M3 or M4). You can begin with a smaller instance type like m3.medium.
Note: if it's an internal chat application with low traffic, you can even consider T-series EC2 instances.
The important part here is not to try to predict capacity needs. Instead, start small with a general purpose EC2 instance and, down the line, do proper capacity planning by looking at the instance's resource consumption. Since you can scale instances both horizontally and vertically, choosing the scaling unit means trading off instance type against cost and expected load over time.
One approach I follow is as follows:
1. Start with a general purpose instance (unless I'm confident that there are special needs such as networking, IO, etc.).
2. Load test the application (without auto scaling, on a single EC2 instance) by varying the number of users, and find the limits (how many users a single EC2 instance can handle).
3. After analyzing memory, CPU, and IO utilization, consider shifting to a different EC2 category or sticking with the same type (say CPU hits its limit but memory is hardly used: consider C-series instances).
4. Scale the EC2 instance vertically by moving to the next size (e.g. m3.medium to m3.large) and repeat the load tests to find its limits.
After repeating steps 3 and 4, you can find an optimal balance between cost and performance.
Let's take three instance types, with X as the cost of the smallest:
m3.medium - can serve 100 users, cost X
m3.large - can serve 220 users, cost 2X
m3.xlarge - can serve 300 users, cost 3X
It's an easy choice to select m3.large as the EC2 instance size, since it serves 110 users per X of cost - the best ratio of the three.
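The same comparison as a tiny script, using the example figures above:

```python
# Users served per X of cost, per the figures in the answer above.
sizes = {
    "m3.medium": {"users": 100, "cost": 1},  # X
    "m3.large": {"users": 220, "cost": 2},   # 2X
    "m3.xlarge": {"users": 300, "cost": 3},  # 3X
}
for name, s in sizes.items():
    print(f"{name}: {s['users'] / s['cost']:.0f} users per X")
# m3.large wins at 110 users per X.
```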
However, it's not as straightforward for some applications, where you need to decide the instance type based on your average expected load.
Set up auto scaling and load balancing to horizontally scale the EC2 instances to handle load above the average.
For more details, refer to the Architecting for the Cloud: Best Practices whitepaper.
I would recommend starting with a t2.micro Linux instance. Watch the CPU usage in CloudWatch. Once CPU usage starts to exceed 50% to 75%, or free memory gets low, or disk I/O gets saturated, switch to the next larger instance.
t2.micro Linux instances are (for the most part) free - read the fine print. t2.micro instances are burstable, which means you can get good performance from a small instance.
Unless your chat application has a huge customer / transaction base, you (probably) won't need the other instance types.