AWS ElastiCache - Redis Autoscaling - amazon-web-services

There is a Redis instance created in ElastiCache, and it will be used to store and retrieve data as usual.
Is there a maximum memory for this Redis instance, and how can that be checked?
All I need is, for example: if the data size in Redis goes above 100 MB, it should be scaled automatically, without me having to manually scale it or create a new instance and so on.
And when the data size is reduced (for example, from 300 MB to 50 MB due to less traffic), the instances should be reduced so that no extra cost is incurred.
How can this be configured in AWS ElastiCache?

Unfortunately, there is no auto-scaling policy attached to ElastiCache out of the box; Amazon ElastiCache provides console, CLI, and API support for scaling your Redis (cluster mode disabled) replication group up.
One option you can try is to set a CloudWatch alarm based on node memory and then trigger a Lambda function that scales up and down based on the metric:
Create a CloudWatch alarm
Select ElastiCache metrics
Select node-level metrics
Select the freeable memory metric
Trigger a notification to an SNS topic
Subscribe a Lambda function
Scale up/down based on the metric (a sketch of such a function follows below)
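A minimal sketch of such a Lambda handler is below, assuming the alarm publishes to SNS and that scaling "up" means moving a cluster-mode-disabled replication group to the next larger node type; the replication group name and the node-type ladder are placeholders, not part of the original answer.

```python
import json
import boto3

elasticache = boto3.client("elasticache")

REPLICATION_GROUP_ID = "my-redis-group"          # placeholder name (assumption)
# Ordered ladder of node types to move between (assumption).
NODE_TYPES = ["cache.t3.small", "cache.t3.medium", "cache.m6g.large"]


def handler(event, context):
    # SNS message published by the CloudWatch alarm.
    message = json.loads(event["Records"][0]["Sns"]["Message"])
    state = message.get("NewStateValue")          # "ALARM" or "OK"

    group = elasticache.describe_replication_groups(
        ReplicationGroupId=REPLICATION_GROUP_ID
    )["ReplicationGroups"][0]
    current_type = group["CacheNodeType"]

    if current_type not in NODE_TYPES:
        return {"status": f"unmanaged node type {current_type}"}
    idx = NODE_TYPES.index(current_type)

    if state == "ALARM" and idx < len(NODE_TYPES) - 1:
        target = NODE_TYPES[idx + 1]              # low free memory: scale up
    elif state == "OK" and idx > 0:
        target = NODE_TYPES[idx - 1]              # pressure eased: scale back down
    else:
        return {"status": "no change"}

    elasticache.modify_replication_group(
        ReplicationGroupId=REPLICATION_GROUP_ID,
        CacheNodeType=target,
        ApplyImmediately=True,
    )
    return {"status": f"resizing to {target}"}
```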

ElastiCache now supports auto scaling:
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/AutoScaling.html
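As a rough illustration of that feature via boto3's Application Auto Scaling client; the exact resource ID, scalable dimension, and predefined metric strings should be verified against the linked documentation, the group name and limits below are placeholders, and shard auto scaling applies to cluster-mode-enabled groups.

```python
import boto3

aas = boto3.client("application-autoscaling")

# Placeholder replication group; dimension/metric strings per the linked docs.
resource_id = "replication-group/my-redis-group"
dimension = "elasticache:replication-group:NodeGroups"

# Register the replication group's shard count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="elasticache",
    ResourceId=resource_id,
    ScalableDimension=dimension,
    MinCapacity=1,
    MaxCapacity=5,
)

# Target-tracking policy that scales shards on memory usage.
aas.put_scaling_policy(
    PolicyName="redis-memory-target-tracking",
    ServiceNamespace="elasticache",
    ResourceId=resource_id,
    ScalableDimension=dimension,
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ElastiCacheDatabaseMemoryUsageCountedForEvictPercentage"
        },
        "TargetValue": 60.0,
    },
)
```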

Related

AWS Resource Usage Data - CPU, Memory and Disk

I am trying to build an analytics dashboard using the below metrics/KPIs for all the EC2 instances.
Total CPU vs CPUUtilized
Total RAM vs RAMUtilized
Total EBS Volume vs EBSUtilized.
For example, I have launched an EC2 instance with 4 CPUs, 16 GiB RAM, and a 50 GB SSD; I would like to see the above KPIs as a time-series trend. I have no clue where to get this data from EC2. I tried the EC2 instance metrics through CloudWatch using the boto3 client, but did not get the above metrics. I would like to know:
Where can I find data with the above metrics?
I need the above metrics data in S3 on a daily basis.
Similarly, is there a way to get similar metrics for AWS RDS and AWS EKS clusters?
Thanks!
The Amazon EC2 service collects information about the virtual machine (instance) and sends it to Amazon CloudWatch.
See: List the available CloudWatch metrics for your instances - Amazon Elastic Compute Cloud
Note that it only collects metrics that can be observed from the virtual machine itself -- CPU Utilization, network traffic and Amazon EBS traffic. The EC2 service cannot see what is happening 'inside' the instance, since it is the Operating System that controls memory and manages the contents of the disks.
If you wish to collect metrics from the Operating System, then you would need to Collect metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent - Amazon CloudWatch. This agent runs in the instance and sends metrics out to CloudWatch.
You can write code that calls the CloudWatch Metrics APIs to retrieve metrics. Note that the metrics returned are calculated over a time period (e.g. average CPU Utilization over a 5-minute period). It is not possible to retrieve the actual raw datapoints.
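A minimal boto3 sketch of such a call is below; the instance ID is a placeholder, and the output could then be written to S3 on a schedule (e.g. from a daily Lambda or cron job).

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

# Average CPUUtilization over 5-minute periods for the last 24 hours.
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=datetime.now(timezone.utc) - timedelta(days=1),
    EndTime=datetime.now(timezone.utc),
    Period=300,
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2), point["Unit"])
```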
See also:
Monitoring Amazon RDS metrics with Amazon CloudWatch - Amazon Relational Database Service
Amazon EKS and Kubernetes Container Insights metrics - Amazon CloudWatch

Alarm in AWS when storage gets full (EC2)

I need to create an alarm in AWS which notifies me when my storage used is >= 80%.
AWS has no visibility inside your Amazon EC2 instance. This is because the instance is run by the Operating System and AWS does not have a login to the instance.
However, you can Collect metrics and logs from Amazon EC2 instances and on-premises servers with the CloudWatch agent - Amazon CloudWatch, which is a piece of software you install on the instance. It then runs inside the instance and sends metrics (such as available disk space) to Amazon CloudWatch. You can then create an alarm on that metric to receive a notification when the disk space metric passes a threshold.
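As a rough sketch, assuming the agent publishes disk_used_percent into the CWAgent namespace (the dimensions must match whatever your agent configuration actually emits; the instance ID and SNS topic ARN below are placeholders), the alarm could be created like this:

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

# Alarm on the CloudWatch agent's disk usage metric for the root volume.
cloudwatch.put_metric_alarm(
    AlarmName="root-volume-above-80-percent",
    Namespace="CWAgent",
    MetricName="disk_used_percent",
    Dimensions=[
        {"Name": "InstanceId", "Value": "i-0123456789abcdef0"},  # placeholder
        {"Name": "path", "Value": "/"},
    ],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=80.0,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:disk-alerts"],  # placeholder
)
```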

How to stop an EC2 instance after checking the memory utilization

I want to stop my EC2 instances if the memory utilization is more than x%, from my Lambda function (Python). Is there any way to check the memory utilization of an EC2 instance?
For EC2, by default only the host-level metrics are accessible; this includes CPU, disk performance, and network performance, but does not include other metrics such as memory utilization.
For this, you will need to push a custom metric from the EC2 instance into CloudWatch; this can be done by installing the CloudWatch agent.
Once the memory metric is being pushed into CloudWatch, you can create an alarm that triggers when a specific threshold is exceeded and publishes to an SNS topic. A Lambda function can subscribe to that topic so it is invoked under that condition.
You need to install the CloudWatch agent on the EC2 instances, if it's not there already. Then the memory usage will be a metric in CloudWatch that your Lambda function can query.
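A minimal sketch of such a Lambda handler is below, assuming the agent publishes mem_used_percent into the CWAgent namespace; the instance ID and threshold are placeholders.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")
ec2 = boto3.client("ec2")

INSTANCE_ID = "i-0123456789abcdef0"   # placeholder
THRESHOLD = 80.0                      # stop the instance above this percentage


def handler(event, context):
    # mem_used_percent is published by the CloudWatch agent; the namespace and
    # metric name depend on the agent configuration.
    stats = cloudwatch.get_metric_statistics(
        Namespace="CWAgent",
        MetricName="mem_used_percent",
        Dimensions=[{"Name": "InstanceId", "Value": INSTANCE_ID}],
        StartTime=datetime.now(timezone.utc) - timedelta(minutes=10),
        EndTime=datetime.now(timezone.utc),
        Period=300,
        Statistics=["Average"],
    )
    datapoints = stats["Datapoints"]
    if not datapoints:
        return {"status": "no data"}

    latest = max(datapoints, key=lambda p: p["Timestamp"])
    if latest["Average"] > THRESHOLD:
        ec2.stop_instances(InstanceIds=[INSTANCE_ID])
        return {"status": "stopping", "memory": latest["Average"]}
    return {"status": "ok", "memory": latest["Average"]}
```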

Why AWS CloudWatch does not have a memory usage metric for Auto Scaling groups

I am trying to create a graph of memory usage for an Auto Scaling group, but I discovered that there is no such metric. There is a memory usage metric, but it is for individual instances, which is not useful since instances keep changing in an Auto Scaling group. I want to know the technical reason why AWS CloudWatch doesn't provide it. I would also like to know a workaround to achieve it.
The metrics that AWS provides can be collected at the hypervisor level. But memory metrics (like disk metrics) are at the OS level, so memory usage is a custom metric that you have to periodically push to CloudWatch.
Monitoring Memory and Disk Metrics for Amazon EC2 Linux Instances
shows how to push your metrics to CloudWatch. Install the scripts (along with credentials, if you are not using an IAM role) before creating your AMI and you are set. Each instance in the Auto Scaling group will start pushing its memory metric to CloudWatch. Not sure how useful it will be for you.
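As an alternative to the linked scripts, here is a rough sketch of what each instance could run on a schedule to publish its memory usage under an AutoScalingGroupName dimension, so the values can be graphed per group rather than per instance; the namespace and group name are placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch")

ASG_NAME = "my-asg"   # placeholder; could instead be read from the instance's tags


def memory_used_percent():
    # Parse /proc/meminfo on a Linux instance (values are in kB).
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, value = line.split(":")
            info[key] = int(value.split()[0])
    used = info["MemTotal"] - info["MemAvailable"]
    return 100.0 * used / info["MemTotal"]


# Publish under a dimension shared by every instance in the group, so the
# metric aggregates at the Auto Scaling group level.
cloudwatch.put_metric_data(
    Namespace="Custom/Memory",
    MetricData=[{
        "MetricName": "MemoryUtilization",
        "Dimensions": [{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
        "Value": memory_used_percent(),
        "Unit": "Percent",
    }],
)
```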

How can we provision the number of core instances in an AWS Data Pipeline job

Requirement: Restore DynamoDB table from S3 Backup location.
We created a Data Pipeline job, and then edited the Resources section in the Architect wizard.
We set the core instance count to 20, but after the Data Pipeline job was activated, the EMR cluster was created with only one master and one core instance.
Could you please suggest how to increase the number of core instances under the Resources section?
You might be hitting the total EC2 resource limit. You can have only 20 On-Demand EC2 instances running unless you have requested a limit raise.
Data Pipeline respects this limit. You have to increase the EC2 On-Demand instance limit through the AWS Support Center, as described in the quote below.
A similar scenario from the docs:
If you configure AWS Data Pipeline to automatically create a 20-node Amazon EMR cluster to process data and your AWS account has an EC2 instance limit set to 20, you may inadvertently exhaust your available backfill resources. As a result, consider these resource restrictions in your design or increase your account limits accordingly.
If you require additional capacity, you can use the Amazon Web Services Support Center request form to increase your capacity.
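As a rough sketch of checking where you stand against that limit, the snippet below compares currently running instances with the account's On-Demand quota; the quota code is an assumption (and newer accounts measure this quota in vCPUs rather than instances), so verify it with list_service_quotas first.

```python
import boto3

ec2 = boto3.client("ec2")
quotas = boto3.client("service-quotas")

# Count currently running instances (a sketch; large accounts would paginate).
running = sum(
    len(r["Instances"])
    for r in ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
)

# "Running On-Demand Standard instances" quota; quota code is an assumption.
quota = quotas.get_service_quota(ServiceCode="ec2", QuotaCode="L-1216C47A")
limit = quota["Quota"]["Value"]

print(f"Running instances: {running}, account limit: {limit}")
```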