AWS Elastic Beanstalk scale triggering [closed] - amazon-web-services

I set the following parameters in my Elastic Beanstalk environment:
Do you think these settings are reasonable?
I didn't understand the breach duration parameter. What does it mean? Is 5 minutes reasonable?
Thanks

This all depends. Different apps need to scale for different reasons, which is why Elastic Beanstalk lets you choose various ways to scale your application. Sure, these settings are reasonable. Do they work for your application? Not sure. Is your app CPU intensive, or is it just serving static content? What is the number one factor driving your latency and 500s?
The breach duration parameter is the time frame of data to look at. For example, during the last 5 minutes, has the CPU been above 70%? Lower the number to be more "real time" and increase the number to be more "safe".
From the docs:
For Measurement period, specify how frequently Amazon CloudWatch measures the metrics for your trigger. Breach duration is the amount of time a metric can extend beyond its defined limit (as specified for Upper threshold and Lower threshold) before the trigger fires.
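For reference, these options live in the aws:autoscaling:trigger namespace, and you can set them programmatically. Here is a minimal sketch with boto3; the environment name and threshold values are placeholders, not recommendations:

```python
# Hypothetical sketch: configure the scaling trigger on an existing
# Elastic Beanstalk environment. The environment name and values are
# placeholders; tune them to your own workload.
import boto3

eb = boto3.client("elasticbeanstalk")
trigger = "aws:autoscaling:trigger"

eb.update_environment(
    EnvironmentName="my-env",  # placeholder
    OptionSettings=[
        # Metric to watch and its unit.
        {"Namespace": trigger, "OptionName": "MeasureName", "Value": "CPUUtilization"},
        {"Namespace": trigger, "OptionName": "Unit", "Value": "Percent"},
        # How often CloudWatch measures the metric (minutes).
        {"Namespace": trigger, "OptionName": "Period", "Value": "5"},
        # Breach duration: how long the metric may stay beyond a
        # threshold before the trigger fires (minutes).
        {"Namespace": trigger, "OptionName": "BreachDuration", "Value": "5"},
        {"Namespace": trigger, "OptionName": "UpperThreshold", "Value": "70"},
        {"Namespace": trigger, "OptionName": "LowerThreshold", "Value": "30"},
    ],
)
```

The same option settings can also be committed in an .ebextensions config file, so they travel with the application instead of living only in the console.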

Related

Is there anything against using Cloud Run for a server-side GTM? [closed]

I have been setting up several server-side GTMs for my company over the last few months. I have deployed both App Engine Flexible versions as well as Cloud Run hosted ssGTMs.
I found Cloud Run easier to set up and also cheaper, as long as you stay under 300 million requests per month. Custom domain setup is also only slightly different.
The official documentation basically only covers App Engine and manual deployment.
I was wondering if there is any downside to using Cloud Run for hosting your ssGTM, besides potential cold starts (which I do not really care about).
I'm not very familiar with GTM, but here are a few things you have to figure out first when using Cloud Run:
Is GTM completely stateless, or does it need state? Cloud Run doesn't offer a persistent filesystem for storing files on disk.
Is GTM already available as a container?
You can avoid cold starts by setting the minimum number of instances to 1 or higher, so that there is always at least one instance available to serve traffic, as in the sketch below.
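For example, here is a minimal sketch of pinning the minimum instance count with the google-cloud-run Python client; the project, region, and service name are placeholders:

```python
# Hypothetical sketch: keep at least one warm instance to avoid cold
# starts, using the google-cloud-run client (pip install google-cloud-run).
from google.cloud import run_v2

client = run_v2.ServicesClient()
name = "projects/my-project/locations/europe-west1/services/ssgtm"  # placeholder

service = client.get_service(name=name)
service.template.scaling.min_instance_count = 1  # always keep one instance warm
client.update_service(service=service).result()  # wait for the new revision
```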

AWS RDS MariaDB capacity planning [closed]

Customer has provisioned following for AWS RDS MariaDB instance:
Instance type: db.m5.large, vCPUs: 2, RAM: 8 GB, Multi AZ: No, Replication: No, Storage: 100 GB, Type: General purpose SSD
We are not sure what the basis for provisioning the instance was. The questions are:
What factors should be considered when doing capacity planning?
Is this a typical production grade database configuration?
Since the customer has provisioned the instance, we should account for the customer's opinion and consider the factors that led them to this plan. However, there are also factors which can help you in capacity planning, e.g.:
Is the transaction size static or dynamic?
If it is dynamic, what could the maximum transaction size be?
How much network bandwidth is each transaction going to consume?
Will the number of transactions grow over time? (It is supposed to grow anyway.)
What counts as a production-grade database configuration is a subjective question and can be debated.
The AWS Pricing Calculator is a good place to start for most of the factors that should be considered.
Build the system on your laptop. If it scales well enough there, get an RDS instance of similar specs.
If it clearly does not scale well enough, get an RDS instance one size up. Then work on optimizing things.
In some situations, you may have a slow query that just needs a better index. Keep in mind that "throwing hardware at a problem" is rarely optimal.
It is impossible to answer this question without knowing the exact specifics of the workload. However, it is unusual to have only 8 GB of RAM for a 100 GB database. This puts you, optimistically, at about a 5% ratio between buffer pool (cache) and data size, so unless the amount of hot data is surprisingly small in the intended workload, you will probably want at least double that amount of memory.
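To make that ratio concrete, here is a back-of-the-envelope sketch. It assumes an innodb_buffer_pool_size of roughly three quarters of instance memory, a common RDS default; check the parameter group for the real value:

```python
# Back-of-the-envelope cache-to-data ratio for the quoted instance.
# Assumes innodb_buffer_pool_size of ~3/4 of instance memory.
ram_gb = 8      # db.m5.large
data_gb = 100   # provisioned storage, taken as the worst-case data size

buffer_pool_gb = ram_gb * 0.75     # ~6 GB of cache
ratio = buffer_pool_gb / data_gb   # ~0.06, i.e. only ~6% of data fits in cache

print(f"buffer pool ~ {buffer_pool_gb:.1f} GB, cache/data ~ {ratio:.0%}")
```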

How to get number of calls per x made to AWS API Gateway using CloudWatch? [closed]

I'm trying to get metrics that show how many calls were made to our APIs (on API Gateway) in a given period of time, using CloudWatch. There is a metric called Count, which the AWS documentation describes as "the total number of API requests in a given period." But it only ever comes to 1, whether the given period is 1 minute, 5 minutes, or 1 week. It may well be that we never have 2 calls hit a given API at exactly the same point, so each API endpoint is only ever hit once. But what I want is: if 10 people hit my API endpoint 10 times each in 5 minutes, I am shown 100; if 15 people hit it 10 times each in 10 minutes, I see 150.
Is there any way to do this?
Yes. You probably have the default statistic (Average) selected. To see the total, you need to select Sum as the statistic on the 'Graphed metrics' tab.
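The same total is also available programmatically. Here is a minimal sketch with boto3, where the ApiName dimension value and time window are placeholders:

```python
# Hypothetical sketch: total API Gateway request count over the last hour,
# summed in 5-minute buckets. The ApiName dimension value is a placeholder.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/ApiGateway",
    MetricName="Count",
    Dimensions=[{"Name": "ApiName", "Value": "my-api"}],  # placeholder
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,             # 5-minute buckets
    Statistics=["Sum"],     # Sum, not the default Average
)

for point in sorted(resp["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], int(point["Sum"]))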

AWS - T2 Instances for beta, low usage application [closed]

I have an iOS application, which hits a backend we've set up on AWS. Basically, we have a Staging and a Production environment, with some basic Load Balancing across two AZs (in Production), a small RDS instance, a small Cache instance, some SQS queues processing background tasks, and S3 serving up assets.
The app is in beta, so "Production" has a limited set of users. Right now, it's about 100, but it could be double or so in the coming weeks.
My question is: we had been using t2.micro instances on Staging and for our initial beta users on Production, and they seemed to perform well. As far as I can see, the CPU usage averages less than 10%, and the maximum seems to be about 25 - 30%.
Judging by these metrics, is there any reason not to continue using the t2 instances for the time being? Is there anything I'm overlooking as far as how the credit system works, or is it possible that I'm getting "throttled" by the T2s?
For the time being, traffic will be pretty predictable, so there won't be 10K users tomorrow :)
You just need to watch the CPU credit metrics on the instances (CPUCreditBalance in particular) to make sure you don't get throttled. Set up alerts in CloudWatch for this and you should be fine.
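As an illustration, here is a minimal sketch of such an alert with boto3; the instance ID, threshold, and SNS topic ARN are placeholders:

```python
# Hypothetical sketch: alarm when a t2 instance's CPU credit balance
# runs low. Instance ID, threshold, and SNS topic ARN are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch")

cloudwatch.put_metric_alarm(
    AlarmName="t2-low-cpu-credits",
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                    # evaluate 5-minute averages
    EvaluationPeriods=1,
    Threshold=50,                  # alert well before the balance hits zero
    ComparisonOperator="LessThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
)
```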

Cron-like events - AWS Lambda or AWS Data Pipeline [closed]

I have an EC2 instance that schedules many tasks (using crontab).
Some of them are executed every 1 minute, some every 5 minutes, and so on.
I want to move all the cron tasks into an AWS service.
I am trying to figure out which AWS service can give me the best solution.
I found 2 services that can schedule cron like tasks:
AWS Data Pipeline
AWS Lambda
Which of them can give me the best solution?
I don't know how you want to define "best", but if you have many tasks, each one will require a separate pipeline, and that will cost you around $1 each.
Lambda, on the other hand, will probably be much less: you get 1M requests free, and they're $0.20 per million after that. You will also get charged based on the time and memory each task takes to run. There are some limits (5 minutes is the max execution time, I think), so you'll have to take that into consideration.
But overall, I think Lambda will be much cheaper to run.
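If you do go with Lambda, here is a minimal sketch of wiring a cron-style schedule to an existing function with boto3; all ARNs and names are placeholders:

```python
# Hypothetical sketch: trigger an existing Lambda function every 5 minutes
# via a CloudWatch Events (EventBridge) rule. ARNs and names are placeholders.
import boto3

events = boto3.client("events")
lambda_client = boto3.client("lambda")

function_arn = "arn:aws:lambda:us-east-1:123456789012:function:my-task"

# Create the schedule; a cron(...) expression works here too.
rule_arn = events.put_rule(
    Name="my-task-every-5-min",
    ScheduleExpression="rate(5 minutes)",
)["RuleArn"]

# Allow the rule to invoke the function, then attach it as a target.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="allow-events-invoke",
    Action="lambda:InvokeFunction",
    Principal="events.amazonaws.com",
    SourceArn=rule_arn,
)
events.put_targets(
    Rule="my-task-every-5-min",
    Targets=[{"Id": "1", "Arn": function_arn}],
)
```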