Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I have an iOS application, which hits a backend we've set up on AWS. Basically, we have a Staging and a Production environment, with some basic Load Balancing across two AZs (in Production), a small RDS instance, a small Cache instance, some SQS queues processing background tasks, and S3 serving up assets.
The app is in beta, so "Production" has a limited set of users. Right now, it's about 100, but it could be double or so in the coming weeks.
My question is: we had been using t2.micro instances on Staging and for our initial beta users on Production, and they seemed to perform well. As far as I can see, the CPU usage averages less than 10%, and the maximum seems to be about 25 - 30%.
Judging by these metrics, is there any reason not to continue using the t2 instances for the time being? Is there anything I'm overlooking about how the credit system works, or is it possible that I'm getting "throttled" by the t2s?
For the time being, traffic will be pretty predictable, so there won't be 10K users tomorrow :)
You just need to watch the CPU credit metrics on the instances (CPUCreditBalance in particular) to make sure you don't get throttled. Set up alarms in CloudWatch for this and you should be fine.
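To make the credit math concrete, here is a rough sketch of how a t2.micro earns and burns credits under a steady load. The 6-credits-per-hour earn rate, the 144-credit cap, and the 10% baseline are documented t2.micro figures; the steady-utilization assumption is a simplification.

```python
# Back-of-envelope t2.micro CPU credit model.
# One credit = 1 vCPU running at 100% for 1 minute (per AWS docs).
CREDITS_PER_HOUR = 6                  # t2.micro earn rate
MAX_BALANCE = 24 * CREDITS_PER_HOUR   # balance caps at 24h of accrual = 144

def net_credits_per_hour(avg_cpu_pct: float) -> float:
    """Credits gained (positive) or burned (negative) per hour
    at a steady average CPU utilization."""
    spent = avg_cpu_pct * 60 / 100    # vCPU-minutes consumed in an hour
    return CREDITS_PER_HOUR - spent

# At the ~10% average from the question, the instance breaks even
# (that is exactly the documented t2.micro baseline):
print(net_credits_per_hour(10))       # 0.0

# Sustained 30% CPU drains the balance; a full balance lasts:
drain = -net_credits_per_hour(30)     # 12 credits burned per hour
print(MAX_BALANCE / drain)            # 12.0 hours until throttling
```

So brief 25-30% peaks are fine; what would hurt is holding that level for many hours after the credit balance has drained.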
Closed last year.
I have been setting up several server-side GTMs for my company in the last months. I have deployed both App Engine Flexible and Cloud Run hosted ssGTMs.
I found Cloud Run easier to set up, and also cheaper as long as you stay under 300 million requests per month. Custom domain setup is also only slightly different.
The official documentation basically only covers App Engine and manual deployment.
I was wondering if there is any downside to using Cloud Run for hosting your ssGTM, besides potential cold starts (which I do not really care about).
I'm not very familiar with GTM, but here are a few things you have to figure out first before using Cloud Run:
Is GTM completely stateless, or does it need state? Cloud Run doesn't offer a persistent filesystem for storing files on disk.
Is GTM already available as a container?
You can avoid cold starts by setting the min replicas to 1 or higher, so that there is always at least one instance available to serve traffic.
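For reference, here is roughly what that looks like in a Cloud Run service spec; the service name is hypothetical, and the minScale annotation is the knob behind the gcloud --min-instances=1 flag:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: ssgtm            # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # keep at least one warm instance to avoid cold starts
        autoscaling.knative.dev/minScale: "1"
```

Keep in mind that a minimum instance is billed (at the idle rate) even when it receives no traffic.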
Closed last year.
Based on the documentation, AWS Compute Optimizer is free:
There is no additional charge for Compute Optimizer. EC2 instance type and EC2 Auto Scaling group configuration recommendations are available for free. You pay only for the AWS Compute resources needed to run your applications and Amazon CloudWatch monitoring fees.
Since it is free and can potentially lower costs, I fail to see the downside of it.
And since the feature is not enabled by default, there must be reasons why one should not enable it.
While I cannot think of a reason not to opt in to the service, it is NOT totally free: it costs $0.0003360215 per resource-hour, based on the number of hours a resource is running per month. This should, however, be a much smaller price to pay than the savings it can potentially offer in most cases.
Do make sure to check the official page for the latest pricing - https://aws.amazon.com/compute-optimizer/pricing/
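A quick back-of-envelope check of that rate for an always-on resource (~730 hours per month); verify against the official pricing page, since rates may change:

```python
# Monthly cost at the per-resource-hour rate quoted above,
# assuming the resource runs 24/7 (~730 hours/month).
rate_per_resource_hour = 0.0003360215
hours_per_month = 730

monthly_cost = rate_per_resource_hour * hours_per_month
print(round(monthly_cost, 4))   # 0.2453 USD per always-on resource
```

That is roughly a quarter of a dollar per resource per month, which is indeed negligible next to typical right-sizing savings.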
Closed 7 years ago.
I set the following parameters in my elastic beanstalk environment:
Do you think these settings are reasonable?
I didn't understand the breach duration parameter. What does it mean? Is 5 minutes reasonable?
Thanks
This all depends. Different apps need to scale for different reasons, which is why Elastic Beanstalk lets you choose various ways to scale your application. Sure, these settings are reasonable. Do they work for your application? Not sure. Is your app CPU-intensive, or is it just serving static content? What is the number one factor driving latency and 500s?
The breach duration parameter is the time frame of data to look at. For example, during the last 5 minutes, has the CPU been above 70%? Lower the number to be more "real time" and increase the number to be more "safe".
From the docs:
For Measurement period, specify how frequently Amazon CloudWatch measures the metrics for your trigger. Breach duration is the amount of time a metric can extend beyond its defined limit (as specified for Upper threshold and Lower threshold) before the trigger fires.
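As a sketch, these trigger settings live in the aws:autoscaling:trigger option namespace and can be pinned in an .ebextensions config file. The option names come from the Elastic Beanstalk docs; the values below are hypothetical illustrations, not recommendations:

```yaml
# .ebextensions/scaling.config (values are illustrative)
option_settings:
  aws:autoscaling:trigger:
    MeasureName: CPUUtilization
    Statistic: Average
    Unit: Percent
    Period: "1"           # measurement period, in minutes
    BreachDuration: "5"   # metric must stay past a threshold this long
    UpperThreshold: "70"  # scale out above 70% CPU
    LowerThreshold: "30"  # scale in below 30% CPU
```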
Closed 7 years ago.
I am looking for a way to pass log events from an AWS application to my company's site.
The thing is that the AWS application is 100% firewalled from everything except one IP address, because it's an encryption-related service.
I just don't know what service I should use to do this. There are so many services that I really have no idea which one fits.
I think I'd just use a simple message service; does this make sense? The thing is, there are plenty of events (let's say 1M per day), so I don't want big extra costs for this.
Sorry for the generic question, but I think it's quite concrete: "What is the best way to pass event messages from AWS when the volume is approx 1M per day, each 256 bytes on average?"
I'd like to connect to an AWS service rather than to any of the EC2 hosts...
On both sides I have Tomcats with the AWS SDK.
I just want to avoid rewriting. Maybe I should do it with S3? The files are immutable, but I could upload files every hour. I don't need real-time events; I just need the logfiles on site for analysis of user experience, and so that customers can access them. But having the log in 1M chunks would require further assembling, etc. I am really confused, sorry.
Kinesis is good for streaming event data. S3 is good if you already have files that you want stored.
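To put the stated numbers in perspective, a quick sanity check using the 1M-events/day and 256-byte figures from the question (for comparison, a single Kinesis shard accepts up to 1,000 records/s and 1 MB/s of writes):

```python
# Daily and hourly volume implied by the question's figures.
events_per_day = 1_000_000
bytes_per_event = 256

daily_bytes = events_per_day * bytes_per_event
avg_events_per_sec = events_per_day / 86_400

print(daily_bytes / 1e6)          # 256.0 MB per day
print(daily_bytes / 24 / 1e6)     # ~10.7 MB per hourly S3 upload
print(round(avg_events_per_sec))  # ~12 events/s on average
```

Either option fits comfortably at this volume: a single Kinesis shard, or 24 modest hourly S3 objects per day.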
Closed 8 years ago.
I want to deploy a django project with the following stack: Django with Nginx, Gunicorn, virtualenv, supervisor and PostgreSQL.
I was thinking of using a Linode 1GB server, which has:
1 GB RAM
1 CPU Core
24 GB SSD Storage
2 TB Transfer
40 Gbit Network In
125 Mbit Network Out
At the beginning I expect to have very low traffic. Is a Linode 1GB enough or should I choose a better one with more RAM/cores? I would like to choose the minimum one that fits my needs now and upgrade as the traffic grows.
Bonus general question: How can I calculate the server requirements for a specific stack and traffic?
Is a Linode 1GB enough
Well, it'll all run on that. You don't say what sort of load you want to support, though.
So - here's what you want to do.
Add some basic monitoring into the mix - mem/cpu/disk/network traces + record them.
Script your server so you can go from an empty VM to a working system automatically. There's all sorts of stuff out there: Puppet/Chef/Vagrant. You're already using Python, so Ansible might suit you.
Now test it. Fire up a local VM (or hire a Linode one by the hour) and stress-test it.
Rent a bigger one + test that too.
Now you know what size VM you need and when you'll need to switch.
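On the bonus question, one hedged starting point for the Gunicorn layer is the (2 × cores + 1) worker rule of thumb from the Gunicorn docs, combined with a per-worker memory footprint. The 80 MB/worker figure below is an assumption; replace it with your app's real RSS once you have the monitoring traces suggested above.

```python
# Rough first-pass sizing for Gunicorn on a small VM.
def gunicorn_workers(cores: int) -> int:
    """Worker-count rule of thumb from the Gunicorn docs."""
    return 2 * cores + 1

def est_app_memory_mb(cores: int, mb_per_worker: int = 80) -> int:
    """ASSUMPTION: 80 MB resident per worker; measure your own app."""
    return gunicorn_workers(cores) * mb_per_worker

print(gunicorn_workers(1))    # 3 workers on the 1-core Linode
print(est_app_memory_mb(1))   # 240 MB, leaving headroom in 1 GB for
                              # PostgreSQL, Nginx, and the OS
```

This is only a sketch of one component; the stress tests above remain the real answer, since they capture PostgreSQL, Nginx, and your actual request mix.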