At what level does the S3 request rate limit apply?

Is the rate limit of 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second applied at the instance level in S3, or does it apply to a particular VPC/subnet?
I went through the official docs, but they don't specify the level at which this limit applies.
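For what it's worth, the current S3 documentation describes these limits as applying per partitioned prefix within a bucket, rather than per client, instance, or VPC. Under that reading, spreading keys across several prefixes multiplies the available request rate. A minimal sketch in Python with boto3, where the bucket name and prefix count are placeholders:

```python
import uuid
import boto3

# Assumption: the published S3 limits apply per partitioned prefix, so
# distributing keys across several prefixes raises the aggregate rate.
s3 = boto3.client("s3")
BUCKET = "my-example-bucket"  # placeholder bucket name
NUM_PREFIXES = 8              # each prefix gets its own PUT/s budget

def put_object(payload: bytes) -> str:
    """Write an object under a randomized prefix to spread request load."""
    key = f"shard-{uuid.uuid4().int % NUM_PREFIXES}/{uuid.uuid4()}.bin"
    s3.put_object(Bucket=BUCKET, Key=key, Body=payload)
    return key
```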

Related

Getting Quota Exceeded on calling the Google Photos API. What are the actual quotas?

I am uploading photos to Google Photos and placing them in Albums via the Photos Library API.
Every now and then, I get the error "Quota exceeded for quota metric 'Write requests' and limit 'Write requests per minute per user' of service 'photoslibrary.googleapis.com' for consumer 'project_number:XXXXXXX'".
The documentation states 10,000 requests per project per day. But then it says, "In addition to these limits there are other quotas that exist to protect the reliability and integrity of our systems." I am assuming that refers to the "per minute per user" error I am receiving.
So what is the actual limit? How many times can I call the API per minute?
P.S. There is an API quota page for the Google Sheets API, which states that the limit is 60 requests per minute. Does that limit also apply to other Google APIs?
P.P.S. There is a similar question from 2018, but the issue there was too much bandwidth, which doesn't apply to my situation.
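Since no official per-minute number is published for this API, a common mitigation is client-side exponential backoff when the per-minute write quota trips. A minimal sketch, assuming your client surfaces quota exhaustion as some catchable error (the exception class and upload callable below are hypothetical stand-ins):

```python
import random
import time

class QuotaExceededError(Exception):
    """Stand-in for whatever error your client raises on an HTTP 429."""

def call_with_backoff(call, *args, max_attempts: int = 6):
    """Retry a quota-limited API call with exponential backoff and jitter,
    giving the per-minute bucket time to refill."""
    for attempt in range(max_attempts):
        try:
            return call(*args)
        except QuotaExceededError:
            if attempt == max_attempts - 1:
                raise
            # Sleep 1s, 2s, 4s, ... plus jitter so clients don't retry in lockstep.
            time.sleep(2 ** attempt + random.random())

# Usage: wrap your (hypothetical) upload function per media item.
# call_with_backoff(upload_photo, media_item)
```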

Difference between transaction and request in AWS IoT limits

In most of its official documents, AWS expresses throttling limits with metrics like requests per second or requests per client, e.g. here. But for the AWS IoT API throttling limits, it uses a metric called transactions per second. Is there an actual difference between the "transactions per second" and "requests per second" metrics, or are they the same?
They mean the same thing: the rate at which you're allowed to call the API. There seems to be no standard for the term; it's chosen at the discretion of the writers. Some services state only a plain number, e.g. 1,000; others use requests; and a few use transactions.

Scaling AWS AppSync beyond 1000 rps

AWS throttles AppSync to 1,000 rps per API. What can be done if the expected request rate is 50,000 rps?
According to Amazon's documentation, you can't increase the requests-per-second limit yourself. Depending on your account, you may be able to request an increase by selecting "Service limit increase" in the AWS Support Center.
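If your account is eligible, the increase can also be requested programmatically through the Service Quotas API rather than the console. A minimal sketch with boto3; the AppSync quota code below is a placeholder you would first look up:

```python
import boto3

# Service Quotas client; the region must match where the quota applies.
sq = boto3.client("service-quotas", region_name="us-east-1")

# Discover the quota code for the AppSync request-rate limit first;
# "L-XXXXXXXX" below is a placeholder, not the real code.
for quota in sq.list_service_quotas(ServiceCode="appsync")["Quotas"]:
    print(quota["QuotaCode"], quota["QuotaName"], quota["Value"])

# File the increase request (still subject to approval by AWS).
sq.request_service_quota_increase(
    ServiceCode="appsync",
    QuotaCode="L-XXXXXXXX",  # placeholder quota code
    DesiredValue=50000.0,
)
```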

How to rate limit per user in API Gateway?

I'm running a business API on AWS, through API Gateway and Lambda.
Currently, I handle rate limiting with the built-in usage plans and API keys.
Each account tier (think basic, medium, premium) is associated with a usage plan, to which each customer's API key is linked.
I just found out that there is a hard (but increasable) limit of 500 API keys that a single AWS account can have per region (https://docs.aws.amazon.com/fr_fr/apigateway/latest/developerguide/limits.html).
Is it sustainable to rely on API keys to rate limit each customer? We will hit the 500 limit eventually. Are there other solutions we could use?
Thanks a lot
If you read the table carefully, you will notice that the last column has the header "Can Be Increased" and the value "Yes" for "Maximum number of API keys per account per region".
Just contact support once you get close to your limit and ask for an increase. It may take 2-3 business days, but otherwise it should be only a matter of asking.
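Until you approach the cap, the existing pattern can also be automated when onboarding customers. A minimal sketch of creating a key and binding it to a tier's usage plan with boto3; the usage plan ID and customer name are placeholders:

```python
import boto3

apigw = boto3.client("apigateway")

def onboard_customer(customer_name: str, usage_plan_id: str) -> str:
    """Create an API key for a new customer and bind it to the usage
    plan representing their tier (basic/medium/premium)."""
    key = apigw.create_api_key(name=customer_name, enabled=True)
    apigw.create_usage_plan_key(
        usagePlanId=usage_plan_id,
        keyId=key["id"],
        keyType="API_KEY",
    )
    return key["value"]  # the secret to hand to the customer

# Example: attach a customer to a (placeholder) premium-tier plan.
api_key = onboard_customer("acme-corp", "premium-plan-id")
```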

AWS S3 Request Rate Performance to Single Object

Folks,
What is the throughput limit on GET calls to a single object in an S3 bucket? The AWS documentation suggests implementing CloudFront; however, it does not cover the case where only a single object exists in a bucket. Does anyone know if the same guidance applies, i.e. ~300 GET requests/sec?
http://docs.aws.amazon.com/AmazonS3/latest/dev/request-rate-perf-considerations.html
Thanks!
Note: as of July 17, 2018, the request limits have been dramatically increased, along with the automatic partitioning of S3 buckets.
There is no throughput limit applied to objects in Amazon S3. However, a high rate of requests per second may limit S3's ability to respond to queries. As per the documentation you linked, this only becomes a concern above roughly 300 requests per second.
Larger objects can therefore provide more throughput than smaller objects at the same number of requests per second.
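One way to push more throughput out of a single large object is parallel byte-range fetches: each ranged GET counts as its own request, and the parts download concurrently. A minimal sketch, where the bucket, key, and part size are placeholders:

```python
import boto3
from concurrent.futures import ThreadPoolExecutor

s3 = boto3.client("s3")
BUCKET, KEY = "my-example-bucket", "large-object.bin"  # placeholders
PART_SIZE = 8 * 1024 * 1024  # 8 MiB per ranged GET

def fetch_range(start: int, end: int) -> bytes:
    """Fetch one slice of the object; each ranged GET is a separate request."""
    resp = s3.get_object(Bucket=BUCKET, Key=KEY, Range=f"bytes={start}-{end}")
    return resp["Body"].read()

# Split the object into inclusive byte ranges and download in parallel.
size = s3.head_object(Bucket=BUCKET, Key=KEY)["ContentLength"]
ranges = [(i, min(i + PART_SIZE, size) - 1) for i in range(0, size, PART_SIZE)]
with ThreadPoolExecutor(max_workers=8) as pool:
    data = b"".join(pool.map(lambda r: fetch_range(*r), ranges))
```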
Amazon CloudFront can provide faster responses because information is cached rather than served directly from Amazon S3. CloudFront also has over 50 edge locations throughout the world, allowing it to serve content in parallel from multiple locations and at lower latency compared to S3.