KMS requests automatically increasing on AWS

KMS requests are continuously increasing on my AWS account. I am on the Free Tier. My monthly quota is 20,000 requests, but in the first 7 days I've used 45% of it (9,000 requests).
Please tell me how to control this number. I have no instances running at the moment, yet the requests keep increasing. No instances, no KMS keys, no web apps, no deployments, and I don't know why this is happening. I've searched Google extensively but couldn't find anything helpful.
EDIT:
First I created an instance and deployed a Django project. After 3 days I terminated that instance. Now I have no services running, yet in the last 2 days the KMS request count has increased by 10%.

KMS is used by a number of other AWS services, and there is also a default key. Some examples of where it can be used:
- Encrypting data of any type
- AWS Certificate Manager SSL certs on an ELB/CloudFront
As for encryption, there are encrypted EBS volumes, Redshift data, S3 bucket data, parameters in Systems Manager Parameter Store, etc. If you still have no idea what is causing the KMS hits, you might want to use CloudTrail to log the calls. Note that CloudTrail itself can encrypt data and can quickly burn through your KMS allocation, and the logs it stores in S3 count against your S3 allocation.
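As a rough sketch of the CloudTrail approach (assuming boto3; the region and result limit are placeholders), you can look up recent KMS API events to see which service or principal is generating them:

```python
import boto3

# Look up recent KMS API calls recorded by CloudTrail to identify
# which service or principal is generating them.
client = boto3.client("cloudtrail", region_name="us-east-1")  # example region

resp = client.lookup_events(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "kms.amazonaws.com"}
    ],
    MaxResults=50,
)

for event in resp["Events"]:
    print(event["EventTime"], event["EventName"], event.get("Username", "-"))
```

The event names (e.g. Decrypt, GenerateDataKey) and the invoking principal usually point straight at the culprit.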

After a lot of trying, I finally sorted this out myself.
The problem was a leftover S3 bucket; after deleting it, the KMS requests stopped increasing.
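For anyone hitting the same issue, here is a minimal sketch (assuming boto3 and permission to call GetBucketEncryption) that reports each bucket's default encryption, since buckets defaulting to SSE-KMS generate KMS requests on reads and writes:

```python
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

# Report the default encryption of every bucket; SSE-KMS buckets
# generate KMS requests whenever objects are written or read.
for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        rules = s3.get_bucket_encryption(Bucket=name)[
            "ServerSideEncryptionConfiguration"]["Rules"]
        algo = rules[0]["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
        print(f"{name}: {algo}")  # "aws:kms" means KMS is involved
    except ClientError:
        print(f"{name}: no default encryption configured")
```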

Related

AWS Backup for S3 buckets - what is the size limit?

I am using AWS Backup to back up S3 buckets. One of the buckets is about 190GB (the biggest of the buckets I am trying to back up) and it is the only bucket that the backup job fails on, with the error message:
Bucket [Bucket name] is too large, please contact AWS for support
The backup job failed to create a recovery point for your resource [Bucket ARN] due to missing permissions on role [role ARN]
As you can see, these are two error messages concatenated together (probably an AWS bug), but I think the second message is incorrect, because all the other buckets were backed up successfully with the same permissions, and they are configured the same way. Thus, I think the first message is the issue.
I was wondering what the size limit is for AWS Backup for S3. I took a look at the AWS Backup quotas page, and there was no mention of a size limit. How do I fix this error?
Here is the information you're looking for:
https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html#S3-backup-limitations
Backup size limitations: AWS Backup for Amazon S3 allows you to automatically backup and restore S3 buckets up to 1 PB in size and containing fewer than 24 million objects.
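If you want to check how close a bucket is to those limits, one option (a sketch assuming boto3; the bucket name and region are placeholders) is to read the daily storage metrics that S3 publishes to CloudWatch:

```python
import boto3
from datetime import datetime, timedelta

# Read the daily S3 storage metrics from CloudWatch to see whether a
# bucket is near the AWS Backup limits (1 PB / 24 million objects).
cw = boto3.client("cloudwatch", region_name="us-east-2")  # example region

def latest_metric(metric_name, storage_type):
    resp = cw.get_metric_statistics(
        Namespace="AWS/S3",
        MetricName=metric_name,
        Dimensions=[
            {"Name": "BucketName", "Value": "my-big-bucket"},  # placeholder
            {"Name": "StorageType", "Value": storage_type},
        ],
        StartTime=datetime.utcnow() - timedelta(days=3),
        EndTime=datetime.utcnow(),
        Period=86400,  # these metrics are published once per day
        Statistics=["Average"],
    )
    points = resp["Datapoints"]
    return max(points, key=lambda p: p["Timestamp"])["Average"] if points else None

print("Size in bytes:", latest_metric("BucketSizeBytes", "StandardStorage"))
print("Object count:", latest_metric("NumberOfObjects", "AllStorageTypes"))
```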

Why do I still get charged for Amazon S3?

Hello, I have a weird problem with my Amazon account. I enabled the S3 Free Tier service and uploaded some files to a bucket. After one month I removed all the files and deleted the bucket. I thought I was finished with this, but then yesterday I received a weird email saying Amazon will charge me if I don't disable my Free Tier services. In my account settings I can see the usage shown in the screenshot, but it's weird because I don't have any buckets.
As you've now deleted the S3 bucket, you should not be charged anything; it's possible that the notification was delayed. If you have multiple accounts, make sure you're in the correct one.
The 2 requests in your screenshot are presumably from two ListBuckets requests made when you viewed your S3 buckets in the console.
Just in case you're using AWS Organizations with consolidated billing, be aware that the Free Tier is shared across the organization rather than granted to each account.
At the end of the month you will receive your bill; if S3 appears on it, you can use Cost Explorer to dig into your service usage, which might help identify any resources you were not aware of. Note that each query to the Cost Explorer API costs $0.01.
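As a sketch of that Cost Explorer query (assuming boto3; the dates are placeholders, and remember each API call costs $0.01), you could break one month's S3 charges down by usage type:

```python
import boto3

# Break down one month's S3 cost and usage by usage type.
# Each call to the Cost Explorer API costs $0.01.
ce = boto3.client("ce", region_name="us-east-1")

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2023-01-01", "End": "2023-02-01"},  # example dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost", "UsageQuantity"],
    Filter={"Dimensions": {"Key": "SERVICE",
                           "Values": ["Amazon Simple Storage Service"]}},
    GroupBy=[{"Type": "DIMENSION", "Key": "USAGE_TYPE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])
```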

My S3 requests keep adding up, even when I don't have any S3 buckets?

I have been monitoring my billing dashboard for a few days now, and I notice that my S3 PUT, COPY, POST, LIST, and GET requests keep adding up even though I'm not using S3. In fact, I have stopped using my AWS account for a few days to monitor any changes, and I have deleted all my previously created S3 buckets, the Lambda functions associated with them, DynamoDB tables, and API gateways. I remember hosting a website using S3, but I had deleted that bucket. Is there something I am missing that is causing this? I am on the Free Tier, and I am afraid I might exceed it if I don't find out what is causing this despite my not using my AWS account. I am new to AWS, hence the difficulty in understanding it. I would really appreciate some help with this matter.

How can I avoid AWS S3 throttling issues?

All my work accessing AWS S3 is in the region us-east-2, in the AZ us-east-2a.
But I've seen some throttling complaints from S3, so I am wondering: if I move some of my work to another AZ like us-east-2b, could that mitigate the problem? (Or will it not help, since us-east-2a and us-east-2b actually point to the same endpoint?)
Thank you.
The throttling is not per AZ; it applies per bucket (more precisely, per prefix within the bucket). The quote below is from the AWS documentation.
You can send 3,500 PUT/COPY/POST/DELETE and 5,500 GET/HEAD requests per second per partitioned prefix in an S3 bucket. When you have an increased request rate to your bucket, Amazon S3 might return 503 Slow Down errors while it scales to support the request rate. This scaling process is called partitioning.
To avoid or minimize 503 Slow Down responses, verify that the number of unique prefixes in your bucket supports your required transactions per second (TPS). This helps your bucket leverage the scaling and partitioning capabilities of Amazon S3. Additionally, be sure that the objects and the requests for those objects are distributed evenly across the unique prefixes. For more information, see Best Practices Design Patterns: Optimizing Amazon S3 Performance.
If possible, retry with exponential backoff when S3 returns 503 Slow Down responses. If the application uploading to S3 is performance-sensitive, the suggestion would be to hand the uploads off to a background process that can write to S3 at a later time.
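In boto3, the retry-with-backoff part can be enabled through the client config rather than hand-rolled; a minimal sketch (the retry count and mode are illustrative, not prescriptive):

```python
import boto3
from botocore.config import Config

# Retry throttled requests (including 503 Slow Down) with exponential
# backoff; "adaptive" mode also rate-limits the client itself.
config = Config(retries={"max_attempts": 10, "mode": "adaptive"})
s3 = boto3.client("s3", config=config)

# Spreading keys across several distinct prefixes lets S3 partition the
# bucket and raises the aggregate request rate, e.g. keys like
# "a1b2/uploads/file-001.bin" rather than a single flat prefix.
```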

Rate Limit on S3 bucket

I have a third-party client to which I have exposed my S3 bucket, and it uploads files into it. I want to rate-limit the bucket so that, in case of an anomaly on their end, I don't receive a flood of file-upload requests on my bucket, which is connected to my SQS queue and DynamoDB; that would lead to throttling at the DB and in the queue as well, and I would also be charged heavily. How do I prevent this?
It is not possible to configure a rate limit for Amazon S3. However, in some situations Amazon S3 might itself impose a rate limit (for example, returning 503 Slow Down responses when the request rate to a prefix spikes).
A way to handle this would be to process all uploads through API Gateway and your back-end service. However, this might introduce more overhead and cost than you are trying to save.
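If you go that route, API Gateway can enforce the limit for you; a sketch using boto3 (the API ID, stage, and limits are placeholders, and the throttle only applies to requests made with an API key attached to the plan):

```python
import boto3

apigw = boto3.client("apigateway")

# Create a usage plan that caps the third-party client's request rate
# before anything reaches S3.
plan = apigw.create_usage_plan(
    name="third-party-uploads",
    throttle={"rateLimit": 50.0, "burstLimit": 100},       # example limits
    apiStages=[{"apiId": "a1b2c3d4e5", "stage": "prod"}],  # placeholder API/stage
)

# Attach the client's API key so the limits apply to their requests.
apigw.create_usage_plan_key(
    usagePlanId=plan["id"],
    keyId="key-id-for-third-party",  # placeholder key ID
    keyType="API_KEY",
)
```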
You could configure an AWS Lambda function to be triggered when a new object is created, then store information in a database to track the upload rate, but this again would involve more complexity and (a little) expense.
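A hypothetical sketch of that Lambda approach, counting object-created events per one-minute window in a DynamoDB table (the table name, threshold, and SNS topic are all made up for illustration):

```python
import time
import boto3

dynamodb = boto3.client("dynamodb")
sns = boto3.client("sns")

TABLE = "upload-rate"   # assumed table with string partition key "window"
THRESHOLD = 1000        # example: max uploads tolerated per minute
ALARM_TOPIC = "arn:aws:sns:us-east-1:123456789012:upload-anomaly"  # placeholder

def handler(event, context):
    """Triggered by S3 object-created events; counts uploads per minute."""
    window = str(int(time.time()) // 60)  # current one-minute window
    resp = dynamodb.update_item(
        TableName=TABLE,
        Key={"window": {"S": window}},
        UpdateExpression="ADD #c :one",            # atomic counter increment
        ExpressionAttributeNames={"#c": "count"},  # "count" is a reserved word
        ExpressionAttributeValues={":one": {"N": "1"}},
        ReturnValues="UPDATED_NEW",
    )
    count = int(resp["Attributes"]["count"]["N"])
    if count == THRESHOLD + 1:  # alert once per window
        sns.publish(
            TopicArn=ALARM_TOPIC,
            Message=f"Upload rate exceeded {THRESHOLD}/min in window {window}",
        )
```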