AWS S3 bucket alarm revoke action

I have a bucket with an object lock policy, for security reasons.
The problem is that I don't want the bucket to accidentally grow to hundreds of gigabytes that, because of the object lock, I would then have to pay to store for a long, long time.
Is there a way, using an alarm or any other suitable Amazon service, to revoke a user's access if the bucket exceeds a certain size (or any other solution to this problem)?
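One approach (a sketch, not something from the thread itself) is to watch the BucketSizeBytes storage metric that S3 publishes to CloudWatch and, once it crosses a threshold, attach a bucket policy that denies further uploads. Below is a minimal sketch assuming a scheduled Lambda with cloudwatch:GetMetricStatistics and s3:PutBucketPolicy permissions; the bucket name and limit are hypothetical, and the deny statement blocks all new uploads (including your own) until the policy is removed.

```python
# Minimal sketch, assuming a scheduled Lambda (or cron job) that may read
# CloudWatch metrics and write the bucket policy. Names/limits are hypothetical.
import json
from datetime import datetime, timedelta

import boto3

BUCKET = "my-locked-bucket"      # hypothetical bucket name
LIMIT_BYTES = 100 * 1024 ** 3    # e.g. 100 GB

cloudwatch = boto3.client("cloudwatch")
s3 = boto3.client("s3")


def bucket_size_bytes(bucket):
    """Read the daily BucketSizeBytes storage metric S3 publishes to CloudWatch."""
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/S3",
        MetricName="BucketSizeBytes",
        Dimensions=[
            {"Name": "BucketName", "Value": bucket},
            {"Name": "StorageType", "Value": "StandardStorage"},
        ],
        StartTime=datetime.utcnow() - timedelta(days=2),
        EndTime=datetime.utcnow(),
        Period=86400,
        Statistics=["Average"],
    )
    points = resp["Datapoints"]
    return max(p["Average"] for p in points) if points else 0


def block_uploads(bucket):
    """Attach a bucket policy that denies further PutObject calls."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyUploadsOverQuota",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:PutObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
        }],
    }
    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))


def lambda_handler(event, context):
    size = bucket_size_bytes(BUCKET)
    if size > LIMIT_BYTES:
        block_uploads(BUCKET)
    return {"bucket": BUCKET, "size_bytes": size}
```

Because BucketSizeBytes is only reported once a day, the bucket could still overshoot the limit somewhat before the policy is applied; the object-locked data already written stays billable either way.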

Related

How can I prevent users from locking a bucket retention policy in GCP's Google Cloud Storage but still allow them to create/manage buckets?

Reading about Bucket Locks in Cloud Storage made me think of something very evil and bad that one could do:
Create a Cloud Storage Bucket.
Set a retention policy of 100 years on the bucket.
Lock the retention policy to the bucket.
Upload many petabytes of objects to the bucket.
The project is now stuck with a bucket that cannot be deleted for 100 years and the project can never be deleted either due to a "lien". And theoretically, someone is stuck paying the bill to store the petabytes. For 100 years.
Is there any way, preferably programmatically or through configuration, to prevent users from locking a retention policy on a bucket but still permitting them to create and manage other aspects of Cloud Storage buckets that can't be bucket locked?
The blunter permission system doesn't seem fine-grained enough to permit or deny locking specifically:
https://cloud.google.com/storage/docs/access-control/iam-json
I'm thinking there's some way to use IAM Conditions to accomplish what I want, but I'm not sure how.
Update: I'm looking for a solution that does not force a retention policy to be set. John Hanley's organization policy constraint solution is interesting, but it forces a retention policy of at least 1 second to be set across all applicable projects, and it also rules out enabling versioning on the bucket.
A forced retention of 1 second can cause certain issues with applications that write and delete objects at the same key multiple times a second.
FWIW, AWS identifies these kinds of radioactive waste creation actions and lets policies be set on them accordingly.
Method 1:
Select or create a custom role for bucket users that does not have the permission resourcemanager.projects.updateLiens. That permission is required to lock a retention policy on a bucket (locking is what places the lien on the project).
Method 2:
This method has side effects, such as not supporting object versioning, but it can prevent a very long bucket lock such as 100 years.
You can set an Organization Policy Constraint to limit the maximum duration of a Retention Policy.
Name: constraints/storage.retentionPolicySeconds
Description: Retention policy duration in seconds
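Neither method undoes a lock that has already happened, so it can also be worth auditing existing buckets for long or locked retention policies. A minimal sketch using the google-cloud-storage client library; the project ID and threshold below are hypothetical:

```python
# Minimal sketch: flag buckets whose retention policy is locked or very long.
# Assumes the google-cloud-storage client library; project ID is hypothetical.
from google.cloud import storage

PROJECT_ID = "my-project"                 # hypothetical
MAX_ACCEPTABLE_SECONDS = 30 * 24 * 3600   # e.g. 30 days

client = storage.Client(project=PROJECT_ID)
for bucket in client.list_buckets():
    period = bucket.retention_period          # None if no retention policy
    locked = bucket.retention_policy_locked   # True once the policy is locked
    if locked or (period or 0) > MAX_ACCEPTABLE_SECONDS:
        print(f"{bucket.name}: retention={period}s locked={locked}")
```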

AWS Backup for S3 buckets - what is the size limit?

I am using AWS Backup to back up S3 buckets. One of the buckets is about 190GB (the biggest of the buckets I am trying to back up) and it is the only bucket that the backup job fails on, with the error message:
Bucket [Bucket name] is too large, please contact AWS for support
The backup job failed to create a recovery point for your resource [Bucket ARN] due to missing permissions on role [role ARN]
As you can see, these are two error messages concatenated together (probably an AWS bug), but I think the second message is incorrect, because all of the other buckets were backed up successfully with the same permissions, and they are configured the same way. So I think the first message is the real issue.
I was wondering what is the size limit for AWS backup for S3. I took a look at the AWS Backup quotas page and there was no mention of a size limit. How do I fix this error?
Here is the information you're looking for:
https://docs.aws.amazon.com/aws-backup/latest/devguide/s3-backups.html#S3-backup-limitations
Backup size limitations: AWS Backup for Amazon S3 allows you to automatically backup and restore S3 buckets up to 1 PB in size and containing fewer than 24 million objects.
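If you want to check a bucket against those limits before (or after) a failed backup job, the daily NumberOfObjects storage metric that S3 publishes to CloudWatch is one way to do it without paying for a full LIST. A minimal sketch; the bucket name is hypothetical:

```python
# Minimal sketch: read the daily NumberOfObjects metric S3 publishes to
# CloudWatch and compare it to AWS Backup's documented 24-million-object limit.
# Bucket name is hypothetical.
from datetime import datetime, timedelta

import boto3

BUCKET = "my-backed-up-bucket"   # hypothetical
OBJECT_LIMIT = 24_000_000

cloudwatch = boto3.client("cloudwatch")
resp = cloudwatch.get_metric_statistics(
    Namespace="AWS/S3",
    MetricName="NumberOfObjects",
    Dimensions=[
        {"Name": "BucketName", "Value": BUCKET},
        {"Name": "StorageType", "Value": "AllStorageTypes"},
    ],
    StartTime=datetime.utcnow() - timedelta(days=2),
    EndTime=datetime.utcnow(),
    Period=86400,
    Statistics=["Average"],
)
points = resp["Datapoints"]
count = max(p["Average"] for p in points) if points else 0
print(f"{BUCKET}: ~{count:,.0f} objects (limit {OBJECT_LIMIT:,})")
```

A 190 GB bucket is far below the documented 1 PB limit, so if the object count is also well under 24 million, the error is worth raising with AWS support.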

Does AWS charge for shutting down S3 when I terminate an account?

So we need to terminate an AWS account for a customer. This account has some rather heavy S3 buckets with tens of thousands of images. The storage class for most of these buckets is Glacier Deep Archive.
I am unsure if there would be any hidden costs if I just terminate the account, or whether I need to delete these buckets manually first. I read that DELETE requests do not incur a cost, but to delete everything AWS needs to "list" the objects in the bucket, and that listing can incur a cost.
What is the most cost effective way to terminate an AWS account that has heavy duty S3 buckets?
Any insight from AWS gurus is much appreciated.
Quoting the relevant AWS documentation:
Closing your account might not automatically terminate all your active resources. You might continue to incur charges for some of your active resources even after you close your account. You're charged for any usage fees incurred before closure.
REFERENCE: https://aws.amazon.com/premiumsupport/knowledge-center/terminate-resources-account-closure/
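One approach that is often suggested (it is not part of the quoted documentation) is to empty the buckets with a lifecycle expiration rule rather than listing and deleting objects yourself, since lifecycle expiration actions are not billed as requests, and then delete the empty buckets before closing the account. A minimal boto3 sketch; the bucket name is hypothetical, and note that Glacier Deep Archive objects deleted before their 180-day minimum storage duration still incur an early-deletion charge:

```python
# Minimal sketch: expire everything in a bucket via a lifecycle rule so the
# bucket can be emptied and deleted before the account is closed.
# Bucket name is hypothetical; Deep Archive objects deleted before 180 days
# still incur an early-deletion charge.
import boto3

BUCKET = "customer-images-archive"   # hypothetical

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket=BUCKET,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-everything",
                "Filter": {"Prefix": ""},   # applies to all objects
                "Status": "Enabled",
                "Expiration": {"Days": 1},
                "NoncurrentVersionExpiration": {"NoncurrentDays": 1},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 1},
            }
        ]
    },
)
# After the rule has run (typically within a day or two), the empty bucket
# can be removed with s3.delete_bucket(Bucket=BUCKET).
```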

Can I pay for the data transfer even if an s3 bucket isn't specified as requester pays?

I'm going to copy an entire bucket (50 TB) and would like to make sure I pay for the transfer costs so the owner doesn't get hit with a big bill. Can I just specify the --request-payer requester flag and pick up the tab for the transfer, or is that only possible if the owner has explicitly marked the bucket as Requester Pays?
is that only possible if the owner has explicitly marked the bucket as Requester Pays?
Yes. The bucket must be set to Requester Pays. If it's not set, then the owner either misconfigured the bucket or simply does not care about the cost.
If the bucket is public, then Requester Pays can't be set to begin with.
If you are copying data between Amazon S3 buckets in the same region, then there is no Data Transfer charge. You would only be charged for Requests ($0.0004 per 1000 requests).
If you are transferring the data between S3 buckets in different regions, then Data Transfer (~$0.02 per GB depending on region) would be charged. I think it would be charged to the bucket owner since Data Transfer is charged for Outbound data flows.
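For reference, the boto3 equivalent of the CLI flag is the RequestPayer parameter; per the Requester Pays documentation, the requester then pays the request and data-download charges. A minimal single-object sketch with hypothetical bucket and key names (aws s3 sync --request-payer requester does the same thing for a whole bucket):

```python
# Minimal sketch: server-side copy of one object out of a Requester Pays
# bucket. The RequestPayer parameter confirms you accept the charges; without
# it, requests to a Requester Pays bucket fail with 403 Access Denied.
# Bucket and key names are hypothetical.
import boto3

s3 = boto3.client("s3")
s3.copy_object(
    Bucket="my-destination-bucket",
    Key="path/to/object",
    CopySource={"Bucket": "owners-source-bucket", "Key": "path/to/object"},
    RequestPayer="requester",
)
```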

How much will it cost to read from an S3 bucket in the same region but on different account

I read that it doesn't cost anything to read data from an S3 bucket in the same region within the same account.
However, I'm interested in how much it will cost to read a GB from a different account's bucket in the same region.
There will be no Data Transfer cost, since the Amazon EC2 instance and Amazon S3 bucket are in the same region.
The owner of the bucket will be charged for the GET Request ($0.0004 per 1,000 requests). Apart from that, the fact that the bucket belongs to a different AWS Account will not impact cost.
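As a concrete illustration, a plain cross-account read needs no special parameter; it only requires that the other account's bucket policy grant your principal s3:GetObject. A minimal sketch with hypothetical names:

```python
# Minimal sketch: read an object from another account's bucket in the same
# region. Assumes that account's bucket policy grants s3:GetObject to your
# principal; bucket, key, and region names are hypothetical.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # same region as the bucket
obj = s3.get_object(Bucket="other-accounts-bucket", Key="data/file.bin")
body = obj["Body"].read()
# Same region: no data-transfer charge; the GET request charge
# (~$0.0004 per 1,000 requests) is billed to the bucket owner.
```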