I've been digging through tutorials for days, and they all say the same thing. It seems like I should be in slam-dunk territory here, but I get the above error whenever I try to read from or write to my Amazon S3 bucket.
I only have one AWS account, so my Lambda function should be owned by the same account as my Amazon S3 bucket. I have given my Lambda role s3:GetObject and s3:PutObject permissions, as well as just s3:*, and I have verified that my S3 bucket policy does not explicitly deny access, but nothing changes the message.
I am new to AWS policies and permissions, and Google isn't turning up many other people getting this message. I don't know where I am supposed to be supplying my AccountID or why it isn't already there. I would be grateful for any insights.
EDIT: I have added AmazonS3FullAccess to my policies and removed my previous policy, which only allowed GetObject and PutObject specifically. Sadly, behavior has not changed.
Here are a couple of screenshots: [screenshots omitted]
And since my roles seem to be correct, here is my code. Any chance there is anything here that could be causing my problem? [code omitted]
You should use the bucket name only, without the full ARN.
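For example, in the JavaScript SDK v3 the Bucket parameter expects the bare bucket name. A minimal sketch, using a hypothetical bucket and key:

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3Client = new S3Client({ region: "us-east-1" });

// Wrong: Bucket: "arn:aws:s3:::my-bucket"  (the full ARN)
// Right: Bucket: "my-bucket"               (the bare bucket name)
const response = await s3Client.send(
  new GetObjectCommand({ Bucket: "my-bucket", Key: "path/to/object.txt" })
);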
You can solve this issue by ensuring that the IAM role associated with your Lambda function has the correct permissions. For example, here is the IAM role I use to invoke Amazon S3 operations from a Lambda function:
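(The original policy document is not shown; below is a minimal sketch of the kind of identity-based policy such a role might attach. The bucket name is a placeholder, and note that object-level actions need the /* suffix on the resource ARN.)

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    }
  ]
}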
Also make sure that in the Lambda console you select the proper IAM role for the function. [screenshot omitted]
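If you prefer the CLI, the function's execution role can also be checked and changed there. A sketch with placeholder function and role names:

# Show which role the function currently uses
aws lambda get-function-configuration --function-name myFunction --query Role

# Point the function at a different role (hypothetical ARN)
aws lambda update-function-configuration --function-name myFunction --role arn:aws:iam::123456789012:role/my-s3-role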
I had this issue, but I later realized I had provided the S3 ARN instead of the bucket name as an environment variable.
I got this problem when I had an incorrect REGION in the S3Client initialization. Here is the correct code example (change the region to yours):
import { S3Client } from "@aws-sdk/client-s3";

const REGION = "eu-central-1"; // e.g. "us-east-1"; use your bucket's region
const s3Client = new S3Client({ region: REGION });
Source: step 2 in the AWS "Getting Started in Node.js" tutorial.
It's possible to enable object logging on an S3 bucket to CloudTrail using the following guide, but this is through the console.
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-events.html
I've been trying to figure out a way to do this via the CLI, since I want to do this for many buckets, but haven't had much luck. I've set up a new CloudTrail trail on my account and would like to map it to S3 buckets to do object logging. Is there a CLI command for this?
# This configures S3 server access logging (no link to CloudTrail here);
# bucket names are placeholders
aws s3api put-bucket-logging --bucket my-bucket --bucket-logging-status '{"LoggingEnabled":{"TargetBucket":"my-log-bucket","TargetPrefix":"logs/"}}'
It looks like you'll need to use the CloudTrail put_event_selectors() command:
From the DataResources documentation: CloudTrail supports data event logging for Amazon S3 objects and AWS Lambda functions. Each (dict) entry describes the Amazon S3 buckets or AWS Lambda functions that you specify in your event selectors for your trail to log data events.
Do a search for object-level in the documentation page.
Disclaimer: The comment by puji in the accepted answer works. This is an expansion of that answer with the resources.
Here is the AWS documentation on how to do this through the AWS CLI
https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/put-event-selectors.html
The specific CLI command you are interested in is the following, from the above documentation. The original documentation lists two objects in the same bucket; I have modified it to cover all the objects in two buckets.
aws cloudtrail put-event-selectors --trail-name TrailName --event-selectors '[{"ReadWriteType": "All","IncludeManagementEvents": true,"DataResources": [{"Type":"AWS::S3::Object", "Values": ["arn:aws:s3:::mybucket1/","arn:aws:s3:::mybucket2/"]}]}]'
If you want all the S3 buckets in your AWS account covered, you can use arn:aws:s3::: instead of a list of bucket ARNs, like the following.
aws cloudtrail put-event-selectors --trail-name TrailName2 --event-selectors '[{"ReadWriteType": "All","IncludeManagementEvents": true,"DataResources": [{"Type":"AWS::S3::Object", "Values": ["arn:aws:s3:::"]}]}]'
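To confirm what was applied, there is a matching read command; a quick check, assuming the same trail name:

aws cloudtrail get-event-selectors --trail-name TrailName2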
I have this URL, but I can't find the file in any of my S3 buckets. I'm quite sure that I'm logged in with the same AWS account as when I uploaded it, though I might be mistaken, of course. All the buckets I can see in the AWS Management Console (logged in with a root account) have URLs that start with http://[bucket-name].s3-website-eu-west-1.amazonaws.com
How can I reverse engineer the Amazon account and S3 bucket that this file is placed in?
https://s3.eu-central-1.amazonaws.com/ac-mail/footer.jpg
My end goal is to replace the file with another one. But I first need to find it.
The bucket name is "ac-mail". I believe it is not possible to find the owner account of the bucket from the bucket name alone, unless you own the bucket or have permission via the bucket policy. In that case, the command would be: aws s3api get-bucket-acl --bucket ac-mail
Note: it does return the canonical ID of the owner.
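For reference, the output looks roughly like this (a sketch; the actual IDs and grants will differ):

aws s3api get-bucket-acl --bucket ac-mail
{
    "Owner": {
        "DisplayName": "bucket-owner",
        "ID": "64-hex-character-canonical-user-id"
    },
    "Grants": [ ... ]
}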
There seem to be different regions appearing across my AWS account.
My S3 bucket says Ohio (us-east-2):
however the URL of my bucket says us-east-1 (N.Virginia):
https://s3.console.aws.amazon.com/s3/buckets/*****-bucket/?region=us-east-1&tab=overview
How do I find out what the actual region of my bucket is?
To choose a proper region, refer to this official doc:
https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html
To know which region your S3 bucket is in, you can find it in the Amazon S3 console:
https://s3.console.aws.amazon.com/s3/home
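Note that the region=... parameter in the console URL generally reflects the console session's selected region rather than the bucket's region, so don't rely on it. If you prefer the CLI, this returns the bucket's actual region (bucket name is a placeholder):

aws s3api get-bucket-location --bucket my-bucket
# Returns {"LocationConstraint": "us-east-2"}; a null LocationConstraint means us-east-1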
I have an S3 bucket that is 100% empty. Versioning was never enabled on the bucket. However, I still cannot remove the bucket. I have tried via the Console and the CLI tool. On the console it just says "Error" with no error message. From the cli and api it tells me: "An error occurred (BucketNotEmpty) when calling the DeleteBucket operation: The bucket you tried to delete is not empty". I have tried all of the following:
aws s3 rb s3://<bucket_name> --force -> BucketNotEmpty
aws s3 rm s3://<bucket_name> --recursive -> No output (because it's already empty)
aws s3api list-object-versions --bucket <bucket_name> -> No output (because versioning was never enabled)
aws s3api list-multipart-uploads --bucket <bucket_name> -> No outputs
aws s3api list-objects --delimiter=/ --prefix= --bucket <bucket_name> -> No Output (because it's empty)
It has no dependencies (it's not used by cloudfront or anything else that I'm aware of).
The bucket has been empty for approximately 5 days.
I was able to delete another very similar bucket with the same IAM user. Additionally my IAM user has Admin access.
I was facing this same problem. I was able to fix the issue by going into the bucket and deleting the "Bucket Policy" for the bucket. After that, deleting the bucket worked correctly.
I did this through the AWS console, for an S3 bucket created by Elastic Beanstalk (i.e. elasticbeanstalk-us-west-2-861587641234). I imagine the creation script includes a policy to prevent people from accidentally deleting the bucket.
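For context, the policy Elastic Beanstalk puts on its bucket typically contains a deny statement along these lines (a sketch; the exact statement on your bucket may differ), which is what blocks deletion until the policy is removed:

{
  "Effect": "Deny",
  "Principal": { "AWS": "*" },
  "Action": "s3:DeleteBucket",
  "Resource": "arn:aws:s3:::elasticbeanstalk-us-west-2-861587641234"
}

The CLI equivalent of removing the policy and then deleting the bucket would be aws s3api delete-bucket-policy --bucket <bucket_name> followed by aws s3api delete-bucket --bucket <bucket_name>.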
I had a similar issue and was able to delete the bucket after waiting overnight.
It's a pretty weak solution, but it may save you and others some time from pounding on it.
If it's still not deleting after all the actions in the comments, there are some things that only AWS support can fix properly. Again, a weak answer, but register a ticket with AWS support and then post their response here as an answer for others.
To delete an Elastic Beanstalk storage bucket (console)
1. Open the Amazon S3 Management Console
2. Select the Elastic Beanstalk storage bucket.
3. Choose Properties.
4. Choose Permissions.
5. Choose Edit Bucket Policy, and edit the policy to allow deletion and make it public.
6. Save.
7. Choose Actions and then choose Delete Bucket.
8. Type the name of the bucket and then choose Delete.
This is what worked for me. I didn't have versioning enabled on the bucket. When you delete an object from an S3 bucket, S3 puts a "delete marker" on that object and hides it from the listing. When you click the "Show versions" button, you will see your deleted objects with their delete markers. Select these objects (with delete markers) and delete them again; this is a permanent delete. Now your objects are really gone and your bucket is really empty. After this I was able to delete my bucket.
I guess versioning=true only means that S3 will create versions of an object if you upload again with the same name.
For users who are facing a similar issue:
I tried @Federico's solution, still with no success. There was another option, "Empty", next to "Delete".
So I emptied the bucket first and then tried Delete, and it worked.
I was facing an issue with deleting the Elastic Beanstalk storage bucket.
Follow the below steps:
1. Select the Elastic Beanstalk storage bucket.
2. Choose Permissions.
3. Delete the bucket policy.
4. Save.
If your bucket is empty, you can delete the bucket.
Sometimes after attempting to delete a bucket, it's not actually deleted, but the permissions are lost.
In my case, I went to the "Permissions" tab, re-granted permissions to myself, and was then able to remove it.
I had the same issue, and there was no policy, so I added permission for the email I was logged in with and saved. After granting myself permission, I was able to delete the bucket. I also had another bucket that had a policy; I deleted the policy and was able to delete that bucket as well.
Using the AWS CLI:
# delete all object versions from the bucket
aws s3api delete-objects --bucket nameOfYourBucket --delete "$(aws s3api list-object-versions --bucket nameOfYourBucket --query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"
# delete all delete markers from the bucket
aws s3api delete-objects --bucket nameOfYourBucket --delete "$(aws s3api list-object-versions --bucket nameOfYourBucket --query='{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}')"
And then you can delete the bucket.
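For completeness, the final step in the same CLI style (same placeholder bucket name):

aws s3api delete-bucket --bucket nameOfYourBucket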
I made the S3 bucket permissions public and gave access to everyone. Then I was able to delete the bucket from the AWS console.
I am using the AWS console to perform the deletion of the bucket.
I had the same problem, tried all of the above solutions, and none worked for me, so I figured out another way.
My bucket was used by Elastic Beanstalk, and whenever I deleted the bucket, Elastic Beanstalk created one automatically. I then deleted the Elastic Beanstalk service and tried to delete the bucket again, but it did not work this time either; the bucket was empty but was not allowing me to delete it.
I tried to change permissions, but the bucket was still there.
Finally I deleted the bucket policy, came back, and deleted the bucket, and it was gone.
Problem solved.
I tried many of the solutions mentioned. The only thing that worked for me is deleting it through Cyberduck (I neither work for nor am promoting Cyberduck; I genuinely used it and it worked). Here are the steps of what I did:
1 - Download and install Cyberduck.
2 - Click on Open Connection.
3 - Select Amazon S3 from the dropdown (the default would be FTP).
4 - Enter your access key ID and secret access key (if you don't have one, you need to create one through IAM on AWS).
5 - You will see a list of your S3 buckets. Select the file, folder, or bucket you want to delete, right-click, and delete. Even files with 0 KB show up here and can be deleted.
Hope this helps.