How to see the JSON of an S3 Lifecycle rule created in the console?

I set up a Lifecycle rule on an S3 bucket by going to the bucket in the console -> Management -> Create lifecycle rule, and configured it the way I want. How can I see the JSON that defines this rule, now that setup is done?
Any help would be much appreciated!

To add to @Anon Coward's comment, this works perfectly:
aws s3api get-bucket-lifecycle --bucket bucket-name --output json
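Note that get-bucket-lifecycle is the older form of this API; current versions of the CLI also ship get-bucket-lifecycle-configuration, which returns the rules in the newer format with filters:
# newer equivalent; prints a JSON document with a "Rules" array
aws s3api get-bucket-lifecycle-configuration --bucket bucket-name --output json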

Related

How do you enable S3 Object Logging to Cloud Trail using AWS CLI?

It's possible to enable object logging on an S3 bucket to CloudTrail using the following guide, but this is through the console:
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/enable-cloudtrail-events.html
I've been trying to figure out a way to do this via the CLI, since I want to do this for many buckets, but haven't had much luck. I've set up a new CloudTrail trail on my account and would like to map it to S3 buckets to do object logging. Is there a CLI command for this?
# This only sets up S3 server access logging (no link to CloudTrail here)
aws s3api put-bucket-logging
It looks like you'll need to use the CloudTrail put-event-selectors command:
DataResources
CloudTrail supports data event logging for Amazon S3 objects and AWS Lambda functions.
(dict): The Amazon S3 buckets or AWS Lambda functions that you specify in your event selectors for your trail to log data events.
Do a search for object-level in the documentation page.
Disclaimer: The comment by puji in the accepted answer works. This is an expansion of that answer with the resources.
Here is the AWS documentation on how to do this through the AWS CLI:
https://docs.aws.amazon.com/cli/latest/reference/cloudtrail/put-event-selectors.html
The specific CLI command you are interested in is the following, from the above documentation. The original documentation lists two objects in the same bucket; I have modified it to cover all the objects in two buckets.
aws cloudtrail put-event-selectors --trail-name TrailName --event-selectors '[{"ReadWriteType": "All","IncludeManagementEvents": true,"DataResources": [{"Type":"AWS::S3::Object", "Values": ["arn:aws:s3:::mybucket1/","arn:aws:s3:::mybucket2/"]}]}]'
If you want all the S3 buckets in your AWS account covered, you can use arn:aws:s3::: instead of a list of bucket ARNs, like the following:
aws cloudtrail put-event-selectors --trail-name TrailName2 --event-selectors '[{"ReadWriteType": "All","IncludeManagementEvents": true,"DataResources": [{"Type":"AWS::S3::Object", "Values": ["arn:aws:s3:::"]}]}]'
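If you want to double-check what was applied, the selectors can be read back with get-event-selectors:
# confirm the event selectors on the trail
aws cloudtrail get-event-selectors --trail-name TrailName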

How does S3 get permission from Lambda trigger?

I'm working out the security details for working with Lambda. One thing I can't find out is how S3 gets permission to push to Lambda when you add a trigger from the Lambda console or via S3 - Properties - Events. I know how it works using the CLI and I know you could do it via the SDK but I also noticed it isn't always necessary. Mostly the trigger just 'works' without me adding any permissions. Does anybody know why?
And is there a way to find out what permissions S3/an S3 bucket has? I know there's a 'Permissions' tab, but that's not giving me any information. I also know about Trusted Advisor, but that's just telling me there's no explicit problem with the permissions. Can I get a list of permissions, though?
I hope someone can help me out, thanks in advance!
Adding a trigger in the console is the equivalent of assigning permissions and setting a bucket notification. You can see the policy associated with a particular Lambda function by using the get-policy CLI command:
aws lambda get-policy --function-name <name>
This will tell you the policy for your function, including the resources with rights to invoke it. Note that this policy isn't applied to the S3 bucket, but to your Lambda function.
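For an S3 trigger, the statement inside that policy typically looks something like the following (get-policy returns the policy as an escaped JSON string; shown decoded here, with the region, account ID, function name, and bucket as placeholders):
{
  "Sid": "s3-invoke",
  "Effect": "Allow",
  "Principal": { "Service": "s3.amazonaws.com" },
  "Action": "lambda:InvokeFunction",
  "Resource": "arn:aws:lambda:us-east-1:123456789012:function:my-function",
  "Condition": {
    "ArnLike": { "AWS:SourceArn": "arn:aws:s3:::my-bucket" }
  }
}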
You can also see what your bucket is set up to notify in the console under Properties > Events, or review this with the CLI using the get-bucket-notification command:
aws s3api get-bucket-notification --bucket <bucket>
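And if you ever wire this up yourself with the CLI instead of the console, the grant the console creates is roughly equivalent to an add-permission call like this (the function name, bucket, account ID, and statement ID are placeholders):
# allow S3 to invoke the function for events from one specific bucket
aws lambda add-permission \
  --function-name my-function \
  --statement-id s3-invoke \
  --action lambda:InvokeFunction \
  --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::my-bucket \
  --source-account 123456789012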

Can't Delete Empty S3 Bucket

I have an S3 bucket that is 100% empty. Versioning was never enabled on the bucket. However, I still cannot remove the bucket. I have tried via the console and the CLI tool. On the console it just says "Error" with no error message. From the CLI and API it tells me: "An error occurred (BucketNotEmpty) when calling the DeleteBucket operation: The bucket you tried to delete is not empty". I have tried all of the following:
aws s3 rb s3://<bucket_name> --force -> BucketNotEmpty
aws s3 rm s3://<bucket_name> --recursive -> No output (because it's already empty)
aws s3api list-object-versions --bucket <bucket_name> -> No output (because versioning was never enabled)
aws s3api list-multipart-uploads --bucket <bucket_name> -> No output
aws s3api list-objects --delimiter=/ --prefix= --bucket <bucket_name> -> No Output (because it's empty)
It has no dependencies (it's not used by cloudfront or anything else that I'm aware of).
The bucket has been empty for approximately 5 days.
I was able to delete another very similar bucket with the same IAM user. Additionally my IAM user has Admin access.
I was facing this same problem. I was able to fix the issue by going into the bucket and deleting the "Bucket Policy" for the bucket. After that, deleting the bucket worked correctly.
I did this through the AWS console, for an S3 bucket created by Elastic Beanstalk (i.e. elasticbeanstalk-us-west-2-861587641234). I imagine the creation script includes a policy to prevent people from accidentally deleting the bucket.
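For reference, the same fix is scriptable; assuming the bucket name from above, the CLI equivalent would be:
# remove the bucket policy, then delete the bucket
aws s3api delete-bucket-policy --bucket elasticbeanstalk-us-west-2-861587641234
aws s3api delete-bucket --bucket elasticbeanstalk-us-west-2-861587641234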
I had a similar issue and was able to delete the bucket after waiting overnight.
It's a pretty weak solution but may save you and other some time from pounding on it.
If it's still not deleting after all the actions in the comments, there are some things that only AWS support can fix properly. Again, a weak answer, but register a ticket with AWS support and then post their response here as an answer for others.
To delete an Elastic Beanstalk storage bucket (console):
1. Open the Amazon S3 Management Console.
2. Select the Elastic Beanstalk storage bucket.
3. Choose Properties.
4. Choose Permissions.
5. Choose Edit Bucket Policy and change the policy to allow deletion (or make the bucket public).
6. Save.
7. Choose Actions and then choose Delete Bucket.
8. Type the name of the bucket and then choose Delete.
This is what worked for me. I didn't have versioning enabled on the bucket. When you delete an object from an S3 bucket, it puts a 'delete marker' on that object and hides it from the listing. When you click the 'Show versions' button you will see your deleted objects with their delete markers. Select these objects (with delete markers) and delete again; this is a permanent delete. Now your objects are really gone and your bucket is really empty. After this I was able to delete my bucket.
I guess versioning=true only means that S3 will create versions of an object if you upload another with the same name.
For users who are facing a similar issue:
I tried @Federico's solution, still no success. There was another option, "Empty", next to "Delete".
So I emptied the bucket first and then tried delete, and it worked.
I was facing an issue with deleting the Elastic Beanstalk storage bucket.
Follow these steps:
1. Select the Elastic Beanstalk storage bucket.
2. Choose Permissions.
3. Delete the bucket policy.
4. Save.
If your bucket is empty, you can now delete the bucket.
Sometimes after attempting to delete a bucket, it's not actually deleted, but the permissions are lost.
In my case, I went to the Permissions tab, re-granted permissions to myself, and was then able to remove it.
I had the same issue, and there was no policy, so I added permission for the email I was logged in with and saved. After granting myself permission I was able to delete the bucket. I also had another bucket that did have a policy; I deleted the policy and was able to delete that bucket as well.
Using the AWS CLI:
# delete all object versions from the bucket
aws s3api delete-objects --bucket nameOfYourBucket --delete "$(aws s3api list-object-versions --bucket nameOfYourBucket --query='{Objects: Versions[].{Key:Key,VersionId:VersionId}}')"
# delete all delete markers from the bucket
aws s3api delete-objects --bucket nameOfYourBucket --delete "$(aws s3api list-object-versions --bucket nameOfYourBucket --query='{Objects: DeleteMarkers[].{Key:Key,VersionId:VersionId}}')"
And then you can delete the bucket.
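That final step, using the same placeholder bucket name as above:
# delete the now-empty bucket
aws s3api delete-bucket --bucket nameOfYourBucket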
I made the S3 bucket permissions public and gave access to everyone. Then I was able to delete the bucket from the AWS Console, which is what I was using to perform the deletion.
I had the same problem; I tried all the above solutions and none worked for me, so I figured out another way.
My bucket was used by Elastic Beanstalk, and whenever I deleted the bucket, Elastic Beanstalk created one automatically. I then deleted the Elastic Beanstalk service and tried to delete the bucket again, but it didn't work this time either: the bucket was empty but still would not delete.
I tried to change permissions, but the bucket was still there.
Finally I deleted the bucket policy, came back, and deleted the bucket, and it was gone.
Problem solved.
I tried many of the solutions mentioned. The only thing that worked for me was deleting it through Cyberduck (I neither work for nor am promoting Cyberduck; I genuinely used it and it worked). Here are the steps of what I did:
1. Download and install Cyberduck.
2. Click on Open Connection.
3. Select Amazon S3 from the dropdown (the default is FTP).
4. Enter your access key ID and secret access key (if you don't have one, you need to create one through IAM on AWS).
5. You will see a list of your S3 buckets. Select the file, folder, or bucket you want to delete, right-click, and delete. Even files with 0 KB show up here and can be deleted.
Hope this helps!

Get ARN of S3 Bucket with aws cli

Is it possible to get the ARN of an S3 bucket via the AWS command line?
I have looked through the documentation for aws s3api ... and aws s3 ... and have not found a way to do this.
It's always arn:PARTITION:s3:::NAME-OF-YOUR-BUCKET. If you know the name of the bucket and in which partition it's located, you know the ARN. No need to 'get' it from anywhere.
The PARTITION will be aws, aws-us-gov, or aws-cn depending on whether you're in general AWS, GovCloud, or China respectively.
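If you want something scriptable anyway, here is a minimal sketch (assuming the standard aws partition) that prints the ARN of every bucket in the account:
# list bucket names and construct their ARNs
for bucket in $(aws s3api list-buckets --query "Buckets[].Name" --output text); do
  echo "arn:aws:s3:::${bucket}"
done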
You can also get your S3 bucket ARN from the S3 Management Console: select the bucket using the tick mark, which pops up a sidebar with an option to copy your S3 bucket ARN.
AWS articles spell out the ARN format but never say where to go to see it. Highlighting my S3 bucket and seeing that "Copy Bucket ARN" option kept me sane for a few more hours.

How to know whether versioning is enabled on an S3 bucket in command line?

We have multiple S3 buckets, some with versioning enabled and some with it disabled. How can I tell which buckets have versioning enabled?
Is it possible to disable S3 versioning on a sub-object (sub-folder) when the S3 bucket (main folder) has versioning enabled?
The following command retrieves the versioning configuration for a bucket named my-bucket:
aws s3api get-bucket-versioning --bucket my-bucket
If versioning is enabled on the bucket, you will see the output below:
{
"Status": "Enabled"
}
If you want to check whether every bucket has versioning enabled, you can write a small shell script to list the buckets and check each one (see the sketch below). For more information visit http://docs.aws.amazon.com/cli/latest/reference/s3api/get-bucket-versioning.html
If you don't get anything back, versioning isn't enabled for that particular bucket. You will probably want to echo the bucket name in your loop so you can see which ones have it enabled vs. not.
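A minimal sketch of such a loop; in text output, get-bucket-versioning prints "None" for buckets where versioning was never enabled:
# report the versioning status of every bucket in the account
for bucket in $(aws s3api list-buckets --query "Buckets[].Name" --output text); do
  status=$(aws s3api get-bucket-versioning --bucket "$bucket" --query "Status" --output text)
  echo "$bucket: $status"
done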