How to move a bucket policy to another CloudFormation stack?

I have an S3 bucket that I want to move from one stack to another. I was able to move the bucket using the instructions provided by Amazon; however, I wasn't able to move the bucket policy attached to it. What I tried was:
Make the bucket and the policy retainable
Remove the resources from the origin stack
Import the bucket into the target stack (I cannot import the bucket policy; it's not supported)
After these actions, the bucket still has the policy, but the policy is not managed by CloudFormation. If I try to add an AWS::S3::BucketPolicy definition for the bucket to the target stack, the deployment fails with "The bucket policy already exists on bucket". I'm looking for a way to overcome this. I understand that I can simply remove the policy using the console and then the error will disappear, but in that case the resources in the bucket will be unavailable for some time.
The only solution I see is to leave it as-is for production. For the other environments (testing, development, etc.) I want CloudFormation to correctly create and remove the bucket and its policy, so the workaround is to define the bucket policy for every environment except production, as sketched below.
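A minimal sketch of that workaround in CloudFormation YAML, assuming a hypothetical EnvType parameter, condition name, and a placeholder policy statement (none of these are from the original template):

Parameters:
  EnvType:
    Type: String
    AllowedValues: [dev, test, prod]
    Default: dev

Conditions:
  # Manage the bucket policy everywhere except production,
  # where the pre-existing policy is left outside CloudFormation.
  ManageBucketPolicy: !Not [!Equals [!Ref EnvType, prod]]

Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain

  MyBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Condition: ManageBucketPolicy
    Properties:
      Bucket: !Ref MyBucket
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          # Placeholder statement; substitute the real policy.
          - Sid: DenyInsecureTransport
            Effect: Deny
            Principal: "*"
            Action: "s3:*"
            Resource:
              - !GetAtt MyBucket.Arn
              - !Sub "${MyBucket.Arn}/*"
            Condition:
              Bool:
                "aws:SecureTransport": "false"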

Related

Is it possible to copy S3 bucket content from one bucket to an S3 bucket in another account without using a bucket policy?

I want to copy S3 bucket objects to a different account, but the requirement is that I can't use a bucket policy.
Is it possible to copy content from one bucket to another without using a bucket policy?
You cannot use native S3 object replication between different accounts without using a bucket policy. As stated in the permissions documentation:
When the source and destination buckets aren't owned by the same accounts, the owner of the destination bucket must also add a bucket policy to grant the owner of the source bucket permissions to perform replication actions
You could write a custom application that uses IAM roles to replicate objects, but this will likely be quite involved as you'll need to track the state of the bucket and all of the objects written to it.
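For reference, the destination-side policy that the documentation quote is describing looks roughly like the sketch below (shown as a CloudFormation AWS::S3::BucketPolicy in YAML; the account ID, role name, and bucket name are placeholders), which is why cross-account replication cannot avoid a bucket policy:

# Applied to the destination bucket, in the destination account
DestinationReplicationPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: destination-bucket                # placeholder bucket name
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Sid: AllowReplicationFromSourceAccount
          Effect: Allow
          Principal:
            # Replication role in the source account (placeholder ARN)
            AWS: arn:aws:iam::111111111111:role/replication-role
          Action:
            - s3:ReplicateObject
            - s3:ReplicateDelete
            - s3:ReplicateTags
          Resource: arn:aws:s3:::destination-bucket/*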
Install the AWS CLI,
run aws configure and set the source bucket's credentials as the default profile, and
visit https://github.com/Shi191099/S3-Copy-old-data-without-Policy.git

CloudFormation template fails because the S3Bucket resource already exists

I have created an S3 bucket with CloudFormation; let's say the bucket name is S3Bucket.
I don't want this bucket to be deleted if I delete the stack, so I added a DeletionPolicy of Retain.
The problem is that if I run the stack again, it complains that the S3Bucket name already exists.
If a bucket already exists, it should not complain.
What can I do about this?
Please help.
I faced this in the past, and what I did to resolve it was create a common AWS CloudFormation template/stack that creates all of our common resources, which are static (treat it like a bootstrap template).
I usually put the creation of S3 buckets, VPCs, networking, databases, etc. in this template.
Then you can create other AWS CloudFormation templates/stacks for the rest of your resources, which are dynamic and change often, such as Lambdas, EC2, API Gateway, etc.
S3 bucket names are globally unique (e.g., if I have a bucket named s3-test in my AWS account, you cannot have a bucket with the same name).
The only way to use the same name is to delete the bucket, or to rework your CloudFormation template and use the new CloudFormation feature to import the existing resource:
https://aws.amazon.com/blogs/aws/new-import-existing-resources-into-a-cloudformation-stack/
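For clarity, a minimal sketch of the Retain setup being described (logical ID and bucket name are placeholders); because the bucket survives the stack deletion, recreating the stack with the same BucketName collides with the retained bucket unless it is deleted first or imported as in the link above:

Resources:
  S3Bucket:
    Type: AWS::S3::Bucket
    DeletionPolicy: Retain              # the bucket is kept when the stack is deleted
    Properties:
      BucketName: my-unique-bucket-name # placeholder; bucket names are globally unique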

Amazon S3 Folder Level Permissions Bucket policy NOT IAM

I keep seeing posts that refer to setting a policy, and while they mention S3 buckets, the policy they are often referring to is an IAM policy.
In my case I want to control access to my S3 bucket only by an actual "S3 bucket policy".
My current path is :::mybucket, which has /thing1/ and /thing2/
If I wanted a bucket policy that allows a CLI user to list and get /thing1/* but not /thing2/*, how would this be done? I've tried my policy with all kinds of conditions, paths, etc., but nothing seems to work...
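A sketch of the kind of bucket policy being asked about, shown as a CloudFormation PolicyDocument in YAML for consistency with the rest of this thread (the same statements translate directly to console JSON); the account ID and user name are placeholders. The key detail is that s3:ListBucket applies to the bucket ARN with an s3:prefix condition, while s3:GetObject applies to the object ARNs:

PolicyDocument:
  Version: "2012-10-17"
  Statement:
    - Sid: ListThing1Only
      Effect: Allow
      Principal:
        AWS: arn:aws:iam::111111111111:user/cli-user   # placeholder principal
      Action: s3:ListBucket
      Resource: arn:aws:s3:::mybucket
      Condition:
        StringLike:
          "s3:prefix": "thing1/*"
    - Sid: GetThing1Only
      Effect: Allow
      Principal:
        AWS: arn:aws:iam::111111111111:user/cli-user
      Action: s3:GetObject
      Resource: arn:aws:s3:::mybucket/thing1/*

Nothing here grants anything on thing2/*; just make sure no IAM policy attached to that user separately allows it, since same-account access is the union of the IAM and bucket policies unless there is an explicit deny.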

CloudFormation resource AWS::S3::Bucket doesn't show up in S3 console

Using my CloudFormation template, I was able to create two buckets and one bucket policy in my stack. It turns out my bucket policy had the wrong permissions, so I decided to delete the buckets and recreate them with a new template.
It doesn't look like CloudFormation has detected my deleted S3 buckets. The buckets still show up in my stack resources but are marked as "Deleted".
My stack is also marked as drifted. When I try to access the S3 buckets via the link in CloudFormation, I get "Error Data not found".
My stack has been in this state for about 16 hours. Any idea how to get CloudFormation to sync up with S3?
Your template isn't telling CloudFormation what resources to create; it's telling CloudFormation the state that you want.
It sounds like you created a stack with a template with a resource for a bucket.
You then realized a problem and deleted the bucket manually.
You then updated the stack with an updated template containing the same resource for the bucket (but with correct permissions).
When CloudFormation processed this updated template, it determined that it had already created the bucket and as a result it didn't recreate it.
You likely could have achieved your desired result without deleting the bucket by just updating the template.
Because you deleted the bucket, your stack is in a bad state. If you have the flexibility to do so, you could delete your stack and recreate it. When you delete it, it may complain about not being able to delete the bucket; you may have to retry once, and then you'll get the option to ignore it.

Copy files from an S3 bucket in one AWS account to another AWS account

There is an S3 bucket owned by a different AWS account that contains a list of files. I need to copy the files to my S3 bucket. I would like to do two things in order to accomplish this:
Add an S3 bucket event in the other account that will trigger a Lambda function to copy the files into my AWS account.
My Lambda function should be given permission (possibly through an assumed role) to copy the files.
What are the steps that I must perform in order to achieve 1 and 2?
The base requirement of copying files is straightforward:
Create an event on the source S3 bucket that triggers a Lambda function
The Lambda function copies the object to the other bucket
The complicating factor is the need for cross-account copying.
Two scenarios are possible:
Option 1 ("Pull"): Bucket in Account-A triggers Lambda in Account-B. This can be done with Resource-Based Policies for AWS Lambda (Lambda Function Policies) - AWS Lambda. You'll need to configure the trigger via the command-line, not the management console. Then, a Bucket policy on the bucket in Account-A needs to allow GetObject access by the IAM Role used by the Lambda function in Account-B.
Option 2 ("Push"): Bucket in Account-A triggers Lambda in Account-A (same account). The Bucket policy on the bucket in Account-B needs to allow PutObject access by the IAM Role used by the Lambda function in Account-A. Make sure it saves the object with an ACL of bucket-owner-full-control so that Account-B 'owns' the copied object.
If possible, I would recommend the Push option because everything is in one account (aside from the Bucket Policy).
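A rough sketch of the destination-side bucket policy for the Push option (all names and account IDs are placeholders); the condition enforces the bucket-owner-full-control ACL mentioned above:

# Applied to the bucket in Account-B
DestinationBucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: destination-bucket                    # placeholder
    PolicyDocument:
      Version: "2012-10-17"
      Statement:
        - Sid: AllowPushFromAccountA
          Effect: Allow
          Principal:
            # IAM role used by the Lambda function in Account-A (placeholder ARN)
            AWS: arn:aws:iam::111111111111:role/copy-lambda-role
          Action: s3:PutObject
          Resource: arn:aws:s3:::destination-bucket/*
          Condition:
            StringEquals:
              "s3:x-amz-acl": bucket-owner-full-control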
There is an easier way of doing it without Lambda. AWS allows you to set up replication of an S3 bucket (including cross-region and cross-account replication). When you set up replication, all new objects get copied to the replica bucket. For existing objects, use the AWS CLI to copy each object onto itself in the same bucket so that it gets replicated to the target bucket. Once all the existing objects are copied, you can turn off replication if you don't want future objects to be replicated. Here AWS does the heavy lifting for you :) https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
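A rough sketch of that replication configuration on the source bucket, in CloudFormation YAML (role ARN, account ID, and bucket names are placeholders; versioning must be enabled on both buckets for replication to work):

SourceBucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: source-bucket                     # placeholder
    VersioningConfiguration:
      Status: Enabled                             # replication requires versioning
    ReplicationConfiguration:
      Role: arn:aws:iam::111111111111:role/replication-role   # placeholder role ARN
      Rules:
        - Id: ReplicateEverything
          Status: Enabled
          Prefix: ""                              # empty prefix = all objects
          Destination:
            Bucket: arn:aws:s3:::destination-bucket
            Account: "222222222222"               # destination account for cross-account replication
            AccessControlTranslation:
              Owner: Destination                  # make the destination account own the replicas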
There are a few ways to achieve this.
You could use an SNS notification and cross-account IAM to trigger the Lambda. Read this: cross-account-s3-data-copy-using-lambda-function, which explains pretty well what you are trying to achieve.
Another approach is to deploy the Lambda and all the required resources in the account that holds the files. You would need to create an S3 notification that triggers a Lambda which copies the files to your account, or have a CloudWatch schedule (a bit like a cron job) that triggers the Lambda.
In this case the Lambda and the trigger would have to exist in the account that holds the files.
In both scenarios, the minimal IAM permissions the Lambda needs are the ability to read from and write to the S3 buckets, and to use STS to assume a role. You also need to add CloudWatch Logs permissions so the Lambda can write its logs.
The rest of the required IAM permissions will depend on the approach you take.