We have already created some infrastructure manually and with Terraform, including some S3 buckets. In the future I would like to use pure CloudFormation to define the infrastructure as code.
So I created a CloudFormation YAML template which references an existing bucket:
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  TheBucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: my-existing-bucket-name
When I try to apply it, execution fails with the following CloudFormation stack event:
The following resource(s) failed to update: [TheBucket].
12:33:47 UTC+0200 UPDATE_FAILED AWS::S3::Bucket TheBucket
my-existing-bucket-name already exists
How can I start managing existing resources with CloudFormation without recreating them? Or is it impossible by design?
You will need to create a new bucket and sync the data from the old bucket to the new one. I have not seen a way to modify an existing S3 bucket with CloudFormation.
The Resources section of a CloudFormation template defines which resources should be created by CloudFormation. Try referring to the existing resources by defining them as parameters instead.
You should be able to import it by using the "Import resources into stack" option:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/resource-import-existing-stack.html
As the documentation explains, you should add a "DeletionPolicy": "Retain" attribute to the already existing resources in your stack.
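If you prefer the CLI over the console, the same import can be sketched roughly like this (the stack name and change-set name are placeholders, and the template must already carry DeletionPolicy: Retain on TheBucket; this requires AWS credentials to run):

```shell
# Create an IMPORT-type change set that maps the existing bucket
# onto the TheBucket logical ID in the template.
aws cloudformation create-change-set \
  --stack-name my-stack \
  --change-set-name import-bucket \
  --change-set-type IMPORT \
  --resources-to-import '[{
      "ResourceType": "AWS::S3::Bucket",
      "LogicalResourceId": "TheBucket",
      "ResourceIdentifier": { "BucketName": "my-existing-bucket-name" }
    }]' \
  --template-body file://template.yaml

# Review the change set, then execute it to take over the bucket.
aws cloudformation execute-change-set \
  --stack-name my-stack \
  --change-set-name import-bucket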
Related
I created and deployed an S3 resource (bucket) using CloudFormation.
After that I deployed a version without that resource.
Then I deployed a version with the resource again.
Since the bucket already exists, it gives me an error that it cannot deploy.
This has happened to me before; in the past I deleted the resource and deployed again.
I'm looking for a way to reuse the resource for future deployments. It is the exact same resource; this is the YAML:
Bucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: !Sub "myBucketName"
Is there anything I can add to the resource (a policy, a unique ID, anything) so that I can use the existing resource?
Thanks!
To "use the existing resource" in CFN, you have to import it. Also, it's bad practice to keep modifying resources created by CFN outside of CFN. This leads to drift and a number of issues, one of which you are experiencing.
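For the import to be accepted, CloudFormation also requires an explicit DeletionPolicy on every resource being imported; applied to your snippet, that would look something like:

```yaml
Bucket:
  Type: AWS::S3::Bucket
  DeletionPolicy: Retain   # required on resources brought in via "Import resources into stack"
  Properties:
    BucketName: !Sub "myBucketName"
```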
I am trying to create an S3 bucket using the Serverless Framework, but when I deploy, it creates two buckets: one with the name I have mentioned in the serverless.yml file, and another bucket.
serverless.yml
service: aws-file-upload-tos3
provider:
  name: aws
  runtime: nodejs12.x
  stage: dev
  region: us-east-2
  lambdaHashingVersion: 20201221
custom:
  fileUploadBucketName: ${self:service}-${self:provider.stage}-bucket
resources:
  Resources:
    FileBucket:
      Type: AWS::S3::Bucket
      Properties:
        BucketName: ${self:custom.fileUploadBucketName}
        AccessControl: PublicRead
Two buckets are created (screenshot omitted). Why is it creating two buckets like this?
By default, the Serverless Framework creates a bucket with a generated name like <service name>-serverlessdeploymentbuck-1x6jug5lzfnl7 to store your service's stack state.
Each successive version of your serverless app is bundled and uploaded by sls to the deployment bucket, and deployed from there.
I think you have some control over how sls does this if you use the serverless-deployment-bucket plugin.
By default, the Serverless Framework generates a number of artifacts on your machine in order to deploy what you have configured in your serverless.yml. It then uses a service inside AWS called CloudFormation to actually create the resources you configured, like your S3 bucket. For the deployment to proceed without interruption, the framework needs to upload the artifacts it generated to your AWS account, and the best place to do that is S3.
So the Serverless Framework will always (by default) create its own S3 bucket, entirely unrelated to the one you configured, as a location to store the files it generated, then point CloudFormation at it to build the resources you configured.
While you have some control over this deployment bucket, there always needs to be one, and it is completely unrelated to the bucket you configured.
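If you want the deployment bucket to at least have a predictable name, the Serverless Framework lets you point deployments at a bucket of your own via provider.deploymentBucket (the name below is a placeholder and must be globally unique):

```yaml
provider:
  name: aws
  deploymentBucket:
    name: my-service-deployment-artifacts   # placeholder; pick a globally unique name
```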
I've created a bucket without a DeletionPolicy and want to add one. I updated our configuration (in our serverless.yml), and we now see DeletionPolicy: Retain in the CloudFormation template.
However, based on this blurb:
One quirk in the update template workflow is that DeletionPolicy
cannot be updated by itself but must accompany some other change that
"add, modify or delete properties" of an existing resource. A fun fact
about the AWS::S3::Bucket resource: it cannot be updated after
creation. Good news: this does not apply to the DeletionPolicy
attribute. Bad news: CFN won't pick up the changes unless another
property of the "immutable" S3 bucket is updated.
I've searched for quite a while, and I cannot determine how to query the S3 bucket to check whether the DeletionPolicy is actually set. I don't see where this is exposed in the AWS console, nor do I see where it can be queried in the AWS CLI. It doesn't appear to be in the Bucket Policy as far as I can tell.
How do I validate the DeletionPolicy is actually set?
I think the linked article (from 1/12/2014) is outdated. I just did the following:
Deployed a stack containing
MyBucket:
  Type: AWS::S3::Bucket
  .
  .
  .
Verified it deployed successfully, then deployed
MyBucket:
  Type: AWS::S3::Bucket
  DeletionPolicy: Retain
  .
  .
  .
Verified it deployed successfully, then deleted my stack, and verified MyBucket is still present in S3.
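Another way to validate it is to pull the live template back with `aws cloudformation get-template --stack-name <stack>` and inspect the resource attributes. A minimal sketch of the inspection step in Python (the template dict below is a stand-in for the TemplateBody that get-template returns; the logical ID matches the example above):

```python
def deletion_policy(template: dict, logical_id: str):
    """Return the DeletionPolicy of a resource in a CFN template dict, or None if unset."""
    resource = template.get("Resources", {}).get(logical_id, {})
    return resource.get("DeletionPolicy")

# Stand-in for: boto3.client("cloudformation").get_template(StackName=...)["TemplateBody"]
template = {
    "Resources": {
        "MyBucket": {
            "Type": "AWS::S3::Bucket",
            "DeletionPolicy": "Retain",
        }
    }
}

print(deletion_policy(template, "MyBucket"))  # Retain
```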
I'm attempting to achieve the following through CloudFormation.
From a stack created in an EU region, I want to create (and verify) a public certificate against Route53 in us-east-1, since it will be used with CloudFront. I'm aiming to have zero actions performed in the console or AWS CLI.
The new CloudFormation support for ACM was a little sketchy last week but seems to be working now.
Certificate
Resources:
  Certificate:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: !Sub "${Env}.domain.cloud"
      ValidationMethod: DNS
      DomainValidationOptions:
        - DomainName: !Sub "${Env}.domain.cloud"
          HostedZoneId: !Ref HostedZoneId
All I need to do is use CloudFormation to deploy this into the us-east-1 region from a stack in a different region. Everything else is ready for this.
I thought that using CodePipeline's cross-region support would be great, so I started to look into [this documentation][1]. After setting things up in my template, I met the following error message:
An error occurred while validating the artifact bucket {...} The bucket named is not located in the `us-east-1` AWS region.
To me this makes no sense, as it seems that you already need at least a couple of resources to exist in the target region for it to work. Cart-before-the-horse behavior. To test this, I created an artifact bucket in the target region by hand and things worked fine, but that requires using the CLI or the console when I'm aiming for a CloudFormation-based solution.
Note: I'm running out of time to write this, so I'll update it in a few hours. Any help before then would be great though.
Sadly, that's required for cross-region CodePipeline. From the docs:
When you create or edit a pipeline, you must have an artifact bucket in the pipeline Region and then you must have one artifact bucket per Region where you plan to execute an action.
If you want to fully automate this through CloudFormation, you either have to use a custom resource to create buckets in all the regions in advance, or look at stack sets to deploy one bucket template in multiple regions.
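For the stack-set route, the idea is one tiny template containing just the artifact bucket, deployed into every Region the pipeline touches. A rough sketch with placeholder names and account ID (it assumes the StackSets administration and execution roles are already set up):

```shell
# bucket.yaml contains only an AWS::S3::Bucket resource.
aws cloudformation create-stack-set \
  --stack-set-name pipeline-artifact-buckets \
  --template-body file://bucket.yaml

# Deploy an instance of that template into each Region the pipeline uses.
aws cloudformation create-stack-instances \
  --stack-set-name pipeline-artifact-buckets \
  --accounts 111111111111 \
  --regions eu-west-1 us-east-1
```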
P.S. Your link does not work, so I'm not sure if you're referring to the same documentation page.
I am using Terraform for most of my infrastructure, but at the same time I'm using the serverless framework to define some Lambda functions. Serverless uses CloudFormation under the hood where I need access to some ARNs for resources created by Terraform.
My idea was to create a CloudFormation stack in Terraform and export all of the values that I need, but it complains that it cannot create a stack without any resources. I don't want to define any resources in CloudFormation, only the outputs, so I thought maybe there is a way to define some dummy resource, but I couldn't find any.
Is there a way to work around this issue? If not, I'm also open to other suggestions for getting parameters passed from Terraform to CloudFormation.
You can use AWS::CloudFormation::WaitConditionHandle for this. Example:
Resources:
  NullResource:
    Type: AWS::CloudFormation::WaitConditionHandle
The Resources section is required, but you can use a resource that never gets created.
For example, a minimal template with only such a non-resource would be:
Conditions:
  Never:
    !Equals [ "A", "B" ]
Resources:
  NonResource:
    Type: Custom::NonResource
    Condition: Never
Outputs:
  MyOutput:
    Value: some-value
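As a usage note: once that stack is deployed, the Serverless Framework can read MyOutput back with its cf variable source (the stack name below is a placeholder):

```yaml
custom:
  myValue: ${cf:my-terraform-bridge-stack.MyOutput}
```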
You can create an AWS SSM parameter using Terraform and reference it in your Serverless Framework configuration. That would do the job easily.
https://www.serverless.com/blog/definitive-guide-terraform-serverless/
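A sketch of that approach, with placeholder names on both sides: Terraform writes the value into Parameter Store, and serverless.yml reads it back with the ssm variable source.

```hcl
# Terraform side: publish a value (e.g. an ARN) as an SSM parameter.
# The parameter path and referenced bucket are placeholders.
resource "aws_ssm_parameter" "bucket_arn" {
  name  = "/shared/bucket-arn"
  type  = "String"
  value = aws_s3_bucket.main.arn
}
```

```yaml
# serverless.yml side: resolve the parameter at deploy time.
custom:
  bucketArn: ${ssm:/shared/bucket-arn}
```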