As a DevOps guy I want to use the same template to provision both Dev and Prod stacks, where Dev stacks should not have any DeletionPolicy but Prod stacks should use one.
At first sight CFT seems to offer decent tooling for this, but there is no way to parameterize the S3 DeletionPolicy (that I've been able to locate, at least)...
Here are some threads I dug up:
https://forums.aws.amazon.com/message.jspa?messageID=560586
https://www.unixdaemon.net/cloud/cloudformation-annoyance-deletion-policy-parameters/
The suggested workaround from AWS was to make the whole resource conditional, which leads to duplicating the resource into "Deletable" and "Undeletable" versions, with every depending resource having to handle that condition...
This seems wonky and bloated. Is there a way to parameterize this, or a better methodology to accomplish my end goal?
Doesn't seem like there's an option in CFT other than resource duplication.
What you can do is create a Lambda function with a Python script that sets up the S3 deletion policy. That Lambda function can be triggered through SNS during CloudFormation stack creation. How this can be configured is described here:
Is it possible to trigger a lambda on creation from CloudFormation template
But in your particular case I'd go with resource duplication in the same CFT.
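For reference, the duplication workaround looks roughly like this. A minimal sketch, assuming an Environment parameter; resource and output names are illustrative:

Parameters:
  Environment:
    Type: String
    AllowedValues: [dev, prod]

Conditions:
  IsProd: !Equals [!Ref Environment, prod]
  IsDev: !Equals [!Ref Environment, dev]

Resources:
  # Prod variant: kept when the stack is deleted
  RetainedBucket:
    Type: AWS::S3::Bucket
    Condition: IsProd
    DeletionPolicy: Retain

  # Dev variant: deleted with the stack (Delete is the default anyway)
  DeletableBucket:
    Type: AWS::S3::Bucket
    Condition: IsDev
    DeletionPolicy: Delete

Outputs:
  BucketName:
    Value: !If [IsProd, !Ref RetainedBucket, !Ref DeletableBucket]

Every resource that references the bucket then has to go through a similar Fn::If, which is exactly the bloat the question complains about.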
TLDR: Is there a way to -- using a single CloudFormation template -- deploy a Lambda function with code in S3 or ECR?
Say I have two Lambda functions.
Function A has code in an S3 bucket and relies on a Lambda Layer
Function B is a containerized function with the image in an ECR repository
Here's my deployment setup for the two functions:
function-a/
  s3-bucket.template        // CFN stack template for the S3 bucket
  lambda-function.template  // CFN stack template for the Lambda function
  deploy.sh                 // Script that creates the S3 bucket stack,
                            // builds/uploads the code,
                            // and creates the Lambda function stack
function-b/
  ecr.template              // CFN stack template for the ECR repository
  lambda-function.template  // CFN stack template for the Lambda function
  deploy.sh                 // Script that creates the ECR stack,
                            // builds/uploads the Docker image,
                            // and creates the Lambda function stack
Results: 4 CloudFormation stacks, 1 S3 bucket, 1 ECR repository, 2 Lambda functions
I find this amount of configuration setup for these two functions needlessly complex.
I understand that buckets and registries need to exist. But I don't want to explicitly define/deploy/manage them using extra build steps.
What else I looked at: I checked AWS SAM -- but SAM also doesn't absolve me from managing the code deployment myself. I used AWS CDK which actually abstracts this away. But for certain reasons I don't want to use CDK here atm. I do not want to use the Serverless framework.
I'm disappointed that most of the examples in the CloudFormation and SAM documentation just end up creating buckets and registries manually. That doesn't seem like a scalable way to handle many environments. This isn't Infrastructure-as-Code.
Is there a simpler way?
The S3 bucket and ECR would be reused for future functionality. So I think of it as two shared resources (S3 code bucket and ECR) and then two new resources (the new Lambda functions).
Most likely you'll have a stack of shared items: things that are used by everything but don't structurally change much. Then you'll have another stack of application functions, which will likely change more often. Separating these two kinds of resources is a good idea.
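To make that concrete, a minimal sketch of the split using cross-stack exports; the stack, export, and resource names are illustrative, and the execution role is assumed to exist:

# shared-resources.template -- deployed once per account/region
Resources:
  CodeBucket:
    Type: AWS::S3::Bucket       # no BucketName set, so CFN generates a unique one
  ImageRepo:
    Type: AWS::ECR::Repository

Outputs:
  CodeBucketName:
    Value: !Ref CodeBucket
    Export: {Name: shared-code-bucket}
  ImageRepoUri:
    Value: !GetAtt ImageRepo.RepositoryUri
    Export: {Name: shared-image-repo-uri}

# functions.template -- changes often, imports the shared resources
Parameters:
  FunctionRoleArn:
    Type: String                # execution role ARN, assumed to exist

Resources:
  FunctionA:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      Role: !Ref FunctionRoleArn
      Code:
        S3Bucket: !ImportValue shared-code-bucket
        S3Key: function-a/latest.zip   # uploaded by a build step before deploy

The build step still has to upload the zip/image, but the bucket and registry are created exactly once and never touched by the per-function deploys.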
I've made a CloudFormation template for a CodePipeline pipeline and deployed it as a stack. I'd like to add an action to this existing pipeline via another CloudFormation stack.
From the documentation I can only see pipeline resources which would allow me to create a whole new stack, not edit an existing one by providing an ARN or something similar. There are also no granular resources that provide support for CodePipeline functionality such as actions. See URL below:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-codepipeline-pipeline.html
Does anyone know how I can achieve this? By the looks of it I'd say I have to update the template for the pipeline, adding the new action. Assuming this is the only way, how could I achieve this from another CloudFormation stack?
So a template would be configured to add a new action in the pipeline template, and then trigger an update of the pipeline stack. I'm guessing I'd have to use a CloudFormation macro and keep the pipeline template stored in S3. I'd then take the template out of S3, add the action, save the change and then what? I've also considered how I might use nested stacks or the Import macro.
Thanks for any help!
@Marcin inspired me to this solution. Thanks :)
Essentially I did this:
First I created a "Change" pipeline that takes the stack template modified during the build stage (the one I originally wanted to deploy across multiple stacks in a deploy action) and writes it out to a path within an S3 bucket.
Second I created a "Deploy" pipeline whose source is the S3 path pointing to the output of the "Change" pipeline. This pipeline contains a deploy action that uses the output template as its SourceArtifact. This is essentially the deploy action I originally wanted in the "Change" pipeline.
I have now created a CFN template for the "Deploy" pipeline, allowing me to create any number of "Deploy" pipelines pointing to different stacks. When the "Change" pipeline is triggered, its output triggers all the "Deploy" pipelines. My approval and testing process goes into the "Change" pipeline to avoid spam, and I can roll back without problems.
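A minimal sketch of what such a "Deploy" pipeline template could look like. The role ARNs, bucket names, and keys are illustrative parameters, and the "Change" pipeline is assumed to write a zip containing template.yaml (S3 source actions expect a zipped object in a versioned bucket):

Parameters:
  PipelineRoleArn:
    Type: String        # service role for CodePipeline, assumed to exist
  CfnRoleArn:
    Type: String        # role CloudFormation assumes for the deploy action
  ArtifactBucket:
    Type: String        # pipeline artifact store
  TemplateBucket:
    Type: String        # versioned bucket the "Change" pipeline writes to
  TemplateKey:
    Type: String        # key of the zipped template from the "Change" pipeline
  TargetStackName:
    Type: String        # stack this "Deploy" pipeline manages

Resources:
  DeployPipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !Ref PipelineRoleArn
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket
      Stages:
        - Name: Source
          Actions:
            - Name: TemplateFromChangePipeline
              ActionTypeId: {Category: Source, Owner: AWS, Provider: S3, Version: "1"}
              Configuration:
                S3Bucket: !Ref TemplateBucket
                S3ObjectKey: !Ref TemplateKey
              OutputArtifacts:
                - Name: SourceArtifact
        - Name: Deploy
          Actions:
            - Name: DeployStack
              ActionTypeId: {Category: Deploy, Owner: AWS, Provider: CloudFormation, Version: "1"}
              Configuration:
                ActionMode: CREATE_UPDATE
                StackName: !Ref TargetStackName
                RoleArn: !Ref CfnRoleArn
                Capabilities: CAPABILITY_IAM
                TemplatePath: SourceArtifact::template.yaml
              InputArtifacts:
                - Name: SourceArtifact

Instantiating this template once per target stack gives the fan-out described above.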
I have to create multiple IAM users from a single CloudFormation stack at once.
Since CloudFormation doesn't support loops, I have created a CodePipeline pipeline which deploys a CloudFormation template stored in AWS CodeCommit.
Can I use the parameter override feature of CodePipeline to create multiple users by giving the parameter as a list, like:
{
"Username":["Bob","Alice","John"]
}
You're going to need an action between the CodeCommit and CloudFormation actions to generate a template that includes each IAM user resource (unless you plan to commit the expanded CloudFormation template). CodeBuild is probably your best bet to run some command that generates the CloudFormation template.
You might find CDK (https://github.com/awslabs/aws-cdk/) interesting for a use case like this. It will let you describe IAM users in a loop and then synthesize a CloudFormation template. At the time of writing this answer it's in preview, so don't rely on it for production.
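Either way, the expansion step just has to emit one AWS::IAM::User resource per name. For the parameter list above, the generated template would look something like this (logical IDs are illustrative):

Resources:
  UserBob:
    Type: AWS::IAM::User
    Properties:
      UserName: Bob
  UserAlice:
    Type: AWS::IAM::User
    Properties:
      UserName: Alice
  UserJohn:
    Type: AWS::IAM::User
    Properties:
      UserName: John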
You could, but if you don't leave the pre-existing ones in the list, I believe it will drop the previous users. Alternatively, you could use a custom resource tied to a Lambda function; your Lambda function could then choose "not" to drop the previous resources.
Let's say I have a CloudFormation stack running which creates and deploys a Lambda function. In the AWS Console, if I connect my Lambda function to an API in API Gateway, will my CloudFormation template be updated immediately if the Lambda function successfully integrates with the API?
It's one-way traffic from CloudFormation to the resources.
Meaning: if you modify your CloudFormation template and update the stack, then the resources that were created by CloudFormation get modified/updated. The other way around is not true: if you modify your resources directly, the CloudFormation template does not get updated.
Moreover, as a good practice, you should avoid modifying the resources directly, because you may end up breaking CloudFormation's update-stack functionality for that stack.
I've been working with CloudFormation YAML for a while and have found it to be comprehensive - until now. I'm struggling to use SAM/CloudFormation to create a Lambda function that is triggered whenever an object is added to an existing S3 bucket.
All of the examples I've seen thus far seem to require that you create the bucket in the same CloudFormation script in which you create the Lambda function. This doesn't work for me, because we have a design goal to be able to use CloudFormation to redeploy our entire stack to different regions or AWS accounts and quickly stand up our application. S3 bucket names must be globally unique, so if I create the bucket in CloudFormation, the script will break when I try to deploy it to a different region/account. I could probably get around this by creating buckets with the account name/region in the name, but that's just not desirable from a bucket-sprawl perspective.
So, does anyone have a solution for creating a Lambda function in CloudFormation that is triggered by objects being written to an existing S3 bucket?
Thanks!
This is impossible, according to the SAM team. This is something which the underlying CloudFormation service can't do.
There is a possible workaround, if you implement a Custom resource which would trigger a separate Lambda function to modify the existing bucket and link it to the Lambda function that you want to deploy.
As "implement a Custom Resource" isn't very specific: Here is an AWS github repo with scaffold code to help write it, and then you declare something like the following in your template (where LambdaToBucket) is the custom function you wrote. I've found that you need to configure two things in that function: one is a bucket notification configuration on the bucket (saying tell Lambda about changes), the other is a Lambda Permission on the function (saying allow invocations from S3).
Resources:
  JoinLambdaToBucket:
    Type: Custom::JoinLambdaToExistingBucket
    Properties:
      ServiceToken: !GetAtt LambdaToBucket.Arn
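Expanding on that, here is a minimal sketch of the backing function with those two calls, passing the bucket and target function in as extra properties. The parameter names are illustrative, the helper role is assumed to exist, and a production version would also handle Update/Delete properly and merge any existing notification configuration:

Parameters:
  ExistingBucketName:
    Type: String        # the pre-existing bucket
  TargetFunctionArn:
    Type: String        # the Lambda function S3 should invoke
  HelperRoleArn:
    Type: String        # needs s3:PutBucketNotification and lambda:AddPermission

Resources:
  LambdaToBucket:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: python3.12
      Handler: index.handler
      Timeout: 30
      Role: !Ref HelperRoleArn
      Code:
        ZipFile: |
          import boto3, cfnresponse

          def handler(event, context):
              try:
                  if event['RequestType'] == 'Create':
                      p = event['ResourceProperties']
                      # 1. Lambda Permission: allow invocations from S3
                      boto3.client('lambda').add_permission(
                          FunctionName=p['FunctionArn'], StatementId='s3-invoke',
                          Action='lambda:InvokeFunction', Principal='s3.amazonaws.com',
                          SourceArn='arn:aws:s3:::' + p['Bucket'])
                      # 2. Notification configuration: tell S3 to invoke the
                      #    function on new objects (replaces any existing config!)
                      boto3.client('s3').put_bucket_notification_configuration(
                          Bucket=p['Bucket'],
                          NotificationConfiguration={'LambdaFunctionConfigurations': [{
                              'LambdaFunctionArn': p['FunctionArn'],
                              'Events': ['s3:ObjectCreated:*']}]})
                  # Update/Delete are no-ops in this sketch
                  cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
              except Exception as e:
                  cfnresponse.send(event, context, cfnresponse.FAILED, {'Error': str(e)})

  JoinLambdaToBucket:
    Type: Custom::JoinLambdaToExistingBucket
    Properties:
      ServiceToken: !GetAtt LambdaToBucket.Arn
      Bucket: !Ref ExistingBucketName
      FunctionArn: !Ref TargetFunctionArn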