I have to create multiple IAM users from a single CloudFormation stack at once.
Since CloudFormation doesn't support loops, I have created a CodePipeline pipeline that deploys a CloudFormation template stored in AWS CodeCommit.
Can I use the parameter override feature of CodePipeline to create multiple users by giving the parameter as a list, like this:
{
"Username":["Bob","Alice","John"]
}
You're going to need an action between the CodeCommit and CloudFormation actions to generate a template that includes each IAM user resource (unless you plan to commit the expanded CloudFormation template). CodeBuild is probably your best bet to run some command that generates the CloudFormation template.
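For illustration, here's a minimal sketch of the kind of expanded template such a step might emit, using the user names from the question:

Resources:
  UserBob:
    Type: AWS::IAM::User
    Properties:
      UserName: Bob
  UserAlice:
    Type: AWS::IAM::User
    Properties:
      UserName: Alice
  UserJohn:
    Type: AWS::IAM::User
    Properties:
      UserName: John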
You might find CDK (https://github.com/awslabs/aws-cdk/) interesting for a use case like this. It will let you describe IAM users in a loop and then synthesize a CloudFormation template. At the time of writing this answer it's in preview, so don't rely on it for production.
You should be able to, but if you don't leave the pre-existing users in the list, I believe it will drop the previous ones. You could use a custom resource tied to a Lambda function; your Lambda function could then choose "not" to drop the previous resources.
I'm working with a CloudFormation template that defines a lot of parameters for static values outside the scope of the template.
For example, the template creates some EC2 instances and has a parameter for each VPC subnet. If this were Terraform, I would just remove all of these parameters and use data sources to fetch the information.
Is it possible to do that with CloudFormation?
Note that I'm not talking about referencing another resource created within the same template, but about a resource that already exists in the account and could have been created by different means (manually, Terraform, CloudFormation, whatever...).
No, CloudFormation does not have any native ability to look up existing resources. You can, however, achieve this using a CloudFormation macro.
A CloudFormation macro is backed by a Lambda function, which you can implement with whatever logic you need (e.g. using boto3) so that it returns the value you're after. You can even pass parameters to it.
Once the macro has been created, you can then consume it in your existing template.
You can find a full example on how to implement a macro, and on how to consume it, here: https://stackoverflow.com/a/70475459/3390419
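As a rough illustration of the consuming side, a snippet-level macro invocation could look something like this (the macro name SubnetLookup and its parameter are hypothetical):

Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      # The macro's Lambda runs at change-set creation time and returns the looked-up value
      SubnetId:
        Fn::Transform:
          Name: SubnetLookup          # hypothetical macro name
          Parameters:
            VpcName: my-existing-vpc  # hypothetical parameter forwarded to the macro's Lambda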
I've made a CodePipeline pipeline CloudFormation template and deployed it as a stack. I'd like to add an action to this existing pipeline via another CloudFormation stack.
From the documentation I can only see pipeline resources, which would allow me to create a whole new pipeline, not edit an existing one by providing an ARN or something similar. There are also no granular resources that provide support for CodePipeline functionality such as actions. See the URL below:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-codepipeline-pipeline.html
Does anyone know how I can achieve this? By the looks of it, I'd say I have to update the template for the pipeline, adding the new action. Assuming this is the only way, how could I achieve this from another CloudFormation stack?
So a template would be configured to add a new action to the pipeline template and then trigger an update of the pipeline stack. I'm guessing I'd have to use a CloudFormation macro and keep the pipeline template stored in S3. I'd then pull the template out of S3, add the action, save the change, and then what? I've also considered how I might use nested stacks or the Import macro.
Thanks for any help!
@Marcin inspired me to this solution. Thanks :)
Essentially I did this:
First, I created a "Change" pipeline that takes the stack template modified during the build stage (the one I originally wanted to deploy across multiple stacks in a deploy action) and writes it out to a path within an S3 bucket.
Second, I created a "Deploy" pipeline whose S3 source points to the output of the "Change" pipeline. This pipeline contains a deploy action that uses the outputted template as its SourceArtifact. This is essentially the deploy action I originally wanted in the "Change" pipeline.
I have now created a CFN template for the "Deploy" pipeline, allowing me to create any number of "Deploy" pipelines pointing at different stacks. When the "Change" pipeline is triggered, its output triggers all the "Deploy" pipelines. My approval and testing process goes into the "Change" pipeline to avoid spam, and I can roll back without problems.
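For anyone after a starting point, here's a rough sketch of what a "Deploy" pipeline's template could look like; the role, bucket, key, and stack names are all hypothetical:

Resources:
  DeployPipeline:
    Type: AWS::CodePipeline::Pipeline
    Properties:
      RoleArn: !GetAtt PipelineRole.Arn  # hypothetical pipeline service role
      ArtifactStore:
        Type: S3
        Location: !Ref ArtifactBucket    # hypothetical artifact bucket
      Stages:
        - Name: Source
          Actions:
            - Name: TemplateFromChangePipeline
              ActionTypeId:
                Category: Source
                Owner: AWS
                Provider: S3
                Version: "1"
              Configuration:
                S3Bucket: change-pipeline-output  # hypothetical bucket the "Change" pipeline writes to
                S3ObjectKey: templates/stack.zip  # hypothetical output key
              OutputArtifacts:
                - Name: SourceArtifact
        - Name: Deploy
          Actions:
            - Name: DeployStack
              ActionTypeId:
                Category: Deploy
                Owner: AWS
                Provider: CloudFormation
                Version: "1"
              Configuration:
                ActionMode: CREATE_UPDATE
                StackName: my-target-stack                # hypothetical target stack
                RoleArn: !GetAtt CfnDeployRole.Arn        # hypothetical CloudFormation role
                TemplatePath: SourceArtifact::stack.yaml  # template file inside the source artifact
              InputArtifacts:
                - Name: SourceArtifact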
As a DevOps guy, I want to use the same template to provision both Dev and Prod stacks, where Dev stacks should not have any DeletionPolicy but Prod stacks should use a DeletionPolicy.
So, at first sight CFT gives OK tooling for this, but there is no way to parameterize the S3 DeletionPolicy (that I've been able to locate, at least)...
Here are some threads I dug up:
https://forums.aws.amazon.com/message.jspa?messageID=560586
https://www.unixdaemon.net/cloud/cloudformation-annoyance-deletion-policy-parameters/
The suggested workaround from AWS was to make the whole resource conditional, which means duplicating the resource to create "Deletable" and "Undeletable" versions of it, and all the depending resources have to handle that condition...
This seems wonky and bloated. Is there a way to parameterize this, or a better methodology to accomplish my end goal?
There doesn't seem to be an option in CFT other than resource duplication.
What you can do is create a Lambda function with a Python script that would set up the S3 deletion policy. That Lambda function can be triggered through SNS during CloudFormation stack creation. How this can be configured is described here:
Is it possible to trigger a lambda on creation from CloudFormation template
But in your particular case I'd go with resource duplication in the same CFT.
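For reference, here is the duplication workaround from the question sketched out; the Environment parameter name is hypothetical:

Parameters:
  Environment:
    Type: String
    AllowedValues: [dev, prod]

Conditions:
  IsProd: !Equals [!Ref Environment, prod]
  IsDev: !Not [!Condition IsProd]

Resources:
  # Exactly one of these two buckets is created, depending on the environment
  RetainedBucket:
    Type: AWS::S3::Bucket
    Condition: IsProd
    DeletionPolicy: Retain
  DeletableBucket:
    Type: AWS::S3::Bucket
    Condition: IsDev
    DeletionPolicy: Delete

Depending resources then have to pick the right one with something like !If [IsProd, !Ref RetainedBucket, !Ref DeletableBucket], which is exactly the bloat the question complains about.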
I am using CodePipeline to deploy my SAM (Lambda, etc.) application, referencing https://docs.aws.amazon.com/lambda/latest/dg/build-pipeline.html.
The "issue" now is my CloudFormation has some parameters inside and CodePipeline requires that I set these. I could do so via parameter overrides
But is this the correct way? I actually only want it set once at the start. And I'd rather have users set it in CloudFormation and CodePipeline should follow those values.
This stack is already created, why isit that CodePipeline complains I need them set?
The input parameters are required by CloudFormation to perform the update.
Template configuration is the recommended way to specify the input parameters. You could create a template configuration file of input parameters for the customers to use.
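For example, a template configuration is a JSON file stored alongside the template and referenced by the pipeline's CloudFormation action via its TemplateConfiguration setting (e.g. SourceArtifact::config.json); the parameter names here are hypothetical:

{
  "Parameters": {
    "InstanceType": "t3.micro",
    "Stage": "dev"
  }
}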
A possible solution is to create a custom Lambda function that is invoked from CodePipeline using an Invoke action.
As a parameter to that Lambda you would pass the CloudFormation stack name. The Lambda would then load the parameters from the existing stack (using the appropriate AWS SDK) and produce an output artifact from them. That artifact would then be used as an input to the CloudFormation deployment.
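A sketch of what such an Invoke action could look like in the pipeline definition; the function and stack names are hypothetical:

- Name: LoadStackParameters
  ActionTypeId:
    Category: Invoke
    Owner: AWS
    Provider: Lambda
    Version: "1"
  Configuration:
    FunctionName: load-cfn-parameters  # hypothetical Lambda that calls DescribeStacks
    UserParameters: my-existing-stack  # stack name handed to the Lambda
  OutputArtifacts:
    - Name: StackParameters            # artifact fed into the CloudFormation deploy action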
Another solution is to create a CodeBuild project that does the same thing.
It's a bit complex, but it seems that CodePipeline unfortunately always needs the full set of parameters.
I've been working with CloudFormation YAML for a while and have found it to be comprehensive, until now. I'm struggling to use SAM/CloudFormation to create a Lambda function that is triggered whenever an object is added to an existing S3 bucket.
All of the examples I've seen so far seem to require that you create the bucket in the same CloudFormation script that creates the Lambda function. This doesn't work for me, because we have a design goal of being able to use CloudFormation to redeploy our entire stack to different regions or AWS accounts and quickly stand up our application. S3 bucket names must be globally unique, so if I create the bucket in CloudFormation, the script will break when I try to deploy it to a different region/account. I could probably get around this by putting the account name/region in the bucket name, but that's just not desirable from a bucket-sprawl perspective.
So, does anyone have a solution for creating a Lambda function in CloudFormation that is triggered by objects being written to an existing S3 bucket?
Thanks!
This is impossible, according to the SAM team. It's something the underlying CloudFormation service can't do.
There is a possible workaround: implement a custom resource that triggers a separate Lambda function to modify the existing bucket and link it to the Lambda function you want to deploy.
As "implement a Custom Resource" isn't very specific: Here is an AWS github repo with scaffold code to help write it, and then you declare something like the following in your template (where LambdaToBucket) is the custom function you wrote. I've found that you need to configure two things in that function: one is a bucket notification configuration on the bucket (saying tell Lambda about changes), the other is a Lambda Permission on the function (saying allow invocations from S3).
Resources:
  JoinLambdaToBucket:
    # Custom resource that wires the existing bucket to the Lambda function
    Type: Custom::JoinLambdaToExistingBucket
    Properties:
      # ARN of the custom-resource Lambda that performs the wiring
      ServiceToken: !GetAtt LambdaToBucket.Arn
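Note that the Lambda permission half can alternatively be declared directly under the same Resources section with a standard resource; a minimal sketch, assuming TargetFunction is the function S3 should trigger and the bucket name is hypothetical:

  S3InvokePermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: lambda:InvokeFunction
      FunctionName: !Ref TargetFunction            # the Lambda you want S3 to invoke
      Principal: s3.amazonaws.com
      SourceArn: arn:aws:s3:::my-existing-bucket   # hypothetical existing bucket

The bucket notification configuration itself still has to be applied by the custom resource, since the bucket isn't managed by this stack.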