How do I force a CloudFormation stack to update when a parameter is updated?

I am running an AWS CloudFormation stack that takes in some parameters and launches EC2 instances along with other AWS resources. The parameters are fed into the user data of the EC2 instances, and based on those parameters, changes are made dynamically to the web application residing on each EC2 instance.
UserData:
  Fn::Base64:
    Fn::Join:
      - ""
      - - "#!/bin/bash \n"
        - "sh website-conf/website_mysql_config.sh "
        - " -c \""
        - Ref: "CompanyName"
As shown in the example above, CompanyName is one of the many parameters passed to the user data script. The problem is that when one or more of these parameters are updated, CloudFormation does not detect any change and instead fails with an error while creating the change set.
So, in order to update the stack, I have to edit the stack and make changes to the ASG so that CloudFormation 'sees' the changes and executes the stack update.
Is there a way to force CFN to update the stack when the parameters are updated?

CloudFormation will not update the stack unless there is a change in properties of the resources already created in the stack.
For example:
Suppose I have a simple template to create a database, where I need to pass two parameters:
db-name
region
Assume that I am using db-name, passing it as the value of DBInstanceIdentifier.
Also assume that I am not using the input parameter region anywhere in the creation of the stack's resources or their properties; it is more of a dummy parameter I keep for readability purposes.
I passed (TEST-DB1, us-east-1) as input parameters to the CloudFormation template and successfully created the resources.
Scenario-1:
Now suppose I update the stack (still using the existing template) and just change the input parameters to (TEST-DB2, us-east-1), i.e. changing just the db-name and not the region. CloudFormation will detect that this parameter update results in a change to the properties of the stack's running resource(s), and it will compute and display the modifications as a change set.
Scenario-2:
Suppose I make another update (still using the existing template) and just change the input parameters to (TEST-DB1, us-east-2), i.e. changing just the region and not the db-name. CloudFormation will detect that this parameter update results in NO change to the properties of the stack's running resource(s), and it will show the "Error creating change set" message.
Bottomline:
Your change to an input parameter must result in an update or replacement of some resource of the stack (or its attributes, such as security groups, ports, etc.). Only then will AWS CloudFormation display the modifications as a change set for your review. Also, the method (update or replacement) AWS CloudFormation uses depends on which property you update for a given resource type.
Your parameter CompanyName is not making any changes to the running resources of the stack, hence the "Error creating change set". You need to use it in some resource (or resource property) of the stack; then CloudFormation will detect a change set when you modify it. The same applies to any other input parameters you use.
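For illustration, a minimal sketch of that idea, assuming a hypothetical EC2 instance resource named WebServer (the AMI ID and tag key are placeholders): wiring the parameter into a resource property such as a tag means a new CompanyName value always changes a property CloudFormation can diff, so a change set is produced.
Parameters:
  CompanyName:
    Type: String
Resources:
  WebServer:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-0123456789abcdef0   # placeholder AMI
      InstanceType: t3.micro
      Tags:
        # Referencing the parameter in a real resource property means a new
        # parameter value always produces a non-empty change set.
        - Key: Company
          Value: !Ref CompanyName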

Use the AWS CLI update-stack command. If you use the AWS CLI, you can inject parameters into your stack so that any change to any of the parameters results in a stack update. I do this myself to inject the Git commit ID into UserData, so simply committing changes to the stack's JSON/YAML to Git allows stack updates. Any change to the parameters file allows a stack update, even just a comment. I reference my Git commit ID in UserData the same way you are referencing Ref: CompanyName, so when I change the Git commit ID the UserData section is updated on stack updates.
Update Stack Command
aws cloudformation update-stack --stack-name MyStack --template-body file:///Users/Documents/Git/project/cloudformation/stack.json --parameters file:///Users/Documents/Git/project/cloudformation/parameters/stack-parameters.dev.json --capabilities CAPABILITY_IAM
Process
With this approach you make your parameter changes in the parameters JSON or YAML file and check it into version control. If you use a build server, you can update your stack by checking out master and running the one-line command above. AWS CodeBuild makes this easy, so you don't need Jenkins.
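For reference, the file passed with --parameters is just a JSON list of key/value pairs; a sketch of what such a file could look like (the parameter names, including GitCommitId, are illustrative):
[
  { "ParameterKey": "CompanyName", "ParameterValue": "ExampleCo" },
  { "ParameterKey": "GitCommitId", "ParameterValue": "a1b2c3d" }
]
Bumping GitCommitId on every commit guarantees the parameters file, and therefore the UserData that references it, changes on each deploy.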

Your problem is already explained by the statement above: CloudFormation will not update the stack unless there is a change in the properties of the resources already created in the stack.
As for how to work around that, see the explanation below.
There is a way to force CloudFormation to update the stack, using AWS::CloudFormation::Init.
By using cfn-init, each instance can update itself when it detects a change made to the AWS::CloudFormation::Init metadata.
There is a concept we must understand first: the difference between UserData and Metadata, at least in the AWS::CloudFormation::Init case.
UserData: is run only once, when the instance is launched for the first time (this includes updates that require the instance to be replaced). So if you update the stack (rather than creating a new one), changing a parameter value won't change anything on the instance if the parameter is only referenced in UserData.
Metadata: can be updated at any time. To make it work, you have to make sure the daemon that detects Metadata changes is running (the daemon is called cfn-hup).
Even if you already use Metadata and AWS::CloudFormation::Init, the data is not applied immediately. As far as I know, after changing the Metadata value the instance only picks up the change when one of the following happens (a template sketch follows this list):
Reboot the instance
Run the cfn-init command again with its parameters
Wait up to about 15 minutes, because by default the daemon (cfn-hup) checks for Metadata changes once every 15 minutes
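A minimal sketch of that Metadata/cfn-hup pattern, assuming a hypothetical instance resource named WebServer (the file paths and the way CompanyName is consumed are illustrative; the instance also needs /etc/cfn/cfn-hup.conf and an initial cfn-init run from UserData):
WebServer:
  Type: AWS::EC2::Instance
  Metadata:
    AWS::CloudFormation::Init:
      config:
        files:
          # Re-rendered whenever the Metadata (and therefore CompanyName) changes
          /etc/website/company.conf:
            content: !Sub "COMPANY_NAME=${CompanyName}\n"
          # cfn-hup hook: re-run cfn-init whenever this resource's Metadata changes
          /etc/cfn/hooks.d/cfn-auto-reloader.conf:
            content: !Sub |
              [cfn-auto-reloader-hook]
              triggers=post.update
              path=Resources.WebServer.Metadata.AWS::CloudFormation::Init
              action=/opt/aws/bin/cfn-init -v --stack ${AWS::StackName} --resource WebServer --region ${AWS::Region}
        services:
          sysvinit:
            cfn-hup:
              enabled: "true"
              ensureRunning: "true"
              files:
                - /etc/cfn/cfn-hup.conf
                - /etc/cfn/hooks.d/cfn-auto-reloader.conf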

Related

Terraform handle multiple lambda functions

I have a requirement to create AWS Lambda functions dynamically based on some input parameters like name, docker image, etc.
I have been able to build this using terraform (triggered using gitlab pipelines).
Now the problem is that for every unique name I want a new Lambda function to be created/updated, i.e. if I trigger the pipeline 5 times with 5 names then there should be 5 Lambda functions; instead, what I get is the older function being destroyed and a new one being created.
How do I achieve this?
I am using Resource: aws_lambda_function
Terraform code
resource "aws_lambda_function" "executable" {
  function_name = var.RUNNER_NAME
  image_uri     = var.DOCKER_PATH
  package_type  = "Image"
  # "role" must be the ARN of an IAM role; "role.arn" on its own is not a valid
  # reference. The role name below is assumed to be defined elsewhere.
  role          = aws_iam_role.lambda_exec.arn
  architectures = ["x86_64"]
}
I think there is a misunderstanding of how Terraform works.
Terraform maps 1 resource to 1 item in state and the state file is used to manage all created resources.
The reason your function keeps getting destroyed and recreated with the new values is that you have only one resource in your Terraform configuration.
This is the correct and expected behavior from terraform.
Now, as mentioned by some people above, you could use count or for_each to add new Lambda functions without deleting the previous ones, as long as you keep track of the previously passed values (always adding the new values to the list); see the sketch after this answer.
Or, if there is no need to keep track of the state of the Lambda functions you have created, Terraform may not be the best solution for your needs. The result you are looking for can easily be implemented in Python, or even in shell with AWS CLI commands.
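A hedged sketch of the for_each approach (the variable name RUNNERS and the IAM role reference are illustrative; the map would need to accumulate every name/image pair passed so far, for example in a tfvars file kept in Git):
variable "RUNNERS" {
  # map of function name => Docker image URI
  type = map(string)
}

resource "aws_lambda_function" "executable" {
  for_each = var.RUNNERS

  function_name = each.key
  image_uri     = each.value
  package_type  = "Image"
  role          = aws_iam_role.lambda_exec.arn # assumed IAM role defined elsewhere
  architectures = ["x86_64"]
}
Each pipeline run then merges its new name/image pair into the map instead of overwriting a single variable, so the older functions stay in state and are not destroyed.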

I need a strategy for handling optional SSM Parameter Store parameters in CDK

In my stack definition I pull in a number of parameters from SSM Parameter Store...
const p1 = ssm.StringParameter.fromStringParameterAttributes( ... )
const p2 = ssm.StringParameter.fromStringParameterAttributes( ... )
I then pass them along to the relevant lambdas as environment vars...
environment: {
  PARAM_ONE: p1.stringValue,
  PARAM_TWO: p2.stringValue
}
However, I don't want all of those parameters to be mandatory. I would like the ones that exist to be passed in as env vars, and the ones that don't to just remain undefined, as my app has defaults for them anyway. However, trying to inspect the value of p1.stringValue just gives me a Token, not a value, so I can't do any logic based on its presence or absence: https://docs.aws.amazon.com/cdk/latest/guide/tokens.html
If I ask for a parameter that is not defined in SSM Parameter Store, I get an error that I can't catch or ignore when it tries to build the change set, and the deployment fails...
MyApp: creating CloudFormation changeset...
❌ MyAppStack failed: Error [ValidationError]: Unable to fetch parameters [/myapp/param1,/myapp/param2] from parameter store for this account.
So how can I deal with SSM parameters which may or may not exist at deploy time?
I assume you are only grabbing a reference to the parameter in your import, not the actual value it holds. If that is the case, then your best bet is to leverage the SDK to do this for you: a simple call using the SDK (which will run during the synth stage of a cdk deploy or cdk synth) to see if the SSM fields/groups exist. If they do, go ahead and import them.
I do something very similar with Layers: the from methods for layers require the version number, and that may change at any time. So I have a small function that gets the latest version number of a given layer using the SDK, and I can then use that to import the layer definition into my stack.
If you are trying to get the actual secret inside a Secrets Manager parameter, that is better handled outside the CDK in most scenarios, in the exact location where you need the secret, so you don't end up with the secret value in plain text somewhere.
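A sketch of that SDK-at-synth-time idea (the parameter names, env-var keys, and stack wiring are illustrative, and it assumes the AWS SDK v3 @aws-sdk/client-ssm package): resolve the values before the stack is instantiated, and only pass env vars for parameters that actually exist.
import { SSMClient, GetParameterCommand } from "@aws-sdk/client-ssm";

const ssm = new SSMClient({});

// Returns the parameter value, or undefined if the parameter does not exist.
async function tryGetParameter(name: string): Promise<string | undefined> {
  try {
    const res = await ssm.send(new GetParameterCommand({ Name: name }));
    return res.Parameter?.Value;
  } catch (err) {
    if ((err as Error).name === "ParameterNotFound") return undefined;
    throw err;
  }
}

// In an async entry point (e.g. bin/app.ts), before instantiating the stack:
async function main() {
  const environment: Record<string, string> = {};

  const p1 = await tryGetParameter("/myapp/param1");
  if (p1 !== undefined) environment.PARAM_ONE = p1;

  const p2 = await tryGetParameter("/myapp/param2");
  if (p2 !== undefined) environment.PARAM_TWO = p2;

  // new MyAppStack(app, "MyAppStack", { lambdaEnvironment: environment });
}

main().catch((err) => {
  console.error(err);
  process.exit(1);
});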

Cloudformation: The resource you requested does not exist

I have a cloudformation stack which has a Lambda function that is mapped as a trigger to an SQS queue.
What happened was that I had to delete the mapping and create it again manually because I wanted to change the batch size. Now when I try to update the mapping, CloudFormation throws an error with the message The resource you requested does not exist.
The resource mapping code looks like this:
"EventSourceMapping": {
    "Properties": {
        "BatchSize": 5,
        "Enabled": "true",
        "EventSourceArn": {
            "Fn::GetAtt": ["ProcessorQueue", "Arn"]
        },
        "FunctionName": {
            "Fn::GetAtt": ["ProcessorLambda", "Arn"]
        }
    },
    "Type": "AWS::Lambda::EventSourceMapping"
}
I know that deleting the mapping CloudFormation created initially and adding it back manually is what is causing the issue. How do I fix it? Right now I cannot push any update.
Please help
What you did is, from my perspective, a mistake. When you use CloudFormation you are not supposed to apply changes manually. You can, and maybe that's fine if you don't care about the stack once it is created. But since you are trying to update the stack, that tells me you want to keep the stack and update it over time.
To narrow down your problem, first let's make clear that the manually-created mapping is out of sync with your CloudFormation stack. So, from a CloudFormation perspective, it doesn't matter whether you keep that mapping or not. I do wonder what would happen if you keep the manually-created mapping and create a new one from CloudFormation; it may complain, since you would have duplicate mappings for the same (lambda, queue) pair. Try this:
Create a change set for your stack in which you completely remove the EventSourceMapping resource from your template. This step basically cleans up the dangling reference. Apply the change set.
Then, and this is where I think you may run into some kind of issue, add the EventSourceMapping back to your stack.
If you get errors in step 2, like "this mapping already exists", you will have to remove the manually-created mapping from the console and then try step 2 again.
You probably know now that you should not have removed the resource manually. If you change the CloudFormation template, you can update the stack without touching resources that did not change in the template. You can try to recreate the resource with the exact same physical name (https://aws.amazon.com/premiumsupport/knowledge-center/failing-stack-updates-deleted/). The other option, from the same doc, is to remove the resource from the template, update, then add it back and update again.
While the comments above are valid, I found it interesting that no one mentioned a much simpler option: using SAM commands (sam build / sam deploy). It's understandable that during development and while designing the architecture there may be flaws and situations where manual input in the console is necessary, so here is something I fall back on every time I have a similar issue.
Simply comment out the chunk of the template that is causing trouble and run sam build / sam deploy on top of it; the CloudFormation stack will recognize that the resource is no longer in the template and will delete it.
Since the resource is no longer in the architecture anyway (it was removed manually earlier), this step passes without issue and the stack updates successfully.
Then simply uncomment it, make any necessary changes, and deploy again.
Works every time.

Does ansible-playbook use a previously created changeset if one exists or does it run using the playbooks and vars?

We use ansible-playbook to create/update cloudformation stacks.
If we run the playbook with create_changeset=yes then ansible creates a changeset rather than updating the stack.
If we then run the playbook again with create_changeset=no, does it execute the changeset or does it run a normal stack update? After we run the playbook the changeset is gone but that doesn't tell me what actually happened.
I'm hoping it does not execute the changeset because I wonder what happens if I change one or more vars between creating the changeset and executing the playbook the second time. In that case the change to the vars would seem to be missed.
Verified that ansible-playbook with create_changeset=no just does a normal update. The changeset is deleted but that happens anyway when you do an update.
Verified this by changing a var value, then creating a changeset, then changing the value again and running the playbook...resulting in the last var value being used rather than the var value in the changeset.
Confirmed this by looking at CloudTrail to see what actual API calls were made.
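For reference, a sketch of the kind of task being discussed, using the amazon.aws.cloudformation module's create_changeset flag (the stack name, template path, and variables are illustrative):
- name: Create or update the stack, optionally as a change set only
  amazon.aws.cloudformation:
    stack_name: my-stack
    state: present
    template: files/stack.yml
    template_parameters:
      CompanyName: "{{ company_name }}"
    # When true, only a change set is created for review; when false, a normal
    # stack update is performed using the current playbook vars.
    create_changeset: "{{ create_changeset | default(false) | bool }}"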

Add Resources to existing CloudFormation stack

I have a CloudFormation master stack. I want to write a tool that allows me to add Lambda functions to the master stack using Boto3.
In order to add the function to the stack I need to be able to
Get Outputs from the master stack to use in the function's template.
Add the Function to the master stack.
I have only been able to get this to work with:
Build, zip, and upload the function to S3
Add the function's template to the master stack's templates. (requires editing the master stack's files)
Deploy the master stack.
I would like to be able to create the function without editing the master stack's files.
(i.e. boto3.get_stack_id -> boto3.add_resource_to_stack_by_stack_id)
Is this possible? If so, how do I do it?
No, that is not possible. When updating a stack, you always have to provide a URL for the new stack template, provide the full template body as a string, or reuse the previous template.
Source: https://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_UpdateStack.html
There is no API call that allows you to directly add a resource to a stack.
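The closest Boto3 workaround, consistent with the above, is to fetch the current template, merge the new resource into it, and send the whole body back as an update. A rough sketch (the function name, logical ID, and properties are placeholders, and it assumes the stack template is JSON):
import json

import boto3

cfn = boto3.client("cloudformation")

def add_function_to_stack(stack_name: str, logical_id: str, function_props: dict) -> None:
    # 1. Fetch the stack's current template.
    body = cfn.get_template(StackName=stack_name)["TemplateBody"]
    template = json.loads(body) if isinstance(body, str) else body

    # 2. Merge the new Lambda function into the Resources section.
    template.setdefault("Resources", {})[logical_id] = {
        "Type": "AWS::Lambda::Function",
        "Properties": function_props,
    }

    # 3. Push the whole template body back as a stack update.
    cfn.update_stack(
        StackName=stack_name,
        TemplateBody=json.dumps(template),
        Capabilities=["CAPABILITY_IAM"],
    )
This still means your tool edits and re-submits the full master template; there is no call that adds a single resource in isolation.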