AWS CloudFormation: No change set if the template's referenced S3 files are changed

My CloudFormation template contains a Step Functions state machine whose definition file lives at an S3 location.
If there is a change only in the definition, i.e. only the S3 file changes, updating the stack fails.
Solution I tried
I declared a parameter in my template (say BuildVersion); every change to the S3 file produces a new build version, which I pass in through the parameters file:
aws cloudformation update-stack --region "xyz" --stack-name "my-stack-name" --template-body "file://<template path>" --parameters "file://<parameter file path>"
This is how my parameter file looks; before calling this command I update it with the new BuildVersion value.
[
  {"ParameterKey": "EnvironmentName", "ParameterValue": "dev"},
  {"ParameterKey": "BuildVersion", "ParameterValue": "0.0"}
]
But this is not solving the problem and the update command still fails. If I do the same thing from the AWS console, i.e. update the parameter and click Update, it works fine.
Any suggestions will be highly appreciated.

When you update your template, CloudFormation will not look into the file referenced on S3 to see if anything has changed. It will simply see that the file location has not changed and conclude that there is no change in the template itself.
Since your question is not really complete, I'll have to answer based on some assumptions. You could enable versioning on the S3 bucket that holds your definition and pass in the S3 version identifier as a parameter to your stack to use as a property of your StepFunction S3 Location declaration. If you do this, you can upload a new version of the state machine declaration to S3 and use that version identifier as a parameter for updating your CloudFormation stack.
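A rough sketch of that idea in the template (the bucket, key, and role names here are placeholders, not from the original question):

Parameters:
  DefinitionVersionId:
    Type: String
    Description: S3 object version ID of the state machine definition

Resources:
  MyStateMachine:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      RoleArn: !GetAtt StateMachineRole.Arn  # assumes an execution role defined elsewhere in the template
      DefinitionS3Location:
        Bucket: my-definitions-bucket
        Key: state-machine.asl.json
        Version: !Ref DefinitionVersionId  # a new S3 object version makes this a real template change

Each upload to the versioned bucket produces a new version ID, so passing that ID as a parameter gives CloudFormation an actual difference to act on.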
You could automate the entire process of updating the CloudFormation stack by using a CodePipeline or a Lambda trigger on S3 to automatically update the CloudFormation stack when a new StateMachine definition is uploaded.

The issue with the BuildVersion approach was that this parameter did not participate in creating any AWS resource, so updating it through the CLI did not affect anything in the change set.
I added this parameter to the Tags section of the state machine resource and it worked for me.
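For reference, a minimal sketch of that fix (resource and parameter names are placeholders):

  MyStateMachine:
    Type: AWS::StepFunctions::StateMachine
    Properties:
      # ... definition and role properties unchanged ...
      Tags:
        - Key: BuildVersion
          Value: !Ref BuildVersion  # the parameter now feeds a real resource property, so changing it yields a change set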

Related

AWS CDK accessing parameters when deploying stacks on the pipeline via yaml, typescript and nodejs

I'm fairly new to AWS and the CDK, but have been working on a project which deploys to AWS via a pipeline, using YAML for the cf-template and later a Node script to run cdk deploy on a set of stack files written in TypeScript.
In the cf-template YAML where the cdk-toolkit is defined, there's a bucket resource with name X. After the toolkit has been created/updated in the pipeline, the cdk deploy command is executed to deploy some stacks and workers, which should live in bucket X. They aren't automatically being uploaded there, however, so I've tried using the --parameters flag to specify X as below.
cdk deploy --toolkit-stack-name my-toolkit --parameters uploadBucketName=X --ci --require-approval never
When I do this I get the following error in the pipeline for the first stack that gets deployed:
Parameters: [uploadBucketName] do not exist in the template
I assumed this meant that the MyFirstStack.ts file was missing a parameter definition, as suggested by the AWS documentation, but it's not clear to me why this is necessary or how it's supposed to be used when it's the cdk deploy command that provides a value for this parameter. I tried adding it per the docs:
import { CfnParameter } from "aws-cdk-lib"; // "@aws-cdk/core" on CDK v1

const uploadBucketName = new CfnParameter(this, "uploadBucketName", {
  type: "String",
  description: "The name of the Amazon S3 bucket where uploaded files will be stored.",
});
but I'm not sure if this is really the right thing to do, and besides, it doesn't work: I still get the same error.
Does anyone have any ideas where I'm going wrong?
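One thing worth checking, assuming the pipeline deploys more than one stack: the CDK CLI applies an unqualified --parameters value to every stack it deploys, and a parameter that exists in only one stack can be qualified with that stack's name (the stack name below is a placeholder):

cdk deploy --toolkit-stack-name my-toolkit --parameters MyFirstStack:uploadBucketName=X --ci --require-approval never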

Working on Video on demand CloudFormation template, need to customize the template

I need to modify the Video on Demand on AWS CloudFormation template.
When I deploy the main template without any modifications and upload a video to the source S3 bucket, the folders created in the destination S3 bucket are named with the GUID of the corresponding DynamoDB item.
In my case, the requirement is that those folders in the destination S3 bucket should be created with meaningful names.
Where exactly do I need to modify the template to achieve this?
Steps for modifying a CloudFormation stack can be found at:
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-get-template.html
In general, Stacks support passed parameters of the form:
aws cloudformation create-stack --stack-name mystack --template-body file://mypath/mystack.json --parameters ParameterKey=KeyPairName,ParameterValue=MyKeyPair
So one of the passed parameters could be the desired output filename.
Regarding file naming within MediaConvert, the service supports a set of time and date variables for naming output files, which can be found at: https://docs.aws.amazon.com/mediaconvert/latest/ug/list-of-settings-variables-with-examples.html
Alternatively, you could rename the files after output using a Lambda Function triggered by S3 File Events. This would allow you to generate or retrieve a name conducive to your workflows, as well as set Tags and other object metadata.
Examples of this workflow are documented here: https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html
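Note that S3 has no true rename; the Lambda would copy each object to the new key and delete the original, which is the same operation the CLI wraps (bucket and prefixes below are placeholders):

aws s3 mv s3://my-bucket/1a2b3c-guid-prefix/ s3://my-bucket/meaningful-name/ --recursive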
I hope this information helps you tune your workflow. Good luck!

How do I update a CloudFormation Template (via CLI or API) if none of the active resources are affected by the update?

If I have an existing CloudFormation stack with some resources that are always active and some that are not (i.e., resources whose Condition evaluates to false), and I attempt to update the template of ONLY those inactive resources without activating them (their Condition still evaluates to false) via the CLI or API, I get a No updates are to be performed. error:
aws cloudformation update-stack --stack-name <name> --template-body "..."
An error occurred (ValidationError) when calling the UpdateStack operation: No updates are to be performed.
If I then check the Stack Template, it has the previous template, not the new one.
However, if I do what is essentially the same thing but from the AWS Console (i.e., Update Stack -> Replace current template -> Upload a template file -> No other changes), the template will be updated.
Is there some way to accomplish such a template update via CLI or API?
Edit: This doesn't work. When using the console, CloudTrail logs the API call as UpdateStack, but using the same parameters in the CLI command doesn't seem to work.
Instead of aws cloudformation update-stack you can use aws cloudformation deploy --no-fail-on-empty-changeset.
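For example (the stack name and template path are placeholders):

aws cloudformation deploy --stack-name my-stack --template-file template.yaml --no-fail-on-empty-changeset

deploy works by creating and executing a change set, and with --no-fail-on-empty-changeset it exits successfully instead of erroring when the change set is empty.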
References:
Documentation for deploy
Difference between deploy and create (or update)

Storing AWS Lambda Function code directly in S3 Bucket

AWS Lambda functions have an option to use code uploaded as a file from S3. I have a successfully running Lambda function whose code is taken from a zip file in an S3 bucket. However, any time you want to update this code you need to either manually edit the code inline within the Lambda function, or upload a new zip file to S3 and then go into the function and manually re-upload the file from S3. Is there any way to link the Lambda function to a file in S3 so that it automatically updates its function code when you update the code file (or zip file) in S3?
Lambda doesn't actually reference the S3 code when it runs, only when it sets up the function. It effectively takes a copy of the code in your bucket and then runs that copy. So while there isn't a direct way to get the Lambda function to automatically run the latest code in your bucket, you can write a small script that updates the function code using SDK methods. I don't know which language you might want to use, but for example, you could write a script that calls the AWS CLI to update the function code. See https://docs.aws.amazon.com/cli/latest/reference/lambda/update-function-code.html
Updates a Lambda function's code.
The function's code is locked when you publish a version. You can't
modify the code of a published version, only the unpublished version.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
Synopsis
update-function-code
  --function-name <value>
  [--zip-file <value>]
  [--s3-bucket <value>]
  [--s3-key <value>]
  [--s3-object-version <value>]
  [--publish | --no-publish]
  [--dry-run | --no-dry-run]
  [--revision-id <value>]
  [--cli-input-json <value>]
  [--generate-cli-skeleton <value>]
You could do similar things using Python or PowerShell as well, such as using
https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/lambda.html#Lambda.Client.update_function_code
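For instance, after uploading a new zip to the bucket, the whole update script can be a single call (the function, bucket, and key names are placeholders):

aws lambda update-function-code --function-name my-function --s3-bucket my-bucket --s3-key code.zip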
You can set up an AWS CodeDeploy pipeline to get your code built and deployed on each commit to your code repository (GitHub, Bitbucket, etc.).
CodeDeploy is a deployment service that automates application
deployments to Amazon EC2 instances, on-premises instances, serverless
Lambda functions, or Amazon ECS services.
Also, if you want a more unattended route for deploying your updated code to Lambda, use this flow in your CodePipeline (a buildspec sketch for the last stage is shown below):
Source -> Code Build (npm installs and zipping etc.) -> S3 Upload (sourcecode.zip in S3 bucket) -> Code Build (another build just for aws lambda update-function-code)
Make sure the role for the last stage has both the s3:GetObject and lambda:UpdateFunctionCode permissions attached to it.
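A minimal buildspec sketch for that last CodeBuild stage (the function and bucket names are placeholders):

version: 0.2
phases:
  build:
    commands:
      # point the function at the zip the previous stage uploaded to S3
      - aws lambda update-function-code --function-name my-function --s3-bucket my-artifact-bucket --s3-key sourcecode.zip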

Is there a way to use an SSM Parameter to provide the path for an S3 deployment in CodePipeline

So, I've got a simple CodePipeline setup that uses CodeBuild to put some artifacts together and then provisions a CloudFormation Stack.
One of the resources created is an S3 bucket for storing static files. It also creates an SSM parameter with the name of the bucket.
Currently, to deploy to the bucket I'm using an S3 stage to unpack the initial set of files. Unfortunately I can only figure out how to set the bucket manually. This works ok if the stack is already provisioned but fails if the stack is created from fresh (with a new bucket name assigned).
Is there a way I can use the SSM parameter as part of this stage that I'm not seeing?
I agree with @shariqmaws. You can save the value in SSM Parameter Store and pull it into your build environment as follows:
env:
  parameter-store:
    key: value
Once that's done, you can use that variable as follows:
aws s3 sync your-s3-files/ "s3://${key}"
If I understood you correctly, you wish to dynamically change the bucket name of the S3 deploy action in CodePipeline. Currently this is not possible, as the bucket name is part of the pipeline configuration [1].
What you can do instead is take matters into your own hands: replace the S3 deploy action with a CodeBuild action and sync the files to the S3 bucket yourself. You can read in the Parameter Store value using this syntax in the buildspec [2]:
env:
  parameter-store:
    key: "value"
... or use the aws ssm get-parameter CLI command [3] on demand.
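For example (the parameter name is a placeholder):

aws ssm get-parameter --name /my-app/static-bucket-name --query Parameter.Value --output text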
[1] https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#w498aac41c11c11c31b3
[2] https://docs.aws.amazon.com/codebuild/latest/userguide/build-spec-ref.html#build-spec-ref-syntax-link3
[3] https://docs.aws.amazon.com/cli/latest/reference/ssm/get-parameter.html