I'm wondering if creating SSM documents via CloudFormation actually makes sense or if instead I should use another mechanism.
My concern is that when the content changes, CloudFormation actually creates a new document and destroys the old one. In that process the name of the document also changes. The name cannot be hardcoded, or CloudFormation complains with:
CloudFormation cannot update a stack when a custom-named resource requires replacing
With constantly changing names, it's going to be impossible to reference the document anywhere.
I haven't seen a way to create a new document version via CFN, as I can do manually in the AWS console.
What's best practice here?
I know I can create a custom CFN resource and deal with the document update in a Lambda. But isn't there a simpler solution?
The challenge you describe has, I think, been solved or mitigated by the (recently released?) UpdateMethod property for AWS::SSM::Document. Now, you can specify NewVersion for that property, and that will create a new version of the same document and set it as the default version.
See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssm-document.html#cfn-ssm-document-updatemethod
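For illustration, a minimal sketch of a template using it (the resource name, document content, and step names are placeholders, not from the original question):

Resources:
  MyDocument:
    Type: AWS::SSM::Document
    Properties:
      DocumentType: Command
      UpdateMethod: NewVersion
      Content:
        schemaVersion: '2.2'
        description: Example command document
        mainSteps:
          - action: aws:runShellScript
            name: sayHello
            inputs:
              runCommand:
                - echo hello

With UpdateMethod: NewVersion, changing Content creates a new default version of the same document rather than replacing it, so the document name stays stable for references.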
We have a lot of resources that were created manually. How do I add them to a CloudFormation stack without adding each of them to the template by hand? There are so many manually created resources that it would take too much time to add them to the template one by one.
Update:
Looks like there is no other way than adding them to a new template manually. I completed it by updating the infrastructure with a new template for the resources that I wanted to sync in the PROD environment.
Yes, you have to do it manually. But to jump-start the process you can use the former2 tool, which can generate CloudFormation templates from your existing resources for you.
This resource creates a Beanstalk application version: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-beanstalk-version.html
Having that in a template is convenient and useful because the Beanstalk environment and the application version can be deployed at the same time.
Because it's defined as a resource in the template, though, when the version is updated to a new one, the old one is completely replaced.
This is a problem because the Beanstalk UI makes it easy to deploy previous versions: you just select one and deploy it to the environment. But because CloudFormation destroys and recreates the version each time, there is only ever one. It's useful to have the UI fallback for tracking and redeploying previous versions, even if deployment is mostly going to be done through CloudFormation.
I want CloudFormation to create a new version (when it changes) and leave the old one untouched, instead of destroying it.
I only want this behaviour for this one resource in the template; other resources can use the default behaviour.
I didn't realize there is a CloudFormation feature for this.
This works as expected:
MyVersion:
  Type: AWS::ElasticBeanstalk::ApplicationVersion
  UpdateReplacePolicy: Retain
We would like to be able to understand the version of our software that is currently deployed to a particular AWS Lambda. What are the best practices for tracking a code commit hash to an AWS Lambda? We've looked at AWS tagging and we've also looked at AWS Lambda aliasing, but neither approach seems convenient or user-friendly. Are there other services that AWS provides?
Thanks
Without context and a better understanding of your use case around the commit hash, it's difficult to give a directly useful answer, and as other answers have shown, there are many ways you could accomplish this. That being said, the commit hash of particular code is ultimately metadata, and AWS has a solution for dealing with resource metadata: tags.
Best practice is to tag your resources with metadata. Almost all, if not all, AWS resources (including Lambda) support tags. As stated in the AWS documentation “tagging allows you to quickly search, filter, and manage resources” and subject to AWS limits your tags can be pretty much any key-value pair you like, including “commit: hash”.
The tagging strategy here would be to assign the commit hash to a tag “commit” with the value “e63abe27”. You can tag resources manually through the console, or you can apply tags as part of your build process.
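For example, as part of a build you might apply the current commit hash with the AWS CLI (the function ARN is the same placeholder used below; adjust it to your function):

# Tag the function with the short SHA of the current commit.
gitSha=$(git rev-parse --short HEAD)
aws lambda tag-resource \
  --resource arn:aws:lambda:us-east-1:123456789012:function:myFunction \
  --tags commit=${gitSha}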
Once tagged, at a high level, you would then be able to identify which commit is being used by listing the tags for the lambda in question. The CLI command would be something like:
aws lambda list-tags --resource arn:aws:lambda:us-east-1:123456789012:function:myFunction
You can learn more about tags and tagging strategy by reviewing the AWS tagging documentation and the Tagging Best Practices whitepaper.
One alternative could be to generate a file with the Git SHA as part of your build system that is packaged together with the other files in the build artifact. The following script generates a gitSha.json file in the ${outputDir}:
#!/usr/bin/env bash
gitSha=$(git rev-parse --short HEAD)
printf "{\"gitSha\": \"%s\"}" "${gitSha}" > "${outputDir}/gitSha.json"
The resulting gitSha.json may then look like:
{"gitSha": "5c805638"}
This file can then be accessed by downloading the deployment package (see the sketch below). Alternatively, you can have the function inspect the file at runtime and return its value to the caller, write it to a log, or similar, depending on your use case.
This script was implemented using bash and git rev-parse, but you can use any scripting language in combination with a Git library that you are comfortable with.
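If you go the package route, a rough sketch of checking the deployed SHA from the CLI (the function name is a placeholder; this assumes gitSha.json sits at the root of the zip artifact):

# Fetch the pre-signed URL of the deployed package and read the embedded SHA.
url=$(aws lambda get-function \
  --function-name myFunction \
  --query 'Code.Location' --output text)
curl -s -o /tmp/package.zip "$url"
unzip -p /tmp/package.zip gitSha.json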
The best way to version a Lambda is to use the Create Version option and add these versions to an alias.
We use this mechanism extensively for mapping a single AWS Lambda into different API Gateway endpoints. We use the environment variables provided by AWS Lambda to move all configuration outside the Lambda function. In each version of the Lambda, we change these environment variables as required and create a new version. This version can be mapped to an alias, which helps keep the API Gateway or other integration points intact (without any change to the integration).
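A rough sketch of that flow with the AWS CLI (the function name, alias, and variable are placeholders, not taken from the answer above):

# Adjust configuration for the next version (example variable only).
aws lambda update-function-configuration \
  --function-name myFunction \
  --environment "Variables={STAGE=prod}"

# Publish an immutable version from the current code and configuration.
version=$(aws lambda publish-version \
  --function-name myFunction \
  --query 'Version' --output text)

# Point the alias at the new version; integrations keep referencing the alias.
aws lambda update-alias \
  --function-name myFunction \
  --name live \
  --function-version "$version"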
If you are using the Serverless Framework, try this in serverless.yml:
provider:
  versionFunctions: true
and
functions:
  MyFunction:
    tags:
      commit: e63abe27 # example tag key/value
Or try Serverless plugins; one of them:
serverless plugin install --name serverless-version-tracker
It uses git tags as versions; you need to manage git tags then.
I want to use the AWS Data Pipeline service and have created some pipelines using the manual JSON-based mechanism, which uses the AWS CLI to create, put, and activate the pipeline.
My question is: how can I automate the editing or updating of the pipeline if something changes in the pipeline definition? Things that I can imagine changing include the schedule time, addition or removal of Activities or Preconditions, references to DataNodes, resource definitions, etc.
Once the pipeline is created, we cannot edit quite a few things as mentioned here in the official doc: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-manage-pipeline-modify-console.html#dp-edit-pipeline-limits
This makes me believe that if I want to automate updating the pipeline, I would have to delete it and re-create/activate a new one. If so, the next question is: how can I create an automated process that identifies the previous version's ID, deletes it, and creates a new one? Essentially I'm trying to build a release-management flow where the configuration JSON file is released and deployed automatically.
Most commands like activate, delete, list-runs, put-pipeline-definition, etc. take the pipeline-id, which is not known until a new pipeline is created. I am unable to find anything that remains constant across updates or recreation (the unique-id and name parameters of the create-pipeline command are consistent, but I can't use them for the above-mentioned tasks; I need the pipeline-id for that).
Of course I can try writing shell scripts that grep and search the output, but is there a better way? Is there some other info that I am missing?
Thanks a lot.
You cannot fully edit schedules or change references, so creating/deleting pipelines seems to be the best way for your scenario.
You'll need the pipeline-id to delete a pipeline. Is it not possible to keep a record of that somewhere? You could keep a file with the last-used id stored locally or in S3, for instance.
Some other ways I can think of are:
- If you have only one pipeline in the account, you can list-pipelines and use the only result.
- If you have the pipeline name, you can list-pipelines and find the id (see the sketch below).
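A rough sketch of such a release flow (the pipeline name and definition file are placeholders):

#!/usr/bin/env bash
name="my-pipeline"

# Look up the id of the previously released pipeline by name and delete it.
oldId=$(aws datapipeline list-pipelines \
  --query "pipelineIdList[?name=='${name}'].id" --output text)
if [ -n "$oldId" ]; then
  aws datapipeline delete-pipeline --pipeline-id "$oldId"
fi

# Create a fresh pipeline, push the released definition, and activate it.
newId=$(aws datapipeline create-pipeline \
  --name "$name" --unique-id "${name}-$(date +%s)" \
  --query 'pipelineId' --output text)
aws datapipeline put-pipeline-definition \
  --pipeline-id "$newId" --pipeline-definition file://pipeline.json
aws datapipeline activate-pipeline --pipeline-id "$newId"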
I see that there are a lot of success stories using CloudFormation; we're planning to use it to make sure our Prod/Dev environments are identical. I heard it's a great way to have a single file, in version control, for deploying multiple similar environments.
I have a doubt: let's say I use CloudFormer to create a template of, say, my DB instance and save it in Git, and over the next 10-15 days I make a couple of changes, like adding new volumes to the instance to store data files, or deleting some volumes. Now, when I use that template in, say, our Dev environment, will it reflect the volumes that I added/deleted? I mean, how does it work behind the scenes?
This is the basic way to use CloudFormation:
Create a JSON template describing your stack. You can write it manually, or write code that creates the JSON for you.
Create one or more stacks based on the template.
Whenever you want to change something, edit your template (always committing changes to version control) and update the stack(s), as sketched below.
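In CLI terms, that cycle might look like this (the stack and file names are placeholders):

aws cloudformation create-stack \
  --stack-name my-stack \
  --template-body file://template.json

# After editing the template and committing the change:
aws cloudformation update-stack \
  --stack-name my-stack \
  --template-body file://template.json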
You will often have several templates, where stacks based on one template uses resources created by stacks based on other templates. Outputs and parameters are good for coordinating this.
Most importantly: You should never change resources created using CloudFormation in any other way than by changing the stack template and updating the stack.
No, such changes would not be reflected automatically.
A CloudFormation template is a declarative description of AWS resources. When you create a Stack from a template, AWS will provision all resources described in the template. You can also update a stack with new resources or delete entire stacks.
CloudFormer is a separate tool that will scan your account for resources and create a template describing them.
So, if you create two stacks from the same template, they will be similar only when first created, but will live totally separate lives thereafter. You can, however, have resources that are shared between stacks: for example, one database stack that is referenced by two application stacks, if that makes sense for your environment.
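A minimal sketch of that sharing pattern using stack exports (fragments of two templates; all names are illustrative):

# In the database stack: export the endpoint so other stacks can import it.
Outputs:
  DbEndpoint:
    Value: !GetAtt MyDatabase.Endpoint.Address
    Export:
      Name: prod-db-endpoint

# In an application stack: import the exported value, e.g. into a function.
Resources:
  MyFunction:
    Type: AWS::Lambda::Function
    Properties:
      Environment:
        Variables:
          DB_HOST: !ImportValue prod-db-endpoint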