I see that there are a lot of success stories using CloudFormation, and we're planning to use it to make sure our Prod/Dev environments are identical. I've heard that it's a great way to have a single file, in version control, for deploying multiple similar environments.
I have a doubt: let's say I use CloudFormer to create a template of, say, my DB instance and save it in Git. Then, over the next 10-15 days, I make a couple of changes, like adding new volumes to the instance to store data files, or deleting some volumes. Now, when I use that template in, say, our Dev environment, will it reflect the volumes which I added/deleted? I mean, how does it work behind the scenes?
This is the basic way to use CloudFormation:
Create a JSON template describing your stack. You can write it manually, or write code that creates the JSON for you.
Create one or more stacks based on the template.
Whenever you want to change something, edit your template (always committing changes to version control) and update the stack(s).
You will often have several templates, where stacks based on one template use resources created by stacks based on other templates. Outputs and parameters are good for coordinating this.
Most importantly: You should never change resources created using CloudFormation in any other way than by changing the stack template and updating the stack.
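For illustration, a minimal sketch of that loop from the AWS CLI, assuming you have written your template to a file called template.json (the file and stack names here are placeholders, not anything from your setup):

    # Create a stack from the template (one stack per environment, e.g. Dev and Prod)
    aws cloudformation create-stack --stack-name my-dev-stack --template-body file://template.json

    # Later: edit template.json, commit the change to version control, then apply it to the running stack
    aws cloudformation update-stack --stack-name my-dev-stack --template-body file://template.json

You would typically run the same pair of commands once per environment (my-dev-stack, my-prod-stack, ...) against the same template file, which is what keeps the environments identical.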
No, such changes would not be reflected automatically.
A CloudFormation template is a declarative description of AWS resources. When you create a Stack from a template, AWS will provision all resources described in the template. You can also update a stack with new resources or delete entire stacks.
CloudFormer is a separate tool that will scan your account for resources and create a template describing them.
So, if you create two stacks from the same template, they will be similar only when first created, and will lead totally separate lives thereafter. But you can have resources that are shared between stacks; for example, you can have one database stack that is referenced by two application stacks, if that makes sense for your environment.
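As a sketch of how that coordination can look with outputs and parameters (the stack names, output key, and parameter key below are invented, and assume the database stack declares an output called DbEndpoint and the application template declares a matching parameter):

    # Read the DbEndpoint output from the shared database stack
    DB_ENDPOINT=$(aws cloudformation describe-stacks --stack-name db-stack \
      --query "Stacks[0].Outputs[?OutputKey=='DbEndpoint'].OutputValue" --output text)

    # Feed it into one of the application stacks as a parameter
    aws cloudformation create-stack --stack-name app-stack-1 \
      --template-body file://app.json \
      --parameters ParameterKey=DbEndpoint,ParameterValue="$DB_ENDPOINT"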
We have a lot of resources that were created manually. How do I add them to a CloudFormation stack without adding each of them to the template by hand? There are so many manually created resources that it would take too much time to add them to the template one by one.
Update:
Looks like there is no other way than adding them to a new template manually. I completed it by updating the infrastructure with a new template for the resources that I wanted to sync in the PROD environment.
Yes, you have to do it manually. But to jump-start the process you can use the former2 tool, which can generate CloudFormation templates from your existing resources for you.
We have a root CF stack and multiple nested CF stacks whose templates are stored in an S3 bucket. There is also a CodePipeline which is triggered whenever the repository containing the template files is updated. The CodePipeline uploads the updated template files to S3 and triggers an update of the root CF stack and the nested stacks. Some of those nested stacks consist of Lambda applications that use an old runtime.
However, on the date when AWS stopped supporting the runtime (https://docs.aws.amazon.com/lambda/latest/dg/runtime-support-policy.html), the CF stack failed to update because of the deprecated Lambda runtime version. The root stack cannot finish the update rollback because the nested stacks failed to update, but there is no way of updating a nested stack other than updating the root stack, which is in UPDATE_ROLLBACK_FAILED status and cannot be updated.
Reading https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors-update-rollback-failed, we know that we need to fix the problem manually, but we have no idea how to update the nested stack template.
Is deleting the failed nested stacks the only way to recover from this situation? If yes, do all the resources in the nested stack disappear after deletion of the nested CF stack? We are looking for a way to update the nested stack while keeping the existing resources intact.
Short Answer: You can't do that!
Long Answer: You have to delete the stack, update the runtime and any other required changes, and deploy fresh. You might have to clean up resources which don't get deleted even after deleting the CloudFormation stack; resources like S3 buckets, ECR repositories, etc. need to be deleted manually.
The solution is here:
If none of the solutions in the troubleshooting guide worked, you can use the advanced option to skip the resources that CloudFormation can't successfully roll back. You must look up and type the logical IDs of the resources that you want to skip. Specify only resources that went into the UPDATE_FAILED state during the UpdateRollback and not during the forward update.
There is a way to skip the failing resources, i.e. the Lambdas in my case. This way, you can make the root and the nested stacks reach the UPDATE_ROLLBACK_COMPLETE state.
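From the CLI, that corresponds to continue-update-rollback with --resources-to-skip; a sketch with placeholder names (resources inside a nested stack are addressed as NestedStackLogicalId.ResourceLogicalId):

    # Resume the stuck rollback on the root stack, skipping the resources that hit UPDATE_FAILED
    aws cloudformation continue-update-rollback \
      --stack-name my-root-stack \
      --resources-to-skip MyNestedStack.MyLambdaFunction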
I'm wondering if creating SSM documents via CloudFormation actually makes sense or if instead I should use another mechanism.
My concern is that when the content changes, CloudFormation actually creates a new document and destroys the old one. In that process the name of the document also changes. The name cannot be hardcoded, or CloudFormation complains with:
CloudFormation cannot update a stack when a custom-named resource requires replacing
With permanently changing names, it's going to be impossible to reference the document anywhere.
I haven't seen a way to create a new document version via CFN, as I can do manually in the AWS console.
What's best practice here?
I know I can create a custom CFN resource and deal with the document update in a lambda. But ain't there a simple solution?
The challenge you describe has, I think, been solved or mitigated by the (recently released?) UpdateMethod property for AWS::SSM::Document. Now, you can specify NewVersion for that property, and that will create a new version of the same document and set it as the default version.
See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssm-document.html#cfn-ssm-document-updatemethod
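A minimal sketch of what that can look like in a template's Resources section (the logical ID, document name, and content below are made up for illustration; JSON does not allow comments, so the relevant line is simply "UpdateMethod": "NewVersion"):

    "MyCommandDocument": {
      "Type": "AWS::SSM::Document",
      "Properties": {
        "Name": "my-command-document",
        "DocumentType": "Command",
        "UpdateMethod": "NewVersion",
        "Content": {
          "schemaVersion": "2.2",
          "description": "Example command document",
          "mainSteps": [
            {
              "action": "aws:runShellScript",
              "name": "sayHello",
              "inputs": { "runCommand": ["echo hello"] }
            }
          ]
        }
      }
    }

With UpdateMethod set to NewVersion, a change to Content should produce a new default version of the same document instead of a replacement, so the hardcoded Name can stay.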
How do I manually force AWS Lambda to discard a function's existing containers, using the AWS console or the AWS CLI, for development and testing purposes?
If you redeploy the function, it'll terminate all existing containers. It could be as simple as assigning the current date/time to the description of the Lambda function and redeploying. This lets you redeploy as many times as you need, because something in the deployment is always unique, and it will tear down all existing containers each time you deploy.
With that said, Lambda functions are supposed to be stateless. You should keep that in mind when you write your code (e.g. avoid using global variables, use random file names if creating something temporary, etc.). From the sounds of things, I think you might have an issue with your design if you require the Lambda container to be torn down.
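For example, from the CLI that could be as small as the following (the function name is a placeholder):

    # Any configuration change counts as a redeploy, so existing containers get recycled
    aws lambda update-function-configuration \
      --function-name my-function \
      --description "redeployed at $(date)"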
If you're using the UI, then a simple way to do this is to add or alter an environment variable on the function configuration page.
When you click "Save" the function will be reloaded.
Note: this won't work if you're using the versioned functions feature.
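For completeness, a CLI version of that environment-variable trick is sketched below; note that --environment replaces the whole variable set, so include any variables your function already needs (the names here are placeholders):

    # Bumping a dummy environment variable also forces fresh containers on the next invocation
    aws lambda update-function-configuration \
      --function-name my-function \
      --environment "Variables={FORCE_RELOAD=$(date +%s)}"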
I want to use the AWS Data Pipeline service and have created some pipelines using the manual JSON-based mechanism, which uses the AWS CLI to create, put, and activate the pipeline.
My question is: how can I automate the editing or updating of the pipeline if something changes in the pipeline definition? Things that I can imagine changing include the schedule time, the addition or removal of Activities or Preconditions, references to DataNodes, resource definitions, etc.
Once the pipeline is created, we cannot edit quite a few things as mentioned here in the official doc: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-manage-pipeline-modify-console.html#dp-edit-pipeline-limits
This makes me believe that if I want to automate pipeline updates, I would have to delete and re-create/activate a new pipeline. If so, the next question is: how can I create an automated process which identifies the previous version's ID, deletes it, and creates a new one? Essentially I'm trying to build a release management flow where the configuration JSON file is released and deployed automatically.
Most commands like activate, delete, list-runs, put-pipeline-definition, etc. take the pipeline-id, which is not known until a new pipeline is created. I am unable to find anything which remains constant across updates or recreation (the unique-id and name parameters of the create-pipeline command are consistent, but I can't use them for the above-mentioned tasks; I need the pipeline-id for those).
Of course I can try writing shell scripts which grep and parse the output, but is there a better way? Is there some other info that I am missing?
Thanks a lot.
You cannot fully edit schedules or change references, so creating/deleting pipelines seems to be the best way for your scenario.
You'll need the pipeline-id to delete a pipeline. Is it not possible to keep a record of that somewhere? You can have a file with the last used id stored locally or in S3 for instance.
Some other ways I can think of are:
- If you have only 1 pipeline in the account, you can list-pipelines and use the only result.
- If you have the pipeline name, you can list-pipelines and find the id (see the sketch below).
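A rough sketch of such a flow, assuming the pipeline name stays stable across releases and using only standard CLI calls (the name and definition file below are placeholders):

    PIPELINE_NAME="my-etl-pipeline"

    # Look up the id of the previous version by name (empty if none exists yet)
    OLD_ID=$(aws datapipeline list-pipelines \
      --query "pipelineIdList[?name=='$PIPELINE_NAME'].id" --output text)

    # Delete the old pipeline if there is one
    if [ -n "$OLD_ID" ]; then
      aws datapipeline delete-pipeline --pipeline-id "$OLD_ID"
    fi

    # Create the new pipeline, load the released definition, and activate it
    NEW_ID=$(aws datapipeline create-pipeline --name "$PIPELINE_NAME" \
      --unique-id "$PIPELINE_NAME-$(date +%s)" \
      --query pipelineId --output text)
    aws datapipeline put-pipeline-definition --pipeline-id "$NEW_ID" \
      --pipeline-definition file://pipeline-definition.json
    aws datapipeline activate-pipeline --pipeline-id "$NEW_ID"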