We have a lot of resources that were created manually. How do I add them to a CloudFormation stack without manually adding each of them to the template? There are so many manually created resources that it would take too much time if I started adding them to the template one by one.
Update:
Looks like there is no way around adding them to a new template manually. I completed it by updating the infrastructure with a new template containing the resources I wanted to sync in the PROD environment.
Yes, you have to do it manually. But to jump-start the process you can use the former2 tool, which can generate CloudFormation templates from your existing resources for you.
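If you also want the existing resources to end up managed by the stack rather than recreated, one option is CloudFormation's resource import (an IMPORT change set). A minimal sketch, assuming a hypothetical existing bucket my-existing-bucket and a template.json that declares it with DeletionPolicy: Retain:

    # Create and execute an IMPORT change set for the existing bucket
    aws cloudformation create-change-set \
        --stack-name my-stack --change-set-name import-existing \
        --change-set-type IMPORT \
        --resources-to-import '[{"ResourceType":"AWS::S3::Bucket","LogicalResourceId":"ExistingBucket","ResourceIdentifier":{"BucketName":"my-existing-bucket"}}]' \
        --template-body file://template.json
    aws cloudformation execute-change-set \
        --stack-name my-stack --change-set-name import-existing

Note that the import still needs each resource declared in the template, so this complements former2 rather than replacing the manual templating step.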
We have a root CF stack and multiple nested CF stacks whose templates are stored in an S3 bucket. There is also a CodePipeline which is triggered whenever the repository containing the template files is updated. The CodePipeline uploads the updated template files to S3 and triggers updates of the root CF stack and the nested stacks. Some of those nested stacks contain Lambda applications that use an old runtime.
However, on the date when AWS stopped supporting the runtime (https://docs.aws.amazon.com/lambda/latest/dg/runtime-support-policy.html), the CF stack failed to update because of the deprecated Lambda runtime version. The root stack cannot finish the update rollback because the nested stacks failed to update, but there is no way of updating a nested stack besides updating the root stack, which is in UPDATE_ROLLBACK_FAILED status and cannot be updated.
Reading https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/troubleshooting.html#troubleshooting-errors-update-rollback-failed, we know we need to fix the problem manually, but we have no idea how to update the nested stack template.
Is deleting the failed nested stacks the only way to recover from this situation? If yes, do all the resources in a nested stack disappear when the nested CF stack is deleted? We are looking for a way to update the nested stacks while keeping the existing resources intact.
Short Answer: You can't do that!
Long Answer: You have to delete the stack, update the runtime along with any other required changes, and deploy fresh. You might have to clean up resources that do not get deleted even after the CloudFormation stack is gone; resources such as non-empty S3 buckets, ECR repositories, etc. need to be deleted manually.
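A hedged cleanup sketch, assuming a hypothetical retained bucket my-app-bucket and a hypothetical nested stack name my-nested-app:

    # A non-empty bucket blocks deletion, so empty it first, then remove it
    aws s3 rm s3://my-app-bucket --recursive
    aws s3 rb s3://my-app-bucket
    # Then delete the failed nested stack itself
    aws cloudformation delete-stack --stack-name my-nested-app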
The solution is in the troubleshooting guide you linked:
If none of the solutions in the troubleshooting guide worked, you can use the advanced option to skip the resources that CloudFormation can't successfully roll back. You must look up and type the logical IDs of the resources that you want to skip. Specify only resources that went into the UPDATE_FAILED state during the UpdateRollback and not during the forward update.
So there is a way to skip the failing resources, i.e. the Lambdas in my case. This way you can bring both the root stack and the nested stacks to the UPDATE_ROLLBACK_COMPLETE state.
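A minimal CLI sketch of that skip, assuming hypothetical logical IDs NestedAppStack (the nested stack in the root template) and OldRuntimeFunction (the failing Lambda inside it). Resources inside nested stacks are addressed as NestedStackLogicalId.ResourceLogicalId:

    # Finish the rollback, skipping the resource that keeps failing
    aws cloudformation continue-update-rollback \
        --stack-name my-root-stack \
        --resources-to-skip NestedAppStack.OldRuntimeFunction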
We have four AWS accounts used to define different environments: dev, sqe, stg, prd. We're only now adopting CloudFormation, and I'd like to import an existing resource into a stack. As we roll this out, each environment will get the new stack, and I'm wondering if there's an easier way to import the resource in each environment than going through the console to import it while adding the stack (it would be nice if we could just deploy via our deployment system).
What I was hoping for was something I could specify in the stack definition itself (e.g., "here's a bucket that already exists, take ownership of it"), but I'm not finding anything. Currently it seems like the easiest route is to create an empty stack in each environment, import the resource into it, and then just deploy as normal.
Also, what happens when/if an update fails and a stack gets stuck in ROLLBACK_COMPLETE? Do I have to go through this again after deleting the stack?
What you have described sounds exactly like you're after a Continuous Integration / Continuous Deployment (CI/CD) pipeline. Instead of trying to import existing resources into your accounts, you're better off designing the CloudFormation templates and then deploying them to each environment through CodePipeline. This will also provide a clean separation between the accounts, instead of importing stg resources into prd.
A fantastic example and quickstart is serverless-cicd-for-enterprise, which should serve as a good starting point for you.
You can't get stuck in ROLLBACK_COMPLETE, as that is the last action a failed change set executes. What it means is that the stack tried to update, couldn't, and has reverted to the last successful deployment. If this was the first deployment (there are no successful deployments to revert to), you will need to delete the stack and try again. However, if you have had a successful deployment, you can simply run a stack update.
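A minimal sketch of that first-deployment recovery, assuming a hypothetical stack my-stack and template file template.json:

    # A stack stuck in ROLLBACK_COMPLETE after its first deployment can only be deleted
    aws cloudformation delete-stack --stack-name my-stack
    aws cloudformation wait stack-delete-complete --stack-name my-stack
    # Fix the template, then create the stack again
    aws cloudformation create-stack --stack-name my-stack \
        --template-body file://template.json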
I'm wondering if creating SSM documents via CloudFormation actually makes sense or if instead I should use another mechanism.
My concern is that when the content changes, CloudFormation actually creates a new document and destroys the old one. In that process the name of the document changes as well. The name cannot be hardcoded, or CloudFormation complains with:
CloudFormation cannot update a stack when a custom-named resource requires replacing
With the name changing on every update, it is going to be impossible to reference the document anywhere.
I haven't found a way to create a new document version via CFN the way I can manually in the AWS console.
What's best practice here?
I know I can create a custom CFN resource and handle the document update in a Lambda. But isn't there a simpler solution?
The challenge you describe has, I think, been solved or mitigated by the (recently released?) UpdateMethod property for AWS::SSM::Document. You can now specify NewVersion for that property, which will create a new version of the same document and set it as the default version.
See https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ssm-document.html#cfn-ssm-document-updatemethod
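A minimal template sketch, assuming a hypothetical Command document named my-runbook:

    Resources:
      MyDocument:
        Type: AWS::SSM::Document
        Properties:
          Name: my-runbook              # a hardcoded name now stays stable across updates
          DocumentType: Command
          UpdateMethod: NewVersion      # add a new default version instead of replacing the document
          Content:
            schemaVersion: '2.2'
            description: Example command document
            mainSteps:
              - action: aws:runShellScript
                name: sayHello
                inputs:
                  runCommand:
                    - echo hello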
I want to use the AWS Data Pipeline service and have created some pipelines using the manual JSON-based mechanism, which uses the AWS CLI to create, put, and activate the pipeline.
My question is: how can I automate the editing or updating of the pipeline if something changes in the pipeline definition? Things I can imagine changing include the schedule time, addition or removal of Activities or Preconditions, references to DataNodes, resource definitions, etc.
Once the pipeline is created, we cannot edit quite a few things as mentioned here in the official doc: http://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-manage-pipeline-modify-console.html#dp-edit-pipeline-limits
This makes me believe that if I want to automate the updating of a pipeline, I would have to delete and re-create/activate a new pipeline. If so, the next question is: how can I create an automated process which identifies the previous version's ID, deletes it, and creates a new one? Essentially, I am trying to build a release-management flow where the configuration JSON file is released and deployed automatically.
Most commands like activate, delete, list-runs, put-pipeline-definition, etc. take the pipeline-id, which is not known until a new pipeline is created. I am unable to find anything which remains constant across updates or re-creation (the unique-id and name parameters of the create-pipeline command are consistent, but I can't use them for the above-mentioned tasks; I need the pipeline-id for those).
Of course I can try writing shell scripts which grep and search the output, but is there any better way? Is there some other info that I am missing?
Thanks a lot.
You cannot edit schedules completely or change references, so deleting and re-creating pipelines seems to be the best approach for your scenario.
You'll need the pipeline-id to delete a pipeline. Is it not possible to keep a record of that somewhere? You could have a file with the last-used id stored locally or in S3, for instance.
Some other ways I can think of are (see the sketch after this list):
- If you have only one pipeline in the account, you can run list-pipelines and use the only result.
- If you have the pipeline name, you can run list-pipelines and find the id.
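A minimal sketch of the second option, assuming a hypothetical pipeline named my-pipeline:

    # Look up the pipeline id by name, then reuse it for the other commands
    PIPELINE_ID=$(aws datapipeline list-pipelines \
        --query "pipelineIdList[?name=='my-pipeline'].id" --output text)
    aws datapipeline delete-pipeline --pipeline-id "$PIPELINE_ID"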
I see that there are a lot of success stories about using CloudFormation, and we're planning to use it to make sure our Prod/Dev environments are identical. I've heard that it's a great way to have a single file, in version control, for deploying multiple similar environments.
I have a doubt: let's say I use CloudFormer to create a template of, say, my DB instance and save it in Git, and over the next 10-15 days I make a couple of changes directly to the instance, like adding new volumes to store data files or deleting some volumes. Now, when I use that template in our Dev environment, will it reflect the volumes which I added/deleted? I mean, how does it work behind the scenes?
This is the basic way to use CloudFormation:
1. Create a JSON template describing your stack. You can write it manually, or write code that creates the JSON for you.
2. Create one or more stacks based on the template.
3. Whenever you want to change something, edit your template (always committing changes to version control) and update the stack(s).
You will often have several templates, where stacks based on one template use resources created by stacks based on other templates. Outputs and parameters are good for coordinating this.
Most importantly: You should never change resources created using CloudFormation in any other way than by changing the stack template and updating the stack.
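A minimal CLI sketch of that loop, assuming a hypothetical stack dev-env and a template file template.json:

    # First deployment
    aws cloudformation create-stack --stack-name dev-env \
        --template-body file://template.json
    # After editing the template and committing, apply the change
    aws cloudformation update-stack --stack-name dev-env \
        --template-body file://template.json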
No, such changes would not be reflected automatically.
A CloudFormation template is a declarative description of AWS resources. When you create a Stack from a template, AWS will provision all resources described in the template. You can also update a stack with new resources or delete entire stacks.
CloudFormer is a separate tool that will scan your account for resources and create a template describing them.
So, if you create two stacks from the same template, they will be similar only when first created, but they live totally separate lives thereafter. However, you can have resources that are shared between stacks; for example, you can have one database stack that is referenced by two application stacks, if that makes sense for your environment.