I have a script that creates a new LaunchConfiguration with a new AMI ID while retaining all of the existing metadata (user_data, etc.). I then trigger an ASG instance refresh to launch instances from the new LaunchConfiguration. My concern is this: since I'm doing these things outside of CFN, what happens the next time the CFN stack is updated? Will this change be considered drift? I just want to make sure that creating the LaunchConfiguration and updating the ASG won't cause problems with CFN.
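For concreteness, the script does roughly this (a boto3 sketch; all resource names and the AMI ID are placeholders, and most launch configuration properties are omitted):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Copy the existing launch configuration, swapping in the new AMI.
    # "lc-v1", "lc-v2", and "my-asg" are placeholder names.
    old = autoscaling.describe_launch_configurations(
        LaunchConfigurationNames=["lc-v1"]
    )["LaunchConfigurations"][0]

    autoscaling.create_launch_configuration(
        LaunchConfigurationName="lc-v2",
        ImageId="ami-0123456789abcdef0",   # the new AMI ID
        InstanceType=old["InstanceType"],
        SecurityGroups=old["SecurityGroups"],
        UserData=old["UserData"],          # retain existing user_data
        # ...remaining properties copied the same way...
    )

    # Point the ASG at the new launch configuration and roll the instances.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="my-asg",
        LaunchConfigurationName="lc-v2",
    )
    autoscaling.start_instance_refresh(AutoScalingGroupName="my-asg")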
Will this change be considered drift?
Yes, you are causing drift. When you update your stack, the ASG will be reverted to the state defined in the template, not whatever you changed it to outside of CFN.
It's not good practice to manually change your resources outside of CFN. You will have to update your template to match the current state of your ASG.
Related
At night I update our production ASG to use a smaller instance type.
For example, we use m5 instances during business hours and t3 instances at night.
To do this, a Lambda function triggered by CloudWatch updates the ASG's launch template version and desired capacity.
When the launch template version and desired capacity are updated, a new instance based on the new template version starts up fine. The problem is that sometimes the ASG terminates the new instance instead of the old one (the old instance type).
So I'm planning to also raise the minSize of the ASG and lower it again after some time, to give the new-version instance a chance to start properly.
For example: set minSize and desired capacity to 2, wait for the new-type instance to start from the updated launch template version, and after some time set minSize and desired capacity back to 1 to stop the old-type instance.
Is this the right way, or can you advise a better one?
Thanks.
The solution is to set the termination policy of the Auto Scaling group to OldestInstance.
This way, the ASG will terminate the oldest instances first, which are exactly the instances you want to get rid of.
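If the group is managed outside a template, the same setting can be applied with a single boto3 call (the group name is a placeholder); in CloudFormation it is the TerminationPolicies property of the AWS::AutoScaling::AutoScalingGroup resource:

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Terminate the oldest instances first when scaling in.
    # "my-asg" is a placeholder group name.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="my-asg",
        TerminationPolicies=["OldestInstance"],
    )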
Our team had an issue where someone manually modified an IAM role during an operational event. We hoped that redeploying the CloudFormation stack would revert the state of the IAM role; however, the manual change was still there.
My working theory is that since the IAM role's ARN is the same, CloudFormation does not delete and recreate it. Is that accurate? And if so, how do we ensure all relevant resources are torn down during a deployment?
My working theory is that since the IAM role's ARN is the same, CloudFormation does not delete and recreate it. Is that accurate?
CloudFormation (CFN) does not check for changes made outside of its control. You could remove the role entirely, and CFN would still "think" the role is there.
If you manually change a resource created by CFN outside of CFN (bad practice), you have so-called stack drift. CFN by itself is not aware of any changes to the resources it creates that occur outside its control. However, CFN provides special tools which you have to call explicitly to detect the drift:
Detect drift on an entire CloudFormation stack
Not all resources support drift detection, but AWS::IAM::Role is one which does.
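For example, a boto3 sketch of running drift detection and listing the drifted resources (the stack name is a placeholder):

    import time
    import boto3

    cfn = boto3.client("cloudformation")

    # Start drift detection for the whole stack; it runs asynchronously.
    detection_id = cfn.detect_stack_drift(StackName="my-stack")["StackDriftDetectionId"]

    # Poll until the detection finishes.
    while True:
        status = cfn.describe_stack_drift_detection_status(
            StackDriftDetectionId=detection_id
        )
        if status["DetectionStatus"] != "DETECTION_IN_PROGRESS":
            break
        time.sleep(5)

    # List only the resources that drifted from the template.
    drifts = cfn.describe_stack_resource_drifts(
        StackName="my-stack",
        StackResourceDriftStatusFilters=["MODIFIED", "DELETED"],
    )
    for d in drifts["StackResourceDrifts"]:
        print(d["LogicalResourceId"], d["StackResourceDriftStatus"])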
And if so, how do we ensure all relevant resources are torn down during a deployment?
I'm not sure what you mean here, but you have to fix the drift manually. You have four choices:
Change the role back to its original state,
Update template to reflect the external changes,
Use resource import to bring the changed role into the stack,
Delete the entire stack, and create new one from the original template.
The last choice ensures that the modified role is also deleted and recreated in its original form.
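For the third choice, the import is done through a change set of type IMPORT. A rough boto3 sketch, assuming the role is identified by its name and the template already declares it with a DeletionPolicy (all names are placeholders):

    import boto3

    cfn = boto3.client("cloudformation")

    # The template must describe the role and set a DeletionPolicy on it.
    template_body = open("template-with-role.yaml").read()

    cfn.create_change_set(
        StackName="my-stack",
        ChangeSetName="import-role",
        ChangeSetType="IMPORT",
        TemplateBody=template_body,
        ResourcesToImport=[{
            "ResourceType": "AWS::IAM::Role",
            "LogicalResourceId": "MyRole",
            "ResourceIdentifier": {"RoleName": "my-role"},
        }],
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    cfn.get_waiter("change_set_create_complete").wait(
        ChangeSetName="import-role", StackName="my-stack"
    )
    cfn.execute_change_set(ChangeSetName="import-role", StackName="my-stack")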
I'm trying to do the following: the parent stack launches the first child stack, which creates a fully configured EC2 instance. Once that completes, the parent stack kicks off a second stack that uses a Lambda function to create an AMI, which is then used for an Auto Scaling setup even further downstream. This is working perfectly.
Now the challenge: when I update the metadata of the EC2 instance in the first child stack, I would really like the second stack to be triggered. In other words, I want to be able to change the seed instance and have the CloudFormation stack update create a new AMI.
I'm able to get the seed instance to update, but the second child stack isn't triggered :-(
I've Googled everything I could think of, but UpdatePolicy doesn't apply here, and manually kicking off the second child stack defeats the point of having nested stacks. I'm pretty sure I'm missing some obvious feature or clever trick, so I'm asking you to help me out. Please.
Have you tried using a Lambda-backed custom resource? You can set the resource's service token to the Lambda function and use DependsOn with the first nested stack. It will be invoked whenever the CloudFormation template is deployed or updated.
You can also look up the stack itself from the Lambda function to determine any changes, if you want.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
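A minimal handler for such a custom resource could look like this (a sketch; cfnresponse is the helper module AWS makes available to inline ZipFile Lambda code, and the SeedInstanceId property name is just an example):

    import boto3
    import cfnresponse  # helper available to inline (ZipFile) Lambda code

    ec2 = boto3.client("ec2")

    def handler(event, context):
        try:
            if event["RequestType"] in ("Create", "Update"):
                # The seed instance ID arrives via the custom resource's
                # properties; changing any property triggers an Update.
                instance_id = event["ResourceProperties"]["SeedInstanceId"]
                image = ec2.create_image(
                    InstanceId=instance_id,
                    Name="seed-ami-" + event["RequestId"],
                )
                cfnresponse.send(event, context, cfnresponse.SUCCESS,
                                 {"AmiId": image["ImageId"]})
            else:  # Delete: nothing to clean up in this sketch
                cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
        except Exception as exc:
            cfnresponse.send(event, context, cfnresponse.FAILED,
                             {"Error": str(exc)})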
I have tried setting up a Blue/Green deployment by copying the AutoScalingGroup; however, this leaves the CloudFormation stack detached from its original resources, as CodeDeploy creates a new copy and deletes the original. I understand from another post (https://forums.aws.amazon.com/thread.jspa?messageID=861085) that AWS is developing improvements for this, but for now I am trying the following workaround. Any ideas would be really helpful.
CloudFormation creates the following:
Elastic Load Balancer
Target Group
AutoScalingGroup One (with LaunchConfiguration)
AutoScalingGroup Two (same as One, but with no instances)
DeploymentGroup (with In-Place DeploymentStyle) which deploys a revision to AutoScalingGroup One
After CloudFormation finishes, I do the following manually in the console:
I update the created Deployment Group to use the Blue/Green deployment style and set its original environment to AutoScalingGroup One.
I add an instance to AutoScalingGroup Two.
I create a deployment in CodeDeploy. However, this does not work: when a new instance is attached to AutoScalingGroup Two, it is added to the TargetGroup immediately and does not pass health checks.
Any ideas on how to implement a set of resources with CloudFormation that makes blue/green deployments simple, i.e. one click in CodeDeploy with the CloudFormation resources still remaining intact?
Regarding the initial issue you describe: did you experiment with the Health Check Grace Period? That should prevent the problems you describe with the failing health check when the instance hits the target group.
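If the group is managed outside a template, the grace period can be adjusted with a single boto3 call (a sketch; the group name and the 300-second value are placeholders to tune to your bootstrap time):

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Give new instances time to bootstrap before ELB health checks
    # can mark them unhealthy.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName="my-asg",
        HealthCheckGracePeriod=300,
    )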
An alternative approach (which has plenty of its own downsides) is to adapt the CloudFormation template to compensate for the behavior when CodeDeploy replaces the ASG in a Blue-Green deployment.
Within the ASG template, create a "yes/no" parameter called "ManageAutoScalingGroup". Create the ASG conditionally on the value of this parameter being "yes". Set a deletion policy of Retain on the ASG so that CloudFormation will leave the group in place when the parameter is changed to "no". Spin up the group with a default of "yes" for this parameter.
Once the instances are healthy, and CodeDeploy has completed an initial in-place deployment, you can change the DeploymentGroup to use Blue-Green where CodeDeploy will replace your ASG.
Be sure to update the stack and change ManageAutoScalingGroup to "no". CloudFormation will delete the ASG reference from your stack, but it will leave the resource in place.
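Flipping the parameter is just a stack update; a boto3 sketch, assuming the stack and parameter names from above (any other stack parameters would need UsePreviousValue entries):

    import boto3

    cfn = boto3.client("cloudformation")

    # Flip the parameter so CloudFormation drops its reference to the ASG;
    # the Retain deletion policy leaves the group itself running.
    cfn.update_stack(
        StackName="my-asg-stack",   # placeholder stack name
        UsePreviousTemplate=True,
        Parameters=[
            {"ParameterKey": "ManageAutoScalingGroup", "ParameterValue": "no"},
        ],
    )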
This will give you the one-click deployments you desire through CodeDeploy, but be aware that it comes with some costs:
CodeDeploy will not copy the TargetGroup parameter of your Auto Scaling Group (as described by others in https://forums.aws.amazon.com/thread.jspa?threadID=249406&tstart=0). You should be able to work around this with a clever use of CloudWatch event rules and SSM Automation to mark the instance unhealthy when the ALB changes its status.
The copies that CodeDeploy produces seem to be fairly unreliable. At least once, I've seen my LaunchTemplate version reset to an incorrect value. I've also run into scenarios where the deployment group lost track of which ASG it was supposed to track.
Continuing to apply changes from your template to the ASG is a hassle. The process to "refresh" the group is:
1) Revert the parameter described earlier so that CloudFormation generates a new group.
2) Modify the deployment group to target this new group and complete an in-place deployment.
3) Modify the deployment group to restore Blue/Green deployments and update your stack accordingly.
I'm not too impressed with CodeDeploy in this department. I'd love to see it work the same way as an ASG that is set to replace its instances when a new LaunchTemplate version is applied. If you are feeling a bit ambitious, you could mimic this behavior by combining Step Functions with ASG instance lifecycle hooks. This is a solution I'm considering once I have the time.
Let's say an AWS stack was created using CloudFormation.
Now one of those resources was modified outside CloudFormation.
1) Is it possible to have CloudFormation specifically recreate those modified resources? Based on my understanding, we can't do that, because CloudFormation does not identify a difference and so does not recreate the modified resources. Is my observation correct?
2) Also, what options do I have to revert a stack to its original state if it was modified outside CloudFormation?
Here is one possible hack you could use without deleting the entire stack:
Remove from the template the specific resource that was accidentally deleted.
Update the stack, which brings your stack and the resources in your account back in sync.
Revert the template to its state before step 1 and update again, which will recreate the accidentally deleted resource.
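Scripted, the hack is just two successive stack updates (a sketch; the template file names are placeholders):

    import boto3

    cfn = boto3.client("cloudformation")
    waiter = cfn.get_waiter("stack_update_complete")

    # Update with the template that omits the deleted resource, bringing
    # the stack back in sync with what actually exists in the account.
    cfn.update_stack(
        StackName="my-stack",
        TemplateBody=open("template-without-resource.yaml").read(),
    )
    waiter.wait(StackName="my-stack")

    # Update again with the original template to recreate the resource.
    cfn.update_stack(
        StackName="my-stack",
        TemplateBody=open("template-original.yaml").read(),
    )
    waiter.wait(StackName="my-stack")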
Unfortunately, the answer to both of your questions is no.
If you modify the resources in the stack after the stack creation status is COMPLETE, there is nothing CFN can do, since it doesn't keep track of modifications to its resources.
You have no option other than deleting the current stack and creating a new one.
First, beware that modifying CloudFormation-created resources outside of CloudFormation is explicitly discouraged, according to AWS CloudFormation Best Practices:
Manage All Stack Resources Through AWS CloudFormation
After you launch a stack, use the AWS CloudFormation console, API, or AWS CLI to update resources in your stack. Do not make changes to stack resources outside of AWS CloudFormation. Doing so can create a mismatch between your stack's template and the current state of your stack resources, which can cause errors if you update or delete the stack.
However, if you've modified a CloudFormation-managed resource accidentally and need to recover, you may have some limited options beyond simply deleting and re-creating the stack altogether (which may not be an acceptable option):
It is not possible for CloudFormation to automatically update its internal state based on the current state of an externally-modified resource.
However, depending on the exact resource type, in some cases you can manually update CloudFormation afterwards by applying a stack update that matches the current state of the resource.
Similarly, it is not possible for CloudFormation to automatically revert an externally-modified resource back to its original unmodified CloudFormation state.
However, depending on the exact resource type, in some cases you can either:
Revert a resource by manually updating the resource back to its original state;
Update the resource by applying a stack update, bringing both the CloudFormation stack and the managed resource to an altogether new state that will once again be in sync.
To force EC2 re-creation, I use a simple trick: when deploying, I alternate between two similar AMI IDs. That has helped me when testing user data or other things I want to verify during the EC2 bootstrap. Again, it only works for EC2.
Unfortunately, the answer is no.
If you made changes to the stack's resources after creation, CloudFormation can't track those changes.
If you need to revert those changes, you must delete the stack and rebuild it.