I’m trying to update a rule whose source contains an ASG (application security group) by adding an additional ASG, using the following:
networkSecurityGroup
.update()
.updateRule(ruletest)
.withSourceApplicationSecurityGroup("/subscriptions/<subscription_id>/resourceGroups/TEST/providers/Microsoft.Network/applicationSecurityGroups/ASG2")
.parent()
.apply();
The code seems to execute with no errors or exceptions, but the ASG is not added to the source.
I am new to AWS and have a couple of questions about VPC creation using the CloudFormation service.
1. Scenario: I have created the YAML file and executed it as a stack. The VPC, route table, and all the subnets get created successfully. I then deleted one of the subnets manually (through the console). Now I want that subnet back, so I tried to run an "update" on the stack using the current template (though I have not made any modification to the template). It shows an error that there is no modification in the template.
Question 1: How can I recreate the deleted resource through the stack without modifying the template?
2. Scenario: When we create a VPC, a default route table and NACL are created.
Question 2: Why can't we use the default route table and NACL through CloudFormation?
Question 3: Is there any command with which we can get the default route table and NACL IDs in CloudFormation? (For example, there is a command with which we can associate subnets to a route table; something like that.)
Thanks in advance.
It can indeed be tricky when things have changed outside of CloudFormation's state. Unlike some other IaC tools, it doesn't 'correct' the state of resources when they have deviated from the declared state.
Remove the subnet resource from the template and update the stack; then add the subnet back to the template and update the stack again.
It's actually best practice to create new route tables and NACLs and associate them with the corresponding subnets, so there is no need to modify the default resources.
You can create a CloudFormation custom resource to query for the IDs and pass them to other resources. However, this is not recommended, for the reasons given in answer 2. Ask yourself: what am I trying to achieve here? Is it really necessary?
In my workplace, we have a process of replacing each EC2 instance's AMI every month with a newly patched private AMI.
Our internal operations team makes these patched AMIs available to us as private EC2 AMIs.
In our Terraform script, we change the name of the AMI to the new one before executing the script via Jenkins.
However, we have noticed that after the script is executed, the EC2 instances are not affected by the AMI name change; we have to manually terminate each EC2 instance for the AMI change to take effect.
What I want to know is:
Is this a problem someone has faced before?
Is there a way to remove the manual termination of instances in Terraform, or is there a way in Terraform by which the changes will be taken care of automatically?
The instances in the ASG are not being updated with the new AMI because, by default, only your launch configuration (LC) or launch template (LT) is updated with the new AMI. This does not automatically cause the instances to be replaced with ones using the new LC/LT.
However, AWS has since introduced instance refresh to address this specific issue. This functionality was subsequently added to Terraform and is configured using the instance_refresh block of the aws_autoscaling_group resource.
Thus, you could set up instance_refresh in your aws_autoscaling_group; changes to the associated launch_configuration or launch_template trigger a refresh by default.
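A minimal sketch of what this could look like (the resource names, sizes, and variables here are illustrative assumptions, not taken from your setup):

# Launch template whose AMI is swapped each month for the newly patched one.
resource "aws_launch_template" "app" {
  name_prefix   = "app-"
  image_id      = var.patched_ami_id
  instance_type = "t3.micro"
}

resource "aws_autoscaling_group" "app" {
  name                = "app-asg"
  min_size            = 1
  max_size            = 3
  desired_capacity    = 2
  vpc_zone_identifier = var.subnet_ids

  launch_template {
    id      = aws_launch_template.app.id
    version = aws_launch_template.app.latest_version
  }

  # Changes to the launch template (e.g. a new AMI) trigger a refresh by
  # default; instances are replaced in batches, keeping at least half of
  # the group healthy and in service at all times.
  instance_refresh {
    strategy = "Rolling"
    preferences {
      min_healthy_percentage = 50
    }
  }
}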
I have tried setting up a Blue/Green deployment by copying the AutoScalingGroup; however, this leaves the CloudFormation stack detached from its original resources, because CodeDeploy creates a new copy and deletes the original. I understand from another post (https://forums.aws.amazon.com/thread.jspa?messageID=861085) that AWS is developing improvements for this, but for now I am trying the following workaround. Any ideas would be really helpful.
CloudFormation creates the following:
Elastic Load Balancer
Target Group
AutoScalingGroup One (with LaunchConfiguration)
AutoScalingGroup Two (same as one but has no instances)
DeploymentGroup (with In-Place DeploymentStyle) which deploys a revision to AutoScalingGroup One
After CloudFormation finishes, I do the following manually in the console:
I update the created Deployment Group to be of Deployment Style Blue/Green and set its original environment to be AutoScalingGroup One.
I add an instance to AutoScalingGroup Two
I create a deployment in CodeDeploy. However, this does not work: when a new instance is attached to AutoScalingGroup Two, it gets added to the TargetGroup immediately and does not pass health checks.
Any ideas on how to implement a set of resources with CloudFormation that will make blue/green deployments simple, i.e. one click in CodeDeploy with the CloudFormation resources remaining intact?
With regard to the initial issue you are describing, did you experiment with the Health Check Grace Period? That should prevent the problems you describe with the failing health check when the instance hits the target group.
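For reference, a minimal sketch of how that could look in the CloudFormation template (the logical IDs and values here are illustrative, not taken from your stack):

AutoScalingGroupTwo:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    LaunchConfigurationName: !Ref LaunchConfigurationTwo
    MinSize: "0"
    MaxSize: "1"
    TargetGroupARNs:
      - !Ref TargetGroup
    # Use the target group's health check, but give new instances time
    # to boot and deploy before the check can mark them unhealthy.
    HealthCheckType: ELB
    HealthCheckGracePeriod: 300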
An alternative approach (which has plenty of its own downsides) is to adapt the CloudFormation template to compensate for the behavior when CodeDeploy replaces the ASG in a Blue-Green deployment.
Within the ASG template, create a "yes/no" parameter called "ManageAutoScalingGroup". Create the ASG conditionally on the value of this parameter being "yes". Set a deletion policy of Retain on the ASG so that CloudFormation will leave the group in place when the parameter is changed to "no". Spin up the group with a default of "yes" for this parameter.
Once the instances are healthy, and CodeDeploy has completed an initial in-place deployment, you can change the DeploymentGroup to use Blue-Green where CodeDeploy will replace your ASG.
Be sure to update the stack, changing ManageAutoScalingGroup to "no". CloudFormation will delete the reference from your stack, but it will leave the resource in place.
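A minimal sketch of the relevant template pieces (logical IDs and sizes are illustrative assumptions):

Parameters:
  ManageAutoScalingGroup:
    Type: String
    AllowedValues: ["yes", "no"]
    Default: "yes"

Conditions:
  ShouldManageASG: !Equals [!Ref ManageAutoScalingGroup, "yes"]

Resources:
  AutoScalingGroupOne:
    Type: AWS::AutoScaling::AutoScalingGroup
    # Retain keeps the real group alive when the condition flips to "no"
    # and CloudFormation drops the resource from the stack.
    Condition: ShouldManageASG
    DeletionPolicy: Retain
    Properties:
      LaunchConfigurationName: !Ref LaunchConfiguration
      MinSize: "1"
      MaxSize: "2"
      TargetGroupARNs:
        - !Ref TargetGroup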
This will give you the one-click deployments you desire through CodeDeploy, but be aware that it comes with some costs:
CodeDeploy will not copy the TargetGroup parameter of your Auto Scaling Group (as described by others in https://forums.aws.amazon.com/thread.jspa?threadID=249406&tstart=0). You should be able to work around this with a clever use of CloudWatch event rules and SSM Automation to mark the instance unhealthy when the ALB changes its status.
The copies that CodeDeploy produces seem to be fairly unreliable. At least once, I've seen my LaunchTemplate version reset to an incorrect value. I've also run into scenarios where the deployment group lost track of which ASG it was supposed to track.
Continuing to apply changes from your template to the ASG is a hassle. The process to "refresh" the group is: 1) Revert the parameter described earlier such that CloudFormation will generate a new group. 2) Modify the deployment group to target this group and complete an in-place deployment. 3) Modify the deployment group to restore Blue-Green deployments and update your stack accordingly.
I'm not too impressed with CodeDeploy in this department. I'd love to see them work in the same fashion as an ASG that is set to replace itself on application of a new LaunchTemplate version. If you are feeling a bit ambitious, you could mimic this behavior by leveraging Step Functions with ASG instance lifecycle hooks. This is a solution that I'm considering once I have the time.
I've been working on a DevOps pipeline for an application hosted on AWS. I want to make an improvement to my current setup, but I'm not sure the best way to go about doing it. My current set up is as follows:
ASG behind ELB
Desired capacity: 1
Min capacity: 1
Max capacity: 1
Code deployment process:
move deployable to S3
terminate instance in ASG
new instance is automatically provisioned
new instance pulls down deployable in user data
The problem with this setup is that the environment is down from when the instance is terminated to when the new instance has been completely provisioned.
I've been thinking about ways that I can improve this process to eliminate the downtime, and I've come up with two possible solutions:
SOLUTION #1:
ASG behind ELB
Desired capacity: 1
Min capacity: 1
Max capacity: 2
Code deployment process:
move deployable to S3
launch new instance into ASG
new instance pulls down deployable in user data
terminate instance with old deployable
With this solution, there is always at least one instance capable of serving requests in the ASG. The problem is, ASGs don't seem to support a simple operation of manually calling on it to spin up a new instance. (They only launch new instances when the scaling policies call for it.) You can attach existing instances to the group, but this causes the desired capacity value to increase, which I don't want.
SOLUTION #2:
ASG behind ELB
Desired capacity: 2
Min capacity: 2
Max capacity: 2
Code deployment process:
move deployable to S3
terminate instance-A
new instance-A is automatically provisioned
instance-A pulls down new deployable by user data script
terminate instance-B
new instance-B is automatically provisioned
instance-B pulls down new deployable by user data script
Just as with the previous solution, there is always at least one instance available to serve requests. The problem is, there are usually two instances, even when only one is needed. Additionally, the code deployment process seems needlessly complicated.
So which is better: solution #1, solution #2, or some other solution I haven't thought of yet? Also a quick disclaimer: I understand that I'm using ASGs for something other than their intended purpose, but it seemed the best way to implement automated code deployments in line with AWS's "EC2 instances are cattle" philosophy.
The term you are looking for is "zero-downtime deployment."
The problem is, ASGs don't seem to support a simple operation of manually calling on it to spin up a new instance. (They only launch new instances when the scaling policies call for it.) You can attach existing instances to the group, but this causes the desired capacity value to increase, which I don't want.
If you change the desired capacity yourself (e.g. via the SetDesiredCapacity API call), the Auto Scaling Group will automatically launch an extra instance for you. For example, here is a simple way to implement zero-downtime deployment for your Auto Scaling Group (ASG):
Run the ASG behind an Elastic Load Balancer (ELB).
Initially, the desired capacity is 1, so you have just one EC2 Instance in the ASG.
To deploy new code, you first create a new launch configuration with the new code (e.g. new AMI or new User Data).
Next, you change the desired capacity from 1 to 2. The ASG will automatically launch a new EC2 Instance with the new launch configuration.
Once the new EC2 Instance is up and running and registered in your ELB, you change the desired capacity from 2 back to 1, and the ASG will automatically terminate the older EC2 Instance.
You can implement this manually or use existing tools to do it for you, such as:
Define your ASG using CloudFormation and specify an UpdatePolicy that does a zero-downtime rolling deployment (see the sketch after this list).
Define your ASG using Terraform and use the create_before_destroy lifecycle property to do a zero-downtime (sort-of) blue-green deployment as described here.
Define your ASG using Ansible and use the serial keyword to do rolling upgrades.
Use the aws-ha-release script.
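For the CloudFormation option, a minimal sketch of the UpdatePolicy (logical IDs and timings are illustrative assumptions):

MyAutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  UpdatePolicy:
    AutoScalingRollingUpdate:
      MinInstancesInService: 1    # always keep one instance serving traffic
      MaxBatchSize: 1             # replace one instance at a time
      PauseTime: PT5M             # wait up to 5 minutes per batch
      WaitOnResourceSignals: true # new instances must cfn-signal success
  Properties:
    LaunchConfigurationName: !Ref LaunchConfig
    MinSize: "1"
    MaxSize: "2"
    DesiredCapacity: "1"
    TargetGroupARNs:
      - !Ref TargetGroup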
You can learn more about the trade-offs between tools like Terraform, CloudFormation, Ansible, Chef, and Puppet here.
Even though this is a DevOps pipeline and not a production environment, what you are describing sounds like a blue/green deployment scenario in which you want to be able to switch between environments without downtime. I think the best answer is largely specific to your requirements (which we don't 100% know), but a guide like The DOs and DON'Ts of Blue/Green Deployment will be beneficial in finding the best way to achieve your goals, whether it is #1, #2, or something else.
I haven't been able to find anything that shows what order a deployment goes out in. We have a primary instance and then 3-4 autoscaling instances behind an ELB. We select the deployment targets by tags (for the autoscaling instances) and the primary instance by name, and we deploy to half at a time. We were hoping the autoscaling instances would always deploy first, so that if a deployment failed we could just terminate those instances, which is easier to fix. (Fixing the primary would be more manual work, since we can't just terminate it, for other reasons.)
Is there a way to specify the order in which a deployment should go out?
You cannot specify the order in which the instances will be deployed within a deployment group. AWS CodeDeploy sorts the instances in a deployment group based on instance AZ and tries to do best-effort striping across AZs. If you specifically want the Auto Scaling instances to go first, one way to work around this is to have a separate deployment group containing the Auto Scaling group.