Editing CloudFormation templates terminates existing instances and creates new instances - amazon-web-services

We are using a CloudFormation template for a set of VMs, and after each code deployment we need to edit the package version in the template parameters so that auto scaling picks up the latest package from the S3 bucket.
The issue is that editing the CloudFormation template triggers a CloudFormation-based upgrade of the instances (which involves destroying the existing machines and creating new ones from scratch, which is time consuming).
Is there any way we can prevent this?
Basically, we don't want the CloudFormation template to destroy and recreate the instances whenever we edit it.
EDIT: This is my Auto Scaling group setting:
"*********":{
"Type":"AWS::AutoScaling::AutoScalingGroup",
"Properties":{
"AvailabilityZones":[
{
"Ref":"PrimaryAvailabilityZone"
}
],
"Cooldown":"300",
"DesiredCapacity":"2",
"HealthCheckGracePeriod":"300",
"HealthCheckType":"EC2",
"LoadBalancerNames":[
{
"Ref":"elbxxbalancer"
}
],
"MaxSize":"8",
"MinSize":"1",
"VPCZoneIdentifier":[
{
"Ref":"PrivateSubnetId"
}
],
"Tags":[
{
"Key":"Name",
"Value":"my-Server",
"PropagateAtLaunch":"true"
},
{
"Key":"VPCRole",
"Value":{
"Ref":"VpcRole"
},
"PropagateAtLaunch":"true"
}
],
"TerminationPolicies":[
"Default"
],
"LaunchConfigurationName":{
"Ref":"xxlaunch"
}
},
"CreationPolicy":{
"ResourceSignal":{
"Timeout":"PT10M",
"Count":"1"
}
},
"UpdatePolicy":{
"AutoScalingRollingUpdate":{
"MinInstancesInService":"1",
"MaxBatchSize":"1",
"PauseTime":"PT10M",
"WaitOnResourceSignals":"true"
}
}
},

You can look at the documentation and check the Update requires: field on the property you are modifying in your CF template.
If it says Replacement, CloudFormation will recreate the instance as a new physical resource (with a new physical ID).
If it says Some interruptions, it will make the instance unavailable (in the EC2 case, by stopping and restarting it) but will not recreate it, keeping the same physical ID.
If it says No interruption, it will not impact the instance at all.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-properties-ec2-instance.html
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/using-cfn-updating-stacks-update-behaviors.html
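If you want to check what a particular template edit will do before running it, you can also create a change set and inspect the planned changes: each resource change reports its action and a replacement flag. A minimal boto3 sketch, where my-stack, preview-package-bump, PackageVersion and template.json are placeholder names:
import boto3

cfn = boto3.client("cloudformation")

# Create a change set against the existing stack with the new parameter value.
with open("template.json") as f:
    template_body = f.read()

cfn.create_change_set(
    StackName="my-stack",
    ChangeSetName="preview-package-bump",
    ChangeSetType="UPDATE",
    TemplateBody=template_body,
    Parameters=[{"ParameterKey": "PackageVersion", "ParameterValue": "1.2.3"}],
)

# Wait for the change set, then inspect each planned change before executing anything.
waiter = cfn.get_waiter("change_set_create_complete")
waiter.wait(StackName="my-stack", ChangeSetName="preview-package-bump")

resp = cfn.describe_change_set(StackName="my-stack", ChangeSetName="preview-package-bump")
for change in resp["Changes"]:
    rc = change["ResourceChange"]
    # Replacement is "True", "False" or "Conditional" (only present for Modify actions).
    print(rc["LogicalResourceId"], rc["ResourceType"], rc["Action"], rc.get("Replacement"))
If no resource shows Replacement, you can execute the change set (or just delete it and run the normal update).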

Related

Trigger AWS Step Function via Cloudwatch and pass some variables

A file landing in an S3 bucket triggers a CloudWatch event (I am able to capture the URL and key via $.detail.xxxx).
Code below.
How can I then pass these to a step function, and from the step function pass them to a Fargate task as environment variables?
I am trying to use Terraform's "aws_cloudwatch_event_target", however I cannot find good examples of launching and passing inputs to a step function.
Here is the full resource I have so far:
resource "aws_cloudwatch_event_target" "cw-target" {
arn = aws_sfn_state_machine.my-sfn.arn
rule = aws_cloudwatch_event_rule.cw-event-rule.name
role_arn = aws_iam_role.my-iam.arn
input_transformer {
input_paths = {
bucket = "$.detail.requestParameters.bucketName"
}
}
input_template = <<TEMPLATE
{
"containerOverrides": [
{
"environment": [
{ "name": "BUCKET", "value": <bucket> },
]
}
]
}
TEMPLATE
}
On the CloudWatch event in the console I can see
{"bucket":"$.detail.requestParameters.bucketName"}
and
{
  "containerOverrides": [
    {
      "environment": [
        { "name": "BUCKET", "value": <bucket> }
      ]
    }
  ]
}
I just need to know how to fetch this information inside the step function and then pass it as an environment variable when calling Fargate.
For using input transformers in AWS EventBridge, check the guide linked below.
You can transform the payload of the event to your liking before it reaches the step function by using an InputPath (as you have already done) and an input template, where you use the variables defined in your InputPath to define a new payload. This new payload will be used as the input for the step function.
Here are more examples of input paths and templates: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-transform-target-input.html
Edit:
If you want to start a fargate task with these environment variables, your best option is to indeed use the environment Overrides to specify new env variables on each task.
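To then consume that input inside the state machine, the ECS runTask integration lets you map values from the execution input into container environment overrides. Below is a rough sketch of such a definition, built as a Python dict so it can carry comments; it assumes the input template is simplified to render just {"bucket": <bucket>} as the execution input, and my-cluster, my-task-def, my-container and the subnet ID are placeholders:
import json

definition = {
    "StartAt": "RunFargateTask",
    "States": {
        "RunFargateTask": {
            "Type": "Task",
            # .sync makes the state wait until the task has stopped
            "Resource": "arn:aws:states:::ecs:runTask.sync",
            "Parameters": {
                "LaunchType": "FARGATE",
                "Cluster": "my-cluster",
                "TaskDefinition": "my-task-def",
                "NetworkConfiguration": {
                    "AwsvpcConfiguration": {"Subnets": ["subnet-00000000"]}
                },
                "Overrides": {
                    "ContainerOverrides": [
                        {
                            "Name": "my-container",
                            "Environment": [
                                # "Value.$" pulls the bucket name from the execution
                                # input produced by the EventBridge input transformer
                                {"Name": "BUCKET", "Value.$": "$.bucket"}
                            ],
                        }
                    ]
                },
            },
            "End": True,
        }
    },
}

print(json.dumps(definition, indent=2))
The printed JSON can then be used as the definition of the aws_sfn_state_machine resource in the Terraform above.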
Old edit:
If you want to start a fargate task with these environment variables, you have two options:
1. Create a new task definition in your step function with the specified env vars, then start a new Fargate task from this task definition.
2. Only use one task definition created beforehand, and use env files in that task definition. More info here. Basically, when the task is started it will fetch a file from S3 and use the values in that file as env vars. Then your step function only has to contain a step to upload a file to S3, and then start a Fargate task using the existing task definition (a sketch follows below).
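For option 2, the relevant piece is the environmentFiles setting on the container definition. A rough boto3 sketch with hypothetical names and ARNs (my-task-def, my-container, my-config-bucket, the account ID); note the task execution role also needs permission to read the file from S3:
import boto3

ecs = boto3.client("ecs")

# vars.env in S3 contains lines like BUCKET=my-bucket
ecs.register_task_definition(
    family="my-task-def",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::111111111111:role/ecsTaskExecutionRole",
    containerDefinitions=[
        {
            "name": "my-container",
            "image": "111111111111.dkr.ecr.eu-west-1.amazonaws.com/my-image:latest",
            "essential": True,
            # At task start, ECS pulls this file from S3 and exposes its
            # contents to the container as environment variables.
            "environmentFiles": [
                {"type": "s3", "value": "arn:aws:s3:::my-config-bucket/vars.env"}
            ],
        }
    ],
)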

Access API gateway endpoint in cloudformation using custom resource

I want to be able to call an API Gateway endpoint from within CloudFormation, parse the response from the output, and pass the relevant information to one of the other services in the CloudFormation template.
I have an api endpoint
https://123x123x.execute-api.eu-west-2.amazonaws.com/myendpoint/tenants
with
x-api-key: b8Yk6m63rq8XRnMDKa2PeWE3KvBcU7ZyFIn0Vvrty
Content-Type: application/json
which returns
{
  "tenants": [
    {
      "tenantId": "tenant-1234",
      "AZ": "us-west-2c",
      "tenantUsers": 24,
      "instanceType": "m1.small"
    },
    {
      "tenantId": "tenant-2345",
      "AZ": "us-west-2b",
      "tenantUsers": 32,
      "instanceType": "t2.micro"
    },
    {
      "tenantId": "tenant-3456",
      "AZ": "us-west-2a",
      "tenantUsers": 12,
      "instanceType": "m1.large"
    }
  ]
}
I want to be able to set the InstanceTypeParameter, which needs to be a list ["t2.micro", "m1.small", "m1.large"] retrieved from the above response, and pass it in as a parameter in CloudFormation as below.
"Ec2Instance" : {
"Type" : "AWS::EC2::Instance",
"Properties" : {
"InstanceType" : { "Ref" : "InstanceTypeParameter" },
"ImageId" : "ami-0ff8a91507f77f867"
}
}
I am assuming the only way to do this would be using a custom resource. Can someone help me develop that (at least some pseudocode)?
You are correct, it must be a custom resource. Below I will provide general steps which can be followed to achieve your aim.
Develop a standalone Lambda function. Just a plain, regular function for now, which is going to call the API, get its response, parse it, and prepare the result you require based on the input parameters you provide. The aim is to test how such a Lambda function will work. It's like a blueprint for the custom resource to be developed.
Once you know how the Lambda function will work, it's time to prepare the custom resource. I recommend creating a new function for that using custom-resource-helper. The helper simplifies the development of custom resources a lot. To use it, you will have to prepare a zip deployment package that bundles it with your function handler. Since you know from step 1 exactly how your function should work, you only need to amend it to work in the context of the helper. Adding the modified code into def create(event, context) of the helper should be enough. delete(event, context) can be empty, as you are not creating any new physical resource in AWS. What update(event, context) does is up to you.
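As a rough illustration of step 2, a crhelper-based handler might look like the sketch below. The module layout and the ApiUrl/ApiKey property names are assumptions for illustration, not something given in the question; the comma-joined string stored under InstanceTypeParameter can be split in the template with Fn::Split.
# Bundled with crhelper in the Lambda deployment zip.
import json
import urllib.request

from crhelper import CfnResource

helper = CfnResource()


@helper.create
def create(event, context):
    # Input parameters come from the Properties of the custom resource in the template.
    url = event["ResourceProperties"]["ApiUrl"]
    api_key = event["ResourceProperties"]["ApiKey"]

    req = urllib.request.Request(url, headers={"x-api-key": api_key})
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)

    # Values stored in helper.Data become available via !GetAtt in the template.
    helper.Data["InstanceTypeParameter"] = ",".join(
        t["instanceType"] for t in body["tenants"]
    )


@helper.update
def update(event, context):
    # Re-run the same logic on stack updates, or leave as a no-op.
    create(event, context)


@helper.delete
def delete(event, context):
    # Nothing to clean up: no physical resource was created.
    pass


def handler(event, context):
    helper(event, context)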
Once you deploy your custom resource Lambda, it's time to actually create a custom resource in your CFN template. The general form is as follows:
MyGetExternalApiResponseResource:
  Type: Custom::CallExternalAPI
  Version: "1.0"
  Properties:
    ServiceToken: <ARN of function from step 2>
    InputParameterToFunction1: <for example, api key>
    InputParameterToFunction2: <for example, url of api to call>
Expect lots of debugging and troubleshooting; it will almost certainly not work the first time.
Once it works, you can get return values from the custom resource using either !Ref MyGetExternalApiResponseResource or !GetAtt MyGetExternalApiResponseResource.InstanceTypeParameter. It depends which way you prefer. The second way would probably be better, as the custom resource doesn't create a physical resource. Usually !Ref is used for the ID of a physical resource that was created, e.g. the ID of an AMI or of an instance.
To fully automate it, you would also deploy the code for the custom Lambda via a CFN template, instead of doing this manually. In this scenario your template would both create the custom resource Lambda function and the custom resource itself using that function.

Execute stack after cloudFormation Deploy

I have an API Gateway made with the Serverless Application Model that I integrated with GitHub via CodePipeline. Everything is running fine: the pipeline reads the webhook, builds from the buildspec.yml, and deploys the CloudFormation file, creating or updating the stack.
The thing is, after the stack is updated it still needs an approval in the console. How can I make the change set on the stack update execute automatically?
It sounds like your pipeline is doing one of two things, unless I'm misunderstanding you:
Making a change set but not executing it in the CloudFormation console.
Proceeding to a manual approval step in the pipeline and awaiting your confirmation.
Since #2 is simply solved by removing that step, let's talk about #1.
Assuming you are successfully creating a change set called ChangeSetName, you need a step in your pipeline with the following (CFN JSON template syntax):
"Name": "StepName",
"ActionTypeId": {"Category": "Deploy",
"Owner": "AWS",
"Provider": "CloudFormation",
"Version": "1"
},
"Configuration": {
"ActionMode": "CHANGE_SET_EXECUTE",
"ChangeSetName": {
"Ref": "ChangeSetName"
},
...
Keep the other parameters (e.g. RoleArn) consistent, as per usual.

How to update machine type property through GCP deployment manager

I have a Python template, as shown below, for a compute instance type, along with the needed config.yaml file.
...
CONTROLLER_MACHINE_TYPE = 'n1-standard-8'

controller_template = {
    'name': 'controller-it',
    'type': 'it_template.py',
    'properties': {
        'machineType': CONTROLLER_MACHINE_TYPE,
        'dockerImage': CONTROLLER_IMAGE,
        'dockerEnv': {
            'ADC_LISTEN_QUEUE': 'controller-subscriber'
        },
        'zone': ZONE,
        'network': NETWORK_NAME,
        'saEmail': SA_EMAIL
    }
}
it_template.py contents
def GenerateConfig(context):
    resources = [{
        'name': context.env['name'],
        'type': 'compute.v1.instanceTemplate',
        'properties': {
            'zone': context.properties['zone'],
            'properties': {
                "machineType": context.properties['machineType'],
                "metadata": {
                    "items": [{
                        "key": 'gce-container-declaration',
                        "value": GenerateManifest(context)
                    }]
                }, ...
I have deployed it with an environment named qa. Now, after some time, I realized that I need to change the machine type of this instance. For example, instead of n1-standard-8 I want my qa environment to use a different machine type for this resource.
However, I don't see any example that mentions updating a property of an existing resource.
Can we update a property of a resource in an environment using GCP Deployment Manager?
Or do I need to add a new resource with another name and the desired machine type?
Update
As suggested by @Jordi Miralles, I modified my template to have machineType as n1-standard-16 and tried to update the deployment.
But I got the error below:
gcloud deployment-manager deployments update qa --config dm_config.yaml
The fingerprint of the deployment is KkD38j9KYiBTiaIW8SltbA==
Waiting for update [operation-1525444590548-56b623ef1b421-b4733efd-53174d1b]...failed.
ERROR: (gcloud.deployment-manager.deployments.update) Error in Operation [operation-1525444590548-56b623ef1b421-b4733efd-53174d1b]: errors:
- code: NO_METHOD_TO_UPDATE_FIELD
message: No method found to update field 'properties' on resource 'controller-it'
of type 'compute.v1.instanceTemplate'. The resource may need to be recreated with
the new field.
Please help.
You can update resources with Deployment Manager; there is no need to create a new resource with a different name if you want to change an existing one. See updating a deployment.
You can use your existing deployment yaml file with the changes you want to apply and then update your existing deployment:
gcloud deployment-manager deployments update [EXISTING DEPLOYMENT] --config [UPDATED YAML]
Take into account that your instance must be stopped. All other implications of changing the machine type apply. Also, no data on any of your persistent disks will be lost.
Remember to turn on the instances when the update finishes!

Cloudformation - Redeploy environment that uses a recordset (with Jenkins)

TL;DR - What's the recommended way, using a CI server, to keep an AWS environment up to date, and always pointed to from the same CNAME?
We're just starting to use AWS with a new project, and as part of the project I've been tasked with creating a simple demo environment, and updating this environment each night to show the previous day's progress.
I'm using Jenkins and the Cloudformation plugin to do this, and it works great in creating a simple EC2 instance in an existing security group, pointed to by a Route53 CNAME so it can be browsed at subdomain.example.com.
The problem I have is that I can't redeploy the same stack, because the recordset already exists, and CF won't overwrite it.
There are lots of guides on how to deploy an environment, but I'm struggling to find one on how to keep an environment up to date.
So I guess my question is: What's the recommended way, using a CI server, to keep an AWS environment up to date, and always pointed to from the same CNAME?
I agree with the comments on your question, i.e. it's probably better to create a clean server and upload/update to it via continuous integration (Jenkins). Docker, which you mentioned in a later comment, is super useful in this scenario.
However, if you are leaning towards "immutable infrastructure" and want everything encapsulated in your CloudFormation template (including creating a record in Route53), you could do something like the following code snippet in your AWS::CloudFormation::Init section (see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-init.html for more info):
"Resources": {
"MyServer": {
"Type": "AWS::EC2::Instance",
"Metadata": {
"AWS::CloudFormation::Init": {
"configSets" : { "Install" : [ "UpdateRoute53", "ConfigSet2, .... ] },
"UpdateRoute53" : {
"files" : {
"/usr/local/bin/cli53" : {
"source" : "https://github.com/barnybug/cli53/releases/download/0.6.3/cli53-linux-amd64",
"mode" : "000755", "owner" : "root", "group" : "root"
},
"/tmp/update_route53.sh" : {
"content" : { "Fn::Join" : ["", [
"#!/bin/bash\n\n",
"PRIVATE_IP=`curl http://169.254.169.254/latest/meta-data/local-ipv4/`\n",
"/usr/local/bin/cli53 rrcreate ",
{"Ref": "Route53HostedZone" },
" \"", { "Ref" : "ServerName" },
" 300 A $PRIVATE_IP\" --replace\n"
]]},
"mode" : "000755", "owner" : "root", "group" : "root"
}
},
"commands" : {
"01_UpdateRoute53" : {
"command" : "/tmp/update_route53.sh > /tmp/update-route53.log 2>&1"
}
}
}
}
},
"Properties": { ... }
}
}
....
I've omitted large chunks of the template to focus on the important info. The "UpdateRoute53" section creates 2 files:
/usr/local/bin/cli53 - CLI53 is a great little wrapper program around AWS Route53 (as the AWS CLI's route53 commands are pretty horrible to use, i.e. they require creating large chunks of JSON) - see https://github.com/barnybug/cli53 for more info on CLI53.
/tmp/update_route53.sh - creates a script to upload to Route53 via the CLI53 binary we installed in (1). This script determines the PRIVATE_IP via a curl command to the special AWS metadata endpoint (see http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-instance-metadata.html for more details). The "zone id" of the correct hosted zone is injected via a CloudFormation parameter (i.e. {"Ref": "Route53HostedZone" }). Finally, the name of the record comes from the "ServerName" parameter, but how this is set could vary from template to template.
In the "commands" section we run the script we created in the "files" section (2) and output the results to a log file in the /tmp folder.
NOTE (1) - The parameter Route53HostedZone can be declared as follows:
"Route53HostedZone": {
"Description": "Route 53 hosted zone for updating internal DNS",
"Type": "AWS::Route53::HostedZone::Id",
"Default": "VIWIWK4PYAC23B"
}
The cool thing about the "AWS::Route53::HostedZone::Id" parameter type is that it displays a combo box (when running a CloudFormation template via the AWS web console) showing the zone name, with the value being the Zone ID.
NOTE (2) - The --replace attribute in the CLI53 script overrides existing records which is probably what you want.
NOTE (3) - Another option would be to SSH via Jenkins (e.g. using the "Publish Over SSH Plugin" - https://wiki.jenkins-ci.org/display/JENKINS/Publish+Over+SSH+Plugin), determine the private IP and, using the CLI53 script, update Route53 either from the server you've logged into or even the build server (when Jenkins is running).
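As a variation on NOTE (3), the same upsert can also be done without cli53, e.g. with boto3. A minimal sketch, reusing the hosted zone ID and record name from above (both placeholders to adjust) and assuming it runs on the instance itself, since the metadata endpoint is only reachable there:
#!/usr/bin/env python3
# Upsert an A record for this instance, equivalent to the cli53 rrcreate --replace call.
import urllib.request

import boto3

HOSTED_ZONE_ID = "VIWIWK4PYAC23B"       # same zone id as the Route53HostedZone parameter
RECORD_NAME = "subdomain.example.com."  # same record the Jenkins job points at

# Same metadata endpoint the shell script curls for the private IP.
private_ip = urllib.request.urlopen(
    "http://169.254.169.254/latest/meta-data/local-ipv4"
).read().decode()

boto3.client("route53").change_resource_record_sets(
    HostedZoneId=HOSTED_ZONE_ID,
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",  # overwrite the record if it already exists, like --replace
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": private_ip}],
            },
        }]
    },
)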
Lots of options - hope you get it sorted! :-)