Trying to understand CloudFormation's behavior better.
I have a template that defines an ECS service:
ECSService:
  Type: AWS::ECS::Service
  Properties:
    Cluster: mycluster
    ...
    DesiredCount: 2
I go to the service that CloudFormation creates and set the DesiredCount to 0.
Then I deploy the template again, but it doesn't change the DesiredCount back to 2.
Why doesn't it assert the full configuration?
The functionality you're looking for is called "Drift Detection".
This feature is not yet a part of CloudFormation, but it is currently in beta and is a planned release for 2018, according to Amazon.
It's generally a good practice not to modify resources managed by a CloudFormation stack. If you need to update a resource, perform a stack update.
Update (11/19): Good news! AWS has released this feature: https://aws.amazon.com/blogs/aws/new-cloudformation-drift-detection/
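For reference, now that the feature is available you can also check for drift from the AWS CLI (a rough sketch; the stack name is a placeholder):

# Start a drift detection run for the whole stack
aws cloudformation detect-stack-drift --stack-name my-ecs-stack
# List the per-resource drift results (this is where the modified DesiredCount would show up)
aws cloudformation describe-stack-resource-drifts --stack-name my-ecs-stack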
Related
I'm trying to replicate an extremely basic manually configured AWS ECS Fargate deployment of a single container using CloudFormation. Looks like I'm almost there; the resulting stack spins up a container I can access. But there are no logs.
I compared my manual task (created via the UI) with the CloudFormation one, and added an identical log configuration to the container definition, simply changing the log group from /ecs/foo to /ecs/bar:
LogConfiguration:
  LogDriver: awslogs
  Options:
    awslogs-create-group: true
    awslogs-group: '/ecs/bar'
    awslogs-region: !Ref AWS::Region
    awslogs-stream-prefix: 'ecs'
But now the task fails to start a container. It gives an error like this:
Resourceinitializationerror: failed to validate logger args: create stream has been retried 1 times: failed to create Cloudwatch log group: AccessDeniedException: User: arn:aws:sts::…:assumed-role/ecsTaskExecutionRole/… is not authorized to perform: logs:CreateLogGroup on resource: arn:aws:logs:us-east-1:…:log-group:/ecs/bar:log-stream: because no identity-based policy allows the logs:CreateLogGroup action status code: 400, request id: … : exit status 1
One documentation page mentions this logs:CreateLogGroup permission, and says:
To use the awslogs-create-group option, add logs:CreateLogGroup as an inline IAM policy.
But what I don't understand is how my CloudFormation template differs from the stack manually created via the UI. By looking at the generated template for the manually-created stack, it appears both task definitions indicate the ecsTaskExecutionRole. My CloudFormation template task definition has this:
ExecutionRoleArn: 'arn:aws:iam::…:role/ecsTaskExecutionRole'
How was the manually-created stack able to create the log group, but my standalone from-scratch CloudFormation template could not? Where would I indicate the logs:CreateLogGroup permission? The manually-created stack doesn't seem to indicate any inline policy. (Admittedly for some reason the manually-created task definition doesn't seem to use a CloudFormation stack, so maybe it has some hidden settings I'm not seeing in the UI.)
ecsTaskExecutionRole should be assigned to TaskRoleArn, not to ExecutionRoleArn.
If I want the task to automatically create a log group dynamically using awslogs-create-group, it appears that the correct approach is to have an IAM policy that includes the logs:CreateLogGroup permission, as mentioned at Using the awslogs log driver. (I still don't understand how creating the task definition manually in the UI resulted in the log group getting created.) Another page related to ECS resource initialization errors says I need to "add logs:CreateLogGroup as an inline IAM policy", but no one has been able to provide an example of how to do that in CloudFormation. I'm sure I could figure it out …
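For completeness, an inline policy along these lines should work (a rough sketch, assuming you declare your own execution role instead of reusing the pre-existing ecsTaskExecutionRole; the role and policy names are just placeholders):

TaskExecutionRole:
  Type: AWS::IAM::Role
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: ecs-tasks.amazonaws.com
          Action: sts:AssumeRole
    ManagedPolicyArns:
      # the standard execution-role permissions (pull images, write log events)
      - arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy
    Policies:
      - PolicyName: AllowCreateLogGroup
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
            - Effect: Allow
              Action: logs:CreateLogGroup
              Resource: '*'

The task definition would then point ExecutionRoleArn at !GetAtt TaskExecutionRole.Arn instead of the hard-coded role ARN.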
However rather than having the service dynamically create a log group, it seems to me better practice to declare and configure the AWS::Logs::LogGroup resource in the CloudFormation template itself. (Thanks to the AWS CloudFormation - Beginner to Advanced (Hands-On Guide) Udemy course for inspiring this approach.) Thus I would declare the log group like this in CloudFormation:
BarLogGroup:
  Type: AWS::Logs::LogGroup
  DeletionPolicy: Retain
  Properties:
    LogGroupName: '/ecs/bar'
Then to avoid duplication, reference the log group in the log configuration:
LogConfiguration:
  LogDriver: awslogs
  Options:
    awslogs-group: !Ref BarLogGroup
    awslogs-region: !Ref AWS::Region
    awslogs-stream-prefix: 'ecs'
This gets around the problem of runtime permissions and dynamic creation altogether. The log group is a resource, too, and since the task depends on it we might as well describe it declaratively with the other resources. This way log group creation depends on permissions of the role creating the stack, not the task itself, which seems more appropriate.
(I've added a deletion policy, assuming you want to keep the logs even if you delete/recreate the stack.)
After making the above changes, my CloudFormation ECS Fargate stack now runs and produces logs just like the manually created stack, so this approach is a success.
I'm creating an API Gateway stage using CloudFormation.
ApiDeployment:
  Type: AWS::ApiGateway::Deployment
  Properties:
    RestApiId: !Ref ExampleRestApi
    StageName: dev
Here is the problem: whenever I create a new API, I still need to deploy the stage using the AWS console. Is there any way to automate the deployment process so that no further console action is required?
When you define a Deployment resource like this, CloudFormation will create the deployment only on the first run. On the second run it will observe that the resource already exists and its CloudFormation definition did not change, so it won't create another deployment. To work around that, you can add something like a UUID/timestamp placeholder to the logical resource ID and replace it every time before doing the CloudFormation update:
ApiDeployment#TIMESTAMP#:
  Type: AWS::ApiGateway::Deployment
  Properties:
    RestApiId: !Ref ExampleRestApi
    StageName: dev
This way you are still able to see your deployment history in the API Gateway console.
If you don't want to manipulate your template like this, you can also add a Lambda-backed Custom Resource to your CloudFormation stack. Using an AWS SDK, you can have the Lambda function create new deployments for you whenever the API is updated.
I've found berenbum's response to be mostly correct, but there are a few things I don't like.
The proposed method of creating a resource like ApiDeployment#TIMESTAMP# doesn't keep the deployment history. This makes sense, since the old ApiDeployment#TIMESTAMP# element is being deleted and a new one is being created every time.
Using ApiDeployment#TIMESTAMP# creates a new deployment every time the template is deployed, which might be undesirable if the template is being deployed to create/update other resources.
Also, using ApiDeployment#TIMESTAMP# didn't work well when adding the StageDescription property. A potential solution is to add a static APIGwDeployment resource for the initial deployment (with StageDescription) and ApiDeployment#TIMESTAMP# for the updates.
The fundamental issue, though, is that creating a new API Gateway deployment is not well suited for CloudFormation (beyond the initial deployment). I think that after the initial deployment, it's better to make an AWS API call to update the deployment (see https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-deployments.html).
In my particular case I created a small Ansible module to invoke aws apigateway create-deployment which updates an existing stage in one operation.
I would like to perform the following operations in order with CloudFormation.
Start up an EC2 instance.
Give it privileges to access the full internet using security group A.
Download particular versions of Java and Python
Remove its internet privileges by removing security group A and adding a security group B.
I observe that there is a DependsOn attribute for specifying the order in which to create resources, but I was unable to find a feature that would allow me to update the security groups on the same EC2 instance twice over the course of creating a stack.
Is this possible with CloudFormation?
Not in CloudFormation natively, but you could launch the EC2 instance with a configured userdata script that itself downloads Java/Python and the awscli, as necessary, and then uses the awscli to switch security groups for the current EC2 instance.
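A rough sketch of what that user data could do (the security group ID is a placeholder, and it assumes the instance profile allows ec2:ModifyInstanceAttribute):

#!/bin/bash
# Amazon Linux 2 assumed; the AWS CLI is preinstalled there
yum install -y java-11-amazon-corretto python3

# Look up this instance's ID and region from the instance metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
export AWS_DEFAULT_REGION=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone | sed 's/.$//')

# Swap security group A for security group B (this call replaces the full set of groups)
aws ec2 modify-instance-attribute --instance-id "$INSTANCE_ID" --groups sg-0bbbbbbbbbbbbbbbb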
However, if all you need is Java and Python pre-loaded then why not simply create an AMI with them already installed and launch from that AMI?
The best way out is to use a CloudFormation custom resource here. You can create a Lambda function that does exactly what you need, and then call it as a custom resource from the CloudFormation template.
You can pass your new security group ID and instance ID to the Lambda function, and code the function to use the AWS SDK to make the modifications you need.
I have leveraged this to post updates to my web server about the progress of the CloudFormation stack. Below is a sample snippet from the template.
EC2InstanceProfile:
  Type: AWS::IAM::InstanceProfile
  Properties:
    Path: /
    Roles: [!Ref 'EC2Role']

MarkInstanceProfileComplete:
  Type: 'Custom::EC2InstanceProfileDone'
  Version: '1.0'
  DependsOn: EC2InstanceProfile
  Properties:
    ServiceToken: !Ref CustomResourceArn
    HostURL: !Ref Host
    LoginType: !Ref LoginType
    SecretId: !Ref SecretId
    WorkspaceId: !Ref WorkspaceId
    Event: 2
    Total: 3
Here the resource MarkInstanceProfileComplete is a custom resource that calls a Lambda function. It takes the event count and total count as input and processes them to calculate percentage progress. Based on that it sends out a request to my web server. For all we care, this Lambda function can do potentially anything you want it to do.
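For the security-group scenario in the original question, the Lambda behind such a custom resource might look roughly like this (a sketch; it assumes the instance ID and security group ID are passed in as properties, and uses the cfnresponse helper that is available to inline (ZipFile) function code):

import boto3
import cfnresponse  # bundled for inline Lambda code; otherwise vendor it yourself

ec2 = boto3.client('ec2')

def handler(event, context):
    try:
        if event['RequestType'] in ('Create', 'Update'):
            props = event['ResourceProperties']
            # Replace the instance's security groups with the one supplied
            ec2.modify_instance_attribute(
                InstanceId=props['InstanceId'],
                Groups=[props['SecurityGroupId']],
            )
        cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
    except Exception:
        cfnresponse.send(event, context, cfnresponse.FAILED, {})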
I am quite new to AWS and want to know how to achieve the following task with CloudFormation.
I want to spin up an EC2 instance with tomcat and deploy a java application on it. This java application will perform some operation. Once the operation is done, I want to delete all the resources created by this CloudFormation stack.
All these activities should be automatic. For example: I will create the CloudFormation stack JSON file. At a particular time of day, a job should be kicked off (I don't know where in AWS to configure such a job, or how). But I know that through Jenkins we can create a CloudFormation stack that will create all the resources.
Then, after some time (let's say 2 hours), another job should kick off and delete all the resources created by CloudFormation.
Is this possible in AWS? If yes, any hints on how to do this?
Just to confirm, what you intend to do is have an EC2 instance get created on a schedule, and then have it shut down after 2 hours. The common way of accomplishing that is to use an Auto-Scaling Group (ASG) with a ScheduledAction to scale up and a ScheduledAction to scale down.
ASGs have a "desired capacity" (the number of instances in the ASG). You would want this to be "0" by default, change it to "1" at your desired time, and change it back to "0" two hours after that. What that will do is automatically start and subsequently terminate your EC2 instance on your schedule.
They also use a LaunchConfiguration, which is a template for your EC2 instances that will start on the schedule.
MyASG:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    AvailabilityZones:
      Fn::GetAZs: !Ref "AWS::Region"
    LaunchConfigurationName: !Ref MyLaunchConfiguration
    MaxSize: 1
    MinSize: 0
    DesiredCapacity: 0

ScheduledActionUp:
  Type: AWS::AutoScaling::ScheduledAction
  Properties:
    AutoScalingGroupName: !Ref MyASG
    DesiredCapacity: 1
    Recurrence: "0 7 * * *"

ScheduledActionDown:
  Type: AWS::AutoScaling::ScheduledAction
  Properties:
    AutoScalingGroupName: !Ref MyASG
    DesiredCapacity: 0
    Recurrence: "0 9 * * *"

MyLaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    ImageId: ami-xxxxxxxxx # <-- Specify the AMI ID that you want
    InstanceType: t2.micro # <-- Change the instance size if you want
    KeyName: my-key # <-- Change to the name of an EC2 SSH key that you've added
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        yum install -y aws-cfn-bootstrap
        # ...
        # ... run some commands to set up the instance, if you need to
        # ...
  Metadata:
    AWS::CloudFormation::Init:
      config:
        files:
          "/etc/something/something.conf":
            mode: "000600"
            owner: root
            group: root
            content: !Sub |
              #
              # Add the content of a config file, if you need to
              #
Depending on what you want your instances to interact with, you might also need to add a Security Group and/or an IAM Instance Profile along with an IAM Role.
If you're using Jenkins to deploy the program that will run, you would add a step to bake an AMI, build and push a docker image, or take whatever other action you need to deploy your application to the place that it will be used by your instance.
I note that in your question you say that you want to delete all of the resources created by CloudFormation. Usually, when you deploy a stack like this, the stack remains deployed. The ASG will remain there until you decide to remove the stack, but it won't cost anything when you're not running EC2 instances. I think I understand your intent here, so the advice that I'm giving aligns with that.
You can use Lambda to run tasks on a regular schedule.
Write a Lambda function that calls CloudFormation to create your stack of resources. You might even consider including a termination Lambda function in your CloudFormation stack and configuring it to run on a schedule (2 hours after the stack was created) to delete the stack that the termination function itself is part of (I have not tried this, but believe it will work). Or you could, of course, trigger stack deletion from cron on the EC2 instance running your Java app.
If all you want is an EC2 instance, it's probably easier to simply create the EC2 instance rather than a CloudFormation stack.
Something (e.g. an AWS Lambda function triggered by Amazon CloudWatch Events) calls the EC2 API to create the instance.
User Data is passed to the EC2 instance to install the desired software, or use a custom AMI with all software pre-installed.
Have the instance terminate itself when it has finished processing -- this could be as simple as calling the operating system to shut down the machine, with the EC2 Shutdown Behavior set to Terminate.
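As a sketch of that last step (the AMI and other flag values are placeholders): launch with the shutdown behaviour set to terminate, then have the application simply shut the machine down when it finishes.

# At launch time (other run-instances options omitted):
aws ec2 run-instances --image-id ami-xxxxxxxxx --instance-type t2.micro --instance-initiated-shutdown-behavior terminate

# On the instance, once the Java application has finished:
sudo shutdown -h now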
When I run CloudFormation deploy using a template with API Gateway resources, the first time I run it, it creates and deploys to stages. The subsequent times I run it, it updates the resources but doesn't deploy to stages.
Is that the intended behaviour? If so, how do I get it to deploy to stages whenever it updates?
(Terraform mentions a similar issue: https://github.com/hashicorp/terraform/issues/6613)
It seems there is no easy way to create a new Deployment whenever one of your CloudFormation resources changes.
One way to work around that would be to use a Lambda-backed Custom Resource (see http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html).
The Lambda should create the new Deployment only if one of your resources has been updated. To determine whether one of your resources has been updated,
you will probably have to implement custom logic around this API call: http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_DescribeStackEvents.html
In order to trigger updates of your Custom Resource, I suggest you supply a CloudFormation parameter that will be used to force an update of the Custom Resource (e.g. the current time, or a version number).
Note that you will have to add a DependsOn clause to your Custom Resource that will include all Resources relevant to your API. Otherwise, your deployment might be created before all your API Resources are updated.
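For illustration, such a Custom Resource might look roughly like this (the function, parameter and method names are purely illustrative):

ApiGatewayDeployer:
  Type: Custom::ApiGatewayDeployer
  DependsOn:
    - ExampleGetMethod                          # list every Method/Resource relevant to your API here
  Properties:
    ServiceToken: !GetAtt DeployerFunction.Arn  # the Lambda that calls CreateDeployment
    RestApiId: !Ref ExampleRestApi
    DeploymentTrigger: !Ref DeploymentVersion   # change this parameter to force an update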
Hope this helps.
When your template specifies a deployment, CloudFormation will create that deployment only if it doesn't already exist. When you attempt to run it again, it observes that the deployment still exists, so it won't recreate it, and thus no new deployment happens. You need a new logical resource ID for the deployment so that CloudFormation will create a new one. Read this for more information: https://currentlyunnamed-theclassic.blogspot.com/2018/12/mastering-cloudformation-for-api.html
CloudFormation in Amazon's words is:
AWS CloudFormation takes care of provisioning and configuring those resources for you
http://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/Welcome.html
Redeployment of APIs is not a provisioning task... It is a promotion activity which is part of a stage in your software release process.
AWS CodePipeline is a continuous delivery service you can use to model, visualize, and automate the steps required to release your software.
http://docs.aws.amazon.com/codepipeline/latest/userguide/welcome.html
CodePipeline also supports execution of Lambda functions from actions in the pipeline. So, as advised before, create a Lambda function to deploy your API, but call it from CodePipeline instead of CloudFormation.
Consult this page for details:
http://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html
I was using the above approach, but it looked too complicated to me just to deploy API Gateway. If we change the names of the resources, it takes time to delete and recreate them, which increases downtime for your application.
I'm following the approach below to deploy API Gateway to the stage using the AWS CLI, and it does not affect the CloudFormation stack deployment.
What I'm doing is running the AWS CLI command below after the API Gateway deployment completes. It updates the existing stage with the latest changes.
aws apigateway create-deployment --rest-api-id tztstixfwj --stage-name stg --description 'Deployed from CLI'
The answer here is to use the AutoDeploy property of the Stage:
Stage:
  Type: AWS::ApiGatewayV2::Stage
  Properties:
    StageName: v1
    Description: 'API Version 1'
    ApiId: !Ref myApi
    AutoDeploy: true
Note that the 'DeploymentId' property must be unspecified when using 'AutoDeploy'.
See documentation, here: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html
From the blogspot post linked by TheClassic (best answer so far!), you have to keep in mind that if you aren't generating your templates with something that can insert a valid timestamp in place of $TIMESTAMP$, you must update that manually with a timestamp or other unique ID. Here is my functional example; it successfully deletes the existing deployment and creates a new one, but I will have to update those unique values manually when I want to create another change set:
rDeployment05012019355:
  Type: AWS::ApiGateway::Deployment
  DependsOn: rApiGetMethod
  Properties:
    RestApiId:
      Fn::ImportValue:
        !Sub '${pApiCoreStackName}-RestApi'
    StageName: !Ref pStageName

rCustomDomainPath:
  Type: AWS::ApiGateway::BasePathMapping
  DependsOn: [rDeployment05012019355]
  Properties:
    BasePath: !Ref pPathPart
    Stage: !Ref pStageName
    DomainName:
      Fn::ImportValue:
        !Sub '${pApiCoreStackName}-CustomDomainName'
    RestApiId:
      Fn::ImportValue:
        !Sub '${pApiCoreStackName}-RestApi'
I may be late, but here are the options for triggering a redeployment when an API resource changes; this may be helpful to people who are still looking for options:
Set AutoDeploy to true, if you are using the V2 version of the resources. Note that the API itself needs to be created through V2; V1 and V2 are not compatible with each other. https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-apigatewayv2-stage.html#cfn-apigatewayv2-stage-autodeploy
A Lambda-backed custom resource, where the Lambda in turn calls the CreateDeployment API - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/template-custom-resources.html
A CodePipeline that has an action calling a Lambda function, much like the custom resource would - https://docs.aws.amazon.com/codepipeline/latest/userguide/actions-invoke-lambda-function.html
SAM (Serverless Application Model) follows a syntax similar to CloudFormation, simplifies resource creation into abstractions, and uses those to build and deploy a normal CloudFormation template. https://docs.aws.amazon.com/serverless-application-model/latest/developerguide/what-is-sam.html
If you are using an abstraction layer over CloudFormation like Sceptre, you can add a hook to call CreateDeployment after any update to the resource. https://sceptre.cloudreach.com/2.3.0/docs/hooks.html
I went with the Sceptre option, since I was already using Sceptre for CloudFormation deployments. Implementing hooks in Sceptre is easy as well; a rough example follows.
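For example, a Sceptre stack config with such a hook might look roughly like this (the template path, API ID and stage name are placeholders):

template_path: api.yaml
hooks:
  after_update:
    - !cmd "aws apigateway create-deployment --rest-api-id abc123def4 --stage-name dev"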
Reading through this thread, I did not come to a conclusion right away, as the information is spread across multiple sources. I will try to sum up the findings from here (and the linked sources), plus my own testing, to help others avoid the hunt.
Important to know is that each API always has a dedicated URL; the associated stages only get a separate suffix. Updating the deployment does not change the URL, but recreating the API does.
API
├─ RestAPI (incl. Resource, Methods etc)
├─ Deployment
├─ Stage - v1 https://6s...com/v1
├─ Stage - v2 https://6s...com/v2
Relation between stage and deployment:
To deploy AWS API Gateway through CloudFormation (Cfn) you need a RestApi Cfn resource and a Deployment Cfn resource. If you give the Deployment resource a stage name, a stage is automatically created on top of the "normal" deployment creation. If you leave it out, the API is created without any stage. Either way, once you have a deployment you can add n stages to it by linking the two, but a stage always references exactly one deployment.
Updating a simple API:
Now if you want to update this "simple API", consisting of just a RestApi plus a Deployment, you face the issue that if the deployment has a stage name, it cannot be updated, as the stage already "exists". And to detect that the deployment has to be updated in the first place, you have to add a timestamp or hash to the deployment's logical resource name in CloudFormation; otherwise no update is triggered at all.
Solving the deployment update:
To enable updating the deployment, you have to split deployment and stage into separate Cfn resources. That means you remove the stage name from the Deployment resource and create a new Stage resource that references the deployment resource. This way you can update the deployment. Still, the stage (the part you reference via the URL) is not automatically updated.
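A minimal sketch of that split (logical names and the timestamp suffix are placeholders):

MyDeployment20200101:             # rename (e.g. a new timestamp suffix) to force a new deployment
  Type: AWS::ApiGateway::Deployment
  Properties:
    RestApiId: !Ref MyRestApi     # note: no StageName here

MyStage:
  Type: AWS::ApiGateway::Stage
  Properties:
    StageName: v1
    RestApiId: !Ref MyRestApi
    DeploymentId: !Ref MyDeployment20200101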
Propagating the update from the deployment to your stages:
Now that we can update the deployment (the blueprint of the API, so to speak), we can propagate the change to its respective stage. This step, as far as I know, is not possible using CloudFormation. Therefore, to trigger the update you either need to add a custom resource or do it manually. Other non-CloudFormation approaches are summed up in #Athi's answer above, but they were no solution for me, as I want to limit the tooling used.
If anybody has an example of the Lambda update, please feel free to ping me and I will add it here. The links I found so far only reference a plain template.
I hope this helped others understanding the context a bit better.
Sources:
Problem description with Cfn-template, 2
Adding timestamp to deployment resource, 2
Using CodePipeline as a solution
Related question and CLI update answer
Related terraform issue
Related AWS forum thread
This worked for me:
cfn.yml
APIGatewayStage:
  Type: 'AWS::ApiGateway::Stage'
  Properties:
    StageName: !Ref Environment
    DeploymentId: !Ref APIGatewayDeployment$TIMESTAMP$
    RestApiId: !Ref APIGatewayRestAPI
    Variables:
      lambdaAlias: !Ref Environment
    MethodSettings:
      - ResourcePath: '/*'
        DataTraceEnabled: true
        HttpMethod: "*"
        LoggingLevel: INFO
        MetricsEnabled: true
  DependsOn:
    - liveLocationsAPIGatewayMethod
    - testJTAPIGatewayMethod

APIGatewayDeployment$TIMESTAMP$:
  Type: 'AWS::ApiGateway::Deployment'
  Properties:
    RestApiId: !Ref APIGatewayRestAPI
  DependsOn:
    - liveLocationsAPIGatewayMethod
    - testJTAPIGatewayMethod
bitbucket-pipelines.yml
script:
  - python3 deploy_api.py
deploy_api.py
import time

file_name = 'infra/cfn.yml'
ts = str(time.time()).split(".")[0]
print(ts)

with open(file_name, 'r') as file:
    filedata = file.read()

filedata = filedata.replace('$TIMESTAMP$', ts)

with open(file_name, 'w') as file:
    file.write(filedata)
Read this for more information: https://currentlyunnamed-theclassic.blogspot.com/2018/12/mastering-cloudformation-for-api.html
If you have something to do the $TIMESTAMP$ replacement, I'd probably go with that as it's cleaner and you don't have to do any manual API Gateway management.
I have found that the other solutions posted here mostly do the job with one major caveat - you can't manage your Stage and Deployment separately in CloudFormation because whenever you deploy your API Gateway, you have some sort of downtime between when you deploy the API and when the secondary process (custom resource / lambda, code pipeline, what have you) creates your new deployment. This downtime is because CloudFormation only ever has the initial deployment tied to the Stage. So when you make a change to the Stage and deploy, it reverts back to the initial deployment until your secondary process creates your new deployment.
*** Note that if you are specifying a StageName on your Deployment resource, and not explicitly managing a Stage resource, the other solutions will work.
In my case, I don't have that $TIMESTAMP$ replacement piece, and I needed to manage my Stage separately so I could do things like enable caching, so I had to find another way. The workflow and relevant CF pieces are as follows:
Before triggering the CF update, check whether the stack you're about to update already exists, and set stack_exists: true|false (a sketch of this check follows the workflow below).
Pass that stack_exists variable into your CF template(s), all the way down to the stack that creates the Deployment and Stage.
The following condition:
Conditions:
  StackExists: !Equals [!Ref StackAlreadyExists, "True"]
The following Deployment and Stage:
# Only used for initial creation, secondary process re-creates this
Deployment:
  DeletionPolicy: Retain
  Type: AWS::ApiGateway::Deployment
  Properties:
    Description: "Initial deployment"
    RestApiId: ...

Stage:
  Type: AWS::ApiGateway::Stage
  Properties:
    DeploymentId: !If
      - StackExists
      - !Ref AWS::NoValue
      - !Ref Deployment
    RestApiId: ...
    StageName: ...
Secondary process that does the following:
# looks up `apiId` and `stageName` and sets variables
CURRENT_DEPLOYMENT_ID=$(aws apigateway get-stage --rest-api-id <apiId> --stage-name <stageName> --query 'deploymentId' --output text)
aws apigateway create-deployment --rest-api-id <apiId> --stage-name <stageName>
aws apigateway delete-deployment --rest-api-id <apiId> --deployment-id ${CURRENT_DEPLOYMENT_ID}
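And for the first step of the workflow (the stack-existence check), a rough sketch (the stack name and parameter wiring are placeholders):

# describe-stacks exits non-zero if the stack doesn't exist yet
if aws cloudformation describe-stacks --stack-name my-api-stack >/dev/null 2>&1; then
  STACK_EXISTS=True
else
  STACK_EXISTS=False
fi

aws cloudformation deploy \
  --stack-name my-api-stack \
  --template-file template.yaml \
  --parameter-overrides StackAlreadyExists=$STACK_EXISTS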
Use SAM
AWS::Serverless::Api
This does the deployment for you when it applies the transform.
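A minimal sketch of what that might look like (the logical name, stage and definition file are illustrative):

Transform: AWS::Serverless-2016-10-31
Resources:
  MyApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: dev
      DefinitionUri: ./openapi.yaml   # or let SAM derive the API from your functions' Api events

The SAM transform expands this into the underlying AWS::ApiGateway::RestApi, Deployment and Stage resources and takes care of creating a new deployment when the API definition changes.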