I’m creating a generic stack template using CloudFormation, and I’ve hit a rather annoying circular reference.
Overall Requirements:
I want to provision (a lot of other things, but mainly) an ECS cluster service that auto-scales using capacity providers; the capacity providers use Auto Scaling groups, and the Auto Scaling groups use a launch template.
I don’t want static resource names. This causes issues if a resource has to be re-created due to an update and that particular resource has to have a unique name.
Problem:
Without the launch template “knowing the cluster name” (via UserData) the service tasks get stuck in a PROVISIONING state.
So we have the first dependency chain:
Launch Template <- Cluster (Name)
But the Cluster has a dependency chain of:
Cluster <- Capacity Provider <- AutoScalingGroup <- Launch Template
Thus, we have a circular reference: Cluster <-> Launch Template
——
One way I can think of resolving this is to use another resource's name plus a suffix (a resource that lives outside of this dependency chain, e.g., the target group) as the Cluster's name; that way the name is not static, and it also removes the circular reference.
My question is: is there a better way?
It feels like there should be a resource that the cluster can subscribe to and the ec2 instance can publish to, which would remove the circular dependency as well as the need to assign resource names.
There is no such resource to break the dependency, and the cluster name must be pre-defined. This has already been recognized as a problem, and it's part of an open GitHub issue:
[ECS] Full support for Capacity Providers in CloudFormation.
One of the issues noted is:
Break circular dependency so that unnamed clusters can be created
At the moment, one workaround noted is to partially predefine the name, e.g.:
ECSCluster:
  Type: AWS::ECS::Cluster
  Properties:
    ClusterName: !Sub ${AWS::StackName}-ECSCluster
LaunchConfiguration:
  Type: AWS::AutoScaling::LaunchConfiguration
  Properties:
    UserData:
      Fn::Base64: !Sub |
        #!/bin/bash
        echo ECS_CLUSTER=${AWS::StackName}-ECSCluster >> /etc/ecs/ecs.config
Alternatively, one could try to solve this by developing a custom resource in the form of a Lambda function. You could probably create your unnamed cluster with a launch template (LT) that uses a dummy cluster name. Then, once the cluster is running, you would use the custom resource to create a new version of the LT with the updated cluster name and refresh your Auto Scaling group to use the new LT version. I'm not sure if this would work, but it's something that can be considered at least.
Sharing an update from the GitHub issue. The circular dependency has been broken by introducing a new resource: Cluster Capacity Provider Associations.
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ecs-clustercapacityproviderassociations.html
To use it in my example, you:
Create Cluster (without specifying name)
Create Launch Template (using Ref to get cluster name)
Create Auto Scaling Group(s)
Create Capacity Provider(s)
Create Cluster Capacity Provider Associations <- This is new!
The one gotcha is that you have to wait for the new association to be created before you can create a service on the cluster. So be sure that your service "DependsOn" these associations!
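The steps above can be sketched roughly like this. This is a minimal sketch, not a complete template: logical IDs are illustrative and most required properties (networking, task definition, instance configuration) are elided.

```yaml
Cluster:
  Type: AWS::ECS::Cluster            # no ClusterName: CloudFormation generates one

LaunchTemplate:
  Type: AWS::EC2::LaunchTemplate
  Properties:
    LaunchTemplateData:
      UserData:
        Fn::Base64: !Sub |
          #!/bin/bash
          echo ECS_CLUSTER=${Cluster} >> /etc/ecs/ecs.config

AutoScalingGroup:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    LaunchTemplate:
      LaunchTemplateId: !Ref LaunchTemplate
      Version: !GetAtt LaunchTemplate.LatestVersionNumber
    MinSize: "0"
    MaxSize: "4"

CapacityProvider:
  Type: AWS::ECS::CapacityProvider
  Properties:
    AutoScalingGroupProvider:
      # Ref returns the ASG name, which this property also accepts
      AutoScalingGroupArn: !Ref AutoScalingGroup

CapacityProviderAssociations:
  Type: AWS::ECS::ClusterCapacityProviderAssociations
  Properties:
    Cluster: !Ref Cluster
    CapacityProviders:
      - !Ref CapacityProvider
    DefaultCapacityProviderStrategy:
      - CapacityProvider: !Ref CapacityProvider
        Weight: 1

Service:
  Type: AWS::ECS::Service
  DependsOn: CapacityProviderAssociations   # wait for the association before creating the service
  Properties:
    Cluster: !Ref Cluster
    TaskDefinition: !Ref TaskDefinition     # illustrative; other required properties elided
```

Note that the launch template now depends on the cluster via `!Sub ${Cluster}`, while the capacity provider chain attaches to the cluster through the separate associations resource, so no cycle remains.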
Related
Using AWS CDK we could create multi-region KMS keys by
Creating the principal key (pk) with the level 1 constructor CfnKey
Creating the replica of the principal key using the level 1 constructor CfnReplicaKey, which takes the pk_arn as one of its parameters
Those constructors, however, do not specify the regions where I want to make those keys available.
My question is:
What aws-CDK constructor or pattern should I use to make the replicas available in certain regions, using aws-CDK?
Thanks in advance
CfnReplicaKey will be created in the parent stack's region (see a CloudFormation example in the docs).
For the CDK (and CloudFormation), the unit of deployment is [Edit:] the Stack, which is tied to one environment:
Each Stack instance in your AWS CDK app is explicitly or implicitly associated with an environment (env). An environment is the target AWS account and region into which the stack is intended to be deployed.
This logic applies generally to all CDK resources - the account/region is defined at the stack level, not the construct level. Stacks can be replicated across regions and accounts in several ways, including directly in a CDK app:
# replicate the stack in several regions using CDK
app = core.App()
for region in ["us-east-1", "us-west-1", "us-east-2", "eu-west-1"]:
    MyStack(app, "MyStack_" + region, env=Environment(
        region=region,
        account="555599931100"
    ))
I'm trying to use CloudFormation AddOn template in the following scenario:
Service 1
creates an SNS Topic and a Managed Policy that has all the necessary permissions to publish to it. The SNS Topic will collect "Activity" records and then fan them out to multiple subscribers.
A common code library abstracts away the usage of SNS - any applications that need to post activity messages do so without any knowledge that SNS is being used underneath the covers.
Service N needs to publish activity messages using the common code library and needs whatever permissions are necessary.
So service 1 writes the Managed Policy ARN out as an exported output to the AddOn stack like so:
Outputs:
  activityPublishPolicy:
    Description: "Activity Publish Policy ARN"
    Value: !Ref activitySnsTopicPublishPolicy
    Export:
      Name: !Sub ${App}-${Env}-activity-publish-policy
Then in service N, I was hoping to import the ARN of the publishing policy and get it attached to the task role:
Outputs:
  activityPublishAccessPolicy:
    Description: "The IAM::ManagedPolicy to attach to the task role."
    Value: !ImportValue
      'Fn::Sub': '${App}-${Env}-activity-publish-policy'
The ARN is imported just fine and written out to the Cloud Formation stack of Service N; however, the Task Role does not get the Managed Policy attached to it.
I did a quick test to see if adding the policy directly to the AddOn stack would attach and that does indeed work.
Outputs:
  activityPublishAccessPolicy:
    Description: "The IAM::ManagedPolicy to attach to the task role."
    Value: !Ref activityPolicy
This leads me to believe that Copilot only attaches ManagedPolicies to the Task Role that are created in its own AddOn Stack, but that's just a guess.
I'd prefer not to write a new policy in every service to do this, and I'd prefer not to open up the topic policy to our whole VPC if possible.
Is there a better way of doing this?
Thanks!
This is because Copilot scans the Addons template to determine the type of the resource you're outputting. There are several "magic" outputs for addons. They are:
Security Groups
Managed Policies
Secrets
To detect these outputs, we scan the template looking for the logical ID of the referenced resource. This means that we don't currently have a way of deriving the resource type of the results of Fn::ImportValue calls, since they don't refer to a logical ID defined in that addons template!
I'm sorry this is causing you problems--it seems like you may need to add the managed policy to the addons stack of each service you want to grant this access to. This is something we might be able to do something about, though, and we would love it if you could cut us a GitHub issue so we can prioritize and gather feedback on a proposal.
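Until that is supported, a per-service workaround is to define a small managed policy in each consuming service's addons template, so Copilot sees a ManagedPolicy logical ID it can attach. This sketch assumes service 1 also exports the topic ARN; the export and logical ID names are illustrative:

```yaml
activityPublishPolicy:
  Type: AWS::IAM::ManagedPolicy
  Properties:
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Action: sns:Publish
          Resource:
            # assumed export of the topic ARN from service 1's addons stack
            Fn::ImportValue: !Sub ${App}-${Env}-activity-topic-arn

Outputs:
  activityPublishAccessPolicy:
    Description: "The IAM::ManagedPolicy to attach to the task role."
    Value: !Ref activityPublishPolicy
```

Because the output now refers to a ManagedPolicy defined in the same addons template, Copilot's scan can classify it and attach it to the task role.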
I'm creating an API Gateway stage using CloudFormation.
ApiDeployment:
  Type: AWS::ApiGateway::Deployment
  Properties:
    RestApiId: !Ref ExampleRestApi
    StageName: dev
Here is the problem: whenever I update the API, I still need to deploy the stage using the AWS console. Is there any way to automate the deploy process so that no further console action is required?
When you define a Deployment resource like this, CloudFormation will create the deployment only on the first run. On the second run it will observe that the resource already exists and its definition did not change, so it won't create another deployment. To work around that, you can add something like a UUID/timestamp placeholder to the resource's logical ID and replace it every time before doing the CloudFormation update:
ApiDeployment#TIMESTAMP#:
  Type: AWS::ApiGateway::Deployment
  Properties:
    RestApiId: !Ref ExampleRestApi
    StageName: dev
This way you are still able to see your deployment history in the API Gateway console.
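The placeholder substitution can be done with a small pre-deploy step. This is a sketch, assuming the template lives in template.yml; file names and the final deploy command are illustrative:

```shell
# Create a minimal template containing the placeholder (illustrative content)
cat > template.yml <<'EOF'
Resources:
  ApiDeployment#TIMESTAMP#:
    Type: AWS::ApiGateway::Deployment
    Properties:
      RestApiId: !Ref ExampleRestApi
      StageName: dev
EOF

# Replace #TIMESTAMP# with the current epoch time before each update
TS=$(date +%s)
sed "s/#TIMESTAMP#/${TS}/g" template.yml > template-stamped.yml

# then deploy the stamped template, e.g.:
# aws cloudformation deploy --template-file template-stamped.yml --stack-name my-api
```

Because the logical ID changes on every run, CloudFormation treats it as a new resource and creates a fresh deployment.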
If you don't want to manipulate your template like this, you can also add a Lambda-backed Custom Resource to your CloudFormation stack. Using an AWS SDK, you can have the Lambda function automatically creating new deployments for you when the API was updated.
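A minimal sketch of such a Lambda-backed custom resource, using the inline `cfnresponse` helper that CloudFormation provides for ZipFile code; logical IDs, the role, and the re-trigger property are illustrative assumptions:

```yaml
DeploymentTrigger:
  Type: Custom::ApiDeployment
  Properties:
    ServiceToken: !GetAtt DeployFunction.Arn
    RestApiId: !Ref ExampleRestApi
    StageName: dev
    # change this value on each update to force the custom resource to re-run
    DeployStamp: !Ref DeployStampParameter

DeployFunction:
  Type: AWS::Lambda::Function
  Properties:
    Runtime: python3.12
    Handler: index.handler
    Role: !GetAtt DeployFunctionRole.Arn   # role needs apigateway permissions
    Code:
      ZipFile: |
        import boto3
        import cfnresponse

        def handler(event, context):
            try:
                if event['RequestType'] in ('Create', 'Update'):
                    props = event['ResourceProperties']
                    # create a new deployment for the existing stage
                    boto3.client('apigateway').create_deployment(
                        restApiId=props['RestApiId'],
                        stageName=props['StageName'])
                cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
            except Exception:
                cfnresponse.send(event, context, cfnresponse.FAILED, {})
```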
I've found berenbums' response to be mostly correct, but there are a few things I don't like.
The proposed method of creating a resource like ApiDeployment#TIMESTAMP# doesn't keep the deployment history. This makes sense, since the old ApiDeployment#TIMESTAMP# element is being deleted and a new one is being created every time.
Using ApiDeployment#TIMESTAMP# creates a new deployment every time the template is deployed, which might be undesirable if the template is being deployed to create/update other resources.
Also, using ApiDeployment#TIMESTAMP# didn't work well when adding the StageDescription property. A potential solution is to add a static APIGwDeployment resource for the initial deployment (with StageDescription) and ApiDeployment#TIMESTAMP# for the updates.
The fundamental issue, though, is that creating a new API Gateway deployment is not well suited to CloudFormation (beyond the initial deployment). I think after the initial deployment, it's better to do an AWS API invocation to update the deployment (see https://docs.aws.amazon.com/apigateway/latest/developerguide/set-up-deployments.html).
In my particular case I created a small Ansible module to invoke aws apigateway create-deployment which updates an existing stage in one operation.
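An equivalent Ansible task, sketched here with the stock `command` module and the AWS CLI rather than the custom module; variable names are illustrative:

```yaml
# Hypothetical task: create a new deployment for an existing stage,
# which updates the stage in one operation.
- name: Redeploy API Gateway stage
  command: >
    aws apigateway create-deployment
    --rest-api-id {{ rest_api_id }}
    --stage-name {{ stage_name }}
    --description "Redeployed by Ansible"
```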
Is there a way to create a resource if it doesn't exist and use an existing resource if it does?
resources:
- name: "my-topic"
  type: gcp-types/pubsub-v1:projects.topics
  properties:
    topic: "this-exists-already"
- name: "my-other-resource"
  type: 'gcp-types/cloudfunctions-v1:projects.locations.functions'
  properties:
    functionName: "function"
    environmentVariables:
      # get a ref to new or already existing topic
      my-topic: "$(ref.my-topic.name)"
Per #kolban's link I think I want to use abandon here. Can I selectively "abandon" a specific resource so I can, for example, attach an accessControl policy to an existing bucket but then NOT delete that bucket if the deployment is deleted?
ABANDON - This removes any references to the resource from the deployment but does not delete the underlying resource. For example, abandoning an instance means that it is removed from a deployment but the instance still exists for you to use.
Edit
Maybe I should use an "action" to assign an acl instead of a resource? Is this the right way and are there examples of this? So DM would essentially just execute an api call to apply an acl out-of-band. That would mean it would leave the acl behind if the deployment is deleted but I'm okay with that.
It looks like I want to do something like this but instead of applying an acl to a specific file I want to set it on the bucket (with an action) https://github.com/GoogleCloudPlatform/deploymentmanager-samples/blob/master/community/storage-bucket-acl/storagebucket-acl.jinja#L29.
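Adapting that sample to a bucket-level ACL might look something like the sketch below. This is an assumption on my part: the action type name follows the storage-v1 type provider's method naming, and the entity/role values are illustrative.

```yaml
# Hypothetical DM action applying a bucket ACL out-of-band;
# the acl is left behind if the deployment is deleted.
- name: my-bucket-acl
  action: gcp-types/storage-v1:storage.bucketAccessControls.insert
  properties:
    bucket: this-exists-already
    entity: user-someone@example.com
    role: READER
```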
If we read this section of the Deployment Manager documentation:
https://cloud.google.com/deployment-manager/docs/deployments/updating-deployments#policies_for_adding_resources
We read about the concept of "create or acquire". The way I read this: if your configuration says a resource should be created and it already exists, the default behavior is not to raise an error but to "acquire" the resource for this deployment, which I take to mean it is treated as though it had been created.
I have a YAML CloudFormation template (A) for an AWS CodePipeline build, and I want to make a variation of it in another template (B).
The original (A) has an ECR repository as one of its resources, which was created when it was initially run through CloudFormation. I'd like the variation (B) to use the same ECR repository generated by the original (A) for the CodeBuild.
Is there a way I can have (B) template use the ECR resource created in A by passing in the repository resource value as a parameter or something?
For example the resource in A that I want to reuse (not recreate) in B is something like :
Repository:
  Type: AWS::ECR::Repository
  Properties:
    RepositoryName: !Sub comp/${ServiceName}
    RepositoryPolicyText:
      Version: 2012-10-17
      Statement:
        ...
I am not sure from your question which resources you are referring to, but in general you can export any value from one stack and import it into another using the Export property of the Outputs section.
From Exporting Stack Output Values:
To share information between stacks, export a stack's output values. Other stacks that are in the same AWS account and region can import the exported values. For example, you might have a single networking stack that exports the IDs of a subnet and security group for public web servers. Stacks with a public web server can easily import those networking resources. You don't need to hard code resource IDs in the stack's template or pass IDs as input parameters.
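For the ECR repository specifically, that could look like the following sketch; the export name and the way stack B consumes the value are illustrative:

```yaml
# Stack A: export the repository name
Outputs:
  RepositoryName:
    Description: "Name of the shared ECR repository"
    Value: !Ref Repository
    Export:
      Name: !Sub ${AWS::StackName}-repository-name

# Stack B: import it wherever the repository is needed,
# e.g. as an environment variable for a CodeBuild project
Resources:
  BuildProject:
    Type: AWS::CodeBuild::Project
    Properties:
      Environment:
        EnvironmentVariables:
          - Name: ECR_REPO
            # assumes stack A was named "pipeline-a"
            Value: !ImportValue pipeline-a-repository-name
```

Note that imports are limited to the same account and region, and stack A cannot delete or change the exported value while stack B still imports it.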