We have a CloudFormation stack that contains data storage resources like RDS and S3. We want to preserve them even if the stack is deleted, and also because a few parameter changes during a stack upgrade can cause those services to be re-created, so we have set a DeletionPolicy of Retain. Now when I re-run the stack, it is unable to create the resources because resources with the same names already exist.
I thought of creating the resources only after checking whether they exist. Is it possible for CloudFormation to do this check by itself? I also realized the retained resources have become rogue resources, as they are no longer part of this CloudFormation stack.
What would be the best approach to get around this problem?
You have to import these resources back into your stack.
Sadly, CFN does not have functionality to check whether something already exists and perform the import automatically by itself, so you have to do it manually using the AWS console, or programmatically using the SDK or CLI.
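For the SDK route, the import is done through a change set of type IMPORT. Here is a minimal Boto3 sketch, assuming placeholder names for the stack, template file, logical ID and bucket; the template you pass must already declare the retained resource.

import boto3

cfn = boto3.client("cloudformation")

# The template must already contain the retained resource (ideally with
# DeletionPolicy: Retain) so CloudFormation can match it during the import.
with open("template.json") as f:
    template_body = f.read()

cfn.create_change_set(
    StackName="my-stack",                              # placeholder stack name
    ChangeSetName="import-retained-resources",
    ChangeSetType="IMPORT",
    TemplateBody=template_body,
    ResourcesToImport=[{
        "ResourceType": "AWS::S3::Bucket",
        "LogicalResourceId": "MyBucket",               # logical ID used in the template
        "ResourceIdentifier": {"BucketName": "my-retained-bucket"},
    }],
)

# Review the change set, then execute it to finish the import.
cfn.execute_change_set(StackName="my-stack", ChangeSetName="import-retained-resources")

RDS instances can be imported the same way, using DBInstanceIdentifier as the resource identifier.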
This is a complex question in general, and without knowledge of your setup and your specific CloudFormation template it's hard to give specific advice, so I will address some general points and suggest actions.
"when I re-run the stack it is unable to create the resources because the resources with the same name exist" - This is a common problem with named resources. As general guidance, avoid statically naming resources that no human operator will interact with. If you have a strict naming convention in place, always add a random suffix to the names, which will keep them unique to that stack.
If you have storage resources, e.g. S3 buckets and databases, consider putting them into a separate CloudFormation stack and exposing the relevant attributes (like the DB endpoint address) as CloudFormation exports. You can then reference them in other stacks using Fn::ImportValue. This way you separate "stable" resources from "volatile" ones and still keep the benefits of CloudFormation.
You can import some existing resources into CloudFormation stacks, and they will then act as a legitimate part of the stack, but not all resource types can be imported like this.
Yes, you can do this by putting a condition in your CloudFormation template, whether in JSON or YAML:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Transform": "AWS::Serverless-2016-10-31",
  "Description": "",
  "Parameters": {
    "ShouldCreateBucketInputParameter": {
      "Type": "String",
      "AllowedValues": [
        "true",
        "false"
      ],
      "Description": "If true then the S3 bucket that will be proxied will be created with the CloudFormation stack."
    }
  },
  "Conditions": {
    "CreateS3Bucket": {
      "Fn::Equals": [
        {
          "Ref": "ShouldCreateBucketInputParameter"
        },
        "true"
      ]
    }
  },
  "Resources": {
    "SerialNumberBucketResource": {
      "Type": "AWS::S3::Bucket",
      "Condition": "CreateS3Bucket",
      "Properties": {
        "AccessControl": "Private"
      }
    }
  }
}
And then, using the CLI, you just need to set the parameter to "true" or "false":
aws cloudformation deploy --template-file ./s3BucketWithCondition.json --stack-name bucket-stack --parameter-overrides ShouldCreateBucketInputParameter="true"
Related
We have an AWS Amplify project whose API I am in the process of migrating from Transformer 1 to 2.
As part of this, we have a number of custom resolvers that previously had their own stack JSON templates in the stacks/ folder, as generated by the Amplify CLI.
As per the migration instructions, I have created new custom resources using amplify add custom, which lets me create either a CDK (Cloud Development Kit) resource or a CloudFormation template. I just want a lift-and-shift for now, so I've gone with the template option and moved the content from the stack JSON to the new custom resolver JSON template.
This seems like it should work, but the custom templates no longer have access to the parameters shared from the parent stack:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Metadata": {},
  "Parameters": {
    "AppSyncApiId": {
      "Type": "String",
      "Description": "The id of the AppSync API associated with this project."
    },
    "S3DeploymentBucket": {
      "Type": "String",
      "Description": "The S3 bucket containing all deployment assets for the project."
    },
    "S3DeploymentRootKey": {
      "Type": "String",
      "Description": "An S3 key relative to the S3DeploymentBucket that points to the root of the deployment directory."
    }
  },
  ...
}
So these are standard parameters that were used previously, and my challenge now is accessing the deployment bucket and root key, as these values are generated upon deployment.
The exact use case is for the AppSync function configuration when I attempt to locate the request and response mapping template S3 locations:
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/MyCustomResolver.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
The error message I am receiving is
AWS::CloudFormation::Stack Tue Feb 15 2022 18:20:42 GMT+0000 (Greenwich Mean Time) Parameters: [S3DeploymentBucket, AppSyncApiId, S3DeploymentRootKey] must have values
I feel like I am missing a step to plumb the output values through to the parameters in the JSON, but I can't find any documentation to suggest how to do this using the updated Amplify CLI options.
Let me know if you need any further information and fingers crossed it is something simple for you Amplify/CloudFormation ninjas out there!
Thank you in advance!
In my CloudFormation template, I'm trying to create an S3 bucket only if S3 doesn't already have a bucket that includes a certain keyword in its name. For example, if my keyword is 'picture', I only want this S3 bucket to be created if no bucket in S3 contains the word 'picture' in its name.
"Resources": {
"myBucket": {
"Condition" : "<NO_S3_BUCKET_WITH_'picture'_IN_ITS_NAME_IN_THIS_ACCOUNT>",
"Type": "AWS::S3::Bucket",
"Properties": {
<SOME_PROPERTIES>
}
},
<OTHER_RESOURCES>
}
Is this possible? If so, can it be done with other AWS resources (CloudFront distributions, etc.)?
Is this possible?
Not with plain CloudFormation. You would have to develop a custom resource to do this.
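If you go that route, the custom resource is backed by a Lambda function that does the existence check (and, in this rough sketch, also creates the bucket when the keyword is not found). The Keyword and BucketName properties below are invented names you would pass from the template, and error handling is kept minimal.

import json
import urllib.request
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    status, data = "SUCCESS", {}
    try:
        props = event.get("ResourceProperties", {})
        keyword = props.get("Keyword", "picture")      # invented property name
        bucket_name = props.get("BucketName")          # invented property name
        if event["RequestType"] in ("Create", "Update"):
            existing = [b["Name"] for b in s3.list_buckets()["Buckets"]]
            if any(keyword in name for name in existing):
                data["Created"] = "false"
            else:
                # Outside us-east-1 you would also pass CreateBucketConfiguration.
                s3.create_bucket(Bucket=bucket_name)
                data["Created"] = "true"
        # Delete requests are acknowledged without touching the bucket.
    except Exception:
        status = "FAILED"
    # A custom resource must always report back to the pre-signed ResponseURL,
    # otherwise the stack hangs until it times out.
    body = json.dumps({
        "Status": status,
        "Reason": "See CloudWatch Logs for details",
        "PhysicalResourceId": event.get("PhysicalResourceId") or context.log_stream_name,
        "StackId": event["StackId"],
        "RequestId": event["RequestId"],
        "LogicalResourceId": event["LogicalResourceId"],
        "Data": data,
    }).encode()
    request = urllib.request.Request(event["ResponseURL"], data=body,
                                     method="PUT", headers={"Content-Type": ""})
    urllib.request.urlopen(request)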
I'm creating resources in AWS (mainly EC2, EBS disks and S3 space) for our customers as part of our SaaS product. I would like to get the usage of those resources so that I can send it to Stripe to charge and invoice my users.
I was thinking that tagging would be a good way to group the resources of a specific customer, so if I put this tag on all of its resources: "Customer" => "cust_id_4894168127", then I could do this pseudo-code API call:
https://www.aws_api_url.com/api/getResourceUsage?Tag=Customer/cust_id_4894168127&From=2020/02/02%To=2020/03/03
And the API would return something like:
[
  {
    "ResourceID": "8hf8972g8h9",
    "ResourceType": "EC2",
    "UsageHours": 231
  },
  {
    "ResourceID": "09j05h05hj",
    "ResourceType": "EBS disk",
    "DiskSpaceUsedGB": 200
  },
  {
    "ResourceID": "h87f3go2f2",
    "ResourceType": "S3 space",
    "SpaceUsedGB": 500
  }
]
I would like to get everything that Amazon is going to charge me, in order to charge the customer for all of those items. If I can't find a way to do it, I'll have to store all the user actions in my database and then calculate the time each EC2 instance was running, etc.
Do you know of a way to do it with the SDK API?
You can use the get-resources function with tag filters to get all resources.
An example of this is below
aws resourcegroupstaggingapi get-resources --tag-filters Key=Customer,Values=cust_id_4894168127
This would return the ARNs of all matching resources along with any tags attached; from there you would need to call the API for each service to get the metadata you need about it.
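If you prefer the SDK, the same call is available in Boto3 and is worth paginating, since larger accounts return more than one page of results; the tag key and value below just mirror the CLI example above.

import boto3

tagging = boto3.client("resourcegroupstaggingapi")
paginator = tagging.get_paginator("get_resources")

arns = []
for page in paginator.paginate(
        TagFilters=[{"Key": "Customer", "Values": ["cust_id_4894168127"]}]):
    arns.extend(item["ResourceARN"] for item in page["ResourceTagMappingList"])

# Each ARN still needs a follow-up call to the owning service (EC2, S3, ...)
# to pull the usage details you want to bill on.
print(arns)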
For billing you can make use of cost allocation tags to identify the charges for a specific customer.
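Once the "Customer" tag is activated as a cost allocation tag in the Billing console, you can pull the actual charges per customer from Cost Explorer instead of reconstructing them yourself. A small sketch, assuming the tag key and date range from the question:

import boto3

ce = boto3.client("ce")  # Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2020-02-02", "End": "2020-03-03"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    Filter={"Tags": {"Key": "Customer", "Values": ["cust_id_4894168127"]}},
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

# One entry per service (EC2, EBS, S3, ...) with the cost attributed to that customer.
for group in response["ResultsByTime"][0]["Groups"]:
    print(group["Keys"][0], group["Metrics"]["UnblendedCost"]["Amount"])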
We have been using multiple AWS services for the last few years. Now we have many AWS resources which were created but don't have a created-by tag. We would like to tag each of our resources (that support tagging) with a created-by tag specifying the name/email of the user who created it. Is it possible to do this through any API (Boto3) or the console? As per my research it seems impossible, but I would like to confirm with the community whether there is any way to do it.
There is no out-of-the-box solution, but you can build a custom one using CloudWatch Events and Lambda. I implemented a similar solution for EC2 resources only last year.
Create event rules for the resources you want to tag. For example, the following event rule calls the target Lambda function whenever an instance, volume, snapshot or AMI is created.
{
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "ec2.amazonaws.com"
    ],
    "eventName": [
      "CreateVolume",
      "RunInstances",
      "CreateImage",
      "CreateSnapshot"
    ]
  }
}
The target Lambda function parses the event data. You need to extract all resource IDs and the principal data, then make an API call to tag the resources. The following example uses the Boto3 EC2 API; resource_ids, username and principal are variables extracted from the event.
ec2.create_tags(Resources=resource_ids, Tags=[{'Key': 'Owner', 'Value': username}, {'Key': 'PrincipalId', 'Value': principal}])
You can extend this solution to tag other resources too.
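For reference, here is a minimal sketch of such a handler. The exact fields inside responseElements differ per API call, so treat the extraction logic as illustrative rather than exhaustive.

import boto3

ec2 = boto3.client("ec2")

def handler(event, context):
    detail = event["detail"]
    # Identify who made the call from the CloudTrail record.
    principal = detail["userIdentity"]["principalId"]
    username = detail["userIdentity"].get("arn", principal).split("/")[-1]

    # Pull the new resource IDs out of the API response data.
    ids = []
    event_name = detail["eventName"]
    if event_name == "RunInstances":
        ids = [i["instanceId"] for i in detail["responseElements"]["instancesSet"]["items"]]
    elif event_name == "CreateVolume":
        ids = [detail["responseElements"]["volumeId"]]
    elif event_name == "CreateSnapshot":
        ids = [detail["responseElements"]["snapshotId"]]
    elif event_name == "CreateImage":
        ids = [detail["responseElements"]["imageId"]]

    if ids:
        ec2.create_tags(Resources=ids,
                        Tags=[{"Key": "Owner", "Value": username},
                              {"Key": "PrincipalId", "Value": principal}])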
I'm trying to create an s3 bucket in a specific region (us-west-2).
This doesn't seem possible using CloudFormation templates.
Any ideas? I have had no luck explicitly naming it using the service-region-hash convention that I read about.
Template:
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Resources": {
    "S3Bucket": {
      "Type": "AWS::S3::Bucket",
      "Properties": {
        "AccessControl": "PublicReadWrite"
      }
    }
  },
  "Outputs": {
    "BucketName": {
      "Value": {
        "Ref": "S3Bucket"
      },
      "Description": "Name of the sample Amazon S3 bucket with a lifecycle configuration."
    }
  }
}
If you want to create the S3 bucket in a specific region, you need to run the CloudFormation template in that region.
The endpoint URLs for CloudFormation are region-based (https://aws.amazon.com/cloudformation/faqs/?nc1=h_ls#regions and http://docs.aws.amazon.com/general/latest/gr/rande.html#cfn_region) and, as of today, you cannot create resources in another region from a single stack.
I have seen this discussed a few times on AWS support threads, so it might be something that gets added in the future.
The bucket is created within one region, and when you need to access it you need to use the S3 endpoint for that region.
If you want to enable cross-region replication, you can do that from the CloudFormation template with the ReplicationConfiguration property.
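In practice this just means targeting us-west-2 when you create the stack, either with --region on the CLI or by pinning the region on the SDK client. A small Boto3 sketch, with placeholder stack and file names:

import boto3

# The bucket ends up in whatever region the CloudFormation client targets.
cfn = boto3.client("cloudformation", region_name="us-west-2")

with open("template.json") as f:   # the template from the question, saved locally
    template_body = f.read()

cfn.create_stack(StackName="my-bucket-stack", TemplateBody=template_body)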