AWS CloudFormation Template - Set Region for S3 Bucket

I'm trying to create an S3 bucket in a specific region (us-west-2).
This doesn't seem possible using CloudFormation templates.
Any ideas? I have had no luck explicitly naming it using the service-region-hash convention that I read about.
Template:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "S3Bucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "AccessControl": "PublicReadWrite"
            }
        }
    },
    "Outputs": {
        "BucketName": {
            "Value": {
                "Ref": "S3Bucket"
            },
            "Description": "Name of the sample Amazon S3 bucket with a lifecycle configuration."
        }
    }
}

If you want to create the S3 bucket in a specific region, you need to create the CloudFormation stack in that region.
The endpoint URLs for CloudFormation are region-based (https://aws.amazon.com/cloudformation/faqs/?nc1=h_ls#regions and http://docs.aws.amazon.com/general/latest/gr/rande.html#cfn_region) and, as of today, a stack cannot create resources in a different region.
I have seen this discussed a few times on AWS support threads, so it might be something that is added in the future.
The bucket is created within one region, and when you need to access it you need to pass that region to the S3 endpoint.
If you want to enable cross-region replication, you can do that from the CloudFormation template with the ReplicationConfiguration property.
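For example, here is a minimal sketch (the stack and template file names are illustrative) of targeting the region explicitly with the AWS CLI; the bucket is created in the same region as the stack:

    # Create the stack in us-west-2; the S3 bucket it declares
    # will be created in that same region.
    aws cloudformation create-stack \
        --stack-name my-s3-stack \
        --template-body file://template.json \
        --region us-west-2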

Related

EventBridge notification on an Amazon S3 folder

I have to start a Step Functions state machine execution upon file upload to a folder inside a bucket. I know how to configure EventBridge at the S3 bucket level, but there can be multiple file uploads to the same bucket, and I need to be notified only when an object is inserted into a particular folder inside the bucket. Is there any possible way to achieve this?
Here is another solution. Folders do not technically exist in S3; they are merely a UI feature, and what look like "folders" are ultimately key prefixes.
You can trigger an EventBridge notification on an S3 folder with the following event pattern:
{
    "source": ["aws.s3"],
    "detail-type": ["Object Created"],
    "detail": {
        "bucket": {
            "name": ["<bucket-name>"]
        },
        "object": {
            "key": [{
                "prefix": "<prefix/folder-name>"
            }]
        }
    }
}
Yes, you can use an EventBridge filter to send events only when the S3 object's key matches your folder-name prefix.
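As a rough sketch of how this could be wired up in a CloudFormation template (the resource names, the uploads/ prefix, and the StateMachineArn parameter are illustrative, and an IAM role named EventsInvokeRole that allows events.amazonaws.com to call states:StartExecution is assumed to exist elsewhere in the template), note that the bucket itself must have EventBridge notifications enabled:

    "EventBridgeEnabledBucket": {
        "Type": "AWS::S3::Bucket",
        "Properties": {
            "NotificationConfiguration": {
                "EventBridgeConfiguration": { "EventBridgeEnabled": true }
            }
        }
    },
    "FolderUploadRule": {
        "Type": "AWS::Events::Rule",
        "Properties": {
            "EventPattern": {
                "source": ["aws.s3"],
                "detail-type": ["Object Created"],
                "detail": {
                    "bucket": { "name": [{ "Ref": "EventBridgeEnabledBucket" }] },
                    "object": { "key": [{ "prefix": "uploads/" }] }
                }
            },
            "Targets": [{
                "Arn": { "Ref": "StateMachineArn" },
                "Id": "StartStateMachine",
                "RoleArn": { "Fn::GetAtt": ["EventsInvokeRole", "Arn"] }
            }]
        }
    }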

After migrating the Amplify CLI and AppSync Transformer to v2, no longer able to access CloudFormation parameters from a custom resource

We have an AWS Amplify project whose API I am in the process of migrating from Transformer 1 to 2.
As part of this, we have a number of custom resolvers that previously had their own stack JSON template in the stacks/ folder, as generated by the Amplify CLI.
As per the migration instructions, I have created new custom resources using amplify add custom, which lets me create either a CDK (Cloud Development Kit) resource or a CloudFormation template. I just want a lift-and-shift for now, so I've gone with the template option and moved the content from the old stack JSON into the new custom resolver JSON template.
This seems like it should work, but the custom templates no longer have access to the parameters shared from the parent stack:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Metadata": {},
    "Parameters": {
        "AppSyncApiId": {
            "Type": "String",
            "Description": "The id of the AppSync API associated with this project."
        },
        "S3DeploymentBucket": {
            "Type": "String",
            "Description": "The S3 bucket containing all deployment assets for the project."
        },
        "S3DeploymentRootKey": {
            "Type": "String",
            "Description": "An S3 key relative to the S3DeploymentBucket that points to the root of the deployment directory."
        }
    },
    ...
}
These are the standard parameters that were used previously; my challenge now is accessing the deployment bucket and root key, as these values are generated at deployment time.
The exact use case is the AppSync function configuration, where I attempt to locate the request and response mapping template S3 locations:
"RequestMappingTemplateS3Location": {
"Fn::Sub": [
"s3://${S3DeploymentBucket}/${S3DeploymentRootKey}/resolvers/MyCustomResolver.req.vtl",
{
"S3DeploymentBucket": {
"Ref": "S3DeploymentBucket"
},
"S3DeploymentRootKey": {
"Ref": "S3DeploymentRootKey"
}
}
]
},
The error message I am receiving is:
AWS::CloudFormation::Stack Tue Feb 15 2022 18:20:42 GMT+0000 (Greenwich Mean Time) Parameters: [S3DeploymentBucket, AppSyncApiId, S3DeploymentRootKey] must have values
I feel like I am missing a step to plumb the output values through to the parameters in the JSON, but I can't find any documentation to suggest how to do this using the updated Amplify CLI options.
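For reference, my understanding is that a nested stack normally receives these values from its parent via the Parameters property of an AWS::CloudFormation::Stack resource, something like the sketch below (the logical ID and TemplateURL here are illustrative):

    "CustomResolverStack": {
        "Type": "AWS::CloudFormation::Stack",
        "Properties": {
            "TemplateURL": "https://s3.amazonaws.com/<deployment-bucket>/<root-key>/stacks/MyCustomResolver.json",
            "Parameters": {
                "AppSyncApiId": { "Ref": "AppSyncApiId" },
                "S3DeploymentBucket": { "Ref": "S3DeploymentBucket" },
                "S3DeploymentRootKey": { "Ref": "S3DeploymentRootKey" }
            }
        }
    }

What I can't find is where, in the new Amplify custom resource layout, this plumbing is supposed to happen.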
Let me know if you need any further information and fingers crossed it is something simple for you Amplify/CloudFormation ninjas out there!
Thank you in advance!

Not able to create AWS Lambda using CloudFormation without code

I am trying to set up a resource pipeline where I want to deploy all my resources using CloudFormation. I have a separate pipeline to deploy code.
I am using the CloudFormation template below:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "S3Bucket": {
            "Type": "AWS::S3::Bucket",
            "DeletionPolicy": "Retain",
            "Properties": {
                "BucketName": "devanimalhubstorage"
            }
        },
        "HelloLambdaRole": {
            "Type": "AWS::IAM::Role",
            "Properties": {
                "RoleName": "HelloLambdaRole",
                "AssumeRolePolicyDocument": {
                    "Statement": [
                        {
                            "Effect": "Allow",
                            "Principal": {
                                "Service": "lambda.amazonaws.com"
                            },
                            "Action": "sts:AssumeRole"
                        }
                    ]
                }
            }
        },
        "AnimalhubLambdaFunction": {
            "Type": "AWS::Lambda::Function",
            "Properties": {
                "FunctionName": "AnimalhubLambdaFunction",
                "Role": {
                    "Fn::GetAtt": ["HelloLambdaRole", "Arn"]
                },
                "Code": {},
                "Runtime": "dotnetcore2.1",
                "Handler": "myfirstlambda::myfirstlambda.Function::FunctionHandler"
            }
        }
    }
}
Problem: Resource handler returned message: "Please provide a source for function code. (Service: Lambda, Status Code: 400, Request ID: 377fff66-a06d-495f-823e-34aec70f0e22, Extended Request ID: null)" (RequestToken: 9c9beee7-2c71-4d5d-e4a8-69065e12c5fa, HandlerErrorCode: InvalidRequest)
I want to build a separate pipeline for code build and deployment. Can't we deploy a Lambda function without code?
What is the recommended solution in AWS? (I used to follow this approach in Azure; I am new to AWS.)
First, I would advise you to look into AWS SAM. It is very helpful when creating and deploying serverless applications and has a lot of examples to help you with your use case.
Second, using separate pipelines for this purpose is not the recommended way in AWS. Using dummy code, as the other answer suggests, is also quite dangerous, since an update to your CloudFormation stack would override any other code that you have deployed to the Lambda function using your other pipeline.
In a serverless application like this, you could split your resources into two or more CloudFormation stacks. For example, you could create your S3 buckets and other more "stable" infrastructure in one stack, deployed either manually or in a pipeline, and deploy your code in a separate pipeline using another CloudFormation stack. Any values (ARNs, etc.) needed from the more stable resources could be injected as parameters in the template, or referenced with CloudFormation's Fn::ImportValue intrinsic function. I'd personally recommend using parameters, since they are more flexible for future changes.
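As a rough sketch (the export name and logical IDs are illustrative), the "stable" stack could export the bucket name:

    "Outputs": {
        "StorageBucketName": {
            "Value": { "Ref": "S3Bucket" },
            "Export": { "Name": "animalhub-storage-bucket-name" }
        }
    }

and the code stack could then consume it with { "Fn::ImportValue": "animalhub-storage-bucket-name" }, or, if you prefer the parameter route, the pipeline would pass the bucket name in as a plain String parameter instead.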

How to conditionally create an S3 bucket in CloudFormation?

We have a CloudFormation stack that contains data storage resources like RDS and S3. We want to preserve them even if the stack is deleted, and when upgrading the stack a few parameter changes can cause the services to be re-created, so we have set a DeletionPolicy of Retain. Now when I re-run the stack, it is unable to create the resources because resources with the same names already exist.
I thought of creating the resources only after checking whether they exist. Is it possible for CloudFormation to perform this check by itself? I also realized the retained resources have become rogue resources, as they are no longer part of this CloudFormation stack.
What would be the best approach to get around this?
You have to import these buckets back into your stack.
Sadly, CFN does not have the functionality to check whether something exists and perform the import automatically by itself. So you have to do it manually using the AWS console, or programmatically using the SDK or CLI.
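For example, importing an existing bucket with the CLI might look like the following sketch (the stack, change set, logical ID, and bucket names are illustrative; the template must already describe the bucket, and imported resources must carry a DeletionPolicy):

    # Create an IMPORT change set that brings the existing bucket
    # under the stack's management.
    aws cloudformation create-change-set \
        --stack-name my-stack \
        --change-set-name import-storage-bucket \
        --change-set-type IMPORT \
        --resources-to-import '[{
            "ResourceType": "AWS::S3::Bucket",
            "LogicalResourceId": "StorageBucket",
            "ResourceIdentifier": { "BucketName": "my-existing-bucket" }
        }]' \
        --template-body file://template.json

    # Review the change set, then apply it.
    aws cloudformation execute-change-set \
        --stack-name my-stack \
        --change-set-name import-storage-bucket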
This is a complex question in general, and without knowledge of your setup and your specific CF template it's hard to give specific advice, but I will address some general points and suggested actions.
"When I re-run the stack it is unable to create the resources because resources with the same name exist" - this is a common problem with named resources. As general guidance, avoid statically naming those resources with which no human operators will interact. If you have a strict naming convention in place, then always add a random suffix to names, which will make them unique to each stack.
If you have storage resources, e.g. S3 buckets and DBs, then consider putting them into a separate CF stack and exposing relevant attributes (like the DB endpoint address) as CF exports. You can reference them in other stacks by using Fn::ImportValue. This way you can separate "stable" resources from "volatile" ones and still have the benefits of CF.
You can import some existing resources into CF stacks, where they will act as a legitimate part of the stack. But not all resource types can be imported like this.
I think yes, you can do this by putting a Condition in your CFN template, whether in JSON or YAML:
{
    "AWSTemplateFormatVersion": "2010-09-09",
    "Transform": "AWS::Serverless-2016-10-31",
    "Description": "",
    "Parameters": {
        "ShouldCreateBucketInputParameter": {
            "Type": "String",
            "AllowedValues": [
                "true",
                "false"
            ],
            "Description": "If true then the S3 bucket that will be proxied will be created with the CloudFormation stack."
        }
    },
    "Conditions": {
        "CreateS3Bucket": {
            "Fn::Equals": [
                {
                    "Ref": "ShouldCreateBucketInputParameter"
                },
                "true"
            ]
        }
    },
    "Resources": {
        "SerialNumberBucketResource": {
            "Type": "AWS::S3::Bucket",
            "Condition": "CreateS3Bucket",
            "Properties": {
                "AccessControl": "Private"
            }
        }
    }
}
And then using the CLI you just need to set "true" or "false":
aws cloudformation deploy --template-file ./s3BucketWithCondition.json --stack-name bucket-stack --parameter-overrides ShouldCreateBucketInputParameter="true"

AWS CloudFormation: How do I check if a bucket exists from within the CloudFormation template?

In my CloudFormation template, I'm trying to create an S3 bucket only if S3 doesn't already have a bucket that includes a certain keyword in its name. For example, if my keyword is 'picture', I only want this S3 bucket to be created if no bucket in the account contains the word 'picture' in its name.
"Resources": {
"myBucket": {
"Condition" : "<NO_S3_BUCKET_WITH_'picture'_IN_ITS_NAME_IN_THIS_ACCOUNT>",
"Type": "AWS::S3::Bucket",
"Properties": {
<SOME_PROPERTIES>
}
},
<OTHER_RESOURCES>
}
Is this possible? If so, can it be done with other AWS resources (CloudFront distributions, etc.)?
Is this possible?
Not with plain CloudFormation. You would have to develop a custom resource to do this.
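A minimal sketch of how such a custom resource could be declared (the CheckBucketFunction Lambda is hypothetical; it would call s3:ListBuckets, check the names against the keyword, and report back through the standard cfn-response mechanism):

    "BucketNameCheck": {
        "Type": "AWS::CloudFormation::CustomResource",
        "Properties": {
            "ServiceToken": { "Fn::GetAtt": ["CheckBucketFunction", "Arn"] },
            "Keyword": "picture"
        }
    }

Note that template Conditions can only reference parameters, mappings, and other conditions, not resource outputs, so the custom resource's Lambda would typically create (or skip creating) the bucket itself, rather than feeding a Condition on an AWS::S3::Bucket resource.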