How to access cross-region resources in CloudFormation - amazon-web-services

I have a static website stack that I deploy to us-east-1. I only need the S3 bucket to be deployed in the eu-west-1 region, so to achieve this I used StackSets like this:
StackSet:
  Type: AWS::CloudFormation::StackSet
  Properties:
    Description: Multiple S3 buckets in multiple regions
    PermissionModel: SELF_MANAGED
    StackInstancesGroup:
      - DeploymentTargets:
          Accounts:
            - !Ref "AWS::AccountId"
        Regions:
          - eu-west-1
    StackSetName: !Sub "AppBucketStack"
    TemplateBody: |
      AWSTemplateFormatVersion: 2010-09-09
      Description: Create a S3 bucket
      Resources:
        WebsiteBucket:
          Type: AWS::S3::Bucket
          DeletionPolicy: Retain
          UpdateReplacePolicy: Retain
          Properties:
            BucketName: !Join
              - ''
              - - ameta-app-
                - !Ref 'AWS::Region'
                - '-'
                - !Ref 'AWS::AccountId'
            AccessControl: Private
            CorsConfiguration:
              CorsRules:
                - AllowedHeaders:
                    - "*"
                  AllowedMethods:
                    - GET
                    - POST
                    - PUT
                  AllowedOrigins:
                    - "*"
                  MaxAge: 3600
            WebsiteConfiguration:
              IndexDocument: index.html
              ErrorDocument: 404.html
            Tags:
              - Key: Company
        WebsiteBucketPolicy:
          Type: AWS::S3::BucketPolicy
          Properties:
            Bucket: !Ref 'WebsiteBucket'
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Action:
                    - s3:GetObject
                  Effect: Allow
                  Resource: !Join
                    - ''
                    - - 'arn:aws:s3:::'
                      - !Ref 'WebsiteBucket'
                      - /*
                  Principal:
                    CanonicalUser: !GetAtt OriginAccessIdentity.S3CanonicalUserId
However, now I need to reference the bucket's domain name (!GetAtt WebsiteBucket.DomainName) in CloudFront, which is being deployed in us-east-1. It seems that I can't use the output of the StackSet since the resources are in different regions.
Do you guys have any suggestions?

It seems that I can't use the output of the StackSet since the resources are in different regions.
That's correct. You can't reference outputs across regions or accounts; CloudFormation (CFN) is region-specific. The easiest way is to deploy your resources in one region and then pass their outputs as parameters to the second stack in the other region. You can do this manually, or automatically using the AWS CLI or an SDK from your local workstation or an EC2 instance.
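For example (a minimal sketch with hypothetical names, not the asker's full setup), the us-east-1 stack could accept the bucket's domain as a plain parameter; you would read it from the eu-west-1 stack's outputs with aws cloudformation describe-stacks and feed it to create-stack or update-stack via --parameters:

Parameters:
  WebsiteBucketDomainName:
    Type: String   # e.g. the WebsiteBucket.RegionalDomainName value from the eu-west-1 stack
Resources:
  Distribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        Origins:
          - Id: website-bucket
            DomainName: !Ref WebsiteBucketDomainName
            S3OriginConfig: {}
        DefaultCacheBehavior:
          TargetOriginId: website-bucket
          ViewerProtocolPolicy: redirect-to-https
          ForwardedValues:
            QueryString: false

RegionalDomainName tends to be a safer attribute than DomainName for buckets outside us-east-1, since the global S3 endpoint can issue temporary redirects for a while after bucket creation.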
But if you want to keep everything within CFN, you would have to develop a custom resource for the second stack. The resource would take the form of a Lambda function which uses the AWS SDK to read the outputs from the stack in the first region and return them to your stack in the other region.
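A rough sketch of such a custom resource, as one CloudFormation fragment with the Lambda code inlined (all names are hypothetical, and the remote stack must declare the values you need, such as the bucket domain name, in an Outputs section, which the TemplateBody above does not yet do):

Resources:
  CrossRegionOutputRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole
      Policies:
        - PolicyName: DescribeStacks
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action: cloudformation:DescribeStacks
                Resource: '*'
  CrossRegionOutputFunction:
    Type: AWS::Lambda::Function
    Properties:
      Handler: index.handler
      Runtime: python3.9
      Timeout: 30
      Role: !GetAtt CrossRegionOutputRole.Arn
      Code:
        ZipFile: |
          import boto3
          import cfnresponse

          def handler(event, context):
              try:
                  if event['RequestType'] == 'Delete':
                      cfnresponse.send(event, context, cfnresponse.SUCCESS, {})
                      return
                  props = event['ResourceProperties']
                  cfn = boto3.client('cloudformation', region_name=props['Region'])
                  stack = cfn.describe_stacks(StackName=props['StackName'])['Stacks'][0]
                  # Expose every output of the remote stack as an attribute of the custom resource
                  outputs = {o['OutputKey']: o['OutputValue'] for o in stack.get('Outputs', [])}
                  cfnresponse.send(event, context, cfnresponse.SUCCESS, outputs)
              except Exception as e:
                  cfnresponse.send(event, context, cfnresponse.FAILED, {'Error': str(e)})
  RemoteStackOutputs:
    Type: Custom::RemoteStackOutputs
    Properties:
      ServiceToken: !GetAtt CrossRegionOutputFunction.Arn
      Region: eu-west-1
      StackName: my-remote-stack   # placeholder; stack instances created by StackSets get generated names

The CloudFront template could then use !GetAtt RemoteStackOutputs.WebsiteBucketDomainName, assuming the eu-west-1 stack declares an output with that key.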

Related

Is it possible to use S3 Access Point as a static website?

I'm trying to figure out whether it is possible to use AWS S3 Access Point for hosting a static S3 website.
The S3WebsiteBucket.WebsiteURL resource described below works great, but I need to use an Access Point instead.
The failure message whenever I request the index file (the URL is like https://my-access-point-0000000000.s3-accesspoint.eu-north-1.amazonaws.com/index.html) is the following:
InvalidRequest
The authorization mechanism you have provided is not supported. Please use Signature Version 4.
My CloudFormation template:
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  S3WebsiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: PublicRead
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: error.html
      VersioningConfiguration:
        Status: Enabled
  S3WebsiteBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      PolicyDocument:
        Id: AllowPublicRead
        Version: 2012-10-17
        Statement:
          - Sid: PublicReadForGetBucketObjects
            Effect: Allow
            Principal: '*'
            Action: 's3:GetObject'
            Resource: !Join
              - ''
              - - 'arn:aws:s3:::'
                - !Ref S3WebsiteBucket
                - /*
      Bucket: !Ref S3WebsiteBucket
  S3AccessPoint:
    Type: AWS::S3::AccessPoint
    Properties:
      Bucket: !Ref S3WebsiteBucket
      Name: my-access-point
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        IgnorePublicAcls: true
        BlockPublicPolicy: true
        RestrictPublicBuckets: false
Is it possible to use an S3 Access Point for such a task at all, or is it not meant for public access (static websites)? If it's possible, is there anything that I missed - perhaps the S3AccessPoint needs its own IAM access policy?
My primary motivation for using an S3 Access Point is to hide the original bucket name without using Route 53 and custom domains.
Sadly you can't do this, as S3 website mode is for buckets only (not access points). From the docs:
Amazon S3 website endpoints do not support HTTPS or access points.

Usage of AWS Lake Formation with CloudFormation

I want to set up an additional security layer on top of my S3 / Glue data lake using Lake Formation. I want to do as much as possible via Infrastructure as Code, so naturally I looked into the documentation of the CloudFormation implementation of Lake Formation, which is currently, frankly speaking, not very useful.
I have a simple use case: Granting admin permission to one IAM-User on one bucket.
Can someone help me out with an example or anything similar?
This is what I found out:
Setting a data lake location and granting data permissions on your databases is currently possible. Unfortunately, it seems like CloudFormation doesn't fully support Data locations yet. You will have to grant your IAM role access to the S3 bucket by hand in the AWS Console under Lake Formation -> Data locations. I will update the answer as soon as CloudFormation supports more.
This is the template that we are using at the moment:
DataBucket:
  Type: AWS::S3::Bucket
  DeletionPolicy: Retain
  UpdateReplacePolicy: Retain
  Properties:
    AccessControl: Private
    BucketEncryption:
      ServerSideEncryptionConfiguration:
        - ServerSideEncryptionByDefault:
            SSEAlgorithm: AES256
    VersioningConfiguration:
      Status: Enabled
    LifecycleConfiguration:
      Rules:
        - Id: InfrequentAccessRule
          Status: Enabled
          Transitions:
            - TransitionInDays: 30
              StorageClass: INTELLIGENT_TIERING
GlueDatabase:
  Type: AWS::Glue::Database
  Properties:
    CatalogId: !Ref AWS::AccountId
    DatabaseInput:
      Name: !FindInMap [Environment, !Ref Environment, GlueDatabaseName]
      Description: !Sub Glue Database ${Environment}
GlueDataAccessRole:
  Type: AWS::IAM::Role
  Properties:
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole
    AssumeRolePolicyDocument:
      Version: 2012-10-17
      Statement:
        - Sid: ''
          Effect: Allow
          Principal:
            Service: glue.amazonaws.com
          Action: sts:AssumeRole
    Policies:
      - PolicyName: AccessDataBucketPolicy
        PolicyDocument:
          Version: 2012-10-17
          Statement:
            - Effect: Allow
              Action:
                - glue:*
                - lakeformation:*
              Resource: '*'
            - Effect: Allow
              Action:
                - s3:GetObject
                - s3:PutObject
                - s3:ListBucket
                - s3:DeleteObject
              Resource:
                - !Sub ${DataBucket.Arn}
                - !Sub ${DataBucket.Arn}/*
DataBucketLakeFormation:
  Type: AWS::LakeFormation::Resource
  Properties:
    ResourceArn: !GetAtt DataBucket.Arn
    UseServiceLinkedRole: true
DataLakeFormationPermission:
  Type: AWS::LakeFormation::Permissions
  Properties:
    DataLakePrincipal:
      DataLakePrincipalIdentifier: !GetAtt GlueDataAccessRole.Arn
    Permissions:
      - ALL
    Resource:
      DatabaseResource:
        Name: !Ref GlueDatabase
      DataLocationResource:
        S3Resource: !Ref DataBucket

How to migrate SNS and SQS using CloudFormation?

My AWS account has 250+ SNS topics and SQS queues that I need to migrate from one region to another using CloudFormation. Can anyone help me write a template in YAML?
Resources:
  T1:
    Type: 'AWS::SNS::Topic'
    Properties: {}
  Q1:
    Type: 'AWS::SQS::Queue'
    Properties: {}
  Q1P:
    Type: 'AWS::SQS::QueuePolicy'
    Properties:
      Queues:
        - !Ref Q1
      PolicyDocument:
        Id: AllowIncomingAccess
        Statement:
          - Effect: Allow
            Principal:
              AWS:
                - !Ref AWS::AccountId
            Action:
              - sqs:SendMessage
              - sqs:ReceiveMessage
            Resource:
              - !GetAtt Q1.Arn
          - Effect: Allow
            Principal: '*'
            Action:
              - sqs:SendMessage
            Resource:
              - !GetAtt Q1.Arn
  T1SUB:
    Type: 'AWS::SNS::Subscription'
    Properties:
      Protocol: sqs
      Endpoint: !GetAtt Q1.Arn
      TopicArn: !Ref T1
You can try Former2, which is an open-source tool:
Former2 allows you to generate Infrastructure-as-Code outputs from your existing resources within your AWS account. By making the relevant calls using the AWS JavaScript SDK, Former2 will scan across your infrastructure and present you with the list of resources for you to choose which to generate outputs for.

CloudFormation: separate CloudFormation templates for S3 bucket and Lambda

I have created a CloudFormation template to configure an S3 bucket with an event notification that calls a Lambda function. The Lambda is triggered whenever a new object is created in the bucket.
The problem I have is that when I delete the stack, the bucket is also deleted. For debugging and testing purposes I had to delete the stack.
AWSTemplateFormatVersion: '2010-09-09'
Description: Upload an object to an S3 bucket, triggering a Lambda event, returning the object key as a Stack Output.
Parameters:
  Body:
    Description: Stack to create s3 bucket and the lambda trigger
    Type: String
    Default: Test
  BucketName:
    Description: S3 Bucket name
    Type: String
    Default: image-process-bucket
Resources:
  ImageProcessorExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
      Policies:
        - PolicyName: S3Policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 's3:PutObject'
                  - 'S3:DeleteObject'
                Resource: !Sub "arn:aws:s3:::${BucketName}/*"
  ImageProcessor:
    Type: AWS::Lambda::Function
    Properties:
      Description: Prints the filename
      Handler: imageProcessor.handler
      Role: !GetAtt ImageProcessorExecutionRole.Arn
      Code: .
      Runtime: nodejs12.x
      Environment:
        Variables:
          BucketName:
            Ref: BucketName
  Bucket:
    Type: AWS::S3::Bucket
    DependsOn: BucketPermission
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: 's3:ObjectCreated:*'
            Function: !GetAtt ImageProcessor.Arn
  BucketPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName: !Ref ImageProcessor
      Principal: s3.amazonaws.com
      SourceAccount: !Ref "AWS::AccountId"
      SourceArn: !Sub "arn:aws:s3:::${BucketName}"
To resolve this, I separated the two resources into separate templates using Outputs. The problem with this is that I cannot delete the Lambda function stack because it is being referenced by the bucket stack.
I want to know what the right approach is. Is it really required to separate these two resources? I believe the Lambda function needs to be changed frequently.
If yes, what is the correct way to do it?
If not, how should I handle the need to make changes?
The approach using Outputs and Imports will always create dependencies and will not allow deletion. This is generic behavior for any resource. How do we deal with deletion in this case? Is this a good approach to use?
Description: Upload an object to an S3 bucket, triggering a Lambda event, returning the object key as a Stack Output.
Parameters:
  Body:
    Description: Stack to create s3 bucket and the lambda trigger
    Type: String
    Default: Test
  BucketName:
    Description: S3 Bucket name
    Type: String
    Default: image-process-bucket
Resources:
  ImageProcessorExecutionRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: 'sts:AssumeRole'
      Path: /
      ManagedPolicyArns:
        - "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
      Policies:
        - PolicyName: S3Policy
          PolicyDocument:
            Version: '2012-10-17'
            Statement:
              - Effect: Allow
                Action:
                  - 's3:PutObject'
                  - 'S3:DeleteObject'
                Resource: !Sub "arn:aws:s3:::${BucketName}/*"
  ImageProcessor:
    Type: AWS::Lambda::Function
    Properties:
      Description: Prints the filename
      Handler: imageProcessor.handler
      Role: !GetAtt ImageProcessorExecutionRole.Arn
      Code: .
      Runtime: nodejs12.x
      Environment:
        Variables:
          BucketName:
            Ref: BucketName
Outputs:
  ImageProcessingARN:
    Description: ARN of the function
    Value:
      Fn::Sub: ${ImageProcessor.Arn}
    Export:
      Name: ImageProcessingARN
  ImageProcessingName:
    Description: Name of the function
    Value: !Ref ImageProcessor
    Export:
      Name: ImageProcessingName
AWSTemplateFormatVersion: '2010-09-09'
Description: Test
Parameters:
  BucketName:
    Description: Name of the bucket
    Type: String
    Default: imageprocess-bucket
Resources:
  Bucket:
    Type: AWS::S3::Bucket
    DependsOn: BucketPermission
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        LambdaConfigurations:
          - Event: 's3:ObjectCreated:*'
            Function:
              Fn::ImportValue: ImageProcessingARN
  BucketPermission:
    Type: AWS::Lambda::Permission
    Properties:
      Action: 'lambda:InvokeFunction'
      FunctionName:
        Fn::ImportValue: ImageProcessingName
      Principal: s3.amazonaws.com
      SourceAccount: !Ref "AWS::AccountId"
      SourceArn: !Sub "arn:aws:s3:::${BucketName}"
There is no such thing as the right approach; it almost always depends on your unique situation. Strictly speaking, it is not required to separate the resources into different CloudFormation templates. A Lambda function that changes a lot is also not a sufficient reason for separating the resources.
You seem to be separating the resources correctly in two different stacks. You just do not like that you have to delete the S3 bucket first because it makes debugging more difficult.
If my assumption is correct that you want to delete or update the Lambda CloudFormation stack frequently while not wanting to delete the S3 bucket, then there are at least two solutions to this problem:
1. Put a DeletionPolicy and an UpdateReplacePolicy on your S3 bucket (see the first sketch below). By adding these policies you can delete the CloudFormation stack while keeping the S3 bucket. This allows you to keep the S3 bucket and the Lambda function in one CloudFormation template. To create the stack again later, remove the S3 bucket resource from the template and manually import the bucket back into the CloudFormation stack afterwards.
2. Use a queue configuration as the notification configuration (see the second sketch below). This is a good approach if you plan on separating the CloudFormation template into an S3 bucket template and a Lambda function template (a decision based on frequency of change and dependencies between the two templates). Put an SQS queue in the S3 bucket template and create the CloudFormation stack based on that template. Use the SQS ARN (as a template configuration parameter, or via the ImportValue intrinsic function) in the Lambda function stack and let SQS trigger the Lambda function. I think this is the best approach, since you can now remove the Lambda function stack without having to delete the S3 bucket stack. This way you effectively reduce the coupling between the two CloudFormation stacks, because the SQS queue in the S3 bucket stack is unaware of potential Lambda function listeners.
As an aside, I think it is still possible to delete the S3 bucket CloudFormation stack first and the image-processing Lambda CloudFormation stack second, although I assume this is not something you typically want to do.
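For the first option, a minimal sketch of the two retention policies (they sit at the resource level, next to Type, not under Properties):

Bucket:
  Type: AWS::S3::Bucket
  DeletionPolicy: Retain        # keep the bucket when the stack is deleted
  UpdateReplacePolicy: Retain   # keep the old bucket if an update forces replacement
  Properties:
    BucketName: !Ref BucketName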
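For the second option, a rough sketch under the assumption that the bucket stack owns the queue and exports its ARN (all names here are made up). In the bucket stack:

Resources:
  NotificationQueue:
    Type: AWS::SQS::Queue
  NotificationQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    Properties:
      Queues:
        - !Ref NotificationQueue
      PolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service: s3.amazonaws.com
            Action: sqs:SendMessage
            Resource: !GetAtt NotificationQueue.Arn
            Condition:
              ArnLike:
                "aws:SourceArn": !Sub "arn:aws:s3:::${BucketName}"
  Bucket:
    Type: AWS::S3::Bucket
    DependsOn: NotificationQueuePolicy   # the queue policy must exist before S3 validates the notification
    Properties:
      BucketName: !Ref BucketName
      NotificationConfiguration:
        QueueConfigurations:
          - Event: 's3:ObjectCreated:*'
            Queue: !GetAtt NotificationQueue.Arn
Outputs:
  NotificationQueueArn:
    Value: !GetAtt NotificationQueue.Arn
    Export:
      Name: NotificationQueueArn

And in the Lambda function stack, an event source mapping that can be created and deleted independently of the bucket (the Lambda execution role also needs sqs:ReceiveMessage, sqs:DeleteMessage and sqs:GetQueueAttributes on the queue):

  ImageProcessorEventSource:
    Type: AWS::Lambda::EventSourceMapping
    Properties:
      EventSourceArn: !ImportValue NotificationQueueArn   # exported by the bucket stack
      FunctionName: !Ref ImageProcessor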

Enable object logging on an S3 bucket via CloudFormation

In AWS S3, you have the ability to visit the console and add 'Object-level logging' to a bucket. You create or select a pre-existing trail and select read and write log types.
Now I am creating buckets via YAML CloudFormation and want to add a pre-existing trail (or create a new one) to these too. How do I do that? I can't find any examples.
Object logging for S3 buckets with CloudTrail is done by defining so-called event selectors for data events in CloudTrail. That is available through CloudFormation as well. The following CloudFormation template shows how it's done. The important part is in the lower half (the upper half just sets up an S3 bucket for CloudTrail to log to):
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  s3BucketForTrailData:
    Type: "AWS::S3::Bucket"
  trailBucketPolicy:
    Type: "AWS::S3::BucketPolicy"
    Properties:
      Bucket: !Ref s3BucketForTrailData
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: "cloudtrail.amazonaws.com"
            Action: "s3:GetBucketAcl"
            Resource: !Sub "arn:aws:s3:::${s3BucketForTrailData}"
          - Effect: Allow
            Principal:
              Service: "cloudtrail.amazonaws.com"
            Action: "s3:PutObject"
            Resource: !Sub "arn:aws:s3:::${s3BucketForTrailData}/AWSLogs/${AWS::AccountId}/*"
            Condition:
              StringEquals:
                "s3:x-amz-acl": "bucket-owner-full-control"
  s3BucketToBeLogged:
    Type: "AWS::S3::Bucket"
  cloudTrailTrail:
    Type: "AWS::CloudTrail::Trail"
    DependsOn:
      - trailBucketPolicy
    Properties:
      IsLogging: true
      S3BucketName: !Ref s3BucketForTrailData
      EventSelectors:
        - DataResources:
            - Type: "AWS::S3::Object"
              Values:
                - "arn:aws:s3:::" # log data events for all S3 buckets
                - !Sub "${s3BucketToBeLogged.Arn}/" # log data events for the S3 bucket defined above
          IncludeManagementEvents: true
          ReadWriteType: All
For more details check out the CloudFormation documentation for CloudTrail event selectors.
Essentially the same, but this is how I have done it:
cloudtrail:
  Type: AWS::CloudTrail::Trail
  Properties:
    EnableLogFileValidation: Yes
    EventSelectors:
      - DataResources:
          - Type: AWS::S3::Object
            Values:
              - arn:aws:s3:::s3-event-step-bucket/
        IncludeManagementEvents: Yes
        ReadWriteType: All
    IncludeGlobalServiceEvents: Yes
    IsLogging: Yes
    IsMultiRegionTrail: Yes
    S3BucketName: s3-event-step-bucket-storage
    TrailName: xyz