Is it possible to use S3 Access Point as a static website? - amazon-web-services

I'm trying to figure out whether it is possible to use an AWS S3 Access Point for hosting a static S3 website.
The S3WebsiteBucket resource described below (accessed via its WebsiteURL attribute) works great, but I need to use an Access Point instead.
The failure message whenever I request the index file (the URL looks like https://my-access-point-0000000000.s3-accesspoint.eu-north-1.amazonaws.com/index.html) is the following:
InvalidRequest
The authorization mechanism you have provided is not supported. Please use Signature Version 4.
My CloudFormation template:
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  S3WebsiteBucket:
    Type: AWS::S3::Bucket
    Properties:
      AccessControl: PublicRead
      WebsiteConfiguration:
        IndexDocument: index.html
        ErrorDocument: error.html
      VersioningConfiguration:
        Status: Enabled
  S3WebsiteBucketPolicy:
    Type: AWS::S3::BucketPolicy
    Properties:
      PolicyDocument:
        Id: AllowPublicRead
        Version: 2012-10-17
        Statement:
          - Sid: PublicReadForGetBucketObjects
            Effect: Allow
            Principal: '*'
            Action: 's3:GetObject'
            Resource: !Join
              - ''
              - - 'arn:aws:s3:::'
                - !Ref S3WebsiteBucket
                - /*
      Bucket: !Ref S3WebsiteBucket
  S3AccessPoint:
    Type: AWS::S3::AccessPoint
    Properties:
      Bucket: !Ref S3WebsiteBucket
      Name: my-access-point
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        IgnorePublicAcls: true
        BlockPublicPolicy: true
        RestrictPublicBuckets: false
Is it possible to use an S3 Access Point for such a task at all, or is it not meant for public access (static websites)? If it is possible, is there anything that I missed - perhaps the S3AccessPoint needs its own IAM access policy?
My primary motivation for using S3 Access Point is to hide the original bucket name without using Route 53 and custom domains.

Sadly you can't do this, as S3 website mode is for buckets only (not access points). From the docs:
Amazon S3 website endpoints do not support HTTPS or access points.
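If the main goal is just to hide the bucket name (the motivation mentioned above) without Route 53 or a custom domain, a common workaround is to put a CloudFront distribution in front of the bucket and hand out the generated *.cloudfront.net URL instead. A minimal sketch, reusing the S3WebsiteBucket resource from the template above (the distribution itself is an assumption, not part of the original template):

WebsiteCDN:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      Enabled: true
      DefaultRootObject: index.html
      DefaultCacheBehavior:
        TargetOriginId: S3WebsiteBucket
        ViewerProtocolPolicy: redirect-to-https
        ForwardedValues:
          QueryString: false
      Origins:
        - Id: S3WebsiteBucket
          # REST endpoint of the bucket; viewers only ever see the *.cloudfront.net domain
          DomainName: !GetAtt S3WebsiteBucket.RegionalDomainName
          S3OriginConfig:
            OriginAccessIdentity: ''

Note that this serves through the bucket's REST endpoint, so website-only features such as the ErrorDocument and redirect rules do not apply, and the objects still need to be publicly readable (or fronted by an origin access identity).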

Related

How can I fix the circular dependency between my S3 bucket and SQS?

I am trying to write a serverless configuration for my service. A requirement is that the S3 bucket sends notifications to an SQS queue on object create events. However, when I try to deploy my service using serverless deploy, I get this error:
Serverless Error ----------------------------------------
An error occurred: PolicyS3Bucket - Unable to validate the following destination configurations (Service: Amazon S3; Status Code: 400; Error Code: InvalidArgument; Request ID: 4D25CQFZN0R2Q9FG; S3 Extended Request ID: dLfKHJgOnDUcAF3xwN9EgW9LibP3bt7ITj7PyuCXs2qH6Qvmn2iZu7aXYbbUdqptPvgvjwkcWYM=; Proxy: null).
I found this page which (if I understand correctly) explains that I have a circular dependency between my S3 bucket and my SQS queue, and that I must fix this circular dependency in order to be able to successfully deploy my service.
This page explains that I can use Fn::Sub or Fn::Join to fix the circular dependency. Based on this suggestion, I modified my configuration from the original version to a new version as below, using Sub:
cfn.s3.yml (original version)
Resources:
  PolicyS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: ${self:custom.config.policyBucketName}
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      NotificationConfiguration:
        QueueConfigurations:
          - Event: s3:ObjectCreated:*
            Queue: !GetAtt SQSQueue.Arn
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - BucketKeyEnabled: true
            ServerSideEncryptionByDefault:
              KMSMasterKeyID: !Ref CustomMasterKey
              SSEAlgorithm: aws:kms
      Tags: ${redacted}
cfn.s3.yml (new version; the change is the Queue line)
Resources:
  PolicyS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: ${self:custom.config.policyBucketName}
      AccessControl: Private
      PublicAccessBlockConfiguration:
        BlockPublicAcls: true
        BlockPublicPolicy: true
        IgnorePublicAcls: true
        RestrictPublicBuckets: true
      NotificationConfiguration:
        QueueConfigurations:
          - Event: s3:ObjectCreated:*
            Queue: !Sub arn:aws:sqs:${self:provider.region}:${AWS::AccountId}:${self:custom.config.sqsQueueName}
      BucketEncryption:
        ServerSideEncryptionConfiguration:
          - BucketKeyEnabled: true
            ServerSideEncryptionByDefault:
              KMSMasterKeyID: !Ref CustomMasterKey
              SSEAlgorithm: aws:kms
      Tags: ${redacted}
My unchanged cfn.sqs.yml
Resources:
  SQSQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: ${self:custom.config.sqsQueueName}
When I tried serverless deploy with the new version, I got the same error.
I also tried #kgiannakakis's suggestion to use DependsOn, but I got the same error when I tried that.
How can I fix my serverless configuration so that I can successfully deploy my service?
I found a fix that worked.
I had to update my cfn.sqs.yml to include permissions for S3 buckets to send events to the SQS queue, as below:
Resources:
  SQSQueue:
    Type: AWS::SQS::Queue
    Properties:
      QueueName: ${self:custom.config.sqsQueueName}
  S3EventQueuePolicy:
    Type: AWS::SQS::QueuePolicy
    DependsOn: SQSQueue
    Properties:
      PolicyDocument:
        Id: SQSPolicy
        Statement:
          - Effect: Allow
            Sid: PutS3Events
            Action: SQS:SendMessage
            Resource: !GetAtt SQSQueue.Arn
            Principal:
              Service: s3.amazonaws.com
      Queues:
        - !Ref SQSQueue
As for my cfn.s3.yml, the correct way to reference the queue was
Queue: !GetAtt SQSQueue.Arn
I believe the problem was that AWS checks that the notification will be possible at deployment time, rather than letting your service fail at runtime, as explained in this answer:
A lot of AWS configuration allows you to connect services and they fail at runtime if they don't have permission, however S3 notification configuration does check some destinations for access.
This would mean that, since I hadn't configured my SQS queue to allow notifications from the S3 bucket, AWS noticed this misconfiguration and stopped the deployment with an error.
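One ordering detail worth calling out (an assumption about deployment order, not something taken from the original config): because S3 validates the notification destination when the bucket itself is created or updated, the queue policy must already exist at that moment. If CloudFormation happens to create the bucket before the policy, an explicit DependsOn on the bucket forces the right order:

Resources:
  PolicyS3Bucket:
    Type: AWS::S3::Bucket
    # Make sure the queue policy (and therefore S3's permission to send messages)
    # exists before S3 validates the notification configuration.
    DependsOn: S3EventQueuePolicy
    Properties:
      BucketName: ${self:custom.config.policyBucketName}
      NotificationConfiguration:
        QueueConfigurations:
          - Event: s3:ObjectCreated:*
            Queue: !GetAtt SQSQueue.Arn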

Cryptic CloudFormation failure when creating CloudFront Distribution

I have a CloudFormation template set up to track a CloudFront distribution among other things. Getting this set up, I created an AWS::CertificateManager::Certificate and an AWS::CloudFront::Distribution resource, where the CDN just serves from a non-website S3 origin.
When I run the change set, I get this incredibly vague failure:
"Access denied for operation 'AWS::CloudFront::Distribution'." kind of loses me here. For one thing, it's not clear to me what operation this is supposed to be. On top of that, the stack rollback after this is incomplete. The CloudFormation events don't even show an attempt to remove the CDN or the cert, and when I try to hit the CloudFront URL from my browser, it works flawlessly, so I am not even sure what my template was trying to do here that failed. In fact, the only reason this is an issue for me is because the incomplete rollback tries to revert my lambdas in the stack to nodejs8.10, which causes larger failures. If that weren't an issue, I don't know that I would feel the effects of this vague error.
Template, based on the static site sample from a couple of years ago:
AWSTemplateFormatVersion: 2010-09-09
Transform:
  - AWS::Serverless-2016-10-31
  - AWS::CodeStar
Parameters:
  ProjectId:
    Type: String
    Description: AWS CodeStar projectID used to associate new resources to team members
  CodeDeployRole:
    Type: String
    Description: IAM role to allow AWS CodeDeploy to manage deployment of AWS Lambda functions
  Stage:
    Type: String
    Description: The name for a project pipeline stage, such as Staging or Prod, for which resources are provisioned and deployed.
    Default: ''
Globals:
  Api:
    BinaryMediaTypes:
      - image~1png
  Function:
    Runtime: nodejs14.x
    AutoPublishAlias: live
    DeploymentPreference:
      Enabled: true
      Type: Canary10Percent5Minutes
      Role: !Ref CodeDeployRole
Resources:
  MahCert:
    Type: AWS::CertificateManager::Certificate
    Properties:
      DomainName: domain.com
      DomainValidationOptions:
        - DomainName: domain.com
          HostedZoneId: Z2GZX5ZQI1HO5L
      SubjectAlternativeNames:
        - '*.domain.com'
      CertificateTransparencyLoggingPreference: ENABLED
      ValidationMethod: DNS
  CloudFrontCDN:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Comment: Source for all static resources
        PriceClass: PriceClass_100
        Aliases:
          - domain.com
        ViewerCertificate:
          AcmCertificateArn: !Ref MahCert
          MinimumProtocolVersion: TLSv1.2_2021
          SslSupportMethod: sni-only
        DefaultRootObject: index.html
        DefaultCacheBehavior:
          ViewerProtocolPolicy: redirect-to-https
          CachePolicyId: b2884449-e4de-46a7-ac36-70bc7f1ddd6d
          TargetOriginId: SiteBucket
        Enabled: True
        Origins:
          - DomainName: <my_bucket>.s3.amazonaws.com
            Id: SiteBucket
            S3OriginConfig:
              OriginAccessIdentity: ''
  ServerlessRestApi:
    Type: AWS::Serverless::Api
    Properties:
      StageName: Prod
      DefinitionBody:
        swagger: 2.0
        info:
          title: Static resource proxy
        paths:
          /static/{proxy+}:
            get:
              x-amazon-apigateway-integration:
                httpMethod: ANY
                type: http_proxy
                uri: <my_bucket>.s3.amazonaws.com/static/{proxy}
              responses: {}
  GetHelloWorld:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.get
      Role:
        Fn::GetAtt:
          - LambdaExecutionRole
          - Arn
      Events:
        GetEvent:
          Type: Api
          Properties:
            Path: /
            Method: get
        ProxyEvent:
          Type: Api
          Properties:
            Path: /{proxy+}
            Method: any
  GetStaticContent:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.getResource
      Role:
        Fn::GetAtt:
          - LambdaExecutionRole
          - Arn
      Events:
        GetResourceEvent:
          Type: Api
          Properties:
            Path: /static/{folder}/{file}
            Method: get
  GetQuote:
    Type: AWS::Serverless::Function
    Properties:
      Handler: index.getQuote
      Role:
        Fn::GetAtt:
          - LambdaDynamoDBReadRole
          - Arn
      Events:
        GetRandomQuoteEvent:
          Type: Api
          Properties:
            Path: /getquote
            Method: get
        GetQuoteEvent:
          Type: Api
          Properties:
            Path: /getquote/{id}
            Method: get
  LambdaExecutionRole:
    Description: Creating service role in IAM for AWS Lambda
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub 'CodeStar-${ProjectId}-Execution${Stage}'
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: [lambda.amazonaws.com]
            Action: sts:AssumeRole
      Path: /
      ManagedPolicyArns:
        - !Sub 'arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
      PermissionsBoundary: !Sub 'arn:${AWS::Partition}:iam::${AWS::AccountId}:policy/CodeStar_${ProjectId}_PermissionsBoundary'
  LambdaDynamoDBReadRole:
    Description: Creating service role in IAM for AWS Lambda
    Type: AWS::IAM::Role
    Properties:
      RoleName: !Sub '${ProjectId}-DynamoDB-Read'
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service: [lambda.amazonaws.com]
            Action: sts:AssumeRole
      Path: /
      Policies:
        - PolicyName: "dynamodb-read-quotes"
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: "Allow"
                Action:
                  - "dynamodb:GetItem"
                  - "dynamodb:DescribeTable"
                Resource: "<dynamo_arn>"
      ManagedPolicyArns:
        - !Sub 'arn:${AWS::Partition}:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole'
Note - domain.com is not the actual domain I'm using here.
Updates:
I deleted the stack completely and recreated it from this template, thinking something was wrong with the history of the stack. However, I got the same error.
The IAM role that the stack uses already had CloudFront permissions attached; indeed, the problem persisted even after granting this role full write access to CloudFront resources.
Based on the chat discussion, the cause of the issue was found to be a missing IAM permission on the role that is used to deploy the stack. Specifically, the permission that was missing was:
cloudfront:GetDistribution - Grants permission to get the information about a web distribution
Adding that permission to the role solved the problem.
To find the missing permission, CloudTrail's Event History was used.
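For reference, a policy fragment for the deployment role might look like the following sketch; only cloudfront:GetDistribution is confirmed as the permission that was missing here, the other actions are illustrative:

# Statement to attach to the CloudFormation deployment role (illustrative)
- Effect: Allow
  Action:
    - cloudfront:GetDistribution   # the missing permission found via CloudTrail's Event History
    - cloudfront:CreateDistribution
    - cloudfront:UpdateDistribution
    - cloudfront:TagResource
  Resource: '*'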

How to access cross region resources in Cloudformation

I have a static website stack that I deploy to us-east-1. I only need the S3 bucket to be deployed in the eu-west-1 region, so to achieve this I used StackSets like this:
StackSet:
  Type: AWS::CloudFormation::StackSet
  Properties:
    Description: Multiple S3 buckets in multiple regions
    PermissionModel: SELF_MANAGED
    StackInstancesGroup:
      - DeploymentTargets:
          Accounts:
            - !Ref "AWS::AccountId"
        Regions:
          - eu-west-1
    StackSetName: !Sub "AppBucketStack"
    TemplateBody: |
      AWSTemplateFormatVersion: 2010-09-09
      Description: Create a S3 bucket
      Resources:
        WebsiteBucket:
          Type: AWS::S3::Bucket
          DeletionPolicy: Retain
          UpdateReplacePolicy: Retain
          Properties:
            BucketName: !Join
              - ''
              - - ameta-app-
                - !Ref 'AWS::Region'
                - '-'
                - !Ref 'AWS::AccountId'
            AccessControl: Private
            CorsConfiguration:
              CorsRules:
                - AllowedHeaders:
                    - "*"
                  AllowedMethods:
                    - GET
                    - POST
                    - PUT
                  AllowedOrigins:
                    - "*"
                  MaxAge: 3600
            WebsiteConfiguration:
              IndexDocument: index.html
              ErrorDocument: 404.html
            Tags:
              - Key: Company
        WebsiteBucketPolicy:
          Type: AWS::S3::BucketPolicy
          Properties:
            Bucket: !Ref 'WebsiteBucket'
            PolicyDocument:
              Version: '2012-10-17'
              Statement:
                - Action:
                    - s3:GetObject
                  Effect: Allow
                  Resource: !Join
                    - ''
                    - - 'arn:aws:s3:::'
                      - !Ref 'WebsiteBucket'
                      - /*
                  Principal:
                    CanonicalUser: !GetAtt OriginAccessIdentity.S3CanonicalUserId
However, now I need to reference the bucket's domain name (!GetAtt WebsiteBucket.DomainName) in CloudFront, which is being deployed in us-east-1. It seems that I can't use the output of the StackSet since the resources are in different regions.
Do you guys have any suggestions?
It seems that I can't use the output of the StackSet since the resources are in different regions.
That's correct. You can't reference outputs across regions or accounts. CloudFormation (CFN) is region-specific. The easiest way is to deploy your resources in us-east-1 and then pass their outputs as parameters to the second stack in a different region. You can do it manually, or automatically using the AWS CLI or SDK from your local workstation or an EC2 instance.
But if you want to keep everything within CFN, you would have to develop a custom resource for the second stack. The resource would be in the form of a Lambda function which would use the AWS SDK to get the outputs from us-east-1 and pass them to your stack in a different region.
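As a sketch of the parameter-passing approach (the parameter name and the origin wiring below are illustrative, not taken from the original templates), the us-east-1 stack can simply accept the bucket's domain name as a parameter and use it as a CloudFront origin:

Parameters:
  WebsiteBucketDomainName:
    Type: String
    Description: Regional domain name of the bucket deployed in eu-west-1

Resources:
  Distribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        DefaultCacheBehavior:
          TargetOriginId: WebsiteBucket
          ViewerProtocolPolicy: redirect-to-https
          ForwardedValues:
            QueryString: false
        Origins:
          - Id: WebsiteBucket
            DomainName: !Ref WebsiteBucketDomainName
            S3OriginConfig:
              OriginAccessIdentity: ''

The parameter value would come from the eu-west-1 stack's output, retrieved manually or with aws cloudformation describe-stacks --region eu-west-1 in a deployment script, as described above.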

Enable object logging on s3 bucket via cloudformation

In AWS S3, you have the ability to visit the console and add 'Object-level logging' to a bucket. You create or select a pre-existing trail and select read and write log types.
Now I am creating buckets via YAML CloudFormation and want to add a pre-existing trail (or create a new one) to these too. How do I do that? I can't find any examples.
Object logging for S3 buckets with CloudTrail is done by defining so-called event selectors for data events in CloudTrail. That is available through CloudFormation as well. The following CloudFormation template shows how it's done. The important part is in the lower half (the upper half just sets up an S3 bucket that CloudTrail can log to):
AWSTemplateFormatVersion: "2010-09-09"
Resources:
s3BucketForTrailData:
Type: "AWS::S3::Bucket"
trailBucketPolicy:
Type: "AWS::S3::BucketPolicy"
Properties:
Bucket: !Ref s3BucketForTrailData
PolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service: "cloudtrail.amazonaws.com"
Action: "s3:GetBucketAcl"
Resource: !Sub "arn:aws:s3:::${s3BucketForTrailData}"
- Effect: Allow
Principal:
Service: "cloudtrail.amazonaws.com"
Action: "s3:PutObject"
Resource: !Sub "arn:aws:s3:::${s3BucketForTrailData}/AWSLogs/${AWS::AccountId}/*"
Condition:
StringEquals:
"s3:x-amz-acl": "bucket-owner-full-control"
s3BucketToBeLogged:
Type: "AWS::S3::Bucket"
cloudTrailTrail:
Type: "AWS::CloudTrail::Trail"
DependsOn:
- trailBucketPolicy
Properties:
IsLogging: true
S3BucketName: !Ref s3BucketForTrailData
EventSelectors:
- DataResources:
- Type: "AWS::S3::Object"
Values:
- "arn:aws:s3:::" # log data events for all S3 buckets
- !Sub "${s3BucketToBeLogged.Arn}/" # log data events for the S3 bucket defined above
IncludeManagementEvents: true
ReadWriteType: All
For more details check out the CloudFormation documentation for CloudTrail event selectors.
Much the same, but this is how I have done it:
cloudtrail:
  Type: AWS::CloudTrail::Trail
  Properties:
    EnableLogFileValidation: Yes
    EventSelectors:
      - DataResources:
          - Type: AWS::S3::Object
            Values:
              - arn:aws:s3:::s3-event-step-bucket/
        IncludeManagementEvents: Yes
        ReadWriteType: All
    IncludeGlobalServiceEvents: Yes
    IsLogging: Yes
    IsMultiRegionTrail: Yes
    S3BucketName: s3-event-step-bucket-storage
    TrailName: xyz

give public read and view access to s3 bucket objects using cloudformation template

I am writing an AWS CloudFormation template to receive a file inside an S3 bucket from Kinesis Firehose. I have given public read access to the bucket (the bucket is public), but when I access the file inside the bucket using the object URL, I get "The XML file does not appear to have any style associated with it" and it says access denied. However, the object (a JSON file) is downloadable.
I have given full access to the S3 bucket:
Resources:
  # Create S3 bucket
  MyS3Bucket:
    Type: AWS::S3::Bucket
    Properties:
      BucketName: health-app-buckett
      AccessControl: PublicRead
  # Create role
  S3BucketRole:
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - s3.amazonaws.com
            Action:
              - 'sts:AssumeRole'
  # Create policy for bucket
  S3BucketPolicies:
    Type: 'AWS::IAM::Policy'
    Properties:
      PolicyName: S3BucketPolicy
      PolicyDocument:
        Statement:
          - Sid: PublicReadForGetBucketObjects
            Effect: Allow
            Action: 's3:GetObject'
            Resource: !Join
              - ''
              - - 'arn:aws:s3:::'
                - !Ref MyS3Bucket
                - /*
      Roles:
        - !Ref S3BucketRole
I want to be able to view the file using the object URL.
You need to add a PublicAccessBlockConfiguration to your bucket:
MyS3Bucket:
  Type: AWS::S3::Bucket
  Properties:
    BucketName: health-app-buckett
    AccessControl: PublicRead
    PublicAccessBlockConfiguration:
      BlockPublicAcls: false
      BlockPublicPolicy: false
      IgnorePublicAcls: false
      RestrictPublicBuckets: false
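One caveat to hedge on: AccessControl: PublicRead only sets the bucket ACL, so objects written by another service (such as Firehose) may still not be readable anonymously. If the object URL must work without credentials, the usual mechanism is a bucket policy, sketched below reusing the MyS3Bucket name from the question; it works as long as BlockPublicPolicy and RestrictPublicBuckets stay false:

MyS3BucketPolicy:
  Type: AWS::S3::BucketPolicy
  Properties:
    Bucket: !Ref MyS3Bucket
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Sid: PublicReadForGetBucketObjects
          Effect: Allow
          Principal: '*'
          Action: 's3:GetObject'
          Resource: !Sub 'arn:aws:s3:::${MyS3Bucket}/*'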