I'm trying to set up a static website on S3 with a custom domain, using CloudFront to handle HTTPS.
The problem is that the root path works properly, but the child paths don't.
Apparently it's all about the default root object, which I have configured as index.html in both places.
example.com -> example.com/index.html - Works fine
example.com/about/ -> example.com/about/index.html - Fails with a NoSuchKey error
The funny thing is that if I open read access to the S3 bucket and use the S3 website URL, it works completely fine.
There is an AWS documentation page that talks about this: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DefaultRootObject.html, but it doesn't give a solution, or at least I haven't been able to find one there.
However, if you define a default root object, an end-user request for
a subdirectory of your distribution does not return the default root
object. For example, suppose index.html is your default root object
and that CloudFront receives an end-user request for the install
directory under your CloudFront distribution:
http://d111111abcdef8.cloudfront.net/install/
CloudFront does not return the default root object even if a copy of
index.html appears in the install directory.
If you configure your distribution to allow all of the HTTP methods
that CloudFront supports, the default root object applies to all
methods. For example, if your default root object is index.php and you
write your application to submit a POST request to the root of your
domain (http://example.com), CloudFront sends the request to
http://example.com/index.php.
The behavior of CloudFront default root objects is different from the
behavior of Amazon S3 index documents. When you configure an Amazon S3
bucket as a website and specify the index document, Amazon S3 returns
the index document even if a user requests a subdirectory in the
bucket. (A copy of the index document must appear in every
subdirectory.) For more information about configuring Amazon S3
buckets as websites and about index documents, see the Hosting
Websites on Amazon S3 chapter in the Amazon Simple Storage Service
Developer Guide.
S3 Bucket policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowCloudFrontAccess",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXX"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::example.com/*"
}
]
}
CloudFront setup:
Thank you
This got fixed. To sum up what I did, there are two solutions:
Adding the S3 website URL as a custom origin in CloudFront; the tradeoff is that this forces us to open the S3 bucket to anonymous traffic.
Setting up a Lambda@Edge function that rewrites the requests (see the sketch below); the tradeoff is that we'll also pay for the Lambda invocations.
So everyone has to decide which option fits them best; in my case the expected traffic is very low, so I chose the second option.
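For anyone curious what the second option looks like in practice, here is a minimal sketch of a Lambda@Edge request handler (Python runtime) that rewrites directory-style URIs to the corresponding index.html. This is an illustration of the pattern rather than my exact function; the handler name and trigger are just the usual defaults.

def lambda_handler(event, context):
    # CloudFront passes the request in the Lambda@Edge event structure.
    request = event["Records"][0]["cf"]["request"]
    uri = request["uri"]

    if uri.endswith("/"):
        # /about/  ->  /about/index.html
        request["uri"] = uri + "index.html"
    elif "." not in uri.split("/")[-1]:
        # Extensionless paths like /about are treated as directories too.
        request["uri"] = uri + "/index.html"

    # Returning the request tells CloudFront to continue to the origin.
    return request

Attach something like this to the distribution's origin-request (or viewer-request) trigger, and CloudFront will fetch /about/index.html from S3 when a user asks for /about/.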
I leave some useful links in case anybody else faces the same problem:
Useful Reddit thread here
AWS Lambda@Edge + CloudFront explained by AWS here
Fix to Lambda error I faced here
All setup process explained here
Use case:
I want to encrypt the data in transit from S3 as well. Encryption at rest is already present and handled by an S3 encryption key.
My Findings:
I found a few articles stating that you should modify the bucket policy with an "aws:SecureTransport" condition.
Here is a sample bucket policy:
"Version": "2012-10-17",
"Statement": [
"Effect":
"Principal":
"Action":
"Resource": "arn: aws: s3::: example-bucket/**,
"Condition":
I
"Bool":
{
"aws: SecureTransport":
"false"
My concern:
By doing this, how does the decryption at the receiver end happen? How does it need to be handled?
Assume an application is accessing the S3 data for some reports.
Could anyone help me with this?
Encryption in transit refers to using HTTPS protocol to upload your objects to S3. S3 supports both HTTP (unencrypted) and HTTPS (encrypted) endpoints. Just like with any other website that uses HTTPS, you don't have to do anything. All encryption/decryption is done automatically through HTTPS.
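For example, an application reading reports with boto3 needs no extra decryption logic, because the SDK talks to the HTTPS endpoint by default. A minimal sketch, assuming a hypothetical bucket and key name:

import boto3

# boto3 uses the HTTPS S3 endpoint by default, so TLS covers encryption in transit.
s3 = boto3.client("s3")

# Server-side encryption at rest is also transparent: GetObject returns the plaintext body.
obj = s3.get_object(Bucket="example-bucket", Key="reports/latest.csv")
data = obj["Body"].read()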
However, since S3 supports HTTP, it may be a security risk to upload objects over HTTP, as the objects travel the Internet in plain text. Thus, you can enforce HTTPS for your bucket by setting up the S3 bucket policy described here:
What S3 bucket policy should I use to comply with the AWS Config rule s3-bucket-ssl-requests-only?
Typically, you don't need to do anything on the bucket side. Encryption in transit depends on the settings of your, well, in-transit methods, so saying "an application is accessing the S3 data" isn't enough detail. For example, the CLI and the API encrypt the data in transit out of the box.
If you have a static website, then you can't set it up as HTTPS (that is, with encryption), only HTTP (this statement is not 100% correct, but consider it good enough for now). So, if you want to have a static website over HTTPS, you can front it with CloudFront and have it run over HTTPS.
First problem:
I have a static webpage hosted on S3, a CloudFront distribution pointing to this S3 bucket, and an A record on my domain pointing to this CloudFront distro. I also have some API Gateway and Lambda and DynamoDB stuff going on.
This webpage is a React app following the create-react-app template. As such, when I yarn build, all of the JS and CSS fragments are cache-busted nicely with random names like main.d74fc389.chunk.js. Importantly, though, index.html (and the other static files) are not.
When I aws s3 sync build/ s3://xxxx, everything gets uploaded nicely, but the CloudFront root is still pointing at the old cached index.html!
What can I do about this so that my automatic deployment script (basically just yarn build && aws s3 sync build/ s3://xxxxxx) works properly?
I am pointing my domain to CloudFront rather than to the straight S3 website because I want a TLS certificate. I have therefore denied access in my policies to the S3 bucket to anyone except the CloudFront OAI.
Second problem:
I've since solved this by setting a default root object in CloudFront.
For some reason I keep getting `access denied` errors on my `https://cloudfront.xxxxxx.xxx` (and therefore on my `https://mydomain.xxx`), but accessing `https://mydomain.xxx/index.html` or any other item (including the newly uploaded items, as evidenced by the updated JavaScript!) has absolutely no issue. Wtf? Here is my S3 policy:
{
"Version": "2012-10-17",
"Id": "Policy1631694343564",
"Statement": [
{
"Sid": "1",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXX"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::xxxxxxx/*"
}
]
}
This was literally autogenerated by CloudFront, so I have no idea how it could be incorrect.
I do have my bucket set to serve a static website, but I have denied s3:GetObject access to the general public, so that URL does nothing. The origin I have set up for my CloudFront is the S3 REST endpoint (i.e. xxxxxx.s3.us-west-1.amazonaws.xxx) rather than the bucket's website URL (http://xxxxxxx.s3-website-us-west-1.amazonaws.xxx/).
The .com in URLs was replaced with .xxx because of StackOverflow rules
Wait for the TTL or invalidate your cache.
aws cloudfront create-invalidation --distribution-id E2FXXXXXX4N0MS --paths "/*"
This may not be suitable if you are doing lots of deployments, as there are some limits and costs.
AWS Documentation
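If you want to fold the invalidation into the deployment script from the question, something along these lines should work (a sketch; the bucket name and distribution ID are placeholders):

#!/usr/bin/env bash
# Build, sync to S3, then invalidate the CloudFront cache so the non-hashed
# files (index.html etc.) are refreshed. Replace the placeholders with your values.
set -euo pipefail

yarn build
aws s3 sync build/ s3://YOUR_BUCKET_NAME --delete
aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths "/*"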
I've set up S3 + CloudFront to host a static website using a subdomain provided by Namecheap, but when navigating to the CloudFront URL or the domain URL, AWS responds with a 504 "The request could not be satisfied" error.
The steps I've completed are:
Set up the S3 bucket with Static website hosting: Enabled and the hosting type set to "Bucket hosting". The bucket is publicly accessible.
Set up a CloudFront distribution with its origin domain set to [bucket name].s3-website-ap-southeast-2.amazonaws.com, which has completed deployment.
Set [subdomain].[domain].io as an alternate domain name within the CloudFront distribution.
Assigned a custom SSL certificate for my custom domain [subdomain].[domain].io to the distribution; it has a status of "Issued" in AWS Certificate Manager.
Set up a CNAME within Namecheap so that [subdomain] points to [abc123].cloudfront.net., which has propagated (confirmed by whatsmydns.net).
I'm new to CloudFront + S3 hosting and trying to skill up, but not new to hosting in general (I usually use EC2 with either Apache or NGINX).
How can I resolve the 504 error?
My solution to this problem in Terraform is:
origin_protocol_policy = "http-only"
because the S3 static website endpoint is HTTP-only, so if you choose "https-only" it will give a 504 error.
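In context, inside the aws_cloudfront_distribution resource, the origin block would look roughly like this (a sketch; the bucket and region in the endpoint are placeholders):

origin {
  # S3 *website* endpoints only speak HTTP, so treat the bucket as a custom origin.
  domain_name = "example-bucket.s3-website-ap-southeast-2.amazonaws.com"
  origin_id   = "s3-website"

  custom_origin_config {
    http_port              = 80
    https_port             = 443
    origin_protocol_policy = "http-only"
    origin_ssl_protocols   = ["TLSv1.2"]
  }
}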
This error 504 "The request could not be satisfied" usually means that the CloudFront distribution can't connect to the configured origin. Are you able to access your site using the website URL directly, without CloudFront?
Have you, during the distribution creation, accidentally set CloudFront to be the only one allowed to access the bucket?
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-access-to-amazon-s3/
Everything makes me believe that your bucket is not accessible, even though you enabled the website hosting feature.
The issue turned out to be that I was using [bucket name].s3-website-ap-southeast-2.amazonaws.com, which the tutorials I'm following take special note to specify. Nowadays it seems you should use [bucket name].s3.ap-southeast-2.amazonaws.com instead.
Once I made the above update and the distribution was deployed everything started working.
Two things solved my issue:
Added the s3:GetObjectVersion action to the public permission for S3:
{
"Sid": "Allow-Public-Access-To-Bucket",
"Effect": "Allow",
"Principal": "*",
"Action": [
"s3:GetObject",
"s3:GetObjectVersion"
],
"Resource": "arn:aws:s3:::ops.freeway-camper-tests.com/*"
}
Ensure that NO custom headers are defined in the origin
I am trying to make sure my S3 bucket is secure. I need to allow some sort of public access because my website displays the images that are uploaded to the S3 bucket.
My Public Access settings look like this:
I then set up my Cross-origin resource sharing (CORS) to look like this:
[
{
"AllowedHeaders": [
"*"
],
"AllowedMethods": [
"GET",
"PUT",
"POST"
],
"AllowedOrigins": [
"https://example.com",
"https://www.example.com"
],
"ExposeHeaders": [],
"MaxAgeSeconds": 3000
}
]
And my S3 ACLs look like this:
After doing this, my images are still visible on my website hosted on AWS. My question here is: am I missing anything?
I don't think I fully understand the Cross-origin resource sharing (CORS) part of this. I assumed the AllowedOrigins tag would only allow the images to be viewed on my domain? So I took the address of one of my images and put it in my web browser, and it loaded. Is this correct behavior, or am I misunderstanding this?
Any more suggestions on how to secure my S3 bucket? I basically just want users of my website to be able to view my images, and uploads to happen only from my site. Thanks!
Updates
For a fuller picture, my bucket policy is:
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "AllowPublicRead",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::example.com.storage/*"
}
]
}
My ACLs in S3 are configured as:
You asked "how to secure my S3 bucket?"
Buckets in Amazon S3 are private by default, so they are automatically 'secure'. However, you want to make the objects (eg images) accessible to users on your website, so you need to open sufficient access to permit this (as you have successfully done!).
In fact, the only elements you actually needed were:
On "Block Public Access", allow Bucket Polices (Done!)
Create a Bucket Policy that grants GetObject to anyone (Done!)
You only need the CORS settings if you are experiencing a particular problem, and there is no need to change the Bucket ACLs from their default values.
The bucket policy is only allowing people to download objects, and only if they know the name of the object. They are not permitted to upload objects, delete objects or even list the objects in the bucket. That's pretty secure!
Your settings are fine for publicly-accessible content that you are happy for anyone to access. If you have any personal or confidential content (eg documents, or items requiring login) then you would need an alternate way of granting access only to appropriately authorized people. However, this doesn't seem to be a requirement in your situation.
Bottom line: You are correctly configured for granting public read-only access to anyone, without providing any additional access. Looks good!
Amazon CloudFront (CF) is often used for serving content from S3 buckets without needing the buckets to be public. This way your website would serve your images from CF rather than directly from the bucket, and CF would fetch and cache the images from the bucket privately.
The way it works is that you set up a special bucket policy on your bucket which allows a CF user, called an origin access identity (OAI), to access your bucket.
The use of CF and OAI to serve your images from your bucket not only keeps your bucket fully private, but also reduces load times as CF caches the images in its edge locations.
More details on this are in:
Restricting Access to Amazon S3 Content by Using an Origin Access Identity
Amazon S3 + Amazon CloudFront: A Match Made in the Cloud
How do I use CloudFront to serve HTTPS requests for my Amazon S3 bucket?
I deploy a simple web app to S3 via amplify publish. The hosting has CloudFront enabled (I selected the PROD environment in Amplify while setting up hosting) and I'm working in the eu-central-1 region. But whenever I try to access the CloudFront URL, I receive an AccessDenied error.
I followed a tutorial at https://medium.com/quasar-framework/creating-a-quasar-framework-application-with-aws-amplify-services-part-1-4-9a795f38e16d and the only thing I did differently was the region (the tutorial uses us-east-1 while I use eu-central-1).
The config of S3 and CloudFront was done by Amplify and so should be working in theory:
Cloudfront:
Origin Domain Name or Path: quasar-demo-hosting-bucket-dev.s3-eu-central-1.amazonaws.com (originally it was without the eu-central-1, but I added it manually after it didn't work).
Origin ID: hostingS3Bucket
Origin Type: S3 Origin
S3 Bucket Policy:
{
"Version": "2012-10-17",
"Id": "MyPolicy",
"Statement": [
{
"Sid": "APIReadForGetBucketObjects",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity ********"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::quasar-demo-hosting-bucket-dev/*"
}
]
}
Research showed me that CloudFront can have temporary trouble accessing S3 buckets in other regions, but I manually added the region to the origin in CloudFront AND I have waited for 24 hours. I still get the "access denied".
I suspect this has something to do with the S3 bucket not being in the default us-east-1 region and Amplify not setting up CloudFront correctly in that case.
How can I get Amplify to set up the S3 bucket and CloudFront correctly so that I can access my website through the CloudFront URL?
For those for whom the first solution does not work, also make sure that javascript.config.DistributionDir in your project-config.json file is configured correctly. That can also cause the AccessDenied error (as I just learned the hard way).
Amplify expects your app entry point (index.html) to be at the first level within the directory you have configured. So if you accept the Amplify default config (dist) and are using a project that puts the built files at a deeper level in the hierarchy (dist/<project name> in the case of Angular 8), then it manifests as a 403 AccessDenied error after publishing. This is true of both the Amplify and S3 hosting options.
docs: https://docs.aws.amazon.com/amplify/latest/userguide/manual-deploys.html (see the end)
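For reference, the relevant fragment of the project config (typically amplify/.config/project-config.json) looks roughly like this; DistributionDir is the key that matters here, and the other keys and values (framework, build commands, the Angular-style dist path) are illustrative assumptions that may differ in your project:

{
  "frontend": "javascript",
  "javascript": {
    "framework": "angular",
    "config": {
      "SourceDir": "src",
      "DistributionDir": "dist/my-app",
      "BuildCommand": "npm run-script build",
      "StartCommand": "ng serve"
    }
  }
}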
Thanks for the additional information.
Your S3 bucket policy looks OK.
Regarding the Origin Domain Name or Path: it is always the S3 bucket that appears in the drop-down, so there is no need to add the region to it.
However, there is one setting missing in your CloudFront origin:
You need to set Restrict Bucket Access to Yes.
As per the AWS documentation:
If you want to require that users always access your Amazon S3 content using CloudFront URLs, not Amazon S3 URLs, click Yes. This is useful when you are using signed URLs or signed cookies to restrict access to your content. In the Help, see "Serving Private Content through CloudFront".
Now create a new identity or select an existing identity.
Click the Create button to save the origin.
While the answer by @raj-paliwal helped me tremendously in solving my original problem, Amplify has since fixed the problem with a new option.
If you type amplify add hosting (or amplify update hosting for an existing site), Amplify gives you the option of Hosting with Amplify Console.
Choosing this will also create a hosting environment with S3 and CloudFront, but Amplify will manage everything for you. With this option I had no problems at all. It seems that this first option fixes the bug I encountered.
If you want to upgrade an existing site from manual CloudFront and S3 hosting to Hosting with Amplify Console, you have to run amplify update hosting and select the new option.
SOLVED: add this to the bucket policy:
{
  "Sid": "Allow-Public-Access-To-Bucket",
  "Effect": "Allow",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": [
    "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
  ]
}
https://aws.amazon.com/premiumsupport/knowledge-center/s3-website-cloudfront-error-403/