Amplify publish causes AccessDenied error - amazon-web-services

I deploy a simple web app to S3 via amplify publish. Hosting has CloudFront enabled (I selected the PROD environment in Amplify while setting up hosting) and I'm working in the eu-central-1 region. But whenever I try to access the CloudFront URL, I receive an AccessDenied error.
I followed a tutorial at https://medium.com/quasar-framework/creating-a-quasar-framework-application-with-aws-amplify-services-part-1-4-9a795f38e16d and the only thing I did differently was the region (the tutorial uses us-east-1 while I use eu-central-1).
The configuration of S3 and CloudFront was done by Amplify, so it should be working in theory:
Cloudfront:
Origin Domain Name or Path: quasar-demo-hosting-bucket-dev.s3-eu-central-1.amazonaws.com (originally it was without the eu-central-1, but I added it manually after it didn't work).
Origin ID: hostingS3Bucket
Origin Type: S3 Origin
S3 Bucket Policy:
{
  "Version": "2012-10-17",
  "Id": "MyPolicy",
  "Statement": [
    {
      "Sid": "APIReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity ********"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::quasar-demo-hosting-bucket-dev/*"
    }
  ]
}
Research showed me that CloudFront can have temporary trouble accessing S3 buckets in other regions. But I manually added the region to the origin in CloudFront AND I have waited for 24 hours. I still get the "access denied".
I suspect this has something to do with the S3 bucket not being in the default us-east-1 region and Amplify not setting up CloudFront correctly in that case.
How can I get Amplify to set up the S3 bucket and CloudFront correctly so that I can access my website through the CloudFront URL?

For those for whom the first solution does not work, also make sure that javascript.config.DistributionDir in your project-config.json file is configured correctly. That can also cause the AccessDenied error (as I just learned the hard way).
Amplify expects your app entrypoint (index.html) to be at the first level within the directory you have configured. So if you accept the Amplify default config (dist) but are using a project that puts the built files at a deeper level in the hierarchy (dist/<project name> in the case of Angular 8), this manifests as a 403 AccessDenied error after publishing. This is true of both the amplify and s3 hosting options.
docs: https://docs.aws.amazon.com/amplify/latest/userguide/manual-deploys.html (see the end)
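For reference, the relevant section of amplify/.config/project-config.json looks roughly like this for an Angular project (the project name and build command here are illustrative placeholders, not taken from the question):

{
  "projectName": "my-app",
  "frontend": "javascript",
  "javascript": {
    "framework": "angular",
    "config": {
      "SourceDir": "src",
      "DistributionDir": "dist/my-app",
      "BuildCommand": "npm run-script build",
      "StartCommand": "ng serve"
    }
  }
}

(The exact keys can vary with the Amplify CLI version; the important part is that DistributionDir points at the folder that directly contains index.html.)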

Thanks for the additional information.
Your S3 bucket policy looks OK.
Regarding Origin Domain Name or Path: it is always the S3 bucket that appears in the drop-down, so there is no need to add the region to it.
However, there is one setting missing in your CloudFront origin: you need to set Restrict Bucket Access to Yes.
As per the AWS documentation:
If you want to require that users always access your Amazon S3 content using CloudFront URLs, not Amazon S3 URLs, click Yes. This is useful when you are using signed URLs or signed cookies to restrict access to your content. In the Help, see "Serving Private Content through CloudFront".
Then create a new identity or select an existing identity, and click the Create button to save the origin.
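If you prefer the CLI over the console for this step, an origin access identity of the kind referenced in the bucket policy above can be created like so (the caller reference and comment are arbitrary placeholders):

aws cloudfront create-cloud-front-origin-access-identity \
  --cloud-front-origin-access-identity-config 'CallerReference=hosting-oai-1,Comment=OAI for hosting bucket'

The identity ID it returns is what shows up in the bucket policy's Principal above.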

While the answer by @raj-paliwal helped me tremendously in solving my original problem, Amplify has since addressed the problem with a new option.
If you run amplify add hosting (or amplify update hosting for an existing site), Amplify gives you the option of Hosting with Amplify Console.
Choosing this will also create a hosting environment with S3 and CloudFront, but Amplify will manage everything for you. With this option I had no problems at all. It seems that this option fixes the bug I encountered.
If you want to upgrade an existing site from manual CloudFront and S3 hosting to Hosting with Amplify Console, you have to call amplify update hosting and select the new option.
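For reference, the hosting prompt looks roughly like this (the exact wording differs between Amplify CLI versions):

$ amplify add hosting
? Select the plugin module to execute
> Hosting with Amplify Console (Managed hosting with custom domains, Continuous deployment)
  Amazon CloudFront and S3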

SOLVED: add this statement to the bucket policy (note that the Principal has to be "*" and the Resource needs the /* suffix so it covers the objects rather than the bucket itself):
{
  "Sid": "Allow-Public-Access-To-Bucket",
  "Effect": "Allow",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": [
    "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
  ]
}
https://aws.amazon.com/premiumsupport/knowledge-center/s3-website-cloudfront-error-403/

Related

React, S3, and CloudFront Versioning for Deployment

First problem:
I have a static webpage hosted on S3, a CloudFront distribution pointing to this S3 bucket, and an A record on my domain pointing to this CloudFront distro. I also have some API Gateway and Lambda and DynamoDB stuff going on.
This webpage is a React app following the create-react-app template. As such, when I yarn build, all of the js and css fragments are cache-busted nicely with these random main.d74fc389.chunk.js names. However, importantly, the index.html (and other static files) are not.
When I aws s3 sync build/ s3://xxxx, everything gets uploaded nicely, but the CloudFront root is still pointing at the old cached index.html!
What can I do about this so that my automatic deployment script (basically just yarn build && aws s3 sync build/ s3://xxxxxx) works properly?
I am pointing my domain to CloudFront rather than to the straight S3 website because I want a TLS certificate. I have therefore denied access in my policies to the S3 bucket to anyone except the CloudFront OAI.
Second problem (update: solved by setting a default root object in CloudFront):
For some reason I keep getting `access denied` errors on my `https://cloudfront.xxxxxx.xxx` (and therefore on my `https://mydomain.xxx`), but accessing `https://mydomain.xxx/index.html` or any other item (including the newly uploaded items, as evidenced by the updated javascript!) has absolutely no issue. Here is my S3 policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1631694343564",
  "Statement": [
    {
      "Sid": "1",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::xxxxxxx/*"
    }
  ]
}
This was literally autogenerated by CloudFront, so I have no idea how it could be incorrect.
I do have my bucket set to serve a static website, but I have denied s3:GetObject access to the general public, so that URL serves nothing. The origin I have set up for my CloudFront distribution is the S3 REST API endpoint (i.e. xxxxxx.s3.us-west-1.amazonaws.xxx) rather than the bucket's website URL (http://xxxxxxx.s3-website-us-west-1.amazonaws.xxx/).
The .com in URLs was replaced with .xxx because of StackOverflow rules
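Since the second problem turned out to be the Default Root Object, a quick way to verify that setting on a distribution with the CLI (the distribution ID is a placeholder) is:

aws cloudfront get-distribution-config --id EXXXXXXXXXXXXX --query "DistributionConfig.DefaultRootObject"

For a setup like this one it should print "index.html".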
Wait for the TTL or invalidate your cache.
aws cloudfront create-invalidation --distribution-id E2FXXXXXX4N0MS --paths "/*"
This may not be suitable if you are doing lots of deployments as there are some limits and costs.
AWS Documentation
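Putting the question's own commands together, a deploy script along these lines (bucket name and distribution ID are placeholders) rebuilds, syncs, and then invalidates so the cached index.html gets refreshed:

#!/usr/bin/env bash
set -euo pipefail

yarn build
# --delete removes objects that no longer exist in the local build output
aws s3 sync build/ s3://xxxxxx --delete
# invalidate everything; for frequent deploys narrower paths limit the cost
aws cloudfront create-invalidation --distribution-id E2FXXXXXX4N0MS --paths "/*"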

AWS S3+Cloudfront static website subdirectories not working

I'm trying to set up a static website on S3 with a custom domain, using CloudFront to handle HTTPS.
The thing is that the root path works properly but not the child paths.
Apparently, it's all about the default root object which I have configured as index.html in both places.
example.com -> example.com/index.html - Works fine
example.com/about/ -> example.com/about/index.html - Fails with a NoSuchKey error
The funny thing is that if I open read access to the S3 bucket and use the S3 URL, it works completely fine.
There is an AWS documentation page where they talk about this: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/DefaultRootObject.html, but they don't offer a solution, or at least I haven't been able to find one:
However, if you define a default root object, an end-user request for a subdirectory of your distribution does not return the default root object. For example, suppose index.html is your default root object and that CloudFront receives an end-user request for the install directory under your CloudFront distribution:
http://d111111abcdef8.cloudfront.net/install/
CloudFront does not return the default root object even if a copy of index.html appears in the install directory.
If you configure your distribution to allow all of the HTTP methods that CloudFront supports, the default root object applies to all methods. For example, if your default root object is index.php and you write your application to submit a POST request to the root of your domain (http://example.com), CloudFront sends the request to http://example.com/index.php.
The behavior of CloudFront default root objects is different from the behavior of Amazon S3 index documents. When you configure an Amazon S3 bucket as a website and specify the index document, Amazon S3 returns the index document even if a user requests a subdirectory in the bucket. (A copy of the index document must appear in every subdirectory.) For more information about configuring Amazon S3 buckets as websites and about index documents, see the Hosting Websites on Amazon S3 chapter in the Amazon Simple Storage Service Developer Guide.
S3 Bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCloudFrontAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXXXXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::example.com/*"
    }
  ]
}
CloudFront setup: (screenshot of the distribution settings omitted)
Thank you
This got fixed. Just to sum up what I did, there are two solutions:
1. Add the S3 website URL as a custom origin in CloudFront. The trade-off is that this forces us to open the S3 bucket to anonymous traffic.
2. Set up a Lambda@Edge function that rewrites the requests (a sketch follows after the links below). The trade-off is that we also pay for the Lambda requests.
So everyone has to decide which option fits them better; in my case the traffic is expected to be super low, so I chose the second option.
I leave some useful links in case anybody else faces the same problem:
Useful Reddit thread here
AWS Lambda#Edge+CloudFront explained by AWS here
Fix to Lambda error I faced here
All setup process explained here
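The Lambda@Edge option boils down to an origin-request function that appends index.html to directory-style URIs before CloudFront forwards the request to the S3 REST origin. A minimal sketch in Python (my own summary of the approach, attached as an origin-request trigger; the linked articles may implement it slightly differently):

# CloudFront Lambda@Edge origin-request handler:
# rewrites "/about/" and "/about" to "/about/index.html"
def handler(event, context):
    request = event["Records"][0]["cf"]["request"]
    uri = request["uri"]

    if uri.endswith("/"):
        # Directory-style request: serve the index document inside it
        request["uri"] = uri + "index.html"
    elif "." not in uri.split("/")[-1]:
        # Extensionless path: treat it as a directory
        request["uri"] = uri + "/index.html"

    # Returning the (possibly rewritten) request forwards it to the origin
    return request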

Getting 403 (Forbidden) error when accessing static site from custom domain

I'm setting up a static site on S3 and CloudFront. I've set up SSL, etc. on CloudFront and I can access the site using the *.cloudfront.net URL. However, when accessing from the custom domain, I get a 403 error. Does anyone know why? The bucket policy is as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "2",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.mydomain.com/*"
    }
  ]
}
This should permit access from the custom domain mydomain.com, right?
For the sake of testing, I've tried setting "Principal": "*", but it still gives 403.
Any help appreciated.
The issue looks like it is caused by the objects not being publicly available for read.
By default, S3 buckets are set to "block all public access", so you need to check whether that is disabled (a CLI check is sketched after the upload command below).
You can then make your objects publicly available in two ways:
1. Bucket-level restriction via a bucket policy
2. Object-level restriction (in case you have a use case that requires granular control)
Lastly, if you have scripts that upload this content, you can also have those scripts set the ACL on upload:
aws s3 cp /path/to/file s3://sample_bucket --acl public-read
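For the "block all public access" check mentioned above, a minimal sketch with the AWS CLI (the bucket name is a placeholder; only clear this if you really intend the objects to be public):

# show the bucket's current public access block configuration (errors if none is set)
aws s3api get-public-access-block --bucket sample_bucket
# remove the bucket-level block entirely
aws s3api delete-public-access-block --bucket sample_bucket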
I've fixed it now. I had mistakenly left 'Alternate Domain Names' blank.
I know this is an older question, but I figured I'd add on for future searchers: make sure that (as of mid-2021) your CloudFront origins are configured to hit example.com.s3-website-us-east-1.amazonaws.com instead of example.com.s3.amazonaws.com once you set your S3 bucket up for static website hosting. I got tripped up because (again, as of mid-2021) the CloudFront UI suggests the latter, incorrect S3 URLs as drop-down autocompletes, and they're close enough that you might not spot the subtle difference if you're going from memory instead of following the docs.

AWS Amplify CLI creates inaccessible bucket for hosting

I have got a strange problem with the Amplify CLI. When adding hosting to my angular app through
amplify hosting add
and subsequently calling
amplify publish
the link provided at the end of the process links to a webpage that just shows an XML document telling me the access was denied. What is happening here? It seems to me like the hosting bucket has a wrong policy attached, but why would the amplify CLI create a private bucket?
Can someone shed some light here?
Here is the bucket policy created by the CLI:
{
  "Version": "2012-10-17",
  "Id": "MyPolicy",
  "Statement": [
    {
      "Sid": "APIReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity xxx"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::xxx/*"
    }
  ]
}
When using the amplify init command, be very careful when entering the path to the project build output.
For Angular projects the default value is dist/{project-name}, so when initializing, pass this path correctly, like so:
Distribution Directory Path: dist/{project-name}
If the path is not passed correctly, you get the access denied message: everything will appear to be working, but CloudFront cannot find the files.
Finally, if the error still continues to happen even then, here is a reference to other possible reasons.
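If the project was already initialized with the wrong path, it should not be necessary to start over; the Amplify CLI can re-prompt for the build settings, roughly like this (paths entered at the prompt are your own):

amplify configure project   # re-prompts for Distribution Directory Path, e.g. dist/{project-name}
amplify publish             # rebuild and redeploy with the corrected path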
I just encountered this as well. If I go into the Cloudfront distribution and update the origin to include an Origin Path pointing to the main subdirectory in the S3 bucket (folder is the same name as the Amplify project), the problem appears to be resolved.
I just came across this issue and it ended up being a safety setting in S3.
1. Go to S3 > Public access settings for this account > Untick those
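That console page corresponds to the account-level public access block. If you prefer the CLI, it can be inspected and cleared roughly like this (the account ID is a placeholder; clearing it affects every bucket in the account, so be careful):

aws s3control get-public-access-block --account-id 123456789012
aws s3control delete-public-access-block --account-id 123456789012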

How to point a subdomain at an S3 bucket?

Good morning,
I am using an Amazon S3 bucket as an image server, and I want to use a subdomain of my site to address this bucket.
E.g.: a picture currently lives at https://s3-sa-east-1.amazonaws.com/nomeBucket/pasta/imag.png, and I access it through that same link.
I would like it to be: imagens.mydomain.com.br/folder/imag.png
Is there any way I can do this, i.e. point a subdomain address at a bucket?
I've tried Amazon Route 53 with a CNAME pointing to https://s3-sa-east-1.amazonaws.com/nomeBucket/. I ran the test yesterday, but apparently it did not work.
Has anyone done something similar and/or can help me?
Note: I'm using nginx. Do I also need to configure it for the subdomain?
Thank you
You need to rename your bucket to match the custom domain name (e.g. imagens.mydomain.com.br) and set up that domain as a CNAME to <bucket-name>.s3.amazonaws.com (in your case, imagens.mydomain.com.br.s3.amazonaws.com).
The full instructions are available here:
http://docs.aws.amazon.com/AmazonS3/latest/dev/VirtualHosting.html
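Since the question mentions Route 53, here is a rough sketch of creating that CNAME with the AWS CLI (the hosted zone ID and TTL are placeholders):

aws route53 change-resource-record-sets --hosted-zone-id ZXXXXXXXXXXXXX --change-batch '{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "imagens.mydomain.com.br",
      "Type": "CNAME",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "imagens.mydomain.com.br.s3.amazonaws.com" }]
    }
  }]
}'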
Update 2019: AWS subdomain hosting in S3
As of today, the following steps worked to get a working subdomain for an S3-hosted static website:
Create a bucket with the subdomain name. In this example, www.subtest.mysite.com
Note: On the bucket's 'Permissions' tab, make sure that:
1. Block public access (bucket settings)
2. Access Control List
3. Bucket policy
are appropriately set so that the bucket is public. (Assuming you already did this for your root-domain bucket, those settings can be mirrored on this subdomain bucket.)
Upload the index.html file in the bucket
Create a CNAME record with your domain provider
I'm going to build on the other answers here for completeness.
I have moved my bucket to a subdomain so that the contents can be cached by Cloudflare.
Old S3 Bucket Name: autoauctions
New S3 Bucket Name: img.autoauctions.io
CNAME record: img.autoauctions.io.s3.amazonaws.com
Now you'll need to copy all of your objects since you cannot rename a bucket. Here's how to do that with AWS CLI:
pip install awscli
aws configure
Go to https://console.aws.amazon.com/iam/home and create a user or go to an existing user
Go to the user's Security credentials tab
Click Create access key. Copy the secret.
Here's a list of AWS regions.
Now you'll copy your old bucket contents to your new bucket.
aws s3 sync s3://autoauctions s3://img.autoauctions.io
I found this to be too slow for the 1TB of images I needed to copy, so I increased the number of concurrent connections and re-ran from an EC2 instance.
aws configure set default.s3.max_concurrent_requests 400
Sync it up!
Want to make folders within your bucket public? Create a bucket policy like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObjectCopart",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::img.autoauctions.io/copart/*"
    },
    {
      "Sid": "PublicReadGetObjectIaai",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::img.autoauctions.io/iaai/*"
    }
  ]
}
And now the image loads from img.autoauctions.io via Cloudflare's cache.
Hope this helps some people!