I've got a strange problem with the Amplify CLI. When adding hosting to my Angular app through
amplify hosting add
and subsequently calling
amplify publish
the link provided at the end of the process leads to a webpage that just shows an XML document telling me access was denied. What is happening here? It seems to me like the hosting bucket has the wrong policy attached, but why would the Amplify CLI create a private bucket?
Can someone shed some light here?
Here is the bucket policy created by the CLI:
{
  "Version": "2012-10-17",
  "Id": "MyPolicy",
  "Statement": [
    {
      "Sid": "APIReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity xxx"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::xxx/*"
    }
  ]
}
When using the amplify init command, be very careful when entering the path to the project build.
For Angular projects the build output goes to dist/{project-name} by default, so when initializing Amplify, pass the path like this:
Distribution Directory Path: dist/{project-name}
If the path is not passed correctly, an access denied message will appear: everything seems to have worked, but in fact CloudFront cannot find the files.
Finally, if the error still occurs after that, here is a reference to other possible reasons for this happening.
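If you originally entered the wrong path, you can fix it afterwards and republish; a rough sketch of the flow (prompt wording may differ between CLI versions):
amplify configure project
# ? Distribution Directory Path: dist/{project-name}
amplify publish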
I just encountered this as well. If I go into the CloudFront distribution and update the origin to include an Origin Path pointing to the main subdirectory in the S3 bucket (the folder has the same name as the Amplify project), the problem appears to be resolved.
I just came across this issue and it ended up being a safety setting in S3.
1. Go to S3 > Public access settings for this account > untick the "Block public access" options
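If you prefer the CLI, the account-level setting can be turned off like this (the account ID is a placeholder):
aws s3control put-public-access-block \
  --account-id 111122223333 \
  --public-access-block-configuration BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false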
Related
I'm setting up a static site on S3 and Cloudfront. I've setup SSL, etc. on Cloudfront and I can access the site using the *.cloudfront.net URL. However, when accessing from the custom domain, I get the 403 error. Does anyone know why? The bucket policy is as follows:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "2",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXX"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::www.mydomain.com/*"
    }
  ]
}
This should permit access from the custom domain mydomain.com, right?
For the sake of testing, I've tried setting "Principal": "*", but it still gives 403.
Any help appreciated.
The issue looks like it is caused by "Object is not publicly available for read".
By default, S3 buckets are set to "Block all public access," so first check whether that is disabled.
Then you can configure your objects to be publicly available, which can be done in two ways:
Bucket Level Restriction via Policy (see the sketch below)
Object Level Restriction (in case you have a use case that requires granular control)
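For the bucket-level route, a minimal public-read policy might look like this (the bucket name is a placeholder):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::sample_bucket/*"
    }
  ]
}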
Lastly, if you have scripts that upload this content, you can also have those scripts apply a public-read ACL on demand:
aws s3 cp /path/to/file s3://sample_bucket --acl public-read
I've fixed it now. I had mistakenly left 'Alternate Domain Names' blank.
I know this is an older question, but I figured I'd add on for future searchers: make sure that (as of mid-2021) your CloudFront origins are configured to hit example.com.s3-website-us-east-1.amazonaws.com instead of example.com.s3.amazonaws.com once you set your S3 bucket up for static website hosting. I got tripped up because (again, as of mid-2021) the CloudFront UI suggests the latter, incorrect S3 URLs as dropdown autocompletes, and they're close enough that you might not spot the subtle difference if you're going from memory instead of following the docs.
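Side by side, the difference is easy to miss:
# website endpoint — use this as the CloudFront origin once static website hosting is enabled
example.com.s3-website-us-east-1.amazonaws.com
# REST endpoint — the dropdown suggestion; it does not behave like a website endpoint
example.com.s3.amazonaws.com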
I deploy a simple web app to S3 via amplify publish. The hosting has CloudFront enabled (I selected the PROD environment in Amplify while setting up hosting) and I'm working in the eu-central-1 region. But whenever I try to access the CloudFront URL, I receive an AccessDenied error.
I followed a tutorial at https://medium.com/quasar-framework/creating-a-quasar-framework-application-with-aws-amplify-services-part-1-4-9a795f38e16d and the only thing I did differently was the region (the tutorial uses us-east-1 while I use eu-central-1).
The configuration of S3 and CloudFront was done by Amplify and so should work in theory:
Cloudfront:
Origin Domain Name or Path: quasar-demo-hosting-bucket-dev.s3-eu-central-1.amazonaws.com (originally it was without the eu-central-1, but I added it manually after it didn't work).
Origin ID: hostingS3Bucket
Origin Type: S3 Origin
S3 Bucket Policy:
{
  "Version": "2012-10-17",
  "Id": "MyPolicy",
  "Statement": [
    {
      "Sid": "APIReadForGetBucketObjects",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity ********"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::quasar-demo-hosting-bucket-dev/*"
    }
  ]
}
Research showed me that CloudFront can have temporary trouble accessing S3 buckets in other regions. But I manually added the region to the origin in CloudFront AND I have waited for 24 hours. I still get the "access denied".
I suspect this has something to do with the S3 bucket not being in the default us-east-1 region and Amplify not setting up CloudFront correctly in that case.
How can I get amplify to set the S3 bucket and Cloudfront up correctly so that I can access my website through the Cloudfront URL?
For those for whom the first solution does not work: also make sure that the javascript.config.DistributionDir in your project-config.json file is configured correctly. That can also cause the AccessDenied error (as I just learned the hard way).
Amplify expects your app's entry point (index.html) to be at the first level within the directory you have configured. So if you accept the Amplify default config (dist) and are using a project that puts the built files at a deeper level in the hierarchy (dist/<project name> in the case of Angular 8), then it manifests as a 403 AccessDenied error after publishing. This is true of both the amplify and s3 hosting options.
docs: https://docs.aws.amazon.com/amplify/latest/userguide/manual-deploys.html (see the end)
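For reference, the relevant section of amplify/.config/project-config.json looks roughly like this (the project name is hypothetical):
{
  "frontend": "javascript",
  "javascript": {
    "framework": "angular",
    "config": {
      "SourceDir": "src",
      "DistributionDir": "dist/my-app",
      "BuildCommand": "npm run-script build",
      "StartCommand": "ng serve"
    }
  }
}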
Thanks for the additional information.
Your S3 bucket policy looks OK.
Regarding Origin Domain Name or Path: it is always the S3 bucket that appears in the dropdown, so there is no need to add the region to it.
However, there is one setting missing in your CloudFront origin:
you need to set Restrict Bucket Access to Yes.
As per the AWS documentation:
If you want to require that users always access your Amazon S3 content using CloudFront URLs, not Amazon S3 URLs, click Yes. This is useful when you are using signed URLs or signed cookies to restrict access to your content. In the Help, see "Serving Private Content through CloudFront".
Now create a new identity or select an existing identity.
Click the Create button to save the origin.
While the answer by @raj-paliwal helped me tremendously in solving my original problem, Amplify has since fixed the problem with a new option.
If you type amplify add hosting (or amplify update hosting for an existing site), Amplify gives you the option of Hosting with Amplify Console.
Choosing this will also create a hosting environment with S3 and CloudFront, but Amplify will manage everything for you. With this option I had no problems at all. It seems that this first option fixes the bug I encountered.
If you want to upgrade an existing site from manual CloudFront and S3 hosting to a Hosting with Amplify Console, you have to call amplify update hosting and select the new option.
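A rough sketch of that upgrade flow (menu wording may differ between CLI versions):
amplify update hosting   # choose "Hosting with Amplify Console" when prompted
amplify publish          # redeploy through the managed pipeline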
SOLVED: add this statement to the bucket policy (the asterisks in the Principal and Resource matter):
{
  "Sid": "Allow-Public-Access-To-Bucket",
  "Effect": "Allow",
  "Principal": "*",
  "Action": "s3:GetObject",
  "Resource": [
    "arn:aws:s3:::DOC-EXAMPLE-BUCKET/*"
  ]
}
https://aws.amazon.com/premiumsupport/knowledge-center/s3-website-cloudfront-error-403/
I am trying to set up a static website in an S3 bucket using the CDK. However, when I deploy the stack, I receive the error API: s3:PutBucketPolicy Access Denied. The CLI user I am using has administrator permissions.
I have tried to manually create a bucket with the "Static website hosting" property configured, but when I add the following bucket policy, I receive an Access Denied error, even though I am the root user.
{
  "Id": "PolicyId",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Sid",
      "Action": "s3:*",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::BUCKET_NAME",
      "Principal": "*"
    }
  ]
}
Something similar to here.
I have deselected all the public access settings as suggested, but I still receive an access denied error.
I believe the problem when deploying the CDK code may be related to the problem when creating the bucket manually, but I don't know how to debug it.
This worked for me:
// CDK v2 import paths; for CDK v1 use @aws-cdk/aws-s3 and @aws-cdk/aws-s3-deployment
import { Bucket } from 'aws-cdk-lib/aws-s3';
import { BucketDeployment, Source } from 'aws-cdk-lib/aws-s3-deployment';

// Create the web bucket and give it public read access
this.webBucket = new Bucket(this, 'WebBucket', {
  websiteIndexDocument: 'index.html',
  publicReadAccess: true
});

// Deploy the frontend to the web bucket
new BucketDeployment(this, 'DeployFrontend', {
  sources: [Source.asset('../ui/dist')],
  destinationBucket: this.webBucket
});
Also, make sure the "Block public access (account settings)" is turned off in the S3 Console.
For folks struggling with this error using aws-cdk and an already existing bucket:
Check whether you are trying to modify the bucket policy while you have "blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL" set in the bucket properties.
You have to turn it off or remove that property if you want to modify the policy. After deploying (modifying) the policy you can set the blockPublicAccess property back again.
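A minimal sketch of the idea (CDK v2 paths; the construct ID and the relaxed block settings are illustrative):
import * as s3 from 'aws-cdk-lib/aws-s3';

// While the public-read policy is being deployed, the bucket must not block
// public policies; re-tighten blockPublicAccess in a later deploy if needed.
const webBucket = new s3.Bucket(this, 'WebBucket', {
  websiteIndexDocument: 'index.html',
  publicReadAccess: true,
  // blockPublicAccess: s3.BlockPublicAccess.BLOCK_ALL,  // would make PutBucketPolicy fail
  blockPublicAccess: new s3.BlockPublicAccess({
    blockPublicAcls: false,
    ignorePublicAcls: false,
    blockPublicPolicy: false,
    restrictPublicBuckets: false,
  }),
});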
So I've been working with this problem all day, and I can't seem to find the cause of this issue.
I have an action in SES that forwards all emails at a specific subdomain to a specific bucket. These messages can be downloaded fine and contain all the necessary information when accessed through the console, but they fail to be retrieved using getObject() in the Java SDK.
I can confirm that the SDK credentials work correctly, as I can download other files from the same bucket, even with the same key prefix through my code.
That proves my bucket policy is set up correctly. The entry dealing with the getObject permission looks like this:
{
  "Sid": "EmailsAccess",
  "Effect": "Allow",
  "Principal": "*",
  "Action": [
    "s3:DeleteObject",
    "s3:GetObject"
  ],
  "Resource": "arn:aws:s3:::foo-bucket-foo/foo-prefix-foo/*"
}
I'm sure that the root cause of the issue has to do with the owner, since that is defined as "aws-ses+publishing.us-east-1.prod" in each file generated by SES. Why is that causing my code to bring up 403s? Is there any way to change a file's owner, or is there a more elegant solution?
I found the issue: the credentials used in the SDK belonged to a different account than the bucket owner, and that account did not have permission to view those specific messages.
Following this guide would have solved the issue, but we decided to go another route and are trying to log in using the bucket owner's credentials.
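For anyone debugging the same thing, one quick way to confirm the owner mismatch is to inspect the object's ACL as the bucket owner (the bucket and prefix are the placeholders from the policy above; the key is hypothetical):
aws s3api get-object-acl --bucket foo-bucket-foo --key foo-prefix-foo/some-message-id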
I have a Tomcat instance that runs in Beanstalk, and in the Beanstalk configuration I pass in a config file URL as a parameter, like so:
-Dconfig.url=https://s3.amazonaws.com/my-bucket/my-file.txt
This file is in S3, but I have to set its permissions to 'Everyone': 'Open', which I do not like doing because it is unsafe, but I can't seem to find any other way of doing this. I've looked at the URL signing method, and this isn't a good solution as both the file and the Beanstalk app are updated frequently and I'd like to have all this automated; i.e., if the app breaks and restarts, it will not be able to read the file because the signed URL would have expired.
I've looked at the docs regarding roles but cannot seem to get this working. I've added a custom policy to the aws-elasticbeanstalk-ec2-role (shown below) and this isn't doing anything: my app still cannot access files in the bucket. Could someone please tell me how / whether this can be fixed?
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket-name/*"
    }
  ]
}
Is there another way I can allow the Beanstalk application to read files in an S3 bucket? Any help is appreciated.
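For what it's worth, an instance role only applies to requests signed with the role's credentials; a plain https://s3.amazonaws.com/... fetch is anonymous, so the policy above never comes into play for it. A minimal sketch of fetching the file with the role instead (the destination path is hypothetical):
# runs on the instance, signed with the aws-elasticbeanstalk-ec2-role credentials
aws s3 cp s3://my-bucket/my-file.txt /opt/app/config/my-file.txt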