AWS S3: Unable to make access public - amazon-web-services

Goal: Publish static webpage using AWS S3
Issues: Access Denied and 403 Errors
I have been working on this issue for several hours now. After watching several tutorials (such as the one here: https://www.youtube.com/watch?v=4UafFZsCQLQ), deploying a static webpage on AWS S3 appeared to be quite easy. However, I am continually running into "Access Denied" errors when following tutorials, and 403 errors when trying to access my page.
403 Error when loading page
When viewing what should be my static webpage (http://watchyourinterest.live.s3-website.us-east-2.amazonaws.com), I receive a 403 error (see above image). This is after adding the following bucket policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::watchyourinterest.live/*"
}
]
}
I have also changed all of the Public Access Settings in the permissions to False (just to make sure nothing is restricting access, though I do plan to change them to what they should be later, once I have this working).
Public Access Settings
I also made sure to set the index document correctly to my index.html page, and set the error document correctly to my error.html file as well.
From the tutorials, it appears that this should make my page good to go. However, as I said before, I keep getting the 403 errors. After more digging, I tried setting Public Access to Everyone for all of the files, but each time I click the Everyone selection, I get an "Access Denied" error.
Trying to set file to public access
Access denied error when I attempt setting public access...
Similarly, the same happens when I click on files individually and take actions on them in a different way, as is seen below:
Access denied again when trying to make public
On the main page that lists all of my buckets, I am also getting this odd "Access" state of my bucket, when I want it to be public instead of this:
"Access" state of bucket, I WANT THIS TO BE PUBLIC
Any help would be greatly appreciated!!

If you have already allowed public access, then under the Permissions tab for your bucket, check the Object Ownership section. If it says "Bucket owner enforced, ACLs are disabled", click Edit. Set ACLs to enabled and Save Changes. After this, the "Make Public" option will be available for objects in the bucket.
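If you prefer the CLI, the same Object Ownership change can be sketched with `put-bucket-ownership-controls`. This is a sketch under assumptions: the bucket name is a placeholder, and `BucketOwnerPreferred` is one of the ownership values that re-enables ACLs.

```shell
# Switch Object Ownership away from "Bucket owner enforced" so ACLs work again.
# "your-bucket" is a placeholder -- substitute your actual bucket name.
aws s3api put-bucket-ownership-controls \
  --bucket your-bucket \
  --ownership-controls 'Rules=[{ObjectOwnership=BucketOwnerPreferred}]'

# Verify the setting took effect.
aws s3api get-bucket-ownership-controls --bucket your-bucket
```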

I think you may be missing the list operation; try:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicListObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::watchyourinterest.live"
},
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": ["s3:Get*","s3:List*"]
"Resource": ["arn:aws:s3:::watchyourinterest.live/*","arn:aws:s3:::watchyourinterest.live"]
}
]
}
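Once the JSON above is saved to a file, it can be attached from the CLI with `put-bucket-policy`. The file name here is an assumption:

```shell
# Attach the bucket policy above (policy.json is a placeholder path).
aws s3api put-bucket-policy \
  --bucket watchyourinterest.live \
  --policy file://policy.json
```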

The root cause of the issue is the public access settings at the bucket level. As per your screenshot, your bucket is only allowing authorized users to access whatever is inside it.
The public access settings block access even if you have granted access to your bucket objects through bucket policies.
To solve the issue, change the public access settings as follows:
Click on "Edit public access settings"; it should show the settings below.
Leave all the checkboxes unchecked and click Save. It will ask for confirmation; type "confirm" in the given box.
That should show the access for that bucket as public.
Now you should be able to access your website with given endpoint for static website hosting.
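The same console steps can be sketched with the CLI, setting all four Block Public Access flags to false. The bucket name is taken from the question:

```shell
# Disable all four Block Public Access flags on the bucket.
# Equivalent to unchecking every checkbox in the console dialog.
aws s3api put-public-access-block \
  --bucket watchyourinterest.live \
  --public-access-block-configuration \
    BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
```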

Similar to the answer explained by #Sangam Belrose, but these settings NEED TO BE APPLIED TO THE ENTIRE AWS ACCOUNT AS WELL. Once these were changed, I no longer ran into my issues. The images below illustrate this:
Select the "Public Access Settings for this Account" tab on the left-hand side of the AWS console. Note how the access for this account was originally set to "Only authorized users of this account".
ACCOUNT Public Access Settings
Make sure that the last checkbox, the one stating "Block public and cross-account access to buckets that have public policies" is UNCHECKED.
UNCHECK THIS BOX
Type "confirm" when the confirmation window appears.
Now, if this AND the bucket's public access settings are set correctly, the bucket will be public.
It is now public, woo!
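The account-wide change described above can be sketched via the `s3control` API, which needs the 12-digit account ID (the one below is a placeholder):

```shell
# Account-level Block Public Access lives under the s3control API.
# 123456789012 is a placeholder -- substitute your own account ID.
aws s3control put-public-access-block \
  --account-id 123456789012 \
  --public-access-block-configuration \
    BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false
```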

Related

S3 replacing default xml error with custom error not working

I feel stupid for having to ask this, but I cannot get Amazon S3's error document to work. What I want to do is show a custom error document when a user tries to access a file that doesn't exist. I followed the documentation at https://docs.aws.amazon.com/AmazonS3/latest/userguide/CustomErrorDocSupport.html but it simply doesn't work.
I can access files that exist, but when I enter https://mybucketurl/notexistingdoc.html it throws the usual access denied/key not found XML error.
As the documentation is pretty barebones and there isn't much to configure, I have no clue what is wrong. I even tried setting the permissions on my bucket to s3:* to make sure it wasn't a permission issue.
This is what I tried, and my error page works:
Created a bucket and changed its permissions to make it public.
Under Permissions -> Block public access, turned it off, and
attached a bucket policy to grant public read access to the bucket. When you grant public read access, anyone on the internet can access your bucket.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::your-bucket name/*"
}
]
}
Upload your index.html and error.html as objects.
My error.html file contains: <h1>there was an error</h1>
Go to Properties, and under Static website hosting, enable it, choose the hosting type as static, and enter the exact names index.html for the index document and error.html for the error document. You can then verify it by trying to access your bucket URL with anything that doesn't exist; it will render the error page.
For a detailed explanation, follow the docs.
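The website-hosting step can be sketched with the `aws s3 website` command (bucket name is a placeholder):

```shell
# Enable static website hosting with the index and error documents named above.
# "your-bucket-name" is a placeholder.
aws s3 website s3://your-bucket-name/ \
  --index-document index.html \
  --error-document error.html
```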

How to make S3 objects readable only from certain IP addresses?

I am trying to setup Cloudflare to cache images from S3. I want to be as restrictive (least permissive) as possible in doing this. I assume I need to accept requests from Cloudflare to read my S3 images. I want all other requests to be rejected.
I followed this guide: https://support.cloudflare.com/hc/en-us/articles/360037983412-Configuring-an-Amazon-Web-Services-static-site-to-use-Cloudflare
I did not enable static website hosting on my bucket, because it's not necessary for my case.
In my bucket permissions I turned off "Block all public access" and temporarily turned off "Block public access to buckets and objects granted through new public bucket or access point policies". I needed to do this in order to add a bucket policy.
From the link above, I then added a bucket policy that looks something like this:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadGetObject",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::www.example.com/*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
<CLOUDFLARE_IP_0>,
<CLOUDFLARE_IP_1>,
<CLOUDFLARE_IP_2>,
...
]
}
}
}
]
}
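Since Cloudflare's published IP list changes over time, a small script can render this policy from a list of ranges and keep the JSON valid. This is a sketch: the two CIDRs below are illustrative placeholders, and the authoritative list should come from Cloudflare's published IP page.

```python
import json

# Placeholder CIDRs -- in practice, fetch the current list from
# Cloudflare's published IP page and keep it up to date.
CLOUDFLARE_RANGES = ["173.245.48.0/20", "103.21.244.0/22"]

def build_ip_restricted_policy(bucket: str, ranges: list) -> str:
    """Render a bucket policy allowing s3:GetObject only from the given CIDRs."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": f"arn:aws:s3:::{bucket}/*",
            "Condition": {"IpAddress": {"aws:SourceIp": ranges}},
        }],
    }
    return json.dumps(policy, indent=2)

print(build_ip_restricted_policy("www.example.com", CLOUDFLARE_RANGES))
```

Save the output to a file and attach it with `aws s3api put-bucket-policy` as usual.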
At this point, a message appeared in the AWS console stating:
"This bucket has public access
You have provided public access to this bucket. We highly recommend that you never grant any kind of public access to your S3 bucket."
I then turned back on "Block public access to buckets and objects granted through new public bucket or access point policies" and turned off "Block public and cross-account access to buckets and objects through any public bucket or access point policies".
At this point, the S3 image request behavior seems to be working as intended, but I am not confident that I set everything up to be minimally permissive, especially given the warning message in the AWS console.
Given my description, did I properly set things up in this bucket to accept read requests only from Cloudflare and deny all other requests? I want to make sure that requests from any origin other than Cloudflare will be denied.
Sounds good! If it works from CloudFlare, but not from somewhere else, then it meets your requirements.
Those Block Public Access warnings are intentionally scary to make people think twice before opening their buckets to the world.
Your policy is nicely limited to only GetObject and only to a limited range of IP addresses.

Getting 403 (Forbidden) error when accessing static site from custom domain

I'm setting up a static site on S3 and CloudFront. I've set up SSL, etc., on CloudFront, and I can access the site using the *.cloudfront.net URL. However, when accessing from the custom domain, I get the 403 error. Does anyone know why? The bucket policy is as follows:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "2",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity XXXXXXXX"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::www.mydomain.com/*"
}
]
}
This should permit access from the custom domain mydomain.com, right?
For the sake of testing, I've tried setting "Principal": "*", but it still gives 403.
Any help appreciated.
The issue looks like it is caused by the object not being publicly available for read.
By default, S3 buckets are set to "Block all public access," so you need to check whether that is disabled.
Then you can configure your objects to be publicly available, which can be done in two ways:
Bucket-level restriction via policy
Object-level restriction (in case you have a use case that requires granular control)
Lastly, if you have scripts that upload the content, you can also use those scripts to set the ACL on upload:
aws s3 cp /path/to/file s3://sample_bucket --acl public-read
I've fixed it now. I've mistakenly left 'Alternate Domain Names' blank.
I know this is an older question, but I figured I'd add on for future searchers: make sure that (as of mid-2021) your CloudFront origins are configured to hit example.com.s3-website-us-east-1.amazonaws.com instead of example.com.s3.amazonaws.com once you set your S3 bucket up for static website hosting. I got tripped up because (again, as of mid-2021) the CloudFront UI suggests the latter, incorrect S3 URLs as dropdown autocompletes, and they're close enough that you might not spot the subtle difference if you're going from memory instead of following the docs.

Admin level user denied access to S3 objects

I'm really struggling to gain access to objects in an S3 bucket.
Things I've done:
IAM user has admin level privileges already
Granted AmazonS3FullAccess
Set the bucket policy to allow public GetObject...
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "PublicReadForGetBucketObjects",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::images.bucketname.com/*"
}
]
}
I don't want get to be public but I'm just trying to get this to work right now. I had set up a new IAM user for the application itself that will fetch objects as the principal but again, for some reason that didn't work.
I'm uploading the images with putObject in Node.
Am I missing something here because I'm getting full access denied to everything in S3. I can't open an image even logged in as the root user. I can't download an object. There is no viable way for me to view the images I'm uploading.
All of these buttons within the console either throw a blank error or route to the standard AWS access denied XML page.
On the other hand I can successfully, programmatically, upload files to the bucket using the root users credentials.
What am I missing here? Thanks for the help.
If you just want to access the bucket for an MVP or hobby project and you don't care about security, then I would recommend you switch off the default Block Public Access settings of the bucket here:
To reiterate, only do this in development, as it is not recommended for production.

aws s3 website hosting, setting permissions for private keys file

I have a static website hosted on an AWS S3 bucket. I am using a few different APIs, like Google, Trello, etc. I am not sure how to keep some of these keys private, as I set up my bucket to use PublicReadForGetBucketObjects, which makes the entire website public. I have looked into AssumeRoleWithWebIdentity and permissions to restrict access, but still cannot figure out how to make one of my files private. It seems to me that this is probably something easy, but I cannot find a way.
Here is what my bucket policy looks like
{
"Version": "2008-10-17",
"Statement": [
{
"Sid": "PublicReadForGetBucketObjects",
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::[my bucket]/*"
}
]
}
Thanks
The policy you have listed can apply permissions based on path. For example, setting the Resource to arn:aws:s3:::[my bucket]/public/* would only make the public sub-directory public (or more accurately, any path that starts with /public/).
Similarly, a policy can also define a path to specifically Deny, which will override the Allow (so you could make everything public but then specifically deny certain files and paths)
However, you mention that you would like to keep some files private, yet this is a static website, with no compute component. It would not be possible for only 'some' of your website to access the desired objects, since all the logic is taking place in your users' browsers rather than on your web server. Therefore, a file would either be public or private, but the private files could not be accessed as part of the static website. This might not be what you are trying to achieve.
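The path-based Allow/Deny combination described above might look like the sketch below (bucket name and prefixes are placeholders). One caution: an explicit Deny with Principal "*" applies to every principal, including your own IAM users, so scope it carefully.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadPublicPrefix",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::[my bucket]/public/*"
    },
    {
      "Sid": "DenyPrivatePrefix",
      "Effect": "Deny",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::[my bucket]/private/*"
    }
  ]
}
```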