AWS S3 Bucket policy editor access denied

I am logged in with the root account, trying to give public access to a bucket in line with the instructions for setting up a static S3 website.
However, I get an Access Denied message when applying the bucket policy.
There is no more detail in the message.

This could be due to recent changes in S3. To fix the issue you need to allow public access on the bucket; follow the steps below:
1. In the Permissions tab, click Block public access settings.
2. Click Edit to the right of these settings.
3. Make sure the option "Block public access to buckets and objects granted through new public bucket or access point policies" is deselected.
4. Click Save.
5. Go back to the bucket policy and try again.
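If you prefer the CLI, the same settings can be changed with aws s3api; a minimal sketch, assuming a placeholder bucket name (you may want to leave some of the four flags enabled, depending on your security posture):

# Relax S3 Block Public Access so a public bucket policy can be attached
aws s3api put-public-access-block \
  --bucket my-bucket \
  --public-access-block-configuration \
    BlockPublicAcls=false,IgnorePublicAcls=false,BlockPublicPolicy=false,RestrictPublicBuckets=false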

"Manage public bucket policies for this bucket" section need to be unchecked for to introduce "Allow" policies.
But be cautious, unchecking these might enable you to introduce a
policy but that policy is a public policy making your bucket public.
Having these checked - You won't be able to introduce "Allow" policies that for this bucket.
You can however introduce "Deny" policies, with these options checked.

The accepted answer works, even if related comments suggest it's not a good idea for security reasons. In fact, it is in line with the AWS instructions for static website hosting here:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/HostingWebsiteOnS3Setup.html
which answers the OP's question.
To summarise the steps (given in the linked page) to configure a static website on Amazon S3:
1. Create a bucket.
2. Enable static website hosting.
3. Unblock all public access.
4. Add the bucket policy that makes your content publicly available (the JSON policy you have in your post; see the example after this list).
5. Configure your index document (usually index.html).
6. Configure any error/redirect/no-auth documents (for React this is usually also index.html).
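For step 4, the policy in the AWS guide grants public read access to every object in the bucket. A minimal sketch of applying it from the CLI, assuming a placeholder bucket name of my-bucket:

# Attach the public-read policy from the AWS static-website guide
aws s3api put-bucket-policy --bucket my-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-bucket/*"
  }]
}'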

Changing the bucket's permissions as described in the answers above worked for me.

If your bucket policy grants public access, check if S3 Block Public Access is enabled on the bucket.
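To see whether that is the case, you can read the current configuration from the CLI; a small sketch, assuming a placeholder bucket name:

# Show the four Block Public Access flags currently set on the bucket
aws s3api get-public-access-block --bucket my-bucket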

Related

GCS limit bucket access to an existing service account

Usually there is a Compute Engine default service account that is created automatically by GCP. This account is used, for example, by VM agents to access different resources across GCP, and by default it has roles/editor permissions.
Suppose I want to create a GCS bucket that can only be accessed by this default service account and no one else. I've looked into ACLs and tried to add an ACL to the bucket with this default service account's email, but it didn't really work.
I realized that I can still access the bucket and the objects in it from other accounts that have, for example, storage bucket read and storage object read permissions, and I'm not sure what I did wrong (maybe some default ACLs are present?).
My questions are:
Is it possible to limit access to just that default account? In that case, who will not be able to access it?
What would be the best way to do it? (I would appreciate an example using the Storage API.)
There are still roles such as roles/storage.admin, and no matter what ACLs are put on the bucket, I could still access it if I had this role (or a higher role such as owner), right?
Thanks!
I recommend that you not use ACLs (Google recommends the same). It's better to switch the bucket to uniform bucket-level access (IAM policies only).
There are two downsides to ACLs:
Newly created files don't inherit the ACL, and you need to set it every time you create a new file.
It's difficult to know who does and doesn't have access with ACLs. IAM is better for auditing.
When you switch to uniform IAM access, the Owner, Viewer, and Editor roles no longer have access to buckets (roles/storage.admin isn't included in these primitive roles). That can solve all the unwanted access in one click. Otherwise, as John said, remove all the IAM permissions on the bucket and the project that grant access to the bucket, except for your service account.
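A minimal sketch of that switch with gsutil, assuming a placeholder bucket name and the default Compute Engine service account email:

# Turn on uniform bucket-level access; ACLs stop applying to this bucket
gsutil uniformbucketlevelaccess set on gs://my-bucket

# Grant object access on the bucket to the default service account only
gsutil iam ch serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com:roles/storage.objectAdmin gs://my-bucket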
You can control access to buckets and objects using Cloud IAM and ACLs.
For example, grant the service account WRITE access (R: READ, W: WRITE, O: OWNER) to the bucket using ACLs:
gsutil acl ch -u service-account@project.iam.gserviceaccount.com:W gs://my-bucket
To remove the service account's access to the bucket:
gsutil acl ch -d service-account@project.iam.gserviceaccount.com gs://my-bucket
If there are identities with roles such as roles/storage.admin at the project level, they will have access to all the GCS resources of the project. You might have to change those permissions to keep them from having access.
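If you do need to strip such a project-level grant, here is a sketch with gcloud, assuming placeholder project and member names:

# Remove a project-level Storage Admin binding so only bucket-level grants remain
gcloud projects remove-iam-policy-binding my-project \
  --member="user:someone@example.com" \
  --role="roles/storage.admin"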

Allow CloudFront to access S3 origin while also having S3 bucket Block all public access?

I am trying to set up the S3 buckets I want my CloudFront distribution to access.
From my client I use the AWS mobile SDK to upload to S3. When clients consume files from S3 I go through CloudFront, and things worked until I made this change:
When I created the distribution, I had CloudFront update the bucket policy to have the OAI included in the principal.
So I thought I could run GET calls through CloudFront, because CloudFront has the OAI set up and the S3 bucket reflects that.
However, I keep getting Access Denied.
What else do I need to do to lock down the bucket so that only CloudFront can read it, while my client app can still upload files to it using the SDK configured with the pool ID I have set up for it? Unless I leave "Block all public access" unchecked, I get Access Denied via CloudFront.
Unfortunately, according to the documentation, the following is stated:
Amazon S3 Block Public Access must be disabled on the bucket.
This is because S3 will otherwise ignore the bucket policy, due to the "Block public and cross-account access to buckets and objects through any public bucket or access point policies" setting.
Unless your bucket policy also allows anonymous GetObject, your objects will not be public by default.
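For reference, a bucket policy that grants read to the OAI (rather than to the public) looks like the following sketch; the bucket name and OAI ID are placeholders:

# Allow only the CloudFront OAI to read objects
aws s3api put-bucket-policy --bucket my-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1EXAMPLE"},
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-bucket/*"
  }]
}'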

Cannot see a public Amazon S3 bucket

I created a bucket and made it public. I created a user XYZ and I can't see the public bucket from that user's account. When a bucket is public, shouldn't all users be able to see it from their accounts, irrespective of the policies attached to them? Am I wrong?
When a bucket is public all the users should be seeing it from their accounts irrespective of policies attached to them
Actually, XYZ should be able to see the contents of the public bucket. The bucket itself would not be listed under the S3 buckets of XYZ.
If you made the bucket public by going to the Permissions tab of the bucket properties and changing the bucket's Access Control List to public from there, then check the following:
Remove your bucket policy, or at least check it; the best way is to remove it for troubleshooting in a dev environment.
When looking at the bucket, do you see an orange Public marker on the Permissions tab or the bucket overview?
Check that in the Access Control List, under Permissions, under Public access:
List objects = Yes
It would be safer to share the bucket explicitly with the other accounts (see here, and the sketch after this paragraph); the reason for this is that the entire world can see your bucket contents, and there is an army of people scanning for buckets just like yours and sucking down all the content. Plenty of large corporations have ended up in the newspaper because of this.
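A sketch of such an explicit cross-account grant; the account ID and bucket name are placeholders:

# Grant read access to one specific AWS account instead of the public
aws s3api put-bucket-policy --bucket mypublicbucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::mypublicbucket/*", "arn:aws:s3:::mypublicbucket"]
  }]
}'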
If you still can't get it to work get them to give you the output of the following command so we can debug:
aws s3 cp s3://mypublicbucket/mytestdocument.txt ./mytestdocument.txt
Also, if they try to get the object from their browser, show us the error displayed in the browser, and let us know whether it works when you try from your own browser.

Access Denied on AWS S3 bucket with no permission set up

In our S3 configuration we have a bucket that ended up without any permissions, which I reckon my colleague deleted.
Now we cannot read this bucket. I cannot add permissions to it using the Management Console by selecting a grantee and the permission, as it says "Sorry! You do not have permissions to view this bucket." When I click on "Add Bucket policy", it opens a dialog which says "Loading" and keeps loading forever.
I've tried to use aws s3 and aws s3api to grant permissions and/or delete the bucket, with no success.
I want to either delete this bucket or change its permissions.
EDIT: We also noticed that the bucket has no owner.
In the Amazon S3 Management Console:
1. Select the bucket (don't click on its name, just click the line it is on).
2. Go to the Properties pane on the right.
3. Expand the Permissions section.
4. If there is no line displayed, click Add more permissions, then select the Grantee (possibly your account name?) and tick some permission boxes.
These permissions are on the Bucket itself.
Permissions to list the contents of an Amazon S3 bucket are normally granted via Identity and Access Management (IAM) rather than a bucket policy. Traditionally, bucket policies are used to grant access to objects within a bucket.
From your description, it appears that there is no bucket policy in place, which is perfectly okay. All new buckets have no bucket policy anyway.
If the above fix doesn't work, you should check your permissions in IAM to see what you are permitted to do in Amazon S3:
Is there a policy granting you access to everything in S3 (s3:*), or at least a policy granting you access to this bucket?
Is there a policy that is explicitly denying access to this bucket? (Deny overrides Allow)
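One way to answer both questions from the CLI is the IAM policy simulator; a small sketch, assuming placeholder user and bucket ARNs:

# Ask IAM whether this principal is allowed to list the bucket
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/my-user \
  --action-names s3:ListBucket \
  --resource-arns arn:aws:s3:::my-bucket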

AccessDeniedException: 403 Forbidden on GCS using owner account

I have tried to access files in a bucket and I keep getting Access Denied on the files. I can see them in the GCS console but cannot access them through it, and I cannot access them through gsutil either, running the command below.
gsutil cp gs://my-bucket/folder-a/folder-b/mypdf.pdf files/
But all this returns is AccessDeniedException: 403 Forbidden
I can list all the files and such, but not actually access them. I've tried adding my user to the ACL, but that still had no effect. All the files were uploaded from a VM through a FUSE mount, which worked perfectly until it just lost all access.
I've checked these posts but none seem to have a solution thats helped me
Can't access resource as OWNER despite the fact I'm the owner
gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE
gsutil cors set command returns 403 AccessDeniedException
Although this is quite an old question, I had a similar issue recently. After trying many options suggested here without success, I carefully re-examined my script and discovered I was getting the error as a result of a mistake in my bucket address, gs://my-bucket. I fixed it and it worked perfectly!
This is quite possible. Owning a bucket grants FULL_CONTROL permission to that bucket, which includes the ability to list objects within that bucket. However, bucket permissions do not automatically imply any sort of object permissions, which means that if some other account is uploading objects and sets ACLs to be something like "private," the owner of the bucket won't have access to it (although the bucket owner can delete the object, even if they can't read it, as deleting objects is a bucket permission).
I'm not familiar with the default FUSE settings, but if I had to guess, you're using your project's system account to upload the objects, and they're set to private. That's fine. The easiest way to test that would be to run gsutil from a GCE host, where the default credentials will be the system account. If that works, you could use gsutil to switch the ACLs to something more permissive, like "project-private."
The command to do that would be:
gsutil acl set -R project-private gs://myBucketName/
tl;dr The Owner (basic) role has only a subset of the GCS permissions present in the Storage Admin (predefined) role—notably, Owners cannot access bucket metadata, list/read objects, etc. You would need to grant the Storage Admin (or another, less privileged) role to provide the needed permissions.
NOTE: This explanation applies to GCS buckets using uniform bucket-level access.
In my case, I had enabled uniform bucket-level access on an existing bucket, and found I could no longer list objects, despite being an Owner of its GCP project.
This seemed to contradict how GCP IAM permissions are inherited (organization → folder → project → resource / GCS bucket), since I expected to have Owner access at the bucket level as well.
But as it turns out, the Owner permissions were being inherited as expected; rather, they were insufficient for listing GCS objects.
The Storage Admin role has the following permissions which are not present in the Owner role: [1]
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.setIamPolicy
storage.buckets.update
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.list
storage.multipartUploads.listParts
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.getIamPolicy
storage.objects.list
storage.objects.setIamPolicy
storage.objects.update
This explained the seemingly strange behavior. And indeed, after granting the Storage Admin role (whereby my user was both Owner and Storage Admin), I was able to access the GCS bucket.
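A sketch of that grant with gcloud, assuming placeholder project and member names:

# Grant Storage Admin in addition to the inherited Owner role
gcloud projects add-iam-policy-binding my-project \
  --member="user:me@example.com" \
  --role="roles/storage.admin"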
Footnotes
[1] Though the documentation page Understanding roles omits the list of permissions for Owner (and other basic roles), it's possible to see this information in the GCP console:
1. Go to "IAM & Admin".
2. Go to "Roles".
3. Filter for "Owner".
4. Go to "Owner".
5. (See the list of permissions.)