We're using Google Cloud Storage as our CDN.
However, any visitor can list all files by browsing to: http://ourcdn.storage.googleapis.com/
How can we disable listing while keeping all the files under the bucket publicly readable by default?
We previously set the ACL using
gsutil defacl ch -g AllUsers:READ
In the GCP dashboard:
Go into your bucket.
Click the "Permissions" tab.
In the member list, find "allUsers" and change its role from Storage Object Viewer to Storage Legacy Object Reader.
Listing should then be disabled.
Update:
As @Devy commented, just check the note below:
Note: roles/storage.objectViewer includes permission to list the objects in the bucket. If you don't want to grant listing publicly, use roles/storage.legacyObjectReader.
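If you prefer the command line, a minimal sketch of the equivalent change with gsutil (the bucket name is a placeholder) would be:
```
# Drop the role that allows public listing, then grant read-only object access
gsutil iam ch -d allUsers:objectViewer gs://your-bucket
gsutil iam ch allUsers:legacyObjectReader gs://your-bucket
```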
Upload an empty index.html file to the root of your bucket. Open the bucket settings, click Edit website configuration, and set index.html as the Main Page.
This will prevent listing of the directory.
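If you prefer to script that step, a minimal sketch with gsutil could look like the following (index.html and the bucket name are placeholders for your own values):
```
# Set index.html as the main page suffix in the bucket's website configuration
gsutil web set -m index.html gs://your-bucket
```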
Your defacl looks good. The problem is most likely that, for some reason, AllUsers also has READ, WRITE, or FULL_CONTROL on the bucket itself. You can clear those with a command like this:
gsutil acl ch -d AllUsers gs://bucketname
Your command set the default object ACL on the bucket to READ, which means that objects will be accessible by anyone. To prevent users from listing the objects, you need to make sure AllUsers doesn't also have an ACL entry on the bucket itself.
gsutil acl ch -d AllUsers gs://yourbucket
should accomplish this. You may need to run a similar command for AllAuthenticatedUsers; just take a look at the bucket ACL with
gsutil acl get gs://yourbucket
and it should be clear.
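For example, a sketch that strips both public entities from the bucket ACL in one go (the bucket name is a placeholder) would be:
```
# Remove AllUsers and AllAuthenticatedUsers from the bucket ACL; the default
# object ACL, and therefore public object reads, stays untouched
gsutil acl ch -d AllUsers -d AllAuthenticatedUsers gs://yourbucket
```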
Related
I currently apply permissions to a bucket like this:
gsutil acl ch -u service@account:O gs://my-bucket/
gsutil acl ch -r -u service@account:O gs://my-bucket/*
Then when I add a file, the above permissions don't get applied and I have to reapply them.
Is there any way to apply the permissions to all new files added to the bucket? I want to share these files with 5 different projects.
Another way to do this is to share a user across multiple projects.
In the project where you created the bucket, create a service account that has only the right to access the bucket. Then share this account with the other 4 projects.
This way you won't have to reapply rights each time, and you can still access your data.
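As a rough sketch, granting that shared service account read access on the bucket could look like this (the account address and bucket name are placeholders):
```
# Grant the shared service account read access to the objects in the bucket
gsutil iam ch serviceAccount:shared-reader@my-project.iam.gserviceaccount.com:objectViewer gs://my-bucket
```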
I am trying to allow anonymous read access (or just access from my application's domain) for files in my bucket.
When trying to read the files I get
```
<Error>
<Code>AccessDenied</Code>
<Message>Access denied.</Message>
<Details>
Anonymous users does not have storage.objects.get access to object.
</Details>
</Error>
```
I also tried to add a domain with the object default permissions dialog in the Google Cloud console. That gives me the error "One of your permissions is invalid. Make sure that you enter an authorized id or email for the groups and users and a domain for the domains".
I have also looked into making the ACL for the bucket public-read. My only problem with this is that it removes my ownership over the bucket. I need to have that ownership since I want to allow uploading from a specific Google Access Id.
You can also do it from the console.
https://console.cloud.google.com/storage/
Choose edit the bucket permissions:
Input "allUsers" in Add Members option and "Storage Object Viewer" as the role.
Then go to "Select a role" and set "Storage" and "Storage Object Legacy" to "Storage Object View"
You can use gsutil to make objects publicly readable without removing your ownership. To make new objects created in the bucket publicly readable:
gsutil defacl ch -u AllUsers:R gs://yourbucket
If you have existing objects in the bucket that you want to make publicly readable, you can run:
gsutil acl ch -u AllUsers:R gs://yourbucket/**
Using IAM roles: to make the files readable and block listing:
gsutil iam ch allUsers:legacyObjectReader gs://bucket-name
To make the files readable and allow listing:
gsutil iam ch allUsers:objectViewer gs://bucket-name
Open the Cloud Storage browser in the Google Cloud Platform Console.
In the list of buckets, click on the name of the bucket that contains the object you want to make public, and navigate to the object if it's in a subdirectory.
Click the drop-down menu associated with the object that you want to make public.
The drop-down menu appears as three vertical dots to the far right of the object's row.
Select Edit permissions from the drop-down menu.
In the overlay that appears, click the + Add item button.
Add a permission for allUsers.
Select User for the Entity.
Enter allUsers for the Name.
Select Reader for the Access.
Click Save.
Once shared publicly, a link icon appears in the public access column. You can click on this icon to get the URL for the object.
Instructions on Making Data Public from the Google Cloud docs
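The command-line equivalent for a single object is roughly the following sketch (the object path is a placeholder):
```
# Grant AllUsers read access on one object only
gsutil acl ch -u AllUsers:R gs://your-bucket/path/to/object
```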
If you upload files from Firebase Functions, you'll need to call makePublic() on the file reference in order to make them accessible without passing a token.
If you want only specific content ("folder/content") inside a bucket to be publicly accessible, then you have to specify it in the command:
gsutil iam -r ch allUsers:legacyObjectReader gs://your-bucket/your-files/**
But this applies only to that specific content inside a bucket that is not otherwise public.
Apr 2022 update:
You can allow all users to read files in your bucket on Cloud Storage.
First, in Bucket details, click "PERMISSIONS", then "ADD".
Then, type "allUsers".
Then, select the role "Storage Legacy Object Reader" so that all users can read files.
Then, click "SAVE".
You will then be asked to confirm; click "ALLOW PUBLIC ACCESS".
Now all users can read the files in your bucket.
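If you'd rather script it, a sketch with the newer gcloud CLI (the bucket name is a placeholder) would be:
```
# Grant every user read access to objects without allowing bucket listing
gcloud storage buckets add-iam-policy-binding gs://your-bucket \
  --member=allUsers --role=roles/storage.legacyObjectReader
```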
I have two S3 buckets -
production
staging
I want to periodically refresh the staging bucket so it has all the latest production objects for testing, so I used the aws-cli as follows -
aws s3 sync s3://production s3://staging
So now both buckets have the exact same files.
However, for any given file/object, the production link works and the staging doesn't
e.g.
This works: https://s3-us-west-1.amazonaws.com/production/users/photos/000/001/001/medium/my_file.jpg
This doesn't: https://s3-us-west-1.amazonaws.com/staging/users/photos/000/001/001/medium/my_file.jpg
The staging bucket's objects are not public links, and are private by default.
Is there a way to correct this or avoid this with the aws-cli? I know I can change the bucket policy itself, but it was previously working with all the files that were there. So I'm wondering what it is about copying files over that changed their visibility.
Thanks!
You should be able to add the --acl flag:
aws s3 sync s3://production s3://staging --acl public-read
As mentioned in the docs, the private ACL is the default.
Just did some more research.
Frédéric's answer is correct, but just wanted to expand on that a bit more.
aws s3 sync isn't really a true "sync" by default. It just goes through each file in the source bucket and copies it into the target bucket, unless a target file with the same name already exists. I looked for a --force flag to force the overwrite, but apparently none exists.
It won't delete "extra" files in the target directory by default (i.e. a file that does not exist in the source directory). The --delete flag will allow you to do that.
It does not copy over permissions by default. It's true that --acl public-read will set the target permissions to publicly readable, but that has 2 problems: (1) it just blindly sets that for all files, which you may not want, and (2) it doesn't work when you have several files of varying permissions.
There's an issue about it here, and a PR that's open but still un-merged as of today.
So if you're trying to do a full blind refresh like me for testing purposes, the best option is to:
Completely empty the target staging bucket by right-clicking it in the console and clicking Empty.
Run the sync and blindly set everything to public-read (other canned ACLs are available; see the documentation here): aws s3 sync s3://production s3://staging --acl public-read
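Putting both steps together on the command line, a sketch (assuming the bucket names used above) would be:
```
# Empty the staging bucket, then copy everything across as publicly readable
aws s3 rm s3://staging --recursive
aws s3 sync s3://production s3://staging --acl public-read
```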
I have tried to access files in a bucket and I keep getting access denied on the files. I can see them in the GCS console but cannot access them through it, and I cannot access them through gsutil either, running the command below.
gsutil cp gs://my-bucket/folder-a/folder-b/mypdf.pdf files/
But all this returns is AccessDeniedException: 403 Forbidden
I can list all the files and such but not actually access them. I've tried adding my user to the ACL, but that still had no effect. All the files were uploaded from a VM through a FUSE mount, which worked perfectly, and then I just lost all access.
I've checked these posts, but none seem to have a solution that's helped me:
Can't access resource as OWNER despite the fact I'm the owner
gsutil copy returning "AccessDeniedException: 403 Insufficient Permission" from GCE
gsutil cors set command returns 403 AccessDeniedException
Although this is quite an old question, I had a similar issue recently. After trying many options suggested here without success, I carefully re-examined my script and discovered I was getting the error as a result of a mistake in my bucket address, gs://my-bucket. I fixed it and it worked perfectly!
This is quite possible. Owning a bucket grants FULL_CONTROL permission to that bucket, which includes the ability to list objects within that bucket. However, bucket permissions do not automatically imply any sort of object permissions, which means that if some other account is uploading objects and sets ACLs to be something like "private," the owner of the bucket won't have access to it (although the bucket owner can delete the object, even if they can't read it, as deleting objects is a bucket permission).
I'm not familiar with the default FUSE settings, but if I had to guess, you're using your project's system account to upload the objects, and they're set to private. That's fine. The easiest way to test that would be to run gsutil from a GCE host, where the default credentials will be the system account. If that works, you could use gsutil to switch the ACLs to something more permissive, like "project-private."
The command to do that would be:
gsutil acl set -R project-private gs://myBucketName/
tl;dr The Owner (basic) role has only a subset of the GCS permissions present in the Storage Admin (predefined) role—notably, Owners cannot access bucket metadata, list/read objects, etc. You would need to grant the Storage Admin (or another, less privileged) role to provide the needed permissions.
NOTE: This explanation applies to GCS buckets using uniform bucket-level access.
In my case, I had enabled uniform bucket-level access on an existing bucket, and found I could no longer list objects, despite being an Owner of its GCP project.
This seemed to contradict how GCP IAM permissions are inherited (organization → folder → project → resource / GCS bucket), since I expected to have Owner access at the bucket level as well.
But as it turns out, the Owner permissions were being inherited as expected; they were simply insufficient for listing GCS objects.
The Storage Admin role has the following permissions which are not present in the Owner role: [1]
storage.buckets.get
storage.buckets.getIamPolicy
storage.buckets.setIamPolicy
storage.buckets.update
storage.multipartUploads.abort
storage.multipartUploads.create
storage.multipartUploads.list
storage.multipartUploads.listParts
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.getIamPolicy
storage.objects.list
storage.objects.setIamPolicy
storage.objects.update
This explained the seemingly strange behavior. And indeed, after granting the Storage Admin role (whereby my user was both Owner and Storage Admin), I was able to access the GCS bucket.
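For reference, a minimal sketch of granting that role on the bucket (the user and bucket name are placeholders) could be:
```
# Grant the Storage Admin role on the bucket to a specific user
gsutil iam ch user:me@example.com:roles/storage.admin gs://my-bucket
```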
Footnotes
Though the documentation page Understanding roles omits the list of permissions for Owner (and other basic roles), it's possible to see this information in the GCP console:
Go to "IAM & Admin"
Go to "Roles"
Filter for "Owner"
Go to "Owner"
(See list of permissions)
My manager has an AWS account, and using his credentials we create buckets per employee. Now I want to access another bucket through the command line. Is it possible to access two buckets (mine and one more) so that I can upload and download my files to whichever bucket I want? I have the access keys for both buckets, but I am still not able to access both buckets simultaneously.
I have already tried changing the access key and secret in my s3 config, but it didn't serve the purpose.
I have already been granted the ACL for that new bucket.
Thanks
The best you can do without having a single access key that has permissions for both buckets is create a separate .s3cfg file. I'm assuming you're using s3cmd.
s3cmd --configure -c .s3cfg_bucketname
Will allow you to create a new configuration in the config file .s3cfg_bucketname. From then on when you are trying to access that bucket you just have to add the command line flag to specify which configuration to use:
s3cmd -c .s3cfg_bucketname ls
Of course you could add a bash function to your .bashrc (now I'm assuming bash... lots of assumptions! Let me know if I'm in the wrong, please) to make it even simpler:
function s3bucketname(){
    s3cmd -c ~/.s3cfg_bucketname "$@"
}
Usage:
s3bucketname ls
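An alternative sketch, if you use the official AWS CLI rather than s3cmd, is to keep the second set of credentials in a named profile (the profile and bucket names are placeholders):
```
# Store the second access key / secret under a named profile, then pass it per command
aws configure --profile other-bucket
aws s3 ls s3://other-bucket --profile other-bucket
```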
I'm not sure which command-line tool you are using. If you are using Timothy Kay's tool, you will find that the documentation allows you to set the access key and secret key as environment variables, not only in a config file, so you can set them on the command line before the put command.
I am one of the developers of Bucket Explorer. You can use its Transfer panel with two sets of credentials and perform operations between your bucket and the other bucket.
For more details, read Copy Move Between Two Different Accounts.