Disable the public-read permission for AllUsers (GCP)

I was setting the AllUsers permission for uploading files. I used:
gsutil acl ch -u AllUsers:R gs://[mywebsite.com]
gsutil defacl set public-read gs://[mywebsite.com]
but I found the directory was wrong, so I want to disable the permission on the current directory.
First, I checked the IAM policy for my setting with
gsutil iam get gs://[mywebsite.com]
and part of the results shows:
{
  "members": [
    "allUsers",
    "projectViewer:[myprojectID]"
  ],
  "role": "roles/storage.legacyBucketReader"
}
I guess this means the permission is granted to all users, so I have to disable it.
Then I tried removing AllUsers from this directory with:
gsutil iam ch -d AllUsers:R gs://[mywebsite.com]
However, it doesn't work; it shows:
CommandException: Incorrect public member type for binding AllUsers:R
Is there any solution to this?
If I delete the bucket, will the permission also be disabled?
(Update) Does "public-read" mean users can read the directory you set, or is it just the permission to upload files to the Storage? (This is what I really want to know.)

Looking at the syntax mentioned in gsutil help iam ch, it says to specify allUsers, not AllUsers. This works when you specify the former, but throws an error for the latter.
Case sensitivity for removing users/roles was fixed for acl ch -d in this GitHub commit, but it looks like it was never fixed for iam ch -d. I've opened a GitHub issue for this.
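For example, reusing the bucket placeholder from the question, the following should remove every role granted to allUsers on the bucket:
gsutil iam ch -d allUsers gs://[mywebsite.com]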

Related

How do you restore the intrinsic permissions associated with a bucket in GCS?

I was using Terraform's google_storage_bucket_iam_policy instead of google_storage_bucket_iam_member to apply permissions, which resulted in all the default permissions being removed, including the intrinsic permissions that project viewers, editors, and owners have (https://cloud.google.com/storage/docs/access-control/iam-roles#basic-roles-intrinsic). I was able to use terraform destroy to undo the authoritative google_storage_bucket_iam_policy; however, the intrinsic permissions have not been restored. I tried adding these three permissions back using the console, but there's no viewer, editor, or owner group. The only thing I see is allUsers and allAuthenticatedUsers. Is there a way to restore these permissions either manually or automatically without deleting the bucket entirely?
You have to run the following commands:
gsutil acl ch -p editors-<project number>:O gs://<bucket name>
gsutil acl ch -p owners-<project number>:O gs://<bucket name>
gsutil acl ch -p viewers-<project number>:R gs://<bucket name>
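You can then confirm the project groups are back with:
gsutil acl get gs://<bucket name>
which should again show entries for the project owners, editors, and viewers.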

How to restore permissions on bucket

I just created a new bucket under the default project "My First Project".
I accidentally deleted all permissions on the bucket. Is it possible for the default permissions to be restored?
I don't need the bucket so it can be deleted, but I don't have permission to do that either.
Update
To clarify, I own the project and bucket. No other user should have access.
Following suggestions by @gso_gabriel, I have tried the following:
I can list objects in the bucket:
> gsutil ls -r gs://my-bucket-name/
gs://my-bucket-name/name-of-my-file
I cannot change the ACL:
> gsutil defacl set public-read gs://my-bucket-name/
Setting default object ACL on gs://my-bucket-name/...
AccessDeniedException: 403 my-email-address does not have storage.buckets.update access to the Google Cloud Storage bucket.
> gsutil acl set -R public-read gs://my-bucket-name/
Setting ACL on gs://my-bucket-name/name-of-my-file...
AccessDeniedException: 403 my-email-address does not have storage.objects.update access to the Google Cloud Storage object.
I think there is no ACL (see the last line):
> gsutil ls -L gs://my-bucket-name/
gs://my-bucket-name/name-of-my-file
Creation time: Wed, 10 Jun 2020 01:31:20 GMT
Update time: Wed, 10 Jun 2020 01:31:20 GMT
Storage class: STANDARD
Content-Length: 514758
Content-Type: application/octet-stream
Hash (crc32c): AD4ziA==
Hash (md5): W3aLFrdB/eF85IZux9UVfQ==
ETag: CIPc1uiM9ukCEAE=
Generation: 1591752680386051
Metageneration: 1
ACL: []
Update 2
The output from the gcloud command suggested by @gso_gabriel is:
> gcloud projects get-iam-policy my_project_ID
bindings:
- members:
  - user:my-email-address
  role: roles/owner
etag: BwWnsC5jgkw=
version: 1
I also tried the "Policy Troubleshooter" in the IAM & Admin section of the GCP console. It showed the following:
I can create buckets and objects on the project e.g. storage.buckets.create is enabled
I cannot delete buckets and objects on the project e.g. storage.buckets.delete is disabled
I cannot get the IAM policy on buckets and objects on the project e.g. storage.buckets.getIamPolicy is disabled
The "Roles" associated with the project include permissions in the Storage Admin group (see the Roles subsection in the IAM & Admin section of the GCP console). i.e. permissions such as storage.objects.delete is supposedly enabled, but the Policy Troubleshooter shows that they are not being granted.
As well explained here, if you are the owner of the bucket - or at least have access to the account that owns it - you should be able to modify its ACL and add the permissions back as they were.
Once you are logged in as the owner, you just need to run the command gsutil acl set -R public-read gs://bucketName to provide public read access to the bucket. You can also check the exact default permissions here. In case you are not sure which account is the Owner, run the command below - as indicated here - which will return all accounts with permissions, including one marked as OWNER.
gsutil ls -L gs://your-bucket/your-object
The output should be something like this:
{
  "email": "your-service-account@appspot.gserviceaccount.com",
  "entity": "user-your-service-account@appspot.gserviceaccount.com",
  "role": "OWNER"
}
Let me know if the information helped you!

How to fix the error in GCP CommandException: "lifecycle" command spanning providers not allowed

I am learning GCP now. I have a bucket with the name welynx-test1_copy.
I want to set a lifecycle policy on it so that the bucket would be deleted after 23 days. Following the command help, I executed the following command in the CLI:
xenonxie@cloudshell:~ (rock-perception-263016)$ gsutil ls
gs://rock-perception-263016.appspot.com/
gs://staging.rock-perception-263016.appspot.com/
gs://welynx-test1/
gs://welynx-test1_copy/
So you can see the bucket exists.
Setting the policy below errors me out:
xenonxie@cloudshell:~ (rock-perception-263016)$ gsutil lifecycle set {"rule": [{"action": {"type": "Delete"}, "condition": {"age": 23}}]} gs://welynx-test1_copy
CommandException: "lifecycle" command spanning providers not allowed.
I've tried to follow the syntax found in the help as below:
xenonxie@cloudshell:~ (rock-perception-263016)$ gsutil lifecycle --help
NAME
  lifecycle - Get or set lifecycle configuration for a bucket

SYNOPSIS
  gsutil lifecycle get url
  gsutil lifecycle set config-json-file url...

DESCRIPTION
  The lifecycle command can be used to get or set lifecycle management
  policies for the given bucket(s). This command is supported for buckets
  only, not objects. For more information on object lifecycle management,
  please see the Google Cloud Storage docs
  <https://cloud.google.com/storage/docs/lifecycle>_.

  The lifecycle command has two sub-commands:

  GET
    Gets the lifecycle configuration for a given bucket. You can get the
    lifecycle configuration for only one bucket at a time. The output can be
    redirected into a file, edited and then updated via the set sub-command.

  SET
    Sets the lifecycle configuration on one or more buckets. The
    config-json-file specified on the command line should be a path to a
    local file containing the lifecycle configuration JSON document.

EXAMPLES
  The following lifecycle configuration JSON document specifies that all
  objects in this bucket that are more than 365 days old will be deleted
  automatically:

    {
      "rule":
      [
        {
          "action": {"type": "Delete"},
          "condition": {"age": 365}
        }
      ]
    }

  The following (empty) lifecycle configuration JSON document removes
  all lifecycle configuration for a bucket:

    {}
What am I missing here and how do I fix it? Thank you very much.
The issue with your command is that you put the rules inline in the command instead of in a configuration file.
The way to do it is to:
Create a JSON file with the lifecycle configuration rules
Use lifecycle set like this gsutil lifecycle set [CONFIG_FILE] gs://[BUCKET_NAME]
Basically, you can just use the rule from the example you gave:
{
  "rule":
  [
    {
      "action": {"type": "Delete"},
      "condition": {"age": 23}
    }
  ]
}
And replace CONFIG_FILE with the JSON file you have created.
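For example, a minimal end-to-end sketch for the bucket in the question, assuming the rules are saved to a file named lifecycle.json (note that the rule deletes objects older than 23 days; it does not delete the bucket itself):
echo '{"rule": [{"action": {"type": "Delete"}, "condition": {"age": 23}}]}' > lifecycle.json
gsutil lifecycle set lifecycle.json gs://welynx-test1_copy
gsutil lifecycle get gs://welynx-test1_copy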
Apparently, gsutil checks to see if the bucket name belongs to google before it checks to see if the lifecycle file exists:
❯ gsutil lifecycle set foo bar gs://baz
CommandException: "lifecycle" command spanning providers not allowed.
❯ gsutil lifecycle set foo gs://baz
AccessDeniedException: 403 user@domain.com does not have storage.buckets.get access to baz.
❯ gsutil lifecycle set foo gs://a-real-bucket-name
Setting lifecycle configuration on gs://a-real-bucket-name/...
ArgumentException: JSON lifecycle data could not be loaded from:
So if you provide anything other than a google-controlled bucket in the fifth position:
gsutil lifecycle set file.json THIS_ARGUMENT
You'll see errors relating to that problem instead of errors relating to the file.
This confused me as well, I think google could make some simple modifications to gsutil to make the error messages more helpful. I've filed a bug to that effect here: https://issuetracker.google.com/issues/147020031

How to disable directory listing in Google Cloud [duplicate]

We're using google cloud storage as our CDN.
However, any visitors can list all files by typing: http://ourcdn.storage.googleapis.com/
How can we disable it while keeping all the files under the bucket publicly readable by default?
We previously set the acl using
gsutil defacl ch -g AllUsers:READ
In the GCP dashboard:
go into your bucket
click the "Permissions" tab
in the member list, find "allUsers" and change its role from Storage Object Viewer to Storage Legacy Object Reader
then listing should be disabled.
Update:
as @Devy commented, just check the note below:
Note: roles/storage.objectViewer includes permission to list the objects in the bucket. If you don't want to grant listing publicly, use roles/storage.legacyObjectReader.
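If you prefer the command line, a rough equivalent of that swap for the ourcdn bucket from the question (assuming your gsutil version accepts the shorthand role names; otherwise spell them out as roles/storage.objectViewer and roles/storage.legacyObjectReader) would be:
gsutil iam ch -d allUsers:objectViewer gs://ourcdn
gsutil iam ch allUsers:legacyObjectReader gs://ourcdn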
Upload an empty index.html file in the root of your bucket. Open the bucket settings and click Edit website configuration - set index.html as the Main Page.
It will prevent the listing of the directory.
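Roughly the same setup can be scripted with gsutil (a sketch assuming a local index.html and the ourcdn bucket from the question):
gsutil cp index.html gs://ourcdn/
gsutil web set -m index.html gs://ourcdn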
Your defacl looks good. The problem is most likely that for some reason AllUsers must also have READ, WRITE, or FULL_CONTROL on the bucket itself. You can clear those with a command like this:
gsutil acl ch -d AllUsers gs://bucketname
Your command set the default object ACL on the bucket to READ, which means that objects will be accessible by anyone. To prevent users from listing the objects, you need to make sure users don't have an ACL on the bucket itself.
gsutil acl ch -d AllUsers gs://yourbucket
should accomplish this. You may need to run a similar command for AllAuthenticatedUsers; just take a look at the bucket ACL with
gsutil acl get gs://yourbucket
and it should be clear.

Amazon S3 File Permissions, Access Denied when copied from another account

I have a set of video files that were copied from an AWS bucket in another account to my own bucket in my account.
I'm now running into a problem with all of the files: I receive Access Denied errors when I try to make them public.
Specifically, I log in to my AWS account, go into S3, and drill down through the folder structure to locate one of the video files.
When I look at this specific file, the permissions tab on the file does not show any permissions assigned to anyone. No users, groups, or system permissions have been assigned.
At the bottom of the Permissions tab, I see a small box that says "Error: Access Denied". I can't change anything about the file. I can't add meta-data. I can't add a user to the file. I cannot make the file Public.
Is there a way I can gain control of these files so that I can make them public? There are over 15,000 files / around 60 GB of files. I'd like to avoid downloading and reuploading all of them.
With some assistance and suggestions from the folks here I have tried the following. I made a new folder in my bucket called "media".
I tried this command:
aws s3 cp s3://mybucket/2014/09/17/thumb.jpg s3://mybucket/media --grants read=uri=http://acs.amazonaws.com/groups/global/AllUsers full=emailaddress=my_aws_account_email_address
I receive a fatal error 403 when calling the HeadObject operation: Forbidden.
A very interesting conundrum! Fortunately, there is a solution.
First, a recap:
Bucket A in Account A
Bucket B in Account B
User in Account A copies objects to Bucket B (having been granted appropriate permissions to do so)
Objects in Bucket B still belong to Account A and cannot be accessed by Account B
I managed to reproduce this and can confirm that users in Account B cannot access the file -- not even the root user in Account B!
Fortunately, things can be fixed. The aws s3 cp command in the AWS Command-Line Interface (CLI) can update permissions on a file when copied to the same name. However, to trigger this, you also have to update something else otherwise you get this error:
This copy request is illegal because it is trying to copy an object to itself without changing the object's metadata, storage class, website redirect location or encryption attributes.
Therefore, the permissions can be updated with this command:
aws s3 cp s3://my-bucket/ s3://my-bucket/ --recursive --acl bucket-owner-full-control --metadata "One=Two"
Must be run by an Account A user that has access permissions to the objects (eg the user who originally copied the objects to Bucket B)
The metadata content is unimportant, but needed to force the update
--acl bucket-owner-full-control will grant permission to Account B so you'll be able to use the objects as normal
End result: A bucket you can use!
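If only a few objects are affected, the same self-copy trick can be applied per object, run, as above, by an Account A user with access to the object. For example, for the thumbnail from the question (an untested sketch; the metadata value is only there to force the update):
aws s3 cp s3://mybucket/2014/09/17/thumb.jpg s3://mybucket/2014/09/17/thumb.jpg --acl bucket-owner-full-control --metadata "One=Two"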
aws s3 cp s3://account1/ s3://accountb/ --recursive --acl bucket-owner-full-control
To correctly set the appropriate permissions for newly added files, add this bucket policy:
[...]
{
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::123456789012:user/their-user"
  },
  "Action": [
    "s3:PutObject",
    "s3:PutObjectAcl"
  ],
  "Resource": "arn:aws:s3:::my-bucket/*"
}
And set ACL for newly created files in code. Python example:
import boto3

client = boto3.client('s3')

local_file_path = '/home/me/data.csv'
bucket_name = 'my-bucket'
bucket_file_path = 'exports/data.csv'

# Upload the file and grant the bucket owner full control over the new object
client.upload_file(
    local_file_path,
    bucket_name,
    bucket_file_path,
    ExtraArgs={'ACL': 'bucket-owner-full-control'}
)
source: https://medium.com/artificial-industry/how-to-download-files-that-others-put-in-your-aws-s3-bucket-2269e20ed041 (disclaimer: written by me)
In case anyone is trying to do the same but using a Hadoop/Spark job instead of the AWS CLI:
Step 1: Grant the user in Account A appropriate permissions to copy objects to Bucket B (mentioned in the answer above).
Step 2: Set the fs.s3a.acl.default configuration option using the Hadoop Configuration. This can be set in a conf file or programmatically:
Conf File:
<property>
  <name>fs.s3a.acl.default</name>
  <description>Set a canned ACL for newly created and copied objects. Value may be Private,
    PublicRead, PublicReadWrite, AuthenticatedRead, LogDeliveryWrite, BucketOwnerRead,
    or BucketOwnerFullControl.</description>
  <value>$chooseOneFromDescription</value>
</property>
Programmatically:
spark.sparkContext.hadoopConfiguration.set("fs.s3a.acl.default", "BucketOwnerFullControl")
Adding
--acl bucket-owner-full-control
made it work.
I'm afraid you won't be able to transfer ownership as you wish. Here's what you did:
Old account copies objects into new account.
The "right" way of doing it (assuming you wanted to assume ownership on the new account) would be:
New account copies objects from old account.
See the small but important difference? S3 docs kind of explain it.
I think you might get away with it without needing to download the whole thing by just copying all of the files within the same bucket, and then deleting the old files. Make sure you can change the permissions after doing the copy. This should save you some money too, as you won't have to pay for the data transfer costs of downloading everything.
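A rough sketch of that approach with the AWS CLI, reusing the prefixes from the question (hypothetical paths; verify the copies are readable and that you can change their permissions before deleting anything):
aws s3 cp s3://mybucket/2014/ s3://mybucket/media/2014/ --recursive
aws s3 rm s3://mybucket/2014/ --recursive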
boto3 "copy_object" solution :
Providing Grant control to the destination bucket owner
client.copy_object(CopySource=copy_source, Bucket=target_bucket, Key=key, GrantFullControl='id=<bucket owner Canonical ID>')
To get the bucket owner's Canonical ID from the console:
Select the bucket, then the Permissions tab, then the "Access Control List" tab.