Unable to make GCP bucket public - google-cloud-platform

I'm trying to make a bucket in Google Cloud Storage public, but I'm receiving this error:
Error
Sorry, there’s a problem. If you entered information, check it and try again. Otherwise, the problem might clear up on its own, so check back later.
Tracking Number: 8176737072451350548
I'm granting allUsers the StorageObjectViewer role, and I'm doing this directly through the platform (Cloud Console).

On GCP, use Cloud Shell (or any shell where gsutil is authenticated for your project) and run:
$ gsutil defacl set public-read gs://your-bucket-name
Afterwards, run:
$ gsutil ls -L -b gs://your-bucket-name
to see the ACL configuration of your bucket.
https://codelabs.developers.google.com/codelabs/cloud-upload-objects-to-cloud-storage/#0
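If the bucket uses uniform bucket-level access, or you simply prefer IAM over ACLs, granting the role from the shell should have the same effect as what you attempted in the console; the bucket name below is a placeholder:
$ gsutil iam ch allUsers:objectViewer gs://your-bucket-name
$ gsutil iam get gs://your-bucket-name
The second command prints the bucket's IAM policy so you can confirm the allUsers binding is there.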

Related

gsutil rsync with s3 buckets gives InvalidAccessKeyId error

I am trying to copy all the data from an AWS S3 bucket to a GCS bucket. According to this answer, the rsync command should be able to do that, but I am receiving the following error when I try:
Caught non-retryable exception while listing s3://my-s3-source/: AccessDeniedException: 403 InvalidAccessKeyId
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>{REDACTED}</AWSAccessKeyId><RequestId>{REDACTED}</RequestId><HostId>{REDACTED}</HostId></Error>
CommandException: Caught non-retryable exception - aborting rsync
This is the command I am trying to run
gsutil -m rsync -r s3://my-s3-source gs://my-gcs-destination
I have the AWS CLI installed which is working fine with the same AccessKeyId and listing buckets as well as objects in the bucket.
Any idea what am I doing wrong here?
gsutil can work with both Google Storage and S3.
gsutil rsync -d -r s3://my-aws-bucket gs://example-bucket
You just need to configure it with both your Google and your AWS S3 credentials. For gsutil, add the Amazon S3 credentials to ~/.aws/credentials, or store them in the .boto configuration file that gsutil reads. Note that when you access an Amazon S3 bucket with gsutil, the Boto library uses your ~/.aws/credentials file to override other credentials, such as any stored in ~/.boto.
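For reference, a minimal credentials setup might look like this (key values are placeholders; either file works):
# ~/.aws/credentials
[default]
aws_access_key_id = YOUR_AWS_ACCESS_KEY_ID
aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY
# or in the .boto file gsutil reads
[Credentials]
aws_access_key_id = YOUR_AWS_ACCESS_KEY_ID
aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY
A quick way to confirm gsutil picks the credentials up is gsutil ls s3://my-s3-source.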
=== 1st update ===
Also make sure you have the correct IAM permissions on the GCP side and the correct AWS IAM credentials. In addition, if you have a prior version of Migrate for Compute Engine (formerly Velostrata), use this documentation and make sure you have set up the VPN, IAM credentials, and AWS network. If you are using the current version (5.0), use the following documentation to check that everything is configured correctly.

Where to run the command to access private S3 bucket?

Apologies, this is such a rookie question. A report I set up is being run daily and deposited in the customer S3 bucket. I was given the command to run if I wanted to inspect the bucket contents. I want to verify my report is as expected in there, so I'd like to access it. But I have no idea where to actually run the command.
Do I need to install the AWS CLI and run it there, or is there something else I need to install so I can run it from Terminal? The command has the AWS secret key, access key, and URL.
If you wish to access an object from Amazon S3 on your own computer:
Download the AWS Command-Line Interface (CLI)
Run: aws configure and provide your Access Key & Secret Key
To list a bucket: aws s3 ls s3://bucket-name
To download an object: aws s3 cp s3://bucket-name/object-name.txt .
(That last period means "to the current directory".)
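For example, to check that the daily report actually landed and pull it down (bucket name, prefix, and file name below are made up; substitute the ones from the command you were given):
aws s3 ls s3://bucket-name/reports/ --recursive
aws s3 cp s3://bucket-name/reports/daily-report.csv .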

mount GCPs buckets with write access

I can successfully mount my bucket using the following command
sudo mount -t gcsfuse -o rw,noauto,user,implicit_dirs,allow_other fakebucket thebucket/
I can go into the bucket, find the subfolders, etc.; however, I can't write anything:
touch: cannot touch 'aaa': Permission denied
I have tried various gcsfuse parameters, for example rw,noauto,user,implicit_dirs,allow_other. I even tried a regular chmod command afterwards:
sudo chmod -R 777 thebucket/
It completes with no error, but the permissions have not changed and I still can't write to the bucket.
Thank you in advance,
Have you checked if your instance has the required API access scopes to write to storage?
By default the access scope to storage is "Read only", this is why you can mount the bucket and list the contents but not write to it.
To edit the scopes, stop the instance and change them from the web interface, or use this command:
gcloud beta compute instances set-scopes INSTANCE_NAME --scopes=storage-full
Be sure to include every scope you need: the command above replaces the existing scopes, leaving only read/write access to the Storage API.
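As a sketch, the full sequence might look like this (instance and zone names are placeholders; the instance must be stopped before its scopes can be changed):
gcloud compute instances describe INSTANCE_NAME --zone=ZONE --format="yaml(serviceAccounts)"
gcloud compute instances stop INSTANCE_NAME --zone=ZONE
gcloud beta compute instances set-scopes INSTANCE_NAME --zone=ZONE --scopes=storage-full
gcloud compute instances start INSTANCE_NAME --zone=ZONE
The describe command shows the scopes currently attached to the instance's service account, so you can confirm whether storage access is read-only before making any changes.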

Fetch content in AWS S3 public bucket from GCP Data Storage

I am trying to fetch the content of the bucket s3://open-images-dataset from GCP data storage through the gsutil or the transfer service. I am using the following command in the case of the command line alternative:
gsutil -m -o GSUtil:parallel_composite_upload_threshold=150M cp -r --no-sign-request s3://open-images-dataset gs://<bucket-name>
The problem is that s3://open-images-dataset is public, and one would usually pass --no-sign-request when downloading it to a local directory. However, as far as I have been able to see, gsutil doesn't offer any option to get around this. Any idea about that problem?
I cannot download it to my local machine first because the content of the bucket is too big.
It is not possible at the moment, but a PR has been submitted to the Boto library.
gsutil uses the Boto library to handle communicating with S3. After a bit of digging through the code, it seems Boto allows specifying that an individual connection should be anonymous... but it looks like it would require patching the Boto library to make all S3 connections for a given session anonymous (i.e. setting a Boto config option like "no_sign_request = True" under the [s3] section).
When I try to list that bucket with AWS credentials set, via gsutil ls s3://open-images-dataset, the signed request succeeds. Given that it works, is there any particular reason you don't want the request to be signed?
Edit
I submitted this pull request to add support for no_sign_request in Boto:
https://github.com/boto/boto/pull/3833
It will be in the next version of Boto, whenever they decide to release it. At that point, gsutil can grab the new version and include it in a subsequent release.
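Once that lands in a gsutil release, the option described above would presumably go in the [s3] section of your .boto file, something like this (a sketch based on the option named in the PR, not available until it ships):
[s3]
no_sign_request = True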

Amazon S3 with s3fs and fuse, transport endpoint is not connected

Redhat with Fuse 2.4.8
S3FS version 1.59
From the AWS online management console i can browse the files on the S3 bucket.
When I log in (ssh), I cannot access my /s3 folder.
Also, the command "/usr/bin/s3fs -o allow_other bucket /s3"
returns: s3fs: unable to access MOUNTPOINT /s3: Transport endpoint is not connected
What could be the reason? How can I fix it? Does this folder need to be unmounted and then mounted again?
Thanks!
Well, the solution was simple: unmount and remount the directory. The "transport endpoint is not connected" error was solved by unmounting the /s3 folder and then mounting it again.
Command to unmount
fusermount -u /s3
Command to mount
/usr/bin/s3fs -o allow_other bucketname /s3
Takes 3 minutes to sync.
I don't recommend accessing S3 via quick-and-dirty FUSE drivers.
S3 isn't really designed to act as a file system;
see this SOF answer for a nice summary.
You would probably never dare to mount a Linux mirror website just because it holds files; this is comparable.
Let your process write files to your local filesystem, then sync them to your S3 bucket with tools like cron and s3cmd, as sketched below.
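A minimal sketch of that approach, with a made-up local path and bucket name, is a crontab entry that pushes new files every 15 minutes:
*/15 * * * * s3cmd sync /var/data/reports/ s3://your-bucket/reports/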
If you insist on using s3fs:
echo "yourawskey:yourawssecret" | sudo tee /etc/passwd-s3fs
sudo chmod 640 /etc/passwd-s3fs
sudo /usr/bin/s3fs yours3bucket /yourmountpoint -ouse_cache=/tmp
Verify with mount
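For instance, a quick sanity check after mounting (using the mount point from the command above):
mount | grep s3fs
df -h /yourmountpoint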
Source: http://code.google.com/p/s3fs/wiki/FuseOverAmazon
I was using old security credentials. Regenerating the security credentials (Access ID, Access Key) solved the issue.
This was a permissions issue on the bucket for me. Adding the "list" and "view permissions" for "everyone" in the AWS UI allowed bucket access.
If you don't want to allow everyone access, then make sure you are using the AWS credentials associated with the user that has access to the bucket in s3fs.
I had this problem and I found that the bucket name can only have lowercase characters. Trying to access a bucket named "BUCKET1" via https://BUCKET1.s3.amazonaws.com or https://bucket1.s3.amazonaws.com will both fail, but if the bucket is called "bucket1", https://bucket1.s3.amazonaws.com will succeed.
So it is not enough to lowercase the name on the s3fs command line; you MUST also create the bucket with a lowercase name.
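For example, recreating the bucket under a lowercase name (the name below is hypothetical) and mounting that instead:
aws s3 mb s3://my-lowercase-bucket
/usr/bin/s3fs -o allow_other my-lowercase-bucket /s3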
Just unmount the directory and reboot the server if you have already added the entry to /etc/fstab that mounts the directory automatically.
To unmount: sudo umount /dir
The following line should be present in /etc/fstab; only then will it mount automatically after a reboot:
s3fs#bucketname /s3 fuse allow_other,nonempty,use_cache=/tmp/cache,multireq_max=500,uid=505,gid=503 0 0
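You can test the fstab entry without a full reboot by unmounting and letting mount re-read /etc/fstab, for example:
sudo umount /s3
sudo mount -a
mount | grep s3fs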
This issue could be due to the policy attached to the IAM user. Make sure the IAM user has AdministratorAccess.
I faced the same issue, and changing the policy to AdministratorAccess fixed it.
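If you manage users from the CLI, attaching that policy would look roughly like this (the user name is a placeholder; a narrower S3-only policy is usually preferable to full AdministratorAccess):
aws iam attach-user-policy --user-name YOUR_USER --policy-arn arn:aws:iam::aws:policy/AdministratorAccess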