Fetch content in AWS S3 public bucket from GCP Cloud Storage

I am trying to fetch the content of the bucket s3://open-images-dataset into Google Cloud Storage, either through gsutil or the transfer service. For the command-line alternative I am using the following command:
gsutil -m -o GSUtil:parallel_composite_upload_threshold=150M cp -r --no-sign-request s3://open-images-dataset gs://<bucket-name>
The problem here is that s3://open-images-dataset is public, and one would usually pass --no-sign-request when downloading it to a local directory. However, as far as I have been able to see, GCP doesn't offer any option to get around this. Any ideas about this problem?
I cannot download it to my local machine first because the content of the bucket is too big.

It is not possible at the moment, but a PR has been submitted to the Boto library.

gsutil uses the Boto library to handle communicating with S3. After a bit of digging through the code, it seems Boto allows specifying that an individual connection should be anonymous... but it looks like it would require patching the Boto library to make all S3 connections for a given session be anonymous (i.e. setting a Boto config option like "no_sign_request = True" under the [s3] section).
When I try to list that bucket with AWS credentials set, via gsutil ls s3://open-images-dataset, the signed request succeeds. Given that it works, is there any particular reason you don't want the request to be signed?
Edit
I submitted this pull request to add support for no_sign_request in Boto:
https://github.com/boto/boto/pull/3833
It will be in the next version of Boto, whenever they decide to release it. At that point, gsutil can grab the new version and include it in a subsequent release.
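In the meantime, if you just need anonymous programmatic access to the public bucket, the newer boto3 library (separate from the Boto version bundled with gsutil) can make unsigned requests. A minimal sketch, assuming boto3 is installed:
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) S3 client - no AWS credentials required.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))

# List a few objects in the public bucket to confirm anonymous access works.
resp = s3.list_objects_v2(Bucket="open-images-dataset", MaxKeys=5)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
This does not replace the gsutil/transfer-service path, but it shows the unsigned-request behaviour the pull request adds to Boto.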

Related

gsutil rsync with s3 buckets gives InvalidAccessKeyId error

I am trying to copy all the data from an AWS S3 bucket to a GCS bucket. According to this answer, the rsync command should be able to do that, but I am receiving the following error when trying it:
Caught non-retryable exception while listing s3://my-s3-source/: AccessDeniedException: 403 InvalidAccessKeyId
<?xml version="1.0" encoding="UTF-8"?>
<Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>{REDACTED}</AWSAccessKeyId><RequestId>{REDACTED}</RequestId><HostId>{REDACTED}</HostId></Error>
CommandException: Caught non-retryable exception - aborting rsync
This is the command I am trying to run
gsutil -m rsync -r s3://my-s3-source gs://my-gcs-destination
I have the AWS CLI installed which is working fine with the same AccessKeyId and listing buckets as well as objects in the bucket.
Any idea what I am doing wrong here?
gsutil can work with both Google Storage and S3.
gsutil rsync -d -r s3://my-aws-bucket gs://example-bucket
You just need to configure it with both your Google and your AWS S3 credentials. Add the Amazon S3 credentials to ~/.aws/credentials, or store them in the .boto configuration file for gsutil. Note that when you're accessing an Amazon S3 bucket with gsutil, the Boto library uses your ~/.aws/credentials file in preference to other credentials, such as any stored in ~/.boto.
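If you are unsure which AWS identity your credentials actually resolve to (useful when debugging an InvalidAccessKeyId error), a quick check with boto3 might look like this; purely illustrative, not part of the original answer:
import boto3

# Print the identity behind the currently configured AWS credentials.
# boto3 reads ~/.aws/credentials and environment variables the same way
# the AWS CLI does, so this confirms which access key is being picked up.
identity = boto3.client("sts").get_caller_identity()
print(identity["Account"], identity["Arn"])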
=== 1st update ===
Also make sure you have the correct IAM permissions on the GCP side and the correct AWS IAM credentials. If you have a prior version of Migrate for Compute Engine (formerly Velostrata), use this documentation and make sure you set up the VPN, IAM credentials and AWS network. If you are using the current version (5.0), use the following documentation to check that everything is configured correctly.

Is it possible to use AWS's transcoding service with Google Storage?

I have a system built on top of Google's services, however AWS seems to have a terrific setup for video utilities (https://aws.amazon.com/elastictranscoder/ and https://aws.amazon.com/mediaconvert/). Is it possible to send my users' video from GCP to AWS and back again?
You can do it if you use Google Cloud Storage and Amazon S3 to store and exchange data between clouds.
Have a look at the gsutil command line documentation:
The gsutil tool lets you access Cloud Storage from the command line.
It can also be used to access and work with other cloud storage
services that use HMAC authentication, like Amazon S3. For example,
after you add your Amazon S3 credentials to the .boto configuration
file for gsutil, you can start using gsutil to manage objects in your
Amazon S3 buckets.
To do it, follow the Setting Up Credentials to Access Protected Data guide, then open your ~/.boto file and find these lines:
# To add HMAC aws credentials for "s3://" URIs, edit and uncomment the
#aws_access_key_id = <your aws access key ID>
#aws_secret_access_key = <your aws secret access key>
Fill in the aws_access_key_id and aws_secret_access_key settings with your S3 credentials.
After that, you'll be able to copy from S3 to GCS or vice versa:
gsutil cp -R s3://my-aws-bucket gs://my-gcp-bucket
If you have a large number of files to transfer you might want to use
the top-level gsutil -m option (see gsutil help options), to perform a
parallel (multi-threaded/multi-processing) copy:
gsutil -m cp -R s3://my-aws-bucket gs://my-gcp-bucket
For more information, check the gsutil cp documentation.
Also, you can use the gsutil rsync command to synchronize data between S3 and GCS:
gsutil rsync -d -r s3://my-aws-bucket gs://my-gcp-bucket
For more information, check the gsutil rsync documentation.
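If gsutil doesn't fit your workflow, a rough per-object alternative (my own illustration, not from the answer above) is to stream objects with boto3 and the google-cloud-storage client. The bucket names are placeholders, and each object is read into memory, so this is only suitable for reasonably small files:
import boto3
from google.cloud import storage

s3 = boto3.client("s3")                                   # AWS credentials from ~/.aws/credentials
gcs_bucket = storage.Client().bucket("my-gcp-bucket")     # placeholder destination bucket

# Copy every object from the S3 bucket into the GCS bucket, key for key.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="my-aws-bucket"):   # placeholder source bucket
    for obj in page.get("Contents", []):
        body = s3.get_object(Bucket="my-aws-bucket", Key=obj["Key"])["Body"].read()
        gcs_bucket.blob(obj["Key"]).upload_from_string(body)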

Google Cloud - Downloading data from bucket to instance

I'm trying to download all the data from my bucket (tracking-data) on Google Cloud to my instance (instance-1), which runs Linux.
I see some options here:
https://cloud.google.com/compute/docs/instances/transfer-files#transfergcloud
but I'm not sure there's a way there to download from bucket to instance.
I'm accessing my instance through my terminal and I've made a few tries with gsutil, but not successfully so far.
Any idea how I can download the whole bucket into my instance? (Preferably into MDNet/data; I don't have such a directory yet, but I should probably store the data there.)
First of all, check the API access rights for your Compute Engine service account, for instance Storage set to read only.
Then, just use gsutil cp (doc) or even gsutil rsync (doc):
gsutil -m cp -r gs://<your-bucket>/* <destination_folder>
Use gsutil cp or gsutil rsync
https://cloud.google.com/storage/docs/gsutil/commands/cp
https://cloud.google.com/storage/docs/gsutil/commands/rsync
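If you prefer to do the same thing from Python on the instance, a rough sketch with the google-cloud-storage client might look like this (an illustration assuming the client library and service-account scopes are set up; the MDNet/data path comes from the question):
import os
from google.cloud import storage

# On a Compute Engine instance this picks up the instance's service account.
client = storage.Client()

# Download every object from the bucket into MDNet/data, preserving key paths.
for blob in client.list_blobs("tracking-data"):
    if blob.name.endswith("/"):          # skip zero-byte "folder" placeholders
        continue
    dest = os.path.join("MDNet/data", blob.name)
    os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
    blob.download_to_filename(dest)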
Adding a space and a full stop (the current directory) at the end of the command helped me.

How to create folder on S3 from Ec2 instance

I want to create a folder in an S3 bucket from an EC2 instance. I tried put-object but it's not working. Is there any way of creating a folder on S3 from an EC2 instance using the CLI?
You don't need to create a folder to put an item in it. For example, just run something like the command below and S3 will create the folders if they don't exist:
aws s3 cp ./testfile s3://yourBucketName/any/path/you/like
If you want to use cp recursively you can specify the --recursive option, or use aws s3 sync.
If your command does not work, then you may have permission issues. Paste your error so that we can help you.
aws s3api put-object --bucket bucketname --key foldername/
This command works like a charm.
Courtesy AWS Support.
aws s3 sync <folder_name> s3://<you-prefix>/<some_other_folder>/<folder_name>
And bear in mind that S3 is an object store; it doesn't deal with folders.
If you create /xyz/ and upload a file called /xyz/foo.txt, those are actually two different objects. If you delete /xyz/, it will not delete /xyz/foo.txt.
The S3 console allows you to "create folder", but after you play with it you will notice that you CANNOT RENAME a folder, or do ANYTHING that you can do with a real folder (like moving a tree structure or recursively setting access rights).
In S3 there is something called a "prefix": the API lets you list/filter objects with a particular prefix, which gives you a folder-like abstraction.
As mentioned above, since you CANNOT do anything like with a file system folder, if you want to perform a task like moving one folder to another folder, you need to write your own code to "rewrite" the object name (to be specific, the "Key" in S3), i.e. copy it to a new object name and delete the old object.
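To make that concrete, here is a rough boto3 sketch of "moving" a folder by copying each object to a new key and deleting the old one; the bucket and prefix names are placeholders:
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"                      # placeholder bucket name

# "Move" old-folder/ to new-folder/ by copying each object to a new key
# and deleting the original - S3 has no real rename operation.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix="old-folder/"):
    for obj in page.get("Contents", []):
        new_key = "new-folder/" + obj["Key"][len("old-folder/"):]
        s3.copy_object(Bucket=bucket,
                       CopySource={"Bucket": bucket, "Key": obj["Key"]},
                       Key=new_key)
        s3.delete_object(Bucket=bucket, Key=obj["Key"])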
If you want to build advanced control on S3, you may choose any of the AWS SDKs to do it.
https://aws.amazon.com/tools/
You can play around with the API call put_object() (the naming varies depending on the SDK language) and verify the facts above (most of which are found in the AWS documentation).
Update: since @Tom raised the issue.
You cannot create a virtual folder using the AWS CLI (maybe @Tom can show how); the only way to do that is using the AWS SDK put_object().
Let's try this.
First, create a dummy file in the shell:
echo "dummy" > test.txt
Then try the Python AWS SDK:
import boto3

s3 = boto3.client("s3")
s3.create_bucket(Bucket="dummy")  # bucket names are global, so pick a unique one

# Now create the so-called "empty virtual folder" xyz/
s3.put_object(Bucket="dummy", Key="xyz/")

# Now put the file above into S3 under the key xyz/test.txt.
# Open it first, because put_object only takes bytes or a file object.
with open("test.txt", "rb") as myfile:
    s3.put_object(Bucket="dummy", Key="xyz/test.txt", Body=myfile)
Now, go to your command shell, fire up your AWS CLI (or continue to play with boto3)
# check everything
aws s3 ls s3://dummy --recursive
#now delete the so call "folder"
aws s3 rm s3://dummy/xyz/
# And you see the file "xyz/test.txt" is still there
aws s3 ls s3://dummy --recursive
You can find the commands in the official AWS documentation:
http://docs.aws.amazon.com/cli/latest/userguide/using-s3-commands.html
There are also various other tools that can be used to create buckets/folders in S3. One well-known tool is S3 Browser, which is available for Windows servers. Install it on your EC2 instance and provide your AWS access key and secret key to access S3. It provides a simple UI to do that.
There is no CLI command that allows you to simply create a folder in an S3 bucket. To create the folder I would use the following command, which creates an empty file with nothing inside. Note that if you delete the file, you will also delete the folder, as long as you have not added anything else underneath it afterwards:
aws s3api put-object --bucket bucket_name --key folder_name/empty.csv

Downloading an entire S3 bucket?

I noticed that there does not seem to be an option to download an entire s3 bucket from the AWS Management Console.
Is there an easy way to grab everything in one of my buckets? I was thinking about making the root folder public, using wget to grab it all, and then making it private again but I don't know if there's an easier way.
AWS CLI
See the "AWS CLI Command Reference" for more information.
AWS recently released their Command Line Tools, which work much like boto and can be installed using
sudo easy_install awscli
or
sudo pip install awscli
Once installed, you can then simply run:
aws s3 sync s3://<source_bucket> <local_destination>
For example:
aws s3 sync s3://mybucket .
will download all the objects in mybucket to the current directory.
And will output:
download: s3://mybucket/test.txt to test.txt
download: s3://mybucket/test2.txt to test2.txt
This will download all of your files using a one-way sync. It will not delete any existing files in your current directory unless you specify --delete, and it won't change or delete any files on S3.
You can also do S3 bucket to S3 bucket, or local to S3 bucket sync.
Check out the documentation and other examples.
While the above example shows how to download a full bucket, you can also download a folder recursively by running:
aws s3 cp s3://BUCKETNAME/PATH/TO/FOLDER LocalFolderName --recursive
This will instruct the CLI to download all files and folder keys recursively within the PATH/TO/FOLDER directory within the BUCKETNAME bucket.
You can use s3cmd to download your bucket:
s3cmd --configure
s3cmd sync s3://bucketnamehere/folder /destination/folder
There is another tool you can use called rclone. This is a code sample in the Rclone documentation:
rclone sync /home/local/directory remote:bucket
I've used a few different methods to copy Amazon S3 data to a local machine, including s3cmd, and by far the easiest is Cyberduck.
All you need to do is enter your Amazon credentials and use the simple interface to download, upload, sync any of your buckets, folders or files.
You have many options to do that, but the best one is using the AWS CLI.
Here's a walk-through:
Download and install AWS CLI in your machine:
Install the AWS CLI using the MSI Installer (Windows).
Install the AWS CLI using the Bundled Installer for Linux, OS X, or Unix.
Configure AWS CLI:
Make sure you input valid access and secret keys, which you received when you created the account.
Sync the S3 bucket using:
aws s3 sync s3://yourbucket /local/path
In the above command, replace the following fields:
yourbucket >> your S3 bucket that you want to download.
/local/path >> path in your local system where you want to download all the files.
To download using AWS S3 CLI:
aws s3 cp s3://WholeBucket LocalFolder --recursive
aws s3 cp s3://Bucket/Folder LocalFolder --recursive
To download using code, use the AWS SDK.
To download using GUI, use Cyberduck.
The answer by @Layke is good, but if you have a ton of data and don't want to wait forever, you should read "AWS CLI S3 Configuration".
The following commands will tell the AWS CLI to use 1,000 threads to execute jobs (each a small file or one part of a multipart copy) and look ahead 100,000 jobs:
aws configure set default.s3.max_concurrent_requests 1000
aws configure set default.s3.max_queue_size 100000
After running these, you can use the simple sync command:
aws s3 sync s3://source-bucket/source-path s3://destination-bucket/destination-path
or
aws s3 sync s3://source-bucket/source-path c:\my\local\data\path
On a system with 4 CPU cores and 16 GB RAM, for cases like mine (3-50 GB files) the sync/copy speed went from about 9.5 MiB/s to 700+ MiB/s, a speed increase of 70x over the default configuration.
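The same kind of tuning is available programmatically; as a side illustration (not part of the answer above), boto3's transfer layer exposes a TransferConfig where you can raise concurrency. Bucket, key, and file names are placeholders:
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Allow up to 64 concurrent threads and 16 MB parts for multipart transfers.
config = TransferConfig(max_concurrency=64, multipart_chunksize=16 * 1024 * 1024)

s3.download_file("my-bucket", "big/object.bin", "object.bin", Config=config)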
This works 100% for me; I have downloaded all files from my AWS S3 bucket:
Install AWS CLI. Select your operating system and follow the steps here: Installing or updating the latest version of the AWS CLI
Check AWS version: aws --version
Run config command: aws configure
aws s3 cp s3://yourbucketname your\local\path --recursive
Eg (Windows OS): aws s3 cp s3://yourbucketname C:\aws-s3-backup\project-name --recursive
Check out this link: How to download an entire bucket from S3 to local folder
If you use Visual Studio, download "AWS Toolkit for Visual Studio".
After installing it, go to Visual Studio - AWS Explorer - S3 - your bucket - double-click.
In the window you will be able to select all files. Right-click and download the files.
For Windows, S3 Browser is the easiest way I have found. It is excellent software, and it is free for non-commercial use.
Use this command with the AWS CLI:
aws s3 cp s3://bucketname . --recursive
Another option that could help some OS X users is Transmit.
It's an FTP program that also lets you connect to your S3 files. It also has an option to mount any FTP or S3 storage as a folder in the Finder, but that is only available for a limited time.
I've done a bit of development for S3 and I have not found a simple way to download a whole bucket.
If you want to code in Java, the JetS3t lib is easy to use: create a list of a bucket's objects and iterate over that list to download them.
First, get a public/private key set from the AWS Management Console so you can create an S3Service object:
AWSCredentials awsCredentials = new AWSCredentials(YourAccessKey, YourAwsSecretKey);
s3Service = new RestS3Service(awsCredentials);
Then, get an array of your buckets objects:
S3Object[] objects = s3Service.listObjects(YourBucketNameString);
Finally, iterate over that array to download the objects one at a time with:
S3Object obj = s3Service.getObject(bucket, fileName);
file = obj.getDataInputStream();
I put the connection code in a threadsafe singleton. The necessary try/catch syntax has been omitted for obvious reasons.
If you'd rather code in Python you could use Boto instead.
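For reference, a minimal boto3 (the successor to Boto) sketch that walks a bucket and downloads every object could look like this; the bucket name and destination directory are placeholders:
import os
import boto3

s3 = boto3.resource("s3")
bucket = s3.Bucket("my-bucket")           # placeholder bucket name

# Walk every object in the bucket and download it, preserving key paths.
for obj in bucket.objects.all():
    if obj.key.endswith("/"):             # skip zero-byte "folder" placeholders
        continue
    dest = os.path.join("downloads", obj.key)
    os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
    bucket.download_file(obj.key, dest)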
Have a look at BucketExplorer; its "Downloading the whole bucket" option may do what you want.
The AWS CLI is the best option for uploading an entire folder or repository to AWS S3 and for downloading an entire AWS S3 bucket locally.
To upload whole folder to AWS S3: aws s3 sync . s3://BucketName
To download whole AWS S3 bucket locally: aws s3 sync s3://BucketName .
You can also specify a path like BucketName/Path to download a particular folder in the AWS S3 bucket.
If you only want to download the bucket from AWS, first install the AWS CLI on your machine. In the terminal, change directory to where you want to download the files and run this command:
aws s3 sync s3://bucket-name .
If you also want to sync in the other direction, from the local directory to S3 (in case you added some files to the local folder), run this command:
aws s3 sync . s3://bucket-name
You can do this with MinIO Client as follows: mc cp -r https://s3-us-west-2.amazonaws.com/bucketName/ localdir
MinIO also supports sessions, resumable downloads, uploads and many more. MinIO supports Linux, OS X and Windows operating systems. It is written in Golang and released under Apache Version 2.0.
AWS CLI is the best option to download an entire S3 bucket locally.
Install AWS CLI.
Configure AWS CLI for using default security credentials and default AWS Region.
To download the entire S3 bucket use command
aws s3 sync s3://yourbucketname localpath
Reference to AWS CLI for different AWS services: AWS Command Line Interface
To add another GUI option, we use WinSCP's S3 functionality. It's very easy to connect, only requiring your access key and secret key in the UI. You can then browse and download whatever files you require from any accessible buckets, including recursive downloads of nested folders.
Since it can be a challenge to clear new software through security and WinSCP is fairly prevalent, it can be really beneficial to just use it rather than try to install a more specialized utility.
If you use Firefox with S3Fox, that DOES let you select all files (shift-select first and last) and right-click and download all.
I've done it with 500+ files without any problem.
On Windows, my preferred GUI tool for this is CloudBerry Explorer Freeware for Amazon S3. It has a fairly polished file explorer and an FTP-like interface.
You can use sync to download a whole S3 bucket. For example, to download the whole bucket named bucket1 into the current directory:
aws s3 sync s3://bucket1 .
If you have only files there (no subdirectories) a quick solution is to select all the files (click on the first, Shift+click on the last) and hit Enter or right click and select Open. For most of the data files this will download them straight to your computer.
Try this command:
aws s3 sync yourBucketnameDirectory yourLocalDirectory
For example, if your bucket name is myBucket and local directory is c:\local, then:
aws s3 sync s3://myBucket c:\local
For more information about the AWS CLI, check this:
AWS CLI installation
It's always better to use the AWS CLI for downloading/uploading files to S3. Sync will help you resume without any hassle.
aws s3 sync s3://bucketname/ .
aws s3 sync s3://<source_bucket> <local_destination>
is a great answer, but it won't work if the objects are in storage class Glacier Flexible Retrieval, even if the files have been restored. In that case you need to add the flag --force-glacier-transfer.
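As an aside, you can check an object's storage class and restore status with boto3 before attempting the download; this is an illustration with placeholder bucket and key names, not part of the answer above:
import boto3

s3 = boto3.client("s3")

head = s3.head_object(Bucket="my-bucket", Key="archived/file.bin")
storage_class = head.get("StorageClass", "STANDARD")
restore_status = head.get("Restore", "")

# Glacier Flexible Retrieval objects report StorageClass "GLACIER"; the
# Restore header contains ongoing-request="false" once a restore completes.
if storage_class == "GLACIER" and 'ongoing-request="false"' not in restore_status:
    print("Object is archived and not restored yet - the download would fail.")
else:
    s3.download_file("my-bucket", "archived/file.bin", "file.bin")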
Here is a summary of what you have to do to copy an entire bucket:
1. Create a user that can operate with AWS s3 bucket
Follow this official article: Configuration basics
Don't forget to:
tick "programmatic access" in order to have the possibility to deal with with AWS via CLI.
add the right IAM policy to your user to allow him to interact with the s3 bucket
2. Download, install and configure AWS CLI
See this link to configure it: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html
You can use the following command in order to add the keys you got when you created your user:
$ aws configure
AWS Access Key ID [None]: <your_access_key>
AWS Secret Access Key [None]: <your_secret_key>
Default region name [None]: us-west-2
Default output format [None]: json
3. Use the following command to download content
You could use a recursive cp command, but the aws s3 sync command is preferable:
aws s3 sync s3://your_bucket /local/path
To see which files would be downloaded before actually downloading them, you can use the --dryrun option.
To improve speed, you can adjust s3 max_concurrent_requests and max_queue_size properties. See: http://docs.aws.amazon.com/cli/latest/topic/s3-config.html
You can exclude/include some files using --exclude and --include options. See: https://docs.aws.amazon.com/cli/latest/reference/s3/
For example, the command below will show all the .png files present in the bucket. Re-run it without --dryrun to actually download the matching files.
aws s3 sync s3://your_bucket /local/path --exclude "*" --include "*.png" --dryrun
Windows users need to download S3 Browser from this link, which also has installation instructions: http://s3browser.com/download.aspx
Then provide your AWS credentials (secret key, access key and region) to S3 Browser. This link contains configuration instructions (copy and paste it into your browser): s3browser.com/s3browser-first-run.aspx
Now all your S3 buckets will be visible in the left panel of S3 Browser.
Simply select the bucket, click on the Buckets menu in the top-left corner, then select the "Download all files to" option from the menu. Below is a screenshot of the same:
(Screenshot: bucket selection screen)
Then browse to a folder to download the bucket to a particular place.
Click OK and your download will begin.
aws s3 sync is the perfect solution. It does not do a two-way sync; it is one way, from source to destination. Also, if you have lots of items in the bucket, it is a good idea to create an S3 VPC endpoint first so that the download happens faster (because it does not go over the public internet) and incurs no data transfer charges.
As @Layke said, it is best practice to download files with the S3 CLI; it is safe and secure. But in some cases people need to use wget to download a file, and here is the solution:
aws s3 presign s3://<your-bucket-name>/<object-key>
This will get you a temporary public URL which you can use to download content from S3, in your case with wget or any other download client.
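The same presigned URL can be generated from Python with boto3, if that fits your workflow better; a small sketch with placeholder names:
import boto3

s3 = boto3.client("s3")

# Generate a URL that stays valid for one hour; bucket and key are placeholders.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-bucket", "Key": "path/to/file.txt"},
    ExpiresIn=3600,
)
print(url)   # feed this to wget, curl, or any HTTP client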
You just need to pass --recursive and --include "*" to the aws s3 cp command, as follows: aws --region "${BUCKET_REGION}" s3 cp s3://${BUCKET}${BUCKET_PATH}/ ${LOCAL_PATH}/tmp --recursive --include "*" 2>&1
In addition to the suggestions for aws s3 sync, I would also recommend looking at s5cmd.
In my experience I found this to be substantially faster than the AWS CLI for multiple downloads or large downloads.
s5cmd supports wildcards so something like this would work:
s5cmd cp s3://bucket-name/* ./folder