How to download files from AWS S3 using a list - amazon-web-services

I have a list of files in a bucket in AWS S3, but when I execute the aws s3 cp command it gives me an error saying "unknown option".
My list:
s3://<bucket>/cms/imagepool/5f84dc7234bf5.jpg
s3://<bucket>/cms/imagepool/5f84daa19b7df.jpg
s3://<bucket>/cms/imagepool/5f84dcb12f9c5.jpg
s3://<bucket>/cms/imagepool/5f84dcbf25d4e.jpg
My bash script is below:
#!/bin/bash
while read line
do
aws s3 cp "${line}" ./
done <../links.txt
This is the error I get:
Unknown options: s3:///cms/imagepool/5f84daa19b7df.jpg
Does anybody know how to solve this issue?

It turns out the solution below worked (I had to include the --no-cli-auto-prompt flag):
#!/bin/bash
while read line
do
aws s3 cp --no-cli-auto-prompt "${line}" ./
done <../links.txt
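For reference, a slightly more defensive variant of the same loop (a sketch, not part of the original answer) strips a trailing carriage return and skips blank lines, since a CRLF line ending or an empty line at the end of links.txt can also produce confusing aws s3 cp errors:
#!/bin/bash
# Sketch: read each S3 URL from ../links.txt, tolerating CRLF endings and blank lines.
while IFS= read -r line
do
    line="${line%$'\r'}"          # drop a trailing carriage return, if any
    [ -z "$line" ] && continue    # skip empty lines
    aws s3 cp --no-cli-auto-prompt "$line" ./
done <../links.txt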

Related

Powershell Pipe stdout from s3 cp command to gzip

I'm trying to use PowerShell on Windows 10 to download a small .gz file from an S3 bucket using the aws s3 cp command.
I am piping the output of the s3 cp to gzip -d to decompress. My aim is to basically copy, unzip and display contents without saving the .gz file locally.
From reading the official Amazon documentation for the s3 cp command, the following is mentioned:
https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
Downloading an S3 object as a local file stream
WARNING: PowerShell may alter the encoding of or add a CRLF to piped or redirected output.
Here is the command I'm executing from powershell:
PS C:\> aws s3 cp s3://my-bucket/test.txt.gz - | gzip -d
Which returns the following error: gzip: stdin: not in gzip format
The command works fine when run from the Windows Command Prompt, but I just can't seem to get it working in PowerShell.
From a Windows Command Prompt, it works fine:
C:\Windows\system32>aws s3 cp s3://my-bucket/test.txt.gz - | gzip -d
With some sample test data output as follows:
first_name last_name
---------- ----------
Ellerey Place
Cherie Glantz
Isaak Grazier
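No accepted fix is quoted here, but given the documentation warning above, one commonly suggested workaround (a sketch, not taken from the original thread) is to avoid piping binary data through PowerShell at all: download the object to a temporary file first, then decompress that file.
# Sketch: skip the PowerShell pipe entirely; download to a file, then decompress it.
aws s3 cp s3://my-bucket/test.txt.gz test.txt.gz
gzip -dc test.txt.gz    # -c writes the decompressed contents to stdout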

How can I copy data in GCP from a public Cloud Storage bucket to my own bucket?

Problem
When following the GCP AutoML Vision quickstart (https://cloud.google.com/vision/automl/docs/edge-quickstart),
I'm trying to copy the sample images into my own bucket, using the following command in Google Cloud Shell:
gsutil -m cp -R gs://cloud-ml-data/img/flower_photos/ gs://${BUCKET}/img/
However I get the following error:
CommandException: "cp" command does not support provider-only URLs.
How can it be resolved?
Thanks very much.
Giovanni
This might occur when your BUCKET_NAME value is blank (check using echo $BUCKET_NAME).
Set a value for BUCKET_NAME using:
export BUCKET_NAME=<bucketName>
then check the value using:
echo $BUCKET_NAME
Do not use {} around the bucket name variable.
With
BUCKET_NAME='my_bucket'
the command
gsutil cp some.txt gs://${BUCKET_NAME}
gave the error message:
CommandException: "cp" command does not support provider-only URLs.
Use the form below instead, without the curly brackets {}:
gsutil cp some.txt gs://$BUCKET_NAME
Output:
Copying file://sa.enc [Content-Type=application/octet-stream]...
/ [1 files][ 2.4 KiB/ 2.4 KiB]
Operation completed over 1 objects/2.4 KiB.
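Putting the answers together, a minimal sketch for the original quickstart command (using the variable name BUCKET from the question; the bucket name itself is a placeholder) would be:
# Sketch: make sure the bucket variable is actually set before running the copy.
export BUCKET=my-bucket-name        # hypothetical bucket name; substitute your own
echo "$BUCKET"                      # should print the bucket name, not an empty line
gsutil -m cp -R gs://cloud-ml-data/img/flower_photos/ gs://$BUCKET/img/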

Download list of specific files from AWS S3 using CLI

I am trying to download only specific files from AWS S3. I have the list of file URLs. Using the CLI I can only download all files in a bucket with the --recursive flag, but I only want to download the files in my list. Any ideas on how to do that?
This is possibly a duplicate of:
Selective file download in AWS S3 CLI
You can do something along the lines of:
aws s3 cp s3://BUCKET/ folder --exclude "*" --include "2018-02-06*" --recursive
https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
Since you already have the S3 URLs in a file (say file.list), like:
s3://bucket/file1
s3://bucket/file2
you could download all the files to your current working directory with a simple bash loop:
while read -r line; do aws s3 cp "$line" .; done < file.list
People, I found out a quicker way to do it: https://stackoverflow.com/a/69018735
WARNING: "Please make sure you don't have an empty line at the end of your text file".
It worked here! :-)
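The linked answer isn't reproduced here, but if the goal is simply to speed things up, one option (a sketch, not necessarily what the linked answer does) is to run several copies in parallel with xargs:
# Sketch: download the URLs in file.list with up to 8 concurrent aws s3 cp processes.
xargs -P 8 -I {} aws s3 cp {} . < file.list
The warning about empty lines still applies: a blank line in file.list becomes an empty argument and can make that particular aws s3 cp call fail.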

Error "No URLs matched" When copying Google cloud bucket data to my local computer?

I am trying to download a folder that is inside my Google Cloud Storage bucket. I read the Google docs page gsutil/commands/cp and executed the line below.
gsutil cp -r appengine.googleapis.com gs://my-bucket
But i am getting the error
CommandException: No URLs matched: appengine.googleapis.com
Edit
By running the command below
gsutil cp -r gs://logsnotimelimit .
I am getting the error
IOError: [Errno 22] invalid mode ('ab') or filename: u'.\logsnotimelimit\appengine.googleapis.com\nginx.request\2018\03\14\14:00:00_14:59:59_S0.json_.gstmp'
What is the appengine.googleapis.com parameter in your command? Is that a local directory on your filesystem you are trying to copy to the cloud bucket?
The gsutil cp -r appengine.googleapis.com gs://my-bucket command you provided will copy a local directory named appengine.googleapis.com recursively to your cloud bucket named my-bucket. If that's not what you are doing - you need to construct your command differently.
I.e. to download a directory named folder from your cloud bucket named my-bucket into the current location try running
gsutil cp -r gs://my-bucket/folder .
-- Update: Since it appears that you're using a Windows machine (the "\" directory separators instead of "/" in the error message) and since the filenames contain the ":" character - the cp command will end up failing when creating those files with the error message you're seeing.
Just wanted to help people out if they run into this problem on Windows. As administrator:
Open C:\Program Files (x86)\Google\Cloud SDK\google-cloud-sdk\platform\gsutil\gslib\utils
Delete copy_helper.pyc
Change the permissions for copy_helper.py to allow writing
Open copy_helper.py
Go to the function _GetDownloadFile
On line 2312 (at time of writing), change the following line
download_file_name = _GetDownloadTempFileName(dst_url)
to the following (the objective is to remove the colons):
download_file_name = _GetDownloadTempFileName(dst_url).replace(':', '-')
Go to the function _ValidateAndCompleteDownload
On line 3184 (at time of writing), change the following line
final_file_name = dst_url.object_name
to the following (the objective is to remove the colons):
final_file_name = dst_url.object_name.replace(':', '-')
Save the file, and rerun the gsutil command
FYI, I was using the command gsutil -m cp -r gs://my-bucket/* . to download all my logs, which by default contain ":" characters, which do not bode well for Windows filenames!
Hope this helps someone, I know it's a somewhat hacky solution, but seeing as you never need (should have) colons in Windows filenames, it's fine to do and forget. Just remember that if you update the Google SDK you'll have to redo this.
I got the same issue and resolved it as below.
Open a cloud shell, and copy objects by using gsutil command.
gsutil -m cp -r gs://[some bucket]/[object] .
On the shell, zip those objects by using zip command.
zip [some file name].zip -r [some name of your specific folder]
On the shell, copy the zip file into GCS by using the gsutil command.
gsutil cp [some file name].zip gs://[some bucket]
On a Windows Command Prompt, copy the zip file from GCS by using the gsutil command.
gsutil cp gs://[some bucket]/[some file name].zip .
I wish this information helps someone.
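As a concrete illustration of that workaround (a sketch with placeholder bucket and folder names), the Cloud Shell and Windows steps might look like:
# In Cloud Shell: pull the objects, zip them up, and push the archive back to the bucket,
# so Windows never has to create files with ':' in their names.
gsutil -m cp -r gs://my-bucket/logs .
zip logs.zip -r logs
gsutil cp logs.zip gs://my-bucket
# Then, on the Windows Command Prompt: download just the zip file.
gsutil cp gs://my-bucket/logs.zip .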
This is also gsutil's way of saying file not found. The mention of URL is just confusing in the context of local files.
Be careful: the file path in this command is case-sensitive, so check whether the problem is simply a capitalization mismatch.

aws s3 cp clobbers files?

Um, not quite sure what to make of this.
I am trying to download 50 files from S3 to EC2 machine.
I ran:
for i in `seq -f "%05g" 51 101`; do (aws s3 cp ${S3_DIR}part-${i}.gz . &); done
A few minutes later, I checked with pgrep -f aws and found 50 processes running. Moreover, all the files had been created and had started to download (they are large files, so they were expected to take a while).
At the end, however, I got only a subset of files:
$ ls
part-00051.gz part-00055.gz part-00058.gz part-00068.gz part-00070.gz part-00074.gz part-00078.gz part-00081.gz part-00087.gz part-00091.gz part-00097.gz part-00099.gz part-00101.gz
part-00054.gz part-00056.gz part-00066.gz part-00069.gz part-00071.gz part-00075.gz part-00080.gz part-00084.gz part-00089.gz part-00096.gz part-00098.gz part-00100.gz
Where is the rest??
I did not see any errors, but I did see messages like this for the successfully completed files (the ones shown in the ls output above):
download: s3://my/path/part-00075.gz to ./part-00075.gz
If you are copying many objects to/from S3, you might try the --recursive option to instruct aws-cli to copy multiple objects:
aws s3 cp s3://bucket-name/ . --recursive --exclude "*" --include "part-*.gz"
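If you do want to keep a per-file loop, a more controlled variant (a sketch, not from the original answer) waits for every background copy to finish before checking the results:
# Sketch: fetch each part in the background, then wait for all of the copies to exit.
for i in $(seq -f "%05g" 51 101); do
    aws s3 cp "${S3_DIR}part-${i}.gz" . &
done
wait                    # block until every background aws s3 cp has finished
ls part-*.gz | wc -l    # quick sanity check on how many files actually arrived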