Copy file from GCP Instance to GCP bucket - google-cloud-platform

I followed the gsutil instructions to copy a local file on my GCP Linux instance up to my bucket.
I run the command from the directory where the file is:
gsutil -m cp -r gs://bucket_name/folder1/folder2 filename
I get these errors:
CommandException: Destination URL must name a directory, bucket, or bucket
subdirectory for the multiple source form of the cp command.
CommandException: Destination URL must name a directory, bucket, or bucket
subdirectory for the multiple source form of the cp command.
CommandException: 2 files/objects could not be transferred.
thanks

Provided that filename is a local file that you want to copy to Cloud Storage, use this command:
gsutil -m cp filename gs://bucket_name/folder1/folder2
The -r command line option means recursive copy; you would not use it when specifying a single file. Your source and destination parameters were also reversed.
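For contrast, -r is what you would use to upload a whole directory rather than a single file; a quick sketch, assuming a hypothetical local directory named localdir:
gsutil -m cp -r localdir gs://bucket_name/folder1/folder2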

Please modify your command as below.
gsutil -m cp filename gs://bucket_name/folder1/folder2
It should work.

Related

aws s3 CLI, unable to copy entire directory structure

I have the following directory structure:
first/
  second/
    file.txt
If I do aws s3 cp first s3://bucket-name --recursive, the file copied to S3 has the path bucket-name/second/file.txt, not bucket-name/first/second/file.txt.
Why doesn't it behave like the cp command on Linux, and how can I achieve the latter?
Thanks in advance
Use:
aws s3 cp first s3://bucket-name/first/ --recursive
This mimics normal Linux behaviour, such as:
cp -R first third
Both will result in the contents of first being put into the target directory. Neither command creates a first directory.
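With the layout from the question, the explicit first/ prefix in the destination gives you the key you were expecting; a quick sketch:
aws s3 cp first s3://bucket-name/first/ --recursive
# uploads first/second/file.txt as s3://bucket-name/first/second/file.txt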

How to get files to copy from S3 bucket

Need some help with the cp command in the AWS CLI. I am trying to copy files from an S3 bucket to a local folder. The command I used seems to have run successfully in PowerShell, but the folder is still empty.
Command:
aws s3 cp s3://<my bucket path> <my local destination> --exclude "*" --include "*-20201023*" --recursive --dryrun
The --dryrun parameter prevents the command from actually copying anything; it just shows you what would happen. Try removing that parameter and running the command again.
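For example, the same command with --dryrun removed (keeping the placeholder paths from the question) should actually perform the copy:
aws s3 cp s3://<my bucket path> <my local destination> --exclude "*" --include "*-20201023*" --recursive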

How to copy multiple files from local to S3?

I am trying to upload multiple files from my local machine to an AWS S3 bucket.
I am able to use aws s3 cp to copy files one by one,
but I need to upload multiple, though not all, i.e. selected files to the same S3 folder.
Is it possible to do this in a single AWS CLI call, and if so, how?
Eg -
aws s3 cp test.txt s3://mybucket/test.txt
Reference -
https://docs.aws.amazon.com/cli/latest/reference/s3/cp.html
If you scroll down the documentation link you provided to the section entitled "Recursively copying local files to S3", you will see the following:
When passed with the parameter --recursive, the following cp command recursively copies all files under a specified directory to a specified bucket and prefix while excluding some files by using an --exclude parameter. In this example, the directory myDir has the files test1.txt and test2.jpg
So, assuming you wanted to copy all .txt files in some subfolder to the same bucket in S3, you could try something like:
aws s3 cp yourSubFolder s3://mybucket/ --recursive
If there are any other files in this subfolder, you need to add the --exclude and --include parameters (otherwise all files will be uploaded):
aws s3 cp yourSubFolder s3://mybucket/ --recursive --exclude "*" --include "*.txt"
If you're doing this from bash, then you can use this pattern as well:
for f in *.png; do aws s3 cp "$f" s3://my/dest; done
You would of course customize *.png to be your glob pattern, and the s3 destination.
If you have a weird set of files, you can do something like put their names in a text file, call it filenames.txt, and then:
for f in $(cat filenames.txt); do aws s3 cp "$f" s3://my/dest; done
aws s3 cp <your directory path> s3://<your bucket name>/ --recursive --exclude "*.jpg" --include "*.log"

Trying to copy one file with Amazon S3 CLI

I made a folder with 3 .jpg files in it to test. This folder is called c:\Work\jpg.
I am trying to upload it to a bucket with this command:
aws s3 cp . s3://{bucket}/Test
I get the following every time:
[Errno 2] No such file or directory: "C:\Work\jpg\".
Obviously, it correctly translated the current folder "." into the correct folder, but then it says it doesn't exist?!?
Any help out there to simply copy 3 files?
Are you confusing aws s3 sync with aws s3 cp? For copy, you need to specify the source file. The destination can be the current directory.
aws s3 cp test.txt s3://mybucket/test2.txt
Ensure that your path is correctly written.
Remember to add the --recursive option, because it is a folder:
aws s3 cp ./ s3://{bucket}/Test --recursive

How to use "aws s3 cp" to output a directory or wildcard to stdout

I have been able to use --recursive successfully for multiple files, and the "-" destination for stdout, but I cannot seem to use them together. For example:
This works:
aws s3 cp s3://mybucket-ed/test/ . --recursive
download: s3://mybucket-ed/test/upload-0 to ./upload-0
download: s3://mybucket-ed/test/tower.json to ./tower.json
And this works:
aws s3 cp s3://mybucket-ed/test/upload-0 -
...
upload-0 file contents
...
But this returns nothing:
aws s3 cp s3://mybucket-ed/test/ - --recursive
Any suggestions short of listing out the directory contents and doing individual cp commands for each file? Note that I just need all of the files in the S3 directory sent out to stdout (and not necessarily the recursive option).
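If nothing better turns up, one fallback is essentially the loop the question hopes to avoid; a sketch assuming the mybucket-ed/test/ prefix from the examples above:
for key in $(aws s3 ls s3://mybucket-ed/test/ | awk '{print $4}'); do
  # stream each listed object to stdout via the "-" destination
  aws s3 cp "s3://mybucket-ed/test/$key" -
done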