AWS CLI create a folder and upload files

I'm trying to create a folder in my AWS bucket and to upload all image files from local storage. Unfortunately, I have tried the commands given in the documentation, such as the ones below, but none of them work.
aws s3 cp C:\mydocs\images s3://bucket.pictures --recursive --include ".jpeg"
aws s3api put-object --bucket bucket.images --key mykey dir-images/
I'm also attaching a picture which illustrates the two commands that I want to perform, but from the backend with the help of the AWS CLI.
Could you please help me write the correct command in AWS CLI?

The following works for me on Windows and recursively copies all JPEG files:
aws s3 cp c:\mydocs\images\ s3://mybucket/images/ --recursive --exclude "*" --include "*.jpeg"
Note that you have to exclude all files and then include the files of interest (*.jpeg). If you don't use --exclude "*", you'll get all files, regardless of extension.
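As for the "create a folder" part of the question: S3 has no real folders, so there is nothing to create before uploading; copying objects under the images/ prefix, as above, is enough to make the folder appear in the console. If you specifically want an empty folder placeholder, a zero-byte object whose key ends in a slash does the job. A minimal sketch, reusing the bucket name and prefix from the command above:
aws s3api put-object --bucket mybucket --key images/
The S3 console displays this empty images/ object as a folder.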

Related

How can I download a specific file from an S3 bucket

I'm trying to download one file from my s3 bucket
I'm trying this command:
aws s3 sync %inputS3path% %inputDatapath% --include "20211201-1500-euirluclprd01-olX8yf.1.gz"
and I have also tried:
aws s3 sync %inputS3path% %inputDatapath% --include "*20211201-1500-euirluclprd01-olX8yf.1*.gz"
but when the command executes, I get all of the files that the folder includes.
The folder looks like:
/2021/12/05
20211201-1500-euirluclprd01-olX8yf.1.gz
20211201-1505-euirluclprd01-olX8yf.1.gz
You can use aws s3 cp to copy a specific file. For example:
aws s3 cp s3://bucketname/path/file.gz .
Looking at your variables, you could probably use:
aws s3 cp %inputS3path%/20211201-1500-euirluclprd01-olX8yf.1.gz %inputDatapath%
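As for why the sync attempts copied everything: --include on its own has no effect, because sync already includes every file by default; --include only re-includes files after a preceding --exclude. If you prefer to stick with sync, a sketch using the same variables:
aws s3 sync %inputS3path% %inputDatapath% --exclude "*" --include "20211201-1500-euirluclprd01-olX8yf.1.gz"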

Use the AWS CLI to copy S3 files from a single directory only (non-recursively)

Consider an aws bucket/key structure along these lines
myBucket/dir1/file1
myBucket/dir1/file2
myBucket/dir1/dir2/dir2file1
myBucket/dir1/dir2/dir2file2
When using:
aws s3 cp --recursive s3://myBucket/dir1/ .
Then we will copy down dir2file[1,2] along with file[1,2]. How can we copy only the latter files and not the files under subdirectories?
Responding to a comment: I am not interested in putting an --exclude for every subdirectory, so this is not a duplicate of excluding directories from aws cp.
As far as I understand, you want to make sure that the files present in the current directory are copied, but that anything in child directories is not. I think you can use something like this:
aws s3 cp s3://myBucket/dir1/ . --recursive --exclude "*/*"
Here we are excluding files which will have a path separator after "dir1".
You can exclude paths using the --exclude option, e.g.
aws s3 cp s3://myBucket/dir1/ . --recursive --exclude "dir1/dir2/*"
More options and examples can be found by using the AWS CLI help command:
aws s3 cp help
There is no way to control the recursion depth while copying files using aws s3 cp, nor is it supported in aws s3 ls.
So, if you do not wish to use the --exclude or --include options, I suggest you:
Use the aws s3 ls command without the --recursive option to list files directly under a directory, extract only the file names from the output, and save the names to a file. Refer to this post.
Then write a simple script to read the file names and execute aws s3 cp for each one, as in the sketch below.
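A minimal sketch of that script, assuming none of the file names contain spaces:
# list only the file names directly under dir1/ (assumes names without spaces)
aws s3 ls s3://myBucket/dir1/ | awk 'NF==4 {print $4}' > files.txt
while read -r name
do
  aws s3 cp "s3://myBucket/dir1/$name" .
done < files.txt
The awk filter keeps only the four-column file lines (date, time, size, name) and drops the PRE lines that s3 ls prints for sub-prefixes.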
Alternatively, you may use:
aws s3 cp s3://myBucket/dir1/ . --recursive --exclude "*/*"

Copy list of files from S3 bucket to S3 bucket

Is there a way I could copy a list of files from one S3 bucket to another? Both S3 buckets are in the same AWS account. I am able to copy a single file at a time using the aws cli command:
aws s3 cp s3://source-bucket/file.txt s3://target-bucket/file.txt
However, I have 1000+ files to copy. I do not want to copy all of the files in the source bucket, so I am not able to use the sync command. Is there a way to pass a file with the list of file names that need to be copied, to automate this process?
You can use the --exclude and --include filters, together with the --recursive flag, in the s3 cp command to copy multiple files.
The following is an example:
aws s3 cp /tmp/foo/ s3://bucket/ --recursive --exclude "*" --include "*.jpg"
For more details, see the aws s3 cp documentation.
Approaching this problem from the Python side, you can run a Python script that does it for you. Since you have a lot of files, it might take a while, but it should get the job done. Save the following code in a file with a .py extension and run it. You might need to run pip install boto3 beforehand in your terminal if you don't already have it.
import boto3

s3 = boto3.resource('s3')
mybucket = s3.Bucket('oldBucket')
list_of_files = ['file1.txt', 'file2.txt']
for obj in mybucket.objects.all():
    # Copy only the objects whose keys are in the list, by reading each
    # object from the old bucket and writing it to the new bucket.
    if obj.key in list_of_files:
        s3.Object('newBucket', obj.key).put(Body=obj.get()["Body"].read())
If you want to use the AWS CLI, you could use cp in a loop over a file containing the names of the files you want to copy:
while read -r FNAME
do
  aws s3 cp "s3://source-bucket/$FNAME" "s3://target-bucket/$FNAME"
done < file_list.csv
I've done this for a few hundred files. It's not efficient because you have to make a request for each file.
A better way would be to use the --include argument multiple times in one cp line. If you could generate all those arguments in the shell from a list of files you would effectively have
aws s3 cp s3://source-bucket/ s3://target-bucket/ --recursive --exclude "*" --include "somefile.txt" --include "someotherfile.jpg" --include "another.json" ...
I'll let someone more skilled figure out how to script that.
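One way to script that, as a minimal sketch: it assumes file_list.csv holds one plain file name per line, with no spaces, commas, or wildcard characters in the names.
# turn each line of file_list.csv into an --include argument
includes=$(sed 's/^/--include /' file_list.csv)
# leave $includes unquoted so word splitting produces the repeated flags
aws s3 cp s3://source-bucket/ s3://target-bucket/ --recursive --exclude "*" $includes
For very long lists you may still hit the shell's argument-length limit, in which case batching the includes (or falling back to the loop above) is the workaround.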

AWS get-object doesn't create local directories

I am trying to download a file from S3 compatible storage and I am running the following command:
aws s3api get-object --endpoint-url https://my.endpoint.url/ --bucket my_bucket --key mailouts/m3/ma2.png mailouts/m3/ma2.png
And I get an error:
[Errno 2] No such file or directory: u'mailouts/m3/ma2.png'
However, when I run the following command:
aws s3api get-object --endpoint-url https://my.endpoint.url/ --bucket my_bucket --key mailouts/m3/ma2.png ma2.png
I do end up with the ma2.png file in my current directory. So it looks like the AWS CLI cannot create the intermediate directories mailouts/m3.
Is there a way to force aws cli to make local directories?
Not when retrieving a single file. The sync command in the AWS S3 CLI will create directories in the destination as long as there is at least one file in the directory. You can use the --include and --exclude options to narrow down the files synced (even down to just ma2.png) if you do not want to sync the entire directory tree.
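For example, something along these lines should create mailouts/m3 locally and download just that one file (a sketch reusing the bucket, key, and endpoint from the question):
aws s3 sync s3://my_bucket/mailouts/m3/ mailouts/m3/ --exclude "*" --include "ma2.png" --endpoint-url https://my.endpoint.url/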
S3 buckets do not have directories/folders. When you have something like:
mailouts/m3/ma2.png
that is just a filename in your S3 bucket. If you want to save ma2.png in ./mailouts/m3, you have to parse the object name and create the intermediate folders/directories yourself.
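A minimal sketch of doing that by hand, reusing the endpoint and key from the question:
mkdir -p mailouts/m3
aws s3api get-object --endpoint-url https://my.endpoint.url/ --bucket my_bucket --key mailouts/m3/ma2.png mailouts/m3/ma2.png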
The best way is to use the aws s3 cp command; it will create the needed folders.
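For example, reusing the bucket, key, and endpoint from the question, something like this should create mailouts/m3 locally as part of the download:
aws s3 cp s3://my_bucket/mailouts/m3/ma2.png mailouts/m3/ma2.png --endpoint-url https://my.endpoint.url/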

Uploading all files of a certain extension type

I'm trying to upload all files of type .flv to an S3 bucket using the AWS CLI from a Windows server 2008 command line.
I do this:
aws s3 sync . s3://MyBucket --exclude '*.png'
And it begins uploading .png files instead.
I'm trying to follow the documentation and it gives an example that reads:
Local directory contains 3 files:
MyFile1.txt
MyFile2.rtf
MyFile88.txt
aws s3 sync . s3://MyBucket/MyFolder --exclude '*.txt'
upload: MyFile2.rtf to s3://MyBucket/MyFolder/MyFile2.rtf
So what am I doing wrong?
Use:
aws s3 sync . s3://MyBucket/ --exclude "*" --include "*.flv"
It excludes all files, then includes .flv files. The order of parameters is important.
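To see why the order matters: filters are applied in sequence and later filters take precedence, so reversing them would exclude everything, including the .flv files. For example, something like this would upload nothing:
aws s3 sync . s3://MyBucket/ --include "*.flv" --exclude "*"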
You can also use:
aws s3 cp . s3://MyBucket/ --recursive --exclude "*" --include "*.flv"
The difference is that sync will not re-copy a file that already exists in the destination.
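With either command, adding the --dryrun flag is a safe way to preview which .flv files would be uploaded before actually running it, e.g.:
aws s3 sync . s3://MyBucket/ --exclude "*" --include "*.flv" --dryrun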