AWS get-object doesn't create local directories

I am trying to download a file from S3 compatible storage and I am running the following command:
aws s3api get-object --endpoint-url https://my.endpoint.url/ --bucket my_bucket --key mailouts/m3/ma2.png mailouts/m3/ma2.png
And I get an error:
[Errno 2] No such file or directory: u'mailouts/m3/ma2.png'
However, when I run the following command:
aws s3api get-object --endpoint-url https://my.endpoint.url/ --bucket my_bucket --key mailouts/m3/ma2.png ma2.png
I do end up with the ma2.png file in my current directory. So it looks like the AWS CLI cannot create the intermediate directories mailouts/m3.
Is there a way to force aws cli to make local directories?

Not when retrieving a single file. The sync command in the AWS S3 CLI will create directories in the destination as long as there is at least one file in the directory. You can use the --include and --exclude options to narrow down the files synced (even down to just ma2.png) if you do not want to sync the entire directory tree.
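For example, something like the following should copy only that one object and recreate mailouts/m3 under the current directory (a sketch reusing the bucket, endpoint, and key from the question; sync will still list the bucket, but only the matching key is copied):
aws s3 sync s3://my_bucket/ . --endpoint-url https://my.endpoint.url/ --exclude "*" --include "mailouts/m3/ma2.png"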

S3 buckets do not have directories/folders. When you have something like:
mailouts/m3/ma2.png
that is just a filename in your S3 bucket. If you want to save ma2.png in ./mailouts/m3, you have to parse the object name and create the intermediate folders/directories yourself.
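A minimal sketch of that approach in a shell, reusing the endpoint, bucket, and key from the question:
key=mailouts/m3/ma2.png
mkdir -p "$(dirname "$key")"   # create ./mailouts/m3 locally first
aws s3api get-object --endpoint-url https://my.endpoint.url/ --bucket my_bucket --key "$key" "$key"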

The easiest way is to use the aws s3 cp command; it will create the needed local folders for you.
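For example, reusing the endpoint and bucket from the question, this downloads the object and creates mailouts/m3 locally if it does not exist yet:
aws s3 cp --endpoint-url https://my.endpoint.url/ s3://my_bucket/mailouts/m3/ma2.png mailouts/m3/ma2.png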

Related

AWS CLI create a folder and upload files

I'm trying to create a folder in my AWS bucket and to upload all image files from local storage. Unfortunately I have tried all the commands given in the documentation, such as the ones below, but none of them are working.
aws s3 cp C:\mydocs\images s3://bucket.pictures --recursive --include ".jpeg"
aws s3api put-object --bucket bucket.images --key mykey dir-images/
I am also attaching a picture which illustrates the two commands that I want to perform, but from the backend with the help of the AWS CLI.
Could you please help me write the correct command in AWS CLI?
The following works for me on Windows and recursively copies all JPEG files:
aws s3 cp c:\mydocs\images\ s3://mybucket/images/ --recursive --exclude "*" --include "*.jpeg"
Note that you have to exclude all files first and then include the files of interest (*.jpeg). If you don't use --exclude "*", you'll get all files, regardless of extension.
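There is also no separate step needed to create the folder itself: S3 has no real directories, so the images/ prefix appears as soon as the first object is uploaded under it. If you want an empty folder placeholder to show up anyway, a zero-byte object whose key ends in a slash does the job (the bucket name here is just an example):
aws s3api put-object --bucket mybucket --key images/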

How can I download a specified file from an S3 bucket

I'm trying to download one file from my S3 bucket.
I'm trying this command:
aws s3 sync %inputS3path% %inputDatapath% --include "20211201-1500-euirluclprd01-olX8yf.1.gz"
and I have also tried:
aws s3 sync %inputS3path% %inputDatapath% --include "*20211201-1500-euirluclprd01-olX8yf.1*.gz"
but when the command runs, I get all of the files in the folder.
The folder looks like:
/2021/12/05
20211201-1500-euirluclprd01-olX8yf.1.gz
20211201-1505-euirluclprd01-olX8yf.1.gz
You can use aws s3 cp to copy a specific file. For example:
aws s3 cp s3://bucketname/path/file.gz .
Looking at your variables, you could probably use:
aws s3 cp %inputS3path%/20211201-1500-euirluclprd01-olX8yf.1.gz %inputDatapath%
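If you would rather keep using sync, the filters should also work once everything is excluded first, since --include on its own does not limit what gets copied (a sketch with the same variables):
aws s3 sync %inputS3path% %inputDatapath% --exclude "*" --include "20211201-1500-euirluclprd01-olX8yf.1.gz"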

AWS S3 File merge using CLI

I am trying to combine/merge the contents of all the files in an S3 bucket folder into a new file. The merge should be done in ascending order of the S3 files' Last Modified time.
I am able to do that manually with hard-coded file names, as follows:
(aws s3 cp s3://bucket1/file1 - && aws s3 cp s3://bucket1/file2 - && aws s3 cp s3://bucket1/file3 - ) | aws s3 cp - s3://bucket1/new-file
But now I want to change the CLI command so that the merge works for however many files exist in the folder, sorted by Last Modified. Ideally, the cp command should receive the list of all files that exist in the S3 bucket folder, sorted by Last Modified, and then merge them into a new file.
I appreciate everyone's help on this.
Here are some hints.
First, list the files in ascending order of Last Modified, which is the order you want to merge them in (sort_by sorts ascending by default):
aws s3api list-objects --bucket bucket1 --query "sort_by(Contents,&LastModified)"
Then you can feed those keys into the copy commands, along these lines:
aws s3api list-objects --bucket bucket1 --query "sort_by(Contents,&LastModified)" --output json | jq -r '.[].Key' | while read -r key
do
  echo "$key"
  # stream each object to stdout and append it to the local merge file
  aws s3 cp "s3://bucket1/$key" - >> new-file
done
aws s3 cp new-file s3://bucket1/new-file

AWS CLI Download list of S3 files

We have ~400,000 files in a private S3 bucket that are inbound/outbound call recordings. The files follow a naming pattern that lets me search for numbers, both inbound and outbound. Note these recordings are in the Glacier storage class.
Using AWS CLI, I can search through this bucket and grep the files I need out. What I'd like to do is now initiate an S3 restore job to expedited retrieval (so ~1-5 minute recovery time), and then maybe 30 minutes later run a command to download the files.
My efforts so far:
aws s3 ls s3://exetel-logs/ --recursive | grep .*042222222.* | cut -c 32-
Retrieves the keys of about 200 files. I am unsure how to proceed next, as aws s3 cp won't work for objects still in the Glacier storage class.
Cheers,
The AWS CLI has two separate commands for S3: s3 and s3api. s3 is a high-level abstraction with limited features, so for restoring files you'll have to use one of the commands available with s3api, passing a restore request that specifies how long the restored copy stays available (Days; 7 below is just an example) and which retrieval tier to use:
aws s3api restore-object --bucket exetel-logs --key your-key --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Expedited"}}'
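To kick off restores for all ~200 matched objects, the same call can be run in a loop over the keys from your grep (a sketch reusing the listing pipeline from the question; the Days value is again just an example):
aws s3 ls s3://exetel-logs/ --recursive | grep .*042222222.* | cut -c 32- | while read -r key
do
  aws s3api restore-object --bucket exetel-logs --key "$key" --restore-request '{"Days":7,"GlacierJobParameters":{"Tier":"Expedited"}}'
done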
If you afterwards want to copy the files, but want to make sure you only copy files whose Glacier restore has completed, you can use the following code snippet:
for key in $(aws s3api list-objects-v2 --bucket exetel-logs --query "Contents[?StorageClass=='GLACIER'].[Key]" --output text); do
  # print only the keys whose restore has already completed
  if [ "$(aws s3api head-object --bucket exetel-logs --key "${key}" --query "contains(Restore, 'ongoing-request=\"false\"')")" == "true" ]; then
    echo "${key}"
  fi
done
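If the goal is to download the restored recordings rather than just list them, the echo inside that loop can be replaced with a plain copy, which works again once the restore has finished (the destination here is simply the current directory):
aws s3 cp "s3://exetel-logs/${key}" .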
Have you considered using a high-level language wrapper for the AWS CLI? It will make these kinds of tasks easier to integrate into your workflows. I prefer the Python implementation (Boto 3); its documentation includes example code for downloading all the files from an S3 bucket.

How do I use the aws cli to set permissions on files in an S3 bucket?

I am new to the aws cli and I've spent a fair amount of time in the documentation but I can't figure out how to set permissions on files after I've uploaded them. So if I uploaded a file with:
aws s3 cp assets/js/d3-4.3.0.js s3://example.example.com/assets/js/
and didn't set access permissions, I need a way to set them. Is there an equivalent to chmod 644 in the aws cli?
And, for that matter, is there a way to view access permissions?
I know I could use the --acl public-read flag with aws s3 cp but if I didn't, can I set access without repeating the full copy command?
The awscli supports two groups of S3 actions: s3 and s3api.
You can use aws s3api put-object-acl to set the ACL permissions on an existing object.
The logic behind there being two sets of actions is as follows:
s3: high-level abstractions with file system-like features such as ls, cp, sync
s3api: one-to-one with the low-level S3 APIs such as put-object, head-bucket
In your case, the command to execute is:
aws s3api put-object-acl --bucket example.example.com --key assets/js/d3-4.3.0.js --acl public-read
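To answer the second part of the question, you can view the current permissions on an object with the matching read call:
aws s3api get-object-acl --bucket example.example.com --key assets/js/d3-4.3.0.js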