AWS CLI in Windows won't upload file to S3 bucket - amazon-web-services

Windows Server 2012 R2 with Python 2.7.10 and the AWS CLI tool installed. The following works:
aws s3 cp c:\a\a.txt s3://path/
I can upload that file without problem. What I want to do is upload a file from a mapped drive to an s3 bucket, so I tried this:
aws s3 cp s:\path\file s3://path/
and it works.
Now what I want to do, and cannot figure out, is how to not specify a single file but let it grab all the files, so I can schedule this to upload the contents of a directory to my S3 bucket. I tried this:
aws s3 cp "s:\path\..\..\" s3://path/ --recursive --include "201512"
and I get the error "TOO FEW ARGUMENTS".
Nearest I can guess, it's complaining that I'm not giving it a specific file to send up, but I don't want to do that; I want to automate all the things.
If someone could please shed some light on what I'm missing I would really appreciate it.
Thank you

In case this is useful for anyone else coming after me: Add some extra spaces between the source and target. I've been beating my head against running this command with every combination of single quotes, double quotes, slashes, etc:
aws s3 cp /home/<username>/folder/ s3://<bucketID>/<username>/archive/ --recursive --exclude "*" --include "*.csv"
And it would give me: "aws: error: too few arguments" Every. Single. Way. I. Tried.
So I finally noticed the --debug option in aws s3 cp help and ran it again this way:
aws s3 cp /home/<username>/folder/ s3://<bucketID>/<username>/archive/ --recursive --exclude "*" --include "*.csv" --debug
And this was the relevant debug line:
MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['s3', 'cp', 'home/<username>/folder\xc2\xa0s3://<bucketID>/<username>/archive/', '--recursive', '--exclude', '*', '--include', '*.csv', '--debug']
I have no idea where the \xc2\xa0 (a UTF-8 non-breaking space) came from in between source and target, but there it is! I updated the line to retype a couple of plain spaces and now it runs without errors:
aws s3 cp /home/<username>/folder/ s3://<bucketID>/<username>/archive/ --recursive --exclude "*" --include "*.csv"
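For anyone who wants to check for the same invisible character: \xc2\xa0 is the UTF-8 byte sequence for a non-breaking space, which often sneaks in when a command is copied from a web page or chat window. A quick way to check a saved script for it, assuming GNU grep with PCRE (-P) support (the file name is just an example):
# Flag any line containing a UTF-8 non-breaking space (bytes 0xC2 0xA0).
grep -nP '\xC2\xA0' upload_to_s3.sh
If it prints a match, retype the spaces on that line by hand.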

aws s3 cp "s:\path\..\..\" s3://path/ --recursive --include "201512"
TOO FEW ARGUMENTS
This is because, in your command, the closing double quote (") is escaped by the trailing backslash (\), so the local path (s:\path\..\..\) is not parsed correctly.
What you need to do is escape each backslash with a double backslash, i.e.:
aws s3 cp "s:\\path\\..\\..\\" s3://path/ --recursive --include "201512"

Alternatively, you can try 'mc', which ships as a single binary and is available for Windows in both 64-bit and 32-bit builds. 'mc' implements mirror, cp, resumable sessions, JSON-parseable output and more - https://github.com/minio/mc
64-bit from https://dl.minio.io/client/mc/release/windows-amd64/mc.exe
32-bit from https://dl.minio.io/client/mc/release/windows-386/mc.exe

Use aws s3 sync instead of aws s3 cp to copy the contents of a directory.
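For example, a sketch using the paths from the original question (the include pattern assumes the target files have 201512 somewhere in their names):
aws s3 sync s:\path\ s3://path/ --exclude "*" --include "*201512*"
sync also only transfers files that are new or have changed since the last run, which suits a scheduled job.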

I faced the same situation. Let me share two scenarios I tried with the same command.
Within bash
Please make sure you have an AWS profile in place (run aws configure). Also, make sure you use a proper proxy if applicable.
$aws s3 cp s3://bucket/directory/ /usr/home/folder/ --recursive --region us-east-1 --profile yaaagy
It worked.
Within a Perl script
$cmd = "aws s3 cp s3://bucket/directory/ /usr/home/folder/ --recursive --region us-east-1 --profile yaaagy";
I enclosed it within "" and it was successful. Let me know if this works out for you.

I ran into this same problem recently, and quiver's answer -- replacing single backslashes with double backslashes -- resolved the problem I was having.
Here's the Powershell code I used to address the problem, using the OP's original example:
# Notice how my path string contains a mixture of single- and double-backslashes
$folderPath = "c:\\a\a.txt"
echo "`$folderPath = $($folderPath)"
# Use the "Resolve-Path" cmdlet to generate a consistent path string.
$osFolderPath = (Resolve-Path $folderPath).Path
echo "`$osFolderPath = $($osFolderPath)"
# Escape backslashes in the path string.
$s3TargetPath = ($osFolderPath -replace '\\', "\\")
echo "`$s3TargetPath = $($s3TargetPath)"
# Now pass the escaped string to your AWS CLI command.
echo "AWS Command = aws s3 cp `"s3://path/`" `"$s3TargetPath`""

Related

Use the aws client to copy s3 files from a single directory only (non-recursively)

Consider an AWS bucket/key structure along these lines:
myBucket/dir1/file1
myBucket/dir1/file2
myBucket/dir1/dir2/dir2file1
myBucket/dir1/dir2/dir2file2
When using:
aws s3 cp --recursive s3://myBucket/dir1/ .
Then we will copy down dir2file[1,2] along with file[1,2]. How can I copy only the latter files and not the files under subdirectories?
Responding to a comment: I am not interested in putting an --exclude for every subdirectory, so this is not a duplicate of excluding directories from aws cp.
As far as I understand, you want to make sure that the files present in the current directory are copied, but anything in child directories should not be copied. I think you can use something like this:
aws s3 cp s3://myBucket/dir1/ . --recursive --exclude "*/*"
Here we are excluding any key that has a path separator after "dir1/".
You can exclude paths using the --exclude option, e.g.
aws s3 cp s3://myBucket/dir1/ . --recursive --exclude "dir1/dir2/*"
More options and examples can be found by running the AWS CLI help:
aws s3 cp help
There is no way to control the recursion depth while copying files using aws s3 cp, nor is it supported in aws s3 ls.
So, if you do not wish to use the --exclude or --include options, I suggest you:
Use the aws s3 ls command without the --recursive option to list files directly under a directory, extract only the file names from the output and save the names to a file. Refer to this post.
Then write a simple script to read the file names and execute aws s3 cp for each one (see the sketch below).
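A minimal sketch of that two-step approach, reusing the bucket and prefix from the question and assuming the key names contain no spaces:
# 1. List only the objects directly under the prefix (no --recursive),
#    skip the PRE (subdirectory) lines, and keep just the key names.
aws s3 ls s3://myBucket/dir1/ | awk '$NF !~ /\/$/ {print $NF}' > files.txt
# 2. Copy each listed file individually.
while IFS= read -r name; do
    aws s3 cp "s3://myBucket/dir1/${name}" .
done < files.txt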
Alternatively, you may use:
aws s3 cp s3://spaces/dir1/ . --recursive --exclude "*/*"

Selective file download in AWS CLI

I have files in an S3 bucket. I was trying to download files based on a date, like 08th Aug, 09th Aug, etc.
I used the following code, but it still downloads the entire bucket:
aws s3 cp s3://bucketname/ folder/file \
--profile pname \
--exclude \"*\" \
--recursive \
--include \"" + "2015-08-09" + "*\"
I am not sure how to achieve this. How can I download only the files for a selected date?
This command will copy all files starting with 2015-08-15:
aws s3 cp s3://BUCKET/ folder --exclude "*" --include "2015-08-15*" --recursive
If your goal is to synchronize a set of files without copying them twice, use the sync command:
aws s3 sync s3://BUCKET/ folder
That will copy all files that have been added or modified since the previous sync.
In fact, this is the equivalent of the above cp command:
aws s3 sync s3://BUCKET/ folder --exclude "*" --include "2015-08-15*"
References:
AWS CLI s3 sync command documentation
AWS CLI s3 cp command documentation
Bash command to copy all files for a specific date or month to the current folder:
aws s3 ls s3://bucketname/ | grep '2021-02' | awk '{print $4}' | xargs -I {} aws s3 cp s3://bucketname/{} folder
The command does the following:
Lists all the files under the bucket
Filters the files of 2021-02, i.e. all files from February 2021
Extracts only their names
Runs aws s3 cp on each of those files (via xargs)
In case your bucket is large, upwards of 10 to 20 gigs (this was true in my own personal use case), you can achieve the same goal by using sync in multiple terminal windows.
All the terminal sessions can use the same token, in case you need to generate a token for a prod environment.
$ aws s3 sync s3://bucket-name/sub-name/another-name folder-name-in-pwd/ \
    --exclude "*" --include "name_date1*" --profile UR_AC_SomeName
and in another terminal window (same pwd):
$ aws s3 sync s3://bucket-name/sub-name/another-name folder-name-in-pwd/ \
    --exclude "*" --include "name_date2*" --profile UR_AC_SomeName
and another two for "name_date3*" and "name_date4*"
Additionally, you can also do multiple excludes in the same sync command, as in:
$ aws s3 sync s3://bucket-name/sub-name/another-name my-local-path/ \
    --exclude="*.log/*" --exclude=img --exclude=".error" --exclude=tmp \
    --exclude="*.cache"
This Bash script will copy all files from one bucket to another, filtered by modified date, using the AWS CLI.
aws s3 ls <BCKT_NAME> --recursive | sort | grep "2020-08-*" | cut -b 32- > a.txt
Inside the Bash file:
while IFS= read -r line; do
    aws s3 cp s3://<SRC_BCKT>/${line} s3://<DEST_BCKT>/${line} --sse AES256
done < a.txt
The AWS CLI is really slow at this. I waited hours and nothing really happened. So I looked for alternatives.
https://github.com/peak/s5cmd worked great.
It supports globs, for example:
s5cmd -numworkers 30 cp 's3://logs-bucket/2022-03-30-19-*' .
and it is blazing fast, so you can work with buckets that have S3 access logs without much fuss.

How to search an Amazon S3 Bucket using Wildcards?

This Stack Overflow answer helped a lot. However, I want to search for all PDFs inside a given bucket.
In the console I click "None", start typing, type *.pdf, and press Enter.
Nothing happens. Is there a way to use wildcards or regular expressions to filter bucket search results via the online S3 GUI console?
As stated in a comment, Amazon's UI can only be used to search by prefix as per their own documentation:
http://docs.aws.amazon.com/AmazonS3/latest/UG/searching-for-objects-by-prefix.html
There are other methods of searching, but they require a bit of effort. Just to name two options: the AWS CLI, or Boto3 for Python.
I know this post is old, but it is high on Google's list for S3 searching and does not have an accepted answer. The other answer by Harish links to a dead site.
UPDATE 2020/03/03: The AWS link above has been removed. This is a link to a very similar topic that was as close as I could find. https://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html
AWS CLI search:
In the AWS Console, we can search objects within a single directory only, not across whole directory trees, and only by the prefix of the file name (an S3 search limitation).
The best way is to use the AWS CLI with the command below on Linux:
aws s3 ls s3://bucket_name/ --recursive | grep search_word | cut -c 32-
Searching for files matching a pattern:
aws s3 ls s3://bucket_name/ --recursive | grep '\.pdf$'
You can use the copy command with the --dryrun flag:
aws s3 cp s3://your-bucket/any-prefix/ . --recursive --exclude "*" --include "*.pdf" --dryrun
It would show all of the files that are PDFs.
If you use boto3 in Python it's quite easy to find the files. Replace 'bucket' with the name of the bucket.
import boto3

s3 = boto3.resource('s3')
bucket = s3.Bucket('bucket')
for obj in bucket.objects.all():
    if '.pdf' in obj.key:
        print(obj.key)
The CLI can do this; aws s3 only supports prefixes, but aws s3api supports arbitrary filtering. For s3 links that look like s3://company-bucket/category/obj-foo.pdf, s3://company-bucket/category/obj-bar.pdf, s3://company-bucket/category/baz.pdf, you can run
aws s3api list-objects --bucket "company-bucket" --prefix "category/" --query "Contents[?ends_with(Key, '.pdf')]"
or for a more general wildcard
aws s3api list-objects --bucket "company-bucket" --prefix "category/" --query "Contents[?contains(Key, 'foo')]"
or even
aws s3api list-objects --bucket "company-bucket" --prefix "category/obj" --query "Contents[?ends_with(Key, '.pdf') && contains(Key, 'ba')]"
The full query language is described in the JMESPath documentation.
The Java SDK documentation suggests it can be done:
https://docs.aws.amazon.com/AmazonS3/latest/dev/ListingKeysHierarchy.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/ListingObjectKeysUsingJava.html
Specifically, the listObjectsV2 call allows you to specify a prefix filter (via ListObjectsV2Request), e.g. "files/2020-01-02", so only keys starting with today's date are returned.
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/model/ListObjectsV2Result.html
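For comparison, and not specific to the Java SDK: the same prefix-only listing can be sketched with the CLI's s3api layer (the bucket and prefix here are placeholders); note the prefix is a literal string, not a wildcard pattern:
# List only keys that start with the given literal prefix.
aws s3api list-objects-v2 --bucket my-bucket --prefix "files/2020-01-02"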
My guess is that the files were uploaded from a Unix system and you're downloading to Windows, so s3cmd is unable to preserve file permissions, which don't apply on NTFS.
To search for files and grab them, try this from the target directory, or change ./ to your target:
for i in `s3cmd ls s3://bucket | grep "searchterm" | awk '{print $4}'`; do s3cmd sync --no-preserve $i ./; done
This works in WSL on Windows.
I have used this in one of my projects, but it's a bit hard-coded:
import subprocess

bucket = "Abcd"
# List the bucket's sub_dir via the CLI and keep only the .csv entries.
command = "aws s3 ls s3://" + bucket + "/sub_dir/ | grep '.csv'"
listofitems = subprocess.check_output(command, shell=True)
listofitems = listofitems.decode('utf-8')
print([item.split(" ")[-1] for item in listofitems.split("\n")[:-1]])

Mass Copy Files on Amazon AWS with CLI

This is probably easy but it's really stumping me. I literally have about 9 hours experience with Amazon AWS and CLI.
I have a directory
BDp-Archive/item/
on my S3 and I want to copy the text files in that directory into its subdirectory called
BDp-Archive/item/txt/
My attempted command was:
aws s3 mv s3://Bdp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/ s3://BDp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/txt/ --include "*.txt"
This is throwing the error:
A client error (NoSuchKey) occurred when calling the HeadObject operation: Key "00009e98-3e0f-402e-9d12-7aec8e32b783" does not exist
Completed 1 part(s) with ... file(s) remaining
I think the problem is that you need to use the --recursive switch, since by default the mv command only applies to a single object (much like rm). You also need --exclude "*" before --include "*.txt", because all files are included by default, so an --include on its own has no effect. Try:
aws s3 mv s3://Bdp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/ s3://BDp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/txt/ --exclude "*" --include "*.txt" --recursive
I needed to configure the region of my bucket (or specify it as part of the CLI command):
aws s3 cp --region <region> <from> <to>
You need to configure your access key and secret key; try:
aws configure
For more options, see: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-installing-credentials
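For reference, an aws configure session looks roughly like this (the values shown are AWS's documentation placeholders); it stores the keys and default region in ~/.aws/credentials and ~/.aws/config:
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: us-east-1
Default output format [None]: json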

How to include and copy files that are in the current directory to S3 (and not recursively)

I have some files that I want to copy to s3.
Rather than doing one call per file, I want to include them all in one single call (to be as efficient as possible).
However, I only seem to get it to work if I add the --recursive flag, which makes it look in all child directories (all the files I want are in the current directory only).
So this is the command I have now, which works:
aws s3 cp --dryrun . mybucket --recursive --exclude * --include *.jpg
but ideally I would like to remove the --recursive to stop it traversing,
e.g. something like this (which does not work):
aws s3 cp --dryrun . mybucket --exclude * --include *.jpg
(I have simplified the example, in my script I have several different include patterns)
AWS CLI's S3 wildcard support is a bit primitive, but you could use multiple --exclude options to accomplish this. Note: the order of includes and excludes is important.
aws s3 cp --dryrun . s3://mybucket --recursive --exclude "*" --include "*.jpg" --exclude "*/*"
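To illustrate why the order matters: filters that appear later in the command take precedence over earlier ones. In a hypothetical reordering of the same filters (same placeholder bucket), the trailing --include would re-admit .jpg files inside subdirectories, defeating the purpose:
# Reordered filters: the final --include "*.jpg" overrides the earlier --exclude "*/*",
# so nested .jpg files would be uploaded as well.
aws s3 cp --dryrun . s3://mybucket --recursive --exclude "*" --exclude "*/*" --include "*.jpg"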
Try the command:
aws s3 cp --dryrun . s3://mybucket --recursive --exclude "*/"
Hope it helps.
I tried the suggested answers and could not get aws to skip nested folders. I saw some weird output about calculating size, and 0-size objects, despite using the exclude flag.
I eventually gave up on the --recursive flag and used bash to perform a single s3 upload for each file matched. Remove --dryrun once you're ready to roll!
for i in *.{jpg,jpeg}; do aws s3 cp --dryrun "${i}" s3://your-bucket/your-folder/"${i}"; done
I would suggest going for a utility called s4cmd, which provides Unix-like file system operations and also allows the use of wildcards:
https://github.com/bloomreach/s4cmd