AWS CLI moving file with wildcard (asterisk) in path - amazon-web-services

I am attempting to move a file from one S3 location to another, using an activity in an AWS Data Pipeline.
The command I am using is:
aws s3 mv s3://foobar/Tagger/out//*/lastImage.txt s3://foobar/Tagger/testInput/lastImage.txt
But I receive the following error:
A client error (404) occurred when calling the HeadObject operation: Key "Tagger/out//*/lastImage.txt" does not exist
But if I replace the "*" with the specific directory name, it works. The problem is I won't always know the name of the directory, so I was hoping I could use the "*" as a wildcard.

Wildcards in the AWS S3 CLI only work when using the --recursive flag.
So this should work for you:
aws s3 mv s3://foobar/Tagger/out/ s3://foobar/Tagger/testInput/ --recursive --exclude "*" --include "*/lastImage.txt"
Unfortunately, this will recreate the entire directory structure in your target location, and I'm not immediately sure that can be solved by just using the AWS CLI.
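One possible workaround (an untested sketch, assuming a bash shell with the AWS CLI already configured for this bucket) is to list the keys under the prefix, pick out the one that matches, and move that single object so the directory structure is not carried over:
# List all keys under the prefix and keep the one ending in /lastImage.txt (assumes the key contains no spaces)
key=$(aws s3 ls s3://foobar/Tagger/out/ --recursive | awk '{print $4}' | grep '/lastImage.txt$' | head -n 1)
aws s3 mv "s3://foobar/$key" s3://foobar/Tagger/testInput/lastImage.txt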

Related

awscli s3 sync wildcards

I'm trying to sync all files in a directory that start with "model.ckpt" to an S3 bucket path, like this:
aws s3 sync ./model.ckpt* $S3_CKPT_PATH
But I'm getting the error:
Unknown options: ./model.ckpt-0.meta,<my S3_CKPT_PATH path>
However, aws s3 sync . $S3_CKPT_PATH works, but gives me a lot of additional files I don't want.
Anybody know how I can do this?
When using aws s3 sync, all files in a folder are included.
If you wish to specify wildcards, you will need to Use Exclude and Include Filters.
For example:
aws s3 sync mydir s3://bucket/folder/ --exclude "*" --include "model.ckpt*"
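Applied to the paths in the question (assuming $S3_CKPT_PATH is already set as in the original command), that becomes:
aws s3 sync . $S3_CKPT_PATH --exclude "*" --include "model.ckpt*"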

AWS S3 Bucket endpoint

I have created a bucket and trying to use from an application and it is giving the following error:
"error: S3ServiceException:The bucket you are attempting to access must be addressed using the specified endpoint."
I am using this format: s3://bucketname. I know the format is not an issue because I am able to use this format for another public bucket. I think the permissions on my bucket may be an issue but I am not sure.
Can someone please help? Thank you in advance.
Maybe this can help you a bit. I used this command to copy my images to a bucket on S3 from the Linux command line. Please note that I also use a trailing /.
# these commands are bidirectional (swap source and target to copy the other way)
# . refers to the current directory you are in
# bucket name and region are mandatory
aws s3 cp . s3://bucketname/foldername/ --recursive --include "*" --region ap-southeast-1
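The endpoint error in the question usually means the request is going to the wrong region for that bucket. One way to check (a sketch, assuming the bucket is called bucketname) is to ask S3 where the bucket lives and then pass that region explicitly:
# a null LocationConstraint means us-east-1
aws s3api get-bucket-location --bucket bucketname
aws s3 cp . s3://bucketname/foldername/ --recursive --region <region-returned-above>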

AWS CLI in Windows won't upload file to S3 bucket

Windows Server 2012 R2 with Python 2.7.10 and the AWS CLI tool installed. The following works:
aws s3 cp c:\a\a.txt s3://path/
I can upload that file without problem. What I want to do is upload a file from a mapped drive to an s3 bucket, so I tried this:
aws s3 cp s:\path\file s3://path/
and it works.
Now what I want to do, and cannot figure out, is how to avoid naming a single file and instead let it grab all files, so I can schedule this to upload the contents of a directory to my S3 bucket. I tried this:
aws s3 cp "s:\path\..\..\" s3://path/ --recursive --include "201512"
and I get this error "TOO FEW ARGUMENTS"
My nearest guess is that it's complaining because I'm not giving it a specific file to send up, but I don't want to do that; I want to automate all things.
If someone could please shed some light on what I'm missing I would really appreciate it.
Thank you
In case this is useful for anyone else coming after me: add some extra spaces between the source and target. I had been beating my head against running this command with every combination of single quotes, double quotes, slashes, etc.:
aws s3 cp /home/<username>/folder/ s3://<bucketID>/<username>/archive/ --recursive --exclude "*" --include "*.csv"
And it would give me: "aws: error: too few arguments" Every. Single. Way. I. Tried.
So I finally saw the --debug option in aws s3 cp help
and ran it again this way:
aws s3 cp /home/<username>/folder/ s3://<bucketID>/<username>/archive/ --recursive --exclude "*" --include "*.csv" --debug
And this was the relevant debug line:
MainThread - awscli.clidriver - DEBUG - Arguments entered to CLI: ['s3', 'cp', 'home/<username>/folder\xc2\xa0s3://<bucketID>/<username>/archive/', '--recursive', '--exclude', '*', '--include', '*.csv', '--debug']
I had no idea where \xc2\xa0 came from in between source and target, but there it is! (\xc2\xa0 is the UTF-8 encoding of a non-breaking space, which the shell treats as part of the argument rather than as a separator.) Updated the line to add a couple of extra spaces and now it runs without errors:
aws s3 cp /home/<username>/folder/ s3://<bucketID>/<username>/archive/ --recursive --exclude "*" --include "*.csv"
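If you suspect the same hidden character, one way to confirm it (a sketch, assuming GNU grep with PCRE support and that the command lives in a script file named, say, upload.sh) is to search for the UTF-8 non-breaking space bytes:
grep -nP '\xC2\xA0' upload.sh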
aws s3 cp "s:\path\..\..\" s3://path/ --recursive --include "201512"
TOO FEW ARGUMENTS
This is because, in your command, the closing double quote (") is escaped by the trailing backslash (\), so the local path (s:\path\..\..\) is not parsed correctly.
What you need to do is to escape backslash with double backslashes, i.e. :
aws s3 cp "s:\\path\\..\\..\\" s3://path/ --recursive --include "201512"
Alternatively, you can try 'mc', which ships as a single binary and is available for Windows in both 64-bit and 32-bit builds. 'mc' implements mirror, cp, resumable sessions, JSON-parseable output and more - https://github.com/minio/mc
64-bit from https://dl.minio.io/client/mc/release/windows-amd64/mc.exe
32-bit from https://dl.minio.io/client/mc/release/windows-386/mc.exe
Use aws s3 sync instead of aws s3 cp to copy the contents of a directory.
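For example, applied to the mapped drive in the question (an untested sketch, assuming the files you want start with 201512):
aws s3 sync "s:\path" s3://path/ --exclude "*" --include "201512*"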
I faced the same situation. Let me share two scenarios I tried to check the same code.
Within bash
Please make sure you have an AWS profile in place (use $aws configure). Also, make sure you use a proper proxy if applicable.
$aws s3 cp s3://bucket/directory/ /usr/home/folder/ --recursive --region us-east-1 --profile yaaagy
it worked.
Within a perl script
$cmd="$aws s3 cp s3://bucket/directory/ /usr/home/folder/ --recursive --region us-east-1 --profile yaaagy";
I enclosed it within "" and it was successful. Let me know if this works out for you.
I ran into this same problem recently, and quiver's answer -- replacing single backslashes with double backslashes -- resolved the problem I was having.
Here's the Powershell code I used to address the problem, using the OP's original example:
# Notice how my path string contains a mixture of single- and double-backslashes
$folderPath = "c:\\a\a.txt"
echo "`$folderPath = $($folderPath)"
# Use the "Resolve-Path" cmdlet to generate a consistent path string.
$osFolderPath = (Resolve-Path $folderPath).Path
echo "`$osFolderPath = $($osFolderPath)"
# Escape backslashes in the path string.
$s3TargetPath = ($osFolderPath -replace '\\', "\\")
echo "`$s3TargetPath = $($s3TargetPath)"
# Now pass the escaped string to your AWS CLI command.
echo "AWS Command = aws s3 cp `"s3://path/`" `"$s3TargetPath`""

How to move files from Amazon EC2 to S3 bucket using command line

In my amazon EC2 instance, I have a folder named uploads. In this folder I have 1000 images. Now I want to copy all images to my new S3 bucket. How can I do this?
First option: s3cmd
Use s3cmd
s3cmd get s3://AWS_S3_Bucket/dir/file
Take a look at this s3cmd documentation
If you are on Linux, run this on the command line:
sudo apt-get install s3cmd
or on CentOS/Fedora:
yum install s3cmd
Example of usage:
s3cmd put my.file s3://pactsRamun/folderExample/fileExample
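Applied to the uploads folder from the question (a sketch, assuming s3cmd has already been configured with s3cmd --configure and the target bucket is named my-bucket), you could also sync the whole folder:
s3cmd sync uploads/ s3://my-bucket/uploads/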
Second option
Using the CLI from Amazon
Update
As @tedder42 said in the comments, instead of using cp, use sync.
Take a look at the following syntax:
aws s3 sync <source> <target> [--options]
Example:
aws s3 sync . s3://my-bucket/MyFolder
More information and examples available at Managing Objects Using High-Level s3 Commands with the AWS Command Line Interface
aws s3 sync your-dir-name s3://your-s3-bucket-name/folder-name
Important: This will copy each item in your named directory into the s3 bucket folder you selected. This will not copy your directory as a whole.
Or, for one selected file, use aws s3 cp (sync operates on directories, not on single files):
aws s3 cp your-dir-name/file-name s3://your-s3-bucket-name/folder-name/file-name
Or you can sync the current directory (.) to select everything. Note that this will copy everything in that directory as a whole into your S3 bucket folder.
aws s3 sync . s3://your-s3-bucket-name/folder-name
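As noted above, sync copies the contents of the directory, not the directory itself. If you want the directory name to appear in the bucket, include it in the destination prefix (a sketch using the same placeholder names):
aws s3 sync your-dir-name s3://your-s3-bucket-name/folder-name/your-dir-name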
To copy from EC2 to S3, use the code below in the command line of the EC2 instance.
First, you have to give an IAM role with full S3 access to your EC2 instance.
aws s3 cp Your_Ec2_Folder s3://Your_S3_bucket/Your_folder --recursive
Also note that when the AWS CLI syncs with S3 it is multithreaded and uploads multiple parts of a file at one time. The number of threads, however, is not configurable at this time.
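(In more recent versions of the AWS CLI the transfer concurrency can be tuned via the s3 configuration settings; a sketch, assuming you want 20 parallel requests for the default profile:)
aws configure set default.s3.max_concurrent_requests 20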
aws s3 mv /home/inbound/ s3://test/ --recursive --region us-west-2
This can be done very simply. Follow the following steps:
Open AWS EC2 in the console.
Select the instance and navigate to Actions.
Select Instance Settings and select Attach/Replace IAM Role.
When this is done, connect to the AWS instance and the rest will be done via the following CLI commands:
aws s3 cp filelocation/filename s3://bucketname
Hence you don't need to install anything or make any extra effort.
Please note that the file location refers to the local path, and bucketname is the name of your bucket.
Also note: This is possible if your instance and S3 bucket are in the same account.
Cheers.
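Once the role is attached, a quick way to confirm that the instance actually picked it up (a sketch, assuming the AWS CLI is installed on the instance) is:
aws sts get-caller-identity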
We do have a dryrun feature available for testing.
To begin with, I would assign the EC2 instance a role that is able to read/write to S3.
SSH into the instance and perform the following:
vi tmp1.txt
aws s3 mv ./ s3://bucketname-bucketurl.com/ --dryrun
If this works, then all you have to do is create a script to upload all files with a specific pattern from this folder to the S3 bucket.
I have written the following command in my script to move files older than 2 minutes from the current directory to the bucket/folder:
cd dir; ls . -rt | xargs -I FILES find FILES -maxdepth 1 -name '*.txt' -mmin +2 -exec aws s3 mv '{}' s3://bucketurl.com \;
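The ls/xargs step is not strictly needed here; find can do the selection on its own. A simplified sketch of the same idea (assuming the same directory and bucket as above):
find dir -maxdepth 1 -name '*.txt' -mmin +2 -exec aws s3 mv '{}' s3://bucketurl.com/ \;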

Mass Copy Files on Amazon AWS with CLI

This is probably easy but it's really stumping me. I literally have about 9 hours experience with Amazon AWS and CLI.
I have a directory
BDp-Archive/item/
on my S3 and I want to copy the text files in that directory into its subdirectory called
BDp-Archive/item/txt/
My attempted command was:
aws s3 mv s3://Bdp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/ s3://BDp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/txt/ --include "*.txt"
This is throwing the error:
A client error (NoSuchKey) occurred when calling the HeadObject operation: Key "00009e98-3e0f-402e-9d12-7aec8e32b783" does not exist
Completed 1 part(s) with ... file(s) remaining
I think the problem is that you need to use the --recursive switch, since by default the mv command only applies to a single object (much like other commands such as cp and rm). Try:
aws s3 mv s3://Bdp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/ s3://BDp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/txt/ --include "*.txt" --recursive
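Note that --include on its own does not narrow the selection, because everything is included by default and filters are applied in order. To move only the .txt files, exclude everything first and then include the pattern:
aws s3 mv s3://Bdp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/ s3://BDp-Archive/00009e98-3e0f-402e-9d12-7aec8e32b783/txt/ --recursive --exclude "*" --include "*.txt"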
I needed to configure the region of my bucket (or specify it as part of the CLI command):
aws s3 cp --region <region> <from> <to>
You need to configure your access key and secret key; try:
aws configure
For more options, see: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-installing-credentials
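If you would rather not store a profile with aws configure, the CLI also reads credentials from environment variables (a sketch with placeholder values):
export AWS_ACCESS_KEY_ID=<your-access-key-id>
export AWS_SECRET_ACCESS_KEY=<your-secret-access-key>
export AWS_DEFAULT_REGION=<your-region>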