I'm trying to sync all the files in a directory that start with "model.ckpt" to an S3 bucket path with this command:
aws s3 sync ./model.ckpt* $S3_CKPT_PATH
But I'm getting the error:
Unknown options: ./model.ckpt-0.meta,<my S3_CKPT_PATH path>
However, aws s3 sync . $S3_CKPT_PATH works, but gives me a lot of additional files I don't want.
Anybody know how I can do this?
When using aws s3 sync, all files in a folder are included.
If you wish to specify wildcards, you will need to Use Exclude and Include Filters.
For example:
aws s3 sync mydir s3://bucket/folder/ --exclude "*" --include "model.ckpt*"
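Applied to your case, and assuming $S3_CKPT_PATH already points at the bucket prefix you want, that would look something like:
aws s3 sync . $S3_CKPT_PATH --exclude "*" --include "model.ckpt*"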
I am using aws s3 sync on an S3 bucket; it has content at the root and in a specific folder - let's call it files/.
I am using the --delete option because I want files that no longer exist in the source to be removed from the destination as well, but only in the root folder. I want to keep the files/ folder intact.
Would that be possible with any of the command's options?
I think you can combine two sync commands to get the desired result:
aws s3 sync <from> <to> --delete --include "*" --exclude "files/*"
aws s3 sync <from> <to> --exclude "*" --include "files/*"
The first one should sync all files with the delete flag except the ones in files/ and the second one should sync only files in the files/ directory. Please be aware that the order of the filter parameters (--include / --exclude) plays a role, see Use of Exclude and Include Filters for an example.
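For example, with a hypothetical source bucket and local destination directory:
aws s3 sync s3://my-bucket ./local-copy --delete --exclude "files/*"
aws s3 sync s3://my-bucket ./local-copy --exclude "*" --include "files/*"
Because files/* is excluded from the first run, --delete should also leave anything under files/ in the destination alone.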
Hope this helps!
I'm trying to download one file from my s3 bucket
I'm trying this command:
aws s3 sync %inputS3path% %inputDatapath% --include "20211201-1500-euirluclprd01-olX8yf.1.gz"
and I have also tried:
aws s3 sync %inputS3path% %inputDatapath% --include "*20211201-1500-euirluclprd01-olX8yf.1*.gz"
but when the command executes, I get all of the files in the folder.
The folder looks like:
/2021/12/05
20211201-1500-euirluclprd01-olX8yf.1.gz
20211201-1505-euirluclprd01-olX8yf.1.gz
You can use aws s3 cp to copy a specific file. For example:
aws s3 cp s3://bucketname/path/file.gz .
Looking at your variables, you could probably use:
aws s3 cp %inputS3path%/20211201-1500-euirluclprd01-olX8yf.1.gz %inputDatapath%
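If you'd rather keep using sync (so repeated runs skip files you already have), note that --include only takes effect after everything else has been excluded; a sketch based on your paths:
aws s3 sync %inputS3path% %inputDatapath% --exclude "*" --include "20211201-1500-euirluclprd01-olX8yf.1.gz"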
I have a problem where I can't push all of my zip files to my S3 bucket. When I run the bat file, the cmd window loads for a second and then closes automatically. When I refresh my S3 bucket folder, there is no copy of the zip files.
My Script:
aws s3 cp s3://my_bucket/07-08-2020/*.zip C:\first_folder\second_folder\update_folder --recursive
The issue is with the *.zip. In order to copy files with a specific extension, use the following syntax:
aws s3 cp [LOCAL_PATH] [S3_PATH] --recursive --exclude "*" --include "*.zip"
From the docs:
Note that, by default, all files are included. This means that providing only an --include filter will not change what files are transferred. --include will only re-include files that have been excluded from an --exclude filter. If you only want to upload files with a particular extension, you need to first exclude all files, then re-include the files with the particular extension.
More info can be found here.
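As a hypothetical illustration of that behaviour with your paths:
aws s3 cp C:\first_folder\second_folder\update_folder s3://my_bucket/07-08-2020/ --recursive --include "*.zip"
still uploads every file, because everything is included by default, whereas
aws s3 cp C:\first_folder\second_folder\update_folder s3://my_bucket/07-08-2020/ --recursive --exclude "*" --include "*.zip"
uploads only the .zip files.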
@AmitBaranes is right. I checked on a Windows box. You could also simplify your command by using sync instead of cp.
So the command using sync could be:
aws s3 sync "C:\first_folder\second_folder\update_folder" s3://my_bucket/07-08-2020/ --exclude "*" --include "*.zip"
I'm trying to use s4cmd to copy files from an AWS S3 bucket using wildcards which are supposedly supported.
For example, I wanted to sync all AWS S3 log files starting with 2017-03-12 to my local current directory:
s4cmd sync s3://myapp-logs/prod/2017-03-12-19* .
which resulted in all files being copied and the wildcard apparently being ignored:
s3://myapp-logs/prod/2015-10-08-19-24-42-92BBBE9DA93917D1 => ./prod/2015-10-08-19-24-42-92BBBE9DA93917D1
s3://myapp-logs/prod/2015-10-08-19-30-09-BE8D5466FBB5DFD1 => ./prod/2015-10-08-19-30-09-BE8D5466FBB5DFD1
...
I can reproduce this failure regardless of the format of my wildcard.
The only time wildcards work as expected is when I use the cp command, e.g.:
s4cmd cp s3://myapp-logs/prod/2017-03-12-19* // Note that cp doesn't support copying to a local directory
or
s4cmd cp s3://mybucket/mystuff/N*.jpg s3://mybuckettest/
Can't you use aws s3 CLI?
aws s3 sync s3://myapp-logs . --exclude "*" --include "*prod/2017-03-12-19*"
should work. If it doesn't, see use-of-exclude-and-include-filters and modify the command.
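If the bucket is large, you could also sync straight from the prod/ prefix so fewer objects are listed; a variant under the same assumptions about your key layout:
aws s3 sync s3://myapp-logs/prod/ . --exclude "*" --include "2017-03-12-19*"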
I'm looking through the documentation of the aws cli and I cannot find a way to copy only the files in some directory structure to another bucket with a "flattened" structure (I want one directory with all the files inside it).
For example:
/a/b/c/1.jpg
/a/2.jpg
/a/b/3.jpg
I would want to have in a different bucket:
/x/1.jpg
/x/2.jpg
/x/3.jpg
Am I missing something or is it impossible?
Do you have an idea how could I do that?
This assumes that you have the aws cli configured on the system and that both buckets are in the same region.
What you can do is first download the s3 bucket to your local machine using:
aws s3 sync s3://originbucket /localdir/
After this, use a find command to move all the files into one directory:
find /localdir/ -type f -exec mv {} /anotherlocaldir/ \;
Finally, you can upload the files to s3 again!
aws s3 sync /anotherlocaldir/ s3://destinationbucket
You don't need to download files locally, as suggested in another answer. Instead, you could write a shell script or something that does the following:
Run ls on s3://bucket1 to get fully-qualified names of all files in it.
For each file, run cp to copy it from its current location to s3://bucket2/x/.
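A minimal bash sketch of that idea (bucket names and the x/ prefix are placeholders; it also assumes no two source files share the same name and no keys contain spaces):
#!/bin/bash
# List every object key in the source bucket, then copy each one into a flat x/ prefix in the target bucket
aws s3 ls s3://bucket1 --recursive | awk '{print $4}' | while read -r key; do
  aws s3 cp "s3://bucket1/$key" "s3://bucket2/x/$(basename "$key")"
done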
Here are some examples for your reference. Note that aws s3 sync operates on directories; for individual files, use aws s3 cp:
aws s3 cp /a/b/c/1.jpg s3://bucketname/
aws s3 cp /a/2.jpg s3://bucketname/
aws s3 cp /a/b/3.jpg s3://bucketname/
To sync all contents of a dir to S3 bucket:
aws s3 sync /directoryPath/ s3://bucketname/
AWS reference url: http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html