I'm trying to copy/move/sync files from a local directory to S3 using the AWS Command-Line Interface (CLI).
The very first upload to the S3 bucket succeeded, but when I run the same command again, the second run transfers nothing. The command doesn't throw any error.
Here is the command I ran to move the files:
aws s3 mv --recursive my-directory s3://my-files/
For instance, I had files file1.pdf, file2.pdf and file3.pdf.
If I delete file2.pdf from the S3 bucket and try to copy the file again using cp, sync, or mv, it is not uploaded back to the S3 bucket.
AWS CLI Version: aws-cli/1.15.10 Python/2.6.6 Linux/2.6.32-642.6.2.el6.x86_64 botocore/1.10.10
Any thoughts?
Initially I ran aws s3 mv --recursive my-directory s3://my-files/, which transfers the files and then deletes them from the local directory. Only the files were deleted; the folders still exist. Since no files remained in those folders, the subsequent cp and sync commands had nothing to transfer.
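A quick way to confirm whether there is anything left to upload after the mv (assuming the same local directory name as above):
find my-directory -type f
If that prints nothing, cp and sync have no files left to transfer.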
AWS CLI to download file with its entire folder structure from S3 to local and/or one S3 to another S3
I am looking to download a file from an S3 bucket to local storage with its entire folder structure. For example:
s3://test-s3-dev/apps/test-prd/test/data/sets/frs/bblr/type/level=low/type=data/bd=2022-08-25/region=a/entity=c/ss=tt/dev=mtp/datasetV=1/File123.txt
Above is the S3 path that I need to download locally, keeping its entire folder structure from S3.
However, both cp --recursive and sync only download File123.txt into the current local folder; they do not recreate its folder structure.
Please advise how to download the file from S3 with its entire folder structure, for either case:
To download to the local system, and/or
To copy from one S3 connection to another S3 connection.
aws --endpoint-url http://abc.xyz.pqr:9020 s3 cp --recursive s3://test-s3-dev/apps/test-prd/test/data/sets/frs/bblr/type/level=low/type=data/bd=2022-08-25/region=a/entity=c/ss=tt/dev=mtp/datasetV=1/File123.txt ./
OR
aws --endpoint-url http://abc.xyz.pqr:9020 s3 cp --recursive s3://test-s3-dev/apps/test-prd/test/data/sets/frs/bblr/type/level=low/type=data/bd=2022-08-25/region=a/entity=c/ss=tt/dev=mtp/datasetV=1/ ./
OR
aws --endpoint-url http://abc.xyz.pqr:9020 s3 sync s3://test-s3-dev/apps/test-prd/test/data/sets/frs/bblr/type/level=low/type=data/bd=2022-08-25/region=a/entity=c/ss=tt/dev=mtp/datasetV=1/ ./
All three aws commands above download the file directly into the current local folder without recreating its entire directory structure from S3.
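One approach that sometimes works, sketched here with the same endpoint and bucket as above: copy recursively from the bucket root and filter down to the single key with --exclude/--include, so the CLI recreates the object's key path as local subdirectories.
aws --endpoint-url http://abc.xyz.pqr:9020 s3 cp s3://test-s3-dev/ ./ --recursive --exclude "*" --include "apps/test-prd/test/data/sets/frs/bblr/type/level=low/type=data/bd=2022-08-25/region=a/entity=c/ss=tt/dev=mtp/datasetV=1/File123.txt"
The same pattern should also work for an S3-to-S3 copy by using another s3:// URI as the destination instead of ./.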
I would like to create some dummy files in an S3 bucket for testing purposes. Since these are dummy files, it seems like overkill to create them locally and upload them to S3 (a few GB of data). I created the files with the truncate command in Linux. Is it possible to create such files directly in S3, or do I need to upload them?
You need to upload them. Since you created the files using a terminal, you can install the AWS CLI and then use the aws s3 cp command to upload them to S3. If you have created many files or have a deep folder structure, you can use the --recursive option to upload all files from myDir to myBucket recursively:
aws s3 cp myDir s3://mybucket/ --recursive
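For a single dummy file, a minimal sketch (the file and bucket names here are only placeholders):
truncate -s 1G dummy-1g.bin
aws s3 cp dummy-1g.bin s3://mybucket/
Note that the sparseness created by truncate is not preserved: the CLI still reads and transfers the full declared size, so this does not avoid the upload or create the object directly in S3.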
I made a folder with 3 .jpg files in it to test. This folder is called c:\Work\jpg.
I am trying to upload it to a bucket with this command:
aws s3 cp . s3://{bucket}/Test
I get the following every time:
[Errno 2] No such file or directory: "C:\Work\jpg\".
Obviously, it correctly translated the current folder "." into the correct folder, but then it says it doesn't exist?!?
Any help out there to simply copy 3 files?
Are you confusing aws s3 sync with aws s3 cp? For cp, you need to specify the source file; the destination can be the current directory.
aws s3 cp test.txt s3://mybucket/test2.txt
Ensure that your path is written correctly.
Remember to add the --recursive option, because the source is a folder:
aws s3 cp ./ s3://{bucket}/Test --recursive
I'm looking through the AWS CLI documentation and I cannot find a way to copy only the files in some directory structure to another bucket with a "flattened" structure (I want one directory with all the files inside it).
For example, given:
/a/b/c/1.jpg
/a/2.jpg
/a/b/3.jpg
I would want to have in the other bucket:
/x/1.jpg
/x/2.jpg
/x/3.jpg
Am I missing something or is it impossible?
Do you have an idea how I could do that?
This assumes that you have the AWS CLI configured on the system and that both buckets are in the same region.
What you can do is first download the S3 bucket to your local machine using:
aws s3 sync s3://originbucket /localdir/
After this, use a find command to move all the files into one directory:
find /localdir/ -type f -exec mv {} /anotherlocaldir/ \;
Finally, you can upload the files to S3 again:
aws s3 sync /anotherlocaldir/ s3://destinationbucket
You don't need to download files locally, as suggested in another answer. Instead, you could write a shell script or something that does the following:
Run ls on s3://bucket1 to get fully-qualified names of all files in it.
For each file, run cp to copy it from its current location to s3://bucket2/x/ (a sketch of this loop follows below).
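A rough sketch of that loop, using bucket1, bucket2, and the x/ prefix from the question as placeholders:
# List every object key in the source bucket, then copy each one server-side
# into the flat x/ prefix, keeping only the file name.
# (The simple awk parsing assumes keys contain no spaces.)
aws s3 ls s3://bucket1 --recursive | awk '{print $4}' | while read -r key; do
  aws s3 cp "s3://bucket1/$key" "s3://bucket2/x/$(basename "$key")"
done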
Here are some examples for your reference (for individual files use cp; sync works on directories):
aws s3 cp /a/b/c/1.jpg s3://bucketname/
aws s3 cp /a/2.jpg s3://bucketname/
aws s3 cp /a/b/3.jpg s3://bucketname/
To sync all the contents of a directory to an S3 bucket:
aws s3 sync /directoryPath/ s3://bucketname/
AWS reference URL: http://docs.aws.amazon.com/cli/latest/reference/s3/sync.html
Sorry, I am new to setting up S3 file transfers.
I managed to pull files from my S3 bucket into my local environment using the following command:
aws s3 sync s3://myBucket myFolder/.
I got the following response:
download: s3://myBucket/image.jpg to myFolder\image.jpg
However, there is nothing in myFolder even though the download was reported as successful. How do I locate the actual file that was downloaded from S3?
Would the AWS CLI command be the right way to pull files, or am I totally missing the point here?
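In case it helps to narrow this down: the sync destination is resolved relative to the directory the command was run from, so one way to locate the downloaded copy (file name taken from the output above) is:
pwd
find . -name image.jpg
The first command shows which directory myFolder was created under; the second searches beneath it for the downloaded file.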