AWS - Download all versions of S3 object

I have a file in an S3 bucket with 60 versions. I need to download all of the versions, but through the console I would have to download them one by one.
Is there a command to download all the versions?
Or a way to make a Lambda function read all the versions and write them out as separate files to another S3 bucket?
Any help is appreciated.
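For what it's worth, here is a rough sketch of that flow using boto3; the bucket and key names are placeholders, and the same logic could run inside a Lambda function and copy to another bucket instead of downloading locally:

# Sketch: download every version of one object as a separate local file.
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"        # placeholder
key = "path/to/file.csv"    # placeholder

paginator = s3.get_paginator("list_object_versions")
for page in paginator.paginate(Bucket=bucket, Prefix=key):
    for version in page.get("Versions", []):
        if version["Key"] != key:
            continue  # Prefix matching can also return other keys
        version_id = version["VersionId"]
        s3.download_file(
            bucket, key, f"file.{version_id}.csv",
            ExtraArgs={"VersionId": version_id},
        )

If the target is another S3 bucket rather than local files, copy_object accepts a CopySource dict with a VersionId field, so each version can be copied server-side under a distinct key.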

Related

Integrating AWS RDS SQL Server with Amazon S3

I am trying to download multiple files from S3 to AWS RDS MSSQL at the same time.
The following example shows the stored procedure used to download a file from S3:
exec msdb.dbo.rds_download_from_s3
    @s3_arn_of_file='arn:aws:s3:::bucket_name/data.csv',
    @rds_file_path='D:\S3\Folder\data.csv',
    @overwrite_file=1;
This procedure only downloads one file at a time and queues the rest. Is there a solution whereby I can download multiple files at the same time?
I was thinking of downloading a zipped file and unzipping it once downloaded, but the integration does not support the zip format.
I have also checked the limitations in the AWS documentation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/User.SQLServer.Options.S3-integration.html
Can anyone help?

Upload only newly modified files to S3 bucket using Golang aws-sdk

I'm trying to implement a backup mechanism to an S3 bucket in my code.
Each time a condition is met, I need to upload an entire directory's contents to an S3 bucket.
I am using this code example:
https://github.com/aws/aws-sdk-go/tree/c20265cfc5e05297cb245e5c7db54eed1468beb8/example/service/s3/sync
It creates an iterator over the directory's contents and then uses s3manager.Uploader.UploadWithIterator to upload them.
Everything works; however, I noticed it uploads all files and overwrites existing objects in the bucket even if they weren't modified since the last backup. I only want to upload the delta between each backup.
I know the AWS CLI has the command aws s3 sync <dir> <bucket>, which does exactly what I need, but I couldn't find anything equivalent in the aws-sdk-go documentation.
Appreciate the help, thank you!
There is no such feature in the AWS SDK. You could implement it yourself by checking, for each file, the hash of both copies before uploading (a sketch of that check follows below), or use a community solution such as https://www.npmjs.com/package/s3-sync-client
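Here is a minimal sketch of that hash comparison, written in Python with boto3 for brevity; the equivalent calls exist in aws-sdk-go (HeadObject and the s3manager Uploader). Keep in mind that an S3 ETag only equals the file's MD5 for single-part, non-KMS uploads, so this is a heuristic rather than a guarantee:

# Sketch: upload a local file only if its MD5 differs from the object's ETag.
import hashlib
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def upload_if_changed(path, bucket, key):
    with open(path, "rb") as f:
        local_md5 = hashlib.md5(f.read()).hexdigest()
    try:
        remote_etag = s3.head_object(Bucket=bucket, Key=key)["ETag"].strip('"')
    except ClientError:
        remote_etag = None  # object does not exist yet
    if remote_etag != local_md5:
        s3.upload_file(path, bucket, key)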

How to copy only updated files/folders from aws s3 bucket to local machine?

I have a requirement to copy certain files from an S3 bucket to my local machine. The important points about my requirement are:
The files are kept in the S3 bucket in date-based folders.
The files have a .csv.gz extension; I need to change them to .csv and copy them to my local machine.
New files keep arriving every minute, and I need to copy and process only the new files. Files that have already been processed should not be copied again.
I have tried using sync, but after a file is processed it gets renamed, so the same csv.gz file is synced down to the local folder again.
I am planning to use some scheduled task to con.
Amazon S3 is a storage service. It cannot 'process' files for you.
If you wish to change the contents of a file (eg converting from .csv.gz to .csv), you would need to do this yourself on your local computer.
The AWS Command-Line Interface (CLI) aws s3 sync command makes it easy to copy files that have been changed/added since the previous sync. However, if you are changing the files locally (unzipping), then you will likely need to write your own program to download from Amazon S3.
There are AWS SDKs available for popular programming languages. You can also do a web search to find sample code for using Amazon S3.
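If you do end up writing your own program, a rough boto3 sketch of that idea might look like the following; the bucket, prefix, and the local bookkeeping file are assumptions made for illustration:

# Sketch: download only new .csv.gz objects, decompress to .csv, and remember
# which keys were already processed so they are not copied again.
import gzip
import json
import os
import shutil
import boto3

s3 = boto3.client("s3")
bucket = "my-bucket"           # placeholder
prefix = "2024-01-01/"         # placeholder date folder
state_file = "processed.json"  # local record of processed keys (assumption)

processed = set(json.load(open(state_file))) if os.path.exists(state_file) else set()

for page in s3.get_paginator("list_objects_v2").paginate(Bucket=bucket, Prefix=prefix):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if not key.endswith(".csv.gz") or key in processed:
            continue
        gz_path = os.path.basename(key)
        s3.download_file(bucket, key, gz_path)
        with gzip.open(gz_path, "rb") as src, open(gz_path[:-3], "wb") as dst:
            shutil.copyfileobj(src, dst)   # .csv.gz -> .csv
        processed.add(key)

with open(state_file, "w") as f:
    json.dump(sorted(processed), f)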

How to partially upload a ZIP file to S3 bucket?

I want my users to be able to download many files from an AWS S3 bucket (potentially a few hundred GB in total) as one large ZIP file. I would download the selected files from S3 first and then upload a newly created ZIP file back to S3. This job will be invoked rarely in our service, so I decided to use Lambda for it.
But Lambda has its own limitations: 15 minutes of execution time, ~500 MB of /tmp storage, etc. I found several workaround solutions on Google that can beat the storage limit (streaming), but found no way around the execution time limit.
Here are what I've found so far:
https://dev.to/lineup-ninja/zip-files-on-s3-with-aws-lambda-and-node-1nm1
Create a zip file on S3 from files on S3 using Lambda Node
Note that programming language is not a concern here.
Could you please give me a suggestion?
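For reference, the straightforward (non-streaming) version of the flow described in the question might look like the sketch below in a Python Lambda handler. It is exactly this approach that runs into the /tmp and 15-minute limits for large selections, which is why the linked streaming techniques exist; the event shape and names are placeholders:

# Sketch of the naive approach: download selected objects to /tmp, zip them,
# and upload the archive back to S3. Bound by Lambda's /tmp size and timeout.
import os
import zipfile
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    bucket = event["bucket"]   # placeholder event shape
    keys = event["keys"]       # list of object keys to bundle
    zip_path = "/tmp/bundle.zip"
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for key in keys:
            local = os.path.join("/tmp", os.path.basename(key))
            s3.download_file(bucket, key, local)
            zf.write(local, arcname=os.path.basename(key))
            os.remove(local)   # free /tmp space as we go
    s3.upload_file(zip_path, bucket, "bundles/bundle.zip")  # placeholder target key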

Can not use zip from S3 for AWS Lambda

Situation:
I have a zip file with a JS function in an S3 bucket.
The file properties say:
Link:
https://s3.just-an-example-region-for-this-post-1.amazonaws.com/my-bucket/server/func-helloworld.zip
When I create a new Lambda function, choose Upload a .ZIP from Amazon S3, and continue, I get:
Trouble uploading file: Invalid S3 URL.
The zip file is accessible to everyone, and I can download it.
I can't find a good example of what this link should look like.
I found this: https://forums.aws.amazon.com/thread.jspa?messageID=468968&#468968
But I don't understand where to get my file in a format like mentioned in the thread.
That was fun...
S3 and Lambda need to be in the same region.
I thought just downloading a file from S3 would work no matter which region. It doesn't. Now I know.
I tried it step by step via the web console. Now that I read the CLI docs, it says so everywhere... damn. Should have tried the CLI first.
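For anyone else hitting 'Invalid S3 URL': the SDK/CLI route makes the requirement concrete, because the deployment package is referenced by bucket and key, and that bucket must be in the same region as the function. A boto3 sketch, where the region, role ARN, runtime, and handler are placeholders:

# Sketch: create a Lambda function from a deployment zip stored in S3.
# The S3 bucket must be in the same region as the Lambda function.
import boto3

lambda_client = boto3.client("lambda", region_name="eu-west-1")  # placeholder region

lambda_client.create_function(
    FunctionName="func-helloworld",                     # placeholder
    Runtime="nodejs18.x",                               # placeholder runtime
    Role="arn:aws:iam::123456789012:role/lambda-role",  # placeholder role ARN
    Handler="index.handler",                            # placeholder handler
    Code={
        "S3Bucket": "my-bucket",               # must be in the same region
        "S3Key": "server/func-helloworld.zip",
    },
)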