Download S3 files from a specific version programmatically - amazon-web-services

I made a huge mistake on all the files in an S3 bucket. Luckily, versioning was turned on, so the previous versions are still available. However, I have A LOT of files, and downloading them manually from the AWS website would take far too long.
Is there a way to do it programmatically? I'm wondering if I can do something like "give me the version of the file if its date is different from yesterday's", or something along those lines, through the command line.
Thank you!!
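
For what it's worth, here is a minimal sketch of one way to do this with boto3: list every object version, keep the newest version of each key from before the date of the mistake, and download it by VersionId. The bucket name, cutoff date, and local "restored/" folder below are placeholders, so treat this as a starting point rather than a finished tool.

    import os
    from collections import defaultdict
    from datetime import datetime, timezone

    import boto3

    # All names below (bucket, cutoff date, local folder) are placeholders.
    s3 = boto3.client("s3")
    bucket = "my-bucket"
    cutoff = datetime(2024, 1, 1, tzinfo=timezone.utc)   # a date just before the mistake

    # Collect every object version that predates the cutoff, grouped by key.
    candidates = defaultdict(list)
    paginator = s3.get_paginator("list_object_versions")
    for page in paginator.paginate(Bucket=bucket):
        for v in page.get("Versions", []):
            if v["LastModified"] < cutoff:
                candidates[v["Key"]].append(v)

    # For each key, download the newest version that is still older than the cutoff.
    os.makedirs("restored", exist_ok=True)
    for key, versions in candidates.items():
        best = max(versions, key=lambda v: v["LastModified"])
        s3.download_file(
            bucket,
            key,
            os.path.join("restored", key.replace("/", "_")),   # flatten paths for simplicity
            ExtraArgs={"VersionId": best["VersionId"]},
        )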

Related

S3: need to download from browser, 10 GB limit

I'm in a bit of a pinch. I'm stuck using a computer that doesn't have the necessary CLI capabilities to download files from S3 (IT restrictions on what I can put on the computer). Hence, I can only download a file via the browser.
Problem: when I try to download a file that is larger than 10 GB, I get an error. I'm guessing there's a limit on the size of the file I can download (I have plenty of drive space for this, so it isn't a space issue).
How can I resolve this? Is there a setting on the browser that I need to change? Or something in S3 that I need to change? Thanks!

How to download the last version of all Google Drive files at scale?

All my Google Drive files got encrypted by ransomware. Google did not help me recover the versions of my Drive files from before the encryption date.
The only option I found that works is to manually select a file in Google Drive and revert to the previous version by deleting the encrypted current version. Google keeps the previous version of a file in Drive for only 30 days.
I am looking for a script that can revert each file to its immediately previous version, at scale, by deleting the currently encrypted one. I have 60 GB of data in Google Drive.
Does anyone have a script to do that? I see in the Google Developer documentation that the Google Drive API is open to everyone; using the API, all versions can be marked to be kept forever, or a particular version of a file can be downloaded.
I left coding some 7 years ago and am struggling to create the script. If anyone already has such a script, it would help. Google Drive here is just my personal account.
I had the same problem last week and created an Apps Script which deletes the new file versions and keeps the old version from before the ransomware affected the Drive.
Contact me for the script; for some reason I can't paste it here!
You can Skype me (nickname: gozeril) and I'll give it to you.
Notes:
You need to run it on each root folder one by one, changing only the folder name in the code.
Some folders are very big, so you must run the script several times.
The script will run for 30 minutes at most.
Be patient, it works!
I hope you'll find it useful :-)
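
For anyone who would rather use the Drive API route mentioned in the question than the Apps Script referenced above, here is a rough sketch with google-api-python-client. The token file and file ID are placeholders, and deleting the newest (encrypted) revision of a binary file only mirrors the approach described here, so test it on a single unimportant file first.

    from google.oauth2.credentials import Credentials
    from googleapiclient.discovery import build

    # Assumes you already have an OAuth token with a Drive scope saved to
    # token.json (e.g. created with google-auth-oauthlib); both the token
    # file and the file ID below are placeholders.
    creds = Credentials.from_authorized_user_file("token.json")
    service = build("drive", "v3", credentials=creds)

    file_id = "YOUR_FILE_ID"

    # List the file's revisions with their timestamps.
    revisions = service.revisions().list(
        fileId=file_id,
        fields="revisions(id,modifiedTime)",
    ).execute().get("revisions", [])

    if len(revisions) >= 2:
        # Deleting the newest (encrypted) revision should leave the previous
        # one as the current file; this only works for binary (non-Google-Docs)
        # files and cannot remove the last remaining revision.
        newest = max(revisions, key=lambda r: r["modifiedTime"])
        service.revisions().delete(fileId=file_id, revisionId=newest["id"]).execute()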

How to set no cache AT ALL on AWS S3?

I started using AWS S3 to give my users a fast way to download the installation files of my Win32 apps. Each install file is about 60 MB and the download works very fast.
However, when I upload a new version of the app, S3 keeps serving the old file instead! I just rename the old file and upload the new version with the same name as the old one. After I upload, when I try to download, the old version is downloaded instead.
I searched for some solutions and here is what I tried:
Edited all TTL values on CloudFront to 0
Edited the 'Cache-Control' metadata with the value 'max-age=0' for each file in the bucket (a programmatic sketch of this is shown after the answer below)
None of these fixed the issue; AWS keeps serving the old file instead of the new one!
I will often upload new versions, so I need S3 to never use any cache at all when users download.
Please help.
I think this behavior might be because S3 uses an eventually consistent model, meaning that updates and deletes propagate eventually, but it is not guaranteed that this happens immediately, or even within a specific amount of time (see here for the specifics of their consistency approach). Specifically, they say "Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all Regions", and I think the case you're describing would be an overwrite PUT. There appears to be a good answer on a similar issue here: How long does it take for AWS S3 to save and load an item? It touches on the consistency issue and how to get around it; hopefully that's helpful.
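
As a side note on the Cache-Control attempt in the question: the header on an existing object can only be changed by copying the object onto itself with replaced metadata. Below is a rough boto3 sketch; the bucket name and the header value are assumptions (a stricter value such as "no-cache, no-store, must-revalidate" is closer to "no cache at all" than "max-age=0").

    import boto3

    s3 = boto3.client("s3")
    bucket = "my-bucket"   # placeholder bucket name

    # Re-copy each object onto itself so the new Cache-Control header takes effect.
    # (copy_object handles objects up to 5 GB, which is plenty for 60 MB installers.)
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            head = s3.head_object(Bucket=bucket, Key=key)
            s3.copy_object(
                Bucket=bucket,
                Key=key,
                CopySource={"Bucket": bucket, "Key": key},
                CacheControl="no-cache, no-store, must-revalidate",
                ContentType=head.get("ContentType", "application/octet-stream"),
                Metadata=head.get("Metadata", {}),
                MetadataDirective="REPLACE",   # required, otherwise the old headers are kept
            )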

AWS S3 Buckets Access Errors & Buckets Showing Old Files

I have S3 buckets that I have been using for years, and today when I logged in through the console to manually upload some files, I noticed that all of my buckets are showing ERROR under the Access tab.
While I can still see the files, I'm unable to upload or modify any of them, and all files in my buckets are showing old versions from December even though I updated some of the text files just this month. Also, all files are missing their metadata tags.
I have not managed or changed any permissions in my account in years, and I'm the only one with access to these files.
Anyone else had this issue? How can I fix this?
It really feels like AWS had some major failure and replaced my current files with some old backup.
I had the same issue (except for the old files part). In my case it was caused by a browser plugin called "Avira Browserschutz", a plugin similar to Adblock. Other plugins such as uBlock Origin might result in identical behavior.
Test it by disabling said plugins or visiting AWS in incognito mode.

Updating uploaded content on Amazon S3?

We have a problem with updating our uploaded content on Amazon S3. We keep our software updates on Amazon S3 and overwrite the old version of our software with new versions. Sometimes our users get old versions of files even when the new versions were uploaded more than 10 hours earlier.
Step-by-step actions of our team:
We upload our file (about 300 MB) to S3
The file stays on S3 for some time: more than a day, usually a few weeks
We upload a new version of the file to S3, overwriting the old version
We start testing downloads. Some people get the new version, but other people get the old version.
How to solve this problem?
You should use different file names for different versions; this would make sure that a misbehaving proxy won't serve a cached old file.
I'd suggest you use S3 Object Versioning and put CloudFront in front of S3 with a short TTL expiry, so that caches know to discard the object as soon as possible.
Just a note for CloudFront: make sure to invalidate the CloudFront cache for the object when releasing a new version.
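
To illustrate that last note, an invalidation can also be issued programmatically. A minimal boto3 sketch follows, where the distribution ID and object path are placeholders:

    import time

    import boto3

    # The distribution ID and object path below are placeholders.
    cloudfront = boto3.client("cloudfront")
    cloudfront.create_invalidation(
        DistributionId="E1234567890ABC",
        InvalidationBatch={
            "Paths": {"Quantity": 1, "Items": ["/downloads/my-installer.exe"]},
            "CallerReference": str(time.time()),   # any string that is unique per request
        },
    )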