AWS S3 Buckets Access Errors & Buckets Showing Old Files

I have S3 buckets that I have been using for years. Today, when I logged in through the console to manually upload some files, I noticed that all of my buckets are showing ERROR under the Access tab.
While I can still see the files, I'm unable to upload or modify anything, and all files in my buckets are showing old versions from December, even though I updated some of the text files just this month. All files are also missing their meta tags.
I have not managed or changed any permissions in my account in years, and I'm the only one with access to these files.
Anyone else had this issue? How can I fix this?
It really feels like AWS had some major failure and replaced my current files with some old backup.

I had the same issue (except for the old-files part). In my case it was caused by a browser plugin called "Avira Browserschutz", a plugin similar to Adblock. Other plugins such as uBlock Origin might result in identical behavior.
Test this by disabling such plugins, or visit the AWS console in an incognito window.

Related

Hiding file source in an S3 bucket

I'm running an S3 bucket with a CloudFront distribution. Everything works, except that the source of the files can still be read.
So the bucket is at mybucket.domain.com and that works okay. However, navigating to mybucket.domain.com/script.js or mybucket.domain.com/style.css will reveal the contents of each file.
I have searched far and wide for a solution but seem to be coming up blank every time. I've tried things with the bucket policy and Cloudfront settings to no avail. Any thoughts are appreciated. Thanks.
There's no way to prevent this. The web browser has to download those files to the local computer in order to render your website, and for that they have to be publicly available. You can't stop someone from viewing the source of publicly available files; copies of them already sit on every computer that has visited your website.
You shouldn't place anything in those files that shouldn't be publicly available.

How to set no cache AT ALL on AWS S3?

I started using AWS S3 to give my users a fast way to download the installation files of my Win32 apps. Each installer is about 60 MB and downloads are very fast.
However, when I upload a new version of the app, S3 keeps serving the old file instead! I rename the old file and upload the new version with the same name as the old one. After the upload, when I try to download, the old version is downloaded instead.
I searched for solutions and here is what I tried:
Edited all TTL values on CloudFront to 0
Edited the 'Cache-Control' metadata with the value 'max-age=0' for each file in the bucket
None of these fixed the issue; AWS keeps serving the old file instead of the new one!
I will often upload new versions, so I need S3 to never serve a cached copy when users download.
Please help.
I think this behavior might be because S3 uses an eventually consistent model, meaning that updates and deletes will propagate eventually, but it is not guaranteed that this happens immediately, or even within a specific amount of time (see here for the specifics of their consistency approach). Specifically, they say "Amazon S3 offers eventual consistency for overwrite PUTS and DELETES in all Regions", and I think the case you're describing would be an overwrite PUT. There appears to be a good answer on a similar issue here: How long does it take for AWS S3 to save and load an item? It touches on the consistency issue and how to work around it; hopefully that's helpful.
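For reference, here is a minimal sketch of setting the Cache-Control metadata at upload time with boto3 instead of editing it by hand in the console; the bucket and file names are placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Upload the installer with a Cache-Control header attached so that
# CloudFront and browsers revalidate instead of serving a stale copy.
# Bucket and key names below are placeholders.
s3.upload_file(
    "MyAppSetup.exe",
    "my-installers-bucket",
    "MyAppSetup.exe",
    ExtraArgs={
        "CacheControl": "no-cache, no-store, must-revalidate",
        "ContentType": "application/octet-stream",
    },
)
```

Note that, per the answer above, the header alone may not help immediately: the overwritten object itself still has to propagate before every request returns the new bytes.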

Continuously getting a 'Resolve conflict' error while uploading a new file or updating an existing file

I am using the Google Cloud console to upload files to Storage. My IAM account has full rights to upload files to the given bucket. Recently I started getting the following error while updating an existing file, and now I get the same error even when uploading a new file.
I have not purchased GCP support, so I thought this might be the right platform to check whether anyone has a solution.
This behaviour is usually related to an issue with the session's cookies. This other SO question about the same issue was answered by the original poster, who gave two possible solutions:
Clear the cookies
Log in through a private window/tab
On top of those two workarounds, there's another known solution:
Log out of GCP (from all accounts, if you are signed in with several) and log in again

Google Cloud Storage - files not showing

I have over 30 Leaflet maps hosted in my Google Cloud Platform bucket (for example), and it has always been an easy process to upload my folder (which includes an HTML file with sub-folders containing .js and .css files) and share the map publicly.
I tried uploading another map today, but within the folder there are no files showing and I get the following message "There are no live objects in this folder. If you have object versioning enabled, this folder may contain archived versions of objects, which aren't visible in the console. You can list archived object versions using gsutil or the APIs."
Does anyone know what is going on here?
We have also seen this problem, and it seems that the issue is limited to buckets that have spaces in the name.
The issue isn't reproducible through the gcloud web console itself, but if you use gsutil to upload a file to a bucket with a space in the name, it won't be visible in the web UI.
I can see from your screenshot that your bucket also has spaces (%20 in the URL).
If you need a workaround ASAP, you could rename your bucket...
But Google should fix this soon, I hope.
There is currently an open issue on GCS/console integration.
If file names contain any characters that need URL encoding, they are not visible in the console, but they are accessible via gsutil or the API (which is currently the recommended workaround).
The issue has been resolved as of 8 May 2018, 10:00 UTC.
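As a sketch of the API workaround mentioned above, listing the bucket's objects with the google-cloud-storage Python client will show files even when the console hides them; the bucket name is a placeholder:

```python
from google.cloud import storage

client = storage.Client()

# List every object in the bucket via the API; objects whose names need
# URL encoding show up here even if the console hides them.
# "my-bucket" is a placeholder name.
for blob in client.list_blobs("my-bucket"):
    print(blob.name)
```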
This can happen if the file doesn't have an extension: the UI treats it as a folder and lets you navigate into it, showing a blank folder instead of the file contents.
We had the same symptom (files showed up via the API but were invisible in the web UI and via the CLI).
The issue turned out to be that we were saving files to "./uploads", which Google interprets as "create a directory literally called '.' and then a subdirectory called uploads."
The fix was to upload to "uploads/" instead of "./uploads". We also just ran a mass copy operation via the API for everything under "./uploads". All visible now!
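A minimal sketch of the corrected upload path using the google-cloud-storage Python client; the bucket and file names are placeholders:

```python
from google.cloud import storage

client = storage.Client()
bucket = client.bucket("my-bucket")  # placeholder bucket name

# Name the object "uploads/<name>" rather than "./uploads/<name>", so the
# console doesn't show a literal "." folder hiding everything beneath it.
blob = bucket.blob("uploads/report.csv")
blob.upload_from_filename("report.csv")
```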
I also had spaces in my URL and it was not working properly yesterday. I checked this morning and everything is working as expected. I still have the spaces in my URL, by the way.

AWS S3 bucket files locked out, can't add permissions

I recently transferred a couple of S3 buckets to a different account using s3cmd from the master account. :(
Now I can't access any of the files transferred to these buckets, since there is no way for me to add permissions to them. When I try to add permissions to these files, I get "Sorry! You were denied access to do that", even though I'm the admin!
No way to add permissions to files: http://imgur.com/LOPK2dN
I have tried to add an 'Everyone' permission on the bucket itself, but all in vain.
I'd appreciate it if anyone could help me retrieve these files.
I had the exact same problem, and it turned out to be the permissions set on the files when copying them over. I was using the PHP library provided by Amazon, and when calling the copy method the ACL needed to be set to "bucket-owner-full-control". That worked for me. You are probably copying your files in a different manner, but maybe this is the path to follow.
I started to find the solution by reading another question.
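For anyone copying with Python instead of PHP, a rough equivalent is sketched below with boto3; the bucket and key names are placeholders, and it assumes the copying credentials are allowed to set ACLs:

```python
import boto3

s3 = boto3.client("s3")

# Copy the object while granting the destination bucket's owner full
# control, mirroring the PHP-library fix described above.
# Bucket and key names are placeholders.
s3.copy_object(
    Bucket="destination-bucket",
    Key="path/to/file.txt",
    CopySource={"Bucket": "source-bucket", "Key": "path/to/file.txt"},
    ACL="bucket-owner-full-control",
)
```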