I recently transferred a couple of S3 buckets to a different account with s3cmd from the master account :(
Now I can't access any of the files transferred to these buckets, since there is no way I can add permissions to these transferred files. When I try to add permissions to these files I get "Sorry! You were denied access to do that", even though I'm the admin!
No way to add permissions to files: http://imgur.com/LOPK2dN
I have tried adding the "Everyone" permission on the bucket itself, but all in vain.
I'd appreciate it if anyone can help me retrieve these files.
I had the exact same problem, and it turned out to be the permissions set on the files when copying them over. I was using the PHP library provided by Amazon, and when calling the copy method the ACL needed to be set to "bucket-owner-full-control". That worked for me. You are probably copying your files in a different manner, but maybe this is the path to follow.
I started to find the solution by reading another question.
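The answer above uses the PHP SDK, but the same idea applies to any client. Purely as an illustration (the bucket names and key below are placeholders, and this sketch assumes boto3 in Python rather than the PHP library mentioned above), a copy that grants the destination bucket's owner full control looks roughly like this:

```python
import boto3

s3 = boto3.client("s3")

# Copy an object between accounts while handing control to the
# destination bucket's owner. Bucket names and key are placeholders.
s3.copy_object(
    CopySource={"Bucket": "source-bucket", "Key": "images/photo.jpg"},
    Bucket="destination-bucket",
    Key="images/photo.jpg",
    # Without this ACL the destination account may be unable to manage
    # the copied object, which matches the symptom in the question.
    ACL="bucket-owner-full-control",
)
```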
I use Google Cloud Storage to store static data. Early on I was able to create folders without any problems, but some time ago I tried to create a new folder in my bucket and got the error below:
As far as I recall I didn't change anything; I was able to create folders before, so why can't I create them now?
I've tried adding my email to the Cloud Storage permissions with the Storage Admin and Storage Legacy Bucket Owner roles in the hope of being able to create a folder, but when I save the configuration and try again, the same error still appears and I still can't create the folder. Uploading files to Cloud Storage works fine, though; the files are uploaded properly. So why can't I create a folder?
Can anyone help me?
As mentioned in the comments and the linked question, you cannot create new buckets because of a delinquent billing account. You can reactivate it by following these steps; keep in mind that you will need the required permissions and will have to resolve any delayed or declined payments. You can also contact the Billing team.
I'm running an S3 bucket with a CloudFront distribution. Everything works, except that the source code can still be read.
So the bucket is at mybucket.domain.com and that works okay. However, navigating to mybucket.domain.com/script.js or mybucket.domain.com/style.css will reveal the contents of each file.
I have searched far and wide for a solution but keep coming up blank. I've tried things with the bucket policy and the CloudFront settings, to no avail. Any thoughts are appreciated. Thanks.
There's no way to prevent this. The web browser has to download those files to the local computer in order to render your website, and for the browser to download them they have to be publicly available. There's no way to stop someone from viewing the source of files that are publicly available, and since copies of these files exist on every computer that has visited your website, there is absolutely no way to keep people from viewing their source.
You shouldn't place anything in those files that shouldn't be publicly available.
I have an Amazon S3 bucket with tons of images. A subset of these images need to be synced to a local machine for image analysis (AI) purposes. This has to be done regularly and ideally with a list of file names as input. Not all images need to be synced.
There are ways to synchronise S3 with either Dropbox/Amazon Drive or other storage services, but none of them appear to have the option to provide a list of files that need to be synced.
How can this be implemented?
The first thing that springs to mind when talking about syncing and S3 is the aws s3 sync CLI command. It lets you sync a specific origin and destination folder, and it gives you --include and --exclude flags if you want to list specific files. The command also accepts wildcards (*) if you have a naming convention you can use to identify the files.
Both --include and --exclude can be passed multiple times, so depending on your OS you could either list all the files explicitly or write a find script that identifies them and builds the flags.
Additionally, you can pass --delete, which will remove any files in the destination path that are not in the origin.
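As a rough sketch of that approach (the bucket, local path, and files.txt list below are placeholders I'm assuming, not something from the question), you could build the sync command from your file list, excluding everything else:

```python
import subprocess

# Assumed placeholders: adjust the bucket, local directory, and list file.
BUCKET = "s3://my-image-bucket"
LOCAL_DIR = "./images"
LIST_FILE = "files.txt"  # one file name per line

with open(LIST_FILE) as f:
    names = [line.strip() for line in f if line.strip()]

# Exclude everything, then re-include only the listed files.
cmd = ["aws", "s3", "sync", BUCKET, LOCAL_DIR, "--exclude", "*"]
for name in names:
    cmd += ["--include", name]

subprocess.run(cmd, check=True)
```

For very long file lists the command line can get unwieldy, in which case batching the names over several sync calls may be necessary.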
As much as I would like to answer, I felt it would be better to comment with one's thoughts initially to check they are in line with the OP! But I see the comments are already being used to provide answers and gain points :)
I would like to submit my official answer!
Ans:
If I understand this correctly, I would use the AWS CLI with --include and --exclude filters.
https://docs.aws.amazon.com/cli/latest/reference/s3/index.html#use-of-exclude-and-include-filters
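If you would rather stay in code instead of the CLI filters from that page, the same selection can be done programmatically. A minimal sketch with boto3 (Python), assuming a placeholder bucket name and a plain-text list of object keys, might look like this:

```python
import os
import boto3

# Assumed placeholders: bucket name, key list file, and local directory.
BUCKET = "my-image-bucket"
LIST_FILE = "files.txt"  # one object key per line
LOCAL_DIR = "images"

s3 = boto3.client("s3")

with open(LIST_FILE) as f:
    keys = [line.strip() for line in f if line.strip()]

for key in keys:
    dest = os.path.join(LOCAL_DIR, key)
    os.makedirs(os.path.dirname(dest) or ".", exist_ok=True)
    # Download only the listed objects; everything else in the bucket is ignored.
    s3.download_file(BUCKET, key, dest)
```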
I have S3 buckets that I have been using for years, and today when I logged in through the console to manually upload some files, I noticed that all of my buckets are showing ERROR under the Access tab.
While I can still see the files, I'm unable to upload or modify any of them, and all files in my buckets are showing old versions from December even though I updated some of the text files just this month. Also, all files are missing their meta tags.
I did not manage or change any permissions in my account in years and I'm the only one with access to these files.
Anyone else had this issue? How can I fix this?
It really feels like AWS had some major failure and replaced my current files with some old backup.
I had the same issue (except for the old-files part). In my case it was caused by a browser plugin called "Avira Browserschutz", a plugin similar to Adblock. Other plugins such as uBlock Origin might result in identical behavior.
Test this by disabling said plugins or by visiting AWS in incognito mode.
I am using the Google Cloud console to upload files to Storage. My IAM account has full rights to upload files to the given bucket. Recently I started getting the following error while updating an existing file, and now I get the same error even when uploading a new file.
I have not purchased GCP support, so I thought this may be the right platform to check whether anyone has a solution.
This behaviour is usually related to an issue with the session's cookies. This other SO question about the same issue was answered by the original poster, and it gives two possible solutions:
Clear the cookies
Log in through a private window/tab
On top of those two workarounds, there's another known solution:
Log out of GCP (from all accounts, if you are signed in with several) and log in again