I need a solution for this issue: we have uploaded many images to the server, but none of the URLs are working.
Make sure that you have set up proper ACLs for your images. Enable read permission for everyone on all the images, or you can set up a bucket policy instead.
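For the bucket policy route, here is a minimal sketch using boto3 (the bucket name is a placeholder, and granting public read is only appropriate if the images really are meant to be public):

    # Sketch: grant public read access to every object in the bucket via a
    # bucket policy. "my-image-bucket" is a placeholder name.
    import json
    import boto3

    s3 = boto3.client("s3")

    public_read_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "PublicReadForImages",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::my-image-bucket/*",
            }
        ],
    }

    s3.put_bucket_policy(
        Bucket="my-image-bucket",
        Policy=json.dumps(public_read_policy),
    )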
I have an old archive folder on an on-premises Windows server that I need to put into an S3 bucket, but I'm having issues. It's more my lack of AWS knowledge, to be honest, but I'm trying.
I have created the S3 bucket and I was able to attach it to the server using net share (AWS gives you the command via the gateway), and I gave it a drive letter. I then tried to use robocopy to copy the data, but it didn't like the drive letter for some reason.
I then read I can use the AWS CLI so I tried something like:
aws s3 sync z: s3://archives-folder1
I get - fatal error: Unable to locate credentials
I guess I need to put some credentials in somewhere (.aws), but after reading too many documents I'm not sure what to do at this point. Could someone advise?
Maybe there is a better way.
Thanks
You do not need to 'attach' the S3 bucket to your system. You can simply use the AWS CLI to communicate directly with Amazon S3.
First, however, you need to provide the AWS CLI with a set of AWS credentials that can be used to access the bucket. You can do this with:
aws configure
It will ask for an Access Key and Secret Key. You can obtain these from the Security Credentials tab when viewing your IAM User in the IAM management console.
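Once aws configure has saved the keys (they end up in ~/.aws/credentials), your original aws s3 sync command should be able to authenticate. If you would rather script the copy in Python with boto3 instead of the CLI, a rough sketch might look like this (the source path and bucket name are placeholders):

    # Rough boto3 alternative to "aws s3 sync": walk a local folder and
    # upload each file. The drive path and bucket name are placeholders.
    import os
    import boto3

    s3 = boto3.client("s3")  # picks up the keys saved by "aws configure"
    source = r"Z:\archives"
    bucket = "archives-folder1"

    for root, _dirs, files in os.walk(source):
        for name in files:
            local_path = os.path.join(root, name)
            # Build an S3 key relative to the source folder, with "/" separators
            key = os.path.relpath(local_path, source).replace("\\", "/")
            s3.upload_file(local_path, bucket, key)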
I managed to upload a file by following the AWS Amplify quick start doc, and I used this example to set up my GraphQL schema, resolvers, and data sources correctly: https://github.com/aws-samples/aws-amplify-graphql.
I was stuck for a long time with an "Access Denied" error response when my image was being uploaded to the S3 bucket. I finally went to my S3 console, selected the right bucket, went to the permissions tab, clicked on "Everyone", and selected "Write Object". With that done, everything works fine.
But I don't really understand why it works, and Amazon now shows me a big, scary alert in my S3 console saying that making an S3 bucket public is not recommended at all.
I use an Amazon Cognito user pool with AppSync, and if I understood correctly, it is inside my resolvers that the image is uploaded to my S3 bucket.
So what is the right configuration to make the upload of an image work?
I already tried putting my users in a group with access to the S3 bucket, but it wasn't working (I guess because the users don't interact directly with my S3 bucket; my resolvers do).
I would like my users to be able to upload an image and then have it displayed in the app for everybody to see (very classic), so I'm just looking for the right way to do that, since the big alert in my S3 console seems to tell me that making a bucket public is dangerous.
Thanks!
I'm guessing you're using an IAM role to upload files to S3. You can set a bucket policy that grants that role specific permissions, whether that is read-only, write-only, etc.
Take a look here: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
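As a rough illustration, a bucket policy that lets one IAM role put objects could be applied like this with boto3 (the account ID, role name, and bucket name are all placeholders):

    # Sketch: bucket policy allowing a single IAM role to upload objects.
    # Account ID, role name, and bucket name are placeholders.
    import json
    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowUploadRole",
                "Effect": "Allow",
                "Principal": {"AWS": "arn:aws:iam::123456789012:role/my-upload-role"},
                "Action": ["s3:PutObject"],
                "Resource": "arn:aws:s3:::my-bucket/*",
            }
        ],
    }

    boto3.client("s3").put_bucket_policy(Bucket="my-bucket", Policy=json.dumps(policy))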
OK, I found where it was going wrong. I was uploading my image using the bucket address given by aws-exports.js.
BUT, if you go to IAM and look at the policy attached to the authenticated role of your Cognito user pool, you can see the different statements, and the one that allows putting objects into your S3 bucket is scoped to the folders "public", "protected" and "private".
So you have to either change those paths or add one of those folders to the end of the bucket address you use in your front-end app.
Hope it helps someone!
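To illustrate the point (with placeholder bucket and file names; an Amplify front end would normally go through the Amplify Storage library, but the key-prefix rule is the same), the object key has to start with one of the prefixes the role's policy allows:

    # Illustration only: if the authenticated role is only allowed keys under
    # "public/*", "protected/*" and "private/*", the prefix decides the outcome.
    import boto3

    s3 = boto3.client("s3")

    # Denied: the key does not start with an allowed prefix.
    # s3.upload_file("avatar.png", "my-amplify-bucket", "avatar.png")

    # Allowed: the key starts with the "public/" prefix.
    s3.upload_file("avatar.png", "my-amplify-bucket", "public/avatar.png")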
I have two buckets, and someone else set up the permissions for them. One allows uploads and the second one doesn't. I checked the permissions on both, and neither has a bucket policy or CORS configuration that I can see. These are the permissions for the one that is allowing uploads.
I've opened up the permissions even more for the other bucket, but it still doesn't allow uploads.
Besides those places, is there somewhere else that permissions would be set that I'm missing? The Amazon docs just talk about this and the bucket policy, but as I said, the bucket policy and CORS configuration for the one that is working are blank. I'm not sure what I need to do here.
Do you have anything under Identity & Access Management (IAM)? There might be a policy preventing you from accessing the bucket.
Here is a link explaining how IAM policies work:
http://docs.aws.amazon.com/IAM/latest/UserGuide/access_policies_examples.html#iam-policy-example-s3
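If an IAM policy does turn out to be the missing piece, a minimal sketch of an inline user policy granting access to a single bucket looks like this (user and bucket names are placeholders):

    # Sketch: inline IAM policy giving one user list/read/write access to
    # a single bucket. User and bucket names are placeholders.
    import json
    import boto3

    iam = boto3.client("iam")

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": "arn:aws:s3:::my-bucket",
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::my-bucket/*",
            },
        ],
    }

    iam.put_user_policy(
        UserName="upload-user",
        PolicyName="my-bucket-access",
        PolicyDocument=json.dumps(policy),
    )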
I don't know what the issue was, but the original app was written to use the East Coast data center. When I changed the bucket to that location, it suddenly started working again. My only assumption is that someone must have hard-coded the region somewhere.
I'm running through the Redshift tutorials on the AWS site, and I can't access their sample data buckets with the COPY command. I know I'm using the right Key and Secret Key, and have even generated new ones to try, without success.
The error from S3 is S3ServiceException:Access Denied,Status 403,Error AccessDenied. Amazon says this is related to permissions for a bucket, but they don't specify credentials to use for accessing their sample buckets, so I assume they're open to the public?
Anyone got a fix for this or am I misinterpreting the error?
I was misinterpreting the error. The buckets are publicly accessible; you just have to give your IAM user access to the S3 service.
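One straightforward way to do that, assuming the keys you pass to COPY belong to an IAM user (the user name below is a placeholder), is to attach the AWS-managed AmazonS3ReadOnlyAccess policy to that user:

    # Attach the AWS-managed read-only S3 policy to the IAM user whose keys
    # are used in the COPY command. The user name is a placeholder.
    import boto3

    iam = boto3.client("iam")
    iam.attach_user_policy(
        UserName="redshift-tutorial-user",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    )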
I have data from multiple users inside a single S3 account. My desktop app has an authentication system which lets the app know who the user is and which folder to access on S3, but the desktop app holds the access credentials for the whole S3 account.
Somebody told me this is not secure, since a hacker could intercept the request from the app to S3 and use the credentials to download all the data.
Is this true? And if so, how can I avoid it? (He said I need a client-server setup in the AWS cloud, but this isn't clear to me...)
By the way, I am using the Boto Python library to access S3.
thanks
I just found this:
Don't store your AWS secret key in the app. A determined hacker would be able to find it eventually. One idea is to have a web service hosted somewhere whose sole purpose is to sign the clients' S3 requests using the secret key; those requests are then relayed to the S3 service. That way you get your users to authenticate against your web service using credentials that you control. To reiterate: the clients talk directly to S3, but get their requests "rubber-stamped"/approved by you.
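One common way to implement that rubber-stamping idea is to have your service hand out short-lived presigned URLs, so the secret key never leaves the server. A sketch with the boto3 library (the successor of the Boto library mentioned in the question; the bucket name and folder layout are placeholders):

    # Sketch of a "signing service": the server keeps the AWS credentials and
    # returns short-lived URLs scoped to a single object in the user's folder.
    import boto3

    s3 = boto3.client("s3")

    def signed_download_url(user_folder: str, filename: str) -> str:
        """URL the desktop app can GET for the next 15 minutes."""
        return s3.generate_presigned_url(
            "get_object",
            Params={"Bucket": "my-app-data", "Key": f"{user_folder}/{filename}"},
            ExpiresIn=900,
        )

    def signed_upload_url(user_folder: str, filename: str) -> str:
        """URL the desktop app can PUT to for the next 15 minutes."""
        return s3.generate_presigned_url(
            "put_object",
            Params={"Bucket": "my-app-data", "Key": f"{user_folder}/{filename}"},
            ExpiresIn=900,
        )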
I don't necessarily see S3 as a flat structure, if you use filesystem notation like "folder/subfolder/file.ext" for the keys.
Vanity URLs are supported by S3 (see http://docs.amazonwebservices.com/AmazonS3/2006-03-01/VirtualHosting.html): basically the URL "http://s3.amazonaws.com/mybucket/myfile.ext" becomes "http://mybucket.s3.amazonaws.com/myfile.ext", and you can then set up a CNAME in your DNS that maps "www.myname.com" to "mybucket.s3.amazonaws.com", which results in "http://www.myname.com/myfile.ext".
Perfect timing! AWS just announced a feature yesterday that's likely to help you here: Variables in IAM policies.
What you would do is create an IAM account for each of your users. This will allow you to have a separate access key and secret key for each user. Then you would assign a policy that restricts each user's access to their own portion of the bucket, based on username. (The feature announcement I mentioned above has a good example of this use case.)
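A rough sketch of such a per-user policy using the ${aws:username} policy variable (bucket and group names are placeholders); attached to a group, it lets each IAM user touch only their own prefix:

    # Sketch: per-user S3 access via the ${aws:username} IAM policy variable.
    # Bucket and group names are placeholders.
    import json
    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:ListBucket"],
                "Resource": "arn:aws:s3:::my-app-data",
                "Condition": {"StringLike": {"s3:prefix": ["home/${aws:username}/*"]}},
            },
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": "arn:aws:s3:::my-app-data/home/${aws:username}/*",
            },
        ],
    }

    boto3.client("iam").put_group_policy(
        GroupName="app-users",
        PolicyName="per-user-s3-prefix",
        PolicyDocument=json.dumps(policy),
    )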