I managed to upload a file by following the AWS Amplify quick start docs, and I used this example to set up my GraphQL schema, resolvers, and data sources correctly: https://github.com/aws-samples/aws-amplify-graphql.
I was stuck for a long time on an "Access Denied" error whenever my image was uploaded to the S3 bucket. I finally went to the S3 console, selected the right bucket, opened the Permissions tab, clicked on "Everyone", and selected "Write Object". With that done, everything works fine.
But I don't really understand why this works, and the S3 console now shows me a big, scary alert saying it is not recommended to make an S3 bucket public.
I'm using an Amazon Cognito user pool with AppSync, and if I understood correctly, it's my resolvers that upload the image to my S3 bucket.
So what is the right configuration to make image uploads work?
I already tried putting my users in a group with access to the S3 bucket, but that didn't work (I guess because the users don't interact with the bucket directly; my resolvers do).
I would like my users to be able to upload an image and then have it displayed in the app for everybody to see (a very classic use case), so I'm just looking for the right way to do that, since the big alert in my S3 console suggests that making a bucket public is dangerous.
Thanks!
I'm guessing you're using an IAM role to upload files to S3. You can set the bucket policy to allow that role certain permissions, whether that's read-only, write-only, etc.
Take a look here: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
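For instance, here's a minimal sketch of such a bucket policy applied with boto3; the bucket name, account ID, and role name are placeholders, and the exact actions depend on what your resolvers actually need:

```python
import json
import boto3

s3 = boto3.client('s3')

bucket = 'my-upload-bucket'  # hypothetical bucket name

# Grant one specific IAM role write-only access to objects in this bucket.
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Principal': {'AWS': 'arn:aws:iam::123456789012:role/my-resolver-role'},  # hypothetical role ARN
        'Action': 's3:PutObject',
        'Resource': f'arn:aws:s3:::{bucket}/*',
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```

This keeps the bucket private: only the named role can write, and nothing is opened up to "Everyone".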
OK, I found where it was going wrong. I was uploading my image straight to the S3 bucket address given by aws-exports.js.
BUT, if you go to IAM and check the policy attached to the authenticated-user role of your Cognito pool, you can see the different statements, and the one that allows putting objects into your S3 bucket is scoped to the "public", "protected", and "private" folders.
So you have to add one of those folders to the object path you use in your front-end app, as in the sketch below.
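To illustrate in boto3 terms (the Amplify JS client does the equivalent when you pass the right key prefix; the bucket and file names here are hypothetical):

```python
import boto3

s3 = boto3.client('s3')

bucket = 'my-amplify-bucket'  # hypothetical; the real name comes from aws-exports.js

# Denied: the authenticated role's policy does not cover keys at the bucket root.
# s3.put_object(Bucket=bucket, Key='avatar.png', Body=b'...')

# Allowed: the key sits under the 'public/' prefix that the policy grants.
with open('avatar.png', 'rb') as f:
    s3.put_object(Bucket=bucket, Key='public/avatar.png', Body=f)
```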
Hope this helps someone!
Related
I have a React Amplify site with Storage and a Cognito user pool. When a user on my application uploads a picture, it triggers a Lambda function that modifies the picture and writes it to a public S3 bucket not related to the Amplify bucket I created. I want to update my Lambda function to write the file back into the user's own personal path associated with the Amplify Storage I set up. I want to get rid of the public S3 bucket altogether.
I am currently using boto3, assuming I know the bucket key path, like this:
bucket.upload_file(lambda_temp_location, user_bucket_item_key)
I had a custom function to upload their files to a public S3 bucket, but in switching to the Storage I set up with Amplify, I noticed that the key paths are unique to each user. Researching online, I checked the IAM role permissions and saw that the bucket key path looks like this:
private/${cognito-identity.amazonaws.com:sub}
How can I get the ${cognito-identity.amazonaws.com:sub} into my Lambda function? I thought I could append the user's sub ID to the path, but it doesn't match what I see in S3. I was thinking of sending this detail, if possible, from my JS script in the API call, or getting that info within the Lambda itself by matching user attributes or something... Any help would be much appreciated.
Thank you!
It represents the IdentityId from the credentials provider (the Cognito identity pool ID for the user, not the user pool sub, which is why what you compared against didn't match). You can check: https://aws-amplify.github.io/aws-sdk-ios/docs/reference/AWSCore/Classes/AWSCognitoCredentialsProvider.html
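If the Lambda sits behind an API Gateway endpoint called with IAM credentials from the identity pool (an assumption; adjust if your trigger differs), the identity ID arrives in the request context and can be used to build the private key path. A sketch with hypothetical bucket and file names:

```python
import boto3

s3 = boto3.resource('s3')

def handler(event, context):
    # With IAM auth through a Cognito identity pool, API Gateway puts the
    # caller's identity ID in the request context.
    identity_id = event['requestContext']['identity']['cognitoIdentityId']

    # Matches the IAM policy path: private/${cognito-identity.amazonaws.com:sub}/*
    key = f'private/{identity_id}/processed.png'  # hypothetical file name

    bucket = s3.Bucket('my-amplify-storage-bucket')  # hypothetical bucket name
    bucket.upload_file('/tmp/processed.png', key)
    return {'statusCode': 200}
```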
I have a Laravel application that is hosted on AWS. I am using an S3 bucket to store files. I know that I have successfully connected to this bucket because when I upload files, they appear as I would expect inside the bucket's directories.
However, when I try to use the URL attached to the uploaded file to display it, I receive a 403 Forbidden error.
I have an IAM user set up named laravel, which has the AmazonS3FullAccess policy attached, and I am using that user's key/secret.
I have the Object URL like so:
https://<BUCKET NAME>.s3.eu-west-1.amazonaws.com/<DIR>/<FILENAME>.webm
But if I try to access it, either in my app (fed into an audio player) or directly via the link, I get a 403. None of the tutorials I've followed to get this working involve bucket policies, but when I google the problems I'm having, bucket policies keep coming up.
Is there a single source of truth on how I am to do this? My AWS knowledge is very limited, but I am trying to get better!
When you request a URL of the form https://bucket.s3.amazonaws.com/dog/snoopy.png, that request is unauthenticated. Your S3 bucket policy does not allow unauthenticated access to the contents of the bucket, so the request is denied with a 403.
If you want your files to be downloadable by an unauthenticated/anonymous client then create an S3 bucket policy to allow that.
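A minimal sketch of such a policy, applied with boto3 (the bucket name is hypothetical; note that the bucket's Block Public Access settings must also permit this, and public read should only be used for content that is genuinely public):

```python
import json
import boto3

s3 = boto3.client('s3')

bucket = 'my-laravel-bucket'  # hypothetical bucket name

# Allow anonymous GetObject on every object in the bucket (public read).
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'PublicReadGetObject',
        'Effect': 'Allow',
        'Principal': '*',
        'Action': 's3:GetObject',
        'Resource': f'arn:aws:s3:::{bucket}/*',
    }],
}

s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```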
Alternatively, your server can create signed URLs and share those with the client.
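Laravel's S3 filesystem driver exposes this idea as Storage::temporaryUrl(); the raw equivalent in boto3 terms looks like this (bucket and key names are hypothetical):

```python
import boto3

s3 = boto3.client('s3')

# A time-limited GET link for a private object; anyone holding the URL
# can download the file until it expires, with no AWS credentials needed.
url = s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-laravel-bucket', 'Key': 'audio/recording.webm'},  # hypothetical names
    ExpiresIn=3600,  # valid for one hour
)
print(url)
```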
Otherwise, your client's requests need to be authenticated, which means having correctly-permissioned credentials and using an AWS SDK.
Typically, back-end applications that you write that need access to data in S3 (or other AWS resources) would be given AWS credentials allowing the necessary access. If your back-end application runs in AWS then you would do that by launching the compute with an IAM role.
Typically, front-end applications would not have AWS credentials. Instead, they authenticate to a back-end that then works with AWS resources on their behalf. There are other options, however, such as AWS Amplify apps.
Use-case
We basically want to collect files from external customers into a file server.
We were thinking of using the S3 bucket as the file server that customers can interact with directly.
Question
Is it possible to accomplish this where we create a bucket for each customer, and he can be given a link to the S3 bucket that also serves as the UI for him to drag and drop his files into directly?
He shouldn't have to log in to AWS or create an AWS account.
He should interact directly with only his own S3 bucket (drag-drop, add, delete files); there shouldn't be a way for him to check other buckets. We will probably create many S3 buckets for our customers in the same AWS account. His entry point into the S3 bucket UI is via a link (the S3 bucket URL, perhaps).
If such a thing is possible, I would love some general pointers as to what more I should do (see my approach below).
My work so far
I've been able to create an S3 bucket and grant public access.
I set policies allowing GetObject, ListBucket, and PutObject on the S3 bucket.
I've been able to give public access to objects inside the bucket using their links, but never to the bucket itself.
Is there something more I can build on, or am I hitting a dead end because this is not possible to accomplish?
P.S.: This may not be a coding question, but maybe your answer could include code to accomplish it if at all possible, or general pointers otherwise.
S3 presigned URLs can help in such cases, but you have to write your own custom front-end application for the drag-and-drop features.
Link: https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html
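A minimal sketch of the server-side piece, with hypothetical bucket and key names; your front end would then send the customer's file to this URL with a plain HTTP PUT:

```python
import boto3

s3 = boto3.client('s3')

# A short-lived URL the customer can upload one file to, scoped to a single
# key in their bucket; no AWS account or login is needed on their side.
upload_url = s3.generate_presigned_url(
    'put_object',
    Params={'Bucket': 'customer-acme-uploads', 'Key': 'incoming/report.pdf'},  # hypothetical names
    ExpiresIn=900,  # valid for 15 minutes
)

# The customer (or your drag-and-drop front end) then uploads with, e.g.:
#   curl -X PUT --upload-file report.pdf "<upload_url>"
print(upload_url)
```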
This might be a stupid question, but I've never used AWS before.
So apparently, to create an AWS account I need to give my credit card information, but I don't want to do that.
Is there any other way to access the information from this link?:
https://s3.console.aws.amazon.com/s3/buckets/quizdb-public/?region=us-east-1&tab=overview
The URL https://s3.console.aws.amazon.com/s3/buckets/quizdb-public/?region=us-east-1&tab=overview is the link that shows in the address bar when you log in to the AWS console, go to S3, and click on the bucket. If you do not have access to that specific AWS account and the AWS console, you will not be able to access the information in the bucket with that URL.
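That said, if the bucket's own policy happens to allow anonymous reads (which the console URL does not tell you), its contents can be reached through the bucket's REST endpoint without any AWS account. A sketch using unsigned boto3 requests, which only works if public access is actually enabled on the bucket:

```python
import boto3
from botocore import UNSIGNED
from botocore.config import Config

# An anonymous client: no credentials, all requests are sent unsigned.
s3 = boto3.client('s3', region_name='us-east-1',
                  config=Config(signature_version=UNSIGNED))

# Succeeds only if the bucket policy allows public ListBucket/GetObject.
for obj in s3.list_objects_v2(Bucket='quizdb-public').get('Contents', []):
    print(obj['Key'])
```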
I want to send someone keys that enable them to write files to an S3 bucket of my choice. I created the bucket, and I want to create a ... "role", with associated keys, and set the bucket's permissions so the role can create files in it. Can this be done? How?
I am not very familiar with AWS as you can tell. In the Google Cloud, I can do this by creating a "service account", downloading its .json key, and then giving the service account "Admin" access to the GCS bucket. I can then send the .json key to whomever I want, or use it from any server -- the .json key is not specific to another "user".
In AWS, I haven't been able to do this. It seems I have to know the ID of the user who will assume the role? (Which I don't know; I am trying to have a company upload data to my bucket, and I don't know which users the company is using, etc.)
But I am a beginner in AWS, and I think I am just trying to do things the wrong way. How can I do the equivalent of the Google Cloud approach above in AWS?
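The closest AWS analogue to the GCS service-account flow is a dedicated IAM user with an access key, scoped to the one bucket; a sketch with hypothetical names (no need to know anything about the other company's users):

```python
import json
import boto3

iam = boto3.client('iam')

bucket = 'my-upload-bucket'     # hypothetical bucket name
user_name = 'partner-uploader'  # hypothetical IAM user name

# 1. Create a dedicated IAM user (roughly the AWS equivalent of a GCS service account).
iam.create_user(UserName=user_name)

# 2. Scope the user to this one bucket with an inline policy.
policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Effect': 'Allow',
        'Action': 's3:PutObject',
        'Resource': f'arn:aws:s3:::{bucket}/*',
    }],
}
iam.put_user_policy(UserName=user_name, PolicyName='upload-only',
                    PolicyDocument=json.dumps(policy))

# 3. Create an access key pair to hand over (the counterpart of the .json key).
key = iam.create_access_key(UserName=user_name)['AccessKey']
print(key['AccessKeyId'], key['SecretAccessKey'])
```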