Read/Write to a Cognito User's S3 Bucket via Lambda

I have a React Amplify site with Storage and a Cognito user pool. When a user of my application uploads a picture, it triggers a Lambda function that modifies the picture and writes it to a public S3 bucket unrelated to the Amplify bucket I created. I want to update my Lambda function to write the file back into the user's own personal path in the Amplify Storage I set up, and get rid of the public S3 bucket altogether.
I am currently using boto3, assuming I already know the bucket key path, like this:
bucket.upload_file(lambda_tmp_path, user_object_key)  # placeholder names
I had a custom function to upload their files to a public S3 bucket, but in switching to the Storage I set up with Amplify, I noticed that the object key paths are unique to each user. Researching online, I checked the IAM role permissions and saw that the key path looks like this:
private/${cognito-identity.amazonaws.com:sub}
How can I get ${cognito-identity.amazonaws.com:sub} into my Lambda function? I thought I could append the user's sub to the path, but it doesn't match what I see in S3. I was thinking of sending this detail from my JS script in the API call, if possible, or getting it within the Lambda itself by matching it to user attributes or something. Any help would be much appreciated.
Thank you!

It represents the IdentityId from the credentials provider. You can check: https://aws-amplify.github.io/aws-sdk-ios/docs/reference/AWSCore/Classes/AWSCognitoCredentialsProvider.html
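In other words, the policy placeholder resolves to the Cognito Identity Pool IdentityId, not the user pool sub, which is why the sub didn't match what you see in S3. As a minimal sketch, assuming the Lambda is invoked through API Gateway with IAM (identity pool) authentication, the caller's identity ID is available on the request context; the bucket and file names below are placeholders:
import boto3
s3 = boto3.client("s3")
def handler(event, context):
    # With IAM auth via a Cognito Identity Pool, API Gateway puts the
    # caller's identity ID on the request context.
    identity_id = event["requestContext"]["identity"]["cognitoIdentityId"]
    # Mirror the Amplify Storage "private" key layout from the IAM policy:
    # private/${cognito-identity.amazonaws.com:sub} resolves to this identity ID.
    key = f"private/{identity_id}/photo.jpg"  # hypothetical file name
    s3.upload_file("/tmp/photo.jpg", "my-amplify-storage-bucket", key)  # assumed bucket
    return {"statusCode": 200}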

Related

Using S3 bucket as a file server for the public

Use-case
We basically want to collect files from external customers into a file server.
We were thinking of using the S3 bucket as the file server that customers can interact with directly.
Question
Is it possible to create a bucket for each customer and give him a link to the S3 bucket that also serves as the UI for him to drag and drop his files into directly?
He shouldn't have to log in to AWS or create an AWS account.
He should directly interact with only his S3 bucket (drag-drop, add, delete files); there shouldn't be a way for him to check other buckets. We will probably create many S3 buckets for our customers in the same AWS account. His entry point into the S3 bucket UI is via a link (an S3 bucket URL, perhaps).
If such a thing is possible, I would love some general pointers as to what more I should do (see my approach below).
My work so far
I've been able to create an S3 bucket and grant public access.
I set policies for Get, List, and PutObject on the bucket.
I've been able to give public access to objects inside the bucket using their links, but never to the bucket itself.
Is there something more I can build on, or am I hitting a dead end and this is not possible to accomplish?
P.S.: This may not be a coding question, but if possible your answer could include code to accomplish it, or at least general pointers.
S3 presigned URLs can help in such cases, but you have to write your own custom frontend application for the drag-and-drop features.
Link: https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html
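A minimal boto3 sketch of the presigned-upload side, with placeholder bucket and key names:
import boto3
s3 = boto3.client("s3")
# Generate a presigned POST that a customer can use to upload one object
# without an AWS account; valid for one hour here.
post = s3.generate_presigned_post(
    Bucket="customer-uploads",      # assumed bucket name
    Key="customer-123/upload.bin",  # assumed per-customer prefix
    ExpiresIn=3600,
)
# post["url"] and post["fields"] go into the form that your custom
# drag-and-drop frontend submits the file with.
print(post["url"], post["fields"])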

How to access objects in S3 bucket, without making the object's folder public

I have provided the AmazonS3FullAccess policy to both the IAM user and the group. Also, the bucket that I am trying to access says "Objects can be public", and I have explicitly made the folder inside the bucket public. Despite all this, I get an access denied error when I try to access an object through its URL. Any idea on this?
Objects in Amazon S3 are private by default. This means that objects are not accessible by anonymous users.
You have granted permission for your IAM User to be able to access S3. Therefore, you have access to the objects but you must identify yourself to S3 so that it can verify your identity.
You should be able to access S3 content:
Via the Amazon S3 management console
Using the AWS CLI (eg aws s3 ls s3://bucketname)
Via authenticated requests in a web browser
I suspect that you have been accessing your bucket via an unauthenticated request (eg bucketname.s3.amazonaws.com/foo.txt). Unfortunately, this does not tell Amazon S3 who you are, so it will deny the request.
To access content with this type of URL, you can generate an Amazon S3 pre-signed URL, which appends some authentication information to the URL to prove your identity. An easy way to generate the URL is with the AWS CLI:
aws s3 presign s3://bucketname/foo.txt
It will return a URL that looks like this:
https://bucketname.s3.amazonaws.com/foo.txt?AWSAccessKeyId=AKIAxxx&Signature=xxx&Expires=1608175109
The URL will be valid for one hour by default; the --expires-in option accepts values up to 7 days (604800 seconds).
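The same can be done programmatically; a minimal boto3 sketch with placeholder names:
import boto3
s3 = boto3.client("s3")
# Presigned GET URL for a private object, valid for one hour.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "bucketname", "Key": "foo.txt"},
    ExpiresIn=3600,
)
print(url)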
There are two approaches I would recommend.
Go to the S3 dashboard and download the objects you need one by one manually; the bucket can be kept private at the same time.
Build a gateway/small service to handle authentication for you: set a policy giving the service container/Lambda permission to access the private bucket, and restrict object downloads to specific users.

How can I use Lambda as a verifier for my S3 bucket

I have a web app that requires file storage for images. I am planning on using AWS S3 to store these files, but I do not know how to keep the images private for each user. I know I can use Cognito and S3 roles to do this as described here, but I want users of the same company to have a shared S3 directory. Is there a way I can use Lambda as an authorizer instead of Cognito to do this? I don't want to pass files through Lambda; I simply want to use it to block access to specific bucket directories.
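One pattern consistent with what the question describes is a Lambda that verifies the caller and vends a presigned URL scoped to the shared company prefix, so files never pass through the Lambda itself. A hypothetical sketch; the company lookup, field names, and bucket are all assumptions:
import boto3
s3 = boto3.client("s3")
def handler(event, context):
    # Hypothetical: company membership resolved upstream (e.g. by an
    # API Gateway authorizer) and passed on the request context.
    company = event["requestContext"]["authorizer"]["company"]
    requested_key = event["queryStringParameters"]["key"]
    # Block access outside the caller's shared company directory.
    if not requested_key.startswith(f"{company}/"):
        return {"statusCode": 403, "body": "Forbidden"}
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "company-files", "Key": requested_key},  # assumed bucket
        ExpiresIn=300,
    )
    return {"statusCode": 200, "body": url}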

Understanding how AppSync + S3 work together

I managed to upload a file following the AWS Amplify quick start doc, and I used this example to set up my GraphQL schema, resolvers, and data sources correctly: https://github.com/aws-samples/aws-amplify-graphql.
I was stuck for a long time on an "Access Denied" error when my image was being uploaded to the S3 bucket. I finally went to my S3 console, selected the right bucket, went to the permissions tab, clicked on "Everyone", and selected "Write Object". With that done, everything works fine.
But I don't really understand why it works, and Amazon now shows me a big, scary alert in my S3 console saying "We don't recommend making an S3 bucket public".
I used an Amazon Cognito user pool with AppSync, and if I understood correctly, it's inside my resolvers that the image is uploaded to my S3 bucket.
So what is the right configuration to make the image upload work?
I already tried putting my users in a group with access to the S3 bucket, but it didn't work (I guess because users don't directly interact with my S3 bucket; my resolvers do).
I would like my users to be able to upload an image and then have it displayed in the app for everybody to see (very classic), so I'm just looking for the right way to do that, since the big alert in my S3 console seems to tell me that making a bucket public is dangerous.
Thanks!
I'm guessing you're using an IAM role to upload files to S3. You can set the bucket policy to allow that role certain permissions, whether that is read-only, write-only, etc.
Take a look here: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
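As a rough sketch of such a bucket policy applied with boto3 (the role ARN and bucket name are placeholders):
import json
import boto3
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow the role your resolvers use to write objects,
            # without making the bucket public.
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::123456789012:role/my-appsync-role"},
            "Action": "s3:PutObject",
            "Resource": "arn:aws:s3:::my-upload-bucket/*",
        }
    ],
}
boto3.client("s3").put_bucket_policy(
    Bucket="my-upload-bucket",
    Policy=json.dumps(policy),
)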
OK, I found where it was going wrong. I was uploading my image to the S3 bucket address given by aws-exports.js.
BUT, when you go to your IAM role policies and check the role for an authorized user of your Cognito pool, you can see the different statements, and the one that allows putting objects in your S3 bucket uses the folders "public", "protected" and "private".
So you have to change those paths, or add these folders at the end of the bucket address you use in your front-end app.
Hope it will help someone!
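For illustration, under the Amplify convention the object keys the role policy permits look like this (the identity ID value is a placeholder); uploads to any other prefix are denied:
# The caller's Cognito Identity Pool ID (placeholder value).
identity_id = "us-east-1:00000000-0000-0000-0000-000000000000"
# Key prefixes the Amplify-generated IAM policy permits, by access level:
key_public = "public/photo.jpg"                       # shared by all users
key_protected = f"protected/{identity_id}/photo.jpg"  # readable by other users
key_private = f"private/{identity_id}/photo.jpg"      # owner only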

Is there any way to automatically add a specific tag key and value when a user uploads a file to an AWS S3 bucket

I want an automatic tag key/value pair to be added to objects uploaded via the AWS console.
Example: when an IAM user uploads a file, the tag key should default to CREATEDBY and the value to the user's ARN.
I want to achieve this because I want to restrict users from seeing/downloading objects uploaded by other IAM users in the same folder, using an IAM user policy that checks object tag values.
My requirements don't allow me to create separate folders for different users, as there are too many of them.
You can use a Lambda function that is triggered when a new file is uploaded to your bucket. This function would in turn add the tag to the S3 object. AWS has a tutorial on wiring an S3 bucket to a Lambda function.
The S3 event notification your Lambda receives includes a principalId field with information about the user who created the object, as well as the object's key. You can then use that information to tag the object.
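A minimal sketch of such a tagging Lambda, assuming an S3 ObjectCreated trigger is configured on the bucket (note that principalId identifies the uploader but is not necessarily the full ARN the question asks for):
import boto3
from urllib.parse import unquote_plus
s3 = boto3.client("s3")
def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        # Object keys arrive URL-encoded in S3 event notifications.
        key = unquote_plus(record["s3"]["object"]["key"])
        # The uploader's principal ID is included on each record.
        uploader = record["userIdentity"]["principalId"]
        # Tag the new object so IAM policies can match on the tag value.
        s3.put_object_tagging(
            Bucket=bucket,
            Key=key,
            Tagging={"TagSet": [{"Key": "CREATEDBY", "Value": uploader}]},
        )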