Use-case
We want to collect files from external customers into a file server.
We were thinking of using an S3 bucket as the file server that customers can interact with directly.
Question
Is it possible to accomplish this by creating a bucket for each customer, and giving each customer a link to their S3 bucket that also serves as a UI for dragging and dropping files into it directly?
The customer shouldn't have to log in to AWS or create an AWS account.
The customer should interact directly with only their own S3 bucket (drag and drop, add, delete files); there shouldn't be a way for them to see other buckets. We will probably create many S3 buckets for our customers in the same AWS account. The entry point into the S3 bucket UI would be a link (the S3 bucket URL, perhaps).
If such a thing is possible, I'd love some general pointers on what more I should do (see my approach below).
My work so far
I've been able to create an S3 bucket and grant it public access.
I've set policies allowing GetObject, ListBucket and PutObject on the bucket.
I've been able to give public access to individual objects inside the bucket via their links, but never to the bucket itself.
Is there something more I can build on, or am I hitting a dead end and this is not possible to accomplish?
P.S.: This may not strictly be a coding question, but code to accomplish it would be welcome if at all possible; otherwise, general pointers would help.
S3 presigned URLs can help in such cases, but you have to write your own custom frontend application for the drag-and-drop features.
Link: https://docs.aws.amazon.com/AmazonS3/latest/userguide/PresignedUrlUploadObject.html
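As a minimal sketch of that split (TypeScript with AWS SDK v3; the bucket names, key, region and expiry here are placeholder assumptions, not a definitive implementation), the back end hands out a presigned PUT URL per file, and the browser's drop handler uploads directly to S3 with it:

```typescript
// Back end (Node): generate a presigned PUT URL for one object.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" });

export async function presignUpload(bucket: string, key: string): Promise<string> {
  const command = new PutObjectCommand({ Bucket: bucket, Key: key });
  // URL is valid for 15 minutes; the customer needs no AWS account or login.
  return getSignedUrl(s3, command, { expiresIn: 900 });
}

// Browser: the drag-and-drop handler asks your back end for a URL per file,
// then PUTs the file straight to S3.
export async function uploadDroppedFile(file: File, presignedUrl: string): Promise<void> {
  await fetch(presignedUrl, { method: "PUT", body: file });
}
```

Since each customer only ever receives URLs your back end generated for their own bucket (or prefix), this also covers the requirement that they cannot see anyone else's bucket.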
Related
So, I want to have a service that creates files in an S3 bucket with specific links, and then allow anyone with a link to a file to read and write it.
But creating files must not be a public privilege; only editing/reading already-existing files, given you have the link.
Is this possible with a bucket policy? Basically, allowing one service CRUD privileges while the public has only read/update privileges.
You will need to write such a service yourself.
First, please note that there is no difference between 'Create' and 'Update' in Amazon S3 -- both use a PutObject operation. Objects cannot be 'edited' -- they can only be overwritten.
You can achieve your goal for reading by using public objects with obfuscated URLs -- as long as somebody knows the URL, they can access the object. Not a perfect means of security, but that is your choice.
You do not want to grant public permission to create objects in a bucket, otherwise anybody would be able to upload any files to the bucket (eg copyrighted movies) and you would be paying the cost of storage and data transfer.
The safer way to permit uploads is to have users authenticate to your back-end, and then have your back-end generate an Amazon S3 pre-signed URL that can be used to upload to the bucket. The pre-signed request fixes the key (filename) of the upload and, in its pre-signed POST form, can also limit the file size.
For more details, see: Uploading objects using presigned URLs - Amazon Simple Storage Service
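A plain presigned PUT URL cannot itself cap the upload size; for that you would use a presigned POST with a content-length-range condition. A rough sketch (TypeScript, AWS SDK v3; the size limit, expiry and names are assumptions for illustration):

```typescript
import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

const s3 = new S3Client({ region: "us-east-1" });

// Generate this in your back end after the user has authenticated.
export async function presignLimitedUpload(bucket: string, key: string) {
  return createPresignedPost(s3, {
    Bucket: bucket,
    Key: key,                                                     // fixes the object key
    Conditions: [["content-length-range", 1, 10 * 1024 * 1024]], // 1 byte .. 10 MB
    Expires: 600,                                                 // valid for 10 minutes
  });
}
// The result is { url, fields }; the client POSTs a multipart form
// containing those fields plus the file itself.
```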
I'm new to AWS tools, and although I have tried to search thoroughly for an answer, I wasn't able to settle on a solution.
My usecase is this:
I have a bucket where I need to store images. They are uploaded via my server, but I need to display them on my website.
Should my bucket be public?
If not, what should I do to allow everyone to read those images, while preventing mass uploads from origins other than my server?
If you want the images to be publicly accessible for your website, then the objects need to be public.
This can be done by creating a Bucket Policy that makes the whole bucket, or part of the bucket, publicly accessible.
Alternatively, when uploading the images, you can use ACL='public-read', which makes the individual objects public even if the bucket isn't public. This way, you can have more fine-grained control over what content in the bucket is public.
Both of these options require you to turn off portions of S3 Block Public Access to allow the Bucket Policy or ACLs.
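As a sketch of the first option, a bucket policy granting public read on just one prefix might look like this (TypeScript, AWS SDK v3; the bucket name and prefix are made up, and Block Public Access must already permit public bucket policies):

```typescript
import { S3Client, PutBucketPolicyCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

const policy = {
  Version: "2012-10-17",
  Statement: [{
    Sid: "PublicReadImages",
    Effect: "Allow",
    Principal: "*",
    Action: "s3:GetObject",
    // Only objects under images/ become publicly readable.
    Resource: "arn:aws:s3:::my-website-assets/images/*",
  }],
};

await s3.send(new PutBucketPolicyCommand({
  Bucket: "my-website-assets",
  Policy: JSON.stringify(policy),
}));
```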
When your server uploads to S3, it should make Amazon S3 API calls using a set of AWS credentials (Access Key, Secret Key) from an IAM User. Grant the IAM User permission to put objects in the bucket. This way, that software can upload to the bucket independently of whether the bucket is public. (Never make a bucket publicly writable/uploadable; otherwise people can store anything in there without your control.)
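A minimal sketch of that server-side upload (TypeScript, AWS SDK v3; the bucket name and content type are placeholders, and credentials are resolved from the standard chain, e.g. environment variables for the IAM User):

```typescript
import { readFile } from "node:fs/promises";
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

// The IAM User behind these credentials only needs s3:PutObject on the bucket.
const s3 = new S3Client({ region: "us-east-1" });

export async function uploadImage(localPath: string, key: string): Promise<void> {
  await s3.send(new PutObjectCommand({
    Bucket: "my-website-assets",
    Key: key,
    Body: await readFile(localPath),
    ContentType: "image/jpeg",
    ACL: "public-read", // the per-object alternative mentioned above; requires ACLs enabled
  }));
}
```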
upload them via my server however I need to display them on my website.
In that case, only your server can upload the images. So if you are hosting your web app on EC2 or ECS, use an instance role or task role to provide S3 write access.
Should my bucket be public?
It does not have to be. CloudFront is often used to serve images or files from S3 via an Origin Access Identity (OAI). This way your bucket remains fully private.
I have a web app that requires file storage for images. I am planning on using AWS S3 to store these files, but I do not know how to keep the images private for each user. I know I can use Cognito and S3 roles to do this as described here, but I want users of the same company to be able to share an S3 directory. Is there a way I can use Lambda as an authorizer instead of Cognito to do this? I don't want to pass files through Lambda; I simply want to use it to block access to specific bucket directories.
I followed the AWS Amplify quick start doc and succeeded in uploading a file, and I used this example to set up my GraphQL schema, resolvers and data sources correctly: https://github.com/aws-samples/aws-amplify-graphql.
I was stuck for a long time on an "Access Denied" error when my image was being uploaded to the S3 bucket. I finally went to my S3 console, selected the right bucket, went to the Permissions tab, clicked on "Everyone" and selected "Write Object". With that done, everything works fine.
But I don't really understand why it works, and Amazon now shows me a big, scary alert on my S3 console saying "We don't recommend making an S3 bucket public".
I used an Amazon Cognito user pool with AppSync, and if I understood correctly, it's inside my resolvers that the image is uploaded to my S3 bucket.
So what is the right configuration to make the image upload work?
I already tried putting my users in a group with access to the S3 bucket, but it didn't work (I guess because the users don't directly interact with my S3 bucket; my resolvers do).
I would like my users to be able to upload an image and then have it displayed in the app for everybody to see (very classical), so I'm just looking for the right way to do that, since the big alert on my S3 console seems to tell me that making a bucket public is dangerous.
Thanks!
I'm guessing you're using an IAM role to upload files to S3. You can set the bucket policy to allow that role certain permissions, whether that's read-only, write-only, etc.
Take a look here: https://docs.aws.amazon.com/AmazonS3/latest/dev/example-bucket-policies.html
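As a rough sketch, such a bucket policy names the role as the principal (TypeScript, AWS SDK v3; the account ID, role name and bucket name are placeholders):

```typescript
import { S3Client, PutBucketPolicyCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: "us-east-1" });

const policy = {
  Version: "2012-10-17",
  Statement: [{
    Effect: "Allow",
    // Only the uploader role gets write access; the bucket stays private otherwise.
    Principal: { AWS: "arn:aws:iam::123456789012:role/uploader-role" },
    Action: "s3:PutObject",
    Resource: "arn:aws:s3:::my-app-uploads/*",
  }],
};

await s3.send(new PutBucketPolicyCommand({
  Bucket: "my-app-uploads",
  Policy: JSON.stringify(policy),
}));
```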
OK, I found where it was going wrong. I was uploading my image to the S3 bucket address given by aws-exports.js.
BUT, if you go to IAM and check the policy attached to the authenticated-user role of your Cognito pool, you can see the different statements, and the one that allows putting objects into your S3 bucket is scoped to the folders "public", "protected" and "private".
So you have to change those paths, or append one of these folders to the bucket address you use in your front-end app.
Hope it will help someone!
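For example, with the Amplify Storage category (assuming the Amplify v5-era JavaScript API that matches the aws-exports.js setup described above), letting the library build the key avoids the path problem entirely:

```typescript
import { Storage } from "aws-amplify";

// With the default Cognito IAM policy, authenticated users may only write
// under the public/, protected/ and private/ prefixes, so let Storage
// build the key rather than hand-assembling the bucket URL.
export async function uploadImage(file: File): Promise<void> {
  await Storage.put(file.name, file, {
    level: "public",          // object is stored under public/<file.name>
    contentType: file.type,
  });
}
```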
I have files stored in S3; some files are accessed often, but other files are just stored for later use.
We do not need to serve these files through a CDN at all.
Is there a way to tell CloudFront not to fetch these files from S3?
The easiest way is to move them to a separate S3 bucket; however, another option is to keep the objects you don't want exposed private.
By default, your Amazon S3 bucket and all of the objects in it are private—only the AWS account that created the bucket has permission to read or write the objects in it. If you want to allow anyone to access the objects in your Amazon S3 bucket using CloudFront URLs, you must grant public read permissions to the objects. (This is one of the most common mistakes when working with CloudFront and Amazon S3. You must explicitly grant privileges to each object in an Amazon S3 bucket.)
Source:
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html#GettingStartedUploadContent
Hopefully that answers your question.