How to implement a one-time write ticket to an AWS S3 bucket?

Hi guys,
I am trying to implement a mechanism so that an anonymous AWS user can write to a specific S3 bucket that belongs to me, using a ticket provided by me (such as a random string). There may be restrictions on the object size, and there should be a time limit (for example, write to the bucket within 1 hour after I issue the ticket). Is there any way to implement such a thing using AWS S3 access policies?
Thanks in advance!

Yes, this is possible using the POST Object API call on S3.
You'll need to generate and sign a security policy and pass it along with the upload. This policy contains rules about what types of files can be uploaded, restrictions on file size, the location in your bucket where new files may be placed, an expiration date for the policy, and so on.
To learn more, check out this example as well as this article.
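For illustration, here is a minimal sketch of generating such a signed policy with boto3's generate_presigned_post; the bucket name, key prefix, size limit, and one-hour expiry are assumptions chosen to match the question, not values from the linked example:

```python
# Minimal sketch (assumed bucket/prefix/limits): issue a one-time upload
# "ticket" as a presigned POST policy.
import boto3

s3 = boto3.client("s3")

post = s3.generate_presigned_post(
    Bucket="my-upload-bucket",                          # assumed bucket name
    Key="uploads/${filename}",                          # uploads are confined to this prefix
    Conditions=[
        ["content-length-range", 1, 10 * 1024 * 1024],  # object size: 1 byte .. 10 MB
    ],
    ExpiresIn=3600,                                     # the "ticket" expires after 1 hour
)

print(post["url"])     # where the form must be POSTed
print(post["fields"])  # form fields (including the signed policy) to include in the POST
```

The returned URL and fields are the "ticket" you hand to the anonymous user; they submit them as a multipart/form-data POST together with the file, and S3 rejects anything that violates the policy conditions or arrives after the expiry.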

Related

copy images from one S3 bucket to diff account s3 bucket

I am using a RESTful API; the API provider has more than 80 GB of images in an S3 bucket.
I need to download these images and upload them to my own AWS S3 bucket, which is a time-consuming job.
Is there any way to copy the images from the API to my S3 bucket instead of downloading and re-uploading them?
I talked with the API support team; they say I am getting the image URLs, so it is up to me how I handle them.
I am using Laravel.
Is there a way to take the source image URLs and move the images directly to S3 instead of downloading them first and uploading them again?
Thanks
I think downloading and re-uploading across accounts would be inefficient, plus pricey for the API provider. Instead, I would talk to the API provider and try to replicate the images across accounts.
After replication, you can use Amazon S3 Inventory for various information about the objects in the bucket.
Configuring replication when the source and destination buckets are owned by different accounts
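As a rough sketch of what that replication setup could look like with boto3 (the role ARN, bucket names, and account ID are placeholders; versioning must be enabled on both buckets, and the provider would apply this on their source bucket):

```python
# Rough sketch, not a complete recipe: cross-account replication of the
# provider's images into your bucket. All names/ARNs below are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_replication(
    Bucket="provider-source-bucket",  # source bucket in the provider's account
    ReplicationConfiguration={
        "Role": "arn:aws:iam::111111111111:role/s3-replication-role",  # assumed role
        "Rules": [
            {
                "ID": "copy-images-to-consumer",
                "Status": "Enabled",
                "Priority": 1,
                "Filter": {"Prefix": "images/"},  # replicate only the images
                "DeleteMarkerReplication": {"Status": "Disabled"},
                "Destination": {
                    "Bucket": "arn:aws:s3:::consumer-destination-bucket",
                    "Account": "222222222222",    # your (destination) account ID
                    "AccessControlTranslation": {"Owner": "Destination"},
                },
            }
        ],
    },
)
```

Note that replication only applies to objects written after the rule is in place; existing objects would need S3 Batch Replication or a one-off copy.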
You want "S3 Batch Operations". Search for "xcopy".
You do not say how many images you have, but 1,000 at 80 GB each is 80 TB, and at that size you would not even want to download file by file to a temporary EC2 instance in the same region, which might otherwise be a one- or two-day option; you would still pay for ingress/egress.
I am sure AWS will do this in an ad-hoc manner for a price, as they would if you were migrating off the platform.
It may also be easier to allow access to the original bucket from the other account, but that is not the question.

How to configure Amazon S3 bucket so that external vendors can drop daily files in relevant folders within that bucket?

What is the best way to configure Amazon S3 buckets so that a 3rd-party external vendor can CREATE folders and DROP files (XML, JSON, CSV, etc.) in the relevant folders within that S3 bucket?
I am very new to the AWS world; any suggestions or guidelines would be greatly appreciated.
Thanks in advance.
This question lacks some detail, but there are a few ways to make it happen:
Create a group within AWS and assign a policy to the group that allows s3:PutObject only on the specific bucket. Create users for the vendors within those groups and give the vendors those credentials (a policy sketch follows the options below).
Use the same logic above, except instead of creating users, use Amazon Cognito.
(Depending on time) Create a UI that allows users to sign in and upload. This is sometimes not what people want to hear, but it is also a really good way to build something that can be reused in other, larger applications.
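For the first option, a minimal sketch with boto3 might look like the following; the group, policy, and bucket names are made up for illustration:

```python
# Sketch: an IAM group whose members may only put objects into one bucket.
# Group, policy, and bucket names are hypothetical.
import json
import boto3

iam = boto3.client("iam")

iam.create_group(GroupName="vendor-uploads")

iam.put_group_policy(
    GroupName="vendor-uploads",
    PolicyName="vendor-put-only",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "s3:PutObject",
                # vendors can create "folders" (key prefixes) and drop files,
                # but only inside this one bucket
                "Resource": "arn:aws:s3:::vendor-drop-bucket/*",
            }
        ],
    }),
)

# Each vendor then gets an IAM user added to the group, e.g.:
# iam.create_user(UserName="vendor-a")
# iam.add_user_to_group(GroupName="vendor-uploads", UserName="vendor-a")
```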

can someone hack into my s3 with "AWS-cognito-identity-poolID" that is hard-coded?

At first I hard-coded my AWS "accessKey" and "securityKey" in a client-side JS file, but that was very insecure, so I read about AWS Cognito and implemented new JS in the following manner:
I am still confused about one thing: can someone hack into my S3 with the "AWS-cognito-identity-poolID" that is hard-coded? Or are there any other security steps I should take?
Thanks,
Jaikey
Definition of Hack
I am not sure what hacking means in the context of your question.
I assume that you actually mean "that anyone can do something other than uploading a file", which includes deleting or accessing objects inside your bucket.
Your solution
As Ninad already mentioned above, you can use your current approach by enabling "Enable access to unauthenticated identities" [1]. You will then need to create two roles, one of which is for "unauthenticated users". You could grant that role PutObject permission on the S3 bucket. This would allow everyone who visits your page to upload objects to the S3 bucket. I think that is what you intend, and it is fine from a security point of view, since the IdentityPoolId is a public value (i.e. not confidential).
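To make that concrete, here is a sketch of what any visitor can do with the public identity pool ID once unauthenticated access is enabled; the pool ID, region, and bucket name are placeholders:

```python
# Sketch: exchange the public identity pool ID for temporary credentials and
# call exactly what the unauthenticated role allows (here: s3:PutObject).
import boto3

identity = boto3.client("cognito-identity", region_name="us-east-1")

identity_id = identity.get_id(
    IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000"  # public, not a secret
)["IdentityId"]

creds = identity.get_credentials_for_identity(IdentityId=identity_id)["Credentials"]

s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)

# Succeeds only if the unauthenticated role grants s3:PutObject on this bucket;
# listing, deleting, or touching other buckets is denied by the role itself.
s3.put_object(Bucket="my-upload-bucket", Key="uploads/example.txt", Body=b"hello")
```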
Another solution
I guess you do not need to use Amazon Cognito to achieve what you want. It is probably sufficient to add a bucket policy to S3 that grants PutObject permission to everyone.
Is this secure?
However, I would not recommend enabling direct public write access to your S3 bucket.
If someone abused your website by spamming your upload form, you would incur S3 charges for PUT operations and data storage.
A better approach would be to send the data through Amazon CloudFront and apply a WAF with rate-based rules [2], or to implement a custom rate-limiting service in front of your S3 upload. This would ensure that you can react appropriately to malicious activity.
References
[1] https://docs.aws.amazon.com/cognito/latest/developerguide/identity-pools.html
[2] https://aws.amazon.com/about-aws/whats-new/2019/08/lower-threshold-for-aws-waf-rate-based-rules/
Yes, the S3 bucket is secure if you are accessing it through the AWS Cognito identity pool on the client side. Also enable CORS, which allows actions only from a specific domain; this ensures that if someone tries a direct upload or tries to list the bucket, they will get "access denied".
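As an illustration of that CORS restriction (bucket name and origin are assumptions), a minimal sketch with boto3 could look like this:

```python
# Sketch: only allow browser uploads originating from your own domain.
# Note that CORS is enforced by browsers; it complements, but does not
# replace, the permissions on the Cognito roles.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_cors(
    Bucket="my-upload-bucket",
    CORSConfiguration={
        "CORSRules": [
            {
                "AllowedOrigins": ["https://www.example.com"],  # only your site
                "AllowedMethods": ["PUT", "POST"],
                "AllowedHeaders": ["*"],
                "MaxAgeSeconds": 3000,
            }
        ]
    },
)
```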
Also make sure that you have set the file read/write permissions on the hard-coded credentials so that they can only be read by the local node and nobody else. By the way, the answer is always yes; it is only a matter of how much effort someone is willing to put into the "hack". Follow what people have said here and you are safe.

S3 bucket - limit number of downloads for a specific IAM user

I want to put some files on S3 bucket for a customer to download.
However, I want to restrict the number of downloads for that IAM user.
Is there any way to achieve this without deploying any additional service?
I have come across metrics to track how many times the customer has downloaded a file, but I haven't found a way to restrict it to a specific number.
I could not find a way to do this directly; after checking, signed URLs do not provide a way to control the number of GET operations.
What you can do is create an alarm on the metric you came across that triggers a Lambda function to add a Deny policy to the IAM user for the specified files when the threshold is reached.
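A rough sketch of such a Lambda handler, using a hypothetical user name, bucket, and object key, might look like this:

```python
# Sketch: when the CloudWatch alarm fires, attach an inline Deny policy so
# the IAM user can no longer download the specified file(s).
# User, policy, bucket, and key names are hypothetical.
import json
import boto3

iam = boto3.client("iam")

def handler(event, context):
    iam.put_user_policy(
        UserName="customer-user",
        PolicyName="deny-further-downloads",
        PolicyDocument=json.dumps({
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Deny",
                    "Action": "s3:GetObject",
                    "Resource": ["arn:aws:s3:::my-delivery-bucket/report.zip"],
                }
            ],
        }),
    )
```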
It is not possible to limit the number of downloads of objects from Amazon S3. You would need to write your own application that authenticates users and then serves the content via the application.
An alternative is to provide a pre-signed URL that is valid for only a short period of time, with the expectation that this would not provide enough time for multiple downloads.
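For example, a short-lived presigned URL could be generated like this (bucket, key, and lifetime are illustrative):

```python
# Sketch: a presigned GET URL that expires quickly.
import boto3

s3 = boto3.client("s3")

url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-delivery-bucket", "Key": "report.zip"},
    ExpiresIn=300,  # 5 minutes: enough for one download, short enough to discourage sharing
)
print(url)
```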

Can I give my customer an AWS bucket link where they can go and upload files?

We want to receive files from our customers into our S3 bucket. I want to know if it is possible to create a bucket and give its link to a customer so that they can upload files to that bucket.
The thing that you are looking for is Signed URLs. Here you can read more: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-urls.html
https://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObject.html
The important thing to note is that this signed URL is only going to be valid for a specific amount of time.
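A minimal sketch of generating such an upload URL with boto3, using made-up bucket and key names:

```python
# Sketch: a presigned PUT URL the customer can use to upload one object
# without needing any AWS credentials of their own.
import boto3

s3 = boto3.client("s3")

upload_url = s3.generate_presigned_url(
    "put_object",
    Params={"Bucket": "customer-intake-bucket", "Key": "incoming/customer-a/data.csv"},
    ExpiresIn=3600,  # valid for one hour
)

# The customer then uploads with a plain HTTP PUT, e.g.:
#   curl -X PUT --upload-file data.csv "<upload_url>"
print(upload_url)
```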
You can share a bucket with others, with security policies as defined in your Bucket Policy and IAM Policies. However, keep in mind that you'd be charged for their usage. You can use S3 Requester Pays to invert request charges, but storage charges still apply to the bucket owner.