I want to put some files in an S3 bucket for a customer to download.
However, I want to restrict the number of downloads for the IAM user.
Is there any way to achieve this without deploying any additional service?
I have come across metrics that track how many times the customer has downloaded a file, but I haven't found a way to restrict the downloads to a specific number.
I could not find a way to do this directly; after checking, signed URLs do not provide a way to control the number of GET operations.
What you can do is create a CloudWatch alarm on the metric you came across that triggers a Lambda function to attach a Deny policy to the IAM user for the specified files once the threshold is reached.
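For example, the Lambda behind that alarm could attach an inline Deny policy built like the sketch below. The bucket name, keys, and helper name are illustrative; the actual attach step would be a boto3 `iam.put_user_policy` call with this document.

```python
import json

def build_deny_policy(bucket, keys):
    """Build an inline IAM policy denying s3:GetObject on the given keys.

    A Lambda triggered by the CloudWatch alarm could attach this document
    to the IAM user via iam.put_user_policy (boto3).
    """
    return json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": "s3:GetObject",
            "Resource": [f"arn:aws:s3:::{bucket}/{key}" for key in keys],
        }],
    })
```

Once attached, further GetObject calls by that user on those keys are refused, since an explicit Deny overrides any Allow.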
It is not possible to limit the number of downloads of objects from Amazon S3. You would need to write your own application that authenticates users and then serves the content via the application.
An alternative is to provide a pre-signed URL that is valid for only a short period of time, with the expectation that this would not provide enough time for multiple downloads.
I know I can use Powershell to initiate and manage a BITS (Background Intelligent Transfer Service) download from my server over VPN, and I am looking to do that to stage large install resources locally while a regular user is logged on, ready for use in software installs and updates later. However, I would like to also support cloud services for the download repository, as I foresee (some) firms no longer having centralized servers and VPN connections, just cloud repositories and a distributed workforce. To that end I have tested using Copy-S3Object from the AWS Powershell tools, and that works. But it isn't throttle-able so far as I can tell. So I wonder, is there a way to configure my AWS bucket so that I can use BITS to do the download, but still constrained by AWS credentials?
And if there is, is the technique valid across multiple cloud services, such as Azure and Google Cloud? I would LIKE to be cloud platform agnostic if possible.
I have found this thread, which seems to suggest that creating pre-signed URLs would work. But my understanding of that process is, well, nonexistent. I am currently creating credentials for every user. Do I basically assign those users to an AWS group and give that group some permissions, and then use PowerShell to sign a URL with the particular user's credentials, and that URL is what BITS uses? So a user who has been removed from the group would no longer be able to create signed URLs, and so would no longer be able to access the available resources?
Alternatively, if there is a way to throttle Copy-S3Object that would work too. But so far as I can tell that is not an option.
I'm not sure of a way to throttle Copy-S3Object, but you can definitely point BITS at a pre-signed S3 URL.
For example, if you have an AWS group with users a/b/c in it, and the group has a policy attached that allows the relevant access to your bucket, then users a/b/c will be able to create pre-signed URLs for objects in that bucket. For example, the following creates a pre-signed URL for an object called 'BITS-test.txt':
aws s3 presign s3://yourbucketnamehere/BITS-test.txt
That will generate a pre-signed URL that can be passed into an Invoke-WebRequest command.
This URL is not restricted to only those users though, anybody with this URL will be able to download the object - but only users a/b/c (or anyone else with access to that bucket) will be able to create these URLs. If you don't want users a/b/c to be able to create these URLs anymore, then you can just remove them from the AWS group like you mentioned.
You can also add an expiry parameter to the presign command, for example --expires-in 60, which keeps the link valid only for that period of time (in this case 60 seconds; the parameter is specified in seconds, and the default is 3600, i.e. one hour).
I need to send an email with a large attachment.
I tried using an AWS Lambda function along with SES; my files are stored in S3, with sizes varying from 1 MB to 1 GB.
It really isn't advisable to send large attachments in emails like this. It would be much more practical to include a link to this file so that it can be downloaded by the user you're sending the email to. S3 allows you to configure permissions settings so that you can ensure this user can download the file. Consider taking that approach.
I would consider using pre-signed URLs to S3 objects, which are granted a limited time until expiry (see the S3 documentation on pre-signed URLs). Or perhaps go an IAM route and grant bucket access to specific roles.
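To sketch that approach: rather than attaching the file, the message body carries a pre-signed link. A minimal example using Python's standard email module (the addresses and URL are placeholders; the resulting message could be handed to SES as a raw email):

```python
from email.message import EmailMessage

def build_link_email(sender, recipient, presigned_url):
    # Illustrative addresses; the pre-signed URL would come from S3.
    msg = EmailMessage()
    msg["From"] = sender
    msg["To"] = recipient
    msg["Subject"] = "Your file is ready to download"
    msg.set_content(
        "Your file is available for a limited time at:\n\n"
        f"{presigned_url}\n\n"
        "The link expires, so please download it soon."
    )
    return msg
```

This keeps the email tiny regardless of whether the object is 1 MB or 1 GB.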
We are using S3 to let users download large zip files a limited number of times. We are searching for a better method of counting downloads than just counting button clicks.
Is there any way we can give our user a signed URL to temporarily download the file (like we are doing now) and check that token with Amazon to make sure the file was successfully downloaded?
Please let me know what you think.
You could use Amazon S3 Server Access Logging:
In order to track requests for access to your bucket, you can enable access logging. Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and error code, if any.
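Once logging is enabled, you can tally downloads from the log records. A rough sketch in Python; the field extraction and the sample lines used to exercise it are simplified and fabricated for illustration, as real records contain many more fields:

```python
import re

# Matches the operation and key fields of an S3 server access log record,
# e.g. "... REST.GET.OBJECT big-file.zip ...".
_OP_KEY = re.compile(r"(REST\.[A-Z]+\.[A-Z_]+) (\S+)")

def count_downloads(log_lines):
    """Count GET.OBJECT requests per object key across log records."""
    counts = {}
    for line in log_lines:
        m = _OP_KEY.search(line)
        if m and m.group(1) == "REST.GET.OBJECT":
            key = m.group(2)
            counts[key] = counts.get(key, 0) + 1
    return counts
```

Note the logs are delivered on a best-effort basis with some delay, so this gives an after-the-fact count rather than a real-time gate.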
There is no automatic ability to limit the number of downloads via an Amazon S3 pre-signed URL.
A pre-signed URL limits access based upon time, but cannot limit based upon quantity.
The closest option would be to provide a very small time window for the pre-signed URL, with the assumption that only one download would happen within that time window.
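For illustration, this is roughly the SigV4 query signing that the SDKs and aws s3 presign perform under the hood; a pure-stdlib Python sketch with placeholder credentials (in practice you would use an SDK rather than hand-rolling this):

```python
import datetime
import hashlib
import hmac
import urllib.parse

def _sign(key, msg):
    return hmac.new(key, msg.encode(), hashlib.sha256).digest()

def presign_get(bucket, key, access_key, secret_key, region="us-east-1",
                expires=60, now=None):
    """Build a SigV4 pre-signed GET URL for an S3 object (stdlib only)."""
    now = now or datetime.datetime.utcnow()
    amz_date = now.strftime("%Y%m%dT%H%M%SZ")
    datestamp = now.strftime("%Y%m%d")
    host = f"{bucket}.s3.{region}.amazonaws.com"
    scope = f"{datestamp}/{region}/s3/aws4_request"
    params = {
        "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
        "X-Amz-Credential": f"{access_key}/{scope}",
        "X-Amz-Date": amz_date,
        "X-Amz-Expires": str(expires),   # lifetime in seconds
        "X-Amz-SignedHeaders": "host",
    }
    # Canonical query string: sorted, fully URL-encoded parameters.
    query = "&".join(
        f"{urllib.parse.quote(k, safe='')}={urllib.parse.quote(v, safe='')}"
        for k, v in sorted(params.items())
    )
    canonical = "\n".join([
        "GET", f"/{key}", query, f"host:{host}", "", "host",
        "UNSIGNED-PAYLOAD",
    ])
    string_to_sign = "\n".join([
        "AWS4-HMAC-SHA256", amz_date, scope,
        hashlib.sha256(canonical.encode()).hexdigest(),
    ])
    # Derive the signing key from the secret key via the HMAC chain.
    k = _sign(_sign(_sign(_sign(("AWS4" + secret_key).encode(),
              datestamp), region), "s3"), "aws4_request")
    signature = hmac.new(k, string_to_sign.encode(),
                         hashlib.sha256).hexdigest()
    return f"https://{host}/{key}?{query}&X-Amz-Signature={signature}"
```

The expiry is baked into the signed query string, which is why S3 can enforce a time window but has no notion of a download count.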
Let's say that I want to create a simplistic version of Dropbox's website, where you can sign up and perform operations on files such as upload, download, delete, rename, etc. - pretty much like in this question. I want to use Amazon S3 to store the files. This is all quite easy with the AWS SDK, except for one thing: security.
Obviously user A should not be allowed to access user B's files. I can kind of add "security through obscurity" by handling permissions in my application, but it is not good enough to have public files and rely on that, because then anyone with the right URL could access files that they should not be able to. Therefore I have searched the AWS documentation for a solution, but I have been unable to find a suitable one. The problem is that everything I could find relates to permissions based on AWS accounts, and it is not appropriate for me to create many thousands of IAM users. I considered IAM users, bucket policies, S3 ACLs, pre-signed URLs, etc.
I could indeed solve this by authorizing everything in my application and setting permissions on my bucket so that only my application can access the objects, and then having users download files through my application. However, this would put increased load on my application, where I really want people to download the files directly through Amazon S3 to make use of its scalability.
Is there a way that I can do this? To clarify, I want to give a given user in my application access to only a subset of the objects in Amazon S3, without creating thousands of IAM users, which is not so scalable.
Have the users download the files with the help of your application, but not through your application.
Provide each download link as a link that points to an endpoint of your application. When a request comes in, evaluate whether the user is authorized to download the file, based on the user's session data.
If not, return an error response.
If so, pre-sign a download URL for the object, with a very short expiration time (e.g. 5 seconds) and redirect the user's browser with 302 Found and set the signed URL in the Location: response header. As long as the download is started before the signed URL expires, it won't be interrupted if the URL expires while the download is already in progress.
If the connection to your app and the scheme of the signed URL are both HTTPS, this provides a substantial level of security against unauthorized downloads, at very low resource cost.
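The flow above can be sketched as a single handler; the `presign` helper and the session shape are hypothetical stand-ins for your app's own pieces:

```python
def download_redirect(session, object_key, presign):
    """Return (status, headers, body) for a download request.

    `session` is the user's session data; `presign` is any function
    returning a short-lived pre-signed URL for the object.
    """
    if object_key not in session.get("allowed_keys", ()):
        return 403, {}, b"Forbidden"
    # Very short expiry per the text; the download only has to START
    # before the URL expires.
    url = presign(object_key, expires=5)
    return 302, {"Location": url}, b""
```

The app only performs the cheap authorization check and signing; the bytes flow directly from S3 to the browser.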
I am trying to implement a mechanism such that an anonymous AWS user can write to a specific S3 bucket that belongs to me, using a ticket provided by me (such as a random string). There may be restrictions on the object size, and there should be a time limit (for example, the user must write to the bucket within 1 hour after I issue the ticket). Is there any way to implement such a thing using AWS S3 access policies?
Thanks in advance!
Yes, this is possible using the Post Object API call on S3.
You'll need to generate and sign a security policy and pass it along with the upload. This policy will contain rules as to what types of files can be uploaded, restrictions on file size, location in your bucket where new files can be uploaded, an expiration date for the policy, etc.
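A rough sketch of such a policy document in Python; the bucket, key prefix, and signing key are placeholders (the key would be your SigV4-derived signing key, and the base64 policy plus signature go into the POST form fields):

```python
import base64
import datetime
import hashlib
import hmac
import json

def build_post_policy(bucket, prefix, max_bytes, expires_minutes=60,
                      now=None):
    """Build the base64 POST policy for a browser-based S3 upload."""
    now = now or datetime.datetime.utcnow()
    expiration = now + datetime.timedelta(minutes=expires_minutes)
    policy = {
        "expiration": expiration.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "conditions": [
            {"bucket": bucket},
            ["starts-with", "$key", prefix],         # confine upload location
            ["content-length-range", 0, max_bytes],  # object size restriction
        ],
    }
    return base64.b64encode(json.dumps(policy).encode()).decode()

def sign_post_policy(policy_b64, signing_key):
    # signing_key is the SigV4 key derived from your secret access key.
    return hmac.new(signing_key, policy_b64.encode(),
                    hashlib.sha256).hexdigest()
```

The expiration field gives you the "valid for 1 hour after issuing the ticket" behaviour, and content-length-range caps the object size.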
To learn more, check out this example as well as this article.