I know I can use PowerShell to initiate and manage a BITS (Background Intelligent Transfer Service) download from my server over VPN, and I am looking to do that to stage large install resources locally while a regular user is logged on, ready for use in software installs and updates later. However, I would also like to support cloud services for the download repository, as I foresee some firms no longer having centralized servers and VPN connections, just cloud repositories and a distributed workforce. To that end I have tested using Copy-S3Object from the AWS Tools for PowerShell, and that works, but it isn't throttleable so far as I can tell. So I wonder: is there a way to configure my AWS bucket so that I can use BITS to do the download, while still being constrained by AWS credentials?
And if there is, is the technique valid across multiple cloud services, such as Azure and Google Cloud? I would LIKE to be cloud platform agnostic if possible.
I have found this thread, which seems to suggest that creating presigned URLs would work. But my understanding of that process is, well, nonexistent. I am currently creating credentials for every user. Do I basically assign those users to an AWS group, give that group some permissions, and then use PowerShell to sign a URL with the particular user's credentials, and that URL is what BITS uses? So a user who has been removed from the group would no longer be able to create signed URLs, and so would no longer be able to access the available resources?
Alternatively, if there is a way to throttle Copy-S3Object that would work too. But so far as I can tell that is not an option.
Not sure of a way to throttle Copy-S3Object, but you can definitely BITS a pre-signed S3 URL.
For example, if you have your AWS group with users a/b/c in there, and the group has a policy attached that allows the relevant access to your bucket, those users a/b/c will be able to create pre-signed URLs for objects in that bucket. The following creates a pre-signed URL for an object called 'BITS-test.txt':
aws s3 presign s3://yourbucketnamehere/BITS-test.txt
That will generate a pre-signed URL that can be passed into an Invoke-WebRequest command.
This URL is not restricted to only those users though, anybody with this URL will be able to download the object - but only users a/b/c (or anyone else with access to that bucket) will be able to create these URLs. If you don't want users a/b/c to be able to create these URLs anymore, then you can just remove them from the AWS group like you mentioned.
You can also add an expiry param to the presign command, for example --expires-in 60, which keeps the link valid for only that period of time. Note that the value is in seconds (so 60 here is one minute); the default is 3600 (one hour), and the maximum with Signature Version 4 is 604800 (seven days).
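Since the original question was about PowerShell and BITS, the whole flow can also be done there without the CLI. A minimal sketch, assuming the AWS Tools for PowerShell are installed and credentials are already configured for the signing user; the bucket name, key, and destination path are examples:

```powershell
# Presign with the AWS Tools for PowerShell instead of the aws CLI
Import-Module AWS.Tools.S3
Import-Module BitsTransfer

# Create a pre-signed GET URL valid for one hour
$url = Get-S3PreSignedURL -BucketName 'yourbucketnamehere' `
                          -Key 'BITS-test.txt' `
                          -Verb GET `
                          -Expire (Get-Date).AddHours(1)

# Hand the URL to BITS. A background priority such as Low transfers
# using idle network bandwidth, which gives you the throttling that
# Copy-S3Object lacks.
Start-BitsTransfer -Source $url `
                   -Destination 'C:\Staging\BITS-test.txt' `
                   -Priority Low
```

S3 supports the HEAD and Range requests BITS relies on, so the transfer can also survive interruptions as usual, provided the URL has not yet expired.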
Related
I'm working to build a web portal that displays the contents of an S3 bucket to authenticated users. Users would be allowed to download objects via presigned URLs so the content/bandwidth wouldn't need to be ushered through the web portal and credentials wouldn't need to be passed to the client. This works well for single objects. However, I'm uncertain how to leverage presigned URLs when users want to download many objects e.g. all objects with a specific prefix. It seems the issue may be more of a limitation with standard web technologies i.e. multiple downloads triggered by a single action.
I've seen some apps dynamically create a .zip containing all the objects, but I'm trying to avoid moving data through the portal. I also found AWS POST Policies leveraging condition keys like 'starts-with' but it doesn't look like a POST Policy will help with getting objects. The STS AssumeRole could be used to generate temporary/limited credentials to download the objects of a specific prefix, but the user would still need to download each object. Am I overlooking a better solution?
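For the "many objects under one prefix" case, one pragmatic middle ground is to presign each object server-side and hand the client the list of URLs. A sketch with the AWS Tools for PowerShell; the bucket and prefix names are made up:

```powershell
# Generate one short-lived pre-signed URL per object under a prefix.
# Assumes AWS Tools for PowerShell with credentials already configured.
Import-Module AWS.Tools.S3

$bucket = 'my-portal-bucket'
$prefix = 'customer-123/'

# List every object under the prefix, then presign each key
$urls = Get-S3Object -BucketName $bucket -Prefix $prefix |
    ForEach-Object {
        Get-S3PreSignedURL -BucketName $bucket `
                           -Key $_.Key `
                           -Verb GET `
                           -Expire (Get-Date).AddMinutes(15)
    }

# $urls now holds one download link per object; the portal can return
# them and let the browser fetch each file directly from S3.
```

This keeps the bandwidth off the portal, though the client still triggers one download per object, as the question notes.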
I'm building an app that authenticates users and then returns their generated files.
I'm using Amazon S3 to store those files. Public access is blocked, and the bucket policy is set so that only an IAM user can access the main bucket.
All I need is to return these files to authenticated users.
I see that one way to achieve this is creating a presigned URL, and it works, but such a URL is usable by anyone who has the link.
I know I can set a time limit like 1 minute, but it doesn't resolve my problem completely. Maybe I can solve this by using Amazon Cognito, but it forces me to use their Authentication flow, which I don't want to (I plan to use Firebase Auth).
If you know Firebase Cloud Storage then you know that I can easily achieve this through Firebase Storage Rules.
So my questions are:
How can I achieve this in Amazon S3? I mean, is there an option to validate in the backend?
Is this really possible, or am I forced to use services such as Google Cloud Storage?
I'm not sure if this is the appropriate use case, so please tell me what to look for if I'm incorrect in my assumption of how to do this.
What I'm trying to do:
I have an S3 bucket with different 'packs' that users can download. Upon their purchase, they are given a user role in WordPress. I have an S3 browser set up via PHP that makes requests to the bucket for info.
Based on their 'role', it will only show files that match a prefix (whole-pack users see everything; single-product users only see the single-product prefix).
In that way, the server will be sending the files on behalf of the user, changing IAM roles based on the user's permission level. Do I have to set it up that way? Can I just check the WP role and specify an endpoint or query that notes the allowed prefixes?
Pack users see /
Individual users see /--prefix/
If that makes sense
Thanks in advance! I've never used AWS, so this is all new to me. :)
This sounds too complex. It's possible to do with AWS STS but it would be extremely fragile.
I presume you're hiding the actual S3 bucket from end users and are streaming through your PHP application? If so, it makes more sense to do any role-based filtering in the PHP application, as you have far more logic available to you there. IAM is granular, but restricting resources in S3 is going to be funky, and there's always a chance you'll get something wrong and expose the incorrect downloads.
Rather do this inside your app:
establish the role you've granted
issue the S3 ls command filtered by the role - i.e. if the role permits only --prefix, issue the ls command so that it only lists files matching --prefix
don't expose files in the bucket globally - only your app should have access to the S3 bucket - that way people also can't share links once they've downloaded a pack.
this has the added benefit of not encoding your S3 bucket structure in IAM, and keeps your decision logic isolated to code.
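The app-side filtering described above can be sketched roughly as follows. The role names, bucket, and prefixes are all illustrative, and this uses the AWS Tools for PowerShell for brevity, though the same pattern applies to the PHP SDK:

```powershell
# Role-based prefix filtering done in the app rather than in IAM.
Import-Module AWS.Tools.S3

# Map each application role to the prefix it is allowed to see
$roleToPrefix = @{
    'whole-pack'     = ''              # empty prefix: sees everything
    'single-product' = 'product-a/'    # sees only this prefix
}

$userRole = 'single-product'           # would come from the WordPress role lookup
$prefix   = $roleToPrefix[$userRole]

# Only list (and later presign) keys the role is allowed to see
Get-S3Object -BucketName 'packs-bucket' -Prefix $prefix |
    Select-Object -ExpandProperty Key
```

Only the app holds credentials for the bucket, so nothing outside this listing is reachable by the user.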
There are basically three ways you can grant access to private content in Amazon S3.
Option 1: IAM credentials
You can add a policy to an IAM User so that they can access private content. However, such credentials should only be used by staff in your own organization; they should not be used to grant access to application users.
Option 2: Temporary credentials via STS
Your application can generate temporary credentials via the AWS Security Token Service. These credentials can be given specific permissions and are valid for a limited time period. This is ideal for granting mobile apps access to Amazon S3 because they can communicate directly with S3 without having to go via the back-end app. The credentials would only be granted access to resources they are permitted to use.
These types of credentials can also be used by web applications, where the web apps make calls directly to AWS services (eg from Node/JavaScript in the browser). However, this doesn't seem suitable for your WordPress situation.
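For completeness, Option 2 can be sketched with the AWS Tools for PowerShell. The role ARN, session name, and inline session policy here are made up, and this assumes a role that the back-end is allowed to assume:

```powershell
# Mint temporary, down-scoped credentials with STS (AssumeRole).
Import-Module AWS.Tools.SecurityToken

# Session policy that narrows the assumed role to one user's prefix
$sessionPolicy = @'
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-app-bucket/user-123/*"
  }]
}
'@

$resp = Use-STSRole -RoleArn 'arn:aws:iam::123456789012:role/app-download-role' `
                    -RoleSessionName 'user-123' `
                    -Policy $sessionPolicy

# $resp.Credentials contains AccessKeyId, SecretAccessKey and SessionToken,
# which the client can use to call S3 directly until they expire.
```

The effective permissions are the intersection of the role's policy and the session policy, so the credentials can never grant more than the role itself allows.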
Option 3: Pre-Signed URLs
Imagine a photo-sharing application where users can access their private photos, and users can also share photos with other users. When a user requests access to a particular photo (or when the back-end app is creating an HTML page that uses a photo), the app can generate a pre-signed URL that grants temporary access to an Amazon S3 object.
Each pre-signed URL gives access only to a single S3 object and only for a selected time period (eg 5 minutes). This means that all the permission logic for whether a user is entitled to access a file can be performed in the back-end application. When the back-end application provides a pre-signed URL to the user's browser, the user can access the content directly from Amazon S3 without going via the back-end.
See: Amazon S3 pre-signed URLs
Your situation sounds suitable for Option #3. Once you have determined that a user is permitted to access a particular file in S3, your app can generate the pre-signed URL and include it as a link (or even in <img src=...> tags). The user can then download the file. There is no need to use IAM Roles in this process.
Can I allow a 3rd party file upload to an S3 bucket without using IAM? I would like to avoid the hassle of sending them credentials for an AWS account, but still take advantage of the S3 UI. I have only found solutions for one or the other.
The pre-signed URL option sounded great but appears to only work with their SDKs, and I'm not about to tell my client to install Python on their computer to upload a file.
The browser-based upload requires me to make my own front-end HTML form and run it on a server just to upload (lol).
Can I not simply create a pre-signed URL which navigates the user to the S3 console and allows them to upload before the expiration time? Of course, making the bucket public is not an option either. Why is this so complicated!
Management Console
The Amazon S3 management console will only display S3 buckets that are associated with the AWS account of the user. Also, it is not possible to limit the buckets displayed (it will display all buckets in the account, even if the user cannot access them).
Thus, you certainly don't want to give them access to your AWS management console.
Pre-Signed URL
Your user does not require the AWS SDK to use a pre-signed URL. Rather, you must run your own system that generates the pre-signed URL and makes it available to the user (eg through a web page or API call).
Web page
You can host a static upload page on Amazon S3, but it will not be able to authenticate the user. Since you only wish to provide access to specific people, you'll need some code running on the back-end to authenticate them.
Generate...
You ask: "Can I not simply create a pre-signed url which navigates the user to the S3 console and allows them to upload before expiration time?"
Yes and no. Yes, you can generate a pre-signed URL. However, it cannot be used with the S3 console (see above).
Why is this so complicated?
Because security is important.
So, what to do?
A few options:
Make a bucket publicly writable, but not publicly readable. Tell your customer how to upload. The downside is that anyone could upload to the bucket (if they know about it), so it is only security by obscurity. But, it might be a simple solution for you.
Generate a long-lived pre-signed URL. With Signature Version 4 a pre-signed URL can be valid for up to 7 days, so you would need to re-issue it periodically. Provide it to them, and they can upload (eg via a static HTML page that you give them).
Generate some IAM User credentials for them, then have them use a utility like the AWS Command-Line Interface (CLI) or Cloudberry. Give them just enough credentials for upload access. This assumes you only have a few customers that need access.
Bottom line: Security is important. Yet, you wish to "avoid the hassle of sending them credentials", nor do you wish to run a system to perform the authentication checks. You can't have security without doing some work, and the cost of poor security will be much more than the cost of implementing good security.
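The pre-signed upload option above needs nothing from the customer beyond the URL itself. A sketch with the AWS Tools for PowerShell; the bucket, key, and file path are illustrative:

```powershell
# Run by you, the bucket owner: presign an upload (PUT) URL.
Import-Module AWS.Tools.S3

$uploadUrl = Get-S3PreSignedURL -BucketName 'client-dropbox-bucket' `
                                -Key 'incoming/report.pdf' `
                                -Verb PUT `
                                -Expire (Get-Date).AddDays(7)   # SigV4 maximum

# Run by the customer, with no AWS credentials or SDK at all:
Invoke-WebRequest -Uri $uploadUrl -Method Put -InFile 'C:\report.pdf'
```

The customer could equally use curl or a small HTML form you supply; the signature in the URL is the only credential involved.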
You could deploy a Lambda function to generate a signed URL, then use that URL to upload the file. Here is an example:
https://aws.amazon.com/blogs/compute/uploading-to-amazon-s3-directly-from-a-web-or-mobile-application/
Let's say that I want to create a simplistic version of Dropbox' website, where you can sign up and perform operations on files such as upload, download, delete, rename, etc. - pretty much like in this question. I want to use Amazon S3 for the storage of the files. This is all quite easy with the AWS SDK, except for one thing: security.
Obviously user A should not be allowed to access user B's files. I can kind of add "security through obscurity" by handling permissions in my application, but it is not good enough to have public files and rely on that, because then anyone with the right URL could access files that they should not be able to. Therefore I have searched and looked through the AWS documentation for a solution, but I have been unable to find a suitable one. The problem is that everything I could find relates to permissions based on AWS accounts, and it is not appropriate for me to create many thousands of IAM users. I considered IAM users, bucket policies, S3 ACLs, pre-signed URLs, etc.
I could indeed solve this by authorizing everything in my application and setting permissions on my bucket so that only my application can access the objects, and then having users download files through my application. However, this would put increased load on my application, where I really want people to download the files directly through Amazon S3 to make use of its scalability.
Is there a way that I can do this? To clarify, I want to give a given user in my application access to only a subset of the objects in Amazon S3, without creating thousands of IAM users, which is not so scalable.
Have the users download the files with the help of your application, but not through your application.
Provide each download as a link that points to an endpoint of your application. When each request comes in, evaluate whether the user is authorized to download the file, using the user's session data.
If not, return an error response.
If so, pre-sign a download URL for the object with a very short expiration time (e.g. 5 seconds), redirect the user's browser with 302 Found, and set the signed URL in the Location: response header. As long as the download starts before the signed URL expires, it won't be interrupted if the URL expires while the download is already in progress.
If the connection to your app and the scheme of the signed URL are both HTTPS, this provides a substantial level of security against unauthorized downloads, at very low resource cost.
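The redirect handler described above amounts to a few lines regardless of framework. A framework-agnostic sketch in PowerShell, where Test-UserCanDownload is a hypothetical placeholder for your own session-based authorization check and the bucket name is made up:

```powershell
# Sketch of the authorize-then-redirect flow for a single download request.
Import-Module AWS.Tools.S3

function Get-DownloadRedirect {
    param($UserSession, $ObjectKey)

    # Placeholder: your app's own check against the user's session data
    if (-not (Test-UserCanDownload $UserSession $ObjectKey)) {
        return @{ Status = 403 }
    }

    # Very short-lived URL: just long enough for the browser to follow it
    $url = Get-S3PreSignedURL -BucketName 'user-files-bucket' `
                              -Key $ObjectKey `
                              -Verb GET `
                              -Expire (Get-Date).AddSeconds(5)

    # 302 Found with the signed URL in the Location header
    return @{ Status = 302; Headers = @{ Location = $url } }
}
```

Because the URL expires in seconds, sharing the link after the fact is useless, yet a download that has already begun runs to completion.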