How to enable Amazon S3 file protection

I am developing a web application along with two mobile applications (Android & iOS) for the same product. Currently the uploaded files are open to all, which means that anyone with the direct image link can open it in a web browser.
How can I protect the files, or limit access to the users of my mobile applications or web application?
NB: As a beginner, I am not sure which configuration details should be provided along with the question. If I need to give more details on my S3 config, please say so and I will add them to the question to make it more meaningful. Sorry for the inconvenience.

I think an easier approach than pre-signed URLs would be to use Amazon Cognito to provide access to AWS resources to your trusted applications, even for unauthenticated users.
To do this you would create an Identity Pool for your application (you only need one pool for all three of your clients) and then configure it so that when a client provides a valid Identity Pool ID it can assume an IAM role with permissions to access AWS resources.
You then control what S3 bucket permissions the assumed IAM role has - you could allow unauthenticated users to read the S3 objects, or force them to create accounts to be able to read/write to S3 buckets (this is very easy with Cognito - users can sign up with Facebook, Google, their own email, etc.).
There's a step-by-step guide here for setting up an identity pool with Cognito, and then allowing unauthenticated users to assume an IAM role that can access the contents of an S3 bucket.
The above grants the same set of permissions to all guest users that have assumed an IAM role through Amazon Cognito by identifying themselves as part of an identity pool.
Edit: I should point out that if you authenticate via Cognito, you'll need to access the S3 bucket through the S3 Transfer Manager from the AWS SDK.
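To make the guest flow concrete, here is a rough boto3 sketch of the same calls the mobile SDKs wrap; the identity pool ID, region, bucket and key below are placeholders, not values from the question:

```python
# A minimal sketch of the unauthenticated-guest flow. Assumes the identity pool
# has "unauthenticated identities" enabled and its guest IAM role allows s3:GetObject.
import boto3

IDENTITY_POOL_ID = "us-east-1:00000000-0000-0000-0000-000000000000"  # placeholder
REGION = "us-east-1"

cognito = boto3.client("cognito-identity", region_name=REGION)

# 1. Obtain a Cognito identity ID for an unauthenticated (guest) user.
identity = cognito.get_id(IdentityPoolId=IDENTITY_POOL_ID)

# 2. Exchange the identity ID for temporary AWS credentials
#    (these map to the identity pool's unauthenticated IAM role).
creds = cognito.get_credentials_for_identity(IdentityId=identity["IdentityId"])["Credentials"]

# 3. Use the temporary credentials to read a private object.
s3 = boto3.client(
    "s3",
    region_name=REGION,
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretKey"],
    aws_session_token=creds["SessionToken"],
)
obj = s3.get_object(Bucket="example-private-bucket", Key="images/photo1.jpg")
```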

Related

Is it possible to use Amazon Cognito groups to set permissions on AWS resources such as Amazon DynamoDB and Amazon S3?

In my application I want users to be able to create an organization (e.g. OrgA) and then have other users sign up under that organization using either an invite code or token. Users in OrgA should have access to an Amazon S3 directory (which stores images and files) and to a database table created for that organization.
I could not find a solution online on how to implement this, and was wondering whether using Amazon Cognito groups is a good way to meet these requirements.
This is supported by the Amazon Cognito Service. That is, you can use Amazon Cognito to control permissions for different user groups in your app. This ensures that users have appropriate access to backend resources, determined by the group they belong to. For more information, see Building fine-grained authorization using Amazon Cognito User Pools groups.
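As a hedged illustration of how a per-organization group could be tied to an IAM role, here is a rough boto3 sketch; the user pool ID, role ARN, group name and username are placeholders, and the policy you attach to the role would depend on your S3 prefix and DynamoDB table layout:

```python
# Sketch: create a Cognito User Pool group backed by an IAM role and add a user to it.
# For the group's role to be used for AWS credentials, the identity pool's role
# selection must be configured to choose the role from the token.
import boto3

cognito_idp = boto3.client("cognito-idp", region_name="us-east-1")

USER_POOL_ID = "us-east-1_EXAMPLE"                              # placeholder
ORG_ROLE_ARN = "arn:aws:iam::123456789012:role/OrgA-members"    # placeholder

# Create a group for the organization, backed by an IAM role.
cognito_idp.create_group(
    GroupName="OrgA",
    UserPoolId=USER_POOL_ID,
    Description="Members of organization OrgA",
    RoleArn=ORG_ROLE_ARN,
)

# When a user redeems an invite code, add them to the organization's group.
cognito_idp.admin_add_user_to_group(
    UserPoolId=USER_POOL_ID,
    Username="alice@example.com",
    GroupName="OrgA",
)
```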

Authenticate S3 requests based on both the file and the request

Let's say I want to store some files on S3 for each user of my website. Later I want to authenticate each request to S3 to make sure that the user has access to the files she is requesting. I guess this can't be done using presigned URLs or signed cookies (using CloudFront). So which Amazon service should I use for that? What is the simplest way to achieve this?
Let's say I'm authenticating users using JWTs, and it is possible to tell whether a user has access to a file or not from the filename and the contents of the JWT.
I'm sorry that I don't have enough reputation to comment so I'll post an answer here.
One solution is:
AWS Cognito (Federated Identities)
S3 (one bucket)
S3 bucket policies allow you to restrict access to "user folders" (equivalent here to an "identity") by prefix, like yourbucket/<cognito_identity_id>/*. Each user of your website will have their own federated identity.
When you create and configure the identity pool in AWS, define a custom authentication provider and authenticate users "by the developer" in your backend.
Also, associate the authenticated identities with one IAM role that has access to the S3 bucket where you will keep the data. The policy will take care of allowing each user to access only their own files and nobody else's. (See the referenced links for a policy example and more; a rough sketch follows them below.)
Amazon S3: Allows Amazon Cognito Users to Access Objects in Their Bucket
Access to User level folders using Amazon S3 and Cognito
Developer Authenticated Identities (Identity Pools)
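For illustration, here is a rough boto3 sketch of the per-identity policy described in the first link above, attached to the identity pool's authenticated role; the bucket name and role name are placeholders:

```python
# Sketch: attach an inline policy to the authenticated-identities role so each
# Cognito identity can only touch keys under its own identity-ID prefix.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingOwnPrefix",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::yourbucket",
            "Condition": {
                "StringLike": {"s3:prefix": ["${cognito-identity.amazonaws.com:sub}/*"]}
            },
        },
        {
            "Sid": "AllowReadWriteOwnObjects",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "arn:aws:s3:::yourbucket/${cognito-identity.amazonaws.com:sub}/*",
        },
    ],
}

iam.put_role_policy(
    RoleName="CognitoAuthenticatedRole",          # the role mapped to authenticated identities
    PolicyName="per-identity-s3-folder-access",
    PolicyDocument=json.dumps(policy),
)
```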

Limited access to AWS S3 bucket

I am trying to understand access security as it relates to Amazon S3. I want to host some files in an S3 bucket, using CloudFront to access it via my domain. I need to limit access to certain companies/individuals. In addition I need to manage that access individually.
A second access model is project based, where I need to make a library of files available to a particular project team, and I need to be able to add and remove team members in an ad hoc manner, and then close access for the whole project at some point. The bucket in question might be the same for both scenarios.
I assume something like this is possible in AWS, but all I can find (and understand) on the AWS site involves using IAM to control access via the AWS console. I don't see any indication that I could create an IAM user, add them to an IAM group, give the group read-only access to the bucket, and then provide the name and password via System.Net.WebClient in PowerShell to actually download the available file. Am I missing something - is this possible? Or am I incorrect in my assumption that this can be done with AWS?
I did find "Amazon CloudFront vs. S3 --> restrict access by domain? - Stack Overflow", which talks about using CloudFront to limit access by domain, but that won't work in a work-from-home scenario, as those home machines won't be on the corporate domain, yet the corporate BIM Manager needs to manage access to content libraries for the WFH staff. I REALLY hope I am not running into an example of AWS just not being ready for the current reality.
Content stored in Amazon S3 is private by default. There are several ways that access can be granted:
Use a bucket policy to make the entire bucket (or a directory within it) publicly accessible to everyone. This is good for websites where anyone can read the content.
Assign permissions to IAM Users to grant access only to users or applications that need access to the bucket. This is typically used within your organization. Never create an IAM User for somebody outside your organization.
Create presigned URLs to grant temporary access to private objects. This is typically used by applications to grant web-based access to content stored in Amazon S3.
To provide an example for pre-signed URLs, imagine that you have a photo-sharing website. Photos provided by users are private. The flow would be:
A user logs in. The application confirms their identity against a database or an authentication service (eg Login with Google).
When the user wants to view a photo, the application first checks whether they are entitled to view it (eg it is their photo). If so, the application generates a pre-signed URL and returns it as a link, or embeds the link in an HTML page (eg in an <img> tag).
When the user accesses the link, the browser sends the URL request to Amazon S3, which verifies the encrypted signature in the signed URL. If it is correct and the link has not yet expired, the photo is returned and displayed in the web browser.
Users can also share photos with other users. When another user accesses a photo, the application checks the database to confirm that it was shared with the user. If so, it provides a pre-signed URL to access the photo.
This architecture has the application perform all of the logic around access permissions. It is very flexible, since you can write whatever rules you want, and then the user is sent to Amazon S3 to obtain the file. Think of it like buying theater tickets online -- you just show the ticket at the door and you are allowed to sit in the seat. That's what Amazon S3 is doing -- it is checking the ticket (signed URL) and then giving you access to the file.
See: Amazon S3 pre-signed URLs
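As a minimal sketch of the URL-generation step, assuming boto3 and placeholder bucket/key names:

```python
# Sketch: generate a time-limited pre-signed URL for one private object.
# The application should only call this after confirming the logged-in user
# is entitled to view the photo.
import boto3

s3 = boto3.client("s3")

def presigned_photo_url(bucket: str, key: str, expires_seconds: int = 300) -> str:
    """Return a URL that grants read access to one private object until it expires."""
    return s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires_seconds,
    )

# e.g. embed the result in an <img> tag returned to the browser
url = presigned_photo_url("example-photo-bucket", "users/42/photo.jpg")
```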
Mobile apps
Another common architecture is to generate temporary credentials using the AWS Security Token Service (STS). This is typically done with mobile apps. The flow is:
A user logs into a mobile app. The app sends the login details to a back-end application, which verifies the user's identity.
The back-end app then uses AWS STS to generate temporary credentials and assigns permissions to the credentials, such as being permitted to access a certain directory within an Amazon S3 bucket. (The permissions can actually be for anything in AWS, such as launching computers or creating databases.)
The back-end app sends these temporary credentials back to the mobile app.
The mobile app then uses those credentials to make calls directly to Amazon S3 to access files.
Amazon S3 checks the credentials being used and, if they have permission for the files being requested, grants access. This can be done for uploads, downloads, listing files, etc.
This architecture takes advantage of the fact that mobile apps are quite powerful and they can communicate directly with AWS services such as Amazon S3. The permissions granted are based upon the user who logs in. These permissions are determined by the back-end application, which you would code. Think of it like a temporary employee who has been granted a building access pass for the day, but they can only access certain areas.
See: IAM Role Archives - Jayendra's Blog
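A rough sketch of the back-end step that mints the temporary credentials, using STS GetFederationToken via boto3; the bucket name, prefix layout and duration are assumptions:

```python
# Sketch: the back end generates temporary, directory-scoped credentials for a
# logged-in mobile user and returns them to the app.
import json
import boto3

sts = boto3.client("sts")

def credentials_for_user(user_id: str) -> dict:
    scoped_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                # Limit the credentials to this user's directory only.
                "Resource": f"arn:aws:s3:::example-app-bucket/{user_id}/*",
            }
        ],
    }
    resp = sts.get_federation_token(
        Name=f"app-user-{user_id}",
        Policy=json.dumps(scoped_policy),
        DurationSeconds=3600,
    )
    # Contains AccessKeyId, SecretAccessKey, SessionToken, Expiration.
    return resp["Credentials"]
```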
The above architectures are building blocks for how you wish to develop your applications. Every application is different, just like the two use-cases in your question. You can securely incorporate Amazon S3 in your applications while maintaining full control of how access is granted. Your applications can then concentrate on the business logic of controlling access, without having to actually serve the content (which is left up to Amazon S3). It's like selling the tickets without having to run the theater.
You ask whether Amazon S3 is "ready for the current reality". Many of the popular web sites you use every day run on AWS, and you probably never realize it.
If you are willing to issue IAM User credentials (max 5000 per account), the steps would be:
Create an IAM User for each user and select Programmatic access
This will provide an Access Key and Secret Key that you can provide to each user
Attach permissions to each IAM User, or put the users in an IAM Group and attach permissions to the IAM Group
Each user can run aws configure on their computer (using the AWS Command-Line Interface (CLI)) to store their Access Key and Secret Key
They can then use the AWS CLI to upload/download files
If you want the users to be able to access via the Amazon S3 management console, you will need to provide some additional permissions: Grant a User Amazon S3 Console Access to Only a Certain Bucket
Alternatively, users could use a program like Cyberduck for an easy drag-and-drop interface to Amazon S3. Cyberduck will also ask for the Access Key and Secret Key.
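For completeness, the same steps can be scripted; a rough boto3 sketch follows, with placeholder user, group and bucket names:

```python
# Sketch: create an IAM user with programmatic access and put them in a group
# with read-only access to one bucket. The keys printed here are what each
# person would feed into `aws configure`.
import json
import boto3

iam = boto3.client("iam")

# Create the user and programmatic credentials.
iam.create_user(UserName="external-reader-1")
key = iam.create_access_key(UserName="external-reader-1")["AccessKey"]
print("AccessKeyId:", key["AccessKeyId"])
print("SecretAccessKey:", key["SecretAccessKey"])  # shown only once; share securely

# Put the user in a group that has read-only access to the bucket.
iam.create_group(GroupName="bucket-readers")
iam.add_user_to_group(GroupName="bucket-readers", UserName="external-reader-1")
iam.put_group_policy(
    GroupName="bucket-readers",
    PolicyName="read-only-example-bucket",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": "arn:aws:s3:::example-bucket"},
            {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::example-bucket/*"},
        ],
    }),
)
```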

Overcome 1000 bucket limit in S3 / use access points

I have one S3 bucket per customer. Customers are external entities and they don't share data with anyone else. I write to S3 and the customer reads from S3. With this architecture I can only scale to 1,000 buckets, as there is a limit on S3 buckets per account. I was hoping to use access points (APs), creating one AP per customer and putting all the data in one bucket. Each customer can then read their files from the bucket using their AP.
Bucket000001/prefix01 -> customeraccount1
Bucket000001/prefix02 -> customeraccount2
...
S3 access points require you to set a policy for an IAM user at the access point as well as at the bucket level. If I have thousands of IAM users, do I need to set a policy for each of them in the bucket? This would result in one giant policy, and there is a maximum policy size for the bucket, so I may not be able to do that.
Is this the right use case where access points can help?
The recommended approach would be:
Do NOT assign IAM Users to your customers. These types of AWS credentials should only be used by your internal staff and your own applications.
You should provide a web application (or an API) where customers can authenticate against your own user database (or you could use Amazon Cognito to manage authentication).
Once authenticated, the application should grant access either to a web interface to access Amazon S3, or the application should provide temporary credentials for accessing Amazon S3 (more details below).
Do not use one bucket per customer. This is not scalable. Instead, store all customer data in ONE bucket, with each user having their own folder. There is no limit on the amount of data you can store in Amazon S3. This also makes it easier for you to manage and maintain, since it is easier to perform functions across all content rather than having to go into separate buckets. (An exception might be if you wish to segment buckets by customer location (region) or customer type. But do not use one bucket per customer. There is no reason to do this.)
When granting access to Amazon S3, assign permissions at the folder-level to ensure customers only see their own data.
Option 1: Access via Web Application
If your customers access Amazon S3 via a web application, then you can code that application to enforce security at the folder level. For example, when they request a list of files, only display files within their folder.
This security can be managed totally within your own code.
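A simple illustration of that application-layer enforcement (bucket name and prefix layout are assumptions):

```python
# Sketch: the web application only lists and fetches keys under the
# authenticated customer's own prefix.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-customer-data"

def list_customer_files(customer_id: str) -> list[str]:
    """List only the keys that belong to this customer."""
    resp = s3.list_objects_v2(Bucket=BUCKET, Prefix=f"{customer_id}/")
    return [obj["Key"] for obj in resp.get("Contents", [])]

def get_customer_file(customer_id: str, filename: str) -> bytes:
    """Refuse to serve anything outside the customer's folder."""
    if ".." in filename or filename.startswith("/"):
        raise PermissionError("invalid filename")
    key = f"{customer_id}/{filename}"
    return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
```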
Option 2: Access via Temporary Credentials
If your customers use programmatic access (eg using the AWS CLI or a custom app running on their systems), then:
The customer should authenticate to your application (how this is done will vary depending upon how you are authenticating users)
Once authenticated, the application should generate temporary credentials using the AWS Security Token Service (STS). While generating the credentials, grant access to Amazon S3 but specify the customer's folder in the ARN (eg arn:aws:s3:::storage-bucket/customer1/*) so that they can only access content within their folder.
Return these temporary credentials to the customer. They can then use these credentials to make API calls directly to Amazon S3 (eg from the AWS Command-Line Interface (CLI) or a custom app). They will be limited to their own folder.
This approach is commonly done with mobile applications. The mobile app authenticates against the backend, receives temporary credentials, then uses those credentials to interact directly against S3. Thus, the back-end app is only used for authentication.
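A rough sketch of that credential-minting step, using an STS AssumeRole call with a session policy scoped to the customer's folder; the role ARN and bucket name are placeholders:

```python
# Sketch: after the customer authenticates to your application, mint temporary
# credentials limited to arn:aws:s3:::storage-bucket/<customer>/* and return them.
import json
import boto3

sts = boto3.client("sts")

def customer_credentials(customer_id: str) -> dict:
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::storage-bucket/{customer_id}/*",
            },
            {
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": "arn:aws:s3:::storage-bucket",
                "Condition": {"StringLike": {"s3:prefix": [f"{customer_id}/*"]}},
            },
        ],
    }
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/customer-s3-access",  # placeholder
        RoleSessionName=f"customer-{customer_id}",
        Policy=json.dumps(session_policy),   # session policy narrows the role's permissions
        DurationSeconds=3600,
    )
    return resp["Credentials"]  # hand these back to the customer's CLI or app
```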
Examples on YouTube:
5 Minutes to Amazon Cognito: Federated Identity and Mobile App Demo
Overview Security Token Service STS
AWS: Use the Session Token Service to Securely Upload Files to S3
There are a couple of ways to achieve your goal.
One is to use an IAM group to grant access to a folder: create a group, add users to the group, and attach a policy to the group that grants access to the folder.
Another way is to use a policy with ${aws:username} (in the Resource and Condition) to grant access to user-specific folders. Refer to this link: https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
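A hedged sketch of the approach in the linked blog post, expressed as an IAM policy attached to a group via boto3; the group, policy and bucket names are placeholders:

```python
# Sketch: one policy on the group, where ${aws:username} is resolved per
# requesting IAM user, so each user only sees home/<their-username>/.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowListingOwnHomeFolder",
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::my-company-bucket",
            "Condition": {"StringLike": {"s3:prefix": ["home/${aws:username}/*"]}},
        },
        {
            "Sid": "AllowObjectActionsInOwnHomeFolder",
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::my-company-bucket/home/${aws:username}/*",
        },
    ],
}

iam.put_group_policy(
    GroupName="s3-home-folder-users",
    PolicyName="user-specific-folder-access",
    PolicyDocument=json.dumps(policy),
)
```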

Waiting until IAM policy has been applied

I am creating short-lived users on AWS on the fly, and while debugging why these newly created logins tended to fail with an InvalidAccessKeyId error, I realised that just adding a small sleep solved the problem.
Cross-reference: "How long should I wait after applying an AWS IAM policy before it is valid?", regarding the time needed for consistency throughout AWS.
My follow-up question to the above: is there a way to synchronously create a consistent IAM user/policy? Or at least a way to know when they are ready to use?
Amazon IAM is not designed for providing short-lived credentials. You should create IAM Users for long-lived requirements, such as logins for humans and logins for persistent applications.
An IAM User should not be used for application login purposes. For example, if you are creating an Instagram-like application, you should maintain your own database of users or utilize Amazon Cognito for user authentication.
So, how do you then grant users access to AWS resources? For example, if you have an Instagram-like application and you wish to grant application users the ability to upload/download their pictures in Amazon S3 but want to restrict access to a certain bucket and directory?...
The answer is to create temporary credentials using the AWS Security Token Service (STS). Credentials can be created with a given policy for a specific period of time. These credentials work immediately. For example, if an Instagram-like user logs into the app, the backend app could generate temporary credentials that allow the user to access a specific directory within a specific Amazon S3 bucket for a set period of time (eg 15 minutes). These credentials are then passed to the mobile app/web browser for direct access to AWS services.
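A minimal sketch of that flow with boto3; the role ARN, bucket and prefix are placeholders, and the 900-second duration matches the 15-minute example above:

```python
# Sketch: instead of creating an IAM user per login, mint 15-minute credentials
# scoped to one user's picture directory. Unlike a freshly created IAM user,
# these STS credentials are usable as soon as the call returns.
import json
import boto3

sts = boto3.client("sts")

def short_lived_credentials(app_user_id: str) -> dict:
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::example-app-bucket/pictures/{app_user_id}/*",
        }],
    }
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/app-user-s3-access",  # placeholder
        RoleSessionName=f"app-user-{app_user_id}",
        Policy=json.dumps(policy),
        DurationSeconds=900,  # 15 minutes
    )
    return resp["Credentials"]  # pass to the mobile app / browser session
```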