AWS storage service for multi-tenant web app - amazon-web-services

Which services are handy for allocating a specific amount of storage to each tenant, increasing/decreasing capacity, and monitoring free and used capacity?

The most flexible storage option on Amazon Web Services is S3 - Simple Storage Service.
S3 is an object store to which you can upload objects of any type. S3 also supports multipart uploads for big files.
To separate your tenants' data, you could use folders (key prefixes) within a bucket and add application logic to stop tenants from accessing each other's files.
You can use bucket policies to give different IAM users access to different folders; however, it wouldn't make sense to create an IAM user for each of your tenants.
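As a minimal sketch of that application-level separation (the bucket name tenant-data and the per-tenant prefix layout are assumptions), the application can scope every S3 call to the authenticated tenant's prefix, for example with boto3:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical layout: one shared bucket, one "folder" (key prefix) per tenant.
BUCKET = "tenant-data"

def list_tenant_files(tenant_id: str) -> list[str]:
    """Return only the keys under this tenant's prefix, so one tenant
    never sees another tenant's objects."""
    paginator = s3.get_paginator("list_objects_v2")
    keys = []
    for page in paginator.paginate(Bucket=BUCKET, Prefix=f"{tenant_id}/"):
        keys.extend(obj["Key"] for obj in page.get("Contents", []))
    return keys

def upload_tenant_file(tenant_id: str, filename: str, body: bytes) -> None:
    # Always build the key from the authenticated tenant's ID, never from a
    # user-supplied path, so tenants cannot write outside their own prefix.
    s3.put_object(Bucket=BUCKET, Key=f"{tenant_id}/{filename}", Body=body)
```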
I encourage you to read the docs:
http://docs.aws.amazon.com/AmazonS3/latest/dev/Welcome.html

Related

Where to store users' private information (like ID card images, etc.) in a web app?

I am developing an application with Django REST and one of the features is to let the user store ID cards and driver licenses. I am thinking of using Amazon S3 to store the files.
Is that secure enough for that functionality? What is usually used for this type of file?
Amazon Simple Storage Service (S3)
It allows you to store a virtually unlimited amount of data that can be accessed programmatically via different methods like the REST API, SOAP, web interface, and more. It is an ideal storage option for videos, images, and application data.
Features:
Fully managed
Store in buckets
Versioning
Access control lists and bucket policies
AES-256 bit encryption at rest
Private by default
Best used for:
Hosting entire static websites
Static web content and media
Store data for computation and large-scale analytics, like analyzing financial transactions, clickstream analytics, and media transcoding
Disaster recovery solutions for business continuity
Secure solution for backup & archival of sensitive data
Use encryption to protect your data:
If your use case requires encryption during transmission, Amazon S3 supports the HTTPS protocol, which encrypts data in transit to and from Amazon S3. All AWS SDKs and AWS tools use HTTPS by default.
Restrict access to your S3 resources:
By default, all S3 buckets are private and can be accessed only by users that are explicitly granted access. When using AWS, it's a best practice to restrict access to your resources to the people that absolutely need it, as described in the documentation.
I would go with AWS S3 for a use case where I want to store this kind of information.
Set the default server-side encryption behavior for the Amazon S3 bucket. Depending on the type of setup and the amount of money I am willing to spend, I would choose a customer managed KMS key for encrypting the bucket.
I would also go through all the security checks AWS recommends in "How can I secure the files in my Amazon S3 bucket?".
Enable replication, versioning, logging, and maybe IP-based access restrictions for good measure.
S3 provides all kinds of bells and whistles for security in this scenario.
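As a minimal sketch of the bucket-hardening steps above (the bucket name and the KMS key ARN are placeholders), the default encryption and versioning settings could be applied with boto3:

```python
import boto3

s3 = boto3.client("s3")

# Placeholder bucket and customer managed KMS key for illustration.
BUCKET = "id-documents"
KMS_KEY_ARN = "arn:aws:kms:us-east-1:123456789012:key/EXAMPLE-KEY-ID"

# Default server-side encryption with a customer managed KMS key: every new
# object is encrypted at rest without the uploader passing encryption headers.
s3.put_bucket_encryption(
    Bucket=BUCKET,
    ServerSideEncryptionConfiguration={
        "Rules": [
            {
                "ApplyServerSideEncryptionByDefault": {
                    "SSEAlgorithm": "aws:kms",
                    "KMSMasterKeyID": KMS_KEY_ARN,
                },
                "BucketKeyEnabled": True,  # reduces the number of KMS requests
            }
        ]
    },
)

# Versioning protects against accidental overwrites and deletions.
s3.put_bucket_versioning(
    Bucket=BUCKET,
    VersioningConfiguration={"Status": "Enabled"},
)
```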

Overcome 1000 bucket limit in S3 / use access points

I have one S3 bucket per customer. Customers are external entities and they don't share data with anyone else. I write to S3 and the customer reads from S3. With this architecture, I can only scale to 1,000 buckets, as there is a limit on S3 buckets per account. I was hoping to use access points (APs) to create one AP per customer and put the data in one bucket. Each customer can then read their files from the bucket using their AP.
Bucket000001/prefix01 -> customeraccount1
Bucket000001/prefix02 -> customeraccount2
...
S3 access points require you to set a policy for an IAM user at the access point as well as at the bucket level. If I have thousands of IAM users, do I need to set a policy for each of them in the bucket? This would result in one giant policy, and since there is a maximum policy size for the bucket, I may not be able to do that.
Is this the right use case where access points can help?
The recommended approach would be:
Do NOT assign IAM Users to your customers. These types of AWS credentials should only be used by your internal staff and your own applications.
You should provide a web application (or an API) where customers can authenticate against your own user database (or you could use Amazon Cognito to manage authentication).
Once authenticated, the application should grant access either to a web interface to access Amazon S3, or the application should provide temporary credentials for accessing Amazon S3 (more details below).
Do not use one bucket per customer. This is not scalable. Instead, store all customer data in ONE bucket, with each user having their own folder. There is no limit on the amount of data you can store in Amazon S3. This also makes it easier for you to manage and maintain, since it is easier to perform functions across all content rather than having to go into separate buckets. (An exception might be if you wish to segment buckets by customer location (region) or customer type. But do not use one bucket per customer. There is no reason to do this.)
When granting access to Amazon S3, assign permissions at the folder-level to ensure customers only see their own data.
Option 1: Access via Web Application
If your customers access Amazon S3 via a web application, then you can code that application to enforce security at the folder level. For example, when they request a list of files, only display files within their folder.
This security can be managed totally within your own code.
Option 2: Access via Temporary Credentials
If your customers use programmatic access (eg using the AWS CLI or a custom app running on their systems), then:
The customer should authenticate to your application (how this is done will vary depending upon how you are authenticating users)
Once authenticated, the application should generate temporary credentials using the AWS Security Token Service (STS). While generating the credentials, grant access to Amazon S3 but specify the customer's folder in the ARN (eg arn:aws:s3:::storage-bucket/customer1/*) so that they can only access content within their folder.
Return these temporary credentials to the customer. They can then use these credentials to make API calls directly to Amazon S3 (eg from the AWS Command-Line Interface (CLI) or a custom app). They will be limited to their own folder.
This approach is commonly used with mobile applications. The mobile app authenticates against the backend, receives temporary credentials, then uses those credentials to interact directly with S3. Thus, the back-end app is only used for authentication.
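A minimal sketch of issuing such folder-scoped temporary credentials with boto3 and STS (the role ARN, the bucket name storage-bucket, and the folder layout are assumptions; the inline session policy narrows whatever the assumed role allows down to a single customer's folder):

```python
import json
import boto3

sts = boto3.client("sts")

# Placeholder role that the backend is allowed to assume; it would have broad
# S3 access that the inline session policy below narrows per customer.
ROLE_ARN = "arn:aws:iam::123456789012:role/customer-s3-access"
BUCKET = "storage-bucket"

def temporary_credentials_for(customer_id: str) -> dict:
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # object access only inside the customer's folder
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:PutObject"],
                "Resource": f"arn:aws:s3:::{BUCKET}/{customer_id}/*",
            },
            {   # listing limited to the customer's prefix
                "Effect": "Allow",
                "Action": "s3:ListBucket",
                "Resource": f"arn:aws:s3:::{BUCKET}",
                "Condition": {"StringLike": {"s3:prefix": [f"{customer_id}/*"]}},
            },
        ],
    }
    response = sts.assume_role(
        RoleArn=ROLE_ARN,
        RoleSessionName=f"customer-{customer_id}",
        # Effective permissions are the intersection of the role's policy
        # and this session policy.
        Policy=json.dumps(session_policy),
        DurationSeconds=3600,  # credentials expire after one hour
    )
    # Contains AccessKeyId, SecretAccessKey, SessionToken, Expiration.
    return response["Credentials"]
```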
Examples on YouTube:
5 Minutes to Amazon Cognito: Federated Identity and Mobile App Demo
Overview Security Token Service STS
AWS: Use the Session Token Service to Securely Upload Files to S3
There are a couple of ways to achieve your goal.
One is to use an IAM group to grant access to a folder: create a group, add users to the group, and attach a policy to the group that grants access to the folder.
Another way is to use an IAM policy with the ${aws:username} policy variable (in the Resource and Condition elements) to grant access to user-specific folders. Refer to this link: https://aws.amazon.com/blogs/security/writing-iam-policies-grant-access-to-user-specific-folders-in-an-amazon-s3-bucket/
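A rough sketch of that second approach (the bucket name my-company-bucket and the group name are placeholders), attaching one policy to an IAM group so every member automatically gets access to their own folder:

```python
import json
import boto3

iam = boto3.client("iam")

# Placeholder bucket; ${aws:username} is resolved by AWS to the calling
# IAM user's name, so a single policy covers all users in the group.
BUCKET = "my-company-bucket"

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {   # users may list only their own prefix
            "Effect": "Allow",
            "Action": "s3:ListBucket",
            "Resource": f"arn:aws:s3:::{BUCKET}",
            "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}},
        },
        {   # full object access inside the user's own folder only
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/${{aws:username}}/*",
        },
    ],
}

iam.put_group_policy(
    GroupName="s3-per-user-folders",          # placeholder group name
    PolicyName="user-specific-folder-access",
    PolicyDocument=json.dumps(policy_document),
)
```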

Access to s3 buckets

Is it possible to give different access to different buckets in S3? In detail, I have 10 different buckets in S3 and each of those buckets relates to a different person. I want to give each person access only to their particular bucket (by sharing a URL or something like that).
Is this possible?
The normal way to assign access is:
Permanent credentials (eg associated with an IAM User) are only provided to internal IT staff who are managing or using the AWS services.
End users of a web application should be authenticated by the application (eg using Amazon Cognito, LDAP, AD, Google). The application will then be responsible for generating Pre-Signed URLs for uploading and downloading files.
For mobile applications, it is quite common to create temporary credentials using the Security Token Service, which allows the mobile app to directly make AWS API calls. The credentials can be given limited permissions, such as only being able to access one S3 bucket.
So, it really comes down to 'how' the users will be accessing the bucket. If they are doing it directly, then provide temporary credentials via STS. If they are doing it via an application, then the application will be responsible for providing individual access to upload/download.
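As a minimal sketch of that application-mediated access (the bucket name and key layout are assumptions), the application could check its own user database and then hand out a short-lived pre-signed download URL with boto3:

```python
import boto3

s3 = boto3.client("s3")

def download_url_for(user_id: str, filename: str) -> str:
    # The application has already verified that this user may read this file.
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "customer-files", "Key": f"{user_id}/{filename}"},
        ExpiresIn=900,  # the link is valid for 15 minutes
    )
```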
By the way, it's not necessarily a good idea to give a different bucket to every user, because there is a limit on the number of buckets you can create. Instead, you could give access to separate paths within the same bucket. Proper use of permissions will ensure they cannot see/impact other users' data.
For how this works with IAM Users, see: Variables in AWS Access Control Policies | AWS News Blog

How do web applications typically interact with Amazon S3?

I'm new to S3 and I'm wondering how real-world web applications typically interact with it, in particular how user access permissions are handled.
Say, for instance, that I have designed a basic project management web application which, amongst other features, permits users to upload project files into a shared space which other project members can access.
So user file upload/read access would be determined by project membership but also by project roles.
Using S3, would one simply create a bucket for the entire application with a single S3 user that has all permissions, and leave the handling of user permissions to the application? Or am I missing something? I haven't been able to find many examples of real-world S3 usage online, in particular where access permissions are concerned.
The typical architecture is to keep the Amazon S3 buckets totally private.
When your application determines that a user is permitted to upload or download a file, it can generate a Presigned URL. This is a time-limited URL that allows an object to be uploaded or downloaded.
When uploading, it is also possible to Create a POST Policy to enforce some restrictions on the upload, such as its length, type and where it is being stored. If the upload meets the requirements, the file will be accepted.
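A rough sketch of generating such an upload form with boto3's generate_presigned_post (the bucket name, per-user key layout, and 10 MB size limit are assumptions):

```python
import boto3

s3 = boto3.client("s3")

def upload_form_for(user_id: str, filename: str) -> dict:
    post = s3.generate_presigned_post(
        Bucket="customer-files",
        Key=f"{user_id}/{filename}",  # key is fixed, so the upload lands in the user's folder
        Conditions=[
            ["content-length-range", 1, 10 * 1024 * 1024],  # reject empty or oversized files
        ],
        ExpiresIn=600,  # the form is valid for 10 minutes
    )
    # post["url"] and post["fields"] go into an HTML form; the browser POSTs the
    # file straight to S3, which rejects anything that violates the policy.
    return post
```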
You should maintain a database that identifies all objects that have been uploaded and maps them to the 'owner', permission groups, shares, etc. All of this is application-specific. Later, when a user requests a particular object for download, your app can generate a pre-signed URL that lets the user download the object even though it is a private object.
Always have your application determine permissions for accessing an object. Do not define application users as IAM Users.
If there is a straight-forward permission model (eg all of one user's files are in one path/folder within an S3 bucket), you can generate temporary credentials using the AWS Security Token Service that grants List and Get permissions on the given path. This can be useful for mobile applications that could then directly call the Amazon S3 API to retrieve objects. However, it is not suitable for a web-based application.

Using Google Cloud Platform Storage to store user images

I was trying to understand the Google Cloud Platform storage but couldn't really comprehend the language used in the documentation. I wanted to ask if you could use the storage and the APIs to store photos users take within your application and also get the images back if provided with a URL? And even if you can, would it be a safe and reasonable method to do so?
Yes, you can pretty much use a storage bucket to store any kind of data.
In terms of transferring images from an application to storage buckets, the application must be authorised to write to the bucket.
One option is to use a service account key within the application. A service account is a special account that can be used by an application to authenticate to various Google APIs, including the storage API.
There is some more information about service accounts here and information here about using service account keys. These keys can be used within your application, and allow the application to inherit the permission/scopes assigned to that service account.
In terms of retrieving images using a URL, one possible option would be to use signed URLs which would allow you to give users read or write access to an object (in your case images) in a bucket for a given amount of time.
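A minimal sketch of that signed-URL approach with the google-cloud-storage Python client (the bucket name, object layout, and one-hour expiry are assumptions; the client is assumed to be authenticated with a service account key as described above):

```python
from datetime import timedelta
from google.cloud import storage

# The client picks up the service account key, e.g. via GOOGLE_APPLICATION_CREDENTIALS.
client = storage.Client()
bucket = client.bucket("user-photos")  # placeholder bucket name

def upload_image(user_id: str, filename: str, local_path: str) -> str:
    blob = bucket.blob(f"{user_id}/{filename}")
    blob.upload_from_filename(local_path)
    # V4 signed URL: anyone holding this link can read the image for one hour,
    # without the bucket or the object being publicly accessible.
    return blob.generate_signed_url(
        version="v4",
        expiration=timedelta(hours=1),
        method="GET",
    )
```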
Access to bucket objects can also be controlled with ACLs (Access Control Lists). If you're happy for your images to be available publicly (i.e. accessible to everybody), it's possible to set an ACL with 'Reader' access for allUsers.
More information on this can be found here.
Should you decide to make the images available publicly, the URL format to retrieve the object/image from the bucket would be:
https://storage.googleapis.com/[BUCKET_NAME]/[OBJECT_NAME]
EDIT:
In relation to using an interface to upload the files before the files land in the bucket, one option would be to have an instance with an external IP address (or multiple instances behind a load balancer) where the images are initially uploaded. You could mount Cloud Storage on this instance using Cloud Storage FUSE, so that uploaded files are easily transferred to the bucket. In terms of databases, you have the option of manually installing your database on a Compute Engine instance, or using a fully managed database service such as Cloud SQL.