I'd like to set up a website in an S3 bucket. The website is for our team admin to submit a list of student names so that those names can be stored in a database.
Now, if I'd like all team members to be able to view the website, but only one person (the team admin) to actually submit the names, what should I do? I think this is an access-permission issue, but I'm not quite clear on how AWS deals with this. I guess it's related to IAM users/roles? But what exactly should I do?
Many thanks
================
Forgot to mention: my design involves the whole chain of S3 static website, JavaScript, Lambda function, API Gateway, and DynamoDB. I'm wondering at which step, and how, I should control access.
Another thought: should I create an account for the team admin, so that only he can log in and submit? Maybe that's not necessary?
S3 websites are static. This means that you cannot execute server-side code to do anything, such as query a database.
To implement your objective, you will need to combine several services.
S3 website: Your S3 bucket will store all of the files, such as CSS, JavaScript, HTML, images, ...
JavaScript: When the client accesses your website, JavaScript functions will be loaded with your HTML to provide client-side processing.
Amazon Cognito: Cognito will manage authentication. Using STS, your clients will receive temporary access keys to access AWS resources.
DynamoDB: This will be your database. Using the access keys from Cognito/STS, users will access the database. The level of access is controlled by the IAM policies that you create for each user or group of users.
There are lots of examples of this design on the Internet and several "serverless" books have been written with entire designs mapped out.
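Since your follow-up mentions API Gateway and Lambda, here is a minimal sketch of the submit path: a Lambda handler that lets only the team admin write to DynamoDB. It assumes a Cognito user pool authorizer on the API, a hypothetical "admins" group, and a hypothetical StudentNames table.

```python
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("StudentNames")  # hypothetical table name

def handler(event, context):
    # With a Cognito user pool authorizer, API Gateway passes the verified
    # token claims here; "cognito:groups" arrives as a string like "[admins]".
    claims = event["requestContext"]["authorizer"]["claims"]
    groups = claims.get("cognito:groups", "")

    # Only members of the (hypothetical) "admins" group may submit names.
    if "admins" not in groups:
        return {"statusCode": 403, "body": json.dumps({"error": "Forbidden"})}

    body = json.loads(event["body"])
    for name in body["names"]:
        table.put_item(Item={"name": name})

    return {"statusCode": 200, "body": json.dumps({"stored": len(body["names"])})}
```

Everyone can still view the static site; the write restriction is enforced where it matters, at the API.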
Yes, you can use IAM roles to provide read/write access to the DB. (short answer)
S3 is only good for hosting your static website; if you wish to separate read and write access, I would suggest you switch to either an Amazon RDS instance or Amazon Aurora.
With RDS you can have a read replica, which gives viewing users read-only access, while only you as the admin can insert into or update the tables.
This solution would also improve your database's response time, since reads would be handled by one instance and writes by another.
Hope this helps.
Related
I just want my S3 bucket to be able to access itself. For example, in my index.html there is a reference to a favicon, which resides in the same S3 bucket. When I call index.html, I get an HTTP 403 Access Denied error.
If I turn Block Public Access off and add a bucket policy, it works, but I do not want the bucket to be public.
How am I able to invoke my website with my AWS user, for example, without making the site public (that is, with all internet access blocked)?
I just want my S3 bucket to be able to access itself.
No. The bucket never accesses itself; the request always comes from the client (the browser).
How am i able to invoke my website with my AWS user
For site-level access control there is CloudFront with signed cookies. You will still need some logic (API Gateway + Lambda? Lambda@Edge? another server?) to authenticate the user and sign the cookie.
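For illustration, a minimal sketch of the signing side using botocore's CloudFrontSigner. This produces a signed URL; signed cookies are built from the same policy/signature scheme, so the key handling is identical. The key pair ID, key file, and domain are assumptions.

```python
from datetime import datetime, timedelta

import rsa  # pip install rsa
from botocore.signers import CloudFrontSigner

KEY_PAIR_ID = "K2EXAMPLEKEYID"        # hypothetical CloudFront public key ID
PRIVATE_KEY_FILE = "private_key.pem"  # the matching private key

def rsa_signer(message):
    # CloudFront requires an RSA SHA-1 signature over the policy.
    with open(PRIVATE_KEY_FILE, "rb") as f:
        key = rsa.PrivateKey.load_pkcs1(f.read())
    return rsa.sign(message, key, "SHA-1")

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/index.html",  # hypothetical domain
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(url)  # valid for one hour, then CloudFront rejects it
```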
You mention that "the websites in the bucket should only be able to be seen by a few dedicated users, which I will create with IAM."
However, accessing Amazon S3 content with IAM credentials is not compatible with accessing objects via URLs in a web browser. IAM credentials can be used when making AWS API calls, but a different authentication method is required when accessing content via URLs. Authentication normally requires a back-end to perform the authentication steps, or you could use Amazon Cognito.
Without knowing how your bucket is set up and what permissions/access controls you have already deployed, it is hard to give a definite answer.
Having said that, it sounds like you simply need to walk through the proper steps for building an appropriate permission model. You have already explored part of this with Block All Access and a policy, but there are also ACLs and permission specifics based on object ownership that need to be considered.
Ultimately AWS's documentation is going to do a better job than most to illustrate what to do and where to start:
https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteAccessPermissionsReqd.html
NOTE: if you share more information about how the bucket is configured and how your client side is accessing the website, I can edit the answer to give a more prescriptive solution (assuming the AWS docs don't get you all the way there)
UPDATE: After re-reading your question and your comment on my answer, I think gusto2's and John's answers are pointing you in the right direction. What you want is to authenticate users before they access the contents of the S3 bucket (which, if I understand you right, is an S3-hosted static website). This means you need an authentication layer between the client and the bucket, which can be accomplished in a number of ways (Lambda + CloudFront, or using an IdP like Cognito, are certainly viable options). There is little point in my regurgitating exactly how to pull off something like this when there are a ton of accessible blog posts on the topic (search "authenticate S3 static website").
HOWEVER, I also want to point out that what you want to accomplish is not possible in the way you are hoping to accomplish it (using IAM permission modeling to authenticate users against an S3-hosted static website). You can either authenticate users to your S3 website, OR you can use IAM plus S3 permissions and ACLs to set up AWS user- and role-specific access to the contents of a bucket, but you cannot use IAM users/roles as a method for authenticating client access to an S3 static website (not in any way I would imagine is simple or recommended, at least...).
Suppose you have to share data with a third party over the internet and the data is stored in AWS. What would be the most secure and easy way to do this?
Since sending mail is not very secure, I thought of creating an S3 bucket and running an SFTP server (with AWS Transfer Family) in front of it. Is there a better solution in AWS to achieve this?
This depends on how you want to "share data" and where that data resides.
Let's say you have an object in Amazon S3 that you would like to make available. There are several options for sharing access:
You could create an Amazon S3 pre-signed URL, which provides time-limited access to a private object. This is similar to storing something in Dropbox and using the "Get Link" command to obtain a special URL that provides access to the object.
If the other people have their own AWS Account, you could share a specific bucket or an object with them. This has the benefit that you could put objects in a bucket and they can retrieve any of them whenever they wish.
You could write a web application that requires users to authenticate and then gives them the ability to access objects in Amazon S3. This would be similar to a photo-sharing website, where people login and can access/share photos. You would be responsible for writing this application and managing the authentication.
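For the first option, a minimal sketch with boto3 (bucket and key names are assumptions):

```python
import boto3

s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "my-shared-data", "Key": "reports/2024-q1.csv"},
    ExpiresIn=3600,  # the link stops working after one hour
)
print(url)  # send this link to the third party
```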
Update
Based on the information you provided (S3, few users, automated), the easiest method would probably be to have the other users sign up for AWS, or to provide them with IAM access credentials from your own AWS account (not recommended if you have a large number of such users).
You can grant them permission to access your data, and they can use the AWS Command-Line Interface (CLI) to access/download it. This can be automated with the aws s3 cp and aws s3 sync commands.
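If you'd rather script the download side in Python than use the CLI, a rough boto3 equivalent (bucket and prefix are assumptions):

```python
import os

import boto3

s3 = boto3.client("s3")  # uses the IAM credentials you provided
paginator = s3.get_paginator("list_objects_v2")

for page in paginator.paginate(Bucket="shared-data", Prefix="exports/"):
    for obj in page.get("Contents", []):
        if obj["Key"].endswith("/"):
            continue  # skip "folder" placeholder objects
        dest = os.path.join("downloads", obj["Key"])
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        s3.download_file("shared-data", obj["Key"], dest)
```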
I'm building a platform whereby users upload data to us; the amount of data is approximately 750 MB per transaction.
What is the standard way to keep track of the data uploaded by each user? How do you organize the data by user in S3 buckets?
A simple scheme could be to tag/prefix each item uploaded to the S3 bucket with the username, and use this logic in our application to allow users to work with these files. We can then keep track of username + data uploaded in a database (like Amazon DynamoDB).
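Something like this, as a sketch (bucket and table names are placeholders):

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
uploads = dynamodb.Table("UserUploads")  # hypothetical tracking table

def store_upload(username, filename, data: bytes):
    key = f"{username}/{filename}"  # prefix every key with the username
    s3.put_object(Bucket="platform-uploads", Key=key, Body=data)
    # Record who uploaded what, so usage can be queried per user.
    uploads.put_item(Item={
        "username": username,
        "key": key,
        "size_bytes": len(data),
        "uploaded_at": datetime.now(timezone.utc).isoformat(),
    })
```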
Things get a bit complicated when I start thinking about features allowing groups to access these files of course....
Is there a better approach for this task on AWS? It feels like a standard problem.
AWS does not have built-in tools for keeping track of uploads per "user", nor any upload limits. This is something that you, as a developer on AWS, need to design and implement. DynamoDB is a popular choice for keeping track of S3 uploads and per-user limits in your application.
Regarding organization: well, it depends. If your users log in to your application through Cognito, each user will have an IAM federated identity associated with them. Thus, you can organize the bucket and control user access using this feature, as shown, for instance, in the following link:
Amazon S3: Allows Amazon Cognito users to access objects in their bucket
User groups could also be managed through Cognito.
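For completeness, a sketch of wiring up a policy like the one in the linked example with boto3 (role, policy, and bucket names are assumptions). The policy variable confines each federated identity to its own prefix:

```python
import json

import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        # Resolves to the caller's Cognito identity ID at request time,
        # so each federated user is confined to their own "folder".
        "Resource": "arn:aws:s3:::my-app-uploads/${cognito-identity.amazonaws.com:sub}/*",
    }],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName="CognitoAuthenticatedRole",  # hypothetical federated role
    PolicyName="PerUserPrefixAccess",
    PolicyDocument=json.dumps(policy),
)
```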
I am trying to understand access security as it relates to Amazon S3. I want to host some files in an S3 bucket, using CloudFront to access it via my domain. I need to limit access to certain companies/individuals. In addition I need to manage that access individually.
A second access model is project based, where I need to make a library of files available to a particular project team, and I need to be able to add and remove team members in an ad hoc manner, and then close access for the whole project at some point. The bucket in question might be the same for both scenarios.
I assume something like this is possible in AWS, but all I can find (and understand) on the AWS site involves using IAM to control access via the AWS console. I don't see any indication that I could create an IAM user, add them to an IAM group, give the group read only access to the bucket and then provide the name and password via System.Net.WebClient in PowerShell to actually download the available file. Am I missing something, and this IS possible? Or am I not correct in my assumption that this can be done with AWS?
I did find Amazon CloudFront vs. S3 --> restrict access by domain? - Stack Overflow that talks about using CloudFront to limit access by Domain, but that won't work in a WfH scenario, as those home machines won't be on the corporate domain, but the corporate BIM Manager needs to manage access to content libraries for the WfH staff. I REALLY hope I am not running into an example of AWS just not being ready for the current reality.
Content stored in Amazon S3 is private by default. There are several ways that access can be granted:
Use a bucket policy to make the entire bucket (or a directory within it) publicly accessible to everyone. This is good for websites where anyone can read the content.
Assign permissions to IAM Users to grant access only to the users or applications that need access to the bucket. This is typically used within your organization. Never create an IAM User for somebody outside your organization.
Create presigned URLs to grant temporary access to private objects. This is typically used by applications to grant web-based access to content stored in Amazon S3.
To provide an example of pre-signed URLs, imagine that you have a photo-sharing website. Photos provided by users are private. The flow would be:
A user logs in. The application confirms their identity against a database or an authentication service (eg Login with Google).
When the user wants to view a photo, the application first checks whether they are entitled to view it (eg it is their photo). If they are, the application generates a pre-signed URL and returns it as a link, or embeds the link in an HTML page (eg in an <img> tag).
When the user accesses the link, the browser sends the URL request to Amazon S3, which verifies the encrypted signature in the signed URL. If it is correct and the link has not yet expired, the photo is returned and displayed in the web browser.
Users can also share photos with other users. When another user accesses a photo, the application checks the database to confirm that it was shared with the user. If so, it provides a pre-signed URL to access the photo.
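A rough sketch of the application-side check in that flow (the table, bucket, and sharing model are assumptions):

```python
import boto3

s3 = boto3.client("s3")
dynamodb = boto3.resource("dynamodb")
photos = dynamodb.Table("Photos")  # hypothetical metadata table

def photo_link(user_id, photo_id):
    item = photos.get_item(Key={"photo_id": photo_id}).get("Item")
    if item is None:
        return None
    # Entitled if the user owns the photo or it was shared with them.
    if user_id != item["owner"] and user_id not in item.get("shared_with", []):
        return None
    return s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "photo-store", "Key": item["s3_key"]},
        ExpiresIn=300,  # short-lived link; S3 checks the "ticket" itself
    )
```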
This architecture has the application perform all of the logic around access permissions. It is very flexible, since you can write whatever rules you want, and then the user is sent to Amazon S3 to obtain the file. Think of it like buying theater tickets online -- you just show the ticket at the door and you are allowed to sit in the seat. That's what Amazon S3 is doing -- it is checking the ticket (signed URL) and then giving you access to the file.
See: Amazon S3 pre-signed URLs
Mobile apps
Another common architecture is to generate temporary credentials using the AWS Security Token Service (STS). This is typically done with mobile apps. The flow is:
A user logs into a mobile app. The app sends the login details to a back-end application, which verifies the user's identity.
The back-end app then uses AWS STS to generate temporary credentials and assigns permissions to the credentials, such as being permitted to access a certain directory within an Amazon S3 bucket. (The permissions can actually be for anything in AWS, such as launching computers or creating databases.)
The back-end app sends these temporary credentials back to the mobile app.
The mobile app then uses those credentials to make calls directly to Amazon S3 to access files.
Amazon S3 checks the credentials being used and, if they have permission for the files being requested, grants access. This can be done for uploads, downloads, listing files, etc.
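A minimal sketch of the back-end's STS step (the role ARN, bucket, and prefix are assumptions). The inline session policy scopes the temporary credentials down to the user's own prefix:

```python
import json

import boto3

sts = boto3.client("sts")

def credentials_for(user_id):
    # Restrict the temporary credentials to this user's prefix only;
    # STS intersects this policy with the role's own permissions.
    session_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::my-app-files/{user_id}/*",
        }],
    }
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::123456789012:role/MobileAppRole",  # hypothetical
        RoleSessionName=f"user-{user_id}",
        Policy=json.dumps(session_policy),
        DurationSeconds=3600,  # credentials expire after one hour
    )
    return resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken
```

The mobile app then configures its AWS SDK client with these three values and talks to S3 directly.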
This architecture takes advantage of the fact that mobile apps are quite powerful and they can communicate directly with AWS services such as Amazon S3. The permissions granted are based upon the user who logs in. These permissions are determined by the back-end application, which you would code. Think of it like a temporary employee who has been granted a building access pass for the day, but they can only access certain areas.
See: IAM Role Archives - Jayendra's Blog
The above architectures are building blocks for how you wish to develop your applications. Every application is different, just like the two use-cases in your question. You can securely incorporate Amazon S3 in your applications while maintaining full control of how access is granted. Your applications can then concentrate on the business logic of controlling access, without having to actually serve the content (which is left up to Amazon S3). It's like selling the tickets without having to run the theater.
You ask whether Amazon S3 is "ready for the current reality". Many of the popular web sites you use every day run on AWS, and you probably never realize it.
If you are willing to issue IAM User credentials (max 5000 per account), the steps would be:
Create an IAM User for each user and select Programmatic access
This will provide an Access Key and Secret Key that you can provide to each user
Attach permissions to each IAM User, or put the users in an IAM Group and attach permissions to the IAM Group
Each user can run aws configure on their computer (using the AWS Command-Line Interface (CLI)) to store their Access Key and Secret Key
They can then use the AWS CLI to upload/download files
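Those steps can also be scripted; a rough boto3 sketch with assumed user and group names:

```python
import boto3

iam = boto3.client("iam")

# Create the user and programmatic credentials.
iam.create_user(UserName="jane.doe")
keys = iam.create_access_key(UserName="jane.doe")["AccessKey"]
print(keys["AccessKeyId"], keys["SecretAccessKey"])  # hand these to the user

# Grant permissions via a group rather than per user.
iam.add_user_to_group(GroupName="s3-data-users", UserName="jane.doe")
```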
If you want the users to be able to access via the Amazon S3 management console, you will need to provide some additional permissions: Grant a User Amazon S3 Console Access to Only a Certain Bucket
Alternatively, users could use a program like Cyberduck for an easy drag & drop interface to Amazon S3. Cyberduck will also ask for the Access Key and Secret Key.
I have an application where users are part of a 'group' of users. Each group can 'upload' documents to the application. Behind the scenes I am using S3 to store these documents.
I've spent a ton of time reading the AWS documentation but still don't understand the simplest/correct way to do the following:
User 1 in group A can upload documents to application
User 2 in group A can see and access all group A documents in application
User 3 in group B can upload documents to application
User 3 in group B cannot see any documents that belong to group A (and vice-versa)
Should I be using the API to create a new bucket for each 'group'?
Or can all of this be done in a single bucket with subdirectories for each group & then set access limitations?
Should I be setting up an IAM group policy and applying it to each web app user?
I'm not sure of the best architecture for this scenario so would really appreciate a point in the right direction.
AWS credentials should be assigned to your application and to your IT staff who need to maintain the application.
Users of your application should not be given AWS credentials.
Users should interact directly with your application and your application will make calls to the AWS API from the back-end. This way, your application has full control of what data they can see and what operations they can perform.
Think of it like a database -- you never want to give users direct access to a database. Instead, they should always interact via an application, which will store and update information in a database.
There are some common exceptions to the above:
If you want users to access/download a file stored in S3, your application can generate a pre-signed URL, which is a time-limited URL that permits access to an Amazon S3 object. Your application is responsible for generating the URL when it wants to grant access, and the URL can be included in an HTML page (eg to show a private picture on a web page).
If you want to allow users to upload files directly to S3, you could again use a pre-signed URL or you could grant public Write access to an Amazon S3 bucket. Think of it like a modern FTP server.
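For the upload case, a minimal sketch using a pre-signed POST (bucket and key are assumptions); the browser can then send the file straight to S3 without ever holding AWS credentials:

```python
import boto3

s3 = boto3.client("s3")
post = s3.generate_presigned_post(
    Bucket="my-app-uploads",
    Key="group-a/report.pdf",
    ExpiresIn=900,  # the form is valid for 15 minutes
)
# post["url"] is the form action; post["fields"] are hidden form inputs
# the browser must include when POSTing the file.
print(post["url"], post["fields"])
```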
Bottom line: Your application is in charge! Also, consider using pre-signed URLs to provide direct access to objects when the application permits it.