Authenticating and using IAM users to access S3 - amazon-web-services

My use case is to allow users to create a new username/password, create a folder for each user, and allow them to upload files.
Then when they come back, they can log in with the username/password and download their files (which are used within our product).
I managed to get most of the stuff done using the C# API - very happy!
The only problem is that I cannot find a way to authenticate the user with IAM using a username/password.
I don't want the end users to worry about keys/secrets and long strings; they are supposed to be able to share these details (and access to the data files) with other users to help them.
Is there a way to authenticate an IAM username/password? Thanks, Uri.

You need to use 'federated access' - with that, you have users with accounts and passwords on your system. They authenticate with your system, and you grant them access for up to 36 hours through a session-based system (encrypted cookie, memcache, etc.) to their folder.
You can do this with a web application or with a standalone C# windows app that authenticates to your server.
With a web app, your users log in with a username/password, then you store an encrypted cookie or similar, so that your web app can make pre-signed POSTs, download files, etc. while they are logged in. There is no need for them to ever see an AWS access key ID/secret.

As far as I understand the question, this is available. You just need to have them log into the console from the sign-in link provided on the AWS IAM homepage.
You will then need to assign a password to the user account in question and add a policy so they can access the S3 bucket in question. They need the s3:ListAllMyBuckets permission to get past the first screen of the console. I just tried it, and something like the following works:
{
  "Statement": [
    {
      "Sid": "Stmt1363222128",
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    },
    {
      "Sid": "Stmt136328293",
      "Action": [
        "s3:*"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:s3:::your_bucket_name"
      ]
    }
  ]
}

No. You will need to use the key/secrets - that is the short answer. When you create a new user in IAM, it gives that specific user their own key/secret pair. All they need to do is log in, grab their key/secret, and enter it.
Manage IAM users and their access - You can create users in IAM,
assign users individual security credentials (i.e., access keys,
password, Multi Factor Authentication devices) or request temporary
security credentials to provide users access to AWS services and
resources. You can manage permissions to control which operations a
user can perform. - aws.amazon.com/iam

Related

Simplest way to setup bucket-per-client in Amazon S3

I want some clients to be able to programmatically upload files for me. Amazon S3 obviously sounds great in terms of availability and durability. But setup seems like such a complicated (and hence, error-prone) step for this: AFAIK I can't avoid creating users, groups, roles, policies...
Is there something simpler that would allow me to create a bucket with a token, so that I can simply give that token to the client w/o wasting time and w/o the risk of clicking something wrong that could lead to problems or security holes?
P.S. I won't need to do that for thousands of clients, just a few.
You can build an automated system to create as many S3 buckets as you need. You can use Terraform to interact with your cloud provider's API and perform these tasks programmatically. With that in place, you can build other things around it (for example, a frontend/backend to create the buckets from your browser, and a function that emails all the access information for a bucket).
In this repo you can see an example: https://github.com/terraform-aws-modules/terraform-aws-s3-bucket/tree/master/examples/s3-replication
Your first consideration should be how these clients interact with Amazon S3. This will then impact how you allow them to access Amazon S3.
There are several options:
Option 1: Provide IAM User credentials
Normally, IAM credentials should only be given to staff in your own company. However, if you have a small number of well-known clients, you could create an IAM User for each of them.
You can then assign permissions that allow them to access a specific bucket, or a path within a shared bucket, and they can use the AWS CLI to upload/download files, or programmatically via an AWS SDK.
You would need to give them an Access Key + Secret Key to access their S3 storage.
Rather than using separate buckets, you could grant access to a path within a shared bucket with a relatively simple Bucket Policy that grants access to a path based on their IAM username. See: IAM policy elements: Variables and tags - AWS Identity and Access Management
Option 2: Provide temporary credentials
If they are programmatically accessing AWS, then the clients could:
Programmatically authenticate against your back-end application
The back-end application uses the AWS Security Token Service (STS) to generate temporary credentials and returns them to the client
The client then uses those credentials in the same way as Option 1
The difference with this option is that the clients authenticate to your own back-end rather than using IAM User credentials.
Option 3: Pre-signed URLs
Instead of providing credentials to your clients, your back-end app can generate Amazon S3 pre-signed URLs, which are time-limited URLs that provide temporary access to upload or download private objects in Amazon S3.
This allows the back-end to totally control which objects the users can upload or download. For example, think of a photo-sharing application that keeps photos private. When a user wants to view one of their photos, the app can generate a pre-signed URL that grants access to a private object without having to provide AWS credentials.
Bottom line
The simplest option is to create an IAM user for each client (Option 1) and provide them credentials. They could then use the AWS Command-Line Interface (CLI) or a program you provide to interact with S3.
If you consider this to be too complex, then you might want to use services like box.com or even Microsoft OneDrive, which provide a more friendly interface on top of storage services.
Thanks to John Rotenstein's great detailed answer with various options, I think I found the simplest available option for my case. It is based on IAM policy elements: Variables and tags and involves creating a single bucket with separate "home folders" in it (one per client). No need for user groups or roles.
Here is a step-by-step guide:
Create a common bucket (let's call it bucket-shared-with-clients).
Create a single (universal) policy like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::bucket-shared-with-clients"],
      "Condition": {"StringLike": {"s3:prefix": ["${aws:username}/*"]}}
    },
    {
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::bucket-shared-with-clients/${aws:username}/*"]
    }
  ]
}
Create IAM user accounts - one per client. The users need Programmatic access enabled; in the Permissions view, simply use the Attach existing policies directly option.
Here is an example Python client that uploads a file:
import boto3

ACCESS_KEY = 'YourAccessKeyComesHere'
SECRET_KEY = 'YourSecretKeyComesHere'
USERNAME = 'client-a'  # The username is also used as the "home folder", as can be seen below
FILE_TO_UPLOAD = 'some-file.json'

session = boto3.Session(aws_access_key_id=ACCESS_KEY, aws_secret_access_key=SECRET_KEY)
s3 = session.resource('s3')
bucket = s3.Bucket('bucket-shared-with-clients')
key = f'{USERNAME}/{FILE_TO_UPLOAD}'  # If USERNAME didn't match the client's IAM username, we would get an AccessDenied error
bucket.upload_file(FILE_TO_UPLOAD, key)

Check whether CognitoUser has specific permission

I'm trying to set up a protected route on my webapp. For this, I've created a Group, Admins, in my User Pool. I've assigned this group to the WebappAdmins role, which contains custom policies:
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": "webapp:*",
    "Resource": "*"
  }]
}
How can I, from the webapp, discern whether the logged-in CognitoUser has the webapp:ViewUploadDocumentsPage permission? Since all CognitoUsers that are part of the Admins group have webapp:* permissions, they should have the webapp:ViewUploadDocumentsPage permission, if I'm not mistaken. I understand that verifying their permissions on the webapp is insecure, and it doesn't matter anyway, since I plan on adding specific Lambda permissions to the WebappAdmins role to prevent any actual harm done by other users.
I'm expecting some sort of endpoint that I can make an authenticated POST request to on behalf of the CognitoUser, passing webapp:ViewUploadDocumentsPage in the body. I haven't found anything alluding to that in my extensive research, so I assume I'm wrong.
Could I create an API Gateway with an Authorizer that only accepts requests from CognitoUsers with the webapp:ViewUploadDocumentsPage permission? I'm truly unsure of how to go about this.
Rather than verifying which IAM permissions the user has, wouldn't it be simpler just to check which groups the user is in? If the user is in the Admins group, then you know they have the permission you are interested in. You can get the user's group membership in any number of ways, depending on what language you are using and where you want to do the check.
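One common way, sketched here in Python for illustration: the Cognito ID token is a JWT whose payload carries a cognito:groups claim. Note this decode is unverified and is only suitable for client-side UI decisions; any server-side check must verify the token signature against the user pool's JWKS first.

```python
import base64
import json

def groups_from_id_token(id_token):
    """Return the cognito:groups claim from a Cognito ID token (a JWT).

    WARNING: this does NOT verify the signature. Use it only for
    client-side UI hints, never for authorization decisions.
    """
    payload_b64 = id_token.split('.')[1]
    payload_b64 += '=' * (-len(payload_b64) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims.get('cognito:groups', [])

def can_view_upload_documents_page(id_token):
    """Membership in Admins implies webapp:* via the attached role policy."""
    return 'Admins' in groups_from_id_token(id_token)
```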

Restrict S3 bucket access by STS token age?

I have an S3 bucket that I want to restrict access to on the basis of how old the credentials used to access it are. For example if the token used to access the bucket is greater than X days old, I want to deny access. How can I achieve this? Something like this policy -
{
  "Version": "2012-10-17",
  "Statement": {
    "Sid": "RejectLongTermCredentials",
    "Effect": "Deny",
    "Principal": "*",
    "Action": "s3:*",
    "Resource": [
      "arn:aws:s3:::${bucket}",
      "arn:aws:s3:::${bucket}/*"
    ],
    "Condition": {
      aws:TokenIssueTime > 90 days
    }
  }
}
Is there a way to calculate the age of a token? Any help would be appreciated!
What you are describing sounds very similar to Amazon S3 pre-signed URLs.
A pre-signed URL provides time-limited access to a private object.
Imagine a photo-sharing app. It would work like this:
All photos are kept in private Amazon S3 buckets
A user authenticates to the app
When a user wishes to view a private photo (or the app generates an HTML page that links to a photo, using <img> tags), the app will:
Verify that the user is entitled to view that photo
If they are, the app generates a pre-signed URL, which includes an expiry period (eg 5 minutes)
When the user's browser accesses the pre-signed URL, Amazon S3 verifies the URL and checks that it is within the expiry period:
If it is, then the private object is returned
If it is not, then the user receives an Access Denied error
It only takes a couple of lines of code to generate a pre-signed URL and it does not require an API call to S3.
Unlike the approach in your question, the above process does not require the use of Security Token Service (STS) tokens (which need to be linked to IAM users or IAM roles). It is designed to be used by applications rather than IAM users.

What is the access control model for DynamoDB?

In a traditional MySQL Server situation, as the owner of a database, I create a User, and from the database I grant certain access rights to the User object. An application can then (and only then) access the database by supplying the password for the User.
I am confused and don't see a parallel when it comes to giving access to a DynamoDB table. From the DynamoDB Tables page, I can't find a means to grant permission for an IAM user to access a table. There is an Access Control tab, but that appears to be for Facebook/Google users.
I read about attaching policies but am confused further. How is access controlled if anyone can create a policy that can access all tables?
What am I missing? I just want to create a "login" for a Node application to access my DynamoDB table.
If anyone in your AWS account can create IAM policies, you have a real security issue.
Only a few accounts should be able to do that (create IAM policies).
DynamoDB access works along with IAM users, like you said, so you need to do the following:
Create IAM groups to classify your IAM users, for example, DBAGroup for dbas, DEVGroup for developers and so on.
Create IAM policies to grant specific access to your DynamoDB tables for each group.
Apply the policies to the specific groups for granting accesses.
For login purposes, you need to develop a module that verifies the credentials with the IAM service, so you need to execute IAM API calls. This module could be deployed within an EC2 instance, or it could be a JavaScript call to an API Gateway endpoint backed by a Lambda function, etc.
What you need to do:
Create an account in IAM that can execute API calls to the IAM service for verifying credentials (login and password).
This account should have only the permissions needed for that (verifying user login and password).
Use that account's API credentials to execute the calls.
If you don't want to create your own module for login purposes, take a look at Amazon Cognito
Amazon Cognito lets you add user sign-up/sign-in and access control to your web and mobile apps quickly and easily. Cognito scales to millions of users and supports sign-in with social identity providers such as Facebook, Google, and Amazon, and enterprise identity providers via SAML 2.0.
The last step: how does your module execute API calls to the IAM service? As you may know, we need API credentials. So, using the logged-in user's credentials, you will be able to execute API calls to read data from tables, execute CRUD operations, etc.
To set specific permissions for certain tables, as in SQL Server, you must do this:
In Identity and Access Management (IAM), create a new security policy on the JSON tab:
Use the following JSON as an example to allow full CRUD access, or remove items from the "Action" section to allow only the desired operations:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ListAndDescribe",
      "Effect": "Allow",
      "Action": [
        "dynamodb:List*",
        "dynamodb:DescribeReservedCapacity*",
        "dynamodb:DescribeLimits",
        "dynamodb:DescribeTimeToLive"
      ],
      "Resource": "*"
    },
    {
      "Sid": "SpecificTable",
      "Effect": "Allow",
      "Action": [
        "dynamodb:BatchGet*",
        "dynamodb:DescribeStream",
        "dynamodb:DescribeTable",
        "dynamodb:Get*",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:BatchWrite*",
        "dynamodb:CreateTable",
        "dynamodb:Delete*",
        "dynamodb:Update*",
        "dynamodb:PutItem"
      ],
      "Resource": "arn:aws:dynamodb:*:*:table/MyTable"
    }
  ]
}
Give the policy a name and save it.
After that, go to the Identity and Access Management (IAM) Users screen and create a new user.
Remember to set the Access type field to Programmatic access. It is not necessary to add the user to a group; click on "Attach existing policies directly" and add the policy previously created.
Finished! You now have everything you need to connect your application to DynamoDB.

IAM access to EC2 REST API?

I'm new to AWS. My client uses AWS to host his EC2 instances. Right now, we are trying to get me API access. Obviously, I need my authentication details to do this.
He set me up an IAM identity under his account, so I can login to the AWS web console and configure EC2 instances. I cannot, however, for the life of me, figure out where my API access keys are displayed. I don't have permissions to view 'My Account', which is where I imagine they'd be displayed.
So, what I'm asking, is how can he grant me API access through his account? How can I access the AWS API using my IAM identity?
Michael - sqlbot's answer is correct (+1), but not entirely complete given the comparatively recent but highly useful addition of Variables in AWS Access Control Policies:
Today we’re extending the AWS access policy language to include
support for variables. Policy variables make it easier to create
and manage general policies that include individualized access
control.
This enables implementation of an 'IAM Credentials Self Management' group policy, which would usually be assigned to the most basic IAM group like the common 'Users'.
Please note that the following solution still needs to be implemented by the AWS account owner (or an IAM user with permissions to manage IAM itself), but this needs to be done once only to enable credentials self management by other users going forward.
Official Solution
A respective example is included in the introductory blog post (and meanwhile has been available at Allow a user to manage his or her own security credentials in the IAM documentation too - Update: this example vanished again, presumably due to being applicable via custom solutions using the API only and thus confusing):
Variable substitution also simplifies allowing users to manage their
own credentials. If you have many users, you may find it impractical
to create individual policies that allow users to create and rotate
their own credentials. With variable substitution, this becomes
trivial to implement as a group policy. The following policy permits
any IAM user to perform any of the key and certificate related actions
on their own credentials. [emphasis mine]
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["iam:*AccessKey*", "iam:*SigningCertificate*"],
    "Resource": ["arn:aws:iam::123456789012:user/${aws:username}"]
  }]
}
The resource scope arn:aws:iam::123456789012:user/${aws:username} ensures that every user is effectively only granted access to his own credentials.
Please note that this solution still has usability flaws depending on how AWS resources are accessed by your users, i.e. via API, CLI, or the AWS Management Console (the latter requires additional permissions for example).
Also, the various * characters are wildcards, so iam:*AccessKey* addresses all IAM actions containing AccessKey (see IAM Policy Elements Reference for details).
Extended Variation
Disclaimer: The correct configuration of IAM policies affecting IAM access in particular is obviously delicate, so please make your own judgement concerning the security impact of the following solution!
Here's a more explicit and slightly extended variation, which includes AWS Multi-Factor Authentication (MFA) device self management and a few usability enhancements to ease using the AWS Management Console:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "iam:CreateAccessKey",
        "iam:DeactivateMFADevice",
        "iam:DeleteAccessKey",
        "iam:DeleteSigningCertificate",
        "iam:EnableMFADevice",
        "iam:GetLoginProfile",
        "iam:GetUser",
        "iam:ListAccessKeys",
        "iam:ListGroupsForUser",
        "iam:ListMFADevices",
        "iam:ListSigningCertificates",
        "iam:ListUsers",
        "iam:ResyncMFADevice",
        "iam:UpdateAccessKey",
        "iam:UpdateLoginProfile",
        "iam:UpdateSigningCertificate",
        "iam:UploadSigningCertificate"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:iam::123456789012:user/${aws:username}"
      ]
    },
    {
      "Action": [
        "iam:CreateVirtualMFADevice",
        "iam:DeleteVirtualMFADevice",
        "iam:ListVirtualMFADevices"
      ],
      "Effect": "Allow",
      "Resource": "arn:aws:iam::123456789012:mfa/${aws:username}"
    }
  ]
}
"You" can't, but:
In IAM, under Users, after he selects your user, he needs to click Security Credentials > Manage Access Keys, and then choose "Create Access Key" to create an API Key and its associated Secret, associated with your IAM user. On the next screen, there's a message:
Your access key has been created successfully.
This is the last time these User security credentials will be available for download.
You can manage and recreate these credentials any time.
Where "manage" means "deactivate or delete," and "recreate" means "start over with a new one." The IAM admin can subsequently see the keys, but not the associated secrets.
From that screen, and only from that screen, and only right then, the IAM admin can view both the key and the secret associated with it, or download them to a CSV file. Subsequently, someone with appropriate privileges can see a user's keys within IAM, but the secret can never be viewed again after this one chance (and it would be pretty preposterous if you could).
So, your client needs to go into IAM, under the user he created for you, and create an API key/secret pair, save the key and secret, and forward that information to you via an appropriately-secure channel... if he created it but didn't save the associated secret, he should delete the key and create a new one associated with your username.
If you don't have your own AWS account, you should sign up for one so you can go into the console with full permissions as yourself and understand the flow... it might make more sense than my description.