AWS IAM STS: proper way to make temporary admin? - amazon-web-services

I want to be able to assign a time-limited API token to a non-admin AWS user that grants that user temporary admin privileges to all AWS services.
Why do I want this? Because when I develop on AWS in my personal account I like to have admin access to every service, but I don't want a pair of cleartext, never-expiring admin credentials sitting in my .aws/credentials file. So I want to be able to assume an IAM role that elevates a user to admin, and use STS to issue a time-limited API token.
At work we use federation via a SAML server, so users are given time-limited access no matter what role they have: dev, admin, etc. But I don't want to set all of that up just to get a time-limited API token. I have read the AWS docs and discussed this in #aws, and so far the suggestion is to make an IAM policy that hard-codes an end time:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "*",
      "Condition": {"DateLessThan": {"aws:CurrentTime": "2017-10-30T00:00:00Z"}}
    }
  ]
}
But I don't want to manually hard-code and update this policy every time; I'd rather use STS to issue a time-limited API token. Any insight would be much appreciated.
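As an aside, regenerating the hard-coded date can at least be scripted (you would still have to re-attach the updated policy). A minimal sketch in Python, mirroring the policy above with a computed expiry:

```python
import json
from datetime import datetime, timedelta, timezone

def admin_policy_expiring_in(hours: int) -> dict:
    """Build the admin-with-deadline policy above, with a computed expiry."""
    expiry = datetime.now(timezone.utc) + timedelta(hours=hours)
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                "DateLessThan": {
                    "aws:CurrentTime": expiry.strftime("%Y-%m-%dT%H:%M:%SZ")
                }
            },
        }],
    }

print(json.dumps(admin_policy_expiring_in(12), indent=2))
```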

Have you tried GetSessionToken? Refer to its documentation.
Sample Request:
https://sts.amazonaws.com/
?Version=2011-06-15
&Action=GetSessionToken
&DurationSeconds=3600
&SerialNumber=YourMFADeviceSerialNumber
&TokenCode=123456
&AUTHPARAMS
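In practice you would use the AWS CLI (aws sts get-session-token --duration-seconds 3600 --serial-number <mfa-device-arn> --token-code 123456) rather than hand-building the query API request. For illustration, the unsigned query above (minus AUTHPARAMS, the signature parameters) can be constructed like this:

```python
from urllib.parse import urlencode

# Build the GetSessionToken query string shown above.
# SerialNumber and TokenCode values are placeholders for your MFA device.
params = {
    "Version": "2011-06-15",
    "Action": "GetSessionToken",
    "DurationSeconds": "3600",
    "SerialNumber": "YourMFADeviceSerialNumber",
    "TokenCode": "123456",
}
request_url = "https://sts.amazonaws.com/?" + urlencode(params)
print(request_url)
```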

STS and IAM Roles:
1) Create your role in the AWS console.
2) Use the AWS CLI to issue new credentials using this role. You can put the command in a batch script to simplify executing it.
Example:
aws sts assume-role --role-arn arn:aws:iam::123456789012:role/xaccounts3access --role-session-name s3-access-example
The output of the command contains an access key ID, secret access key, and session token that you can use to authenticate to AWS.
Temporary credentials
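To use those credentials from a shell, you export them as environment variables. A small sketch (the sample values are placeholders, not real keys) that turns the assume-role JSON output into export lines:

```python
import json

# Example output shape of `aws sts assume-role` (values are placeholders).
sample_output = json.dumps({
    "Credentials": {
        "AccessKeyId": "ASIAEXAMPLE",
        "SecretAccessKey": "secretExample",
        "SessionToken": "tokenExample",
        "Expiration": "2023-01-01T12:00:00Z",
    },
    "AssumedRoleUser": {
        "AssumedRoleId": "AROAEXAMPLE:s3-access-example",
        "Arn": "arn:aws:sts::123456789012:assumed-role/xaccounts3access/s3-access-example",
    },
})

def to_env_exports(assume_role_json: str) -> str:
    """Turn assume-role output into shell `export` lines."""
    creds = json.loads(assume_role_json)["Credentials"]
    return "\n".join([
        f'export AWS_ACCESS_KEY_ID={creds["AccessKeyId"]}',
        f'export AWS_SECRET_ACCESS_KEY={creds["SecretAccessKey"]}',
        f'export AWS_SESSION_TOKEN={creds["SessionToken"]}',
    ])

print(to_env_exports(sample_output))
```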

Related

AWS IAM user receive 401 when accessing to ECR repository, works with root user

I've started using AWS ECR to store my Docker images. When I try to authenticate as an IAM user via PowerShell (the same happens via the AWS command line), I receive a 401: Unauthorized.
If I use the access key/secret of the root user, it works and authenticates.
The PowerShell script I use is
(Get-ECRLoginCommand).Password | docker login --username AWS --password-stdin 474389077978.dkr.ecr.eu-west-3.amazonaws.com/myreoi
I've replaced the root user's credentials with the IAM user's. I've also added the IAM user to the admins, but it doesn't seem to be enough.
Any suggestion?
Thanks
The IAM user must be granted permission to access the ECR service. This can be done by adding an inline policy in the permissions section of the group.
Follow the steps below so that non-root IAM users can perform Docker ECR operations.
1.) Create an IAM user, say "ecr-user".
2.) Create an IAM group called "ecr-group".
3.) Add the user ecr-user to ecr-group.
4.) Create a role "ecr-role".
5.) Attach the policy named "AmazonEC2ContainerServiceRole" to the role ecr-role.
6.) Go to the groups section of the AWS console.
7.) Select the group "ecr-group" and go to the permissions tab.
Add the policy "AmazonEC2ContainerServiceRole" using the attach policy button.
8.) Click the link in the inline policy section of the permissions tab.
9.) Choose custom policy.
10.) Choose a name for the custom policy - "ecr-passon".
11.) Add the policy JSON given below - be sure to change your account id.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iam:GetRole",
        "iam:PassRole"
      ],
      "Resource": "arn:aws:iam::<account-id>:role/ecr-role"
    }
  ]
}
All these steps attach the role ecr-role to the ecr-user in the group ecr-group, along with the policy AmazonEC2ContainerServiceRole.
AWS programmatic IAM users must assume a role to perform some operations.
Use the reference below to understand passing a role:
Pass a Role to an AWS Service
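Step 11's "change your account id" substitution can also be scripted so the template is never edited by hand. A small sketch (the account id 123456789012 is a placeholder):

```python
import json

# The iam:PassRole policy from step 11, kept as a template.
PASSROLE_POLICY_TEMPLATE = """{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["iam:GetRole", "iam:PassRole"],
    "Resource": "arn:aws:iam::<account-id>:role/ecr-role"
  }]
}"""

def render_policy(account_id: str) -> dict:
    """Substitute the caller's AWS account id into the template above."""
    return json.loads(PASSROLE_POLICY_TEMPLATE.replace("<account-id>", account_id))

policy = render_policy("123456789012")
print(policy["Statement"][0]["Resource"])
```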

AWS STS token refresh with existing token received from AssumeRoleWithSAML

I have a use-case where I need a temporary AWS STS token made available for each authenticated user (authenticated using the company IdP). These tokens will be used to push some data to AWS S3. I am able to get this flow working by using the SAML assertion in the IdP response and integrating with AWS as the SP (IdP-initiated sign-on), similar to what is shown here:
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_providers_saml.html#CreatingSAML-configuring
But as STS allows a token validity of at most 1 hour, I want to refresh those tokens before expiry so that I don't have to prompt the user for credentials again (bad user experience). Also, as these are company login credentials, I can't store them in the application.
I was looking at the IAM trust policy, and one way to do this is adding an 'AssumeRole' entry to the existing SAML trust policy, as shown below (the second entry in the policy):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::xxxxxxxxxxxx:saml-provider/myidp.com"
      },
      "Action": "sts:AssumeRoleWithSAML",
      "Condition": {
        "StringEquals": {
          "SAML:aud": "https://signin.aws.amazon.com/saml"
        }
      }
    },
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:sts::xxxxxxxxxxxx:assumed-role/testapp/testuser"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
So the first time, when testuser logs in and uses the AssumeRoleWithSAML API/CLI, he will get temporary credentials. Next, he can use the 'AssumeRole' API/CLI with those credentials, so he can keep refreshing the tokens without requiring IdP credentials.
As can be seen, this only works for the STS user with ARN "arn:aws:sts::xxxxxxxxxxxx:assumed-role/testapp/testuser", since only he/she can assume that role. But I need a generic way where any logged-in user can generate STS tokens.
One way would be to use wildcard characters in the trust policy's Principal, but it looks like that is not supported. So I am stuck asking for credentials every time the tokens expire. Is there a way to solve this?
thanks,
Rohan.
I have been able to get this working by specifying a role instead of an assumed-role in the IAM trust policy. Now my users can refresh their tokens indefinitely, as long as they have assumed the testapp role. (Note that role ARNs live in the iam namespace, not sts:)
"Principal": {
  "AWS": "arn:aws:iam::xxxxxxxxxxxx:role/testapp"
},
AWS STS supports longer role sessions (up to 12 hours) for the AssumeRole* APIs. This was launched on 2018-03-28; here is the AWS What's New link: https://aws.amazon.com/about-aws/whats-new/2018/03/longer-role-sessions/. With that, you need not refresh at all, as I assume a typical workday is under 12 hours :-)
Your question is one I was working on solving myself. We have a WPF desktop application that logs into AWS through Okta and then uses the AssumeRoleWithSAML API to get the STS token.
Using this flow invoked the role-chaining rules, so our token would expire every hour.
What I did to overcome this is cache the initial SAMLResponse data from Okta (after the user completes MFA) and use that information to ask for a new token every 55 minutes. I then use that new token for any future AWS resource calls.
Once 12 hours pass, I ask the user to authenticate with Okta again.
For those wondering about implementation for their own WPF apps, we use the AWS Account Federation App in Okta.
The application uses 2 packages:
Okta .NET Authentication SDK
AWS SDK for .NET
After setting up your AWS Account Federation App in Okta, use the AWS Embed Url and SAML Redirect Url in your application to get your SAMLResponse data.
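The timing logic described above can be sketched as follows (the 55-minute and 12-hour figures come from the answer; the names and structure are illustrative, not from the actual WPF app):

```python
from datetime import datetime, timedelta, timezone

SESSION_LIFETIME = timedelta(hours=1)      # role-chained STS session cap
REFRESH_MARGIN = timedelta(minutes=5)      # refresh at the 55-minute mark
SAML_CACHE_LIFETIME = timedelta(hours=12)  # re-authenticate with Okta after this

def next_refresh(issued_at: datetime) -> datetime:
    """When to call AssumeRoleWithSAML again with the cached assertion."""
    return issued_at + SESSION_LIFETIME - REFRESH_MARGIN

def must_reauthenticate(first_login: datetime, now: datetime) -> bool:
    """After 12 hours, treat the cached SAML data as stale and prompt again."""
    return now - first_login >= SAML_CACHE_LIFETIME

t0 = datetime(2024, 1, 1, 9, 0, tzinfo=timezone.utc)
print(next_refresh(t0))  # 55 minutes after issuance
print(must_reauthenticate(t0, t0 + timedelta(hours=13)))
```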

How to restrict a Cognito user pool user to their own folder in an S3 bucket?

I am writing a basic React Native app where users will be able to register themselves in an AWS Cognito user pool and log in with that identity to store/retrieve their data from S3. I only have one bucket, and every user will have their own folder in that bucket. How can I restrict each user to their own folder in that case? Here is the scenario:
I created two users in the user pool.
I then created a federated identity pool for my user pool. This federated identity pool has two IAM roles, authenticated and unauthenticated.
I then added a policy to the auth role of the federated identity.
Here is my policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "*",
      "Resource": "arn:aws:s3:::mybucket/${cognito-identity.amazonaws.com:sub}/*"
    }
  ]
}
I then tried to retrieve data from S3 using the JavaScript SDK while logged in as "test2", and I could ListObjects from "album-b207a8df-58e8-49cf-ba1b-0b48b7252291", where "b207a8df-58e8-49cf-ba1b-0b48b7252291" is the sub of the "madi" user. Why was "test2" able to list that object?
Can you provide a snippet of the onClick_Cognito_receiptsdumpAuth_Role.*** ?
My guesses (without your logs):
- Your policy is probably good, but you might have another policy that grants list access to too much.
- Your AWS class is being initialized with your developer credentials (which might have full admin).
- Cognito might have an issue, and it's worth logging a support ticket.
Next steps I would try:
- Check whether you have an Action: List* or equivalent somewhere.
- Also, the best hidden secret (it's not really a secret) is the IAM policy simulator. Test your policy against it and it will tell you whether the policy itself is good; don't forget that IAM policies are concatenated.
- Lastly, if you can't figure out where the list access comes from, you can enable CloudTrail to dump API logs to S3 and verify that ListObjects is being run by the Cognito user you are expecting.
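One detail worth knowing when debugging this: IAM expands the ${cognito-identity.amazonaws.com:sub} policy variable at evaluation time, and s3:ListBucket is evaluated against the bucket ARN (not object ARNs), so a list grant on the whole bucket is not constrained by the per-folder object resource above unless you add an s3:prefix condition. A tiny sketch of the variable expansion (the sub value is taken from the question):

```python
def expand_policy_variable(resource: str, identity_sub: str) -> str:
    """Expand ${cognito-identity.amazonaws.com:sub} the way IAM does at evaluation time."""
    return resource.replace("${cognito-identity.amazonaws.com:sub}", identity_sub)

resource = "arn:aws:s3:::mybucket/${cognito-identity.amazonaws.com:sub}/*"
sub = "b207a8df-58e8-49cf-ba1b-0b48b7252291"
print(expand_policy_variable(resource, sub))
```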

Issue binding API Gateway to DynamoDB

I'm trying to create a simple API Gateway on top of a DynamoDB table to add an endpoint through which users can access the data.
Integration type: AWS Service
AWS Region: eu-west-1
AWS Service: DynamoDB
AWS Subdomain: (left blank)
HTTP method: GET
Action: ListResources
Execution role: [iam arn]
Credentials cache: Do not add caller credentials to cache key
Content Handling: Passthrough
When I click the Test button I get:
Execution failed due to configuration error: API Gateway does not have permission to assume the provided role
I checked here and there but have no clue about the problem. I tried changing the permissions of the IAM user and gave it full DynamoDB and API Gateway rights, but no change.
It seems the issue is linked to the fact that I used an IAM user instead of an IAM role. I'll leave that here; maybe it will help.
First, update the execution role to use a role rather than an IAM user. Then, ensure that the role has permissions for all of the DynamoDB operations and resources that you want to access. Finally, grant API Gateway permissions to assume that role by adding an IAM trust policy as shown below.
From section "API Gateway Permissions Model for Invoking an API" on documentation page here
When an API is integrated with an AWS service (for example, AWS Lambda) in the back end, API Gateway must also have permissions to access integrated AWS resources (for example, invoking a Lambda function) on behalf of the API caller. To grant these permissions, create an IAM role of the Amazon API Gateway type. This role contains the following IAM trust policy that declares API Gateway as a trusted entity that is permitted to assume the role:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "Service": "apigateway.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
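A quick way to sanity-check a role's trust policy before testing in the console is to inspect the document itself. A rough sketch (it only handles the single-statement, single-principal shape above; real IAM evaluation also handles lists and wildcards):

```python
import json

# The trust policy from the answer above, verbatim.
TRUST_POLICY = """
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": { "Service": "apigateway.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
"""

def allows_apigateway_assume(trust_policy_json: str) -> bool:
    """Does any Allow statement let apigateway.amazonaws.com call sts:AssumeRole?"""
    for stmt in json.loads(trust_policy_json).get("Statement", []):
        if (stmt.get("Effect") == "Allow"
                and stmt.get("Action") == "sts:AssumeRole"
                and stmt.get("Principal", {}).get("Service") == "apigateway.amazonaws.com"):
            return True
    return False

print(allows_apigateway_assume(TRUST_POLICY))
```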

S3 giving someone permission to read and write

I've created an S3 bucket which contains a large number of images. I'm now trying to create a bucket policy which fits my needs. First of all, I want everybody to have read permission so they can see the images. However, I also want to give a specific website permission to upload and delete images. This website is not hosted on an Amazon server. How can I achieve this? So far I've created a bucket policy which enables everybody to see the images:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddPerm",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::examplebucket/*"
    }
  ]
}
You can delegate access to your bucket. To do this, the other server will need AWS credentials.
If the other server were an EC2 instance that you owned then you could do this easily by launching it with an IAM role. If the other server were an EC2 instance that someone else owned, then you could delegate access to them by allowing them to assume an appropriate IAM role in your account. But for a non-EC2 server, as seems to be the case here, you will have to provide AWS credentials in some other fashion.
One way to do this is by adding an IAM user with a policy allowing s3:PutObject and s3:DeleteObject on resource "arn:aws:s3:::examplebucket/*", and then give the other server those credentials.
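For that IAM-user approach, the attached policy could look something like this (the bucket name is taken from the question; a sketch to adjust as needed):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": "arn:aws:s3:::examplebucket/*"
    }
  ]
}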
A better way would be to create an IAM role that has the same policy and then have the other server assume that role. The upside is that the credentials must be rotated periodically, so if they are leaked the window of exposure is smaller. To assume a role, however, the other server will still need to authenticate, so it will need some base IAM user credentials (unless you have some way to get credentials via identity federation). You could add a base IAM user who has permission to assume the aforementioned role (but no other permissions) and supply that base user's credentials to the other server. When using AssumeRole in this fashion you should require an external ID. You may also be able to restrict the entity assuming this role to the specific IP address(es) of the other server using a policy condition (not 100% sure if this is possible).
The Bucket Policy will work nicely to give everybody read-only access.
To give specific permissions to an application:
Create an IAM User for the application (this also creates access credentials)
Assign a policy to the IAM User that gives the desired permissions (very similar to a Bucket Policy)
The application then makes API calls to Amazon S3 using the supplied access credentials
See also: Amazon S3 Developer Guide