IAM role to access services of another AWS account - amazon-web-services

For security reasons, we have dev, QA, and prod AWS accounts. We are using IAM roles for instances, and this works correctly on a per-account basis.
Now the requirement is that we want to access multiple AWS services (such as S3, SQS, SNS, EC2, etc.) on an EC2 instance in the QA account from the Dev AWS account.
We have created an STS policy and a role with the other AWS account as a trusted entity, but somehow we are not able to attach this role to the EC2 instance.
Example STS policy:
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "sts:AssumeRole",
    "Resource": "arn:aws:iam::546161XXXXXX:role/AdminAccessToAnotherAccount"
  }
}
AdminAccessToAnotherAccount: this is the role in the other account that has admin access.
However, this role is not listed when attaching a role to the EC2 instance.

It appears that your situation is:
You have an EC2 instance in Account-1
An IAM Role ("Role-1") is assigned to the EC2 instance
You want to access resources in Account-2 from the EC2 instance
The following steps can enable this:
Create an IAM Role in Account-2 ("Role-2") with the permissions you want the instance to receive
Add a Trust policy to Role-2, trusting Role-1
Confirm that Role-1 has permission to call AssumeRole on Role-2
From the EC2 instance using Role-1, call AssumeRole on Role-2
It will return a set of credentials (Access Key, Secret Key, Token)
Use those credentials to access services in Account-2 (via aws configure --profile foo or an API call).
If you use aws configure, you will also need to manually edit the ~/.aws/credentials file to add the aws_session_token to the profile, since the CLI command does not prompt for it.
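The steps above can be sketched with boto3. The Role-2 ARN and profile name below are placeholders; the helper shows the credentials-file format, including the aws_session_token that must be added by hand:

```python
def credentials_profile(creds, profile="foo"):
    """Render STS credentials as a ~/.aws/credentials profile section,
    including the aws_session_token that `aws configure` does not prompt for."""
    return (
        f"[{profile}]\n"
        f"aws_access_key_id = {creds['AccessKeyId']}\n"
        f"aws_secret_access_key = {creds['SecretAccessKey']}\n"
        f"aws_session_token = {creds['SessionToken']}\n"
    )

def assume_role_2(role_2_arn):
    # Deferred import so the helper above works without boto3 installed.
    import boto3
    # On the EC2 instance, boto3 picks up Role-1's credentials automatically.
    sts = boto3.client("sts")
    resp = sts.assume_role(RoleArn=role_2_arn, RoleSessionName="cross-account")
    return resp["Credentials"]
```

For example, appending credentials_profile(assume_role_2("arn:aws:iam::222222222222:role/Role-2")) to ~/.aws/credentials (the ARN is a placeholder) gives you a profile usable with --profile foo.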
Examples:
Using Temporary Security Credentials to Request Access to AWS Resources
Cross-Account Access Control with Amazon STS for DynamoDB

Related

Cross account access for AWS accounts using Direct Connect

I have been working with AWS for a number of years, but I am not very strong with some of the advanced networking concepts.
So, we have multiple AWS accounts. None of them have public internet access; we use Direct Connect for the on-prem-to-AWS connection.
I have a S3 bucket in Account A.
I created an IAM user in Account A along with an access/secret key and granted this IAM user the s3:PutObject permission on the S3 bucket.
I write a simple Python script to list the objects in this bucket from on-prem, it works, as expected.
I then execute the same Python script on an EC2 instance running in Account B and get "botocore.exceptions.ClientError: An error occurred (AccessDenied) when calling the ListObjects operation: Access Denied".
Do I need to create VPC endpoint for S3 in Account B? Does cross account IAM role come into play here?
Your situation is:
You have an Amazon S3 Bucket-A in Account-A
You have an IAM User (User-A) in Account-A
You have an Amazon EC2 instance running in Account-B
From that EC2 instance, you wish to access Bucket-A
It also appears that you have a means for the EC2 instance to access Amazon S3 endpoints to make API calls
Assuming that the instance is able to reach Amazon S3 (which appears to be true because the error message refers to permissions, which would have come from S3), there are two ways to authenticate for access to Bucket-A:
Option 1: Using the IAM User from Account-A
When making the call from the EC2 instance to Bucket-A, use the IAM credentials created in Account-A. It doesn't matter that the request is coming from an Amazon EC2 instance in Account-B. In fact, Amazon S3 doesn't even know that. An API call can come from anywhere on the Internet (including your home computer or mobile phone). What matters is the set of credentials provided when making the call.
If you are using the AWS Command-Line Interface (CLI) to make the call, then you can save the User-A credentials as a profile by using aws configure --profile user_a (or any name), then entering the credentials from the IAM User in Account-A. Then, access Amazon S3 with aws s3 ls --profile user_a. Using a profile like this allows you to switch between credentials.
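As a rough sketch of what aws configure --profile user_a stores (all key values below are placeholders), a small helper can write the same file format:

```python
import configparser
from pathlib import Path

def write_profile(creds_path, profile, access_key, secret_key):
    """Save an IAM User's keys as a named profile, in the same INI format
    that `aws configure --profile user_a` writes to ~/.aws/credentials."""
    creds_path = Path(creds_path)
    cfg = configparser.ConfigParser()
    cfg.read(creds_path)  # keep any existing profiles
    cfg[profile] = {
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }
    creds_path.parent.mkdir(parents=True, exist_ok=True)
    with open(creds_path, "w") as f:
        cfg.write(f)
```

With the profile in place, both aws s3 ls --profile user_a and boto3.Session(profile_name="user_a") will use User-A's credentials regardless of where the code runs.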
Option 2: Using a Bucket Policy
Amazon S3 also has the ability to specify a Bucket Policy on a bucket, which can grant access to the bucket. So, if the EC2 instance is using credentials from Account-B, you can add a Bucket Policy that grants access from those Account-B credentials.
Let's say that the Amazon EC2 instance was launched with an IAM Role called role-b, then you could use a Bucket Policy like this:
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<Account-B>:role/role-b"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::bucket-a/*"
    }
  ]
}
Disclaimer: All of the above assumes that you don't have any weird policies on your VPC Endpoints / Amazon S3 Access Points or however the VPCs are connecting with the Amazon S3 endpoints.
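As a sketch, a bucket policy like the one above can be generated and attached with boto3; the bucket name, account ID, and role name here are placeholders:

```python
import json

def bucket_policy(bucket, account_id, role_name):
    """Build a bucket policy granting s3:GetObject to a role in another
    account, in the same shape as the policy shown above."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"AWS": f"arn:aws:iam::{account_id}:role/{role_name}"},
                "Action": "s3:GetObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
            }
        ],
    }

def attach_policy(bucket, account_id, role_name):
    # Deferred import so bucket_policy() is usable without boto3 installed.
    import boto3
    s3 = boto3.client("s3")
    s3.put_bucket_policy(
        Bucket=bucket,
        Policy=json.dumps(bucket_policy(bucket, account_id, role_name)),
    )
```

Note that put_bucket_policy must be called with credentials from the bucket-owning account (Account-A).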

AWS SDK/CLI access error with EC2 Instance credentials for aws redshift create-cluster

I have an IAM role associated with my EC2 instances with the following policy regarding Redshift:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "redshift:*"
      ],
      "Resource": "*",
      "Effect": "Allow"
    }
  ]
}
However, from my EC2 instance with either the AWS SDK or CLI, I am unable to create a cluster.
Both
InstanceProfileCredentialsProvider creds = new InstanceProfileCredentialsProvider(false);
AmazonRedshift redshift = AmazonRedshiftClientBuilder.standard().withCredentials(creds).build();
CreateClusterRequest request = new CreateClusterRequest();
request.setAllowVersionUpgrade(true);
request.setClusterType("multi-node");
request.setClusterIdentifier("identifier");
request.setDBName("dbname");
request.setIamRoles(Collections.singleton("iam_role_arn"));
request.setPubliclyAccessible(true);
request.setNodeType("dc1.8xlarge");
request.setNumberOfNodes(2);
request.setPort(8192);
request.setMasterUsername("username");
request.setMasterUserPassword("Password1");
Cluster cluster = redshift.createCluster(request);
and
aws redshift create-cluster --cluster-identifier identifier --master-username username --master-user-password Password1 --node-type dc1.8xlarge --region us-west-2 --number-of-nodes 2
result in:
An error occurred (UnauthorizedOperation) when calling the CreateCluster operation: Access Denied. Please ensure that your IAM Permissions allow this operation.
Using the IAM policy simulation tool I was able to confirm that my instance role has the permissions to create a Redshift cluster.
Any help understanding this would be appreciated.
If you are able to call other Redshift operations and it is only failing on createCluster(), then the error is probably being caused by the IAM Role that is being passed to Redshift.
You need to Grant a User Permissions to Pass a Role to an AWS Service.
The logic behind it is this:
Let's say you do not have access to S3 bucket X
Let's say there is a role in your account that does have access to bucket X
You could launch a Redshift cluster with the role and then use it to access bucket X
To prevent people from gaining permissions this way, users need the iam:PassRole permission, which states whether they are allowed to pass a role to Redshift and even which roles they are allowed to pass.
The same applies to passing roles to an Amazon EC2 instance.
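A minimal sketch of such a statement, assuming a hypothetical RedshiftS3AccessRole and placeholder account ID, using the iam:PassedToService condition key to restrict which service the role may be passed to:

```python
import json

# The account ID and role name below are placeholders.
PASSROLE_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            # Only this specific role may be passed...
            "Resource": "arn:aws:iam::123456789012:role/RedshiftS3AccessRole",
            # ...and only to Redshift, not to other services.
            "Condition": {
                "StringEquals": {"iam:PassedToService": "redshift.amazonaws.com"}
            },
        }
    ],
}

print(json.dumps(PASSROLE_POLICY, indent=2))
```

Attaching this to the identity that calls create-cluster (here, the EC2 instance role) lets it pass only the named role, only to Redshift.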

Access AWS resources from awscli with new IAM admin user

I'm trying to access AWS resources from the AWS CLI after configuring a new IAM user with admin rights, and it acts as if no resources are available.
What I did:
Created an RDS instance (while being logged in with the root user);
Created a new IAM user in a group that has the AdministratorAccess[*] policy attached;
Configured the aws cli to use the named user's access keys and the same region as the RDS instance;
Ran the command aws rds describe-db-instances. The result is:
$ aws rds describe-db-instances
{
    "DBInstances": []
}
I would have expected to see my RDS instance listed. Am I missing something?
[*] The policy json contains this:
"Effect": "Allow",
"Action": "*",
"Resource": "*"
I do not think IAM privileges are the issue here, since there is no error. Is the region in the default profile of your AWS credentials configured correctly?
Or try specifying the region explicitly:
aws rds describe-db-instances --region eu-west-2
If it doesn't work, then the CLI is getting the credentials from somewhere else.
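To see where the region would come from, here is a rough debugging sketch that mimics the resolution order (environment variables, then the profile in ~/.aws/config); it is an approximation of the real botocore chain, not an exact reproduction:

```python
import configparser
import os
from pathlib import Path

def effective_region(profile="default"):
    """Approximate the CLI/SDK region resolution order: AWS_REGION and
    AWS_DEFAULT_REGION environment variables first, then ~/.aws/config."""
    for var in ("AWS_REGION", "AWS_DEFAULT_REGION"):
        if os.environ.get(var):
            return os.environ[var]
    cfg = configparser.ConfigParser()
    cfg.read(Path.home() / ".aws" / "config")
    # Non-default profiles live in sections named "profile <name>".
    section = profile if profile == "default" else f"profile {profile}"
    if cfg.has_section(section):
        return cfg.get(section, "region", fallback=None)
    return None
```

If effective_region() does not print the region your RDS instance lives in, that explains the empty result.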

Restrict IAM Role to be attached to an EC2 instance if Instance Id does not match the one in IAM Policy

I am trying to create IAM Policy which restricts passing the IAM Role to an EC2 instance that if instance id does not equal to i-1234567890abcd
There is no error in the policy, but it has no effect either. If I remove the Condition from the policy below, it works, but then it prevents the role from being attached to any EC2 instance.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Action": ["iam:PassRole"],
      "Resource": ["arn:aws:iam::000000000000:role/MyEC2InstanceSpecificRole"],
      "Condition": {
        "ArnNotEquals": {
          "ec2:SourceInstanceARN": "arn:aws:ec2:us-east-1:000000000000:instance/i-1234567890abcd"
        }
      }
    }
  ]
}
I suspect that this is not possible.
The Granting a User Permissions to Pass a Role to an AWS Service documentation states:
To pass a role (and its permissions) to an AWS service, a user must have permissions to pass the role to the service. This helps administrators ensure that only approved users can configure a service with a role that grants permissions. To allow a user to pass a role to an AWS service, you must grant the PassRole permission to the user's IAM user, role, or group.
When a user passes a role ARN as a parameter to any API that uses the role to assign permissions to the service, the service checks whether that user has the iam:PassRole permission. To limit the user to passing only approved roles, you can filter the iam:PassRole permission with the Resources element of the IAM policy statement.
Also on Using an IAM Role to Grant Permissions to Applications Running on Amazon EC2 Instances it states:
PassRole is not an API action in the same way that RunInstances or ListInstanceProfiles is. Instead, it's a permission that AWS checks whenever a role ARN is passed as a parameter to an API (or the console does this on the user's behalf). It helps an administrator to control which roles can be passed by which users.
The normal use-case for PassRole is to ensure that users do not grant AWS services any more permissions than they themselves are allowed to use. It avoids a situation where a non-Admin user passes an Admin role to a service with the sinister intention of then using that service to access resources that they would not normally be allowed to access. For example, launching an Amazon EC2 instance with an Admin role so that they can then log in to that instance and issue Admin commands that they would not normally be entitled to use.
The above documentation suggests that the PassRole permission is evaluated to confirm the user's permission to pass a certain role to a certain service, rather than how that service is going to use the role itself (e.g. by then assigning it to an EC2 instance to generate STS credentials).

How to secure an S3 bucket to an Instance's Role?

Using cloudformation I have launched an EC2 instance with a role that has an S3 policy which looks like the following
{"Statement":[{"Action":"s3:*","Resource":"*","Effect":"Allow"}]}
In S3 the bucket policy is like so
{
  "Version": "2012-10-17",
  "Id": "MyPolicy",
  "Statement": [
    {
      "Sid": "ReadAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456678:role/Production-WebRole-1G48DN4VC8840"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::web-deploy/*"
    }
  ]
}
When I log in to the instance and attempt to curl any object I upload into the bucket (without ACL modifications), I receive an Unauthorized 403 error.
Is this the correct way to restrict access to a bucket to only instances launched with a specific role?
The EC2 instance role is more than sufficient to put/read any of your S3 buckets, but you need to use the instance role's credentials, which curl does not do automatically.
You should use, for example, aws s3 cp <local source> s3://<bucket>/<key>, which will automatically use the instance role.
There are three ways to grant access to an object in Amazon S3:
Object ACL: Specific objects can be marked as "Public", so anyone can access them.
Bucket Policy: A policy placed on a bucket to determine what access to Allow/Deny, either publicly or to specific Users.
IAM Policy: A policy placed on a User, Group or Role, granting them access to an AWS resource such as an Amazon S3 bucket.
If any of these policies grant access, the user can access the object(s) in Amazon S3. One exception is if there is a Deny policy, which overrides an Allow policy.
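The evaluation rule above can be reduced to a toy sketch: collect the Allow/Deny decisions from all applicable policies (object ACL, bucket policy, IAM policy), and an explicit Deny always wins.

```python
def is_allowed(decisions):
    """Toy model of S3 access evaluation: an explicit Deny overrides any
    Allow, and with no Allow at all the default is deny."""
    if "Deny" in decisions:
        return False
    return "Allow" in decisions
```

So a bucket policy Allow with a silent IAM policy still grants access, while a Deny anywhere blocks it.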
Role on the Amazon EC2 instance
You have granted this role to the Amazon EC2 instance:
{"Statement":[{"Action":"s3:*","Resource":"*","Effect":"Allow"}]}
This will provide credentials to the instance that can be accessed by the AWS Command-Line Interface (CLI) or any application using the AWS SDK. They will have unlimited access to Amazon S3 unless there is also a Deny policy that otherwise restricts access.
If anything, that policy is granting too much permission. It is allowing an application on that instance to do anything it wants to your Amazon S3 storage, including deleting it all! It is better to assign least privilege, only giving permission for what the applications need to do.
Amazon S3 Bucket Policy
You have also created a Bucket Policy, which allows anything that has assumed the Production-WebRole-1G48DN4VC8840 role to retrieve the contents of the web-deploy bucket.
It doesn't matter what specific permissions the role itself has -- this policy means that merely using the role to access the web-deploy bucket will allow it to read all files. Therefore, this policy alone would be sufficient to your requirement of granting bucket access to instances using the Role -- you do not also require the policy within the role itself.
So, why can't you access the content? Because a plain curl request does not identify your role/user. Amazon S3 receives the request, treats it as anonymous, and therefore does not grant access.
Try accessing the data via the CLI or programmatically via an SDK call. For example, this CLI command would download an object:
aws s3 cp s3://web-deploy/foo.txt foo.txt
The CLI will automatically grab credentials related to your role, allowing access to the objects.