I'm in the process of trying to use Airflow to trigger ECS Tasks in another AWS Account. The worker nodes that Airflow uses always assume a specific role (role-a) in Account A. The ECS Cluster is in Account B. I have a role in Account B called role-b that should have all the permissions needed to run ECS Tasks and connect to ECR etc, and I'm trying to establish access to this role so that only role-a in Account A can assume it.
When I check the STS identity on a worker node using boto3, it is returned as arn:aws:sts::494531898320:assumed-role/role-a/botocore-session-1631223174, where the last bit is always a random number. Because it's constantly changing, I'm trying to use a wildcard in role-b's Trust Policy so that my worker nodes will always be able to assume the role in the other account and run ecs:RunTask operations with it in Account B's ECS Cluster.
Below is my trust policy for role-b.
# role-b in Account B - The account where the ECS Cluster is.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "sts:AssumeRole",
            "Condition": {
                "StringLike": {
                    "aws:PrincipalArn": "arn:aws:sts::494531898320:assumed-role/role-a/*"
                }
            }
        }
    ]
}
However, this doesn't work. My Airflow worker nodes instantly get an error that role-a can't assume role-b. Is my wildcard condition not working? Do I have to put an sts:AssumeRole statement in the actual policy attached to this role-b role instead of only in the Trust Policy?
It'd be a lot easier to do this all in a single account, but that's not an option for my use case right now. I'm kinda lost on how to proceed and haven't found great examples of how to properly implement this - any help would be appreciated!
sytech's suggestion was correct but seems to have been half the answer; the IAM Role ARN (arn:aws:iam::<account>:role/<your-role>) needs to be in the Trust Policy, not the assumed-role/xxx bit that my STS identity was spitting back.
The other thing I had to do was attach an IAM Policy to role-a that gives sts:AssumeRole access to role-b in Account B, so that role-a actually has permission to call the AssumeRole API operation on that resource in the other account.
Without this, even though the Trust Policy was now correct, it was still returning an AccessDenied error simply because role-a was never given permission to perform the sts:AssumeRole operation.
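For anyone landing here later, the two pieces ended up looking roughly like this (Account B's ID is replaced with a placeholder, and the role names mirror the ones above):
# Trust policy on role-b in Account B - reference the IAM role ARN, not the assumed-role ARN
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::494531898320:role/role-a"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
# IAM policy attached to role-a in Account A - <account-b-id> is a placeholder for Account B's account ID
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::<account-b-id>:role/role-b"
        }
    ]
}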
Related
I have an AWS OpenSearch cluster configured with an IAM master user role. I have an AWS Lambda which I want to be able to query both OpenSearch and other AWS services like DynamoDB. I don't want to modify the OpenSearch master user role to be able to access other AWS services - it should have zero permissions.
My current solution is letting my Lambda call assumeRole to assume the master user role before querying OpenSearch. Is this the approved way to do it? Seems like it would be more efficient not to have to do the assume role step. And it has the downside that the Lambda then has full access to OpenSearch - I would prefer to give it more granular permissions, e.g. only es:ESHttpGet.
This AWS documentation https://docs.aws.amazon.com/opensearch-service/latest/developerguide/ac.html seems to imply that you can set a resource-based access policy on domain setup which grants permissions to specific users. But I tried creating a maximally permissive policy and I still can't access the domain except as the master role. Am I misunderstanding the docs?
The permissive access policy I tried to use:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "es:*",
            "Resource": "arn:aws:es:eu-west-1:REDACTED:domain/*/*"
        }
    ]
}
I'm implementing something like that at the moment and it's not quite finished, but I am using API Gateway and a Lambda authoriser function to allow basic authentication. You could try that. The policy I have is almost the same as yours, except after domain I have the name of the domain, not a star. I also have the VPCs locked down to a CIDR range for security.
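Roughly, a tighter version of your policy could name both the domain and the principal, something like the sketch below (the account ID, role name and domain name are placeholders; also note that if fine-grained access control with a master user is enabled on the domain, it is evaluated on top of this policy):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::123456789012:role/my-lambda-role"
            },
            "Action": "es:ESHttpGet",
            "Resource": "arn:aws:es:eu-west-1:123456789012:domain/my-domain/*"
        }
    ]
}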
We have just built a new Things Enterprise server hosted on an EC2 instance at AWS and created an application to use AWS IoT. We are getting the following error:
"message": "User: arn:aws:sts::446971925991:assumed-role/Things-Enterprise-Stack-Srv-StackIAMRole-DBHBSMSY05AQ/i-095895d605fab3fa4 is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::446971925991:role/Bosh-Parking-IOT-Stack-TheThingsStackRoleCD9FBAD2-C44RRJJ53M93"
I have been told
What is the execution role of the TTES instance that is trying to assume the role? The role TTES needs to be able to assume that role. That will give the right permissions.
But I'm not sure what that means; I'm presuming I need to add or alter some permissions within an IAM role. Can someone point me in the right direction, please?
From the error message, it seems that your IAM role for Amazon EC2 has no permission to assume the role Bosh-Parking-IOT-Stack-TheThingsStackRoleCD9FBAD2-C44RRJJ53M93.
To add such permissions manually, you can do the following:
Go to the IAM Console -> Roles.
In the Roles window, use the Search bar to locate the Things-Enterprise-Stack-Srv-StackIAMRole-DBHBSMSY05AQ role.
Once you find the role, click Add inline policy.
Once the Create policy window shows, go to the JSON tab and add the following JSON policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAssumeRole",
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "arn:aws:iam::446971925991:role/Bosh-Parking-IOT-Stack-TheThingsStackRoleCD9FBAD2-C44RRJJ53M93"
        }
    ]
}
Then click Review policy, name the policy (e.g. PolicyToAssumeRole), and click Create policy.
However, based on your policy names (e.g. Stack-Srv-StackIAMRole), it is possible that they have been created by CloudFormation. If this is the case, then manually changing the roles as described above is bad practice and will lead to drift. Any changes to resources created by CloudFormation should be made through CloudFormation. Sadly, your question does not provide any details about the CloudFormation templates used, so it's difficult to comment on that further.
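If the role is indeed managed by CloudFormation, the equivalent change in the template would be to add the statement to the role's Policies property. A rough sketch (the logical ID and the trust policy below are illustrative; your actual template will differ):
"StackIAMRole": {
    "Type": "AWS::IAM::Role",
    "Properties": {
        "AssumeRolePolicyDocument": {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Principal": { "Service": "ec2.amazonaws.com" },
                    "Action": "sts:AssumeRole"
                }
            ]
        },
        "Policies": [
            {
                "PolicyName": "PolicyToAssumeRole",
                "PolicyDocument": {
                    "Version": "2012-10-17",
                    "Statement": [
                        {
                            "Effect": "Allow",
                            "Action": "sts:AssumeRole",
                            "Resource": "arn:aws:iam::446971925991:role/Bosh-Parking-IOT-Stack-TheThingsStackRoleCD9FBAD2-C44RRJJ53M93"
                        }
                    ]
                }
            }
        ]
    }
}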
Error message: User "arn:aws:redshift:us-west-2:123456789012:dbuser:my-cluster/user2" is not authorized to assume IAM Role "roleArn"
On the role, I've updated the trust policy to the following, which should allow the assume role. What am I messing up here?
"Version": "2008-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "redshift.amazonaws.com"
},
"Action": "sts:AssumeRole"
},
The code is valid JSON; I had to cut off the rest.
I'm interning and new to IAM roles. If the Redshift account also needs the permission update, how do I give it to them? I've been on this issue for a while, so thanks for any help you can give.
To be able to use an IAM role with COPY or UNLOAD operations, one has to:
create an IAM role with a trust relationship with the Redshift service
attach the role to the cluster
You described doing the first step. Have you also attached the role? You can see the attached roles in the AWS UI or list them with the CLI:
aws redshift describe-clusters --cluster-identifier my-cluster --query 'Clusters[].IamRoles'
[
    [
        {
            "IamRoleArn": "arn:aws:iam::123456789012:role/my-redshift-role",
            "ApplyStatus": "in-sync"
        }
    ]
]
Looking at the error you're getting,
Error message: User "arn:aws:redshift:us-west-2:123456789012:dbuser:my-cluster/user2" is not authorized to assume IAM Role "roleArn"
it looks like the role in the operation you're issuing is wrongly configured. The error suggests that you're instructing Redshift to assume a role literally called roleArn, which probably does not exist. You should put your role's ARN there instead.
I am checking the steps for setting up IAM auth in RDS: https://aws.amazon.com/premiumsupport/knowledge-center/users-connect-rds-iam/ One of the steps is to attach an IAM policy with the proper permission: https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.IAMDBAuth.IAMPolicy.html
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "rds-db:connect"
            ],
            "Resource": [
                "arn:aws:rds-db:us-east-2:1234567890:dbuser:db-ABCDEFGHIJKL01234/db_user"
            ]
        }
    ]
}
The resource follows this format:
arn:aws:rds-db:region:account-id:dbuser:DbiResourceId/db-user-name
If I understand correctly, as long as I know someone's account-id, DbiResourceId, and db-user-name (or maybe not even the db-user-name, since I can use a wildcard?), then I am able to connect to that DB instance, right?
This sounds insecure. Did I miss anything?
No, this would not be possible. The only way to interact with this resource would be to assume a role in the target account.
You can use an IAM role to allow someone (a trusted principal) in a different account to access resources in your account. Roles are the primary way to grant cross-account access. However, with some AWS services, you can attach a policy directly to a resource (instead of using a role as a proxy). To learn the difference between roles and resource-based policies for cross-account access, see How IAM Roles Differ from Resource-based Policies in the IAM User Guide
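In other words, before anyone outside the account could make use of that rds-db:connect permission, the target account would have to create a role that explicitly trusts them, with a trust policy along these lines (the account ID is a placeholder):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::111122223333:root"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
Knowing the account-id, DbiResourceId, and db-user-name by itself gives you nothing, because your own credentials carry no permissions in someone else's account.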
I am not able to start an OpsWorks instance after I have created the Instance within an OpsWorks Layer, which is part of an OpsWorks Stack. The error that I get after attempting to start the 24/7 instance is the following:
An error occurred while starting the instance java-app1
OpsWorks failed to obtain the necessary credentials to start the instance on your behalf. Please try again after waiting a minute. If this error persists, please check the permissions of the stack IAM role.
The error indicates that I don't have my permissions set correctly for the IAM Role of my Stack. I have created an OpsWorks Stack that contains a reference to a Role ARN that has the AWSOpsWorksFullAccess and AWSOpsWorksRole policies set for the Role's permissions. I would have thought one of those two policies would be enough.
I can create a OpsWorks Layer within that Stack, and create an OpsWorks instance as well. The created instance uses the DefaultInstanceProfileArn of the Stack. In my case, that ARN references a Role that contains the following policies:
AmazonEC2FullAccess
AWSOpsWorksFullAccess
AWSOpsWorksRole
AmazonS3FullAccess
I know that the policies that I have applied are very broad, but at this point I'm just trying to get an OpsWorks instance to start. What policy needs to be applied in order for OpsWorks to have the correct permissions to start an instance within my Stack?
This is a bit late :-)
I had that issue recently.
The policies that are attached are:
AmazonEC2FullAccess
AWSOpsWorksFullAccess
AWSOpsWorksCMServiceRole
This gives me create, start, stop and delete.
This part confused me the most; getting past it makes a lot of things clearer.
Open the stack settings and you will find your current IAM role.
Follow the document below to attach an inline policy.
https://docs.aws.amazon.com/opsworks/latest/userguide/opsworks-security-servicerole.html
If you create a custom service role, you must ensure that it grants all the permissions that AWS OpsWorks Stacks needs to manage your stack. The following JSON sample is the policy statement for the standard service role; a custom service role should include at least the following permissions in its policy statement.
{
    "Version": "2008-10-17",
    "Statement": [
        {
            "Action": [
                "ec2:*",
                "iam:PassRole",
                "cloudwatch:GetMetricStatistics",
                "cloudwatch:DescribeAlarms",
                "ecs:*",
                "elasticloadbalancing:*",
                "rds:*"
            ],
            "Effect": "Allow",
            "Resource": [
                "*"
            ],
            "Condition": {
                "StringEquals": {
                    "iam:PassedToService": "ec2.amazonaws.com"
                }
            }
        }
    ]
}
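Besides the permissions policy above, the service role also needs a trust relationship that lets OpsWorks assume it; the trust policy typically looks like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "opsworks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}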
A bit late, but the easiest way to solve this if you are just trying out OpsWorks is to create a service role with EC2 full access.
This should allow the stack to be created.