Last marker for boto3 pagination is not working

I'm working with roles and policies using the AWS boto3 SDK. I want to get the policies attached to a given role and then do some processing. Here's the code:
def get_policies(role_name):
    marker = None
    while True:
        print('marker in get_policies is {} '.format(marker))
        if marker:
            response_iterator = iam.list_attached_role_policies(
                RoleName=role_name,
                # PathPrefix='string',
                Marker=marker,
                MaxItems=1
            )
        else:
            response_iterator = iam.list_attached_role_policies(
                RoleName=role_name,
                # PathPrefix='string',
                MaxItems=1
            )
        print("Next Page in get_policy : {} ".format(response_iterator['IsTruncated']))
        print(response_iterator['AttachedPolicies'])
        for policy in response_iterator['AttachedPolicies']:
            detach_policy_from_entities(policy['PolicyArn'])
            print(policy['PolicyName'] + " has to be deleted")
            # delete_policy_response = iam.delete_policy(
            #     PolicyArn=policy['PolicyArn']
            # )
            print('deleted {}'.format(policy['PolicyName']))
        if response_iterator['IsTruncated']:
            marker = response_iterator['Marker']
        else:
            print("done with policy deletion")
            return "finished"
The code works fine except that it returns an empty list with the last marker. I have 3 policies attached to the given role.
The code works as follows:
1. Initially marker is None, so it just runs the else part and returns 1 result, with a marker for the next iteration.
2. I use the marker to get another set of results. It works and returns 1 result, with a marker for the next iteration.
3. Here I use the marker, but it returns an empty list for the policy, even though I have one more policy.
Any help will be greatly appreciated.

It looks like you are mutating the attached role policies and hence invalidating the pagination marker. Also, unless you specifically need it, I would remove MaxItems=1.
One solution is to change the code to simply append the policy ARNs to a list and then process that list for detachment after your for policy in ... loop.
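A minimal sketch of that approach, reusing the asker's module-level iam client and detach_policy_from_entities helper (both assumed to be defined elsewhere):
def get_policies(role_name):
    # First pass: only collect ARNs, so the pagination marker stays valid.
    policy_arns = []
    marker = None
    while True:
        kwargs = {'RoleName': role_name}
        if marker:
            kwargs['Marker'] = marker
        response = iam.list_attached_role_policies(**kwargs)
        policy_arns.extend(p['PolicyArn'] for p in response['AttachedPolicies'])
        if not response['IsTruncated']:
            break
        marker = response['Marker']
    # Second pass: now it is safe to detach (and later delete) each policy.
    for arn in policy_arns:
        detach_policy_from_entities(arn)
    return "finished"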
As an FYI, you should consider using the resource-level IAM.Role as it simplifies access to the associated policies (they are available via simple policies and attached_policies collections on the role). For example:
import boto3

iam = boto3.resource("iam")
role = iam.Role("role-name-here")

for p in role.attached_policies.all():
    print(p)

Your pagination logic works, but you probably do not need pagination at all: by default, if you omit MaxItems, the list_attached_role_policies function returns up to 100 items.
Also, the default quotas for IAM entities allow only 10 managed policies attached to an IAM role unless you request an increase (more information on IAM object quotas can be found here).
So, for your logic to work, you need something like this:
import boto3

iam = boto3.client("iam")
role = "role_name"

policies = iam.list_attached_role_policies(RoleName=role)
policies_list = policies['AttachedPolicies']

for policy in policies_list:
    print(policy['PolicyName'])
    # you can add your detach logic here
Also, the list_attached_role_policies method does not return inline policies. If the policies that are not being shown to you are inline policies, you will need the list_role_policies method.
import boto3

iam = boto3.client("iam")
role = "role_name"

policies = iam.list_role_policies(RoleName=role)
policies_list = policies['PolicyNames']

for policy in policies_list:
    print(policy)
    # you can add your detach logic here
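If pagination does turn out to be necessary (for example, with MaxItems set low or more policies than one page holds), boto3's built-in paginator handles the marker bookkeeping for you; a minimal sketch with the same role name:
import boto3

iam = boto3.client("iam")
paginator = iam.get_paginator("list_attached_role_policies")

# The paginator transparently follows the Marker/IsTruncated protocol.
for page in paginator.paginate(RoleName="role_name"):
    for policy in page["AttachedPolicies"]:
        print(policy["PolicyName"])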

Related

Set cognito identity pool providers role resolution via Terraform

I'm trying to deploy Cognito for OpenSearch via Terraform. I have a manually built Cognito setup working and am now trying to port it to Terraform.
Does anyone know how to set the part below?
Choose role from token
role resolution 'DENY'
Terraform for the identity pool:
resource "aws_cognito_identity_pool" "cognito-identity-pool" {
identity_pool_name = "opensearch-${var.domain_name}-identity-pool"
allow_unauthenticated_identities = false
cognito_identity_providers {
client_id = aws_cognito_user_pool_client.cognito-user-pool-client.id
provider_name = aws_cognito_user_pool.cognito-user-pool.endpoint
}
}
I've tried adding server_side_token_check = false, but no joy.
You need to use a different resource, namely aws_cognito_identity_pool_roles_attachment [1]. In order to achieve the same thing you see in the AWS console, you need to add the following block:
resource "aws_cognito_identity_pool_roles_attachment" "name" {
identity_pool_id = aws_cognito_identity_pool.cognito-identity-pool.id
roles = {
"authenticated" = <your-role-arn>
}
role_mapping {
ambiguous_role_resolution = "Deny"
type = "Token"
identity_provider = "${aws_cognito_user_pool.cognito-user-pool.endpoint}:${aws_cognito_user_pool_client.cognito-user-pool-client.id}"
}
}
Note that the roles block is required and the key can be authenticated or unauthenticated. Additionally, you will probably have to figure out what kind of permissions the role will need and create it. The example in the documentation can be used as a blueprint. There are also other settings like the mapping_rule block which might be of use to you, but since the details are lacking I omitted it from the answer.
[1] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_identity_pool_roles_attachment

Code fails to update a table on BQ using DML, but succeeds for insertion and deletion with RPC

I wrote some code that uses a service account to write to BigQuery on Google Cloud.
A very strange thing is that only the "update" operation using DML fails (the other insertion and deletion RPC calls succeed).
def create_table(self, table_id, schema):
    table_full_name = self.get_table_full_name(table_id)
    table = self.get_table(table_full_name)
    if table is not None:
        return
        # self.client.delete_table(table_full_name, not_found_ok=True)  # Make an API request.
        # print("Deleted table '{}'.".format(table_full_name))
    table = bigquery.Table(table_full_name, schema=schema)
    table = self.client.create_table(table)  # Make an API request.
    print("Created table {}.{}.{}".format(table.project, table.dataset_id, table.table_id))

# Works!
def upload_rows_to_bq(self, table_id, rows_to_insert):
    table_full_name = self.get_table_full_name(table_id)
    for ads_chunk in split(rows_to_insert, _BQ_CHUNK_SIZE):
        errors = self.client.insert_rows_json(table_full_name, ads_chunk,
                                              row_ids=[None] * len(rows_to_insert))  # Make an API request.
        if not errors:
            print("New rows have been added.")
        else:
            print("Encountered errors while inserting rows: {}".format(errors))

# Permissions Failure
def update_bq_ads_status_removed(self, table_id, update_ads):
    affected_rows = 0
    table_full_name = self.get_table_full_name(table_id)
    for update_ads_chunk in split(update_ads, _BQ_CHUNK_SIZE):
        ad_ids = [item["ad_id"] for item in update_ads_chunk]
        affected_rows += self.update_bq_ads_status(f"""
            UPDATE {table_full_name}
            SET status = 'Removed'
            WHERE ad_id IN {tuple(ad_ids)}
        """)
    return affected_rows
I get this error for update only:
User does not have bigquery.jobs.create permission in project ABC.
I will elaborate on my comment.
In GCP you have 3 types of IAM roles:
Basic Roles include the Owner, Editor, and Viewer roles.
Predefined Roles provide granular access for a specific service and are managed by Google Cloud. Predefined roles are meant to support common use cases and access control patterns.
Custom Roles provide granular access according to a user-specified list of permissions.
What's the difference between predefined and custom roles? If you change (add/remove) a permission of a predefined role, it becomes a custom role.
Predefined roles for BigQuery with their permission lists can be found here.
The mentioned error:
User does not have bigquery.jobs.create permission in project ABC.
means that the IAM role doesn't have a specific BigQuery permission: bigquery.jobs.create.
The bigquery.jobs.create permission can be found in two predefined roles:
BigQuery Job User (roles/bigquery.jobUser)
BigQuery User (roles/bigquery.user)
Or it can be added to a different predefined role; however, that role would then become a custom role.
As an addition, in the Testing Permissions guide you can find information on how to test IAM permissions.
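For context on why only the update fails: streaming inserts via insert_rows_json go through the tabledata.insertAll API and do not create a job, while a DML statement such as UPDATE always runs as a query job, which is what requires bigquery.jobs.create. A minimal sketch of the DML path (table name hypothetical):
from google.cloud import bigquery

client = bigquery.Client()  # authenticates as the service account

# Running DML creates a query job, so the caller needs bigquery.jobs.create.
query_job = client.query(
    "UPDATE `project.dataset.ads` SET status = 'Removed' WHERE ad_id IN (1, 2)")
query_job.result()  # wait for the job to finish
print("Affected rows: {}".format(query_job.num_dml_affected_rows))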
Please give the service account the BigQuery Job User role (roles/bigquery.jobUser) or the BigQuery User role (roles/bigquery.user) and try to run the code again.

aws iam- can a role assume a role and that role assume another role?

The question is the same as the title: I want to know if a role can assume a role that can itself assume another role.
Example:
Role A: a role that is trusted by an external account and has a policy that allows it to assume any role.
Role B: this role is assumed by A and also has a policy that allows it to assume Role C.
Role C: this role has a policy that can access an S3 bucket, for example.
Yes, you can make roles that assume roles. The process is called role chaining:
Role chaining occurs when you use a role to assume a second role through the AWS CLI or API.
The key thing to remember about this is that once A assumes B, all permissions of A are temporarily lost, and the effective permissions are those of role B. So the permissions of roles A, B and C do not add up.
The idea of role chaining:
You get a session for every role in the chain and pass each one to the next, until you reach the final role; you pass it the final session (the session of the role before it) and get the client you want (e.g. s3, iam) on the final target account.
Scenarios for cross-account access using role chaining:
If you have three accounts (main_acc --> second_acc --> third_acc) and you are in your main account wanting to reach the third account, you can only do so by assuming a role in the second account (because the second account is in the trust relationship of the third account's role); from there you assume the third account's role, which gives you permission to do whatever you want on the third account.
You need control of the child accounts under an organization account (which is in a control tower and acts as the main control-tower 'payer' account) to be able to create any resources or infrastructure inside them. Here you can assume the role of the organization main account (payer), then from there assume the role inside each child account, and then act directly on each child account in the organization.
Note: there is a role created by default by AWS on the child organization accounts called AWSControlTowerExecution; please refer to the AWS docs.
How to do role chaining using the AWS boto3 API:
def get_role_client_credentials(session,
                                session_name,
                                role_arn,
                                external_id=""):
    client = session.client('sts')
    if external_id:
        assumed_role = client.assume_role(RoleArn=role_arn,
                                          RoleSessionName=session_name,
                                          ExternalId=external_id)
    else:
        assumed_role = client.assume_role(RoleArn=role_arn,
                                          RoleSessionName=session_name)
    return assumed_role['Credentials']


def get_assumed_client(session,
                       client_name,
                       role_arn,
                       session_name,
                       region,
                       external_id=""):
    credentials = get_role_client_credentials(session=session,
                                              session_name=session_name,
                                              role_arn=role_arn,
                                              external_id=external_id)
    return session.client(client_name,
                          region_name=region,
                          aws_access_key_id=credentials['AccessKeyId'],
                          aws_secret_access_key=credentials['SecretAccessKey'],
                          aws_session_token=credentials['SessionToken'])


#### Role Chaining ######

def get_role_session_credentials(session,
                                 session_name,
                                 role_arn,
                                 external_id=""):
    client = session.client('sts')
    if external_id:
        assumed_role = client.assume_role(RoleArn=role_arn,
                                          RoleSessionName=session_name,
                                          ExternalId=external_id)
    else:
        assumed_role = client.assume_role(RoleArn=role_arn,
                                          RoleSessionName=session_name)
    return assumed_role['Credentials']


def get_assumed_role_session(session,
                             role_arn,
                             session_name,
                             region,
                             external_id=""):
    credentials = get_role_session_credentials(session,
                                               session_name=session_name,
                                               role_arn=role_arn,
                                               external_id=external_id)
    return boto3.session.Session(
        region_name=region,
        aws_access_key_id=credentials['AccessKeyId'],
        aws_secret_access_key=credentials['SecretAccessKey'],
        aws_session_token=credentials['SessionToken']
    )
Use the above helper functions this way:
role_A_arn = f"arn:aws:iam::{second_target_account_id}:role/RoleA"
assumed_Role_A_session = get_assumed_role_session(session, role_A_arn,
                                                  default_session_name, region, external_id="")

role_B_arn = f"arn:aws:iam::{third_target_account_id}:role/RoleB"
assumed_Role_B_client = get_assumed_client(assumed_Role_A_session, 's3', role_B_arn,
                                           different_session_name, region, external_id="")

assumed_Role_B_client.create_bucket(Bucket='testing-bucket',
                                    CreateBucketConfiguration={
                                        'LocationConstraint': f'{region}'
                                    })

IAM permission boundary for CDK apps with conditions

I am writing an IAM role for a CI/CD user which deploys our Cloud Development Kit (CDK) app. The CDK app consists of Lambda functions, Fargate, etc. The problem is that CDK does not allow me to specify all the roles it needs; instead, it creates some of them on its own.
Couple of examples:
Each Lambda function with log retention has another Lambda, created by CDK, which sets the retention on the log group and log streams.
A CloudTrail event executing a Step Functions state machine needs a role with the states:StartExecution permission.
CDK creates these roles automatically and also puts inline policies into them, which forces me to give my CI/CD role permissions to create roles and attach policies. So if anybody gets access to the CI/CD user (for example, if our GitHub credentials leak), the attacker could create new roles and give them admin permissions.
I tried creating all the roles myself in a separate stack and then using those roles in the CDK app. But as the examples above show, that's not possible everywhere...
I also tried an IAM permissions boundary for the deployer role, but I can't figure out how to limit permissions for iam:PutRolePolicy. CDK essentially does the following:
iam:CreateRole
iam:PutRolePolicy
According to the AWS documentation, conditions are quite basic string comparisons. I need to be able to select which actions are allowed in the policy document passed to iam:PutRolePolicy.
This is a sample of my permission boundary allowing the principal to create roles and put role policies. See the condition comment.
permission_boundary = aws_iam.ManagedPolicy(
    scope=self,
    id='DeployerPermissionBoundary',
    managed_policy_name='DeployerPermissionBoundary',
    statements=[
        aws_iam.PolicyStatement(
            actions=['iam:CreateRole'],
            effect=aws_iam.Effect.ALLOW,
            resources=[f'arn:aws:iam::{core.Aws.ACCOUNT_ID}:role/my-project-lambda-role']
        ),
        aws_iam.PolicyStatement(
            actions=['iam:PutRolePolicy'],
            effect=aws_iam.Effect.ALLOW,
            resources=[f'arn:aws:iam::{core.Aws.ACCOUNT_ID}:role/my-project-lambda-role'],
            conditions=Conditions([
                StringLike('RoleName', 'Required-role-name'),
                StringLike('PolicyName', 'Required-policy-name'),
                StringEquals('PolicyDocument', '')  # I want to allow only specified actions like logs:CreateLogStream and logs:PutLogEvents
            ])
        )
    ]
)
deployer_role = aws_iam.Role(
    scope=self,
    id='DeployerRole',
    assumed_by=aws_iam.AccountRootPrincipal(),
    permissions_boundary=permission_boundary,
    inline_policies={
        'Deployer': aws_iam.PolicyDocument(
            statements=[
                aws_iam.PolicyStatement(
                    actions=['iam:PutRolePolicy'],
                    effect=aws_iam.Effect.ALLOW,
                    resources=[f'arn:aws:iam::{core.Aws.ACCOUNT_ID}:role/my-project-lambda-role']
                ),
                ...
            ]
        )
    }
)
What is the correct way of limiting the PutRolePolicy to selected actions only? I want to allow logs:CreateLogStream and logs:PutLogEvents and nothing else.
I've been fighting with this for quite some time and I don't want to fall back to giving out more permissions than necessary. Thanks everyone in advance!
Here's a solution in Python for CDK 1.4.0, inspired by @matthewtapper's code on GitHub. This allows you to set a permission boundary on all the roles in your stack.
Needless to say it's very ugly, since the Python CDK does not provide construct objects in aspects; we have to dig deep into JSII to resolve the objects. Hope it helps someone.
from typing import Union

from aws_cdk import aws_iam, core  # CDK v1-style imports, as used elsewhere in this answer

import jsii
from jsii._reference_map import _refs
from jsii._utils import Singleton


@jsii.implements(core.IAspect)
class PermissionBoundaryAspect:
    def __init__(self, permission_boundary: Union[aws_iam.ManagedPolicy, str]) -> None:
        """
        :param permission_boundary: Either aws_iam.ManagedPolicy object or managed policy's ARN as string
        """
        self.permission_boundary = permission_boundary

    def visit(self, construct_ref: core.IConstruct) -> None:
        """
        construct_ref only contains a string reference to an object. To get the actual object,
        we need to resolve it using the JSII mapping.
        :param construct_ref: ObjRef object with string reference to the actual object.
        :return: None
        """
        kernel = Singleton._instances[jsii._kernel.Kernel]
        resolve = _refs.resolve(kernel, construct_ref)

        def _walk(obj):
            if isinstance(obj, aws_iam.Role):
                cfn_role = obj.node.find_child('Resource')
                policy_arn = (self.permission_boundary
                              if isinstance(self.permission_boundary, str)
                              else self.permission_boundary.managed_policy_arn)
                cfn_role.add_property_override('PermissionsBoundary', policy_arn)
            else:
                if hasattr(obj, 'permissions_node'):
                    for c in obj.permissions_node.children:
                        _walk(c)
                if obj.node.children:
                    for c in obj.node.children:
                        _walk(c)

        _walk(resolve)
Usage:
stack.node.apply_aspect(PermissionBoundaryAspect(managed_policy_arn))
Here is the solution for CDK version 1.9.0+, with an extra try_find_child() to prevent nested-child errors on the node. Also, the stack.node.apply_aspect() method is deprecated by AWS, so there is a new usage implementation.
from aws_cdk import (
    aws_iam as iam,
    core,
)
import jsii
from jsii._reference_map import _refs
from jsii._utils import Singleton
from typing import Union


@jsii.implements(core.IAspect)
class PermissionBoundaryAspect:
    """
    This aspect finds all aws_iam.Role objects in a node (i.e. CDK stack) and sets
    the permission boundary to the given ARN.
    """

    def __init__(self, permission_boundary: Union[iam.ManagedPolicy, str]) -> None:
        """
        This initialization method sets the permission boundary attribute.
        :param permission_boundary: The provided permission boundary
        :type permission_boundary: iam.ManagedPolicy|str
        """
        self.permission_boundary = permission_boundary
        print(self.permission_boundary)

    def visit(self, construct_ref: core.IConstruct) -> None:
        """
        construct_ref only contains a string reference to an object.
        To get the actual object, we need to resolve it using the JSII mapping.
        :param construct_ref: ObjRef object with string reference to the actual object.
        :return: None
        """
        if isinstance(construct_ref, jsii._kernel.ObjRef) and hasattr(
            construct_ref, "ref"
        ):
            kernel = Singleton._instances[
                jsii._kernel.Kernel
            ]  # The same object is available as: jsii.kernel
            resolve = _refs.resolve(kernel, construct_ref)
        else:
            resolve = construct_ref

        def _walk(obj):
            if obj.node.try_find_child("Resource") is not None:
                if isinstance(obj, iam.Role):
                    cfn_role = obj.node.find_child("Resource")
                    policy_arn = (
                        self.permission_boundary
                        if isinstance(self.permission_boundary, str)
                        else self.permission_boundary.managed_policy_arn
                    )
                    cfn_role.add_property_override("PermissionsBoundary", policy_arn)
            else:
                if hasattr(obj, "permissions_node"):
                    for c in obj.permissions_node.children:
                        _walk(c)
                if hasattr(obj, "node") and obj.node.children:
                    for c in obj.node.children:
                        _walk(c)

        _walk(resolve)
And the new API for applying the aspect to the stack is:
core.Aspects.of(stack).add(
    PermissionBoundaryAspect(
        f"arn:aws:iam::{target_environment.account}:policy/my-permissions-boundary"
    )
)
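As a side note: recent CDK releases (v2 / aws-cdk-lib) ship a built-in helper that can make the JSII digging above unnecessary. A minimal sketch, assuming a stack variable in scope and a hypothetical boundary policy ARN:
from aws_cdk import aws_iam as iam

# Applies the given managed policy as the permissions boundary to every
# IAM role created under the stack's scope.
iam.PermissionsBoundary.of(stack).apply(
    iam.ManagedPolicy.from_managed_policy_arn(
        stack,
        "PermissionsBoundary",
        "arn:aws:iam::123456789012:policy/my-permissions-boundary",  # hypothetical ARN
    )
)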
For anyone still struggling with certain cases, or who wants a Java example:
@Slf4j
public class PermissionBoundaryRoleAspect implements IAspect {
    private static final String BOUNDED_PATH = "/bounded/";

    @Override
    public void visit(final @NotNull IConstruct node) {
        node.getNode().findAll().stream()
            .filter(iConstruct -> CfnResource.isCfnResource(iConstruct)
                && iConstruct.toString().contains("AWS::IAM::Role"))
            .forEach(iConstruct -> {
                var resource = (CfnResource) iConstruct;
                resource.addPropertyOverride("PermissionsBoundary", "arn:aws:iam::xxx:policy/BoundedPermissionsPolicy");
                resource.addPropertyOverride("Path", BOUNDED_PATH);
            });
        if (node instanceof CfnInstanceProfile) {
            var instanceProfile = (CfnInstanceProfile) node;
            instanceProfile.setPath(BOUNDED_PATH);
        }
    }
}
Why am I doing it this way? Because I was faced with a case where not all the roles being created were of type CfnRole.
In my case I had to create a CfnCloudFormationProvisionedProduct.
That construct had a weird way of creating roles: roles in it are of type CfnResource and cannot be cast to CfnRole.
Thus I am using iConstruct.toString().contains("AWS::IAM::Role"), which works for every resource of type AWS::IAM::Role and for any CfnRole.

AWS boto v2.32.0 - List tags for an ASG

I am trying to use boto v2.32.0 to list the tags on a particular ASG.
Something simple like this is obviously not working (especially with the lack of a filter system):
import boto.ec2.autoscale
asg = boto.ec2.autoscale.connect_to_region('ap-southeast-2')
tags = asg.get_all_tags('asgname')
print tags
or:
asg = boto.ec2.autoscale.connect_to_region('ap-southeast-2')
group = asg.get_all_groups(names='asgname')
tags = asg.get_all_tags(group)
print tags
or:
asg = boto.ec2.autoscale.connect_to_region('ap-southeast-2')
group = asg.get_all_groups(names='asgname')
tags = group.get_all_tags()
print tags
Without specifying an 'asgname', it's not returning every ASG. And despite what the documentation says about returning a token to see the next page, that doesn't seem to be implemented correctly, especially when you have a large number of ASGs and tags per ASG.
Trying something like this has basically shown me that the token system appears to be broken; it is not "looping" through all ASGs and tags before it returns None:
asg = boto.ec2.autoscale.connect_to_region('ap-southeast-2')
nt = None
while (True):
    tags = asg.get_all_tags(next_token=nt)
    for t in tags:
        if (t.key == "MyTag"):
            print t.resource_id
            print t.value
    if (tags.next_token == None):
        break
    else:
        nt = str(tags.next_token)
Has anyone managed to achieve this?
Thanks
This functionality is available in AWS using the AutoScaling DescribeTags API call, but unfortunately boto does not completely implement this call.
You should be able to pass a Filter with that API call to only get the tags for a specific ASG, but if you have a look at the boto source code for get_all_tags() (v2.32.1), the filter is not implemented:
    :type filters: dict
    :param filters: The value of the filter type used
        to identify the tags to be returned. NOT IMPLEMENTED YET.
(quote from the source code mentioned above)
I eventually answered my own question by creating a workaround using the AWS CLI. Since there has been no activity on this question since the day I asked it, I am posting this workaround as a solution.
import os
import json

# bash command
awscli = "/usr/local/bin/aws autoscaling describe-tags --filters Name=auto-scaling-group,Values=" + str(asgname)
output = str()

# run it
cmd = os.popen(awscli, "r")
while 1:
    # get tag lines
    lines = cmd.readline()
    if not lines:
        break
    output += lines

# json.loads to manipulate
tags = json.loads(output.replace('\n', ''))
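If moving off boto v2 is an option, the newer boto3 client implements the filter directly; a minimal sketch, assuming the same region and ASG name as above:
import boto3

client = boto3.client('autoscaling', region_name='ap-southeast-2')

# describe_tags in boto3 supports the same filter the CLI workaround uses above.
paginator = client.get_paginator('describe_tags')
for page in paginator.paginate(Filters=[{'Name': 'auto-scaling-group',
                                         'Values': ['asgname']}]):
    for tag in page['Tags']:
        print(tag['Key'], tag['Value'])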