How to check IAM Password Policies in the AWS using Boto3? - amazon-web-services

I am looking to get the IAM password policies for all the users.
How can I check whether IAM password policies are enabled for all the users in an AWS account using Boto3?

The password policy is set at the account level, not for individual users. You can use something like this:
In [1]: import boto3

In [2]: iam = boto3.client('iam')

In [3]: iam.get_account_password_policy()
Out[3]:
{u'PasswordPolicy': {u'AllowUsersToChangePassword': True,
  u'ExpirePasswords': False,
  u'MinimumPasswordLength': 8,
  u'RequireLowercaseCharacters': True,
  u'RequireNumbers': True,
  u'RequireSymbols': True,
  u'RequireUppercaseCharacters': True},
 'ResponseMetadata': {'HTTPStatusCode': 200,
  'RequestId': 'f9a8fc8e-fbfc-11e5-992f-df20f934a99a'}}
This shows the current policy for the account. If you want to make sure all users adhere to your policy, expire passwords periodically; users will then be compelled to create a new password that complies with the policy.
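Note that if no password policy has ever been set for the account, get_account_password_policy raises a NoSuchEntityException rather than returning an empty policy. A minimal sketch for an enabled/not-enabled check (the helper name is mine, not from the original answer):

```python
def password_policy_enabled(client):
    """Return the account password policy dict, or None if no policy is set.

    `client` is expected to be a boto3 IAM client.
    """
    try:
        return client.get_account_password_policy()["PasswordPolicy"]
    except client.exceptions.NoSuchEntityException:
        return None

# Usage (requires AWS credentials):
# import boto3
# iam = boto3.client("iam")
# policy = password_policy_enabled(iam)
# print(policy or "No account password policy is set")
```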

You can try the iam_policy module for Ansible, which aims to manage IAM policies for users, groups, and roles. It allows attaching or removing IAM policies for IAM users, groups, or roles.
In the iam_policy.py file you can find some boto.iam code examples, e.g.:
current_policies = [cp for cp in iam.list_role_policies(name).list_role_policies_result.policy_names]
for pol in current_policies:
    if urllib.unquote(iam.get_role_policy(name, pol).get_role_policy_result.policy_document) == pdoc:
        policy_match = True
if policy_match:
    # msg = ("The policy document you specified already exists "
    #        "under the name %s." % pol)
    pass


Set cognito identity pool providers role resolution via Terraform

I'm trying to deploy Cognito for OpenSearch via Terraform. I have a manually built Cognito setup working and am now trying to port it to Terraform.
Does anyone know how to set the part below?
Choose role from token
role resolution 'DENY'
Terraform for the identity pool:
resource "aws_cognito_identity_pool" "cognito-identity-pool" {
  identity_pool_name               = "opensearch-${var.domain_name}-identity-pool"
  allow_unauthenticated_identities = false

  cognito_identity_providers {
    client_id     = aws_cognito_user_pool_client.cognito-user-pool-client.id
    provider_name = aws_cognito_user_pool.cognito-user-pool.endpoint
  }
}
I've tried adding server_side_token_check = false, but no joy.
You need to use a different resource, namely aws_cognito_identity_pool_roles_attachment [1]. In order to achieve the same thing you see in the AWS console, you need to add the following block:
resource "aws_cognito_identity_pool_roles_attachment" "name" {
  identity_pool_id = aws_cognito_identity_pool.cognito-identity-pool.id

  roles = {
    "authenticated" = <your-role-arn>
  }

  role_mapping {
    ambiguous_role_resolution = "Deny"
    type                      = "Token"
    identity_provider         = "${aws_cognito_user_pool.cognito-user-pool.endpoint}:${aws_cognito_user_pool_client.cognito-user-pool-client.id}"
  }
}
Note that the roles block is required and the key can be authenticated or unauthenticated. Additionally, you will probably have to figure out what kind of permissions the role will need and create it. The example in the documentation can be used as a blueprint. There are also other settings, such as the mapping_rule block, which might be of use to you, but since the details are lacking I omitted them from the answer.
[1] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_identity_pool_roles_attachment
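For reference, the same attachment can be done programmatically with boto3's cognito-identity client via set_identity_pool_roles, whose RoleMappings key is the "provider_name:client_id" string just like the Terraform identity_provider attribute. A minimal sketch (the helper name and all IDs below are placeholders, not from the question):

```python
def build_role_mappings(provider_name, client_id):
    """Build the RoleMappings structure equivalent to the role_mapping block above."""
    key = f"{provider_name}:{client_id}"  # e.g. "cognito-idp.<region>.amazonaws.com/<pool-id>:<client-id>"
    return {
        key: {
            "Type": "Token",
            "AmbiguousRoleResolution": "Deny",
        }
    }

# Usage (requires AWS credentials; IDs and ARNs are placeholders):
# import boto3
# client = boto3.client("cognito-identity", region_name="us-east-1")
# client.set_identity_pool_roles(
#     IdentityPoolId="us-east-1:00000000-0000-0000-0000-000000000000",
#     Roles={"authenticated": "arn:aws:iam::123456789012:role/my-auth-role"},
#     RoleMappings=build_role_mappings(
#         "cognito-idp.us-east-1.amazonaws.com/us-east-1_XXXX", "my-client-id"),
# )
```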

Last marker for boto3 pagination is not working

I'm working with roles and policies using AWS boto3 SDK. I want to get the policies attached to a given role and then do some processing. Here's the code.
def get_policies(role_name):
    marker = None
    while True:
        print('marker in get_policies is {} '.format(marker))
        if marker:
            response_iterator = iam.list_attached_role_policies(
                RoleName=role_name,
                # PathPrefix='string',
                Marker=marker,
                MaxItems=1
            )
        else:
            response_iterator = iam.list_attached_role_policies(
                RoleName=role_name,
                # PathPrefix='string',
                MaxItems=1
            )
        print("Next Page in get_policy : {} ".format(response_iterator['IsTruncated']))
        print(response_iterator['AttachedPolicies'])
        for policy in response_iterator['AttachedPolicies']:
            detach_policy_from_entities(policy['PolicyArn'])
            print(policy['PolicyName'] + " has to be deleted")
            # delete_policy_response = iam.delete_policy(
            #     PolicyArn=policy['PolicyArn']
            # )
            print('deleted {}'.format(policy['PolicyName']))
        if response_iterator['IsTruncated']:
            marker = response_iterator['Marker']
        else:
            print("done with policy deletion")
            return "finished"
The code works fine except that it returns an empty list with the last marker. I have 3 policies attached to the given role.
The code works as follows:
Initially marker is None, so the else branch runs and returns 1 result with a marker for the next iteration.
I use the marker to get another set of results. It works and returns 1 result with a marker for the next iteration.
Here I use the marker, but it returns an empty list for the policy, although I have one more policy.
Any help will be greatly appreciated.
It looks like you are mutating the attached role policies and hence invalidating the pagination marker. Also, unless you specifically need it, I would remove MaxItems=1.
One solution is to change the code to simply append the policy ARNs to a list and then process that list for detachment after your for policy in ... loop.
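A sketch of that collect-then-detach approach, using a paginator so the marker handling is done for you (the helper name is mine; the real detach call is shown commented out in the usage):

```python
def collect_attached_policy_arns(pages):
    """Collect policy ARNs from an iterable of list_attached_role_policies response pages."""
    arns = []
    for page in pages:
        for policy in page["AttachedPolicies"]:
            arns.append(policy["PolicyArn"])
    return arns

# Usage (requires AWS credentials):
# import boto3
# iam = boto3.client("iam")
# paginator = iam.get_paginator("list_attached_role_policies")
# arns = collect_attached_policy_arns(paginator.paginate(RoleName="my-role"))
# for arn in arns:
#     iam.detach_role_policy(RoleName="my-role", PolicyArn=arn)
```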
As an FYI, you should consider using the resource-level IAM.Role as it simplifies access to the associated policies (they are available via simple policies and attached_policies collections on the role). For example:
import boto3

iam = boto3.resource("iam")
role = iam.Role("role-name-here")
for p in role.attached_policies.all():
    print(p)
Your pagination logic is fine, but you probably do not need pagination at all: if you do not pass MaxItems, the list_attached_role_policies function returns up to 100 values per call by default.
Also, the default quotas for IAM entities allow only 10 managed policies attached to an IAM role unless you request an increase (more information on IAM object quotas can be found here).
So for your logic to work, you need something like this:
import boto3

iam = boto3.client("iam")
role = "role_name"

policies = iam.list_attached_role_policies(RoleName=role)
policies_list = policies['AttachedPolicies']
for policy in policies_list:
    print(policy['PolicyName'])
    # you can add your detach logic here
Also, the list_attached_role_policies method does not return inline policies; if the policies that are not being shown to you are inline policies, you will need the list_role_policies method.
import boto3

iam = boto3.client("iam")
role = "role_name"

policies = iam.list_role_policies(RoleName=role)
policies_list = policies['PolicyNames']
for policy in policies_list:
    print(policy)
    # you can add your detach logic here

Code fails to update a table on BQ using DML, but succeeds for insertion and deletion with RPC

I wrote some code that uses a service account to write to BigQuery on Google Cloud.
The very strange thing is that only the "update" operation using DML fails; the insertion and deletion RPC calls succeed.
def create_table(self, table_id, schema):
    table_full_name = self.get_table_full_name(table_id)
    table = self.get_table(table_full_name)
    if table is not None:
        return
        # self.client.delete_table(table_full_name, not_found_ok=True)  # Make an API request.
        # print("Deleted table '{}'.".format(table_full_name))
    table = bigquery.Table(table_full_name, schema=schema)
    table = self.client.create_table(table)  # Make an API request.
    print("Created table {}.{}.{}".format(table.project, table.dataset_id, table.table_id))

# Works!
def upload_rows_to_bq(self, table_id, rows_to_insert):
    table_full_name = self.get_table_full_name(table_id)
    for ads_chunk in split(rows_to_insert, _BQ_CHUNK_SIZE):
        errors = self.client.insert_rows_json(table_full_name, ads_chunk,
                                              row_ids=[None] * len(rows_to_insert))  # Make an API request.
        if not errors:
            print("New rows have been added.")
        else:
            print("Encountered errors while inserting rows: {}".format(errors))

# Permissions Failure
def update_bq_ads_status_removed(self, table_id, update_ads):
    affected_rows = 0
    table_full_name = self.get_table_full_name(table_id)
    for update_ads_chunk in split(update_ads, _BQ_CHUNK_SIZE):
        ad_ids = [item["ad_id"] for item in update_ads_chunk]
        affected_rows += self.update_bq_ads_status(f"""
            UPDATE {table_full_name}
            SET status = 'Removed'
            WHERE ad_id IN {tuple(ad_ids)}
            """)
    return affected_rows
I get this error for update only:
User does not have bigquery.jobs.create permission in project ABC.
I will elaborate on my comment.
In GCP you have 3 types of IAM roles.
Basic Roles
include the Owner, Editor, and Viewer roles.
Predefined Roles
provide granular access for a specific service and are managed by Google Cloud. Predefined roles are meant to support common use cases and access control patterns.
Custom Roles
provide granular access according to a user-specified list of permissions.
What's the difference between predefined and custom roles? If you change (add/remove) a permission of a predefined role, it becomes a custom role.
Predefined roles for BigQuery with their permission lists can be found here.
The mentioned error:
User does not have bigquery.jobs.create permission in project ABC.
means that the IAM role doesn't have a specific BigQuery permission: bigquery.jobs.create.
The bigquery.jobs.create permission can be found in predefined roles such as:
BigQuery Job User - (roles/bigquery.jobUser)
BigQuery User - (roles/bigquery.user)
It can also be added to a different predefined role; however, that would turn the role into a custom role.
As an addition, the Testing Permissions guide has information on how to test IAM permissions.
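One way to test this programmatically is the projects.testIamPermissions method of the Cloud Resource Manager API, which returns the subset of the permissions you ask about that the caller actually holds. A minimal sketch (the helper name and the project ID are placeholders):

```python
def build_permissions_request(permissions):
    """Request body for projects.testIamPermissions: the permissions to check."""
    return {"permissions": list(permissions)}

# Usage (requires google-api-python-client and application credentials):
# from googleapiclient import discovery
# crm = discovery.build("cloudresourcemanager", "v1")
# resp = crm.projects().testIamPermissions(
#     resource="ABC",  # project ID from the error message
#     body=build_permissions_request(["bigquery.jobs.create"]),
# ).execute()
# # resp.get("permissions", []) lists which of the checked permissions you have
```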
Please give the service account the BigQuery Job User role (roles/bigquery.jobUser) and try to run the code again.

Cognito Group Permission Dynamically from Database

I created a Cognito pool
Created users
Created 2 groups WITHOUT any IAM roles
Assigned users to 2 different groups
I store the policies for a group in a database and cache them.
In the Lambda authorizer that has been configured, the deny policy works with principalId set to a random string.
For allowing access, I set the principalId to the Cognito user name. I get the policy from the database with permissions allowed for all API Gateway endpoints (for testing).
But even after this I get the "User is not authorized" message.
Is my understanding wrong? What am I doing wrong?
This is my policy for allowing access with the userId being the cognito user name.
authResponse = {}
authResponse['principalId'] = userId
authResponse['policyDocument'] = {
    'Version': '2012-10-17',
    'Statement': [
        {
            'Sid': 'FirstStatement',
            'Action': 'execute-api:Invoke',
            'Effect': 'Allow',
            'Resource': 'arn:aws:execute-api:us-east-1:*:ppg7tavcld/test/GET/test-api-1/users/*'
        }
    ]
}
return authResponse
Sorry, this was a mistake on my part.
It was solved: I had mixed up the stage position in the Resource ARN.
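To avoid hard-coding the stage (and mixing up its position), one common pattern is to derive the resource ARN from the methodArn the authorizer receives in its event. A sketch (the helper name and the example ARN are made up):

```python
def build_wildcard_resource(method_arn):
    """Turn an API Gateway methodArn into a stage-wide wildcard resource.

    methodArn format:
    arn:aws:execute-api:<region>:<account-id>:<api-id>/<stage>/<http-verb>/<resource-path>
    """
    api_arn, stage = method_arn.split("/")[:2]
    return f"{api_arn}/{stage}/*"

# Example:
# build_wildcard_resource("arn:aws:execute-api:us-east-1:123:ppg7tavcld/test/GET/users")
# -> "arn:aws:execute-api:us-east-1:123:ppg7tavcld/test/*"
```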

aws iam- can a role assume a role and that role assume another role?

Question same as the title: I want to know if a role can assume a role that can itself assume another role.
Example:
Role A. A role that is trusted by an external account and has a policy that allows it to assume any role.
Role B. This role is assumed by A and also has a policy that allows it to assume Role C.
Role C. This role has a policy that can access an S3 bucket, for example.
Yes, you can make roles that assume roles. The process is called Role chaining:
Role chaining occurs when you use a role to assume a second role through the AWS CLI or API.
The key thing to remember about this is that once A assumes B, all permissions of A are temporarily lost, and the effective permissions are those of role B. So the permissions of roles A, B, and C do not add up.
Idea of role chaining:
You get a session for every role in the chain and pass each one on to the next, until you reach the final role; with the final session (the session of the role before it) you get the client you want, such as s3 or iam, on the final target account.
Scenarios for cross-account access using role chaining:
If you have three accounts (main_acc --> second_acc --> third_acc) and you are in your main account wanting to reach the third account, you can only do so by assuming a role in the second account (because the second account is in the trust relationship of the third account's role); from there you assume the third account's role, which gives you the permissions to do whatever you need on the third account.
You need control over the child accounts under an organization account (which is in a Control Tower and acts as the main Control Tower 'payer' account) to be able to create any resources or infrastructure inside them. Here you can assume the role of the organization main account (payer), then from there assume the role inside the child account, and then operate directly on each child account in the organization.
Note: there is a role created by default on the child organization accounts by AWS, called AWSControlTowerExecution; please refer to the AWS docs.
How to do role chaining using the AWS Boto3 API:
def get_role_client_credentials(session,
                                session_name,
                                role_arn,
                                external_id=""):
    client = session.client('sts')
    if external_id:
        assumed_role = client.assume_role(RoleArn=role_arn,
                                          RoleSessionName=session_name,
                                          ExternalId=external_id)
    else:
        assumed_role = client.assume_role(RoleArn=role_arn,
                                          RoleSessionName=session_name)
    return assumed_role['Credentials']

def get_assumed_client(session,
                       client_name,
                       role_arn,
                       session_name,
                       region,
                       external_id=""):
    credentials = get_role_client_credentials(session=session,
                                              session_name=session_name,
                                              role_arn=role_arn,
                                              external_id=external_id)
    return session.client(client_name,
                          region_name=region,
                          aws_access_key_id=credentials['AccessKeyId'],
                          aws_secret_access_key=credentials['SecretAccessKey'],
                          aws_session_token=credentials['SessionToken'])
#### Role Chaining ######
def get_role_session_credentials(session,
                                 session_name,
                                 role_arn,
                                 external_id=""):
    client = session.client('sts')
    if external_id:
        assumed_role = client.assume_role(RoleArn=role_arn,
                                          RoleSessionName=session_name,
                                          ExternalId=external_id)
    else:
        assumed_role = client.assume_role(RoleArn=role_arn,
                                          RoleSessionName=session_name)
    return assumed_role['Credentials']

def get_assumed_role_session(session,
                             role_arn,
                             session_name,
                             region,
                             external_id=""):
    credentials = get_role_session_credentials(session,
                                               session_name=session_name,
                                               role_arn=role_arn,
                                               external_id=external_id)
    return boto3.session.Session(
        region_name=region,
        aws_access_key_id=credentials['AccessKeyId'],
        aws_secret_access_key=credentials['SecretAccessKey'],
        aws_session_token=credentials['SessionToken']
    )
Use the above helper functions this way:
role_A_arn = f"arn:aws:iam::{second_target_account_id}:role/RoleA"
assumed_Role_A_session = get_assumed_role_session(session, role_A_arn,
                                                  default_session_name, region, external_id="")

role_B_arn = f"arn:aws:iam::{third_target_account_id}:role/RoleB"
assumed_Role_B_client = get_assumed_client(assumed_Role_A_session, 's3', role_B_arn,
                                           different_session_name, region, external_id="")

assumed_Role_B_client.create_bucket(Bucket='testing-bucket',
                                    CreateBucketConfiguration={
                                        'LocationConstraint': f'{region}'
                                    })