Cognito Group Permission Dynamically from Database

I created a Cognito user pool,
created users,
created 2 groups WITHOUT any IAM roles,
and assigned the users to 2 different groups.
I store the policies for each group in a database and cache them.
In the Lambda authorizer that has been configured, the deny policy works with principalId set to a random string.
For allowing access, I set the principalId to the Cognito user name. I get the policy from the database with permissions allowed for all API Gateway endpoints (for testing).
But even after this I get the "User is not authorized" message.
Is my understanding wrong? What am I doing wrong?
This is my policy for allowing access, with userId being the Cognito user name:
authResponse = {}
authResponse['principalId'] = userId
authResponse['policyDocument'] = {
    'Version': '2012-10-17',
    'Statement': [
        {
            'Sid': 'FirstStatement',
            'Action': 'execute-api:Invoke',
            'Effect': 'Allow',
            'Resource': 'arn:aws:execute-api:us-east-1:*:ppg7tavcld/test/GET/test-api-1/users/*'
        }
    ]
}
return authResponse

Sorry, this was a mistake on my part.
It was solved: I had mixed up the position of the stage in the Resource ARN.
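For anyone hitting the same mix-up: in an execute-api Resource ARN, the stage name comes immediately after the API id, before the HTTP method. A minimal sketch of building the ARN from the values in the question (the wildcard account id is a placeholder):

# arn:aws:execute-api:{region}:{account_id}:{api_id}/{stage}/{http_method}/{resource_path}
region = 'us-east-1'
account_id = '*'                      # '*' matches any account id
api_id = 'ppg7tavcld'                 # the API Gateway REST API id from the question
stage = 'test'
http_method = 'GET'
resource_path = 'test-api-1/users/*'
resource_arn = f"arn:aws:execute-api:{region}:{account_id}:{api_id}/{stage}/{http_method}/{resource_path}"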

Related

Set cognito identity pool providers role resolution via Terraform

I'm trying to deploy Cognito for OpenSearch via Terraform. I have a manually built Cognito setup working and am now trying to port it to Terraform.
Does anyone know how to set the part below?
Choose role from token
role resolution 'DENY'
Terraform for the identity pool:
resource "aws_cognito_identity_pool" "cognito-identity-pool" {
identity_pool_name = "opensearch-${var.domain_name}-identity-pool"
allow_unauthenticated_identities = false
cognito_identity_providers {
client_id = aws_cognito_user_pool_client.cognito-user-pool-client.id
provider_name = aws_cognito_user_pool.cognito-user-pool.endpoint
}
}
I've tried adding server_side_token_check = false, but no joy.
You need to use a different resource, namely aws_cognito_identity_pool_roles_attachment [1]. In order to achieve the same thing you see in the AWS console, you need to add the following block:
resource "aws_cognito_identity_pool_roles_attachment" "name" {
identity_pool_id = aws_cognito_identity_pool.cognito-identity-pool.id
roles = {
"authenticated" = <your-role-arn>
}
role_mapping {
ambiguous_role_resolution = "Deny"
type = "Token"
identity_provider = "${aws_cognito_user_pool.cognito-user-pool.endpoint}:${aws_cognito_user_pool_client.cognito-user-pool-client.id}"
}
}
Note that the roles block is required, and the key can be authenticated or unauthenticated. Additionally, you will probably have to figure out what kind of permissions the role will need, and create it. The example in the documentation can be used as a blueprint. There are also other settings, like the mapping_rule block, which might be of use to you, but since the details are lacking I omitted it from the answer.
[1] https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/cognito_identity_pool_roles_attachment
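If you want to verify that the attachment actually applied, here is a small sketch using Boto3 (the identity pool id and region are placeholders):

import boto3

# Inspect the deployed identity pool; RoleMappings should report
# AmbiguousRoleResolution == 'Deny' for the Token mapping.
client = boto3.client('cognito-identity', region_name='us-east-1')
resp = client.get_identity_pool_roles(
    IdentityPoolId='us-east-1:00000000-0000-0000-0000-000000000000')
print(resp['Roles'])
print(resp['RoleMappings'])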

AWS IAM - can a role assume a role, and can that role assume another role?

Question same as the title: I want to know if a role can assume a role that can assume another role.
Example:
Role A: a role that is trusted by an external account and has a policy that allows it to assume any role.
Role B: this role is assumed by A, and it also has a policy that allows it to assume Role C.
Role C: this role has a policy that can access an S3 bucket, for example.
Yes, you can make roles that assume roles. The process is called role chaining:
Role chaining occurs when you use a role to assume a second role through the AWS CLI or API.
The key thing to remember is that once A assumes B, all permissions of A are temporarily lost, and the effective permissions are those of role B. So the permissions of roles A, B and C do not add up. (Note also that AWS limits a role-chained session to a maximum duration of one hour.)
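For the chain to work, each role's trust policy has to name the previous role as a trusted principal. A minimal sketch of what Role B's trust policy could look like, written as a Python dict (the account id is a placeholder):

# Hypothetical trust policy for Role B: Role A must be a trusted principal
# before it can call sts:AssumeRole on Role B.
role_b_trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111111111111:role/RoleA"},
            "Action": "sts:AssumeRole"
        }
    ]
}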
Idea of role chaining:
You get a session for every role in the chain and pass each one along to the next, until you reach the final role; you pass it the session of the role before it and get the client you want (e.g. s3, iam) on the final target account.
Scenarios for cross-account access using role chaining:
If you have three accounts (main_acc --> second_acc --> third_acc) and you are in your main account trying to reach the third account, you can only do so by assuming a role in the second account (because the second account is in the trust relationship inside the role of the third account); from there you can assume the third account's role, which gives you permission to do whatever it allows on the third account.
If you control child accounts under an organization account (which sits in a Control Tower and acts as the main Control Tower 'payer' account), you can create resources or infrastructure inside them by assuming the organization main (payer) account's role, then from there assuming the role inside each child account, and working directly on that child account.
Note: there is a role created by default by AWS on the child organization accounts called AWSControlTowerExecution; please refer to the AWS docs.
How to do role chaining using the AWS Boto3 API:
import boto3

def get_role_client_credentials(session,
                                session_name,
                                role_arn,
                                external_id=""):
    client = session.client('sts')
    if external_id:
        assumed_role = client.assume_role(RoleArn=role_arn,
                                          RoleSessionName=session_name,
                                          ExternalId=external_id)
    else:
        assumed_role = client.assume_role(RoleArn=role_arn,
                                          RoleSessionName=session_name)
    return assumed_role['Credentials']

def get_assumed_client(session,
                       client_name,
                       role_arn,
                       session_name,
                       region,
                       external_id=""):
    credentials = get_role_client_credentials(session=session,
                                              session_name=session_name,
                                              role_arn=role_arn,
                                              external_id=external_id)
    return session.client(client_name,
                          region_name=region,
                          aws_access_key_id=credentials['AccessKeyId'],
                          aws_secret_access_key=credentials['SecretAccessKey'],
                          aws_session_token=credentials['SessionToken'])

#### Role Chaining ######
def get_role_session_credentials(session,
                                 session_name,
                                 role_arn,
                                 external_id=""):
    client = session.client('sts')
    if external_id:
        assumed_role = client.assume_role(RoleArn=role_arn,
                                          RoleSessionName=session_name,
                                          ExternalId=external_id)
    else:
        assumed_role = client.assume_role(RoleArn=role_arn,
                                          RoleSessionName=session_name)
    return assumed_role['Credentials']

def get_assumed_role_session(session,
                             role_arn,
                             session_name,
                             region,
                             external_id=""):
    credentials = get_role_session_credentials(session,
                                               session_name=session_name,
                                               role_arn=role_arn,
                                               external_id=external_id)
    return boto3.session.Session(
        region_name=region,
        aws_access_key_id=credentials['AccessKeyId'],
        aws_secret_access_key=credentials['SecretAccessKey'],
        aws_session_token=credentials['SessionToken']
    )
Use the above helper functions this way:
role_A_arn = f"arn:aws:iam::{second_target_account_id}:role/RoleA"
assumed_Role_A_session = get_assumed_role_session(session, role_A_arn,
                                                  default_session_name, region,
                                                  external_id="")

role_B_arn = f"arn:aws:iam::{third_target_account_id}:role/RoleB"
assumed_Role_B_client = get_assumed_client(assumed_Role_A_session, 's3', role_B_arn,
                                           different_session_name, region, external_id="")

assumed_Role_B_client.create_bucket(Bucket='testing-bucket',
                                    CreateBucketConfiguration={
                                        'LocationConstraint': f'{region}'
                                    })
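To extend the chain by one more hop, as in the question's A -> B -> C example, you can turn Role B into a session instead of a client and assume Role C from it. A sketch reusing the helpers above (the account id variable and session names are placeholders):

role_C_arn = f"arn:aws:iam::{final_target_account_id}:role/RoleC"
assumed_Role_B_session = get_assumed_role_session(assumed_Role_A_session, role_B_arn,
                                                  different_session_name, region, external_id="")
assumed_Role_C_client = get_assumed_client(assumed_Role_B_session, 's3', role_C_arn,
                                           another_session_name, region, external_id="")
# Role C's permissions now apply, e.g. listing the buckets it grants access to:
print(assumed_Role_C_client.list_buckets())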

Want to assign multiple Google Cloud IAM roles to a service account via Terraform

I want to assign multiple IAM roles to a single service account through Terraform. I prepared a TF file to do that, but it has an error. A single role can be assigned successfully, but with multiple IAM roles it gives an error.
data "google_iam_policy" "auth1" {
binding {
role = "roles/cloudsql.admin"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
role = "roles/secretmanager.secretAccessor"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
role = "roles/datastore.owner"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
role = "roles/storage.admin"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
}
}
How can I assign multiple roles against a single service account?
I did something like this:
resource "google_project_iam_member" "member-role" {
for_each = toset([
"roles/cloudsql.admin",
"roles/secretmanager.secretAccessor",
"roles/datastore.owner",
"roles/storage.admin",
])
role = each.key
member = "serviceAccount:${google_service_account.service_account_1.email}"
project = my_project_id
}
According to the documentation:
Each document configuration must have one or more binding blocks, which each accept the following arguments: ....
You have to repeat the binding block, like this:
data "google_iam_policy" "auth1" {
binding {
role = "roles/cloudsql.admin"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
}
binding {
role = "roles/secretmanager.secretAccessor"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
}
binding {
role = "roles/datastore.owner"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
}
binding {
role = "roles/storage.admin"
members = [
"serviceAccount:${google_service_account.service_account_1.email}",
]
}
}
It's the same when you use the gcloud command: you can only add one role at a time for a list of members.
I can't comment or upvote yet, so here's another answer, but #intotecho is right.
I'd say do not create a policy with Terraform unless you really know what you're doing! In GCP, there's only one policy allowed per project. If you apply that policy, only the service accounts will have access, no humans. :) Even though we don't want humans to do human things, it's helpful to at least have view access to the GCP project you own.
Especially if you use the model where multiple Terraform workspaces perform IAM operations on the project. If you use policies it will be similar to how wine is made: a stomping party! The most recently applied policy wins (if the service account TF is using is included in that policy; otherwise it will lock itself out!).
It's possible humans get an inherited viewer role from a folder or the org itself, but assigning multiple roles using google_project_iam_member is a much, much better way, and it's how 95% of the permissions are done with TF in GCP.

CloudFront signed cookies issue, getting 403

We have used CloudFront to store image URLs, and we use signed cookies to provide access only through our application. Without signed cookies we are able to access the content, but after enabling signed cookies we get HTTP 403.
Below is the configuration and the cookies we are sending:
Cookies going with the request:
CloudFront-Expires: 1522454400
CloudFront-Key-Pair-Id: xyz...
CloudFront-Policy: abcde...
CloudFront-Signature: abce...
Here is our CloudFront policy:
{
  "Statement": [
    {
      "Resource": "https://*.abc.com/*",
      "Condition": {
        "DateLessThan": {"AWS:EpochTime": 1522454400}
      }
    }
  ]
}
The cookie domain is .abc.com, and the resource path is https://*.abc.com/*.
We are using CannedPolicy to create CloudFront cookies.
Why isn't this working as expected?
I have got the solution. Our requirement was wildcard access.
CloudFrontCookieSigner.getCookiesForCustomPolicy(
    this.resourcePath,
    pk,
    this.keyPairId,
    expiresOn,
    null,
    "0.0.0.0/0"
);
Where:
resourcePath = "https://" + <distribution domain name> + "/*"
activeFrom = optional, so pass null
pk = the private key (a few APIs also take a file, but that didn't work, so read the private key from the file and pass it to the function above)
We wanted to access all contents under the distribution, and a canned policy doesn't allow wildcards. So we changed it to a custom policy and it worked.
Review the documentation again
There are only 3 cookies, with the last being either CloudFront-Expires for a canned policy, or CloudFront-Policy for a custom policy.
We are using CannedPolicy
A canned policy has an implicit resource of *, so a canned policy statement cannot have an explicit Resource; you are in fact using a custom policy. If all else is implemented correctly, your solution may simply be to remove the CloudFront-Expires cookie, which isn't used with a custom policy.
"Canned" (bottled, jugged, pre-packaged) policies are used in cases where the only unique information in the policy is the expiration. Their advantage is that they require marginally less bandwidth (and make shorter URLs when creating signed URLs). Their disadvantage is that they are wildcards by design, which is not always what you want.
There can be multiple reasons for a 403 AccessDenied response. In our case, after debugging, we learnt that when using signed cookies the CloudFront-Key-Pair-Id cookie must remain the same for every request, while the CloudFront-Policy and CloudFront-Signature cookies change values per request; otherwise 403 Access Denied occurs.
For anyone still struggling with this today or looking for clarification: you need to generate a custom policy if you would like to use wildcards in the resource URL, i.e. https://mycdn.abc.com/protected-content-folder/*
The AWS CloudFront API has changed over the years; currently, the easiest way to generate signed CloudFront cookies or URLs is via the AWS SDK, if available in your language of choice. Here is an example in NodeJS using the JavaScript v3 AWS SDK:
const fs = require("fs");
const { getSignedCookies } = require("@aws-sdk/cloudfront-signer");

// Read in your private_key.pem that you generated.
const privateKey = fs.readFileSync("./.secrets/private_key.pem", {
  encoding: "utf8",
});
const resource = `https://mycdn.abc.com/protected-content-folder/*`;
const dateLessThan = 1658593534;
const policyStr = JSON.stringify({
  Statement: [
    {
      Resource: resource,
      Condition: {
        DateLessThan: {
          "AWS:EpochTime": dateLessThan,
        },
      },
    },
  ],
});
const cfCookies = getSignedCookies({
  keyPairId, // the public key id from your CloudFront key group
  privateKey,
  policy: policyStr,
});
// Set CloudFront cookies using your web framework of choice
const cfCookieConfig = {
  httpOnly: true,
  secure: process.env.NODE_ENV === "production" ? true : false,
  sameSite: "lax",
  signed: false,
  expires: expiryDate, // a Date matching the policy expiry
  domain: ".abc.com",
};
for (let cookie in cfCookies) {
  ctx.cookies.set(cookie, cfCookies[cookie], { ...cfCookieConfig });
}
References:
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-signed-cookies.html

How to check IAM password policies in AWS using Boto3?

I am looking to get the password policy that applies to all IAM users.
How can I check whether an IAM password policy is enabled for the users in an AWS account using Boto3?
The password policy is set at the account level, not for individual users. You can use something like this:
In [1]: import boto3

In [2]: iam = boto3.client('iam')

In [3]: iam.get_account_password_policy()
Out[3]:
{u'PasswordPolicy': {u'AllowUsersToChangePassword': True,
  u'ExpirePasswords': False,
  u'MinimumPasswordLength': 8,
  u'RequireLowercaseCharacters': True,
  u'RequireNumbers': True,
  u'RequireSymbols': True,
  u'RequireUppercaseCharacters': True},
 'ResponseMetadata': {'HTTPStatusCode': 200,
  'RequestId': 'f9a8fc8e-fbfc-11e5-992f-df20f934a99a'}}
That shows the current policy for the account. If you want to make sure all users adhere to your policy, make sure you expire passwords periodically; users will then be compelled to create a new password that complies with your policy.
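If you also want to enforce rotation from code, here is a sketch with Boto3; the field values below are assumptions, so match them to your own policy (MaxPasswordAge is the part that makes passwords expire):

import boto3

iam = boto3.client('iam')

# Set the account-level policy; MaxPasswordAge=90 expires passwords
# after 90 days, forcing users onto a compliant password.
iam.update_account_password_policy(
    MinimumPasswordLength=8,
    RequireSymbols=True,
    RequireNumbers=True,
    RequireUppercaseCharacters=True,
    RequireLowercaseCharacters=True,
    AllowUsersToChangePassword=True,
    MaxPasswordAge=90,
)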
You can try the iam_policy module for Ansible, which aims to manage IAM policies for users, groups, and roles. It allows attaching or removing IAM policy documents for IAM users, groups, or roles.
In the iam_policy.py file you can find some boto.iam code examples, e.g.:
current_policies = [cp for cp in iam.list_role_policies(name).list_role_policies_result.policy_names]
for pol in current_policies:
    if urllib.unquote(iam.get_role_policy(name, pol).get_role_policy_result.policy_document) == pdoc:
        policy_match = True
if policy_match:
    # msg = ("The policy document you specified already exists "
    #        "under the name %s." % pol)
    pass