IAM Permissions Errors When Using boto3 for AWS Comprehend

I'm playing around with boto3 from the command line to run some sentiment analysis through AWS and am running into some IAM issues. When calling the detect_dominant_language function, I'm hitting a NotAuthorizedException despite having a policy in place that allows all Comprehend actions. The policy for the account is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": [
                "comprehend:*",
                "s3:ListAllMyBuckets",
                "s3:ListBucket",
                "s3:GetBucketLocation",
                "iam:ListRoles",
                "iam:GetRole"
            ],
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}
Any ideas where I might be going wrong with this? I've triple-checked my access key to make sure I'm referring to the correct account. When I check the policy, it's there, so I'm a little at a loss as to the disconnect. S3 seems to be working fine as well.
Steps already taken:
Resetting the access key/secret access key.
Creating an IAM policy that explicitly grants the needed actions and attaching it to the "Admin" user.
Calling this method from the CLI (I get the same error).
Below, I've included additional information that may be helpful...
Code to check IAM policies:
import boto3

iam = boto3.client('iam',
                   aws_access_key_id='*********************',
                   aws_secret_access_key='*************************************')
iam.list_attached_user_policies(UserName="Admin")
Output:
{'AttachedPolicies': [{'PolicyName': 'ComprehendFullAccess',
   'PolicyArn': 'arn:aws:iam::aws:policy/ComprehendFullAccess'},
  {'PolicyName': 'AdministratorAccess',
   'PolicyArn': 'arn:aws:iam::aws:policy/AdministratorAccess'},
  {'PolicyName': 'Comprehend-Limitied',
   'PolicyArn': 'arn:aws:iam::401311205158:policy/Comprehend-Limitied'}],
 'IsTruncated': False,
 'ResponseMetadata': {'RequestId': '9094d8ff-1730-44b8-af0f-9222a63b32e9',
  'HTTPStatusCode': 200,
  'HTTPHeaders': {'x-amzn-requestid': '9094d8ff-1730-44b8-af0f-9222a63b32e9',
   'content-type': 'text/xml',
   'content-length': '871',
   'date': 'Thu, 20 Jan 2022 21:48:11 GMT'},
  'RetryAttempts': 0}}
Code to trigger error:
comprehend = boto3.client('comprehend',
                          aws_access_key_id='*********************',
                          aws_secret_access_key='********************************')
test_language_string = "This is a test string. I'm hoping that AWS Comprehend can interprete this as english..."
comprehend.detect_dominant_language(Text=test_language_string)
Output:
ClientError: An error occurred (NotAuthorizedException) when calling the DetectDominantLanguage operation: Your account is not authorized to make this call.

I encountered the same error and ended up creating a new user group and a user for that particular API access. Here are the steps in a nutshell:
Create a user group (e.g. Research)
Give it access to ComprehendFullAccess
Create a user (e.g. ComprehendUser) under the newly created user group (i.e. Research)
Bingo! It should work now.
Here is my code snippet:
# import packages
import boto3

# aws access credentials
AWS_ACCESS_KEY_ID = 'your-access-key'
AWS_SECRET_ACCESS_KEY = 'your-secret-key'

comprehend = boto3.client('comprehend',
                          aws_access_key_id=AWS_ACCESS_KEY_ID,
                          aws_secret_access_key=AWS_SECRET_ACCESS_KEY,
                          region_name='us-east-1')

test_language_string = "This is a test string. I'm hoping that AWS Comprehend can interprete this as english..."
comprehend.detect_dominant_language(Text=test_language_string)
Expected Output
{'Languages': [{'LanguageCode': 'en', 'Score': 0.9753355979919434}],
 'ResponseMetadata': {'RequestId': 'd2ab429f-6ff7-4f9b-9ec2-dbf494ebf20a',
  'HTTPStatusCode': 200,
  'HTTPHeaders': {'x-amzn-requestid': 'd2ab429f-6ff7-4f9b-9ec2-dbf494ebf20a',
   'content-type': 'application/x-amz-json-1.1',
   'content-length': '64',
   'date': 'Mon, 07 Feb 2022 16:31:36 GMT'},
  'RetryAttempts': 0}}

UPDATE: Thanks for all the feedback y'all! It turns out us-west-1 doesn't support Comprehend. Switching to a different region did the trick, so I would recommend anyone with similar problems try different regions before digging too deep into permissions/access keys.
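A quick way to sanity-check region support from boto3 before digging into IAM is a minimal sketch like the one below; the region list comes from the locally installed botocore data, so treat it as indicative, and us-east-1 is just one region known to offer Comprehend (as used in the answer above):
import boto3

# List the regions where the installed botocore endpoint data reports Comprehend as available.
session = boto3.session.Session()
print(session.get_available_regions('comprehend'))

# Then pin the client to a supported region explicitly.
comprehend = boto3.client('comprehend', region_name='us-east-1')
print(comprehend.detect_dominant_language(Text="This is a test string.")['Languages'])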

Related

Fetch IAM Username as the output

We are running an AWS Glue job and we believe the following code snippet should return the AWS user ID from which the job is being triggered.
For example, the following code was run as the user mmohanty:
import boto3

client = boto3.client('sts')
response = client.get_caller_identity()
print('User ID:', response['UserId'])
The output is shown as AROA6CNCYWLF5MGCB5DF4:GlueJobRunnerSession instead of the IAM username.
The entire output of client.get_caller_identity() doesn't have any reference to the IAM username:
{'UserId': 'AROA6CNCYWLF5MGCB5DF4:GlueJobRunnerSession', 'Account': '1234567', 'Arn': 'arn:aws:sts::12345678:assumed-role/xxx-GlueRole/GlueJobRunnerSession', 'ResponseMetadata': {'RequestId': 'bb43bd2b-4426-46b5-8457-aaaaaaa92116d', 'HTTPStatusCode': 200, 'HTTPHeaders': {'x-amzn-requestid': 'bb43bd2b-4426-46b5-8457-aaaaaaa2116d', 'content-type': 'text/xml', 'content-length': '459', 'date': 'Fri, 20 Jan 2023 17:31:27 GMT'}, 'RetryAttempts': 0}}
Please let us know how to get the IAM username instead of the cryptic user ID.

How do I list deleted secrets in AWS Secrets Manager?

Looking at the man page for list-secrets, there are no special options to show deleted secrets, and it does not list them. However, the output definition includes a "DeletedDate" timestamp.
The ListSecrets API does not show any option for deleted secrets either, but again the response includes a DeletedDate.
The boto3 docs for list_secrets() are the same.
However, in the AWS console I can see deleted secrets. A quick look at the dev tools shows that the console's request payload to the Secrets Manager endpoint looks like:
{
    "method": "POST",
    "path": "/",
    "headers": {
        "Content-Type": "application/x-amz-json-1.1",
        "X-Amz-Target": "secretsmanager.ListSecrets",
        "X-Amz-Date": "Fri, 27 Nov 2020 13:19:06 GMT"
    },
    "operation": "ListSecrets",
    "content": {
        "MaxResults": 100,
        "IncludeDeleted": true,
        "SortOrder": "asc"
    },
    "region": "eu-west-2"
}
Is there any way to pass "IncludeDeleted": true to the CLI?
Is this a bug? Where do I report it? (I know there is a CloudFormation bug tracker on GitHub; I assume Secrets Manager would have something similar somewhere..?)
Save the following file to ~/.aws/models/secretsmanager/2017-10-17/service-2.sdk-extras.json:
{
    "version": 1.0,
    "merge": {
        "shapes": {
            "ListSecretsRequest": {
                "members": {
                    "IncludeDeleted": {
                        "shape": "BooleanType",
                        "documentation": "<p>If set, includes secrets that are disabled.</p>"
                    }
                }
            }
        }
    }
}
Then you can list secrets with the CLI as follows:
aws secretsmanager list-secrets --include-deleted
or with boto3:
import boto3

def list_secrets(session, **kwargs):
    client = session.client("secretsmanager")
    for page in client.get_paginator("list_secrets").paginate(**kwargs):
        yield from page["SecretList"]

if __name__ == "__main__":
    session = boto3.Session()
    for secret in list_secrets(session, IncludeDeleted=True):
        if "DeletedDate" in secret:
            print(secret)
This uses the botocore loader mechanism to augment the service model for Secrets Manager and to tell boto3 that "IncludeDeleted" is a parameter for the ListSecrets API.
If you want more detail, I've just posted a blog post explaining what else I tried and how I got to this solution – and thanks to OP, whose dev tool experiments were a useful clue.

How to get the current execution role in a lambda?

I'm having issues with a Lambda that does not seem to have the permissions to perform an action, and I want to get some troubleshooting information.
I can do this to get the current user:
print("Current user: " + boto3.resource('iam').CurrentUser().arn)
Is there a way to get the execution role at runtime? Better yet, is there a way to get the policies that are attached to this role dynamically?
They shouldn't change from when I created the lambda, but I want to verify to be sure.
Check this: list_attached_user_policies
Lists all managed policies that are attached to the specified IAM user.
An IAM user can also have inline policies embedded with it.
If you want just the inline policies: get_user_policy
Retrieves the specified inline policy document that is embedded in the specified IAM user.
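Since the question is about a Lambda execution role rather than a user, the role-level counterparts of these calls may be more directly useful. A minimal sketch, assuming a hypothetical role name my-lambda-role:
import boto3

iam = boto3.client('iam')
role_name = 'my-lambda-role'  # hypothetical name; substitute your function's execution role

# Managed policies attached to the role (role-level counterpart of list_attached_user_policies)
print(iam.list_attached_role_policies(RoleName=role_name)['AttachedPolicies'])

# Inline policies embedded in the role (role-level counterparts of list_user_policies / get_user_policy)
for policy_name in iam.list_role_policies(RoleName=role_name)['PolicyNames']:
    print(iam.get_role_policy(RoleName=role_name, PolicyName=policy_name)['PolicyDocument'])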
I don't know how much relevance this will bring to the OP, but we can get the Lambda function configuration at runtime.
import os
import boto3

lambda_client = boto3.client('lambda')
role_response = lambda_client.get_function_configuration(
    FunctionName=os.environ['AWS_LAMBDA_FUNCTION_NAME']
)
print(role_response)
role_arn = role_response['Role']
role_response will contain the role ARN.
role_response =>
{'ResponseMetadata': {'RequestId': '', 'HTTPStatusCode': 200,
  'HTTPHeaders': {'date': 'GMT', 'content-type': 'application/json',
   'content-length': '877', 'connection': 'keep-alive', 'x-amzn-requestid': ''},
  'RetryAttempts': 0},
 'FunctionName': 'lambda_name',
 'FunctionArn': 'arn:aws:lambda:<region>:<account_id>:function:lambda_arn',
 'Runtime': 'python3.8',
 'Role': 'arn:aws:iam::<account_id>:role/<role_name>',
 'Handler': 'handlers.handle',
 'CodeSize': 30772,
 'Description': '',
 'Timeout': 30,
 'MemorySize': 128,
 'LastModified': '',
 'CodeSha256': '',
 'Version': '$LATEST',
 'VpcConfig': {'SubnetIds': [], 'SecurityGroupIds': [], 'VpcId': ''},
 'TracingConfig': {'Mode': 'PassThrough'},
 'RevisionId': '',
 'State': 'Active',
 'LastUpdateStatus': 'Successful'}
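Putting the two answers together, a hedged sketch of going from that 'Role' ARN to the policies attached to the role; it assumes the standard arn:aws:iam::<account_id>:role/<role_name> format and that the role is allowed to call the IAM read APIs on itself:
import os
import boto3

# Fetch the execution role ARN from the function configuration, as in the answer above.
lambda_client = boto3.client('lambda')
role_arn = lambda_client.get_function_configuration(
    FunctionName=os.environ['AWS_LAMBDA_FUNCTION_NAME']
)['Role']

# The role name is the last path segment of the role ARN.
role_name = role_arn.split('/')[-1]

iam = boto3.client('iam')
print(iam.list_attached_role_policies(RoleName=role_name)['AttachedPolicies'])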

Lambda is not authorized to perform: dynamodb:Query on resource - But scan works just fine

I am new to Lambda and was playing with the Lambda samples given in AWS.
In the Node.js 6.10 runtime,
dynamo.query({ TableName: 'my_table', KeyConditionExpression: 'id = :id', ExpressionAttributeValues: {':id': '123'} }, done); errors out with "xxxxLambda is not authorized to perform: dynamodb:Query on resource".
But dynamo.scan({ TableName: 'my_table' }, done); works just fine, as does the put operation dynamo.putItem(JSON.parse(event.body), done);. I haven't modified my IAM policies.
BTW, the definition of the done variable is:
const done = (err, res) => callback(null, {
    statusCode: err ? '400' : '200',
    body: err ? err.message : JSON.stringify(res),
    headers: {
        'Content-Type': 'application/json',
    },
});
Edit
My Lambda IAM role's original attached policies:
AWSLambdaMicroserviceExecutionRole-xxxx
AWSLambdaBasicExecutionRole-xxxx-xxxx-xx-xx-xx
When I attached the "AmazonDynamoDBFullAccess" policy, 'query' also started to work fine.
But what I am wondering is how scan and put work but not query?
Edit-2
Dear downvoter, please add a comment so that I can improve my question.
This seems like a permission issue.
Double-check the IAM policy inside the IAM role attached to the Lambda function, and whether it has AmazonDynamoDBFullAccess or a custom policy that grants the query action:
"Effect": "Allow",
"Action" [
dynamodb:Query
]
I had the same issue, but the accepted solution alone did not work for me. I got everything to work fine if I did not provide a table ARN in the permissions, but just made it work for all tables. Sure, it works, but that's hardly a secure solution.
I found that I had to specify the ARN for the GSI I was querying on in the IAM permissions. So instead of specifying the ARN for the table, I specified the ARN for the specific index I wanted to query on. And then it worked. Maybe there's a more elegant solution, but this is what worked for me.
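For illustration, a hedged sketch of such a statement; the region, account ID, and index wildcard are placeholders (my_table matches the question), and the /index/* suffix is how DynamoDB index resources are addressed in IAM policies:
{
    "Effect": "Allow",
    "Action": [
        "dynamodb:Query"
    ],
    "Resource": [
        "arn:aws:dynamodb:us-east-1:123456789012:table/my_table",
        "arn:aws:dynamodb:us-east-1:123456789012:table/my_table/index/*"
    ]
}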

AWS S3 IAM user can't access bucket

I have an IAM user called server that uses s3cmd to back up to S3.
s3cmd sync /path/to/file-to-send.bak s3://my-bucket-name/
Which gives:
ERROR: S3 error: 403 (SignatureDoesNotMatch): The request signature we calculated does not match the signature you provided. Check your key and signing method.
The same user can send email via SES, so I know that the access_key and secret_key are correct.
I have also attached the AmazonS3FullAccess policy to the IAM user and clicked on Simulate policy. I added all of the Amazon S3 actions and then clicked Run simulation. All of the actions were allowed, so it seems that S3 thinks I should have access. The policy is:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}
The only way I can get access is to use the root account's access_key and secret_key. I cannot get any IAM user to log in.
Using s3cmd --debug gives:
DEBUG: Response: {'status': 403, 'headers': {'x-amz-bucket-region': 'eu-west-1', 'x-amz-id-2': 'XXX', 'server': 'AmazonS3', 'transfer-encoding': 'chunked', 'x-amz-request-id': 'XXX', 'date': 'Tue, 30 Aug 2016 09:10:52 GMT', 'content-type': 'application/xml'}, 'reason': 'Forbidden', 'data': '<?xml version="1.0" encoding="UTF-8"?>\n<Error><Code>SignatureDoesNotMatch</Code><Message>The request signature we calculated does not match the signature you provided. Check your key and signing method.</Message><AWSAccessKeyId>XXX</AWSAccessKeyId><StringToSign>GET\n\n\n\nx-amz-date:Tue, 30 Aug 2016 09:10:53 +0000\n/XXX/</StringToSign><SignatureProvided>XXX</SignatureProvided><StringToSignBytes>XXX</StringToSignBytes><RequestId>490BE76ECEABF4B3</RequestId><HostId>XXX</HostId></Error>'}
DEBUG: ConnMan.put(): connection put back to pool (https://XXX.s3.amazonaws.com#1)
DEBUG: S3Error: 403 (Forbidden)
Where I have replaced anything sensitive looking with XXX.
Have I missed something in the permissions setup?
Explicitly use the correct IAM access key and secret key with s3cmd, i.e.:
s3cmd --access_key=75674745756 --secret_key=F6AFHDGFTFJGHGH sync /path/to/file-to-send.bak s3://my-bucket-name/
The error shown indicates an incorrect access key and/or secret key.
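If it's unclear which principal a given key pair actually resolves to, a minimal boto3 sketch using STS can confirm it (the key values below are placeholders):
import boto3

# Confirm which IAM principal these credentials belong to before blaming bucket permissions.
sts = boto3.client('sts',
                   aws_access_key_id='YOUR_ACCESS_KEY',
                   aws_secret_access_key='YOUR_SECRET_KEY')
print(sts.get_caller_identity()['Arn'])
# Expect something like arn:aws:iam::<account_id>:user/server for the 'server' IAM user.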