I am trying to programmatically create a WorkMail mailbox export job. I have access to an AWS WorkMail organisation, and I even created a role that allows writing into the target S3 bucket. The job basically fails, and I cannot figure out from the response of the boto3 describe mailbox export job call why.
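For reference, this is roughly what the boto3 calls look like on my side (all IDs and ARNs below are placeholders):

import uuid
import boto3

workmail = boto3.client('workmail', region_name='us-west-2')

# Kick off the export job
job = workmail.start_mailbox_export_job(
    ClientToken=str(uuid.uuid4()),  # idempotency token
    OrganizationId='m-xxxxxxxxxxxxxxxx',
    EntityId='{entity id}',
    Description='Testing job for mailbox export.',
    RoleArn='arn:aws:iam::{Account}:role/ApiReadRole',
    KmsKeyArn='arn:aws:kms:us-west-2:{Account}:key/{arn Id}',
    S3BucketName='communications',
    S3Prefix='media/private/emails',
)

# Poll it afterwards
status = workmail.describe_mailbox_export_job(
    OrganizationId='m-xxxxxxxxxxxxxxxx',
    JobId=job['JobId'],
)
print(status['State'], status.get('ErrorInfo'))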
The questions I also need answered are:
Where do mailbox export jobs show up in the AWS console? I can retrieve them via the SDK, but I cannot find them anywhere in the console itself.
Which principal needs the sts:AssumeRole permission? If it is WorkMail itself, I already added it to the role's trust relationships as a principal, but there is still nothing.
I have spent a lot of time changing the configuration of the IAM role to trust different principals.
I also suspect my program might not have the correct permissions, but I do have access to operations like listing users, so I don't know what I am missing.
Below is the current state of the role's trust relationships:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::{My Account}:user/{The username associated with the account}",
        "Service": [
          "workmail.amazonaws.com",
          "s3.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
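(For what it's worth, one sanity check I can run from my own credentials is something like the sketch below. It only proves that my IAM user, which is listed as a principal, can assume the role; it does not prove that the WorkMail service itself can.)

import boto3

sts = boto3.client('sts')
resp = sts.assume_role(
    RoleArn='arn:aws:iam::{Account}:role/ApiReadRole',
    RoleSessionName='workmail-export-sanity-check',
)
# If this succeeds, the trust policy at least works for my own IAM user
print(resp['Credentials']['Expiration'])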
The response for one of the jobs I started:
{'EntityId': '{entity id}',
'Description': 'Testing job for mailbox export.',
'RoleArn': 'arn:aws:iam::{Account}:role/ApiReadRole',
'KmsKeyArn': 'arn:aws:kms:us-west-2:{Account}:key/{arn Id}',
'S3BucketName': 'communications',
'S3Prefix': 'media/private/emails',
'S3Path': 'media/private/emails/{some id}.zip',
'EstimatedProgress': 0,
'State': 'FAILED',
'ErrorInfo': 'Unable to assume role "arn:aws:iam::{Account}:role/ApiReadRole"',
'StartTime': datetime.datetime(2022, 12, 2, 8, 48, 9, 642000, tzinfo=tzlocal()),
'EndTime': datetime.datetime(2022, 12, 2, 8, 48, 11, 121000, tzinfo=tzlocal()),
'ResponseMetadata': {'RequestId': '{some id}',
'HTTPStatusCode': 200,
'HTTPHeaders': {'x-amzn-requestid': '{some id}',
'content-type': 'application/x-amz-json-1.1',
'content-length': '616',
'date': 'Fri, 02 Dec 2022 06:48:53 GMT'},
'RetryAttempts': 0}
}
Alright, so after 3 days of trying to get this working I finally give up.
I have:
a private VPC with a subnet that contains an RDS MySQL instance
a Lambda rotator function based on the AWS Python template for single user MySQL rotation
a VPC endpoint for Secrets Manager, with Private DNS enabled.
I have security groups, but for debugging, I've allowed all traffic for all security groups and Network ACLs.
The Lambda rotator function has Secrets Manager permissions for all resources, permission to log to CloudWatch, and the relevant EC2 networking permissions to execute in a VPC.
What does work:
logging to CloudWatch works. I have turned on DEBUG mode
Secrets Manager is invoking the lambda function successfully
a few requests to Secrets Manager appear to work
What doesn't work:
after a few requests, any subsequent requests start timing out
After some time, it manages to send a few more requests. Could this have something to do with Python, networking timeouts and lambda connections being held or dropped due to timeouts?
I can see it does a DescribeSecret request.
Then I can see a GetSecretValue request for an AWSCURRENT stage.
Then I can see a GetSecretValue request for an AWSPENDING stage.
This one returns:
Secrets Manager can't find the specified secret value for VersionId: xxxxxxxx
Then I can see a GetRandomPassword request.
After that, I see the following in the logs:
Resetting dropped connection: secretsmanager.ap-southeast-2.amazonaws.com
The lambda function now times out.
From this point on, it can't even successfully do a DescribeSecret without the lambda timing out. After maybe 10-15 minutes it starts working again up to the GetRandomPassword part and then drops the connection again.
I don't think it's a security group, ACL or endpoint config issue, because it would either work or not work, not sometimes work.
I also don't think I'm stressing out the API that much - a few requests in a period of a few seconds and then nothing for many minutes should be fine for AWS.
I found what might be a little clue here, maybe after GetSecretValue is called:
[DEBUG] 2022-04-09T10:49:20.073Z 34585068-3f21-4471-9035-f9368a3094dd Response headers: {'x-amzn-RequestId': 'f443766f-921c-4772-997f-b150643c4909', 'Content-Type': 'application/x-amz-json-1.1', 'Content-Length': '156', 'Date': 'Sat, 09 Apr 2022 10:49:19 GMT', 'Connection': 'close'}
Looks like the response header contains Connection: close, but that's coming back FROM Secrets Manager.
When I look at other people's logs, I can see that the headers the boto3 client sends usually contain Connection: keep-alive, yet looking at my logs none of them contain that header.
I did a bit of an experiment by injecting that header.
session = boto3.session.Session()
session.events.register('before-call.secrets-manager.*', inject_header)
...

def inject_header(params, **kwargs):
    # Inject a keep-alive header into every outgoing Secrets Manager request
    params['headers']['Connection'] = 'keep-alive'
However, even if I send that header to the Secrets Manager API it makes no difference.
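One thing worth double-checking with this kind of experiment (this is a sketch, not my exact code): handlers registered on session.events only apply to clients created from that same session, so the client has to come from it, roughly like this:

import boto3

def inject_header(params, **kwargs):
    params['headers']['Connection'] = 'keep-alive'

session = boto3.session.Session()
session.events.register('before-call.secrets-manager.*', inject_header)

# The handler only fires for clients created from this session
service_client = session.client('secretsmanager')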
There's got to be something else going on, I just don't understand the intermittent nature of it!
For reference, here is the Lambda role policy. As you can see, for debugging and troubleshooting I've left the Secrets Manager statements wide open.
{
  "Statement": [
    {
      "Action": [
        "secretsmanager:DescribeSecret",
        "secretsmanager:GetSecretValue",
        "secretsmanager:PutSecretValue",
        "secretsmanager:UpdateSecretVersionStage"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Action": [
        "secretsmanager:GetRandomPassword"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DescribeNetworkInterfaces",
        "ec2:DeleteNetworkInterface",
        "ec2:AssignPrivateIpAddresses",
        "ec2:UnassignPrivateIpAddresses"
      ],
      "Effect": "Allow",
      "Resource": "*"
    },
    {
      "Action": "logs:CreateLogGroup",
      "Effect": "Allow",
      "Resource": "arn:aws:logs:ap-southeast-2:xxxxxxxxxxxx:*"
    },
    {
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Effect": "Allow",
      "Resource": [
        "arn:aws:logs:ap-southeast-2:xxxxxxxxxxxx:log-group:/aws/lambda/rcf-apse2-dev-onsite-rds-secret-rotator-function:*"
      ]
    }
  ],
  "Version": "2012-10-17"
}
The Python code is as per the following template:
https://github.com/aws-samples/aws-secrets-manager-rotation-lambdas/blob/master/SecretsManagerRDSMySQLRotationSingleUser/lambda_function.py
The secret value is being stored in this format as required:
{
  "dbClusterIdentifier": "rcf-apse2-dev-onsite",
  "engine": "mysql",
  "host": "rcf-apse2-dev-onsite.cluster-xxxxxxxxx.ap-southeast-2.rds.amazonaws.com",
  "password": "xxxxxxx",
  "username": "xxxxx"
}
I think I finally understand what's going on!
The default timeout of boto3 is 60 seconds, but the Lambda execution timeout was only set to 30 seconds, which is why the boto3 retry logic never had a chance to kick in and the Lambda function would keep timing out.
Obviously a crude way to fix this is to increase the Lambda timeout, but a better solution in my opinion is to add the following to the Python code and adjust the timeouts as you see fit.
from botocore.config import Config
...

config = Config(
    connect_timeout=2,
    read_timeout=2,
    retries={
        'max_attempts': 10,
        'mode': 'standard'
    }
)
service_client = boto3.client('secretsmanager', config=config, endpoint_url=os.environ['SECRETS_MANAGER_ENDPOINT'])
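If you want to confirm the timeout budget, one option is to log how much execution time is left before each Secrets Manager call. This is just a sketch (the helper name is mine); get_remaining_time_in_millis() comes from the Lambda context object.

import logging

logger = logging.getLogger()

def log_remaining(context, label):
    # With connect_timeout=2 and read_timeout=2 a single attempt needs at most ~4s,
    # which should stay well below the remaining Lambda time logged here.
    logger.info("%s: %d ms remaining", label, context.get_remaining_time_in_millis())

Calling something like log_remaining(context, 'before get_secret_value') ahead of each call makes it obvious when the default 60-second boto3 read timeout could never fit into a 30-second Lambda.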
I'm still not sure why the connection resets happen in the first place, but I suspect it's probably because AWS doesn't want hold on to open connections for too long as they cost memory and resources.
Oh the joys of AWS!
I am using the AWS IoT Core Device Shadow REST API. I have created an IAM user role and given it full access.
These are my endpoint, method and auth headers:
URL: {{endpoint-url}}/things/thingName/shadow
Method: GET
Header: the request is signed with an AWS signature, passing
accessKey: "accessKey"
secretKey: "secretKey"
My execute-api calls work fine; this is that API's response:
[
  {
    "id": 1,
    "type": "dog",
    "price": 249.99
  },
  {
    "id": 2,
    "type": "cat",
    "price": 124.99
  },
  {
    "id": 3,
    "type": "fish",
    "price": 0.99
  }
]
But my IoT Core Shadow REST API call is not working.
I am following these docs: https://docs.aws.amazon.com/iot/latest/developerguide/device-shadow-rest-api.html
Attached screenshot: https://i.stack.imgur.com/luBMa.png
I had the same issue. The solution was to set the Service Name field in Postman's AWS Signature settings (used to sign the AWS Signature V4 auth header) to iotdevicegateway, as per the docs here: https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/iot-data.html#IoTDataPlane.Client.get_thing_shadow
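If you are calling the shadow service from code instead of Postman, the boto3 iot-data client does the signing for you. A minimal sketch (the endpoint URL below is a placeholder for your account's data-ATS endpoint):

import json
import boto3

# The data endpoint can be looked up once with:
#   boto3.client('iot').describe_endpoint(endpointType='iot:Data-ATS')
client = boto3.client('iot-data',
                      endpoint_url='https://xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com')

response = client.get_thing_shadow(thingName='thingName')
shadow = json.loads(response['payload'].read())
print(shadow)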
I have created an S3 bucket and attached a bucket policy to it that denies the ListBucket action for a canonical user. The canonical user here is just me. Below is my code:
import json
import boto3

# bucket_name is defined elsewhere in my script
s3_client = boto3.client('s3')

s3_client.create_bucket(Bucket=bucket_name)

bucket_policy = {
    'Version': '2012-10-17',
    'Statement': [{
        'Sid': 'AddPerm',
        'Effect': 'Deny',
        # I am denying ListBucket access to this canonical user id (which is me).
        'Principal': {"CanonicalUser": "1234567777777777777777544444444466666ac73d5bc7cd43619"},
        'Action': ['s3:ListBucket'],
        'Resource': f'arn:aws:s3:::{bucket_name}',
    }]
}

# Convert the policy from a JSON dict to a string
bucket_policy = json.dumps(bucket_policy)
s3_client.put_bucket_policy(Bucket=bucket_name, Policy=bucket_policy)

s3_client.put_object(Bucket=bucket_name, Key="a/b/c/abc.txt")

# Still I am getting a response for this list_objects operation.
response = s3_client.list_objects(Bucket=bucket_name)
print(response)
How can I remove a specific S3 bucket permission from the root user?
Thanks
Based on the comments, the question is about denying the root user access to resources within the same account, which is not recommended anyway.
You can only use an AWS Organizations service control policy (SCP) to limit the permissions of the root user.
The approach described below is for cross-account access.
As per the documentation, you can address an account's root user in the following formats:
"Principal":{"AWS":"arn:aws:iam::AccountNumber-WithoutHyphens:root"}
"Principal":{"AWS":["arn:aws:iam::AccountNumber1-WithoutHyphens:root","arn:aws:iam::AccountNumber2-WithoutHyphens:root"]}
Grant permissions to an AWS Account
For example
{
  "Id": "Policy1616693279544",
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt161669321",
      "Action": [
        "s3:ListBucket"
      ],
      "Effect": "Deny",
      "Resource": "arn:aws:s3:::mybucketname",
      "Principal": {
        "AWS": [
          "arn:aws:iam::1234567890:root"
        ]
      }
    }
  ]
}
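If you would rather attach the same statement from code, a rough boto3 sketch (bucket name and account number are placeholders):

import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyListToOtherAccountRoot",
        "Effect": "Deny",
        "Action": ["s3:ListBucket"],
        "Resource": "arn:aws:s3:::mybucketname",
        "Principal": {"AWS": ["arn:aws:iam::1234567890:root"]}
    }]
}

s3 = boto3.client('s3')
s3.put_bucket_policy(Bucket='mybucketname', Policy=json.dumps(policy))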
If you don't want to stress about correct policy generation, you can cross-verify it with this tool:
AWS Policy Generator
Your AWS account identifiers
I'm having a problem accessing a new DynamoDB table via a successfully authenticated Cognito user.
I get the following AccessDeniedException when attempting a scan of the table (using the AWS JavaScript SDK):
Unable to scan. Error: {
"message": "User: arn:aws:sts::MY-ACCOUNT-NUM:assumed-role/Cognito_VODStreamTestAuth_Role/CognitoIdentityCredentials
is not authorized to perform: dynamodb:Scan on resource: arn:aws:dynamodb:us-east-1:MY-ACCOUNT-NUM:table/VideoCatalog",
"code": "AccessDeniedException",
"time": "2019-01-27T02:25:27.686Z",
"requestId": "blahblah",
"statusCode": 400,
"retryable": false,
"retryDelay": 18.559011800834146
}
The authenticated Cognito user policy has been extended with the following DynamoDB section:
{
  "Sid": "AllowedCatalogActions",
  "Effect": "Allow",
  "Action": [
    "dynamodb:BatchGetItem",
    "dynamodb:GetItem",
    "dynamodb:Scan",
    "dynamodb:Query",
    "dynamodb:UpdateItem"
  ],
  "Resource": [
    "arn:aws:dynamodb:us-east-2:MY-ACCOUNT-NUM:table/VideoCatalog"
  ]
}
Shouldn't this be sufficient to give my authenticated Cognito users access to any DynamoDB table I might create, as long as I specify the table resource as I do above? Or do I also need to add "Fine-grained access control" under the table's 'Access control' tab?
I can say that I created the VideoCatalog DynamoDB table under my non-root Administrator IAM role (represented above by MY-ACCOUNT-NUM). Is that a problem? (Prior to trying to move to a DynamoDB table I was using a JSON file on S3 as the video catalog.)
IAM confused!
Looking at the error message from AWS and the policy document that you provided, I can see that there are two different regions here.
AWS is saying that your user does not have access to arn:aws:dynamodb:us-east-1:MY-ACCOUNT-NUM:table/VideoCatalog, whereas your policy document is providing access to arn:aws:dynamodb:us-east-2:MY-ACCOUNT-NUM:table/VideoCatalog.
Are you perhaps provisioning your resources in two different regions by mistake?
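A quick way to confirm which region the table actually lives in is to call DescribeTable against each candidate region; sketched here with boto3 (table name taken from your error message):

import boto3

for region in ('us-east-1', 'us-east-2'):
    client = boto3.client('dynamodb', region_name=region)
    try:
        arn = client.describe_table(TableName='VideoCatalog')['Table']['TableArn']
        print(region, arn)
    except client.exceptions.ResourceNotFoundException:
        print(region, 'no such table')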
I have logs on cloudwatch which I want to store on S3 everyday. I am using AWS Lambda to achieve this.
I created a function on AWS Lambda and I use Cloudwatch event as the trigger. This created an event rule on Cloudwatch. Now when I execute this lambda function, it executes successfully and a file with name 'aws-log-write-test' gets created on S3 inside the bucket, but there is no other data or file in the bucket. The file contains the text 'Permission Check Successful'.
This is my lambda function:
import boto3
import collections
from datetime import datetime, date, time, timedelta

region = 'us-west-2'

def lambda_handler(event, context):
    yesterday = datetime.combine(date.today() - timedelta(1), time())
    today = datetime.combine(date.today(), time())
    unix_start = datetime(1970, 1, 1)
    client = boto3.client('logs')
    response = client.create_export_task(
        taskName='export_cw_to_s3',
        logGroupName='ABC',
        logStreamNamePrefix='ABCStream',
        fromTime=int((yesterday - unix_start).total_seconds()),
        to=int((today - unix_start).total_seconds()),
        destination='abc-logs',
        destinationPrefix='abc-logs-{}'.format(yesterday.strftime("%Y-%m-%d"))
    )
    return 'Response from export task at {} :\n{}'.format(datetime.now().isoformat(), response)
This is the response when I execute the lambda function:
Response from export task at 2018-01-05T10:57:42.441844 :\n{'ResponseMetadata': {'RetryAttempts': 0, 'HTTPStatusCode': 200, 'RequestId': 'xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx', 'HTTPHeaders': {'x-amzn-requestid': 'xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx', 'date': 'Fri, 05 Jan 2018 10:57:41 GMT', 'content-length': '49', 'content-type': 'application/x-amz-json-1.1'}}, u'taskId': u'xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx'}
START RequestId: xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx Version: $LATEST
END RequestId: xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx
REPORT RequestId: xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx Duration: 1418.13 ms Billed Duration: 1500 ms Memory Size: 128 MB Max Memory Used: 36 MB
In fact, according to the create_export_task documentation, fromTime and to must be timestamps in milliseconds, so you should multiply the number of seconds by 1000:
fromTime=int((yesterday - unix_start).total_seconds() * 1000),
to=int((today - unix_start).total_seconds() * 1000),
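As a quick sanity check of the unit conversion (using an arbitrary example date):

from datetime import datetime

yesterday = datetime(2018, 1, 4)
unix_start = datetime(1970, 1, 1)
# 1515024000 seconds since the epoch -> 1515024000000 milliseconds
print(int((yesterday - unix_start).total_seconds() * 1000))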
Also, make sure you have already created an appropriate bucket policy that allows CloudWatch Logs to export and put objects into your bucket:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "logs.us-west-2.amazonaws.com"
},
"Action": "s3:GetBucketAcl",
"Resource": "arn:aws:s3:::abc-logs"
},
{
"Effect": "Allow",
"Principal": {
"Service": "logs.us-west-2.amazonaws.com"
},
"Action": "s3:PutObject",
"Resource": "arn:aws:s3:::abc-logs/*",
"Condition": {
"StringEquals": {
"s3:x-amz-acl": "bucket-owner-full-control"
}
}
}
]
}
Trying to create different folders in your bucket so that the daily exports are kept separate from each other is a brilliant idea:
destinationPrefix='abc-logs-{}'.format(yesterday.strftime("%Y-%m-%d"))
But it's not possible to use a timestamp in the policy JSON, so you have to:
Change the Resource ARN to the following, to allow PutObject into all of the newly created destination folders:
"Resource": "arn:aws:s3:::abc-logs/*"
Make sure that the timestamps are in milliseconds.
If you want all of the CloudWatch logs to be exported, the logStreamNamePrefix in the code above must be defined like 202*.
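One last note: create_export_task only starts the export, so the objects appear in the bucket after the task finishes. You can check on it with the taskId returned in your Lambda's response, roughly like this (the taskId is a placeholder):

import boto3

logs = boto3.client('logs', region_name='us-west-2')
task = logs.describe_export_tasks(taskId='xxxxxxx-xxxx-xxxx-xxxx-xxxxxxxx')['exportTasks'][0]
print(task['status']['code'], task['status'].get('message'))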