I created a CustomResource to call a lambda function when the CloudFormation stack is created. It fails with the following error:
Received response status [FAILED] from custom resource. Message returned: User: arn:aws:sts::<account>:assumed-role/stack-role is not authorized to perform: lambda:InvokeFunction on resource: arn:aws:lambda:us-east-1:<account>:function:<lambda> because no identity-based policy allows the lambda:InvokeFunction action
This is the code in the CDK:
import * as cr from '@aws-cdk/custom-resources';
const callLambda = new cr.AwsCustomResource(this, 'MyCustomResource', {
onCreate: {
service: 'Lambda',
action: 'invoke',
region: 'us-east-1',
physicalResourceId: cr.PhysicalResourceId.of(Date.now().toString()),
parameters: {
FunctionName: `my-function`,
Payload: '{}'
},
},
policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
})
});
How can I grant permissions to the stack's assumed role so that it can perform lambda:InvokeFunction?
I solved the issue by creating a role that can be assumed by the Lambda service principal, and adding a policy statement allowing lambda:InvokeFunction.
import * as cr from '@aws-cdk/custom-resources';
import * as iam from '@aws-cdk/aws-iam';
let role = new iam.Role(this, `my-role`, {
assumedBy: new iam.ServicePrincipal('lambda.amazonaws.com'),
});
role.addToPolicy(new iam.PolicyStatement({
effect: iam.Effect.ALLOW,
actions: ['lambda:InvokeFunction'],
resources: ['*']
}));
const callLambda = new cr.AwsCustomResource(this, 'MyCustomResource', {
onCreate: {
service: 'Lambda',
action: 'invoke',
region: 'us-east-1',
physicalResourceId: cr.PhysicalResourceId.of(Date.now().toString()),
parameters: {
FunctionName: `my-function`,
Payload: '{}'
},
},
policy: cr.AwsCustomResourcePolicy.fromSdkCalls({
resources: cr.AwsCustomResourcePolicy.ANY_RESOURCE,
}),
role: role as any
});
I found that fromStatements works; there must be some issue with fromSdkCalls.
import * as cr from '@aws-cdk/custom-resources';
import { Effect, PolicyStatement } from '@aws-cdk/aws-iam';

new cr.AwsCustomResource(this, 'MyCustomResource', {
onCreate: {
service: 'Lambda',
action: 'invoke',
region: 'us-east-1',
physicalResourceId: cr.PhysicalResourceId.of(Date.now().toString()),
parameters: {
FunctionName: `my-function`,
Payload: '{}'
},
},
policy: cr.AwsCustomResourcePolicy.fromStatements([
new PolicyStatement({
effect: Effect.ALLOW,
actions: ["lambda:InvokeFunction"],
resources: ["*"],
}),
])
});
Add an AwsCustomResourcePolicy to your construct:
// infer the required permissions; fine-grained controls also available
policy: AwsCustomResourcePolicy.fromSdkCalls({resources: AwsCustomResourcePolicy.ANY_RESOURCE})
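If you want tighter scoping than ANY_RESOURCE, the generated policy can instead be limited to the target function. A minimal sketch (the account ID and function name in the ARN are placeholders):

import * as cr from '@aws-cdk/custom-resources';

// Scope the auto-generated SDK-call policy to a single function.
// The ARN below is a placeholder; substitute your own account ID and function name.
const scopedPolicy = cr.AwsCustomResourcePolicy.fromSdkCalls({
  resources: ['arn:aws:lambda:us-east-1:123456789012:function:my-function'],
});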
How can I create a CloudWatch alarm when an S3 bucket is created without encryption in AWS, either manually or through a CloudFormation template?
1. Create a Config rule that checks that your Amazon S3 bucket either has Amazon S3 default encryption enabled or that the S3 bucket policy explicitly denies put-object requests without server-side encryption. Here is a CloudFormation template to create it:
AWSTemplateFormatVersion: "2010-09-09"
Description: ""
Resources:
ConfigRule:
Type: "AWS::Config::ConfigRule"
Properties:
ConfigRuleName: "s3-bucket-server-side-encryption-enabled"
Scope:
ComplianceResourceTypes:
- "AWS::S3::Bucket"
Description: "A Config rule that checks that your Amazon S3 bucket either has Amazon S3 default encryption enabled or that the S3 bucket policy explicitly denies put-object requests without server side encryption."
Source:
Owner: "AWS"
SourceIdentifier: "S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED"
Parameters: {}
Metadata: {}
Conditions: {}
2. Use an EventBridge rule with a custom event pattern to match an AWS Config evaluation result of NON_COMPLIANT, then route the event to an SNS topic, as sketched below.
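A minimal sketch of such an EventBridge rule in CloudFormation (the AlertTopic resource and the target Id are assumptions, not part of the original answer):

Resources:
  # Sketch: EventBridge rule that forwards NON_COMPLIANT evaluations of the Config rule
  # above to an SNS topic. The topic also needs an AWS::SNS::TopicPolicy that allows
  # events.amazonaws.com to publish to it.
  NonCompliantS3EncryptionRule:
    Type: "AWS::Events::Rule"
    Properties:
      Description: "Route NON_COMPLIANT results of the S3 encryption Config rule to SNS"
      EventPattern:
        source:
          - "aws.config"
        detail-type:
          - "Config Rules Compliance Change"
        detail:
          configRuleName:
            - "s3-bucket-server-side-encryption-enabled"
          newEvaluationResult:
            complianceType:
              - "NON_COMPLIANT"
      Targets:
        - Arn:
            Ref: "AlertTopic"
          Id: "sns-alert-target"
  AlertTopic:
    Type: "AWS::SNS::Topic"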
3. Finally, to enforce S3 encryption, you can create an SCP that requires all Amazon S3 buckets to use AES256 encryption:
AWSTemplateFormatVersion: "2010-09-09"
Description: ""
Resources:
ScpPolicy:
Type: "Custom::ServiceControlPolicy"
Properties:
PolicyName: "scp_s3_encryption"
PolicyDescription: "This SCP requires that all Amazon S3 buckets use AES256 encryption in an AWS Account. "
PolicyContents: "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Action\":[\"s3:PutObject\"],\"Resource\":\"*\",\"Effect\":\"Deny\",\"Condition\":{\"StringNotEquals\":{\"s3:x-amz-server-side-encryption\":\"AES256\"}}},{\"Action\":[\"s3:PutObject\"],\"Resource\":\"*\",\"Effect\":\"Deny\",\"Condition\":{\"Bool\":{\"s3:x-amz-server-side-encryption\":false}}}]}"
ServiceToken:
Fn::GetAtt:
- "ScpResourceLambda"
- "Arn"
ScpResourceLambdaRole:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: "Allow"
Principal:
Service: "lambda.amazonaws.com"
Action:
- "sts:AssumeRole"
Path: "/"
ManagedPolicyArns:
- "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
Policies:
- PolicyName: "scp-access"
PolicyDocument:
Statement:
- Effect: "Allow"
Action:
- "organizations:UpdatePolicy"
- "organizations:DeletePolicy"
- "organizations:CreatePolicy"
- "organizations:ListPolicies"
Resource: "*"
ScpResourceLambda:
Type: "AWS::Lambda::Function"
Properties:
Code:
ZipFile: "\n'use strict';\nconst AWS = require('aws-sdk');\nconst response = require('cfn-response');\nconst organizations = new AWS.Organizations({region: 'us-east-1'});\n\nexports.handler = (event, context, cb) => {\n console.log('Invoke:', JSON.stringify(event));\n const done = (err, data) => {\n if (err) {\n console.log('Error: ', err);\n response.send(event, context, response.FAILED, {}, 'CustomResourcePhysicalID');\n } else {\n response.send(event, context, response.SUCCESS, {}, 'CustomResourcePhysicalID');\n }\n };\n \n const updatePolicies = (policyName, policyAction) => {\n organizations.listPolicies({\n Filter: \"SERVICE_CONTROL_POLICY\"\n }, function(err, data){\n if (err) done(err);\n else {\n const policy = data.Policies.filter((policy) => (policy.Name === policyName))\n let policyId = ''\n if (policy.length > 0) \n policyId = policy[0].Id\n else\n done('policy not found')\n if (policyAction === 'Update'){\n organizations.updatePolicy({\n Content: event.ResourceProperties.PolicyContents,\n PolicyId: policyId\n }, done)\n }\n else {\n organizations.deletePolicy({\n PolicyId: policyId\n }, done)\n }\n }\n })\n }\n \n if (event.RequestType === 'Update' || event.RequestType === 'Delete') {\n updatePolicies(event.ResourceProperties.PolicyName, event.RequestType)\n \n } else if (event.RequestType === 'Create') {\n organizations.createPolicy({\n Content: event.ResourceProperties.PolicyContents, \n Description: event.ResourceProperties.PolicyDescription, \n Name: event.ResourceProperties.PolicyName, \n Type: \"SERVICE_CONTROL_POLICY\"\n }, done);\n } else {\n cb(new Error('unsupported RequestType: ', event.RequestType));\n }\n};"
Handler: "index.handler"
MemorySize: 128
Role:
Fn::GetAtt:
- "ScpResourceLambdaRole"
- "Arn"
Runtime: "nodejs12.x"
Timeout: 120
Parameters: {}
Metadata: {}
Conditions: {}
I followed the Pulumi Cognito.IdentityPool docs but could not link the Identity Pool with the Role using an attachment. This should be very easy: create Identity Pool, create Role, attach Role to Identity Pool. Simple. Unfortunately, the code in the Pulumi docs does not get there; something is missing. Here is my code:
import * as aws from '@pulumi/aws'
import { Stack, cognito, region } from '../../config'
const userPool = cognito.userPools[REDACTED]
const providerName: string = `cognito-idp.${region}.amazonaws.com/${userPool.poolId}`
export const swimmingPool = new aws.cognito.IdentityPool(REDACTED, {
identityPoolName: 'stuff!',
allowUnauthenticatedIdentities: false,
allowClassicFlow: false,
cognitoIdentityProviders: [{
providerName,
clientId: userPool.clientId,
serverSideTokenCheck: false,
}],
})
export const role = new aws.iam.Role(REDACTED, {
assumeRolePolicy: aws.iam.assumeRolePolicyForPrincipal(
{ Federated: 'cognito-identity.amazonaws.com' },
),
})
export const policy = new aws.iam.RolePolicy(REDACTED, {
role: role.id,
policy: {
Version: '2012-10-17',
Statement: [
{
Action: [
'cognito-sync:*',
'cognito-identity:*',
's3:PutObject',
's3:GetObject',
],
Effect: 'Allow',
Resource: '*',
},
],
},
})
export const roleAttachment = new aws.cognito.IdentityPoolRoleAttachment(REDACTED, {
identityPoolId: swimmingPool.id,
roles: { authenticated: role.arn },
roleMappings: [{
identityProvider: `cognito-idp.${region}.amazonaws.com/${userPool.poolId}:${userPool.clientId}`,
ambiguousRoleResolution: 'AuthenticatedRole',
type: 'Rules',
mappingRules: [{
claim: 'isAdmin',
matchType: 'Equals',
roleArn: role.arn,
value: 'paid',
}],
}],
})
I was expecting to see the Role attached in the AWS Console when viewing the Identity Pool, but "You have not specified roles for this identity pool. Click here to fix it." appears instead. What has to be done to attach the attachment that I attached?
I am trying to run a newly created Lambda function locally using a SAM template with the Node.js runtime.
I have the following:
a) aws account with region, accessKeyId & secretAccessKey
b) aws-cli, aws-sam, docker
Running the Lambda function locally using sam local invoke works fine, but when I connect to DynamoDB in my function, I get the error below.
{"message":"User: arn:aws:iam::*********:user/test.user is not authorized to perform: dynamodb:Query on resource: arn:aws:dynamodb:us-west-2:********:table/my_table with an explicit deny","code":"AccessDeniedException","time":"2020-08-30T07:45:51.678Z","requestId":"7SIBTHTKDSSDJNSTLNUSSBHDNDHBSMVJF66Q9ASUAAJG","statusCode":400,"retryable":false,"retryDelay":40.84164538820391}
I have access to the AWS account, have an accessKeyId and secretAccessKey, and am able to query the table from the AWS console. In the VS Code editor I installed the AWS Toolkit and added the credentials, but I am still getting the same error locally.
Can somebody help me with this issue? I am having a hard time finding the solution.
Here is my code snippet.
let response;
const AWS = require('aws-sdk');
AWS.config.update({
region: "us-west-2",
accessKeyId: '************',
secretAccessKey: '********************'
});
const dbClient = new AWS.DynamoDB.DocumentClient();
exports.lambdaHandler = async (event, context) => {
try {
let myTable = await getData(event.myId);
response = {
'statusCode': 200,
'body': JSON.stringify({
message: myTable
})
}
} catch (err) {
console.log(err);
return err;
}
return response
};
const MY_TABLE_NAME = "my_table";
const getData = async (myId) => {
const params = {
TableName: MY_TABLE_NAME ,
KeyConditionExpression: "#uid = :id",
ExpressionAttributeValues: {
':id': myId
},
ExpressionAttributeNames: {
'#uid': 'userID'
}
};
let { Count, Items } = await dbClient.query(params).promise();
if (Items.length == 0) {
return false;
} else {
var obj = {
Name: (Count > 0) ? Items[0].name : null,
MY_TABLE: MY_TABLE_NAME
};
return obj;
}
};
template.yaml
Resources:
HelloWorldFunction:
Type: AWS::Serverless::Function # More info
Properties:
CodeUri: hello-world/
Handler: app.lambdaHandler
Runtime: nodejs12.x
Policies: AmazonDynamoDBFullAccess
Events:
HelloWorld:
Type: Api # More info
Properties:
Path: /hello
Method: get
Any help would be really appreciated. Thanks in advance.
The reason I was getting the user-not-authorized error was that I did not have programmatic access; I was only authorized to use the AWS console.
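As a side note, once programmatic access is available, the keys do not need to be hard-coded in the handler; the SDK's default credential provider chain can supply them. A sketch (the remarks about how sam local invoke picks up credentials are an assumption about the local setup, not from the original answer):

const AWS = require('aws-sdk');

// Let the SDK resolve credentials from its default provider chain:
// locally, sam local invoke forwards the shell's AWS_PROFILE / access-key variables,
// and in the cloud the function's execution role supplies temporary credentials.
AWS.config.update({ region: 'us-west-2' });

const dbClient = new AWS.DynamoDB.DocumentClient();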
Thanks
I am working with AWS Textract and I want to analyze a multi-page document, so I have to use the async options. I first used the startDocumentAnalysis function and got a JobId in return, but it needs to trigger a function that I have set up to run when the SNS topic receives a message.
These are my serverless file and handler file.
provider:
name: aws
runtime: nodejs8.10
stage: dev
region: us-east-1
iamRoleStatements:
- Effect: "Allow"
Action:
- "s3:*"
Resource: { "Fn::Join": ["", ["arn:aws:s3:::${self:custom.secrets.IMAGE_BUCKET_NAME}", "/*" ] ] }
- Effect: "Allow"
Action:
- "sts:AssumeRole"
- "SNS:Publish"
- "lambda:InvokeFunction"
- "textract:DetectDocumentText"
- "textract:AnalyzeDocument"
- "textract:StartDocumentAnalysis"
- "textract:GetDocumentAnalysis"
Resource: "*"
custom:
secrets: ${file(secrets.${opt:stage, self:provider.stage}.yml)}
functions:
routes:
handler: src/functions/routes/handler.run
events:
- s3:
bucket: ${self:custom.secrets.IMAGE_BUCKET_NAME}
event: s3:ObjectCreated:*
textract:
handler: src/functions/routes/handler.detectTextAnalysis
events:
- sns: "TextractTopic"
resources:
Resources:
TextractTopic:
Type: AWS::SNS::Topic
Properties:
DisplayName: "Start Textract API Response"
TopicName: TextractResponseTopic
Handler.js
module.exports.run = async (event) => {
const uploadedBucket = event.Records[0].s3.bucket.name;
const uploadedObject = event.Records[0].s3.object.key;
var params = {
DocumentLocation: {
S3Object: {
Bucket: uploadedBucket,
Name: uploadedObject
}
},
FeatureTypes: [
"TABLES",
"FORMS"
],
NotificationChannel: {
RoleArn: 'arn:aws:iam::<accont-id>:role/qvalia-ocr-solution-dev-us-east-1-lambdaRole',
SNSTopicArn: 'arn:aws:sns:us-east-1:<accont-id>:TextractTopic'
}
};
let textractOutput = await new Promise((resolve, reject) => {
textract.startDocumentAnalysis(params, function(err, data) {
if (err) reject(err);
else resolve(data);
});
});
}
I manually published an SNS message to the topic and it fires the Textract lambda, which currently has this:
module.exports.detectTextAnalysis = async (event) => {
console.log('SNS Topic isssss Generated');
console.log(event.Records[0].Sns.Message);
};
What is my mistake, and why is startDocumentAnalysis not publishing a message and triggering the lambda?
Note: I haven't used startDocumentTextDetection before calling the startDocumentAnalysis function, though it should not be necessary to call it first.
Make sure you have the following in the Trusted Relationships of the role you are using:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com",
"textract.amazonaws.com"
]
},
"Action": "sts:AssumeRole"
}
]
}
The SNS topic name must begin with AmazonTextract.
In the end, your ARN should look like this:
arn:aws:sns:us-east-2:111111111111:AmazonTextract
I was able to get this working directly via the Serverless Framework by adding a Lambda execution role resource to my serverless.yml file:
resources:
Resources:
IamRoleLambdaExecution:
Type: AWS::IAM::Role
Properties:
AssumeRolePolicyDocument:
Version: "2012-10-17"
Statement:
- Effect: Allow
Principal:
Service:
- lambda.amazonaws.com
- textract.amazonaws.com
Action: sts:AssumeRole
And then I just used the same role generated by Serverless (for the lambda function) as the notification channel role parameter when starting the Textract document analysis, as sketched below.
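For reference, the call then looks roughly like this (a sketch rather than the original answer's code; passing the ARNs through TEXTRACT_ROLE_ARN and TEXTRACT_TOPIC_ARN environment variables is an assumption):

const AWS = require('aws-sdk');
const textract = new AWS.Textract();

module.exports.run = async (event) => {
  const params = {
    DocumentLocation: {
      S3Object: {
        Bucket: event.Records[0].s3.bucket.name,
        Name: event.Records[0].s3.object.key
      }
    },
    FeatureTypes: ['TABLES', 'FORMS'],
    NotificationChannel: {
      // Reuse the role generated by Serverless (IamRoleLambdaExecution); the environment
      // variables below are assumptions about how the ARNs are supplied to the function.
      RoleArn: process.env.TEXTRACT_ROLE_ARN,
      SNSTopicArn: process.env.TEXTRACT_TOPIC_ARN
    }
  };
  return textract.startDocumentAnalysis(params).promise();
};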
Thanks to this post for pointing me in the right direction!
For anyone using the CDK in TypeScript, you will need to add Lambda as a ServicePrincipal as usual to the Lambda Execution Role. Next, access the assumeRolePolicy of the execution role and call the addStatements method.
The basic execution role without any additional statements (add those later):
this.executionRole = new iam.Role(this, 'ExecutionRole', {
assumedBy: new ServicePrincipal('lambda.amazonaws.com'),
});
Next, add Textract as an additional ServicePrincipal
this.executionRole.assumeRolePolicy?.addStatements(
new PolicyStatement({
principals: [
new ServicePrincipal('textract.amazonaws.com'),
],
actions: ['sts:AssumeRole']
})
);
Also, ensure the execution role has full permissions on the target SNS topic (note the topic is created already and accessed via fromTopicArn method)
const stmtSNSOps = new PolicyStatement({
effect: iam.Effect.ALLOW,
actions: [
"SNS:*"
],
resources: [
this.textractJobStatusTopic.topicArn
]
});
Add the policy statement to a global policy (within the active stack)
this.standardPolicy = new iam.Policy(this, 'Policy', {
statements: [
...
stmtSNSOps,
...
]
});
Finally, attach the policy to the execution role
this.executionRole.attachInlinePolicy(this.standardPolicy);
If your bucket is encrypted, you should also grant KMS permissions; otherwise it won't work. See the sketch below.
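For example, with the CDK setup above, granting the key permissions could look like this (a sketch; bucketKey stands in for however your stack obtains the bucket's KMS key, e.g. kms.Key.fromKeyArn):

// Sketch: allow the execution role to use the bucket's KMS key.
// `bucketKey` is an assumed reference to that key, not defined in the original answer.
this.executionRole.addToPolicy(
  new PolicyStatement({
    effect: iam.Effect.ALLOW,
    actions: ['kms:Decrypt', 'kms:GenerateDataKey', 'kms:DescribeKey'],
    resources: [bucketKey.keyArn],
  })
);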
I am trying to log some information to a log stream I created in AWS CloudWatch Logs using a Lambda function with the aws-sdk, but I cannot see any logs even when the Lambda is triggered.
This is my code,
Triggering Lambda Code
...
const lambda = new aws.Lambda();
lambda.invoke({
FunctionName: 'email-api-dev-logError',
Payload: JSON.stringify(err)
}, (err, data) => {
if(err) console.log('Lambda error is ', err);
else console.log('Lambda Data is ', data);
})
...
Lambda function
module.exports.logError = async (event) => {
const cloudwatchlogs = new aws.CloudWatchLogs();
const logEventParams = {
logEvents: [
{
message: event,
timestamp: new Date().getTime()
}
],
logGroupName: 'EmailAPIErrors',
logStreamName: 'Error'
};
cloudwatchlogs.putLogEvents(logEventParams, (err, data) => {
if (err) console.log(err, err.stack);
else console.log('Log data is ', data);
});
};
serverless.yml
iamRoleStatements:
- Effect: Allow
Action:
- lambda:InvokeFunction
Resource: "*"
- Effect: "Allow"
Action:
- "sqs:SendMessage"
- "sqs:ReceiveMessage"
Resource: "arn:aws:sqs:${self:provider.region}:*:EmailQueueDev"
- Effect: "Allow"
Action:
- "logs:CreateLogStream"
- "logs:PutLogEvents"
Resource: "arn:aws:logs:*:*:log-group:/aws/rds/*:log-stream:*"
functions:
logError:
handler: handler.logError
I am not sure what is going wrong here; please help me find the possible error and how to fix it.
You can't write to a custom log group from AWS Lambda. The default log group associated with a Lambda function is /aws/lambda/<function-name>.
This is how AWS Lambda is designed; the service favors convention over configuration, so the default log stream pattern is already defined.
You can achieve similar behavior on an EC2 instance by installing the CloudWatch agent.
CloudWatch agent configuration - EC2
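For reference, the logs section of a CloudWatch agent configuration file might look roughly like this (a sketch; the file path, log group, and log stream names are placeholders):

{
  "logs": {
    "logs_collected": {
      "files": {
        "collect_list": [
          {
            "file_path": "/var/log/my-app/errors.log",
            "log_group_name": "EmailAPIErrors",
            "log_stream_name": "Error"
          }
        ]
      }
    }
  }
}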