Creating MSK event source mapping for Lambda function fails - amazon-web-services

I am using the AWS CDK to create a stack with an AWS MSK cluster and a Lambda function that should be triggered when a new message is available in a specific topic.
I already had it working nicely, but then I decided to add clientAuthentication, and now I am stuck. I am using SASL/SCRAM for authentication. I have created a custom encryption key via the KMS service and I am using that key in a Secret in Secrets Manager. I have associated that Secret with my MSK cluster and turned on clientAuthentication there.
I have also already created an interface endpoint in my VPC to the Lambda service so that the service can access my cluster (again, this already worked before I activated clientAuthentication).
Now I am defining my Lambda listener handler function like this:
const listener = new aws_lambda.Function(this, 'ListenerHandler', {
  vpc,
  vpcSubnets: { subnetGroupName: 'ListenerPrivate' },
  runtime: aws_lambda.Runtime.NODEJS_14_X,
  code: aws_lambda.Code.fromAsset('lambda'),
  handler: 'listener.handler'
});
listener.addToRolePolicy(new aws_iam.PolicyStatement({
  effect: Effect.ALLOW,
  actions: ['kafka:*', 'kafka-cluster:*', 'secretsmanager:DescribeSecret', 'secretsmanager:GetSecretValue'],
  resources: [cluster.ref]
}));
const secretsFromLambdaAccessRole = new aws_iam.Role(this, 'AccessSecretsFromLambdaRoles', {
  assumedBy: new aws_iam.ServicePrincipal('kafka.amazonaws.com')
});
secretsFromLambdaAccessRole.addToPolicy(new aws_iam.PolicyStatement({
  effect: Effect.ALLOW,
  actions: ['secretsmanager:DescribeSecret', 'secretsmanager:GetSecretValue'],
  resources: [KAFKA_ACCESS_SECRET_ARN]
}));
listener.role?.addManagedPolicy(
  aws_iam.ManagedPolicy
    .fromAwsManagedPolicyName("service-role/AWSLambdaVPCAccessExecutionRole")
);
listener.role?.addManagedPolicy(
  aws_iam.ManagedPolicy
    .fromAwsManagedPolicyName("service-role/AWSLambdaMSKExecutionRole")
);
const kafkaAccessSecret = aws_secretsmanager.Secret
  .fromSecretCompleteArn(this, 'kafkaAccessSecret', KAFKA_ACCESS_SECRET_ARN);
listener.addEventSource(new ManagedKafkaEventSource({
  clusterArn: cluster.ref,
  topic: "MyTopic",
  startingPosition: StartingPosition.LATEST,
  secret: kafkaAccessSecret,
}));
The secret also has policies assigned to it:
{
  "Version" : "2012-10-17",
  "Statement" : [ {
    "Sid" : "AWSLambdaResourcePolicy",
    "Effect" : "Allow",
    "Principal" : {
      "Service" : "lambda.amazonaws.com"
    },
    "Action" : [ "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret", "secretsmanager:ListSecretVersionIds" ],
    "Resource" : "arn:aws:secretsmanager:some-region:some-account:secret:AmazonMSK_some-secret"
  }, {
    "Sid" : "AWSKafkaResourcePolicy",
    "Effect" : "Allow",
    "Principal" : {
      "Service" : "kafka.amazonaws.com"
    },
    "Action" : "secretsmanager:GetSecretValue",
    "Resource" : "arn:aws:secretsmanager:some-region:some-account:secret:AmazonMSK_some-secret"
  } ]
}
Now, when I try to deploy my Lambda function via the CDK and it gets to the point where it should add the event source mapping, I get this error:
Failed resources:
MskExampleStack | 17:23:22 | CREATE_FAILED | AWS::Lambda::EventSourceMapping | ListenerHandler/KafkaEventSource:MskExampleStackListenerHandler4711MyTopic (ListenerHandlerKafkaEventSourceMskExampleStackListenerHandler4711MyTopic0815)
Resource handler returned message: "Invalid request provided: Cannot access secret manager value arn:aws:secretsmanager:some-region:some-account:secret:AmazonMSK_dev-some-secret.
Please ensure the role can perform the 'secretsmanager:GetSecretValue' action on your broker in IAM. (Service: Lambda, Status Code: 400, Request ID: 123456789, Extended Request ID: null)" (RequestToken: 987654321, HandlerErrorCode: InvalidRequest)
I cannot figure out what I am missing. What role is the error referring to? Where do I need to add the "secretsmanager:GetSecretValue" action? My user has full admin rights.

You need the following:
- kms permissions on the Lambda role
- secretsmanager permissions on the Lambda role
- (what I was missing) lambda.amazonaws.com in the key policy for your KMS key
My setup:
Lambda permissions:
- Effect: "Allow"
Action:
- kms:Decrypt
- kms:GenerateDataKey*
Resource:
- "*"
- Effect: "Allow"
Action:
- secretsmanager:GetSecretValue
Resource:
- "your secret arn"
KMS Policy:
{
  "Sid": "Decrypt",
  "Effect": "Allow",
  "Principal": {
    "Service": [
      "lambda.amazonaws.com"
    ]
  },
  "Action": [
    "kms:GenerateDataKey*",
    "kms:Decrypt"
  ],
  "Resource": "*"
}
Amazon has a knack for writing about 500 needless words to document a feature and never document things around using KMS with that feature even though it seems to vary greatly.
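If the key and the Lambda live in the same CDK stack, the same permissions can also be wired up with the CDK grant helpers. This is a minimal sketch, assuming key is the aws_kms.Key used to encrypt the secret (it is not shown in the question's code); treat it as an illustration rather than the exact fix:
// Secrets Manager read access (GetSecretValue / DescribeSecret) for the Lambda role.
// Note: a secret imported with fromSecretCompleteArn does not know its KMS key,
// so the KMS permissions have to be granted separately.
kafkaAccessSecret.grantRead(listener);
// kms:Decrypt and kms:GenerateDataKey* for the Lambda role, mirroring the policy above.
key.grantDecrypt(listener);
key.grant(listener, 'kms:GenerateDataKey*');
// Adds lambda.amazonaws.com to the key policy (the piece that was missing).
key.grantDecrypt(new aws_iam.ServicePrincipal('lambda.amazonaws.com'));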

Related

Backup vault creation failed because of insufficient privileges

I am trying to create a backup plan, rule and vault using AWS CDK. After deploying the application I receive the following error in cloudformation console.
Resource handler returned message: "Insufficient privileges to perform this action. (Service: Backup, Status Code: 403, Request ID: xxxxxxx)" (RequestToken: xxxxxxxxx, HandlerErrorCode: GeneralServiceException)
My CDK bootstrap role definitely has access to Backup. See the policy document below.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "cdk",
      "Effect": "Allow",
      "Action": [
        "lambda:*",
        "logs:*",
        "serverlessrepo:*",
        "servicediscovery:*",
        "ssm:*",
        "cloudformation:*",
        "kms:*",
        "iam:*",
        "sns:*",
        "dynamodb:*",
        "codepipeline:*",
        "cloudwatch:*",
        "events:*",
        "acm:*",
        "sqs:*",
        "backup:*"
      ],
      "Resource": "*"
    }
  ]
}
Following are my CDK code snippets:
backup-rule
ruleName: 'myTestRuleName',
completionWindow: Duration.hours(1),
startWindow: Duration.hours(1),
scheduleExpression: events.Schedule.cron({
  day: '*',
  hour: '2'
}),
deleteAfter: Duration.days(90),
backup-vault
I tried without encryptionKey and also with a key that I created through the AWS Backup web interface. Neither worked.
new backup.BackupVault(this, `${id}-instance`, {
  backupVaultName: props.backupVaultName,
  // encryptionKey: this.key
})
backup-plan
new BackupPlan(scope, `${id}-instance`, {
  backupPlanName: context.backupPlanName,
  backupPlanRules: context.backupPlanRules,
  // backupVault: context.backupVault
});
backup selection
I also tried without creating the role and letting AWS CDK create and use the default role.
NOTE: I have also tried creating the plan, rule, and vault without resource selection to make sure that the problem does not occur on the resource-selection side.
const role = new iam.Role(this, 'Backup Access Role', { assumedBy: new iam.ServicePrincipal('backup.amazonaws.com') });
const managedPolicy = iam.ManagedPolicy.fromManagedPolicyArn(this, 'AWSBackupServiceRolePolicyForBackup', 'arn:aws:iam::aws:policy/service-role/AWSBackupServiceRolePolicyForBackup');
role.addManagedPolicy(managedPolicy);
plan.backupPlan.addSelection('selection', {
  resources: [
    BackupResource.fromDynamoDbTable(MyTable)
  ],
  // role: role
})
I faced this problem too, and adding permissions for backup-storage solved it for me! I referenced the AWSBackupFullAccess managed policy for the required permissions.
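For completeness, here is roughly what that extra permission could look like if the deployment role itself were defined in CDK; this is a sketch under that assumption, and backup-storage:MountCapsule is the action usually cited as missing next to backup:* for vault creation (check the AWSBackupFullAccess policy for the authoritative list):
// Hypothetical deployRole standing in for whichever role CloudFormation deploys with.
deployRole.addToPolicy(new aws_iam.PolicyStatement({
  effect: aws_iam.Effect.ALLOW,
  actions: [
    'backup:*',
    'backup-storage:MountCapsule', // the permission missing from the backup:*-only policy above
  ],
  resources: ['*'],
}));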

How to add IAM permission to cloudfront in order to associate lambda#edge?

I am trying to update my CloudFront distribution using CDK. While updating, it mentions this error message.
Lambda#Edge cannot retrieve the specified Lambda function. Update the IAM policy to add permission: lambda:GetFunction for resource: arn:aws:lambda:us-east-1:xxxxxxxx:function:edge-lambda-stack-xxxxxxx-xxxxxxxx-xxxxxxx:1
After inspecting, I found this AWS docs link: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/lambda-edge-permissions.html
However, I am unable to understand where to add these permissions. Can somebody guide me on where to add the lambda:GetFunction permission?
CDK Code
const uriRedirector = new cloudfront.experimental.EdgeFunction(
  this,
  'UriRedirector',
  {
    code: lambda.Code.fromAsset('dist/events/object-cache/uri-redirector'),
    runtime: lambda.Runtime.NODEJS_14_X,
    handler: 'index.handle',
  }
)
this.distribution = new cloudfront.Distribution(this, 'Distribution2', {
  defaultBehavior: {
    origin: s3Origin,
    edgeLambdas: [
      {
        functionVersion: uriRedirector.currentVersion,
        eventType: cloudfront.LambdaEdgeEventType.ORIGIN_REQUEST,
      },
    ],
    originRequestPolicy: defaultBehaviourOriginRequestPolicy,
    viewerProtocolPolicy: cloudfront.ViewerProtocolPolicy.HTTPS_ONLY,
    allowedMethods: cloudfront.AllowedMethods.ALLOW_ALL,
  },
....
const cfnDistribution = this.distribution.node
.defaultChild as cloudfront.CfnDistribution
cfnDistribution.overrideLogicalId(props.oldDistributionLogicalId)
You need to create an IAM policy in IAM and attach the policy to your user or role.
By default, AWS Lambda automatically creates a role, and you can attach the policy to that role.
Policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Lambda",
      "Effect": "Allow",
      "Action": [
        "lambda:AddPermission",
        "lambda:CreateFunction",
        "lambda:DeleteFunction",
        "lambda:GetFunction",
        "lambda:GetFunctionConfiguration",
        "lambda:ListTags",
        "lambda:RemovePermission",
        "lambda:TagResource",
        "lambda:UntagResource",
        "lambda:UpdateFunctionCode",
        "lambda:UpdateFunctionConfiguration",
        "lambda:GetLayerVersion"
      ],
      "Resource": [
        "arn:aws:lambda:us-east-1:xxxxxxxx:function:edge-lambda-stack-xxxxxxx-xxxxxxxx-xxxxxxx:*"
      ]
    }
  ]
}
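If the deploying identity happens to be a role you control in CDK (an assumption; many people attach the policy by hand in the IAM console instead), the same grant can be expressed as the sketch below, where deployerRole is hypothetical, uriRedirector is the EdgeFunction from the question, and the aws-cdk IAM module is assumed to be imported as iam:
// lambda:GetFunction (and, per the linked Lambda@Edge docs, lambda:EnableReplication*)
// on the version-qualified function ARN.
deployerRole.addToPolicy(new iam.PolicyStatement({
  effect: iam.Effect.ALLOW,
  actions: ['lambda:GetFunction', 'lambda:EnableReplication*'],
  resources: [`${uriRedirector.functionArn}:*`],
}));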

My CodePipeline is not getting triggered on s3 event trigger

My CodePipeline is not getting triggered when I upload code.zip to s3-bucket or copy code.zip using the AWS CLI (aws s3 cp).
CloudFormation event rule snippet:
Type: AWS::Events::Rule
Properties:
  EventPattern:
    source:
      - 'aws.s3'
    detail:
      eventSource:
        - 's3.amazonaws.com'
      eventName:
        - 'CopyObject'
        - 'PutObject'
        - 'CompleteMultipartUpload'
      requestParameters:
        bucketName:
          - 's3-bucket'
        key:
          - 'code.zip'
  State: 'ENABLED'
  Targets:
    - Arn: '<CodePipeline ARN>'
      Id: 'Target-1'
      RoleArn: '<trigger role ARN>'
Trigger role policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "codepipeline:StartPipelineExecution"
      ],
      "Resource": "*"
    }
  ]
}
Event Pattern:
{
  "source": [
    "aws.s3"
  ],
  "detail": {
    "eventSource": [
      "s3.amazonaws.com"
    ],
    "requestParameters": {
      "bucketName": [
        "s3-bucket"
      ],
      "key": [
        "code.zip"
      ]
    },
    "eventName": [
      "CopyObject",
      "PutObject",
      "CompleteMultipartUpload"
    ]
  }
}
What is missing here? Or does anyone have pointers on how this can be debugged further?
There are two ways to further debug this.
First, you want to ensure that you have a working event pattern. The easiest way to do this would be to get a sample CodePipeline event and then make a test call via https://docs.aws.amazon.com/eventbridge/latest/APIReference/API_TestEventPattern.html.
Next, if you have a working rule, you can check the metrics https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-monitoring-cloudwatch-metrics.html and set up a DLQ https://docs.aws.amazon.com/eventbridge/latest/userguide/rule-dlq.html. Both provide visibility into whether the rule had a match and whether it was successfully delivered.
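The TestEventPattern API linked above can also be exercised from the JS SDK. A sketch, assuming sampleEvent holds a captured CloudTrail-via-S3 event for the bucket (a hypothetical variable):
import { EventBridgeClient, TestEventPatternCommand } from '@aws-sdk/client-eventbridge';

async function patternMatches(sampleEvent: object): Promise<boolean> {
  const client = new EventBridgeClient({});
  // Returns true if the sample event would match the rule's pattern.
  const { Result } = await client.send(new TestEventPatternCommand({
    EventPattern: JSON.stringify({
      source: ['aws.s3'],
      detail: {
        eventSource: ['s3.amazonaws.com'],
        eventName: ['CopyObject', 'PutObject', 'CompleteMultipartUpload'],
        requestParameters: { bucketName: ['s3-bucket'], key: ['code.zip'] },
      },
    }),
    Event: JSON.stringify(sampleEvent),
  }));
  return Result ?? false;
}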
The problem was that the events were not being emitted from S3 at all; once I enabled them via CloudTrail, it worked.
Here is the related answer I used to resolve it:
S3 object level events are not getting triggered
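In CDK terms, enabling those object-level (data) events comes down to a trail with an S3 event selector for the source bucket. A sketch, not the poster's code; the construct IDs and the write-only filter are assumptions:
import { aws_cloudtrail as cloudtrail, aws_s3 as s3 } from 'aws-cdk-lib';

// Record write data events (PutObject, CopyObject, CompleteMultipartUpload)
// for the pipeline's source object so they reach EventBridge.
const sourceBucket = s3.Bucket.fromBucketName(this, 'SourceBucket', 's3-bucket');
const trail = new cloudtrail.Trail(this, 'PipelineSourceTrail');
trail.addS3EventSelector([{ bucket: sourceBucket, objectPrefix: 'code.zip' }], {
  readWriteType: cloudtrail.ReadWriteType.WRITE_ONLY,
});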

Unable to configure SQS queue notification in S3

I created an SQS queue and added a policy under the permissions tab allowing only my account's users to configure the notification.
Policy Document
{
  "Version": "2012-10-17",
  "Id": "arn:aws:sqs:us-east-1:111111111111:sqsqueue/SQSDefaultPolicy",
  "Statement": [
    {
      "Sid": "Sid111111111111",
      "Effect": "Allow",
      "Principal": {
        "AWS": "111111111111"
      },
      "Action": [
        "sqs:SendMessage",
        "sqs:ReceiveMessage"
      ],
      "Resource": "arn:aws:sqs:us-east-1:111111111111:queue"
    }
  ]
}
When I navigate to S3 and try to configure an event notification for the above queue, it throws this error:
Unable to validate the following destination configurations. Permissions on the destination queue do not allow S3 to publish
notifications from this bucket.
(arn:aws:sqs:us-east-1:111111111111:queue)*
Am I doing something wrong? Can someone help me, please?
I was able to resolve this issue by adding "Service": "s3.amazonaws.com"
in the Principal element.
Here is the policy document:
{
  "Version": "2012-10-17",
  "Id": "arn:aws:sqs:us-east-1:111111111111:sqsqueue/SQSDefaultPolicy",
  "Statement": [
    {
      "Sid": "Sid111111111111",
      "Effect": "Allow",
      "Principal": {
        "Service": "s3.amazonaws.com"
      },
      "Action": [
        "sqs:SendMessage",
        "sqs:ReceiveMessage"
      ],
      "Resource": "arn:aws:sqs:us-east-1:111111111111:queue"
    }
  ]
}
This is explained in https://forums.aws.amazon.com/thread.jspa?threadID=173251
This template file creates a bucket, SQS Queue and a policy to connect the two:
AWSTemplateFormatVersion: 2010-09-09
Parameters:
  IncomingBucketName:
    Type: 'String'
    Description: 'Incoming Bucket Name'
    Default: 'some-bucket-name-here'
Resources:
  IncomingFileQueue:
    Type: 'AWS::SQS::Queue'
    Properties: {}
  SQSQueuePolicy:
    Type: 'AWS::SQS::QueuePolicy'
    Properties:
      PolicyDocument:
        Id: 'MyQueuePolicy'
        Version: '2012-10-17'
        Statement:
          - Sid: 'Statement-id'
            Effect: 'Allow'
            Principal:
              AWS: "*"
            Action: 'sqs:SendMessage'
            Resource:
              Fn::GetAtt: [ IncomingFileQueue, Arn ]
      Queues:
        - Ref: IncomingFileQueue
  IncomingFileBucket:
    Type: 'AWS::S3::Bucket'
    DependsOn:
      - SQSQueuePolicy
      - IncomingFileQueue
    Properties:
      AccessControl: BucketOwnerFullControl
      BucketName:
        Ref: IncomingBucketName
      NotificationConfiguration:
        QueueConfigurations:
          - Event: 's3:ObjectCreated:Put'
            Queue:
              Fn::GetAtt: [ IncomingFileQueue, Arn ]
I was getting the same issue but used this page to work out how to connect the three resources in order to successfully deploy the stack:
https://aws.amazon.com/premiumsupport/knowledge-center/unable-validate-destination-s3/
I'm still working on the Policy Condition as the form recommended in the above link doesn't work for SQS. That being the case, the above template is not secure and shouldn't be used in production as it allows anyone to add messages to the queue.
I'll update this answer once I've figured that bit out...
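One commonly documented way to tighten it (from the same knowledge-center article) is to keep the s3.amazonaws.com principal but add an aws:SourceArn condition pointing at the bucket. Whether that addresses the exact issue the answer above ran into is untested here; a sketch in CDK with illustrative names:
import { aws_iam as iam, aws_sqs as sqs } from 'aws-cdk-lib';

// Only S3, and only on behalf of this one bucket, may send to the queue.
const queue = new sqs.Queue(this, 'IncomingFileQueue');
queue.addToResourcePolicy(new iam.PolicyStatement({
  effect: iam.Effect.ALLOW,
  principals: [new iam.ServicePrincipal('s3.amazonaws.com')],
  actions: ['sqs:SendMessage'],
  resources: [queue.queueArn],
  conditions: {
    ArnLike: { 'aws:SourceArn': 'arn:aws:s3:::some-bucket-name-here' },
  },
}));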

SNS topic not publishing to SQS

I am trying to prototype a distributed application using SNS and SQS. I have this topic:
arn:aws:sns:us-east-1:574008783416:us-east-1-live-auction
and this queue:
arn:aws:sqs:us-east-1:574008783416:queue4
I created the queue using the JS Scratchpad. I added the subscription using the Console. I called AddPermission on the queue using the Scratchpad. The queue policy is now:
{
  "Version": "2008-10-17",
  "Id": "arn:aws:sqs:us-east-1:574008783416:queue4/SQSDefaultPolicy",
  "Statement": [
    {
      "Sid": "RootPerms",
      "Effect": "Allow",
      "Principal": {
        "AWS": "574008783416"
      },
      "Action": "SQS:*",
      "Resource": "arn:aws:sqs:us-east-1:574008783416:queue4"
    }
  ]
}
I have an email subscription on the same topic and the emails arrive fine but the messages never arrive on the queue. I've tried SendMessage directly to the queue - rather than via SNS - using Scratchpad and it works fine. Any ideas why it won't send to the queue?
This was posted a while back on the AWS forums: https://forums.aws.amazon.com/thread.jspa?messageID=202798
Then I gave the SNS topic the permission to send messages to the SQS queue. The trick here is to allow all principals. SNS doesn't send from your account ID -- it has its own account ID that it sends from.
Adding to Skyler's answer, if like me you cringe at the idea of allowing any principal (Principal: '*'), you can restrict the principal to SNS:
Principal:
  Service: sns.amazonaws.com
Although this behavior is undocumented, it works.
Most of the answers (besides @spg's answer) propose using Principal: "*" - this is a very dangerous practice that exposes your SQS queue to the whole world.
From the AWS docs:
For resource-based policies, such as Amazon S3 bucket policies, a wildcard (*) in the principal element specifies all users or public access.
We strongly recommend that you do not use a wildcard in the Principal element in a role's trust policy unless you otherwise restrict access through a Condition element in the policy. Otherwise, any IAM user in any account in your partition can access the role.
Therefore, using this principal is strongly discouraged.
Instead, you need to specify the SNS service as your principal:
"Principal": {
"Service": "sns.amazonaws.com"
},
Example policy:
{
  "Version": "2012-10-17",
  "Id": "Policy1596186813341",
  "Statement": [
    {
      "Sid": "Stmt1596186812579",
      "Effect": "Allow",
      "Principal": {
        "Service": "sns.amazonaws.com"
      },
      "Action": [
        "sqs:SendMessage",
        "sqs:SendMessageBatch"
      ],
      "Resource": "Your-SQS-Arn"
    }
  ]
}
With this policy, SNS will be able to send messages to your SQS queues.
There are more SQS permissions, but from what I can see, SendMessage and SendMessageBatch should be enough for an SNS->SQS subscription.
Here's a full CloudFormation example of Skyler's answer
{
  "Resources": {
    "MyTopic": {
      "Type": "AWS::SNS::Topic"
    },
    "MyQueue": {
      "Type": "AWS::SQS::Queue"
    },
    "Subscription": {
      "Type": "AWS::SNS::Subscription",
      "Properties": {
        "Protocol": "sqs",
        "TopicArn": {"Ref": "MyTopic"},
        "Endpoint": {"Fn::GetAtt": ["MyQueue", "Arn"]}
      }
    },
    "QueuePolicy": {
      "Type": "AWS::SQS::QueuePolicy",
      "Properties": {
        "Queues": [
          {"Ref": "MyQueue"}
        ],
        "PolicyDocument": {
          "Version": "2012-10-17",
          "Statement": [
            {
              "Sid": "allow-sns-messages",
              "Effect": "Allow",
              "Principal": {"Service": "sns.amazonaws.com"},
              "Action": "sqs:SendMessage",
              "Resource": {"Fn::GetAtt": ["MyQueue", "Arn"]},
              "Condition": {
                "ArnEquals": {
                  "aws:SourceArn": {"Ref": "MyTopic"}
                }
              }
            }
          ]
        }
      }
    }
  }
}
Amazon has more options in their Sending Amazon SNS Messages to Amazon SQS Queues document.
I just experienced this, and it took me a while to figure out why:
If I create an SQS subscription from the SNS console, it does not add the necessary permissions to the SQS access policy.
If I create the subscription to the same SNS topic from the SQS console, it does.
Old question, but if you are using an AWS SDK version > 1.10, check out the docs: SQS-SNS sendMessage Permission
private static void updateQueuePolicy(AmazonSQS sqs, String queueURL, String topicARN) {
    Map<String, String> attributes = new HashMap<String, String>(1);
    Action actions = new Action() {
        @Override
        public String getActionName() {
            return "sqs:SendMessage"; // Action name
        }
    };
    Statement mainQueueStatements = new Statement(Statement.Effect.Allow)
            .withActions(actions)
            .withPrincipals(new Principal("Service", "sns.amazonaws.com"))
            .withConditions(
                    new Condition()
                            .withType("ArnEquals")
                            .withConditionKey("aws:SourceArn")
                            .withValues(topicARN)
            );
    final Policy mainQueuePolicy = new Policy()
            .withId("MainQueuePolicy")
            .withStatements(mainQueueStatements);
    attributes.put("Policy", mainQueuePolicy.toJson());
    updateQueueAttributes(sqs, queueURL, attributes);
}
Which outputs a policy similar to
{
  Policy={
    "Version": "2012-10-17",
    "Id": "MainQueuePolicy",
    "Statement": [
      {
        "Sid": "1",
        "Effect": "Allow",
        "Principal": {
          "Service": "sns.amazonaws.com"
        },
        "Action": ["sqs:SendMessage"],
        "Condition": {
          "ArnEquals": {
            "aws:SourceArn": ["arn:aws:sns:us-east-1:3232:testSubscription"]
          }
        }
      }
    ]
  }
}
Like the other answers mentioned, you must opt in and grant permission to this SNS topic to publish to your SQS queue.
If you use Terraform, you can use the aws_sqs_queue_policy resource.
Here is an example:
resource "aws_sqs_queue_policy" "your_queue_policy" {
queue_url = "${aws_sqs_queue.your_queue.id}"
policy = <<POLICY
{
"Version": "2012-10-17",
"Id": "sqspolicy",
"Statement": [
{
"Sid": "First",
"Effect": "Allow",
"Principal": "*",
"Action": "sqs:SendMessage",
"Resource": "${aws_sqs_queue.your_queue.arn}",
"Condition": {
"ArnEquals": {
"aws:SourceArn": "${aws_sns_topic.your_topic.arn}"
}
}
}
]
}
POLICY
}
If you have enabled encryption on your queue, that can also be a reason why SNS cannot put messages on the subscriber queue. You need to give SNS access to that KMS key.
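A sketch of that grant in CDK, assuming the queue is encrypted with a customer-managed aws_kms.Key defined in the same stack (the key reference and construct IDs are assumptions):
import { aws_iam as iam, aws_kms as kms } from 'aws-cdk-lib';

// SNS needs to generate data keys and decrypt with the queue's CMK
// in order to deliver messages to an SSE-enabled queue.
const queueKey = new kms.Key(this, 'QueueKey');
queueKey.grant(new iam.ServicePrincipal('sns.amazonaws.com'),
  'kms:GenerateDataKey*',
  'kms:Decrypt',
);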
This article explains how to solve this problem: you only need to subscribe the queue to the topic from the queue console.
Step one: select the queue.
Step two: open Queue Actions.
Step three: choose Subscribe Queue to SNS Topic.
Step four: choose the topic.
End.
It's done with AWS::SQS::QueuePolicy.
You need to define this kind of policy to allow a specific SNS topic to perform actions on a specific SQS queue.
QueuePolicy:
  Type: AWS::SQS::QueuePolicy
  Properties:
    Queues:
      - Ref: MyQueue1
      - Fn::Sub: arn:aws:sqs:us-east-1:${AWS::AccountId}:my-queue-*
    PolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Sid: allow-sns-messages
          Effect: Allow
          Principal:
            Service: sns.amazonaws.com
          Action: sqs:SendMessage
          Resource:
            - Fn::Sub: arn:aws:sqs:us-east-1:${AWS::AccountId}:my-queue-*
          Condition:
            ArnEquals:
              aws:SourceArn:
                - Fn::Sub: arn:aws:sns:us-east-1:${AWS::AccountId}:source-sns-*
With Lambda, this kind of policy doesn't apply. If anyone knows why, please share.