I have many CloudWatch log groups (more than 50). I created a Lambda function that monitors CloudWatch logs for issues I defined and tested it on a single group. The Lambda itself works, and I'd like to set all my log groups as triggers for the function.
After setting about 40, I got
An error occurred when creating the trigger: The final policy size
(20839) is bigger than the limit (20480). (Service: AWSLambda; Status
Code: 400; Error Code: PolicyLengthExceededException;
Alright, after reading about it here ("The final policy size (20539) is bigger than the limit (20480)") and here: https://docs.aws.amazon.com/lambda/latest/dg/access-control-resource-based.html , I understood I should manually change the permission policy. I deleted the existing permissions for about 4 triggers, then added a single permission:
aws lambda add-permission --function-name=FUNCNAME --region=us-west-2 --action lambda:InvokeFunction --statement-id genericpermission --principal logs.us-west-2.amazonaws.com --source-account ACCOUNT --source-arn arn:aws:logs:us-west-2:ACCOUNT:log-group:*
At this point, the list of triggers on my Lambda was emptied (although the subscriptions still exist).
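One way to confirm the generic permission actually landed on the function, even though the console's trigger list is now empty, is to dump the resource-based policy directly (FUNCNAME is the same placeholder as above):
# prints the function's resource-based policy as a JSON string
aws lambda get-policy --function-name FUNCNAME --region us-west-2 --query Policy --output text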
Question:
Is there a "right" way to have a lot of triggers, while still being able to review the triggers in the UI?
Related
I have set up a Lambda function to be triggered by many CloudWatch log groups. In order to do that, for each log group I add an invoke permission with aws lambda add-permission and a subscription with the Lambda as the destination using aws logs put-subscription-filter. There are hundreds of log groups I need to stream to one Lambda, which makes the Lambda trigger policy very big.
There are two commands in this flow, aws lambda add-permission and aws logs put-subscription-filter, and I need to run both per log group. I added 46 CloudWatch log groups as triggers for the Lambda, but when adding the 47th I got an error.
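For reference, the put-subscription-filter half of the flow looks roughly like this (the log group variable and filter name are arbitrary placeholders; the add-permission half and its error are shown below):
# $logGroupName and the filter name are placeholders
aws logs put-subscription-filter --log-group-name $logGroupName \
    --filter-name lambda-aggregator --filter-pattern "" \
    --destination-arn arn:aws:lambda:ap-southeast-2:$ACCOUNT_ID:function:$AGGREGATOR_NAME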
The error came from this command:
aws lambda add-permission --function-name $AGGREGATOR_NAME \
--statement-id add-permission-$lambdaName --action lambda:InvokeFunction \
--principal logs.ap-southeast-2.amazonaws.com \
--source-arn $logArn
An error occurred (PolicyLengthExceededException) when calling the AddPermission operation: The final policy size (20623) is bigger than the limit (20480).
arn:aws:logs:ap-southeast-2:***
Is there a way to get around that?
Is this a right way to stream hundreds of log groups to one lambda?
I have tried to use a wildcard in the command but got a validation error.
aws lambda add-permission --function-name $AGGREGATOR_NAME --statement-id $ID --action lambda:InvokeFunction --principal logs.ap-southeast-2.amazonaws.com --source-arn "arn:aws:logs:*:*:log-group:/aws/lambda/hello*:*"
An error occurred (ValidationException) when calling the AddPermission operation: 1 validation error detected: Value 'arn:aws:logs:*:*:log-group:/aws/lambda/hello*:*' at 'sourceArn' failed to satisfy constraint: Member must satisfy regular expression pattern: arn:(aws[a-zA-Z0-9-]*):([a-zA-Z0-9\-])+:([a-z]{2}((-gov)|(-iso(b?)))?-[a-z]+-\d{1})?:(\d{12})?:(.*)
A way to get around this is to use a wildcard in --source-arn. By doing this, you don't need one resource-based policy statement on the Lambda for each CloudWatch log group. The simplest way is to allow all log groups to call lambda:InvokeFunction; in that case, you just need:
aws lambda add-permission --function-name $AGGREGATOR_NAME \
--statement-id add-permission-$lambdaName --action lambda:InvokeFunction \
--principal logs.ap-southeast-2.amazonaws.com
Notice that --source-arn has been removed (you can still pass --source-account to limit the permission to log groups in your own account).
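If allowing every log group is too broad, a wildcard that stays inside the resource portion of the ARN should still pass the validation pattern quoted above, since only the region and account fields are constrained; the account ID below is a placeholder:
# 123456789012 is a placeholder account ID
aws lambda add-permission --function-name $AGGREGATOR_NAME \
    --statement-id add-permission-wildcard --action lambda:InvokeFunction \
    --principal logs.ap-southeast-2.amazonaws.com \
    --source-arn "arn:aws:logs:ap-southeast-2:123456789012:log-group:/aws/lambda/hello*:*"
Either way, each log group still needs its own aws logs put-subscription-filter call; only the add-permission side collapses to a single statement.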
I have a rule in IoT Core that sends messages to an IoT Analytics channel, and that data is then passed to an IoT Analytics pipeline. In the pipeline, however, I want to use a pipeline activity to transform the message, specifically the:
Transform message with Lambda function activity.
My Lambda function returns a value that it retrieves from DynamoDB. I have tested the Lambda in the AWS Lambda console and it executes and works as it should; however, once I click "Update preview", which should show me the transformed message, I get the following error:
We could not run the pipeline activity. ERROR : Unable to execute Lambda function due to insufficient permissions; dropping the messages, number of messages dropped : 1, functionArn : arn:aws:lambda:eu-west-1:x:function:y
The IAM role associated with the Lambda function y has the following policies attached:
AmazonDynamoDBFullAccess
AWSIoTAnalyticsFullAccess
AWSIoTFullAccess
Is there a policy perhaps that I do not have in my IAM role for the Lambda that is preventing it from doing what I need it to?
It seems like you didn't grant permission on your Lambda function; make sure you have granted IoT Analytics permission to invoke your Lambda function.
Example AWS CLI commands:
1)
aws lambda add-permission --function-name filter_to_cloudwatch --statement-id filter_to_cloudwatch_perms --principal iotanalytics.amazonaws.com --action lambda:InvokeFunction
2)
aws lambda add-permission --function-name LambdaForWeatherCorp --region us-east-1 --principal iot.amazonaws.com --source-arn arn:aws:iot:us-east-1:123456789012:rule/WeatherCorpRule --source-account 123456789012 --statement-id "unique_id" --action "lambda:InvokeFunction"
In AWS Lambda, I sometimes see errors in the CloudWatch metrics, but when I check the logs, I don't see any error. Doesn't AWS Lambda log errors automatically?
Make sure your Lambda function's execution role has the permissions logs:CreateLogGroup, logs:CreateLogStream and logs:PutLogEvents. A good way to do this is to attach the managed policy AWSLambdaBasicExecutionRole to the execution role.
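If you prefer the CLI, one way to do that is to attach the managed policy to the execution role (the role name below is a placeholder):
# role name is a placeholder; the policy ARN is the AWS-managed AWSLambdaBasicExecutionRole
aws iam attach-role-policy --role-name my-lambda-execution-role \
    --policy-arn arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole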
We are trying to add a permission to an SNS topic in account 'A'. A Lambda function in account 'B' will publish to it. To do this, we used the CLI as below:
aws sns add-permission --topic-arn arn:aws:sns:us-east-1:<account_A>:djif-prod-policy-engine-config-sns --label lambda-<account_B>-us-east-2 --aws-account-id <account_B> --action-name Publish --region us-east-1
This returns the following error:
An error occurred (InvalidParameter) when calling the AddPermission operation: Invalid parameter: Policy contains too many statements!
Can someone help us figure out a way to resolve this? We created a Lambda function in a different account (account C) and this command worked fine with no errors.
We figured this out. Whenever we run aws sns add-permission, it adds a statement to the SNS topic policy. We had a bug in our code that called this multiple times for the same account (we are trying to invoke this SNS topic from multiple accounts). The AWS limit on the number of statements in a topic policy is 100, and when we hit this limit, we get the error.
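If you run into the same thing, you can inspect how many statements have piled up on the topic and remove duplicates by their label, roughly like this (the account placeholders and label mirror the command above):
# dump the current topic policy to count/inspect its statements
aws sns get-topic-attributes --topic-arn arn:aws:sns:us-east-1:ACCOUNT_A:djif-prod-policy-engine-config-sns \
    --query Attributes.Policy --output text
# remove a duplicate statement by the label it was added with
aws sns remove-permission --topic-arn arn:aws:sns:us-east-1:ACCOUNT_A:djif-prod-policy-engine-config-sns \
    --label lambda-ACCOUNT_B-us-east-2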
Unable to create a recurring schedule for a Lambda function.
What I did:
1) Created the function and successfully tested it.
2) Went to the event sources section in the AWS Management Console.
3) Clicked on "Add event source".
4) With the default setting of rate(5 minutes), clicked Submit.
Got the error:
There was an error creating the event source mapping: Could not create
scheduled-event event source
I went through the docs and ran this statement via AWS CLI:
aws lambda add-permission --statement-id Allow-scheduled-events --action lambda:InvokeFunction --principal events.amazonaws.com --function-name function:myfunction
The above statement went through successfully; when I tried running it again, it said the permission already exists, confirming that it ran.
I tried adding the schedule again but got the same error.
Am I supposed to change the role or something? I can't find anything else in the docs. The Lambda is running with the basic Lambda execution role.
UPDATE
I temporarily gave the role under which the Lambda executes admin access; still the same error.
Workaround
FYI, for people facing this problem: I could achieve the same result by going to CloudWatch Events and adding a rule targeting the Lambda from there. That still does not answer this question, though; I can't imagine the AWS console has such a gaping bug that they aren't doing anything about.
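For what it's worth, a rough CLI equivalent of that CloudWatch-side workaround (rule name, region and account are placeholders) would be:
# create the schedule rule
aws events put-rule --name my-5min-schedule --schedule-expression "rate(5 minutes)"
# point the rule at the Lambda (the add-permission for events.amazonaws.com above is still needed)
aws events put-targets --rule my-5min-schedule \
    --targets "Id"="1","Arn"="arn:aws:lambda:us-west-2:123456789012:function:myfunction"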
In my case, my Lambda and S3 bucket were not in the same region. I found this out by adding the event notification on the S3 side instead of adding the trigger on the Lambda.
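A quick way to check for that mismatch from the CLI (bucket and function names are placeholders) is to compare the bucket's region with the one in the function's ARN:
# region (LocationConstraint) of the bucket; an empty/null value means us-east-1
aws s3api get-bucket-location --bucket my-bucket
# the function ARN includes the Lambda's region
aws lambda get-function-configuration --function-name myfunction --query FunctionArn --output text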