Ansible: Add CloudWatch Logs event trigger to Lambda function

I am trying to add a CloudWatch Logs trigger to a Lambda function written in Python 3.6 via Ansible. I can deploy the Lambda function itself via Ansible, but I run into issues when trying to deploy a trigger with a log group configured.
Below is my Ansible code for the trigger and the Lambda policy.
Lambda trigger:
- name: Cloud Watch Log event mapping
  lambda_event:
    state: present
    event_source: stream
    lambda_function_arn: arn:aws:lambda:us-east-2:<account_id>:function:CWloggerLambda
    alias: CWTEST
    region: us-east-2
    source_params:
      source_arn: arn:aws:logs:us-east-2:<account_id>:log-group:<log_group_name>
      enabled: True
Lambda Policy:
- name: Allowing CloudWatch Event(s) to trigger Lambda function(s)
  lambda_policy:
    lambda_function_arn: arn:aws:lambda:us-east-2:<account_id>:function:CWloggerLambda
    statement_id: "CWloggerLambda_lambda-cloudwatch-trigger"
    action: "lambda:InvokeFunction"
    principal: "events.amazonaws.com"
    source_arn: arn:aws:logs:us-east-2:<account_id>:log-group:<log_group_name>
    region: us-east-2
    state: present
The policy is added, but the trigger fails with an error on the ARN, since only Kinesis, DynamoDB, and SQS event sources are allowed. Is there any way to set up a CloudWatch Logs trigger via Ansible?
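The `lambda_event` module only supports event source mappings (Kinesis/DynamoDB stream- and SQS-style sources), which is why the log group ARN is rejected: CloudWatch Logs invokes Lambda through a subscription filter, not an event source mapping. One possible workaround is sketched below, assuming the AWS CLI is available on the host running Ansible; the filter name and empty filter pattern are placeholders. Note that the resource policy for CloudWatch Logs uses the regional `logs` service principal, not `events.amazonaws.com`:

```yaml
# Sketch only: grant CloudWatch Logs permission to invoke the function,
# then create the subscription filter with the AWS CLI, since lambda_event
# has no CloudWatch Logs source type.
- name: Allow CloudWatch Logs to invoke the function
  lambda_policy:
    state: present
    lambda_function_arn: arn:aws:lambda:us-east-2:<account_id>:function:CWloggerLambda
    statement_id: "CWloggerLambda_cwlogs-trigger"
    action: "lambda:InvokeFunction"
    principal: "logs.us-east-2.amazonaws.com"   # logs service principal, not events
    source_arn: arn:aws:logs:us-east-2:<account_id>:log-group:<log_group_name>:*
    region: us-east-2

- name: Create the CloudWatch Logs subscription filter
  command: >
    aws logs put-subscription-filter
    --region us-east-2
    --log-group-name <log_group_name>
    --filter-name lambda-trigger
    --filter-pattern ""
    --destination-arn arn:aws:lambda:us-east-2:<account_id>:function:CWloggerLambda
```

The `command` task is not idempotent on its own; `put-subscription-filter` overwrites a filter with the same name, so re-running it is harmless but will always report "changed".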

AWS EventBridge rule doesn't trigger: Error. NotAuthorizedForSourceException. Not authorized for the source

I'm creating a rule that should fire every time there is a change in status in a SageMaker batch transform job.
I'm using Serverless Framework but to simplify it even further, here's what I did:
The rule, exported from AWS console:
AWSTemplateFormatVersion: '2010-09-09'
Description: >-
  CloudFormation template for EventBridge rule
  'sagemaker-transform-status-to-CWL'
Resources:
  EventRule0:
    Type: AWS::Events::Rule
    Properties:
      EventBusName: default
      EventPattern:
        source:
          - aws.sagemaker
        detail-type:
          - SageMaker Training Job State Change
      Name: sagemaker-transform-status-to-CWL
      State: ENABLED
      Targets:
        - Id: XXX
          Arn: >-
            arn:aws:logs:us-east-1:XXX:log-group:/aws/events/sagemaker-notifications
Eventually I want this to trigger a step function or a lambda function, but for now I am configuring the target to be CloudWatch with log group 'sagemaker-notifications'
I expect that every time I run a batch transform job in SageMaker, the rule will fire and the log will show up in CloudWatch.
But I'm not getting any logs, so when I tried to PutEvents manually to test it, I was getting this:
Error. NotAuthorizedForSourceException. Not authorized for the source.
It's probably an issue with roles, but I'm not sure which kind of role to configure, where, and who should assume it.
I've tried going through AWS tutorials, adding permissions to the default event bus, and using the Serverless Framework, without luck.
See some sample event patterns here - https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-events-rule.html#aws-resource-events-rule--examples
When you call PutEvents yourself, the source must be a custom source and cannot start with aws., since that prefix is reserved for events emitted by AWS services. (Reference: https://docs.aws.amazon.com/eventbridge/latest/userguide/eb-events.html)
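One more thing may be worth noting: the quoted template matches training jobs, while the question is about batch transform jobs, which have their own detail type. A sketch of the adjusted pattern, assuming the standard SageMaker detail types:

```yaml
EventPattern:
  source:
    - aws.sagemaker
  detail-type:
    - SageMaker Transform Job State Change   # batch transform, not training
```

Manual testing with PutEvents, however, still requires a custom source that does not start with aws. (for example, a hypothetical my-app.test), because only AWS services themselves can emit events with aws.* sources. Running an actual transform job is the way to exercise the aws.sagemaker pattern end to end.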

How to programmatically set up EventsBridge events for Lambdas

I have set up 2 lambda functions, deployed with AWS SAM. The first one uses the JS AWS SDK to run putRule and putTarget to trigger the second lambda with a cron job. When I run the first lambda, I see both the rule and target correctly set up in EventsBridge.
I also create the following permission for the second Lambda in my AWS SAM template
InvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref MyLambda
    Action: lambda:InvokeFunction
    Principal: 'events.amazonaws.com'
and can see the Policy in the console
The only result I see of this cron event (at the timestamp I chose for the rule) is a failed invocation of the second Lambda, and CloudWatch doesn't provide any useful information.
Any idea why this is failing, or how to retrieve the error? Might "events.amazonaws.com" be the wrong Principal for that?
I am looking into EventSourceMapping but I can't see my case anywhere in the docs
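events.amazonaws.com is the documented principal for EventBridge rules, so that part looks right (EventSourceMapping only applies to poll-based sources like SQS and streams, which is why this case isn't in those docs). One thing worth checking is whether the permission should be scoped to the rule with SourceArn; a minimal sketch, assuming the rule created by the first Lambda is named cron-rule (a hypothetical name that would need to match the name passed to putRule):

```yaml
InvokePermission:
  Type: AWS::Lambda::Permission
  Properties:
    FunctionName: !Ref MyLambda
    Action: lambda:InvokeFunction
    Principal: 'events.amazonaws.com'   # correct principal for EventBridge rules
    # Scope the permission to the rule; "cron-rule" is a hypothetical name
    SourceArn: !Sub arn:aws:events:${AWS::Region}:${AWS::AccountId}:rule/cron-rule
```

If the rule name is generated at runtime, a wildcard such as rule/* in the SourceArn is a common (if broader) alternative.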

How can I get codepipeline execution id in cdk at runtime?

I am using AWS CDK to deploy a CodePipeline. It also has a notification rule which notifies when the pipeline fails. I need to put the CodePipeline job URL in the notification message so people can open the pipeline easily.
In CloudFormation, I have to put the configuration below to compute the URL:
Targets:
  - Arn: !Ref SNSTopicNotification
    Id: piplineID
    InputTransformer:
      InputPathsMap:
        pipeline: "$.detail.pipeline"
        executionId: "$.detail.execution-id"
        region: "$.region"
      InputTemplate: !Sub |
        "Pipeline <pipeline> failed"
        "https://<region>.console.aws.amazon.com/codesuite/codepipeline/pipelines/<pipeline>/executions/<executionId>/timeline?region=<region>"
The key is using $.detail.xxx to reference the values at runtime. How can I achieve this in CDK?

How to make SNS notification for CodeCommit in CloudFormation

AWS::CodeCommit::Repository has only a Triggers section.
Type: AWS::CodeCommit::Repository
Properties:
  Code:
    Code
  RepositoryDescription: String
  RepositoryName: String
  Tags:
    - Tag
  Triggers:
    - RepositoryTrigger
How to add notifications to a repository? Where is option for notifications?
Notifications for CodeCommit are part of AWS CodeStar Notifications:
Introducing notifications for AWS CodeCommit, AWS CodeBuild, AWS CodeDeploy, and AWS CodePipeline.
What Are Notifications?
Therefore, you use them through AWS::CodeStarNotifications::NotificationRule.
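A minimal CloudFormation sketch, assuming a repository resource named MyRepository and an SNS topic MyNotificationTopic (both hypothetical), and one of the documented CodeCommit event type IDs:

```yaml
CommitNotificationRule:
  Type: AWS::CodeStarNotifications::NotificationRule
  Properties:
    Name: my-repo-notifications            # hypothetical name
    DetailType: FULL
    Resource: !GetAtt MyRepository.Arn     # assumes a repository resource "MyRepository"
    EventTypeIds:
      - codecommit-repository-pull-request-created
    Targets:
      - TargetType: SNS
        TargetAddress: !Ref MyNotificationTopic   # assumes an SNS topic resource
```

The topic also needs a policy allowing codestar-notifications.amazonaws.com to publish to it; the console adds that automatically, but in CloudFormation it has to be declared.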

How to configure AWS SQS Queue to Listen already created S3 bucket's events

I have an S3 bucket that was created via the AWS console. Now I want to deploy an AWS SQS queue with the Serverless Framework that listens for that bucket's object-created events.
Can someone explain how to achieve the task?
Here are relevant parts of my yml file:
......
iamRoleStatements:
  - Effect: Allow
    Action:
      - sqs:*
    Resource:
      - "*"
......
resources:
  Resources:
    PDFConverterQueue:
      Type: "AWS::SQS::Queue"
      Properties:
        QueueName: "PDFConverterQueue"
        #### How can I configure this Queue to listen to previously created bucket's events.
.....
You will probably have better success having the Lambda invoked directly by the S3 event. S3 invokes the function asynchronously and has exponential retry policies in place in case of failure.
functions:
  pdfConverter:
    handler: handler.pdfconverted
    events:
      - s3:
          bucket: pdftoconvert
          event: s3:ObjectCreated:*
          existing: true
No need for an SQS queue so you save some resources.