AWS Firehose DynamicPartitioningConfiguration/Prefix simply won't work - amazon-web-services

I've been banging my head against the wall for the past two days trying to get dynamic partitioning to work in a delivery stream I'm creating with a Serverless Application Model template. I've tried multiple configurations, but as soon as I fill in a prefix, enable dynamic partitioning, and add a metadata-extraction processor to give me a field I can use in the prefix, my stack stops working. I can generate alerts in AWS and see the logs from the Lambda processor I also added, but no files reach the bucket. I also can't seem to configure CloudWatch logs for the stream: I've tried creating them both through the script and manually, and neither shows any logs pertaining to an error.
I'll leave my script here, hopefully I'm doing something wrong:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Transform": "AWS::Serverless-2016-10-31",
"Description": "An AWS Serverless Application.",
"Parameters": {
"BucketArn": {
"Type": "String"
}
},
"Resources": {
"DeliveryPolicy": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": "K-POLICY",
"Roles": [
{
"Ref": "DeliveryRole"
}
],
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:*"
],
"Resource": [
{
"Ref": "BucketArn"
}
]
},
{
"Effect": "Allow",
"Action": [
"lambda:InvokeFunction",
"lambda:GetFunctionConfiguration"
],
"Resource": [
{
"Fn::GetAtt": [
"Processor",
"Arn"
]
}
]
}
]
}
}
},
"DeliveryRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"firehose.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
}
}
},
"DeliveryStream": {
"Type": "AWS::KinesisFirehose::DeliveryStream",
"Properties": {
"DeliveryStreamName": "K-STREAM",
"DeliveryStreamType": "DirectPut",
"ExtendedS3DestinationConfiguration": {
"BucketARN": {
"Ref": "BucketArn"
},
"Prefix": "!{partitionKeyFromQuery:symb}",
"ErrorOutputPrefix": "errors/!{firehose:random-string}/!{firehose:error-output-type}/!{timestamp:yyyy/MM/dd}",
"RoleARN": {
"Fn::GetAtt": [
"DeliveryRole",
"Arn"
]
},
"DynamicPartitioningConfiguration": {
"Enabled": true
},
"ProcessingConfiguration": {
"Enabled": true,
"Processors": [
{
"Type": "MetadataExtraction",
"Parameters": [
{
"ParameterName": "MetadataExtractionQuery",
"ParameterValue": "{symb: .TICKER_SYMBOL}"
},
{
"ParameterName": "JsonParsingEngine",
"ParameterValue": "JQ-1.6"
}
]
},
{
"Type": "Lambda",
"Parameters": [
{
"ParameterName": "LambdaArn",
"ParameterValue": {
"Fn::GetAtt": [
"Processor",
"Arn"
]
}
}
]
}
]
}
}
}
},
"Processor": {
"Type": "AWS::Serverless::Function",
"Properties": {
"Handler": "K-PROCESSOR::KRATOS_PROCESSOR.Functions::FunctionHandler",
"FunctionName": "K-PROCESSOR",
"Runtime": "dotnetcore3.1",
"CodeUri": "./staging/app/3cs-int-k",
"MemorySize": 256,
"Timeout": 60,
"Role": null,
"Policies": [
"AWSLambdaBasicExecutionRole"
]
}
}
}
}
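For anyone reproducing this, it may help to see what the MetadataExtraction processor is expected to produce. Here is a rough Python sketch of what the jq query {symb: .TICKER_SYMBOL} does to each record; the sample record shape is an assumption (it matches the Kinesis demo ticker data), not something from the stack above:

```python
import json

# Hypothetical input record; the field names are an assumption
record = {"TICKER_SYMBOL": "AMZN", "SECTOR": "TECHNOLOGY", "PRICE": 84.51}

# The MetadataExtraction query {symb: .TICKER_SYMBOL} maps each record to a
# dict of partition keys; the equivalent mapping in Python:
partition_keys = {"symb": record["TICKER_SYMBOL"]}

# Firehose then substitutes !{partitionKeyFromQuery:symb} in the S3 Prefix
# with the extracted value for that record's partition:
prefix = partition_keys["symb"]
print(prefix)  # AMZN
```

If the query fails to extract a key for a record (e.g. the field is missing), the record cannot be partitioned and ends up under the ErrorOutputPrefix instead of the main prefix.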

Related

aws policy for ec2:RunInstances with multiple conditions result in rejected request

I want to restrict ec2:* operations (specifically ec2:RunInstances) to a specific AWS account, and prevent them against existing EC2 instances that carry a dedicated tag.
I have the following policy:
{
"Sid": "EC2InfraAccess",
"Effect": "Allow",
"Action": [
"ec2:*"
],
"Resource": [
"*"
],
"Condition": {
"StringEquals": {
"aws:ResourceAccount": [
"111111111111"
]
},
"StringNotEquals": {
"aws:ResourceTag/cluster": [
"team-prod-eks",
"team-stage-eks"
]
}
}
}
So I want the user to be able to perform ec2:RunInstances but have no ability to perform any ec2:* actions against nodes of two kubernetes clusters.
The resources in the account 111111111111 do have tags cluster=team-prod-eks or cluster=team-stage-eks.
I need the user to be able to create new ec2 instances without ability to perform any ops against the nodes associated with these 2 eks clusters.
But it doesn't work and I receive:
Error: creating EC2 instance: UnauthorizedOperation:
You are not authorized to perform this operation. Encoded authorization failure message....
The decoded message has this:
{
"allowed": false,
"explicitDeny": false,
"matchedStatements": {
"items": []
},
"failures": {
"items": []
},
"context": {
"principal": {
"id": "AROA6BMT6GMYSAV3BDPVI:UserName",
"arn": "arn:aws:sts::111111111:assumed-role/AWSReservedSSO_role_5fbf1098ce7e652e/UserName"
},
"action": "ec2:RunInstances",
"resource": "arn:aws:ec2:us-west-2::image/ami-0c12b5d624d73f1c0",
"conditions": {
"items": [
{
"key": "ec2:ImageID",
"values": {
"items": [
{
"value": "ami-0c12b5d624d73f1c0"
}
]
}
},
{
"key": "ec2:ImageType",
"values": {
"items": [
{
"value": "machine"
}
]
}
},
{
"key": "aws:Resource",
"values": {
"items": [
{
"value": "image/ami-0c12b5d624d73f1c0"
}
]
}
},
{
"key": "aws:Account",
"values": {
"items": [
{
"value": "801119661308"
}
]
}
},
{
"key": "ec2:IsLaunchTemplateResource",
"values": {
"items": [
{
"value": "false"
}
]
}
},
{
"key": "ec2:RootDeviceType",
"values": {
"items": [
{
"value": "ebs"
}
]
}
},
{
"key": "aws:Region",
"values": {
"items": [
{
"value": "us-west-2"
}
]
}
},
{
"key": "aws:Service",
"values": {
"items": [
{
"value": "ec2"
}
]
}
},
{
"key": "ec2:Owner",
"values": {
"items": [
{
"value": "amazon"
}
]
}
},
{
"key": "ec2:Public",
"values": {
"items": [
{
"value": "true"
}
]
}
},
{
"key": "aws:Type",
"values": {
"items": [
{
"value": "image"
}
]
}
},
{
"key": "ec2:Region",
"values": {
"items": [
{
"value": "us-west-2"
}
]
}
},
{
"key": "aws:ARN",
"values": {
"items": [
{
"value": "arn:aws:ec2:us-west-2::image/ami-0c12b5d624d73f1c0"
}
]
}
}
]
}
}
}
So how to write a proper policy so they can RunInstances in a specific account with any AMI ID?
"action": "ec2:RunInstances",
"resource": "arn:aws:ec2:us-west-2::image/ami-0c12b5d624d73f1c0",
Instead of using a not-equals condition, add a Deny statement with the condition, plus a broad Allow. An explicit Deny takes precedence over an Allow.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "AllowEc2",
"Effect": "Allow",
"Action": [
"ec2:*"
],
"Resource": "*"
},
{
"Sid": "DenyEc2onCluster",
"Effect": "Deny",
"Action": [
"ec2:*"
],
"Resource": "*",
"Condition": {
"StringEqualsIgnoreCase": {
"aws:ResourceTag/cluster": [
"team-prod-eks",
"team-stage-eks"
]
}
}
}
]
}
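The "explicit deny wins" rule can be sketched as a toy evaluator. This is a deliberately simplified model of IAM policy evaluation, not the real one; the statement shape and the tag_condition field are hypothetical:

```python
# Toy model of IAM policy evaluation, illustrating why the Deny-with-condition
# pattern works: an explicit Deny on any matched statement overrides any Allow.
def evaluate(statements, resource_tags):
    decision = "ImplicitDeny"
    for stmt in statements:
        cond = stmt.get("tag_condition")
        # A statement matches if it has no condition, or the resource's
        # cluster tag is one of the listed values.
        if cond is not None and resource_tags.get("cluster") not in cond:
            continue
        if stmt["Effect"] == "Deny":
            return "Deny"  # explicit deny always wins
        decision = "Allow"
    return decision

statements = [
    {"Effect": "Allow"},  # AllowEc2: ec2:* on *
    {"Effect": "Deny", "tag_condition": ["team-prod-eks", "team-stage-eks"]},
]

print(evaluate(statements, {"cluster": "team-prod-eks"}))  # Deny
print(evaluate(statements, {}))                            # Allow
```

A brand-new instance launched by RunInstances has no cluster tag, so only the Allow matches; the tagged EKS nodes match the Deny, which wins.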

Need help on CloudFormation template and AWS lambda for pulling events from SQS to S3 via lambda

I am new to AWS CloudFormation, and I am trying to capture events from an SQS queue and place them in an S3 bucket via AWS Lambda. The flow of events is
SNS --> SQS <-- Lambda --> S3 bucket.
I am trying to achieve the above flow using a CloudFormation template. I am getting the error message below after deploying my CloudFormation template. Any help you can provide would be greatly appreciated. Thank you.
11:51:56 2022-01-13 17:51:47,930 - INFO - ...
11:52:53 2022-01-13 17:52:48,511 - ERROR - Stack myDemoApp shows a rollback status ROLLBACK_IN_PROGRESS.
11:52:53 2022-01-13 17:52:48,674 - INFO - The following root cause failure event was found in the myDemoApp stack for resource EventStreamLambda:
11:52:53 2022-01-13 17:52:48,674 - INFO - Resource handler returned message: "Error occurred while GetObject. S3 Error Code: NoSuchKey.
S3 Error Message: The specified key does not exist. (Service: Lambda, Status Code: 400, Request
ID: 5f2f9882-a863-4a58-90bd-7e0d0dfdf4d5, Extended Request ID: null)" (RequestToken: 0a95acb4-a677-0a2d-d0bc-8b7487a858ad, HandlerErrorCode: InvalidRequest)
11:52:53 2022-01-13 17:52:48,674 - INFO - ..
My Lambda function is:
import json
import logging
import boto3

logger = logging.getLogger()
logger.setLevel(logging.INFO)
logging.basicConfig(level=logging.INFO,
                    format='%(asctime)s: %(levelname)s: %(message)s')

s3 = boto3.client("s3")

def lambda_handler(event, context):
    logger.info(f"lambda_handler -- event: {json.dumps(event)}")
    event_message = json.loads(event["Records"][0]["body"])
    s3.put_object(Bucket="S3DeployBucket", Key="data.json",
                  Body=json.dumps(event_message))
My complete CloudFormation template is:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Description": "myDemoApp Resource Stack",
"Mappings": {
},
"Parameters": {
"S3DeployBucket": {
"Default": "myDemoApp-deploy-bucket",
"Description": "Bucket for deployment configs and artifacts for myDemoApp",
"Type": "String"
},
"EnvName": {
"Description": "Platform environment name for myDemoApp",
"Type": "String"
},
"AuditRecordKeyArn": {
"Description": "ARN for audit record key encryption for myDemoApp",
"Type": "String"
},
"ParentVPCStack": {
"Description": "The name of the stack containing the parent VPC for myDemoApp",
"Type": "String"
},
"StackVersion": {
"Description": "The version of this stack of myDemoApp",
"Type": "String"
},
"EventLogFolderName": {
"Type": "String",
"Description": "folder name for the logs for the event stream of myDemoApp",
"Default": "event_log_stream"
},
"EventLogPartitionKeys": {
"Type": "String",
"Description": "The partition keys that audit logs will write to S3. Use Hive-style naming conventions for automatic Athena/Glue comprehension.",
"Default": "year=!{timestamp:yyyy}/month=!{timestamp:MM}/day=!{timestamp:dd}/hour=!{timestamp:HH}"
},
"AppEventSNSTopicArn": {
"Description": "Events SNS Topic of myDemoApp",
"Type": "String"
},
"ReportingEventsRetentionDays": {
"Default": "2192",
"Description": "The number of days to retain a record used for reporting.",
"Type": "String"
}
},
"Resources": {
"AppEventSQSQueue": {
"Type": "AWS::SQS::Queue"
},
"AppEventSnsSubscription": {
"Type": "AWS::SNS::Subscription",
"Properties": {
"TopicArn": {
"Ref": "AppEventSNSTopicArn"
},
"Endpoint": {
"Fn::GetAtt": [
"AppEventSQSQueue",
"Arn"
]
},
"Protocol": "sqs"
}
},
"S3DeployBucket": {
"Type": "AWS::S3::Bucket",
"DeletionPolicy": "Retain",
"UpdateReplacePolicy": "Retain",
"Properties": {
"BucketEncryption": {
"ServerSideEncryptionConfiguration": [
{
"ServerSideEncryptionByDefault": {
"KMSMasterKeyID": {
"Ref": "AuditRecordKeyArn"
},
"SSEAlgorithm": "aws:kms"
}
}
]
},
"VersioningConfiguration": {
"Status": "Enabled"
},
"LifecycleConfiguration": {
"Rules": [
{
"ExpirationInDays": {
"Ref": "ReportingEventsRetentionDays"
},
"Status": "Enabled"
}
]
}
}
},
"EventStreamLogGroup": {
"Type": "AWS::Logs::LogGroup"
},
"EventLogStream": {
"Type": "AWS::Logs::LogStream",
"Properties": {
"LogGroupName": {
"Ref": "EventStreamLogGroup"
}
}
},
"EventStreamSubscriptionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "sns.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
},
"Policies": [
{
"PolicyName": "SNSSQSAccessPolicy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": {
"Action": [
"sqs:*"
],
"Effect": "Allow",
"Resource": "*"
}
}
}
]
}
},
"EventDeliveryRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "sqs.amazonaws.com"
},
"Action": "sts:AssumeRole",
"Condition": {
"StringEquals": {
"sts:ExternalId": {
"Ref": "AWS::AccountId"
}
}
}
}
]
}
}
},
"EventSqsQueuePolicy": {
"Type": "AWS::SQS::QueuePolicy",
"Properties": {
"PolicyDocument": {
"Version": "2012-10-17",
"Id": "SqsQueuePolicy",
"Statement": [
{
"Sid": "Allow-SNS-SendMessage",
"Effect": "Allow",
"Principal": "*",
"Action": [
"sqs:SendMessage",
"sqs:ReceiveMessage"
],
"Resource": {
"Fn::GetAtt": [
"EventStreamLambda",
"Arn"
]
},
"Condition": {
"ArnEquals": {
"aws:SourceArn": {
"Ref": "EventSNSTopicArn"
}
}
}
}
]
},
"Queues": [
{
"Ref": "EventSNSTopicArn"
}
]
}
},
"EventDeliveryPolicy": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": "sqs_delivery_policy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:PutObject"
],
"Resource": [
{
"Fn::GetAtt": [
"S3DeployBucket",
"Arn"
]
},
{
"Fn::Join": [
"",
[
{
"Fn::GetAtt": [
"S3DeployBucket",
"Arn"
]
},
"/*"
]
]
}
]
},
{
"Effect": "Allow",
"Action": [
"logs:PutLogEvents"
],
"Resource": {
"Fn::Sub": "arn:${AWS::Partition}:logs:${AWS::Region}:${AWS::AccountId}:log-group:${EventStreamLogGroup}:log-stream:${EventLogStreamLogStream}"
}
},
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:GenerateDataKey"
],
"Resource": [
{
"Ref": "AuditRecordKeyArn"
}
],
"Condition": {
"StringEquals": {
"kms:ViaService": {
"Fn::Join": [
"",
[
"s3.",
{
"Ref": "AWS::Region"
},
".amazonaws.com"
]
]
}
},
"StringLike": {
"kms:EncryptionContext:aws:s3:arn": {
"Fn::Join": [
"",
[
{
"Fn::GetAtt": [
"S3DeployBucket",
"Arn"
]
},
"/*"
]
]
}
}
}
}
]
},
"Roles": [
{
"Ref": "EventDeliveryRole"
}
]
}
},
"EventStreamLambda": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Handler": "lambda_function.lambda_handler",
"MemorySize": 128,
"Runtime": "python3.8",
"Timeout": 30,
"FunctionName": "sqs_s3_pipeline_job",
"Role": {
"Fn::GetAtt": [
"SQSLambdaExecutionRole",
"Arn"
]
},
"Code": {
"S3Bucket": {
"Ref": "S3DeployBucket"
},
"S3Key": {
"Ref": "S3DeployBucket"
}
},
"TracingConfig": {
"Mode": "Active"
}
}
},
"SQSLambdaExecutionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
},
"Action": [
"sts:AssumeRole"
]
}
]
},
"Policies": [
{
"PolicyName": "StreamLambdaLogs",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"logs:*"
],
"Resource": "arn:aws:logs:*:*:*"
}
]
}
},
{
"PolicyName": "SQSLambdaPolicy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"sqs:ReceiveMessage",
"sqs:DeleteMessage",
"sqs:GetQueueAttributes",
"sqs:ChangeMessageVisibility"
],
"Resource":"*"
}
]
}
}
]
}
}
},
"Outputs": {
"VpcSubnet3ExportKey": {
"Value": {
"Fn::Sub": "${ParentVPCStack}-privateSubnet3"
}
}
}
}
SubscriptionRoleArn is only for Kinesis; the documentation states:
"This property applies only to Amazon Kinesis Data Firehose delivery stream subscriptions."

API HTTP Gateway lambda integration 'null' in Resource Path

I am setting up an HTTP API Gateway (V2) with Lambda integrations via CloudFormation, and everything has been working so far. I have two working integrations, but my third integration is not working: everything looks fine from the API Gateway side (it lists the correct route with a link to the Lambda), but the API endpoint shown in the Lambda console is listed as "https://c59boisn2k.execute-api.eu-central-1.amazonaws.com/productionnull". When I try to call the route, it says "Not Found". The odd thing is that I am using the same template for all three integrations.
I was thinking it could be a "DependsOn" issue, but I think I have all the correct dependencies. I tried re-creating the stack from scratch, and now two of the three functions say "null" in their URL while the API Gateway still states the correct routes. Could this be a "DependsOn" problem?
Here's my template for a single integration:
{
"Resources": {
"api": {
"Type": "AWS::ApiGatewayV2::Api",
"Properties": {
"Name": { "Ref": "AWS::StackName" },
"ProtocolType": "HTTP",
"CorsConfiguration": {
"AllowMethods": ["*"],
"AllowOrigins": ["*"]
}
}
},
"stage": {
"Type": "AWS::ApiGatewayV2::Stage",
"Properties": {
"Description": { "Ref": "AWS::StackName" },
"StageName": "production",
"AutoDeploy": true,
"ApiId": { "Ref": "api" },
"AccessLogSettings": {
"DestinationArn": {
"Fn::GetAtt": ["stageLogGroup", "Arn"]
}
}
}
},
"getSignedS3LambdaRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": {
"Fn::Sub": "${AWS::StackName}-getSignedS3"
},
"AssumeRolePolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": ["lambda.amazonaws.com"]
},
"Action": ["sts:AssumeRole"]
}
]
},
"Policies": [
{
"PolicyName": "root",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Resource": "arn:aws:logs:*:*:*",
"Action": "logs:*"
},
{
"Effect": "Allow",
"Action": ["s3:*"],
"Resource": ["arn:aws:s3:::euromomo.eu/uploads/*"]
}
]
}
}
]
}
},
"getSignedS3Lambda": {
"Type": "AWS::Lambda::Function",
"DependsOn": ["getSignedS3LambdaRole"],
"Properties": {
"FunctionName": {
"Fn::Sub": "${AWS::StackName}-getSignedS3"
},
"Code": {
"S3Bucket": { "Ref": "operationsS3Bucket" },
"S3Key": { "Ref": "getSignedS3S3Key" }
},
"Runtime": "nodejs10.x",
"Handler": "index.handler",
"Role": { "Fn::GetAtt": ["getSignedS3LambdaRole", "Arn"] }
}
},
"getSignedS3Permission": {
"Type": "AWS::Lambda::Permission",
"DependsOn": ["api", "getSignedS3Lambda"],
"Properties": {
"Action": "lambda:InvokeFunction",
"FunctionName": { "Ref": "getSignedS3Lambda" },
"Principal": "apigateway.amazonaws.com",
"SourceArn": {
"Fn::Sub": "arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${api}/*/*"
}
}
},
"getSignedS3Integration": {
"Type": "AWS::ApiGatewayV2::Integration",
"DependsOn": ["getSignedS3Permission"],
"Properties": {
"ApiId": { "Ref": "api" },
"IntegrationType": "AWS_PROXY",
"IntegrationUri": {
"Fn::Sub": "arn:aws:apigateway:${AWS::Region}:lambda:path/2015-03-31/functions/${getSignedS3Lambda.Arn}/invocations"
},
"PayloadFormatVersion": "2.0"
}
},
"getSignedS3Route": {
"Type": "AWS::ApiGatewayV2::Route",
"DependsOn": ["getSignedS3Integration"],
"Properties": {
"ApiId": { "Ref": "api" },
"RouteKey": "POST /getSignedS3",
"AuthorizationType": "NONE",
"Target": { "Fn::Sub": "integrations/${getSignedS3Integration}" }
}
}
}
}
After spending hours debugging this, I found that the problem was in my Lambda permission: I need to use the correct path in the permission's SourceArn.
This does not work:
arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${api}/*/*
This does work:
arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${api}/*/*/getSignedS3
I believe I could scope it even more to this:
arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${api}/*/POST/getSignedS3
This fixed all my problems and shows the correct path in the lambda web console.
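Applied to the template above, the permission resource would then look like this (a sketch; only the SourceArn changes):

```json
"getSignedS3Permission": {
  "Type": "AWS::Lambda::Permission",
  "DependsOn": ["api", "getSignedS3Lambda"],
  "Properties": {
    "Action": "lambda:InvokeFunction",
    "FunctionName": { "Ref": "getSignedS3Lambda" },
    "Principal": "apigateway.amazonaws.com",
    "SourceArn": {
      "Fn::Sub": "arn:aws:execute-api:${AWS::Region}:${AWS::AccountId}:${api}/*/POST/getSignedS3"
    }
  }
}
```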

cloudwatchevent_rule default to latest version of lambda function

I am trying to automate the creation of a Lambda function and a CloudWatch rule for it. However, it seems that the cloudwatchevent_rule Ansible task requires a version id to attach itself to my Lambda function. This is causing an error:
No target to arn:aws:lambda:us-east-1:MYACCOUNTID:function:MYFUNCTIONNAME could be found on the rule MYFUNCTIONNAME.
How can I change this so that the CloudWatch rule will always attach itself to the latest version of my Lambda function:
- name: create cloudwatch rule
  cloudwatchevent_rule:
    name: 'name_for_rule'
    region: "{{ region }}"
    description: 'trigger on new instance creation'
    state: present
    event_pattern: |-
      {
        "detail-type": [
          "AWS API Call via CloudTrail"
        ],
        "detail": {
          "eventSource": [
            "ec2.amazonaws.com"
          ],
          "eventName": [
            "RunInstances"
          ]
        }
      }
    targets:
      - id: "{{ lambda.configuration.version }}"
        arn: "{{ lambda.configuration.function_arn }}"
I've configured a Lambda function with a CloudWatch rule triggering it. The following SAM template also contains the permissions, policies and roles I require; please ignore those if not needed.
{
"AWSTemplateFormatVersion": "2010-09-09",
"Transform": "AWS::Serverless-2016-10-31",
"Description": "AWS SAM template configuring lambda functions written in test package.",
"Resources": {
"OrchestratorTestLambdaFunction": {
"DependsOn": [
"LambdaPolicy"
],
"Type": "AWS::Lambda::Function",
"Properties": {
"Handler": "com.test.TestClass::orchestrateTestLambda",
"FunctionName": "OrchestratorTestLambda",
"Runtime": "java8",
"MemorySize": 256,
"Timeout": 60,
"Code": {
"S3Bucket": "BATS::SAM::CodeS3Bucket",
"S3Key": "BATS::SAM::CodeS3Key"
},
"Role": {
"Fn::GetAtt": [
"LambdaRole",
"Arn"
]
},
"Description": "Lambda reads from SQS provided in the cloud watch."
}
},
"LambdaRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": "LambdaRole",
"AssumeRolePolicyDocument": {
"Version": "2008-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Principal": {
"Service": "lambda.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
}
},
"LambdaPolicy": {
"Type": "AWS::IAM::Policy",
"Properties": {
"PolicyName": "lambda_policy",
"PolicyDocument": {
"Version": "2012-10-17",
"Statement": [
{
"Sid": "",
"Effect": "Allow",
"Action": [
"sqs:DeleteMessage",
"sqs:ReceiveMessage"
],
"Resource": [
{
"Fn::Sub": "arn:aws:sqs:eu-west-1:${AWS::AccountId}:TestUpdates"
}
]
},
{
"Sid": "",
"Action": [
"lambda:InvokeAsync"
],
"Effect": "Allow",
"Resource": "*"
},
{
"Sid": "",
"Effect": "Allow",
"Action": [
"logs:CreateLogGroup",
"logs:CreateLogStream",
"logs:PutLogEvents"
],
"Resource": "arn:aws:logs:*:*:*"
}
]
},
"Roles": [
{
"Ref": "LambdaRole"
}
]
}
},
"PermissionForEventsToInvokeLambda": {
"Type": "AWS::Lambda::Permission",
"Properties": {
"FunctionName": {
"Ref": "OrchestratorTestLambdaFunction"
},
"Action": "lambda:InvokeFunction",
"Principal": "events.amazonaws.com",
"SourceArn": {
"Fn::GetAtt": [
"TestRule",
"Arn"
]
}
}
},
"TestRule": {
"Type": "AWS::Events::Rule",
"Properties": {
"Name": "TestRule",
"Description": "Rule to Trigger OrchestratorTestLambdaFunction",
"ScheduleExpression": "rate(1 minute)",
"State": "ENABLED",
"Targets": [
{
"Arn": {
"Fn::GetAtt": [
"OrchestratorTestLambdaFunction",
"Arn"
]
},
"Id": "TestRuleV1",
"Input": {
"Fn::Sub": "{\"queueUrl\":\"https://sqs.eu-west-1.amazonaws.com/${AWS::AccountId}/TestUpdates\"}"
}
}
]
}
}
},
"Outputs": {
"StackArn": {
"Value": {
"Ref": "AWS::StackId"
},
"Description": "Use this as the stack_arn in your cloud_formation_deployment_stack override."
}
}
}
I've noticed that the function_arn registered from the lambda Ansible module output is not consistent.
Sometimes it is:
"function_arn": "arn:aws:lambda:zone:account:function:name"
other times it is:
"function_arn": "arn:aws:lambda:zone:account:function:name:version"
So I construct the ARN myself, always appending the $LATEST version:
- cloudwatchevent_rule:
    profile: "{{ aws_profile }}"
    name: StartStop
    schedule_expression: cron(* * * * ? *)
    description: trigger my lambda
    targets:
      - id: StartStop
        arn: "arn:aws:lambda:{{ aws_zone }}:{{ aws_account_id }}:function:{{ lambdadeploy.configuration.function_name }}:$LATEST"
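If you'd rather not hand-assemble the region and account id, a small helper can normalize whichever form of function_arn the lambda module returned. This is a sketch that assumes the standard arn:aws:lambda:region:account:function:name[:version] layout:

```python
def latest_arn(function_arn):
    """Normalize a Lambda function ARN to its $LATEST-qualified form."""
    parts = function_arn.split(":")
    # An unqualified function ARN has 7 colon-separated parts; a qualified
    # one carries an 8th part (the version or alias), which we drop.
    base = ":".join(parts[:7])
    return base + ":$LATEST"

print(latest_arn("arn:aws:lambda:us-east-1:123456789012:function:MYFUNCTIONNAME"))
print(latest_arn("arn:aws:lambda:us-east-1:123456789012:function:MYFUNCTIONNAME:3"))
# both print: arn:aws:lambda:us-east-1:123456789012:function:MYFUNCTIONNAME:$LATEST
```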

SNS topic not triggering Lambda

I am attempting to set up an email-sending Lambda function that is triggered by an SNS topic in CloudFormation, but for some reason it is not working. I went in and checked all of the dependencies and permissions after the Lambda and SNS went up, and everything seems to be in order, but when I publish to the topic nothing happens. When I manually test the Lambda in the Lambda console, it works perfectly.
Cloudformation
"Resources": {
"CloudformationEventHandlerLambdaExecutionRole": {
"Type": "AWS::IAM::Role",
"Properties": {
"Path": "/",
"Policies": [
{
"PolicyName": "CloudformationTrigger",
"PolicyDocument": {
"Statement": [
{
"Effect": "Allow",
"Action": [
"ses:*"
],
"Resource": [
"arn:aws:ses:*"
]
}
]
}
}
],
"AssumeRolePolicyDocument": {
"Statement": [
{
"Action": [
"sts:AssumeRole"
],
"Effect": "Allow",
"Principal": {
"Service": [
"lambda.amazonaws.com"
]
}
}
]
}
}
},
"CloudformationEventHandlerLambdaFunction": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Handler": "lambda_function.lambda_handler",
"Role": {
"Fn::GetAtt": [
"CloudformationEventHandlerLambdaExecutionRole",
"Arn"
]
},
"Code": {
"S3Bucket": {
"Ref": "Bucket"
},
"S3Key": "CloudformationEventHandler.zip"
},
"Runtime": "python2.7",
"Timeout": "30"
},
"DependsOn": [
"CloudformationEventHandlerLambdaExecutionRole"
]
},
"CloudformationEventHandlerLambdaInvokePermission": {
"Type": "AWS::Lambda::Permission",
"Properties": {
"Action": "lambda:InvokeFunction",
"SourceAccount": {
"Ref": "AWS::AccountId"
},
"Principal": "sns.amazonaws.com",
"SourceArn": {
"Ref": "CloudformationTopic"
},
"FunctionName": {
"Fn::GetAtt": [
"CloudformationEventHandlerLambdaFunction",
"Arn"
]
}
}
},
"CloudformationTopic": {
"Type": "AWS::SNS::Topic",
"Properties": {
"DisplayName": "CloudformationIngestTopic",
"Subscription": [
{
"Endpoint": {
"Fn::GetAtt": [
"CloudformationEventHandlerLambdaFunction",
"Arn"
]
},
"Protocol": "lambda"
}
]
},
"DependsOn": [ "CloudformationEventHandlerLambdaFunction" ]
}
}
Python SES Lambda
import boto3

client = boto3.client('ses')

def lambda_handler(event, context):
    message = """
    Event:
    {}
    Context:
    {}
    """.format(event, context)
    response = client.send_email(
        Source='***censored***',
        Destination={'ToAddresses': ['***censored***']},
        Message={
            'Subject': {
                'Data': 'CFMTest'
            },
            'Body': {
                'Text': {
                    'Data': message
                }
            }
        }
    )
The SourceAccount field on the AWS::Lambda::Permission resource type is only meant to be used with CloudWatch Logs, CloudWatch rules, S3, and SES.
After removing this field from the CloudformationEventHandlerLambdaInvokePermission resource in your template, I am able to invoke the Lambda function by publishing to the SNS topic.
Refer to this documentation for more information regarding Lambda permissions.
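For reference, the corrected permission would look like this (a sketch of the same resource with SourceAccount removed; everything else is unchanged):

```json
"CloudformationEventHandlerLambdaInvokePermission": {
  "Type": "AWS::Lambda::Permission",
  "Properties": {
    "Action": "lambda:InvokeFunction",
    "Principal": "sns.amazonaws.com",
    "SourceArn": { "Ref": "CloudformationTopic" },
    "FunctionName": {
      "Fn::GetAtt": [
        "CloudformationEventHandlerLambdaFunction",
        "Arn"
      ]
    }
  }
}
```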