I had quite a hard time setting up an automation with Elastic Beanstalk and CodePipeline...
I finally got it running. The main issue was getting the S3 CloudWatch event to trigger the start of the CodePipeline: I had missed the CloudTrail part, which is necessary, and I couldn't find that in any documentation.
So the current setup is:
S3 file gets uploaded -> a CloudWatch Event triggers the CodePipeline -> CodePipeline deploys to the Elastic Beanstalk env.
As I said, to get the CloudWatch Event trigger you need a CloudTrail trail like:
resource "aws_cloudtrail" "example" {
  # ... other configuration ...
  name                  = "codepipeline-source-trail" # "codepipeline-${var.project_name}-trail"
  is_multi_region_trail = true
  s3_bucket_name        = "codepipeline-cloudtrail-placeholder-bucket-eu-west-1"

  event_selector {
    read_write_type           = "WriteOnly"
    include_management_events = true

    data_resource {
      type   = "AWS::S3::Object"
      values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_name}/file.zip"]
    }
  }
}
But this only creates a new trail. The problem is that AWS allows a maximum of 5 trails. In the AWS console you can add multiple data events to one trail, but I couldn't manage to do this in Terraform. I tried using the same name, but that just raises an error:
"Error creating CloudTrail: TrailAlreadyExistsException: Trail codepipeline-source-trail already exists for customer: XXXX"
I tried my best to explain my problem; not sure if it is understandable.
In a nutshell: I want to add an S3 data event to an existing CloudTrail trail with Terraform.
Thx for help,
Daniel
As I said, to get the CloudWatch Event trigger you need a CloudTrail trail like:
You do not need multiple CloudTrail trails to invoke a CloudWatch Event; you can create service-specific rules as well.
Create a CloudWatch Events rule for an Amazon S3 source (console)
From the CloudWatch event rule, you invoke CodePipeline as a target. Let's say you created this event rule:
{
  "source": [
    "aws.s3"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "s3.amazonaws.com"
    ],
    "eventName": [
      "PutObject"
    ]
  }
}
You add CodePipeline as a target for this rule, and eventually CodePipeline deploys to the Elastic Beanstalk env.
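In Terraform, that rule plus target could be sketched roughly like this (the resource labels, the pipeline reference `aws_codepipeline.deploy`, and the events role are placeholders, not names from the question):

```hcl
# Sketch only: rename resources and references to match your own configuration.
resource "aws_cloudwatch_event_rule" "s3_source_change" {
  name        = "s3-source-change"
  description = "Fires on S3 PutObject calls recorded by CloudTrail"

  event_pattern = <<EOF
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject"]
  }
}
EOF
}

resource "aws_cloudwatch_event_target" "start_pipeline" {
  rule     = aws_cloudwatch_event_rule.s3_source_change.name
  arn      = aws_codepipeline.deploy.arn
  # Role that grants codepipeline:StartPipelineExecution to events.amazonaws.com
  role_arn = aws_iam_role.events_role.arn
}
```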
Have you tried adding multiple data_resource blocks to your current trail, instead of creating a new trail with the same name?
resource "aws_cloudtrail" "example" {
  # ... other configuration ...
  name                  = "codepipeline-source-trail" # "codepipeline-${var.project_name}-trail"
  is_multi_region_trail = true
  s3_bucket_name        = "codepipeline-cloudtrail-placeholder-bucket-eu-west-1"

  event_selector {
    read_write_type           = "WriteOnly"
    include_management_events = true

    data_resource {
      type   = "AWS::S3::Object"
      values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_A}/file.zip"]
    }

    data_resource {
      type   = "AWS::S3::Object"
      values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_B}/fileB.zip"]
    }
  }
}
You should be able to add up to 250 data resources (across all event selectors in a trail), and up to 5 event selectors, to your current trail (see the CloudTrail quota limits).
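If you need different read/write settings per project, a single trail can also carry several event_selector blocks. A sketch, with an illustrative bucket ARN and project prefixes in place of the question's variables:

```hcl
resource "aws_cloudtrail" "shared" {
  name                  = "codepipeline-source-trail"
  s3_bucket_name        = "codepipeline-cloudtrail-placeholder-bucket-eu-west-1"
  is_multi_region_trail = true

  # Selector 1: object-level write events for project A artifacts
  event_selector {
    read_write_type           = "WriteOnly"
    include_management_events = false

    data_resource {
      type   = "AWS::S3::Object"
      values = ["arn:aws:s3:::my-deploy-bucket/project-a/"]
    }
  }

  # Selector 2: read and write events for project B artifacts
  event_selector {
    read_write_type           = "All"
    include_management_events = false

    data_resource {
      type   = "AWS::S3::Object"
      values = ["arn:aws:s3:::my-deploy-bucket/project-b/"]
    }
  }
}
```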
Related
I am trying to trigger the CodePipeline on upload to S3 using Terraform.
Use case: the Terraform code for various resources will be pushed as a zip file to the source bucket, which will trigger a pipeline. This pipeline will run terraform apply for the zip file. So in order to run the pipeline I am setting up a trigger.
Here is what I have done:
Create the source S3 bucket
Create the CodePipeline
Created a CloudWatch Events rule for S3 events from CloudTrail
Created the CloudTrail trail manually and added a data event to log source-bucket write events; all previous steps were done using Terraform.
After doing all this, my pipeline is still not triggered on upload of a new file to the bucket.
I was reading these docs, and they had a particular statement about sending trail events to the EventBridge rule, which I think is the cause, but I can't find the option to add this through the console.
AWS CloudTrail is a service that logs and filters events on your Amazon S3 source bucket. The trail sends the filtered source changes to the Amazon CloudWatch Events rule. The Amazon CloudWatch Events rule detects the source change and then starts your pipeline.
https://docs.aws.amazon.com/codepipeline/latest/userguide/create-cloudtrail-S3-source.html
Here is my EventBridge rule:
resource "aws_cloudwatch_event_rule" "xxxx-pipeline-event" {
  name        = "xxxx-ci-cd-pipeline-event"
  description = "CloudWatch event when zip is uploaded to S3"

  event_pattern = <<EOF
{
  "source": ["aws.s3"],
  "detail-type": ["AWS API Call via CloudTrail"],
  "detail": {
    "eventSource": ["s3.amazonaws.com"],
    "eventName": ["PutObject", "CompleteMultipartUpload", "CopyObject"],
    "requestParameters": {
      "bucketName": ["xxxxx-ci-cd-zip"],
      "key": ["app.zip"]
    }
  }
}
EOF
}
resource "aws_cloudwatch_event_target" "code-pipeline" {
  # The rule reference must match the resource label above (lowercase "xxxx-pipeline-event")
  rule      = aws_cloudwatch_event_rule.xxxx-pipeline-event.name
  target_id = "SendToCodePipeline"
  arn       = aws_codepipeline.cicd_pipeline.arn
  role_arn  = aws_iam_role.pipeline_role.arn
}
EventBridge role permissions Terraform code:
data "aws_iam_policy_document" "event_bridge_role" {
  statement {
    actions = ["sts:AssumeRole"]
    effect  = "Allow"

    principals {
      type        = "Service"
      identifiers = ["events.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "pipeline_event_role" {
  name               = "xxxxx-pipeline-event-bridge-role"
  assume_role_policy = data.aws_iam_policy_document.event_bridge_role.json
}

data "aws_iam_policy_document" "pipeline_event_role_policy" {
  statement {
    sid       = ""
    actions   = ["codepipeline:StartPipelineExecution"]
    resources = [aws_codepipeline.cicd_pipeline.arn]
    effect    = "Allow"
  }
}

resource "aws_iam_policy" "pipeline_event_role_policy" {
  name   = "xxxx-codepipeline-event-role-policy"
  policy = data.aws_iam_policy_document.pipeline_event_role_policy.json
}

resource "aws_iam_role_policy_attachment" "pipeline_event_role_attach_policy" {
  role       = aws_iam_role.pipeline_event_role.name
  policy_arn = aws_iam_policy.pipeline_event_role_policy.arn
}
The problem was with the CloudTrail filter. The filter was set for the bucket and write actions only.
I had to modify the filter by adding a prefix to it, because my EventBridge rule is looking for my-app.zip, so it was not triggered when I used only a bucket-level prefix.
bucket/prefix and write action
Docs: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html
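The fix described above (scoping the data event to bucket/key-prefix with write actions) could look roughly like this in Terraform; the trail name and the CloudTrail log bucket are placeholders:

```hcl
resource "aws_cloudtrail" "pipeline_trail" {
  name           = "xxxx-ci-cd-pipeline-trail"
  s3_bucket_name = "my-cloudtrail-log-bucket" # bucket that receives the trail's logs

  event_selector {
    read_write_type           = "WriteOnly"
    include_management_events = false

    data_resource {
      type = "AWS::S3::Object"
      # Log write events only for objects under this bucket/key prefix,
      # matching what the EventBridge rule's requestParameters filter expects.
      values = ["arn:aws:s3:::xxxxx-ci-cd-zip/app.zip"]
    }
  }
}
```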
I am trying to build a simple EventBridge -> SNS -> AWS Chatbot pipeline to notify a Slack channel of any ECS deployment events. Below is my code:
resource "aws_cloudwatch_event_rule" "ecs_deployment" {
  name        = "${var.namespace}-${var.environment}-infra-ecs-deployment"
  description = "This rule sends a notification on all app ECS Fargate deployments for the environment."

  event_pattern = <<EOF
{
  "source": ["aws.ecs"],
  "detail-type": ["ECS Deployment State Change"],
  "detail": {
    "clusterArn": [
      {
        "prefix": "arn:aws:ecs:<REGION>:<ACCOUNT>:cluster/${var.namespace}-${var.environment}-"
      }
    ]
  }
}
EOF

  tags = {
    Environment = var.environment
    Origin      = "terraform"
  }
}
resource "aws_cloudwatch_event_target" "ecs_deployment" {
  rule      = aws_cloudwatch_event_rule.ecs_deployment.name
  target_id = "${var.namespace}-${var.environment}-infra-ecs-deployment"
  arn       = aws_sns_topic.ecs_deployment.arn
}

resource "aws_sns_topic" "ecs_deployment" {
  name         = "${var.namespace}-${var.environment}-infra-ecs-deployment"
  display_name = "${var.namespace} ${var.environment}"
}

resource "aws_sns_topic_policy" "default" {
  arn    = aws_sns_topic.ecs_deployment.arn
  policy = data.aws_iam_policy_document.sns_topic_policy.json
}

data "aws_iam_policy_document" "sns_topic_policy" {
  statement {
    effect  = "Allow"
    actions = ["SNS:Publish"]

    principals {
      type        = "Service"
      identifiers = ["events.amazonaws.com"]
    }

    resources = [aws_sns_topic.ecs_deployment.arn]
  }
}
Based on the above code, Terraform will create the AWS EventBridge rule with an SNS target. From there, I create the AWS Chatbot in the console and subscribe it to the SNS topic.
The problem is, when I remove the detail block, it works. But what I want is to filter the events to those coming from clusters with the mentioned prefix.
Is this possible? Or did I do it the wrong way?
Any help is appreciated.
I'm trying to create, via Terraform, a Lambda that is triggered by Kinesis and whose on-failure destination is an AWS SQS queue.
I created the Lambda and configured the source and destination.
When I send a message to the Kinesis stream, the Lambda is triggered but does not send messages to the DLQ.
What am I missing?
My Lambda event source mapping:
resource "aws_lambda_event_source_mapping" "csp_management_service_integration_stream_mapping" {
  event_source_arn               = local.kinesis_csp_management_service_integration_stream_arn
  function_name                  = module.csp_management_service_integration_lambda.lambda_arn
  batch_size                     = var.shared_kinesis_configuration.batch_size
  bisect_batch_on_function_error = var.shared_kinesis_configuration.bisect_batch_on_function_error
  starting_position              = var.shared_kinesis_configuration.starting_position
  maximum_retry_attempts         = var.shared_kinesis_configuration.maximum_retry_attempts
  maximum_record_age_in_seconds  = var.shared_kinesis_configuration.maximum_record_age_in_seconds
  function_response_types        = var.shared_kinesis_configuration.function_response_types

  destination_config {
    on_failure {
      destination_arn = local.shared_default_sqs_error_handling_dlq_arn
    }
  }
}

resource "aws_iam_policy" "shared_deadletter_sqs_queue_policy" {
  name = "shared-deadletter-sqs-queue-policy"
  path = "/"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "sqs:SendMessage",
        ]
        Effect = "Allow"
        Resource = [
          local.shared_default_sqs_error_handling_dlq_arn
        ]
      },
    ]
  })
}
Take a look at the Lambda destination-delivery metrics to see whether you have a permission error.
I think you are facing a permission issue; try attaching a role to your Lambda function with access to the AWS SQS DLQ.
Is your DLQ encrypted with KMS? You will need to provide permissions for the KMS key too, in addition to the SQS permissions.
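For a KMS-encrypted DLQ, the Lambda's execution role needs KMS permissions alongside sqs:SendMessage. A sketch; the role name aws_iam_role.lambda_execution_role and the key reference aws_kms_key.dlq_key are placeholders for your own resources:

```hcl
data "aws_iam_policy_document" "dlq_kms_access" {
  statement {
    effect    = "Allow"
    actions   = ["kms:GenerateDataKey", "kms:Decrypt"]
    resources = [aws_kms_key.dlq_key.arn] # key that encrypts the DLQ
  }
}

resource "aws_iam_policy" "dlq_kms_access" {
  name   = "shared-deadletter-kms-access"
  policy = data.aws_iam_policy_document.dlq_kms_access.json
}

# Attach both the SQS policy from the question and the KMS policy
# to the Lambda's execution role.
resource "aws_iam_role_policy_attachment" "dlq_sqs" {
  role       = aws_iam_role.lambda_execution_role.name
  policy_arn = aws_iam_policy.shared_deadletter_sqs_queue_policy.arn
}

resource "aws_iam_role_policy_attachment" "dlq_kms" {
  role       = aws_iam_role.lambda_execution_role.name
  policy_arn = aws_iam_policy.dlq_kms_access.arn
}
```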
How is Lambda reporting failure?
I am trying to subscribe to the AWS AmazonIpSpaceChanged SNS topic using Terraform. However, I keep getting the error below.
SNS Topic subscription to AWS
resource "aws_sns_topic_subscription" "aws_ip_change_sns_subscription" {
  topic_arn = "arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged"
  protocol  = "lambda"
  endpoint  = "${aws_lambda_function.test_sg_lambda_function.arn}"
}
Error:
* module.test-lambda.aws_sns_topic_subscription.aws_ip_change_sns_subscription: 1 error(s) occurred:
* aws_sns_topic_subscription.aws_ip_change_sns_subscription: Error creating SNS topic: InvalidParameter: Invalid parameter: TopicArn
status code: 400, request id: 3daa2940-8d4b-5fd8-86e7-7b074a16ada9
I tried the same using the AWS CLI, and it failed the first time when I didn't include the option --region us-east-1. But once that was included, it was able to subscribe just fine.
Any ideas?
I know it's an old question, but there are no accepted answers; maybe this will help someone (if you agree with it, consider marking it as accepted).
The SNS topic arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged is only available in the region us-east-1, so you need to use a provider within Terraform that is configured for that region.
You also need to give the SNS topic permission to invoke the Lambda function (not sure if you'd just left this off the question).
This also works if your Lambda function is defined in a different region.
provider "aws" {
  region = "{your target region}"
}

provider "aws" {
  alias  = "us_east_1"
  region = "us-east-1"
}

resource "aws_lambda_function" "my_function" {
  # This uses your default target region
  :
  :
}

resource "aws_lambda_permission" "lambda_permission" {
  # This uses your default target region
  statement_id  = "AllowExecutionFromSNS"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.my_function.function_name
  principal     = "sns.amazonaws.com"
  source_arn    = "arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged"
}

resource "aws_sns_topic_subscription" "aws_ip_change_sns_subscription" {
  # This needs to use the same region as the SNS topic
  provider  = aws.us_east_1
  topic_arn = "arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged"
  protocol  = "lambda"
  endpoint  = aws_lambda_function.my_function.arn
}
Your topic_arn is hardcoded to region us-east-1:
arn:aws:sns:us-east-1:806199016981:AmazonIpSpaceChanged
So when you have AWS_DEFAULT_REGION or a similar configuration pointing to another region, your code will fail.
That's the reason the code runs fine once you specify the region.
To avoid hardcoding the region, you can do this (note that the account ID 806199016981 must stay literal here, because the topic lives in an AWS-owned account, not your own, so substituting your caller identity would produce the wrong ARN):

variable "region" {
  type    = "string"
  default = "us-east-1"
}

resource "aws_sns_topic_subscription" "aws_ip_change_sns_subscription" {
  topic_arn = "arn:aws:sns:${var.region}:806199016981:AmazonIpSpaceChanged"
  protocol  = "lambda"
  endpoint  = "${aws_lambda_function.test_sg_lambda_function.arn}"
}

With that, you should be more flexible about the region you run it in.
With that, you should be fine and more flexible to run it in other region and other aws account as well.
I have an AWS Lambda function that I created via Apex. I've also created an SNS topic and a subscription through Terraform.
My topic is: arn:aws:sns:ap-southeast-1:178284945954:fetch_realm_auctions
I have a subscription: arn:aws:sns:ap-southeast-1:178284945954:fetch_realm_auctions:2da1d182-946d-4afd-91cb-1ed3453c5d86 with the lambda protocol, and the endpoint is: arn:aws:lambda:ap-southeast-1:178284945954:function:wowauctions_get_auction_data
I've confirmed this is the correct function ARN. Everything seems wired up correctly.
I trigger SNS manually:
aws sns publish \
  --topic-arn arn:aws:sns:ap-southeast-1:178284945954:fetch_realm_auctions \
  --message '{"endpoint": "https://us.api.battle.net", "realm": "spinebreaker"}'
It returns a message ID, but no invocation happens. Why?
I added an inline policy to allow the lambda to be invoked:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Stmt1474873816000",
      "Effect": "Allow",
      "Action": [
        "lambda:InvokeFunction"
      ],
      "Resource": [
        "arn:aws:lambda:ap-southeast-1:178284945954:function:wowauctions_get_auction_data"
      ]
    }
  ]
}
And it's now working.
The SNS topic needs to have the permission to invoke the Lambda.
Here is an example how you can express that in Terraform:
# Assumption: both SNS topic and Lambda are deployed in the same region
# resource "aws_sns_topic" "instance" { ... }
# resource "aws_lambda_function" "instance" { ... }

# Step 1: Allow the SNS topic to invoke the Lambda
resource "aws_lambda_permission" "allow_invocation_from_sns" {
  statement_id  = "AllowExecutionFromSNS"
  action        = "lambda:InvokeFunction"
  function_name = "${aws_lambda_function.instance.function_name}"
  principal     = "sns.amazonaws.com"
  source_arn    = "${aws_sns_topic.instance.arn}"
}

# Step 2: Subscribe the Lambda to the SNS topic
resource "aws_sns_topic_subscription" "instance" {
  topic_arn = "${aws_sns_topic.instance.arn}"
  protocol  = "lambda"
  endpoint  = "${aws_lambda_function.instance.arn}"
}
Some general tips for troubleshooting this problem (a Lambda not being fired):
Does my message arrive at the Lambda? -- Subscribe your email address to the SNS topic. If you get emails, you will know when messages arrive at the topic.
Is the Lambda subscribed to the topic? -- Check in the AWS console (under SNS -> Topic) whether the subscription is correct (the endpoint must exactly match the ARN of the Lambda).
Once you have confirmed these basic checks and you still see no invocations, it has to be a permission error. When you open the Lambda in the AWS console, you should see SNS listed as a trigger; for comparison, if the permission is missing, SNS will not be shown there.
If you are not using an automated deployment (e.g., with CloudFormation or Terraform), you can also manually add the missing permission:
Choose SNS under Add triggers (you may need to scroll down in the list to see it)
In Configure triggers, select the SNS topic
Click Add and save the Lambda
For me, the problem was that I specified the SourceAccount parameter inside AWS::Lambda::Permission in my CloudFormation template, and the documentation states the following:
Do not use the --source-account parameter to add a source account to the Lambda policy when adding the policy. Source account is not supported for Amazon SNS event sources and will result in access being denied. This has no security impact as the source account is included in the source ARN.
As soon as I removed SourceAccount, everything worked fine.
As Robo mentioned in the comments, adding a Principal-based permission is the simplest way of doing this:
"FooFunctionPermission": {
  "Type": "AWS::Lambda::Permission",
  "Properties": {
    "Action": "lambda:InvokeFunction",
    "FunctionName": { "Ref": "FooFunction" },
    "Principal": "sns.amazonaws.com"
  }
}
Had the same issue:
1) Created and deployed a simple Lambda
2) Created an AWS SNS topic manually from the Java SDK
3) Created the SNS subscription from the Java SDK (a subscription between the SNS topic and the Lambda)
Then I had a problem: when I pushed a message to the topic from the console, it was not intercepted by the Lambda. What's more, the SNS trigger was not even registered in the Lambda.
So I fixed this simply by using this command:
https://docs.aws.amazon.com/cli/latest/reference/lambda/add-permission.html
After running aws lambda add-permission ......, everything was picked up and working fine.
This post helped me get further, but there is a missing piece: Terraform will create the wrong subscription unless you drop $LATEST from the function ARN.
resource "aws_sns_topic" "cloudwatch_notifications" {
  name = "aws-${var.service_name}-${var.stage}-alarm"
}

data "aws_lambda_function" "cloudwatch_lambda" {
  function_name = "sls-${var.service_name}-${var.stage}-cloudwatch-alarms"
}

resource "aws_lambda_permission" "with_sns" {
  statement_id  = "AllowExecutionFromSNS"
  action        = "lambda:InvokeFunction"
  function_name = "${replace(data.aws_lambda_function.cloudwatch_lambda.arn, ":$LATEST", "")}"
  principal     = "sns.amazonaws.com"
  source_arn    = "${aws_sns_topic.cloudwatch_notifications.arn}"
}

resource "aws_sns_topic_subscription" "cloudwatch_subscription" {
  topic_arn = "${aws_sns_topic.cloudwatch_notifications.arn}"
  protocol  = "lambda"
  endpoint  = "${replace(data.aws_lambda_function.cloudwatch_lambda.arn, ":$LATEST", "")}"
}
This is a specific answer to this question; I have removed my other answer elsewhere!
For Terraform users, see also here:
https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/lambda_permission
which shows use of the 'aws_lambda_permission' resource; SNS is covered in one of the examples, copied here:
resource "aws_lambda_permission" "with_sns" {
  statement_id  = "AllowExecutionFromSNS"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.func.function_name
  principal     = "sns.amazonaws.com"
  source_arn    = aws_sns_topic.default.arn
}