Objective
I'd like to pass event data from Amazon EventBridge directly into an AWS Fargate task. However, it doesn't seem like this is currently possible.
Workaround
As a workaround, I've inserted an extra resource between EventBridge and AWS Fargate. AWS Step Functions allows you to specify ContainerOverrides, whose Environment property lets you set environment variables on the Fargate task from the EventBridge event.
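For reference, the workaround looks roughly like the sketch below (Terraform with an embedded Amazon States Language definition; the state machine name, role, ARNs and variable names are placeholders, and the EventBridge rule targets the state machine rather than ECS directly):
resource "aws_sfn_state_machine" "run_fargate_task" {
  name     = "run-fargate-task"                 # placeholder name
  role_arn = aws_iam_role.sfn_run_task_role.arn # placeholder role with ecs:RunTask and iam:PassRole

  definition = <<EOF
{
  "StartAt": "RunFargateTask",
  "States": {
    "RunFargateTask": {
      "Type": "Task",
      "Resource": "arn:aws:states:::ecs:runTask.sync",
      "Parameters": {
        "LaunchType": "FARGATE",
        "Cluster": "arn:aws:ecs:us-east-1:123456789012:cluster/myecscluster",
        "TaskDefinition": "arn:aws:ecs:us-east-1:123456789012:task-definition/mytaskdefinition",
        "NetworkConfiguration": {
          "AwsvpcConfiguration": {
            "Subnets": ["subnet-1"],
            "SecurityGroups": ["sg-group-id"]
          }
        },
        "Overrides": {
          "ContainerOverrides": [
            {
              "Name": "containername",
              "Environment": [
                { "Name": "S3_BUCKET_NAME", "Value.$": "$.detail.bucket.name" }
              ]
            }
          ]
        }
      },
      "End": true
    }
  }
}
EOF
}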
Unfortunately, this workaround increases the solution complexity and cost unnecessarily.
Question: Is there a way to pass event data from EventBridge directly into an AWS Fargate (ECS) task that I am simply unaware of?
To pass data from an EventBridge event to an ECS task (e.g., with launch type FARGATE), you can use input transformation. For example, let's say we have an S3 bucket configured to send all event notifications to EventBridge, and we have an EventBridge rule that looks like this:
{
  "detail": {
    "bucket": {
      "name": ["mybucket"]
    }
  },
  "detail-type": ["Object Created"],
  "source": ["aws.s3"]
}
Now let's say we would like to pass the bucket name, object key, and object version id to our ECS task running on Fargate. You can create an aws_cloudwatch_event_target resource in Terraform with the input transformer below.
resource "aws_cloudwatch_event_target" "EventBridgeECSTaskTarget"{
target_id = "EventBridgeECSTaskTarget"
rule = aws_cloudwatch_event_rule.myeventbridgerule.name
arn = "arn:aws:ecs:us-east-1:123456789012:cluster/myecscluster"
role_arn = aws_iam_role.EventBridgeRuleInvokeECSTask.arn
ecs_target {
task_count = 1
task_definition_arn = "arn:aws:ecs:us-east-1:123456789012:task-definition/mytaskdefinition"
launch_type = "FARGATE"
network_configuration {
subnets = ["subnet-1","subnet-2","subnet-3"]
security_groups = ["sg-group-id"]
}
}
input_transformer {
input_paths = {
bucketname = "$.detail.bucket.name",
objectkey = "$.detail.object.key",
objectversionid = "$.detail.object.version-id",
}
input_template = <<EOF
{
"containerOverrides": [
{
"name": "containername",
"environment" : [
{
"name" : "S3_BUCKET_NAME",
"value" : <bucketname>
},
{
"name" : "S3_OBJECT_KEY",
"value" : <objectkey>
},
{
"name" : "S3_OBJ_VERSION_ID",
"value": <objectversionid>
}
]
}
]
}
EOF
}
}
Once your ECS task is running, you can read these environment variables to see which bucket the object was created in, what the object key and version are, and then do a GetObject.
For example, in Go it can be done as follows (snippet only; imports and client setup are omitted, but you get the idea):
filename := aws.String(os.Getenv("S3_OBJECT_KEY"))
bucketname := aws.String(os.Getenv("S3_BUCKET_NAME"))
versionId := aws.String(os.Getenv("S3_OBJ_VERSION_ID"))

// You can print and verify the values in CloudWatch

// Prepare the S3 GetObjectInput
s3goi := &s3.GetObjectInput{
    Bucket:    bucketname,
    Key:       filename,
    VersionId: versionId,
}

s3goo, err := s3svc.GetObject(ctx, s3goi)
if err != nil {
    log.Fatalf("Error retrieving object: %v", err)
}

b, err := ioutil.ReadAll(s3goo.Body)
if err != nil {
    log.Fatalf("Error reading file: %v", err)
}
There's currently no direct invocation between EventBridge and Fargate. You can find the list of supported targets at https://docs.aws.amazon.com/eventbridge/latest/userguide/eventbridge-targets.html
The workaround is to use an intermediary that supports calling Fargate (like Step Functions) or to send the message to compute (like Lambda [the irony]) before sending it downstream.
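For example, a minimal sketch of the Lambda-intermediary route in Terraform (assuming a rule named myeventbridgerule as above and a hypothetical aws_lambda_function.run_fargate_task that calls ecs:RunTask with the desired overrides):
resource "aws_cloudwatch_event_target" "invoke_lambda" {
  rule = aws_cloudwatch_event_rule.myeventbridgerule.name
  arn  = aws_lambda_function.run_fargate_task.arn # hypothetical Lambda that starts the Fargate task
}

resource "aws_lambda_permission" "allow_eventbridge" {
  statement_id  = "AllowExecutionFromEventBridge"
  action        = "lambda:InvokeFunction"
  function_name = aws_lambda_function.run_fargate_task.function_name
  principal     = "events.amazonaws.com"
  source_arn    = aws_cloudwatch_event_rule.myeventbridgerule.arn
}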
Related
I am trying to trigger the CodePipeline on upload to S3 using Terraform.
Use case: Terraform code for various resources will be pushed as a zip file to the source bucket, which will trigger a pipeline. This pipeline will run terraform apply on the zip file. So, in order to run the pipeline, I am setting up a trigger.
Here is what I have done.
Created the source S3 bucket
Created the CodePipeline
Created a CloudWatch Events rule for S3 events from CloudTrail
Created the CloudTrail trail manually and added a data event to log write events on the source bucket. All previous steps were done using Terraform.
After doing all this, my pipeline is still not triggered when a new file is uploaded to the bucket.
I was reading the docs and they had a particular statement about sending trail events to an EventBridge rule, which I think is the cause, but I can't find the option to add it through the console.
AWS CloudTrail is a service that logs and filters events on your Amazon S3 source bucket. The trail sends the filtered source changes to the Amazon CloudWatch Events rule. The Amazon CloudWatch Events rule detects the source change and then starts your pipeline.
https://docs.aws.amazon.com/codepipeline/latest/userguide/create-cloudtrail-S3-source.html
Here is my EventBridge rule:
resource "aws_cloudwatch_event_rule" "xxxx-pipeline-event" {
name = "xxxx-ci-cd-pipeline-event"
description = "Cloud watch event when zip is uploaded to s3"
event_pattern = <<EOF
{
"source": ["aws.s3"],
"detail-type": ["AWS API Call via CloudTrail"],
"detail": {
"eventSource": ["s3.amazonaws.com"],
"eventName": ["PutObject", "CompleteMultipartUpload", "CopyObject"],
"requestParameters": {
"bucketName": ["xxxxx-ci-cd-zip"],
"key": ["app.zip"]
}
}
}
EOF
}
resource "aws_cloudwatch_event_target" "code-pipeline" {
rule = aws_cloudwatch_event_rule.XXXX-pipeline-event.name
target_id = "SendToCodePipeline"
arn = aws_codepipeline.cicd_pipeline.arn
role_arn = aws_iam_role.pipeline_role.arn
}
EventBridge role permissions Terraform code:
data "aws_iam_policy_document" "event_bridge_role" {
statement {
actions = ["sts:AssumeRole"]
effect = "Allow"
principals {
type = "Service"
identifiers = ["events.amazonaws.com"]
}
}
}
resource "aws_iam_role" "pipeline_event_role" {
name = "xxxxx-pipeline-event-bridge-role"
assume_role_policy = data.aws_iam_policy_document.event_bridge_role.json
}
data "aws_iam_policy_document" "pipeline_event_role_policy" {
statement {
sid = ""
actions = ["codepipeline:StartPipelineExecution"]
resources = ["${aws_codepipeline.cicd_pipeline.arn}"]
effect = "Allow"
}
}
resource "aws_iam_policy" "pipeline_event_role_policy" {
name = "xxxx-codepipeline-event-role-policy"
policy = data.aws_iam_policy_document.pipeline_event_role_policy.json
}
resource "aws_iam_role_policy_attachment" "pipeline_event_role_attach_policy" {
role = aws_iam_role.pipeline_event_role.name
policy_arn = aws_iam_policy.pipeline_event_role_policy.arn
}
The problem was with the CloudTrail filter. The filter was set for the bucket and write actions only.
I had to modify the filter by adding a prefix to it, because my EventBridge rule is looking for my-app.zip, so it was not triggered when I used only a bucket-level filter:
bucket/prefix and write action
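In Terraform, the corrected data event selector looks roughly like this (a sketch; the trail and log-bucket names are placeholders, while the source bucket and key match the rule above):
resource "aws_cloudtrail" "pipeline_trail" {
  name           = "xxxx-pipeline-trail"       # placeholder trail name
  s3_bucket_name = aws_s3_bucket.trail_logs.id # placeholder log bucket

  event_selector {
    read_write_type           = "WriteOnly"
    include_management_events = false

    data_resource {
      type = "AWS::S3::Object"
      # object-level prefix, so only writes of app.zip in the source bucket are logged
      values = ["arn:aws:s3:::xxxxx-ci-cd-zip/app.zip"]
    }
  }
}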
Docs: https://docs.aws.amazon.com/awscloudtrail/latest/userguide/logging-data-events-with-cloudtrail.html
I am trying to build a simple EventBridge -> SNS -> AWS Chatbot pipeline to notify a Slack channel of any ECS deployment events. Below is my code.
resource "aws_cloudwatch_event_rule" "ecs_deployment" {
name = "${var.namespace}-${var.environment}-infra-ecs-deployment"
description = "This rule sends notification on the all app ECS Fargate deployments with respect to the environment."
event_pattern = <<EOF
{
"source": ["aws.ecs"],
"detail-type": ["ECS Deployment State Change"],
"detail": {
"clusterArn": [
{
"prefix": "arn:aws:ecs:<REGION>:<ACCOUNT>:cluster/${var.namespace}-${var.environment}-"
}
]
}
}
EOF
tags = {
Environment = "${var.environment}"
Origin = "terraform"
}
}
resource "aws_cloudwatch_event_target" "ecs_deployment" {
rule = aws_cloudwatch_event_rule.ecs_deployment.name
target_id = "${var.namespace}-${var.environment}-infra-ecs-deployment"
arn = aws_sns_topic.ecs_deployment.arn
}
resource "aws_sns_topic" "ecs_deployment" {
name = "${var.namespace}-${var.environment}-infra-ecs-deployment"
display_name = "${var.namespace} ${var.environment}"
}
resource "aws_sns_topic_policy" "default" {
arn = aws_sns_topic.ecs_deployment.arn
policy = data.aws_iam_policy_document.sns_topic_policy.json
}
data "aws_iam_policy_document" "sns_topic_policy" {
statement {
effect = "Allow"
actions = ["SNS:Publish"]
principals {
type = "Service"
identifiers = ["events.amazonaws.com"]
}
resources = [aws_sns_topic.ecs_deployment.arn]
}
}
Based on the above code, Terraform will create an EventBridge rule with an SNS target. From there, I create the AWS Chatbot in the console and subscribe it to the SNS topic.
The problem is that when I remove the detail block, it works. But what I want is to filter the events so that they only come from clusters with the mentioned prefix.
Is this possible? Or did I do it the wrong way?
Any help is appreciated.
I'm trying to create, via Terraform, a Lambda that is triggered by Kinesis and whose on-failure destination is an AWS SQS queue.
I created the Lambda and configured the source and destination.
When I send a message to the Kinesis stream, the Lambda is triggered but no messages are sent to the DLQ.
What am I missing?
My Lambda event source mapping:
resource "aws_lambda_event_source_mapping" "csp_management_service_integration_stream_mapping" {
event_source_arn = local.kinesis_csp_management_service_integration_stream_arn
function_name = module.csp_management_service_integration_lambda.lambda_arn
batch_size = var.shared_kinesis_configuration.batch_size
bisect_batch_on_function_error = var.shared_kinesis_configuration.bisect_batch_on_function_error
starting_position = var.shared_kinesis_configuration.starting_position
maximum_retry_attempts = var.shared_kinesis_configuration.maximum_retry_attempts
maximum_record_age_in_seconds = var.shared_kinesis_configuration.maximum_record_age_in_seconds
function_response_types = var.shared_kinesis_configuration.function_response_types
destination_config {
on_failure {
destination_arn = local.shared_default_sqs_error_handling_dlq_arn
}
}
}
resource "aws_iam_policy" "shared_deadletter_sqs_queue_policy" {
name = "shared-deadletter-sqs-queue-policy"
path = "/"
policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Action = [
"sqs:SendMessage",
]
Effect = "Allow"
Resource = [
local.shared_default_sqs_error_handling_dlq_arn
]
},
]
})
}
You should take a look at the Lambda destination delivery failure metric in CloudWatch to see if you have a permission error.
I think you are facing a permission issue. Try attaching a policy to your Lambda function's execution role that grants access to the SQS DLQ.
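For instance, a minimal sketch that attaches the policy defined above to the execution role (assuming the Lambda module exposes its role name through a hypothetical lambda_role_name output):
resource "aws_iam_role_policy_attachment" "lambda_dlq_access" {
  role       = module.csp_management_service_integration_lambda.lambda_role_name # hypothetical output
  policy_arn = aws_iam_policy.shared_deadletter_sqs_queue_policy.arn
}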
Is your DLQ encrypted with KMS? You will need to provide permissions for the KMS key too, in addition to the SQS permissions.
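If that is the case, the execution role would also need a statement along these lines (a sketch; local.shared_dlq_kms_key_arn is a placeholder for the key's ARN):
data "aws_iam_policy_document" "dlq_kms_access" {
  statement {
    effect  = "Allow"
    actions = ["kms:GenerateDataKey", "kms:Decrypt"]
    # placeholder local pointing at the CMK used by the DLQ
    resources = [local.shared_dlq_kms_key_arn]
  }
}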
How is Lambda reporting failure?
I had quite a hard time setting up an automation with Beanstalk and CodePipeline...
I finally got it running; the main issue was the S3 CloudWatch event that triggers the start of the CodePipeline. I missed the CloudTrail part, which is necessary, and I couldn't find that in any documentation.
So the current setup is:
S3 file gets uploaded -> a CloudWatch Event triggers the CodePipeline -> CodePipeline deploys to the Elastic Beanstalk environment.
As I said, to get the CloudWatch Event trigger you need a CloudTrail trail like:
resource "aws_cloudtrail" "example" {
# ... other configuration ...
name = "codepipeline-source-trail" #"codepipeline-${var.project_name}-trail"
is_multi_region_trail = true
s3_bucket_name = "codepipeline-cloudtrail-placeholder-bucket-eu-west-1"
event_selector {
read_write_type = "WriteOnly"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_name}/file.zip"]
}
}
}
But this only creates a new trail. The problem is that AWS only allows 5 trails per region. In the AWS console you can add multiple data events to one trail, but I couldn't manage to do this in Terraform. I tried to use the same name, but this just raises an error:
"Error creating CloudTrail: TrailAlreadyExistsException: Trail codepipeline-source-trail already exists for customer: XXXX"
I tried my best to explain my problem. Not sure if it is understandable.
In a nutshell: I want to add an S3 data event to an existing CloudTrail trail with Terraform.
Thx for help,
Daniel
As I said, to get the CloudWatch Event trigger you need a CloudTrail trail like:
You do not need multiple CloudTrail trails to invoke a CloudWatch Event. You can create service-specific rules as well.
Create a CloudWatch Events rule for an Amazon S3 source (console)
From there, the CloudWatch Events rule invokes CodePipeline as a target. Let's say you created this event rule:
{
  "source": [
    "aws.s3"
  ],
  "detail-type": [
    "AWS API Call via CloudTrail"
  ],
  "detail": {
    "eventSource": [
      "s3.amazonaws.com"
    ],
    "eventName": [
      "PutObject"
    ]
  }
}
You add CodePipeline as a target for this rule, and eventually CodePipeline deploys to the Elastic Beanstalk environment.
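In Terraform, the rule-to-pipeline wiring would look roughly like this (a sketch; the resource names and the role, which needs codepipeline:StartPipelineExecution, are placeholders):
resource "aws_cloudwatch_event_target" "start_pipeline" {
  rule      = aws_cloudwatch_event_rule.s3_source_change.name # placeholder rule with the pattern above
  target_id = "SendToCodePipeline"
  arn       = aws_codepipeline.deploy_to_beanstalk.arn # placeholder pipeline
  role_arn  = aws_iam_role.events_start_pipeline.arn   # placeholder role allowed to start the pipeline
}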
Have you tried adding multiple data_resource blocks to your current trail instead of adding a new trail with the same name?
resource "aws_cloudtrail" "example" {
# ... other configuration ...
name = "codepipeline-source-trail" #"codepipeline-${var.project_name}-trail"
is_multi_region_trail = true
s3_bucket_name = "codepipeline-cloudtrail-placeholder-bucket-eu-west-1"
event_selector {
read_write_type = "WriteOnly"
include_management_events = true
data_resource {
type = "AWS::S3::Object"
values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_A}/file.zip"]
}
data_resource {
type = "AWS::S3::Object"
values = ["${data.aws_s3_bucket.bamboo-deploy-bucket.arn}/${var.project_B}/fileB.zip"]
}
}
}
You should be able to add up to 250 data resources (across all event selectors in a trail), and up to 5 event selectors to your current trail (CloudTrail quota limits)
I'm trying to create a stack with CloudFormation. The stack needs to take some data files from a central S3 bucket and copy them to its own "local" bucket.
I've written a lambda function to do this, and it works when I run it in the Lambda console with a test event (the test event uses the real central repository and successfully copies the file to a specified repo).
My current CloudFormation script does the following things:
Creates the "local" S3 bucket
Creates a role that the Lambda function can use to access the buckets
Defines the Lambda function to move the specified file to the "local" bucket
Defines some Custom resources to invoke the Lambda function.
It's at step 4 where it starts to go wrong: the CloudFormation execution seems to freeze here (CREATE_IN_PROGRESS). Also, when I try to delete the stack, it just gets stuck on DELETE_IN_PROGRESS instead.
Here's how I'm invoking the Lambda function in the CloudFormation script:
"DataSync": {
"Type": "Custom::S3DataSync",
"Properties": {
"ServiceToken": { "Fn::GetAtt" : [ "S3DataSync", "Arn" ] },
"InputFile": "data/provided-as-ip-v6.json",
"OutputFile": "data/data.json"
}
},
"KeySync1": {
"Type": "Custom::S3DataSync",
"Properties": {
"ServiceToken": { "Fn::GetAtt" : [ "S3DataSync", "Arn" ] },
"InputFile": "keys/1/public_key.pem"
}
},
"KeySync2": {
"Type": "Custom::S3DataSync",
"Properties": {
"ServiceToken": { "Fn::GetAtt" : [ "S3DataSync", "Arn" ] },
"InputFile": "keys/2/public_key.pem"
}
}
And the Lambda function itself:
exports.handler = function(event, context) {
    var buckets = {};
    buckets.in = {
        "Bucket": "central-data-repository",
        "Key": "sandbox" + "/" + event.ResourceProperties.InputFile
    };
    buckets.out = {
        "Bucket": "sandbox-data",
        "Key": event.ResourceProperties.OutputFile || event.ResourceProperties.InputFile
    };

    var AWS = require('aws-sdk');
    var S3 = new AWS.S3();

    S3.getObject(buckets.in, function(err, data) {
        if (err) {
            console.log("Couldn't get file " + buckets.in.Key);
            context.fail("Error getting file: " + err);
        }
        else {
            buckets.out.Body = data.Body;
            S3.putObject(buckets.out, function(err, data) {
                if (err) {
                    console.log("Couldn't write to S3 bucket " + buckets.out.Bucket);
                    context.fail("Error writing file: " + err);
                }
                else {
                    console.log("Successfully copied " + buckets.in.Key + " to " + buckets.out.Bucket + " at " + buckets.out.Key);
                    context.succeed();
                }
            });
        }
    });
}
Your Custom Resource function needs to send signals back to CloudFormation to indicate completion, status, and any returned values. You will see CREATE_IN_PROGRESS as the status in CloudFormation until you notify it that your function is complete.
The generic way of signaling CloudFormation is to post a response to a pre-signed S3 URL. But there is a cfn-response module to make this easier in Lambda functions. Interestingly, the two examples provided for Lambda-backed Custom Resources use different methods:
Walkthrough: Refer to Resources in Another Stack - uses the cfn-response module
Walkthrough: Looking Up Amazon Machine Image IDs - uses pre-signed URLs.
Yup, I did the same thing. We need to upload (via a PUT request) the status of our request to the pre-signed URL, sending the status as SUCCESS.