AWS CAPABILITY_AUTO_EXPAND in the web console for CodePipeline with CloudFormation - amazon-web-services

I am trying to finish a CodePipeline that deploys with the CloudFormation service, and this error is generated. It should be noted that CloudFormation on its own works fine. The complete error is:
JobFailed
Requires capabilities: [CAPABILITY_AUTO_EXPAND] (Service: AmazonCloudFormation; Status Code: 400; Error Code: InsufficientCapabilitiesException; Request ID: 1a977102-f829-11e8-b5c6-f7cc8454c4d0)
The solution I have found is to add CAPABILITY_AUTO_EXPAND through the --capabilities parameter, but that only applies to the CLI, and my case is the web console.

Ran into the same problem; I could not find a way to do it through the console.
However, it works well with the CLI, and you can find detailed documentation on updating a pipeline here: https://docs.aws.amazon.com/cli/latest/reference/codepipeline/update-pipeline.html
The way I did it was (a minimal CLI sketch follows this list):
run get-pipeline to get the current pipeline structure
save the result as a JSON file
in the JSON file: remove the metadata section and add a Capabilities attribute with your value to the configuration section of the CloudFormation action
run update-pipeline with the --cli-input-json option pointing at that JSON file
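A rough sketch of that round trip, assuming the pipeline from the sample below is named SAMpipeline (use your own pipeline name); the edit in the middle is done by hand in any text editor:
aws codepipeline get-pipeline --name SAMpipeline > pipeline.json
# edit pipeline.json: delete the top-level "metadata" section and add
# "Capabilities": "CAPABILITY_NAMED_IAM,CAPABILITY_AUTO_EXPAND"
# to the "configuration" block of each CloudFormation deploy action
aws codepipeline update-pipeline --cli-input-json file://pipeline.json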

Sample [Note the changes marked with arrows]:
{
"pipeline": {
"roleArn": "arn:aws:iam::123456789234:role/service-role/AWSCodePipelineServiceRole-us-east-1-SAMpipeline",
"stages": [
{
"name": "Source",
"actions": [
{
"inputArtifacts": [],
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"version": "1",
"provider": "CodeCommit"
},
"outputArtifacts": [
{
"name": "SourceArtifact"
}
],
"configuration": {
"PollForSourceChanges": "false",
"BranchName": "master",
"RepositoryName": "CFNrepo"
},
"runOrder": 1
}
]
},
{
"name": "Build",
"actions": [
{
"inputArtifacts": [
{
"name": "SourceArtifact"
}
],
"name": "Build",
"actionTypeId": {
"category": "Build",
"owner": "AWS",
"version": "1",
"provider": "CodeBuild"
},
"outputArtifacts": [
{
"name": "BuildArtifact"
}
],
"configuration": {
"ProjectName": "SAMproject"
},
"runOrder": 1
}
]
},
{
"name": "Deploy",
"actions": [
{
"inputArtifacts": [
{
"name": "BuildArtifact"
}
],
"name": "DeployStack",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CloudFormation"
},
"outputArtifacts": [],
"configuration": {
"StackName": "s5765722591-cp",
"ActionMode": "CREATE_UPDATE",
"RoleArn": "arn:aws:iam::298320596430:role/CloudFormationFullAccess",
"Capabilities": "CAPABILITY_NAMED_IAM,CAPABILITY_AUTO_EXPAND", <--------------
"TemplatePath": "BuildArtifact::template.yaml"
},
"runOrder": 1
},
{
"inputArtifacts": [
{
"name": "BuildArtifact"
}
],
"name": "DeployStack2",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CloudFormation"
},
"outputArtifacts": [],
"configuration": {
"StackName": "s5765722591-cp2",
"ActionMode": "CREATE_UPDATE",
"RoleArn": "arn:aws:iam::123456789234:role/CloudFormationFullAccess",
"Capabilities": "CAPABILITY_NAMED_IAM,CAPABILITY_AUTO_EXPAND", <-----------
"TemplatePath": "BuildArtifact::template.yaml"
},
"runOrder": 1
}
]
}
],
"artifactStore": {
"type": "S3",
"location": "codepipeline-us-east-1-123456789234"
},
"name": "SAMpipeline",
"version": 5
}
}

Related

AWS CodePipeline "The deployment failed because the AppSpec file that specifies the deployment configuration is missing ...."

I'm trying to set up a deployment pipeline using CodeCommit, ECR and ECS. My pipeline gets through the source stage and builds the image correctly, but it fails in the deployment phase:
The deployment failed because the AppSpec file that specifies the deployment configuration is missing or has an invalid configuration. The input AppSpec file is a not well-formed yaml. The template cannot be parsed.
appspec.yaml is in the output of my build phase (stored inside a zip file in S3)
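For reference, this is roughly how the artifact contents can be checked locally (the bucket is my artifact store from the pipeline below; the object key is a placeholder, and the YAML check assumes PyYAML is installed):
aws s3 cp s3://codepipeline-us-east-2-276644567431/<build-artifact-object-key>.zip artifact.zip
unzip -o artifact.zip appspec.yaml
python3 -c "import yaml; yaml.safe_load(open('appspec.yaml'))"   # raises an error if the YAML is malformed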
The following is my code pipeline:
{
"pipeline": {
"name": "dashboardpipeline",
"roleArn": "arn:aws:iam::410208438878:role/service-role/AWSCodePipelineServiceRole-us-east-2-DashBoardPipeline",
"artifactStore": {
"type": "S3",
"location": "codepipeline-us-east-2-276644567431"
},
"stages": [
{
"name": "Source",
"actions": [
{
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"provider": "CodeCommit",
"version": "1"
},
"runOrder": 2,
"configuration": {
"BranchName": "master",
"OutputArtifactFormat": "CODE_ZIP",
"PollForSourceChanges": "false",
"RepositoryName": "provisions_dashboard"
},
"outputArtifacts": [
{
"name": "SourceArtifact"
}
],
"inputArtifacts": [],
"region": "us-east-2",
"namespace": "SourceVariables"
},
{
"name": "Image",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"provider": "ECR",
"version": "1"
},
"runOrder": 2,
"configuration": {
"ImageTag": "latest",
"RepositoryName": "dashboard-web-app"
},
"outputArtifacts": [
{
"name": "MyImage"
}
],
"inputArtifacts": [],
"region": "us-east-2"
}
]
},
{
"name": "Build",
"actions": [
{
"name": "Build",
"actionTypeId": {
"category": "Build",
"owner": "AWS",
"provider": "CodeBuild",
"version": "1"
},
"runOrder": 1,
"configuration": {
"ProjectName": "DashboardApplicationBuild"
},
"outputArtifacts": [
{
"name": "BuildArtifact"
}
],
"inputArtifacts": [
{
"name": "SourceArtifact"
}
],
"region": "us-east-2",
"namespace": "BuildVariables"
}
]
},
{
"name": "Deploy",
"actions": [
{
"name": "Deploy",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CodeDeployToECS",
"version": "1"
},
"runOrder": 1,
"configuration": {
"AppSpecTemplateArtifact": "BuildArtifact",
"AppSpecTemplatePath": "appspec.yaml",
"ApplicationName": "dashboarddeploymentapp",
"DeploymentGroupName": "dashboardappdeploygr",
"Image1ArtifactName": "MyImage",
"Image1ContainerName": "IMAGE_URI",
"TaskDefinitionTemplateArtifact": "BuildArtifact",
"TaskDefinitionTemplatePath": "taskdef.json"
},
"outputArtifacts": [],
"inputArtifacts": [
{
"name": "BuildArtifact"
},
{
"name": "MyImage"
}
],
"region": "us-east-2"
}
]
}
],
"version": 18
},
"metadata": {
"pipelineArn": "arn:aws:codepipeline:us-east-2:410208438878:dashboardpipeline",
"created": "2022-03-14T11:52:19.525000-03:00",
"updated": "2022-03-18T11:34:14.217000-03:00"
}
}

CodePipeline ARN not available through Terraform but available in metadata using the CLI

Executing the AWS CLI command below:
aws codepipeline get-pipeline --name pipeline_name
produces the following output:
{
"pipeline": {
"name": "xxxxx",,
"roleArn": "xxxxx",,
"artifactStore": {
"type": "S3",
"location": "xxxxx",
},
"stages": [
{
"name": "Source",
"actions": [
{
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"provider": "CodeCommit",
"version": "1"
},
"runOrder": 1,
"configuration": {
"BranchName": "main",
"PollForSourceChanges": "true",
"RepositoryName": "xxxxx",
},
"outputArtifacts": [
{
"name": "SourceOutput"
}
],
"inputArtifacts": []
}
]
},
{
"name": "Build",
"actions": [
{
"name": "Build",
"actionTypeId": {
"category": "Build",
"owner": "AWS",
"provider": "CodeBuild",
"version": "1"
},
"runOrder": 1,
"configuration": {
"ProjectName": "xxxxx",
},
"outputArtifacts": [
{
"name": "BuildOutput"
}
],
"inputArtifacts": [
{
"name": "SourceOutput"
}
]
}
]
},
{
"name": "Deploy",
"actions": [
{
"name": "Deploy",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CodeDeployToECS",
"version": "1"
},
"runOrder": 1,
"configuration": {
"AppSpecTemplateArtifact": "BuildOutput",
"AppSpecTemplatePath": "appspec.yml",
"ApplicationName": "xxxxx",
"DeploymentGroupName": "xxxxx",
"Image1ArtifactName": "BuildOutput",
"Image1ContainerName": "IMAGE1_NAME",
"TaskDefinitionTemplateArtifact": "BuildOutput",
"TaskDefinitionTemplatePath": "taskdef.json"
},
"outputArtifacts": [],
"inputArtifacts": [
{
"name": "BuildOutput"
}
]
}
]
}
],
"version": 3
},
"metadata": {
"pipelineArn": "xxxxx",
"created": "xxxxx",
"updated": "xxxxx",
}
}
We can see that the metadata field has "pipelineArn": "xxxxx". But this ARN is not available in the console, nor have I been able to find any Terraform data source for it.
Is it possible to retrieve the CodePipeline ARN in Terraform?
Also, to clarify, I need this for "aws_codestarnotifications_notification_rule", where a resource ARN is required.
The ARN for the CodePipeline is available in the AWS console, but it is a bit hard to find. If you go to Pipelines -> choose any pipeline -> Settings, it is on the General tab, under Pipeline ARN (1).
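The same ARN can also be read with the AWS CLI (a minimal sketch, reusing the placeholder pipeline name from the question):
aws codepipeline get-pipeline --name pipeline_name --query 'metadata.pipelineArn' --output text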
As for getting the CodePipeline ARN through Terraform, that is possible by reading the attribute with the same name after the CodePipeline resource is created (2). So in case the CodePipeline was not created with Terraform, you can hard-code the value. If it was, you can simply reference the output attribute in the "aws_codestarnotifications_notification_rule" resource:
resource "aws_codestarnotifications_notification_rule" "this" {
detail_type = ""
event_type_ids = [""]
name = ""
resource = aws_codepipeline.this.arn
target {
address = ...
}
}
This code snippet assumes that you will fill out the other details and that there is a Terraform code block which creates a CodePipeline resource named this, i.e., you would have to have a code block similar to the following:
resource "aws_codepipeline" "this" {
...
}
(1) https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-view-console.html#pipelines-settings-console
(2) https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/codepipeline#arn

CodePipeline, CodeBuild, CloudFormation, Lambda: build multiple lambdas in a single build and assign their code correctly

I have a CD pipeline built with AWS CDK and CodePipeline. It compiles the code for 5 lambdas, each of which is pushed to a secondary artifact.
The S3 location of each artifact is assigned to a parameter of a CloudFormation template, and those parameters feed the Code properties of the lambdas.
This is working fine!
My problem is that I cannot add a sixth secondary artifact to CodeBuild (hard limit). I also cannot combine all of my lambda code into a single artifact because (as far as I can see) CodePipeline is not smart enough to look inside an artifact when assigning Code to a lambda in CloudFormation.
What is the recommendation for deploying multiple lambdas from a CodeBuild/CodePipeline? How have other people solved this issue?
EDIT: CodePipeline CloudFormation templates below.
Note: I have only included 2 lambdas as an example.
Lambda application template:
{
"AWSTemplateFormatVersion": "2010-09-09",
"Resources": {
"Lambda1": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"S3Bucket": {
"Ref": "lambda1SourceBucketNameParameter3EE73025"
},
"S3Key": {
"Ref": "lambda1SourceObjectKeyParameter326E8288"
}
}
}
},
"Lambda2": {
"Type": "AWS::Lambda::Function",
"Properties": {
"Code": {
"S3Bucket": {
"Ref": "lambda2SourceBucketNameParameter3EE73025"
},
"S3Key": {
"Ref": "lambda2SourceObjectKeyParameter326E8288"
}
}
}
}
},
"Parameters": {
"lambda1SourceBucketNameParameter3EE73025": {
"Type": "String"
},
"lambda1SourceObjectKeyParameter326E8288": {
"Type": "String"
},
"lambda2SourceBucketNameParameterA0D2319B": {
"Type": "String"
},
"lambda2SourceObjectKeyParameterF3B3F2C2": {
"Type": "String"
}
}
}
Code Pipeline template:
{
"Resources": {
"Pipeline40CE5EDC": {
"Type": "AWS::CodePipeline::Pipeline",
"Properties": {
"Stages": [
{
"Actions": [
{
"ActionTypeId": {
"Provider": "CodeBuild"
},
"Name": "CDK_Build",
"OutputArtifacts": [
{
"Name": "CdkbuildOutput"
}
],
"RunOrder": 1
},
{
"ActionTypeId": {
"Provider": "CodeBuild"
},
"Name": "Lambda_Build",
"OutputArtifacts": [
{ "Name": "CompiledLambdaCode1" },
{ "Name": "CompiledLambdaCode2" }
],
"RunOrder": 1
}
],
"Name": "Build"
},
{
"Actions": [
{
"ActionTypeId": {
"Provider": "CloudFormation"
},
"Configuration": {
"StackName": "DeployLambdas",
"ParameterOverrides": "{\"lambda2SourceBucketNameParameterA0D2319B\":{\"Fn::GetArtifactAtt\":[\"CompiledLambdaCode1\",\"BucketName\"]},\"lambda2SourceObjectKeyParameterF3B3F2C2\":{\"Fn::GetArtifactAtt\":[\"CompiledLambdaCode1\",\"ObjectKey\"]},\"lambda1SourceBucketNameParameter3EE73025\":{\"Fn::GetArtifactAtt\":[\"CompiledLambdaCode2\",\"BucketName\"]},\"lambda1SourceObjectKeyParameter326E8288\":{\"Fn::GetArtifactAtt\":[\"CompiledLambdaCode2\",\"ObjectKey\"]}}",
"ActionMode": "CREATE_UPDATE",
"TemplatePath": "CdkbuildOutput::CFTemplate.template.json"
},
"InputArtifacts": [
{ "Name": "CompiledLambdaCode1" },
{ "Name": "CompiledLambdaCode2" },
{ "Name": "CdkbuildOutput" }
],
"Name": "Deploy",
"RunOrder": 1
}
],
"Name": "Deploy"
}
],
"ArtifactStore": {
"EncryptionKey": "the key",
"Location": "the bucket",
"Type": "S3"
},
"Name": "Pipeline"
}
}
}
}
I reviewed the templates.
I don't see five inputs to a CodeBuild action, but I do see 2 inputs to a CloudFormation action (Deploy).
I assume your problem was a perceived limit of 5 inputs to the CloudFormation action. Is that assumption correct?
The limit for a CloudFormation action is actually 10 input artifacts. See "Action Type Constraints for Artifacts": https://docs.aws.amazon.com/codepipeline/latest/userguide/reference-pipeline-structure.html#reference-action-artifacts
So if you can use up to 10, will that suffice?
If not, I have other ideas that would take a lot longer to document.

How to use Data Pipeline to copy data from one DynamoDB table to another when both have on-demand capacity

I used to copy data from one DynamoDB table to another using a pipeline.json. It works when the source table has provisioned capacity, and it doesn't matter whether the destination is provisioned or on-demand. I want both of my tables set to on-demand capacity, but when I use the same template it doesn't work. Is there any way to do that, or is it still under development?
Here is my original functioning script:
{
"objects": [
{
"startAt": "FIRST_ACTIVATION_DATE_TIME",
"name": "DailySchedule",
"id": "DailySchedule",
"period": "1 day",
"type": "Schedule",
"occurrences": "1"
},
{
"id": "Default",
"name": "Default",
"scheduleType": "ONDEMAND",
"pipelineLogUri": "#{myS3LogsPath}",
"schedule": {
"ref": "DailySchedule"
},
"failureAndRerunMode": "CASCADE",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
},
{
"id": "DDBSourceTable",
"tableName": "#{myDDBSourceTableName}",
"name": "DDBSourceTable",
"type": "DynamoDBDataNode",
"readThroughputPercent": "#{myDDBReadThroughputRatio}"
},
{
"name": "S3TempLocation",
"id": "S3TempLocation",
"type": "S3DataNode",
"directoryPath": "#{myTempS3Folder}/#{format(#scheduledStartTime, 'YYYY-MM-dd-HH-mm-ss')}"
},
{
"id": "DDBDestinationTable",
"tableName": "#{myDDBDestinationTableName}",
"name": "DDBDestinationTable",
"type": "DynamoDBDataNode",
"writeThroughputPercent": "#{myDDBWriteThroughputRatio}"
},
{
"id": "EmrClusterForBackup",
"name": "EmrClusterForBackup",
"amiVersion": "3.8.0",
"masterInstanceType": "m3.xlarge",
"coreInstanceType": "m3.xlarge",
"coreInstanceCount": "1",
"region": "#{myDDBSourceRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "EmrClusterForLoad",
"name": "EmrClusterForLoad",
"amiVersion": "3.8.0",
"masterInstanceType": "m3.xlarge",
"coreInstanceType": "m3.xlarge",
"coreInstanceCount": "1",
"region": "#{myDDBDestinationRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "TableLoadActivity",
"name": "TableLoadActivity",
"runsOn": {
"ref": "EmrClusterForLoad"
},
"input": {
"ref": "S3TempLocation"
},
"output": {
"ref": "DDBDestinationTable"
},
"type": "EmrActivity",
"maximumRetries": "2",
"dependsOn": {
"ref": "TableBackupActivity"
},
"resizeClusterBeforeRunning": "true",
"step": [
"s3://dynamodb-emr-#{myDDBDestinationRegion}/emr-ddb-storage-handler/2.1.0/emr-ddb-2.1.0.jar,org.apache.hadoop.dynamodb.tools.DynamoDbImport,#{input.directoryPath},#{output.tableName},#{output.writeThroughputPercent}"
]
},
{
"id": "TableBackupActivity",
"name": "TableBackupActivity",
"input": {
"ref": "DDBSourceTable"
},
"output": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForBackup"
},
"resizeClusterBeforeRunning": "true",
"type": "EmrActivity",
"maximumRetries": "2",
"step": [
"s3://dynamodb-emr-#{myDDBSourceRegion}/emr-ddb-storage-handler/2.1.0/emr-ddb-2.1.0.jar,org.apache.hadoop.dynamodb.tools.DynamoDbExport,#{output.directoryPath},#{input.tableName},#{input.readThroughputPercent}"
]
},
{
"dependsOn": {
"ref": "TableLoadActivity"
},
"name": "S3CleanupActivity",
"id": "S3CleanupActivity",
"input": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForBackup"
},
"type": "ShellCommandActivity",
"command": "(sudo yum -y update aws-cli) && (aws s3 rm #{input.directoryPath} --recursive)"
}
],
"parameters": [
{
"myComment": "This Parameter specifies the S3 logging path for the pipeline. It is used by the 'Default' object to set the 'pipelineLogUri' value.",
"id" : "myS3LogsPath",
"type" : "AWS::S3::ObjectKey",
"description" : "S3 path for pipeline logs."
},
{
"id": "myDDBSourceTableName",
"type": "String",
"description": "Source DynamoDB table name"
},
{
"id": "myDDBDestinationTableName",
"type": "String",
"description": "Target DynamoDB table name"
},
{
"id": "myDDBWriteThroughputRatio",
"type": "Double",
"description": "DynamoDB write throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"id": "myDDBSourceRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"id": "myDDBDestinationRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"id": "myDDBReadThroughputRatio",
"type": "Double",
"description": "DynamoDB read throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"myComment": "Temporary S3 path to store the dynamodb backup csv files, backup files will be deleted after the copy completes",
"id": "myTempS3Folder",
"type": "AWS::S3::ObjectKey",
"description": "Temporary S3 folder"
}
]
}
And here is the error message (stack trace) from the Data Pipeline execution when the source DynamoDB table is set to on-demand capacity:
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:520)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:512)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:833)
at org.apache.hadoop.dynamodb.tools.DynamoDbExport.run(DynamoDbExport.java:79)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.dynamodb.tools.DynamoDbExport.main(DynamoDbExport.java:30)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
The following JSON file worked for upload (DynamoDB to S3) -
{
"objects": [
{
"id": "Default",
"name": "Default",
"scheduleType": "ONDEMAND",
"pipelineLogUri": "#{myS3LogsPath}",
"failureAndRerunMode": "CASCADE",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
},
{
"id": "DDBSourceTable",
"tableName": "#{myDDBSourceTableName}",
"name": "DDBSourceTable",
"type": "DynamoDBDataNode",
"readThroughputPercent": "#{myDDBReadThroughputRatio}"
},
{
"name": "S3TempLocation",
"id": "S3TempLocation",
"type": "S3DataNode",
"directoryPath": "#{myTempS3Folder}/data"
},
{
"subnetId": "subnet-id",
"id": "EmrClusterForBackup",
"name": "EmrClusterForBackup",
"masterInstanceType": "m5.xlarge",
"coreInstanceType": "m5.xlarge",
"coreInstanceCount": "1",
"releaseLabel": "emr-5.23.0",
"region": "#{myDDBSourceRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "TableBackupActivity",
"name": "TableBackupActivity",
"input": {
"ref": "DDBSourceTable"
},
"output": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForBackup"
},
"resizeClusterBeforeRunning": "true",
"type": "EmrActivity",
"maximumRetries": "2",
"step": [
"s3://dynamodb-dpl-#{myDDBSourceRegion}/emr-ddb-storage-handler/4.11.0/emr-dynamodb-tools-4.11.0-SNAPSHOT-jar-with-dependencies.jar,org.apache.hadoop.dynamodb.tools.DynamoDBExport,#{output.directoryPath},#{input.tableName},#{input.readThroughputPercent}"
]
}
],
"parameters": [
{
"myComment": "This Parameter specifies the S3 logging path for the pipeline. It is used by the 'Default' object to set the 'pipelineLogUri' value.",
"id" : "myS3LogsPath",
"type" : "AWS::S3::ObjectKey",
"description" : "S3 path for pipeline logs."
},
{
"id": "myDDBSourceTableName",
"type": "String",
"description": "Source DynamoDB table name"
},
{
"id": "myDDBSourceRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"id": "myDDBReadThroughputRatio",
"type": "Double",
"description": "DynamoDB read throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"myComment": "Temporary S3 path to store the dynamodb backup csv files, backup files will be deleted after the copy completes",
"id": "myTempS3Folder",
"type": "AWS::S3::ObjectKey",
"description": "Temporary S3 folder"
}
]
}
And the following worked for download (S3 to DynamoDB) -
{
"objects": [
{
"id": "Default",
"name": "Default",
"scheduleType": "ONDEMAND",
"pipelineLogUri": "#{myS3LogsPath}",
"failureAndRerunMode": "CASCADE",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
},
{
"name": "S3TempLocation",
"id": "S3TempLocation",
"type": "S3DataNode",
"directoryPath": "#{myTempS3Folder}/data"
},
{
"id": "DDBDestinationTable",
"tableName": "#{myDDBDestinationTableName}",
"name": "DDBDestinationTable",
"type": "DynamoDBDataNode",
"writeThroughputPercent": "#{myDDBWriteThroughputRatio}"
},
{
"subnetId": "subnet-id",
"id": "EmrClusterForLoad",
"name": "EmrClusterForLoad",
"releaseLabel": "emr-5.23.0",
"masterInstanceType": "m5.xlarge",
"coreInstanceType": "m5.xlarge",
"coreInstanceCount": "1",
"region": "#{myDDBDestinationRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "TableLoadActivity",
"name": "TableLoadActivity",
"runsOn": {
"ref": "EmrClusterForLoad"
},
"input": {
"ref": "S3TempLocation"
},
"output": {
"ref": "DDBDestinationTable"
},
"type": "EmrActivity",
"maximumRetries": "2",
"resizeClusterBeforeRunning": "true",
"step": [
"s3://dynamodb-dpl-#{myDDBDestinationRegion}/emr-ddb-storage-handler/4.11.0/emr-dynamodb-tools-4.11.0-SNAPSHOT-jar-with-dependencies.jar,org.apache.hadoop.dynamodb.tools.DynamoDBImport,#{input.directoryPath},#{output.tableName},#{output.writeThroughputPercent}"
]
},
{
"dependsOn": {
"ref": "TableLoadActivity"
},
"name": "S3CleanupActivity",
"id": "S3CleanupActivity",
"input": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForLoad"
},
"type": "ShellCommandActivity",
"command": "(sudo yum -y update aws-cli) && (aws s3 rm #{input.directoryPath} --recursive)"
}
],
"parameters": [
{
"myComment": "This Parameter specifies the S3 logging path for the pipeline. It is used by the 'Default' object to set the 'pipelineLogUri' value.",
"id" : "myS3LogsPath",
"type" : "AWS::S3::ObjectKey",
"description" : "S3 path for pipeline logs."
},
{
"id": "myDDBDestinationTableName",
"type": "String",
"description": "Target DynamoDB table name"
},
{
"id": "myDDBWriteThroughputRatio",
"type": "Double",
"description": "DynamoDB write throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"id": "myDDBDestinationRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"myComment": "Temporary S3 path to store the dynamodb backup csv files, backup files will be deleted after the copy completes",
"id": "myTempS3Folder",
"type": "AWS::S3::ObjectKey",
"description": "Temporary S3 folder"
}
]
}
Also, the subnet ID fields in both the pipeline definitions are totally optional, but it is always good to set them.
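In case it helps, this is roughly how the two definitions above can be registered and run with the AWS CLI (the pipeline name, pipeline id, file name and parameter values are placeholders; the parameter names come from the export definition above):
aws datapipeline create-pipeline --name ddb-ondemand-copy --unique-id ddb-ondemand-copy
# the command above returns a pipeline id such as df-XXXXXXXXXXXX; use it below
aws datapipeline put-pipeline-definition --pipeline-id df-XXXXXXXXXXXX \
    --pipeline-definition file://export.json \
    --parameter-values myDDBSourceTableName=SourceTable myDDBSourceRegion=us-west-2 myDDBReadThroughputRatio=0.5 myS3LogsPath=s3://my-bucket/logs myTempS3Folder=s3://my-bucket/tmp
aws datapipeline activate-pipeline --pipeline-id df-XXXXXXXXXXXX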

Setup Lambda to trigger from CloudWatch using CloudFormation

I want to use CloudFormation to trigger a Lambda function when my CloudWatch Events rule matches. I have the below, but it does not work.
The CloudWatch rule is created fine:
"CloudWatchNewEc2": {
"Type": "AWS::Events::Rule",
"DependsOn": ["LambdaNewEc2"],
"Properties": {
"Description": "Triggered on new EC2 instances",
"EventPattern": {
"source": [
"aws.ec2"
],
"detail-type": [
"AWS API Call via CloudTrail"
],
"detail": {
"eventSource": [
"ec2.amazonaws.com"
],
"eventName": [
"RunInstances"
]
}
},
"Targets": [
{
"Arn": {
"Fn::GetAtt": ["LambdaNewEc2", "Arn"]
},
"Id": "NewEc2AutoTag"
}
]
}
},
The Lambda function is created but is not triggered:
"LambdaNewEc2": {
"Type": "AWS::Lambda::Function",
"DependsOn": ["S3Lambda", "IAMRoleLambda"],
"Properties": {
"Code": {
"S3Bucket": {"Ref": "LambdaBucketName"},
"S3Key": "skynet-lambda.zip"
},
"Description": "When new EC2 instances are created, auto tag them",
"FunctionName": "newEc2AutoTag",
"Handler": "index.newEc2_autoTag",
"Role": {"Fn::GetAtt": ["IAMRoleLambda", "Arn"]},
"Runtime": "nodejs6.10",
"Timeout": "30"
}
}
},
It seems like the CloudWatch Events target is not sufficient?
UPDATE (full CloudFormation template):
{
"Parameters": {
"Environment": {
"Type": "String",
"Default": "Staging",
"AllowedValues": [
"Testing",
"Staging",
"Production"
],
"Description": "Environment name"
},
"BucketName": {
"Type": "String",
"Default": "skynet-staging",
"Description": "Bucket Name"
},
"LambdaBucketName": {
"Type": "String",
"Default": "skynet-lambda",
"Description": "Lambda Bucket Name"
},
"Owner": {
"Type": "String",
"Description": "Owner"
}
},
"Resources": {
"S3Web": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": {
"Ref": "BucketName"
},
"WebsiteConfiguration": {
"IndexDocument": "index.html",
"RoutingRules": [
{
"RedirectRule": {
"ReplaceKeyPrefixWith": "#"
},
"RoutingRuleCondition": {
"HttpErrorCodeReturnedEquals": "404"
}
}
]
},
"AccessControl": "PublicRead",
"Tags": [
{
"Key": "Cost Center",
"Value": "Skynet"
},
{
"Key": "Environment",
"Value": {
"Ref": "Environment"
}
},
{
"Key": "Owner",
"Value": {
"Ref": "Owner"
}
}
]
}
},
"S3Lambda": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": {
"Ref": "LambdaBucketName"
},
"VersioningConfiguration": {
"Status": "Enabled"
},
"Tags": [
{
"Key": "Cost Center",
"Value": "Skynet"
},
{
"Key": "Owner",
"Value": {
"Ref": "Owner"
}
}
]
}
},
"CloudWatchNewEc2": {
"Type": "AWS::Events::Rule",
"DependsOn": ["LambdaNewEc2"],
"Properties": {
"Description": "Triggered on new EC2 instances",
"EventPattern": {
"source": [
"aws.ec2"
],
"detail-type": [
"AWS API Call via CloudTrail"
],
"detail": {
"eventSource": [
"ec2.amazonaws.com"
],
"eventName": [
"RunInstances"
]
}
},
"Targets": [
{
"Arn": {
"Fn::GetAtt": ["LambdaNewEc2", "Arn"]
},
"Id": "NewEc2AutoTag"
}
]
}
},
"IAMRoleLambda": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": "skynet-lambda-role",
"AssumeRolePolicyDocument": {
"Version" : "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [ "lambda.amazonaws.com" ]
},
"Action": [ "sts:AssumeRole" ]
}
]
},
"ManagedPolicyArns": [
"arn:aws:iam::aws:policy/AmazonEC2FullAccess",
"arn:aws:iam::aws:policy/AWSLambdaFullAccess",
"arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess",
"arn:aws:iam::aws:policy/CloudWatchLogsFullAccess"
]
}
},
"LambdaNewEc2": {
"Type": "AWS::Lambda::Function",
"DependsOn": ["S3Lambda", "IAMRoleLambda"],
"Properties": {
"Code": {
"S3Bucket": {"Ref": "LambdaBucketName"},
"S3Key": "skynet-lambda.zip"
},
"Description": "When new EC2 instances are created, auto tag them",
"FunctionName": "newEc2AutoTag",
"Handler": "index.newEc2_autoTag",
"Role": {"Fn::GetAtt": ["IAMRoleLambda", "Arn"]},
"Runtime": "nodejs6.10",
"Timeout": "30"
}
}
},
"Outputs": {
"WebUrl": {
"Value": {
"Fn::GetAtt": [
"S3Web",
"WebsiteURL"
]
},
"Description": "S3 bucket for web files"
}
}
}
I managed to deploy your template into a CloudFormation stack (by removing the LambdaBucket and pointing to my own zip file). It seems to create all resources correctly.
It took about 10 minutes for the RunInstances event to appear in CloudTrail. It then successfully triggered the rule, but the CloudWatch metrics for my rule showed a failed invocation because I had faked a Lambda function for your template.
Once I edited the rule to point to a better function and re-tested, it worked fine.
Bottom line: Seems to work!
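For reference, the rule metrics I mentioned can also be checked from the CLI (a rough sketch; the physical rule name and the time window are placeholders):
aws cloudwatch get-metric-statistics --namespace AWS/Events \
    --metric-name FailedInvocations \
    --dimensions Name=RuleName,Value=<physical-rule-name> \
    --start-time 2017-11-01T00:00:00Z --end-time 2017-11-02T00:00:00Z \
    --period 3600 --statistics Sum
# use the Invocations metric instead of FailedInvocations to see how often the rule invoked its target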