AWS CodePipeline "The deployment failed because the AppSpec file that specifies the deployment configuration is missing ...." - amazon-web-services

I'm trying to set up a deployment pipeline using CodeCommit, ECR, and ECS. My pipeline gets through the source stage and builds the image correctly, but it fails at the deploy stage:
The deployment failed because the AppSpec file that specifies the deployment configuration is missing or has an invalid configuration. The input AppSpec file is a not well-formed yaml. The template cannot be parsed.
appspec.yaml is present in the output of my build phase (stored inside a zip file in S3).
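For reference, the CodeDeployToECS action expects an ECS-style AppSpec in which the <TASK_DEFINITION> placeholder is substituted by the pipeline at deploy time. A minimal sketch, written as a build-phase command so the file lands in the build output; the container name and port here are assumptions, not values from the original pipeline:

cat > appspec.yaml <<'EOF'
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: <TASK_DEFINITION>   # substituted by CodePipeline; keep the literal placeholder
        LoadBalancerInfo:
          ContainerName: "dashboard-web-app"   # assumption: must match the container name in taskdef.json
          ContainerPort: 80                    # assumption
EOF

Any stray tab, unbalanced quote, or leftover template syntax in this file triggers the "not well-formed yaml" error quoted above.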
The following is my pipeline definition:
{
"pipeline": {
"name": "dashboardpipeline",
"roleArn": "arn:aws:iam::410208438878:role/service-role/AWSCodePipelineServiceRole-us-east-2-DashBoardPipeline",
"artifactStore": {
"type": "S3",
"location": "codepipeline-us-east-2-276644567431"
},
"stages": [
{
"name": "Source",
"actions": [
{
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"provider": "CodeCommit",
"version": "1"
},
"runOrder": 2,
"configuration": {
"BranchName": "master",
"OutputArtifactFormat": "CODE_ZIP",
"PollForSourceChanges": "false",
"RepositoryName": "provisions_dashboard"
},
"outputArtifacts": [
{
"name": "SourceArtifact"
}
],
"inputArtifacts": [],
"region": "us-east-2",
"namespace": "SourceVariables"
},
{
"name": "Image",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"provider": "ECR",
"version": "1"
},
"runOrder": 2,
"configuration": {
"ImageTag": "latest",
"RepositoryName": "dashboard-web-app"
},
"outputArtifacts": [
{
"name": "MyImage"
}
],
"inputArtifacts": [],
"region": "us-east-2"
}
]
},
{
"name": "Build",
"actions": [
{
"name": "Build",
"actionTypeId": {
"category": "Build",
"owner": "AWS",
"provider": "CodeBuild",
"version": "1"
},
"runOrder": 1,
"configuration": {
"ProjectName": "DashboardApplicationBuild"
},
"outputArtifacts": [
{
"name": "BuildArtifact"
}
],
"inputArtifacts": [
{
"name": "SourceArtifact"
}
],
"region": "us-east-2",
"namespace": "BuildVariables"
}
]
},
{
"name": "Deploy",
"actions": [
{
"name": "Deploy",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CodeDeployToECS",
"version": "1"
},
"runOrder": 1,
"configuration": {
"AppSpecTemplateArtifact": "BuildArtifact",
"AppSpecTemplatePath": "appspec.yaml",
"ApplicationName": "dashboarddeploymentapp",
"DeploymentGroupName": "dashboardappdeploygr",
"Image1ArtifactName": "MyImage",
"Image1ContainerName": "IMAGE_URI",
"TaskDefinitionTemplateArtifact": "BuildArtifact",
"TaskDefinitionTemplatePath": "taskdef.json"
},
"outputArtifacts": [],
"inputArtifacts": [
{
"name": "BuildArtifact"
},
{
"name": "MyImage"
}
],
"region": "us-east-2"
}
]
}
],
"version": 18
},
"metadata": {
"pipelineArn": "arn:aws:codepipeline:us-east-2:410208438878:dashboardpipeline",
"created": "2022-03-14T11:52:19.525000-03:00",
"updated": "2022-03-18T11:34:14.217000-03:00"
}
}
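Since the error complains specifically about YAML well-formedness, one quick check is to download the build artifact from the artifact store and parse appspec.yaml locally. A sketch, assuming PyYAML is installed, with the artifact key left as a placeholder (copy the real key from the failed execution's details):

# unzip -p also confirms appspec.yaml sits at the zip root, where the deploy action looks for it
aws s3 cp s3://codepipeline-us-east-2-276644567431/<artifact-key> artifact.zip
unzip -p artifact.zip appspec.yaml | python3 -c 'import sys, yaml; yaml.safe_load(sys.stdin); print("appspec.yaml is well-formed")'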

Related

CodePipeline ARN not available through Terraform but available in metadata using the CLI

Executing the AWS CLI command below:
aws codepipeline get-pipeline --name pipeline_name
produces the following output:
{
"pipeline": {
"name": "xxxxx",,
"roleArn": "xxxxx",,
"artifactStore": {
"type": "S3",
"location": "xxxxx",
},
"stages": [
{
"name": "Source",
"actions": [
{
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"provider": "CodeCommit",
"version": "1"
},
"runOrder": 1,
"configuration": {
"BranchName": "main",
"PollForSourceChanges": "true",
"RepositoryName": "xxxxx",
},
"outputArtifacts": [
{
"name": "SourceOutput"
}
],
"inputArtifacts": []
}
]
},
{
"name": "Build",
"actions": [
{
"name": "Build",
"actionTypeId": {
"category": "Build",
"owner": "AWS",
"provider": "CodeBuild",
"version": "1"
},
"runOrder": 1,
"configuration": {
"ProjectName": "xxxxx",
},
"outputArtifacts": [
{
"name": "BuildOutput"
}
],
"inputArtifacts": [
{
"name": "SourceOutput"
}
]
}
]
},
{
"name": "Deploy",
"actions": [
{
"name": "Deploy",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"provider": "CodeDeployToECS",
"version": "1"
},
"runOrder": 1,
"configuration": {
"AppSpecTemplateArtifact": "BuildOutput",
"AppSpecTemplatePath": "appspec.yml",
"ApplicationName": "xxxxx",
"DeploymentGroupName": "xxxxx",
"Image1ArtifactName": "BuildOutput",
"Image1ContainerName": "IMAGE1_NAME",
"TaskDefinitionTemplateArtifact": "BuildOutput",
"TaskDefinitionTemplatePath": "taskdef.json"
},
"outputArtifacts": [],
"inputArtifacts": [
{
"name": "BuildOutput"
}
]
}
]
}
],
"version": 3
},
"metadata": {
"pipelineArn": "xxxxx",
"created": "xxxxx",
"updated": "xxxxx",
}
}
We can see that the metadata field has "pipelineArn": "xxxxx". But this ARN is not available in the console, nor have I been able to find any Terraform data source that exposes it.
Is it possible to retrieve the CodePipeline ARN in Terraform?
Also, to clarify, I need this for "aws_codestarnotifications_notification_rule", where the resource ARN is required.
The ARN for the CodePipeline is available in the AWS console, but it is a bit hard to find: go to Pipelines -> choose any pipeline -> Settings, and it is on the General tab under Pipeline ARN (1).
As for getting the CodePipeline ARN through Terraform, that is possible by reading the attribute of the same name after the CodePipeline resource is created (2). So in case the CodePipeline was not created with Terraform, you can hard-code the value. If it was, you can simply reference the output attribute in the "aws_codestarnotifications_notification_rule" resource:
resource "aws_codestarnotifications_notification_rule" "this" {
detail_type = ""
event_type_ids = [""]
name = ""
resource = aws_codepipeline.this.arn
target {
address = ...
}
}
This code snippet assumes that you will fill out the other details and that there is a Terraform code block that creates a CodePipeline resource with the local name this, i.e., you would have to have a block similar to the following:
resource "aws_codepipeline" "this" {
...
}
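If the pipeline was not created with Terraform and you would rather not hard-code the ARN, a sketch of reading it from the same metadata field shown above via the CLI:

aws codepipeline get-pipeline --name pipeline_name \
  --query 'metadata.pipelineArn' --output text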
(1) https://docs.aws.amazon.com/codepipeline/latest/userguide/pipelines-view-console.html#pipelines-settings-console
(2) https://registry.terraform.io/providers/hashicorp/aws/latest/docs/resources/codepipeline#arn

How to use Data Pipeline to copy data of a DynamoDB table to another DynamoDB table when both have on-demand capacity

I used to copy data from one DynamoDB table to another using a pipeline.json. It works when the source table has provisioned capacity, regardless of whether the destination is set to provisioned or on-demand. I want both of my tables set to on-demand capacity, but when I use the same template it doesn't work. Is there any way to do that, or is it still under development?
Here is my original functioning script:
{
"objects": [
{
"startAt": "FIRST_ACTIVATION_DATE_TIME",
"name": "DailySchedule",
"id": "DailySchedule",
"period": "1 day",
"type": "Schedule",
"occurrences": "1"
},
{
"id": "Default",
"name": "Default",
"scheduleType": "ONDEMAND",
"pipelineLogUri": "#{myS3LogsPath}",
"schedule": {
"ref": "DailySchedule"
},
"failureAndRerunMode": "CASCADE",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
},
{
"id": "DDBSourceTable",
"tableName": "#{myDDBSourceTableName}",
"name": "DDBSourceTable",
"type": "DynamoDBDataNode",
"readThroughputPercent": "#{myDDBReadThroughputRatio}"
},
{
"name": "S3TempLocation",
"id": "S3TempLocation",
"type": "S3DataNode",
"directoryPath": "#{myTempS3Folder}/#{format(#scheduledStartTime, 'YYYY-MM-dd-HH-mm-ss')}"
},
{
"id": "DDBDestinationTable",
"tableName": "#{myDDBDestinationTableName}",
"name": "DDBDestinationTable",
"type": "DynamoDBDataNode",
"writeThroughputPercent": "#{myDDBWriteThroughputRatio}"
},
{
"id": "EmrClusterForBackup",
"name": "EmrClusterForBackup",
"amiVersion": "3.8.0",
"masterInstanceType": "m3.xlarge",
"coreInstanceType": "m3.xlarge",
"coreInstanceCount": "1",
"region": "#{myDDBSourceRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "EmrClusterForLoad",
"name": "EmrClusterForLoad",
"amiVersion": "3.8.0",
"masterInstanceType": "m3.xlarge",
"coreInstanceType": "m3.xlarge",
"coreInstanceCount": "1",
"region": "#{myDDBDestinationRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "TableLoadActivity",
"name": "TableLoadActivity",
"runsOn": {
"ref": "EmrClusterForLoad"
},
"input": {
"ref": "S3TempLocation"
},
"output": {
"ref": "DDBDestinationTable"
},
"type": "EmrActivity",
"maximumRetries": "2",
"dependsOn": {
"ref": "TableBackupActivity"
},
"resizeClusterBeforeRunning": "true",
"step": [
"s3://dynamodb-emr-#{myDDBDestinationRegion}/emr-ddb-storage-handler/2.1.0/emr-ddb-2.1.0.jar,org.apache.hadoop.dynamodb.tools.DynamoDbImport,#{input.directoryPath},#{output.tableName},#{output.writeThroughputPercent}"
]
},
{
"id": "TableBackupActivity",
"name": "TableBackupActivity",
"input": {
"ref": "DDBSourceTable"
},
"output": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForBackup"
},
"resizeClusterBeforeRunning": "true",
"type": "EmrActivity",
"maximumRetries": "2",
"step": [
"s3://dynamodb-emr-#{myDDBSourceRegion}/emr-ddb-storage-handler/2.1.0/emr-ddb-2.1.0.jar,org.apache.hadoop.dynamodb.tools.DynamoDbExport,#{output.directoryPath},#{input.tableName},#{input.readThroughputPercent}"
]
},
{
"dependsOn": {
"ref": "TableLoadActivity"
},
"name": "S3CleanupActivity",
"id": "S3CleanupActivity",
"input": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForBackup"
},
"type": "ShellCommandActivity",
"command": "(sudo yum -y update aws-cli) && (aws s3 rm #{input.directoryPath} --recursive)"
}
],
"parameters": [
{
"myComment": "This Parameter specifies the S3 logging path for the pipeline. It is used by the 'Default' object to set the 'pipelineLogUri' value.",
"id" : "myS3LogsPath",
"type" : "AWS::S3::ObjectKey",
"description" : "S3 path for pipeline logs."
},
{
"id": "myDDBSourceTableName",
"type": "String",
"description": "Source DynamoDB table name"
},
{
"id": "myDDBDestinationTableName",
"type": "String",
"description": "Target DynamoDB table name"
},
{
"id": "myDDBWriteThroughputRatio",
"type": "Double",
"description": "DynamoDB write throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"id": "myDDBSourceRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"id": "myDDBDestinationRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"id": "myDDBReadThroughputRatio",
"type": "Double",
"description": "DynamoDB read throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"myComment": "Temporary S3 path to store the dynamodb backup csv files, backup files will be deleted after the copy completes",
"id": "myTempS3Folder",
"type": "AWS::S3::ObjectKey",
"description": "Temporary S3 folder"
}
]
}
And here is the stack trace from the Data Pipeline execution when the source DynamoDB table is set to on-demand capacity:
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:520)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:512)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:833)
at org.apache.hadoop.dynamodb.tools.DynamoDbExport.run(DynamoDbExport.java:79)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.dynamodb.tools.DynamoDbExport.main(DynamoDbExport.java:30)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
The following JSON file worked for the upload (DynamoDB to S3). Compared with the original template, it uses a newer EMR release (releaseLabel emr-5.23.0 instead of amiVersion 3.8.0) and the newer emr-dynamodb-tools 4.11.0 jar, which together handle on-demand tables:
{
"objects": [
{
"id": "Default",
"name": "Default",
"scheduleType": "ONDEMAND",
"pipelineLogUri": "#{myS3LogsPath}",
"failureAndRerunMode": "CASCADE",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
},
{
"id": "DDBSourceTable",
"tableName": "#{myDDBSourceTableName}",
"name": "DDBSourceTable",
"type": "DynamoDBDataNode",
"readThroughputPercent": "#{myDDBReadThroughputRatio}"
},
{
"name": "S3TempLocation",
"id": "S3TempLocation",
"type": "S3DataNode",
"directoryPath": "#{myTempS3Folder}/data"
},
{
"subnetId": "subnet-id",
"id": "EmrClusterForBackup",
"name": "EmrClusterForBackup",
"masterInstanceType": "m5.xlarge",
"coreInstanceType": "m5.xlarge",
"coreInstanceCount": "1",
"releaseLabel": "emr-5.23.0",
"region": "#{myDDBSourceRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "TableBackupActivity",
"name": "TableBackupActivity",
"input": {
"ref": "DDBSourceTable"
},
"output": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForBackup"
},
"resizeClusterBeforeRunning": "true",
"type": "EmrActivity",
"maximumRetries": "2",
"step": [
"s3://dynamodb-dpl-#{myDDBSourceRegion}/emr-ddb-storage-handler/4.11.0/emr-dynamodb-tools-4.11.0-SNAPSHOT-jar-with-dependencies.jar,org.apache.hadoop.dynamodb.tools.DynamoDBExport,#{output.directoryPath},#{input.tableName},#{input.readThroughputPercent}"
]
}
],
"parameters": [
{
"myComment": "This Parameter specifies the S3 logging path for the pipeline. It is used by the 'Default' object to set the 'pipelineLogUri' value.",
"id" : "myS3LogsPath",
"type" : "AWS::S3::ObjectKey",
"description" : "S3 path for pipeline logs."
},
{
"id": "myDDBSourceTableName",
"type": "String",
"description": "Source DynamoDB table name"
},
{
"id": "myDDBSourceRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"id": "myDDBReadThroughputRatio",
"type": "Double",
"description": "DynamoDB read throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"myComment": "Temporary S3 path to store the dynamodb backup csv files, backup files will be deleted after the copy completes",
"id": "myTempS3Folder",
"type": "AWS::S3::ObjectKey",
"description": "Temporary S3 folder"
}
]
}
And the following worked for the download (S3 to DynamoDB):
{
"objects": [
{
"id": "Default",
"name": "Default",
"scheduleType": "ONDEMAND",
"pipelineLogUri": "#{myS3LogsPath}",
"failureAndRerunMode": "CASCADE",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
},
{
"name": "S3TempLocation",
"id": "S3TempLocation",
"type": "S3DataNode",
"directoryPath": "#{myTempS3Folder}/data"
},
{
"id": "DDBDestinationTable",
"tableName": "#{myDDBDestinationTableName}",
"name": "DDBDestinationTable",
"type": "DynamoDBDataNode",
"writeThroughputPercent": "#{myDDBWriteThroughputRatio}"
},
{
"subnetId": "subnet-id",
"id": "EmrClusterForLoad",
"name": "EmrClusterForLoad",
"releaseLabel": "emr-5.23.0",
"masterInstanceType": "m5.xlarge",
"coreInstanceType": "m5.xlarge",
"coreInstanceCount": "1",
"region": "#{myDDBDestinationRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "TableLoadActivity",
"name": "TableLoadActivity",
"runsOn": {
"ref": "EmrClusterForLoad"
},
"input": {
"ref": "S3TempLocation"
},
"output": {
"ref": "DDBDestinationTable"
},
"type": "EmrActivity",
"maximumRetries": "2",
"resizeClusterBeforeRunning": "true",
"step": [
"s3://dynamodb-dpl-#{myDDBDestinationRegion}/emr-ddb-storage-handler/4.11.0/emr-dynamodb-tools-4.11.0-SNAPSHOT-jar-with-dependencies.jar,org.apache.hadoop.dynamodb.tools.DynamoDBImport,#{input.directoryPath},#{output.tableName},#{output.writeThroughputPercent}"
]
},
{
"dependsOn": {
"ref": "TableLoadActivity"
},
"name": "S3CleanupActivity",
"id": "S3CleanupActivity",
"input": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForLoad"
},
"type": "ShellCommandActivity",
"command": "(sudo yum -y update aws-cli) && (aws s3 rm #{input.directoryPath} --recursive)"
}
],
"parameters": [
{
"myComment": "This Parameter specifies the S3 logging path for the pipeline. It is used by the 'Default' object to set the 'pipelineLogUri' value.",
"id" : "myS3LogsPath",
"type" : "AWS::S3::ObjectKey",
"description" : "S3 path for pipeline logs."
},
{
"id": "myDDBDestinationTableName",
"type": "String",
"description": "Target DynamoDB table name"
},
{
"id": "myDDBWriteThroughputRatio",
"type": "Double",
"description": "DynamoDB write throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"id": "myDDBDestinationRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"myComment": "Temporary S3 path to store the dynamodb backup csv files, backup files will be deleted after the copy completes",
"id": "myTempS3Folder",
"type": "AWS::S3::ObjectKey",
"description": "Temporary S3 folder"
}
]
}
Also, the subnetId fields in both pipeline definitions are entirely optional, but it is always good to set them.
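As a side note, you can confirm which capacity mode a table is actually in before choosing a template; a sketch, with the table name as a placeholder:

# Prints PAY_PER_REQUEST for on-demand tables; the field can be absent
# (printing None) on provisioned tables that were never switched.
aws dynamodb describe-table --table-name <table-name> \
  --query 'Table.BillingModeSummary.BillingMode' --output text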

Creating a JSON table from 'aws autoscaling describe-auto-scaling-groups'

I have a problem parsing and transforming the output of aws autoscaling describe-auto-scaling-groups.
The output looks like this:
{
"AutoScalingGroups": [
{
"AutoScalingGroupName": "eks-nodegroup-AZ1",
"AutoScalingGroupARN": "arn:aws:autoscaling:eu-central-1::autoScalingGroup:854a8f05-cd3c-421d-abf3-0f3730d0b068:autoScalingGroupName/eks-nodegroup-AZ1",
"LaunchTemplate": {
"LaunchTemplateId": "lt-XXXXXXXXXXXXX",
"LaunchTemplateName": "eks-nodegroup-AZ1",
"Version": "$Latest"
},
"MinSize": 1,
"MaxSize": 6,
"DesiredCapacity": 1,
"DefaultCooldown": 300,
"AvailabilityZones": [
"eu-central-1a"
],
"LoadBalancerNames": [],
"TargetGroupARNs": [],
"HealthCheckType": "EC2",
"HealthCheckGracePeriod": 300,
"Instances": [
{
"InstanceId": "i-XXXXXXXXXXXXXXX",
"AvailabilityZone": "eu-central-1a",
"LifecycleState": "InService",
"HealthStatus": "Healthy",
"LaunchTemplate": {
"LaunchTemplateId": "lt-XXXXXXXXXXXXX",
"LaunchTemplateName": "eks-nodegroup-AZ1",
"Version": "1"
},
"ProtectedFromScaleIn": false
}
],
"CreatedTime": "2019-09-24T17:24:57.805Z",
"SuspendedProcesses": [],
"VPCZoneIdentifier": "subnet-XXXXXXXXXXXX",
"EnabledMetrics": [],
"Tags": [
{
"ResourceId": "eks-nodegroup-AZ1",
"ResourceType": "auto-scaling-group",
"Key": "Name",
"Value": "eks-nodegroup-AZ1",
"PropagateAtLaunch": true
},
{
"ResourceId": "eks-nodegroup-AZ1",
"ResourceType": "auto-scaling-group",
"Key": "k8s.io/cluster-autoscaler/enabled",
"Value": "true",
"PropagateAtLaunch": true
},
{
"ResourceId": "eks-nodegroup-AZ1",
"ResourceType": "auto-scaling-group",
"Key": "k8s.io/cluster-autoscaler/k8s-team-sandbox",
"Value": "true",
"PropagateAtLaunch": true
},
{
"ResourceId": "eks-nodegroup-AZ1",
"ResourceType": "auto-scaling-group",
"Key": "kubernetes.io/cluster/k8s-team-sandbox",
"Value": "owned",
"PropagateAtLaunch": true
}
],
"TerminationPolicies": [
"Default"
],
"NewInstancesProtectedFromScaleIn": false,
"ServiceLinkedRoleARN": "arn:aws:iam::XXXXXXXXXXXXXXX:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
},
{
"AutoScalingGroupName": "eks-k8s-team-sandbox-AZ2",
"AutoScalingGroupARN": "arn:aws:autoscaling:eu-central-1::autoScalingGroup:25324f3a-b911-453c-b316-46657e850b19:autoScalingGroupName/eks-nodegroup-AZ2",
"LaunchTemplate": {
"LaunchTemplateId": "lt-XXXXXXXXXXXX",
"LaunchTemplateName": "eks-nodegroup-AZ2",
"Version": "$Latest"
},
"MinSize": 1,
"MaxSize": 6,
"DesiredCapacity": 1,
"DefaultCooldown": 300,
"AvailabilityZones": [
"eu-central-1b"
],
"LoadBalancerNames": [],
"TargetGroupARNs": [],
"HealthCheckType": "EC2",
"HealthCheckGracePeriod": 300,
"Instances": [
{
"InstanceId": "i-XXXXXXXXXXXX",
"AvailabilityZone": "eu-central-1b",
"LifecycleState": "InService",
"HealthStatus": "Healthy",
"LaunchTemplate": {
"LaunchTemplateId": "lt-XXXXXXXXXX",
"LaunchTemplateName": "eks-nodegroup-AZ2",
"Version": "1"
},
"ProtectedFromScaleIn": false
}
],
"CreatedTime": "2019-09-24T17:24:57.982Z",
"SuspendedProcesses": [],
"VPCZoneIdentifier": "subnet-XXXXXXXX",
"EnabledMetrics": [],
"Tags": [
{
"ResourceId": "eks-nodegroup-AZ2",
"ResourceType": "auto-scaling-group",
"Key": "Name",
"Value": "eks-nodegroup-AZ2",
"PropagateAtLaunch": true
},
{
"ResourceId": "eks-nodegroup-AZ2",
"ResourceType": "auto-scaling-group",
"Key": "k8s.io/cluster-autoscaler/enabled",
"Value": "true",
"PropagateAtLaunch": true
},
{
"ResourceId": "eks-nodegroup-AZ2",
"ResourceType": "auto-scaling-group",
"Key": "k8s.io/cluster-autoscaler/k8s-team-sandbox",
"Value": "true",
"PropagateAtLaunch": true
},
{
"ResourceId": "eks-nodegroup-AZ2",
"ResourceType": "auto-scaling-group",
"Key": "kubernetes.io/cluster/k8s-team-sandbox",
"Value": "owned",
"PropagateAtLaunch": true
}
],
"TerminationPolicies": [
"Default"
],
"NewInstancesProtectedFromScaleIn": false,
"ServiceLinkedRoleARN": "arn:aws:iam::ARN:role/aws-service-role/autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling"
}
]
}
I need to parse it to get:
{
"eks-nodegroup-AZ1" : "$DesiredCapacityForEks-nodegroup-AZ1",
"eks-nodegroup-AZ2" : "$DesiredCapacityForEks-nodegroup-AZ2",
"eks-nodegroup-AZ3" : "$DesiredCapacityForEks-nodegroup-AZ3",
"eks-nodegroup-AZX" : "$DesiredCapacityForEks-nodegroup-AZX",
}
This expected output will be fed to a Terraform external data resource so that the DesiredCapacity values can be picked up automatically during ASG rolling updates.
Thanks,
Dominik
Try piping your response through jq:
aws autoscaling describe-auto-scaling-groups | jq '.AutoScalingGroups[] | {(.AutoScalingGroupName): .DesiredCapacity}' | jq -s add
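The same result in a single jq invocation, with the capacities cast to strings, since a Terraform external data source requires a JSON object whose values are all strings:

aws autoscaling describe-auto-scaling-groups \
  | jq '.AutoScalingGroups | map({(.AutoScalingGroupName): (.DesiredCapacity | tostring)}) | add'

And a sketch of wiring it into the external data source mentioned in the question (the data source name is hypothetical):

data "external" "asg_desired_capacity" {
  program = ["sh", "-c", "aws autoscaling describe-auto-scaling-groups | jq '.AutoScalingGroups | map({(.AutoScalingGroupName): (.DesiredCapacity | tostring)}) | add'"]
}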

AWS CAPABILITY_AUTO_EXPAND in a CodePipeline with CloudFormation via the web console

I am trying to get a CodePipeline that deploys with the CloudFormation service to complete, and this error is generated. It must be said that the CloudFormation service on its own works well. The complete error is:
JobFailed
Requires capabilities: [CAPABILITY_AUTO_EXPAND] (Service: AmazonCloudFormation; Status Code: 400; Error Code: InsufficientCapabilitiesException; Request ID: 1a977102-f829-11e8-b5c6-f7cc8454c4d0)
The solution I have found is to add CAPABILITY_AUTO_EXPAND via the --capabilities parameter, but that only applies to the CLI, and my case is through the web console.
Ran into the same problem; I could not find a way to do it through the console.
However, it works well with the CLI, and you can find detailed documentation on updating a pipeline here: https://docs.aws.amazon.com/cli/latest/reference/codepipeline/update-pipeline.html
The way I did it was (a CLI sketch follows these steps):
run get-pipeline to fetch the current pipeline structure
save the result as a JSON file
in the JSON file, remove the metadata section and add a Capabilities attribute with your value in the deploy action's configuration section
run update-pipeline with the --cli-input-json option, specifying the edited JSON file
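A sketch of those four steps, using the pipeline name from the sample below:

aws codepipeline get-pipeline --name SAMpipeline > pipeline.json
# Edit pipeline.json: delete the top-level "metadata" block and add the
# "Capabilities" key to each CloudFormation action's "configuration",
# as marked in the sample. Then push the edited structure back:
aws codepipeline update-pipeline --cli-input-json file://pipeline.json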
Sample [Note the changes marked with arrows]:
{
"pipeline": {
"roleArn": "arn:aws:iam::123456789234:role/service-role/AWSCodePipelineServiceRole-us-east-1-SAMpipeline",
"stages": [
{
"name": "Source",
"actions": [
{
"inputArtifacts": [],
"name": "Source",
"actionTypeId": {
"category": "Source",
"owner": "AWS",
"version": "1",
"provider": "CodeCommit"
},
"outputArtifacts": [
{
"name": "SourceArtifact"
}
],
"configuration": {
"PollForSourceChanges": "false",
"BranchName": "master",
"RepositoryName": "CFNrepo"
},
"runOrder": 1
}
]
},
{
"name": "Build",
"actions": [
{
"inputArtifacts": [
{
"name": "SourceArtifact"
}
],
"name": "Build",
"actionTypeId": {
"category": "Build",
"owner": "AWS",
"version": "1",
"provider": "CodeBuild"
},
"outputArtifacts": [
{
"name": "BuildArtifact"
}
],
"configuration": {
"ProjectName": "SAMproject"
},
"runOrder": 1
}
]
},
{
"name": "Deploy",
"actions": [
{
"inputArtifacts": [
{
"name": "BuildArtifact"
}
],
"name": "DeployStack",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CloudFormation"
},
"outputArtifacts": [],
"configuration": {
"StackName": "s5765722591-cp",
"ActionMode": "CREATE_UPDATE",
"RoleArn": "arn:aws:iam::298320596430:role/CloudFormationFullAccess",
"Capabilities": "CAPABILITY_NAMED_IAM,CAPABILITY_AUTO_EXPAND", <--------------
"TemplatePath": "BuildArtifact::template.yaml"
},
"runOrder": 1
},
{
"inputArtifacts": [
{
"name": "BuildArtifact"
}
],
"name": "DeployStack2",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CloudFormation"
},
"outputArtifacts": [],
"configuration": {
"StackName": "s5765722591-cp2",
"ActionMode": "CREATE_UPDATE",
"RoleArn": "arn:aws:iam::123456789234:role/CloudFormationFullAccess",
"Capabilities": "CAPABILITY_NAMED_IAM,CAPABILITY_AUTO_EXPAND", <-----------
"TemplatePath": "BuildArtifact::template.yaml"
},
"runOrder": 1
}
]
}
],
"artifactStore": {
"type": "S3",
"location": "codepipeline-us-east-1-123456789234"
},
"name": "SAMpipeline",
"version": 5
}
}

Setup Lambda to trigger from CloudWatch using CloudFormation

I want to use CloudFormation to trigger a Lambda function when my CloudWatch Events rule fires. I have the below, but it does not work.
The CloudWatch rule is created fine:
"CloudWatchNewEc2": {
"Type": "AWS::Events::Rule",
"DependsOn": ["LambdaNewEc2"],
"Properties": {
"Description": "Triggered on new EC2 instances",
"EventPattern": {
"source": [
"aws.ec2"
],
"detail-type": [
"AWS API Call via CloudTrail"
],
"detail": {
"eventSource": [
"ec2.amazonaws.com"
],
"eventName": [
"RunInstances"
]
}
},
"Targets": [
{
"Arn": {
"Fn::GetAtt": ["LambdaNewEc2", "Arn"]
},
"Id": "NewEc2AutoTag"
}
]
}
},
The Lambda function is created but is not triggered:
"LambdaNewEc2": {
"Type": "AWS::Lambda::Function",
"DependsOn": ["S3Lambda", "IAMRoleLambda"],
"Properties": {
"Code": {
"S3Bucket": {"Ref": "LambdaBucketName"},
"S3Key": "skynet-lambda.zip"
},
"Description": "When new EC2 instances are created, auto tag them",
"FunctionName": "newEc2AutoTag",
"Handler": "index.newEc2_autoTag",
"Role": {"Fn::GetAtt": ["IAMRoleLambda", "Arn"]},
"Runtime": "nodejs6.10",
"Timeout": "30"
}
}
It seems like the CloudWatch rule's Target alone is not sufficient?
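One thing worth checking: a rule target alone does not authorize the invocation. The function also needs a resource-based policy allowing events.amazonaws.com to invoke it, which the console adds automatically but CloudFormation does not; in a template this is an AWS::Lambda::Permission resource with Action lambda:InvokeFunction, Principal events.amazonaws.com, and SourceArn set to the rule's ARN. The one-off CLI equivalent, as a sketch (account, region, and rule name are placeholders):

aws lambda add-permission \
  --function-name newEc2AutoTag \
  --statement-id AllowCloudWatchEventsInvoke \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/<rule-name>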
UPDATE (Full CloudFormation template)
{
"Parameters": {
"Environment": {
"Type": "String",
"Default": "Staging",
"AllowedValues": [
"Testing",
"Staging",
"Production"
],
"Description": "Environment name"
},
"BucketName": {
"Type": "String",
"Default": "skynet-staging",
"Description": "Bucket Name"
},
"LambdaBucketName": {
"Type": "String",
"Default": "skynet-lambda",
"Description": "Lambda Bucket Name"
},
"Owner": {
"Type": "String",
"Description": "Owner"
}
},
"Resources": {
"S3Web": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": {
"Ref": "BucketName"
},
"WebsiteConfiguration": {
"IndexDocument": "index.html",
"RoutingRules": [
{
"RedirectRule": {
"ReplaceKeyPrefixWith": "#"
},
"RoutingRuleCondition": {
"HttpErrorCodeReturnedEquals": "404"
}
}
]
},
"AccessControl": "PublicRead",
"Tags": [
{
"Key": "Cost Center",
"Value": "Skynet"
},
{
"Key": "Environment",
"Value": {
"Ref": "Environment"
}
},
{
"Key": "Owner",
"Value": {
"Ref": "Owner"
}
}
]
}
},
"S3Lambda": {
"Type": "AWS::S3::Bucket",
"Properties": {
"BucketName": {
"Ref": "LambdaBucketName"
},
"VersioningConfiguration": {
"Status": "Enabled"
},
"Tags": [
{
"Key": "Cost Center",
"Value": "Skynet"
},
{
"Key": "Owner",
"Value": {
"Ref": "Owner"
}
}
]
}
},
"CloudWatchNewEc2": {
"Type": "AWS::Events::Rule",
"DependsOn": ["LambdaNewEc2"],
"Properties": {
"Description": "Triggered on new EC2 instances",
"EventPattern": {
"source": [
"aws.ec2"
],
"detail-type": [
"AWS API Call via CloudTrail"
],
"detail": {
"eventSource": [
"ec2.amazonaws.com"
],
"eventName": [
"RunInstances"
]
}
},
"Targets": [
{
"Arn": {
"Fn::GetAtt": ["LambdaNewEc2", "Arn"]
},
"Id": "NewEc2AutoTag"
}
]
}
},
"IAMRoleLambda": {
"Type": "AWS::IAM::Role",
"Properties": {
"RoleName": "skynet-lambda-role",
"AssumeRolePolicyDocument": {
"Version" : "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": [ "lambda.amazonaws.com" ]
},
"Action": [ "sts:AssumeRole" ]
}
]
},
"ManagedPolicyArns": [
"arn:aws:iam::aws:policy/AmazonEC2FullAccess",
"arn:aws:iam::aws:policy/AWSLambdaFullAccess",
"arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess",
"arn:aws:iam::aws:policy/CloudWatchLogsFullAccess"
]
}
},
"LambdaNewEc2": {
"Type": "AWS::Lambda::Function",
"DependsOn": ["S3Lambda", "IAMRoleLambda"],
"Properties": {
"Code": {
"S3Bucket": {"Ref": "LambdaBucketName"},
"S3Key": "skynet-lambda.zip"
},
"Description": "When new EC2 instances are created, auto tag them",
"FunctionName": "newEc2AutoTag",
"Handler": "index.newEc2_autoTag",
"Role": {"Fn::GetAtt": ["IAMRoleLambda", "Arn"]},
"Runtime": "nodejs6.10",
"Timeout": "30"
}
}
},
"Outputs": {
"WebUrl": {
"Value": {
"Fn::GetAtt": [
"S3Web",
"WebsiteURL"
]
},
"Description": "S3 bucket for web files"
}
}
}
I managed to deploy your template into a CloudFormation stack (after removing the LambdaBucket and pointing to my own zip file). It seems to create all resources correctly.
It took about 10 minutes for the RunInstances event to appear in CloudTrail. It then successfully triggered the rule, but the CloudWatch metrics for my rule showed a failed invocation, because I had faked the Lambda function for your template.
Once I edited the rule to point to a working function and re-tested, it worked fine.
Bottom line: Seems to work!
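To run the same check yourself, the rule's invocation metrics live in the AWS/Events namespace; a sketch, with rule name and time window as placeholders:

aws cloudwatch get-metric-statistics \
  --namespace AWS/Events \
  --metric-name FailedInvocations \
  --dimensions Name=RuleName,Value=<rule-name> \
  --start-time 2017-11-01T00:00:00Z --end-time 2017-11-02T00:00:00Z \
  --period 3600 --statistics Sum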