AWS.EC2.Windows.CloudWatch.json Not Logging Free Disk Space

On our instance (i-568a3f14) running Windows Server 2012 Standard, we are trying to log free disk space to CloudWatch. I've put the file AWS.EC2.Windows.CloudWatch.json in the directory C:\Program Files\Amazon\Ec2ConfigService\Settings.
The contents of the .json file are as follows:
{
"IsEnabled": true,
"EngineConfiguration": {
"PollInterval": "00:00:05",
"Components": [
{
"Id": "PerformanceCounterDisk",
"FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"CategoryName": "LogicalDisk",
"CounterName": "% Free Space",
"InstanceName": "C:",
"MetricName": "FreeDiskPercentage",
"Unit": "Percent",
"DimensionName": "InstanceId",
"DimensionValue": "i-568a3f14"
}
},
{
"FullName": "AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch",
"Id": "CloudWatch",
"Parameters": {
"AccessKey": "",
"NameSpace": "Windows/Default",
"Region": "eu-west-1a",
"SecretKey": ""
}
}
],
"Flows": {
"Flows":
[
"(PerformanceCounterDisk),CloudWatch"
]
}
}
}
I've restarted the Ec2ConfigService, but we still aren't getting any data logged. What am I missing, or what else should I try?
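One thing worth double-checking is the Region value: "eu-west-1a" is an Availability Zone rather than a region, and the CloudWatch output component expects a region name such as "eu-west-1". A minimal sketch of that component with the region corrected (assuming eu-west-1 is the intended region; access and secret keys left empty as in the question):

{
    "Id": "CloudWatch",
    "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch",
    "Parameters": {
        "AccessKey": "",
        "SecretKey": "",
        "NameSpace": "Windows/Default",
        "Region": "eu-west-1"
    }
}

The parentheses in a flow entry are only needed to group multiple sources, so the flow can also be written simply as "PerformanceCounterDisk,CloudWatch".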

Related

AWS Lambda VS Code launch.json

Recently AWS introduced launch configurations support for SAM debugging in the AWS Toolkit for VS Code.
Ref: https://aws.amazon.com/blogs/developer/introducing-launch-configurations-support-for-sam-debugging-in-the-aws-toolkit-for-vs-code/
This means we can't use a templates.json file any more; instead we need to use launch.json to send the event to the Lambda.
I want to send a test event (an SQS message) to a Lambda function.
Before launch configurations were introduced, templates.json had it like this (and it worked fine):
"templates": {
"xxxxxxxx/template.yaml": {
"handlers": {
"xxxxxxxxx.lambdaHandler": {
"event": {
"Records": [
{
"messageId": "xxxxxxxxxxxxxxxx",
"receiptHandle": "xxxxxxxxxxxxxxxx",
"body": "{\"operation\": \"publish\", \"data\": { \"__typename\": \"xxxxxxxxxxxxxxxx\", \"id\": \"xxxxxxxxxxxxxxxx\" }}",
"attributes": {
"ApproximateReceiveCount": "1",
"SentTimestamp": "xxxxxxxxxxxxxxxx",
"SequenceNumber": "xxxxxxxxxxxxxxxx",
"MessageGroupId": "xxxxxxxxxxxxxxxx",
"SenderId": "xxxxxxxxxxxxxxxx:LambdaFunctionTest",
"MessageDeduplicationId": "xxxxxxxxxxxxxxxx",
"ApproximateFirstReceiveTimestamp": "xxxxxxxxxxxxxxxx"
},
"messageAttributes": {
"environment": {
"DataType": "String",
"stringValue": "Dev"
}},
"md5OfBody": "xxxxxxxxxxxxxxxx",
"eventSource": "aws:sqs",
"eventSourceARN": "arn:aws:sqs:us-east-1:xxxxxxxxxxxxxxxx:xxx.fifo",
"awsRegion": "us-east-1"
}
]
},
"environmentVariables": {}
}
............
But in launch.json, I pasted the Records in the following way and it is not accepting it (see also the attached screenshot).
{
"configurations": [
{
"type": "aws-sam",
"request": "direct-invoke",
"name": "xxxxxxxx)",
"invokeTarget": {
"target": "code",
"projectRoot": "xxxxxxxx",
"lambdaHandler": "xxxxxxxx.lambdaHandler"
},
"lambda": {
"runtime": "nodejs12.x",
"payload": {
"json": {
"Records": [
{
"messageId": "xxxxxxxxxxxxxxxx",
"receiptHandle": "xxxxxxxxxxxxxxxx",
"body": "{\"operation\": \"publish\", \"data\": { \"__typename\": \"xxxxxxxxxxxxxxxx\", \"id\": \"xxxxxxxxxxxxxxxx\" }}",
"attributes": {
"ApproximateReceiveCount": "1",
"SentTimestamp": "xxxxxxxxxxxxxxxx",
"SequenceNumber": "xxxxxxxxxxxxxxxx",
"MessageGroupId": "xxxxxxxxxxxxxxxx",
"SenderId": "xxxxxxxxxxxxxxxx:LambdaFunctionTest",
"MessageDeduplicationId": "xxxxxxxxxxxxxxxx",
"ApproximateFirstReceiveTimestamp": "xxxxxxxxxxxxxxxx"
},
"messageAttributes": {
"environment": {
"DataType": "String",
"stringValue": "Dev"
}},
"md5OfBody": "xxxxxxxxxxxxxxxx",
"eventSource": "aws:sqs",
"eventSourceARN": "arn:aws:sqs:us-east-1:xxxxxxxxxxxxxxxx:xxx.fifo",
"awsRegion": "us-east-1"
}
]
},
},
}
},
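As pasted, the snippet has trailing commas inside the "payload" and "lambda" blocks, and the configurations array and the root object are never closed, which makes the JSON invalid. For comparison, here is a sketch of a well-formed direct-invoke configuration with an SQS test event (placeholders kept from the question, record trimmed to a few fields for brevity):

{
    "configurations": [
        {
            "type": "aws-sam",
            "request": "direct-invoke",
            "name": "xxxxxxxx",
            "invokeTarget": {
                "target": "code",
                "projectRoot": "xxxxxxxx",
                "lambdaHandler": "xxxxxxxx.lambdaHandler"
            },
            "lambda": {
                "runtime": "nodejs12.x",
                "payload": {
                    "json": {
                        "Records": [
                            {
                                "messageId": "xxxxxxxxxxxxxxxx",
                                "body": "{\"operation\": \"publish\"}",
                                "eventSource": "aws:sqs",
                                "eventSourceARN": "arn:aws:sqs:us-east-1:xxxxxxxxxxxxxxxx:xxx.fifo",
                                "awsRegion": "us-east-1"
                            }
                        ]
                    }
                }
            }
        }
    ]
}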

How to use Data Pipeline to copy data from one DynamoDB table to another when both have on-demand capacity

I used to copy data from one DynamoDB table to another using a pipeline.json. It works when the source table has provisioned capacity, and it doesn't matter whether the destination is provisioned or on-demand. I want both of my tables set to on-demand capacity, but when I use the same template it doesn't work. Is there any way to do that, or is it still under development?
Here is my original functioning script:
{
"objects": [
{
"startAt": "FIRST_ACTIVATION_DATE_TIME",
"name": "DailySchedule",
"id": "DailySchedule",
"period": "1 day",
"type": "Schedule",
"occurrences": "1"
},
{
"id": "Default",
"name": "Default",
"scheduleType": "ONDEMAND",
"pipelineLogUri": "#{myS3LogsPath}",
"schedule": {
"ref": "DailySchedule"
},
"failureAndRerunMode": "CASCADE",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
},
{
"id": "DDBSourceTable",
"tableName": "#{myDDBSourceTableName}",
"name": "DDBSourceTable",
"type": "DynamoDBDataNode",
"readThroughputPercent": "#{myDDBReadThroughputRatio}"
},
{
"name": "S3TempLocation",
"id": "S3TempLocation",
"type": "S3DataNode",
"directoryPath": "#{myTempS3Folder}/#{format(#scheduledStartTime, 'YYYY-MM-dd-HH-mm-ss')}"
},
{
"id": "DDBDestinationTable",
"tableName": "#{myDDBDestinationTableName}",
"name": "DDBDestinationTable",
"type": "DynamoDBDataNode",
"writeThroughputPercent": "#{myDDBWriteThroughputRatio}"
},
{
"id": "EmrClusterForBackup",
"name": "EmrClusterForBackup",
"amiVersion": "3.8.0",
"masterInstanceType": "m3.xlarge",
"coreInstanceType": "m3.xlarge",
"coreInstanceCount": "1",
"region": "#{myDDBSourceRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "EmrClusterForLoad",
"name": "EmrClusterForLoad",
"amiVersion": "3.8.0",
"masterInstanceType": "m3.xlarge",
"coreInstanceType": "m3.xlarge",
"coreInstanceCount": "1",
"region": "#{myDDBDestinationRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "TableLoadActivity",
"name": "TableLoadActivity",
"runsOn": {
"ref": "EmrClusterForLoad"
},
"input": {
"ref": "S3TempLocation"
},
"output": {
"ref": "DDBDestinationTable"
},
"type": "EmrActivity",
"maximumRetries": "2",
"dependsOn": {
"ref": "TableBackupActivity"
},
"resizeClusterBeforeRunning": "true",
"step": [
"s3://dynamodb-emr-#{myDDBDestinationRegion}/emr-ddb-storage-handler/2.1.0/emr-ddb-2.1.0.jar,org.apache.hadoop.dynamodb.tools.DynamoDbImport,#{input.directoryPath},#{output.tableName},#{output.writeThroughputPercent}"
]
},
{
"id": "TableBackupActivity",
"name": "TableBackupActivity",
"input": {
"ref": "DDBSourceTable"
},
"output": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForBackup"
},
"resizeClusterBeforeRunning": "true",
"type": "EmrActivity",
"maximumRetries": "2",
"step": [
"s3://dynamodb-emr-#{myDDBSourceRegion}/emr-ddb-storage-handler/2.1.0/emr-ddb-2.1.0.jar,org.apache.hadoop.dynamodb.tools.DynamoDbExport,#{output.directoryPath},#{input.tableName},#{input.readThroughputPercent}"
]
},
{
"dependsOn": {
"ref": "TableLoadActivity"
},
"name": "S3CleanupActivity",
"id": "S3CleanupActivity",
"input": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForBackup"
},
"type": "ShellCommandActivity",
"command": "(sudo yum -y update aws-cli) && (aws s3 rm #{input.directoryPath} --recursive)"
}
],
"parameters": [
{
"myComment": "This Parameter specifies the S3 logging path for the pipeline. It is used by the 'Default' object to set the 'pipelineLogUri' value.",
"id" : "myS3LogsPath",
"type" : "AWS::S3::ObjectKey",
"description" : "S3 path for pipeline logs."
},
{
"id": "myDDBSourceTableName",
"type": "String",
"description": "Source DynamoDB table name"
},
{
"id": "myDDBDestinationTableName",
"type": "String",
"description": "Target DynamoDB table name"
},
{
"id": "myDDBWriteThroughputRatio",
"type": "Double",
"description": "DynamoDB write throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"id": "myDDBSourceRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"id": "myDDBDestinationRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"id": "myDDBReadThroughputRatio",
"type": "Double",
"description": "DynamoDB read throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"myComment": "Temporary S3 path to store the dynamodb backup csv files, backup files will be deleted after the copy completes",
"id": "myTempS3Folder",
"type": "AWS::S3::ObjectKey",
"description": "Temporary S3 folder"
}
]
}
And here is the error message from the Data Pipeline execution when the source DynamoDB table is set to on-demand capacity:
at org.apache.hadoop.mapreduce.JobSubmitter.writeOldSplits(JobSubmitter.java:520)
at org.apache.hadoop.mapreduce.JobSubmitter.writeSplits(JobSubmitter.java:512)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:394)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:833)
at org.apache.hadoop.dynamodb.tools.DynamoDbExport.run(DynamoDbExport.java:79)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.dynamodb.tools.DynamoDbExport.main(DynamoDbExport.java:30)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
The following JSON file worked for upload (DynamoDB to S3) -
{
"objects": [
{
"id": "Default",
"name": "Default",
"scheduleType": "ONDEMAND",
"pipelineLogUri": "#{myS3LogsPath}",
"failureAndRerunMode": "CASCADE",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
},
{
"id": "DDBSourceTable",
"tableName": "#{myDDBSourceTableName}",
"name": "DDBSourceTable",
"type": "DynamoDBDataNode",
"readThroughputPercent": "#{myDDBReadThroughputRatio}"
},
{
"name": "S3TempLocation",
"id": "S3TempLocation",
"type": "S3DataNode",
"directoryPath": "#{myTempS3Folder}/data"
},
{
"subnetId": "subnet-id",
"id": "EmrClusterForBackup",
"name": "EmrClusterForBackup",
"masterInstanceType": "m5.xlarge",
"coreInstanceType": "m5.xlarge",
"coreInstanceCount": "1",
"releaseLabel": "emr-5.23.0",
"region": "#{myDDBSourceRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "TableBackupActivity",
"name": "TableBackupActivity",
"input": {
"ref": "DDBSourceTable"
},
"output": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForBackup"
},
"resizeClusterBeforeRunning": "true",
"type": "EmrActivity",
"maximumRetries": "2",
"step": [
"s3://dynamodb-dpl-#{myDDBSourceRegion}/emr-ddb-storage-handler/4.11.0/emr-dynamodb-tools-4.11.0-SNAPSHOT-jar-with-dependencies.jar,org.apache.hadoop.dynamodb.tools.DynamoDBExport,#{output.directoryPath},#{input.tableName},#{input.readThroughputPercent}"
]
}
],
"parameters": [
{
"myComment": "This Parameter specifies the S3 logging path for the pipeline. It is used by the 'Default' object to set the 'pipelineLogUri' value.",
"id" : "myS3LogsPath",
"type" : "AWS::S3::ObjectKey",
"description" : "S3 path for pipeline logs."
},
{
"id": "myDDBSourceTableName",
"type": "String",
"description": "Source DynamoDB table name"
},
{
"id": "myDDBSourceRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"id": "myDDBReadThroughputRatio",
"type": "Double",
"description": "DynamoDB read throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"myComment": "Temporary S3 path to store the dynamodb backup csv files, backup files will be deleted after the copy completes",
"id": "myTempS3Folder",
"type": "AWS::S3::ObjectKey",
"description": "Temporary S3 folder"
}
]
}
And the following worked for download (S3 to DynamoDB) -
{
"objects": [
{
"id": "Default",
"name": "Default",
"scheduleType": "ONDEMAND",
"pipelineLogUri": "#{myS3LogsPath}",
"failureAndRerunMode": "CASCADE",
"role": "DataPipelineDefaultRole",
"resourceRole": "DataPipelineDefaultResourceRole"
},
{
"name": "S3TempLocation",
"id": "S3TempLocation",
"type": "S3DataNode",
"directoryPath": "#{myTempS3Folder}/data"
},
{
"id": "DDBDestinationTable",
"tableName": "#{myDDBDestinationTableName}",
"name": "DDBDestinationTable",
"type": "DynamoDBDataNode",
"writeThroughputPercent": "#{myDDBWriteThroughputRatio}"
},
{
"subnetId": "subnet-id",
"id": "EmrClusterForLoad",
"name": "EmrClusterForLoad",
"releaseLabel": "emr-5.23.0",
"masterInstanceType": "m5.xlarge",
"coreInstanceType": "m5.xlarge",
"coreInstanceCount": "1",
"region": "#{myDDBDestinationRegion}",
"terminateAfter": "10 Days",
"type": "EmrCluster"
},
{
"id": "TableLoadActivity",
"name": "TableLoadActivity",
"runsOn": {
"ref": "EmrClusterForLoad"
},
"input": {
"ref": "S3TempLocation"
},
"output": {
"ref": "DDBDestinationTable"
},
"type": "EmrActivity",
"maximumRetries": "2",
"resizeClusterBeforeRunning": "true",
"step": [
"s3://dynamodb-dpl-#{myDDBDestinationRegion}/emr-ddb-storage-handler/4.11.0/emr-dynamodb-tools-4.11.0-SNAPSHOT-jar-with-dependencies.jar,org.apache.hadoop.dynamodb.tools.DynamoDBImport,#{input.directoryPath},#{output.tableName},#{output.writeThroughputPercent}"
]
},
{
"dependsOn": {
"ref": "TableLoadActivity"
},
"name": "S3CleanupActivity",
"id": "S3CleanupActivity",
"input": {
"ref": "S3TempLocation"
},
"runsOn": {
"ref": "EmrClusterForLoad"
},
"type": "ShellCommandActivity",
"command": "(sudo yum -y update aws-cli) && (aws s3 rm #{input.directoryPath} --recursive)"
}
],
"parameters": [
{
"myComment": "This Parameter specifies the S3 logging path for the pipeline. It is used by the 'Default' object to set the 'pipelineLogUri' value.",
"id" : "myS3LogsPath",
"type" : "AWS::S3::ObjectKey",
"description" : "S3 path for pipeline logs."
},
{
"id": "myDDBDestinationTableName",
"type": "String",
"description": "Target DynamoDB table name"
},
{
"id": "myDDBWriteThroughputRatio",
"type": "Double",
"description": "DynamoDB write throughput ratio",
"default": "1",
"watermark": "Enter value between 0.1-1.0"
},
{
"id": "myDDBDestinationRegion",
"type": "String",
"description": "Region of the DynamoDB table",
"default": "us-west-2"
},
{
"myComment": "Temporary S3 path to store the dynamodb backup csv files, backup files will be deleted after the copy completes",
"id": "myTempS3Folder",
"type": "AWS::S3::ObjectKey",
"description": "Temporary S3 folder"
}
]
}
Also, the subnet ID fields in both the pipeline definitions are totally optional, but it is always good to set them.

How to dynamically configure an AWS SageMaker task with AWS Step Functions

I'm trying to build an ML pipeline using AWS Step Functions.
I would like to configure the 'CreateHyperParameterTuningJob' step dynamically, depending on the input of the task.
Here is a screenshot of the State Machine that I'm trying to build:
ML State Machine
When I try to create this State Machine, I get the following error:
The value for the field 'MaxParallelTrainingJobs' must be an INTEGER
I'm struggling to figure out what the issue is here.
Do you have any suggestions for making the state machine configuration dynamic with Step Functions? Is it even possible?
Here is the input data passed to the 'Run training job' task:
{
"client_id": "test",
"training_job_definition": {
"AlgorithmSpecification": {
"TrainingImage": "433757028032.dkr.ecr.us-west-2.amazonaws.com/xgboost:latest",
"TrainingInputMode": "File"
},
"ResourceConfig": {
"InstanceCount": 1,
"InstanceType": "ml.m5.large",
"VolumeSizeInGB": 5
},
"StaticHyperParameters": {
"num_round": 750
},
"StoppingCondition": {
"MaxRuntimeInSeconds": 900
},
"InputDataConfig": [
{
"ChannelName": "train",
"CompressionType": "None",
"ContentType": "csv",
"DataSource": {
"S3DataSource": {
"S3DataDistributionType": "FullyReplicated",
"S3DataType": "S3Prefix",
"S3Uri": "..."
}
}
},
{
"ChannelName": "validation",
"CompressionType": "None",
"ContentType": "csv",
"DataSource": {
"S3DataSource": {
"S3DataDistributionType": "FullyReplicated",
"S3DataType": "S3Prefix",
"S3Uri": "..."
}
}
}
],
"OutputDataConfig": {
"S3OutputPath": "..."
},
"RoleArn": "arn:aws:iam::679298748479:role/landingzone_sagemaker_role"
},
"hyper_parameter_tuning_job_config": {
"HyperParameterTuningJobObjective": {
"MetricName": "validation:rmse",
"Type": "Minimize"
},
"Strategy": "Bayesian",
"ResourceLimits": {
"MaxParallelTrainingJobs": 2,
"MaxNumberOfTrainingJobs": 10
},
"ParameterRanges": {
"ContinuousParameterRanges": [
{
"Name": "eta",
"MinValue": 0.01,
"MaxValue": 0.04
},
{
"Name": "gamma",
"MinValue": 0,
"MaxValue": 100
},
{
"Name": "subsample",
"MinValue": 0.6,
"MaxValue": 1
},
{
"Name": "lambda",
"MinValue": 0,
"MaxValue": 5
},
{
"Name": "alpha",
"MinValue": 0,
"MaxValue": 2
}
],
"IntegerParameterRanges": [
{
"Name": "max_depth",
"MinValue": 5,
"MaxValue": 10
}
]
}
}
}
Here is the JSON that describes the State Machine:
{
"StartAt": "Generate Training Dataset",
"States": {
"Generate Training Dataset": {
"Resource": "arn:aws:lambda:uswest-2:012345678912:function:StepFunctionsSample-SageMaLambdaForDataGeneration-1TF67BUE5A12U",
"Type": "Task",
"Next": "Run training job"
},
"Run training job": {
"Resource": "arn:aws:states:::sagemaker:createHyperParameterTuningJob.sync",
"Parameters": {
"HyperParameterTuningJobName.$": "$.execution_date",
"HyperParameterTuningJobConfig": {
"HyperParameterTuningJobObjective": {
"MetricName": "$.hyper_parameter_tuning_job_config.HyperParameterTuningJobObjective.MetricName",
"Type": "Minimize"
},
"Strategy": "$.hyper_parameter_tuning_job_config.Strategy",
"ResourceLimits": {
"MaxParallelTrainingJobs": "$.hyper_parameter_tuning_job_config.ResourceLimits.MaxParallelTrainingJobs",
"MaxNumberOfTrainingJobs": "$.hyper_parameter_tuning_job_config.ResourceLimits.MaxNumberOfTrainingJobs"
},
"ParameterRanges": "$.hyper_parameter_tuning_job_config.ParameterRanges"
},
"TrainingJobDefinition": {
"AlgorithmSpecification": "$.training_job_definition.AlgorithmSpecification",
"StoppingCondition": "$.training_job_definition.StoppingCondition",
"ResourceConfig": "$.training_job_definition.ResourceConfig",
"RoleArn": "$.training_job_definition.RoleArn",
"InputDataConfig": "$.training_job_definition.InputDataConfig",
"OutputDataConfig": "$.training_job_definition.OutputDataConfig",
"StaticHyperParameters": "$.training_job_definition.StaticHyperParameters"
},
"HyperParameterTuningJobConfig.ResourceLimits": ""
},
"Type": "Task",
"End": true
}
}
}
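In Amazon States Language, a parameter is resolved from the state input only when its key ends in ".$"; without that suffix the right-hand side is passed as a literal string, which is presumably why 'MaxParallelTrainingJobs' arrives as the string "$.hyper_parameter_tuning_job_config.ResourceLimits.MaxParallelTrainingJobs" instead of an integer. A sketch of just the 'Run training job' state rewritten that way (assuming the values keep their JSON types in the task input, and with the stray "HyperParameterTuningJobConfig.ResourceLimits": "" entry dropped):

"Run training job": {
    "Type": "Task",
    "Resource": "arn:aws:states:::sagemaker:createHyperParameterTuningJob.sync",
    "Parameters": {
        "HyperParameterTuningJobName.$": "$.execution_date",
        "HyperParameterTuningJobConfig": {
            "HyperParameterTuningJobObjective": {
                "MetricName.$": "$.hyper_parameter_tuning_job_config.HyperParameterTuningJobObjective.MetricName",
                "Type": "Minimize"
            },
            "Strategy.$": "$.hyper_parameter_tuning_job_config.Strategy",
            "ResourceLimits": {
                "MaxParallelTrainingJobs.$": "$.hyper_parameter_tuning_job_config.ResourceLimits.MaxParallelTrainingJobs",
                "MaxNumberOfTrainingJobs.$": "$.hyper_parameter_tuning_job_config.ResourceLimits.MaxNumberOfTrainingJobs"
            },
            "ParameterRanges.$": "$.hyper_parameter_tuning_job_config.ParameterRanges"
        },
        "TrainingJobDefinition": {
            "AlgorithmSpecification.$": "$.training_job_definition.AlgorithmSpecification",
            "StoppingCondition.$": "$.training_job_definition.StoppingCondition",
            "ResourceConfig.$": "$.training_job_definition.ResourceConfig",
            "RoleArn.$": "$.training_job_definition.RoleArn",
            "InputDataConfig.$": "$.training_job_definition.InputDataConfig",
            "OutputDataConfig.$": "$.training_job_definition.OutputDataConfig",
            "StaticHyperParameters.$": "$.training_job_definition.StaticHyperParameters"
        }
    },
    "End": true
}

Whether SageMaker then accepts every field exactly as supplied in the input (for example the numeric parameter-range bounds) is something to verify against the CreateHyperParameterTuningJob API reference.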

Configure the LoadBalancer in AWS CloudWatch Alarm

I have a web application on AWS and I am trying to configure autoscaling based on the request count.
My AppLoadBalancer resource is as follows:
"AppLoadBalancer": {
"Properties": {
"LoadBalancerAttributes": [
{
"Key": "idle_timeout.timeout_seconds",
"Value": "60"
}
],
"Name": "sample-app-v1",
"Scheme": "internet-facing",
"SecurityGroups": [
"sg-1abcd234"
],
"Subnets": {
"Fn::FindInMap": [
"LoadBalancerSubnets",
{
"Ref": "AWS::Region"
},
"Subnets"
]
},
"Tags": [
{
"Key": "Name",
"Value": "sample-app-v1"
},
{
"Key": "StackName",
"Value": "sample-app"
},
{
"Key": "StackVersion",
"Value": "v1"
}
]
},
"Type": "AWS::ElasticLoadBalancingV2::LoadBalancer"
}
I am trying to configure a CloudWatch Alarm like this:
"RequestCountTooHighAlarm": {
"Properties": {
"AlarmActions": [
{
"Ref": "ScaleUp"
}
],
"AlarmDescription": "Scale-up if request count >= 8000 for last 5 minute",
"ComparisonOperator": "GreaterThanOrEqualToThreshold",
"Dimensions": [
{
"Name": "LoadBalancer",
"Value": [
{
"Fn::GetAtt": [
"AppLoadBalancer",
"LoadBalancerFullName"
]
}
]
}
],
"EvaluationPeriods": 1,
"MetricName": "RequestCount",
"Namespace": "AWS/ApplicationELB",
"OKActions": [
{
"Ref": "ScaleDown"
}
],
"Period": 300,
"Statistic": "SampleCount",
"Threshold": 8000
},
"Type": "AWS::CloudWatch::Alarm"
}
However, my stack creation keeps failing and I don't know what is wrong here. Here is the error I am getting:
ERROR: RequestCountTooHighAlarm CREATE_FAILED: Value of property Value must be of type String
ERROR: sample-app-v1 CREATE_FAILED: The following resource(s) failed to create: [RequestCountTooHighAlarm].
Can somebody suggest a fix?
The property mentioned requires a string. You have it defined as a list:
"Value": [
{
"Fn::GetAtt": [
"AppLoadBalancer",
"LoadBalancerFullName"
]
} ]
The [] brackets define a list in JSON. Remove the outer brackets from the Value field and use only the Fn::GetAtt portion; that call returns a string.
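Applied to the alarm above, the Dimensions entry would then look like this:

"Dimensions": [
    {
        "Name": "LoadBalancer",
        "Value": {
            "Fn::GetAtt": [
                "AppLoadBalancer",
                "LoadBalancerFullName"
            ]
        }
    }
]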

EC2Config Cloudwatch logs streaming not working

I was hoping someone could help with this: I'm trying to stream logs from a Windows Server 2012 instance with the EC2Config service installed.
I have followed this documentation:
https://aws.amazon.com/blogs/devops/using-cloudwatch-logs-with-amazon-ec2-running-microsoft-windows-server/
Unfortunately nothing is streaming to CloudWatch Logs.
Here is the JSON I'm using:
{
"EngineConfiguration": {
"PollInterval": "00:00:15",
"Components": [
{
"Id": "ApplicationEventLog",
"FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"LogName": "Application",
"Levels": "1"
}
},
{
"Id": "SystemEventLog",
"FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"LogName": "System",
"Levels": "7"
}
},
{
"Id": "SecurityEventLog",
"FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"LogName": "Security",
"Levels": "7"
}
},
{
"Id": "ETW",
"FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"LogName": "Microsoft-Windows-WinINet/Analytic",
"Levels": "7"
}
},
{
"Id": "IISLog",
"FullName": "AWS.EC2.Windows.CloudWatch.IISLogOutput,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"LogDirectoryPath": "C:\\inetpub\\logs\\LogFiles\\W3SVC1"
"AccessKey": "",
"SecretKey": "",
"Region": "eu-west-1",
"LogGroup": "Web-Logs",
"LogStream": "IIStest"
}
},
{
"Id": "CustomLogs",
"FullName": "AWS.EC2.Windows.CloudWatch.CustomLog.CustomLogInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"LogDirectoryPath": "C:\\CustomLogs\\",
"TimestampFormat": "MM/dd/yyyy HH:mm:ss",
"Encoding": "UTF-8",
"Filter": "",
"CultureName": "en-US",
"TimeZoneKind": "Local"
}
},
{
"Id": "PerformanceCounter",
"FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"CategoryName": "Memory",
"CounterName": "Available MBytes",
"InstanceName": "",
"MetricName": "Memory",
"Unit": "Megabytes",
"DimensionName": "",
"DimensionValue": ""
}
},
{
"Id": "CloudWatchLogs",
"FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
"Parameters": {
"AccessKey": "",
"SecretKey": "",
"Region": "eu-west-1",
"LogGroup": "Win2Test",
"LogStream": "logging-test"
}
},
{
"Id": "CloudWatch",
"FullName": "AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch",
"Parameters":
{
"AccessKey": "",
"SecretKey": "",
"Region": "eu-west-1",
"NameSpace": "Windows/Default"
}
}
],
"Flows": {
"Flows":
[
"(ApplicationEventLog,SystemEventLog),CloudWatchLogs",
"IISLog"
]
}
}
}
At the moment I only want to stream the IIS logs; from my understanding, the CloudWatch log group and log stream should be created automatically.
The issue is with the Flows section: the second component of the flow definition is missing. Instead of
"Flows": {
"Flows":
[
"(ApplicationEventLog,SystemEventLog),CloudWatchLogs",
"IISLog"
]
}
It should be
[
"(ApplicationEventLog,SystemEventLog),CloudWatchLogs",
"IISLog,CloudWatchLogs"
]
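Putting that back into the surrounding wrapper, the full Flows block for this configuration reads:

"Flows": {
    "Flows":
    [
        "(ApplicationEventLog,SystemEventLog),CloudWatchLogs",
        "IISLog,CloudWatchLogs"
    ]
}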
The Flows section defines the source and target for the components from the Components section: the first part is what/how to collect, and the second part is where to send it.
For example, in the following snippet ApplicationEventLog and SystemEventLog will be sent to CloudWatchLogs (which refers to the component with "Id": "CloudWatchLogs" defined in Components, not directly to the AWS CloudWatch service).
The second line defines a second flow, i.e. PerformanceCounter sent to CloudWatch1.
"Flows": {
"Flows":
[
"(ApplicationEventLog,SystemEventLog),CloudWatchLogs",
"PerformanceCounter,CloudWatch1"
]
}
Hope this explains how it resolved the issue.
Looks like I made a few mistakes in the JSON file itself, specifically in the Flows area.
Got this working now :)