Monitoring memory usage in AWS CloudWatch for a Windows instance

By default, memory usage isn’t monitored by CloudWatch. So I tried to add it to my Windows instance in AWS using these instructions.
This is what I did:
I created a user named custom-metrics-user and stored its access key and secret key.
I created and attached an inline policy to the user. It looks like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["cloudwatch:PutMetricData", "cloudwatch:GetMetricStatistics", "cloudwatch:ListMetrics", "ec2:DescribeTags"],
      "Resource": "*"
    }
  ]
}
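For reference, the user, its keys, and the inline policy can also be created programmatically; a minimal boto3 sketch (the user name matches the step above, the policy name is an assumption):

import json
import boto3

iam = boto3.client("iam")

# Create the user and an access key / secret key pair for it
iam.create_user(UserName="custom-metrics-user")
key = iam.create_access_key(UserName="custom-metrics-user")["AccessKey"]
print(key["AccessKeyId"], key["SecretAccessKey"])  # store these securely

# Attach the inline policy shown above (the policy name is hypothetical)
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "cloudwatch:PutMetricData",
            "cloudwatch:GetMetricStatistics",
            "cloudwatch:ListMetrics",
            "ec2:DescribeTags"
        ],
        "Resource": "*"
    }]
}
iam.put_user_policy(
    UserName="custom-metrics-user",
    PolicyName="custom-metrics-policy",
    PolicyDocument=json.dumps(policy)
)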
I launched a Windows instance [2012 R2 Base AMI]. After accessing the instance through RDP, I found that the AWS.EC2.Windows.CloudWatch.json file was already present.
I changed that .json file accordingly. After changing it, it looks like this:
{
  "EngineConfiguration": {
    "PollInterval": "00:00:15",
    "Components": [
      {
        "Id": "ApplicationEventLog",
        "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "LogName": "Application",
          "Levels": "1"
        }
      },
      {
        "Id": "SystemEventLog",
        "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "LogName": "System",
          "Levels": "7"
        }
      },
      {
        "Id": "SecurityEventLog",
        "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "LogName": "Security",
          "Levels": "7"
        }
      },
      {
        "Id": "ETW",
        "FullName": "AWS.EC2.Windows.CloudWatch.EventLog.EventLogInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "LogName": "Microsoft-Windows-WinINet/Analytic",
          "Levels": "7"
        }
      },
      {
        "Id": "IISLog",
        "FullName": "AWS.EC2.Windows.CloudWatch.IisLog.IisLogInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "LogDirectoryPath": "C:\\inetpub\\logs\\LogFiles\\W3SVC1"
        }
      },
      {
        "Id": "CustomLogs",
        "FullName": "AWS.EC2.Windows.CloudWatch.CustomLog.CustomLogInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "LogDirectoryPath": "C:\\CustomLogs\\",
          "TimestampFormat": "MM/dd/yyyy HH:mm:ss",
          "Encoding": "UTF-8",
          "Filter": "",
          "CultureName": "en-US",
          "TimeZoneKind": "Local"
        }
      },
      {
        "Id": "PerformanceCounter",
        "FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "CategoryName": "Memory",
          "CounterName": "Available MBytes",
          "InstanceName": "",
          "MetricName": "Memory",
          "Unit": "Megabytes",
          "DimensionName": "InstanceId",
          "DimensionValue": "{instance_id}"
        }
      },
      {
        "Id": "CloudWatchLogs",
        "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatchLogsOutput,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "AccessKey": "",
          "SecretKey": "",
          "Region": "us-east-1",
          "LogGroup": "Default-Log-Group",
          "LogStream": "{instance_id}"
        }
      },
      {
        "Id": "CloudWatch",
        "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch",
        "Parameters": {
          "AccessKey": "AKIAIK2U6EU675354BQ",
          "SecretKey": "nPyk9ntdwW0y5oaw8353fsdfTi0e5/imx5Q09vz",
          "Region": "us-east-1",
          "NameSpace": "System/Windows"
        }
      }
    ],
    "Flows": {
      "Flows": [
        "PerformanceCounter,CloudWatch"
      ]
    }
  }
}
I enabled CloudWatch Logs integration under EC2Config Settings.
I restarted the EC2Config service.
I got no errors, but the Memory metric isn't shown in the CloudWatch console. The blog says to wait 10-15 minutes for the metric to appear, but it has already been an hour. What's going wrong?
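For anyone debugging the same situation, a quick way to check whether the counter ever reached CloudWatch is to list the metrics in the configured namespace; a minimal boto3 sketch (the namespace and metric name match the config above):

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# If the config worked, this returns one metric per InstanceId dimension value
response = cloudwatch.list_metrics(Namespace="System/Windows", MetricName="Memory")
for metric in response["Metrics"]:
    print(metric["MetricName"], metric["Dimensions"])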

First, you need to add an IAM role to your instance:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToSSM",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Note that you cannot add a role to an existing instance. So do it before launching.
Then you need to configure the EC2Config file, normally accessible at the following path:
C:\Program Files\Amazon\Ec2ConfigService\Settings.AWS.EC2.Windows.CloudWatch.json
You should add the following blocks to the JSON file:
...
{
  "Id": "PerformanceCounter",
  "FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
  "Parameters": {
    "CategoryName": "Memory",
    "CounterName": "Available MBytes",
    "InstanceName": "",
    "MetricName": "Memory",
    "Unit": "Megabytes",
    "DimensionName": "InstanceId",
    "DimensionValue": "{instance_id}"
  }
}
...
{
  "Id": "CloudWatch",
  "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch",
  "Parameters": {
    "AccessKey": "",
    "SecretKey": "",
    "Region": "eu-west-1",
    "NameSpace": "PerformanceMonitor"
  }
}
Do not forget to restart the EC2Config service on your server after changing the config file. You should be able to get the memory metrics after a couple of minutes in your CloudWatch console.
The level of CloudWatch monitoring on your instance should also be set to detailed.
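Detailed monitoring can be turned on from the EC2 console or programmatically; a minimal boto3 sketch (the instance id is a placeholder):

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Switch the instance from basic (5-minute) to detailed (1-minute) monitoring
ec2.monitor_instances(InstanceIds=["i-0123456789abcdef0"])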
Update:
According to the documentation, you can now attach an IAM role to an existing instance, or modify the one already attached.
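Attaching a role to a running instance works by associating an instance profile that contains the role; a minimal boto3 sketch (the profile name and instance id are placeholders):

import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")

# Associate an existing instance profile (which wraps the IAM role) with a running instance
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "cloudwatch-metrics-profile"},  # hypothetical profile name
    InstanceId="i-0123456789abcdef0"
)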

I am running a Windows Server 2012 R2 Base instance, and it is running an EC2Config version greater than 4.0. If anyone faces the same problem, restart the Amazon SSM Agent service after restarting the EC2Config service.
I read this at the following link (Step 6):
http://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/send_logs_to_cwl.html
It reads:
If you are running EC2Config version 4.0 or later, then you must restart the SSM Agent on the instance from the Microsoft Services snap-in.
I solved my issue by doing this.

Related

Get errorType:OK when trying to deploy function

I'm struggling to deploy my cloud function. I'm unsure what information to provide. My set up:
# main.py
def callRequest():
    print("bla")
    return 1
Entry point for the function is callRequest.
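For reference, a 1st gen Python function triggered by a Pub/Sub topic (the trigger type shown in the audit log below) is normally declared with event and context parameters; a minimal sketch of that shape, with an illustrative body only:

# main.py
import base64

def callRequest(event, context):
    """Background function triggered by a Pub/Sub message (1st gen runtime)."""
    # event["data"] carries the base64-encoded Pub/Sub payload when one is present
    payload = base64.b64decode(event["data"]).decode("utf-8") if "data" in event else ""
    print(f"Received message: {payload}")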
After failing to deploy I see this red highlighted message under details:
Deployment failure:
Build failed: {"metrics":{},"error":{"buildpackId":"","buildpackVersion":"","errorType":"OK","canonicalCode":"OK","errorId":"","errorMessage":""},"stats":[{"buildpackId":"google.utils.archive-source","buildpackVersion":"0.0.1","totalDurationMs":47,"userDurationMs":46},{"buildpackId":"google.python.runtime","buildpackVersion":"0.9.1","totalDurationMs":9487,"userDurationMs":6307},{"buildpackId":"google.python.functions-framework","buildpackVersion":"0.9.6","totalDurationMs":53,"userDurationMs":52},{"buildpackId":"google.python.pip","buildpackVersion":"0.9.2","totalDurationMs":5832,"userDurationMs":5822},{"buildpackId":"google.utils.label","buildpackVersion":"0.0.2","totalDurationMs":0,"userDurationMs":0}],"warnings":null,"customImage":false}
In the logs I see a notice related to the attempted deploy:
{
"protoPayload": {
"#type": "type.googleapis.com/google.cloud.audit.AuditLog",
"authenticationInfo": {
"principalEmail": "myname#bla.com"
},
"requestMetadata": {
"callerIp": "152.170.106.184",
"callerSuppliedUserAgent": "Mozilla/5.0 (X11; Linux x86_64; rv:108.0) Gecko/20100101 Firefox/108.0,gzip(gfe),gzip(gfe)",
"requestAttributes": {
"time": "2023-01-11T13:15:30.667011Z",
"auth": {}
},
"destinationAttributes": {}
},
"serviceName": "cloudfunctions.googleapis.com",
"methodName": "google.cloud.functions.v1.CloudFunctionsService.UpdateFunction",
"authorizationInfo": [
{
"resource": "projects/my-project/locations/us-central1/functions/ga4-to-s3-1",
"permission": "cloudfunctions.functions.update",
"granted": true,
"resourceAttributes": {}
}
],
"resourceName": "projects/my-project/locations/us-central1/functions/ga4-to-s3-1",
"request": {
"updateMask": "entryPoint,sourceUploadUrl",
"#type": "type.googleapis.com/google.cloud.functions.v1.UpdateFunctionRequest",
"function": {
"name": "projects/my-project/locations/us-central1/functions/ga4-to-s3-1",
"runtime": "python39",
"serviceAccountEmail": "my-project#appspot.gserviceaccount.com",
"availableMemoryMb": 256,
"maxInstances": 3000,
"timeout": "60s",
"eventTrigger": {
"eventType": "google.pubsub.topic.publish",
"resource": "projects/my-project/topics/ga4-daily-extract-complete"
},
"secretEnvironmentVariables": [
{
"version": "latest",
"key": "PAT",
"secret": "PAT-GA4-S3-Extract",
"projectId": "1234567"
}
],
"sourceUploadUrl": "https://storage.googleapis.com/uploads-1234567.us-central1.cloudfunctions.appspot.com/123-456-789-abc-def.zip?GoogleAccessId=service-123456789#gcf-admin-robot.iam.gserviceaccount.com&Expires=12345&Signature=kjhgfghjkjhg%iuytfrghj8765467uhgfdfghj",
"entryPoint": "callRequest",
"ingressSettings": "ALLOW_ALL"
}
},
"resourceLocation": {
"currentLocations": [
"us-central1"
]
}
},
"insertId": "nlbq4xd9dhq",
"resource": {
"type": "cloud_function",
"labels": {
"project_id": "my-project",
"function_name": "ga4-to-s3-1",
"region": "us-central1"
}
},
"timestamp": "2023-01-11T13:15:30.423213Z",
"severity": "NOTICE",
"logName": "projects/my-project/logs/cloudaudit.googleapis.com%2Factivity",
"operation": {
"id": "operations/Z2E0LWV4dHJhY3QvdXMtY2VudHJhbDEvZ2E0LXRvLXMzLTEvbHA2QlowNzBTekk",
"producer": "cloudfunctions.googleapis.com",
"first": true
},
"receiveTimestamp": "2023-01-11T13:15:31.626931279Z"
}
I'm unsure where else to look. Any pointers or advice are most welcome.
I found a similar issue discussed here, and the issue was resolved:
the Cloud Build service account was missing the Cloud Build Service Account role.
I tried removing the Cloud Build Service Account role and deploying the function, and I got the same deployment errors.
Try adding the Cloud Build Service Account role for the Google Cloud Build service account (project-number@cloudbuild.gserviceaccount.com) in the Google Cloud IAM console. This fixed the symptom of a Cloud Function deploy failing with the message:
message=Build failed: {
"metrics":{},
"error":{
"buildpackId":"",
"buildpackVersion":"",
"errorType":"OK",
"canonicalCode":"OK",
"errorId":"",
"errorMessage":""
}
}
Also have a look at these GitHub issues (link1 and link2), which might help.

How to switch roles on AWS Console while requiring sts:RoleSessionName?

I have two AWS accounts, A and B. I authenticate to account A using SAML, then to access account B I switch roles. The setup worked well until I tried to enforce that, when switching roles, users must provide their AWS username as the RoleSessionName. With that requirement in place, switching roles using the AWS CLI works fine, but switching roles in the AWS Console stops working.
Here's the role trust policy that works on both cli and console:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::84...10:root"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Enforcing that the RoleSessionName be the AWS username means to change the policy like this:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::84...10:root"
      },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringLike": {
          "sts:RoleSessionName": "${aws:username}"
        }
      }
    }
  ]
}
Note: if instead of "sts:RoleSessionName": "${aws:username}" I do "sts:RoleSessionName": "*", then I can switch roles on the AWS console, but I can't find a way to figure out a pattern that includes the user name and works. I tried *${aws:username}, ${aws:username}*, and *${aws:username}*.
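For comparison, the working CLI/SDK path passes an explicit session name that satisfies the condition; a minimal boto3 sketch (the role ARN is a placeholder, and the session name follows the pattern seen in the events below):

import boto3

sts = boto3.client("sts")

# RoleSessionName has to match the sts:RoleSessionName condition in the trust policy
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/my-role",
    RoleSessionName="me@email.com"
)
credentials = response["Credentials"]
print(credentials["AccessKeyId"], credentials["Expiration"])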
Once I enforce the sts:RoleSessionName condition on the role trust policy, I can no longer switch roles in the AWS Console. What I see in the CloudTrail events (only visible in the us-east-1 region) is that switching the role in the AWS Console is recorded as two events: a SwitchRole followed by an AssumeRole.
When I switch roles in the AWS console with the first policy, I see the two events in CloudTrail.
Successful SwitchRole event:
{
"eventVersion": "1.08",
"userIdentity": {
"type": "AssumedRole",
"principalId": "ARO...BYP:me#email.com",
"arn": "arn:aws:sts::84...10:assumed-role/my-saml-role/me#email.com",
"accountId": "84...10"
},
"eventTime": "2022-12-21T19:34:55Z",
"eventSource": "signin.amazonaws.com",
"eventName": "SwitchRole",
"awsRegion": "us-east-1",
"sourceIPAddress": "4...0",
"userAgent": "Mozilla/5.0 ...",
"requestParameters": null,
"responseElements": {
"SwitchRole": "Success"
},
"additionalEventData": {
"RedirectTo": "https://us-east-1.console.aws.amazon.com/cloudtrail/home?region=us-east-1#/events?ReadOnly=false",
"SwitchTo": "arn:aws:iam::13...20:role/my-role"
},
"eventID": "ef759d28-37cb-4ece-af7d-3d7c5326691d",
"readOnly": false,
"eventType": "AwsConsoleSignIn",
"managementEvent": true,
"recipientAccountId": "84...10",
"eventCategory": "Management",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
"clientProvidedHostHeader": "signin.aws.amazon.com"
}
}
Successful AssumeRole event:
{
"eventVersion": "1.08",
"userIdentity": {
"type": "AssumedRole",
"principalId": "ARO...BYP:me#email.com",
"arn": "arn:aws:sts::84...10:assumed-role/my-saml-role/me#email.com",
"accountId": "84...10",
"accessKeyId": "ASIA...OA",
"sessionContext": {
"sessionIssuer": {
"type": "Role",
"principalId": "AR...BYP",
"arn": "arn:aws:sts::84...10:assumed-role/my-saml-role",
"accountId": "84...10",
"userName": "my-saml-role"
},
"webIdFederationData": {},
"attributes": {
"creationDate": "2022-12-21T19:10:25Z",
"mfaAuthenticated": "false"
}
}
},
"eventTime": "2022-12-21T19:34:55Z",
"eventSource": "sts.amazonaws.com",
"eventName": "AssumeRole",
"awsRegion": "us-east-1",
"sourceIPAddress": "4...0",
"userAgent": "AWS Signin, aws-internal/3 aws-sdk-java/1.12.339 Linux/5.4.215-mr.86.metal1.x86_64 OpenJDK_64-Bit_Server_VM/25.352-b09 java/1.8.0_352 kotlin/1.3.72 vendor/Oracle_Corporation cfg/retry-mode/standard",
"requestParameters": {
"roleArn": "arn:aws:iam::13...20:role/my-role",
"roleSessionName": "me#email.com"
},
"responseElements": {
"credentials": {
"accessKeyId": "AS...FJ",
"sessionToken": "IQ...VMRCMyvsQ==",
"expiration": "Dec 21, 2022, 8:34:55 PM"
},
"assumedRoleUser": {
"assumedRoleId": "AR...PX:me#email.com",
"arn": "arn:aws:sts::13...20:assumed-role/my-role/me#email.com"
}
},
"requestID": "6d9b977a-e489-4261-97b5-48c0167d82ea",
"eventID": "1f7178bd-046b-4e20-b264-dd86b8753a45",
"readOnly": true,
"resources": [
{
"accountId": "13...20",
"type": "AWS::IAM::Role",
"ARN": "arn:aws:iam::13...20:role/my-role"
}
],
"eventType": "AwsApiCall",
"managementEvent": true,
"recipientAccountId": "84...10",
"sharedEventID": "e49297fc-883e-4160-86bc-d157bc58a3ee",
"eventCategory": "Management",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
"clientProvidedHostHeader": "sts.us-east-1.amazonaws.com"
}
}
When I enforce the sts:RoleSessionName requirement, then the SwitchRole event fails with error switchrole.error.invalidparams.
Failed SwitchRole event:
{
"eventVersion": "1.08",
"userIdentity": {
"type": "AssumedRole",
"principalId": "AR..YP:me#email.com",
"arn": "arn:aws:sts::84...10:assumed-role/my-saml-role/me#email.com",
"accountId": "84...10"
},
"eventTime": "2022-12-21T19:25:23Z",
"eventSource": "signin.amazonaws.com",
"eventName": "SwitchRole",
"awsRegion": "us-east-1",
"sourceIPAddress": "4...0",
"userAgent": "Mozilla/5.0 ...",
"errorMessage": "switchrole.error.invalidparams",
"requestParameters": null,
"responseElements": {
"SwitchRole": "Failure"
},
"additionalEventData": {
"RedirectTo": "https://eu-central-1.console.aws.amazon.com/cloudtrail/home?region=eu-central-1#/events?ReadOnly=false",
"SwitchTo": "arn:aws:iam::13..20:role/my-role"
},
"eventID": "76de870e-7e75-43a9-a908-d34616022553",
"readOnly": false,
"eventType": "AwsConsoleSignIn",
"managementEvent": true,
"recipientAccountId": "84...10",
"eventCategory": "Management",
"tlsDetails": {
"tlsVersion": "TLSv1.2",
"cipherSuite": "ECDHE-RSA-AES128-GCM-SHA256",
"clientProvidedHostHeader": "signin.aws.amazon.com"
}
}
The AWS docs mention that sts:RoleSessionName is available when assuming a role in the AWS Console.
sts:RoleSessionName Works with string operators.
Use this key to compare the session name that a principal specifies
when assuming a role with the value that is specified in the policy.
Availability – This key is present in the request when the principal
assumes the role using the AWS Management Console, any assume-role CLI
command, or any AWS STS AssumeRole API operation.
Src: https://docs.aws.amazon.com/IAM/latest/UserGuide/reference_policies_iam-condition-keys.html#condition-keys-sts

Cross-region code deploy error (AWS CodePipeline)

I am trying to do a cross-region deploy from ap-southeast-1 to ap-northeast-1 using AWS CodePipeline.
But the error below occurs in the deploy phase.
I have given S3 full access to CodeStarWorker-test-ToolChain.
ReplicationStatus Replication of artifact 'test-BuildArtifact'
failed: Failed replicating artifact from
source_backet in ap-southeast-1
to dest_backet in ap-northeast-1: Check source and destination
artifact buckets exist and
arn:aws:iam::xxxxxxx:role/CodeStarWorker-test-ToolChain has
permission to access it.
I set the config below for the deploy stage of the pipeline.
Does anyone have the same error and know of a resolution?
{
"name": "Deploy",
"actions": [
{
"region": "ap-northeast-1",
"inputArtifacts": [
{
"name": "test-BuildArtifact"
}
],
"name": "GenerateChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CloudFormation"
},
"outputArtifacts": [],
"configuration": {
"ActionMode": "CHANGE_SET_REPLACE",
"ChangeSetName": "pipeline-changeset",
"RoleArn": "arn:aws:iam:: xxxxxxx:role/CodeStarWorker-test-CloudFormation",
"Capabilities": "CAPABILITY_NAMED_IAM",
"StackName": "awscodestar-test-lambda",
"ParameterOverrides": "{\"ProjectId\":\"test2\",
\"CodeDeployRole\":\"arn:aws:iam:: xxxxxxx:role/CodeStarWorker-test-CodeDeploy\"}",
"TemplateConfiguration": "test-BuildArtifact::template-configuration.json",
"TemplatePath": "test-BuildArtifact::template.yml"
},
"runOrder": 1
},
{
"region": "ap-northeast-1",
"inputArtifacts": [],
"name": "ExecuteChangeSet",
"actionTypeId": {
"category": "Deploy",
"owner": "AWS",
"version": "1",
"provider": "CloudFormation"
},
"outputArtifacts": [],
"configuration": {
"StackName": "awscodestar-test-lambda",
"ActionMode": "CHANGE_SET_EXECUTE",
"ChangeSetName": "pipeline-changeset"
},
"runOrder": 2
}
]
}
],
"artifactStores": {
"ap-southeast-1": {
"type": "S3",
"location": "source_backet"
},
"ap-northeast-1": {
"type": "S3",
"location": "dest_backet"
}
},
"name": "test-Pipeline",
"version": 1
}
When I've seen this error it's been one of two things:
1. You don't have your S3 bucket replicated to the bucket being used in the second region. https://docs.aws.amazon.com/AmazonS3/latest/dev/crr.html
2. Your step is running before the replication is complete.
If it's the latter, I'm always able to re-run the step and it succeeds. It seems to be an issue with the S3 replication not moving fast enough.
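If it is the replication-timing case, the failed stage can also be retried programmatically instead of from the console; a minimal boto3 sketch (the pipeline and stage names match the config above, and the region is assumed to be the pipeline's home region):

import boto3

codepipeline = boto3.client("codepipeline", region_name="ap-southeast-1")

# Find the latest execution of the failed stage, then retry only its failed actions
state = codepipeline.get_pipeline_state(name="test-Pipeline")
deploy_stage = next(s for s in state["stageStates"] if s["stageName"] == "Deploy")
execution_id = deploy_stage["latestExecution"]["pipelineExecutionId"]

codepipeline.retry_stage_execution(
    pipelineName="test-Pipeline",
    stageName="Deploy",
    pipelineExecutionId=execution_id,
    retryMode="FAILED_ACTIONS"
)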

Lambda monitoring using AWS QuickSight

I have a few Lambdas that use several other services, like SSM, Athena, DynamoDB, S3, SQS, and SNS. I am almost done with development and would like to monitor everything visually. I use X-Ray and CloudWatch for my regular log monitoring and analysis, but I feel CloudWatch dashboards are not an efficient way to visualize workloads that span multiple services. So I wrote a Lambda that pulls trace data from my X-Ray traces and outputs a nested JSON file, something like below.
[
{
"id": "4707a33e472",
"name": "test-lambda",
"start_time": 1524714634.098,
"end_time": 1524714672.046,
"parent_id": "1b9122bc",
"aws": {
"function_arn": "arn:aws:lambda:us-east-1:9684596:function:test-lambda",
"resource_names": [
"test-lambda"
],
"account_id": "9684596"
},
"trace_id": "1-5ae14c88-41dca52ccec8c7d",
"origin": "AWS::Lambda::Function",
"subsegments": [
{
"id": "ab6420197c",
"name": "S3",
"start_time": 1524714671.7148032,
"end_time": 1524714671.8333395,
"http": {
"response": {
"status": 200
}
},
"aws": {
"id_2": "No9Gemg5b9Y2XREorBG+6a1KLXX7S6O3HtPZ3f6vUuU5F1dQE0nIE1WmwmRRHIqCjI=",
"operation": "DeleteObjects",
"region": "us-east-1",
"request_id": "E2709BB91B8"
},
"namespace": "aws"
},
{
"id": "370e11d6d",
"name": "SSM",
"start_time": 1524714634.0991564,
"end_time": 1524714634.194922,
"http": {
"response": {
"status": 200
}
},
"aws": {
"operation": "GetParameter",
"region": "us-east-1",
"request_id": "f901ed67-4904-bde0-f9ad15cc558b"
},
"namespace": "aws"
},
{
"id": "8423bf21354",
"name": "DynamoDB",
"start_time": 1524714671.9744427,
"end_time": 1524714671.981935,
"http": {
"response": {
"status": 200
}
},
"aws": {
"operation": "UpdateItem",
"region": "us-east-1",
"request_id": "3AHBI44JRJ2UJ72V88CJPV5L4JVV4K6Q9ASUAAJG",
"table_name": "test-dynamodb",
"resource_names": [
"test-dynamodb"
]
},
I only posted the first few lines of the X-Ray trace JSON output, as it's too large to post here. AWS QuickSight doesn't support nested JSON. My question is: is there a better way to visualize all my Lambdas using QuickSight? I am not allowed to use other third-party monitoring systems.
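One common workaround, since QuickSight handles flat CSV/JSON well, is to flatten the trace before handing it over: one row per subsegment, with the parent function's fields repeated. A minimal sketch, assuming the trace dump above is saved as traces.json and the result goes to flat_traces.csv (both file names are placeholders):

import csv
import json

# Load the nested X-Ray trace dump produced by the Lambda
with open("traces.json") as f:
    traces = json.load(f)

rows = []
for segment in traces:
    for sub in segment.get("subsegments", []):
        aws_info = sub.get("aws", {})
        rows.append({
            "trace_id": segment.get("trace_id"),
            "function_name": segment.get("name"),
            "service": sub.get("name"),
            "operation": aws_info.get("operation"),
            "region": aws_info.get("region"),
            "duration_s": round(sub["end_time"] - sub["start_time"], 4),
            "status": sub.get("http", {}).get("response", {}).get("status")
        })

# Write a flat CSV that QuickSight can use directly as a data source
with open("flat_traces.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)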

Unable to monitor Free Disk Space for Windows Instances using Custom CloudWatch Metrics

I have created a user and attached the following inline policy to it:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowAccessToSSM",
      "Effect": "Allow",
      "Action": [
        "cloudwatch:PutMetricData",
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:DescribeLogGroups",
        "logs:DescribeLogStreams",
        "logs:PutLogEvents"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}
Then I successfully monitored the available memory by making the following changes to the .json file:
...
{
  "Id": "PerformanceCounterMemory",
  "FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
  "Parameters": {
    "CategoryName": "Memory",
    "CounterName": "Available MBytes",
    "InstanceName": "",
    "MetricName": "Memory",
    "Unit": "Megabytes",
    "DimensionName": "InstanceId",
    "DimensionValue": "{instance_id}"
  }
},
{
  "Id": "CloudWatch",
  "FullName": "AWS.EC2.Windows.CloudWatch.CloudWatch.CloudWatchOutputComponent,AWS.EC2.Windows.CloudWatch",
  "Parameters": {
    "AccessKey": "xxxxxxxxxxxxxxxxxxx",
    "SecretKey": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
    "Region": "us-east-1",
    "NameSpace": "Windows/Demo"
  }
}
"Flows": {
  "Flows": [
    "PerformanceCounterMemory,CloudWatch"
  ]
}
...
After editing the file this way, I enabled the CloudWatch integration checkbox in EC2Config Settings.
Next, I restarted both the EC2Config and Amazon SSM Agent services.
I could then see the Memory metric in my CloudWatch console.
Now I want to monitor the available disk space too.
For that, I added this part to my .json file:
{
  "Id": "PerformanceCounterDisk",
  "FullName": "AWS.EC2.Windows.CloudWatch.PerformanceCounterComponent.PerformanceCounterInputComponent,AWS.EC2.Windows.CloudWatch",
  "Parameters": {
    "CategoryName": "LogicalDisk",
    "CounterName": "% Free Space",
    "InstanceName": "C:",
    "MetricName": "FreeDisk",
    "Unit": "Percent",
    "DimensionName": "InstanceId",
    "DimensionValue": "{instance_id}"
  }
},
"Flows": {
  "Flows": [
    "(PerformanceCounterMemory,PerformanceCounterDisk),CloudWatch"
  ]
}
After doing this, I restarted both the EC2Config and Amazon SSM Agent services, but I can't see this metric under my namespace. Only memory is shown, not disk space.
What mistake have I made?
I just changed
"InstanceName": "C:",
to
"InstanceName": "_Total",
After some time, the free disk metric showed up.
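Once the metric shows up, it can be read back like any other custom metric; a minimal boto3 sketch (the namespace and metric name match the config above, the instance id is a placeholder):

from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Average free disk percentage over the last hour, in 5-minute periods
stats = cloudwatch.get_metric_statistics(
    Namespace="Windows/Demo",
    MetricName="FreeDisk",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average"]
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])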