Can I add a text field to an AWS CloudWatch dashboard to substitute a value into dashboard widgets? - amazon-web-services

I have an AWS CloudWatch dashboard which monitors aspects of my datacenter infrastructure on AWS. I would like to insert a text field into this dashboard to parameterize the queries and titles of its widgets, so I can easily run slightly different queries by entering different values into the text field.
Can this be done? If so, how?

Related

AWS Grafana sees identical data for all custom metrics

I have created some custom metric filters for a CloudTrail log group, 11 in total.
Each metric filter matches multiple related events (one is for IAM changes, another is for user sign-in activity, etc.).
I want to log each time one of these metric filters captures an event and show it on an AWS Grafana dashboard.
I have the CDK to deploy the metric filters; they show up in CloudWatch and I can see them graphing events in the AWS Console.
I can even add the data source and the correct permissions to access it from AWS Grafana.
It's only when I go to render the results onto the dashboard panel that I see a problem: all of the metrics have the same data.
I have tried adding all of the metrics, and they all show the same data. I have included some screenshots to demonstrate the issue.

Why are some metrics missing in the CloudWatch metrics view?

I am using the CloudWatch metrics view to look at DynamoDB metrics. When I search for ReadThrottleEvents, only a few tables or indexes are shown in the list. I wonder why the metrics are not visible for all tables? Is there any configuration I need to set in order to view them?
Below is a screenshot of searching for this metric; I expect every table and index to be shown in the list, but I only got 2 results.
If there is no data, they don't show:
Metrics that have not had any new data points in the past two weeks do not appear in the console. They also do not appear when you type their metric name or dimension names in the search box in the All metrics tab in the console, and they are not returned in the results of a list-metrics command. The best way to retrieve these metrics is with the get-metric-data or get-metric-statistics commands in the AWS CLI.
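As a sketch of the workaround the documentation describes, the snippet below builds a get_metric_statistics request for a table's ReadThrottleEvents, which will return data even when the metric has fallen out of the console search. The table name my-table, the 24-hour window, and the 5-minute period are illustrative assumptions, not values from the question.

```python
import datetime

def read_throttle_params(table_name, hours=24):
    """Build the get_metric_statistics kwargs for a DynamoDB table's
    ReadThrottleEvents metric, even if the metric no longer appears
    in the console search (no data points in the past two weeks)."""
    now = datetime.datetime.now(datetime.timezone.utc)
    return {
        "Namespace": "AWS/DynamoDB",
        "MetricName": "ReadThrottleEvents",
        "Dimensions": [{"Name": "TableName", "Value": table_name}],
        "StartTime": now - datetime.timedelta(hours=hours),
        "EndTime": now,
        "Period": 300,           # 5-minute buckets
        "Statistics": ["Sum"],
    }

# With AWS credentials configured, the actual call would be:
#   import boto3
#   cw = boto3.client("cloudwatch")
#   resp = cw.get_metric_statistics(**read_throttle_params("my-table"))
#   for dp in sorted(resp["Datapoints"], key=lambda d: d["Timestamp"]):
#       print(dp["Timestamp"], dp["Sum"])
```

An empty Datapoints list here means the table genuinely recorded no throttle events in the window, which is a different situation from the metric merely being hidden in the console.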

AWS CloudWatch | Export logs to EC2 server

I use the CloudWatch service to monitor logs for my running EC2 instances, but the CloudWatch web console does not seem to have a button that lets you download/export the log data.
Any ideas how I can achieve this goal through CLI or GUI?
Programmatically, using boto3 (Python):
log_client = boto3.client('logs')
result_1 = log_client.describe_log_streams(logGroupName='<NAME>')
(I don't know what log group names for EC2 instances look like; for Lambda they are of the form '/aws/lambda/FuncName'. Try grabbing the names you see in the console).
result_1 contains two useful keys: logStreams (the result you want) and nextToken (for pagination, I'll let you look up the usage).
Now result_1['logStreams'] is a list of objects containing a logStreamName. Also useful are firstEventTimestamp and lastEventTimestamp.
Now that you have log stream names, you can use
log_client.get_log_events(logGroupName='<name>', logStreamName='<name>')
The response contains nextForwardToken and nextBackwardToken for pagination, and events for the log events you want. Each event contains a timestamp and a message.
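Putting the steps above together, here is a minimal sketch of the pagination loop. The stopping condition (the forward token repeating) is how the API signals the end of the stream; the group and stream placeholders are, as in the snippets above, for you to fill in.

```python
def fetch_all_events(client, group_name, stream_name):
    """Page through get_log_events until the forward token stops
    changing, collecting every event in the stream."""
    events, token = [], None
    while True:
        kwargs = {"logGroupName": group_name,
                  "logStreamName": stream_name,
                  "startFromHead": True}
        if token:
            kwargs["nextToken"] = token
        resp = client.get_log_events(**kwargs)
        events.extend(resp["events"])
        # CloudWatch signals "no more data" by returning the same token.
        if resp["nextForwardToken"] == token:
            break
        token = resp["nextForwardToken"]
    return events

# Usage (requires AWS credentials):
#   import boto3
#   client = boto3.client("logs")
#   for e in fetch_all_events(client, "<GROUP>", "<STREAM>"):
#       print(e["timestamp"], e["message"])
```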
I'll leave it to you to look up the API to see what other parameters might be useful to you. By the way, the console will let you export your logs to an S3 bucket or stream them to AWS's Elasticsearch service. Elasticsearch is a joy to use, and Kibana's UI is intuitive enough that you can get results even without learning its query language.
You can use the console or the AWS CLI to export CloudWatch logs to Amazon S3. You do need to know the log group name, the from and to timestamps, and the destination bucket and prefix; Amazon recommends a separate S3 bucket for your logs. Once you have a bucket, you create an export task in the console: Navigation - Logs - select your log group - Actions - Export data to S3 - fill in the details for your export - select Export data. Amazon's documentation explains it pretty well: http://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3Export.html. CLI instructions are there too if you want to use that. With the CLI you could also script your export, but you would have to set the variables so you don't overwrite an existing export.
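For the scripted route, a sketch of building the create_export_task call with boto3; note that the API wants the from/to timestamps as milliseconds since the epoch. The group, bucket, and prefix names here are placeholders, not values from the question.

```python
import datetime

def export_task_params(group, bucket, prefix, start, end):
    """Build the kwargs for logs.create_export_task. CloudWatch Logs
    expects fromTime/to as integer milliseconds since the epoch."""
    to_ms = lambda dt: int(dt.timestamp() * 1000)
    return {
        "logGroupName": group,
        "fromTime": to_ms(start),
        "to": to_ms(end),
        "destination": bucket,         # S3 bucket name
        "destinationPrefix": prefix,   # e.g. per-day prefix to avoid overwrites
    }

# With credentials configured:
#   import boto3
#   logs = boto3.client("logs")
#   start = datetime.datetime(2023, 1, 1, tzinfo=datetime.timezone.utc)
#   end = datetime.datetime(2023, 1, 2, tzinfo=datetime.timezone.utc)
#   task = logs.create_export_task(
#       **export_task_params("<GROUP>", "my-log-bucket", "exports/2023-01-01", start, end))
```

Varying the destinationPrefix per run (for example by date) is one simple way to avoid overwriting an earlier export.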
If this is part of your overall AWS disaster recovery planning, you might want to check out some tips & best practices, such as Amazon's white paper on AWS disaster recovery, and NetApp's discussion of using the cloud for disaster recovery.

Cloudwatch data logs to create a custom dashboard

So I am working on creating my own dashboard for AWS instances, and I am trying to determine whether there is any way to get the AWS CloudWatch metrics data so that I can plot it in a graph.
I have been working with the AWS CLI but wasn't able to find a clean way to resolve my query.
I just need the metrics like
CPU utilization vs time
Disk Utilization vs time
Network Out/In vs time
etc
The AWS CloudWatch API can do this for you; the two actions you may need are:
GetMetricStatistics: get time-series data for one or more statistics of a given MetricName.
CLI reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/cli/cli-mon-get-stats.html
API docs: http://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricStatistics.html
ListMetrics: lists the names, namespaces, and dimensions of the metrics associated with your AWS account. You can filter metrics by using any combination of metric name, namespace, or dimensions.
CLI reference: http://docs.aws.amazon.com/AmazonCloudWatch/latest/cli/cli-mon-list-metrics.html
API docs: http://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_ListMetrics.html
Apart from the CLI there are also a bunch of SDKs for different languages (Java, .NET, Ruby, JavaScript etc.) that you can use to call AWS APIs. You can find these in the official AWS github repo: https://github.com/aws
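To connect the two actions above to the "CPU utilization vs time" goal, here is a small sketch using the Python SDK. The helper reorders the Datapoints list (which the API does not guarantee to be sorted) into a time series; the instance ID, window, and period in the commented call are illustrative assumptions.

```python
def to_series(datapoints, stat="Average"):
    """Turn the unordered Datapoints list from get_metric_statistics
    into a (timestamp, value) series sorted by time, ready to plot."""
    points = sorted(datapoints, key=lambda d: d["Timestamp"])
    return [(d["Timestamp"], d[stat]) for d in points]

# With credentials configured, fetching CPU utilization vs time looks like:
#   import boto3, datetime
#   cw = boto3.client("cloudwatch")
#   now = datetime.datetime.now(datetime.timezone.utc)
#   resp = cw.get_metric_statistics(
#       Namespace="AWS/EC2",
#       MetricName="CPUUtilization",
#       Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
#       StartTime=now - datetime.timedelta(hours=3),
#       EndTime=now,
#       Period=300,
#       Statistics=["Average"],
#   )
#   series = to_series(resp["Datapoints"])
```

The same call with MetricName="DiskReadBytes" or "NetworkIn"/"NetworkOut" covers the other metrics in the question's list.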

DynamoDB Table Missing?

I had created a simple table in DynamoDB called userId; I could view it in the AWS console and query it through some Java on my local machine. This morning, however, I could no longer see the table in the DynamoDB dashboard, but I could still query it through the Java. The dashboard showed no tables at all (I only had one, the missing 'userId'). I then created a new table using the dashboard, called it userId, and populated it. However, now when I run my Java to query it, the code is returning the items from the missing 'userId' table, not this new one! Any ideas what is going on?
Ok, that's strange. I thought DynamoDB tables were not specified by region, but I noticed that once I created this new version of 'userId' it was viewable under the eu-west region, and then I could see the different (previously missing!) 'userId' table in the us-east region. They both had the same table name but contained different items. I didn't think this was possible?
Most AWS services are scoped to a single region. The main exceptions are Route 53 (DNS), IAM, and CloudFront (CDN). The reason is that you want to control the location of your data, mainly for regulatory reasons; often your data can't leave the US, Europe, or some other region.
It is possible to get high availability for your services within a single region using availability zones. This is how highly available services such as DynamoDB or S3 provide that functionality: by replicating the data between availability zones, but within a single region.
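To illustrate the region scoping described above: each regional client talks to a region-specific endpoint, so the same table name in two regions names two completely unrelated tables. A minimal sketch (the two regions and the userId table name are taken from the question; the endpoint helper is illustrative):

```python
def dynamodb_endpoint(region):
    """DynamoDB endpoints are per-region; 'userId' in eu-west-1 and
    'userId' in us-east-1 are two completely separate tables."""
    return f"https://dynamodb.{region}.amazonaws.com"

# With boto3 you pin each client to a region explicitly:
#   import boto3
#   eu = boto3.client("dynamodb", region_name="eu-west-1")
#   us = boto3.client("dynamodb", region_name="us-east-1")
#   # eu.describe_table(TableName="userId") and
#   # us.describe_table(TableName="userId") hit different endpoints,
#   # and therefore different tables.
```

This is why the Java code in the question kept returning the old items: its client was configured for the region holding the original table, while the console was showing the other region.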