AWS CloudWatch metrics log - view log from graph - amazon-web-services

When viewing a graphed CloudWatch metric, how can one see which log/event is shown on the graph without filtering the log stream for the time range shown on the graph?
For example: on the graph I can see that one Lambda error occurred at 12:21 pm, but I can't see the specific event.

Related

AWS Grafana sees identical data for all custom metrics

I have created some custom metric filters for a CloudTrail log group, 11 in total.
Each metric filter is filtering for multiple related events (one is for IAM changes, another is for user logon activity, etc.).
I want to log each time one of these metric filters captures an event and show it on an AWS Grafana dashboard.
I have the CDK to deploy the metric filters; they show up in CloudWatch and I can see them graphing events in the AWS Console.
I can even add the data source and the correct permissions to access it from AWS Grafana.
It's only when I go to render the results onto the dashboard panel that I start to see a problem: all of the metrics have the same data.
I have tried adding all of the metrics and they all show the same data. I have included some screenshots to demonstrate the issue.
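For reference, a minimal sketch of the kind of CDK metric filter being described, assuming the Python CDK v2 bindings; the log group name, namespace, metric name, and filter terms are placeholders, not taken from the question:

from aws_cdk import Stack, aws_logs as logs
from constructs import Construct

class TrailMetricFiltersStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Existing CloudTrail log group (name is a placeholder).
        trail_group = logs.LogGroup.from_log_group_name(
            self, "TrailLogGroup", "cloudtrail-log-group"
        )

        # One of the eleven filters: count IAM change events.
        logs.MetricFilter(
            self, "IamChangesFilter",
            log_group=trail_group,
            metric_namespace="CloudTrailMetrics",   # shared namespace (assumed)
            metric_name="IamChanges",               # each filter publishes its own metric name
            filter_pattern=logs.FilterPattern.any_term(
                "CreatePolicy", "DeletePolicy", "AttachRolePolicy"
            ),
            metric_value="1",
        )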

How do I write a Stackdriver log query that samples 5% of all logs returned?

I keep reading that I can write a log query to sample a percentage of logs but I have found zero examples.
https://cloud.google.com/blog/products/gcp/preventing-log-waste-with-stackdriver-logging
You can also choose to sample certain messages so that only a percentage of the messages appear in Stackdriver Logs Viewer
How do I get 10% of all GCE load balancer logs with a log query? I know I can configure this on the backend, but I don't want that. I want to get 100% of logs into Stackdriver and create a Pub/Sub log sink with a log query that only captures 10% of them and sends those sampled logs somewhere else.
I suspect you'll want to create a Pub/Sub sink for the Log Router. See Configure and manage sinks.
Using Google's Logging query language, you can use the sample function to include only a fraction of the logs.
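A minimal sketch of that combination, assuming the google-cloud-logging Python client and placeholder project, topic, and sink names; the sample(insertId, 0.10) clause keeps roughly 10% of matching entries:

from google.cloud import logging

client = logging.Client(project="my-project")  # project ID is a placeholder

# Keep ~10% of HTTP(S) load balancer request logs.
log_filter = 'resource.type="http_load_balancer" AND sample(insertId, 0.10)'

# Route the sampled entries to a Pub/Sub topic (topic name is a placeholder).
destination = "pubsub.googleapis.com/projects/my-project/topics/sampled-lb-logs"

sink = client.sink("sampled-lb-logs-sink", filter_=log_filter, destination=destination)
sink.create()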

Google Cloud Functions alert based on logs

I went through the docs but I couldn't find a way of sending an alert based on my Cloud Function logs. Is this possible?
What I want to do is trigger an alert when I get a specific log entry.
Like, every time I have this:
it triggers an alert in my GCP project. Is that possible?
Create a log-based metric from the advanced log filter and create an alerting policy from the log-based metric. To create an alert from the log-based metric, click the three-dot menu to the right of the metric name; there you will see the Create alert option.
As an alternative to log-based metrics, log-based alerts are now also supported. These let you receive a notification whenever a specific message appears in your logs.
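A rough sketch of the log-based-metric route, assuming the google-cloud-logging Python client; the filter, function name, and metric name are placeholders, and the alerting policy itself would still be configured in Cloud Monitoring:

from google.cloud import logging

client = logging.Client(project="my-project")  # placeholder project ID

# Count occurrences of a specific message in the Cloud Function's logs.
log_filter = (
    'resource.type="cloud_function" '
    'AND resource.labels.function_name="my-function" '
    'AND textPayload:"specific log message"'
)

metric = client.metric(
    "my_function_error_count",  # metric name (placeholder)
    filter_=log_filter,
    description="Counts a specific log message from my-function",
)
metric.create()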

How to use a metric filter in CloudWatch Logs

How can I display only the person's name in a CloudWatch dashboard?
Log: message: "Personname ABC"
I am able to filter the message using the query: filter #message like /Personname / | display message
Please help me display only the name. I don't want to display 'Personname', just the name ABC.
CloudWatch metric filters are not used to extract text; they're used for counting the number of occurrences of a specific condition, i.e. when the person's name is ABC.
After the CloudWatch Logs agent begins publishing log data to Amazon CloudWatch, you can begin searching and filtering the log data by creating one or more metric filters. Metric filters define the terms and patterns to look for in log data as it is sent to CloudWatch Logs. CloudWatch Logs uses these metric filters to turn log data into numerical CloudWatch metrics that you can graph or set an alarm on. You can use any type of CloudWatch statistic, including percentile statistics, when viewing these metrics or setting alarms.
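To make the counting behaviour concrete, here is a hedged sketch of such a metric filter created with boto3; the log group, namespace, and metric names are placeholders:

import boto3

logs = boto3.client("logs")

# Counts how many log events contain "Personname ABC"; it does not extract the name.
logs.put_metric_filter(
    logGroupName="/my/app/log-group",   # placeholder
    filterName="personname-abc-count",
    filterPattern='"Personname ABC"',   # exact phrase match
    metricTransformations=[
        {
            "metricName": "PersonnameAbcOccurrences",
            "metricNamespace": "MyApp",  # placeholder namespace
            "metricValue": "1",          # add 1 per matching event
        }
    ],
)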
If you want to analyze your data, take a look at CloudWatch Logs Insights.
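For example, a Logs Insights query can extract just the name with parse; a sketch run through boto3, with a placeholder log group name and time range:

import time
import boto3

logs = boto3.client("logs")

# Keep only messages containing "Personname" and pull the value after it into its own field.
query = 'filter @message like /Personname/ | parse @message "Personname *" as name | display name'

start = logs.start_query(
    logGroupName="/my/app/log-group",   # placeholder
    startTime=int(time.time()) - 3600,  # last hour
    endTime=int(time.time()),
    queryString=query,
)

# Poll until the query finishes, then collect the extracted names.
while True:
    results = logs.get_query_results(queryId=start["queryId"])
    if results["status"] in ("Complete", "Failed", "Cancelled"):
        break
    time.sleep(1)

names = [
    field["value"]
    for row in results.get("results", [])
    for field in row
    if field["field"] == "name"
]
print(names)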
As @ChrisWilliams explained, metric filters serve a different purpose.
One way of filtering the logs is through log subscriptions:
You can use subscriptions to get access to a real-time feed of log events from CloudWatch Logs and have it delivered to other services such as an Amazon Kinesis stream, an Amazon Kinesis Data Firehose stream, or AWS Lambda for custom processing, analysis, or loading to other systems.
Using subscriptions you could feed your logs into Kinesis Data Firehose in real time, transform them with Firehose transformations into the format you desire, and save them to S3. This way you can process the logs however you want and have them delivered to S3 for further analysis or long-term storage.
Alternatively, you can feed the logs directly to a Lambda function and do whatever you wish with them from there.
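As an illustration, a subscription filter streaming matching events to a Lambda function could look like this with boto3; the ARNs, names, and region are placeholders, and CloudWatch Logs must first be granted permission to invoke the function:

import boto3

logs = boto3.client("logs")
lambda_client = boto3.client("lambda")

log_group = "/my/app/log-group"                                                # placeholder
function_arn = "arn:aws:lambda:us-east-1:123456789012:function:process-logs"  # placeholder

# Allow CloudWatch Logs to invoke the function for this log group.
lambda_client.add_permission(
    FunctionName=function_arn,
    StatementId="cwlogs-subscription",
    Action="lambda:InvokeFunction",
    Principal="logs.amazonaws.com",
    SourceArn=f"arn:aws:logs:us-east-1:123456789012:log-group:{log_group}:*",
)

# Send every event containing "ERROR" to the Lambda in near real time.
logs.put_subscription_filter(
    logGroupName=log_group,
    filterName="errors-to-lambda",
    filterPattern="ERROR",
    destinationArn=function_arn,
)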

CloudWatch metric showing wrong value

I have an application publishing a custom cloudwatch metric using boto's put_metric_data. The metric shows the number of tasks waiting in a redis queue.
The 1-minute max shows '3', 1-minute min shows '0' and 1-minute average shows '1.5'.
It seems that the application is correctly setting the value to zero, but some other process is overwriting it with 3 at the same time, and I can't find what is doing this in order to stop it.
Is it possible to see logs for PutMetricData to diagnose where this value might be coming from?
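For context, the publishing call being described is roughly of this shape (a sketch assuming boto3 and the redis-py client; the namespace, metric, and queue names are placeholders, not taken from the application):

import boto3
import redis

cloudwatch = boto3.client("cloudwatch")
r = redis.Redis()

# Publish the current depth of the Redis queue as a custom metric.
queue_depth = r.llen("task-queue")   # queue name is a placeholder

cloudwatch.put_metric_data(
    Namespace="MyApp",               # placeholder namespace
    MetricData=[
        {
            "MetricName": "TasksWaiting",
            "Value": float(queue_depth),
            "Unit": "Count",
        }
    ],
)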
Normally, Amazon CloudTrail would be the ideal way to discover information about API calls being made to your AWS account. Unfortunately, PutMetricData is not captured in Amazon CloudTrail.
From Logging Amazon CloudWatch API Calls in AWS CloudTrail:
The CloudWatch GetMetricStatistics, ListMetrics, and PutMetricData API actions are not supported.