AWS Grafana sees identical data for all custom metrics

I have created some custom metric filters for a CloudTrail log group, 11 in total.
Each metric filter matches multiple related events (one is for IAM changes, another is for user login activity, etc.).
I want to log each time one of these metric filters captures an event and show it on an AWS Grafana dashboard.
I have the CDK to deploy the metric filters; they show up in CloudWatch and I can see them graphing events in the AWS Console.
I can even add the data source and the correct permissions to access it from AWS Grafana.
It's only when I render the results onto the dashboard panel that I see a problem: all of the metrics have the same data.
I have tried adding all of the metrics and they all show the same data. I have included some screenshots to demonstrate the issue.
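
The CDK code isn't shown in the question, but for reference, a metric filter of the kind described might look like the minimal sketch below (CDK v2 in Python; the log group, namespace, and metric names are hypothetical). One plausible cause of identical data is every filter publishing to the same namespace and metric name, in which case CloudWatch folds them all into a single metric, so each filter here publishes a distinct metric_name:

    from aws_cdk import aws_logs as logs
    from constructs import Construct

    def add_security_metric_filters(scope: Construct, log_group: logs.ILogGroup) -> None:
        """Attach distinct metric filters to a CloudTrail log group.

        Hypothetical names; each filter must publish a distinct metric_name,
        otherwise CloudWatch merges them into one metric.
        """
        logs.MetricFilter(
            scope, "IamChangesFilter",
            log_group=log_group,
            metric_namespace="CloudTrailMetrics",   # sharing a namespace is fine
            metric_name="IamChangeEvents",          # must be unique per filter
            filter_pattern=logs.FilterPattern.any_term(
                "PutUserPolicy", "AttachRolePolicy", "DeleteRolePolicy"
            ),
            metric_value="1",
        )
        logs.MetricFilter(
            scope, "LoginActivityFilter",
            log_group=log_group,
            metric_namespace="CloudTrailMetrics",
            metric_name="LoginActivityEvents",      # distinct from the one above
            filter_pattern=logs.FilterPattern.any_term("ConsoleLogin"),
            metric_value="1",
        )

If the filters already have unique names, it is also worth checking that each Grafana panel query actually references a different MetricName rather than a copied-and-pasted one.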

Related

How to limit the scope in Google Cloud Platform Error Reporting

We host quite a few things on our GCP project and it's kinda nice to be alerted on new errors, but I want to send email notifications to PagerDuty only from my production Kubernetes cluster.
Is there a way to do this, or should I filter this somehow in PagerDuty (unsure if that's possible, I'm still new to it)?
Here is the procedure for sending notifications from Kubernetes to PagerDuty:
Create a metric based on your requirement, then add that metric when you create an alert. On the notifications page you can then select PagerDuty and proceed with creating the alert.
Step 1:
Creating a log-based metric:
1. In the console, go to the log-based metrics page, then go to Create metric and create a new custom metric.
2. Set the metric type to Counter and, in the details, add a log metric name such as (user/delete).
3. In the metric, provide a query that fetches the error logs you expect to be alerted on, then create the metric (a scripted sketch of this step follows the list).
Step 2:
Creating an alert policy:
1. In the console, go to the Alerting page, then go to Create policy and create a new alerting policy.
2. Go to Add condition; the resource type is the resource that should trigger the alert (in our case a Kubernetes pod), and the metric is the one created in Step 1.
3. In the filter, add the project ID and set a suitable period. Next, add these details in the configuration and proceed to the next steps, leaving the rest of the fields at their defaults.
Step 3:
1. Next you will be directed to select notification channels. Go to Manage notification channels, select PagerDuty services, add a new channel with the display name and the service key, check connectivity, then save and proceed.
2. Add the alert name and save the alert.
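
As a scripted alternative to Step 1 above, the same counter metric can be created with the google-cloud-logging Python client. This is only a sketch; the project ID, cluster name, and filter are assumptions to adapt:

    from google.cloud import logging

    # Assumed project ID; adjust the filter to your own cluster and error query.
    client = logging.Client(project="my-gcp-project")

    metric = client.metric(
        "user/delete",  # the log metric name used in the steps above
        filter_=(
            'resource.type="k8s_container" '
            'AND resource.labels.cluster_name="prod-cluster" '
            'AND severity>=ERROR'
        ),
        description="Errors from the production Kubernetes cluster",
    )
    metric.create()  # the alert policy from Step 2 can then reference this metric

Scoping the filter to the production cluster's name, as above, is also one way to address the original question of alerting only on production errors.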

Creating Alert Policy in Cloud Monitoring with OpenCensus Metric

I'd like to use an OpenCensus metric in a Cloud Monitoring (Stackdriver) Alert Policy.
When I try to click the Add button, I get a "This query must contain a resource type." error. The odd thing is that I can view this metric in MQL and can chart it.
In the MQL charts that use this metric, the Resource: field is blank, yet the charts work fine. On metric hover, the charts show resource types of knative_broker, dataflow_job, aws_rds_database, k8s_control_plane_component, aws_lambda_function, and 36 more.
What Resource type should be used to alert on Open Census metrics in Cloud Monitoring alerts?
In Cloud Monitoring, application-specific metrics are typically called “custom metrics”. You can create custom metrics with OpenCensus; for more detailed information, please follow Custom Metrics with OpenCensus. The goal is to define a Stackdriver exporter and then create an alert policy on the aggregation of two metrics. Refer to Alert policy metric thresholds with Stackdriver and OpenCensus for more information.
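
For illustration, a minimal OpenCensus-to-Stackdriver sketch in Python (using the opencensus and opencensus-ext-stackdriver packages; the project ID and metric names are made up). Metrics exported this way land under the custom.googleapis.com/opencensus/ prefix, and the exporter attaches a monitored resource (global by default, or an auto-detected one), which is the resource type the alert policy condition is asking for:

    from opencensus.ext.stackdriver import stats_exporter
    from opencensus.stats import aggregation, measure, stats, view

    # Hypothetical measure: request latency in milliseconds.
    m_latency = measure.MeasureFloat("request_latency", "Request latency", "ms")

    latency_view = view.View(
        "request_latency_view",
        "Distribution of request latency",
        [],                      # no tag keys in this sketch
        m_latency,
        aggregation.DistributionAggregation([25.0, 50.0, 100.0, 200.0]),
    )

    # Assumed project ID; the exporter writes custom.googleapis.com/opencensus/... metrics.
    exporter = stats_exporter.new_stats_exporter(
        stats_exporter.Options(project_id="my-gcp-project")
    )
    stats.stats.view_manager.register_exporter(exporter)
    stats.stats.view_manager.register_view(latency_view)

    # Record one sample data point.
    mmap = stats.stats.stats_recorder.new_measurement_map()
    mmap.measure_float_put(m_latency, 42.0)
    mmap.record()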

Why are some metrics missing in the CloudWatch metrics view?

I am using the CloudWatch metrics view to view DynamoDB metrics. When I search for ReadThrottleEvents, only a few tables or indexes are shown in the list. I wonder why the metrics are not visible for all tables? Is there any configuration I need to change in order to view them?
Below is a screenshot of searching for this metric; I expect every table and index to be shown in the list, but I only got 2 results.
If there is no data, they don't show:
Metrics that have not had any new data points in the past two weeks do not appear in the console. They also do not appear when you type their metric name or dimension names in the search box in the All metrics tab in the console, and they are not returned in the results of a list-metrics command. The best way to retrieve these metrics is with the get-metric-data or get-metric-statistics commands in the AWS CLI.
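
For example, a DynamoDB throttle metric that has aged out of the console view can still be retrieved with boto3 (the table name below is a placeholder):

    from datetime import datetime, timedelta, timezone

    import boto3

    cloudwatch = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)

    # Fetch hourly sums of ReadThrottleEvents for one table over the last 30 days.
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/DynamoDB",
        MetricName="ReadThrottleEvents",
        Dimensions=[{"Name": "TableName", "Value": "my-table"}],  # placeholder table
        StartTime=end - timedelta(days=30),
        EndTime=end,
        Period=3600,
        Statistics=["Sum"],
    )

    for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Sum"])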

Receiving alerts for GCP activities

Is it possible to create alerts for configuration activities?
On the dashboard of my GCP project, I'm able to see the history of activities. However, for security reasons, I would like to receive notifications when certain activities happen, e.g. setting an IAM policy on the project, deleting an instance of the project, etc. Is this possible?
I have looked into "metric-based alerting policies", but I'm only able to create alerts for uptime checks. Not sure what else to look for.
You are on the right path. You need to create a log-based metric and then create an alert when the counter crosses a threshold (1, for example).
A more straightforward solution is now available: in one step, you can use log-based alerts, which let you set alerts on any log type and content. This feature is in preview and was announced a few days ago.
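
As a sketch of the first approach, a log-based metric counting project-level IAM changes can be created with the google-cloud-logging Python client (the project ID and metric name are placeholders); an alert policy with a threshold of 1 can then be attached to it:

    from google.cloud import logging

    client = logging.Client(project="my-gcp-project")  # placeholder project

    # Count Admin Activity audit log entries for project-level IAM changes.
    metric = client.metric(
        "iam-policy-changes",
        filter_='protoPayload.methodName="SetIamPolicy"',
        description="Counts Set IAM policy events on the project",
    )
    metric.create()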

How can I subscribe to cloudwatch metric data?

I am using Elasticsearch to get logs from a CloudWatch log group by subscribing a Lambda to the log group. Whenever a log event is pushed to the log group, my Lambda is triggered and saves the log to Elasticsearch. I can then search the logs via a Kibana dashboard.
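
For context, a subscription Lambda of the kind described receives log events base64-encoded and gzipped; a minimal handler sketch in Python (the actual Elasticsearch indexing is omitted):

    import base64
    import gzip
    import json

    def handler(event, context):
        # CloudWatch Logs delivers subscription payloads base64-encoded and gzipped.
        raw = base64.b64decode(event["awslogs"]["data"])
        data = json.loads(gzip.decompress(raw))
        for log_event in data["logEvents"]:
            # Replace this print with an Elasticsearch index call.
            print(log_event["timestamp"], log_event["message"])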
I'd like to put the metric data into Elasticsearch as well, but I couldn't find a way to subscribe to metric data.
You can use the AWS module in Metricbeat, from the Elastic Beats family. Note that pulling metrics from CloudWatch results in chargeable API calls, so you should carefully consider the scraping frequency.
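
If Metricbeat isn't an option, a scheduled job can poll CloudWatch and index the results itself. Below is a minimal sketch using boto3 and the Elasticsearch 8.x Python client; the endpoint, metric, and index names are placeholders:

    from datetime import datetime, timedelta, timezone

    import boto3
    from elasticsearch import Elasticsearch

    cloudwatch = boto3.client("cloudwatch")
    es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

    end = datetime.now(timezone.utc)
    response = cloudwatch.get_metric_data(
        MetricDataQueries=[{
            "Id": "m1",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/Lambda",   # placeholder metric
                    "MetricName": "Invocations",
                },
                "Period": 300,
                "Stat": "Sum",
            },
        }],
        StartTime=end - timedelta(hours=1),
        EndTime=end,
    )

    # Each GetMetricData call is billable, so keep the polling interval modest.
    result = response["MetricDataResults"][0]
    for ts, value in zip(result["Timestamps"], result["Values"]):
        es.index(index="cloudwatch-metrics",
                 document={"timestamp": ts.isoformat(), "invocations": value})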