Where does the userstats.o1.daus metric get its data from?
I looked in the metrics list and in the app, but I can't seem to find the source of the metric.
The application infrastructure relies on:
AWS
DynamoDB
New Relic
Amplitude
I found the answer thanks to #tqr_aupa_atleti and the support team from Datadog.
On the Datadog dashboard panel, I had to click Metrics -> Summary and look for my metric. From its tags I could figure out it was a custom metric from my company that uses data from Amplitude.
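If you'd rather check this programmatically, here is a minimal sketch using the official datadog-api-client Python package to pull a metric's metadata; it assumes DD_API_KEY and DD_APP_KEY are set in the environment, and uses the metric name from the question:

```python
# Minimal sketch: look up a metric's metadata via the Datadog API
# (pip install datadog-api-client). Assumes DD_API_KEY and DD_APP_KEY
# are set in the environment.
from datadog_api_client import ApiClient, Configuration
from datadog_api_client.v1.api.metrics_api import MetricsApi

configuration = Configuration()
with ApiClient(configuration) as api_client:
    api = MetricsApi(api_client)
    # Returns the metric's type, description, unit, etc.
    print(api.get_metric_metadata(metric_name="userstats.o1.daus"))
```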
Related
I am trying to figure out how to simply view all of our custom metrics in CloudWatch.
The AWS Console is far from helpful, or at least not well signposted. I want to relate our CloudWatch bill to the actual metrics we have, to determine where I can make some cuts.
For example:
Our bill shows 1,600 metrics charged at $0.30 apiece per month, but I see over 17,000 metrics under custom namespaces in the metrics list within the CloudWatch console.
Does anyone know how I can best find this information, or have a nice handy CLI command to view all custom metrics for a region?
I can see the custom namespaces section in CloudWatch, but the numbers there don't really marry up with the billing page; they're off by about tenfold.
Thank you.
UPDATE:
I think I may have identified why there is a discrepancy between the billing and the list of metrics:
We have builds that each create metrics in their own namespace and are sometimes destroyed within hours.
The metrics they create linger for 15 days, according to the AWS FAQ on CloudWatch metrics.
The overall monthly figure is seemingly what it is because of how many metrics existed concurrently at some point during the month.
However, this still doesn't make the billing breakdown any easier to understand when you're trying to highlight possible outliers in costs.
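On the CLI part of the question: the AWS CLI has `aws cloudwatch list-metrics`, which enumerates metrics for a region; note it only returns metrics that received data in roughly the past two weeks, which lines up with the lingering behaviour described in the update. Here is a minimal boto3 sketch that counts metrics per custom namespace (the region name is just an example):

```python
# Hedged sketch: count custom metrics per namespace in one region.
# Namespaces starting with "AWS/" are service namespaces, not custom.
from collections import Counter

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")  # example region
counts = Counter()
for page in cloudwatch.get_paginator("list_metrics").paginate():
    for metric in page["Metrics"]:
        if not metric["Namespace"].startswith("AWS/"):
            counts[metric["Namespace"]] += 1

for namespace, n in counts.most_common():
    print(f"{namespace}: {n}")
print(f"total custom metrics: {sum(counts.values())}")
```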
Is it possible to create alerts for configuration activities?
On the dashboard of my GCP project, I'm able to see the history of activities. However, for security reasons, I would like to be able to receive notifications when certain activities happen, e.g. Set IAM policy on project, deleting instance of project, etc. Is this possible?
I have looked into "metric-based alerting policies", but I'm only able to create alerts for uptime checks. Not sure what else to look for.
You are on the right path. You need to create a log-based metric and then create an alert that fires when the counter crosses a threshold (1, for example).
Now a more straightforward solution is available: in one step, you can use log-based alerts. They let you set alerts on any log type and content. This new feature is in preview and was announced a few days ago.
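As an illustration of the first approach, a log-based counter metric for SetIamPolicy audit events can be created with the google-cloud-logging Python client; the project ID, metric name, and filter below are all illustrative:

```python
# Hedged sketch: create a log-based counter metric for Set IAM policy
# audit events (pip install google-cloud-logging). The alerting policy
# is then defined on top of this counter, e.g. threshold >= 1.
from google.cloud import logging

client = logging.Client(project="my-project")  # hypothetical project ID
metric = client.metric(
    "set-iam-policy-count",  # illustrative metric name
    filter_='protoPayload.methodName="SetIamPolicy"',
    description="Counts Set IAM policy audit events",
)
metric.create()
```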
In Google Cloud Logging (nee Stackdriver), can I create a log-based metric for logs aggregated at folder/organization level? I want to have a log-based metric for a certain audit event across many projects.
This isn't currently supported. You can export logs to a different project, but you can't have metrics that cover more than one project.
If you think that functionality should be available, you can create a Feature Request at Public Issue Tracker.
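If routing logs to another project is acceptable as a workaround, here is a hedged sketch of that export with the google-cloud-logging Python client: a sink in each source project routes matching audit logs into a central project, where a log-based metric can then be defined over them. All project, sink, and filter names below are illustrative.

```python
# Hedged sketch: route matching audit logs to a central project via a
# log sink (pip install google-cloud-logging). All names illustrative.
from google.cloud import logging

client = logging.Client(project="source-project")  # hypothetical source project
sink = client.sink(
    "audit-events-to-central",  # illustrative sink name
    filter_='protoPayload.methodName="SetIamPolicy"',
    destination="logging.googleapis.com/projects/central-project",  # hypothetical
)
sink.create()  # remember to grant the sink's writer identity access
```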
I'm just starting out using Apache Beam on Google Cloud Dataflow. I have a project set up with a billing account. The only things I plan on using this project for are:
1. dataflow - for all data processing
2. pubsub - for exporting stackdriver logs to be consumed by Datadog
Right now, as I write this, I am not currently running any dataflow jobs.
Looking at the past month, I see ~$15 in dataflow costs and ~$18 in Stackdriver Monitor API costs. It looks as though Stackdriver Monitor API is close to a fixed $1.46/day.
I'm curious how to mitigate this. I do not believe I want or need Stackdriver Monitoring. Is it mandatory? Further, while I feel I have nothing running, I still see Monitoring API calls over the past hour.
So I suppose the questions are these:
1. what are these calls?
2. is it possible to disable Stackdriver Monitoring for dataflow or otherwise mitigate the cost?
Per Yuri's suggestion, I found the culprit, and this is how (thanks to Google Support for walking me through this):
In GCP Cloud Console, navigate to 'APIs & Services' -> Library
Search for 'Stackdriver Monitoring API' and click it
Click 'Manage' on the next screen
Click 'Metrics' from the left-hand side menu
In the 'Select Graphs' dropdown, select "Traffic by Credential" and click 'OK'
This showed me a graph making it clear just about all of my requests were coming from a credential named datadog-metrics-collection, a service account I'd set up previously to collect GCP metrics and emit to Datadog.
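For reference, a similar breakdown can be pulled outside the Console by querying the serviceruntime.googleapis.com/api/request_count metric and reading the credential_id resource label. A hedged sketch with the google-cloud-monitoring package; the project ID is a placeholder:

```python
# Hedged sketch: Monitoring API request counts per credential for the
# last hour (pip install google-cloud-monitoring).
import time

from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
now = int(time.time())
results = client.list_time_series(
    request={
        "name": "projects/my-project",  # hypothetical project ID
        "filter": (
            'metric.type = "serviceruntime.googleapis.com/api/request_count" '
            'AND resource.labels.service = "monitoring.googleapis.com"'
        ),
        "interval": {
            "start_time": {"seconds": now - 3600},
            "end_time": {"seconds": now},
        },
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)
for series in results:
    calls = sum(point.value.int64_value for point in series.points)
    print(series.resource.labels.get("credential_id"), calls)
```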
Considering the question and the answer posted: if we think we do not need Stackdriver Monitoring, we can disable the Stackdriver Monitoring API using the steps below:
From the Cloud Console, go to APIs & Services.
Select Stackdriver Monitoring API.
Click Disable API.
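If you prefer to do this programmatically, the Service Usage API exposes the same disable operation. A hedged sketch with google-api-python-client; the project ID is a placeholder:

```python
# Hedged sketch: disable the Monitoring API via the Service Usage API
# (pip install google-api-python-client). Assumes application-default
# credentials with permission to disable services.
from googleapiclient import discovery

serviceusage = discovery.build("serviceusage", "v1")
request = serviceusage.services().disable(
    name="projects/my-project/services/monitoring.googleapis.com",  # hypothetical
    body={},
)
print(request.execute())
```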
In addition, you can view Stackdriver usage by billing account and estimate costs using the Stackdriver pricing calculator [a] [b].
View Stackdriver usage by billing account:
1. From anywhere in the Cloud Console, click Navigation menu and select Billing.
2. If you have more than one billing account, select Go to linked billing account to view the current project's billing account. To locate a different billing account, select Manage billing accounts and choose the account for which you'd like to get usage reports.
3. Select Reports.
4. Select Group By > SKU. This menu might be hidden; you can access it by clicking Show Filters.
5. From the SKUs drop-down list, make the following selections:
Log Volume (Stackdriver Logging usage)
Spans Ingested (Stackdriver Trace usage)
Metric Volume and Monitoring API Requests (Stackdriver Monitoring usage)
Your usage data, filtered by the SKUs you selected, will appear.
You can also select just one or some of these SKUs if you don't want to group your usage data.
Note: If your usage of any of these SKUs is 0, they don't appear in the Group By > SKU pull-down menu. For example, users who only use the Cloud Console might never generate API requests, so Monitoring API Requests doesn't appear in the list.
Use the Stackdriver pricing calculator [b]:
Add your current or projected Monitoring usage data to the Metrics section and click Add to estimate.
Add your current or projected Logging usage data to the Logs section and click Add to estimate.
Add your current Trace usage data to the Trace spans section and click Add to estimate.
Once you have input your usage data, click Estimate.
Estimates of your future Stackdriver bills appear. You can also Email Estimate or Save Estimate.
[a] https://cloud.google.com/stackdriver/estimating-bills#billing-acct-usage
[b] https://cloud.google.com/products/calculator/#tab=google-stackdriver
I am exporting prometheus metrics to google stackdriver by following this guide: https://cloud.google.com/monitoring/kubernetes-engine/prometheus.
When I query Prometheus, I find all the metrics. But in the Stackdriver Metrics Explorer, I can't find all of them (some of the metrics are there).
Any help will be appreciated.
I suppose that you are aware that metrics imported from Prometheus are external metrics for Stackdriver.
As stated in the documentation:
For external metrics, a resource_type of global is invalid and results in the metric data being discarded.
Prometheus-exported metrics are those whose names begin with:
external.googleapis.com/prometheus/
A possible reason for your issue is the limit on metric descriptors you can export per project: 10,000 Prometheus-exported metric descriptors per project. If you have more, it is normal for some of the metrics not to be there.
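You can check how close you are to that limit by counting the Prometheus-exported metric descriptors in the project. A hedged sketch with the google-cloud-monitoring package; the project ID is a placeholder:

```python
# Hedged sketch: count metric descriptors exported from Prometheus
# (pip install google-cloud-monitoring).
from google.cloud import monitoring_v3

client = monitoring_v3.MetricServiceClient()
descriptors = client.list_metric_descriptors(
    request={
        "name": "projects/my-project",  # hypothetical project ID
        "filter": 'metric.type = starts_with("external.googleapis.com/prometheus/")',
    }
)
print(sum(1 for _ in descriptors))  # compare against the 10,000 limit
```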
If this is not the problem, it is likely just a configuration issue, since your export actually works: somehow, some of the metrics are being filtered by the collector. Re-check the way you have set your configuration parameters (filters, files, etc.). You can check this documentation for more information.