Not all Prometheus metrics are showing in Google Stackdriver - google-cloud-platform

I am exporting Prometheus metrics to Google Stackdriver by following this guide: https://cloud.google.com/monitoring/kubernetes-engine/prometheus.
When I query Prometheus directly, I can find all the metrics. But in the Stackdriver Metrics Explorer, only some of them show up.
Any help will be appreciated.

I suppose you are aware that metrics imported from Prometheus are external metrics for Stackdriver.
As stated in the documentation:
For external metrics, a resource_type of global is invalid and results in the metric data being discarded.
Prometheus-exported metrics are those whose names begin with:
external.googleapis.com/prometheus/
A possible reason for your issue is the limit on the number of metric descriptors you can export per project: 10,000 Prometheus-exported metric descriptors per project. If you have more than that, it is expected that some of the metrics will not be there.
If this is not the problem, it should just be a configuration issue, since your export itself is working: somehow some of the metrics are being filtered by the collector. Re-check the way you have set your configuration parameters (filters, configuration file, etc.). You can check this documentation for more information.
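To see whether you are actually near that limit, you can count the Prometheus descriptors in your project through the Cloud Monitoring API. Below is a minimal sketch, assuming the v2 google-cloud-monitoring Python client and a placeholder project ID:

```python
from google.cloud import monitoring_v3

# Placeholder project ID; replace with your own.
project_name = "projects/my-gcp-project"

client = monitoring_v3.MetricServiceClient()

# List only the descriptors created by the Prometheus collector.
descriptors = client.list_metric_descriptors(
    request={
        "name": project_name,
        "filter": 'metric.type = starts_with("external.googleapis.com/prometheus/")',
    }
)

count = 0
for descriptor in descriptors:
    count += 1
    print(descriptor.type)

print(f"Prometheus-exported metric descriptors: {count}")
```

If the count is close to 10,000, pruning unused metrics or filtering them out at the collector is the usual fix.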

Related

GCP Logging log-based metric from folder-level logs?

In Google Cloud Logging (formerly Stackdriver), can I create a log-based metric for logs aggregated at the folder/organization level? I want a log-based metric for a certain audit event across many projects.
This isn't currently supported. You can export logs to a different project, but you can't have metrics that span more than one project.
If you think that functionality should be available, you can create a Feature Request on the Public Issue Tracker.
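The export workaround is typically an aggregated sink at the folder level that routes logs into one destination project. Here is a rough, hedged sketch using the Python google-cloud-logging client; the folder ID, sink name, filter, and BigQuery destination are all placeholders:

```python
from google.cloud.logging_v2.services.config_service_v2 import ConfigServiceV2Client
from google.cloud.logging_v2.types import LogSink

# Placeholder IDs; replace with your folder and destination.
folder = "folders/123456789012"
destination = "bigquery.googleapis.com/projects/central-project/datasets/audit_logs"

client = ConfigServiceV2Client()

sink = LogSink(
    name="folder-audit-export",
    destination=destination,
    # Only export the audit events you care about.
    filter='logName:"cloudaudit.googleapis.com" AND protoPayload.methodName="SetIamPolicy"',
    include_children=True,  # include every project under the folder
)

created = client.create_sink(request={"parent": folder, "sink": sink})

# Grant this service account write access to the destination dataset.
print(created.writer_identity)
```

Once the logs land in one project, you can count the audit events there, but (as noted above) not as a native log-based metric spanning multiple projects.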

Counter metrics in GCP Metrics Explorer

I have a Dataflow job with a counter metric. On every restart the metric is reset to zero, as expected. The problem is that when I use the counter in GCP Metrics Explorer, I cannot get an accumulated value for the metric that disregards restarts. Prometheus has a function called increase() that does this. Is there a similar function in GCP Metrics Explorer?
One approach to keeping metrics across runs would be to make use of Cloud Monitoring. There is a good how-to on the features and usage of custom metrics.
If you use job names that you can match with a regexp, you can make use of the filters to aggregate them into one graph.
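In the API (and with the equivalent aligner/reducer dropdowns in Metrics Explorer), a delta aligner combined with a sum reducer gets you close to Prometheus increase(): counter resets at restarts do not contribute negative jumps, and the per-job series are added together. A hedged sketch with the google-cloud-monitoring Python client, where the project and metric type are placeholders:

```python
import time

from google.cloud import monitoring_v3

# Placeholder project and metric type; point these at your Dataflow counter.
project_name = "projects/my-gcp-project"
metric_type = "custom.googleapis.com/dataflow/my_counter"

client = monitoring_v3.MetricServiceClient()

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {
        "start_time": {"seconds": now - 24 * 3600},  # last 24 hours
        "end_time": {"seconds": now},
    }
)

# ALIGN_DELTA turns each cumulative series into per-hour increases,
# REDUCE_SUM adds the per-job series together.
aggregation = monitoring_v3.Aggregation(
    {
        "alignment_period": {"seconds": 3600},
        "per_series_aligner": monitoring_v3.Aggregation.Aligner.ALIGN_DELTA,
        "cross_series_reducer": monitoring_v3.Aggregation.Reducer.REDUCE_SUM,
    }
)

results = client.list_time_series(
    request={
        "name": project_name,
        "filter": f'metric.type = "{metric_type}"',
        "interval": interval,
        "aggregation": aggregation,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

total = 0
for series in results:
    for point in series.points:
        total += point.value.int64_value or point.value.double_value

print(f"Accumulated increase over the window: {total}")
```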

Google Stackdriver custom metrics - data retention period

I'm using GCP Stackdriver custom metrics and created a few dashboard graphs to show the traffic on the system. The problem is that the graph system keeps the data only for a few weeks, not forever.
From the Stackdriver documentation:
See Quotas and limits for limits on the number of custom metrics and
the number of active time series, and for the data retention period.
If you wish to keep your metric data beyond the retention period, you
must manually copy the data to another location, such as Cloud Storage
or BigQuery.
Let's say we decide to work with Cloud Storage as the container to store the data for the long term.
Questions:
How does this "manual data copy" work? Do I just write the same data into two places (Cloud Storage and Stackdriver)?
How does Stackdriver connect to the storage and generate graphs from it?
You can use Stackdriver's Logs Export feature to export your logs into any of three sinks: Google Cloud Storage, BigQuery, or a Pub/Sub topic. Here are the instructions on how to export Stackdriver logs. You are not writing logs to two places in real time; you are exporting logs based on the filters you set.
One thing to keep in mind is that you will not be able to use Stackdriver graphs or alerting tools on the exported logs.
In addition, if you export logs into BigQuery, you can plug in a Data Studio graph to see your metrics.
You can also do this with the Cloud Storage export, but it's less immediate and less handy.
I suggest this guide on creating a pipeline to export metrics to BigQuery for long-term storage and analytics:
https://cloud.google.com/solutions/stackdriver-monitoring-metric-export
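The core of that pipeline is simply reading time series from the Monitoring API on a schedule and appending them to a BigQuery table. Here is a simplified sketch, assuming the google-cloud-monitoring and google-cloud-bigquery Python clients, placeholder names, and a destination table that already exists with matching columns:

```python
import time

from google.cloud import bigquery, monitoring_v3

# Placeholder names; replace with your project, metric, and BigQuery table.
project_name = "projects/my-gcp-project"
metric_type = "custom.googleapis.com/my_app/request_count"
table_id = "my-gcp-project.monitoring_export.request_count"

monitoring_client = monitoring_v3.MetricServiceClient()
bq_client = bigquery.Client()

now = int(time.time())
interval = monitoring_v3.TimeInterval(
    {"start_time": {"seconds": now - 3600}, "end_time": {"seconds": now}}
)

series_iter = monitoring_client.list_time_series(
    request={
        "name": project_name,
        "filter": f'metric.type = "{metric_type}"',
        "interval": interval,
        "view": monitoring_v3.ListTimeSeriesRequest.TimeSeriesView.FULL,
    }
)

rows = []
for series in series_iter:
    for point in series.points:
        rows.append(
            {
                "metric_type": series.metric.type,
                "labels": str(dict(series.metric.labels)),
                "end_time": point.interval.end_time.isoformat(),
                "value": point.value.double_value or point.value.int64_value,
            }
        )

# Append the rows; run this on a schedule (e.g. Cloud Scheduler + a Cloud Function).
if rows:
    errors = bq_client.insert_rows_json(table_id, rows)
    if errors:
        raise RuntimeError(errors)
```

From there you can point Data Studio (or BigQuery itself) at the table for graphs that outlive the Stackdriver retention window.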

Google Cloud Metric to monitor instance group size

I can find a graph of "Group size" on the page of the instance group.
However, when I try to find this metric in Stackdriver, it doesn't exist.
I tried looking in the metricDescriptors API, but it doesn't seem to be there either.
Where can I find this metric?
I'm particularly interested in sending alerts when this metric goes to 0.
There is no Stackdriver Monitoring metric for this data yet. You can fetch the size using the instanceGroups.get API call. You could create a system that polls this data and posts it back to Stackdriver Monitoring as a custom metric, and then you will be able to access it from Stackdriver.
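A rough sketch of that polling step, assuming the google-api-python-client for Compute Engine and the google-cloud-monitoring client for the write-back; the project, zone, group name, and custom metric type are all placeholders made up for the example:

```python
import time

from google.cloud import monitoring_v3
from googleapiclient import discovery

# Placeholder names; replace with your project, zone, and instance group.
project_id = "my-gcp-project"
zone = "us-central1-a"
group_name = "my-instance-group"

# 1. Poll the current group size from the Compute Engine API.
compute = discovery.build("compute", "v1")
group = (
    compute.instanceGroups()
    .get(project=project_id, zone=zone, instanceGroup=group_name)
    .execute()
)
size = group["size"]

# 2. Write it back as a custom metric so Stackdriver can graph and alert on it.
client = monitoring_v3.MetricServiceClient()
series = monitoring_v3.TimeSeries()
series.metric.type = "custom.googleapis.com/instance_group/size"
series.metric.labels["group_name"] = group_name
series.resource.type = "global"

point = monitoring_v3.Point(
    {
        "interval": {"end_time": {"seconds": int(time.time())}},
        "value": {"int64_value": size},
    }
)
series.points = [point]

client.create_time_series(
    request={"name": f"projects/{project_id}", "time_series": [series]}
)
```

With the values arriving under that custom metric type, you can create an alerting policy that fires when the size drops to 0.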

How does Datadog measure DAUs?

Where does the userstats.o1.daus metric get its data from?
I looked in the metrics list and in the app, but I don't seem to find the source of the metric.
The application infrastructure relies on:
AWS
DynamoDB
New Relic
Amplitude
I found the answer thanks to #tqr_aupa_atleti and the support team from Datadog.
On the Datadog dashboard panel, I had to click Metrics -> Summary and look for my metric. Looking at the tags, I could figure out it was a custom metric from my company that uses data from Amplitude.