Is it possible to stream GKE logs to Stackdriver in multiple GCP projects? - google-cloud-platform

I am looking to split the logs on the Stackdriver Agent (SDA) and send them to multiple GCP projects (Stackdrivers) based on some filter. By default, the SDA targets the GCP project where it resides.
There is an SDA configuration option to set a different destination GCP project ID, but only one.
The SDA, being a FluentD wrapper, uses type google_cloud in the match section.
Does this mean that the only solution is to write a custom FluentD filter that relies on google_cloud and targets multiple GCP projects?
Thanks

First of all, you cannot split logs on the Stackdriver agent (SDA) and send them to different GCP projects' Stackdrivers based on a filter. I understand that you went through the documentation [1] and want confirmation about the "type google_cloud" option.
The configuration options there let you override LogEntry labels [2] and MonitoredResource labels [3] when ingesting logs into Stackdriver Logging; "type google_cloud" applies to cloud resources of all types.
[1]: https://cloud.google.com/logging/docs/agent/configuration#label-setup
[2]: https://cloud.google.com/logging/docs/reference/v2/rest/v2/LogEntry
[3]: https://cloud.google.com/logging/docs/reference/v2/rest/v2/MonitoredResource
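For context, the agent's match section in /etc/google-fluentd/google-fluentd.conf accepts only a single destination: even with the project_id option of the google_cloud output plugin, you pick one project for everything the block matches. A minimal sketch, with a hypothetical project ID:

    # All matched records go to exactly one project; project_id overrides
    # the default (the project the VM/cluster lives in) but cannot be set
    # per-record, so there is no built-in per-filter fan-out.
    <match **>
      @type google_cloud
      project_id my-other-project   # hypothetical override, one per match block
    </match>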

If you write your own Stackdriver logger, you can do anything you want.
The Google Stackdriver logging agent (the driver) does not support streaming subsets of logs to different Stackdriver Logging projects (the service).
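As a minimal sketch of the "write your own logger" route: the google-cloud-logging client library lets you choose the destination project explicitly, so an application can route entries to more than one project itself. The project IDs, log name, and routing rule below are placeholders, not anything the agent provides:

    # pip install google-cloud-logging
    from google.cloud import logging as gcp_logging

    # One client per destination project; both project IDs are placeholders.
    default_client = gcp_logging.Client(project="my-main-project")
    audit_client = gcp_logging.Client(project="my-audit-project")

    def route_entry(record):
        # Simple example filter: send audit-tagged records to the second
        # project, everything else to the default project.
        client = audit_client if record.get("type") == "audit" else default_client
        client.logger("app-log").log_struct(record, severity="INFO")

    route_entry({"type": "audit", "user": "alice", "action": "login"})
    route_entry({"type": "app", "message": "regular log line"})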

Related

Metric Registrar in Cloud Foundry

Does Metric Registrar work in Cloud Foundry without Pivotal?
I have open source Cloud Foundry and I need to get custom metrics from an app. I installed the Metric Registrar community plugin for CF, registered my application with an endpoint, and also defined the log format. Unfortunately, I see no traffic on the registered endpoint.
If open source Cloud Foundry does not support Metric Registrar, is there any other way to get support for custom app metrics?
Does Metric Registrar work in Cloud Foundry without Pivotal?
The Metric Registrar is part of the VMware Tanzu Application Service product; it's not part of the open source Cloud Foundry project. It's a value-add feature for those using the paid product.
If open source Cloud Foundry does not support Metric Registrar, is there any other way to get support for custom app metrics?
You don't strictly need the Metric Registrar to do this. The Metric Registrar's main purpose is to take metrics from your apps and inject them into the Loggregator log/metric stream. This is convenient if you have other software that is already consuming log & metric streams from Loggregator.
You don't have to do that though, as there are other ways to export metrics from your app.
If you want them to go through Loggregator, you could export structured log messages (perhaps JSON?) via STDOUT that contain your metrics. Those will, like your other log messages, go out through Loggregator. You would then just need to have something ingesting your logs, identifying the structured messages, and parsing out your metrics. This is similar to what Metric Registrar does; you're just parsing out the structured log entries after they leave the platform.
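As a minimal sketch of that idea (the field names below are made up for illustration, not a Metric Registrar format), an app could emit one JSON object per metric on STDOUT and let Loggregator carry it like any other log line:

    # Emit metrics as structured JSON log lines on STDOUT so they flow
    # through Loggregator with the rest of the app's logs.
    import json
    import sys
    import time

    def emit_metric(name, value, tags=None):
        # Arbitrary example envelope; whatever consumes the log stream
        # (ELK, a custom parser, ...) just has to recognize and parse it.
        line = {
            "kind": "metric",
            "name": name,
            "value": value,
            "tags": tags or {},
            "timestamp": int(time.time()),
        }
        print(json.dumps(line), file=sys.stdout, flush=True)

    emit_metric("orders_processed", 42, {"region": "eu"})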
If you have an ELK stack or similar running, you can probably make this solution work easily enough. ELK can ingest your logs and structured log metrics, and then you can search/filter through the metrics and create dashboards.
Another option is to run Prometheus/Grafana. You then just need to make sure your app exposes a Prometheus exposition-format metrics endpoint (this is super easy with Java/Spring Boot & Spring Boot Actuator, but can be done in any language). Point Prometheus at your app and it will be able to scrape metrics from your apps, and you can use Grafana to view them. None of this goes through Loggregator.
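For example, a minimal sketch of such an endpoint in Python using the prometheus_client library (the port and metric names are arbitrary):

    # pip install prometheus-client
    import random
    import time

    from prometheus_client import Counter, Gauge, start_http_server

    REQUESTS = Counter("app_requests_total", "Total requests handled")
    QUEUE_DEPTH = Gauge("app_queue_depth", "Current work queue depth")

    if __name__ == "__main__":
        # Serves /metrics in the Prometheus exposition format on port 8000;
        # Prometheus is then pointed at this endpoint to scrape it.
        start_http_server(8000)
        while True:
            REQUESTS.inc()
            QUEUE_DEPTH.set(random.randint(0, 10))
            time.sleep(5)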
If you're looking for a solution that's more automatic, you could run an APM agent (NewRelic, DataDog, AppDynamics, Dynatrace, etc.) with your apps. These will capture metrics directly from the process and export them to a SaaS platform where you can monitor/review them.
There are probably other options as well. This is just what comes to mind as I write this up.

Unable to collect metrics from customized fluentd on GKE

I am having trouble enabling metrics on my GKE cluster after customizing fluentd in another namespace.
I added some changes to the fluentd ConfigMap. Since the GKE default fluentd and its ConfigMap in the kube-system namespace can't be changed (changes always get reverted), I deployed fluentd and event-exporter in another namespace.
But the metrics have been missing since I made the change. All the logs are OK and still show up in the logging viewer.
What needs to be done so GKE can collect the metrics again? Or, if I'm going about this wrong, is there any way to modify the default fluentd ConfigMap in kube-system?
I wasn't able to find anything useful on this topic, so I created a GCP support ticket.
Google provided one solution:
With Cloud Operations for GKE, you can collect just system logs [1]; that way, monitoring remains enabled in your cluster. Please note that this option can be enabled only via the console and not via the gcloud command line. There is a tracking bug, https://issuetracker.google.com/163356799, for this.
Further, you can deploy your own configurable Fluentd daemonset to customize the application logs [2].
You will be running two fluentd daemonsets with this configuration; however, to reduce the amount of log duplication it is recommended that you decrease the logging from Cloud Operations to capture system logs only [2], while your customized fluentd daemonset captures your application workload logs.
The disadvantages of this approach are: ensuring your custom deployment doesn't overlap with something Cloud Operations is watching (i.e. files, logs), an increased number of API calls, and being responsible for updating, maintaining, and managing your custom fluentd deployment.
[1]: https://cloud.google.com/stackdriver/docs/solutions/gke/installing#controlling_the_collection_of_application_logs
[2]: https://cloud.google.com/solutions/customizing-stackdriver-logs-fluentd
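To make the "don't overlap with what Cloud Operations is watching" advice concrete, a custom daemonset's fluentd input would typically tail only application container log files and exclude what the managed agent already ships. A rough sketch with made-up paths and tags (this is not Google's managed configuration):

    # Tail only application container logs; exclude system components that
    # the managed (Cloud Operations) agent is already shipping.
    <source>
      @type tail
      path /var/log/containers/*.log
      exclude_path ["/var/log/containers/*_kube-system_*.log"]
      pos_file /var/log/custom-fluentd-containers.log.pos
      tag kubernetes.*
      read_from_head true
      <parse>
        @type json
      </parse>
    </source>

    # Ship the matched application logs to Cloud Logging.
    <match kubernetes.**>
      @type google_cloud
    </match>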

Application Status check using Stackdriver in Google Cloud

I have an application running in a Google Compute Engine instance (ps -ef | grep myapp).
I want to set up an alert in Google Cloud Stackdriver for when the application goes down and/or comes back up.
How can I achieve this?
You can create an alert in Stackdriver Monitoring. You will have the option to either have the alert displayed as an incident on the Stackdriver Monitoring console or sent as a notification via email.
Before configuring your alerts, you will need to choose a metric type on which the alert will be based.
If you cannot find a metric type that matches your needs, you will need to create your own custom metric.
I also found this approach, which collects and then sends the logs from the serial port to Stackdriver Logging. You will then be able to manipulate the collected data as you see fit.
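As a rough sketch of the custom-metric route (not an official recipe; the metric type, instance labels, and process name below are placeholders), a small script on the VM could periodically write 1 or 0 to a custom gauge depending on whether the process is running, and the alert policy can then trigger on that metric:

    # pip install google-cloud-monitoring
    import subprocess
    import time

    from google.cloud import monitoring_v3

    PROJECT_ID = "my-project"          # placeholder
    INSTANCE_ID = "1234567890123456"   # placeholder GCE instance ID
    ZONE = "us-central1-a"             # placeholder zone
    PROCESS_NAME = "myapp"             # placeholder process name

    def app_is_running():
        # pgrep returns exit code 0 if at least one matching process exists.
        return subprocess.run(["pgrep", "-f", PROCESS_NAME],
                              capture_output=True).returncode == 0

    def write_up_metric(value):
        client = monitoring_v3.MetricServiceClient()
        series = monitoring_v3.TimeSeries()
        series.metric.type = "custom.googleapis.com/myapp/up"  # placeholder metric
        series.resource.type = "gce_instance"
        series.resource.labels["instance_id"] = INSTANCE_ID
        series.resource.labels["zone"] = ZONE
        now = time.time()
        interval = monitoring_v3.TimeInterval(
            {"end_time": {"seconds": int(now), "nanos": int((now % 1) * 1e9)}}
        )
        point = monitoring_v3.Point(
            {"interval": interval, "value": {"int64_value": value}}
        )
        series.points = [point]
        client.create_time_series(name=f"projects/{PROJECT_ID}",
                                  time_series=[series])

    if __name__ == "__main__":
        while True:
            write_up_metric(1 if app_is_running() else 0)
            time.sleep(60)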

How to disable google load balancer logging?

How to disable Google load balancer logging? We are currently generating 30 TB (yes, TB not GB) of logs per month that we don't care about and never look at. It's quite a waste of resources.
In the Google Cloud Platform console, in the Stackdriver Logging section, navigate to Resource usage, choose Cloud HTTP Load Balancer, and select Disable log source from the options on the right: https://console.cloud.google.com/logs/usage
You will also see an option to create an exclusion filter.
Reference: https://cloud.google.com/logging/docs/exclusions#excluding-resource
To delete the previous logs:
gcloud logging logs delete projects/<PROJECT-ID>/logs/requests
As per the Google Cloud Load Balancer and Stackdriver Logging documentation, and also going over the Cloud Console UI and the REST API documentation, there is currently no way to disable Stackdriver logging for the Google Cloud Load Balancer.
Also, Google currently does not charge a fee for storing the Stackdriver logs.
If your concern is that it is hard to search through the logs, you can always filter logs while viewing on Stackdriver to help you narrow down only the items of interest.

Log level in Google Stackdriver Logging

I configured Google Stackdriver Logging on one of my GCE VMs and everything works except the log level. I have used the log_level parameter in the file
/etc/google-fluentd/config.d/tomcat.conf
as described in http://docs.fluentd.org/articles/in_tail,
but even then I cannot view logs at different levels in the Console Log Viewer. Is there any specific way to configure the fluentd agent for Google Cloud?
The Google Cloud output plugin for fluentd currently uses the severity field to indicate the log level. See this issue for details. There is an open request to make it configurable.
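As a possible workaround (a sketch, assuming the tail source already parses the Tomcat log level into a field named log_level; the field name and tag are placeholders), you can copy that field into severity with a record_transformer filter, since severity is the field the google_cloud output plugin maps to the Log Viewer's log level:

    # Copy the parsed log level into the "severity" field that the
    # google_cloud output plugin uses for the Stackdriver log level.
    <filter tomcat.*>
      @type record_transformer
      enable_ruby true
      <record>
        severity ${record["log_level"] || "INFO"}
      </record>
    </filter>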