Analyze number values under different conditions with Google Cloud Platform logging

I'm struggling to figure out how to use GCP logging to log a number value for analysis. I'm looking for a link to a tutorial or similar (or a better 3rd-party service to do this).
Context: I have a service whose function execution time I'd like to measure under different conditions and analyze with Google Cloud Platform logging.
Example Log: { condition: 1, duration: 1000 }
Goal: create a graph from GCP logs comparing conditions 1 and 2.
Is there a tutorial somewhere for this? Or maybe there is a better 3rd party service to use?
PS: I'm using the Node Google Cloud Logging client, whose documentation only covers text logs.
PPS: I considered doing this in Loggly, but ended up getting lost in their documentation and UI.

There are many tools you could use to solve this problem. However, you indicate a willingness to use Google Cloud Platform services (e.g. Stackdriver Monitoring), so I'll provide some guidance using those.
NOTE: Please read around the topic and understand the costs involved in using e.g. Cloud Monitoring before you commit to an approach.
Conceptually, the data you're logging (!) more closely matches a metric. However, this approach would require you to add some form of metrics library (see OpenTelemetry: Node.js) to your code and instrument your code to record the values that interest you.
You could then use e.g. Google Cloud Monitoring to graph your metric.
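For illustration, here is a minimal sketch of that instrumentation using the OpenTelemetry JS metrics API. It assumes a metrics SDK and exporter have been configured elsewhere, and the meter name, metric name, and label are only examples:

```js
// Sketch: recording a duration metric with OpenTelemetry (Node.js).
// Assumes a global MeterProvider/exporter is configured elsewhere;
// 'my-service', 'function_duration' and the 'condition' label are examples.
const { metrics } = require('@opentelemetry/api');

const meter = metrics.getMeter('my-service');
const durationHistogram = meter.createHistogram('function_duration', {
  description: 'Function execution time',
  unit: 'ms',
});

// Record one measurement, labelled with the condition under test.
durationHistogram.record(1000, { condition: '1' });
```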
Since you're already producing a log with the data you wish to analyze, you can use logs-based metrics to create a metric from your logs. You may be interested in reviewing the content for distribution metrics.
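Regarding your PS: the Node client isn't limited to text; it can write structured (JSON) payloads, which is what a logs-based metric would extract a field from. A rough sketch, with placeholder log and field names:

```js
// Sketch: writing a structured log entry with @google-cloud/logging.
// The log name 'execution-durations' and the payload fields are placeholders.
const { Logging } = require('@google-cloud/logging');

const logging = new Logging();
const log = logging.log('execution-durations');

async function logDuration(condition, durationMs) {
  const entry = log.entry(
    { resource: { type: 'global' }, severity: 'INFO' },
    { condition, duration: durationMs } // written as the entry's jsonPayload
  );
  await log.write(entry);
}

logDuration(1, 1000).catch(console.error);
```

A distribution logs-based metric could then be defined over jsonPayload.duration, labelled by jsonPayload.condition.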
Once you have a metric (either written directly or derived from logs), you can graph the resulting data in Cloud Monitoring. For logs-based metrics, see the Monitoring documentation.
For completeness, and to provide an alternative approach to producing and analyzing metrics, see the open-source tool Prometheus. Using a 3rd-party Prometheus client library for Node.js, you could instrument your code to produce a metric. You would then configure Prometheus to scrape your app for its metrics and graph the results for you.
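As a sketch of that approach using the popular third-party prom-client package (metric name and buckets are arbitrary examples):

```js
// Sketch: a Prometheus histogram with prom-client, exposed for scraping.
const client = require('prom-client');
const express = require('express');

const duration = new client.Histogram({
  name: 'function_duration_ms',
  help: 'Function execution time in ms',
  labelNames: ['condition'],
  buckets: [50, 100, 250, 500, 1000, 2500], // example buckets
});

// Record one observation for condition 1.
duration.labels('1').observe(1000);

// Expose /metrics for the Prometheus server to scrape.
const app = express();
app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType);
  res.end(await client.register.metrics());
});
app.listen(9100);
```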

Related

Stackdriver Trace across Cloud/Services

What if I have an application that works across cloud services, e.g. an AWS Lambda function that calls a Google Cloud Run service, and I want my traces to work across these. Is it possible? I guess I will have to somehow pass a trace ID along and set it when I need it, but I see no way to set a trace ID?
If we look at the list of supported language/backend combinations, we see that both GCP (Stackdriver) and AWS (X-Ray) are supported; see: Exporters. This means you can instrument either (or both) of your AWS Lambda or GCP Cloud Run applications with OpenCensus calls. I suspect you will have to dig deep to determine the specifics, but this feels like a good starting point.
If an OpenCensus library is available for your programming language, you can simplify the process of creating and sending trace data by using OpenCensus. In addition to being simpler to use, OpenCensus implements batching, which might improve performance (see the Trace documentation).
The Stackdriver Trace API allows you to send and retrieve latency data to and from Stackdriver Trace. There are two versions of the API:
Stackdriver Trace API v1 is fully supported.
Stackdriver Trace API v2 is in Beta release.
The client libraries for Trace automatically generate the trace_id and the span_id. You need to generate values for these fields yourself if you don't use the Trace client libraries or the OpenCensus client libraries. In that case, you should use a pseudo-random or random algorithm. Don't derive these fields from need-to-know data or from personally identifiable information; see the Trace documentation for details.
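As a sketch, generating suitably random IDs in Node.js might look like this (note the span ID format differs between API versions: v2 uses 16 hex characters, while v1 expects a decimal uint64 string):

```js
// Sketch: pseudo-random trace/span IDs via Node's crypto module.
const crypto = require('crypto');

const traceId = crypto.randomBytes(16).toString('hex'); // 32 hex chars (128-bit)
const spanId = crypto.randomBytes(8).toString('hex');   // 16 hex chars (v2 format)

console.log(traceId, spanId);
```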

Finding untraced time in Google Cloud Tracer Agent for Express.js

I'm using Google Cloud's Stackdriver Trace Agent with the Express.js plugin.
I noticed there are a few routes which have substantial "untraced" time. What strategies can I use to find and begin to measure these untraced paths, and why would it not pick up certain code paths?
If the Trace agent isn't working, there's unfortunately not very much you can do to modify its behavior. I recommend using OpenCensus to instrument your application, which will give you much more control over exactly how traces and spans are created.
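For example, a minimal sketch with @opencensus/nodejs, wrapping a suspect code path in an explicit child span (exporter setup omitted; the span names and the doExpensiveWork() call are placeholders):

```js
// Sketch: explicit spans with OpenCensus for Node.js.
const tracing = require('@opencensus/nodejs');

const tracer = tracing.start({ samplingRate: 1 }).tracer;

tracer.startRootSpan({ name: 'handle-request' }, rootSpan => {
  // Wrap the otherwise-untraced work in a child span to measure it.
  const child = tracer.startChildSpan('expensive-work');
  doExpensiveWork(); // placeholder for the code path you want to measure
  child.end();
  rootSpan.end();
});
```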

Planning an architecture in GCP

I want to plan an architecture based on the GCP cloud platform. Below are the subject areas I have to cover. Can someone please help me find the proper services to perform each operation?
Data ingestion (Batch, Real-time, Scheduler)
Data profiling
AI/ML based data processing
Analytical data processing
Elasticsearch
User interface
Batch and Real-time publish
Security
Logging/Audit
Monitoring
Code repository
If I am missing something I have to take care of, then please add that too.
GCP offers many products whose functionality can partially overlap. Which product to use will depend on your specific use case, and you can find an overview here.
That being said, an overall summary of the services you asked about would be:
1. Data ingestion (Batch, Real-time, Scheduler)
That will depend on where your data comes from, but the most common options are Dataflow (both for batch and streaming) and Pub/Sub for streaming messages.
2. Data profiling
Dataprep (which actually runs on top of Dataflow) can be used for data profiling; here is an overview of how you can do it.
3. AI/ML based data processing
For this, you have several options depending on your needs. For developers with limited machine learning expertise there is AutoML, which allows you to quickly train and deploy models. For more experienced data scientists there is ML Engine, which allows training and prediction with custom models built with frameworks like TensorFlow or scikit-learn.
Additionally, there are some pre-trained models for things like video analysis, computer vision, speech to text, speech synthesis, natural language processing or translation.
Plus, it’s even possible to perform some ML tasks in SQL in GCP’s data warehouse, BigQuery.
4. Analytical data processing
Depending on your needs, you can use Dataproc, which is a managed Hadoop and Spark service, or Dataflow for stream and batch data processing.
BigQuery is also designed with analytical operations in mind.
5. Elasticsearch
There is no managed Elasticsearch service directly provided by GCP, but you can find several options in the marketplace, like an API service or a Kubernetes app for Google’s Kubernetes Engine.
6. User interface
If you are referring to a user interface for your own use, GCP’s console is what you’d be using. If you are referring to a UI for end-users, I’d suggest using App Engine.
If you are referring to a UI for data exploration, there is Datalab, which is essentially a managed notebook service, and Data Studio, where you can build plots of your data in real time.
7. Batch and Real-time publish
The publishing service in GCP, for both synchronous and asynchronous messages, is Pub/Sub; a minimal publishing sketch with the Node.js client follows below.
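This is only a hedged sketch: the topic name is a placeholder, and newer client versions also offer a publishMessage() method.

```js
// Sketch: publishing a JSON message with @google-cloud/pubsub.
// 'my-topic' is a placeholder topic name.
const { PubSub } = require('@google-cloud/pubsub');

const pubsub = new PubSub();

async function publishEvent(payload) {
  const data = Buffer.from(JSON.stringify(payload));
  const messageId = await pubsub.topic('my-topic').publish(data);
  console.log(`Published message ${messageId}`);
}

publishEvent({ event: 'user-signup' }).catch(console.error);
```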
8. Security
Most security concerns in GCP are addressed here. It is a wide topic by itself and would probably need a separate question.
9. Logging/Audit
GCP uses Stackdriver for logging of most of its products, and provides many ways to process and analyze those logs.
10. Monitoring
Stackdriver also has monitoring features.
11. Code repository
For this there is Cloud Source Repositories, which integrates with GCP’s automated build system and can also be easily synced with a GitHub repository.
12. Analytical data warehouse
You did not ask for this one, but I think it's an important part of a data analysis stack.
In the case of GCP, this would be BigQuery.

Google Cloud APIs usage data by projects

Is there any way to programmatically get data similar to the APIs overview in the Google Cloud dashboard? Specifically, I'm interested in the list of APIs enabled for the project and their usage/error stats for some predefined timeframe. I believe there's an API for that, but I struggle to find it.
There's currently no API that gives you a report similar to the one you can see through the Google Cloud Console.
The Compute API can retrieve some quotas with the get method, but it's somewhat limited (only Compute Engine quotas) and, from what I understood of your question, not quite what you're looking for.
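For what it's worth, here is a sketch of reading those Compute Engine quotas with the googleapis Node.js client (the project ID is a placeholder):

```js
// Sketch: listing Compute Engine quotas via the googleapis client.
// This covers only Compute quotas, not per-API usage/error stats.
const { google } = require('googleapis');

async function listComputeQuotas(projectId) {
  const auth = await google.auth.getClient({
    scopes: ['https://www.googleapis.com/auth/compute.readonly'],
  });
  const compute = google.compute({ version: 'v1', auth });
  const res = await compute.projects.get({ project: projectId });
  return res.data.quotas; // e.g. [{ metric: 'CPUS', limit: 24, usage: 2 }, ...]
}

listComputeQuotas('my-project').then(console.log).catch(console.error);
```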
However, I've found a feature request in Google's Issue Tracker that's close to what you're asking for.
If you need something more specific or want to file the feature request yourself, check the "Report feature requests" documentation and create your own. The GCP team will take a look at it to evaluate and consider implementing it.

Is there a way to report custom DataDog metrics from AWS Lambda?

I'm looking to report custom metrics from Lambda functions to Datadog. I need things like counters, gauges, histograms.
Datadog documentation outlines two options for reporting metrics from AWS Lambda:
print a line into the log
use the API
The fine print in the document above mentions that the printing method only supports counters and gauges, so that's obviously not enough for my use case (I also need histograms).
Now, the second method - the API - only supports reporting time series points, which I'm assuming are just gauges (right?), according to the API documentation.
So, is there a way to report metrics to Datadog from my Lambda functions, short of setting up a statsd server in EC2 and calling out to it using dogstatsd? Anyone have any luck getting around this?
The easier way is to use this library: https://github.com/marceloboeira/aws-lambda-datadog
It has no runtime dependencies, doesn't require authentication, and reports everything to CloudWatch too. You can read more about it here: https://www.datadoghq.com/blog/how-to-monitor-lambda-functions/
Yes, it is possible to emit metrics to Datadog from an AWS Lambda function.
If you were using Node.js, you could use https://www.npmjs.com/package/datadog-metrics to emit metrics to the API. It supports counters, gauges and histograms. You just need to pass in your API/app keys as environment variables.
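A minimal sketch of that inside a Lambda handler, assuming DATADOG_API_KEY is set in the Lambda environment (the prefix and metric names are examples):

```js
// Sketch: emitting counters, gauges and histograms with datadog-metrics.
const metrics = require('datadog-metrics');

metrics.init({ prefix: 'myapp.' }); // API key is read from DATADOG_API_KEY

exports.handler = async (event) => {
  metrics.increment('requests');              // counter
  metrics.gauge('queue.depth', 42);           // gauge
  metrics.histogram('request.duration', 95);  // histogram

  // Lambda can freeze the container before buffered metrics are sent,
  // so flush explicitly before returning.
  await new Promise((resolve, reject) => metrics.flush(resolve, reject));
  return { statusCode: 200 };
};
```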
Matt