Is it possible to hide BigQuery query execution logs in Google Cloud platform? - google-cloud-platform

Based on my understanding, Google Cloud Platform does not provide a BigQuery-specific Logging API that we can disable so that BigQuery SQL queries do not get logged.
Any reference or workaround will be highly appreciated.
Use case:
Queries need to be executed in the client's dataset, and the data has to stay within the client's project only.

There is no table data in the logs, only the query that was performed. However, you can exclude the logs if you want, but then you won't be able to track, debug, or understand what happened. If you are comfortable with that, go ahead!
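As a sketch of that exclusion workaround: a logs exclusion is added to the `_Default` sink using a Cloud Logging filter that matches BigQuery audit entries. A filter along these lines would stop the query logs from being stored (the exact `methodName` differs between the older and newer BigQuery audit log formats, so verify it against your own log entries first):

```
resource.type="bigquery_resource"
protoPayload.serviceName="bigquery.googleapis.com"
protoPayload.methodName="jobservice.jobcompleted"
```

Excluded entries are discarded at ingestion and cannot be recovered later, which is exactly the trade-off described above.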

Related

Adding Google Analytics Segments to AWS Appflow

I am trying to add segments to my AWS AppFlow flow that is pulling Google Analytics data.
This is because I am running into sampling problems (Google Analytics summarizes a lot of the data and makes analysis impossible).
I can add date range filters, but even with that set to the minimum, I still need to break the requests down further via segmenting. However, I cannot find any support articles or places online that describe doing something similar.
I have used the Google Analytics API by itself, without AppFlow, and was able to get all the data without sampling, but I need to do something similar using AppFlow.
What is the correct way to add segments to a Google Analytics AppFlow flow?
Thanks in advance for any help

Analyze Number value in Different Conditions with google cloud platform logging

I'm struggling to find out how to use GCP Logging to log a number value for analysis. I'm looking for a link to a tutorial or something (or a better 3rd party service to do this).
Context: I have a service that I'd like to test different conditions for the function execution time and analyze it with google-cloud-platform logging.
Example Log: { condition: 1, duration: 1000 }
Desire: Create graph using GCP logs to compare condition 1 and 2.
Is there a tutorial somewhere for this? Or maybe there is a better 3rd party service to use?
PS: I'm using the Node google cloud logging client which only talks about text logs.
PPS: I considered doing this in Loggly, but ended up getting lost in their documentation and UI.
There are many tools that you could use to solve this problem. However, you indicate a willingness to use Google Cloud Platform services (e.g. Stackdriver Monitoring), so I'll provide some guidance using those.
NOTE Please read around the topic and understand the costs involved with using e.g. Cloud Monitoring before you commit to an approach.
Conceptually, the data you're logging (!) more closely matches a metric. However, this approach would require you to add some form of metric library (see Open Telemetry: Node.js) to your code and instrument your code to record the values that interest you.
You could then use e.g. Google Cloud Monitoring to graph your metric.
Since you're already producing a log with the data you wish to analyze, you can use Log-based metrics to create a metric from your logs. You may be interested in reviewing the content for distribution metric.
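To illustrate, a log-based distribution metric for the example entry above could use a filter that matches the structured payload, a value extractor that pulls out the duration, and a label extractor for the condition, so the two conditions can be compared on one chart (the field names come from the example log; the metric itself is something you would define in the Logs-based Metrics page):

```
# Metric filter — match entries that carry the structured payload:
jsonPayload.duration:*

# Value extractor (the distribution's recorded value):
EXTRACT(jsonPayload.duration)

# Label "condition" extractor (lets you group/filter by condition):
EXTRACT(jsonPayload.condition)
```

Note this only works if the entries are written as structured (JSON) logs rather than plain text, which is worth checking given the point about the Node client below.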
Once you have a metric (either directly or logs-based), you can then graph the resulting data in Cloud Monitoring. For logs-based metrics, see the Monitoring documentation.
For completeness, and to provide an alternative approach to producing and analyzing metrics, see the open-source tool Prometheus. Using a 3rd-party Prometheus client library for Node.js, you could instrument your code to produce a metric. You would then configure Prometheus to scrape your app for its metrics and graph the results for you.
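To make the Prometheus route concrete, here is a minimal sketch using the 3rd-party `prometheus_client` library for Python (the metric name `function_duration_ms` and the `condition` label are made up for this example; a Node.js service would use the equivalent `prom-client` package instead):

```python
from prometheus_client import Histogram, start_http_server, generate_latest

# Histogram of execution time, labelled by experiment condition so that
# condition=1 and condition=2 can be graphed side by side in Prometheus.
FUNCTION_DURATION_MS = Histogram(
    "function_duration_ms",
    "Execution time of the function under test, in milliseconds",
    ["condition"],
)

def record(condition: int, duration_ms: float) -> None:
    """Record one observation for the given experiment condition."""
    FUNCTION_DURATION_MS.labels(condition=str(condition)).observe(duration_ms)

def serve_metrics(port: int = 8000) -> None:
    """Expose /metrics on the given port for Prometheus to scrape."""
    start_http_server(port)

record(1, 1000)
record(2, 250)
# The text exposition format now contains, per condition, a *_count,
# a *_sum, and the histogram buckets:
print(generate_latest().decode())
```

Prometheus would then scrape the `/metrics` endpoint, and a query such as a rate over `function_duration_ms_sum` divided by `function_duration_ms_count`, grouped by `condition`, gives the comparison chart described in the question.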

How to generate uptime reports through Google Cloud Stackdriver?

I am a new user with Google cloud (Stackdriver).
I would like to set up and generate uptime reports on a monthly basis, covering the past 4 weeks and delivered through e-mail, but I have not been able to find where I could do this.
I have done research but have not managed to find what I am looking for. The closest I got was Trace, but it is still not what I would like to have.
It's not possible to generate that kind of report using the tools available in Google Cloud.
Using traces is probably the best you can do for now, although you can try the Cloud Trace API, which may give you a way to extract the information in a more structured way.
If you want this feature included in GCP, please go to IssueTracker and create a new feature request with a detailed explanation of what your goal is, and mention the time span you want to be able to get data from.

Schedule loading data from GCS to BigQuery periodically

I've researched this and have currently come up with a strategy using Apache Airflow, but I'm still not sure how to do it. Most of the blogs and answers I'm finding are just code, rather than material that helps me understand it better. Also, please suggest if there is a better way to do this.
I also got an answer like using Background Cloud Function with a Cloud Storage trigger.
You can use BigQuery's Cloud Storage transfers, but note that it's still in BETA.
It gives you the option to schedule transfers from Cloud Storage to BigQuery with certain limitations.
Most of the blogs and answers I'm getting are just code
Apache Airflow comes with a rich UI for many tasks, but that doesn't mean you are not supposed to write code to get your task done.
For your case, you need to use the BigQuery command-line operator for Apache Airflow.
A good way to do this can be found in this link.

How to understand errors(combined) at Google Spanner Monitor?

The Google Spanner monitor provides helpful information about databases and instances. The "operations per second" view contains an "errors (combined)" measure that is not clear to me.
How should I understand errors (combined)?
You can make a dashboard in Stackdriver (https://app.google.stackdriver.com) that will break down the errors slightly. We're working on a resources page for Cloud Spanner right now that will actually break them down by error code, but before that, you can go to Resources > Metrics Explorer and filter by response status.
You'll occasionally get error responses using the Cloud Spanner API; FAILED_PRECONDITION is somewhat common if you have a lot of transactions happening simultaneously that invalidate other transactions.
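For reference, the Metrics Explorer breakdown described above looks roughly like this (the metric type is Cloud Spanner's API request count; grouping by the `status` label is what splits the combined errors into individual response codes, so double-check the exact metric and label names in your project's Metrics Explorer):

```
resource.type = "spanner_instance"
metric.type = "spanner.googleapis.com/api/api_request_count"
# Group by: metric.label.status
# To see only errors, exclude: metric.label.status = "OK"
```

With that grouping in place, an occasional FAILED_PRECONDITION count would show up as its own line on the chart rather than being folded into errors (combined).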