Dialogflow logging to BigQuery table: invalid schema - google-cloud-platform

I am using Google Cloud Logging to sink Dialogflow request data to BigQuery. It works for the first 3-4 requests but then gives the following error.
Error Detail
Field jsonPayload.queryresult.responsemessages.payload.richcontent is a list of lists, which is not supported by Bigquery.
How can I solve this problem?

Related

How to solve QuickSight SQL exception timeout error

I have created a VIEW in Amazon Athena using:
CREATE OR REPLACE VIEW sentiment_analysis AS
SELECT
file,
Sentiment,
SentimentScore.Positive AS Positive,
SentimentScore.Negative AS Negative,
SentimentScore.Neutral AS Neutral,
SentimentScore.Mixed AS Mixed
FROM "targeted_sentiment_output"."sentiment_results"
The VIEW works and populates with data.
I am now trying to load this into Amazon QuickSight, but get the following error:
Your database generated a SQL exception. This can be caused by query timeouts, resource constraints, unexpected DDL alterations before or during a query, and other database errors. Check your database settings and your query, and try again.
I think it may be a timeout error. I have tried to find the Advanced Settings tab to increase the query timeout limit but can't find it. Please can you provide clear instructions on where to locate the timeout limit?
If it is a different error then please explain...
Thanks!

Create a report from GCP Cloud SQL logs

I have enabled logging on my GCP PostgreSQL 11 Cloud SQL database. The logs are being redirected to a bucket in the same project and they are in a JSON format.
The logs contain queries which were executed on the database. Is there a way to create a decent report from these JSON logs with a few fields from the log entries? Currently the log files are in JSON and not very reader-friendly.
Additionally, if a multi-line query is run, that many log entries are created for that query: one per line. If there is also a way to recognize logs which belong to the same query, that would be helpful, too!
I guess the easiest way is using BigQuery.
BigQuery will properly import those JSONL files and will assign proper field names to the JSON data.
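One way to do the import is the LOAD DATA statement (a minimal sketch; the table and bucket names here are assumptions, adjust them to your project):

-- Load the newline-delimited JSON files written by the log sink.
-- If the target table doesn't exist yet, BigQuery creates it and
-- infers the schema from the files.
LOAD DATA INTO cloudsql_logs.postgres_log
FROM FILES (
  format = 'JSON',
  uris = ['gs://my-log-bucket/*.json']
);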
When you have multi-line queries, you'll see that they appear as multiple log entries in the JSON files.
It looks like all entries from a multi-line query have the same receiveTimestamp (which makes sense, since they were produced at the same time).
Also, the insertId field has an 's=xxxx' subfield that does not change for lines of the same statement. For example:
insertId: "s=6657e04f732a4f45a107bc2b56ae428c;i=1d4598;b=c09b782e120c4f1f983cec0993fdb866;m=c4ae690400;t=5b1b334351733;x=ccf0744974395562-0#a1"
The strategy to extract the statements in the right line order is (see the sketch after this list):
Sort by the 's' field in insertId
Then sort by receiveTimestamp ascending (to get all the lines sent at once to the syslog agent in the Cloud SQL service)
And finally sort by timestamp ascending (to get the line ordering right)
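A minimal sketch of that ordering in BigQuery, assuming the imported table is named cloudsql_logs.postgres_log (a hypothetical name) and has the standard Cloud Logging fields:

-- Pull out the per-statement id from insertId, then order the lines so
-- each multi-line statement comes out contiguous and in source order.
SELECT
  REGEXP_EXTRACT(insertId, r's=([0-9a-f]+)') AS statement_id,
  timestamp,
  textPayload
FROM cloudsql_logs.postgres_log
ORDER BY
  statement_id,
  receiveTimestamp,
  timestamp;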

Why does querying a report from the Google Play Console via the Google Cloud BigQuery API give incomplete results

I'm trying to get data from one of the reports available in the Google Play Console, specifically the user_acquisition report. I set up the data transfer service within Google Cloud Platform in order to use the BigQuery API.
When querying that specific report, the results are partial. Some columns match the results I get when downloading the report manually, but other columns just contain null, although the downloaded report shows that there should be numerical values there.
Another peculiar thing is that when specifying a date range for the query (the month of May, for example), the result shows only about a third of the dates in that month, although there should be a row for each day of the month.
When looking at the transfer run history, some of the runs have completed successfully and some have failed with the error message: Error code 5 : No files found for any reports. Please make sure you selected the correct Google Cloud Storage bucket and Google Play reports exist. But if no files are found, then how am I getting any results at all?
The users of both the GCP and Google Play Console are the owners of the project, so there shouldn't be any issue with the permissions to access the bucket where the reports are stored.
I tried creating another data transfer service to see if it can even find the reports. It did find some of the files but not the one I'm interested in. The transfer run history shows the same error as mentioned above.
Has anyone had a similar problem before and can perhaps offer some sort of solution? Or maybe just has some insight into why this problem is occurring?
I think the issue could be related to the availability of the desired report, since I've found that only some reports are supported by this service:
Detailed reports (Reviews, Financial reports)
Aggregated reports (Statistics, User acquisition)
Could it happen that the specific report you want to export is not supported?
If that's not the case, I think you should file a support case sharing the "Resource name" shown in the Transfer details of the failed exports (and of the correct ones, for reference). As an alternative to the support ticket, you can also report a defect against the transfer service on the Public Issue Tracker. The support team can help you review the error message further.
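To see exactly which days are missing from what the transfer did load, you could run something like this (the project, dataset, table, and date column names are assumptions, adjust them to your setup):

-- List the days of the month that have no rows in the transferred table.
-- Adjust the date range to the month you actually queried.
SELECT day
FROM UNNEST(GENERATE_DATE_ARRAY('2021-05-01', '2021-05-31')) AS day
LEFT JOIN (
  SELECT DISTINCT date AS report_date
  FROM my_project.play_transfer.user_acquisition
) t ON t.report_date = day
WHERE t.report_date IS NULL
ORDER BY day;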

"No data" message in Google Data Studio chart after connecting dataset from BigQuery?

I am trying to connect and visualise aggregation of metrics from a wildcard table in BigQuery. This is the first time I am connecting a table from this particular Google Cloud project to Data Studio. Prior to this, I have successfully connected and visualised metrics from other BigQuery tables from other Google Cloud projects in Google Data Studio and never encountered this issue. Any ideas? Could this be something to do with project-level permissions for Google Data Studio to access a BigQuery table for the first time?
More details of this instance: the dataset itself seems to be successfully connected in Data Studio, so no errors were encountered there. After adding some charts connected to that data source and aggregating metrics, no other Data Studio error messages were encountered, just the words "No data" displayed in the chart. Could this also be a formatting issue in the BigQuery table itself? The BigQuery table in question was created via pandas-gbq in a loop that splits the original dataset into individual daily _YYYYMMDD tables. However, this has been done before and never presented a problem.
I have been struggling with the same problem for a while, and eventually I found out that, at least in my case, it is related to the date I add to the suffix (_YYYYMMDD). If I add "today" to the suffix, Data Studio won't recognize it and will display "No data", but if I change it to "yesterday" (a day earlier), it will then display the data correctly. I think it is probably related to time zones, e.g., "today" here has not yet started in the US, so the system can't show it. Hopefully this helps.
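A minimal sketch of that idea as a custom query over the wildcard table, capping the suffix at yesterday (the events_* prefix is a hypothetical table name):

-- Skip today's _YYYYMMDD table, which may not be visible or populated yet.
SELECT *
FROM `my_project.my_dataset.events_*`
WHERE _TABLE_SUFFIX <= FORMAT_DATE('%Y%m%d', DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY));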

How to fix 'Request contains an invalid argument' error when scheduling queries in BigQuery UI

I'm setting up a scheduled query in the new BigQuery UI as the project owner and have enabled the data transfer API. The query itself is a very simple SELECT * FROM table query written in standard SQL. The datasets I'm using are in the same region.
No matter how I set up the schedule options (start now, schedule start time, daily, weekly, etc.) or the destination dataset/table, I always get the same error:
"Error updating scheduled query: Request contains an invalid argument."
I have no idea which argument is invalid; it gives no more detail than that.
How do I solve this problem?
Trying to schedule the query in the classic BigQuery UI surfaces a more descriptive error that illustrates the issue:
Error in creating a new transfer: BigQuery Data Transfer Service does not yet support location northamerica-northeast1.
The data must be stored in either the US or the EU at this time, it seems.
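If you want to confirm where your datasets actually live before scheduling against them, one option is the region-qualified INFORMATION_SCHEMA views (a sketch; run it once per region qualifier you use):

-- List every dataset in the US multi-region together with its location.
SELECT schema_name, location
FROM `region-us`.INFORMATION_SCHEMA.SCHEMATA;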