I have a table in BigQuery with around 32 million rows and 85 columns, with a size of 22 GB. (It is a physical table, not a view or a temporary table.) I am using this table as a datasource in Domo. This is the Domo connector I am using: https://www.domo.com/appstore/connector/google-bigquery-service-connector/overview .
I am getting the "Domo is ready, but BigQuery returned Premature EOF. Please try again later." error in Domo, and there are no further details available from Domo on this error. As per a suggestion from Domo support, I have tried the "allow large results" option too, but it still fails with the same error. When I run a query in the BigQuery editor on the same dataset, it works fine and completes execution in 17 seconds.
Are there other options I should try?
I don't think the volume of data is the issue, because yesterday I got the same error on another dataset in Domo that connects to a BigQuery table with only 0.7 million rows.
Are there any BigQuery limits that are causing this error?
There are two Redshift tables named A & B and a QuickSight dashboard that uses A MINUS B as the query to display content for a visual. If we use the direct query option, the query times out because it does not complete within 2 minutes (QuickSight has a hard limit that queries must finish within 2 minutes). Is there a way to use such large datasets as input for a QuickSight dashboard visual?
We can't use the SPICE engine because it has a 1-billion-row / 1 TB size limit. Also, it has a 15-minute delay to refresh data.
You will likely need to provide more information to fully resolve this. MINUS can be a very expensive operation, especially if you haven't optimized the tables for it. Can you provide information about your table setup and the EXPLAIN plan of the query you are running?
Barring improving the query itself, one way to work around a poorly performing query behind QuickSight is to move the query into a materialized view. That way the result of the query is stored for later retrieval and only needs to be refreshed when the source data changes. It sounds like your data only changes every 15 minutes (did I get that right?), in which case this may be an option.
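A minimal sketch of what that could look like in Redshift, assuming the visual really only needs the rows in A that are not in B (the table and view names below are placeholders, not from the original question):

-- Materialize the expensive A MINUS B result so QuickSight only has to
-- scan precomputed rows. Redshift treats MINUS as a synonym for EXCEPT.
-- Note: a set-operation query like this may only support a full (not
-- incremental) refresh.
CREATE MATERIALIZED VIEW a_minus_b AS
SELECT * FROM table_a
EXCEPT
SELECT * FROM table_b;

-- Refresh on whatever cadence matches how often the source tables change
-- (roughly every 15 minutes, going by the question).
REFRESH MATERIALIZED VIEW a_minus_b;

The QuickSight dataset would then point at a_minus_b instead of running the raw MINUS query on every visual load.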
Good evening!
We are currently having an issue with the below error being thrown when trying to refresh a dataset via the on-premises gateway. Refreshing directly through Power BI Desktop does not produce any errors and seems to work fine.
Data source error: {"error":{"code":"DM_GWPipeline_Gateway_MashupDataAccessError",
"pbi.error":{"code":"DM_GWPipeline_Gateway_MashupDataAccessError",
"parameters":{},"details":[{"code":"DM_ErrorDetailNameCode_UnderlyingErrorCode",
"detail":{"type":1,"value":"-2147467259"}},{"code":"DM_ErrorDetailNameCode_UnderlyingErrorMessage",
"detail":{"type":1,"value":"The key didn't match any rows in the table."}},{"code":"DM_ErrorDetailNameCode_UnderlyingHResult",
"detail":{"type":1,"value":"-2147467259"}},{"code":"Microsoft.Data.Mashup.ValueError.Key",
"detail":{"type":1,"value":"[entity = \"Forecast\"]"}},{"code":"Microsoft.Data.Mashup.ValueError.Reason",
"detail":{"type":1,"value":"Expression.Error"}}],"exceptionCulprit":1}}} Table: FACT - Cost Forecast Tool.
Cluster URI: WABI-EUROPE-NORTH-B-redirect.analysis.windows.net
Activity ID: f4b629a6-a9bc-4966-954c-ae37139737a4
Request ID: db16cb6d-a765-1e0e-f9f5-b8803c8baa6e
Time: 2020-11-25 17:20:30Z
From reading previous posts, I'm not sure any of the normal responses apply in this scenario, as they tend to relate to either pulling data from SQL Server, or Excel files (where the sheet name causes an issue).
The table throwing the error (FACT - Cost Forecast Tool) is made by appending two other tables - one comes from a Power BI dataflow (Forecast), and one from a folder import of CSV files (Forecast Tool Adjustments).
Publishing a model with just the two individual tables (Forecast and Forecast Tool Adjustments) works and refreshes fine.
The above error only occurs when a third table is added that combines these two tables via a one-liner in Power Query:
= Table.Combine({#"Forecast Tool Adjustments", Forecast})
Oddly, the value being flagged in the error, "[entity = "Forecast"]", corresponds to one of the first steps in Power Query for the 'Forecast' table. So while that table refreshes fine on its own, it seems to throw an error when refreshed through the combined table?
Any thoughts would be greatly appreciated.
I am trying to connect and visualise an aggregation of metrics from a wildcard table in BigQuery. This is the first time I am connecting a table from this particular Google Cloud project to Data Studio. Prior to this, I have successfully connected and visualised metrics from other BigQuery tables in other Google Cloud projects in Google Data Studio and never encountered this issue. Any ideas? Could this be something to do with project-level permissions for Google Data Studio to access a BigQuery table for the first time?
More details on this instance: the dataset itself seems to connect into Data Studio successfully, so no errors were encountered there. After adding some charts connected to that data source and aggregating metrics, no Data Studio error messages were encountered either, just the words "No data" displayed in the chart. Could this also be a formatting issue in the BigQuery table itself? The BigQuery table in question was created via pandas-gbq in a loop that splits the original dataset into individual daily _YYYYMMDD tables. However, this has been done before and never presented a problem.
I have been struggling with the same problem for a while, and eventually I found out that, at least in my case, it is related to the date I add as the suffix (_YYYYMMDD). If I add "today" as the suffix, Data Studio won't recognize it and will display "No data", but if I change it to "yesterday" (a day earlier), it displays the data correctly. I think it is probably related to timezones, e.g., "today" here is not yet "today" in the US, so the system can't show it. Hopefully this helps.
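A quick way to check this from the BigQuery side is to query the wildcard table directly and see which daily shards actually return rows; a rough sketch, with placeholder project, dataset, and table names:

-- List the most recent daily shards and their row counts. If the shard
-- for "today" is missing or empty while "yesterday" has rows, the chart
-- in Data Studio will show "No data" when it points at today's date.
SELECT
  _TABLE_SUFFIX AS shard,
  COUNT(*) AS row_count
FROM `my_project.my_dataset.my_table_*`
WHERE _TABLE_SUFFIX >= FORMAT_DATE('%Y%m%d', DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY))
GROUP BY shard
ORDER BY shard;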
I am trying to alter a table in a Google Cloud SQL database that has several million records and a couple of indexes on it.
After a while (during which the space used on the db instance goes up by several GBs), the "alter table" command fails with the error: "ERROR 1034 (HY000): Incorrect key file for table xxx".
1) I searched for it and it seems that this often happens when tmpdir runs short of space. The suggestions were to change the location of tmpdir for the MySQL database to somewhere on the file system with more storage available. I don't really have that option on the Google Cloud SQL setup, as far as I know.
2) I ran a "check table xxx" command on the mentioned table and it showed status=OK. So there is no real corruption of the table involved anywhere. It just seems to be running out of space behind the scenes during the "alter table" on this heavy table.
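For reference, the two checks above look roughly like this in MySQL (xxx stands for the real table name):

-- Where MySQL writes temporary files during operations such as ALTER TABLE;
-- on Cloud SQL this location is managed by the service.
SHOW VARIABLES LIKE 'tmpdir';

-- Confirms the table itself is not corrupted (this returned status=OK here).
CHECK TABLE xxx;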
Any suggestions, please? Can I increase the tmpdir space on the Google Cloud SQL setup for my project somehow? Can I change its location and give it more space?
This sounds like a Cloud SQL First Generation instance-specific problem. The location and the allocated amount of tmpdir storage (10 GB) cannot be changed in that case, unfortunately.
The only reasonable option would be to migrate to a Cloud SQL Second Generation instance:
https://cloud.google.com/sql/docs/mysql/upgrade-2nd-gen
When I try to import BigQuery tables as a dataset in my Dataprep flow, I get the following error:
Could not create dataset: I/o error.
I tried to import many BigQuery tables (all from the same BQ dataset); all of them imported successfully except this one, which has many columns (more than 2700!).
Maybe that's because of the large number of columns, but I can't see any such limitation in the docs!
When I select the table, I get the message "Preview not available", and after clicking "import" I get the error shown above.
Does anyone have any idea why this is happening, or any suggestions?
The Dataprep documentation doesn't mention a maximum-column limitation, so it's not likely that this is the problem.
On the other hand, I have seen generic messages like 'I/O error' or simply red icons when importing data, and they tend to be related to data type conversion between Dataprep and BigQuery.
I think that finding the BigQuery types that are not compatible with Dataprep and converting them to compatible ones should solve your issue.
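A rough way to find candidates, assuming you have query access to the dataset (project, dataset, and table names below are placeholders):

-- List every column and its BigQuery data type for the problem table,
-- so any types Dataprep cannot handle can be cast or dropped before import.
SELECT column_name, data_type
FROM `my_project.my_dataset.INFORMATION_SCHEMA.COLUMNS`
WHERE table_name = 'my_wide_table'
ORDER BY ordinal_position;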