Snowflake invalid materialized view definition

When running the following command in Snowflake:
CREATE MATERIALIZED VIEW MV_CUSTOMER_PREFERENCE as select * from V_CUSTOMER_PREFERENCE;
I get the following error:
SQL compilation error: error line {0} at position {1} Invalid materialized view definition. More than one table referenced in the view definition
V_CUSTOMER_PREFERENCE is an existing, functioning view (it can be queried on its own) that joins information from several tables. I get the same error when I substitute the view's underlying query for the view itself; it's just a long and complicated SQL query.
What could be the problem with the query in the view? I can't work it out from the error description, and I didn't find any related restrictions at https://docs.snowflake.net/manuals/user-guide/views-materialized.html

A materialized view can query only a single table. You can see the list of limitations for working with materialized views here:
https://docs.snowflake.net/manuals/user-guide/views-materialized.html#limitations-on-creating-materialized-views

That is correct. Unlike in other databases, materialized views in Snowflake are a very targeted and simplified feature. They have the following use cases:
Provide alternative clustering for tables with multiple access paths.
Provide projection/restriction of high-use columns/rows.
Provide pre-aggregation for high-frequency queries and sub-queries.
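For example, a materialized view that pre-aggregates a single table falls within these limits; a minimal sketch, assuming a hypothetical ORDERS table (the multi-table join inside V_CUSTOMER_PREFERENCE is exactly what the error rejects):

-- Allowed: one table, simple aggregation (table and column names are hypothetical)
CREATE MATERIALIZED VIEW MV_ORDER_COUNTS AS
SELECT customer_id,
       COUNT(*) AS order_count
FROM orders
GROUP BY customer_id;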


Google Bigquery: Join of two external tables fails if one of them is empty

I have 2 external tables in BigQuery, created on top of JSON files on Google Cloud Storage. The first one is a fact table; the second holds error data, and it may or may not be empty.
I can query each table separately just fine, even an empty one.
I'm also able to left join them if both of them are not empty.
However, if the errors table is empty, my query fails with the following error:
The query specified one or more federated data sources but not all of them were scanned. It usually indicates incorrect uri specification or a 'limit' clause over a union of federated data sources that was satisfied without having to read all sources.
This situation isn't covered anywhere in the docs, and it's not related to this versioning issue: "Reading BigQuery federated table as source in Dataflow throws an error".
I'd rather avoid converting either of these tables to native tables, since they are used in just one step of the ETL process and the data is dropped afterwards. One of them being empty doesn't look like an exceptional situation, since a plain SELECT works just fine.
Is there a workaround?
UPD: I raised an issue with Google and am waiting for a response: https://issuetracker.google.com/issues/145230326
It feels like a bug. One workaround is to use scripting to avoid querying the empty table:
DECLARE is_external_table_empty BOOL DEFAULT
  (SELECT 0 = (SELECT COUNT(*) FROM your_external_table));
-- do things differently when is_external_table_empty is true
IF is_external_table_empty THEN
  ...
ELSE
  ...
END IF;
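A fuller sketch of the same workaround, with hypothetical project, table, and column names (facts_external, errors_external, id, and error_message are all assumptions, not from the original question), that only joins the errors table when it actually has rows:

-- Check the errors table once, up front.
DECLARE is_errors_empty BOOL DEFAULT
  (SELECT COUNT(*) = 0 FROM `my-project.my_dataset.errors_external`);

IF is_errors_empty THEN
  -- Errors table is empty: skip the join entirely.
  CREATE OR REPLACE TEMP TABLE joined AS
  SELECT f.*, CAST(NULL AS STRING) AS error_message
  FROM `my-project.my_dataset.facts_external` AS f;
ELSE
  -- Errors table has rows: the LEFT JOIN is safe.
  CREATE OR REPLACE TEMP TABLE joined AS
  SELECT f.*, e.error_message
  FROM `my-project.my_dataset.facts_external` AS f
  LEFT JOIN `my-project.my_dataset.errors_external` AS e USING (id);
END IF;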

Textfield with autocomplete

I have several fields that contain exactly the same SQL query. Is it possible to define the SQL query centrally in APEX, in the same way as a list of values, or as a function in Oracle? I am using APEX 18.2.
Here are two extended solutions:
Pipelined SQL
https://smart4solutions.nl/blog/on-apex-lovs-and-how-to-define-their-queries/
Dynamic SQL
http://stevenfeuersteinonplsql.blogspot.com/2017/01/learn-to-hate-repetition-lesson-from.html
Call me dense, but I don't think I understand why you'd have multiple fields (presumably on the same form) whose source is the same SQL query.
Are you passing a parameter to the SQL to get a different value for each field?
If you are passing a parameter to the SQL query, why not create a database view to hold the query, then apply the parameter when selecting from the view? That way, if you need to change the query, it's in one place.
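For illustration, a minimal sketch of that view approach; the table, view, and page-item names are hypothetical:

-- Keep the shared query in one database view.
CREATE OR REPLACE VIEW customer_pref_v AS
SELECT c.cust_id,
       c.cust_name,
       p.preference
FROM customers c
JOIN preferences p ON p.cust_id = c.cust_id;

-- Each page item then selects from the view with its own parameter, e.g.:
-- SELECT preference FROM customer_pref_v WHERE cust_id = :P1_CUST_ID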
If they really are all the same value from the same query, how about using the SQL for one field/page item, then making the source of the others be that first page item?
I would create a hidden item with the query text in its source and use &HIDDEN_ITEM_NAME. to reference its value in the source of any item where I wanted to display the query.
I finally solved it with a function, using the type "PL/SQL Function Body returning SQL Query" in APEX; then I have it all in one place. I created a function in SQL Developer that returns a SQL query.
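For example, the central function could look like this minimal sketch (the function, table, and column names are hypothetical, not from the original post):

-- One central function that returns the shared query text.
CREATE OR REPLACE FUNCTION get_customer_query
  RETURN VARCHAR2
IS
BEGIN
  RETURN 'SELECT cust_name AS display_value,
                 cust_id   AS return_value
            FROM customers
           ORDER BY cust_name';
END get_customer_query;
/

In APEX, each item's source type is then set to "PL/SQL Function Body returning SQL Query" with a one-line body such as RETURN get_customer_query; so any change to the query happens in exactly one place.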

Require Partition Filter On BigQuery Views

We currently have a couple of authorized views in BigQuery for various teams.
We use the partition_date column in queries to reduce the amount of data processed (reference):
#standardSQL
SELECT
  <required_fields,...>,
  EXTRACT(DATE FROM _PARTITIONTIME) AS partition_date
FROM
  `<project-name>.<dataset-name>.<table-name>`
WHERE
  _PARTITIONTIME >= TIMESTAMP("2018-05-01")
  AND _PARTITIONTIME <= CURRENT_TIMESTAMP()
  AND <Blah-Blah-Blah>
However, given the number of users and the amount of data we have, it's very hard to maintain the quality of BigQuery scripts, and query costs keep rising as the number of users grows.
I see we can use --require_partition_filter (reference) when creating tables. So, could someone help me address the following questions?
When I create a table with this flag, will a view that references the table also require the partition condition, because the partition filter is enabled at the table level?
Given the number of authorized views connected to our tables, it would take significant effort to change them to materialized views (tables). Is there an alternative way to apply something like --require_partition_filter at the view level?
FYI, for anyone who wants to update an existing table with this flag, I see we can use the bq update command (reference), which I am planning to use for existing partitioned tables.
Yes, the same restriction applies to tables queried through the view.
There is not.
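For the table-level flag itself, besides bq update, BigQuery's DDL can set the option directly; a minimal sketch with hypothetical project, dataset, and table names:

-- Require a partition filter on every query against this table
-- (equivalent to bq update --require_partition_filter).
ALTER TABLE `my-project.my_dataset.my_table`
SET OPTIONS (require_partition_filter = true);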

Power BI : Convert Duplicate Table into a Reference Table

Is there an automated way to convert a Duplicate Table (with all its steps) into a Reference Table, preserving all the steps, in the Query Editor?
Short answer: not really, but it's fairly trivial to do manually for one query.
Reference Table and Duplicate Table are GUI operations, which like other GUI operations, simply insert M code into the query. You can see the entire query in the Advanced Editor.
Reference Table just inserts the name of the other query; the effect is branching the data processing pipelines. If you change the original query, it affects all downstream queries.
Duplicate Table copies all of the steps; the effect is creating a separate query. Each copy can be changed independently later; there is no link back to where the steps came from, even if they are never changed.
So, it seems that you want to convert duplicated steps into references. There is no automated way of doing it. But if you know two queries start with the same steps, try this: duplicate one of them into a base query and remove the final steps that are not in common. Mark the new query not to load to the report (click All Properties, then uncheck "Enable load to report"). Then you can replace the duplicated initial steps in the other queries with a reference, via a step like Source = BaseQuery in the Advanced Editor.
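For illustration, a downstream query's M code in the Advanced Editor might then look like this sketch (BaseQuery and the filter step are hypothetical, not from the original question):

let
    // Reference the shared base query instead of repeating its steps
    Source = BaseQuery,
    // Steps unique to this query continue from here
    Filtered = Table.SelectRows(Source, each [Year] = 2020)
in
    Filtered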
Also, if you find yourself duplicating steps in the middle of a query, you can create a query used as a function.

Does Cloud Spanner support a TRUNCATE TABLE command?

I want to clear all the values from a table. It has a few secondary indexes. I tried to do this by committing a transaction with Mutation.delete("MyTable", KeySet.all()) (see the docs here). But I got an error:
error:INVALID_ARGUMENT: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: The transaction contains too many mutations.
How can I efficiently clear my table contents?
Cloud Spanner does not support a TRUNCATE TABLE command. If your table had no secondary indexes, you could specify KeySet.all() in your delete as you did above, but this can fail if your table has secondary indexes and is large.
The best way to do what you want is to issue an updateDdl RPC including the following statements:
1) For each secondary index on MyTable, include a corresponding DROP INDEX statement
2) DROP TABLE MyTable
3) If necessary, re-create your table and indexes via the CREATE TABLE and CREATE INDEX statements, respectively.
Note that you are allowed and encouraged to include all of these statements in a single updateDdl RPC. The advantage of this is that it gives you atomic ("all-or-nothing") semantics.
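As a minimal sketch, assuming MyTable has a single secondary index named MyTableByValue (both the schema and the index name are hypothetical), the single updateDdl batch could contain:

-- 1) Drop each secondary index first
DROP INDEX MyTableByValue;
-- 2) Drop the table itself
DROP TABLE MyTable;
-- 3) Re-create the table and its index
CREATE TABLE MyTable (
  Id    INT64 NOT NULL,
  Value STRING(MAX)
) PRIMARY KEY (Id);
CREATE INDEX MyTableByValue ON MyTable(Value);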