"Unsupported subquery type cannot be evaluated" error from a common table expression

I am trying to use a WHERE EXISTS subquery as follows:
WITH FILTER AS (
    SELECT matchingvalues
    FROM (VALUES ('This'), ('Any')) filter(matchingvalues)
),
SRC AS (
    SELECT Column_A,
           Column_B
    FROM (VALUES ('This', '1'), ('That', '2')) SRC(Column_A, Column_B)
)
SELECT *
FROM SRC
WHERE EXISTS ( SELECT 1
               FROM FILTER
               WHERE Column_A = matchingvalues
                  OR matchingvalues = 'Any'
             )
This works in T-SQL but not in Snowflake, which returns the following error:
"SQL compilation error: Unsupported subquery type cannot be evaluated"

This issue has been fixed in the latest version of Snowflake (3.56), which will be released this week or next. You can verify the release from the following portal:
https://support.snowflake.net/s/topic/0TO0Z000000Unu5WAC/releases
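Until that release reaches your account, one possible rewrite (a sketch only, not verified on the affected release) is to replace the correlated EXISTS with a join against the FILTER CTE, keeping the two CTEs above unchanged:
-- DISTINCT guards against duplicates when both an exact value and the
-- 'Any' wildcard match the same SRC row
SELECT DISTINCT s.Column_A, s.Column_B
FROM SRC s
JOIN FILTER f
  ON s.Column_A = f.matchingvalues
  OR f.matchingvalues = 'Any'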

Related

Is it possible to update a piece of data in a pre-existing table using the output of a CTE?

I need to correct a datapoint in a pre-existing table. I am using multiple CTEs to find the bad value and the corresponding good value. I am having trouble working out how to overwrite the value in the table using the output of the CTE. Here is what I am trying:
with [extra CTEs here]....
,CTE3 AS (
SELECT c1.FIELD_1, c1.FIELD_2 AS GOOD, c2.FIELD_3 AS BAD
FROM CTE1 c1
JOIN CTE2 c2 ON c1.FIELD_1 = c2.FIELD_1
)
update TABLE1
set TABLE1.FIELD_3 = CTE3.GOOD
from CTE3
INNER JOIN TABLE1 ON CTE3.BAD = TABLE1.FIELD_3
Is it even possible to achieve this?
If so, how should I change my logic to get it to work?
Trying the above logic is throwing the following error:
SQL Error [42601]: An unexpected token "WITH CTE1 AS ( SELECT
FIELD_1" was found following "BEGIN-OF-STATEMENT". Expected tokens
may include: "<update>".. SQLCODE=-104, SQLSTATE=42601,
DRIVER=4.27.25
Table designs and expected output:
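The error above indicates that DB2 will not accept a WITH clause directly in front of an UPDATE, and DB2 also has no T-SQL-style UPDATE ... FROM. One pattern that may work instead (a sketch only: <CTE1 query> and <CTE2 query> stand for the bodies of your CTEs, inlined as subselects) is to fold the join into a MERGE:
MERGE INTO TABLE1 AS t
USING (
    -- <CTE1 query> and <CTE2 query> are placeholders for your actual CTE bodies
    SELECT c1.FIELD_1, c1.FIELD_2 AS GOOD, c2.FIELD_3 AS BAD
    FROM (<CTE1 query>) AS c1
    JOIN (<CTE2 query>) AS c2
      ON c1.FIELD_1 = c2.FIELD_1
) AS s
ON t.FIELD_3 = s.BAD
WHEN MATCHED THEN
    UPDATE SET t.FIELD_3 = s.GOOD
Note that DB2 rejects the MERGE if several source rows match the same TABLE1 row, so the USING subselect should return a single GOOD value per BAD value.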

Django Window Function FirstValue Producing Invalid SQL

I'm trying to use a window function to get the first row of each group, and it's producing the SQL query below. I took the SQL and played around with it in the console, and it appears that the CAST() function is causing the issue. Is there a way to get rid of this in the query?
Django 2.1 code:
def contract_list(user):
    price_window = Window(
        expression=FirstValue('settle_price'),
        partition_by=F('contract'),
        order_by=F('trade_time').desc()
    )
    market_prices = Trade.objects.filter(
        contract__isnull=False
    ).annotate(
        market_price=price_window
    ).values(
        con=F('contract'),
        market_price=F('market_price'),
    ).order_by(
        'contract'
    ).all()
produces this SQL when run against a SQLite3 database:
SELECT
    CAST(FIRST_VALUE("trading_trade"."settle_price") AS NUMERIC) OVER (PARTITION BY "trading_trade"."contract_id" ORDER BY "trading_trade"."trade_time" DESC) AS "market_price",
    "trading_trade"."contract_id" AS "con"
FROM
    "trading_trade"
WHERE
    "trading_trade"."contract_id" IS NOT NULL
ORDER BY
    "trading_trade"."contract_id" ASC
Again, the issue seems to be related to the CAST(FIRST_VALUE("trading_trade"."settle_price") AS NUMERIC) since removing the cast function produces a result set.
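For reference, the statement that does return rows in the console (the same query with the CAST removed, per the observation above) looks roughly like this:
SELECT
    FIRST_VALUE("trading_trade"."settle_price") OVER (PARTITION BY "trading_trade"."contract_id" ORDER BY "trading_trade"."trade_time" DESC) AS "market_price",
    "trading_trade"."contract_id" AS "con"
FROM
    "trading_trade"
WHERE
    "trading_trade"."contract_id" IS NOT NULL
ORDER BY
    "trading_trade"."contract_id" ASC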

OLE DB or ODBC Error: We cannot convert the value null to type Logical

I am calculating the following calculated column in the Query Editor:
End Date =
if [Date_1] <> null
then [Date_1]
else if [Date_2]<>null
then [Date_2]
else DateTime.Date(DateTime.LocalNow())
Based on this column, the following table is calculated:
Resident Payer Dates =
SELECTCOLUMNS (
GENERATE (
'Table1',
FILTER (
ALLNOBLANKROW ( Dates[Date] ),
Dates[Date] >= 'Table1'[Start Date]
&& Dates[Date] <= 'Table1'[End Date]
)
),
"Id", 'Table1'[Id],
"Date", Dates[Date]
)
Everything works fine up to this point.
However, for some reason, I need to change the End Date column with the following formula:
End Date =
if [Date_1] <> null
then Date.AddDays([Date_1], -1)
else if [Date_2]<>null
then Date.AddDays([Date_2],-1)
else DateTime.Date(DateTime.LocalNow())
However, when I try to apply the changes, I am getting the following error:
I am totally clueless about why we are running into this error with such a simple change, as the change does not produce any null values.
Any help and guidance would be appreciated.
For anyone who faces this issue in the future:
Here, the Query Editor shows no errors and the evaluation completes successfully through the last step. However, when you try to apply the changes, the error mentioned above is encountered.
After a lot of searching, I figured out that the column got corrupted, for reasons that are still unknown.
To solve this, all you have to do is remove the step/column completely and recreate it; the error will go away.

PowerBI DAX get COUNT DISTINCT with GROUP BY, see SQL query below

I have the following SQL query that gives me the correct value from the database.
SELECT
SUM( DISTINCT_ORDER_NUMBERS )
FROM
(
SELECT STORE_KEY,
COUNT( DISTINCT TRANSACTION_NUM ) AS DISTINCT_ORDER_NUMBERS,
DATE_KEY,
TRANSACTION_TYPE_KEY
FROM Pos_Data
GROUP BY STORE_KEY,
DATE_KEY,
TRANSACTION_TYPE_KEY
)
AS A
I am, however, facing challenges writing a DAX formula for a measure in Power BI. Here is what I have tried so far, but I get an error.
Total Number Of Orders
VAR _TotalOrders =
SUMMARIZE('Pos_Data',
'Pos_Data'[STORE_KEY],
'Pos_Data'[DATE_KEY],
'Pos_Data'[TRANSACTION_TYPE_KEY],
"DISTINCT_ORDER_NUMBERS",
DISTINCTCOUNT('Pos_Data'[TRANSACTION_NUM]))
RETURN SUM(_TotalOrders[DISTINCT_ORDER_NUMBERS])
Please assist
The SUM function expects a column from a base table rather than a column from a calculated table.
Try this instead:
VAR _TotalOrders =
SUMMARIZE('Pos_Data',
'Pos_Data'[STORE_KEY],
'Pos_Data'[DATE_KEY],
'Pos_Data'[TRANSACTION_TYPE_KEY],
"DISTINCT_ORDER_NUMBERS",
DISTINCTCOUNT('Pos_Data'[TRANSACTION_NUM]))
RETURN SUMX(_TotalOrders, [DISTINCT_ORDER_NUMBERS])
Edit: If the difference you mentioned is related to nulls, then try this in place of DISTINCTCOUNT.
COUNTAX( DISTINCT( 'Pos_Data'[TRANSACTION_NUM] ), 'Pos_Data'[TRANSACTION_NUM] )
The COUNTAX function (as opposed to COUNTX) does not count nulls.

Adding LIMIT fixes "Invalid digit, Value N" error in Amazon Redshift. Why?

I have a standard listings table in Redshift with all columns as varchars (due to how the data is loaded into the database).
This query (simplified) gives me an error:
with AL as (
    select
        L.price::int as price
    from listings L
    where L.price <> 'NULL'
      and L.listing_type <> 'NULL'
)
select price from AL
where price < 800
and the error:
-----------------------------------------------
error: Invalid digit, Value 'N', Pos 0, Type: Integer
code: 1207
context: NULL
query: 2422868
location: :0
process: query0_24 [pid=0]
-----------------------------------------------
If I remove the where price < 800 condition, the query returns just fine... but I need the where condition to be there.
I've also checked that the values in the price field are valid numbers, and they all look good.
After playing around, this actually makes it work, and I can't quite explain why.
with AL as (
    select
        L.price::int as price
    from listings L
    where L.price <> 'NULL'
      and L.listing_type <> 'NULL'
    limit 10000000000
)
select price from AL
where price < 800
Note that the table has far fewer records than the number stated in the limit.
Can anyone (possibly from the Redshift engineering team) explain why this happens? Possibly something to do with how the query plan is executed and parallelized?
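A likely explanation is that the outer price < 800 predicate, together with the ::int cast it depends on, gets pushed down past the CTE's string filters, so the cast runs on rows that still contain the literal 'NULL'; the LIMIT appears to force the CTE to be materialized first. One defensive rewrite (a sketch, assuming price holds only digit strings or the literal text 'NULL') is to guard the cast itself so it never sees a non-numeric value, regardless of how the plan is reordered:
with AL as (
    select
        -- the CASE keeps ::int from ever touching a non-numeric string
        case when L.price ~ '^[0-9]+$' then L.price::int end as price
    from listings L
    where L.price <> 'NULL'
      and L.listing_type <> 'NULL'
)
select price
from AL
where price < 800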
I had a query that could be expressed simply as:
SELECT TOP 10 field1, field2
FROM table1
INNER JOIN table2
ON table1.field3::int = table2.field3
ORDER BY table1.field1 DESC
Removing the explicit cast to ::int solved a similar error for me.
Meanwhile, PostgreSQL locally requires the ::int for the query to work.
For what it's worth, my local postgresql version is
PostgreSQL 9.6.4 on x86_64-apple-darwin16.7.0, compiled by Apple LLVM version 8.1.0 (clang-802.0.42), 64-bit
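If you need an explicit cast that both engines accept, one option (a sketch, assuming the varchar values carry no leading zeros or padding, so string equality matches numeric equality) is to cast the integer side to text instead, so no string-to-integer conversion ever runs on dirty data:
SELECT TOP 10 field1, field2   -- TOP is Redshift syntax; use LIMIT 10 in PostgreSQL
FROM table1
INNER JOIN table2
  ON table1.field3 = table2.field3::varchar
ORDER BY table1.field1 DESC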
Loading CSV data with NaN into AWS Redshift
I found this post while searching Google, but the above link had what I needed. I was importing a numeric column containing the value NaN, which is unsupported by Redshift's numeric type.