“We can’t parse this SQL syntax” shows up in AWS QuickSight after applying toString()

I have a calculated field which computes a total based on a particular type:
sumIf(amount, type = "sale")
Now I'm trying to convert the result to a string and concatenate some text onto it, but toString(sumIf(amount, type = "sale")) gives the following message:
We can’t parse this SQL syntax. If you are using custom SQL, verify the syntax and try again. Otherwise, contact support.
Is there any way to make this work?

Did you try using the correct bracket type? i.e.
toString(sumIf({amount}, {type} = "sale"))
I tried an example like this and it worked fine; QuickSight can have issues when fields are referenced without curly braces.
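For the concatenation itself, something along these lines should then work, using QuickSight's concat function (the 'Total sales: ' label is just an example prefix, not from the question):
concat("Total sales: ", toString(sumIf({amount}, {type} = "sale")))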

Related

BigQuery struct introspection

Is there a way to get the element types of a struct? For example something along the lines of:
SELECT #TYPE(structField.y)
SELECT #TYPE(structField)
...etc
Is that possible to do? The closest I can find is via the query editor and the web call it makes to validate a query (screenshot omitted).
As I already mentioned in the comments, one option is to mimic that same dry-run call with a query built in such a way that it fails with an error message containing exactly the information you are looking for. Obviously this assumes your use case can be implemented in whatever scripting language you prefer; it should be relatively easy to do.
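For instance, a deliberately failing dry run from the bq CLI might look like this (the table path is a placeholder, and the exact error wording is not guaranteed):
bq query --use_legacy_sql=false --dry_run \
  'SELECT structField + 1 FROM `your-project.your-dataset.your-table`'
The failure message (something like "No matching signature for operator + for argument types: STRUCT<...>, INT64") spells out the full struct type, which your script can then parse.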
Meanwhile, I was looking into doing this within the SQL query itself.
Below is an example of another option.
It is limited to the types below, which may or may not fit your particular use case:
object, array, string, number, boolean, null
So the example is:
select
  s.birthdate, json_type(to_json(s.birthdate)),
  s.country,   json_type(to_json(s.country)),
  s.age,       json_type(to_json(s.age)),
  s.weight,    json_type(to_json(s.weight)),
  s.is_this,   json_type(to_json(s.is_this))
from (
  select struct(date '2022-01-01' as birthdate, 'UA' as country, 1 as age, 2.5 as weight, true as is_this) s
)
with the output showing the JSON type next to each field (screenshot omitted)
You can try the below approach.
SELECT COLUMN_NAME, DATA_TYPE
FROM `your-project.your-dataset.INFORMATION_SCHEMA.COLUMNS`
WHERE TABLE_NAME = 'your-table-name'
AND COLUMN_NAME = 'your-struct-column-name'
ORDER BY ORDINAL_POSITION
You can check the BigQuery INFORMATION_SCHEMA documentation for more details.
(Screenshots of my test data and of the result using the above syntax omitted.)

How to perform NATURALLEFTOUTERJOIN without having the same name in both tables?

I tried creating the same column in both tables but ended up receiving the error: "An incompatible join column, ('[WeekName]) was detected. 'NATURALLEFTOUTERJOIN' doesn't support joins by using columns with different data types or lineage."
LeftOuterJoin =
NATURALLEFTOUTERJOIN (
    SELECTCOLUMNS (
        GROUPBY (
            DateTime, DateTime[yDayFullName],
            "WEEKCOUNT", COUNTX ( CURRENTGROUP (), DateTime[yDayFullName] )
        ),
        "WeekName", DateTime[yDayFullName],
        "WEEKCOUNT", [WEEKCOUNT]
    ),
    SELECTCOLUMNS (
        GROUPBY (
            FILTER ( Mergetable, Mergetable[noShow] <> "true" ),
            Mergetable[WeekDayName],
            "TOTALDURATION", SUMX ( CURRENTGROUP (), Mergetable[MeetingDurationInHours] )
        ),
        "WeekName", Mergetable[WeekDayName],
        "TOTALDURATION", [TOTALDURATION]
    )
)
Can you please change the code to the following and see if it works (the combined version is shown below):
"WeekName", DateTime[yDayFullName] & ""
and
"WeekName", Mergetable[WeekDayName] & ""
Also, please make sure that DateTime[yDayFullName] and Mergetable[WeekDayName] have the same data type.
I have assumed that both of them are strings and that you are trying to join on WeekName.
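Putting both changes together, the measure would look roughly like this (a sketch assuming both columns are text; appending & "" to each join column forces a fresh lineage, which is why NATURALLEFTOUTERJOIN then accepts the name-based join):
LeftOuterJoin =
NATURALLEFTOUTERJOIN (
    SELECTCOLUMNS (
        GROUPBY (
            DateTime, DateTime[yDayFullName],
            "WEEKCOUNT", COUNTX ( CURRENTGROUP (), DateTime[yDayFullName] )
        ),
        "WeekName", DateTime[yDayFullName] & "",
        "WEEKCOUNT", [WEEKCOUNT]
    ),
    SELECTCOLUMNS (
        GROUPBY (
            FILTER ( Mergetable, Mergetable[noShow] <> "true" ),
            Mergetable[WeekDayName],
            "TOTALDURATION", SUMX ( CURRENTGROUP (), Mergetable[MeetingDurationInHours] )
        ),
        "WeekName", Mergetable[WeekDayName] & "",
        "TOTALDURATION", [TOTALDURATION]
    )
)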

How to see 'full' SQL Error Messages in BigQuery?

I am writing a large MERGE statement in BigQuery.
When I attempt to run this query the validator gives me an error involving a lot of ...'s that hides the useful information as shown below:
Value has type ARRAY<STRUCT<eventName STRING, eventUUID STRING, eventDate DATE, ...>> which cannot be inserted into column Events, which has type ARRAY<STRUCT<eventName STRING, eventUUID STRING, eventDate DATE, ...>> at [535:1]
I am extremely confident these two array objects match exactly; however, since I am struggling to get past this, I would love to see the full error message.
Is there any way to see the full error?
I have looked into the Google Logging tool and cannot see any additional information.
I have also tried the following Cloud Shell command:
bq --format=prettyjson show -j [Job Id Goes Here]
Again, this seems to provide no additional information.
This approach feels pretty silly, but it could be a last resort for a really long nested type.
1. Use INFORMATION_SCHEMA.COLUMNS to get the full string of the target type; in your case, the type of column Events.
2. Use CREATE TABLE <yourDataset>.<yourTempTable> AS SELECT ... to dump one row of the Value into a table, then use step 1 again to see its full type string.
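A sketch of both steps, with placeholder project, dataset, and table names following the style above:
-- Step 1: get the full type string of the target column
SELECT DATA_TYPE
FROM `your-project.your-dataset.INFORMATION_SCHEMA.COLUMNS`
WHERE TABLE_NAME = 'your-target-table'
  AND COLUMN_NAME = 'Events';
-- Step 2: dump one row of the Value side into a temp table,
-- then rerun step 1 against that table and compare the two strings
CREATE TABLE `your-project.your-dataset.your_temp_table` AS
SELECT Events
FROM `your-project.your-dataset.your-source-table`  -- stand-in for the query producing the Value
LIMIT 1;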

Why doesn't the parameter index work?

In the documentation (https://docs.spring.io/spring-data/neo4j/docs/current/reference/html/), {0} is used to reference the parameter movieTitle:
@Query("MATCH (movie:Movie {title={0}}) RETURN movie")
Movie getMovieFromTitle(String movieTitle);
However, in my own code, if I use {title={0}}, IntelliJ always reports a syntax error. I can resolve the issue by changing it to
{title: {movieTitle}}
i.e. I have to use the actual argument name, with a colon, inside the braces.
Is there any trick for this? I don't think the documentation is wrong.
Question 2:
If I want the node label "Movie" to be a parameter, it also shows an error message:
#Query("MATCH (movie:{label} {title={0}}) RETURN movie")
Movie getMovieFromTitle(String movieTitle, String label);
I do not know which version of IntelliJ you are using, but the first query is correct. There is also a test case for this in the spring-data-neo4j project.
It is not possible to use the second query syntax because there is no support for this at the database level, where the query gets executed. If SDN were to support it, the query would have to be parsed (and the pattern replaced) every time the query is executed, and SDN would lose the ability to parse the query once and then just bind the parameter values on subsequent calls. This would lower the performance of annotated query methods.
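For reference, here are both parameter styles side by side as a sketch, assuming the SDN 4.x era of the question (package locations and placeholder syntax differ in later SDN/Neo4j versions, where Cypher uses $param instead of {param}):
// Movie is the domain entity from the question.
import org.springframework.data.neo4j.annotation.Query;
import org.springframework.data.neo4j.repository.GraphRepository;
import org.springframework.data.repository.query.Param;

public interface MovieRepository extends GraphRepository<Movie> {

    // Index-based placeholder, as in the reference documentation
    @Query("MATCH (movie:Movie {title: {0}}) RETURN movie")
    Movie getMovieFromTitle(String movieTitle);

    // Named placeholder; @Param maps the method argument explicitly
    @Query("MATCH (movie:Movie {title: {movieTitle}}) RETURN movie")
    Movie getMovieFromTitleNamed(@Param("movieTitle") String movieTitle);
}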

"operator does not exist character varying = bigint" in GnuHealth project

We are developing a module in Tryton based on GNU Health. We got the following error:
ProgrammingError: operator does not exist: character varying = bigint
Hint: No operator matches the given name and argument type(s). You might need to add explicit type casts.
As best as I can vaguely guess from the limited information provided, in this query:
"SELECT name,age,dob,address FROM TABLENAME WHERE pmrn=%s" % (self.pmrn)
you appear to be doing a string substitution of a value into a query.
First, this is dangerously wrong, and you should never ever do it without an extremely good reason. Always use parameterized queries. psycopg2 supports these, so there's no excuse not to. So do all the other Python interfaces for PostgreSQL, but I'm assuming you're using psycopg2 because basically everyone does, so go read the usage documentation to see how to pass query parameters.
Second, as a result of failing to use parameterized queries, you aren't getting any help from the database driver with datatype handling. You mentioned that pmrn is of type char - for which I assume you really meant varchar; if it's actually char then the database designers need to be taken aside for a firm talking-to. Anyway, if you substitute an unquoted number in there your query is going to look like:
pmrn = 201401270001
and if pmrn is varchar that'll be an error, because you can't compare a text type to a number directly. You must pass the value as text. The simplistic way is to put quotes around it:
pmrn = '201401270001'
but what you should be doing instead is letting psycopg2 take care of all this for you by using parameterized queries. E.g.
curs.execute("SELECT name,age,dob,address FROM TABLENAME WHERE pmrn=%s", (self.pmrn,))
i.e. pass the SQL query as a string, then a 1-tuple containing the query parameters. (You might have to convert self.pmrn to str if it's an int, e.g. str(self.pmrn).)
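For completeness, a minimal end-to-end sketch with psycopg2 (the connection settings are placeholders, and TABLENAME stands in for the real table as in the question):
import psycopg2

# Placeholder connection settings.
conn = psycopg2.connect(dbname="health", user="tryton",
                        password="secret", host="localhost")
curs = conn.cursor()

pmrn = "201401270001"  # keep the value as text, since the column is varchar

# Parameterized query: psycopg2 adapts and quotes the value safely.
curs.execute(
    "SELECT name, age, dob, address FROM TABLENAME WHERE pmrn = %s",
    (pmrn,),
)
row = curs.fetchone()
print(row)

curs.close()
conn.close()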