TO_TIMESTAMP function cannot be resolved in Calcite - apache-calcite

I am using the Calcite library to query my data through its API. I found a function called TO_TIMESTAMP at https://calcite.apache.org/docs/reference.html, but when I call it I get an exception saying there is no function with this signature.
What I am trying to do:
SELECT TO_TIMESTAMP(cast(ORGINALTIMESTAMP as varchar),cast('yyyy-MM-dd HH:mm:ss' as varchar)) as TEST from my_table
The exception that I get:
No match found for function signature TO_TIMESTAMP(<CHARACTER>, <CHARACTER>)
Any idea what I am doing wrong?

To solve the problem, include fun=postgresql in the JDBC connect string that you use to connect to Calcite, which starts with jdbc:calcite:.
In Calcite, TO_TIMESTAMP is a dialect-specific operator: it is not part of the default (standard) operator set, which is why it is listed in Calcite's table of dialect-specific operators.
The 'c' (compatibility) column for TO_TIMESTAMP contains the values 'o p', meaning that TO_TIMESTAMP is enabled in Oracle and PostgreSQL function tables. That is why fun=postgresql solves the problem; fun=oracle would also work.
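For illustration, a minimal sketch of a connect string with the PostgreSQL function library enabled; the model path is a placeholder for your own schema definition, and the query is kept exactly as in the question (the CAST around the format literal is not actually needed; check the reference page for the format elements Calcite accepts):
-- connect string (model path is a placeholder):
--   jdbc:calcite:model=/path/to/model.json;fun=postgresql
SELECT TO_TIMESTAMP(CAST(ORGINALTIMESTAMP AS VARCHAR), 'yyyy-MM-dd HH:mm:ss') AS TEST
FROM my_table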

Related

How do I insert a time value into a time column in Redshift?

Given the following statement:
ALTER TABLE public.alldatatypes ADD x_time time;
how do I insert a value into x_time?
Time appears to be a valid column type according to the documentation.
https://docs.aws.amazon.com/redshift/latest/dg/r_Datetime_types.html#r_Datetime_types-time
https://docs.aws.amazon.com/redshift/latest/dg/c_Supported_data_types.html
However, when I try to do an insert, I always get an error.
Insert query: insert into public.alldatatypes(x_time) values('08:00:00');
Error:
SQL Error [500310] [0A000]: Amazon Invalid operation:
Specified types or functions (one per INFO message) not supported on
Redshift tables.;
I do not want to use another column type.
I am testing all the column types defined in the documentation.
That cryptic error message is the one Redshift gives when you try to use a leader-node-only function as source data for a compute node. So I expect you aren't showing the exact code you ran to generate this error. I know it can seem like you didn't change anything important to the issue, but you likely have.
You see, select now(); works just fine, but insert into <table> select now(); will give the error you are showing. This is because now() is a leader-node-only function. However, insert into <table> select getdate(); works great - this is because getdate() is a function that runs on compute nodes.
Now the following SQL runs just fine for me:
create table fred (ttt time);
insert into public.fred(ttt) values('01:23:00'); -- this is more correctly written as values('01:23:00'::time)
insert into public.fred(ttt) select getdate()::time;
select * from fred;
While this throws the error you are getting:
insert into public.fred(ttt) select now()::time;
So if this doesn't help clear things up please post a complete test case that demonstrates the error.

Classic report issue with multiple inputs with IN statement

I am trying to refresh a classic report with a dynamic action and get the following errors:
{'dialogue': {'uv': true, 'line': [{'V': "Failure of widget
ORA-20876: Stop APEX Engine
classic_report"}]}}
I think it's an issue with the bind variable being a string, which the ST.ID IN (:P11_ROW_PK) condition in the SQL query can't handle.
Please suggest a workaround.
This question requires the context you've provided in
https://stackoverflow.com/a/63627447/527513
If P11_ROW_PK is a delimited list of IDs, then you must structure your query accordingly, rather than expecting the IN clause to deconstruct a bind variable that contains a single string.
Try this instead:
select * from your_table st
where st.id in (select column_value from table(apex_string.split(:P11_ROW_PK)))
If you are on an older APEX version that does not have APEX_STRING, an equivalent predicate can be built with REGEXP_LIKE (the column and item names below follow the linked answer):
where REGEXP_LIKE(CUSTOMER_ID, '^(' || REPLACE(:P4_SEARCH, ',', '|') || ')$')
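As a fuller sketch, assuming P11_ROW_PK holds a colon-delimited list of numeric IDs (the usual format for APEX multi-value page items) and my_table stands in for your report's base table, the split approach could look like this; the delimiter is passed explicitly, since apex_string.split does not default to a colon:
select st.*
from   my_table st
where  st.id in (
         select to_number(column_value)   -- IDs assumed numeric
         from   table(apex_string.split(:P11_ROW_PK, ':'))
       )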

Is there a way to use the value from the CUST_DDA port as an input port for a lookup?

I'm trying to use a Lookup transformation to extract ACCT_ID from the ACCT table based on the port CUST_DDA, which is an output port from an Expression transformation.
I'm using a SQL override as below. The initial lookup condition:
SUBSTR_ACCT_ID = IN_CUST_DDA
Override:
SELECT
ACCT.ACCT_ID as ACCT_ID,
ACCT.ALT_ACCT_ID as ALT_ACCT_ID,
substr(acct.acct_id,-1*(length(IN_CUST_DDA))) as SUBSTR_ACCT_ID
FROM ACCT
WHERE ACCT.ALT_ACCT_ID LIKE '%'||TO_CHAR(IN_CUST_DDA)
AND ACCT.ACCT_ID LIKE '%'||TO_CHAR(IN_CUST_DDA)
The above SQL override fails with the error: ORA-00904: "IN_CUST_DDA": invalid identifier
Is there a way to use the value from the CUST_DDA port as an input port for the lookup? CUST_DDA is not a field that belongs to the ACCT table. Is there a way to do this?
Thanks.
From the override I can see that you are trying to convert IN_CUST_DDA into CHAR, and at the same time you're using IN_CUST_DDA inside the LENGTH function.
The LENGTH function might be part of the issue, because LENGTH expects a string.
In order to use CUST_DDA (from the source) in your lookup override, you need to join the lookup table with the source on a common field inside the override.
You can't use the port in the way you mentioned. When you run the workflow, the Informatica Integration Service runs the lookup override query in the database and loads the data into the cache file (that is why you are receiving the error "IN_CUST_DDA": invalid identifier). Once the cache file is ready, it applies the lookup conditions and then returns the output for you.
Let me know if this is not clear.
Regards
Raj
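A rough sketch of what such a join-based override might look like, assuming the source rows are staged in a table reachable from the lookup database (CUST_SRC and the CUST_ID join key are purely hypothetical names for this illustration):
SELECT
ACCT.ACCT_ID AS ACCT_ID,
ACCT.ALT_ACCT_ID AS ALT_ACCT_ID,
SUBSTR(ACCT.ACCT_ID, -1 * LENGTH(SRC.CUST_DDA)) AS SUBSTR_ACCT_ID
FROM ACCT
JOIN CUST_SRC SRC ON SRC.CUST_ID = ACCT.CUST_ID  -- hypothetical common field
WHERE ACCT.ALT_ACCT_ID LIKE '%' || TO_CHAR(SRC.CUST_DDA)
AND ACCT.ACCT_ID LIKE '%' || TO_CHAR(SRC.CUST_DDA)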
To achieve this you need to configure your lookup as non-cached, so the query will be executed for each input row. Note that this degrades performance a lot.
Next, you need to use slightly different syntax, enclosing the input port in question marks. In your case it should be something like this (it might need a little adjustment):
SELECT
ACCT.ACCT_ID as ACCT_ID,
ACCT.ALT_ACCT_ID as ALT_ACCT_ID,
substr(acct.acct_id,-1*(length(?IN_CUST_DDA?))) as SUBSTR_ACCT_ID
FROM ACCT
WHERE ACCT.ALT_ACCT_ID LIKE '%'||TO_CHAR(?IN_CUST_DDA?)
AND ACCT.ACCT_ID LIKE '%'||TO_CHAR(?IN_CUST_DDA?)

Amazon Athena: no viable alternative at input

While creating a table in Athena, it gives me the following exception:
no viable alternative at input
Hyphens are not allowed in the table name (though the wizard allows them). Just remove the hyphen and it works like a charm.
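For illustration (table and bucket names here are made up), the hyphenated name fails while the underscore version is accepted:
-- fails with "no viable alternative at input"
CREATE EXTERNAL TABLE my-table (id string) LOCATION 's3://my-bucket/some/path/';
-- works
CREATE EXTERNAL TABLE my_table (id string) LOCATION 's3://my-bucket/some/path/';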
Unfortunately, at the moment the syntax validation error messages in Athena are not very descriptive; this error may mean almost any possible syntax error in the CREATE TABLE statement.
Although this is annoying, for now you will need to check whether the syntax follows the Create Table documentation.
Some examples are:
Backticks not in place (as already pointed out; see the sketch just after this list)
Missing/extra commas (remember that the last column doesn't need a comma after its definition)
Missing spaces
More ..
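As a sketch of backtick placement (all names here are made up): identifiers can be wrapped in backticks, while string literals such as the S3 location stay in single quotes:
CREATE EXTERNAL TABLE `my_db`.`my_table` (
`one` string,
`two` string
) LOCATION 's3://my-bucket/some/path/';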
This error generally occurs when the DDL contains some silly syntax mistake. There are several answers here that explain different errors depending on their cause. The simple solution to this problem is to patiently read through the DDL and verify the following points line by line:
Check for missing commas
Unbalanced ` (backtick) characters
Data types not supported by Hive (see the Hive data types reference)
Unbalanced commas
Hyphens in the table name
In my case, it was because of a trailing comma after the last column in the table. For example:
CREATE EXTERNAL TABLE IF NOT EXISTS my_table (
one STRING,
two STRING,
) LOCATION 's3://my-bucket/some/path';
After I removed the comma at the end of two STRING, it worked fine.
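For reference, the statement that worked once the trailing comma was removed:
CREATE EXTERNAL TABLE IF NOT EXISTS my_table (
one STRING,
two STRING
) LOCATION 's3://my-bucket/some/path';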
My case: it was an external table and the location had a typo (hence didn't exist)
Couple of tips:
Click the "Format query" button so you can spot errors easily
Use the example at the bottom of the documentation - it works - and modify it with your parameters: https://docs.aws.amazon.com/athena/latest/ug/create-table.html
Slashes. Mine was slashes. I had the DDL from Athena, saved as a Python string.
WITH SERDEPROPERTIES (
'escapeChar'='\\',
'quoteChar'='\"',
'separatorChar'=',')
was changed to
WITH SERDEPROPERTIES (
'escapeChar'='\',
'quoteChar'='"',
'separatorChar'=',')
And everything fell apart.
Had to make it:
WITH SERDEPROPERTIES (
'escapeChar'='\\\\',
'quoteChar'='\\\"',
'separatorChar'=',')
In my case, it was an extra comma in the PARTITIONED BY section.
In my case, I was missing the single quotes around the S3 URL.
In my case, it was that one of the table column names was enclosed in single quotes ('bucket'), as per the AWS documentation :(
As other users have noted, the standard syntax validation error message that Athena provides is not particularly helpful. Thoroughly checking the required DDL syntax (see the Hive data types reference) as other users have suggested can be pretty tedious, since it is fairly extensive.
So, an additional troubleshooting trick is to let AWS's own data parsing engine (AWS Glue) give you a hint about where your DDL may be off. The idea here is to let AWS Glue parse the data using its own internal rules and then show you where you may have made your mistake.
Specifically, here are the steps that worked for me to troubleshoot my DDL statement, which was giving me lots of trouble:
create a data crawler in AWS Glue; AWS and lots of other places go through the very detailed steps this requires so I won't repeat it here
point the crawler to the same data that you wanted (but failed) to upload into Athena
set the crawler output to a table (in an Athena database you've already created)
run the crawler and wait for the table with populated data to be created
find the newly-created table in the Athena Query Editor tab, click on the three vertical dots (...), and select "Generate Create Table DDL":
this will make Athena create the DDL for this table, which is guaranteed to be valid (since the table was already created using that DDL)
take a look at this DDL and see if/where/how it differs from the DDL that you originally wrote. Naturally, this automatically-generated DDL will not have the exact data type choices that you may find useful, but at least you will know that it is 100% valid
finally, update your DDL based on this new Glue/Athena-generated DDL, adjusting the column/field names and data types for your particular use case (a rough illustration of such a generated DDL follows below)
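The generated statement will look roughly like the following; this is only an illustration for a simple CSV dataset, and the table name, columns, SerDe, and properties will differ for your data:
CREATE EXTERNAL TABLE `my_table` (
`col_0` string,
`col_1` bigint
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
WITH SERDEPROPERTIES ('field.delim' = ',')
LOCATION 's3://my-bucket/some/path/'
TBLPROPERTIES ('classification' = 'csv')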
After searching and following all the good answers here, my issue was that, working in Node.js, I needed to remove the optional
ESCAPED BY '\' used in the row settings to get my query to work. Hope this helps others.
Something that wasn't obvious to me the first time I used the UI: if you get an error in the create-table wizard, you can cancel it, and the query that failed should appear in a new query window for you to edit and fix.
My database name had a hyphen, so I wrapped it in backticks in the query and reran it.
This happened to me due to having comments in the query.
I realized this was a possibility when I tried the "Format Query" button and it turned the entire thing into almost 1 line, mostly commented out. My guess is that the query parser runs this formatter before sending the query to Athena.
Removed the comments, ran the query, and an angel got its wings!

How to insert data into RDF data source in WSO2 DSS

I can query data using a SPARQL query as explained here; however, when I try to write an INSERT statement in SPARQL like the one below:
PREFIX space: <http://purl.org/net/schemas/space/>
PREFIX relevance: <http://a9.com/-/opensearch/extensions/relevance/1.0/>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX dc: <http://purl.org/dc/elements/1.1/>
INSERT DATA
{
<http://nasa.dataincubator.org/spacecraft/1968-009B> space:internationalDesignator "1968-009B" .
}
DSS throws this exception:
Nested Exception:-
com.hp.hpl.jena.query.QueryParseException: Lexical error at line 10, column 101. Encountered: " " (32), after : "INSERT"
Since I can write INSERT statements against an RDBMS data source, I assumed the RDF data source also supports insert functionality.
Could you help me solve this?
By the looks of it, I feel that the problem is with the SPARQL query itself. Although I'm aware that the query is syntactically correct and conforms to the SPARQL specification, I wonder whether the Apache Jena version used in DSS supports the "INSERT DATA" syntax (just a wild guess from analyzing the reported error log). Can you try the "INSERT (INTO)" clause and check whether it works? Ideally, DSS doesn't make any modifications to the query apart from input/output mapping processing, so if your query format is right it should work out of the box.
Cheers,
Prabath
Insert functionality is not yet supported in WSO2 DSS.