JPQL parser fails to parse a named query with JOIN FETCH in nested subqueries - jpa-2.0

I am trying to execute the named query below, but the JPQLParser fails to parse it. The query parses if I remove the two bold lines containing JOIN FETCH.
I am using the JPA 2.x implementation shipped with WebSphere Application Server 8.
Both business objects being JOIN FETCHed, "applicationItem" and "application", are mapped as LAZY fetch inside the business classes "AdmissionItem" and "ApplicationItem".
The query below runs absolutely fine on JPA 1.x but fails on JPA 2.x.
Named Query
"SELECT DISTINCT request FROM Request request
WHERE request.genId IN
(
    SELECT DISTINCT applicationItem.request.genId
    FROM ApplicationItem applicationItem
    WHERE applicationItem.genId IN
    (
        SELECT DISTINCT admissionItem.applicationItem.genId
        FROM AdmissionItem admissionItem
        **JOIN FETCH admissionItem.applicationItem
        JOIN FETCH admissionItem.applicationItem.application**
        WHERE admissionItem.genId IN
        (
            // more query
        )
        AND admissionItem.applicationItem.application.name = 'SMA'
    )
)"
ERROR MESSAGE
org.apache.openjpa.persistence.ArgumentException: "Encountered "request . genId IN ( SELECT DISTINCT applicationItem . request . genId FROM ApplicationItem applicationItem WHERE applicationItem . genId IN ( SELECT DISTINCT admissionItem . applicationItem . genId FROM AdmissionItem admissionItem JOIN FETCH" at character 31, but expected: ["(", ")", "*", "+", ",", "-", ".", "/", ":", "<", "<=", "<>", "=", ">", ">=", "?", "ABS", "ALL", "AND", "ANY", "AS", "ASC", "AVG", "BETWEEN", "BOTH", "BY", "CONCAT", "COUNT", "CURRENT_DATE", "CURRENT_TIME", "CURRENT_TIMESTAMP", "DELETE", "DESC", "DISTINCT", "EMPTY", "ESCAPE", "EXISTS", "FETCH", "FROM", "GROUP", "HAVING", "IN", "INNER", "IS", "JOIN", "LEADING", "LEFT", "LENGTH", "LIKE", "LOCATE", "LOWER", "MAX", "MEMBER", "MIN", "MOD", "NEW", "NOT", "NULL", "OBJECT", "OF", "OR", "ORDER", "OUTER", "SELECT", "SET", "SIZE", "SOME", "SQRT", "SUBSTRING", "SUM", "TRAILING", "TRIM", "TYPE", "UPDATE", "UPPER", "WHERE", , , , , , , , , ]." while parsing JPQL
at org.apache.openjpa.kernel.jpql.JPQLParser.parse(JPQLParser.java:51)


Oracle Apex 22.21 - REST data source - nested JSON array - discovery

I need to get the APEX REST Data Source to parse my JSON, which has a nested array. I've read that nested JSON arrays are not supported, but there must be a way.
I have a REST API that returns data as JSON, as per below. In APEX, I've created a REST data source following the tutorial on this Oracle blog link.
However, Auto-Discovery does not 'discover' the nested array; it only returns the root-level data.
[
  {
    "order_number": "so1223",
    "order_date": "2022-07-01",
    "full_name": "Carny Coulter",
    "email": "ccoulter2#ovh.net",
    "credit_card": "3545556133694494",
    "city": "Myhiya",
    "state": "CA",
    "zip_code": "12345",
    "lines": [
      {
        "product": "Beans - Fava, Canned",
        "quantity": 1,
        "price": 1.99
      },
      {
        "product": "Edible Flower - Mixed",
        "quantity": 1,
        "price": 1.50
      }
    ]
  },
  {
    "order_number": "so2244",
    "order_date": "2022-12-28",
    "full_name": "Liam Shawcross",
    "email": "lshawcross5#exblog.jp",
    "credit_card": "6331104669953298",
    "city": "Humaitá",
    "state": "NY",
    "zip_code": "98670",
    "lines": [
      {
        "order_id": 5,
        "product": "Beans - Green",
        "quantity": 2,
        "price": 4.33
      },
      {
        "order_id": 1,
        "product": "Grapefruit - Pink",
        "quantity": 5,
        "price": 5.00
      }
    ]
  }
]
So in the JSON above, it only 'discovers' order_number through zip_code. The 'lines' array, with the attributes order_id, product, quantity, & price, does not get 'discovered'.
I found this SO question in which Carsten instructs to create the REST Data Source manually. I've tried changing the Row Selector to "." (a dot) and leaving it blank; both still return only the root-level data.
Changing the Row Selector to 'lines' returns only one array element for each 'lines'.
So in the JSON example above, it would only 'discover':
{
  "product": "Beans - Fava, Canned",
  "quantity": 1,
  "price": 1.99
}
{
  "order_id": 5,
  "product": "Beans - Green",
  "quantity": 2,
  "price": 4.33
}
and not the complete array.
This is how the Data Profile is set up when creating the Data Source manually.
There's another SO question with a similar situation, so I followed some of its steps, such as selecting the JSON Document data type for 'lines'. I feel I've tried almost every selector & data type, but obviously not enough.
The docs are not very helpful on this subject, and it's been difficult finding anything on Google, Oracle blogs, or SO.
My end goal would be to have two tables as below auto synchronizing from the API.
orders
  id            pk
  order_number  num
  order_date    date
  full_name     vc(200)
  email         vc(200)
  credit_card   num
  city          vc(200)
  state         vc(200)
  zip_code      num
lines
  order_id      fk -> orders
  product       vc(200)
  quantity      num
  price         num
view orders_view (orders + lines)
As you're correctly stating, REST Data Sources do not support nested arrays - a REST Source can only "extract" one flat table from the JSON response. In your example, the JSON as such is an array ("orders"). The Row Selector in the Data Profile would thus be "." (to select the "root node").
That gives you all the order attributes, but discovery skips the lines array. However, you can manually add a column to the Data Profile, with the JSON Document data type and lines as the selector.
As a result, you still get a flat table from the REST Data Source, but that table contains a LINES column holding the "JSON fragment" with the order line items. You could then synchronize the REST Source to a local table ("REST Synchronization") and use some custom code to extract the JSON fragments into an ORDER_LINES child table.
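In the database that extraction step would typically be PL/SQL with JSON_TABLE; purely to illustrate the logic, here is a sketch in Python, where the order_row dict and its column names are hypothetical stand-ins for one synchronized ORDERS row:

```python
import json

# hypothetical shape of one synchronized ORDERS row: scalar columns plus the
# LINES column holding the JSON fragment for that order's line items
order_row = {
    "order_number": "so1223",
    "lines": '[{"product": "Beans - Fava, Canned", "quantity": 1, "price": 1.99},'
             ' {"product": "Edible Flower - Mixed", "quantity": 1, "price": 1.50}]',
}

# explode the fragment into ORDER_LINES child rows keyed by the parent order
child_rows = [
    {"order_number": order_row["order_number"], **line}
    for line in json.loads(order_row["lines"])
]
```

Each element of child_rows then maps one-to-one onto a row of the ORDER_LINES table, carrying the parent key alongside the line-item attributes.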
Does that help?

Adaptive Card Input.Date

How do I set today's date as the minimum value in the Input.Date element of an Adaptive Card?
When a user selects a date, all backdates & previous dates need to be blocked; he should only be able to select dates from today onwards.
I used:
{
    "type": "Input.Date",
    "label": "Date",
    "id": "ipDate",
    "isRequired": true,
    "errorMessage": "Please enter Date",
    "separator": true,
    "min": "LocalTimestamp (Date(YYYY-MM-DD))"
}
But it is not working.
Can anyone guide me on what expression I should use for the min value?
I want it like this -->
Try the utcNow() function.
Here is an example; the minimum and default date is today.
If you need some offset you can use addDays etc.
Check: https://learn.microsoft.com/en-us/azure/bot-service/adaptive-expressions/adaptive-expressions-prebuilt-functions?view=azure-bot-service-4.0#date-and-time-functions
{
    "id": "startDate",
    "type": "Input.Date",
    "value": "${substring(utcNow(),0,10)}",
    "min": "${substring(utcNow(),0,10)}",
    "errorMessage": "Date cannot be empty"
},
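The ${substring(utcNow(),0,10)} expression takes the first 10 characters of the ISO-8601 timestamp, i.e. today's date as YYYY-MM-DD, which is the format Input.Date expects for min and value. The same slice in Python, just to show what the card receives:

```python
from datetime import datetime, timezone

# utcNow() yields an ISO-8601 timestamp such as 2022-12-28T10:15:00+00:00;
# the first 10 characters are today's date as YYYY-MM-DD
today = datetime.now(timezone.utc).isoformat()[:10]
```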

AWS Athena table creation fails with “no viable alternative at input 'create external'”

This is my first attempt with Athena, please be gentle :)
This query is failing with the error -> no viable alternative
CREATE EXTERNAL TABLE IF NOT EXISTS dev.mytokendata (
'dateandtime' timestamp, 'requestid' string,
'ip' string, 'caller' string, 'token' string, 'requesttime' int,
'httpmethod' string, 'resourcepath' string, 'status' smallint,
'protocol' string, 'responselength' int )
ROW FORMAT SERDE 'com.amazonaws.glue.serde.GrokSerDe'
WITH SERDEPROPERTIES (
'input.format'='%{TIMESTAMP_ISO8601:dateandtime}
\s+\{\"requestId\":\s+\"%{USERNAME:requestid}\",
\s+\"ip\":\s+\"%{IP:ip}\",
\s+\"caller\":\s+\"%{USERNAME:caller}\",
\s+\"token\":\s+\"%{USERNAME:token}\",
\s+\"requestTime\":\s+\"%{INT:requesttime}\",
\s+\"httpMethod\":\s+\"%{WORD:httpmethod}\",
\s+\"resourcePath\":\s+\"%{UNIXPATH:resourcepath}\",
\s+\"status\":\s+\"%{INT:status}\",
\s+\"protocol\":\s+\"%{UNIXPATH:protocol}\",
\s+\"responseLength:\"\s+\"%{INT:responselength}\"\s+\}' )
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://athena-abc/1234/'
TBLPROPERTIES ('has_encrypted_data'='false', 'compressionType'='gzip');
This is a line from the log file which I am trying to parse (a .gz file):
2018-07-30T02:23:34.134Z { "requestId":
"810000-9100-1100-a100-f100000", "ip": "01.01.01.001", "caller": "-",
"token": "1234-5678-78910-abcd", "requestTime":
"1002917414000", "httpMethod": "POST", "resourcePath":
"/MyApp/v1.0/MyService.wsdl",
"status": "200", "protocol": "HTTP/1.1", "responseLength": "1000" }
Can anyone please point out what may be wrong? It would be of great help.
You have used apostrophes unnecessarily in the column names, and there was also a problem with the escape characters.
The correct version:
CREATE EXTERNAL TABLE mytokendata (
`dateandtime` timestamp,
`requestid` string,
`ip` string,
`caller` string,
`token` string,
`requesttime` int,
`httpmethod` string,
`resourcepath` string,
`status` smallint,
`protocol` string,
`responselength` int )
ROW FORMAT SERDE 'com.amazonaws.glue.serde.GrokSerDe'
WITH SERDEPROPERTIES (
'input.format'='%{TIMESTAMP_ISO8601:dateandtime}
\\s+\\{\"requestId\":\\s+\"%{USERNAME:requestid}\",
\\s+\"ip\":\\s+\"%{IP:ip}\",
\\s+\"caller\":\\s+\"%{USERNAME:caller}\",
\\s+\"token\":\\s+\"%{USERNAME:token}\",
\\s+\"requestTime\":\\s+\"%{INT:requesttime}\",
\\s+\"httpMethod\":\\s+\"%{WORD:httpmethod}\",
\\s+\"resourcePath\":\\s+\"%{UNIXPATH:resourcepath}\",
\\s+\"status\":\\s+\"%{INT:status}\",
\\s+\"protocol\":\\s+\"%{UNIXPATH:protocol}\",
\\s+\"responseLength:\"\\s+\"%{INT:responselength}\"\\s+\\}' )
STORED AS INPUTFORMAT 'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION 's3://athena-abc/1234/'
TBLPROPERTIES ('has_encrypted_data'='false', 'compressionType'='gzip');
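The backslash doubling is the usual escape-character story: the SerDe property is a string literal in which \ itself is an escape, so '\\s' in the DDL reaches the Grok/regex layer as the token \s. The same thing happens in any language with escaped string literals; a quick illustration in Python:

```python
import re

# in a non-raw string literal the two characters \\ collapse into a single
# backslash, so "\\s+" denotes the regex whitespace token \s+
pattern = "\\s+"
assert pattern == r"\s+"
assert re.fullmatch(pattern, "   ") is not None
```

A single backslash would be swallowed by the string literal before the pattern engine ever sees it, which is exactly what broke the original DDL.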

Regular expression in postgresql

I have the table mytable with the column images, where I store strings like JSON objects.
In some records this column contains an invalid string: not because the string itself is incorrect, but because the query fails when I try to cast it to JSON. Example:
`SELECT images::JSON->0 FROM mytable WHERE <any filter>`
If all elements of the JSON object are good, that query works successfully, but if some string has a " in an incorrect place (to be specific, in this case in the title key) the error happens.
Good strings are like this:
[
  {
    "imagen": "http://www.myExample.com/asd1.png",
    "amazon": "http://amazonExample.com/asd1.jpg",
    "title": "A title 1."
  },
  {
    "imagen": "http://www.myExample.com/asd2.png",
    "amazon": "http://amazonExample.com/asd2.jpg",
    "title": "A title 2."
  },
  {
    "imagen": "http://www.myExample.com/asd3.png",
    "amazon": "http://amazonExample.com/asd3.jpg",
    "title": "A title 3."
  }
]
Bad are like this:
[
  {
    "imagen": "http://www.myExample.com/asd1.png",
    "amazon": "http://amazonExample.com/asd1.jpg",
    "title": "A "title" 1."
  },
  {
    "imagen": "http://www.myExample.com/asd2.png",
    "amazon": "http://amazonExample.com/asd2.jpg",
    "title": "A title 2."
  },
  {
    "imagen": "http://www.myExample.com/asd3.png",
    "amazon": "http://amazonExample.com/asd3.jpg",
    "title": "A title 3."
  }
]
Please pay attention to the difference in the title keys.
I need a regular expression to convert bad strings into good ones in PostgreSQL.
It would be very complicated, if possible at all, to do this in one regexp, but it is very easy to do in two or more.
For example, replace all the double quotes with \", and then replace {\" with {", \":\" with ":", \",\" with ",", and \"} with "}. The quotes that remain escaped at the end are the ones that were breaking the JSON.
Alternatively, replace "(?=[^}:]*"[\s]*}) (which matches the quotes in title only) with \", and then replace ":\" with ":". See details: https://regex101.com/r/pB6rD9/1
Crafting a replace that could do this in one go would require lookbehinds, and I suppose that PostgreSQL does not support them.
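The escape-then-restore recipe can be sanity-checked outside PostgreSQL before turning each step into a regexp_replace call. A sketch in Python, using a trimmed-down sample string and restore patterns that allow optional whitespace around : and , (the spacing in the stored strings is an assumption):

```python
import json
import re

bad = '[{"imagen": "http://www.myExample.com/asd1.png", "title": "A "title" 1."}]'

# step 1: escape every double quote
s = bad.replace('"', '\\"')

# step 2: restore the structural quotes next to braces and around the : and ,
# separators; only the stray quotes inside values remain escaped
s = re.sub(r'\{\\"', '{"', s)
s = re.sub(r'\\"\s*:\s*\\"', '": "', s)
s = re.sub(r'\\"\s*,\s*\\"', '", "', s)
s = re.sub(r'\\"\s*\}', '"}', s)

# the result now parses, with the inner quotes preserved as \" escapes
parsed = json.loads(s)
```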

Notepad++ and regex to put a bunch of words into [ ], each word in quotation marks and separated by commas

I have a list, a thousand rows like this:
"Categories": "Action, Adventure, Comedy, Fantasy",
"Categories": "Action, Adventure",
"Categories": "Action, Adventure, Comedy, Drama,Fantasy, Martial Arts, Mystery, Supernatural",
"Categories": "Action,Adventure, Comedy, Fantasy,Psychological, School Life, Supernatural",
and I'd like to turn it into this:
"Categories": ["Action", "Adventure", "Comedy", "Fantasy"]
"Categories": ["Action", "Adventure"]
"Categories": ["Action", "Adventure", "Comedy", "Drama", "Fantasy", "Mystery", "Supernatural"]
"Categories": ["Action", "Adventure", "Comedy", "Fantasy", "Psychological", "Supernatural"]
I've tried a bunch of regular expressions, such as
("Categories":) "(\b.*?), (\b.*?), (.*), (.*), (\w+?)",
and I'm still stuck, because I'm still green at this stuff.
Please help me solve this with regex, and thank you for the answer.
In two steps:
Step 1: replace the string with an array of strings (this pattern only matches when there is more than one item):
search: "Categories":\s*\K("[^",]*+[^"]+")
replace: [$1]
Step 2: replace all the commas inside the string:
search: (\G(?!^)|"Categories":\s*\[")[^",]+?\K\s*,\s*
replace: ", "
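Notepad++'s Boost engine supports \K and \G, which most other regex flavors do not, so for sanity-checking the intended output it can help to reproduce the transformation in ordinary code. A sketch in Python, using a single callback replacement instead of the two-step \K/\G trick:

```python
import re

def to_array(line):
    # quote each comma-separated item and wrap the whole list in [ ... ],
    # dropping the trailing comma after the original quoted string
    def repl(m):
        items = [item.strip() for item in m.group(1).split(',')]
        return '"Categories": [' + ', '.join('"{}"'.format(item) for item in items) + ']'
    return re.sub(r'"Categories":\s*"([^"]*)",?', repl, line)

print(to_array('"Categories": "Action, Adventure",'))
# -> "Categories": ["Action", "Adventure"]
```

The strip() call also normalizes the inconsistent "Drama,Fantasy" spacing from the source rows.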
Try:
pattern: ("Categories":) ("[^"]*")
substitute with: $1[$2]
bye