Wrong values when passing a list as a query parameter using JPA 2.0

I'm using the criteria object for my queries. One of the parameters I am using is a list of Strings (for example [10, 11, 15]).
For some reason, when the SQL query is generated, the values are now 15, 11, 15.
From the code:
// I have a method that receives a String[] types as a parameter
// inside the method:
Criteria criteria = new Criteria();
List<String> listTypes = Arrays.asList(types);
criteria.setParameter("types", listTypes);
// and the query
select a from table a where a.types in (:types)
Has anyone had the same issue? Why do the values get changed?
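For reference, the usual JPA 2.0 way of binding a collection parameter looks roughly like the sketch below. This is not the asker's actual code: Account, its String field type, and the repository class are made-up names used only to illustrate the pattern.

// Minimal sketch, assuming an injected EntityManager and an entity class Account
// with a String field "type"; all names here are illustrative.
import java.util.Arrays;
import java.util.List;
import javax.persistence.EntityManager;
import javax.persistence.TypedQuery;

public class AccountRepository {
    private final EntityManager em;

    public AccountRepository(EntityManager em) {
        this.em = em;
    }

    public List<Account> findByTypes(String... types) {
        List<String> listTypes = Arrays.asList(types);
        TypedQuery<Account> query = em.createQuery(
            "select a from Account a where a.type in (:types)", Account.class);
        // JPA expands the bound collection into the IN list when the query runs.
        query.setParameter("types", listTypes);
        return query.getResultList();
    }
}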

Related

Can we create Dynamic Date table in mapping Data Flow?

I have a query in Power BI that takes two parameters: Start Date and End Date.
Whenever I pass these dates, it returns a table of dates that contains a few columns created according to this date range, such as Date, QuarterofYear, Year, MonthName, etc.
Can we create a mapping data flow in ADF that takes two parameters as input and returns a calculated table according to the provided dates?
Is there any function that returns a range of dates?
Regarding your request: "I want to pass two dates, Start Date and End Date, to an ADF Mapping Data Flow, and the data flow will create a column such as "Date" that contains that number of date rows. Is there any function for this? Example: Start Date = 20-01-2019, End Date = 20-01-2020, then the Date column values should be 20-01-2019, 21-01-2019, ..., 20-02-2020." According to the Data Factory documentation and my experience, the answer is no, we can't achieve this in Data Flow.
There is a solution to this, but it is a bit tricky.
TL;DR
The general data flow looks like this:
We need a dummy source with exactly one row; its content does not matter.
Then we derive a column where we use the mapLoop() expression to create an array of all the dates we want to get rows for.
Finally, we need to flatten the array column which will result in one row per array entry and thus one row per date.
Walkthrough
Source dummy
Each dataflow needs a source and we need exactly one row to make our dataflow work. To achieve this I've created a dataset called empty of type CSV in my data lake which has this content:
empty
""
This is our source definition:
And its result looks like this:
Derived column days
This is where the magic happens!
We create a new column dates which is an array of all the dates we want to have in our date table:
In this scenario we want a date table starting on 2019-01-01 and reaching one year into the future. The full expression looks like this:
mapLoop(
    addDays(currentDate(), 365) - toDate('2019-01-01'),
    addDays(toDate('2019-01-01'), #index)
)
This is what happens here:
the mapLoop() function builds an array of elements. You specify the number of elements you want and a lambda expression that calculates each element. The related mapIndex() function works the same way on an existing array; for example, mapIndex([1, 2, 3, 4], #item + 2 + #index) results in [4, 6, 8, 10] because #index is 1-based.
addDays(currentDate(), 365) - toDate('2019-01-01') is the number of days between our start date (2019-01-01) and end date (one year into the future from now), and thus the number of dates we want in the resulting array.
addDays(toDate('2019-01-01'), #index) calculates each array item by adding #index days to our start date. This is evaluated once per array position, with #index being the (1-based) position, so the first element of the array is 2019-01-01 + 1 day, the second 2019-01-01 + 2 days, and so on. The Java sketch below mirrors this arithmetic.
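For intuition only, here is the same arithmetic in plain Java. This does not run inside a data flow; it just mirrors what the mapLoop() expression computes.

import java.time.LocalDate;
import java.time.temporal.ChronoUnit;
import java.util.ArrayList;
import java.util.List;

public class DateRangeSketch {
    public static void main(String[] args) {
        LocalDate start = LocalDate.of(2019, 1, 1);
        LocalDate end = LocalDate.now().plusDays(365);
        // addDays(currentDate(), 365) - toDate('2019-01-01'): the number of array elements
        long days = ChronoUnit.DAYS.between(start, end);
        List<LocalDate> dates = new ArrayList<>();
        // #index is 1-based, so the first generated date is start + 1 day
        for (long index = 1; index <= days; index++) {
            dates.add(start.plusDays(index));
        }
        System.out.println(dates.get(0) + " .. " + dates.get(dates.size() - 1));
    }
}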
Our stream now has these columns:
Flatten
Finally, you need a flatten transformation, which will expand each item in the array into its own row. We can also drop the now-useless empty column in this step:
And this finally results in what we wanted to achieve:
References
Data transformation expressions in mapping data flow

Odbc null value binding for where clause

I'm trying to achieve batch deleting of rows with ODBC binding. What I'm doing is: bind all columns with arrays of values and then execute the statement.
Let's say I've got the query Delete From Table Where primary_key=? and second_primary_key=?, and I then bind the values (1, 2, 3, 4) for the first column and (4, 3, 2, 1) for the second.
This works perfectly fine until I stumble upon a column with nullable values. For those, if any row contains a null value, I bind SQL_NULL_DATA. I guess it doesn't work because primary_key = NULL never evaluates to true.
My question is:
Is it possible to force ODBC to interpret this situation as primary_key = ? or primary_key is null? Or do I have to search through all the values and append to the SQL query manually?
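One common SQL-level workaround (independent of ODBC) is to bind the nullable value twice and rewrite the predicate as second_key = ? or (? is null and second_key is null). The sketch below shows that idea in JDBC rather than the ODBC C API, purely to illustrate the predicate; the table and column names are made up.

// Illustrative JDBC analogue (not ODBC): bind each nullable key twice so the
// predicate matches when both the bound value and the column are NULL.
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Types;

public class BatchDelete {
    private static final String SQL =
        "delete from MyTable where primary_key = ? "
        + "and (second_key = ? or (? is null and second_key is null))";

    static void deleteRows(Connection con, Long[] firstKeys, Long[] secondKeys)
            throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(SQL)) {
            for (int i = 0; i < firstKeys.length; i++) {
                ps.setLong(1, firstKeys[i]);
                if (secondKeys[i] == null) {
                    // Bind NULL twice so "? is null and second_key is null" applies.
                    ps.setNull(2, Types.BIGINT);
                    ps.setNull(3, Types.BIGINT);
                } else {
                    ps.setLong(2, secondKeys[i]);
                    ps.setLong(3, secondKeys[i]);
                }
                ps.addBatch();
            }
            ps.executeBatch();
        }
    }
}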

Using list manager in Oracle Apex 5

I am building an application with Oracle APEX 5 where I want the user to choose multiple parameters and return an interactive report based on the parameters selected by the user.
One of the parameters is a list manager item where the user selects multiple values to be passed to an SQL query.
My problem is how to pass those values to the SQL query. The item type is list manager and the name is P2_OPTIONS; how do I pass the parameters to the SQL query generating the report?
The selected values are stored in P2_OPTIONS separated by colons, for example 2:7:17.
So you can insert this string into your query, first replacing the colons with commas, to get an expression like
...
and parameter0 in (2,7,17)
...
OR
you can parse this string into an APEX collection and join that collection in your query
...and apex_collections.collection_name = 'P2_OPTIONS_PARSED'
and parameter0 = apex_collections.c001
...

Ignite SqlFieldsQuery specific keys

Using the Ignite C++ API, I'm trying to find a way to perform an SqlFieldsQuery to select a specific field, but I would like to do this for a set of keys.
One way to do this is an SqlFieldsQuery like this:
SqlFieldsQuery("select field from Table where _key in (" + keys_string + ")")
where the keys_string is the list of the keys as a comma separated string.
Unfortunately, this takes a very long time compared to just doing cache.GetAll(keys) for the same set of keys.
Is there an alternative, faster way of getting a specific field for a set of keys from an ignite cache?
EDIT:
After reading the answers, I tried changing the query to:
auto query = SqlFieldsQuery("select field from Table t join table(_key bigint = ?) i on t._key = i._key")
I then add the arguments from my set of keys like this:
for(const auto& key: keys) query.AddArgument(key);
but when running the query, I get the error:
Failed to bind parameter [idx=2, obj=159957, stmt=prep0: select field from Table t join table(_key bigint = ?) i on t._key = i._key {1: 159956}]
Clearly, this doesn't work because there is only one '?'.
So I then tried to pass a vector<int64_t> of the keys, but I got an error which basically says that std::vector<int64_t> did not specialize the ignite BinaryType. So I did this as defined here. When calling e.g.
writer.WriteInt64Array("data", data.data(), data.size())
I gave the field an arbitrary name, "data". This then results in the error:
Failed to run map query remotely.
Unfortunately, the C++ API is neither well documented nor complete, so I'm wondering if I'm missing something or whether the API does not allow passing an array as an argument to the SqlFieldsQuery.
A query that uses an IN clause doesn't always use indexes properly. The workaround for this is described here: https://apacheignite.readme.io/docs/sql-performance-and-debugging#sql-performance-and-usability-considerations
Also, if you have the option to do GetAll instead and look up by key directly, you should use it. It will likely be more efficient anyway.
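For comparison, the key-based lookup mentioned above looks roughly like this in Java (a minimal sketch; MyValue, its getField() method, and the cache name "myCache" are made-up placeholders):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import org.apache.ignite.Ignite;
import org.apache.ignite.IgniteCache;
import org.apache.ignite.Ignition;

public class GetAllExample {
    public static void main(String[] args) {
        try (Ignite ignite = Ignition.start()) {
            // "myCache" and the MyValue type are hypothetical placeholders.
            IgniteCache<Long, MyValue> cache = ignite.cache("myCache");
            Set<Long> keys = new HashSet<>(Arrays.asList(2L, 3L, 4L));
            // Direct key lookup: no SQL parsing or query planning involved.
            Map<Long, MyValue> values = cache.getAll(keys);
            values.forEach((k, v) -> System.out.println(k + " -> " + v.getField()));
        }
    }
}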
A query with the IN operator will not always use indexes. As a workaround, you can rewrite the query in the following way:
select field from Table t join table(id bigint = ?) i on t.id = i.id
and then invoke it like:
new SqlFieldsQuery(
    "select field from Table t join table(id bigint = ?) i on t.id = i.id")
    .setArgs(new Object[]{ new Integer[] {2, 3, 4} });

Python dictionary map to SQL string with list comprehension

I have a python dictionary that maps column names from a source table to a destination table.
Note: this question was answered in a previous thread for a different query string, but this query string is more complicated and I'm not sure if it can be generated using the same list comprehension method.
Dictionary:
tablemap_computer = {
'ComputerID' : 'computer_id',
'HostName' : 'host_name',
'Number' : 'number'
}
I need to dynamically produce the following query string, such that it will update properly when new column name pairs are added to the dictionary.
(ComputerID, HostName, Number) VALUES (%(computer_id.name)s, %(host_name)s, %(number)s)
I started with a list comprehension, but so far I have only been able to generate the first part of the query string with this technique.
queryStrInsert = '('+','.join([tm_val for tm_key, tm_val in tablemap_computer.items()])+')'
print(queryStrInsert)
#Output
#(computer_id,host_name,number)
#Still must generate the remaining part of the query string parameterized VALUES
If I understand what you're trying to get at, you can get it done this way:
holder = list(zip(*tablemap_computer.items()))
query = "insert into mytable ({0}) values ({1})".format(
    ",".join(holder[0]),
    ",".join(["%({})s".format(x) for x in holder[1]]),
)
This should yield:
# 'insert into mytable (HostName,Number,ComputerID) values (%(host_name)s,%(number)s,%(computer_id)s)'
I hope this helps.