I am using lookup transformation in mapping.
In the lookup I am using a Lookup Source Filter and a Lookup SQL Override together.
But I have observed that only the Lookup SQL Override is applied. The condition in the Lookup Source Filter is ignored when the Lookup SQL Override contains a query; the Lookup Source Filter works only when the Lookup SQL Override is empty.
Can anyone please explain this behavior?
Thanks!
This is the intended behaviour! If you are using a SQL override, you should include the filter condition in the override itself; there is no need for a separate filter. Informatica cannot analyze the SQL override and combine it with the source filter.
The override always takes precedence over the filter condition, in both the Lookup transformation and the Source Qualifier transformation.
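In practice this means folding the Lookup Source Filter condition into the override's WHERE clause by hand. A minimal sketch, using made-up table and column names (CUSTOMERS and ACTIVE_FLAG are not from the question):

```sql
-- Hypothetical lookup override: the condition that used to live in the
-- Lookup Source Filter (ACTIVE_FLAG = 'Y') is merged into the WHERE clause.
SELECT CUST_ID,
       CUST_NAME
FROM CUSTOMERS
WHERE ACTIVE_FLAG = 'Y'
ORDER BY CUST_ID --
```

The trailing `--` is the usual Informatica idiom for supplying your own ORDER BY: it comments out the ORDER BY clause that the Integration Service appends to the override.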
I have several fields that contain exactly the same SQL query. Is it possible to define the query centrally in APEX, in the same way as a List of Values, or as a function in Oracle? I am using APEX 18.2.
Here are two extended solutions
Pipelined SQL
https://smart4solutions.nl/blog/on-apex-lovs-and-how-to-define-their-queries/
Dynamic SQL
http://stevenfeuersteinonplsql.blogspot.com/2017/01/learn-to-hate-repetition-lesson-from.html
Call me dense, but I don't think I understand why you'd have multiple fields (presumably on the same form) whose source is the same SQL query.
Are you passing a parameter to the SQL to get a different value for each field?
If you are passing a parameter to the SQL query, why not create a database view to hold the query, then pass the parameter to the view. That way, if you need to change it, it's in one place.
If they really are all the same value from the same query, how about using the SQL for one field/page_item, then make the source for the others be the first page item?
I would create a hidden item with the query as its source text and use &HIDDEN_ITEM_NAME. to reference its value in the source of any item where I want to display the query.
I finally solved it with a function, using the source type "PL/SQL Function Body returning SQL Query" in APEX, so I have everything in one place. I created a function in SQL Developer that returns a SQL query.
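A sketch of what such a function can look like; the function name and the query it returns are made up for illustration:

```sql
-- Hypothetical example: a stored function returning the shared query text.
-- In APEX, set the item source to "PL/SQL Function Body returning SQL Query"
-- with a body such as: RETURN my_lov_query;
CREATE OR REPLACE FUNCTION my_lov_query RETURN VARCHAR2
IS
BEGIN
  RETURN 'SELECT ename AS display_value, empno AS return_value '
      || 'FROM emp ORDER BY ename';
END my_lov_query;
/
```

Any change to the query then happens in this one function instead of in every item.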
I'm trying to use a Lookup transformation to extract ACCT_ID from ACCT table based on the port CUST_DDA which is an output port from an expression.
I'm using a SQL override as below. The initial lookup condition is:
SUBSTR_ACCT_ID = IN_CUST_DDA
Override:
SELECT
ACCT.ACCT_ID as ACCT_ID,
ACCT.ALT_ACCT_ID as ALT_ACCT_ID,
substr(acct.acct_id,-1*(length(IN_CUST_DDA))) as SUBSTR_ACCT_ID
FROM ACCT
WHERE ACCT.ALT_ACCT_ID LIKE '%'||TO_CHAR(IN_CUST_DDA)
AND ACCT.ACCT_ID LIKE '%'||TO_CHAR(IN_CUST_DDA)
The above sql override is failing due to the error : ORA-00904: "IN_CUST_DDA": invalid identifier
Is there a way to use the value from the CUST_DDA port as an input to the lookup? CUST_DDA is not a field that belongs to the ACCT table.
Thanks.
From the override I can see that you are trying to convert IN_CUST_DDA to CHAR, and at the same time you are using IN_CUST_DDA inside the LENGTH function.
The LENGTH function might be contributing to the issue, because LENGTH can only be applied to a string.
To use CUST_DDA (from the source) in your lookup override, you need to join the lookup table to the source on a common field in the override.
You can't use the port the way you mentioned. When you run the workflow, the Informatica Integration Service runs the lookup override query in the database and loads the data into the cache file (that is why you receive the error ORA-00904: "IN_CUST_DDA": invalid identifier). Once the cache file is ready, it applies the lookup conditions and returns the output.
Let me know if this is not clear.
Regards
Raj
To achieve this you need to configure your lookup as non-cached, so the query will be executed for each input row. Note that this degrades performance a lot.
Next, you need slightly different syntax, enclosing the input port in question marks. In your case it should be something like this (it might need a little adjustment):
SELECT
ACCT.ACCT_ID as ACCT_ID,
ACCT.ALT_ACCT_ID as ALT_ACCT_ID,
substr(acct.acct_id,-1*(length(?IN_CUST_DDA?))) as SUBSTR_ACCT_ID
FROM ACCT
WHERE ACCT.ALT_ACCT_ID LIKE '%'||TO_CHAR(?IN_CUST_DDA?)
AND ACCT.ACCT_ID LIKE '%'||TO_CHAR(?IN_CUST_DDA?)
So I'm learning Informatica PowerCenter
(at least through Cloud Designer).
I'm trying to figure out why we would use a Lookup transformation to retrieve data based on a key, when we can just use a Source transformation and join the data based on the key.
I tried both approaches and they accomplished the same thing using two different tables (flat files, CSV).
Why would I use a Lookup transformation (besides having one transformation instead of two, Source + Joiner)?
There are several kinds of lookup transformation that solve particular scenarios which cannot be handled with a Joiner. For example, unconnected lookups, un-cached lookups, dynamic cache lookups, and active versus passive lookups each have their unique uses.
One big advantage of the Lookup transformation is the unconnected mode:
You can perform the Lookup based on a condition
You can also use the same lookup several times on several fields (e.g. you want to retrieve the name of two different customers, the payer and the ship-to, from the same dimension table)
More generally (i.e. not specific to the unconnected Lookups):
You can perform a Lookup on an inequality, which is not possible with the Joiner (e.g. retrieve the status of the customer at the current date, having a begin and end of validity date in the lookup table)
You can retrieve the first / last value based on the sort criteria if there are more than one record satisfying the Lookup condition
This comes in addition to what has already been said: better readability, especially in case of multiple Lookups, the Dynamic Lookup Cache, etc.
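As an illustration of the inequality case, a lookup over a validity-dated table could look like this (the table, column, and port names are invented; the range check lives in the lookup condition, which a Joiner cannot express):

```sql
-- Hypothetical lookup table holding one status row per validity period.
SELECT CUST_ID,
       STATUS,
       VALID_FROM,
       VALID_TO
FROM CUST_STATUS_HIST
-- Lookup condition (configured on the transformation, not in the SQL):
--   CUST_ID    =  IN_CUST_ID
--   VALID_FROM <= IN_AS_OF_DATE
--   VALID_TO   >= IN_AS_OF_DATE
```

A Joiner can only match CUST_ID = CUST_ID; the <= and >= comparisons on the dates are only possible in a Lookup condition.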
Hope this helps :)
I started investigating why my Django Model.objects.filter(condition=variable).order_by(textcolumn) queries do not return objects in the correct order, and found out that it is a database (PostgreSQL) issue.
In my earlier question (Postgresql sorting language specific characters (collation)) I figured out (with a lot of help from zero323 in actually getting it to work) that I can specify the collation per database query like this:
SELECT nimi COLLATE "et_EE" FROM test ORDER BY nimi ASC;
But as far as I can see, order_by only accepts field names as arguments.
I was wondering whether it is somehow possible to extend that functionality to include a collation parameter as well. Is it possible to hack it in somehow using mixins or the like? Or is a feature request the only way to do this right now?
I wish it would work something like this:
Model.objects.filter(condition = variable).order_by(*fieldnames, collation = 'et_EE')
Edit1:
Apparently I'm not the only one to ask for this:
https://groups.google.com/forum/#!msg/django-developers/0iESVnawNAY/JefMfAm7nQMJ
Alan
As @olau mentioned in the comment, since Django 3.2 the Collate utility is available. For older Django versions, see the original information below the following code sample:
# Starting with Django 3.2:
from django.db.models.functions import Collate

Test.objects.order_by(Collate('nimi', 'et_EE'))
Since Django 1.8 order_by() accepts not only field names but also query expressions.
In another answer I gave an example of how you can override the default collation for a column. The useful query expression here is Func(), which you may subclass or use directly:
from django.db.models import Func

nimi_et = Func(
    'nimi',
    function='et_EE',
    template='(%(expressions)s) COLLATE "%(function)s"')
Test.objects.order_by(nimi_et.asc())
Yet, note that the resulting SQL will be more like:
SELECT nimi FROM test ORDER BY nimi COLLATE "et_EE" ASC;
That is, the collation is overridden in ORDER BY clause rather than in SELECT clause. However, if you need to use it in a WHERE clause, you can use Func() in annotate().
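A sketch of that annotate() usage, untested and assuming a Django project with the Test model from the question (the filter value 'M' is an arbitrary example):

```python
# Hypothetical sketch: expose the collated expression as an annotation so it
# can be used for both ordering and filtering. Requires a Django project.
from django.db.models import Func

collated = Func(
    'nimi',
    function='et_EE',
    template='(%(expressions)s) COLLATE "%(function)s"')

qs = (Test.objects
      .annotate(nimi_et=collated)
      .filter(nimi_et__gte='M')  # the collation applies in the WHERE clause
      .order_by('nimi_et'))
```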
Alright, it seems that right now raw queries are the only way to do this.
But there is an open Django ticket which will hopefully be resolved sometime soon.
Hi everybody.
I work with Django 1.3 and Postgres 9.0. I have a very complex SQL query which extends a simple model table lookup with some extra fields, and it is wrapped in a table function since it is parameterized.
A month ago I managed to make it work with the help of a raw query, but RawQuerySet lacks a lot of features which I really need (filters, the count() and clone() methods, chainability).
The idea looks simple. QuerySet lets me perform this query:
SELECT "table"."field1", ... ,"table"."fieldN" FROM "table"
whereas I need to do this:
SELECT "table"."field1", ... ,"table"."fieldN" FROM proxy(param1, param2)
So the question is: how can I do this? I've already started to create a custom manager, but I can't substitute model.db_table with a custom string (because it gets quoted and the database stops recognizing the function call).
If there's no way to substitute the table with a function, I would like to know if I can create a QuerySet from a RawQuerySet (not the cleanest solution, but a plain RawQuerySet brings so much pain in... one body part).
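For reference, the raw-query approach mentioned above can be sketched like this; the model name and parameter variables are placeholders standing in for the ones from the question:

```python
# Sketch: parameterized raw query against the table function.
# A RawQuerySet can be iterated, but it has no filter(), count() or clone().
rows = MyModel.objects.raw(
    'SELECT "table"."field1", "table"."fieldN" FROM proxy(%s, %s)',
    [param1, param2])

for obj in rows:
    print(obj.field1)
```

Passing the parameters as a list lets the database driver handle quoting, instead of interpolating them into the SQL string.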
If your requirement is to use raw SQL for efficiency purposes while still having access to model methods, what you really need is a library that maps the columns of the SQL result to fields of your models.
And that library is Unjoinify.