I am new to the Informatica software. I have two tables, say AAA and BBB:
AAA: last_post_date
BBB: Trx_No, Field1, Field2, trx_date
I want to move the rows of BBB to the target table where trx_date is greater than last_post_date. I cannot use a Joiner transformation, as it doesn't have the >, <, >= and <= operators. If I want to use a Lookup transformation, how do I use it for this case, or is there any other way that can help me do this? I searched many websites about the Lookup transformation and still don't know how to use it.
Please help.
Thanks!
I am assuming that AAA has only one row, which contains last_post_date. If both tables are in the same database, you can use a Source Qualifier override:
select Trx_No, Field1, Field2, trx_date from BBB where trx_date > last_post_date
But if the tables are in different databases and/or you are not able to create a DB link between them, then use the solution below.
After the Source Qualifier for each source, use an Expression transformation.
Add an output port to both Expression transformations, say o_Dummy, and hardcode its value as 1 (in both transformations).
Use a Joiner with a normal join. The join condition would be o_Dummy = o_Dummy1.
After it, use a Filter to keep the records where trx_date > last_post_date.
This would be your flow.
SQ_AAA -> Expression -> Joiner -> Filter -> Target
SQ_BBB -> Expression -^
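For reference, the logical SQL equivalent of what the dummy-port join plus the filter computes would be roughly the sketch below (illustrative only, not something you run in Informatica; table and column names are taken from the question):

-- Illustrative only: the Joiner on o_Dummy = o_Dummy1 pairs every BBB row
-- with the single AAA row, and the Filter keeps the later transactions
SELECT b.Trx_No, b.Field1, b.Field2, b.trx_date
FROM BBB b
CROSS JOIN AAA a
WHERE b.trx_date > a.last_post_date;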
Use the Source Qualifier to read data from BBB, followed by a Lookup to AAA and a Filter with the condition trx_date > last_post_date.
Ideally you'd use an unconnected Lookup referenced from an Expression variable port, e.g. v_LastPostDate = IIF(ISNULL(v_LastPostDate), LKP.LookupToAAA, v_LastPostDate) - this ensures you perform the lookup only once. Not that it would matter a lot with a single value, but I thought I'd share some good practice :)
I have the following query in M:
= Table.Combine({
Table.Distinct(Table.SelectColumns(Tab1,{"item"})),
Table.Distinct(Table.SelectColumns(Tab2,{"Column1"}))
})
Is it possible to get it working without first changing the column names?
I want to get something similar to SQL syntax:
select item from Tab1 union all
select Column1 from Tab2
If you need just one column from each table then you may use this code:
= Table.FromList(List.Distinct(Tab1[item])
& List.Distinct(Tab2[Column1]))
If you use M (like in your example, or the Append Queries option), the column names must be the same, otherwise it won't work.
But it works in DAX with the command
=UNION(Table1; Table2)
https://learn.microsoft.com/en-us/dax/union-function-dax
It's not possible in Power Query M. Table.Combine makes a union of the columns whose names match. If you want to keep everything in the same step, you can wrap Tab2 in a rename-columns step (Table.RenameColumns), just as you did with Table.SelectColumns.
This matching of column names is what makes the union work correctly.
Hopefully you can manage it in the same step, if that's what you want.
Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production.
I have a table in the below format.
Name Department
Johny Dep1
Jacky Dep2
Ramu Dep1
I need an output in the below format.
Dep1 - Johny,Ramu
Dep2 - Jacky
I have tried the LISTAGG function, but it has a hard limit of 4000 characters. Since my DB table is huge, this cannot be used in the app. The other option is to use
SELECT CAST(COLLECT(Name)
But my framework allows me to execute only SELECT queries and no PL/SQL scripts. Hence I don't see any way to create a type using the "CREATE TYPE" command, which is required for the COLLECT approach.
Is there any alternative way to achieve the above result using a SELECT query?
You should add GetClobVal, and you also need to RTRIM the result, as it will have a trailing delimiter at the end:
SELECT RTRIM(XMLAGG(XMLELEMENT(E,colname,',').EXTRACT('//text()')
ORDER BY colname).GetClobVal(),',') from tablename;
If you can't create types (you can't just use SQL*Plus to create one as a one-off?) but you're OK with COLLECT, then use a built-in array. There are several knocking around in the RDBMS. Run this query:
select owner, type_name, coll_type, elem_type_name, upper_bound, length
from all_coll_types
where elem_type_name = 'VARCHAR2';
E.g. on my DB, I can use sys.DBMSOUTPUT_LINESARRAY, which is a varray of considerable size:
select department,
cast(collect(name) as sys.DBMSOUTPUT_LINESARRAY)
from emp
group by department;
A derivative of @anuu_online's answer, but it handles unescaping the XML in the result:
dbms_xmlgen.convert(xmlagg(xmlelement(E, name||',')).extract('//text()').getclobval(),1)
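For reference, a complete query built around that expression might look like the sketch below (reusing the emp table and the department/name columns from the other answers; you would still RTRIM the trailing delimiter as in the first answer):

select department,
       -- convert(..., 1) decodes XML entities such as &amp; back to &
       dbms_xmlgen.convert(xmlagg(xmlelement(E, name||',')).extract('//text()').getclobval(), 1) as names
from emp
group by department;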
For IBM DB2, casting to a VARCHAR(10000) will give more than 4000 characters:
select column1, listagg(CAST(column2 AS VARCHAR(10000)), x'0A') AS "Concat column"...
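As a self-contained sketch of the same idea (the table and column names are borrowed from the Oracle answers above, and a plain comma is used instead of the x'0A' newline delimiter):

-- DB2: cast the aggregated expression to a wider VARCHAR to get past the 4000 limit
select department,
       listagg(cast(name as varchar(10000)), ',') as "Concat column"
from emp
group by department;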
I ended up with another approach, using the XMLAGG function, which doesn't have the hard limit of 4000:
select department,
XMLAGG(XMLELEMENT(E,name||',')).EXTRACT('//text()')
from emp
group by department;
You can use:
SELECT department
, REGEXP_REPLACE(XMLCAST(XMLAGG(XMLELEMENT(x, name, ',')) AS CLOB), ',$')
FROM emp
GROUP BY department
It will return a CLOB that has no size limit and correctly handles XML entity escapes and separators.
Instead of REGEXP_REPLACE(..., ',$') you can use RTRIM(..., ','), which should be faster, but it will remove all separators from the end of the result (including any that appear at the end of name, or preceding ones if the last names are empty).
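To make that alternative concrete, the RTRIM variant of the same query could look like this (a sketch only, with the trailing-separator caveat just described):

SELECT department
     , RTRIM(XMLCAST(XMLAGG(XMLELEMENT(x, name, ',')) AS CLOB), ',')
FROM emp
GROUP BY department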
I have a table with one column containing a large JSON object in the format below. The column datatype is VARCHAR:
column1
--------
{"key":"value",....}
I'm interested in the first value of the column data.
In regex I can do it with .*?:(.*),.* , with group(1) giving me the value.
How can I use it in a SELECT query?
Don't do that; it's bad database design. Shred the keys and values into their own table as columns, or use the XML data type. XML would work fine because you can index the structure well, and you can use XPath queries on the data. XPath supports regexes natively.
You can use regular expressions with XQuery; you just need to call the fn:matches function from a SQL query or a FLWOR expression.
This is an example of how to use regular expressions from SQL:
db2 "with val as (
select t.text
from texts t
where xmlcast(xmlquery('fn:matches(\$TEXT,''^[A-Za-z 0-9]*$'')') as integer) = 0
)
select * from val"
For more information:
http://pic.dhe.ibm.com/infocenter/db2luw/v10r5/topic/com.ibm.db2.luw.xml.doc/doc/xqrfnmat.html
http://angocadb2.blogspot.fr/2014/04/regular-expressions-in-db2.html
DB2 doesn't have any built-in regex functionality, unfortunately. I did find an article about how to add this with libraries:
http://www.ibm.com/developerworks/data/library/techarticle/0301stolze/0301stolze.html
Without regexes, this operation would be a mess. You could write a function that goes through the string character by character to find the first value. Or, if you need to do more than this one operation, you could write a procedure that parses the JSON and throws it into a table of keys/values. Neither one sounds fun, though.
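If you only ever need that first value and the data is as regular as your example, plain string functions may be enough. A minimal sketch, assuming the first value never itself contains ':' or ',' and the table is called mytable (a made-up name):

-- Text between the first ':' and the first ',' - note this keeps the value's surrounding quotes
select substr(column1,
              locate(':', column1) + 1,
              locate(',', column1) - locate(':', column1) - 1) as first_value
from mytable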
In DB2 for z/OS you will have to pass the variable to XMLQUERY with the PASSING option:
db2 "with val as (
select t.text
from texts t
where xmlcast(xmlquery('fn:matches($TEXT,''^[A-Za-z 0-9]*$'')'
PASSING t.text as "TEXT") as integer) = 0
)
select * from val"
I use HSQLDB 2.0 and JPA 2.0 for my current project and have a few date columns in the DB.
I would like to run wildcard queries on the date columns. How could I do that?
E.g. if my DB contains two rows with the date values 10-01-2011 and 15-02-2011,
and my search criterion is "%10-01%", then the result should be 10-01-2011.
If instead the search criterion is "%2011%", then both rows need to be fetched by the select query.
Thanks in advance,
Satya
You can define a generated column of type VARCHAR containing a copy of the date. You can then run queries against it with both LIKE predicates and the REGEXP_MATCHES() function. An example of the column definition is below:
DATEGEN VARCHAR(10) GENERATED ALWAYS AS (CAST(DATECOL AS VARCHAR(10)))
Note that the string representation of DATE is in the form '2011-02-26', and your query strings should follow this pattern.
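Queries against that generated column could then look like the following sketch (mytable is a hypothetical table holding the DATECOL/DATEGEN columns defined above):

-- any date in 2011 (the generated text is in 'YYYY-MM-DD' form)
SELECT DATECOL FROM mytable WHERE DATEGEN LIKE '2011%'
-- any date falling in January, regardless of year
SELECT DATECOL FROM mytable WHERE DATEGEN LIKE '%-01-%'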
This can be achieved in this format:
select date_birth from member where to_char(date_birth,'MM-yyyy') like '%02-2011%'
select date_birth from member where to_char(date_birth,'MM-dd') like '%02-15%'
select date_birth from member where to_char(date_birth,'dd-yyyy') like '%30-2011%'
Regards,
Satya
I am returning a SQL dataset in SSRS (Microsoft SQL Server Reporting Services) with a one to many relationship like this:
ID REV Event
6117 B FTG-06a
6117 B FTG-06a PMT
6117 B GTI-04b
6124 A GBI-40
6124 A GTI-04b
6124 A GTD-04c
6136 M GBI-40
6141 C GBI-40
I would like to display it as a comma-delimited field in the last column [Event] like so:
ID REV Event
6117 B FTG-06a,FTG-06a PMT,GTI-04b
6124 A GBI-40, GTI-04b, GTD-04c
6136 M GBI-40
6141 C GBI-40
Is there a way to do this on the SSRS side of things?
You want to concatenate on the SQL side, not on the SSRS side; that way you can combine these results in a stored procedure, say, and then send them to the reporting layer.
Remember, databases are there to work with the data. The report should just be the presentation layer, so there is no need to tire yourself out trying to get a function to parse this data.
The best thing to do is handle this at the sproc level and push the data from the sproc to the report.
Based on your edit, this is how you would do it:
To concatenate fields, take a look at COALESCE.
You will then get a string concatenation of all the values you have listed.
Here's an example:
use Northwind
declare @CategoryList varchar(1000)
select @CategoryList = coalesce(@CategoryList + ', ', '') + CategoryName from Categories
select 'Results = ' + @CategoryList
Now, because you have an additional field, namely the ID value, you cannot just add values onto the query; you will need to use a CURSOR here, otherwise you will get the notorious error about including additional fields in a calculated query.
Take a look here for more help; make sure you read the comment at the bottom posted by an 'Alberto', as he has a similar issue to yours and you should be able to figure it out using his comment.
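For what it's worth, a rough sketch of the cursor-based version alluded to above could look like this (EventTable and the column names are assumptions based on your sample data, so treat it as a starting point rather than a finished sproc):

declare @ID int, @REV varchar(5), @EventList varchar(1000)
declare @Results table (ID int, REV varchar(5), Event varchar(1000))

declare id_cursor cursor for
    select distinct ID, REV from EventTable

open id_cursor
fetch next from id_cursor into @ID, @REV

while @@FETCH_STATUS = 0
begin
    -- build the comma-delimited Event list for this ID using the same COALESCE trick as above
    set @EventList = null
    select @EventList = coalesce(@EventList + ',', '') + Event
    from EventTable
    where ID = @ID

    insert into @Results (ID, REV, Event) values (@ID, @REV, @EventList)
    fetch next from id_cursor into @ID, @REV
end

close id_cursor
deallocate id_cursor

select * from @Results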