explicitly call TEIID function - teiid

I would like to explicitly call a Teiid function instead of having it pushed down...
Is there a way for me to explicitly call org.jboss.teiid.row_number() over (order by x), so that the row_number() function is evaluated at the Teiid layer instead of getting pushed down to the underlying data source?
I've previously used translator overrides to force federation and execution of SQL outer joins at the Teiid layer by setting SupportsOuterJoins=false, but I don't see anything specific to over() clauses. I was hoping there was a way to explicitly tell Teiid to apply row_number() to the result set after it is returned.
Thank you,
Adam

try "SupportsElementaryOlapOperations=false" to turn off the RANK, DENSE_RANK, and ROW_NUMBER functions.

Related

How to update a UDF in Redshift (amazon)?

I have created a UDF in Redshift. I can view this in the pg_proc table by
select * from pg_proc where proname ilike 'my_udf';
Now I need to update this function (including the function signature). I have tried using update statements on the pg_proc table with no luck.
EDIT: It seems the only way to update the signature is to delete the function, although DROP FUNCTION <function_name> does not seem to work.
What is the correct way to do this? Also, knowing the function signature would be helpful; is there any way to view that?
You should use CREATE [ OR REPLACE ] FUNCTION... to redefine the User-Defined Function (UDF).
See: CREATE FUNCTION - Amazon Redshift
If the signature is changing, you might need to DROP FUNCTION and then CREATE FUNCTION.
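As a rough sketch (the function name and argument types below are hypothetical, so adjust them to your UDF), the drop-and-recreate flow looks like this. Note that DROP FUNCTION in Redshift requires the argument list, which is likely why the bare DROP FUNCTION <function_name> failed; the argument type OIDs stored in pg_proc.proargtypes can help you work out the current signature:

-- list the existing function and its argument type OIDs
select proname, proargtypes from pg_proc where proname ilike 'my_udf';

-- drop the old version by its exact argument list...
drop function my_udf(float);

-- ...then create the function with the new signature
create or replace function my_udf (a float, b float)
returns float
immutable
as $$
    return a + b
$$ language plpythonu;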

Can Informatica's Stored Procedure Transformer process stored procedures that have multiple resultsets?

I have a stored procedure that returns two resultsets. I know Informatica has a Stored Procedure Transformer, but I cannot find anywhere whether it can handle a stored procedure that returns more than one resultset.
Is this an Informatica capability?
It's not possible, I'm afraid. Informatica will not be able to 'guess' what to do with each dataset.
In general, whatever it is that you need to do with the results, e.g. if you need to:
join them, or
use just one of them in a particular mapping, or
switch between them with every run,
I'd recommend wrapping this stored procedure with another one that performs the required logic and returns the appropriate single result set.
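As a minimal sketch of that idea (SQL Server syntax purely as an example; the procedure, table, and column names are hypothetical), the wrapper re-runs the relevant logic and exposes one result set that the mapping can read:

CREATE PROCEDURE dbo.GetOrders_ForInformatica
AS
BEGIN
    -- repeat (or refactor into views) the queries the original procedure runs,
    -- and return a single joined result set instead of two separate ones
    SELECT h.order_id, h.order_date, d.line_no, d.amount
    FROM   dbo.order_header h
    JOIN   dbo.order_detail d ON d.order_id = h.order_id;
END;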
As far as I am aware, the Informatica Stored Procedure transformation can only produce a return value, not a result set.
A possible solution is to store the result-set data in a table or flat file and use that as a source (either via an SQ override or a flat-file source) in a subsequent mapping.

Select stmt in source qualifier along with procedure call in Informatica

We are dealing with a relational source (Oracle). The system is designed so that we must first execute a package, which enables reading data from Oracle; only then can a user get results from a select statement. I am trying to find a way to implement this in an Informatica mapping.
What we tried
1. In PreSQL we tried to execute the package, and in the SQL query we wrote the select statement - the data does not get loaded into the target.
2. In PreSQL we wrote a block in which we execute the package and, just after that (within the same begin...end block), wrote an insert statement on top of the select statement - this does insert the data, however I am not in favor of this solution because both source and target are dummies, which will confuse people in the future.
Is there any possibility to implement this using the 1st option?
Please help and suggest.
Thanks
The Stored Procedure transformation is there for this purpose; configure it to execute on Source Pre-load.
Pre-SQL and the data read are not part of the same database session. From what I understand, the package call needs to happen within the same session, as otherwise the read access is granted only for the session that made the call.
What you can do is create a stored procedure/package that grants the read access and then returns the data, and use it in the SQL Override of your Source Qualifier. This way the SQ reads the data as usual. The concept:
CREATE OR REPLACE PROCEDURE ReadMyData (p_data OUT SYS_REFCURSOR) AS
BEGIN
    GiveMeTheReadAccess;   -- the package call that enables reading
    OPEN p_data FOR SELECT * FROM MyTable;
END;
And use ReadMyData from the Source Qualifier.

WSO2 CEP pizzaOrderProcessingPlan's Siddhi Language is strange

The following link provides a WSO2 CEP sample:
https://docs.wso2.com/display/CEP310/Getting+Started+with+CEP
I went through the document step by step and had no problems.
But I have a question about the following Siddhi queries:
define table pizza_deliveries (deliveredTime long, order_id string);
from deliveryStream
select time, orderNo
insert into pizza_deliveries;
from orderStream#window.time(30 seconds)
insert into overdueDeliveries for expired-events;
from overdueDeliveries as overdueStream unidirectional join pizza_deliveries
on pizza_deliveries.order_id == overdueStream.orderNo
select count(overdueStream.orderNo) as sumOrderId, overdueStream.customerName
insert into deliveredOrders;
In this execution plan, pizza_deliveries is defined as a table.
orderStream, deliveryStream, and deliveredOrders are defined as streams in the document.
I can't find where or when "overdueDeliveries" is defined, but it works.
My questions are:
When or where is overdueDeliveries defined? Is it automatically generated?
And...
Is overdueDeliveries a stream or a table?
overdueDeliveries is a stream, and it's defined implicitly by the Siddhi engine.
If you look at this query:
from orderStream#window.time(30 seconds)
insert into overdueDeliveries for expired-events;
In this query, all attributes coming through orderStream are added to the overdueDeliveries stream, and the Siddhi engine defines the stream with them.
Similarly, if you write a query like the following:
from orderStream
select orderNo
insert into orderNumbersStream;
In this case the Siddhi engine will define a stream named orderNumbersStream with only the attribute 'orderNo', since it is the one explicitly selected. If there is no select clause, all attributes are added to the stream by default.
Also, orderStream, deliveryStream and deliveredOrders are streams. In Siddhi, events flow through 'streams', and you can think of a stream as a way to pass events from one query to another (one or more).
Regarding tables: when you define a table, you have to define it explicitly with a define table statement, as is done in this execution plan.
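For comparison, if you prefer to state it explicitly rather than rely on inference, you could declare the stream yourself before using it, roughly as below (the attribute list is only a guess based on the fields visible in the sample; the real orderStream may carry more attributes):

define stream overdueDeliveries (customerName string, orderNo string);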

How do I create a CAST in Informix to cast an LVARCHAR to TEXT?

What built-in routine can I make use of to cast data of type LVARCHAR to data of type TEXT?
The larger context: I have a table with a column that has been defined as LVARCHAR(4096). Now a developer wishes to change the data type of this column to TEXT. Ideally this would be done with:
ALTER TABLE foo MODIFY bar TEXT;
...but in such a case the following error is puked to the screen:
ALTER TABLE can not modify column (bar) type. Need a cast from the current type to the new type.
I have read up on the CREATE CAST construction, but I cannot begin to think what on earth the proper conversion function would look like. Without a function, Informix will not allow the CREATE CAST to work. That is, if I do, simply:
CREATE CAST (LVARCHAR AS TEXT)
...Informix tells me that a cast function is required (which makes sense).
Beware, Informix developers: if you inadvertently run into a problem like this, there is no way to get out of it using SQL or DDL alone. Let me repeat that.
If you have a VARCHAR or an LVARCHAR column that you need to migrate to be a TEXT column, and if you cannot afford to lose data in that column, there is no way to do this in SQL or DDL.
Instead, you must write a program that does the conversion for you inside the database driver, in memory. In my case, I used updatable JDBC result sets and copied the column to a new column, letting the JDBC driver perform the conversion, then dropped the old column and renamed the new column back to the old name. This general pattern is the only way to migrate existing character data into a TEXT column.
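For reference, the DDL around that copy step looks roughly like the following (column names are hypothetical, and the middle step is the part that has to happen client-side through the driver rather than in SQL):

ALTER TABLE foo ADD (bar_new TEXT);
-- ...copy foo.bar into foo.bar_new row by row via the driver (e.g. an updatable ResultSet)...
ALTER TABLE foo DROP (bar);
RENAME COLUMN foo.bar_new TO bar;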
@Storm: Which version of IDS/ODBC are you using? AFAIK, IDS 9 or 10 can't do that without specific embedded C in the server (see the IBM "boulder" documentation site), and there is no way to do it directly through SQL, not even with blob-related functions or the like.
Another way is by using UNLOAD / LOAD.
In my scenario we have lots of constraints: no admin rights on the enterprise server, and as service providers we can only use the database, not modify its structure. So we cannot change TEXT fields just by running queries.