Teiid Transaction support in Virtual Procedures

I'm trying to execute a few SQL SELECT statements inside a Teiid virtual procedure. Does Teiid have transaction support for virtual procedures? If so, does it guarantee that the same database connection from the connection pool is used to execute all SELECT statements within that virtual procedure? My code would look like the following.
CREATE VIRTUAL PROCEDURE GetFlightRecordsByID(IN p1 integer) RETURNS (xml_out xml) OPTIONS (UPDATECOUNT 0, "REST:METHOD" 'GET', "REST:URI" 'GetFlightRecordsByID')
AS
/*+ cache(pref_mem ttl:14400000) */
BEGIN
SELECT XMLELEMENT("", XMLAGG(XMLELEMENT("", XMLFOREST(.....))) ) as xml_out FROM (...) A;
SELECT XMLELEMENT("", XMLAGG(XMLELEMENT("", XMLFOREST(.....))) ) as xml_out FROM (...) B;
SELECT XMLELEMENT("", XMLAGG(XMLELEMENT("", XMLFOREST(.....))) ) as xml_out FROM (...) C;
END

Does Teiid have transaction support for virtual procedures?
Yes, but it is largely dependent on your data sources.
If so, does it guarantee that the same database connection from the connection pool is used to execute all SELECT statements within that virtual procedure?
Yes. When a transaction is started (which can be XA or local from the client, a request-scoped transaction, or even a block-level one), the WildFly/EAP transaction manager is relied upon to coordinate the transaction - so generally you'll need XA or transactional sources.
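For the block-level case, a minimal sketch under the assumption that the Teiid procedure language's atomic block (BEGIN ATOMIC ... END) is what is meant by a block-level transaction; the SELECT bodies are elided exactly as in the question, and whether one physical connection is reused still depends on the source being transactional, as noted above:
CREATE VIRTUAL PROCEDURE GetFlightRecordsByID(IN p1 integer) RETURNS (xml_out xml)
AS
BEGIN ATOMIC
    /* statements inside an atomic block are executed as one transactional unit,
       coordinated by the container's transaction manager */
    SELECT ... FROM (...) A;
    SELECT ... FROM (...) B;
    SELECT ... FROM (...) C;
END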

Related

Set autocommit off in PostgreSQL with SOCI C++

This is an answer rather than a question, which I need to state on SO anyway. I struggled with this question ("how to turn off autocommit when using the soci library with PostgreSQL databases") for a long time and came up with several solutions.
In Oracle, by default the autocommit option is turned off and we have to call soci::session::commit explicitly to commit the transactions we have made, but in PostgreSQL it is the other way around and it will commit as soon as we execute an SQL statement (correct me if I'm wrong). This introduces problems when we write database-independent applications. The soci library provides soci::transaction to address this.
So, when we initialize a soci::transaction by providing the soci::session to it, it will hold the changes we have made without committing them to the database. At the end, when we call soci::transaction::commit, it will commit the changes to the database.
soci::session sql(CONNECTION_STRING);
soci::transaction tr(sql);
try {
    sql << "insert into soci_test(id, name) values(7, 'John')";
    tr.commit();
}
catch (std::exception& e) {
    tr.rollback();
}
But performing a commit or rollback will end the transaction tr, and we need to initialize another soci::transaction to hold the future changes we are about to make (that is, to keep an active transaction in progress). Here are more facts about soci::transaction (a short sketch of the second point follows the list).
You can have only one soci::transaction instance per soci::session. If you initialize another, it will replace the first one.
You cannot perform more than a single commit or rollback using a soci::transaction. You will receive an exception the second time you commit or roll back.
You can initialize a transaction and then use session::commit or session::rollback. It gives the same result as transaction::commit or transaction::rollback, but the transaction still ends as soon as you perform a single commit or rollback, as usual.
The soci::transaction object does not need to be visible in the scope where you execute the SQL and call commit or rollback in order to hold the database changes you made until you explicitly commit or roll back. In other words, if there is an active transaction in progress for a session, database changes are held until we explicitly commit or roll back.
But if the lifetime of the transaction instance created for the session has ended, we cannot expect the database changes to still be held.
If you ever run into "WARNING: there is no transaction in progress", make sure you perform commit or rollback only through soci::transaction::commit or soci::transaction::rollback.
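A minimal sketch of the second point, reusing the CONNECTION_STRING placeholder from above (the exact exception message depends on the soci version):
soci::session sql(CONNECTION_STRING);
soci::transaction tr(sql);
sql << "insert into soci_test(id, name) values(7, 'John')";
tr.commit();            // fine: ends the transaction
try {
    tr.commit();        // second commit on the same object
}
catch (soci::soci_error const& e) {
    // reached: a soci::transaction cannot be committed or rolled back twice
}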
Now I will post the solution I came up with to enable explicit commit or rollback with any database backend.
#include <memory>
#include <soci/soci.h>   // adjust the include path for your SOCI install

namespace mysociutils
{
class session : public soci::session
{
public:
    void open(std::string const & connectString)
    {
        soci::session::open(connectString);
        // start the first transaction so statements are held until commit()
        tr.reset(new soci::transaction(*this));
    }
    void commit()
    {
        tr->commit();
        // a soci::transaction can only be handled once, so start a new one
        tr.reset(new soci::transaction(*this));
    }
    void rollback()
    {
        tr->rollback();
        tr.reset(new soci::transaction(*this));
    }
    ~session()
    {
        // discard any uncommitted work when the session goes away
        if (tr)
        {
            tr->rollback();
        }
    }
private:
    std::unique_ptr<soci::transaction> tr;
};
}
Whenever a commit or rollback is performed, a new soci::transaction is initialized. Now you can replace your soci::session sql with mysociutils::session sql and enjoy SET AUTOCOMMIT OFF behaviour.
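A minimal usage sketch under the same assumptions as the earlier example (the mysociutils::session class above and the CONNECTION_STRING placeholder):
mysociutils::session sql;
sql.open(CONNECTION_STRING);   // opens the session and starts the first transaction
try {
    sql << "insert into soci_test(id, name) values(7, 'John')";
    sql.commit();              // commits and immediately starts a new transaction
}
catch (std::exception& e) {
    sql.rollback();            // rolls back and immediately starts a new transaction
}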

Postgres advisory lock within function allows concurrent execution

I'm encountering an issue where I have a function that is intended to require serialized access under certain circumstances. This seemed like a good case for using advisory locks. However, under fairly heavy load, I'm finding that the serialized access isn't occurring and I'm seeing concurrent access to the function.
The intention of this function is to provide "inventory control" for an event. Meaning, it is intended to limit concurrent ticket purchases for a given event such that the event is not oversold. These are the only advisory locks used within the application/database.
I'm finding that occasionally there are more tickets in an event than the eventTicketMax value. This doesn't seem like it should be possible because of the advisory locks. When testing with low volume (or manually introduced delays such as pg_sleep after acquiring the lock), things work as expected.
CREATE OR REPLACE FUNCTION createTicket(
userId int,
eventId int,
eventTicketMax int
) RETURNS integer AS $$
DECLARE insertedId int;
DECLARE numTickets int;
BEGIN
-- first get the event lock
PERFORM pg_advisory_lock(eventId);
-- make sure we aren't over ticket max
numTickets := (SELECT count(*) FROM api_ticket
WHERE event_id = eventId and status <> 'x');
IF numTickets >= eventTicketMax THEN
-- raise an exception if this puts us over the max
-- and bail
PERFORM pg_advisory_unlock(eventId);
RAISE EXCEPTION 'Maximum entries number for this event has been reached.';
END IF;
-- create the ticket
INSERT INTO api_ticket (
user_id,
event_id,
created_ts
)
VALUES (
userId,
eventId,
now()
)
RETURNING id INTO insertedId;
-- update the ticket count
UPDATE api_event SET ticket_count = numTickets + 1 WHERE id = eventId;
-- release the event lock
PERFORM pg_advisory_unlock(eventId);
RETURN insertedId;
END;
$$ LANGUAGE plpgsql;
Here's my environment setup:
Django 1.8.1 (django.db.backends.postgresql_psycopg2 w/ CONN_MAX_AGE 300)
PGBouncer 1.7.2 (session mode)
Postgres 9.3.10 on Amazon RDS
Additional variables which I tried tuning:
setting CONN_MAX_AGE to 0
Removing pgbouncer and connecting directly to DB
In my testing, I have noticed that, in cases where an event was oversold, the tickets were purchased from different webservers so I don't think there is any funny business about a shared session but I can't say for sure.
As soon as PERFORM pg_advisory_unlock(eventId) is executed, another session can grab that lock, but as the INSERT of session #1 is not yet committed, it will not be counted in the COUNT(*) of session #2, resulting in the over-booking.
If you keep the advisory lock strategy, you must use transaction-level advisory locks (pg_advisory_xact_lock) as opposed to session-level locks. Those locks are automatically released at COMMIT time.
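A sketch of the function with that change applied (the body is otherwise the one posted above; the explicit unlock calls go away because the lock is released automatically when the transaction ends):
CREATE OR REPLACE FUNCTION createTicket(
    userId int,
    eventId int,
    eventTicketMax int
) RETURNS integer AS $$
DECLARE
    insertedId int;
    numTickets int;
BEGIN
    -- transaction-level lock: held until the surrounding transaction commits or rolls back
    PERFORM pg_advisory_xact_lock(eventId);
    -- make sure we aren't over ticket max
    SELECT count(*) INTO numTickets
    FROM api_ticket
    WHERE event_id = eventId AND status <> 'x';
    IF numTickets >= eventTicketMax THEN
        RAISE EXCEPTION 'Maximum entries number for this event has been reached.';
    END IF;
    -- create the ticket; the lock is not given up until this INSERT has been committed
    INSERT INTO api_ticket (user_id, event_id, created_ts)
    VALUES (userId, eventId, now())
    RETURNING id INTO insertedId;
    -- update the ticket count
    UPDATE api_event SET ticket_count = numTickets + 1 WHERE id = eventId;
    RETURN insertedId;
END;
$$ LANGUAGE plpgsql;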

Addressing to temporary table created through CDatabase::ExecuteSQL

Consider the following code and advise why I cannot access the temporary table created in the current session.
CDatabase cdb;
CString csConnectionString = "Dsn=prm2;Driver={INFORMIX 3.34 32 BIT};Host=10.XXX.XXX.XXX;Server=SRVNAME;Service=turbo;Protocol=olsoctcp;Database=DBNAME;Uid=user;Pwd=password";
cdb.OpenEx(csConnectionString, CDatabase::noOdbcDialog);
cdb.ExecuteSQL(CString("Set Isolation to Dirty Read"));
...
CString csStatement1 = "SELECT serno FROM TABLE1 into temp ttt_1;";
CString csStatement2 = "DROP TABLE ttt_1";
cdb.ExecuteSQL(csStatement1); // point1
cdb.ExecuteSQL(csStatement2); // point2
...
cdb.Close();
At point1 everything is fine. At point2 I have:
The specified table (ttt_1) is not in the database. State:S0002,Native:-206,Origin:[Informix][Informix ODBC Driver][Informix]
I tried to specify the username as a prefix (like user.ttt_1 or "user".ttt_1); I tried to create a permanent table with the respective statement in csStatement1, and every time it failed at point2. But when I tried to create the same temporary table twice within csStatement1, I got the message that the temporary table already exists in the session.
Please advise: what is wrong and how can I access the created temporary tables?
It is all to do with ODBC autocommit mode. By default ODBC uses the option that is defined during the connection, and according to connectionstrings.com the default setting for Informix is commitretain=false.
You have two options: either set it via the connection string (commitretain=true) or (the better option) via ODBC. For a set of statements where you'd like to retain the temp table, activate manual-commit mode via SQLSetConnectAttr, then execute a few statements and then call SQLEndTran. Please note that in manual mode you do not need to call BEGIN TRANSACTION, as a transaction will start automatically (behaviour similar to Oracle).
Please note that ODBC applications should not use Transact-SQL transaction statements such as BEGIN TRANSACTION, COMMIT TRANSACTION, or ROLLBACK TRANSACTION, but should use the ODBC calls instead.
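A rough sketch of that flow against the CDatabase from the question, assuming cdb is still open; m_hdbc is the raw ODBC connection handle that MFC's CDatabase exposes, and mixing raw ODBC calls with MFC is kept deliberately minimal here:
#include <sql.h>
#include <sqlext.h>

// switch the connection to manual-commit mode; a transaction now starts
// implicitly with the first statement executed on it
SQLSetConnectAttr(cdb.m_hdbc, SQL_ATTR_AUTOCOMMIT,
                  (SQLPOINTER)SQL_AUTOCOMMIT_OFF, SQL_IS_UINTEGER);

cdb.ExecuteSQL(csStatement1);   // create the temp table
// ... other statements in the same transaction can see ttt_1 here ...
cdb.ExecuteSQL(csStatement2);   // drop it when done

// end the transaction explicitly (use SQL_ROLLBACK to discard it instead)
SQLEndTran(SQL_HANDLE_DBC, cdb.m_hdbc, SQL_COMMIT);

// optionally restore autocommit for subsequent work
SQLSetConnectAttr(cdb.m_hdbc, SQL_ATTR_AUTOCOMMIT,
                  (SQLPOINTER)SQL_AUTOCOMMIT_ON, SQL_IS_UINTEGER);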

Setting Connection Parameters via ADO for SQL Server

Is it possible to set a connection parameter on a connection to SQL Server and have that variable persist throughout the life of the connection? The parameter must be usable by subsequent queries.
We have some old Access reports that use a handful of VBScript functions in the SQL queries (let's call them GetStartDate and GetEndDate) that return global variables. Our application would set these before invoking the query and then the queries can return information between date ranges specified in our application.
We are looking at changing to a ReportViewer control running in local mode, but I don't see any convenient way to use these custom functions in straight T-SQL.
I have two concept solutions (not tested yet), but I would like to know if there is a better way. Below is some pseudo code.
Set all variables before running Recordset.OpenForward
Connection->Execute("SET #GetStartDate = ...");
Connection->Execute("SET #GetEndDate = ...");
// Repeat for all parameters
Will these variables persist to later calls of Recordset->OpenForward? Can anything reset the variables aside from another SET/SELECT #variable statement?
Create an ADOCommand "factory" that automatically adds parameters to each ADOCommand object I will use to execute SQL
// Command has been previously been created
ADOParameter *Parameter1 = Command->CreateParameter("GetStartDate");
ADOParameter *Parameter2 = Command->CreateParameter("GetEndDate");
// Set values and attach etc...
What I would like to know if there is something like:
Connection->SetParameter("GetStartDate", "20090101");
Connection->SetParameter("GetEndDate", 20100101");
And these will persist for the lifetime of the connection, and the SQL can do something like #GetStartDate to access them. This may be exactly solution #1, if the variables persist throughout the lifetime of the connection.
Since no one has ventured an answer, I'm guessing there isn't an elegant solution. That said:
Global cursors persist for the duration of the connection and can be accessed from any SQL or stored proc, so you could execute this once on the connection:
DECLARE KludgeKursor CURSOR GLOBAL STATIC FOR
SELECT StartDate = '2010-01-01', EndDate = '2010-04-30'
OPEN KludgeKursor
and in your stored procedures:
--get the values
DECLARE @StartDate datetime, @EndDate datetime
FETCH FIRST FROM GLOBAL KludgeKursor
INTO @StartDate, @EndDate
--go crazy
SELECT @StartDate, @EndDate
Each connection sees only its own values, so the same stored procs can be used for different connections/values. The global cursor is automatically deallocated when the connection ends.

Using MbUnit3's [Rollback] for unit testing NHibernate interactions with SQLite

Background:
My team is dedicated to ensuring that straight from checkout, our code compiles and unit tests run successfully. To facilitate this and test some of our NHibernate mappings, we've added a SQLite DB to our repository which is a mirror of our production SQL Server 2005 database. We're using the latest versions of: MbUnit3 (part of Gallio), System.Data.SQLite and NHibernate.
Problem:
I've discovered that the following unit test does not work with SQLite, despite executing without trouble against SQL Server 2005.
[Test]
[Rollback]
public void CompleteCanPersistNode()
{
    // returns a Configuration for either SQLite or SQL Server 2005 depending on how the project is configured.
    Configuration config = GetDbConfig();
    ISessionFactory sessionFactory = config.BuildSessionFactory();
    ISession session = sessionFactory.OpenSession();

    Node node = new Node();
    node.Name = "Test Node";
    node.PhysicalNodeType = session.Get<NodeType>(1);
    // SQLite fails with the exception below after the next line is called.
    node.NodeLocation = session.Get<NodeLocation>(2);

    session.Save(node);
    session.Flush();

    Assert.AreNotEqual(-1, node.NodeID);
    Assert.IsNotNull(session.Get<Node>(node.NodeID));
}
The exception I'm getting (ONLY when working with SQLite) follows:
NHibernate.ADOException: cannot open connection --->
System.Data.SQLite.SQLiteException:
The database file is locked database is locked
at System.Data.SQLite.SQLite3.Step(SQLiteStatement stmt)
at System.Data.SQLite.SQLiteDataReader.NextResult()
at System.Data.SQLite.SQLiteDataReader..ctor(SQLiteCommand cmd, CommandBehavior behave)
at System.Data.SQLite.SQLiteCommand.ExecuteReader(CommandBehavior behavior)
at System.Data.SQLite.SQLiteCommand.ExecuteNonQuery()
at System.Data.SQLite.SQLiteTransaction..ctor(SQLiteConnection connection, Boolean deferredLock)
at System.Data.SQLite.SQLiteConnection.BeginDbTransaction(IsolationLevel isolationLevel)
at System.Data.SQLite.SQLiteConnection.BeginTransaction()
at System.Data.SQLite.SQLiteEnlistment..ctor(SQLiteConnection cnn, Transaction scope)
at System.Data.SQLite.SQLiteConnection.EnlistTransaction(Transaction transaction)
at System.Data.SQLite.SQLiteConnection.Open()
at NHibernate.Connection.DriverConnectionProvider.GetConnection()
at NHibernate.Impl.SessionFactoryImpl.OpenConnection()
--- End of inner exception stack trace ---
at NHibernate.Impl.SessionFactoryImpl.OpenConnection()
at NHibernate.AdoNet.ConnectionManager.GetConnection()
at NHibernate.AdoNet.AbstractBatcher.Prepare(IDbCommand cmd)
at NHibernate.AdoNet.AbstractBatcher.ExecuteReader(IDbCommand cmd)
at NHibernate.Loader.Loader.GetResultSet(IDbCommand st, Boolean autoDiscoverTypes, Boolean callable, RowSelection selection, ISessionImplementor session)
at NHibernate.Loader.Loader.DoQuery(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies)
at NHibernate.Loader.Loader.DoQueryAndInitializeNonLazyCollections(ISessionImplementor session, QueryParameters queryParameters, Boolean returnProxies)
at NHibernate.Loader.Loader.LoadEntity(ISessionImplementor session, Object id, IType identifierType, Object optionalObject, String optionalEntityName, Object optionalIdentifier, IEntityPersister persister)
at NHibernate.Loader.Entity.AbstractEntityLoader.Load(ISessionImplementor session, Object id, Object optionalObject, Object optionalId)
at NHibernate.Loader.Entity.AbstractEntityLoader.Load(Object id, Object optionalObject, ISessionImplementor session)
at NHibernate.Persister.Entity.AbstractEntityPersister.Load(Object id, Object optionalObject, LockMode lockMode, ISessionImplementor session)
at NHibernate.Event.Default.DefaultLoadEventListener.LoadFromDatasource(LoadEvent event, IEntityPersister persister, EntityKey keyToLoad, LoadType options)
at NHibernate.Event.Default.DefaultLoadEventListener.DoLoad(LoadEvent event, IEntityPersister persister, EntityKey keyToLoad, LoadType options)
at NHibernate.Event.Default.DefaultLoadEventListener.Load(LoadEvent event, IEntityPersister persister, EntityKey keyToLoad, LoadType options)
at NHibernate.Event.Default.DefaultLoadEventListener.ProxyOrLoad(LoadEvent event, IEntityPersister persister, EntityKey keyToLoad, LoadType options)
at NHibernate.Event.Default.DefaultLoadEventListener.OnLoad(LoadEvent event, LoadType loadType)
at NHibernate.Impl.SessionImpl.FireLoad(LoadEvent event, LoadType loadType)
at NHibernate.Impl.SessionImpl.Get(String entityName, Object id)
at NHibernate.Impl.SessionImpl.Get(Type entityClass, Object id)
at NHibernate.Impl.SessionImpl.Get[T](Object id)
D:\dev\598\Code\test\unit\DataAccess.Test\NHibernatePersistenceTests.cs
When SQLite is used and the [Rollback] attribute is NOT specified, the test also completes successfully.
Question:
Is this an issue with System.Data.SQLite's implementation of TransactionScope which MbUnit3 uses for [Rollback] or a limitation of the SQLite engine?
Is there some way to write this unit test, working against SQLite, that will rollback so as to avoid affecting the database each time the test is run?
This is not a real answer to your question, but probably a solution to the problem.
I use an in-memory SQLite database for my integration tests. I build up the schema and fill the database before each test. The schema creation and initial data filling happen really fast (less than 0.01 seconds per test) because it's an in-memory database.
Why do you use a physical database?
Edit: in response to the answers to the question above:
1.) Because I migrated my schema and data directly from SQL Server 2005 and I want it to persist in source control.
I recommend storing a file with the database schema and a file or script that creates the sample data in source control. You can generate the file using SQL Server Management Studio Express, generate it from your NHibernate mappings, or use a tool like SQL Compare, and you can probably find other solutions for this when you need them. Plain text files are easier to store in version control systems than complete binary database files.
2.) Does something about the in-memory SQLite engine differ such that it would resolve this difficulty?
It might solve your problems because you can recreate your database before each test. Your database under test will be in the state you expect it to be in before each test is executed. A benefit of that is that there is no need to roll back your transactions, but I have run similar tests with in-memory SQLite and they worked as expected.
Check that you're not missing connection.release_mode=on_close in your SQLite NHibernate configuration (see the reference docs).
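One place this can go, as a sketch, assuming the test builds its NHibernate Configuration in code the way GetDbConfig() above suggests (it could equally be set in hibernate.cfg.xml or the app config):
Configuration config = GetDbConfig();
// keep the ADO.NET connection for the whole session instead of releasing it after
// each transaction, which avoids SQLite re-opening and re-locking the database file
config.SetProperty(NHibernate.Cfg.Environment.ReleaseConnections, "on_close");
ISessionFactory sessionFactory = config.BuildSessionFactory();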
BTW: always dispose your ISession and ISessionFactory.
Ditch [Rollback] and use NDbUnit. I use this myself for this exact scenario and it has been working great.