What determines the lifetime of a Teiid session?

When accessing Teiid via JDBC, what is the lifetime of a Teiid session? Is it the same as the lifetime of a connection?
I would like to create a temporary table with one connection and read it with a different one. Is this possible? If so, when would it be deleted?

Yes, the life of the connection equals the life of the session. A session-scoped temporary table is not for sharing across connections; it is dropped when the session that created it ends. See [1] for temporary table support.
[1] http://teiid.github.io/teiid-documents/12.3.x/content/reference/Temp_Tables.html
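The session-scoped behavior can be demonstrated in a few lines. This is an illustration only: SQLite stands in for Teiid here, but the semantics are the same — a temp table belongs to the connection (session) that created it and disappears when that session ends.

```python
import sqlite3

# Two connections to the same shared in-memory database = two sessions.
uri = "file:demo_db?mode=memory&cache=shared"
conn_a = sqlite3.connect(uri, uri=True)  # session A
conn_b = sqlite3.connect(uri, uri=True)  # session B, same database

conn_a.execute("CREATE TEMP TABLE scratch (id INTEGER)")
conn_a.execute("INSERT INTO scratch VALUES (1)")

# The creating session sees the table...
rows_in_a = conn_a.execute("SELECT id FROM scratch").fetchall()  # [(1,)]

# ...but another session does not.
try:
    conn_b.execute("SELECT id FROM scratch")
    visible_in_b = True
except sqlite3.OperationalError:  # "no such table: scratch"
    visible_in_b = False

conn_a.close()  # the temp table is dropped here, along with the session
conn_b.close()
```

To read the data from a second connection you would need a regular (non-temporary) table instead.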

Restoring a DynamoDB table using PITR and updating Cloudformation/CDK/other services references

I have an environment where my DynamoDB table is central to a few services (a couple of Lambdas, Kinesis streams, and Firehoses). All of it is managed by AWS CloudFormation via the TypeScript CDK.
This table has PITR enabled and, as far as I know, it's only possible to do a PITR by dumping the recovered data into a new table. Here is where the pain begins:
AWS's documentation after the creation of the new table is NONEXISTENT!
How can I update the references for the new table on all other services?
Should I just 'erase' my old table and import the recovered ones?
Wouldn't this mean that I would need to take my service down to recover it?
What is the "standard" or "best practice" here?
Thanks a lot community! :D
Yes, you must restore to a new table. There are some ways to overcome the issues you describe. First, when you restore to a new table, you will need to import that resource into your CDK stack.
Parameter Store
Use Parameter Store to hold the latest name of your table; all of your downstream applications resolve the table name by querying the parameter store.
Lambda Env Variable
Set your table name dynamically as an environment variable for your Lambda. This reduces latency compared to the Parameter Store approach, but it only applies to Lambda or other services that let you set environment variables.
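A minimal sketch combining the two strategies, with the environment variable as the fast path and Parameter Store as the fallback. The names here (`TABLE_NAME`, `/myapp/orders-table-name`) are made up for illustration, not AWS conventions.

```python
import os

def resolve_table_name(env_var="TABLE_NAME",
                       ssm_param="/myapp/orders-table-name"):
    """Return the currently active DynamoDB table name."""
    # Fast path: the stack injected the table name at deploy time.
    name = os.environ.get(env_var)
    if name:
        return name
    # Fallback: query Parameter Store (one extra network call per cold start).
    import boto3  # deferred so the env-var path needs no AWS SDK
    ssm = boto3.client("ssm")
    return ssm.get_parameter(Name=ssm_param)["Parameter"]["Value"]
```

After a PITR restore, you update the parameter (or redeploy with the new env value) and consumers pick up the restored table without code changes.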
Inline Answers for Completeness
AWS's documentation after the creation of the new table is NONEXISTENT!
Please share feedback directly on the docs page if you believe relevant information is missing.
How can I update the references for the new table on all other services?
The two options mentioned above are the most common approaches.
Should I just 'erase' my old table and import the recovered ones?
This would cause application downtime; if you can afford that, then it would be an easy approach. If not, follow the suggestions above.
Wouldn't this mean that I would need to take my service down to recover it? What is the "standard" or "best practice" here?
Yes, as mentioned above.

How to capture session level details in a table

I have a requirement to capture session-level details like session start time, end time, source success rows, failed rows, etc. in an audit table. As all of those details are available in prebuilt session variables, I need to store them in a table. What I am doing now is taking an Assignment task in a workflow, assigning all of these prebuilt session variables' values for a particular session to workflow variables, and passing those workflow variables to mapping variables in another non-reusable session (the mapping that loads the table) using the pre-session variable assignment option. It works fine for a workflow with one session. But implementing this for a workflow with many sessions would be tedious, as I would have to create an Assignment task for each session plus a non-reusable session that calls a mapping to load the audit table.
So I am wondering: is there an alternative way to get this done? I am thinking of capturing the audit details of all sessions in a file and passing that file as input to a mapping that loads the data into the table all at once. Is this possible? Any solution?
Check this out: ETL Operational framework
It covers an end-to-end solution that should fit your needs and is quite easy to extend if you have multiple sessions: all you'd need to do is apply similar post-session commands before running the final session that loads the stats into the database.
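The file-based idea in the question can be sketched like this: each session's post-session command appends one delimited line of stats to a shared feed file, and one final load step parses the whole feed. Here a plain Python function stands in for the final mapping, and the column layout is an assumption for illustration, not an Informatica convention.

```python
import csv
import io

# Assumed layout of each appended line:
# session_name,start_time,end_time,src_success_rows,failed_rows
def parse_audit_feed(feed_text):
    """Parse the accumulated per-session stats into rows for the audit table."""
    rows = []
    for rec in csv.reader(io.StringIO(feed_text)):
        if not rec:
            continue
        rows.append({
            "session": rec[0],
            "start_time": rec[1],
            "end_time": rec[2],
            "src_success_rows": int(rec[3]),
            "failed_rows": int(rec[4]),
        })
    return rows

feed = (
    "s_load_orders,2024-01-01 01:00,2024-01-01 01:05,1000,2\n"
    "s_load_items,2024-01-01 01:05,2024-01-01 01:09,500,0\n"
)
audit_rows = parse_audit_feed(feed)
```

The advantage is that adding a session to the workflow only means adding one post-session command, not a new Assignment task and loader session.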

How to modify a scope in Sync Framework?

I am new to using sync framework and need help in fixing an issue.
The system we built is a Windows-based application. Each user has their own database on their local machine. At the end of the day, they sync their database to the remote DB server when they are within the network.
I added two new columns to an existing table. The scope definition seems to be updated in my local database. But when I try to sync with my remote DB server, it says it could not find the _bulk-insert stored procedure and errors out.
When I checked my remote DB server, I could see the new columns in the table, but I don't see any of the stored procedures, and the scope_config table does not have the new columns in it.
Does the remote server need to have the stored procedures, or will updating the scope_config table do?
Have you provisioned your remote DB server? If you're not finding the Sync Fx-related objects, then it's not provisioned.
Likewise, Sync Fx does not support schema synchronization, and there's nothing in the API that lets you alter the scope definition either.
Either you drop and re-create the scope and re-sync, or you hack your way into the Sync Fx scope definition metadata.

Merging Sync Framework 2.1 Scopes

I've used SqlSyncScopeProvisioning to create ScopeA, ScopeB, and ScopeC. Each corresponds to a single table: TableA, TableB, TableC. There are no filters, and all the data is syncing between the servers correctly.
I'd now like to clean up the scopes and have a single scope to cover all of the tables, instead of 3 scopes. Let's call it ScopeAll.
If I provision ScopeAll containing TableA, TableB, and TableC, will it copy the existing knowledge from ScopeA, ScopeB, and ScopeC so that the initial data does not all need to be copied over and synced again?
After that I would then deprovision ScopeA ScopeB and ScopeC.
No, it will not copy the sync knowledge; a scope doesn't care about any other scope in the database. ScopeAll will have to do a full initial sync before you deprovision the old scopes.

Making database schema changes using Microsoft Sync framework without losing any tracking table data

I am using Microsoft Sync Service Framework 4.0 for syncing SQL Server database tables with a SQLite database on the iPad side.
Before making any database schema changes in the SQL Server database, we have to deprovision the tables. After making the schema changes, we reprovision them.
In this process, the tracking tables (i.e., the syncing information) get deleted.
I want the tracking table information to be restored after reprovisioning.
How can this be done? Is it possible to make DB changes without deprovisioning?
For example, the application is at version 2.0 and syncing works fine. In the next version, 3.0, I want to make some DB changes, so in the deprovision/reprovision process the tracking info gets deleted and all the tracking information from the previous version is lost. I do not want to lose the tracking info. How can I restore this tracking information from the previous version?
I believe we will have to write custom code or a trigger to store the tracking information before deprovisioning. Could anyone suggest a suitable method or provide some useful links regarding this issue?
The provisioning process should automatically populate the tracking tables for you; you don't have to copy and reload them yourself.
Now, if you think the tracking table is where the framework stores what was previously synced, the answer is no.
The tracking table simply stores what was inserted/updated/deleted; it's used for change enumeration. The information on what was previously synced is stored in the scope_info table.
When you deprovision, you wipe out this sync metadata. When you then sync, it's as if the two replicas had never synced before, so you will encounter conflicts as the framework tries to apply rows that already exist on the destination.
You can find information here on how to "hack" the Sync Fx-created objects to effect some types of schema changes:
Modifying Sync Framework Scope Definition – Part 1 – Introduction
Modifying Sync Framework Scope Definition – Part 2 – Workarounds
Modifying Sync Framework Scope Definition – Part 3 – Workarounds – Adding/Removing Columns
Modifying Sync Framework Scope Definition – Part 4 – Workarounds – Adding a Table to an existing scope
Let's say I have one table, "User", that I want to sync.
A tracking table "User_tracking" will be created, and some sync information will be present in it after syncing.
When I make any DB changes, this tracking table "User_tracking" will be deleted and the tracking info will be lost during the deprovision/reprovision process.
My workaround:
Before deprovisioning, I will write a script to copy all the "User_tracking" data into another temporary table, "User_tracking_1", so all the existing tracking info is stored in "User_tracking_1". When I reprovision the table, a new tracking table "User_tracking" will be created.
After reprovisioning, I will copy the data from "User_tracking_1" back into "User_tracking" and then delete the contents of "User_tracking_1".
The tracking info will be restored.
Is this the right approach?
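The workaround described above can be sketched end to end. SQLite stands in for SQL Server here to keep the example self-contained, the tracking-table columns are invented, and the deprovision/reprovision step is simulated by dropping and re-creating an empty tracking table.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE User_tracking (id INTEGER, change_type TEXT)")
conn.executemany("INSERT INTO User_tracking VALUES (?, ?)",
                 [(1, "U"), (2, "I")])

# 1. Before deprovisioning: copy the tracking data to a holding table.
conn.execute("CREATE TABLE User_tracking_1 AS SELECT * FROM User_tracking")

# 2. Simulate deprovision + reprovision: the tracking table comes back empty.
conn.execute("DROP TABLE User_tracking")
conn.execute("CREATE TABLE User_tracking (id INTEGER, change_type TEXT)")

# 3. After reprovisioning: restore the saved rows, then clear the holding table.
conn.execute("INSERT INTO User_tracking SELECT * FROM User_tracking_1")
conn.execute("DELETE FROM User_tracking_1")

restored = conn.execute(
    "SELECT id, change_type FROM User_tracking ORDER BY id").fetchall()
```

Note the caveat from the answer above: this preserves the change-enumeration rows but not the sync knowledge in scope_info, so conflicts may still occur on the next sync.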