We have a single source and two targets, Target A and Target B. We want to load both Target A and Target B, and we want the load to continue if either one fails: if Target A fails, Target B should still be loaded, and vice versa. We do not want an all-or-nothing load. Are there any options where we query the source only once? Two independent job flows are not an option because we want a single pull.
In your workflow you can write the output to a staging table in the first session, then query that staging table in two parallel sessions that write to the two separate targets. That way each target load is independent, so a failure in one does not affect the other, and the source is still queried only once.
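Expressed in plain SQL, the staging-table idea looks roughly like this (a minimal sketch; SRC, STG_SOURCE, TARGET_A and TARGET_B are hypothetical names, and in PowerCenter each statement would be its own session):
-- Session 1: single pull from the source into a staging table.
INSERT INTO STG_SOURCE (col1, col2, col3)
SELECT col1, col2, col3 FROM SRC;

-- Sessions 2 and 3 run in parallel; each reads only the staging table,
-- so a failure in one load does not affect the other.
INSERT INTO TARGET_A (col1, col2, col3)
SELECT col1, col2, col3 FROM STG_SOURCE;

INSERT INTO TARGET_B (col1, col2, col3)
SELECT col1, col2, col3 FROM STG_SOURCE;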
I want to load Target1 in the first run of a workflow, Target2 in the second run of the same workflow, Target1 again in the third run, and so on. Please let me know how I can achieve this.
Create a workflow with a persistent variable (e.g. $$runCnt) with a default value of 1. Use an Assignment task to flip the variable value: IIF($$runCnt=1, 2, 1). Link the Assignment task to two sessions, e.g. s_Target1 and s_Target2, and use the following conditions on the links:
AssignmentTask to s_Target1 link condition: $$runCnt=1
AssignmentTask to s_Target2 link condition: $$runCnt=2
The two sessions should reuse the same mapping; just override the 'Target Table Name' property in each one to point to the appropriate table.
I have 100 insert statements like these:
INSERT INTO table_A (col1,col2,col3) VALUES ('ab','jerry',123);
INSERT INTO table_A (col1,col2,col3) SELECT col1,col2,col3 FROM Test WHERE col1='ab';
INSERT INTO table_B (col1,col2,col3) SELECT loc1,loc2,loc3 FROM Test_v2 WHERE loc2='ab';
I'm running the queries every 2 months. The WHERE clauses don't change, and the recipient tables are also deleted every 2 months, so each run starts from a clean slate.
I've been looking around the internet, but it doesn't seem possible to create the equivalent of a SQL stored procedure in SAS and run it once in a while.
Or is it ...?
If it doesn't exist I'm willing to rewrite it, but I want to make sure it doesn't exist before doing so.
TIA.
This depends on your setup. If you have a SAS Server (including a metadata server), you can create stored processes, which is a direct analogue. See this paper or the documentation.
If your main concern is repeatability, you should just use a macro. If, on the other hand, you're interested in scheduling, you have two major options.
First, a .sas program can be scheduled in batch mode very easily; see Batch processing under Windows or look for a similar article for your operating system of choice. This entails simply setting up a .bat file that executes your .sas program, and then asking the Windows scheduler to run it however often you need.
Second, an Enterprise Guide process flow can be scheduled via a handy tool built into the program. Go to File -> Schedule, or right-click on a process flow and select Schedule. This will create a .vbs file and register it with the Windows scheduler.
We have a situation where instances of the same target, loaded from a single source qualifier, execute in a different order.
When we promote a mapping from DEV to TEST and then execute it in TEST, we run into problems.
For instance, we have a router with 3 groups for Insert, Update and Delete, followed by the appropriate update strategies to set the row type accordingly, followed by three target instances.
RTR ----> UPD_Insert ----> TGT_Insert
  \
   \__> UPD_Update ------> TGT_Update
    \
     \__> UPD_Delete -----> TGT_Delete
When we test this with data that performs an insert, then an update, then a delete, all on the same primary key, we get a different execution order in TEST than we do with the same data in our DEV environment.
Anyone have any thoughts - I would post an image but I don't have enough cred yet.
Cheers,
Gil.
You cannot control the load order as long as you have a single source. If you could separate the loads to use separate sources, the Target Load Order setting in the mapping could be used, or you could even create separate mappings for them.
As it is now, you should use a single target instance and use the Update Strategy transformation to determine the desired operation for each record passing through. It is then possible to use a sort to define the order in which the different operations are applied to the physical table.
You can use a Sorter transformation just before the Update Strategy. Based on the update strategy condition you can sort the incoming rows, so the data first goes through the insert, then the update, and finally the delete strategy.
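If it helps to see what the Sorter's sort key is doing, the same ordering can be written in plain SQL (a minimal sketch; STG_CHANGES and the OPERATION flag are hypothetical names, not part of the mapping above):
-- Order rows so that, per key, inserts come first, then updates, then deletes.
SELECT pk, col1, col2, OPERATION
FROM STG_CHANGES
ORDER BY pk,
         CASE OPERATION
              WHEN 'I' THEN 1   -- insert
              WHEN 'U' THEN 2   -- update
              WHEN 'D' THEN 3   -- delete
         END;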
A simple solution is to try renaming the target definitions in alphabetical order, e.g. INSERT_A, UPDATE_B, DELETE_C, then start loading.
This will load in A, B, C order. Try it and let me know.
I have a workflow which writes data from a table into a flat file. It works just fine, but I want to insert a blank line between each record. How can this be achieved? Any pointers?
Here, you can create 2 target instances: one with the proper data, and in the other instance pass a blank line. Set the Merge Type to "Concurrent Merge" in the session properties.
Multiple possibilities -
You can prepare the appropriate dataset in a relational table and afterwards dump the data from it into a flat file. When preparing that dataset, you can insert blank rows into the relational target (see the sketch after these options).
Send a blank line to a separate target file (based on some business condition, using a router or something similar); after that you can use the merge files option (in the session config) to get that data into a single file.
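For the first option, the relational staging table holds both the data rows and the blank rows, with a sort key that interleaves them before the dump to the flat file. A minimal SQL sketch, assuming hypothetical names SRC_TABLE, ROW_ID and STG_FLATFILE:
-- One data line plus one blank line per source record, tagged with a sort key.
INSERT INTO STG_FLATFILE (line_text, seq, sub_seq)
SELECT col1 || ',' || col2, ROW_ID, 1 FROM SRC_TABLE   -- data line
UNION ALL
SELECT '', ROW_ID, 2 FROM SRC_TABLE;                   -- blank line

-- Dump to the flat file ordered so each blank line follows its data line.
SELECT line_text
FROM STG_FLATFILE
ORDER BY seq, sub_seq;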
I want to copy the data into an Excel file while the mapping is running in Informatica.
I used Informatica last year; now I'm on SSIS. If I remember correctly, you can set up a separate connection for an Excel target destination. Thereafter you pretty much drag all the fields from your source to the target destination (in this case Excel).
As usual, develop a mapping with a source and a target, but make the target a flat file when creating it.
Then run the mapping and you will get the data in the flat file, which Excel can open.