Listener to execute at the end of the input file - jberet

I have a process that uses a chunk to read a file and insert the records into a table. I need to insert a row into a parent table when the input file is opened, and when the file is closed I need to update that parent-table row inserted at the start. Is there a listener or another approach that makes this possible?

The closest one is StepListener, where you can implement its beforeStep and afterStep methods to update the parent table. You can inject StepContext into the step listener class to access context data through step metrics or step transient data.
But beforeStep is called before the input file is opened. Not sure if this difference is significant for your case.
Otherwise, you can implement your own item reader class to achieve your requirement: the batch runtime calls the reader's open method before reading starts (insert the parent row there) and its close method once reading finishes (update the row there).

Related

Simple document switcher functionality?

I'm writing an application that will allow a user to drag/drop specific files onto the application window, parse those files, put the contents into a table (via a QStandardItemModel), and add each file's name (or alias) to a separate tree view (which acts as the document switcher).
I'll use NotePad++ as a simple example.
When I click any of the new files in the leftmost "Doc Switcher," it shows the contents in the right pane. Imagine that right pane is a table. And for instance, imagine that the list on the left is a list of .csv files that were imported into the application.
What I want to do is, upon clicking each item in the list, I want the corresponding parsed .csv file to show up in the table pane on the right.
My table is just a QTableView that displays the contents of the .csv files in a QStandardItemModel. Everything works when it comes to implementing the table and parsing the files.
I also set up a QTreeWidget as the "document switcher." Now, I need to link the document switcher selection to the table so that each file's respective contents will be shown in the table view.
I can have the application populate the tableView with the model contents when the QTreeView's top level item selection changes. That's no problem. The problem is with what I should be checking for when that selection changes and how.
I'm unsure of how to implement this. How do I store a bunch of QStandardItemModel objects and then link them to their names in the document switcher? Should I even be doing that? Do I have to create a new QStandardItemModel for each file that is imported? Should I create one QStandardItemModel, then somehow save it to be pulled back up later and re-use that same table model object for each file that is added? I'm just unsure how this is supposed to work and feel like I am missing a fundamental part of all of this.
I would suggest two approaches to solve your problem:
You can watch the document switcher's selection-changed signal and create a new model for the currently selected data. The table view on the right will show the data once you set the model on it. When a new file item is selected, delete the existing model and create a new one with the new data (sketched below).
The same as the first approach, but instead of recreating the model on every switch, you can use a single model and reset its data each time you switch files.
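A minimal sketch of the first approach, assuming each tree item stores its file's path under Qt::UserRole when it is dropped in (parseCsv() is just an illustrative helper, not part of any Qt API):

#include <QFile>
#include <QStandardItemModel>
#include <QTableView>
#include <QTreeWidget>

// Illustrative helper: parse a CSV file into a freshly allocated model.
QStandardItemModel *parseCsv(const QString &path, QObject *parent)
{
    auto *model = new QStandardItemModel(parent);
    QFile file(path);
    if (file.open(QIODevice::ReadOnly | QIODevice::Text)) {
        while (!file.atEnd()) {
            QList<QStandardItem *> row;
            for (const QString &cell : QString::fromUtf8(file.readLine()).trimmed().split(','))
                row.append(new QStandardItem(cell));
            model->appendRow(row);
        }
    }
    return model;
}

void wireSwitcher(QTreeWidget *docSwitcher, QTableView *tableView)
{
    QObject::connect(docSwitcher, &QTreeWidget::itemSelectionChanged,
                     tableView, [docSwitcher, tableView]() {
        QTreeWidgetItem *item = docSwitcher->currentItem();
        if (!item)
            return;
        const QString path = item->data(0, Qt::UserRole).toString();

        QAbstractItemModel *old = tableView->model();
        tableView->setModel(parseCsv(path, tableView)); // fresh model per selection
        delete old;                                     // approach 1: discard the previous model
    });
}

For the second approach you would instead keep a single QStandardItemModel alive for the lifetime of the window and call clear() on it before repopulating it from the newly selected file.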

Qt splitting data structure into groups

I have a problem I'm trying to solve, but I'm at a standstill because I'm still learning Qt, which in turn causes doubts as to what the 'Qt' way of solving the problem is while remaining efficient in terms of time complexity. I read a file line by line (file sizes ranging from 10 to 2,000,000 lines). At the moment my approach is to dump every line into a QVector.
QVector<QString> lines;
lines.append("id,name,type");
lines.append("1,James,A");
lines.append("2,Mark,B");
lines.append("3,Ryan,A");
Assuming the above structure, I would like to give the user three views that present the data based on the type field. The data is comma-delimited in its original form. My question is: what's the most elegant and efficient way to achieve this?
Note: for a visual aid, the end result roughly emulates Microsoft Access, so there will be a list of tables on the left side. In my case these table names will be the values of the grouping field (A, B). When I switch between those two list items, the central view (a table) will refill to contain that particular group's data.
Should I split the data into x number of structures? Or would that cause unnecessary overhead?
Would really appreciate any help.
In the end, you'll want some sort of data model implementing QAbstractItemModel that exposes the data, and one or more views connected to it to display it.
If the data doesn't have to be editable, you could implement a custom table model derived from QAbstractTableModel that maps the file into memory (using QFile::map) and incrementally parses it on the fly (implement canFetchMore and fetchMore).
If the data is to be editable, you might be best off throwing it all into a temporary SQLite table as you parse the file, attaching a QSqlTableModel to it, and connecting some views to that.
When the user wants to save the changes, you simply iterate over the model and dump it out to a text file.
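For illustration, a minimal sketch of the read-only idea, with canFetchMore/fetchMore parsing the CSV incrementally; to keep it short it reads lines in batches with QFile::readLine rather than memory-mapping via QFile::map, and it ignores CSV quoting and error handling:

#include <QAbstractTableModel>
#include <QFile>
#include <QStringList>
#include <QVector>

// Read-only CSV model: rows are parsed lazily as the attached view
// asks for them through canFetchMore()/fetchMore().
class CsvModel : public QAbstractTableModel
{
public:
    explicit CsvModel(const QString &path, QObject *parent = nullptr)
        : QAbstractTableModel(parent), m_file(path)
    {
        m_file.open(QIODevice::ReadOnly | QIODevice::Text);
        // First line is the header, e.g. "id,name,type".
        m_header = QString::fromUtf8(m_file.readLine()).trimmed().split(',');
    }

    int rowCount(const QModelIndex & = QModelIndex()) const override { return m_rows.size(); }
    int columnCount(const QModelIndex & = QModelIndex()) const override { return m_header.size(); }

    QVariant data(const QModelIndex &index, int role) const override
    {
        if (role != Qt::DisplayRole || !index.isValid())
            return QVariant();
        return m_rows.at(index.row()).value(index.column());
    }

    QVariant headerData(int section, Qt::Orientation orientation, int role) const override
    {
        if (role == Qt::DisplayRole && orientation == Qt::Horizontal)
            return m_header.value(section);
        return QVariant();
    }

    bool canFetchMore(const QModelIndex &) const override { return !m_file.atEnd(); }

    void fetchMore(const QModelIndex &) override
    {
        QVector<QStringList> batch;
        for (int i = 0; i < 256 && !m_file.atEnd(); ++i)  // parse a small batch per call
            batch.append(QString::fromUtf8(m_file.readLine()).trimmed().split(','));

        beginInsertRows(QModelIndex(), m_rows.size(), m_rows.size() + batch.size() - 1);
        m_rows += batch;
        endInsertRows();
    }

private:
    QFile m_file;
    QStringList m_header;
    QVector<QStringList> m_rows;
};

To get the per-type views (A, B) asked about in the question, one option is to put one QSortFilterProxyModel per group in front of a model like this, with setFilterKeyColumn pointing at the type column, and give each list entry's table view its own proxy.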

Trigger Informatica workflow based on the status column in Oracle table

I want to implement the scenario below without using a PL/SQL procedure or trigger.
I have a table called emp_details with columns (empno, ename, salary, emp_status, flag, date1).
If someone updates the columns to emp_status='abc' and flag='y', Informatica WF 1 would be running continuously, checking for the emp_status value 'abc'.
If it finds any records, it will query all of them and invoke WF 2.
WF 1 will pass the values ename, salary, and date1 to WF 2 (WF 2 will insert the records into the table emp_details2).
How can I do this using an Informatica approach instead of PL/SQL or a trigger?
If you want to achieve this in real time, write the output of WF1 to a message queue, and in the second workflow, WF2, subscribe to the message queue produced by WF1.
If you have a batch process in place, produce an output file from WF1 and use this output file in WF2. You can easily set up this dependency using job schedulers.
I don't understand why you need two workflows in the first place. Why not accomplish the emp_details2 table updates with the very same workflow that is looking for changes?
Anyway, this can be done using an indicator file:
WF1, running continuously, should create a file if any changes have been found.
WF2 should be running continuously with an EventWait set to wait for the indicator file specified above. Once the file is found, it should use the Assignment task to rename/delete the file, then fetch the desired data from the source and populate the emp_details2 table.
If you need it this way, you can pass the data through the indicator file.
You can do this in a single workflow: create a dummy session which checks for the flag in the table, then divide the flow into two branches based on the link conditions below.
Flow one: link condition Session.Status=SUCCEEDED and SOURCE_SUCCESS_ROWS (count) >= 1; then run your actual session, which will load the data.
Flow two: link condition Session.Status=SUCCEEDED and SOURCE_SUCCESS_ROWS = 0; connect this to a Control task and mark the workflow as complete.
Make sure you schedule the workflow at the Informatica level to run continuously.
Cheers

Informatica target file

I have a workflow which writes data from a table into a flat file. It works just fine, but I want to insert a blank line between records. How can this be achieved? Any pointers?
You can create two target instances: one with the proper data, and in the other instance pass a blank line. Set the Merge Type to "Concurrent Merge" in the session properties.
Multiple possibilities -
You can prepare an appropriate data set in a relational table and afterwards dump the data from it into a flat file. While preparing that data set, you can insert blank rows into the relational target.
Alternatively, send a blank line to a separate target file (based on some business condition, using a Router or something similar); after that you can use the merge-files option (in the session config) to get that data into a single file.

read csv using jmeter (starting from x)

I'm writing a JMeter script, and I have a huge CSV file with a bunch of data which I use in my requests. Is it possible to start not from the first entry but from the 5th or nth entry?
Looking at the CSVDataSet, it doesn't seem to directly support skipping to a given row. However, you can emulate the same effect by first executing N loops in which you just read from the data set and do nothing with the data, followed by a loop containing your actual tests. It's been a while since I've used JMeter, but for this approach to work, you must share the same CSVDataSet between both loops.
If that's not possible, then there is an alternative. In your main test loop, use a Counter and an If Controller. The Counter counts up from 1. The If Controller contains your tests, with the condition ${Counter}>N, where N is the number of rows to skip. ("Counter" in the expression is whatever you set the "reference name" property to in the counter.)
mdma's second idea is a clean way to do it, but here are two other options that are simple, though annoying to do:
Easiest:
Create separate CSV files, one for each place you want to start in the file, deleting the rows you don't need. I would create a separate CSV data config element for each CSV file, and then just disable the ones you don't want to run.
Less Easy:
Create a new column in your CSV file, called "ignore". In the rows you want to skip, enter the value "True". In your test plan, create an If Controller that is the parent of your requests. Make the If condition: "${ignore}"!="True" (include the quotes, and note that "True" is case sensitive). This will skip the requests if the "ignore" column has a value of "True".
Both methods require modifying the CSV file, but method two has other applications (like excluding a header row) and can be fast if you're using OpenOffice, Excel, etc.