When to use MLOAD, FLOAD and TPT connections in Informatica?

I am just collecting thoughts from you experts and trying to learn from you. Can you share your thoughts on when to use MLOAD, FLOAD, and TPT connections in Informatica?
Thanks for your valuable time.

I have found some points.
MLOAD (MultiLoad):
- Each MultiLoad import task can perform multiple data insert, update, and delete functions.
- Each MultiLoad delete task can remove large numbers of rows from a single table.
- Each MultiLoad import task can have up to 100 DML steps.
- Can load data into multiple tables.
- Works in batch/interactive mode.
FLOAD (FastLoad):
- FastLoad works only on empty tables (best for a truncate-and-load strategy; see the sketch below).
- Works in batch/interactive mode.
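A minimal SQL sketch of the practical difference, with hypothetical table names (mydb.stg_orders as a truncate-and-load staging table, mydb.orders as an already-populated target):

-- FLOAD path: the target must be empty, so a truncate-and-load flow clears
-- the staging table before every run (Teradata's DELETE ... ALL plays the
-- role of TRUNCATE here).
DELETE FROM mydb.stg_orders ALL;

-- MLOAD path: no empty-table restriction, so mixed DML against an
-- already-populated table is fine, e.g. updates and deletes in one job.
UPDATE mydb.orders SET amount = 100.00 WHERE order_id = 42;
DELETE FROM mydb.orders WHERE order_status = 'CANCELLED';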
Please add/share your thoughts.

Related

Informatica PowerExchange CDC Data results in target DB way too slow

First of all, I'm very new to Informatica PowerCenter and PowerExchange.
We are using Informatica PowerCenter and PowerExchange to receive CDC data from our source DB2 into a PostgreSQL DB. For this we have one workflow where 7 tables are mapped, and we get the results in our PostgreSQL. It works fine so far, but performance is lacking. The size of the data is not the problem; it's more the delay until I see results in the target DB.
When I insert or delete some data on the DB2 side (just 10 rows or so in one DB), I mostly see the results in our PostgreSQL after about 10-30 seconds (very rarely in less than 5 seconds).
My goal is to reduce this delay. Is this possible? What would I need for that?
I played a little with the commit interval and the DTM buffer size, but nothing helped much.
Also, I have the feeling that running the workflow continuously is even slower than executing the workflow manually after I have made the inserts/deletes.
Thanks in advance

Why is loading dashDB analytics by trickle feed a bad idea?

I have a use case where I need to continuously trickle feed data into dashDB; however, I have been informed that this is not optimal for dashDB.
Why is this not optimal? Is there a workaround?
Columnar warehouses are great for reads, but if you insert a single row into an N column table then the system has to cut the row into pieces and do N separate writes to disk. This makes small inserts relatively inefficient and things can slow down as a result.
You may want to do an initial batch load of data. Currently the compression dictionary is built only for bulk loads, so if you start with a new table and populate it only using inserts then the data doesn't get compressed at all.
Try to structure the loading into microbatches with a 2-5 minute load cycle.
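As a rough illustration of microbatching versus per-row trickle inserts (the sensor_readings table and its columns are made up for the example):

-- Trickle feed: one row per INSERT, so every statement pays the full
-- columnar write cost across all columns.
INSERT INTO sensor_readings (device_id, ts, value) VALUES (1, '2016-01-01-00.00.00', 3.2);

-- Microbatch: buffer rows for a few minutes, then send them as one
-- multi-row INSERT (or a bulk load), amortizing that cost.
INSERT INTO sensor_readings (device_id, ts, value) VALUES
  (1, '2016-01-01-00.00.00', 3.2),
  (2, '2016-01-01-00.00.01', 4.7),
  (3, '2016-01-01-00.00.02', 5.1);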
What is the use case here? Check whether dashDB Transactional can meet your need. dashDB Transactional is tuned for OLTP and point-of-sale transactions, which is what you are trying to feed.

Finding and debugging bad records using Hive

Is there any way to pinpoint the bad record when we are loading data using Hive, or while processing the data?
The scenario goes like this:
Suppose I have a file with 1 million records in it, delimited by the '|' symbol, that needs to be loaded as a Hive table.
Suppose that after processing half a million records I encounter a problem. Is there any way to debug it, or to precisely pinpoint the record or records having the issue?
If my question is not clear, please let me know.
I know MapReduce has a way of skipping bad records (a kind of percentage threshold). I would like to understand this from the perspective of Hive.
Thanks in advance.
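One hedged approach (not from the original question) is to land each raw line in a single-column staging table first and flag lines whose field count is off; the table name, the input path, and the expected count of 12 fields are all assumptions for this sketch:

-- Each raw line lands in one STRING column (the default field delimiter is
-- \001, so the '|' characters are left untouched).
CREATE TABLE raw_lines (line STRING);

LOAD DATA INPATH '/data/input.txt' INTO TABLE raw_lines;

-- Lines whose '|' field count differs from the expected 12 are the suspects.
SELECT line
FROM raw_lines
WHERE size(split(line, '\\|')) <> 12;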

How to optimize writing this data to a postgres database

I'm parsing poker hand histories, and storing the data in a postgres database. Here's a quick view of that:
I'm getting relatively bad performance, and parsing the files will take several hours. I can see that the database part takes 97% of the total program time, so even a little optimization there would make this a lot quicker.
The way I have it set up now is as follows:
1. Read the next file into a string.
2. Parse one game and store it into a GameData object.
3. For every player, check if we have his name in the std::map. If so, store the playerids in an array and go to 5.
4. Insert the player, add it to the std::map, store the playerids in an array.
5. Using the playerids array, insert the moves for this betting round, store the moveids in an array.
6. Using the moveids array, insert a movesequence, store the movesequenceids in an array.
7. If this isn't the last round played, go to 5.
8. Using the movesequenceids array, insert a game.
9. If this was not the final game, go to 2.
10. If this was not the last file, go to 1.
Since I'm sending queries for every move, for every movesequence, for every game, I'm obviously doing too many queries. How should I bundle them for best performance? I don't mind rewriting a bit of code, so don't hold back. :)
Thanks in advance.
CX
It's very hard to answer this without any queries, schema, or a Pg version.
In general, though, the answer to these problems is to batch the work into bigger coarser batches to avoid repeating lots of work, and, most importantly, by doing it all in one transaction.
You haven't said anything about transactions, so I'm wondering if you're doing all this in autocommit mode. Bad plan. Try wrapping the whole process in a BEGIN and COMMIT. If it's a seriously long-running process, then COMMIT every few minutes / tens of games / whatever, write a checkpoint file or DB entry your program can use to resume the import from that point, and open a new transaction to carry on.
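A bare-bones sketch of that wrapping (the statements in the middle are whatever INSERTs your parser already issues):

BEGIN;
-- ... all the INSERTs for one file, or for one batch of games ...
COMMIT;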
It'll help to use multi-valued inserts where you're inserting multiple rows to the same table. Eg:
INSERT INTO some_table(col1, col2, col3) VALUES
('a','b','c'),
('1','2','3'),
('bork','spam','eggs');
You can improve commit rates with synchronous_commit=off and a commit_delay, but that's not very useful if you're batching work into bigger transactions.
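For example, a per-session setting like the following trades a small window of possible transaction loss after a crash (not data corruption) for faster commits:

-- Commits return before the WAL is flushed to disk.
SET synchronous_commit = off;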
One very good option is to insert your new data into UNLOGGED tables (PostgreSQL 9.1 or newer) or TEMPORARY tables (all versions, but lost when the session disconnects), then at the end of the process copy all the new rows into the main tables and drop the import tables with commands like:
INSERT INTO the_table
SELECT * FROM the_table_import;
When doing this, CREATE TABLE ... LIKE is useful.
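Putting those pieces together, a minimal sketch with hypothetical names (games as the permanent table, games_import as the scratch copy):

-- Scratch table with the same columns as the permanent one
-- (UNLOGGED needs PostgreSQL 9.1+; use TEMPORARY on older versions).
CREATE UNLOGGED TABLE games_import (LIKE games INCLUDING DEFAULTS);

-- ... bulk-insert all the parsed rows into games_import here ...

-- Move the rows into the permanent table and clean up.
INSERT INTO games
SELECT * FROM games_import;

DROP TABLE games_import;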
Another option - really a more extreme version of the above - is to write your results to CSV flat files as you read and convert them, then COPY them into the database. Since you're working in C++ I'm assuming you're using libpq - in which case you're hopefully also using libpqtypes. libpq offers access to the COPY API for bulk loading, so your app wouldn't need to call out to psql to load the CSV data once it has produced it.
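The SQL side of that is just COPY ... FROM STDIN, which is what the libpq COPY API drives; the table and column names below are made up:

-- The client streams the CSV rows it generated over the same connection.
COPY moves (game_id, player_id, action, amount)
FROM STDIN WITH (FORMAT csv);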

Which is the fastest way to retrieve all items in SQLite?

I am programming on Windows, and I store my info in SQLite.
However, I find that getting all the items is a bit slow.
I am using the following way:
select * from XXX;
Retrieving all items from a 1.7 MB SQLite DB takes about 200-400 ms.
It is too slow. Can anyone help?
Many Thanks!
Thanks for your answers!
I have to do a complex operation on the data, so every time I open the app I need to read all the information from the DB.
I would try the following:
Vacuum your database by running the "vacuum" command
SQLite starts with a default cache size of 2000 pages (run the command "pragma cache_size" to be sure). Each page is 512 bytes, so it looks like you have about 1 MByte of cache, which is not quite enough to contain your database. Increase your cache size by running "pragma default_cache_size=4000". That should get you 2 MBytes of cache, which is enough to hold your entire database. You can run these pragma commands from the sqlite3 command line, or through your program as if they were another query (see the sketch after this list).
Add an index to your table on the field you are ordering with.
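A short sketch of those three steps (the index name and column are only placeholders):

-- Reclaim free space and defragment the database file.
VACUUM;

-- Check the current cache size (in pages), then raise the default.
PRAGMA cache_size;
PRAGMA default_cache_size = 4000;

-- Index whatever column the query orders by.
CREATE INDEX IF NOT EXISTS idx_items_sort ON items(sort_column);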
You could possibly speed it up slightly by selecting only those columns you want, but otherwise nothing will beat an unordered select with no where clause for getting all the data.
Other than that a faster disk/cpu is your only option.
What type of hardware is this on?