How to multithread file processing in C++?

I'm working on a problem where I need to process 24 files (3 GB each) and write the output into 24 output files. Each file takes around 1 hour to process. Is it possible to write the data into multiple files concurrently using multithreading with the code below?
int _tmain(int argc, _TCHAR* argv[])
{
    std::string path;
    cout << "Enter the folder of the logs: " << endl;
    cin >> path;

    WIN32_FIND_DATA FileInformation;                       // File information
    memset(&FileInformation, 0, sizeof(WIN32_FIND_DATA));

    std::string strExt = "\\*.txt";
    std::string strEscape = "\\";
    std::string strPattern = path + strExt;

    HANDLE hFile = ::FindFirstFile(strPattern.c_str(), &FileInformation);
    while (hFile != INVALID_HANDLE_VALUE)
    {
        int offset;
        std::ifstream Myfile;
        std::string strFileName = FileInformation.cFileName;
        std::string fullPath = path + strEscape + strFileName;
        std::string outputFile = path + strEscape + strFileName.substr(0, strFileName.length() - 3) + "processed" + ".txt";
        std::ofstream ofs(outputFile, std::ofstream::out);

        Myfile.open(fullPath);
        std::string line;
        if (Myfile.is_open())
        {
            while (!Myfile.eof())
            {
                // ------- Processing -------
            }
            Myfile.close();
        }
        else
            cout << "Cannot open file." << endl;

        if (FindNextFile(hFile, &FileInformation) == FALSE)
            break;
    }

    // Close handle
    ::FindClose(hFile);
    return 0;
}

Looking at your code, I assume you produce one output file from one input file. In that case you do not need to write multithreaded code to check whether processing multiple files at once speeds things up. Just modify your program to accept a file name as a parameter and run several instances of it in parallel. But unless you are reading from and writing to an SSD, such parallel processing will most probably slow things down, because the hard drive has to switch between read/write positions for multiple files, and head positioning is slow.
It is not clear what the processing does, but if it keeps the CPU at 100% you will most probably get a significant speed-up by processing a single file with multiple threads: one thread reading, a thread pool processing, and one thread writing. The tricky part is synchronizing the data so that it does not appear in the output file in the wrong order.
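For line-oriented processing, a minimal sketch of that layout could look like the following (process_line() is a placeholder for the real work; to keep the sketch short, the main thread acts as the reader and, after the pool finishes, as the writer, and all results are buffered in memory and re-ordered by sequence number, which is only practical when the processed output fits in RAM):
#include <algorithm>
#include <condition_variable>
#include <fstream>
#include <map>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct Task { std::size_t seq; std::string line; };

// Placeholder for the real per-line work.
std::string process_line(const std::string& line) { return line; }

int main() {
    std::ifstream in("input.txt");
    std::ofstream out("output.txt");

    std::queue<Task> tasks;
    std::map<std::size_t, std::string> results;   // seq -> processed line, kept ordered
    std::mutex m;
    std::condition_variable cv;
    bool done_reading = false;

    // Worker pool: pop a task, process it, store the result under its sequence number.
    auto worker = [&] {
        for (;;) {
            Task t;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [&] { return done_reading || !tasks.empty(); });
                if (tasks.empty()) return;         // reader finished and queue drained
                t = std::move(tasks.front());
                tasks.pop();
            }
            std::string r = process_line(t.line);
            std::lock_guard<std::mutex> lk(m);
            results[t.seq] = std::move(r);
        }
    };
    const unsigned n = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned i = 0; i < n; ++i)
        pool.emplace_back(worker);

    // Reader (here: the main thread) feeds the queue with numbered lines.
    std::string line;
    for (std::size_t seq = 0; std::getline(in, line); ++seq) {
        std::lock_guard<std::mutex> lk(m);
        tasks.push({seq, line});
        cv.notify_one();
    }
    { std::lock_guard<std::mutex> lk(m); done_reading = true; }
    cv.notify_all();
    for (std::thread& t : pool) t.join();

    // Writer: emit the results in the original order.
    for (const auto& kv : results)
        out << kv.second << '\n';
}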

Don't write multithreaded code here, write multiprocess code. That is, have your program process one file (which is passed as an argument), and call it multiple times in parallel from a script.
Don't run your program 24 times concurrently (unless you have 24 cores and 72 GB of memory available). Try running 2, 4 or 6 instances concurrently and see what works best. My guess is it will be the number of cores, or maybe the number of cores * 2 - 1 (hyperthreading does help). Try it out.
Also, if your program reads the file at the start, then performs the calculations, then writes the result, measure how long it takes to read the 3 GB of data. If it is, for example, 30 seconds and you run 4 processes concurrently, have your run script start the first instance, wait 45 seconds, then start the second one, and so on until the fourth. Start the fifth instance once one of the first four has finished, and every time another instance finishes, start the next one until all 24 have run.
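If you prefer to keep everything in C++ rather than a shell script, a sketch of such a launcher could look like this (process_one and the file list are placeholders for your actual program and inputs; it caps the number of concurrent instances with a C++20 counting_semaphore and leaves out the staggered-start refinement described above):
#include <cstdlib>
#include <semaphore>
#include <string>
#include <thread>
#include <vector>

int main() {
    const std::vector<std::string> files = { /* the 24 input paths */ };
    std::counting_semaphore<24> slots(4);       // at most 4 instances at once
    std::vector<std::thread> launchers;

    for (const std::string& f : files) {
        slots.acquire();                        // wait for a free slot
        launchers.emplace_back([f, &slots] {
            std::system(("process_one \"" + f + "\"").c_str());
            slots.release();                    // free the slot when this instance is done
        });
    }
    for (std::thread& t : launchers) t.join();
}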


Speed-up a single task using multi-threading in c++

I'm sorry if this is a repeat question, but I already tried to search for an answer and came up empty-handed.
I have code that transfers data (189156 numbers) from a text file (input.txt) to another file (test.txt). After executing the program, the transfer takes about 23 seconds.
I wanted to speed up the process, so I divided it among multiple threads (4 threads), each thread processing 1/4 of the data. After executing the program, there was no difference in the time it took to transfer all the data.
Here is my code:
// This program reads data from a file into an array.
#include <iostream>
#include <fstream> // To use ifstream
#include <vector>
#include <thread>

using namespace std;

void test(int start, int end)
{
    std::vector<int> numbers;
    ifstream inputFile("input.txt");   // Input file stream object

    // Check if exists and then open the file.
    if (inputFile.good()) {
        // Push items into a vector
        int current_number = 0;
        while (inputFile >> current_number) {
            numbers.push_back(current_number);
        }

        // Close the file.
        inputFile.close();

        // Display the numbers read:
        cout << "The numbers are: ";
        for (int count = start; count < end; count++) {
            cout << numbers[count] << " ";

            std::ofstream ofs;
            ofs.open("test.txt", std::ofstream::out | std::ofstream::app);
            ofs << numbers[count] << endl;
            ofs.close();
        }
        cout << endl;
    }
    else {
        cout << "Error!";
        _exit(0);
    }
}

int main() {
    std::thread worker1(test, 0, 50000);
    std::thread worker2(test, 50000, 100000);
    std::thread worker3(test, 100000, 150000);
    std::thread worker4(test, 150000, 189156);

    worker1.join();
    worker2.join();
    worker3.join();
    worker4.join();
    return 0;
}
I am a beginner and I do not know whether it is correct to use multiple threads in a case like this. If it is, where is my mistake? If not, what is the correct way to speed up the process?
There is a big race condition in the code that not only prevents it from being fast, but should also produce wrong results (possibly non-deterministically). All threads can write to the same file "test.txt" simultaneously. Even if that operation is thread-safe on the target system, the order in which the threads append data to the file is undefined, so the result can come out shuffled. The appends have to be serialized, and where they are thread-safe they are typically protected by a lock that prevents any parallel execution.
Additionally, the open+write+close pattern is extremely slow, since it results in 3 system calls per line, and system calls are generally slow, especially I/O ones.
That being said, you cannot use one ofstream object from multiple threads without protection, since that is undefined behaviour. Here is what the C++ standard explicitly states:
Concurrent access to a stream object [string.streams, file.streams], stream buffer object [stream.buffers], or C Library stream [c.files] by multiple threads may result in a data race [intro.multithread] unless otherwise specified [iostream.objects]. [Note: Data races result in undefined behavior [intro.multithread]. --end note]
An efficient solution is a per-thread reduction: each thread appends its data to a thread-local ostringstream, so the integer-to-string serialization happens in a big private buffer, and the buffers are then written out sequentially, in order, so the output matches the sequential program. The serialization is sped up by using multiple threads while the I/O part remains sequential. In practice the serialization is the slow part, so using multiple threads should significantly reduce the execution time.
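For illustration, here is a sketch of that per-thread buffering, assuming for the moment that the numbers have been read once into a vector by the main thread (splitting the read itself is covered below):
#include <algorithm>
#include <fstream>
#include <sstream>
#include <string>
#include <thread>
#include <vector>

int main() {
    // Read the input once on the main thread.
    std::vector<int> numbers;
    {
        std::ifstream in("input.txt");
        for (int v; in >> v; ) numbers.push_back(v);
    }

    const int nThreads = 4;
    std::vector<std::ostringstream> buffers(nThreads);   // one private buffer per thread
    std::vector<std::thread> workers;

    const std::size_t chunk = (numbers.size() + nThreads - 1) / nThreads;
    for (int t = 0; t < nThreads; ++t) {
        workers.emplace_back([&, t] {
            const std::size_t begin = t * chunk;
            const std::size_t end = std::min(numbers.size(), begin + chunk);
            for (std::size_t i = begin; i < end; ++i)
                buffers[t] << numbers[i] << '\n';          // serialization done in parallel
        });
    }
    for (std::thread& w : workers) w.join();

    // Single sequential write, in slice order, so the output matches the sequential program.
    std::ofstream out("test.txt");
    for (const std::ostringstream& b : buffers) out << b.str();
}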
There is another big issue: the input file is read in its entirety by each thread! This means the 4 threads do 4 times more work overall than a single thread, which completely defeats the benefit of using multiple threads. You need to split the input file into roughly equal parts and then perform the computation. This is not so easy, since the line delimiters have to be taken into account.
One solution is to first retrieve the size of the file and then divide the 0..size range into N parts, where N is the number of workers. The split points then need to be corrected so that each one references the beginning of a line. You can do this correction by reading one line at the starting location of each range and adjusting the start/end of the ranges accordingly (you just need to add the size of the line read). Once corrected, each worker operates on a completely independent part of the file and can read it in parallel (using its own ifstream object, like you did).
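A sketch of that range-correction step, assuming '\n' line endings (each worker would then open its own ifstream and process only its [begin, end) slice):
#include <fstream>
#include <string>
#include <vector>

struct Range { std::streamoff begin, end; };   // a half-open byte range [begin, end)

// Split "path" into n ranges whose boundaries fall on line starts.
std::vector<Range> split_by_lines(const std::string& path, int n) {
    std::ifstream f(path, std::ios::binary);
    f.seekg(0, std::ios::end);
    const std::streamoff size = f.tellg();

    std::vector<std::streamoff> bounds(n + 1);
    bounds[0] = 0;
    bounds[n] = size;
    for (int i = 1; i < n; ++i) {
        f.clear();
        f.seekg(size / n * i);                 // rough split point
        std::string skipped;
        std::getline(f, skipped);              // skip forward to the next '\n'
        bounds[i] = f ? std::streamoff(f.tellg()) : size;
    }

    std::vector<Range> ranges;
    for (int i = 0; i < n; ++i)
        ranges.push_back({bounds[i], bounds[i + 1]});
    return ranges;                             // each worker then reads its own [begin, end)
}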

C++ takes a lot of time when reading Blocks (4kb) out of a file. [std::ios::binary]

I'm currently working on a project where I try to store multiple files inside one (binary) file.
The files are stored block by block and a block looks like this:
struct Block_FS {
    uint32_t ID;
    char nutzdaten[_blockSize];   // _blockSize is 4096; "Nutzdaten" means raw data
};
There is no issue with writing into the file; that works cleanly and fast.
But when I try to read the original file back out of the (binary) file, it takes a long time (about 3-4 seconds or more for 10 blocks).
My readBlock function looks like this:
Block_FS getBlock(long blockID, std::fstream& iso, long blockPosition, Superblock_FS superblock) {
    iso.seekg(blockPosition);
    for (long i = 0; i < superblock.blocksCount; i++) {        // blocksCount is the number of blocks currently stored
        Block_FS block;
        std::string line;
        std::getline(iso, line);                               // read the id
        block.ID = stoi(line);
        iso.read(&block.nutzdaten[0], superblock.blockSize);   // read the raw data (4 KB)
        getline(iso, line);                                    // skipping; maybe I should null-terminate to avoid this getline()...?
        if (block.ID == blockID) {                             // if the current block has the right id, return the whole block
            return block;
        }
    }
    std::cerr << "Unexpected behavior: Block -> " << blockID << " not found." << std::endl;
    Block_FS block;
    block.ID = -1;
    return block;
}
I don't think that there's much more you need to see from my code.
Somewhere in this function there must be something that takes a lot of time. Blocks are always stored on disk, not in a cache; only the block I'm currently writing or reading is held in memory.
Currently I'm essentially iterating through the file until I find the right id. I thought that might be causing the time issue, but when I tried the other way (saving the position of the block with tellg() and jumping straight to it with seekg()), it still took the same amount of time to read the blocks. Do you see anything I don't see?

Data not written with ofstream, even though success is returned

I'm writing a program which fetches a large number of email files using libcurl and then writes the file to disk, and then generates a receipt.
My problem is that, whilst most of the receipts seem to get written, the majority of the emails aren't written to disk. Worse, even though the file doesn't get written, ofstream returns success - so the receipt gets written even if the file write didn't complete successfully.
My guess is that, because ofstream is asynchronous, if a write doesn't complete in time then it'll get dropped on the floor - only a certain number of writes being possible concurrently. I am just guessing here.
Perhaps I need to refactor my code to write synchronously - but I can't believe that that's necessary. Does anyone have any idea how I can make this work?
The email sizes range from a few KBytes to a couple of MBytes.
int write_file(string filename, string mail_item) {
    ofstream out(filename.c_str());
    out << mail_item;
    out.close();
    out.flush();
    if (!out) {
        return FUNCTION_FAILED;
    }
    return FUNCTION_SUCCESS;
}
This is part of another function, and has been cut out so that only the salient code for this question is shown.
vector<string> directory = curl_listroot(curl);
for (int i = 0; i < directory.size(); i++) {
    vector<int> mail_list = curl_search(curl, directory[i], make_vector<string>() << "SEEN" << "RECENT" << "NEW" << "ANSWERED" << "FLAGGED");
    for (int j = 0; j < mail_list.size(); j++) {
        curl_reset(curl, imap.username, imap.password);
        string mail_item = curl_fetch(curl, directory[i], mail_list[j]);
        if (mail_item.compare("") != 0) {
            string m_id = getMessageID(mail_item);
            string filename = save_path + "/" + RECEIPTNAME + "/" + clean_filename(m_id) + ".eml";
            if (!file_exists(filename)) {
                string real_filename;
                real_filename = save_path + "/" + INBOXNAME + "/" + clean_filename(m_id) + ".eml";
                int success = write_file(real_filename, mail_item);
                if (success == FUNCTION_SUCCESS) {
                    write_file(filename, "");   // write empty receipt
                }
            }
        }
    }
}
All suggestions gratefully received! Thank you!
Okay, I've found an answer. There may be better answers, but this one works for me. The problem seems to lie in the OS (Linux, in this case): ofstream completes, having handed responsibility for writing the file to the OS, but the file hasn't actually been written yet (so while ofstream may be synchronous, the end-to-end write of the file, from data to file safely written to disk, isn't). Given that I'm issuing a huge number of writes in quick succession (potentially thousands), this won't necessarily work. The OS may throw its hands in the air and drop a significant number of the file writes on the floor (hence my original request for a synchronous, end-to-end way of writing the files).
My solution is to pause after each write to give the OS time to catch up. It's inelegant, though, and not as performant as it should be: it shouldn't take half a second to write an empty file. Additionally, on slow storage, half a second might not be enough time. I'd welcome any clever suggestions for how to improve my code.
int write_file(string filename, string mail_item) {
    ofstream out(filename.c_str());
    if (!out) {
        return FUNCTION_FAILED;
    }
    out << mail_item << endl;
    out.flush();
    usleep(500000);   // wait for half a second to give the OS time to output the file
    if (!out) {
        return FUNCTION_FAILED;
    }
    out.close();
    if (!out) {
        return FUNCTION_FAILED;
    }
    return FUNCTION_SUCCESS;
}

Is there a faster way to split a text file when a certain token is found?

I have a 7 GB text file made up of multi-line records, delimited by a line that contains only the token "$$$$".
I wrote a method to split it by parsing one line at a time, testing for the token, and splitting accordingly. The idea is to write each multi-line record to a different output file in round-robin fashion. My code is below:
// Open all temp files for writing
int nThreads = threadData.size();
std::vector<ofstream*> ostrms(nThreads);
for (int i = 0; i < nThreads; ++i)
{
    ostrms[i] = new ofstream(threadData[i].InFileName);
    if (!ostrms[i]->is_open())
        return (false);
}

// Parse mol records into temp files in round-robin fashion
std::vector<std::string> molRecord;
std::string line;
const std::string MOL_END_OF_RECORD = "$$$$";
int curOutfileNo = 0;

while (!strm.eof())
{
    std::getline(strm, line);
    if (line.find(MOL_END_OF_RECORD) != std::string::npos)
    {
        for (int i = 0; i < molRecord.size(); ++i)
            *(ostrms[curOutfileNo]) << molRecord[i] << "\n";
        (*ostrms[curOutfileNo]) << line << "\n";
        curOutfileNo = (curOutfileNo + 1) % nThreads;
        molRecord.clear();
    }
    else
        molRecord.push_back(line);
}

for (int i = 0; i < nThreads; ++i)
    delete ostrms[i];
This runs very slowly (several minutes). Is there a faster way?
The 7 GB text file has 245,634,858 lines and 466,537 unique records delimited by "$$$$".
If you are absolutely sure that your splitting lines contain exactly $$$$ without any prefix or suffix characters (e.g. spaces), you might replace
if (line.find(MOL_END_OF_RECORD) != std::string::npos)
with
if (line == std::string(MOL_END_OF_RECORD))
but I don't think it matters that much.
If spending a day on improving the code is worth the effort (I believe it is not), and assuming a Linux system, you could carefully use some clever combination of low-level syscalls such as read(2) with a large buffer of at least 64 KB, mmap(2) on multi-megabyte ranges, posix_fadvise(2), readahead(2) (in a separate thread), ...
If you access the same file (with constant content) several times, you might consider preprocessing (or pre-digesting) it, e.g. filling some GDBM indexed file or some Sqlite (or other) "database", and having your real application use that. You could also simply compute an "index" file containing the offset of every $$$$ delimiter.
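For example, such an index could be built in one sequential pass, assuming Unix line endings (the offsets could then be dumped to a small side file and reused on later runs to seek straight to any record):
#include <fstream>
#include <string>
#include <vector>

// Record the byte offset at which each record starts, i.e. the position right
// after every "$$$$" line.
std::vector<std::streamoff> build_record_index(const std::string& path) {
    std::ifstream in(path, std::ios::binary);
    std::vector<std::streamoff> offsets{0};      // the first record starts at offset 0
    std::streamoff pos = 0;
    std::string line;
    while (std::getline(in, line)) {
        pos += static_cast<std::streamoff>(line.size()) + 1;   // +1 for the '\n'
        if (line == "$$$$")
            offsets.push_back(pos);              // the next record starts right here
    }
    // If the file ends with a "$$$$" line, the final entry points at end-of-file
    // and can be dropped by the caller.
    return offsets;
}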
As I commented, you should consider the time(1) spent by utilities like wc(1) as a reasonable lower bound on the execution time. I guess it would show you that, on your particular system, the program is in fact I/O-bound.
BTW, if your machine has more than e.g. 10 GB of RAM, you could simply run wc yourhugefile before running your program. The wc process will fill the file system RAM cache with your file's data. See http://www.linuxatemyram.com/
We can't help much more unless you explain what the huge data is, how often it changes, and what your application does...
You could also buy more RAM and/or some SSD...

getline while reading a file vs reading whole file and then splitting based on newline character

I want to process each line of a file on a hard disk. Is it better to load the file as a whole and then split it on newline characters (using boost), or is it better to use getline()? My question is: does getline() read a single line each time it is called (resulting in multiple hard-disk accesses), or does it read the whole file and hand it out line by line?
getline will call read() as a system call somewhere deep in the guts of the C library. Exactly how many times it is called, and how it is called, depends on the C library design. But most likely there is no distinct difference between reading a line at a time and reading the whole file, because the OS at the bottom layer will read (at least) one disk block at a time, and most likely at least a "page" (4 KB), if not more.
Further, unless you do nearly nothing with your string after you have read it (e.g. you are writing something like "grep", so mostly just reading to find a string), it is unlikely that the overhead of reading a line at a time is a large part of the time you spend.
But the "load the whole file in one go" approach has several distinct problems:
You don't start processing until you have read the whole file.
You need enough memory to read the entire file into memory - what if the file is a few hundred GB in size? Does your program fail then?
Don't try to optimise something unless you have used profiling to prove that it's part of why your code is running slowly. You are just causing more problems for yourself.
Edit: So, I wrote a program to measure this, since I think it's quite interesting.
And the results are definitely interesting. To make the comparison fair, I created three large files of 1297984192 bytes each (by copying all source files in a directory with about a dozen different source files, then copying this file several times over to "multiply" it, until it took over 1.5 seconds to run the test, which is how long I think you need to run things to make sure the timing isn't too susceptible to random outside influences, such as a network packet coming in, taking time away from the process).
I also decided to measure the system and user time used by the process.
$ ./bigfile
Lines=24812608
Wallclock time for mmap is 1.98 (user:1.83 system: 0.14)
Lines=24812608
Wallclock time for getline is 2.07 (user:1.68 system: 0.389)
Lines=24812608
Wallclock time for readwhole is 2.52 (user:1.79 system: 0.723)
$ ./bigfile
Lines=24812608
Wallclock time for mmap is 1.96 (user:1.83 system: 0.12)
Lines=24812608
Wallclock time for getline is 2.07 (user:1.67 system: 0.392)
Lines=24812608
Wallclock time for readwhole is 2.48 (user:1.76 system: 0.707)
Here are the three different functions that read the file (there's some code to measure time and such too, of course, but to reduce the size of this post I chose not to include all of it; and I played around with the ordering to see if that made any difference, so the results above are not in the same order as the functions here):
void func_readwhole(const char *name)
{
    string fullname = string("bigfile_") + name;
    ifstream f(fullname.c_str());
    if (!f)
    {
        cerr << "could not open file for " << fullname << endl;
        exit(1);
    }

    f.seekg(0, ios::end);
    streampos size = f.tellg();
    f.seekg(0, ios::beg);

    char* buffer = new char[size];
    f.read(buffer, size);
    if (f.gcount() != size)
    {
        cerr << "Read failed ...\n";
        exit(1);
    }

    stringstream ss;
    ss.rdbuf()->pubsetbuf(buffer, size);

    int lines = 0;
    string str;
    while (getline(ss, str))
    {
        lines++;
    }

    f.close();
    cout << "Lines=" << lines << endl;
    delete[] buffer;
}

void func_getline(const char *name)
{
    string fullname = string("bigfile_") + name;
    ifstream f(fullname.c_str());
    if (!f)
    {
        cerr << "could not open file for " << fullname << endl;
        exit(1);
    }

    string str;
    int lines = 0;
    while (getline(f, str))
    {
        lines++;
    }

    cout << "Lines=" << lines << endl;
    f.close();
}

void func_mmap(const char *name)
{
    char *buffer;
    string fullname = string("bigfile_") + name;
    int f = open(fullname.c_str(), O_RDONLY);
    off_t size = lseek(f, 0, SEEK_END);
    lseek(f, 0, SEEK_SET);

    buffer = (char *)mmap(NULL, size, PROT_READ, MAP_PRIVATE, f, 0);

    stringstream ss;
    ss.rdbuf()->pubsetbuf(buffer, size);

    int lines = 0;
    string str;
    while (getline(ss, str))
    {
        lines++;
    }

    munmap(buffer, size);
    cout << "Lines=" << lines << endl;
}
The OS will read a whole block of data (depending on how the disk is formatted, typically 4-8 KB at a time) and do some of the buffering for you. Let the OS take care of it, and read the data in the way that makes sense for your program.
The fstreams are reasonably buffered. The underlying accesses to the hard disk by the OS are reasonably buffered. The hard disk itself has a reasonable buffer. You most surely will not trigger more hard-disk accesses by reading the file line by line, or character by character, for that matter.
So there is no reason to load the whole file into a big buffer and work on that buffer, because it already is in a buffer. And there is often no reason to buffer one line at a time either. Why allocate memory to buffer something in a string that is already buffered in the ifstream? If you can, work on the stream directly; don't bother tossing everything around twice or more from one buffer to the next, unless it helps readability and/or your profiler told you that disk access is slowing your program down significantly.
I believe the C++ idiom would be to read the file line by line, creating a line-based container as you read it. Most likely the iostreams (getline) will be buffered well enough that you won't notice a significant difference.
However, for very large files you may get better performance by reading larger chunks of the file (not the whole file at once) and splitting them internally as newlines are found.
If you want to know specifically which method is faster and by how much, you'll have to profile your code.
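As a rough illustration of that chunked approach, the following sketch reads fixed-size blocks and splits on '\n' itself, carrying any partial line over to the next block (the chunk size and the per-line callback are placeholders):
#include <fstream>
#include <functional>
#include <string>
#include <vector>

void for_each_line_chunked(const std::string& path,
                           const std::function<void(const std::string&)>& on_line,
                           std::size_t chunk_size = 1 << 20) {
    std::ifstream in(path, std::ios::binary);
    std::vector<char> buf(chunk_size);
    std::string carry;                                    // partial line from the previous chunk

    while (in) {
        in.read(buf.data(), static_cast<std::streamsize>(buf.size()));
        const std::size_t got = static_cast<std::size_t>(in.gcount());
        std::size_t start = 0;
        for (std::size_t i = 0; i < got; ++i) {
            if (buf[i] == '\n') {
                carry.append(buf.data() + start, i - start);
                on_line(carry);                           // hand a complete line to the caller
                carry.clear();
                start = i + 1;
            }
        }
        carry.append(buf.data() + start, got - start);    // keep the trailing partial line
    }
    if (!carry.empty())
        on_line(carry);                                   // last line without a trailing '\n'
}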
It's better to fetch all the data if it can be accommodated in memory, because whenever you request I/O your program loses the processor and is put into a wait queue.
However, if the file is big, it's better to read only as much data at a time as the processing requires, because a bigger read operation takes much longer to complete than small ones, while the CPU's process-switching time is much smaller than the time to read the entire file.
If it's a small file on disk, it's probably more efficient to read the entire file and parse it, rather than reading one line at a time; that would take lots of disk accesses.