Copying QFile contents to another QFile, what's the optimal way? - c++

I need to copy a QFile to another QFile in chunks, so I can't use QFile::copy. Here's the most primitive implementation:
bool CFile::copyChunk(int64_t chunkSize, const QString &destFolder)
{
    if (!_thisFile.isOpen())
    {
        // Initializing - opening files
        _thisFile.setFileName(_absoluteFilePath);
        if (!_thisFile.open(QFile::ReadOnly))
            return false;

        _destFile.setFileName(destFolder + _thisFileName);
        if (!_destFile.open(QFile::WriteOnly))
            return false;
    }

    if (chunkSize < (_thisFile.size() - _thisFile.pos()))
    {
        QByteArray data(chunkSize, 0);
        _thisFile.read(data.data(), chunkSize);
        return _destFile.write(data) == chunkSize;
    }

    // Last (possibly partial) chunk - copy whatever is left.
    const QByteArray data = _thisFile.readAll();
    return _destFile.write(data) == data.size();
}
It's not clear from this fragment, but I only intend to copy a binary file as a whole into another location, just in chunks so that I can provide progress callbacks and a cancellation facility for large files.
Another idea is to use memory mapping. Should I? If so, should I only map the source file and still use _destFile.write, or should I map both and use memcpy?
I guess this question isn't really tied to Qt; I think the answer should be general to any file I/O API that supports memory mapping.
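For context, this is the shape of loop I am after; a minimal self-contained sketch (the callback and the cancel flag are just placeholder names, not part of my real code), assuming both files are already open:
#include <QFile>
#include <QByteArray>
#include <atomic>
#include <functional>

// Sketch only: copies 'src' to 'dst' in fixed-size chunks, reporting progress
// and honouring a cancellation flag. Error handling is kept minimal.
bool copyInChunks(QFile &src, QFile &dst, qint64 chunkSize,
                  const std::function<void(qint64, qint64)> &onProgress,
                  const std::atomic<bool> &cancelled)
{
    const qint64 total = src.size();
    qint64 copied = 0;
    while (copied < total) {
        if (cancelled.load())
            return false;                      // caller asked us to stop
        const QByteArray chunk = src.read(chunkSize);
        if (chunk.isEmpty())
            return false;                      // read error or unexpected EOF
        if (dst.write(chunk) != chunk.size())
            return false;                      // short write -> treat as failure
        copied += chunk.size();
        onProgress(copied, total);             // e.g. update a progress bar
    }
    return true;
}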

Ok, ok, if it must be a memory-mapping solution, here is one:
QFile source("/tmp/bla1.bin");
source.open(QIODevice::ReadOnly);
QFile destination("/tmp/bla2.bin");
destination.open(QIODevice::ReadWrite);
destination.resize(source.size());

uchar *data = destination.map(0, destination.size());
if (!data) {
    qDebug() << "Cannot map";
    exit(-1);
}

int chunksize = 200;
qint64 var = 0;
uchar *pos = data;          // keep the original pointer for unmap()
do {
    var = source.read((char *) pos, chunksize);
    if (var > 0)
        pos += var;
} while (var > 0);

destination.unmap(data);    // unmap the address returned by map()
destination.close();
This maps only the destination file into memory. I doubt it will make much of a difference to map the source file as well. But this is something for concrete measurements, not assumptions.
Another question is whether you can map your whole file into memory at once. Constantly unmapping and remapping will certainly cost performance. And even with Qt, functions like memory mapping tend to behave disturbingly differently on different platforms, e.g. the maximum file size you can map into memory might differ.
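If mapping the whole file at once is a concern, something along these lines would map the destination in windows instead. This is only a sketch: the window size is an arbitrary assumption and I have not measured it.
// Map the destination in fixed-size windows instead of all at once,
// so very large files never need a single huge mapping.
const qint64 window = 64 * 1024 * 1024;          // 64 MiB per mapping
destination.resize(source.size());
for (qint64 offset = 0; offset < source.size(); offset += window) {
    const qint64 len = qMin(window, source.size() - offset);
    uchar *dst = destination.map(offset, len);
    if (!dst)
        break;                                   // mapping failed - fall back to plain write() here
    qint64 done = 0;
    while (done < len) {
        const qint64 n = source.read(reinterpret_cast<char *>(dst) + done, len - done);
        if (n <= 0)
            break;                               // read error or EOF
        done += n;
    }
    destination.unmap(dst);
}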

What the optimal method is always lies a bit in the eye of the beholder. Here is at least one short, working method:
QFile source("/tmp/bla1.bin");
source.open(QIODevice::ReadOnly);
QFile destination("/tmp/bla2.bin");
destination.open(QIODevice::WriteOnly);

QByteArray buffer;
int chunksize = 200; // Whatever chunk size you like
while (!(buffer = source.read(chunksize)).isEmpty()) {
    destination.write(buffer);
}

destination.close();
source.close();
And memory mapping... I try to stay away from things like that. I am never too sure how platform independent they are.

Use this QFile::map() method:
QFile fs("Sourcefile.bin");
fs.open(QFile::ReadOnly);
QFile fd("Destinationfile.bin");
fd.open(QFile::WriteOnly);
fd.write((char*) fs.map(0, fs.size()), fs.size()); //Copies all data
fd.close();
fs.close();
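Note that QFile::map() returns nullptr if the mapping fails (for example when the file does not fit into the address space), so a slightly more defensive variant of the line above would be (sketch only):
uchar *src = fs.map(0, fs.size());
if (!src) {
    // mapping failed - fall back to a plain read/write loop instead of crashing
} else {
    fd.write(reinterpret_cast<char *>(src), fs.size());   // copies all data
    fs.unmap(src);
}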

Related

C++, OpenCV: Fastest way to read a file containing non-ASCII characters on Windows

I am writing a program using OpenCV that shall work on Windows as well as on Linux. Now the problem with OpenCV is that its cv::imread function cannot handle file paths that contain non-ASCII characters on Windows. A workaround is to first read the file into a buffer using other libraries (for example the standard library or Qt) and then read the file from that buffer using the cv::imdecode function. This is what I currently do. However, it's not very fast and much slower than just using cv::imread. I have a TIF image that is about 1 GB in size. Reading it with cv::imread takes approx. 1 s, reading it with the buffer method takes about 14 s. I assume that imread just reads those parts of the TIF that are necessary for displaying the image (no layers etc.). Either this, or my code for reading a file into a buffer is bad.
Now my question is if there is a better way to do it. Either a better way with regard to OpenCV or a better way with regard to reading a file into a buffer.
I tried two different methods for the buffering, one using the std libraries and one using Qt (actually they both use Qt for some things). They are both equally slow:
Method 1
std::shared_ptr<std::vector<char>> readFileIntoBuffer(QString const& path) {
#ifdef Q_OS_WIN
    std::ifstream file(path.toStdWString(), std::iostream::binary);
#else
    std::ifstream file(path.toStdString(), std::iostream::binary);
#endif
    if (!file.good()) {
        return std::shared_ptr<std::vector<char>>(new std::vector<char>());
    }
    file.exceptions(std::ifstream::badbit | std::ifstream::failbit | std::ifstream::eofbit);
    file.seekg(0, std::ios::end);
    std::streampos length(file.tellg());
    std::shared_ptr<std::vector<char>> buffer(new std::vector<char>(static_cast<std::size_t>(length)));
    if (static_cast<std::size_t>(length) == 0) {
        return std::shared_ptr<std::vector<char>>(new std::vector<char>());
    }
    file.seekg(0, std::ios::beg);
    try {
        file.read(buffer->data(), static_cast<std::size_t>(length));
    } catch (...) {
        return std::shared_ptr<std::vector<char>>(new std::vector<char>());
    }
    file.close();
    return buffer;
}
And then for reading the image from the buffer:
std::shared_ptr<std::vector<char>> buffer = utility::readFileIntoBuffer(path);
cv::Mat image = cv::imdecode(*buffer, cv::IMREAD_UNCHANGED);
Method 2
QByteArray readFileIntoBuffer(QString const & path) {
    QFile file(path);
    if (!file.open(QIODevice::ReadOnly)) {
        return QByteArray();
    }
    return file.readAll();
}
And for decoding the image:
QByteArray buffer = utility::readFileIntoBuffer(path);
cv::Mat matBuffer(1, buffer.size(), CV_8U, buffer.data());
cv::Mat image = cv::imdecode(matBuffer, cv::IMREAD_UNCHANGED);
UPDATE
Method 3
This method maps the file into memory using QFileDevice::map and then uses cv::imdecode.
QFile file(path);
file.open(QIODevice::ReadOnly);
unsigned char * fileContent = file.map(0, file.size(), QFileDevice::MapPrivateOption);
cv::Mat matBuffer(1, file.size(), CV_8U, fileContent);
cv::Mat image = cv::imdecode(matBuffer, cv::IMREAD_UNCHANGED);
However, this approach also didn't result in a shorter time than the other two. I also did some time measurements and found that reading the file into memory, or mapping it into memory, is actually not the bottleneck. The operation that takes the majority of the time is cv::imdecode. I don't know why this is the case, since using cv::imread with the same image only takes a fraction of the time.
Potential Workaround
I tried obtaining an 8.3 pathname on Windows for files that contain non-ASCII characters, using the following code:
QString getShortPathname(QString const & path) {
#ifndef Q_OS_WIN
    return QString();
#else
    long length = 0;
    WCHAR* buffer = nullptr;
    length = GetShortPathNameW(path.toStdWString().c_str(), nullptr, 0);
    if (length == 0) return QString();
    buffer = new WCHAR[length];
    length = GetShortPathNameW(path.toStdWString().c_str(), buffer, length);
    if (length == 0) {
        delete[] buffer;
        return QString();
    }
    QString result = QString::fromWCharArray(buffer);
    delete[] buffer;
    return result;
#endif
}
However, I found out that 8.3 pathname generation is disabled on my machine, so it potentially is on others as well. I wasn't able to test this yet, and it does not seem to be a reliable workaround. Another problem is that the function doesn't tell me when 8.3 pathname generation is disabled.
There is an open ticket on this in OpenCV GitHub: https://github.com/opencv/opencv/issues/4292
One of the comments there suggests a workaround that avoids reading the whole file into memory by using a memory-mapped file (with help from Boost):
mapped_file map(path(L"filename"), ios::in);
Mat file(1, numeric_cast<int>(map.size()), CV_8S, const_cast<char*>(map.const_data()), CV_AUTOSTEP);
Mat image(imdecode(file, 1));
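A self-contained version of that suggestion might look roughly like this. This is an untested sketch: filename.tif is a placeholder, and it assumes Boost.Iostreams and Boost.Filesystem are available and that the templated mapped_file_source constructor accepts a wide filesystem path on Windows, as the quoted snippet implies.
#include <boost/filesystem.hpp>
#include <boost/iostreams/device/mapped_file.hpp>
#include <opencv2/imgcodecs.hpp>

// Read-only mapping of the file, handed to cv::imdecode without first
// copying the contents into a separate buffer.
boost::iostreams::mapped_file_source map(
    boost::filesystem::path(L"filename.tif"));   // wide path, so non-ASCII works on Windows
cv::Mat raw(1, static_cast<int>(map.size()), CV_8U,
            const_cast<char *>(map.data()));
cv::Mat image = cv::imdecode(raw, cv::IMREAD_UNCHANGED);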

What is the scheme for reading big XML data using "Memory Mapped Files"?

I have a big XML file (an OSM map data file) to parse. The initial processing code is like this:
FILE* file = fopen(fileName.c_str(), "r");
size_t BUF_SIZE = 10 * 1024 * 1024;
char* buf = new char[BUF_SIZE];
string contents;
while (!feof(file))
{
    size_t ret = fread(buf, 1, BUF_SIZE, file);   // number of bytes actually read
    contents.append(buf, ret);                    // append only what was read
}
size_t pos = 0;
while (true)
{
    pos = contents.find('<', pos);
    if (pos == string::npos) break;
    // Case: found new node.
    if (contents.substr(pos, 5) == "<node")
    {
        // do something;
    }
    // Case: found new way.
    else if (contents.substr(pos, 4) == "<way")
    {
        // do something;
    }
    ++pos;   // move past this '<' so the search can advance
}
Then people here tell me I should use a memory-mapped file to process such "big data files"; the details are here:
how to read a huge file into a buffer.
When the file has a fixed size and is not very large, I can load it into memory in one go, append the content to a string object, and then apply find() and the other string methods to extract the node content of the XML file (the code at the beginning of my question uses this method, and I have tested that it produces the right result). But if the file is very large, how do I apply those methods (without using an XML library such as libxml)?
In short: for a small XML file I can load the whole content into a std::string and use find() and substr() to get the information I want out of the XML. When the XML file is very large I need to use a memory-mapped file to cope with it, and then I can no longer append the whole content to a std::string; how can I parse the file without using an existing XML library?
I hope I have expressed my question clearly.
If you're using std::string members to get the data you need, you're almost certainly not parsing the XML in the traditional sense of parsing XML. (That is, you're very probably not making any use of XML's hierarchical structure. Although you are extracting data from XML, "parsing XML" means something much more specific to most people.)
That said, the C equivalents of the std::string members you seem to be OK with, such as memcmp and the GNU extension memmem, just take pointers and lengths. Read their documentation and use them in place of their std::string-member equivalents.
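For example, a minimal sketch of such a scan over a mapped buffer (data and size stand for whatever your mapping gives you) could look like this:
#include <string.h>

// Find each '<' and compare the following bytes in place,
// without copying anything into a std::string.
void scan(const char *data, size_t size)
{
    const char *p = data;
    const char *end = data + size;
    while ((p = (const char *) memchr(p, '<', (size_t)(end - p))) != NULL) {
        if ((size_t)(end - p) >= 5 && memcmp(p, "<node", 5) == 0) {
            /* handle a <node ...> element starting at p */
        } else if ((size_t)(end - p) >= 4 && memcmp(p, "<way", 4) == 0) {
            /* handle a <way ...> element starting at p */
        }
        ++p;   /* continue searching after this '<' */
    }
}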

very fast text file processing (C++)

I wrote an application which processes data on the GPU. The code works well, but I have the problem that reading the input file (~3 GB, text) is the bottleneck of my application. (The read from the HDD is fast, but the line-by-line processing is slow.)
I read a line with getline(), copy line 1 to a vector and line 2 to a vector, and skip lines 3 and 4. And so on for the rest of the 11 million lines.
I tried several approaches to read the file in the best time possible:
Fastest method I found is using boost::iostreams::stream
Others were:
Reading the file as gzip, to minimize IO, but it is slower than
reading it directly.
Copying the file to RAM with read(filepointer, chararray, length)
and processing it with a loop to split out the lines (also slower than boost)
Any suggestions how to make it run faster?
void readfastq(char *filename, int SRlength, uint32_t blocksize){
    _filelength = 0;        //total datasets (each 4 lines)
    _SRlength = SRlength;   //length of the 2. line
    _blocksize = blocksize;

    boost::iostreams::stream<boost::iostreams::file_source> ins(filename);
    in = ins;
    readNextBlock();
}
void readNextBlock() {
    timeval start, end;
    gettimeofday(&start, 0);

    string name;
    string seqtemp;
    string garbage;
    string phredtemp;

    _seqs.clear();   // note: clear(), not empty() - empty() only tests for emptiness
    _phred.clear();
    _names.clear();
    _filelength = 0;

    //read only a part of the file i.e the first 4mio lines
    while (std::getline(in, name) && _filelength < _blocksize) {
        std::getline(in, seqtemp);
        std::getline(in, garbage);
        std::getline(in, phredtemp);

        if (seqtemp.size() != _SRlength) {
            if (seqtemp.size() != 0)
                printf("Error on read in fastq: size is invalid\n");
        } else {
            _names.push_back(name);
            for (int k = 0; k < _SRlength; k++) {
                //handle special letters
                if (seqtemp[k] == 'A') ...
                else {
                    _seqs.push_back(5);
                }
            }
            _filelength++;
        }
    }
}
EDIT:
The source-file is downloadable under https://docs.google.com/open?id=0B5bvyb427McSMjM2YWQwM2YtZGU2Mi00OGVmLThkODAtYzJhODIzYjNhYTY2
I changed the function readfastq to read the file because of some pointer problems. So if you call readfastq, the blocksize (in lines) must be bigger than the number of lines to read.
SOLUTION:
I found a solution which reduced the time for reading in the file from 60 s to 16 s. I removed the inner loop which handles the special characters and do this on the GPU instead. This decreases the read-in time and only minimally increases the GPU running time.
Thanks for your suggestions.
void readfastq(char *filename, int SRlength) {
    _filelength = 0;
    _SRlength = SRlength;

    size_t bytes_read, bytes_expected;
    FILE *fp;
    fp = fopen(filename, "r");

    fseek(fp, 0L, SEEK_END);     //go to the end of file
    bytes_expected = ftell(fp);  //get filesize
    fseek(fp, 0L, SEEK_SET);     //go to the beginning of the file
    fclose(fp);

    if ((_seqarray = (char *) malloc(bytes_expected/2)) == NULL)   //allocate space for file
        err(EX_OSERR, "data malloc");

    string name;
    string seqtemp;
    string garbage;
    string phredtemp;

    boost::iostreams::stream<boost::iostreams::file_source> file(filename);

    while (std::getline(file, name)) {
        std::getline(file, seqtemp);
        std::getline(file, garbage);
        std::getline(file, phredtemp);

        if (seqtemp.size() != SRlength) {
            if (seqtemp.size() != 0)
                printf("Error on read in fastq: size is invalid\n");
        } else {
            _names.push_back(name);
            strncpy(&(_seqarray[SRlength*_filelength]), seqtemp.c_str(), seqtemp.length()); //do not handle special letters here, do that on GPU
            _filelength++;
        }
    }
}
First, instead of reading the file into memory you may work with file mappings. You just have to build your program as 64-bit so that 3 GB of file fits into the virtual address space (for a 32-bit application only 2 GB is accessible in user mode). Or alternatively you may map and process your file in parts.
Next, it sounds to me like your bottleneck is "copying a line to a vector". Dealing with vectors involves dynamic memory allocation (heap operations), which in a critical loop hurts performance very seriously. If this is the case, either avoid using vectors, or make sure they're declared outside the loop. The latter helps because when you reallocate/clear vectors they do not free their memory.
Post your code (or a part of it) for more suggestions.
EDIT:
It seems that all your bottlenecks are related to string management.
std::getline(in, seqtemp); reading into a std::string involves dynamic memory allocation.
_names.push_back(name); This is even worse. First, the std::string is placed into the vector by value, which means the string is copied, hence another dynamic allocation/freeing happens. Moreover, when the vector is eventually reallocated internally, all the contained strings are copied again, with all the consequences.
I recommend using neither standard formatted file I/O functions (Stdio/STL) nor std::string. To achieve better performance you should work with pointers to strings (rather than copied strings), which is possible if you map the entire file. Plus you'll have to implement the file parsing (division into lines).
Like in this code:
class MemoryMappedFileParser
{
    const char* m_sz;
    size_t m_Len;

public:
    struct String {
        const char* m_sz;
        size_t m_Len;
    };

    bool getline(String& out)
    {
        out.m_sz = m_sz;
        const char* sz = (char*) memchr(m_sz, '\n', m_Len);
        if (sz)
        {
            size_t len = sz - m_sz;
            m_sz = sz + 1;
            m_Len -= (len + 1);
            out.m_Len = len;
            // for Windows-format text files remove the '\r' as well
            if (len && '\r' == out.m_sz[len-1])
                out.m_Len--;
        } else
        {
            out.m_Len = m_Len;
            if (!m_Len)
                return false;
            m_Len = 0;
        }
        return true;
    }
};
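The mapping itself is not shown above; a rough POSIX sketch of how it could be wired up follows. The constructor taking a pointer and a length is an assumed addition to the class, and error handling is reduced to the bare minimum.
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map the whole file read-only and feed it to the parser.
int fd = open("reads.fastq", O_RDONLY);
struct stat st;
fstat(fd, &st);
const char *base = (const char *) mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);

MemoryMappedFileParser parser(base, (size_t) st.st_size);   // assumes a (ptr, len) constructor
MemoryMappedFileParser::String line;
while (parser.getline(line)) {
    // line.m_sz / line.m_Len point into the mapping - no copies are made
}

munmap((void *) base, st.st_size);
close(fd);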
If _seqs and _names are std::vectors and you can guess their final size before processing the whole 3 GB of data, you can use reserve to avoid most of the memory reallocation while pushing back new elements in the loop.
You should be aware of the fact that the vectors effectively produce another copy of parts of the file in main memory. So unless you have a main memory sufficiently large to store the text file plus the vector and its contents, you will probably end up with a number of page faults that also have a significant influence on the speed of your program.
You are apparently using <stdio.h>, since you are using getline.
Perhaps fopen-ing the file with fopen(path, "rm"); might help, because the m (a GNU extension) asks for mmap to be used for reading.
Perhaps setting a big buffer (e.g. half a megabyte) with setbuffer could also help.
Probably, using the readahead system call (in a separate thread perhaps) could help.
But all of this is guesswork. You should really measure things.
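For illustration only, an untested sketch of the first two suggestions (both glibc-specific; the file name is a placeholder):
#include <stdio.h>

// "m" asks stdio to use mmap for reading; setbuffer installs a larger
// stdio buffer than the default.
static char bigbuf[512 * 1024];                 /* ~half a megabyte */
FILE *fp = fopen("reads.fastq", "rm");          /* "m" is a GNU extension */
if (fp)
    setbuffer(fp, bigbuf, sizeof bigbuf);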
General suggestions:
Code the simplest, most straight-forward, clean approach,
Measure,
Measure,
Measure,
Then if all else fails:
Read raw bytes (read(2)) in page-aligned chunks. Do so sequentially, so the kernel's read-ahead works to your advantage (a sketch follows after this list).
Re-use the same buffer to minimize cache flushing.
Avoid copying data, parse in place, pass around pointers (and sizes).
mmap(2)-ing [parts of the] file is another approach. This also avoids kernel-userland copy.
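A bare-bones sketch of that raw-read approach (chunk size and file name are arbitrary placeholders):
#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

// Sequential read(2) into one page-aligned buffer that is reused for every
// chunk, with parsing done in place on that buffer.
int fd = open("reads.fastq", O_RDONLY);
const size_t chunk = 4 * 1024 * 1024;            /* 4 MiB, a multiple of the page size */
char *buf = NULL;
posix_memalign((void **) &buf, 4096, chunk);
ssize_t n;
while ((n = read(fd, buf, chunk)) > 0) {
    /* parse buf[0..n) in place, passing pointers and lengths around */
}
free(buf);
close(fd);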
Depending on your disk speed, using a very fast decompression algorithm might help, like fastlz (there are at least two others that might be more efficient, but they are under the GPL, so the licence can be a problem).
Also, using C++ data structures and functions can increase the speed, as you can maybe achieve better compile-time optimization. Going the C way isn't always the fastest! In some bad cases, using char* means you need to scan the whole string to reach the \0, yielding disastrous performance.
For parsing your data, using boost::spirit::qi is probably the most optimized approach: http://alexott.blogspot.com/2010/01/boostspirit2-vs-atoi.html

Problems with pwrite() to a file in C/C++

I have a bad problem. I'm trying to write to a file via a file descriptor and memalign. I can write to it, but only something like wrongly encoded chars gets written to the file.
Here's my code:
fdOutputFile = open(outputFile, O_CREAT | O_WRONLY | O_APPEND | O_DIRECT, 0644)

void writeThis(char* text) {
    while (*text != '\0') {
        // if my internal buffer is full -> write to disk
        if (buffPositionOutput == outputbuf.st_blksize) {
            posix_memalign((void **)&bufferO, outputbuf.st_blksize, outputbuf.st_blksize);
            cout << "wrote " << pwrite(fdOutputFile, bufferO, outputbuf.st_blksize, outputOffset*outputbuf.st_blksize) << " Bytes to disk." << endl;
            buffPositionOutput = 0;
            ++outputOffset;
        }
        // buffer the incoming text...
        bufferO[buffPositionOutput] = *text;
        ++text;
        ++buffPositionOutput;
    }
}
}
I think it's the alignment - can someone help me?
It writes to the file but not the correct text, just a bunch of '[]'-chars.
Thanks in advance for your help!
Looking at your program, here is what happens:
You fill the memory initially pointed to by buffer0+buffPositionOutput (Which is where, precisely? I don't know based on the code you give.) up to buffer0+outputbuf.st_blksize with data.
You pass the address of the buffer0 pointer to posix_memalign, which ignores its current value and overwrites it with a pointer to outputbuf.st_blksize bytes of newly-allocated memory.
You write data from the newly-allocated block to disk; this might be anything, since you just allocated memory and haven't written anything there yet.
This won't work, obviously. You probably want to initialize your buffer via posix_memalign at the top of your function, and then just overwrite the block's worth of data in it as you use your aligned buffer to repeatedly write data into the file. (Reset buffpositionoutput to zero after each time you write data, but don't re-allocate.) Make sure you free your buffer when you are done.
Also, why are you using pwrite instead of write?
Here's how I would implement writeThis (keeping your variable names so you can match it up with your version):
void writeThis(char *text) {
    char *buffer0;
    size_t buffPositionOutput = 0;
    posix_memalign((void **) &buffer0, outputbuf.st_blksize, outputbuf.st_blksize);
    while (*text != 0) {
        buffer0[buffPositionOutput] = *text;   // actually store the character
        ++text; ++buffPositionOutput;
        if (buffPositionOutput == outputbuf.st_blksize) {
            write(fdOutputFile, buffer0, outputbuf.st_blksize);
            buffPositionOutput = 0;
        }
    }
    if (buffPositionOutput != 0) {
        // what do you want to do with a partial block of data? Not sure.
    }
    free(buffer0);
}
(For speed, you might consider using memcpy calls instead of a loop. You would need to know the length of the data to write ahead of time though. Worry about that after you have a working solution that does not leak memory.)
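For illustration, a rough sketch of that memcpy variant, assuming the caller knows len, the length of text (everything else reuses the names from the function above):
// Copy as much as fits into the current block in one go instead of a byte at a time.
size_t remaining = len;
while (remaining > 0) {
    size_t space = outputbuf.st_blksize - buffPositionOutput;
    size_t n = remaining < space ? remaining : space;
    memcpy(buffer0 + buffPositionOutput, text, n);
    buffPositionOutput += n;
    text += n;
    remaining -= n;
    if (buffPositionOutput == outputbuf.st_blksize) {
        write(fdOutputFile, buffer0, outputbuf.st_blksize);
        buffPositionOutput = 0;
    }
}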
You're re-allocating buffer0 every time you try to output it, and not freeing it. That's really not efficient (and leaks memory). I'd suggest you refactor your code a bit, because it's quite hard to follow whether your bounds checking on that buffer is correct or not.
Allocate buffer0 only once somewhere (from that snippet, storing it in outputbuf sounds like a good idea). Also store buffPositionOutput in that struct (or in another struct, but close to that buffer).
// in setup code
int rc = posix_memalign((void **) &outputbuf.data, outputbuf.st_blksize,
                        outputbuf.st_blksize);
// check rc!
outputbuf.writePosition = 0;

// in cleanup code
free(outputbuf.data);
Then you can rewrite your function like this:
void writeThis(char *text) {
    while (*text != 0) {
        outputbuf.data[outputbuf.writePosition] = *text;
        outputbuf.writePosition++;
        text++;
        if (outputbuf.writePosition == outputbuf.st_blksize) {
            int rc = pwrite(...);
            // check rc!
            std::cout << ...;
            outputbuf.writePosition = 0;
        }
    }
}
I don't think C/C++ has encodings. ASCII only.
Unless you use wchar http://en.wikipedia.org/wiki/Wide_character

Multiple threads reading from the same file

My platform is Windows Vista 32-bit, with Visual C++ Express 2008.
For example:
If I have a file that contains 4000 bytes, can I have 4 threads read from the file at the same time, with each thread accessing a different section of the file?
Thread 1 reads 0-999, thread 2 reads 1000-1999, etc.
Please give an example in C.
If you don't write to the file, there is no need to take care of synchronization / race conditions.
Just open the file with shared reading as different handles and everything will work (i.e., you must open the file in each thread's context instead of sharing the same file handle).
#include <stdio.h>
#include <windows.h>

DWORD WINAPI mythread(LPVOID param)
{
    int i = (int) param;
    BYTE buf[1000];
    DWORD numread;

    HANDLE h = CreateFile("c:\\test.txt", GENERIC_READ, FILE_SHARE_READ,
                          NULL, OPEN_EXISTING, 0, NULL);
    SetFilePointer(h, i * 1000, NULL, FILE_BEGIN);
    ReadFile(h, buf, sizeof(buf), &numread, NULL);
    CloseHandle(h);   // don't leak the per-thread handle

    printf("buf[%d]: %02X %02X %02X\n", i+1, buf[0], buf[1], buf[2]);
    return 0;
}

int main()
{
    int i;
    HANDLE h[4];

    for (i = 0; i < 4; i++)
        h[i] = CreateThread(NULL, 0, mythread, (LPVOID)i, 0, NULL);

    // for (i = 0; i < 4; i++) WaitForSingleObject(h[i], INFINITE);
    WaitForMultipleObjects(4, h, TRUE, INFINITE);
    return 0;
}
There's not even a big problem writing to the same file, in all honesty.
By far the easiest way is to just memory-map the file. The OS will then give you a void* where the file is mapped into memory. Cast that to a char[], and make sure that each thread uses non-overlapping subarrays.
void foo(char* begin, char*end) { /* .... */ }
void* base_address = myOS_memory_map("example.binary");
myOS_start_thread(&foo, (char*)base_address, (char*)base_address + 1000);
myOS_start_thread(&foo, (char*)base_address+1000, (char*)base_address + 2000);
myOS_start_thread(&foo, (char*)base_address+2000, (char*)base_address + 3000);
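On Windows (the asker's platform) the same idea with the real API calls would look roughly like this; it is only a sketch, with error checks omitted, and thread creation itself is shown in the answer further up.
#include <windows.h>

// Map the file once, then hand non-overlapping [begin, end) ranges of the
// view to the worker threads.
HANDLE hFile = CreateFile("c:\\test.txt", GENERIC_READ, FILE_SHARE_READ,
                          NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
HANDLE hMap  = CreateFileMapping(hFile, NULL, PAGE_READONLY, 0, 0, NULL);
const char *base = (const char *) MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0);

// e.g. thread 0 gets [base, base + 1000), thread 1 gets [base + 1000, base + 2000), ...

// when all threads are done:
UnmapViewOfFile(base);
CloseHandle(hMap);
CloseHandle(hFile);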
As others have noted already, there is no inherent problem in having multiple threads read from the same file, as long as they have their own file descriptors/handles. However, I'm a little curious about your motives. Why do you want to read a file in parallel? If you're only reading a file into memory, your bottleneck is likely the disk itself, in which case multiple threads won't help you at all (they'll just clutter your code).
And as always when optimizing, you should not attempt it until you (1) have an easy-to-understand, working solution, and (2) have measured your code to know where you should optimize.
Windows supports overlapped I/O, which allows a single thread to asynchronously queue multiple I/O requests for better performance. This could conceivably be used by multiple threads simultaneously as long as the file you are accessing supports seeking (i.e. this is not a pipe).
Passing FILE_FLAG_OVERLAPPED to CreateFile() allows simultaneous reads and writes on the same file handle; otherwise, Windows serializes them. Specify the file offset using the Offset and OffsetHigh members of the OVERLAPPED structure.
For more information see Synchronization and Overlapped Input and Output.
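A rough sketch of a single overlapped read at a given offset (illustrative only; real code would handle ERROR_IO_PENDING and use events or completion routines rather than blocking right away):
#include <windows.h>

HANDLE h = CreateFile("c:\\test.txt", GENERIC_READ, FILE_SHARE_READ, NULL,
                      OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
char buf[1000];
OVERLAPPED ov = {0};
ov.Offset     = 1000;            // low 32 bits of the file offset
ov.OffsetHigh = 0;               // high 32 bits, for files larger than 4 GB
DWORD bytesRead = 0;
if (!ReadFile(h, buf, sizeof(buf), NULL, &ov)) {
    // typically GetLastError() == ERROR_IO_PENDING here
}
GetOverlappedResult(h, &ov, &bytesRead, TRUE);   // wait for this read to complete
CloseHandle(h);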
You can certainly have multiple threads reading from a data structure; race conditions can potentially occur if any writing is taking place.
To avoid such race conditions you need to define the boundaries that threads can read, if you have an explicit number of data segments and an explicit number of threads to match these then that is easy.
As for an example in C you would need to provide some more information, like the threading library you are using. Attempt it first, then we can help you fix any issues.
The easiest way is to open the file within each parallel instance, but just open it as readonly.
The people who say there may be an IO bottleneck are probably wrong. Any modern operating system caches file reads. Which means the first time you read a file will be the slowest, and any subsequent reads will be lightning fast. A 4000 byte file can even rest inside the processor's cache.
I don't see any real advantage to doing this.
You may have multiple threads reading from the device but your bottleneck will not be CPU but rather disk IO speed.
If you are not careful you may even slow the processes down (but you will need to measure it to know for certain).
It is possible though i'm not sure it will be worth the effort. Have you considered reading the entire file into memory within a single thread and then allow multiple threads to access that data?
You shouldn't need to do anything particularly clever if all they're doing is reading. Obviously you can read it as many times in parallel as you like, as long as you don't exclusively lock it. Writing is clearly another matter of course...
I do have to wonder why you'd want to though - it will likely perform badly, since your HDD will waste a lot of time seeking back and forth rather than reading it all in one (relatively) uninterrupted sweep. For small files (like your 4000-byte example) where that might not be such a problem, it doesn't seem worth the trouble.
Reading: No need to lock the file. Just open the file as read only or shared read
Writing: Use a mutex to ensure the file is only written to by one person.
#include <algorithm>
#include <fstream>
#include <iostream>
#include <mutex>
#include <thread>
#include <vector>

using namespace std;

std::mutex mtx;

void worker(int n)
{
    mtx.lock();
    char * memblock;
    ifstream file ("D:\\test.txt", ios::in);
    if (file.is_open())
    {
        memblock = new char [1000];
        file.seekg (n * 999, ios::beg);
        file.read (memblock, 999);
        memblock[999] = '\0';
        cout << memblock << endl;
        file.close();
        delete[] memblock;
    }
    else
        cout << "Unable to open file";
    mtx.unlock();
}

int main()
{
    vector<std::thread> vec;
    for(int i = 0; i < 3; i++)
    {
        vec.push_back(std::thread(&worker, i));
    }
    std::for_each(vec.begin(), vec.end(), [](std::thread& th)
    {
        th.join();
    });
    return 0;
}
You need a way to sync those threads. There are different mutex solutions: http://en.wikipedia.org/wiki/Mutual_exclusion
He wants to read from a file in different threads. I guess that should be ok if the file is opened as read-only by each thread.
I hope you don't want to do this for performance though, since you will have to scan large parts of the file for newline characters in each thread.