I'm writing an SDL application for Linux that runs from the console (no X server). One function I have is a file copy mechanism that copies specific files from the HDD to a USB flash device and shows the progress of the copy in the UI. To do this, I'm using a simple while loop that copies the file in 8 kB chunks so I can track progress. The problem is that it's slow: copying a 100 MB file takes nearly 10 minutes, which is unacceptable.
How can I implement a faster file copy? I was thinking about some asynchronous API that would read the file from the HDD into a buffer and store the data to the USB device in a separate thread, but I don't know if I should implement this myself, because it doesn't look like an easy task. Maybe you know some C++ API/library that can do that for me? Or maybe some other, better method?
Don't synchronously update your UI with the copy progress; that will slow things down considerably. Run the file copy on a separate thread from the main UI thread so that the copy can proceed as fast as possible without impeding the responsiveness of your application. The UI can then update itself at its natural rate (e.g. at the refresh rate of your monitor).
You should also use a larger buffer size than 8 KB. Experiment around, but I think you'll get faster results with larger buffer sizes (e.g. in the 64-128 KB range).
So, it might look something like this:
#include <fcntl.h>
#include <pthread.h>
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

#define BUFSIZE (64*1024)

volatile off_t progress, max_progress;

void *thread_proc(void *arg)
{
    // Error checking omitted for expository purposes
    char buffer[BUFSIZE];
    int in = open("source_file", O_RDONLY);
    int out = open("destination_file", O_WRONLY | O_CREAT | O_TRUNC, 0644); // O_CREAT needs a mode
    // Get the input file size
    struct stat st;
    fstat(in, &st);
    progress = 0;
    max_progress = st.st_size;
    ssize_t bytes_read;
    while ((bytes_read = read(in, buffer, BUFSIZE)) > 0)
    {
        write(out, buffer, bytes_read); // write only what was actually read, not BUFSIZE
        progress += bytes_read;
    }
    // copy is done, or an error occurred
    close(in);
    close(out);
    return 0;
}

void start_file_copy()
{
    pthread_t t;
    pthread_create(&t, NULL, &thread_proc, 0);
}
// In your UI thread's repaint handler, use the values of progress and
// max_progress
Note that if you are sending a file to a socket instead of another file, you should instead use the sendfile(2) system call, which copies the file directly in kernel space without round tripping into user space. Of course, if you do that, you can't get any progress information, so that may not always be ideal.
For Windows systems, you should use CopyFileEx, which is both efficient and provides you with a progress callback routine.
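For reference, a rough sketch of how CopyFileEx with a progress callback might look (Windows only; the file names are placeholders and error handling is omitted):

#include <windows.h>
#include <cstdio>

// Called periodically by the system while the copy is in progress.
static DWORD CALLBACK progress_routine(
    LARGE_INTEGER total, LARGE_INTEGER transferred,
    LARGE_INTEGER /*streamSize*/, LARGE_INTEGER /*streamTransferred*/,
    DWORD /*streamNumber*/, DWORD /*callbackReason*/,
    HANDLE /*hSource*/, HANDLE /*hDestination*/, LPVOID /*data*/)
{
    std::printf("%lld / %lld bytes\n", transferred.QuadPart, total.QuadPart);
    return PROGRESS_CONTINUE;   // keep copying
}

void copy_with_progress()
{
    CopyFileExA("source_file", "destination_file", progress_routine,
                NULL /*callback data*/, NULL /*cancel flag*/, 0 /*flags*/);
}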
Let the OS do all the work:
Map the file to memory with mmap; this will drastically speed up the reading process.
Save it to the destination file using msync.
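A minimal sketch of that approach, assuming the whole file fits in the address space (error checking omitted, file names are placeholders; for UI progress you would copy in slices instead of one big memcpy):

#include <fcntl.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

void copy_with_mmap(const char *src_path, const char *dst_path)
{
    int in = open(src_path, O_RDONLY);
    int out = open(dst_path, O_RDWR | O_CREAT | O_TRUNC, 0644);

    struct stat st;
    fstat(in, &st);
    ftruncate(out, st.st_size);          // destination must have its final size before mapping

    void *src = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, in, 0);
    void *dst = mmap(NULL, st.st_size, PROT_READ | PROT_WRITE, MAP_SHARED, out, 0);

    memcpy(dst, src, st.st_size);        // copy in slices here if you want to report progress
    msync(dst, st.st_size, MS_SYNC);     // flush the destination mapping to the device

    munmap(src, st.st_size);
    munmap(dst, st.st_size);
    close(in);
    close(out);
}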
I'm writing a server that will compress files and send them over an http socket.
Unfortunately, they're not really files, they're more like database entries from a remote source.
I want to compress each entry in memory and then send it out over my HTTP server; each entry is potentially large, like 1 GB.
I receive data from the source in chunks, for instance 16 MB (but it could be any chunk size that makes sense).
Conceptually, this is what is happening, although this is a little bit of pseudo-code:
archive *_archive = archive_write_new();
//set to zip format
bool ok = true;
ok |= archive_write_set_format( _archive, ARCHIVE_FORMAT_ZIP );
ok |= archive_write_add_filter( _archive, ARCHIVE_FILTER_NONE );
char *_archiveBuffer = malloc(8192);
size_t _used;
ok = archive_write_open_memory( _archive, _archiveBuffer, 8192, &_used );
if (!ok) return ERROR;
archive_entry *_archiveEntry = archive_entry_new();
//fetch metadata about the object by id
QString id = "123456789";
QJsonObject metadata = database.fetchMetadata(id);
int size = metadata["size"].toInt();
//write the http header
httpd.writeHeader(size);
archive_entry_set_pathname( _archiveEntry, "entries/"+id );
archive_entry_set_size( _archiveEntry, size );
archive_entry_set_filetype( _archiveEntry, AE_IFREG );
//archive_entry_set_perm( entry, ... );
archive_write_header( _archive, _archiveEntry );
int chunksize = 16777216;
for (int w = 0; w < size; w+=chunksize)
{
QByteArray chunk = database.fetchChunk(id,chunksize);
archive_write_data( _archive, chunk.data(), (size_t) chunk.size() );
//accumulate data, then fetch compressed data from _archiveBuffer and write to httpd
if (_used > 0)
{
httpd.writeData(_archiveBuffer);
//clear archive buffer?
}
}
archive_entry_free(_archiveEntry);
archive_write_close(_archive);
httpd.writeData(_archiveBuffer);
archive_write_free(_archive);
The question is: how do I know when data has been compressed to _archiveBuffer, and when it has, how can I read the buffer and then clear it, resetting the _used counter? I assume if _used > 0, a compress/flush has happened.
Also, does the _archiveBuffer need to be greater than my chunksize?
Seems like I may need to use a callback, but unclear how to use archive_write_open with a callback and a memory buffer.
I can't seem to find examples online.
Any help would be appreciated!
The solution was much easier than I thought... just took a minute to realize it.
I'm sure it's obvious to those familiar with the library and streams.
Using callbacks was the answer. There is no need to open in memory, as that creates an extra layer which is not useful; the library already manages its own buffering.
Depending on how your multithreading is configured, the callback will execute when something interesting happens on the archive stream. For instance, you can write single bytes to the archive over and over, but only when it's saturated will a callback happen. At that moment you can write to the network or wherever from the callback. The void *client_data is key, because it links back to your main classes and API.
In my case I didn't want to write an http header until (archive) data was available, because any error could happen when fetching, which may result in a different http header.
When data is done, the close and free functions will also do their work with callbacks, so destructors need to happen after those callbacks complete.
Now the task is to multithread these requests... which seems simple now that I get the library.
If anyone is interested, I can post new pseudo-code.
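In the meantime, here is a rough sketch of the callback approach (not my exact code; the send callable is a placeholder for whatever your HTTP layer provides, and error checking is omitted):

#include <archive.h>
#include <archive_entry.h>
#include <cstddef>
#include <functional>

struct StreamContext {
    // Placeholder for whatever you need to reach your HTTP layer.
    std::function<void(const void *data, std::size_t len)> send;
};

static int open_cb(struct archive *, void *) { return ARCHIVE_OK; }

// libarchive calls this whenever it has archived/compressed bytes ready to emit.
static la_ssize_t write_cb(struct archive *, void *client_data,
                           const void *buffer, size_t length)
{
    StreamContext *ctx = static_cast<StreamContext *>(client_data);
    ctx->send(buffer, length);
    return static_cast<la_ssize_t>(length);   // report how many bytes were consumed
}

static int close_cb(struct archive *, void *) { return ARCHIVE_OK; }

void stream_archive(StreamContext *ctx)
{
    struct archive *a = archive_write_new();
    archive_write_set_format(a, ARCHIVE_FORMAT_ZIP);
    archive_write_add_filter(a, ARCHIVE_FILTER_NONE);
    archive_write_open(a, ctx, open_cb, write_cb, close_cb);

    // ... archive_entry_new / archive_write_header / archive_write_data loop as before ...

    archive_write_close(a);   // flushes the remaining buffered data through write_cb
    archive_write_free(a);
}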
My goal is to parse large CSV files with C++ in a Qt project, in an OS X environment.
(When I say CSV I mean TSV and other variants, 1 GB ~ 5 GB.)
It seems like a simple task, but things get complicated when file sizes get bigger. I don't want to write my own parser because of the many edge cases related to parsing CSV files.
I have found various csv processing libraries to handle this job, but parsing 1GB file takes about 90 ~ 120 seconds on my machine which is not acceptable. I am not doing anything with the data right now, I just process and discard the data for testing purposes.
cccsvparser is one of the libraries I have tried. But the only fast enough library was fast-cpp-csv-parser, which gives acceptable results: 15 secs on my machine, but it works only when the file structure is known.
Example using: fast-cpp-csv-parser
#include "csv.h"
int main(){
io::CSVReader<3> in("ram.csv");
in.read_header(io::ignore_extra_column, "vendor", "size", "speed");
std::string vendor; int size; double speed;
while(in.read_row(vendor, size, speed)){
// do stuff with the data
}
}
As you can see I cannot load arbitrary files, and I must specifically define variables to match my file structure. I'm not aware of any method that allows me to create those variables dynamically at runtime.
The other approach I have tried is to read the CSV file line by line with the fast-cpp-csv-parser LineReader class, which is really fast (about 7 secs to read the whole file), and then parse each line with the cccsvparser lib that can process strings, but this takes about 40 seconds until done. It is an improvement compared to the first attempts, but still unacceptable.
I have seen various Stack Overflow questions related to CSV file parsing, but none of them takes large file processing into account.
Also I spent a lot of time googling to find a solution to this problem, and I really miss the freedom that package managers like npm or pip offer when searching for out-of-the-box solutions.
I will appreciate any suggestion about how to handle this problem.
Edit:
When using #fbucek's approach, processing time was reduced to 25 seconds, which is a great improvement.
Can we optimize this even more?
I am assuming you are using only one thread.
Multithreading can speed up your process.
Best accomplishment so far is 40 sec. Let's stick to that.
I have assumed that first you read, then you process (about 7 secs to read the whole file):
7 sec for reading
33 sec for processing
First of all you can divide your file into chunks, let's say 50MB.
That means that you can start processing after reading 50 MB of the file. You do not need to wait till the whole file is read.
That's 0.35 sec for reading (now it is 0.35 + 33 seconds for processing = ca. 34 sec).
When you use multithreading, you can process multiple chunks at a time. That can theoretically speed up processing by up to the number of your cores. Let's say you have 4 cores.
That's 33/4 = 8.25 sec.
I think you can speed up your processing with 4 cores to about 9 s in total.
Look at QThreadPool and QRunnable or QtConcurrent
I would prefer QThreadPool
Divide task into parts:
First try to loop over file and divide it into chunks. And do nothing with it.
Then create "ChunkProcessor" class which can process that chunk
Make "ChunkProcessor" a subclass of QRunnable and in reimplemented run() function execute your process
When you have chunks, you have class which can process them and that class is QThreadPool compatible, you can pass it into
It could look like this
loopoverfile {
whenever chunk is ready {
ChunkProcessor *chunkprocessor = new ChunkProcessor(chunk);
QThreadPool::globalInstance()->start(chunkprocessor);
connect(chunkprocessor, SIGNAL(finished(std::shared_ptr<ProcessedData>)), this, SLOT(readingFinished(std::shared_ptr<ProcessedData>)));
}
}
You can use std::shared_ptr to pass the processed data, so you do not need QMutex or something else, and you avoid serialization problems with multiple threads accessing the same resource.
Note: in order to use custom signal you have to register it before use
qRegisterMetaType<std::shared_ptr<ProcessedData>>("std::shared_ptr<ProcessedData>");
Edit: (based on discussion, my answer was not clear about that)
It does not matter what disk you use or how fast it is. Reading is a single-threaded operation.
This solution was suggested only because it took 7 sec to read, and again it does not matter what disk it is; 7 sec is what counts. The only purpose is to start processing as soon as possible, not to wait till reading is finished.
You can use:
QByteArray data = file.readAll();
Or you can use the principal idea (I do not know why it takes 7 sec to read, or what is behind it):
QFile file("in.txt");
if (!file.open(QIODevice::ReadOnly | QIODevice::Text))
return;
QByteArray* data = new QByteArray;
int count = 0;
while (!file.atEnd()) {
++count;
data->append(file.readLine());
if ( count > 10000 ) {
ChunkProcessor *chunkprocessor = new ChunkProcessor(data);
QThreadPool::globalInstance()->start(chunkprocessor);
connect(chunkprocessor, SIGNAL(finished(std::shared_ptr<ProcessedData>)), this, SLOT(readingFinished(std::shared_ptr<ProcessedData>)));
data = new QByteArray;
count = 0;
}
}
One file, one thread, read almost as fast as reading line by line "without" interruption.
What you do with the data is another problem, but it has nothing to do with I/O. It is already in memory.
So the only concern would be a 5 GB file and the amount of RAM on the machine.
It is a very simple solution: all you need is to subclass QRunnable, reimplement the run() function, emit a signal when it is finished, pass the processed data using a shared pointer, and in the main thread join that data into one structure or whatever. A simple thread-safe solution.
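For illustration, a minimal sketch of such a worker; ProcessedData and the actual parsing are placeholders, and the qRegisterMetaType call shown above is still required:

#include <QObject>
#include <QRunnable>
#include <QByteArray>
#include <memory>

struct ProcessedData {
    // parsed rows, statistics, ... whatever your application needs
};

class ChunkProcessor : public QObject, public QRunnable
{
    Q_OBJECT
public:
    explicit ChunkProcessor(QByteArray *chunk) : m_chunk(chunk)
    {
        setAutoDelete(true);               // QThreadPool deletes the runnable when it is done
    }

    void run() override
    {
        auto result = std::make_shared<ProcessedData>();
        // ... parse m_chunk line by line and fill *result ...
        delete m_chunk;                    // the chunk was handed over with new
        emit finished(result);
    }

signals:
    void finished(std::shared_ptr<ProcessedData> result);

private:
    QByteArray *m_chunk;
};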
I would propose a multi-threaded approach with a slight variation: one thread is dedicated to reading the file in chunks of a predefined (configurable) size and keeps feeding data to a set of processing threads (more than one, based on the number of CPU cores). Let us say that the configuration looks like this:
chunk size = 50 MB
Disk Thread = 1
Process Threads = 5
Create a class for reading data from the file. It holds a data structure used to communicate with the process threads; for example, this structure contains the starting and ending offset of the read buffer for each process thread. For reading file data, the reader class holds 2 buffers, each of chunk size (50 MB in this case).
Create a process class which holds (shared) pointers to the read buffers and the offsets data structure.
Now create a driver (probably the main thread) which creates all the threads, waits for their completion, and handles the signals.
The reader thread is invoked with the reader class, reads 50 MB of the data and, based on the number of threads, creates the offsets data structure object. In this case t1 handles 0 - 10 MB, t2 handles 10 - 20 MB and so on. Once ready, it notifies the processor threads. It then immediately reads the next chunk from disk and waits for the completion notification from the processor threads.
On notification, each processor thread reads data from its part of the buffer and processes it. Once done, it notifies the reader thread about completion and waits for the next chunk.
This continues until the whole file is read and processed. Then the reader thread notifies the main thread about completion; the main thread sends PROCESS_COMPLETION, upon which all threads exit, or it chooses to process the next file in the queue.
Note that the offsets are used for easy explanation; mapping offsets to line delimiters needs to be handled programmatically.
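A much-simplified, single-buffer sketch of this idea (plain std::thread instead of a Qt pool; real code would use the two buffers described above so that reading and processing overlap, and would align slice boundaries to line breaks):

#include <cstddef>
#include <fstream>
#include <functional>
#include <thread>
#include <vector>

struct Slice { std::size_t begin; std::size_t end; };   // offsets into the chunk for one worker

static void process_slice(const std::vector<char> &chunk, Slice s)
{
    // ... parse chunk[s.begin, s.end) here; real code must snap the ends to line breaks ...
}

int main()
{
    const std::size_t chunkSize = 50 * 1024 * 1024;   // 50 MB, as in the example configuration
    const unsigned workers = 5;

    std::ifstream in("big.csv", std::ios::binary);    // file name is a placeholder
    std::vector<char> chunk(chunkSize);

    while (in) {
        in.read(chunk.data(), chunk.size());
        std::size_t got = static_cast<std::size_t>(in.gcount());
        if (got == 0) break;

        // Split the chunk into one slice per worker and process the slices in parallel.
        std::vector<std::thread> pool;
        std::size_t per = got / workers;
        for (unsigned t = 0; t < workers; ++t) {
            Slice s{ t * per, (t + 1 == workers) ? got : (t + 1) * per };
            pool.emplace_back(process_slice, std::cref(chunk), s);
        }
        for (auto &th : pool) th.join();              // wait before reusing the buffer
    }
}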
If the parser you use is not distributed, the approach is obviously not scalable.
I would vote for a technique like the one below:
chunk up the file into a size that can be handled by a machine / time constraint
distribute the chunks to a cluster of machines (1..*) that can meet your time/space requirements
consider dealing in block sizes within a given chunk
Avoid threads on the same resource (i.e. a given block) to save yourself from all thread-related issues.
Use threads to achieve non competing (on a resource) operations - such as reading on one thread and writing on a different thread to a different file.
do your parsing (now for this small chunk you can invoke your parser).
do your operations.
merge the results back, or distribute them as they are if you can.
Now, having said that, why can't you use Hadoop-like frameworks?
I wrote this program where, in one part, a thread takes char* buffers and writes them to a pipe
that was created as follows:
ret_val = mkfifo(lpipename.c_str(), 0666);
pipehandler = open(lpipename.c_str(), O_RDWR);
then I write to the pipe one buffer after another as follows:
int size = string(pcstr->buff).length();
numWritten = write(pipehandler, pcstr->buff, size);
Each pcstr->buff is a pointer to a malloc'ed buffer of a pre-configured size of 1-5 MB.
However, writing to the pipe takes much longer than filling pcstr->buff (from another source), and that makes my program run too slow.
Does anyone have any idea of a faster writing method?
Thanks
each pcstr->buff is a pointer to a malloc'ed buffer of a pre-configured size of 1-5 MB
Just save the length somewhere. Copying it into std::string just to find out its size is rather wasteful. Or use strlen().
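For example (assuming buff really is NUL-terminated text):

#include <cstring>
#include <unistd.h>

// Hypothetical helper: write the buffer without copying it into a std::string first.
ssize_t write_buffer(int pipehandler, const char *buff)
{
    return write(pipehandler, buff, strlen(buff));   // better still: remember the length when filling buff
}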
however, writing to the pipe takes much longer than filling pcstr->buff (from another source), and that makes my program run too slow
In Linux the default maximum pipe buffer size is 1 MB as of today. You mentioned you write more than 1 MB into the pipe. When that happens the writing thread blocks until some data from the pipe has been consumed.
Does anyone have any idea of a faster writing method?
Use a plain file in /dev/shm or /tmp. On many recent Linux systems /tmp is an in-memory filesystem. This only works, though, if the amount of data sent through the pipe can be saved in a file without overflowing the available disk space or memory.
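A minimal sketch of that idea (the path is a placeholder, error checking omitted):

#include <fcntl.h>
#include <unistd.h>
#include <cstring>

int main()
{
    // Append each buffer to a file on the in-memory filesystem instead of writing to the pipe;
    // the consumer reads the file later. Bounded only by available RAM, not by the pipe size.
    int fd = open("/dev/shm/mydata.bin", O_WRONLY | O_CREAT | O_APPEND, 0600);
    const char *buff = "example payload";            // stands in for pcstr->buff
    write(fd, buff, strlen(buff));
    close(fd);
    return 0;
}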
Background
I am currently working on a small application that grabs the RGB and depth map streams from a Microsoft Kinect device and saves them on disk for future analysis. When I run the program it should output each frame as a separate image on disk.
The framerate of the Kinect is 30 fps, but there are two sources, so this makes it (approximately) 60 fps. If I naively try to just save each frame when it arrives I will get dropped frames, as is demonstrated by the bundled freenect/record.c application.
I rewrote the application to use one thread that grabs the frames from the device and pushes them to the back of a double-ended queue (std::deque). Then there are two threads that each pop frames from the front of the queue and save them to disk.
When the recording is turned off, there is a potentially large number of frames left in the list that still need to be recorded, so before exiting we let the two save threads do their work until finished.
Now the actual problem
Although the problem of dropped frames is solved, writes to the filesystem are still quite slow. Is there any good way to speed up the file creation on disk?
Currently, the function dump_frame looks like this:
static void
dump_frame(struct frame* frame)
{
    FILE* fp;
    char filename[512]; /* plenty of space! */

    sprintf(filename, "d-%f-%u.pgm", get_time(), frame->timestamp);

    fp = fopen(filename, "w");
    fprintf(fp, "P5 %d %d 65535\n", frame->width, frame->height);
    fwrite(frame->data, frame->size, 1, fp);
    fclose(fp);
}
I am running Fedora 14 x64, so the solution only has to work on Linux.
You need to measure what takes time in your specific case. Is it creating multiple files or actually writing the image data to disk?
When I tested on my local system with OSX and an Intel SSD X25M 2G I noticed a huge variation in writes when writing multiple 1MB files vs writing 1 multi MB file. This is probably due to housekeeping of the filesystem and will vary depending on the file system you have.
To avoid the housekeeping you could write all your images to the same file and split it later. However, the data you are saving needs about 60 MB/s sustained write speed, which is quite high.
An alternative, if you have a lot of memory, is to create a RAM disk and store the images there first, moving them to the persistent file system later. With a 6 GB RAM disk you could store about 100 seconds of video.
A possible improvement would be to explicitly set the buffering of fp to full using setvbuf:
const size_t BUFFER_SIZE = 1024 * 16;
fp = fopen(filename, "w");
setvbuf(fp, 0, _IOFBF, BUFFER_SIZE); /* Must be immediately after the open. */
fprintf(fp, "P5 %d %d 65535\n", frame->width, frame->height);
fwrite(frame->data, frame->size, 1, fp);
fclose(fp);
You could profile using different buffer sizes to determine which provides the best performance.
How can I create a global counter value that can be shared between multiple processes in C++? What I need is a way to "invalidate" multiple processes at once, signaling them to perform some operation (like reading from a file). All processes would continuously poll (every 10 ms) for the current counter value and compare it with the internally stored last value. Mismatching values would indicate that some work is needed.
Edit: btw, my processes are running as different .exe's, not created from some parent process. The operating system is Windows.
What about a named semaphore? POSIX supports it; not sure about Windows.
Consider the way you want to distribute the information and potential overlaps - if it takes longer for any of the readers to finish reading than it takes for a refresh then you are going to get in trouble with the suggested approach.
The way I read your question, there are multiple readers, the writer doesn't know (or care in most part) how many readers there are at one time, but wants to notify the readers that something new is available to read.
Without knowing how many potential readers there are, you can't use a simple mutex or semaphore to know when the readers are done, and without knowing when everybody is done you don't have good information on when to reset an event to notify for the next read.
MS Windows specific:
Shared Segments
One option is to place variables within a shared data segment. That means that the same variables can be read (and written) by all exe's that name the same segment or, if you put it into a DLL, by everything that loads the shared DLL.
See http://www.codeproject.com/KB/DLL/data_seg_share.aspx for more info.
// Note: Be very wary of using anything other than primitive types here!
#pragma data_seg(".mysegmentname")
HWND hWnd = NULL;
LONG nVersion = -1;
#pragma data_seg()
#pragma comment(linker, "/section:.mysegmentname,rws")
IPC - COM
Make your main app a COM service that the workers can register with for events, and push out the change to each event sink.
IPC - dual events
Assuming any one read cycle takes much less than the time between write events:
Create 2 manual-reset events; at any time at most 1 of them is signaled, and you alternate between them. Signaling immediately releases all the readers, and once they are done they wait on the alternate event.
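A rough Windows sketch of this scheme (event names are placeholders, error handling is omitted, and it relies on the assumption above that a read cycle finishes well before the next write event):

#include <windows.h>

// Writer side: alternate which of the two manual-reset events gets signaled.
void notify_readers()
{
    static HANDLE evt[2] = {
        CreateEventA(NULL, TRUE, FALSE, "Global\\myapp_evt0"),   // manual reset, initially non-signaled
        CreateEventA(NULL, TRUE, FALSE, "Global\\myapp_evt1")
    };
    static int current = 0;

    ResetEvent(evt[1 - current]);    // re-arm the event the readers will wait on next
    SetEvent(evt[current]);          // release everyone waiting on the current event
    current = 1 - current;
}

// Reader side: wait on one event, do the work, then switch to the other.
void reader_loop()
{
    HANDLE evt[2] = {
        CreateEventA(NULL, TRUE, FALSE, "Global\\myapp_evt0"),
        CreateEventA(NULL, TRUE, FALSE, "Global\\myapp_evt1")
    };
    int expected = 0;
    for (;;) {
        WaitForSingleObject(evt[expected], INFINITE);
        // ... re-read the shared file here ...
        expected = 1 - expected;     // the next notification comes on the other event
    }
}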
You can do this the easy way or the hard way.
The easy way is to store the shared value in the registry or in a file that all processes agree to check frequently.
The hard way is to use IPC (inter-process communication); the most common method that I use is named pipes. It's not too hard, because you can find plenty of resources about IPC on the net.
If you are on *nix you could make the processes read from a named pipe (or sockets), and then write a specific message there to tell the other processes that they should shut down.
IPC performance: Named Pipe vs Socket
Windows Named Pipes alternative in Linux
Use a named event object with manual reset. The following solution doesn't use the CPU as much as busy waiting would.
Sending process:
Set event
Sleep 10 ms
Reset Event
Receiving processes:
All waiting processes pass when event is set
They read the file
Let them sleep for 20 ms, so they can't see the same event twice.
Wait again
Sleep( 10 ) might actually take longer than Sleep( 20 ) but this only results in another cycle (reading the unchanged file again).
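A short sketch of this scheme (Windows; the event name is a placeholder):

#include <windows.h>

// Both sides open (or create) the same named manual-reset event once at startup.
HANDLE open_change_event()
{
    return CreateEventA(NULL, TRUE /*manual reset*/, FALSE, "Global\\myapp_file_changed");
}

// Sending process: pulse the event long enough for all pollers to pass the wait.
void signal_change(HANDLE evt)
{
    SetEvent(evt);
    Sleep(10);
    ResetEvent(evt);
}

// Receiving process
void receive_loop(HANDLE evt)
{
    for (;;) {
        WaitForSingleObject(evt, INFINITE);
        // ... read the file here ...
        Sleep(20);   // longer than the sender's 10 ms window, so the same pulse is not seen twice
    }
}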
As the name of the executable is known, I have another solution which I implemented (in C#) in a project just a few days ago:
Every reader process creates a named event "Global\someuniquestring_%u", with %u being its process id. If the event is signaled, read the file and do the work.
The sender process has a list of event handles and sets them active if the file has changed, thus notifying all reader processes. From time to time, e.g. when the file has changed, it has to update the list of event handles:
Get all processes with name 'reader.exe' (e.g.)
For every process get its id
Open a handle for the existing event "Global\someuniquestring_%u" if it's a new process.
Close all handles for no longer running processes.
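A C++ sketch of the reader side of this scheme (the name prefix is a placeholder; the sender enumerates the reader processes and signals the matching events as described in the steps above):

#include <windows.h>
#include <cstdio>

void reader_main()
{
    char name[64];
    std::sprintf(name, "Global\\someuniquestring_%u", (unsigned)GetCurrentProcessId());
    HANDLE evt = CreateEventA(NULL, TRUE, FALSE, name);    // manual reset, created by the reader

    for (;;) {
        WaitForSingleObject(evt, INFINITE);
        ResetEvent(evt);            // re-arm before doing the work
        // ... read the file and do the work ...
    }
}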
Found one solution for monitoring folder changes (with "event_trigger"-event) and reading additional event information from file:
HANDLE event_trigger;
__int64 event_last_time;
vector<string> event_info_args;
string event_info_file = "event_info.ini";
// On init
event_trigger = FindFirstChangeNotification(".", false, FILE_NOTIFY_CHANGE_LAST_WRITE);
event_last_time = stat_mtime_force("event_info.ini");
// On tick
if (WaitForSingleObject(event_trigger, 0)==0)
{
ResetEventTrigger(event_trigger);
if (stat_mtime_changed("event_info.ini", event_last_time))
{
FILE* file = fopen_force("event_info.ini");
char buf[4096];
assert(fgets(buf, sizeof(buf), file));
split(buf, event_info_args, "\t\r\n");
fclose(file);
// Process event_info_args here...
HWND wnd = ...;
InvalidateRect(wnd,0,false);
}
}
// On event invokation
FILE* file = fopen("event_info.ini", "wt");
assert(file);
fprintf(file,"%s\t%s\t%d\n",
"par1", "par2", 1234);
fclose(file);
stat_mtime_changed("event_info.ini", event_last_time);
// Helper functions:
void ResetEventTrigger(HANDLE evt)
{
do
{
FindNextChangeNotification(evt);
}
while(WaitForSingleObject(evt, 0)==0);
}
FILE* fopen_force(const char* file)
{
FILE* f = fopen(file, "rt");
while(!f)
{
Sleep(10+(rand()%100));
f = fopen(file, "rt");
}
assert(f);
return f;
}
__int64 stat_mtime_force(const char* file)
{
struct stat stats;
int res = stat(file, &stats);
if(res!=0)
{
FILE* f = fopen(file, "wt");
fclose(f);
res = stat(file, &stats);
}
assert(res==0);
return stats.st_mtime;
}
bool stat_mtime_changed(const char* file, __int64& time)
{
__int64 newTime = stat_mtime_force(file);
if (newTime - time > 0)
{
time = newTime;
return true;
}
return false;
}