Replaying stored data at a fixed rate - C++

I am working on a problem where I want to replay data stored in a file at a specified rate, e.g. 25,000 records/second.
The file is in ASCII format. Currently, I read each line of the file and apply a regex to extract the data; 2-4 lines make up a record. I timed this operation and it takes close to 15 microseconds to generate each record.
The time taken to publish each record is 6 microseconds.
If I perform the reading and writing sequentially, then I end up with 21 microseconds to publish each record, so effectively my upper bound is ~47K records per second.
If I multi-thread the reading and writing, then I will be able to send out a packet every 9 microseconds (neglecting the locking penalty, since the reader and writer share the same queue), which gives a throughput of 110K ticks per second.
Is my proposed design correct?
What kind of queue and locking construct has the minimum penalty when a single producer and consumer share a queue?
If I would like to scale beyond this, what's the best approach?
My application is in C++

If it takes 15 µs to read/prepare a record, then your maximum throughput will be about 1 s / 15 µs ≈ 67k/sec. You can ignore the 6 µs part, as the single thread reading the file cannot generate records any faster than that. (Try it: change the program to only read/process and discard the output.) I'm not sure how you got 9 µs.
To make this fly beyond 67k/sec ...
A) Estimate the maximum records per second you can read from the disk and format. While this depends a lot on hardware, a figure of 20 MB/s is typical for an average laptop. This number gives you the upper bound to aim for, and as you get close to it you can ease off trying.
B) Create a single thread just to read the file and incur the IO delay. This thread should write into large preallocated buffers, say 4 MB each. See http://en.wikipedia.org/wiki/Circular_buffer for a way of managing these. You are looking to hold maybe 1,000 records per buffer (a guess, but certainly not just 8 or so records). Pseudo code:
while not EOF
    allocate big buffer
    while not EOF and not buffer full
        read file using fgets() or whatever
        apply only very small preprocessing, ideally none
        save into buffer
    release buffer for other threads
C) Create another thread (or several, if the order of records is not important) to process a ring buffer when it is full; this is your regex step. This thread in turn writes to another set of output ring buffers (tip: keep the ring buffer control structures apart in memory).
while program running
    wait/get an input buffer to process, using semaphores/mutexes/whatever you prefer
    allocate an output buffer
    process records from the input buffer
    place the results in the output buffer
    release the output buffer for the next thread
    release the input buffer back to the reading thread
D) Create your final thread to consume the data. It isn't clear if this output is being written to disk or network, so this might affect the disk reading thread.
    wait/get an input buffer from the processed-records pool
    output the records to wherever they go
    return the buffer to the processed-records pool
Notes.
Preallocate all buffers and pass them back to where they came from. E.g. you might have 4 buffers between the file reading thread and the processing threads; when all 4 are in use, the file reader waits for one to become free rather than allocating new buffers.
Try not to memset() buffers if you can avoid it; it's a waste of memory bandwidth.
You won't need many buffers; perhaps six per ring buffer.
The system will auto tune to slowest thread ( http://en.wikipedia.org/wiki/Theory_of_constraints ) so if you can read and prepare data faster than you want to output it, all the buffers will fill up and everything will pause except the output.
As the threads pass reasonable amounts of data at each sync point, the overhead of the synchronization will not matter too much.
The above design is how some of my code reads CSV files as quickly as possible; basically it all comes down to input IO bandwidth as the limiting factor.
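To answer the queue question directly: with exactly one producer and one consumer, a fixed-size ring buffer indexed by two atomics needs no lock at all, so the per-handoff penalty is a couple of atomic operations. A minimal sketch (the element type and capacity are illustrative, not from the answer above):

#include <atomic>
#include <cstddef>

// Lock-free single-producer/single-consumer ring buffer. Safe only when
// exactly one thread calls push() and exactly one other thread calls pop().
template <typename T, std::size_t N>
class SpscQueue {
    T items[N];
    std::atomic<std::size_t> head{0};  // next slot to pop; written by consumer
    std::atomic<std::size_t> tail{0};  // next slot to push; written by producer
public:
    bool push(const T& v) {                            // producer side
        std::size_t t = tail.load(std::memory_order_relaxed);
        std::size_t next = (t + 1) % N;
        if (next == head.load(std::memory_order_acquire))
            return false;                              // full; caller retries
        items[t] = v;
        tail.store(next, std::memory_order_release);   // publish the slot
        return true;
    }
    bool pop(T& v) {                                   // consumer side
        std::size_t h = head.load(std::memory_order_relaxed);
        if (h == tail.load(std::memory_order_acquire))
            return false;                              // empty
        v = items[h];
        head.store((h + 1) % N, std::memory_order_release);
        return true;
    }
};

Pushing pointers to the big preallocated buffers (rather than individual records) through a queue like this keeps the synchronization cost per record negligible.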

Related

Handling many ofstream operations at once

I would like to ask for advice, as I am not a very experienced C++ user. I lack some knowledge about threads, but I've done a lot in Android so I know the general idea.
I need to write 8 *.wav files at once. I have a callback that is called very often with the incoming signal from 8 input channels. I need to save the data from each channel to a *.wav file (recording). This requires me to open each file every time I get new data and append another 256 samples to the end of it.
Doing so with 2 inputs is fine, but with 3 or more my input latency starts to increase. The processor is lagging, so I probably have to do this in some kind of thread.
I think this should be quite a common problem, but I haven't yet learned how to handle it. Can someone explain the right way to do it? Is it necessary to use http://www.cplusplus.com/reference/thread/thread/ or are there other simple/elegant patterns?
You need to record, or save, the data from 8 input channels.
I highly recommend at least 8 large buffers to contain the data.
When there is a pause in the input, or in the background, you can write the data to the files. Wait for a large amount of data to accumulate before writing the buffers to the files: file I/O loves to process large blocks of data rather than many small ones. You can always flush the output streams, which tells the OS to write to the file.
If you want to play with threads, I recommend at least three.
Thread 1 -- read input channels and store in buffers.
Thread 2 -- GUI
Thread 3 -- Writes buffers to files.
Remember that thread 1 has the highest priority. When it sees that little space remains in a buffer, it should wake thread 3 to write the buffers out to the files.
You should maintain at least 2 buffers for each input channel. This is called double buffering and allows thread 3 to write the buffer to file while thread 1 is reading the input channel data into another buffer.
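A minimal sketch of that scheme, assuming a hypothetical appendToWav() helper that appends samples to a channel's file; the swap threshold is illustrative. (A real-time audio callback would normally use a lock-free handoff rather than a mutex; this only shows the control flow.)

#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <vector>

void appendToWav(int channel, const std::vector<float>& samples);  // hypothetical

struct ChannelBuffers {
    std::vector<float> buf[2];   // double buffer
    int filling = 0;             // index the input thread writes into
    bool ready = false;          // true when the other buffer awaits writing
};

std::mutex m;
std::condition_variable cv;
ChannelBuffers channels[8];

// Thread 1 (input callback): append samples; swap buffers when one is full.
void onAudioInput(int ch, const float* samples, std::size_t n) {
    ChannelBuffers& c = channels[ch];
    std::lock_guard<std::mutex> lock(m);
    c.buf[c.filling].insert(c.buf[c.filling].end(), samples, samples + n);
    if (c.buf[c.filling].size() >= 48000) {  // ~1 s at 48 kHz: time to flush
        c.filling ^= 1;
        c.ready = true;
        cv.notify_one();                     // wake the writer thread
    }
}

// Thread 3 (writer): flush each full buffer to its .wav file in one big write.
void writerThread() {
    std::unique_lock<std::mutex> lock(m);
    for (;;) {
        cv.wait(lock, [] {
            for (const auto& c : channels) if (c.ready) return true;
            return false;
        });
        for (int ch = 0; ch < 8; ++ch) {
            ChannelBuffers& c = channels[ch];
            if (!c.ready) continue;
            std::vector<float> full = std::move(c.buf[c.filling ^ 1]);
            c.buf[c.filling ^ 1].clear();
            c.ready = false;
            lock.unlock();                   // do the slow file I/O unlocked
            appendToWav(ch, full);
            lock.lock();
        }
    }
}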

Memory-compute overlap issue in CUDA

I have a CUDA kernel that processes a lot of data.
As I cannot transfer all the data at once, I have to split it into chunks and process them chunk by chunk, updating the output on the GPU.
I am parsing the input data from a file.
I was wondering if I could overlap the chunks' memory transfers by having two buffers, both on the host and on the GPU. While processing one chunk, I could read the next, transfer it to the GPU, and launch the kernel in the same stream.
My problem is that the kernel's execution time is slower than parsing the data and transferring it to the GPU. How can I ensure that the memcpys won't overwrite the data that the kernel is using, given that the memcpys are non-blocking?
// e.g. pseudocode
// for every chunk:
//     parse data
//     cudaMemcpyAsync(dev, host, size, H2D)
//     launch kernel
//     switch buffers
// copy result from device to host
Thank you in advance.
Just insert an explicit sync point with cudaDeviceSynchronize() after the kernel launch.
That way, you are essentially starting a memory transfer and launching a kernel at the same time. The transfer would go to one buffer and the kernel would work on the other. The cudaDeviceSynchronize() would wait until both were done, at which time you would swap the buffers and repeat.
Of course, you also need to copy the results from the device to the host within the loop and add logic to handle the first iteration, when there's no data for the kernel to process yet, and the last iteration, when there's no more data to copy but still one buffer to be processed. This can be done with logic within the loop or by partially unrolling the loop, to specifically code the first and last iterations.
Edit:
By moving the sync point to just before the cudaMemcpyAsync() and after the file read and parse, you allow the kernel to also overlap that part of the processing (if the kernel runs long enough).
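A sketch of the resulting loop; parseChunk(), processChunk, the launch configuration, and the sizes are illustrative stand-ins, not from the question:

#include <cuda_runtime.h>

#define CHUNK_SIZE (4 << 20)                         // illustrative 4 MB chunks

__global__ void processChunk(char* data, size_t n);  // illustrative kernel
size_t parseChunk(char* hostBuf);                    // hypothetical: read+parse next chunk

void run() {
    char* h_buf[2];  // pinned host buffers
    char* d_buf[2];  // device buffers
    for (int i = 0; i < 2; ++i) {
        cudaMallocHost(&h_buf[i], CHUNK_SIZE);
        cudaMalloc(&d_buf[i], CHUNK_SIZE);
    }

    int cur = 0;
    size_t n = parseChunk(h_buf[cur]);               // first chunk
    while (n > 0) {
        // Sync point: placed after the read/parse and just before the copy,
        // so the previous kernel and copy are done before their buffers are reused.
        cudaDeviceSynchronize();
        cudaMemcpyAsync(d_buf[cur], h_buf[cur], n, cudaMemcpyHostToDevice);
        processChunk<<<256, 256>>>(d_buf[cur], n);
        cur ^= 1;                                    // switch buffers
        n = parseChunk(h_buf[cur]);                  // file read/parse overlaps the kernel
    }
    cudaDeviceSynchronize();
    // ... copy the accumulated result from device to host ...
}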

Writing many large files quickly in C++

I have a program which gets a stream of raw data from different cameras and writes it to disk. The program runs these sorts of recordings for ~2 minutes and then another program is used to process the frames.
Each raw frame is 2 MB and the frame rate is 30 fps (i.e. a data rate of around 60 MB/s), and I'm writing to an SSD which can easily handle a sustained >150 MB/s (tested by copying 4000 2 MB files from another disk, which took 38 seconds, with Process Explorer showing constant IO activity).
My issue is that calls to fopen(), fwrite() and fclose() occasionally stall for up to 5 seconds, which means 300 MB of frames build up in memory as a backlog, and after a few of these delays I hit the 4 GB limit of a 32-bit process. (When the delay happens, Process Explorer shows a gap in IO activity.)
There is a thread which runs a loop calling this function for every new frame which gets added to a queue:
void writeFrame(char* data, size_t dataSize, char* filepath)
{
    // Time block 2
    FILE* pFile = fopen(filepath, "wb");
    // End Time block 2
    // Time block 3
    fwrite(data, 1, dataSize, pFile);
    // End Time block 3
    // Time block 4
    fclose(pFile);
    // End Time block 4
}
(There's error checking too in the actual code but it makes no difference to this issue)
I'm logging the time it takes for each of the blocks and the total time it takes to run the function and I get results which most of the time look like this: (times in ms)
TotalT,5, FOpenT,1, FWriteT,2, FCloseT,2
TotalT,4, FOpenT,1, FWriteT,1, FCloseT,2
TotalT,5, FOpenT,1, FWriteT,2, FCloseT,2
i.e. ~5 ms to run the whole function: ~1 ms to open the file, ~2 ms to call write and ~2 ms to close the file.
Occasionally, however (on average about 1 in every 50 frames, though sometimes thousands of frames pass between occurrences), I get frames which take over 4000 ms:
TotalT,4032, FOpenT,4023, FWriteT,6, FCloseT,3
and
TotalT,1533, FOpenT,1, FWriteT,2, FCloseT,1530
All the frames are the same size, and it's never fwrite that takes the extra time: always fopen or fclose.
No other process is reading/writing to/from this SSD (confirmed with Process Monitor).
Does anyone know what could be causing this issue and/or any way of avoiding/mitigating this problem?
I'm going to side with X.J.: you're probably writing too many files to a single directory.
A solution could be to create a new directory for each batch of frames. Also consider calling SetEndOfFile directly after creating the file, as that will help Windows allocate sufficient space in a single operation.
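A sketch of that preallocation with the Win32 API (error handling omitted; the wrapper function is just for illustration):

#include <windows.h>

// Create the file and reserve its full size up front, so the filesystem
// allocates the space in one operation instead of growing the file piecemeal.
void writeFramePreallocated(const char* filepath, const char* data, DWORD dataSize)
{
    HANDLE h = CreateFileA(filepath, GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    LARGE_INTEGER size;
    size.QuadPart = dataSize;              // e.g. 2 MB per frame
    SetFilePointerEx(h, size, NULL, FILE_BEGIN);
    SetEndOfFile(h);                       // extend the file to its final size
    LARGE_INTEGER zero = {};
    SetFilePointerEx(h, zero, NULL, FILE_BEGIN);
    DWORD written = 0;
    WriteFile(h, data, dataSize, &written, NULL);
    CloseHandle(h);
}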
FAT isn't a real solution, as it does even worse with large directories.
Prepare empty files (2 MB files filled with zeros) so that the space is already allocated, then just overwrite these files. Or create files that each contain a batch of several frames, so you can reduce the number of files.
There are libraries for compression, decompression and playback of videos:
libTheora may be useful because it already compresses frames (you will need to output the video as a single file) and does so pretty fast (lossy compression, by the way).

How to handle 100 Mbps input stream when my program can process data only at 1 Mbps rate

I am working on a project where the input data stream can arrive at 100 Mbps.
My program can be run overnight capturing this data, and will therefore generate a huge data file. My program logic, which interprets the data, is complex and can process data at only 1 Mbps.
We also dump the bytes to a log file after they are processed. We do not want to lose any incoming data, and at the same time we want the program to work in real time, so we maintain a circular buffer which acts as a cache.
Right now, the only way to save incoming data from getting lost is to increase the size of this buffer.
Please suggest a better way to do this, and also what alternative ways of caching I could try.
Stream the input to a file. Really, there is no other choice. It comes in faster than you can process it.
You could create one file per second of input data. That way you can directly start processing old files while new files are being streamed on the disk.
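A sketch of that rotation; readFromStream() and the file naming are hypothetical stand-ins:

#include <cstdio>
#include <ctime>

size_t readFromStream(void* dst, size_t maxBytes);  // hypothetical input source

// Append incoming blocks to a file named after the current second,
// starting a new file whenever the clock ticks over.
void captureLoop() {
    std::FILE* out = NULL;
    std::time_t current = 0;
    char block[65536];
    for (;;) {
        size_t n = readFromStream(block, sizeof block);
        if (n == 0) break;
        std::time_t now = std::time(NULL);
        if (out == NULL || now != current) {          // rotate to a new file
            if (out) std::fclose(out);
            current = now;
            char name[64];
            std::snprintf(name, sizeof name, "capture_%lld.bin", (long long)current);
            out = std::fopen(name, "wb");
        }
        std::fwrite(block, 1, n, out);
    }
    if (out) std::fclose(out);
}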

Loading large multi-sample audio files into memory for playback - how to avoid temporary freezing

I am writing an application that needs to use large audio multi-samples, usually around 50 MB in size. One file contains approximately 80 individual short sound recordings, any of which can be played back by my application at any time. For this reason all the audio data gets loaded into memory for quick access.
However, loading one of these files can take many seconds, meaning my program is temporarily frozen. What is a good way to avoid this? It must be compatible with Windows and OS X. It freezes at myMultiSampleClass->open(), which has to do a lot of dynamic memory allocation and reading from the file using ifstream.
I have thought of two possible options:
Open the file and load it into memory in another thread, so my application process does not freeze. I have looked into the Boost library to do this, but I need to do quite a lot of reading before I am ready to implement it. All I would need to do is call the open() function in the thread and then destroy the thread afterwards.
Come up with a scheme so that I don't load the entire file into memory at any one time; I just load on the fly, so to speak. The problem is that any sample could be triggered at any time. I know some other software has this kind of system in place, but I'm not sure how it works. It also depends a lot on the individual computer's specification: it could work great on my machine, but someone with a slow HDD or little memory could get very bad results. One idea I had was to load the first x samples of each recording into memory; then, if I need to play, I begin playback of the samples that already exist while loading the rest of the audio into memory.
Any ideas or criticisms? Thanks in advance :-)
Use a memory mapped file. Loading time is initially "instant", and the overhead of I/O will be spread over time.
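For example, with POSIX mmap (on Windows the equivalents are CreateFileMapping/MapViewOfFile); the file name is illustrative:

#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Map the whole multi-sample file into the address space. This returns
// immediately; pages are faulted in by the OS on first access.
const char* mapSamples(const char* path, std::size_t& length)
{
    int fd = open(path, O_RDONLY);
    struct stat st;
    fstat(fd, &st);
    length = (std::size_t)st.st_size;
    void* p = mmap(NULL, length, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);           // the mapping stays valid after closing the fd
    return (const char*)p;
}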
I like solution 1 as a first attempt -- simple & to the point.
If you are under Windows, you can do asynchronous file operations -- what they call OVERLAPPED -- to tell the OS to load a file & let you know when it's ready.
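A minimal sketch of such an OVERLAPPED read (the buffer size and file name are illustrative):

#include <windows.h>

// Open with FILE_FLAG_OVERLAPPED and issue a read that completes in the
// background; an event is signalled when the data is ready.
void loadAsync()
{
    HANDLE h = CreateFileA("samples.dat", GENERIC_READ, FILE_SHARE_READ, NULL,
                           OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    OVERLAPPED ov = {};
    ov.hEvent = CreateEventA(NULL, TRUE, FALSE, NULL);
    char* buf = new char[1 << 20];
    ReadFile(h, buf, 1 << 20, NULL, &ov);          // returns at once (ERROR_IO_PENDING)
    // ... stay responsive, start playing samples that are already in memory ...
    DWORD bytesRead = 0;
    GetOverlappedResult(h, &ov, &bytesRead, TRUE); // TRUE = block until complete
}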
I think the best solution is to load a small chunk, or a single sample, of wave data at a time during playback, using asynchronous I/O (as John Dibling mentioned) into a fixed-size playback buffer.
The strategy is: fill the playback buffer first, then play (this adds a small delay but guarantees continuous playback). While playing one buffer, you refill another playback buffer on a different thread (overlapped). You need at least two playback buffers, one for playing and one for refilling in the background, and you switch between them in real time.
Later you can tune the playback buffer size based on the client PC's performance (it is a trade-off between memory size and processing power: a faster CPU allows a smaller buffer and thus lower delay).
You might want to consider a producer-consumer approach. This basically involves reading the sound data into a buffer using one thread, and streaming the data from the buffer to your sound card using another thread.
The data reader is the producer, and streaming the data to the sound card is the consumer. You need high-water and low-water marks so that if the buffer gets full, the producer stops reading, and if the buffer gets low, the producer starts reading again.
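A sketch of those watermarks using a condition variable; readBlockFromDisk() and sendToSoundCard() are hypothetical stand-ins, and the marks are illustrative:

#include <condition_variable>
#include <cstddef>
#include <deque>
#include <mutex>
#include <vector>

std::vector<short> readBlockFromDisk();               // hypothetical source
void sendToSoundCard(const std::vector<short>& b);    // hypothetical sink

std::mutex m;
std::condition_variable cv;
std::deque<std::vector<short>> fifo;                  // decoded sample blocks
bool producerPaused = false;
const std::size_t HIGH_WATER = 64, LOW_WATER = 16;    // illustrative block counts

// Producer: read ahead from disk until the high-water mark is reached.
void producer() {
    for (;;) {
        std::vector<short> block = readBlockFromDisk();
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !producerPaused; });
        fifo.push_back(std::move(block));
        if (fifo.size() >= HIGH_WATER) producerPaused = true;  // buffer full: stop
        cv.notify_all();
    }
}

// Consumer: stream blocks to the sound card; restart the producer when low.
void consumer() {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !fifo.empty(); });
        std::vector<short> block = std::move(fifo.front());
        fifo.pop_front();
        if (fifo.size() <= LOW_WATER) {               // buffer low: resume reads
            producerPaused = false;
            cv.notify_all();
        }
        lock.unlock();
        sendToSoundCard(block);
    }
}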
A C++ Producer-Consumer Concurrency Template Library
http://www.bayimage.com/code/pcpaper.html
EDIT: I should add that this sort of thing is tricky. If you are building a sample player, the load on the system varies continuously as a function of which keys are being played, how many sounds are playing at once, the duration of each sound, whether the sustain pedal is pressed, and other factors such as hard disk speed, buffering, and the amount of processor horsepower available. Some of the optimizations you eventually employ will not be obvious at first glance.