Hey all, I'm writing an application which records microphone input to a WAV file. Previously, I had written this to fill a buffer of a specified size and that worked fine. Now, I'd like to be able to record to an arbitrary length. Here's what I'm trying to do:
Set up 32 small audio buffers (circular buffering)
Start a WAV file with ofstream -- write the header with PCM length set to 0
Add a buffer to input
When a buffer completes, append its data to the WAV file and update the header; recycle the buffer
When the user hits "stop", write the remaining buffers to file and close
It kind of works in that the files are coming out to the correct length (header and file size are correct). However, the data is wonky as hell. I can make out a semblance of what I said -- and the timing is correct -- but there's this repetitive block of distortion. It basically sounds like only half the data is getting into the file.
Here are some variables the code uses (declared in the header):
// File writing
ofstream mFile;
WAVFILEHEADER mFileHeader;
int16_t * mPcmBuffer;
int32_t mPcmBufferPosition;
int32_t mPcmBufferSize;
uint32_t mPcmTotalSize;
bool mRecording;
Here is the code that prepares the file:
// Start recording audio
void CaptureApp::startRecording()
{
    // Set flag
    mRecording = true;

    // Set size values
    mPcmBufferPosition = 0;
    mPcmTotalSize = 0;

    // Open file for streaming (note the escaped backslash in the path)
    mFile.open("c:\\my.wav", ios::binary | ios::trunc);
}
Here's the code that receives the buffer. This assumes the incoming data is correct -- it should be, but I haven't ruled out that it isn't.
// Append file buffer to output WAV
void CaptureApp::writeData()
{
    // Update header with new PCM length
    mPcmBufferPosition *= sizeof(int16_t);
    mPcmTotalSize += mPcmBufferPosition;
    mFileHeader.bytes = mPcmTotalSize + sizeof(WAVFILEHEADER);
    mFileHeader.pcmbytes = mPcmTotalSize;
    mFile.seekp(0);
    mFile.write(reinterpret_cast<char *>(&mFileHeader), sizeof(mFileHeader));

    // Append PCM data
    if (mPcmBufferPosition > 0)
    {
        mFile.seekp(mPcmTotalSize - mPcmBufferPosition + sizeof(WAVFILEHEADER));
        mFile.write(reinterpret_cast<char *>(&mPcmBuffer), mPcmBufferPosition);
    }

    // Reset file buffer position
    mPcmBufferPosition = 0;
}
And this is the code that closes the file:
// Stop recording
void CaptureApp::stopRecording()
{
    // Save remaining data
    if (mPcmBufferSize > 0)
        writeData();

    // Close file
    if (mFile.is_open())
    {
        mFile.flush();
        mFile.close();
    }

    // Turn off recording flag
    mRecording = false;
}
If there's anything here that looks like it would result in bad data getting appended to the file, please let me know. If not, I'll triple-check the input data (in the callback). This data should be good, because it works if I copy it to a larger buffer (e.g., two minutes) and then save that out.
I am just wondering, how
void CaptureApp::writeData()
{
    mPcmBufferPosition *= sizeof(int16_t); // mPcmBufferPosition = 0, so 0*2 = 0;
    // (...)
    mPcmBufferPosition = 0;
}
works (btw. sizeof int16_t is always 2). Are you setting mPcmBufferPosition somewhere else?
void CaptureApp::writeData()
{
    // Update header with new PCM length
    long pos = mFile.tellp();
    mPcmBufferBytesToWrite *= 2;
    mPcmTotalSize += mPcmBufferBytesToWrite;
    mFileHeader.bytes = mPcmTotalSize + sizeof(WAVFILEHEADER);
    mFileHeader.pcmbytes = mPcmTotalSize;
    mFile.seekp(0);
    mFile.write(reinterpret_cast<char *>(&mFileHeader), sizeof(mFileHeader));
    mFile.seekp(pos);

    // Append PCM data
    if (mPcmBufferBytesToWrite > 0)
        mFile.write(reinterpret_cast<char *>(mPcmBuffer), mPcmBufferBytesToWrite);
}
Also, mPcmBuffer is a pointer, so I don't know why you use & in the write.
The most likely reason is you're writing from the address of the pointer to your buffer, not from the buffer itself. Ditch the "&" in the final mFile.write. (It may have some good data in it if your buffer is allocated nearby and you happen to grab a chunk of it, but that's just luck that your write happens to overlap your buffer.)
In general, if you find yourself in this sort of situation, try to think about how you can test this code in isolation from the recording code: set up a buffer that holds the values 0..255, set your "chunk size" to 16, and check whether it writes out a continuous 0..255 sequence across 16 separate write operations. That will quickly verify whether your buffering code is working or not.
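In case it helps, here is a minimal sketch of that kind of isolation test. Everything in it is made up for illustration (writeChunk is a hypothetical stand-in for your chunked append; the real test would call your writeData path instead):

#include <cstdint>
#include <fstream>

// Hypothetical stand-in for the chunked writer: appends one chunk to the file.
void writeChunk(std::ofstream &out, const int16_t *chunk, int samples)
{
    out.write(reinterpret_cast<const char *>(chunk), samples * sizeof(int16_t));
}

int main()
{
    int16_t samples[256];
    for (int i = 0; i < 256; ++i)
        samples[i] = static_cast<int16_t>(i);   // known pattern 0..255

    std::ofstream out("chunk_test.bin", std::ios::binary | std::ios::trunc);
    for (int chunk = 0; chunk < 16; ++chunk)    // 16 writes of 16 samples each
        writeChunk(out, samples + chunk * 16, 16);
    out.close();

    // Read the file back and confirm the 0..255 sequence survived intact.
    std::ifstream in("chunk_test.bin", std::ios::binary);
    int16_t check[256] = {};
    in.read(reinterpret_cast<char *>(check), sizeof(check));
    for (int i = 0; i < 256; ++i)
        if (check[i] != i)
            return 1;   // buffering/append logic is broken
    return 0;           // sequence intact
}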
I won't debug your code, but I'll try to give you a checklist of things you can check to determine where the error is:
Always have a reference recorder and player handy. It can be something as simple as Windows Sound Recorder, or Audacity, or Adobe Audition. Have a recorder/player that you are CERTAIN will record and play files correctly.
Record a file with your app and try to play it with the reference player. Working?
Try to record a file with the reference recorder, and play it with your player. Working?
When you write the SOUND data to the WAV file in your recorder, write it to one extra file as well (see the sketch after this checklist). Open that file in RAW mode with a player (Windows Sound Recorder won't be enough here). Does it play correctly?
When playing the file in your player and writing to the sound card, write the output to a RAW file as well, to see whether you are playing the data correctly at all or have sound-card issues. Does it play correctly?
Try all this, and you'll have a much better idea of where something went wrong.
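For the "one extra file" step, here is a minimal sketch of what such a side-channel dump could look like (the function and file name are made up; it just mirrors whatever PCM bytes you append to the WAV into a headerless file you can open as RAW in Audacity or Audition):

#include <cstddef>
#include <cstdint>
#include <fstream>

// Hypothetical debug dump: mirror the exact PCM bytes written to the WAV
// into a headerless file that a RAW-capable player can open directly.
void dumpRawPcm(const int16_t *pcm, std::size_t sampleCount)
{
    static std::ofstream raw("debug_dump.raw", std::ios::binary | std::ios::app);
    raw.write(reinterpret_cast<const char *>(pcm), sampleCount * sizeof(int16_t));
}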
Shoot, sorry -- had a late night of work and am a bit off today. I forgot to show y'all the actual callback. This is it:
// Called when buffer is full
void CaptureApp::onData(float * data, int32_t & size)
{
    // Check recording flag and buffer size
    if (mRecording && size <= BUFFER_LENGTH)
    {
        // Save the PCM data to file and reset the array if we
        // don't have room for this buffer
        if (mPcmBufferPosition + size >= mPcmBufferSize)
            writeData();

        // Copy PCM data to file buffer
        copy(mAudioInput.getData(), mAudioInput.getData() + size, mPcmBuffer + mPcmBufferPosition);

        // Update PCM position
        mPcmBufferPosition += size;
    }
}
Will try y'alls advice and report.
I'm using nanopb in a project on an ESP32, in PlatformIO. It's an Arduino-flavored C++ codebase.
I'm using protobufs to encode data for transfer, and I've set up the buffer the protobufs will use at the root level to avoid re-allocating the memory every time a message is sent.
// variables to store the buffer/stream the data will render into...
uint8_t buffer[MESSAGE_BUFFER_SIZE];
pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer));
// object to hold the data on its way into the encode action...
TestMessage abCounts = TestMessage_init_zero;
Then I've got my function that encodes data into this stream via protobufs (using nanoPB)...
void encodeABCounts(int32_t button_a, int32_t button_b, String message)
{
    // populate our data structure...
    abCounts.a_count = button_a;
    abCounts.b_count = button_b;
    strcpy(abCounts.message, message.c_str());

    // encode the data!
    bool status = pb_encode(&stream, TestMessage_fields, &abCounts);
    if (!status)
    {
        Serial.println("Failed to encode");
        return;
    }

    // and here's some debug code I'll discuss below....
    Serial.print("Message Length: ");
    Serial.println(stream.bytes_written);
    for (int i = 0; i < stream.bytes_written; i++)
    {
        Serial.printf("%02X", buffer[i]);
    }
    Serial.println("");
}
Ok. So the first time this encode action occurs this is the data I get in the serial monitor...
Message Length: 14
Message: 080110001A087370656369616C41
And that's great - everything looks good. But the second time I call encodeABCounts(), and the third time, and the fourth, I get this...
Message Length: 28
Message: 080110001A087370656369616C41080210001A087370656369616C41
Message Length: 42
Message: 080110001A087370656369616C41080210001A087370656369616C41080310001A087370656369616C41
Message Length: 56
Message: 080110001A087370656369616C41080210001A087370656369616C41080310001A087370656369616C41080410001A087370656369616C41
...etc
So it didn't clear out the buffer/stream when the new data went in. Each time the buffer/stream is just getting longer as new data is appended.
How do I reset the stream/buffer to a state where it's ready for new data to be encoded and stuck in there, without reallocating the memory?
Thanks!
To reset the stream, simply re-create it. Now you have this:
pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer));
You can recreate it by assigning again:
stream = pb_ostream_from_buffer(buffer, sizeof(buffer));
Though you can also move the initial stream declaration to inside encodeABCounts() to create it every time, if you don't have any particular reason to keep it around after use. The stream creation is very lightweight, as it just stores the location and size of the buffer.
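For that second option, a rough sketch might look like this (it reuses the buffer, abCounts, TestMessage_fields and Serial debug output from the question, and assumes the same nanopb/generated headers are included; it's illustrative rather than the only way to structure it):

// Create the stream locally on every call, so each encode starts at the
// beginning of the (still statically allocated) buffer.
void encodeABCounts(int32_t button_a, int32_t button_b, String message)
{
    pb_ostream_t stream = pb_ostream_from_buffer(buffer, sizeof(buffer));

    abCounts.a_count = button_a;
    abCounts.b_count = button_b;
    strcpy(abCounts.message, message.c_str());

    if (!pb_encode(&stream, TestMessage_fields, &abCounts))
    {
        Serial.println("Failed to encode");
        return;
    }

    Serial.print("Message Length: ");
    Serial.println(stream.bytes_written);   // now reflects only this message
}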
I am trying to send a PNG file from C++ over stdout to NodeJS. However, when I read it in NodeJS it sometimes seems to get cut partway through, even though I only flush after I've sent the whole PNG in C++. What causes this behaviour?
My code to send the image:
void SendImage(Mat image)
{   //from: https://stackoverflow.com/questions/41637438/opencv-imencode-buffer-exception
    std::vector<uchar> buffer;
#define MB image_size.width*image_size.height
    buffer.resize(200 * MB);
    cv::imencode(".png", image, buffer);

    printf("image ");
    for (int i = 0; i < buffer.size(); i++)
        printf("%c", buffer[i]);
    fflush(stdout);
}
Then, I receive it in Nodejs and just test what I receive:
this.puckTracker.stdout.on('data', (data) => {
    console.log("DATA");
    var str = data.toString();
    console.log(str);

    //first check if it's an image being sent. C++ prints "image 'imageData'", so see if the first characters are 'image'.
    const possibleImage = str.slice(0, 5);
    console.log("POSSIBLEIMAGE: " + possibleImage);
});
I have tried the following commands in C++ to try and remove automatic flushes:
//disable sync between libraries. This makes the stdout much faster, but you must either use cout or printf, no mixes. Since printf is faster, use printf everywhere.
std::ios_base::sync_with_stdio(false);
//make sure C++ ONLY flushes when I say so, so no data gets broken in half.
std::setvbuf(stdout, nullptr, _IOFBF, BUFSIZ);
When I run the C++ program with a visible terminal, it seems to be alright.
What I expect the NodeJS console to print is:
DATA
image ‰PNG
IHDR ... etc, all the image data.
POSSIBLEIMAGE: image
and this for every image I send.
Instead I get:
DATA
image �PNG
IHDT ...
POSSIBLEIMAGE: image
DATA
-m5VciVWjՖҬvXjvXm9kV[d嬭v
POSSIBLEIMAGE: -m5V
DATA
image �PNG
etc.
It seems to cut each image once as far as I can tell.
Here is a pastebin in case someone needs the full log. (Printing some additional stuff, but that shouldn't matter.) https://pastebin.com/VJEbm6V5
for (int i = 0; i < buffer.size(); i++)
    printf("%c", buffer[i]);
fflush(stdout);
There are no guarantees whatsoever that only the final fflush will send all the data, in one chunk.
You never had, nor will you ever have, any guarantee whatsoever that stdout will get flushed only when you explicitly want it to. Typical implementations of stdout, or its C++ equivalent, use a fixed-size buffer that gets automatically flushed when it's full, whether you want it or not. As each character goes out the door, it gets added to this fixed-size buffer. When it's full, the buffer gets flushed to the output. The only thing fflush does is make this happen explicitly, flushing out the partially filled buffer.
Then, that's not the whole story.
When you are reading from a network connection, you also have no guarantees whatsoever that you will read everything that was written, in one chunk, even if it was flushed in one chunk. Sockets and pipes don't work this way. Anywhere in between the data can get broken up in intermediate chunks, and delivered to your reading process one chunk at a time.
//make sure C++ ONLY flushes when I say so, so no data gets broken in half.
std::setvbuf(stdout, nullptr, _IOFBF, BUFSIZ);
This does not turn off buffering or effectively make the buffering infinite. From the Linux documentation of what happens with a null buffer pointer:
If the argument buf is NULL, only the mode is affected; a new buffer
will be allocated on the next read or write operation.
All this does is give you a default buffer, with the default size. Which stdout already has anyway.
Now, you could certainly create a custom buffer that's as big as your image, so that everything gets buffered up front. But, as I explained, that won't accomplish anything useful whatsoever. The data will still likely be broken up in transit, and you will read it in nodejs one chunk at a time.
This entire approach is completely wrong. You need to send the # of bytes separately, up front, read it first, then you know how many bytes to expect, then read the given number of bytes.
printf("image ");
Put the number of bytes to follow, here, read it in nodejs, parse it, and then you know how many bytes to keep reading, until you get everything.
Of course, keep in mind that, for the reasons I explained above, the very first thing your nodejs code reads could be (unlikely, but it can happen, and a good programmer will write proper code that correctly handles all possibilities):
image 123
with the "40" part read in the next chunk, indicating that 12340 bytes follow. Or, it could equally well read just:
ima
with the rest following.
Conclusion: you have no guarantee that whatever you read, in whatever way, will exactly match the byte counts of whatever was written, no matter how it was buffered on the write end or when it was flushed. Sockets and pipes never gave you this guarantee (there are some slight read/write semantics documented for pipes, but that's irrelevant). You will need to code the reading side accordingly: no matter how small or big each read is, your code will need to logically parse "image ### ", one character at a time, stopping at the space after the digits. Parsing this gives you the byte count, and then your code will need to read exactly that number of bytes to follow. It's possible that this, plus the first chunk of data, will be the first thing you read. It's equally possible that the first thing you read will be just the "i". You never know what to expect. It's like playing the lottery. You don't have any guarantees, but that's how things work. No, this is not easy to do correctly.
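To make the length-prefix suggestion concrete, a rough sketch of the sending side might look like this (sendImageWithLength is a made-up name and the exact header format is just one possible choice; the reading side still has to parse the count incrementally, exactly as described above):

#include <cstdio>
#include <vector>
#include <opencv2/opencv.hpp>

// Send "image <byteCount> " first, then exactly that many bytes of PNG data.
// The reader parses the count before consuming the payload.
void sendImageWithLength(const cv::Mat &image)
{
    std::vector<uchar> buffer;
    cv::imencode(".png", image, buffer);

    std::printf("image %zu ", buffer.size());             // header with byte count
    std::fwrite(buffer.data(), 1, buffer.size(), stdout); // raw PNG bytes
    std::fflush(stdout);
}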
I have fixed it and it works now. I'm placing my code here, in case someone in the future needs it.
Sending side C++
To be able to concatenate my buffer and parse it correctly, I have added "stArt" and "eNd" around the message I send. Example: stArtimage‰PNG..IHDR..binary data..eNd.
You can probably also do this by using the PNG's own start and end markers, or even only the start and taking everything before the next start. However, I need to send custom data as well. The C++ code is now:
void SendImage(Mat image)
{
    std::vector<uchar> buffer;
    cv::imencode(".png", image, buffer);

    //stArt (note the caps) is the word to split the data chunks on in nodejs.
    cout << "stArtimage";
    fwrite(buffer.data(), 1, buffer.size(), stdout);
    cout << "eNd";
    fflush(stdout);
}
Very important: add this at the start of your program, otherwise the image becomes unreadable:
#include <io.h>
#include <fcntl.h>
//sets the stdout to binary. If this is not done, it replaces \n by \r\n, which gives issues when sending PNG images.
_setmode(_fileno(stdout), O_BINARY);
Receiving side NodeJS
When new data comes in, I concatenate it with the previous unused data. If I can find both a stArt and an eNd, the data is complete and I use the piece in between. I then store all the bytes after eNd so I can use them the next time data arrives. In my code this is placed in a class, so if it doesn't compile, do that :). I also use SocketIO to send data from NodeJS to the browser, which is the eventDispatcher.emit you are seeing.
this.puckTracker.stdout.on('data', (data) => {
    try {
        this.bufferArray.push(data);
        var buff = Buffer.concat(this.bufferArray);

        //data is sent in like: concat ["stArt"][5 letters of dataType][data itself]["eNd"]
        // dataTypes: "PData" = puck data, "image" = png image, "Track" = tracking is running
        // example image: stArtimage*binaryPNGdata*eNd
        // example: stArtPData[]eNdStArtPData[{"ID": "0", "pos": [881.023071, 448.251221]}]eNd
        var startBuf = buff.indexOf("stArt");
        var endBuf = buff.indexOf("eNd");

        if (startBuf != -1 && endBuf != -1) {
            var dataType = buff.subarray(startBuf + 5, startBuf + 10).toString(); //extract the five-letter datatype directly behind stArt.
            var realData = buff.subarray(startBuf + 10, endBuf); //extract the data behind the datatype, before the end of data.
            switch (dataType) {
                //sending the PNG image.
                case "image":
                    this.eventDispatcher.emit('PNG', realData);
                    this.refreshBuffer(endBuf, buff);
                    break;
                //sending custom JSON data
                case "customData": //do something with your custom realData
                    this.refreshBuffer(endBuf, buff);
                    break;
            }
        }
        else {
            this.bufferArray.length = 0; //empty the array
            this.bufferArray.push(buff); //buff contains the full concatenated buffer of the previous bufferArray, so it saves all previous unused data in index 0.
        }
    } catch (error) {
        console.error(error);
        console.error(data.toString());
    }
});

refreshBuffer(endBuf, buff) {
    //do this in all cases (but not if there is no match of dataType)
    var tail = buff.subarray(endBuf + 3); //save the unused data of the previous buffer
    this.bufferArray.length = 0; //empty the array
    this.bufferArray.push(tail); //fill the first spot of the array with the tail of the previous buffer.
}
Client side Javascript
Just to make the answer complete: to render the PNG in the browser, use the following code, and make sure you have a canvas ready in your HTML.
socket.on('PNG', (PNG) => {
    var blob = new Blob([PNG], { type: "image/png" });
    var img = new Image();
    var c = document.getElementById("canvas");
    var ctx = c.getContext("2d");

    img.onload = function (e) {
        console.log("PNG Loaded");
        ctx.drawImage(img, 0, 0);
        window.URL.revokeObjectURL(img.src);
        img = null;
    };
    img.onerror = img.onabort = function (error) {
        console.error("ERROR!", error);
        img = null;
    };
    img.src = window.URL.createObjectURL(blob);
});
Make sure you don't call SendImage too often, or you will flood stdout and the connection with data, producing it faster than the browser or server can handle.
I have a complex interpreter reading in commands from (sometimes) multiple files (the exact details are out of scope), and it requires iterating over these multiple files (some could be GBs in size, preventing nice buffering) multiple times.
I am looking to increase the speed of reading in each command from a file.
I have used the RDTSC (time-stamp counter) register to micro-benchmark the code enough to know that >80% of the time is spent reading in from the files.
Here is the thing: the program that generates the input file is literally faster than my small interpreter is at reading that file back in. I.e., instead of outputting the file I could (in theory) just link the generator of the data to the interpreter and skip the file, but that shouldn't be faster, right?
What am I doing wrong? Or is writing supposed to be 2x to 3x (at least) faster than reading from a file?
I have considered mmap, but some of the results on http://lemire.me/blog/archives/2012/06/26/which-is-fastest-read-fread-ifstream-or-mmap/ appear to indicate it is no faster than ifstream. Or would mmap help in this case?
details:
I have (so far) tried adding a buffer, tweaking parameters, and removing the ifstream buffer (that slowed it down by 6x in my test case); I am currently at a loss for ideas after searching around.
The important section of the code is below. It does the following:
if data is left in the buffer, copy from the buffer to memblock (where it is then used)
if no data is left in the buffer, check how much data is left in the file; if more than the size of the buffer, copy a buffer-sized chunk
if less than a full buffer is left in the file, copy just what remains
//if data in buffer
if (leftInBuffer[activefile] > 0)
{
    //cout << bufferloc[activefile] << "\n";
    memcpy(memblock, (buffer[activefile]) + bufferloc[activefile], 16);
    bufferloc[activefile] += 16;
    leftInBuffer[activefile] -= 16;
}
else //buffers blank
{
    //read in block
    long blockleft = (cfilemax - cfileplace) / 16;
    int read = 0;

    /* slow block starts here */
    if (blockleft >= MAXBUFELEMENTS)
    {
        currentFile->read((char *)(&(buffer[activefile][0])), 16 * MAXBUFELEMENTS);
        leftInBuffer[activefile] = 16 * MAXBUFELEMENTS;
        bufferloc[activefile] = 0;
        read = 16 * MAXBUFELEMENTS;
    }
    else //read in part of the block
    {
        currentFile->read((char *)(&(buffer[activefile][0])), 16 * blockleft);
        leftInBuffer[activefile] = 16 * blockleft;
        bufferloc[activefile] = 0;
        read = 16 * blockleft;
    }
    /* slow block ends here */

    memcpy(memblock, (buffer[activefile]) + bufferloc[activefile], 16);
    bufferloc[activefile] += 16;
    leftInBuffer[activefile] -= 16;
}
Edit: this is on a Mac, OS X 10.9.5, with an i7 and an SSD.
Solution:
as was suggested below, mmap was able to increase the speed by about 10x.
(for anyone else who searches for this)
specifically open with:
uint8_t * openMMap(string name, long & size)
{
    int m_fd;
    struct stat statbuf;
    uint8_t * m_ptr_begin;

    if ((m_fd = open(name.c_str(), O_RDONLY)) < 0)
    {
        perror("can't open file for reading");
    }
    if (fstat(m_fd, &statbuf) < 0)
    {
        perror("fstat in openMMap failed");
    }
    if ((m_ptr_begin = (uint8_t *)mmap(0, statbuf.st_size, PROT_READ, MAP_SHARED, m_fd, 0)) == MAP_FAILED)
    {
        perror("mmap in openMMap failed");
    }

    uint8_t * m_ptr = m_ptr_begin;
    size = statbuf.st_size;
    return m_ptr;
}
read by:
uint8_t * mmfile = openMMap("my_file", length);
uint32_t * memblockmm;
memblockmm = (uint32_t *)mmfile; //cast file to uint32 array
uint32_t data = memblockmm[0]; //take int
mmfile +=4; //increment by 4 as I read a 32 bit entry and each entry in mmfile is 8 bits.
This should be a comment, but I don't have 50 reputation to make a comment.
What is the value of MAXBUFELEMENTS? From my experience, many smaller reads are far slower than one read of a larger size. I suggest reading the entire file in if possible; some files could be GBs, but even reading in 100 MB at once would perform better than reading 1 MB 100 times.
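For what it's worth, a minimal sketch of that single big read could look like this (names are made up and error handling is omitted):

#include <fstream>
#include <string>
#include <vector>

// Pull the whole file (or a very large chunk) into memory with one read()
// call instead of many 16-byte reads.
std::vector<char> readWholeFile(const std::string &name)
{
    std::ifstream in(name, std::ios::binary | std::ios::ate); // open at end to get size
    std::streamsize size = in.tellg();
    in.seekg(0, std::ios::beg);

    std::vector<char> data(static_cast<std::size_t>(size));
    in.read(data.data(), size);
    return data;
}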
If that's still not good enough, the next thing you can try is to compress (zlib) the input files (you may have to break them into chunks due to size) and decompress them in memory. This method is usually faster than reading in uncompressed files.
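If you try the zlib route, the in-memory decompression side might look roughly like this (a sketch using zlib's one-shot uncompress(); it assumes you stored the original uncompressed size alongside each compressed chunk):

#include <vector>
#include <zlib.h>

// Decompress one chunk that was written with zlib's compress(); the caller
// must know (e.g. have stored) the chunk's original uncompressed size.
std::vector<unsigned char> inflateChunk(const std::vector<unsigned char> &compressed,
                                        uLong uncompressedSize)
{
    std::vector<unsigned char> out(uncompressedSize);
    uLongf outLen = uncompressedSize;
    if (uncompress(out.data(), &outLen, compressed.data(),
                   static_cast<uLong>(compressed.size())) != Z_OK)
        out.clear();            // signal failure with an empty result
    else
        out.resize(outLen);
    return out;
}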
As @Tony Jiang said, try experimenting with the buffer size to see if that helps.
Try mmap to see if that helps.
I assume that currentFile is a std::ifstream? There's going to be some overhead for using iostreams (for example, an istream will do its own buffering, adding an extra layer to what you're doing); although I wouldn't expect the overhead to be huge, you can test by using open(2) and read(2) directly.
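In case it's useful, a minimal sketch of that kind of raw open(2)/read(2) test might look like this (readChunk and the open-per-call structure are just for illustration; real code would keep the descriptor open across reads):

#include <fcntl.h>
#include <unistd.h>

// Read `bytes` bytes starting at `offset` using the raw syscalls, so the
// timing can be compared against the ifstream version.
ssize_t readChunk(const char *path, char *dst, size_t bytes, off_t offset)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    ssize_t got = pread(fd, dst, bytes, offset);
    close(fd);
    return got;
}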
You should be able to run your code through dtruss -e to verify how long the read system calls take. If those take the bulk of your time, then you're hitting OS and hardware limits, so you can address that by piping, mmap'ing, or adjusting your buffer size. If those take less time than you expect, then look for problems in your application logic (unnecessary work on each iteration, etc.).
I am implementing an audio player application that pre-buffers a small part of the audio data and reads the rest of the data when it is required to do so, for example when the play command arrives. It's a real-time application, so it's really important that there's near-zero latency between the play command and the start of the playback.
Example: my audio stream is 10 MB; I read part of it when the file is selected and start creating a buffer like this:
// Stuff to do as soon as the file is selected
// Allocate new memory for the current sample
// contains sample length in number of samples
sampleSize = SampleLib.smp.len;
// assume it's a 16-bit audio file, each sample is 2 bytes long
byteSize = sampleSize * sizeof(short);
// Allow 10 extra samples and fill with zeroes
SampleData = new short[sampleSize + 10]();
// PRELOAD_BYTESIZE is set to 65535 bytes
preloadByteSize = byteSize > PRELOAD_BYTESIZE ? PRELOAD_BYTESIZE : byteSize;
// Set pointer in file - WavePointer contains the exact location where the sample data starts in file
fseek(inFile, WavePointer, SEEK_SET);
// read preloadByteSize from inFile into SampleData
fread(SampleData, 1, preloadByteSize, inFile);
At this point my buffer SampleData contains only part of the audio data to start playing back as soon as the play command arrives. At the same time, the program should fill the rest of the buffer and continue playing up until the end of the audio sample with no interruption.
// Stuff to do as soon as the playback starts
// Load the rest of the sample data
// If file is already in memory, avoid reading it again
if (preloadByteSize < byteSize)
{
    // Set pointer in file at sample start + preload size
    fseek(inFile, WavePointer + preloadByteSize, SEEK_SET);
    // read the remaining bytes from inFile and fill the empty part of the buffer
    fread(SampleData + preloadByteSize / sizeof(short), 1, byteSize - preloadByteSize, inFile);
    // remember the number of loaded bytes
    preloadByteSize = byteSize;
}
I expect the second part of the code to execute in the background while the file plays back, but actually it's all serial processing, so playback won't start until the rest of the buffer is loaded, which delays the playback.
How can I have a background thread that loads the file data without interfering with the audio task? Can I do this with OpenMP?
You might be able to do this with OpenMP, but this involves concurrency more than parallelism, so I would look at pthreads or C++11 threads:
pthreads (link)
C++11 threads (link)
The BackgroundWorker Class
Some good examples here:
BackgroundWorker Class Microsoft
BackgroundWorker Class CodeProject
Here I launch three threads using std::thread (built against pthread, as in the compile line below). It might give you something to work from ... enjoy:
// g++ -o audio *.cpp ../common/*.cpp -std=c++11 -lm -lpthread
#include "cpp_openal_opengl_dna.h"
#include <thread>
#include <exception>
#include <mutex>

void launch_producer(Circular_Buffer * given_circular_buffer,
                     struct_sample_specs * ptr_struct_sample_specs, std::string chosen_file) {
}

void launch_mvc_visualization(Audio_Model * given_audio_model) {
}

void launch_audio_playback(Circular_Buffer * given_circular_buffer, Audio_Model * given_audio_model) {
}

int main() {
    std::cout << "hello Corinde" << std::endl; // prints hello Corinde

    // here we launch three threads
    // thread t1 reads an input file to populate audio buffer
    // notice the first parameter is the function above followed by its input parms
    std::thread t1(launch_producer, circular_buffer, ptr_struct_sample_specs,
                   all_file_names[WHICH_FILE_INPUT]);

    Audio_Model * audio_model = new Audio_Model(MAX_SIZE_CIRCULAR_BUFFER);

    // thread t2 does real time OpenGL visualization of audio buffer data
    std::thread t2(launch_mvc_visualization, audio_model); // OpenGL graphics visualization

    // thread t3 renders the audio buffers as sound to your speakers
    std::thread t3(launch_audio_playback, circular_buffer, audio_model);

    // -------------------------
    std::cout << "all three threads now launched" << std::endl;

    t1.join();
    t2.join();
    t3.join();

    std::cout << "processing is complete" << std::endl;

    // ----------
    return 0;
}
I think I have just solved it using std::thread with the detach() method.
To do so, I must re-open the file every time I have to load new sample data from it, so I now have a global variable that stores the filename, and I call the function this way:
// The loading function that will be executed in a new thread
void continuePreload(unsigned long ByteSize)
{
    // Re-open the file 'openFile'
    FILE *fpFile = fopen(openFile, "rb");
    // Set pointer in file at sample start + preload size
    fseek(fpFile, WavePointers + preloadByteSize, SEEK_SET);
    // Read the remaining bytes
    fread(SampleData + preloadByteSize / sizeof(short), 1, ByteSize - preloadByteSize, fpFile);
    // Close file
    fclose(fpFile);
    // Remember how many bytes we loaded
    preloadByteSize = ByteSize;
}
Within the Play Event function...
// Get the size in bytes
const unsigned long ByteSize = SampleLib.smp.len * sizeof(short);

if (preloadByteSize < ByteSize)
{
    std::thread loadSample(&myClass::continuePreload, this, ByteSize);
    loadSample.detach();
}
The program is now acting exactly how I expected: whenever the play event arrives, it starts playing back audio from the sample buffer using what was previously preloaded; in the meantime a new thread finishes loading the remaining part of the file and fills the buffer completely.
As long as loading from disk is faster than the audio playback, we have no race conditions. In case loading is too slow, I can still increase the preload size, slowing down the initial loading time a bit.
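If you ever want a stronger guarantee than "disk is faster than playback", one small optional hardening (not part of the code above; the names are illustrative) is to publish the loader's progress through an atomic counter that the audio thread checks before consuming samples beyond the preload:

#include <atomic>
#include <cstdio>

// Illustrative sketch: the loader publishes how many bytes of SampleData are
// valid; the audio thread only consumes samples below that count.
std::atomic<unsigned long> bytesLoaded{0};

void continuePreloadAtomic(FILE *fp, short *sampleData,
                           unsigned long alreadyLoaded, unsigned long totalBytes)
{
    fread(reinterpret_cast<char *>(sampleData) + alreadyLoaded, 1,
          totalBytes - alreadyLoaded, fp);
    // make the newly loaded bytes visible to the audio thread
    bytesLoaded.store(totalBytes, std::memory_order_release);
}

// In the audio callback:
//     unsigned long ready = bytesLoaded.load(std::memory_order_acquire);
//     // only read SampleData[0 .. ready / sizeof(short)) here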
I am taking input from a file in binary mode using C++; I read the data into unsigned ints, process them, and write them to another file. The problem is that sometimes, at the end of the file, there might be a little bit of data left that isn't large enough to fit into an int; in this case, I want to pad the end of the file with 0s and record how much padding was needed, until the data is large enough to fill an unsigned int.
Here is how I am reading from the file:
std::ifstream fin;
fin.open("filename.whatever", std::ios::in | std::ios::binary);
if (fin) {
    unsigned int m;
    while (fin >> m) {
        //processing the data and writing to another file here
    }
    //TODO: read the remaining data and pad it here prior to processing
} else {
    //output to error stream and exit with failure condition
}
The TODO in the code is where I'm having trouble. After the file input finishes and the loop exits, I need to read in the remaining data at the end of the file that was too small to fill an unsigned int. I need to then pad the end of that data with 0's in binary, recording enough about how much padding was done to be able to un-pad the data in the future.
How is this done, and is this already done automatically by C++?
NOTE: I cannot read the data into anything but an unsigned int, as I am processing the data as if it were an unsigned integer for encryption purposes.
EDIT: It was suggested that I simply read what remains into an array of chars. Am I correct in assuming that this will read in ALL remaining data from the file? It is important to note that I want this to work on any file that C++ can open for input and/or output in binary mode. Thanks for pointing out that I failed to include the detail of opening the file in binary mode.
EDIT: The files my code operates on are not created by anything I have written; they could be audio, video, or text. My goal is to make my code format-agnostic, so I can make no assumptions about the amount of data within a file.
EDIT: OK, so based on constructive comments, this is roughly the approach I am seeing, documented in comments where the operations would take place:
std::ifstream fin;
fin.open("filename.whatever", std::ios::in | std::ios::binary);
if (fin) {
    unsigned int m;
    while (fin >> m) {
        //processing the data and writing to another file here
    }
    //1: declare char array
    //2: fill it with what remains in the file
    //3: fill the rest of it until it's the same size as an unsigned int
} else {
    //output to error stream and exit with failure condition
}
The question, at this point, is this: is this truly format-agnostic? In other words, are bytes used to measure file size as discrete units, or can a file be, say, 11.25 bytes in size? I should know this, I know, but I've got to ask it anyway.
Are bytes used to measure file size as discrete units, or can a file be, say, 11.25 bytes in size?
No data type can be less than a byte, and your file is represented as an array of char, meaning each character is one byte. Thus it is impossible for the size not to be a whole number of bytes.
Here are steps one, two, and three as per your post:
while (fin >> m)
{
    // ...
}

// the loop above ends with fin in a failed state; clear it so the
// remaining bytes can still be pulled out of the stream buffer
fin.clear();

std::ostringstream buffer;
buffer << fin.rdbuf();
std::string contents = buffer.str();

// pad with 0 bytes up to the size of an unsigned int, remembering how many were added
std::size_t padding = sizeof(unsigned int) - contents.size();
contents.resize(sizeof(unsigned int), '\0');