I have a question regarding a sound synthesis app that I'm working on. I am trying to read in an audio file, create randomized 'grains' using granular synthesis techniques, place them into an output buffer and then be able to play that back to the user using OpenAL. For testing purposes, I am simply writing the output buffer to a file that I can then listen back to.
Judging by my results, I am on the right track but am getting some aliasing issues and playback sounds that just don't seem quite right. There is usually a rather loud pop in the middle of the output file and volume levels are VERY loud at times.
Here are the steps I have taken to get the results I need, but I'm a little confused about a couple of things, namely the formats I am specifying for my AudioStreamBasicDescription.
Read in an audio file from my mainBundle, which is a mono file in .aiff format:
ExtAudioFileRef extAudioFile;
CheckError(ExtAudioFileOpenURL(loopFileURL,
&extAudioFile),
"couldn't open extaudiofile for reading");
memset(&player->dataFormat, 0, sizeof(player->dataFormat));
player->dataFormat.mFormatID = kAudioFormatLinearPCM;
player->dataFormat.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
player->dataFormat.mSampleRate = S_RATE;
player->dataFormat.mChannelsPerFrame = 1;
player->dataFormat.mFramesPerPacket = 1;
player->dataFormat.mBitsPerChannel = 16;
player->dataFormat.mBytesPerFrame = 2;
player->dataFormat.mBytesPerPacket = 2;
// tell extaudiofile about our format
CheckError(ExtAudioFileSetProperty(extAudioFile,
kExtAudioFileProperty_ClientDataFormat,
sizeof(AudioStreamBasicDescription),
&player->dataFormat),
"couldnt set client format on extaudiofile");
SInt64 fileLengthFrames;
UInt32 propSize = sizeof(fileLengthFrames);
ExtAudioFileGetProperty(extAudioFile,
kExtAudioFileProperty_FileLengthFrames,
&propSize,
&fileLengthFrames);
player->bufferSizeBytes = fileLengthFrames * player->dataFormat.mBytesPerFrame;
Next I declare my AudioBufferList and set some more properties:
AudioBufferList *buffers;
UInt32 ablSize = offsetof(AudioBufferList, mBuffers[0]) + (sizeof(AudioBuffer) * 1);
buffers = (AudioBufferList *)malloc(ablSize);
player->sampleBuffer = (SInt16 *)malloc(player->bufferSizeBytes);
buffers->mNumberBuffers = 1;
buffers->mBuffers[0].mNumberChannels = 1;
buffers->mBuffers[0].mDataByteSize = player->bufferSizeBytes;
buffers->mBuffers[0].mData = player->sampleBuffer;
My understanding is that .mData will be whatever type was specified in the format flags (in this case, SInt16). Since it is of type (void *), I want to convert it to float data, which is the obvious choice for audio manipulation. Previously I set up a for loop that iterated through the buffer and cast each sample to a float. This seemed unnecessary, so now I pass my .mData buffer into a function I created, which then granularizes the audio:
float *theOutBuffer = [self granularizeWithData:(float *)buffers->mBuffers[0].mData with:framesRead];
In this function, I dynamically allocate some buffers, create random-size grains, place them in my out buffer after windowing them using a Hamming window, and return that buffer (which is float data). Everything is cool up to this point.
Next I set up all my output file ASBD and such:
AudioStreamBasicDescription outputFileFormat;
bzero(&outputFileFormat, sizeof(AudioStreamBasicDescription));
outputFileFormat.mFormatID = kAudioFormatLinearPCM;
outputFileFormat.mSampleRate = 44100.0;
outputFileFormat.mChannelsPerFrame = numChannels;
outputFileFormat.mBytesPerPacket = 2 * numChannels;
outputFileFormat.mFramesPerPacket = 1;
outputFileFormat.mBytesPerFrame = 2 * numChannels;
outputFileFormat.mBitsPerChannel = 16;
outputFileFormat.mFormatFlags = kAudioFormatFlagIsFloat | kAudioFormatFlagIsPacked;
UInt32 flags = kAudioFileFlags_EraseFile;
ExtAudioFileRef outputAudioFileRef = NULL;
NSString *tmpDir = NSTemporaryDirectory();
NSString *outFilename = @"Decomp.caf";
NSString *outPath = [tmpDir stringByAppendingPathComponent:outFilename];
NSURL *outURL = [NSURL fileURLWithPath:outPath];
AudioBufferList *outBuff;
UInt32 abSize = offsetof(AudioBufferList, mBuffers[0]) + (sizeof(AudioBuffer) * 1);
outBuff = (AudioBufferList *)malloc(abSize);
outBuff->mNumberBuffers = 1;
outBuff->mBuffers[0].mNumberChannels = 1;
outBuff->mBuffers[0].mDataByteSize = framesRead * sizeof(float);
outBuff->mBuffers[0].mData = theOutBuffer;
CheckError(ExtAudioFileCreateWithURL((__bridge CFURLRef)outURL,
kAudioFileCAFType,
&outputFileFormat,
NULL,
flags,
&outputAudioFileRef),
"ErrorCreatingURL_For_EXTAUDIOFILE");
CheckError(ExtAudioFileSetProperty(outputAudioFileRef,
kExtAudioFileProperty_ClientDataFormat,
sizeof(outputFileFormat),
&outputFileFormat),
"ErrorSettingProperty_For_EXTAUDIOFILE");
CheckError(ExtAudioFileWrite(outputAudioFileRef,
framesRead,
outBuff),
"ErrorWritingFile");
The file is written correctly, in CAF format. My question is this: am I handling the .mData buffer correctly, in that I am casting the samples to float data, manipulating (granulating) various window sizes and then writing it to a file using ExtAudioFileWrite (in CAF format)? Is there a more elegant way to do this, such as declaring my ASBD format flag as kAudioFormatFlagIsFloat? My output CAF file has some clicks in it, and when I open it in Logic, it looks like there is a lot of aliasing. This makes sense if I am trying to send it float data but there is some kind of conversion happening that I am unaware of.
Thanks in advance for any advice on the matter! I have been an avid reader of pretty much all the source material online, including the Core Audio Book, various blogs, tutorials, etc. The ultimate goal of my app is to play the granularized audio in real time to a user with headphones so the writing to file thing is just being used for testing at the moment. Thanks!
What you say about step 3 suggests to me that you are interpreting an array of shorts as an array of floats. If that is so, we've found the reason for your trouble. Can you assign the short values one by one into an array of floats? That should fix it.
It looks like mData is a void * pointing to an array of shorts. Casting this pointer to a float * doesn't change the underlying data into float but your audio processing function will treat them as if they were. However, float and short values are stored in totally different ways, so the math you do in that function will operate on very different values that have nothing to do with your true input signal. To investigate this experimentally, try the following:
short data[4] = {-27158, 16825, 23024, 15};
void *pData = data;
The void pointer doesn't indicate what kind of data it points to, so one can falsely assume it points to float values. Note that a short is 2 bytes wide, but a float is 4 bytes wide. It is a coincidence that your code did not crash with an access violation: interpreted as floats, the array above is only long enough for two values. Let's just look at the first value:
float *pfData = (float *)pData;
printf("%d == %f\n", data[0], pfData[0]);
The output of this will be -27158 == 23.198200 illustrating how instead of the expected -27158.0f you obtain roughly 23.2f. Two problematic things happened. First, sizeof(float) is not sizeof(short). Second, the "ones and zeros" of a floating point number are stored very differently than an integer. See http://en.wikipedia.org/wiki/Single_precision_floating-point_format.
How to solve the problem? There are at least two simple solutions. First, you could convert each element of the array before you feed it into your audio processor:
int k;
float *pfBuf = (float *)malloc(n_data * sizeof(float));
short *psiBuf = (short *)buffers->mBuffers[0].mData;
for (k = 0; k < n_data; k ++)
{
pfBuf[k] = psiBuf[k];
}
[self granularizeWithData:pfBuf with:framesRead];
for (k = 0; k < n_data; k ++)
{
psiBuf[k] = pfBuf[k];
}
free(pfBuf);
You see that most likely you will have to convert everything back to short after your call to granularizeWithData:with:. So the second solution would be to do all the processing in short, although from what you write, I imagine you would not like that approach.
Is there a way to decompress sound files using the FMOD library in C++?
I'm developing a sound editor using the FMOD Engine library, but I have run into a problem with compressed audio files, specifically OGG files.
For now I'm just reading the raw data using FMOD::Sound::readData(), and then normalize it and display it to the screen using SFML. This works fine with WAV files, because they are not compressed, but I need to do more steps for compressed formats.
This is what I'm doing now:
FMOD_RESULT r;
FMOD::System* m_fmodSystem = nullptr;
int m_maxChannels = 64;
// Create fmod system
r = FMOD::System_Create(&m_fmodSystem);
FMOD_ERROR_CHECK(r);
// Initialize system
r = m_fmodSystem->init(m_maxChannels, FMOD_INIT_NORMAL, nullptr);
FMOD_ERROR_CHECK(r);
// Create sound
FMOD::Sound* soundResource = nullptr;
FMOD::Channel* channel = nullptr;
r = m_fmodSystem->createSound("640709__chobesha__laser-gun-sound.ogg",
FMOD_DEFAULT | FMOD_OPENONLY,
nullptr,
&soundResource);
FMOD_ERROR_CHECK(r);
// Get sound length in raw bytes
unsigned int audioLength = 0;
FMOD_TIMEUNIT timeUnit = FMOD_TIMEUNIT_RAWBYTES;
r = soundResource->getLength(&audioLength, timeUnit);
FMOD_ERROR_CHECK(r);
// Read sound data
char* audioBuffer = new char[audioLength];
unsigned int readData = 0;
r = soundResource->readData(reinterpret_cast<void*>(audioBuffer),
audioLength,
&readData);
FMOD_ERROR_CHECK(r);
signed short* interpretedData = reinterpret_cast<signed short*>(audioBuffer);
// Analyze data to normalize it
signed short maxValue = -32767;
signed short minValue = 32767;
int interpretedDataSize = readData / sizeof(signed short);
for (int i = 0; i < interpretedDataSize; ++i) {
if (interpretedData[i] > maxValue) {
maxValue = interpretedData[i];
}
if (interpretedData[i] < minValue) {
minValue = interpretedData[i];
}
}
float maxValF = static_cast<float>(maxValue);
float minValF = static_cast<float>(minValue);
float* normalizedArray = new float[interpretedDataSize];
// Normalize data
float maxAbsValF = std::abs(maxValF);
maxAbsValF = maxAbsValF > std::abs(minValF) ? maxAbsValF : std::abs(minValF);
for (int i = 0; i < interpretedDataSize; ++i) {
normalizedArray[i] = interpretedData[i] / maxAbsValF;
}
I read in other posts and in the FMOD documentation that I can use the FMOD_CREATESAMPLE flag to tell createSound to decompress the data at load time instead of at play time, but it doesn't work in the current structure of my code. I'm guessing that's because FMOD_OPENONLY prevents the file from being fully loaded, and therefore it never gets the chance to decompress, or something like that. That's what I got from the documentation.
The problem with not using the FMOD_OPENONLY flag is that I cannot read the data with the readData function; it returns an error.
Searching, I found that I can use the lock function to help it decompress and to get a pointer to the sound's data, but even with all of this it still appears to be compressed. I don't know if I'm missing something.
This is version 2 of the code, with these modifications:
// Create sound
FMOD::Sound* soundResource = nullptr;
FMOD::Channel* channel = nullptr;
r = m_fmodSystem->createSound("640709__chobesha__laser-gun-sound.ogg",
FMOD_DEFAULT | FMOD_CREATESAMPLE,
nullptr,
&soundResource);
FMOD_ERROR_CHECK(r);
// Get sound length in raw bytes
unsigned int audioLength = 0;
FMOD_TIMEUNIT timeUnit = FMOD_TIMEUNIT_RAWBYTES;
r = soundResource->getLength(&audioLength, timeUnit);
FMOD_ERROR_CHECK(r);
// Read sound data
char* audioBuffer = new char[audioLength];
void* ptr2 = nullptr;
unsigned int len1, len2;
r = soundResource->lock(0, audioLength, reinterpret_cast<void**>(&audioBuffer), &ptr2, &len1, &len2);
FMOD_ERROR_CHECK(r);
r = soundResource->unlock(reinterpret_cast<void*>(audioBuffer), ptr2, len1, len2);
FMOD_ERROR_CHECK(r);
This is the graph I get for a WAV sound
The left side is the sound loaded in the Audacity app, and the right side is my graph.
This is the graph for the first try with the OGG file
And this is the graph for the OGG file with the modifications
From what I can see, the first is the same as the second, so I'm assuming both are still compressed and my modifications did nothing.
Does anyone know a better way to decompress and read the raw data of a sound, preferably using the FMOD library? If it's not possible with FMOD, what is the best way to decompress any sound file, knowing its format?
This answer is just a couple of minority-position takes and a sketchy description of a process I once used. Maybe the thoughts are worth considering.
One thought: a person who is editing sound (your target audience?) has the know-how to decompress files (e.g., using Audacity), so perhaps adding this capability (handling all possible incoming audio formats) is a lower priority?
Another thought: there are likely many libraries for decompressing sound available. You could employ one of them prior to presenting the results to FMOD. I just did a search on github for "ogg c++" and was shown 51 repositories.
In my own experience, for an application I wrote about seven years ago, I tweaked some code from a Vorbis decoder so that it handed me raw PCM rather than writing out a .wav. With OGG, the audio data is PCM prior to compression, so it decompresses back to PCM before that gets converted to a .wav. I found the point in the code where the conversion happens and edited it out, leaving the data in decompressed PCM form.
My application was built to accept PCM, so I actually ended up saving an intermediate step.
I'm working on an OpenFX plugin to process images in grading/post-production software.
All my processing is done in a series of Metal kernel functions. The image is sent to the GPU as buffers (float array), one for the input and one for the output.
The output is then used by the OpenFX framework for display inside the host application, so up to that point I didn't have to take care of it.
I now need to be able to read the output values once the GPU has processed the commands. I have tried to use the "contents" method on the buffer, but my plugin keeps crashing (in the worst case) or gives me very weird values when it "works" (I'm not supposed to have anything above 1 or below 0, but I get very large numbers, 0 or negative 0, NaN... so I assume I have a memory access issue of some sort).
At first I thought it was an issue with Private/Shared memory, so I tried to modify the buffer to be shared. But I'm still struggling!
Full disclosure: I have no specific training in MSL; I'm learning as I go with this project, so I might be doing and/or saying very stupid things. I looked around for hours before deciding to ask for help. Thanks to all who help out in any way!
Below is the code (without everything that doesn't concern my current issue). If it is lacking anything of interest please let me know.
id < MTLBuffer > srcDeviceBuf = reinterpret_cast<id<MTLBuffer> >(const_cast<float*>(p_Input)) ;
//Below is the destination Image buffer creation the way it used to be done before my edits
//id < MTLBuffer > dstDeviceBuf = reinterpret_cast<id<MTLBuffer> >(p_Output);
//My attempt at creating a Shared memory buffer
MTLResourceOptions bufferOptions = MTLResourceStorageModeShared;
int bufferLength = sizeof(float)*1920*1080*4;
id <MTLBuffer> dstDeviceBuf = [device newBufferWithBytes:p_Output length:bufferLength options:bufferOptions];
id<MTLCommandBuffer> commandBuffer = [queue commandBuffer];
commandBuffer.label = [NSString stringWithFormat:@"RunMetalKernel"];
id<MTLComputeCommandEncoder> computeEncoder = [commandBuffer computeCommandEncoder];
//First method to be computed
[computeEncoder setComputePipelineState:_initModule];
int exeWidth = [_initModule threadExecutionWidth];
MTLSize threadGroupCount = MTLSizeMake(exeWidth, 1, 1);
MTLSize threadGroups = MTLSizeMake((p_Width + exeWidth - 1) / exeWidth,
p_Height, 1);
[computeEncoder setBuffer:srcDeviceBuf offset: 0 atIndex: 0];
[computeEncoder setBuffer:dstDeviceBuf offset: 0 atIndex: 8];
//encodes first module to be executed
[computeEncoder dispatchThreadgroups:threadGroups threadsPerThreadgroup: threadGroupCount];
//Modules encoding
if (p_lutexport_on) {
//Fills the image with patch values for the LUT computation
[computeEncoder setComputePipelineState:_LUTExportModule];
[computeEncoder dispatchThreadgroups:threadGroups threadsPerThreadgroup: threadGroupCount];
}
[computeEncoder endEncoding];
[commandBuffer commit];
if (p_lutexport_on) {
//Here is where I try to read the buffer values (and insert them into a custom object "p_lutexp_lut")
float* result = static_cast<float*>([dstDeviceBuf contents]);
//Retrieve the output values and populate the LUT with them
int lutLine = 0;
float3 out;
for (int index(0); index < 35937 * 4; index += 4) {
out.x = result[index];
out.y = result[index + 1];
out.z = result[index + 2];
p_lutexp_lut->setValuesAtLine(lutLine, out);
lutLine++;
}
p_lutexp_lut->toFile();
}
If a command buffer includes write or read operations on a given MTLBuffer, you must ensure that these operations complete before reading the buffer's contents. You can use the addCompletedHandler: method, the waitUntilCompleted method, or custom semaphores to signal that a command buffer has completed execution.
[commandBuffer addCompletedHandler:^(id<MTLCommandBuffer> cb) {
/* read or write buffer here */
}];
[commandBuffer commit];
I'm building a graphics engine, and I need to write the result image to a .bmp file. I'm storing the pixels in a vector<Color>, while also saving the width and the height of the image. Currently I'm writing the image as follows (I didn't write this code myself):
std::ostream &img::operator<<(std::ostream &out, EasyImage const &image) {
//temporaryily enable exceptions on output stream
enable_exceptions(out, std::ios::badbit | std::ios::failbit);
//declare some struct-vars we're going to need:
bmpfile_magic magic;
bmpfile_header file_header;
bmp_header header;
uint8_t padding[] =
{0, 0, 0, 0};
//calculate the total size of the pixel data
unsigned int line_width = image.get_width() * 3; //3 bytes per pixel
unsigned int line_padding = 0;
if (line_width % 4 != 0) {
line_padding = 4 - (line_width % 4);
}
//lines must be aligned to a multiple of 4 bytes
line_width += line_padding;
unsigned int pixel_size = image.get_height() * line_width;
//start filling the headers
magic.magic[0] = 'B';
magic.magic[1] = 'M';
file_header.file_size = to_little_endian(pixel_size + sizeof(file_header) + sizeof(header) + sizeof(magic));
file_header.bmp_offset = to_little_endian(sizeof(file_header) + sizeof(header) + sizeof(magic));
file_header.reserved_1 = 0;
file_header.reserved_2 = 0;
header.header_size = to_little_endian(sizeof(header));
header.width = to_little_endian(image.get_width());
header.height = to_little_endian(image.get_height());
header.nplanes = to_little_endian(1);
header.bits_per_pixel = to_little_endian(24);//3bytes or 24 bits per pixel
header.compress_type = 0; //no compression
header.pixel_size = to_little_endian(pixel_size);
header.hres = to_little_endian(11811); //11811 pixels/meter or 300dpi
header.vres = to_little_endian(11811); //11811 pixels/meter or 300dpi
header.ncolors = 0; //no color palette
header.nimpcolors = 0;//no important colors
//okay that should be all the header stuff: let's write it to the stream
out.write((char *) &magic, sizeof(magic));
out.write((char *) &file_header, sizeof(file_header));
out.write((char *) &header, sizeof(header));
//okay let's write the pixels themselves:
//they are arranged left->right, bottom->top, b,g,r
// this is the main bottleneck
for (unsigned int i = 0; i < image.get_height(); i++) {
//loop over all lines
for (unsigned int j = 0; j < image.get_width(); j++) {
//loop over all pixels in a line
//we cast &color to char*. since the color fields are ordered blue,green,red they should be written automatically
//in the right order
out.write((char *) &image(j, i), 3 * sizeof(uint8_t));
}
if (line_padding > 0)
out.write((char *) padding, line_padding);
}
//okay we should be done
return out;
}
As you can see, the pixels are being written one by one. This is quite slow; I put some timers in my program and found that the writing was my main bottleneck.
I tried to write entire (horizontal) lines, but I did not find out how to do it (the best I found was this).
Secondly, I wanted to write to the file using multithreading (not sure if I need threading or multiprocessing), using OpenMP. But that means I need to specify which byte address to write to, I think, which I couldn't figure out.
Lastly, I thought about immediately writing to the file whenever I drew an object, but then I had the same issue with writing to specific locations in the file.
So, my question is: what's the best (fastest) way to tackle this problem? (I'm compiling this for Windows and Linux.)
The fastest method to write to a file is to use hardware assist. Write your output to memory (a.k.a. buffer), then tell the hardware device to transfer from memory to the file (disk).
The next fastest method is to write all the data to a buffer then block write the data to the file. If you want other tasks or threads to execute during your writing, then create a thread that writes the buffer to the file.
When writing to a file, the more data per transaction, the more efficient the write will be. For example, 1 write of 1024 bytes is faster than 1024 writes of one byte.
The idea is to keep the data streaming. Slowing down the transfer rate may be faster than a burst write, delay, burst write, delay, etc.
Remember that the disk is essentially a serial device (unless you have a special hard drive). Bits are laid down on the platters using a bit stream. Writing data in parallel will have adverse effects because the head will have to be moved between the parallel activities.
Remember that if you use more than one core, there will be more traffic on the data bus. The transfer to the file will have to pause while other threads/tasks are using the data bus. So, if you can, block all tasks, then transfer your data. :-)
I've written programs that copy from slow memory to fast memory, then transferred from fast memory to the hard drive. That was also using interrupts (threads).
Summary
Fast writing to a file involves:
Keep the data streaming; minimize the pauses.
Write in binary mode (no translations, please).
Write in blocks (format into memory as necessary before writing the block).
Maximize the data in a transaction.
Use separate writing thread, if you want other tasks running "concurrently".
The hard drive is a serial device, not parallel. Bits are written to the platters in a serial stream.
I created an application a couple of years ago that allowed me to process audio by downmixing a 6-channel or 8-channel signal (a.k.a. 5.1 or 7.1) to matrixed stereo. For that purpose I used the PortAudio library with great results. This is an example of the open-stream call and the callback used to downmix a 7.1 signal:
Pa_OpenStream(&Flujo, &inputParameters, &outParameters, SAMPLE_RATE, 1, paClipOff, ptrFunction, NULL);
Notice the use of a framesPerBuffer value of just one (1). This is my callback function:
int downmixed8channels(const void *input, void *output, unsigned long framesPerBuffer, const PaStreamCallbackTimeInfo * info, PaStreamCallbackFlags state, void * userData)
{
(void)userData;
(void)info;
(void)state;
(void)framesPerBuffer;
float *ptrInput = (float*)input;
float *ptrOutput = (float*)output;
/*This is a struct to identify samples*/
AudioSamples->L = ptrInput[0];
AudioSamples->R = ptrInput[1];
AudioSamples->C = ptrInput[2];
AudioSamples->LFE = ptrInput[3];
AudioSamples->RL = ptrInput[4];
AudioSamples->RR = ptrInput[5];
AudioSamples->SL = ptrInput[6];
AudioSamples->SR = ptrInput[7];
Encoder->Encode8Channels(AudioSamples->L, // downmix the eight discrete channels
AudioSamples->R,
AudioSamples->C,
AudioSamples->LFE,
AudioSamples->SL,
AudioSamples->SR,
AudioSamples->RL,
AudioSamples->RR);
ptrOutput[0] = Encoder->getLT();
ptrOutput[1] = Encoder->getRT();
return paContinue;
}
As you can see, the order set by the index in the output and input buffers corresponds to a discrete channel; in the case of the output, 0 = left channel, 1 = right channel. This used to work well until the Windows 10 2004 update arrived. Since I updated my system to this new version, my audio glitches and I get artifacts like these:
Those are captures of the sound from the channel-test window in the audio device panel of Windows. From the images it is clear that my program is dropping frames, so my first attempt at a fix was to use a buffer larger than one frame to hold the samples, process them, and then send them. The reason I did not use a buffer size larger than one in the first place was that doing so made the program drop frames.
But before implementing that, I did a proof of concept with no audio processing at all, just a simple pass of data from input to output. For that I set the output channelCount parameter to 8, just like the input, resulting in something as simple as this:
for (int i = 0; i < FramesPerBuffer /*1000*/; i++)
{
ptrOutput[i] = ptrInput[i];
}
But the program was still dropping samples.
Next I used two callbacks: one for writing to an intermediate buffer, and a second one to read it and send it to the output.
(void)info;
(void)userData;
(void)state;
(void)output;
float* ptrInput = (float*)input;
for (int i = 0; i < FRAME_SIZE; i++)
{
buffer_input[i] = ptrInput[i];
}
return paContinue;
Callback to store.
(void)info;
(void)userData;
(void)state;
(void)output;
float* ptrOutput = (float*)output;
for (int i = 0; i < FRAME_SIZE; i += 6)
{
AudioSamples->L = buffer_input[i];
AudioSamples->R = buffer_input[i + 1];
AudioSamples->C = buffer_input[i + 2];
AudioSamples->LFE = buffer_input[i + 3];
AudioSamples->SL = buffer_input[i + 4];
AudioSamples->SR = buffer_input[i + 5];
Encoder->Encoder(AudioSamples->L, AudioSamples->R, AudioSamples->C, AudioSamples->LFE,
AudioSamples->SL, AudioSamples->SR);
bufferTransformed[w] = Encoder->getLT();
bufferTransformed[w + 1] = Encoder->getRT();
w += 2;
}
w = 0;
for (int i = 0; i < FRAME_REDUCED; i++)
{
ptrOutput[i] = bufferTransformed[i];
}
return paContinue;
Callback for processing
The processing callback uses a reduced frames-per-buffer count, since two channels are fewer than eight; it seems that in PortAudio a frame is composed of one sample for each audio channel.
This also did not work, and it raises the first problem: how do I synchronize the two callbacks? After all of this, what recommendation or advice can you give me to solve this issue?
Notes: the sample rate must be the same for both devices (I implemented logic in the program to enforce this), and the bit depth is also the same; I am using paFloat32. The PortAudio build is the modified one used by Audacity, since I wanted to use their implementation of WASAPI loopback.
Thanks very much in advance!
At the end of the day I did not have to change my callback functions in any way. What solved it was increasing the .suggestedLatency parameter of the input and output parameters to 1.0; even the devices' defaultLowOutputLatency and defaultHighOutputLatency values were causing too much glitching. I tested until I found that 1.0 was the sweet spot; higher values did not seem to improve anything.
TL;DR Increased the suggestedLatency until the glitching is gone.
I'm having trouble reading in a 16-bit .wav file. I have read in the header information; however, the conversion does not seem to work.
For example, in Matlab if I read in wave file I get the following type of data:
-0.0064, -0.0047, -0.0051, -0.0036, -0.0046, -0.0059, -0.0051
However, in my C++ program the following is returned:
0.960938, -0.00390625, -0.949219, -0.00390625, -0.996094, -0.00390625
I need the data to be represented the same way. Now, for 8 bit .wav files I did the following:
uint8_t c;
for(unsigned i=0; (i < size); i++)
{
c = (unsigned)(unsigned char)(data[i]);
double t = (c-128)/128.0;
rawSignal.push_back(t);
}
This worked; however, when I did the following for 16-bit:
uint16_t c;
for(unsigned i=0; (i < size); i++)
{
c = (signed)(signed char)(data[i]);
double t = (c-256)/256.0;
rawSignal.push_back(t);
}
it does not work and shows the output above.
I'm following the standards found Here
Here, data is a char array and rawSignal is a std::vector<double>. I'm probably just handling the conversion wrong but cannot seem to find out where. Does anyone have any suggestions?
Thanks
EDIT:
This is what it is now displaying (in a graph):
This is what it should be displaying:
There are a few problems here:
8-bit wavs are unsigned, but 16-bit wavs are signed. Therefore, the subtraction step given in the answers by Carl and Jay is unnecessary. I presume they just copied it from your code, but it is wrong.
16-bit wavs have a range from -32,768 to 32,767, not from -256 to 255, making the scaling you are using incorrect anyway.
16-bit wav samples are 2 bytes each, thus you must read two bytes to make one sample, not one. You appear to be reading one character at a time. When you read the bytes, you may also have to swap them if your native endianness is not little-endian.
Assuming a little-endian architecture, your code would look more like this (very close to Carl's answer):
for (int i = 0; i < size; i += 2)
{
int16_t c = (int16_t)(((unsigned char)data[i + 1] << 8) | (unsigned char)data[i]);
double t = c / 32768.0;
rawSignal.push_back(t);
}
for a big-endian architecture:
for (int i = 0; i < size; i += 2)
{
int16_t c = (int16_t)(((unsigned char)data[i] << 8) | (unsigned char)data[i + 1]);
double t = c / 32768.0;
rawSignal.push_back(t);
}
That code is untested, so please LMK if it doesn't work.
First of all, about little-endian/big-endian-ness: WAV is just a container format; the data encoded in it can be in countless formats. Most of the codecs are lossy (MPEG Layer-3, a.k.a. MP3, yes, such a stream can be "packaged" into a WAV, plus various CCITT and other codecs). You assume that you are dealing with some kind of PCM format, where you see the actual wave in raw form and no lossy transformation was done on it. The endianness depends on the codec that produced the stream. (See: Is the endianness of format params guaranteed in RIFF WAV files?)
It's also a question whether a PCM sample is a linearly scaled integer or whether there is some scaling, log scale, or other transformation behind it. The regular PCM wav files I have encountered were simple linear-scale samples, but I'm not working in the audio recording or production industry.
So a path to your solution:
Make sure that you are dealing with a regular 16-bit PCM encoded RIFF WAV file.
While reading the stream, always read two bytes (chars) at a time and convert the two chars into a 16-bit short. People have shown this before me.
The waveform you show clearly suggests that you either did not estimate the frequency well, or that you have one mono channel instead of stereo. The sampling rate (44.1 kHz, 22 kHz, 11 kHz, 8 kHz, etc.) is just as important as the resolution (8-bit, 16-bit, 24-bit, etc.). Maybe in the first case you had stereo data; if you read it as mono, you may not notice. In the second case, if you have mono data, you'll run out of samples halfway through reading, which is what seems to happen according to your graphs. As for the other cause: lower sampling resolutions (and 16-bit is on the lower end) are often paired with lower sampling rates. So if your horizontal axis is recording time and you think you have 22 kHz data but it's actually 11 kHz, then again you'll run out of actual samples halfway through and read memory garbage. So it is either one of these.
Make sure that you interpret and treat your loop iterator variable and size correctly. It seems that size tells you how many bytes you have, so you'll have exactly half as many short-integer samples. Notice that Bjorn's solution correctly increments i by 2 for that reason. A quick sanity check on the header values, sketched below, can catch most of these mismatches.
My working code is
int8_t* buffer = new int8_t[size];
/*
HERE buffer IS FILLED
*/
for (int i = 0; i < size; i += 2)
{
int16_t c = ((unsigned char)buffer[i + 1] << 8) | (unsigned char)buffer[i];
double t = c/32768.0;
rawSignal.push_back(t);
}
A 16-bit quantity gives you a range from -32,768 to 32,767, not from -256 to 255 (that's just 9 bits). Use:
for (int i = 0; i < size; i += 2)
{
c = (data[i + 1] << 8) + data[i]; // WAV files are little-endian
double t = (c - 32768)/32768.0;
rawSignal.push_back(t);
}
You might want something more like this:
uint16_t c;
for(unsigned i=0; (i < size); i++)
{
// get a 16 bit pointer to the array
uint16_t* p = (uint16_t*)data;
// get the i-th element
c = *( p + i );
// convert to signed? I'm guessing this is what you want
int16_t cs = (int16_t)c;
double t = (cs-256)/256.0;
rawSignal.push_back(t);
}
Your code converts the 8 bit value to a signed value then writes it into an unsigned variable. You should look at that and see if it's what you want.