Explanation of this VST Synth example - C++

I am having trouble understanding a particular area of code in the Steinberg VST Synth example, in this function:
void VstXSynth::processReplacing (float** inputs, float** outputs, VstInt32 sampleFrames)
{
    float* out1 = outputs[0];
    float* out2 = outputs[1];

    if (noteIsOn)
    {
        float baseFreq = freqtab[currentNote & 0x7f] * fScaler;
        float freq1 = baseFreq + fFreq1; // not really linear...
        float freq2 = baseFreq + fFreq2;
        float* wave1 = (fWaveform1 < .5) ? sawtooth : pulse;
        float* wave2 = (fWaveform2 < .5) ? sawtooth : pulse;
        float wsf = (float)kWaveSize;
        float vol = (float)(fVolume * (double)currentVelocity * midiScaler);
        VstInt32 mask = kWaveSize - 1;

        if (currentDelta > 0)
        {
            if (currentDelta >= sampleFrames) // future
            {
                currentDelta -= sampleFrames;
                return;
            }
            memset (out1, 0, currentDelta * sizeof (float));
            memset (out2, 0, currentDelta * sizeof (float));
            out1 += currentDelta;
            out2 += currentDelta;
            sampleFrames -= currentDelta;
            currentDelta = 0;
        }

        // loop
        while (--sampleFrames >= 0)
        {
            // this is all very raw, there is no means of interpolation,
            // and we will certainly get aliasing due to non-bandlimited
            // waveforms. don't use this for serious projects...
            (*out1++) = wave1[(VstInt32)fPhase1 & mask] * fVolume1 * vol;
            (*out2++) = wave2[(VstInt32)fPhase2 & mask] * fVolume2 * vol;
            fPhase1 += freq1;
            fPhase2 += freq2;
        }
    }
    else
    {
        memset (out1, 0, sampleFrames * sizeof (float));
        memset (out2, 0, sampleFrames * sizeof (float));
    }
}
The way I understand the function is that if a MIDI note is currently on, we need to copy our wave table into the outputs array to pass back to the VST host. What I don't understand specifically is what the area in the if (currentDelta > 0) conditional block is doing. It seems like it's just writing zeros to the output arrays...
A full version of the file can be found at http://pastebin.com/SdAXkRyW

The incoming MIDI NoteOn event can have an offset relative to the start of the buffers you receive (called deltaFrames). currentDelta keeps track of when the note should start playing relative to the start of the buffers received.
So if currentDelta >= sampleFrames, the note should not start in this cycle (it lies in the future) - early exit.
If currentDelta falls within this cycle, the memory is cleared up to the moment the note should start producing output (the memset calls), and the pointers are advanced so the buffers appear to begin right at the spot where the sound should start; the length, sampleFrames, is adjusted accordingly.
Then the loop produces the sound.
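For context, here is roughly how currentDelta gets set in the first place. The sketch below follows the example's naming, but its exact body is an assumption: when processEvents() sees a MIDI NoteOn, it stores the event's deltaFrames offset for the next audio block.
void VstXSynth::noteOn (VstInt32 note, VstInt32 velocity, VstInt32 delta)
{
    currentNote     = note;
    currentVelocity = velocity;
    currentDelta    = delta; // consumed at the top of processReplacing()
    fPhase1 = fPhase2 = 0;   // assumption: restart the oscillators
    noteIsOn = true;
}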
Hope it helps.
Marc

Related

stereo ping pong delay c++

I have to create a stereo ping pong delay with these parameters.
• Delay Time (0 – 3000 milliseconds)
• Feedback (0 – 0.99)
• Wet / Dry Mix (0 – 1.0)
I have managed to implement the stereo in/out and the 3 parameters, but I am struggling with how to implement the ping pong. I have this code in the process block, but it only replays the left and right in the opposite channels once. Is there a simple way to loop this so it repeats over and over rather than just once, or is this not the best way to implement ping pong? Any help would be great!
//ping pong implementation
for (int i = 0; i < buffer.getNumSamples(); i++)
{
    // Reduce the amplitude of each sample in the block for the
    // left and right channels
    //channelDataLeft[i] = channelDataLeft[i] * 0.5;
    //channelDataRight[i] = channelDataRight[i] * 0.25;
    if (i % 2 == 1) //if i is odd this will play
    {
        // Calculate the next output sample (current input sample + delayed version)
        float outputSampleLeft = (channelDataLeft[i] + (mix * delayDataLeft[readIndex]));
        float outputSampleRight = (channelDataRight[i] + (mix * delayDataRight[readIndex]));
        // Write the current input into the delay buffer along with the delayed sample
        delayDataLeft[writeIndex] = channelDataLeft[i] + (delayDataLeft[readIndex] * feedback);
        delayDataRight[writeIndex] = channelDataRight[i] + (delayDataRight[readIndex] * feedback);
        // Increment read and write index, check to see if it's greater than buffer length
        // if yes, wrap back around to zero
        if (++readIndex >= delayBufferLength)
            readIndex = 0;
        if (++writeIndex >= delayBufferLength)
            writeIndex = 0;
        // Assign output sample computed above to the output buffer
        channelDataLeft[i] = outputSampleLeft;
        channelDataRight[i] = outputSampleRight;
    }
    else //if i is even then this will play
    {
        // Calculate the next output sample (current input sample + delayed version swapped around from if)
        float outputSampleLeft = (channelDataLeft[i] + (mix * delayDataRight[readIndex]));
        float outputSampleRight = (channelDataRight[i] + (mix * delayDataLeft[readIndex]));
        // Write the current input into the delay buffer along with the delayed sample
        delayDataLeft[writeIndex] = channelDataLeft[i] + (delayDataLeft[readIndex] * feedback);
        delayDataRight[writeIndex] = channelDataRight[i] + (delayDataRight[readIndex] * feedback);
        // Increment read and write index, check to see if it's greater than buffer length
        // if yes, wrap back around to zero
        if (++readIndex >= delayBufferLength)
            readIndex = 0;
        if (++writeIndex >= delayBufferLength)
            writeIndex = 0;
        // Assign output sample computed above to the output buffer
        channelDataLeft[i] = outputSampleLeft;
        channelDataRight[i] = outputSampleRight;
    }
}
I'm not really sure why you have the modulo check and different behavior based on the sample index. A ping-pong delay should have two delay buffers, one for each channel. The input of one stereo channel plus the feedback of the opposite channel's delay buffer should be fed into each delay.
Here is some pseudo-code of the logic (written here as compilable C++):
#include <vector>

float wetDryMix = 0.5f;
float wetFactor = wetDryMix;
float dryFactor = 1.0f - wetDryMix;
float feedback = 0.6f;
int sampleRate = 44100;
int sampleCount = sampleRate * 10;
std::vector<float> leftInSamples(sampleCount);
std::vector<float> rightInSamples(sampleCount);
std::vector<float> leftOutSamples(sampleCount);
std::vector<float> rightOutSamples(sampleCount);

int delayBufferSize = sampleRate * 3;
std::vector<float> delayBufferLeft(delayBufferSize);
std::vector<float> delayBufferRight(delayBufferSize);

// The write index leads the read index by delaySamples, so the delay
// time is delaySamples / sampleRate seconds (0.5 s here).
int delaySamples = sampleRate / 2;
int delayReadIndex = 0;
int delayWriteIndex = delaySamples;

for (int sampleIndex = 0; sampleIndex < sampleCount; sampleIndex++)
{
    // Read samples in from input
    float leftChannel = leftInSamples[sampleIndex];
    float rightChannel = rightInSamples[sampleIndex];

    // Make sure delay ring buffer indices are within range
    delayReadIndex = delayReadIndex % delayBufferSize;
    delayWriteIndex = delayWriteIndex % delayBufferSize;

    // Get the current output of the delay ring buffers
    float delayOutLeft = delayBufferLeft[delayReadIndex];
    float delayOutRight = delayBufferRight[delayReadIndex];

    // Calculate what is put into the delay buffers: the current input signal
    // plus the delay output attenuated by the feedback factor.
    // Notice that the right delay output is fed into the left delay and vice
    // versa; this way sound from each stereo channel will ping-pong back and forth.
    float delayInputLeft = leftChannel + delayOutRight * feedback;
    float delayInputRight = rightChannel + delayOutLeft * feedback;

    // Alternatively you could use a mono signal that is pushed to one delay
    // channel along with the current feedback delay; this will ping-pong a
    // mixed mono signal between channels:
    //float delayInputLeft = leftChannel + rightChannel + delayOutRight * feedback;
    //float delayInputRight = delayOutLeft * feedback;

    // Push the calculated delay values into the delay ring buffers
    delayBufferLeft[delayWriteIndex] = delayInputLeft;
    delayBufferRight[delayWriteIndex] = delayInputRight;

    // Calculate the resulting output by mixing the dry input signal with the
    // current delayed output
    float outputLeft = leftChannel * dryFactor + delayOutLeft * wetFactor;
    float outputRight = rightChannel * dryFactor + delayOutRight * wetFactor;
    leftOutSamples[sampleIndex] = outputLeft;
    rightOutSamples[sampleIndex] = outputRight;

    // Increment ring buffer indices
    delayReadIndex++;
    delayWriteIndex++;
}

Raspberry Pi generate tone with PiFM

A while back I came across PiFM and decided to learn a bit about audio and modulation. I am trying to write an AFSK modulator, but first I wanted to generate pure tones, like 1000 Hz. I am using code from the PiFM project (https://github.com/rm-hull/pifm/blob/master/pifm.cpp), which reads in a WAV file and shoots it out over RF. I want to do the same, but with pure tones.
Here is one of my attempts:
void playSineWave(float frequency, int duration, float samplerate)
{
    SampleSink* ss;
    ss = new Outputter(samplerate);
    int bufferLen = duration * samplerate;
    float* buffer = new float[bufferLen];
    for (int i = 0; i < bufferLen; i++) {
        float amplitude = 6000;
        buffer[i] = amplitude * sin( (2.f * float(M_PI) * i * frequency) / samplerate );
    }
    cout << "Buffer length: " << bufferLen << endl;
    ss->consume(buffer, bufferLen);
    delete [] buffer;
}
I would use it as playSineWave(1000, 5, 22050); to play a 1000 Hz tone for 5 seconds. But I get either nothing or noise. Can you suggest how to fix it, or perhaps some good reading material?
Edit: Changed code to fix issue with amplitude. Still no tone.
I'm not sure if it's intentional to make the amplitude change over time. Anyhow, i / bufferLen * 32760 is evaluated as (i / bufferLen) * 32760, and i / bufferLen is going to be 0 for all i smaller than bufferLen (look up integer division in C/C++), which is always the case because of for (int i = 0; i < bufferLen; i++).
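If the ramp was intentional, a minimal fix (sketched against the original, pre-edit line) is to force floating-point division:
// Cast before dividing so the ramp is computed in floating point
// instead of truncating to zero via integer division.
float amplitude = (static_cast<float>(i) / bufferLen) * 32760.0f;
buffer[i] = amplitude * sin( (2.f * float(M_PI) * i * frequency) / samplerate );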

DSP - How to apply gain in the frequency domain?

I’m a beginner in DSP and I have to make an audio equalizer.
I’ve done some research and tried a lot of things in the past month, but in the end it’s not working, and I’m a bit overwhelmed by all this information (which I certainly don’t interpret well).
I have two main classes: Broadcast (which generates pink noise and applies gain to it) and Record (which analyses the input of the microphone and deduces the gain from it).
I have some trouble with both, but I’m going to limit this post to the Broadcast side.
I’m using the Aquila DSP library, so I used this example and extended its logic.
/* Constructor */
Broadcast::Broadcast() :
    _Info(44100, 2, 2),               // 44100 Hz, 2 channels, sample size: 2 bytes
    _pinkNoise(_Info.GetFrequency()), // Init the Aquila::PinkNoiseGenerator
    _thirdOctave()                    // list of "Octave" objects, each holding the min, center,
                                      // and max frequency of a ⅓ octave band (http://goo.gl/365ZFN)
{
    _pinkNoise.setAmplitude(65536);
}

/* This method is called in a loop and fills the buffer with the pink noise */
bool Broadcast::BuildBuffer(char * Buffer, int BufferSize, int & BufferCopiedSize)
{
    if (BufferSize < 131072)
        return false;

    int SampleCount = 131072 / _Info.GetSampleSize();
    int signalSize = SampleCount / _Info.GetChannelCount();

    _pinkNoise.generate(signalSize);
    auto fft = Aquila::FftFactory::getFft(signalSize);
    Aquila::SpectrumType spectrum = fft->fft(_pinkNoise.toArray());
    Aquila::SpectrumType ampliSpectrum(signalSize);

    std::list<Octave>::iterator it;
    double gain, fl, fh;

    /* [1.] - The gains are applied in this loop */
    for (it = _thirdOctave.begin(); it != _thirdOctave.end(); it++)
    {
        /* Test values */
        if ((*it).getCtr() >= 5000)
            gain = 6.0;
        else
            gain = 0.0;

        fl = (signalSize * (*it).getMin() / _Info.GetFrequency());
        fh = (signalSize * (*it).getMax() / _Info.GetFrequency());

        /* [2.] - THIS is the part that I think is wrong */
        for (int i = 0; i < signalSize; i++)
        {
            if (i >= fl && i < fh)
                ampliSpectrum[i] = std::pow(10, gain / 20);
            else
                ampliSpectrum[i] = 1.0;
        }

        /* [3.] - Multiply each bin of spectrum with ampliSpectrum */
        std::transform(
            std::begin(spectrum),
            std::end(spectrum),
            std::begin(ampliSpectrum),
            std::begin(spectrum),
            [](Aquila::ComplexType x, Aquila::ComplexType y) { return x * y; }); // Aquila::ComplexType is a std::complex<double>
    }

    /* Put the IFFT result in a new buffer */
    boost::scoped_ptr<double> s(new double[signalSize]);
    fft->ifft(spectrum, s.get());

    int val;
    for (int i = 0; i < signalSize; i++)
    {
        val = int(s.get()[i]);
        /* Fill the two channels with the same value */
        reinterpret_cast<int*>(Buffer)[i * 2] = val;
        reinterpret_cast<int*>(Buffer)[i * 2 + 1] = val;
    }

    BufferCopiedSize = SampleCount * _Info.GetSampleSize();
    return true;
}
I’m using the pink noise of GStreamer along with the equalizer-nbands module to compare my output.
With all gains set to 0.0 the outputs are the same.
But as soon as I add some gain, the outputs sound different (even though my output still sounds like pink noise, and seems to have gain in the right spot).
So my question is:
How can I apply my gains to each ⅓ octave band in the frequency domain?
My research suggests I should build a filter bank of band-pass filters, but how do I do that with the result of an FFT?
Thanks for your time.
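One thing worth checking in loop [2.] (an assumption on my part, not a confirmed fix): the FFT of a real signal is conjugate-symmetric, so a gain applied to bin i must also be applied to its mirror bin signalSize - i, otherwise the IFFT result is no longer purely real. A minimal sketch of that loop with the mirrored bins included:
// Sketch: scale both the positive-frequency bin and its mirrored
// negative-frequency bin so the spectrum stays conjugate-symmetric.
double linearGain = std::pow(10, gain / 20);
for (int i = 1; i < signalSize / 2; i++)
{
    if (i >= fl && i < fh)
    {
        spectrum[i] *= linearGain;
        spectrum[signalSize - i] *= linearGain; // mirror bin
    }
}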

fftw - Access violation error

I implemented an FFTW (fftw.org) example to use Fast Fourier transforms...
This is the code....
I load an image that I convert from uint8_t to double (this code works fine...).
string bmpFileNameImage = "files/testDummyFFTWWithWisdom/onechannel_image.bmp";
BMPImage bmpImage(bmpFileNameImage);
vector<double>pixelColors;
vector<uint8_t> image = bmpImage.copyBits();
toDouble(image,pixelColors,256,256, 1);
int width = bmpImage.width();
int height = bmpImage.height();
I use wisdom files to improve performance:
FILE * file = fopen("wisdom.fftw", "r");
if (file) {
    fftw_import_wisdom_from_file(file);
    fclose(file);
}
/* fftw variables */
fftw_complex *out;
double *wisdomInput = (double *) fftw_malloc(sizeof(double) * width * 2 * (height/2 + 1));
const fftw_plan forward = fftw_plan_dft_r2c_2d(width, height, wisdomInput, reinterpret_cast<fftw_complex *>(wisdomInput), FFTW_PATIENT);
const fftw_plan inverse = fftw_plan_dft_c2r_2d(width, height, reinterpret_cast<fftw_complex *>(wisdomInput), wisdomInput, FFTW_PATIENT);
file = fopen("wisdom.fftw", "w");
if (file) {
    fftw_export_wisdom_to_file(file);
    fclose(file);
}
Finally, I execute the FFTW plans... I receive an access violation error in the first function (fftw_execute_dft_r2c) and I don't know why... I read this tutorial:
http://www.fftw.org/fftw3_doc/Multi_002dDimensional-DFTs-of-Real-Data.html#Multi_002dDimensional-DFTs-of-Real-Data.
I do the malloc with (ny/2+1) as explained there... I don't understand why it is not working... I am testing different sizes...
out = (fftw_complex*) fftw_malloc(sizeof(fftw_complex) * width * (height / 2 + 1));
double *result = (double *) fftw_malloc(width * (height + 2) * sizeof(double));
fftw_execute_dft_r2c(forward, &pixelColors[0], out);
fftw_execute_dft_c2r(inverse, out, result);
Regards.
This is the corrected code.
It had a few mistakes:
• It was reading a wrong wisdom.fftw file (from some old test...). Now it always creates a new fftw_plan and a new file.
• I misunderstood how the FFTW library works with in-place and out-of-place parameters. I had to change the mallocs to use the correct padding for in-place transforms (I added +2 in the malloc functions).
• In order to restore the image, I had to divide each value by the logical size of the transform (width * height), as explained in this link.
/* load image */
string bmpFileNameImage = "files/polyp.bmp";
BMPImage bmpImage(bmpFileNameImage);
int width = bmpImage.width();
int height = bmpImage.height();
vector<double> pixelColors;
vector<uint8_t> image = bmpImage.copyBits();
// get one channel from the image
Uint8ToDouble(image, pixelColors, bmpImage.width(), bmpImage.height(), 1);

// We don't reuse the old wisdom.fftw... it can be corrupt
/*
FILE * file = fopen("wisdom.fftw", "r");
if (file) {
    fftw_import_wisdom_from_file(file);
    fclose(file);
} */

double *wisdomInput = (double *) fftw_malloc(sizeof(double) * height * (width + 2));
const fftw_plan forward = fftw_plan_dft_r2c_2d(width, height, wisdomInput, reinterpret_cast<fftw_complex *>(wisdomInput), FFTW_PATIENT);
const fftw_plan inverse = fftw_plan_dft_c2r_2d(width, height, reinterpret_cast<fftw_complex *>(wisdomInput), wisdomInput, FFTW_PATIENT);

// (the allocations of `out` and `result` and the fftw_execute calls are
// elided in the original post)

double *bitsColors = (double *) fftw_malloc(width * height * sizeof(double));
for (int y = 0; y < height; y++) {
    for (int x = 0; x < width + 2; x++) {
        if (x < width) {
            int currentIndex = ((y * width) + (x));
            // divide by the logical transform size to undo FFTW's scaling
            bitsColors[currentIndex] = (static_cast<double>(result[y * (width + 2) + x])) / (height * width);
        }
    }
}

fftw_free(wisdomInput);
fftw_free(out);
fftw_free(result);
fftw_free(bitsColors);

fftw_destroy_plan(forward);
fftw_destroy_plan(inverse);
fftw_cleanup();
fftw_execute_dft_r2c(forward, &pixelColors[0], out);
What are you doing here? The array is already a pointer.
Change it to fftw_execute_dft_r2c(forward, pixelColors[0], out); and it should work now.
Maybe the problem is here (http://www.fftw.org/doc/New_002darray-Execute-Functions.html):
[...] that the following conditions are met:
The input and output arrays are the same (in-place) or different (out-of-place) if the plan was originally created to be in-place or out-of-place, respectively.
In the plan you are using in-place transform parameters (with a bad allocation, by the way, since:
double *wisdomInput = (double *) fftw_malloc(sizeof(double)*width*2*(height/2 +1 ));
should be:
double *wisdomInput = (double *) fftw_malloc(sizeof(fftw_complex)*width*2*(height/2 +1 ));
to be suitable for the output too).
But you're calling the fftw_execute_dft_r2c function with out-of-place parameters.
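In other words, to satisfy the in-place plan you would pass the same padded buffer as both input and output. A minimal sketch, assuming pixelColors is laid out row-major to match the plan's (width, height) dimensions:
// Sketch: copy the real input into a buffer with in-place r2c padding
// (row stride 2*(height/2+1) doubles), then execute with input == output.
int paddedRow = 2 * (height / 2 + 1);
double *buf = (double *) fftw_malloc(sizeof(double) * width * paddedRow);
for (int row = 0; row < width; ++row)
    memcpy(buf + row * paddedRow, &pixelColors[row * height],
           height * sizeof(double));
fftw_execute_dft_r2c(forward, buf, reinterpret_cast<fftw_complex *>(buf));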

How do I get the most accurate audio frequency data possible from a real-time FFT on Tizen?

Currently I'm working in the Tizen IDE.
I read the input data from the microphone and apply the FFT to it... but every time I get NaN output.
Here is my code:
ShortBuffer *pBuffer1 = pData->AsShortBufferN();
fft = new KissFFT(BUFFER_SIZE);
std::vector<short> input(pBuffer1->GetPointer(),
                         pBuffer1->GetPointer() + BUFFER_SIZE); // this contains audio data
std::vector<float> specturm(BUFFER_SIZE);
fft->spectrum(input, specturm);
Applying the FFT:
void KissFFT::spectrum(KissFFTO* fft, std::vector<short>& samples2,
                       std::vector<float>& spectrum) {
    int len = fft->numSamples / 2 + 1;
    kiss_fft_scalar* samples = (kiss_fft_scalar*) &samples2[0];
    kiss_fftr(fft->config, samples, fft->spectrum);
    for (int i = 0; i < len; i++) {
        float re = scale(fft->spectrum[i].r) * fft->numSamples;
        float im = scale(fft->spectrum[i].i) * fft->numSamples;
        if (i > 0)
            spectrum[i] = sqrtf(re * re + im * im) / (fft->numSamples / 2);
        else
            spectrum[i] = sqrtf(re * re + im * im) / (fft->numSamples);
        // note: %d with a float argument is undefined behavior; use %f instead
        AppLog("specturm %d", spectrum[i]); // every time this prints nan
    }
}
KissFFTO* KissFFT::create(int numSamples) {
    KissFFTO* fft = new KissFFTO();
    fft->config = kiss_fftr_alloc(numSamples / 2, 0, NULL, NULL);
    fft->spectrum = new kiss_fft_cpx[numSamples / 2 + 1];
    fft->numSamples = numSamples;
    return fft;
}
In fft->config there should be a parameter for the size of the FFT, like 2048 or 4096, i.e. powers of 2. If you increase this value, you get more resolution in frequency.
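For example, a sketch built on the KissFFTO wrapper above (the 4096 size is just an illustration):
// A larger power-of-two FFT gives finer frequency resolution: bin spacing
// in Hz is the sample rate divided by the transform length.
const int fftSize = 4096;                 // must be a power of two
KissFFTO* fft = KissFFT::create(fftSize); // in place of create(BUFFER_SIZE)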