Cross-Correlation of Two Signals (DI and Microphone) - c++

I'm wondering if anyone would be able to give me some advice on how to implement a cross-correlation function between two simple delay lines that I have set up. My problem is that I have two hard-coded delay lines that I can manually adjust to align the two incoming signals. I'm using a DI signal and a microphone signal from a bass amp. The code in its current state will delay the DI signal, but what I want it to do is take the two signals and align them within the DSP so that it outputs them in phase with one another. My current code can be seen below:
#include <Bela.h>
#define DELAY_BUFFER_SIZE 44100
// Buffer holding previous samples per channel
float gDelayBuffer_l[DELAY_BUFFER_SIZE] = {0};
float gDelayBuffer_r[DELAY_BUFFER_SIZE] = {0};
// Write pointer
int gDelayBufWritePtr = 0;
// Amount of delay
float gDelayAmount = 1;
// Amount of feedback
float gDelayFeedbackAmount = 0;
// Level of pre-delay input
float gDelayAmountPre = 1;
// Amount of delay in samples
int gDelayInSamples = 22050;
// Buffer holding previous samples per channel
float hDelayBuffer_l[DELAY_BUFFER_SIZE] = {0};
float hDelayBuffer_r[DELAY_BUFFER_SIZE] = {0};
// Write pointer
int hDelayBufWritePtr = 0;
// Amount of delay
float hDelayAmount = 1;
// Amount of feedback
float hDelayFeedbackAmount = 0;
// Level of pre-delay input
float hDelayAmountPre = 1;
// Amount of delay in samples
int hDelayInSamples = 44100;
bool setup(BelaContext *context, void *userData)
{
return true;
}
void render(BelaContext *context, void *userData)
{
for(unsigned int n = 0; n < context->analogFrames; n++) {
float out_l = 0;
float out_r = 0;
// Read audio inputs
out_l = analogRead(context,n,0);
out_r = analogRead(context,n,1);
// Increment delay buffer write pointer
if(++gDelayBufWritePtr >= DELAY_BUFFER_SIZE)
gDelayBufWritePtr = 0;
// Calculate the sample that will be written into the delay buffer...
// 1. Multiply the current (dry) sample by the pre-delay gain level (set above)
// 2. Get the previously delayed sample from the buffer, multiply it by the feedback gain and add it to the current sample
float del_input_l = (gDelayAmountPre * out_l + gDelayBuffer_l[(gDelayBufWritePtr-gDelayInSamples+DELAY_BUFFER_SIZE)%DELAY_BUFFER_SIZE] * gDelayFeedbackAmount);
float del_input_r = (gDelayAmountPre * out_r + gDelayBuffer_r[(gDelayBufWritePtr-gDelayInSamples+DELAY_BUFFER_SIZE)%DELAY_BUFFER_SIZE] * gDelayFeedbackAmount);
// Now we can write it into the delay buffer
gDelayBuffer_l[gDelayBufWritePtr] = del_input_l;
gDelayBuffer_r[gDelayBufWritePtr] = del_input_r;
// Get the delayed sample (by reading `gDelayInSamples` many samples behind our current write pointer) and add it to our output sample
out_l = gDelayBuffer_l[(gDelayBufWritePtr-gDelayInSamples+DELAY_BUFFER_SIZE)%DELAY_BUFFER_SIZE] * gDelayAmount;
out_r = gDelayBuffer_r[(gDelayBufWritePtr-gDelayInSamples+DELAY_BUFFER_SIZE)%DELAY_BUFFER_SIZE] * gDelayAmount;
// Write the sample into the output buffer
analogWrite(context, n, 0, out_l);
analogWrite(context, n, 1, out_r);
}
for(unsigned int n = 0; n < context->analogFrames; n++) {
float out_l = 0;
float out_r = 0;
// Read audio inputs
out_l = analogRead(context,n,2);
out_r = analogRead(context,n,3);
// Increment delay buffer write pointer
if(++hDelayBufWritePtr >= DELAY_BUFFER_SIZE)
hDelayBufWritePtr = 0;
// Calculate the sample that will be written into the delay buffer...
// 1. Multiply the current (dry) sample by the pre-delay gain level (set above)
// 2. Get the previously delayed sample from the buffer, multiply it by the feedback gain and add it to the current sample
float del_input_l = (hDelayAmountPre * out_l + hDelayBuffer_l[(hDelayBufWritePtr-hDelayInSamples+DELAY_BUFFER_SIZE)%DELAY_BUFFER_SIZE] * hDelayFeedbackAmount);
float del_input_r = (hDelayAmountPre * out_r + hDelayBuffer_r[(hDelayBufWritePtr-hDelayInSamples+DELAY_BUFFER_SIZE)%DELAY_BUFFER_SIZE] * hDelayFeedbackAmount);
// Now we can write it into the delay buffer
hDelayBuffer_l[hDelayBufWritePtr] = del_input_l;
hDelayBuffer_r[hDelayBufWritePtr] = del_input_r;
// Get the delayed sample (by reading `hDelayInSamples` many samples behind our current write pointer) and add it to our output sample
out_l = hDelayBuffer_l[(hDelayBufWritePtr-hDelayInSamples+DELAY_BUFFER_SIZE)%DELAY_BUFFER_SIZE] * hDelayAmount;
out_r = hDelayBuffer_r[(hDelayBufWritePtr-hDelayInSamples+DELAY_BUFFER_SIZE)%DELAY_BUFFER_SIZE] * hDelayAmount;
// Write the sample into the output buffer
analogWrite(context, n, 2, out_l);
analogWrite(context, n, 3, out_r);
}
}
void cleanup(BelaContext *context, void *userData)
{
}
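One way to get the alignment automatically is to record a block of both inputs and run a brute-force time-domain cross-correlation over a range of candidate lags, then use the lag with the largest correlation as the delay amount for the earlier signal. Below is a minimal sketch of that idea; the function name, the use of std::vector, and the assumption that you have already captured the DI and mic samples into two arrays (e.g. from a lower-priority task, not inside render()) are all mine and not part of the Bela API.

#include <vector>
#include <cstddef>

// Estimate the lag (in samples) by which `mic` trails `di`, using brute-force
// time-domain cross-correlation over lags 0..maxLag.
int estimateLagSamples(const std::vector<float>& di,
                       const std::vector<float>& mic,
                       int maxLag)
{
    int bestLag = 0;
    double bestCorr = -1e30;
    for(int lag = 0; lag <= maxLag; ++lag) {
        double corr = 0.0;
        // Correlate di[n] against mic[n + lag]
        for(std::size_t n = 0; n + lag < mic.size() && n < di.size(); ++n)
            corr += di[n] * mic[n + lag];
        if(corr > bestCorr) {
            bestCorr = corr;
            bestLag = lag;
        }
    }
    return bestLag;
}

Since the mic path normally arrives later than the DI, the estimated lag would then be applied as gDelayInSamples to the DI delay line (with the mic line left at zero delay) so both come out in phase. For long capture buffers an FFT-based correlation is far faster, but the brute-force version is easier to verify first.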

Related

stereo ping pong delay c++

I have to create a stereo ping pong delay with these parameters.
• Delay Time (0 – 3000 milliseconds)
• Feedback (0 – 0.99)
• Wet / Dry Mix (0 – 1.0)
I have managed to implement the stereo in/out and the 3 parameters, but I'm struggling with how to implement the ping pong. I have this code in the process block, but it only replays the left and right in the opposite channels once. Is there a simple way to loop this so it repeats over and over rather than just once, or is this not the best way to implement ping pong? Any help would be great!
//ping pong implementation
for (int i = 0; i < buffer.getNumSamples(); i++)
{
// Reduce the amplitude of each sample in the block for the
// left and right channels
//channelDataLeft[i] = channelDataLeft[i] * 0.5;
// channelDataRight[i] = channelDataRight[i] * 0.25;
if (i % 2 == 1) //if i is odd this will play
{
// Calculate the next output sample (current input sample + delayed version)
float outputSampleLeft = (channelDataLeft[i] + (mix * delayDataLeft[readIndex]));
float outputSampleRight = (channelDataRight[i] + (mix * delayDataRight[readIndex]));
// Write the current input into the delay buffer along with the delayed sample
delayDataLeft[writeIndex] = channelDataLeft[i] + (delayDataLeft[readIndex] * feedback);
delayDataRight[writeIndex] = channelDataRight[i] + (delayDataRight[readIndex] * feedback);
// Increment read and write index, check to see if it's greater than buffer length
// if yes, wrap back around to zero
if (++readIndex >= delayBufferLength)
readIndex = 0;
if (++writeIndex >= delayBufferLength)
writeIndex = 0;
// Assign output sample computed above to the output buffer
channelDataLeft[i] = outputSampleLeft;
channelDataRight[i] = outputSampleRight;
}
else //if i is even then this will play
{
// Calculate the next output sample (current input sample + delayed version swapped around from if)
float outputSampleLeft = (channelDataLeft[i] + (mix * delayDataRight[readIndex]));
float outputSampleRight = (channelDataRight[i] + (mix * delayDataLeft[readIndex]));
// Write the current input into the delay buffer along with the delayed sample
delayDataLeft[writeIndex] = channelDataLeft[i] + (delayDataLeft[readIndex] * feedback);
delayDataRight[writeIndex] = channelDataRight[i] + (delayDataRight[readIndex] * feedback);
// Increment read and write index, check to see if it's greater than buffer length
// if yes, wrap back around to zero
if (++readIndex >= delayBufferLength)
readIndex = 0;
if (++writeIndex >= delayBufferLength)
writeIndex = 0;
// Assign output sample computed above to the output buffer
channelDataLeft[i] = outputSampleLeft;
channelDataRight[i] = outputSampleRight;
}
}
Not really sure why you have the modulo check and different behavior based on the sample index. A ping-pong delay should have two delay buffers, one for each channel. The input of one stereo channel plus the feedback of the opposite channel's delay buffer should be fed into each delay.
Here is some pseudo-code of the logic:
float wetDryMix = 0.5f;
float wetFactor = wetDryMix;
float dryFactor = 1.0f - wetDryMix;
float feedback = 0.6f;
int sampleRate = 44100;
int sampleCount = sampleRate * 10;
float[] leftInSamples = new float[sampleCount];
float[] rightInSamples = new float[sampleCount];
float[] leftOutSamples = new float[sampleCount];
float[] rightOutSamples = new float[sampleCount];
int delayBufferSize = sampleRate * 3;
float[] delayBufferLeft = new float[delayBufferSize];
float[] delayBufferRight = new float[delayBufferSize];
int delaySamples = sampleRate / 2;
int delayReadIndex = 0;
int delayWriteIndex = delaySamples;
for(int sampleIndex = 0; sampleIndex < sampleCount; sampleIndex++) {
//Read samples in from input
float leftChannel = leftInSamples[sampleIndex];
float rightChannel = rightInSamples[sampleIndex];
//Make sure delay ring buffer indices are within range
delayReadIndex = delayReadIndex % delayBufferSize;
delayWriteIndex = delayWriteIndex % delayBufferSize;
//Get the current output of delay ring buffer
float delayOutLeft = delayBufferLeft[delayReadIndex];
float delayOutRight = delayBufferRight[delayReadIndex];
//Calculate what is put into delay buffer. It is the current input signal plus the delay output attenuated by the feedback factor
//Notice that the right delay output is fed into the left delay and vice versa
//In this version sound from each stereo channel will ping pong back and forth
float delayInputLeft = leftChannel + delayOutRight * feedback;
float delayInputRight = rightChannel + delayOutLeft * feedback;
//Alternatively you could use a mono signal that is pushed to one delay channel along with the current feedback delay
//This will ping-pong a mixed mono signal between channels
//float delayInputLeft = leftChannel + rightChannel + delayOutRight * feedback;
//float delayInputRight = delayOutLeft * feedback;
//Push the calculated delay value into the delay ring buffers
delayBufferLeft[delayWriteIndex] = delayInputLeft;
delayBufferRight[delayWriteIndex] = delayInputRight;
//Calculate resulting output by mixing the dry input signal with the current delayed output
float outputLeft = leftChannel * dryFactor + delayOutLeft * wetFactor;
float outputRight = rightChannel * dryFactor + delayOutRight * wetFactor;
leftOutSamples[sampleIndex] = outputLeft;
rightOutSamples[sampleIndex] = outputRight;
//Increment ring buffer indices
delayReadIndex++;
delayWriteIndex++;
}
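If it helps to map that pseudo-code back onto the processBlock loop from the question, a rough C++ translation using the same buffer and index names as your code (channelDataLeft/Right, delayDataLeft/Right, readIndex, writeIndex, delayBufferLength, feedback, mix, all assumed to be set up exactly as you already have them) might look like this; note there is no i % 2 branch, the same code runs on every sample and the cross-feed lives inside the feedback path:

for (int i = 0; i < buffer.getNumSamples(); i++)
{
    // Current delayed sample from each channel's ring buffer
    float delayOutLeft  = delayDataLeft[readIndex];
    float delayOutRight = delayDataRight[readIndex];

    // Cross-feed: each delay line gets the dry input plus the *opposite*
    // channel's delayed signal scaled by the feedback amount
    delayDataLeft[writeIndex]  = channelDataLeft[i]  + delayOutRight * feedback;
    delayDataRight[writeIndex] = channelDataRight[i] + delayOutLeft  * feedback;

    // Wet/dry mix on the output
    channelDataLeft[i]  = channelDataLeft[i]  * (1.0f - mix) + delayOutLeft  * mix;
    channelDataRight[i] = channelDataRight[i] * (1.0f - mix) + delayOutRight * mix;

    // Advance and wrap the ring-buffer indices every sample
    if (++readIndex  >= delayBufferLength) readIndex  = 0;
    if (++writeIndex >= delayBufferLength) writeIndex = 0;
}

Because each echo is written into the opposite channel's buffer before it is read again, it keeps bouncing between left and right for as long as the feedback lets it decay, which is the repeating behavior you were missing.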

Getting values for specific frequencies in a short time fourier transform

I'm trying to use C++ to recreate the spectrogram function used by Matlab. The function uses a Short Time Fourier Transform (STFT). I found some C++ code here that performs an STFT. The code seems to work perfectly for all frequencies, but I only want a few. I found this post for a similar question with the following answer:
Just take the inner product of your data with a complex exponential at the frequency of interest. If g is your data, then just substitute for f the value of the frequency you want (e.g., 1, 3, 10, ...)
Having no background in mathematics, I can't figure out how to do this. The inner product part seems simple enough from the Wikipedia page, but I have absolutely no idea what he means (with regard to the formula for a DFT) by "a complex exponential at the frequency of interest".
Could someone explain how I might be able to do this? My data structure after the STFT is a matrix filled with complex numbers. I just don't know how to extract my desired frequencies.
Here is the relevant function, where the window is a Hamming window; the vector of desired frequencies isn't an input yet because I don't know what to do with it:
Matrix<complex<double>> ShortTimeFourierTransform::Calculate(const vector<double> &signal,
const vector<double> &window, int windowSize, int hopSize)
{
int signalLength = signal.size();
int nOverlap = hopSize;
int cols = (signal.size() - nOverlap) / (windowSize - nOverlap);
Matrix<complex<double>> results(window.size(), cols);
int chunkPosition = 0;
int readIndex;
// Should we stop reading in chunks?
bool shouldStop = false;
int numChunksCompleted = 0;
int i;
// Process each chunk of the signal
while (chunkPosition < signalLength && !shouldStop)
{
// Copy the chunk into our buffer
for (i = 0; i < windowSize; i++)
{
readIndex = chunkPosition + i;
if (readIndex < signalLength)
{
// Note the windowing!
data[i][0] = signal[readIndex] * window[i];
data[i][1] = 0.0;
}
else
{
// we have read beyond the signal, so zero-pad it!
data[i][0] = 0.0;
data[i][1] = 0.0;
shouldStop = true;
}
}
// Perform the FFT on our chunk
fftw_execute(plan_forward);
// Copy the first (windowSize/2 + 1) data points into your spectrogram.
// We do this because the FFT output is mirrored about the nyquist
// frequency, so the second half of the data is redundant. This is how
// Matlab's spectrogram routine works.
for (i = 0; i < windowSize / 2 + 1; i++)
{
double real = fft_result[i][0];
double imaginary = fft_result[i][1];
results(i, numChunksCompleted) = complex<double>(real, imaginary);
}
chunkPosition += hopSize;
numChunksCompleted++;
} // Excuse the formatting, the while ends here.
return results;
}
Look up the Goertzel algorithm or filter for example code that uses the computational equivalent of an inner product against a complex exponential to measure the presence or magnitude of a specific stationary sinusoidal frequency in a signal. Performance or resolution will depend on the length of the filter and your signal.
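For reference, a minimal Goertzel sketch in C++ (the function name and parameters are only illustrative) that returns the magnitude of a single frequency over one window of samples could look like this:

#include <cmath>
#include <vector>
#include <algorithm>

// Magnitude of the component near `targetFreq` (Hz) in `samples`,
// computed with the Goertzel recurrence; `sampleRate` is in Hz.
double goertzelMagnitude(const std::vector<double>& samples,
                         double targetFreq, double sampleRate)
{
    const double pi = 3.14159265358979323846;
    const double omega = 2.0 * pi * targetFreq / sampleRate;
    const double coeff = 2.0 * std::cos(omega);
    double s1 = 0.0, s2 = 0.0;
    for (double x : samples) {
        double s0 = x + coeff * s1 - s2;   // Goertzel recurrence
        s2 = s1;
        s1 = s0;
    }
    // Squared magnitude of the DFT bin closest to targetFreq
    double power = s1 * s1 + s2 * s2 - coeff * s1 * s2;
    return std::sqrt(std::max(power, 0.0));
}

Alternatively, since you already have the full STFT matrix, the row holding a frequency f is simply the bin index closest to f * windowSize / sampleRate (for f below the Nyquist frequency), so you can just read those rows out of `results` instead of computing everything.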

DSP - How to apply gain in frequency domain?

I’m a beginner in DSP and I have to make an audio equalizer.
I’ve done some research and tried a lot of things in the past month, but in the end it’s not working and I’m a bit overwhelmed by all this information (which I certainly don’t interpret well).
I have two main classes: Broadcast (which generates pink noise and applies gain to it) and Record (which analyses the microphone input and deduces the gain from it).
I have some trouble with both, but I’m going to limit this post to the Broadcast side.
I’m using the Aquila DSP library, so I started from this example and extended its logic.
/* Constructor */
Broadcast::Broadcast() :
_Info(44100, 2, 2), // 44100 Hz, 2 channels, sample size: 2 bytes
_pinkNoise(_Info.GetFrequency()), // Init the Aquila::PinkNoiseGenerator
_thirdOctave() // list of Octave objects, each holding the min, center, and max frequency of a ⅓ octave band (http://goo.gl/365ZFN)
{
_pinkNoise.setAmplitude(65536);
}
/* This method is called in a loop and fills the buffer with the pink noise */
bool Broadcast::BuildBuffer(char * Buffer, int BufferSize, int & BufferCopiedSize)
{
if (BufferSize < 131072)
return false;
int SampleCount = 131072 / _Info.GetSampleSize();
int signalSize = SampleCount / _Info.GetChannelCount();
_pinkNoise.generate(signalSize);
auto fft = Aquila::FftFactory::getFft(signalSize);
Aquila::SpectrumType spectrum = fft->fft(_pinkNoise.toArray());
Aquila::SpectrumType ampliSpectrum(signalSize);
std::list<Octave>::iterator it;
double gain, fl, fh;
/* [1.] - The gains are applied in this loop */
for (it = _thirdOctave.begin(); it != _thirdOctave.end(); it++)
{
/* Test values */
if ((*it).getCtr() >= 5000)
gain = 6.0;
else
gain = 0.0;
fl = (signalSize * (*it).getMin() / _Info.GetFrequency());
fh = (signalSize * (*it).getMax() / _Info.GetFrequency());
/* [2.] - THIS is the part that I think is wrong */
for (int i = 0; i < signalSize; i++)
{
if (i >= fl && i < fh)
ampliSpectrum[i] = std::pow(10, gain / 20);
else
ampliSpectrum[i] = 1.0;
}
/* [3.] - Multiply each bin of spectrum with ampliSpectrum */
std::transform(
std::begin(spectrum),
std::end(spectrum),
std::begin(ampliSpectrum),
std::begin(spectrum),
[](Aquila::ComplexType x, Aquila::ComplexType y) { return x * y; }); // Aquila::ComplexType is an std::complex<double>
}
/* Put the IFFT result in a new buffer */
boost::scoped_array<double> s(new double[signalSize]);
fft->ifft(spectrum, s.get());
int val;
for (int i = 0; i < signalSize; i++)
{
val = int(s.get()[i]);
/* Fills the two channels with the same value */
reinterpret_cast<int*>(Buffer)[i * 2] = val;
reinterpret_cast<int*>(Buffer)[i * 2 + 1] = val;
}
BufferCopiedSize = SampleCount * _Info.GetSampleSize();
return true;
}
I’m using gStreamer’s pink noise along with its equalizer-nbands module to compare against my output.
With all gains set to 0.0 the outputs are the same.
But as soon as I add some gain, the outputs sound different (even though my output still sounds like pink noise and seems to have gain in the right place).
So my question is:
How can I apply my gains to each ⅓ octave band in the frequency domain?
My research suggests I should build a filter bank of band-pass filters, but how do I do that with the result of an FFT?
Thanks for your time.
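One thing worth checking, and a common cause of exactly this kind of difference, is that the FFT of a real signal is conjugate-symmetric: a gain applied only to bins fl..fh in the first half of the spectrum must also be applied to the mirrored bins in the second half, otherwise the inverse FFT is no longer purely real and you hear distortion when the imaginary part is discarded. Here is a minimal sketch of that idea using plain std::complex rather than the Aquila types (the function and variable names are mine, just for illustration):

#include <complex>
#include <vector>
#include <cmath>

// Apply a linear gain to the bins covering [fLowHz, fHighHz) of an
// N-point FFT of a real signal sampled at fs, keeping the spectrum
// conjugate-symmetric so the inverse FFT stays real.
void applyBandGain(std::vector<std::complex<double>>& spectrum,
                   double fLowHz, double fHighHz,
                   double fs, double gainDb)
{
    const int N = static_cast<int>(spectrum.size());
    const double gain = std::pow(10.0, gainDb / 20.0);
    const int kLow  = static_cast<int>(fLowHz  * N / fs);
    const int kHigh = static_cast<int>(fHighHz * N / fs);
    for (int k = kLow; k < kHigh && k <= N / 2; ++k) {
        spectrum[k] *= gain;                 // positive-frequency bin
        if (k != 0 && k != N - k)
            spectrum[N - k] *= gain;         // mirrored negative-frequency bin
    }
}

Also bear in mind that multiplying rectangular blocks of bins like this amounts to brick-wall filtering with circular convolution, so even with the symmetry handled it will not sound identical to gStreamer's equalizer-nbands, which is a bank of time-domain IIR band filters; windowing with overlap-add, or a proper filter bank, gets you closer.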

Calculate frequency from FFT sample?

I'm using the below code in Unreal Engine 4 to capture microphone input and get the resulting FFT.
I'm having trouble calculating the frequency based on this data.
I've tried finding the max amplitude and taking that as the frequency, but that doesn't seem to be correct.
// Additional includes:
#include "Voice.h"
#include "OnlineSubsystemUtils.h"
// New class member:
TSharedPtr<class IVoiceCapture> voiceCapture;
// Initialisation:
voiceCapture = FVoiceModule::Get().CreateVoiceCapture();
voiceCapture->Start();
// Capturing samples:
uint32 bytesAvailable = 0;
EVoiceCaptureState::Type captureState = voiceCapture->GetCaptureState(bytesAvailable);
if (captureState == EVoiceCaptureState::Ok && bytesAvailable > 0)
{
uint8 buf[maxBytes];
memset(buf, 0, maxBytes);
uint32 readBytes = 0;
voiceCapture->GetVoiceData(buf, maxBytes, readBytes);
uint32 samples = readBytes / 2;
float* sampleBuf = new float[samples];
int16_t sample;
for (uint32 i = 0; i < samples; i++)
{
sample = (buf[i * 2 + 1] << 8) | buf[i * 2];
sampleBuf[i] = float(sample) / 32768.0f;
}
// Do fun stuff here
delete[] sampleBuf;
}
I don't see a Fourier transform being carried out in your code snippet. Anyway, using a DFT of N samples taken at a sampling frequency R, the frequency corresponding to bin k is k·R/N.
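As a rough sketch of that last step — assuming you have already run an FFT of length fftSize over sampleBuf and have its complex output in separate re/im arrays (none of which is in the snippet above) — picking the dominant frequency would look something like this:

#include <cstddef>

// Return the frequency (Hz) of the strongest bin of a real-input FFT.
// re/im hold the complex FFT output, fftSize is the transform length,
// sampleRate is the capture rate in Hz.
float dominantFrequencyHz(const float* re, const float* im,
                          int fftSize, float sampleRate)
{
    int peakBin = 1;          // skip bin 0 (the DC offset)
    float peakMag = 0.0f;
    for (int k = 1; k <= fftSize / 2; ++k) {
        float mag = re[k] * re[k] + im[k] * im[k];   // squared magnitude is enough for comparison
        if (mag > peakMag) {
            peakMag = mag;
            peakBin = k;
        }
    }
    return peakBin * sampleRate / (float)fftSize;
}

Note that for voice the strongest bin is often a harmonic rather than the fundamental, so if you are after pitch rather than the loudest partial you will want a proper pitch detector (autocorrelation, YIN, etc.) instead of the raw peak.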

Distortion with chorus

I am new to plug-in development and C++. I was trying to write a chorus plug-in using the Xcode Audio Unit template. However, when I test the plug-in with a sine wave I can hear some mild distortion. I believe I did something wrong with the interpolation technique I am using, even though I went through it a thousand times and could not figure out what I did wrong. Here is the code I have written, including the important parts of the audio unit:
private: //state variables...
enum {kWaveArraySize = 2000}; //number of points in the LFO sine array to hold the points
float mSine[kWaveArraySize];
float *waveArrayPointer; //pointer into the LFO wavetable
Float32 SR; //variable to hold the sampling rate
long mSamplesProcessed; //variable to keep track of samples processed
enum {sampleLimit = (int)10E6}; //limit to reset sine wave
float mCurrentScale, mNextScale; //scaling factor for the LFO sine
TAUBuffer<Float32> Buffer; //circular buffer
Float32 rawIndex; //raw read Index
UInt32 ReadIndex, NextIndex; //the Read Index and the sample after the Read Index for Linear Interpolation
UInt32 WriteIndex; //the Write Index
UInt32 BufferSize; //Size of Buffer
UInt32 MaxBufferSize; //Allocated Number of Samples
Float32 DelayTime; //Delay Time going to be calculated according to LFO
Float32 inputSample, outputSample,
freq, Depth, //Variables to hold the frequency of the LFO and Depth parameter
samplesPerCycle, //number of samples per LFO cycle
InterpOutput, //interpolated output variable
fracDelay, DryValue, WetValue; //fractional Delay, Dry and Wet value variables
VibratoUnit::VibratoUnitKernel::VibratoUnitKernel (AUEffectBase *inAudioUnit) : AUKernelBase (inAudioUnit),
mSamplesProcessed(0), mCurrentScale(0)
{
for (int i = 0; i<kWaveArraySize; ++i) //loop to calculate one cycle of LFO
{
double radians = i * 2.0 * pi / kWaveArraySize;
mSine[i] = (sin(radians) + 1.0) * 0.5;
}
SR = GetSampleRate();
BufferSize = SR;
MaxBufferSize = BufferSize + 20;
Buffer.AllocateClear(MaxBufferSize);
ReadIndex = MaxBufferSize - 1;
WriteIndex = MaxBufferSize - 1; //give both ReadIndex and WriteIndex a value outside the buffer so they are reset to 0 in the process method
}
void VibratoUnit::VibratoUnitKernel::Reset() //Reset and clear
{
mCurrentScale = 0;
mSamplesProcessed = 0;
Buffer.Clear();
}
//------------------PROCESS METHOD-----------------------//
void VibratoUnit::VibratoUnitKernel::Process( const Float32 *inSourceP,
Float32 *inDestP,
UInt32 inFramesToProcess,
UInt32 inNumChannels,
bool &ioSilence )
{
UInt32 nSampleFrames = inFramesToProcess;
const Float32 *sourceP = inSourceP;
Float32 *destP = inDestP;
freq = GetParameter(kParamFreq);
Depth = GetParameter(kParamDepth);
Depth = (SR/1000.0)*Depth;
WetValue = GetParameter(kParamDryWet);
DryValue = 1.0 - WetValue;
waveArrayPointer = &mSine[0];
samplesPerCycle = SR/freq;
mNextScale = kWaveArraySize/samplesPerCycle;
//----processing loop----//
while (nSampleFrames-- > 0) {
int index = static_cast<long> (mSamplesProcessed * mCurrentScale)%kWaveArraySize; //find the current index into the LFO wavetable
if ((mNextScale != mCurrentScale) && (index == 0))
{
mCurrentScale = mNextScale;
mSamplesProcessed = 0; //change LFO in 0 crossing
}
if ((mSamplesProcessed >= sampleLimit) && (index == 0))
{
mSamplesProcessed = 0; // reset samples processed
}
if (WriteIndex >= BufferSize) //reset write Index if goes outside the buffer
{
WriteIndex = 0;
}
inputSample = *sourceP;
sourceP += inNumChannels;
DelayTime = waveArrayPointer[index]; //receive raw sine value between 0 and 1
DelayTime = (Depth*DelayTime)+Depth; //calculate delay value according to sine wave
rawIndex = WriteIndex - DelayTime; //calculate rawIndex relative to the write Index position
if (rawIndex < 0) {
rawIndex = BufferSize + rawIndex;
}
ReadIndex = (UInt32)rawIndex; //calculate readIndex according to rawIndex position
fracDelay = DelayTime - (UInt32)DelayTime; //calculate fractional delay time
NextIndex = ReadIndex + 1; //for interpolation
if (NextIndex >= BufferSize) //bounds checking
{
NextIndex = 0;
}
InterpOutput = (fracDelay*Buffer[ReadIndex]) + ((1.0-fracDelay)*Buffer[NextIndex]); //calculate interpolated value
Buffer[ReadIndex] = InterpOutput; //write the interpolated output to buffer
Buffer[WriteIndex] = inputSample; //write inputsample to buffer
outputSample = (Buffer[ReadIndex]*WetValue) + (inputSample * DryValue); //read output sample from buffer
WriteIndex++; //increment writeIndex
mSamplesProcessed++; //increment samplesprocessed
*destP = outputSample;
destP += inNumChannels;
}
}
Thank you for your help in advance.
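One thing that stands out in the processing loop is `Buffer[ReadIndex] = InterpOutput;` — writing the interpolated value back into the delay line overwrites stored input samples, so later reads interpolate between already-modified data, which is a plausible source of the distortion you hear. The usual pattern is to keep the interpolated read in a local variable and only ever write the dry input into the buffer. Here is a minimal sketch of such a read helper, with made-up names and plain floats rather than your TAUBuffer type, assuming writeIndex points at the slot the next input sample will occupy:

// Read from a circular delay buffer with linear interpolation, without
// modifying the buffer contents. delaySamples may be fractional.
float readFractionalDelay(const float* buffer, unsigned int bufferSize,
                          unsigned int writeIndex, float delaySamples)
{
    float rawIdx = (float)writeIndex - delaySamples;
    if (rawIdx < 0.0f)
        rawIdx += (float)bufferSize;

    unsigned int idx0 = (unsigned int)rawIdx;        // integer part of the read position
    unsigned int idx1 = (idx0 + 1) % bufferSize;     // neighbouring sample for interpolation
    float frac = rawIdx - (float)idx0;               // fractional part of the read position

    // Weighted average of the two stored (dry) samples around the read point
    return (1.0f - frac) * buffer[idx0] + frac * buffer[idx1];
}

In the per-sample loop you would then call something like this to get the delayed value (however your TAUBuffer exposes its raw storage), write only inputSample into Buffer[WriteIndex], and mix the returned value with the dry signal for the output.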