How to calculate number of samples in audio given some parameters? - c++

Given following parameters:
Sample size: 16
Channel count: 2
Codec: audio/pcm
Byte order: little endian
Sample rate: 11025
Sample type: signed int
How can I determine the number of samples for N milliseconds of recorded audio? I'm new to audio processing. The codec is PCM, so I guess it's uncompressed audio.
I'm using Qt 4.8 on Windows 7 Ultimate x64.

/**
 * Converts milliseconds to a number of samples for the buffer.
 * @param ms the time in milliseconds
 * @param sampleRate the sample rate in Hz
 * @param channels the number of channels
 * @return the size of the buffer in samples
 */
int msToSamples( int ms, int sampleRate, int channels ) {
    return (int)(((long) ms) * sampleRate * channels / 1000);
}

/* get size of a buffer (in bytes) to hold nSamples */
int samplesToBytes(int nSamples, int sampleSizeBits) {
    return nSamples * (sampleSizeBits / 8);
}
Reference

I think it is important here for you to understand what each of these terms means so that you can then write the code that gives you what you want.
Sample rate is the number of samples per second of audio; in your case it is 11025 (sample rates are sometimes expressed in kHz). This is quite low compared to something like CD audio, which is 44.1 kHz (a sample rate of 44100), and there are higher standards such as 48 kHz and 96 kHz.
Next you have the number of bits used for each sample; this is typically 8, 16, 24, or 32 bits.
Next you can have an arbitrary number of channels for each sample.
So the code sample already posted shows how to apply each of these numbers together: multiplying the sample rate by the number of channels by the bytes per sample (bits / 8) gives you the data size for a single second of audio; divide that by 1000 and multiply by your millisecond count to get the size for N milliseconds.
This can get quite tricky when you start applying it to video, which deals in frames: either nice numbers like 25/30/50/60 frames per second, or the NTSC-based rates of 23.98/29.97/59.94 frames per second, in which case you have to do awkward calculations to make sure audio and video align correctly.
Hope this helps.
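To make that concrete, here is a minimal sketch using the parameters from the question (the function and variable names are mine, not from the original code):

#include <cstdint>

// Sketch: convert a duration in milliseconds to an interleaved sample count
// and a byte count for uncompressed PCM.
int64_t msToSampleCount(int64_t ms, int sampleRate, int channels)
{
    // sampleRate * channels = interleaved samples per second
    return ms * sampleRate * channels / 1000;
}

int64_t msToByteCount(int64_t ms, int sampleRate, int channels, int bitsPerSample)
{
    return msToSampleCount(ms, sampleRate, channels) * (bitsPerSample / 8);
}

// Example with the question's parameters: 500 ms at 11025 Hz, 2 channels, 16-bit
// msToSampleCount(500, 11025, 2) == 11025 interleaved samples
// msToByteCount(500, 11025, 2, 16) == 22050 bytes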

Here is a solution in pseudocode:
Given the
duration = 20 ... time in milliseconds
sr = 11025 ... sampling rate in Hz
the number of samples N is
N = sr * duration / 1000 = 220.5
You will need to round that to the closest integer.

Related

fftw analysing frequencies from mic input on pc

I am using fftw to analyse the frequency spectrum of audio input to a computer from the mic input. I am using the portaudio C++ libraries to capture windows of time-domain audio data and then fftw to do a real-to-complex (r2c) transformation of this data to the frequency domain. Below is my function, which I call every time I receive a block of data.
The sample rate is 44100 samples per second, the sample type is short (signed 16-bit integer), and I am taking 250 ms blocks of data in each window. The FFT resolution is therefore 4 Hz.
The problem is, I'm not sure how to interpret the data which I am receiving after the transformation. When no audio is played, I am getting amplitudes of around 1000 to 4000 for every frequency component; as soon as audio is played from an instrument, for example, all of the amplitudes go negative.
I have tried doing a normalisation before the FFT, by dividing by the average power, and then the data makes more sense. All amplitudes are from 200 to 500 when nothing is played; then, for example, if I play a tone of 76 Hz, the amplitude for this component increases to around 2000. So that is something along the lines of what I expect, but I am still not sure whether this process can be implemented better.
My question is, am I doing the right thing here? Must the data be normalised, and am I doing it right? Why am I still receiving high amplitudes on the frequencies that are not being played? Does anyone have experience doing something similar and maybe some tips? Many thanks in advance.
void AudioProcessor::GetFFT(void* inputData, void* freqSpectrum)
{
    double* input = (double*)inputData;
    short* freq_spectrum = (short*)freqSpectrum;

    fftPlan = fftw_plan_dft_r2c_1d(FRAMES_PER_BUFFER, input, complexOut, FFTW_ESTIMATE);
    fftw_execute(fftPlan);

    for (int k = 0; k < (FRAMES_PER_BUFFER + 1) / 2; ++k)
    {
        freq_spectrum[k] = (short)(sqrt(complexOut[k][0] * complexOut[k][0] + complexOut[k][1] * complexOut[k][1]));
    }
    if (FRAMES_PER_BUFFER % 2 == 0) /* frames per buffer is an even number */
    {
        freq_spectrum[FRAMES_PER_BUFFER / 2] = (short)(sqrt(complexOut[FRAMES_PER_BUFFER / 2][0] * complexOut[FRAMES_PER_BUFFER / 2][0] + complexOut[FRAMES_PER_BUFFER / 2][1] * complexOut[FRAMES_PER_BUFFER / 2][1])); /* Nyquist freq. */
    }
}
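For what it's worth, one common convention for interpreting those raw r2c outputs (this is a general sketch of mine, not something given in the question) is to scale each bin magnitude by the FFT length and then express it in dB:

#include <cmath>

// Sketch: normalise a raw FFTW r2c bin magnitude and express it in dB.
// n is the FFT length (FRAMES_PER_BUFFER above). For purely real input, every
// bin except DC and Nyquist has a mirrored negative-frequency twin, hence *2.
double binMagnitude(double re, double im, int n, bool isDcOrNyquist)
{
    double mag = std::sqrt(re * re + im * im) / n;
    if (!isDcOrNyquist)
        mag *= 2.0;
    return mag;
}

double magnitudeToDb(double mag)
{
    return 20.0 * std::log10(mag + 1e-12); // tiny offset avoids log(0)
}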

Obtain the total number of samples with FFMpeg

Currently my application reads audio files based on a while-realloc loop:
// Pseudocode
float* data = nullptr;
int size = 0;
AVFrame* frame;

while (readFrame(formatContext, frame))
{
    data = realloc(data, size + frame.nSamples);
    size += frame.nSamples;
    /* Read frame samples into data */
}
Is there a way to obtain the total number of samples in a stream at the beginning? I want to be able to create the array with new[] instead of malloc.
For reference, this was answered here:
FFmpeg: How to estimate number of samples in audio stream?
I used the following in my code:
int total_samples = (int) ((format_context->duration / (float) AV_TIME_BASE) * SAMPLE_RATE * NUMBER_CHANNELS);
NOTE: my testing shows this calculation will most likely be more than the actual number of samples found, so make sure you compensate for that in your code. I set all remaining "unset" samples to zero.
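A minimal sketch of how that estimate might be combined with new[], as the question asked (sample_rate and channels here are parameters I've assumed, standing in for the SAMPLE_RATE and NUMBER_CHANNELS constants above):

extern "C" {
#include <libavformat/avformat.h>
}

// Sketch: allocate once from the duration-based estimate. The buffer is
// value-initialised to zero, so any over-estimated tail stays silent,
// matching the note above about zeroing the remaining "unset" samples.
float* allocate_sample_buffer(AVFormatContext* format_context,
                              int sample_rate, int channels, int* out_capacity)
{
    int estimated = (int)((format_context->duration / (double)AV_TIME_BASE)
                          * sample_rate * channels);
    float* data = new float[estimated]();
    *out_capacity = estimated;
    return data;
}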

How to efficiently determine the minimum necessary size of a pre-rendered sine wave audio buffer for looping?

I've written a program that generates a sine-wave at a user-specified frequency, and plays it on a 96kHz audio channel. To save a few CPU cycles I employ the old trick of pre-rendering a short section of audio into a buffer, and then playing back the buffer in a loop, so that I can avoid calling the sin() function 96000 times per second for the duration of the program and just do simple memory-copying instead.
My problem is efficiently determining what the minimum usable size of this pre-rendered buffer would be. For some frequencies it is easy: for example, an 8kHz sine wave can be perfectly represented by generating a 12-sample buffer and playing it in a loop, because (8000*12 == 96000). For other frequencies, however, a single cycle of the sine wave requires a non-integral number of samples to represent, and therefore looping a single cycle's worth of samples would cause unacceptable glitching.
For some of those frequencies, however, it's possible to get around that problem by pre-rendering more than one cycle of the sine wave and looping that, if I can figure out how many cycles are required so that both the number of cycles and the number of samples in the buffer are integral. For example, a sine-wave frequency of 12.8kHz translates to a single-cycle buffer size of 7.5 samples, which won't loop cleanly, but if I render two consecutive cycles of the sine wave into a 15-sample buffer, then I can cleanly loop the result.
My current approach to solving this issue is brute force: I try all possible cycle-counts and see if any of them result in a buffer size with an integral number of samples in it. I think that approach is unsatisfactory for the following reasons:
1) It's very inefficient. For example, the program shown below (which prints buffer-size results for 480,000 possible frequency values between 0Hz and 48kHz) takes 35 minutes to complete on my 2.7GHz machine. I think there must be a much faster way to do this.
2) I suspect that the results are not 100% accurate, due to floating-point errors.
3) The algorithm gives up if it can't find an acceptable buffer size less than 10 seconds long. (I could make the limit higher, but of course that would make the algorithm even slower).
So, is there any way to calculate the minimum-usable-buffer-size analytically, preferably in O(1) time? It seems like it should be easy, but I haven't been able to figure out what kind of math I should use.
Thanks in advance for any advice!
#include <stdio.h>
#include <math.h>

static const long long SAMPLES_PER_SECOND = 96000;
static const long long MAX_ALLOWED_BUFFER_SIZE_SAMPLES = (SAMPLES_PER_SECOND * 10);

// Returns the number of sine-wave cycles that must be pre-rendered so that the
// buffer loops cleanly at the given frequency, or -1 on failure.
static int GetNumCyclesNeededForPreRenderedBuffer(float freqHz)
{
    double oneCycleLengthSamples = SAMPLES_PER_SECOND / freqHz;
    for (int count = 1; (count * oneCycleLengthSamples) < MAX_ALLOWED_BUFFER_SIZE_SAMPLES; count++)
    {
        double remainder = fmod(oneCycleLengthSamples * count, 1.0);
        if (remainder > 0.5) remainder = 1.0 - remainder;
        if (remainder <= 0.0) return count;
    }
    return -1;
}

int main(int, char **)
{
    for (int i = 0; i < 48000 * 10; i++)
    {
        double freqHz = ((double)i) / 10.0;
        int numCyclesNeeded = GetNumCyclesNeededForPreRenderedBuffer(freqHz);
        if (numCyclesNeeded >= 0)
        {
            double oneCycleLengthSamples = SAMPLES_PER_SECOND / freqHz;
            printf("For %.1fHz, use a pre-render-buffer size of %f samples (%i cycles, %f samples/cycle)\n", freqHz, (numCyclesNeeded * oneCycleLengthSamples), numCyclesNeeded, oneCycleLengthSamples);
        }
        else printf("For %.1fHz, there was no suitable pre-render-buffer size under the allowed limit!\n", freqHz);
    }
    return 0;
}
number_of_cycles/size_of_buffer = frequency/samples_per_second
This implies that if you can simplify your frequency/samples_per_second fraction, you can find the size of your buffer and the number of cycles in the buffer. If frequency and samples_per_second are integers, you can simplify the fraction by finding the greatest common divisor; otherwise you can use the method of continued fractions.
Example:
Say your frequency is 1234.5, and your samples_per_second is 96000. We can make these into two integers by multiplying by 10, so we get the ratio:
frequency/samples_per_second = 12345/960000
The greatest common divisor is 15, so it can be reduced to 823/64000.
So you would need 823 cycles in a 64000 sample buffer to reproduce the frequency exactly.
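A sketch of that approach in C++ (std::gcd requires C++17; the factor of 10 assumes the frequency is given to one decimal place, as in the original program):

#include <cstdio>
#include <numeric>   // std::gcd

// Sketch: reduce frequency / samples_per_second to lowest terms to get the
// minimal exact loop buffer. The frequency is passed in tenths of a Hz so
// that both numerator and denominator stay integral.
void printMinimalBuffer(long long freqTenthsHz, long long samplesPerSecond)
{
    long long num = freqTenthsHz;            // frequency * 10
    long long den = samplesPerSecond * 10;   // samples_per_second * 10
    long long g = std::gcd(num, den);
    long long cycles = num / g;              // cycles in the buffer
    long long bufferSamples = den / g;       // buffer length in samples
    printf("%lld cycles in a %lld-sample buffer\n", cycles, bufferSamples);
}

// printMinimalBuffer(12345, 96000) -> "823 cycles in a 64000-sample buffer"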

ffmpeg::avcodec_encode_video setting PTS h264

I'm trying to encode video as H264 using libavcodec
ffmpeg::avcodec_encode_video(codec,output,size,avframe);
returns an error that I don't have the avframe->pts value set correctly.
I have tried setting it to 0, 1, AV_NOPTS_VALUE and 90kHz * framenumber, but I still get the error "non-strictly-monotonic PTS".
The ffmpeg.c example sets packet.pts with ffmpeg::av_rescale_q(), but this is only called after you have encoded the frame!
When used with the MP4V codec the avcodec_encode_video() sets the pts value correctly itself.
I had the same problem, solved it by calculating pts before calling avcodec_encode_video as follows:
//Calculate PTS: (1 / FPS) * sample rate * frame number
//sample rate 90KHz is for h.264 at 30 fps
picture->pts = (1.0 / 30) * 90 * frame_count;
out_size = avcodec_encode_video(c, video_outbuf, video_outbuf_size, picture);
Solution stolen from this helpful blog post
(Note: I changed the sample rate to kHz; expressed in Hz it was far too long between frames. You may need to play with this value. I'm not a video encoding expert here, I just wanted something that worked, and this did.)
I had this problem too. I solved it this way:
Before you invoke
ffmpeg::avcodec_encode_video(codec,output,size,avframe);
set the pts value of avframe to an integer value that starts at 0 and increments by one every time, like this:
avframe->pts = nextPTS();
The implementation of nextPTS() is:
int nextPTS()
{
    static int static_pts = 0;
    return static_pts++;
}
After giving the pts of avframe a value, encode it. If encoding succeeds, add the following code:
if (packet.pts != AV_NOPTS_VALUE)
    packet.pts = av_rescale_q(packet.pts, mOutputCodecCtxPtr->time_base, mOutputStreamPtr->time_base);
if (packet.dts != AV_NOPTS_VALUE)
    packet.dts = av_rescale_q(packet.dts, mOutputCodecCtxPtr->time_base, mOutputStreamPtr->time_base);
This adds the correct pts/dts values to the encoded packet. In the code above, packet is of type AVPacket, mOutputCodecCtxPtr of type AVCodecContext*, and mOutputStreamPtr of type AVStream.
avcodec_encode_video returning 0 indicates that the current frame has been buffered; you have to flush all buffered frames after all input frames have been encoded. The code to flush the buffered frames looks somewhat like this:
int ret;
while ((ret = ffmpeg::avcodec_encode_video(codec, output, size, NULL)) > 0)
    ; // place your code here.
I had this problem too. As far as I remember, the error is related to dts. Setting
out_video_packet.dts = AV_NOPTS_VALUE;
helped me.
A strictly increasing monotonic function is a function where f(x) < f(y) if x < y.
So it means you cannot encode two frames with the same PTS, as you were doing. Check, for example, with a counter, and it should not return the error anymore.

Interpretation of DirectSound buffer elements from mic capture device

I am doing some maintenance work involving DirectSound buffers. I would like to know how to interpret the elements in the buffer, that is, to know what each value in the buffer represents. This data is coming from a microphone.
This wave format is being used:
WAVEFORMATEXTENSIBLE format = {
    { WAVE_FORMAT_EXTENSIBLE, 1, sample_rate, sample_rate * 4, 4, 32, 22 },
    { 32 }, 0, KSDATAFORMAT_SUBTYPE_IEEE_FLOAT
};
My goal is to detect microphone silence. I am currently accomplishing this by simply determining if all values in the buffer fail to exceed some threshold volume value, assuming that the intensity of each buffer element directly corresponds to volume.
This is what I am currently trying:
bool is_mic_silent(float * data, unsigned int num_samples, float threshold)
{
    float * max_iter = std::max_element(data, data + num_samples);
    if (!max_iter) {
        return true;
    }
    float max = *max_iter;
    if (max < threshold) {
        return true;
    }
    return false; // At least one value is sufficiently loud.
}
As MSN said, the samples are 32-bit floats. To detect silence you would normally calculate the RMS value: take the average of the squared sample values over some time interval (say 20-50 ms) and compare (the square root of) this average to a threshold.
The noise inherent in the microphone signal may let single samples reach above the threshold while the ambient sound would still be considered silence. The averaging over a short interval will result in a value that corresponds better with our perception.
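A minimal sketch of that RMS check (my own helper, assuming 32-bit float samples in [-1, 1] and a caller that hands it roughly 20-50 ms of data at a time):

#include <cmath>
#include <cstddef>

// Sketch: root-mean-square level of a block of float samples, compared
// against a silence threshold chosen by the caller.
bool is_block_silent(const float* data, std::size_t num_samples, float rms_threshold)
{
    if (num_samples == 0)
        return true;
    double sum_sq = 0.0;
    for (std::size_t i = 0; i < num_samples; ++i)
        sum_sq += (double)data[i] * data[i];
    double rms = std::sqrt(sum_sq / num_samples);
    return rms < rms_threshold;
}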
From here, floating point PCM values are from [-1, 1].
In addition to Han's suggestion to average samples, also consider calibrating your threshold value. Under different environments, with different microphones and different audio channels, "silence" can mean a lot of things.
The simple way would be allowing the user to configure the threshold. Alternatively, allow a "noise floor measurement" where you acquire the threshold value.
Note that the samples are linear, but levels in audio processing are usually given in dB. So depending on your target audience, you may want to convert readings and inputs to/from dB.
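For instance, a linear amplitude or RMS value in [0, 1] could be converted to dB (relative to full scale) along these lines; this is just a sketch, and the epsilon is only there to avoid taking the log of zero:

#include <cmath>

// Sketch: convert a linear amplitude in [0, 1] to dB relative to full scale.
double linearToDb(double amplitude)
{
    return 20.0 * std::log10(amplitude + 1e-12);
}

// And back the other way:
double dbToLinear(double db)
{
    return std::pow(10.0, db / 20.0);
}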