How would you measure the amount of bass in one audio sample? - c++

I am a beginner in audio programming and was wondering how you would get the amount of bass in just one single audio sample. I was thinking it would be measured in dB maybe, but I don't know if there is a unit that is actually for measuring bass.
I have no code to show for the measuring of the bass since I have no idea where to look or how to start, but I've already got everything up to the point of having all the samples of my audio file stored as a float array using the JUCE library. Now it's just a matter of going through each sample and measuring the bass.
Any help please?

I am assuming by one audio sample you mean an array of floats, and not just one element of that array.
If you "Google" the word Bass you land on the very first result telling:
Bass (also called bottom end) describes tones of low (also called "deep") frequency, pitch and range from 16 to 256 Hz.
Yes, Bass is just the audio in that range.
Now, with that I think you would be able to figure out how to find frequencies in your audio samples, and if not, then this is the best I can do...
So you can find the amount of bass: it's simply the content at frequencies in the said range. :)

There's just one solution here, and it's not what you think. You need to transform your signal in the time domain to a signal in the frequency domain. Bass is the lower part of the frequency domain.
The first thing you need then is the FFT. This takes a number of samples as input; a typical value would be 2048 samples. If your input is a 48 kHz signal, this divides the spectrum into 1024 usable bins of about 23.4 Hz each (48000 / 2048). Going by the 16-256 Hz definition of bass above, the lowest 11 bins or so contain the bass part of your signal. (Bin 0 also contains any DC offset, which might be problematic.)
You then need to convert these low bins into energy; that's just squaring the magnitude of each bin and summing.
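A rough sketch of this in Python (hypothetical names; a plain direct DFT of only the low bins stands in for a real FFT library, which is cheap enough since only a handful of bins are needed):

```python
import math

def bass_energy(samples, sample_rate, bass_cutoff_hz=256.0):
    """Sum of squared DFT bin magnitudes below bass_cutoff_hz.

    Computes only the low bins directly (O(N * k)), so no FFT
    library is needed for a handful of bass bins.
    """
    n = len(samples)
    bin_width = sample_rate / n
    num_bins = max(1, int(bass_cutoff_hz / bin_width))
    energy = 0.0
    for k in range(1, num_bins + 1):   # skip bin 0 (DC offset)
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        energy += re * re + im * im
    return energy
```

With this, a 100 Hz tone reports far more bass energy than a 2 kHz tone of the same amplitude and length.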

Related

RF Divider function in SDR

I have what may be an odd question for the SDR gurus out there.
What would be the physical implementation (in software) of a broadband frequency divider?
For example, say I want to capture a signal at 1 GHz, with a 10 MHz bandwidth, then divide it by a factor of 10.
I would expect to get a down-sampled signal at 100 MHz with a 1 MHz bandwidth.
Yes, I know I would lose information, but assume this would be presented as a spectrum analysis, not full audio, video, etc.
Conceptually, could this be accomplished by sampling the RF at 2+times the highest frequency components, say at 2.5 GHz, then discarding 9 out of 10 samples - decimating the input stream?
Thanks,
Dave
Well, as soon as you've digitized your signal it loses the property "bandwidth", which is a real-world concept and not one attached to the inherently meaningless stream of numbers that we're talking about in DSP and SDR. So there's no signal with a bandwidth of 10 MHz (without looking at the contents of the samples), only a stream of numbers that we remember being produced by sampling an analog signal at 20 MS/s (if you're doing real sampling; if you have an I/Q downconverter and sample I and Q simultaneously, you get complex samples, of which 10 MS/s is enough to represent 10 MHz of bandwidth).
Now, if you just throw away 9 out of 10 samples, which is decimation, you'll get aliasing: you can no longer tell whether a sine that took 10 samples in the original signal was actually a sine or just a constant, and the same goes for any sine with a frequency above your new sampling rate's Nyquist limit. That is a loss of information, so yes, in that sense it would work.
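This can be demonstrated numerically; here is a small Python sketch (hypothetical rates, using 1 kS/s instead of RF rates for brevity) in which a 470 Hz sine, decimated by 10, lands exactly on a 30 Hz alias at the new 100 S/s rate:

```python
import math

fs = 1000.0   # original sampling rate in samples/s
m = 10        # decimation factor: new rate 100 S/s, new Nyquist 50 Hz
n = 1000      # one second of signal

# a 470 Hz sine, far above the new Nyquist limit of 50 Hz
x = [math.sin(2 * math.pi * 470 * i / fs) for i in range(n)]

# naive decimation: keep every 10th sample, no filtering beforehand
decimated = x[::m]

# 470 Hz folds down to |470 - 500| = 30 Hz (with a sign flip)
alias = [-math.sin(2 * math.pi * 30 * i / (fs / m)) for i in range(len(decimated))]

# the decimated signal and the 30 Hz alias are numerically identical
max_err = max(abs(a - b) for a, b in zip(decimated, alias))
```

The decimated 470 Hz tone cannot be distinguished from a 30 Hz tone at the new rate, which is exactly the information loss described above.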
I think, however, you have something specific in mind, which is scaling the signal in the frequency direction. Let's have a quick excursion into Fourier analysis:
There is the well-known correspondence for frequency scaling: let G be the Fourier transform of g; then
g(at) <--> (1/|a|) G(f/a)
As you can see, scaling in one domain means inverse scaling in the other: "speeding up" a signal in the time domain, which is what decimation does, stretches it in the frequency domain.
So, in order to do this meaningfully, you could imagine taking the DFT of length N of your signal and setting 9 out of 10 bins to zero by multiplying it with a comb of 1s. Now, multiplication with a signal in the frequency domain is convolution with the Fourier transform of that signal in the time domain. The Fourier transform of such a comb is, to little surprise, the complement of a Nyquist-M filter, and thus a filter itself; you will thus end up with a multi-band-passed version of your signal, which you can then decimate without aliasing.
Hope that was what you're after!

Compute frequency of sinusoidal signal, c++

I have a sinusoidal-shaped signal and I would like to compute its frequency.
I tried to implement something but it looks very difficult; any ideas?
So far I have a vector with timestep and value. How can I get the frequency from this?
Thank you!
If the input signal is a perfect sinusoid, you can calculate the frequency using the time between positive zero crossings. Find 2 consecutive instances where the signal goes from negative to positive and measure the time between them, then invert this number to convert from period to frequency. Note this is only as accurate as your sample interval, and it does not account for any potential aliasing.
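A minimal Python sketch of this zero-crossing estimate (hypothetical function name; as noted, accuracy is limited by the sample interval):

```python
def freq_from_zero_crossings(samples, sample_rate):
    """Estimate frequency from the spacing of negative-to-positive crossings."""
    crossings = [i for i in range(1, len(samples))
                 if samples[i - 1] < 0 <= samples[i]]
    if len(crossings) < 2:
        return None
    # average period between consecutive positive-going crossings
    avg_period = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    return sample_rate / avg_period
```

For a clean 50 Hz sine sampled at 8 kHz this recovers 50 Hz almost exactly.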
You could try auto correlating the signal. An auto correlation can be rapidly calculated by following these steps:
Perform FFT of the audio.
Multiply each complex value with its complex conjugate.
Perform the inverse FFT of the result.
The left most peak will always be the highest (as the signal always correlates best with itself). The second highest peak, however, can be used to calculate the sinusoid's frequency.
For example, if the second peak occurs at an offset (lag) of 50 points, the sample rate is 16 kHz and the window is 1 second, then the resulting frequency is 16000 / 50, or 320 Hz. You can even use interpolation to get a more accurate estimate of the peak position and thus a more accurate sinusoid frequency. This method is computationally intensive, but it is very good for estimating the frequency after significant amounts of noise have been added!
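The steps above can be sketched as follows (hypothetical names; for brevity the autocorrelation is computed directly in the time domain, which gives the same result as the FFT route, only slower):

```python
import math

def freq_from_autocorr(samples, sample_rate):
    """Estimate frequency from the lag of the second autocorrelation peak."""
    n = len(samples)
    max_lag = n // 2
    r = [sum(samples[i] * samples[i + lag] for i in range(n - lag))
         for lag in range(max_lag)]
    # skip the peak at lag 0: wait until the correlation first dips below zero
    lag = 1
    while lag < max_lag and r[lag] > 0:
        lag += 1
    if lag >= max_lag:
        return None   # no dip found: signal is not oscillatory
    # the highest remaining peak sits one period away from lag 0
    best = max(range(lag, max_lag), key=lambda k: r[k])
    return sample_rate / best
```

Matching the example above, a 320 Hz sine at 16 kHz puts the second peak at a lag of 50 points.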

Parameters to improve a music frequency analyzer

I'm using a FFT on audio data to output an analyzer, like you'd see in Winamp or Windows Media Player. However the output doesn't look that great. I'm plotting using a logarithmic scale and I average the linear results from the FFT into the corresponding logarithmic bins. As an example, I'm using bins like:
16k,8k,4k,2k,1k,500,250,125,62,31,15 [hz]
Then I plot the magnitude (dB) against frequency [hz]. The graph definitely 'reacts' to the music, and I can see the response of a drum sample or a high pitched voice. But the graph is very 'saturated' close to the lower frequencies, and overall doesn't look much like what you see in applications, which tend to be more evenly distributed. I feel that apps that display visual output tend to do different things to the data to make it look better.
What things could I do to the data to make it look more like the typical music player app?
Some useful information:
I downsample to single channel, 32 kHz, and specify a time window of 35 ms. That means the FFT gets ~1100 points. I vary these values to experiment (e.g. I tried 16 kHz, and increasing/decreasing the interval length) but I get similar results.
With an FFT of 1100 points, you probably aren't able to capture the low frequencies with a lot of frequency resolution.
Think about it, 30 Hz corresponds to a period of 33ms, which at 32kHz is roughly 1000 samples. So you'll only be able to capture about 1 period in this time.
Thus, you'll need a longer FFT window to capture those low frequencies with sharp frequency resolution.
You'll likely need a time window of 4000 samples or more to start getting noticeably more frequency resolution at the low frequencies. This will be fine too, since you'll still get about 8-10 spectrum updates per second.
One option too, if you want very fast updates for the high frequency bins but good frequency resolution at the low frequencies, is to update the high frequency bins more quickly (such as with the windows you're currently using) but compute the low frequency bins less often (and with larger windows necessary for the good freq. resolution.)
I think a lot of these applications use variable FFT bins.
What you could do is start with very wide, evenly spaced bins like you have and then keep track of the number of elements that land in each bin. If some of the bins are barely used at all (usually the higher frequencies), widen those bins so that they cover more frequency entries, and shrink the low-frequency bins.
I have worked on projects where we just spent a lot of time tuning bins for specific input sources, but it is much nicer to have the software adjust in real time.
A typical visualizer would use constant-Q bandpass filters, not a single FFT.
You could emulate a set of constant-Q bandpass filters by multiplying the FFT results by a set of constant-Q filter responses in the frequency domain, then sum. For low frequencies, you should use an FFT longer than the significant impulse response of the lowest frequency filter. For high frequencies, you can use shorter FFTs for better responsiveness. You can slide any length FFTs along at any desired update rate by overlapping (re-using) data, or you might consider interpolation. You might also want to pre-window each FFT to reduce "spectral leakage" between frequency bands.
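The grouping of linear FFT bins into logarithmic bands can be sketched like this (hypothetical band centers and names; a plain O(N^2) DFT stands in for a real FFT library so the example is self-contained):

```python
import math

def log_band_magnitudes(samples, sample_rate,
                        centers=(31, 62, 125, 250, 500, 1000, 2000, 4000)):
    """Group linear DFT bin magnitudes into octave-spaced bands, in dB."""
    n = len(samples)
    # plain DFT magnitude spectrum (use a real FFT library in practice)
    mags = []
    for k in range(n // 2):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    bands = []
    for c in centers:
        lo, hi = c / math.sqrt(2), c * math.sqrt(2)   # octave band edges
        bins = [mags[k] for k in range(1, n // 2)
                if lo <= k * sample_rate / n < hi]
        avg = sum(bins) / len(bins) if bins else 0.0
        bands.append(20 * math.log10(avg) if avg > 0 else -120.0)
    return bands
```

A 1 kHz tone sampled at 8 kHz should then show its peak in the 1000 Hz band.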

What is a correct formula of amplifying WaveForm audio?

I am wondering what a correct formula of amplifying WaveForm audio is from C++.
Let's say there's a 16 bit waveform data following:
0x0000, 0x2000, 0x3000, 0x2000, 0x0000, (negative part), ...
For acoustic reasons, simply doubling each number won't make the audio sound twice as loud:
0x0000, 0x4000, 0x6000, 0x4000, 0x0000, (doubled negative part), ...
If there's someone who knows well about audio modification, please let me know.
If you double all the sample values it will indeed sound "twice as loud", that is, 6 dB louder. Of course, you need to be careful to avoid distortion due to clipping; that's the main reason why all professional audio processing software today uses float samples internally.
You may need to get back to integer when finally outputting the sound data. If you're just writing a plugin for some DAW (which I would recommend if you want to program simple yet effective sound FX), it will do all this stuff for you: you just get a float, do something with it, and output a float again. But if you want to, for instance, directly output a .wav file, you need to first limit the output so that everything above 0 dB (which is +-1 in a usual float stream) is clipped to just +-1. Then you can multiply by the maximum value your desired integer type can hold (e.g. 32767 for 16-bit) and just cast it into that type. Done.
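A minimal sketch of that float, gain, clip, integer pipeline (hypothetical function name; 16-bit signed output assumed):

```python
def amplify_to_int16(samples, gain):
    """Apply a linear gain to float samples in the +-1.0 range,
    clip at 0 dB, and convert to 16-bit integer sample values."""
    out = []
    for s in samples:
        v = s * gain
        # hard-limit everything above 0 dB to +-1.0 (a real limiter is gentler)
        v = max(-1.0, min(1.0, v))
        out.append(int(v * 32767))
    return out
```

For example, a 0.25 sample doubled becomes 16383, while a 0.9 sample doubled clips to full scale, 32767.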
Anyway, you're certainly right in that it's important to scale your volume knob logarithmically rather than linear (many consumer-grade programs don't, which is just stupid because you will end up using values very close to the left end of the knobs range most of the time), but that has nothing to do with the amplification calculation itself, it's just because we perceive the loudness of signals on a logarithmic scale. Still, the loudness itself is determined by a simple multiplication of a constant factor of the sound pressure, which in turn is proportional to the voltage in the analog circuitry and to the values of the digital samples in any DSP.
Another thing: I don't know how far you're intending to go, but if you want to do this really properly you should not just clip away peaks that are over 0 dB (the clipping sounds very harsh), but implement a proper compressor/limiter. This would then automatically prevent clipping by reducing the level at the loudest parts. You don't want to overdo this either (popular music is usually over-compressed anyway, and as a result a lot of the dynamic musical expression is lost), but it is still a "less dangerous" way of increasing the audio level.
I used linear multiplication for it every time and it never failed. It even worked for fade-outs, for example...
so
float amp = 1.2f;
short sample;
short newSample = (short)(amp * sample); // cast the product; (short)amp would truncate 1.2 to 1
If you want your fade-out to be linear, in the sample processing loop do
amp -= 0.03f;
and if you want it to be logarithmic, in the sample processing loop do
amp *= 0.97f;
until amp reaches some small value (amp < 0.1).
This may just be a perception problem. Your ears (and eyes; look up gamma w.r.t. video) don't perceive loudness in linear proportion to the input. A good model is that your ears perceive roughly a ln(n) increase for an n-fold increase in volume. Look up the difference between linear pots and audio pots.
Anyway, I don't know if that matters here, because your output amp may account for that; but if you want the result to be perceived as twice as loud, you may have to make it e^2 times as loud, which may mean you're in the realm of clipping now.

detecting pauses in a spoken word audio file using pymad, pcm, vad, etc

First I am going to broadly state what I'm trying to do and ask for advice. Then I will explain my current approach and ask for answers to my current problems.
Problem
I have an MP3 file of a person speaking. I'd like to split it up into segments roughly corresponding to a sentence or phrase. (I'd do it manually, but we are talking hours of data.)
If you have advice on how to do this programmatically, or know of some existing utilities, I'd love to hear it. (I'm aware of voice activity detection and I've looked into it a bit, but I didn't see any freely available utilities.)
Current Approach
I thought the simplest thing would be to scan the MP3 at certain intervals and identify places where the average volume was below some threshold. Then I would use some existing utility to cut up the mp3 at those locations.
I've been playing around with pymad and I believe that I've successfully extracted the PCM (pulse code modulation) data for each frame of the mp3. Now I am stuck because I can't really seem to wrap my head around how the PCM data translates to relative volume. I'm also aware of other complicating factors like multiple channels, big endian vs little, etc.
Advice on how to map a group of pcm samples to relative volume would be key.
Thanks!
PCM is a time-frame-based encoding of sound. For each time frame, you get a peak level. (If you want a physical reference for this: the peak level corresponds to the distance the microphone membrane was moved out of its resting position at that given time.)
Let's forget that PCM can use unsigned values for 8-bit samples, and focus on signed values. If the value is > 0, the membrane was on one side of its resting position; if it is < 0, it was on the other side. The bigger the dislocation from rest (no matter to which side), the louder the sound.
Most voice classification methods start with one very simple step: They compare the peak level to a threshold level. If the peak level is below the threshold, the sound is considered background noise.
Looking at the parameters in Audacity's Silence Finder, the silence level should be that threshold. The next parameter, Minimum silence duration, is obviously the length of a silence period that is required to mark a break (or in your case, the end of a sentence).
If you want to code a similar tool yourself, I recommend the following approach:
Divide your sound sample into discrete sets of a specific duration. I would start with 1/10, 1/20 or 1/100 of a second.
For each of these sets, compute the maximum peak level
Compare this maximum peak to a threshold (the silence level in Audacity). The threshold is something you have to determine yourself, based on the specifics of your sound sample (loudness, background noise, etc.). If the max peak is below your threshold, this set is silence.
Now analyse the series of classified sets: Calculate the length of silence in your recording. (length = number of silent sets * length of a set). If it is above your Minimum silence duration, assume that you have the end of a sentence here.
The main point in coding this yourself instead of continuing to use Audacity is that you can improve your classification by using advanced analysis methods. One very simple metric you can apply is called zero crossing rate, it just counts how often the sign switches in your given set of peak levels (i.e. your values cross the 0 line). There are many more, all of them more complex, but it may be worth the effort. Have a look at discrete cosine transformations for example...
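The four steps above can be sketched like this (hypothetical names and default threshold values; real recordings will need the threshold tuned as described):

```python
def find_pauses(samples, sample_rate, chunk_seconds=0.1,
                silence_threshold=0.05, min_silence_seconds=0.3):
    """Return (start_time, end_time) pairs for silent stretches long
    enough to count as pauses between sentences."""
    chunk = max(1, int(sample_rate * chunk_seconds))
    min_chunks = max(1, int(round(min_silence_seconds / chunk_seconds)))
    # steps 1-3: divide into sets, take each set's maximum peak level,
    # and compare that peak to the threshold to classify the set as silence
    silent = [max(abs(s) for s in samples[i:i + chunk]) < silence_threshold
              for i in range(0, len(samples), chunk)]
    # step 4: runs of silent sets longer than the minimum duration are pauses
    pauses, run_start = [], None
    for i, is_silent in enumerate(silent + [False]):   # sentinel closes a final run
        if is_silent and run_start is None:
            run_start = i
        elif not is_silent and run_start is not None:
            if i - run_start >= min_chunks:
                pauses.append((run_start * chunk_seconds, i * chunk_seconds))
            run_start = None
    return pauses
```

On a synthetic clip of tone, 0.4 s of silence, tone, this reports a single pause at roughly 0.5 to 0.9 seconds.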
Just wanted to update this. I'm having moderate success using Audacity's Silence Finder. However, I'm still interested in this problem. Thanks.
PCM encodes the waveform as a series of numbers, where each number is the instantaneous amplitude of the signal at one uniformly spaced point in time (what the previous answer calls the peak level); it is not a stream of up/down bits, which would be delta modulation.
To estimate amplitude, take the absolute value (or short-window RMS) of the samples; the spots where this envelope stays low are your candidate pauses.
You may also try to use a Fourier transform to estimate where the signals are most distinct.