I am currently struggling with the implementation of the audio volume slider in my C++ app.
The app controls the Windows mixer volume level, and the slider has the range 0.0f to 1.0f.
The problem I am facing is that my dB values don't match the dB values Windows is using.
Here are some values I set with my volume slider, with the resulting dB values and the ones Windows uses.
Below is the function I use for calculating the audio dB level. What am I doing wrong here?
Thank you in advance!
if (this->m_useAudioEndpointVolume)
{
    const float slider_min = 0.0f;
    const float slider_max = 1.0f;
    const float logBase = 10.0f;

    m_ignoreAudioValue = TRUE;

    // Clamp the slider value to [0.0, 1.0].
    if (volume >= 1.0f) {
        volume = 1.0f;
    }
    if (volume <= 0.0f) {
        volume = 0.0f;
    }

    float pfLevelMinDB = 0.0f;
    float pfLevelMaxDB = 0.0f;
    float pfVolumeIncrementDB = 0.0f;
    m_pEndpointVolume->GetVolumeRange(&pfLevelMinDB, &pfLevelMaxDB, &pfVolumeIncrementDB);

    // Logarithmic formula for audio volume:
    // Volume = log(((Slider.Value - Slider.Min) * (B - 1)) / (Slider.Max - Slider.Min) + 1) / log(B)
    //          * (Volume.Max - Volume.Min) + Volume.Min
    float calcVolume = log(((volume - slider_min) * (logBase - 1)) / (slider_max - slider_min) + 1) / log(logBase)
                       * (pfLevelMaxDB - pfLevelMinDB) + pfLevelMinDB;

    if (volume == 0.0f) {
        m_pEndpointVolume->SetMute(TRUE, NULL);
    }
    else {
        m_pEndpointVolume->SetMute(FALSE, NULL);
    }

    float currentValue = 0.0f;
    m_pEndpointVolume->GetMasterVolumeLevel(&currentValue);

    // TODO: The calculation has to be logarithmic.
    m_pEndpointVolume->SetMasterVolumeLevel(calcVolume, NULL);
}
Assume the following:
volumeMaxDB = +5
volumeMinDB = -10
incrementDB = 5
To me this suggests a slider that would look something like the ASCII art below. I've also shown my presumed mapping to your slider UI's scale.
 dB        Slider
 |  +5  <=>  1.0
 |   0
 -  -5
 | -10  <=>  0.0
First, calculate the total volume range in dB (e.g. -10 to +5 is a 15 dB range). Subtracting the bounds works for any range, whereas adding absolute values only works when the bounds straddle zero:
dBRange = volumeMaxDB - volumeMinDB;
Second, scale the current slider position dB value of 0 to dBRange. This gives the following mappings
* 0.0 -> 0
* 1.0 -> 15
* 0.5 -> 7.5
dB = dBRange * slider;
Third, shift the range up or down so that +15 becomes +5 and 0 becomes -10.
dB = dB - (dBRange - volumeMaxDB);
Finally, you may want to round to the nearest dB increment.
Extra credit: If you have control over your slider's range, you could make your life way simpler by just setting the min and max value the same as minDB and maxDB and be done with it.
I found the solution.
IAudioEndpointVolume has the function SetMasterVolumeLevelScalar. According to the MSDN documentation, this function uses the range 0.0 to 1.0, so you don't need to implement a logarithmic mapping yourself.
It seems I overlooked this one.
Here's the current code sample I am using in case someone will need it in the future.
float pLevel = 0.0f;
m_pEndpointVolume->GetMasterVolumeLevelScalar(&pLevel);

// We have to set this to TRUE first to avoid an unnecessary callback.
m_ignoreAudioValue = TRUE;

// Set the scalar value.
// https://msdn.microsoft.com/de-de/library/windows/desktop/dd368062(v=vs.85).aspx
m_pEndpointVolume->SetMasterVolumeLevelScalar(sliderValue, NULL);

// We have to set this to TRUE again to avoid an unnecessary callback,
// because SetMasterVolumeLevelScalar triggers the OnNotify event,
// which resets m_ignoreAudioValue to FALSE.
m_ignoreAudioValue = TRUE;

// If the value is higher than 0.0, unmute the master volume.
m_pEndpointVolume->SetMute(sliderValue > 0.0f ? FALSE : TRUE, NULL);
m_pEndpointVolume->GetMasterVolumeLevelScalar(&pLevel);
Edit:
It seems like Windows is using a linear volume slider. That's the reason why 2% in Windows still feels too loud and everything above 50% doesn't sound like much of an increase anymore.
Here's a really good article about why you should avoid that:
Volume Controls
The goal I want to achieve is to rotate the drone slowly around the z axis and have it stop rotating when it detects an object. The first node publishes the string "Searching" and the second node subscribes to it, so every time it receives "Searching" the drone must rotate.
Ubuntu 18.04
ROS melodic
PX4 firmware
Python 3.6
I have read part of this code in a paper written in C++, but I am not so good at it; I use Python. I would like to ask you for some hints. I want to implement the code below in Python.
ros::Duration d(0.5);
geometry_msgs::PoseStamped cmd;
cmd.pose.position.x = 0.0;
cmd.pose.position.y = 0.0;
cmd.pose.position.z = 2.0;
cmd.pose.orientation.x = 0.0;
cmd.pose.orientation.y = 0.0;
cmd.pose.orientation.z = 0.0;
cmd.pose.orientation.w = 1.0;
Eigen::Affine3d t;
ROS_INFO("Searching target...");
while (ros::ok())
{
    if (c == 'q' || rc < 0)
        break;

    tf::poseMsgToEigen(cmd.pose, t);
    t.rotate(Eigen::AngleAxisd(M_PI / 10.0, Eigen::Vector3d::UnitZ()));
    // AngleAxisf(angle1, Vector3f::UnitZ())
    tf::poseEigenToMsg(t, cmd.pose);
    nav->SetPoint(cmd);

    ros::spinOnce();
    d.sleep();
}
https://osf.io/jqmk2/
I am not so familiar with C++, but what I understand from the code is that it takes cmd (a PoseStamped) as input, applies a rotation to that pose (a rotation matrix), and publishes the result of that rotation as the new pose. Is that correct?
I am sending position targets whose orientation is built from a yaw value going from 0.0 to 2π in increments of 0.1 to rotate the drone, but it rotates too fast and doesn't keep its x, y position. This is the code:
if data.data == "Searching" and self.yawVal < two_pi:
    rVal, pVal = 0, 0
    pose.position.x = 0
    pose.position.y = 0
    pose.position.z = 1.6
    quat = quaternion_from_euler(rVal, pVal, self.yawVal)
    pose.orientation.x = quat[0]
    pose.orientation.y = quat[1]
    pose.orientation.z = quat[2]
    pose.orientation.w = quat[3]
    pub.publish(pose)
    self.yawVal += 0.1
else:
    self.yawVal = 0
I realize this is not a good approach, because the drone rotates too fast, and if I add a sleep the drone still cannot recognize an object because it is turning too quickly.
Is it possible to translate the C++ code to Python?
I am using FFTW to create a spectrum analyzer in C++.
After applying any window function to an input signal, the output amplitude suddenly seems to scale with frequency.
Rectangular window
Exact-Blackman window
Graphs are scaled logarithmically with a sampling frequency of 44100 Hz. All harmonics are generated at the same level, peaking at 0 dB, as seen in the rectangular case. The Exact-Blackman output was amplified by 7.35 dB to attempt to make up for the processing gain.
Here is my code for generating the input table...
freq = 1378.125f;
for (int i = 0; i < FFT_LOGICAL_SIZE; i++)
{
float term = 2 * PI * i / FFT_ORDER;
for (int h = 1; freq * h < FREQ_NYQST; h+=1) // Harmonics up to Nyquist
{
fftInput[i] += sinf(freq * h * K_PI * i / K_SAMPLE_RATE); // Generate sine
fftInput[i] *= (7938 / 18608.f) - ((9240 / 18608.f) * cosf(term)) + ((1430 / 18608.f) * cosf(term * 2)); // Exact-Blackman window
}
}
fftwf_execute(fftwR2CPlan);
Increasing or decreasing the window size changes nothing. I tested with the Hamming window as well, same problem.
Here is my code for grabbing the output.
float val; // Used elsewhere
for (int i = 1; i < K_FFT_COMPLEX_BINS_NOLAST; i++) // Skips the DC and Nyquist bins
{
real = fftOutput[i][0];
complex = fftOutput[i][1];
// Grabs the values and scales based on the window size
val = sqrtf(real * real + complex * complex) / FFT_LOGICAL_SIZE_OVER_2;
val *= powf(20, 7.35f / 20); // Only applied during Exact-Blackman test
}
Curiously, I attempted the following to try to flatten out the response in the Exact-Blackman case. Scaling back down this way resulted in a nearly, but still not perfectly, flat response. Neat, but it still doesn't explain why this is happening.
float x = (float)(FFT_COMPLEX_BINS - i) / FFT_COMPLEX_BINS; // Linear from 1 down to 0
x = log10f((x * 9) + 1.3591409f); // Roughly logarithmic, offset by half of e
val = sqrtf(real * real + complex * complex) / (FFT_LOGICAL_SIZE_OVER_2 / x); // Division by x added to this line
Might be a bug: you seem to be applying your window function multiple times per sample. Any windowing should be removed from your input-compositing loop and applied to the input vector just once, right before the FFT.
I was not able to reproduce the code because I do not have the library on hand. However, this may be a consequence of spectral leakage: https://en.wikipedia.org/wiki/Spectral_leakage
This is an inevitability of window functions as well as of sampling. If you look at the tradeoffs section of that article, a window can either be adaptive to a wide range of frequencies or focused on a particular one. Since the frequency of your signal is increasing, perhaps the lower-frequency signal outside your target is more subject to spectral leakage.
I am looking into the Windows Magnification API and have been playing around with it, but I have a problem with the magnification defaults: Windows only allows increments of 25% at the lowest. Is it possible for me to step the magnification by perhaps 1-5% at a time, for example increasing by one percent with the mouse scroll wheel?
Thanks in advance for your assistance.
int xDlg = (int)((float)GetSystemMetrics(SM_CXSCREEN) * (1.0 - (1.0 / magnificationFactor)) / 2.0);
int yDlg = (int)((float)GetSystemMetrics(SM_CYSCREEN) * (1.0 - (1.0 / magnificationFactor)) / 2.0);
BOOL successSet = MagSetFullscreenTransform(magnificationFactor, xDlg, yDlg);
if (successSet)
{
    BOOL fInputTransformEnabled;
    RECT rcInputTransformSource;
    RECT rcInputTransformDest;

    if (MagGetInputTransform(&fInputTransformEnabled, &rcInputTransformSource, &rcInputTransformDest))
    {
        if (fInputTransformEnabled)
        {
            SetInputTransform(hwndDlg, fInputTransformEnabled);
        }
    }
}
successSet comes back false when the factor is anything lower than 1.1; everything below that fails, and I realised 1.1 corresponds to 125% zoom.
There is no such limit in the magnification API. The limitations you see on-screen were chosen by the UI developer.
Both MagSetFullscreenTransform and MagSetWindowTransform take float input arguments. There are no restrictions as far as the magnification factor resolution goes, as long as it is at least 1.0f and no larger than the upper bound.
I have a flow layout. Inside it I have about 900 tables, stacked one on top of the other. I have a slider which resizes them and thus causes the flow layout to resize too.
The problem is that the tables should resize linearly. Their base size is 200x200, so when scale = 1.0 the width and height of the tables is 200.
Here is an example of the problem:
My issue is that the delta is sometimes 8 instead of 9. What could I do to make sure my increases are always linear?
void LobbyTableManager::changeTableScale( double scale )
{
    setTableScale(scale);
}

void LobbyTableManager::setTableScale( double scale )
{
    scale += 0.3;
    scale *= 2.0;

    float scrollRel = m_vScroll->getRelativeValue();
    setScale(scale);
    rescaleTables();
    resizeFlow();
    ...
}
double LobbyTableManager::getTableScale() const
{
    return (getInnerWidth() / 700.0) * getScale();
}

void LobbyFilterManager::valueChanged( agui::Slider* source, int val )
{
    if (source == m_magnifySlider)
    {
        DISPATCH_LOBBY_EVENT
        {
            (*it)->changeTableScale((double)val / source->getRange());
        }
    }
}
In short, I would like to ensure that the tables always grow by the same amount. I can't understand why the delta is 8 every few steps rather than 9.
Thanks
Look at your "200 X Table Scale" values: they go up by about 8.8 each step. So when the value is rounded to an integer, it will be 9 more than the previous value about 80% of the time and 8 more the other 20% of the time.
If you really need the increases to be the same size every time, you have to do everything with integers, or adjust your scale changes so each step lands on a whole number of pixels (e.g. exactly 9.0).
I'm setting up a progress bar as follows:
void CProgressBar::add(int amount)
{
    mProgress += amount;
}

float CProgressBar::get()
{
    float pr = (float)mProgress * 100.0f / (float)mMax;
    return pr;
}
And now here is the problem: I'm trying to render a small surface, but it doesn't fill properly because I can't figure out how to scale the value correctly:
/*
Progress bar box has size of 128x16
|-----------|
|-----------|
*/
float progress = progressBar->get();
float scale = 4.0f; // Hardcoded here, although I have to make this generic
progress *= scale;
graphics->color(prgColor);
graphics->renderQd(CRect(x, y, progress, height));
So I'm kindly asking for some help on the matter.
You have to linearly interpolate between the width of the rectangle at 0% progress and the width at 100% progress, e.g.:
float width_0 = 0.f; // or any other number of pixels
float width_100 = 250.f; // or any other number of pixels
The interpolation works as follows:
float interpolated_width = (width_100 - width_0) * progress + width_0;
Important: progress has to be in the range of 0 to 1! So you might want to change the CProgressBar::get() function or divide by 100 first.
Now you can just render the rectangle with the new width:
graphics->renderQd(CRect(x,y,interpolated_width,height));
The width of your progress bar is 128 and progressBar->get() returns something between 0 and 100; therefore, without knowing your library details, it appears your scale should be 1.28.
I assume mMax is the value at complete progress.
As a little tidy-up, I would make get() const and avoid C-style casts.