I am looking into the Windows Magnification API and have been playing around with it, but I have a problem with the magnification defaults: Windows only allows you to increment in steps of 25% at the lowest. Is it possible for me to use smaller steps, perhaps 1-5% at a time? For example, increasing or decreasing by one percent with the mouse scroll wheel?
[Screenshot: Windows' lowest default zoom increment of 25%]
Thanks in advance for your assistance.
int xDlg = (int)((float)GetSystemMetrics(SM_CXSCREEN) * (1.0 - (1.0 / magnificationFactor)) / 2.0);
int yDlg = (int)((float)GetSystemMetrics(SM_CYSCREEN) * (1.0 - (1.0 / magnificationFactor)) / 2.0);

BOOL successSet = MagSetFullscreenTransform(magnificationFactor, xDlg, yDlg);
if (successSet)
{
    BOOL fInputTransformEnabled;
    RECT rcInputTransformSource;
    RECT rcInputTransformDest;

    if (MagGetInputTransform(&fInputTransformEnabled, &rcInputTransformSource, &rcInputTransformDest))
    {
        if (fInputTransformEnabled)
        {
            SetInputTransform(hwndDlg, fInputTransformEnabled);
        }
    }
}
successSet comes back FALSE when the factor isn't at least 1.1; anything lower fails, and I realised 1.1 = 125% zoom.
There is no such limit in the magnification API. The limitations you see on-screen were chosen by the UI developer.
Both MagSetFullscreenTransform and MagSetWindowTransform take float input arguments. There are no restrictions as far as the magnification factor resolution goes, as long as it is at least 1.0f and no larger than the upper bound.
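For example, a minimal sketch of stepping the fullscreen zoom by 1% per mouse-wheel notch might look like the following (this assumes MagInitialize has already been called; the handler name and the centering math are placeholders mirroring your code above, not part of the API):

#include <windows.h>
#include <magnification.h>   // link against Magnification.lib

static float g_magnificationFactor = 1.0f;

// Hypothetical wheel handler: step the fullscreen zoom by 1% per wheel notch.
void OnMouseWheel(short wheelDelta)
{
    g_magnificationFactor += 0.01f * (wheelDelta / WHEEL_DELTA);
    if (g_magnificationFactor < 1.0f)
        g_magnificationFactor = 1.0f;   // the API rejects factors below 1.0f

    // Keep the zoom centred on the screen, as in the code above
    int xOffset = (int)(GetSystemMetrics(SM_CXSCREEN) * (1.0f - 1.0f / g_magnificationFactor) / 2.0f);
    int yOffset = (int)(GetSystemMetrics(SM_CYSCREEN) * (1.0f - 1.0f / g_magnificationFactor) / 2.0f);

    MagSetFullscreenTransform(g_magnificationFactor, xOffset, yOffset);
}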
I have been using the following function to convert between pixel sizes and font sizes for my Qt application:
int FontFace::pointToPixelSize(float pointSize)
{
    // Points are a physical unit, where 1 point equals 1/72nd of an inch in digital typography
    // Resolution is in DPI
    float resolution = QGuiApplication::primaryScreen()->logicalDotsPerInch();
    int pixelSize = int(pointSize * resolution / 72.0);
    return pixelSize;
}

float FontFace::pixelToPointSize(uint32_t pixelSize)
{
    float resolution = QGuiApplication::primaryScreen()->logicalDotsPerInch();
    float pointSize = pixelSize * 72.0 / resolution;
    return pointSize;
}
I now need to replicate this functionality using GLFW instead of Qt. However, I haven't been able to find any useful information online regarding what logicalDotsPerInch actually is beyond that "This value can be used to convert font point sizes to pixel sizes." So, I assumed that this value was the physical DPI of the monitor, divided by the content scaling of the monitor.
So, this was my best shot at getting that:
int widthMm, heightMm;
float xScale, yScale;

// Get physical monitor size (in millimetres)
glfwGetMonitorPhysicalSize(glfwGetPrimaryMonitor(), &widthMm, &heightMm);

// Get current desktop resolution
const GLFWvidmode* videoMode = glfwGetVideoMode(glfwGetPrimaryMonitor());
int widthRes = videoMode->width;
int heightRes = videoMode->height;

// Get monitor content scale
glfwGetMonitorContentScale(glfwGetPrimaryMonitor(), &xScale, &yScale);

// Get physical DPI
float mmToInches = 0.0393701f;
float dpiX = float(widthRes) / (widthMm * mmToInches);
float dpiY = float(heightRes) / (heightMm * mmToInches);

// Get logical DPI
float logicalDpiX = dpiX / xScale;
float logicalDpiY = dpiY / yScale;
float logicalDpi = 0.5f * (logicalDpiX + logicalDpiY);
Unfortunately, this gives me a very different answer than the Qt result. Qt gives me a logical DPI of 168, which is actually higher than the physical DPI of ~140. This makes no sense to me. The logicalDpi that I calculated using GLFW is 79.96, so ~80. These are very different numbers, with no obvious relationship between them. I did find this thread inquiring about the Qt functions, but it wasn't super insightful.
I'll note that both Qt and GLFW give me equivalent physical DPI calculations of ~140 for my monitor, so the missing piece is what logicalDotsPerInch actually represents; it remains a black box. If this is truly the correct number to use, then I'm at a total loss, since I can't think of any sensible way to obtain this result.
Edit: Digging into this more, I think I've found how Qt gets their calculation. A typical DPI for a device is 96. Dividing this value by my logical DPI gives me a factor of 1.200445. Multiplying this factor by the physical DPI of the screen gives 168, which matches the logical DPI returned by Qt.
Working back on the math from this, the value of 168 is actually just equal to the typical DPI of a monitor (96.0), multiplied by the content scale of my monitor, which is 1.75.
So, I've reverse-engineered the solution, but I can't wrap my head around what it means physically. Why is Qt's solution correct? Shouldn't the logical DPI be the ACTUAL DPI of the monitor DIVIDED by the content scaling? This doesn't make sense to me.
The only reasonable answer is that I'm misunderstanding how font size calculations work. If font sizes are always treated with a 96 DPI baseline, and then the content scaling of the monitor is applied to offset any physical size differences, then this result actually makes some sense, and most of my work is superfluous. I still don't understand why you wouldn't need to use the actual device DPI though.
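If that reading is right, the GLFW-side equivalent of logicalDotsPerInch wouldn't involve the physical monitor size at all; it would just be the 96 DPI baseline multiplied by the content scale. A minimal sketch under that assumption (the function name and the 96.0f baseline are mine, not anything GLFW provides; glfwInit() is assumed to have been called):

#include <GLFW/glfw3.h>

// Sketch: reproduce Qt's logicalDotsPerInch(), assuming it is simply the
// 96 DPI baseline times the monitor's content scale.
float QtStyleLogicalDpi()
{
    GLFWmonitor* primaryMonitor = glfwGetPrimaryMonitor();

    float xScale = 1.0f, yScale = 1.0f;
    glfwGetMonitorContentScale(primaryMonitor, &xScale, &yScale);

    const float baselineDpi = 96.0f;                // "typical" DPI baseline
    return baselineDpi * 0.5f * (xScale + yScale);  // 96 * 1.75 = 168 on my monitor
}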
I am currently struggling with the implementation of my audio volume slider in my C++ app.
The app is able to control the Windows mixer volume level, and the slider has the range 0.0f to 1.0f.
The problem I am facing is that my dB values aren't equal to the dB values Windows is using.
Here are some values I set with my volume slider, with the resulting dB values and the ones Windows is using.
Below is the function I use for calculating the audio dB level. What am I doing wrong here?
Thank you in advance!
if (this->m_useAudioEndpointVolume)
{
    const float slider_min = 0.0f;
    const float slider_max = 1.0f;
    const float logBase = 10;

    m_ignoreAudioValue = TRUE;

    if (volume >= 1.0f) {
        volume = 1.0f;
    }
    if (volume <= 0.0f) {
        volume = 0.0f;
    }

    float pfLevelMinDB = 0;
    float pfLevelMaxDB = 0;
    float pfVolumeIncrementDB = 0;
    m_pEndpointVolume->GetVolumeRange(&pfLevelMinDB, &pfLevelMaxDB, &pfVolumeIncrementDB);

    // Logarithmic formula for audio volume
    // Volume = log(((Slider.Value-Slider.Min)*(B-1))/(Slider.Max-Slider.Min) + 1)/log(B) * (Volume.Max - Volume.Min) + Volume.Min
    float calcVolume = log(((volume - slider_min) * (logBase - 1)) / (slider_max - slider_min) + 1) / log(logBase) * (pfLevelMaxDB - pfLevelMinDB) + pfLevelMinDB;

    if (volume == 0.0f) {
        m_pEndpointVolume->SetMute(TRUE, NULL);
    }
    else
    {
        m_pEndpointVolume->SetMute(FALSE, NULL);
    }

    float currentValue = 0.0f;
    m_pEndpointVolume->GetMasterVolumeLevel(&currentValue);

    // Todo: The calculation has to be logarithmic
    m_pEndpointVolume->SetMasterVolumeLevel(calcVolume, NULL);
}
Assume the following:
volumeMaxDB = +5
volumeMinDB = -10
incrementDB = 5
To me this suggests a slider that would look something like the ascii art below. I've also shown my presumed mapping to your slider UI's scale.
dB Slider
| +5 <=> 1.0
| 0
- -5
| -10 <=> 0.0
First, calculate the total volume range in dB (e.g. -10 to +5 is 15 dB)
dBRange = volumeMaxDB - volumeMinDB;
Second, scale the current slider position dB value of 0 to dBRange. This gives the following mappings
* 0.0 -> 0
* 1.0 -> 15
* 0.5 -> 7.5
dB = dBRange * slider;
Third, shift the range up or down so that +15 becomes +5 and 0 becomes -10.
dB = dB - (dBRange - volumeMaxDB);
Finally, you may want to round to the nearest dB increment.
Extra credit: If you have control over your slider's range, you could make your life way simpler by just setting the min and max value the same as minDB and maxDB and be done with it.
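Putting those steps together, a minimal C++ sketch might look like the following (MapSliderToDb is just a hypothetical helper name; rounding to the increment is optional):

#include <cmath>

// Hypothetical helper: map a slider position in [0, 1] linearly onto the
// endpoint's dB range and snap it to the reported increment.
float MapSliderToDb(float slider, float volumeMinDB, float volumeMaxDB, float incrementDB)
{
    // First: total range in dB (e.g. -10 .. +5  ->  15 dB)
    float dBRange = volumeMaxDB - volumeMinDB;

    // Second: scale the slider position onto 0 .. dBRange
    float dB = dBRange * slider;

    // Third: shift so 0 maps to volumeMinDB and dBRange maps to volumeMaxDB
    dB = dB - (dBRange - volumeMaxDB);

    // Finally: round to the nearest increment, if one is reported
    if (incrementDB > 0.0f)
        dB = std::round(dB / incrementDB) * incrementDB;

    return dB;
}

// Usage with the question's endpoint (untested):
//   m_pEndpointVolume->GetVolumeRange(&minDB, &maxDB, &incDB);
//   m_pEndpointVolume->SetMasterVolumeLevel(MapSliderToDb(slider, minDB, maxDB, incDB), NULL);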
I found the solution.
The IAudioEndpointVolume interface has the function SetMasterVolumeLevelScalar. This function takes a value in the range 0.0 to 1.0 according to the MSDN documentation, so you don't need to implement a logarithmic mapping yourself.
Seems like I overlooked this one.
Here's the current code sample I am using in case someone will need it in the future.
float pLevel = 0.0f;
m_pEndpointVolume->GetMasterVolumeLevelScalar(&pLevel);
// We have to set this to TRUE first to avoid an unnecessary callback
m_ignoreAudioValue = TRUE;
// Set the scalar value
// https://msdn.microsoft.com/de-de/library/windows/desktop/dd368062(v=vs.85).aspx
m_pEndpointVolume->SetMasterVolumeLevelScalar(sliderValue, NULL);
// We have to set this to TRUE again to avoid an unnecessary callback,
// because SetMasterVolumeLevelScalar triggers the OnNotify event,
// which sets m_ignoreAudioValue back to FALSE.
m_ignoreAudioValue = TRUE;
// If the value is higher than 0.0, unmute the master volume.
m_pEndpointVolume->SetMute(sliderValue > 0.0f ? FALSE : TRUE, NULL);
m_pEndpointVolume->GetMasterVolumeLevelScalar(&pLevel);
Edit:
It seems like Windows is using a linear volume slider. That's the reason why 2% in Windows still feels too loud and why everything above 50% doesn't sound like much of an increase anymore.
Here's a really good article about why you should avoid that: Volume Controls
I am using FFTW to create a spectrum analyzer in C++.
After applying any window function to an input signal, the output amplitude suddenly seems to scale with frequency.
Rectangular Window
Exact-Blackman
Graphs are scaled logarithmically with a sampling frequency of 44100 Hz. All harmonics are generated at the same level, peaking at 0 dB as seen in the rectangular case. The Exact-Blackman window was amplified by 7.35 dB to attempt to make up for the window's processing gain.
Here is my code for generating the input table...
freq = 1378.125f;

for (int i = 0; i < FFT_LOGICAL_SIZE; i++)
{
    float term = 2 * PI * i / FFT_ORDER;

    for (int h = 1; freq * h < FREQ_NYQST; h += 1) // Harmonics up to Nyquist
    {
        fftInput[i] += sinf(freq * h * K_PI * i / K_SAMPLE_RATE); // Generate sine
        fftInput[i] *= (7938 / 18608.f) - ((9240 / 18608.f) * cosf(term)) + ((1430 / 18608.f) * cosf(term * 2)); // Exact-Blackman window
    }
}
fftwf_execute(fftwR2CPlan);
Increasing or decreasing the window size changes nothing. I tested with the Hamming window as well, same problem.
Here is my code for grabbing the output.
float val; // Used elsewhere
for (int i = 1; i < K_FFT_COMPLEX_BINS_NOLAST; i++) // Skips the DC and Nyquist bins
{
real = fftOutput[i][0];
complex = fftOutput[i][1];
// Grabs the values and scales based on the window size
val = sqrtf(real * real + complex * complex) / FFT_LOGICAL_SIZE_OVER_2;
val *= powf(20, 7.35f / 20); // Only applied during Exact-Blackman test
}
Curiously, I attempted the following to try to flatten out the response in the Exact-Blackman case. This scaling back down resulted in a nearly, but still not perfectly flat response. Neat, but still doesn't explain to me why this is happening.
float x = (float)(FFT_COMPLEX_BINS - i) / FFT_COMPLEX_BINS; // Linear from 0 to 1
x = log10f((x * 9) + 1.3591409f); // Now logarithmic from 0 to 1, offset by half of Euler's constant
val = sqrt(real * real + complex * complex) / (FFT_LOGICAL_SIZE_OVER_2 / x); // Division by x added to this line
Might be a bug. You seem to be applying your window function multiple times per sample. Any windowing should be removed from your input compositing loop and applied to the input vector just once, right before the FFT.
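For example, a sketch of the reordered loop (reusing the constants from your code; only the structure changes) composites all harmonics first and applies the window exactly once per sample:

for (int i = 0; i < FFT_LOGICAL_SIZE; i++)
{
    // 1. Composite all harmonics for this sample first
    fftInput[i] = 0.0f;
    for (int h = 1; freq * h < FREQ_NYQST; h += 1)
    {
        fftInput[i] += sinf(freq * h * K_PI * i / K_SAMPLE_RATE);
    }

    // 2. Then apply the Exact-Blackman window exactly once per sample
    float term = 2 * PI * i / FFT_ORDER;
    fftInput[i] *= (7938 / 18608.f) - ((9240 / 18608.f) * cosf(term)) + ((1430 / 18608.f) * cosf(term * 2));
}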
I was not able to reproduce the code because I do not have the library on hand. However, this may be a consequence of spectral leakage. https://en.wikipedia.org/wiki/Spectral_leakage
This is an inevitability of window functions as well as of sampling. If you look at the trade-offs section of that article, the type of window can be suited to a wide range of frequencies or focused on a particular one. Since the frequency of your signal is increasing, perhaps the lower-frequency content outside your target is more subject to spectral leakage.
I'm implementing blur effect on windows phone using native C++ with DirectX, but it looks like even the simplest blur with small kernel causes visible FPS drop.
float4 main(PixelShaderInput input) : SV_TARGET
{
    float4 source = screen.Sample(LinearSampler, input.texcoord);
    float4 sum = float4(0, 0, 0, 0);
    float2 sizeFactor = float2(0.00117, 0.00208);

    for (int x = -2; x <= 2; x++)
    {
        float2 offset = float2(x, 0) * sizeFactor;
        sum += screen.Sample(LinearSampler, input.texcoord + offset);
    }

    return ((sum / 5) + source);
}
I'm currently using this pixel shader for a 1D blur, and it's visibly slower than without the blur. Is WP8 phone hardware really that slow, or am I making some mistake? If so, could you point me to where the error might be?
Thank you.
Phones often don't have the best fill-rate, and blur is one of the worst things you can do if you're fill-rate bound. Using some numbers from gfxbench.com's Fill test, a typical phone fill rate is around 600MTex/s. With some rough math:
(600M texels/s) / (1280*720 texels/op) / (60 frames/s) ~= 11 ops/frame
So in your loop, if your surface is the entire screen, and you're doing 5 reads and 1 write, that's 6 of your 11 ops used, just for the blur. So I would say a framerate drop is expected. One way around this is to dynamically lower your resolution, and do a single linear upscale - you'll get a different kind of natural blur from the linear interpolation, which might be passable depending on the visual effect you're going for.
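The same back-of-the-envelope budget written out, using the assumed numbers above rather than measured ones:

// Back-of-the-envelope fill-rate budget with the assumed numbers above.
const double fillRateTexelsPerSec = 600e6;    // ~600 MTex/s, typical phone GPU
const double screenTexels = 1280.0 * 720.0;   // one full-screen read or write
const double targetFps = 60.0;

double fullScreenOpsPerFrame = fillRateTexelsPerSec / screenTexels / targetFps; // ~10.9

// The 1D blur above costs 5 samples + 1 write = 6 of those ~11 ops every frame.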
Alright - seems my question was as cloudy as my head. Let's try again.
I have 3 properties while configuring viewports for a D3D device:
- The resolution the device is running in (full-screen).
- The physical aspect ratio of the monitor (as a fraction and as float:1, so for example 4:3 & 1.33).
- The aspect ratio of the source resolution (source resolution itself is kind of moot and tells us little more than the aspect ratio the rendering wants and the kind of resolution that would be ideal to run in).
Then we run into this:
// -- figure out aspect ratio adjusted VPs --

m_nativeVP.Width = xRes;
m_nativeVP.Height = yRes;
m_nativeVP.X = 0;
m_nativeVP.Y = 0;
m_nativeVP.MaxZ = 1.f;
m_nativeVP.MinZ = 0.f;

FIX_ME // this does not cover all bases -- fix!
uint xResAdj, yResAdj;

if (g_displayAspectRatio.Get() < g_renderAspectRatio.Get())
{
    xResAdj = xRes;
    yResAdj = (uint) ((float) xRes / g_renderAspectRatio.Get());
}
else if (g_displayAspectRatio.Get() > g_renderAspectRatio.Get())
{
    xResAdj = (uint) ((float) yRes * g_renderAspectRatio.Get());
    yResAdj = yRes;
}
else // ==
{
    xResAdj = xRes;
    yResAdj = yRes;
}

m_fullVP.Width = xResAdj;
m_fullVP.Height = yResAdj;
m_fullVP.X = (xRes - xResAdj) >> 1;
m_fullVP.Y = (yRes - yResAdj) >> 1;
m_fullVP.MaxZ = 1.f;
m_fullVP.MinZ = 0.f;
Now as long as g_displayAspectRatio equals the ratio xRes/yRes (adapted from the device resolution), all is well and this code does what's expected of it. But as soon as those two values are no longer related (for example, someone runs a 4:3 resolution on a 16:10 screen, hardware-stretched), another step is required to compensate, and I'm having trouble figuring out exactly how.
(and p.s I use C-style casts on atomic types, live with it :-) )
I'm assuming what you want to achieve is a "square" projection, e.g. when you draw a circle you want it to look like a circle rather than an ellipse.
The only thing you should play with is your projection (camera) aspect ratio. In normal cases, monitors keep pixels square and all you have to do is set your camera aspect ratio equal to your viewport's aspect ratio:
viewport_aspect_ratio = viewport_res_x / viewport_res_y;
camera_aspect_ratio = viewport_aspect_ratio;
In the stretched case you describe (4:3 image stretched on a 16:10 screen for example), pixels are not square anymore and you have to take that into account in your camera aspect ratio:
stretch_factor_x = screen_size_x / viewport_res_x;
stretch_factor_y = screen_size_y / viewport_res_y;
pixel_aspect_ratio = stretch_factor_x / stretch_factor_y;
viewport_aspect_ratio = viewport_res_x / viewport_res_y;
camera_aspect_ratio = viewport_aspect_ratio * pixel_aspect_ratio;
Where screen_size_x and screen_size_y are proportional to the real physical size of the monitor (e.g. 16:10).
However, you should simply assume square pixels (unless you have a specific reason not to), as the monitor may report incorrect physical size information to the system, or no information at all. Also, monitors don't always stretch; mine, for example, keeps a 1:1 pixel aspect ratio and adds black borders for lower resolutions.
Edit
If you want to adjust your viewport to some aspect ratio and fit it on an arbitrary resolution, you could do it like this:
viewport_aspect_ratio = 16.0 / 10.0; // The aspect ratio you want your viewport to have
screen_aspect_ratio = screen_res_x / screen_res_y;

if (viewport_aspect_ratio > screen_aspect_ratio) {
    // Viewport is wider than screen, fit on X
    viewport_res_x = screen_res_x;
    viewport_res_y = viewport_res_x / viewport_aspect_ratio;
} else {
    // Screen is wider than viewport, fit on Y
    viewport_res_y = screen_res_y;
    viewport_res_x = viewport_res_y * viewport_aspect_ratio;
}

camera_aspect_ratio = viewport_res_x / viewport_res_y;
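To tie the stretched case back to the viewport code in the question, here is a minimal C++ sketch (the function and parameter names are mine, not from the question's snippet) of the projection aspect ratio with non-square pixels taken into account:

// Sketch: projection aspect ratio when the image is hardware-stretched to a
// monitor with a different physical aspect ratio.
float CameraAspectRatio(float xRes, float yRes, float monitorAspect /* e.g. 16.0f / 10.0f */)
{
    float viewportAspect = xRes / yRes;                     // e.g. 4:3 when running 1024x768
    float pixelAspect    = monitorAspect / viewportAspect;  // > 1 when pixels get stretched wider
    return viewportAspect * pixelAspect;                    // collapses to monitorAspect for a fullscreen viewport
}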