(Note: This problem occurs for me only when the compiler switch /arch:AVX is set. More at the bottom)
My gtest unit tests have included this assertion for 7 years:
ASSERT_EQ(-3.0, std::round(-2.5f)); // (Note the 'f' suffix)
According to cppreference, std::round is supposed to round AWAY from zero, right? Yet with the current release, this test just started failing. Am I missing something? All I did was update my Visual Studio 2022 to 17.4.3. My co-worker with 17.3.3 does not have this problem.
EDIT: I don't know if the problem is gtest and its macros, or assumptions my unit test makes about equality. I put the following two lines of code into my test:
std::cerr << "std::round(-2.5) = " << std::round(-2.5) << std::endl;
std::cerr << "std::round(-2.5f) = " << std::round(-2.5f) << std::endl;
They produce the following output. The second one is wrong, is it not?
std::round(-2.5) = -3
std::round(-2.5f) = -2
EDIT #2: As I note above, the problem only occurs when I set the compiler flag /arch:AVX. If I just create a console app and do not set the flag, or if I explicitly set it to /arch:IA32, the problem goes away. But the question then becomes: is this a bug, or am I just not supposed to use that option?
This is a known bug; see the bug report on Developer Community, which is already in the "pending release" state.
For completeness, and so this answer stands on its own, the minimal example from there is (godbolt):
#include <cmath>
#include <iostream>

int main()
{
    std::cout << "MSVC version: " << _MSC_FULL_VER << '\n';
    std::cout << "Round 0.5f: " << std::round(0.5f) << '\n';
    std::cout << "Round 0.5: " << std::round(0.5) << '\n';
}
compiled with /arch:AVX or /arch:AVX2.
The correct output e.g. with MSVC 19.33 is
MSVC version: 193331631
Round 0.5f: 1
Round 0.5: 1
while the latest MSVC 19.34 outputs
MSVC version: 193431931
Round 0.5f: 0
Round 0.5: 1
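Until a fixed toolset ships, one possible workaround, sketched here purely on the observation above that the double overload is unaffected, is to widen the argument to double before rounding. round_away below is a hypothetical helper name, not part of any library:

#include <cmath>

// The float overload miscompiles under /arch:AVX in 19.34, but the double
// overload still rounds halfway cases away from zero, so route through double.
inline float round_away(float x)
{
    return static_cast<float>(std::round(static_cast<double>(x)));
}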
Related
I've discovered an issue impacting several unit tests at my work: the values returned from std::cos and std::sin differ for identical inputs depending on whether the unit test is run in isolation or under valgrind.
This issue only seems to happen for some specific inputs, because many unit tests that run through the same code pass.
Here's a minimal reproducible example (slightly worsened so that my compiler wouldn't optimize away any of the logic):
#include <complex>
#include <iomanip>
#include <iostream>

int main()
{
    std::complex<long double> input(0,0), output(0,0);
    input = std::complex<long double>(39.21460183660255L, -40);
    std::cout << "input: " << std::setprecision(20) << input << std::endl;

    output = std::cos(input);
    std::cout << "output: " << std::setprecision(20) << output << std::endl;

    if (std::abs(output) < 5.0)
    {
        std::cout << "TEST FAIL" << std::endl;
        return 1;
    }

    std::cout << "TEST PASS" << std::endl;
    return 0;
}
Output when run normally:
input: (39.21460183660254728,-40)
output: (6505830161375283.1118,117512680740825220.91)
TEST PASS
Output when run under valgrind:
input: (39.21460183660254728,-40)
output: (0.18053126362312540976,3.2608771240037195405)
TEST FAIL
Notes:
OS: Red Hat Enterprise Linux 7
Compiler: Intel oneAPI 2022 next-generation DPC++/C++ Compiler
Valgrind: 3.20 (built with the same compiler); the issue also occurred with the official distribution of 3.17
The issue did not manifest when the unit tests were built with GCC 7 (we cannot go back to that compiler) or GCC 11 (another, larger bug with Boost prevents us from using it with valgrind)
-O0/-O1/-O2/-O3 make no difference to this issue
The only compiler flag I have set is "-fp-speculation=safe", which, if unset, causes numerical precision issues in other unit tests
Are there any better ways I can figure out what's going on to resolve this situation, or should I submit a bug report to valgrind? I hope this issue is benign, but I want to be able to trust my valgrind output.
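One way I could try to narrow this down (just a sketch of a cross-check, using the identity cos(x+iy) = cos(x)*cosh(y) - i*sin(x)*sinh(y)) is to compute the complex cosine by hand and compare it with std::cos, both natively and under valgrind. If the manual version stays consistent while std::cos does not, the difference is in the library's complex implementation; if the real cos/cosh themselves change, it points at the emulated floating-point precision.

#include <cmath>
#include <complex>
#include <iomanip>
#include <iostream>

int main()
{
    const long double x = 39.21460183660255L;  // real part from the failing test
    const long double y = -40.0L;              // imaginary part

    std::complex<long double> viaLibrary = std::cos(std::complex<long double>(x, y));
    std::complex<long double> manual(std::cos(x) * std::cosh(y),
                                     -std::sin(x) * std::sinh(y));

    std::cout << std::setprecision(20)
              << "library: " << viaLibrary << '\n'
              << "manual:  " << manual << '\n';
    return 0;
}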
When I compile my application in Release mode, I get an incorrect division result: 40.0 / 5 = 7.
In a Debug build it is correct, and the result is 8.
I tried casting to double, from double, to int, removing abs(), etc., but no luck. I know this must be related to the weirdness of floating-point math on computers, but I have no idea what exactly. I also logged the values on the console, via the qDebug() calls below the code; everything looks okay, except the initial steps value.
//somewhere in code
double tonnageToRecover = 0.5;//actually, it's QDoubleSpinBox->value(), with a 0.5 step set. Anyway, the value always reduces to 0.5
double tonnagePerArmorPoint = 0.0125;//taken from .json
int minimumArmorDelta = 5;//taken from .json
...
//place where the calculations are performed
double armorPointsPerHalfTon = tonnageToRecover / tonnagePerArmorPoint;
int steps = abs(static_cast<int>(armorPointsPerHalfTon / minimumArmorDelta));
qDebug() << "armorPointsPerHalfTon = " << armorPointsPerHalfTon;
qDebug() << "tonnagePerArmorPoint = " << tonnagePerArmorPoint;
qDebug() << "steps initial = " << steps;
qDebug() << "minimumArmorDelta = " << minimumArmorDelta;
Both operands of the first division are of type double: tonnageToRecover = 0.5 and tonnagePerArmorPoint = 0.0125, and the result is 40, which is OK.
minimumArmorDelta is an int = 5.
So why isn't 40 / 5 equal to 8?
Compiler: MinGW 32-bit 5.3.0, from the Qt 5.11 package.
Screenshots (not reproduced here): Release, Debug.
@Julian
I suspect that too, but how can I overcome this obstacle? I will try changing steps to double, then casting to int again.
RESULT: still does not work :/
I found a solution, but I have no idea exactly why it works now. The current code is:
double armorPointsPerHalfTon = tonnageToRecover / tonnagePerArmorPoint;
// int aPHT = (int)armorPointsPerHalfTon;
// double minDelta = 5.0;//static_cast<double>(minimumArmorDelta);
QString s(QString::number(abs(armorPointsPerHalfTon / minimumArmorDelta)));
int steps = abs(armorPointsPerHalfTon / minimumArmorDelta);
#define myqDebug() qDebug() << fixed << qSetRealNumberPrecision(10)
myqDebug() << "tonnageToRecover = " << tonnageToRecover;
myqDebug() << "tonnagePerArmorPoint = " << tonnagePerArmorPoint;
myqDebug() << "armorPointsPerHalfTon = " << armorPointsPerHalfTon;
//myqDebug() << "aPHT = " << aPHT;//this was 39 in Release, 40 in Debug
myqDebug() << "steps initial = " << steps;
myqDebug() << "string version = " << s;
myqDebug() << "minimumArmorDelta = " << minimumArmorDelta;// << ", minDelta = " << minDelta;
#undef myqDebug
I suppose that the creation of that QString s flushes something, and that's why the calculation of steps is correct now. The string still has the incorrect value "7", though.
Your basic problem is that you are truncating.
Suppose real number arithmetic would give an answer of exactly 8. Floating point arithmetic will give an answer that is very close to 8, but can differ from it in either direction due to rounding error. If the floating point answer is slightly greater than 8, truncating will change it to 8. If it is even slightly less than 8, truncating will change it to 7.
I suggest writing a new question on how to avoid the truncation, with discussion of why you are doing it.
I guess the reason is that armorPointsPerHalfTon / minimumArmorDelta could be not 8 but actually 7.99999999 in the Release version. This value then becomes 7 through the int cast.
So, if the Debug version calculates armorPointsPerHalfTon / minimumArmorDelta = 8.0000001, the result is static_cast<int>(armorPointsPerHalfTon / minimumArmorDelta) = 8.
It's not surprising that Debug / Release yield different results (on the order of machine precision), as several optimizations occur in the Release version.
EDIT: If it suits your requirements, you could just use std::round to round your double to the nearest integer, rather than truncating the decimals.
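A minimal sketch of that suggestion; the literal 7.999999999999999 simply stands in for the Release-build value of armorPointsPerHalfTon / minimumArmorDelta described above:

#include <cmath>
#include <iostream>

int main()
{
    double ratio = 7.999999999999999;                     // "almost 8" due to rounding error

    int truncated = static_cast<int>(ratio);              // 7: the fractional part is dropped
    int rounded   = static_cast<int>(std::round(ratio));  // 8: nearest integer

    std::cout << "truncated = " << truncated
              << ", rounded = " << rounded << std::endl;
    return 0;
}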
I'm wondering what the best way to detect a high-DPI display is. Currently I'm trying to use SDL_GetDisplayDPI(int, float*, float*, float*); however, this has only returned errors on the two different computers I tested with (a MacBook Pro running OS X 10.11.5 and an iMac running macOS 10.12 Beta (16A238m)). For reference, my code is below.
float diagDPI = -1;
float horiDPI = -1;
float vertDPI = -1;
int dpiReturn = SDL_GetDisplayDPI (0, &diagDPI, &horiDPI, &vertDPI);
std::cout << "GetDisplayDPI() returned " << dpiReturn << std::endl;
if (dpiReturn != 0)
{
std::cout << "Error: " << SDL_GetError () << std::endl;
}
std::cout << "DDPI: " << diagDPI << std::endl << "HDPI: " << horiDPI << std::endl << "VDPI: " << vertDPI << std::endl;
Unfortunately, this is only giving me something like this:
/* Output */
GetDisplayDPI() returned -1
Error:
DDPI: -1
HDPI: -1
VDPI: -1
Not Retina
I also tried comparing the OpenGL drawable size with the SDL window size, but SDL_GetWindowSize(SDL_Window*, int*, int*) is returning 0s, too. That code is below, followed by the output.
int gl_w;
int gl_h;
SDL_GL_GetDrawableSize (window, &gl_w, &gl_h);
std::cout << "GL_W: " << gl_w << std::endl << "GL_H: " << gl_h << std::endl;
int sdl_w;
int sdl_h;
SDL_GetWindowSize (window, &sdl_w, &sdl_h);
std::cout << "SDL_W: " << sdl_w << std::endl << "SDL_H: " << sdl_h << std::endl;
/* Output */
GL_W: 1280
GL_H: 720
SDL_W: 0
SDL_H: 0
It's entirely possible that I'm doing something wrong here, or making these calls in the wrong place, but I think it's more likely that I'm on the wrong track entirely. There's a hint to disallow high-DPI canvases, so there's probably a simple bool somewhere, or something that I'm missing. I have certainly looked through the wiki and checked Google, but I can't really find any help for this. Any suggestions or feedback are welcome!
Thank you for your time!
I know I'm not answering your question directly, and want to reiterate one thing you tried.
On a Macbook pro, when an SDL window is on an external display, SDL_GetWindowSize and SDL_GL_GetDrawableSize return the same values. If the window is on a Retina screen, they're different. Specifically, the drawable size is 2x larger in each dimension.
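For illustration, a minimal sketch of that comparison; it assumes an SDL_Window* named window that was created with the SDL_WINDOW_ALLOW_HIGHDPI flag:

int winW = 0, winH = 0;
int drawW = 0, drawH = 0;
SDL_GetWindowSize(window, &winW, &winH);
SDL_GL_GetDrawableSize(window, &drawW, &drawH);

// On a Retina display the drawable is larger (typically 2x in each dimension).
bool isHighDpi = (drawW > winW) || (drawH > winH);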
I was using a .framework installation of SDL when I encountered this issue. For an unrelated reason, I trashed the .framework SDL files (image and ttf as well) and built SDL from source (thus transitioning to a "unix-style" SDL installation). To my surprise, SDL_GetDisplayDPI() now returns 0, setting DDPI, HDPI, and VDPI to 109 on a non-Retina iMac and 113.5 on a Retina MacBook Pro. I'm not certain that these values are correct/accurate, but they are consistent between launches, so I'll work with them.
At this point, I'm not sure whether it was a bug that has since been fixed in the repository, or an issue with my .framework files. On a somewhat unrelated note, SDL_GetBasePath() and SDL_GetPrefPath(), which also weren't working, now return the expected values. If you're also experiencing any of these problems on macOS, try compiling and installing SDL from source (https://hg.libsdl.org/SDL).
Thanks for your input, everyone!
I have a C++ date/time library that I have literally used for decades. It's been rock solid without any issues. But today, as I was making some small enhancements, my test code started complaining violently. The following program demonstrates the problem:
#include <iostream>
#include <time.h>

int main()
{
    _tzset();
    std::cout << "_tzname[ 0 ]=" << _tzname[ 0 ] << std::endl;
    std::cout << "_tzname[ 1 ]=" << _tzname[ 1 ] << std::endl;
    std::cout << "_timezone=" << _timezone << std::endl;

    size_t ret;
    char buf[ 64 ];
    _get_tzname(&ret, buf, 64, 0);
    std::cout << "_get_tzname[ 0 ]=" << buf << std::endl;
    _get_tzname(&ret, buf, 64, 1);
    std::cout << "_get_tzname[ 1 ]=" << buf << std::endl;
    return 0;
}
If I run this in the Visual Studio debugger, I get the following output:
_tzname[ 0 ]=SE Asia Standard Time
_tzname[ 1 ]=SE Asia Daylight Time
_timezone=-25200
_get_tzname[ 0 ]=SE Asia Standard Time
_get_tzname[ 1 ]=SE Asia Daylight Time
This is correct.
But if I run the program from the command line, I get the following output:
_tzname[ 0 ]=Asi
_tzname[ 1 ]=a/B
_timezone=0
_get_tzname[ 0 ]=Asi
_get_tzname[ 1 ]=a/B
Note that the TZ environment variable is set to Asia/Bangkok, which is a synonym for SE Asia Standard Time, or UTC+7. You will notice in the command-line output that the tzname[ 0 ] value is the first 3 characters of Asia/Bangkok and tzname[ 1 ] is the next 3 characters. I have some thoughts on this, but I cannot make sense of it, so I'll just stick to the facts.
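For comparison, here is the same kind of check with TZ set in the form the CRT documentation gives for _tzset (tzn[+|-]hh[:mm[:ss]][dzn], e.g. ICT-7 for UTC+7). Whether the IANA-style name is what trips the parser up is only a guess on my part:

#include <iostream>
#include <stdlib.h>
#include <time.h>

int main()
{
    _putenv("TZ=ICT-7");   // 3-letter zone name plus offset, per the CRT's documented TZ format
    _tzset();
    std::cout << "_tzname[ 0 ]=" << _tzname[ 0 ] << std::endl;
    std::cout << "_timezone=" << _timezone << std::endl;   // expecting -25200 for UTC+7
}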
Note that I included the calls to _get_tzname(...) to demonstrate that I am not getting caught in some kind of deprecation trap, given that _tzname and _timezone are deprecated.
I'm on Windows 7 Professional and I am linking statically to the runtime library (Multi-threaded Debug (/MTd)). I recently installed Visual Studio 2015 and while I am not using it yet, I compiled this program there and the results are the same. I thought there was a chance that I was somehow linking with the VS2015 libraries but I cannot verify this. The Platform Toolset setting in both projects reflects what I would expect.
Thank you for taking the time to look at this...
EDIT: it was an uninitialized variable... :(
Explanation:
The PointLLA constructor I used only passed through Latitude and Longitude, but I never explicitly set the internal Altitude member variable to 0. Rookie mistake...
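A minimal sketch of what the fix looks like; the PointLLA layout here is inferred from how it is used below, and only the alt default is the actual change:

struct PointLLA
{
    double lat;
    double lon;
    double alt;

    // Defaulting alt to 0 is the fix; previously it was left uninitialized
    // when only latitude and longitude were passed in.
    PointLLA(double latIn, double lonIn, double altIn = 0.0) :
        lat(latIn), lon(lonIn), alt(altIn) {}
};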
Original Question:
I'm having a pretty horrible time with a bug in my code. I'm calculating distances between a single point and the corners of a rectangle. In this case, the point is centered over the rectangle, so I should get four equal distances. I get three equal distances, and one almost-equal distance value that's inconsistent (different every time the program runs).
If I have a few key debug statements (pretty much just a std::cout call) that explicitly print out the location of each rectangle corner, I get the expected value for the distance and the inconsistency disappears. Here's the relevant code:
// calculate the minimum and maximum distance to
// camEye within the available lat/lon bounds
Vec3 viewBoundsNE; convLLAToECEF(PointLLA(maxLat,maxLon),viewBoundsNE);
Vec3 viewBoundsNW; convLLAToECEF(PointLLA(maxLat,minLon),viewBoundsNW);
Vec3 viewBoundsSW; convLLAToECEF(PointLLA(minLat,minLon),viewBoundsSW);
Vec3 viewBoundsSE; convLLAToECEF(PointLLA(minLat,maxLon),viewBoundsSE);
// begin comment this block out, and buggy output
OSRDEBUG << "INFO: NE (" << viewBoundsNE.x
<< " " << viewBoundsNE.y
<< " " << viewBoundsNE.z << ")";
OSRDEBUG << "INFO: NW (" << viewBoundsNW.x
<< " " << viewBoundsNW.y
<< " " << viewBoundsNW.z << ")";
OSRDEBUG << "INFO: SE (" << viewBoundsSW.x
<< " " << viewBoundsSW.y
<< " " << viewBoundsSW.z << ")";
OSRDEBUG << "INFO: SW (" << viewBoundsSE.x
<< " " << viewBoundsSE.y
<< " " << viewBoundsSE.z << ")";
// --------------- end
// to get the maximum distance, find the maxima
// of the distances to each corner of the bounding box
double distToNE = camEye.DistanceTo(viewBoundsNE);
double distToNW = camEye.DistanceTo(viewBoundsNW); // <-- inconsistent
double distToSE = camEye.DistanceTo(viewBoundsSE);
double distToSW = camEye.DistanceTo(viewBoundsSW);
std::cout << "INFO: distToNE: " << distToNE << std::endl;
std::cout << "INFO: distToNW: " << distToNW << std::endl; // <-- inconsistent
std::cout << "INFO: distToSE: " << distToSE << std::endl;
std::cout << "INFO: distToSW: " << distToSW << std::endl;
double maxDistToViewBounds = distToNE;
if(distToNW > maxDistToViewBounds)
{ maxDistToViewBounds = distToNW; }
if(distToSE > maxDistToViewBounds)
{ maxDistToViewBounds = distToSE; }
if(distToSW > maxDistToViewBounds)
{ maxDistToViewBounds = distToSW; }
OSRDEBUG << "INFO: maxDistToViewBounds: " << maxDistToViewBounds;
So if I run the code shown above, I'll get output as follows:
INFO: NE (6378137 104.12492 78.289415)
INFO: NW (6378137 -104.12492 78.289415)
INFO: SE (6378137 -104.12492 -78.289415)
INFO: SW (6378137 104.12492 -78.289415)
INFO: distToNE: 462.71851
INFO: distToNW: 462.71851
INFO: distToSE: 462.71851
INFO: distToSW: 462.71851
INFO: maxDistToViewBounds: 462.71851
Exactly as expected. Note that all the distTo* values are the same. I can run the program over and over again and I'll get exactly the same output. But now, if I comment out the block that I noted in the code above, I get something like this:
INFO: distToNE: 462.71851
INFO: distToNW: 463.85601
INFO: distToSE: 462.71851
INFO: distToSW: 462.71851
INFO: maxDistToViewBounds: 463.85601
Every run will slightly vary distToNW. Why distToNW and not the other values? I don't know. A few more runs:
463.06218
462.79352
462.76194
462.74772
463.09787
464.04648
So... what's going on here? I tried cleaning/rebuilding my project to see if there was something strange going on but it didn't help. I'm using GCC 4.6.3 with an x86 target.
EDIT: Adding the definitions of relevant functions.
void MapRenderer::convLLAToECEF(const PointLLA &pointLLA, Vec3 &pointECEF)
{
// conversion formula from...
// hxxp://www.microem.ru/pages/u_blox/tech/dataconvert/GPS.G1-X-00006.pdf
// remember to convert deg->rad
double sinLat = sin(pointLLA.lat * K_PI/180.0f);
double sinLon = sin(pointLLA.lon * K_PI/180.0f);
double cosLat = cos(pointLLA.lat * K_PI/180.0f);
double cosLon = cos(pointLLA.lon * K_PI/180.0f);
// v = radius of curvature (meters)
double v = ELL_SEMI_MAJOR / (sqrt(1-(ELL_ECC_EXP2*sinLat*sinLat)));
pointECEF.x = (v + pointLLA.alt) * cosLat * cosLon;
pointECEF.y = (v + pointLLA.alt) * cosLat * sinLon;
pointECEF.z = ((1-ELL_ECC_EXP2)*v + pointLLA.alt)*sinLat;
}
// and from the Vec3 class defn
inline double DistanceTo(Vec3 const &otherVec) const
{
return sqrt((x-otherVec.x)*(x-otherVec.x) +
(y-otherVec.y)*(y-otherVec.y) +
(z-otherVec.z)*(z-otherVec.z));
}
The inconsistent output suggests that either you're making use of an uninitialized variable somewhere in your code, or you have some memory error (accessing memory that's been deleted, double-deleting memory, etc). I don't see either of those things happening in the code you pasted, but there's lots of other code that gets called.
Does the Vec3 constructor initialize all member variables to zero (or some known state)? If not, then do so and see if that helps. If they're already initialized, take a closer look at convLLAToECEF and PointLLA to see if any variables are not getting initialized or if you have any memory errors there.
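For instance, a sketch of what that zero-initialization could look like; the Vec3 member list is assumed from its usage in the question, since the real class isn't shown:

struct Vec3
{
    Vec3() : x(0.0), y(0.0), z(0.0) {}
    Vec3(double xIn, double yIn, double zIn) : x(xIn), y(yIn), z(zIn) {}

    double x, y, z;

    // DistanceTo(...) as shown in the question
};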
Seems to me like the DistanceTo function is bugged in some way. If you cannot work out what is up, experiment a bit, and report back.
Try reordering the outputs to see if it's still NW that is wrong.
Try redoing the NW point 2-3 times into different vars to see if they are even consistent within one run (see the sketch after this list).
Try using a different camEye for each point to rule out statefulness in that class.
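A sketch of that second check, reusing the names from the question:

Vec3 nwA, nwB, nwC;
convLLAToECEF(PointLLA(maxLat, minLon), nwA);
convLLAToECEF(PointLLA(maxLat, minLon), nwB);
convLLAToECEF(PointLLA(maxLat, minLon), nwC);

// If these three differ from each other within a single run, the problem is
// upstream of DistanceTo; if they agree but differ from run to run, suspect
// uninitialized state feeding the conversion.
std::cout << camEye.DistanceTo(nwA) << " "
          << camEye.DistanceTo(nwB) << " "
          << camEye.DistanceTo(nwC) << std::endl;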
As much as I hate it, have you stepped through it in a debugger? I usually bias towards stdout based debugging, but it seems like it'd help. That aside, you've got side effects from something nasty kicking around.
My guess is that the fact that you expect (rightfully, of course) all four values to be the same is masking a NW/SW/NE/SE typo someplace. The first thing I'd do is isolate the block you've got here into its own function (that takes the box and point coordinates), then run it with the point in several different locations. I think the error would likely expose itself quickly at that point.
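For example, a sketch of that isolation; the helper name maxDistToCorners is mine, and it would need to live in MapRenderer, since convLLAToECEF is a member there:

// requires <algorithm> for std::max
double MapRenderer::maxDistToCorners(Vec3 const &camEye,
                                     double minLat, double maxLat,
                                     double minLon, double maxLon)
{
    Vec3 ne, nw, sw, se;
    convLLAToECEF(PointLLA(maxLat, maxLon), ne);
    convLLAToECEF(PointLLA(maxLat, minLon), nw);
    convLLAToECEF(PointLLA(minLat, minLon), sw);
    convLLAToECEF(PointLLA(minLat, maxLon), se);

    double maxDist = camEye.DistanceTo(ne);
    maxDist = std::max(maxDist, camEye.DistanceTo(nw));
    maxDist = std::max(maxDist, camEye.DistanceTo(sw));
    maxDist = std::max(maxDist, camEye.DistanceTo(se));
    return maxDist;
}

Calling it with the camera placed over different corners (and well off-center) should make any label mix-up stand out quickly.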
See if the problem reproduces if you have the debug statements there, but move them after the output. Then the debug statements could help determine whether it was the Vec3 object that was corrupted, or the calculation from it.
Other ideas: run the code under valgrind.
Pore over the disassembly output.