EDIT: it was an uninitialized variable... :(
Explanation:
The PointLLA constructor I used only passed through Latitude and Longitude, but I never explicitly set the internal Altitude member variable to 0. Rookie mistake...
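A minimal sketch of the fix (the member names match the snippets below; the two-argument constructor is the assumed culprit):

class PointLLA
{
public:
    // Before: this constructor left alt uninitialized, so it held whatever
    // garbage happened to be in memory -- hence the varying distances.
    PointLLA(double myLat, double myLon)
        : lat(myLat), lon(myLon), alt(0.0)   // explicitly zero the altitude
    {}

    double lat, lon, alt;
};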
Original Question:
I'm having a pretty horrible time with a bug in my code. I'm calculating distances between a single point and the corners of a rectangle. In this case, the point is centered over the rectangle so I should get four equal distances. I get three equal distances, and one almost equal distance value that's inconsistent (different every time it runs).
If I have a few key debug statements (pretty much just a std::cout call) that explicitly print out the location of each rectangle corner, I get the expected value for the distance and the inconsistency disappears. Here's the relevant code:
// calculate the minimum and maximum distance to
// camEye within the available lat/lon bounds
Vec3 viewBoundsNE; convLLAToECEF(PointLLA(maxLat,maxLon),viewBoundsNE);
Vec3 viewBoundsNW; convLLAToECEF(PointLLA(maxLat,minLon),viewBoundsNW);
Vec3 viewBoundsSW; convLLAToECEF(PointLLA(minLat,minLon),viewBoundsSW);
Vec3 viewBoundsSE; convLLAToECEF(PointLLA(minLat,maxLon),viewBoundsSE);

// begin comment this block out, and buggy output
OSRDEBUG << "INFO: NE (" << viewBoundsNE.x
         << " " << viewBoundsNE.y
         << " " << viewBoundsNE.z << ")";
OSRDEBUG << "INFO: NW (" << viewBoundsNW.x
         << " " << viewBoundsNW.y
         << " " << viewBoundsNW.z << ")";
OSRDEBUG << "INFO: SE (" << viewBoundsSW.x
         << " " << viewBoundsSW.y
         << " " << viewBoundsSW.z << ")";
OSRDEBUG << "INFO: SW (" << viewBoundsSE.x
         << " " << viewBoundsSE.y
         << " " << viewBoundsSE.z << ")";
// --------------- end

// to get the maximum distance, find the maxima
// of the distances to each corner of the bounding box
double distToNE = camEye.DistanceTo(viewBoundsNE);
double distToNW = camEye.DistanceTo(viewBoundsNW);   // <-- inconsistent
double distToSE = camEye.DistanceTo(viewBoundsSE);
double distToSW = camEye.DistanceTo(viewBoundsSW);

std::cout << "INFO: distToNE: " << distToNE << std::endl;
std::cout << "INFO: distToNW: " << distToNW << std::endl;   // <-- inconsistent
std::cout << "INFO: distToSE: " << distToSE << std::endl;
std::cout << "INFO: distToSW: " << distToSW << std::endl;

double maxDistToViewBounds = distToNE;
if(distToNW > maxDistToViewBounds)
{   maxDistToViewBounds = distToNW;   }
if(distToSE > maxDistToViewBounds)
{   maxDistToViewBounds = distToSE;   }
if(distToSW > maxDistToViewBounds)
{   maxDistToViewBounds = distToSW;   }

OSRDEBUG << "INFO: maxDistToViewBounds: " << maxDistToViewBounds;
So if I run the code shown above, I'll get output as follows:
INFO: NE (6378137 104.12492 78.289415)
INFO: NW (6378137 -104.12492 78.289415)
INFO: SE (6378137 -104.12492 -78.289415)
INFO: SW (6378137 104.12492 -78.289415)
INFO: distToNE: 462.71851
INFO: distToNW: 462.71851
INFO: distToSE: 462.71851
INFO: distToSW: 462.71851
INFO: maxDistToViewBounds: 462.71851
Exactly as expected. Note that all the distTo* values are the same. I can run the program over and over again and I'll get exactly the same output. But now, if I comment out the block that I noted in the code above, I get something like this:
INFO: distToNE: 462.71851
INFO: distToNW: 463.85601
INFO: distToSE: 462.71851
INFO: distToSW: 462.71851
INFO: maxDistToViewBounds: 463.85601
Every run will slightly vary distToNW. Why distToNW and not the other values? I don't know. A few more runs:
463.06218
462.79352
462.76194
462.74772
463.09787
464.04648
So... what's going on here? I tried cleaning/rebuilding my project to see if there was something strange going on but it didn't help. I'm using GCC 4.6.3 with an x86 target.
EDIT: Adding the definitions of relevant functions.
void MapRenderer::convLLAToECEF(const PointLLA &pointLLA, Vec3 &pointECEF)
{
    // conversion formula from...
    // hxxp://www.microem.ru/pages/u_blox/tech/dataconvert/GPS.G1-X-00006.pdf

    // remember to convert deg->rad
    double sinLat = sin(pointLLA.lat * K_PI/180.0f);
    double sinLon = sin(pointLLA.lon * K_PI/180.0f);
    double cosLat = cos(pointLLA.lat * K_PI/180.0f);
    double cosLon = cos(pointLLA.lon * K_PI/180.0f);

    // v = radius of curvature (meters)
    double v = ELL_SEMI_MAJOR / (sqrt(1-(ELL_ECC_EXP2*sinLat*sinLat)));
    pointECEF.x = (v + pointLLA.alt) * cosLat * cosLon;
    pointECEF.y = (v + pointLLA.alt) * cosLat * sinLon;
    pointECEF.z = ((1-ELL_ECC_EXP2)*v + pointLLA.alt)*sinLat;
}

// and from the Vec3 class defn
inline double DistanceTo(Vec3 const &otherVec) const
{
    return sqrt((x-otherVec.x)*(x-otherVec.x) +
                (y-otherVec.y)*(y-otherVec.y) +
                (z-otherVec.z)*(z-otherVec.z));
}
The inconsistent output suggests that either you're making use of an uninitialized variable somewhere in your code, or you have some memory error (accessing memory that's been deleted, double-deleting memory, etc). I don't see either of those things happening in the code you pasted, but there's lots of other code that gets called.
Does the Vec3 constructor initialize all member variables to zero (or some known state)? If not, then do so and see if that helps. If they're already initialized, take a closer look at convLLAToECEF and PointLLA to see if any variables are not getting initialized or if you have any memory errors there.
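For instance, a default constructor along these lines puts the members into a known state (a sketch; the actual Vec3 layout is assumed from the DistanceTo snippet above):

// Sketch only -- assumes Vec3 stores plain x, y, z doubles.
class Vec3
{
public:
    Vec3() : x(0), y(0), z(0) {}   // known state instead of garbage
    Vec3(double myX, double myY, double myZ) : x(myX), y(myY), z(myZ) {}
    double x, y, z;
};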
Seems to me like the DistanceTo function is bugged in some way. If you cannot work out what is up, experiment a bit, and report back.
- Try reordering the outputs to see if it's still NW that is wrong.
- Try redoing the NW point 2-3 times into different vars to see if they are even consistent in one run.
- Try using a different camEye for each point to rule out statefulness in that class.
As much as I hate it, have you stepped through it in a debugger? I usually bias towards stdout based debugging, but it seems like it'd help. That aside, you've got side effects from something nasty kicking around.
My guess is that the fact that you expect (rightfully, of course) all four values to be the same is masking a "NW/SW/NE/SE" typo someplace. First thing I'd do is isolate the block you've got here into its own function (that takes the box and point coordinates), then run it with the point in several different locations. I think the error should expose itself quickly at that point.
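Something like this sketch (reusing the types and functions from the question; it assumes access to convLLAToECEF, e.g. as a member of the same class, and <algorithm> for std::max):

double maxDistToCorners(Vec3 const &camEye,
                        double minLat, double maxLat,
                        double minLon, double maxLon)
{
    Vec3 ne, nw, sw, se;
    convLLAToECEF(PointLLA(maxLat, maxLon), ne);
    convLLAToECEF(PointLLA(maxLat, minLon), nw);
    convLLAToECEF(PointLLA(minLat, minLon), sw);
    convLLAToECEF(PointLLA(minLat, maxLon), se);

    // labeling each corner exactly once here makes an NE/NW/SE/SW mix-up obvious
    return std::max(std::max(camEye.DistanceTo(ne), camEye.DistanceTo(nw)),
                    std::max(camEye.DistanceTo(sw), camEye.DistanceTo(se)));
}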
See if the problem reproduces if you have the debug statements there, but move them after the output. Then the debug statements could help determine whether it was the Vec3 object that was corrupted, or the calculation from it.
Other ideas: run the code under valgrind.
Pore over the disassembly output.
I'm working with this code: https://github.com/UCLA-TMD/Ogata. It performs a fast Fourier transform.
It integrates the function test according to some predefined options that are arguments of the FBT constructor - FBT(Bessel order, some option that doesn't matter, number of function calls, rough estimate of where the function has its maximum). So far so good.
This code works fine with that test function, but it's not the function I actually need to use, so I switched to something like exp(x) just to test it, and no matter what I do I always get:
(It always compiles okay, but when I run the executable, it gives me this:)
gsl: fsolver.c:126: ERROR: endpoints do not enclose a minimum
Default GSL error handler invoked.
Aborted (core dumped)
At first I thought it could be a problem with the function's maximum value Q in FBT, but whenever I change it, it gives me the same error.
Would really appreciate any help.
#include <chrono>
#include <cmath>
#include <functional>
#include <iomanip>
#include <iostream>
#include "FBT.h" // header from the UCLA-TMD/Ogata repository (name assumed)

double test( double x, double width ){ return x*exp(-x/width); } // test function to transform; width lets you pass extra data to the function

int main( void )
{
    //FBT(Bessel Order nu, option of function (always zero), number of function calls, rough estimate where the maximum of the function f(x) is).
    FBT ogata0 = FBT(0.0,0,10,1.0); // Fourier transform with Jnu, nu=0.0 and N=10

    double qT = 1.;
    double width = 1.;

    // call ogata0 on the test function
    auto begin = std::chrono::high_resolution_clock::now();
    double res = ogata0.fbt(std::bind(test, std::placeholders::_1, width), qT);
    auto end = std::chrono::high_resolution_clock::now();

    std::cout << std::setprecision(30) << " FT( J0(x*qT) x*exp(-x) ) at qT= " << qT << std::endl;
    std::cout << std::setprecision(30) << "Numerical transformed = " << res << std::endl;

    auto overhead = std::chrono::duration_cast<std::chrono::nanoseconds>(end-begin).count();
    std::cout << "Calc time: " << overhead << " nanoseconds\n";
}
I'm trying to use Boost.Geometry for polygon subtraction. The polygons might be concave but don't have interior rings.
Most of the time this works well, but I found at least one occasion where the result is detected by Boost.Geometry itself as self-intersecting.
The Polygon Concept defines some rules, which I think I'm fulfilling, but at least regarding the spikes I'm not entirely sure.
Boost.Geometry refers to the OGC Simple Feature Specification, which notes the following about cut lines, spikes and punctures:
d) A Polygon may not have cut lines, spikes or punctures e.g.:
∀ P ∈ Polygon, P = P.Interior.Closure;
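For intuition, a spike is a zero-width excursion of the boundary: the ring travels out along a segment and returns along the exact same segment, enclosing no area. A self-contained illustration (the coordinates are made up for this example, not taken from the question; is_valid's reason string reports the defect):

#include <boost/geometry.hpp>
#include <boost/geometry/geometries/geometries.hpp>
#include <iostream>
#include <string>
namespace bg = boost::geometry;

int main() {
    using point_t   = bg::model::d2::point_xy<double>;
    using polygon_t = bg::model::polygon<point_t>;

    polygon_t spiky;
    // The (2 4) -> (2 8) -> (2 4) round trip is a spike: zero width, no area.
    bg::read_wkt("POLYGON((0 0,0 4,2 4,2 8,2 4,4 4,4 0,0 0))", spiky);

    std::string reason;
    if (!bg::is_valid(spiky, reason))
        std::cout << "invalid: " << reason << "\n";
}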
The following example fails. I'm trying to subtract the white from the red polygon; the result is the green dashed polygon, which looks fine at first glance. When zooming in to the marked corner (while admittedly being somewhat limited by the floating-point accuracy of the viewer), it appears that there are two very close points which might or might not overlap (I didn't do the calculations; the main point is that Boost.Geometry thinks they overlap).
Here's a godbolt showing the behavior.
The coordinates of the four line segments forming the Z in the result polygon are
-38.166710648232516689, -29.97044076186023176
-46.093710179049615761, -23.318898379209066718 // these two points are notably
-46.093707539928615802, -23.318900593694593226 // but not awfully close
-46.499997777166534263, -22.977982153068655435
While the differences only show up several places after the decimal point, it feels like they should still be more than big enough not to cause floating-point accuracy problems.
Is there a more detailed explanation of what defines a spike, as mentioned in the docs ("There should be no cut lines, spikes or punctures")?
Is there something in Boost.Geometry, e.g. the strategies (which I don't know much about), or anything else that can be used to completely avoid such situations?
If nothing else helps, would switching to integers completely solve the problem, or is using Boost.Polygon the only stable Boost option?
EDIT 1:
Instead of asking a similar question, which would likely boil down to the same problem, I'm adding another reproduction here. It doesn't show the problem in the call to intersection, but in a result that contains a hole where there should be none.
The following example fails:
I'm trying to subtract the white from the red polygon.
This should result in a polygon almost identical with the red one and without any holes. Instead the result is the original red polygon as outer ring and the white polygon as inner ring.
Adding and removing seemingly unrelated points, e.g. points 7, 8 and 9 of the red polygon, changes the behavior and makes it work correctly.
Adding more precision could supposedly fix this example, but I suspect the problem is inherent to the algorithm.
Here's a godbolt showing the behavior.
After rotating the points of poly2 by one position to the right, the problem disappears, as shown in this godbolt.
covered_by seems to match this behavior.
Brace for a ride: first I confirm the hypothesis (it's a floating-point accuracy limitation/defect). Next I come up with the simplest of workarounds that... apparently works :)
Checking The Premise
First I simplified the tester, adding a lot of diagnostics:
Live On Wandbox
#include <boost/geometry.hpp>
#include <boost/geometry/geometries/geometries.hpp>
#include <iostream>
#include <iomanip>
#include <string>
#include <string_view>
namespace bg = boost::geometry;

void report(auto& geo, std::string_view caption) {
    std::cout << "---\n";
    std::cout << caption << ": " << bg::wkt(geo) << "\n";

    std::string reason;
    if (!bg::is_valid(geo, reason)) {
        std::cout << caption << ": " << reason << "\n";
        bg::correct(geo);
        std::cout << caption << " corrected: " << bg::wkt(geo) << "\n";
    }
    if (bg::intersects(geo)) {
        std::cout << caption << " is self-intersecting\n";
    }
    std::cout << caption << " area: " << bg::area(geo) << "\n";
}

int main() {
    using point_t         = bg::model::d2::point_xy<double>;
    using polygon_t       = bg::model::polygon<point_t /*, true, true*/>;
    using multi_polygon_t = bg::model::multi_polygon<polygon_t>;

    polygon_t poly1{{
        {-46.499997761818364, -23.318506263117456},
        {-46.499999998470159, 26.305250946791375},
        {-5.3405104310993323, 15.276598956337693},
        {37.500000001521741, -9.4573812741570009},
        {37.500000001521741, -29.970448623772313},
        {-38.166710648232517, -29.970440761860232},
        {-46.094160894726727, -23.318520183850637},
        {-46.499997761818364, -23.318506263117456},
    }},
    poly2{{
        {-67.554314795325794, -23.318900735763236},
        {-62.596713294359084, -17.333596950467950},
        {-60.775620215083222, -15.852879652420938},
        {-58.530163386780792, -15.186307709861694},
        {-56.202193256444019, -15.435360658555282},
        {-54.146122173314907, -16.562122444733632},
        {-46.093707539928616, -23.318900593694593},
        {-67.554314795325794, -23.318900735763236},
    }};

    report(poly1, "poly1");
    report(poly2, "poly2");

    multi_polygon_t diff;
    bg::difference(poly1, poly2, diff);

    if (diff.empty()) {
        std::cout << "difference is empty\n";
    } else {
        report(diff, "diff");
        for (size_t i = 0; i < diff.size(); ++i) {
            report(diff[i], "result#" + std::to_string(i+1));
        }
    }

    std::cout << "Diff in areas: " << (bg::area(poly1) - bg::area(diff)) << "\n";
}
Prints
---
poly1: POLYGON((-46.5 -23.3185,-46.5 26.3053,-5.34051 15.2766,37.5 -9.45738,37.5 -29.9704,-38.1667 -29.9704,-46.0942 -23.3185,-46.5 -23.3185))
poly1 area: 3468.84
---
poly2: POLYGON((-67.5543 -23.3189,-62.5967 -17.3336,-60.7756 -15.8529,-58.5302 -15.1863,-56.2022 -15.4354,-54.1461 -16.5621,-46.0937 -23.3189,-67.5543 -23.3189))
poly2 area: 105.495
---
diff: MULTIPOLYGON(((-46.5 -22.978,-46.5 26.3053,-5.34051 15.2766,37.5 -9.45738,37.5 -29.9704,-38.1667 -29.9704,-46.0937 -23.3189,-46.0937 -23.3189,-46.5 -22.978)))
diff is self-intersecting
diff area: 3468.78
---
result#1: POLYGON((-46.5 -22.978,-46.5 26.3053,-5.34051 15.2766,37.5 -9.45738,37.5 -29.9704,-38.1667 -29.9704,-46.0937 -23.3189,-46.0937 -23.3189,-46.5 -22.978))
result#1 is self-intersecting
result#1 area: 3468.78
Diff in areas: 0.0690986
Thus confirming the premise.
Shot In The Dark
As a blind shot I tried replacing double with long double. This leads to no discernible difference in output.
Replacing it with a multi-precision type like:
#include <boost/multiprecision/cpp_dec_float.hpp> // needed for the type below

using Coord = boost::multiprecision::number<
    boost::multiprecision::backends::cpp_dec_float<100>,
    boost::multiprecision::et_off>;
does show a difference:
As you can see there is no significant difference in the area, but the area delta changed (0.0690986 became 0.0690985) and more interestingly, the "mis-diagnosed" self-intersection has vanished.
THE PROBLEM
There's a problem with the above analysis: you can't use it without changing a few spots in the code where
- constexpr is wrongly assumed for the calculation type (preventing compilation)
- std::abs is qualified, instead of invoking the ADL-enabled abs from Boost Multiprecision
If you want, you can see the relevant patch (relative to 1.76.0) here, but using it implies you may run into more similar issues (like I have before): https://github.com/sehe/geometry/commit/31a7ccf1730b09b827ba6cc4dabcb845c3582a9b
commit 31a7ccf1730b09b827ba6cc4dabcb845c3582a9b
Author: sehe <sgheeren@gmail.com>
Date: Wed Jun 9 16:35:53 2021 +0200
Minimal patch for https://stackoverflow.com/q/67904576/85371
Allows compilation of bg::difference (specifically, sort_by_side) using
Multiprecision number type. Expression templates have already been
disabled to sidestep the bulk of the issues.
diff --git a/include/boost/geometry/algorithms/detail/overlay/sort_by_side.hpp b/include/boost/geometry/algorithms/detail/overlay/sort_by_side.hpp
index f65c8ebae..72f4aa724 100644
--- a/include/boost/geometry/algorithms/detail/overlay/sort_by_side.hpp
+++ b/include/boost/geometry/algorithms/detail/overlay/sort_by_side.hpp
@@ -299,7 +299,7 @@ public :
         // Including distance would introduce cyclic dependencies.
         using coor_t = typename select_coordinate_type<Point1, Point2>::type;
         using calc_t = typename geometry::select_most_precise <coor_t, T>::type;
-        constexpr calc_t machine_giga_epsilon = 1.0e9 * std::numeric_limits<calc_t>::epsilon();
+        calc_t machine_giga_epsilon = 1.0e9 * std::numeric_limits<calc_t>::epsilon();
         calc_t const& a0 = geometry::get<0>(a);
         calc_t const& b0 = geometry::get<0>(b);
@@ -310,9 +310,10 @@ public :
         // The maximum limit is avoid, for floating point, large limits like 400
         // (which are be calculated using eps)
-        constexpr calc_t maxlimit = 1.0e-3;
+        calc_t maxlimit = 1.0e-3;
         auto const limit = (std::min)(maxlimit, limit_giga_epsilon * machine_giga_epsilon * c);
-        return std::abs(a0 - b0) <= limit && std::abs(a1 - b1) <= limit;
+        using std::abs;
+        return abs(a0 - b0) <= limit && abs(a1 - b1) <= limit;
     }

 template <typename Operation, typename Geometry1, typename Geometry2>
SUMMARY / WORKAROUND
I do not recommend using the patch. I recommend concluding that it is indeed a precision problem. If you think this is a defect in the library, consider reporting the problem to the library maintainers.
As a workaround, you can try scaling your inputs:
for (auto& pt: poly1.outer()) bg::multiply_value(pt, 1'000);
for (auto& pt: poly2.outer()) bg::multiply_value(pt, 1'000);
This also removes the symptom: Live On Wandbox
---
poly1: POLYGON((-46500 -23318.5,-46500 26305.3,-5340.51 15276.6,37500 -9457.38,37500 -29970.4,-38166.7 -29970.4,-46094.2 -23318.5,-46500 -23318.5))
poly1 area: 3.46884e+09
---
poly2: POLYGON((-67554.3 -23318.9,-62596.7 -17333.6,-60775.6 -15852.9,-58530.2 -15186.3,-56202.2 -15435.4,-54146.1 -16562.1,-46093.7 -23318.9,-67554.3 -23318.9))
poly2 area: 1.05495e+08
---
diff: MULTIPOLYGON(((-46500 -22978,-46500 26305.3,-5340.51 15276.6,37500 -9457.38,37500 -29970.4,-38166.7 -29970.4,-46093.7 -23318.9,-46500 -22978)))
diff area: 3.46878e+09
---
result#1: POLYGON((-46500 -22978,-46500 26305.3,-5340.51 15276.6,37500 -9457.38,37500 -29970.4,-38166.7 -29970.4,-46093.7 -23318.9,-46500 -22978))
result#1 area: 3.46878e+09
Diff in areas: 69098.1
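Note that the difference is then computed in the scaled coordinate space. Since these polygons have no interior rings, scaling the result back down is straightforward (a sketch using Boost.Geometry's divide_value, the counterpart of multiply_value used above):

// undo the scaling on each result polygon (outer rings only; no holes here)
for (auto& poly : diff)
    for (auto& pt : poly.outer())
        bg::divide_value(pt, 1'000);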
When I compile my application in Release mode I get an incorrect division result: 40.0 / 5 = 7.
In a Debug build it is correct, and the result is 8.
I tried casting to double, from double, to int, removing abs(), etc., but no luck. I know this must be related to the weirdness of floating-point math on computers, but I have no idea what exactly. I also logged the values to the console via the qDebug() calls below the code - everything looks okay, except the initial steps.
//somewhere in code
double tonnageToRecover = 0.5;//actually, it's QDoubleSpinBox->value(), with a 0.5 step set. Anyway, the value finally reduces to 0.5 every time
double tonnagePerArmorPoint = 0.0125;//taken from .json
int minimumArmorDelta = 5;//taken from .json
...
//place where the calculations are performed
double armorPointsPerHalfTon = tonnageToRecover / tonnagePerArmorPoint;
int steps = abs(static_cast<int>(armorPointsPerHalfTon / minimumArmorDelta));
qDebug() << "armorPointsPerHalfTon = " << armorPointsPerHalfTon;
qDebug() << "tonnagePerArmorPoint = " << tonnagePerArmorPoint;
qDebug() << "steps initial = " << steps;
qDebug() << "minimumArmorDelta = " << minimumArmorDelta;
Both operands of the first division are of type double; tonnageToRecover = 0.5 and tonnagePerArmorPoint = 0.0125, and the result is 40, which is OK.
minimumArmorDelta is an int = 5.
So why isn't 40 / 5 equal to 8??
Compiler - MinGW 32 5.3.0, from Qt 5.11 pack
Screenshots:
Release
Debug
@Julian
I suspect that too, but how can I overcome this obstacle? I will try to change steps to double, then cast to int again.
RESULT: still does not work :/
I found a solution, but I have no idea exactly why it works now. The current code is:
double armorPointsPerHalfTon = tonnageToRecover / tonnagePerArmorPoint;
// int aPHT = (int)armorPointsPerHalfTon;
// double minDelta = 5.0;//static_cast<double>(minimumArmorDelta);
QString s(QString::number(abs(armorPointsPerHalfTon / minimumArmorDelta)));
int steps = abs(armorPointsPerHalfTon / minimumArmorDelta);
#define myqDebug() qDebug() << fixed << qSetRealNumberPrecision(10)
myqDebug() << "tonnageToRecover = " << tonnageToRecover;
myqDebug() << "tonnagePerArmorPoint = " << tonnagePerArmorPoint;
myqDebug() << "armorPointsPerHalfTon = " << armorPointsPerHalfTon;
//myqDebug() << "aPHT = " << aPHT;//this was 39 in Release, 40 in Debug
myqDebug() << "steps initial = " << steps;
myqDebug() << "string version = " << s;
myqDebug() << "minimumArmorDelta = " << minimumArmorDelta;// << ", minDelta = " << minDelta;
#undef myqDebug
I suppose that the creation of that QString s flushes something, and that's why the calculation of steps is correct now. The string has the incorrect value "7", though.
Your basic problem is that you are truncating.
Suppose real number arithmetic would give an answer of exactly 8. Floating point arithmetic will give an answer that is very close to 8, but can differ from it in either direction due to rounding error. If the floating point answer is slightly greater than 8, truncating will change it to 8. If it is even slightly less than 8, truncating will change it to 7.
I suggest writing a new question on how to avoid the truncation, with discussion of why you are doing it.
I guess the reason is that armorPointsPerHalfTon / minimumArmorDelta could be not 8 but actually 7.99999999 in the Release version. This value then changes to 7 through the int-cast.
So, if the Debug version calculates armorPointsPerHalfTon / minimumArmorDelta = 8.0000001, the result is static_cast<int>(armorPointsPerHalfTon / minimumArmorDelta) = 8.
It's not surprising that Debug / Release yield different results (on the order of machine precision), as several optimizations occur in the Release version.
EDIT: If it suits your requirements, you could just use std::round to round your double to the nearest integer, rather than truncating the decimals.
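A minimal self-contained sketch of that approach, using the values from the question:

#include <cmath>
#include <iostream>

int main()
{
    double armorPointsPerHalfTon = 0.5 / 0.0125; // ideally 40, may come out as 39.999...
    int    minimumArmorDelta     = 5;

    // Truncation turns 7.999... into 7; rounding to nearest turns both
    // 7.999... and 8.000...1 into 8, which is what the calculation intends.
    int steps = static_cast<int>(std::round(armorPointsPerHalfTon / minimumArmorDelta));
    std::cout << steps << "\n"; // prints 8
}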
I'm wondering what the best way to detect a high-DPI display is. Currently I'm trying to use SDL_GetDisplayDPI(int, *float, *float, *float), however this has only returned errors on the two different computers I tested with (a MacBook Pro running OS X 10.11.5 and an iMac running macOS 10.12 Beta (16A238m)). For reference, my code is below.
float diagDPI = -1;
float horiDPI = -1;
float vertDPI = -1;
int dpiReturn = SDL_GetDisplayDPI (0, &diagDPI, &horiDPI, &vertDPI);
std::cout << "GetDisplayDPI() returned " << dpiReturn << std::endl;
if (dpiReturn != 0)
{
std::cout << "Error: " << SDL_GetError () << std::endl;
}
std::cout << "DDPI: " << diagDPI << std::endl << "HDPI: " << horiDPI << std::endl << "VDPI: " << vertDPI << std::endl;
Unfortunately, this is only giving me something like this:
/* Output */
GetDisplayDPI() returned -1
Error:
DDPI: -1
HDPI: -1
VDPI: -1
Not Retina
I also tried comparing the OpenGL drawable size with the SDL window size, but SDL_GetWindowSize(SDL_Window, *int, *int) is returning 0s, too. That code is below, followed by the output.
int gl_w;
int gl_h;
SDL_GL_GetDrawableSize (window, &gl_w, &gl_h);
std::cout << "GL_W: " << gl_w << std::endl << "GL_H: " << gl_h << std::endl;
int sdl_w;
int sdl_h;
SDL_GetWindowSize (window, &sdl_w, &sdl_h);
std::cout << "SDL_W: " << sdl_w << std::endl << "SDL_H: " << sdl_h << std::endl;
/* Output */
GL_W: 1280
GL_H: 720
SDL_W: 0
SDL_H: 0
It's entirely possible that I'm doing something wrong here, or making these calls in the wrong place, but I think it's more likely that I'm on the wrong track entirely. There's a hint to disallow high-DPI canvases, so there's probably a simple bool somewhere, or something that I'm missing. I have certainly looked through the wiki and checked Google, but I can't really find any help for this. Any suggestions or feedback are welcome!
Thank you for your time!
I know I'm not answering your question directly, and want to reiterate one thing you tried.
On a MacBook Pro, when an SDL window is on an external display, SDL_GetWindowSize and SDL_GL_GetDrawableSize return the same values. If the window is on a Retina screen, they're different. Specifically, the drawable size is 2x larger in each dimension.
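That observation suggests a workaround along these lines (a sketch; window is assumed to be a valid SDL_Window* created with the SDL_WINDOW_ALLOW_HIGHDPI flag):

// Compare the window size (screen coordinates) with the drawable size (pixels).
// On a Retina/high-DPI display the drawable is larger; otherwise they match.
int winW = 0, winH = 0;
SDL_GetWindowSize(window, &winW, &winH);

int drawW = 0, drawH = 0;
SDL_GL_GetDrawableSize(window, &drawW, &drawH);

bool isHighDPI = (drawW > winW) || (drawH > winH);
float scale = (winW > 0) ? (float)drawW / (float)winW : 1.0f; // 2.0 on Retina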
I was using a .framework installation of SDL when I encountered this issue. For an unrelated reason, I trashed the .framework SDL files (image and ttf as well) and built SDL from source (thus transitioning to a "unix-style" SDL installation). To my surprise, SDL_GetDisplayDPI() is now returning 0, setting the values of DDPI, HDPI, and VDPI to 109 on a non-Retina iMac, and 113.5 on a Retina MacBook Pro. I'm not certain that these are correct/accurate, but it is consistent between launches, so I'll work with it.
At this point, I'm not sure if it was a bug, which has been fixed in the repo, or was an issue with my .framework files. On a somewhat unrelated note, SDL_GetBasePath () and SDL_GetPrefPath (), which also weren't working, now return the expected values. If you're also experiencing any of these problems on macOS, try compiling and installing SDL from source (https://hg.libsdl.org/SDL).
Thanks for your input, everyone!
So this is really a mystery to me. I am measuring the time of my own sine function and comparing it to the standard sin(). There is strange behavior, though. When I use the functions just standalone, like:
sin(something);
I get an average time (measuring 1000000 calls in 10 rounds) of 3.1276 ms for the standard sine function and 51.5589 ms for my implementation.
But when I use something like this:
float result = sin(something);
I suddenly get 76.5621 ms for the standard sin() and 49.3675 ms for mine. I understand that it takes some time to assign the value to a variable, but why doesn't it add time to my sine too? It's more or less the same, while the standard one increases rapidly.
EDIT:
My code for measuring:
// assumes: using namespace std; <fstream>, <chrono>, <cmath> included;
// repeat, callNum and sum declared earlier
ofstream file("result.txt",ios::trunc);
file << "Measured " << repeat << " rounds with " << callNum << " calls in each \n";

for (int i=0;i<repeat;i++)
{
    auto start = chrono::steady_clock::now();

    //call the function here dattebayo!
    for (int o=0; o<callNum;o++)
    {
        double g = sin((double)o);
    }

    auto end = chrono::steady_clock::now();
    auto difTime = end-start;
    double timeD = chrono::duration <double,milli> (difTime).count();
    file << i << ": " << timeD << " ms\n";
    sum += timeD;
}
Any modern compiler knows functions such as sin, cos, printf("%s\n", str) and many more, and will either translate a call into a simpler form [a constant if the argument is constant; printf("%s\n", str); becomes puts(str);] or remove it completely [if it is known that the function has no "side effects", in other words, it JUST calculates the returned value and has no other effect on the system].
This often happens even for standard functions, even when the compiler is at low or even no optimisation levels.
You need to make sure that the result of your function is REALLY used for it to be called in optimised builds. Add the returned values together in the loop, for example:
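A minimal sketch of that change, applied to the measuring loop from the question:

double sumOfResults = 0.0;                    // accumulator that is later printed
for (int o = 0; o < callNum; o++)
{
    sumOfResults += sin((double)o);           // the result now flows into the sum
}
file << "checksum: " << sumOfResults << "\n"; // observable use: the compiler
                                              // can no longer discard the calls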