Hopefully someone can give me some insight into why my program works if I use CGAL::Simple_cartesian<CGAL::Gmpq> as the kernel but crashes if I use CGAL::Exact_predicates_exact_constructions_kernel.
Problem: I let one CGAL::Polygon_2 (diamond-shaped) fall onto another polygon (a square). As soon as the tip of the diamond-shaped polygon touches the square, the program crashes during the call to do_intersect(diamond, square) (probably with a stack overflow) if I use the EPECK kernel. My understanding was that this should always work since the kernel is exact, and because no construction is involved I thought I should even be able to use CGAL::Exact_predicates_inexact_constructions_kernel.
It seems to start looping at the call BOOST_PP_REPEAT_FROM_TO(2, 9, CGAL_LAZY_REP, _) (the blue-marked bar in the image).
Solution: If I replace EPECK with CGAL::Simple_cartesian<CGAL::Gmpq>, it works.
I am willing to use this as the solution, but I want to be certain that it really is a fix and not something that will cause problems further down the line. Some insight into why the problem occurs would also be nice, since I thought CGAL should be able to handle this with EPECK even if it is a degenerate case.
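For reference, a minimal sketch of the setup (not my actual code; the polygon coordinates are made up, but the diamond's tip touches the square's top edge just as in the failing case):

    #include <CGAL/Simple_cartesian.h>
    #include <CGAL/Gmpq.h>
    #include <CGAL/Polygon_2.h>
    #include <CGAL/Boolean_set_operations_2.h>

    // Works: exact rational kernel without lazy evaluation.
    using Kernel = CGAL::Simple_cartesian<CGAL::Gmpq>;
    // Crashes (apparent stack overflow) once the polygons touch:
    // #include <CGAL/Exact_predicates_exact_constructions_kernel.h>
    // using Kernel = CGAL::Exact_predicates_exact_constructions_kernel;
    using Point   = Kernel::Point_2;
    using Polygon = CGAL::Polygon_2<Kernel>;

    int main()
    {
        Polygon square, diamond;
        square.push_back(Point(0, 0));
        square.push_back(Point(10, 0));
        square.push_back(Point(10, 10));
        square.push_back(Point(0, 10));

        diamond.push_back(Point(5, 10));   // tip touching the square's top edge
        diamond.push_back(Point(8, 13));
        diamond.push_back(Point(5, 16));
        diamond.push_back(Point(2, 13));

        return CGAL::do_intersect(diamond, square) ? 0 : 1;
    }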
Additional Info:
I have built it on 3 computers, with 2 MSVC compiler versions and 2 CGAL versions, all with comparable results.
MSVC: 14.10 & 14.12
CGAL: 4.10 & 4.12
gmp: 5.01
boost: 1.67.0
windows sdk: 10.0.17134 & 10.0.14393.0
Windows 10 64 bit
Related
Why am I seeing different results for this simple 3d vector operation using Eigen on Mac and Windows?
I wrote some simulation code on my MacBook Pro (macOS 10.12.6) and tested it extensively. As soon as my colleague tried using it on Windows, he had problems. He gave me a specific failing case. It worked for me. As we dug in, it came down to an attempt to normalize a 3d zero vector, so an attempt to divide by zero. He got (nan, nan, nan) while I got (0, 0, 0). In the context where it happened, the zero result was a soft/harmless fail, which is why I had not noticed it in my testing.
Clearly the vector-of-nans is the right answer. I tried it in an Ubuntu build running under Vagrant and got (-nan, -nan, -nan).
Does anyone know why I get (0, 0, 0) on macOS? I think by default Xcode is using LLVM. The Ubuntu build used clang.
My suspicion is that you got a newer Eigen version on macOS. The behavior of normalize() was changed a while ago:
https://bitbucket.org/eigen/eigen/commits/12f866a746
There was a discussion about the expected behavior here: http://eigen.tuxfamily.org/bz/show_bug.cgi?id=977
Check your compiler flags. You probably have fast math enabled (-ffast-math in gcc). This enables -ffinite-math-only (again, gcc) which, and I quote:
Allow optimizations for floating-point arithmetic that assume that arguments and results are not NaNs or +-Infs.
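A quick way to check both hypotheses on each machine is to normalize a zero vector directly and print the Eigen version; stableNormalized() (available since Eigen 3.3) is the variant documented to handle (near-)zero vectors explicitly. This is only an illustrative check, not taken from the original simulation code:

    #include <Eigen/Dense>
    #include <iostream>

    int main()
    {
        Eigen::Vector3d v = Eigen::Vector3d::Zero();

        // Depending on the Eigen version and on fast-math flags, this prints
        // either "nan nan nan" or "0 0 0".
        std::cout << v.normalized().transpose() << "\n";

        // stableNormalized() deals with (near-)zero vectors explicitly (Eigen 3.3+).
        std::cout << v.stableNormalized().transpose() << "\n";

        std::cout << "Eigen " << EIGEN_WORLD_VERSION << "."
                  << EIGEN_MAJOR_VERSION << "." << EIGEN_MINOR_VERSION << "\n";
    }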
At work I inherited a big code base. The older version was compiled with VC6.0 and works fine on Windows XP and 32-bit Windows 7. The quad-core computer is purpose-built for field use in a specialized industry.
I managed to upgrade to VC2005 and VC2013; however, the binaries produced by the newer compilers yield very high CPU usage, to the point that the UI is unusable.
I tried a few profilers but got quite different results: for example, one points to PostMessageA, and another to LineTo (an MFC function).
Any clue where I should look to find the cause?
I've rarely trusted profilers. One thing I do is repeatedly pause the debugger and see where it ends up; if it keeps ending up in a similar call stack, that's probably where the problem is.
Of course, if you have a lot of threads, you can play with freezing individual threads and pressing play/pause. If there are a lot of dependencies between threads, though, this will be difficult.
Absolute TAPI beginner here. I recently ported my CV code to use UMat instead of Mat since my CPU was at its limit; the morphological operations in particular seemed to consume quite a lot of computing power.
Now, with UMat, I cannot see any change in my frame rate; it is exactly the same whether I use UMat or not, and Process Explorer reports no GPU usage whatsoever. I did a small test with a few calls to dilation and closing on a full-HD image -- no effect.
Am I missing something here? I'm using the latest OpenCV 3.2 build for Windows and a GTX 980 with driver 378.49. cv::ocl::haveOpenCL() and cv::ocl::useOpenCL() both return true, and cv::ocl::Context::getDefault().device( 0 ) also gives me the correct device; everything looks good as far as I can tell. Also, I'm using some custom CL code via cv::ocl::Kernel which is definitely invoked.
I realize it is naive to think that just changing Mat to UMat will result in a huge performance gain (although every one of the very limited number of resources covering TAPI that I find online suggests exactly that). Still, I was hoping to get some gain for starters and then further optimize step by step from there. However, the fact that I can't detect any GPU usage whatsoever highly irritates me.
Is there something I have to watch out for? Maybe my usage of TAPI prevents streamlined execution of the OpenCL code, for example through accidental/hidden readbacks I'm not aware of? Do you see any way of profiling the code with respect to this?
Are there any how-tos, best practices, or common pitfalls for using TAPI? Things like "don't use local UMat instances within functions", "use getUMat() instead of copyTo()", "avoid calls of function x since it will cause a cv::ocl::flush()", things of that sort?
Are there OpenCV operations that are not ported to OpenCL yet? Is there documentation on this? In the OpenCV source code I saw that, if built with the HAVE_OPENCL flag, the functions try to run CL code using the CV_OCL_RUN macro; however, there are a few conditions checked beforehand, otherwise it falls back to the CPU. It does not seem like I have any possibility to figure out whether the GPU or the CPU was actually used, apart from stepping into each and every OpenCL function with the debugger, am I right?
Any ideas/experiences apart from that? I'd appreciate any input that relates to this matter.
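A minimal sketch of the kind of diagnostic/timing check I mean (the input file and kernel size are placeholders, not my real pipeline):

    #include <opencv2/opencv.hpp>
    #include <opencv2/core/ocl.hpp>
    #include <iostream>

    int main()
    {
        std::cout << "haveOpenCL: " << cv::ocl::haveOpenCL() << "\n"
                  << "useOpenCL:  " << cv::ocl::useOpenCL()  << "\n";
        if (cv::ocl::haveOpenCL())
            std::cout << "device: "
                      << cv::ocl::Context::getDefault().device(0).name() << "\n";

        cv::Mat frame = cv::imread("frame.png", cv::IMREAD_GRAYSCALE); // placeholder input
        if (frame.empty()) return 1;

        cv::UMat uframe, udilated;
        frame.copyTo(uframe);   // upload once, then keep working on UMat

        cv::Mat kernel = cv::getStructuringElement(cv::MORPH_ELLIPSE, cv::Size(15, 15));

        const auto t0 = cv::getTickCount();
        for (int i = 0; i < 100; ++i)
            cv::dilate(uframe, udilated, kernel);
        cv::ocl::finish();      // wait for queued OpenCL work before stopping the timer
        double ms = 1000.0 * (cv::getTickCount() - t0) / cv::getTickFrequency();
        std::cout << "100 dilations: " << ms << " ms\n";
    }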
I use C++ and odeint to solve a set of differential equations. I compile the code in Matlab using mex and g++ on Mac OS X. For some time everything worked perfectly, but now something curious is happening:
I can run the same simulation (with the same parameters) twice and in one run the results are fine and in the other a whole column (or multiple columns) of the output is NaN (this also influences the other outputs hinting at a problem occurring during the integration process).
I tried switching between various solvers. Now I am using runge_kutta_fehlberg78 with a fixed step size. (One possibility for NaN output is that the initial step size of an adaptive solver is too large.)
What could be the cause of this? The randomness especially makes me wonder.
Edit: I am starting to suspect that the problem has something to do with Matlab. I compiled a version without the Matlab interface with Xcode as a normal executable, and so far I haven't had an issue with NaN results. It's hard to tell whether the problem is truly resolved, and I don't understand why that would resolve it.
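For completeness, a minimal standalone sketch of the kind of fixed-step integration I am describing (the ODE system and parameters here are placeholders, not my actual model):

    #include <boost/numeric/odeint.hpp>
    #include <array>
    #include <cmath>
    #include <iostream>

    using state_type = std::array<double, 2>;

    // Placeholder right-hand side (harmonic oscillator), standing in for the real system.
    void rhs(const state_type& x, state_type& dxdt, double /*t*/)
    {
        dxdt[0] = x[1];
        dxdt[1] = -x[0];
    }

    int main()
    {
        namespace odeint = boost::numeric::odeint;

        state_type x = {1.0, 0.0};
        odeint::runge_kutta_fehlberg78<state_type> stepper;

        // integrate_const with a plain stepper never adapts dt, so a too-large
        // initial step (one source of NaN with adaptive steppers) is ruled out.
        odeint::integrate_const(stepper, rhs, x, 0.0, 10.0, 0.01);

        std::cout << x[0] << " " << x[1]
                  << " finite: " << std::isfinite(x[0]) << "\n";
    }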
I have an extreme problem.
I have been working on a game for about two years (20,000+ lines of code), and lately I have been noticing a ton of memory leaks. The problem is that I cannot track down every single one of them since my game is way too big...
I have searched around and noticed that CppCheck would be useful in my situation, but the problem is that since I am using Windows, I cannot use CppCheck (which is for Linux only).
I am wondering if there is a library or tool that is CppCheck's equivalent for Windows, or maybe a way to use CppCheck on Windows instead.
All of the possibilities I have come up with, along with solutions to others' problems (such as using smart pointers with std::deque and the like), assume that my program is small, or, more to the point, would require rewriting my entire program, something that I really do not want to do...
IDE: Code Blocks 10.05
Compiler: MinGW 3.81 GCC 4.4.1
CppCheck works on Windows too (check the downloads on SourceForge). CppCheck is only a static analysis tool (it analyzes your source code to find potential problems). To find real memory leaks, it may be necessary to use a debugging tool that actually runs your code (look at Google's Dr. Memory, for example).
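To make the static/dynamic distinction concrete, here is a contrived example (not from the asker's game) of a leak that CppCheck should flag from the source alone:

    void obvious_leak()
    {
        int* p = new int[100];   // CppCheck should report "Memory leak: p":
        p[0] = 42;               // the pointer goes out of scope without delete[]
    }

Leaks that only show up under particular runtime conditions, or where ownership is passed around between modules, are the ones where a tool that actually runs the program (such as Dr. Memory) is needed.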