Hopefully someone can give me some insight into why my program works if I use CGAL::Simple_cartesian<CGAL::Gmpq> as the kernel, but crashes if I use CGAL::Exact_predicates_exact_constructions_kernel.
Problem: I let one CGAL::Polygon_2 (diamond shaped) fall onto another polygon (a square). As soon as the tip of the diamond touches the square, the program crashes during the call to do_intersect(diamond, square) (probably with a stack overflow) if I use the EPECK kernel. My understanding was that this should always work, since the kernel is exact; and because do_intersect makes no constructions, I thought I should even be able to use CGAL::Exact_predicates_inexact_constructions_kernel.
It seems to start looping at the blue-marked bar in the image, at the call BOOST_PP_REPEAT_FROM_TO(2, 9, CGAL_LAZY_REP, _).
Solution: If I replace EPECK with CGAL::Simple_cartesian<CGAL::Gmpq>, it works.
I am willing to use this as the solution, but I want to be certain that it actually is one and that I won't run into problems further down the line. Some understanding of why the problem occurs would also be nice, since I thought CGAL should be able to handle this with EPECK even if it is a degenerate case.
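For reference, a minimal sketch of the setup (the coordinates here are made up for illustration; in the real program the diamond falls until its tip exactly touches the square):

    #include <CGAL/Simple_cartesian.h>
    #include <CGAL/Gmpq.h>
    #include <CGAL/Polygon_2.h>
    #include <CGAL/Boolean_set_operations_2.h>

    // Kernel that works; swapping in EPECK reproduces the crash:
    // #include <CGAL/Exact_predicates_exact_constructions_kernel.h>
    // typedef CGAL::Exact_predicates_exact_constructions_kernel Kernel;
    typedef CGAL::Simple_cartesian<CGAL::Gmpq> Kernel;
    typedef Kernel::Point_2                    Point_2;
    typedef CGAL::Polygon_2<Kernel>            Polygon_2;

    int main()
    {
        // Square sitting on the x-axis (counterclockwise orientation).
        Polygon_2 square;
        square.push_back(Point_2(0, 0));
        square.push_back(Point_2(2, 0));
        square.push_back(Point_2(2, 2));
        square.push_back(Point_2(0, 2));

        // Diamond whose lower tip exactly touches the square's top edge.
        Polygon_2 diamond;
        diamond.push_back(Point_2(1, 2));
        diamond.push_back(Point_2(2, 3));
        diamond.push_back(Point_2(1, 4));
        diamond.push_back(Point_2(0, 3));

        return CGAL::do_intersect(diamond, square) ? 0 : 1;
    }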
Additional Info:
I have built it on three computers, with two MSVC compiler versions and two CGAL versions, all with comparable results.
MSVC: 14.10 & 14.12
CGAL: 4.10 & 4.12
GMP: 5.01
Boost: 1.67.0
Windows SDK: 10.0.17134 & 10.0.14393.0
Windows 10, 64-bit
I use C++ and odeint to solve a set of differential equations. I compile the code in Matlab using mex and g++ on Mac OS X. For some time everything worked perfectly, but now something curious is happening:
I can run the same simulation (with the same parameters) twice, and in one run the results are fine while in the other a whole column (or multiple columns) of the output is NaN (this also influences the other outputs, hinting at a problem occurring during the integration process).
I tried switching between various solvers. Now I am using runge_kutta_fehlberg78 with a fixed step size. (One possible cause of NaN output is that the initial step size of an adaptive solver is too large.)
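For illustration, the integration loop looks roughly like this (the right-hand side below is a placeholder, not my real system; the NaN check is only there to localize the first bad step):

    #include <boost/numeric/odeint.hpp>
    #include <array>
    #include <cmath>
    #include <cstdio>

    typedef std::array<double, 2> state_type;

    // Placeholder right-hand side; the real system is not shown here.
    void rhs(const state_type &x, state_type &dxdt, double /*t*/)
    {
        dxdt[0] =  x[1];
        dxdt[1] = -x[0];
    }

    int main()
    {
        boost::numeric::odeint::runge_kutta_fehlberg78<state_type> stepper;
        state_type x = {1.0, 0.0};
        const double dt = 0.01;
        for (double t = 0.0; t < 10.0; t += dt) {
            stepper.do_step(rhs, x, t, dt);
            // Abort at the first NaN so the failing step can be inspected.
            if (std::isnan(x[0]) || std::isnan(x[1])) {
                std::printf("NaN at t = %g\n", t);
                return 1;
            }
        }
        return 0;
    }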
What could be the cause of this? The randomness especially makes me wonder.
Edit: I am starting to suspect that the problem has something to do with Matlab. I compiled a version without the Matlab interface with Xcode as a normal executable, and so far I haven't had an issue with NaN results. It's hard to tell whether the problem is truly resolved, and I don't understand why that would resolve it.
I'm making a very simple program with C++ for Linux usage, and I'd like to know if it is possible to make just one big binary containing all the dependencies that would work on any Linux system.
If my understanding is correct, any compiler turns source code into machine instructions, but since there are often common parts of code that can be reused by different programs, most programs depend on other libraries.
However, if I have the source code for all my dependencies, shouldn't I be able to compile a binary in a way that does not require anything from the system? And will I be able to run something compiled on a 64-bit system on a 32-bit system?
In short: Maybe.
The longer answer is:
It depends. You can't, for example, run a 64-bit binary on a 32-bit system; that's not even nearly possible. Yes, it's the same processor family, but the 64-bit system has twice as many registers, and the registers are twice as wide. What's the 32-bit processor going to "give back" for values held in bits and registers that don't exist in its hardware? It just plain won't work. Some of the instructions also completely change meaning, so the system really needs to be "right" for the compiled code, or it won't work - fortunately, Linux will check this and plainly refuse to run the binary if it's not right.
You can BUILD a 32-bit binary on a 64-bit system (assuming you have all the right libraries installed for both 64-bit and 32-bit).
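For example, with a GCC toolchain that has the 32-bit multilib packages installed, the usual flag is -m32:

    // main.cpp - any ordinary program
    #include <cstdio>

    int main()
    {
        std::puts("hello");
        return 0;
    }

    // 64-bit build (the default):  g++ -o prog main.cpp
    // 32-bit build:                g++ -m32 -o prog main.cpp
    // On Debian/Ubuntu the 32-bit libraries come from gcc-multilib /
    // g++-multilib; "file ./prog" shows which kind of ELF binary you got.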
Similarly, if you try to run ARM code on an x86 processor, or MIPS code on an ARM processor, it simply has no chance of working, because the actual instructions are completely different (they would otherwise be in breach of some patent, copyright, or similar, because processor instruction sets contain portions that are "protected intellectual property" in nearly all cases - so designers have to make sure they do NOT do "the same as someone else's design"). As with 32-bit and 64-bit, you simply won't get a chance to run the wrong binary here; it just won't work.
Sometimes there are subtle differences; for example, ARM code can be compiled for "hard" or "soft" floating point. If the code is compiled for hard float and there isn't the right support in the OS, it won't run the binary. Worse yet, if you compile x86 code to use SSE instructions and try to run it on a non-SSE processor, the code will simply crash [unless you specifically build the code to "check for SSE, and display an error if it's not present"].
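A minimal sketch of such a guard, assuming GCC or Clang (whose __builtin_cpu_supports does the CPUID query for you); for the check itself to be reachable, this translation unit must be compiled without SSE code generation:

    #include <cstdio>
    #include <cstdlib>

    int main()
    {
        // Runs on any x86 CPU as long as this file itself is built
        // without SSE instructions.
        if (!__builtin_cpu_supports("sse"))
        {
            std::fprintf(stderr, "Sorry, this program requires SSE.\n");
            return EXIT_FAILURE;
        }
        std::puts("SSE present, safe to continue.");
        return EXIT_SUCCESS;
    }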
Even if you have a binary that passes the above criteria, the Linux system tends to change a tiny bit between releases, and different distributions have subtle "fixes" that change things. Most of the time these are completely benign (they fix some obscure corner case that someone found during testing, while the general, non-corner-case behaviour stays "normal"). However, if you go from Linux version 2.2 to Linux version 3.15, there will be some substantial differences between the two versions, and a binary from the old one may very well be incompatible with the newer (and almost certainly the other way around) - it's hard to know exactly which versions are and aren't compatible. Within releases that are close, it should work OK as long as you are not specifically relying on some feature that is present in only one of them (after all, new things ARE added to the Linux kernel from time to time). Here the answer is "maybe".
Note that the above also covers your implementation of the C and C++ runtime: if you have a "new" C or C++ runtime library that uses Linux kernel feature X, and you try to run it on an older kernel from before feature X was implemented (or before it worked correctly for the case the C or C++ runtime is trying to use it in), the binary will fail.
Static linking is indeed a good way to REDUCE the dependency on different releases. It is also a good way to make your binary huge, which may keep people from downloading it.
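As a rough illustration (the commands are the standard GCC ones; the sizes are ballpark figures, since a fully static C++ binary pulls in copies of libc and libstdc++):

    // hello.cpp
    #include <cstdio>

    int main()
    {
        std::puts("hello");
        return 0;
    }

    // Dynamic build:  g++ -o hello hello.cpp           -> a few KB
    // Static build:   g++ -static -o hello hello.cpp   -> often 1 MB or more
    // "ldd ./hello" on the static build prints "not a dynamic executable".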
Making the code open source is a much better way to solve this problem: you just distribute your source code and a list of "minimum requirements", and let other people deal with the recompiling.
In practice, it depends on "sufficiently simple". If you're using C++11, you'll quickly find that the C++11 libraries have dependencies on modern libc releases. In turn, those only ship with modern Linux distributions. I'm not aware of any "Long Term Support" Linux distribution which today (June 2014) ships with libc support for GCC 4.8.
The short answer is no, at least not without serious hacks.
Different Linux distributions may have different glue code between user space and the kernel. For instance, a hello world seemingly without dependencies built on Ubuntu cannot be executed under CentOS.
EDIT: Thanks for the comment. I re-verified this, and the cause was that I was using a 32-bit VM. Sorry for causing confusion. However, as noted above, the rule of thumb is that even the same Linux distribution may sometimes break compatibility in order to deploy a bugfix, so the conclusion stands.
The C++ projects I wrote, which run without any problems on Ubuntu, raise the error "vector subscript out of range" when I run them on Windows. I use Windows 7 and Visual C++ 2008 Express.
I hope that makes sense to someone.
The version of the STL shipped by Microsoft comes with checked iterators, which, when run in debug mode, ensure that your vector indices are in range. No such checking is done by GCC by default.
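A tiny example of the kind of bug those checks catch (the out-of-range index is deliberate):

    #include <iostream>
    #include <vector>

    int main()
    {
        std::vector<int> v(3, 0);   // valid indices: 0, 1, 2
        // Undefined behavior: GCC typically runs this without complaint,
        // while a VC++ debug build stops with "vector subscript out of range".
        std::cout << v[3] << std::endl;
        return 0;
    }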
Your code almost certainly contains undefined behavior. In such a case, an implementation is free to do almost anything it wants. It appears that gcc has basically ignored the problem, so it wasn't apparent. VC++ has built enough self-monitoring into your code to find the problem and tell you about it.
The next step is pretty much up to you: find the problem in your code and fix it. Unfortunately, since you haven't posted any of the code, it's essentially impossible to give more detailed advice about what you should do or how to do it. About the only hint I can offer is that the debugger in VC++ has a nice stack-trace feature, so if you run the code under the debugger and it fails, it's pretty easy to walk back up the stack to find the code that called the function (that called the function, etc.) where the problem was detected.
I have a serial Fortran code that works fine. When I compile the same code using ifort -parallel and run it, it gives wrong results and overflows. I would expect that with the -parallel flag the Intel compiler is capable of selecting the loops that are safe to parallelize, and that I should get exactly the same results as with the serial code, which did not happen. Even stranger: I went ahead and disabled parallelization of all the do loops in my code using !DEC$ NOPARALLEL, compiled the code using ifort -parallel to make sure that none of the loops was parallelized, and ran it. Surprisingly, I got the same wrong results and overflow, although the latter should be exactly equivalent to the serial code.
Can anyone explain this behaviour, or is it just an Intel compiler deficiency?
Sorry to say this, but it's unlikely to be an Intel compiler problem; it's a pretty good compiler (no, I don't work for Intel! but I do use their compilers).
Yes, I am capable of explaining this sort of behaviour, but without sight of your program anything I suggest will be wrong.
Answers were given to this identical question on the Intel Fortran Forum: http://software.intel.com/en-us/forums/topic/269743
EDIT: I revised the link, since, as stated in the comment, the original link is now dead.