I am using the pre-built OpenCV lib & DLL, version 3.4.3 Winpack (downloaded from the official site https://opencv.org/releases.html).
Until now everything worked fine, but recently my code started to crash.
One specific function causes the crash: cv::split(). It is a common utility function that extracts channels
from a cv::Mat array. The crash occurs only on a Xeon processor under Windows Server 2012. Regardless of preceding calls or context, the application crashes immediately on this call and closes.
On other processors the same .exe works without problems; the code is normally tested on Windows 10 with ordinary processors. I don't have a Xeon processor at hand to test every function, but the mentioned crash can be reproduced 100% of the time on a Xeon Gold machine. I have used quite a lot of other library functions there and they worked, so this is the first one that crashed.
It seems that some functions' runtime simply contains instructions that are incompatible with the Xeon processor, so it just crashes there.
Question: how do I know in advance whether a certain OpenCV function will work or not on a Xeon processor?
Currently I have just removed the cv::split() calls from my code and replaced them with cv::extractChannel(), which works fine on all tested platforms. I suspect one option would be to compile a custom version of the library and disable specific instructions, but that would require knowing what to disable, etc., so frankly I am not in the mood to maintain a
custom-compiled version for what seems to be a relatively 'standard' architecture (a Xeon processor).
What can you suggest to avoid these errors?
Maybe there is a list of OpenCV functions that are known to be 'special' (i.e. not Xeon-safe), so I can just avoid them?
Code example:
#include <opencv2/opencv.hpp>

int main ( int argc, char* argv[] )
{
    cv::Mat Patch = cv::imread ( "image.png", -1 );
    cv::Mat Patch_planes[4];
    cv::split ( Patch, Patch_planes );
    return 0;
}
Compiler command (Microsoft (R) C/C++ Optimizing Compiler Version 19.15.26732.1 for x64):
cl.exe "minim.cpp" /EHsc /W2 /I "c:\VCLIB\openCV-3.4.3" "c:\VCLIB\openCV-3.4.3\lib\opencv_world343.lib" /link /SUBSYSTEM:CONSOLE
How do I know in advance whether certain openCV function will work or not on a Xeon processor?
You don't. The compiler will use whatever instructions it deems most suitable to compile any particular piece of code, subject to the constraints given on the command line.
So to be safe (assuming it is an 'illegal instruction' error), you probably do need to compile OpenCV for the least capable processor you need to support, and then check the performance hit on the other processors. Either that, or check the CPU in your installer and install a version of OpenCV tailored to that processor. Yuk, I don't envy you.
A large, complex OS X C++/OpenGL codebase (i.e. impossible to get a small fragment of code that illustrates the issue) was originally compiled with Xcode 4.6 and gcc.
After building using Xcode 6.1 and clang, the application runs much more slowly - typically 20-75% more slowly.
Using Instruments, I identified an area where the slowdown appears to happen - calls to glDrawRangeElements() in the render loop when built with clang take 100x more time than those with gcc.
Specifically, the call to glDrawRangeElements_IMM_Exec() takes all the time vs glDrawRangeElements_ACC_Exec() which takes about the same time.
I've ruled out optimization differences between the two; both use -O3. I've also tried building against the same OpenGL framework from 10.7.
I have no idea where to start looking now; searching for those calls doesn't return anything useful.
Can anyone suggest some avenues to explore?
I have a small sorting program in C++ developed using Xcode on a 2.4 GHz i7 MacBook Pro (I didn't change any of the configurations, so Xcode is probably using LLVM as the compiler).
The program only performs very standard operations, like calculating sums over (parts of) lists (i.e. no explicit use of pointers or the like), and uses only standard types and vectors.
When I compile the same code using CL within Visual Studio 2010 on a 2.4 GHz i5 notebook, the runtime is significantly slower (by a factor of at least 100).
Are there any well-known performance issues with moving code from Xcode to VS like the one I just described?
I haven't changed much in Visual Studio 2010 either: are there some options for CL that need to be turned on or off to fix this?
Many thanks in advance.
The i7 and i5 processors have similar architectures. The two you speak of have the same clock rate but are different tiers, so an i5 and an i7 are not comparable on a benchmark like this. Only if you installed Windows on your Mac would you get a valid timing for both programs. The i7 is more powerful than the i5. @Jerry Coffin also has a point there.
Check out the difference between i5 and i7
In addition to @Jerry Coffin's comment, you need to use the Ctrl+F5 combination to run your code without the debugger (yes, the debugger attaches to release builds as well).
I have written a benchmark method to test my C++ program (which searches a game tree), and I am noticing that compiling with the "LLVM compiler 2.0" option in Xcode 4.0.2 gives me a significantly faster binary than compiling with the latest clang++ from MacPorts.
If I understand correctly I am using a clang front-end and llvm back-end in both cases. Has Apple made improvements to their clang/llvm distribution to produce faster binaries for Mac OS? I can't find much information about the project.
Here are the benchmarks my program produces for various compilers, all using -O3 optimization (higher is better):
(Xcode) "gcc 4.2": 38.7
(Xcode) "llvm gcc 4.2": 51.2
(Xcode) "llvm compiler 2.0": 50.6
g++-mp-4.6: 43.4
clang++: 40.6
Also, how do I compile from the terminal with the clang/LLVM that Xcode is using? I can't find the command.
EDIT: The scores I output are "thousands of games per second" which are calculated over a long enough run of the program. Scores are very consistent over multiple runs, and recent major algorithmic improvements have given me 1% - 5% speed ups, for example. A 25% speed up of 40 to 50 is huge for my program.
UPDATE: I wasn't invoking clang++ from the command line with -flto. Now when I compare clang++ -O3 -flto to /Developer/usr/bin/clang++ -O3 -flto from the command line the results are closer, but the Apple one is still 6.5% faster.
Now how to enable link time optimization for gcc? When I try g++ -flto I get the following error:
cc1plus: error: LTO support has not been enabled in this configuration
Apple LLVM Compiler should be available under /Developer/usr/bin/clang.
I can't think of any particular reason why MacPorts clang++ would generate slower code... I would check whether you're passing in comparable command-line options. One thing that would make a large difference is if you're producing 32-bit code with one compiler, and 64-bit code with the other.
If GCC has no LTO then you need to build it yourself:
http://solarianprogrammer.com/2012/07/21/compiling-gcc-4-7-1-mac-osx-lion/
For LTO you need to add 'libelf' to the instructions.
http://sourceforge.net/apps/trac/mingw-w64/wiki/LTO%20and%20GCC
The exact speed of an algorithm can depend on all kinds of things that are totally out of your and the compiler's control. You may have a loop whose execution time depends on precisely how the instructions are aligned in memory, in a way the compiler couldn't predict. I have seen cases where a loop could enter different "states" with different execution times per iteration (after a context switch it could enter a state where an iteration took either 12 or 13 cycles, rather randomly). This can all be coincidence.
You might also be using different libraries, which is quite possibly the reason: on Mac OS X, they are using a new and presumably faster implementation of std::string and std::vector, for example.
I have an AMD Opteron server running CentOS 5. I want to have a compiler for a fairly large C++ Boost based program. Which compiler I should choose?
There is an interesting PDF here which compares a number of compilers.
I hope this helps more than hurts :)
I did a little compiler shootout sometime over a year ago, and I am going off memory.
GCC 4.2 (Apple)
Intel 10
GCC 4.2 (Apple) + LLVM
I tested multiple template heavy audio signal processing programs that I'd written.
Compilation times: the Intel compiler was by far the slowest, more than "2x slower", as another poster cited.
GCC handled deep templates very well in comparison to Intel.
The Intel compiler generated huge object files.
GCC+LLVM yielded the smallest binary.
The generated code may have significant variance due to the program's construction, and where SIMD could be used.
For the way I write, I found that GCC+LLVM generated the best code. For programs I'd written before I took optimization seriously, Intel was generally better.
Intel's results varied: it handled some programs far better and some far worse. It handled raw processing very well, but I give GCC+LLVM the cake because, when put into the context of a larger (normal) program, it did better.
Intel won for out of the box, number crunching on huge data sets.
GCC alone generated the slowest code, though it can be as fast with measurement and nano-optimizations. I prefer to avoid those because the wind may change direction with the next compiler release, so to speak.
I never measured poorly written programs in this test (i.e. results outperformed distributions of popular performance libraries).
Finally, the programs were written over several years, using GCC as the primary compiler in that time.
Update: I was also enabling optimizations/extensions for Core2Duo. The programs were clean enough to enable strict aliasing.
The MySQL team once posted that icc gave them about a 10% performance boost over gcc. I'll try to find the link.
In general I've found that the 'native' compilers perform better than gcc on their respective platforms
edit: I was a little off. Typical gains were 20-30% not 10%. Some narrow edge cases got a doubling of performance. http://www.mysqlperformanceblog.com/files/presentations/LinuxWorld2004-Intel.pdf
I suppose it varies depending on the code, but with the codebase I am working on now, ICC 11.035 gives an almost 2x improvement over gcc 4.4.0 on a Xeon 5504.
icc options: -O2 -fno-alias
gcc options: -O3 -msse3 -mfpmath=sse -fargument-noalias-global
The options are specific to just the file containing the compute-intensive code, where I know there is no aliasing. Single-threaded code with a 5-level nested loop.
Although autovectorization is enabled, neither compiler generates vectorized code (not a fault of the compilers).
Update (2015/02/27):
While optimizing some geophysics code (Q2 2013) to run on Sandy Bridge-E Xeons, I had an opportunity to compare the performance of ICC 11.1 against GCC 4.8.0, and GCC was now generating faster code than ICC. The code made use of AVX intrinsics and 8-way vectorized instructions (neither compiler autovectorized the code properly due to certain data layout requirements). In addition, GCC's LTO implementation (with the IR core embedded in the .o files) was much easier to manage than ICC's. GCC with LTO was running roughly 3 times faster than ICC without LTO. I can't find the numbers right now for GCC without LTO, but I recall it was still faster than ICC. This is by no means a general statement on ICC's performance, but the results were sufficient for us to go ahead with GCC 4.8.*.
Looking forward to GCC 5.0 (http://www.phoronix.com/scan.php?page=article&item=gcc-50-broadwell)!
We use the Intel compiler on our product (DB2), on Linux and Windows IA32/AMD64, and on OS X (i.e. all our Intel platform ports except SunAMD).
I don't know the numbers, but the performance is good enough that we:
pay for the compiler, which I'm told is very expensive.
live with the 2x slower build times (primarily due to the time it spends acquiring licenses before it allows itself to run).
PHP - Compilation from source, with ICC rather than GCC, should result in a 10% to 20% speed improvement - http://www.papelipe.no/tags/ez_publish/benchmark_of_intel_compiled_icc_apache_php_and_apc
MySQL - Compilation from source, with ICC rather than GCC, should result in a 25% to 50% speed improvement - http://www.mysqlperformanceblog.com/files/presentations/LinuxWorld2005-Intel.pdf
I used to work on a fairly large signal processing system which ran on a large cluster. We used to reckon for heavy maths crunching, the Intel compiler gave us about 10% less CPU load than GCC. That's very unscientific but it was our experience (that was about 18 months ago).
What would have been interesting is if we'd been able to use Intel's math libraries as well which use their chipset more efficiently.
I used UnixBench (v. 5.1.3) on an openSUSE 12.2 (kernel 3.4.33-2.24-default x86_64), and compiled it first with GCC, and then with Intel's compiler.
With 1 parallel copy, UnixBench compiled with Intel's is about 20% faster than the version compiled with GCC.
However this hides huge differences. Dhrystone is about 25% slower with Intel compiler, while Whetstone runs 2x faster.
With 4 copies of UnixBench running in parallel, the improvement of Intel compiler over GCC is only 7%. Again Intel is much better at Whetstone (> 200%), and slower at Dhrystone (about 20%).
Many optimizations which the Intel compiler performs routinely require specific source syntax and the use of -O3 -ffast-math for gcc. Unfortunately, the -funsafe-math-optimizations component of -ffast-math -O3 -march=native has turned out to be incompatible with -fopenmp, so I have to split my source files into groups, named according to the different options, in the Makefile. Today I ran into a failure where a g++ build using -O3 -ffast-math -fopenmp -march=native was able to write to the screen but could not redirect to a file.
One of the more egregious differences, in my opinion, is that icpc optimizes std::max and std::min directly, while gcc/g++ want fmax/fmin[f] together with -ffast-math, which changes their meaning away from the standard.