Dynamically re-linking library optimized for target CPU - c++

I have a library that I would like to compile for different CPU generations (all x86). Say I want one build to make use of all the instructions Skylake has to offer, a fallback version for Haswell, one for Sandy Bridge, and so on. I know I can achieve this with -march=xyz; however, I also want to make the library easy to use. Ideally the user would only have to link against a stub library which then dynamically loads in the correct library optimized for the target's CPU. I know I can achieve this by exporting function pointers in the loader library which are then populated correctly, but that doesn't really lend itself to a C++ application, since I can't export class methods, for example.
Is there any way to force the dynamic linker to re-do the relocation? Ideally the loader library would either provide the most conservative fallback version of the code, one that runs on all CPUs supported by the library, but could then load a more specialized version, or it really would just be a stub library that gets replaced at runtime. Either way, ideally the user would only ever have to link against one dynamic library, and the rest would happen automatically at runtime.
Ideally I'd like an answer that works both for Linux and OS X, but I'll also take one that's only applicable to one of them.
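(Aside: on Linux, the GNU ifunc mechanism does essentially this per symbol: the dynamic linker calls a resolver function once, at relocation time, and binds the symbol to whichever implementation the resolver returns, so callers link against an ordinary symbol rather than a function pointer. A minimal sketch with hypothetical names, assuming GCC or Clang on x86 Linux with glibc; GCC's target_clones attribute can automate the same pattern:)

// ifunc_sketch.cpp - hedged sketch, GCC/Clang on x86 Linux (ELF/glibc) only.
extern "C" { typedef long (*compute_fn)(const long*, int); }
extern "C" compute_fn resolve_compute();

// The exported symbol "compute" is bound once, at relocation time,
// to whichever implementation the resolver picks.
extern "C" long compute(const long* v, int n) __attribute__((ifunc("resolve_compute")));

static long compute_generic(const long* v, int n) {
    long s = 0;
    for (int i = 0; i < n; ++i) s += v[i];
    return s;
}

__attribute__((target("avx2")))
static long compute_avx2(const long* v, int n) {
    long s = 0;
    for (int i = 0; i < n; ++i) s += v[i];   // the compiler may auto-vectorize this for AVX2
    return s;
}

// Runs before ordinary relocations are complete, so it must not call into
// other shared libraries; __builtin_cpu_init() is required here.
extern "C" compute_fn resolve_compute() {
    __builtin_cpu_init();
    return __builtin_cpu_supports("avx2") ? compute_avx2 : compute_generic;
}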

Related

Understanding static & dynamic library linking [duplicate]

Are there any compelling performance reasons to choose static linking over dynamic linking or vice versa in certain situations? I've heard or read the following, but I don't know enough on the subject to vouch for its veracity.
1) The difference in runtime performance between static linking and dynamic linking is usually negligible.
2) (1) is not true if using a profiling compiler that uses profile data to optimize program hotpaths because with static linking, the compiler can optimize both your code and the library code. With dynamic linking only your code can be optimized. If most of the time is spent running library code, this can make a big difference. Otherwise, (1) still applies.
Dynamic linking can reduce total resource consumption (if more than one process shares the same library (including the version in "the same", of course)). I believe this is the argument that drives its presence in most environments. Here "resources" include disk space, RAM, and cache space. Of course, if your dynamic linker is insufficiently flexible there is a risk of DLL hell.
Dynamic linking means that bug fixes and upgrades to libraries propagate to improve your product without requiring you to ship anything.
Plugins always call for dynamic linking.
Static linking means that you can know the code will run in very limited environments (early in the boot process, or in rescue mode).
Static linking can make binaries easier to distribute to diverse user environments (at the cost of sending a larger and more resource-hungry program).
Static linking may allow slightly faster startup times, but this depends to some degree on both the size and complexity of your program and on the details of the OS's loading strategy.
Some edits to include the very relevant suggestions in the comments and in other answers. I'd like to note that the way you come down on this depends a lot on what environment you plan to run in. Minimal embedded systems may not have enough resources to support dynamic linking. Slightly larger small systems may well support dynamic linking because their memory is small enough to make the RAM savings from dynamic linking very attractive. Full-blown consumer PCs have, as Mark notes, enormous resources, and you can probably let the convenience issues drive your thinking on this matter.
To address the performance and efficiency issues: it depends.
Classically, dynamic libraries require some kind of glue layer, which often means double dispatch or an extra layer of indirection in function addressing and can cost a little speed (but is function-calling time actually a big part of your running time???).
However, if you are running multiple processes which all call the same library a lot, you can end up saving cache lines (and thus winning on running performance) when using dynamic linking relative to static linking. (Unless modern OSes are smart enough to notice identical segments in statically linked binaries. Seems hard; does anyone know?)
Another issue: loading time. You pay loading costs at some point. When you pay this cost depends on how the OS works as well as what linking you use. Maybe you'd rather put off paying it until you know you need it.
Note that static-vs-dynamic linking is traditionally not an optimization issue, because they both involve separate compilation down to object files. However, this is not required: a compiler can in principle, "compile" "static libraries" to a digested AST form initially, and "link" them by adding those ASTs to the ones generated for the main code, thus empowering global optimization. None of the systems I use do this, so I can't comment on how well it works.
The way to answer performance questions is always by testing (and use a test environment as much like the deployment environment as possible).
1) is based on the fact that calling a DLL function always involves an extra indirect jump. Today, this is usually negligible. Inside the DLL there is some more overhead on i386 CPUs, because they can't generate position-independent code. On amd64, jumps can be relative to the program counter, so this is a huge improvement.
2) This is correct. With optimizations guided by profiling you can usually win about 10-15 percent performance. Now that CPU speed has reached its limits, it might be worth doing.
I would add: (3) the linker can arrange functions in a more cache-efficient grouping, so that expensive cache-level misses are minimised. It may also especially affect the startup time of applications (based on results I have seen with the Sun C++ compiler).
And don't forget that with DLLs no dead-code elimination can be performed. Depending on the language, the DLL code might not be optimal either. Virtual functions are always virtual because the compiler doesn't know whether a client is overriding them.
For these reasons, if there is no real need for DLLs, just use static linking.
EDIT (to answer the comment, by user underscore)
Here is a good resource about the position independent code problem http://eli.thegreenplace.net/2011/11/03/position-independent-code-pic-in-shared-libraries/
As explained there, AFAIK x86 does not have PC-relative addressing for anything other than jump ranges of about 15 bits, and not for unconditional jumps and calls. That's why functions (from generators) larger than 32K have always been a problem and needed embedded trampolines.
But on a popular x86 OS like Linux, you do not need to care whether the .so/DLL file is generated with the gcc switch -fpic (which enforces the use of the indirect jump tables). If you don't, the code is just fixed up the way a normal linker would relocate it. But doing this makes the code segment non-shareable: it needs a full mapping of the code from disk into memory, with all of it touched before it can be used (emptying most of the caches, hitting TLBs), etc. There was a time when this was considered slow.
So you would not have any benefit anymore.
I do not recall what OS (Solaris or FreeBSD) gave me problems with my Unix build system because I just wasn't doing this and wondered why it crashed until I applied -fPIC to gcc.
Dynamic linking is the only practical way to meet some license requirements such as the LGPL.
I agree with the points dnmckee mentions, plus:
Statically linked applications might be easier to deploy, since there are fewer or no additional file dependencies (.dll / .so) that might cause problems when they're missing or installed in the wrong place.
One reason to do a statically linked build is to verify that you have full closure for the executable, i.e. that all symbol references are resolved correctly.
As a part of a large system that was being built and tested using continuous integration, the nightly regression tests were run using a statically linked version of the executables. Occasionally, we would see that a symbol would not resolve and the static link would fail even though the dynamically linked executable would link successfully.
This was usually occurring when symbols that were deep seated within the shared libs had a misspelt name and so would not statically link. The dynamic linker does not completely resolve all symbols, irrespective of using depth-first or breadth-first evaluation, so you can finish up with a dynamically linked executable that does not have full closure.
1/ I've been on projects where dynamic linking vs. static linking was benchmarked, and the difference wasn't found small enough to switch to dynamic linking (I wasn't part of the test; I just know the conclusion).
2/ Dynamic linking is often associated with PIC (Position Independent Code: code which doesn't need to be modified depending on the address at which it is loaded). Depending on the architecture, PIC may bring another slowdown, but it is needed in order to get the benefit of sharing a dynamically linked library between two executables (and even two processes of the same executable, if the OS uses randomization of load addresses as a security measure). I'm not sure all OSes allow separating the two concepts, but Solaris and Linux do, and ISTR that HP-UX does as well.
3/ I've been on other projects which used dynamic linking for the "easy patch" feature. But this "easy patch" makes the distribution of a small fix a little easier and of a complicated one a versioning nightmare. We often ended up having to push everything, plus having to track problems at customer sites because the wrong version was taken.
My conclusion is that I'd use static linking except:
for things like plugins which depend on dynamic linking
when sharing is important (big libraries used by multiple processes at the same time like C/C++ runtime, GUI libraries, ... which often are managed independently and for which the ABI is strictly defined)
If one wants to use the "easy patch" feature, I'd argue that the libraries have to be managed like the big libraries above: they must be nearly independent, with a defined ABI that must not be changed by fixes.
Static linking is a compile-time process in which the linked content is copied into the primary binary, producing a single binary.
Cons:
compile time is longer
output binary is bigger
Dynamic linking is a run-time process in which the linked content is loaded. This technique allows you to:
upgrade the linked binary without recompiling the primary one, which improves ABI stability
keep a single shared copy
Cons:
start time is slower (the linked content has to be loaded)
linker errors surface at runtime
See: iOS Static vs Dynamic framework.
It is pretty simple, really. When you make a change in your source code, do you want to wait 10 minutes for it to build or 20 seconds? Twenty seconds is all I can put up with. Beyond that, I either get out the sword or start thinking about how I can use separate compilation and linking to bring it back into the comfort zone.
The best example for dynamic linking is when the library depends on the hardware in use. In ancient times, it was decided that the C math library would be dynamic, so that each platform could use all of its processor's capabilities to optimize it.
An even better example might be OpenGL. OpenGL is an API that is implemented differently by AMD and NVidia, and you are not able to use an NVidia implementation on an AMD card, because the hardware is different. For that reason you cannot link OpenGL statically into your program. Dynamic linking is used here to let the API be optimized for all platforms.
Dynamic linking requires extra time for the OS to find the dynamic library and load it. With static linking, everything is together and it is a one-shot load into memory.
Also, see DLL Hell. This is the scenario where the DLL that the OS loads is not the one that came with your application, or the version that your application expects.
On Unix-like systems, dynamic linking can make life difficult for 'root' to use an application with the shared libraries installed in out-of-the-way locations. This is because the dynamic linker generally won't pay attention to LD_LIBRARY_PATH or its equivalent for processes with root privileges. Sometimes, then, static linking saves the day.
Alternatively, the installation process has to locate the libraries, but that can make it difficult for multiple versions of the software to coexist on the machine.
Another issue not yet discussed is fixing bugs in the library.
With static linking, you not only have to rebuild the library, but you also have to relink and redistribute the executable. If the library is used in just one executable, this may not be an issue. But the more executables that need to be relinked and redistributed, the bigger the pain.
With dynamic linking, you just rebuild and redistribute the dynamic library and you are done.
Static linking includes the files that the program needs in a single executable file.
Dynamic linking is what you would consider the usual: it makes an executable that still requires DLLs and such to be in the same directory (or the DLLs could be in the system folder).
(DLL = dynamic link library)
Dynamically linked executables are compiled faster and aren't as resource-heavy.
Static linking gives you a single exe; in order to make a change you need to recompile your whole program. With dynamic linking, you only need to make the change to the DLL, and when you run your exe, the changes are picked up at runtime. It's easier to provide updates and bug fixes via dynamic linking (e.g., Windows).
There are a vast and increasing number of systems where an extreme level of static linking can have an enormous positive impact on applications and system performance.
I refer to what are often called "embedded systems", many of which are now increasingly using general-purpose operating systems, and these systems are used for everything imaginable.
An extremely common example is devices using GNU/Linux systems with BusyBox. I've taken this to the extreme with NetBSD by building a bootable i386 (32-bit) system image that includes both a kernel and its root filesystem, the latter of which contains a single static-linked binary (built with crunchgen) with hard links to all the programs, and which itself contains all (well, at last count, 274) of the standard full-feature system programs (most everything except the toolchain), and it is less than 20 megabytes in size (and probably runs very comfortably in a system with only 64MB of memory, even with the root filesystem uncompressed and entirely in RAM, though I've been unable to find one so small to test it on).
It has been mentioned in earlier posts that the start-up time of static-linked binaries is faster (and it can be a lot faster), but that is only part of the picture, especially when all object code is linked into the same file, and even more so when the operating system supports demand paging of code directly from the executable file. In this ideal scenario the startup time of programs is literally negligible, since almost all pages of code will already be in memory and in use by the shell (and by init and any other background processes that might be running), even if the requested program has never been run since boot, since perhaps only one page of memory needs to be loaded to fulfill the runtime requirements of the program.
However, that's still not the whole story. I also usually build and use NetBSD operating system installs for my full development systems by static-linking all binaries. Even though this takes tremendously more disk space (~6.6GB total for x86_64 with everything, including the toolchain and X11 static-linked; especially if one keeps full debug symbol tables available for all programs, another ~2.5GB), the result still runs faster overall, and for some tasks even uses less memory than a typical dynamic-linked system that purports to share library code pages. Disk is cheap (even fast disk), and memory to cache frequently used disk files is also relatively cheap, but CPU cycles really are not, and paying the ld.so startup cost for every process, every time it starts, will take hours and hours of CPU cycles away from tasks which require starting many processes, especially when the same programs are used over and over, such as compilers on a development system. Static-linked toolchain programs can reduce whole-OS multi-architecture build times for my systems by hours. I have yet to build the toolchain into my single crunchgen'ed binary, but I suspect when I do there will be more hours of build time saved because of the win for the CPU cache.
Another consideration is the number of object files (translation units) that you actually consume in a library vs the total number available. If a library is built from many object files, but you only use symbols from a few of them, this might be an argument for favoring static linking, since you only link the objects that you use when you static link (typically) and don't normally carry the unused symbols. If you go with a shared lib, that lib contains all translation units and could be much larger than what you want or need.

Load-time dynamic link library dispatching

I'd like my Windows application to be able to reference an extensive set of classes and functions wrapped inside a DLL, but I need to be able to guide the application into choosing the correct version of this DLL before it's loaded. I'm familiar with using dllexport / dllimport and generating import libraries to accomplish load-time dynamic linking, but I cannot seem to find any information on the interwebs with regard to finding some kind of entry-point function into the import library itself, so that I can, specifically, use CPUID to detect the host CPU configuration and decide to load a particular DLL based on that information. Even more specifically, I'd like to build two versions of a DLL, one built with /ARCH:AVX that takes full advantage of SSE through AVX instructions, and another that assumes nothing newer than SSE2 is available.
One requirement: Either the DLL must be linked at load-time, or there needs to be a super easy way of manually binding the functions referenced from outside the DLL, and there are many, mostly wrapped inside classes.
Bonus question: Since my libraries will be cross-platform, is there an equivalent for Linux based shared objects?
I recommend that you avoid dynamic resolution of your DLL from your executable if at all possible, since it is just going to make your life hard, especially since you have a lot of exposed interfaces and they are not pure C.
Possible Workaround
Create a "chooser" process that presents the necessary UI for deciding which DLL you need, or maybe it can even determine it automatically. Let that process move whatever DLL has been decided on into the standard location (and name) that your main executable is expecting. Then have the chooser process launch your main executable; it will pick up its DLL from your standard location without having to know which version of the DLL is there. No delay loading, no wonkiness, no extra coding; very easy.
If this just isn't an option for you, then here are your starting points for delay loading DLLs. It's a much rockier road.
Windows
LoadLibrary() to get the DLL in memory: https://msdn.microsoft.com/en-us/library/windows/desktop/ms684175(v=vs.85).aspx
GetProcAddress() to get pointer to a function: https://msdn.microsoft.com/en-us/library/windows/desktop/ms683212(v=vs.85).aspx
OR possibly special delay-loaded DLL functionality using a custom helper function, although there are limitations and potential behavior changes.. never tried this myself: https://msdn.microsoft.com/en-us/library/151kt790.aspx (suggested by Igor Tandetnik and seems reasonable).
Linux
dlopen() to get the SO in memory: http://pubs.opengroup.org/onlinepubs/009695399/functions/dlopen.html
dlsym() to get a pointer to a function: http://man7.org/linux/man-pages/man3/dlsym.3.html
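For illustration, a minimal sketch of that pattern (the library names and the exported compute function are hypothetical; __builtin_cpu_supports is GCC/Clang-only, so with MSVC you would detect AVX via __cpuidex plus XGETBV instead):

// dispatch_sketch.cpp - hedged sketch: load the DLL/SO variant matching the CPU.
#ifdef _WIN32
#include <windows.h>
#else
#include <dlfcn.h>
#endif
#include <cstdio>

// Each variant library is assumed to export:
//   extern "C" void compute(const double* in, double* out, int n);
typedef void (*compute_fn)(const double*, double*, int);

int main() {
    const bool have_avx = __builtin_cpu_supports("avx");  // GCC/Clang builtin
    compute_fn compute = nullptr;
#ifdef _WIN32
    HMODULE lib = LoadLibraryA(have_avx ? "mylib_avx.dll" : "mylib_sse2.dll");
    if (lib) compute = reinterpret_cast<compute_fn>(GetProcAddress(lib, "compute"));
#else
    void* lib = dlopen(have_avx ? "./libmylib_avx.so" : "./libmylib_sse2.so", RTLD_NOW);
    if (lib) compute = reinterpret_cast<compute_fn>(dlsym(lib, "compute"));
#endif
    if (!compute) { std::fprintf(stderr, "failed to load a compute variant\n"); return 1; }
    double in[4] = {1, 2, 3, 4}, out[4];
    compute(in, out, 4);
    return 0;
}

Note that this binds plain-C entry points; C++ classes behind them would be reached through factory functions returning interface pointers.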
To add to qexyn's answer, one can mimic delay loading on Linux by generating a small static stub library which dlopens the real library on the first call to any of its functions and then forwards execution to it. Such a stub library can be generated automatically by a custom project-specific script or by Implib.so:
# Generate stub
$ implib-gen.py libxyz.so
# Link it instead of -lxyz
$ gcc myapp.c libxyz.tramp.S libxyz.init.c

C/C++ Static vs dynamic libraries example

I'm learning about static and dynamic libraries. So far I understand why I would need a dynamic library: in case something changes, it's good to plug in a newer version and have all the applications update automatically without even noticing.
a) Great for plugins,
b) multiple apps using the same library and
c) maintenance when you need to correct errors.
However, why would anyone use a static library? I mean, what's the advantage? Does somebody have an example so I can understand it better? Is it to make a product proprietary?
EDIT: Due to the confusion in the comments: I understand what a static library is, and I also know how it differs from a dynamic library. It was just beyond me why anyone would use a static library instead of just the source itself. I think I'm now starting to understand that a static library offers the following advantages:
a) better code maintenance
b) faster compiling times
There is another difference between static and dynamic libraries which may become important in some situations; I am surprised that nobody has mentioned it.
When a static library is linked, the symbols (e.g. function names) are resolved at link time, so a call to a library function resolves to a direct call to an address in the final executable.
With dynamic library, this happens during the run-time, when the library is loaded into the process space (often during process start-up). The symbols must be mapped into the process's address space. Depending on the number of symbols, which can be surprisingly large, and number of libraries loaded at the start-up, the delay can be quite tangible.
There is this excellent in-depth guide on dynamic libraries on Linux - How To Write Shared Libraries. It is way too detailed for most of us, but even skimming through it gives you many surprising insights. For instance, it says that release 1.0 of OpenOffice had to do more than 1.5 million string comparisons during launch!
A way to get a feeling for this is to set LD_DEBUG to symbols and LD_DEBUG_OUTPUT to some file, run a program, and look at the file to see the activity that goes on at startup, e.g. LD_DEBUG=symbols LD_DEBUG_OUTPUT=ld.log ./myprog.
Compilers can do all sorts of additional optimizations with static libraries that they cannot do with dynamic libraries. For example, a compiler can delete unused functions from static libraries. It wouldn't know to do that in a dynamic library. But there are even more advanced optimizations. The compiler can pull code from a static library function into the main program, which will eliminate function calls. Very smart compilers can do even more. The sky is really the limit with static libraries, but dynamic libraries make much of this much harder or impossible.
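As a hedged illustration of the dead-code-elimination and inlining points (file and function names are hypothetical; -flto is the standard GCC/Clang link-time-optimization flag):

// mathbits.cpp - part of a hypothetical static library libmathbits.a
// Build:  g++ -O2 -flto -c mathbits.cpp && ar rcs libmathbits.a mathbits.o
long doubled(long x) { return 2 * x; }   // used by main(): a candidate for inlining
long tripled(long x) { return 3 * x; }   // never referenced: can be dropped entirely

// main.cpp
// Link:   g++ -O2 -flto main.cpp libmathbits.a
long doubled(long x);
int main() { return static_cast<int>(doubled(21)); }
// With -flto the linker sees both sides, so it can inline doubled() into main()
// and discard tripled(); a shared libmathbits.so must keep every exported symbol.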
Probably the more practical reason however, is that static linking is the default for most library compilers so many people just end up using it. To create a dynamic library, you normally have to create an additional file which exposes certain functions. Although the file tends to be relatively simple, if you don't take the time to do it, then your libraries all end up being static.
As mentioned in another post, managing dependencies with static libraries tends to be easier simply because you have everything under your control. You may not know what dll/so is installed on the user's system.
A static library is basically a ZIP of object files. The only advantage it has over just distributing the object files is that it's a single file that encompasses your whole library. Users can use your headers and the lib to build their applications.
Because it's just a ZIP of object files, anything the compiler does to object files also works with static libraries, for example, dead code elimination and whole program optimization (also called Link-Time Code Generation). The compiler won't include bits of the shared library in the final program that are unused, unlike dynamic libraries.
With some build systems, it makes link seams easier too. E.g. for MSVC++, I'll often have a "production" EXE project, a "testing" EXE project, and put the common stuff in a static library. That way, I don't have to rebuild all the common stuff when I do builds.
1.) Shared libraries require position independent code (-fpic or -fPIC for libraries) and position independent code requires setup. This makes for larger code; for example:
*Part of this is due to compiler inefficiencies as discussed here
long realfoo(long, long);

long foo(long x, long y){
    return realfoo(x, y);
}

//static
foo:
        jmp     realfoo

//shared (-fpic code)
foo:
        pushl   %ebx
        call    __x86.get_pc_thunk.bx
        addl    $_GLOBAL_OFFSET_TABLE_, %ebx
        subl    $16, %esp
        pushl   28(%esp)        # y
        pushl   28(%esp)        # x
        call    realfoo@PLT
        addl    $24, %esp
        popl    %ebx
        ret
__x86.get_pc_thunk.bx:
        movl    (%esp), %ebx
        ret
Using the previous example, realfoo() could be considered for inlining in a static build if proper optimization is enabled and supported, because the compiler has direct access to the object in the archive file (libfoo.a). You can think of the .a file as a pseudo-directory containing the individual object files.
The "not having to recompile the whole binary for a bug in a library" cuts both ways. If your binary doesn't use the offending bug, it was probably compiled out of the static binary and replacing a shared library with code from the latest trunk with the fix may introduce (multiple) other as yet unreported bugs.
Initial startup time. Many shared-library proponents suggest that using shared libraries will reduce startup times, because other programs are already using them and the binary is (sometimes) smaller. In practice, this is rarely true, with the exception of some really basic X11 apps. Anecdotally, my startup time for X went down to about 1/10th of a second from over 5s by switching to a static build with musl-libc and tinyX11 from the stock glibc+X11 shared build. In practice, most static binaries end up starting faster because they don't need to resolve all the (possibly unused) symbols in every dependent library. Furthermore, subsequent calls to the same binary get the same preloading benefits as shared libraries.
Use a dynamic library when the library is bad for static builds. For example, gnu-libc (aka glibc) is notoriously bad at static builds, with a basic hello world weighing in at close to 1MB, while musl-libc, diet-libc and uclibc all build hello world at around 10kB. On top of that, glibc omits some key (mostly network-related) functionality in static builds.
Use a shared library if the library relies on "plugins" for key functionality. GTK icon loading and a few other features, for instance, used to be buildable with the plugins built in, but that has long been broken, so the only recourse is to use shared libraries. Some C libraries (musl, for example) won't support loading arbitrary libraries unless they are built as a shared library, so if you need to load a library on the fly as a "plugin" or "module", you may at least need the C library to be shared. A workaround for static builds is to use function pointers instead and, for the specific case of GTK, a better, more stable GUI toolkit. If you are building an extendable programming language like Perl or Python, you're going to need shared-library capabilities so that your optimized plugins can be written in a compiled language.
Use a shared library if you absolutely need to use a library with a license that is incompatible with static building. (AGPL, GPL, LGPL without a static link clause) ... and of course if/when you don't have the source code.
Use static builds when you want more aggressive optimization. Many well-seasoned libraries like libX11 were written when cdecl was the primary calling convention. Consequently, many X11 functions take a boatload of parameters in the same order as the struct that the function manipulates (rather than just a pointer to the struct), which makes sense with the cdecl calling convention since, theoretically, you could just move the stack pointer and call the function. However, this breaks down as soon as some number of registers are used for parameter passing. With static builds, some of these unintentional consequences can be mitigated via inlining and link-time optimization. This is just one example; many, many other optimizations crop up as function boundaries are erased.
Memory security can go either way. With static binaries, you aren't susceptible to LD_PRELOAD attacks (a static binary may export only one symbol: the entry point), but shared executables (and static-pie, where supported) can have address-space randomization, which for static builds is only available on architectures that support position-independent executables (though most common architectures do support PIE now).
Shared libraries can (sometimes) produce smaller binaries, so if you are using shared libraries that will be on the system anyhow, you can reduce package size (and thus server load) a bit. However, if you are going to have to ship the shared libraries anyhow, the combined size of the libraries and the binary will always be larger than the static binary, barring some crazy aggressive inlining. A good compromise is to use shared common system libraries (for instance libc, X11, zlib, png, glib and gtk/qt/other-default-toolkit) while using static libraries for uncommon dependencies or libraries specific to your package. The Chrome browser does this to some extent. A good way to determine the shared/static threshold in your target Linux distro is to iterate over */bin and */sbin in the default install with ldd or objdump -x and parse the output through sort and uniq to determine a good cutoff. If you are distributing multiple binaries that use mostly the same libraries, a bunch of static builds will increase bloat, but you can consider using a multicall binary (as in busybox, toybox, mupdf or netpbm).
Use static builds if you want to avoid "DLL hell" or for cross-distro compatibility. A static binary built on one distro should work on any other distro that is reasonably up to date (mostly a matter of kernel versions, to avoid trying to use unsupported syscalls).
For more info on the advantages of static linking see: stali (short for static linux)
For some shared-library propaganda from the former maintainer of glibc, see Ulrich Drepper's "Static Linking Considered Harmful", though roughly half of the problems with static linking that he mentions are glibc-specific (diet, musl and uclibc don't have the same problems).
If you think static libraries don't quite go far enough toward enabling full optimizations, check out Sean Barrett's list of single file libraries. These can often be configured to enable all functions to be static so that the binary can be built as a single compilation unit. This enables several optimizations that can't even be achieved with link time optimization, but you better have an IDE that supports code folding.
Static libraries are good when you want a small package without any problems from conflicting DLLs.
Also, the time to load and initialize libraries is reduced a lot with static linking.
But as a disadvantage, you may notice some increase in binary size.
Static libraries on Wikipedia
If you need a number of small programs which use only a very small, but sometimes slightly different, part of a huge library (as often happens with large open-source libraries), it might be better not to build a large number of small dynamic libraries, as they will become hard to manage. In this case, it might be a good idea to statically link only the parts you need.
With dynamic libraries, if all the libraries aren't present, the application doesn't run. So if you have a partition that holds your libraries and it becomes unavailable, then so does the application. If an application uses static libraries, then the libraries are always present, so nothing prevents the application from working. This generally helps when you are starting your system in maintenance mode.
On a Solaris system, for example, commands that you may need to run in the event that some partitions are not present are stored under /sbin. Historically these were statically linked (hence the common gloss of 'sbin' as 'static binaries', though the name is usually read as 'system binaries'). If a partition is unavailable, these apps will still work.

Catching calls from a program in runtime and mapping them to other calls

A program usually depends on several libraries and might sometimes depend on other programs as well. I look at projects like Wine and wonder: how do they figure out what calls a program is making?
In a Linux environment, what are the approaches used to know what calls an executable is making in runtime in order to catch and map them to other calls?
Any code snippets or references to resources for extra reading is greatly appreciated :)
On Linux you're looking for the LD_PRELOAD environment variable. This will load your libraries before any requested by the program. If you provide a function definition that matches one loaded by the target program then your version will be called instead.
However, you can't really detect what functions a program is calling. What you can do is get all the functions in a shared library and implement all of those. You aren't really catching the functions; you are simply reimplementing them.
Projects like Wine do this in some cases, but not in all. They also rewrite some of the dynamic libraries: when a Win32 program loads some DLL, it is actually loading the Wine version and not the native one. This is essentially the same concept of replacing the functions with your own.
Lookup LD_PRELOAD for more information.
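A minimal interposition sketch along these lines (the target program is hypothetical; puts is just a convenient libc function to intercept):

// shim.cpp - hedged sketch of LD_PRELOAD interposition on Linux (GCC/Clang).
// Build:  g++ -shared -fPIC -o libshim.so shim.cpp -ldl
// Run:    LD_PRELOAD=./libshim.so ./some_target_program
#include <dlfcn.h>   // RTLD_NEXT needs _GNU_SOURCE, which g++ defines by default
#include <cstdio>

extern "C" int puts(const char* s) {
    // RTLD_NEXT finds the "real" puts in the next object in search order (libc).
    using puts_fn = int (*)(const char*);
    static puts_fn real_puts = reinterpret_cast<puts_fn>(dlsym(RTLD_NEXT, "puts"));
    std::fprintf(stderr, "[shim] puts called\n");
    return real_puts(s);
}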

Compile and optimize for different target architectures

Summary: I want to take advantage of compiler optimizations and processor instruction sets, but still have a portable application (running on different processors). Normally I could indeed compile 5 times and let the user choose the right one to run.
My question is: how can I automate this, so that the processor is detected at runtime and the right executable is executed without the user having to choose it?
I have an application with a lot of low level math calculations. These calculations will typically run for a long time.
I would like to take advantage of as much optimization as possible, preferably also of (not always supported) instruction sets. On the other hand I would like my application to be portable and easy to use (so I would not like to compile 5 different versions and let the user choose).
Is there a possibility to compile 5 different versions of my code and run dynamically the most optimized version that's possible at execution time? With 5 different versions I mean with different instruction sets and different optimizations for processors.
I don't care about the size of the application.
At this moment I'm using gcc on Linux (my code is in C++), but I'm also interested in this for the Intel compiler and for the MinGW compiler for compilation to Windows.
The executable doesn't have to be able to run on different OS'es, but ideally there would be something possible with automatically selecting 32 bit and 64 bit as well.
Edit: Please give clear pointers how to do it, preferably with small code examples or links to explanations. From my point of view I need a super generic solution, which is applicable on any random C++ project I have later.
Edit: I assigned the bounty to ShuggyCoUk; he had a great number of pointers to look out for. I would have liked to split it between multiple answers, but that is not possible. I haven't implemented this yet, so the question is still 'open'! Please keep adding and/or improving answers, even though there is no bounty to be given anymore.
Thanks everybody!
Yes, it's possible. Compile all your differently optimised versions as different dynamic libraries with a common entry point, and provide an executable stub that loads and runs the correct library at run-time, via the entry point, depending on a config file or other information.
Can you use a script?
You could detect the CPU with a script and dynamically load the executable that is most optimized for the architecture. It can choose between 32- and 64-bit versions too.
If you are using Linux, you can query the CPU with
cat /proc/cpuinfo
You could probably do this with a bash/perl/python script, or with Windows Scripting Host on Windows. You probably don't want to force the user to install a script engine; one that works on the OS out of the box would IMHO be best.
In fact, on Windows you would probably want to write a small C# app so you can more easily query the architecture. The C# app could just spawn whichever executable is fastest.
Alternatively, you could put your different versions of code in DLLs or shared objects, then dynamically load them based on the detected architecture. As long as they have the same call signature, it should work.
If you wish this to work cleanly on Windows and take full advantage, on 64-bit capable platforms, of the additional 1. address space and 2. registers (likely of more use to you), you must at a minimum have a separate process for the 64-bit ones.
You can achieve this by having a separate executable with the relevant PE64 header. Simply using CreateProcess will launch it with the relevant bitness (and unless the executable launched is in some redirected location, there is no need to worry about WoW64 folder redirection).
Given this limitation on Windows, it is likely that simply 'chaining along' to the relevant executable will be the simplest option for all the different variants, as well as making it simpler to test an individual one.
It also means your 'main' executable is free to be totally separate depending on the target operating system (as detecting CPU/OS capabilities is, by its nature, very OS-specific), with most of the rest of your code in shared objects/DLLs.
Also you can 'share' the same files for two different architectures if you currently do not feel that there is any point using the differing capabilities.
I would suggest that the main executable is capable of being forced into making a specific choice so you can see what happens with 'lesser' versions on a more capable machine (or what errors come up if you try something different).
Other possibilities given this model are:
Statically linking to different versions of the standard runtimes (for ones with/without thread safety) and using them appropriately if you are running without any SMP/SMT capabilities.
Detecting whether multiple cores are present and whether they are real or hyper-threaded (and also whether the OS knows how to schedule effectively in those cases); see the sketch after this list.
Checking the performance of things like the system timer/high-performance timers and using code optimized for this behaviour, say if you do anything where you wait for a certain amount of time to expire and thus can know your best possible granularity.
Optimizing your choice of code based on cache sizing/other load on the box. If you are using unrolled loops, then more aggressive unrolling options may depend on having a certain amount of level 1/2 cache.
Compiling conditionally to use doubles/floats depending on the architecture. This is less important on Intel hardware, but if you are targeting certain ARM CPUs, some have actual floating-point hardware support and others require emulation. The optimal code would change heavily, even to the extent that you just use conditional compilation rather than relying on the optimizing compiler(1).
Making use of co-processor hardware like CUDA capable graphics cards.
Detecting virtualization and altering behaviour (perhaps trying to avoid file system writes).
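A small standard-library-only sketch of the core-count and timer-granularity probes mentioned above:

// probe.cpp - hedged sketch of two runtime capability probes.
#include <chrono>
#include <cstdio>
#include <thread>

int main() {
    // Hardware threads the OS reports (0 if unknown); this alone does not
    // distinguish real cores from hyper-threaded siblings.
    unsigned hw_threads = std::thread::hardware_concurrency();

    // Estimate the effective granularity of the steady clock by observing
    // the smallest measurable tick.
    using clock = std::chrono::steady_clock;
    auto t0 = clock::now();
    auto t1 = t0;
    while (t1 == t0) t1 = clock::now();
    auto tick = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0);

    std::printf("hardware threads: %u, observed tick: %lld ns\n",
                hw_threads, static_cast<long long>(tick.count()));
    return 0;
}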
As to doing this check, you have a few options, the most useful one on Intel being the cpuid instruction.
Windows
Use someone else's implementation but you'll have to pay
Use a free open source one
Linux
Use the built in one
You could also look at open source software doing the same thing
Pixman does a fair amount of this and is permissively licensed.
Alternatively re-implement/update an existing one using available documentation on the features you need.
Quite a lot of separate documents to work out how to detect things:
Intel:
SSE 4.1/4.2
SSE3
MMX
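For reference, a minimal feature probe using GCC/Clang's <cpuid.h> (MSVC offers __cpuidex instead); note that actually using AVX also requires checking the OSXSAVE bit and XGETBV, which is omitted here:

// cpuid_probe.cpp - hedged sketch of CPUID feature detection (GCC/Clang, x86).
#include <cpuid.h>
#include <cstdio>

int main() {
    unsigned eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) return 1;
    // CPUID leaf 1, ECX feature bits.
    std::printf("SSE3:   %u\n", (ecx >> 0)  & 1);
    std::printf("SSE4.1: %u\n", (ecx >> 19) & 1);
    std::printf("SSE4.2: %u\n", (ecx >> 20) & 1);
    std::printf("AVX:    %u\n", (ecx >> 28) & 1);
    return 0;
}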
A large part of what you would be paying for in the CPU-Z library is someone doing all this (and the nasty little issues involved) for you.
be careful with this - it is hard to beat decent optimizing compilers on this
Have a look at liboil: http://liboil.freedesktop.org/wiki/ . It can dynamically select implementations of multimedia-related computations at run-time. You may find you can use liboil itself and not just its techniques.
Since you mention you are using GCC, I'll assume your code is in C (or C++).
Neil Butterworth already suggested making separate dynamic libraries, but that requires some non-trivial cross-platform considerations (manually loading dynamic libraries is different on Linux, Windows, OSX, etc., and getting it right will likely take some time).
A cheap solution is to simply write all of your variants using unique names, and use a function pointer to select the proper one at runtime.
I suspect the extra dereference caused by the function pointer will be amortized by the actual work you are doing (but you'll want to confirm that).
Also, getting different compiler optimizations will likely require different .c/.cpp files, as well as some twiddling of your build tool. But it's probably less overall work than separate libraries (which needed this already in one form or another).
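A hedged sketch of that function-pointer approach (names are hypothetical; here both variants live in one file via GCC/Clang target attributes, but in practice each would sit in its own .cpp built with its own flags):

// transform_dispatch.cpp - runtime selection via a function pointer (GCC/Clang).
#include <cstddef>

void transform_generic(float* data, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) data[i] *= 2.0f;
}

__attribute__((target("avx2")))
void transform_avx2(float* data, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i) data[i] *= 2.0f;  // may auto-vectorize for AVX2
}

using transform_fn = void (*)(float*, std::size_t);

static transform_fn pick_transform() {
    __builtin_cpu_init();  // harmless here; strictly required only before constructors run
    return __builtin_cpu_supports("avx2") ? transform_avx2 : transform_generic;
}

// The variant is chosen once, on the first call.
void transform(float* data, std::size_t n) {
    static const transform_fn fn = pick_transform();
    fn(data, n);
}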
Since you didn't specify whether you have limits on the number of files, I propose another solution: compile 5 executables, and then create a sixth executable that launches the appropriate binary. Here is a sketch, for Linux:
#include <limits.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* returns e.g. "sse2", "avx", ... (implementation not shown) */
char* determine_name_of_specific_version(void);

int main(int argc, char* argv[])
{
    char target_path[PATH_MAX];
    char** new_argv;
    char* specific_version = determine_name_of_specific_version();

    strcpy(target_path, "/usr/lib/myapp/versions/");
    strcat(target_path, specific_version);

    /* copy argv and append the terminating NULL */
    new_argv = malloc(sizeof(char*) * (argc + 1));
    memcpy(new_argv, argv, argc * sizeof(char*));
    new_argv[argc] = NULL;
    /* optionally set new_argv[0] to target_path */

    execv(target_path, new_argv);
    return 1; /* execv returns only on failure */
}
On the plus side, this approach allows you to provide the user transparently with both 32-bit and 64-bit binaries, unlike any of the library methods that have been proposed. On the minus side, there is no execv in the Win32 API (but there is a good emulation in Cygwin); on Windows, you have to create a new process rather than re-exec the current one.
Let's break the problem down into its two constituent parts: 1) creating platform-dependent optimized code and 2) building on multiple platforms.
The first problem is pretty straightforward. Encapsulate the platform dependent code in a set of functions. Create a different implementation of each function for each platform. Put each implementation in its own file or set of files. It's easiest for the build system if you put each platform's code in a separate directory.
For part two, I suggest you look at GNU Autotools (Automake, Autoconf, and Libtool). If you've ever downloaded and built a GNU program from source code, you know you have to run ./configure before running make. The purpose of the configure script is to 1) verify that your system has all of the required libraries and utilities needed to build and run the program and 2) customize the Makefiles for the target platform. Autotools is the set of utilities for generating the configure script.
Using autoconf, you can create little macros to check that the machine supports all of the CPU instructions your platform-dependent code needs. In most cases the macros already exist; you just have to copy them into your autoconf script. Then automake and autoconf can set up the Makefiles to pull in the appropriate implementation.
All this is a bit much to demonstrate in an example here. It takes a little time to learn, but the documentation is all out there; there is even a free book available online, and the process is applicable to your future projects. For multi-platform support, this is really the most robust and easiest way to go, I think. A lot of the suggestions posted in other answers are things that Autotools deals with (CPU detection, static & shared library support) without you having to think about it too much. The only wrinkle you might have to deal with is finding out whether Autotools are available for MinGW. I know they are part of Cygwin if you can go that route instead.
You mentioned the Intel compiler. That is funny, because it can do something like this by default. However, there is a catch: the Intel compiler didn't insert checks for the appropriate SSE functionality. Instead, it checked whether you had a particular Intel chip. There would still be a slow default case. As a result, AMD CPUs would not get suitable SSE-optimized versions. There are hacks floating around that will replace the Intel check with a proper SSE check.
The 32/64-bit difference will require two executables. Both the ELF and PE formats store this information in the executable's header. It's not too hard to start the 32-bit version by default, check if you are on a 64-bit system, and then restart the 64-bit version. But it may be easier to create an appropriate symlink at installation time.