Runtime or compile time for platform-specific libraries?

I'm creating a library in C++. It links against Windows libraries on Windows and Linux libraries on Linux. It's abstracted, all is well.
However, is it feasible to dynamically detect, load and use libraries (and copy header files for use) so that the library could be used on any platform if it were running under the LLVM JIT?

Unfortunately, the LLVM intermediate representation in the bitcode files is not completely machine independent. You could probably get away with sharing bitcode between x86 Linux and Windows, but that same bitcode would probably not run on x86_64 systems, for example.
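To make the type-size point concrete, here is a minimal sketch (the file name is illustrative, not from the original question): the front end evaluates sizeof while generating IR, so the result is frozen into the bitcode as a constant for whatever target clang was given.

// sizeof_demo.cpp - compile with: clang++ -emit-llvm -c sizeof_demo.cpp
#include <cstdio>

int main() {
    // sizeof(long) is 4 on 32-bit Linux/Windows and on x86_64 Windows,
    // but 8 on x86_64 Linux. The value is folded into the emitted IR as
    // a compile-time constant, so the same .bc file cannot be correct
    // on all of those targets at once.
    std::printf("sizeof(long) = %zu\n", sizeof(long));
    return 0;
}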

How to use llvm libraries

I am working on a project that consists of several C++ teams. Each team delivers libraries, and our team integrates these libraries into a front-end application.
The application is cross-platform, so the other teams have to provide the same (static) libraries compiled for different platforms/CPU architectures/configurations, e.g. Visual Studio 2015/2013, 32-bit/64-bit, Linux, Debug/Release, etc.
It would be nice to reduce the number of these static library "manifests", so I was looking into Clang/LLVM. The idea would be to compile the static libraries into LLVM bitcode and use the llvm-ar tool to create an LLVM static library. When we have to make the binaries for a specific platform, we would use llc (the LLVM static compiler) to create the native-code static library and do the linking with the platform linker.
Questions:
is there a better way to do what I want to achieve?
llc does not seem to support compiling a static library, only individual translation units (.bc -> .o). Of course I can extract each individual bitcode file, assemble it to a native object file and use the platform librarian tool (lib/ar) to make the static library (see the sketch after this list), but I wonder if there is a more streamlined solution.
the gold linker seems to do something like what I need, but it seems to be restricted to the ELF format. I have to support Windows/Linux and maybe iOS.
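For reference, the manual per-member workflow described above would look roughly like this (library and file names are illustrative):

% llvm-ar x libfoo.a                                           # extract the individual .bc members
% for f in *.bc; do llc -filetype=obj $f -o ${f%.bc}.o; done   # assemble each to a native object
% ar rcs libfoo-native.a *.o                                   # rebuild a native static library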
LLVM IR generated from a target-specific and platform-specific language (C/C++) won't be target neutral. Think about type sizes, alignments, ABI requirements, etc. Not to mention pure source-code features like the preprocessor. So, no, the approach you thought about won't work at all.
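The preprocessor point can be seen in a short sketch (not from the original post): the #ifdef below is resolved before any IR is generated, so the bitcode contains only one of the two branches.

#include <cstdio>

int main() {
#ifdef _WIN32
    // Only this branch survives into the IR when compiling for Windows...
    std::printf("Windows build\n");
#else
    // ...and only this one otherwise. The choice is made by the
    // preprocessor, long before any bitcode exists.
    std::printf("non-Windows build\n");
#endif
    return 0;
}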
See LLVM bitcode cross-platform for some more information.

Build a C++ program that runs on different Linux versions

Some Linux programs, for example the mongodb binary, can run on different Linux versions regardless of the host machine's gcc and glibc versions.
How is that done? Statically link all the libs? But I have heard that glibc is not supposed to be statically linked.
To make an executable that is independent of the installed libraries, you must statically link it.
However, if the application isn't very large/complex to build, it's often better to either distribute the source and build on/for the target system, or pre-build for the most popular variants.
The reason you don't want to statically link glibc (and all the other libs that the application may use) is that even the simplest application becomes about 700KB-1MB. Given that my distribution has 1900 entries in /usr/bin, that would make it around 2GB minimum, where now it is 400MB (and that includes beasts like clang, emacs and skype, all weighing in at over 7MB in non-statically linked form - they probably have more than a dozen library dependencies each - clang, for example, grows from under 10MB to around 100-120MB if you compile it with static linking).
And of course, with static linkage, all the code for each application needs to be loaded into memory as a separate copy. So the overall memory usage goes up quite dramatically.
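You can see both effects yourself; a quick sketch (file names are illustrative, sizes will vary by distribution):

% g++ hello.cpp -o hello-dyn              # dynamically linked: a few KB on disk
% g++ -static hello.cpp -o hello-static   # statically linked: roughly 1MB, self-contained
% ldd hello-dyn                           # lists the shared libraries it needs at runtime
% ldd hello-static                        # prints "not a dynamic executable"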

How do you change the target machine configuration in Fortran?

I am using the Intel Fortran Compiler on Linux. I know that if I type "ifort -dumpmachine" it will print the target machine configuration for the compilation (e.g. "x86_64-linux-gnu"), but I need to know how to change this if I want to compile for a different operating system (e.g. a different version of Linux). The "-arch" compiler option allows you to change the processor architecture, but I need to know how to also change the operating system.
Cross compilation is highly processor-dependent, so there is no general Fortran answer. As far as I know, Intel Fortran is available only for a limited number of architectures - x86 and x86-64. There are separate products for Linux, Windows and OS X, and you cannot cross-compile between them.
You did not specify what you mean by a different version of Linux. You should find in the manual of your version of the compiler what kernel version the resulting executables need. In principle you can then target all such distributions. There may also be problems with the right Intel runtime libraries and glibc, and with any other libraries you use. You can solve this by statically linking your libraries on your machine (use -static, or -static-intel for the Intel runtime libraries only), but be aware that they also have to be compatible with your target architecture (in particular, if they require an advanced instruction set like SSE(2) or AVX).
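For example (a sketch; check your compiler version's manual for the exact spelling of the flags):

% ifort -static-intel prog.f90 -o prog   # statically link only the Intel runtime libraries
% ifort -static prog.f90 -o prog         # statically link everything, including glibc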

Is it possible to use the same DLL for clients using both Windows and Linux

I am looking to create a C++ library that can be used by both Linux and Windows clients. The OS specific functionality will be hooked up by the client by implementing the interfaces provided by the library.
Is this possible to achieve? Do I need to recompile the C++ project again on Linux?
P.S.: I am using the Code::Blocks IDE.
The short answer is no, you still need to compile your library for each targeted platform -- however, assuming your code is written such that it is cross-platform, you can set up your build to target both Windows and Linux environments with little fuss. I do this now using CMake to generate both Visual Studio projects for Windows environments and Makefiles for Linux environments.
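For example, from one and the same source tree (generator names are illustrative; pick the ones matching your toolchains):

% cmake -G "Visual Studio 14 2015" ..   # on Windows: generates .sln/.vcxproj files
% cmake -G "Unix Makefiles" ..          # on Linux: generates Makefiles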
I'm pretty confident that Linux will not accept a .dll :) And yes, you will need to recompile. Unless you run Windows as a virtual machine under Linux, which sort of preempts the question.
It certainly cannot be the same binary file: shared objects use the ELF format on Linux, while DLLs use the PE format on Windows. And dynamic loading has different semantics on the two systems. See Levine's Linkers and Loaders book for details.
You could, if done carefully, have the same source code giving the two different files (the DLL on Windows, the dynamic shared object on Linux).
But you probably would need some conditional compilation tricks like #ifdef WINDOWS etc...
You might use libraries providing a common abstraction for such things. For instance, both GTK/GLib and Qt have mechanisms giving a common abstraction of dynamically linked (or dynamically loaded, i.e. dlopen-ed) libraries.
You probably want to read the Program Library Howto (at least for Linux).
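To sketch what such conditional compilation looks like in practice (the wrapper function names here are hypothetical, not from GTK/GLib or Qt):

// dynload.cpp - a minimal sketch of a cross-platform dynamic-loading wrapper.
#ifdef _WIN32
  #include <windows.h>
  // On Windows, dynamic loading goes through LoadLibrary/GetProcAddress.
  void* open_lib(const char* name)         { return (void*)LoadLibraryA(name); }
  void* find_sym(void* lib, const char* s) { return (void*)GetProcAddress((HMODULE)lib, s); }
#else
  #include <dlfcn.h>
  // On Linux and other POSIX systems, the equivalents are dlopen/dlsym.
  void* open_lib(const char* name)         { return dlopen(name, RTLD_NOW); }
  void* find_sym(void* lib, const char* s) { return dlsym(lib, s); }
#endif

Qt's QLibrary and GLib's GModule wrap essentially this pattern for you, plus error handling and unloading.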

Compiling a C program with a specific architecture

I was recently fighting some problems trying to compile an open source library on my Mac that depended on another library and got some errors about incompatible library architectures. Can somebody explain the concept behind compiling a C program for a specific architecture? I have seen the -arch compiler flag before and have seen values passed to it such as ppc, i386 and x86_64 which I assume maps to the CPU "language", but my understanding stops there. If one program uses a particular architecture, do all libraries that it loads need to be on the same architecture as well? How can I tell what architecture a given program/process is running under?
Can somebody explain the concept behind compiling a C program for a specific architecture?
Yes. The idea is to translate C to a sequence of native machine instructions, which encode the program in binary form. The meaning of "architecture" here is "instruction-set architecture", which is how the instructions are coded in binary. For example, every architecture has its own way of coding for an instruction that adds two integers.
The reason to compile to machine instructions is that they run very, very fast.
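On a Mac, for instance, the -arch flag from the question selects which instruction set the compiler emits (a sketch; the architectures available depend on your SDK):

% gcc -arch i386 hello.c -o hello32                  # 32-bit Intel instructions
% gcc -arch x86_64 hello.c -o hello64                # 64-bit Intel instructions
% gcc -arch i386 -arch x86_64 hello.c -o hello-fat   # a universal binary containing both
% file hello-fat
hello-fat: Mach-O universal binary with 2 architectures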
If one program uses a particular architecture, do all libraries that it loads need to be on the same architecture as well?
Yes. (Exceptions exist but they are rare.)
How can I tell what architecture a given program/process is running under?
If a process is running on your hardware, it is running on the native architecture which on Unix you can discover by running the command uname -m, although for the human reader the output from uname -a may be more informative.
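For example, on the machine used for the listings below:

% uname -m
i686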
If you have an executable binary or a shared library (.so file), you can discover its architecture using the file command:
% file /lib/libm-2.10.2.so
/lib/libm-2.10.2.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, stripped
% file /bin/ls
/bin/ls: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
You can see that these binaries have been compiled for the very old 80386 architecture, even though my hardware is a more modern i686. The i686 (Pentium Pro) is backward compatible with 80386 and runs 80386 binaries as well as native binaries. To make this backward compatibility possible, Intel went to a great deal of trouble and expense—but they practically cornered the market on desktop CPUs, so it was worth it!
One thing that may be confusing here is that the Mac platform has what they call a universal binary, which is really two binaries in one archive, one for the Intel architecture and the other for PPC. Your computer will automatically decide which one to run. You can (sometimes) run a binary for another architecture in an emulation mode, and some architectures are supersets of others (i.e. i386 code will usually run on an i486, i586, i686, etc.), but for the most part the only code you can run is code for your processor's architecture.
For cross compiling, not only the program, but all the libraries it uses, need to be compatible with the target processor. Sometimes this means having a second compiler installed; sometimes it is just a question of having the right extra module for the compiler available. A gcc cross compiler is actually a separate executable, though it can sometimes be accessed via a command-line switch. The gcc cross compilers for various architectures are most likely separate installs.
To build for a different architecture than the native one of your CPU, you will need a cross-compiler, which means that the code generated cannot run natively on the machine you're sitting on. GCC can do this fine. To find out which architecture a program is built for, check out the file command. On Linux-based systems at least, a 32-bit x86 program will require 32-bit x86 libs to go along with it. I guess it's the same for most OSes.
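For example, with a GCC cross-toolchain installed (the target triplet here is illustrative):

% arm-linux-gnueabihf-gcc hello.c -o hello-arm   # cross-compile for 32-bit ARM
% file hello-arm
hello-arm: ELF 32-bit LSB executable, ARM, EABI5 version 1 (SYSV), dynamically linked ...

The resulting binary needs ARM builds of every library it links against, starting with libc.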
Does ldd help in this case?