Loading a dylib from a different architecture - c++

I currently have a program compiled for x86_64, and it relies on quite a few libraries also compiled for x86_64 (so recompiling them all would be a big project). I am looking to use an i386 dylib, but whenever I load it using dlopen I get an error saying it was not built for my architecture. Is there any way to either convert the i386 lib directly to x86_64 (I do not have its source code) or otherwise use it from an x86_64 process?

You cannot load an i386 library into an x86_64 executable.
The only way to get an x86_64 library out of an i386 one is to recompile it for the right target. If you don't have the source code, this cannot be done.
You can recompile all your code for i386 and use the library though.

You can't load a 32-bit (i386) library (dylib) into a 64-bit (x86_64) process, nor vice versa.
The machine can run either 32-bit or 64-bit processes; what you can't do is mix 32-bit and 64-bit code in a single process.

If that library is irreplaceable, you can't recompile it, and you really need the rest of the program to be x86_64, you can run it in a separate process and use some form of IPC to call into the code and pass results back.
In a lot of cases though, it may be easier to rewrite the library or replace it with something else that does a similar job.

Related

What determines if a 32-bit library built on a 64-bit machine needs x86_64 or i386 dependencies?

I am updating some old C++ projects to build on a 64-bit Linux machine for the first time, and I don't have much Linux experience. I need to build everything as 32-bit binaries, so I'm building everything with -m32 in the compiler and linker flags. I'm finding that, when linking to their dependencies, some must link with i386 shared objects, and some must link with x86_64 shared objects. If I only include the wrong folder in the linking path (-L/path/to/wrong/folder), it says
/usr/bin/ld: skipping incompatible xxx.so when searching for -lxxx
which I've come to understand means the architecture doesn't match what I'm trying to build.
The makefiles are nearly identical for two such differing projects, so it doesn't seem like I'm doing something obviously wrong there, and -m32 appears in the calls to gcc and g++ in the terminal. What could be causing this difference? Should I be concerned, or is it typical for this to happen?
Let me know if more information is needed to answer; I'm not really sure, due to inexperience with Linux and gcc, so apologies in advance.
Thanks @Wyzard and @duskwuff for the tips. I was indeed able to find my problem by using file on my .o files. It was just a silly mistake: I had inadvertently reverted the changes I made to one of the projects' makefiles, which included adding the -m32 flag. I think I misunderstood what the "x86_64" libraries are for, which confused me (I had assumed it meant "32-bit process for a 64-bit machine").

undefined symbol: _ZL22__gthrw_pthread_cancelm error

I have a C++/C application which needs to be compiled as a 32 bit application (as there are certain third-party libraries only available for 32 bit). However, the compilation as well as the execution will happen on CentOS 6.4 x86_64 machine.
I am using GNU Autotools for building. After doing a lot of googling, I finally figured out a set of options to give to ./configure to create 32-bit executables/shared objects. I set LD_LIBRARY_PATH to search in /lib, /usr/lib/, /usr/lib/gcc/... instead of /lib64, ... and verified with the file command that all the generated .so files and executables are 32-bit.
But I get the error: "undefined symbol: _ZL22__gthrw_pthread_cancelm" if I run the executable.
Any clues?
It seems you forgot to link to pthreads with -lpthread.
GCC adds a layer of abstraction over pthreads, and this abstraction uses weak symbols, so you can build your executable without link errors and still fail at runtime.
Is there a 32-bit pthread library on your target host? If not, I guess you need to get one installed. Also inspect the output of ldd <my-program> on your target host; this might help you find out what is missing.

Building Lua for both i386 & x86_64 architecture?

I've been building some Lua scripts to automate certain functions and configurations that I can use with my audio VST plug-ins. The scripts themselves work fine, tested in a separate project embedded in C++.
However, because VST and VSTGUI need to be built against the 10.6 SDK with Architectures set to Standard 32-bit/64-bit (and Valid Architectures including i386 and x86_64), when I integrate it into the VST plug-in project, it ignores liblua.a for the i386 architecture, causing the obvious linking errors.
Note: I can build the VST plug-ins for 64-bit only and eliminate the i386 arch, but then the plug-in won't load in some hosts. I think this has to do with some hosts still implementing only Carbon-based UI and how this works with VSTGUI.
Anyway, what kind of solutions exist for this problem? I can build Lua for either architecture, but not both. Unless I put them in separate directories and somehow tell Xcode about that?
It's not really a critical thing, but I'd like to be able to script some common elements between plug-ins. Thanks!

Runtime or compile time for platform-specific libraries?

I'm creating a library in C++. It links against Windows libraries on Windows and Linux libraries on Linux. It's abstracted, all is well.
However, is it feasible to dynamically detect, load and use libraries (and copy header files for use) so it could be used on any platform if it were running under the LLVM JIT?
Unfortunately, the LLVM intermediate representation in the bitcode files is not completely machine independent. You could probably get away with x86 Linux and Windows, but that same bitcode would probably not run on x86_64 systems, for example.

Compiling a C program with a specific architecture

I was recently fighting some problems trying to compile an open source library on my Mac that depended on another library and got some errors about incompatible library architectures. Can somebody explain the concept behind compiling a C program for a specific architecture? I have seen the -arch compiler flag before and have seen values passed to it such as ppc, i386 and x86_64 which I assume maps to the CPU "language", but my understanding stops there. If one program uses a particular architecture, do all libraries that it loads need to be on the same architecture as well? How can I tell what architecture a given program/process is running under?
Can somebody explain the concept behind compiling a C program for a specific architecture?
Yes. The idea is to translate C to a sequence of native machine instructions, which have the program coded into binary form. The meaning of "architecture" here is "instruction-set architecture", which is how the instructions are coded in binary. For example, every architecture has its own way of coding for an instruction that adds two integers.
The reason to compile to machine instructions is that they run very, very fast.
If one program uses a particular architecture, do all libraries that it loads need to be on the same architecture as well?
Yes. (Exceptions exist but they are rare.)
How can I tell what architecture a given program/process is running under?
If a process is running on your hardware, it is running on the native architecture, which on Unix you can discover by running the command uname -m; for the human reader, the output of uname -a may be more informative.
If you have an executable binary or a shared library (.so file), you can discover its architecture using the file command:
% file /lib/libm-2.10.2.so
/lib/libm-2.10.2.so: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, stripped
% file /bin/ls
/bin/ls: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), dynamically linked (uses shared libs), for GNU/Linux 2.6.8, stripped
You can see that these binaries have been compiled for the very old 80386 architecture, even though my hardware is a more modern i686. The i686 (Pentium Pro) is backward compatible with 80386 and runs 80386 binaries as well as native binaries. To make this backward compatibility possible, Intel went to a great deal of trouble and expense—but they practically cornered the market on desktop CPUs, so it was worth it!
One thing that may be confusing here is that the Mac platform has what it calls a universal binary, which is really two binaries in one archive: one for the Intel architecture and the other for PPC. Your computer automatically decides which one to run. You can (sometimes) run a binary for another architecture in an emulation mode, and some architectures are supersets of others (i.e., i386 code will usually run on an i486, i586, i686, etc.), but for the most part the only code you can run is code for your processor's architecture.
For cross compiling, not only the program but all the libraries it uses need to be compatible with the target processor. Sometimes this means having a second compiler installed; sometimes it is just a question of having the right extra module for the compiler available. A gcc cross compiler is actually a separate executable, though it can sometimes be accessed via a command-line switch. The gcc cross compilers for various architectures are most likely separate installs.
To build for an architecture other than your CPU's native one, you will need a cross-compiler, which means that the generated code cannot run natively on the machine you're sitting on. GCC can do this fine. To find out which architecture a program is built for, check out the file command. On Linux-based systems at least, a 32-bit x86 program will require 32-bit x86 libs to go along with it. I guess it's the same for most OSes.
Does ldd help in this case?