Closed. This question needs debugging details. It is not currently accepting answers.
Closed 4 years ago.
I'm writing a plugin-based emulation system. The main system sets up an ImGui instance, and the plugins use ImGui to draw windows to the screen. I'm using a static build of ImGui that is embedded in the host program and linked to at run time; on Linux this works fine, because the plugin .so files don't need to link against ImGui at compile time, only at run time. On OS X, however, I get "Undefined symbols for architecture x86_64" errors when trying to link the .dylibs.
Is there a way to tell OS X to defer the linking to run time as well?
Found the answer elsewhere: I need to add the -undefined dynamic_lookup flag when linking on OS X.
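For reference, the flag goes on the link step of each plugin. A minimal sketch of the two link commands (the file names here are hypothetical):

```shell
# Linux: the linker allows unresolved symbols in a shared object by default,
# so the plugin can be built without linking against ImGui:
g++ -shared -fPIC plugin.cpp -o plugin.so

# OS X: the linker resolves all symbols at link time by default;
# tell it to defer unknown symbols to run time instead:
g++ -shared -fPIC plugin.cpp -o plugin.dylib -undefined dynamic_lookup
```

Note that on Linux the host executable must export its ImGui symbols for the run-time lookup to succeed, which usually means linking the host with -rdynamic (or -Wl,--export-dynamic).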
Closed 3 years ago.
I'm trying to port a simple program code.cpp, linked against a couple of shared libraries libA.so and libB.so, from my personal computer (running an up-to-date Arch Linux installation) to a machine with Ubuntu 16.04 LTS. Compiling the libraries works just fine, but I get a lot of undefined reference errors when compiling code.cpp.
I suspect this is caused by the fact that libA.so is linked against libB.so: while both libA.so and libB.so compile fine on Ubuntu 16.04, strangely libA.so does not end up linked against libB.so despite the -lB compilation flag, which in turn causes undefined references when generating the binary. On Arch Linux, on the other hand, libA.so does get linked against libB.so, or so ldd tells me.
I initially thought the problem could be mismatched GCC versions, but even after installing and using GCC 8 on Ubuntu 16.04 the problem persists.
No one guarantees complete compatibility, not only between different Linux distributions but even between different versions of the same distribution. There was an attempt to standardize this, called the LSB (Linux Standard Base), but unfortunately it failed dramatically.
Different C++ compilers often come with incompatible C++ standard libraries, which is another big pain point.
I strongly recommend recompiling everything on the target platform.
There is also ambiguity in how shared libraries are loaded. On some systems, linking against a library A that is itself linked against library B makes the symbols of B available to your program; on other systems it does not.
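The Ubuntu behaviour described in the question is consistent with newer linker defaults (notably --as-needed, which drops a -lB from libA's recorded dependencies if libA references no libB symbol at link time). This is only one plausible explanation, but it can be checked and worked around as follows; the source file names are taken from the question, the rest is a sketch:

```shell
# Check what libA actually records as a run-time dependency:
ldd libA.so | grep libB        # present on Arch, apparently missing on Ubuntu

# Force the dependency to be recorded when building libA:
g++ -shared -fPIC a.cpp -o libA.so -L. -Wl,--no-as-needed -lB

# ...or simply name every needed library explicitly when linking the binary:
g++ code.cpp -o code -L. -lA -lB
```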
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
I know these are the include files (in C++); we have to compile them and then ship them with the actual binary. But I have a somewhat strange problem: I used windows.h in a program and I want to ship it, but windows.h includes other header files, and so on. Would I have to ship the whole Windows SDK in the form of DLLs? Is there any other way to do it?
You do not need to ship header files with a binary application.
You do, however, need to ship any shared libraries (DLLs on Windows) that your program depends on, and this includes the compiler's runtime (the standard library etc.). Static libraries are made part of the executable and thus do not need to be shipped separately.
If you are using Visual Studio, you need to ship the Visual Studio redistributables along with your program (search for the redistributable package matching your Visual Studio version); other compilers have similar requirements.
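To see which shared libraries a binary actually depends on, and therefore what has to ship with it, you can ask the toolchain directly. A sketch; myapp is a hypothetical binary name:

```shell
# Linux: list the shared objects the dynamic loader would pull in
ldd ./myapp

# Windows (from a Visual Studio developer prompt): list the DLLs an EXE imports
dumpbin /DEPENDENTS myapp.exe
```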
Closed 7 years ago.
Can someone explain how a toolchain depends on the OS and the platform architecture? For instance, if I want to compile code for an ARM architecture, should I look at the platform architecture, or at the OS that platform is running on, and then adapt the toolchain to it?
Most compilers compile code to assembly language. The code they produce will most likely depend on various calls to the operating system (e.g. to allocate dynamic memory) and carry a header describing properties of the file, such as the location of the code and data sections (e.g. ELF, PE). An assembler then translates this assembly into object files, which are linked using the linker for that platform. All these tools produce code for a specific architecture and OS.
This does not mean that the compiler and linker cannot run on another type of system. The process of compiling code for another system is called cross-compiling. Even though this is less common than compiling for the same platform the compiler runs on, it is still widely used. A few examples are compiling OS kernels, which of course cannot rely on another OS, and compiling native code for Android (the Android NDK contains a cross-compiler).
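As a concrete sketch, cross-compiling a C file for 32-bit ARM Linux with a GNU cross toolchain looks like this (the commands assume the common arm-linux-gnueabihf toolchain is installed; hello.c is a hypothetical source file):

```shell
# Same source, two toolchains: the tool-name prefix selects the target.
gcc -o hello hello.c                          # native build
arm-linux-gnueabihf-gcc -o hello-arm hello.c  # ARM cross build

# 'file' shows the target architecture baked into each binary:
file hello hello-arm
```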
Closed 7 years ago.
I came across code where a library is linked both statically and as a shared library, and both contain the same function names. How does the linker decide which library to link?
I am adding the foobar.so library's path to /etc/ld.so.conf, and I pass -I (include files path), -l (library name) and -L (library path) when compiling. After this I executed ldconfig. I am using gcc (GCC) 4.4.7.
It really depends on the runtime environment you are using, and how "shared" or "dynamic" libraries are implemented in that environment.
There is one approach where each dynamic library comes together with a statically linked "stub" library, so the compiler resolves your calls against the stub methods, and the stub methods forward to the dynamically loaded library once that library has been loaded. This would definitely not work in your case, because each stub method would conflict with a statically linked method.
There is another approach where loading a dynamic library gives you a handle to that library, and then you can query the system for entry points on that handle, and invoke these entry points dynamically. In this case, the linker is not involved at all in the resolution of the dynamic entry points, so there is no problem at all (besides it being pointless) with having a statically linked library that provides equivalent entry points.
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 7 years ago.
This is a newb question. I'm not sure if "external libraries" is the right terminology, but I see some programs include or use libraries or modules that are not programmer-defined. Do I need to do anything special when compiling - do I need to tell the compiler where to find these external libraries?
For example, on this page http://www.unidata.ucar.edu/software/netcdf/examples/programs/, SimpleXyWr.cpp and simple_xy_wr.f90 both reference the netCDF library/module. How does the compiler know where to find the library/module? Do I need to provide the path myself at some point in the compilation?
Typically, for GNU compilers, the -L option tells the linker where to find libraries and -l tells it which library to link. For example,
f77 -o run main.f -L/usr/local/lib -llapack -lblas
will look for libraries in the /usr/local/lib directory and link against the LAPACK and BLAS libraries.
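For the netCDF example specifically, the library installs a small helper, nc-config, that prints the include and link flags for the local installation, so you don't have to guess the paths. A sketch, assuming netCDF and its C++ bindings are installed (the C++ library name varies across versions, e.g. netcdf_c++4 vs. netcdf-cxx4):

```shell
# Ask the installed netCDF for its own flags instead of hard-coding paths:
g++ SimpleXyWr.cpp -o simple_xy_wr $(nc-config --cflags) $(nc-config --libs)

# Or spell them out in the -I/-L/-l style shown above:
g++ SimpleXyWr.cpp -o simple_xy_wr -I/usr/local/include -L/usr/local/lib -lnetcdf_c++4 -lnetcdf
```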