I've recently started trying to compile/link C++ code dynamically. Let's suppose I have an application MyApp.exe running. I want that program to load some compiled object files (.o) and do all the linking stuff. Is it possible, or do I need a shared library?
Is it possible
Yes.
or do I need a shared library?
That's exactly what a shared library is!
Update
For a plugin system you should look into the system API functions:
LoadLibrary / GetProcAddress (Windows)
dlopen / dlsym (Linux, *BSD, macOS)
The recommended approach for implementing a plugin is to have exactly one function of a specific name, common to all plugin modules, exported by the plugin.
This function serves two purposes:
initializing the plugin
filling a structure of function pointers with the pointers to the plugin's functions
That way the user of the plugin gets all the plugin's symbols by only a handful of system function calls, instead of littering the code with countless calls to dlsym or GetProcAddress.
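As an illustration, a minimal sketch of this pattern might look like the following (all names here, PluginApi, plugin_init, and the demo_* functions, are made up for the example; on Windows the entry point would additionally need __declspec(dllexport)):

    // plugin_api.h -- shared between the host application and every plugin
    struct PluginApi {
        const char* (*name)();        // returns the plugin's display name
        int         (*process)(int);  // the plugin's actual work
    };

    // plugin.cpp -- compiled into the plugin's shared library.
    // extern "C" prevents C++ name mangling, so the host can look the
    // symbol up by the plain string "plugin_init".
    static const char* demo_name() { return "demo-plugin"; }
    static int demo_process(int x) { return x * 2; }

    extern "C" bool plugin_init(PluginApi* api) {
        api->name    = demo_name;     // fill the table in one place...
        api->process = demo_process;  // ...so the host needs only one lookup
        return true;                  // report successful initialization
    }

The host then resolves only the single symbol "plugin_init" with dlsym or GetProcAddress and reaches everything else through the filled-in structure.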
It is theoretically possible, but probably not practical. On Windows, you cannot dynamically load object (.obj) files using LoadLibrary, so you would have to link the object(s) into a DLL first. This would require a linker (along with static library dependencies, etc.) compatible with the compiler used to produce the object files, knowledge of the appropriate linker flags for the objects, etc.
Generally, it makes more sense to produce needed DLLs as part of the same build process that is performing the compilation. That process is in a better position to have all the necessary tools and information.
I've found multiple posts describing in detail the differences between static and shared libraries; however, I have yet to see an overarching view on when a shared library is loaded, what goes on then, and when the library is unloaded, and especially how all of this is affected by the presence of static variables. I understand this differs from system to system, but let's say on Linux.
Static linking - the code is linked directly into the executable program's file. The code is effectively loaded when the executable program is started. The loader doesn't really know about libraries at this point. All the addresses of invoked functions are resolved at compile/link time.
Shared library loading (implicit linking) - the code is compiled to link against a stub for the shared library (on Linux, the .so file itself plays this role at link time). When the executable program is started, the loader interrogates the program to find all of its runtime library dependencies, resolves the location of each one (the .so file), performs the position-independent code fixups, updates the symbol table, and so on. That is, when the shared library is pulled into memory, there are fixups for the addresses at which each function lives. On Linux, there's a common program called ldd that dumps the list of .so dependencies a program needs at runtime; the loader effectively does the equivalent of what ldd does before loading the libraries into user space. Typically, the C/C++ runtimes and most system libraries (POSIX functions) are loaded this way.
Static/global variables live in a writable data section of the shared library and are duplicated by the loader for each process that loads the shared library.
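As a sketch of implicit linking (libdemo.so and demo_process are hypothetical names): the program calls the library function directly, and the dynamic loader maps the library before main() runs.

    // main.cpp -- build with something like: g++ main.cpp -L. -ldemo
    extern "C" int demo_process(int x);  // prototype from the library's header

    int main() {
        // The loader resolved this call's address at program start;
        // the code never mentions the library by name at run time.
        return demo_process(21);
    }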
Shared library loading (explicit linking) - the code invokes the dlopen library function to explicitly load a shared library (.so file) at runtime, and invokes dlsym to get the address of a function inside that library so it can explicitly execute it. The loader does effectively the same thing as with implicit loading, but doesn't pull the code into memory until the dlopen call.
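A minimal sketch of explicit loading (again with the hypothetical libdemo.so and demo_process; link with -ldl):

    // main.cpp -- build with something like: g++ main.cpp -ldl
    #include <dlfcn.h>
    #include <cstdio>

    int main() {
        // Pull the library into the process now, not at program start.
        void* handle = dlopen("./libdemo.so", RTLD_NOW);
        if (!handle) {
            std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        // Look up one exported function by its (unmangled) name.
        using DemoFn = int (*)(int);
        auto fn = reinterpret_cast<DemoFn>(dlsym(handle, "demo_process"));
        if (!fn) {
            std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
            dlclose(handle);
            return 1;
        }

        std::printf("demo_process(21) = %d\n", fn(21));
        dlclose(handle);  // drop our reference; the loader may unmap the library
        return 0;
    }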
Advantages of shared libraries include smaller executable program files, and the fact that a shared library can be mapped into memory just once and shared among multiple programs running at the same time.
Windows has effectively the same pattern with DLLs. dumpbin.exe (with the /dependents option) is the equivalent of ldd on Unix, and LoadLibrary and GetProcAddress are the Windows equivalents of dlopen and dlsym.
Allow me to preface this question by letting you know that I have no formal CS education and have been self-learning C++.
I would like to understand what the different ways of including third party libraries in a project are.
If I find an interesting library on GitHub, how do I identify how it is to be included in a project?
I've read about the concepts of dynamic and static linking in a Windows context; however, I am still somewhat unclear about them.
Libraries contain implementations of functions, regardless of being static or dynamic. A library contains m function implementations, of which a subset n < m is exposed to the user. In C++, for these n functions the library normally offers a header file provided to you for inclusion: a list of so-called function prototypes. Depending on which headers you include and which functions you have used in your project, the toolchain records the prototypes of such external functions, and the linker requires you to link the appropriate library against your executable to satisfy the unresolved symbols.
Function implementations from static libraries are welded into your executable at link time. Function implementations from dynamic libraries remain exactly where they are; there is just a stub welded into your executable that transfers control to the dynamic library upon a call to it.
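As a tiny illustration (mylib.h, my_function, and libmylib are hypothetical names): the header gives the compiler only the prototype, and the symbol stays unresolved until you link the library.

    // mylib.h -- shipped with the library: only the prototype
    int my_function(int x);

    // main.cpp -- your project
    #include "mylib.h"

    int main() {
        // Compiles fine from the prototype alone, but linking fails with
        // "undefined reference to my_function" unless the library holding
        // the implementation is given to the linker, e.g.
        //   g++ main.cpp -L. -lmylib
        return my_function(42);
    }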
Briefly, you just need to do the following steps to include/reference a C++ library:
Add the header file (*.h or *.hpp) directory to your project's include paths.
#include the xxx.h in your source code files, to call the functions/methods/interfaces.
Add the library:
For a static lib (*.lib on Windows or *.a on Linux), add the lib to your project's library paths.
For a dynamic lib, just make sure the *.dll (or *.so) is in the same directory as your project's output (like the *.exe or your lib), for running or to publish/deploy.
I came across code where a library is available both statically and shared, and both contain the same function names. How does the linker decide which library to link?
I am adding the foobar.so library's path to /etc/ld.so.conf, as well as passing -I (include files path), -l (library name), and -L (library path) to the compiler. After this I executed ldconfig. I am using gcc (GCC) 4.4.7.
It really depends on the runtime environment you are using, and how "shared" or "dynamic" libraries are implemented in that environment.
There is one approach where each dynamic library comes together with a statically linked "stub" library, so the compiler resolves your calls against the stub methods, and the stub methods forward to the dynamically loaded library once that library has been loaded. This would definitely not work in your case, because each stub method would conflict with a statically linked method.
There is another approach where loading a dynamic library gives you a handle to that library, and then you can query the system for entry points on that handle, and invoke these entry points dynamically. In this case, the linker is not involved at all in the resolution of the dynamic entry points, so there is no problem at all (besides it being pointless) with having a statically linked library that provides equivalent entry points.
I am currently working on a Raspberry Pi project that uses the OpenCV library, among others. The OpenCV library is quite large, and the build process for it is decently extensive. I found a script which downloads and installs the latest version of OpenCV, and following some suggestions from this question, I was able to build the library, and begin using the functions within OpenCV.
Considering the actual build process for OpenCV took considerably longer than building our project would, is it acceptable to just build the library once, as opposed to building the library each time we build our project?
While I realize this is probably personal preference, I am wondering how others handle situations similar to this.
As you probably already know, code that does not change does not need to be recompiled. This is true for executables and libraries alike.
A library is supposed to provide you additional functionality in a neat, pre-packaged form. The difference between additional code you add to your project and a library is that the code included in a library is supposed to be in a stable state, so that once built, the user will be able to use its features without any maintenance hassle; the APIs will be available and they will always work. You will be able to discard any implementation files, and just work with the header files - which provide you with the API within your code - and the library files, which contain the compiled implementation.
You are pretty much pre-compiling part of your program; a part that can also be used in other projects, again without recompiling.
Note that C++ itself is an example of this: an implementation of the C++ standard library (for example libc++) is already included with your compiler, so that you are able to use the standard C++ headers without the need to recompile C++ whole every time you try a "Hello World!" program.
You can even extract libraries out of parts of your project that you feel are already completed and stable: this can allow you to reduce the time required to compile your project even though it becomes bigger. These reasons are part of why modularity is so strongly encouraged when programming.
TL;DR: Recompiling a library only once is not only acceptable, it is most probably what you want to do.
It is normal to compile a library once and afterwards only link against it. That is precisely why compilers and build tools detect whether source files have changed: unchanged code does not need to be recompiled.
I want to call some third party DLL routines from my own DLL. I'm just not sure how to do that in C++.
You can use load-time dynamic linking or run-time dynamic linking in your DLL in the same way as in the executable. The only restriction is not to call LoadLibrary from your DllMain function to avoid deadlocks.
LoadLibrary and GetProcAddress are but one of your friends here ...
If this DLL has a .lib import file, you just add it to the linker input and its functions are resolved for you at link time. If it doesn't, there are tools to generate a .lib file from a .dll.
Also, you can import functions dynamically, using LoadLibrary and GetProcAddress.
MSDN says that you can't call LoadLibrary from DllMain. But in most cases nothing bad happens.
Typically you link to the DLL via an export library in your project. Then the DLL functions can be called by your program, provided the DLL is in the program's path at runtime.
It's also possible (but a lot more work) to avoid link-time resolution of the required functions by manually loading the DLL and getting the required function addresses, but that should not be necessary if the third-party DLL supports the usual link-time mechanisms.
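For the manual route, a minimal sketch might look like this ("thirdparty.dll" and "tp_compute" are hypothetical names); note that, per the warning above, the LoadLibrary call happens in an ordinary exported function, not in DllMain:

    // mydll.cpp -- part of your own DLL
    #include <windows.h>

    extern "C" __declspec(dllexport) int my_function(int x) {
        // Load the third-party DLL on demand (safe here, not in DllMain).
        HMODULE mod = LoadLibraryA("thirdparty.dll");
        if (!mod)
            return -1;                    // report failure to the caller

        using TpFn = int (*)(int);
        auto fn = reinterpret_cast<TpFn>(GetProcAddress(mod, "tp_compute"));
        int result = fn ? fn(x) : -1;     // call it only if it was found

        FreeLibrary(mod);                 // release our reference
        return result;
    }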