I have a C++ CMake project with multiple sub-projects that I package into shared libraries. The project itself, an executable, links with all these shared libraries. The project is being ported from Windows to Ubuntu. What I do is have the executable, EXE, use one sub-project, Core, to open all the other libraries. The problem is that this isn't working on Linux.
This is EXE:
int main(int argc, char *argv[])
{
    core::plugin::PluginManager& wPluginManager = core::plugin::PluginManagerSingleton::Instance();
    wPluginManager.loadPlugin("libcore.so");
    wPluginManager.loadPlugin("libcontroller.so");
    wPluginManager.loadPlugin("libos.so");
    wPluginManager.loadPlugin("libnetwork.so");
    wPluginManager.loadPlugin("liblogger.so");
    return 0;
}
This is core::plugin::PluginManager::loadPlugin():
bool PluginManager::loadPlugin(const boost::filesystem::path &iPlugin) {
    void* plugin_file = dlopen(iPlugin.string().c_str(), RTLD_LAZY);
    std::cout << (plugin_file ? "success" : "failed") << std::endl;
    return true;
}
What happens is that libcore gets loaded properly, but then all the other libraries fail with no error message. I cannot find out why it's not working. However, when I do the same thing but load the libraries directly in main instead of having Core load them, it works.
Basically, I can load libraries from an exe, but I can't from other shared libraries. What gives and how can I fix this?
The most likely reason for dlopen from the main executable to succeed and for the exact same dlopen from libcore.so to fail is that the main executable has correct RUNPATH to find all the libraries, but libcore.so does not.
You can verify this with:
readelf -d main-exe | grep R.*PATH
readelf -d libcore.so | grep R.*PATH
If (as I suspect) main-exe has RUNPATH, and libcore.so doesn't, the right fix is to add -rpath=.... to the link line for libcore.so.
You can also gain a lot of insight into the dynamic loader's operation by using the LD_DEBUG environment variable:
LD_DEBUG=libs ./main-exe
will tell you which directories the loader is searching for which libraries, and why.
I cannot find out why it's not working
Yes, you can. You haven't spent nearly enough effort trying.
Your very first step should be to print the value of dlerror() when dlopen fails. The next step is to use LD_DEBUG. And if all that fails, you can actually debug the runtime loader itself -- it's open-source.
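For illustration, a minimal sketch of that first step (the helper name loadOrReport is mine, not from the question):

#include <dlfcn.h>
#include <iostream>
#include <string>

// Print dlerror() whenever dlopen() returns NULL, so a failure is never silent.
void* loadOrReport(const std::string& plugin)
{
    void* handle = dlopen(plugin.c_str(), RTLD_LAZY);
    if (!handle)
        std::cerr << "dlopen(" << plugin << ") failed: " << dlerror() << std::endl;
    return handle;
}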
I managed to find a fix for this issue. I don't quite understand the inner workings of my solution, but it works. If someone with a better understanding than my very limited experience with shared libraries could comment on my answer with the real explanation, I'm sure it would help future viewers of this question.
What I was doing was dlopen("libcore.so"). I simply changed it to an absolute path, dlopen("/home/user/project/libcore.so"), and it now works. I have not yet tried relative paths, but it appears we should always pass a relative or absolute path to dlopen instead of just the filename.
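For reference, a minimal hedged sketch of how loadPlugin could build an absolute path before calling dlopen (pluginDir is a hypothetical member holding, for example, the executable's directory):

bool PluginManager::loadPlugin(const boost::filesystem::path &iPlugin)
{
    // pluginDir: hypothetical member pointing at the directory that holds the plugins.
    // Resolving the bare file name against it means the lookup no longer depends
    // on the loader's search path.
    const boost::filesystem::path full = boost::filesystem::absolute(iPlugin, pluginDir);
    void* handle = dlopen(full.string().c_str(), RTLD_LAZY);
    if (!handle) {
        std::cerr << "dlopen failed: " << dlerror() << std::endl;
        return false;
    }
    return true;
}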
If an absolute path helped, maybe the problem is with the local dependencies of your shared libraries. In other words, maybe libcontroller.so depends on libos.so or another of your libraries but cannot find it. The Linux loader assumes that shared libraries are placed in /lib, /usr/lib, etc. You need to specify where to find your dynamic libraries with the LD_LIBRARY_PATH environment variable.
Try to run your app this way:
LD_LIBRARY_PATH=/path/to/your/executable/and/modules ./yourapp
bool PluginManager::loadPlugin(const boost::filesystem::path &iPlugin) {
    void* plugin_file = dlopen(iPlugin.string().c_str(), RTLD_LAZY);
    std::cout << (plugin_file ? "success" : "failed") << std::endl;
    return true;
}
The flags to use with dlopen depend upon the distro. I think Debian and derivatives use RTLD_GLOBAL | RTLD_LAZY, while Red Hat and derivatives use RTLD_GLOBAL. Or maybe it is vice-versa. And I seem to recall Android uses RTLD_LOCAL, too.
You should just try both to simplify loading on different platforms:
bool PluginManager::loadPlugin(const boost::filesystem::path &iPlugin) {
    void* plugin_file = dlopen(iPlugin.string().c_str(), RTLD_GLOBAL);
    if (!plugin_file) {
        plugin_file = dlopen(iPlugin.string().c_str(), RTLD_GLOBAL | RTLD_LAZY);
    }
    const bool success = plugin_file != NULL;
    std::cout << (success ? "success" : "failed") << std::endl;
    return success;
}
What happens is that libcore gets loaded properly, but then all other libraries fail with no error message
This sounds a bit unusual. It suggests that the additional libraries from the sub-projects are not in the loader's search path.
You should ensure the additional libraries are in the search path. Put them next to libcore.so in the filesystem, since loading libcore.so seems to work as expected.
If they are already next to libcore.so, then you need to provide more information, such as the failure from loadPlugin, the RUNPATH used (if present), and the output of ldd.
but then all other libraries fail with no error message. I cannot find out why it's not working.
As #Paul stated in the comments, the way to check for a dlopen error is with dlerror. It is kind of a crappy way to do it since you can only get a text string and not an error code.
The dlopen man page is at http://man7.org/linux/man-pages/man3/dlopen.3.html, and it says:
RETURN VALUE
On success, dlopen() and dlmopen() return a non-NULL handle for the
loaded library. On error (file could not be found, was not readable,
had the wrong format, or caused errors during loading), these
functions return NULL.
On success, dlclose() returns 0; on error, it returns a nonzero value.
Errors from these functions can be diagnosed using dlerror(3).
Related
I am trying to use dlopen() and dlinfo() to get the path of my executable. I am able to get the path to a .so by using the handle returned by dlopen(), but when I use the handle returned by dlopen(NULL, RTLD_LAZY), the path I get back is empty.
// Needs <dlfcn.h> and <link.h>; RTLD_DI_LINKMAP fills in a struct link_map*.
void* executable_handle = dlopen(nullptr, RTLD_LAZY);
if (nullptr != executable_handle)
{
    link_map* plink = nullptr;
    int r = dlinfo(executable_handle, RTLD_DI_LINKMAP, &plink);
    if (0 == r)
    {
        printf("path: %s\n", plink->l_name);   // prints an empty string here
    }
}
Am I wrong in my assumption that the handle for the executable can be used in the dlinfo functions the same way a .so handle can be used?
Am I wrong in my assumption that the handle for the executable can be used in the dlinfo functions the same way a .so handle can be used?
Yes, you are.
The dynamic linker has no idea which file the main executable was loaded from. That's because the kernel performs all mmaps for the main executable, and only passes a file descriptor to the dynamic loader (whose job it is to load the other required libraries and start the executable running).
I'm trying to replicate some of the functionality of GetModuleFileName() on Linux
There is no reliable way to do that. In fact the executable may no longer exist anywhere on disk at all -- it's perfectly fine to run the executable and remove the executable file while the program is still running.
Also hard links mean that there could be multiple correct answers -- if a.out and b.out are hard linked, there isn't an easy way to tell whether a.out or b.out was used to start the program running.
Your best options probably are reading /proc/self/exe, or parsing /proc/self/cmdline and/or /proc/self/maps.
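A hedged sketch of the /proc/self/exe option (Linux-specific; the helper name is illustrative):

#include <unistd.h>
#include <limits.h>
#include <string>
#include <iostream>

// Read the /proc/self/exe symlink to get the running executable's path.
// readlink() does not NUL-terminate, so the length it returns is used explicitly.
std::string executablePath()
{
    char buf[PATH_MAX] = {};
    ssize_t len = readlink("/proc/self/exe", buf, sizeof(buf) - 1);
    return len > 0 ? std::string(buf, static_cast<size_t>(len)) : std::string();
}

int main()
{
    std::cout << "path: " << executablePath() << "\n";
}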
The BSD utility library has a function getprogname(3) that does exactly what you want. I'd suggest that is more portable and easier to use than procfs in this case.
I'm developing a project in DevC++, which uses MinGW64, on Windows 7 (I don't know if this can be related to my issue).
I had a problem compiling a C++ program where I call the function GetFileVersionInfoSize(); the error is:
main.cpp:(.text+0x51): undefined reference to `GetFileVersionInfoSizeA'
After two days of researching, I understood that I have to pass the "version.lib" file to the linker, but it is missing on my computer; I have searched for it everywhere.
I can't even find a download mirror on the web, so I'm asking: does anybody know where I can find version.lib? Maybe somewhere hidden on my PC or on the web? Maybe in a new installation of MinGW64? I don't know, since my installation of MinGW64 came with DevC++.
Thanks for reading.
Thanks to all of you for the suggestions you gave me.
This answer describes how to find Version.lib in this case.
The following shows how to use GetFileVersionInfoSize(), GetFileVersionInfo() and VerQueryValue() to get the Product Name of an executable file, which is why I needed Version.lib (actually called libversion.a on my machine).
LAST EDIT: I managed to achieve what I wanted; this is the code that works in my case:
// filename contains the path of the .exe whose Product Name we want
DWORD version_info_size = GetFileVersionInfoSize(filename, NULL);
if (version_info_size > 0) {
    BYTE *version_info_buffer = new BYTE[version_info_size];
    if (GetFileVersionInfo(filename, 0, version_info_size, version_info_buffer)) {
        char *product_name = NULL;
        UINT pLenFileInfo = 0;
        if (VerQueryValue(version_info_buffer, TEXT("\\StringFileInfo\\040904e4\\ProductName"),
                          (LPVOID*)&product_name, &pLenFileInfo))
            cout << product_name;
    }
    delete[] version_info_buffer;
}
Notice that if you want to compare product_name with another value, you have to compare the string contents rather than the pointer: build a std::string from it (string product_name_str = product_name) and compare that, otherwise the comparison compares pointers and will practically always return false. Or maybe just casting with (string)product_name would work; I should try.
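A tiny hedged illustration of that comparison ("MyProduct" is a placeholder value):

// product_name points into the version buffer; compare its contents, not the pointer.
std::string product_name_str = product_name;
if (product_name_str == "MyProduct") {
    // matched
}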
040904e4 is the language/code-page identifier in hex that I needed in order to do this; I found it thanks to the code in this answer.
Since it's an environment built for GCC, library names will follow *nix conventions:
Start with lib: libversion
Extension can be either .so or .a (depending on whether the library is dynamic or static)
Typically (depending on default MinGW installation), it should reside in ${MINGW_INSTALL_DIR}/x86_64-w64-mingw32/lib/libversion.a. One installation example is: f:\Install\Qt\Qt\Tools\mingw730_64\x86_64-w64-mingw32\lib\libversion.a.
According to [Archive.Web - MinGW]: HOWTO Specify the Location of Libraries for use with MinGW (Determining MinGW's Default Library Search Path section), it can be retrieved by:
[cfati#cfati-5510-0:/cygdrive/e/Work/Dev/StackOverflow]> x86_64-w64-mingw32-ld.exe --verbose | grep SEARCH_DIR | tr -s ' ;' \\012
SEARCH_DIR("=/usr/x86_64-w64-mingw32/lib")
SEARCH_DIR("=/usr/local/lib")
SEARCH_DIR("=/lib")
SEARCH_DIR("=/usr/lib")
But since it's a system library, you shouldn't care about its path; MinGW should find it automatically. All you have to do is pass it to the linker in the standard way, e.g. by adding -lversion to the link command.
Assume I have a folder with my program and another folder with an external library:
bin
    myprog.exe
etc
    lib.dll
    sublib.dll
In my case I want to load lib.dll from my main program myprog.exe. The problem is that lib.dll is linked with sublib.dll.
So I try to do it this way:
QCoreApplication a(argc, argv);
QLibrary lib;
QString path = "C:/etc/lib.dll";
a.addLibraryPath(path);
if (QLibrary::isLibrary(path)) {
    lib.setFileName(path);
    lib.load();
    if (lib.isLoaded())
        qDebug() << "Ok\n";
    else
        qDebug() << "Error " << lib.errorString() << "\n";
} else {
    qDebug() << "Not a library\n";
}
return a.exec();
After running the app I get the error:
cannot load library lib.dll the specified module could not be found
If I put both lib.dll and sublib.dll inside the bin directory it works without error. But that is not what I want to do.
I've tried
a.addLibraryPath("C:/etc");
but that doesn't work.
As I understand it, QCoreApplication::addLibraryPath() sets a search path for the Qt application, not a system-wide setting. So, in this case, lib.dll still can't find sublib.dll even though it is located in the same directory.
So my question is: how can I load an external shared library inside a Qt program when this library has its own dependencies?
That is a Windows issue. The DLL is looked up in the calling process's directory and then in the system PATH. The code contained in C:\etc\lib.dll runs in the calling process and, unless very specific logic is implemented, will behave according to the system rules.
Please refer to the MSDN article Dynamic-Link Library Search Order for details. If the source code for that lib.dll is available, it makes sense to examine its LoadLibrary calls. If no specific path is provided then:
The first directory searched is the directory containing the image
file used to create the calling process (for more information, see the
CreateProcess function). Doing this allows private dynamic-link
library (DLL) files associated with a process to be found without
adding the process's installed directory to the PATH environment
variable. If a relative path is specified, the entire relative path is
appended to every token in the DLL search path list. To load a module
from a relative path without searching any other path, use
GetFullPathName to get a nonrelative path and call LoadLibrary with
the nonrelative path. For more information on the DLL search order,
see Dynamic-Link Library Search Order.
Nothing prevents you from explicitly preloading the libraries that lib.dll depends on. Once pre-loaded, they're ready for use by any library that you'll subsequently open. After all, you know where they are so it's a simple matter to iterate them and attempt to load them. Due to possible dependencies between these libraries, you have to keep loading them until there's no more progress:
// Directory that holds the dependent DLLs (the example path from the question).
QString path = "C:/etc";
QSet<QString> libraries;
QDirIterator it{path, {"*.dll"}};
while (it.hasNext())
    libraries << it.next();

// Keep making passes over the remaining libraries until a full pass loads
// nothing new; this handles dependencies among the preloaded DLLs themselves.
bool progress = true;
while (progress) {
    progress = false;
    for (auto i = libraries.begin(); i != libraries.end();) {
        QLibrary lib{*i};
        if (lib.load()) {
            progress = true;
            i = libraries.erase(i);
        } else
            ++i;
    }
}
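With the dependencies preloaded, loading the library from the question should then succeed (the path is the example path from the question):

QLibrary lib{"C:/etc/lib.dll"};
if (!lib.load())
    qDebug() << "Error " << lib.errorString();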
It's either that or to use a PE library of your choice to build the dependency tree yourself and only open the necessary libraries, in dependency order.
Side note: you don't own C:\Windows and you should never write anything there (nor in any subfolder) in any sort of modern installer.
I'm trying to find a substitute for a call to "system" (from stdlib.h) in my C++ program.
So far I've been using it to call g++ in my program to compile and then link a variable number of source files in a directory chosen by the user.
Here is an example of approximately how the command looks: "C:/mingw32/bin/g++.exe -L"C:\mingw32\lib" [...]"
However, I have the problem that (at least with the MinGW compiler I'm using) I get the error "Command line is too long" when the command string gets too long.
In my case it was about 12000 characters long. So I probably need another way to call g++.
Additionally, I've read that you generally shouldn't use "system" anyway: http://www.cplusplus.com/forum/articles/11153/
So I'm in need for some substitute (that should also be as platform independent as possible, because I want the program to run on Windows and Linux).
I've found one candidate that generally looks quite well suited:
_execv / execv:
Platform independent, but:
a) http://linux.die.net/man/3/exec says "The exec() family of functions replaces the current process image with a new process image". So do I need to call "fork" first so that the C++ program isn't terminated? Is fork also available on Windows/MSVC?
b) Using "system", I've tested whether the return value was 0 to see if the source file could be compiled. How would this work with exec? If I understand the manpage correctly, will it only return the success of creating the new process and not the status of g++? And with which function could I suspend my program to wait for g++ to finish and get the return value?
All in all, I'm not quite sure how I should handle this. What are your suggestions? How do multiplatform programs like Java (Runtime.getRuntime().exec(command)) or the Eclipse C++ IDE solve this internally? What would you suggest I do to call g++ in a system-independent way, with as many arguments as I want?
EDIT:
Now I'm using the following code. I've only tested it on Windows so far, but at least there it seems to work as expected. Thanks for your idea, jxh!
Maybe I'll look into shortening the commands by using relative paths in the future. Then I would have to find a platform independent way of changing the working directory of the new process.
#ifdef WIN32
// spawnv/P_WAIT come from <process.h> on MinGW; it waits and returns the child's exit status.
int success = spawnv(P_WAIT, sCompiler.c_str(), argv);
#else
// Needs <unistd.h> for fork/execv and <sys/wait.h> for wait/WEXITSTATUS.
pid_t pid;
switch (pid = fork()) {
case -1:
    cerr << "Error using fork()" << endl;
    return -1;
case 0:
    execv(sCompiler.c_str(), argv);
    _exit(127);   // only reached if execv itself failed
default: {
    int status;
    if (wait(&status) != pid) {
        cerr << "Error using wait()" << endl;
        return -1;
    }
    int success = WEXITSTATUS(status);
}
}
#endif
You might get some traction with some of these command line options if all your files are in (or could be moved to) one (or a small number) of directories. Given your sample path to audio.o, this would reduce your command line by about 90%.
-Ldir
Add directory dir to the list of directories to be searched for `-l'.
From: https://gcc.gnu.org/onlinedocs/gcc-3.0/gcc_3.html#SEC17
-llibrary
Search the library named library when linking.
It makes a difference where in the command you write this option; the linker searches and processes libraries and object files in the order they are specified. Thus, 'foo.o -lz bar.o' searches library 'z' after file 'foo.o' but before 'bar.o'. If 'bar.o' refers to functions in 'z', those functions may not be loaded.
The linker searches a standard list of directories for the library, which is actually a file named 'liblibrary.a'. The linker then uses this file as if it had been specified precisely by name.
The directories searched include several standard system directories plus any that you specify with '-L'.
Normally the files found this way are library files -- archive files whose members are object files. The linker handles an archive file by scanning through it for members which define symbols that have so far been referenced but not defined. But if the file that is found is an ordinary object file, it is linked in the usual fashion. The only difference between using an '-l' option and specifying a file name is that '-l' surrounds library with 'lib' and '.a' and searches several directories.
From: http://gcc.gnu.org/onlinedocs/gcc-3.0/gcc_3.html
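As a hedged illustration of how this could shorten the execv argument list from the question (the directory and library names below are placeholders, and -l only helps for libraries, not plain object files):

// Instead of one long absolute path per library, pass a search directory
// once with -L and short -l names that expand to lib<name>.a.
const char* argv[] = {
    "g++",
    "-o", "output.exe",
    "main.o",
    "-LC:/mingw32/lib",   // directory searched for the -l entries below
    "-laudio",            // resolves to libaudio.a in a -L directory (placeholder)
    nullptr
};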
Here's another option, perhaps closer to what you need. Try changing the directory before calling system(). For example, here's what happens in Ruby; I'm guessing it would act the same in C++.
> system('pwd')
/Users/dhempy/cm/acts_rateable
=> true
> Dir.chdir('..')
=> 0
> system('pwd')
/Users/dhempy/cm
=> true
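A hedged C++ equivalent of the same idea (chdir on POSIX; on Windows _chdir from <direct.h> plays the same role):

#include <cstdlib>
#include <iostream>
#include <unistd.h>

int main()
{
    // Change the working directory first so the subsequent command
    // can refer to files with shorter relative paths.
    if (chdir("..") != 0) {
        std::cerr << "chdir failed\n";
        return 1;
    }
    return std::system("pwd");   // runs with the parent directory as CWD
}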
If none of the other answers pan out, here's another: you could set an environment variable to the path of the directory, then use that variable before each file that you link in.
I don't like this approach much, as you have to tinker with the environment, and I don't know if it would actually affect the command-line limit. It may be that the limit applies after the command is interpolated. But it's something to think about, regardless.
I am compiling an application which consists of several projects that generate dynamic libraries (shared libraries on Linux). Of course, the different projects link against the others that I've compiled. I am using CodeBlocks 10 under Ubuntu with the GCC compiler.
Depending on the arguments specified by the user, different libraries shall be loaded, so in my main application I load the appropriate library with the following line:
dll = dlopen("my_library.so", RTLD_LAZY);
As specified in the documentation, dlopen automatically loads the libraries that the opened library depends on, and the process is done recursively.
The problem is that right after my dlopen I call dlerror() in order to understand what's going on, and I get the following error:
../../../../gccDebug/os.so : Cannot open shared object file: No such
file or directory.
Just looking at the error, I completely understand it: it is looking two folders further back than it should. The question is why?
What I mean is: I use relative paths to explicitly load the shared libraries in the projects. In my main application, the working directory is ../../gccDebug.
I load, using dlopen, mylibrary.so, which explicitly loads (in project options) ../../gccDebug/gui.so. This gui.so then also explicitly loads (in project options) ../../gccDebug/os.so.
What seems to be happening is that it appends the paths, so that on the 3rd "iteration" it is looking for a path that goes more folders back than it should. If the first recursive loading (gui.so) works just fine, why does the 2nd recursive loading (os.so) give a strange path?
What is wrong with the recursive loading of the shared libraries using dlopen function?
Each path should be relative to the library doing the loading, so for ../../gccDebug/gui.so to load something in the same directory it should load ./gui.so
The extra ../.. are because you've told something in ../../gccDebug to load something in ../../gccDebug relative to itself, which relative to your program's working directory is ../../gccDebug/../../gccDebug, i.e. ../../../gccDebug.
Do that a few times for different libraries and you'll get the number of unwanted .. components you're seeing.
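If the goal is for each library to resolve its plugins relative to its own location rather than the process working directory, one hedged sketch uses dladdr (a glibc extension, available by default with g++; the helper names below are illustrative, not from the question):

#include <dlfcn.h>
#include <string>

// Find the directory containing the shared object this code was compiled into,
// by asking the loader which object owns the address of this function.
static std::string directoryOfThisLibrary()
{
    Dl_info info = {};
    if (dladdr(reinterpret_cast<void*>(&directoryOfThisLibrary), &info) && info.dli_fname) {
        std::string path(info.dli_fname);
        std::string::size_type slash = path.rfind('/');
        if (slash != std::string::npos)
            return path.substr(0, slash);
    }
    return ".";
}

// Load a plugin that sits next to this library, regardless of the caller's CWD.
void* loadSibling(const std::string& name)
{
    return dlopen((directoryOfThisLibrary() + "/" + name).c_str(), RTLD_LAZY);
}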
Are you sure that gui.so actually loaded? Could it be that mylibrary.so had copied the ../../gccDebug/os.so dependency from gui.so at link-time and so at run-time was trying to load that before loading gui.so?
Have you used ldd mylibrary.so to see what it tries to find? You can also use readelf -d mylibrary.so to see the contents of the library's dynamic section.