I have a few libraries defined in C++ source code. My problem is that whenever a library is removed or a new one is added, something has to call that library's "Initialize" function.
An example: a class for network sockets called CLuaNet has a function CLuaNet::Initialize(State). How do I call this function from my project, given that I cannot predict the library name? If possible I would like this "linking" to be done at compile time, something like a macro.
Whenever a new instance of a Lua environment is opened, every library's initialize function has to be called with the Lua state as an argument.
The project runs on different architectures (x86/x64/ARMv6/ARMv7) and operating systems, so building compiled libraries for every possible platform and OS combination (Windows DLLs, Linux SOs, etc.) is not feasible. This is intended to be a server application.
EDIT: I am not using DLLs or SOs - everything is compiled into one executable for portability.
Note: I don't have a lot of experience in project design/management. I'd like to hear opinions and tips about my approach.
The normal mechanism for coping with name finding is to create another, well-named function which returns enough metadata to let you call the library.
extern "C" DLLEXPORT const MetaInfo & InfoFunction()
{
    static MetaInfo luaNet(
        CLuaNet::Initialize,
        CLuaNet::Finalize,
        CLuaNet::SomethingElse );
    return luaNet; // Give main application information on how to call me
}
Where DLLEXPORT declares a function to be exported from the DLL/shared object.
MetaInfo is compiled into the DLL/shared object and has a set of well-defined slots which perform useful functions.
e.g.
struct MetaInfo {
    int (*initializeFunction)( lua_State * L );
    int (*closeFunction)( lua_State * L );
    int (*utilityFunction)( lua_State * L );
    MetaInfo( int (*init)( lua_State * ),
              int (*fini)( lua_State * ),
              int (*other)( lua_State * ) ) :
        initializeFunction( init ),
        closeFunction( fini ),
        utilityFunction( other ) {}
};
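To show how the pieces fit together in a statically linked build (matching the single-executable setup from the question), here is a self-contained sketch. lua_State is stubbed out, and CLuaNetInfo, g_libraries, and initializeAll are illustrative names, not part of any real API:

```cpp
#include <cassert>

struct lua_State;  // opaque stand-in for the real Lua state

struct MetaInfo {
    int (*initializeFunction)(lua_State*);
    int (*closeFunction)(lua_State*);
    MetaInfo(int (*init)(lua_State*), int (*fini)(lua_State*))
        : initializeFunction(init), closeFunction(fini) {}
};

// Hypothetical library: only its info function needs a well-known signature.
struct CLuaNet {
    static int Initialize(lua_State*) { return 1; }
    static int Finalize(lua_State*)   { return 0; }
};

const MetaInfo& CLuaNetInfo() {
    static MetaInfo info(CLuaNet::Initialize, CLuaNet::Finalize);
    return info;
}

// The host keeps one table of info functions; adding or removing a library
// means touching exactly one entry here, nothing else.
using InfoFn = const MetaInfo& (*)();
InfoFn g_libraries[] = { CLuaNetInfo };

int initializeAll(lua_State* L) {
    int initialized = 0;
    for (InfoFn f : g_libraries)
        initialized += f().initializeFunction(L);
    return initialized;
}
```

The table is the single place that must change per library, which is as close to "automatic" as static linking gets without a registration trick.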
Related
For the C++ library I am currently developing, I have seen the advantages of a plug-in system based on shared libraries. A feature becomes available to the user of the library only if one of the shared libraries in a directory scanned at initialization time offers it.
Using dlopen, the shared libraries are searched for two symbols: a function that returns the string naming the feature they implement, and a create function that instantiates the class and returns a pointer to the base class. Something like this:
In Library A
#include "Instance.hpp"
extern "C" const char* type() {
    return "InstanceA";
}
//... class InstanceA definition, inherits from Instance
extern "C" Instance* create() {
    return new InstanceA();
}
The Core library will scan the plug-ins directory and keep a map from string to pointer-to-create-function in order to create new instances.
This is really handy and a pretty standard thing to do. If the user code tries to instantiate an InstanceX but no shared library implements that type, an error is given; everything works perfectly.
But this framework will also be used for iOS development, and the App Store doesn't allow loading third-party shared objects. I would like to keep this modular plug-in structure even when loading a self-contained version of the library that statically links the plug-ins. Note that at the project management level this would be as easy as defining a variable in CMake that creates static versions of the plug-ins and links them statically. This would also exclude the dynamic loading part of the code.
What I am missing is how to invert the mechanism: while for shared objects the core library relies on the file system to learn about the possible types of instances that can be used, I don't know how to "register" them without changing a big part of the code, and without running into the static initialization order fiasco. It looks like the only way is to substitute the code that scans the types and the create functions with code that includes all the possible headers and has a big switch like
Instance* theInstance;
if (instanceRequired == "instanceA")
    theInstance = new InstanceA();
etc etc...
Do you have any thought on a way to avoid including all the headers and having to change the code in Core each time a new instance is added?
I do such things through those pesky static objects that call a register function in their constructors.
The ordering problem is dodged by making the map itself a local static, so it gets constructed on the first client call, wherever that comes from. (Threading and similar issues are dodged by not launching threads until main() has started; by that time all the important statics are forced.)
class RegMap; // the registry
RegMap& GetRegistry(); // the singleton access function; you can make separate const/non-const versions or provide another wrapper for registering

// implementation in core.cpp:
RegMap& GetRegistry()
{
    static RegMap m;
    return m;
}

// in a client ctor:
GetRegistry().Register( key, func );
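Fleshing that out, here is a minimal self-contained sketch of the pattern. RegMap, Registrar, InstanceA, and create are illustrative names; a real registry would hold whatever factory type the core needs:

```cpp
#include <cassert>
#include <map>
#include <string>

struct Instance {
    virtual ~Instance() {}
    virtual const char* type() const = 0;
};

using Factory = Instance* (*)();
using RegMap  = std::map<std::string, Factory>;

// Meyers singleton: constructed on first use, so it is always ready
// before any registering static object runs its constructor.
RegMap& GetRegistry() {
    static RegMap m;
    return m;
}

// Helper whose constructor performs the registration.
struct Registrar {
    Registrar(const std::string& key, Factory f) { GetRegistry()[key] = f; }
};

// A plug-in translation unit only needs one static Registrar object.
struct InstanceA : Instance {
    const char* type() const override { return "InstanceA"; }
};
Instance* createA() { return new InstanceA; }
static Registrar regA("InstanceA", createA);

// Core-side creation by name; returns nullptr for unknown types.
Instance* create(const std::string& name) {
    RegMap::iterator it = GetRegistry().find(name);
    return it == GetRegistry().end() ? nullptr : it->second();
}
```

Because the registering objects live in the plug-ins' own translation units, the core never needs to include their headers; linking a plug-in in (or leaving it out) is the only switch.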
I have a program that statically links with several C++ libraries that export a few functions:
extern "C"
{
    KSrvRequestHandler* CreateRequestHandler( const char* name );
    bool DestroyRequestHandler( KSrvRequestHandler* handler );
    const char** ListRequestHandlerTypes();
}
The main program then calls these functions using GetProcAddress/dlsym:
#ifdef WIN32
    HINSTANCE hDll = GetModuleHandle( NULL );
    mCreateHandler   = GetProcAddress( hDll, createFuncName );
    mDestroyHandler  = GetProcAddress( hDll, destroyFuncName );
    mGetHandlerTypes = GetProcAddress( hDll, listFuncName );
#else // POSIX
    void* handle = dlopen( NULL, 0 );
    mCreateHandler   = dlsym( handle, createFuncName );
    mDestroyHandler  = dlsym( handle, destroyFuncName );
    mGetHandlerTypes = dlsym( handle, listFuncName );
    dlclose( handle );
#endif // !POSIX
So the key here is that I'm calling a function in my own main program using dynamic linking.
( Why I do this is beyond the scope of the question, but short answer: this is a plugin architecture, but I have some standard plugins that are linked directly into the main binary - but I still want to load them through the same plugin loading interface. E.g. for the built-in plugins I load them by passing in the current executable as the source of the plugin interfaces. )
Here is the problem: the linker doesn't know I'm going to need these functions and doesn't link them in.
How do I force these functions to be linked in? For a dynamic lib, exporting them is enough. But for an exe, even DLL-exported functions are discarded by the linker.
I know I can probably force linking by making the main binary assign these function addresses to something or some other similar hack. Is there a right way to do this?
UPDATE: So I have a solution that works, but it sure is ugly on the inside. Still looking for a better way.
So I have to somehow define the symbols I need in the object that loads the built-in interfaces. I don't think there is a way to force the linker to link in a symbol otherwise; e.g., there is no way that I know of to build a library with a function that is always linked in whether it looks needed or not. This is entirely at the discretion of the link step for the executable.
So in the executable I have a macro that defines the built-in interfaces I need. Each built-in plugin has a prefix to all of its interface functions so, at the top of the file I do:
DEFINE_BUILT_IN_PLUGIN( PluginOne )
DEFINE_BUILT_IN_PLUGIN( PluginTwo )
This will force the definitions of the functions I need. But the macro to do this is so ugly that I'm filled with feelings of rage and self-doubt (I've removed the trailing backslashes from the macros for readability):
#define FORCE_UNDEFINED_SYMBOL(x)
    void* _fp_ ## x ## _fp = (void*)&x;
    if ( ((ptrv) _fp_ ## x ## _fp * ( rand() | 1 )) < 1 )
        exit(0);
#define DEFINE_BUILT_IN_PLUGIN( PREFIX )
    extern "C"
    {
        KSrvRequestHandler* PREFIX ## CreateRequestHandler( const char* name );
        bool PREFIX ## DestroyRequestHandler( KSrvRequestHandler* handler );
        const char** PREFIX ## ListRequestHandlerTypes();
    }
    class PREFIX ## HandlerInterfaceMagic
    {
    public:
        PREFIX ## HandlerInterfaceMagic()
        {
            FORCE_UNDEFINED_SYMBOL( PREFIX ## CreateRequestHandler );
            FORCE_UNDEFINED_SYMBOL( PREFIX ## DestroyRequestHandler );
            FORCE_UNDEFINED_SYMBOL( PREFIX ## ListRequestHandlerTypes );
        }
    };
    PREFIX ## HandlerInterfaceMagic PREFIX ## HandlerInterfaceMagicInstance;
Since the compiler is an optimizing genius, in FORCE_UNDEFINED_SYMBOL I'm going to great lengths to trick it into linking an unreferenced function. That macro only works inside a function, so I have to create this bogus Magic class. There must be a better way.
Anyway - it does work.
I have seen at least two different approaches to solve similar tasks.
In Qt for example, you can have static plug-ins which need to be "imported" into the main executable by calling a specific macro:
https://qt-project.org/doc/qt-4.8/qtplugin.html#Q_IMPORT_PLUGIN
It creates a static instance of a custom class whose constructor calls an initialization function exported from the static plug-in.
The Poco guys force the export of a specific symbol from the static library using an extern "C" declaration on Linux and a pragma on Windows:
__pragma(comment (linker, "/export:CreateRequestHandler"))
The linkage to the static library is forced with the same extern "C" declaration on Linux and with a linker pragma on Windows:
__pragma(comment (linker, "/include:CreateRequestHandler"))
You can find the details in this blog post.
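A rough sketch of the Qt-style mechanism, reduced to its essentials. The macro and function names here are made up, not Qt's actual ones, and in this single-file demo the "plug-in" side lives in the same translation unit:

```cpp
#include <cassert>

// Hypothetical convention: each static plug-in exposes an extern "C"
// init_<name>() function from its archive.
extern "C" int init_myplugin();

// The import macro forces a reference to the plug-in's init function and
// runs it from a static object's constructor, before main() starts.
// The reference is what keeps the plug-in's object file from being dropped.
#define IMPORT_STATIC_PLUGIN(NAME)                          \
    struct StaticPluginImport_##NAME {                      \
        StaticPluginImport_##NAME() { init_##NAME(); }      \
    };                                                      \
    static StaticPluginImport_##NAME pluginImport_##NAME;

// "Plug-in" side, normally a separate translation unit linked in statically.
static int g_initialized = 0;
extern "C" int init_myplugin() { return ++g_initialized; }

// "Executable" side: one macro invocation per built-in plug-in.
IMPORT_STATIC_PLUGIN(myplugin)
```

This is essentially the same trick as the Magic class above, but packaged so the executable only writes one macro line per plug-in.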
Can't you provide a .def file to your main executable linker? That file should export the functions in question, which would keep them from being deleted.
I seem to remember that I did something like this long ago.
Problem: On Windows, a static lib contains an OBJ file that has a function marked __declspec(dllexport), but if that function is not used in the EXE, it does not get exported from the EXE. Other platforms have the same problem, but there we have linker options like --whole-archive / -force_load to make it work.
Links: Link1 Link2
The only solution that comes to mind is to not create static libs, but rather include all the code in the executable. Then:
1. It works on Windows.
2. It works on Linux without --whole-archive.
3. It works on Mac OS X without -force_load.
4. We also need not worry about whether 2 & 3 pull in dead code, exe bloat, etc.
This is the only solution until linkers become smart enough to throw out every unused symbol except those specifically marked for external consumption, i.e. marked to be exported.
I have managed to call a function from a DLL in C++, but I would like to pass a parameter to it.
I am currently using SDL and I would like to pass the SDL event 'event' to the function in my source. Example below:
// DLL
typedef void (*Events)(SDL_Event *event);

static __declspec(dllexport) void HandleEvents(Events events)
{
    events(&d2Main::event);
}

// Application
int main()
{
    d2Main::HandleEvents(&HandleEvents);
}

void HandleEvents(SDL_Event *events)
{
    if(events->type == SDL_QUIT)
        ; // Do stuff
}
d2Main is a class.
Is this possible?
Use the nm utility (GNU binutils) to look at the symbol table in the dll file, then call the function the same way you call a regular function, wrapping its declaration in an extern "C" { ... } block. The function has to be resolved by the linker, so you should also add -L./ -ldllfile as a switch to g++ or gcc.
I suppose d2Main::event is a static SDL_Event object. Yes, it is possible, as long as the definition of the SDL_Event structure as seen by the application and by the DLL is the same (including the padding and packing between elements inside the structure, for member alignment reasons).
Some things to remember when working across module boundaries:
Ensure structures are compiled using the same definition.
Prefer the same version of the same compiler. For example, VS 8 and VS 9 each ship with their own C/C++ runtime and therefore use their own individual heaps, so memory allocated by a module built with VS 8 cannot be deleted by a module compiled with VS 9. Often this problem manifests as failures like "my dll crashes when I assign memory to a std::string passed in as a reference to my dll's exported function".
Ensure you do not mix modules built for Release and Debug configurations (for the same reason: the release and debug CRTs use different heaps).
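A small sketch of the "allocate and free in the same module" rule, with both sides in one file for illustration (Handler, CreateHandler, and DestroyHandler are made-up names): the module that allocates is the only one that frees, through an exported destroy function.

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Plain-data struct crossing the boundary: no std::string members,
// so layout does not depend on the other side's CRT.
struct Handler { char name[32]; };

// "DLL" side: it owns both allocation and deallocation, so the host
// never frees memory that came from a different C runtime heap.
extern "C" Handler* CreateHandler(const char* name) {
    Handler* h = new Handler();
    std::snprintf(h->name, sizeof h->name, "%s", name);
    return h;
}

extern "C" void DestroyHandler(Handler* h) {
    delete h;  // same module, same heap as the matching new
}

// "Host" side: always pair Create with the matching Destroy.
std::string useHandler() {
    Handler* h = CreateHandler("net");
    std::string n(h->name);   // copy out before releasing
    DestroyHandler(h);        // never `delete h;` from the host side
    return n;
}
```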
So I'm writing a script interpreter. Basically, I want some classes and functions stored in a DLL, but I want the DLL to look for functions within the programs that link to it, like:
program dll
----------------------------------------------------
send code to dll-----> parse code
|
v
code contains a function,
that isn't contained in the DLL
|
list of functions in <------/
program
|
v
corresponding function,
user-defined in the
program--process the
passed argument here
|
\--------------> return value sent back
to the parsing function
I was wondering basically, how do I compile a DLL with gcc? Well, I'm using a windows port of gcc. Once I compile a .dll containing my classes and functions, how do I link to it with my program? How do I use the classes and functions in the DLL? Can the DLL call functions from the program linking to it? If I make a class { ... } object; in the DLL, then when the DLL is loaded by the program, will object be available to the program? Thanks in advance, I really need to know how to work with DLLs in C++ before I can continue with this project.
"Can you add more detail as to why you want the DLL to call functions in the main program?"
I thought the diagram sort of explained it... the program using the DLL passes a piece of code to the DLL, which parses it; if function calls are found in that code, the corresponding functions within the DLL are called. For example, if I passed "a = sqrt(100)", the DLL's parser would find the call to sqrt(), and within the DLL would be a corresponding sqrt() function which would calculate the square root of the argument passed to it; the return value would then be put into variable a, just like in any other language. But if a corresponding handler for sqrt() isn't found within the DLL (there would be a list of natively supported functions), the DLL would call a similar function residing in the program using it, to see if there are any user-defined functions by that name.
So, say you loaded the DLL into your program, giving it the ability to interpret scripts of this particular language. The program could ask the DLL to process single lines of code or hand it filenames of scripts to process. But if you wanted to add a command to the script language to suit the purpose of your program, you could set a boolean in the DLL telling it that you are adding functions to its language, and then create a function in your code listing the functions you are adding. The DLL would call it with the name of the function it wants; if that function is a user-defined one contained in your code, your function would call the corresponding function with the argument passed by the DLL and hand its return value back to the DLL, and if it didn't exist, it would return an error code or NULL or something. I'm starting to see that I'll have to find another way around this to make the function calls go one way only.
This link explains how to do it in a basic way.
In a big picture view, when you make a dll, you are making a library which is loaded at runtime. It contains a number of symbols which are exported. These symbols are typically references to methods or functions, plus compiler/linker goo.
When you normally build a static library, there is a minimum of goo and the linker pulls in the code it needs and repackages it for you in your executable.
In a dll, you actually get two end products (three really, just wait): a dll and a stub library. The stub is a static library that looks exactly like your regular static library, except that instead of executing your code, each stub is typically a jump instruction to a common routine. The common routine loads your dll, gets the address of the routine that you want to call, then patches up the original jump instruction to go there, so when you call it again you end up in your dll.
The third end product is usually a header file that tells you all about the data types in your library.
So your steps are: create your headers and code, build a dll, build a stub library from the headers/code/some list of exported functions. End code will link to the stub library which will load up the dll and fix up the jump table.
Compiler/linker goo includes things like making sure the runtime libraries are where they're needed, making sure that static constructors are executed, making sure that static destructors are registered for later execution, etc, etc, etc.
Now as to your main problem: how do I write extensible code in a dll? There are a number of possible ways. A typical one is to define a pure abstract class (aka an interface) that defines a behavior, and either pass an instance of that in to a processing routine, or create a routine for registering interfaces to do work; the processing routine then asks the registrar for an object to handle a piece of work for it.
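Here is a minimal sketch of the interface approach applied to the script-interpreter scenario. FunctionProvider, evaluate, and MyProvider are illustrative names, and both sides are in one file so it can run standalone:

```cpp
#include <cassert>
#include <cmath>
#include <string>

// Pure abstract class ("interface") defined by the DLL; the host implements
// it and passes an instance in, so lookups for unknown functions flow back
// to the host through virtual calls rather than link-time symbols.
struct FunctionProvider {
    virtual ~FunctionProvider() {}
    virtual bool lookup(const std::string& name, double arg, double& result) = 0;
};

// "DLL" side: the parser handles builtins itself, then falls back to the host.
double evaluate(const std::string& fn, double arg, FunctionProvider* host) {
    if (fn == "sqrt") return std::sqrt(arg);  // natively supported
    double r = 0;
    if (host && host->lookup(fn, arg, r)) return r;
    return -1;  // unknown function; report a proper error in real code
}

// "Host" side: user-defined functions live behind the interface.
struct MyProvider : FunctionProvider {
    bool lookup(const std::string& name, double arg, double& result) override {
        if (name == "twice") { result = 2 * arg; return true; }
        return false;
    }
};
```

The DLL never needs to know the host's function names at link time; it only needs the interface, which keeps the dependency one-way.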
On the detail of what you plan to solve, perhaps you should look at an extensible embedded language like Lua instead of building your own.
To your more specific focus.
A DLL is (typically?) meant to be complete in and of itself, or to explicitly know what other libraries to use to complete itself.
What I mean by that is, you cannot have a method implicitly provided by the calling application to complete the DLL's functionality.
You could, however, make the provision of methods from the calling app part of your API, thus keeping the DLL fully contained and making the passing of knowledge explicit.
How do I use the classes and functions in the DLL?
Include the headers in your code; when the module (exe or another dll) is linked, the dlls are checked for completeness.
Can the DLL call functions from the program linking to it?
Yes, but it has to be told about them at run time.
If I make a class { ... } object; in the DLL, then when the DLL is loaded by the program, will object be available to the program?
Yes, it will be available; however, there are some restrictions you need to be aware of. In the area of memory management it is important to either:
Link all modules sharing memory against the same memory management dll (typically the C runtime),
Ensure that memory is allocated and deallocated only in the same module, or
Allocate on the stack.
Examples!
Here is a basic idea of passing functions to the dll; however, in your case it may not be the most helpful, as you need to know up front what other functions will be provided.
// parser.h
struct functions {
    void (*fred)( int );
};
parse( string, functions );

// program.cpp
parse( "a = sqrt(); fred(a);", functions );
What you need is a way of registering functions (and their details) with the dll.
The bigger problem here is the details bit. But skipping over that, you might do something like wxWidgets does with class registration. When method_fred is constructed by your app, it will call the constructor and register with the dll through methodInfo. The parser can then look through methodInfo for the methods available.
// parser.h
class method_base { };

typedef method_base* (*factory)( string args );

class methodInfo {
    static void registerFactory( string name, factory f ); // "register" is a C++ keyword, so use another name
    static map<string, factory> m_methods;
};

// program.cpp
class method_fred : public method_base {
    static method_base* factory( string args );
    static methodInfo _methodinfo;
};
methodInfo method_fred::_methodinfo( "fred", method_fred::factory );
This sounds like a job for data structures.
Create a struct containing your keywords and the function associated with each one.
struct keyword {
    const char *keyword;
    int (*f)(int arg);
};

struct keyword keywords[max_keywords] = {
    { "db_connect", &db_connect },
};
Then write a function in your DLL that you pass the address of this array to:
plugin_register(keywords);
Then inside the DLL it can do:
keywords[0].f = &plugin_db_connect;
With this method, the code to handle script keywords remains in the main program while the DLL manipulates the data structures to get its own functions called.
Taking it to C++, make the struct a class instead that contains a std::vector or std::map or whatever of keywords and some functions to manipulate them.
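A sketch of that C++ variant, using a std::map keyed by keyword. Both "host" and "plugin" sides are in one file for illustration, and the function names are made up:

```cpp
#include <cassert>
#include <map>
#include <string>

// C++ version of the keyword table: the host owns the registry and the
// plugin installs its handlers through a function it exports.
using KeywordFn = int (*)(int);
using Registry  = std::map<std::string, KeywordFn>;

int db_connect_stub(int)          { return 0; }        // host's default handler
int plugin_db_connect(int arg)    { return arg + 1; }  // plugin's replacement

// Exported from the plugin; the host calls it right after loading.
extern "C" void plugin_register(Registry& r) {
    r["db_connect"] = plugin_db_connect;
}

// Host-side dispatch: look the keyword up and call its handler.
int dispatch(const Registry& r, const std::string& kw, int arg) {
    Registry::const_iterator it = r.find(kw);
    return it == r.end() ? -1 : it->second(arg);
}
```

The script-handling code in the main program never changes; plugins only mutate the map.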
Winrawr, before you go on, read this first:
Any improvements on the GCC/Windows DLLs/C++ STL front?
Basically, you may run into problems when passing STL strings around your DLLs, and you may also have trouble with exceptions flying across DLL boundaries, although it's not something I have experienced (yet).
You could always load the dll at runtime with LoadLibrary.
I have an application a part of which uses shared libraries. These libraries are linked at compile time.
At runtime the loader expects the shared object to be in LD_LIBRARY_PATH; if it is not found, the entire application fails with the error "unable to load shared libraries". Note that there is no guarantee the client will have the library; in that case I want the application to leave a suitable error message, and the independent part should still work correctly.
For this purpose I am using dlopen() and dlsym() to use the API in the shared library. The problem with this is that if I have a lot of functions in the API, I have to access them individually using dlsym() and pointers, which in my case is leading to memory corruption and crashes.
Are there any alternatives for this?
The common solution to your problem is to declare a table of function pointers, to do a single dlsym() to find it, and then call all the other functions through a pointer to that table. Example (untested):
// libfoo.h
struct APIs {
    void (*api1)(void);
    void *(*api2)(int);
    long (*api3)(int, void *);
};

// libfoo.cc
void fn1(void) { ... }
void *fn2(int) { ... }
long fn3(int, void *) { ... }

APIs api_table = { fn1, fn2, fn3 };

// client.cc
#include "libfoo.h"
...
void *foo_handle = dlopen("libfoo.so", RTLD_LAZY);
if (!foo_handle) {
    return false; // library not present
}
APIs *table = (APIs *)dlsym(foo_handle, "api_table");

table->api1();              // calls fn1
void *p = table->api2(42);  // calls fn2
long x = table->api3(1, p); // calls fn3
P.S. Accessing your API functions individually using dlsym and pointers does not in itself lead to memory corruption and crashes. Most likely you just have bugs.
EDIT:
You can use this exact same technique with a 3rd-party library. Create a libdrmaa_wrapper.so and put the api_table into it. Link the wrapper directly against libdrmaa.so.
In the main executable, dlopen("libdrmaa_wrapper.so", RTLD_NOW). This dlopen will succeed if (and only if) libdrmaa.so is present at runtime and provides all API functions you used in the api_table. If it does succeed, a single dlsym call will give you access to the entire API.
You can wrap your application with another one which first checks for all the required libraries; if something is missing it errors out nicely, and if everything is all right it execs the real application.
Use the following type of code:
class DynLib
{
public:
    /* All your functions */
    void fun1() {}
    void fun2() {}
    // ...
};

extern "C" DynLib* getDynLibPointer() // extern "C" so dlsym can find the unmangled name
{
    DynLib* x = new DynLib;
    return x;
}
Use dlopen() for loading this library at runtime, then use dlsym() to find and call getDynLibPointer(), which returns a DynLib object. From that object you can access all your functions, just as obj->fun1(), and so on.
This is, of course, a C++-style version of the struct method proposed earlier.
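The host-side pattern can be sketched like this; the dlopen/dlsym plumbing is elided so the example is self-contained, and the factory pointer is passed in directly where real code would obtain it from dlsym:

```cpp
#include <cassert>

// The library exposes one C-named factory; everything else is reached
// through virtual calls on the object it returns, so only one dlsym
// lookup is ever needed.
class DynLib {
public:
    virtual ~DynLib() {}
    virtual int fun1() { return 1; }
    virtual int fun2() { return 2; }
};

extern "C" DynLib* getDynLibPointer() { return new DynLib; }

// In real code this pointer would come from:
//   GetterFn getter = (GetterFn)dlsym(handle, "getDynLibPointer");
using GetterFn = DynLib* (*)();

int useLibrary(GetterFn getter) {
    DynLib* obj = getter();
    int sum = obj->fun1() + obj->fun2();
    delete obj;   // fine here; across a real boundary, prefer an exported destroy function
    return sum;
}
```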
You are probably looking for some form of delayed library loading on Linux. It's not available out of the box, but you can easily mimic it by creating a small static stub library that tries to dlopen the needed library on the first call to any of its functions (emitting a diagnostic message and terminating if dlopen fails) and then forwards all calls to it.
Such stub libraries can be written by hand, generated by project/library-specific script or generated by universal tool Implib.so:
$ implib-gen.py libxyz.so
$ gcc myapp.c libxyz.tramp.S libxyz.init.c ...
Your problem is that the resolution of unresolved symbols is done very early on. On Linux, I believe data symbols are resolved at process startup and function symbols are resolved lazily. Therefore, depending on which symbols are unresolved and on what sort of static initialization you have going on, you may not get a chance to get in with your own code.
My suggestion would be to have a wrapper application that traps the return code/error string "unable to load shared libraries" and converts it into something more meaningful. If this is generic, it will not need updating every time you add a new shared library.
Alternatively, you could have your wrapper script run ldd and parse the output; ldd will report all the libraries that could not be found for your particular application.