In file gperftools-2.2.1/src/gperftools/malloc_extension.h, it reads:
// Extra extensions exported by some malloc implementations. These
// extensions are accessed through a virtual base class so an
// application can link against a malloc that does not implement these
// extensions, and it will get default versions that do nothing.
//
// NOTE FOR C USERS: If you wish to use this functionality from within
// a C program, see malloc_extension_c.h.
My question is how exactly can I access these extensions through a virtual base class?
Usually, to load a class from a dynamic library, I would need to write a base class which allows me to get an instance of the wanted class and its functions through polymorphism, as described here.
However, to do so there must be some class factory functions available in the API, but there are no such functions in any of the tcmalloc files. Moreover, I would also need to load the tcmalloc library with dlopen(), which is not recommended according to the install note:
...loading a malloc-replacement library via dlopen is
asking for trouble in any case: some data will be allocated with one malloc, some with another.
So clearly accessing the extensions in the typical way mentioned above is not an option. I can get away with using the C versions as declared in malloc_extension_c.h, but I just wonder if there is any better solution.
I managed to load the malloc extensions via some 'hack', which is not as clean as I would prefer, but it gets the job done. Here is the (temporary) solution for those who are interested.
First, most of these malloc extension functions behave much like static functions, in the sense that they are only ever called on the current instance, e.g. to call the GetMemoryReleaseRate() function for the current process you just call MallocExtension::instance()->GetMemoryReleaseRate(). Therefore we don't need to create a base class or obtain an instance of the MallocExtension class ourselves to call these functions.
For the example above, I'd just create a standalone function getMemoryReleaseRate() which simply calls the required function when it gets called, as below:
extern "C" double getMemoryReleaseRate()
{
    // extern "C" keeps the symbol name unmangled so dlsym() can find it.
    return MallocExtension::instance()->GetMemoryReleaseRate();
}
This function can be inserted directly into a source file, e.g. tcmalloc.cc, or, if you prefer not to edit the tcmalloc source every time there is a new version, added through your makefile so that it is attached to the source when it is compiled.
Now in your code, you can call the MallocExtension function through the 'facade' function you have created, looked up with dlsym(), e.g. as below:
typedef double (*getMemoryReleaseRate_t)();
double rate = ((getMemoryReleaseRate_t)dlsym(RTLD_DEFAULT, "getMemoryReleaseRate"))();
Simply including malloc_extension.h and calling MallocExtension::instance()->GetMemoryReleaseRate(); would work too. There is no need to modify tcmalloc for that.
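A minimal sketch of that direct approach, assuming the program is linked against tcmalloc (e.g. with -ltcmalloc):
#include <gperftools/malloc_extension.h>
#include <cstdio>

int main()
{
    // Query the current release rate via the singleton and print it.
    double rate = MallocExtension::instance()->GetMemoryReleaseRate();
    std::printf("memory release rate: %f\n", rate);
    return 0;
}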
Related
Pretty much like the title says, I want to split some parts of my Qt application into plugins, so I
can add new functionalities at runtime. Ideally, plugins would be compiled separately and put into a
dedicated path for plugins; when the application launches, installed extensions are automatically
loaded, or can be reloaded at the user request at any time.
I should mention that the objects I want to put into plugins are not QObjects, but if it can make
the solution simpler it's acceptable that they inherit from QObject.
How can I do that? I want the simplest solution that's portable and doesn't require anything else
than Qt (no external dependencies).
Although I'm answering my own question, I'm more than interested in hearing others'!
For a start, you need to have a common interface among your plugins. Here's an example:
class MyPlugin
{
public:
virtual ~MyPlugin() {} // Needs to be virtual. Important!
// Put here your method(s)
virtual void frobnicate() = 0;
};
Do not name your interface like this, though. If your plugins represent video codecs, name it
"VideoCodec", for example. Some prefer to put an "I" before interface names (e.g. IVideoCodec).
Also, some people would tell you to have public methods calling protected virtuals, but that's not
strictly necessary here.
Why an interface? That's because it's the only way the application can use plugins without knowing
the classes themselves beforehand. This means that because the application doesn't know the
classes, the plugin must allow creating the plugin component via a factory. In fact, the only
required function to declare is a factory function that creates a fresh instance of the "plugin".
This factory function could be declared as such:
extern "C" std::unique_ptr<MyPlugin> MyPlugin_new();
(You need extern "C", otherwise you'll get trouble with QLibrary because of C++ name mangling ―
see below)
The factory function need not be without parameters, but the parameters must make sense for all types
of plugins. This could be a hashtable or a file containing general configuration information, or
even better, an interface for a configuration object, for instance.
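As a hedged sketch of what such a parameterised factory could look like (PluginConfig is a hypothetical configuration interface, not something defined above; MyPlugin is the interface from the example):
#include <memory>
#include <QString>

class PluginConfig
{
public:
    virtual ~PluginConfig() {}
    // Hypothetical accessor for general configuration values.
    virtual QString value(const QString &key) const = 0;
};

extern "C" std::unique_ptr<MyPlugin> MyPlugin_new(const PluginConfig &config);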
Now the loading part. The easiest way is to use a QDirIterator initialized to the plugin
directory, iterate through all files and try to load them. Something along the lines of...
void load_plugins_from_path(const QString &plugin_dir)
{
    QDirIterator it(plugin_dir, QDir::Files | QDir::Readable);
    while (it.hasNext()) {
        try_load_plugin(it.next());
    }
}
(it's written like it's a function, but it should be a method)
Do not try in any way to filter the files by extension or by using the QDir::Executable flag: this
will needlessly reduce the portability of the program, since each OS has its own file extensions, and QDir::Executable only works on Unices (probably because there's no exec bit on Windows).
Here, the method load_plugins_from_path just loads plugins from one given path; the caller may
invoke that method over the elements of a list containing all the paths to search for plugins, for
example. try_load_plugin may be defined like this:
void try_load_plugin(const QString &filename)
{
    QLibrary lib(filename);
    auto factory = reinterpret_cast<decltype (MyPlugin_new) *>(lib.resolve("MyPlugin_new"));
    if (factory) {
        std::unique_ptr<MyPlugin> plugin(factory());
        // Do something with "plugin", e.g. store in a std::vector
    }
}
decltype is used on MyPlugin_new so we don't have to specify its type
(std::unique_ptr<MyPlugin> (*)()), and using it with auto will save you the trouble of changing
the code more than it needs to be, should you change the signature of MyPlugin_new.
This method just tries to load a file as a library (whether it's a valid library file or not!) and
attempts to resolve the required function, with resolve() returning nullptr if either we're not dealing with a
valid library file or the requested symbol (our function) doesn't exist. Note that because we do the
search directly in a dynamic library, we must know the exact name of the entity in that library.
Because C++ mangles names, and that mangling is dependent on the implementation, the only sensible
thing is to use extern "C" functions. Don't worry though: extern "C" will only prevent
overloading of that function, but otherwise all C++ can be used inside of that function. Also, even
though the factory function is not inside any namespace, it won't collide with other factory
functions in other libraries, because we use explicit linking; that way, we can have
MyPlugin_new from plugin A and MyPlugin_new from plugin B, and they will live at separate
addresses.
Finally, if your set of plugins is too diverse to be expressed by one interface, one solution is to
simply define (possibly) multiple factories inside of your plugins, each returning a pointer to a
different kind of interface.
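A hedged sketch of that idea, reusing the "VideoCodec" name from earlier plus a hypothetical "AudioCodec" interface: one plugin library simply exports one factory per interface it implements.
#include <memory>

class VideoCodec;   // interfaces defined along the same lines as MyPlugin above
class AudioCodec;

extern "C" std::unique_ptr<VideoCodec> VideoCodec_new();
extern "C" std::unique_ptr<AudioCodec> AudioCodec_new();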
Qt already has a class called QPluginLoader that does what you're trying to achieve.
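For completeness, a hedged sketch of the QPluginLoader route (it assumes MyPlugin has been registered with Q_DECLARE_INTERFACE and that the plugin's QObject subclass declares Q_INTERFACES; the IID string is hypothetical):
#include <QPluginLoader>
#include <QObject>

// In a header shared by the host and the plugins:
// #define MyPlugin_iid "org.example.MyPlugin"
// Q_DECLARE_INTERFACE(MyPlugin, MyPlugin_iid)

void load_with_qpluginloader(const QString &filename)
{
    QPluginLoader loader(filename);
    QObject *root = loader.instance();          // nullptr if this isn't a Qt plugin
    if (MyPlugin *plugin = qobject_cast<MyPlugin *>(root)) {
        plugin->frobnicate();
    }
}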
I have a library which has a basic class which is used extensively by the particular library in question (say library_1).
namespace library_1 {
    class some_class {
    };
}
I want this library to use another version of this class that I am defining instead.
namespace my_own {
    class some_class {
    };
}
my_own::some_class and library_1::some_class are going to have the same public interface (but different data members so they are not exactly dynamic castable). So I want to be able to compile this library replacing just this one class. This is doable.
The complication in this whole process, however, is that I have a second library (whose source code I do not have access to, call it library_2) which makes use of the first library (including accesses to some_class).
My main executable needs to access both library_2 (which is compiled against the original library) and a different version of the library_1 with this some_class replaced.
I know this is a complicated situation but what is the best way to achieve this (from a code perspective and about how to maintain this in version control)?
What you can do is expose just the API for the part where you need to use your replacement class, and compile the corresponding portion into a dynamically linked library, statically resolving all symbols to the library you are meddling with. Obviously, the meddled-with objects must not escape this interface. With this, your program can effectively use two conflicting implementations of the same library, although they won't be able to share objects. Essentially, this is how COM exposes its interfaces, and the technique works on platforms other than Windows as well, although I couldn't state off-hand the steps needed to create a shared library that does this on a UNIX system.
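A minimal sketch of the idea with hypothetical names (facade_* and do_work are made up): the facade and the patched library_1 are built into their own shared library, and only an opaque handle crosses the boundary, so my_own::some_class is never seen by code built against the original library_1.
// facade.h -- the only header the rest of the program sees
extern "C"
{
    typedef void *facade_handle;

    facade_handle facade_create();
    void facade_do_work(facade_handle h);
    void facade_destroy(facade_handle h);
}

// facade.cpp -- compiled into the facade shared library together with the
// patched library_1, with library_1's symbols resolved statically there:
// facade_handle facade_create()          { return new library_1::some_class(); }
// void facade_do_work(facade_handle h)   { static_cast<library_1::some_class *>(h)->do_work(); }
// void facade_destroy(facade_handle h)   { delete static_cast<library_1::some_class *>(h); }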
I want to implement some custom library functions in Linux. For example, I want to implement my own pthread_mutex_lock, pthread_mutex_unlock, malloc and free functions. I've read LD_PRELOAD can be used to use your own custom functions, although I haven't got into the details.
But I have one question, I also want to use the original functions within my new implementations. What would be the trick to do that, as both would have the same names?
You could use the dlopen function to open the library you are replacing (or use RTLD_NEXT if it is already loaded, see the comments), and then use the dlsym function to find the address of the function in that library that you want to call.
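A hedged sketch of that approach for malloc on Linux/glibc (build it into a shared object with -fPIC -shared -ldl and load it via LD_PRELOAD):
#ifndef _GNU_SOURCE
#define _GNU_SOURCE            // for RTLD_NEXT
#endif
#include <dlfcn.h>
#include <stddef.h>

extern "C" void *malloc(size_t size)
{
    // Look up the next "malloc" in the search order, i.e. the libc one.
    // (glibc's dlsym may itself allocate, so a production interposer has to
    // guard against that recursion; this sketch ignores it.)
    static void *(*real_malloc)(size_t) =
        reinterpret_cast<void *(*)(size_t)>(dlsym(RTLD_NEXT, "malloc"));
    // ... your own bookkeeping goes here ...
    return real_malloc(size);
}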
How would I go about calling an unexported function in Win32 C++?
Calling unexported functions that are defined in the same module (DLL/EXE) as your code is easy: just call them like any other C++ function. Obviously this isn't what you're asking about. If you want to call unexported functions in a different module, you need to find out their addresses somehow.
One way to do this is to have the first module call an exported function in the second module which returns a function pointer. (Or: a struct containing function pointers, a pointer to an instance of a class, etc.) Think factory pattern.
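A hedged sketch of that idea (all names are hypothetical): the DLL exports a single accessor returning pointers to its unexported helpers.
// In the DLL:
static int internal_helper(int x) { return x * 2; }   // deliberately not exported

struct ModuleApi
{
    int (*helper)(int);
};

extern "C" __declspec(dllexport) const ModuleApi *GetModuleApi()
{
    static const ModuleApi api = { &internal_helper };
    return &api;
}

// In the EXE, after linking against the import library (or GetProcAddress):
// const ModuleApi *api = GetModuleApi();
// int y = api->helper(21);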
Another way is to export a registration function from the first module and have the second module's initialization code call it, passing it pointers to unexported functions along with some sort of identifying info. (Better also have a corresponding unregistration function which is called before the second module is unloaded.)
Yet another way is to grovel through the debug symbols using dbghelp.dll. This would not be recommended for a real-world application because it would require distributing debug symbols and would be extremely slow, not to mention overly complex.
Additionally to bk1e's answer, there's still another method (not recommended either).
Obtain the relative address of that function in the DLL (e.g. via disassembly). This has to be done manually and before compiling.
In the program, you now have to obtain the start address of the DLL in memory (for example using an exported function and some calculation).
Now you can directly call that function using the relative address of the function plus the start address of the DLL.
I don't recommend this, though. It works only with one specific version of that DLL. Any recompile and the address may change, or that function may no longer be needed and get deleted. There must be a reason why this function is NOT exported. In general, you are trying to achieve something the author of the library intentionally did not want you to do, and that's "evil" most of the time.
You mentioned the IDA name. This name includes the start address.
No two ways about it, you'll have to study the disassembly to figure out what gets pushed on the stack, and how it's used to determine the types.
This question is related to "How to make consistent dll binaries across VS versions ?"
We have applications and DLLs built with VC6 and a new application built with VC9. The VC9-app has to use DLLs compiled with VC6, most of which are written in C and one in C++.
The C++ lib is problematic due to name decoration/mangling issues.
Compiling everything with VC9 is currently not an option as there appear to be some side effects. Resolving these would be quite time consuming.
I can modify the C++ library, however it must be compiled with VC6.
The C++ lib is essentially an OO-wrapper for another C library. The VC9-app uses some static functions as well as some non-static.
While the static functions can be handled with something like
// Header file
class DLL_API Foo
{
public:
    static int init();
};

extern "C"
{
    int DLL_API Foo_init();
}

// Implementation file
int Foo_init()
{
    return Foo::init();
}
it's not that easy with the non-static methods.
As I understand it, Chris Becke's suggestion of using a COM-like interface won't help me because the interface member names will still be decorated and thus inaccessible from a binary created with a different compiler. Am I right there?
Would the only solution be to write a C-style DLL interface using handles to the objects, or am I missing something?
In that case, I guess, I would probably have less effort with directly using the wrapped C-library.
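For reference, a hedged sketch of what such a handle-based interface could look like (FooHandle and Foo_doSomething are hypothetical names; DLL_API is the export macro from the snippet above):
// Header file
extern "C"
{
    typedef void *FooHandle;

    FooHandle DLL_API Foo_create();
    int       DLL_API Foo_doSomething(FooHandle h, int arg);
    void      DLL_API Foo_destroy(FooHandle h);
}

// Implementation file, compiled with VC6 inside the DLL
FooHandle Foo_create()                     { return new Foo(); }
int  Foo_doSomething(FooHandle h, int arg) { return static_cast<Foo *>(h)->doSomething(arg); }
void Foo_destroy(FooHandle h)              { delete static_cast<Foo *>(h); }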
The biggest problem to consider when using a DLL compiled with a different C++ compiler than the calling EXE is memory allocation and object lifetime.
I'm assuming that you can get past the name mangling (and calling convention), which isn't difficult if you use a compiler with compatible mangling (I think VC6 is broadly compatible with VS2008), or if you use extern "C".
Where you'll run into problems is when you allocate something using new (or malloc) from the DLL, and then you return this to the caller. The caller's delete (or free) will attempt to free the object from a different heap. This will go horribly wrong.
You can either do a COM-style IFoo::Release thing, or a MyDllFree() thing. Both of these, because they call back into the DLL, will use the correct implementation of delete (or free()), so they'll delete the correct object.
Or, you can make sure that you use LocalAlloc (for example), so that the EXE and the DLL are using the same heap.
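A hedged sketch of the MyDllFree() variant (Widget is a hypothetical type; exporting is assumed to be handled via a .def file or an export macro), showing that allocation and deallocation both stay inside the DLL:
struct Widget { /* ... */ };    // shared via a header

extern "C" Widget *MyDllCreateWidget()
{
    return new Widget();        // allocated on the DLL's CRT heap
}

extern "C" void MyDllFree(Widget *w)
{
    delete w;                   // freed by the same CRT that allocated it
}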
Interface member names will not be decorated -- they're just offsets in a vtable. You can define an interface (using a C struct, rather than a COM "interface") in a header file, thusly:
struct IFoo {
    virtual int Init() = 0;
};
Then, you can export a function from the DLL, with no mangling:
class CFoo : public IFoo { /* ... */ };
extern "C" IFoo * __stdcall GetFoo() { return new CFoo(); }
This will work fine, provided that you're using a compiler that generates compatible vtables. Microsoft C++ has generated the same format vtable since (at least, I think) MSVC6.1 for DOS, where the vtable is a simple list of pointers to functions (with thunking in the multiple-inheritance case). GNU C++ (if I recall correctly) generates vtables with function pointers and relative offsets. These are not compatible with each other.
Well, I think Chris Becke's suggestion is just fine. I would not use Roger's first solution, which uses an interface in name only and, as he mentions, can run into problems of incompatible compiler-handling of abstract classes and virtual methods. Roger points to the attractive COM-consistent case in his follow-on.
The pain point: You need to learn to make COM interface requests and deal properly with IUnknown, relying on at least IUnknown:AddRef and IUnknown:Release. If the implementations of interfaces can support more than one interface or if methods can also return interfaces, you may also need to become comfortable with IUnknown:QueryInterface.
Here's the key idea. All of the programs that use the implementation of the interface (but don't implement it) use a common #include "*.h" file that defines the interface as a struct (C) or a C/C++ class (VC++) or struct (non-VC++ C++). The *.h file automatically adapts appropriately depending on whether you are compiling a C language program or a C++ language program. You don't have to know about that part simply to use the *.h file. What the *.h file does is define the interface struct or type, let's say IFoo, with its virtual member functions (and only functions; there is no direct visibility to data members in this approach).
The header file is constructed to honor the COM binary standard in a way that works for C and that works for C++ regardless of the C++ compiler that is used. (The Java JNI folk figured this one out.) This means that it works between separately-compiled modules of any origin so long as a struct consisting entirely of function-entry pointers (a vtable) is mapped to memory the same by all of them (so they have to be all x86 32-bit, or all x64, for example).
In the DLL that implements the COM interface via a wrapper class of some sort, you only need a factory entry point. Something like an
extern "C" HRESULT MkIFooImplementation(void **ppv);
which returns an HRESULT (you'll need to learn about those too) and will also store the IFoo interface pointer in the location (*ppv) you provide for receiving it. (I am skimming, and there are more careful details that you'll need here. Don't trust my syntax.) The actual function stereotype that you use for this is also declared in the *.h file.
The point is that the factory entry, which is always an undecorated extern "C" function, does all of the necessary wrapper-class creation and then delivers an IFoo interface pointer to the location that you specify. This means that all memory management for creation of the class, and all memory management for finalizing it, etc., will happen in the DLL where you build the wrapper. This is the only place where you have to deal with those details.
When you get an OK result from the factory function, you have been issued an interface pointer and it has already been reserved for you (there is an implicit IFoo:Addref operation already performed on behalf of the interface pointer you were delivered).
When you are done with the interface, you release it with a call on the IFoo:Release method of the interface. It is the final Release implementation (in case you made more AddRef'd copies) that will tear down the class and its interface support in the factory DLL. This is what gets you correct reliance on consistent dynamic storage allocation and release behind the interface, whether or not the DLL containing the factory function uses the same libraries as the calling code.
You should probably implement IUnknown:QueryInterface (as method IFoo:QueryInterface) too, even if it always fails. If you want to be more sophisticated with using the COM binary interface model as you have more experience, you can learn to provide full QueryInterface implementations.
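A compressed, hedged sketch of the overall shape (FooImpl and the simplified QueryInterface signature are placeholders, not real COM; a real implementation follows the IUnknown/REFIID rules from the COM documentation):
#include <windows.h>   // HRESULT, ULONG, S_OK, E_POINTER, E_NOINTERFACE

struct IFoo
{
    virtual HRESULT QueryInterface(const void *iid, void **ppv) = 0;
    virtual ULONG AddRef() = 0;
    virtual ULONG Release() = 0;
    virtual int Init() = 0;
};

class FooImpl : public IFoo
{
    ULONG refs_;
public:
    FooImpl() : refs_(1) {}                      // handed out already AddRef'd
    HRESULT QueryInterface(const void *, void **ppv)
    {
        if (ppv) *ppv = 0;                       // this sketch never answers QI
        return E_NOINTERFACE;
    }
    ULONG AddRef() { return ++refs_; }
    ULONG Release()
    {
        ULONG r = --refs_;
        if (r == 0) delete this;                 // final Release tears the object
        return r;                                // down inside this DLL
    }
    int Init() { return 0; }
};

extern "C" HRESULT MkIFooImplementation(void **ppv)
{
    if (ppv == 0) return E_POINTER;
    *ppv = static_cast<IFoo *>(new FooImpl());   // allocation stays in this DLL
    return S_OK;
}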
This is probably too much information, but I wanted to point out that a lot of the problems you are facing about heterogeneous implementations of DLLs are resolved in the definition of the COM binary interface and even if you don't need all of it, the fact that it provides worked solutions is valuable. In my experience, once you get the hang of this, you will never forget how powerful this can be in C++ and C++ interop situations.
I haven't sketched the resources you might need to consult for examples and what you have to learn in order to make *.h files and to actually implement factory-function wrappers of the libraries you want to share. If you want to dig deeper, holler.
There are other things you need to consider too, such as which run-times are being used by the various libraries. If no objects are being shared that's fine, but that seems quite unlikely at first glance.
Chris Becke's suggestions are pretty accurate - using an actual COM interface may help you get the binary compatibility you need. Your mileage may vary :)
not fun, man. you are in for a lot of frustration; you should probably give this:
Would the only solution be to write a C-style DLL interface using handles to the objects or am I missing something? In that case, I guess, I would probably have less effort with directly using the wrapped C-library.
a really close look. good luck.