In an application I am writing, I am dynamically loading objects from a shared library I wrote.
This works fine up to the point where virtual functions come into play: I can easily call getters and setters, but I immediately get a segmentation fault when trying to call a function that overrides a virtual base class function. I am running out of ideas, as this happens with every class (hierarchy) in my project.
The functions can be successfully called when creating the objects inside the application, not using dynamic loading or shared libraries at all.
I suspect either a conceptual mistake or some kind of compilation/linking error on my side.
The class hierarchy looks something like this:
BaseClass.h:
class BaseClass : public EvenMoreBaseClass {
public:
    virtual bool enroll(const shared_ptr<string> filepath) = 0;
};
Derived.h:
class Derived : public BaseClass {
public:
    bool enroll(const shared_ptr<string> filepath);
};
Derived.cpp:
bool Derived::enroll(const shared_ptr<string> filepath) {
    cout << "enroll" << endl;
    return true;
}
(includes and namespace left out here)
The application loads the library and gets a (shared) pointer to an object of BaseClass (the application includes BaseClass.h). All functions can be executed, except the virtual ones.
Development is done in Eclipse CDT. Everything is currently in one project with different build configurations (the application's .cpp is disabled in the shared library's configuration and vice versa). The compiler is g++ 4.4. All .o files of the hierarchy are linked with the library, -shared is set.
I would really appreciate any help.
Maybe this is a very late answer, but I found the same behaviour: it happens if you call dlclose(handle) before accessing a virtual method (static methods still worked correctly). So you shouldn't close the library handle until you are done with the objects that came from it.
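To illustrate the ordering, here is a minimal sketch of the calling side. It assumes the library exports plain extern "C" factory functions named create_object and destroy_object, and that shared_ptr/string resolve to the std:: types; those names and the library file name are hypothetical, not from the question (link with -ldl).

// Minimal loader sketch. create_object/destroy_object and libderived.so are
// illustrative names only.
#include <dlfcn.h>
#include <iostream>
#include <memory>
#include <string>
#include "BaseClass.h"

typedef BaseClass* (*create_fn)();
typedef void (*destroy_fn)(BaseClass*);

int main() {
    void* handle = dlopen("./libderived.so", RTLD_NOW);
    if (!handle) { std::cerr << dlerror() << std::endl; return 1; }

    create_fn create = reinterpret_cast<create_fn>(dlsym(handle, "create_object"));
    destroy_fn destroy = reinterpret_cast<destroy_fn>(dlsym(handle, "destroy_object"));

    BaseClass* obj = create();
    std::shared_ptr<std::string> path(new std::string("some/file"));
    obj->enroll(path);   // virtual dispatch resolves against the vtable in the library
    destroy(obj);        // destroy the object while the library is still mapped

    dlclose(handle);     // close the handle only after all objects are gone
    return 0;
}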
Make sure that the dll and the executable are both being compiled with the exact same compiler settings. If they are different, the compiler might generate code in the executable that doesn't match what the dll expects, and vice versa, which is an ODR violation.
Note that standard library types such as std::string and std::shared_ptr might have different layouts in debug configurations for example and can also change between compiler versions (even from the same vendor).
In general you need to be careful using classes across dll boundaries. It is usually recommended to deal with only built-in types and stateless interfaces.
Could there be any difference in the sizes of types between the two projects' build configurations?
[update]
I have encountered similar problems when one project has types such as short and long defined to be different sizes to the other project, and classes containing these types are referenced through the same header file. One project puts the second short at byte 2, the other project thinks it is at byte 4.
I have also had problems with different padding values. One project pads odd characters to 4 byte boundaries, the other thinks it is padded to 8 byte boundaries.
The same might hold true for virtual function pointers, maybe one project fits them in 32 bits, the other expects them in 64 bits.
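One cheap way to catch this kind of mismatch is a small header of compile-time size checks that both build configurations include. This is only a sketch: it assumes a C++11 static_assert (g++ 4.4 accepts it with -std=c++0x), and the 16 below is a placeholder value, not taken from the question.

// layout_checks.h: included by both the application and the shared library so
// that a size or padding disagreement fails at compile time instead of
// corrupting memory at run time.
#include "Derived.h"

static_assert(sizeof(short) == 2, "both builds must agree on the size of short");
static_assert(sizeof(Derived) == 16, "Derived has a different size or padding in this build");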
I have a problem passing a std::string by reference to a function in a dll.
This is function call:
CAFC AFCArchive;
std::string sSSS = std::string("data\\gtasa.afc");
AFCER_PRINT_RET(AFCArchive.OpenArchive(sSSS.c_str()));
//AFCER_PRINT_RET(AFCArchive.OpenArchive(sSSS));
//AFCER_PRINT_RET(AFCArchive.OpenArchive("data\\gtasa.afc"));
This is function header:
#define AFCLIBDLL_API __declspec(dllimport)
AFCLIBDLL_API EAFCErrors CAFC::OpenArchive(std::string const &_sFileName);
I tried to step through the function call in the debugger and look at the value of _sFileName inside the function.
Inside the function, _sFileName holds a garbage value (for example, t4gs..\n\t).
I tried to detect heap corruption, but no error is reported.
The DLL has been compiled with debug settings; the .exe program is compiled in debug too.
What's wrong?? Help..!
P.S. I used Visual Studio 2013. WinApp.
EDIT
I have changed the function's header to this code:
AFCLIBDLL_API EAFCErrors CAFC::CreateArchive(char const *const _pArchiveName)
{
std::string _sArchiveName(_pArchiveName);
...
I really don't know how to fix this bug...
About the heap: it is allocated in the virtual memory of our process, right? In that case the virtual memory, and hence the heap, should be common to the EXE and the DLL.
The issue has little to do with STL, and everything to do with passing objects across application boundaries.
1) The DLL and the EXE must be compiled with the same project settings. You must do this so that the struct alignment and packing are the same, the members and member functions do not have different behavior, and even more subtle, the low-level implementation of a reference and reference parameters is exactly the same.
2) The DLL and the EXE must use the same runtime heap. To do this, you must use the DLL version of the runtime library.
You would have encountered the same problem if you created a class that does similar things (in terms of memory management) as std::string.
Probably the reason for the memory corruption is that the object in question (std::string in this case) allocates and manages dynamically allocated memory. If the application uses one heap, and the DLL uses another heap, how is that going to work if you instantiated the std::string in say, the DLL, but the application is resizing the string (meaning a memory allocation could occur)?
C++ classes like std::string can be used across module boundaries, but doing so places significant constraints on the modules. Simply put, both modules must use the same instance of the runtime.
So, for instance, if you compile one module with VS2013, then you must do so for the other module. What's more, you must link to the dynamic runtime rather than statically linking the runtime. The latter results in distinct runtime instances in each module.
And it looks like you are exporting member functions. That also requires a common shared runtime. And you should use __declspec(dllexport) on the entire class rather than individual members.
If you control both modules, then it is easy enough to meet these requirements. If you wish to let other parties produce one or other of the modules, then you are imposing a significant constraint on those other parties. If that is a problem, then consider using more portable interop. For example, instead of std::string use const char*.
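For example, a flat wrapper along these lines keeps the std::string entirely inside the DLL. This is only a sketch: AFC_OpenArchive and the header name are invented here; only CAFC::OpenArchive comes from the question.

// AFC_c_api.cpp (inside the DLL)
#include <string>
#include "AFCLib.h"   // hypothetical header declaring CAFC and EAFCErrors

// Only built-in types cross the module boundary.
extern "C" __declspec(dllexport)
int AFC_OpenArchive(CAFC* archive, const char* fileName)
{
    // The std::string is constructed and destroyed entirely inside the DLL,
    // so it always uses the DLL's runtime and the DLL's heap.
    return static_cast<int>(archive->OpenArchive(std::string(fileName)));
}

// Calling side (EXE):
//   AFC_OpenArchive(&AFCArchive, sSSS.c_str());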
Now, it's possible that you are already using a single shared instance of the dynamic runtime. In which case the error will be more prosaic. Perhaps the calling conventions do not match. Given the sparse level of detail in your question, it's hard to say anything with certainty.
I encountered a similar problem.
I resolved it by synchronizing the Configuration Properties -> C/C++ settings.
If you want debug mode:
Set _DEBUG definition in Preprocessor Definitions in both projects.
Set /MDd in Code Generation -> Runtime Library in both projects.
If you want release mode:
Remove _DEBUG definition in Preprocessor Definitions in both projects.
Set /MD in Code Generation -> Runtime Library in both projects.
By both projects I mean the exe and the dll project.
It works for me, especially when I don't want to change any settings of the dll but only adjust my project to them.
class SomeClass
{
    //some members
    MemberClass one_of_the_mem_;
};
I have a function foo( SomeClass *object ) within a dll; it is being called from an exe.
Problem
The address of one_of_the_mem_ changes while the dll call is being dispatched.
Details:
before the call is made (from exe):
'&(this).one_of_the_mem_' - `0x00e913d0`
after - in the dll itself :
'&(this).one_of_the_mem_' - `0x00e913dc`
The address of the object remains constant. It is only the member whose address shifts by 0xc every time.
I would like some pointers on how to troubleshoot this problem.
Code :
Code from Exe
stat = module->init ( this,
object_a,
&object_b,
object_c,
con_dir
);
Code in DLL
Status_C ModuleClass( SomeClass *object, int index, Config *conf, const char* name)
{
_ASSERT(0); //DEBUGGING HOOK
...
Update 1:
I compared the offsets of the members following Michael's instructions and they are the same in both cases.
Update 2:
I found a way to dump the class layout and noticed the difference in size; I still have to figure out why that is happening.
Linked is the question I found about dumping the class layout.
Update 3:
Final Update: Solved the problem, much thanks to Michael Burr.
It turned out that one of the builds was using 32-bit time (_USE_32BIT_TIME_T was defined in it) while the other was using 64-bit time, so a different layout was generated for the object; attached is the difference file.
Your DLL was probably compiled with a different set of compiler options (or maybe even a slightly different header file) and the class layout is different as a result.
This can happen, for example, if one module was built using debug flags and the other wasn't, or if different compiler versions were used: the libraries shipped with different compiler versions can have subtle differences, and if your class incorporates a type defined by the library you could end up with different layouts.
As a concrete example, with Microsoft's compiler iterators and containers are sensitive to release/debug, _SECURE_SCL on/off , and _HAS_ITERATOR_DEBUGGING on/off setting (at least up though VS 2008 - VS 2010 may have changed some of this to a certain extent). See http://connect.microsoft.com/VisualStudio/feedback/details/352699/secure-scl-is-broken-in-release-builds for some details.
These kinds of issues make using C++ classes across DLL boundaries a bit more fragile than using straight C interfaces. They can occur in C structures as well, but it seems like C++ libraries have these differences more often (I think that's the nature of having richer functionality).
Another layout-changing issue that occurs every now and then is having a different structure packing option in effect in the different compiles. One thing that can 'hide' this is that pragmas are often used in headers to set structure packing to a certain value, and sometimes you may come across a header that does this without changing it back to the default (or more correctly the previous setting). If you have such a header, it's easy to have it included in the build for one module, but not another.
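If a header really does need non-default packing, the safe pattern is to push and pop the setting so it cannot leak into whatever gets included afterwards (WireHeader is just an illustrative name):

#pragma pack(push, 1)          // save the current packing, then pack to 1 byte
struct WireHeader {
    unsigned short id;         // offset 0
    unsigned int   length;     // sits at offset 2 only because of pack(1)
};
#pragma pack(pop)              // restore whatever packing was in effect before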
That sounds a bit weird; you should show more code. The member shouldn't 'move' if the object is being passed by reference; it sounds more like a copy of it is being made and the member function is being called on that copy.
Perhaps the DLL was compiled against a different version of the class than the one you are referencing. Check and make sure the header file matches the version of the dll.
Recompile the library if you can.
My executable makes calls to a number of DLLs that I wrote myself. Because of 3rd-party C++ libs used by these DLLs I cannot freely choose the compiler settings for all of them. Therefore in some DLLs _ITERATOR_DEBUG_LEVEL is set to 2 (the default in the debug version), but in my executable _ITERATOR_DEBUG_LEVEL is set to 0 because of serious performance problems.
When I now pass a std::string to the DLL, the application crashes as soon as the DLL tries to copy it to a local std::string object, because the memory layout of the string object in the DLL differs from the one in my executable. So far I work around this by passing C strings. I even wrote a small class that converts a std::map<std::string, int> to and from a temporary C-data representation in order to pass such data to and from the DLL. This works.
How can I overcome this problem? I want to pass more kinds of classes and containers, and for several reasons I do not want to work with _ITERATOR_DEBUG_LEVEL = 2.
The problem is that std::string and the other containers are class templates. They are instantiated at compile time in each binary, so in your case they are generated differently; you could say they are not the same types.
To fix this you have several solutions, but they all follow the same rule of thumb: don't expose any template code in the headers shared between binaries.
You could create specific interfaces only for this purpose, or just make sure your headers don't expose template types and functions. You can still use those template types inside your binaries; just don't expose them to other binaries.
std::string in interfaces can be replaced by const char*. You can still use std::string internally; just ask for const char* in interfaces and use std::string::c_str() to expose your data.
For maps and other containers you'll have to provide functions that allow external code to manipulate the internal container, like Find( const char* key );.
The main problem will be template members of your exposed classes. A way to fix this is to use the PImpl idiom: create (or generate) an API, i.e. headers that only expose what can be done with your binaries, and make sure that API holds pointers to the real objects inside your binaries. The API is what gets used outside, but inside your library you can code with whatever you want. The DirectX API and other OS APIs are done that way.
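A sketch of that idea for the std::map case (Dictionary, Dict_Create and friends are invented names): the template container stays private to the DLL and only plain C types cross the boundary, so the two modules can disagree about _ITERATOR_DEBUG_LEVEL without corrupting anything.

// Exported header: no templates, no STL types.
class Dictionary;                                   // opaque to the EXE
extern "C" __declspec(dllexport) Dictionary* Dict_Create();
extern "C" __declspec(dllexport) void        Dict_Destroy(Dictionary* d);
extern "C" __declspec(dllexport) int         Dict_Find(Dictionary* d, const char* key, int* value);

// Inside the DLL only:
#include <map>
#include <string>
class Dictionary {
public:
    std::map<std::string, int> data;                // never crosses the boundary
};

Dictionary* Dict_Create()               { return new Dictionary(); }
void        Dict_Destroy(Dictionary* d) { delete d; }   // freed on the DLL's heap

int Dict_Find(Dictionary* d, const char* key, int* value)
{
    std::map<std::string, int>::const_iterator it = d->data.find(key);
    if (it == d->data.end()) return 0;              // not found
    *value = it->second;
    return 1;
}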
It is not recommended to use a C++ interface with complex types (STL, ...) with 3rd-party libs if you get them only as binaries or if they need special compile settings that differ from yours.
As you wrote, the implementation can differ with the same compiler depending on your settings, and with different compilers the situation gets even worse.
If possible, compile the 3rd-party lib with your compiler and your settings.
If that's not possible, you can write a wrapper DLL that is compiled with the same compiler and the same settings as the 3rd-party lib and provides a C-data interface for you. In your project you can then write another wrapper class so you can make your function calls with STL objects and have them converted and transferred in the background.
My own experience with flags like _SECURE_SCL and _ITERATOR_DEBUG_LEVEL is that they must be consistent if you attempt to pass an STL object across DLL boundaries.
However, I think you can pass an STL object to a DLL which has a smaller _ITERATOR_DEBUG_LEVEL, since you can probably give an STL object instantiated in a debug DLL to a DLL compiled in release mode.
EDIT 07/04/2011
Apparently Visual Studio 2010 provides some niceties to detect mismatches between _ITERATOR_DEBUG_LEVEL settings. I have not watched the video yet.
http://blogs.msdn.com/b/vcblog/archive/2011/04/05/10150198.aspx
Imagine we have a solution with 2 projects: MakeDll (a dll project), which creates a dll, and UseDll (an exe project), which uses the dll. Now I know there are basically two ways, one pleasant and one not. The pleasant way is that UseDll links statically against MakeDll.lib and just dllimports functions and classes and uses them. The unpleasant way is to use LoadLibrary and GetProcAddress, which I can't even imagine how to do with overloaded functions or class members, in other words anything else but extern "C" functions.
My questions are the following (all regarding the first option)
What exactly does MakeDll.lib contain?
When is MakeDll.dll loaded into my application, and when unloaded? Can I control that?
If I change MakeDll.dll, can I use the new version (provided it is a superset of the old one in terms of interface) without rebuilding UseDll.exe? A special case is when a polymorphic class is exported and a new virtual function is added.
Thanks in advance.
P.S. I am using MS Visual Studio 2008
It basically contains a list of the functions in the DLL, both by name and by ordinal (though almost nobody uses ordinals anymore). The linker uses that to create an import table in UseDLL.exe -- i.e., a reference that says (in essence): "this file depends on function xxx from MakeDll.dll". When the loader loads that executable, it looks at the import table, and (recursively) loads all the DLLs it lists, and (at least conceptually) uses GetProcAddress to find the functions, so it can put their addresses into the executable where they're needed.
It's normally loaded during the process of loading your executable. You can use the /delayload switch to delay its being loaded until a function from that DLL is called.
In general, yes. In the specific case of adding a virtual function, it'll depend on the class' vtable layout staying the same other than the new function being added. Unless you take steps to either assure or verify that yourself, depending on it is a really bad idea.
MakeDll.lib contains a list of stubs for the exported functions and their RVAs into MakeDll.dll.
MakeDll.dll is loaded into the application based on what type of loading is specified for the dll in question (e.g. DELAYLOAD). Raymond Chen has an interesting article on that.
You can use the new, updated version of MakeDll.dll as long as none of the RVA offsets used by UseDll.exe have changed. If you change the vtable layout of a polymorphic class, for example by adding a new function in the middle of the previously defined vtable, you will need to recompile UseDll.exe. Other than that you can use the updated dll with the previously compiled UseDll.exe.
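For the pleasant way, the conventional setup is a single macro that expands to dllexport while building MakeDll and to dllimport in every consumer. This is only a sketch; MAKEDLL_EXPORTS and Widget are illustrative names, not from the question.

// MakeDll.h, shared by both projects.
#ifdef MAKEDLL_EXPORTS                    // defined only in the MakeDll project settings
#  define MAKEDLL_API __declspec(dllexport)
#else
#  define MAKEDLL_API __declspec(dllimport)
#endif

// Exporting the whole class exports its member functions and its vtable.
class MAKEDLL_API Widget {
public:
    Widget();
    virtual ~Widget();
    virtual int value() const;
};

// UseDll.exe just includes MakeDll.h, links against MakeDll.lib, and writes:
//   Widget w;  int v = w.value();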
The unpleasant way is to use LoadLibrary and GetProcAddress, which I can't even imagine how to do with overloaded functions or class members, in other words anything else but extern "C" functions.
Yes, this is unpleasant, but it is not as bad as it sounds. If you choose to go with this option, you'll want to do something like the following:
// Common.h: interface common to both sides.
// Note: the interface is used purely through its vtable, so it needs no
// linkage specification; extern "C" only matters for the factory below.
class ISomething
{
public:
    // Public virtual methods...

    // The object MUST delete itself to ensure memory allocator
    // coherence. If the two sides link against different runtime
    // libraries and you don't do this, you'll get a hard crash or worse.
    // Note: 'const' allows you to make constants and delete
    // without a nasty 'const_cast'.
    virtual void destroy () const = 0;
};

// MakeDLL.cpp: interface implementation.
class Something : public ISomething
{
public:
    // Overrides + other stuff...
    virtual void destroy () const { delete this; }
};

// The factory is the only symbol looked up by name, so it is extern "C"
// to keep its exported name unmangled.
extern "C" ISomething * create () { return new Something(); }
I've successfully deployed such setups with different C++ compilers on both ends (i.e. G++ and MSVC interchanged in all 4 possible combinations).
You may change Something's implementation all you want. However, you may not change the interface without re-compiling on both sides! When you think about it, this is fairly intuitive: both sides rely on the other's definition of ISomething. What you can do to gain extra flexibility is use numbered interfaces (as DirectX does) or a set of interfaces with capability queries (as COM does). The former is really intuitive to set up but requires discipline, and the latter, well... would be re-inventing the wheel!
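For completeness, the consuming side of that sketch could look like this on Windows. The exported name "create" matches the example above; everything else is illustrative.

#include <windows.h>
#include "Common.h"

typedef ISomething* (*create_fn)();

int main() {
    HMODULE lib = LoadLibraryA("MakeDll.dll");
    if (!lib) return 1;

    create_fn create = reinterpret_cast<create_fn>(GetProcAddress(lib, "create"));
    ISomething* obj = create();

    // ... use the object purely through its virtual interface ...

    obj->destroy();        // the object frees itself with the DLL's own allocator
    FreeLibrary(lib);      // unload only after every ISomething has been destroyed
    return 0;
}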
Sooooo I'm writing a script interpreter. Basically, I want some classes and functions stored in a DLL, but I want the DLL to look for functions within the program that links to it, like this:
program                               dll
----------------------------------------------------
send code to dll  ----------------->  parse code
                                          |
                                          v
                                      code contains a function
                                      that isn't contained in the DLL
                                          |
list of functions in  <-------------------/
the program
        |
        v
corresponding function,
user-defined in the
program, processes the
passed argument here
        |
        \------------------------->  return value sent back
                                      to the parsing function
I was wondering basically, how do I compile a DLL with gcc? Well, I'm using a windows port of gcc. Once I compile a .dll containing my classes and functions, how do I link to it with my program? How do I use the classes and functions in the DLL? Can the DLL call functions from the program linking to it? If I make a class { ... } object; in the DLL, then when the DLL is loaded by the program, will object be available to the program? Thanks in advance, I really need to know how to work with DLLs in C++ before I can continue with this project.
"Can you add more detail as to why you want the DLL to call functions in the main program?"
I thought the diagram sort of explained it... the program using the DLL passes a piece of code to the DLL, which parses the code. If function calls are found in that code, the corresponding functions within the DLL are called. For example, if I passed "a = sqrt(100)", the DLL's parser function would find the call to sqrt(), and within the DLL would be a corresponding sqrt() function which would calculate the square root of the argument passed to it; the return value from that function would then be put into the variable a. Just like any other program, but if a corresponding handler for the sqrt() function isn't found within the DLL (there would be a list of natively supported functions), it would call a similar function residing within the program using the DLL to see if there are any user-defined functions by that name.
So, say you loaded the DLL into your program, giving it the ability to interpret scripts of this particular language; the program could ask the DLL to process single lines of code or hand it filenames of scripts to process. But if you wanted to add a command to the script language that suits the purpose of your program, you could, say, set a boolean value in the DLL telling it that you are adding functions to its language and then create a function in your code that lists the functions you are adding. The DLL would call it with the name of the function it wants; if that function is a user-defined one contained within your code, your function would call the corresponding function with the argument passed to it by the DLL and return the user-defined function's return value back to the DLL, and if it didn't exist, it would return an error code or NULL or something. I'm starting to see that I'll have to find another way around this to make the function calls go one way only.
This link explains how to do it in a basic way.
In a big picture view, when you make a dll, you are making a library which is loaded at runtime. It contains a number of symbols which are exported. These symbols are typically references to methods or functions, plus compiler/linker goo.
When you normally build a static library, there is a minimum of goo and the linker pulls in the code it needs and repackages it for you in your executable.
In a dll, you actually get two end products (three really- just wait): a dll and a stub library. The stub is a static library that looks exactly like your regular static library, except that instead of executing your code, each stub is typically a jump instruction to a common routine. The common routine loads your dll, gets the address of the routine that you want to call, then patches up the original jump instruction to go there so when you call it again, you end up in your dll.
The third end product is usually a header file that tells you all about the data types in your library.
So your steps are: create your headers and code, build a dll, build a stub library from the headers/code/some list of exported functions. End code will link to the stub library which will load up the dll and fix up the jump table.
Compiler/linker goo includes things like making sure the runtime libraries are where they're needed, making sure that static constructors are executed, making sure that static destructors are registered for later execution, etc, etc, etc.
Now as to your main problem: how do I write extensible code in a dll? There are a number of possible ways - a typical way is to define a pure abstract class (aka interface) that defines a behavior and either pass that in to a processing routine or to create a routine for registering interfaces to do work, then the processing routine asks the registrar for an object to handle a piece of work for it.
On the detail of what you plan to solve, perhaps you should look at an extendible parser like lua instead of building your own.
To address your more specific questions:
A DLL is (typically?) meant to be complete in and of itself, or to know explicitly which other libraries to use to complete itself.
What I mean by that is, you cannot have a method implicitly provided by the calling application to complete the DLL's functionality.
You could, however, make the provision of methods from the calling app part of your API, thus keeping the DLL fully contained and making the passing of knowledge explicit.
How do I use the classes and functions in the DLL?
Include the headers in your code; when the module (exe or another dll) is linked, the dlls are checked for completeness.
Can the DLL call functions from the program linking to it?
Yes, but it has to be told about them at run time.
If I make a class { ... } object; in the DLL, then when the DLL is loaded by the program, will object be available to the program?
Yes, it will be available; however, there are some restrictions you need to be aware of. In the area of memory management it is important to either:
Link all modules sharing memory with the same memory management dll (typically the C runtime)
Ensure that memory is allocated and deallocated only in the same module.
Allocate on the stack.
Examples!
Here is a basic idea of passing functions to the dll; however, in your case it may not be the most helpful, as you need to know up front which other functions you want provided.
// parser.h
struct functions {
    void (*fred)(int);   // pointer to a function supplied by the program
};
void parse( const char *code, functions *funcs );

// program.cpp
functions funcs = { &fred };   // fred is defined somewhere in the program
parse( "a = sqrt(); fred(a);", &funcs );
What you need is a way of registering functions (and their details) with the dll.
The bigger problem here is the details bit, but skipping over that, you might do something like wxWidgets does with class registration. When method_fred is constructed by your app, the constructor registers it with the dll through methodInfo. The parser can then look up methodInfo for the methods available.
// parser.h
#include <map>
#include <string>

class method_base { };
typedef method_base* (*factory)(std::string args);

class methodInfo {
public:
    // 'register' is a C++ keyword, so registration happens in the constructor.
    methodInfo(const std::string &name, factory f) { m_methods[name] = f; }
    static std::map<std::string, factory> m_methods;
};

// parser.cpp
std::map<std::string, factory> methodInfo::m_methods;

// program.cpp
class method_fred : public method_base {
public:
    static method_base* factory(std::string args);
    static methodInfo _methoinfo;
};
methodInfo method_fred::_methoinfo("fred", &method_fred::factory);
This sounds like a job for data structures.
Create a struct containing your keywords and the function associated with each one.
struct keyword {
    const char *keyword;
    int (*f)(int arg);
};

struct keyword keywords[max_keywords] = {
    { "db_connect", &db_connect },
};
Then write a function in your DLL that you pass the address of this array to:
plugin_register(keywords);
Then inside the DLL it can do:
keywords[0].f = &plugin_db_connect;
With this method, the code to handle script keywords remains in the main program while the DLL manipulates the data structures to get its own functions called.
Taking it to C++, make the struct a class instead, containing a std::vector or std::map (or whatever) of keywords and some functions to manipulate them.
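A sketch of that C++ flavour (all names invented): the host program fills a registry that the DLL consults when the parser meets an unknown keyword.

#include <map>
#include <string>

typedef int (*script_fn)(int arg);

class KeywordRegistry {
public:
    void add(const std::string& name, script_fn fn) { table_[name] = fn; }

    // Returns true and fills *result if the keyword is user-defined in the host.
    bool call(const std::string& name, int arg, int* result) const {
        std::map<std::string, script_fn>::const_iterator it = table_.find(name);
        if (it == table_.end()) return false;        // not a user-defined function
        *result = it->second(arg);
        return true;
    }

private:
    std::map<std::string, script_fn> table_;
};

// Host program:
//   KeywordRegistry reg;
//   reg.add("db_connect", &db_connect);
//   plugin_set_registry(&reg);   // hypothetical export from the DLL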
Winrawr, before you go on, read this first:
Any improvements on the GCC/Windows DLLs/C++ STL front?
Basically, you may run into problems when passing STL strings around your DLLs, and you may also have trouble with exceptions flying across DLL boundaries, although it's not something I have experienced (yet).
You could always load the dll at runtime with LoadLibrary.