How to avoid multiple definitions in C++?

I have a big C++ project in which I'm trying to implement a debug function that needs classes from other libraries. Unfortunately these classes share the same names and namespaces as classes already used inside the project. I tried to use a static library to avoid multiple definitions, but of course the linker still complains. So my question:
Is it possible to create that library for the function without the compiler knowing about the classes called inside the function?
I don't know, something like a "protected function", or putting all the code from the libraries inside the function body...
Edit: I'm using the g++ compiler.
Max, I know but so far I see no other way.
Schematically, the problem is:
Project:
#include "a.h"   // (old one)
#include "a2.h"
return a->something();
return a2->something(); // debug function
debug function a2:
#include "a.h"   // (new one!!)
return a->something();  // (new one!)
The compile step currently looks like this:
g++ project -la -la2
That is a very simplified sketch, but that's essentially it.

Maybe you can create a wrapper library which internally links to that outside library and exports its definitions under a different name or namespace.
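As a rough sketch of that idea (all names hypothetical), the wrapper's header exposes only renamed declarations, and its .cpp is the only translation unit that includes the conflicting header:

// wrapper_a2.h -- the only interface the rest of the project sees
#pragma once
#include <string>
namespace debug_a2 {
    std::string something();   // forwards to the new library
}

// wrapper_a2.cpp -- the only file that includes the new, conflicting a.h
#include "wrapper_a2.h"
#include "a.h"                 // the new header, hidden from the rest of the project
namespace debug_a2 {
    std::string something() {
        A a;                   // assumes the new library exposes a class A
        return a.something();
    }
}

Note that this only isolates the declarations; if both the old and the new library still export the same symbols, the wrapper usually has to be built as a shared library that links the conflicting static library privately, so the clashing symbols never meet in one link.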

Try enclosing the #includes for the declarations of the classes that you are using in your debug function in a namespace, but don't add a using directive for that namespace.
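For example (a sketch; header and class names are made up), in the debug function's translation unit only:

namespace debug_new {
    #include "a.h"   // the conflicting declarations now live in debug_new::
}

void debugDump() {
    debug_new::A a;  // unambiguous: the wrapped declaration, not the project's ::A
    // ...
}

Be aware this only works cleanly if the wrapped code is header-only (or the library's sources are compiled inside the same namespace), because the mangled names of the wrapped declarations must still match the symbols the library actually exports.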

There are a few techniques that may help you, but that depends on what the "debug version" of the library does.
First, it's not unheard of to have #ifdef blocks inside functions that do additional checking depending on whether the program was built in debug mode. The C assert macro behaves this way.
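A small illustration of that pattern, using the standard NDEBUG convention that assert itself relies on, plus a hypothetical project-specific macro:

#include <cassert>
#include <vector>

double average(const std::vector<double>& xs) {
    assert(!xs.empty());          // compiled out entirely when NDEBUG is defined
#ifdef EXTRA_DEBUG_CHECKS         // hypothetical macro enabled only in debug builds
    for (double x : xs)
        assert(x == x);           // more expensive validation: reject NaNs
#endif
    double sum = 0;
    for (double x : xs) sum += x;
    return sum / xs.size();
}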
Second, it's possible that the "debug version" does nothing more than log messages. It's easy enough to include the logging code in both debug and release versions, and make the decision to actually log based on some kind of "priority" parameter for each log message.
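A minimal sketch of that approach (names made up): the logging code is always compiled in, and only the threshold decides whether a message is emitted.

#include <iostream>
#include <string>

enum class LogLevel { Debug, Info, Error };

struct Logger {
    LogLevel threshold = LogLevel::Info;       // raise in release builds, lower in debug
    void log(LogLevel level, const std::string& msg) {
        if (level >= threshold)                // relies on Debug < Info < Error
            std::clog << msg << '\n';
    }
};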
Third, you may consider using an event-based design where functions can, optionally, take objects as parameters that have certain methods, and then if interesting things happen and the function was passed an event object, the function can call those methods.
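A sketch of that event-based idea (the interface and function are hypothetical): callers that want diagnostics pass a listener, everyone else passes nothing.

struct IComputationEvents {
    virtual ~IComputationEvents() = default;
    virtual void OnStep(int iteration, double value) = 0;
};

double iterate(double x0, int steps, IComputationEvents* events = nullptr) {
    double x = x0;
    for (int i = 0; i < steps; ++i) {
        x = 0.5 * (x + 2.0 / x);              // e.g. Newton's iteration for sqrt(2)
        if (events) events->OnStep(i, x);     // notify only if a listener was supplied
    }
    return x;
}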
Finally, if you're actually interested in what happens at a lower level than the library you're working on, you can simply link to debug versions of those lower level libraries. This is a case of the first option mentioned above, applied to a different library than the one you're actually working on. Microsoft's runtime libraries do this, as do Google's perftools and many "debugging malloc" libraries.

Related

How do I (programmatically) ensure that a dll contains definitions for all exported functions?

Let's say I have this lib
//// testlib.h
#pragma once
#include <iostream>
void __declspec(dllexport) test();
int __declspec(dllexport) a();
If I omit the definition for a() and test() from my testlib.cpp, the library still compiles, because the interface [1] is still valid. (If I use them from a client app then they won't link, obviously)
Is there a way I can ensure that when the obj is created (which I gather is the compiler's job) it actually looks for the definitions of the functions that I explicitly exported, and fails if it doesn't?
This is not related to any real world issue. Just curious.
[1] MSVC docs
No, it's not possible.
Partially because a dllexport declaration might legally not even be implemented in the same DLL, let alone library, but be a mere forward declaration for something provided by yet another DLL.
In particular, it's impossible to decide at the object-file level. There, an export is just another forward declaration like any other.
You can dump the exported symbols once the DLL has been linked, but there is no common tool for checking completeness.
Ultimately, you can't do without a client test application which attempts to load all exported interfaces. You can't check that at compile time. Even just successfully linking the test application isn't enough; you have to actually run it.
It gets even worse if there are delay-loaded DLLs (and yes, there usually are), because now you can't even check for completeness unless you actually call at least one symbol from each involved DLL.
Tools like Dependency Walker etc. exist for this very reason.
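A minimal sketch of such a client check (DLL and export names hypothetical; a real tool would enumerate the export table rather than hard-code the list):

#include <windows.h>
#include <cstdio>

int main() {
    const char* expected[] = { "test", "a" };          // the exports we require
    HMODULE dll = LoadLibraryA("testlib.dll");
    if (!dll) { std::puts("failed to load testlib.dll"); return 1; }

    int missing = 0;
    for (const char* name : expected) {
        if (!GetProcAddress(dll, name)) {              // resolve each expected symbol
            std::printf("missing export: %s\n", name);
            ++missing;
        }
    }
    FreeLibrary(dll);
    return missing ? 1 : 0;
}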
You asked
"Is there a way I can ensure that when the obj is created (which I gather is the compiler's job) it actually looks for the definitions of the functions that I explicitly exported, and fails if doesn't ?"
When you load a DLL, the actual function binding happens at runtime (late binding of functions), so it is not possible for the compiler to know whether the function definitions are available in the DLL or not. Hope this answers your query.
How do I (programmatically) ensure that a dll contains definitions for all exported functions?
In general, in standard C++11 (read the document n3337 and see this C++ reference), you cannot.
C++11 does not know about DLLs or dynamic linking. It might sometimes make sense to dynamically load code which does not define a function that you pragmatically know will never be called (e.g. an incomplete DLL for drawing shapes, where you happen to know that circles will never be drawn in your particular usage, so the loaded DLL might not define any class Circle related code). In standard C++11, every called function should be defined somewhere (possibly in some other translation unit).
Look also into Qt vision of plugins.
Read Levine's Linkers and Loaders book. Notice that on Linux, plugins loaded with dlopen(3) have different semantics than Windows DLLs. The devil is in the details.
In practice, you might consider using some recent variant of the GCC compiler and develop your GCC plugin to check that. This could require several weeks of work.
Alternatively, adapt the Clang static analyzer for your needs. Again, budget several weeks of work.
See also this draft report and think about C++ cross-compilers (e.g. compiling on Windows a plugin for a Raspberry Pi).
Consider also runtime code generation frameworks like asmjit or libgccjit. You might think of generating the missing stubs or functions at runtime (and filling function pointers, or even vtables, with them appropriately). C++ exceptions could also be an additional issue.
If your DLL contains calls to the functions, the linker will fail if a definition isn't provided for those functions. It doesn't matter if the calls are never executed, only that they exist.
//// testlib.h
#pragma once
#include <iostream>
#ifndef DLLEXPORT
#define DLLEXPORT(TYPE) TYPE __declspec(dllexport)
#endif
DLLEXPORT(void) test();
DLLEXPORT(int) a();
//// testlib_verify.c
// Redefining DLLEXPORT to nothing turns each declaration into a call, so the
// linker must find a definition for every exported function. (For this to
// compile, testlib.h must be safe to include inside a function body.)
#define DLLEXPORT(TYPE)
void DummyFunc()
{
#include "testlib.h"
}
This macro-based solution only works for functions with no parameters, but it should be easy to extend.
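One hedged way to extend it (a sketch, not the answer's original code) is an X-macro list that generates the dllexport declarations and, in the verify file, takes each function's address instead of calling it, so the parameter lists no longer matter:

//// exports.h -- hypothetical single list of every exported function
#define EXPORT_LIST(X)         \
    X(void, test, (void))      \
    X(int,  a,    (void))      \
    X(int,  add,  (int, int))

//// testlib.h
#include "exports.h"
#define DECLARE_EXPORT(RET, NAME, ARGS) RET __declspec(dllexport) NAME ARGS;
EXPORT_LIST(DECLARE_EXPORT)

//// testlib_verify.c -- taking each address forces the linker to find a definition
#include "testlib.h"
#define REFERENCE_EXPORT(RET, NAME, ARGS) (void*)&NAME,
void* const forced_references[] = { EXPORT_LIST(REFERENCE_EXPORT) };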

Memory corruption due to mismatched compiler directives in library vs. EXE projects

I have a library StudentModelLib in which CStudentModeler is the main class. It has a logging option that I made conditional on whether PRETTY_LOG is defined. Only if PRETTY_LOG is defined do I include the CPrettyLogger, initialize it (later), and actually log things.
Another project in the same solution, StudentModel2, statically links to StudentModelLib. It includes StudentModeler.h from the library and instantiates CStudentModeler at runtime.
How to set up the weirdness:
Set PRETTY_LOG inside the library in the project's preprocessor definitions
Unset PRETTY_LOG in the EXE project
Compile the entire solution, which builds the library, then the EXE
The weirdness starts when CStudentModeler is instantiated within the code for the executable. At that point, the debugger seems confused about which version of CStudentModeler it should be using, and hovering over variables in the IDE gives really confusing results. When the EXE runs, memory corruption also shows up.
My hypothesis is that the compiled library's CStudentModeler has a prettyLogger member, but the compiled EXE uses the .h file with the directive disabled and assumes CStudentModeler does not have a prettyLogger member. I'm guessing the memory corruption occurs because the library and the EXE disagree about the class's size and where its member variables live within the object.
My questions are as follows:
Have I correctly identified the problem?
Is it possible to have library features be optional based on compiler directives but not break other projects that use that library?
What is the proper way to ensure that the projects using the library assume the correct enabled features based on how the library was compiled?
How is it that no part of the VS2010 compilation/linking process warns me about this seemingly huge bug?
For the sake of this test, CPrettyLogger has an empty default constructor and all other code related to it is commented out. Simply instantiating it causes the bug.
StudentModeler.h
This is part of the library and contains the conditional member variable.
class CStudentModeler : public CDataProcessor2
{
// Configuration variables
string student_id;
// Submodules
CContentSelector contentSelector;
EventLog eventLog;
#ifdef PRETTY_LOG
CPrettyLogger prettyLogger; // <--- the problem?
#endif
// Methods
void InitConcepts();
void InitLOs();
public:
CStudentModeler( string sm_version, string session_id, string url,
string db_user, string db_password, string db_name,
SMConfig config );
~CStudentModeler();
};
It seems your assessment is spot on.
Yes, it is possible. Don't make externally visible declarations depend on preprocessing directives and you should be OK. Internal stuff may be as configurable as you want, but interfaces should be set in stone. In your case the library should export an interface and a class factory. The client should either not know whether the selected instance has an additional feature, or be able to access it only through a potentially fallible interface. If it fails, the feature is not supported.
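A rough sketch of that approach (all names hypothetical): the header the EXE sees exposes only an abstract interface and a factory, so the concrete class layout, including any PRETTY_LOG members, never leaks across the boundary.

// StudentModelerApi.h -- the only header the EXE includes
class IPrettyLogger;                       // opaque to clients

class IStudentModeler {
public:
    virtual ~IStudentModeler() = default;
    virtual void ProcessEvent(const char* name) = 0;
    virtual IPrettyLogger* GetPrettyLogger() = 0;   // returns nullptr if built without the feature
};

IStudentModeler* CreateStudentModeler();   // factory implemented inside the library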
If you do what I'm suggesting in (2) you shouldn't need to. If you still want to, have a variable whose name macro-expands to library_options_opt1_yes_opt2_no_opt3_42_... in the library, and have the client code in the header reference it. You will get a link error in case of a mismatch.
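A sketch of that trick with made-up names; the option-dependent part is the variable's name, so a client compiled with the wrong PRETTY_LOG setting references a symbol the library never defines:

// In the library's public header:
#ifdef PRETTY_LOG
#define SML_OPTIONS_VAR studentmodellib_options_prettylog_yes
#else
#define SML_OPTIONS_VAR studentmodellib_options_prettylog_no
#endif
extern const int SML_OPTIONS_VAR;

// Inline helper that client code is expected to call somewhere (e.g. at startup);
// a mismatch then shows up as an unresolved external at link time.
inline int VerifyStudentModelLibOptions() { return SML_OPTIONS_VAR; }

// In exactly one .cpp inside the library:
extern const int SML_OPTIONS_VAR = 1;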
The C++ standard specifically allows the compiler not to warn you about such things; it's actually not easy for the compiler to detect. The corresponding rule is the One Definition Rule.

Is it possible to write COM code in a static library and then link it to a DLL?

I am currently working on a project that has a number of COM objects written in C++ with ATL.
Currently, they are all defined in .cpp and .idl files that are directly compiled into the COM DLL.
To allow unit tests to be written easier, I am planning on moving the implementation of the COM objects out into a separate static library. That library can then be linked in to the main DLL, and the separate unit test project.
I am assuming that there's nothing particularly special about the code generated by ATL, and that this will work much like all other C++ code when it comes to linking with static libraries. However, I don't have too much actual knowledge of ATL myself so don't know if this is really the case.
Will this work as I'm expecting? Or are there pitfalls that I should look out for?
There are gotchas since LIBs are pulled in only if they are referenced, as opposed to OBJs which are explicitly included.
Larry Osterman discussed some of the subtleties a few years ago:
When I moved my code into a library, what happened to my ATL COM
objects?
A caveat: This post discusses details of how ATL7 works. For other
version of ATL, YMMV. The general principles apply for all
versions, but the details are likely to be different.
My group’s recently been working on reducing the number of DLLs
that make up the feature we’re working on (going from somewhere
around 8 to 4). As a part of this, I’ve spent the past couple of
weeks consolidating a bunch of ATL COM DLL’s.
To do this, I first changed the DLLs to build libraries, and then
linked the libraries together with a dummy DllInit routine (which
basically just called CComDllModule::DllInit()) to make the DLL.
So far so good. Everything linked, and I got ready to test the new
DLL.
For some reason, when I attempted to register the DLL, the
registration didn’t actually register the COM objects. At that
point, I started kicking my self for forgetting one of the
fundamental differences between linking objects together to make an
executable and linking libraries together to make an executable.
To explain, I’ve got to go into a bit of how the linker works. When
you link an executable (of any kind), the linker loads all the
sections in the object files that make up the executable. For each
extdef symbol in the object files, it starts looking for a public
symbol that matches the symbol.
Once all of the symbols are matched, the linker then makes a second
pass combining all the .code sections that have identical contents
(this has the effect of collapsing template methods that expand into
the same code (this happens a lot with CComPtr)).
Then a third pass is run. The third pass discards all of the
sections that have not yet been referenced. Since the sections
aren’t referenced, they’re not going to be used in the resulting
executable, so to include them would just bloat the executable.
Ok, so why didn’t my ATL based COM objects get registered? Well,
it’s time to play detective.
Well, it turns out that you’ve got to dig a bit into the ATL code to
figure it out.
The ATL COM registration logic gets picked up in the CComModule
object. Within that object, there’s a method
RegisterClassObjects, which redirects to
AtlComModuleRegisterClassObjects. This function walks a list of
_ATL_OBJMAP_ENTRY structures and calls the RegisterClassObject
on each structure. The list is retrieved from the
m_ppAutoObjMapFirst member of the CComModule (ok, it’s really a
member of the _ATL_COM_MODULE70, which is a base class for the
CComModule). So where did that field come from?
It’s initialized in the constructor of the CAtlComModule, which
gets it from the __pobjMapEntryFirst global variable. So where’s
__pobjMapEntryFirst field come from?
Well, there are actually two fields of relevance,
__pobjMapEntryFirst and __pobjMapEntryLast.
Here’s the definition for the __pobjMapEntryFirst:
__declspec(selectany) __declspec(allocate("ATL$__a")) _ATL_OBJMAP_ENTRY* __pobjMapEntryFirst = NULL;
And here’s the definition for __pobjMapEntryLast:
__declspec(selectany) __declspec(allocate("ATL$__z")) _ATL_OBJMAP_ENTRY* __pobjMapEntryLast = NULL;
Let’s break this one down:
__declspec(selectany): __declspec(selectany) is a directive to
the linker to pick any of the similarly named items from the section
– in other words, if a __declspec(selectany) item is found
in multiple object files, just pick one, don’t complain about it
being multiply defined.
__declspec(allocate("ATL$__a")): This one’s the one that makes
the magic work. This is a declaration to the compiler, it tells the
compiler to put the variable in a section named "ATL$__a" (or
"ATL$__z").
Ok, that’s nice, but how does it work?
Well, to get my ATL based COM object declared, I included the
following line in my header file:
OBJECT_ENTRY_AUTO(<my classid>, <my class>)
OBJECT_ENTRY_AUTO expands into:
#define OBJECT_ENTRY_AUTO(clsid, class) \
__declspec(selectany) ATL::_ATL_OBJMAP_ENTRY __objMap_##class = {&clsid, class::UpdateRegistry, class::_ClassFactoryCreatorClass::CreateInstance, class::_CreatorClass::CreateInstance, NULL, 0, class::GetObjectDescription, class::GetCategoryMap, class::ObjectMain }; \
extern "C" __declspec(allocate("ATL$__m")) __declspec(selectany) ATL::_ATL_OBJMAP_ENTRY* const __pobjMap_##class = &__objMap_##class; \
OBJECT_ENTRY_PRAGMA(class)
Notice the declaration of __pobjMap_##class above – there’s
that declspec(allocate("ATL$__m")) thingy again. And that’s where
the magic lies. When the linker’s laying out the code, it sorts
these sections alphabetically – so variables in the ATL$__a
section will occur before the variables in the ATL$__z section.
So what’s happening under the covers is that ATL’s asking the linker
to place all the __pobjMap_<class name> variables in the
executable between __pobjMapEntryFirst and __pobjMapEntryLast.
And that’s the crux of the problem. Remember my comment above about
how the linker works resolving symbols? It first loads all the items
(code and data) from the OBJ files passed in, and resolves all the
external definitions for them. But none of the files in the wrapper
directory (which are the ones that are explicitly linked) reference
any of the code in the DLL (remember, the wrapper doesn’t do much more
than simply calling into ATL’s wrapper functions – it doesn’t
reference any of the code in the other files).
So how did I fix the problem? Simple. I knew that as soon as the
linker pulled in the module that contained my COM class definition,
it'd start resolving all the items in that module. Including the
__objMap_<class>, which would then be added in the right location so that ATL would be able to pick it up. I put a dummy function
called ForceLoad<MyClass> inside the module in the library, and
then added a function called CallForceLoad<MyClass> to my DLL
entry point file (note: I just added the function – I didn’t
call it from any code).
And voila, the code was loaded, and the class factories for my COM
objects were now auto-registered.
What was even cooler about this was that since no live code called
the two dummy functions that were used to pull in the library, pass
three of the linker discarded the code!
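Boiled down to a sketch (names hypothetical), the force-load trick from the quoted post looks like this:

// mycomclass.cpp -- lives in the static library, next to the OBJECT_ENTRY_AUTO
// for the COM class (which places its map entry in the ATL$__m section)
void ForceLoadMyComClass() {}            // dummy; never called at runtime

// dllmain.cpp -- part of the DLL project itself, so it is always linked in
void ForceLoadMyComClass();              // forward declaration
void CallForceLoadMyComClass()           // merely referencing the symbol makes the
{                                        // linker pull in mycomclass.obj, and with it
    ForceLoadMyComClass();               // the __objMap_ entry that ATL registration walks
}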

How to reliably replace a library-defined error handler with my own?

On certain error cases ATL invokes AtlThrow(), which is implemented as ATL::AtlThrowImpl(), which in turn throws CAtlException. The latter is not very good: CAtlException is not even derived from std::exception, and we use our own exception hierarchy, so now we will have to catch CAtlException separately here and there, which is lots of extra code and error-prone.
Looks like it is possible to replace ATL::AtlThrowImpl() with my own handler - define _ATL_CUSTOM_THROW and define AtlThrow() to be the custom handler before including atlbase.h - and ATL will call the custom handler.
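Schematically, that setup looks like this (the custom handler and its exception type are made up; the hook itself is the documented _ATL_CUSTOM_THROW mechanism):

// Must appear before any ATL header, e.g. at the top of stdafx.h
#include <windows.h>
#include <stdexcept>
#include <string>

#define _ATL_CUSTOM_THROW
#define AtlThrow MyProjectAtlThrow       // ATL calls this instead of ATL::AtlThrowImpl

__declspec(noreturn) inline void MyProjectAtlThrow(HRESULT hr)
{
    // Translate into our own exception hierarchy (std::runtime_error as a stand-in).
    throw std::runtime_error("ATL failure, HRESULT = " + std::to_string(hr));
}

#include <atlbase.h>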
Not so easy. Some of the ATL code is not available as source; it comes compiled as a library, either static or dynamic. We use the static one, atls.lib. And it is compiled in such a way that it has ATL::AtlThrowImpl() inside, together with some code calling it. I used a static analysis tool; it clearly shows that there are paths on which the old default handler is called.
To confirm this, I even tried to "reimplement" ATL::AtlThrowImpl() in my code. Now the linker says it sees two definitions of ATL::AtlThrowImpl(), which I suppose confirms that there is another implementation that can be called by some code.
How can I handle this? How do I replace the default handler completely and ensure that the default handler is never called?
I encountered similar problems when writing my own memory manager and wanting to override malloc and free.
The problem is that ATL::AtlThrowImpl is probably part of a source file that also contains other functions that are really needed.
If the linker sees a reference to a function in one of the object files, it pulls in the object file with that function, including all other functions in that same object file.
The solution is to look up in the ATL sources where ATL::AtlThrowImpl is defined and see if the source contains other functions. You will also need to implement the other functions to prevent the linker from having any reference to the original ATL source.
I have not worked with these Windows libraries in quite a while, but you should be able to get the linker to pick your replacement by putting it in a static library listed earlier in the link statement. Make sure you are using the same method signature as the original. You can also play with linker flags to omit default libraries, for various definitions of default.
If you have a friend with access to MSDN, it may have some of the ATL debug sources, which also helps.
And, as Patrick says, you need to provide a replacement for everything in the same linkage scope. Some libraries break the individual methods out into separate object files to make it easier to replace one piece. That's more common with e.g. the C standard library than with a class. If there's a lot of ATL code capable of calling their throw impl in a single object file, it may be more pain than it is worth; you may need to try catching and rethrowing as your own type.

Should I use GetProcAddress or just include various win32 libraries?

I wonder what the difference is between:
static PROCESSWALK pProcess32First=(PROCESSWALK)GetProcAddress(hKernel,"Process32First");
...
pProcess32First(...);
What is hKernel? Look in here. You can replace it with GetModuleHandle().
and
#include <Tlhelp32.h>
...
Process32First(...);
What are the differences, and which should I use? Is there any difference in terms of best practices?
NOTE: my answer assumes that the function is available either way; there are other things to consider if you are after non-exported functions.
If you use LoadLibrary and GetProcAddress, then you have the option of running with reduced functionality if the required library isn't there. If you include the header and link the import library directly, and the DLL isn't there (or doesn't have the export due to a wrong version), your app will simply fail to load.
It really only makes a difference if you want to use a function that isn't in all versions of a given dll.
In addition to what Evan said (which is correct), another important difference (IMHO) is that if you're dynamically loading functions, you need a typedef to cast the void* to in order to call the function once it's loaded. Unless the header file which defines the function prototype for static linkage has a mechanism for also generating the function-pointer typedef from the same definition, you're going to end up with a duplicate prototype, probably in your code. If the external header definitions are ever updated (for example, with new definitions for 64-bit data types), you risk runtime errors in your application unless you also update your function-pointer typedefs (these will not be caught at compile time, because of the C-style cast to the function typedef).
It's a subtle issue, but an important one to consider. I would use implicit ("static") linking if you can because of that issue, and if you're using dynamic loading, be aware of the issue, and structure your code to avoid problems in the future as best you can.
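To make that concrete, here is roughly what the dynamic-loading variant looks like, with the extra typedef that the statically linked call does not need (a sketch with minimal error handling; the W-suffixed names are used explicitly to sidestep the UNICODE A/W macro mapping):

#include <windows.h>
#include <tlhelp32.h>
#include <cstdio>

// This typedef duplicates the prototype from <Tlhelp32.h>; if that prototype ever
// changes, the C-style cast below hides the mismatch until runtime.
typedef BOOL (WINAPI *PROCESSWALK)(HANDLE snapshot, LPPROCESSENTRY32W entry);

int main() {
    HMODULE hKernel = GetModuleHandleA("kernel32.dll");
    PROCESSWALK pProcess32FirstW =
        (PROCESSWALK)GetProcAddress(hKernel, "Process32FirstW");
    if (!pProcess32FirstW) { std::puts("Process32FirstW not available"); return 1; }

    HANDLE snap = CreateToolhelp32Snapshot(TH32CS_SNAPPROCESS, 0);
    PROCESSENTRY32W entry = { sizeof(entry) };
    if (pProcess32FirstW(snap, &entry))
        std::printf("first process: %ls\n", entry.szExeFile);
    CloseHandle(snap);
    return 0;
}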
See here for Microsoft's explanation.
Implicit linking (using the *.lib) is simpler.
As for the kernel libraries, there is no other difference.
I would take the first approach if I had optional plugin libraries that the user may or may not have. For necessary libraries I would use the second since it is a lot less code to get the functions.