I wonder what the difference is between this:
typedef BOOL (WINAPI *PROCESSWALK)(HANDLE, LPPROCESSENTRY32);
static PROCESSWALK pProcess32First=(PROCESSWALK)GetProcAddress(hKernel,"Process32First");
...
pProcess32First(...);
what is hKernel? A module handle for kernel32.dll, e.g. obtained with LoadLibrary(); you can replace that with GetModuleHandle().
and
#include <Tlhelp32.h>
...
Process32First(...);
What are the differences, and which should I use? Is there any difference in terms of best practices?
NOTE: my answer assumes that the function is available either way; there are other things to consider if you are after non-exported functions.
If you use LoadLibrary and GetProcAddress, you have the option of running with reduced functionality when the required library isn't there. If you include the header and link directly against the import .lib, and the DLL isn't there (or doesn't have the export, due to a wrong version), your app will simply fail to load.
It really only makes a difference if you want to use a function that isn't in all versions of a given dll.
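To illustrate, a minimal sketch of the graceful-degradation approach (assumptions: error handling is trimmed, and the W-suffixed export name is used to sidestep the ANSI/UNICODE macro mapping that tlhelp32.h applies to Process32First):

#include <windows.h>
#include <tlhelp32.h>  // for LPPROCESSENTRY32W
#include <cstdio>

// Pointer type matching Process32FirstW.
typedef BOOL (WINAPI *PROCESSWALK)(HANDLE, LPPROCESSENTRY32W);

int main()
{
    // kernel32.dll is always loaded, so GetModuleHandle suffices here;
    // for a truly optional library you would use LoadLibrary and check for NULL.
    HMODULE hKernel = GetModuleHandleW(L"kernel32.dll");
    PROCESSWALK pProcess32First =
        (PROCESSWALK)GetProcAddress(hKernel, "Process32FirstW");

    if (!pProcess32First) {
        // Export missing: degrade gracefully instead of failing to load.
        std::puts("Process32FirstW unavailable; running with reduced functionality.");
        return 0;
    }

    // ... use pProcess32First(snapshot, &entry) as usual ...
    return 0;
}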
In addition to what Evan said (which is correct), another important difference (IMHO) is that if you're dynamically loading functions, you need a typedef to cast the void* to in order to call the function once it's loaded. Unless the header file that defines the function prototype for static linkage also has a mechanism for defining the function-pointer typedef from the same definition, you're going to end up with a duplicate of the function's signature, probably in your code. If the external header definitions are ever updated (for example, with new definitions for 64-bit data types), you risk runtime errors in your application unless you update your copies of the prototypes, and these will not be caught at compile time because of the C-style cast to the function typedef.
It's a subtle issue, but an important one to consider. I would use implicit ("static") linking if you can because of that issue, and if you're using dynamic loading, be aware of the issue, and structure your code to avoid problems in the future as best you can.
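One way to mitigate the duplicate-prototype problem, assuming a C++11 compiler: derive the pointer type from the SDK header's own declaration with decltype, so the signature lives in exactly one place. A sketch:

#include <windows.h>
#include <tlhelp32.h>  // declares Process32FirstW

// The pointer type is derived from the header's prototype; if the SDK
// header is ever updated (e.g. with new 64-bit types), this type follows
// automatically, so there is no second prototype to keep in sync.
using Process32FirstPtr = decltype(&Process32FirstW);

Process32FirstPtr pProcess32First = reinterpret_cast<Process32FirstPtr>(
    GetProcAddress(GetModuleHandleW(L"kernel32.dll"), "Process32FirstW"));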
See here for Microsoft's explanation.
Implicit linking (using the *.lib) is simpler.
As for kernel libraries, there is no other difference.
I would take the first approach if I had optional plugin libraries that the user may or may not have. For necessary libraries I would use the second since it is a lot less code to get the functions.
Let's say I have this lib:
//// testlib.h
#pragma once
#include <iostream>
void __declspec(dllexport) test();
int __declspec(dllexport) a();
If I omit the definitions for a() and test() from my testlib.cpp, the library still compiles, because the interface [1] is still valid. (If I use them from a client app, then they won't link, obviously.)
Is there a way I can ensure that when the obj is created (which I gather is the compiler's job) it actually looks for the definitions of the functions that I explicitly exported, and fails if it doesn't?
This is not related to any real world issue. Just curious.
[1] MSVC docs
No, it's not possible.
Partially because a dllexport declaration might legally not even be implemented in the same DLL, let alone library, but be a mere forward declaration for something provided by yet another DLL.
In particular, it's impossible to decide this at the object level: there, a dllexport declaration is just another forward declaration like any other.
You can dump the exported symbols once the DLL has been linked, but there is no common tool for checking completeness.
Ultimately, you can't do without a client test application which attempts to load all exported interfaces. You can't check that at compile time. Even just successfully linking the test application isn't enough; you have to actually run it.
It gets even worse if there are delay-loaded DLLs (and yes, there usually are), because now you can't even check for completeness unless you actually call at least one symbol from each involved DLL.
Tools like Dependency Walker etc. exist for this very reason.
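For illustration, a minimal sketch of such a client test application (the DLL name and export list are hypothetical, and undecorated names are assumed; C++ exports are name-decorated unless declared extern "C", so the probe strings must match what dumpbin /EXPORTS reports):

#include <windows.h>
#include <cstdio>

int main()
{
    // Load the DLL under test and probe each export we expect it to provide.
    HMODULE dll = LoadLibraryW(L"testlib.dll");
    if (!dll) { std::puts("failed to load testlib.dll"); return 1; }

    const char* expected[] = { "test", "a" };  // hypothetical export list
    int missing = 0;
    for (const char* name : expected) {
        if (!GetProcAddress(dll, name)) {
            std::printf("missing export: %s\n", name);
            ++missing;
        }
    }
    FreeLibrary(dll);
    return missing ? 1 : 0;
}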
You asked:
"Is there a way I can ensure that when the obj is created (which I gather is the compiler's job) it actually looks for the definitions of the functions that I explicitly exported, and fails if it doesn't?"
When you load a DLL, the actual function binding happens at runtime (late binding), so it is not possible for the compiler to know whether the function definitions are available in the DLL or not. Hope this answers your query.
How do I (programmatically) ensure that a dll contains definitions for all exported functions?
In general, in standard C++11 (read the document n3337 and see this C++ reference), you cannot, because C++11 does not know about DLLs or dynamic linking. It might sometimes make sense to dynamically load code which does not define a function that you pragmatically know will never be called (e.g. an incomplete DLL for drawing shapes, where you happen to know that circles will never be drawn in your particular usage, so the loaded DLL might not define any class Circle related code). In standard C++11, every called function should be defined somewhere (in some other translation unit).
Look also into Qt's vision of plugins.
Read Levine's Linkers and Loaders book. Notice that on Linux, plugins loaded with dlopen(3) have different semantics than Windows DLLs. The devil is in the details.
In practice, you might consider using some recent variant of the GCC compiler and developing your own GCC plugin to check that. This could require several weeks of work.
Alternatively, adapt the Clang static analyzer for your needs. Again, budget several weeks of work.
See also this draft report, and think about C++ cross-compilers (e.g. compiling, on Windows, a plugin for a Raspberry Pi).
Consider also runtime code generation frameworks like asmjit or libgccjit. You might think of generating the missing stubs or functions at runtime (and filling function pointers with them appropriately, or even vtables). C++ exceptions could also be an additional issue.
If your DLL contains calls to the functions, the linker will fail if a definition isn't provided for those functions. It doesn't matter if the calls are never executed, only that they exist.
//// testlib.h
#pragma once
#include <iostream>
#ifndef DLLEXPORT
#define DLLEXPORT(TYPE) TYPE __declspec(dllexport)
#endif
DLLEXPORT(void) test();
DLLEXPORT(int) a();
//// testlib_verify.c
#define DLLEXPORT(TYPE)   // expands each exported declaration to a bare call
void DummyFunc()
{
#include "testlib.h"      // note the quotes; the header's own #include <iostream>
                          // would also land inside the function body, so such lines
                          // must be kept out of (or guarded in) testlib.h for this to compile
}
This macro-based solution only works for functions with no parameters, but it should be easy to extend.
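To see why this works: with DLLEXPORT(TYPE) expanding to nothing, the exported declarations in the header turn into plain expression statements inside DummyFunc(), roughly:

void DummyFunc()
{
    test();  // now a real call: the linker must resolve it
    a();     // likewise; a missing definition becomes a link error
}

Since DummyFunc() itself is linked into the DLL, both calls must be resolved, which is exactly the completeness check the question asks for.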
On my project I often see people defining global functions in .cpp files, i.e. functions that are not restricted to file scope, class scope or any particular namespace.
These are clearly local helper functions that the author only wanted to be able to access in that file.
I know this is bad practice and the solution is to restrict them to file scope either by using the static keyword or better yet use an anonymous namespace.
But my question is, if these functions are not declared in the header file, what can actually go wrong?
I would like to advise these people against this practice, but I feel my argument would have more weight if I could clearly describe what could go wrong. Or even what might already be going wrong that we are not aware of!
Thanks.
One, you are cluttering the namespace. The result can be multiple definitions, i.e. linker errors, and programmers choosing awkward function names to circumvent this. Imagine one source file defining its helper() function, the next one a my_helper() because helper() resulted in an error, then a third a other_helper() and so on... in any case, the cleaner the namespace, the easier it becomes to understand what is actually going on.
Two, and this is an extension of the above, imagine a helper( int x ) and a helper( long y ), and you can imagine the kind of ambiguity that could arise from this. If you are lucky (and using appropriate warning options), the compiler will warn you about these conditions, but you might end up calling a different function than what you expected.
Three, and this is from a maintainer's point of view, if you see a function that is static or declared in an anonymous namespace, you know that you only have to check the current source file for calls to this function. This makes refactorings that much easier. ("Does anyone actually use this exotic but buggy feature, or can I optimize it away?")
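To make the first point concrete, a sketch (file and function names made up):

// parser.cpp
int helper(int x) { return x + 1; }   // external linkage

// printer.cpp
int helper(int x) { return x * 2; }   // link error: multiple definition of helper(int)

// the fix, in printer.cpp: internal linkage via an anonymous namespace
namespace {
    int helper(int x) { return x * 2; }  // private to this file; no collision
}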
Ulrich Drepper's paper on shared ELF libraries is relevant for you if you produce dynamically shared objects, usually shared libraries. I assume that some considerations also apply to applications which just are dynamically linked. The paper discusses the GNU tools but similar concerns will likely apply to other tool chains.
In short there may be build-time, load-time and run-time penalties for global objects and functions.
Build- and load-time penalties are rooted in the number of (string) comparisons needed to resolve dependencies, which are not necessary for locally-defined symbols like file-static functions and variables. Drepper discusses this on page 8, using the example of OpenOffice.
The reason for run-time penalties is that ELF specifies that even locally-defined global symbols can be interposed (replaced) at run time by definitions in other objects. Therefore the function's code cannot be inlined and further optimized, even though it is visible at compile time, and the function call proper is more complicated than necessary, involving more indirection. See Drepper's paper, pp. 17 and 18.
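As a sketch of the mitigation Drepper recommends for the run-time cost, GCC and Clang let you restrict symbol visibility (hidden visibility can also be made the project default with -fvisibility=hidden):

// Exported: callers in other objects may interpose this symbol at run
// time, so calls to it cannot be inlined across the library boundary.
__attribute__((visibility("default")))
int api_entry(int x);

// Hidden: never enters the dynamic symbol table, cannot be interposed,
// and the compiler is free to inline it within the shared object.
__attribute__((visibility("hidden")))
int internal_helper(int x) { return x * 2; }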
I have a big C++ project in which I am trying to implement a debug function which needs classes from other libraries. Unfortunately these classes share the same names and namespaces as classes called inside the project. I tried to use a static library to avoid multiple definitions, but of course the compiler complains about that issue. So my question:
Is it possible to create that library for the function without the compiler knowing about the classes called inside the function?
I don't know, something like a "protected function", or putting all the code from the libraries inside the function body...
Edit: I'm using the g++ compiler.
Max, I know but so far I see no other way.
Schematically, the problem is:
Project:
#include "a.h"  // (old one)
#include "a2.h"
return a->something();
return a2->something(); // debug function

debug function a2:

#include "a.h"  // (new one!)
return a->something(); // (new one!)

The compile step so far looks like:
g++ project -la -la2
That is a very simplified draft. But that's it actually.
Maybe you can create a wrapper library which internally links to that outside library and exports its definitions under a different name or namespace.
Try enclosing the #includes for the declarations of the classes that you are using in your debug function in a namespace, but don't add a using clause for that namespace.
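A sketch of that suggestion (names hypothetical; note it is fragile: it only works if the wrapped header is self-contained and does not rely on extern "C" or other global-namespace declarations):

// debug_helpers.cpp
namespace debug_version {
#include "a.h"           // the *new* a.h; its classes now live in debug_version::
}

int debugSomething()
{
    debug_version::A a;  // distinct from the project's ::A
    return a.something();
}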
There are a few techniques that may help you, but that depends on what the "debug version" of the library does.
First, it's not unheard of to have #ifdef blocks inside functions that do additional checking depending on whether the program was built in debug mode. The C assert macro behaves this way.
Second, it's possible that the "debug version" does nothing more than log messages. It's easy enough to include the logging code in both debug and release versions, and make the decision to actually log based on some kind of "priority" parameter for each log message.
Third, you may consider using an event-based design where functions can, optionally, take objects as parameters that have certain methods, and then if interesting things happen and the function was passed an event object, the function can call those methods.
Finally, if you're actually interested in what happens at a lower level than the library you're working on, you can simply link to debug versions of those lower level libraries. This is a case of the first option mentioned above, applied to a different library than the one you're actually working on. Microsoft's runtime libraries do this, as do Google's perftools and many "debugging malloc" libraries.
Is there ever such a pattern of dependencies that it is impossible to keep everything in header files only? What if we enforced a rule of one class per header only?
For the purposes of this question, let's ignore static things :)
I am aware of no features in standard C++, excepting statics (which you have already mentioned), that require a library to define a full translation unit (instead of only headers). However, it's not recommended to do that, because when you do, you force all your clients to recompile their entire codebase whenever your library changes. If you're using source files or a static library or a dynamic library form of distribution, your library can be changed/updated/modified without forcing everyone to recompile.
It is possible, I would say, on the express condition of not using a number of language features: as you noticed, a few uses of the static keyword.
It may require a few tricks, but they can be reviewed.
You'll need to keep the header / source distinction whenever you need to break a dependency cycle, even though the two files will be header files in practice.
Free functions (non-template) have to be declared inline; the compiler may not actually inline them, but if they are declared so it won't complain that they have been redefined when the client builds its library / executable.
Globally shared data (global variables and class static attributes) should be emulated using a local static in a function / class method. In practice it matters little as far as the caller is concerned (it just adds ()). Note that in C++0x this becomes the favored way because it's guaranteed to be thread-safe while still protecting from the initialization order fiasco; until then... it's not thread-safe ;)
Respecting those 3 points, I believe you would be able to write a fully-fledged header-only library (anyone sees something else I missed ?)
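As a sketch of points 2 and 3, a tiny header-only "global" (header name hypothetical):

// counterlib.h (header-only)
#pragma once

// Point 2: declared inline, so any number of translation units may
// include this header without multiple-definition link errors.
inline int next_id()
{
    // Point 3: the function-local static emulates one program-wide
    // global; callers just add () and write next_id(). Since C++11 the
    // initialization is also guaranteed to be thread-safe.
    static int counter = 0;
    return ++counter;
}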
A number of Boost libraries have used similar tricks to be header-only even though their code was not completely templated. For example, Asio does this very consciously and offers the alternative via flags (see the release notes for Asio 1.4.6):
clients who only need a couple features need not worry about building / linking, they just grab what they need
clients who rely on it a bit more or want to cut down on compilation time are offered the ability to build their own Asio library (with their own sets of flags) and then include "lightweight" headers
This way (at the price of some more effort on the part of the library devs) the clients get their cake and eat it too. It's a pretty nice solution I think.
Note: I am wondering whether static functions could be inlined, I prefer to use anonymous namespaces myself so never really looked into it...
The one class per header rule is meaningless. If this doesn't work:
#include <header1>
#include <header2>
then some variation of this will:
#include <header1a>
#include <header2>
#include <header1b>
This might result in less than one class per header, but you can always use (void*) and casts and inline functions (in which case the 'inline' will likely be duly ignored by the compiler). So the question, it seems to me, can be reduced to:
class A
{
    // ...
    void *pimpl;
};
Is it possible that the private implementation, pimpl, depends on the declaration of A? If so, then pimpl.cpp (as a header) must both precede and follow A.h. But since you can always, once again, use (void*) and casts and inline functions in preceding headers, it can be done.
Of course, I could be wrong. In either case: Ick.
In my long career, I haven't come across a dependency pattern that would disallow a header-only implementation.
Mind you, if you have circular dependencies between classes, you may need to resort either to the abstract interface / concrete implementation paradigm, or to templates (using templates allows you to forward-reference properties/methods of template parameters, which are resolved later, during instantiation).
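For illustration, a sketch of the template approach to a circular dependency (types made up): B calls T::ping() without ever seeing A's definition, because the call is only checked when pong() is instantiated, by which time A is complete.

template <typename T>
struct B
{
    T* peer = nullptr;
    void pong() { if (peer) peer->ping(); }  // checked at instantiation time
};

struct A
{
    B<A> b;        // fine: B<A> only needs T as a pointer here
    void ping() {}
};

int main()
{
    A a;
    a.b.peer = &a;
    a.b.pong();    // by now A is complete, so A::ping resolves
}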
This does not mean that you SHOULD always aim for header-only libraries. Good as they are, they should be reserved for template and inline code. They SHOULD NOT include substantial complex calculations.
I have a global list of function pointers. This list should be populated at startup. Order is not important and there are no dependencies that would complicate static initialization. To facilitate this, I've written a class that adds a single entry to this list in its constructor, and scatter global instances of this class via a macro where necessary. One of the primary goals of this approach is to remove the need for explicitly referencing every instance of this class externally, instead allowing each file that needs to register something in the list to do it independently. Nice and clean.
However, when placing these objects in a static library, the linker discards (or rather never links in) these units because no code in them is explicitly referenced. Explicitly referencing symbols in the compilation units would be counterproductive, directly contradicting one of the main goals of the approach. For the same reason, /INCLUDE is not an acceptable option, and /OPT:NOREF is not actually related to this problem.
Metrowerks has a __declspec directive for it, GCC has -force_load, but I cannot find any equivalent for MSVC.
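For concreteness, a minimal sketch of the registration pattern being described (all names hypothetical):

// registry.h
#pragma once
#include <vector>

using InitFn = void (*)();

inline std::vector<InitFn>& registry()
{
    static std::vector<InitFn> fns;   // filled before main() by the registrars
    return fns;
}

struct Registrar
{
    explicit Registrar(InitFn fn) { registry().push_back(fn); }
};

#define REGISTER_INIT(fn) static Registrar registrar_##fn{fn};

// some_module.cpp registers itself with no external references:
//     static void setupSomeModule() { /* ... */ }
//     REGISTER_INIT(setupSomeModule)
// It is precisely this self-contained object file that the librarian/linker
// pair never pulls in, as described above.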
I think you want a #pragma optimize("", off) and #pragma optimize("", on) pair around these variable declarations. That will turn off the optimization which is discarding them.
Also remember that preprocessor directives can't be part of macros. __pragma can be used for some things (and is designed specifically to be used in macros), but not everything, and the syntax is a little different. I'm not sure if it works with optimize.
If you don't want to #include anything, then this won't work, but if you can live with that, a common technique is to use a nifty counter to call a function declared in the .cpp file before main, which then initializes any static objects in said file.
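A simplified sketch of that idea (names hypothetical; the full nifty-counter idiom also manages destruction order):

// module_init.h
#pragma once

void initModule();   // defined in module_init.cpp; performs that file's registrations

// Every translation unit that includes this header instantiates one guard.
// The first guard constructed calls initModule(), and the reference alone
// forces the linker to pull module_init.obj out of the static library.
struct ModuleInitGuard
{
    ModuleInitGuard()
    {
        static bool done = false;
        if (!done) { done = true; initModule(); }
    }
};
static ModuleInitGuard moduleInitGuard;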
I've used this in projects that had a lot of objects that should go into a factory. We had to have one file that #included each header declaring a class that must register itself in the factory. It's reasonably doable IMHO, as I strongly prefer writing source files over setting various linker/compiler options.
Another way (that doesn't seem to work via static libs, see comments) could be to declare something as __declspec(dllexport), which would force its inclusion (__declspec works in executables too).