Is it currently possible to scan/query/iterate all functions (or classes) with some attribute across modules?
For example:
source/packageA/something.d:
#sillyWalk(10)
void doSomething()
{
}
source/packageB/anotherThing.d:
#sillyWalk(50)
void anotherThing()
{
}
source/main.d:
void main()
{
    for (func; /* All #sillyWalk ... */) {
        ...
    }
}
Believe it or not, yes, it kinda is... though it is REALLY hacky and has a lot of holes. Code: http://arsdnet.net/d-walk/
Running that will print:
Processing: module main
Processing: module object
Processing: module c
Processing: module attr
test2() has sillyWalk
main() has sillyWalk
You'll want to take a quick look at c.d, b.d, and main.d to see the usage. The onEach function in main.d processes each hit the helper function finds, here just printing the name. In the main function, you'll see a crazy-looking mixin(__MODULE__) - this is a hacky trick to get a reference to the current module as a starting point for our iteration.
Also notice that the main.d file has a module project.main; line up top - if the module name were just main, as it is automatically without that declaration, the mixin hack would confuse the module with the function main. This code is really brittle!
Now, direct your attention to attr.d: http://arsdnet.net/d-walk/attr.d
module attr;

struct sillyWalk { int i; }

enum isSillyWalk(alias T) = is(typeof(T) == sillyWalk);

import std.typetuple;
alias hasSillyWalk(alias what) = anySatisfy!(isSillyWalk, __traits(getAttributes, what));
enum hasSillyWalk(what) = false;

alias helper(alias T) = T;
alias helper(T) = T;

void allWithSillyWalk(alias a, alias onEach)() {
    pragma(msg, "Processing: " ~ a.stringof);
    foreach(memberName; __traits(allMembers, a)) {
        // guards against errors from trying to access private stuff etc.
        static if(__traits(compiles, __traits(getMember, a, memberName))) {
            alias member = helper!(__traits(getMember, a, memberName));

            // pragma(msg, "looking at " ~ memberName);
            import std.string;
            static if(!is(typeof(member)) && member.stringof.startsWith("module ")) {
                enum mn = member.stringof["module ".length .. $];
                mixin("import " ~ mn ~ ";");
                allWithSillyWalk!(mixin(mn), onEach);
            }

            static if(hasSillyWalk!(member)) {
                onEach!member;
            }
        }
    }
}
First, we have the attribute definition and some helpers to detect its presence. If you've used UDAs before, nothing really new here - just scanning the attributes tuple for the type we're interested in.
The helper templates are a trick to abbreviate repeated calls to __traits(getMember) - it just aliases it to a nicer name while avoiding a silly parse error in the compiler.
Finally, we have the meat of the walker. It loops over allMembers, the workhorse of D's compile-time reflection. (If you aren't familiar with this, take a gander at the sample chapter of my D Cookbook https://www.packtpub.com/application-development/d-cookbook - the "Free Sample" link is the chapter on compile-time reflection.)
Next, the first static if just makes sure we can actually get the member we want to get. Without that, it would throw errors on trying to get private members of the automatically imported object module.
The end of the function is simple too - it just calls our onEach on each element. But the middle is where the magic is: if it detects a module import in the walk (sooo hacky btw, but it's the only way I know to do it), it imports that module here, gaining access to it via the same mixin(module) trick used at the top level... thus recursing through the program's import graph.
If you play around, you'll see it actually kinda works. (Compile all those files together on the command line btw for best results: dmd main.d attr.d b.d c.d)
But it also has a number of limitations:
Going into class/struct members is possible, but not implemented here. Pretty straightforward though: if the member is a class, just descend into it recursively too.
It is liable to break if a module shares a name with a member, like the main example mentioned above. Work around it by using unique module names, with some package dots too, and it should be OK.
It will not descend into function-local imports, meaning it is possible to use a function in the program that will not be picked up by this trick. I'm not aware of any solution to this in D today, not even if you're willing to use every hack in the language.
Adding code with UDAs is always tricky, but doubly so here because the onEach is a function with its own scope. You could perhaps build up a global associative array of delegates to handlers for the things, though: void delegate()[string] handlers; /* ... */ handlers[memberName] = &localHandlerForThis; - that kind of thing, for runtime access to the information.
I betcha it will fail to compile on more complex stuff too, I just slapped this together now as a toy proof of concept.
Most D code, instead of trying to walk the import tree like this, just demands that you mixin UdaHandler!T; in the individual aggregate or module where it is used, e.g. mixin RegisterSerializableClass!MyClass; after each one. Maybe not super DRY, but way more reliable.
edit:
There's another bug I didn't notice when writing the answer originally: the "module b.d;" didn't actually get picked up. Renaming it to "module b;" works, but not when it includes the package.
ooooh cuz it is considered "package mod" in stringof.... which has no members. Maybe if the compiler just called it "module foo.bar" instead of "package foo" we'd be in business though. (of course this isn't practical for application writers... which kinda ruins the trick's usefulness at this time)
Related
I have a working set of a Tcl script plus a C++ extension, but I don't know exactly how it works or how it was compiled. I am using GCC on Arch Linux.
It works as follows: when we execute the test.tcl script, it passes some values to an object of a class defined in the C++ extension. Using these values, the extension, by way of a macro, computes some results and prints some graphics.
In the test.tcl script I have:
#!object
use_namespace myClass
proc simulate {} {
uplevel #0 {
set running 1
for {} {$running} { } {
moveBugs
draw .world.canvas
.statusbar configure -text "t:[tstep]"
}
}
}
set toroidal 1
set nx 100
set ny 100
set mv_dist 4
setup $nx $ny $mv_dist $toroidal
addBugs 100
# size of a grid cell in pixels
set scale 5
myClass.scale 5
The object.cc looks like:
#include //some includes here
MyClass myClass;
make_model(myClass); // --> this is a macro!
The Macro "make_model(myClass)" expands as follows:
namespace myClass_ns { DEFINE_MYLIB_LIBRARY; int TCL_obj_myClass
(mylib::TCL_obj_init(myClass),TCL_obj(mylib::null_TCL_obj,
(std::string)"myClass",myClass),1); };
The class definition is:
class MyClass:
{
public:
    int tstep;  // timestep - updated each time moveBugs is called
    int scale;  // no. pixels used to represent bugs

    void setup(TCL_args args) {
        int nx = args, ny = args, moveDistance = args;
        bool toroidal = args;
        Space::setup(nx, ny, moveDistance, toroidal);
    }
The whole thing creates a cell-grid with some dots (bugs) moving from one cell to another.
My questions are:
How do the class methods and variables get the script values?
How is it possible to have C++ code and compile it without a main function?
What is that macro doing there in the extension, and how does it work?
Thanks
Whenever a command in Tcl is run, it calls a function that implements that command. That function is written in a language like C or C++, and it is passed the arguments (either as strings or as Tcl_Obj* values). A full extension will also include a function to do the library initialisation; that function (which is external, has C linkage, and has a name like Foo_Init if your library is foo.dll) does basic setup tasks like registering the implementation functions as commands. It is explicit because it takes a reference to the interpreter context that is being initialised.
The implementation functions can do pretty much anything they want, but to return a result they use one of the functions Tcl_SetResult, Tcl_SetObjResult, etc. and they have to return an int containing the relevant exception code. The usual useful ones are TCL_OK (for no exception) and TCL_ERROR (for stuff's gone wrong). This is a C API, so C++ exceptions aren't allowed.
It's possible to use C++ instance methods as command implementations, provided there's a binding function in between. In particular, the function has to get the instance pointer by casting a ClientData value (an alias for void* in reality, remember this is mostly a C API) and then invoking the method on that. It's a small amount of code.
Compiling things is just building a DLL that links against the right library (or libraries, as required). While extensions are usually recommended to link against the stub library, that's not necessary when you're just developing and testing on one machine. But if you're linking against the Tcl DLL, you'd better make sure that the code gets loaded into a tclsh that uses that DLL. Stub libraries get rid of that tight binding, providing pretty strong ABI stability, but are a little more work to set up; you need to define the right C macro to turn them on and you need to do an extra API call in your initialisation function.
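To make this concrete, here is a minimal sketch of such an extension in C++ (the MyClass stand-in, the command name, and the Foo naming are illustrative assumptions, not the questioner's actual code):

#include <tcl.h>

// Stand-in for the question's class; the real one lives in the extension.
struct MyClass {
    int scale;
    MyClass() : scale(0) {}
    void setScale(int s) { scale = s; }
};

// Command implementation: Tcl hands over the arguments as Tcl_Obj* values,
// and the C++ instance is recovered by casting the ClientData pointer.
static int ScaleCmd(ClientData cd, Tcl_Interp *interp,
                    int objc, Tcl_Obj *const objv[]) {
    MyClass *self = static_cast<MyClass *>(cd);
    if (objc != 2) {
        Tcl_WrongNumArgs(interp, 1, objv, "value");
        return TCL_ERROR;               // exception code, not a C++ exception
    }
    int value;
    if (Tcl_GetIntFromObj(interp, objv[1], &value) != TCL_OK)
        return TCL_ERROR;
    self->setScale(value);
    Tcl_SetObjResult(interp, Tcl_NewIntObj(self->scale));
    return TCL_OK;
}

static MyClass theInstance;

// Library initialisation: external, C linkage, named after the library
// (foo.dll -> Foo_Init). It registers the command implementations.
extern "C" int Foo_Init(Tcl_Interp *interp) {
    // Only needed when building against the stub library (-DUSE_TCL_STUBS):
    if (Tcl_InitStubs(interp, "8.5", 0) == NULL)
        return TCL_ERROR;
    Tcl_CreateObjCommand(interp, "myClass.scale", ScaleCmd,
                         static_cast<ClientData>(&theInstance), NULL);
    return Tcl_PkgProvide(interp, "Foo", "1.0");
}

After load /path/to/foo.dll, a script-level call like myClass.scale 5 lands in ScaleCmd with objv[1] holding the 5.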
I assume you already know how to compile and link C++ code. I won't tell you how to do it, but there are bound to be other questions here on Stack Overflow if you need assistance.
Using the code? For an extension, it's basically just:
# Dynamically load the DLL and call the init function
load /path/to/your.dll
# Commands are all present, so use them
NewCommand 3
There are some extra steps later on to turn a DLL into a proper Tcl package, abstracting code that uses the DLL away from the fact that it is exactly that DLL and so on, but they're not something to worry about until you've got things working a lot more.
That probably wasn't very clear. Say I have a char *a = "reg". Now, I want to write a function that, on reading the value of a, instantiates an object of a particular class and names it reg.
So for example, say I have a class Register, and a separate function create(char *). I want something like this:
void create(char *s) //s == "reg"
{
//what goes here?
Register reg; // <- this should be the result
}
It should be reusable:
void create(char *s) //s == "second"
{
//what goes here?
Register second; // <- this should be the result
}
I hope I've made myself clear. Essentially, I want to treat the value in a variable as a separate variable name. Is this even possible in C/C++? If not, anything similar? My current solution is to hash the string, and the hash table would store the relevant Register object at that location, but I figured that was pretty unnecessary.
Thanks!
Variable names are compile-time artifacts. They don't exist at runtime. It doesn't make sense in C++ to create a dynamically-named variable. How would you refer to it?
Let's say you had this hypothetical create function, and wrote code like:
create("reg");
reg.value = 5;
This wouldn't compile, because the compiler doesn't know what reg refers to in the second line.
C++ doesn't have any way to look up variables at runtime, so creating them at runtime is a nonstarter. A hash table is the right solution for this. Store objects in the hash table and look them up by name.
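A minimal sketch of that idea (Register and its value member are assumptions carried over from the question and the snippet above):

#include <cassert>
#include <string>
#include <unordered_map>

// Stand-in for the question's Register class.
struct Register { int value; };

// The "variables" live in a map; their names are ordinary runtime strings.
std::unordered_map<std::string, Register> registry;

Register& create(const std::string& name) {
    return registry[name];  // value-initializes a Register on first use
}

int main() {
    create("reg").value = 5;                // "reg" is data, not an identifier
    assert(registry.at("reg").value == 5);  // looked up by the same string
    return 0;
}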
This isn't possible. C++ does not offer any facilities to process code at runtime. Given the nature of a typical C++ implementation (which compiles to machine code ahead of time, losing all information about source code), this isn't even remotely feasible.
Like I said in my comment:
What's the point? A variable name is something the compiler, but most importantly you, the programmer, should care about. Once the application is compiled, the variable name could be anything: it could be mangled and meaningless, and it wouldn't matter anymore.
You read/write code, including var-names. Once compiled, it's down to the hardware to deal with it.
Neither C nor C++ has an eval function.
Simply because you only compile what you need; eval implies input arriving later that may make no sense, or that may require other dependencies.
C/C++ are compiled ahead of time, while eval implies evaluation at runtime. The C process would then imply: pre-process, compile, and link the string in such a way that it still is part of the current process...
Even if it were possible, eval is always said to be evil, and that goes double for languages like the C family that are meant to run reliably and are often used for time-critical operations. The right tool for the job and all that...
A hash table whose entries hold the key and the Register object (plus whatever the table needs for hashing and collision handling) is the sensible thing to do. It's not that much overhead anyway...
Still feel like you need this?
Look into the vast number of scripting languages that are out there. Perl, Python... They're all better suited to do this type of stuff
If you need some variable creation and lookup you can either:
Use one of the scripting languages, as suggested by others
Do the lookup explicitly, yourself. The simplest approach is to use a map, which maps a string to your Register object. Then you can have:
#include <map>
#include <string>

// Keyed by std::string: const char* keys would compare pointer
// values, not the string contents.
std::map<std::string, Register*> table;

Register* create(const std::string& name) {
    Register* r = new Register();
    table[name] = r;
    return r;
}

Register* lookup(const std::string& name) {
    return table[name];
}

void destroy(const std::string& name) {
    delete table[name];
    table.erase(name);
}
Obviously, each time you want to access a variable created this way, you have to go through the call to lookup.
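Usage then looks like this (the value member is an assumption, borrowed from the reg.value example in the other answer):

int main() {
    create("reg");             // instead of: Register reg;
    lookup("reg")->value = 5;  // every access goes through the lookup
    destroy("reg");
    return 0;
}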
In my application, I'm dealing with large classes (over 50 methods), each of which is reasonably complex. I'm not worried about the complexity, as they are still straightforward in terms of isolating pieces of functionality into smaller methods and then calling them. This is how the number of methods becomes large (a lot of these methods are private, specifically isolating pieces of functionality).
However, when I get to the implementation stage, I find that I lose track of which methods have been implemented and which ones have not. Then at the linking stage I receive errors for the unimplemented methods. This would be fine, but there are a lot of interdependencies between classes, and in order to link the app I would need to get EVERYTHING ready. Yet I would prefer to get one class out of the way before moving to the next one.
For reasons beyond my control, I cannot use an IDE - only a plain text editor and the g++ compiler. Is there any way to find unimplemented methods in one class without doing a full link? Right now I literally do a text search on method signatures in the implementation .cpp file for each of the methods, but this is very time consuming.
You could add a stub for every method you intend to implement, and do:
void SomeClass::someMethod() {
#error Not implemented
}
With gcc, this outputs file, line number and the error message for each of these. So you could then just compile the module in question and grep for "Not implemented", without requiring a linker run.
Although you then still need to add these stubs to the implementation files, which might be part of what you were trying to circumvent in the first place.
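If you would rather have stubs that still compile and link, so the rest of the program keeps running, GCC also accepts the non-standard #warning directive, which reports without failing the build (a sketch; #warning is a GCC extension, not standard C++):

void SomeClass::someMethod() {
    #warning "Not implemented"  // GCC extension: warns but keeps compiling
}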
Though I can't see a simple way of doing this without actually attempting to link, you could grep the linker output for "undefined reference to ClassInQuestion::", which should give you only lines related to this error for methods of the given class.
This at least lets you avoid sifting through all error messages from the whole linking process, though it does not prevent having to go through a full linking.
That’s what unit tests and test coverage tools are for: write minimal tests for all functions up-front. Tests for missing functions won’t link. The test coverage report will tell you whether all functions have been visited.
Of course that only helps up to some extent; it's not 100% foolproof. Your development methodology sounds slightly dodgy to me though: developing classes one by one in isolation doesn't work in practice; classes that depend on each other (and remember: reduce dependencies!) need to be developed in lockstep to some extent. You cannot churn out a complete implementation for one class and move on to the next, never looking back.
In the past I have built an executable for each class:
#include "klass.h"
int main() {
Klass object;
return 0;
}
This reduces build time, lets you focus on one class at a time, and speeds up your feedback loop.
It can be easily automated.
I really would look at reducing the size of that class though!
edit
If there are hurdles, you can go brute force:
#include "klass.h"
Klass createObject() {
return *reinterpret_cast<Klass>(0);
}
int main() {
Klass object = createObject();
return 0;
}
You could write a small script which analyses the header file for method declarations (regular expressions will make this very straightforward), then scans the implementation file for those same method implementations.
For example in Ruby (for a C++ compilation unit):
className = "" # Either hard-code or Regex /class \w+/
allMethods = []
# Scan header file for methods
File.open(<headerFile>, "r") do |file|
allLines = file.map { |line| line }
allLines.each do |line|
if (line =~ /(\);)$/) # Finds lines ending in ");" (end of method decl.)
allMethods << line.strip!
end
end
end
implementedMethods = []
yetToImplement = []
# Scan implementation file for same methods
File.open(<implementationFile>, "r") do |file|
contents = file.read
allMethods.each do |method|
if (contents.include?(method)) # Or (className + "::" + method)
implementedMethods << method
else
yetToImplement << method
end
end
end
# Print the results
print "Yet to implement:\n"
yetToImplement.each do |method|
print (method + "\n")
end
print "\nAlready implemented:\n"
implementedMethods.each do |method|
print (method + "\n")
end
Someone else will be able to tell you how to automate this into the build process, but this is one way to quickly check which methods haven't yet been implemented.
The delete keyword of C++11 does the trick:
struct S {
    void f() = delete;  // unimplemented
};
If C++11 is not available, you can use private as a workaround:
struct S {
private:  // unimplemented
    void f();
};
With these two methods, you can write some testing code in a .cpp file:
// test_S.cpp
#include "S.hpp"

namespace {
    void test() {
        S* s;
        s->f();  // will trigger a compilation error if f() is unimplemented
    }
}
Note that your testing code will never be executed. The anonymous namespace{} gives the code internal linkage, so it is never visible outside the current compilation unit (i.e., test_S.cpp) and can be dropped right after compile-time checking.
Because this code is never executed, you do not actually need to create a real S object in the test function. You just want to trick the compiler in order to test whether an S object has a callable f() function.
You can create a custom exception and throw it so that:
Calling an unimplemented function will terminate the application instead of leaving it in an unexpected state
The code can still be compiled, even without the required functions being implemented
You can easily find the unimplemented functions by looking through compiler warnings (by using some possibly nasty tricks), or by searching your project directory
You can optionally remove the exception from release builds, which would cause build errors if there are any functions that try to throw the exception
#include <stdexcept>

#if defined(DEBUG)

#if defined(__GNUC__)
    #define DEPRECATED(f, m) f __attribute__((deprecated(m)))
#elif defined(_MSC_VER)
    #define DEPRECATED(f, m) __declspec(deprecated(m)) f
#else
    #define DEPRECATED(f, m) f
#endif

class not_implemented : public std::logic_error {
public:
    DEPRECATED(not_implemented(), "\nUnimplemented function") : logic_error("Not implemented.") { }
};

#endif // DEBUG
Unimplemented functions would look like this:
void doComplexTask() {
throw not_implemented();
}
You can look for these unimplemented functions in multiple ways. In GCC, the output for debug builds is:
main.cpp: In function ‘void doComplexTask()’:
main.cpp:21:27: warning: ‘not_implemented::not_implemented()’ is deprecated:
Unimplemented function [-Wdeprecated-declarations]
throw not_implemented();
^
main.cpp:15:16: note: declared here
DEPRECATED(not_implemented(), "\nUnimplemented function") : logic_error("Not implemented.") { }
^~~~~~~~~~~~~~~
main.cpp:6:26: note: in definition of macro ‘DEPRECATED’
#define DEPRECATED(f, m) f __attribute__((deprecated(m)))
Release builds:
main.cpp: In function ‘void doComplexTask()’:
main.cpp:21:11: error: ‘not_implemented’ was not declared in this scope
throw not_implemented;
^~~~~~~~~~~~~~~
You can search for the exception with grep:
$ grep -Enr "\bthrow\s+not_implemented\b"
main.cpp:21: throw not_implemented();
The advantage of using grep is that grep doesn't care about your build configuration and will find everything regardless. You can also get rid of the deprecated qualifier to clean up your compiler output, since the above hack generates a lot of irrelevant noise. Depending on your priorities this might be a disadvantage (for example, you might not care about Windows-specific functions if you're currently implementing Linux-specific functions, or vice versa).
If you use an IDE, most will let you search your entire project, and some even let you right-click a symbol and find everywhere it is used. (But you said you can't use one so in your case grep is your friend.)
I cannot see an easy way of doing this. Having several classes with no implementation can easily lead to a situation where keeping track of them in a multi-member team becomes a nightmare.
Personally, I would want to unit test each class I write, and test-driven development is my recommendation. However, this involves linking the code each time you want to check the status.
For tools to use TDD refer to link here.
Another option is to write a piece of code that parses the source and checks for functions that are yet to be implemented. GCC_XML is a good starting point.
/** This is struct S. */
struct S(T) {
    static if(isFloatingPoint!T)
    {
        /// This version works well with floating-point numbers.
        void fun() { }
    }
    else
    {
        /// This version works well with everything else.
        void fun() { }

        /// We also provide extra functionality.
        void du() { }
    }
}
Compiling with dmd -D, documentation is generated for the first block only. How do I get it to generate documentation for the else block as well?
For version blocks, it's only the version which is used which ends up in the documentation (be it the first one or the last one or whichever in between). So, for instance, if you have a version block for Linux and one for Windows, only the one which matches the system that you compile on will end up in the docs.
static if blocks outside of templates seem to act the same way. If they're compiled in, then their ddoc comments end up in the docs, whereas if they're not compiled in, they don't.
However, static if blocks inside templates appear to always grab the documentation from the first static if block, even if it's always false. But considering that those static ifs can end up being both true and false (from different instantiations of the template) and that the compiler doesn't actually require that the template be instantiated for its ddoc comments to end up in the generated docs, that makes sense. It doesn't have one right answer like static if blocks outside of templates do.
Regardless, it's generally a bad idea to put documentation inside of a version block or static if, precisely because they're using conditional compilation and may or may not be compiled in. The solution is to use a version(D_Ddoc) block. So, you'd end up with something like this:
/// This is struct S
struct S(T)
{
    version(D_Ddoc)
    {
        /// Function fun.
        void fun();

        /// Extra functionality. Exists only when T is not a floating point type.
        void du();
    }
    else
    {
        static if(isFloatingPoint!T)
            void fun() { }
        else
        {
            void fun() { }
            void du() { }
        }
    }
}
I would also note that even if what you were trying to do had worked, it would look very bizarre in the documentation, because you would have ended up with fun in there twice with the exact same signature but different comments. static if doesn't end up in the docs at all, so there'd be no way to know under what circumstances each fun existed. It would just look like you somehow declared fun twice.
The situation is similar with template constraints. The constraints don't end up in the docs, so it doesn't make sense to document each function overload when you're dealing with templated functions which are overloaded only by their constraints.
One place where you don't need version(D_Ddoc), however, is when you have the same function in a series of version blocks. e.g.
/// foo!
version(linux)
    void foo() {}
else version(Windows)
    void foo() {}
else
    static assert(0, "Unsupported OS.");
The ddoc comment will end up in the generated documentation regardless of which version block is compiled in.
It should be noted that once you use version(D_Ddoc) blocks, it makes no sense to compile your code with -D for anything other than generating the documentation; the actual executable that you run should be produced by a separate build which doesn't use -D. You can put the full code in the version(D_Ddoc) blocks to avoid that, but that would mean duplicating code, and it wouldn't really work with static if. Phobos uses version(StdDdoc) (which it defines for itself) instead of version(D_Ddoc) so that if you don't use version(D_Ddoc) blocks, you can still compile with -D and have Phobos work; but once you start using version(D_Ddoc), you're going to have to generate your documentation separately from your normal build.
the context
I'm working on a project that has some "modules".
What I call a module here is a simple class implementing a particular piece of functionality and deriving from an abstract class GenericModule, which enforces an interface.
New modules are supposed to be added in the future.
Several instances of a module can be loaded at the same time, or none, depending on the configuration file.
I thought it would be great if a future developer could just "register" his module with the system in a single line, more or less the same way they register tests in Google Test.
the context² (technical)
I'm building the project with Visual Studio 2005.
The code is entirely in a library, except for the main(), which is in an exec project.
I'd like to keep it that way.
my solution
I found inspiration in what they did with Google Test.
I created a templated factory, which looks more or less like this (I've skipped uninteresting parts to keep this question somewhat readable):
class CModuleFactory : boost::noncopyable
{
public:
    virtual ~CModuleFactory() {};

    virtual CGenericModule* operator()(
        const boost::property_tree::ptree& rParametres ) const = 0;
};

template <class T>
class CModuleFactoryImpl : public CModuleFactory
{
public:
    CGenericModule* operator()(
        const boost::property_tree::ptree& rParametres ) const
    {
        return new T( rParametres );
    }
};
and a method supposed to register the module and add its factory to a list:
class CGenericModule
{
    // ...

    template <class T>
    static int declareModule( const std::string& rstrModuleName )
    {
        // create the factory
        CModuleFactoryImpl<T>* pFactory = new CModuleFactoryImpl<T>();

        // add the factory to a map of "id" => factory
        CAcquisition::s_mapModuleFactory()[rstrModuleName] = pFactory;

        return 0;
    }
};
Now, in a module, all I need to do to declare it is:
static int initModule =
    acquisition::CGenericModule::declareModule<acquisition::modules::CMyMod>(
        "mod_name"
    );
(in the future it'll be wrapped in a macro allowing one to write
DECLARE_MODULE( "mod_name", acquisition::modules::CMyMod );
)
the problem
Alright, now the problem.
The thing is, it does work, but not exactly the way I'd want.
The declareModule method is not called if I put the definition of initModule in the module's .cpp (where I'd like to have it), or even in its .h.
If I put the static init in a used .cpp file, it works.
By used I mean: having code that is called from elsewhere.
The thing is, Visual Studio seems to discard the entire .obj when building the library. I guess that's because it's not being used anywhere.
I activated verbose linking, and in pass 2 it lists the .objs in the library; the module's .obj isn't among them.
almost resolved?
I found this and tried to add the /OPT:NOREF option but it didn't work.
I didn't try putting a function in the module's .h and calling it from elsewhere, because the whole point is being able to declare the module in one line in its own file.
Also, I think the problem is similar to this one, but the solution there is for g++, not Visual Studio :'(
edit: I just read the note in the answer to this question. If I #include the module's .h from another .cpp and put the init in the module's .h, it works, and the initialization is actually done twice... once in each compilation unit? Well, it seems to happen in the module's compilation unit...
side notes
Please, if you don't agree with what I'm trying to do, feel free to say so, but I'm still interested in a solution.
If you want this kind of self-registering behavior in your "modules", your assumption that the linker is optimizing out initModule because it is not directly referenced may be incorrect (though it could also be correct :-).
When you register these modules, are you modifying another static variable defined at file scope? If so, you at least have an initialization order problem. This could even manifest itself only in release builds (initialization order can vary depending on compiler settings) which might lead you to believe that the linker is optimizing out this initModule variable even though it may not be doing so.
The module registry kind of variable (be it a list of registrants or whatever it is) should be lazily constructed if you want to do things this way. Example:
static vector<string> unsafe_static;  // bad: initialization order across units is unspecified

vector<string>& safe_static()
{
    static vector<string> f;
    return f;
}  // ok: constructed on first use
Note that the above has problems with concurrency. Some thread synchronization is needed for multiple threads calling safe_static.
I suspect your real problem has to do with initialization order even though it may appear that the initModule definition is being excluded by the linker. Typically linkers don't omit references which have side effects.
If you find out for a fact that it's not an initialization order problem and that the code is being omitted by the linker, then one way to force it is to export initModule (ex: dllexport on MSVC). You should think carefully about whether this kind of self-registration behavior really outweighs the simple process of adding on to a list of function calls to initialize your "modules". You could also achieve this more naturally if each "module" was defined in a separate shared library/DLL, in which case your macro could just define the function to export, which can be picked up automatically by the host application. Of course that carries the burden of having to define a separate project for each "module" you create, as opposed to just adding a self-registering cpp file to an existing project.
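For illustration, here is a minimal sketch of such a lazily-constructed registry applied to the factory classes from the question (the moduleRegistry free function standing in for CAcquisition::s_mapModuleFactory is my assumption):

#include <map>
#include <string>

// Lazily-constructed registry: safe to use from any static initializer,
// regardless of translation-unit initialization order.
std::map<std::string, CModuleFactory*>& moduleRegistry()
{
    static std::map<std::string, CModuleFactory*> s_registry;
    return s_registry;
}

template <class T>
int declareModule( const std::string& rstrModuleName )
{
    moduleRegistry()[rstrModuleName] = new CModuleFactoryImpl<T>();
    return 0;
}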
I've got something similar based on the code from wxWidgets, though I've only ever used it as a DLL. The wxWidgets code works with static libs, however.
The bit that might make a difference is that in wx, the equivalent of the following is defined at class scope:
static int initModule =
    acquisition::CGenericModule::declareModule<acquisition::modules::CMyMod>(
        "mod_name"
    );
Something like the following, where the Factory member, because it is a class-scope static, registers itself with the factory list when it is constructed:
#define DECLARE_CLASS(name)\
class name : public Interface { \
private: \
    static Factory m_reg;\
    static std::auto_ptr<Interface> clone();

#define IMPLEMENT_IAUTH(name, method)\
Factory name::m_reg(method, name::clone);
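Usage would look something like this (Interface, Factory, and the method argument are whatever the surrounding framework defines; the point is that the definition of the class-scope static m_reg is what performs the registration at load time):

// MyModule.h - the macro opens the class; you add your members and close it.
DECLARE_CLASS(MyModule)
public:
    void run();  // the module's actual functionality
};

// MyModule.cpp - defining the static member registers the class.
IMPLEMENT_IAUTH(MyModule, "my_module")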