Find unimplemented class methods - c++

In my application I'm dealing with large classes (over 50 methods each), each of which is reasonably complex. I'm not worried about the complexity, as the classes are still straightforward in terms of isolating pieces of functionality into smaller methods and then calling them. This is how the number of methods becomes large (many of these methods are private, precisely because they isolate pieces of functionality).
However, when I get to the implementation stage, I find that I lose track of which methods have been implemented and which ones have not. Then at the linking stage I get errors for the unimplemented methods. That would be fine, except there are a lot of interdependencies between classes, and in order to link the app I would need to get EVERYTHING ready. Yet I would prefer to get one class out of the way before moving on to the next one.
For reasons beyond my control, I cannot use an IDE, only a plain text editor and the g++ compiler. Is there any way to find the unimplemented methods of one class without doing a full link? Right now I literally do a text search on the method signatures in the implementation .cpp file for each of the methods, but this is very time-consuming.

You could add a stub for every method you intend to implement, and do:
void SomeClass::someMethod() {
    #error Not implemented
}
With gcc, this outputs the file, the line number and the error message for each such stub, so you can compile just the module in question and grep for "Not implemented" without requiring a linker run.
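For example, with a hypothetical Widget class, the pattern would look like this; compiling widget.cpp on its own then reports one error per method that is still a stub:

// widget.h (illustrative)
class Widget {
public:
    void load();
    void save();
};

// widget.cpp: one stub per declared method, each removed once implemented
#include "widget.h"

void Widget::load() {
    #error Not implemented
}

void Widget::save() {
    #error Not implemented
}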
Although you then still need to add these stubs to the implementation files, which might be part of what you were trying to circumvent in the first place.

Though I can't see a simple way of doing this without actually attempting to link, you could grep the linker output for "undefined reference to ClassInQuestion::", which should give you only the lines related to this error for methods of the given class.
This at least lets you avoid sifting through all error messages from the whole linking process, though it does not prevent having to go through a full linking.

That's what unit tests and test coverage tools are for: write minimal tests for all functions up front. Tests for missing functions won't link. The test coverage report will tell you whether all functions have been exercised.
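A minimal sketch of that idea, assuming a hypothetical Widget class with load() and save() declared in widget.h; the "test" merely references every method, so each missing definition shows up as an undefined reference when this one file is linked:

// test_widget.cpp
#include "widget.h"

int main() {
    Widget w;
    w.load(); // undefined reference at link time if load() has no body yet
    w.save();
    return 0;
}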
Of course that only helps up to a point; it's not 100% foolproof. Your development methodology sounds slightly dodgy to me, though: developing classes one by one in isolation doesn't work in practice. Classes that depend on each other (and remember: reduce dependencies!) need to be developed in lockstep to some extent. You cannot churn out a complete implementation for one class and move on to the next, never looking back.

In the past I have built an executable for each class:
#include "klass.h"
int main() {
Klass object;
return 0;
}
This reduces build time, lets you focus on one class at a time, and speeds up your feedback loop.
It can be easily automated.
I really would look at reducing the size of that class though!
edit
If there are hurdles, you can go brute force:
#include "klass.h"

Klass createObject() {
    return *reinterpret_cast<Klass*>(0);
}

int main() {
    Klass object = createObject();
    return 0;
}

You could write a small script which parses the header file for method declarations (regular expressions will make this very straightforward), then scans the implementation file for those same methods.
For example in Ruby (for a C++ compilation unit):
className = "" # Either hard-code or Regex /class \w+/
allMethods = []
# Scan header file for methods
File.open(<headerFile>, "r") do |file|
allLines = file.map { |line| line }
allLines.each do |line|
if (line =~ /(\);)$/) # Finds lines ending in ");" (end of method decl.)
allMethods << line.strip!
end
end
end
implementedMethods = []
yetToImplement = []
# Scan implementation file for same methods
File.open(<implementationFile>, "r") do |file|
contents = file.read
allMethods.each do |method|
if (contents.include?(method)) # Or (className + "::" + method)
implementedMethods << method
else
yetToImplement << method
end
end
end
# Print the results (may need to scroll the code window)
print "Yet to implement:\n"
yetToImplement.each do |method|
print (method + "\n")
end
print "\nAlready implemented:\n"
implementedMethods.each do |method
print (method + "\n")
end
Someone else will be able to tell you how to automate this into the build process, but this is one way to quickly check which methods haven't yet been implemented.

The delete keyword of C++11 does the trick:
struct S {
    void f() = delete; // unimplemented
};
If C++11 is not available, you can use private as a workaround:
struct S {
private: // unimplemented
    void f();
};
With these two methods, you can write some testing code in a .cpp file:
// test_S.cpp
#include "S.hpp"

namespace {
    void test() {
        S* s;
        s->f(); // will trigger a compilation error
    }
}
Note that your testing code will never be executed. The anonymous namespace gives test() internal linkage, so it is never visible outside the current compilation unit (i.e., test_S.cpp) and can be discarded once it has served its compile-time checking purpose.
Because this code is never executed, you do not actually need to create a real S object in the test function. You just want to trick the compiler into checking whether an S object has a callable f() function.

You can create a custom exception and throw it so that:
Calling an unimplemented function will terminate the application instead of leaving it in an unexpected state
The code can still be compiled, even without the required functions being implemented
You can easily find the unimplemented functions by looking through compiler warnings (by using some possibly nasty tricks), or by searching your project directory
You can optionally remove the exception from release builds, which would cause build errors if there are any functions that try to throw the exception
#include <stdexcept>

#if defined(DEBUG)
#if defined(__GNUC__)
#define DEPRECATED(f, m) f __attribute__((deprecated(m)))
#elif defined(_MSC_VER)
#define DEPRECATED(f, m) __declspec(deprecated(m)) f
#else
#define DEPRECATED(f, m) f
#endif

class not_implemented : public std::logic_error {
public:
    DEPRECATED(not_implemented(), "\nUnimplemented function") : logic_error("Not implemented.") { }
};
#endif // DEBUG
Unimplemented functions would look like this:
void doComplexTask() {
    throw not_implemented();
}
You can look for these unimplemented functions in multiple ways. In GCC, the output for debug builds is:
main.cpp: In function ‘void doComplexTask()’:
main.cpp:21:27: warning: ‘not_implemented::not_implemented()’ is deprecated:
Unimplemented function [-Wdeprecated-declarations]
throw not_implemented();
^
main.cpp:15:16: note: declared here
DEPRECATED(not_implemented(), "\nUnimplemented function") : logic_error("Not implemented.") { }
^~~~~~~~~~~~~~~
main.cpp:6:26: note: in definition of macro ‘DEPRECATED’
#define DEPRECATED(f, m) f __attribute__((deprecated(m)))
Release builds:
main.cpp: In function ‘void doComplexTask()’:
main.cpp:21:11: error: ‘not_implemented’ was not declared in this scope
throw not_implemented;
^~~~~~~~~~~~~~~
You can search for the exception with grep:
$ grep -Enr "\bthrow\s+not_implemented\b"
main.cpp:21: throw not_implemented();
The advantage of using grep is that it doesn't care about your build configuration and will find everything regardless. You can also drop the deprecated attribute to clean up your compiler output; the hack above generates a lot of irrelevant noise. Depending on your priorities, finding everything might even be a disadvantage (for example, you might not care about Windows-specific functions if you're currently implementing Linux-specific ones, or vice versa).
If you use an IDE, most will let you search your entire project, and some even let you right-click a symbol and find everywhere it is used. (But you said you can't use one so in your case grep is your friend.)

I cannot see an easy way of doing this. Having several classes with no implementation can easily lead to a situation where keeping track of them in a multi-member team becomes a nightmare.
Personally, I would want to unit test each class I write, and test-driven development is my recommendation. However, this involves linking the code each time you want to check the status.
For TDD tools, refer to the link here.
Another option is to write a piece of code that parses the source and checks for functions that are yet to be implemented. GCC_XML is a good starting point.

Related

lcov woes: weird duplicate constructor marked as not covered & function not marked as covered, even though its lines have been executed

On my quest to learn more about automated testing by getting a small C++ test project up & running with 100% coverage, I've run into the following issue - even though all my actual lines of code and all the execution branches are covered by tests, lcov still reports two lines as untested (they only contain function definitions), as well as a "duplicate" constructor method that is supposedly untested even though it matches my "real" constructor (the only one ever defined & used) perfectly.
(Skip to EDIT for the minimal reproduction case)
If I generate the same coverage statistics (from the same exact source, .gcno & .gcda files) using the gcovr python script and pass the results to the Jenkins Cobertura plugin, it gives me 100% on all counts - lines, conditionals & methods.
Here's what I mean:
The Jenkins Cobertura Coverage page: http://gints.dyndns.info/heap_std_gcovr_jenkins_cobertura.html (everything at 100%).
The same .gcda files processed using lcov: http://gints.dyndns.info/heap_std_lcov.html (two function definition lines marked as not executed even though lines within those functions are fully covered, as well as functions Hit = functions Total - 1).
The function statistics for that source file from lcov: http://gints.dyndns.info/heap_std_lcov_func (shows two identical constructor definitions, both referring to the same line of code in the file, one of them marked hit 5 times, the other 0 times).
If I look at the intermediate lcov .info file: http://gints.dyndns.info/lcov_coverage_filtered.info.txt I see that there are two constructor definitions there too, both are supposed to be on the same line: FN:8,_ZN4BBOS8Heap_stdC1Ev & FN:8,_ZN4BBOS8Heap_stdC2Ev.
Oh, and don't mind the messiness around the .uic include / destructor; that's just a dirty way of dealing with "What is the branch in the destructor reported by gcov?" that I happened to be trying out when I took those file snapshots.
Anyone have a suggestion on how to resolve this? Is there some "behind-the-scenes" magic the C++ compiler is doing here? (An extra copy of the constructor for special purposes that I should make sure to call from my tests, perhaps?) What about the regular function definition - how can the definition line be untested even though the body has been fully tested? Is this simply an issue with lcov? Any suggestions welcome - I'd like to understand why this is happening and if there's really some functionality that my tests are leaving uncovered and Cobertura is not complaining about ... or if not, how do I make lcov understand that?
EDIT: adding minimal repro scenario below
lcov_repro_one_bad.cpp:
#include <stdexcept>

class Parent {
public:
    Parent() throw() { }
    virtual void * Do_stuff(const unsigned m) throw(std::runtime_error) = 0;
};

class Child : public Parent {
public:
    Child() throw();
    virtual void * Do_stuff(const unsigned m)
        throw(std::runtime_error);
};

Child::Child()
    throw()
    : Parent()
{
}

void * Child::Do_stuff(const unsigned m)
    throw(std::runtime_error)
{
    const int a = m;
    if ( a > 10 ) {
        throw std::runtime_error("oops!");
    }
    return NULL;
}

int main()
{
    Child c;
    c.Do_stuff(5);
    try {
        c.Do_stuff(11);
    }
    catch ( const std::runtime_error & ) { }
    return 0;
}
makefile:
GPP_FLAGS:=-fprofile-arcs -ftest-coverage -pedantic -pedantic-errors -W -Wall -Wextra -Werror -g -O0
all:
g++ ${GPP_FLAGS} lcov_repro_one_bad.cpp -o lcov_repro_one_bad
./lcov_repro_one_bad
lcov --capture --directory ${PWD} --output-file lcov_coverage_all.info --base-directory ${PWD}
lcov --output-file lcov_coverage_filtered.info --extract lcov_coverage_all.info ${PWD}/*.*
genhtml --output-directory lcov_coverage_html lcov_coverage_filtered.info --demangle-cpp --sort --legend --highlight
And here's the coverage I get from that: http://gints.dyndns.info/lcov_repro_bin/lcov_coverage_html/gints/lcov_repro/lcov_repro_one_bad.cpp.gcov.html
As you can see, the supposedly not-hit lines are the definitions of what exceptions the functions may throw, and the extra not-hit constructor for Child is still there in the functions list (click on functions at the top).
I've tried removing the throw declarations from the function definitions, and that takes care of the un-executed lines at the function declarations: http://gints.dyndns.info/lcov_repro_bin/lcov_coverage_html/gints/lcov_repro/lcov_repro_one_v1.cpp.gcov.html (the extra constructor is still there, as you can see).
I've tried moving the function definitions into the class body, instead of defining them later, and that gets rid of the extra constructor: http://gints.dyndns.info/lcov_repro_bin/lcov_coverage_html/gints/lcov_repro/lcov_repro_one_v2.cpp.gcov.html (there's still some weirdness around the Do_stuff function definition, though, as you can see).
And then, of course, if I do both of the above, all is well: http://gints.dyndns.info/lcov_repro_bin/lcov_coverage_html/gints/lcov_repro/lcov_repro_one_ok.cpp.gcov.html
But I'm still stumped as to what the root cause of this is ... and I still want to have my methods (including the constructor) defined in a separate .cpp file, not in the class body, and I do want my functions to have well defined exceptions they can throw!
Here's the source, in case you feel like playing around with this: http://gints.dyndns.info/lcov_repro_src.zip
Any ideas?
Thanks!
OK, after some hunting around and reading up on C++ exception declarations, I think I understand what's going on:
As far as the un-hit throw declarations are concerned, everything seems to be correct here: a function's throw declaration adds extra code to the object file that checks for exceptions that are illegal as far as the declaration is concerned. Since I was not testing the case where such an exception is thrown, it makes sense that this code was never executed and those lines were marked un-hit. The situation is far from ideal, but at least one can see where it is coming from.
As far as the duplicate constructors are concerned, this seems to be a known gcc behaviour with a longstanding discussion (and various attempts at patches to resolve the resulting object code duplication): http://gcc.gnu.org/bugzilla/show_bug.cgi?id=3187 - basically, two versions of the constructor are created: one for use when constructing the class itself, and one for use when constructing child classes. You need to exercise both if you want 100% coverage.
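In coverage terms, that means each clone needs its own call site. A minimal sketch, reusing the Child class from the repro above: constructing a Child directly exercises the complete-object constructor (C1), while constructing it as a base subobject of a derived class exercises the base-object constructor (C2):

struct GrandChild : Child {
    GrandChild() throw() : Child() { }
};

int main()
{
    Child c;      // hits the C1 (complete-object) constructor clone
    GrandChild g; // constructs Child as a base: hits the C2 clone
    return 0;
}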

Parsing C++ to make some changes in the code

I would like to write a small tool that takes a C++ program (a single .cpp file), finds the "main" function and adds two function calls to it, one at the beginning and one at the end.
How can this be done? Can I use g++'s parsing mechanism (or any other parser)?
If you want to make it solid, use clang's libraries.
As suggested by some commenters, let me put forward my idea as an answer:
So basically, the idea is:
... original .cpp file ...
#include <yourHeader>

namespace {
    SpecialClass specialClassInstance;
}
Where SpecialClass is something like:
class SpecialClass {
public:
    SpecialClass() {
        firstFunction();
    }
    ~SpecialClass() {
        secondFunction();
    }
};
This way, you don't need to parse the C++ file. Since you are declaring a global, its constructor will run before main starts and its destructor will run after main returns.
The downside is that you don't get to know the relative order in which your global is constructed compared to other globals. So if you need to guarantee that firstFunction is called before any other constructor elsewhere in the entire program, you're out of luck.
I've heard the GCC parser is both hard to use and even harder to get at without invoking the whole toolchain. I would try the clang C/C++ parser (libparse), and the tutorials linked in this question.
Adding a function call at the beginning of main() and another at the end is a bad idea. What if someone returns in the middle?
A better idea is to instantiate a class at the beginning of main() and let that class's destructor call the function you want called at the end. This ensures the function always gets called, however main() is left.
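A sketch of that suggestion, reusing the firstFunction/secondFunction names from the answer above; the local object's destructor runs on every path out of main, early returns included:

void firstFunction();
void secondFunction();

struct Bracket {
    Bracket()  { firstFunction(); }
    ~Bracket() { secondFunction(); } // runs on any return from main
};

int main() {
    Bracket bracket;
    // ... original body of main ...
    return 0;
}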
If you have control of your main program, you can hack a script to do this, and that's by far the easiest way. Simply make sure the insertion points are obvious (odd comments, required placement of tokens, you choose) and unique (including outlawing general coding practices if you have to, to ensure the uniqueness you need is real). Then a dumb string-hacking tool that reads the source, finds the unique markers, and inserts your desired calls will work fine.
If the source of the main program comes from somewhere else and you don't have control, then to do this well you need a full C++ program transformation engine. You don't want to build this yourself, as just the C++ parser is an enormous effort to get right. Others here have mentioned Clang and GCC as answers.
An alternative is our DMS Software Reengineering Toolkit with its C++ front end. DMS, using its C++ front end, can parse code (for a variety of C++ dialects), build ASTs, and carry out full name/type resolution to determine the meaning/definition/use of all symbols. It provides procedural and source-to-source transformations to enable changes to the AST, and can regenerate compilable source code complete with the original comments.

Ways to show your co-programmers that some methods are not yet implemented in a class when programming in C++

What approaches can you use when:
you work with several (e.g. 1-3) other programmers on a small C++ project, using a single repository
you create a class and declare its methods
you don't have time to implement all the methods yet
you don't want other programmers to use your code yet (because it's not implemented yet), or don't want them to use the not-yet-implemented parts of the code
you don't have the time/opportunity to tell your co-workers about all such not-yet-implemented stuff
when your co-workers use your not-yet-implemented code, you want them to realize immediately that they shouldn't use it yet; if they get an error, you don't want them to wonder what's wrong, search for potential bugs, etc.
The simplest answer is to tell them. Communication is key whenever you're working with a group of people.
A more robust (and probably the best) option is to create your own branch to develop the new feature and only merge it back in when it's complete.
However, if you really want your methods implemented in the main source tree but don't want people using them, stub them out with an exception or assertion.
I actually like the concept of .NET's NotImplementedException. You can easily define your own, deriving from std::exception and overriding what() to say "not implemented" (see the sketch after this list).
It has the advantages of:
easily searchable.
allows current & dependent code to compile
can execute up to the point the code is needed, at which point, you fail (and you immediately have an execution path that demonstrates the need).
when it fails, it fails to a known state, so long as you're not blanket-swallowing exceptions, rather than relying on some indeterminable state.
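A minimal sketch of such an exception (the class and member names are illustrative), deriving from std::logic_error so that what() carries the message:

#include <stdexcept>

class not_implemented : public std::logic_error {
public:
    not_implemented() : std::logic_error("not implemented") { }
};

class Report { // hypothetical class still under construction
public:
    void exportPdf() { throw not_implemented(); } // stub: fails fast when called
};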
You should either just not commit the code, or better yet, commit it to a development branch so that it is at least off your machine in case of a catastrophic failure of your box.
This is what I do at work with my git repo. I push my work at the end of the day to a remote repo (not the master branch). My coworker is aware that these branches are super duper unstable and not to be touched with a ten foot pole unless he really likes to have broken branches.
Git is super handy for this situation as is, I imagine, other dvcs with cheap branching. Doing this in SVN or worse yet CVS would mean pain and suffering.
I would not check it into the repository.
Declare it. Don't implement it.
When a programmer tries to call the unimplemented part of the code, the linker complains, which is a clear hint to the programmer.
#include <iostream>
using namespace std;

class myClass
{
    int i;
public:
    void print(); // not yet implemented
    void display()
    {
        cout << "I am implemented" << endl;
    }
};

int main()
{
    myClass var;
    var.display();
    var.print(); // this line gives a linking error and hints the user at an early stage
    return 0;
}
Assert is the best way. An assert that doesn't terminate the program is even better, so that a coworker can continue to test his code without being blocked by your function stubs, while staying perfectly informed about what's not implemented yet.
In case your IDE doesn't support smart asserts or persistent breakpoints, here is a simple implementation (C++):
#include <assert.h>

#ifdef _DEBUG
// 0xCC - int 3 - breakpoint
// 0x90 - nop
#define DebugInt3 __emit__(0x90CC)
#define DEBUG_ASSERT(expr) ((expr) ? ((void)0) : (DebugInt3))
#else
#define DebugInt3
#define DEBUG_ASSERT(expr) assert(expr)
#endif
// usage
void doStuff()
{
    // here the debugger will stop if the function is called
    // and your coworker will read your message
    DEBUG_ASSERT(0); // TODO: will be implemented next week;
                     // postcondition number 2 of doStuff is not satisfied;
                     // proceed with care /Johny J.
}
Advantages:
code compiles and runs
a developer gets a message about what's not implemented if and only if he runs into your code during his testing, so he won't be overwhelmed with unnecessary information
the message points to the related code (not to an exception catch block or whatever), and the call stack is available, so one can trace down the place where the unfinished piece of code is invoked
after receiving the message, a developer can continue his test run without restarting the program
Disadvantages:
To disable a message, one has to comment out a line of code. Such a change can easily sneak into a commit.
P.S. Credits for initial DEBUG_ASSERT implementation go to my co-worker E. G.
You can use pure virtual functions (= 0) in base classes, or, more commonly, declare methods but not define them. You can't call a function that has no definition.
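Both variants in one sketch (names are illustrative):

struct Base {
    virtual ~Base() { }
    virtual void todo() = 0; // pure virtual: Base itself cannot be instantiated
};

struct Impl : Base {
    void todo(); // declared but never defined: creating an Impl already fails to link,
                 // because its vtable references the missing definition
};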

static initialization

the context
I'm working on a project that has some "modules".
What I call a module here is a simple class implementing a particular piece of functionality and deriving from an abstract class GenericModule, which forces an interface.
New modules are supposed to be added in the future.
Several instances of a module can be loaded at the same time, or none, depending on the configuration file.
I thought it would be great if a future developer could just "register" his module with the system in a single line, more or less the same way tests are registered in Google Test.
the context² (technical)
I'm building the project with Visual Studio 2005.
The code is entirely in a library, except for main(), which is in an exec project.
I'd like to keep it that way.
my solution
I found inspiration in what they did with google test.
I created a templated factory, which looks more or less like this (I've skipped uninteresting parts to keep this question somewhat readable):
class CModuleFactory : boost::noncopyable
{
public:
    virtual ~CModuleFactory() {};
    virtual CModuleGenerique* operator()(
        const boost::property_tree::ptree& rParametres ) const = 0;
};

template <class T>
class CModuleFactoryImpl : public CModuleFactory
{
public:
    CModuleGenerique* operator()(
        const boost::property_tree::ptree& rParametres ) const
    {
        return new T( rParametres );
    }
};
and a method supposed to register the module and add its factory to a list:
class CGenericModule
{
    // ...
    template <class T>
    static int declareModule( const std::string& rstrModuleName )
    {
        // create the factory
        CModuleFactoryImpl<T>* pFactory = new CModuleFactoryImpl<T>();
        // add the factory to a map of "id" => factory
        CAcquisition::s_mapModuleFactory()[rstrModuleName] = pFactory;
        return 0;
    }
};
Now, in a module, all I need to do to declare it is:
static int initModule =
acquisition::CGenericModule::declareModule<acquisition::modules::CMyMod>(
"mod_name"
);
( in the future it'll be wrapped in a macro allowing to do
DECLARE_MODULE( "mod_name", acquisition::modules::CMyMod );
)
the problem
All right, now the problem.
The thing is, it does work, but not exactly the way I'd want.
The method declareModule is not called if I put the definition of initModule in the module's .cpp (where I'd like to have it), or even in its .h.
If I put the static init in a used .cpp file, it works. By "used" I mean: a file containing code that is called from elsewhere.
The thing is, Visual Studio seems to discard the entire .obj when building the library, I guess because it's not referenced anywhere.
I activated verbose linking, and in pass no. 2 it lists the .objs in the library; the module's .obj isn't there.
almost resolved?
I found this and tried to add the /OPT:NOREF option but it didn't work.
I didn't try putting a function in the module's .h and calling it from elsewhere, because the whole point is being able to declare the module in one line in its own file.
Also, I think the problem is similar to this one, but the solution there is for g++, not Visual Studio :'(
edit: I just read the note in the answer to this question. If I #include the module's .h from another .cpp and put the init in the module's .h, it works, and the initialization is actually done twice... once in each compilation unit? Well, it seems to happen in the module's compilation unit...
side notes
Please, if you don't agree with what I'm trying to do, feel free to say so, but I'm still interested in a solution.
If you want this kind of self-registering behavior in your "modules", your assumption that the linker is optimizing out initModule because it is not directly referenced may be incorrect (though it could also be correct :-).
When you register these modules, are you modifying another static variable defined at file scope? If so, you at least have an initialization order problem. This could even manifest itself only in release builds (initialization order can vary depending on compiler settings) which might lead you to believe that the linker is optimizing out this initModule variable even though it may not be doing so.
The module registry kind of variable (be it a list of registrants or whatever it is) should be lazy constructed if you want to do things this way. Example:
static vector<string> unsafe_static; // bad
vector<string>& safe_static()
{
static vector<string> f;
return f;
} // ok
Note that the above has problems with concurrency. Some thread synchronization is needed for multiple threads calling safe_static.
I suspect your real problem has to do with initialization order even though it may appear that the initModule definition is being excluded by the linker. Typically linkers don't omit references which have side effects.
If you find out for a fact that it's not an initialization order problem and that the code is being omitted by the linker, then one way to force it is to export initModule (ex: dllexport on MSVC). You should think carefully if this kind of self-registration behavior really outweighs the simple process of adding on to a list of function calls to initialize your "modules". You could also achieve this more naturally if each "module" was defined in a separate shared library/DLL, in which case your macro could just be defining the function to export which can be added automatically by the host application. Of course that carries the burden of having to define a separate project for each "module" you create as opposed to just adding a self-registering cpp file to an existing project.
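Applied to the question's setup, a sketch of the construct-on-first-use idiom (CModuleFactory and the map shape come from the question; the accessor name is invented):

#include <map>
#include <string>

class CModuleFactory; // as in the question

// Lazily constructed registry: safe to call from any static initializer,
// regardless of the order in which translation units are initialized.
std::map<std::string, CModuleFactory*>& moduleFactoryMap()
{
    static std::map<std::string, CModuleFactory*> s_map;
    return s_map;
}

declareModule would then insert into moduleFactoryMap() rather than assigning into a file-scope static directly.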
I've got something similar based on the code from wxWidgets; however, I've only ever used it as a DLL. The wxWidgets code does work with static libs, though.
The bit that might make a difference is that in wx the equivalent of the following is defined at class scope.
static int initModule =
acquisition::CGenericModule::declareModule<acquisition::modules::CMyMod>(
"mod_name"
);
Something like the following, where the static Factory member, because it is static, is constructed at start-up and thereby registers the class with the factory list:
#define DECLARE_CLASS(name) \
class name : public Interface { \
private: \
    static Factory m_reg; \
    static std::auto_ptr<Interface> clone();

#define IMPLEMENT_IAUTH(name, method) \
    Factory name::m_reg(method, name::clone);

compile time assertions *across modules* / c,c++

Recently I discovered, in a relatively large project, that ugly runtime crashes occurred because various headers were included in different orders in different .cpp files.
These headers contained #pragma pack directives, and these pragmas were sometimes not 'closed' (I mean, set back to the compiler default, #pragma pack()), resulting in different object layouts in different object files. No wonder the application crashed when it accessed struct members of objects created in one module and passed to another, or when derived classes accessed members of base classes.
Since I like the idea of deriving a more general debugging and assertion strategy from every bug I find, I would really like to assert that object layouts are always and everywhere the same.
So it would be easy to assert
ASSERT( offsetof(membervar) == 4 )
But this would not catch a different layout in another module, and it would require manual updates whenever the struct layout changes... so my favourite idea would be something like
ASSERT( offsetof(membervar) == offsetof(othermodule_membervar) )
Would this be possible with an assertion? Or is this a case for a unit test?
Thanks,
H
ASSERT( offsetof(membervar) == offsetof(othermodule_membervar) )
I can't see a way to make this technically possible. Further, even if it were physically possible, it isn't practical. You'd need an assert for every pair of source files:
ASSERT( offsetof(A.c::MyClass.membervar) == offsetof(B.c::MyClass.membervar) )
ASSERT( offsetof(A.c::MyClass.membervar) == offsetof(C.c::MyClass.membervar) )
ASSERT( offsetof(A.c::MyClass.membervar) == offsetof(D.c::MyClass.membervar) )
ASSERT( offsetof(B.c::MyClass.membervar) == offsetof(C.c::MyClass.membervar) )
ASSERT( offsetof(B.c::MyClass.membervar) == offsetof(D.c::MyClass.membervar) )
etc
You might be able to get away with asserting sizeof(class) in different files. If the packing is causing the size of the object to change, then sizeof() should show that up.
You could also do this as a static assert, using C++0x's static_assert, or Boost's (or a hand-rolled one, of course).
As for not wanting to do this in every file, I would recommend putting together a header file that includes all the headers you're worried about, followed by the static_asserts (a sketch follows below).
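A sketch of such a probe (using C++11 static_assert; the expected value of 16 assumes a typical x86/x64 ABI with default alignment, so treat the number as an assumption to verify):

// pack_probe.h -- include this after the suspect headers in the aggregating header
struct PackProbe {
    char   c; // offset 0
    int    i; // offset 4 with default packing; 1 under #pragma pack(1)
    double d; // offset 8 with default packing
};

static_assert(sizeof(PackProbe) == 16,
              "a #pragma pack left open by an earlier header is still in effect");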
Personally, though, I'd just recommend searching the code base for the offending pragmas and fixing them.
Wendy,
In Win32, there are single functions that can populate different versions of a given struct. Over the years, the FOOBAR struct might have new features added to it, so they create a FOOBAR2 or FOOBAREX. In some cases there are more than two versions.
Anyway, the way they handle this is to have the caller pass in sizeof(theStruct) in addition to the pointer to the struct:
FOOBAREX foobarex = {0};
long lResult = SomeWin32Api(sizeof(foobarex), &foobarex);
Within the implementation of SomeWin32Api(), they check the first parameter and determine which version of the struct they're dealing with.
You could do something similar in a debug build to assure that the caller and callee agree on the size of the struct being referred to, and assert if the value doesn't match the expected size. With macros, you might even be able to automate/hide this so that it only happens in a debug build.
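A sketch of that debug-build handshake (the struct and function names are invented):

#include <cassert>
#include <cstddef>

struct Payload { int id; double value; }; // shared between modules

// The callee checks the caller's idea of the layout against its own.
void Consume(std::size_t callerSize, const Payload* p)
{
    assert(callerSize == sizeof(Payload) && "struct layout mismatch between modules");
    (void)p; // ... real work ...
}

// Call site, compiled in another module:
//   Payload pl = { 1, 2.0 };
//   Consume(sizeof(pl), &pl);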
Unfortunately, this is a run-time check and not a compile-time check...
What you want isn't directly possible as such. If you're using VC++, the following may be of interest:
http://blogs.msdn.com/vcblog/archive/2007/05/17/diagnosing-hidden-odr-violations-in-visual-c-and-fixing-lnk2022.aspx
There's probably scope to create some way of semi-automating the process it describes, collating the output and cross-referencing.
To detect this sort of problem somewhat more automatically, the following occurs to me. Create a file that defines a struct that will have a particular size with the designated default packing amount, but a different size with different pack values. Also include some kind of static assert that its size is correct. For example, if the default is 4-byte packing:
struct X {
    char c;
    int i;
    double d;
};

extern const char g_check[sizeof(X) == 16 ? 1 : -1]; // negative array size => compile error if the layout is off
Then #include this file at the start of every header (just write a program to insert the extra includes if there are too many to do by hand), and compile and see what happens. This won't directly detect changes in struct layout, just non-standard packing settings, which is what you're interested in anyway.
(When adding new headers one would put this #include at the top, along with the usual ifdef boilerplate and so on. This is unfortunate but I'm not sure there's any way around it. The best solution is probably to ask people to do it, but assume they'll forget, and run the extra-include-inserting program every now and again...)
Apologies for posting an answer - which this is not - but I don't know how to post code in comments. Sorry.
To wrap Brone's idea in a macro, here is what we currently use (feel free to edit it):
/** Our own assert macro, which will trace a FATAL error message if the assert
* fails. A FATAL trace will cause a system restart.
* Note: I would love to use CPPUNIT_ASSERT_MESSAGE here, for a nice clean
* test failure if testing with CppUnit, but since this header file is used
* by C code and the relevant CppUnit include file uses C++ specific
* features, I cannot.
*/
#ifdef TESTING
/* ToDo: might want to trace a FATAL if integration testing */
#define ASSERT_MSG(subsystem, message, condition) if (!(condition)) {printf("Assert failed: \"%s\" at line %d in file \"%s\"\n", message, __LINE__, __FILE__); fflush(stdout); abort();}
/* we can also use this, which prints the failed condition as its message */
#define ASSERT_CONDITION(subsystem, condition) if (!(condition)) {printf("Assert failed: \"%s\" at line %d in file \"%s\"\n", #condition, __LINE__, __FILE__); fflush(stdout); abort();}
#else
#define ASSERT_MSG(subsystem, message, condition) if (!(condition)) DebugTrace(FATAL, subsystem, __FILE__, __LINE__, "%s", message);
#define ASSERT_CONDITION(subsystem, condition) if (!(condition)) DebugTrace(FATAL, subsystem, __FILE__, __LINE__, "%s", #condition);
#endif
What you would be looking for is an assertion like ASSERT_CONSISTENT(A_x, offsetof(A, x)), placed in a header file. Let me explain why, and what the problem is.
Because the problem exists across translation units, you can only detect the error at link time. That means you need to force the linker to spit out an error. Unfortunately, most cross-translation-unit problems are formally of the "no diagnosis required" kind. The most familiar one is an ODR violation. We can trivially cause ODR violations with such assertions, but you just can't rely on the linker to warn you about them. If you could, the implementation could be as simple as
#define ASSERT_CONSISTENT(label, x) class ASSERT_ ## label { char test[x]; };
But if the linker doesn't notice these ODR violations, the check passes silently. And here lies the problem: the linker really only needs to complain if it can't find something.
With two macros the problem is solved:
template <int i> class dummy; // needed to differentiate functions

#define ASSERT_DEFINE(label, x) void ASSERT_ ## label(dummy<x>&) { }
#define ASSERT_CHECK(label, x) \
    void ASSERT_ ## label(dummy<x>&); \
    static void (*ASSERT_check_ ## label)(dummy<x>&) = &ASSERT_ ## label;
You'd need to put the ASSERT_DEFINE macro in a .cpp, and ASSERT_CHECK in its header. If the x value checked isn't the x value defined for that label, you're taking the address of an undefined function. Now, a linker doesn't need to warn about multiple definitions, but it must warn about missing definitions.
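A usage sketch, with A and its member x standing in for the real struct (offsetof comes from <cstddef>):

// a.cpp -- the translation unit that "owns" the expected layout
ASSERT_DEFINE(A_x, offsetof(A, x))

// a.h -- every includer re-evaluates offsetof(A, x) under its own pragma state;
// if the value differs from the defining unit's, the check references a function
// nobody defined, and the link fails with a missing symbol.
ASSERT_CHECK(A_x, offsetof(A, x))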
BTW, for this particular problem, see Diagnosing Hidden ODR Violations in Visual C++ (and fixing LNK2022)