I've done a shady thing - C++

Are (seemingly) shady things ever acceptable for practical reasons?
First, a bit of background on my code. I'm writing the graphics module of my 2D game. My module contains more than two classes, but I'll only mention two in here: Font and GraphicsRenderer.
Font provides an interface through which to load (and release) font files, and not much more. In my Font header I don't want any implementation details to leak, and that includes the data types of the third-party library I'm using. The way I prevent the third-party lib from being visible in the header is through an incomplete type (I understand this is standard practice):
class Font
{
private:
    struct FontData;
    boost::shared_ptr<FontData> data_;
};
GraphicsRenderer is the (read: singleton) device that initializes and finalizes the third-party graphics library, and is also used to render graphical objects (such as Fonts, Images, etc.). The reason it's a singleton is that, as I've said, the class initializes the third-party library automatically; it does this when the singleton object is created, and it exits the library when the singleton is destroyed.
Anyway, in order for GR to be able to render a Font it must obviously have access to its FontData object. One option would be to have a public getter, but that would expose the implementation of Font (no class other than Font and GR should care about FontData). Instead, I considered it better to make GR a friend of Font.
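In code, the friendship is just a one-line addition to the header (a sketch):
class Font
{
    friend class GraphicsRenderer; // GR may reach data_ directly

private:
    struct FontData;
    boost::shared_ptr<FontData> data_;
};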
Note: Until now I've done two things that some may consider shady (singleton and friend), but these are not the things I want to ask you about. Nevertheless, if you think my rationale for making GR a singleton and a friend of Font is wrong please do criticize me and maybe offer better solutions.
The shady thing. So GR has access to Font::data_ through friendship, but how does it know exactly what a FontData is (since it's not defined in the header, it's an incomplete type)? I'll just show the code and the comment that includes the rationale...
// =============================================================================
// graphics/font.cpp
// -----------------------------------------------------------------------------
struct Font::FontData : public sf::Font
{
    // Just a synonym of sf::Font
};
// A redefinition of FontData exists in GraphicsRenderer::printText(),
// which will have to be modified as well if this definition is modified.
// (The redefinition is called FontDataSurogate.)
// Why not have FontData defined only once in a separate header:
// If the definition of FontData changes, printText() will most likely have
// to be altered as well regardless. Considering that, and also that FontData
// has (and should have) a very simple definition, a separate header was
// considered too much of an overhead and of little practical advantage.
// =============================================================================
// graphics/graphics_renderer.cpp
// -----------------------------------------------------------------------------
void GraphicsRenderer::printText(const Font& fnt /* ... */)
{
    struct FontDataSurogate : public sf::Font
    {
    };

    FontDataSurogate* suro = (FontDataSurogate*)fnt.data_.get();
    sf::Font& font = (sf::Font&)(*suro);
    // ...
}
So that's the shady thing I'm trying to do. Basically what I want is a review of my rationale, so please tell me if you think I've done something horrendous, or if not, confirm my rationale so I can be a bit surer I'm doing the right thing. :) (This is my biggest project yet and I'm only at the beginning, so I'm kinda feeling around in the dark atm.)

In general, if something looks sketchy, I've found that it's often worth going back a few times and trying to figure out exactly why that's necessary. In most cases, some kind of fix pops up (maybe not as "nice", but not relying on any kind of trick).
Now, the first issue I see in your example is this bit of code:
struct FontDataSurogate : public sf::Font
{
};
occurs twice, in different files (neither being a header). That may come back and be a bother when you change one but not the other in the future, and making sure both are identical will very likely be a pain.
To solve that, I would suggest putting the definition of FontDataSurogate and the appropriate includes (whatever library/header defines sf::Font) in a separate header. From the two files that need to use FontDataSurogate, include that definition header (not from any other code files or headers, just those two).
If you have a main class declaration header for your library, place the forward declaration for the class there, and use pointers in your objects and parameters (regular pointers or shared pointers).
You can then use friend or add a get method to retrieve the data, but by moving the class definition to its own header, you've created a single copy of that code and have a single object/file that's interfacing with the other library.
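For instance, the shared header might look something like this (the file name, include guard, and SFML include path are my assumptions; use whatever header actually defines sf::Font):
// graphics/font_data.h -- included ONLY by font.cpp and graphics_renderer.cpp
#ifndef GRAPHICS_FONT_DATA_H
#define GRAPHICS_FONT_DATA_H

#include <SFML/Graphics/Font.hpp>
#include "font.h"

// The single, shared definition of Font's private data.
struct Font::FontData : public sf::Font
{
    // Just a synonym of sf::Font
};

#endif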
Edit:
You commented on the question while I was writing this, so I'll add on a reply to your comment.
"Too much overhead" - more to document, one more thing to include, the complexity of the code grows, etc.
Not so. You will have one copy of the code, compared to the two that must remain identical now. The code exists either way, so it needs to be documented, but your complexity and particularly your maintenance burden are reduced. You do gain two #include statements, but is that such a high cost?
"Little practical advantage" - printText() would have to be modified every time FontData is modified regardless of whether or not it's defined in a separate header or not.
The advantage is less duplicate code, making it easier to maintain for you (and others). Modifying the function when the input data changes is not surprising or unusual really. Moving it to another header doesn't cost you anything but the mentioned includes.

friend is fine, and encouraged. See C++ FAQ Lite's rationale for more info: Do friends violate encapsulation?
This line is indeed horrendous, as it invokes undefined behavior: FontDataSurogate* suro = (FontDataSurogate*)fnt.data_.get();

You forward declare the existence of the FontData struct, and then go on to fully define it in two locations: Font and GraphicsRenderer. Ew. Now you have to manually keep these exactly binary compatible.
I'm sure it works, but you're right, it is kind of shady. But whenever we say such-and-such is eeevil, we mean to avoid a certain practice, with the caveat that sometimes it can be useful. That being said, I don't think this is one of those times.
One technique is to invert your handling. Instead of putting all of the logic inside GraphicsRenderer, put some of it inside Font. Like so:
class Font
{
public:
    // const, so it can be called on the const Font& that printText receives
    void do_something_with_fontdata(GraphicsRenderer& gr) const;

private:
    struct FontData;
    boost::shared_ptr<FontData> data_;
};

void GraphicsRenderer::printText(const Font& fnt /* ... */)
{
    fnt.do_something_with_fontdata(*this);
}
This way, the Font details are kept within the Font class, and even GraphicsRenderer doesn't need to know the specifics of the implementation. This solves the friend issue too (although I don't think friend is all that bad to use).
Depending on how your code is laid out, and what it's doing, attempting to invert it like this may be quite difficult. If that is the case, simply move the real declaration of FontData to its own header file, and use it in both Font and GraphicsRenderer.

You've spent a lot more effort asking this question than you've supposedly saved by duplicating that code.
You state three reasons you didn't want to add the file:
Extra include
Extra Documentation
Extra Complexity
But I would have to say that 2 and 3 are increased by duplicating that code. Now you document what it's doing in the original place and what the fried monkey it's doing defined again in another random place in the code base. And duplicating code can only increase the complexity of a project.
The only thing you are saving is an include file. But files are cheap. You should not be afraid of creating them. There is almost zero cost (or at least there should be) to add a new header file.
The advantages of doing this properly:
There is only one definition for the compiler to see, so you no longer have to keep two hand-written definitions binary compatible yourself.
Someday, somebody is going to modify the FontData class without modifying printText(). Maybe they should modify printText(), but they either haven't done it yet or don't know that they need to. Or perhaps adding data to FontData makes sense in a way that simply hasn't occurred to you yet. Regardless, the different pieces of code will operate on different assumptions and will explode in a very hard-to-trace bug.

Related

C/C++ API design dilemma

I have been analysing the problem of API design in C++ and how to work around a big hole in the language when it comes to separating interfaces from implementations.
I am a purist and strongly believe in neatly separating the public interface of a system from any information about its implementation. I work daily on a huge codebase that is not only really slow to build, mainly because of header files pulling in a large number of other header files, but is also extremely hard for a client to dig into and see what something does, as the interface contains all sorts of functions for public, internal, and private use.
My library is split into several layers, each using some of the others. It's a design choice to expose every layer to the client, so that they can extend what the high-level entities can do by using lower-level entities, without having to fork my repository.
And now comes the problem. After thinking for a long while on how to do this, I have come to the conclusion that there is literally no way in C++ to separate the public interface from the details of a class in such a way that satisfies all of the following requirements:
It does not require any code duplication/redundancy. Reason: it is not scalable, and whilst it's OK for a few types it quickly becomes a lot more code for realistic codebases. Every single line in a codebase has a maintenance cost I would much prefer to spend on meaningful lines of code.
It has zero overhead. Reason: I do not want to pay any performance penalty for something that is (or at least should be!) well known at compile time.
It is not a hack. Reason: readability, maintainability, and because it's just plain ugly.
As far as I know, and this is where my question lies, in C++ there are three ways to fully hide the implementation of a class from its public interface.
Virtual interface: violates requirements 1 (code duplication) and 2 (overhead).
Pimpl: violates requirements 1 and 2.
Reinterpret casting the this pointer to the actual class in the .cpp. Zero overhead but introduces some code duplication and violates (3).
C wins here. Defining an opaque handle to your entity and a bunch of functions that take that handle as the first argument beautifully satisfies all requirements, but it is not idiomatic C++. I know one could say "just use C-style while writing C++", but that does not answer the question, as we are speaking about an idiomatic C++ solution for this.
Defining an opaque handle to your entity and a bunch of functions that take that handle as the first argument beautifully satisfies all requirements, but it is not idiomatic C++.
You can still encapsulate this in a class. The opaque handle would be the sole private data member of the class, its implementation not publicly exposed in any way. Implementation-wise, it would just be a pointer to a private data structure, dereferenced by the member functions of the class. This is still a minor improvement over the C solution, because all of the related data and functions are encapsulated in a single class, and it makes it unnecessary for the client to keep track of a handle and pass it to every function.
Yes, I suppose dereferencing a pointer introduces some trivial amount of overhead, but the C solution would have the same problem.
No code duplication is required, and although it could arguably be considered a hack (or at least, inelegant C++ design), it certainly is no more of a hack than the same approach implemented in C. The only difference is that C programmers have a lower threshold for what a "hack" is, because their language has fewer ways of expressing a design.
A rough sketch of the design I'm thinking of (basically the same as PIMPL, but with only the data members made opaque):
// In a header file:
class DrawingPen
{
public:
    DrawingPen(...); // ctor
    ~DrawingPen(); // dtor
    void SetThickness(int thickness);
    // ...and other member functions

private:
    void *pPen; // opaque handle to private data
};

// In an implementation file:
namespace {
    struct DrawingPenData
    {
        int thickness;
        int red;
        int green;
        int blue;
        // ... whatever else you need to describe the object or track its state
    };
}

// Definitions of the ctor, dtor, member functions, etc.
// For instance:
void DrawingPen::SetThickness(int thickness)
{
    // Get the object data through the handle.
    DrawingPenData *pData = reinterpret_cast<DrawingPenData*>(this->pPen);
    // Update the thickness.
    pData->thickness = thickness;
}
If you need private functions that work on a DrawingPen, but that you do not want to expose in the DrawingPen header, you would just place them in the same anonymous namespace in the implementation file, accepting a reference to the class object.
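For example (the helper's name and body are invented for illustration):
// Still in the implementation file, in the same anonymous namespace:
namespace {
    // Private helper; never mentioned in the DrawingPen header.
    void ApplyDefaults(DrawingPen &pen)
    {
        pen.SetThickness(1); // works through the public interface
    }
}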

Why can a user change from private to public?

I've made a program that uses libraries.
One library has an interface, included via a header, for an external program to call:
class IDA {
private:
    class IDA_A;
    IDA_A *p_IDA_A;

public:
    IDA();
    ~IDA();
    void A_Function(const char *A_String);
};
Then I opened the header with Kate and placed
class IDA_A;
IDA_A *p_IDA_A;
into the public part.
And this worked?! But why? And can I avoid that?
Greetings,
Earlybite
And this worked?!
Yes
But why?
private, protected and public are just hints to the C++ compiler about how the data structure is supposed to be used. The C++ compiler will error out if a piece of C++ source doesn't follow them, but that's it.
They have no effect whatsoever on the memory layout. In particular, the C++ standard mandates that member variables of a class are laid out in memory in the order they are declared in the class definition (at least within each access section).
And can I avoid that?
No. In the same way, you cannot prevent some other piece of code from casting an (opaque) pointer to your class to char* and dumping it out to a file as-is, complete with vtable and everything.
If you really want to prevent the user of a library from "looking inside", you must give them some handle value and internally use a map (std::map or similar) to look up the actual instance from the handle value. I do this in some of the libraries I develop, for robustness reasons, but also because some of the programming environments there are bindings for don't deal properly with pointers.
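A rough sketch of that handle/map approach, inside the library's implementation file where the full IDA definition is visible (the function and variable names here are invented for illustration):
#include <map>

typedef int IDA_Handle; // the only thing clients ever hold

namespace {
    std::map<IDA_Handle, IDA*> instances; // handle -> actual instance
    IDA_Handle next_handle = 1;
}

IDA_Handle IDA_Create()
{
    IDA_Handle h = next_handle++;
    instances[h] = new IDA();
    return h;
}

void IDA_A_Function(IDA_Handle h, const char *a_string)
{
    std::map<IDA_Handle, IDA*>::iterator it = instances.find(h);
    if (it != instances.end())
        it->second->A_Function(a_string);
}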
Of course this works. The whole private/public stuff is only checked by the compiler; there are no runtime checks for this. If the library is already compiled, the user only needs the library file (shared object) and the header files for development. If you afterwards change the header file in the way you mentioned, IDA_A is considered public by the compiler.
How can you avoid that? There is no way to. If another user changes your header file, you can do nothing about it. There's also the question: why bother with it? It can have very ugly side effects if the header file for an already-compiled lib is changed, so whoever changes it also has to deal with the issues. There is just some stuff you don't do, and one of them is moving things around in the headers of libraries (unless you are currently developing that library).
You've got access to the p_IDA_A member variable, and this is always possible in the manner you've outlined. But it is of type IDA_A, for which you only have a declaration, not the definition, so you cannot do anything meaningful with it. (This method of data hiding is called the pimpl idiom.)

Declaration and Implementation of Functions

According to my teacher, it's bad practice to write user-defined functions like this:
#include <iostream>
using namespace std;

int DoubleNumber(int Number)
{
    return Number * 2;
}

int main()
{
    cout << DoubleNumber(8);
}
Instead, he says to always use forward declarations, even if the functions don't need any knowledge of each other:
#include <iostream>
using namespace std;

int DoubleNumber(int Number); // Forward declaration.

int main()
{
    cout << DoubleNumber(8);
}

int DoubleNumber(int Number) // Implementation.
{
    return Number * 2;
}
I find this especially strange since he made a point of telling us how important it is that the forward declaration and implementation are exactly the same or you'll get errors. If it's such a big deal, why not just put it all above main()?
So, is it really bad practice to declare and implement at the same time? Does it even matter?
If you don't declare the forward declarations ("prototypes"), then you need to make sure that all your functions occur before any functions that depend on them, i.e. in the reverse order of the call graph. That's fine for a simple example as above, but is a complete pain for anything more realistic (and in some cases impossible, if there are any loops in the call graph).
I think your teacher is an old C programmer.
If you wrote a C program without forward declarations and one function called another function declared later in the file (or in a different compilation unit), the compiler would not complain but silently pretend to know what the prototype should be.
Debugging is horrible if you don't know whether your compiler is passing the arguments correctly. Therefore it was a good defensive policy to always declare all functions; at least the compiler could raise an error if the declaration did not match the implementation.
C compilers and tools have gotten better (I hope). It is still not an error to call an unknown function, but GCC, for example, is kind enough to warn by default.
But in C++ you can't call a function that hasn't been declared or defined. Consequently, C++ programmers don't worry much about forward declarations.
Your teacher's policy is horrible IMHO. Use forward declarations only when they're really needed. That way, their presence demonstrates their necessity, which gives the reader useful documentation (i.e., there may be mutual recursion between the functions). Of course you do need forward declarations in header files; that's what they're for.
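For example, mutual recursion is one case where a forward declaration is genuinely necessary (toy sketch):
bool is_odd(int n); // forward declaration: needed, because is_even calls it

bool is_even(int n)
{
    return n == 0 ? true : is_odd(n - 1);
}

bool is_odd(int n)
{
    return n == 0 ? false : is_even(n - 1);
}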
In my first programming class, the teacher also emphasized this point. I'm not exactly sure there is a benefit to such a simple case in actual software.
However, it does prepare you for using header files, if you haven't covered that yet. In a typical case, you will have a header file custom-math.h and a source file custom-math.cpp, where custom-math.h contains the forward declaration and custom-math.cpp the implementation. Splitting things this way can significantly reduce compilation times in large projects when you modify only function implementations. It is also a convenient way to split your program into "logical" groups of functions and/or classes.
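Using the names above, that split might look like this (a sketch):
// custom-math.h
#ifndef CUSTOM_MATH_H
#define CUSTOM_MATH_H

int DoubleNumber(int Number); // declaration: this is what clients include

#endif

// custom-math.cpp
#include "custom-math.h"

int DoubleNumber(int Number) // implementation: editing this recompiles only this file
{
    return Number * 2;
}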
If you are going to put other functions in the same file as main(), then what you do probably depends on your personal preference. Some people prefer to have main() close to the top to get to the program logic right away. In this case, forward declare your functions.
Karl Knechtel writes "Use forward declarations only when they're really needed. That way, their presence demonstrates their necessity, which gives the reader useful documentation (i.e., there may be mutual recursion between the functions)." and IMHO that's sound advice.
Oli Charlesworth talks about "complete pain" for ordering functions so that they can be called without forward declarations. That's not my experience, and I cannot imagine how that pain/problem is accomplished. I suspect a PEBCAK problem there.
A practice of using forward declarations for all functions will not save you from PEBCAK problems, but it does introduce needless maintenance work and needlessly more code to relate to, and it does make it less clear which functions really need forward declarations.
If you get to the point where forward declarations could help to see function signatures at a glance, when forced to some very simple editor, then there are two actions that should be taken: (1) refactoring of the code, and (2) switching to a better editor.
Cheers & hth.,
Ask your teacher about why he recommends this ;) Anyway, it's not bad practice in my opinion, and in most cases it doesn't even matter. The biggest advantage of declaring all functions upfront is that you have a high-level overview of what the code does.
Traditionally you'll be sticking all your prototypes in a header file, so they can be used by other source files - or at least, you'll put the ones you want to expose in the .h file.
As far as code where it's not necessary, there's something to be said for putting all your file-level declarations at the top (variables and functions) because it means you can move functions around at-will and not have to worry about it. Not to mention, I can see every function in a file right away. But something like:
void Func1() { ... }
...
void Func2() { ... }
...
void Func3() { ... }
...
int main() { Func1(); Func2(); Func3(); return 0; }
That - that is to say, a number of disjointed functions all called by main() - is a very common file, and it's perfectly reasonable to forgo the declaration.
Blanket rules are rarely correct. The public API you would normally expose through the prototypes in the header file. The rest of the functions will likely be in an anonymous namespace in the cpp file. If those are called multiple times in the implementation, it makes sense to provide prototypes at the top; otherwise every function using them would have to provide prototypes before calling them. At the same time, if some function is used multiple times in the cpp file, it might be an indication that it's universal enough to be moved to a common API. If the functions are not used all over the place, it's better to give them as limited exposure as possible, i.e. declare and define them close to the place they are called from, as sketched below.
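A sketch of that layout (all names hypothetical, assuming a Widget class declared in widget.h):
// widget.cpp
#include "widget.h"

namespace {
    // Prototypes up top, so the definitions below can appear in any order.
    int Clamp(int value);
    void Log(int value);
}

void Widget::Update(int value)
{
    Log(Clamp(value));
}

namespace {
    int Clamp(int value) { return value < 0 ? 0 : value; }
    void Log(int value) { /* ... */ }
}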
Personally, I like to only forward declare anything that client code needs to know about (i.e. in a class header). Anything that's local to a class implementation (such as helper functions), should be defined prior to usage.
Of course at the end of the day, for the most part, it boils down to either personal preference or coding standard that your project is following.

Header file best practices for typedefs

I'm using shared_ptr and STL extensively in a project, and this is leading to over-long, error-prone types like shared_ptr< vector< shared_ptr<const Foo> > > (I'm an ObjC programmer by preference, where long names are the norm, and still this is way too much.) It would be much clearer, I believe, to consistently call this FooListPtr and document the naming convention that "Ptr" means shared_ptr and "List" means vector of shared_ptr.
This is easy to typedef, but it's causing headaches with the headers. I seem to have several options of where to define FooListPtr:
Foo.h. That entwines all the headers and creates serious build problems, so it's a non-starter.
FooFwd.h ("forward header"). This is what Effective C++ suggests, based on iosfwd.h. It's very consistent, but the overhead of maintaining twice the number of headers seems annoying at best.
Common.h (put all of them together into one file). This kills reusability by entwining a lot of unrelated types. You now can't just pick up one object and move it to another project. That's a non-starter.
Some kind of fancy #define magic that typedef's if it hasn't already been typedefed. I have an abiding dislike for the preprocessor because I think it makes it hard for new people to grok the code, but maybe....
Use a vector subclass rather than a typedef. This seems dangerous...
Are there best practices here? How do they turn out in real code, when reusability, readability and consistency are paramount?
I've marked this community wiki if others want to add additional options for discussion.
I'm programming on a project which sounds like it uses the common.h method. It works very well for that project.
There is a file called ForwardsDecl.h which is in the pre-compiled header and simply forward-declares all the important classes and necessary typedefs. In this case unique_ptr is used instead of shared_ptr, but the usage should be similar. It looks like this:
#include <memory>
#include <vector>

// Forward declarations
class ObjectA;
class ObjectB;
class ObjectC;

// List typedefs
typedef std::vector<std::unique_ptr<ObjectA>> ObjectAList;
typedef std::vector<std::unique_ptr<ObjectB>> ObjectBList;
typedef std::vector<std::unique_ptr<ObjectC>> ObjectCList;
This code is accepted by Visual C++ 2010 even though the classes are only forward-declared (the full class definitions are not necessary, so there's no need to include each class's header file). I don't know if that's standard, or whether other compilers will require the full class definition, but it's useful that this one doesn't: another class (ObjectD) can have an ObjectAList as a member without needing to include ObjectA.h - this can really help reduce header file dependencies!
Maintenance is not particularly an issue, because the forward declarations only need to be written once, and any subsequent changes only need to happen in the full declaration in the class's header file (and this will trigger fewer source files to be recompiled, due to reduced dependencies).
Finally, it appears this can be shared between projects (I haven't tried it myself), because even if a project does not actually declare an ObjectA, it doesn't matter: it was only forward-declared, and if you don't use it, the compiler doesn't care. Therefore the file can contain the names of classes across all projects it's used in, and it doesn't matter if some are missing for a particular project. All that is required is that the necessary full-definition header (e.g. ObjectA.h) be included in any source (.cpp) files that actually use them.
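To illustrate the reduced dependencies (a sketch; ObjectD's members are invented):
// ObjectD.h -- note: no #include "ObjectA.h" required
#include "ForwardsDecl.h"

class ObjectD
{
public:
    ~ObjectD(); // defined in ObjectD.cpp, where ObjectA.h IS included,
                // so unique_ptr sees ObjectA's complete type when destroying
private:
    ObjectAList objects_; // fine here: ObjectA is only forward-declared
};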
I would go with a combined approach of forward headers and a kind of common.h header that is specific to your project and just includes all the forward declaration headers and any other stuff that is common and lightweight.
You complain about the overhead of maintaining twice the number of headers but I don’t think this should be too much of a problem: the forward headers usually only need to know a very limited number of types (one?), and sometimes not even the full type.
You could even try auto-generating the headers using a script (this is done e.g. in SeqAn) if there are really that many headers.
+1 for documenting the typedef conventions.
Foo.h - can you detail the problems you have with that?
FooFwd.h - I'd not use them generally, only on "obvious hotspots". (Yes, "hotspots" are hard to determine).
It doesn't change the rules IMO because when you do introduce a fwd header, the associated typedefs from foo.h move there.
Common.h - cool for small projects, but doesn't scale, I do agree.
Some kind of fancy #define... PLEASE NO!...
Use a vector subclass - doesn't make it better.
You might use containment, though.
So here are the preliminary suggestions (revised from that other question...)
Standard type headers <boost/shared_ptr.hpp>, <vector> etc. can go into a precompiled header / shared include file for the project. This is not bad. (I personally still include them where needed, but that works in addition to putting them into the PCH.)
If the container is an implementation detail, the typedefs go where the container is declared (e.g. private class members if the container is a private class member)
Associated types (like FooListPtr) go where Foo is declared, if the associated type is the primary use of the type. That's almost always true for some types - e.g. shared_ptr.
If Foo gets a separate forward declaration header, and the associated type is ok with that, it moves to the FooFwd.h, too.
If the type is only associated with a particular interface (e.g. parameter for a public method), it goes there.
If the type is shared (and does not meet any of the previous criteria), it gets its own header. Note that this also means pulling in all its dependencies.
It feels "obvious" for me, but I agree it's not good as a coding standard.
I'm using shared_ptr and STL extensively in a project, and this is leading to over-long, error-prone types like shared_ptr< vector< shared_ptr<const Foo> > > (I'm an ObjC programmer by preference, where long names are the norm, and still this is way too much.) It would be much clearer, I believe, to consistently call this FooListPtr and document the naming convention that "Ptr" means shared_ptr and "List" means vector of shared_ptr.
for starters, i recommend using good design structures for scoping (e.g., namespaces) as well as descriptive, non-abbreviated names for typedefs. FooListPtr is terribly short, imo. nobody wants to guess what an abbreviation means (or be surprised to find Foo is const, shared, etc.), and nobody wants to alter their code simply because of scope collisions.
it may also help to choose a prefix for typedefs in your libraries (as well as other common categories).
it's also a bad idea to drag types out of their declared scope:
namespace MON {
namespace Diddy {
class Foo;
} /* << Diddy */
/*...*/
typedef Diddy::Foo Diddy_Foo;
} /* << MON */
there are exceptions to this:
an entirely encapsulated private type
a contained type within a new scope
while we're at it, using-directives in namespace scope and namespace aliases should be avoided - qualify the scope if you want to minimize future maintenance.
This is easy to typedef, but it's causing headaches with the headers. I seem to have several options of where to define FooListPtr:
Foo.h. That entwines all the headers and creates serious build problems, so it's a non-starter.
it may be an option for declarations which really depend on other declarations. implying that you need to divide packages, or there is a common, localized interface for subsystems.
FooFwd.h ("forward header"). This is what Effective C++ suggests, based on iosfwd.h. It's very consistent, but the overhead of maintaining twice the number of headers seems annoying at best.
don't worry about the maintenance of this, really. it is a good practice. the compiler uses forward declarations and typedefs with very little effort. it's not annoying because it helps reduce your dependencies, and helps ensure that they are all correct and visible. there really isn't more to maintain since the other files refer to the 'package types' header.
Common.h (put all of them together into one file). This kills reusability by entwining a lot of unrelated types. You now can't just pick up one object and move it to another project. That's a non-starter.
package based dependencies and inclusions are excellent (ideal, really) - do not rule this out. you'll obviously have to create package interfaces (or libraries) which are designed and structured well, and represent related classes of components. you're making an unnecessary issue out of object/component reuse. minimize the static data of a library, and let the link and strip phases do their jobs. again, keep your packages small and reusable and this will not be an issue (assuming your libraries/packages are well designed).
Some kind of fancy #define magic that typedef's if it hasn't already been typedefed. I have an abiding dislike for the preprocessor because I think it makes it hard for new people to grok the code, but maybe....
actually, you may declare a typedef in the same scope multiple times (e.g., in two separate headers) - that is not an error.
declaring a typedef in the same scope with different underlying types is an error. obviously. you must avoid this, and fortunately the compiler enforces that.
to avoid this, create a 'translation build' which includes the world - the compiler will flag declarations of typedeffed types which don't match.
trying to sneak by with minimal typedefs and/or forwards (which are close enough to free at compilation) is not worth the effort. sometimes you'll need a bunch of conditional support for forward declarations - once that is defined, it is easy (stl libraries are a good example of this -- in the event you are also forward declaring template<typename,typename>class vector;).
it's best to just have all these declarations visible to catch any errors immediately, and you can avoid the preprocessor in this case as a bonus.
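a tiny example of both cases:
#include <vector>

typedef std::vector<int> IntList; // e.g. pulled in from one header
typedef std::vector<int> IntList; // identical redeclaration: perfectly legal

// typedef std::vector<long> IntList; // error: conflicting declaration -
// the compiler flags exactly the mismatch described above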
Use a vector subclass rather than a typedef. This seems dangerous...
a subclass of std::vector is often flagged as a "beginner's mistake". this container was not meant to be subclassed. don't resort to bad practices simply to reduce your compile times/dependencies. if the dependency really is that significant, you should probably be using PIMPL, anyways:
// <package>.types.hpp
namespace MON {
    class FooListPtr;
}

// FooListPtr.hpp
namespace MON {
    class FooListPtr {
        /* ... */
    private:
        shared_ptr< vector< shared_ptr<const Foo> > > d_data;
    };
}
Are there best practices here? How do they turn out in real code, when reusability, readability and consistency are paramount?
ultimately, i've found a small concise package based approach the best for reuse, for reducing compile times, and minimizing dependence.
Unfortunately with typedefs you have to choose between not ideal options for your header files. There are special cases where option one (right in the class header) works well, but it sounds like it won't work for you. There are also cases where the last option works well, but it's usually where you are using the subclass to replace a pattern involving a class with a single member of type std::vector. For your situation, I'd use the forward declaring header solution. There's extra typing and overhead, but it wouldn't be C++ otherwise, right? It keeps things separate, clean and fast.

Is there a good way to avoid duplication of method prototypes in C++?

Most C++ class method signatures are duplicated between the declaration, normally in a header file, and the definition in a source file, in the code I have read. I find this repetition undesirable, and code written this way suffers from poor locality of reference. For instance, the methods in source files often reference instance variables declared in the header file; you end up having to constantly switch between header files and source files when reading code.
Would anyone recommend a way to avoid doing so? Or, am I mainly going to confuse experienced C++ programmers by not doing things in the usual way?
See also Question 538255 C++ code in header files where someone is told that everything should go in the header.
There is an alternative, but the cure is worse than the illness — define all the function bodies in the header, or even inline in the class, like C#. The downsides are that this will bloat compile times significantly, and it'll annoy veteran C++ programmers. It can also get you into some annoying situations of circular dependency that, while solvable, are a nuisance to deal with.
Personally, I just set my IDE to have a vertical split, and put the header file on the right side and the source file on the left.
I assume you're talking about member function declarations in a header file and definitions in source files?
If you're used to the Java/Python/etc. model, it may well seem redundant. In fact, if you were so inclined, you could define all functions inline in the class definition (in the header file). But, you'd definitely be breaking with convention and paying the price of additional coupling and compilation time every time you changed anything minor in the implementation.
C++, Ada, and other languages originally designed for large scale systems kept definitions hidden for a reason--there's no good reason that the users of a class should have to be concerned with its implementation, nor any reason they should have to repeatedly pay to compile it. Less of an issue nowadays with faster systems, but still relevant for really large systems. Additionally, TDD, stubbing and other testing strategies are facilitated by the isolation and quicker compilation.
Don't break with convention. In the end, you will make a ball of worms that doesn't work very well. Plus, compilers will hate you. C/C++ are set up that way for a reason.
The C++ language supports function overloading, which means that the entire function signature is basically a way to identify a specific function. For this reason, as long as you declare and define a function separately, there's really no redundancy in having to list the parameters again. More precisely, having to list the parameter types is not redundant. Parameter names, on the other hand, play no role in this process, and you are free to omit them in the declaration (i.e. in the header file), although I believe this limits readability.
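For example (a trivial sketch):
int Add(int, int); // declaration: parameter names may be omitted

int Add(int lhs, int rhs) // definition: names are needed to use the values
{
    return lhs + rhs;
}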
You "can" get around the problem. You define an abstract interface class that only contains the pure virtual functions that an outside application will call. Then in the CPP file you provide the actual class that derives from the interface and contains all the class variables. You implement as normal now. The only thing this requires is a way to instantiate the derived implementation class from the interface class. You could do that by providing a static "Create" function that has its implementation in the CPP file.
i.e.
InterfaceClass* InterfaceClass::Create()
{
    return new ImplementationClass;
}
This way you effectively hide the implementation from any outside user. You can't, however, create the class on the stack, only on the heap ... but it does solve your problem AND provides a better layer of abstraction. In the end, though, if you aren't prepared to do this, you need to stick with what you are doing.
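Filling in the surrounding pieces of that pattern (a sketch; everything apart from Create() is illustrative):
// In the header -- the only thing clients ever see:
class InterfaceClass
{
public:
    virtual ~InterfaceClass() {}
    virtual void DoWork() = 0; // whatever operations the application needs
    static InterfaceClass* Create(); // factory, implemented in the CPP file
};

// In the CPP file -- hidden from clients:
class ImplementationClass : public InterfaceClass
{
public:
    ImplementationClass() : counter_(0) {}
    virtual void DoWork() { ++counter_; /* real work goes here */ }
private:
    int counter_; // class variables live here, invisible to the header
};

InterfaceClass* InterfaceClass::Create()
{
    return new ImplementationClass;
}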