Why doesn't std::unary_function contain a virtual destructor? - c++

I came across the class templates std::unary_function and std::binary_function.
template <class Arg, class Result>
struct unary_function {
    typedef Arg argument_type;
    typedef Result result_type;
};

template <class Arg1, class Arg2, class Result>
struct binary_function {
    typedef Arg1 first_argument_type;
    typedef Arg2 second_argument_type;
    typedef Result result_type;
};
Both of these can be used as base classes for specific purposes, yet neither has a virtual destructor. One reason I can guess is that they are not meant to be treated polymorphically, i.e.
std::unary_function<int, int>* ptr;
// initialize it
// do something
delete ptr;
But if that is so, shouldn't the destructor be declared with protected access so that the compiler would reject any such attempt?

In a well-balanced C++ design philosophy the idea of "preventing" something from happening is mostly applicable when there's a good chance of accidental and not-easily-detectable misuse. And even in that case the preventive measures are only applicable when they don't impose any significant penalties. The purpose of such classes as unary_function, binary_function, iterator etc. should be sufficiently clear to anyone who knows about them. It would take a completely clueless user to use them incorrectly.
In the case of classes that implement the well-established idiom of "group member injection" through public inheritance, adding a virtual destructor to the class would be a major design error. Turning a non-polymorphic class into a polymorphic one is a major qualitative change. Paying such a price for the ability to use this idiom would be prohibitive.
A non-virtual protected destructor is a different story... I don't know why they didn't go that way. Maybe it just looked unnecessarily excessive to add a member function for that purpose alone (since otherwise these classes contain only typedefs).
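For illustration, the protected-destructor variant would have looked something like this (a hypothetical sketch, not what the standard actually specifies):
template <class Arg, class Result>
struct unary_function {
    typedef Arg argument_type;
    typedef Result result_type;
protected:
    // non-virtual and protected: derived classes destruct fine,
    // but delete through a unary_function* no longer compiles
    ~unary_function() {}
};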
Note that even though unary_function, binary_function are deprecated, iterator is not. The deprecation does not target the idiom itself. The idiom is widely used within other larger-scale design approaches, like C++ implementation of Mixins and such.

Because std::unary_function and std::binary_function are, by design, not supposed to be used for polymorphic deletion: they exist only to provide typedefs to their child classes, and have no other intent.
Being a base class in C++ does not mean that the class must exhibit any particular polymorphic behaviour.
i.e. you should never see code such as:
void foo(std::unary_function<int, int>* f)
{
    delete f; // illegal
}
Note that both classes have been deprecated since C++11 (see N3145).

The basic reason the various type tags (e.g., there is also std::iterator<...>) don't play nicely with people who believe everything derived from is meant to be a polymorphic base class is that the overall design of the STL frowns upon the use of inheritance for polymorphism. That is, the people who proposed these classes saw no reason why anybody would want to treat any of them as dynamically polymorphic, especially not these empty-by-design type tags. Thus, little effort was made to prevent silly mistakes.
When these classes were accepted as part of the STL at large, a lot more effort was spent on removing the rough edges of the STL than on unimportant details. Also, having the type tags be empty could be useful, as they wouldn't interfere with some of the constraints placed upon classes using any access specifiers. Thus, the type tags were left empty.
As it is specifically not needed to use any of these type tags with C++11 (the return type can be determined upon use and the arguments can be perfectly forwarded), these types are being deprecated rather than "fixed" (assuming they are considered broken).

Related

Why is it bad to impose type constraints on templates in C++?

In this question the OP asked about limiting which classes a template will accept. A summary of the sentiment that followed is that the equivalent facility in Java is bad, and that you shouldn't do this in C++.
I don't understand why this is bad. Duck typing is certainly a powerful tool, but in my mind it lends itself to confusing runtime issues when a class looks close (same function names) but has slightly different behavior. And you can't necessarily rely on compile-time checking because of examples like this:
#include <iostream>

struct One { int a; int b; };
struct Two { int a; };

template <class T>
class Worker {
    T data;
public:
    void print() { std::cout << data.a << std::endl; }
    void usually_important() { int a = data.a; int b = data.b; }
};

int main() {
    Worker<Two> w;
    w.print();
}
Type Two will allow Worker to compile only if usually_important is not called. This could lead to some instantiations of Worker compiling and others not, even in the same program.
In a case like this, though, the responsibility is put onto the designer of ENGINE to ensure that it is a valid type (after which they should inherit from ENGINE_BASE). If they don't, there will be a compiler error. To me this seems much safer, while not imposing any restrictions or adding much additional work.
#include <boost/static_assert.hpp>
#include <boost/type_traits/is_base_of.hpp>

class ENGINE_BASE {}; // Empty class, all engines should extend this

template <class ENGINE>
class NeedsAnEngine {
    BOOST_STATIC_ASSERT((boost::is_base_of<ENGINE_BASE, ENGINE>::value));
    // Do stuff with ENGINE...
};
This is too long, but it might be informative.
Generics in Java are a type erasure mechanism, with automatic code generation of type casts and type checks.
Templates in C++ are a code generation and pattern matching mechanism.
You can use C++ templates to do what Java generics do with a bit of effort. std::function< A(B) > behaves in a covariant/contravariant fashion with regard to the A and B types and conversion to other std::function< X(Y) >.
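For instance, here is a minimal sketch of that variance behaviour (the types are made up for illustration):
#include <functional>

struct B {};
struct D : B {};

D* make(B*) { return nullptr; }

int main() {
    // covariant in the result (the returned D* converts to B*) and
    // contravariant in the argument (the incoming D* converts to B*)
    std::function<B*(D*)> f = make;
}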
But the primary design of the two is not the same.
A Java List<X> will be a List<Object> with some thin wrapping on it so users don't have to do type casts on extraction. If you pass it as a List<? extends Bar>, it again is getting a List<Object> in essence, it just has some extra type information that changes how the casts work and which methods can be invoked. This means you can extract elements from the List into a Bar and know it works (and check it). Only one method is generated for all List<? extends Bar>.
A C++ std::vector<X> is not in essence a std::vector<Object> or std::vector<void*> or anything else. Each instance of a C++ template is an unrelated type (except template pattern matching). In fact, std::vector<bool> uses a completely different implementation than any other std::vector (this is now considered a mistake because the implementation differences "leak" in annoying ways in this case). Each method and function is generated independently for the particular type you pass it.
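A short sketch of how that implementation difference leaks:
#include <vector>

int main() {
    std::vector<int> v = {1, 0};
    auto x = v[0];  // x is int: operator[] returns a real reference
    std::vector<bool> w = {true, false};
    auto y = w[0];  // y is std::vector<bool>::reference, a proxy object, not bool&
}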
In Java, it is assumed that all objects will fit into some hierarchy. In C++, that is sometimes useful, but it has been discovered it is often ill fitting to a problem.
A C++ container need not inherit from a common interface. A std::list<int> and std::vector<int> are unrelated types, but you can act on them uniformly -- they both are sequential containers.
The question "is the argument a sequential container" is a good question. This allows anyone to implement a sequential container, and such sequential containers can as high performance as hand-crafted C code with utterly different implementations.
If you created a common root std::container<T> which all containers inherited from, it would either be full of virtual table cruft or it would be useless other than as a tag type. As a tag type, it would intrusively inject itself into all non-std containers, requiring that they inherit from std::container<T> to be a real container.
The traits approach instead means that there are specifications as to what a container (sequential, associative, etc) is. You can test these specifications at compile time, and/or allow types to note that they qualify for certain axioms via traits of some kind.
The C++03/11 standard library does this with iterators. std::iterator_traits<T> is a traits class that exposes iterator information about an arbitrary type T. Someone completely unconnected to the standard library can write their own iterator, and use std::iterator<...> to auto-work with std::iterator_traits, add their own type aliases manually, or specialize std::iterator_traits to pass on the information required.
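For example, raw pointers have no nested typedefs at all, yet the traits class still describes them (a minimal sketch):
#include <iterator>
#include <type_traits>

// iterator_traits has a partial specialization for pointers,
// so int* works as a random access iterator without inheriting anything
static_assert(std::is_same<std::iterator_traits<int*>::value_type, int>::value,
              "value_type of int* is int");
static_assert(std::is_same<std::iterator_traits<int*>::iterator_category,
                           std::random_access_iterator_tag>::value,
              "pointers model random access iteration");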
C++11 goes a step further. for( auto&& x : y ) can work with things that were written long before range-based iteration was designed, without touching the class itself. You simply write free begin and end functions in the namespace that the class belongs to that return a valid forward iterator (note: even invalid forward iterators that are close enough work), and suddenly for ( auto&& x : y ) starts working.
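A sketch of that retrofit (all names here are made up):
namespace legacy {
    struct Ints {              // a pre-C++11 type we cannot modify
        int data[3];
    };
    // free begin/end in the type's namespace, found via ADL
    int* begin(Ints& c) { return c.data; }
    int* end(Ints& c)   { return c.data + 3; }
}

int main() {
    legacy::Ints y = {{1, 2, 3}};
    for (auto&& x : y) { (void)x; } // works without touching the class
}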
std::function< A(B) > is an example of using these techniques together with type erasure. It has a constructor that accepts anything that can be copied, destroyed, invoked with (B) and whose return type can be converted to A. The types it can take can be completely unrelated -- only that which is required is tested for.
Because of std::function's design, we can have lambda invokables that are unrelated types yet can be type-erased into a common std::function if needed; when not type-erased, their invocation behaviour is known from their type. So a template function that takes a lambda knows at the point of invocation what will happen, which makes inlining an easy local operation.
This technique is not new -- it has been in C++ since std::sort, a high-level algorithm that is faster than C's qsort due to the ease of inlining invokable objects passed as comparators.
In short, if you need a common runtime type, type erase. If you need certain properties, test for those properties, don't force a common base. If you need certain axioms to hold (untestable properties), either document or require callers to claim those properties via tags or traits classes (see how the standard library handles iterator categories -- again, not inheritance). When in doubt, use free functions with ADL enabled to access properties of your arguments, and have your default free functions use SFINAE to look for a method and invoke if it exists, and fail otherwise.
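A rough sketch of that last suggestion (the function names are hypothetical):
#include <cstddef>

// preferred overload: participates only when t.size() is well-formed (SFINAE)
template <typename T>
auto size_or_zero(const T& t, int) -> decltype(t.size()) { return t.size(); }

// fallback overload: chosen when the one above drops out of overload resolution
template <typename T>
std::size_t size_or_zero(const T&, long) { return 0; }

// the int/long dummy argument makes the SFINAE overload the better match for 0
template <typename T>
std::size_t size_or_zero(const T& t) { return size_or_zero(t, 0); }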
Such a mechanism removes the central responsibility of a common base class, allows existing classes to be adapted without modification to pass your requirements (if reasonable), places type erasure only where it is needed, avoids virtual overhead, and ideally generates clear errors when properties are found to not hold.
If your ENGINE has certain properties it needs to satisfy, write a traits class that tests for those.
If there are properties that cannot be tested for, create tags that describe such properties. Use specialization of a traits class, or canonical typedefs, to let the class describe which axioms hold for the type. (See iterator tags).
If you have a type like ENGINE_BASE, don't demand it, but instead use it as a helper for said tags and traits and axiom typedefs, like std::iterator<...> (you never have to inherit from it, it simply acts as a helper).
Avoid over specifying requirements. If usually_important is never invoked on your Worker<X>, probably your X doesn't need a b in that context. But do test for properties in a way clearer than "method does not compile".
And sometimes, just punt. Following such practices might make things harder for you -- so take the easier way when the code doesn't warrant it. Most code is written and discarded. Know when your code will persist, and write it better, more extensibly and more maintainably. Know that you need to practice those techniques on disposable code so you can apply them correctly when you have to.
Let me turn the question around on you: Why is it bad that the code compiles for Two if usually_important isn't called? The type you gave it meets all the needs for that particular instantiation and the compiler will immediately tell you if a particular instantiation no longer meets the interface needed for the needed functionality in the template.
That said, if you insist that you need an Engine object, don't do it with templates at all; instead treat it as a sort of strategy pattern with a non-template (this approach enforces at compile time that the user-defined type adheres to a specific interface, not just that it looks like a duck):
#include <iostream>

// EngineBase and ConcreteEngine are sketched here so the example is self-contained
class EngineBase {
public:
    virtual ~EngineBase() {}
    virtual int a() = 0;
    virtual int b() = 0;
};

class ConcreteEngine : public EngineBase {
public:
    int a() { return 1; }
    int b() { return 2; }
};

class Worker {
public:
    explicit Worker(EngineBase* data) : data_(data) {}
    void print() { std::cout << data_->a() << std::endl; }
    void usually_important() { int a = data_->a(); int b = data_->b(); }
private:
    EngineBase* data_;
};

int main() {
    Worker w(new ConcreteEngine);
    w.print();
}
I don't understand why this is bad. Duck typing is certainly a
powerful tool, but in my mind it lends itself to confusing runtime
issues when a class looks close (same function names) but has slightly
different behavior.
The probability that you can define a non-trivial interface and then by accident have another interface that has different semantics but can be substituted is minimal. This never, ever happens.
Type Two will allow Worker to compile only if usually_important is not
called.
That is a good thing. We depend on it all the time. It makes class templates more flexible.
Matching a compile-time interface is strictly superior to a run-time one. This is because run-time interfaces can't differ in key ways that compile-time ones can (e.g. different types in the interface), and require a bunch of run-time abstraction like dynamic allocation that may be unnecessary.
In a case like this, though, the responsibility is put onto the
designer of ENGINE to ensure that it is a valid type (after which they
should inherit from ENGINE_BASE). If they don't, there will be a compiler
error. To me this seems much safer while not imposing any restrictions
or adding much additional work.
It is not safer. It is utterly pointless. It is stupendously unlikely that the user will accidentally instantiate the class with the wrong type and yet have it compile successfully due to a circumstantial interface match.
What it really boils down to is this: you should only require what you really need. Absolutely definitely must have in order to function. Everything else, don't require it. This is a core tenet of making software maintainable. You cannot possibly imagine what shenanigans I might conceive of long after you have written this class to use it in ways that you never thought it could be used for.

Advantages of typedef over derived class?

Simply put, what are the (or are there any) differences between doing, say,
class MyClassList : list<MyClass> { };
vs
typedef list<MyClass> MyClassList;
The only advantage that I can think of (and it's what led me to this question) is that with the derived class I can now easily forward declare MyClassList as
class MyClassList;
without compiler error, instead of
class MyClass;
typedef list<MyClass> MyClassList;
I can't think of any differences, but this made me wonder: are there cases in which a typedef can be used where a simple derived class can't?
Or to put it another way, is there any reason why I shouldn't change all my typedef list<...> SomeClassList; to the simple derived class so that I can easily forward declare them?
In C++ it is NOT recommended to derive from an STL container, so don't do it.
A typedef just creates an alias for an existing type, as it were, so typedef std::list<MyClass> MyClassList; creates a "new type" called MyClassList which you can now use as follows:
MyClassList lst;
Changing your typedefs to a derived class is a bad idea. Don't do it.
typedef is intended exactly for this purpose -- to alias type names. It's very idiomatic and won't confuse anybody familiar with C++.
But to address why inheriting may be a bad idea:
std::list does not have a virtual destructor. That means MyClassList's destructor won't be called when it is deleted through a pointer to the base class, so this is typically frowned upon. In your case, you have no intention of putting any members in MyClassList, so this is a moot point, until the next programmer sees the inheritance as an invitation to add new members/override functions etc. They may not realize that std::list's destructor is not virtual, and not realize that in some cases MyClassList's destructor won't get called.
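A sketch of the hazard (assuming public inheritance; note that the class declaration in the question inherits privately by default, which would prevent the base-class conversion entirely):
#include <list>

class MyClass {};
class MyClassList : public std::list<MyClass> { };

int main() {
    std::list<MyClass>* p = new MyClassList;
    delete p; // undefined behavior: std::list has no virtual destructor
}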
Well, a typedef can only do what its name suggests while a derived class can possibly be a full-blown makeover of its base(s). So while there may not be much of a difference if you limit yourself to "just" deriving (and not add any members, or override anything, etc) as far as the compiler is concerned, there might be a big difference as far as human readers of the code are concerned.
One might wonder "why is this a derived class when a typedef would suffice?" Most people would assume that there must be a reason, so you would make life harder for the code's future maintainers. A typedef, on the other hand, is a very specific tool and does not raise questions.
And while we're on the topic of maintenance, don't forget that, as with most things in C++, this "nothing will go wrong as long as we are disciplined and don't cross this line" stance is an open invitation to disaster. Since the compiler isn't there to stop you, someone, someday, will cross the line.
A number of things have been mentioned. A big one, however:
Deriving from a type does not inherit all the constructors.
If there are a number of non-default constructors, you won't have them when inheriting (you'd have to forward them to the base constructor).
Typedefs have no such 'issue'.
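If you do inherit, C++11 lets you pull the base class constructors back in explicitly (a sketch, again assuming public inheritance):
#include <list>

class MyClass {};

class MyClassList : public std::list<MyClass> {
public:
    using std::list<MyClass>::list; // inherit the base constructors
};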
Now, typedefs do not generate unique typeids. If you want that, without the overhead or other disadvantages of inheritance, look at Boost: it has a strong typedef macro that generates a unique typeid:
http://www.boost.org/doc/libs/1_37_0/boost/strong_typedef.hpp
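Usage looks roughly like this (a sketch; the names are made up):
#include <boost/strong_typedef.hpp>

BOOST_STRONG_TYPEDEF(int, Distance) // a distinct type wrapping int
BOOST_STRONG_TYPEDEF(int, Weight)   // not interchangeable with Distance

void f(Distance d);
void f(Weight w); // overloading now works; a plain typedef could not do this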
A typedef is an alias, while a class is a new type.
In the first case, the compiler simply replaces MyClassList with list<MyClass>.
In the second case, MyClassList involves the generation of a default constructor, copy constructor, assignment operator and destructor, and -- where C++11 is in use -- even a move constructor and move assignment operator.
In the default case, since MyClassList has no additional functionality, the optimizer will most likely wipe them out.
Note: I found the "deriving from classes with a non-virtual destructor is not recommended" argument a weak one. A C++ developer should know that derivation does not necessarily imply polymorphism. A class that is never deleted through a pointer to its base doesn't need a virtual destructor, just like a class whose method is not designed to be called through a base pointer does not require that method to be virtual.
Simply put, if a destructor is not virtual, don't treat that type as polymorphic on deletion.
In this sense, destructors are no different from other virtual or non-virtual methods.
If this argument were to be considered strong, then no class that doesn't have all-virtual methods should ever be derived from!

Enforce functions to implement for template argument class?

Should I define an interface which explicitly informs the user what he/she should implement in order to use the class as a template argument, or should I let the compiler complain when the functionality is not implemented?
template <class C1, class C2>
class SomeClass
{
    // ...
};
Class C1 has to implement certain methods and operators, but the compiler won't warn until they are used. Should I rely on the compiler to warn, or should I make sure that I do the following:
class C1 : public SomeInterfaceEnforcedFunctions
{
    // Class C1 has to implement them either way,
    // but this is explicit. Am I right, or is this
    // redundant?
};
Ideally, you should use a concept to specify the requirements on the type used as a template argument. Unfortunately, neither the current nor the upcoming standard includes concepts.
Absent that, there are various methods available for enforcing such requirements. You might want to read Eric Niebler's article about how to enforce requirements on template arguments.
I'd agree with Eric's assertion that leaving it all to the compiler is generally unacceptable. It's much of the source of the horrible error messages most of us associate with templates, where seemingly trivial typos can result in pages of unreadable dreck.
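One way to surface requirements early is a compile-time check at the top of the template; here is a minimal sketch using a standard trait (the specific requirement is just an example):
#include <type_traits>

template <class C1, class C2>
class SomeClass
{
    // fail immediately, with a readable message, instead of deep inside the template
    static_assert(std::is_copy_constructible<C1>::value,
                  "C1 must be copy constructible");
    // ...
};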
If you are going to force an interface, then why use a template at all? You can simply do -
class SomeInterface // make this an interface by giving it pure virtual functions
{
public:
    virtual RType SomeFunction(Param1 p1, Param2 p2) = 0;
    /* You don't have to know how this method is implemented,
       but now you can guarantee that whoever wants to create a type
       that is a SomeInterface will have to implement SomeFunction in
       their derived class.
    */
};
followed by
template <class C2>
class SomeClass
{
    // use SomeInterface here directly.
};
Update -
A fundamental problem with this approach is that it only works for types that are written by the user. If there is a standard library type that conforms to your interface specification, or third-party code or another library (like Boost) that has classes conforming to SomeInterface, they won't work unless you wrap them in your own class, implement the interface and forward the calls appropriately. I somehow don't like my answer anymore.
Absent concepts, a for-now-abandoned concept (pun not intended, but noted) for describing which requirements a template parameter must fulfill, the requirements are only enforced implicitly. That is, if whatever your users use as a template parameter doesn't fulfill them, the code won't compile. Unfortunately, the error messages resulting from that are often quite gibberish. The only things you can do to improve matters are to
describe the requirements in your template's documentation
insert code that checks for those requirements early on in your template, before it delves so deep that the error messages your users get become unintelligible.
The latter can be quite complicated (static_assert to the rescue!) or even impossible, which is the reason concepts were considered as a core-language feature, instead of a library.
Note that it is easy to overlook a requirement this way, which will only become apparent when someone uses a type as a template parameter that won't work. However, it is at least as easy to overlook that requirements are often quite loose, and to put more into the description than what the code actually calls for.
For example, + is defined not only for numbers, but also for std::string and for any number of user-defined types. Consequently, a template add<T> might not only be used with numbers, but also with strings and an infinite number of user-defined types. Whether this is an unwanted side effect you want to suppress or a feature you want to support is up to you. All I'm saying is that it is not easy to catch this.
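To make that concrete (add is a hypothetical template):
#include <string>

template <typename T>
T add(T a, T b) { return a + b; }

int main() {
    add(1, 2);                                    // numbers: the intended use
    add(std::string("foo"), std::string("bar")); // strings: compiles just as well
}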
I don't think defining an interface in the form of an abstract base class with virtual functions is a good idea. This is run-time polymorphism, a main pillar of classic OO. If you do this, then you don't need a template; just take the base class by reference.
But then you also lose one of the main advantages of templates, which is that they are, in some ways, more flexible (try to write an add() function in classic OO which works with any type that overloads +) and faster, because the binding of the function calls takes place not at run time, but during compilation. (That brings more than it might seem at first, due to the ability to inline, which usually isn't possible with run-time polymorphism.)

Why is it not possible to pass a const set<Derived*> as const set<Base*> to a function?

Before this is marked as duplicate, I'm aware of this question, but in my case we are talking about const containers.
I have 2 classes:
class Base { };
class Derived : public Base { };
And a function:
void register_objects(const std::set<Base*> &objects) {}
I would like to invoke this function as:
std::set<Derived*> objs;
register_objects(objs);
The compiler does not accept this. Why not? The set is not modifiable, so there is no risk of non-Derived objects being inserted into it. How can I do this in the best way?
Edit:
I understand now that the compiler treats set<Base*> and set<Derived*> as totally unrelated types, and therefore the function signature is not found. My question now, however, is: why does the compiler work like this? Would there be any objection to treating const set<Derived*> as a derivative of const set<Base*>?
The reason the compiler doesn't accept this is that the standard tells it not to.
The reason the standard tells it not to is that the committee did not want to introduce a rule that const MyTemplate<Derived*> is a related type to const MyTemplate<Base*>, even though the non-const types are not related. And they certainly didn't want a special rule for std::set, since in general the language does not make special cases for library classes.
The reason the standards committee didn't want to make those types related, is that MyTemplate might not have the semantics of a container. Consider:
template <typename T>
struct MyTemplate {
    T *ptr;
};

template<>
struct MyTemplate<Derived*> {
    int a;
    void foo();
};

template<>
struct MyTemplate<Base*> {
    std::set<double> b;
    void bar();
};
Then what does it even mean to pass a const MyTemplate<Derived*> as a const MyTemplate<Base*>? The two classes have no member functions in common and aren't layout-compatible. You'd need a conversion operator between the two, or the compiler would have no idea what to do, const or not. But the way templates are defined in the standard, the compiler has no idea what to do even without the template specializations.
std::set itself could provide a conversion operator, but that would just have to make a copy(*), which you can do yourself easily enough. If there were such a thing as a std::immutable_set, then I think it would be possible to implement that such that a std::immutable_set<Base*> could be constructed from a std::immutable_set<Derived*> just by pointing to the same pImpl. Even so, strange things would happen if you had non-virtual operators overloaded in the derived class - the base container would call the base version, so the conversion might de-order the set if it had a non-default comparator that did anything with the objects themselves instead of their addresses. So the conversion would come with heavy caveats. But anyway, there isn't an immutable_set, and const is not the same thing as immutable.
Also, suppose that Derived is related to Base by virtual or multiple inheritance. Then you can't just reinterpret the address of a Derived as the address of a Base: in most implementations the implicit conversion changes the address. It follows that you can't just batch-convert a structure containing Derived* as a structure containing Base* without copying the structure. But the C++ standard actually allows this to happen for any non-POD class, not just with multiple inheritance. And Derived is non-POD, since it has a base class. So in order to support this change to std::set, the fundamentals of inheritance and struct layout would have to be altered. It's a basic limitation of the C++ language that standard containers cannot be re-interpreted in the way you want, and I'm not aware of any tricks that could make them so without reducing efficiency or portability or both. It's frustrating, but this stuff is difficult.
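A sketch of the address adjustment with multiple inheritance (class names made up for illustration):
struct Base1 { int x; };
struct Base2 { int y; };
struct Both : Base1, Base2 {};

int main() {
    Both d;
    Base2* b = &d; // the implicit conversion typically shifts the address,
    (void)b;       // so the bytes of a Both* cannot be reused as a Base2*
}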
Since your code is passing a set by value anyway, you could just make that copy:
std::set<Derived*> objs;
register_objects(std::set<Base*>(objs.begin(), objs.end()));
[Edit: you've changed your code sample not to pass by value. My code still works, and afaik is the best you can do other than refactoring the calling code to use a std::set<Base*> in the first place.]
Writing a wrapper for std::set<Base*> that ensures all elements are Derived*, the way Java generics work, is easier than arranging for the conversion you want to be efficient. So you could do something like:
template<typename T, typename U>
struct MySetWrapper {
    // Requirement: std::less is consistent. The default probably is,
    // but for all we know there are specializations which aren't.
    // User beware.
    std::set<T> content;
    void insert(U value) { content.insert(value); }
    // might need a lot more methods, and for the above to return the right
    // type, depending how else objs is used.
};
MySetWrapper<Base*,Derived*> objs;
// insert lots of values
register_objects(objs.content);
(*) Actually, I guess it could copy-on-write, which in the case of a const parameter used in the typical way would mean it never needs to do the copy. But copy-on-write is a bit discredited within STL implementations, and even if it wasn't I doubt the committee would want to mandate such a heavyweight implementation detail.
If your register_objects function receives such an argument, it can put/expect any Base subclass in there. That's what its signature says.
It's a violation of the Liskov substitution principle.
This particular problem is also referred to as covariance. In this case, where your function argument is a constant container, it could be made to work. In the case where the argument container is mutable, it can't work.
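A sketch of why the mutable case must be rejected (OtherDerived is a hypothetical second subclass):
#include <set>

class Base {};
class Derived : public Base {};
class OtherDerived : public Base {};

// if a set<Derived*> could bind to this parameter...
void register_objects(std::set<Base*>& objects) {
    objects.insert(new OtherDerived); // ...a non-Derived element could sneak in
}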
Take a look here first: Is array of derived same as array of base. In your case, a set of Derived is a totally different container from a set of Base, and since no implicit conversion is available between them, the compiler gives an error.
std::set<Base*> and std::set<Derived*> are basically two different types. Though the Base and Derived classes are linked via inheritance, at the compiler's template-instantiation level they are two different instantiations (of set).
Firstly, it seems a bit odd that you aren't passing by reference...
Secondly, as mentioned in the other post, you would be better off creating the passed-in set as a std::set<Base*> and then newing a Derived object into it for each set member.
Your problem surely arises from the fact that the two types are completely different. std::set<Derived*> is in no way inherited from std::set<Base*> as far as the compiler is concerned. They are simply two different types of set...
Well, as stated in the question you mention, set<Base*> and set<Derived*> are different types. Your register_objects() function takes a set<Base*> object, so the compiler does not know about any register_objects() that takes a set<Derived*>. The constness of the parameter does not change anything. The solutions stated in the quoted question seem the best you can do; it depends on what you need to do...
As you are aware, the two classes are quite similar once you remove the non-const operations. However, in C++ inheritance is a property of types, whereas const is a mere qualifier on top of types. That means that you can't properly state that const X derives from const Y, even when X derives from Y.
Furthermore, if X does not inherit from Y, that applies to all cv-qualified variants of X and Y as well. This extends to std::set instantiations. Since std::set<Foo> does not inherit from std::set<Bar>, std::set<Foo> const does not inherit from std::set<Bar> const either.
You are quite right that this is logically allowable, but it would require further language features. They are available in C# 4.0, if you're interested in seeing another language's way of doing it. See here: http://community.bartdesmet.net/blogs/bart/archive/2009/04/13/c-4-0-feature-focus-part-4-generic-co-and-contra-variance-for-delegate-and-interface-types.aspx
Didn't see it linked yet, so here's a bullet point in the C++ FAQ Lite related to this:
http://www.parashift.com/c++-faq-lite/proper-inheritance.html#faq-21.3
I think their Bag-of-Apples != Bag-of-Fruit analogy suits the question.

Inheriting from iterator [duplicate]

Can/Should I inherit from an STL iterator to implement my own iterator class? If not, why not?
Short answer
Many consider that the class std::iterator does not offer much compared to regular type aliases, and even obfuscates them a bit by not explicitly providing the names and relying on the order of the template parameters instead. It is deprecated in C++17 and is likely to be gone in a few years.
This means that you shouldn't use std::iterator anymore. You can read the whole post below if you're interested in the full story (there's a bit of redundancy since it has been started before the deprecation proposal).
Legacy answer
You can ignore everything below if you're not interested in history. The following fragments even contradict themselves several times.
As of today (C++11/C++14), the standard seems to imply that it isn't a good idea anymore to inherit from std::iterator to implement custom iterators. Here is a brief explanation, from N3931:
Although the Standard has made this mistake almost a dozen times, I recommend not depicting directory_iterator and recursive_directory_iterator as deriving from std::iterator, since that's a binding requirement on implementations. Instead they should be depicted as having the appropriate typedefs, and leave it up to implementers to decide how to provide them. (The difference is observable to users with is_base_of, not that they should be asking that question.)
[2014-02-08 Daniel comments and provides wording]
This issue is basically similar to the kind of solution that had been used to remove the requirement to derive from unary_function and friends as described by N3198 and I'm strongly in favour to follow that spirit here as well. I'd like to add that basically all "newer" iterator types (such as the regex related iterator) don't derive from std::iterator either.
The paper cites N3198 which itself states that it follows the deprecation discussed in N3145. The reasons for deprecating the classes that only exist to provide typedefs are given as such:
Our experience with concepts gives us confidence that it is rarely necessary to depend on specific base class-derived class relations, if availability of types and functions is sufficient. The new language tools allow us even in the absence of language-supported concepts to deduce the existence of typenames in class types, which would introduce a much weaker coupling among them. Another advantage of replacing inheritance by associated types is the fact, that this will reduce the number of cases, where ambiguities arise: This can easily happen, if a type would inherit both from unary_function and binary_function (This makes sense, if a functor is both an unary and a binary function object).
tl;dr: classes which only provide typedefs are now deemed useless. Moreover, they increase coupling when it is not needed, are more verbose, and can have unwanted side effects in some corner cases (see the previous quotation).
Update: issue 2438 from N4245 seems to actually contradict what I asserted earlier:
For LWG convenience, nine STL iterators are depicted as deriving from std::iterator to get their iterator_category/etc. typedefs. Unfortunately (and unintentionally), this also mandates the inheritance, which is observable (not just through is_base_of, but also overload resolution). This is unfortunate because it confuses users, who can be misled into thinking that their own iterators must derive from std::iterator, or that overloading functions to take std::iterator is somehow meaningful. This is also unintentional because the STL's most important iterators, the container iterators, aren't required to derive from std::iterator. (Some are even allowed to be raw pointers.) Finally, this unnecessarily constrains implementers, who may not want to derive from std::iterator. (For example, to simplify debugger views.)
To sum up, I was wrong, @aschepler was right: it can be used, but it is certainly not required -- and it isn't discouraged either. The whole "let's remove std::iterator" thing exists so that the standard does not constrain standard library implementers.
Round 3: P0174R0 proposes to deprecate std::iterator for a possible removal in the future. The proposal is already pretty good at explaining why it should be deprecated, so here we go:
The long sequence of void arguments is much less clear to the reader than simply providing the expected typedefs in the class definition itself, which is the approach taken by the current working draft, following the pattern set in C++14 where we deprecated the derivation throughout the library of functors from unary_function and binary_function.
In addition to the reduced clarity, the iterator template also lays a trap for the unwary, as in typical usage it will be a dependent base class, which means it will not be looked into during name lookup from within the class or its member functions. This leads to surprised users trying to understand why the following simple usage does not work:
#include <iterator>

template <typename T>
struct MyIterator : std::iterator<std::random_access_iterator_tag, T> {
    value_type data; // Error: value_type is not found by name lookup
    // ... implementation details elided ...
};
The reason of clarity alone was sufficient to persuade the LWG to update the standard library specification to no longer mandate the standard iterator adaptors as deriving from std::iterator, so there is no further use of this template within the standard itself. Therefore, it looks like a strong candidate for deprecation.
This is becoming a bit tiring and not everyone seems to agree, so I will let you draw your own conclusions. If the committee eventually decides that std::iterator should be deprecated, then it will make it pretty clear that you shouldn't use it anymore. Note that the follow-up paper highlights a great support for the removal of std::iterator:
Update from Jacksonville, 2016:
Poll: Deprecate iterator for C++17??
SF F  N A SA
 6 10 1 0 0
In the above poll results, SF, F, N, A and SA stand for Strongly For, For, Neutral, Against and Strongly Against.
Update from Oulu, 2016:
Poll: Still want to deprecate std::iterator?
SF F N A SA
 3 6 3 2 0
P0619R1 proposes to remove std::iterator, possibly as soon as C++20, and also proposes to enhance std::iterator_traits so that it can automatically deduce the types difference_type, pointer and reference the way std::iterator does when they're not explicitly provided.
If you mean std::iterator: yes, that's what it's for.
If you mean anything else: no, because none of the STL iterators have virtual destructors. They're not meant for inheritance, and a class inheriting from them might not be cleaned up properly.
No, one should not, because of the potential problems that might be encountered. You are probably better off using composition rather than inheritance with STL iterators.
Undefined behavior due to absence of virtual destructors:
STL containers and iterators are not meant to act as base classes, as they do not have virtual destructors.
For classes with no virtual destructor being used as base classes, the problem arises when deallocating through a pointer to the base class (delete, delete[] etc.). Since the classes don't have virtual destructors, they cannot be cleaned up properly, and the result is undefined behavior.
One might argue that there would be no need to delete an iterator polymorphically, and hence nothing wrong with deriving from STL iterators; well, there might be some other problems, like:
Inheritance may not be possible at all:
All iterator types in the standard containers are implementation defined.
For example, std::vector<T>::iterator might be just a T*. In this case, you simply cannot inherit from it.
The C++ standard has no provisions demanding that, say, std::vector<T>::iterator does not use inheritance-inhibiting techniques to prevent derivation. Thus, if you are deriving from an STL iterator, you are relying on a feature of your STL implementation that happens to allow derivation. That makes such usage non-portable.
Buggy behaviors if not implemented properly:
Consider that you are deriving from the vector iterator class like:
template <typename T>
class yourIterator : public std::vector<T>::iterator { /* ... */ };
There might be a function which operates on the vector iterators, for example:
template <typename T>
void doSomething(typename std::vector<T>::iterator to, typename std::vector<T>::iterator from);
Since yourIterator is a std::vector<T>::iterator, you can pass your iterators to doSomething(), but you will be facing the ugly problem of object slicing. doSomething() has to be implemented in a proper templated manner to avoid the problem.
Problems while using Standard Library Algorithms:
Consider you are using the derivation from the vector iterator, and then you use a standard library algorithm like std::transform().
For example:
yourIterator<int> a;
yourIterator<int> b;
// ...
std::transform(a++, b--, ...);
The postfix operator ++ returns a std::vector<T>::iterator and not a yourIterator, resulting in the wrong template being chosen.
Thus, inheriting from STL iterators is indeed possible, but only if you are ready to dig out all such problems (and many other potential ones) and address them. Personally, I won't give it the time and the effort to do so.
If you're talking about the std::iterator template, then yes, you should, but I hope you understand that it has no functionality, just a bunch of typedefs. The pro of this decision is that your iterator can be fed to the iterator_traits template.
If, on the other hand, you're talking about some specific STL iterator, like vector<T>::iterator or some other, then the answer is a resounding NO. Let alone everything else, you don't know for sure that it's actually a class (e.g. the same vector<T>::iterator could be just typedefed as T*).