I'm writing a random access container in C++. In my code I use this (well, in my real code I use all kinds of Allocator typedefs, this is just easier to understand):
template<typename T, typename Allocator = std::allocator<T> >
class Carray {
public:
// ...
typedef T* iterator;
typedef const T* const_iterator;
// ...
};
But I could also create a separate iterator class derived from std::iterator. This would add support for the nested typedefs (it::iterator_category, it::difference_type, etc.).
Now my question: is there any overhead to using an iterator class instead of a raw pointer? If yes, how substantial is this overhead, and is it severe enough to rule out using an iterator class?
You have the iterator category, difference type, etc. available to you even with a raw pointer. You see, there is this iterator_traits<> template, which is what you should use. It is already specialized for pointers.
iterator_traits<int*>::value_type // ... etc.
//or
iterator_traits<my_custom_iterator>::value_type
If your iterator class simply wraps the pointer, there is almost certainly no overhead.
It's perfectly standard-conforming to use raw pointers as the iterators. However, some badly written code (including, as you suggest, code that tries to use nested typedefs directly instead of iterator_traits) may fail to compile. Some of the early standard libraries started with pointers for vector's iterators and changed, purely to keep such bad code working. That's really the only reason I'd think of for bothering.
BTW - if possible I'd make use of the Boost iterator support rather than deriving directly from std::iterator; a lot of subtle requirements are taken care of for you that way.
Related
Angew made a comment that a vector using a raw pointer as its iterator type was fine. That kinda threw me for a loop.
I started researching it and found that the requirement for vector iterators was only that they are "Random Access Iterators" for which it is explicitly stated that pointers qualify:
A pointer to an element of an array satisfies all requirements
Is the only reason that compilers even provide iterators to vector for debugging purposes, or is there actually a requirement I missed on vector?
§ 24.2.1
Since iterators are an abstraction of pointers, their semantics is a generalization of most of the semantics
of pointers in C++. This ensures that every function template that takes iterators works as well with
regular pointers.
So yes, using a pointer satisfies all of the requirements for a Random Access Iterator.
std::vector likely provides iterators for a few reasons:
The standard says it should.
It would be odd if containers such as std::map or std::set provided iterators while std::vector provided only a value_type* pointer. Iterators provide consistency across the containers library.
It allows for specializations of the vector type, e.g. std::vector<bool>, where a value_type* pointer would not be a valid iterator.
My 50 cents:
Iterators are generic ways to access any STL container. What I feel you're saying is: Since pointers are OK as a replacement of iterators for vectors, why are there iterators for vectors?
Well, who said you can't have duplicates in C++? Actually it's a good thing to have different interfaces to the same functionality. That should not be a problem.
On the other hand, think about libraries that have algorithms that use iterators. If vectors didn't have iterators, that would just be an invitation to exceptions (exceptions in the linguistic sense, not the programming sense). Every time someone wrote an algorithm, they would have to do something different for vectors with pointers. But why? There's no reason for this hassle. Just interface everything the same way.
What those comments are saying is that
template <typename T, ...>
class vector
{
public:
typedef T* iterator;
typedef const T* const_iterator;
...
private:
T* elems; // pointer to dynamic array
size_t count;
...
};
is valid. Similarly, a user-defined container intended for use with std:: algorithms can do the same. Then, when a template asks for Container::iterator, the type it gets back in that instantiation is T*, and that behaves properly.
So the standard requires that vector has a definition for vector::iterator, and you use that in your code. On one platform it is implemented as a pointer into an array, but on a different platform it is something else. Importantly these things behave the same way in all the aspects that the standard specifies.
I want to wrap a C library in a modern C++ way.
The library provides a way to deserialize objects from a binary string.
Its API can only move forward from the beginning of the string to the end; the part that has been processed is not kept, just like a stream.
However, it works differently from a standard stream: it does not support an extraction operator (">>") returning chars, and loops should not yield chars either. It needs an iterator that can iterate over it and return the objects it generates.
At first, I wanted to implement something like the code below:
class Obj {
    c_ptr ptr;
    // ...
};

class X {
public:
    // note: std::iterator is deprecated since C++17; spell out the
    // member typedefs instead
    class const_iterator
        : public std::iterator<std::forward_iterator_tag, Obj> {
        // ...
    };

    class iterator : public const_iterator {
        // ...
    };

    const_iterator cbegin();
    iterator begin();
    const_iterator cend();
    iterator end();
    // ...
};
or merge Obj class into iterator.
There are problems in this case.
1. As the vector iterator example shows, begin() and end() should return iterator values. But here X is a stream: I can only get the beginning once; after that, accessing the stream will not give the first byte. In iostream, end() seems to correspond to a special EOF value.
2. I suppose I cannot inherit X from istream, because istream seems to be designed for char-stream operation, with lots of machinery (overflow handling, etc.) that the wrapper does not need.
3. Having iterator inherit from const_iterator was suggested by someone to reduce duplicated code, but it seems a lot of the code still has to differ, mainly in the declarations.
Is there any best practice on this kind of container or iterator implementation in modern C++?
The iterators actually don't return index values. They are pointers to typed objects I think.
In the case of vector the iterator fulfills the requirements (properties/traits/...) of a RandomAccessIterator and that's why you can access by subscript operator.
I suggest you read up on the iterator concepts first; you might need some of them for your design: InputIterator/OutputIterator, ForwardIterator (a ForwardIterator is probably what you want to consider here), etc.
begin() always points to the beginning of the container and end() always points one past the last element. In the STL I don't know of an exception to that. Also, some operations on a container can invalidate an iterator (which I believe is likely in your use case). In your case, you first need to be clear about your use case/requirements for your stream design.
You're probably right. And it is not a good idea to extend a class and change the semantics of derived methods, etc.
It depends what you want to do with your stream. Simply iterate over elements in read-only mode? Or do you need to be able to write? Or in reverse? There are four kinds: iterator, const_iterator, reverse_iterator and const_reverse_iterator.
The best practice is the STL, though very difficult to digest. I recommend the book The C++ Programming Language where you can learn about the ideas, design, use cases, etc.
Just a little introduction, with simple words.
In C++, iterators are "things" on which you can write at least the dereference operator *it, the increment operator ++it, and for more advanced bidirectional iterators, the decrement --it, and last but not least, for random access iterators we need operator index it[] and possibly addition and subtraction.
Such "things" in C++ are objects of types with the according operator overloads, or plain and simple pointers.
std::vector<> is a container class that wraps a contiguous array, so a pointer as iterator makes sense. On the nets, and in some literature, you can find vector.begin() used as a pointer.
The rationale for using a pointer is less overhead, higher performance, especially if an optimizing compiler detects iteration and does its thing (vector instructions and stuff). Using iterators might be harder for the compiler to optimize.
Knowing this, my question is why modern STL implementations, let's say MSVC++ 2013 or libstdc++ in Mingw 4.7, use a special class for vector iterators?
You're completely correct that vector::iterator could be implemented as a simple pointer (see here); in fact, the concept of an iterator is based on that of a pointer to an array element. For other containers, such as map, list, or deque, however, a pointer won't work at all. So why is this not done? Here are three reasons why a class implementation is preferable to a raw pointer.
Implementing the iterator as a separate type allows additional functionality (beyond what is required by the standard), for example (added in edit following Quentin's comment) the possibility of adding assertions when dereferencing an iterator in debug mode.
overload resolution: If the iterator were a pointer T*, it could be passed as a valid argument to a function taking T*, while this would not be possible with an iterator type. Thus making std::vector<>::iterator a pointer in fact changes the behaviour of existing code. Consider, for example,
template<typename It>
void foo(It begin, It end);
void foo(const double*a, const double*b, size_t n=0);
std::vector<double> vec;
foo(vec.begin(), vec.end()); // which foo is called?
argument-dependent lookup (ADL; pointed out by juanchopanza): If you make an unqualified call, ADL ensures that functions in namespace std will be searched only if the arguments are types defined in namespace std. So,
std::vector<double> vec;
sort(vec.begin(), vec.end()); // calls std::sort
sort(vec.data(), vec.data()+vec.size()); // fails to compile
std::sort would not be found if vector<>::iterator were a mere pointer.
The implementation of the iterator is implementation-defined, so long as it fulfills the requirements of the standard. It could be a pointer for vector; that would work. There are several reasons for not using a pointer:
consistency with other containers.
debug and error checking support
overload resolution, class based iterators allow for overloads to work differentiating them from plain pointers
If all the iterators were pointers, then ++it on a map would not increment it to the next element, since the memory is not required to be contiguous. Beyond the contiguous memory of std::vector, most standard containers require "smarter" pointers: hence iterators.
The physical requirements of the iterator dovetail very well with the logical requirement that movement between elements is a well-defined "idiom" of iterating over them, not just moving to the next memory location.
This was one of the original design requirements and goals of the STL; the orthogonal relationship between the containers, the algorithms and connecting the two through the iterators.
Now that they are classes, you can add a whole host of error checking and sanity checks to debug code (and then remove it for more optimised release code).
Given the positive aspects class-based iterators bring, why should you, or should you not, just use pointers for std::vector iterators? Consistency. Early implementations of std::vector did indeed use plain pointers, and you can use plain pointers for vector. But once you have to use classes for the other containers' iterators, given the positives they bring, applying the same approach to vector becomes a good idea.
The rationale for using a pointer is less overhead, higher
performance, especially if an optimizing compiler detects iteration
and does its thing (vector instructions and stuff). Using iterators
might be harder for the compiler to optimize.
It might be, but it isn't. If your implementation is not utter shite, a struct wrapping a pointer will achieve the same speed.
With that in mind, it's simple to see that simple benefits like better diagnostic messages (naming the iterator instead of T*), better overload resolution, ADL, and debug checking make the struct a clear winner over the pointer. The raw pointer has no advantages.
The rationale for using a pointer is less overhead, higher
performance, especially if an optimizing compiler detects iteration
and does its thing (vector instructions and stuff). Using iterators
might be harder for the compiler to optimize.
This is the misunderstanding at the heart of the question. A well formed class implementation will have no overhead, and identical performance all because the compiler can optimize away the abstraction and treat the iterator class as just a pointer in the case of std::vector.
That said,
MSVC++ 2013 or libstdc++ in Mingw 4.7, use a special class for vector
iterators
because they consider that adding an abstraction layer, a class iterator, to define the concept of iteration over a std::vector is more beneficial than using an ordinary pointer for this purpose.
Abstractions have a different set of costs vs benefits, typically added design complexity (not necessarily related to performance or overhead) in exchange for flexibility, future proofing, hiding implementation details. The above compilers decided this added complexity is an appropriate cost to pay for the benefits of having an abstraction.
Because STL was designed with the idea that you can write something that iterates over an iterator, no matter whether that iterator's just equivalent to a pointer to an element of memory-contiguous arrays (like std::array or std::vector) or something like a linked list, a set of keys, something that gets generated on the fly on access etc.
Also, don't be fooled: in the vector case, dereferencing might (without debug options) just break down to an inlinable pointer dereference, so there wouldn't even be overhead after compilation!
I think the reason is plain and simple: originally std::vector was not required to be implemented over contiguous blocks of memory.
So the interface could not just present a pointer.
source: https://stackoverflow.com/a/849190/225186
This was fixed later and std::vector was required to be in contiguous memory, but it was probably too late to make std::vector<T>::iterator a pointer.
Maybe some code already depended on iterator to be a class/struct.
Interestingly, I found implementations of std::vector<T>::iterator where it = {}; is valid and generates a "null" iterator (just like a null pointer).
std::vector<double>::iterator it = {};
assert( &*it == nullptr );
Also, std::array<T>::iterator and std::initializer_list<T>::iterator are pointers T* in the implementations I saw.
A plain pointer as std::vector<T>::iterator would be perfectly fine in my opinion, in theory.
In practice, being a built-in type has observable effects for metaprogramming (e.g., std::vector<T>::iterator::difference_type wouldn't be valid; yes, one should have used iterator_traits).
Not being a raw pointer has the (very) marginal advantage of disallowing nullability (it == nullptr) or default constructibility, if you are into that (arguments that don't matter from a generic-programming point of view).
At the same time the dedicated class iterators had a steep cost in other metaprogramming aspects, because if ::iterator were a pointer one wouldn't need to have ad hoc methods to detect contiguous memory (see contiguous_iterator_tag in https://en.cppreference.com/w/cpp/iterator/iterator_tags) and generic code over vectors could be directly forwarded to legacy C-functions.
For this reason alone I would argue that iterator-not-being-a-pointer was a costly mistake. It just made it hard to interact with C-code (as you need another layer of functions and type detection to safely forward stuff to C).
Having said this, I think we could still make things better by allowing automatic conversions from iterators to pointers and perhaps explicit (?) conversions from pointer to vector::iterators.
I got around this pesky obstacle by dereferencing and immediately referencing the iterator again. It looks ridiculous, but it satisfies MSVC...
class Thing {
. . .
};
void handleThing(Thing* thing) {
// do stuff
}
vector<Thing> vec;
// put some elements into vec now
for (auto it = vec.begin(); it != vec.end(); ++it) {
    // handleThing(it); // this doesn't work, would have been elegant ..
    handleThing(&*it);  // this DOES work
}
In Herb Sutter's When Is a Container Not a Container?, he shows an example of taking a pointer into a container:
// Example 1: Is this code valid? safe? good?
//
vector<char> v;
// ...
char* p = &v[0];
// ... do something with *p ...
Then follows it up with an "improvement":
// Example 1(b): An improvement
// (when it's possible)
//
vector<char> v;
// ...
vector<char>::iterator i = v.begin();
// ... do something with *i ...
But doesn't really provide a convincing argument:
In general, it's not a bad guideline to prefer using iterators instead
of pointers when you want to point at an object that's inside a
container. After all, iterators are invalidated at mostly the same
times and the same ways as pointers, and one reason that iterators
exist is to provide a way to "point" at a contained object. So, if you
have a choice, prefer to use iterators into containers.
Unfortunately, you can't always get the same effect with iterators
that you can with pointers into a container. There are two main
potential drawbacks to the iterator method, and when either applies we
have to continue to use pointers:
You can't always conveniently use an iterator where you can use a pointer. (See example below.)
Using iterators might incur extra space and performance overhead, in cases where the iterator is an object and not just a bald
pointer.
In the case of a vector, the iterator is just a RandomAccessIterator. For all intents and purposes this is a thin wrapper over a pointer. One implementation even acknowledges this:
// This iterator adapter is 'normal' in the sense that it does not
// change the semantics of any of the operators of its iterator
// parameter. Its primary purpose is to convert an iterator that is
// not a class, e.g. a pointer, into an iterator that is a class.
// The _Container parameter exists solely so that different containers
// using this template can instantiate different types, even if the
// _Iterator parameter is the same.
Furthermore, the implementation stores a member value of type _Iterator, which is pointer, i.e. T*. In other words, just a pointer. Furthermore, the difference_type for such a type is std::ptrdiff_t, and the operations defined are just thin wrappers (i.e., operator++ is ++_pointer, operator* is *_pointer, and so on).
Following Sutter's argument, this iterator class provides no benefits over pointers, only drawbacks. Am I correct?
For vectors, in non-generic code, you're mostly correct.
The benefit is that you can pass a RandomAccessIterator to a whole bunch of algorithms no matter what container the iterator iterates, whether that container has contiguous storage (and thus pointer iterators) or not. It's an abstraction.
(This abstraction, among other things, allows implementations to swap out the basic pointer implementation for something a little more sexy, like range-checked iterators for debug use.)
It's generally considered to be a good habit to use iterators unless you really can't. After all, habit breeds consistency, and consistency leads to maintainability.
Iterators are also self-documenting in a way that pointers are not. What does an int* point to? No idea. What does an std::vector<int>::iterator point to? Aha…
Finally, they provide a measure of type safety: though such iterators may only be thin wrappers around pointers, they needn't be pointers. If an iterator is a distinct type rather than a type alias, then you won't accidentally pass your iterator into places you didn't want it to go, or set it to "NULL" accidentally.
I agree that Sutter's argument is about as convincing as most of his other arguments, i.e. not very.
You can't always conveniently use an iterator where you can use a pointer
That is not one of the disadvantages. Sometimes it is just too "convenient" to get the pointer passed to places where you really didn't want them to go. Having a separate type helps in validating parameters.
Some early implementations used T* for vector::iterator, but it caused various problems, like people accidentally passing an unrelated pointer to vector member functions. Or assigning NULL to the iterator.
Using iterators might incur extra space and performance overhead, in cases where the iterator is an object and not just a bald pointer.
This was written in 1999, when we also believed that code in <algorithm> should be optimized for different container types. Not much later everyone was surprised to see that the compilers figured that out themselves. The generic algorithms using iterators worked just fine!
For a std::vector there is absolutely no space or time overhead for using an iterator instead of a pointer. You found out that the iterator class is just a thin wrapper over a pointer. Compilers will also see that, and generate equivalent code.
One real-life reason to prefer iterators over pointers is that they can be implemented as checked iterators in debug builds and help you catch some nasty problems early, e.g.:
vector<int>::iterator it; // uninitialized iterator
it++;
or
for (it = vec1.begin(); it != vec2.end(); ++it) // different containers
STL containers have a reference and const_reference typedef, which, in many cases I've seen (containers of bool being the only exceptions I can think of), could be trivially defined as
typedef value_type& reference;
typedef const value_type& const_reference;
What exactly, however, are the semantics of these types?
From what I understand, they are supposed to "behave like references to the value type", but what exactly does that mean?
MSDN says that reference is:
A type that provides a reference to an element stored in a vector.
But what does this mean, exactly? Do they need to overload specific operators, or have a specific behavior? If so, what is the required behavior?
I think part of the question comes from an assumption that allocators are useful. Allocators (at least pre-C++11) were something of a late addition to the STL:
People wanted containers independent of the memory model, which was somewhat excessive because the language doesn't include memory models. People wanted the library to provide some mechanism for abstracting memory models. Earlier versions of STL assumed that the size of the container is expressible as an integer of type size_t and that the distance between two iterators is of type ptrdiff_t. And now we were told, why don't you abstract from that? It's a tall order because the language does not abstract from that; C and C++ arrays are not parameterized by these types. We invented a mechanism called "allocator," which encapsulates information about the memory model. That caused grave consequences for every component in the library. You might wonder what memory models have to do with algorithms or the container interfaces. If you cannot use things like size_t, you also cannot use things like T* because of different pointer types (T*, T huge *, etc.). Then you cannot use references because with different memory models you have different reference types. There were tremendous ramifications on the library.
Unfortunately, they turned out to be substandard:
I invented allocators to deal with Intel's memory architecture. They are not such a bad idea in theory - having a layer that encapsulates all memory stuff: pointers, references, ptrdiff_t, size_t. Unfortunately they cannot work in practice. For example,
vector<int, alloc1> a(...);
vector<int, alloc2> b(...);
you cannot now say:
find(a.begin(), a.end(), b[1]);
b[1] returns an alloc2::reference and not an int&. It could be a type mismatch. It is necessary to change the way the core language deals with references to make allocators really useful.
The reference typedef is meant to return whatever the equivalent of T& is for the allocator in question. On modern architectures, this is probably T&. However, the assumption was that on some architectures it could be something different (e.g., a compiler targeting an architecture with "near" and "far" pointers might need special syntax for "near" and "far" references). Sadly, this brilliant idea turned out to be less-than-brilliant. C++11 makes substantial changes to allocators -- adding scoped allocators -- and the memory model. I have to admit I don't know enough about C++11's changes w.r.t. allocators to say if things get better or worse.
Looking at the comments on the original question, since the Standard does not actually state how the containers must be implemented (although the Standard does put so many requirements on the behavior of the containers that it might as well ...), whatever type you typedef as reference must have the behaviors of T& that somebody could potentially rely on when implementing the container: an object of type reference should be a transparent alias to the original object, assigning to it should change the value of the original object without slicing, there is no need to support "reseating" the reference, taking the address of the reference should return the address of the original object, etc. If at all possible, it should in fact be T&; and the only case I can imagine where it wouldn't be possible would be if you were allocating memory that you couldn't manipulate through pointers or references (e.g., if the "memory" were actually on disk, or if the memory were actually allocated on a separate computer and reachable over the network through RPC calls, etc.).
Taken out of the standard:
It also defines reference as a typedef of value_type&, where value_type is a typedef of T.
(New answer as previous one was different focus)
What you're asking is: "Can I always use a reference like the value it refers to?" And yes, you can. That means a reference can do everything a value_type& can do, which is everything a value_type object can do, if that makes sense to you.
You can't overload operators on typedefs. Typedefs have the same behavior as the type they are assigned to. The reason they are typedef'd is to make them less cumbersome and to provide a common "interface".
The reason reference exists is to prevent things like this:
template<typename T>
struct foo {
T& bar();
typedef T& reference;
reference baz();
};
foo<int> x;
foo<int>::T& y = x.bar(); // error! bar() returns a T&, but T is not visible from outside
foo<int>::reference z = x.baz(); // ok!
It also makes a cleaner interface and allows use of SFINAE:
// Preferred overload: its return type names T::reference, so SFINAE
// removes it when T has no reference typedef. The int/long tag gives it
// priority when both overloads are viable.
template<typename T>
typename T::reference second_impl(T& t, int) { return t.at(1); }

// Fallback for types without a reference typedef.
template<typename T>
auto second_impl(T& t, long) -> decltype(t[1]) { return t[1]; }

template<typename T>
auto second(T& t) -> decltype(second_impl(t, 0)) { return second_impl(t, 0); }

std::vector<int> v(10);
foo f(10);     // some type with operator[] overloaded but no reference typedef
second(v) = 5; // calls the T::reference overload
second(f) = 3; // calls the fallback overload