Optimize a small loop of if-else branches in C++

Is it possible to remove the branches in the following loop? All iterators are from the container type std::map<type_name, T>.
record_iterator beginIter = lastLookup_;
record_iterator endIter = lastLookup_;
++endIter;
for (; endIter != end(); ++beginIter, ++endIter) {
    time_type now = beginIter->first;
    if (ts == now) {
        lastLookup_ = beginIter;
        return beginIter;
    } else if (ts > now && ts <= endIter->first) {
        lastLookup_ = beginIter;
        return endIter;
    }
}
The problem this algorithm is trying to solve is to optimize a forward lookup whose target location is assumed to be the same as, or not too far ahead of, the last looked-up location. Ideally, I keep an iterator to the last looked-up location and move forward linearly from it. But this seems to have the same performance as:
record_iterator it = sliceMap_.find(ts);
if (it != end()) {
    return it;
} else {
    return sliceMap_.upper_bound(ts);
}
I feel that the problem is the branching, so is it possible to remove the branches in this code so that I can profile the difference in speed?

There are three big problems with the first approach:
Too many comparisons inside a loop.
Using iterators on a std::map involves using std::map<>::iterator::operator++(), which is not exactly fast. Look at the implementation starting at line 62: http://gcc.gnu.org/onlinedocs/libstdc++/libstdc++-html-USERS-3.4/tree_8cc-source.html .
Advancing iterators over a std::map is a linear scan. A lookup on a map should be logarithmic.
There's also a problem with your second approach. You are searching twice.
Why don't you just use
return sliceMap_.lower_bound(ts);
This should do exactly what you want with one logarithmic search.
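To make it concrete, here is a tiny sketch (reusing record_iterator, time_type and sliceMap_ from the question; the wrapper name lookup is just for illustration) of why the single call covers both of the original branches:
// lower_bound(ts) returns an iterator to the first element whose key is not
// less than ts: an exact match if one exists, otherwise the first key greater
// than ts; that is exactly what the find()/upper_bound() pair computed.
record_iterator lookup(time_type ts)
{
    return sliceMap_.lower_bound(ts);
}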

As some people said, the first method doesn't make a lot of sense since you are doing a linear search on an ordered container. I realize the location is supposed to be near lastLookup_.
About the second method, I think a simple optimization would be eliminating the second lookup. You are doing one in record_iterator it = sliceMap_.find(ts); and another in return sliceMap_.upper_bound(ts);
EDITED:
Try out doing it this way:
record_iterator it = sliceMap_.lower_bound(ts);
return it;
What we are doing there is using lower_bound() to find the first element whose key does not compare less than ts (which includes an element with an equal key, something upper_bound() does not return).

Binary search of range in std::map slower than map::find() search of whole map

Background: I'm new to C++. I have a std::map and am trying to search for elements by key.
Problem: Performance. The map::find() function slows down when the map gets big.
Preferred approach: I often know roughly where in the map the element should be; I can provide a [first,last) range to search in. This range is always small w.r.t. the number of elements in the map. I'm interested in writing a short binary search utility function with boundary hinting.
Attempt: I stole the below function from https://en.cppreference.com/w/cpp/algorithm/lower_bound and did some rough benchmarks. This function seems to be much slower than map::find() for maps large and small, regardless of the size or position of the range hint provided. I replaced the comparison statements (it->first < value) with a comparison of random ints and the slowdown appeared to resolve, so I think the slowdown may be caused by the dereferencing of it->first.
Question: Is the dereferencing the issue? Or is there some kind of unnecessary copy/move action going on? I think I remember reading that maps don't store their element nodes sequentially in memory, so am I just getting a bunch of cache misses? What is the likely cause of the slowdown, and how would I go about fixing it?
/* @param first    Iterator pointing to the first element of the map range to search.
 * @param distance Number of map elements in the range to search.
 * @param value    Map key to search for. NOTE: Type validation is not a concern just yet.
 */
template<class ForwardIt, class T>
ForwardIt binary_search_map(ForwardIt& first, const int distance, const T& value)
{
    ForwardIt it = first;
    typename std::iterator_traits<ForwardIt>::difference_type count, step;
    count = distance;
    while (count > 0) {
        it = first;
        step = count / 2;
        std::advance(it, step);
        if (it->first < value) {
            first = ++it;
            count -= step + 1;
        }
        else if (it->first > value)
            count = step;
        else {
            first = it;
            break;
        }
    }
    return first;
}
There is a reason that std::map::find() exists. The implementation already does a binary search, since std::map is implemented as a balanced binary tree.
Your implementation of binary search is much slower because you can't take advantage of that tree structure.
To get to the middle of the map you start with std::advance: it takes the first node (somewhere at a leaf of the tree) and follows several pointers to reach what you consider to be the middle. Afterwards you again have to walk from one of those nodes to the next, again chasing a lot of pointers.
The result: in addition to a lot more looping, you get a lot of cache misses, especially when the map is large.
If you want to improve the lookups in your map, I would recommend using a different structure. When ordering isn't important, you could use std::unordered_map. When order is important, you could use a sorted std::vector<std::pair<Key, Value>>. In case you have Boost available, this already exists in a class called boost::container::flat_map.
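To illustrate the sorted-vector idea, here is a rough sketch (my own example, not code from the question) of a lookup that binary-searches contiguous storage:
#include <algorithm>
#include <utility>
#include <vector>

// "Flat map" lookup: the pairs are kept sorted by key, so the binary search
// walks contiguous memory instead of chasing tree-node pointers.
template <class Key, class Value>
const Value* flat_find(const std::vector<std::pair<Key, Value>>& data, const Key& key)
{
    auto it = std::lower_bound(data.begin(), data.end(), key,
                               [](const std::pair<Key, Value>& p, const Key& k)
                               { return p.first < k; });
    if (it != data.end() && !(key < it->first))
        return &it->second; // exact match
    return nullptr;         // not present
}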

Is there an even faster approach than swap-and-pop for erasing from std::vector?

I am asking this as the other relevant questions on SO seem to be either for older versions of the C++ standard, do not mention any form of parallelization, or are focused on keeping the ordering/indexing the same as elements are removed.
I have a vector of potentially hundreds of thousands or millions of elements (which are fairly light structures, around ~20 bytes assuming they're compacted down).
Due to other restrictions, it must be a std::vector; other containers (like std::forward_list) either would not work or would be even less optimal for other uses.
I recently swapped from the simple it = myVec.erase(it) approach to a swap-and-pop, using something like this:
for (int i = 0; i < myVec.size();) {
    // Do calculations to determine if element must be removed
    // ...
    // Remove if needed
    if (elementMustBeRemoved) {
        myVec[i] = myVec.back();
        myVec.pop_back();
    } else {
        i++;
    }
}
This works, and was a significant improvement. It cut the runtime of the method down to ~61% of what it was previously. But I would like to improve this further.
Does C++ have a method to remove many non-consecutive elements from a std::vector efficiently? Like passing a vector of indices to erase() and have C++ do some magic under the hood to minimize movement of data?
If so, I could have threads individually gather indices that must be removed in parallel, and then combine them and pass them to erase().
Take a look at the std::remove_if algorithm. You could use it like this:
auto firstToErase = std::remove_if(myVec.begin(), myVec.end(),
    [](const T& x) {
        // Do calculations to determine if element must be removed
        // ...
        return elementMustBeRemoved;
    });
myVec.erase(firstToErase, myVec.end());
cppreference says that the following code is a possible implementation of remove_if:
template<class ForwardIt, class UnaryPredicate>
ForwardIt remove_if(ForwardIt first, ForwardIt last, UnaryPredicate p)
{
    first = std::find_if(first, last, p);
    if (first != last)
        for (ForwardIt i = first; ++i != last; )
            if (!p(*i))
                *first++ = std::move(*i);
    return first;
}
Instead of swapping with the last element, it moves forward through the container, shifting the elements to keep towards the front so that the range to be erased ends up at the very end of the vector. This looks like a more cache-friendly solution and you might notice some performance improvement on a very big vector.
If you want to experiment with a parallel version, there is an overload (4) of std::remove_if which allows you to specify an execution policy.
Or, since C++20, you can type slightly less and use std::erase_if.
However, in that case you lose the option to choose an execution policy.
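For example, a sketch of the C++20 form for the vector in the question (elementMustBeRemoved stands in for whatever test the loop body performs):
std::erase_if(myVec, [](const auto& element) {
    // Do calculations to determine if element must be removed
    // ...
    return elementMustBeRemoved;
});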
Is there an even faster approach than swap-and-pop for erasing from std::vector?
Ever since C++11, the optimal removal of a single element from a vector without preserving order has been move-and-pop rather than swap-and-pop.
Does C++ have a method to remove many non-consecutive elements from a std::vector efficiently?
The remove-erase idiom (std::erase in C++20) is the most efficient that the standard provides. std::remove_if does preserve order, and if you don't care about that, then a more efficient algorithm may be possible. But the standard library does not come with an unstable remove out of the box. The algorithm goes as follows:
Find first element to be removed (a)
Find last element to not be removed (b)
Move b to a.
Repeat between a and b until iterators meet.
There is a proposal, P0048, to add such an algorithm to the standard library, and there is a demo implementation at https://github.com/WG21-SG14/SG14/blob/6c5edd5c34e1adf42e69b25ddc57c17d99224bb4/SG14/algorithm_ext.h#L84
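Here is a minimal sketch of that unstable remove, following the steps above (my own illustration, not the P0048 reference implementation):
#include <algorithm>
#include <utility>

// Unstable remove: holes are filled by moving elements in from the back,
// so nothing after the hole has to be shifted, but relative order is lost.
template <class BidirIt, class UnaryPredicate>
BidirIt unstable_remove_if(BidirIt first, BidirIt last, UnaryPredicate p)
{
    while (true) {
        first = std::find_if(first, last, p);   // (a) next element to remove
        if (first == last)
            return first;
        do {                                    // (b) last element to keep
            --last;
        } while (first != last && p(*last));
        if (first == last)
            return first;
        *first = std::move(*last);              // move b into the hole at a
        ++first;
    }
}
Used as myVec.erase(unstable_remove_if(myVec.begin(), myVec.end(), pred), myVec.end()); the kept elements end up at the front, just not in their original order.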

C++ - how to buffer calc results faster than using unordered_map

I read a lot about unordered_map not being very fast but I wonder what's the best alternative to do this:
I need to buffer calculation results for a function of an integer argument. I don't know ahead of time what range or interval will be requested. Storing in a vector with maximal resolution would cost way too much memory.
So I'm using
unordered_map<unsigned long, pair<T, long>>
Where the key is the argument of the function to be computed, the first of the pair the result of the computation of type T, and the second of the pair a version information for that computation.
The computation is carried out, and its result added to the unordered_map, only if the map does not contain the element, or contains it but with an outdated version. The lookup function looks something like this:
template<typename T> class BufferClass {
    long MyVersion;
    unordered_map<unsigned long, pair<T, long>> Buffer;
public:
    BufferClass() : MyVersion{1} {};
    T* GetIfValid(unsigned long index)
    {
        if (!Buffer.count(index)) return nullptr;
        pair<T, long> &x{Buffer.at(index)};
        if (x.second != MyVersion) return nullptr;
        return &x.first;
    }
    /* ...Functions to set elements... */
};
As you can see, I combined element validity check and retrieval in one function, so that I only need one lookup for both.
The profiler shows most of the computation time is used up in the hash function __constrain_hash related to unordered_map.
What would be the fastest way to store and retrieve values like that? The set of stored indices is expected to be non-contiguous (there will be a lot of "holes"), and the first and last index are also mostly unknown.
T will generally be a "small" data type (like double or complex).
Thanks!
Martin
In your code there can be two hash lookups in one query: one in count() and the other in at(). That is redundant; use unordered_map::find instead.
Sample code:
const auto iter = Buffer.find(index);
if (iter != Buffer.end()) // Found something, so the returned iterator is not end()
{
    if (iter->second.second != MyVersion) return nullptr; // entry exists but is outdated
    return &(iter->second.first);                         // pointer to the cached value
}
else return nullptr;
In my opinion, unordered_map is slow but not that slow; for 99.9% of use cases it is fast enough. You may want to check whether you call this function (unnecessarily) too many times. Using another, faster implementation is not free: it can bloat your code base, harm your application's compatibility with different host systems, and so on. If you think std::unordered_map is unreasonably slow, it is almost always because something went wrong somewhere in your work (either in your estimation or in your implementation).
BTW, another thing to mention: you said T is a small data type, right? Then return its value instead of a pointer to it; that is faster and safer.
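For instance, assuming C++17 is available, one way to follow that advice inside the question's BufferClass (just a sketch; GetValueIfValid is a made-up name) is to return a std::optional<T> by value:
#include <optional>

// Same lookup as GetIfValid, but the small result is returned by value.
std::optional<T> GetValueIfValid(unsigned long index) const
{
    const auto it = Buffer.find(index);
    if (it == Buffer.end() || it->second.second != MyVersion)
        return std::nullopt;     // missing or outdated entry
    return it->second.first;     // copy of the cached value
}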
One thing that strikes me as odd about your implementation is the following two lines:
if (!Buffer.count(index)) return nullptr;
pair <T,long> &x{Buffer.at(index)};
This code is checking if the key exists, then throws away the result and searches for the same key again with bounds checking to boot. I think you'll find searching once with std::unordered_map<unsigned long, std::pair<T, long>>::find and reusing the result to be preferable:
auto it = Buffer.find(index);
if (it == Buffer.end()) return nullptr;
auto& x = it->second;                       // the pair<T, long>, like x in the original code
if (x.second != MyVersion) return nullptr;
return &x.first;

Constraining remove_if on only part of a C++ list

I have a C++11 list of complex elements that are defined by a structure node_info. A node_info element, in particular, contains a field time and is inserted into the list in an ordered fashion according to its time field value. That is, the list contains various node_info elements that are time ordered. I want to remove from this list all the nodes that verify some specific condition specified by coincidence_detect, which I am currently implementing as a predicate for a remove_if operation.
Since my list can be very large (on the order of 100k -- 10M elements), and given the way I build my list, this coincidence_detect condition is only verified by a few (thousands of) elements close to the "lower" end of the list -- the one containing elements whose time value is less than some t_xv. So I thought that, to improve the speed of my code, I don't need to run remove_if through the whole list, but can restrict it to the elements whose time < t_xv.
remove_if(), however, does not seem to let the user control up to which point it iterates through the list.
My current code.
The list elements:
struct node_info {
    const char *type = "x";   // a string literal may not be bound to a non-const char*
    int ID = -1;
    double time = 0.0;
    bool spk = true;
};
The predicate/condition for remove_if:
// Remove all events occurring at t_event
class coincident_events {
    double t_event; // Event time
    bool spk;       // Spike condition
public:
    coincident_events(double time, bool spk_) : t_event(time), spk(spk_) {}
    bool operator()(node_info node_event) {
        return (node_event.time == t_event) && (node_event.spk == spk) &&
               (strcmp(node_event.type, "x") != 0);
    }
};
The actual removing from the list:
void remove_from_list(double t_event, bool spk_) {
    // Remove all events occurring at t_event
    coincident_events coincidence(t_event, spk_);
    event_heap.remove_if(coincidence);
}
Pseudo main:
int main() {
    // My list
    std::list<node_info> event_heap;
    ...
    // Populate list with elements with random time values, yet ordered in ascending order
    ...
    remove_from_list(0.5, true);
    return 1;
}
It seems that remove_if may not be ideal in this context. Should I consider instead instantiating an iterator and run an explicit for cycle as suggested for example in this post?
It seems that remove_if may not be ideal in this context. Should I consider instead instantiating an iterator and run an explicit for loop?
Yes and yes. Don't fight to use code that is preventing you from reaching your goals. Keep it simple. Loops are nothing to be ashamed of in C++.
First thing, comparing double exactly is not a good idea as you are subject to floating point errors.
You could always find the point up to which you want to search using std::lower_bound (I assume your list is properly sorted).
Then you could use the free-function algorithm std::remove_if, followed by erase, to remove the items between the iterator returned by remove_if and the one returned by lower_bound.
However, doing that you would make multiple passes over the data and you would move the stored values around, so it would affect performance.
See also: https://en.cppreference.com/w/cpp/algorithm/remove
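A sketch of that variant, reusing event_heap, t_xv and the coincidence predicate from the question (note that both std::lower_bound and std::remove_if still make linear passes over a std::list, and remove_if assigns the kept values forward):
// Only the prefix [begin, cutoff) can contain elements with time < t_xv.
auto cutoff = std::lower_bound(event_heap.begin(), event_heap.end(), t_xv,
                               [](const node_info& n, double t) { return n.time < t; });
// Compact the prefix, then erase the leftover moved-from elements before cutoff.
auto removed = std::remove_if(event_heap.begin(), cutoff, coincidence);
event_heap.erase(removed, cutoff);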
So in the end, it is probably preferable to write your own loop over the whole container and, for each element, check whether it needs to be removed. If not, then check whether you should break out of the loop.
for (auto it = event_heap.begin(); it != event_heap.end(); )
{
    if (coincidence(*it))
    {
        auto itErase = it;
        ++it;
        event_heap.erase(itErase);
    }
    else if (it->time < t_xv)
    {
        ++it;
    }
    else
    {
        break;
    }
}
As you can see, the code can easily become quite long for something that should be simple. Thus, if you need to do that kind of algorithm often, consider writing your own generic algorithm.
Also, in practice you might not need to do a complete search for the end with the first solution if you process your data in increasing time order.
Finally, you might consider using a std::set instead. It could lead to simpler and more optimized code.
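For example, a sketch with a std::multimap keyed on time instead of the std::list (the same idea works with a std::multiset and a custom comparator; this keeps the same exact-equality semantics on the double key as the original predicate):
std::multimap<double, node_info> event_heap;

void remove_from_list(double t_event, bool spk_)
{
    // All entries whose key equals t_event, located in O(log n) instead of a scan.
    auto range = event_heap.equal_range(t_event);
    for (auto it = range.first; it != range.second; )
    {
        if (it->second.spk == spk_ && strcmp(it->second.type, "x") != 0)
            it = event_heap.erase(it); // erase returns the next iterator
        else
            ++it;
    }
}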
Thanks. I used your comments and came up with this solution, which seemingly increases speed by a factor of 5-to-10.
void remove_from_list(double t_event, bool spk_) {
    coincident_events coincidence(t_event, spk_);
    for (auto it = event_heap.begin(); it != event_heap.end(); ) {
        if (t_event >= it->time) {
            if (coincidence(*it)) {
                it = event_heap.erase(it);
            }
            else
                ++it;
        }
        else
            break;
    }
}
The idea of using the iterator returned by erase (which already points to the next element, so no extra ++it is needed) was suggested by this other post. Note that in this implementation I am actually scanning all list elements up to the t_event value (in effect using t_event itself as t_xv).

Iterating over a vector in C++ [duplicate]

Take the following two lines of code:
for (int i = 0; i < some_vector.size(); i++)
{
//do stuff
}
And this:
for (some_iterator = some_vector.begin(); some_iterator != some_vector.end();
some_iterator++)
{
//do stuff
}
I'm told that the second way is preferred. Why exactly is this?
The first form is efficient only if vector.size() is a fast operation. This is true for vectors, but not for lists, for example. Also, what are you planning to do within the body of the loop? If you plan on accessing the elements as in
T elem = some_vector[i];
then you're making the assumption that the container has operator[](std::size_t) defined. Again, this is true for vector but not for other containers.
The use of iterators brings you closer to container independence. You're not making assumptions about random-access ability or a fast size() operation, only that the container has iterator capabilities.
You could enhance your code further by using standard algorithms. Depending on what it is you're trying to achieve, you may elect to use std::for_each(), std::transform() and so on. By using a standard algorithm rather than an explicit loop you're avoiding re-inventing the wheel. Your code is likely to be more efficient (given the right algorithm is chosen), correct and reusable.
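For example, the explicit loop could become the following (a sketch, with a C++11 lambda standing in for the loop body):
std::for_each(some_vector.begin(), some_vector.end(),
              [](const T& elem) {
                  // do stuff with elem
              });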
It's part of the modern C++ indoctrination process. Iterators are the only way to iterate most containers, so you use them even with vectors just to get yourself into the proper mindset. Seriously, that's the only reason I do it - I don't think I've ever replaced a vector with a different kind of container.
Wow, this is still getting downvoted after three weeks. I guess it doesn't pay to be a little tongue-in-cheek.
I think the array index is more readable. It matches the syntax used in other languages, and the syntax used for old-fashioned C arrays. It's also less verbose. Efficiency should be a wash if your compiler is any good, and there are hardly any cases where it matters anyway.
Even so, I still find myself using iterators frequently with vectors. I believe the iterator is an important concept, so I promote it whenever I can.
Because you are not tying your code to the particular implementation of the some_vector list. If you use array indices, it has to be some form of array; if you use iterators you can use that code on any list implementation.
Imagine some_vector is implemented with a linked-list. Then requesting an item in the i-th place requires i operations to be done to traverse the list of nodes. Now, if you use iterator, generally speaking, it will make its best effort to be as efficient as possible (in the case of a linked list, it will maintain a pointer to the current node and advance it in each iteration, requiring just a single operation).
So it provides two things:
Abstraction of use: you just want to iterate some elements, you don't care about how to do it
Performance
I'm going to be the devil's advocate here and not recommend iterators. The main reason why is that in all the source code I've worked on, from desktop application development to game development, I have never needed to use iterators. They have never been required, and secondly, the hidden assumptions, code mess and debugging nightmares you get with iterators make them a prime example of what not to use in any application that requires speed.
Even from a maintenance standpoint they're a mess. It's not because of them but because of all the aliasing that happens behind the scenes. How do I know that you haven't implemented your own virtual vector or array list that does something completely different from the standard? Do I know what type it is right now at runtime? Did you overload an operator? I didn't have time to check all your source code. Hell, do I even know which version of the STL you're using?
The next problem you get with iterators is leaky abstraction; there are numerous web sites that discuss this in detail.
Sorry, I still have not seen any point in iterators. They abstract the list or vector away from you, when in fact you should already know what vector or list you're dealing with; if you don't, then you're just setting yourself up for some great debugging sessions in the future.
You might want to use an iterator if you are going to add/remove items to the vector while you are iterating over it.
some_iterator = some_vector.begin();
while (some_iterator != some_vector.end())
{
    if (/* some condition */)
    {
        some_iterator = some_vector.erase(some_iterator);
        // some_iterator now positioned at the element after the deleted element
    }
    else
    {
        if (/* some other condition */)
        {
            some_iterator = some_vector.insert(some_iterator, some_new_value);
            // some_iterator now positioned at new element
        }
        ++some_iterator;
    }
}
If you were using indices you would have to shuffle items up/down in the array to handle the insertions and deletions.
Separation of Concerns
It's very nice to separate the iteration code from the 'core' concern of the loop. It's almost a design decision.
Indeed, iterating by index ties you to the implementation of the container. Asking the container for a begin and end iterator, enables the loop code for use with other container types.
Also, in the std::for_each way, you TELL the collection what to do, instead of ASKing it something about its internals.
The C++0x standard is going to introduce closures, which will make this approach much easier to use - have a look at the expressive power of e.g. Ruby's [1..6].each { |i| print i; }...
Performance
But maybe a much-overlooked issue is that using the for_each approach yields an opportunity to have the iteration parallelized - Intel's Threading Building Blocks can distribute the code block over the number of processors in the system!
Note: after discovering the algorithms library, and especially for_each, I went through two or three months of writing ridiculously small 'helper' operator structs which will drive your fellow developers crazy. After this time, I went back to a pragmatic approach - small loop bodies deserve no for_each any more :)
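For what it's worth, since C++17 the standard itself offers this parallelization point through execution policies. A sketch (whether it really runs in parallel is up to the implementation):
#include <algorithm>
#include <execution>

std::for_each(std::execution::par, some_vector.begin(), some_vector.end(),
              [](T& elem) {
                  // do stuff with elem; iterations may be spread across threads
              });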
A must read reference on iterators is the book "Extended STL".
The GoF have a tiny little paragraph in the end of the Iterator pattern, which talks about this brand of iteration; it's called an 'internal iterator'. Have a look here, too.
Because it is more object-oriented. If you are iterating with an index you are assuming:
a) that those objects are ordered
b) that those objects can be obtained by an index
c) that the index increment will hit every item
d) that that index starts at zero
With an iterator, you are saying "give me everything so I can work with it" without knowing what the underlying implementation is. (In Java, there are collections that cannot be accessed through an index)
Also, with an iterator, no need to worry about going out of bounds of the array.
Another nice thing about iterators is that they better allow you to express (and enforce) your const-preference. This example ensures that you will not be altering the vector in the midst of your loop:
for(std::vector<Foo>::const_iterator pos=foos.begin(); pos != foos.end(); ++pos)
{
// Foo & foo = *pos; // this won't compile
const Foo & foo = *pos; // this will compile
}
Aside from all of the other excellent answers... int may not be large enough for your vector. Instead, if you want to use indexing, use the size_type for your container:
for (std::vector<Foo>::size_type i = 0; i < myvector.size(); ++i)
{
Foo& this_foo = myvector[i];
// Do stuff with this_foo
}
I probably should point out you can also call
std::for_each(some_vector.begin(), some_vector.end(), &do_stuff);
STL iterators are mostly there so that the STL algorithms like sort can be container independent.
If you just want to loop over all the entries in a vector just use the index loop style.
It is less typing and easier to parse for most humans. It would be nice if C++ had a simple foreach loop without going overboard with template magic.
for( size_t i = 0; i < some_vector.size(); ++i )
{
T& rT = some_vector[i];
// now do something with rT
}
I don't think it makes much difference for a vector. I prefer to use an index myself as I consider it to be more readable, and you can do random access like jumping forward 6 items or jumping backwards if need be.
I also like to make a reference to the item inside the loop like this so there are not a lot of square brackets around the place:
for(size_t i = 0; i < myvector.size(); i++)
{
MyClass &item = myvector[i];
// Do stuff to "item".
}
Using an iterator can be good if you think you might need to replace the vector with a list at some point in the future and it also looks more stylish to the STL freaks but I can't think of any other reason.
The second form represents what you're doing more accurately. In your example, you don't care about the value of i, really - all you want is the next element in the iterator.
After having learned a little more on the subject of this answer, I realize it was a bit of an oversimplification. The difference between this loop:
for (some_iterator = some_vector.begin(); some_iterator != some_vector.end();
some_iterator++)
{
//do stuff
}
And this loop:
for (int i = 0; i < some_vector.size(); i++)
{
//do stuff
}
Is fairly minimal. In fact, the syntax of doing loops this way seems to be growing on me:
while (it != end){
//do stuff
++it;
}
Iterators do unlock some fairly powerful declarative features, and when combined with the STL algorithms library you can do some pretty cool things that are outside the scope of array index administrivia.
Indexing requires an extra multiplication. For example, for a vector<int> v, the compiler effectively turns v[i] into data_pointer + sizeof(int) * i, where data_pointer is the address of the vector's storage.
During iteration you don't need to know the number of items to be processed. You just need the item, and iterators do such things very well.
No one has mentioned yet that one advantage of indices is that they do not become invalid when you append to a contiguous container like std::vector, so you can add items to the container during iteration.
This is also possible with iterators, but you must call reserve() beforehand, and therefore need to know how many items you'll append.
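A small sketch of the difference (the push_back may reallocate, which would invalidate any iterator held across it, while the index stays meaningful):
std::vector<int> v = {1, 2, 3};
for (std::size_t i = 0; i < v.size(); ++i)
{
    if (v[i] == 2)
        v.push_back(42); // fine with indices; an iterator into v could dangle here
}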
If you have access to C++11 features, then you can also use a range-based for loop for iterating over your vector (or any other container) as follows:
for (auto &item : some_vector)
{
//do stuff
}
The benefit of this loop is that you can access elements of the vector directly via the item variable, without running the risk of messing up an index or making a mistake when dereferencing an iterator. In addition, the placeholder auto saves you from having to repeat the type of the container elements,
which brings you even closer to a container-independent solution.
Notes:
If you need the element index in your loop and operator[] exists for your container (and is fast enough for you), then better go for your first way.
A range-based for loop cannot be used to add/delete elements into/from a container. If you want to do that, then better stick to the solution given by Brian Matthews.
If you don't want to change the elements in your container, then you should use the keyword const as follows: for (auto const &item : some_vector) { ... }.
Several good points already. I have a few additional comments:
Assuming we are talking about the C++ standard library, "vector" implies a random access container that has the guarantees of a C array (random access, contiguous memory layout etc). If you had said 'some_container', many of the above answers would have been more accurate (container independence etc).
To eliminate any dependencies on compiler optimization, you could move some_vector.size() out of the loop in the indexed code, like so:
const size_t numElems = some_vector.size();
for (size_t i = 0; i < numElems; ++i)
{
    // do stuff
}
Always pre-increment iterators and treat post-increments as exceptional cases.
for (some_iterator = some_vector.begin(); some_iterator != some_vector.end(); ++some_iterator){ //do stuff }
So, assuming an indexable std::vector<>-like container, there is no good reason to prefer one over the other when going through the container sequentially. If you have to refer to older or newer element indexes frequently, then the indexed version is more appropriate.
In general, using the iterators is preferred because algorithms make use of them and behavior can be controlled (and implicitly documented) by changing the type of the iterator. Array locations can be used in place of iterators, but the syntactical difference will stick out.
I don't use iterators for the same reason I dislike foreach-statements. When having multiple inner-loops it's hard enough to keep track of global/member variables without having to remember all the local values and iterator-names as well. What I find useful is to use two sets of indices for different occasions:
for (int i = 0; i < anims.size(); i++)
    for (int j = 0; j < bones.size(); j++)
    {
        int animIndex = i;
        int boneIndex = j;
        // in relatively short code I use indices i and j
        ... animation_matrices[i][j] ...
        // in long and complicated code I use indices animIndex and boneIndex
        ... animation_matrices[animIndex][boneIndex] ...
    }
I don't even want to abbreviate things like "animation_matrices[i]" to some random "anim_matrix"-named-iterator for example, because then you can't see clearly from which array this value is originated.
If you like being close to the metal / don't trust their implementation details, don't use iterators.
If you regularly switch out one collection type for another during development, use iterators.
If you find it difficult to remember how to iterate different sorts of collections (maybe you have several types from several different external sources in use), use iterators to unify the means by which you walk over elements. This applies to say switching a linked list with an array list.
Really, that's all there is to it. It's not as if you're going to gain more brevity either way on average, and if brevity really is your goal, you can always fall back on macros.
Even better than "telling the CPU what to do" (imperative) is "telling the libraries what you want" (functional).
So instead of using loops you should learn the algorithms present in stl.
For container independence
I always use an array index because many applications of mine require something like "display thumbnail image". So I wrote something like this:
some_vector[0].left = 0;
some_vector[0].top  = 0;
for (int i = 1; i < some_vector.size(); i++)
{
    some_vector[i].left = some_vector[i-1].width + some_vector[i-1].left;
    if (i % 6 == 0)
    {
        // start a new row: drop below the previous element and reset left
        some_vector[i].top  = some_vector[i-1].top + some_vector[i-1].height;
        some_vector[i].left = 0;
    }
}
Both implementations are correct, but I would prefer the 'for' loop with an index. As we have decided to use a vector and not any other container, using indexes would be the best option. Using iterators with vectors would lose the very benefit of having the objects in contiguous memory blocks, which helps ease their access.
I felt that none of the answers here explain why I like iterators as a general concept over indexing into containers. Note that most of my experience using iterators doesn't actually come from C++ but from higher-level programming languages like Python.
The iterator interface imposes fewer requirements on consumers of your function, which allows consumers to do more with it.
If all you need is to be able to forward-iterate, the developer isn't limited to using indexable containers - they can use any class implementing operator++(T&), operator*(T) and operator!=(const T&, const T&).
#include <iostream>
template <class InputIterator>
void printAll(InputIterator begin, InputIterator end) // take the iterators by value so temporaries work
{
    for (auto current = begin; current != end; ++current) {
        std::cout << *current << "\n";
    }
}
// elsewhere...
printAll(myVector.begin(), myVector.end());
Your algorithm works for the case you need it - iterating over a vector - but it can also be useful for applications you don't necessarily anticipate:
#include <cstdint> // std::uint_fast32_t, UINT_FAST32_MAX
#include <random>
class RandomIterator
{
private:
    std::mt19937 random;
    std::uint_fast32_t current;
    std::uint_fast32_t floor;
    std::uint_fast32_t ceil;
public:
    RandomIterator(
        std::uint_fast32_t floor = 0,
        std::uint_fast32_t ceil = UINT_FAST32_MAX,
        std::uint_fast32_t seed = std::mt19937::default_seed
    ) :
        floor(floor),
        ceil(ceil)
    {
        random.seed(seed);
        ++(*this);
    }
    RandomIterator& operator++()
    {
        current = floor + (random() % (ceil - floor));
        return *this; // falling off the end of a value-returning function would be undefined behaviour
    }
    std::uint_fast32_t operator*() const
    {
        return current;
    }
    bool operator!=(const RandomIterator &that) const
    {
        return current != that.current;
    }
};
int main()
{
    // roll a 1d6 until we get a 6 and print the results
    RandomIterator firstRandom(1, 7, std::random_device()());
    RandomIterator secondRandom(6, 7);
    printAll(firstRandom, secondRandom);
    return 0;
}
Attempting to implement a square-brackets operator which does something similar to this iterator would be contrived, while the iterator implementation is relatively simple. The square-brackets operator also makes implications about the capabilities of your class - that you can index to any arbitrary point - which may be difficult or inefficient to implement.
Iterators also lend themselves to decoration. People can write iterators which take an iterator in their constructor and extend its functionality:
template<class InputIterator, typename T>
class FilterIterator
{
private:
    InputIterator internalIterator;
public:
    FilterIterator(const InputIterator &iterator):
        internalIterator(iterator)
    {
    }
    virtual bool condition(T) = 0;
    FilterIterator<InputIterator, T>& operator++()
    {
        do {
            ++(internalIterator);
        } while (!condition(*internalIterator));
        return *this;
    }
    T operator*()
    {
        // Needed for the first result
        if (!condition(*internalIterator))
            ++(*this);
        return *internalIterator;
    }
    virtual bool operator!=(const FilterIterator& that) const
    {
        return internalIterator != that.internalIterator;
    }
};
template <class InputIterator>
class EvenIterator : public FilterIterator<InputIterator, std::uint_fast32_t>
{
public:
    EvenIterator(const InputIterator &internalIterator) :
        FilterIterator<InputIterator, std::uint_fast32_t>(internalIterator)
    {
    }
    bool condition(std::uint_fast32_t n)
    {
        return !(n % 2);
    }
};
int main()
{
    // Rolls a d20 until a 20 is rolled and discards odd rolls
    EvenIterator<RandomIterator> firstRandom(RandomIterator(1, 21, std::random_device()()));
    EvenIterator<RandomIterator> secondRandom(RandomIterator(20, 21));
    printAll(firstRandom, secondRandom);
    return 0;
}
While these toys might seem mundane, it's not difficult to imagine using iterators and iterator decorators to do powerful things with a simple interface - decorating a forward-only iterator of database results with an iterator which constructs a model object from a single result, for example. These patterns enable memory-efficient iteration of infinite sets and, with a filter like the one I wrote above, potentially lazy evaluation of results.
Part of the power of C++ templates is your iterator interface, when applied to the likes of fixed-length C arrays, decays to simple and efficient pointer arithmetic, making it a truly zero-cost abstraction.