BOOST_FOREACH versus for loop - C++

I would like to have your advice regarding the usage of BOOST_FOREACH.
I have read in various places that it is not really recommended in terms of performance, since it is a very heavy header.
Moreover, it forces the use of "break" and "continue" statements, since you can't really have an exit condition driven by a boolean, and I've always been told that "break" and "continue" should be avoided when possible.
Of course, the advantage is that you are not dealing directly with iterators, which eases the task of iterating through a container.
What do you think about it?
Do you think that, if used, it should be adopted systematically to guarantee homogeneity in a project, or is its use recommended only under certain circumstances?

I would say C++11 range-based for loops supersede it. This is the equivalent of the classic BOOST_FOREACH example:
std::string hello( "Hello, world!" );
for (auto c : hello)
{
    std::cout << c;
}
I never found I needed to use it in C++03.
Note that when using the range-based loop over containers with expensive-to-copy elements, or in a generic context, it is best to take a const& to those elements:
SomeContainerType<SomeType> v = ....;
for (const auto& elem : v)
{
    std::cout << elem << " ";
}
Similarly, if you need to modify the elements of the container, then use a non-const & (auto& elem : v).
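For instance, a minimal sketch of both forms (made-up data, assuming C++11):
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v = {1, 2, 3};

    for (auto& elem : v)        // non-const reference: modifies elements in place
        elem *= 2;

    for (const auto& elem : v)  // const reference: read-only, no copies
        std::cout << elem << " ";  // prints: 2 4 6
}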

In programming, clarity trumps everything. I've always used BOOST_FOREACH in C++03 and found it much more readable than the hand-written loop; the header size won't kill you. As @juanchopanza rightly noted, this question is of course obsolete in C++11.
Your concerns about break and continue are unfounded and probably counterproductive. With the traditionally long for-loop headers of C++03, people tend not to read the loop header and to overlook any condition variables hiding there. Better to make your intent explicit with break and continue.
If you have decided to use boost foreach, use it systematically. It is supposed to be used to replace the bread-and-butter loops, after all.

I just replaced a use of BOOST_FOREACH with a simple for loop and got a 50% speedup, so I would say it is definitely not always the best thing to use.
You will also not get a loop counter (e.g. "i") which sometimes you actually need. Personally I'm not a fan but YMMV if it suits your style better.
BTW, a "heavy header" won't affect the performance of your program, only its compilation time.

Related

C++ Counting Map

Recently I was dealing with what I am sure is a very common problem, which essentially boils down to the following:
Given a long text, calculate the frequency of each word occurring in the text.
I was able to solve this problem using std::unordered_map. This, however, turned out quite ugly: for every word in the text that had already been encountered, I had to do a find, an erase, and then a re-insert into the map with the value incremented.
I realise there are other ways of doing this, such as using a hashing function on top of a vanilla array/vector and incrementing the value there, but I was wondering if there was a more elegant way of solving this problem, like an STL component or function that would have a similar interface to Python's Counter collection.
I know that, C++ being C++, I can't really expect such high-level concepts to always be implemented for me, but I was just wondering whether you know about anything (or at least your Googling skills are superior to mine) that could make my code a little nicer.
I'm not quite sure why an std::unordered_map (or just std::map) would involve much complexity. I'd write the code something like this:
std::unordered_map<std::string, int> words;
std::string word;
while (input >> word)  // 'input' is any std::istream
    ++words[word];
There's no need for any kind of find/erase/reinsert.
In case it's not clear how/why this works: operator[] will create an entry for a value if none exists yet in the map. The associated value will be a value-initialized object of the specified type, which will be zero in the case of an int (or similar). We then increment that every time we encounter the word.
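Put together, a minimal complete sketch of this approach (assuming words arrive whitespace-delimited on std::cin; adapt the tokenization to taste):
#include <iostream>
#include <string>
#include <unordered_map>

int main()
{
    std::unordered_map<std::string, int> words;
    std::string word;
    while (std::cin >> word)  // read whitespace-delimited words
        ++words[word];        // operator[] value-initializes a missing entry to 0

    for (const auto& entry : words)
        std::cout << entry.first << ": " << entry.second << "\n";
}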
An alternative solution:
std::multiset<std::string> m;
for (const auto& w : words)  // 'words' here is any range containing the words of the text
    m.insert(w);
m.count("some word");
The advantage is that you don't have to rely on the 'trick' with operator[], making the code more readable.
EDIT: As Kerrek pointed out in the comments, this solution is slower. multiset stores all the elements you insert, even if they are deemed equal (they might still differ in some aspect that operator== does not check). This causes a significant overhead compared to unordered_map<std::string, int>, which only has to store each word once.
(As a side note, processing the complete works of William Shakespeare using the map solution takes about 0.33s on my machine, as opposed to 0.78s for the multiset solution.)

Is the stability of std::remove and std::remove_if a design fail?

Recently (from one SO comment) I learned that std::remove and std::remove_if are stable. Am I wrong to think this is a terrible design choice, since it prevents certain optimizations?
Imagine removing the first and fifth elements of a 1M-element std::vector. Because of stability, we can't implement remove with swaps. Instead we must shift every remaining element. :(
If we weren't limited by stability, we could (for random-access and bidirectional iterators) practically have two iterators, one from the front and a second from the back, and then use swap to bring the to-be-removed items to the end. I'm sure smart people could maybe do even better. My question is about the general issue, not about the specific optimization I describe.
EDIT: please note that C++ advertises the zero-overhead principle, and also that there are both std::sort and std::stable_sort algorithms.
EDIT2:
The optimization would be something like the following.
For remove_if:
bad_iter looks from the beginning for elements for which the predicate returns true.
good_iter looks from the end for elements for which the predicate returns false.
When both have found what they're looking for, they swap their elements. The loop terminates when good_iter and bad_iter meet.
If it helps, think of it like the partitioning pass in quicksort, except that instead of comparing elements to a pivot, we use the above predicate.
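A minimal sketch of that idea (my own naming; no such algorithm exists in the standard library):
#include <utility>  // std::move

// Unstable remove_if for bidirectional iterators: fills each removed slot
// with an element taken from the back instead of shifting everything left,
// so it performs at most min(k, n - k) moves for k removed elements.
template <typename BidiIt, typename Pred>
BidiIt unstable_remove_if(BidiIt first, BidiIt last, Pred pred)
{
    while (true)
    {
        // bad_iter: scan from the front for an element to remove.
        while (first != last && !pred(*first))
            ++first;
        if (first == last)
            break;
        // good_iter: scan from the back for an element to keep.
        do {
            --last;
        } while (first != last && pred(*last));
        if (first == last)
            break;
        *first = std::move(*last);  // overwrite the removed slot
        ++first;
    }
    return first;  // new logical end, as with std::remove_if
}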
EDIT3: I played around and tried to find the worst case (worst case for remove_if: notice how rarely the predicate returns true) and I got this:
#include <vector>
#include <string>
#include <iostream>
#include <map>
#include <algorithm>
#include <cassert>
#include <chrono>
#include <memory>
using namespace std;
int main()
{
    vector<string> vsp;
    int n;
    cin >> n;
    for (int i = 0; i < n; ++i)
    {
        string s = "123456";
        s.push_back('a' + (rand() % 26));
        vsp.push_back(s);
    }
    auto vsp2 = vsp;

    auto remove_start = std::chrono::high_resolution_clock::now();
    auto it = remove_if(begin(vsp), end(vsp), [](const string& s){ return s < "123456b"; });
    vsp.erase(it, vsp.end());
    cout << vsp.size() << endl;
    auto remove_end = std::chrono::high_resolution_clock::now();
    cout << "erase-remove: " << chrono::duration_cast<std::chrono::milliseconds>(remove_end - remove_start).count() << " milliseconds\n";

    auto partition_start = std::chrono::high_resolution_clock::now();
    auto it2 = partition(begin(vsp2), end(vsp2), [](const string& s){ return s >= "123456b"; });
    vsp2.erase(it2, vsp2.end());
    cout << vsp2.size() << endl;
    auto partition_end = std::chrono::high_resolution_clock::now();
    cout << "partition-remove: " << chrono::duration_cast<std::chrono::milliseconds>(partition_end - partition_start).count() << " milliseconds\n";
}
C:\STL\MinGW>g++ test_int.cpp -O2 && a.exe
12345678
11870995
erase-remove: 1426 milliseconds
11870995
partition-remove: 658 milliseconds
For other usages, partition is a bit faster, the same, or slower. Color me puzzled. :D
I assume you're asking about a hypothetical definition of stable_remove to be what remove currently is, and remove to be implemented however the implementer thinks is best to give the correct values in any order, with the expectation that implementers will be able to improve on just doing exactly the same as stable_remove.
In practice, the library can't easily do this optimization. It depends on the data, and you don't want to spend too long working out how many elements will be removed before deciding how to remove each one. For example, you could do an extra pass to count them, but there are plenty of cases where that extra pass is inefficient. Just because an unstable remove is faster than a stable one for certain cases doesn't necessarily mean that an adaptive algorithm choosing between the two is a good bet.
I think the difference between remove and sort is that sorting is known to be a complicated problem with a lot of different solutions and trade-offs and tweaks. All "simple" sort algorithms are slow on average. Most standard algorithms are pretty simple, and remove is one of them but sort is not. I don't think it makes a lot of sense therefore to define stable_remove and remove as separate standard functions.
Edit: your edit with my tweak (similar to std::partition but no need to keep the values on the right) seems pretty reasonable to me. It requires a bidirectional iterator, but there is precedent in the standard for algorithms that behave differently on different iterator categories, such as std::distance. So it would be possible for the standard to define unstable_remove that only requires a forward iterator, but does your thing if it gets a bidi iterator. The standard probably wouldn't lay out the algorithm, but it could have a phrase like "if the iterator is bidirectional, does at most min(k, n-k) moves where k is the number of elements removed", which would in effect force it. But note that the standard doesn't currently say how many moves remove_if does, so I reckon that pinning this down simply wasn't a priority.
There is of course nothing stopping you from implementing your own unstable_remove.
If we accept that the standard didn't need to specify an unstable remove, the question then comes down to whether the function it does define should have been called stable_remove, anticipating a future remove that behaves differently for bidi iterators, and might behave differently for forward iterators if some clever heuristic for doing an unstable remove ever becomes well enough known to be worth a standard function. I'd say not: it is not a disaster if the names of standard functions aren't completely regular. It could have been pretty disruptive to remove the guarantee of stability from the STL's remove_if. Then the question becomes, "why didn't the STL call it stable_remove_if", to which I can only answer that in addition to all the points made in all the answers, the STL design process was a sight quicker than the standardization process.
stable_remove would also open a can of worms regarding other standard functions that could in theory have unstable versions. For a particularly silly example: should copy be called stable_copy, just in case some implementation exists on which it's demonstrably faster to reverse the order of elements while copying? Should copy be called copy_forward, so that the implementation can choose which of copy_backward and copy_forward is called by copy, according to which is faster? Part of the committee's job is to draw a line somewhere.
I think realistically the current standard is sensible, and it would be sensible to separately define a stable_remove and a remove_with_some_other_constraints, but remove_in_some_unspecified_way just doesn't give the same opportunity for optimization that sort_in_some_unspecified_way does. Introsort was invented in 1997, just as C++ was being standardized, but I don't imagine the research effort around remove is quite what it was and is around sort. I may be wrong; optimizing remove might be the next big thing, and if so the committee has missed a trick.
std::remove is specified to work with forward iterators.
The approach of working with a pair of iterators, one from the beginning and one from the end, would either increase the requirements on the iterators (and thus decrease the utility of the function) or violate/worsen the asymptotic complexity guarantees.
To answer my own question >3 years later :)
Yes it was a "fail".
There is a proposal D0041R0 that would add unstable_remove.
One could argue that just because there is a proposal to add std::unstable_remove, it does not mean that std::remove was a mistake, but I disagree. :)

'for' loop vs Qt's 'foreach' in C++

Which is better (or faster), a C++ for loop or the foreach keyword provided by Qt? For example, given the container
QList<QString> listofstrings;
Which is better?
foreach (QString str, listofstrings)
{
    // code
}
or
int count = listofstrings.count();
QString str;
for (int i = 0; i < count; i++)
{
    str = listofstrings.at(i);
    // code
}
It really doesn't matter in most cases.
The large number of questions on StackOverflow regarding whether this method or that method is faster belies the fact that, in the vast majority of cases, code spends most of its time sitting around waiting for users to do something.
If you are really concerned, profile it for yourself and act on what you find.
But I think you'll most likely find that only in the most intense data-processing-heavy work does this question matter. The difference may well be only a couple of seconds, and even then only when processing huge numbers of elements.
Get your code working first. Then get it working fast (and only if you find an actual performance issue).
Time spent optimising before you've finished the functionality and can properly profile is mostly wasted.
First off, I'd just like to say I agree with Pax, and that the speed probably doesn't enter into it. foreach wins hands down based on readability, and that's enough in 98% of cases.
But of course the Qt guys have looked into it and actually done some profiling:
http://blog.qt.io/blog/2009/01/23/iterating-efficiently/
The main lesson to take away from that is: use const references in read-only loops, as it avoids the creation of temporary instances. It also makes the purpose of the loop more explicit, regardless of the looping method you use.
It really doesn't matter. Odds are, if your program is slow, this isn't the problem. However, it should be noted that you aren't making a completely equal comparison. Qt's foreach is more similar to this (this example will use QList<QString>):
for (QList<QString>::iterator it = Con.begin(); it != Con.end(); ++it) {
    QString& str = *it;
    // your code here
}
The macro is able to do this by using some compiler extensions (like GCC's __typeof__) to get the type of the container passed. Note that Boost's BOOST_FOREACH is very similar in concept.
The reason why your example isn't fair is that your non-Qt version is adding extra work.
You are indexing instead of really iterating. If you are using a type with non-contiguous allocation (I suspect this might be the case with QList<>), then indexing will be more expensive since the code has to calculate "where" the n-th item is.
That being said, it still doesn't matter. The timing difference between those two pieces of code will be negligible, if it exists at all. Don't waste your time worrying about it. Write whichever you find more clear and understandable.
EDIT: As a bonus, these days I strongly favor the C++11 version of container iteration; it is clean, concise and simple:
for (QString& s : Con) {
    // your code here
}
Since Qt 5.7, the foreach macro is deprecated; Qt encourages you to use the C++11 range-based for instead.
http://doc.qt.io/qt-5/qtglobal.html#foreach
(More details about the difference here: https://www.kdab.com/goodbye-q_foreach/)
I don't want to answer the question which is faster, but I do want to say which is better.
The biggest problem with Qt's foreach is the fact that it takes a copy of your container before iterating over it. You could say 'this doesn't matter because Qt classes are refcounted', but because a copy is used, you don't actually change your original container at all.
In summary, Qt's foreach can only be used for read-only loops and thus should be avoided. Qt will happily let you write a foreach loop which you think will update/modify your container but in the end all changes are thrown away.
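To illustrate (a small sketch with made-up values, assuming a Qt build where the foreach macro is still available):
QList<QString> list;
list << "a" << "b";

foreach (QString s, list)  // the macro iterates over a copy of 'list', and 's' is itself a copy
    s.append("!");         // modifies that copy only

// 'list' still contains "a" and "b"; the appended text is thrown away.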
First, I completely agree with the answer that "it doesn't matter". Pick the cleanest solution, and optimize if it becomes a problem.
But another way to look at it is that often the fastest solution is the one that describes your intent most accurately. In this case, Qt's foreach says that you'd like to apply some action to each element in the container.
A plain for loop says that you'd like a counter i. You want to repeatedly add one to this value i, and as long as it is less than the number of elements in the container, you would like to perform some action.
In other words, the plain for loop overspecifies the problem. It adds a lot of requirements that aren't actually part of what you're trying to do. You don't care about the loop counter. But as soon as you write a for loop, it has to be there.
On the other hand, the QT people have made no additional promises that may affect performance. They simply guarantee to iterate through the container and apply an action to each.
In other words, often the cleanest and most elegant solution is also the fastest.
The foreach from Qt has a clearer syntax than the for loop IMHO, so it's better in that sense. Performance-wise I doubt there's anything in it.
You could consider using BOOST_FOREACH instead; it is a well-thought-out fancy for loop, and it's portable (and presumably will make its way into C++ some day, so it is future proof too).
A benchmark, and its results, on this can be found at http://richelbilderbeek.nl/CppExerciseAddOneAnswer.htm
IMHO (and in the opinion of many others here), it (that is, speed) does not matter.
But feel free to draw your own conclusions.
For small collections, it shouldn't matter, and foreach tends to be clearer.
However, for larger collections, for will begin to beat foreach at some point (assuming that the at() operator is efficient).
If this is really important (and I'm assuming it is since you are asking) then the best thing to do is measure it. A profiler should do the trick, or you could build a test version with some instrumentation.
You might look at the STL's for_each function. I don't know whether it will be faster than the two options you present, but it is more standardized than the Qt foreach and avoids some of the problems that you may run into with a regular for loop (namely out of bounds indexing and difficulties with translating the loop to a different data structure).
I would expect foreach to be nominally faster in some cases, and about the same in others, except in cases where the items are an actual array, in which case the performance difference is negligible.
If it is implemented on top of an enumerator, it may be more efficient than straight indexing, depending on the implementation. It's unlikely to be less efficient. For example, if someone exposed a balanced tree as both indexable and enumerable, then foreach would be decently faster. This is because each index lookup has to independently find the referenced item, while an enumerator has the context of the current node and can navigate more efficiently to the next one.
If you have an actual array, then it depends on the implementation of the language and class whether foreach will be faster or the same as for.
If indexing is a literal memory offset (such as in C++), then for should be marginally faster, since you're avoiding a function call. If indexing is an indirection just like a call, then it should be the same.
All that being said... I find it hard to find a case for generalization here. This is the last sort of optimization you should be looking for, even if there is a performance problem in your application. If you have a performance problem that can be solved by changing how you iterate, you don't really have a performance problem. You have a BUG, because someone wrote either a really crappy iterator, or a really crappy indexer.

Should I use the algorithm or hand-code it in this case?

Ok, someone tell me which would be better. I need to |= the elements of one vector with another. That is, I want to
void orTogether(vector<char>& v1, const vector<char>& v2)
{
    vector<char>::iterator i = v1.begin();
    vector<char>::const_iterator j = v2.begin();
    for (; i != v1.end(); ++i, ++j)
        *i |= *j;
}
I can't use for_each because I need to process two collections. I suppose I could do something like
struct BitwiseOr
{
    char operator()(char a, char b) const { return a | b; }
};

void orTogether2(vector<char>& v1, const vector<char>& v2)
{
    transform(v1.begin(), v1.end(), v2.begin(),
              v1.begin(), BitwiseOr());
}
Is this a more efficient solution, even though the top one works in place while the bottom one assigns? This is right in the middle of a processing loop and I need the fastest code possible.
Edit: Added the (obvious?) code for BitwiseOr. Also, I'm getting a lot of comments on unrelated things like checking the length of v2 and changing the names. This is just an example; the real code is more complicated.
Well, I profiled both. orTogether2 is much faster than orTogether, so I'll be going with the transform method. I was surprised, orTogether2 was about 4 times faster in MSVC9 release mode. I ran it twice, changing the order the second time to make sure it wasn't some sort of cache issue, but same results. Thanks for the help everyone.
The bottom one will compile to effectively the same code as the first; your OR functor is going to be inlined for sure. And the second idiom is more flexible if you ever need to add flexibility or debugging frameworks or whatever.
Since there's no benefit to the first, use the transform method. Once you get into that habit you'll stop even considering the explicit loop for all your code, since it's unnecessary. The only advantage of the first method is that it's easier to explain to beginner C++ programmers who are more comfortable with raw C.
Grab your watch and measure
There really isn't much to choose between them until we get to see how the functor is implemented. Why don't you run some tests?
The problem with transform here is that you need to have a separate functor, defined at namespace scope (far away from where it is used). Lambdas solve this problem; you may want to take a look at the Boost Lambda Library.
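For instance, with a C++11 lambda the same transform call might look like this (a sketch; orTogether3 is just an illustrative name):
#include <algorithm>
#include <vector>

void orTogether3(std::vector<char>& v1, const std::vector<char>& v2)
{
    // The loop body sits right at the call site, so no separate
    // namespace-scope functor is needed.
    std::transform(v1.begin(), v1.end(), v2.begin(), v1.begin(),
                   [](char a, char b) { return static_cast<char>(a | b); });
}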
The algorithms provided by the STL can be assumed to:
be correct
be implemented reasonably fast
have an execution complexity guarantee (where one is specified by the standard)
Therefore you can't go wrong using them. In fact, STL implementors may even be able to use faster implementations for some problems, based on template specialization.
Are you pre-optimizing? Do you know that one is faster than the other? I suspect they are rather close to being the same.
That said, I would use the transform. Let the library, which is well tested, do the heavy lifting. Keep your code simple and clean. You may not know exactly what is happening inside a for_each, but you can trust the library designers to have done a good job.

Should one prefer STL algorithms over hand-rolled loops?

I seem to be seeing more 'for' loops over iterators in questions and answers here than I do for_each(), transform(), and the like. Scott Meyers suggests that STL algorithms are preferred, or at least he did in 2001. Of course, using them often means moving the loop body into a function or function object. Some may feel this is an unacceptable complication, while others may feel it better breaks down the problem.
So... should STL algorithms be preferred over hand-rolled loops?
It depends on:
Whether high-performance is required
The readability of the loop
Whether the algorithm is complex
If the loop isn't the bottleneck, and the algorithm is simple (like for_each), then for the current C++ standard, I'd prefer a hand-rolled loop for readability. (Locality of logic is key.)
However, now that C++0x/C++11 is supported by some major compilers, I'd say use STL algorithms because they now allow lambda expressions — and thus the locality of the logic.
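For instance, a minimal sketch of that locality with a C++11 lambda (made-up data):
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> values = {1, 2, 3};

    // The loop body sits inline in the algorithm call, keeping the
    // logic local, just as in a hand-rolled loop.
    std::for_each(values.begin(), values.end(),
                  [](int& v) { v += 10; });

    for (int v : values)
        std::cout << v << " ";  // prints: 11 12 13
}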
I’m going to go against the grain here and advocate that using STL algorithms with functors makes code much easier to understand and maintain, but you have to do it right. You have to pay more attention to readability and clarity. In particular, you have to get the naming right. But when you do, you can end up with cleaner, clearer code, and a paradigm shift toward more powerful coding techniques.
Let’s take an example. Here we have a group of children, and we want to set their “Foo Count” to some value. The standard for-loop, iterator approach is:
for (vector<Child>::iterator iter = children.begin();
     iter != children.end();
     ++iter)
{
    iter->setFooCount(n);
}
Which, yeah, it’s pretty clear, and definitely not bad code. You can figure it out with just a little bit of looking at it. But look at what we can do with an appropriate functor:
for_each(children.begin(), children.end(), SetFooCount(n));
Wow, that says exactly what we need. You don’t have to figure it out; you immediately know that it’s setting the “Foo Count” of every child. (It would be even clearer if we didn’t need the .begin() / .end() nonsense, but you can’t have everything, and they didn’t consult me when making the STL.)
Granted, you do need to define this magical functor, SetFooCount, but its definition is pretty boilerplate:
class SetFooCount
{
public:
    SetFooCount(int n) : fooCount(n) {}

    void operator()(Child& child)
    {
        child.setFooCount(fooCount);
    }

private:
    int fooCount;
};
In total it’s more code, and you have to look at another place to find out exactly what SetFooCount is doing. But because we named it well, 99% of the time we don’t have to look at the code for SetFooCount. We assume it does what it says, and we only have to look at the for_each line.
What I really like is that using the algorithms leads to a paradigm shift. Instead of thinking of a list as a collection of objects, and doing things to every element of the list, you think of the list as a first class entity, and you operate directly on the list itself. The for-loop iterates through the list, calling a member function on each element to set the Foo Count. Instead, I am doing one command, which sets the Foo Count of every element in the list. It’s subtle, but when you look at the forest instead of the trees, you gain more power.
So with a little thought and careful naming, we can use the STL algorithms to make cleaner, clearer code, and start thinking on a less granular level.
std::for_each is the kind of code that made me curse the STL, years ago.
I cannot say if it's better, but I much prefer to have the code of my loop under the loop preamble. For me, it is a strong requirement, and the std::for_each construct won't allow me that (strangely enough, the foreach versions of Java and C# are fine as far as I am concerned... so I guess it confirms that, for me, the locality of the loop body is very, very important).
So I'll use for_each only if there is already a readable/understandable algorithm usable with it. If not, I won't. But this is a matter of taste, I guess, as I should perhaps try harder to understand and learn to parse all this...
Note that the people at Boost apparently felt somewhat the same way, for they wrote BOOST_FOREACH:
#include <string>
#include <iostream>
#include <boost/foreach.hpp>
int main()
{
    std::string hello( "Hello, world!" );
    BOOST_FOREACH( char ch, hello )
    {
        std::cout << ch;
    }
    return 0;
}
See: http://www.boost.org/doc/libs/1_35_0/doc/html/foreach.html
That's really the one thing that Scott Meyers got wrong.
If there is an actual algorithm that matches what you need to do, then of course use the algorithm.
But if all you need to do is loop through a collection and do something to each item, just write the normal loop instead of trying to separate the code out into a separate functor; that just ends up dicing code into bits without any real gain.
There are some other options, like boost::bind or boost::lambda, but those are really complex template metaprogramming things; they do not work very well with debugging and stepping through the code, so they should generally be avoided.
As others have mentioned, this will all change when lambda expressions become a first class citizen.
The for loop is imperative; the algorithms are declarative. When you write std::max_element, it’s obvious what you want; when you use a loop to achieve the same, it’s not necessarily so.
Algorithms can also have a slight performance edge. For example, when traversing an std::deque, a specialized algorithm can avoid redundantly checking whether a given increment moves the pointer over a chunk boundary.
However, complicated functor expressions quickly render algorithm invocations unreadable. If an explicit loop is more readable, use it. If an algorithm call can be expressed without ten-storey bind expressions, by all means prefer it. Readability is more important than performance here, because this kind of optimization is what Knuth so famously attributes to Hoare; you’ll be able to use another construct without trouble once you realize it’s a bottleneck.
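To illustrate the declarative/imperative contrast above, a small sketch with made-up data:
#include <algorithm>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v = {3, 1, 4, 1, 5};

    // Declarative: the name states the intent outright.
    int m1 = *std::max_element(v.begin(), v.end());

    // Imperative: the reader must deduce "maximum" from the mechanics.
    int m2 = v[0];
    for (std::size_t i = 1; i < v.size(); ++i)
        if (v[i] > m2)
            m2 = v[i];

    std::cout << m1 << " " << m2 << "\n";  // prints: 5 5
}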
It depends: if the algorithm doesn't take a functor, then always use the std algorithm version. It's both simpler for you to write and clearer.
For algorithms that take functors, generally no, until C++0x lambdas can be used. If the functor is small and the algorithm is complex (most aren't), then it may still be better to use the std algorithm.
I'm a big fan of the STL algorithms in principle, but in practice it's just way too cumbersome. By the time you define your functor/predicate classes, a two-line for loop can turn into 40+ lines of code that is suddenly 10x harder to figure out.
Thankfully, things are going to get a ton easier in C++0x with lambda functions, auto and the new for syntax. Check out this C++0x overview on Wikipedia.
I wouldn't use a hard and fast rule for it. There are many factors to consider, like how often you perform that certain operation in your code, whether it is just a loop or an "actual" algorithm, and whether the algorithm depends on a lot of context that you would have to transmit to your function.
For example, I wouldn't put something like
for (std::size_t i = 0; i < some_vector.size(); i++)
    if (some_vector[i] == NULL) some_other_vector[i]++;
into an algorithm, because it would result in a lot more code percentage-wise, and I would have to somehow make some_other_vector known to the algorithm.
There are a lot of other examples where using STL algorithms makes a lot of sense, but you need to decide on a case by case basis.
I think the STL algorithm interface is sub-optimal and should be avoided, because using the STL toolkit directly (for algorithms) might give a very small gain in performance, but will definitely cost readability, maintainability, and even a bit of writability while you're learning how to use the tools.
How much more efficient is a standard for loop over a vector:
int weighted_sum = 0;
for (std::size_t i = 0; i < a_vector.size(); ++i) {
    weighted_sum += (i + 1) * a_vector[i]; // Just writing something a little nontrivial.
}
than using a for_each construction, or trying to fit this into a call to accumulate?
You could argue that the iteration process is less efficient, but a for_each also introduces a function call at each step (which might be mitigated by trying to inline the function, but remember that "inline" is only a suggestion to the compiler, and it may ignore it).
In any case, the difference is small. In my experience, over 90% of the code you write is not performance-critical, but it is coder-time-critical. By keeping your STL loop all literally inline, it is very readable. There is less indirection to trip over, for yourself or future maintainers. If it's in your style guide, then you're saving some learning time for your coders (admit it, learning to properly use the STL the first time involves a few gotcha moments). This last bit is what I mean by a cost in writability.
Of course there are some special cases -- for example, you might actually want that for_each function separated to re-use in several other places. Or, it might be one of those few highly performance-critical sections. But these are special cases -- exceptions rather than the rule.
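For reference, here is roughly how the accumulate version of the weighted sum above might look (a sketch assuming C++11; note that the loop counter has to be smuggled in through captured state, which rather supports the point about fit):
#include <iostream>
#include <numeric>
#include <vector>

int main()
{
    std::vector<int> a_vector = {10, 20, 30};

    int i = 0;  // the counter survives, just less visibly
    int weighted_sum = std::accumulate(a_vector.begin(), a_vector.end(), 0,
        [&i](int acc, int x) { return acc + (++i) * x; });

    std::cout << weighted_sum << "\n";  // 1*10 + 2*20 + 3*30 = 140
}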
IMO, a lot of standard library algorithms like std::for_each should be avoided - mainly for the lack-of-lambda issues mentioned by others, but also because there's such a thing as inappropriate hiding of details.
Of course hiding details away in functions and classes is all part of abstraction, and in general a library abstraction is better than reinventing the wheel. But a key skill with abstraction is knowing when to do it - and when not to do it. Excessive abstraction can damage readability, maintainability etc. Good judgement comes with experience, not from inflexible rules - though you must learn the rules before you learn to break them, of course.
OTOH, it's worth considering the fact that a lot of programmers have been using C++ (and before that, C, Pascal etc) for a long time. Old habits die hard, and there is this thing called cognitive dissonance which often leads to excuses and rationalisations. Don't jump to conclusions, though - it's at least as likely that the standards guys are guilty of post-decisional dissonance.
I think a big factor is the developer's comfort level.
It's probably true that using transform or for_each is the right thing to do, but it's not any more efficient, and handwritten loops aren't inherently dangerous. It might take a developer half an hour to write a simple loop, versus half a day to get the syntax for transform or for_each right and to move the provided code into a function or function object. And then other developers would need to know what was going on.
A new developer would probably be best served by learning to use transform and for_each rather than handmade loops, since he would be able to use them consistently without error. For the rest of us for whom writing loops is second nature, it's probably best to stick with what we know, and get more familiar with the algorithms in our spare time.
Put it this way -- if I told my boss I had spent the day converting handmade loops into for_each and transform calls, I doubt he'd be very pleased.