Why not implement a contains() function in C++ containers?

My initial question was: why is the contains() function missing from C++ containers?
So I looked for an explanation, and I found something interesting about why some other functions are not implemented in all the containers (essentially because of performance issues and convenience).
I know that you can use the find function from the algorithm library, or you can just write your own function with iterators, but what I can't understand is why in set, for example, a contains-style function (there called find) is implemented, whereas in vector or queue it is not.
It's also pretty clear to me why container classes do not share a common interface like Collections do in Java (thanks to this answer), but in this case I can't find the reason for not implementing contains() in all the container classes (or at least in some, like vector).
Thank you

The reason why std::set implements its own find is that the "generic" std::find does a linear search, while std::set can do much better by searching its tree representation in O(log n) time.
std::vector and std::list, on the other hand, cannot be searched faster than in linear time, so they rely on the generic std::find.
Note that you are still allowed to apply std::find to std::set to search it linearly; it just wouldn't be as efficient as using set's own find member function.
#include <algorithm>
#include <iostream>
#include <set>
std::set<int> s {1, 2, 3, 4, 5, 6};
auto res = std::find(s.begin(), s.end(), 3); // linear search
std::cout << *res << std::endl; // prints 3

Because it's a bad design pattern. If you're repeatedly using "find" on a linear container, there's a problem with your code. The average and worst-case time complexity is still O(n), which means you've made a bad choice of container.
For example, std::map and std::unordered_map have find member functions which allow O(log n) and average O(1) lookups, respectively. This is because those containers are built for efficient item lookup via these methods: that is how they should be used.
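For instance, a minimal sketch of the intended usage (the names here are illustrative):
#include <iostream>
#include <string>
#include <unordered_map>

std::unordered_map<std::string, int> ages {{"alice", 30}, {"bob", 25}};
auto it = ages.find("alice"); // average O(1) member-function lookup
if (it != ages.end())
    std::cout << it->first << " is " << it->second << '\n';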
If you've weighed all the options and decided a linear container is the best model for your data, but you do need to find an element on rare occasions, std::find() allows you to do just that. You shouldn't rely on it, though. I view making such lookups convenient as an antipattern in Python and Java, and their inconvenience as a benefit in C++.
Just as a personal note, 3 years ago, when I was a beginner coder, I wrote if mydict in list: do_something() a lot. I thought this was a good idea, because Python makes list item membership idiomatic. I didn't know better. This led me to produce awful code, until I learned why linear searches are so inefficient compared to binary searches and hashmap lookups. A programming language or framework should enable good design patterns, and discourage bad ones. Enabling linear searches is a bad design pattern.

Other answers for some reason focus on the complexity of the generic and per-container find methods. However, they fail to answer the OP's question. The actual reason for the lack of helpful utility methods originates from the way classes from different headers are used in C++. If every container without any special lookup properties had a contains method executing a generic linear search, we would be stuck either with a situation where each container header also includes <algorithm>, or with one where each container header reimplements its own find algorithm (which is even worse). The already large document built during compilation of each translation unit, after the preprocessor includes (that is, copy-pastes) every included header, would grow even bigger; compilation would take even more time; and the rather cryptic compilation error messages might get even longer. The old principle of not making something a member function when a non-member function can do the job (and of including only the stuff you are going to use) exists for a reason.
Note that there was recently a proposal for uniform function call syntax that might allow utility methods to be mixed into classes. If it is adopted, it will probably be possible to write extension functions like this:
#include <algorithm>
#include <vector>

template< typename TContainer, typename TItem >
bool contains(TContainer const & container, TItem const & item)
{
    // Some implementation, possibly calling the container's member find,
    // or ::std::find if the container has none. A plain fallback:
    return ::std::find(container.begin(), container.end(), item)
        != container.end();
}

::std::vector< int > registry;
registry.contains(3); // false (hypothetical extension-function call)

There is a good reason that containers do not share a "common (inherited) interface" (like in Java), and it is what makes C++ generics so powerful. Why write code for every container when you can write it only once for all containers? This is one of the main principles the STL was built on.
If using containers relied on member functions from a common inherited interface you would have to write a find function for every container. That is wasteful and poor engineering. Good design says if you can write code in only one place you should because you only have to remove the bugs from that one place, and fixing a bug in one place fixes the bug everywhere.
So the philosophy behind the STL is to separate the algorithms from the containers so that you only have to write the algorithm once and it works for all containers. Once the algorithm is debugged, it is debugged for all containers.
A fly in that ointment is that some containers can make more efficient decisions due to their internal structure. For those containers a type specific function has been added which will take advantage of that efficiency.
But most functions should be separate from the containers. It is called decoupling and it reduces bugs while promoting code reuse, often much more so than polymorphism which is what libraries like Java containers use (a common inherited interface).
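To make the decoupling concrete, here is a minimal sketch of the pattern: an algorithm written once against iterators that works with any container (this contains helper is illustrative, not part of the standard library):
#include <algorithm>
#include <list>
#include <vector>

// Written once, usable with any container (and even raw arrays).
template <typename Iterator, typename T>
bool contains(Iterator first, Iterator last, const T & value)
{
    return std::find(first, last, value) != last;
}

std::vector<int> v {1, 2, 3};
std::list<int> l {4, 5, 6};
bool a = contains(v.begin(), v.end(), 2); // true
bool b = contains(l.begin(), l.end(), 9); // false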

Related

Virtual parent for containers and 'iterable' class

I was wondering if there was a reason why there's no virtual parent inheritance for all std containers.
Since C++ is OOP, I can see some reason for using external functions such as those from the algorithm header.
But I wonder why string does not have a size() function (doing the same as length) for genericity?
Also, why is there no "iterable" class from which iterable containers would inherit (because of the different types of iterator?)
This one comes from my short experience with Python. In my opinion, Python's biggest strength is how easy it is to use iterables.
EDIT:
To clarify: I know C++ is multi-paradigm, but I would like to focus on containers. We could have kept the C style, using structs and functions; why define some operations as methods and others not? For encapsulation only?
Having both free-function and OOP styles available is confusing and illogical, I think.
Why have both begin(vec) and vec.begin()?
And I'm talking about iterables, not iterators. Yes, std::iterator was a class in the standard library defining an interface for iterators, and it is now deprecated (and I understand why). So iterators are objects; why not make an iterable object (just to pack the begin and end iterators together, if not to define a virtual parent for iterable containers)?
I hope I was clearer this time. I had already thought of most of the answers given so far by myself, but I would prefer an explanation of the current logic in the standard library.
Thank you for the time taken to read and answer.
I was wondering if there was a reason why there's no virtual parent inheritance for all std containers.
There is a cost attached to this (virtual dispatch is not free!), and most people (all people?) wouldn't need it.
In C++, you don't pay for what you don't use. (In theory.)
I wonder why string does not have a size() function (doing the same as length) for genericity?
It does.
why is there no "iterable" class from which iterable containers would inherit (because of the different types of iterator?)
Because we don't need one. You claim that C++ is "OOP", but that is not true: it is multi-paradigm, and most of the standard library (particularly the parts inherited from the antique STL) is template-based, not object-oriented.
This allows us to provide iterators that fit a set of "requirements", rather than those that fit some definition of "class". For example, pointers (which do not have class type!) are a valid kind of iterator.
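For example, the same algorithm that searches a container also searches a raw array through plain pointers:
#include <algorithm>

int data[] = {4, 8, 15, 16, 23, 42};
// Pointers satisfy the RandomAccessIterator requirements.
int * found = std::find(data, data + 6, 15);
bool present = (found != data + 6); // true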
This one comes from my short experience with Python. In my opinion, Python's biggest strength is how easy it is to use iterables.
That's reasonable. But C++ is not Python.
I was wondering if there was a reason why there's no virtual parent inheritance for all std containers.
A design decision from the early stages of forming the C++ standard. There is not much reason to do it: sure, you would have a nice common interface, but this can also be achieved by keeping the standard consistent (all containers already have a common set of functions without an explicit interface).
A disadvantage of a common base is that all containers would suddenly need virtual functions, and virtual functions are not free (more expensive at runtime than normal functions).
Since C++ is OOP
It's not. C++ is a multi-paradigm language. For example, you can have functions outside of any class (unlike Java, which follows the OOP paradigm more strictly).
I can see some reason for using external functions such as those from the algorithm header.
A follow-up decision after the above. Having global functions accepting generic pair(s) of iterators allows any container (even a user-defined one) to be used with these functions.
But I wonder why string does not have a size() function (doing the same as length) for genericity?
It does have such a function, and it always has. length() is another option, provided for better readability: it's more "English" to ask about the length of a string than about its size.
Also, why is there no "iterable" class from which iterable containers would inherit (because of the different types of iterator?) This one comes from my short experience with Python. In my opinion, Python's biggest strength is how easy it is to use iterables.
There is one very good reason for that: iterators don't have to be of class type. A pointer type satisfies all of the requirements of RandomAccessIterator, but it is not a class.
C++ prefers compile-time polymorphism, suggesting you use templates instead of virtual calls. Back in the good old days, it was more efficient (virtual calls are still not free).
There is however a certain level of standardization for containers: all of them support iterators, and standardized C++20 ranges are coming as well.

Are there any C++ language obstacles that prevent adopting D ranges?

This is a C++ / D cross-over question. The D programming language has ranges that, in contrast to C++ libraries such as Boost.Range, are not based on iterator pairs. The official C++ Ranges Study Group seems to have been bogged down in nailing a technical specification.
Question: does the current C++11 or the upcoming C++14 Standard have any obstacles that prevent adopting D ranges -as well as a suitably rangefied version of <algorithm>- wholesale?
I don't know D or its ranges well enough, but they seem lazy and composable as well as capable of providing a superset of the STL's algorithms. Given their claim to success for D, it would seem very nice to have as a library for C++. I wonder how essential D's unique features (e.g. string mixins, uniform function call syntax) were for implementing its ranges, and whether C++ could mimic that without too much effort (e.g. C++14 constexpr seems quite similar to D compile-time function evaluation)
Note: I am seeking technical answers, not opinions whether D ranges are the right design to have as a C++ library.
I don't think there is any inherent technical limitation in C++ which would make it impossible to define a system of D-style ranges and corresponding algorithms in C++. The biggest language-level problem would be that C++ range-based for loops require that begin() and end() can be used on the ranges, but assuming we would go to the length of defining a library using D-style ranges, extending range-based for loops to deal with them seems a marginal change.
The main technical problem I have encountered when experimenting with algorithms on D-style ranges in C++ was that I couldn't make the algorithms as fast as my iterator (actually, cursor) based implementations. Of course, this could just be my algorithm implementations, but I haven't seen anybody provide a reasonable set of D-style range-based algorithms in C++ which I could profile against. Performance is important, and the C++ standard library shall provide, at least, weakly efficient implementations of algorithms (a generic implementation of an algorithm is called weakly efficient if it is at least as fast when applied to a data structure as a custom implementation of the same algorithm on the same data structure in the same programming language). I wasn't able to create weakly efficient algorithms based on D-style ranges, and my objective is actually strongly efficient algorithms (similar to weakly efficient, but allowing any programming language and only assuming the same underlying hardware).
When experimenting with D-style range-based algorithms I found the algorithms a lot harder to implement than iterator-based algorithms and found it necessary to deal with kludges to work around some of their limitations. Of course, not everything in the current way algorithms are specified in C++ is perfect either. A rough outline of how I want to change the algorithms and the abstractions they work with is on my STL 2.0 page. This page doesn't really deal much with ranges, however, as that is a related but somewhat different topic. I would rather envision iterator (well, really cursor) based ranges than D-style ranges, but the question wasn't about that.
One technical problem all range abstractions in C++ do face is having to deal with temporary objects in a reasonable way. For example, consider this expression:
auto result = ranges::unique(ranges::sort(std::vector<int>{ read_integers() }));
Independent of whether ranges::sort() or ranges::unique() are lazy or not, the representation of the temporary range needs to be dealt with. Merely providing a view of the source range isn't an option for either of these algorithms because the temporary object will go away at the end of the expression. One possibility could be to move the range if it comes in as an r-value, requiring a different result type for ranges::sort() and ranges::unique() to distinguish the case where the actual argument is a temporary object from the one where it is an object kept alive independently. D doesn't have this particular problem because it is garbage collected and the source range would, thus, be kept alive in either case.
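A minimal sketch of the move-if-temporary idea, with the range taken by value so that a temporary is moved in while an lvalue is copied (the ranges namespace here is hypothetical, not an actual library):
#include <algorithm>
#include <vector>

namespace ranges {
    // The result owns its data either way, so the chained temporary
    // cannot dangle at the end of the full expression.
    template <typename Container>
    Container sort(Container c) {
        std::sort(c.begin(), c.end());
        return c;
    }
}

auto sorted = ranges::sort(std::vector<int>{3, 1, 2}); // {1, 2, 3}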
The unique/sort expression above also shows one of the problems with possibly lazily evaluated algorithms: since any type, including types which can't be spelled out otherwise, can be deduced by auto variables or templated functions, there is nothing forcing the lazy evaluation at the end of an expression. Thus, the results from the expression templates can be obtained and the algorithm isn't really executed. That is, if an l-value is passed to an algorithm, it needs to be made sure that the expression is actually evaluated to obtain the actual effect. For example, any sort() algorithm mutating the entire sequence clearly does the mutation in-place (if you want a version that doesn't do it in-place, just copy the container and apply the in-place version; if you only have a non-in-place version, you can't avoid the extra sequence, which may be an immediate problem, e.g., for gigantic sequences). Assuming it is lazy in some way, the l-value access to the original sequence provides a peek into the current status, which is almost certainly a bad thing. This may imply that lazy evaluation of mutating algorithms isn't such a great idea anyway.
In any case, there are some aspects of C++ which make it impossible to immediately adopt D-style ranges, although the same considerations also apply to other range abstractions. I'd think these considerations are, thus, somewhat out of scope for the question, too. Also, the obvious "solution" to the first of the problems (add garbage collection) is unlikely to happen. I don't know if there is a solution to the second problem in D. There may emerge a solution to the second problem (tentatively dubbed operator auto), but I'm not aware of a concrete proposal or of what such a feature would actually look like.
BTW, the Ranges Study Group isn't really bogged down by any technical details. So far, we have merely tried to find out what problems we are actually trying to solve and to scope out, to some extent, the solution space. Also, groups generally don't get any work done at all! The actual work is always done by individuals, often by very few individuals. Since a major part of the work is actually designing a set of abstractions, I would expect that the foundation of any results of the Ranges Study Group will be laid by 1 to 3 individuals who have some vision of what is needed and of what it should look like.
My C++11 knowledge is much more limited than I'd like it to be, so there may be newer features which improve things that I'm not aware of yet, but there are three areas that I can think of at the moment which are at least problematic: template constraints, static if, and type introspection.
In D, a range-based function will usually have a template constraint on it indicating which type of ranges it accepts (e.g. forward range vs random-access range). For instance, here's a simplified signature for std.algorithm.sort:
auto sort(alias less = "a < b", Range)(Range r)
    if (isRandomAccessRange!Range &&
        hasSlicing!Range &&
        hasLength!Range)
{...}
It checks that the type being passed in is a random-access range, that it can be sliced, and that it has a length property. Any type which does not satisfy those requirements will not compile with sort, and when the template constraint fails, it makes it clear to the programmer why their type won't work with sort (rather than just giving a nasty compiler error from the middle of the templated function when it fails to compile with the given type).
Now, while that may just seem like a usability improvement over just giving a compilation error when sort fails to compile because the type doesn't have the right operations, it actually has a large impact on function overloading as well as type introspection. For instance, here are two of std.algorithm.find's overloads:
R find(alias pred = "a == b", R, E)(R haystack, E needle)
    if (isInputRange!R &&
        is(typeof(binaryFun!pred(haystack.front, needle)) : bool))
{...}

R1 find(alias pred = "a == b", R1, R2)(R1 haystack, R2 needle)
    if (isForwardRange!R1 && isForwardRange!R2 &&
        is(typeof(binaryFun!pred(haystack.front, needle.front)) : bool) &&
        !isRandomAccessRange!R1)
{...}
The first one accepts a needle which is only a single element, whereas the second accepts a needle which is a forward range. The two are able to have different parameter types based purely on the template constraints and can have drastically different code internally. Without something like template constraints, you can't have templated functions which are overloaded on attributes of their arguments (as opposed to being overloaded on the specific types themselves), which makes it much harder (if not impossible) to have different implementations based on the kind of range being used (e.g. input range vs forward range) or other attributes of the types being used. Some work has been done in this area in C++ with concepts and similar ideas, but AFAIK, C++ is still seriously lacking in the features necessary to overload templates (be they templated functions or templated types) based on the attributes of their argument types rather than specializing on specific argument types (as occurs with template specialization).
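For what it's worth, the closest C++11 approximation is SFINAE via std::enable_if, which can emulate overloading on attributes of a type, though far more verbosely than a D template constraint. A rough sketch (the category_t alias and the process functions are illustrative):
#include <iterator>
#include <type_traits>
#include <utility>

// Dispatch on an attribute of the type (its iterator category) rather
// than on the concrete type itself.
template <typename Range>
using category_t = typename std::iterator_traits<
    decltype(std::begin(std::declval<Range &>()))>::iterator_category;

template <typename Range>
typename std::enable_if<std::is_base_of<
    std::random_access_iterator_tag, category_t<Range>>::value>::type
process(Range & r) { /* random-access implementation */ }

template <typename Range>
typename std::enable_if<!std::is_base_of<
    std::random_access_iterator_tag, category_t<Range>>::value>::type
process(Range & r) { /* weaker-range fallback */ }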
A related feature would be static if. It's the same as if, except that its condition is evaluated at compile time, and whether it's true or false will actually determine which branch is compiled in as opposed to which branch is run. It allows you to branch code based on conditions known at compile time. e.g.
static if(isDynamicArray!T)
{}
else
{}
or
static if(isRandomAccessRange!Range)
{}
else static if(isBidirectionalRange!Range)
{}
else static if(isForwardRange!Range)
{}
else static if(isInputRange!Range)
{}
else
    static assert(0, Range.stringof ~ " is not a valid range!");
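In current C++, the closest idiom is tag dispatch, where overload resolution on an iterator-category tag picks the "branch" at compile time; a minimal sketch (my_advance mirrors how std::advance is typically implemented):
#include <iterator>

// Overload resolution on the category tag selects the "branch" that
// would otherwise sit in a static if chain.
template <typename It>
void advance_impl(It & it, int n, std::random_access_iterator_tag)
{
    it += n; // O(1) for random-access iterators
}

template <typename It>
void advance_impl(It & it, int n, std::input_iterator_tag)
{
    while (n-- > 0) ++it; // O(n) fallback for weaker iterators
}

template <typename It>
void my_advance(It & it, int n)
{
    advance_impl(it, n,
        typename std::iterator_traits<It>::iterator_category{});
}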
static if can to some extent obviate the need for template constraints, as you can essentially put the overloads for a templated function within a single function. e.g.
R find(alias pred = "a == b", R, E)(R haystack, E needle)
{
    static if (isInputRange!R &&
               is(typeof(binaryFun!pred(haystack.front, needle)) : bool))
    {...}
    else static if (isForwardRange!R && isForwardRange!E &&
                    is(typeof(binaryFun!pred(haystack.front, needle.front)) : bool) &&
                    !isRandomAccessRange!R)
    {...}
}
but that still results in nastier errors when compilation fails and actually makes it so that you can't overload the template (at least with D's implementation), because overloading is determined before the template is instantiated. So, you can use static if to specialize pieces of a template implementation, but it doesn't quite get you enough of what template constraints get you to not need template constraints (or something similar).
Rather, static if is excellent for doing stuff like specializing only a piece of your function's implementation or for making it so that a range type can properly inherit the attributes of the range type that it's wrapping. For instance, if you call std.algorithm.map on an array of integers, the resultant range can have slicing (because the source range does), whereas if you called map on a range which didn't have slicing (e.g. the ranges returned by std.algorithm.filter can't have slicing), then the resultant ranges won't have slicing. In order to do that, map uses static if to compile in opSlice only when the source range supports it. Currently, map's code that does this looks like:
static if (hasSlicing!R)
{
    static if (is(typeof(_input[ulong.max .. ulong.max])))
        private alias opSlice_t = ulong;
    else
        private alias opSlice_t = uint;

    static if (hasLength!R)
    {
        auto opSlice(opSlice_t low, opSlice_t high)
        {
            return typeof(this)(_input[low .. high]);
        }
    }
    else static if (is(typeof(_input[opSlice_t.max .. $])))
    {
        struct DollarToken{}
        enum opDollar = DollarToken.init;

        auto opSlice(opSlice_t low, DollarToken)
        {
            return typeof(this)(_input[low .. $]);
        }

        auto opSlice(opSlice_t low, opSlice_t high)
        {
            return this[low .. $].take(high - low);
        }
    }
}
This is code in the type definition of map's return type, and whether that code is compiled in or not depends entirely on the results of the static ifs, none of which could be replaced with template specializations based on specific types without having to write a new specialized template for map for every new type that you use with it (which obviously isn't tenable). In order to compile in code based on attributes of types rather than with specific types, you really need something like static if (which C++ does not currently have).
The third major item which C++ is lacking (and which I've more or less touched on throughout) is type introspection. The fact that you can do something like is(typeof(binaryFun!pred(haystack.front, needle)) : bool) or isForwardRange!Range is crucial. Without the ability to check whether a particular type has a particular set of attributes or that a particular piece of code compiles, you can't even write the conditions which template constraints and static if use. For instance, std.range.isInputRange looks something like this
template isInputRange(R)
{
    enum bool isInputRange = is(typeof(
    {
        R r = void;       // can define a range object
        if (r.empty) {}   // can test for empty
        r.popFront();     // can invoke popFront()
        auto h = r.front; // can get the front of the range
    }));
}
It checks that a particular piece of code compiles for the given type. If it does, then that type can be used as an input range. If it doesn't, then it can't. AFAIK, it's impossible to do anything even vaguely like this in C++. But to sanely implement ranges, you really need to be able to do stuff like have isInputRange or test whether a particular type compiles with sort - is(typeof(sort(myRange))). Without that, you can't specialize implementations based on what types of operations a particular range supports, you can't properly forward the attributes of a range when wrapping it (and range functions wrap their arguments in new ranges all the time), and you can't even properly protect your function against being compiled with types which won't work with it. And, of course, the results of static if and template constraints also affect the type introspection (as they affect what will and won't compile), so the three features are very much interconnected.
Really, the main reasons that ranges don't work very well in C++ are the same reasons that metaprogramming in C++ is primitive in comparison to metaprogramming in D. AFAIK, there's no reason that these features (or similar ones) couldn't be added to C++ and fix the problem, but until C++ has metaprogramming capabilities similar to those of D, ranges in C++ are going to be seriously impaired.
Other features such as mixins and Uniform Function Call Syntax would also help, but they're nowhere near as fundamental. Mixins would help primarily with reducing code duplication, and UFCS helps primarily with making it so that generic code can just call all functions as if they were member functions so that if a type happens to define a particular function (e.g. find) then that would be used instead of the more general, free function version (and the code still works if no such member function is declared, because then the free function is used). UFCS is not fundamentally required, and you could even go the opposite direction and favor free functions for everything (like C++11 did with begin and end), though to do that well, it essentially requires that the free functions be able to test for the existence of the member function and then call the member function internally rather than using their own implementations. So, again you need type introspection along with static if and/or template constraints.
As much as I love ranges, at this point, I've pretty much given up on attempting to do anything with them in C++, because the features to make them sane just aren't there. But if other folks can figure out how to do it, all the more power to them. Regardless of ranges though, I'd love to see C++ gain features such as template constraints, static if, and type introspection, because without them, metaprogramming is way less pleasant, to the point that while I do it all the time in D, I almost never do it in C++.

Most efficient way to process all items in an unknown container?

I'm doing a computation in C++ and it has to be as fast as possible (it is executed 60 times per second with possibly large data). During the computation, a certain set of items has to be processed. However, in different cases, different implementations of the item storage are optimal, so I need to use an abstract class for that.
My question is, what is the most common and most efficient way to do an action with each of the items in C++? (I don't need to change the structure of the container during that.) I have thought of two possible solutions:
Make iterators for the storage classes. (They're also mine, so I can add them.) This is common in Java, but doesn't seem very 'C++' to me:
class Iterator {
public:
    bool more() const;
    Item * next();
};
Add sort of an abstract handler, which would be overridden in the computation part and would include the code to be called on each item:
class Handler {
public:
    virtual void process(Item &item) = 0;
};
(Only a function pointer wouldn't be enough because it has to also bring some other data.)
Something completely different?
The second option seems a bit better to me, since the items could in fact be processed in a single loop without interruption, but it makes the code quite messy, as I would have to make quite a lot of derived classes. What would you suggest?
Thanks.
Edit: To be more exact, the storage data type isn't exactly just an ADT; it has means of finding only a specific subset of the elements in it based on some parameters, which I then need to process, so I can't prepare all of them in an array or something.
#include <algorithm>
Have a look at the existing containers provided by the C++ standard, and functions such as for_each.
For a comparison of C++ container iteration to interfaces in "modern" languages, see this answer of mine. The other answers have good examples of what the idiomatic C++ way looks like in practice.
Using templated functors, as the standard containers and algorithms do, will definitely give you a speed advantage over virtual dispatch (although sometimes the compiler can devirtualize calls, don't count on it).
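For example, a minimal sketch of the functor approach (Item and the accumulation logic are illustrative placeholders for your own types):
#include <algorithm>
#include <vector>

struct Item { int value; };

// A functor carrying extra state, passed to the algorithm by value;
// its call operator can be inlined, unlike a virtual process().
struct Accumulate {
    long total = 0;
    void operator()(const Item & item) { total += item.value; }
};

std::vector<Item> items {{1}, {2}, {3}};
Accumulate result = std::for_each(items.begin(), items.end(), Accumulate{});
// result.total == 6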
C++ has iterators already. It's not a particularly "Java" thing. (Note that their interface is different, though, and they're much more efficient than their Java equivalents)
As for the second approach, calling a virtual function for every element is going to hurt performance if you're worried about throughput.
If you can (pre-)sort your data so that all objects of the same type are stored consecutively, then you can select the function to call once, and then apply it to all elements of that type. Otherwise, you'll have to go through the indirection/type check of a virtual function or another mechanism to perform the appropriate action for every individual element.
What gave you the impression that iterators are not very C++-like? The standard library is full of them (see this), and includes a wide range of algorithms that can be used to effectively perform tasks on a wide range of standard container types.
If you use the STL containers you can save re-inventing the wheel and get easy access to a wide variety of pre-defined algorithms. This is almost always better than writing your own equivalent container with an ad-hoc iteration solution.
A function template perhaps:
template <typename C>
void process(C & c)
{
    typedef typename C::value_type type;
    for (type & x : c) { do_something_with(x); }
}
The iteration will use the container's iterators, which is generally as efficient as you can get.
You can specialize the template for specific containers.
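For instance, a sketch of a container-specific version (strictly an overload rather than a specialization, since function templates cannot be partially specialized; do_something_with is the same placeholder as above):
#include <list>

// Preferred by overload resolution when the argument is a std::list;
// free to use list-specific operations internally.
template <typename T>
void process(std::list<T> & c)
{
    for (T & x : c) { do_something_with(x); }
}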

Why aren't there convenience functions for set_union, etc, which take container types instead of iterators?

std::set_union and its kin take two pairs of iterators for the sets to be operated on. That's great in that it's the most flexible thing to do. However, they could very easily have provided additional convenience functions which would be more elegant for 80% of typical uses.
For instance:
template <typename ContainerType, typename OutputIterator>
OutputIterator set_union(const ContainerType & container1,
                         const ContainerType & container2,
                         OutputIterator result)
{
    return std::set_union(container1.begin(), container1.end(),
                          container2.begin(), container2.end(),
                          result);
}
would turn:
std::set_union(mathStudents.begin(), mathStudents.end(),
               physicsStudents.begin(), physicsStudents.end(),
               students.begin());
into:
std::set_union( mathStudents, physicsStudents, students.begin() );
So:
Are there convenience functions like this hiding somewhere that I just haven't found?
If not, can anyone think of a reason why it would be left out of the STL?
Is there perhaps a more full-featured set library in Boost? (I can't find one.)
I can of course always put my implementations in a utility library somewhere, but it's hard to keep such things organized so that they're used across all projects, but not conglomerated improperly.
Are there convenience functions like this hiding somewhere that I just haven't found?
Not in the standard library.
If not, can anyone think of a reason why it would be left out of the STL?
The general idea with algorithms is that they work with iterators, not containers. Containers can be modified, altered, and poked at; iterators cannot. Therefore, you know that, after executing an algorithm, it has not altered the container itself, only potentially the container's contents.
Is there perhaps a more full featured set library in boost?
Boost.Range does this. Granted, Boost.Range does more than this. Its algorithms don't take "containers"; they take iterator ranges, which STL containers happen to satisfy the conditions for. They also have lazy evaluation, which can be nice for performance.
One reason for working with iterators is of course that it is more general and works on ranges that are not containers, or just a part of a container.
Another reason is that the signatures would be mixed up. Many algorithms, like std::sort have more than one signature already:
sort(Begin, End);
sort(Begin, End, Compare);
Where the second one is for using a custom Compare when sorting on other than standard less-than.
If we now add a set of sort overloads for containers, we get these new functions:
sort(Container);
sort(Container, Compare);
Now we have the two signatures sort(Begin, End) and sort(Container, Compare) which both take two template parameters, and the compiler will have problems resolving the call.
If we change the name of one of the functions to resolve this (sort_range, sort_container?) it will not be as convenient anymore.
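A sketch of the ambiguity described above (declarations only; the names are illustrative):
#include <functional>
#include <vector>

template <typename Begin, typename End>
void sort(Begin first, End last);         // (1) iterator pair

template <typename Container, typename Compare>
void sort(Container & c, Compare cmp);    // (2) container + comparator

std::vector<int> v;
// sort(v, std::greater<int>{});  // ambiguous: (1) deduces Begin=vector,
//                                // End=greater; (2) matches just as well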
I agree, the STL should take containers instead of iterator pairs, for the following reasons:
Simpler code
Algorithms could be overloaded for specific containers, i.e., the map::find algorithm could be used instead of std::find, giving more general code
A subrange could easily be wrapped into a container, as is done in boost::range
@Bo Persson has pointed out a problem with ambiguity, and I think that's quite valid.
I think there's a historical reason that probably prevented that from ever really even being considered though.
The STL was introduced into C++ relatively late in the standardization process. Shortly after it was accepted, the committee voted against even considering any more new features for addition into C++98 (maybe even at the same meeting). By the time most people had wrapped their head around the existing STL to the point of recognizing how much convenience you could get from something like ranges instead of individual iterators, it was too late to even be considered.
Even if the committee had still been considering new features, and somebody had written a proposal to allow passing containers instead of discrete iterators, and had dealt acceptably with the potential for ambiguity, I suspect the proposal would have been rejected. Many (especially the C-oriented people) saw the STL as a huge addition to the standard library anyway. I'm reasonably certain quite a few people would have considered it completely unacceptable to add (lots) more functions/overloads/specializations just to allow passing one parameter in place of two.
Using begin and end iterators allows one to use non-container types as inputs. For example:
ContainerType students[10];
std::vector<ContainerType> physicsStudents;
std::vector<ContainerType> allStudents;
std::set_union(physicsStudents.begin(), physicsStudents.end(),
               students, students + 10,
               std::back_inserter(allStudents));
Since they are such simple implementations, I think it makes sense not to add them to the standard library and to let authors write their own. Especially given that they are templates, adding convenience overloads across the standard library could potentially increase library size and lead to code bloat.

Large-scale usage of Meyers' advice to prefer non-member, non-friend functions?

For some time I've been designing my class interfaces to be minimal, preferring namespace-wrapped non-member functions over member functions, essentially following Scott Meyers' advice in the article How Non-Member Functions Improve Encapsulation.
I've been doing this with good effect in a few small scale projects, but I'm wondering how well it works on a larger scale. Are there any large, well regarded open-source C++ projects that I can take a look at and perhaps reference where this advice is strongly followed?
Update: Thanks for all the input, but I'm not really interested in opinion so much as finding out how well it works in practice on a larger scale. Nick's answer is closest in this regard, but I'd like to be able to see the code. Any sort of detailed description of practical experiences (positives, negatives, practical considerations, etc) would be acceptable as well.
I do this quite a bit on the projects I work on; the largest of these at my current company is around 2M lines, but it's not open source, so I can't provide it as a reference. However, I will say that I agree with the advice, generally speaking. The more you can separate functionality which is not strictly contained in just one object from that object, the better your design will be.
By way of an example, consider the classic polymorphism example: a Shape base class with subclasses, and a virtual Draw() function. In the real world, Draw() would need to take some drawing context, and potentially be aware of the state of other things being drawn, or of the application in general. Once you put all that into each subclass implementation of Draw(), you're likely to have some code overlap, or most of your actual Draw() logic will be in the base class, or somewhere else. Then consider that if you want to re-use some of that code, you'll need to provide more entry points into the interface, and possibly pollute the functions with other code not related to drawing shapes (e.g., multi-shape drawing correlation logic). Before long, it'll be a mess, and you'll wish you had a draw function which took a Shape (and context, and other data) instead, and that Shape just had functions/data which were entirely encapsulated and did not use or reference external objects.
Anyway, that's my experience/advice, for what it's worth.
I'd argue that the benefit of non-member functions increases as the size of the project increases. The standard library containers, iterators, and algorithms library are proof of this.
If you can decouple algorithms from data structures (or, to phrase it another way, if you can decouple what you do with objects from how their internal state is manipulated), you can decrease coupling between your classes and take greater advantage of generic code.
Scott Meyers isn't the only author who has argued in favor of this principle; Herb Sutter has too, especially in Monoliths Unstrung, which ends with the guideline:
Where possible, prefer writing functions as nonmember nonfriends.
I think one of the best examples of an unnecessary member function from that article is std::basic_string::find; there is no reason for it to exist, really, as std::find provides exactly the same functionality.
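To illustrate (noting the results differ in kind: the member returns an index, the algorithm an iterator):
#include <algorithm>
#include <string>

std::string s = "hello";
std::size_t pos = s.find('l');                 // member: index 2
auto it = std::find(s.begin(), s.end(), 'l');  // generic: iterator to 'l'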
The OpenCV library does this. It has a cv::Mat class that represents a matrix (or image), and then all the other functions live in the cv namespace.
OpenCV library is huge and is widely regarded in its field.
One practical advantage of writing functions as nonmember nonfriends is that doing so can significantly reduce the time it takes to thoroughly test and verify the code.
Consider, for example, the sequence container member functions insert and push_back. There are at least two approaches to implementing push_back:
It can simply call insert (its behavior is defined in terms of insert anyway)
It can do all the work that insert would do (possibly calling private helper functions) without actually calling insert
Obviously, when implementing a sequence container, you probably want to use the first approach. push_back is just a special form of insert and (to the best of my knowledge) you can't really get any performance benefit by implementing push_back some other way (at least not for list, deque, or vector).
However, to thoroughly test such a container, you have to test push_back separately: since push_back is a member function, it can modify any and all of the internal state of the container. From a testing standpoint, you should (must?) assume that push_back is implemented using the second approach because it is possible that it could be implemented using the second approach. There is no guarantee that it is implemented in terms of insert.
If push_back is implemented as a nonmember nonfriend, it can't touch any of the internal state of the container; it must use the first approach. When you write tests for it, you know that it can't break the internal state of the container (assuming the actual container member functions are implemented correctly). You can use that knowledge to significantly reduce the number of tests that you need to write to fully exercise the code.
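A minimal sketch of the nonmember version, which by construction can only go through the container's public interface:
#include <utility>

// Can only use Container's public members; it cannot corrupt internal
// state that insert itself keeps consistent.
template <typename Container, typename T>
void push_back(Container & c, T && value)
{
    c.insert(c.end(), std::forward<T>(value));
}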
(I don't have time to write this up nicely, the following's a 5-minute brain dump which doubtless can be ripped apart at various trivial levels, but please address the concepts and general thrust.)
I have considerable sympathy for the position taken by Jonathan Grynspan, but want to say a bit more about it than can reasonably be done in comments.
First - a "well said" to Alf Steinbach, who chipped in with "It's only over-simplified caricatures of their viewpoints that might seem to be in conflict. For what it's worth I don't agree with Scott Meyers on this matter; as I see it he's over-generalizing here, or he was."
Scott, Herb etc. were making these points when few people understood the trade-offs or alternatives, and they did so with disproportionate strength. Some nagging hassles people had during the evolution of code were analysed, and a new design approach addressing those issues was rationally derived. Let's return to the question of whether there were downsides later, but first, it is worth saying that the pain in question was typically small and infrequent: non-member functions are just one small aspect of designing reusable code, and in the enterprise-scale systems I've worked on, simply writing the same kind of code you'd have put into a member function as a non-member is rarely enough to make the non-member reusable. It's pretty rare for such functions to even express algorithms that are both complex enough to be worth reusing and yet not tightly bound to the specifics of the class they were designed for, it being weird enough that it's practically inconceivable some other class will happen along supporting the same operations and semantics. Often, you also need to turn arguments into template parameters, or introduce a base class to abstract the set of operations required. Both have significant implications in terms of performance, being inline vs out-of-line, and client-code recompilation.
That said, there are often fewer code changes and less impact study required when changing an implementation if operations have been implemented in terms of a public interface, and being a non-friend non-member systematically enforces that. Occasionally, though, it makes the initial implementation more verbose or in some other way less desirable and maintainable.
But, as a litmus test: how many of these non-member functions sit in the same header as the only class for which they're currently applicable? How many want to abstract their arguments via templates (which means inlining and compilation dependencies) or base classes (virtual function overheads) to allow reuse? Both discourage people from seeing them as reusable, but when that's not the case, the operations available on a class are delocalised, which can frustrate developers' perception of a system: the developer often has to work out for themselves the rather disappointing fact that "oh, that will only work for class X".
Bottom line: most member functions aren't potentially reusable. Much corporate code isn't broken into clean algorithm versus data with potential for reuse of the former. That kind of division just isn't required or useful or conceivably useful 20 years down the road. It's much the same as get/set methods - they're needed at certain API boundaries, but can constitute needless verbosity when ownership and use of the code is localised.
Personally, I don't have an all or nothing approach to this, but decide what to make a member function or non-member based on whether there's any likely benefit to either, potential reusability versus locality of interface.
I also do this a lot, where it seems to make sense, and it causes absolutely no problems with scaling (although my current project is only 40,000 LOC). In fact, I think it makes the code more scalable: it slims down classes and reduces dependencies.
It sometimes requires you to refactor your functions to make them independent of members of the class, thereby often creating a library of more general helper functions which you can easily reuse elsewhere. I'd also mention that one of the common problems with many large projects is the bloating of classes, and I think preferring non-member, non-friend functions also helps here.
Prefer non-member non-friend functions for encapsulation, UNLESS you want implicit conversions to work when calling non-member functions of class templates (in which case you'd better make them friend functions):
That is, if you have a class template type<T>:
template<class T>
struct type {
    friend void foo(type<T> a) {}
};
and a type implicitly convertible to type<T>, e.g.:
template<class T>
struct convertible_to_type {
    operator type<T>() { return {}; }
};
The following works as expected:
auto t = convertible_to_type<int>{};
foo(t); // t is converted to type<int>
However, if you make foo a non-friend function:
template<class T>
void foo(type<T> a) {}
then the following doesn't work:
auto t = convertible_to_type<int>{};
foo(t); // FAILS: cannot deduce type T for type
Since T cannot be deduced, the function foo is removed from the overload resolution set; that is, no function is found, which means that the implicit conversion does not trigger.