Performance with value semantics - C++

I am very concerned about the performance and readability of my code, and I get most of my ideas from Chandler Carruth of Google. I would like to apply the following rules for clean C++ code without losing any performance.
Pass all built-in types by value
Pass all objects that you don't want to mutate by const reference
Pass all objects that your function needs to consume by value
Ban everything else. In corner cases, pass a pointer.
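In signatures, those rules would look something like this (Widget is just a placeholder type for illustration):
struct Widget { /* ... */ };

void f(int x);           // built-in type: by value
void g(const Widget& w); // object we don't want to mutate: by const reference
void h(Widget w);        // object the function consumes: by value
void k(Widget* out);     // corner case: by pointer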
That way, functions don't have side effects. That's a must for code readability, and it makes C++ kind of functional. Now comes performance. What can you do if you want to write a function that adds 1 to every element of a std::vector? Here is my solution.
std::vector<int> add_one(std::vector<int> v) {
    for (std::size_t k = 0; k < v.size(); ++k) {
        v[k] += 1;
    }
    return v;
}
...
v = add_one(std::move(v));
...
I find this very elegant, and it makes only two moves. Here are my questions:
Is it legal C++11?
Do you see any drawback with this design?
Couldn't a compiler automatically transform v = f(v) into this? A kind of copy elision.
PS: People ask me why I don't like to pass by reference. I have two arguments:
1 - It does not make explicit at the call site which argument might get mutated.
2 - It sometimes kills performance. References and pointers are hell for the compiler because of aliasing. Let's take the following code:
std::array<double, 2> fval(const std::array<double, 2>& v) {
    std::array<double, 2> ans;
    ans[0] = cos(v[0] + v[1]);
    ans[1] = sin(v[0] + v[1]);
    return ans;
}
The same code written to take ans as a reference is two times slower:
void fref(const std::array<double, 2>& v,
          std::array<double, 2>& ans) {
    ans[0] = cos(v[0] + v[1]);
    ans[1] = sin(v[0] + v[1]);
}
Pointer aliasing prevents the compiler from computing sin and cos with a single machine instruction (GCC does not currently do that optimization, but icpc does it with value semantics).
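A common workaround in the by-reference version, sketched here, is to read the inputs into a local before writing through the output reference; that removes the aliasing between the loads from v and the stores to ans, so the compiler may again be able to combine the computation:
void fref_fixed(const std::array<double, 2>& v,
                std::array<double, 2>& ans) {
    const double sum = v[0] + v[1]; // loaded once, before any store to ans
    ans[0] = cos(sum);
    ans[1] = sin(sum);
}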

It appears to be legal C++11 to me. Drawbacks may be opinion-based, so I won't address that. As for your third point: the compiler could only do that transformation if it can prove that the function doesn't alias v in any way. Compiler writers may therefore have elected not to implement such an optimization even for the simple cases they can analyze, leaving that burden (guaranteeing no aliasing) on the programmer.
However, also consider: when a function resists being written in a clear version with obvious performance characteristics, perhaps you're writing the wrong function. Instead, write an algorithm that operates on a range, like the standard library does, and the problem goes away:
template <typename Iterator>
void add_one(Iterator first, Iterator last)
{
    for (; first != last; ++first)
    {
        (*first) += 1;
    }
}
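The call site then makes the mutation explicit, with no moves or copies at all; a minimal usage sketch:
std::vector<int> v = {1, 2, 3};
add_one(v.begin(), v.end()); // v is now {2, 3, 4}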

Related

Extend std::vector with range checking and signed size_type

I would like to use a class with the same functionality as std::vector, but
Replace std::vector<T>::size_type with some signed integer (like int64_t or simply int) instead of the usual size_t. It is very annoying to see warnings produced by the compiler for comparisons between signed and unsigned numbers when I use the standard vector interface. I can't just disable such warnings, because they really help to catch programming errors.
put assert(0 <= i && i < size()); inside operator[](int i) to check for out-of-range errors. As I understand it, this will be a better option than calling .at(), because I can disable assertions in release builds, so performance will be the same as in the standard implementation of the vector. It is almost impossible for me to use std::vector without manually checking the range before each operation, because operator[] is the source of almost all weird errors related to memory access.
The possible options that come to my mind are to
Inherit from std::vector. It is not a good idea, as said in the following question: Extending the C++ Standard Library by inheritance?.
Use composition (put std::vector inside my class) and repeat the whole interface of std::vector. This option forces me to track the current C++ standard, because the interfaces of some methods and iterators differ slightly across C++98, 11, 14 and 17. I would like to be sure that when C++20 becomes available, I can simply use it without reimplementing the whole interface of my vector.
An answer more to the underlying problem, read from the comments:
For example, I don't know how to write in a ranged-based for way:
for (int i = a.size() - 2; i >= 0; i--) { a[i] = 2 * a[i+1]; }
You may change it to a generic one like this:
#include <iostream>
#include <vector>

std::vector<int> vec1{ 1, 2, 3, 4, 5, 6 };
std::vector<int> vec2 = vec1;

int main()
{
    // generic code
    for ( auto it = vec1.rbegin() + 1; it != vec1.rend(); it++ )
    {
        *it = 2 * *(it - 1);
    }
    // your code
    for (int i = vec2.size() - 2; i >= 0; i--)
    {
        vec2[i] = 2 * vec2[i + 1];
    }
    for ( auto& el : vec1) { std::cout << el << std::endl; }
    for ( auto& el : vec2) { std::cout << el << std::endl; }
}
Range-based for is not used here because it cannot access elements relative to the current position.
Regarding point 1: we hardly ever get those warnings here, because we use the vector's size_type where appropriate and/or cast to it if needed (with a 'checked' cast like boost::numeric_cast for safety). Is that not an option for you? Otherwise, write a function to do it for you; the non-const version would be something like:
template<class T>
T& ati(std::vector<T>& v, std::int64_t i)
{
    // checked_cast stands in for e.g. boost::numeric_cast
    return v.at(checked_cast<typename std::vector<T>::size_type>(i));
}
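A sketch of how that might be used, assuming a checked_cast such as boost::numeric_cast is available:
std::vector<double> v(10);
std::int64_t i = 3;
ati(v, i) = 1.5; // bounds-checked access with a signed index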
And yes, inheriting is still a problem. And even if it weren't you'd break the definition of vector (and the Liskov substitution principle I guess), because the size_type is defined as
an unsigned integral type that can represent any non-negative value of difference_type
So it's down to composition, or a bunch of free functions for accessing with a signed size_type and a range check. Personally I'd go for the latter: less work, as easy to use, and you can still pass your vector to functions taking vectors without problems.
(This is more a comment than a real answer, but has some code, so...)
For the second part (range checking at runtime), a third option would be to use some macro trick:
#ifdef DEBUG
#define VECTOR_AT(v,i) v.at(i)
#else
#define VECTOR_AT(v,i) v[i]
#endif
This can be used this way:
std::vector<sometype> vect(somesize);
VECTOR_AT(vect,i) = somevalue;
Of course, this requires editing your code in a quite non-standard way, which may not be an option. But it does the job.

C++: Vector bounds

I am coming from Java and learning C++ at the moment. I am using Stroustrup's Programming: Principles and Practice Using C++. I am working with vectors now. On page 117 he says that accessing a non-existent element of a vector will cause a runtime error (same as in Java, index out of bounds). I am using the MinGW compiler, and when I compile and run this code:
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v(6);
    v[8] = 10;
    std::cout << v[8];
    return 0;
}
It gives me 10 as output. Even more interestingly, if I do not modify the non-existent vector element (I just print it, expecting a runtime error or at least a default value), it prints some large integer. So... is Stroustrup wrong, or does GCC have some strange way of compiling C++?
The book is a bit vague. It's not so much a "runtime error" as it is undefined behaviour which manifests at runtime. This means that anything could happen. But the error is strictly with you, not with the program execution, and it is in fact impossible, and not meaningful, to even talk about the execution of a program with undefined behaviour.
There is nothing in C++ that protects you against programming errors, quite unlike in Java.
As #sftrabbit says, std::vector has an alternative interface, .at(), which always gives a correct program (though it may throw exceptions), and consequently one which one can reason about.
Let me repeat the point with an example, because I believe this is an important fundamental aspect of C++. Suppose we're reading an integer from the user:
int read_int()
{
    std::cout << "Please enter a number: ";
    int n;
    return (std::cin >> n) ? n : 18;
}
Now consider the following three programs:
The dangerous one: The correctness of this program depends on the user input! It is not necessarily incorrect, but it is unsafe (to the point where I would call it broken).
int main()
{
    int n = read_int();
    int k = read_int();
    std::vector<int> v(n);
    return v[k];
}
Unconditionally correct: No matter what the user enters, we know how this program behaves.
int main() try
{
    int n = read_int();
    int k = read_int();
    std::vector<int> v(n);
    return v.at(k);
}
catch (...)
{
    return 0;
}
The sane one: The above version with .at() is awkward. Better to check and provide feedback. Because we perform dynamic checking, the unchecked vector access is actually guaranteed to be fine.
int main()
{
    int n = read_int();
    if (n <= 0) { std::cout << "Bad container size!\n"; return 0; }
    int k = read_int();
    if (k < 0 || k >= n) { std::cout << "Bad index!\n"; return 0; }
    std::vector<int> v(n);
    return v[k];
}
(We're ignoring the possibility that the vector construction might throw an exception of its own.)
The moral is that many operations in C++ are unsafe and only conditionally correct, but it is expected of the programmer that you make the necessary checks ahead of time. The language doesn't do it for you, and so you don't pay for it, but you have to remember to do it. The idea is that you need to handle the error conditions anyway, and so rather than enforcing an expensive, non-specific operation at the library or language level, the responsibility is left to the programmer, who is in a better position to integrate the checking into the code that needs to be written anyway.
If I wanted to be facetious, I would contrast this approach to Python, which allows you to write incredibly short and correct programs, without any user-written error handling at all. The flip side is that any attempt to use such a program that deviates only slightly from what the programmer intended leaves you with a non-specific, hard-to-read exception and stack trace and little guidance on what you should have done better. You're not forced to write any error handling, and often no error handling ends up being written. (I can't quite contrast C++ with Java, because while Java is generally safe, I have yet to see a short Java program.)
This is a valuable comment by #Evgeny Sergeev that I promote to the answer:
For GCC, you can compile with -D_GLIBCXX_DEBUG to replace standard containers with safe implementations. More recently, this also seems to work with std::array. More info here: gcc.gnu.org/onlinedocs/libstdc++/manual/debug_mode.html
I would add that it is also possible to use individual "safe" versions of vector and other utility classes via the __gnu_debug:: namespace prefix rather than std::.
In other words, do not re-invent the wheel, array checks are available at least with GCC.
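For example, simply compiling a test build with g++ -D_GLIBCXX_DEBUG main.cpp turns an out-of-range operator[] into an immediate debug-mode diagnostic instead of silent undefined behaviour.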
C and C++ do not always do bounds checks. It MAY cause a runtime error. And if you were to overshoot your index by enough, say 10000 or so, it's almost certain to cause a problem.
You can also use vector.at(10), which definitely should give you an exception.
see:
http://www.cplusplus.com/reference/vector/vector/at/
compared with:
http://www.cplusplus.com/reference/vector/vector/operator%5B%5D/
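To illustrate the difference, a minimal sketch: at() throws std::out_of_range on a bad index, which you can catch, while operator[] on the same index is undefined behaviour:
#include <iostream>
#include <stdexcept>
#include <vector>

int main()
{
    std::vector<int> v(6);
    try {
        v.at(8) = 10; // out of range: throws
    } catch (const std::out_of_range& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}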
I hoped that vector's "operator[]" would check bounds the way "at()" does, because I'm not so careful. :-)
One way would be to inherit from the vector class and override operator[] to call at(), so that one can use the more readable "[]" with no need to replace every "[]" with "at()". You can also define the inherited vector (e.g. safer_vector) as a normal vector.
The code will look like this (in C++11; llvm 3.5 of Xcode 5):
#include <vector>
using namespace std;

template <class _Tp, class _Allocator = allocator<_Tp> >
class safer_vector : public vector<_Tp, _Allocator> {
private:
    typedef vector<_Tp, _Allocator> __base;
public:
    typedef _Tp value_type;
    typedef _Allocator allocator_type;
    typedef typename __base::reference reference;
    typedef typename __base::const_reference const_reference;
    typedef typename __base::size_type size_type;
public:
    reference operator[](size_type __n) {
        return this->at(__n); // range-checked, unlike the base operator[]
    }
    safer_vector(size_type __n) : __base(__n) {}
    safer_vector(size_type __n, const_reference __x) : __base(__n, __x) {}
    safer_vector(initializer_list<value_type> __il) : __base(__il) {}
    template <class _Iterator>
    safer_vector(_Iterator __first, _Iterator __last) : __base(__first, __last) {}
    // If C++11 constructor inheritance is supported:
    // using vector<_Tp, _Allocator>::vector;
};
#ifdef NDEBUG
#define safer_vector vector // release builds: fall back to the plain vector
#endif
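In a debug build (where the fallback #define above is inactive), usage might look like this sketch; the exception comes from at(), which throws std::out_of_range on a bad index:
safer_vector<int> v(3, 0);
try {
    v[5] = 1; // forwarded to at(): throws std::out_of_range
} catch (const std::out_of_range&) {
    // handle the bad index
}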

C++ cast vector type in place

Is it possible to do this without creating a new data structure?
Suppose we have
struct Span{
    int from;
    int to;
};
vector<Span> s;
We want to get an integer vector from s directly, by casting
vector<Span> s;
to
vector<int> s;
so we could remove/change some "from", "to" elements, then cast it back to
vector<Span> s;
This is not really a good idea, but I'll show you how.
You can get a raw pointer to the integer this way:
int * myPointer2 = (int*)&(s[0]);
but this is really bad practice: you can't guarantee that the Span structure doesn't have any padding, so while it might work fine for me and you today, we can't say much for other systems.
#include <iostream>
#include <vector>

struct Span{
    int from;
    int to;
};

int main()
{
    std::vector<Span> s;
    Span a = { 1, 2};
    Span b = {2, 9};
    Span c = {10, 14};
    s.push_back(a);
    s.push_back(b);
    s.push_back(c);
    int * myPointer = (int*)&(s[0]);
    for(int k = 0; k < 6; k++)
    {
        std::cout << myPointer[k] << std::endl;
    }
    return 0;
}
As I said, that hard reinterpret cast will often work but is very dangerous and lacks the cross-platform guarantees you normally expect from C/C++.
The next worse thing is this, that will actually do what you asked but you should never do. This is the sort of code you could get fired for:
// Baaaad mojo here: turn a vector<span> into a vector<int>:
std::vector<int> * pis = (std::vector<int>*)&s;
for ( std::vector<int>::iterator It = pis->begin(); It != pis->end(); It++ )
std::cout << *It << std::endl;
Notice how I'm using a pointer to vector and pointing it at the address of the vector object s. My hope is that the internals of both vectors are the same and that I can use them just like that. For me this works, but nothing guarantees it in general for templated classes (consider such things as padding and template specialization).
Consider instead copying out an array (see ref 2 below) or just using s[1].from and s[2].to.
Related Reading:
Are std::vector elements guaranteed to be contiguous?
How to convert vector to array in C++
If sizeof(Span) == sizeof(int) * 2 (that is, Span has no padding), then you can safely use reinterpret_cast<int*>(&v[0]) to get a pointer to array of int that you can iterate over. You can guarantee no-padding structures on a per-compiler basis, with __attribute__((__packed__)) in GCC and #pragma pack in Visual Studio.
However, there is a way that is guaranteed by the standard. Define Span like so:
struct Span {
    int endpoints[2];
};
endpoints[0] and endpoints[1] are required to be contiguous. Add some from() and to() accessors for your convenience, if you like, but now you can use reinterpret_cast<int*>(&v[0]) to your heart’s content.
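As a sketch of what that buys you with the array-based Span above, every endpoint can then be walked as one flat int array:
std::vector<Span> v;
Span s = {1, 2}; // brace elision fills endpoints[0] and endpoints[1]
v.push_back(s);
int* p = reinterpret_cast<int*>(&v[0]);
for (std::size_t i = 0; i < 2 * v.size(); ++i)
    p[i] += 1; // touches every endpoint, in order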
But if you're going to be doing a lot of this pointer-munging, you might want to make your own vector-like data structure that is more amenable to this treatment: one that offers more safety guarantees, so you can avoid shooting yourself in the foot.
Disclaimer: I have absolutely no idea about what you are trying to do. I am simply making educated guesses and showing possible solutions based on that. Hopefully I'll guess one right and you won't have to do crazy shenanigans with stupid casts.
If you want to remove a certain element from the vector, all you need to do is find it and remove it, using the erase function. You need an iterator to your element, and obtaining that iterator depends on what you know about the element in question. Given std::vector<Span> v:
If you know its index:
v.erase(v.begin() + idx);
If you have an object that is equal to the one you're looking for:
Span doppelganger;
v.erase(std::find(v.begin(), v.end(), doppelganger)); // requires an operator== for Span
If you have an object that is equal to what you're looking for but want to remove all equal elements, you need the erase-remove idiom:
Span doppelganger;
v.erase(std::remove(v.begin(), v.end(), doppelganger),
        v.end());
If you have some criterion to select the element:
v.erase(std::find_if(v.begin(), v.end(),
                     [](Span const& s) { return s.from == 0; }));
// in C++03 you need a separate function for the criterion
bool starts_from_zero(Span const& s) { return s.from == 0; }
v.erase(std::find_if(v.begin(), v.end(), starts_from_zero));
If you have some criterion and want to remove all elements that fit that criterion, you need the erase-remove idiom again:
v.erase(std::remove_if(v.begin(), v.end(), starts_from_zero),
        v.end());
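Tying these together, a minimal self-contained sketch (note that the equality-based versions above additionally require an operator== for Span):
#include <algorithm>
#include <vector>

struct Span { int from; int to; };

bool starts_from_zero(Span const& s) { return s.from == 0; }

int main()
{
    std::vector<Span> v = {{0, 1}, {2, 3}, {0, 5}};
    // erase-remove: drop every span that starts from zero
    v.erase(std::remove_if(v.begin(), v.end(), starts_from_zero),
            v.end());
    return static_cast<int>(v.size()); // one span remains
}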

What is the motivation behind C++11 lambda expressions?

I am trying to find out if there is an actual computational benefit to using lambda expressions in C++, namely "this code compiles/runs faster/slower because we use lambda expressions" or is it just a neat development perk open for abuse by poor coders trying to look cool?
I understand this question may seem subjective, but I would much appreciate the opinion of the community on this matter.
The benefit is what's the most important thing in writing computer programs: easier to understand code. I'm not aware of any performance considerations.
C++ allows, to a certain extent, doing Functional Programming. Consider this:
std::for_each( begin, end, doer );
The problem with this is that the function (object) doer
specifies what's done in the loop
yet somewhat hides what's actually done (you have to look up the implementation of the function object's operator())
must be defined in a different scope than the std::for_each call
contains a certain amount of boilerplate code
is often throw-away code that's not used for anything but this one loop construct
Lambdas considerably improve on all these (and maybe some more I forgot).
I don't think it's nearly as much about the computational performance as increasing the expressive power of the language.
There's no performance benefit per se, but the need for lambda came as a consequence of the wide adoption of the STL and its design ideas.
Specifically, the STL algorithms make frequent use of functors. Without lambda, these functors need to be previously declared to be used. Lambdas make it possible to have 'anonymous', in-place functors.
This is important because there are many situations in which you need to use a functor only once, and you don't want to give a name to it for two reasons: you don't want to pollute the namespace, and in those specific cases the name you give is either vague or extremely long.
I, for instance, use the STL a lot, but without C++0x I use plain for() loops much more than the for_each() algorithm and its cousins. That's because if I were to use for_each() instead, I'd need to move the code from inside the loop and declare a functor for it. Also, none of the local variables declared before the loop would be accessible, so I'd have to write additional code to pass them as parameters to the functor constructor, or something equivalent. As a consequence, I tend not to use for_each() unless there's strong motivation, because otherwise the code would be longer and more difficult to read.
That's bad, because it's well known that using for_each() and similar algorithms gives much more room to the compiler & the library for optimizations, including automatic parallelism. So, indirectly, lambda will favour more efficient code.
IMO, the most important thing about lambdas is that they keep related code close together. If you have this code:
std::for_each(begin, end, unknown_function);
You need to navigate over to unknown_function to understand what the code does. But with a lambda, the logic can be kept together.
Lambdas are syntactic sugar for functor classes, so no, there is no computational benefit. As far as the motivation, probably any of the other dozen or so popular languages which have lambdas in them?
One could argue it aids in the readability of code (having your functor declared inline where it is used).
Although I think other parts of C++0x are more important, lambdas are more than just "syntactic sugar" for C++98-style function objects, because they can capture context by name, and the resulting closure can be taken elsewhere and executed. This is something new, not something that "compiles faster/slower".
#include <iostream>
#include <vector>
#include <functional>

void something_else(std::function<void()> f)
{
    f(); // A closure! I wonder if we can write in CPS now...
}

int main()
{
    std::vector<int> v(10, 0);
    std::function<void()> f = [&]() { std::cout << v.size() << std::endl; };
    something_else(f);
}
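The capture is also where care is needed: f above captures v by reference, so it must not outlive v. A sketch of the "take the context elsewhere" case, where capturing by value keeps the closure valid after the creating scope is gone:
#include <functional>
#include <iostream>

std::function<void()> make_printer(int n)
{
    // [=] copies n into the closure; capturing by reference would dangle here
    return [=]() { std::cout << n << std::endl; };
}

int main()
{
    auto p = make_printer(42);
    p(); // prints 42 long after make_printer has returned
}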
Well, compare this:
#include <algorithm>
#include <cstdio>
#include <cstdlib>
#include <vector>

int main () {
    std::vector<int> x = {2, 3, 5, 7, 11, 13, 17, 19};
    int center = 10;
    std::sort(x.begin(), x.end(), [=](int a, int b) {
        return std::abs(a - center) < std::abs(b - center);
    });
    std::for_each(x.begin(), x.end(), [](int v) {
        printf("%d\n", v);
    });
    return 0;
}
with this:
// (same includes as above)
// why force this to be defined non-locally?
void printer(int v) {
    printf("%d\n", v);
}

int main () {
    std::vector<int> x = {2, 3, 5, 7, 11, 13, 17, 19};
    // why force us to define a whole struct when we just need to maintain a bit of state?
    struct {
        int center;
        bool operator()(int a, int b) const {
            return std::abs(a - center) < std::abs(b - center);
        }
    } comp = {10};
    std::sort(x.begin(), x.end(), comp);
    std::for_each(x.begin(), x.end(), printer);
    return 0;
}
"a neat development perk open for abuse by poor coders trying to look cool?"...whatever you call it, it makes code a lot more readable and maintainable. It does not increase the performance.
Most often, a programmer iterates over a range of elements (searching for an element, accumulating elements, sorting elements, etc.). Using a functional style, you immediately see what the programmer intends to do, as opposed to for loops, where everything "looks" the same.
Compare algorithms + lambdas (assuming forest holds Tree elements):
auto longest_tree = std::max_element(forest.begin(), forest.end(),
    [](Tree const& a, Tree const& b) { return a.height() < b.height(); });
auto first_leaf_tree = std::find_if(forest.begin(), forest.end(),
    [](Tree const& t) { return is_leaf(t); });
std::transform(forest.begin(), forest.end(), firewood.begin(),
    [](Tree const& t) { return boost::transformtowood(t); });
std::for_each(forest.begin(), forest.end(),
    [](Tree& t) { t.make_plywood(); });
with old-school for loops:
Forest::iterator longest_tree = forest.begin();
for (Forest::iterator it = forest.begin(); it != forest.end(); ++it) {
    if (it->height() > longest_tree->height()) {
        longest_tree = it;
    }
}
Forest::iterator leaf_tree = forest.begin();
for (Forest::iterator it = forest.begin(); it != forest.end(); ++it) {
    if (it->type() == LEAF_TREE) {
        leaf_tree = it;
        break;
    }
}
Forest::iterator jt = firewood.begin();
for (Forest::const_iterator it = forest.begin(); it != forest.end(); ++it, ++jt) {
    *jt = boost::transformtowood(*it);
}
for (Forest::iterator it = forest.begin(); it != forest.end(); ++it) {
    it->make_plywood();
}

acceptable fix for majority of signed/unsigned warnings?

I myself am convinced that in a project I'm working on signed integers are the best choice in the majority of cases, even though the value contained within can never be negative. (Simpler reverse for loops, less chance for bugs, etc., in particular for integers which can only hold values between 0 and, say, 20, anyway.)
The majority of the places where this goes wrong is a simple iteration of a std::vector, often this used to be an array in the past and has been changed to a std::vector later. So these loops generally look like this:
for (int i = 0; i < someVector.size(); ++i) { /* do stuff */ }
Because this pattern is used so often, the amount of compiler warning spam about this comparison between signed and unsigned types tends to hide more useful warnings. Note that we definitely do not have vectors with more than INT_MAX elements, and note that until now we used two ways to fix the compiler warnings:
for (unsigned i = 0; i < someVector.size(); ++i) { /*do stuff*/ }
This usually works but might silently break if the loop contains any code like 'if (i-1 >= 0) ...', etc.
for (int i = 0; i < static_cast<int>(someVector.size()); ++i) { /*do stuff*/ }
This change does not have any side effects, but it does make the loop a lot less readable. (And it's more typing.)
So I came up with the following idea:
template <typename T> struct vector : public std::vector<T>
{
    typedef std::vector<T> base;
    int size() const { return base::size(); }
    int max_size() const { return base::max_size(); }
    int capacity() const { return base::capacity(); }
    vector() : base() {}
    vector(int n) : base(n) {}
    vector(int n, const T& t) : base(n, t) {}
    vector(const base& other) : base(other) {}
};

template <typename Key, typename Data> struct map : public std::map<Key, Data>
{
    typedef std::map<Key, Data> base;
    typedef typename base::key_compare key_compare;
    int size() const { return base::size(); }
    int max_size() const { return base::max_size(); }
    int erase(const Key& k) { return base::erase(k); }
    int count(const Key& k) { return base::count(k); }
    map() : base() {}
    map(const key_compare& comp) : base(comp) {}
    template <class InputIterator> map(InputIterator f, InputIterator l) : base(f, l) {}
    template <class InputIterator> map(InputIterator f, InputIterator l, const key_compare& comp) : base(f, l, comp) {}
    map(const base& other) : base(other) {}
};
// TODO: similar code for other container types
What you see is basically the STL classes with the methods which return size_type overridden to return just 'int'. The constructors are needed because these aren't inherited.
What would you think of this as a developer, if you'd see a solution like this in an existing codebase?
Would you think 'whaa, they're redefining the STL, what a huge WTF!', or would you think this is a nice simple solution to prevent bugs and increase readability. Or maybe you'd rather see we had spent (half) a day or so on changing all these loops to use std::vector<>::iterator?
(In particular if this solution was combined with banning the use of unsigned types for anything but raw data (e.g. unsigned char) and bit masks.)
Don't derive publicly from STL containers. They have non-virtual destructors, which invokes undefined behaviour if anyone deletes one of your objects through a pointer to the base. If you must derive, e.g. from a vector, do it privately and expose the parts you need with using declarations.
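A sketch of that advice, with a hypothetical checked_vector: private inheritance prevents deletion through a pointer to the base, and using-declarations re-expose just the parts of the interface you need:
#include <vector>

template <typename T>
class checked_vector : private std::vector<T>
{
    typedef std::vector<T> base;
public:
    using base::base; // C++11 inheriting constructors
    using base::begin;
    using base::end;
    using base::push_back;
    using base::operator[];
    int size() const { return static_cast<int>(base::size()); }
};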
Here, I'd just use size_t as the loop variable. It's simple and readable. The poster who commented that using an int index exposes you as a n00b is correct. However, using an iterator to loop over a vector exposes you as a slightly more experienced n00b: one who doesn't realize that the subscript operator for vector is constant time. (vector<T>::size_type is accurate, but needlessly verbose IMO.)
While I don't think "use iterators, otherwise you look n00b" is a good solution to the problem, deriving from std::vector appears much worse than that.
First, developers expect vector to be std::vector and map to be std::map. Second, your solution does not scale to other containers, or to other classes/libraries that interact with containers.
Yes, iterators are ugly, iterator loops are not very readable, and typedefs only cover up the mess. But at least they scale, and they are the canonical solution.
My solution? An STL for-each macro. That is not without problems (mainly, it is a macro, yuck), but it gets across the meaning. It is not as advanced as e.g. this one, but it does the job.
I made this community wiki... Please edit it. I don't agree with the advice against "int" anymore. I now see it as not bad.
Yes, I agree with Richard. You should never use 'int' as the counting variable in a loop like those. The following is how you might do various loops using indices (although there is little reason to; occasionally this can be useful).
Forward
for (std::vector<int>::size_type i = 0; i < someVector.size(); i++) {
    /* ... */
}
Backward
You can do this, which has perfectly defined behavior:
for (std::vector<int>::size_type i = someVector.size() - 1;
     i != (std::vector<int>::size_type) -1; i--) {
    /* ... */
}
Soon, with C++1x (the next C++ version) coming along nicely, you can do it like this:
for (auto i = someVector.size() - 1; i != (decltype(i)) -1; i--) {
    /* ... */
}
Decrementing below 0 will cause i to wrap around, because it is unsigned.
But unsigned will make bugs creep in
That should never be an argument for doing it the wrong way (using 'int').
Why not use std::size_t above?
The C++ Standard, in 23.1 p5 (Container Requirements), defines T::size_type, for T being some Container, as an implementation-defined unsigned integral type. Now, using std::size_t for i above can let bugs creep in silently. If T::size_type is smaller or larger than std::size_t, the arithmetic can overflow i, or i may never reach (std::size_t)-1 at all if someVector.size() == 0. Likewise, the loop condition would be broken completely.
Definitely use an iterator. Soon you will be able to use the 'auto' type for better readability (one of your concerns), like this:
for (auto i = someVector.begin();
     i != someVector.end();
     ++i)
Skip the index
The easiest approach is to sidestep the problem by using iterators, range-based for loops, or algorithms:
for (auto it = begin(v); it != end(v); ++it) { ... }
for (const auto &x : v) { ... }
std::for_each(v.begin(), v.end(), ...);
This is a nice solution if you don't actually need the index value. It also handles reverse loops easily.
Use an appropriate unsigned type
Another approach is to use the container's size type.
for (std::vector<T>::size_type i = 0; i < v.size(); ++i) { ... }
You can also use std::size_t (from <cstddef>). There are those who (correctly) point out that std::size_t may not be the same type as std::vector<T>::size_type (though it usually is). You can, however, be assured that the container's size_type will fit in a std::size_t. So everything is fine, unless you use certain styles for reverse loops. My preferred style for a reverse loop is this:
for (std::size_t i = v.size(); i-- > 0; ) { ... }
With this style, you can safely use std::size_t, even if it's a larger type than std::vector<T>::size_type. The style of reverse loops shown in some of the other answers require casting a -1 to exactly the right type and thus cannot use the easier-to-type std::size_t.
Use a signed type (carefully!)
If you really want to use a signed type (or if your style guide practically demands one), like int, then you can use this tiny function template that checks the underlying assumption in debug builds and makes the conversion explicit so that you don't get the compiler warning message:
#include <cassert>
#include <cstddef>
#include <limits>
template <typename ContainerType>
constexpr int size_as_int(const ContainerType &c) {
    const auto size = c.size(); // if no auto, use `typename ContainerType::size_type`
    assert(size <= static_cast<std::size_t>(std::numeric_limits<int>::max()));
    return static_cast<int>(size);
}
Now you can write:
for (int i = 0; i < size_as_int(v); ++i) { ... }
Or reverse loops in the traditional manner:
for (int i = size_as_int(v) - 1; i >= 0; --i) { ... }
The size_as_int trick is only slightly more typing than the loops with the implicit conversions, you get the underlying assumption checked at runtime, you silence the compiler warning with the explicit cast, you get the same speed as non-debug builds because it will almost certainly be inlined, and the optimized object code shouldn't be any larger because the template doesn't do anything the compiler wasn't already doing implicitly.
You're overthinking the problem.
Using a size_t variable is preferable, but if you don't trust your programmers to use unsigned correctly, go with the cast and just deal with the ugliness. Get an intern to change them all and don't worry about it after that. Turn on warnings as errors and no new ones will creep in. Your loops may be "ugly" now, but you can understand that as the consequences of your religious stance on signed versus unsigned.
vector.size() returns a size_t value, so just change int to size_t and it should be fine.
Richard's answer is more correct, except that it's a lot of work for a simple loop.
I notice that people have very different opinions about this subject. I also have an opinion which has not convinced others, so it made sense to look for support from some gurus, and I found the C++ Core Guidelines:
https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines
maintained by Bjarne Stroustrup and Herb Sutter; their last update, upon which I base the information below, is of April 10, 2022.
Please take a look at the following code rules:
ES.100: Don’t mix signed and unsigned arithmetic
ES.101: Use unsigned types for bit manipulation
ES.102: Use signed types for arithmetic
ES.107: Don’t use unsigned for subscripts, prefer gsl::index
So, supposing that we want to index in a for loop, and for some reason the range-based for loop is not the appropriate solution, then using an unsigned type is also not the preferred solution. The suggested solution is to use gsl::index.
But in case you don't have GSL around and you don't want to introduce it, what then?
In that case I would suggest using a utility function template as suggested by Adrian McCarthy: size_as_int.
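For reference, gsl::index in the Microsoft GSL implementation is, as far as I know, simply an alias for std::ptrdiff_t, so a minimal stand-in that avoids the dependency could be this sketch:
#include <cstddef>

using index = std::ptrdiff_t; // stand-in for gsl::index

// for (index i = 0; i < static_cast<index>(v.size()); ++i) { ... }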