How to nicely implement generic sorting in C++

I'm trying to brush up on my C++ templating, and I'm running into the following issue. I'm implementing some basic in-place sorting methods, which I want to work for various types of data containers that can be indexed, and the elements of which can be compared. Specifically, the methods should work for both plain arrays, std::array, std::vector, etc. For some methods this is rather straightforward, like insertion sort:
template<typename T>
void insertion_sort(T& data)
{
    if (std::size(data) < 2)
        return;

    for (int j = 1; j < std::size(data); ++j)
    {
        int i = j;
        while (i > 0 && data[i - 1] > data[i])
        {
            swap_index(data, i, i - 1); // basic helper that swaps data at two indices
            --i;
        }
    }
}
However, for some methods, I also need to know the actual type of the elements stored in data. An example is merge sort: I need to allocate some scratch space to be used for copying stuff around. I tried something like this:
template<typename T>
void mergesort(T& data)
{
    typedef decltype(*std::begin(data)) inner_type;
    inner_type* scratch = new inner_type[std::size(data)];
    ...
but that just gives me errors like "error C2528: 'scratch': pointer to reference is illegal".
In C# I'd just use something like where T : IComparable<T>, and have a parameter of type IList<T>, which seems to be closer to what I want to achieve: have a type T that is comparable, and as parameter some indexable collection of Ts. What is the "proper" way to achieve this with C++ templates? Or should I use something other than templates here? I want to sort the container in place, so I don't think I want to use something like iterators.

The clue is in the error message: inner_type is a reference, not the value type you are looking for. The following works for me (C++14 is required for the std::remove_reference_t alias; in C++11 you would spell it typename std::remove_reference<...>::type):
#include <type_traits>

typedef std::remove_reference_t<decltype(*std::begin(data))> inner_type;
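For context, a minimal sketch of how this fits into the merge sort from the question might look like the following. Note that it swaps the manual new[]/delete[] for a std::vector as the scratch buffer, which is an assumption on my part, not part of the original question; the merge step itself is omitted:

#include <iterator>     // std::begin, std::size
#include <type_traits>  // std::remove_reference_t
#include <vector>

template<typename T>
void mergesort(T& data)
{
    using inner_type = std::remove_reference_t<decltype(*std::begin(data))>;

    // a std::vector avoids pairing new[] with delete[] by hand
    std::vector<inner_type> scratch(std::size(data));

    // ... recursive splitting and merging into `scratch` goes here ...
}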

Related

C++ - How to create a dynamic vector

I am following this example to make an adjacency list. However, it seems like the vector size cannot be dynamic.
Visual Studio throws an error
expression did not evaluate to a constant
on this line
vector<int> adj[V];
The strange thing is that the exact same code works correctly in the Code::Blocks IDE.
I've tried replacing the above line with vector<int> adj; but then I cannot pass the vector as a parameter to addEdge(adj, 0, 1); as it throws another error about pointers, which I also don't know how to correct.
What could I do to dynamically create my vector?
You don't need to do that for this example. But if you did need it, you could use std::make_unique.
The linked example program is ill-formed. I recommend to not try to learn from that. The issue that you encountered is that they use a non-const size for an array. But the size of an array must be compile time constant in C++. Simple fix is to declare the variable type as const:
const int V = 5;
I've tried replacing the above line with vector<int> adj;
You can't just replace an array of vectors with a single vector and expect the program to work without making other changes.
I need the size to be dynamic as it will only be known at compile time.
Assuming you meant to say that the size will only be known at runtime, the solution is to use a vector of vectors.
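A minimal sketch of that approach might look like this. The variable names mirror the question, and addEdge is assumed to have been adapted to take the vector of vectors (it is not the function from the linked example):

#include <vector>

void addEdge(std::vector<std::vector<int>>& adj, int u, int v)
{
    adj[u].push_back(v);
    adj[v].push_back(u);
}

int main()
{
    int V = 5;                             // size known only at runtime
    std::vector<std::vector<int>> adj(V);  // V empty adjacency lists
    addEdge(adj, 0, 1);
}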
As written by eerorika, the example code isn't a good one, and you should avoid using raw arrays like that. An array in C/C++ is of static size, each vector in this array is dynamic, but the entire array is not!
There are two approaches for such a question. Either use adjacency lists (which is more common):
#include <vector>
#include <utility>
#include <stdint.h>

class Vertix
{
public:
    Vertix(uint64_t id_) : id(id_) {}

    uint64_t get_id() const { return id; }

    void add_adj_vertix(uint64_t id) { adj_vertices.push_back(id); }
    const std::vector<uint64_t>& get_adj_vertices() const { return adj_vertices; }

private:
    uint64_t id;
    std::vector<uint64_t> adj_vertices;
};

class Graph
{
public:
    void add_vertix(uint64_t id)
    {
        // assumes vertices are added with consecutive ids starting at 0,
        // so an id doubles as the index into the vector
        vertices.emplace_back(id);
    }

    void add_edge(uint64_t v_id, uint64_t u_id)
    {
        edges.emplace_back(u_id, v_id);
        vertices[u_id].add_adj_vertix(v_id);
    }

private:
    std::vector<Vertix> vertices;
    std::vector<std::pair<uint64_t, uint64_t>> edges;
};
or use double vector to represent the edges matrix:
std::vector<std::vector<uint64_t>> edges;
But it isn't a real matrix, and you cannot check whether (u, v) is in the graph in O(1), which misses the point of having an adjacency matrix. Assuming you know the size of the Graph at compile time, you should write something like:
#include <array>
#include <stdint.h>

template <size_t V>
using AdjacencyMatrix = std::array<std::array<bool, V>, V>;

template <size_t V>
void add_edge(AdjacencyMatrix<V>& adj_matrix, uint64_t u, uint64_t v)
{
    if (u < V && v < V)
    {
        adj_matrix[u][v] = true;
    }
    else
    {
        // error handling
    }
}
Then you can use AdjacencyMatrix<5> instead of what they were using in that example, with O(1) edge lookups, and although it has a static size, it works as intended.
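A quick usage sketch, under the same assumption that the graph size is a compile-time constant:

AdjacencyMatrix<5> adj_matrix{};    // value-initialized: every entry starts as false
add_edge(adj_matrix, 0, 1);
bool connected = adj_matrix[0][1];  // O(1) lookup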
There’s no need to use C-style arrays in modern C++. Their equivalent is std::array, taking the size as a template parameter. Obviously that size can’t be a runtime variable: template parameters can be types or constant expressions. The compiler error reflects this: std::array is a zero cost wrapper over an internal, raw “C” array.
If the array is always small, you may wish to use a fixed-maximum-size array, such as the one provided by Boost (boost::container::static_vector). You get all the performance benefits of fixed-size arrays and can still store down to zero items in it.
There are other solutions:
If all vectors have the same size, make a wrapper that takes two indices and uses N*i1+i2 as the index into an underlying std::vector (a sketch follows below).
If the vectors have different sizes, use a vector of vectors: std::vector<std::vector<T>>. If there are lots of vectors and you often add and remove them, you may look into using a std::list of vectors.
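For the first option, a minimal sketch of such a wrapper might look like this (the class name and interface are purely illustrative):

#include <cstddef>
#include <vector>

template <typename T>
class Grid
{
public:
    Grid(std::size_t rows, std::size_t cols)
        : cols(cols), data(rows * cols) {}

    // maps a 2D index onto the flat underlying vector: N*i1 + i2
    T&       operator()(std::size_t i1, std::size_t i2)       { return data[cols * i1 + i2]; }
    const T& operator()(std::size_t i1, std::size_t i2) const { return data[cols * i1 + i2]; }

private:
    std::size_t cols;
    std::vector<T> data;
};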

Safely pass Lua sequences to C++ using Sol2

I am using Sol2 to bridge between Lua and C++ code. I would like to pass sequences of numbers from Lua to C++.
From Lua:
func{3, 2, 1.5, 10}
In C++:
void func(std::vector<double> v)
{ ... }
What is the best way to connect the call with the C++ function?
If I directly bind the C++ function I get a segfault. I think I can write a function that converts a sol::table to a std::vector<double>, throwing exceptions if there are any mismatched types, but I'm not sure the best way to do this or if this is the right direction to go.
Here is one solution:
/**
 * Convert a Lua sequence into a C++ vector.
 * Throws an exception on errors or wrong types.
 */
template <typename elementType>
std::vector<elementType> convert_sequence(sol::table t)
{
    std::size_t sz = t.size();
    std::vector<elementType> res(sz);
    for (std::size_t i = 1; i <= sz; i++) {  // Lua sequences are 1-based
        res[i - 1] = t[i];
    }
    return res;
}
This manually converts the sol::table into a std::vector and copies each element one by one. It errors if any elements in the table have the wrong type, or if things are missing.
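A sketch of how the binding might then look, assuming a sol::state named lua and forwarding the converted vector to the existing func from the question:

#include <sol/sol.hpp>  // sol2 v3 header path; older releases use "sol.hpp"
#include <vector>

void func(std::vector<double> v);  // the C++ function from the question

int main()
{
    sol::state lua;
    lua.open_libraries(sol::lib::base);

    // expose "func" to Lua; convert the table argument before calling the C++ function
    lua.set_function("func", [](sol::table t) {
        func(convert_sequence<double>(t));
    });

    lua.script("func{3, 2, 1.5, 10}");
}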

How to iterate through variable members of a class C++

I'm currently trying to do a complicated variable correction to a bunch of variables (based on normalizing in various phase spaces) for some data that I'm reading in. Since each correction follows the same process, I was wondering if there would be any way to do this iteratively rather than handle each variable by itself (since I need to do this for about 18-20 variables). Can C++ handle this? I was told by someone to try this in Python, but I feel like it could be done in C++ in some way... I'm just hitting a wall!
To give you an idea, given something like:
class VariableClass {
public:
    // each object of this class represents an event for this particular data set,
    // containing the following variables
    double x;
    double y;
    double z;
};
I want to do something along the lines of:
for (int i = 0; i < num_variables; i++)
{
    for (int j = 0; j < num_events; j++)
    {
        // iterate through events
    }
    // correct variable here, then move on to the next one
}
Thanks in advance for any advice!!!
I'm assuming your member variables will not all have the same type. Otherwise you can just throw them into a container. If you have C++11, one way you could solve this problem is a tuple. With some template metaprogramming you can simulate a loop over all elements of the tuple. The function std::tie will build a tuple with references to all of your members that you can "iterate" like this:
struct DoCorrection
{
    template<typename T>
    void operator()(T& t) const { /* code goes here */ }
};

for_each(std::tie(x, y, z), DoCorrection());
// see the linked SO answer for the detailed code to make this special for_each work
Then, you can specialize operator() for each member variable type. That will let you do the appropriate math automatically without manually keeping track of the types.
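If C++17 is available, std::apply makes such a tuple for_each short enough to sketch inline. This is not the code from the linked answer, just a minimal illustration:

#include <tuple>
#include <utility>

// apply f to every element of the tuple, in declaration order
template <typename Tuple, typename F>
void for_each_in_tuple(Tuple&& t, F&& f)
{
    std::apply([&f](auto&... elems) { (f(elems), ...); }, std::forward<Tuple>(t));
}

// usage with the DoCorrection functor from above:
//   for_each_in_tuple(std::tie(x, y, z), DoCorrection{});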
Taken from glm (detail vec3.inl):
template <typename T>
GLM_FUNC_QUALIFIER typename tvec3<T>::value_type &
tvec3<T>::operator[](size_type i)
{
    assert(i < this->length());
    return (&x)[i];
}
this would translate to your example:
class VariableClass {
public:
    // each object of this class represents an event for this particular data
    double x;
    double y;
    double z;

    double& operator[](int i)
    {
        assert(i < 3);
        return (&x)[i];
    }
};

VariableClass foo;
foo.x = 2.0;
std::cout << foo[0] << std::endl; // => 2.0
Although I would recommend glm, if it is just about vector math.
Yes, just put all your variables into a container, like std::vector, for example.
http://en.cppreference.com/w/cpp/container/vector
I recommend spending some time reading about all the std classes. There are many containers and many uses.
In general you cannot iterate over members without relying on implementation-defined things like padding or reordering of sections with different access qualifiers (literally no compiler does the latter, though it is allowed).
However, you can use the generalization of a record type: a std::tuple. Iterating over a tuple isn't straightforward, but you will find plenty of code that does it. The worst part here is the loss of named variables, which you can mimic with members.
If you use Boost, you can use Boost.Fusion's helper-macro BOOST_FUSION_ADAPT_STRUCT to turn a struct into a Fusion sequence and then you can use it with Fusion algorithms.
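A rough sketch of that approach, assuming the VariableClass from the question is already defined and that the Boost version in use supports the variadic form of the macro:

#include <boost/fusion/include/adapt_struct.hpp>
#include <boost/fusion/include/for_each.hpp>

// make the struct's members visible to Fusion as a sequence (at namespace scope)
BOOST_FUSION_ADAPT_STRUCT(VariableClass, x, y, z)

struct DoCorrection
{
    template <typename T>
    void operator()(T& t) const { /* correction code goes here */ }
};

// later:
//   VariableClass event;
//   boost::fusion::for_each(event, DoCorrection{});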

C++ cast to array of a smaller size

Here's an interesting question about the various quirks of the C++ language. I have a pair of functions, which are supposed to fill an array of points with the corners of a rectangle. There are two overloads for it: one takes a Point[5], the other takes a Point[4]. The 5-point version refers to a closed polygon, whereas the 4-point version is when you just want the 4 corners, period.
Obviously there's some duplication of work here, so I'd like to be able to use the 4-point version to populate the first 4 points of the 5-point version, so I'm not duplicating that code. (Not that it's much to duplicate, but I have terrible allergic reactions whenever I copy and paste code, and I'd like to avoid that.)
The thing is, C++ doesn't seem to care for the idea of converting a T[m] to a T[n] where n < m. static_cast seems to think the types are incompatible for some reason. reinterpret_cast handles it fine, of course, but is a dangerous animal that, as a general rule, is better to avoid if at all possible.
So my question is: is there a type-safe way of casting an array of one size to an array of a smaller size where the array type is the same?
[Edit] Code, yes. I should have mentioned that the parameter is actually a reference to an array, not simply a pointer, so the compiler is aware of the type difference.
void RectToPointArray(const degRect& rect, degPoint (&points)[4])
{
    points[0].lat = rect.nw.lat; points[0].lon = rect.nw.lon;
    points[1].lat = rect.nw.lat; points[1].lon = rect.se.lon;
    points[2].lat = rect.se.lat; points[2].lon = rect.se.lon;
    points[3].lat = rect.se.lat; points[3].lon = rect.nw.lon;
}

void RectToPointArray(const degRect& rect, degPoint (&points)[5])
{
    // I would like to use a more type-safe check here if possible:
    RectToPointArray(rect, reinterpret_cast<degPoint (&)[4]>(points));

    points[4].lat = rect.nw.lat; points[4].lon = rect.nw.lon;
}
[Edit2] The point of passing an array-by-reference is so that we can be at least vaguely sure that the caller is passing in a correct "out parameter".
I don't think it's a good idea to do this by overloading. The name of the function doesn't tell the caller whether it's going to fill an open array or not. And what if the caller has only a pointer and wants to fill coordinates (let's say he wants to fill multiple rectangles to be part of a bigger array at different offsets)?
I would do this with two functions, and let them take pointers. The size isn't part of the pointer's type:
void fillOpenRect(degRect const& rect, degPoint* p) {
    ...
}

void fillClosedRect(degRect const& rect, degPoint* p) {
    fillOpenRect(rect, p);
    p[4] = p[0];
}
I don't see what's wrong with this. Your reinterpret_cast should work fine in practice (I don't see what could go wrong: both alignment and representation will be correct, so the merely formal undefinedness won't carry over into reality here, I think), but as I said above, I think there's no good reason to make these functions take the arrays by reference.
If you want to do it generically, you can write it with output iterators:
template<typename OutputIterator>
OutputIterator fillOpenRect(degRect const& rect, OutputIterator out) {
    typedef typename iterator_traits<OutputIterator>::value_type value_type;
    value_type pt[] = {
        { rect.nw.lat, rect.nw.lon },
        { rect.nw.lat, rect.se.lon },
        { rect.se.lat, rect.se.lon },
        { rect.se.lat, rect.nw.lon }
    };
    for (int i = 0; i < 4; i++)
        *out++ = pt[i];
    return out;
}

template<typename OutputIterator>
OutputIterator fillClosedRect(degRect const& rect, OutputIterator out) {
    typedef typename iterator_traits<OutputIterator>::value_type value_type;
    out = fillOpenRect(rect, out);
    value_type p1 = { rect.nw.lat, rect.nw.lon };
    *out++ = p1;
    return out;
}
You can then use it with vectors and also with arrays, whatever you prefer most.
std::vector<degPoint> points;
fillClosedRect(someRect, std::back_inserter(points));
degPoint points[5];
fillClosedRect(someRect, points);
If you want to write safer code, you can use the vector way with back-inserters, and if you work with lower level code, you can use a pointer as output iterator.
I would use std::vector, or (this is really bad and should not be used) in some extreme cases you can even use plain arrays via a pointer like Point*, and then you shouldn't have such "casting" troubles.
Why don't you just pass a standard pointer, instead of a sized one, like this
void RectToPointArray(const degRect& rect, degPoint * points ) ;
I don't think your framing/thinking of the problem is correct. You don't generally need to concretely type an object that has 4 vertices vs an object that has 5.
But if you MUST type it, then you can use structs to concretely define the types instead.
struct Coord
{
    float lat, lon;  // "long" is a C++ keyword, so the member is named lon instead
};
Then
struct Rectangle
{
    Coord points[4];
};

struct Pentagon
{
    Coord points[5];
};
Then,
// 4 pt version
void RectToPointArray(const degRect& rect, const Rectangle& rectangle ) ;
// 5 pt version
void RectToPointArray(const degRect& rect, const Pentagon& pent ) ;
I think this solution is a bit extreme, however; a std::vector<Coord> whose size you check with asserts (to be either 4 or 5, as expected) would do just fine.
I guess you could use function template specialization, like this (simplified example where first argument was ignored and function name was replaced by f(), etc.):
#include <iostream>
using namespace std;

class X
{
};

template<int sz, int n>
int f(X (&x)[sz])
{
    cout << "process " << n << " entries in a " << sz << "-dimensional array" << endl;
    int partial_result = f<sz, n - 1>(x);
    cout << "process last entry..." << endl;
    return n;
}

// template specialization for sz=5 and n=4 (number of entries to process)
template<>
int f<5, 4>(X (&x)[5])
{
    cout << "process only the first " << 4 << " entries here..." << endl;
    return 4;
}

int main(void)
{
    X u[5];
    int res = f<5, 5>(u);
    return 0;
}
Of course you would have to take care of other (potentially dangerous) special cases like n = {0, 1, 2, 3}, and you're probably better off using unsigned ints instead of ints.
"So my question is: is there a type-safe way of casting an array of one size to an array of a smaller size where the array type is the same?"
No. I don't think the language allows you to do this at all: consider casting int[10] to int[5]. You can always get a pointer to it, however, but we can't 'trick' the compiler into thinking a fixed-size array has a different number of elements.
If you're not going to use std::vector or some other container which can properly identify the number of points inside at runtime (and do this all conveniently in one function instead of two overloads chosen by element count), then rather than trying to do crazy casts, consider this at least as an improvement:
void RectToPointArray(const degRect& rect, degPoint* points, unsigned int size);
If you're set on working with arrays, you can still define a generic function like this:
template <class T, size_t N>
std::size_t array_size(const T (&/*array*/)[N])
{
    return N;
}
... and use that when calling RectToPointArray to pass the argument for 'size'. Then you have a size you can determine at runtime and it's easy enough to work with size - 1, or more appropriate for this case, just put a simple if statement to check if there are 5 elements or 4.
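For illustration, a call combining the two pieces might look like this (degRect, degPoint, and the rect variable come from the question):

degPoint points[5];
RectToPointArray(rect, points, array_size(points));  // size deduced from the array's type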
Later if you change your mind and use std::vector, Boost.Array, etc. you can still use this same old function without modifying it. It only requires that the data is contiguous and mutable. You can get fancy with this and apply very generic solutions that, say, only require forward iterators. Yet I don't think this problem is complicated enough to warrant such a solution: it'd be like using a cannon to kill a fly; fly swatter is okay.
If you're really set on the solution you have, then it's easy enough to do this:
template <size_t N>
void RectToPointArray(const degRect& rect, degPoint (&points)[N])
{
    assert(N >= 4 && "points requires at least 4 elements!");

    points[0].lat = rect.nw.lat; points[0].lon = rect.nw.lon;
    points[1].lat = rect.nw.lat; points[1].lon = rect.se.lon;
    points[2].lat = rect.se.lat; points[2].lon = rect.se.lon;
    points[3].lat = rect.se.lat; points[3].lon = rect.nw.lon;

    if (N >= 5)
    {
        // braces needed: without them only the first assignment would be guarded by the if
        points[4].lat = rect.nw.lat; points[4].lon = rect.nw.lon;
    }
}
Yeah, there is one unnecessary runtime check but trying to do it at compile time is probably analogous to taking things out of your glove compartment in an attempt to increase your car's fuel efficiency. With N being a compile-time constant expression, the compiler is likely to recognize that the condition is always false when N < 5 and just eliminate that whole section of code.
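Usage would then look like this, with N deduced from the argument (rect is again assumed to be a degRect from the question):

degPoint quad[4];
degPoint ring[5];

RectToPointArray(rect, quad);  // N deduced as 4: the if (N >= 5) branch is never taken
RectToPointArray(rect, ring);  // N deduced as 5: all five points are filled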

acceptable fix for majority of signed/unsigned warnings?

I myself am convinced that in a project I'm working on signed integers are the best choice in the majority of cases, even though the value contained within can never be negative. (Simpler reverse for loops, less chance for bugs, etc., in particular for integers which can only hold values between 0 and, say, 20, anyway.)
The majority of the places where this goes wrong is a simple iteration of a std::vector, often this used to be an array in the past and has been changed to a std::vector later. So these loops generally look like this:
for (int i = 0; i < someVector.size(); ++i) { /* do stuff */ }
Because this pattern is used so often, the amount of compiler warning spam about this comparison between signed and unsigned types tends to hide more useful warnings. Note that we definitely do not have vectors with more than INT_MAX elements, and note that until now we used two ways to fix the compiler warning:
for (unsigned i = 0; i < someVector.size(); ++i) { /*do stuff*/ }
This usually works but might silently break if the loop contains any code like 'if (i-1 >= 0) ...', etc.
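To illustrate the pitfall (a hypothetical snippet, not from the original code): with an unsigned i, a guard like this becomes a tautology, because i - 1 wraps around instead of going negative:

for (unsigned i = 0; i < someVector.size(); ++i) {
    if (i - 1 >= 0) {   // always true: for i == 0, i - 1 wraps around to UINT_MAX
        /* ... */
    }
}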
for (int i = 0; i < static_cast<int>(someVector.size()); ++i) { /*do stuff*/ }
This change does not have any side effects, but it does make the loop a lot less readable. (And it's more typing.)
So I came up with the following idea:
template <typename T> struct vector : public std::vector<T>
{
    typedef std::vector<T> base;

    int size() const     { return base::size(); }
    int max_size() const { return base::max_size(); }
    int capacity() const { return base::capacity(); }

    vector() : base() {}
    vector(int n) : base(n) {}
    vector(int n, const T& t) : base(n, t) {}
    vector(const base& other) : base(other) {}
};

template <typename Key, typename Data> struct map : public std::map<Key, Data>
{
    typedef std::map<Key, Data> base;
    typedef typename base::key_compare key_compare;

    int size() const     { return base::size(); }
    int max_size() const { return base::max_size(); }
    int erase(const Key& k) { return base::erase(k); }
    int count(const Key& k) { return base::count(k); }

    map() : base() {}
    map(const key_compare& comp) : base(comp) {}
    template <class InputIterator> map(InputIterator f, InputIterator l) : base(f, l) {}
    template <class InputIterator> map(InputIterator f, InputIterator l, const key_compare& comp) : base(f, l, comp) {}
    map(const base& other) : base(other) {}
};

// TODO: similar code for other container types
// TODO: similar code for other container types
What you see is basically the STL classes with the methods which return size_type overridden to return just 'int'. The constructors are needed because these aren't inherited.
What would you think of this as a developer, if you'd see a solution like this in an existing codebase?
Would you think 'whaa, they're redefining the STL, what a huge WTF!', or would you think this is a nice simple solution to prevent bugs and increase readability. Or maybe you'd rather see we had spent (half) a day or so on changing all these loops to use std::vector<>::iterator?
(In particular if this solution was combined with banning the use of unsigned types for anything but raw data (e.g. unsigned char) and bit masks.)
Don't derive publicly from STL containers. They have non-virtual destructors, which makes it undefined behaviour if anyone deletes one of your objects through a pointer to the base. If you must derive, e.g. from a vector, do it privately and expose the parts you need to expose with using declarations.
Here, I'd just use a size_t as the loop variable. It's simple and readable. The poster who commented that using an int index exposes you as a n00b is correct. However, using an iterator to loop over a vector exposes you as a slightly more experienced n00b - one who doesn't realize that the subscript operator for vector is constant time. (vector<T>::size_type is accurate, but needlessly verbose IMO).
While I don't think "use iterators, otherwise you look n00b" is a good solution to the problem, deriving from std::vector appears much worse than that.
First, developers do expect vector to be std::vector and map to be std::map. Second, your solution does not scale to other containers, or to other classes/libraries that interact with containers.
Yes, iterators are ugly, iterator loops are not very well readable, and typedefs only cover up the mess. But at least, they do scale, and they are the canonical solution.
My solution? An STL for-each macro (a rough sketch is below). That is not without problems (mainly, it is a macro, yuck), but it gets the meaning across. It is not as advanced as e.g. this one, but it does the job.
I made this community wiki... Please edit it. I don't agree with the advice against "int" anymore. I now see it as not bad.
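A rough sketch of what such a macro might look like (hypothetical, not the macro from the link, and written for pre-C++11 code since range-based for makes it obsolete):

// Iterate a container with an explicitly named iterator type.
// Usage:  STL_FOR_EACH(std::vector<int>, it, someVector) { /* use *it */ }
#define STL_FOR_EACH(ContainerType, it, container)              \
    for (ContainerType::iterator it = (container).begin();     \
         it != (container).end();                              \
         ++it)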
Yes, I agree with Richard. You should never use 'int' as the counting variable in a loop like those. The following is how you might want to do various loops using indices (although there is little reason to; occasionally this can be useful).
Forward
for (std::vector<int>::size_type i = 0; i < someVector.size(); i++) {
    /* ... */
}
Backward
You can do this, which is perfectly defined behavior:
for (std::vector<int>::size_type i = someVector.size() - 1;
     i != (std::vector<int>::size_type) -1; i--) {
    /* ... */
}
Soon, with C++1x (the next C++ version) coming along nicely, you can do it like this:
for (auto i = someVector.size() - 1; i != (decltype(i)) -1; i--) {
    /* ... */
}
Decrementing below 0 will cause i to wrap around, because it is unsigned.
But unsigned will make bugs creep in
That should never be an argument for doing it the wrong way (using 'int').
Why not use std::size_t above?
The C++ Standard defines in 23.1 p5 (Container Requirements) that T::size_type, for T being some Container, is some implementation-defined unsigned integral type. Now, using std::size_t for i above could let bugs creep in silently. If T::size_type is smaller or larger than std::size_t, then it will overflow i, or never even reach (std::size_t)-1 if someVector.size() == 0. Likewise, the condition of the loop would be broken completely.
Definitely use an iterator. Soon you will be able to use the 'auto' type, for better readability (one of your concerns) like this:
for (auto i = someVector.begin();
     i != someVector.end();
     ++i)
Skip the index
The easiest approach is to sidestep the problem by using iterators, range-based for loops, or algorithms:
for (auto it = begin(v); it != end(v); ++it) { ... }
for (const auto &x : v) { ... }
std::for_each(v.begin(), v.end(), ...);
This is a nice solution if you don't actually need the index value. It also handles reverse loops easily.
Use an appropriate unsigned type
Another approach is to use the container's size type.
for (std::vector<T>::size_type i = 0; i < v.size(); ++i) { ... }
You can also use std::size_t (from <cstddef>). There are those who (correctly) point out that std::size_t may not be the same type as std::vector<T>::size_type (though it usually is). You can, however, be assured that the container's size_type will fit in a std::size_t. So everything is fine, unless you use certain styles for reverse loops. My preferred style for a reverse loop is this:
for (std::size_t i = v.size(); i-- > 0; ) { ... }
With this style, you can safely use std::size_t, even if it's a larger type than std::vector<T>::size_type. The style of reverse loops shown in some of the other answers requires casting a -1 to exactly the right type and thus cannot use the easier-to-type std::size_t.
Use a signed type (carefully!)
If you really want to use a signed type (or if your style guide practically demands one), like int, then you can use this tiny function template that checks the underlying assumption in debug builds and makes the conversion explicit so that you don't get the compiler warning message:
#include <cassert>
#include <cstddef>
#include <limits>

template <typename ContainerType>
constexpr int size_as_int(const ContainerType &c)
{
    const auto size = c.size();  // if no auto, use `typename ContainerType::size_type`
    assert(size <= static_cast<std::size_t>(std::numeric_limits<int>::max()));
    return static_cast<int>(size);
}
Now you can write:
for (int i = 0; i < size_as_int(v); ++i) { ... }
Or reverse loops in the traditional manner:
for (int i = size_as_int(v) - 1; i >= 0; --i) { ... }
The size_as_int trick is only slightly more typing than the loops with the implicit conversions; you get the underlying assumption checked at runtime in debug builds; you silence the compiler warning with the explicit cast; you get the same speed in non-debug builds because it will almost certainly be inlined; and the optimized object code shouldn't be any larger because the template doesn't do anything the compiler wasn't already doing implicitly.
You're overthinking the problem.
Using a size_t variable is preferable, but if you don't trust your programmers to use unsigned correctly, go with the cast and just deal with the ugliness. Get an intern to change them all and don't worry about it after that. Turn on warnings as errors and no new ones will creep in. Your loops may be "ugly" now, but you can understand that as the consequences of your religious stance on signed versus unsigned.
vector.size() returns a size_t var, so just change int to size_t and it should be fine.
Richard's answer is more correct, except that it's a lot of work for a simple loop.
I notice that people have very different opinions about this subject. I also have an opinion which does not convince others, so it makes sense to look for support from some gurus, and I found the C++ Core Guidelines:
https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines
maintained by Bjarne Stroustrup and Herb Sutter; the last update, on which I base the information below, is from April 10, 2022.
Please take a look at the following code rules:
ES.100: Don’t mix signed and unsigned arithmetic
ES.101: Use unsigned types for bit manipulation
ES.102: Use signed types for arithmetic
ES.107: Don’t use unsigned for subscripts, prefer gsl::index
So, supposing that we want to index in a for loop and for some reason the range-based for loop is not the appropriate solution, then using an unsigned type is also not the preferred solution. The suggested solution is using gsl::index.
But in case you don’t have gsl around and you don’t want to introduce it, what then?
In that case I would suggest having a utility template function like the size_as_int suggested by Adrian McCarthy.
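For completeness, a minimal sketch of the guideline-preferred loop, assuming the Microsoft GSL is available (gsl::index is just an alias for std::ptrdiff_t, i.e. a signed type):

#include <gsl/gsl>
#include <vector>

void process(const std::vector<double>& v)
{
    for (gsl::index i = 0; i < static_cast<gsl::index>(v.size()); ++i)
    {
        // i is signed, so expressions like i - 1 behave as expected
    }
}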