Class to represent reciprocals - C++

I have code where I do a lot of things like
a = 1/((1/b)+(1/c))
I can't help it, I cringe at the use of so many divisions, when I could do it like this
oneOverA = oneOverB + oneOverC
In fact I can do pretty much everything with reciprocals. For instance, with comparisons:
A < B iff oneOverB < oneOverA
again without need to divide. And so on.
However, this would be extremely error-prone. Imagine if I pass the reversed version to a function which expects the "straight" one, or forget to reverse the order while comparing, etc.
But, C++ exists to make magic happen, right?
And if I implement a very simple header-only class, pretty much everything should be optimized away, leaving the code just as I would have written it if I were as infallible as the compiler.
So I tried a class like this:
template<typename T = float> // T must be a type for which 1/T is representable as another T such that 1/(1/T) == T, e.g. float
class ReversibleNum {
private:
    const T reverse;
public:
    ReversibleNum(T orig) : reverse(1 / orig) {}
    operator T () const { return 1 / reverse; }
    bool operator < (const ReversibleNum<T>& oth) const { return oth.reverse < reverse; }
    template<typename U>
    friend U operator / (const U one, const ReversibleNum<U>& two);
};

template<typename U>
inline U operator / (const U one, const ReversibleNum<U>& two) { return one * two.reverse; }
However, when I try
ReversibleNum<float> a(5);
1.0/a
it uses "operator float" to convert and then divide, rather than "operator /" which is what I want.
I could probably make "operator T" explicit. However, I like the idea of being able to use it seamlessly and have the conversion just happen; but only when there really is no alternative.
Is there something I can do to make my code work?
Or maybe some different, more advanced magic I could use to achieve my aim?
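One possible direction (a sketch only, not part of the original question): define the operators as "hidden friends" inside the class so they take part directly in overload resolution, and make the conversion operator explicit so it can no longer win by converting first:

```cpp
// Sketch only: hidden-friend operators plus an explicit conversion.
template <typename T = float>
class ReversibleNum {
    const T reverse;
public:
    ReversibleNum(T orig) : reverse(1 / orig) {}

    explicit operator T() const { return 1 / reverse; }  // conversion is now opt-in

    friend bool operator<(const ReversibleNum& a, const ReversibleNum& b) {
        return b.reverse < a.reverse;   // a < b iff 1/b < 1/a
    }
    friend T operator/(T one, const ReversibleNum& two) {
        return one * two.reverse;       // still no run-time division
    }
};

// ReversibleNum<float> a(5);
// float x = 1.0f / a;   // picks the friend operator/, no conversion involved
```

The trade-off is that the seamless implicit conversion is gone; wherever the plain value is really needed, it becomes static_cast<float>(a).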

For list sorting, is there a way to have multiple operator overloads with the same arguments?

Summary
I have a std::list of type Process*
class Process
{
// non essential stuff
// vars I want to sort by
int pid;
int burstTime;
int rBurstTime;
int priority;
};
I want to overload the < operator for sorting my list via list::sort()
bool operator<(Process const& p) const { return this->priority < p.priority; }
bool operator<(Process const& p) const { return this->burstTime < p.burstTime; }
// etc.
The above seems impossible since there is no way to determine the difference between the two (or am I on the right track?).
What I tried
I've tried something like
bool operator<(Process const& p, <k>) { return this->priority < p.priority; }
where k is just any datatype/expected value that tells which overload to use, but this isn't possible since a member operator< overload only takes one argument.
Hopefully by now you can see what I am trying to do. Is there a C++ procedure for this that I am unaware of? I am a relatively new C++ programmer, so apologies if this is an easy fix.
Solved via Borgleader's comment:
std::list's sort can take a comparison function/functor, you should use that instead (this is also true of std::sort)
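A minimal sketch of that comparator approach, assuming the list holds Process* as stated above and that the members shown are accessible (public, or exposed through getters):

```cpp
#include <list>

// Sort the same std::list<Process*> two different ways by passing a lambda
// comparator to list::sort; the lambda dereferences the pointers before comparing.
void sortProcesses(std::list<Process*>& processes)
{
    // by priority
    processes.sort([](const Process* a, const Process* b) {
        return a->priority < b->priority;
    });

    // by burst time
    processes.sort([](const Process* a, const Process* b) {
        return a->burstTime < b->burstTime;
    });
}
```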

Store different data types in map - with info on type

I need to parse and store a somewhat (but not too) complex stream and need to store the parsed result somehow. The stream essentially contains name-value pairs with values possibly being of different type for different names. Basically, I end up with a map of key (always string) to a pair <type, value>.
I started with something like this:
typedef enum ValidType {STRING, INT, FLOAT, BINARY} ValidType;
map<string, pair<ValidType, void*>> Data;
However I really dislike void* and storing pointers. Of course, I can always store the value as binary data (vector<char> for example), in which case the map would end up being
map<string, pair<ValidType, vector<char>>> Data;
Yet, in this case I would have to parse the binary data every time I need the actual value, which would be quite expensive in terms of performance.
Considering that I am not too worried about memory footprint (the amount of data is not large), but I am concerned about performance, what would be the right way to store such data?
Ideally, I'd like to avoid using boost, as that would increase the size of the final app by a factor of 3 if not more and I need to minimise that.
You're looking for a discriminated (or tagged) union.
Boost.Variant is one example, and Boost.Any is another. Are you so sure Boost will increase your final app size by a factor of 3? I would have thought variant was header-only, in which case you don't need to link any libraries.
If you really can't use Boost, implementing a simple discriminated union isn't so hard (a general and fully-correct one is another matter), and at least you know what to search for now.
For completeness, a naive discriminated union might look like:
class DU
{
public:
enum TypeTag { None, Int, Double };
class DUTypeError {};
private:
TypeTag type_;
union {
int i;
double d;
} data_;
void typecheck(TypeTag tt) const { if(type_ != tt) throw DUTypeError(); }
public:
DU() : type_(None) {}
DU(DU const &other) : type_(other.type_), data_(other.data_) {}
DU& operator= (DU const &other) {
type_=other.type_; data_=other.data_; return *this;
}
TypeTag type() const { return type_; }
bool istype(TypeTag tt) const { return type_ == tt; }
#define CONVERSIONS(TYPE, ENUM, MEMBER) \
explicit DU(TYPE val) : type_(ENUM) { data_.MEMBER = val; } \
operator TYPE & () { typecheck(ENUM); return data_.MEMBER; } \
operator TYPE const & () const { typecheck(ENUM); return data_.MEMBER; } \
DU& operator=(TYPE val) { type_ = ENUM; data_.MEMBER = val; return *this; }
CONVERSIONS(int, Int, i)
CONVERSIONS(double, Double, d)
};
Now, there are several drawbacks:
you can't store non-POD types in the union
adding a type means modifying the enum, and the union, and remembering to add a new CONVERSIONS line (it would be even worse without the macro)
you can't use the visitor pattern with this (or, you'd have to write your own dispatcher for it), which means lots of switch statements in the client code
every one of these switches may also need updating if you add a type
if you did write a visitor dispatch, that needs updating if you add a type, and so may every visitor
you need to manually reproduce something like the built-in C++ type-conversion rules if you want to do anything like arithmetic with these (i.e., operator double could promote an Int instead of only handling Double ... but only if you hand-roll every operator)
I haven't implemented operator== precisely because it needs a switch (a sketch of what that would look like follows below). You can't just memcmp the two unions if the types match, because identical 32-bit integers could still compare as different if the extra space required for the double holds a different bit pattern.
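For illustration only (not part of the original answer), such a switch-based operator== could be added inside the DU class roughly like this:

```cpp
// Sketch: compare the active member selected by the tag, never the raw union bytes.
friend bool operator==(DU const& a, DU const& b) {
    if (a.type_ != b.type_) return false;
    switch (a.type_) {
        case None:   return true;                    // both empty
        case Int:    return a.data_.i == b.data_.i;
        case Double: return a.data_.d == b.data_.d;
    }
    return false; // not reached if the switch covers every tag
}
```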
Some of these issues can be addressed if you care about them, but it's all more work. Hence my preference for not re-inventing this particular wheel if it can be avoided.
Since your data types are fixed, what about something like this: have a std::vector for each type of value, and have your map store, as the second element of the pair, the index into the corresponding vector.
std::vector<int> vInt;
std::vector<float> vFloat;
// ... one vector per ValidType
map<std::string, std::pair<ValidType, int>> Data;
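A rough usage sketch of that layout (the helper names storeInt and loadInt are made up here; the ValidType enum and the map type come from the question):

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Store an int: append it to the int vector and record (type, index) in the map.
void storeInt(std::map<std::string, std::pair<ValidType, int>>& data,
              std::vector<int>& vInt, const std::string& key, int value)
{
    vInt.push_back(value);
    data[key] = std::make_pair(INT, static_cast<int>(vInt.size()) - 1);
}

// Load it back: the ValidType tag says which vector the stored index refers to.
int loadInt(const std::map<std::string, std::pair<ValidType, int>>& data,
            const std::vector<int>& vInt, const std::string& key)
{
    const std::pair<ValidType, int>& entry = data.at(key);
    // here we expect entry.first == INT; a real version would check it
    return vInt[entry.second];
}
```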
You can implement a multi-type map by leveraging the nifty features of std::tuple, which allows access by a type key (std::get by type, standardized in C++14). You can wrap this to create access by arbitrary keys. An in-depth explanation of this (and quite an interesting read) is available here:
https://jguegant.github.io/blogs/tech/thread-safe-multi-type-map.html
Modern C++ features provide great ways to solve old problems.
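To give a flavour of the tuple idea without reproducing the linked article (whose design is more elaborate), a bare-bones sketch might bundle one map per value type into a std::tuple and select it by type:

```cpp
#include <map>
#include <string>
#include <tuple>
#include <utility>

// One std::map<std::string, T> per supported type T, selected at compile time.
template <typename... Types>
class MultiTypeMap {
    std::tuple<std::map<std::string, Types>...> maps_;
public:
    template <typename T>
    void set(const std::string& key, T value) {
        std::get<std::map<std::string, T>>(maps_)[key] = std::move(value);
    }
    template <typename T>
    const T& get(const std::string& key) const {
        return std::get<std::map<std::string, T>>(maps_).at(key);
    }
};

// MultiTypeMap<int, float, std::string> m;
// m.set<int>("answer", 42);
// int x = m.get<int>("answer");
```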

How to index and assign elements in a tensor using identical call signatures?

OK, I've been googling around for too long, I'm just not sure what to call this technique, so I figured it's better to just ask here on SO. Please point me in the right direction if this has an obvious name and/or solution I've overlooked.
For the laymen: a tensor is the logical extension of the matrix, in the same way a matrix is the logical extension of the vector. A vector is a rank-1 tensor (in programming terms, a 1D array of numbers), a matrix is a rank-2 tensor (a 2D array of numbers), and a rank-N tensor is then simply an N-D array of numbers.
Now, suppose I have something like this Tensor class:
template<typename T = double> // possibly also with size parameters
class Tensor
{
private:
T *M; // Tensor data (C-array)
// alternatively, std::vector<T> *M
// or std::array<T> *M
// etc., or possibly their constant-sized versions
// using Tensor<>'s template parameters
public:
... // insert trivial fluffy stuff here
// read elements
const T & operator() (size_t a, size_t b) const {
... // error checks etc.
return M[a + rows*b];
}
// write elements
T & operator() (size_t a, size_t b) {
... // error checks etc.
return M[a + rows*b];
}
...
};
With these definitions of operator()(...), indexing and assigning individual elements then has the same call signature:
Tensor<> B(5,5);
double a = B(3,4); // operator() (size_t,size_t) used to both GET elements
B(3,4) = 5.5; // and SET elements
It is fairly trivial to extend this up to arbitrary tensor rank. But what I'd like to be able to implement is a more high-level way of indexing/assigning elements:
Tensor<> B(5,5);
Tensor<> C = B( Slice(0,4,2), 2 ); // operator() (Slice(),size_t) used to GET elements
B( Slice(0,4,2), 2 ) = C; // and SET elements
// (C is another tensor of the correct dimensions)
I am aware that std::valarray (and many others for that matter) does a very similar thing already, but it's not my objective to just accomplish the behavior; my objective here is to learn how to elegantly, efficiently and safely add the following functionality to my Tensor<> class:
// Indexing/assigning with Tensor<bool>
B( B>0 ) += 1.0;
// Indexing/assigning arbitrary amount of dimensions, each dimension indexed
// with either Tensor<bool>, size_t, Tensor<size_t>, or Slice()
B( Slice(0,2,FINAL), 3, Slice(0,3,FINAL), 4 ) = C;
// double indexing/assignment operation
B(3, Slice(0,4,FINAL))(mask) = C; // [mask] == Tensor<bool>
.. etc.
Note that it's my intention to use operator[] for non-checked versions of operator(). Alternatively, I'll stick more to the std::vector<> approach of using .at() methods for checked versions of operator[]. Anyway, this is a design choice and beside the issue right now.
I've conjured up the following incomplete "solution". This method is only really manageable for vectors/matrices (rank-1 or rank-2 tensors), and has many undesirable side-effects:
// define a simple slice class
class Slice
{
private:
size_t
start, stride, end;
public:
Slice(size_t s, size_t e) : start(s), stride(1), end(e) {}
Slice(size_t s, size_t S, size_t e) : start(s), stride(S), end(e) {}
...
};
template<typename T = double>
class Tensor
{
... // same as before
public:
// define two operators() for use with slices:
// version for retrieving data
const Tensor<T> & operator() (Slice r, size_t c) const {
// use slicing logic to construct return tensor
...
return M;
}
// version for assigning data
Sass operator() (Slice r, size_t c) {
// returns Sass object, defined below
return Sass(*this, r,c);
}
protected:
class Sass
{
friend class Tensor<T>;
private:
Tensor<T>& M;
const Slice &R;
const size_t c;
public:
Sass(Tensor<T> &M, const Slice &R, const size_t c)
: M(M)
, R(R)
, c(c)
{}
operator Tensor<T>() const { return M; }
Tensor<T> & operator= (const Tensor<T> &M2) {
// use R/c to copy contents of M2 into M using the same
// Slice-logic as in "Tensor<T>::operator()(...) const" above
...
return M;
}
}; // end of class Sass
}; // end of class Tensor
But this just feels wrong...
For each of the indexing/assignment methods outlined above, I'd have to define a separate Tensor<T>::Sass::Sass(...) constructor, a new Tensor<T>::Sass::operator=(...), and a new Tensor<T>::operator()(...) for each and every such operation. Moreover, the Tensor<T>::Sass::operator=(...) overloads would need to contain much of the same logic that's already in the corresponding Tensor<T>::operator()(...), and making everything suitable for a Tensor<> of arbitrary rank makes this approach quite ugly, way too verbose and, more importantly, completely unmanageable.
So, I'm under the impression there is a much more effective approach to all this.
Any suggestions?
First of all I'd like to point out some design issues:
T & operator() (size_t a, size_t b) const;
suggests you can't alter the matrix through this method, because it's const. But you are giving back a non-const reference to a matrix element, so in fact you can alter it. This only compiles because of the raw pointer you are using. I suggest using std::vector instead, which does the memory management for you and will give you an error here, because vector's const version of operator[] returns a const reference, like it should.
Regarding your actual question, I am not sure what the parameters of the Slice constructor should do, nor what a Sass object is meant to be (I am not a native speaker, and "Sass" gives me only one translation in the dictionary, meaning something like "impudence" or "impertinence").
However, I suppose with a slice you want to create an object that gives access to a subset of a matrix, defined by the slice's parameters.
I would advise against using operator() for every way of accessing the matrix. operator() with two indices to access a given element seems natural. Using a similar operator to get a whole matrix seems less intuitive to me.
Here's an idea: make a Slice class that holds a reference to a Matrix and the necessary parameters that define which part of the Matrix is represented by the Slice. That way a Slice would be something like a proxy to the Matrix subset it defines, similar to a pair of iterators which can be seen as a proxy to a subrange of the container they are pointing to. Give your Matrix a pair of slice() methods (const and nonconst) that give back a Slice/ConstSlice, referencing the Matrix you call the method on. That way, you can even put checks into the method to see if the Slice's parameters make sense for the Matrix it refers to. If it makes sense and is necessary, you can also add a conversion operator, to convert a Slice into a Matrix of its own.
Overloading operator() again and again and using the parameters as a mask, as linear indices and as other things is more confusing than helpful, in my opinion. operator() is slick if it does something natural which everybody expects from it. It only obfuscates the code if it is used everywhere. Use named methods instead.
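To make that concrete, here is a structural sketch of the proxy idea (all names hypothetical; bounds checks and the actual element-copying logic are omitted):

```cpp
#include <cstddef>

template <typename T> class Matrix;   // forward declaration

// A Slice is just a view: a reference to the Matrix plus the parameters that
// describe which subset of it the slice covers.
template <typename T>
class Slice {
    Matrix<T>& m_;
    std::size_t start_, stride_, end_;
public:
    Slice(Matrix<T>& m, std::size_t start, std::size_t stride, std::size_t end)
        : m_(m), start_(start), stride_(stride), end_(end) {}

    Slice& operator=(const Matrix<T>& rhs);   // copy element-wise into the subset
    operator Matrix<T>() const;               // materialize the subset as its own Matrix
};

template <typename T>
class Matrix {
    // ... storage, element access, etc.
public:
    Slice<T> slice(std::size_t start, std::size_t stride, std::size_t end) {
        // a good place to validate the parameters against the matrix dimensions
        return Slice<T>(*this, start, stride, end);
    }
};
```

A ConstSlice built from a const Matrix& would follow the same pattern for the read-only case.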
Not an answer, just a note to follow up my comment:
Tensor<bool> T(false);
// T (whatever its rank) contains all false
auto lazy = T(Slice(0,4,2));
// if I use lazy here, it will be all false
T = true;
// now T contains all true
// if I use lazy here, it will be all true
This may be what you want, or it might be unexpected.
In general, this can work cleanly with immutable tensors, but allowing mutation gives the same class of problem as COW strings.
If you allow your Tensor to be implicitly converted to a double, you can return only Tensors from your operator() overloads.
// assumes the tensor data M is stored in something with a size(), e.g. std::vector<T>
operator double() const {
    return M.size() == 1 ? M[0] : std::numeric_limits<double>::quiet_NaN();
}
That should allow for
double a = B(3,4);
Tensor<> a = B(Slice(1,2,3),4);
Getting operator() to work with multiple overloads mixing Slice and integer arguments is another issue. I'd probably just use Slice and create another implicit conversion so integers can be Slices, and then maybe use the variable-argument ellipsis:
const Tensor<T> & operator() (int numOfDimensions, ...)
Although the variable-argument route is kind of a kludge; it's probably best to just have 8 specializations for 1-8 Slice parameters.
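As an alternative sketch (not what this answer proposes; it swaps the C-style ellipsis for a C++11 variadic template), the overload could accept any mix of Slice and integer arguments and normalize them, assuming Slice gains a single-index constructor and a helper like buildSubTensor exists:

```cpp
// Inside Tensor<T>; needs #include <array>. Every argument must be convertible to Slice.
template <typename... Indices>
Tensor<T> operator()(const Indices&... indices) const {
    static_assert(sizeof...(Indices) > 0, "at least one index is required");
    // Normalize each argument to a Slice (a plain index k becomes Slice(k, 1, k)
    // via the assumed single-index constructor), then do the slicing in one place.
    std::array<Slice, sizeof...(Indices)> slices{ Slice(indices)... };
    return buildSubTensor(slices);   // hypothetical helper doing the actual work
}
```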

acceptable fix for majority of signed/unsigned warnings?

I myself am convinced that in a project I'm working on signed integers are the best choice in the majority of cases, even though the value contained within can never be negative. (Simpler reverse for loops, less chance for bugs, etc., in particular for integers which can only hold values between 0 and, say, 20, anyway.)
The majority of the places where this goes wrong is a simple iteration of a std::vector, often this used to be an array in the past and has been changed to a std::vector later. So these loops generally look like this:
for (int i = 0; i < someVector.size(); ++i) { /* do stuff */ }
Because this pattern is used so often, the amount of compiler warning spam about this comparison between signed and unsigned types tends to hide more useful warnings. Note that we definitely do not have vectors with more than INT_MAX elements, and note that until now we used two ways to fix the compiler warning:
for (unsigned i = 0; i < someVector.size(); ++i) { /*do stuff*/ }
This usually works but might silently break if the loop contains any code like 'if (i-1 >= 0) ...', etc.
for (int i = 0; i < static_cast<int>(someVector.size()); ++i) { /*do stuff*/ }
This change does not have any side effects, but it does make the loop a lot less readable. (And it's more typing.)
So I came up with the following idea:
template <typename T> struct vector : public std::vector<T>
{
typedef std::vector<T> base;
int size() const { return static_cast<int>(base::size()); }
int max_size() const { return static_cast<int>(base::max_size()); }
int capacity() const { return static_cast<int>(base::capacity()); }
vector() : base() {}
vector(int n) : base(n) {}
vector(int n, const T& t) : base(n, t) {}
vector(const base& other) : base(other) {}
};
template <typename Key, typename Data> struct map : public std::map<Key, Data>
{
typedef std::map<Key, Data> base;
typedef typename base::key_compare key_compare;
int size() const { return static_cast<int>(base::size()); }
int max_size() const { return static_cast<int>(base::max_size()); }
int erase(const Key& k) { return static_cast<int>(base::erase(k)); }
int count(const Key& k) { return static_cast<int>(base::count(k)); }
map() : base() {}
map(const key_compare& comp) : base(comp) {}
template <class InputIterator> map(InputIterator f, InputIterator l) : base(f, l) {}
template <class InputIterator> map(InputIterator f, InputIterator l, const key_compare& comp) : base(f, l, comp) {}
map(const base& other) : base(other) {}
};
// TODO: similar code for other container types
What you see is basically the STL classes with the methods that return size_type redefined to return plain 'int'. The constructors are needed because they aren't inherited.
What would you think of this as a developer, if you'd see a solution like this in an existing codebase?
Would you think 'whaa, they're redefining the STL, what a huge WTF!', or would you think this is a nice simple solution to prevent bugs and increase readability. Or maybe you'd rather see we had spent (half) a day or so on changing all these loops to use std::vector<>::iterator?
(In particular if this solution was combined with banning the use of unsigned types for anything but raw data (e.g. unsigned char) and bit masks.)
Don't derive publicly from STL containers. They have non-virtual destructors, so deleting one of your objects through a pointer to the base class invokes undefined behaviour. If you must derive, e.g., from a vector, do it privately and expose the parts you need with using declarations.
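A brief sketch of that private-inheritance alternative (the name IntVector is made up; only a few members are re-exposed for illustration):

```cpp
#include <vector>

template <typename T>
class IntVector : private std::vector<T>   // private base: no delete-through-base risk
{
    typedef std::vector<T> base;
public:
    using base::begin;
    using base::end;
    using base::push_back;
    using base::operator[];

    IntVector() {}
    explicit IntVector(int n) : base(n) {}

    // the int-returning size() that motivated the wrapper in the first place
    int size() const { return static_cast<int>(base::size()); }
};
```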
Here, I'd just use a size_t as the loop variable. It's simple and readable. The poster who commented that using an int index exposes you as a n00b is correct. However, using an iterator to loop over a vector exposes you as a slightly more experienced n00b - one who doesn't realize that the subscript operator for vector is constant time. (vector<T>::size_type is accurate, but needlessly verbose IMO).
While I don't think "use iterators, otherwise you look n00b" is a good solution to the problem, deriving from std::vector appears much worse than that.
First, developers expect vector to be std::vector, and map to be std::map. Second, your solution does not scale to other containers, or to other classes/libraries that interact with containers.
Yes, iterators are ugly, iterator loops are not very well readable, and typedefs only cover up the mess. But at least, they do scale, and they are the canonical solution.
My solution? An STL for-each macro (a minimal sketch follows below). That is not without problems (mainly, it is a macro, yuck), but it gets the meaning across. It is not as advanced as e.g. this one, but it does the job.
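In case it helps, a minimal sketch of such a macro (the answer's actual macro is not shown and presumably predates C++11; with auto it becomes trivial):

```cpp
// Iterate a standard container with a named iterator variable.
// Usage: STL_FOR_EACH(it, someVector) { doStuff(*it); }
#define STL_FOR_EACH(it, container) \
    for (auto it = (container).begin(); it != (container).end(); ++it)
```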
I made this community wiki... Please edit it. I don't agree with the advice against "int" anymore. I now see it as not bad.
Yes, I agree with Richard. You should never use 'int' as the counting variable in a loop like those. The following is how you might want to do various loops using indices (although there is little reason to; occasionally this can be useful).
Forward
for(std::vector<int>::size_type i = 0; i < someVector.size(); i++) {
/* ... */
}
Backward
You can do this, which is perfectly defined behavior:
for(std::vector<int>::size_type i = someVector.size() - 1;
i != (std::vector<int>::size_type) -1; i--) {
/* ... */
}
Soon, with C++1x (the next C++ version) coming along nicely, you will be able to do it like this:
for(auto i = someVector.size() - 1; i != (decltype(i)) -1; i--) {
/* ... */
}
Decrementing below 0 will cause i to wrap around, because it is unsigned.
But unsigned will make bugs slurp in
That should never be an argument to make it the wrong way (using 'int').
Why not use std::size_t above?
The C++ Standard defines in 23.1 p5 (Container Requirements) that T::size_type, for T being some Container, is some implementation-defined unsigned integral type. Now, using std::size_t for i above will let bugs slurp in silently. If T::size_type is smaller or larger than std::size_t, then it can overflow i, or never reach (std::size_t)-1 if someVector.size() == 0. Likewise, the condition of the loop would be broken completely.
Definitely use an iterator. Soon you will be able to use the 'auto' type, for better readability (one of your concerns) like this:
for (auto i = someVector.begin();
i != someVector.end();
++i)
Skip the index
The easiest approach is to sidestep the problem by using iterators, range-based for loops, or algorithms:
for (auto it = begin(v); it != end(v); ++it) { ... }
for (const auto &x : v) { ... }
std::for_each(v.begin(), v.end(), ...);
This is a nice solution if you don't actually need the index value. It also handles reverse loops easily.
Use an appropriate unsigned type
Another approach is to use the container's size type.
for (std::vector<T>::size_type i = 0; i < v.size(); ++i) { ... }
You can also use std::size_t (from <cstddef>). There are those who (correctly) point out that std::size_t may not be the same type as std::vector<T>::size_type (though it usually is). You can, however, be assured that the container's size_type will fit in a std::size_t. So everything is fine, unless you use certain styles for reverse loops. My preferred style for a reverse loop is this:
for (std::size_t i = v.size(); i-- > 0; ) { ... }
With this style, you can safely use std::size_t, even if it's a larger type than std::vector<T>::size_type. The style of reverse loops shown in some of the other answers require casting a -1 to exactly the right type and thus cannot use the easier-to-type std::size_t.
Use a signed type (carefully!)
If you really want to use a signed type (or if your style guide practically demands one), like int, then you can use this tiny function template that checks the underlying assumption in debug builds and makes the conversion explicit so that you don't get the compiler warning message:
#include <cassert>
#include <cstddef>
#include <limits>
template <typename ContainerType>
constexpr int size_as_int(const ContainerType &c) {
const auto size = c.size(); // if no auto, use `typename ContainerType::size_type`
assert(size <= static_cast<std::size_t>(std::numeric_limits<int>::max()));
return static_cast<int>(size);
}
Now you can write:
for (int i = 0; i < size_as_int(v); ++i) { ... }
Or reverse loops in the traditional manner:
for (int i = size_as_int(v) - 1; i >= 0; --i) { ... }
The size_as_int trick is only slightly more typing than the loops with the implicit conversions, you get the underlying assumption checked at runtime, you silence the compiler warning with the explicit cast, you get the same speed as non-debug builds because it will almost certainly be inlined, and the optimized object code shouldn't be any larger because the template doesn't do anything the compiler wasn't already doing implicitly.
You're overthinking the problem.
Using a size_t variable is preferable, but if you don't trust your programmers to use unsigned correctly, go with the cast and just deal with the ugliness. Get an intern to change them all and don't worry about it after that. Turn on warnings as errors and no new ones will creep in. Your loops may be "ugly" now, but you can understand that as the consequences of your religious stance on signed versus unsigned.
vector.size() returns a size_t, so just change int to size_t and it should be fine.
Richard's answer is more correct, except that it's a lot of work for a simple loop.
I notice that people have very different opinions about this subject. I also have an opinion which does not convince others, so it makes sense to look for support from some gurus, and I found the C++ Core Guidelines:
https://isocpp.github.io/CppCoreGuidelines/CppCoreGuidelines
maintained by Bjarne Stroustrup and Herb Sutter, and their last update, upon which I base the information below, is of April 10, 2022.
Please take a look at the following code rules:
ES.100: Don’t mix signed and unsigned arithmetic
ES.101: Use unsigned types for bit manipulation
ES.102: Use signed types for arithmetic
ES.107: Don’t use unsigned for subscripts, prefer gsl::index
So, supposing that we want to index in a for loop and for some reason the range based for loop is not the appropriate solution, then using an unsigned type is also not the preferred solution. The suggested solution is using gsl::index.
But in case you don’t have gsl around and you don’t want to introduce it, what then?
In that case I would suggest having a utility template function like the one suggested by Adrian McCarthy: size_as_int.
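For completeness, a small sketch of that last suggestion (gsl::index in the Microsoft GSL is simply a signed type, std::ptrdiff_t, so it is easy to approximate if you do not want the dependency; the cast plays the role of size_as_int here):

```cpp
#include <cstddef>
#include <vector>

namespace gsl { using index = std::ptrdiff_t; }   // stand-in if the real GSL is absent

void example(const std::vector<int>& v)
{
    // signed index, signed bound: no signed/unsigned comparison warning
    for (gsl::index i = 0; i < static_cast<gsl::index>(v.size()); ++i) {
        // use v[i]
    }
}
```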