C++: Picking a type for a constant

So on a fairly regular basis, I find that the type of some constant I declared (typically an integer, but occasionally something else, such as a string) is not the ideal type in a context where it is used, requiring a cast or triggering a compiler warning about the implicit conversion.
E.g. in one piece of code I had something like the below, and got a signed/unsigned comparison issue.
static const int MAX_FOO = 16;
...
if (container.size() > MAX_FOO) {...}  // warning: comparison between signed and unsigned
I have been thinking of just always using the smallest / most basic type allowed for a given constant (e.g. char, unsigned char, or const char* rather than, say, int, size_t, or std::string), but was wondering whether this is really a good idea, and whether there are places where it would potentially be a really bad idea. For example, could code using the auto keyword (or perhaps templates) deduce too small a type and overflow on what appeared to be a safe operation?

Going for the smallest type that can hold the initial value is a bad habit. That invites overflow.
Always code for the most general (which according to Murphy's Law is the worst) case. As templates generalize things, that makes the worst case a lot worse. Be prepared for bizarre kinds of overflows and avoid negative numbers while unsigned types are in the neighborhood.
std::size_t is the best choice for the size or length of anything, for the reason you mentioned. But subtract pointers and you get a std::ptrdiff_t instead. Personally, I recommend casting the result of such a subtraction to std::size_t if it can be guaranteed to be non-negative.
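As a minimal sketch of that pattern (the function name is hypothetical):
#include <cstddef>

// Pointer subtraction yields std::ptrdiff_t; cast to std::size_t only
// when the difference is guaranteed to be non-negative.
std::size_t distance_between(const char* first, const char* last)
{
    std::ptrdiff_t diff = last - first;    // signed; may be negative in general
    return static_cast<std::size_t>(diff); // caller guarantees first <= last
}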
char * does not own its string in the C++ sense as std::string does, so the latter is the more conservative choice.
This question is so broad that no more specific advice can be made…

Related

Why is std::ssize being forced to a minimum size for its signed size type?

In C++20, std::ssize is being introduced to obtain the signed size of a container for generic code. (And the reason for its addition is explained here.)
Somewhat peculiarly, the definition given there (combining with common_type and ptrdiff_t) has the effect of forcing the return value to be "either ptrdiff_t or the signed form of the container's size() return value, whichever is larger".
P1227R1 indirectly offers a justification for this ("it would be a disaster for std::ssize() to turn a size of 60,000 into a size of -5,536").
This seems to me like an odd way to try to "fix" that, however.
Containers which intentionally define a uint16_t size and are known to never exceed 32,767 elements will still be forced to use a larger type than required.
The same thing would occur for containers using a uint8_t size and 127 elements, respectively.
In desktop environments, you probably don't care; but this might be important for embedded or otherwise resource-constrained environments, especially if the resulting type is used for something more persistent than a stack variable.
Containers which use the default size_t size on 32-bit platforms but which nevertheless do contain between 2B and 4B items will hit exactly the same problem as above.
If there still exist platforms for which ptrdiff_t is smaller than 32 bits, they will hit the same problem as well.
Wouldn't it be better to just use the signed type as-is (without extending its size) and to assert that a conversion error has not occurred (e.g. that the result is not negative)?
Am I missing something?
To expand on that last suggestion a bit (inspired by Nicol Bolas' answer): if it were implemented the way that I suggested, then this code would Just Work™:
template <typename T>
void DoSomething(int16_t i, T const& item);

// hypothetical: compiles cleanly only if std::ssize(rng) returns int16_t
for (int16_t i = 0, len = std::ssize(rng); i < len; ++i)
{
    DoSomething(i, rng[i]);
}
With the current implementation, however, this produces warnings and/or errors unless static_casts are explicitly added to narrow the result of ssize, or int i is used instead and then narrowed in the function call (and the range indexing); neither of these seems like an improvement.
Containers which intentionally define a uint16_t size and are known to never exceed 32,767 elements will still be forced to use a larger type than required.
It's not like the container is storing the size as this type. The conversion happens via accessing the value.
As for embedded systems, embedded systems programmers already know about C++'s propensity to increase the size of small types. So if they expect a type to be an int16_t, they're going to spell that out in the code, because otherwise C++ might just promote it to an int.
Furthermore, there is no standard way to ask about what size a range is "known to never exceed". decltype(size(range)) is something you can ask for; sized ranges are not required to provide a max_size function. Without such an ability, the safest assumption is that a range whose size type is uint16_t can assume any size within that range. So the signed size should be big enough to store that entire range as a signed value.
Your suggestion is basically that any ssize call is potentially unsafe, since half of any size range cannot be validly stored in the return type of ssize.
Containers which use the default size_t size on 32-bit platforms but which nevertheless do contain between 2B and 4B items will hit exactly the same problem as above.
Assuming that it is valid for ptrdiff_t to not be a signed 64-bit integer on such platforms, there isn't really a valid solution to that problem. So yes, there will be cases where ssize is potentially unsafe.
ssize currently is potentially unsafe in cases where it is not possible to be safe. Your proposal would make ssize potentially unsafe in all cases.
That's not an improvement.
And no, merely asserting/contract checking is not a viable solution. The point of ssize is to make for(int i = 0; i < std::ssize(rng); ++i) work without the compiler complaining about signed/unsigned mismatch. To get an assert because of a conversion failure that didn't need to happen (and BTW, cannot be corrected without using std::size, which we are trying to avoid), one which is ultimately irrelevant to your algorithm? That's a terrible idea.
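For illustration, a minimal sketch of the loop ssize exists to support (C++20; the container name is an assumption):
#include <cstddef>
#include <iostream>
#include <iterator>
#include <vector>

int main()
{
    std::vector<int> rng{1, 2, 3};
    // std::ssize returns a signed type, so a signed loop index can be
    // compared against it with no signed/unsigned mismatch warning.
    for (std::ptrdiff_t i = 0; i < std::ssize(rng); ++i)
        std::cout << rng[i] << '\n';
}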
if it were implemented the way that I suggested, then this code would Just Work™:
Let us ignore the question of how often it is that a user would write this code.
The reason your compiler will expect/require you to use a cast there is because you are asking for an inherently dangerous operation: you are potentially losing data. Your code only "Just Works™" if the current size fits into an int16_t; that makes the conversion statically dangerous. This is not something that should implicitly take place, so the compiler suggests/requires you to explicitly ask for it. And users looking at that code get a big, fat eyesore reminding them that a dangerous thing is being done.
That is all to the good.
See, if your suggested implementation were how ssize behaved, then that means we must treat every use of ssize as just as inherently dangerous as the compiler treats your attempted implicit conversion. But unlike static_cast, ssize is small and easily missed.
Dangerous operations should be called out as such. Since ssize is small and difficult to notice by design, it therefore should be as safe as possible. Ideally, it should be as safe as size, but failing that, it should be unsafe only to the extent that it is impossible to make it safe.
Users should not look on ssize usage as something dubious or disconcerting; they should not fear to use it.

C++ vector::size_type: signed vs unsigned; int vs. long

I have been doing some testing of my application by compiling it on different platforms, and the shift from a 64-bit system to a 32-bit system is exposing a number of issues.
I make heavy use of vectors, strings, etc., and as such need to count them. However, my functions also make use of 32-bit unsigned numbers because in many cases I need to explicitly consume a positive integer.
I'm having issues with seemingly simple tasks such as std::min and std::max, which may be more systemic. Consider the following code:
uint32_t getmax()
{
    return _vecContainer.size();  // implicitly narrows from size_type where size_type is 64-bit
}
Seems simple enough: I know that a vector can't have a negative number of elements, so returning an unsigned integer makes complete sense.
void setRowCol(const uint32_t &r_row, const uint32_t &r_col)
{
    myContainer_t mc;
    mc.row = r_row;
    mc.col = r_col;
    _vecContainer.push_back(mc);
}
Again, simple enough.
Problem:
uint32_t foo(const uint32_t &r_row)
{
    return std::min(r_row, _vecContainer.size());  // error: conflicting deduced types for _Tp
}
This gives me errors such as:
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/../include/c++/v1/algorithm:2589:1: note: candidate template ignored: deduced conflicting types for parameter '_Tp' ('unsigned long' vs. 'unsigned int')
min(const _Tp& __a, const _Tp& __b)
I did a lot of digging, and on one platform vector::size_type is an 8-byte number, while by design I am using unsigned 4-byte numbers. This causes trouble because std::min deduces a single template parameter from both arguments, so an unsigned long and an unsigned int produce conflicting deduced types rather than an implicit conversion.
The solution was to do this the old-fashioned way:
#define MIN_M(a, b) (((a) < (b)) ? (a) : (b))
return MIN_M(r_row, _vecContainer.size());
Which works dandy. But the systemic issue remains: when planning for multiple platform support, how do you handle instances like this? I could use size_t as my standard size, but that adds other complications (e.g. moving from one platform which supports 64 bit numbers to another which supports 32 bit numbers at a later date). The bigger issue is that size_t is unsigned, so I can't update my signatures:
size_t foo(const size_t &r_row)
// bad, this allows -1 to be passed, which I don't want
Any suggestions?
EDIT: I had read somewhere that size_t was signed, and I've since been corrected. So far it looks like this is a limitation of my own design (e.g. 32-bit numbers vs. using std::vector::size_type and/or size_t).
One way to deal with this is to use
std::vector<Type>::size_type
as the underlying type of your function parameters/returns, or auto returns if using C++14.
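For instance, a sketch against the question's code (the wrapping class name is hypothetical):
#include <algorithm>
#include <vector>

struct myContainer_t { int row; int col; };

class Grid {
    std::vector<myContainer_t> _vecContainer;
public:
    // Matching the container's size type removes both the narrowing
    // and the conflicting-deduction error in std::min.
    std::vector<myContainer_t>::size_type getmax() const
    {
        return _vecContainer.size();
    }
    std::vector<myContainer_t>::size_type foo(std::vector<myContainer_t>::size_type r_row) const
    {
        return std::min(r_row, _vecContainer.size()); // both arguments deduce to size_type
    }
};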
An answer in the form of a set of tidbits:
Instead of relying on the compiler to deduce the type, you can explicitly specify the type when using function templates like std::min<T>. For example: std::min<std::uint32_t>(4, my_vec.size());
Turn on all the compiler warnings related to signed versus unsigned comparisons and implicit narrowing conversions. Use brace initialization where you can, as it will treat narrowing conversions as errors (see the sketch after this list).
If you explicitly want to use 32-bit values like std::uint32_t, I'd try to find the minimal number of places to explicitly convert (i.e., static_cast) the "sizes" to the smaller types. You don't want casts everywhere, but if you're using library container sizes internally and you want your API to use std::uint32_t, explicitly cast at the API boundaries so that a user of your class never has to worry about doing the conversion themselves. If you can keep the conversions to just a couple places, it becomes practical to add run-time checks (i.e., assertions) that the size has not actually outgrown the range of the smaller type.
If you don't care about the exact size, use std::size_t, which is almost certainly identical to std::XXX::size_type for all of the standard containers. It's theoretically possible for them to be different, but it doesn't happen in practice. In most contexts, std::size_t is less verbose than std::vector::size_type, so it makes a good compromise.
Lots of people (including many people on the C++ standards committee) will tell you to avoid unsigned values even for sizes and indexes. I understand and respect their arguments, but I don't find them persuasive enough to justify the extra friction at the interface with the standard library. Whether or not it's an historical artifact that std::size_t is unsigned, the fact is that the standard library uses unsigned sizes extensively. If you use something else, your code ends up littered with implicit conversions, all of which are potential bugs. Worse, those implicit conversions make turning on the compiler warnings impractical, so all those latent bugs remain relatively invisible. (And even if you know your sizes will never exceed the smaller type, being forced to turn off the compiler warnings for signedness and narrowing means you could miss bugs in completely unrelated parts of the code.) Match the types of the APIs you're using as much as possible, assert and explicitly convert when necessary, and turn on all the warnings.
Keep in mind that auto is not a panacea. for (auto i = 0; i < my_vec.size(); ++i) ... is just as bad as for (int i .... But if you generally prefer algorithms and iterators to raw loops, auto will get you pretty far.
With division you must never divide unless you know the denominator is not 0. Similarly, with unsigned integral types, you must never subtract unless you know the subtrahend is smaller than or equal to the original value. If you can make that a habit, you can avoid the bugs that the always-use-a-signed-type folks are concerned about.
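A short sketch combining a few of these tidbits (the function names are hypothetical):
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <limits>
#include <vector>

// API boundary: the container uses std::size_t internally, but the public
// interface promises std::uint32_t; one checked, explicit narrowing.
std::uint32_t count(const std::vector<int>& v)
{
    assert(v.size() <= std::numeric_limits<std::uint32_t>::max());
    return static_cast<std::uint32_t>(v.size());
}

// Guarded unsigned subtraction: check before subtracting, never after.
std::size_t gap(std::size_t a, std::size_t b)
{
    return (a >= b) ? a - b : 0;
}

// Brace initialization rejects narrowing at compile time:
// std::uint32_t n{v.size()};  // error on platforms where size_t is wider than 32 bits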

Making unsigned integer underflow throw an exception

I understand that there are applications in which using unsigned integer over/underflow is a good way to get cheap modular arithmetic.
In my code, I use uint exclusively for indices to containers, so I never want this behaviour.
Is this a bad idea? Should I be using int everywhere instead? I do have to do some unsavoury things to get a for loop to count down to 0.
Is there a commonly used implementation of a less unsafe unsigned integer type? Something that throws an exception?
Do compilers (for me gcc, clang) provide a mechanism for less unsafe behaviour in the given compilation unit?
First, a terminology quibble: there is no such thing as unsigned integer underflow. Unsigned arithmetic is defined to wrap around (it is modular arithmetic), so "wrap-around" is probably the phrase you meant.
Second, is this a common scenario to be in? Yes, it is a bit. You're not the only one doing "unsavoury things" with loops for reverse counting, and I bet there are a ton of bugs out there where people haven't done "unsavoury things" and, as a result, their code has an unsavoury infinite loop hidden in it. Mind you, I'm not sure I'd go so far as to call unsigneds "unsafe" as a result; like anything, they are the right tool for a subset of infinite possible jobs, and within that subset they are perfectly safe.
There is debate over whether unsigned integers should be used for array indexes at all. Some standard committee members believe that their use in the standard library was a mistake; I know that several members of the C++ community here on Stack Overflow also hate unsigned values and wish they'd go away.
Personally I think having access to the full range of the integer by default is absolutely crucial (and losing that is not worth it for a single "-1" sentinel value or whatever), so I think that — while you're not alone in this requirement, and it's a sensible requirement — using unsigned array indexes by default is a good thing. (And what the heck is a negative array index? Semantics, people!)
But that doesn't help you in this scenario. So, what can you do about it? No, there's no trapping unsigned integer implementation (at least, not one that I'm aware of, let alone widespread), because that would literally violate the rules of the type as defined by C++: unsigned arithmetic is required to wrap around, so a type that throws instead would no longer be an unsigned integer in the standard's sense.
You will have to use signed integers and check for "logical underflow" (i.e. going out of your desired range, say -1) yourself. You could wrap this behaviour in a class.
I suppose you could actually just wrap an unsigned integer while you're at it, adding some extra logic to operator-- and operator-= to detect a wrap-around and throw.
But I guess my point is that, whatever you do, it's going to be in your "code space" and thus subject to decreased performance. You can't eke out this behaviour from the platform itself.
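As a minimal sketch of such a wrapper (the type and member names are made up for illustration):
#include <stdexcept>

// An unsigned index that throws instead of wrapping below zero.
class CheckedIndex {
    unsigned value_;
public:
    explicit CheckedIndex(unsigned v) : value_(v) {}
    CheckedIndex& operator--()
    {
        if (value_ == 0) throw std::underflow_error("CheckedIndex below zero");
        --value_;
        return *this;
    }
    CheckedIndex& operator-=(unsigned rhs)
    {
        if (rhs > value_) throw std::underflow_error("CheckedIndex below zero");
        value_ -= rhs;
        return *this;
    }
    unsigned value() const { return value_; }
};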

Should I use int or unsigned int when working with STL container?

Referring to this guide:
https://google.github.io/styleguide/cppguide.html#Integer_Types
Google suggests to use int in the most of time.
I try to follow this guide and the only problem is with STL containers.
Example 1.
void setElement(int index, int value)
{
    if (index >= someExternalVector.size()) return;  // >= : index == size() is also out of range
    ...
}
Comparing index and .size() is generating a warning.
Example 2.
for (int i = 0; i < someExternalVector.size(); ++i)
{
    ...
}
Same warning between i and .size().
If I declare index or i as unsigned int, the warning is off, but the type declaration will propagate, then I have to declare more variables as unsigned int, then it contradicts the guide and loses consistency.
The best way I can think is to use a cast like:
if (index >= static_cast<int>(someExternalVector.size()))
or
for (int i = 0; i < static_cast<int>(someExternalVector.size()); ++i)
But I really don't like the casts.
Any suggestion?
Some detailed thoughts below:
The advantage of using only signed integers is that I can avoid signed/unsigned warnings and casts, and be sure every value can be negative (for consistency), so -1 can be used to represent invalid values.
In many cases loop counters are mixed with other constants or struct members, so it is problematic if signedness is not consistent: the code ends up full of warnings and casts.
Unsigned types have three characteristics, one of which is qualitatively 'good' and one of which is qualitatively 'bad':
They can hold twice as many values as the same-sized signed type (good)
The size_t version (that is, 32-bit on a 32-bit machine, 64-bit on a 64-bit machine, etc) is useful for representing memory (addresses, sizes, etc) (neutral)
They wrap below 0, so subtracting 1 in a loop or using -1 to represent an invalid index can cause bugs (bad; illustrated just below). Signed types overflow too, but for them it is undefined behaviour rather than defined wrap-around.
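To make that last point concrete:
unsigned u = 0;
--u;  // well-defined wrap-around: u is now UINT_MAX (4294967295 for a 32-bit unsigned)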
The STL uses unsigned types because of the first two points above: in order to not limit the potential size of array-like classes such as vector and deque (although you have to question how often you would want 4294967296 elements in a data structure); because a negative value will never be a valid index into most data structures; and because size_t is the correct type to use for representing anything to do with memory, such as the size of a struct, and related things such as the length of a string (see below.) That's not necessarily a good reason to use it for indexes or other non-memory purposes such as a loop variable. The reason it's best practice to do so in C++ is kind of a reverse construction, because it's what's used in the containers as well as other methods, and once used the rest of the code has to match to avoid the same problem you are encountering.
You should use a signed type when the value can become negative.
You should use an unsigned type when the value cannot become negative (possibly different to 'should not'.)
You should use size_t when handling memory sizes (the result of sizeof, and often related things such as string lengths). It is often chosen as the default unsigned type because it matches the platform the code is compiled for. For example, the length of a string is a size_t because a string can only ever have 0 or more elements, and there is no reason to limit a string's length method to something arbitrarily shorter than what can be represented on the platform, such as a 16-bit length (0-65535) on a 32-bit platform. Note (thanks, commenter Morwen) that std::intptr_t and std::uintptr_t are conceptually similar, will always be the right size for your platform, and should be used for memory addresses if you want something that's not a pointer. Note 2 (thanks, commenter rubenvb) that a string can only hold size_t - 1 elements, due to the value of npos. (A quick sketch of these three rules follows.)
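A quick sketch of the rules side by side (the variable names are made up):
#include <cstddef>
#include <string>

int balanceDelta = -5;                           // can be negative: signed
unsigned retryCount = 3;                         // cannot be negative: unsigned
std::size_t bytes = sizeof(double);              // memory size: size_t
std::size_t len = std::string("hello").length(); // a string length is a size_t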
This means that if you use -1 to represent an invalid value, you should use signed integers. If you use a loop to iterate backwards over your data, you should consider using a signed integer if you are not certain that the loop construct is correct (and as noted in one of the other answers, they are easy to get wrong.) IMO, you should not resort to tricks to ensure the code works - if code requires tricks, that's often a danger signal. In addition, it will be harder to understand for those following you and reading your code. Both these are reasons not to follow Jasmin Gray's answer above.
Iterators
However, using integer-based loops to iterate over the contents of a data structure is the wrong way to do it in C++, so in a sense the argument over signed vs unsigned for loops is moot. You should use an iterator instead:
std::vector<foo> bar;
for (std::vector<foo>::const_iterator it = bar.begin(); it != bar.end(); ++it) {
    // Access using *it or it->, e.g.:
    const foo & a = *it;
}
Iterators can be forward (as above) or reverse, for iterating backwards. Use the same syntax of it != bar.end(), because end() signals the end of the iteration, not the end of the underlying conceptual array, tree, or other structure.
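For example, the same traversal backwards, in the same style (reusing the bar vector from above):
for (std::vector<foo>::const_reverse_iterator it = bar.rbegin(); it != bar.rend(); ++it) {
    // Elements are visited back to front; no index arithmetic required.
    const foo & a = *it;
}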
In other words, the answer to your question 'Should I use int or unsigned int when working with STL containers?' is 'Neither. Use iterators instead.' Read more about:
Why use iterators instead of array indices in C++?
Why again (some more interesting points in the answers to this question)
Iterators in general - the different kinds, how to use them, etc.
What's left?
If you don't use an integer type for loops, what's left? Your own values, which are dependent on your data, but which in your case include using -1 for an invalid value. This is simple. Use signed. Just be consistent.
I am a big believer in using natural types, such as enums, and signed integers fit into this. They match our conceptual expectation more closely. When your mind and the code are aligned, you are less likely to write buggy code and more likely to expressively write correct, clean code.
Use the type that the container returns. In this case, size_t - which is an integer type that is unsigned.
(To be technical, it's std::vector<MyType>::size_type, but that's usually defined as size_t, so you're safe using size_t. Plain unsigned is also fine.)
But in general, use the right tool for the right job. Is the 'index' ever supposed to be negative? If not, don't make it signed.
By the by, you don't have to type out 'unsigned int'. 'unsigned' is shorthand for the same variable type:
int myVar1;       // signed int
unsigned myVar2;  // same as 'unsigned int'
The page linked to in the original question said:
Some people, including some textbook authors, recommend using unsigned types to represent numbers that are never negative. This is intended as a form of self-documentation. However, in C, the advantages of such documentation are outweighed by the real bugs it can introduce.
It's not just self-documentation, it's use the right tool for the right job. Saying that 'unsigned variables can cause bugs so don't use unsigned variables' is silly. Signed variables can also cause bugs. So can floats (more than integers). The only guaranteed bug-free code is code that doesn't exist.
Their example of why unsigned is evil, is this loop:
for (unsigned int i = foo.Length()-1; i >= 0; --i)  // bug: i >= 0 is always true for unsigned i
I have difficulty iterating backwards over a loop, and I usually make mistakes (with signed or unsigned integers) with it. Do I subtract one from size? Do I make it greater-than-AND-equal-to 0, or just greater than? It's a sloppy situation to begin with.
So what do you do with code that you know you have problems with? You change your coding style to fix the problem, make it simpler, and make it easier to read, and make it easier to remember. There is a bug in the loop they posted. The bug is, they wanted to allow a value below zero, but they chose to make it unsigned. It's their mistake.
But here's a simple trick that makes it easier to read, remember, write, and run. With unsigned variables. Here's the intelligent thing to do (obviously, this is my opinion).
for(unsigned i = myContainer.size(); i--> 0; )
{
std::cout << myContainer[i] << std::endl;
}
It's unsigned. It always works. No subtracting one from the starting size. No worrying about underflows. It just works. It's just smart. Do it right; don't stop using unsigned variables because someone somewhere once said they had a mistake with a for() loop and failed to train themselves not to make the mistake.
The trick to remembering it:
Set 'i' to the size. (don't worry about subtracting one)
Make 'i' point to 0 like an arrow. i --> 0 (it's a combination of post-decrementing (i--) and greater-than comparison (i > 0))
It's better to teach yourself tricks to code right than to throw away tools because you don't code right.
Which would you want to see in your code?
for(unsigned i = myContainer.size()-1; i >= 0; --i)
Or:
for(unsigned i = myContainer.size(); i--> 0; )
Not because it's less characters to type (that'd be silly), but because it's less mental clutter. It's simpler to mentally parse when skimming through code, and easier to spot mistakes.

Why is size_t unsigned?

Bjarne Stroustrup wrote in The C++ Programming Language:
The unsigned integer types are ideal for uses that treat storage as a bit array. Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea. Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules.
size_t seems to be unsigned "to gain one more bit to represent positive integers". So was this a mistake (or trade-off), and if so, should we minimize use of it in our own code?
Another relevant article by Scott Meyers is here. To summarize, he recommends not using unsigned in interfaces, regardless of whether the value is always positive or not. In other words, even if negative values make no sense, you shouldn't necessarily use unsigned.
size_t is unsigned for historical reasons.
On an architecture with 16-bit pointers, such as the "small" memory model used in DOS programming, it would be impractical to limit strings to 32 KB.
For this reason, the C standard requires (via required ranges) ptrdiff_t, the signed counterpart to size_t and the result type of pointer difference, to be effectively 17 bits.
Those reasons can still apply in parts of the embedded programming world.
However, they do not apply to modern 32-bit or 64-bit programming, where a much more important consideration is that the unfortunate implicit conversion rules of C and C++ make unsigned types into bug attractors, when they're used for numbers (and hence, arithmetical operations and magnitude comparisons). With 20-20 hindsight we can now see that the decision to adopt those particular conversion rules, where e.g. string( "Hi" ).length() < -3 is practically guaranteed to be true, was rather silly and impractical. However, that decision means that in modern programming, adopting unsigned types for numbers has severe disadvantages and no advantages – except for satisfying the feelings of those who find unsigned to be a self-descriptive type name, and fail to think of typedef int MyType.
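To see that conversion rule in action, a short sketch (any modern compiler will warn about the signed/unsigned comparison, but it compiles and prints 1):
#include <iostream>
#include <string>

int main()
{
    // -3 converts to a huge unsigned value before the comparison,
    // so this prints 1 (true) even though the length is only 2.
    std::cout << (std::string("Hi").length() < -3) << '\n';
}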
Summing up, it was not a mistake. It was a decision for then very rational, practical programming reasons. It had nothing to do with transferring expectations from bounds-checked languages like Pascal to C++ (which is a fallacy, but a very very common one, even if some of those who do it have never heard of Pascal).
size_t is unsigned because negative sizes make no sense.
(From the comments:)
It's not so much ensuring, as stating what is. When is the last time you saw a list of size -1? Follow that logic too far and you find that unsigned should not exist at all and bit operations shouldn't be permitted either. – geekosaur
More to the point: addresses, for reasons you should think about, are not signed. Sizes are generated by comparing addresses; treating an address as signed will do very much the wrong thing, and using a signed value for the result will lose data in a way that your reading of the Stroustrup quote evidently thinks is acceptable, but in fact is not. Perhaps you can explain what a negative address should do instead. – geekosaur
A reason for making index types unsigned is for symmetry with C and C++'s preference for half-open intervals. And if your index types are going to be unsigned, then it's convenient to also have your size type unsigned.
In C, you can have a pointer that points into an array. A valid pointer can point to any element of the array or one element past the end of the array. It cannot point to one element before the beginning of the array.
int a[2] = { 0, 1 };
int * p = a; // OK
++p; // OK, points to the second element
++p; // Still OK, but you cannot dereference this one.
++p; // Nope, now you've gone too far.
p = a;
--p; // oops! not allowed
C++ agrees and extends this idea to iterators.
Arguments against unsigned index types often trot out an example of traversing an array from back to front, and the code often looks like this:
// WARNING: Possibly dangerous code.
int a[size] = ...;
for (index_type i = size - 1; i >= 0; --i) { ... }
This code works only if index_type is signed, which is used as an argument that index types should be signed (and that, by extension, sizes should be signed).
That argument is unpersuasive because that code is non-idiomatic. Watch what happens if we try to rewrite this loop with pointers instead of indices:
// WARNING: Bad code.
int a[size] = ...;
for (int * p = a + size - 1; p >= a; --p) { ... }
Yikes, now we have undefined behavior! Ignoring the problem when size is 0, we have a problem at the end of the iteration because we generate an invalid pointer that points to the element before the first. That's undefined behavior even if we never try to dereference that pointer.
So you could argue to fix this by changing the language standard to make it legit to have a pointer that points to the element before the first, but that's not likely to happen. The half-open interval is a fundamental building block of these languages, so let's write better code instead.
A correct pointer-based solution is:
int a[size] = ...;
for (int * p = a + size; p != a; ) {
--p;
...
}
Many find this disturbing because the decrement is now in the body of the loop instead of in the header, but that's what happens when your for-syntax is designed primarily for forward loops through half-open intervals. (Reverse iterators solve this asymmetry by postponing the decrement.)
Now, by analogy, the index-based solution becomes:
int a[size] = ...;
for (index_type i = size; i != 0; ) {
--i;
...
}
This works whether index_type is signed or unsigned, but the unsigned choice yields code that maps more directly to the idiomatic pointer and iterator versions. Unsigned also means that, as with pointers and iterators, we'll be able to access every element of the sequence; we don't surrender half of our possible range in order to represent nonsensical values. While that's not a practical concern in a 64-bit world, it can be a very real concern in a 16-bit embedded processor or in building an abstract container type for sparse data over a massive range that can still provide the identical API as a native container.
On the other hand ...
Myth 1: std::size_t is unsigned is because of legacy restrictions that no longer apply.
There are two "historical" reasons commonly referred to here:
sizeof returns std::size_t, which has been unsigned since the days of C.
Processors had smaller word sizes, so it was important to squeeze that extra bit of range out.
But neither of these reasons, despite being very old, are actually relegated to history.
sizeof still returns a std::size_t which is still unsigned. If you want to interoperate with sizeof or the standard library containers, you're going to have to use std::size_t.
The alternatives are all worse: You could disable signed/unsigned comparison warnings and size conversion warnings and hope that the values will always be in the overlapping ranges so that you can ignore the latent bugs that using different types could potentially introduce. Or you could do a lot of range-checking and explicit conversions. Or you could introduce your own size type with clever built-in conversions to centralize the range checking, but no other library is going to use your size type.
And while most mainstream computing is done on 32- and 64-bit processors, C++ is still used on 16-bit microprocessors in embedded systems, even today. On those microprocessors, it's often very useful to have a word-sized value that can represent any value in your memory space.
Our new code still has to interoperate with the standard library. If our new code used signed types while the standard library continues to use unsigned ones, we make it harder for every consumer that has to use both.
Myth 2: You don't need that extra bit. (A.K.A., You're never going to have a string larger than 2GB when your address space is only 4GB.)
Sizes and indexes aren't just for memory. Your address space may be limited, but you might process files that are much larger than your address space. And while you might not have a string with more than 2GB, you could comfortably have a bitset with more than 2Gbits. And don't forget virtual containers designed for sparse data.
Myth 3: You can always use a wider signed type.
Not always. It's true that for a local variable or two, you could use a std::int64_t (assuming your system has one) or a signed long long and probably write perfectly reasonable code. (But you're still going to need some explicit casts and twice as much bounds checking or you'll have to disable some compiler warnings that might've alerted you to bugs elsewhere in your code.)
But what if you're building a large table of indices? Do you really want an extra two or four bytes for every index when you need just one bit? Even if you have plenty of memory and a modern processor, making that table twice as large could have deleterious effects on locality of reference, and all your range checks are now two-steps, reducing the effectiveness of branch prediction. And what if you don't have all that memory?
Myth 4: Unsigned arithmetic is surprising and unnatural.
This implies that signed arithmetic is not surprising or somehow more natural. And, perhaps it is when thinking in terms of mathematics where all the basic arithmetic operations are closed over the set of all integers.
But our computers don't work with integers. They work with an infinitesimal fraction of the integers. Our signed arithmetic is not closed over the set of all integers. We have overflow and underflow. To many, that's so surprising and unnatural, they mostly just ignore it.
This is a bug:
auto mid = (min + max) / 2; // BUGGY
If min and max are signed, the sum could overflow, and that yields undefined behavior. Most of us routinely miss these kinds of bugs because we forget that addition is not closed over the set of signed ints. We get away with it because our compilers typically generate code that does something reasonable (but still surprising).
If min and max are unsigned, the sum could still overflow, but the undefined behavior is gone. You'll still get the wrong answer, so it's still surprising, but not any more surprising than it was with signed ints.
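The standard rewrite never forms the overflowing sum (and C++20 added std::midpoint to the library for this exact problem); a sketch for the unsigned case:
#include <cstdint>

// Safe whenever min <= max: for unsigned types the difference max - min
// is exact (no wrap), and adding half of it back cannot exceed max.
std::uint32_t midpoint(std::uint32_t min, std::uint32_t max)
{
    return min + (max - min) / 2;
}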
The real unsigned surprise comes with subtraction: If you subtract a larger unsigned int from a smaller one, you're going to end up with a big number. This result isn't any more surprising than if you divided by 0.
Even if you could eliminate unsigned types from all your APIs, you still have to be prepared for these unsigned "surprises" if you deal with the standard containers or file formats or wire protocols. Is it really worth adding friction to your APIs to "solve" only part of the problem?