Why is std::ssize being forced to a minimum size for its signed size type?

In C++20, std::ssize is being introduced to obtain the signed size of a container for generic code. (And the reason for its addition is explained here.)
Somewhat peculiarly, the definition given there (combining with common_type and ptrdiff_t) has the effect of forcing the return value to be "either ptrdiff_t or the signed form of the container's size() return value, whichever is larger".
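For reference, the adopted definition boils down to something like this sketch (not the exact standard wording):
#include <cstddef>
#include <type_traits>

template <class C>
constexpr auto ssize(const C& c)
{
    // The return type is whichever is larger: ptrdiff_t or the signed
    // form of the container's size type.
    using R = std::common_type_t<std::ptrdiff_t,
                                 std::make_signed_t<decltype(c.size())>>;
    return static_cast<R>(c.size());
}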
P1227R1 indirectly offers a justification for this ("it would be a disaster for std::ssize() to turn a size of 60,000 into a size of -5,536").
This seems to me like an odd way to try to "fix" that, however.
Containers which intentionally define a uint16_t size and are known to never exceed 32,767 elements will still be forced to use a larger type than required.
The same thing would occur for a container using a uint8_t size that never holds more than 127 elements.
In desktop environments, you probably don't care; but this might be important for embedded or otherwise resource-constrained environments, especially if the resulting type is used for something more persistent than a stack variable.
Containers which use the default size_t size on 32-bit platforms but which nevertheless do contain between 2B and 4B items will hit exactly the same problem as above.
If there still exist platforms for which ptrdiff_t is smaller than 32 bits, they will hit the same problem as well.
Wouldn't it be better to just use the signed type as-is (without extending its size) and to assert that a conversion error has not occurred (e.g. that the result is not negative)?
Am I missing something?
To expand on that last suggestion a bit (inspired by Nicol Bolas' answer): if it were implemented the way that I suggested, then this code would Just Work™:
void DoSomething(int16_t i, T const& item);
for (int16_t i = 0, len = std::ssize(rng); i < len; ++i)
{
    DoSomething(i, rng[i]);
}
With the current implementation, however, this produces warnings and/or errors unless static_casts are explicitly added to narrow the result of ssize, or unless int i is used instead and then narrowed in the function call (and in the range indexing); neither of those seems like an improvement.
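To make that suggestion concrete, the implementation I have in mind would look something like this (a sketch; ssize_narrow is my placeholder name, not anything proposed):
#include <cassert>
#include <type_traits>

template <class C>
auto ssize_narrow(const C& c)
{
    using R = std::make_signed_t<decltype(c.size())>;
    const R result = static_cast<R>(c.size());
    assert(result >= 0); // fires if the size doesn't fit in the signed type
    return result;
}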

Containers which intentionally define a uint16_t size and are known to never exceed 32,767 elements will still be forced to use a larger type than required.
It's not like the container is storing the size as this type. The conversion happens via accessing the value.
As for embedded systems, embedded systems programmers already know about C++'s propensity to increase the size of small types. So if they expect a type to be an int16_t, they're going to spell that out in the code, because otherwise C++ might just promote it to an int.
Furthermore, there is no standard way to ask about what size a range is "known to never exceed". decltype(size(range)) is something you can ask for; sized ranges are not required to provide a max_size function. Without such an ability, the safest assumption is that a range whose size type is uint16_t can assume any size within that range. So the signed size should be big enough to store that entire range as a signed value.
Your suggestion is basically that any ssize call is potentially unsafe, since half of any size range cannot be validly stored in the return type of ssize.
Containers which use the default size_t size on 32-bit platforms but which nevertheless do contain between 2B and 4B items will hit exactly the same problem as above.
Assuming that it is valid for ptrdiff_t to not be a signed 64-bit integer on such platforms, there isn't really a valid solution to that problem. So yes, there will be cases where ssize is potentially unsafe.
ssize currently is potentially unsafe in cases where it is not possible to be safe. Your proposal would make ssize potentially unsafe in all cases.
That's not an improvement.
And no, merely asserting/contract checking is not a viable solution. The point of ssize is to make for(int i = 0; i < std::ssize(rng); ++i) work without the compiler complaining about signed/unsigned mismatch. To get an assert because of a conversion failure that didn't need to happen (and BTW, cannot be corrected without using std::size, which we are trying to avoid), one which is ultimately irrelevant to your algorithm? That's a terrible idea.
if it were implemented the way that I suggested, then this code would Just Work™:
Let us ignore the question of how often it is that a user would write this code.
The reason your compiler will expect/require you to use a cast there is because you are asking for an inherently dangerous operation: you are potentially losing data. Your code only "Just Works™" if the current size fits into an int16_t; that makes the conversion statically dangerous. This is not something that should implicitly take place, so the compiler suggests/requires you to explicitly ask for it. And users looking at that code get a big, fat eyesore reminding them that a dangerous thing is being done.
That is all to the good.
See, if your suggested implementation were how ssize behaved, then we would have to treat every use of ssize as just as inherently dangerous as the compiler treats your attempted implicit conversion. But unlike a static_cast, ssize is small and easily missed.
Dangerous operations should be called out as such. Since ssize is small and difficult to notice by design, it therefore should be as safe as possible. Ideally, it should be as safe as size, but failing that, it should be unsafe only to the extent that it is impossible to make it safe.
Users should not look on ssize usage as something dubious or disconcerting; they should not fear to use it.

Related

Making unsigned integer underflow throw an exception

I understand that there are applications in which using unsigned integer over/underflow is a good way to get cheap modular arithmetic.
In my code, I use uint exclusively for indices to containers, so I never want this behaviour.
Is this a bad idea? Should I be using int everywhere instead? I do have to do some unsavoury things to get a for loop to count down to 0.
Is there a commonly used implementation of a less unsafe unsigned integer type? Something that throws an exception?
Do compilers (for me gcc, clang) provide a mechanism for less unsafe behaviour in the given compilation unit?
First, a terminology quibble: there is no such thing as unsigned integer underflow, precisely because unsigned types wrap around (using modulo arithmetic); "wraparound" is probably the word you meant.
Second, is this a common scenario to be in? Somewhat, yes. You're not the only one doing "unsavoury things" with loops for reverse counting, and I bet there are a ton of bugs out there where people haven't done the "unsavoury things" and, as a result, their code has an unsavoury infinite loop hidden in it. Mind you, I'm not sure I'd go so far as to call unsigned types "unsafe" as a result; like anything, they are the right tool for a subset of infinitely many possible jobs, and within that subset they are perfectly safe.
There is debate over whether unsigned integers should be used for array indexes at all. Some standard committee members believe that their use in the standard library was a mistake; I know that several members of the C++ community here on Stack Overflow also hate unsigned values and wish they'd go away.
Personally I think having access to the full range of the integer by default is absolutely crucial (and losing that is not worth it for a single "-1" sentinel value or whatever), so I think that — while you're not alone in this requirement, and it's a sensible requirement — using unsigned array indexes by default is a good thing. (And what the heck is a negative array index? Semantics, people!)
But that doesn't help you in this scenario. So, what can you do about it? No, there's no trapping unsigned integer implementation (at least, not one that I'm aware of, let alone widespread) because that would literally violate the rules of the type as defined by C++: it would introduce well-defined underflow/overflow semantics to a type for which underflow/overflow shouldn't even be possible.
You will have to use signed integers and check for "logical underflow" (i.e. going out of your desired range, say -1) yourself. You could wrap this behaviour in a class.
I suppose you could actually just wrap an unsigned integer while you're at it, adding some extra logic to operator-- and operator-= to detect a wrap-around and throw.
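A minimal sketch of such a wrapper (the class name and the choice of exception are mine; only the decrementing operations are checked here):
#include <cstddef>
#include <stdexcept>

class checked_index {
    std::size_t value_;
public:
    explicit checked_index(std::size_t v = 0) : value_(v) {}

    checked_index& operator--() {              // --i
        if (value_ == 0)
            throw std::underflow_error("checked_index wrapped below zero");
        --value_;
        return *this;
    }

    checked_index& operator-=(std::size_t n) { // i -= n
        if (n > value_)
            throw std::underflow_error("checked_index wrapped below zero");
        value_ -= n;
        return *this;
    }

    std::size_t get() const { return value_; }
};

// Usage: checked_index i(0); --i; // throws instead of wrapping to SIZE_MAX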
But I guess my point is that, whatever you do, it's going to be in your "code space" and thus subject to decreased performance. You can't eke out this behaviour from the platform itself.

Do type conversions slow program running?

The title is quite obvious.
In my case, and for the sake of simplicity, I avoid using, for instance, unsigned int instead of int, as it makes coding faster and simpler.
(BTW, I'm using an Android IDE, CppDroid.)
Yet, the IDE frequently alerts me to implicit conversions in, for example, for loops where the incremented variable (an int) is compared with the size of a vector (a size_t / unsigned int).
My questions are:
Do type conversions take time?
If so, how long do they take compared to other common operations?
In case conversions do take some time, is it worthwhile to define variables correctly in order to avoid conversions?
Your question is valid, although the goal is misconstrued. It is paramount to correctly define variables, but not because of mysterious performance reasons.
It is to ensure correctness. Comparing an unsigned integer with a signed one is a ticking bomb, as is (most commonly) comparing a size_t with an int.
For example, consider the following snippet:
for (int i = 0; i < vec.size(); ++i) { }
For all you know, this code can lead to undefined behavior! If the size of the vector is bigger than the maximum value a signed int can hold (which is quite possible on 64-bit systems), your integer will overflow, which is undefined. The compiler might just remove the loop altogether if it can prove that the size of the vector is bigger than the maximum int!
The similar-looking (and also incorrect) line
for (unsigned int i = 0; i < vec.size(); ++i) { }
is not going to cause undefined behaviour, but it will hang the program whenever the vector size is greater than the maximum unsigned int. No good either.
And of course, the correct way of doing this is
for (decltype(vec.size()) i = 0; i < vec.size(); ++i) { }
It depends on what you convert to what.
That particular signed/unsigned mismatch costs zero overhead at runtime, but you may end up treating a negative number as a huge unsigned one (or the other way around) - so as long as you are using int and you don't expect to get into 2^31 territory, you are safe.
As safe as the people writing file I/O routines around 1990 were (never expecting to see a 3 GiB file in their lives)... which is not very funny nowadays (so much software is still broken on 2+ GiB file sizes).
Some other conversions, like int to uint8_t, may have a tiny overhead, so it's better to avoid them if possible (by designing the code to use the desired data type throughout).
I would first address clarity and functionality of the code, and that usually leads to using one particular data type for a particular value all the way through, without any conversions.
After the code works, you can measure the performance and consider what optimization makes sense (including usage of mismatched data types with conversions between them).
Conclusion: just fix it; use the proper data type.
Type conversions can incur performance hits (when signedness, bitness, or conversions between floating-point types are involved), but, as a general rule, the type identity of the many things in a program is merely a conceptual front-end language feature. When such hits do happen, however, it is because the types involved mean reasonably different things, and hence code must be emitted to properly perform the conversion.
Another thing, completely different from the above, is the invocation of type conversion operators in C++, which can run arbitrary code and thus most obviously influence the final program's behavior (not only its performance).
As mentioned by others, correct use of the type system matters most for program correctness, especially in languages such as C and C++. Using mismatched types can affect the program's behavior in some corner cases, while having no impact whatsoever on execution time otherwise.
It depends.
Actually converting the data between types will require extra calculations. That much should probably be obvious. Usually those calculations take extra time, so they will have a performance impact. However, there are several factors that mitigate the actual impact of this:
The compiler can optimize types in some cases to minimize conversions.
Some platforms implement certain conversions in hardware.
The primary concern surrounding calculations with unlike types typically has much less to do with performance and more to do with safety and producing expected results. That is why the compiler is warning you; the vast majority of compilers will not tell you that you are doing something inefficient, but they will tell you that you are doing something dangerous.
For example, comparing an int with an unsigned int is asking for trouble. The int has a negative range; the unsigned int has a larger positive range. On conversion to unsigned int, every negative int value becomes a huge positive one, larger than anything in the int's own positive range. It is very easy to generate endless loops or out-of-bounds errors with such a construct.
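A two-line illustration of that trap, assuming a 32-bit int:
int i = -1;
unsigned int u = 1u;
bool less = i < u; // false: i is converted to 4294967295u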
You should normally only worry about type conversion performance if you are dealing with huge amounts of data - like large vectors/arrays that need to be converted between formats. You would have to loop over the data in some way, so it would be a conscious action. For example, converting a 10000-element vector of chars to ints. In these cases, you might want to consider whether you have a design flaw that is requiring needless conversion of data.
It is worth pointing out that in the above example, even if the conversion itself were instant, the iteration and copy is not.
As for an example of platforms where this can be done to an extent in hardware, most video cards are able to interpret integers as floats on a normalized range, of the sort 255 --> 1.0. However, many other conversions, like conversions between image formats, are still done in software.
Given the platform and optimization details vary greatly, answering how long a given conversion takes relative to other operations is effectively impossible. If you are dealing with enough data that a conversion is creating a noticeable performance bottleneck, then profile that conversion.
It is worth it to make sure your types match to the best of your ability if you are dealing with enough data for it to matter; that is a subjective measurement of value, though, and will depend on what you are doing.
It is always worth it to make sure implicit type conversions do not cause errors, as errors due to them can be some of the worst possible in C/C++ (memory leaks, buffer overflows, access violations, etc.).

What's the best strategy to get rid of "warning C4267 possible loss of data"?

I ported some legacy code from win32 to win64. Not because the win32 object size was too small for our needs, but just because win64 is more standard now and we wish to port all our environments to this format (and we also use some 3rd party libs offering better performance in 64bits than in 32bits).
We end up with tons of:
warning C4267: 'argument': conversion from 'size_t' to '...', possible loss of data
Mainly due to code like unsigned int size = v.size();, where v is an STL container.
I know why the warning makes sense, why it is issued, and how it could be fixed. However, in this specific example, we never experienced cases where the container size exceeded unsigned int's max value in the past... so there is no reason for this problem to appear when the code is ported to a 64-bit environment.
We had discussions about the best strategy to suppress those noisy warnings (they may hide a relevant one that we would then miss), but we could not reach a decision on the appropriate strategy.
So I'm asking the question here, what would be the best recommended strategy?
1. Use a static_cast
Use a static_cast: unsigned int size = static_cast<unsigned int>(v.size());. I don't "like" it because we lose the 64-bit capability to store a huge amount of data in a container. But since our code never reached the 32-bit limit, this appears to be a safe solution...
2. Replace unsigned int by size_t
That's definitely harder, as the unsigned int size object in the example above could be passed to other functions, saved as a class attribute, and so on; removing a one-line warning could end up requiring hundreds of code changes...
3. Disable the warning
That's most likely a very bad idea, as it would also disable the warning in cases like uint8_t size = v.size(), which is definitely likely to cause loss of data....
4. Define a "safe cast"* function and use it
Something like:
#include <cassert>
#include <limits>

template <typename From, typename To> To safe_cast( const From& value )
{
    //assert( value <= std::numeric_limits<To>::max() && value >= std::numeric_limits<To>::min() );
    // Edit 19/05: the test above fails for some mixed signed/unsigned casts (e.g. int64_t to uint32_t); the round-trip test below is better:
    assert( value == static_cast<From>( static_cast<To>( value ) ) ); // verify we don't lose information!
    // or throw....
    return static_cast<To>( value );
}
5. Other solutions are welcome...
"Use solution 1 in this case but 2 in this case" could perfectly be a good answer.
Use the correct type (option 2) - the function/interface defines that type for you, use it.
std::size_t size = v.size(); // given vector<>::size_type is size_t
// or a more verbose
decltype(v)::size_type size = v.size();
It goes to the intent... you are getting the size of v and that size has a type. If the correct type had been used from the beginning, this would not have been a problem.
If you require that value later as another type, transform it then; the safe_cast<> is then a good alternative that includes the runtime bounds checking.
Option 6. Use auto
When you write size = v.size() and you are not concerned with what the type is, only that you use the correct type,
auto size = v.size();
And let the compiler do the hard work for you.
If (and only if) you have time pressure to get the code warning-free, I would initially disable the warning -- your code used to work with this, and it is, IMHO, extremely unlikely that in the cases where you assign to a 32-bit size you'll exceed it. (4G values in a collection - I doubt that'll fly in normal applications.)
That being said, for cases other than collections, the warning certainly has merit, so try to get it enabled sooner or later.
Second, when enabling it and fixing the code, my order of precedence would be:
Use auto (or size_t pre C++11) for locals where the value is not further narrowed.
For when narrowing is necessary, use your safe_cast if you can justify the overhead of introducing it to your team. (learning, enforcing, etc.)
Otherwise just use static_cast:
I do not think this is a problem with collections. Unless you know better, your collection will never have more than 4G items. IMHO, it just doesn't make sense to have that much data in a collection for any normal real-world use case. (That's not to say that there may not be the oddball case where you'll need such large data sets; it's just that you'll know when this is the case.)
For cases where what you narrow is not the count of a collection but some other numeric value, the narrowing is probably problematic anyway, so there you'll fix the code appropriately.

Why is size_t unsigned?

Bjarne Stroustrup wrote in The C++ Programming Language:
The unsigned integer types are ideal for uses that treat storage as a bit array. Using an unsigned instead of an int to gain one more bit to represent positive integers is almost never a good idea. Attempts to ensure that some values are positive by declaring variables unsigned will typically be defeated by the implicit conversion rules.
size_t seems to be unsigned "to gain one more bit to represent positive integers". So was this a mistake (or trade-off), and if so, should we minimize use of it in our own code?
Another relevant article by Scott Meyers is here. To summarize, he recommends not using unsigned in interfaces, regardless of whether the value is always positive or not. In other words, even if negative values make no sense, you shouldn't necessarily use unsigned.
size_t is unsigned for historical reasons.
On an architecture with 16-bit pointers, such as the "small" memory model used in DOS programming, it would be impractical to limit strings to 32 KB.
For this reason, the C standard requires (via required ranges) ptrdiff_t, the signed counterpart to size_t and the result type of pointer difference, to be effectively 17 bits.
Those reasons can still apply in parts of the embedded programming world.
However, they do not apply to modern 32-bit or 64-bit programming, where a much more important consideration is that the unfortunate implicit conversion rules of C and C++ make unsigned types into bug attractors when they're used for numbers (and hence, arithmetical operations and magnitude comparisons). With 20-20 hindsight we can now see that the decision to adopt those particular conversion rules, where e.g. string( "Hi" ).length() < -3 is practically guaranteed, was rather silly and impractical. However, that decision means that in modern programming, adopting unsigned types for numbers has severe disadvantages and no advantages – except for satisfying the feelings of those who find unsigned to be a self-descriptive type name, and fail to think of typedef int MyType.
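To spell that example out (a small illustration, assuming the usual size_t-based std::string):
#include <string>

// -3 is converted to std::string::size_type (an unsigned type), which
// yields a huge positive value, so the comparison holds:
const bool weird = std::string("Hi").length() < -3; // true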
Summing up, it was not a mistake. It was a decision for then very rational, practical programming reasons. It had nothing to do with transferring expectations from bounds-checked languages like Pascal to C++ (which is a fallacy, but a very very common one, even if some of those who do it have never heard of Pascal).
size_t is unsigned because negative sizes make no sense.
(From the comments:)
It's not so much ensuring, as stating what is. When is the last time you saw a list of size -1? Follow that logic too far and you find that unsigned should not exist at all and bit operations shouldn't be permitted either. – geekosaur
More to the point: addresses, for reasons you should think about, are not signed. Sizes are generated by comparing addresses; treating an address as signed will do very much the wrong thing, and using a signed value for the result will lose data in a way that your reading of the Stroustrup quote evidently thinks is acceptable, but in fact is not. Perhaps you can explain what a negative address should do instead. – geekosaur
A reason for making index types unsigned is for symmetry with C and C++'s preference for half-open intervals. And if your index types are going to be unsigned, then it's convenient to also have your size type unsigned.
In C, you can have a pointer that points into an array. A valid pointer can point to any element of the array or one element past the end of the array. It cannot point to one element before the beginning of the array.
int a[2] = { 0, 1 };
int * p = a; // OK
++p; // OK, points to the second element
++p; // Still OK, but you cannot dereference this one.
++p; // Nope, now you've gone too far.
p = a;
--p; // oops! not allowed
C++ agrees and extends this idea to iterators.
Arguments against unsigned index types often trot out an example of traversing an array from back to front, and the code often looks like this:
// WARNING: Possibly dangerous code.
int a[size] = ...;
for (index_type i = size - 1; i >= 0; --i) { ... }
This code works only if index_type is signed, which is used as an argument that index types should be signed (and that, by extension, sizes should be signed).
That argument is unpersuasive because that code is non-idiomatic. Watch what happens if we try to rewrite this loop with pointers instead of indices:
// WARNING: Bad code.
int a[size] = ...;
for (int * p = a + size - 1; p >= a; --p) { ... }
Yikes, now we have undefined behavior! Ignoring the problem when size is 0, we have a problem at the end of the iteration because we generate an invalid pointer that points to the element before the first. That's undefined behavior even if we never try to dereference that pointer.
So you could argue to fix this by changing the language standard to make it legit to have a pointer that points to the element before the first, but that's not likely to happen. The half-open interval is a fundamental building block of these languages, so let's write better code instead.
A correct pointer-based solution is:
int a[size] = ...;
for (int * p = a + size; p != a; ) {
    --p;
    ...
}
Many find this disturbing because the decrement is now in the body of the loop instead of in the header, but that's what happens when your for-syntax is designed primarily for forward loops through half-open intervals. (Reverse iterators solve this asymmetry by postponing the decrement.)
Now, by analogy, the index-based solution becomes:
int a[size] = ...;
for (index_type i = size; i != 0; ) {
    --i;
    ...
}
This works whether index_type is signed or unsigned, but the unsigned choice yields code that maps more directly to the idiomatic pointer and iterator versions. Unsigned also means that, as with pointers and iterators, we'll be able to access every element of the sequence--we don't surrender half of our possible range in order to represent nonsensical values. While that's not a practical concern in a 64-bit world, it can be a very real concern in a 16-bit embedded processor or in building an abstract container type for sparse data over a massive range that can still provide the identical API as a native container.
On the other hand ...
Myth 1: std::size_t is unsigned because of legacy restrictions that no longer apply.
There are two "historical" reasons commonly referred to here:
sizeof returns std::size_t, which has been unsigned since the days of C.
Processors had smaller word sizes, so it was important to squeeze that extra bit of range out.
But neither of these reasons, despite being very old, is actually relegated to history.
sizeof still returns a std::size_t which is still unsigned. If you want to interoperate with sizeof or the standard library containers, you're going to have to use std::size_t.
The alternatives are all worse: You could disable signed/unsigned comparison warnings and size conversion warnings and hope that the values will always be in the overlapping ranges so that you can ignore the latent bugs that using different types could potentially introduce. Or you could do a lot of range-checking and explicit conversions. Or you could introduce your own size type with clever built-in conversions to centralize the range checking, but no other library is going to use your size type.
And while most mainstream computing is done on 32- and 64-bit processors, C++ is still used on 16-bit microprocessors in embedded systems, even today. On those microprocessors, it's often very useful to have a word-sized value that can represent any value in your memory space.
Our new code still has to interoperate with the standard library. If our new code used signed types while the standard library continues to use unsigned ones, we make it harder for every consumer that has to use both.
Myth 2: You don't need that extra bit. (A.K.A., You're never going to have a string larger than 2GB when your address space is only 4GB.)
Sizes and indexes aren't just for memory. Your address space may be limited, but you might process files that are much larger than your address space. And while you might not have a string with more than 2GB, you could comfortably have a bitset with more than 2Gbits. And don't forget virtual containers designed for sparse data.
Myth 3: You can always use a wider signed type.
Not always. It's true that for a local variable or two, you could use a std::int64_t (assuming your system has one) or a signed long long and probably write perfectly reasonable code. (But you're still going to need some explicit casts and twice as much bounds checking or you'll have to disable some compiler warnings that might've alerted you to bugs elsewhere in your code.)
But what if you're building a large table of indices? Do you really want an extra two or four bytes for every index when you need just one bit? Even if you have plenty of memory and a modern processor, making that table twice as large could have deleterious effects on locality of reference, and all your range checks are now two-steps, reducing the effectiveness of branch prediction. And what if you don't have all that memory?
Myth 4: Unsigned arithmetic is surprising and unnatural.
This implies that signed arithmetic is not surprising or somehow more natural. And, perhaps it is when thinking in terms of mathematics where all the basic arithmetic operations are closed over the set of all integers.
But our computers don't work with integers. They work with an infinitesimal fraction of the integers. Our signed arithmetic is not closed over the set of all integers. We have overflow and underflow. To many, that's so surprising and unnatural, they mostly just ignore it.
This is a bug:
auto mid = (min + max) / 2; // BUGGY
If min and max are signed, the sum could overflow, and that yields undefined behavior. Most of us routinely miss these kinds of bugs because we forget that addition is not closed over the set of signed ints. We get away with it because our compilers typically generate code that does something reasonable (but still surprising).
If min and max are unsigned, the sum could still overflow, but the undefined behavior is gone. You'll still get the wrong answer, so it's still surprising, but not any more surprising than it was with signed ints.
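One common way to sidestep that particular midpoint bug, assuming min <= max and, for the signed case, both values non-negative (the typical index situation), is to restructure the arithmetic so no intermediate value exceeds max:
auto mid = min + (max - min) / 2; // no sum that can exceed max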
The real unsigned surprise comes with subtraction: If you subtract a larger unsigned int from a smaller one, you're going to end up with a big number. This result isn't any more surprising than if you divided by 0.
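For instance, assuming a 32-bit unsigned int:
unsigned int a = 2u;
unsigned int b = 3u;
unsigned int d = a - b; // 4294967293, not -1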
Even if you could eliminate unsigned types from all your APIs, you still have to be prepared for these unsigned "surprises" if you deal with the standard containers or file formats or wire protocols. Is it really worth adding friction to your APIs to "solve" only part of the problem?

What are the arguments against using size_t?

I have an API like this:
class IoType {
    ......
    StatusType writeBytes(......, size_t& bytesWritten);
    StatusType writeObjects(......, size_t& objsWritten);
};
A senior member of the team, whom I respect, seems to have a problem with the type size_t and suggests that I use C99 types. I know it sounds stupid, but I always think C99 types like uint32_t and uint64_t look ugly. I do use them, but only when it's really necessary; for instance, when I need to serialize/deserialize a structure, I do want to be specific about the sizes of my data members.
What are the arguments against using size_t? I know it's not a real type, but if I know for sure that even a 32-bit integer is enough for me, a size type seems appropriate for a number of bytes or a number of objects, etc.
Use exact-size types like uint32_t whenever you're dealing with serialization of any sort (binary files, networking, etc.). Use size_t whenever you're dealing with the size of an object in memory; that's what it's intended for. The functions and operators that deal with object sizes, like malloc, strlen, and sizeof, all use size_t.
If you use size_t correctly, your program will be maximally portable, and it will not waste time and memory on platforms where it doesn't need to. On 32-bit platforms, a size_t will be 32 bits—if you instead used a uint64_t, you'd waste time and space. Conversely, on 64-bit platforms, a size_t will be 64 bits—if you instead used a uint32_t, your program could behave incorrectly (maybe even crash or open up a security vulnerability) if it ever had to deal with a piece of memory larger than 4 GB.
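As a sketch of that division of labor (RecordHeader is a made-up example type, not from the question):
#include <cstddef>
#include <cstdint>

// On-disk / wire format: exact-size types, identical on every platform.
struct RecordHeader {
    std::uint32_t payloadBytes;
};

// In-memory bookkeeping: size_t, matching sizeof/malloc/strlen.
std::size_t headerSize = sizeof(RecordHeader);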
I can't think of anything wrong with using size_t in contexts where you don't need to serialize values. Also, using size_t correctly will increase the code's safety/portability across 32- and 64-bit platforms.
Uhm, it's not a good idea to replace size_t (a maximally portable thing) with a less portable C99 fixed size or minimum size unsigned type.
On the other hand, you can avoid a lot of technical problems (wasted time) by using the signed ptrdiff_t type instead. The standard library’s use of unsigned type is just for historical reasons. It made sense in its day, and even today on 16-bit architectures, but generally it is nothing but trouble & verbosity.
Making that change requires some support, though, in particular a general size function that returns array or container size as ptrdiff_t.
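Such a size function might look roughly like this (a sketch; in C++20, std::ssize now fills exactly this role):
#include <cstddef>

// Returns a container's size as the signed ptrdiff_t.
template< class Container >
std::ptrdiff_t n_items( const Container& c )
{
    return static_cast<std::ptrdiff_t>( c.size() );
}

// Overload for built-in arrays.
template< class T, std::size_t N >
std::ptrdiff_t n_items( T (&)[N] )
{
    return static_cast<std::ptrdiff_t>( N );
}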
Now, regarding your function signature
StatusType writeBytes(......, size_t& bytesWritten);
This forces the calling code’s choice of type for the bytes written count.
And then, with the unsigned type size_t forced on it, it is easy to introduce a bug, e.g. by checking whether that count is less or greater than some computed quantity.
A grotesque example: std::string("ah").length() < -5 is guaranteed true.
So instead, make that …
Size writeBytes(......);
or, if you do not want to use exceptions,
Size writeBytes(......, StatusType& status );
It is OK to have the enumeration of possible statuses be an unsigned type or whatever, because the only operations on status values will be equality checking and possibly use as keys.