Bug in std::basic_string in special case of allocator - c++

I use g++, and I have defined a custom allocator whose size_type is a byte (unsigned char).
I am using it with basic_string to create custom strings.
The "basic_string.tcc" code behaves erroneously because in the code of
_S_create(size_type __capacity, size_type __old_capacity, const _Alloc& __alloc)
the code checks for
const size_type __extra = __pagesize - __adj_size % __pagesize;
But all the arithmetic are byte arithmetic and so __pagesize that should have a value 4096, becomes 0 (because 4096 is a multiple of 256) and we have a "division by 0" exception (the code hangs).
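A minimal sketch of the failure mode (variable names are illustrative, not the actual libstdc++ internals):

#include <iostream>

int main()
{
    // With size_type == unsigned char, every value is reduced modulo 256,
    // so a page size of 4096 truncates to 0:
    unsigned char pagesize = 4096;    // 4096 % 256 == 0
    unsigned char adj_size = 100;

    std::cout << static_cast<int>(pagesize) << '\n';   // prints 0

    // The equivalent of the libstdc++ line would then take % by zero,
    // which is undefined behaviour:
    // unsigned char extra = pagesize - adj_size % pagesize;
    (void)adj_size;
}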
The question isn't what I should do, but how can I request a correction to the above code, and from whom? (I may be able to implement the corrections myself.)

Before you can request or suggest a change to something like that, you have to establish a strong case that there is indeed a problem that needs to be fixed. In my view there probably is not.
The question is: under which circumstances would it be legitimate (or useful) to define a size_type as unsigned char? I am not aware of anything in the standard that specifically disallows this choice. It is defined as
unsigned integer type - a type that can represent the size of the largest object in the allocation model.
And unsigned char is definitely an unsigned integer type as per s3.9.1. Interesting.
So is it useful? Clearly you seem to think so, but I'm not sure your case is strongly made out. You could work on providing evidence that this is an issue worth resolving.
So it seems to me the process is:
1. Establish whether unsigned char is intended to be included as a valid choice in the standard, or whether it should be excluded, or was just overlooked.
2. Raise a 'standards non-compliance' issue with the team for each compiler that has the problem, providing good reasoning and a repro case.
3. Consider submitting a patch, if this is something within your ability to fix.
Or you could just use short unsigned int instead. I would.

Related

Why is std::ssize being forced to a minimum size for its signed size type?

In C++20, std::ssize is being introduced to obtain the signed size of a container for generic code. (And the reason for its addition is explained here.)
Somewhat peculiarly, the definition given there (combining with common_type and ptrdiff_t) has the effect of forcing the return value to be "either ptrdiff_t or the signed form of the container's size() return value, whichever is larger".
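For concreteness, a sketch of that definition (simplified from the one in <iterator>, which also covers arrays and other range cases; the name here is illustrative):

#include <cstddef>
#include <type_traits>

template <class C>
constexpr auto ssize_sketch(const C& c)
    -> std::common_type_t<std::ptrdiff_t,
                          std::make_signed_t<decltype(c.size())>>
{
    // The return type is the larger of ptrdiff_t and the signed
    // counterpart of the container's own size type.
    using R = std::common_type_t<std::ptrdiff_t,
                                 std::make_signed_t<decltype(c.size())>>;
    return static_cast<R>(c.size());
}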
P1227R1 indirectly offers a justification for this ("it would be a disaster for std::ssize() to turn a size of 60,000 into a size of -5,536").
This seems to me like an odd way to try to "fix" that, however.
Containers which intentionally define a uint16_t size and are known to never exceed 32,767 elements will still be forced to use a larger type than required.
The same thing would occur for containers using a uint8_t size and 127 elements, respectively.
In desktop environments, you probably don't care; but this might be important for embedded or otherwise resource-constrained environments, especially if the resulting type is used for something more persistent than a stack variable.
Containers which use the default size_t size on 32-bit platforms but which nevertheless do contain between 2B and 4B items will hit exactly the same problem as above.
If there still exist platforms for which ptrdiff_t is smaller than 32 bits, they will hit the same problem as well.
Wouldn't it be better to just use the signed type as-is (without extending its size) and to assert that a conversion error has not occurred (eg. that the result is not negative)?
Am I missing something?
To expand on that last suggestion a bit (inspired by Nicol Bolas' answer): if it were implemented the way that I suggested, then this code would Just Work™:
void DoSomething(int16_t i, T const& item);
for (int16_t i = 0, len = std::ssize(rng); i < len; ++i)
{
    DoSomething(i, rng[i]);
}
With the current implementation, however, this produces warnings and/or errors unless static_casts are explicitly added to narrow the result of ssize, or int i is used instead and then narrowed in the function call (and the range indexing); neither of which seems like an improvement.
Containers which intentionally define a uint16_t size and are known to never exceed 32,767 elements will still be forced to use a larger type than required.
It's not like the container is storing the size as this type. The conversion happens via accessing the value.
As for embedded systems, embedded systems programmers already know about C++'s propensity to increase the size of small types. So if they expect a type to be an int16_t, they're going to spell that out in the code, because otherwise C++ might just promote it to an int.
Furthermore, there is no standard way to ask about what size a range is "known to never exceed". decltype(size(range)) is something you can ask for; sized ranges are not required to provide a max_size function. Without such an ability, the safest assumption is that a range whose size type is uint16_t can assume any size within that range. So the signed size should be big enough to store that entire range as a signed value.
Your suggestion is basically that any ssize call is potentially unsafe, since half of any size range cannot be validly stored in the return type of ssize.
Containers which use the default size_t size on 32-bit platforms but which nevertheless do contain between 2B and 4B items will hit exactly the same problem as above.
Assuming that it is valid for ptrdiff_t to not be a signed 64-bit integer on such platforms, there isn't really a valid solution to that problem. So yes, there will be cases where ssize is potentially unsafe.
ssize currently is potentially unsafe in cases where it is not possible to be safe. Your proposal would make ssize potentially unsafe in all cases.
That's not an improvement.
And no, merely asserting/contract checking is not a viable solution. The point of ssize is to make for(int i = 0; i < std::ssize(rng); ++i) work without the compiler complaining about signed/unsigned mismatch. To get an assert because of a conversion failure that didn't need to happen (and BTW, cannot be corrected without using std::size, which we are trying to avoid), one which is ultimately irrelevant to your algorithm? That's a terrible idea.
if it were implemented the way that I suggested, then this code would Just Work™:
Let us ignore the question of how often it is that a user would write this code.
The reason your compiler will expect/require you to use a cast there is because you are asking for an inherently dangerous operation: you are potentially losing data. Your code only "Just Works™" if the current size fits into an int16_t; that makes the conversion statically dangerous. This is not something that should implicitly take place, so the compiler suggests/requires you to explicitly ask for it. And users looking at that code get a big, fat eyesore reminding them that a dangerous thing is being done.
That is all to the good.
See, if your suggested implementation were how ssize behaved, then that means we must treat every use of ssize as just as inherently dangerous as the compiler treats your attempted implicit conversion. But unlike static_cast, ssize is small and easily missed.
Dangerous operations should be called out as such. Since ssize is small and difficult to notice by design, it therefore should be as safe as possible. Ideally, it should be as safe as size, but failing that, it should be unsafe only to the extent that it is impossible to make it safe.
Users should not look on ssize usage as something dubious or disconcerting; they should not fear to use it.

Making unsigned integer underflow throw an exception

I understand that there are applications in which using unsigned integer over/underflow is a good way to get cheap modular arithmetic.
In my code, I use uint exclusively for indices to containers, so I never want this behaviour.
Is this a bad idea? Should I be using int everywhere instead? I do have to do some unsavoury things to get a for loop to count down to 0 (see the sketch below).
Is there a commonly used implementation of a less unsafe unsigned integer type? Something that throws an exception?
Do compilers (for me gcc, clang) provide a mechanism for less unsafe behaviour in the given compilation unit?
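The kind of thing I mean by "unsavoury" (a sketch, assuming a vector and a size_t index):

#include <cstddef>
#include <iostream>
#include <vector>

int main()
{
    std::vector<int> v{ 1, 2, 3 };
    // The naive `for (std::size_t i = v.size() - 1; i >= 0; --i)` never
    // terminates, because i >= 0 is always true for an unsigned type.
    // The usual workaround tests and decrements in a single step:
    for (std::size_t i = v.size(); i-- > 0; )
        std::cout << v[i] << '\n';     // prints 3 2 1
}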
First, a terminology quibble: there is no such thing as unsigned integer underflow, precisely because of the way unsigned types wrap around (using modulo arithmetic); "wrap-around" is probably the phrase you meant.
Second, is this a common scenario to be in? Yes, it is a bit. You're not the only one doing "unsavoury things" with loops for reverse counting, and I bet there are a ton of bugs out there where people haven't done "unsavoury things" and, as a result, their code has an unsavoury infinite loop hidden in it. Mind you, I'm not sure I'd go so far as to call unsigneds "unsafe" as a result; like anything, they are the right tool for a subset of infinite possible jobs, and within that subset they are perfectly safe.
There is debate over whether unsigned integers should be used for array indexes at all. Some standard committee members believe that their use in the standard library was a mistake; I know that several members of the c++ community here on Stack Overflow also hate unsigned values and wish they'd go away.
Personally I think having access to the full range of the integer by default is absolutely crucial (and losing that is not worth it for a single "-1" sentinel value or whatever), so I think that — while you're not alone in this requirement, and it's a sensible requirement — using unsigned array indexes by default is a good thing. (And what the heck is a negative array index? Semantics, people!)
But that doesn't help you in this scenario. So, what can you do about it? No, there's no trapping unsigned integer implementation (at least, not one that I'm aware of, let alone widespread) because that would literally violate the rules of the type as defined by C++: it would introduce well-defined underflow/overflow semantics to a type for which underflow/overflow shouldn't even be possible.
You will have to use signed integers and check for "logical underflow" (i.e. going out of your desired range, say -1) yourself. You could wrap this behaviour in a class.
I suppose you could actually just wrap an unsigned integer while you're at it, adding some extra logic to operator-- and operator-= to detect a wrap-around and throw.
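A sketch of that idea (a hypothetical wrapper, not a standard type), covering just the two operators mentioned:

#include <cstdint>
#include <stdexcept>

class checked_uint
{
    std::uint32_t value_;
public:
    explicit checked_uint(std::uint32_t v = 0) : value_(v) {}

    // Throws instead of wrapping around below zero.
    checked_uint& operator--()
    {
        if (value_ == 0)
            throw std::underflow_error("checked_uint: decrement below zero");
        --value_;
        return *this;
    }

    checked_uint& operator-=(std::uint32_t rhs)
    {
        if (rhs > value_)
            throw std::underflow_error("checked_uint: subtraction below zero");
        value_ -= rhs;
        return *this;
    }

    operator std::uint32_t() const { return value_; }
};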
But I guess my point is that, whatever you do, it's going to be in your "code space" and thus subject to decreased performance. You can't eke out this behaviour from the platform itself.

Once again: strict aliasing rule and char*

The more I read, the more confused I get.
The last question from the related ones is closest to my question, but I got confused with all words about object lifetime and especially - is it OK to only read or not.
To get straight to the point. Correct me if I'm wrong.
This is fine, gcc does not give warning and I'm trying to "read type T (uint32_t) via char*":
uint32_t num = 0x01020304;
char* buff = reinterpret_cast< char* >( &num );
But this is "bad" (also gives a warning) and I'm trying "the other way around":
char buff[ 4 ] = { 0x1, 0x2, 0x3, 0x4 };
uint32_t num = *reinterpret_cast< uint32_t* >( buff );
How is the second one different from the first one, especially when we're talking about reordering instructions (for optimization)? Plus, adding const does not change the situation in any way.
Or this is just a straight rule, which clearly states: "this can be done in the one direction, but not in the other"?
I couldn't find anything relevant in the standards (searched for this especially in C++11 standard).
Is this the same for C and C++ (as I read a comment, implying it's different for the 2 languages)?
I used union to "workaround" this, which still appears to be NOT 100% OK, as it's not guaranteed by the standard (which states, that I can only rely on the value, which is last modified in the union).
So, after reading a lot, I'm now more confused. I guess only memcpy is the "good" solution?
Related questions:
What is the strict aliasing rule?
"dereferencing type-punned pointer will break strict-aliasing rules" warning
Do I understand C/C++ strict-aliasing correctly?
Strict aliasing rule and 'char *' pointers
EDIT
The real world situation: I have a third party lib (http://www.fastcrypto.org/), which calculates UMAC and the returned value is in char[ 4 ]. Then I need to convert this to uint32_t. And, btw, the lib uses things like ((UINT32 *)pc->nonce)[0] = ((UINT32 *)nonce)[0] a lot. Anyway.
Also, I'm asking about what is right and what is wrong and why. Not only about the reordering, optimization, etc. (what's interesting is that with -O0 there are no warnings, only with -O2).
And please note: I'm aware of the big/little endian situation. It's not the case here. I really want to ignore the endianness here. The "strict aliasing rules" sounds like something really serious, far more serious than wrong endianness. I mean - like accessing/modifying memory, which is not supposed to be touched; any kind of UB at all.
Quotes from the standards (C and C++) would be really appreciated. I couldn't find anything about aliasing rules or anything relevant.
How is the second one different from the first one, especially when we're talking about reordering instructions (for optimization)?
The problem is in the compiler using the rules to determine whether such an optimization is allowed. In the second case you're trying to read a char[] object via an incompatible pointer type, which is undefined behavior; hence, the compiler might re-order the read and write (or do anything else which you might not expect).
But, there are exceptions for "going the other way", i.e. reading an object of some type via a character type.
Or this is just a straight rule, which clearly states: "this can be done in the one direction, but not in the other"? I couldn't find anything relevant in the standards (searched for this especially in C++11 standard).
http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3337.pdf chapter 3.10 paragraph 10.
In C99, and also C11, it's 6.5 paragraph 7. For C++11, it's 3.10 ("Lvalues and Rvalues").
Both C and C++ allow accessing any object type via char * (or specifically, an lvalue of character type for C or of either unsigned char or char type for C++). They do not allow accessing a char object via an arbitrary type. So yes, the rule is a "one way" rule.
I used union to "workaround" this, which still appears to be NOT 100% OK, as it's not guaranteed by the standard (which states, that I can only rely on the value, which is last modified in the union).
Although the wording of the standard is horribly ambiguous, in C99 (and beyond) it's clear (at least since C99 TC3) that the intent is to allow type-punning through a union. You must however perform all accesses through the union. It's also not clear that you can "cast a union into existence", that is, the union object must exist first before you use it for type-punning.
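A sketch of the pattern being described (sanctioned in C99/C11; formally undefined behaviour in C++, although gcc documents it as supported):

#include <cstdint>

union Pun
{
    char bytes[4];
    std::uint32_t value;
};

std::uint32_t pun_read(const char (&src)[4])
{
    Pun p;                        // the union object exists first
    for (int i = 0; i < 4; ++i)
        p.bytes[i] = src[i];      // all writes go through the union...
    return p.value;               // ...and so does the read
}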
the returned value is in char[ 4 ]. Then I need to convert this to uint32_t
Just use memcpy or manually shift the bytes to the correct position, in case byte-ordering is an issue. Good compilers can optimize this out anyway (yes, even the call to memcpy).
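A sketch of the memcpy approach (the byte order of the result still depends on how the library filled the buffer):

#include <cstdint>
#include <cstring>

std::uint32_t to_uint32(const char (&buff)[4])
{
    std::uint32_t num;
    std::memcpy(&num, buff, sizeof num);   // no aliasing violation
    return num;                            // gcc/clang reduce this to one load
}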
I used union to "workaround" this, which still appears to be NOT 100% OK, as it's not guaranteed by the standard (which states, that I can only rely on the value, which is last modified in the union).
Endianness is the reason for this. Specifically, the sequence of bytes 01 00 00 00 could mean 1 or 16,777,216.
The correct way to do what you are doing is to stop trying to trick the compiler into doing the conversion for you and perform the conversion yourself.
For instance if the char[4] is little-endian (smallest byte first) then you would do something like the following.
unsigned char buff[4] = { 0x04, 0x03, 0x02, 0x01 };   // LSB first
uint32_t result = 0;
for (int i = 3; i >= 0; i--)        // start from the most significant byte
    result = (result << 8) + buff[i];
// result == 0x01020304
This manually performs the conversion between the two and is guaranteed to always be correct as you are doing the mathematical conversion.
Now if you needed this conversion to be fast, it might make sense to use #if and knowledge of your architecture to use a union to do this automatically, as you mentioned, but that is again getting away from portable solutions. (You can also keep something like the loop above as a fallback if you can't be certain.)

Advice on unsigned int (Gangnam Style edition)

The video "Gangnam Style" (I'm sure you've heard it) just exceeded 2 billion views on youtube. In fact, Google says that they never expected a video to be greater than a 32-bit integer... which alludes to the fact that Google used int instead of unsigned for their view counter. I think they had to re-write their code a bit to accommodate larger views.
Checking their style guide: https://google-styleguide.googlecode.com/svn/trunk/cppguide.html#Integer_Types
...they advise "don't use an unsigned integer type," and give one good reason why: unsigned could be buggy.
It's a good reason, but could be guarded against. My question is: is it bad coding practice in general to use unsigned int?
The Google rule is widely accepted in professional circles. The problem is that the unsigned integral types are sort of broken, and have unexpected and unnatural behavior when used for numeric values; they don't work well as a cardinal type. For example, an index into an array may never be negative, but it makes perfect sense to write abs(i1 - i2) to find the distance between two indices. Which won't work if i1 and i2 have unsigned types.
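A sketch of that pitfall (assuming a platform where unsigned int is 32 bits):

#include <cstdlib>
#include <iostream>

int main()
{
    unsigned int i1 = 2, i2 = 5;
    // i1 - i2 wraps around instead of producing -3, so the "distance"
    // comes out as a huge value:
    std::cout << i1 - i2 << '\n';             // prints 4294967293

    int s1 = 2, s2 = 5;
    std::cout << std::abs(s1 - s2) << '\n';   // prints 3, as intended
}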
As a general rule, this particular rule in the Google style guidelines corresponds more or less to what the designers of the language intended. Any time you see something other than int, you can assume a special reason for it. If it is because of the range, it will be long or long long, or even int_least64_t. Using unsigned types is generally a signal that you're dealing with bits, rather than the numeric value of the variable, or (at least in the case of unsigned char) that you're dealing with raw memory.
With regards to the "self-documentation" of using an unsigned: this doesn't hold up, since there are almost always a lot of values that the variable cannot (or should not) take, including many positive ones. C++ doesn't have sub-range types, and the way unsigned is defined means that it cannot really be used as one either.
This guideline is extremely misleading. Blindly using int instead of unsigned int won't solve anything. That simply shifts the problems somewhere else. You absolutely must be aware of integer overflow when doing arithmetic on fixed precision integers. If your code is written in a way that it does not handle integer overflow gracefully for some given inputs, then your code is broken regardless of whether you use signed or unsigned ints. With unsigned ints you must be aware of integer underflow as well, and with doubles and floats you must be aware of many additional issues with floating point arithmetic.
Just take this article about a bug in the standard Java binary search algorithm, published by none other than Google, for an illustration of why you must be aware of integer overflow. In fact, that very article shows C++ code casting to unsigned int in order to guarantee correct behavior. The article also starts out by presenting a bug in Java where, guess what, they don't have unsigned int. However, they still ran into a bug with integer overflow.
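A sketch of the midpoint bug from that article (names here are illustrative):

int midpoint(int low, int high)
{
    // Buggy: low + high can exceed INT_MAX for large arrays,
    // which is undefined behaviour for signed int:
    //     return (low + high) / 2;

    // Safe alternative that never leaves the valid range:
    return low + (high - low) / 2;
}

// The article's C fix instead casts to unsigned so the sum wraps safely:
//     mid = ((unsigned int)low + (unsigned int)high) >> 1;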
Use the right type for the operations which you will perform. float wouldn't make sense for a counter. Nor does signed int. The normal operations on the counter are print and +=1.
Even if you had some unusual operations, such as printing the difference in viewcounts, you wouldn't necessarily have a problem. Sure, other answers mention the incorrect abs(i2-i1) but it's not unreasonable to expect programmers to use the correct max(i2,i1) - min(i2,i1). Which does have range issues for signed int. No uniform solution here; programmers should understand the properties of the types they're working with.
Google states that: "Some people, including some textbook authors, recommend using unsigned types to represent numbers that are never negative. This is intended as a form of self-documentation."
I personally use unsigned ints as index parameters.
int foo(unsigned int index, int* myArray) {
    return myArray[index];
}
Google suggests: "Document that a variable is non-negative using assertions. Don't use an unsigned type."
#include <cassert>

int foo(int index, int* myArray) {
    assert(index >= 0);
    return myArray[index];
}
Pro for Google: if a negative number is passed in debug mode, my code will hopefully fail with an out-of-bounds error (the negative value converts to a huge index); Google's code is guaranteed to assert.
Pro for me: My code can support a greater size of myArray.
I think the actual deciding factor comes down to: how clean is your code? If you clean up all warnings, it will be clear when the compiler warns you that you're trying to assign a signed variable to an unsigned variable. If your code already has a bunch of warnings, the compiler's warning is going to be lost on you.
A final note here: Google says: "Sometimes gcc will notice this bug and warn you, but often it will not." I haven't seen that to be the case in Visual Studio; checks against negative numbers and assignments from signed to unsigned always produce warnings. But if you use gcc, you might need to take care.
Your specific question is: "Is it bad practice to use unsigned?", to which the only correct answer can be no. It is not bad practice.
There are many style guides, each with a different focus, and while in some cases an organisation, given their typical toolchain and deployment platform, may choose not to use unsigned for their products, other toolchains and platforms almost demand its use.
Google seem to get a lot of deference because they have a good business model (and probably employ some smart people like everyone else).
CERT IIRC recommend unsigned for buffer indexes, because if you do overflow, at least you'll still be in your own buffer, some intrinsic security there.
What do the language and standard library designers say? (Probably the best representation of accepted wisdom.) strlen returns a size_t, which is unsigned; other answers suggest this is an anachronism because shiny new computers have wide architectures, but this misses the point that C and C++ are general-purpose programming languages and should scale well on big and small platforms.
Bottom line is that this is one of many religious questions; certainly not settled, and in these cases, I normally go with my religion for green field developments, and go with the existing convention of the codebase for existing work. Consistency matters.

C++ Picking a type for a constant

So on a fairly regular basis, it seems, I find that the type of some constant I declared (typically integer, but occasionally other things like strings) is not the ideal type in a context where it is being used, requiring a cast or resulting in a compiler warning about the implicit conversion.
E.g. in one piece of code I had something like the below, and got a signed/unsigned comparison issue.
static const int MAX_FOO = 16;
...
if (container.size() > MAX_FOO) {...}
I have been thinking of just always using the smallest / most basic type allowed for a given constant (e.g. char, unsigned char, or const char* rather than, say, int, size_t, and std::string), but was wondering if this is really a good idea, and if there are some places where it would potentially be a really bad idea, e.g. code using the 'auto' keyword (or perhaps templates) deducing a too-small type and overflowing on what appeared to be a safe operation?
Going for the smallest type that can hold the initial value is a bad habit. That invites overflow.
Always code for the most general (which according to Murphy's Law is the worst) case. As templates generalize things, that makes the worst case a lot worse. Be prepared for bizarre kinds of overflows and avoid negative numbers while unsigned types are in the neighborhood.
std::size_t is the best choice for the size or length of anything, for the reason you mentioned. But subtract pointers and you get a std::ptrdiff_t instead. Personally, I recommend casting the result of such a subtraction to std::size_t if it can be guaranteed to be positive.
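A sketch of that advice applied to the MAX_FOO example from the question:

#include <cstddef>
#include <vector>

// Declaring size-like constants as std::size_t means comparisons
// against container.size() need no cast and raise no warning.
static const std::size_t MAX_FOO = 16;

bool too_big(const std::vector<int>& container)
{
    return container.size() > MAX_FOO;   // no signed/unsigned mismatch
}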
char * does not own its string in the C++ sense as std::string does, so the latter is the more conservative choice.
This question is so broad that no more specific advice can be made…