Converting from size_t to unsigned int - C++

Is it possible that converting from size_t to unsigned int results in overflow?
size_t x = foo(); // foo() returns a value of type size_t
unsigned int ux = (unsigned int) x;
ux == x // Is the result of this expression always 1?
language: C++
platform: any

Yes, it's possible: size_t and unsigned int don't necessarily have the same size. It's actually very common to have a 64-bit size_t and a 32-bit unsigned int.
C++11 draft N3290 says this in §18.2/6:
The type size_t is an implementation-defined unsigned integer type that is large enough to contain the size in bytes of any object.
unsigned int, on the other hand, is only required to be able to store values from 0 to UINT_MAX (defined in <climits>, following the C standard header <limits.h>), which is only guaranteed to be at least 65535 (2^16 - 1).
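To make the truncation concrete, here is a minimal sketch; the literal value assumes a platform where size_t is 64 bits and unsigned int is 32 bits (e.g. typical 64-bit Linux or Windows targets):
#include <cstddef>
#include <iostream>
#include <limits>

int main() {
    std::size_t x = static_cast<std::size_t>(0x100000001ULL); // 2^32 + 1
    unsigned int ux = static_cast<unsigned int>(x);           // truncates to 1 here

    std::cout << (ux == x) << '\n'; // prints 0 where size_t is wider than unsigned int

    // A defensive check before narrowing:
    if (x > std::numeric_limits<unsigned int>::max()) {
        std::cout << "value does not fit in unsigned int\n";
    }
}
On a platform where both types are 32 bits wide, x would already have been truncated at the assignment and ux == x would hold.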

Yes, overflow can occur on some platforms. For example, size_t can be defined as unsigned long, which can easily be bigger than unsigned int.

Related

Is it safe to use size_t to shift vector index?

I prefer to use size_t for vector indices. But is it safe when shifting an index? For example,
size_t n = 10000000;
vector<int> v(n);
size_t i = 500000;
size_t shift = 20000;
int a = v(i - (-1) * shift); // Is this ok? what is `(-1) * shift` type, size_t?
int b = v(-shift + i); // Is this ok? what is `-shift` type, size_t?
Negating an unsigned quantity is a valid operation. Section 5.3.2 of C++11:
The negative of an unsigned quantity is computed by subtracting its value from 2^n, where n is the number of bits in the promoted operand. The type of the result is the type of the promoted operand.
So, this is "safe", in so far as this is defined behavior.
Multiplying a size_t by (-1) is safe; it wraps around modulo one more than the maximum value of size_t, as size_t is an unsigned type. So (-1) * shift is the same as std::numeric_limits<size_t>::max() - shift + 1.
Surely, you meant v[...] instead of v(...); std::vector has no operator().
Anyway, empirically,
#include <iostream>
using namespace std;

int main() {
    unsigned foo = 1;
    cout << (-1) * foo << '\n';
    cout << -foo << '\n';
    cout << foo * (-1) << '\n';
    cout << static_cast<int>(-foo) << '\n';
    cout << static_cast<int>(foo) * -1 << '\n';
}
yields:
4294967295
4294967295
4294967295
-1
-1
ergo, a negated unsigned value, or an unsigned value multiplied by -1, wraps around its maximum value (this is the behavior the standard mandates, too).
As for passing a size_t to std::vector's operator[] (http://en.cppreference.com/w/cpp/container/vector/operator_at): if by chance std::vector<T>::size_type isn't size_t (unlikely, but possible), passing a size_t within the bounds of your vector's size() is still safe and does not lead to UB, since size_t must be large enough to index an array of any size.
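To tie the two answers together, here is a small sketch reusing the question's names (v, i, shift), with an explicit guard on the subtraction; the comments state the modular-arithmetic reasoning:
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::size_t n = 10000000;
    std::vector<int> v(n);
    std::size_t i = 500000;
    std::size_t shift = 20000;

    // Guard the subtraction: if shift > i, the unsigned result wraps to a
    // huge value and v[...] would be out of bounds (UB).
    if (shift <= i) {
        int a = v[i - shift];
        std::cout << a << '\n';
    }

    // -shift has type size_t and equals SIZE_MAX - shift + 1; adding i
    // wraps around again, so v[-shift + i] names the same element as
    // v[i - shift], purely by modular arithmetic.
    int b = v[-shift + i];
    std::cout << b << '\n';
}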

Is it dangerous to cast int * to unsigned int *?

I have a variable of type int *alen. I am trying to pass it to a function:
typedef int (__stdcall *Tfnc)(
    unsigned int *alen
);
with a cast:
(*Tfnc)( (unsigned int *)alen )
Can I expect problems if the value is never negative?
Under the C++ standard, what you are doing is undefined behavior. The memory layout of unsigned and signed ints is not guaranteed to be compatible, as far as I know.
On most platforms (which use two's complement integers), this will not be a problem.
The remaining issue is strict aliasing, where the compiler is free to presume that pointers to one type and pointers to another type are not pointers to the same thing.
typedef int (__stdcall *Tfnc)(
    unsigned int *alen
);

int test() {
    int x = 3;
    Tfnc pf = [](unsigned int* bob) { *bob = 2; return 0; };
    pf((unsigned int*)&x);
    return x;
}
the above code might be allowed to ignore the modification to x made through the unsigned int*, even on two's complement hardware.
That is the price of undefined behavior.
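If you would rather sidestep the aliasing question entirely, one option is to route the call through a genuine unsigned int and copy the value back afterwards. CallWithIntLen below is a hypothetical wrapper sketching that approach (with __stdcall dropped so the snippet stays portable):
#include <iostream>

typedef int (*Tfnc)(unsigned int *alen); // __stdcall dropped for portability

// Hypothetical wrapper: pass a genuine unsigned int, then copy back,
// so no int object is ever accessed through an unsigned int*.
int CallWithIntLen(Tfnc fn, int *alen) {
    unsigned int tmp = static_cast<unsigned int>(*alen);
    int rc = fn(&tmp);
    *alen = static_cast<int>(tmp); // assumes the callee's result fits in int
    return rc;
}

int main() {
    Tfnc fn = [](unsigned int *len) { *len = 42; return 0; };
    int alen = 3;
    CallWithIntLen(fn, &alen);
    std::cout << alen << '\n'; // prints 42
}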
No, it won't be a problem, as long as the int value you pass is not negative.
But if the given value is negative, then the resulting value is the least unsigned integer congruent to the source integer (modulo 2^n, where n is the number of bits used to represent the unsigned type).
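For example, a two-line illustration of that modulo rule (assuming a 32-bit unsigned int):
#include <iostream>

int main() {
    int n = -1;
    // Defined conversion: the least unsigned integer congruent to -1
    // modulo 2^32 is 4294967295.
    unsigned int u = static_cast<unsigned int>(n);
    std::cout << u << '\n'; // prints 4294967295
}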

Maximum value for unsigned int?

I wrote a program to find the largest Fibonacci number that fits in type unsigned int. It is 1836311903, but I thought the max value for an unsigned int is 65535. So what's going on?
while (true)
{
    sequence[j] = sequence[j-1] + sequence[j-2];
    if (sequence[j-1] > sequence[j]) break; // the sum wrapped around and got smaller
    j++;
}
printf("%u", sequence[j-2]); // %u, not %d, to print an unsigned value
You are mistaken in your belief that the max value for an unsigned int is 65535. That hasn't been the case for most compilers since the days of 16-bit processors, back around the early days of Windows 95.
The standards do NOT define the size of any integral type; they only define the relationships between the sizes (long long >= long >= int >= short >= char, etc.). The actual sizes, though remarkably common and consistent, are defined by your compiler's implementation and are thus generally platform-defined.
Notwithstanding that, most ints use the size of a word on the processor, which today is often 32 or 64 bits.
You could verify this yourself: take sizeof(int) * CHAR_BIT to get the number of bits, raise 2 to that power and subtract 1, and you've got your answer for the maximum unsigned int...
A better way would be to #include <limits.h> or #include <climits> and use the values defined there.
In C++ you can use std::numeric_limits<unsigned int>::max() as well.
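For instance, a quick sketch printing the limit both ways, plus the width in bits:
#include <climits>
#include <iostream>
#include <limits>

int main() {
    std::cout << UINT_MAX << '\n';                                 // from <climits>
    std::cout << std::numeric_limits<unsigned int>::max() << '\n'; // same value
    std::cout << sizeof(unsigned int) * CHAR_BIT << " bits\n";     // e.g. 32
}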
As shown in http://www.tutorialspoint.com/cprogramming/c_data_types.htm (and many other places), unsigned int can be 2 bytes or 4 bytes. In your case it is 4 bytes, so the maximum is 4,294,967,295.
The maximum you mention, 65535, corresponds to 2 bytes.
The following code will give you the max value for an unsigned int on your system:
#include <stdio.h>

typedef unsigned int ui;

int main()
{
    ui uimax = ~0; /* all bits set: converts to the maximum unsigned int */
    printf("uimax %u\n", uimax);
    return 0;
}
Integer types only define the relationships between their sizes, not their actual sizes, e.g.
unsigned long long int >= unsigned long int >= unsigned int >= unsigned short int >= unsigned char

Is a forced integer overflow legitimate?

I want to implement a handle type like in this example.
(Long story short: the structure Handle holds an index member into an array of elements. Its other member, count, validates whether the index is up to date by comparing against the data's countArray. count and countArray have a fixed size given by a type/bitfield (u32 : 20 bits).)
To avoid being restricted to the 20 bits of the generation/counter size, the following came to mind: why not let the unsigned char count/countArray overflow on purpose?
I could also do the same with the modulo method (counter = (counter + 1) % 0x100, which wraps at 256), but that is yet another operation.
So let the count grow up to 0xff, and the overflow will reset it to 0 when 0xff + 1 happens.
Is this legitimate?
Here is my pseudo implementation (C++):
struct Handle
{
    unsigned short index;
    unsigned char count;
};

struct myData
{
    unsigned short curIndex;
    int* dataArray;
    unsigned char* countArray;

    Handle create()
    {
        // check if index not already used
        // create object at dataArray[handle.index]
        Handle handle;
        handle.index = curIndex;
        handle.count = countArray[curIndex];
        return handle;
    }

    void destroy( const Handle& handle )
    {
        // delete object at dataArray[handle.index]
        countArray[handle.index]++; // <-- overflow here?
    }

    bool isValid( const Handle& handle ) const
    {
        return handle.count == countArray[handle.index];
    }
};
EDIT #1: Yes, these integral types should all be unsigned (as indexes are)
As long as you're not using signed types, you're safe.
Technically, unsigned types don't overflow:
3.9.1 Fundamental types [basic.fundamental]
46) This implies that unsigned arithmetic does not overflow because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting unsigned integer type.
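A short demonstration of that wraparound, using the question's unsigned char counter:
#include <iostream>

int main() {
    unsigned char count = 0xff;
    ++count; // no UB: the result is reduced modulo 256, so count becomes 0
    std::cout << static_cast<int>(count) << '\n'; // prints 0
}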

Is it safe to shift 1-based numbering to 0-based numbering by subtracting 1 if unsigned integers are used?

In a system I am maintaining, users request elements from a collection using a 1-based indexing scheme. Values are stored in 0-based arrays in C++/C.
Is the following hypothetical code portable if 0 is erroneously entered as the input to this function? Is there a better way to validate the user's input when converting 1-based numbering schemes to 0-based?
const unsigned int arraySize;
SomeType array[arraySize];

SomeType GetFromArray( unsigned int oneBasedIndex )
{
    unsigned int zeroBasedIndex = oneBasedIndex - 1;
    // Intent is to check for a valid index.
    if( zeroBasedIndex < arraySize )
    {
        return array[zeroBasedIndex];
    }
    // else... handle the error
}
My assumption is that (unsigned int)( 0 - 1 ) is always greater than arraySize; is this true?
The alternative, as some have suggested in their answers below, is to check oneBasedIndex and ensure that it is greater than 0:
const unsigned int arraySize;
SomeType array[arraySize];

SomeType GetFromArray( unsigned int oneBasedIndex )
{
    if( oneBasedIndex > 0 && oneBasedIndex <= arraySize )
    {
        return array[oneBasedIndex - 1];
    }
    // else... handle the error
}
For unsigned types, 0 - 1 is the maximum value of that type, so it's always >= arraySize. In other words, yes, that is absolutely safe.
Unsigned integers never overflow in C++ and in C.
For C++ language:
(C++11, 3.9.1p4) "Unsigned integers, declared unsigned, shall obey the laws of arithmetic modulo 2^n where n is the number of bits in the value representation of that particular size of integer. 46)"
and footnote 46):
"46) This implies that unsigned arithmetic does not overflow because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting unsigned integer type."
and for C language:
(C11, 6.2.5p9) "A computation involving unsigned operands can never overflow, because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting type."
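As a sanity check, here is a minimal sketch showing the wrapped index failing the range test (the arraySize value is an arbitrary example):
#include <iostream>

int main() {
    const unsigned int arraySize = 100; // arbitrary example size
    unsigned int oneBasedIndex = 0;     // the erroneous input
    unsigned int zeroBasedIndex = oneBasedIndex - 1; // wraps to UINT_MAX
    // UINT_MAX is far larger than any plausible arraySize, so the
    // range check correctly rejects the bad input.
    std::cout << (zeroBasedIndex < arraySize) << '\n'; // prints 0
}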
Chances are good that it's safe, but for many purposes, there's a much simpler way: allocate one extra spot in your array, and just ignore element 0.
No: for example, 0 - 1 is 0xffffffff for a 4-byte unsigned int; what if your array really is that big? On a 32-bit system that can't happen, since such an array would exceed the address-space limit, but the code breaks when compiled for 64-bit if the array is that big.
Just check for oneBasedIndex > 0
Make sure oneBasedIndex is greater than zero...