C++ - What happens when you index an array by a float?

I'm trying to understand exactly what happens when indexing through an array with a float value.
This link: Float Values as an index in an Array in C++
It doesn't answer my question: it states that the float should be rounded to an integer. However, in the code I'm trying to evaluate, that answer does not make sense, as the index value would then only ever be 0 or 1.
I'm trying to solve a coding challenge posted by Nintendo. To solve the problem there is an archaic statement that uses a bitwise assignment into an array using a long complicated bitwise expression.
The array is declared as a pointer
unsigned int* b = new unsigned int[size / 16]; // <- output tab
Then 0 is assigned to each element:
for (int i = 0; i < size / 16; i++) { // Write size / 16 zeros to b
    b[i] = 0;
}
Here's the beginning of the statement.
b[(i + j) / 32] ^= // some crazy bitwise expression
The above sits inside of a nested for loop.
I'm sparing a lot of code here, because I want to solve as much of this problem on my own as possible. But I'm wondering if there is a situation where you would want to iterate through an array like this.
There must be more to it than the float just automatically casting to an int. There has to be more going on here.

There are no floats here. size is an integer, and 16 is an integer, and consequently size/16 is an integer as well.
Integer division rounds towards zero, so if size is in [0,16), then size/16 == 0. If size is in [16,32), then size/16 == 1, and so on. And if size is in (-16, 0], then size / 16 == 0 as well.
([x,y) is the "half-open" interval from x to y: that is, it contains every number between x and y, and furthermore it includes x but excludes y)
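A quick demonstration of the truncation (a minimal sketch; the values are arbitrary examples chosen to bracket multiples of 16):

#include <iostream>

int main() {
    // Integer division truncates toward zero.
    std::cout << 15 / 16 << '\n';   // 0
    std::cout << 16 / 16 << '\n';   // 1
    std::cout << 31 / 16 << '\n';   // 1
    std::cout << -15 / 16 << '\n';  // 0 (toward zero, not toward negative infinity)
    return 0;
}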

For arrays, the subscript operator is syntactic sugar. When you have the following:
class A { /* ... */ };
A ar[17];
std::cout << ar[3] << std::endl; // assumes A has an operator<<
Saying ar[3] is no different from saying:
*(ar + 3);
So ar[3.4] is the same as saying
*(ar + 3.4) (1)
From the C++ Standard, section 5.7 (Additive operators), paragraph 1, we read:
(...) For addition, either both operands shall have arithmetic or unscoped enumeration type, or one operand shall be a pointer to a completely-defined object type and the other shall have integral or unscoped enumeration type.
That's why expression (1) causes a compilation error.
So, when you index an array by a float, you get a compilation error.

To answer the question in the title:
#include <stdio.h>
int main(int argc, char** argv) {
    int x[5];
    int i;
    for (i = 0; i < 5; ++i)
        x[i] = i;
    x[2.5] = 10; /* the offending line: float subscript */
    for (i = 0; i < 5; ++i)
        printf("%d\n", x[i]);
    return 0;
}
If I compile this with gcc, I get a compiler error:
foo.c:10: error: array subscript is not an integer

Related

unsigned bit field holding negative value

I'd like to work with 12-bit unsigned integers. Since I am working with arrays, I want values to wrap around on overflow, e.g., 0 - 1 = 4095.
I tried the following but I don't obtain the expected behaviour:
#include <iostream>
using namespace std;

struct bit_field
{
    unsigned int x: 12; // 12 bits
};

int main()
{
    bit_field ii, jj, kk;
    ii.x = 4096; // one past the 12-bit maximum of 4095
    jj.x = 1;
    kk.x = 0;
    cout << ii.x << endl;
    cout << kk.x - jj.x << endl;
    return 0;
}
Output:
0   // wrapped around, as expected
-1  // expected 4095
This is how C/C++ is expected to work; you don't get arbitrarily sized integers. Your storage-width declaration within the struct doesn't change that: the type your operators see is still unsigned int. You're just saying "when I store this, it's 12 bits".
Because kk.x and jj.x are bit-fields narrower than int, integer promotion converts both to (signed) int before the subtraction, so kk.x - jj.x is computed as 0 - 1 = -1.
Note that you're writing C++, so you can perfectly well write your own class that implements the mathematical operations you want and has cast operators for integer types.
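If the wrap-around result is all you need, one option (a minimal sketch, not the only way) is to mask the promoted result back down to 12 bits yourself:

#include <iostream>

struct bit_field
{
    unsigned int x: 12; // 12 bits of storage; arithmetic still happens in int
};

int main()
{
    bit_field jj, kk;
    jj.x = 1;
    kk.x = 0;
    // After promotion the subtraction yields the int value -1;
    // masking with 0xFFF reduces it modulo 2^12, giving 4095.
    unsigned int wrapped = (kk.x - jj.x) & 0xFFFu;
    std::cout << wrapped << std::endl;
    return 0;
}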

Problems in compiling code due to the modulus operator

I have written a program to store a number (which is predefined by the programmer) in the form of digits in an array.
For example, if I want to store the number 1234 in the array arrx[4], then its elements would be:
arrx[0] = 1
arrx[1] = 2
arrx[2] = 3
arrx[3] = 4
I try to achieve this using the below piece of code:
#include <iostream>
#include <math.h>
using namespace std;

int main()
{
    int arrx[4];          // Stores the individual digits of number as array
    int digx = 4;         // Total number of digits in number
    int i;
    long int dupx = 1234; // Number which has to be stored in array
    for(i = digx-1; i >= 0; i--)
    {
        arrx[digx-i-1] = int(dupx/pow(10,i));
        dupx = dupx%(pow(10, i));
    }
    return 0;
}
However, when I try to compile the above code, I get the following error message:
error: invalid operands of types 'long int' and 'double' to binary 'operator%'
The only conclusion I was able to draw from the above error was that the problem is with the modulus operator.
Therefore, I have the following questions:
What exactly is the problem with the code containing modulus operator?
How can I fix this?
I am using Code::Blocks version 17.12 with GNU GCC as my compiler.
You can only use % with integers, and pow produces floating-point numbers.
You could write an integer power function, or use a predefined table, but it's simpler to reverse the order of construction and start with the rightmost digit:
int main()
{
    int arrx[4];          // stores the individual digits of number as array
    int digx = 4;         // total number of digits in number
    long int dupx = 1234; // number which has to be stored in array
    for(int i = 0; i < digx; i++)
    {
        arrx[digx-i-1] = dupx%10; // peel off the last digit
        dupx = dupx/10;
    }
    return 0;
}
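If you do want the original most-significant-first order, a small integer power helper avoids pow entirely (a sketch; ipow is a hypothetical helper name, not a standard function):

// Hypothetical helper: integer exponentiation by repeated multiplication.
// Assumes exp is non-negative and the result fits in long.
long ipow(long base, int exp)
{
    long result = 1;
    while (exp-- > 0)
        result *= base;
    return result;
}

With this, arrx[digx-i-1] = dupx / ipow(10, i); and dupx = dupx % ipow(10, i); both stay in integer arithmetic, so % compiles.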
std::pow in its various guises returns a floating point type, even if the arguments are integral types.
Since % requires integral arguments, compilation will fail.
Using (long)(pow(10,i)) is one fix, checking of course that long is wide enough. Note, though, that even under IEEE 754, pow is not required to return the best floating-point value possible, so the truncation to long can occasionally be harmful; std::round followed by the cast to long may be preferable. That said, the current fashion is to consider any implementation of pow that gets integral arguments wrong to be defective.
In your case though I'd be tempted to define
constexpr int powers[] = {1, 10, 100, 1000}; // use const on pre-C++11 compilers
and index appropriately.
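For completeness, here is how the table might be used to peel off digits most-significant-first (a minimal sketch under the same fixed four-digit assumption):

#include <iostream>

int main()
{
    constexpr int powers[] = {1, 10, 100, 1000};
    int arrx[4];
    long dupx = 1234;
    for (int i = 3; i >= 0; i--)
    {
        arrx[3 - i] = dupx / powers[i]; // leading digit first
        dupx %= powers[i];              // both operands integral, so % compiles
    }
    for (int d : arrx)
        std::cout << d << '\n';         // prints 1, 2, 3, 4
    return 0;
}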

Is subtraction and comparison with unsigned integers in C / C++ well defined? [duplicate]

As an expansion to the question "Is unsigned integer subtraction defined behavior?", I am confused about the following behavior.
In the code below, note that A = 50 and B = 100 are stored as unsigned 16-bit integers, and the subtraction A - B = -50 = 65486 (mod 2^16). If I store the result of the subtraction in D (an unsigned 16-bit integer) and then evaluate D > 4000, I get true, since 65486 > 4000. That makes sense.
If I forgo storing A - B and evaluate A - B > 4000 directly, I get false. This seems inconsistent. Is this the expected result? Why? Is this always the correct behavior, or am I in the land of "undefined behavior"?
#include <stdio.h>
#include <stdint.h>

int main() {
    uint16_t A = 50;
    uint16_t B = 100;
    uint16_t D = A - B;   // D = 65486
    printf("D = %u\n", D);
    int R = D > 4000;     // R = 1 (true)
    printf("R = %d\n", R);
    int S = A - B > 4000; // S = 0 (false)
    printf("S = %d\n", S);
    return 0;
}
BTW, this behavior seems to contradict the behavior in the code from this question, which further confuses me. If I change uint16_t to uint32_t above, then I get
D = 4294967246
R = 1
S = 1
which seems correct to me.
Update: It seems the best detailed answer is that uint16_t gets promoted to an int (int is 32-bit on my system), and so A - B > 4000 is done with signed arithmetic. Whereas when I switch to uint32_t, no promotion is performed (the type is already 32 bits wide), and so A - B > 4000 is done with unsigned arithmetic. This would explain it.
P.S. I know folks want to be first to answer, but just saying "integer promotion" is not a useful answer.
If I forgo storing A - B and evaluate A - B > 4000 directly, I get false. This seems inconsistent.
Yes it does from a cursory look.
However, when the expression A - B is evaluated, both operands are promoted to int before the subtraction is performed, so A - B is the int value -50. Hence, A - B > 4000 evaluates to false.
You can read up on "Usual Arithmetic Conversions" at http://en.cppreference.com/w/c/language/conversion.
Re: But what about when I switch to uint32_t?
When both operands are of type uint32_t, the result is also of type uint32_t, not int, on a platform where sizeof(int) is 4; no promotion to int occurs.
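If the wrapped 16-bit result is what you want in the comparison itself, one option (a sketch) is to convert the difference back to uint16_t before comparing:

#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint16_t A = 50;
    uint16_t B = 100;
    /* A and B are promoted to int, so A - B is the int value -50.
       Converting it back to uint16_t reduces it modulo 2^16 -> 65486. */
    int S = (uint16_t)(A - B) > 4000; /* 1 (true) */
    printf("S = %d\n", S);
    return 0;
}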

Integer overflow and order of operations

I recently faced a problem in some C++ code of mine that made me wonder whether I had misunderstood what the compiler does with long operations...
Just look at the following code:
#include <iostream>

int main() {
    int i = 1024, j = 1024, k = 1024, n = 3;
    long long l = 5;
    std::cout << i * j * k * n * l << std::endl;     // #1
    std::cout << ( i * j * k * n ) * l << std::endl; // #2
    std::cout << l * i * j * k * n << std::endl;     // #3
    return 0;
}
For me, the order in which the multiplications will happen in any of these 3 lines is undefined. However, here is what I thought would happen (assuming int is 32 bits, long long is 64 bits, and overflow wraps in two's complement):
For line #2, the parenthesized expression is evaluated first, using ints as intermediate results, leading to an overflow that stores -1073741824. This intermediate result is promoted to long long for the last multiplication, and the printed result should therefore be -5368709120.
Lines #1 and #3 are "equivalent", since the order of evaluation is undefined.
Now, for lines #1 and #3, here is where I'm unsure: I thought that although the order of evaluation was undefined, the compiler would "promote" all operations to the type of the largest operand, namely long long here. Therefore, no overflow would happen in this case, since all computations would be done in 64 bits... But this is what GCC 5.3.0 gives me for this code:
~/tmp$ g++-5 cast.cc
~/tmp$ ./a.out
-5368709120
-5368709120
16106127360
I would have expected 16106127360 for the first result too. Since I doubt there is a compiler bug of this magnitude in GCC, I guess the bug lies between the keyboard and the chair.
Could anyone please confirm or refute that this is undefined behaviour, and that GCC is correct in giving me whatever it gives (since this is undefined)?
GCC is correct.
Associativity for multiplication is left to right. This means that all of these expressions are evaluated left to right.
Promotion to the wider type happens only between the two operands of a single binary operator.
For example, the first expression is parsed as i * j * k * n * l = ((((i * j) * k) * n) * l), and the promotion happens only when the last multiplication is computed; by that point the left operand has already overflowed.
The Standard unambiguously defines the grouping as follows:
5.6 Multiplicative operators [expr.mul]
1 The multiplicative operators *, /, and % group left-to-right.
This means that a * b * c is evaluated as (a * b) * c. A conforming compiler has no freedom to evaluate it as a * (b * c).
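The usual fix (a sketch, not from the original thread) is to force the leftmost multiplication into long long, so every step of the left-to-right chain is done in 64 bits:

#include <iostream>

int main() {
    int i = 1024, j = 1024, k = 1024, n = 3;
    long long l = 5;
    // (long long)i * j is a long long, so each subsequent * stays 64-bit
    // and no intermediate 32-bit overflow occurs.
    std::cout << (long long)i * j * k * n * l << std::endl; // 16106127360
    return 0;
}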

Bitwise operations and shifts problems

I am testing the function fitsBits(int x, int n) on my own, and I have found an input for which it gives the wrong answer. What is the problem?
/*
 * fitsBits - return 1 if x can be represented as an
 *   n-bit, two's complement integer.
 *   1 <= n <= 32
 *   Examples: fitsBits(5,3) = 0, fitsBits(-4,3) = 1
 *   Legal ops: ! ~ & ^ | + << >>
 *   Max ops: 15
 *   Rating: 2
 */
int fitsBits(int x, int n) {
    int r, c;
    c = 33 + ~n;                // c = 32 - n
    r = !(((x << c) >> c) ^ x); // shift the low n bits out and back, compare with x
    return r;
}
It seems like it gives the wrong answer in
fitsBits(0x80000000, 0x20);
It gives me 1, but actually it should be 0...
How could I fix it?
Thank you!
fitsBits(0x80000000, 0x20);
This function returns 1 because the first argument of your function is int, which is (in practice, these days) a 32-bit signed integer. The largest value a signed 32-bit integer can represent is 0x7FFFFFFF, which is less than the value you are passing in. Because of that, your value gets converted (an implementation-defined conversion) and becomes -0x80000000, something a 32-bit integer can represent. Therefore your function returns 1 (yes, my first argument is something that can be represented using 0x20 = 32 bits).
If you want your function to properly classify the number 0x80000000 as something that cannot be represented in 32 bits, you need to change the type of the first argument. One option would be unsigned int, but your problem definition seems to require handling negative numbers, so the remaining option is long long int, which can hold numbers between -0x8000000000000000 and 0x7FFFFFFFFFFFFFFF.
You will need a couple more adjustments: explicitly mark your constant as long long with the LL suffix, and compute the shift count from 64 bits instead of 32 (c = 65 + ~n, i.e. 64 - n):
#include <stdio.h>

int fitsBits(long long x, int n) {
    long long r;
    int c;
    c = 65 + ~n; // c = 64 - n
    r = !(((x << c) >> c) ^ x);
    return r;
}

int main() {
    printf("%d\n", fitsBits(0x80000000LL, 0x20));
    return 0;
}
Link to IDEONE: http://ideone.com/G8I3kZ
Left shifts that cause overflow are undefined for signed types. Hence the compiler may optimise (x<<c)>>c as simply x, and the entire function reduces down to return 1;.
Probably you want to use unsigned types.
A second cause of undefined behavior in your code is that c may be greater than or equal to the width of int. A shift of more than the width of the integer type is undefined behavior.
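Here is one way the check can be written without the undefined left shift (a sketch that ignores the challenge's operator restrictions; it relies on >> being an arithmetic shift for negative values, which is implementation-defined but universal on mainstream compilers). The idea: x fits in n bits exactly when every bit above bit n-1 agrees with the sign bit, i.e. when x >> (n-1) is all zeros or all ones.

#include <stdio.h>

int fitsBits(int x, int n) {
    /* For a fitting x, shifting out the low n-1 bits leaves either
       0 (non-negative x) or -1 (negative x, arithmetic shift). */
    int shifted = x >> (n - 1);
    return shifted == 0 || shifted == -1;
}

int main(void) {
    printf("%d\n", fitsBits(5, 3));  /* 0 */
    printf("%d\n", fitsBits(-4, 3)); /* 1 */
    return 0;
}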
r = (((x << c) >> c) ^ x); // this gives 0, i.e. r = 0
or
r = !((x << c) >> c);
Your function can be simplified to:
int fitsBits(int x) {
    int r, c;
    c = 33;
    r = (((x << c) >> c) ^ x);
    return r;
}
Note that applying NOT (!) gives you the opposite of r.