Previous even number - C++

If an integer is uneven (odd), I would like to get the previous number; otherwise I would like to keep the current number. E.g. if x = 3, I would like to assign 2 to x; if x = 4, then nothing happens.
At the moment I do the following: x = (x/2)*2, but I have heard that division is computationally expensive. Does -O3 optimize this expression? I am using the g++ (Ubuntu 4.8.4-2ubuntu1~14.04) 4.8.4 compiler. x is a uint32_t.

Try the following:
x &= ~1;
I am assuming here that x is declared with type int. If the rank of x's type is greater than the rank of int, you should instead use an integer literal of the same type as x.

Related

Why, in some C++ compilers, does `int x = 2147483647+1;` give only a warning but stores a negative value, while some compilers give a runtime error?

I want to check whether the reverse of a signed int value x lies between INT_MIN and INT_MAX. To do this, I reverse x twice and check whether the result equals the original x; if so, it lies within the range, otherwise it does not.
But online compilers give a runtime error, while my local g++ works fine and gives the correct output. Can anybody tell me the reason?
int reverse(int x) {
    int tx = x, rx = 0, ans;
    while (tx != 0) {
        rx = rx + rx + rx + rx + rx + rx + rx + rx + rx + rx + tx % 10;
        tx /= 10;
    }
    ans = tx = rx;
    rx = 0;
    while (tx != 0) {
        rx = rx * 10 + tx % 10;
        tx /= 10;
    }
    while (x % 10 == 0 && x != 0) x /= 10;  // trim trailing zeros
    if (rx != x) {
        return 0;
    } else {
        return ans;
    }
}
ERROR:
Line 6: Char 23: runtime error: signed integer overflow: 1929264870 + 964632435 cannot be represented in type 'int' (solution.cpp)
SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior prog_joined.cpp:15:23
According to cppreference:
Overflows
...
When signed integer arithmetic operation overflows (the result does not fit in the result type), the behavior is undefined: it may wrap around according to the rules of the representation (typically 2's complement), it may trap on some platforms or due to compiler options (e.g. -ftrapv in GCC and Clang), or may be completely optimized out by the compiler.
So the online compiler apparently runs with stricter checking enabled (UndefinedBehaviorSanitizer), while your local GCC happened to wrap on overflow.
If you want guaranteed wrapping behavior instead, you can promote your operands to 64-bit width, perform the addition, and then convert the result back to 32-bit width. According to cppreference, this conversion is well defined since C++20.
Numeric conversions
Unlike the promotions, numeric conversions may change the values, with potential loss of precision.
Integral conversions
...
if the destination type is signed, the value does not change if the source integer can be represented in the destination type. Otherwise the result is implementation-defined (until C++20) / the unique value of the destination type equal to the source value modulo 2^n, where n is the number of bits used to represent the destination type (since C++20).
Example code on godbolt
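As a sketch of the promote-to-64-bit idea described above (the function name reverse64 is illustrative, not from the original code): accumulate the reversed value in int64_t, then range-check before converting back.

```cpp
#include <cstdint>
#include <limits>

// Reverse the decimal digits of x, computing in 64 bits so the
// intermediate sum can never overflow; return 0 if the reversed
// value does not fit in a 32-bit int.
int reverse64(int x) {
    int64_t rx = 0;
    int64_t tx = x;
    while (tx != 0) {
        rx = rx * 10 + tx % 10;
        tx /= 10;
    }
    if (rx < std::numeric_limits<int>::min() ||
        rx > std::numeric_limits<int>::max())
        return 0;                    // reversed value would overflow int
    return static_cast<int>(rx);
}
```

For example, reverse64(1534236469) returns 0 because 9646324351 does not fit in an int, while reverse64(123) returns 321.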
I'm not sure what's wrong with your algorithm, but let me suggest an alternative that avoids doing math and the possibility of integer overflows.
If you want to find out if, say, 2147483641 is a valid integer when reversed (i.e., 1463847412), you can do it entirely with string comparisons, after converting the initial value to a string and reversing that string.
The basic algorithm for a non-negative value is this:
convert the integer to a string (let's call this string s)
convert INT_MAX to a string (let's call this string max_s)
reverse s. Handle leading zeros and negative signs appropriately; that is, 120 reversed is "21" and -456 reversed is "-654". Call the result of this conversion rev.
If rev.length() < max_s.length(), then rev is valid as an integer.
If rev.length() > max_s.length(), then rev is not valid as an integer.
If rev.length() == max_s.length(), then the reversed string is the same length as INT_MAX as a string, and a lexical comparison suffices to see if it is valid. That is, isValid = (rev <= max_s) or isValid = strcmp(rev, max_s) < 1.
The algorithm for a negative value is identical, except replace INT_MAX with INT_MIN.
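A sketch of the above steps using std::to_string and std::reverse (the function name reversed_fits is made up for illustration; magnitudes are compared for the negative case):

```cpp
#include <algorithm>
#include <climits>
#include <string>

// Returns true if reversing the decimal digits of x stays within
// [INT_MIN, INT_MAX]. Done purely with string comparisons, so no
// arithmetic can overflow.
bool reversed_fits(int x) {
    std::string limit = std::to_string(x < 0 ? INT_MIN : INT_MAX);
    std::string s = std::to_string(x);
    if (x < 0) {                 // compare magnitudes; drop the '-' signs
        limit.erase(0, 1);
        s.erase(0, 1);
    }
    std::reverse(s.begin(), s.end());
    s.erase(0, s.find_first_not_of('0'));   // trailing zeros of x become leading zeros
    if (s.empty()) s = "0";
    if (s.length() != limit.length())
        return s.length() < limit.length();
    return s <= limit;           // same length, no leading zeros: lexical
                                 // order matches numeric order
}
```

For instance, reversed_fits(2147483641) is true ("1463847412" <= "2147483647" lexically), while reversed_fits(1563847412) is false because the reversal "2147483651" exceeds INT_MAX.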

Does a C++ r-value have a size?

Let's assume the largest number an int variable can hold is 10. Consider the following situation:
int main()
{
    int r1 = 10;
    int r2 = 1;
    int x = r1 + r2;
}
According to my current little knowledge, the expression r1 + r2 creates a temporary to hold the result before copying that value to x.
What I want to know is: since the largest value x can hold is 10, I know (it's a guess, actually) that if I print x, I get 10. But what about r1 + r2? Does the temporary that represents the result of r1 + r2 also hold 10?
In other words, does this temporary also have a largest value it can hold?
This is probably a noob question, and I apologise.
Please note:
I asked this question based on what I thought overflowing was. That is, I thought that when a variable reaches the state where (say, in the integer case) adding one more would overflow it, its value would simply stay at the maximum no matter how much I increase it. But that is apparently not the case: the behaviour on overflow is undefined for most types. Check #bolov's answer.
Signed integers
Computing a value larger than the maximum or smaller than the minimum value of a signed integer type is called "overflow" and is Undefined Behavior.
E.g.:
int a = std::numeric_limits<int>::max();
int b = 1;
a + b;
The above program has Undefined Behavior because the type of a + b is int and the value computed would overflow.
§ 8 Expressions [expr]
§ 8.1 Preamble [expr.pre]
If during the evaluation of an expression, the result is not mathematically defined or not in the range of representable values for
its type, the behavior is undefined.
Unsigned integers
Unsigned integers do not overflow because they are always computed in modulo arithmetic.
unsigned a = std::numeric_limits<unsigned>::max();
a + 1; // guaranteed to be 0
unsigned b = 0;
b - 1; // guaranteed to be std::numeric_limits<unsigned>::max();
§6.9.1 Fundamental types [basic.fundamental]
Unsigned integers shall obey the laws of arithmetic modulo 2^n, where n is the number of bits in the value representation of that particular size of integer.⁴⁹
49) This implies that unsigned arithmetic does not overflow because a result that cannot be represented by the resulting unsigned integer type is reduced modulo the number that is one greater than the largest value that can be represented by the resulting unsigned integer type.
BTW, you cannot be sure that it creates a new variable at all. It all depends on the compiler, compiler options, etc. For instance, in some circumstances the compiler can simply compute the value of the r-value at compile time (if that is possible at that point) and put the computed value straight into the variable.
For your example, it is obvious that r1 + r2 == 11, so x might be constructed directly with the value 11. And even that doesn't guarantee that x will be constructed at all.
Once, while debugging, I saw that a variable I had declared (and defined) was never created at all (along with some calculations I had written). That was because I didn't use the variable in any meaningful way and had set optimization to the highest level.

If I want to round an unsigned integer to the closest smaller or equal even integer, can I divide by 2 then multiply by 2?

For example :
f(8)=8
f(9)=8
Can I do x = x/2*2; ?
Is there a risk that the compiler will optimize away such expression ?
The compiler is allowed to make any optimisations it likes, so long as it does not introduce any side effects into the program. In your case it can't cancel the 2s, as the expression would then have a different value for odd numbers.
x / 2 * 2 is evaluated strictly as (x / 2) * 2, and x / 2 is performed in integer arithmetic if x is an integral type.
This, in fact, is an idiomatic rounding technique.
Since you specified the integers are unsigned, you can do it with a simple mask:
x & (~1u)
which will set the LSB to zero, thus producing the largest even number that is no greater than x. That is, provided x has a type that is no wider than unsigned int.
You can of course force the 1 to be of the same type as a wider x, like this:
x & ~((x & 1u) | 1u)
But at this point, you really ought to look at this approach as an exercise in obfuscation, and accept Bathsheba's answer.
I forgot about the standard library, of course. If you include stdint.h (or cstdint, as you should in C++ code), you can let the implementation take care of the details:
uintmax_t const lsb = 1;
x & ~lsb;
or
x & ~UINTMAX_C(1)
C and C++ generally use the "as if" rule in optimization. The computation result must be as if the compiler didn't optimize your code.
In this case, 9/2*2=8. The compiler may use any method to achieve the result 8. This includes bitmasks, bit shifts, or any CPU-specific hack that produces the same results (x86 has quite a few tricks that rely on the fact that it doesn't differentiate between pointers and integers, unlike C and C++).
You can write x / 2 * 2 and the compiler will produce very efficient code to clear the least significant bit if x has an unsigned type.
Conversely, you could write:
x = x & ~1;
Or probably less readably:
x = x & -2;
Or even
x = (x >> 1) << 1;
Or this too:
x = x - (x & 1);
Or this last one, suggested by supercat, that works for positive values of all integer types and representations:
x = (x | 1) ^ 1;
All of the above proposals work correctly for all unsigned integer types on 2's complement architectures. Whether the compiler will produce optimal code is a question of configuration and quality of implementation.
Note however that x & (~1u) does not work if the type of x is larger than unsigned int. This is a counter-intuitive pitfall. If you insist on using an unsigned constant, you must write x & ~(uintmax_t)1 as even x & ~1ULL would fail if x has a larger type than unsigned long long. To make matters worse, many platforms now have integer types larger than uintmax_t, such as __uint128_t.
Here is a little benchmark:
typedef unsigned int T;
T test1(T x) {
    return x / 2 * 2;
}
T test2(T x) {
    return x & ~1;
}
T test3(T x) {
    return x & -2;
}
T test4(T x) {
    return (x >> 1) << 1;
}
T test5(T x) {
    return x - (x & 1);
}
T test6(T x) { // suggested by supercat
    return (x | 1) ^ 1;
}
T test7(T x) { // suggested by Mehrdad
    return ~(~x | 1);
}
T test1u(T x) {
    return x & ~1u;
}
As suggested by Ruslan, testing on Godbolt's Compiler Explorer shows that gcc -O1 produces exactly the same code for all the above alternatives with unsigned int, but changing the type T to unsigned long long shows differing code for test1u.
If your values are of any unsigned type, as you say, the easiest is
x & -2;
The wonders of unsigned arithmetic ensure that -2 is converted to the type of x and has a bit pattern of all ones except for the least significant bit, which is 0.
Contrary to some of the other proposed solutions, this works with any unsigned integer type that is at least as wide as unsigned. (And you shouldn't do arithmetic with narrower types anyhow.)
As an extra bonus, as remarked by supercat, this relies only on the conversion of a signed type to an unsigned type, which the standard defines as modulo arithmetic. So the converted mask is always UTYPE_MAX - 1, where UTYPE is the unsigned type of x.
In particular, it is independent of the platform's sign representation for signed types.
One option that I'm surprised hasn't been mentioned so far is to use the modulo operator. I would argue this represents your intent at least as well as your original snippet, and perhaps even better.
x = x - x % 2
As others have said, the compiler's optimiser will deal with any reasonable expression equivalently, so worry about what's clearer rather than what you think is fastest. All the bit-tweaking answers are interesting, but you should definitely not use any of them in place of arithmetic operators (assuming the motivation is arithmetic rather than bit tweaking).
Just use the following:
template<class T>
inline T f(T v)
{
    return v & (~static_cast<T>(1));
}
Don't be afraid that this is a function; the compiler should ultimately optimize it into just
v & (~1)
with an appropriately typed 1.

VS13 C++ unexpected integral overflow

Consider this C++ code in VS13:
long long Number;
Number = 10000000000*900;
Number = 10000000*214;
Number = 10000000*215;
Number = (long long) 10000000*215;
When you compile this, you get warning C4307: '*': integral constant overflow for line 4, and when you run the code there is indeed an integral overflow. The other lines are OK. Am I overlooking something?
The 10000000 constant (literal) is by default treated as an int constant, as it fits the int type:
The type of the integer literal is the first type in which the value
can fit, from the list of types which depends on which numeric base
and which integer-suffix was used
(http://en.cppreference.com/w/cpp/language/integer_literal)
Therefore the multiplication is done within the int type (no promotion to a wider type happens). You can consider the line
Number = 10000000*215;
as if it was
const int tmp1 = 10000000;
const int tmp2 = 215;
Number = tmp1*tmp2;
Obviously this produces an overflow, whereas the third and the last lines do not. For the second line, the compiler understands that 10000000000 fits neither int nor long, and therefore uses long long for it.
You can use the ll suffix to force a constant to be long long:
Number = 10000000ll*215;
Literal numbers are by default, understood to be of type int. In Visual Studio, that's a 32-bit integer. Multiplying two ints together results in an int.
Therefore this expression:
10000000*215 // (hex value is 0x80266580.)
is already going to overflow, since the expected value can't be expressed as a 32-bit positive int. The compiler will interpret it as -2144967296, which is completely different from 2150000000.
Hence, to force the expression to be 64-bit, at least one of its operands has to be 64-bit. Hence, this works nicely:
Number = 10000000LL * 215; // LL qualifies the literal number as a "long long"
It forces the whole expression (long long multiplied by int) to be treated as a long long. Hence, no overflow.

Order of operation int(4*x/y) vs int(x/(y/4))

Suppose x and y are of type int.
Are the two expressions:
int(4*x/y)
and:
int(x/(y/4))
guaranteed to evaluate to the same value for all x and y of type int? They should be, mathematically, but only the second expression is consistent (i.e., produces the expected value) in a program I've written.
In many programming languages, 4*x/y and x/(y/4) are different because y/4, as an integer, is the truncated result of dividing y by 4. No such truncation exists in 4*x/y. One obvious difference is when y is 1, in which case the second expression divides by zero, whereas the first one computes 4*x.
Assuming x and y are ints: then of course not. :)
For non-negative integers, x/4 = floor(x/4) mathematically, which gives you:
floor(4*x/y)
versus
floor(x/floor(y/4))