This question already has answers here: Overflowing of Unsigned Int (3 answers), C/C++ unsigned integer overflow (4 answers). Closed 5 years ago.
There is the ULARGE_INTEGER union for compilers that don't support 64-bit arithmetic.
What would happen in the following code if the addition on the last line overflows?
ULARGE_INTEGER u;
u.LowPart = ft->dwLowDateTime;
u.HighPart = ft->dwHighDateTime;
u.LowPart += 10000; // what if this overflows?
Related question:
What is the point of the ULARGE_INTEGER union?
ULARGE_INTEGER is composed of two unsigned values. Unsigned values are guaranteed to wrap around, so in a strict sense they can't "overflow".
If wrap-around does occur, u.LowPart will end up being less than 10,000. What you probably want is:
u.LowPart += 10000;
if (u.LowPart < 10000) u.HighPart++;
... but what compiler still doesn't support 64-bit integers these days? They have been required by the C++ standard since 2011 and by the C standard since 1999. So what you really want is:
u.QuadPart += 10000; // Forget about legacy compilers that don't support 64 bits.
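Putting it together, here is a minimal sketch of the full round trip (assuming the Windows FILETIME and ULARGE_INTEGER types from <windows.h>; AddTicks is an illustrative name, not part of the API):

#include <windows.h>

// Add 10,000 100-ns ticks (one millisecond) to a FILETIME.
// QuadPart handles the carry from LowPart into HighPart automatically.
FILETIME AddTicks(FILETIME ft)
{
    ULARGE_INTEGER u;
    u.LowPart  = ft.dwLowDateTime;
    u.HighPart = ft.dwHighDateTime;
    u.QuadPart += 10000;
    FILETIME result;
    result.dwLowDateTime  = u.LowPart;
    result.dwHighDateTime = u.HighPart;
    return result;
}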
This question already has answers here: Implicit type promotion rules (4 answers). Closed 3 years ago.
I have the following code:
short a = 5;
short b = 15;
short c = 25;
short d = std::min(a, b);
short e = std::min(a, b-c); // Error
The last line cannot be compiled, claiming that there's no overload of min() that matches the arguments "short, int".
What is the reason for this being the case?
I understand that the result of b-c could potentially no longer fit in a short. However, the same would be true if I were using ints, and there the compiler doesn't automatically promote to a long or anything to ensure the result fits.
As long as I am sure that the resulting number will never exceed the range of short, it is safe to use static_cast<short>(b-c), right?
Huge thanks!
Reason: integer promotion. If a type is narrower than int, it is promoted to int automatically. This makes little difference for signed numbers, because signed overflow is undefined behavior anyway, but for unsigned numbers, whose overflow wraps, it allows the compiler to emit much less code on most processors.
Most of the time this converts back automatically, because assigning to a narrower variable is not an error. You happened to find a case where the promotion really does cause an issue, though.
If you're sure it fits, just cast it back.
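A minimal sketch of both fixes (the variable names follow the question; the min<int> variant narrows on assignment, which may draw a compiler warning):

#include <algorithm>
#include <iostream>

int main()
{
    short a = 5, b = 15, c = 25;

    // b - c is promoted to int; cast it back so both arguments are short.
    short e = std::min(a, static_cast<short>(b - c));

    // Or name the template argument explicitly and compare as int;
    // assigning back to short then narrows the result.
    short f = std::min<int>(a, b - c);

    std::cout << e << ' ' << f << '\n'; // prints "-10 -10"
}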
This question already has answers here: Is it possible to create a type in c++ that takes less than one byte of memory? (5 answers). Closed 5 years ago.
I need a 4-bit integer type in a design to reduce memory use. Any version of C++ (C++11, C++14, or later) can be used for the design.
There is no native 4-bit data type, but you could use an 8-bit one to hold two 4-bit values in its high and low nibbles.
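A minimal sketch of that packing (pack, high_nibble, and low_nibble are illustrative names, not from the answer):

#include <cstdint>

// Pack two 4-bit values into one byte: hi in the high nibble, lo in the low.
std::uint8_t pack(std::uint8_t hi, std::uint8_t lo)
{
    return static_cast<std::uint8_t>(((hi & 0x0F) << 4) | (lo & 0x0F));
}

std::uint8_t high_nibble(std::uint8_t b) { return b >> 4; }
std::uint8_t low_nibble(std::uint8_t b)  { return b & 0x0F; }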
No, but you can use bit-fields:
struct A {
    unsigned int value1 : 4;
    unsigned int value2 : 4;
};
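A quick usage sketch (note, as an aside not in the original answer: with unsigned int as the underlying type the struct often occupies 4 bytes; an unsigned char underlying type typically shrinks it to 1 byte, though layout is implementation-defined):

#include <iostream>

struct B {
    unsigned char value1 : 4; // holds 0..15; out-of-range values are stored modulo 16
    unsigned char value2 : 4;
};

int main()
{
    B b{};
    b.value1 = 9;
    b.value2 = 14;
    std::cout << int(b.value1) << ' ' << int(b.value2) << '\n'; // 9 14
    std::cout << sizeof(B) << '\n';                             // typically 1
}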
This question already has answers here: unsigned int vs. size_t (8 answers). Closed 6 years ago.
I saw an example recently that looks like the following:
const size_t NDim = 3;
double coords[NDim];
My question is straightforward: when does one use size_t rather than an int or unsigned int? In this particular case, wouldn't the following be equivalent to the above?
const unsigned int NDim = 3;
double coords[NDim];
size_t is commonly used for array indexing and loop counting.
According to cppreference:
Programs that use other types, such as unsigned int, for array indexing may fail on, e.g. 64-bit systems when the index exceeds UINT_MAX or if it relies on 32-bit modular arithmetic.
It also states:
std::size_t can store the maximum size of a theoretically possible object of any type (including array). A type whose size cannot be represented by std::size_t is ill-formed (since C++14).
The answer is straightforward as well: use size_t for all your array indexing and sizing needs; that is exactly what it was designed for, and you should never use anything else for it.
Apart from being self-documenting, it has another important aspect: on many platforms sizeof(int) is not equal to sizeof(size_t).
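A minimal sketch of the idiom (sizeof itself yields a std::size_t, so the index type matches):

#include <cstddef>
#include <iostream>

int main()
{
    const std::size_t NDim = 3;
    double coords[NDim] = {1.0, 2.0, 3.0};

    // A std::size_t index avoids any signed/unsigned or width mismatch
    // with sizeof on 64-bit systems.
    for (std::size_t i = 0; i < sizeof coords / sizeof coords[0]; ++i)
        std::cout << coords[i] << '\n';
}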
This question already has answers here: Representing big numbers in source code for readability? (5 answers). Closed 7 years ago.
In C++, sometimes you want to declare large numbers. Sometimes it's hard to see if you have the right number of zeroes.
const long long VERY_LARGE_NUMBER = 300000000000;
In a language like OCaml, you can separate numbers with underscores to improve readability.
let x = 300_000_000_000;;
Is there a similar mechanism in C++? I have seen things like = 1 << 31 for powers of 2, but what about for very large powers of 10? Sometimes you're declaring very large numbers (e.g. array bounds in competition programming) and you want to be confident that your declared array size is correct.
I can think of something like:
const long long VERY_LARGE_NUMBER = 3LL * (1LL << (11 * 10 / 3));
...which abuses 1 << 10 ~= 1000 to get close to 3 with 11 zeroes, but it's verbose and not exact.
How about:
const long long VERY_LARGE_NUMBER = (long long) 300 * 1000 * 1000 * 1000;
Since C++14, integer literals support ' as a digit separator. For example: unsigned long long l2 = 18'446'744'073'709'550'592llu;. See this cppreference page for the details. You may also consider scientific notation, like 123e4; such literals are floating-point literals, but you can convert them to integer types.
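Applied to the question's constant, a minimal sketch:

// Digit separators (C++14) are ignored by the compiler; they only
// group digits for readability.
const long long VERY_LARGE_NUMBER = 300'000'000'000LL;

// 123e4 is a double literal; the conversion is exact here, but prefer
// integer literals whenever exactness matters.
const long long FROM_SCIENTIFIC = static_cast<long long>(123e4); // 1'230'000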
This question already has answers here: Best way to detect integer overflow in C/C++ (possible duplicate). Closed 11 years ago.
How do we check whether an arithmetic operation like addition, multiplication, or subtraction could result in an overflow?
Check the size of the operands first, and use std::numeric_limits. For example, for addition:
#include <limits>
unsigned int a, b; // from somewhere
unsigned int diff = std::numeric_limits<unsigned int>::max() - a;
if (diff < b) { /* error, cannot add a + b */ }
You cannot generally and reliably detect arithmetic errors after the fact (signed overflow is undefined behavior), so you have to do all the checking beforehand.
You can easily template this approach to make it work with any numeric type.
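For instance, a minimal sketch of that templated pre-check for unsigned types (would_add_overflow is an illustrative name; signed types need extra care because their overflow is undefined):

#include <limits>

// True if a + b would wrap for an unsigned integer type T.
template <typename T>
bool would_add_overflow(T a, T b)
{
    return b > std::numeric_limits<T>::max() - a;
}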