This question already has answers here:
Representing big numbers in source code for readability?
(5 answers)
Closed 7 years ago.
In C++, sometimes you want to declare large numbers. Sometimes it's hard to see if you have the right number of zeroes.
const long long VERY_LARGE_NUMBER = 300000000000;
In a language like OCaml, you can separate numbers with underscores to improve readability.
let x = 300_000_000_000;;
Is there a similar mechanism in C++? I have seen things like = 1 << 31 for powers of 2, but what about for very large powers of 10? Sometimes you're declaring very large numbers (e.g. array bounds in competition programming) and you want to be confident that your declared array size is correct.
I can think of something like:
const long long VERY_LARGE_NUMBER = 3LL * (1LL << (11 * 10 / 3));
...which abuses 1 << 10 ≈ 1000 to get close to 3 followed by 11 zeroes, but it's verbose and not exact (and it needs the LL suffix to avoid shifting a 32-bit int past its width).
how about
const long long VERY_LARGE_NUMBER = (long long) 300 * 1000 * 1000 * 1000;
Since C++14, integer literals support the use of ' as a digit separator. For example: unsigned long long l2 = 18'446'744'073'709'550'592llu;. See the cppreference page on integer literals for details. You may also consider scientific notation, like 123e4. Such literals are floating-point literals, but you can convert them to integer types.
This question already has answers here:
Overflowing of Unsigned Int
(3 answers)
C/C++ unsigned integer overflow
(4 answers)
Closed 5 years ago.
There is the ULARGE_INTEGER union for compilers that don't support 64 bit arithmetic.
What would happen in the following code if the addition on the last line overflows?
ULARGE_INTEGER u;
u.LowPart = ft->dwLowDateTime;
u.HighPart = ft->dwHighDateTime;
u.LowPart += 10000; //what if overflow?
Related question:
What is the point of the ULARGE_INTEGER union?
ULARGE_INTEGER is composed of two unsigned values. Unsigned arithmetic is guaranteed to wrap around, so in some sense it can't "overflow".
If wrap-around does occur, u.LowPart will end up being less than 10,000. What you probably want is:
u.LowPart += 10000;
if (u.LowPart < 10000) u.HighPart++;
... but what compiler still doesn't support 64-bit integers these days? They have been required by the C++ standard since 2011, and the C standard since 1999. So what you really want is:
u.QuadPart += 10000; // Forget about legacy compilers that don't support 64 bits.
This question already has answers here:
Handling large numbers in C++?
(10 answers)
Closed 7 years ago.
I would like to write a program which could compute integers having more than 2000 or 20000 digits (for Pi's decimals). I would like to do it in C++, without any libraries! (No big integer, boost, ...). Can anyone suggest a way of doing it? Here are my thoughts:
using const char*, for holding the integer's digits;
representing the number like
( (1 * 10 + x) * 10 + x )...
The obvious answer works along these lines:
class integer {
    bool negative;
    std::vector<std::uint64_t> data;
};
Where the number is represented as a sign flag and an unsigned base-2**64 value.
This means the absolute value of your number is:
data[0] + (data[1] << 64) + (data[2] << 128) + ....
Or, in other terms, you represent your number as a little-endian string of words as large as your target machine can reasonably work with. I chose 64-bit integers, as you can minimize the number of individual word operations this way (on an x64 machine).
To implement addition, you use a concept you learned in elementary school:
        a              b
  +     x              y
  ----------------------
  (a+x+carry)  (b+y reduced to one digit length)
The reduction (modulo 2**64) happens automatically, and the carry can only ever be either zero or one. All that remains is to detect a carry, which is simple:
bool next_carry = false;
if((x += y) < y) next_carry = true;
if(prev_carry && !++x) next_carry = true;
Subtraction can be implemented similarly using a borrow instead.
Note that getting anywhere close to the performance of e.g. libgmp is... unlikely.
A long integer is usually represented by a sequence of digits (see positional notation). For convenience, use the little-endian convention: A[0] is the lowest digit, A[n-1] is the highest one. In the general case your number equals sum(A[i] * base^i) for some value of base.
The simplest value for base is ten, but it is not efficient. If you need to print your number often, you'd better use a power of ten as the base. For instance, you can use base = 10^9 and store each digit in an int32. If you want maximal speed, use a power-of-two base instead. For instance, base = 2^32 is the best possible base for a 32-bit target (however, you'll need assembly to make it work optimally).
There are two ways to represent negative integers. The first is to store the integer as a sign plus a digit sequence; in this case you'll have to handle all the different-sign cases yourself. The other option is to use a complement form, which works for both power-of-two and power-of-ten bases.
Since the length of the sequence may vary, you'd better store the digit sequence in a std::vector. Do not forget to remove leading zeroes in this case. An alternative solution is to always store a fixed number of digits (a fixed-size array).
The operations are implemented in a pretty straightforward way: just as you did them in school =)
P.S. Alternatively, each integer (of bounded length) can be represented by its remainders modulo a set of distinct primes, thanks to the CRT. Such a representation supports only a limited set of operations, and requires a nontrivial conversion if you want to print the number.
This question already has answers here:
Handle arbitrary length integers in C++
(3 answers)
Closed 7 years ago.
Hi, I am doing an algorithm question that requires getting the full result of
5208334^2, which is 27126743055556.
I was able to do it by representing the integer as a character array. However, is there a better way (shorter or faster) to do that? Any idea is welcome.
Updated:
For my case, both long long and int64 work, just that I did not cast value before return:
long long val(int n1, int n2) {
    ........
    return (long long) n1 * n2;
}
This number fits into the long long type (a compiler extension in GCC before it became standard in C++11) or __int64 (on some other pre-C++11 compilers). Thus the simplest solution is to use this type.
This question already has answers here:
Why are floating point numbers inaccurate?
(5 answers)
Closed 8 years ago.
Now, I understand floats are less accurate than doubles, but does this explain what I am seeing? I have the std::string:
"7.6317"
and I do:
float x = atof(myString.c_str());
is getting 7.63170004 expected? Is there any way I can tell the assignment of x to only read the first 4 decimal places? Or is this because of the way the float representation stores the number 7.6317?
Yes, it is expected. This is so-called floating-point error.
Some floating-point literals do not have an exact representation in the computer, even if, in decimal notation, the number seems harmless. This is because the computer uses 2 as its base: even if a number has a finite representation in base 10, it might not have one in base 2.
You can truncate to four decimal places like this:
float x = floorf(val * 10000) / 10000;
though note that the truncated value itself still cannot be stored exactly in binary; if you only need four decimals for output, formatting the printed value is usually the better fix.
This question already has answers here:
How disastrous is integer overflow in C++?
(3 answers)
Closed 8 years ago.
When I try to add two large numbers, I get a negative result:
#include <iostream>
using namespace std;

int main()
{
    int a = 1825228665;
    int b = 1452556585;
    cout << a + b;
    return 0;
}
This gives me:
-1017182046
It's overflow of the type. When you add two big numbers whose result can't be stored in the chosen type, the addition overflows. For int and other signed numeric types, the standard leaves the behavior undefined; in practice most implementations wrap the value around, but you cannot rely on that.
Let's say that int could store numbers from -10 to 10. When you do this:
int a = 10;
int b = a + 1;
you may get -10 in b, or some other value entirely (it can be anything, because the result is undefined).
That's because the result overflows. In signed numeric types the most significant bit indicates the sign; the specific representation is called two's complement (see the Wikipedia article). Practically, a 1 in this bit maps to a negative value and a 0 to a non-negative one. The solution to this problem is to use a larger data type such as long long: it occupies more memory, so the range of representable values increases.