long and int not enough and double wouldn't work - c++

I am using C++, and I've heard (and confirmed on my compiler) that the maximum value that can be stored in an int and in a long is the same.
My problem is that I need to store a number that exceeds the maximum value of a long. A double is large enough to hold it.
But using a double prevents me from using the % operator, which I need in order to write my function easily, and sometimes there seems to be no other way than using it.
So would you kindly tell me a way to achieve this?

It depends on the purpose. For a better answer, give us more context
Have a look at (unsigned) long long or GMP

You can use the types long long int or unsigned long long int.
To find the maximum value that an integral type can hold, you can use the following construction, for example:
std::numeric_limits<long long>::max();
To use it you have to include the header <limits>.
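For example, a minimal program (a sketch; the exact values depend on your platform) that prints these limits:

#include <iostream>
#include <limits>

int main()
{
    // Largest values the built-in integral types can hold on this platform
    std::cout << "int:                " << std::numeric_limits<int>::max() << '\n';
    std::cout << "long:               " << std::numeric_limits<long>::max() << '\n';
    std::cout << "long long:          " << std::numeric_limits<long long>::max() << '\n';
    std::cout << "unsigned long long: " << std::numeric_limits<unsigned long long>::max() << '\n';
    return 0;
}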

So, you want to compute the modulo of large integers. It's 99% likely you're doing encryption, which is hard stuff. Your question kind of implies that maybe you should look for some off-the-shelf solution for your top-level problem (the encryption).
Anyway, the standard answer otherwise is to use a library for arbitrary-precision integers, such as GNU MP.
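As a rough sketch (assuming GMP's C++ interface, gmpxx, is installed and you link with -lgmpxx -lgmp), modulo on integers far beyond long long looks like this:

#include <gmpxx.h>
#include <iostream>

int main()
{
    // Values well beyond the range of any built-in integer type
    mpz_class a("123456789012345678901234567890");
    mpz_class b("987654321987654321");
    mpz_class r = a % b;   // operator% is overloaded for mpz_class
    std::cout << r << '\n';
    return 0;
}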

#include <cmath>

int main()
{
    // Both values exceed the unsigned int limit (4294967295)
    double max_uint = 4294967295.0;
    double max1 = max_uint + 2.0;
    double max2 = (max1 + 1.0) * (max_uint + 1.0);
    // fmod plays the role of % for floating-point values
    double f = fmod(max2, max1);
    return 0;
}
max1 and max2 are both over the unsigned int limit, and fmod returns the correct max2 % max1 result, which is also over the unsigned int limit: f == max_uint + 1.0.
Edit:
Good hint from anatolyg: this method only works for integers up to 2^53. This is because the mantissa of a double has 52 bits (plus an implicit leading bit), and every larger integer is representable only with precision loss. E.g. 2^80 could compare equal to (2^80)+1, to (2^80)+2, and so on. The larger the integers, the larger the imprecision, because the gaps between representable integers grow wider there.
But if you just need about 20 extra bits compared to a 32-bit int, and have no way to achieve this with a built-in integral type (with which the regular % will be faster, I think), then you can use this...
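A quick way to see where the exact-integer range of double ends (a sketch, assuming IEEE 754 doubles):

#include <cstdio>

int main()
{
    double below = 9007199254740991.0;  // 2^53 - 1, exactly representable
    double above = 9007199254740993.0;  // 2^53 + 1, silently rounds to 2^53
    std::printf("%.0f\n%.0f\n", below, above);  // second line prints ...992, not ...993
    return 0;
}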

First, there is a difference between the int and long types, even if they happen to be the same size on your platform.
To fix your problem you can use
unsigned long long int

Here is a list of the sizes you would typically expect in C++ (the standard only mandates minimum sizes; long in particular is 8 bytes on 64-bit Linux and macOS):
char : 1 byte
short : 2 bytes
int : 4 bytes
long : 4 bytes
long long : 8 bytes
float : 4 bytes
double : 8 bytes
I think this explains why you are experiencing difficulties and gives you a hint on how to solve them. You can verify the sizes on your own machine with the sketch below.
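A minimal sizeof check (the results are whatever your compiler and platform use):

#include <iostream>

int main()
{
    // sizeof reports the size of each type in bytes on the current platform
    std::cout << "char:      " << sizeof(char)      << '\n';
    std::cout << "short:     " << sizeof(short)     << '\n';
    std::cout << "int:       " << sizeof(int)       << '\n';
    std::cout << "long:      " << sizeof(long)      << '\n';
    std::cout << "long long: " << sizeof(long long) << '\n';
    std::cout << "float:     " << sizeof(float)     << '\n';
    std::cout << "double:    " << sizeof(double)    << '\n';
    return 0;
}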

Related

Print precise value of pow() function in c++

My output is coming out wrong. I guess I'm getting the casting wrong; please help me out.
int n;
cin>>n;
unsigned long long int a,s;
cin>>a;
s=(2*pow(10,n)+a);
But when I give a large n like 17 or 18, my output s does not come out as expected.
see image for output
e.g. when n=17 and a=67576676767676788, then s=267576676767676800, which ideally should be 2*10^17 + 67576676767676788 = 267576676767676788.
First you have to understand what is going on.
To be able to use std::pow, the compiler silently converts the integer arguments to double, and the returned value is a double too.
Note that a double has about 16 significant decimal digits, so the lowest digits of your 17-18 digit values are lost.
When you do the assignment, a silent conversion from double back to unsigned long long int is performed.
If unsigned long long int is 64 bits wide, the largest power of 10 it can hold is 10^19.
If you want to exceed this limitation you should use an external library; GMP is quite nice.
If the range of unsigned long long int is acceptable, just implement your own integer power function, as sketched below.
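A minimal sketch of such an integer power function (the helper name ipow is made up for illustration; there is no overflow checking):

#include <iostream>

// Computes base^exp entirely in integer arithmetic, so no double rounding occurs.
unsigned long long ipow(unsigned long long base, unsigned exp)
{
    unsigned long long result = 1;
    while (exp--)
        result *= base;   // caller must ensure this stays within 64 bits
    return result;
}

int main()
{
    int n;
    unsigned long long a;
    std::cin >> n >> a;
    unsigned long long s = 2 * ipow(10, n) + a;   // exact, unlike 2*pow(10,n)+a
    std::cout << s << '\n';
    return 0;
}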

What data type is used to store intermediate calculations while executing a program in C++?

I was trying to do the following calculation but found out that it does not yield the correct result.
My doubt is: when my computer does the calculation a*b, what data type is used to temporarily store the result before the modulus is taken? How is the data type in which it stores the result decided?
Please also let me know the source of this information.
#include <iostream>
using namespace std;
int main()
{
long long int a=1000000000000000000; // 18 zeroes
long long int b=1000000000000000000;
long long int c=1000000007;
long long int d=(a*b)%c;
cout<<a<<"\n"<<b<<"\n"<<c<<"\n"<<d;
}
Edit1: This code also gives incorrect output
#include <iostream>
using namespace std;
int main()
{
int a=1000000000; // 9 zeroes
int b=1000000000;
long long int c=1000000007;
long long int d=a*b%c;
cout<<a<<"\n"<<b<<"\n"<<c<<"\n"<<d;
}
How is the data type in which it stores the result decided?
The rules are fairly complicated and convoluted in general, but in this particular case it's simple. a*b is of type long long, and since a*b overflows, the program has undefined behavior.
You can use the equivalent formula to compute the correct result (without overflowing):
(a * b) % c == ((a % c) * (b % c)) % c
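A sketch of how that identity avoids the overflow here (it works because c is about 10^9, so (a % c) * (b % c) stays below 10^18 and fits in a long long):

#include <iostream>

int main()
{
    long long a = 1000000000000000000; // 18 zeroes
    long long b = 1000000000000000000;
    long long c = 1000000007;
    // Reduce each factor first so the multiplication stays within long long range
    long long d = ((a % c) * (b % c)) % c;
    std::cout << d << '\n';
    return 0;
}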
Could you also suggest how to decide this for mixed data types, and post your source of information?
Of some interest: https://en.cppreference.com/w/cpp/language/implicit_conversion. The standard rules are unfortunately even more complicated.
As some suggestions:
Never mix unsigned and signed.
Pay attention that types smaller than int will be promoted to int or unsigned int.
For a type T equal to or larger than int, T op T will have type T. This is what you should be aiming for in your expressions (i.e. have both operands of the same type, either int, long, or long long).
Avoid unsigned types. Unfortunately that's impossible with the current Standard Library design (std::size_t, sigh).
Avoid long, as its width differs between current major compilers and platforms.
If you care about the width of the integer data type, avoid int, long, long long and such, and always use the fixed-width integer types (std::int32_t, std::int64_t, etc.). Completely ignore that technically those types are optional.
My understanding is that long long must be at least 64 bits, but each 1000000000000000000 is a 60-bit number, so a*b would yield a roughly 120-bit result, which exceeds any standard integer type the compiler supports. Perhaps you were thinking that 1000000000000000000 was binary?

Cannot understand the difference between these two code samples

I wanted to write a program that computes the number of zones made by n lines.
The first example is my code, and the second is my friend's code. I think they are trying to do the same thing, but for the case n=65535 my code gives me the wrong answer. Where is the problem in my code?
my code:
#include<iostream>
using namespace std;
int main()
{
int n;
cin >> n;
unsigned long long ans;
ans = (n*(n + 1) / 2) + 1;
cout << ans << endl;
return 0;
}
my friend's code:
#include <iostream>
using namespace std;
int main(void){
double n,sum;
cin>>n;
sum=n*(n+1)/2+1;
cout<<(long)sum<<endl;
return 0;
}
In your code:
int n;
ans = (n*(n + 1) / 2) + 1;
All values in the calculation are ints: n is declared as int, and plain integer constants are ints as well. Therefore the result of this calculation will also be an int. The fact that you later assign this result to a long long variable doesn't change this.
Now the result of the multiplication 65535*65536 does not fit in a 32-bit signed int, so you get a nonsense answer. Fix your program by making n a 64-bit long long.
As #Dithermaster suggests, the problem here is probably one of integer overflow.
As it stands right now, your code doesn't actually make much sense. In particular, since you've defined n as an int, and all the integer literals in the expression: (n*(n + 1) / 2) + 1 are also small enough to fit in an int, the calculation will be carried out on ints, and then (after the calculation is complete) the result will be converted to long long and assigned to ans (because you've defined ans as a long long).
What you almost certainly want is to carry out the entire calculation on long long to avoid overflow. The most obvious way to do this would be to define n as a long long instead of an int.
Your friend has avoided this by defining n as a double. This works up to a point--a typical implementation of double has a 53-bit significand, so it can be used as (essentially) a 53-bit integer type. That's obviously quite a bit more than the 16 bits that's mandated for an int, but equally obviously less than the 64 bits mandated for a long long.
There's also no point in supporting n being negative, so you could consider defining n and ans as unsigned long long instead.
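A minimal corrected sketch along those lines, carrying out the whole calculation in unsigned long long:

#include <iostream>
using namespace std;

int main()
{
    unsigned long long n;
    cin >> n;
    // The whole expression is evaluated in unsigned long long, so n = 65535
    // no longer overflows: 65535 * 65536 / 2 + 1 = 2147450881.
    unsigned long long ans = n * (n + 1) / 2 + 1;
    cout << ans << endl;
    return 0;
}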

Converting string to int (C++)

I looked everywhere and can't find an answer to this specific question :(
I have a string date, which contains the date with all the special characters stripped away. (i.e : yyyymmddhhmm or 201212031204).
I'm trying to convert this string into an int to be able to sort the dates later. I tried atoi, which did not work because the value is too high for the function. I tried streams, but they always return -858993460, and I suspect this is because the string is too large too. I tried atol and atoll and they still don't give the right answer.
I'd rather not use Boost since this is for homework; I don't think I'd be allowed.
Am I out of options to convert a large string to an int?
Thank you!
What i'd like to be able to do :
int dateToInt(string date)
{
date = date.substr(6,4) + date.substr(3,2) + date.substr(0,2) + date.substr(11,2) + date.substr(14,2);
int d;
d = atoi(date.c_str());
return d;
}
You get negative numbers because 201212031204 is too large to fit in an int. Consider using long long.
BTW, you may sort the strings as well; with the fixed-width yyyymmddhhmm format, lexicographic order matches chronological order.
You're on the right track that the value is too large, but it's not just too large for those functions. It's too large for an int in general. An int is typically 32 bits, giving a maximum value of 2147483647 (4294967295 if unsigned). A long long is guaranteed to be large enough for the numbers you're using. If you happen to be on a 64-bit system, a long may be too (it is 64 bits on Linux and macOS, but still 32 bits on Windows).
Now, if you use one of these larger integers, a stream should convert properly. Or, if you want to use a function to do it, have a look at atoll for a long long or atol for a long. (Although for better error checking, you should really consider strtoll or strtol.)
Completely alternatively, you could also use a time_t. It is an integer type under the hood, so you can compare and sort such values. And there are some nice functions for it in <ctime> (have a look at http://www.cplusplus.com/reference/ctime/).
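A small sketch of the larger-integer conversions mentioned above (using the digit string from the question; strtoll and std::stoll are standard):

#include <cstdlib>
#include <iostream>
#include <string>

int main()
{
    std::string date = "201212031204";   // yyyymmddhhmm with separators already stripped
    long long viaStrtoll = std::strtoll(date.c_str(), nullptr, 10);
    long long viaStoll   = std::stoll(date);   // C++11, throws on bad input
    std::cout << viaStrtoll << '\n' << viaStoll << '\n';   // both print 201212031204
    return 0;
}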
typedef long long S64;

// Builds the value digit by digit in a 64-bit integer, so it never overflows a 32-bit int.
S64 dateToInt(char * s) {
    S64 retval = 0;
    while (*s) {
        retval = retval * 10 + (*s - '0');  // shift the accumulated value and append the next digit
        ++s;
    }
    return retval;
}
Note that as has been stated, the numbers you're working with will not fit into 32 bits.

When I calculate a large factorial, why do I get a negative number?

So, simple procedure, calculate a factorial number. Code is as follows.
int calcFactorial(int num)
{
int total = 1;
if (num == 0)
{
return 0;
}
for (num; num > 0; num--)
{
total *= num;
}
return total;
}
Now, this works fine and dandy (There are certainly quicker and more elegant solutions, but this works for me) for most numbers. However when inputting larger numbers such as 250 it, to put it bluntly, craps out. Now, the first couple factorial "bits" for 250 are { 250, 62250, 15126750, 15438000, 3813186000 } for reference.
My code spits out { 250, 62250, 15126750, 15438000, -481781296 } which is obviously off. My first suspicion was perhaps that I had breached the limit of a 32 bit integer, but given that 2^32 is 4294967296 I don't think so. The only thing I can think of is perhaps that it breaches a signed 32-bit limit, but shouldn't it be able to think about this sort of thing? If being signed is the problem I can solve this by making the integer unsigned but this would only be a temporary solution, as the next iteration yields 938043756000 which is far above the 4294967296 limit.
So, is my problem the signed limit? If so, what can I do to calculate large numbers (Though I've a "LargeInteger" class I made a while ago that may be suited!) without coming across this problem again?
2^32 doesn't give you the limit for signed integers.
The signed int limit is actually 2147483647 (that's the value with the MS tools on Windows; other toolsuites/platforms have their own limits, which are probably similar).
You'll need a C++ large number library like this one.
In addition to the other comments, I'd like to point out two serious bugs in your code.
You have no guard against negative numbers.
The factorial of zero is one, not zero.
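A minimal corrected sketch (using unsigned long long; it still overflows for inputs like 250, but it fixes the two bugs above, and returning 0 for negative input is just one possible convention):

#include <iostream>

unsigned long long calcFactorial(int num)
{
    if (num < 0)
        return 0;                   // guard against negative input
    unsigned long long total = 1;   // also gives the correct result 0! == 1
    for (; num > 1; num--)
        total *= num;               // still overflows once the true result exceeds 64 bits
    return total;
}

int main()
{
    std::cout << calcFactorial(0) << '\n';   // 1
    std::cout << calcFactorial(20) << '\n';  // 2432902008176640000, the largest factorial that fits in 64 bits
    return 0;
}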
Yes, you hit the limit. An int in C++ is, by definition, signed. And, uh, no, C++ does not think, ever. If you tell it to do a thing, it will do it, even if it is obviously wrong.
Consider using a large number library. There are many of them around for C++.
If you don't specify signed or unsigned, the default for int is signed. (Only plain char has implementation-defined signedness, which some compilers let you change with a command-line switch.)
Just remember, C (or C++) is a very low-level language and does precisely what you tell it to do. If you tell it to store this value in a signed int, that's what it will do. You as the programmer have to figure out when that's a problem. It's not the language's job.
My Windows calculator (Start-Run-Calc) tells me that
hex (3813186000) = E34899D0
hex (-481781296) = FFFFFFFFE34899D0
So yes, the cause is the signed limit. Since factorials can by definition only be positive, and can only be calculated for non-negative numbers, both the argument and the return value should be unsigned anyway. (I know that everybody uses int i = 0 in for loops, and so do I. But that aside, we should always use unsigned variables if the value cannot be negative; it's good practice IMO.)
The general problem with factorials is that they can easily generate very large numbers. You could use a float, thus sacrificing precision but avoiding the integer overflow problem.
Oh wait, according to what I wrote above, you should make that an unsigned float ;-)
If I remember well:
unsigned short int = max 65535
unsigned int = max 4294967295
unsigned long = max 4294967295 (on 32-bit platforms and 64-bit Windows; it is 64 bits wide on 64-bit Linux/macOS)
unsigned long long (Int64) = max 18446744073709551615
Edited source:
Int/Long Max values
Modern Compiler Variable