Output a 15-digit number - C++

I have a program that is supposed to find the sum of all the terms from the 1st through the 75th of the Fibonacci sequence that are divisible by three. I got the program working properly; the only problem I am having is displaying such a large number. I am told that the answer should be 15 digits. I have tried long long, long double, unsigned long long int, and none of those produce the right output (they produce a negative number).
code:
long fibNum(int kth, int nth);

int main()
{
    int kTerm;
    int nTerm;
    kTerm = 76;
    nTerm = 3;
    std::cout << fibNum(kTerm, nTerm) << std::endl;
    system("pause");
    return 0;
}
long fibNum(int kth, int nth)
{
    int term[100];
    long firstTerm;
    long secondTerm;
    long exactValue;
    int i;
    term[1] = 1;
    term[2] = 1;
    exactValue = 0;
    do
    {
        firstTerm = term[nth - 1];
        secondTerm = term[nth - 2];
        term[nth] = (firstTerm + secondTerm);
        nth++;
    }
    while(nth < kth);
    for(i = 1; i < kth; i++)
    {
        if(term[i] % 3 == 0)
        {
            term[i] = term[i];
        }
        else
            term[i] = 0;
        exactValue = term[i] + exactValue;
    }
    return exactValue;
}
I found out that the problem has to do with the array. The array cannot store the 47th term, which is 10 digits. Now I have no idea what to do.

Type long long is guaranteed to be at least 64 bits (and is exactly 64 bits on every implementation I've seen). Its maximum value, LLONG_MAX, is at least 2^63-1, or 9223372036854775807, which is 19 decimal digits -- so long long is more than big enough to represent 15-digit numbers.
Just use type long long consistently. In your code, you have one variable of type long double, which has far more range than long long but may have less precision (which could make it impossible to determine whether a given number is a multiple of 3).
You could also use unsigned long long, whose upper bound is at least 2^64-1, but either long long or unsigned long long should be more than wide enough for your purposes.
Displaying a long long value in C++ is straightforward:
long long num = some_value;
std::cout << "num = " << num << "\n";
Or if you prefer printf for some reason, use the "%lld" format for long long, "%llu" for unsigned long long.
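For instance (a hypothetical value, just to show the format specifier):
long long num = 123456789012345LL;   // a 15-digit value
printf("num = %lld\n", num);         // assumes <cstdio> is included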
(For integers too wide to fit in 64 bits, there are software packages that handle arbitrarily large integers; the most prominent is GNU's GMP. But you don't need it for 15-digit integers.)

What you can do is take char s[15] and int i = 14, k, and then run a while loop until the number n you want to print becomes 0.
Inside the while body:
k = n % 10;
s[i] = k + 48;
n = n / 10;
i--;
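A minimal sketch of that idea, assuming the value to print is called n and fits in 15 digits (the buffer here is one byte larger so it can hold a terminating null):
#include <cstdio>

int main()
{
    long long n = 123456789012345LL;   // hypothetical 15-digit value
    char s[16];                        // 15 digits plus terminating '\0'
    s[15] = '\0';
    int i = 14;
    while (n != 0 && i >= 0)
    {
        int k = n % 10;                // lowest decimal digit
        s[i] = k + 48;                 // 48 is the ASCII code of '0'
        n = n / 10;
        i--;
    }
    std::printf("%s\n", s + i + 1);    // print only the digits that were written
    return 0;
}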

The array cannot store the 47th term which is 10 digits.
This indicates that your architecture has a type long with just 32 bits. That is common on 32-bit architectures. 32 bits cover 9-digit numbers and the low 10-digit numbers, to be precise up to 2,147,483,647 for long and 4,294,967,295 for unsigned long.
Just change your long types to long long or unsigned long long, including the return type of fibNum. That would easily cover 18 digits.
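Here is a minimal sketch of that change applied to the question's function; note that the term array has to be widened too, since (as you found) the 47th Fibonacci number already overflows a 32-bit value:
long long fibNum(int kth, int nth)
{
    long long term[100];               // wide enough for the 75th Fibonacci number
    term[1] = 1;
    term[2] = 1;
    while (nth < kth)
    {
        term[nth] = term[nth - 1] + term[nth - 2];
        nth++;
    }
    long long exactValue = 0;
    for (int i = 1; i < kth; i++)
    {
        if (term[i] % 3 == 0)          // keep only the terms divisible by 3
            exactValue += term[i];
    }
    return exactValue;
}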

Related

Finding the largest prime factor? (Doesn't work with large numbers?)

I am a beginner in C++, and I just finished reading chapter 1 of the C++ Primer. So I tried the problem of computing the largest prime factor, and I found out that my program works well for numbers up to about 1e9 but fails beyond that, e.g. 600851475143: it always returns a weird number, e.g. 2147483647, when I feed any large number into it. I know a similar question has been asked many times; I just wonder why this could happen to me. Thanks in advance.
P.S. I guess the reason has to do with some part of my program not being able to handle such large numbers.
#include <iostream>
int main()
{
    int val = 0, temp = 0;
    std::cout << "Please enter: " << std::endl;
    std::cin >> val;
    for (int num = 0; num != 1; val = num){
        num = val/2;
        temp = val;
        while (val%num != 0)
            --num;
    }
    std::cout << temp << std::endl;
    return 0;
}
Your int type is 32 bits (like on most systems). The largest value a two's complement signed 32-bit value can store is 2^31 - 1, or 2147483647. Take a look at man limits.h if you want to know the constants defining the limits of your types, and/or use larger types (e.g. unsigned would double your range at basically no cost; uint64_t from stdint.h/inttypes.h would expand it by a factor of roughly 8.6 billion and only cost something meaningful on 32-bit systems).
2147483647 isn't a weird number; it's INT_MAX, which is defined in the climits header file. This happens when you reach the maximum capacity of an int.
You can use a bigger data type for that purpose, such as unsigned long long int (or std::size_t on a typical 64-bit platform), which has a maximum value of 18446744073709551615.
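For illustration, a minimal sketch of the same trial-division program with the types widened to 64 bits; only the types change, the algorithm is kept as in the question (and remains slow for large inputs):
#include <iostream>
#include <cstdint>

int main()
{
    std::uint64_t val = 0, temp = 0;       // 64-bit instead of 32-bit int
    std::cout << "Please enter: " << std::endl;
    std::cin >> val;
    for (std::uint64_t num = 0; num != 1; val = num){
        num = val/2;                       // largest candidate divisor
        temp = val;
        while (val%num != 0)
            --num;                         // step down to the largest real divisor
    }
    std::cout << temp << std::endl;        // temp holds the largest prime factor
    return 0;
}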

Sum signed 32-bit int with unsigned 64bit int

In my application, I receive two signed 32-bit ints and I have to store them. I have to create a sort of counter and I don't know when it will be reset, but I'll receive big values frequently. Because of that, in order to store these values, I decided to use two unsigned 64-bit ints.
The following could be a simple version of the counter.
struct Counter
{
    unsigned int elementNr;
    unsigned __int64 totalLen1;
    unsigned __int64 totalLen2;

    void UpdateCounter(int len1, int len2)
    {
        if(len1 > 0 && len2 > 0)
        {
            ++elementNr;
            totalLen1 += len1;
            totalLen2 += len2;
        }
    }
};
I know that if a smaller type is cast to a bigger one (e.g. int to long) there should be no issues. However, going from a 32-bit representation to a 64-bit representation and from signed to unsigned at the same time is something new for me.
Reading around, I understood that len1 should be widened from 32 bits to 64 bits with sign extension applied. Because unsigned int and signed int have the same rank (Section 4.13), the latter should be converted.
If len1 stores a negative value, converting from signed to unsigned will give a wrong value, which is why I check for positivity at the beginning of the function. However, for positive values there should be no issues, I think.
For clarity I could rewrite UpdateCounter(int len1, int len2) like this:
void UpdateCounter(int len1, int len2)
{
    if(len1 > 0 && len2 > 0)
    {
        ++elementNr;
        __int64 tmp = len1;
        totalLen1 += static_cast<unsigned __int64>(tmp);
        tmp = len2;
        totalLen2 += static_cast<unsigned __int64>(tmp);
    }
}
Might there be some side effects that I have not considered?
Is there another, better and safer way to do this?
A little background, just for reference: binary operators such as arithmetic addition work on operands of the same type (the specific CPU instruction to which the operation is translated depends on the number representation, which must be the same for both instruction operands).
When you write something like this (using fixed-width integer types to be explicit):
int32_t a = <some value>;
uint64_t sum = 0;
sum += a;
As you already know, this involves an implicit conversion, more specifically an integral conversion according to integer conversion rank.
So the expression sum += a; is equivalent to sum += static_cast<uint64_t>(a);, because a, having the lesser rank, is the operand that gets converted.
Let's see what happens in this example:
int32_t a = 60;
uint64_t sum = 100;
sum += static_cast<uint64_t>(a);
std::cout << "a=" << static_cast<uint64_t>(a) << " sum=" << sum << '\n';
The output is:
a=60 sum=160
So all is ok, as expected. Let's see what happens when adding a negative number:
int32_t a = -60;
uint64_t sum = 100;
sum += static_cast<uint64_t>(a);
std::cout << "a=" << static_cast<uint64_t>(a) << " sum=" << sum << '\n';
The output is:
a=18446744073709551556 sum=40
The result is 40 as expected: this relies on the two's complement integer representation (note: unsigned integer overflow is not undefined behaviour) and all is ok, of course as long as you ensure that the sum does not become negative.
Coming back to your question: you won't have any surprises if you always add positive numbers, or at least ensure that the sum never goes negative... until you reach the maximum representable value std::numeric_limits<uint64_t>::max() (2^64-1 = 18446744073709551615 ~ 1.8E19).
If you continue to add numbers indefinitely, sooner or later you'll reach that limit (this is also valid for your counter elementNr).
You would overflow the 64-bit unsigned integer by adding 2^31-1 (2147483647) every millisecond for approximately three months, so in this case it may be advisable to check:
#include <limits>
//...
void UpdateCounter(const int32_t len1, const int32_t len2)
{
    if( len1 > 0 )
    {
        if( static_cast<decltype(totalLen1)>(len1) <= std::numeric_limits<decltype(totalLen1)>::max() - totalLen1 )
        {
            totalLen1 += len1;
        }
        else
        {// Would overflow!!
            // Do something
        }
    }
}
When I have to accumulate numbers and I don't have particular requirements about accuracy, I often use double, because the maximum representable value is incredibly high (std::numeric_limits<double>::max() is about 1.79769E+308); to reach overflow I would need to add 2^32-1 = 4294967295 every picosecond for about 1E+279 years.
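As a hypothetical illustration of that approach (not from the original answer), accumulating 32-bit values into a double:
#include <iostream>
#include <limits>
#include <cstdint>

int main()
{
    double total = 0.0;
    for (int i = 0; i < 1000000; ++i)
        total += std::numeric_limits<std::int32_t>::max();   // add 2^31-1 a million times
    std::cout << "total      = " << total << '\n';           // about 2.147E+15, still stored exactly
    std::cout << "max double = " << std::numeric_limits<double>::max() << '\n';
    return 0;
}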

Unsigned int not working C++

Following are different programs/scenarios using unsigned int with their respective outputs. I don't know why some of them are not working as intended.
Expected output: 2
Program 1:
int main()
{
    int value = -2;
    std::cout << (unsigned int)value;
    return 0;
}
// OUTPUT: 4294967294
Program 2:
int main()
{
    int value;
    value = -2;
    std::cout << (unsigned int)value;
    return 0;
}
// OUTPUT: 4294967294
Program 3:
int main()
{
    int value;
    std::cin >> value; // 2
    std::cout << (unsigned int)value;
    return 0;
}
// OUTPUT: 2
Can someone explain why Program 1 and Program 2 don't work? Sorry, I'm new at coding.
You are expecting the cast from int to unsigned int to simply change the sign of a negative value while maintaining its magnitude. But that isn't how it works in C or C++. When it comes to overflow, unsigned integers follow modular arithmetic, meaning that assigning or initializing from negative values such as -1 or -2 wraps around to the largest and second-largest unsigned values, and so on. So, for example, these two are equivalent:
unsigned int n = -1;
unsigned int m = -2;
and
unsigned int n = std::numeric_limits<unsigned int>::max();
unsigned int m = std::numeric_limits<unsigned int>::max() - 1;
A minimal working demonstration (hypothetical, printing each value next to the limit it wraps to):
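#include <iostream>
#include <limits>

int main()
{
    unsigned int n = -1;   // wraps to the largest unsigned int value
    unsigned int m = -2;   // wraps to the second largest
    std::cout << n << " == " << std::numeric_limits<unsigned int>::max() << '\n';
    std::cout << m << " == " << std::numeric_limits<unsigned int>::max() - 1 << '\n';
    return 0;
}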
Also note that there is no substantial difference between programs 1 and 2. It is all down to the sign of the value used to initialize or assign to the unsigned integer.
Casting a value from signed to unsigned changes how the individual bits of the value are interpreted. Let's have a look at a simple example with an 8-bit value like char and unsigned char.
The values of a (signed) char range from -128 to 127. Including the 0, these are 256 (2^8) values. Usually the first bit indicates whether the value is negative or positive, so only the remaining 7 bits can be used to describe the magnitude.
An unsigned char can't take any negative values because there is no bit to determine whether the value should be negative or positive. Therefore its value ranges from 0 to 255.
When all bits are set (1111 1111), the unsigned char has the value 255. A plain signed char, however, treats the first bit as an indicator for a negative value; sticking to two's complement, this value is -1.
This is the reason the cast from int to unsigned int does not do what you expected it to do, but it does exactly what it is supposed to do.
EDIT
If you just want to switch from negative to positive values, write yourself a simple function like this:
uint32_t makeUnsigned(int32_t toCast)
{
    if (toCast < 0)
        toCast *= -1;
    return static_cast<uint32_t>(toCast);
}
This way you convert the incoming int to an unsigned int, whose maximum value is 2^32 - 1.
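Hypothetical usage (assuming <iostream> and <cstdint> are included):
std::cout << makeUnsigned(-2) << '\n';   // prints 2
std::cout << makeUnsigned(2)  << '\n';   // prints 2 as well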

fmodl - Modulus in long double

#include <iostream>
#include <cmath>
using namespace std;

unsigned long long modExp(unsigned long long b, unsigned long long e, unsigned long long m)
{
    unsigned long long remainder;
    unsigned long long x = 1;
    while (e != 0)
    {
        remainder = e % 2;
        e = e / 2;
        // These lines
        if (remainder == 1)
            x = (unsigned long long)fmodl(((long double)x * (long double)b), (long double)m);
        b = (unsigned long long)fmodl(((long double)b * (long double)b), (long double)m);
    }
    return x;
}

int main()
{
    unsigned long long lastTen = 0, netSum = 0;
    unsigned long long sec(unsigned long long, unsigned long long);
    for (int i = 1; i < 1001; i++)
    {
        lastTen = modExp(i, i, 10000000000);
        netSum += lastTen;
        netSum %= 10000000000;
        cout << lastTen << endl;
    }
    cout << netSum % 10000000000 << endl;
    cout << sizeof(long double) << endl;
    return 0;
}
This is my program to compute the last ten digits of a sum of a series. It uses modular exponentiation to compute the last 10 digits. It works well with a modulus of 10^9, but when I go for 10^10 it collapses.
So, in order to handle the larger values, I converted the numbers to be multiplied to long double and multiplied them (which again yields a long double), so that taking the modulus of that product should give the answer correctly. But I still did not get the right answer; it produces the same wrong result.
My reasoning was like this:
an unsigned long long is 8 bytes; since I am taking a modulus I get at most a 10-digit number, so the product of two 10-digit numbers would not fit in an unsigned long long and would wrap around;
for that reason I convert the unsigned long long values to long double (which is 12 bytes), and since it has more space it should be large enough to fit the 20-digit product of two 10-digit numbers.
Can anyone say what the flaw in this logic is?
The common long double implementation cannot represent all 20-digit decimal numbers exactly.
The characteristics of long double are not completely determined by the C++ standard, and you do not state what implementation you are using.
One common implementation of long double uses a 64-bit significand. Although it may be stored in twelve bytes, it uses only ten of them (80 bits); 16 of those bits are used for the sign and exponent, leaving 64 for the significand (including an explicit leading bit).
A 64-bit significand can represent integers without error up to 2^64, which is about 1.845E+19. Thus, it cannot represent all 20-digit numbers exactly.
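If exact 64-bit modular arithmetic is what's needed, one option is to do the multiplication in 128 bits and avoid floating point entirely. A minimal sketch, assuming the unsigned __int128 extension provided by GCC and Clang (it is not standard C++):
// Multiply a*b modulo m without rounding, using a 128-bit intermediate.
unsigned long long mulmod(unsigned long long a, unsigned long long b, unsigned long long m)
{
    return (unsigned long long)((unsigned __int128)a * b % m);
}
The two fmodl lines in modExp could then become x = mulmod(x, b, m); and b = mulmod(b, b, m);.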

Checking if input file value is bigger than 10^9

So basically, I have something like this -
Input file with 2 integers.
Code, something like this -
#include <iostream>
#include <fstream>
using namespace std;
int main() {
    unsigned long long n, k;
    ifstream input_file("file.txt");
    input_file >> n >> k;
    if(n >= 10^9 || k >= 10^9) {
        cout << "0" << endl;
    }
    return 0;
}
So, is there any way to check whether either of these two integers is bigger than 10^9? Basically, if I assign those integers to unsigned long long, and if they are bigger than 10^9, do they automatically turn into some random value that fits inside unsigned long long? Am I right, and does that mean that there is no way to check it, or am I missing something?
I'm bad at counting zeroes; that's the machine's job. What about 1e9 instead of the bit operation 10^9?
On most platforms, an unsigned long long will be able to store 10^9 with no problem. You just need to say:
if (n >= 1000000000ull)
If an unsigned long long is 64 bits, for example, which is common, you can store values up to 2^64 - 1.
Read into a string:
std::string s;
input_file >> s;
and check whether it is longer than 9 characters: assuming no sign and no leading zeros, any value with 10 or more digits is at least 10^9 (a 1 followed by nine 0's).
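A minimal sketch of that idea (hypothetical, and assuming the file contains plain non-negative integers without leading zeros):
#include <iostream>
#include <fstream>
#include <string>

int main() {
    std::ifstream input_file("file.txt");
    std::string n, k;
    input_file >> n >> k;
    // Any value with 10 or more digits is at least 10^9.
    if (n.size() >= 10 || k.size() >= 10) {
        std::cout << "0" << std::endl;
    }
    return 0;
}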
For 10^9 you need to write 1000000000LL. In C++, ^ is the bitwise XOR operator, not exponentiation. The LL suffix ensures that the literal constant is interpreted as long long rather than just int.
if (n >= 1000000000LL || k >= 1000000000LL)
{
...
}
Of course if the user enters a value which is too large to be represented by a long long (greater than 2^63-1, typically) then you have a bigger problem.