So basically, I have something like this -
Input file with 2 integers.
Code, something like this -
#include <iostream>
#include <fstream>
using namespace std;

int main() {
    unsigned long long n, k;
    ifstream input_file("file.txt");
    input_file >> n >> k;
    if (n >= 10^9 || k >= 10^9) {
        cout << "0" << endl;
    }
    return 0;
}
So, is there any chance to check whether either of these two integers is bigger than 10^9? Basically, if I assign those integers to an unsigned long long and they are bigger than 10^9, do they automatically turn into some seemingly random value that fits inside an unsigned long long? Am I right, and does that mean there is no way to check it, or am I missing something?
I'm bad at counting zeroes; that's the machine's job. What about 1e9 instead of the bit operation 10^9?
On most platforms, an unsigned long long will be able to store 10^9 with no problem. You just need to say:
if (n >= 1000000000ull)
If an unsigned long long is 64 bits, for example, which is common, you can store values up to 2^64 - 1.
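For illustration, a minimal sketch of the corrected check, assuming the same file.txt layout as in the question:

#include <fstream>
#include <iostream>

int main() {
    unsigned long long n = 0, k = 0;
    std::ifstream input_file("file.txt");   // same two integers as in the question
    input_file >> n >> k;

    // A decimal literal instead of 10^9, which is XOR rather than a power.
    if (n >= 1000000000ull || k >= 1000000000ull) {
        std::cout << "0\n";
    }
    return 0;
}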
Read into a string:
std::string s;
input_file >> s;
and check whether it is longer than 9 characters. With no leading zeros, a string of 10 or more digits represents a value of at least 1000000000 (a 1 followed by nine 0's), i.e. at least 10^9.
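A rough sketch of that idea, assuming the input has no sign or leading zeros:

#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream input_file("file.txt");
    std::string s;
    input_file >> s;

    // With no leading zeros, 10 or more digits means the value
    // is at least 1000000000, i.e. at least 10^9.
    if (s.size() > 9) {
        std::cout << "0\n";
    }
    return 0;
}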
For 10^9 you need to write 1000000000LL. In C++, ^ is the bitwise XOR operator, not exponentiation. The LL suffix makes the literal a long long rather than a plain int; 1000000000 itself still fits in a 32-bit int, but the suffix makes the intent explicit and matters as soon as you go past INT_MAX.
if (n >= 1000000000LL || k >= 1000000000LL)
{
...
}
Of course, if the user enters a value which is too large to be represented by an unsigned long long (greater than 2^64 - 1, typically), then you have a bigger problem.
I am a beginner in C++, and I just finished reading chapter 1 of the C++ Primer. So I tried the problem of computing the largest prime factor, and I found out that my program works well up to numbers around 1e9 but fails after that, e.g. 600851475143, as it always returns a weird number, e.g. 2147483647, when I feed any large number into it. I know a similar question has been asked many times; I just wonder why this happens to me. Thanks in advance.
P.S. I guess the reason has to do with some part of my program not being capable of handling such large numbers.
#include <iostream>

int main()
{
    int val = 0, temp = 0;
    std::cout << "Please enter: " << std::endl;
    std::cin >> val;
    for (int num = 0; num != 1; val = num) {
        num = val / 2;
        temp = val;
        while (val % num != 0)
            --num;
    }
    std::cout << temp << std::endl;
    return 0;
}
Your int type is 32 bits (as on most systems). The largest value a two's complement signed 32-bit value can store is 2^31 - 1, or 2147483647. Take a look at man limits.h if you want to know the constants defining the limits of your types, and/or use larger types: unsigned would double your range at basically no cost, and uint64_t from stdint.h/inttypes.h would expand it by a factor of about 2^33 (roughly 8.6 billion) while only costing something meaningful on 32-bit systems.
2147483647 isn't a weird number; it's INT_MAX, which is defined in the <climits> header. This happens when you reach the maximum capacity of an int.
You can use a bigger data type such as unsigned long long int for that purpose, whose maximum value is at least 18446744073709551615 (std::size_t is also often 64 bits wide, but that isn't guaranteed).
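A small sketch showing where these constants come from, using <limits>; the values in the comments assume a typical platform with a 32-bit int and a 64-bit long long:

#include <iostream>
#include <limits>

int main()
{
    // The limits the answers above refer to, queried from the library
    // instead of counting digits by hand.
    std::cout << std::numeric_limits<int>::max() << "\n";                 // 2147483647
    std::cout << std::numeric_limits<unsigned int>::max() << "\n";        // 4294967295
    std::cout << std::numeric_limits<long long>::max() << "\n";           // 9223372036854775807
    std::cout << std::numeric_limits<unsigned long long>::max() << "\n";  // 18446744073709551615
    return 0;
}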
I was working on a problem and got stuck on this silly error, and I can't solve it.
Basically, I am using a for loop and reading a character from the stream. When the character is '-' I decrease my integer by one, and when it is '+' I increase it by one.
I used an unsigned int because I don't want negative numbers. Here is an example of the code:
char x;
unsigned int number = 0;
for (int i = 0; i < n; i++) {
    cin >> x;
    if (x == '-') {
        number--;
    } else if (x == '+') {
        number++;
    }
}
cout << number;
And it shows a number like 4294967293. Where is the problem?
If your problem statement says that the final number will not be negative, that does not mean the intermediate values are only positive as well. There can be an input sequence such as - + +, which leads to the values 0, -1, 0, 1. Here, the final answer is positive, but an intermediate value is still negative.
Now, you are trying to hold both positive and negative values in an unsigned int, which leads to the wrong output in the example above: if number is 0 and you decrement it, it becomes 4294967295 (the maximum value of an unsigned int) instead of -1.
So, instead of the unsigned int datatype, you can use the int datatype, as suggested by Kaidul.
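A tiny sketch of that wrap-around, assuming a 32-bit unsigned int:

#include <iostream>

int main() {
    unsigned int number = 0;
    number--;                      // 0 - 1 wraps around; it cannot become -1
    std::cout << number << "\n";   // prints 4294967295 with a 32-bit unsigned int
    return 0;
}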
This is due to the wrap-around behaviour of unsigned types. Since the variable is unsigned, it cannot be negative, so decrementing it below zero wraps around and yields a value at the upper boundary of a 32-bit unsigned integer.
Replace
unsigned int number = 0;
with
int number = 0;
I have a program that is supposed to find the sum of all the numbers between the 1st and 75th terms of the Fibonacci sequence that are divisible by three and add them together. I got the program working properly; the only problem I am having is displaying such a large number. I am told that the answer should be 15 digits. I have tried long long, long double, and unsigned long long int, and none of those produce the right output (they produce a negative number).
code:
#include <iostream>
#include <cstdlib>

long fibNum(int kth, int nth);

int main()
{
    int kTerm;
    int nTerm;
    kTerm = 76;
    nTerm = 3;
    std::cout << fibNum(kTerm, nTerm) << std::endl;
    system("pause");
    return 0;
}

long fibNum(int kth, int nth)
{
    int term[100];
    long firstTerm;
    long secondTerm;
    long exactValue;
    int i;
    term[1] = 1;
    term[2] = 1;
    exactValue = 0;
    do
    {
        firstTerm = term[nth - 1];
        secondTerm = term[nth - 2];
        term[nth] = (firstTerm + secondTerm);
        nth++;
    } while (nth < kth);
    for (i = 1; i < kth; i++)
    {
        if (term[i] % 3 == 0)
        {
            term[i] = term[i];
        }
        else
            term[i] = 0;
        exactValue = term[i] + exactValue;
    }
    return exactValue;
}
I found out that the problem has to do with the array. The array cannot store the 47th term, which is 10 digits. Now I have no idea what to do.
Type long long is guaranteed to be at least 64 bits (and is exactly 64 bits on every implementation I've seen). Its maximum value, LLONG_MAX, is at least 2^63 - 1, or 9223372036854775807, which is 19 decimal digits -- so long long is more than big enough to represent 15-digit numbers.
Just use type long long consistently. In your code, you have one variable of type long double, which has far more range than long long but may have less precision (which could make it impossible to determine whether a given number is a multiple of 3).
You could also use unsigned long long, whose upper bound is at least 2^64 - 1, but either long long or unsigned long long should be more than wide enough for your purposes.
Displaying a long long value in C++ is straightforward:
long long num = some_value;
std::cout << "num = " << num << "\n";
Or if you prefer printf for some reason, use the "%lld" format for long long, "%llu" for unsigned long long.
(For integers too wide to fit in 64 bits, there are software packages that handle arbitrarily large integers; the most prominent is GNU's GMP. But you don't need it for 15-digit integers.)
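For instance, a compact sketch of the computation done entirely in unsigned long long, reading "between 1 and 75" as the first 75 Fibonacci terms, as in the question:

#include <iostream>

int main()
{
    unsigned long long fib[76];          // fib[1] .. fib[75]
    fib[1] = 1;
    fib[2] = 1;
    for (int i = 3; i <= 75; ++i)
        fib[i] = fib[i - 1] + fib[i - 2];

    unsigned long long sum = 0;
    for (int i = 1; i <= 75; ++i)
        if (fib[i] % 3 == 0)             // every 4th Fibonacci number is divisible by 3
            sum += fib[i];

    std::cout << sum << "\n";
    return 0;
}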
What you can do is take char s[15] and int i = 14, k,
and then run a while loop until n != 0.
Inside the while body:
k = n % 10;
s[i] = k + 48;
n = n / 10;
i--;
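A self-contained sketch of that digit-by-digit idea; the buffer is widened by one character to hold a terminating '\0', and k + '0' is used instead of k + 48 for clarity:

#include <iostream>

int main()
{
    unsigned long long n = 999999999999999ULL;  // example 15-digit value
    char s[16];
    s[15] = '\0';
    int i = 14;
    while (n != 0) {                   // peel off the last decimal digit each pass
        int k = n % 10;
        s[i] = static_cast<char>(k + '0');
        n = n / 10;
        i--;
    }
    std::cout << &s[i + 1] << "\n";    // digits were filled from right to left
    return 0;
}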
The array cannot store the 47th term which is 10 digits.
This indicates that int and long are just 32 bits on your architecture, which is common on 32-bit platforms. 32 bits cover 9-digit numbers and the low 10-digit numbers: to be precise, up to 2,147,483,647 for long and 4,294,967,295 for unsigned long.
Just change the long and int types to long long or unsigned long long, including the term array and the return type of fibNum. That would easily cover 18 digits.
#include <iostream>
#include <cmath>
using namespace std;

unsigned long long modExp(unsigned long long b, unsigned long long e, unsigned long long m)
{
    unsigned long long remainder;
    unsigned long long x = 1;
    while (e != 0)
    {
        remainder = e % 2;
        e = e / 2;
        // These lines
        if (remainder == 1)
            x = (unsigned long long)fmodl(((long double)x * (long double)b), (long double)m);
        b = (unsigned long long)fmodl(((long double)b * (long double)b), (long double)m);
    }
    return x;
}

int main()
{
    unsigned long long lastTen = 0, netSum = 0;
    unsigned long long sec(unsigned long long, unsigned long long);
    for (int i = 1; i < 1001; i++)
    {
        lastTen = modExp(i, i, 10000000000);
        netSum += lastTen;
        netSum %= 10000000000;
        cout << lastTen << endl;
    }
    cout << netSum % 10000000000 << endl;
    cout << sizeof(long double) << endl;
    return 0;
}
This is my program to compute the last ten digits of the sum of a series. It uses modular exponentiation (repeated squaring) to compute the last digits. It works well with a modulus of 10^9, but when I go to 10^10 it collapses.
So, in order to gain more room for the intermediate values, I converted the numbers being multiplied to long double and multiplied them (which again yields a long double), so that taking the modulus of that number should give the answer correctly. But I still did not get the right answer; it produces the same wrong result.
My reasoning was this:
an unsigned long long is 8 bytes; since I am taking the modulus, I get values of up to 10 digits, and the product of two 10-digit numbers need not fit in an unsigned long long, so it would wrap around inside the unsigned long long;
so, for the above reason, I convert the unsigned long long values to long double (which is 12 bytes here), and since it has more space I assumed it is large enough to hold the 20-digit product of two 10-digit numbers.
Can anyone say what the flaw in this logic is?
The common long double implementation cannot represent all 20-digit decimal numbers exactly.
The characteristics of long double are not completely determined by the C++ standard, and you do not state what implementation you are using.
One common implementation of long double uses a 64-bit significand. Although it may be stored in twelve bytes, it uses only ten of them (80 bits); 16 of those bits hold the sign and exponent, leaving 64 for the significand (including an explicit leading bit).
A 64-bit significand can represent integers without error only up to 2^64, which is about 1.845 × 10^19. Thus, it cannot represent all 20-digit numbers exactly.
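As a sketch of one way around this: on GCC and Clang, the unsigned __int128 extension can hold the full 20-digit product exactly, so the two multiplications can be done without going through long double at all (mulmod is a helper defined here, not a library function):

#include <iostream>

// Multiply a and b modulo m without losing precision: the intermediate
// product is held in a 128-bit integer.
// (unsigned __int128 is a GCC/Clang extension, not standard C++.)
unsigned long long mulmod(unsigned long long a, unsigned long long b,
                          unsigned long long m)
{
    return (unsigned long long)((unsigned __int128)a * b % m);
}

unsigned long long modExp(unsigned long long b, unsigned long long e,
                          unsigned long long m)
{
    unsigned long long x = 1;
    while (e != 0) {
        if (e % 2 == 1)
            x = mulmod(x, b, m);
        b = mulmod(b, b, m);
        e /= 2;
    }
    return x;
}

int main()
{
    // 7^7 = 823543, so its last ten digits are 823543 itself.
    std::cout << modExp(7, 7, 10000000000ULL) << "\n";
    return 0;
}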
I am trying to manipulate 64 bits. I store the number in an unsigned long long int. To test the process, I ran the following program:
#include <iostream>
using namespace std;

int main()
{
    unsigned long long x = 1;
    int cnt = 0;
    for (int i = 0; i < 64; ++i)
    {
        if ((1 << i) & x)
            ++cnt;
    }
    cout << cnt;
}
but the output of cnt is 2, which is clearly wrong. How do I manipulate 64 bits? Where is the mistake? Actually, I am trying to find the parity, that is, the number of 1's in the binary representation of a number less than 2^63.
Since it is 64-bit, you should use a 64-bit 1. So, try this:
if(((unsigned long long) 1<<i)&x)
(1 << i) shifts a plain int, so it overflows (strictly, the behaviour is undefined) once i reaches 32.
You can write the condition as (x >> i) & 1 instead.
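Putting either fix into the original loop, a sketch (using the 1ULL form; (x >> i) & 1 works just as well):

#include <iostream>

int main()
{
    unsigned long long x = 1;
    int cnt = 0;
    for (int i = 0; i < 64; ++i)
    {
        // Shift an unsigned long long, not a plain int,
        // so all 64 bit positions are valid.
        if ((1ULL << i) & x)
            ++cnt;
    }
    std::cout << cnt << "\n";   // prints 1 for x == 1
    return 0;
}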
What do you mean by manipulation in your case? I am thinking you are going to test each and every bit of the variable x. In that case, x should contain the maximum value, because you are going to test every bit of it:
int main()
{
    unsigned long long x = 0xFFFFFFFFFFFFFFFF;
    int cnt = 0;
    for (int i = 0; i < 64; ++i)
    {
        if ((1ULL << i) & x)   // 64-bit shift, as noted in the other answers
            ++cnt;
    }
    cout << cnt;               // prints 64
}
}