I am trying to manipulate 64 bits, which I store in an unsigned long long int. To test the process I ran the following program:
#include <iostream>
using namespace std;

int main()
{
    unsigned long long x = 1;
    int cnt = 0;
    for (int i = 0; i < 64; ++i)
    {
        if ((1 << i) & x)
            ++cnt;
    }
    cout << cnt;
}
but the output is 2, which is clearly wrong. How do I manipulate 64 bits? Where is the mistake? Actually I am trying to find the parity, that is, the number of 1's in the binary representation of a number less than 2^63.
Since x is 64-bit, you need a 64-bit 1 on the left of the shift. So, try this:
if(((unsigned long long) 1<<i)&x)
(1<<i) overflows because the literal 1 is a plain int: shifting a 32-bit int by 31 or more positions is undefined behavior.
You can instead write the condition as (x >> i) & 1, which shifts the 64-bit x rather than the 32-bit constant.
What is meant by manipulation in your case? I assume you want to test each and every bit of the variable x. Then x should contain the maximum value, because you are going to test every bit of it:
int main()
{
    unsigned long long x = 0xFFFFFFFFFFFFFFFF;
    int cnt = 0;
    for (int i = 0; i < 64; ++i)
    {
        if ((1ULL << i) & x)   // 1ULL, so the shift is done in 64 bits
            ++cnt;
    }
    cout << cnt;
}
I was solving a Fenwick tree problem named "Shill and Wave Sequence", and my solution wasn't passing all the test cases until I added one line after looking at a reference solution. Now I want to understand its purpose. Here is my code:
#include <bits/stdc++.h>
using namespace std;
#define mod 1000000007

long long query(long long index, int p, long long **bit)
{
    long long count = 0;
    for (; index > 0; index -= (index & (-index)))
    {
        count = (count + bit[index][p]) % mod;
    }
    return count;
}

void update(long long **bit, long long index, int p, long long val)
{
    for (; index <= 100000; index += (index & (-index)))
    {
        bit[index][p] = (bit[index][p] + val) % mod;
    }
}

int main()
{
    int n;
    cin >> n;
    long long ans = 0;
    long long **bit = new long long *[100000 + 1];
    for (int i = 1; i <= 100000; i++)
    {
        bit[i] = new long long[3];
        for (int j = 0; j < 3; j++)
        {
            bit[i][j] = 0;
        }
    }
    for (int i = 0; i < n; i++)
    {
        long long x;
        cin >> x;
        long long a = (query(x - 1, 0, bit) + query(x - 1, 2, bit)) % mod;
        long long b = (query(100000, 1, bit) + query(100000, 2, bit)) % mod - query(x, 1, bit) - query(x, 2, bit);
        b = (b + mod) % mod;
        // WHAT IS THE PURPOSE OF THE ABOVE LINE?
        ans += (a + b) % mod;
        update(bit, x, 0, b);
        update(bit, x, 1, a);
        update(bit, x, 2, 1);
    }
    cout << ans % mod;
    return 0;
}
b=(b+mod)%mod
but why?
In some cases b can be negative, and applying % directly to a negative value gives a negative remainder in C++, which causes incorrect results. That's why it's safe to add mod before the % operation: it makes the number non-negative first, and only then takes the modulo.
The "purpose" of the line (or rather, the reason that it has any effect) could be that
b=(b+mod)%mod;
changes the value of b.
So the remaining question is: does it matter that you add mod before applying % mod?
Mathematically speaking
(x+y) mod y = x mod y
However, in C++ the identity can break for two reasons: the % operator truncates toward zero, so for negative x the result of x % y is negative (which is the case here, since b comes from a subtraction); and if the values get large enough, integer overflow changes the result entirely. In either case
(x+y) mod y != x mod y
which could be the case for you.
I am getting negative output when adding large numbers in a Fibonacci sequence, despite using long int. How do I fix that?
#include <iostream>
using namespace std;

void main() {
    long int sum = 2;
    long int f1 = 1, f2 = 2, f3;
    for (unsigned int i = 2; i < 4000000; i++) {
        f3 = f2 + f1;
        if (!(f3 % 2)) {
            sum += f3;
        }
        swap(f1, f2);
        swap(f2, f3);
    }
    cout << sum << endl;
}
The output is -1833689714
As you can see here, the 47th Fibonacci number exceeds the range of a 32-bit/4-byte integer. Everything after that will become negative.
Your program uses a long int, which may or may not be 32 or 64 bits wide; the C++ standard does not guarantee a width (for good reasons). Judging from your result, it is 32 bits in your case.
First, to prevent negativeness, you could use unsigned long int which makes all your results positive and gives the ability to model "slightly" bigger numbers.
However you will still get the wrong results if you pass the 47th Fibonacci number since your data type is still too small. To fix this you could use unsigned long long or uint64_t.
Remember that even such big datatypes, which can represent numbers up to approximately 1.8 × 10^19 (about 18 quintillion), are exceeded by the Fibonacci numbers around the 93rd term.
Try with this code:
#include <iostream>
using namespace std;

int main()
{
    cout << "Enter Number:";
    unsigned long long int x;
    cin >> x;
    unsigned long long int a = 0, b = 1, c;
    cout << a << "\t" << b;
    for (unsigned long long int i = 0; i < x; i++)
    {
        c = a + b;
        cout << "\t" << c;
        a = b;
        b = c;
    }
    return 0;   // moved outside the loop so all x terms are printed
}
I have a program that is supposed to find the sum of all the Fibonacci numbers between the 1st and 75th terms that are divisible by three. I got the program working properly; the only problem I am having is displaying such a large number. I am told that the answer should be 15 digits. I have tried long long, long double, and unsigned long long int, and none of those produce the right output (they produce a negative number).
code:
#include <iostream>
#include <cstdlib>

long fibNum(int kth, int nth);

int main()
{
    int kTerm;
    int nTerm;
    kTerm = 76;
    nTerm = 3;
    std::cout << fibNum(kTerm, nTerm) << std::endl;
    system("pause");
    return 0;
}

long fibNum(int kth, int nth)
{
    int term[100];
    long firstTerm;
    long secondTerm;
    long exactValue;
    int i;
    term[1] = 1;
    term[2] = 1;
    exactValue = 0;
    do
    {
        firstTerm = term[nth - 1];
        secondTerm = term[nth - 2];
        term[nth] = (firstTerm + secondTerm);
        nth++;
    }
    while (nth < kth);
    for (i = 1; i < kth; i++)
    {
        if (term[i] % 3 == 0)
        {
            term[i] = term[i];
        }
        else
            term[i] = 0;
        exactValue = term[i] + exactValue;
    }
    return exactValue;
}
I found out that the problem has to do with the array. The array cannot store the 47th term which is 10 digits. Now I have no idea what to do
Type long long is guaranteed to be at least 64 bits (and is exactly 64 bits on every implementation I've seen). Its maximum value, LLONG_MAX, is at least 2^63 − 1, or 9223372036854775807, which is 19 decimal digits -- so long long is more than big enough to represent 15-digit numbers.
Just use type long long consistently. In your code, you have one variable of type long double, which has far more range than long long but may have less precision (which could make it impossible to determine whether a given number is a multiple of 3.)
You could also use unsigned long long, whose upper bound is at least 2^64 − 1, but either long long or unsigned long long should be more than wide enough for your purposes.
Displaying a long long value in C++ is straightforward:
long long num = some_value;
std::cout << "num = " << num << "\n";
Or if you prefer printf for some reason, use the "%lld" format for long long, "%llu" for unsigned long long.
(For integers too wide to fit in 64 bits, there are software packages that handle arbitrarily large integers; the most prominent is GNU's GMP. But you don't need it for 15-digit integers.)
What you can do is declare char s[15] and int i = 14, k,
and then run a while loop until n != 0.
In the loop body:
k = n % 10;
s[i] = k + 48;   // 48 is '0' in ASCII
n = n / 10;
i--;
The array cannot store the 47th term which is 10 digits.
This indicates that on your architecture type long is just 32 bits, which is common on 32-bit platforms. 32 bits cover 9-digit numbers and the low 10-digit numbers: to be precise, up to 2,147,483,647 for long and 4,294,967,295 for unsigned long.
Just change your long types to long long or unsigned long long, including the return type of fibNum and the element type of the term array (which is currently int). That would easily cover 18 digits.
This should find the largest prime factor of a number, but it isn't working.
The answer should be 6857, but it is returning 688543.
int isPrime(unsigned long int n)
{
    for (unsigned long int i = 2; i * i < (n); i++)
    {
        if (n % i == 0)
        {
            return 0;
            break;
        }
    }
    return 1;
}

int main()
{
    unsigned long int num = 600851475143;
    unsigned long int max = 2, i = 2;
    while (num != 1)
    {
        if (num % i == 0 && isPrime(i))
        {
            max = i;
            num /= i;
            i--;
        }
        i++;
    }
    cout << max;
    return 0;
}
Thanks in advance:)
Among other issues, this will be a problem with large numbers:
for(unsigned long int i=2;i*i<(n);i++)
i*i for large numbers will overflow for an unsigned long (which appears to be 32-bits on the system you are compiling for).
You can fix it by switching it:
for (unsigned long int i = 2; i <= sqrt(n); ++i)
As long as n didn't overflow, the sqrt(n) will be valid. However, I would still suggest a switch to using unsigned long long if you are going to use numbers that get very close to the bounds for 32-bit integers.
unsigned long is apparently 32 bits on your system, so num won't be 600851475143 but instead 600851475143 mod 2^32, which is 3851020999. 688543 is the largest prime factor of that number, so it appears that your algorithm works correctly at least.
Look up the maximum ranges for the types on your compiler/system combination, then pick an appropriate one.
So basically, I have something like this -
Input file with 2 integers.
Code, something like this -
#include <iostream>
#include <fstream>
using namespace std;

int main() {
    unsigned long long n, k;
    ifstream input_file("file.txt");
    input_file >> n >> k;
    if (n >= 10^9 || k >= 10^9) {
        cout << "0" << endl;
    }
    return 0;
}
So, is there any way to check whether either of these two integers is bigger than 10^9? Basically, if I assign those integers to unsigned long long and they are bigger than 10^9, do they automatically turn into some value that fits inside unsigned long long? Am I right that this means there is no way to check, or am I missing something?
I'm bad at counting zeroes; that's the machine's job. What about 1e9 instead of the bit operation 10^9 (which is XOR, not exponentiation)?
On most platforms, an unsigned long long will be able to store 10^9 with no problem. You just need to say:
if (n >= 1000000000ull)
If an unsigned long long is 64 bits, for example, which is common, you can store values up to 2^64 − 1.
Read into a string:
std::string s;
input_file >> s;
and check whether it's longer than 9 characters: since 10^9 (a 1 followed by nine 0's) is the smallest 10-digit number, any value with 10 or more digits is at least 10^9 (assuming no leading zeros).
For 10^9 you need to write 1000000000LL. In C++, ^ is the bitwise XOR operator, not exponentiation. The LL suffix ensures the literal is interpreted as long long rather than just int (although 1000000000 happens to fit in a 32-bit int, so it is not strictly required here).
if (n >= 1000000000LL || k >= 1000000000LL)
{
...
}
Of course if the user enters a value which is too large to be represented by a long long (greater than 2^63-1, typically) then you have a bigger problem.