The question given is: count the total number of set bits of two given numbers. For example, take 2 and 3 as input. 2 is 10 in binary and 3 is 11 in binary, so the total number of set bits is 3.
My work -
#include <iostream>
using namespace std;
int bit(int n1, int n2) {
    int count = 0;
    while (n1 != 0 && n2 != 0) {
        if (n1 & 1 || n2 & 1) {
            count++;
        }
        n1 >> 1;
        n2 >> 1;
    }
    return count;
}
int main() {
    int a;
    cin >> a;
    int b;
    cin >> b;
    cout << bit(a, b);
    return 0;
}
Expected Output - 3
So if anyone knows what I am doing wrong, please correct it and post an answer.
Why ask the question for 2 numbers if the intended combined result is just the sum of the separate results?
If you can use C++20, std::popcount gives you the number of set bits of an unsigned value.
If you can't use C++20, there is std::bitset: construct one from your number and use its count() method.
So your code becomes:
#include <version>  // feature-test macros
#include <bitset>
#ifdef __cpp_lib_bitops
#include <bit>      // std::popcount
#endif

int bit(int n1, int n2) {
#ifdef __cpp_lib_bitops
    return std::popcount(static_cast<unsigned int>(n1)) + std::popcount(static_cast<unsigned int>(n2));
#else
    return std::bitset<sizeof(n1) * 8>(n1).count() + std::bitset<sizeof(n2) * 8>(n2).count();
#endif
}
I'm not quite sure what happens if you give negative integers as input. I would do a check beforehand or just work with unsigned types from the beginning.
What you are doing wrong was shown in a (now deleted) answer by Ankit Kumar:
if (n1 & 1 || n2 & 1) {
    count++;
}
If a bit is set in both n1 and n2, you are including it only once, not twice.
What you should do, other than using C++20's std::popcount, is write an algorithm1 that calculates the number of bits set in one number, and then call it twice.
Also note that you should use an unsigned type, to avoid implementation defined behavior of right shifting a (possibly) signed value.
1) I suppose that is the purpose of OP's exercise; everybody else should just use std::popcount.
It is possible without a loop, with no if and only one variable per number. Note that simply OR-ing the two values together would count a bit that is set in both numbers only once, so apply the reduction to each value and add the results. The reduction divides the value into bit blocks, doubling the block width at each step:
value = (value & 0x55555555) + ((value >> 1) & 0x55555555);
value = (value & 0x33333333) + ((value >> 2) & 0x33333333);
value = (value & 0x0f0f0f0f) + ((value >> 4) & 0x0f0f0f0f);
value = (value & 0x00ff00ff) + ((value >> 8) & 0x00ff00ff);
value = (value & 0x0000ffff) + ((value >> 16) & 0x0000ffff);
This works for 32-bit values only; adjust for 64-bit accordingly.
#include <stdio.h>
int main()
{
    int num, m, n, t, res, i;
    printf("Enter number\n");
    scanf("%d", &num);
    for (i = 31; i >= 0; i--)
    {
        printf("%d", ((num >> i) & 1));
    }
    printf("\n");
    printf("Enter position 1 and position 2\n");
    scanf("%d%d", &m, &n);
    printf("enter number\n");
    scanf("%d", &t);
    res=((num&(~(((~(unsigned)0)>>(32-((m-t)+1)))<<t)))&(num&(~(((~(unsigned)0)>>(32-((n-t)+1)))<<t))))|(((((num&((((~(unsigned)0)>>(((m-t))))<<(n))))>>(m-t))))|(((num&((((~(unsigned)0)>>(((32-n))))<<(32-t))))<<(m-t))));
    for (i = 31; i >= 0; i--)
    {
        printf("%d", (res >> i) & 1);
    }
    printf("\n");
}
I need to swap the bits (m to m-t) and (n to n-t) in the number num. I tried the above code but it doesn't work. Can someone please help?
As usual with bit swapping problems, you can save a few instructions by using xor.
unsigned f(unsigned num, unsigned n, unsigned m, unsigned t) {
    n -= t; m -= t;
    unsigned mask = ((unsigned) 1 << t) - 1;
    unsigned nm = ((num >> n) ^ (num >> m)) & mask;
    return num ^ (nm << n) ^ (nm << m);
}
It's easier if you break it down into smaller steps.
First, make a bit mask t bits wide. You can do this by subtracting 1 from a power of 2, like this:
int mask = (1 << t) - 1;
For example if t is 3 then mask will be 7 (111 in binary).
Then you can make a copy of num and clear the bits in the range of m to m-t and n to n-t by shifting up the mask, NOTing it and ANDing, so that only bits not covered by the mask remain set:
res = num & ~(mask<<(m-t)) & ~(mask<<(n-t));
Then you can shift the bits in the two ranges into their proper locations and OR with the result. You can do this by shifting down by (n-t), masking, and then shifting up by (m-t), then vice versa:
res |= ((num >> (n-t)) & mask) << (m-t);
res |= ((num >> (m-t)) & mask) << (n-t);
The bits are now in the correct place.
You could do this in one line like this:
res = (num & ~(mask<<(m-t)) & ~(mask<<(n-t))) | (((num >> (n-t)) & mask) << (m-t)) | (((num >> (m-t)) & mask) << (n-t));
And it can be simplified by doing the m-t and n-t subtractions beforehand, assuming you don't want to use the values afterwards:
m -= t; n -= t;
res = (num & ~(mask<<m) & ~(mask<<n)) | (((num >> n)) & mask) << m) | (((num >> m) & mask) << n);
This doesn't work if the two ranges overlap. It's not clear what the correct behaviour would be in that case.
I now know how it's done in one line, although I fail to see why my first draft doesn't work as well. What I'm trying to do is save the lower part into a different variable, shift the higher byte to the right, and combine the two numbers via OR. However, it just cuts off the lower half of the hexadecimal and returns the rest.
short int method(short int number) {
    short int a = 0;
    for (int x = 8; x < 16; x++) {
        if ((number & (1 << x)) == 1) {
            a = a | (1 << x);
        }
    }
    number = number >> 8;
    short int solution = number | a;
    return solution;
}
You are doing it one bit at a time; a better approach would do it with a single operation:
uint16_t method(uint16_t number) {
    return (number << 8) | (number >> 8);
}
The code above specifies 16-bit unsigned type explicitly, thus avoiding issues related to sign extension. You need to include <stdint.h> (or <cstdint> in C++) in order for this to compile.
if ((number & (1 << x)) == 1)
This is only going to return true if x is 0: since 1 in binary is 00000000 00000001, the expression number & (1 << x) clears all bits except the x'th, so the result is either 0 or 1 << x, and 1 << x equals 1 only when x is 0.
You don't care if it's 1 or not, you just care if it's non-zero. Use
if (number & (1 << x))
How can I turn off the leftmost set bit of a number in O(1)?
For example:
n = 366 (base 10) = 101101110 (base 2)
After turning the leftmost set bit off, the number becomes 001101110.
n will always be > 0.
Well, if you insist on O(1) under any circumstances, the Intel intrinsic _bit_scan_reverse(), declared in immintrin.h, does a hardware search for the most significant set bit of an int.
Though the operation is the functional equivalent of a loop, I believe it is constant time, given its fixed latency of 3 (as per the Intel Intrinsics Guide).
The function returns the index of the most significant set bit, so a simple
n = n & ~(1 << _bit_scan_reverse(n));
should do.
This intrinsic is undefined for n == 0, so you have to watch out there. I'm following the assumption of your original post that n > 0.
n = 2^x + y, where
x = floor(log2(n)).
Your highest set bit is x.
So in order to reset that bit,
number &= ~(1 << x);
Another approach:
#include <iostream>
using namespace std;

int highestOneBit(int i) {
    i |= (i >> 1);
    i |= (i >> 2);
    i |= (i >> 4);
    i |= (i >> 8);
    i |= (i >> 16);
    return i - (i >> 1);
}

int main() {
    int n = 32767;
    int z = highestOneBit(n); // returns the highest set bit, i.e. 2^x
    cout << (n & (~z));       // resets the highest set bit
    return 0;
}
Check out this question, for a possibly faster solution, using a processor instruction.
However, an O(lg N) solution is:
int cmsb(int x)
{
    int orig = x;           // keep the original; the loop below destroys x
    unsigned int count = 0;
    while (x >>= 1) {
        ++count;
    }
    return orig & ~(1 << count);
}
If ANDN is not supported and LZCNT is supported, the fastest O(1) way to do it is not something along the lines of n = n & ~(1 << _bit_scan_reverse(n)); but rather...
int reset_highest_set_bit(int x)
{
    const int mask = 0x7FFFFFFF; // 011111111[...]
    return x & (mask >> __builtin_clz(x));
}
I have written this code to calculate the number of set bits in a range of numbers. My program compiles fine and gives the proper output, but it takes too much time for large inputs and exceeds the time limit.
#define forn(i, n) for(long int i = 0; i < (long int)(n); i++)
#define ford(i, n) for(long int i = (long int)(n) - 1; i >= 0; i--)
#define fore(i, a, n) for(long int i = (int)(a); i < (long int)(n); i++)
long int solve(long int i) {
    i = i - ((i >> 1) & 0x55555555);
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333);
    return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;
}

int main() {
    freopen("C:/Projects/CodeChef/SetBits/input.txt", "rt", stdin);
    freopen("C:/Projects/CodeChef/SetBits/output.txt", "wt", stdout);
    int tt;
    long long int num1;
    long long int num2;
    scanf("%d", &tt);
    forn(ii, tt) {
        unsigned long int bits = 0;
        unsigned long long int total_bits = 0;
        scanf("%lld", &num1);
        scanf("%lld", &num2);
        fore(jj, num1, num2 + 1) {
            bits = solve(jj);
            total_bits += bits;
        }
        printf("%lld\n", total_bits);
    }
    return 0;
}
Example test case:-
Sample Input:
3
-2 0
-3 4
-1 4
Sample Output:
63
99
37
For the first case, -2 contains 31 1's followed by a 0, -1 contains 32 1's and 0 contains 0 1's. Thus the total is 63.
For the second case, the answer is 31 + 31 + 32 + 0 + 1 + 1 + 2 + 1 = 99
Test case having large values:-
10
-1548535525 662630637
-1677484556 -399596060
-2111785037 1953091095
643110128 1917824721
-1807916951 491608908
-1536297104 1976838237
-1891897587 -736733635
-2088577104 353890389
-2081420990 819160807
-1585188028 2053582020
Any suggestions on how to optimize the code so that it will take less time. All helpful suggestions and answers will be appreciated with vote up. :)
I don't really have a clue what you are doing, but I do know you can clean up your code considerably, and you can inline your function.
I have also taken the liberty of rephrasing your code: you are using C++ like C, those defines are just grim, and mapping the files onto stdio is even worse. I haven't tested or compiled the code, but it is all there.
#include <fstream>

inline long int solve(long int i) {
    i = i - ((i >> 1) & 0x55555555);
    i = (i & 0x33333333) + ((i >> 2) & 0x33333333);
    return (((i + (i >> 4)) & 0x0F0F0F0F) * 0x01010101) >> 24;
}

int main() {
    long first, last;
    unsigned count;
    std::ifstream inf("C:/Projects/CodeChef/SetBits/input.txt");
    std::ofstream off("C:/Projects/CodeChef/SetBits/output.txt");
    inf >> count;
    for (unsigned i = 0u; i != count; ++i) {
        inf >> first >> last;
        long total = 0;
        ++last;
        for (long t = first; t != last; ++t) {
            total += solve(t);
        }
        off << total << '\n';
    }
    return 0;
}
A few ideas as to how you could speed this up:
- Build a std::map of the computed values and, if a value has been processed before, reuse the stored result rather than recomputing it.
- Do the same but store ranges rather than single values, though that will be tricky.
- See if a value exists in the map and increment through the map until there are no more preprocessed values, in which case start processing them for the iteration.
- Check if there is a trivial sequence between one number and the next; maybe you could work out the first value and then just increment.
- Maybe there is an O(1) algorithm for such a sequence.
- Look at Intel TBB and use something like tbb::parallel_for to distribute the work over the cores; because you are dealing with such a small amount of memory, you should get a really good return with a large chunk size.
How do I count the number of occurrences of 1 in an 8-bit string, such as 10110001?
The bit string is taken from the user, like 10110001.
What type of array should be used to store this bit string in C?
Short and simple: use std::bitset (C++).
#include <iostream>
#include <bitset>

int main()
{
    std::bitset<8> mybitstring;
    std::cin >> mybitstring;
    std::cout << mybitstring.count(); // prints the number of set bits
}
Don't use an array at all; use a std::string. This gives you the possibility of better error handling. With a bitset you can write code like:
bitset<8> b;
if (cin >> b) {
    cout << b << endl;
}
else {
    cout << "error" << endl;
}
but there is no way of finding out which character caused the error.
You'd probably use an unsigned int to store those bits in C.
If you're using GCC then you can use __builtin_popcount to count the one bits:
Built-in Function: int __builtin_popcount (unsigned int x)
Returns the number of 1-bits in x.
This should resolve to a single instruction on CPUs that support it too.
From hacker's delight:
For machines that don't have this instruction, a good way to count the number
of 1-bits is to first set each 2-bit field equal to the sum of the two single
bits that were originally in the field, and then sum adjacent 2-bit fields,
putting the results in each 4-bit field, and so on.
so, if x is an integer:
x = (x & 0x55555555) + ((x >> 1) & 0x55555555);
x = (x & 0x33333333) + ((x >> 2) & 0x33333333);
x = (x & 0x0F0F0F0F) + ((x >> 4) & 0x0F0F0F0F);
x = (x & 0x00FF00FF) + ((x >> 8) & 0x00FF00FF);
x = (x & 0x0000FFFF) + ((x >>16) & 0x0000FFFF);
x will now contain the number of 1-bits. Just adapt the algorithm for 8-bit values.