I've been running into a really odd bug where an element of my array swaps places with another. I can work around it by manually changing the elements before compiling, but for ease of access I want to keep the array the way it is. My goal is to compare bits and store the result in an array. No other function of mine touches the array after I call this one, so it shouldn't be changing. The exact same thing happens when I declare the array as const.
Here's my array:
int myconsts[8][8] = {
    { 6, 10, 0, 9, 14, 6, 6, 7 }
};
My issue is that the 14 (myconsts[0][4]) and the 7 (myconsts[0][7]) would randomly swap places. Here's how I compare the bits in my function:
std::array<std::array<int, 4>, 8> myfunct(int arr[][8]) {
    std::array<std::array<int, 4>, 8> arr2;
    int i = 0;
    do {
        std::bitset<4> num1(arr[0][i]);
        std::bitset<4> num2(arr[1][i]);
        std::bitset<4> num3(arr[2][i]);
        int no;
        int no2;
        int no3;
        for (int x = 0; x < 4; x++) {
            int sum = 0;
            std::cout << num1[x] << " Num: " << num1[0] << num1[1] << num1[2] << num1[3] << std::endl;
        }
        std::cout << " " << std::endl;
        i++;
    } while (i < 8);
    //just testing the logic
    arr2 = { 0 };
    return arr2;
}
Output:
[0]
0 Num: 0110
1 Num : 0110
1 Num : 0110
0 Num : 0110
[1]
0 Num : 0101
1 Num : 0101
0 Num : 0101
1 Num : 0101
[2]
0 Num : 0000
0 Num : 0000
0 Num : 0000
0 Num : 0000
[3]
1 Num : 1001
0 Num : 1001
0 Num : 1001
1 Num : 1001
Here is where it swaps positions with the original element[7]!
[4]
0 Num : 0111
1 Num : 0111
1 Num : 0111
1 Num : 0111
[5]
0 Num : 0110
1 Num : 0110
1 Num : 0110
0 Num : 0110
[6]
0 Num : 0110
1 Num : 0110
1 Num : 0110
0 Num : 0110
[7]
1 Num : 1110
1 Num : 1110
1 Num : 1110
0 Num : 1110
The output covers elements [0..7]; each blank line separates one element from the next. I'm not sure if the result is the same on all compilers.
I do not see what you are describing.
Here are my annotations on your output:
0 Num : 0111 // This is 2 + 4 + 8 = 14
1 Num : 0111
1 Num : 0111
1 Num : 0111
0 Num : 0110 // This is 2 + 4 = 6
1 Num : 0110
1 Num : 0110
0 Num : 0110
0 Num : 0110 // This is 2 + 4 = 6
1 Num : 0110
1 Num : 0110
0 Num : 0110
1 Num : 1110 // This is 1 + 2 + 4 = 7
1 Num : 1110
1 Num : 1110
0 Num : 1110
It looks like you just dumped: ..., 14, 6, 6, 7 };
which are the final 4 elements of your array.
I'd say everything looks correct.
Perhaps you are reading your bits backwards on screen, since LSB (least significant bit) is typically put on the far right, and you've put it on the far left.
But your code is working fine.
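For example, here is a quick way (my own snippet, not from the original post) to see both orderings side by side: streaming a std::bitset prints it MSB-first, while indexing starts at the LSB.
#include <bitset>
#include <iostream>

int main() {
    std::bitset<4> num(14);   // 14 == 0b1110
    // operator<< prints the most significant bit first: "1110"
    std::cout << num << '\n';
    // indexing starts at bit 0 (the LSB), so this prints "0111",
    // which is exactly the ordering in the question's output
    for (int x = 0; x < 4; ++x)
        std::cout << num[x];
    std::cout << '\n';
}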
So I have written code that prints the first 20 binary numbers, like
0
1
10
11
100
101 and so on...
I tried running the code in the Atom editor and the output is not accurate, but when I ran the same code in an online compiler it gave me the correct answer, which is what I expected.
This is the code that I used:
#include <iostream>
#include <math.h>
using namespace std;

int toBinary(int num) {
    int ans = 0;
    int i = 0;
    while (num != 0) {
        int bit = num & 1;
        ans = (bit * pow(10, i)) + ans;
        num = num >> 1;
        i++;
    }
    return ans;
}

int main() {
    for (int i = 0; i < 20; i++) {
        cout << toBinary(i) << endl;
    }
    return 0;
}
This is the output I'm getting in the Atom editor:
0 1 10 11 99 100 109 110 1000 1001 1010 1011 1099 1100 1109 1110 9999 10000 10009 10010
And this is the output I'm getting in Online Compiler (this is the output I expect):
0 1 10 11 100 101 110 111 1000 1001 1010 1011 1100 1101 1110 1111 10000 10001 10010 10011
pow is a floating point function and should not be used for integers.
It is also unnecessary (from an efficiency point of view) to recalculate the power from scratch on every iteration. You can just keep a variable and multiply it by 10 each time.
You can try the code below:
#include <iostream>

int toBinary(int num)
{
    int ans = 0;
    int exp = 1;
    while (num)
    {
        int bit = num & 1;
        ans += bit * exp;
        // for next iteration:
        exp *= 10;
        num /= 2;
    }
    return ans;
}

int main()
{
    for (int i = 0; i < 20; i++)
    {
        std::cout << toBinary(i) << std::endl;
    }
}
Output:
0
1
10
...
10001
10010
10011
Note that the result will overflow pretty fast (2000 is already way too big to represent in binary inside an int the way you do).
A side note: Why is "using namespace std;" considered bad practice?
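If larger values are needed, building the digits into a string avoids the overflow entirely. A minimal sketch of that idea (the helper name toBinaryString is mine, not from the original answer):
#include <iostream>
#include <string>

// Builds the binary representation as text, so it never overflows an int.
std::string toBinaryString(unsigned int num) {
    if (num == 0) return "0";
    std::string s;
    while (num != 0) {
        s.insert(s.begin(), char('0' + (num & 1))); // prepend the next bit
        num >>= 1;
    }
    return s;
}

int main() {
    for (unsigned int i = 0; i < 20; ++i)
        std::cout << toBinaryString(i) << '\n';
}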
I am trying to "merge" two binary integers into one but I can't seem to get it right.
The operation I am trying to do is:
Number of bits to merge = 3 (precomputed parameter)
Int 1 : 10010
Int 2 : 11011
Given these two numbers, append 3 bits of each to the result (left to right):
Result : 11 01 00.
Meaning: the first bit of the first integer, then the first bit of the second integer; then the second bit of the first integer and the second bit of the second integer... and so on, "number of bits to merge" times.
Another example with letters would be:
Number of bits to merge = 4
Int1: abcde
Int2: xyzwt
Result: ax by cz dw
My idea is to use a for loop over the number of bits I have to set and append to the result number there, but I don't know how to do that "appending".
You can set each bit in a loop:
#include <cstddef>
#include <cstdint>

std::uint32_t merge(std::size_t start, std::size_t numberOfBits, int i1, int i2) {
    if (start == 0 || start > sizeof(int) * 8) return 0;
    if (numberOfBits == 0 || numberOfBits > 16) return 0;
    if (start < numberOfBits) return 0;
    int result = 0;
    for (std::size_t i = 0; i < numberOfBits; ++i) {
        std::size_t srcPos = start - 1 - i;
        std::size_t destPos = 2 * (numberOfBits - i) - 1;
        result |= (i1 & (1 << srcPos)) >> srcPos << destPos;
        result |= (i2 & (1 << srcPos)) >> srcPos << (destPos - 1);
    }
    return result;
}

int main() {
    std::size_t start = 5;
    std::size_t numberOfBits = 3;
    int i1 = 0b10010;
    int i2 = 0b11011;
    return merge(start, numberOfBits, i1, i2);
}
i1 & (1 << (start - 1 - i)) reads the i-th bit from left. >> (start - 1 - i) shifts it to the right. << (2 * (numberOfBits - i) - 1) resp. << (2 * (numberOfBits - i) - 2) shifts it to the correct position in the result.
Tested with input:
Start : 5
Number of bits : 3
Int 1 : 0b10010
Int 2 : 0b11011
output:
52 // == 0b110100
and input:
Start : 4
Number of bits : 2
Int 1 : 0b1010
Int 2 : 0b0101
output:
9 // == 0b1001
Create a bit mask, used to select which and how many bits to keep:
int mask = (1 << 3) - 1; // results in 0000 0000 0000 0111
Next you have to think about which bit locations you want from each input integer; I will call the integers i1 and i2:
// i1 = 0000 0000 0001 0010
// i2 = 0000 0000 0001 1011
int mask_shifted = mask << 3; // results in 0000 0000 0011 1000
Now you can apply the shifted mask to the ints and merge the results with bit operations:
int applied_i1 = i1 & mask_shifted;  // results in 0000 0000 0001 0000
int applied_i2 = i2 & mask_shifted;  // results in 0000 0000 0001 1000
int result = (applied_i2 << 1) | (applied_i1 >> 2); // results in 0000 0000 0011 0100
I have a problem understanding how CRC32 is supposed to work normally.
I've implemented mechanism from wiki and other sites: https://en.wikipedia.org/wiki/Cyclic_redundancy_check#Computation
where you XOR the elements bit by bit. For CRC32 I've used the polynomial from the wiki, which is the same everywhere:
x^32 + x^26 + x^23 + x^22 + x^16 + x^12 + x^11 + x^10 + x^8 + x^7 + x^5 + x^4 + x^2 + x + 1
with binary representation: 1 0000 0100 1100 0001 0001 1101 1011 0111
I was calculating CRC32 for input string "1234" only for testing.
This is the output of the program:
https://i.stack.imgur.com/tG4wk.png
As you can see, the XOR is calculated properly and the CRC32 comes out as "619119D1". When I calculate it using an online calculator, or even the C++ Boost library, the answer is "9BE3E0A3".
What is wrong with simply XORing the input string bit by bit? Should I add something at the end, or what?
I don't want to use libraries or any other magic code to compute this, because I have to implement it this way for my study project.
I've also tried the polynomial without x^32, negating the bits at the end, and starting from 1s instead of 0s (where you have to append 32 zeros), and the answer is still different. I have no idea what I should do now to fix this.
This is part of the code (slightly changed). I have a buffer of 3 parts * 32 bits; I load 4 chars from the file into the middle part and XOR from the beginning to the middle; at the end I XOR the middle part into the end part -> the end part is the CRC32.
My pseudo schema:
1) Load 8 chars
2) | First part | Middle Part | CRC32 = 0 |
3) XOR
4) | 0 0 0 0 | XXXXXXX | 0 0 0 0 |
5) memcpy - middle part to first part
6) | XXXXXXX | XXXXXXX | 0 0 0 0 |
7) Load 4 chars
8) | XXXXXXX | loaded 4chars | 0 0 0 0 |
9) repeat from point 4 to the end of file
10) now we have: | 0 0 0 0 | XXXXXX | 0 0 0 0 |
11) last xor from middle part to end
12) Result: | 0 0 0 0 | 0 0 0 0 | CRC32 |
The screenshot of the output will probably be more helpful.
I will use smart pointers etc. later ;)
bool xorBuffer(unsigned char *buffer) {
    bool * binaryTab = nullptr;
    try {
        // CRC-32
        // 32 31 30 29 28 27 26 25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10 09 08 07 06 05 04 03 02 01 00
        //  1  0  0  0  0  0  1  0  0  1  1  0  0  0  0  0  1  0  0  0  1  1  1  0  1  1  0  1  1  0  1  1  1
        const int dividerSizeBits = 33;
        const bool binaryDivider[dividerSizeBits] = { 1,0,0,0,0,0,1,0,0,1,1,0,0,0,0,0,1,0,0,0,1,1,1,0,1,1,0,1,1,0,1,1,1 };
        const int dividerLength = countLength(binaryDivider, dividerSizeBits);
        const int dividerOffset = dividerSizeBits - dividerLength; // when divider < 33 bits
        bool * binaryTab = charTabToBits(buffer);
        // check tab if first part = 0
        while (!checkTabIfEmpty(binaryTab)) {
            // set the beginning
            int start = 0;
            for (start = 0; start < 32; start++)
                if (binaryTab[start] == true)
                    break;
            for (int i = 0; i < dividerLength; i++)
                binaryTab[i + start] = binaryTab[i + start] ^ binaryDivider[i + dividerOffset];
        }
        // binaryTab -> charTab
        convertBinaryTabToCharTab(binaryTab, buffer);
    }
    catch (exception e) {
        delete[] binaryTab;
        return false;
    }
    delete[] binaryTab;
    return true;
}
std::string CRC::countCRC(std::string fileName) {
    // create variables
    int bufferOnePartSize = 4;
    int bufferSize = bufferOnePartSize * 3;
    bool EOFFlag = false;
    unsigned char *buffer = new unsigned char[bufferSize];
    for (int i = 0; i < 3 * bufferOnePartSize; i++)
        buffer[i] = 0;
    // open file
    ifstream fin;
    fin.open(fileName.c_str(), ios_base::in | ios_base::binary);
    int position = 0;
    int count = 0;
    // while -> EOF
    if (fin.is_open()) {
        // TODO check if file <= 4 -> another solution
        char ch;
        int multiply = 2;
        bool skipNormalXor = false;
        while (true) {
            count = 0;
            if (multiply == 2)
                position = 0;
            else
                position = bufferOnePartSize;
            // copy part from file to tab
            while (count < bufferOnePartSize * multiply && fin.get(ch)) {
                buffer[position] = (unsigned char)ch;
                ++count;
                ++position;
            }
            cout << endl;
            // if EOF write zeros to end of tab
            if (count == 0) {
                cout << "TODO: end of file" << endl;
                EOFFlag = true;
                skipNormalXor = true;
            }
            else if (count != bufferOnePartSize * multiply) {
                for (int i = count; i < bufferOnePartSize * multiply; i++) {
                    buffer[position] = 0;
                    position++;
                }
                EOFFlag = true;
            }
            if (!skipNormalXor) {
                // -- first part
                multiply = 1;
                // xor the buffer
                xorBuffer(buffer);
            }
            if (EOFFlag) { // xor to the end
                xorBuffer(buffer + bufferOnePartSize);
                break;
            }
            else {
                // copy memory
                for (int i = 0; i < bufferOnePartSize; i++)
                    buffer[i] = buffer[i + bufferOnePartSize];
            }
        }
        cout << "\n End\n";
        fin.close();
    }
    stringstream crcSum;
    for (int i = 2 * bufferOnePartSize; i < bufferSize; i++) {
        //buffer[i] = ~buffer[i];
        crcSum << std::hex << (unsigned int)buffer[i];
    }
    cout << endl << "CRC: " << crcSum.str() << endl;
    delete[] buffer;
    return crcSum.str();
}
A CRC is not defined by just the polynomial. You need to define the bit ordering, the initial value of the CRC register, and the final exclusive-or of the CRC. For the standard CRC-32, which gives 0x9be3e0a3 for "1234", the bits are processed starting with the least significant bit, the initial value of the register is 0xffffffff, and you exclusive-or the final results with 0xffffffff.
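For reference, a minimal bit-at-a-time sketch of that convention (this is the generic reflected algorithm, not the question's buffer-based layout; 0xEDB88320 is the bit-reversed form of the polynomial quoted above):
#include <cstdint>
#include <cstring>
#include <iostream>

// Bit-at-a-time CRC-32 as used by zlib/PKZIP: reflected input/output,
// initial register 0xFFFFFFFF, final XOR with 0xFFFFFFFF.
std::uint32_t crc32(const unsigned char *data, std::size_t len) {
    std::uint32_t crc = 0xFFFFFFFFu;
    for (std::size_t i = 0; i < len; ++i) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
    }
    return crc ^ 0xFFFFFFFFu;
}

int main() {
    const char *s = "1234";
    std::cout << std::hex
              << crc32(reinterpret_cast<const unsigned char *>(s), std::strlen(s))
              << '\n';   // prints 9be3e0a3, matching the online calculators
}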
The following code is supposed to find the minimum spanning tree from an adjacency matrix:
#include <iostream>
#include <fstream>
#include <stdlib.h>
#include <conio.h>
#include <vector>
#include <string>
using namespace std;

int i, j, k, a, b, u, v, n, ne = 1;
int min, mincost = 0, cost[9][9], parent[9];
int find(int);
int uni(int, int);

int find(int i)
{
    while (parent[i]) // Error occurs at this line
        i = parent[i];
    return i;
}

int uni(int i, int j)
{
    if (i != j)
    {
        parent[j] = i;
        return 1;
    }
    return 0;
}

int main()
{
    cout << "MST Kruskal:\n=================================\n";
    cout << "\nNo. of vertices: ";
    cin >> n;
    cout << "\nAdjacency matrix:\n\n";
    for (i = 1; i <= n; i++)
    {
        for (j = 1; j <= n; j++)
        {
            cin >> cost[i][j];
            if (cost[i][j] == 0)
                cost[i][j] = 999;
        }
    }
    cout << "\nMST Edge:\n\n";
    while (ne < n)
    {
        for (i = 1, min = 999; i <= n; i++)
        {
            for (j = 1; j <= n; j++)
            {
                if (cost[i][j] < min)
                {
                    min = cost[i][j];
                    a = u = i;
                    b = v = j;
                }
            }
        }
        u = find(u);
        v = find(v);
        if (uni(u, v))
        {
            cout << ne++ << "th" << " edge " << "(" << a << "," << b << ")" << " = " << min << endl;
            mincost += min;
        }
        cost[a][b] = cost[b][a] = 999;
    }
    cout << "\nMinimum cost = " << mincost << "\n" << endl;
    system("PAUSE");
    return 0;
}
It works for 6 vertices and the following matrix:
0 3 1 6 0 0
3 0 5 0 3 0
1 5 0 5 6 4
6 0 5 0 0 2
0 3 6 0 0 6
0 0 4 2 6 0
however, for 13 vertices and the following matrix:
0 1 0 0 0 2 6 0 0 0 0 0 0
1 0 1 2 4 0 0 0 0 0 0 0 0
0 1 0 0 4 0 0 0 0 0 0 0 0
0 2 0 0 2 1 0 0 0 0 0 0 0
0 4 4 2 0 2 1 0 0 0 0 4 0
2 0 0 1 2 0 0 0 0 0 0 2 0
6 0 0 0 1 0 0 3 0 1 0 5 0
0 0 0 0 0 0 3 0 2 0 0 0 0
0 0 0 0 0 0 0 2 0 0 1 0 0
0 0 0 0 0 0 1 0 0 0 1 3 2
0 0 0 0 0 0 0 0 1 1 0 0 0
0 0 0 0 4 2 5 0 0 3 0 0 1
0 0 0 0 0 0 0 0 0 2 0 1 0
this error occurs:
Unhandled exception at 0x00ED5811 in KruskalMST.exe: 0xC0000005: Access violation reading location 0x00F67A1C.
The error occurs at line 17: while (parent[i])
VS Autos:
Name Value Type
i 138596 int
parent 0x00ee048c {2, 999, 999, 999, 999, 999, 999, 999, 2} int[9]
[0] 2 int
[1] 999 int
[2] 999 int
[3] 999 int
[4] 999 int
[5] 999 int
[6] 999 int
[7] 999 int
[8] 2 int
You've defined your parent array to have a size of 9 (assuming you have a maximum of 9 vertices, so the maximum number of parents is 9). Six vertices work because it's less than 9. With thirteen vertices you MAY be accessing elements past the end of your parent array; thus, you should define your array size depending on the number of vertices.
P.S. In general, you don't want to have magic numbers in your code.
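As a rough sketch of that suggestion (assuming n is read before the arrays are needed, and keeping the question's 1-based indexing and 999 sentinel):
#include <iostream>
#include <vector>

int main() {
    int n = 0;
    std::cout << "No. of vertices: ";
    std::cin >> n;

    // Sized from the input instead of a hard-coded 9, so 13 vertices works too.
    std::vector<std::vector<int>> cost(n + 1, std::vector<int>(n + 1, 999));
    std::vector<int> parent(n + 1, 0);

    // ... read the adjacency matrix into cost[1..n][1..n] and run Kruskal as before ...
}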
while (parent[i])
{
    i = parent[i];
}
First of all, please use braces to enclose the while statement. Anyone adding another line to it would likely cause undesired behavior.
Your problem is likely that parent[i] assigns a value to i that is outside of the bounds of the parent array.
Try this to see what it's assigning to i:
while (parent[i] != 0)
{
    cout << "parent[i] is " << parent[i];
    i = parent[i];
}
Since the parent array has a size of 9, if i is ever set to 9 or greater (or less than 0 somehow), you may get an access violation when using parent[i].
Unrelated: it's good to be explicit about what condition you're checking in the while. Before I saw that parent was an int[], I didn't know whether it might be an array of pointers or booleans, so I didn't know what the while condition was checking for.
If you want to be safe, bounds check your parent array:
static const int parentSize = 9;
int parent[parentSize];

while (i >= 0 && i < parentSize && parent[i] != 0)
{
    cout << "parent[i] is " << parent[i];
    i = parent[i];
}
You likely need to increase parentSize to something larger. If you want something more dynamic, you might consider using std::vector instead of an array; it can be resized at runtime if you run into a case where the container isn't large enough.
I wrote a solution for a question on InterviewStreet; here is the problem description:
https://www.interviewstreet.com/challenges/dashboard/#problem/4e91289c38bfd
Here is the solution they have given:
https://gist.github.com/1285119
Here is the solution that I coded:
#include <iostream>
#include <string.h>
using namespace std;

#define LOOKUPTABLESIZE 10000000

int popCount[2 * LOOKUPTABLESIZE];

int main()
{
    int numberOfTests = 0;
    cin >> numberOfTests;
    for (int test = 0; test < numberOfTests; test++)
    {
        int startingNumber = 0;
        int endingNumber = 0;
        cin >> startingNumber >> endingNumber;
        int numberOf1s = 0;
        for (int number = startingNumber; number <= endingNumber; number++)
        {
            if (number > -LOOKUPTABLESIZE && number < LOOKUPTABLESIZE)
            {
                if (popCount[number + LOOKUPTABLESIZE] != 0)
                {
                    numberOf1s += popCount[number + LOOKUPTABLESIZE];
                }
                else
                {
                    popCount[number + LOOKUPTABLESIZE] = __builtin_popcount(number);
                    numberOf1s += popCount[number + LOOKUPTABLESIZE];
                }
            }
            else
            {
                numberOf1s += __builtin_popcount(number);
            }
        }
        cout << numberOf1s << endl;
    }
}
Can you please point out what is wrong with my code? It only passes 3/10 of the tests. The time limit is 3 seconds.
What is unoptimized about this code?
The algorithm. You are looping
for(int number=startingNumber;number<=endingNumber;number++)
computing or looking up the number of 1-bits in each. That can take a while.
A good algorithm counts the number of 1-bits in all numbers 0 <= k < n in O(log n) time using a bit of math.
Here is an implementation counting 0s in decimal expansions; the modification to make it count 1-bits shouldn't be hard.
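As an illustration of the O(log n) idea, here is a minimal sketch (my own code, not the linked solution) counting set bits directly, one bit position at a time, assuming ones(n) means the total number of 1-bits over all integers in [0, n]:
#include <cstdint>
#include <iostream>

// Total number of 1-bits over all integers in [0, n], in O(log n) time:
// for each bit position k, count how often that bit is set in 0..n.
std::uint64_t ones(std::uint64_t n) {
    std::uint64_t total = 0;
    for (int k = 0; k < 64 && (1ull << k) <= n; ++k) {
        std::uint64_t cycle = 1ull << (k + 1);          // period of bit k
        std::uint64_t full  = ((n + 1) / cycle) << k;   // complete on/off cycles
        std::uint64_t rem   = (n + 1) % cycle;          // partial cycle at the end
        total += full + (rem > (1ull << k) ? rem - (1ull << k) : 0);
    }
    return total;
}

int main() {
    // 32: each of the 4 bit positions is set in 8 of the numbers 0..15
    std::cout << ones(15) << '\n';
}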
When looking at such a question, you need to break it down into simple pieces.
For example, suppose that you know how many 1s there are in all numbers [0, N] (let's call this ones(N)), then we have:
size_t ones(size_t N) { /* magic ! */ }

size_t count(size_t A, size_t B) {
    return ones(B) - (A ? ones(A - 1) : 0);
}
This approach has the advantage that ones is probably simpler to program than count, for example using recursion. As such, a first naive attempt would be:
// Naive
size_t naive_ones(size_t N) {
    if (N == 0) { return 0; }
    return __builtin_popcount(N) + naive_ones(N - 1);
}
But this is likely to be too slow. Even when simply computing the value of count(A, B) we will be computing naive_ones(A-1) twice!
Fortunately, there is always memoization to assist here, and the transformation is quite trivial:
size_t memo_ones(size_t N) {
    static std::deque<size_t> Memo(1, 0);
    for (size_t i = Memo.size(); i <= N; ++i) {
        Memo.push_back(Memo[i-1] + __builtin_popcount(i));
    }
    return Memo[N];
}
It's likely that this helps; however, the cost in terms of memory might be... crippling. Ugh. Imagine that computing ones(1,000,000) would occupy 8 MB of memory on a 64-bit computer! A sparser memoization could help (for example, only memoizing every 8th or 16th count):
// count number of ones in (A, B]
static size_t unoptimized_count(size_t A, size_t B) {
    size_t result = 0;
    for (size_t i = A + 1; i <= B; ++i) {
        result += __builtin_popcount(i);
    }
    return result;
}
// something like this... be wary it's not tested.
size_t memo16_ones(size_t N) {
    static std::vector<size_t> Memo(1, 0);
    size_t const n16 = N - (N % 16);
    for (size_t i = Memo.size(); i * 16 <= n16; ++i) {
        Memo.push_back(Memo[i-1] + unoptimized_count(16 * (i-1), 16 * i));
    }
    return Memo[n16 / 16] + unoptimized_count(n16, N);
}
However, while it does reduce the memory cost, it does not solve the main speed issue: we must at least use __builtin_popcount B times! And for large values of B this is a killer.
The above solutions are mechanical; they did not require one ounce of thought. It turns out that interviews are not so much about writing code as they are about thinking.
Can we solve this problem more efficiently than dumbly enumerating all integers up to B?
Let's see what our brains (quite the amazing pattern machines) pick up when considering the first few entries:
N bin 1s ones(N)
0 0000 0 0
1 0001 1 1
2 0010 1 2
3 0011 2 4
4 0100 1 5
5 0101 2 7
6 0110 2 9
7 0111 3 12
8 1000 1 13
9 1001 2 15
10 1010 2 17
11 1011 3 20
12 1100 2 22
13 1101 3 25
14 1110 3 28
15 1111 3 32
Notice a pattern? I do ;) The range 8-15 is built exactly like 0-7 but with one more 1 per line => it's like a transposition. And it's quite logical too, isn't it?
Therefore, ones(15) - ones(7) = 8 + ones(7), ones(7) - ones(3) = 4 + ones(3) and ones(1) - ones(0) = 1 + ones(0).
Well, let's make this a formula:
Reminder: ones(N) = popcount(N) + ones(N-1) (almost) by definition
We now know that ones(2**n - 1) - ones(2**(n-1) - 1) = 2**(n-1) + ones(2**(n-1) - 1)
Let's isolate ones(2**n); it's easier to deal with. Note that popcount(2**n) = 1:
regroup: ones(2**n - 1) = 2**(n-1) + 2*ones(2**(n-1) - 1)
use the definition: ones(2**n) - 1 = 2**(n-1) + 2*ones(2**(n-1)) - 2
simplify: ones(2**n) = 2**(n-1) - 1 + 2*ones(2**(n-1)), with ones(1) = 1.
Quick sanity check:
1 = 2**0 => 1 (bottom)
2 = 2**1 => 2 = 2**0 - 1 + 2 * ones(1)
4 = 2**2 => 5 = 2**1 - 1 + 2 * ones(2)
8 = 2**3 => 13 = 2**2 - 1 + 2 * ones(4)
16 = 2**4 => 33 = 2**3 - 1 + 2 * ones(8)
Looks like it works!
We are not quite done though. A and B might not necessarily be powers of 2, and if we have to count all the way from 2**n to 2**n + 2**(n-1) that's still O(N)!
On the other hand, if we manage to express a number in base 2, then we should be able to leverage our newly acquired formula. The main advantage is that there are only log2(N) bits in the representation.
Let's pick an example and understand how it works: 13 = 8 + 4 + 1
1 -> 0001
4 -> 0100
8 -> 1000
13 -> 1101
... however, the count is not merely the sum:
ones(13) != ones(8) + ones(4) + ones(1)
Let's express it in terms of the "transposition" strategy instead:
ones(13) - ones(8) = ones(5) + (13 - 8)
ones(5) - ones(4) = ones(1) + (5 - 4)
Okay, easy to do with a bit of recursion.
#include <cmath>
#include <iostream>

static double const Log2 = log(2);

// store ones(2**n) at P2Count[n]
static size_t P2Count[64] = {};

// Unfortunately, the conversion to double might lose some precision
// static size_t log2(size_t n) { return log(double(n - 1))/Log2 + 1; }

// __builtin_clz* returns the number of leading 0s
static size_t log2(size_t n) {
    if (n <= 1) { return 0; }
    // number of bits needed, i.e. ceil(log2(n)); sizeof counts bytes, hence * 8
    return sizeof(n) * 8 - __builtin_clzl(n - 1);
}

static size_t ones(size_t n) {
    if (n == 0) { return 0; }
    if (n == 1) { return 1; }

    size_t const lg2 = log2(n);
    size_t const np2 = 1ul << lg2; // "next" power of 2

    if (np2 == n) { return P2Count[lg2]; }

    size_t const pp2 = np2 / 2; // "previous" power of 2

    return ones(pp2) + ones(n - pp2) + (n - pp2);
} // ones
// reminder: ones(2**n) = 2**(n-1) - 1 + 2*ones(2**(n-1))
void initP2Count() {
    P2Count[0] = 1;
    for (size_t i = 1; i != 64; ++i) {
        P2Count[i] = (1ul << (i-1)) - 1 + 2 * P2Count[i-1];
    }
} // initP2Count

size_t count(size_t const A, size_t const B) {
    if (A == 0) { return ones(B); }
    return ones(B) - ones(A - 1);
} // count
And a demonstration:
int main() {
    // Init table
    initP2Count();
    std::cout << "0: " << P2Count[0] << ", 1: " << P2Count[1] << ", 2: " << P2Count[2] << ", 3: " << P2Count[3] << "\n";

    for (size_t i = 0; i != 16; ++i) {
        std::cout << i << ": " << ones(i) << "\n";
    }

    std::cout << "count(7, 14): " << count(7, 14) << "\n";
}
Victory!
Note: as Daniel Fisher noted, this fails to account for negative numbers (but assuming two's complement, their count can be inferred from the positive counts).