If I have a very large array of bytes and want to find the indices of all 1-bits, with indices counted from the leftmost bit, how do I do this efficiently, probably using SIMD?
(For finding just the first 1-bit, see an earlier question. This question asks for an array of all indices instead of a single one.)
Of course I can write the following 512-bit non-SIMD version using C++20:
Try it online!
#include <cstdint>
#include <iostream>
#include <bit>
int Find1s512(uint64_t const * p, uint16_t * idxs) {
    int rpos = 0;
    for (int i = 0; i < 8; ++i) {
        uint64_t n = p[i];
        while (true) {
            int const j = std::countr_zero(n);
            if (j >= 64)
                break;
            idxs[rpos++] = i * 64 + j;
            n &= n - 1;
        }
    }
    return rpos;
}

int main() {
    uint64_t a[8] = {(1ULL << 17) | (1ULL << 63),
                     1ULL << 19, 1ULL << 23};
    uint16_t b[512] = {};
    int const cnt = Find1s512(a, b);
    for (int i = 0; i < cnt; ++i)
        std::cout << int(b[i]) << " ";
    // Printed result: 17 63 83 151
}
I would then use the above 512-bit version as a building block to collect the 1-bit positions of the whole large array.
But I'd like to find out the most efficient way to do this, especially using 128/256/512-bit SIMD.
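For reference, here is one possible SIMD direction. This is only a sketch under stated assumptions: it needs AVX-512BW plus the VBMI2 extension (compile with something like -mavx512vbmi2 -mavx512bw on GCC/Clang), a little-endian target (so reading the 512-bit block as sixteen 32-bit chunks keeps the same bit numbering as the scalar code above), and an output buffer of at least 512 entries. Find1s512_avx512 is just a placeholder name, not something from the question. Each 32-bit chunk is used directly as a compress mask over a vector of lane numbers:

#include <immintrin.h>
#include <cstdint>

// Same contract as Find1s512 above, but idxs must have room for 512 entries,
// because every iteration performs a full 64-byte store.
int Find1s512_avx512(uint64_t const * p, uint16_t * idxs) {
    // Lane numbers 0..31 as 16-bit elements (_mm512_set_epi16 lists its
    // arguments from the highest lane down to lane 0).
    const __m512i lanes = _mm512_set_epi16(
        31, 30, 29, 28, 27, 26, 25, 24, 23, 22, 21, 20, 19, 18, 17, 16,
        15, 14, 13, 12, 11, 10,  9,  8,  7,  6,  5,  4,  3,  2,  1,  0);
    const uint32_t * p32 = reinterpret_cast<const uint32_t *>(p);
    int rpos = 0;
    for (int i = 0; i < 16; ++i) {            // 16 x 32 bits = 512 bits
        __mmask32 m = p32[i];                 // the chunk itself is the compress mask
        __m512i idx = _mm512_add_epi16(lanes, _mm512_set1_epi16(static_cast<short>(i * 32)));
        _mm512_storeu_si512(idxs + rpos, _mm512_maskz_compress_epi16(m, idx));
        rpos += _mm_popcnt_u32(m);            // advance by the number of 1-bits
    }
    return rpos;
}

The scalar version stays as the portable fallback; the compressed store always writes 32 index slots, which is why the output buffer needs the full 512 entries even when few bits are set.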
Related
I've written a program which shows the binary representation of a particular integer value, using bitwise operators in C++. For even numbers it works as I expect, but for odd numbers it adds a 1 to the left of the binary representation.
#include <iostream>
using std::cout;
using std::cin;
using std::endl;
int main()
{
    unsigned int a = 128;
    for (int i = sizeof(a) * 8; i >= 0; --i) {
        if (a & (1UL << i)) { // if i-th digit is 1
            cout << 1;        // Output 1
        }
        else {
            cout << 0;        // Otherwise output 0
        }
    }
    cout << endl;
    system("pause");
    return 0;
}
Results:
For a = 128: 000000000000000000000000010000000,
For a = 127: 100000000000000000000000001111111
You might prefer the CHAR_BIT macro instead of a raw 8 (#include <climits>).
Consider your start value! Assuming unsigned int has 32 bits, your start value is int i = 4 * 8 = 32, so 1UL << i shifts the value out of range. This is undefined behaviour and could result in anything; evidently your specific compiler or hardware shifts modulo 32, so the first iteration tests a & 1, which produces the unexpected leading 1... Did you notice that you actually printed 33 digits instead of 32?
The problem is here:
for (int i = sizeof(a) * 8; i >= 0; --i) {
it should be:
for (int i = sizeof(a) * 8; i-- ; ) {
http://coliru.stacked-crooked.com/a/8cb2b745063883fa
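Putting both suggestions together (CHAR_BIT and the post-decrement loop), a corrected program might look like this; it is only a sketch, and the value 127 is just an example:

#include <climits>   // CHAR_BIT
#include <iostream>

int main()
{
    unsigned int a = 127;
    // Count down from the highest valid bit index to 0: exactly
    // sizeof(a) * CHAR_BIT digits are printed and the shift amount
    // never reaches the width of the type.
    for (unsigned i = sizeof(a) * CHAR_BIT; i--; )
        std::cout << ((a >> i) & 1u);
    std::cout << '\n';   // 00000000000000000000000001111111 for a 32-bit unsigned int
}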
#include <iostream>
#include <bitset>
using namespace std;
int main() {
    int k = -1;
    int v = -1;
    int r = 0;
    for (int s = 0; s <= 30; s++) {
        int vbit = v & 1;
        v >>= 1;
        r |= vbit;
        r <<= 1;
    }
    int vbit = v & 1;
    r |= vbit;
    cout << bitset<32>(k) << " " << bitset<32>(r) << endl;
}
I have written code to reverse the bits in an integer. My code works fine if I run it as written, but I think I am looping one time fewer than I should to get the correct answer: I shift 31 times to access all the bits in the int, and the last two lines of code after the for loop patch the last bits into their places.
Is there a conceptual problem or a silly mistake?
Apply the shift to r before adding vbit to it.
That way you can shift 32 times.
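For illustration, here is a sketch of that change applied to the loop from the question, switching the variables to unsigned so the left shift stays well defined:

#include <bitset>
#include <iostream>

int main() {
    unsigned v = static_cast<unsigned>(-1);  // example value to reverse
    unsigned r = 0;
    for (int s = 0; s < 32; ++s) {
        r <<= 1;       // shift first...
        r |= v & 1;    // ...then bring in the next bit
        v >>= 1;
    }
    // All 32 bits are handled inside the loop; no patch-up lines are needed.
    std::cout << std::bitset<32>(r) << std::endl;
}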
Here is a way to do it that works for any unsigned integer type. (unsigned because it is undefined behavior to left shift a negative number, so using unsigned forces the caller to be sure the value is non-negative)
#include <limits>
#include <type_traits>
template <typename T>
auto reverse_bits(T value) -> T
{
    static_assert(std::is_unsigned<T>::value, "type must be unsigned integer");
    constexpr auto digits = std::numeric_limits<T>::digits;
    T result = 0;
    for (int k = 0; k < digits; ++k)
        result |= ((value >> k) & 0x01) << (digits - (k + 1));
    return result;
}
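For example (the values here are just illustrations, not from the original answer):

#include <cstdint>
#include <iostream>

int main()
{
    // 0x0d is 00001101; reversed within 8 bits it becomes 10110000 == 0xb0.
    std::cout << std::hex
              << unsigned(reverse_bits<std::uint8_t>(0x0d)) << "\n"  // prints b0
              << reverse_bits<std::uint32_t>(1u) << "\n";            // prints 80000000
}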
I'd like to generate all possible combinations (without repetitions) in bit representation. I can't use any library like boost or stl::next_combination - it has to be my own code (computation time is very important).
Here's my code (modified from one StackOverflow user's):
int combination = (1 << k) - 1;
int new_combination = 0;
int change = 0;

while (true)
{
    // return next combination
    cout << combination << endl;

    // find first index to update
    int indexToUpdate = k;
    while (indexToUpdate > 0 && GetBitPositionByNr(combination, indexToUpdate) >= n - k + indexToUpdate)
        indexToUpdate--;

    if (indexToUpdate == 1) change = 1; // move all bits to the left by one position
    if (indexToUpdate <= 0) break;      // done

    // update combination indices
    new_combination = 0;
    for (int combIndex = GetBitPositionByNr(combination, indexToUpdate) - 1; indexToUpdate <= k; indexToUpdate++, combIndex++)
    {
        if (change)
        {
            new_combination |= (1 << (combIndex + 1));
        }
        else
        {
            combination = combination & (~(1 << combIndex));
            combination |= (1 << (combIndex + 1));
        }
    }

    if (change) combination = new_combination;
    change = 0;
}
where n is the total number of elements and k is the number of elements in a combination.
GetBitPositionByNr returns the position of the k-th set bit:
GetBitPositionByNr(13, 2) = 3, because 13 is 1101 and its second set bit is at the third position.
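The helper itself is not shown in the question; judging only from that example, a plausible implementation (purely a reconstruction, with 1-based positions counted from the least significant bit) would be:

// Hypothetical reconstruction of the question's helper: returns the 1-based
// position of the nr-th set bit of value, or 0 if there are fewer set bits.
int GetBitPositionByNr(int value, int nr)
{
    int pos = 0;
    while (value) {
        ++pos;
        if ((value & 1) && --nr == 0)
            return pos;
        value >>= 1;
    }
    return 0;
}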
It gives me correct output for n=4, k=2 which is:
0011 (3 - decimal representation - printed value)
0101 (5)
1001 (9)
0110 (6)
1010 (10)
1100 (12)
It also gives me correct output for k=1 and k=4, but it gives me wrong output for k=3, which is:
0111 (7)
1011 (11)
1001 (9) - wrong, should be 1101 (13)
1110 (14)
I guess the problem is in the inner while condition (the second one), but I don't know how to fix it.
Maybe some of you know a better (faster) algorithm to do what I want to achieve? It can't use additional memory (arrays).
Here is code to run on ideone: IDEONE
When in doubt, use brute force. That is, generate all variations with repetition, then filter out the unnecessary patterns:
#include <vector>

unsigned bit_count(unsigned n)
{
    unsigned i = 0;
    while (n) {
        i += n & 1;
        n >>= 1;
    }
    return i;
}

int main()
{
    std::vector<unsigned> combs;
    const unsigned N = 4;
    const unsigned K = 3;
    for (int i = 0; i < (1 << N); i++) {
        if (bit_count(i) == K) {
            combs.push_back(i);
        }
    }
    // and print 'combs' here
}
Edit: Someone else already pointed out a solution without filtering and brute force, but I'm still going to give you a few hints about this algorithm:
Most compilers offer some sort of intrinsic population-count function. I know GCC and Clang have __builtin_popcount(). Using this intrinsic function, I was able to double the speed of the code.
Since you seem to be working on GPUs, you can parallelize the code. I have done it using C++11's standard threading facilities, and I've managed to compute all 32-bit repetitions for arbitrarily-chosen popcounts 1, 16 and 19 in 7.1 seconds on my 8-core Intel machine.
Here's the final code I've written:
#include <vector>
#include <cstdio>
#include <thread>
#include <utility>
#include <future>
unsigned popcount_range(unsigned popcount, unsigned long min, unsigned long max)
{
    unsigned n = 0;
    for (unsigned long i = min; i < max; i++) {
        n += __builtin_popcount(i) == popcount;
    }
    return n;
}

int main()
{
    const unsigned N = 32;
    const unsigned K = 16;
    const unsigned N_cores = 8;
    const unsigned long Max = 1ul << N;
    const unsigned long N_per_core = Max / N_cores;

    std::vector<std::future<unsigned>> v;
    for (unsigned core = 0; core < N_cores; core++) {
        unsigned long core_min = N_per_core * core;
        unsigned long core_max = core_min + N_per_core;
        auto fut = std::async(
            std::launch::async,
            popcount_range,
            K,
            core_min,
            core_max
        );
        v.push_back(std::move(fut));
    }

    unsigned final_count = 0;
    for (auto &fut : v) {
        final_count += fut.get();
    }

    printf("%u\n", final_count);
    return 0;
}
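The non-filtering solution referenced above is not reproduced here, but the classic trick in this area is the "lexicographically next bit permutation" from Sean Anderson's Bit Twiddling Hacks, which steps from one k-bit pattern directly to the next. A sketch follows; it may or may not be the exact answer being referenced, __builtin_ctz is GCC/Clang-specific, and v must be non-zero:

#include <iostream>

// Given a pattern v with k bits set, return the next larger value with k bits set.
unsigned next_bit_permutation(unsigned v)
{
    unsigned t = v | (v - 1);   // set all trailing zeros below the lowest set bit
    return (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctz(v) + 1));
}

int main()
{
    const unsigned n = 4, k = 3;
    for (unsigned v = (1u << k) - 1; v < (1u << n); v = next_bit_permutation(v))
        std::cout << v << "\n";   // prints 7 11 13 14
}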
I am trying to solve a problem, part of which requires me to calculate (2^n) % 1000000007, where n <= 10^9. But my following code gives me the output "0" even for input like n = 99.
Is there any way other than having a loop which multiplies the output by 2 and takes the modulo every time? (That is not what I am looking for, as it will be very slow for large numbers.)
#include<stdio.h>
#include<math.h>
#include<iostream>
using namespace std;
int main()
{
    unsigned long long gaps, total;
    while (1)
    {
        cin >> gaps;
        total = (unsigned long long)powf(2, gaps) % 1000000007;
        cout << total << endl;
    }
}
You need a "big num" library, it is not clear what platform you are on, but start here:
http://gmplib.org/
this is not what I am looking for as this will be very slow for large numbers
Using a bigint library will be considerably slower than pretty much any other solution.
Don't take the modulo every pass through the loop: rather, only take it when the output grows bigger than the modulus, as follows:
#include <iostream>
int main() {
    int modulus = 1000000007;
    int n = 88888888;
    long res = 1;
    for (long i = 0; i < n; ++i) {
        res *= 2;
        if (res > modulus)
            res %= modulus;
    }
    std::cout << res << std::endl;
}
This is actually pretty quick:
$ time ./t
./t 1.19s user 0.00s system 99% cpu 1.197 total
I should mention that the reason this works is that if a and b are equivalent mod m (that is, a % m == b % m), then the equality still holds after multiplying both by any k (that is, (a*k) % m == (b*k) % m).
Chris proposed GMP, but if you need just that and want to do things The C++ Way, not The C Way, and without unnecessary complexity, you may just want to check this out - it generates few warnings when compiling, but is quite simple and Just Works™.
You can split your 2^n into chunks of 2^m. You need to find:
2^m * 2^m * ... * 2^(less than m)
The number m should be 31 for a 32-bit CPU. Then your answer is:
chunk1 % k * chunk2 % k * ... where k = 1000000007
You are still O(N). But then you can utilize the fact that all the chunk % k values are equal except the last one, and you can make it O(1).
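For concreteness, here is a minimal sketch of that chunking idea with m = 31 as suggested; the variable names are placeholders, and the loop over the full chunks is still linear in n/m, matching the "still O(N)" remark above:

#include <cstdint>
#include <iostream>

int main()
{
    const uint64_t k = 1000000007;
    uint64_t n = 99;                            // the exponent
    const uint64_t m = 31;                      // chunk size in bits
    const uint64_t chunk = (1ULL << m) % k;     // 2^m % k, the same for every full chunk
    uint64_t result = 1;
    for (uint64_t full = n / m; full > 0; --full)
        result = result * chunk % k;            // multiply in each full chunk
    result = result * ((1ULL << (n % m)) % k) % k;  // the final partial chunk
    std::cout << result << "\n";                // 2^n % 1000000007
}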
I wrote this function. It is very inefficient, but it works with very large numbers. It uses my self-made algorithm to store big numbers in arrays using a decimal-like system.
mpfr2.cpp
#include "mpfr2.h"
void mpfr2::mpfr::setNumber(std::string a) {
    for (int i = a.length() - 1, j = 0; i >= 0; ++j, --i) {
        _a[j] = a[i] - '0';
    }
    res_size = a.length();
}

int mpfr2::mpfr::multiply(mpfr& a, mpfr b)
{
    mpfr ans = mpfr();
    // One by one multiply n with individual digits of res[]
    int i = 0;
    for (i = 0; i < b.res_size; ++i)
    {
        for (int j = 0; j < a.res_size; ++j) {
            ans._a[i + j] += b._a[i] * a._a[j];
        }
    }
    for (i = 0; i < a.res_size + b.res_size; i++)
    {
        int tmp = ans._a[i] / 10;
        ans._a[i] = ans._a[i] % 10;
        ans._a[i + 1] = ans._a[i + 1] + tmp;
    }
    for (i = a.res_size + b.res_size; i >= 0; i--)
    {
        if (ans._a[i] > 0) break;
    }
    ans.res_size = i + 1;
    a = ans;
    return a.res_size;
}

mpfr2::mpfr mpfr2::mpfr::pow(mpfr a, mpfr b) {
    mpfr t = a;
    std::string bStr = "";
    for (int i = b.res_size - 1; i >= 0; --i) {
        bStr += std::to_string(b._a[i]);
    }
    int i = 1;
    while (!0) {
        if (bStr == std::to_string(i)) break;
        a.res_size = multiply(a, t);
        // Debugging
        std::cout << "\npow() iteration " << i << std::endl;
        ++i;
    }
    return a;
}
mpfr2.h
#pragma once
//#infdef MPFR2_H
//#define MPFR2_H
// C standard includes
#include <iostream>
#include <string>
#define MAX 0x7fffffff/32/4 // 2147483647
namespace mpfr2 {
    class mpfr
    {
    public:
        int _a[MAX];
        int res_size;

        void setNumber(std::string);
        static int multiply(mpfr&, mpfr);
        static mpfr pow(mpfr, mpfr);
    };
}
//#endif
main.cpp
#include <iostream>
#include <fstream>
// Local headers
#include "mpfr2.h" // Defines local mpfr algorithm library
// Namespaces
namespace m = mpfr2; // Reduce the typing a bit later...
m::mpfr tetration(m::mpfr, int);
int main() {
    // Hardcoded tests
    int x = 7;
    std::ofstream f("out.txt");
    m::mpfr t;
    for (int b = 1; b < x; b++) {
        std::cout << "2^^" << b << std::endl; // Hardcoded message
        t.setNumber("2");
        m::mpfr res = tetration(t, b);
        for (int i = res.res_size - 1; i >= 0; i--) {
            std::cout << res._a[i];
            f << res._a[i];
        }
        f << std::endl << std::endl;
        std::cout << std::endl << std::endl;
    }
    char c; std::cin.ignore(); std::cin >> c;
    return 0;
}

m::mpfr tetration(m::mpfr a, int b)
{
    m::mpfr tmp = a;
    if (b <= 0) return m::mpfr();
    for (; b > 1; b--) tmp = m::mpfr::pow(a, tmp);
    return tmp;
}
I created this for tetration and eventually hyperoperations. When the numbers get really big it can take ages to calculate and a lot of memory. The #define MAX 0x7fffffff/32/4 is the number of decimal digits one number can have. I might make another algorithm later to combine multiple of these arrays into one number. On my system the max array length is 0x7fffffff, aka 2147483647, aka 2^31 - 1, aka INT32_MAX (int is usually 32 bits), so I had to divide INT32_MAX by 32 to make the creation of this array possible. I also divided it by 4 to reduce memory usage in the multiply() function.
- Jubiman
I have a couple of integers, for example (in binary representation):
00001000, 01111111, 10000000, 00000001
and I need to put them in sequence into an array of bytes (chars), without the leading zeros, like so:
10001111 11110000 0001000
I understand that it must be done by bit shifting with <<,>> and bitwise OR |, but I can't find the correct algorithm. Can you suggest the best approach?
The integers I need to put there are unsigned long long ints, so the length of one can be anywhere from 1 bit to 8 bytes (64 bits).
You could use a std::bitset:
#include <bitset>
#include <iostream>
int main() {
    unsigned i = 242122534;
    std::bitset<sizeof(i) * 8> bits;
    bits = i;
    std::cout << bits.to_string() << "\n";
}
There are doubtless other ways of doing it, but I would probably go with the simplest:
std::vector<unsigned char> integers; // Has your list of bytes
integers.push_back(0x02);
integers.push_back(0xFF);
integers.push_back(0x00);
integers.push_back(0x10);
integers.push_back(0x01);

std::string str; // Will have your resulting string
for (unsigned int i = 0; i < integers.size(); i++)
    for (int j = 0; j < 8; j++)
        str += ((integers[i] << j) & 0x80 ? "1" : "0");

std::cout << str << "\n";

size_t begin = str.find("1");
if (begin > 0) str.erase(0, begin);

std::cout << str << "\n";
I wrote this up before you mentioned that you were using long ints or whatnot, but that doesn't actually change very much of this. The mask needs to change, and the j loop variable, but otherwise the above should work.
Convert them to strings, then erase all leading zeros:
#include <iostream>
#include <sstream>
#include <string>
#include <cstdint>
std::string to_bin(uint64_t v)
{
    std::stringstream ss;
    for (size_t x = 0; x < 64; ++x)
    {
        if (v & 0x8000000000000000)
            ss << "1";
        else
            ss << "0";
        v <<= 1;
    }
    return ss.str();
}

void trim_right(std::string& in)
{
    size_t non_zero = in.find_first_not_of("0");
    if (std::string::npos != non_zero)
        in.erase(in.begin(), in.begin() + non_zero);
    else
    {
        // no 1 in data set, what to do?
        in = "<no data>";
    }
}

int main()
{
    uint64_t v1 = 437148234;
    uint64_t v2 = 1;
    uint64_t v3 = 0;
    std::string v1s = to_bin(v1);
    std::string v2s = to_bin(v2);
    std::string v3s = to_bin(v3);
    trim_right(v1s);
    trim_right(v2s);
    trim_right(v3s);
    std::cout << v1s << "\n"
              << v2s << "\n"
              << v3s << "\n";
    return 0;
}
A simple approach would be to keep a "current byte" (acc in the following), the number of bits already used in it (bitcount), and a vector of fully processed bytes (output):
int acc = 0;
int bitcount = 0;
std::vector<unsigned char> output;

void writeBits(int size, unsigned long long x)
{
    while (size > 0)
    {
        // sz = How many bits we're about to copy
        int sz = size;

        // max avail space in acc
        if (sz > 8 - bitcount) sz = 8 - bitcount;

        // get the bits
        acc |= ((x >> (size - sz)) << (8 - bitcount - sz));

        // zero them off in x (1ULL: size - sz can be larger than 31)
        x &= (1ULL << (size - sz)) - 1;

        // acc got bigger and x got smaller
        bitcount += sz;
        size -= sz;

        if (bitcount == 8)
        {
            // got a full byte!
            output.push_back(acc);
            acc = bitcount = 0;
        }
    }
}

void writeNumber(unsigned long long x)
{
    // How big is it?
    int size = 0;
    while (size < 64 && x >= (1ULL << size))
        size++;
    writeBits(size, x);
}
Note that at the end of processing you should check whether any bits are still in the accumulator (bitcount > 0) and, if so, flush them with output.push_back(acc);.
Note also that if speed is an issue, then using a bigger accumulator is probably a good idea (although the output will then depend on machine endianness), and that discovering how many bits a number uses can be made much faster than a linear search in C++ (for example, x86 has a dedicated machine-language instruction, BSR, for this).
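For example, with C++20 (already used in the very first snippet on this page), the linear size search in writeNumber can be replaced by std::bit_width, which compilers typically lower to a single BSR/LZCNT-style instruction:

#include <bit>   // C++20

// Drop-in replacement for writeNumber above: std::bit_width(x) is the number
// of bits needed to represent x (0 for x == 0), i.e. exactly the size that the
// linear search computes.
void writeNumber(unsigned long long x)
{
    writeBits(static_cast<int>(std::bit_width(x)), x);
}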