I just got this skeleton code for a sudoku solver, but I don't understand the syntax they've used or how I'm supposed to proceed. They call it a bitset, but when I searched for the term I found nothing similar.
// This file contains a simple implementation of sets of
// digits between 1 and 9, called fields.
#ifndef __SUDOKU_FIELD_H__
#define __SUDOKU_FIELD_H__
#include <iostream>
#include <cassert>
#include "digit.h"
class Field {
private:
// Use integers for a bitset
unsigned int _digits;
// Number of digits in bitset
unsigned int _size;
public:
// Initialize with all digits between 1 and 9 included
Field(void)
: _digits((1 << 1) | (1 << 2) | (1 << 3) |
(1 << 4) | (1 << 5) | (1 << 6) |
(1 << 7) | (1 << 8) | (1 << 9)), _size(9) {}
// Return size of digit set (number of digits in set)
unsigned int size(void) const {
// FILL IN
}
// Test whether digit set is empty
bool empty(void) const {
// FILL IN
}
// Test whether set is assigned (that is, single digit left)
bool assigned(void) const {
// FILL IN
}
// Test whether digit d is included in set
bool in(digit d) const {
assert((d >= 1) && (d <= 9));
// FILL IN
}
// Return digit to which the set is assigned
digit value(void) const {
assert(assigned());
// FILL IN
}
// Print digits still included
void print(std::ostream& os) const;
// Remove digit d from set (d must be still included)
void prune(digit d) {
assert(in(d));
// FILL IN
}
// Assign field to digit d (d must be still included)
void assign(digit d) {
assert(in(d));
// FILL IN
}
};
// Print field
inline std::ostream&
operator<<(std::ostream& os, const Field& f) {
f.print(os); return os;
}
#endif
Obviously the // FILL INs are for me to write, and the meaning of the bitset is 9 bits, all of which are initially set to 1. The question is how I manipulate and use them.
Oh, by the way, this is a digit:
#ifndef __SUDOKU_DIGIT_H__
#define __SUDOKU_DIGIT_H__
typedef unsigned char digit;
#endif
A "bitfield" is just an interpretation of a integer in memory as if it was a list of bits. You will be setting, testing and resetting bits in this integer individually, and the comments in the code tell you exactly what to do in each function.
You can use '&' and '|' for bitwise AND and OR, and '<<' and '>>' for shifting all bits to the left and right. This article can be very helpful to you: http://en.wikipedia.org/wiki/Bitwise_operation
This initialization sets bits 1 through 9 of _digits to 1. The expression (1 << n) means 1 shifted n bits to the left. The expression a | b means a bit-wise OR of a and b.
So, in detail, each of the expressions (1 << n) results in a bit pattern with all zeroes and a 1 at the n-th position, for 0 < n < 10. All of these are OR'd together to yield a bit pattern with bits 1 through 9 set to 1:
(1 << 1) 0010 |
(1 << 2) 0100 |
(1 << 3) 1000
======================
1110
(unused bits not shown)
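To make this concrete, here is a sketch (one possible shape, not necessarily the intended solution) of how two of the FILL INs could use these operators:
// Sketch only: membership is a bit test, removal is a bit clear.
bool in(digit d) const {
    assert((d >= 1) && (d <= 9));
    return (_digits & (1u << d)) != 0;  // is bit d set?
}
void prune(digit d) {
    assert(in(d));
    _digits &= ~(1u << d);  // clear bit d
    --_size;                // keep the cached count in sync
}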
4 bits:
0000
1 in binary is:
0001
Shifting is used to choose a single bit:
0001 << 0 = 0001 // first bit
0001 << 1 = 0010 // second bit
0001 << 2 = 0100 // third bit
Or is used to set individual bits:
0000 | 0100 = 0100
And is used to retrieve bits:
0111 & 0001 = 0001
This is how bitsets work.
Example:
unsigned int x = 0;
x |= 1 << 4; // set 5th bit
x |= 1 << 3; // set 4th bit
x |= 0x3; // set first 2 bits - 0x3 = 0011
unsigned int b = true;
x |= b << 7; // set 8th bit to value of b
if (x & (1 << 2)) { // check if 3rd bit is true
// ...
}
b = (x >> 3) & 1; // set b to value of 4th bit
Here is a way to count the number of bits set, along with other helpful algorithms:
unsigned int v; // count the number of bits set in v
unsigned int c; // c accumulates the total bits set in v
for (c = 0; v; c++)
{
v &= v - 1; // clear the least significant bit set
}
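This is the kind of loop the size() member above could use (although the class also tracks _size directly). Wrapped in a function, it is Kernighan's method: each iteration clears the lowest set bit, so the loop runs once per set bit:
unsigned int count_bits(unsigned int v) {
    unsigned int c;          // c accumulates the total bits set in v
    for (c = 0; v; c++) {
        v &= v - 1;          // clear the least significant bit set
    }
    return c;
}
// e.g. count_bits(7) == 3, count_bits(0) == 0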
I currently have a few functions for working with binary strings via their decimal representation; namely, I have functions to rotate the binary string to the left, flip a specific bit, flip all bits, and reverse the order of the binary string, all working on the decimal representation. They are defined as follows:
inline u64 rotate_left(u64 n, u64 maxPower) {
return (n >= maxPower) ? (((int64_t)n - (int64_t)maxPower) * 2 + 1) : n * 2;
}
inline bool checkBit(u64 n, int k) {
return n & (1ULL << k);
}
inline u64 flip(u64 n, u64 maxBinaryNum) {
return maxBinaryNum - n - 1;
}
inline u64 flip(u64 n, u64 kthPower, int k) {
return checkBit(n, k) ? (int64_t(n) - (int64_t)kthPower) : (n + kthPower);
}
inline u64 reverseBits(u64 n, int L) {
u64 rev = (lookup[n & 0xffULL] << 56) | // consider the first 8 bits
(lookup[(n >> 8) & 0xffULL] << 48) | // consider the next 8 bits
(lookup[(n >> 16) & 0xffULL] << 40) | // consider the next 8 bits
(lookup[(n >> 24) & 0xffULL] << 32) | // consider the next 8 bits
(lookup[(n >> 32) & 0xffULL] << 24) | // consider the next 8 bits
(lookup[(n >> 40) & 0xffULL] << 16) | // consider the next 8 bits
(lookup[(n >> 48) & 0xffULL] << 8) | // consider the next 8 bits
(lookup[(n >> 56) & 0xffULL]); // consider last 8 bits
return (rev >> (64 - L)); // get back to the original maximal number
}
With the lookup[] table defined as:
#define R2(n) n, n + 2*64, n + 1*64, n + 3*64
#define R4(n) R2(n), R2(n + 2*16), R2(n + 1*16), R2(n + 3*16)
#define R6(n) R4(n), R4(n + 2*4 ), R4(n + 1*4 ), R4(n + 3*4 )
#define REVERSE_BITS R6(0), R6(2), R6(1), R6(3)
const u64 lookup[256] = { REVERSE_BITS };
All but the last one are easy to implement.
My question is whether you know of any generalization of the above functions to the octal string of a number, again working only on the decimal representation as above, and obviously without converting to and storing the octal string itself (mainly for performance).
With flip() in octal code I would need to return the number with 8-x at the specified place in the string (for instance: flip(2576, 2nd power, 2nd position) = 2376, i.e. 3 = 8-5).
I do understand that formulas similar to those for rotate_left or flip may not be possible in octal representation (maybe?), which is why I am looking for an alternative implementation.
A possibility would be to represent each digit of the octal string by its binary string, in other words to write: 29 --octal-> 35 --bin-> (011)(101)
and thus work on groups of binary digits. Would that be a good idea?
If you have any suggestions for the code above for binary representation, I welcome any piece of advice.
Thanks in advance and sorry for the long post!
Here is my understanding of rotate_left. I don't know whether my understanding of the question is correct, but I hope this will help you.
// maxPower: 8
// n < maxPower:
// 0001 -> 0010
//
// n >= maxPower
// n: 1011
// n - maxPower: 0011
// (n - maxPower) * 2: 0110
// (n - maxPower) * 2 + 1: 0111
inline u64 rotate_left(u64 n, u64 maxPower) {
return (n >= maxPower) ? (((int64_t)n - (int64_t)maxPower) * 2 + 1) : n * 2;
}
// so, rotate_left for octal; example: 3-digit octal rotate left.
// 0 1 1 -> 1 1 0
// 000 001 001 -> 001 001 000
// 4 4 0 -> 4 0 4
// 100 100 000 -> 100 000 100
// so, keep:
// the first (most significant) octal digit:
// first_digit = n & (7 << ((digit-1) * 3))
// and the remaining octal digits:
// other_digit = n - first_digit
// example for 100 100 000:
// first_digit is 100 000 000
// other_digit is 000 100 000
// so the rotate-left result is:
// (other_digit << 3) | (first_digit >> ((digit-1) * 3))
//
inline u64 rotate_left_oct(u64 n, u64 digit) {
    u64 rotate = 3 * (digit - 1);
    u64 first_digit = n & (7ULL << rotate);  // 7ULL so the mask stays 64-bit for large digit counts
    u64 other_digit = n - first_digit;
    return (other_digit << 3) | (first_digit >> rotate);
}
As for flip: in base 8 it should be 7-x instead of 8-x:
// oct flip works the same as binary flip:
// (111)8 -> (001 001 001)2
// flipped:
// (666)8 -> (110 110 110)2
// so it really should be 7 - x, not 8 - x.
//
inline u64 flip_oct(u64 n, u64 digit) {
    u64 maxNumber = (1ULL << (3 * digit)) - 1;  // 1ULL so this works for more than 10 digits
    assert(n <= maxNumber);
    return maxNumber - n;
}
// oct flip of one digit
// (111)8 -> (001 001 001)2
// flip the 2nd digit of it:
// (161)8 -> (001 110 001)2
// just XOR the nth octal digit with 7.
//
inline u64 flip_oct(u64 n, u64 nth, u64 digit) {
    return (7ULL << (3 * (nth - 1))) ^ n;  // XOR the nth octal digit with 7
}
And a simple reverse:
inline u64 reverse_oct(u64 n, u64 digit) {
u64 m = 0;
while (digit > 0) {
m = (m << 3) | (n & 7);
n = n >> 3;
--digit;
}
return m;
}
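Assuming u64 is a typedef for uint64_t, here are a few quick sanity checks of the sketches above, written with C++ octal literals (a leading 0 means octal):
#include <cassert>
#include <cstdint>
typedef uint64_t u64;
// ... rotate_left_oct, flip_oct and reverse_oct as defined above ...

int main() {
    assert(rotate_left_oct(0440, 3) == 0404);  // 4 4 0 -> 4 0 4
    assert(rotate_left_oct(0011, 3) == 0110);  // 0 1 1 -> 1 1 0
    assert(flip_oct(0666, 3) == 0111);         // every digit x becomes 7 - x
    assert(flip_oct(0111, 2, 3) == 0161);      // only the 2nd digit flips
    assert(reverse_oct(0123, 3) == 0321);      // digit order reversed
    return 0;
}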
I want to stretch a mask in which every bit represents 4 bits of the stretched mask.
I am looking for an elegant bit manipulation to do this stretching, using C++ and SystemC.
for example:
input:
mask (32 bits) = 0x0000CF00
output:
stretched mask (128 bits) = 0x00000000 00000000 FF00FFFF 00000000
and just to clarify the example, let's look at the hex digit C:
0xC = 1100 after stretching: 1111111100000000 = 0xFF00
Doing this in an elegant form is not easy.
The simplest approach is probably a loop with bit shifts (here bit_test is assumed to test bit i of var):
sc_biguint<128> result = 0;
for (int i = 31; i >= 0; i--) {  // walk from the most significant bit down
    result <<= 4;                // make room for the next nibble
    if (bit_test(var, i)) {
        result |= 0xF;           // expand a set bit into 1111
    }
}
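As a quick plain-C++ check of the same idea outside SystemC (a sketch using GCC/Clang's unsigned __int128 extension in place of sc_biguint<128>):
#include <cstdint>
#include <cstdio>

int main() {
    uint32_t mask = 0x0000CF00;
    unsigned __int128 result = 0;
    for (int i = 31; i >= 0; i--) {  // walk from the most significant bit down
        result <<= 4;                // make room for the next nibble
        if ((mask >> i) & 1)
            result |= 0xF;           // expand a set bit into 1111
    }
    // prints: 0000000000000000 ff00ffff00000000
    printf("%016llx %016llx\n",
           (unsigned long long)(result >> 64),
           (unsigned long long)result);
    return 0;
}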
Here's a way of stretching a 16-bit mask into 64 bits where every bit represents 4 bits of stretched mask:
uint64_t x = 0x000000000000CF00LL;
x = (x | (x << 24)) & 0x000000ff000000ffLL;
x = (x | (x << 12)) & 0x000f000f000f000fLL;
x = (x | (x << 6)) & 0x0303030303030303LL;
x = (x | (x << 3)) & 0x1111111111111111LL;
x |= x << 1;
x |= x << 2;
It starts off with the mask in the bottom 16 bits. Then it moves the top 8 bits of the mask into the top 32 bits, like this:
0000000000000000 0000000000000000 0000000000000000 ABCDEFGHIJKLMNOP
becomes
0000000000000000 00000000ABCDEFGH 0000000000000000 00000000IJKLMNOP
Then it solves the similar problem of stretching a mask from the bottom 8 bits of a 32 bit word, to the top and bottom 32-bits simultaneously:
000000000000ABCD 000000000000EFGH 000000000000IJKL 000000000000MNOP
Then it does it for 4 bits inside 16 and so on until the bits are spread out:
000A000B000C000D 000E000F000G000H 000I000J000K000L 000M000N000O000P
Then it "smears" them across 4 bits by ORing the result with itself twice:
AAAABBBBCCCCDDDD EEEEFFFFGGGGHHHH IIIIJJJJKKKKLLLL MMMMNNNNOOOOPPPP
You could extend this to 128 bits by adding an extra first step where you shift by 48 bits and mask with a 128-bit constant:
x = (x | (x << 48)) & 0x000000000000ffff000000000000ffffLL;
You'd also have to stretch the other constants out to 128 bits just by repeating the bit patterns. However (as far as I know) there is no way to declare a 128-bit constant in C++, but perhaps you could do it with macros or something (see this question). You could also make a 128-bit version just by using the 64-bit version on the top and bottom 16 bits separately.
If loading the masking constants turns out to be a difficulty or bottleneck you can generate each one from the previous one using shifting and masking:
uint64_t m = 0x000000ff000000ffLL;
m &= m >> 4; m |= m << 16; // gives 0x000f000f000f000fLL
m &= m >> 2; m |= m << 8; // gives 0x0303030303030303LL
m &= m >> 1; m |= m << 4; // gives 0x1111111111111111LL
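Wrapped up as a self-contained function (a sketch of exactly the steps shown above), with the questioner's 16-bit value as a check:
#include <cstdint>
#include <cassert>

// Stretch a 16-bit mask so that each input bit becomes 4 output bits.
uint64_t stretch16to64(uint64_t x) {
    x = (x | (x << 24)) & 0x000000ff000000ffULL;
    x = (x | (x << 12)) & 0x000f000f000f000fULL;
    x = (x | (x << 6))  & 0x0303030303030303ULL;
    x = (x | (x << 3))  & 0x1111111111111111ULL;
    x |= x << 1;  // smear each bit across its nibble
    x |= x << 2;
    return x;
}

int main() {
    assert(stretch16to64(0xCF00) == 0xFF00FFFF00000000ULL);
    return 0;
}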
Does this work for you?
#include <stdio.h>
long long Stretch4x(int input)
{
long long output = 0;
while (input & -input)
{
int b = (input & -input);
long long s = 0;
input &= ~b;
s = (long long)b * 15; // 15 = binary 1111: one stretched nibble
while(b>>=1)
{
s <<= 3;
}
output |= s;
}
return output;
}
int main(void) {
int input = 0xCF00;
printf("0x%0x ==> 0x%0llx\n", input, Stretch4x(input));
return 0;
}
Output:
0xcf00 ==> 0xff00ffff00000000
The other solutions are good. However, most of them are more C than C++. This solution is pretty straightforward: it uses std::bitset and sets four bits for each input bit.
#include <bitset>
#include <iostream>
std::bitset<128>
starch_32 (const std::bitset<32> &input)
{
std::bitset<128> output;
for (size_t i = 0; i < input.size(); ++i) {
// If `input[i]` is `true`, set `output[i*4 .. i*4+3]` to true.
if (input.test (i)) {
const size_t output_index = i * 4;
output.set (output_index);
output.set (output_index + 1);
output.set (output_index + 2);
output.set (output_index + 3);
}
}
return output;
}
// Example with 0xC.
int main() {
std::bitset<32> input{0b1100};
auto result = starch_32 (input);
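// Note: to_ullong() throws std::overflow_error if any bit above 63 is set.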
std::cout << "0x" << std::hex << result.to_ullong() << "\n";
}
On x86 you could use the PDEP intrinsic to move the 16 mask bits into the correct nibble (into the low bit of each nibble, for example) of a 64-bit word, and then use a couple of shift + or to smear them into the rest of the word:
uint64_t x = _pdep_u64(m, 0x1111111111111111ULL);
x |= x << 1;
x |= x << 2;
You could also replace those two OR and two shift by a single multiplication by 0xF which accomplishes the same smearing.
Finally, you could consider a SIMD approach: solutions such as samgak's above should map naturally to SIMD.
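A self-contained sketch of the PDEP-plus-multiply variant (this assumes a CPU with BMI2; compile with -mbmi2 on GCC/Clang):
#include <immintrin.h>  // _pdep_u64 (BMI2)
#include <cstdint>
#include <cstdio>

int main() {
    uint64_t m = 0xCF00;
    // Deposit each of the 16 mask bits into the low bit of its nibble...
    uint64_t x = _pdep_u64(m, 0x1111111111111111ULL);
    // ...then smear it across the nibble (same effect as the two shift+OR steps).
    x *= 0xF;
    printf("%016llx\n", (unsigned long long)x);  // prints ff00ffff00000000
    return 0;
}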
So this is an update to my last post, but I'm still having a lot of trouble understanding how this works. So I was given the main function:
void set_flag(int* flag_holder, int flag_position);
int check_flag(int flag_holder, int flag_position);
int main(int argc, char* argv[])
{
int flag_holder = 0;
int i;
set_flag(&flag_holder, 3);
set_flag(&flag_holder, 16);
set_flag(&flag_holder, 31);
for(i = 31; i >= 0; i--) {
printf("%d", check_flag(flag_holder, i));
if(i % 4 == 0)
printf(" ");
}
printf("\n");
return 0;
}
And for the assignment we are supposed to write the functions set_flag and check_flag, so that the output is equal to:
1000 0000 0000 0001 0000 0000 0000 1000
So from what I understand, we're supposed to use the "set_flag" function to make sure that the nth bit is 1. And the "check_flag" function returns an integer that is 0 when the nth bit is 0, and 1 when it is 1. I don't understand what "set_flag" is really doing, and how 3, 16, and 31 will be saved as "flags" which then come back as 1's in "check_flag".
When working with binary or hexadecimal values, a common approach is to define a mask that we will apply to a main value.
You can easily set one or more bits to '1' with the inclusive OR operator '|'
eg: we want to set the bit#0 to '1'
main value 01011000 |
mask 00000001 =
result 01011001
To test a particular bit you can use the AND operator '&'
eg: we want to test the bit#3
main value 01011000 &
mask 00001000 =
result 00001000
note: you may need to properly format the result; here the & operation will return either zero or non-zero (but not necessarily '1').
So here are the 2 functions set_flag and check_flag:
void set_flag(int* flag_holder, int flag_position) {
int mask = 1 << flag_position;
*flag_holder = *flag_holder | mask;
}
int check_flag(int flag_holder, int flag_position) {
int mask = 1 << flag_position;
int check = flag_holder & mask;
return (check !=0 ? 1 : 0);
}
In these scenarios we need a binary mask to set/check only one bit. The code "int mask = 1 << flag_position;" builds this single-bit mask: it sets bit #0 to '1', then shifts it left to the bit we want to set/check.
Function Set_flag and Function Check_flag:
void set_flag(int* flag_holder, int flag_position) {
int mask = 1 << flag_position;
*flag_holder = *flag_holder | mask;
}
int check_flag(int flag_holder, int flag_position) {
int mask = 1 << flag_position;
int check = flag_holder & mask;
return (check !=0 ? 1 : 0);
}
Run it along with the main program and you will get the desired output.
I need to fetch the last 6 bits of an integer or Uint32. For example, if I have a value of 183, I need the last six bits, which are 110111, i.e. 55.
I have written a small piece of code, but it's not behaving as expected. Could you guys please point out where I am making a mistake?
int compress8bitTolessBit( int value_to_compress, int no_of_bits_to_compress )
{
int ret = 0;
while(no_of_bits_to_compress--)
{
std::cout << " the value of bits "<< no_of_bits_to_compress << std::endl;
ret >>= 1;
ret |= ( value_to_compress%2 );
value_to_compress /= 2;
}
return ret;
}
int _tmain(int argc, _TCHAR* argv[])
{
int val = compress8bitTolessBit( 183, 5 );
std::cout <<" the value is "<< val << std::endl;
system("pause>nul");
return 0;
}
You have entered the realm of binary arithmetic. C++ has built-in operators for this kind of thing. The act of "getting certain bits" of an integer is done with an "AND" binary operator.
0101 0101
AND 0000 1111
---------
0000 0101
In C++ this is:
int n = 0x55 & 0xF;
// n = 0x5
So to get the right-most 6 bits,
int n = original_value & 0x3F;
And to get the right-most N bits,
int n = original_value & ((1 << N) - 1);
Here is more information on
Binary arithmetic operators in C++
Binary operators in general
I don't get the problem; can't you just use bitwise operators? E.g.
u32 trimmed = value & 0x3F;
This will keep just the 6 least significant bits by using the bitwise AND operator.
tl;dr:
int val = x & 0x3F;
int value = input & ((1 << no_of_bits_to_compress) - 1);
This calculates the n-th power of two, 1 << no_of_bits_to_compress, and subtracts 1 to get a mask with the low n bits set.
To get the last k bits of an integer A:
1. A % (1<<k); // simply A % 2^k
2. A - ((A>>k)<<k);
The first method uses the fact that the last k bits are exactly what is trimmed off after doing k right shifts (dividing by 2^k).
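A quick check of all three methods with the questioner's example value (183 is 10110111 in binary, whose last six bits are 110111 = 55):
#include <cassert>

int main() {
    int A = 183;
    assert((A & 0x3F) == 55);             // AND with a 6-bit mask
    assert((A % (1 << 6)) == 55);         // method 1
    assert((A - ((A >> 6) << 6)) == 55);  // method 2
    return 0;
}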
I was looking at an example of reading bits from a byte, and the implementation looked simple and easy to understand. I was wondering if anyone has a similarly easy-to-understand example of how to insert bits into a byte or byte array, like the example below.
Here is the example I found of reading bits from a byte:
static int GetBits3(byte b, int offset, int count)
{
return (b >> offset) & ((1 << count) - 1);
}
Here is what I'm trying to do. This is my current implementation; I'm just a little confused by the bit masking and shifting, so I'm trying to find out if there is an easier way to do what I'm doing:
BYTE Msg[2];
Msg_Id = 3;
Msg_Event = 1;
Msg_Ready = 2;
Msg[0] = ( ( Msg_Event << 4 ) & 0xF0 ) | ( Msg_Id & 0x0F ) ;
Msg[1] = Msg_Ready & 0x0F; //MsgReady & Unused
If you are using consecutive integer constant values like in the example above, you should shift the bits with these constants when putting them inside a byte. Otherwise they overlap: in your example, Msg_Id equals Msg_Event | Msg_Ready. These can be used like
Msg[0] = ( 1 << Msg_Event ) | ( 1 << Msg_Id); // sets the 2nd and 4th bits
(Note that bits within a byte are indexed from 0.) The other approach would be using powers of 2 as constant values:
Msg_Id = 4; // equals 1 << 2
Msg_Event = 1; // equals 1 << 0
Msg_Ready = 2; // equals 1 << 1
Note that in your code above, masking with 0x0F or 0xF0 is not really needed: (Msg_Id & 0x0F) == Msg_Id and ((Msg_Event << 4) & 0xF0) == (Msg_Event << 4).
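Incidentally, a write counterpart to the GetBits3 example from the question could look like this (a sketch: it clears the count target bits at offset, then ORs in the new value):
// Sketch: write the low count bits of value into b at offset.
unsigned char SetBits(unsigned char b, int offset, int count, int value) {
    unsigned char mask = (unsigned char)(((1 << count) - 1) << offset);
    return (unsigned char)((b & ~mask) | ((value << offset) & mask));
}
// e.g. Msg[0] from the question: SetBits(SetBits(0, 0, 4, Msg_Id), 4, 4, Msg_Event)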
You could use a bit field. For instance:
struct Msg
{
unsigned MsgEvent : 1; // 1 bit
unsigned MsgReady : 1; // 1 bit
};
You could then use a union to manipulate either the bit field or the byte, something like this:
struct MsgBitField {
unsigned MsgEvent : 1; // 1 bit
unsigned MsgReady : 1; // 1 bit
};
union ByteAsBitField {
unsigned char Byte;
MsgBitField Message;
};
int main() {
ByteAsBitField MyByte;
MyByte.Byte = 0;
MyByte.Message.MsgEvent = true;
}