Scanning a bit array for a pattern of multiple bits - C++

I am writing a memory allocator that is backed by a bitmap (an array of uint8_t). Currently, when an allocation request comes along, I scan the bitmap sequentially from bit 0 to bit n and search for a space that can fulfill the request (a 1 bit denotes a used page, a 0 bit denotes a free page). Instead of searching for a space one bit at a time, is there a technique to scan the whole array faster? For example, if a request for 3 pages of memory arrives, I would like to search for a 000 pattern in the array in one go, ideally without looping.
PS: I am not using std::bitset as it is not available for the compiler I am using. AFAIK it does not let me search for multiple bits either.
EDIT: Bits are packed into bytes; one uint8_t holds 8 pages (1 per bit).

To scan for one empty page, you could loop through the bit array one full byte at a time and check whether the byte is smaller than 255. If it is, there is at least one zero bit in it. Even better would be to scan 32 or 64 bits (an unsigned int) at a time, and then narrow the search inside that word.
To optimize a bit, you could keep track of the first byte with a zero bit (and update that position when freeing a page). This could give a false positive once you allocate that free page, but at least the next scan can start there instead of at the beginning.
A scan for multiple pages can be optimized if you're willing to align larger blocks on a power of 2 (depending on your data structures). For example, to allocate 8 pages, you would only scan for a whole byte being zero:
1 page: scan for any zero bit (up to 64 bits at a time)
2 pages: scan for 2 zero bits at bit positions 0, 2, 4, 6
3-4 pages: scan for a zero nibble (for 3 pages, the fourth bit would then still be available for a 1-page allocation)
5-8 pages: scan for an empty byte
For each of the above, you could first scan 64 bits at a time.
This way, you don't have to worry about (or check for) zero ranges that overlap byte/uint32/uint64 boundaries.
For each block size, a starting position with the first free block could be kept and updated.
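As a rough illustration of the 64-bits-at-a-time idea (my own sketch, not part of the answer above; it assumes the bitmap length is a multiple of 8 bytes and only handles the 8-page case, i.e. finding a completely free byte):
#include <cstddef>
#include <cstdint>
#include <cstring>

// Return the index of the first byte whose 8 pages are all free (all bits zero),
// scanning the bitmap one 64-bit word at a time. Returns -1 if none is found.
long find_free_byte(const std::uint8_t* bitmap, std::size_t bitmap_bytes)
{
    for (std::size_t i = 0; i < bitmap_bytes; i += 8)
    {
        std::uint64_t word;
        std::memcpy(&word, bitmap + i, sizeof(word));   // memcpy avoids alignment problems
        if (word == UINT64_MAX)
            continue;                                   // every page in these 8 bytes is used
        for (std::size_t j = 0; j < 8; ++j)             // narrow the search inside the word
            if (bitmap[i + j] == 0)
                return static_cast<long>(i + j);        // 8 consecutive free pages found
    }
    return -1;
}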

Not a full answer to your question (I suppose), but I hope the following function can help.
template <typename I>
bool scan_n_zeros (I iVal, std::size_t num)  // assumes num >= 1
{
   // Each pass ORs the value with itself shifted left by one bit; after num-1
   // passes, any bit that is still zero marks the end of a run of num zero bits.
   while ( --num )
      iVal |= ((iVal << 1) | I{1});
   return iVal != I(-1);
}
It returns (if I've written it correctly) true if (not where) there are at least num consecutive zero bits in iVal.
The following is a full working example where I is uint8_t:
#include <cstddef>
#include <cstdint>
#include <iostream>

template <typename I>
bool scan_n_zeros (I iVal, std::size_t num)
{
   while ( --num )
      iVal |= ((iVal << 1) | I{1});
   return iVal != I(-1);
}
int main()
{
   uint8_t u0 { 0b00100100 };
   uint8_t u1 { 0b00001111 };
   uint8_t u2 { 0b10000111 };
   uint8_t u3 { 0b11000011 };
   uint8_t u4 { 0b11100001 };
   uint8_t u5 { 0b11110000 };

   std::cout << scan_n_zeros(u0, 2U) << std::endl; // print 1
   std::cout << scan_n_zeros(u0, 3U) << std::endl; // print 0
   std::cout << scan_n_zeros(u1, 4U) << std::endl; // print 1
   std::cout << scan_n_zeros(u1, 5U) << std::endl; // print 0
   std::cout << scan_n_zeros(u2, 4U) << std::endl; // print 1
   std::cout << scan_n_zeros(u2, 5U) << std::endl; // print 0
   std::cout << scan_n_zeros(u3, 4U) << std::endl; // print 1
   std::cout << scan_n_zeros(u3, 5U) << std::endl; // print 0
   std::cout << scan_n_zeros(u4, 4U) << std::endl; // print 1
   std::cout << scan_n_zeros(u4, 5U) << std::endl; // print 0
   std::cout << scan_n_zeros(u5, 4U) << std::endl; // print 1
   std::cout << scan_n_zeros(u5, 5U) << std::endl; // print 0
}
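The function above tells you only whether a run of num zero bits exists, not where it starts. If you also need the position, one possible extension (my own sketch, not part of the original answer) is a plain bit-by-bit scan within the word:
// Return the lowest starting bit index of a run of num zero bits in iVal,
// or -1 if there is no such run (assumes num >= 1).
template <typename I>
int find_n_zeros (I iVal, std::size_t num)
{
   const int width = sizeof(I) * 8;
   std::size_t run = 0;                          // length of the current run of zeros
   for (int bit = 0; bit < width; ++bit)
   {
      if (iVal & (I{1} << bit))
         run = 0;                                // a set bit breaks the run
      else if (++run == num)
         return bit - static_cast<int>(num) + 1; // index where the run starts
   }
   return -1;
}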

Related

Why is UINT32_MAX + 1 = 0?

Consider the following code snippet:
#include <cstdint>
#include <limits>
#include <iostream>

int main(void)
{
    uint64_t a = UINT32_MAX;
    std::cout << "a: " << a << std::endl;
    ++a;
    std::cout << "a: " << a << std::endl;

    uint64_t b = (UINT32_MAX) + 1;
    std::cout << "b: " << b << std::endl;

    uint64_t c = std::numeric_limits<uint32_t>::max();
    std::cout << "c: " << c << std::endl;

    uint64_t d = std::numeric_limits<uint32_t>::max() + 1;
    std::cout << "d: " << d << std::endl;

    return 0;
}
Which gives the following output:
a: 4294967295
a: 4294967296
b: 0
c: 4294967295
d: 0
Why are b and d both 0? I cannot seem to find an explanation for this.
This behaviour is referred to as an overflow (more precisely, unsigned wrap-around). uint32_t takes up 4 bytes, or 32 bits, of memory. UINT32_MAX has each of those 32 bits set to 1, which is the maximum value 4 bytes of memory can represent. The literal 1 is an int, which typically also takes up 4 bytes of memory. So you are essentially adding 1 to the maximum value 4 bytes can represent. This is what the maximum value looks like in memory:
1111 1111 1111 1111 1111 1111 1111 1111
When you add one to this, there is no more room to represent a value greater than the maximum, so all bits wrap back to 0, the minimum value.
Although you are assigning to a uint64_t, which has twice the capacity of uint32_t, the assignment only happens after the addition is complete.
The addition looks at the types of both the left and the right operand, and that determines the type of the result. If at least one operand were of type uint64_t, the other operand would be converted to uint64_t as well.
If you do:
(UINT32_MAX) + (uint64_t)1;
or:
(uint64_t)(UINT32_MAX) + 1;
you'll get what you expect. In languages like C#, you can use a checked block to check for overflow and prevent this from happening implicitly.
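A minimal compilable sketch of that fix (my own example, not part of the answer): performing the addition in 64 bits avoids the wrap-around.
#include <cstdint>
#include <iostream>

int main()
{
    uint64_t b = static_cast<uint64_t>(UINT32_MAX) + 1;  // addition is done in 64 bits
    std::cout << "b: " << b << std::endl;                // prints 4294967296
    return 0;
}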

Binary representation, how do I get to the 'relevant bits'?

There are a couple of tutorials on Google, but most show how to print the binary representation of a number, and do so by printing all 16/32 bits.
My question is: how do I find the most significant bit that is 1, and then work with (not necessarily print) the bits from it downwards, itself included?
You can iterate the bits and check each bit's value:
unsigned int data = 0x4AC;
// i holds a single set bit that walks from position 0 upwards; c counts the positions
for (unsigned int i = 1, c = 0; c < sizeof(unsigned int) * 8; i <<= 1, ++c)
{
    if ((data & i) > 0)
    {
        std::cout << "bit " << i << " is 1" << std::endl;  // prints the value of the set bit
    }
}
bit 4 is 1
bit 8 is 1
bit 32 is 1
bit 128 is 1
bit 1024 is 1
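If you specifically want the index of the most significant set bit (which is what the question asks for), one simple way (my own sketch, not part of the answer above) is to shift the value right until it becomes zero:
#include <iostream>

// Returns the index of the most significant set bit, or -1 if value == 0.
int msb_index(unsigned int value)
{
    int index = -1;
    while (value != 0)
    {
        value >>= 1;   // drop one bit per step
        ++index;
    }
    return index;
}

int main()
{
    unsigned int data = 0x4AC;  // 0b10010101100
    std::cout << "most significant set bit: " << msb_index(data) << std::endl;  // prints 10
}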

Is it possible to split left and right channel from audio sample in SFML?

I am using const sf::Int16* samples = buffer.getSamples(); to collect all audio samples in an array. My sample.wav file is 16 bits per sample. From here I see that the first 8 bits correspond to the left channel and the latter ones to the right channel.
I'm currently accessing the samples using samples[i] but that returns an integer value. How should I proceed to be able to split the channels?
Each sample is a single 16-bit value; it isn't two 8-bit values.
The channels are interleaved.
This means - in a two-channel sound - that the first sample (16-bit value) is the left channel, the second is the right, the third is the left, the fourth is the right, and so on.
The following code splits a single stereo sound buffer (originalBuffer) into two separate mono sound buffers (leftBuffer and rightBuffer):
sf::SoundBuffer originalBuffer;
// set up original buffer here - assumed to be stereo for this example
const sf::Int16* originalSamples{ originalBuffer.getSamples() };
const sf::Uint64 sizeOfSingleChannel{ originalBuffer.getSampleCount() / 2u };

sf::Int16* leftSamples{ new sf::Int16[sizeOfSingleChannel] };
sf::Int16* rightSamples{ new sf::Int16[sizeOfSingleChannel] };
for (sf::Uint64 i{ 0u }; i < sizeOfSingleChannel; ++i)
{
    leftSamples[i] = originalSamples[i * 2u];        // even samples: left channel
    rightSamples[i] = originalSamples[i * 2u + 1u];  // odd samples: right channel
}

sf::SoundBuffer leftBuffer;
sf::SoundBuffer rightBuffer;
leftBuffer.loadFromSamples(leftSamples, sizeOfSingleChannel, 1u, originalBuffer.getSampleRate());
rightBuffer.loadFromSamples(rightSamples, sizeOfSingleChannel, 1u, originalBuffer.getSampleRate());
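One caveat worth adding (my own note, not from the original answer): as far as I know, loadFromSamples() copies the sample data into the buffer, so the temporary arrays can be released afterwards - or avoided entirely by using a std::vector<sf::Int16> instead of new[]:
// Assuming loadFromSamples() succeeded and copied the data, free the raw arrays:
delete[] leftSamples;
delete[] rightSamples;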
Sure, using bitwise operators it's easy:
#include <bitset>
#include <cstdint>
#include <iostream>

int main() {
    uint16_t number = 0xfacb;                                         // decimal: 64203
    uint8_t low_byte = static_cast<uint8_t>(number & 0xFF);           // low byte: 203
    uint8_t high_byte = static_cast<uint8_t>((number & ~0xFF) >> 8);  // high byte: 250
    std::cout << std::bitset<16>(number) << " " << std::bitset<8>(high_byte) << " " << std::bitset<8>(low_byte) << "\n";
    std::cout << number << " " << static_cast<unsigned short>(high_byte) << " " << static_cast<unsigned short>(low_byte) << "\n";
}
Remember to assign the result of the bitwise operation to an 8-bit numeric type.

Showing binary representation of floating point types in C++ [closed]

Consider the following code for integral types:
#include <bitset>
#include <iostream>
#include <string>

template <class T>
std::string as_binary_string( T value ) {
    return std::bitset<sizeof( T ) * 8>( value ).to_string();
}

int main() {
    unsigned char a(2);
    char b(4);
    unsigned short c(2);
    short d(4);
    unsigned int e(2);
    int f(4);
    unsigned long long g(2);
    long long h(4);

    std::cout << "a = " << +a << " " << as_binary_string( a ) << std::endl;
    std::cout << "b = " << +b << " " << as_binary_string( b ) << std::endl;
    std::cout << "c = " << c << " " << as_binary_string( c ) << std::endl;
    std::cout << "d = " << d << " " << as_binary_string( d ) << std::endl;
    std::cout << "e = " << e << " " << as_binary_string( e ) << std::endl;
    std::cout << "f = " << f << " " << as_binary_string( f ) << std::endl;
    std::cout << "g = " << g << " " << as_binary_string( g ) << std::endl;
    std::cout << "h = " << h << " " << as_binary_string( h ) << std::endl;

    std::cout << "\nPress any key and enter to quit.\n";
    char q;
    std::cin >> q;
    return 0;
}
Pretty straightforward; it works well and is quite simple.
EDIT
How would one go about writing a function to extract the binary or bit pattern of arbitrary floating point types at compile time?
When it comes to floats, I have not found anything similar in any existing library that I know of. I've searched Google for days looking for one, and then resorted to trying to write my own function, without success. I no longer have the attempted code available since I originally asked this question, so I cannot show you all the different attempted implementations along with their compiler/build errors. I was interested in generating the bit pattern for floats in a generic way at compile time and wanted to integrate that into my existing class that seamlessly does the same for any integral type. As for the floating types themselves, I have taken the different formats as well as architecture endianness into consideration. For my general purposes, the standard IEEE versions of the floating point types are all I should need to be concerned with.
iBug had suggested that I write my own function when I originally asked this question, which is what I was attempting to do. I understand binary numbers, memory sizes, and the mathematics, but putting it all together with how floating point types are stored in memory, with their different parts {sign bit, exponent, mantissa}, is where I was having the most trouble.
Since then, with the suggestions of those who have given a great answer and example, I was able to write a function that fits nicely into my already existing class template, and it now works for my intended purposes.
What about writing one by yourself?
#include <bitset>
#include <cstdint>
#include <cstring>
#include <string>

static_assert(sizeof(float) == sizeof(uint32_t), "float is assumed to be 32 bits");
static_assert(sizeof(double) == sizeof(uint64_t), "double is assumed to be 64 bits");

std::string as_binary_string( float value ) {
    std::uint32_t t;
    std::memcpy(&t, &value, sizeof(value));  // copy the raw bits without aliasing issues
    return std::bitset<sizeof(float) * 8>(t).to_string();
}

std::string as_binary_string( double value ) {
    std::uint64_t t;
    std::memcpy(&t, &value, sizeof(value));
    return std::bitset<sizeof(double) * 8>(t).to_string();
}
You may need to change the helper variable t in case the sizes for the floating point numbers are different.
You can alternatively copy the value bit by bit. This is slower, but it works for arbitrary types.
#include <bitset>
#include <climits>
#include <cstdint>
#include <cstring>
#include <string>

template <typename T>
std::string as_binary_string( T value )
{
    const std::size_t nbytes = sizeof(T), nbits = nbytes * CHAR_BIT;
    std::bitset<nbits> b;
    std::uint8_t buf[nbytes];
    std::memcpy(buf, &value, nbytes);
    for (std::size_t i = 0; i < nbytes; ++i)
    {
        std::uint8_t cur = buf[i];
        std::size_t offset = i * CHAR_BIT;
        for (int bit = 0; bit < CHAR_BIT; ++bit)
        {
            b[offset] = cur & 1;
            ++offset;   // Move to next bit in b
            cur >>= 1;  // Move to next bit in array
        }
    }
    return b.to_string();
}
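A quick usage check (my own addition; the printed pattern assumes IEEE-754 single precision on a little-endian machine, with <iostream> included):
std::cout << as_binary_string(1.0f) << std::endl;  // 00111111100000000000000000000000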
You said it doesn't need to be standard. So, here is what works in clang on my computer:
#include <iostream>
#include <algorithm>
using namespace std;

int main()
{
    char *result;
    result = new char[33];
    fill(result, result + 32, '0');
    float input;
    cin >> input;
    asm(
        "mov %0,%%eax\n"
        "mov %1,%%rbx\n"
        ".intel_syntax\n"
        "mov rcx,20h\n"
        "loop_begin:\n"
        "shr eax\n"
        "jnc loop_end\n"
        "inc byte ptr [rbx+rcx-1]\n"
        "loop_end:\n"
        "loop loop_begin\n"
        ".att_syntax\n"
        :
        : "m" (input), "m" (result)
    );
    cout << result << endl;
    delete[] result;
    return 0;
}
This code makes a bunch of assumptions about the computer architecture and I am not sure on how many computers it would work.
EDIT:
My computer is a 64-bit Mac-Air. This program basically works by allocating a 33-byte string and filling the first 32 bytes with '0' (the 33rd byte will automatically be '\0').
Then it uses inline assembly to store the float into a 32-bit register and then it repeatedly shifts it to the right by one bit.
If the last bit in the register was 1 before the shift, it gets stored into the carry flag.
The assembly code then checks the carry flag and, if it contains 1, it increases the corresponding byte in the string by 1.
Since it was previously initialized to '0', it will turn to '1'.
So, effectively, when the loop in the assembly is finished, the binary representation of a float is stored into a string.
This code only works for x64 (it uses 64-bit registers "rbx" and "rcx" to store the pointer and the counter for the loop), but I think it's easy to tweak it to work on other processors.
An IEEE double-precision floating point number looks like the following:
sign    exponent    mantissa
1 bit   11 bits     52 bits
Note that there's a hidden 1 before the mantissa, and the exponent is biased so 1023 = 0, not two's complement.
By memcpy()ing to a 64 bit unsigned integer you can then apply AND and OR masks to get the bit pattern. The arrangement could be big endian or little endian.
You can easily work out which arrangement you have by passing easy numbers such as 1 or 2.
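A sketch of that mask approach (my own example, assuming the usual IEEE-754 binary64 layout described above):
#include <cstdint>
#include <cstring>
#include <iostream>

int main()
{
    double value = 2.0;
    std::uint64_t bits;
    std::memcpy(&bits, &value, sizeof(bits));

    std::uint64_t sign     = bits >> 63;                 // 1 bit
    std::uint64_t exponent = (bits >> 52) & 0x7FF;       // 11 bits, biased by 1023
    std::uint64_t mantissa = bits & 0xFFFFFFFFFFFFFull;  // 52 bits, hidden leading 1 not stored

    std::cout << "sign: " << sign                        // 0
              << " exponent: " << exponent               // 1024, i.e. 2^(1024-1023)
              << " mantissa: " << mantissa << std::endl; // 0
}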
Generally people either use std::hexfloat or cast a pointer to the floating-point value to a pointer to an unsigned integer of the same size and print the indirected value in hex format. Both methods facilitate bit-level analysis of floating-point in a productive fashion.
You could roll your own by casting the address of the float/double to a char pointer and iterating over the bytes that way:
#include <iomanip>
#include <iostream>
#include <limits>
#include <memory>
#include <string>

template <typename T>
std::string getBits(T t) {
    std::string returnString{""};
    char *base{reinterpret_cast<char *>(std::addressof(t))};
    char *tail{base + sizeof(t) - 1};
    do {
        for (int bits = std::numeric_limits<unsigned char>::digits - 1; bits >= 0; bits--) {
            returnString += ( ((*tail) & (1 << bits)) ? '1' : '0');
        }
    } while (--tail >= base);
    return returnString;
}

int main() {
    float f{10.0};
    double d{100.0};
    double nd{-100.0};
    std::cout << std::setprecision(1);
    std::cout << getBits(f) << std::endl;
    std::cout << getBits(d) << std::endl;
    std::cout << getBits(nd) << std::endl;
}
Output on my machine (note the sign flip in the third output):
01000001001000000000000000000000
0100000001011001000000000000000000000000000000000000000000000000
1100000001011001000000000000000000000000000000000000000000000000

Converting hex String to structure

I've got a file containing a large string of hexadecimal. Here's the first few lines:
0000038f
0000111d
0000111d
03030303
//Goes on for a long time
I have a large struct that is intended to hold that data:
typedef struct
{
    unsigned int field1 : 5;
    unsigned int field2 : 11;
    unsigned int field3 : 16;
    //Goes on for a long time
} calibration;
What I want to do is read the above string and store it in the struct. I can assume the input is valid (it's verified before I get it).
I've already got a loop that reads the file and puts the whole item in a string:
std::string line = "";
std::string hexText = "";
while (std::getline(readFile, line))
{
    hexText += line;
}
//Convert string into calibration
//Convert string into long int
long int hexInt = strtol(hexText.c_str(), NULL, 16);
//Here I get stuck: How to get from long int to calibration...?
How to get from long int to calibration...?
Cameron's answer is good, and probably what you want.
I offer here another (maybe not so different) approach.
Note1: Your file input needs rework. I suggest:
a) use getline() to fetch one line at a time into a string;
b) convert that one entry to a uint32_t (I would use a stringstream instead of atol) - once you learn how to detect and recover from invalid input, you can work on combining a) and b) into one step;
c) then install the uint32_t in your structure, for which my offering below might provide insight.
Note2: I have worked many years with bit fields and have developed a distaste for them. I have never found them more convenient than the alternatives. The alternative I prefer is bit masks and field shifting. As far as we can tell from your problem statement, it appears your problem does not need bit fields (which Cameron's answer illustrates).
Note3: Not all compilers will pack these bit fields for you. The last compiler I used required what is called a "pragma". G++ 4.8 on Ubuntu seemed to pack the bytes just fine (i.e. no pragma needed): sizeof(calibration) for your original code is 4, i.e. packed. Another issue is that packing can unexpectedly change when you change options, upgrade the compiler, or change compilers. My team's work-around was to always assert against the struct size and a few byte offsets in the ctor.
Note4: I did not illustrate the use of a union to overlay a uint32_t array on your calibration struct. This may be preferred over the reinterpret_cast approach. Check your requirements, team lead, professor.
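As a small aside (my own sketch, not part of the original answer), the mask-and-shift alternative mentioned in Note2 might look like this for the first three fields, assuming the layout shown in the test output further below (field1 in the lowest 5 bits, field2 in the next 11, field3 in the top 16):
#include <cstdint>

// Hypothetical mask/shift accessors for the first 32-bit word of the dump.
inline uint32_t get_field1(uint32_t word) { return word & 0x1Fu; }          // bits 0..4
inline uint32_t get_field2(uint32_t word) { return (word >> 5) & 0x7FFu; }  // bits 5..15
inline uint32_t get_field3(uint32_t word) { return word >> 16; }            // bits 16..31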
Anyway, in the spirit of your original effort, consider the following additions to your struct calibration:
#include <cstdint>
#include <iomanip>
#include <iostream>

typedef struct
{
    uint32_t field1 : 5;
    uint32_t field2 : 11;
    uint32_t field3 : 16;
    //Goes on for a long time
    // I made up these next 2 fields for illustration
    uint32_t field4 : 8;
    uint32_t field5 : 24;
    // ... add more fields here

    // something typically done by ctor or used by ctor
    void clear() { field1 = 0; field2 = 0; field3 = 0; field4 = 0; field5 = 0; }

    void show123(const char* lbl = 0) {
        if (0 == lbl) lbl = " ";
        std::cout << std::setw(16) << lbl;
        std::cout << " " << std::setw(5) << std::hex << field3 << std::dec
                  << " " << std::setw(5) << std::hex << field2 << std::dec
                  << " " << std::setw(5) << std::hex << field1 << std::dec
                  << " 0x" << std::hex << std::setfill('0') << std::setw(8)
                  << *(reinterpret_cast<uint32_t*>(this))
                  << " => " << std::dec << std::setfill(' ')
                  << *(reinterpret_cast<uint32_t*>(this))
                  << std::endl;
    } // show

    // I did not create show456() ...

    // 1st uint32_t: set new val, return previous
    uint32_t set123(uint32_t nxtVal) {
        uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
        uint32_t prevVal = myVal[0];
        myVal[0] = nxtVal;
        return (prevVal);
    }

    // return current value of the combined field1, field2, field3
    uint32_t get123(void) {
        uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
        return (myVal[0]);
    }

    // 2nd uint32_t: set new val, return previous
    uint32_t set45(uint32_t nxtVal) {
        uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
        uint32_t prevVal = myVal[1];
        myVal[1] = nxtVal;
        return (prevVal);
    }

    // return current value of the combined field4, field5
    uint32_t get45(void) {
        uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
        return (myVal[1]);
    }

    // guess that the next 4 fields fill 32 bits
    uint32_t get6789(void) {
        uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
        return (myVal[2]);
    }

    // ... tedious expansion
} calibration;
Here is some test code to illustrate the use:
uint32_t t125()
{
    const char* lbl =
        "\n 16 bits 11 bits 5 bits hex => dec";

    calibration cal;
    cal.clear();
    std::cout << lbl << std::endl;
    cal.show123();

    cal.field1 = 1;
    cal.show123("field1 = 1");
    cal.clear();

    cal.field1 = 31;
    cal.show123("field1 = 31");
    cal.clear();

    cal.field2 = 1;
    cal.show123("field2 = 1");
    cal.clear();

    cal.field2 = (2047 & 0x07ff);
    cal.show123("field2 = 2047");
    cal.clear();

    cal.field3 = 1;
    cal.show123("field3 = 1");
    cal.clear();

    cal.field3 = (65535 & 0x0ffff);
    cal.show123("field3 = 65535");

    cal.set123(0xABCD6E17);
    cal.show123("set123(0x...)");

    cal.set123(0xffffffff);
    cal.show123("set123(0x...)");

    cal.set123(0x0);
    cal.show123("set123(0x...)");

    std::cout << "\n";
    cal.clear();
    std::cout << "get123(): " << cal.get123() << std::endl;
    std::cout << " get45(): " << cal.get45() << std::endl;

    // values from your file:
    cal.set123(0x0000038f);
    cal.set45(0x0000111d);

    std::cout << "get123(): " << "0x" << std::hex << std::setfill('0')
              << std::setw(8) << cal.get123() << std::endl;
    std::cout << " get45(): " << "0x" << std::hex << std::setfill('0')
              << std::setw(8) << cal.get45() << std::endl;

    // cal.set6789(0x03030303);
    // std::cout << "get6789(): " << cal.get6789() << std::endl;
    // ...

    return (0);
}
And the test code output:
16 bits 11 bits 5 bits hex => dec
0 0 0 0x00000000 => 0
field1 = 1 0 0 1 0x00000001 => 1
field1 = 31 0 0 1f 0x0000001f => 31
field2 = 1 0 1 0 0x00000020 => 32
field2 = 2047 0 7ff 0 0x0000ffe0 => 65,504
field3 = 1 1 0 0 0x00010000 => 65,536
field3 = 65535 ffff 0 0 0xffff0000 => 4,294,901,760
set123(0x...) abcd 370 17 0xabcd6e17 => 2,882,366,999
set123(0x...) ffff 7ff 1f 0xffffffff => 4,294,967,295
set123(0x...) 0 0 0 0x00000000 => 0
get123(): 0
get45(): 0
get123(): 0x0000038f
get45(): 0x0000111d
The goal of this code is to help you see how the bit fields map into the lsbyte through msbyte of the data.
If you care at all about efficiency, don't read the whole thing into a string and then convert it. Simply read one word at a time, and convert that. Your loop should look something like:
calibration c;
uint32_t* dest = reinterpret_cast<uint32_t*>(&c);
uint32_t* end = reinterpret_cast<uint32_t*>(&c + 1);  // one past the last 32-bit word
while (true) {
    char hexText[8];
    // TODO: Attempt to read 8 bytes from file and then skip whitespace
    // TODO: Break out of the loop on EOF
    std::uint32_t hexValue = 0; // TODO: Convert hex to dword

    // Assumes the structure padding & packing matches the dump version's
    // Assumes the structure size is exactly a multiple of 4 bytes (w/ padding)
    static_assert(sizeof(calibration) % 4 == 0, "calibration must be a whole number of 32-bit words");
    assert(dest < end && "Too much data");
    *dest++ = hexValue;
}
assert(dest == end && "Too little data");
Converting 8 chars of hex to an actual 4-byte int is a good exercise and is well-covered elsewhere, so I've left it out (along with the file reading, which is similarly well-covered).
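For completeness, one possible way to do that conversion (my own sketch using the standard library; it assumes the 8 characters have already been validated as hex, which the question states):
#include <cstdint>
#include <string>

// Convert 8 hex characters (e.g. "0000038f") to a 32-bit value.
std::uint32_t hex_to_dword(const std::string& text)
{
    return static_cast<std::uint32_t>(std::stoul(text, nullptr, 16));
}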
Note the two assumptions in the loop: the first one cannot be checked at run time or at compile time, and must either be agreed upon in advance or extra work has to be done to properly serialize the structure (handling structure packing and padding, etc.). The second can at least be checked at compile time with the static_assert.
Also, care has to be taken to ensure that the endianness of the hex bytes in the file matches the endianness of the architecture executing the program when converting the hex string. This will depend on whether the hex was written in a specific endianness in the first place (in which case you can convert it from the known endianness to the current architecture's endianness quite easily), or whether it's architecture-dependent (in which case you have no choice but to assume the endianness is the same as on your current architecture).