There are a couple of tutorials on Google, but most show how to print the binary representation of a number, and do so by printing all 16/32 bits.
My question is: how do you find the most significant bit that is 1, and how do you work with (not necessarily print) that bit and those after it, itself included?
You can iterate the bits and check each bit's value:
unsigned int data = 0x4AC;
for (auto i = 1u, c = 0u; c < sizeof(data) * 8; i <<= 1, ++c)
{
    if ((data & i) != 0)
    {
        std::cout << "bit " << i << " is 1" << std::endl;
    }
}
bit 4 is 1
bit 8 is 1
bit 32 is 1
bit 128 is 1
bit 1024 is 1
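If what you need is specifically the most significant set bit, here is a minimal sketch (not part of the answer above; in C++20 you could also get the index directly as std::bit_width(data) - 1 from <bit>) that first locates the highest 1 bit and then walks downward from it:

#include <cstdint>
#include <iostream>

int main()
{
    std::uint32_t data = 0x4AC;

    // Find the index of the highest set bit (assumes data != 0).
    int msb = 0;
    for (std::uint32_t v = data; v >>= 1; )
        ++msb;                                // msb == 10 for 0x4AC

    // Work with the MSB and every bit below it, itself included.
    for (int pos = msb; pos >= 0; --pos)
    {
        bool set = (data >> pos) & 1u;
        std::cout << "bit position " << pos << " is " << set << "\n";
    }
}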
Please, could somebody explain what's happening under the hood there?
The example runs on an Intel machine. Would the behavior be the same on other architectures?
Actually, I have a hardware counter which overruns every now and then, and I have to make sure that the intervals are always computed correctly. I thought that integer arithmetic should always do the trick, but when there is a sign change, binary subtraction yields an overflow bit which appears to be interpreted as the sign.
Do I really have to handle the sign by myself or is there a more elegant way to compute the interval regardless of the hardware or the implementation?
TIA
std::cout << "\nTest integer arithmetics\n";
int8_t iFirst = -2;
int8_t iSecond = 2;
int8_t iResult = iSecond - iFirst;
std::cout << "\n" << std::to_string(iSecond) << " - " << std::to_string(iFirst) << " = " << std::to_string(iResult);
iResult = iFirst - iSecond;
std::cout << "\n" << std::to_string(iFirst) << " - " << std::to_string(iSecond) << " = " << std::to_string(iResult);
iFirst = SCHAR_MIN + 1; iSecond = SCHAR_MAX - 2;
iResult = iSecond - iFirst;
std::cout << "\n" << std::to_string(iSecond) << " - " << std::to_string(iFirst) << " = " << std::to_string(iResult);
iResult = iFirst - iSecond;
std::cout << "\n" << std::to_string(iFirst) << " - " << std::to_string(iSecond) << " = " << std::to_string(iResult) << "\n\n";
And this is what I get:
Test integer arithmetics
2 - -2 = 4
-2 - 2 = -4
125 - -127 = -4
-127 - 125 = 4
What happens with iResult = iFirst - iSecond is that both variables iFirst and iSecond are first promoted to int due to the usual arithmetic conversions. The result is an int. That int result is then truncated to int8_t for the assignment (in effect, the top 24 bits of the 32-bit int are cut away).
The int result of -127 - 125 is -252. With two's complement representation that is 0xFFFFFF04. Truncation only leaves the 0x04 part, therefore iResult will be equal to 4.
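A small sketch (not from the original answer) that isolates just the truncation described above:

#include <cstdint>
#include <iostream>

int main()
{
    std::int8_t iFirst  = -127;
    std::int8_t iSecond =  125;

    int wide = iFirst - iSecond;                          // promoted to int: -252, i.e. 0xFFFFFF04
    std::int8_t narrow = static_cast<std::int8_t>(wide);  // only the low byte 0x04 survives

    std::cout << wide << " truncates to " << +narrow << "\n";  // prints: -252 truncates to 4
}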
The problem is that your variable is 8 bits wide. 8 bits can hold 256 distinct values, so your variables can only represent numbers within the -128..127 range. Any number outside that range gives the wrong output. Both of your last calculations produce numbers beyond the variable's range (252 and -252). There is no elegant (or even possible) way to handle it as it is; you can only handle the overflow bit yourself.
PS: This is not a hardware problem. Any processor would give the same results.
I want to create a custom data type which uses 5 bits. The max value is 20 (10100) and the min value is 0 (00000). I can't figure out how to accomplish that, so I decided to ask for your help...
And it should do arithmetic like:
note n = 15;
note x = 5;
std::cout << n + x << std::endl; //Should print 20
std::cout << n-x << std::endl; //Should print 10
Regards & thanks for your efforts!..
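One possible direction (a rough sketch, not a vetted design; the names and the clamp-to-20 behaviour are my own assumptions) is to wrap a 5-bit bit field in a small struct and give it the operators you need:

#include <cstdint>
#include <iostream>

struct note
{
    uint8_t value : 5;                                   // 5 bits of storage, 0..31

    note(unsigned v = 0) : value(v > 20 ? 20 : v) {}     // clamp to the stated 0..20 range

    friend note operator+(note a, note b) { return note(a.value + b.value); }
    friend note operator-(note a, note b) { return note(a.value >= b.value ? a.value - b.value : 0); }
    friend std::ostream& operator<<(std::ostream& os, note n) { return os << unsigned(n.value); }
};

int main()
{
    note n = 15;
    note x = 5;
    std::cout << n + x << std::endl;   // prints 20
    std::cout << n - x << std::endl;   // prints 10
}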
I am writing a memory allocator that is backed by a bitmap (an array of uint8_t). Currently, when an allocation request comes along, I scan the bitmap sequentially from bit 0 to bit n and search for a space that can fulfill the request (a 1 bit denotes a used page, a 0 bit denotes a free page). Now, instead of searching for a space one bit at a time, is there a technique to scan the whole array faster? I.e., if a request for 3 pages of memory arrives, I would like to search for a 000 pattern in the array in one go, ideally without looping.
PS: I am not using std::bitset as it is not available for the compiler I am using. AFAIK it does not let me search for multiple bits either.
EDIT: Bits are packed into bytes; one uint8_t has 8 pages (1 per bit) encoded in it.
To scan for one empty page, you could loop through the bit array one full byte at a time and check if it is smaller than 255. If it is smaller, there is at least one zero bit. Even better would be to scan 32 or 64 bits (unsigned ints) at a time, and then narrow the search inside the uint.
To optimize a bit, you could keep track of the first byte with a zero bit (and update that position when freeing a page). This could give a false positive once you allocate that free page, but at least the next time the scan can start there instead of at the beginning.
A scan for multiple pages could be optimized if you're willing to align larger blocks on a power of 2 (depending on your data structures). For example, to allocate 8 pages, you would only scan for a full byte being zero:
1 page: scan for any zero bit (up to 64 bits at a time)
2 pages: scan for 2 zero bits at bit position 0,2,4,6
3-4 pages: scan for a zero nibble (for 3 pages, the fourth bit would then still be available for a 1-page allocation)
5-8 pages: scan for an empty byte
For each of the above, you could first scan 64 bits at a time (a sketch of the empty-byte case follows below).
This way, you don't have to worry about (or check) overlapping zero ranges at byte/uint32/uint64 boundaries.
For each type a starting position with the first free block could be kept/updated.
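As a rough sketch of the "scan 64 bits at a time" idea for the 5-8 page case (my own code, not from this answer; it uses the well-known "has a zero byte" bit trick, and the function and parameter names are made up):

#include <cstdint>
#include <cstddef>
#include <cstring>

// Returns the index of the first byte whose 8 pages are all free (all bits
// zero), or nbytes if there is none. Assumes nbytes is a multiple of 8.
std::size_t find_free_byte(const std::uint8_t* bitmap, std::size_t nbytes)
{
    const std::uint64_t lo = 0x0101010101010101ULL;
    const std::uint64_t hi = 0x8080808080808080ULL;

    for (std::size_t i = 0; i < nbytes; i += 8)
    {
        std::uint64_t word;
        std::memcpy(&word, bitmap + i, sizeof word);       // avoids alignment/aliasing issues

        // This expression is non-zero exactly when at least one byte of word is 0x00.
        if (((word - lo) & ~word & hi) != 0)
        {
            for (std::size_t j = 0; j < 8; ++j)             // narrow down to the exact byte
                if (bitmap[i + j] == 0)
                    return i + j;
        }
    }
    return nbytes;                                          // no fully free byte found
}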
Not a full answer to your question (I suppose) but I hope that the following function can help.
template <typename I>
bool scan_n_zeros (I iVal, std::size_t num)
{
while ( --num )
iVal |= ((iVal << 1) | I{1});
return iVal != I(-1);
}
It returns (if I've written it correctly) true if (not where) there are at least num consecutive zero bits in iVal.
The following is a full working example where I is uint8_t:
#include <cstdint>
#include <iostream>
template <typename I>
bool scan_n_zeros (I iVal, std::size_t num)
{
while ( --num )
iVal |= ((iVal << 1) | I{1});
return iVal != I(-1);
}
int main()
{
uint8_t u0 { 0b00100100 };
uint8_t u1 { 0b00001111 };
uint8_t u2 { 0b10000111 };
uint8_t u3 { 0b11000011 };
uint8_t u4 { 0b11100001 };
uint8_t u5 { 0b11110000 };
std::cout << scan_n_zeros(u0, 2U) << std::endl; // print 1
std::cout << scan_n_zeros(u0, 3U) << std::endl; // print 0
std::cout << scan_n_zeros(u1, 4U) << std::endl; // print 1
std::cout << scan_n_zeros(u1, 5U) << std::endl; // print 0
std::cout << scan_n_zeros(u2, 4U) << std::endl; // print 1
std::cout << scan_n_zeros(u2, 5U) << std::endl; // print 0
std::cout << scan_n_zeros(u3, 4U) << std::endl; // print 1
std::cout << scan_n_zeros(u3, 5U) << std::endl; // print 0
std::cout << scan_n_zeros(u4, 4U) << std::endl; // print 1
std::cout << scan_n_zeros(u4, 5U) << std::endl; // print 0
std::cout << scan_n_zeros(u5, 4U) << std::endl; // print 1
std::cout << scan_n_zeros(u5, 5U) << std::endl; // print 0
}
I want to replicate the behaviour of a micro controller.
If the memory location of the program counter contains 0x26 then I want to check that the value in the next memory location is positive or negative.
If it is positive then I add it to the program counter PC, and if it is negative I also add it to the program counter PC, which essentially subtracts it.
I am using bit masking to do this but I am having issues determining a negative value.
{
if (value_in_mem && 128 == 128)
{
cout << "\nNext byte is : " << value_in_mem << endl;
cout << "\nNumber is positive!" << endl;
PC = PC + value_in_mem;
cout << "\n(Program Counter has been increased)" << endl;
}
else if (value_in_mem && 128 == 0)
{
cout << "\nNext byte is : - " << value_in_mem << endl;
cout << "\nNumber is negative!" << endl;
PC = PC + value_in_mem;
cout << "\n(Program Counter has been decreased)" << endl;
}
}
My method is to && the value_in_mem (an 8-bit signed int) with 128 (0b10000000) to determine if the most significant bit is 1 or 0, i.e. negative or positive respectively.
value_in_mem is an 8-bit value written in hexadecimal, and I think this is where my confusion lies. I'm not entirely sure how negative hexadecimal values work; could someone explain this and the errors in my attempt at the code?
1) You're using && which is a logical AND, but you should use & which is a bitwise AND. Note also that == binds tighter than &, so the masking needs parentheses:
// It would be better to use hex values when you're working with bits
if ( (value_in_mem & 0x80) == 0x80 )
{
// it's negative
}
else
{
// it's positive
}
2) You can simply compare your value to 0 (if value_in_mem is declared as a signed 8-bit type such as int8_t)
if ( value_in_mem < 0 )
{
// it's negative
}
else
{
// it's positive
}
Make sure you are using the correct types for your values (or cast them where it matters). If you prefer, for example, to keep memory values as unsigned bytes most of the time (I certainly would), then cast to a signed 8-bit integer only for the particular calculation/comparison, e.g. static_cast<int8_t>(value_in_mem).
To demonstrate the importance of correct typing, and how the C++ compiler then does all the dirty work for you so you don't have to bother with bits and can simply use if (x < 0):
#include <iostream>
int main()
{
{
uint16_t pc = 65530; int8_t b = 0xFF; pc += b;
std::cout << pc << "\n"; // unsigned 16 + signed 8
// 65529 (b works as -1, 65530 - 1 = 65529)
}
{
int16_t pc = 65530; int8_t b = 0xFF; pc += b;
std::cout << pc << "\n"; // signed 16 + signed 8
// -7 (b works as -1, 65530 as int16_t is -6, -6 + -1 = -7)
}
{
int16_t pc = 65530; uint8_t b = 0xFF; pc += b;
std::cout << pc << "\n"; // signed 16 + unsigned 8
// 249 (b works as +255, 65530 as int16_t is -6, -6 + 255 = 249)
}
{
uint16_t pc = 65530; uint8_t b = 0xFF; pc += b;
std::cout << pc << "\n"; // unsigned 16 + unsigned 8
// 249 (b = +255, 65530 + 255 = 65785 (0x100F9), truncated to 16 bits = 249)
}
}
I've got a file containing a large string of hexadecimal. Here's the first few lines:
0000038f
0000111d
0000111d
03030303
//Goes on for a long time
I have a large struct that is intended to hold that data:
typedef struct
{
unsigned int field1: 5;
unsigned int field2: 11;
unsigned int field3: 16;
//Goes on for a long time
}calibration;
What I want to do is read the above string and store it in the struct. I can assume the input is valid (it's verified before I get it).
I've already got a loop that reads the file and puts the whole item in a string:
std::string line = "";
std::string hexText = "";
while(std::getline(readFile, line))
{
hexText += line;
}
//Convert string into calibration
//Convert string into long int
long int hexInt = strtol(hexText.c_str(), NULL, 16);
//Here I get stuck: How to get from long int to calibration...?
How to get from long int to calibration...?
Cameron's answer is good, and probably what you want.
I offer here another (maybe not so different) approach.
Note1: Your file input needs re-work. I will suggest
a) use getline() to fetch one line at a time into a string
b) convert the one entry to a uint32_t (I would use stringstream instead of atol); a short sketch of a) and b) follows this note
once you learn how to detect and recover from invalid input,
you could then work on combining a) and b) into one step
c) then install the uint32_t in your structure, for which my
offering below might offer insight.
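A minimal sketch of (a) and (b), assuming an already-open std::ifstream named readFile as in the question (the function name and the vector return are my own choices):

#include <cstdint>
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

std::vector<uint32_t> readWords(std::ifstream& readFile)
{
    std::vector<uint32_t> words;
    std::string line;
    while (std::getline(readFile, line))        // (a) one line at a time
    {
        if (line.empty()) continue;
        std::stringstream ss(line);
        uint32_t value = 0;
        ss >> std::hex >> value;                // (b) hex text -> uint32_t
        if (ss) words.push_back(value);         // ignore lines that fail to parse
    }
    return words;
}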
Note2: I have worked many years with bit fields, and have developed a distaste for them.
I have never found them more convenient than the alternatives.
The alternative I prefer is bit masks and field shifting.
So far as we can tell from your problem statement, it appears your problem does not need bit-fields (which Cameron's answer illustrates).
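For illustration only, here is a mask-and-shift sketch for one field (none of this is from the original answer; the shift/width constants assume the 5/11/16-bit layout from the question, packed from the lsb upward):

#include <cstdint>

constexpr uint32_t FIELD2_SHIFT = 5;        // field2 sits above the 5 bits of field1
constexpr uint32_t FIELD2_MASK  = 0x7FFu;   // 11 bits wide

inline uint32_t getField2(uint32_t word)
{
    return (word >> FIELD2_SHIFT) & FIELD2_MASK;
}

inline uint32_t setField2(uint32_t word, uint32_t value)
{
    word &= ~(FIELD2_MASK << FIELD2_SHIFT);           // clear the old bits
    word |= (value & FIELD2_MASK) << FIELD2_SHIFT;    // install the new value
    return word;
}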
Note3: Not all compilers will pack these bit fields for you.
The last compiler I used required what is called a "pragma".
G++ 4.8 on Ubuntu seemed to pack the bytes just fine (i.e. no pragma was needed).
The sizeof(calibration) for your original code is 4 ... i.e. packed.
Another issue is that packing can unexpectedly change when you change options or upgrade the compiler or change the compiler.
My team's work-around was to always have an assert against struct size and a few byte offsets in the CTOR.
Note4: I did not illustrate the use of 'union' to align a uint32_t array over your calibration struct.
This may be preferred over the reinterpret cast approach. Check your requirements, team lead, professor.
Anyway, in the spirit of your original effort, consider the following additions to your struct calibration:
typedef struct
{
uint32_t field1 : 5;
uint32_t field2 : 11;
uint32_t field3 : 16;
//Goes on for a long time
// I made up these next 2 fields for illustration
uint32_t field4 : 8;
uint32_t field5 : 24;
// ... add more fields here
// something typically done by ctor or used by ctor
void clear() { field1 = 0; field2 = 0; field3 = 0; field4 = 0; field5 = 0; }
void show123(const char* lbl=0) {
if(0 == lbl) lbl = " ";
std::cout << std::setw(16) << lbl;
std::cout << " " << std::setw(5) << std::hex << field3 << std::dec
<< " " << std::setw(5) << std::hex << field2 << std::dec
<< " " << std::setw(5) << std::hex << field1 << std::dec
<< " 0x" << std::hex << std::setfill('0') << std::setw(8)
<< *(reinterpret_cast<uint32_t*>(this))
<< " => " << std::dec << std::setfill(' ')
<< *(reinterpret_cast<uint32_t*>(this))
<< std::endl;
} // show
// I did not create show456() ...
// 1st uint32_t: set new val, return previous
uint32_t set123(uint32_t nxtVal) {
uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
uint32_t prevVal = myVal[0];
myVal[0] = nxtVal;
return (prevVal);
}
// return current value of the combined field1, field2 field3
uint32_t get123(void) {
uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
return (myVal[0]);
}
// 2nd uint32_t: set new val, return previous
uint32_t set45(uint32_t nxtVal) {
uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
uint32_t prevVal = myVal[1];
myVal[1] = nxtVal;
return (prevVal);
}
// return current value of the combined field4, field5
uint32_t get45(void) {
uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
return (myVal[1]);
}
// guess that next 4 fields fill 32 bits
uint32_t get6789(void) {
uint32_t* myVal = reinterpret_cast<uint32_t*>(this);
return (myVal[2]);
}
// ... tedious expansion
} calibration;
Here is some test code to illustrate the use:
uint32_t t125()
{
const char* lbl =
"\n 16 bits 11 bits 5 bits hex => dec";
calibration cal;
cal.clear();
std::cout << lbl << std::endl;
cal.show123();
cal.field1 = 1;
cal.show123("field1 = 1");
cal.clear();
cal.field1 = 31;
cal.show123("field1 = 31");
cal.clear();
cal.field2 = 1;
cal.show123("field2 = 1");
cal.clear();
cal.field2 = (2047 & 0x07ff);
cal.show123("field2 = 2047");
cal.clear();
cal.field3 = 1;
cal.show123("field3 = 1");
cal.clear();
cal.field3 = (65535 & 0x0ffff);
cal.show123("field3 = 65535");
cal.set123 (0xABCD6E17);
cal.show123 ("set123(0x...)");
cal.set123 (0xffffffff);
cal.show123 ("set123(0x...)");
cal.set123 (0x0);
cal.show123 ("set123(0x...)");
std::cout << "\n";
cal.clear();
std::cout << "get123(): " << cal.get123() << std::endl;
std::cout << " get45(): " << cal.get45() << std::endl;
// values from your file:
cal.set123 (0x0000038f);
cal.set45 (0x0000111d);
std::cout << "get123(): " << "0x" << std::hex << std::setfill('0')
<< std::setw(8) << cal.get123() << std::endl;
std::cout << " get45(): " << "0x" << std::hex << std::setfill('0')
<< std::setw(8) << cal.get45() << std::endl;
// cal.set6789 (0x03030303);
// std::cout << "get6789(): " << cal.get6789() << std::endl;
// ...
return(0);
}
And the test code output:
16 bits 11 bits 5 bits hex => dec
0 0 0 0x00000000 => 0
field1 = 1 0 0 1 0x00000001 => 1
field1 = 31 0 0 1f 0x0000001f => 31
field2 = 1 0 1 0 0x00000020 => 32
field2 = 2047 0 7ff 0 0x0000ffe0 => 65,504
field3 = 1 1 0 0 0x00010000 => 65,536
field3 = 65535 ffff 0 0 0xffff0000 => 4,294,901,760
set123(0x...) abcd 370 17 0xabcd6e17 => 2,882,366,999
set123(0x...) ffff 7ff 1f 0xffffffff => 4,294,967,295
set123(0x...) 0 0 0 0x00000000 => 0
get123(): 0
get45(): 0
get123(): 0x0000038f
get45(): 0x0000111d
The goal of this code is to help you see how the bit fields map into the lsbyte through msbyte of the data.
If you care at all about efficiency, don't read the whole thing into a string and then convert it. Simply read one word at a time, and convert that. Your loop should look something like:
calibration c;
uint32_t* dest = reinterpret_cast<uint32_t*>(&c);
uint32_t* const end = dest + sizeof(calibration) / sizeof(uint32_t);
while (true) {
    char hexText[8];
    // TODO: Attempt to read 8 bytes from file and then skip whitespace
    // TODO: Break out of the loop on EOF
    std::uint32_t hexValue = 0; // TODO: Convert hex to dword
    // Assumes the structure padding & packing matches the dump version's
    // Assumes the structure size is exactly a multiple of 32 bits (w/ padding)
    static_assert(sizeof(calibration) % sizeof(uint32_t) == 0,
                  "calibration size must be a whole number of 32-bit words");
    assert(dest < end && "Too much data");
    *dest++ = hexValue;
}
assert(dest == end && "Too little data");
Converting 8 chars of hex to an actual 4-byte int is a good exercise and is well-covered elsewhere, so I've left it out (along with the file reading, which is similarly well-covered).
Note the two assumptions in the loop: the first one cannot be checked either at run-time or compile time, and must be either agreed upon in advance or extra work has to be done to properly serialize the structure (handling structure packing and padding, etc.). The last one can at least be checked at compile time with the static_assert.
Also, care has to be taken to ensure that the endianness of the hex bytes in the file matches the endianness of the architecture executing the program when converting the hex string. This will depend on whether the hex was written in a specific endianness in the first place (in which case you can convert it from the known endianness to the current architecture's endianness quite easily), or whether it's architecture-dependent (in which case you have no choice but to assume the endianness is the same as on your current architecture).
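For the easy case, here is a byte-swap sketch (my own helper, not from this answer; it assumes the file's 32-bit words are known to be in the opposite byte order from the host):

#include <cstdint>

inline std::uint32_t byteswap32(std::uint32_t v)
{
    return  (v >> 24) |                 // byte 3 -> byte 0
           ((v & 0x00FF0000u) >> 8) |   // byte 2 -> byte 1
           ((v & 0x0000FF00u) << 8) |   // byte 1 -> byte 2
            (v << 24);                  // byte 0 -> byte 3
}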