I am working with some C++ code, and as a novice I don't understand what this conditional statement means in terms of a true or false result.
This is what I have:
// Font contains values related to a bitmap font
for (j = 0; j < COUNT; j++) {
    for (i = 0; i < 8; i++) {
        Font[j][i] <<= 1;
        if ((j != COUNT - 1) && (Font[j + 1][i] & 0x80))
            Font[j][i] |= 0x01;
    }
}
I understand most of this, including the Boolean &&, but the single & confuses me: what is it doing here, and how does the lone 0x80 relate to (Font[j + 1][i] & 0x80)? 0x80 is 128 decimal...
The font is a 128 x 8 font; is that the relationship?
Can someone help me put this together so that I can understand how it is providing the condition?
I also need to know how |= 0x01 affects Font[j][i]. Is that a form of piping?
The generic format of the if statement is
if (conditional_expression) ...
The conditional_expression is any expression which yields a result. The result zero (0) is false, and anything non-zero is true.
If the conditional_expression is simple and without any kind of comparison, then it's implicitly compared to zero. For example
int my_variable = ...; // Actual value irrelevant for example
if (my_variable) { ... }
The if statement above is implicitly equal to
if (my_variable != 0) { ... }
This implicit comparison to zero is also done for compound conditions, for example
if (some_condition && my_variable) { ... }
is equal to
if (some_condition && my_variable != 0) { ... }
Now we get back to your code and the condition:
if((j != COUNT -1) && (Font[j + 1][i] & 0x80))
With the implicit comparison against zero, the above is equal to
if((j != COUNT -1) && (Font[j + 1][i] & 0x80) != 0)
That is, the right-hand side of the && checks whether Font[j + 1][i] & 0x80 is zero or not.
As for the & operator itself, it's the bitwise AND, and in essence it can be used to check whether a bit is set or not. In your code it checks whether the bit corresponding to the value 0x80 (the eighth bit, i.e. the most significant bit of a byte) is set. The fact that 0x80 is 128 decimal has nothing to do with the font having 128 characters; 0x80 simply selects the top bit of each byte.
My question:
I would like to change the Ascii (hex) in memory to a Decimal Value by shifting or any other way you know how?
I would like a function to assign the memory as follows:
From (Input):
Example Memory: 32 35 38 00 (Ascii 258)
Example Pointer: +var 0x0057b730 "258" const char *
To (Output)(The ANSWER I am looking for):
Example Memory: 02 01 00 00
Example Pointer: &var 0x0040f9c0 {258} int *
Example Int: var 258 int
The function below does NOT produce my answer above: it produces 600 decimal (which is 0x258 in hex) instead.
int Utility::AsciiToHex_4B(void* buff)
{
    int result = 0, i = 0;
    char cWork = 0;
    if (buff != NULL)
    {
        for (i = 0; i <= 3; i++)
        {
            cWork = *((BYTE*)buff + i);
            if (cWork != '\0')
            {
                if (cWork >= '0' && cWork <= '9')
                    cWork -= '0';
                else if (cWork >= 'a' && cWork <= 'f')
                    cWork = cWork - 'a' + 10;
                else if (cWork >= 'A' && cWork <= 'F')
                    cWork = cWork - 'A' + 10;
                result = (result << 4) | (cWork & 0xF);
            }
        }
        return result; // :) Good
    }
    return result; // :( Bad
}
I've seen a lot of answers and questions about changing Ascii To Int or Ascii To Hex or even Ascii to Decimal and none of them answer the question above.
Thanks for any help you may provide.
"I would like to change the Ascii (hex) in memory to a Decimal Value by shifting.."
No, shifting won't help you here.
"...or any other way you know how?"
Yes, as you say, there are questions already answering that.
(In short, you need to replace your shift operation with adding cWork times the correct power of ten (1, 10, 100) and get the digit order/endianness right. But just use an existing answer.)
First of all, to the computer decimal and hex make no difference: the number is stored in binary format anyway, and it is presented to the user as needed by different print functions. If I understood your problem correctly, this should simplify your life, since you only need to convert the C string to a number internally. You can then display the number in decimal or hex format as the client desires.
Normally, when I do those things by myself, I convert a string to a decimal variable working from the units up to the higher order numbers:
const char* str = "258";
uint8_t str_len = 3;
uint16_t num = 0;
uint16_t mult = 1; // decimal place value: 1, 10, 100 ... this is your "decimal shift" (you could use base 16 instead, but I find it more laborious)
for (int8_t i = str_len - 1; i >= 0; --i)
{
    uint16_t val = str[i] - '0'; // convert the digit character to its value
    num += val * mult;           // add the digit at its decimal place
    mult *= 10;                  // the next digit to the left is worth ten times more
}
Please take the above untested code just as a reference for a solution, it can be done in a much better and more compact way.
Once you have the number in binary format you can manipulate it. You can divide it by 16 (mind the remainders) to obtain a hexadecimal representation of the same quantity.
Finally, you can convert it back to a string as follows:
for (uint16_t i = str_len - 1; num > 0; num /= 10, --i)
{
    char n = num % 10 + '0'; // converts a decimal digit to its character; use base 16 for hex
    char_buffer[i] = n;
}
You could achieve a similar result with atoi and similar functions, which have a lot of side effects in case of a failed conversion. Left/right shifting won't help you much: that operation is like raising a number to a power of two (or taking the log2, for a right shift), with the exponent as large as the number of shifts. I.e. uint8_t n = 1 << 3 is like doing 2^3. And I don't think the pointer address is relevant for you.
Hope this suggestion can guide you forward
You pretty much got the correct algorithm. You can left shift with 4 or multiply with 16, same thing. But you need to do this for every byte except the last one. One way to fix that is to set result to 0, then each time in the loop, do the shift/multiply of the result before you add something new.
The parameter should be an array of char, const qualified since the function should not modify it.
Corrected code:
#include <stdint.h>
#include <stdio.h>

unsigned int AsciiToHex (const char* buf)
{
    unsigned int result = 0;
    for (int i = 0; i < 4 && buf[i] != '\0'; i++)
    {
        result *= 16;
        if (buf[i] >= '0' && buf[i] <= '9')
        {
            result += buf[i] - '0';
        }
        else if (buf[i] >= 'a' && buf[i] <= 'f')
        {
            result += buf[i] - 'a' + 0xA;
        }
        else if (buf[i] >= 'A' && buf[i] <= 'F')
        {
            result += buf[i] - 'A' + 0xA;
        }
        else
        {
            // error handling here
        }
    }
    return result;
}

int main (void)
{
    _Static_assert('Z' - 'A' == 25, "Crap/EBCDIC not supported.");
    printf("%.4X\n", AsciiToHex("1234"));
    printf("%.4X\n", AsciiToHex("007"));
    printf("%.4X\n", AsciiToHex("ABBA"));
    return 0;
}
Functions that could have been useful here are isxdigit and toupper from ctype.h; check them out.
Since the C standard does not, in theory, guarantee that letters are adjacent in the character set, I added a static assert to weed out crap systems.
I am trying to write a function that counts some bit flags while avoiding the use of branching or conditionals:
uint8_t count_descriptors(uint8_t n)
{
return
((n & 2) && !(n & 1)) +
((n & 4) && !(n & 1)) +
((n & 8) && !(n & 1)) +
((n & 16) && !(n & 1)) +
((n & 32) && 1 ) +
((n & 64) || (n & 128)) ;
}
Bit zero is not directly counted; bits 1-4 are only considered if bit 0 is not set, bit 5 is considered unconditionally, and bits 6-7 can only be counted once between them.
However, I understand that the boolean && and || use short-circuit evaluation. This means that their use creates a conditional branch, as you would see in such examples: if( ptr != nullptr && ptr->predicate()) that guarantees code in the second sub-expression is not executed if the result is short-circuit evaluated from the first sub-expression.
The first part of the question: do I need to do anything? Since these are purely arithmetic operations with no side-effects, will the compiler create conditional branches?
Second part: I understand that bitwise Boolean operators do not short-circuit, but the problem is that the bits do not line up. The result of masking the nth bit is either 2^n or zero.
What is the best way to make an expression such as (n & 16) evaluate to 1 or 0?
I assume that by "bits 6-7 can only be counted once" you mean only one of them is counted.
In this case something like this should work
uint8_t count_descriptors(uint8_t n)
{
    uint8_t notBit0 = !(n & 1); // 1 when bit 0 is clear, 0 when it is set
    uint8_t retVar =
        notBit0 * ((n >> 1) & 1) +
        notBit0 * ((n >> 2) & 1) +
        notBit0 * ((n >> 3) & 1) +
        notBit0 * ((n >> 4) & 1) +
        ((n >> 5) & 1) +
        (((n >> 6) | (n >> 7)) & 1); // 1 if bit 6 or bit 7 is set, counted once
    return retVar;
}
What is the best way to make an expression such as (n & 16) evaluate to 1 or 0?
By shifting it right the required number of bits: either (n>>4)&1 or (n&16)>>4.
I'd probably use a lookup table, either for all 256 values, or at least for the group of 4.
static const unsigned char nbits[16] = {0,1,1,2,1,2,2,3,1,2,2,3,2,3,3,4};
// count bits 1..4 iff bit 0 is 0, bit 5 always, and bit 6 or 7
return (!(n & 1) * nbits[(n >> 1) & 0xF]) + ((n >> 5) & 1) + (((n >> 6) | (n >> 7)) & 1);
I think the cleanest way to convert (n & 16) into 0 or 1 is to just use int(bool(n & 16)). The cast to int can be dropped if you are using them in an arithmetic expression (like bool(n & 2) + bool(n & 4)).
For your function of counting bits set I would recommend using the popcount intrinsic function, available as __builtin_popcount on gcc and __popcnt on MSVC. Below is my understanding of the function you described, changed to use popcount.
if (n & 1)
{
    // bit 0 set: clear out the low four bits (and anything > 255)
    n &= 0xF0;
}
else
{
    // clear out anything > 255; the bottom bit is already clear
    n &= 0xFF;
}
return __builtin_popcount(n); // intrinsic function to count the set bits in a number
This doesn't quite match the function you wrote, but hopefully from here you get the idea.
Using bitwise operators, how can I test whether the n least significant bits of an integer are either all set or all unset?
For example, if n = 3 I only care about the 3 least significant bits, so the test should return true for 0 and 7 and false for all other values between 0 and 7.
Of course I could write if (x == 0 || x == 7), but I would prefer something using bitwise operators.
Bonus points if the technique can be adapted to take into accounts all the bits defined by a mask.
Clarification :
If I wanted to test whether bit one or bit two is set, I could do if (((x & 1) != 0) || ((x & 2) != 0)). But I could do the "more efficient" if ((x & 3) != 0).
I'm trying to find a "hack" like this to answer the question "Are all bits of x that match this mask all set or all unset?"
The easy way is if ((x & mask) == 0 || (x & mask) == mask). I'd like to find a way to do this in a single test without the || operator.
"Using bitwise operators, how can I test whether the n least significant bits of an integer are either all set or all unset?"
To get a mask for the n least significant bits, that's
(1ULL << n) - 1
So the simple test is:
bool test_all_or_none(uint64_t val, uint64_t n)
{
    uint64_t mask = (1ULL << n) - 1;
    val &= mask;
    return val == mask || val == 0;
}
If you want to avoid the ||, we'll have to take advantage of integer overflow. For the cases we want, after the &, val is either 0 or (let's say n == 8) 0xff. So val - 1 is either 0xffffffffffffffff or 0xfe. The failure cases are 1 through 0xfe, which become 0 through 0xfd. Thus the success cases are the ones that end up at least 0xfe, which is mask - 1:
bool test_all_or_none(uint64_t val, uint64_t n)
{
    uint64_t mask = (1ULL << n) - 1;
    val &= mask;
    return (val - 1) >= (mask - 1);
}
We can also test by adding 1 instead of subtracting 1, which is probably the best solution (here, once we add one to val, (val + 1) & mask becomes either 0 or 1 for our success cases):
bool test_all_or_none(uint64_t val, uint64_t n)
{
    uint64_t mask = (1ULL << n) - 1;
    return ((val + 1) & mask) <= 1;
}
For an arbitrary mask, the subtraction method works for the same reason that it worked for the specific mask case: the 0 flips to be the largest possible value:
bool test_all_or_none(uint64_t val, uint64_t mask)
{
    return ((val & mask) - 1) >= (mask - 1);
}
How about?
int mask = (1 << n) - 1;
if ((x & mask) == mask || (x & mask) == 0) { /* do whatever */ }
The only really tricky part is the calculation of the mask. It basically just shifts a 1 over to get 0b0...0100...0 and then subtracts one to make it 0b0...0011...1.
Maybe you can clarify what you wanted for the test?
Here's what you wanted to do, in one function (untested, but you should get the idea). Returns 0 if the n last bits are not set, 1 if they are all set, -1 otherwise.
int lastBitsSet(int num, int n)
{
    int mask = (1 << n) - 1; // n ones
    if (!(num & mask)) // we got all 0s
        return 0;
    if (!(~num & mask)) // we got all 1s
        return 1;
    else
        return -1;
}
To test that all of them aren't set, you just need to mask in only the bits you want and then compare the result to zero.
The fun starts when you define the opposite function by just inverting the input :)
// Test if the n least significant bits aren't set:
char n_least_arent_set(unsigned int n, unsigned int value) {
    unsigned int mask = (1u << n) - 1; // e.g. (1 << 3) - 1 = 0b111
    unsigned int masked_value = value & mask;
    return masked_value == 0; // if all are zero, the masked value is all zeroes
}

// Test if the n least significant bits are set:
char n_least_are_set(unsigned int n, unsigned int value) {
    unsigned int rev_value = ~value;
    return n_least_arent_set(n, rev_value);
}
I am doing a bitwise & between two bit arrays, saving the result in old_array, and I want to get rid of the if/else statement. I should probably make use of the BIT_STATE macro, but how?
#define BYTE_POS(pos) (pos / CHAR_BIT)
#define BIT_POS(pos) (1 << (CHAR_BIT - 1 - (pos % CHAR_BIT)))
#define BIT_STATE(pos, state) (state << (CHAR_BIT - 1 - (pos % CHAR_BIT)))

if (((old_array[BYTE_POS(old_pos)] & BIT_POS(old_pos)) != 0) &&
    ((new_array[BYTE_POS(new_pos)] & BIT_POS(new_pos)) != 0))
{
    old_array[BYTE_POS(old_pos)] |= BIT_POS(old_pos);
}
else
{
    old_array[BYTE_POS(old_pos)] &= ~(BIT_POS(old_pos));
}
You can always calculate both results and then combine it. The biggest problem is to compute a fitting bitmask.
E.g.
const uint32_t a = 41, b = 8; // condition is assumed to hold 0 or 1
const uint32_t mask[2] = { 0, 0xffffffff };
const uint32_t result = (a & mask[condition])
                      | (b & mask[!condition]);
or to avoid the unary not
const uint32_t mask_a[2] = { 0, 0xffffffff },
               mask_b[2] = { mask_a[1], mask_a[0] };
const uint32_t result = (a & mask_a[condition])
                      | (b & mask_b[condition]);
However: when doing bitwise manipulation, always be careful with the number of bits involved. One way to be careful is to use fixed-size types like uint32_t, which may or may not be defined on your platform (but if not, the good thing is you get a compile error), or to use templates carefully. Other types, including char, int and even bool, can have any size beyond some defined minimum.
Yes, such code looks somewhat ugly.
I don't think BIT_STATE is useful here (state MUST be 0 or 1 for it to work as expected).
I see the following approaches to get rid of it:
a) Use C++ bitfields
For example
http://en.wikipedia.org/wiki/Bit_field
b)
"Hide" that code in a class/method/function
c)
I think this is equivalent to your code
if ((new_array[BYTE_POS(new_pos)] & BIT_POS(new_pos)) == 0)
{
    old_array[BYTE_POS(old_pos)] &= ~(BIT_POS(old_pos));
}
or as a one-liner
old_array[BYTE_POS(old_pos)] &=
    ~((new_array[BYTE_POS(new_pos)] & BIT_POS(new_pos)) ? 0 : BIT_POS(old_pos));
Take the expression
(new_array[BYTE_POS(new_pos)] & BIT_POS(new_pos))
which is either 0 or has a 1 in bit BIT_POS(new_pos), and shift it until that bit, if set, is in BIT_POS(old_pos):
(new_array[BYTE_POS(new_pos)] & BIT_POS(new_pos)) << (old_pos - new_pos)
Now AND old_array[BYTE_POS(old_pos)] with that result, ORed with a mask that leaves all the other bits alone:
old_array[BYTE_POS(old_pos)] &= ~BIT_POS(old_pos) |
    ((new_array[BYTE_POS(new_pos)] & BIT_POS(new_pos)) << (old_pos - new_pos));
The only trick is that shifting by a negative amount is not allowed, so if you already know whether old_pos is greater or less than new_pos, you can substitute >> (new_pos - old_pos) when appropriate.
I've not tried this out. I may have << and >> swapped.
I am using an unsigned char to store 8 flags. Each flag represents a corner of a cube, so 00000001 is corner 1, 01000100 is corners 3 and 7, etc. My current solution is to & the result with 1, 2, 4, 8, 16, 32, 64 and 128, check whether the result is non-zero, and store the corner. That is, if (result & 1) corners.push_back(1);. Any chance I can get rid of that 'if' statement? I was hoping I could do it with bitwise operators, but I could not think of a way.
A little background on why I want to get rid of the if statement. This cube is actually a voxel, part of a grid that is at least 512x512x512 in size. That is more than 134 million voxels, and I am performing calculations on each one of them (well, not exactly, but I won't go into detail as it is irrelevant here), which is a lot of calculations. And I need to perform these calculations per frame, so any speed boost that is minuscule per function call will help. To give you an idea: at some point my algorithm needed to determine whether a float was negative, positive or zero (within some error). I had if statements and greater/smaller-than checks in there; I replaced them with a fast float-to-int function and shaved off a quarter of a second. Currently, each frame in a 128x128x128 grid takes a little more than 4 seconds.
I would consider a different approach to it entirely: there are only 256 possibilities for different combinations of flags. Precalculate 256 vectors and index into them as needed.
std::vector<std::vector<int> > corners(256);

for (int i = 0; i < 256; ++i) {
    std::vector<int>& v = corners[i];
    if (i & 1) v.push_back(1);
    if (i & 2) v.push_back(2);
    if (i & 4) v.push_back(4);
    if (i & 8) v.push_back(8);
    if (i & 16) v.push_back(16);
    if (i & 32) v.push_back(32);
    if (i & 64) v.push_back(64);
    if (i & 128) v.push_back(128);
}

for (int i = 0; i < NumVoxels(); ++i) {
    unsigned char flags = GetFlags(i);
    const std::vector<int>& v = corners[flags];
    ... // do whatever with v
}
This would avoid all the conditionals, as well as having push_back call operator new, which I suspect would be more expensive anyway.
If there's some operation that needs to be done if the bit is set and not if it's not, it seems you'll have to have a conditional of some kind somewhere. If it could be expressed as a calculation somehow, you could get around it like this, for example:
numCorners = ((result >> 0) & 1) + ((result >> 1) & 1) + ((result >> 2) & 1) + ...
Hacker's Delight, first page:
x & (-x)    // isolates the lowest set bit
x & (x - 1) // clears the lowest set bit
Inlining your push_back method would also help (better create a function that receives all the flags together).
Usually if you need performance, you should design the whole system with that in mind. Maybe if you post more code it will be easier to help.
EDIT: here is a nice idea:
unsigned char LOG2_LUT[256] = {...};
int t;
switch (count_set_bits(flags)) { // fall-through is intentional: a count of k does k pushes
case 8: t = flags;
        flags &= (flags - 1); // clearing a bit that was set
        t ^= flags;           // getting the changed bit
        corners.push_back(LOG2_LUT[t]);
case 7: t = flags;
        flags &= (flags - 1);
        t ^= flags;
        corners.push_back(LOG2_LUT[t]);
case 6: t = flags;
        flags &= (flags - 1);
        t ^= flags;
        corners.push_back(LOG2_LUT[t]);
// etc...
};
count_set_bits() is a very known function: http://www-graphics.stanford.edu/~seander/bithacks.html#CountBitsSetTable
There is a way, it's not "pretty", but it works.
(result & 1) && corners.push_back(1);
(result & 2) && corners.push_back(2);
(result & 4) && corners.push_back(3);
(result & 8) && corners.push_back(4);
(result & 16) && corners.push_back(5);
(result & 32) && corners.push_back(6);
(result & 64) && corners.push_back(7);
(result & 128) && corners.push_back(8);
It uses a little-known feature of the C++ language: Boolean short-circuit evaluation.
I've noticed a similar algorithm in the OpenTTD code. It turned out to be utterly useless: you're better off not breaking down numbers like that. Instead, replace the iteration over the vector<> you have now with an iteration over the bits of the byte. This is far more cache-friendly.
I.e.
unsigned char flags = Foo(); // the value you didn't put in a vector<>

for (unsigned char c = (UCHAR_MAX >> 1) + 1; c != 0; c >>= 1)
{
    if (flags & c)
        Bar(flags & c);
}