Explain this code regarding AVR port setup - avr-gcc

What does the following do?
PORTB = (PORTB & ~0xFC) | (b & 0xFC);
PORTD = (PORTD & ~0x30) | ((b << 4) & 0x30);
AFAIK, the 0xFC is a hex value. Is that basically saying 11111100, hence PORTD0-PORTD1 are outputs but the rest are inputs?
What would a full explanation of that code be?

PORTB = (PORTB & ~0xfc) | (b & 0xfc);
Breaking it down:
PORTB = PORTB & ~0xFC
0xFC = 1111 1100
~0xFC = 0000 0011
PORTB = PORTB & 0000 0011
Selects the lower two bits of PORTB.
b & 0xFC
0xFC = 1111 1100
Selects the upper 6 bits of b.
ORing them together, PORTB will contain the upper six bits of b and the lower two bits of PORTB.
PORTD = (PORTD & ~0x30) | ((b << 4) & 0x30);
Breaking it down:
PORTD = PORTD & ~0x30
0x30 = 0011 0000
~0x30 = 1100 1111
PORTD = PORTD & 11001111
Selects all but the 4th and 5th (counting from 0) bits of PORTD
(b << 4) & 0x30
Consider b as a field of bits:
b      = b7 b6 b5 b4 b3 b2 b1 b0
b << 4 = b3 b2 b1 b0  0  0  0  0
0x30   = 0011 0000
(b << 4) & 0x30 = 0 0 b1 b0 0 0 0 0
ORing the two pieces together, PORTD will contain the 0th and 1st bits of b in its 4th and 5th bits and the original values of PORTD in the rest.

The first line sets the state of the port's PB7-PB2 lines. The current state of PORTB is first masked with ~0xFC = 0x03, so all bits except 0 and 1 are cleared.
The second step masks b with 0xFC, so bits 0 and 1 are always 0. The two values are then ORed together. Effectively, it sets PB7-PB2 from b[7]..b[2] while keeping the current state of PB1 and PB0 untouched.
Note that the PORTB register bits serve different purposes depending on the pin direction configured via the DDRB register. For output pins, a PORTB bit simply controls the pin state. For input pins, it controls the pin's pull-up resistor. You have to enable this pull-up resistor if, for example, you have a push button connected between the pin and ground; that way the input pin is not floating when the switch is open.
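To make the direction/pull-up interplay concrete, here is a minimal sketch for an ATmega-style part (the pin choices are illustrative, not from the question):
#include <avr/io.h>

void setup_portb_example(void)
{
    DDRB |= 0xFC;    /* PB7..PB2 configured as outputs */
    DDRB &= ~0x01;   /* PB0 configured as an input */
    PORTB |= 0x01;   /* writing 1 to an input pin enables its pull-up */
}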

Related

Merge two bitmask with conflict resolving, with some required distance between any two set bits

I have two integer values:
d_a = 6 and d_b = 3, the so-called distances between set bits.
Masks created with appropriate distance look like below:
uint64_t a = 0x1041041041041041; // 0001 0000 0100 0001 0000 0100 0001 0000
// 0100 0001 0000 0100 0001 0000 0100 0001
uint64_t b = 0x9249249249249249; // 1001 0010 0100 1001 0010 0100 1001 0010
// 0100 1001 0010 0100 1001 0010 0100 1001
The goal is to have a target mask whose bits are set with distance d_b, but which simultaneously takes into account the bits set in the a mask (e.g. the first set bit is shifted).
The second thing is that the distance in the target mask is not constant, i.e. the number of zeros between set bits in the target mask shall correspond to d_b, increased whenever a set bit of a falls between them.
uint64_t target = 0x4488912224488912; // 0100 0100 1000 1000 1001 0001 0010 0010
// 0010 0100 0100 1000 1000 1001 0001 0010
(A picture in the original post visualizes the problem: the blue bar is a, the yellow bar is b.)
I would rather use bit manipulation intrinsics than bit-by-bit operations.
Edit: I currently have the following code, but I am looking for a solution with fewer instructions.
#include <cstdint>
#include <limits>

void set_target_mask(int32_t d_a, int32_t d_b, int32_t n_bits_to_set, uint8_t* target)
{
    constexpr int32_t n_bit_byte = std::numeric_limits<uint8_t>::digits;
    int32_t null_cnt = -1;
    int32_t n_set_bit = 0;
    int32_t pos = 0;
    while (n_set_bit != n_bits_to_set)
    {
        int32_t byte_idx = pos / n_bit_byte;
        int32_t bit_idx = pos % n_bit_byte;
        if (pos % d_a == 0)   // position taken by an `a` bit: skip it
        {
            pos++;
            continue;
        }
        null_cnt++;
        if (null_cnt % d_b == 0)   // every d_b-th remaining position gets a bit
        {
            target[byte_idx] |= 1 << bit_idx;
            n_set_bit++;
        }
        pos++;
    }
}
If target is uint64_t, the possible d_a and d_b values can be converted into bit masks via a look-up table, like lut[6] == 0x2604D5C99A01041 from your question.
Look-up tables can be initialized once per program run during initialization, or at compile time using macros or constant expressions (constexpr).
To make the d_b bits spread while skipping the d_a bits, you can use pdep with the inverted d_a mask:
uint64_t tmp = _pdep_u64(d_b_bits, ~d_a_bits);
Then you can convert n_bits_to_set into a contiguous bit mask (note the 1ull so the shift happens in 64 bits):
uint64_t n_bits = (1ull << n_bits_to_set) - 1;
And spread those bits using pdep again:
uint64_t result = _pdep_u64(n_bits, tmp);
(See the Intel Intrinsics Guide about pdep. Note that pdep is slow on AMD before Zen 3: it is fast on Intel CPUs and Zen 3, but not on Bulldozer-family or Zen 1/Zen 2.)
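Putting the steps together, a minimal self-contained sketch (the function name make_target is mine; requires a BMI2-capable CPU and compiling with -mbmi2):
#include <immintrin.h>
#include <cstdint>
#include <cstdio>

// Spread b's bit pattern over the positions not occupied by a,
// then keep only the lowest n_bits_to_set of the resulting set bits.
uint64_t make_target(uint64_t a_mask, uint64_t b_mask, int n_bits_to_set)
{
    uint64_t spread = _pdep_u64(b_mask, ~a_mask);
    uint64_t lowest = (n_bits_to_set >= 64) ? ~0ull
                                            : (1ull << n_bits_to_set) - 1;
    return _pdep_u64(lowest, spread);
}

int main()
{
    uint64_t a = 0x1041041041041041; // d_a == 6
    uint64_t b = 0x9249249249249249; // d_b == 3
    // Prints 0x4488912224488912, the target mask from the question.
    std::printf("0x%016llx\n", (unsigned long long)make_target(a, b, 18));
}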

How does ios::fmtflags work in C++? How does setf() work?

I am trying to understand the format flags of an ios stream. Can anyone please explain how this cout.setf(ios::hex | ios::showbase) thing works? I mean, how does the or (|) operator work between the two ios format flags?
Please pardon my bad English.
std::ios_base::hex and std::ios_base::showbase are both enumerators of the BitmaskType std::ios_base::fmtflags. A BitmaskType is typically an enumeration type whose enumerators are distinct powers of two, kinda like this (1 << n means 2^n):
// simplified; can also be implemented with integral types, std::bitset, etc.
enum fmtflags : unsigned {
    dec      = 1 << 0, // 1
    oct      = 1 << 1, // 2
    hex      = 1 << 2, // 4
    // ...
    showbase = 1 << 9, // 512
    // ...
};
The | operator is the bit-or operator, which performs the or operation on the corresponding bits, so
hex             0000 0000 0000 0100
showbase        0000 0010 0000 0000
                -------------------
hex | showbase  0000 0010 0000 0100
This technique can be used to combine flags together, so every bit in the bitmask represents a separate flag (set or unset). Then, each flag can be
queried: mask & flag;
set: mask | flag;
unset: mask & (~flag).
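A short usage sketch (standard API; note that setf has a two-argument overload that first clears a group of flags, which is needed for the mutually exclusive basefield flags dec/oct/hex):
#include <iostream>

int main()
{
    // Clear the basefield group, then set hex within it.
    std::cout.setf(std::ios::hex, std::ios::basefield);
    std::cout.setf(std::ios::showbase);  // OR showbase into the mask
    std::cout << 255 << '\n';            // prints 0xff

    // Query a flag: flags() returns the current fmtflags bitmask.
    if (std::cout.flags() & std::ios::showbase)
        std::cout << "showbase is set\n";
}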

Unitary number for “&” bitwise operator in c++ [closed]

I have a question; I would appreciate it if you helped me to understand it. Imagine I define the following number
c = 0x3FFFFFFF
and a = an arbitrary integer number Q. My question is: why is a &= c always equal to Q, and why does it not change? For example, if I take a = 10, then the result of a &= c is 10; if a = 256, the result of a &= c is 256. Could you please explain why? Thanks a lot.
Both a and c are integer types composed of 32 bits in a computer. The first bit of an integer is the sign bit: 0 for a positive number, 1 for a negative number. 0x3FFFFFFF is a special value: its first two bits are 0 and all the other bits are 1. Since 1 & 1 = 1 and 1 & 0 = 0, when a is positive and less than c, a & 0x3FFFFFFF is still a itself.
a &= c is the same as a = a & c, which calculates the bitwise AND of a and c and then assigns that value to a again - just in case you've mistaken what that operator does.
Now c contains almost only 1's. Then just think about what each bit becomes: 1 & x is always x. Since you only tried such low numbers, none of them changed.
Try with, for example, c = 0xfffffff0 and you will get a different result.
You have not tested a &= c; with all possible values of a, so it is incorrect to assert that it never changes the value of a.
a &= c; sets a to a value in which each bit is set if the two bits in the same position in a and in c are both set. If the two bits are not both set, the bit in the result is clear.
In 0x3FFFFFFF, the 30 least significant bits are set. When this is used in a &= c; with any number in which higher bits are set, such as 0xC0000000, the higher bits will be cleared.
If you know about bitwise & ("and") operation and how it works, then there should be no question about this. Say, you have two numbers a and b. Each of them are n-bits long. Look,
a => a_(n-1) a_(n-2) a_(n-3) ... a_i ... a_2 a_1 a_0
b => b_(n-1) b_(n-2) b_(n-3) ... b_i ... b_2 b_1 b_0
Where a_0 and b_0 are the least significant bits and a_(n-1) and b_(n-1) are the most significant bits of a and b respectively.
Now, take a look at the & operation on two single binary bits.
1 & 1 = 1
1 & 0 = 0
0 & 1 = 0
0 & 0 = 0
So, the result of the & operation is 1 only when both bits are 1. If at least one bit is 0, then the result is 0.
Now, for n-bits long number,
(a & b)_i = a_i & b_i; where `i` is from 0 to `n-1`
For example, if a and b both are 4 bits long numbers and a = 5, b = 12, then
a = 5 => a = 0101
b = 12 => b = 1100
if c = (a & b), then c_i = (a_i & b_i) for i = 0..3; here all numbers are 4 bits (0..3)
now, c = c_3 c_2 c_1 c_0
so c_3 = a_3 & b_3
c_2 = a_2 & b_2
c_1 = a_1 & b_1
c_0 = a_0 & b_0
a 0 1 0 1
b 1 1 0 0
-------------
c 0 1 0 0 (that means c = 4)
therefore, c = a & b = 5 & 12 = 4
Now, what would happen, if all of the bits in one number are 1s?
Let's see.
0 & 1 = 0
1 & 1 = 1
so if one bit is fixed at 1, then the result is the same as the other bit.
if a = 5 (0101) and b = 15 (1111), then
a 0 1 0 1 (5)
b 1 1 1 1 (15)
------------------
c 0 1 0 1 (5, which is equal to a=5)
So, if one of the numbers has all of its bits set to 1, the & result is the same as the other number. In fact, for any 4-bit value of a you will get a back as the result, since b is 4 bits long and all 4 of its bits are 1.
Now another issue appears when a > 15, i.e., when a exceeds 4 bits.
For the above example, expand the bit size by 1 and change a to 25.
a = 25 (11001) and b = 15 (01111). b is the same as before except for the added size, so its most significant bit (MSB) is 0. Now,
a 1 1 0 0 1 (25)
b 0 1 1 1 1 (15)
----------------------
c 0 1 0 0 1 (9, not equal to a=25)
So, it is clear that we have to keep every single bit to 1 if we want to get the other number as the result of the & operation.
Now it is time to analyze the scenario you posted.
Here, a &= c is the same as a = a & c.
We assumed that you are using 32-bit integer variables.
You set c = 0x3FFFFFFF, which means c = (2^30) - 1, or c = 1073741823.
a = 0000 0000 0000 0000 0000 0000 0000 1010 (10)
& c = 0011 1111 1111 1111 1111 1111 1111 1111 (1073741823)
----------------------------------------------------------------
a = 0000 0000 0000 0000 0000 0000 0000 1010 (10, which is equal to a=10)
and
a = 0000 0000 0000 0000 0000 0001 0000 0000 (256)
& c = 0011 1111 1111 1111 1111 1111 1111 1111 (1073741823)
----------------------------------------------------------------
a = 0000 0000 0000 0000 0000 0001 0000 0000 (256, which is equal to a=256)
but, if a > c, say a = 0x40000000 (1073741824, c+1 in base 10), then
a = 0100 0000 0000 0000 0000 0000 0000 0000 (1073741824)
& c = 0011 1111 1111 1111 1111 1111 1111 1111 (1073741823)
----------------------------------------------------------------
a = 0000 0000 0000 0000 0000 0000 0000 0000 (0, which is not equal to the original a=1073741824)
So, your assumption (that the value of a after executing a &= c is the same as the previous a) holds only if a is non-negative and a <= c, i.e., only if a has no bits set above bit 29.
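A quick sketch to see this for yourself (assumes 32-bit unsigned values so the high-bit cases are well defined):
#include <cstdint>
#include <cstdio>

int main()
{
    const uint32_t c = 0x3FFFFFFF;
    const uint32_t tests[] = {10u, 256u, 0x40000000u, 0xC0000000u};
    for (uint32_t a : tests)
        std::printf("0x%08X & 0x%08X = 0x%08X\n", a, c, a & c);
    // 10 and 256 come out unchanged; the last two lose their top bits.
}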

Why is "i & (i ^ (i - 1))" equivalent to "i & (-i)"

I had this in part of the code. Could anyone explain how i & (i ^ (i - 1)) could be reduced to i & (-i)?
i ^ (i - 1) makes the lowest set bit of i and all bits below it become 1.
For example, if i has a binary representation abc...de10...0, then i - 1 will be abc...de01...1 in binary (see Why does (x-1) toggle all the bits from the rightmost set bit of x?). The part above the lowest set bit is not changed when subtracting 1 from i, so XORing returns 0 in that part, while the remaining bits become 1 because i and i - 1 differ there. After that, i & (i ^ (i - 1)) extracts the lowest set bit of i.
-i inverts all bits above the lowest set bit of i, because in two's complement -i == ~i + 1; the lowest set bit and the zeros below it are unchanged, so i & (-i) gives the same result as the above.
For example:
20 = 0001 0100
19 = 0001 0011
20 ^ 19 = 0000 0111 = 7
20 & 7 = 0000 0100
-20 = 1110 1100
20 & -20 = 0000 0100
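A brute-force check of the equivalence over a range of values, as a sketch (using unsigned arithmetic so the negation is well defined):
#include <cassert>
#include <cstdint>

int main()
{
    for (uint32_t i = 1; i <= 1000000u; ++i)
        assert((i & (i ^ (i - 1))) == (i & -i)); // both isolate the lowest set bit
}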
See also
Why n bitwise and -n always return the right most bit (last bit)
What does (number & -number) mean in bit programming?

Set digit of a hexadecimal number

How can I set a digit in a hexadecimal number?
I currently have this code:
int row = 0x00000000;
row |= 0x3 << 8;
row |= 0x2 << 4;
row |= 0x1 << 0;
printf("Row: 0x%08x", row);
Which works perfectly fine as long as "row" is just zeros. As soon as I change it to something like this:
int row = 0x33333333;
row |= 0x3 << 8;
row |= 0x2 << 4;
row |= 0x1 << 0;
printf("Row: 0x%08x", row);
I just get this output:
Row: 0x33333333
You should first clear the digit (make it 0).
row &= ~(0xf << 4);
The ~ operator inverts all bits in the number, so 0x000000f0 becomes 0xffffff0f.
Your code should look like:
row &= ~(0xf << 8);
row |= 0x3 << 8;
row &= ~(0xf << 4);
row |= 0x2 << 4;
row &= ~(0xf << 0);
row |= 0x1 << 0;
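The clear-then-set pair generalizes into a small helper; a sketch (set_hex_digit is a hypothetical name, with digit 0 being the least significant):
#include <stdio.h>

/* Set hex digit `digit` (0 = least significant) of `row` to `value` (0x0..0xF). */
unsigned set_hex_digit(unsigned row, unsigned digit, unsigned value)
{
    unsigned shift = 4u * digit;
    row &= ~(0xFu << shift);         /* clear the old digit */
    row |= (value & 0xFu) << shift;  /* write the new one */
    return row;
}

int main(void)
{
    unsigned row = 0x33333333u;
    row = set_hex_digit(row, 2, 0x3);
    row = set_hex_digit(row, 1, 0x2);
    row = set_hex_digit(row, 0, 0x1);
    printf("Row: 0x%08x\n", row);    /* prints Row: 0x33333321 */
}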
As Alexandru explained, you do need to clear the bitfield you're trying to set before you go on to set it.
Just to add further comment on why your code didn't do what you wanted, consider what is happening to the variable at the binary level:
row |= 0x2 << 4;
The |= operator is a "bitwise or". Hence if either the bit you're trying to set OR the bit you're passing in is set to 1, the result is 1. In your code, row is set to 0x33333333, so each 4 bit hexadecimal digit is 0011 in binary. When you bitwise or that with 0x2, you get 0x3:
/* 0x3 | 0x2 = 0x3 */
0011 | 0010 = 0011
If you clear the bitfield first (an AND with the inverted mask), you get 0x2, which is what you want:
/* 0x3 & 0x0 = 0x0 */
0011 & 0000 = 0000
/* 0x0 | 0x2 = 0x2 */
0000 | 0010 = 0010
Note that bitwise operations and shifts on integer values are themselves well defined and portable; portability problems come instead from assumptions about type width (for example, moving code that assumes a 32-bit int to a platform where the types differ) or from reinterpreting an integer's bytes, where endianness matters. Using fixed-width types such as uint32_t avoids the width issue:
http://www.viva64.com/en/a/0004/#ID0EQFDI
Wikipedia has more on bitwise operations:
http://en.wikipedia.org/wiki/Bitwise_operation