I'm trying to understand how this alignment works. It should align a uint32_t address to its nearest 8-byte-aligned address:
static inline uint32_t
ZBI_ALIGN(uint32_t n) {
  return ((n + ZBI_ALIGNMENT - 1) & -ZBI_ALIGNMENT);
}
Let's take n=10 and ZBI_ALIGNMENT=8. The nearest 8-byte-aligned address should be 16.
This returns ((10 + 8 - 1) & -8) = 17 & -8.
Why should this be aligned?
The key to this formula is that it is only valid if ZBI_ALIGNMENT happens to be a power of two, which is not a big deal because alignment requirements tend to meet that criterion.
A number being aligned to (aka being a multiple of) a power of two means that all bits smaller than that power of two are set to 0. You can convince yourself of that easily by looking at a few 8-bit numbers:
15: 00001111
16: 00010000 <--- aligned to 16
17: 00010001
31: 00011111
32: 00100000 <--- aligned to 16
48: 00110000 <--- aligned to 16
Assuming we have a mask that has only the bits at or above the 16s place set, N & mask would be a no-op for all multiples of 16, and give us the previous multiple of 16 for all other values.
16: 00010000
mask for 16: 11110000
15 & mask -> 00000000 : 0
16 & mask -> 00010000 : 16
17 & mask -> 00010000 : 16
32 & mask -> 00100000 : 32
In order to get the right value directly, we can use (N + 15) & mask instead. If N is a multiple of 16 already, N + 15 will land just shy of the next multiple. Otherwise, it will always "bump" the value into the next range, e.g. 1 + 15 = 16, 16 + 15 = 31, etc. This generalises as (N + (DESIRED_ALIGNMENT - 1)).
So all that's left to figure out is how to calculate the mask for a given desired alignment.
Conveniently, in two's complement representation (which all signed integers have to use), negative values of powers of two happen to be exactly the mask we need.
For 8 bit numbers it looks like this:
-1 -> 11111111
-2 -> 11111110
-4 -> 11111100
-8 -> 11111000
etc...
So mask can simply be computed as -ZBI_ALIGNMENT.
Putting all this together, we get:
((n + ZBI_ALIGNMENT - 1) & -ZBI_ALIGNMENT)
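Putting the function and the worked example together, here is a minimal self-contained sketch (the ZBI_ALIGNMENT value of 8 is assumed just for this demo):

#include <stdint.h>
#include <stdio.h>

#define ZBI_ALIGNMENT 8u  /* assumed value for this demo; must be a power of two */

static inline uint32_t ZBI_ALIGN(uint32_t n) {
  return (n + ZBI_ALIGNMENT - 1) & -ZBI_ALIGNMENT;
}

int main(void) {
  /* 10 -> 16, 16 stays 16, 17 -> 24 */
  printf("%u %u %u\n", (unsigned)ZBI_ALIGN(10), (unsigned)ZBI_ALIGN(16), (unsigned)ZBI_ALIGN(17));
  /* the mask is just the negated alignment: -8 as a 32-bit value is 0xFFFFFFF8 */
  printf("mask = 0x%08X\n", (unsigned)-ZBI_ALIGNMENT);
  return 0;
}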
I read the IP RFC, and it says the first 4 bits of the IP header are the version. The drawing also shows that bits 0 to 3 are the version.
https://www.rfc-editor.org/rfc/rfc791#section-3.1
But when I look at the first byte of the header (as captured using the pcap library) I see this byte:
0x45
This is a version 4 IP header, but apparently bits 4 to 7 are equal to 4, not bits 0 to 3 as I expected.
I expected that a bitwise AND of the first byte with 0x0F would get me the version, but it seems that I need to AND with 0xF0.
Am I missing something? Understanding something incorrectly?
You should read Appendix B of the RFC:
Whenever an octet represents a numeric quantity the left most bit in the
diagram is the high order or most significant bit. That is, the bit
labeled 0 is the most significant bit. For example, the following
diagram represents the value 170 (decimal).
0 1 2 3 4 5 6 7
+-+-+-+-+-+-+-+-+
|1 0 1 0 1 0 1 0|
+-+-+-+-+-+-+-+-+
Which means everything is correct, except for your assumption that the "first four bits" are the least significant; they are in fact the most significant.
E.g. in the 7th and 8th bytes, containing the flags and the fragment offset, you can separate those as follows (consider that pseudocode, even though it is working C#):
// bytes 7 and 8 of the IPv4 header: flags + fragment offset
byte flagsAndFragmentHi = packet[6];
byte fragmentLo = packet[7];
bool flagReserved0 = (flagsAndFragmentHi & 0x80) != 0;     // bit 0 in the diagram
bool flagDontFragment = (flagsAndFragmentHi & 0x40) != 0;  // bit 1 (DF)
bool flagMoreFragments = (flagsAndFragmentHi & 0x20) != 0; // bit 2 (MF)
int fragmentOffset = ((flagsAndFragmentHi & 0x1F) << 8) | fragmentLo;
Note that the more significant (left-shifted 8 bits) portion of the fragment offset is in the first byte (because IP works in big endian). Generally: bits on the left in the diagram are always more significant.
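Applied back to the original question, here is a minimal sketch in plain C (the 0x45 byte is the one from your capture) that pulls the version out of the high nibble and the header length out of the low nibble:

#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint8_t first_byte = 0x45;                   /* first header byte from the capture */
  uint8_t version = (first_byte >> 4) & 0x0F;  /* high nibble: version, here 4 */
  uint8_t ihl     =  first_byte       & 0x0F;  /* low nibble: header length in 32-bit words, here 5 */
  printf("version=%u, header length=%u bytes\n", (unsigned)version, (unsigned)(ihl * 4));
  return 0;
}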
I am working on a project with a TFT touch screen. The screen comes with an included library, but after some reading I still don't get something. In the library there are some defines regarding colors:
/* some RGB color definitions */
#define Black 0x0000 /* 0, 0, 0 */
#define Navy 0x000F /* 0, 0, 128 */
#define DarkGreen 0x03E0 /* 0, 128, 0 */
#define DarkCyan 0x03EF /* 0, 128, 128 */
#define Maroon 0x7800 /* 128, 0, 0 */
#define Purple 0x780F /* 128, 0, 128 */
#define Olive 0x7BE0 /* 128, 128, 0 */
#define LightGrey 0xC618 /* 192, 192, 192 */
#define DarkGrey 0x7BEF /* 128, 128, 128 */
#define Blue 0x001F /* 0, 0, 255 */
#define Green 0x07E0 /* 0, 255, 0 */
#define Cyan 0x07FF /* 0, 255, 255 */
#define Red 0xF800 /* 255, 0, 0 */
#define Magenta 0xF81F /* 255, 0, 255 */
#define Yellow 0xFFE0 /* 255, 255, 0 */
#define White 0xFFFF /* 255, 255, 255 */
#define Orange 0xFD20 /* 255, 165, 0 */
#define GreenYellow 0xAFE5 /* 173, 255, 47 */
#define Pink 0xF81F
Those are 16-bit colors. But how do they go from 0, 128, 128 (dark cyan) to 0x03EF? I mean, how do you convert an RGB color to a uint16? This doesn't need to be answered in code, because I just want to add some colors to the library. A link to an online converter (which I could not find) would be fine as well :)
Thanks
From these one can easily find out the formula:
#define Red 0xF800 /* 255, 0, 0 */
#define Magenta 0xF81F /* 255, 0, 255 */
#define Yellow 0xFFE0 /* 255, 255, 0 */
0xF800 has the 5 most significant bits set, and 0xFFE0 has the 5 least significant bits clear.
0xF81F obviously has both the 5 LSBs and the 5 MSBs set, which proves the format to be RGB565.
The formula to convert a value such as 173 to red is not as straightforward as it may look -- you can't simply drop the 3 least significant bits; you have to scale linearly so that 255 corresponds to 31 (or, for green, so that 255 corresponds to 63).
NewValue = (31 * old_value) / 255;
(And this is still just a truncating division -- proper rounding could be needed)
With proper rounding and scaling:
Uint16_value = (((31 * (red   + 4)) / 255) << 11) |
               (((63 * (green + 2)) / 255) <<  5) |
               ( (31 * (blue  + 4)) / 255);
EDIT: Added parentheses, as helpfully suggested by JasonD.
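As a quick sanity check of that rounding variant (a sketch; the function name and test values are mine), assuming 8-bit inputs:

#include <stdint.h>
#include <stdio.h>

/* rounded-and-scaled RGB888 -> RGB565, following the formula above */
static uint16_t rgb565_rounded(uint8_t red, uint8_t green, uint8_t blue) {
  return (uint16_t)((((31 * (red   + 4)) / 255) << 11) |
                    (((63 * (green + 2)) / 255) <<  5) |
                    ( (31 * (blue  + 4)) / 255));
}

int main(void) {
  printf("0x%04X\n", (unsigned)rgb565_rounded(255, 255, 255)); /* 0xFFFF, White */
  printf("0x%04X\n", (unsigned)rgb565_rounded(255,   0,   0)); /* 0xF800, Red   */
  printf("0x%04X\n", (unsigned)rgb565_rounded(  0, 255,   0)); /* 0x07E0, Green */
  return 0;
}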
You need to know the exact format of the display, just "16-bit" is not enough.
There's RGB555, in which each of the three components gets 5 bits. This drops the total color space to just 32,768 colors, wasting one bit, but it's very simple to manage since there's the same number of shades for each component.
There's also RGB565, in which the green component is given 6 bits (since the human eye is more sensitive to green). This is probably the format you have, since the "Green" example is 0x07E0, which in binary is 0b0000 0111 1110 0000. There are 6 bits set to 1 there, so I'd guess that's the total allocation for the green component, shown at its maximum value.
It's like this, then (with spaces separating every four bits and re-using the imaginary 0b prefix):
0bRRRR RGGG GGGB BBBB
Of course, the bit ordering can differ too, in the word.
The task of converting a triplet of numbers into a bit-packed word is quite easily done in typical programming languages that have bit-manipulation operators.
In C, it's often done in a macro, but we can just as well have a function:
#include <stdint.h>

uint16_t rgb565_from_triplet(uint8_t red, uint8_t green, uint8_t blue)
{
  red   >>= 3;  /* keep the top 5 bits */
  green >>= 2;  /* keep the top 6 bits */
  blue  >>= 3;  /* keep the top 5 bits */
  return (red << 11) | (green << 5) | blue;
}
Note that the above assumes full 8-bit precision for the components, so maximum intensity for a component is 255, not 128 as in your example. If the color space really uses 7-bit components, then some additional scaling would be necessary.
It looks like you're using RGB565, first 5 bits for Red, then 6 bits for Green, then 5 bits for Blue.
You should mask with 0xF800 and shift right 11 bits to get the red component (or shift 8 bits to get a value from 0-255).
Mask with 0x07E0 and shift right 5 bits to get green component (or 3 to get a 0-255 value).
Mask with 0x001F to get the blue component (and shift left 3 bits to get 0-255).
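A sketch of that unpacking in plain C (the function name is mine), going from an RGB565 word back to approximate 0-255 components:

#include <stdint.h>

/* split an RGB565 word into rough 0-255 components (the vacated low bits come back as zero) */
void rgb565_split(uint16_t color, uint8_t *r, uint8_t *g, uint8_t *b) {
  *r = (uint8_t)((color & 0xF800) >> 8);  /* 5 red bits, moved to the top of a byte   */
  *g = (uint8_t)((color & 0x07E0) >> 3);  /* 6 green bits, moved to the top of a byte */
  *b = (uint8_t)((color & 0x001F) << 3);  /* 5 blue bits, moved to the top of a byte  */
}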
Your colours are in 565 format. It would be more obvious if you wrote them out in binary.
Blue, (0,0,255) is 0x001f, which is 00000 000000 11111
Green, (0, 255, 0) is 0x07e0, which is 00000 111111 00000
Red, (255, 0, 0) is 0xf800, which is 11111 000000 00000
To convert a 24-bit colour to 16-bit in this format, simply take the required upper bits of each component, shift them into position, and OR them together.
To convert back from 16-bit to 24-bit colour, mask each component, shift it into position, and then duplicate the upper bits into the lower bits.
In your examples it seems that some colours have been scaled and rounded rather than shifted.
I strongly recommend using the bit-shift method rather than scaling by a factor like 31/255, because the bit-shifting is not only likely to be faster, but should give better results.
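Here is a sketch of that reverse conversion (the function name is mine), replicating the top bits into the vacated low bits so that a full-scale 16-bit value maps back to 255 rather than 248 or 252:

#include <stdint.h>

/* RGB565 -> 24-bit RGB888, duplicating the high bits into the low bits */
void rgb565_to_rgb888(uint16_t c, uint8_t *r, uint8_t *g, uint8_t *b) {
  uint8_t r5 = (c >> 11) & 0x1F;
  uint8_t g6 = (c >>  5) & 0x3F;
  uint8_t b5 =  c        & 0x1F;
  *r = (uint8_t)((r5 << 3) | (r5 >> 2));  /* 5 bits -> 8 bits */
  *g = (uint8_t)((g6 << 2) | (g6 >> 4));  /* 6 bits -> 8 bits */
  *b = (uint8_t)((b5 << 3) | (b5 >> 2));  /* 5 bits -> 8 bits */
}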
The 3-part numbers you're showing are 24-bit color components. 128 in hex is 0x80, but in your color definitions it's being represented as 0x0F. Likewise, 255 is 0xFF, but in your color definitions it's being represented as 0x1F. This suggests that you need to take the 3-part numbers and shift them down by 3 bits (losing 3 bits of color data for each color), then combine them into a single 16-bit number:
uint16 color = ((red>>3)<<11) | ((green>>2)<<5) | (blue>>3);
...revised from earlier because green uses 6 bits, not 5.
You need to know how many bits there are per colour channel. So yes, there are 16 bits for a colour, but the RGB components are each some subset of those bits. In your case, red is 5 bits, green is 6, and blue is 5. The format in binary would look like so:
RRRRRGGG GGGBBBBB
There are other 16 bit colour formats, such as red, green, and blue each being 5 bits and then use the remaining bit for an alpha value.
The range of values for both the red and blue channels will be from 0 to 2^5 - 1 = 31, while the range for green will be 0 to 2^6 - 1 = 63. So to convert from colours in the form (0-255, 0-255, 0-255) you will need to map values from one range to the other. For example, a red value of 128 in the range 0-255 will be mapped to (128/255) * 31 = 15.6 in the red channel with range 0-31. If we round down, we get 15, which is represented as 01111 in five bits. Similarly, for the green channel (with six bits) you get 011111. So the colour (128,128,128) will map to 01111011 11101111, which is 0x7BEF in hexadecimal.
You can apply this to the other values too: 0,128,128 becomes 00000011 11101111 which is 0x03EF.
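A small sketch of that mapping (truncating, as in the worked example; the function name is mine):

#include <stdint.h>
#include <stdio.h>

/* scale 8-bit components into the 5/6/5 ranges (truncating) and pack as RGB565 */
static uint16_t rgb565_scaled(uint8_t r, uint8_t g, uint8_t b) {
  uint16_t r5 = (uint16_t)(r * 31 / 255);
  uint16_t g6 = (uint16_t)(g * 63 / 255);
  uint16_t b5 = (uint16_t)(b * 31 / 255);
  return (uint16_t)((r5 << 11) | (g6 << 5) | b5);
}

int main(void) {
  printf("0x%04X\n", (unsigned)rgb565_scaled(128, 128, 128)); /* 0x7BEF, DarkGrey */
  printf("0x%04X\n", (unsigned)rgb565_scaled(  0, 128, 128)); /* 0x03EF, DarkCyan */
  return 0;
}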
Those colours shown in your code are RGB565. As shown by
#define Blue 0x001F /* 0, 0, 255 */
#define Green 0x07E0 /* 0, 255, 0 */
#define Red 0xF800 /* 255, 0, 0 */
If you simply want to add some new colours to this #defined list, the simplest way to convert from a 16-bit UINT per channel is just to shift your values down to lose the low-order bits and then shift and OR them into position in the 16-bit RGB value.
This could well produce banding artefacts though, and there may well be a better conversion method.
i.e.
UINT16 blue  = 0xFFFF;  // full-scale 16-bit channel values (white)
UINT16 green = 0xFFFF;
UINT16 red   = 0xFFFF;

blue  >>= 11;  // keep the top 5 bits
green >>= 10;  // keep the top 6 bits
red   >>= 11;  // keep the top 5 bits

UINT16 RGBvalue = blue | (green << 5) | (red << 11);
You may need to mask off any unwanted stray bits after the shifts; I am not entirely sure, but I think the code above should work.
Building on unwind's answer, specifically for the Adafruit GFX library using the Arduino 2.8" TFT Touchscreen(v2), you can add this function to your Arduino sketch and use it inline to calculate colors from rgb:
uint16_t getColor(uint8_t red, uint8_t green, uint8_t blue)
{
  red   >>= 3;
  green >>= 2;
  blue  >>= 3;
  return (red << 11) | (green << 5) | blue;
}
Now you can use it inline like so, illustrated with a function that draws a 20x20 square at (0, 0):
void setup() {
  tft.begin();
  makeSquare(getColor(20, 157, 217));
}

void makeSquare(uint16_t color1) {
  tft.fillRect(0, 0, 20, 20, color1);
}
Docs for the Adafruit GFX library can be found here
#define RGB2BGR(a_ulColor) (a_ulColor & 0xFF000000) | ((a_ulColor & 0xFF0000) >> 16) | (a_ulColor & 0x00FF00) | ((a_ulColor & 0x0000FF) << 16)
Can you please explain to me the meaning of this macro?
Colors are usually represented by a 32-bit integer. 32-bit integers can hold four 8-bit bytes. Three of them are used to hold red, green, and blue color information. The remaining byte is either left unused or used to hold transparency information.
Which byte represents which color is not standardized. Some APIs expect the bytes like this:
(MSB) ******** rrrrrrrr gggggggg bbbbbbbb (LSB)
Which is the "RGB" layout, perhaps the most common form. In the illustration above, the most significant 8 bits are the "don't care" bits, that is, the bits that are not used. The least significant 8 bits store the information for the blue color.
Some APIs expect the reverse for the 3 color bytes, like this:
(MSB) ******** bbbbbbbb gggggggg rrrrrrrr (LSB)
Which is the "BGR" layout.
The macro helps interconvert the two layouts using the bitwise operators. Let's take a look at its definition:
(a_ulColor & 0xFF000000) | ((a_ulColor & 0xFF0000) >> 16) |
(a_ulColor & 0x00FF00) | ((a_ulColor & 0x0000FF) << 16)
Let's say we have a color, Cornflower Blue, which has a value of 0x93CCEA. In the RGB layout, it has the following bit pattern:
a_ulColor = 00000000 10010011 11001100 11101010
The following expressions give you the following patterns:
1. a_ulColor & 0xFF000000 --> 00000000 00000000 00000000 00000000
2. a_ulColor & 0xFF0000 --> 00000000 10010011 00000000 00000000
3. a_ulColor & 0x00FF00 --> 00000000 00000000 11001100 00000000
4. a_ulColor & 0x0000FF --> 00000000 00000000 00000000 11101010
Notice that we're just extracting the individual bytes. Expression #1 extracts the most significant 8 bits, and expression #4 extracts the least significant 8 bits. We were able to do this via the bitwise AND operation.
Now, to convert RGB to BGR, we have to move some bits left or right, via bitshifts. Like this:
1. (a_ulColor & 0xFF000000) --> 00000000 00000000 00000000 00000000
2. (a_ulColor & 0xFF0000) >> 16 --> 00000000 00000000 00000000 10010011
3. (a_ulColor & 0x00FF00) --> 00000000 00000000 11001100 00000000
4. (a_ulColor & 0x0000FF) << 16 --> 00000000 11101010 00000000 00000000
The expression a >> 16 simply shifts the bits to the right by 16 bits. a << 16 shifts the bits to the left by 16 bits.
Then, when you OR them all together, you get this:
00000000 11101010 11001100 10010011
Compare the result to the original bit pattern:
00000000 11101010 11001100 10010011
00000000 10010011 11001100 11101010
You can see that the 2nd and 4th bytes are swapped. That's all the macro does.
It takes a four-byte integral value, AA BB CC DD, and returns the value AA DD CC BB. You can see that the first and third byte are retained unchanged, while the second byte is moved down two bytes (>> 16) and the fourth is moved up by two (<< 16).
It swaps the order of the byte-sized RGB elements from RGB to BGR (and vice-versa, to be fair).
a_ulColor is a 32 bit RGB representation (e.g. of a pixel or bitmap). The macro converts it to BGR layout. It effectively produces a new value by swapping the Red and Blue component values.
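For completeness, a minimal sketch that exercises the macro on the Cornflower Blue value used above (the test harness is mine):

#include <stdint.h>
#include <stdio.h>

#define RGB2BGR(a_ulColor) (a_ulColor & 0xFF000000) | ((a_ulColor & 0xFF0000) >> 16) | (a_ulColor & 0x00FF00) | ((a_ulColor & 0x0000FF) << 16)

int main(void) {
  uint32_t rgb  = 0x0093CCEA;    /* R=0x93, G=0xCC, B=0xEA */
  uint32_t bgr  = RGB2BGR(rgb);  /* expect 0x00EACC93      */
  uint32_t back = RGB2BGR(bgr);  /* swapping again restores the original */
  printf("0x%08X -> 0x%08X -> 0x%08X\n", (unsigned)rgb, (unsigned)bgr, (unsigned)back);
  return 0;
}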
I have the following code for self learning:
#include <iostream>
using namespace std;

struct bitfields {
    unsigned field1 : 3;
    unsigned field2 : 4;
    unsigned int k : 4;
};

int main() {
    bitfields field;
    field.field1 = 8;
    field.field2 = 17;
    field.k = 18;
    cout << field.k << endl;
    cout << field.field1 << endl;
    cout << field.field2 << endl;
    return 0;
}
I know that unsigned int k:4 means that k is 4 bits wide, with a maximum value of 15, and the result is the following:
2
0
1
For example, field1 can be from 0 to 7 (inclusive), and field2 and k from 0 to 15. Why such a result? Maybe it should all be zero?
You're overflowing your fields. Let's take k as an example: it's 4 bits wide. It can hold values, as you say, from 0 to 15; in binary representation this is
0 -> 0000
1 -> 0001
2 -> 0010
3 -> 0011
...
14 -> 1110
15 -> 1111
So when you assign 18, having binary representation
18 -> 1 0010 (space added between 4th and 5th bit for clarity)
k can only hold the lower four bits, so
k = 0010 = 2.
The equivalent holds true for the rest of your fields as well.
You have these results because the assignments overflowed each bitfield.
The variable field1 is 3 bits wide, but 8 takes 4 bits to represent (1000). The lower three bits are all zero, so field1 is zero.
For field2, 17 is represented by 10001, but field2 is only four bits wide. The lower four bits represent the value 1.
Finally, for k, 18 is represented by 10010, but k is only four bits wide. The lower four bits represent the value 2.
I hope that helps clear things up.
In C++ any unsigned type wraps around when you hit its ceiling[1]. When you define a bitfield of 4 bits, then every value you store is wrapped around too. The possible values for a bitfield of size 4 are 0-15. If you store '17', then you wrap to '1', for '18' you go one more to '2'.
Mathematically, the wrapped value is the original value modulo the number of possible values for the destination type:
For the bitfield of size 4 (2**4 possible values):
18 % 16 == 2
17 % 16 == 1
For the bitfield of size 3 (2**3 possible values):
8 % 8 == 0.
[1] This is not true for signed types, where it is undefined what happens then.
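A minimal sketch of the equivalent arithmetic in plain C (the variable names are mine), showing that storing N into a w-bit unsigned field keeps just the low w bits, i.e. N modulo 2^w:

#include <stdio.h>

int main(void) {
  unsigned values[] = { 8, 17, 18 };
  unsigned widths[] = { 3,  4,  4 };  /* field1, field2, k */

  for (int i = 0; i < 3; ++i) {
    unsigned mask = (1u << widths[i]) - 1u;  /* e.g. width 4 -> 0xF */
    /* prints 8 in 3 bits -> 0, 17 in 4 bits -> 1, 18 in 4 bits -> 2 */
    printf("%u in %u bits -> %u\n", values[i], widths[i], values[i] & mask);
  }
  return 0;
}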