I am facing a problem doing addition of long values. For example:
typedef unsigned short UINT16;
UINT16* flash_dest_ptr; // holds the address 0xFF910000
UINT16 data_length;     // hex = 0x000002AA, dec = 682
// now when I add
UINT16 *memory_loc_ver = flash_dest_ptr + data_length ;
dbug_printf( DBUG_ERROR | DBUG_NAVD, " ADD hex =0x%08X\n\r",memory_loc_ver );
Actual output = 0xFF910554
// shouldn't the output be 0xFF9102AA?
It's pointer arithmetic, so
UINT16 *memory_loc_ver = flash_dest_ptr + data_length ;
advances flash_dest_ptr by data_length * sizeof (UINT16) bytes.
Typically, sizeof (UINT16) would be 2, and
2 * 0x2AA = 0x554
When you add an integer to a pointer, you move the pointer by as many bytes as it takes to step over data_length UINT16s in memory, not by data_length bytes.
I receive some bytes and then want to cast them to a typedef struct with the corresponding fields.
My typedef struct is:
typedef struct SomethingHeader {
    uint8_t PV;
    uint8_t messageID;
    uint32_t stID;
} SomethingHeader;
I receive the array with values:
somePDU[0] = {char} 2 [0x2]
somePDU[1] = {char} 6 [0x6]
somePDU[2] = {char} 41 [0x29]
somePDU[3] = {char} -90 [0xa6]
somePDU[4] = {char} 28 [0x1c]
somePDU[5] = {char} -93 [0xa3]
somePDU[6] = {char} 55 [0x37]
somePDU[7] = {char} -50 [0xce]
somePDU[8] = {char} 0 [0x0]
....
When I use reinterpret_cast<SomethingHeader*>(somePDU), on the watch debug mode I see:
PV = 2 [0x2]
messageID = 6 [0x6]
stID = -835214564 [0xce37a31c]
The reinterpret_cast skips two bytes, somePDU[2] and somePDU[3], but I need them, because my expected value is 698797623 (0x29a6ce37).
It seems to me that reinterpret_cast only works well on 4-byte boundaries (or with structures whose members occupy 4 bytes in a row).
How can I force the reinterpret_cast not to skip those two bytes?
You cannot use reinterpret_cast for this in C++. It violates the type-aliasing rules, and the behaviour of the program is undefined.
You also cannot define the struct in such a way that it is guaranteed to have no padding in standard C++. A structure like this is not a portable way to represent a byte pattern in C++.
A working example:
std::size_t offs = 0;
SomethingHeader sh;
std::memcpy(&sh.PV, somePDU + offs, sizeof sh.PV);
offs += sizeof sh.PV;
std::memcpy(&sh.messageID, somePDU + offs, sizeof sh.messageID);
offs += sizeof sh.messageID;
std::memcpy(&sh.stID, somePDU + offs, sizeof sh.stID);
offs += sizeof sh.stID;
Note that this still assumes that the bytes of the integer are in native endianness, which is not a portable assumption. You need to know the endianness of the source data and convert it to the native byte order. This can be done portably with shifts and bitwise OR.
My problems are padding and endianness, which were raised in the comments on my question.
My solution for padding is:
#pragma pack(push,1)
struct SomethingHeader {
    uint8_t PV;
    uint8_t messageID;
    uint32_t stID;
};
#pragma pack(pop)
For the endianness just use ntohl().
Thank you
I have this "buggy" code:
int arr[15];
memset(arr, 1, sizeof(arr));
memset sets each byte to 1, but since int is generally 4 bytes, it won't give the desired output. I know that each int in the array will be initialized to 0x01010101 = 16843009. Since I have a (very) weak understanding of hex values and memory layouts, can someone explain why it gets initialized to that hex value? What would happen if I had, say, 4 in place of 1?
If I trust the man page
The memset() function writes len bytes of value c (converted to an unsigned char) to the string b.
In your case it will convert 0x00000001 (as an int) into 0x01 (as an unsigned char), then fill each byte of the memory with this value. You can fit 4 of that in an int, that is, each int will become 0x01010101.
If you had 4, it would be casted into the unsigned char 0x04, and each int would be filled with 0x04040404.
Does that make sense to you ?
What memset does is
Converts the value ch to unsigned char and copies it into each of the first count characters of the object pointed to by dest.
So first your value (1) will be converted to unsigned char, which occupies 1 byte: 0b00000001. Then memset will fill the whole array's memory with that byte. Since an int takes 4 bytes on your machine, the value of each int in the array will be 0b00000001000000010000000100000001, which is 16843009. If you pass another value instead of 1, the array's memory will be filled with that value instead.
Note that memset converts its second argument to an unsigned char which is one byte. One byte is eight bits, and you're setting each byte to the value 1. So we get
0b00000001 00000001 00000001 00000001
or in hexadecimal,
0x01010101
or the decimal number 16843009. Why that value? Because
0b00000001000000010000000100000001 = 1*2^0 + 1*2^8 + 1*2^16 + 1*2^24
= 1 + 256 + 65536 + 16777216
= 16843009
Each group of four binary digits corresponds to one hexadecimal digit. Since 0b0000 = 0x0 and 0b0001 = 0x1, your final value is 0x01010101. With memset(arr, 4, sizeof(arr)); you would get 0x04040404 and with 12 you would get 0x0c0c0c0c.
How can I concatenate bytes? For example:
I have a byte array, BYTE buffer[2] = {0x00, 0x02}, and I want to concatenate these two bytes, but backwards,
something like this:
0x0200 <---
and later convert those bytes to decimal: 0x0200 = 512.
But I don't know how to do it in C, because I can't use memcpy or strcat, since buffer is a BYTE array and not a CHAR array; I don't even know if that is possible.
Can somebody help me with code showing how to concatenate bytes and convert the result to decimal?
I have another byte array, buff = {0x00, 0x00, 0x0C, 0x00, 0x00, 0x00}, and need to do the same there.
Help please.
Regards.
BYTE is not a standard type and is probably a typedef for unsigned char. Here, I'll use the definitions from <stdint.h>, which provide integers of specified bit widths; there, a byte is uint8_t.
Concatenating two bytes "backwards" is easy if you think about it:
uint8_t buffer[2] = {0x00, 0x02};
uint16_t x = buffer[1] * 256 + buffer[0];
It isn't called backwards, by the way, but Little Endian byte order. The opposite would be Big Endian, where the most significant byte comes first:
uint16_t x = buffer[0] * 256 + buffer[1];
Then, there's no such thing as "converting to decimal". Internally, all numbers are binary. You can print them as decimal numbers or as hexadecimal numbers or as numbers in any base, or even as Roman numerals if you like, but it's still the same number:
printf("dec: %u\n", x); // prints 512
printf("hex: %x\n", x); // prints 200
Now let's look at what happens with byte arrays of any length:
uint8_t buffer[4] = {0x11, 0x22, 0x33, 0x44};
uint32_t x = buffer[3] * 256 * 256 * 256
+ buffer[2] * 256 * 256
+ buffer[1] * 256
+ buffer[0];
See a pattern? You can rewrite this as:
uint32_t x = ( ( (buffer[3]) * 256
+ buffer[2]) * 256
+ buffer[1]) * 256
+ buffer[0];
You can convert this logic to a function easily:
uint64_t int_little_endian(uint8_t *arr, size_t n)
{
uint64_t res = 0ul;
while (n--) res = res * 256 + arr[n];
return res;
}
Likewise for Big Endian, where you move "forward":
uint64_t int_big_endian(uint8_t *arr, size_t n)
{
uint64_t res = 0ul;
while (n--) res = res * 256 + *arr++;
return res;
}
Lastly, code that deals with byte conversions usually doesn't use the arithmetic operations of multiplication and addition; it uses the so-called bit-wise operators. A multiplication by 2 is represented by shifting all bits of a number left by one. (Much as a multiplication by 10 in decimal is done by shifting all digits left by one and appending a zero.) Our multiplication by 256 becomes a shift of 8 bits to the left, which in C notation is x << 8.
Addition is done with the bit-wise OR. The two operations are not identical, because the bit-wise OR operates on bits and does not account for carries, but in our case, where the merged bit ranges never overlap, they behave the same. Your Little-Endian conversion function now looks like this:
uint64_t int_little_endian(uint8_t *arr, size_t n)
{
uint64_t res = 0ul;
while (n--) res = res << 8 | arr[n];
return res;
}
And if that doesn't look like some nifty C code, I don't know what does. (If these bit-wise operators confuse you, leave them for now. In your example, you're fine with multiplication and addition.)
Let's suppose I have an unsigned int* val and an unsigned char mat[24][8], and val stores the location of the variable mat. Is it possible to modify the bits of mat using the location held in val?
For example:
val = 0x00000001 and the location of val in memory is 0x20004000;
the first element of mat is located at 0x00000001.
Now I want to modify the value of mat at, say, [10][4]. Is it possible to do this in C++?
Yes, it is possible, unless either of the array members or the pointer target is const.
For example:
int array[3][2] = { { 0, 1 }, { 2, 3 }, { 4, 5 } };
int *p = &array[1][1];
*p = 42;
// array is now: { { 0, 1 }, { 2, 42 }, { 4, 5 } };
Yes, you can change the value of your matrix using the address (what you called the location), but you have to calculate the right offset from the start. Since mat holds unsigned char elements, the offset calculation should be:
(matrix_row_len * Y + X) * sizeof(unsigned char) + offset of the beginning of the matrix
Then, when you have the offset, you can change mat like this: *((unsigned char*)val + offset) = new_value.
You can do it by making val an unsigned char*:
val = (unsigned char *)&mat;
which makes it easy to modify the bytes.
You can of course modify the bits, since unsigned char mat[24][8] gives you a memory chunk of 24*8*sizeof(unsigned char) bytes.
(I assume from here on that unsigned char is 1 byte (= 8 bits) in size and unsigned int is 4 bytes (= 32 bits), but this may depend on your system.)
But accessing memory elements of 1 byte width using a pointer to elements with 4 bytes width is tricky and can easily produce errors.
If you set element 0 of the int array to 1 for example
#define ROWS 24
#define COLS 8
unsigned char mat[ROWS][COLS];
unsigned int * val = (unsigned int*)&mat;
val[0] = 1;
On a big-endian system you will see that mat[0][0] is 0, mat[0][1] is 0, mat[0][2] is 0 and mat[0][3] is 1; on a little-endian system, it is mat[0][0] that becomes 1.
Please note that you cannot edit the elements of mat directly using their byte offset via such a "mis-typed" pointer. Accessing val[10*8+4], for example, would access byte 336 from the beginning of your memory chunk, which has only 192 bytes.
You will have to calculate your index correctly:
size_t byte_index = (10*COLS+4)*sizeof(unsigned char); // will be 84
size_t int_index = byte_index / sizeof(unsigned int); // will be 21
size_t sub_byte = byte_index%sizeof(unsigned int); // will be 0
Therefore you can access val[int_index], i.e. val[21], to reach the 4 bytes that contain the data of element mat[10][4]; within that unsigned int, the element is byte number sub_byte.
If you have the same types there is no problem except that you need to calculate the correct offset.
#define ROWS 24
#define COLS 8
unsigned char mat[ROWS][COLS];
unsigned char * val = &mat[0][0];
val[10*8+4] = 12; // set mat[10][4] to 12
*(val+10*8+5) = 13; // set mat[10][5] to 13
I have an array of unsigned chars. Basically, I have an array of bits.
I know that the first 16 bits correspond to an unsigned integer, and I retrieve its value using (u16)(*(abcBuffer + 1) << 8 | *abcBuffer).
Then comes a data type called u30 which is described as follows:
u30 - variable-length encoded 30-bit unsigned integer value. The variable encoding for u30 uses one to five bytes, depending on the magnitude of the value encoded. Each byte contributes its low seven bits to the value. If the high (8th) bit of a byte is set, then the next byte is also part of the value.
I don't understand this description: it says u30 (thirty!) and then it says one to five bytes? I also have another data type called s24, a three-byte signed integer value.
How should one read (retrieve the values of) such non-typical data types? Any help will be appreciated.
Thanks a lot!
i = 0;
val = buf[i] & 0x7F;
while (buf[i++] & 0x80)
{
    val |= (buf[i] & 0x7F) << (i * 7);
}
Assuming I understand correctly (always a questionable matter), the following will read the values. It starts at position zero in this example (i would need to be offset by the actual position in the buffer):
unsigned int val;
unsigned char buf[300];
int i;
int shift;
i = 0;
buf[0] = 0x81;
buf[1] = 0x3;
val = 0;
shift = 0;
do
{
val |= (0x7F & buf[i] ) << shift;
shift += 7;
i++;
} while (( buf[i-1] & 0x80 ) && ( i < 5 ));
printf( "Val = %u\n", val );
The encoding format description is somewhat informal perhaps, but should be enough. The idea is that you read one byte (call it x), take its lowest 7 bits, x & 0x7F, and at the same time check whether its highest bit is set. You'll need to write a small loop that merges the 7-bit groups into a uint variable until the current byte no longer has its highest bit set.
You will have to figure out if you need to merge the new bits at the high end, or the low end of the number (a = (a << 7) | (x & 0x7F)). For that you need one test sequence of which you know what the correct output is.
To read the variable-length 30-bit value, you could do something like this:
const unsigned char HIGH_BIT = 0x80;
const unsigned char DATA_MASK = 0x7F;
const unsigned char LAST_MASK = 0x03; // only need 2 bits of last byte
char tmpValue = 0;  // tmp holder for value of byte
int value = 0;      // holder for the actual value
char* ptr = buffer; // assume buffer points at the start of the 30-bit number
for(int i = 0; i < 5; i++)
{
if(i == 4)
{
tmpValue = LAST_MASK & *ptr;
}
else
{
tmpValue = DATA_MASK & *ptr;
}
value |= tmpValue << ( 7 * i);
if(!(HIGH_BIT & *ptr))
{
break;
}
if(i != 4)
{
++ptr;
}
}
buffer = ptr; // advance the buffer afterwards.
@Mark: your answer was posted while I was typing this, and would work except for the high byte. The value is only 30 bits, so only the low 2 bits of the fifth byte are used for the value, and you are using all 8 of its bits.