How does .Byte[] function on a specific byte? [closed] - c++

I am working on the following lines of code:
#define WDOG_STATUS 0x0440
#define ESM_OP 0x08
and in a method of my defined class I have:
bool WatchDog = 0;
bool Operational = 0;
unsigned char i;
ULONG TempLong;
unsigned char Status;

TempLong.Long = SPIReadRegisterIndirect (WDOG_STATUS, 1); // read watchdog status
if ((TempLong.Byte[0] & 0x01) == 0x01)
    WatchDog = 0;
else
    WatchDog = 1;

TempLong.Long = SPIReadRegisterIndirect (AL_STATUS, 1);
Status = TempLong.Byte[0] & 0x0F;
if (Status == ESM_OP)
    Operational = 1;
else
    Operational = 0;
What SPIReadRegisterIndirect() does is take an unsigned short as the address of the register to read and an unsigned char Len as the number of bytes to read.
What is baffling me is Byte[]. I assume this is a way to separate part of the value in Long (which is read from SPIReadRegisterIndirect), but why [0]? Shouldn't it be 1? And how does it work? If it is isolating only one byte, for example in the WatchDog case, is TempLong.Byte[0] equal to 04 or 40? (When I print the value before the if statement, it shows 0, which is neither the 04 nor the 40 from the defined WDOG_STATUS register.)
Please consider that I am new to this subject. I have done a Google search and other searches, but unfortunately I could not find what I wanted. Could somebody please help me understand how this syntax works, or direct me to documentation where I can read about it?
Thank you in advance.

Your ULONG must be defined somewhere.
Else you'd get the syntax error 'ULONG' does not name a type
Probably something like:
typedef union {unsigned long Long; byte Byte[4];} ULONG;
Check union (and typedef) in your C/C++ book, and you'll see that this union helps reinterpret the long variable as an array of bytes.
Byte[0] is the first byte. Depending on the hardware (AVR controllers are little-endian), it's probably the LSB (least significant byte).
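To make this concrete, here is a minimal sketch (assuming the typedef above, with unsigned char in place of the Arduino byte type, and a 4-byte unsigned long as on AVR; reading a union member other than the one last written is a common embedded idiom, though strictly implementation-defined in C++):
#include <cstdio>

typedef union {unsigned long Long; unsigned char Byte[4];} ULONG;

int main() {
    ULONG TempLong;
    TempLong.Long = 0x12345678; // illustrative value only
    // On a little-endian machine this prints "78 56 34 12":
    // Byte[0] holds the least significant byte of Long.
    for (int i = 0; i < 4; ++i)
        printf("%02X ", (unsigned)TempLong.Byte[i]);
    printf("\n");
    return 0;
}
So in the watchdog code, TempLong.Byte[0] is the low byte of the value read back from the register at address 0x0440, not the 04 or 40 of the address itself.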

Related

Is it necessary to cast individual indices of a pointer after you cast the whole pointer in c? Why? [closed]

In the code below, the address of ip is cast to uint8_t *.
But then each index of the cast pointer is cast again to uint8_t.
Why has the programmer done this? Does it make a difference if we remove all those casts that come after the initial cast? This code converts an IPv4 IP address to an IP number. Thank you.
uint32_t Dot2LongIP(char* ipstring)
{
    uint32_t ip = inet_addr(ipstring);
    uint8_t *ptr = (uint8_t *) &ip;
    uint32_t a = 0;
    if (ipstring != NULL) {
        a = (uint8_t)(ptr[3]);
        a += (uint8_t)(ptr[2]) * 256;
        a += (uint8_t)(ptr[1]) * 256 * 256;
        a += (uint8_t)(ptr[0]) * 256 * 256 * 256;
    }
    return a;
}
Why has the programmer done this?
Ignorance, fear, or other incompetence.
The type of ptr is uint8_t *, so the type of ptr[i] is uint8_t. Converting a uint8_t to a uint8_t has no effect. Also, putting it in parentheses has no effect.
Does it make a difference if we remove all those casts that come after the initial cast?
Yes, it makes the code smaller and clearer. It has no effect on the program semantics.
This code converts an IPv4 IP Address to an IP Number.
No, it does not, not correctly; the code is broken.
When the uint8_t value is used in multiplication with 256, the usual arithmetic conversions are applied. These promote the uint8_t to int, and then the result of the * operator is an int. For ptr[0], as two more multiplications by 256 are performed, the result remains an int. Unfortunately, if the high bit (bit 7) of ptr[0] is set, these multiplications overflow a 32-bit int. Then the behavior of the program is not defined by the C standard.
To avoid this, the value should have been cast to uint32_t. (This speaks only to getting the arithmetic correct; I make no assertion about the usefulness of taking apart an in_addr_t returned by inet_addr and reassembling it in this way.)
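As a sketch of that fix (keeping the same inet_addr interface; shifts replace the repeated multiplications by 256, and the NULL check is moved before the inet_addr call):
#include <stdint.h>
#include <arpa/inet.h> // inet_addr (assumed POSIX environment)

uint32_t Dot2LongIP(char* ipstring)
{
    uint32_t a = 0;
    if (ipstring != NULL) {
        uint32_t ip = inet_addr(ipstring);
        uint8_t *ptr = (uint8_t *) &ip;
        // Widen each byte to uint32_t before shifting, so the arithmetic
        // is done in an unsigned 32-bit type and cannot overflow an int.
        a  = (uint32_t)ptr[3];
        a += (uint32_t)ptr[2] << 8;
        a += (uint32_t)ptr[1] << 16;
        a += (uint32_t)ptr[0] << 24;
    }
    return a;
}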
I suspect that is Arduino code :)
uint32_t Dot2LongIP(char* ipstring)
{
    uint32_t ip = inet_addr(ipstring);
    return ntohl(ip);
}
or, if you do not want to use ntohl:
uint32_t Dot2LongIP(char* ipstring)
{
    uint32_t ip = inet_addr(ipstring);
    ip = ((ip & 0xff000000) >> 24) | ((ip & 0x00ff0000) >> 8) |
         ((ip & 0x0000ff00) << 8) | ((ip & 0x000000ff) << 24);
    return ip;
}
Dereferencing any index of ptr (i.e., ptr[0], ptr[1], etc.) yields a value of type uint8_t. The casts performed on them are redundant.

Writing a program for a computer that uses Little or Big endian, and getting the same result [duplicate]

This question already has answers here:
Detecting endianness programmatically in a C++ program (29 answers)
This question is about endianness.
The goal is to write 2 bytes to a file for a game. I want to make sure that people with different computers get the same result, whether their machines are little- or big-endian.
Which of these snippets do I use?
char a[2] = { 0x5c, 0x7B };
fout.write(a, 2);
or
int a = 0x7B5C;
fout.write((char*)&a, 2);
Thanks a bunch.
From Wikipedia:
In its most common usage, endianness indicates the ordering of bytes within a multi-byte number.
So for char a[2] = { 0x5c, 0x7B };, a[1] will always be 0x7B.
However, for int a = 0x7B5C;, with unsigned char* bytes = (unsigned char*)&a;, bytes[0] will be 0x5C on a little-endian machine but not on a big-endian one. As you can see, you have to play with casts and byte pointers (this is only for explanation purposes).
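A small sketch of that inspection (illustrative only; what prints depends on the machine):
#include <cstdio>

int main() {
    int a = 0x7B5C;
    unsigned char* bytes = (unsigned char*)&a;
    // On a little-endian machine with a 32-bit int this prints "5C 7B";
    // on a big-endian one it prints "00 00", because the high-order
    // (zero) bytes of 0x00007B5C come first in memory.
    printf("%02X %02X\n", bytes[0], bytes[1]);
    return 0;
}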
One way that is used quite often is to write a 'signature' or 'magic' number as the first data in the file - typically a 16-bit integer whose value, when read back, will depend on whether or not the reading platform has the same endianness as the writing platform. If you then detect a mismatch, all data (of more than one byte) read from the file will need to be byte swapped.
Here's some outline code:
void ByteSwap(void *buffer, size_t length)
{
    unsigned char *p = static_cast<unsigned char *>(buffer);
    for (size_t i = 0; i < length / 2; ++i) {
        unsigned char tmp = *(p + i);
        *(p + i) = *(p + length - i - 1);
        *(p + length - i - 1) = tmp;
    }
    return;
}

bool WriteData(void *data, size_t size, size_t num, FILE *file)
{
    uint16_t magic = 0xAB12; // Something that can be tested for byte-reversal
    if (fwrite(&magic, sizeof(uint16_t), 1, file) != 1) return false;
    if (fwrite(data, size, num, file) != num) return false;
    return true;
}

bool ReadData(void *data, size_t size, size_t num, FILE *file)
{
    uint16_t test_magic;
    bool is_reversed;
    if (fread(&test_magic, sizeof(uint16_t), 1, file) != 1) return false;
    if (test_magic == 0xAB12) is_reversed = false;
    else if (test_magic == 0x12AB) is_reversed = true;
    else return false; // Error - needs handling!
    if (fread(data, size, num, file) != num) return false;
    if (is_reversed && (size > 1)) {
        for (size_t i = 0; i < num; ++i)
            ByteSwap(static_cast<char *>(data) + (i * size), size);
    }
    return true;
}
Of course, in the real world, you wouldn't need to write/read the 'magic' number for every input/output operation - just once per file, and store the is_reversed flag for future use when reading data back.
Also, with proper use of C++, you would probably be using std::istream/std::ostream arguments rather than the FILE* I have shown - but the sample I have posted has been extracted (with only very little modification) from code that I actually use in my projects (to do just this test). Conversion to better use of modern C++ should be straightforward.
Feel free to ask for further clarification and/or explanation.
NOTE: The ByteSwap function I have provided is not ideal! It almost certainly breaks strict aliasing rules and may well cause undefined behaviour on some platforms if used carelessly. Also, it is not the most efficient method for small data units (like int variables). One could (and should) provide one's own byte-reversal functions to handle specific types of variables (a good case for overloading the function with different argument types).
Which of these snippets do I use?
The first one. It has the same output regardless of native endianness.
But you'll find that if you need to interpret those bytes as some integer value, that is not so straightforward. char a[2] = { 0x5c, 0x7B } can represent either 0x5c7B (big endian) or 0x7B5c (little endian). So, which one did you intend?
The solution for cross platform interpretation of integers is to decide on particular byte order for the reading and writing. De-facto "standard" for cross platform data is to use big endian.
To write a number in big endian, start by bit-shifting the input value right so that the most significant byte is in the place of the least significant byte. Mask all other bytes (technically redundant in the first iteration, but we'll loop back soon). Write this byte to the output. Repeat this for all other bytes in order of significance.
This algorithm produces same output regardless of the native endianness - it will even work on exotic "middle" endian systems if you ever encounter one. Writing to little endian is similar, but in reverse order.
To read a big endian value, read the first byte of input, shift it left so that it goes to the place of most significant byte. Combine the shifted byte with the result (initially zero) using bitwise-or. Repeat with the next byte by shifting to the second most significant place and so on.
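A minimal sketch of those two algorithms for a 16-bit value (the names write_be16/read_be16 are illustrative, not any standard API):
#include <cstdint>
#include <istream>
#include <ostream>

// Write a 16-bit value in big-endian order, regardless of native endianness.
void write_be16(std::ostream& out, uint16_t value)
{
    char bytes[2];
    bytes[0] = static_cast<char>((value >> 8) & 0xFF); // most significant byte first
    bytes[1] = static_cast<char>(value & 0xFF);
    out.write(bytes, 2);
}

// Read a 16-bit big-endian value back, again independent of native endianness.
uint16_t read_be16(std::istream& in)
{
    unsigned char bytes[2];
    in.read(reinterpret_cast<char*>(bytes), 2);
    return static_cast<uint16_t>((bytes[0] << 8) | bytes[1]);
}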
How to know the endianness of a computer?
To know the endianness of a system, you can use std::endian in the upcoming C++20. Prior to that, you can use implementation-specific macros from the endian.h header. Or you can do a simple calculation like you suggest.
But you never really need to know the endianness of a system. You can simply use the algorithms that I described, which work on systems of all endianness without having to know what that endianness is.

What is the C++ equivalent to this MATLAB code? [closed]

In my project, I am responsible for migrating some MATLAB code to C++. The code below refers to serial communication from a computer to a microcontroller. The function CreatePackage generates a package which is then sent to the microcontroller using MATLAB's fwrite(serial) function.
function package = CreatePackage(V)
    for ii = 1:size(V,2)
        if V(ii) > 100
            V(ii) = 100;
        elseif V(ii) < -100
            V(ii) = -100;
        end
    end
    vel = zeros(1, 6);
    for ii = 1:size(V,2)
        if V(ii) > 0
            vel(ii) = uint8(V(ii));
        else
            vel(ii) = uint8(128 + abs(V(ii)));
        end
    end
    package = ['BD' 16+[6, vel(1:6)], 'P' 10 13]+0;
And then, to send the package:
function SendPackage(S, Package)
    for ii = 1:length(S)
        fwrite(S(ii), Package);
    end
How can I create an array/vector in C++ to represent the package variable used in the MATLAB code above?
I have no experience with MATLAB, so any help would be greatly appreciated.
Thank you!
The package variable is being streamed as 12 unsigned 8-bit integers in your MATLAB code, so I would use a char[12] array in C++. (sizeof(char) is 1 by definition; if you want to be thorough, check that CHAR_BIT is 8 on your platform.)
Yes, MATLAB's default data type is double, but that does not mean your vector V isn't filled with integer values. You have to look at this data or the specs from your equipment to figure this out.
Whatever the incoming values are, you are clipping the outgoing range to [-100, 100] and then offsetting it into the byte range [0, 255].
If you do not know a whole lot about MATLAB, you may be able to leverage what you know from C++ and use C as an interim. MATLAB's fwrite functionality lines up with that of C's, and you can include these functions in C++ with the #include <cstdio> preprocessor directive.
Here is an example solution:
#include <cstdio>    // fwrite
#include <cstdlib>   // abs
#include <algorithm> // std::min, std::max
...
void makeAndSendPackage(int *a6x1array, FILE **fHandles, int numHandles){
    // 'B', 'D', 16+6 = 22, six velocity bytes, 'P', newline, carriage return
    unsigned char packageBuffer[12] = {'B','D',22,0,0,0,0,0,0,'P','\n','\r'};
    for(int i = 0; i < 6; i++){
        int tmp = a6x1array[i];
        // Mirror the MATLAB branches: positives clip to [1,100] and offset by 16,
        // zero and negatives clip to [-100,0] and map to 144 + |value|.
        packageBuffer[i+3] = tmp > 0 ? std::min(100, tmp) + 16
                                     : abs(std::max(-100, tmp)) + 144;
    }
    for(int i = 0; i < numHandles; i++){
        fwrite(packageBuffer, 1, sizeof packageBuffer, fHandles[i]);
    }
}
Let me know if you have questions about the above code.

Computing an 8-bit checksum in C++ [closed]

I have the following setup where I am trying to write a custom file header. I have the fields which are described as follows:
// Assume these variables are initialized.
unsigned short x, y, z;
unsigned int dl;
unsigned int offset;
// End assume
uint8_t major_version = 0x31;
uint8_t minor_version = 0x30;
uint32_t data_offset = (uint32_t)offset;
uint16_t size_x = (uint16_t)x;
uint16_t size_y = (uint16_t)y;
uint16_t size_z = (uint16_t)z;
uint32_t data_size = (uint32_t)dl;
Now, what I would like to do is compute an 8-bit header checksum over the fields from major_version through data_size. I am fairly new to this, and something simple would suffice for my current needs.
I'm not sure what you are trying to achieve, but to strictly answer your question, an 8-bit checksum is usually computed as
sum_of_all_elements % 255
Simply put, add all the elements together, and sum % 255 is your checksum.
Watch out for overflows when doing the addition (you can compute partial sums if you run into trouble).
On a side note, an 8-bit checksum is not that good: it won't help you distinguish all cases, and it won't help with data recovery; that's why a 16-bit checksum is usually preferred.
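A minimal sketch of that sum (the name checksum8 is illustrative; it follows the modulo-255 reduction described above and assumes the header fields have already been packed into a contiguous byte buffer):
#include <cstdint>
#include <cstddef>

uint8_t checksum8(const uint8_t* data, size_t length)
{
    uint32_t sum = 0; // wide accumulator, so the byte additions cannot overflow
    for (size_t i = 0; i < length; ++i)
        sum += data[i];
    return static_cast<uint8_t>(sum % 255);
}
To apply it to the header above, copy major_version through data_size into an unsigned char buffer field by field (e.g. with memcpy) and checksum that buffer; summing a struct's bytes directly would also pull in any padding the compiler inserts.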

Calculate_CRC32 function. How do I convert it to calculating bytes and not bits

I am helping out a friend of mine who is a bit stuck, and my own C++ skills are very rusty. My interest and curiosity are quite piqued by this, so I shall try to explain it as best I can. Note it's a 32-bit check.
uint32_t CRC32::calculate_CRC32(const uint32_t* plData, uint32_t lLength, uint32_t previousCrc32)
{
    uint32_t lCount;
    const uint32_t lPolynomial = 0x04C11DB7;
    uint32_t lCrc = previousCrc32;
    unsigned char* plCurrent = (unsigned char*) plData;
    lCrc ^= *plCurrent++;
    while (lLength-- != 0)
    {
        for (lCount = 0; lCount < lLength; lCount++)
        {
            if (lCrc & 1)
                lCrc = (lCrc >> 8) ^ lPolynomial;
            else
                lCrc = lCrc >> 8;
        }
    }
    return lCrc;
}
Now lLength is the number of bytes that the packet contains. plData is the packet whose data needs to be checked. As it is, the function works, but it works bit by bit. It needs to be improved to work byte by byte. So, to all the genius C++ developers out there who far surpass my knowledge: any ideas will be really helpful. Thanks in advance.
Read Ross Williams' excellent tutorial on CRCs, especially section 9 on "A Table-Driven Implementation", which calculates the CRC a byte at a time instead of a bit at a time. You can also look at the somewhat more involved CRC implementation in zlib, which calculates it four bytes at a time. You can also calculate it eight bytes at a time.
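For reference, a sketch of that table-driven, byte-at-a-time approach (note it uses the reflected polynomial 0xEDB88320, the bit-reversed form of the question's 0x04C11DB7, which is the variant zlib and most file formats use; make_crc_table must be called once before the first checksum):
#include <cstdint>
#include <cstddef>

static uint32_t crc_table[256];

// Build the 256-entry lookup table once; entry n is the CRC of the single byte n.
void make_crc_table()
{
    for (uint32_t n = 0; n < 256; ++n) {
        uint32_t c = n;
        for (int k = 0; k < 8; ++k)
            c = (c & 1) ? 0xEDB88320u ^ (c >> 1) : c >> 1;
        crc_table[n] = c;
    }
}

// Process the message one byte at a time: a single table lookup replaces
// the eight per-bit conditional steps of the bitwise version.
uint32_t calculate_CRC32(const unsigned char* data, size_t length, uint32_t previousCrc32)
{
    uint32_t crc = previousCrc32 ^ 0xFFFFFFFFu;
    for (size_t i = 0; i < length; ++i)
        crc = crc_table[(crc ^ data[i]) & 0xFFu] ^ (crc >> 8);
    return crc ^ 0xFFFFFFFFu;
}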