Why does g++ give: "error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]" - c++

I'm making a UDP connection between a microcontroller and a computer.
The framework I'm using is C++ based and has a function to send a UDP packet with the following prototype:
bool UdpConnection::send(const char *data, int length)
int length is the number of bytes that data points to.
But I'm doing some input reading using a function that returns a uint16_t type.
I cannot change anything directly in those two functions.
Then I did the following:
UdpConnection udp;
uint16_t dummy = 256;
udp.send(reinterpret_cast<char*>(dummy),2);
But I'm a curious guy so I tried the following:
UdpConnection udp;
uint16_t dummy = 256;
udp.send((char*)dummy,2);
When I compile this last code I get:
error: cast to pointer from integer of different size [-Werror=int-to-pointer-cast]
In my analysis both snippets do the same thing, so why do I get an error in the last one but not in the first?
EDIT:
The first snippet of code compiles but gives segmentation fault when the code runs. So neither code works.
EDIT 2:
An efficient and tested solution for the problem, but not an answer to the original question, is this:
union Shifter {
    uint16_t b16;
    char b8[2];
} static shifter;

shifter.b16 = 256;
udp.send(shifter.b8, 2);
This solution is widely used, but it's not portable as it depends on CPU byte ordering, so test it in your application to be sure.
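If the byte order on the wire matters, a portable alternative is to build the bytes with shifts, which produces the same output regardless of host endianness. A minimal sketch (buffer name illustrative), sending in big-endian order:

uint16_t dummy = 256;
char buf[2];
buf[0] = static_cast<char>((dummy >> 8) & 0xFF); // high byte first
buf[1] = static_cast<char>(dummy & 0xFF);        // low byte second
udp.send(buf, 2);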

I would assume this is correct:
udp.send(reinterpret_cast<char*>(&dummy),2);
Note the ampersand. Otherwise you are sending two bytes from address 256, which are probably random (at best). This is a microcontroller, so it might not crash.
The second version should be:
udp.send((char*)&dummy,2);

Related

const static char assignment with PROGMEM causes error in avr-g++ 5.4.0

I have a piece of code that was shipped as part of the XLR8 development platform that formerly used a bundled version (4.8.1) of the avr-gcc/g++ compiler. I tried to use the latest version of avr-g++ included with my Linux distribution (Ubuntu 22.04), which is 5.4.0.
When running that compiler, I get the following error, which seems to make sense to me; the error and the chunk of related code are below. In the bundled version of avr-g++ that was provided with the XLR8 board, this was not an error. I'm not sure why, because it appears that the code below is attempting to place 16-bit words into an array of chars.
A couple of questions:
Can anyone explain the reason this worked with previous avr-gcc releases and was not considered an error?
Because of the use of sizeof in the snippet below to control the for loop terminal count, I think the 16-bit size was supposed to be the data type per element of the array. Is that accurate?
If the size of the element was 16 bits, then is the correct fix simply to make that array of type unsigned int rather than char?
/home/rich/.arduino15/packages/alorium/hardware/avr/2.3.0/libraries/XLR8Info/src/XLR8Info.cpp:157:12: error: narrowing conversion of ‘51343u’ from ‘unsigned int’ to ‘char’ inside { } [-Wnarrowing]
0x38BF};
bool XLR8Info::hasICSPVccGndSwap(void) {
  // List of chip IDs from boards that have Vcc and Gnd swapped on the ICSP header
  // Chip ID of affected parts are 0x????6E00. Store the ???? part
  const static char cidTable[] PROGMEM =
    {0xC88F, 0x08B7, 0xA877, 0xF437,
     0x94BF, 0x88D8, 0xB437, 0x94D7, 0x38BF, 0x145F, 0x288F, 0x28CF,
     0x543F, 0x0837, 0xA8B7, 0x748F, 0x8477, 0xACAF, 0x14A4, 0x0C50,
     0x084F, 0x0810, 0x0CC0, 0x540F, 0x1897, 0x48BF, 0x285F, 0x8C77,
     0xE877, 0xE49F, 0x2837, 0xA82F, 0x043F, 0x88BF, 0xF48F, 0x88F7,
     0x1410, 0xCC8F, 0xA84F, 0xB808, 0x8437, 0xF4C0, 0xD48F, 0x5478,
     0x080F, 0x54D7, 0x1490, 0x88AF, 0x2877, 0xA8CF, 0xB83F, 0x1860,
     0x38BF};
  uint32_t chipId = getChipId();
  for (int i = 0; i < sizeof(cidTable)/sizeof(cidTable[0]); i++) {
    uint32_t cidtoTest = (cidTable[i] << 16) + 0x6E00;
    if (chipId == cidtoTest) {return true;}
  }
  return false;
}
As you already pointed out, the array type char definitely looks wrong. My guess is that this is a bug that may have never surfaced in the field. hasICSPVccGndSwap should always return false, so maybe they never used a chip type that had its pins swapped and got away with it.
Can anyone explain the reason this worked with previous avr-gcc releases and was not considered an error?
Yes, the error/warning behavior was changed with version 5.
As of G++ 5, the behavior is the following: When a later standard is in effect, e.g. when using -std=c++11, narrowing conversions are diagnosed by default, as required by the standard. A narrowing conversion from a constant produces an error, and a narrowing conversion from a non-constant produces a warning, but -Wno-narrowing suppresses the diagnostic.
I would've expected v4.8.1 to throw a warning at least, but maybe that has been ignored.
Because of the use of sizeof in the snippet below to control the for loop terminal count, I think the 16 bit size was supposed to be the data type per element of the array. Is that accurate?
Yes, this further supports that the array type should've been uint16_t in the first place.
If the size of the element was 16 bits, then is the correct fix simply to make that array of type int rather than char?
Yes.
Several bugs here. I am not familiar with that software, but there are at least the following obvious bugs:
The element type of cidTable should be a 16-bit, integral type like uint16_t. This follows from the code and also from the comments.
You cannot read from PROGMEM like that. The code will read from RAM, where it uses a flash address to access RAM. Currently, there is only one way to read from flash in avr-g++, which is inline assembly. To make life easier, you can use macros from avr/pgmspace.h like pgm_read_word.
cidTable[i] << 16 is undefined behaviour because a 16-bit type is shifted left by 16 bits: the element is an 8-bit type that is promoted to int, which on AVR is only 16 bits wide. The same problem remains even if the element type is made 16 bits wide.
Taking it all together, in order to make sense in avr-g++, the code would be something like:
#include <avr/pgmspace.h>

bool XLR8Info::hasICSPVccGndSwap()
{
    // List of chip IDs from boards that have Vcc and Gnd swapped on
    // the ICSP header. Chip ID of affected parts are 0x????6E00.
    // Store the ???? part.
    static const uint16_t cidTable[] PROGMEM =
    {
        0xC88F, 0x08B7, 0xA877, 0xF437, ...
    };

    uint32_t chipId = getChipId();
    for (size_t i = 0; i < sizeof(cidTable) / sizeof(*cidTable); ++i)
    {
        uint16_t cid = pgm_read_word(&cidTable[i]);
        uint32_t cidtoTest = ((uint32_t) cid << 16) + 0x6E00;
        if (chipId == cidtoTest)
            return true;
    }
    return false;
}

C++ casting a struct to std::vector<char> memory alignment

I'm trying to cast a struct into a char vector.
I want to send my struct cast into a std::vector through a UDP socket and cast it back on the other side. Here is my struct with the PACK attribute.
#define PACK( __Declaration__ ) __pragma( pack(push, 1) ) __Declaration__ __pragma( pack(pop) )
PACK(struct Inputs
{
    uint8_t structureHeader;
    int16_t x;
    int16_t y;
    Key inputs[8];
});
Here is the test code:
auto const ptr = reinterpret_cast<char*>(&in);
std::vector<char> buffer(ptr, ptr + sizeof in);
//send and receive via udp
Inputs* my_struct = reinterpret_cast<Inputs*>(&buffer[0]);
The issue is: everything works fine except my uint8_t or int8_t. I don't know why, but whenever and wherever I put a 1-byte value in the struct, when I cast it back the value is not readable (but the others are).
I tried to put only 16-bit values and it works just fine, even with the maximum values, so all bits are OK.
I think this is something with the alignment of the bytes in memory, but I can't figure out how to make it work.
Thank you.
I'm trying to cast a struct into a char vector.
You cannot cast an arbitrary object to a vector. You can cast your object to an array of char and then copy that array into a vector (which is actually what your code is doing).
auto const ptr = reinterpret_cast<char*>(&in);
std::vector<char> buffer(ptr, ptr + sizeof in);
That second line defines a new vector and initializes it by copying the bytes that represent your object into it. This is reasonable, but it's distinct from what you said you were trying to do.
I think this is something with the alignment of the bytes in the memory
This is good intuition. If you hadn't told the compiler to pack the struct, it would have inserted padding bytes to ensure each field starts at its natural alignment. The fact that the operation isn't reversible suggests that somehow the receiving end isn't packed exactly the same way. Are you sure the receiving program has exactly the same packing directive and struct layout?
On x86, you can get by with unaligned data, but you may pay a large performance cost whenever you access an unaligned member variable. With the packing set to one, and the first field being odd-sized, you've guaranteed that the next fields will be unaligned. I'd urge you to reconsider this. Design the struct so that all the fields fall at their natural alignment boundaries and that you don't need to adjust the packing. This may make your struct a little bigger, but it will avoid all the alignment and performance problems.
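As a sketch of what that might look like for the Inputs struct above, assuming Key is a one-byte type (the question doesn't show its definition) and accepting that this changes the wire layout, which both ends must then agree on:

struct Inputs
{
    int16_t x;               // 2-byte fields first, at offsets 0 and 2
    int16_t y;
    uint8_t structureHeader; // 1-byte fields afterwards, at offset 4
    Key inputs[8];           // offsets 5..12 if Key is one byte
};                           // every field naturally aligned; at most tail padding remains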
If you want to omit the padding bytes in your wire format, you'll have to copy the relevant fields byte by byte into the wire format and then copy them back out on the receiving end.
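A minimal sketch of that approach for the Inputs struct above, again assuming Key converts to a single byte, with multi-byte fields written in an explicit little-endian order:

#include <cstdint>
#include <vector>

// Sketch: copy each field into the wire buffer explicitly, so no padding
// bytes are ever transmitted and the byte order is fixed.
std::vector<char> toWire(const Inputs& in)
{
    std::vector<char> out;
    out.push_back(static_cast<char>(in.structureHeader));
    const uint16_t ux = static_cast<uint16_t>(in.x); // avoid shifting a negative value
    out.push_back(static_cast<char>(ux & 0xFF));     // x, low byte
    out.push_back(static_cast<char>(ux >> 8));       // x, high byte
    const uint16_t uy = static_cast<uint16_t>(in.y);
    out.push_back(static_cast<char>(uy & 0xFF));
    out.push_back(static_cast<char>(uy >> 8));
    for (const Key& k : in.inputs)
        out.push_back(static_cast<char>(k));         // assumes Key fits in one byte
    return out;
}

The receiving end reads the bytes back in the same order, so neither compiler's struct layout ever touches the wire format.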
An aside regarding:
#define PACK( __Declaration__ ) __pragma( pack(push, 1) ) __Declaration__ __pragma( pack(pop) )
Identifiers that begin with underscore and a capital letter or with two underscores are reserved for "the implementation," so you probably shouldn't use __Declaration__ as the macro's parameter name. ("The implementation" refers to the compiler, the standard library, and any other runtime bits the compiler requires.)
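For example, the same macro with a non-reserved parameter name (purely a rename; the expansion is unchanged):

#define PACK( Declaration ) __pragma( pack(push, 1) ) Declaration __pragma( pack(pop) )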
The vector class has dynamically allocated memory and uses pointers internally, so you can't send the vector object itself (but you can send the underlying array).
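For instance, using the buffer from the question (a sketch; the socket arguments are elided with ... just as in the snippets below):

// Sketch: send the vector's contiguous storage, not the vector object.
sendto(fd, buffer.data(), buffer.size(), ..., ..., ...);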
SFML has a great class for doing this called sf::Packet. It's free, open source, and cross-platform.
I was recently working on a personal cross-platform socket library for use in other personal projects, and I eventually quit it for SFML. There's just TOO much to test; I was spending all my time testing to make sure stuff worked and not getting any work done on the actual projects I wanted to do.
memcpy is your best friend. It is designed to be portable, and you can use that to your advantage.
You can use it to debug. memcpy the thing you want to see into a char array and check that it matches what you expect.
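For example, a small debugging sketch along those lines (names illustrative) that copies an object's bytes out with memcpy and prints them in hex, so you can verify the layout matches what you expect:

#include <cstdio>
#include <cstring>

// Sketch: dump the raw bytes of an object to inspect its actual layout.
void dumpBytes(const void* obj, std::size_t n)
{
    unsigned char bytes[64];        // assumes n <= 64 for this sketch
    std::memcpy(bytes, obj, n);
    for (std::size_t i = 0; i < n; ++i)
        std::printf("%02X ", bytes[i]);
    std::printf("\n");
}

// Usage: dumpBytes(&in, sizeof in);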
To save yourself from having to do tons of robustness testing, limit yourself to only chars, 32-bit integers, and 64-bit doubles. You're using different compilers? Struct packing is compiler and architecture dependent. If you have to use a packed struct, you need to guarantee that the packing works as expected on all platforms you will be using, and that all platforms have the same endianness. Obviously, that's what you're having trouble with, and I'm sorry I can't help you more with that. I would recommend regular serializing and would definitely avoid struct packing if I were trying to make portable sockets.
If you can make those guarantees that I mentioned, sending is really easy on Linux.
// POSIX
void send(int fd, Inputs& input)
{
    int error = sendto(fd, &input, sizeof(input), ..., ..., ...);
    ...
}
winsock2 uses a char* instead of a void* :(
void send(int fd, Inputs& input)
{
    char buf[sizeof(input)];
    memcpy(buf, &input, sizeof(input));
    int error = sendto(fd, buf, sizeof(input), ..., ..., ...);
    ...
}
Did you try the simplest approach of:
unsigned char *pBuff = (unsigned char*)&in;
for (unsigned int i = 0; i < sizeof(Inputs); i++) {
    vecBuffer.push_back(*pBuff);
    pBuff++;
}
This would work for both packed and non-packed structs, since you iterate over sizeof(Inputs) bytes.

Bitfield value changes when sent over socket C++

I have a bitfield that looks like the following:
typedef struct __attribute__((__packed__)) MyStruct {
    unsigned int val1:14;
    unsigned int val2:1;
    unsigned int val3:1;
    unsigned int val4:1;
    unsigned int val5:1;
    unsigned short aFullShort;
    unsigned int aFullInt;
} MyStruct;
I am sending these values over the network and have noticed that sometimes the sender will think that val1 is set (verifiable by printing the value prior to sending) but the receiver will not see that val1 is set. The code for transmission is as follows:
MyStruct* myStruct = new MyStruct(); //initialize fields here
sendto(sock, myStruct, sizeof(MyStruct), 0, ...);
The code for reading is as follows:
unsigned char theBuffer[sizeof(MyStruct)];
recvfrom(aSocket, &theBuffer, sizeof(theBuffer), 0, ...);
After reading in the bytes from the socket, I reinterpret cast the bytes to a MyStruct and perform endian conversion for aFullShort and aFullInt. The corruption occurs such that the receiver thinks that val1 is 0 when the sender set it to 1. Why might this happen? Might the compiler be inserting different padding for the sender and receiver? Do I need to worry about the endianness of the single bit values?
The compiler can lay out bit fields however it wants. It can randomize them on every execution if it wants to. There is absolutely no rule that prohibits this. If you want to serialize data in a predictable binary format that you can rely on, write code that does that.
The sole exception would be if your compiler has some specified guarantee for packed structs and you are willing to confine yourself to only that compiler. You don't specify the compiler you're using, but I doubt that it does.
There is really no reason to write code like this. You want to write code that is guaranteed to work by the relevant standard(s), not code that might happen to work if nothing happens to break the assumptions it makes.
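As a sketch of what such code could look like for the MyStruct above: choose a wire layout yourself and pack the bits into bytes explicitly. The layout below (val1 in the low 14 bits of the first two bytes, multi-byte fields big-endian) is a choice made for this example, not anything the compiler guarantees for the original bit field:

#include <cstdint>

// Sketch: serialize MyStruct into a 9-byte, compiler-independent format.
void serialize(const MyStruct& s, uint8_t out[9])
{
    const uint16_t packed = (uint16_t)((s.val1 & 0x3FFFu)
                          | ((s.val2 & 1u) << 14)
                          | ((s.val3 & 1u) << 15));
    out[0] = (uint8_t)(packed >> 8);         // val1 high bits, val2, val3
    out[1] = (uint8_t)(packed & 0xFF);       // val1 low bits
    out[2] = (uint8_t)((s.val4 & 1u) | ((s.val5 & 1u) << 1));
    out[3] = (uint8_t)(s.aFullShort >> 8);   // big-endian short
    out[4] = (uint8_t)(s.aFullShort & 0xFF);
    out[5] = (uint8_t)(s.aFullInt >> 24);    // big-endian int
    out[6] = (uint8_t)((s.aFullInt >> 16) & 0xFF);
    out[7] = (uint8_t)((s.aFullInt >> 8) & 0xFF);
    out[8] = (uint8_t)(s.aFullInt & 0xFF);
}

The receiver unpacks with the mirror-image shifts, so both ends agree on the format no matter how either compiler lays out the bit field.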

Pointer arithmetic and portability

I'm writing an application and I had to do some pointer arithmetic. However, this application will be running on different architectures! I was not really sure if this would be problematic, but after reading this article, I thought I had to change it.
Here was my original code that I didn't like much:
class Frame {
    /* ... */
protected:
    const u_char* const m_pLayerHeader; // Where header of this layer starts
    int m_iHeaderLength;                // Length of the header of this layer
    int m_iFrameLength;                 // Header + payloads length
};

/**
 * Get the pointer to the payload of the current layer
 * @return A pointer to the payload of the current layer
 */
const u_char* Frame::getPayload() const
{
    // FIXME : Pointer arithmetic, portability!
    return m_pLayerHeader + m_iHeaderLength;
}
Pretty bad, isn't it? Adding an int value to a u_char pointer! But then I changed it to this:
const u_char* Frame::getPayload() const
{
    return &m_pLayerHeader[m_iHeaderLength];
}
I think now the compiler is able to tell how much to jump, right? Is the [] operation on an array considered pointer arithmetic? Does it fix the portability problem?
p + i and &p[i] are synonyms when p is a pointer and i a value of integral type. So much so that you can even write &i[p] and it's still valid (just as you can write i + p).
The portability issue in the example you link was coming from sizeof(int) varying across platforms. Your code is just fine, assuming m_iHeaderLength is the number of u_chars you want to skip.
In your code you are advancing m_pLayerHeader by m_iHeaderLength u_chars. As long as whatever wrote the data you are pointing into has the same size for u_char, and m_iHeaderLength is the number of u_chars in the header area, you are safe.
But if m_iHeaderLength really refers to bytes, not u_chars, then you may have a problem if m_iHeaderLength is supposed to advance the pointer past types other than char.
Say you are sending data from a 16-bit system to a 32-bit system, your header area is defined like this
struct Header {
    int something;
    int somethingElse;
};
Assume that is only part of the total message defined by the struct Frame.
On the 32-bit machine you write the data out to a port that the 16-bit machine will read from.
port->write(myPacket, sizeof(Frame));
On the 16-bit machine you have the same Header definition, and try to read the information.
port->read(packetBuffer, sizeof(Frame));
You are already in trouble because the sender wrote twice the amount of data you will read: the size of int on the 16-bit machine doing the reading is two, so its header is four bytes, but the header size was eight on the sending machine, two ints of four bytes each.
Now you attempt to advance your pointer
m_iHeaderLength = sizeof(Header);
...
packetBuffer += m_iHeaderLength;
packetBuffer will still be pointing into the data which was in the header in the frame sent from the originator.
If there is a portability problem, then no, that wouldn't fix it. m_pLayerHeader + m_iHeaderLength and &m_pLayerHeader[m_iHeaderLength] are completely equivalent (in this case).

Read Bytes off a Serial Device (and make sense of them??)

I'm pulling my hair out trying to figure out how to read bytes off a serial device, check a checksum, and then convert them into something that I can actually read.
I have a device which "should" be sending me various messages, each started with the byte $83 and ended with the byte $84. The second to last byte is supposedly a checksum, generated by XORing all the other values together and comparing.
The actual values coming back should be alphanumeric, but I can't make heads or tails of the data. I'm newish to C++ - I'm sure that's not helping.
I've read several guides on serial programming, but I'm lost.
Can anyone help me, link me, or show me how to read bytes off a serial device, watch for $83 and $84, and then make sense of the data in between?
Here is the format of each message:
$FF byte Destination Address
$10 byte Message Length 16 Bytes
$37 byte Message Type
$00 byte Message subtype
BankAngle int -179 to +180
PitchAngle int -90 to +90
YawAngle int -179 to +180
Slip sint -50 to +50
GForce fps 0 to 6G
MISC byte Mode bits
Heading word 0 to 359
N/A not used
Voltage byte input voltage
This is all coming off an MGL SP-4 AHRS, and for ease of use I am targeting a Linux system, specifically Ubuntu. I am using the GCC compiler and the Eclipse CDT for development.
Where I'm lost
I can read the data into a buffer, but then I'm not versed enough in C++ to make sense of it after that, since it's not ASCII. I'm interested in learning what I need to know, but I don't know what I need to know.
I have a Perl / Java background.
Accomplishing this is going to be wholly dependent on the Operating System and platform that you target. Since the device you mention is mounted internally to an aircraft in the general use-case, I will assume you are not targeting a Windows platform, but more likely a Linux or embedded system. There are a number of resources available for performing serial I/O on such platforms (for example: the Serial Programming HOW-TO) that you should look at. Additionally, as suggested in the device's Installation Manual (available here about halfway down the page), you should "Consult the SP-4 OEM manual for message formats and message type selection." I suspect you will obtain the most relevant and useful information from that document. You may want to check if the manufacturer provides an API for your platform, as that would negate the need for you to implement the actual communication routine.
As far as making sense of the data, once you can read bytes from your serial interface, you can leverage structs and unions to make accessing your data more programmer-friendly. For the rough message outline you provided, something like this might be appropriate:
struct _message
{
    uint8_t  DestinationAddress;
    uint8_t  MessageLength;
    uint8_t  MessageType;
    uint8_t  MessageSubtype;
    int32_t  BankAngle;            // assuming an int is 32 bits
    int32_t  PitchAngle;
    int32_t  YawAngle;
    sint_t   Slip;                 // not sure what a 'sint' is
    fps_t    GForce;               // likewise 'fps'
    uint8_t  MISC;
    uint16_t Heading;              // assuming a word is 16 bits
    uint8_t  Unused[UNUSED_BYTES]; // however many there are
    uint8_t  Voltage;              // a byte per the message format
};
struct myMessage
{
    union
    {
        char raw[MAX_MESSAGE_SIZE]; // sizeof(largest possible message)
        struct _message message;
    };
};
This way, if you were to declare struct myMessage serialData;, you can read your message into serialData.raw, and then conveniently access its members (e.g. serialData.message.DestinationAddress).
Edit: In response to your edit, I'll provide an example of how to make sense of your data. This example supposes there is only one message type you have to worry about, but it can be easily extended to other types.
struct myMessage serialData;
memcpy(serialData.raw, serialDataBuffer, MAX_MESSAGE_SIZE); // copy data from your buffer
if (serialData.message.MessageType == SOME_MESSAGE_TYPE)
{
    // you have usable data here.
    printf("I am a SOME_MESSAGE!\n");
}
Now, supposing that these integral types are really only useful for data transmission, you need to translate these bits into "usable data". Say one of these fields is actually an encoded floating-point number. One common scheme is to select a bit-weight (sometimes also called resolution). I don't know if this is directly applicable to your device, or if it is what the real values are, but let's say, for the sake of discussion, that the YawAngle field had a resolution of 0.00014 degrees/bit. To translate the value in your message (serialData.message.YawAngle) from its int32_t value to a double, for example, you might do this:
double YawAngleValue = 0.00014 * serialData.message.YawAngle;
...and that's about it. The OEM manual should tell you how the data is encoded, and you should be able to work out how to decode it from there.
Now, let's say you've got two message types to handle. The one I've already shown you, and a theoretical CRITICAL_BITS message. To add that type using the scheme I've laid out, you would first define the CRITICAL_BITS structure (perhaps as follows):
struct _critical_bits
{
    uint8_t  DestinationAddress;
    uint8_t  MessageLength;
    uint8_t  MessageType;
    uint8_t  MessageSubtype;
    uint32_t SomeCriticalData;
};
...and then add it to the struct myMessage definition like so:
struct myMessage
{
    union
    {
        char raw[MAX_MESSAGE_SIZE]; // sizeof(largest possible message)
        struct _message message;
        struct _critical_bits critical_message;
    };
};
...then you can access the SomeCriticalData just like the other fields.
if (serialData.message.MessageType == CRITICAL_MESSAGE_TYPE)
{
    uint32_t critical_bits = serialData.critical_message.SomeCriticalData;
}
You can find a little more information on how this works by reading about structs. Bear in mind that instances of the struct myMessage type will only ever contain one set of meaningful data at a time. Put more simply, if serialData contains CRITICAL_MESSAGE_TYPE data, then the data in serialData.critical_message is valid, but serialData.message is not, even though the language does not prevent you from accessing that data if you request it.
Edit: One more example; to calculate the checksum of a message, using the algorithm you've specified, you would probably want something like this (assuming you already know the message is completely within the buffer):
uint8_t calculate_checksum(struct myMessage *data)
{
    uint8_t number_bytes = data->message.MessageLength;
    uint8_t checksum = 0;
    int i;
    for (i = 0; i < number_bytes; ++i)
    {
        // this performs a XOR with checksum and the byte
        // in the message at offset i
        checksum ^= data->raw[i];
    }
    return checksum;
}
You might need to adjust that function for bytes that aren't included, check to make sure that data != NULL, etc. but it should get you started.
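To tie this back to watching for the $83 start byte and $84 end byte, here is a sketch of scanning a raw buffer for one complete, checksum-valid frame. The start/end/checksum rules come from your description; whether the start byte itself feeds the XOR, and how payload bytes that happen to equal $84 are escaped, is something the OEM manual has to answer:

#include <stdint.h>
#include <stddef.h>

#define FRAME_START 0x83
#define FRAME_END   0x84

/* Sketch: return a pointer to the payload of the first frame in buf whose
   XOR checksum (the byte just before the end byte) matches, or NULL. */
const uint8_t* find_frame(const uint8_t* buf, size_t len, size_t* payload_len)
{
    for (size_t start = 0; start < len; ++start) {
        if (buf[start] != FRAME_START)
            continue;
        for (size_t end = start + 2; end < len; ++end) {
            if (buf[end] != FRAME_END)
                continue;
            uint8_t checksum = 0;
            for (size_t i = start + 1; i < end - 1; ++i)
                checksum ^= buf[i];          /* XOR the bytes in between */
            if (checksum == buf[end - 1]) {  /* matches the transmitted checksum */
                *payload_len = end - 1 - (start + 1);
                return buf + start + 1;
            }
        }
    }
    return NULL;
}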