I am facing an issue while writing network software. When I try to send or receive a struct that contains an 8-byte data type, the next struct sent or received is somehow affected. I have a few ideas, but first I wanted to confirm one thing before I start debugging.
I am using 32-bit Ubuntu 11.04 (silly me) on a 64-bit x86 system. Does this have anything to do with byte alignment problems?
I am developing a controller to communicate with an OpenFlow switch. The OpenFlow protocol defines a set of specs based on which switches are built. The problem is that when I try to communicate with the switch, everything goes fine until I send or receive a struct that contains a 64-bit data type (uint64_t). The specific structs used for sending and receiving features are:
struct ofp_header {
    uint8_t version;    /* OFP_VERSION. */
    uint8_t type;       /* One of the OFPT_ constants. */
    uint16_t length;    /* Length including this ofp_header. */
    uint32_t xid;       /* Transaction id associated with this packet.
                           Replies use the same id as was in the request
                           to facilitate pairing. */
};
assert(sizeof(struct ofp_header) == 8);
/* Switch features. */
struct ofp_switch_features {
    struct ofp_header header;
    uint64_t datapath_id;   /* Datapath unique ID. The lower 48 bits are for
                               a MAC address, while the upper 16 bits are
                               implementer-defined. */
    uint32_t n_buffers;     /* Max packets buffered at once. */
    uint8_t n_tables;       /* Number of tables supported by datapath. */
    uint8_t pad[3];         /* Align to 64 bits. */
    /* Features. */
    uint32_t capabilities;  /* Bitmap of supported "ofp_capabilities". */
    uint32_t actions;       /* Bitmap of supported "ofp_action_type"s. */
    /* Port info. */
    struct ofp_phy_port ports[0];  /* Port definitions. The number of ports
                                      is inferred from the length field in
                                      the header. */
};
assert(sizeof(struct ofp_switch_features) == 32);
The problem: when I communicate using any other structs whose fields are all smaller than 64 bits, everything goes fine. When I receive the features reply it shows the right values, but any struct I receive after that shows garbage values. Even if I receive the features reply again, I get garbage values. In short, at any point in the code where I receive the features reply or any other struct defined in the specs that contains a 64-bit data type, the structs received after it contain garbage. The code used for sending and receiving the features request is as follows:
////// features request and reply ////////////
ofp_header features_req;
features_req.version=OFP_VERSION;
features_req.type=OFPT_FEATURES_REQUEST;
features_req.length= htons(sizeof features_req);
features_req.xid = htonl(rcv_hello.xid);
if (send(connected, &features_req, sizeof(features_req), 0)==-1) {
printf("Error in sending message\n");
exit(-1);
}
printf("features req sent!\n");
ofp_switch_features features_rep={0};
if (recv(connected, &features_rep, sizeof(features_rep), 0)==-1) {
printf("Error in receiving message\n");
exit(-1);
}
printf("message type : %d\n",features_rep.header.type);
printf("version : %d\n",features_rep.header.version);
printf("message length: %d\n",ntohs(features_rep.header.length));
printf("xid : %d\n",ntohl(features_rep.header.xid));
printf("buffers: %d\n",ntohl(features_rep.n_buffers));
printf("tables: %d\n",features_rep.n_tables);
Convert your struct into an array of characters before sending it - this is called serialisation.
Use the htons family of functions to ensure that integers are sent in network byte order. This saves hassle with the endianness of the various machines.
On the receiving end, read the bytes and reconstruct the struct.
This will ensure that you will not have any hassle at all.
I got help from Daniweb.com and all credit goes to a user with the nick NEZACHEM. His answer was, and I quote:
The problem has nothing to do with 64 bit types.
Values you read are not garbage, but very valuable port definitions:
struct ofp_phy_port ports[0]; /* Port definitions. The number of ports is inferred from the length field in the header. */
Which means, once you've
recv(connected, &features_rep, sizeof(features_rep), 0)
you need to inspect features_rep.header.length,
figure out how many struct ofp_phy_port follow,
allocate memory for them and read those data.
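In code, those steps might look roughly like this (a sketch; the helper name is mine, and the fixed-size and port-size values come from the structs above):

```c
#include <stddef.h>
#include <stdint.h>
#include <arpa/inet.h>

/* How many ofp_phy_port entries follow the fixed part of the reply?
 * `wire_length' is header.length exactly as received (network order). */
size_t trailing_ports(uint16_t wire_length, size_t fixed_size, size_t port_size)
{
    size_t total = ntohs(wire_length);  /* length of the whole message */
    if (total <= fixed_size)
        return 0;
    return (total - fixed_size) / port_size;
}
```

After the first recv, `n = trailing_ports(features_rep.header.length, sizeof features_rep, sizeof(struct ofp_phy_port))`, then malloc `n * sizeof(struct ofp_phy_port)` bytes and recv the remaining data into that buffer.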
I did that, and thanks to him my problems were solved and all went well :)
Thanks to everyone who replied.
cheers :)
You could even consider using serialization techniques: JSON, XDR, or YAML could be relevant, or libraries like s11n, jansson, etc.
Here is what you want:
features_req.version=OFP_VERSION;
features_req.type=OFPT_FEATURES_REQUEST;
features_req.length= htons(sizeof features_req);
features_req.xid = htonl(rcv_hello.xid);
char data[8];
data[0] = features_req.version;
data[1] = features_req.type;
memcpy(data + 2, &features_req.length, 2);
memcpy(data + 4, &features_req.xid, 4);
if (send(connected, data, 8, 0) ....
On the receiving end:
char data[8];
if (recv(connected, data, 8, 0) ...
features_req.version = data[0];
features_req.type = data[1];
memcpy(&features_req.length, data + 2, 2);
memcpy(&features_req.xid, data + 4, 4);
features_req.length = ntohs(features_req.length);
features_req.xid= ntohl(features_req.xid);
1. In case you stick to sending the structures, you should make sure they are byte-aligned.
To do so, use the pragma pack like this:
#pragma pack(1)
struct mystruct{
uint8_t myint8;
uint16_t myint16;
};
#pragma pack()
Doing so makes sure this structure uses only 3 bytes.
2. For converting 64-bit values from host order to network order, this post is an interesting read: Is there any "standard" htonl-like function for 64 bits integers in C++? (despite the C++ in the title, it covers C as well)
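If you don't want to depend on a platform-specific htonll, a common do-it-yourself version (one of the tricks discussed in that post) builds the 64-bit swap out of two htonl() calls. The function name here is made up:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* 64-bit host-to-network conversion built from two htonl() calls.
 * On a big-endian host htonl() is the identity, so nothing changes;
 * on a little-endian host both 32-bit halves are swapped and exchanged. */
uint64_t htonll_portable(uint64_t value)
{
    if (htonl(1) != 1) {  /* little-endian host */
        return ((uint64_t)htonl((uint32_t)(value & 0xFFFFFFFFu)) << 32)
             | htonl((uint32_t)(value >> 32));
    }
    return value;
}
```

The same function converts back again (an ntohll), since the swap is its own inverse.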
Related
I have data which includes 32-bit scaled integers for latitude and longitude, 32-bit floats for x, y, and z velocity, and some 1-bit flags. I am also successfully sending at least one double over UDP, but it appears as trash on the other end and in Wireshark.

I need bitwise access to toggle/change the data every second that I send it out over UDP. The message format is simple, and each data type takes up its size and has a certain start bit. It seems like I need to serialize, but is it possible without the other end necessarily deserializing?

My first thought was to make a structure with all the data and then send the whole structure over UDP, but with padding that seems wrong. The message needs to be 64 bytes long, and each data type has a start bit and a length in the format.

This code compiles and works, but with an unreadable latitude on the receiving side. Lat, long, and alt update every second. There are more data fields, but I am trying to start small. A snippet of the format is attached, with the data type, word location, start bit within that word, and length in bits; lat, long, and alt are somewhere else within the message. Data Format

My best idea right now is to use Cereal (http://uscilab.github.io/cereal/index.html), but I'm not sure if that's a dead end.
struct TSPIstruct //
{
double Latitude;
double Longitude;
double Altitude;
};
void SendDataUDP(double latitude, double longitude, double altitude)
{
TSPIstruct TSPI;
TSPI = CompileTSPI(latitude, longitude, altitude);
//printf("\n\r");
printf("Sending GPS over UDP at %d", LastSystemTime);
printf("\n\r");
printf("\n\r");
//send the message
if (sendto(s, (char*)&(TSPI.Latitude), sizeof(double) , 0 , (struct sockaddr *) &si_other, slen) == SOCKET_ERROR)
{
printf("sendto() failed with error code : %d" , WSAGetLastError());
exit(EXIT_FAILURE);
}
printf("UDP packet sent, latitude is %f", TSPI.Latitude);
printf("\n\r");
printf("\n\r");
}
TSPIstruct CompileTSPI(double latitude, double longitude, double altitude)
{
TSPIstruct TSPI;
TSPI.Latitude = latitude;
TSPI.Longitude = longitude;
TSPI.Altitude = altitude;
return TSPI;
}
Maybe a stupid question, but when sending binary data you must consider not ONLY padding/alignment, but also byte ordering.
You can skip the latter point if you are sure the µP architecture is the same on both sides. In TCP, since the old times, they added:
uint32_t htonl(uint32_t host32bitvalue);
and so on...
If you try on the SAME local machine (i.e. 127.0.0.1), does it work?
Paste both client AND server code; I will try under Unix and Windows.
:)
Good day.
I am sending a custom protocol for logging via TCP which looks like this:
Timestamp (uint32_t -> 4 bytes)
Length of message (uint8_t -> 1 byte)
Message (char -> Length of message)
The Timestamp is converted to BigEndian for the transport and everything goes out correctly, except for one little detail: Padding
The Timestamp is sent on its own, however instead of just sending the timestamp (4 bytes) my application (using BSD sockets under Ubuntu) automatically appends two bytes of padding to the message.
Wireshark recognizes this correctly and marks the two extraneous bytes as padding, however the QTcpSocket (Qt 5.8, mingw 5.3.0) apparently assumes that the two extra bytes are actually payload, which obviously messes up my protocol.
Is there any way for me to 'teach' QTcpSocket to ignore the padding (like it should) or any way to get rid of the padding?
I'd like to avoid the whole 'create a sufficiently large buffer and preassemble the entire packet in it so it will be sent out in one go' method if possible.
Thank you very much.
Because it was asked, the code used to send the data is:
return
C->sendInt(entry.TS) &&
C->send(&entry.LogLen, 1) &&
C->send(&entry.LogMsg, entry.LogLen);
where sendInt is declared as (Src being the parameter):
Src = htonl(Src);
return send(&Src, 4);
where 'send' is declared as (Source and Len being the parameters):
char *Src = (char *)Source;
while(Len) {
int BCount = ::send(Sock, Src, Len, 0);
if(BCount < 1) return false;
Src += BCount;
Len -= BCount;
}
return true;
::send is the standard BSD send function.
Reading is done via QTcpSocket:
uint32_t timestamp;
if (Sock.read((char *)&timestamp, sizeof(timestamp)) > 0)
{
uint8_t logLen;
char message[256];
if (Sock.read((char *)&logLen, sizeof(logLen)) > 0 &&
logLen > 0 &&
Sock.read(message, logLen) == logLen
) addToLog(qFromBigEndian(timestamp), message);
}
Sock is the QTcpSocket instance, already connected to the host and addToLog is the processing function.
Also to be noted: the sending side needs to run on an embedded system, so using QTcpServer is therefore not an option.
Your read logic appears to be incorrect. You have...
uint32_t timestamp;
if (Sock.read((char *)&timestamp, sizeof(timestamp)) > 0)
{
uint8_t logLen;
char message[256];
if (Sock.read((char *)&logLen, sizeof(logLen)) > 0 &&
logLen > 0 &&
Sock.read(message, logLen) == logLen
) addToLog(qFromBigEndian(timestamp), message);
}
From the documentation for QTcpSocket::read(data, maxSize), it...
Reads at most maxSize bytes from the device into data, and returns the
number of bytes read
What if one of your calls to Sock.read reads partial data? You essentially discard that data rather than buffering it for reuse next time.
Assuming you have a suitably scoped QByteArray...
QByteArray data;
your reading logic should be more along the lines of...
/*
* Append all available data to `data'.
*/
data.append(Sock.readAll());
/*
* Now repeatedly read/trim messages from data until
* we have no further complete messages.
*/
while (contains_complete_log_message(data)) {
auto message = read_message_from_data(data);
data = data.right(data.size() - message.size());
}
/*
* At this point `data' may be non-empty but doesn't
* contain enough data for a complete message.
*/
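For the timestamp/length/message protocol in this question, the "repeatedly read/trim" step could be sketched in plain C like this (the function name is illustrative; it works on whatever bytes you have accumulated so far):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Wire format: 4-byte big-endian timestamp, 1-byte length, then
 * `length' bytes of message text. Returns the number of bytes
 * consumed, or 0 if `buf' does not yet hold a complete message
 * (in which case the caller keeps appending and retries later). */
size_t try_parse_log(const unsigned char *buf, size_t avail,
                     uint32_t *ts, char *msg, size_t msg_cap)
{
    if (avail < 5)              /* timestamp + length not complete */
        return 0;
    uint8_t len = buf[4];
    if (avail < 5u + len)       /* message text not complete */
        return 0;
    uint32_t be;
    memcpy(&be, buf, 4);
    *ts = ntohl(be);            /* plays the role of qFromBigEndian */
    size_t n = (len < msg_cap - 1) ? len : msg_cap - 1;
    memcpy(msg, buf + 5, n);
    msg[n] = '\0';
    return 5u + len;
}
```

The caller drops the consumed bytes from the front of the buffer and calls it again until it returns 0.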
If the length of the padding is always fixed then just add socket->read(2); to ignore the 2 bytes.
On the other hand it might be just the tip of the iceberg. What are you using to read and write?
You should not invoke send three times but only once. For the conversion to big-endian you can use the Qt functions, write everything into a single buffer, and call send just once. It may not be what you want, but I assume it is what you'll need to do, and it should be easy since you already know the size of your message. You also will not need to leave the Qt world for sending the messages.
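A sketch of that single-buffer approach, using the same C-style send path as the question (names are mine):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Pack timestamp + length + message into one contiguous buffer so a
 * single send() emits the frame with no struct padding involved.
 * Returns the frame size, or 0 if `out' is too small. */
size_t pack_log_frame(unsigned char *out, size_t cap,
                      uint32_t ts, const char *msg, uint8_t msg_len)
{
    size_t need = 4u + 1u + msg_len;
    if (cap < need)
        return 0;
    uint32_t be = htonl(ts);       /* big-endian on the wire */
    memcpy(out, &be, 4);
    out[4] = msg_len;
    memcpy(out + 5, msg, msg_len);
    return need;                   /* hand `out'/`need' to one send() */
}
```

The returned frame goes to a single send() call, so nothing the compiler adds between struct members ever reaches the wire.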
I'm writing something server-client related, and I have this code snippet here:
char serverReceiveBuf[65536];
client->read(serverReceiveBuf, client->bytesAvailable());
handleConnection(serverReceiveBuf);
that reads data whenever a readyRead() signal is emitted by the server. Using bytesAvailable() is fine when I test on my local network since there's no latency, but when I deploy the program I want to make sure the entire message is received before I handleConnection().
I was thinking of ways to do this, but read and write only accept chars, so the maximum message size indicator I can send in one char is 127. I want the maximum size to be 65536, but the only way I can think of doing that is have a size-of-size-of-message variable first.
I reworked the code to look like this:
char serverReceiveBuf[65536];
char messageSizeBuffer[512];
int messageSize = 0, i = 0; //max value of messageSize = 65536
client->read(messageSizeBuffer,512);
while((int)messageSizeBuffer[i] != 0 && i < 512){
messageSize += (int)messageSizeBuffer[i];
i++;
//client will always send 512 bytes for size of message size
//if message size < 512 bytes, rest of buffer will be 0
}
client->read(serverReceiveBuf, messageSize);
handleConnection(serverReceiveBuf);
but I'd like a more elegant solution if one exists.
It is a very common technique when sending messages over a stream to send a fixed-sized header before the message payload. This header can include many different pieces of information, but it always includes the payload size. In the simplest case, you can send the message size encoded as a uint16_t for a maximum payload size of 65535 (or uint32_t if that's not sufficient). Just make sure you handle byte ordering with ntohs and htons.
uint16_t messageSize;
client->read((char*)&messageSize, sizeof(uint16_t));
messageSize = ntohs(messageSize);
client->read(serverReceiveBuf, messageSize);
handleConnection(serverReceiveBuf);
read and write work with byte streams. It does not matter to them if the bytes are chars or any other form of data. You can send a 4-byte integer by casting its address to char* and sending 4 bytes. On the receiving end cast the 4 bytes back to an int. (If the machines are of different types you may also have endian issues, requiring the bytes to be rearranged into an int. See htonl and its cousins.)
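For example, a minimal sketch of that cast-and-copy idea, with htonl/ntohl handling the endian issue mentioned above (helper names are mine):

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* A 4-byte integer is just 4 bytes to a byte-stream write: convert it
 * to network order, then treat its address as a character buffer. */
void put_u32(unsigned char *wire, uint32_t value)
{
    uint32_t be = htonl(value);
    memcpy(wire, &be, sizeof be);   /* what send((char*)&be, 4, ...) ships */
}

uint32_t get_u32(const unsigned char *wire)
{
    uint32_t be;
    memcpy(&be, wire, sizeof be);
    return ntohl(be);               /* back to host order */
}
```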
For a small tool that I am building for OS X, I want to capture the lengths of packets sent and received by a certain ethernet controller.
When I fetch the ethernet cards I also get extra information like maximum packet sizes, link speeds etc.
When I start the (what I call) 'trafficMonitor' I launch it like this:
static void initializeTrafficMonitor(const char* interfaceName, int packetSize) {
char errbuf[PCAP_ERRBUF_SIZE];
pcap_t* sessionHandle = pcap_open_live(interfaceName, packetSize, 1, 100, errbuf);
if (sessionHandle == NULL)
{
printf("Error opening session for device %s: %s\n", interfaceName, errbuf);
return;
}
pcap_loop(sessionHandle, -1, packetReceived, NULL);
}
The supplied interfaceName is the BSD name of the interface, for example en0. The packetSize variable is an integer in which I supply the maximum packet size for that ethernet adapter (that seemed logical at the time). For example, the packet size for my WiFi adapter is 1538.
My callback method is called packetReceived and looks like this:
void packetReceived(u_char* args, const struct pcap_pkthdr* header, const u_char* packet) {
struct pcap_work_item* item = malloc(sizeof(struct pcap_pkthdr) + header->caplen);
item->header = *header;
memcpy(item->data, packet, header->caplen);
threadpool_add(threadPool, handlePacket, item, 0);
}
I stuff all the properties of the packet into a new struct and launch a worker thread to analyze the packet and process the results. This is so as not to keep pcap waiting, and is an attempt to fix this problem, which already existed before the worker thread was added.
The handlePacket method is like this:
void handlePacket(void* args) {
const struct pcap_work_item* workItem = args;
const struct sniff_ethernet* ethernet = (struct sniff_ethernet*)(workItem->data);
u_int size_ip;
const struct sniff_ip* ip = (struct sniff_ip*)(workItem->data + SIZE_ETHERNET);
size_ip = IP_HL(ip) * 4;
if (size_ip < 20) {
return;
}
const u_int16_t type = ether_packet(&workItem->header, workItem->data);
switch (ntohs(type)) {
case ETHERTYPE_IP: {
char sourceIP[INET_ADDRSTRLEN];
char destIP[INET_ADDRSTRLEN];
inet_ntop(AF_INET, &ip->ip_src, sourceIP, sizeof(sourceIP));
inet_ntop(AF_INET, &ip->ip_dst, destIP, sizeof(destIP));
[refToSelf registerPacketTransferFromSource:sourceIP destinationIP:destIP packetLength:workItem->header.caplen packetType:ethernet->ether_type];
break;
}
case ETHERTYPE_IPV6: {
// handle v6
char sourceIP[INET6_ADDRSTRLEN];
char destIP[INET6_ADDRSTRLEN];
inet_ntop(AF_INET6, &ip->ip_src, sourceIP, sizeof(sourceIP));
inet_ntop(AF_INET6, &ip->ip_dst, destIP, sizeof(destIP));
[refToSelf registerPacketTransferFromSource:sourceIP destinationIP:destIP packetLength:workItem->header.caplen packetType:ethernet->ether_type];
break;
}
}
}
Based on the type of the ethernet packet I try to figure out whether it was sent using an IPv4 or IPv6 address. Once that is determined I pass some details to an Objective-C method (source IP address, destination IP address, and packet length).
I cast the packet to the structs explained on the tcpdump website (http://www.tcpdump.org/pcap.html).
The problem is that pcap does not seem to keep up with the packets received/sent: either I am not sniffing all the packets, or the packet lengths are wrong.
Does anyone have any pointers on where I need to adjust my code to make pcap catch them all, or where I have some sort of problem?
These methods are called from my objectiveC application and the refToSelf is a reference to a objC class.
Edit: I am calling the initializeTrafficMonitor in a background thread, because the pcap_loop is blocking.
On which version of OS X is this? In releases prior to Lion, the default buffer size for libpcap on systems using BPF, such as OS X, was 32K bytes; 1992 called, they want their 4MB workstations and 10Mb Ethernets back. In Lion, Apple updated libpcap to version 1.1.1; in libpcap 1.1.0, the default BPF buffer size was increased to 512KB (the maximum value on most if not all systems that have BPF).
If this is Snow Leopard, try switching to the new pcap_create()/pcap_activate() API, and use pcap_set_buffer_size() to set the buffer size to 512KB. If this is Lion or later, that won't make a difference.
That won't help if your program can't keep up with the average packet rate, but it will, at least, mean fewer packet drops if there are temporary bursts that exceed the average.
If your program can't keep up with the average packet rate, then, if you only want the IP addresses of the packets, try setting the snapshot length (which you call "packetSize") to a value large enough to capture only the Ethernet header and the IP addresses for IPv4 and IPv6. For IPv4, 34 bytes would be sufficient (libpcap or BPF might round that up to a larger value), as that's 14 bytes of Ethernet header + 20 bytes of IPv4 header without options. For IPv6, it's 54 bytes, for 14 bytes of Ethernet header and 40 bytes of IPv6 header. So use a packetSize value of 54.
Note that, in this case, you should use the len field, NOT the caplen field, of the struct pcap_pkthdr, to calculate the packet length. caplen is the amount of data that was captured, and will be no larger than the specified snapshot length; len is the length "on the wire".
Also, you might want to try running pcap_loop() and all the processing in the same thread, and avoid allocating a buffer for the packet data and copying it, to see if that speeds the processing up. If you have to do them in separate threads, make sure you free the packet data when you're done with it.
What's the best way to send float, double, and int16 values over serial on Arduino?
Serial.print() only sends values ASCII-encoded, but I want to send the values as raw bytes. Serial.write() accepts bytes and byte arrays, but what's the best way to convert the values to bytes?
I tried to cast an int16 to a byte*, without luck. I also used memcpy, but that uses too many CPU cycles. Arduino uses plain C/C++. It's an ATmega328 microcontroller.
hm. How about this:
void send_float (float arg)
{
// get access to the float as a byte-array:
byte * data = (byte *) &arg;
// write the data to the serial
Serial.write (data, sizeof (arg));
}
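On the receiving (host) side you can rebuild the float from those bytes the same way; a sketch, assuming both ends use IEEE-754 floats with the same byte order (true for an AVR Arduino talking to an x86 PC). memcpy avoids the strict-aliasing trouble a pointer cast can cause:

```c
#include <string.h>

/* Host-side counterpart of send_float(): rebuild the float from the
 * four raw bytes read off the serial port. */
float bytes_to_float(const unsigned char bytes[4])
{
    float value;
    memcpy(&value, bytes, sizeof value);
    return value;
}
```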
Yes, to send these numbers you have to first convert them to ASCII strings. If you are working with C, sprintf() is, IMO, the handiest way to do this conversion:
[Added later: AAAGHH! I forgot that for ints/longs, the function's input argument wants to be unsigned. Likewise for the format string handed to sprintf(). So I changed it below. Sorry about my terrible oversight, which would have been a hard-to-find bug. Also, ulong makes it a little more general.]
char *
int2str( unsigned long num ) {
    static char retnum[21]; // Enough for 20 digits plus NUL from a 64-bit uint.
    sprintf( retnum, "%lu", num ); // "%lu", not "%ul": the 'l' length modifier must precede the conversion character.
    return retnum;
}
And similarly for floats and doubles. The code doing the conversion has to be told in advance what kind of entity it's converting, so you might end up with functions char *float2str( float float_num ) and char *dbl2str( double dblnum ).
You'll get a NUL-terminated left-adjusted (no leading blanks or zeroes) character string out of the conversion.
You can do the conversion anywhere/anyhow you like; these functions are just illustrations.
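A float2str() along those lines might look like this (a sketch using the same static-buffer pattern; note that on AVR-based Arduinos the stock printf family has no float support, so there you would use dtostrf() from avr-libc instead):

```c
#include <stdio.h>
#include <string.h>

/* Companion to int2str(): convert a float to a NUL-terminated,
 * left-adjusted ASCII string in a static buffer. */
char *float2str(float float_num)
{
    static char retnum[32];
    snprintf(retnum, sizeof retnum, "%g", (double)float_num);
    return retnum;
}
```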
Use the Firmata protocol. Quote:
Firmata is a generic protocol for communicating with microcontrollers
from software on a host computer. It is intended to work with any host
computer software package. Right now there is a matching object in a
number of languages. It is easy to add objects for other software to
use this protocol. Basically, this firmware establishes a protocol for
talking to the Arduino from the host software. The aim is to allow
people to completely control the Arduino from software on the host
computer.
The jargon word you need to look up is "serialization".
It is an interesting problem over a serial connection which might have restrictions on what characters can go end to end, and might not be able to pass eight bits per character either.
Restrictions on certain character codes are fairly common. Here's a few off the cuff:
If software flow control is in use, then conventionally the control characters DC1 and DC3 (Ctrl-Q and Ctrl-S, also sometimes called XON and XOFF) cannot be transmitted as data because they are sent to start and stop the sender at the other end of the cable.
On some devices, NUL and/or DEL characters (0x00 and 0x7F) may simply vanish from the receiver's FIFO.
If the receiver is a Unix tty, and the termio modes are not set correctly, then the character Ctrl-D (EOT or 0x04) can cause the tty driver to signal an end-of-file to the process that has the tty open.
A serial connection is usually configurable for byte width and possible inclusion of a parity bit. Some connections will require that a 7-bit byte with a parity are used, rather than an 8-bit byte. It is even possible for connection to (seriously old) legacy hardware to configure many serial ports for 5-bit and 6-bit bytes. If less than 8-bits are available per byte, then a more complicated protocol is required to handle binary data.
ASCII85 is a popular technique for working around both 7-bit data and restrictions on control characters. It is a convention for re-writing binary data using only 85 carefully chosen ASCII character codes.
In addition, you certainly have to worry about byte order between sender and receiver. You might also have to worry about floating point format, since not every system uses IEEE-754 floating point.
The bottom line is that often enough choosing a pure ASCII protocol is the better answer. It has the advantage that it can be understood by a human, and is much more resistant to issues with the serial connection. Unless you are sending gobs of floating point data, then inefficiency of representation may be outweighed by ease of implementation.
Just be liberal in what you accept, and conservative about what you emit.
Does size matter? If it does, you can encode each 32 bit group into 5 ASCII characters using ASCII85, see http://en.wikipedia.org/wiki/Ascii85.
This simply works - use the Serial.println() function:
void setup() {
Serial.begin(9600);
}
void loop() {
float x = 23.45585888;
Serial.println(x, 10);
delay(1000);
}
And this is the output:
Perhaps this is the best way to convert a float to bytes and bytes back to a float. -Hamid Reza
int breakDown(int index, unsigned char outbox[], float member)
{
unsigned long d = *(unsigned long *)&member;
outbox[index] = d & 0x00FF;
index++;
outbox[index] = (d & 0xFF00) >> 8;
index++;
outbox[index] = (d & 0xFF0000) >> 16;
index++;
outbox[index] = (d & 0xFF000000) >> 24;
index++;
return index;
}
float buildUp(int index, unsigned char outbox[])
{
unsigned long d;
d = (outbox[index+3] << 24) | (outbox[index+2] << 16)
| (outbox[index+1] << 8) | (outbox[index]);
float member = *(float *)&d;
return member;
}
regards.
Structures and unions solve that issue. Use a packed structure with a byte-sized union matching the structure. Overlap pointers to the structure and the union (or add the union in the structure). Use Serial.write to send the stream, and have a matching structure/union on the receiving end. As long as the byte order matches there is no issue; otherwise you can unpack using the C hton(s..l) functions. Add "header" info to decode different structures/unions.
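A sketch of that overlay (the field names and header byte are illustrative, and #pragma pack syntax is compiler-specific, as shown earlier in this thread):

```c
#include <stdint.h>

#pragma pack(1)
typedef union {
    struct {
        uint8_t header;   /* tells the receiver which layout follows */
        int16_t reading;
        float   ratio;
    } fields;             /* the packed structure: 1 + 2 + 4 = 7 bytes */
    uint8_t bytes[7];     /* the same storage viewed as a byte stream  */
} Packet;
#pragma pack()
```

The sender fills pkt.fields and calls Serial.write(pkt.bytes, sizeof pkt.bytes); the receiver copies the incoming bytes into an identical union and reads pkt.fields, swapping byte order first if the two ends differ.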
For Arduino IDE:
float buildUp(int index, unsigned char outbox[])
{
unsigned long d;
d = (long(outbox[index +3]) << 24) | \
(long(outbox[index +2]) << 16) | \
(long(outbox[index +1]) << 8) | \
(long(outbox[index]));
float member = *(float *)&d;
return member;
}
otherwise it will not work (on the Arduino, int is 16 bits, so each byte must be widened to long before shifting).