I'm using an Arduino (Duemilanove) with the official Ethernet shield to send data to the controller for controlling an LED matrix. I am trying to send some raw 32-bit unsigned int values (Unix timestamps) to the controller by taking the 4 bytes of the 32-bit value on the desktop and sending them to the Arduino as 4 consecutive bytes. However, whenever a byte value is larger than 127, the value returned by the Ethernet client library is 63.
The following is a basic example of what I'm doing on the Arduino side of things. Some things have been removed for neatness.
byte buffer[32];
memset(buffer, 0, 32);
int data;
int i = 0;
data = client.read();   // returns the next byte as an int, or -1 if none is available
while (data != -1 && i < 32)
{
    buffer[i++] = (byte)data;
    data = client.read();
}
So, whenever the input byte is bigger than 127, the variable "data" ends up set to 63! At first I thought the problem was further down the line (buffer used to be char instead of byte), but when I print out "data" right after the read, it's already 63.
Any ideas what could be causing this? I know client.read() is supposed to return an int, and internally it reads data from the socket as uint8_t, which is a full unsigned byte, so I should be able to get values up to at least 255...
EDIT: Right you are, Hans. Didn't realize that Encoding.ASCII.GetBytes only supported the first 7 bits and not all 8.
I'm more inclined to suspect the transmit side. Are you positive the transmit side is working correctly? Have you verified with a Wireshark capture or some such?
63 is the ASCII code for ?. That's no coincidence: ASCII doesn't have character codes for values over 127, and an ASCII encoder commonly replaces invalid codes with a question mark. That's the default behavior of the .NET Encoding.ASCII encoder, for example.
It isn't exactly clear where that might happen. Definitely not in your snippet. Probably on the other end of the wire. Write bytes, not characters.
+1 for Hans Passant and Karl Bielefeldt.
Can you just send the data without encoding? How is the data being sent? TCP/UDP/IP/Ethernet definitely support sending binary data without restriction. If that isn't possible, converting the data to hex will solve the problem. Base64 would also work (more compactly) but is considerably more work. For small amounts of data, hex is probably the easiest and fastest solution.
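If you do send the timestamp as raw bytes, the Arduino side just has to reassemble them. Here is a minimal sketch of the receiving end, assuming the desktop sends the most significant byte first (the function name and error handling are illustrative, not from the original code):

uint32_t readTimestamp(Client &client)
{
    uint32_t value = 0;
    for (int i = 0; i < 4; i++) {
        int b = client.read();    // -1 means no byte was available
        if (b == -1) {
            return 0;             // placeholder: handle timeouts properly in real code
        }
        value = (value << 8) | (uint32_t)(uint8_t)b;
    }
    return value;
}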
+1 again to Karl and Ben for mentioning Wireshark. Invaluable for debugging network problems like this.
I have a u_char* dynamic array containing the binary data of some network packet. I want to change the destination port number in the packet to some integer value. Suppose that the port number's offset within the packet is ofs, with a length of 4 bytes.
I tried the following 2 methods:
u_char* packet = new u_char[packet_size]; // Packet still empty
// Read packet from network ...
int new_port = 1234;
Method #1:
std::copy((u_char*)&new_port, (u_char*)&new_port+4, packet+ofs);
Method #2:
std::string new_port_str = std::to_string(new_port);
auto new_port_bytes = new_port_str.c_str();
std::copy(new_port_bytes, new_port_bytes+4, packet+ofs);
Both methods give a garbage value for the port number (but the rest of the packet is OK). Could anyone help me?
You have to convert the integer from whatever internal representation your platform happens to use into the format the particular network protocol you're using requires when it's sent over the network.
This depends on the particular network protocol you're trying to use -- check its documentation for precisely the format it requires ports to be expressed in. My bet is that it's network byte order. You probably have functions like htons to convert shorts to network byte order.
Another problem -- how many bytes is int on your platform? How many bytes does the network protocol use to express ports? I'll bet the numbers are 4 and 2 respectively. So that's another issue. (Or maybe it isn't. I don't know for sure how many bytes an int is on your platform nor do I know what protocol you're trying to work with, so I have to guess.)
You can't just write code randomly and expect it to work. You have to think about what you're trying to do and understand the requirements.
My recommendation would be to look at the specification for the network protocol you're working with and figure out exactly which bytes in the data have to change and what they have to change to. Then write code to change each byte to the correct value according to the network protocol specification. This will ensure your code works correctly on any platform.
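Putting those two points together, here is a minimal sketch of what the fix might look like, assuming a TCP/UDP-style protocol where the destination port is a 16-bit field in network byte order at offset ofs (check your protocol's spec before relying on this):

#include <arpa/inet.h>  // htons
#include <cstring>      // memcpy
#include <cstdint>

uint16_t new_port = htons(1234);                         // host to network byte order
std::memcpy(packet + ofs, &new_port, sizeof(new_port));  // write exactly 2 bytes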
I want to send a payload that has this structure:
uint8_t num_of_products;
// 1 product
uint8_t action;
time_t unix_time;   // uint64_t
uint32_t name_len;  // up to 256
char* name;
uint8_t num_of_ips;
// 1 ip out of num_of_ips
uint8_t ip_ver;     // 0 error, 1 ipv4, 2 ipv6
char* IP;           // ipv4: 4, ipv6: 16
Before sending the packet, I aggregate products using memcpy into a jumbo-size mbuf.
From tests I did, name_len must go through hton so that it doesn't look "inverted" in Wireshark.
My question is: what logic can I apply in order to get the byte order right for a custom structure whose inner variables have unknown sizes?
I.e., what should go through hton and what should be left as-is?
If you are aiming to have your message in network byte order (big endian), only your integer fields that take up more than one byte will need the hton treatment. In your case, that would be time_t unix_time and uint32_t name_len (note that htonl only handles 32-bit values; the 64-bit timestamp needs a 64-bit swap such as htobe64). Your strings and single-byte fields (such as num_of_products) won't need any specific conversion.
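For illustration, a minimal packing sketch under those rules (the helper name and buffer handling are assumptions, not from the question; htobe64 is glibc's 64-bit swap from <endian.h>):

#include <arpa/inet.h>  // htonl
#include <endian.h>     // htobe64 (glibc)
#include <cstring>
#include <cstdint>

size_t pack_product(uint8_t* buf, uint8_t action, uint64_t unix_time,
                    const char* name, uint32_t name_len)
{
    size_t off = 0;
    buf[off++] = action;                          // 1 byte: no conversion
    uint64_t t = htobe64(unix_time);              // 8 bytes: swap to big endian
    std::memcpy(buf + off, &t, sizeof t);  off += sizeof t;
    uint32_t n = htonl(name_len);                 // 4 bytes: swap to big endian
    std::memcpy(buf + off, &n, sizeof n);  off += sizeof n;
    std::memcpy(buf + off, name, name_len);       // raw bytes: copied as-is
    off += name_len;
    return off;                                   // number of bytes written
}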
As some of the commenters in your question suggested - it's really up to you if you want to use a strict network byte order. Serializing your message to have a strict byte ordering is useful if you intend your code to run across different platforms.
Writing efficient byte-packing code is annoyingly hard. You wind up writing a lot of code just to save a few bytes of network bandwidth.
User jxh mentioned JSON as a possible encoding for your message. Not sure why he deleted his answer, because it was on point. In any case, a standard messaging format of either JSON or XML (or any ASCII text schema) is 100x easier to observe in Wireshark and when debugging.
I'm trying to send data with a fixed-length header that tells the server how many bytes of data it will have to have available before it reads them. I'm having trouble doing this, though. The maximum number of bytes of data I want to be able to send at once is 65535, so I'm sending a uint16_t variable as the header of my data, since 65535 is the largest value it can represent.
The problem is, a uint16_t takes up two bytes, but numbers less than 256 only require one byte. So I have this code on the client side:
uint16_t messageSize = clientSendBuf.size(); //clientSendBuf is the data I want to send
char *bytes((char*)&messageSize);
clientSendBuf.prepend(bytes);
client.write(clientSendBuf);
And on the server, I handle receiving messages like this:
char serverReceiveBuf[65536];
uint16_t messageSize;
client->read((char*)&messageSize, sizeof(uint16_t));
client->read(serverReceiveBuf, messageSize);
I'm going to change this around a bit later because it's not the best solution (particularly for when all of the data isn't available yet), but I want to get this fixed first. My problem is that when clientSendBuf.size() is too small (in my test case it was 16 bytes; I assume this happens for every value under 256), reading the header with
client->read((char*)&messageSize, sizeof(uint16_t));
reads a second byte that isn't part of the header, giving an incorrect value for messageSize and crashing the server. If I replace sizeof(uint16_t) with 1, then the server reads the data fine, as I'd expect, although then I have a messageSize maximum of 255, which is much lower than I want. How do I make it so that the messageSize prepended to clientSendBuf is always two bytes, even for values under 256?
Your
clientSendBuf.prepend(bytes);
should also be told that it needs to send 2 bytes; as written, prepend() treats bytes as a zero-terminated string. That only "works" by accident: on your platform 16 is 0x0010, stored little-endian as the bytes 0x10, 0x00, so the zero second byte terminates the "string" and only one byte actually gets prepended.
The prepend(char*, int) method will do the trick:
// use this instead:
clientSendBuf.prepend(bytes, sizeof(messageSize));
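For completeness, a minimal sketch of the corrected client side (same Qt types as in the question):

quint16 messageSize = quint16(clientSendBuf.size());
clientSendBuf.prepend(reinterpret_cast<const char*>(&messageSize),
                      sizeof(messageSize));   // always prepends exactly 2 bytes
client.write(clientSendBuf);

Note that this still sends the size in the client's native byte order; if the two ends could ever differ in endianness, convert it explicitly (e.g. with qToBigEndian from <QtEndian>) before prepending.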
I've just started on some raw network programming in C++ and have been compiling on my Raspberry Pi itself (no cross-compiling). That makes everything little endian.
After constructing my IP header, I calculate the IP checksum, but it always comes out incorrect (based on an example here: http://www.thegeekstuff.com/2012/05/ip-header-checksum/).
Revving up gdb, I've narrowed my issue down to the ordering of the first 32 bits in the IP header. The example uses 0x4500003C, which means version 4 (0x4), IHL 5 (0x5), TOS 0 (0x00), and tot_length 60 (0x003C). So I set my packet up the same way.
struct iphdr* ip; // Also some mallocing
ip->version = 4;
ip->ihl = 5;
ip->tos = 0;
ip->tot_len = 60;
Now in gdb, I examined the first 32 bits, expecting 0x3C000045 because of endianness, but instead I get this:
(gdb) print ip
$1 = (iphdr *) 0x11018
(gdb) x/1xw 0x11018
0x11018: 0x003c0045
The first 16 bits look little endian (0x0045), but the second 16 bits, containing decimal 60, seem to be in big endian (0x003C)!
What is giving this? Am I crazy? Am I completely wrong about byte order inside structs? (It's a definite possibility)
There's the order of fields within the struct, and then there's the order of bytes within a multibyte field.
0x003C isn't endian at all, it's the hex value for 60. Sure, it's stored in memory with some endianness, but the order you used to write the field and the order you used to read it back out are the same -- both are the native byte order of the Raspberry Pi, and they cancel out.
Typically you will want to write:
ip->tot_len = htons(60);
when storing a 16-bit field into a packet. There's also htonl for 32-bit fields, and ntohs and ntohl for reading fields from network packets.
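To make that concrete, here's what the first word of the header looks like once tot_len is stored correctly (assuming the usual <netinet/ip.h> bitfield layout on a little-endian machine):

ip->version = 4;          // packed with ihl into byte 0 -> 0x45
ip->ihl     = 5;
ip->tos     = 0;          // byte 1 -> 0x00
ip->tot_len = htons(60);  // bytes 2-3 -> 0x00 0x3C on the wire
// Memory byte order: 0x45 0x00 0x00 0x3C, i.e. the example's 0x4500003C;
// gdb's x/1xw would now display it as the little-endian word 0x3c000045.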
The ARM architecture can run in either little- or big-endian mode, but the Android platform runs little endian.
I am developing a Qt/C++ programme in QtCreator that reads and writes from/to the serial port using QextSerialPort. My programme sends commands to a Rhino Mark IV controller and must read the response of those commands (just in case they produce any response). My development and deployment platform is Windows XP Professional.
When the Mark IV sends a response to a command and my programme reads that response from the serial port buffer, the data are not properly encoded; my programme does not seem to get plain ASCII data. For example, when the Mark IV sends an ASCII "0" (decimal 48) followed by a carriage return (decimal 13), my buffer (char *) gets -80 and 13. The characters are not properly encoded, but the carriage returns are. I have tried using both read (char *data, qint64 maxSize) and readAll ().
I have been monitoring the serial port traffic using two monitors that interpret ASCII data and display the corresponding characters, and the data sent in both directions seem to be correctly encoded (they are actually displayed correctly). Given that QByteArray does not interpret any character encoding, and that I have tried using both read (char *data, qint64 maxSize) and readAll (), I have ruled out Qt as the cause of the problem. However, I am not sure whether the problem is caused by QextSerialPort, because my programme sends (writes) data properly but does not read the correct bytes.
I have also tried talking to the Mark IV controller by hand using HyperTerminal, and the communication takes place correctly, too. I set up the connection in HyperTerminal with the following parameters:
Baud rate: 9600
Data bits: 8
Parity bits: 0
Stop bits: 1
Flow control: Hardware
My programme sets up the serial port using the same parameters. HyperTerminal works; my programme does not.
I started using QextSerialPort 1.1 from qextserialport.sourceforge.net and then tried with the latest source code from QextSerialPort on Google Code, and the problem remains.
What is causing the wrong character encoding?
What do I have to do to solve this issue?
48 vs. -80 smells like a signed char vs. unsigned char mismatch to me (-80 as an unsigned byte is 176, which is 48 with the high bit set). Try an explicit unsigned char* instead of char*.
Finally, I realized that I was not configuring the serial port correctly, as suggested by Judge Maygarden. I did not find that information in the device's manual, but in the manual of a software product developed for that device.
The correct way to set up the serial port for connecting to the Mark IV controller is to set
Baud rate: 9600
Data bits: 7
Parity: even
Stop bits: 2
Flow control: Hardware
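For reference, a minimal sketch of that configuration with QextSerialPort (enum and method names as in QextSerialPort 1.x; the port name is assumed for illustration, so check your version's headers):

QextSerialPort port("COM1");          // port name is an assumption
port.setBaudRate(BAUD9600);
port.setDataBits(DATA_7);
port.setParity(PAR_EVEN);
port.setStopBits(STOP_2);
port.setFlowControl(FLOW_HARDWARE);
port.open(QIODevice::ReadWrite);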
However, I am still wondering why HyperTerminal showed the characters properly even with the wrong configuration.