I've just started on some raw network programming in C++ and have been compiling on my Raspberry Pi itself (no cross-compiling). That makes everything little endian.
After constructing my IP header, I calculate the IP checksum, but it always comes out incorrect (based on an example here: http://www.thegeekstuff.com/2012/05/ip-header-checksum/).
Revving up gdb, I've narrowed my issue down to the ordering of the first 32 bits in the IP header. The example uses 0x4500003C, which means version 4 (0x4), IHL 5 (0x5), TOS 0 (0x00), and tot_length 60 (0x003C). So I set my packet up the same.
struct iphdr* ip; // Also some mallocing
ip->version = 4;
ip->ihl = 5;
ip->tos = 0;
ip->tot_len = 60;
Now in gdb, I examined the first 32 bits, expecting 0x3C000045 because of endianness, but instead I get this:
(gdb) print ip
$1 = (iphdr *) 0x11018
(gdb) x/1xw 0x11018
0x11018: 0x003c0045
The first 16 bits are in little endian (0x0045) but the second, containing decimal 60, seem to be in big endian (0x003C)!
What is causing this? Am I crazy? Am I completely wrong about byte order inside structs? (It's a definite possibility.)
There's the order of fields within the struct, and then there's the order of bytes within a multibyte field.
0x003C isn't endian at all, it's the hex value for 60. Sure, it's stored in memory with some endianness, but the order you used to write the field and the order you used to read it back out are the same -- both are the native byte order of the Raspberry Pi, and they cancel out.
Typically you will want to write:
ip->tot_len = htons(60);
when storing a 16-bit field into a packet. There's also htonl for 32-bit fields, and ntohs and ntohl for reading fields from network packets.
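For illustration, here is a minimal sketch of filling in those fields with the conversions applied. It assumes Linux's struct iphdr from <netinet/ip.h>; the saddr value is just an illustrative placeholder.

#include <netinet/ip.h>   // struct iphdr (Linux)
#include <arpa/inet.h>    // htons, htonl
#include <cstdlib>
#include <cstring>

int main() {
    iphdr* ip = static_cast<iphdr*>(std::malloc(sizeof(iphdr)));
    std::memset(ip, 0, sizeof(iphdr));

    ip->version = 4;                 // 4-bit bitfield: no byte order involved
    ip->ihl     = 5;                 // 4-bit bitfield
    ip->tos     = 0;                 // single byte: no conversion needed
    ip->tot_len = htons(60);         // 16-bit field: host -> network byte order
    ip->saddr   = htonl(0xC0A80001); // 32-bit field (192.168.0.1), placeholder value

    std::free(ip);
    return 0;
}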
The ARM architecture can run in both little- and big-endian modes, but the Android platform runs little endian.
Related
I have a u_char* dynamic array having binary data of some network packet. I want to change the destination port number in the packet with some integer value. Suppose that the port number offset within the packet is ofs, with length of 4 bytes.
I tried the following 2 methods:
u_char* packet = new u_char[packet_size]; // Packet still empty
// Read packet from network ...
int new_port = 1234;
Method #1:
std::copy((u_char*)&new_port, (u_char*)&new_port+4, packet+ofs);
Method #2:
std::string new_port_str = std::to_string(new_port);
auto new_port_bytes = new_port_str.c_str();
std::copy(new_port_bytes, new_port_bytes+4, packet+ofs);
Both methods give a garbage value for the port number (but the rest of the packet is OK). Could anyone help me?
You have to convert the integer from whatever internal representation your platform happens to use to the format the particular network protocol you're using requires them to be in when sent over the network.
This depends on the particular network protocol you're trying to use -- check its documentation for precisely the format it requires ports to be expressed in. My bet is that it's network byte order. You probably have functions like htons to convert shorts to network byte order.
Another problem -- how many bytes is int on your platform? How many bytes does the network protocol use to express ports? I'll bet the numbers are 4 and 2 respectively. So that's another issue. (Or maybe it isn't. I don't know for sure how many bytes an int is on your platform nor do I know what protocol you're trying to work with, so I have to guess.)
You can't just write code randomly and expect it to work. You have to think about what you're trying to do and understand the requirements.
My recommendation would be to look at the specification for the network protocol you're working with and figure out exactly which bytes in the data have to change and what they have to change to. Then write code to change each byte to the correct value according to the network protocol specification. This will ensure your code works correctly on any platform.
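As a concrete illustration, here is a minimal sketch assuming the guess above is right: the port is a 2-byte field in network byte order at offset ofs within the packet buffer from the question.

#include <arpa/inet.h>  // htons
#include <cstddef>
#include <cstdint>
#include <cstring>      // memcpy

// Overwrite the 2-byte, network-byte-order port field at offset ofs.
void set_port(unsigned char* packet, std::size_t ofs, std::uint16_t new_port) {
    std::uint16_t wire = htons(new_port);           // host order -> network order
    std::memcpy(packet + ofs, &wire, sizeof wire);  // copy exactly 2 bytes
}

Note that wire is a std::uint16_t, so exactly two bytes are copied, rather than the four bytes of an int as in Method #1.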
I want to send a payload that has this structure:
uint8_t num_of_products;
// 1 product
uint8_t action;
time_t unix_time;   // uint64_t
uint32_t name_len;  // up to 256
char* name;
uint8_t num_of_ips;
// 1 ip out of num_of_ips
uint8_t ip_ver;     // 0 error, 1 ipv4, 2 ipv6
char* IP;           // ipv4: 4 bytes, ipv6: 16 bytes
Before sending the packet I aggregate products using memcpy into a jumbo-size mbuf.
From tests I did, name_len must go through hton in order to not look "inverted" in Wireshark.
My question is: what logic can I apply in order to get the byte order right for a custom structure with inner variables of unknown size,
i.e. what should go through hton and what should be left as is?
If you are aiming to have your message in network byte order (big endian), only your integer fields that take up more than one byte will need the htons/htonl treatment (or a 64-bit equivalent such as htobe64 for the 8-byte unix_time). In your case, that would be time_t unix_time and uint32_t name_len. Your strings and single-byte fields (such as num_of_products) won't need any specific conversion.
As some of the commenters on your question suggested, it's really up to you if you want to use a strict network byte order. Serializing your message to have a strict byte ordering is useful if you intend your code to run across different platforms.
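For illustration, here is a minimal sketch of packing one product (with a single IP) into a byte buffer in network byte order. The helper put_u64_be and the pack_product signature are my own naming; the field order simply follows the layout in the question.

#include <arpa/inet.h>  // htonl
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <ctime>

// Append a 64-bit value most-significant byte first (network order).
static std::uint8_t* put_u64_be(std::uint8_t* p, std::uint64_t v) {
    for (int shift = 56; shift >= 0; shift -= 8)
        *p++ = static_cast<std::uint8_t>(v >> shift);
    return p;
}

// Pack one product record; returns a pointer just past the written bytes.
std::uint8_t* pack_product(std::uint8_t* p, std::uint8_t action, std::time_t unix_time,
                           const char* name, std::uint32_t name_len,
                           std::uint8_t ip_ver, const char* ip, std::size_t ip_len) {
    *p++ = action;                                             // 1 byte: as is
    p = put_u64_be(p, static_cast<std::uint64_t>(unix_time));  // 8 bytes: converted
    std::uint32_t len_be = htonl(name_len);                    // 4 bytes: converted
    std::memcpy(p, &len_be, sizeof len_be); p += sizeof len_be;
    std::memcpy(p, name, name_len);         p += name_len;     // raw bytes: as is
    *p++ = ip_ver;                                             // 1 byte: as is
    std::memcpy(p, ip, ip_len);             p += ip_len;       // 4 or 16 raw bytes: as is
    return p;
}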
Writing efficient byte packing code is annoyingly hard. You wind up writing a lot of code just to save a few bytes of network bandwidth.
User jxh mentioned JSON as a possible encoding for your message. Not sure why he deleted his answer, because it was on point. In any case, a standard messaging format of either JSON or XML (or any ASCII text schema) is 100x easier to observe in Wireshark and when debugging.
I'm trying to send data with a fixed-length header that tells the server how many bytes of data it needs to have available before it reads them. I'm having trouble doing this, though. The maximum number of bytes of data I want to be able to send at once is 65536, so I'm sending a uint16_t variable as the header of my data because the maximum number it can represent is 65535.
The problem is, a uint16_t takes up two bytes, but numbers less than 256 only require one byte. So I have this code on the client side:
uint16_t messageSize = clientSendBuf.size(); //clientSendBuf is the data I want to send
char *bytes((char*)&messageSize);
clientSendBuf.prepend(bytes);
client.write(clientSendBuf);
And on the server, I handle receiving messages like this:
char serverReceiveBuf[65536];
uint16_t messageSize;
client->read((char*)&messageSize, sizeof(uint16_t));
client->read(serverReceiveBuf, messageSize);
I'm going to change this around a bit later because it's not the best solution (particularly for when all of the data isn't available yet), but I want to get this fixed first. My problem is that when clientSendBuf.size() is too small (in my test case it was 16 bytes, I assume this happens for every value under 255) reading data with
client->read((char*)&messageSize, sizeof(uint16_t));
reads a second byte that isn't part of the header, giving an incorrect value for messageSize and crashing the server. If I replace sizeof(uint16_t) with 1, then the server reads the data fine as I'd expect, although then I have a messageSize maximum of 255, which is much lower than I want. How do I make it so that the messageSize prepended to clientSendBuf is always two bytes, even for numbers <255?
Your
clientSendBuf.prepend(bytes);
Should also be told that it needs to send 2 bytes; right now it treats the bytes as a zero-terminated string, which accidentally works since on your platform the second byte of 0x0010 is zero (stored as little-endian bytes: 0x10, 0x00).
The prepend(char*, int) method will do the trick:
// use this instead:
clientSendBuf.prepend(bytes, sizeof(messageSize));
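For completeness, a minimal sketch of the client side (assuming clientSendBuf is a QByteArray; qToBigEndian from <QtEndian> is optional, but it pins the header to a fixed byte order in case the two ends ever differ in endianness):

#include <QByteArray>
#include <QtEndian>

QByteArray frameMessage(QByteArray payload) {
    // Two-byte length header, stored big-endian regardless of host byte order.
    quint16 messageSize = qToBigEndian(static_cast<quint16>(payload.size()));
    payload.prepend(reinterpret_cast<const char*>(&messageSize), sizeof(messageSize));
    return payload;
}

The server side would then read exactly sizeof(quint16) header bytes, pass them through qFromBigEndian, and only afterwards read that many payload bytes.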
I'm using libusb in Qt to communicate with a PIC microcontroller, 18F2550. The thing is that it's working OK until I try to send or read more than three bytes. Why does it happen?
I've tried using bulk_read transfer and interrupt_read. When I make the size of the buffer equal to or less than three, the transmission works perfectly, using bulk or interrupt. When this size is greater than three, I'm getting buffer[1] and buffer[2] OK, but the rest are wrong.
The error that I'm getting is from timeout. As input I'm using endpoint 0x81.
More information:
The return value from the bulk or interrupt read is -116. The numbers that I'm sending from the PIC to the PC in the first two bytes ([0] and [1]) in hex are 0x02D6. With this number, buffer[0] = -42 (when it should be 0xD6 = 214) and buffer[1] = 2, which is correct.
In bytes [2] and [3] the number is 0x033D, and I get [2] = 61 = 0x3D. That is correct, but [3] = -42??? (like [0]).
And the fifth byte is 1, but the SW shows 2??? Might it be a problem in the microcontroller, because I'm programming it as a USB HID?
I don't think that being a HID is the problem. I had a similar issue before; the PIC would randomly time out when large data was being transmitted. It turned out to be some voltage fluctuation on the MCU. How are you connecting the crystal? Do you have a capacitor on VUSB to regulate it?
Building a PIC18F USB device is a great tutorial on building a PIC HID, and even though it's not based on 18F2550 but on 18F4550, it should be quite similar, and I'm sure you can get a lot out of the schematics and hardware setup. It was the starting point for my PIC-USB projects.
I'm using an Arduino (Duemilanove) with the official Ethernet shield to send data to the controller for controlling an LED matrix. I am trying to send some raw 32-bit unsigned int values (Unix timestamps) to the controller by taking the 4 bytes of the 32-bit value on the desktop and sending them to the Arduino as 4 consecutive bytes. However, whenever a byte value is larger than 127, the value returned by the Ethernet client library is 63.
The following is a basic example of what I'm doing on the arduino side of things. Some things have been removed for neatness.
byte buffer[32];
memset(buffer, 0, 32);
int data;
int i=0;
data = client.read();
while(data != -1 && i < 32)
{
buffer[i++] = (byte)data;
data = client.read();
}
So, whenever the input byte is bigger than 127 the variable "data" will end up getting set to 63! At first I thought the problem was further down the line (buffer used to be char instead of byte) but when I print out "data" right after the read, it's still 63.
Any ideas what could be causing this? I know client.read() is supposed to return an int and internally reads data from the socket as uint8_t, which is a full byte and unsigned, so I should be able to go at least up to 255...
EDIT: Right you are, Hans. Didn't realize that Encoding.ASCII.GetBytes only supported the first 7 bits and not all 8.
I'm more inclined to suspect the transmit side. Are you positive the transmit side is working correctly? Have you verified with a wireshark capture or some such?
63 is the ASCII code for '?', and that's no coincidence: ASCII doesn't have character codes for values over 127. An ASCII encoder commonly replaces invalid codes like this with a question mark; that's the default behavior of the .NET Encoding.ASCII encoder, for example.
It isn't exactly clear where that might happen. Definitely not in your snippet. Probably on the other end of the wire. Write bytes, not characters.
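To illustrate "write bytes, not characters" on the receiving end, here is a minimal sketch that reassembles the four raw bytes on the Arduino, assuming the desktop sends them most-significant byte first (the order itself is a choice; it just has to match on both sides):

#include <stdint.h>

// Rebuild a 32-bit unsigned value from 4 raw bytes sent most-significant first.
// 'b' would be the buffer filled by the read loop in the question.
uint32_t readTimestamp(const uint8_t* b) {
  return ((uint32_t)b[0] << 24) |
         ((uint32_t)b[1] << 16) |
         ((uint32_t)b[2] << 8)  |
          (uint32_t)b[3];
}

On the sending side the fix is the same idea in reverse: write the integer's four bytes directly to the socket instead of running the value through an ASCII encoder.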
+1 for Hans Passant and Karl Bielefeldt.
Can you just send the data without encoding? How is the data being sent? TCP/UDP/IP/Ethernet definitely support sending binary data without restriction. If this isn't possible, perhaps converting the data to hex will solve the problem. Base64 will also work (better) but is considerably more work. For small amounts of data, hex is probably the easiest and fastest solution.
+1 again to Karl and Ben for mentioning wireshark. Invaluable for debugging network problems like this.