I need help identifying the following technique. It is a lengthy read, so please bear with me. My question is: is this a known standard, does it have a name, and has anyone seen this before? What is the benefit? Also, in case you are wondering, this relates to a packet captured from a long-forgotten online PS2 game, and I am part of a team that is trying to bring it back.
Note that this is not the size as described by the IP protocol; this size representation is within the actual payload, and it is for client and server consumption.
The following describes how the size of the message is represented.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The true packet length is 94 bytes.
The length is encoded in bytes 5-6 [CF E0] of the payload data, after all of the IP protocol headers.
Also, note that we must interpret these two bytes as being in little endian format. Thus, we should think of these two bytes as being
[E0 CF]
We determine the Packet Class from these two bytes by taking the first nibble (4 bits) of the first byte. In this particular case, this is just 0xE, so we identify this packet as having a packet class of 0xE. This was identified as a Session Initiator Packet Class.
Now, to determine the packet length from the remaining nibble and the second byte: first we convert the second byte to decimal, getting 0xCF = 207. The difference between this value and the actual length is 207 - 94 = 113 bytes. Originally I knew this byte was proportional to the packet length but had some offset; I wasn't sure where this offset came from. Additionally, this offset seemed to change for different packets. More study was required.
Eventually, I found out that each packet class had a different offset, so I needed to examine only packets in the same packet class to figure out the offset for that class. In doing this, I made a table of all the reported lengths (in byte 5) and compared them to the actual packet lengths. What I discovered is that:
1. almost all of the reported packet lengths in byte 5 were greater than 0x80 = 128;
2. the second nibble in the other byte worked as a type of multiplier for the packet length;
3. each packet class had an associated minimum packet length and maximum packet length that could be represented. For the 0xC packet class I was examining, the minimum packet size was 18 bytes and the maximum packet size was approximately 10*128 + 17 = 1297 bytes.
This led to the following approach to extract the packet length from the fifth and sixth bytes of the packet header. First, note that we have previously determined the packet class to be 0xE and that the minimum packet size associated with this packet class is 15 bytes. Now, take the second nibble of the first byte [0xE0] = 0 in this case and multiply it by 128 bytes: 0*128 = 0 bytes. Now add this to the second byte [0xCF] = 207 in this case and subtract out 128: 0 + 207 - 128 = 79. Finally, add in the minimum packet size for this packet class (0xE = 15-byte minimum packet size). So (0*128) + (207 - 128) + 15 = 94. This is the reported true packet size.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
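To make the decoding concrete, here is a minimal C sketch of the extraction described above. The per-class minimum-size table is mostly hypothetical: only the 0xC (18) and 0xE (15) entries come from the packets discussed here, the rest are placeholders.
#include <stdint.h>
#include <stdio.h>

/* Per-class minimum packet sizes. Only 0xC (18) and 0xE (15) are known
   from the captures; the other entries are placeholders. */
static const int min_len[16] = {
    0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 18, 0, 15, 0
};

static int packet_class(uint8_t byte6) {
    return byte6 >> 4;                  /* high nibble of the second byte */
}

static int packet_length(uint8_t byte5, uint8_t byte6) {
    int cls        = byte6 >> 4;        /* 0xE in the example */
    int multiplier = byte6 & 0x0F;      /* low nibble, 0 in the example */
    return multiplier * 128 + (byte5 - 128) + min_len[cls];
}

int main(void) {
    /* bytes [CF E0] -> class 0xE, length 0*128 + (207 - 128) + 15 = 94 */
    printf("class=0x%X length=%d\n", packet_class(0xE0), packet_length(0xCF, 0xE0));
    return 0;
}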
This formula was tested on 20,000 subsequent packets and it works. But why go through all that trouble just to indicate the size of the message that follows? I thought it was a form of encryption, but the rest of the message is not encrypted at all. The formula is understood, but I don't see a benefit. I am thinking that maybe it is a way to optimize the size of the packet by passing a number greater than 255 using only one byte, but that saves exactly one byte; adding another byte would yield a maximum value of 65,535, so why not just put another byte into the byte stream? I am sure one extra byte is not going to have a great impact on the network, so what could be the purpose? I thought that maybe someone else would see what I'm missing, or connect this to some documented standard, protocol, pattern, or technique.
Also, I do not take credit for figuring out the formula above, that was done by another team member.
My best guess is that the receiver uses some form of variable-length base128 encoding, like LEB128.
But in this case the sender, knowing the actual maximum size fits in 11 bits, forces the encoding to use 2 bytes, and overloads the high nibble for the "class". This makes the header size and construction time constant, and the receiver can simply mask out the class and run the rest through a standard decoder.
Send:
len -= minlen[class]
byte[5]=(len&0x7F)|0x80;
byte[6]=(len>>7)|(class<<4);
Receive:
class = byte[6]>>4;
byte[6]&=0xF;
len = decode(&byte[5]) + minlen[class];
where:
int decode(uint8_t* data) {
    int v = *data & 0x7F;
    int shift = 7;
    while (*data & 0x80) {            /* continuation bit set: another byte follows */
        data++;
        v |= (*data & 0x7F) << shift; /* each later byte carries the next 7 bits */
        shift += 7;
    }
    return v;
}
One other possibility is that byte[5] is signed, and the length is reconstructed by
(int8_t)byte[5] + 128*((byte[6]&0xF)+1) + minlen[byte[6]>>4];
which also matches the example: (int8_t)0xCF = -49, 128*(0+1) = 128, minlen[0xE] = 15, and -49 + 128 + 15 = 94. But I can't think of any reason to construct it this way.
Related
What is the reasoning behind not having small scalar types in google protocol buffers?
https://developers.google.com/protocol-buffers/docs/proto#scalar
More specifically, for C++: do I transfer a uint16_t as two bytes in GPB? I'm looking into converting an existing message-based protocol to GPB, and this seems a bit strange to me.
GPB uses variable-length encoding, meaning that the number of bytes a transmitted integer occupies depends on the value of the integer.
Small ints will be sent using only a few bytes.
Here is a link to a guide about GPB encoding.
In particular, if you only have one specific case of a short int (and not many of them, in which case you'll probably want to use bytes), you should simply cast all uint16_t to uint32_t and let varints do the work.
Protobuf scalar type encoding uses a variable number of bytes (each byte carries 7 value bits plus 1 continuation bit):
1 byte if < 2**(8-1) = 128
2 bytes if < 2**(16-2) = 16384
3 bytes if < 2**(24-3) = 2097152
4 bytes if < 2**(32-4) = 268435456
etc.
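For illustration, here is a minimal sketch of the base-128 varint encoding that protobuf builds on (field tags and the zigzag encoding used for signed types are omitted):
#include <stdint.h>
#include <stddef.h>

/* Encode value as a base-128 varint into out; returns the number of bytes
   written (1..10 for a 64-bit value). */
size_t encode_varint(uint64_t value, uint8_t *out) {
    size_t n = 0;
    while (value >= 0x80) {
        out[n++] = (uint8_t)(value & 0x7F) | 0x80;  /* 7 data bits + continuation bit */
        value >>= 7;
    }
    out[n++] = (uint8_t)value;                      /* final byte: continuation bit clear */
    return n;
}
A value below 128 fits in one byte, a uint16_t needs at most three bytes, and a uint32_t at most five.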
So while this may be killing you as a space-savvy C coder, a uint16_t will take at most 2 bytes only for the lowest 1/4 of its range.
PB is ultimately designed for forward compatibility, and Google knows that short fixed data types always turn out too short :-) (Y2K, IPv4, the upcoming 2038 Unix time rollover, etc.). If you're really, terribly after compactness, use bytes as @SRLKilling recommended, at the expense of needing to write your own codec on top of it.
I want to prepend the size of the vector to the buffer, but I don't know exactly what the type of the size should be; after all, std::size_t isn't a fixed size. I intend to use uint64_t instead. Then the buffer would look like this:
8 bytes length | 4 bytes element1 | 4 bytes element2 | ... |
Now the issue is that uint64_t isn't the same as std::size_t. Any better ideas would be appreciated.
You can use any type you want so long as it can hold the value you are using. Since it's already a size_t, just keep it that way. Decide how many bytes you want to use to represent the value and what value you need each byte to be and write code to encode/decode each byte correctly.
You are almost there. No current platform uses a size_t wider than 64 bits (and it would take several days to transfer a full 64-bit length's worth of int32 values over experimental 100 Tbit fibre). The steps are:
const uint64_t len = vec.size();
Write the eight bytes into your tcp buffer in a defined order.
Write the four bytes of each int into the tcp buffer in a defined order.
Send.
Note that the last two steps will have to be in a loop for more than a few thousand elements.
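A minimal sketch of those steps in C, with big-endian chosen as the defined byte order (buffer handling and names are just for illustration):
#include <stdint.h>
#include <stddef.h>

/* Append an 8-byte length prefix followed by 4-byte elements to buf, all in
   big-endian order. Returns the number of bytes written; buf is assumed to
   be large enough. */
static size_t serialize(uint8_t *buf, const uint32_t *elems, uint64_t count) {
    size_t pos = 0;
    for (int i = 7; i >= 0; i--)             /* length, most significant byte first */
        buf[pos++] = (uint8_t)(count >> (8 * i));
    for (uint64_t e = 0; e < count; e++)
        for (int i = 3; i >= 0; i--)         /* each element, most significant byte first */
            buf[pos++] = (uint8_t)(elems[e] >> (8 * i));
    return pos;
}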
For full background (you don't really need to understand this to understand the problem, but it may help): I am writing a CLI program that sends data over Ethernet, and I wish to add VLAN tags and priority tags to the Ethernet headers.
The problem I am facing is that I have a single 16 bit integer value that is built from three smaller values: PCP is 3 bits long (so 0 to 7), DEI is 1 bit, then VLANID is 12 bits long (0-4095). PCP and DEI together form the first 4 bit nibble, 4 bits from VLANID add on to complete the first byte, the remaining 8 bits from VLANID form the second byte of the integer.
11123333 33333333
1 == PCP bits, 2 == DEI bit, 3 == VLANID bits
Let's pretend PCP == 5, which in binary is 101, DEI == 0, and VLANID == 164, which in binary is 0000 1010 0100. First I need to combine these values to form the following:
10100000 10100100
The problem I face is that when I copy this integer into a buffer to be encoded onto the wire (the Ethernet medium), the bit ordering changes as follows (I am printing out my integer in binary before it gets copied to the wire and using Wireshark to capture it on the wire to compare):
Bit order in memory: abcdefgh 87654321
Bit order on the wire: 87654321 abcdefgh
I have two problems here really:
The first is creating the 2 byte integer by "sticking" the three smaller ones together
The second is ensuring the order of bits is that which will be encoded correctly onto the wire (so the bytes aren't in the reverse order)
Obviously I have made an attempt at this code to get this far, but I'm really out of my depth and would like to see someone's suggestion from scratch, rather than posting what I have done so far and having someone suggest how to change it, possibly in a hard-to-read and long-winded fashion.
The issue is byte ordering, rather than bit ordering. Bits in memory don't really have an order because they are not individually addressable, and the transmission medium is responsible for ensuring that the discrete entities transmitted, octets in this case, arrive in the same shape they were sent in.
Bytes, on the other hand, are addressable and the transmission medium has no idea whether you're sending a byte string which requires that no reordering be done, or a four byte integer, which may require one byte ordering on the receiver's end and another on the sender's.
For this reason, network protocols have a declared byte ordering to and from which all senders and receivers should convert their data. This way data can be sent and retrieved transparently by network hosts of different native byte orderings.
POSIX defines some functions for doing the required conversions:
#include <arpa/inet.h>
uint32_t htonl(uint32_t hostlong);
uint16_t htons(uint16_t hostshort);
uint32_t ntohl(uint32_t netlong);
uint16_t ntohs(uint16_t netshort);
'n' and 'h' stand for 'network' and 'host'. So htonl converts a 32-bit quantity from the host's in-memory byte ordering to the network byte ordering.
Whenever you're preparing a buffer to be sent across the network you should convert each value in it from the host's byte ordering to the network's byte ordering, and any time you're processing a buffer of received data you should convert the data in it from the network's ordering to the host's.
struct { uint32_t i; int8_t a, b; uint16_t s; } sent_data = {100000, 'a', 'b', 500};
sent_data.i = htonl(sent_data.i);
sent_data.s = htons(sent_data.s);
write(fd, &sent_data, sizeof sent_data);
// ---
struct { uint32_t i; int8_t a, b; uint16_t s; } received_data;
read(fd, &received_data, sizeof received_data);
received_data.i = ntohl(received_data.i);
received_data.s = ntohs(received_data.s);
assert(100000 == received_data.i && 'a' == received_data.a &&
       'b' == received_data.b && 500 == received_data.s);
Although the above code still makes some assumptions, such as that both the sender and receiver use compatible char encodings (e.g., that they both use ASCII), that they both use 8-bit bytes, that they have compatible number representations after accounting for byte ordering, etc.
Programs that do not care about portability and inter-operate only with themselves on remote hosts may skip byte ordering in order to avoid the performance cost. Since all hosts will share the same byte ordering they don't need to convert at all. Of course if a program does this and then later needs to be ported to a platform with a different byte ordering then either the network protocol has to change or the program will have to handle a byte ordering that is neither the network ordering nor the host's ordering.
Today the only common byte orderings are simply reversals of each other, meaning that hton and ntoh both do the same thing and one could just as well use hton both for sending and receiving. However one should still use the proper conversion simply to communicate the intent of the code. And, who knows, maybe someday your code will run on a PDP-11 where hton and ntoh are not interchangeable.
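Applied to the VLAN tag from the question above, a minimal sketch using the example values (PCP=5, DEI=0, VLAN ID=164) might look like this:
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>

/* Pack PCP (3 bits), DEI (1 bit) and VLAN ID (12 bits) into the 16-bit TCI. */
static uint16_t make_tci(unsigned pcp, unsigned dei, unsigned vid) {
    return (uint16_t)(((pcp & 0x7) << 13) | ((dei & 0x1) << 12) | (vid & 0xFFF));
}

/* Convert to network byte order before copying into the frame buffer. */
static void write_tci(uint8_t *frame_buf) {
    uint16_t tci = htons(make_tci(5, 0, 164));   /* 0xA0A4 in host order */
    memcpy(frame_buf, &tci, sizeof tci);         /* bytes on the wire: A0 A4 */
}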
I am trying to use OpenSSL AES to encrypt my data. I found a pretty nice example at this link: http://saju.net.in/code/misc/openssl_aes.c.txt
The question I still couldn't find the answer to is why it pads the data even when the input is already a multiple of the key size.
For example, it needs 16 bytes as input to encrypt, or any multiple of 16.
I gave it 1024 bytes, including the null terminator, and it still gives me an output of size 1040.
As far as I know, for AES the output size equals the input size if the input is a multiple of 128 bits / 16 bytes.
Has anyone tried this example before me, or can anyone give me any idea?
Thanks in advance.
Most padding schemes require that some minimum amount of padding always be added. This is (at least primarily) so that on the receiving end, you can look at the last byte (or some small amount of data at the end) and know how much of the data at the end is padding, and how much is real data.
For example, a typical padding scheme puts zero bytes after the data with one byte at the end containing the number of bytes that are padding. For example, if you added 4 bytes of padding, the padding bytes (in hex) would be something like 00 00 00 04. Another common possibility puts that same value in all the padding bytes, so it would look like 04 04 04 04.
On the receiving end, the algorithm has to be ready to strip off the padding bytes. To do that, it looks at the last byte to tell it how many bytes of data to remove from the end and ignore. If there's no padding present, that's going to contain some value (whatever the last byte in the message happened to be). Since it has no way to know that no padding was added, it looks at that value, and removes that many bytes of data -- only in this case, it's removing actual data instead of padding.
Although it might be possible to devise a padding scheme that avoided adding extra data when/if the input happened to be an exact multiple of the block size, it's a lot simpler to just add at least one byte of padding to every message, so the receiver can count on always reading the last byte and finding how much of what it received is padding.
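For illustration, here is a minimal sketch of the second scheme described above (every padding byte holds the padding length, as PKCS#7 does). This is also why a 1024-byte input becomes 1040 bytes: the input is already block-aligned, so a full 16-byte block of padding is appended.
#include <stdint.h>
#include <stddef.h>

#define BLOCK 16

/* Add padding so the total length is a multiple of BLOCK; a full block of
   padding is appended when the input is already aligned. Returns the new
   length. Sketch only: buf must have room for len + BLOCK bytes. */
size_t pad(uint8_t *buf, size_t len) {
    uint8_t n = (uint8_t)(BLOCK - (len % BLOCK));   /* 1..16 bytes of padding */
    for (uint8_t i = 0; i < n; i++)
        buf[len + i] = n;                           /* every pad byte holds the count */
    return len + n;
}

/* Strip padding after decryption: the last byte says how many bytes to drop
   (no validation here). */
size_t unpad(const uint8_t *buf, size_t len) {
    return len - buf[len - 1];
}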
First of all, to clarify my goal: there are two programs written in C in our laboratory. I am working on a bidirectional proxy server for them (which will also manipulate the data), and I want to write that proxy server in Python. It is important to know that I know close to nothing about these two programs; I only know the definition file of the packets.
Now: assuming a packet definition in one of the C++ programs reads like this:
unsigned char Packet[0x32]; // Packet[Length]
int z=0;
Packet[0]=0x00; // Spare
Packet[1]=0x32; // Length
Packet[2]=0x01; // Source
Packet[3]=0x02; // Destination
Packet[4]=0x01; // ID
Packet[5]=0x00; // Spare
for(z=0;z<=24;z+=8)
{
Packet[9-z/8]=((int)(720000+armcontrolpacket->dof0_rot*1000)/(int)pow((double)2,(double)z));
Packet[13-z/8]=((int)(720000+armcontrolpacket->dof0_speed*1000)/(int)pow((double)2,(double)z));
Packet[17-z/8]=((int)(720000+armcontrolpacket->dof1_rot*1000)/(int)pow((double)2,(double)z));
Packet[21-z/8]=((int)(720000+armcontrolpacket->dof1_speed*1000)/(int)pow((double)2,(double)z));
Packet[25-z/8]=((int)(720000+armcontrolpacket->dof2_rot*1000)/(int)pow((double)2,(double)z));
Packet[29-z/8]=((int)(720000+armcontrolpacket->dof2_speed*1000)/(int)pow((double)2,(double)z));
Packet[33-z/8]=((int)(720000+armcontrolpacket->dof3_rot*1000)/(int)pow((double)2,(double)z));
Packet[37-z/8]=((int)(720000+armcontrolpacket->dof3_speed*1000)/(int)pow((double)2,(double)z));
Packet[41-z/8]=((int)(720000+armcontrolpacket->dof4_rot*1000)/(int)pow((double)2,(double)z));
Packet[45-z/8]=((int)(720000+armcontrolpacket->dof4_speed*1000)/(int)pow((double)2,(double)z));
Packet[49-z/8]=((int)armcontrolpacket->timestamp/(int)pow(2.0,(double)z));
}
if(SendPacket(sock,(char*)&Packet,sizeof(Packet)))
return 1;
return 0;
What would be the easiest way to receive that data, convert it into a readable python format, manipulate them and send them forward to the receiver?
You can receive the packet's 50 bytes with a .recv call on a properly connected socket (it might actually take more than one call in the unlikely event the TCP packet gets fragmented, so check the incoming length until you have exactly 50 bytes in hand ;-).
After that, understanding that C code is puzzling. The assignments of ints (presumably 4-bytes each) to Packet[9], Packet[13], etc, give the impression that the intention is to set 4 bytes at a time within Packet, but that's not what happens: each assignment sets exactly one byte in the packet, from the lowest byte of the int that's the RHS of the assignment. But those bytes are the bytes of (int)(720000+armcontrolpacket->dof0_rot*1000) and so on...
So must those last 44 bytes of the packet be interpreted as 11 4-byte integers (signed? unsigned?) or 44 independent values? I'll guess the former, and do...:
import struct
f = '>x4bx11i'
values = struct.unpack(f, packet)
the format f indicates: big-endian; an ignored "spare" byte; 4 single-byte values; another ignored "spare" byte; then 11 4-byte signed integers. The tuple values ends up with 15 items: the four single bytes (50, 1, 2, 1 in your example), then the 11 signed integers. You can use the same format string to pack a modified version of the tuple back into a 50-byte packet to resend.
Since you explicitly place the length in the packet, it may be that different packets have different lengths (though that's incompatible with the fixed-length declaration in your C sample), in which case you need to be a bit more careful in receiving and unpacking it; however, such details depend on information you don't give, so I'll stop trying to guess ;-).
Take a look at the struct module, specifically the pack and unpack functions. They work with format strings that allow you to specify what types you want to write or read and what endianness and alignment you want to use.