Implementation of ISATAP Protocol - C++

Can anybody help me figure out how to build an ISATAP packet?
I'm creating packets in C++ (WinPcap), and I can't work out what the packet should look like.
Specification: http://www.networksorcery.com/enp/protocol/isatap.htm
Is this a valid example of an ISATAP packet?
0000 5EFE C0A8 0110
(IP Address - 192.168.1.16)
4548 9559 (Some data)

According to the specification, the ISATAP interface identifier for 192.168.1.16 (which is not globally unique, so the u bit is 0) should look like this in hexadecimal:
00 00 5E FE
C0 A8 01 10
Some data
So your example is correct.
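For what it's worth, here is a minimal C++ sketch of building those 8 bytes (the helper name is mine; the full ISATAP address is a 64-bit prefix, e.g. the link-local fe80::/64, followed by this interface identifier, and the resulting IPv6 packet is then carried inside IPv4 as protocol 41):

#include <cstdint>
#include <cstring>

// Sketch: build the 8-byte ISATAP interface identifier for a private
// (non-globally-unique, u bit = 0) IPv4 address. For a globally unique
// address the first two bytes would be 02 00 instead of 00 00.
void isatapInterfaceId(const uint8_t ipv4[4], uint8_t out[8]) {
    out[0] = 0x00;                 // u bit clear
    out[1] = 0x00;
    out[2] = 0x5E;                 // IANA OUI 00-00-5E
    out[3] = 0xFE;                 // 0xFE = an embedded IPv4 address follows
    std::memcpy(out + 4, ipv4, 4); // 192.168.1.16 -> C0 A8 01 10
}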

Related

How to read a QTcpSocket from R

I'm trying to send data from Qt to R. I am new to the QtNetwork module and relatively new to Qt overall. As such I am also trying to figure out how QIODevice encodes data for the purposes of reading and writing.
If I run the Fortune Server Example and connect to it with the following code in R:
connection <- socketConnection(host="localhost", port=50743, open="rb", timeout=10)
readBin(connection, what="raw", n = 1000)
the following raw hexadecimal vector is returned
00 00 00 56 00 59 00 6f 00 75 00 20 00 77 00 69 00 6c 00 6c 00 20 00 66 00 65 00 65 00 6c 00 20 00 68 00 75 00 6e 00 67 00 72 00 79 00 20 00 61 00 67 00 61 00 69 00 6e 00 20 00 69 00 6e 00 20 00 61 00 6e 00 6f 00 74 00 68 00 65 00 72 00 20 00 68 00 6f 00 75 00 72 00 2e
Removing the first five bytes and all the remaining null characters and converting to char I get:
"You will feel hungry again in another hour."
So what I want to know is where do all the characters that are not part of the fortune come from? The fourth byte seems to be the byte length of the message from the sixth byte to the end, the rest of the "non-fortune" characters are all null.
I read that QByteArray terminates each byte with a null character and that a QByteArray is converted to a QBuffer before being written by QTcpSocket; is that what is happening here? Does QBuffer add the length of the message (but then what about the other four bytes?), and is every second byte of a QByteArray the null character? Also, the last byte is not null; did the readBin operation consume it, and how did readBin know where the message ended?
Is this the only way to write data to the socket? If I wanted to transmit values of type double would I have to convert them to QByteArray to transmit them in this fashion? Is there not some non-text way of transmitting data through a socket?
Any enlightenment would be much appreciated!
EDIT:
Thanks for the answer! For completeness' sake, here is how you might decode the string in R:
connection <- socketConnection(host="localhost", port=50743, open="rb", timeout=10)
# Read the first 32 bits, which contain the size of the string in bytes
len.raw <- readBin(connection, what="raw", n = 4)
# Convert the raw bytes to an integer (parse the hex string in base 16)
len <- strtoi(paste(c("0x", len.raw), collapse=""), base=16L)
# Read raw message
msg.raw <- readBin(connection, what="raw", n = len)
# convert to char using UTF-16BE
msg <- iconv(list(msg.raw),from="UTF-16BE")
close(connection)
cat(msg)
If you take a look at how the Fortune Server Example is implemented, you can see that it uses QDataStream to serialize fortunes (QStrings) over the socket:
QByteArray block;
QDataStream out(&block, QIODevice::WriteOnly);
out.setVersion(QDataStream::Qt_4_0);
out << fortunes.at(qrand() % fortunes.size());
So, the question is reduced to "How does QDataStream serialize QStrings?", and this is answered extensively in the documentation page about serializing Qt data types. You can see that a QString's serialization looks like this:
If the string is null: 0xFFFFFFFF (quint32)
Otherwise: The string length in bytes (quint32) followed by the data in UTF-16
And this is exactly what you are seeing in your question. The first four bytes are the string length in bytes, and the "nulls" you are seeing later appear because of using UTF-16 encoding.
Is this the only way to write data to the socket? If I wanted to transmit values of type double would I have to convert them to QByteArray to transmit them in this fashion? Is there not some non-text way of transmitting data through a socket?
You can use any serialization format you like. QDataStream is widely used in Qt since it supports most Qt data types out of the box. This has nothing to do with using QByteArray; you can let QDataStream write to the socket directly. QDataStream is, in fact, a binary (non-text) format, as you can see. If you want a textual, human-readable format, you can use JSON.
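As a rough sketch (not taken from the example; `socket` is assumed to be a connected QTcpSocket*), writing directly to the socket could look like this, and the same stream also handles doubles and other basic types:

#include <QDataStream>
#include <QTcpSocket>

void sendFortune(QTcpSocket *socket)
{
    QDataStream out(socket);              // the stream writes straight to the device
    out.setVersion(QDataStream::Qt_4_0);  // fix the version on both ends
    out << QString("You will feel hungry again in another hour.");
    out << 3.14;                          // doubles are serialized in binary too
}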
But if you are aiming to send data from Qt to R using QDataStream, you'll have to write your own QDataStream deserializer for R. I would recommend using a common serialization format that has implementations in both C++ and R, rather than re-inventing the wheel. I believe JSON meets this criterion, and if you want a binary format, msgpack might be interesting for you, since it supports a lot of programming languages (including R and C++).

How to calculate checksum of UDP packet embedded inside IP packet

I have a UDP packet embedded inside an IP packet, and I'm not able to calculate the UDP checksum properly, although I can correctly find the checksum of the IP header. Can someone explain how the UDP checksum is found?
[45 00 00 53 00 80 00 00 40 11 66 16 0A 00 00 03 0A 00 00 02] CA B1 CA B1 00 3F DF A5
The bytes enclosed in brackets are the IP header; its checksum field is 66 16.
UDP packet:
CA B1 Source port
CA B1 Destination port
00 3F Length
DF A5 Checksum
How was the checksum "DF A5" computed? I did the 16-bit addition and took the one's complement, but I still don't get that value. Do I need to include the IP header as well when calculating the UDP checksum?
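For reference, RFC 768 computes the UDP checksum over a pseudo-header taken from the IP header (source address, destination address, a zero byte, protocol 17 and the UDP length) followed by the UDP header and data, with the checksum field treated as zero while summing. Here is a minimal sketch (the helper names are mine; the dump above is truncated, so I haven't verified that it reproduces DF A5):

#include <cstddef>
#include <cstdint>
#include <vector>

// One's-complement sum of 16-bit big-endian words (RFC 1071 style).
static uint32_t sum16(const uint8_t *data, size_t len, uint32_t acc) {
    for (size_t i = 0; i + 1 < len; i += 2)
        acc += (uint32_t(data[i]) << 8) | data[i + 1];
    if (len & 1)                          // an odd trailing byte is padded with 0x00
        acc += uint32_t(data[len - 1]) << 8;
    return acc;
}

uint16_t udpChecksum(const uint8_t srcIp[4], const uint8_t dstIp[4],
                     const uint8_t *udp, size_t udpLen) {
    uint32_t acc = sum16(srcIp, 4, 0);    // pseudo-header: source IP
    acc = sum16(dstIp, 4, acc);           //                destination IP
    acc += 17;                            //                zero byte + protocol (UDP = 17)
    acc += uint32_t(udpLen);              //                UDP length (again)
    std::vector<uint8_t> copy(udp, udp + udpLen);
    copy[6] = copy[7] = 0;                // the checksum field counts as zero
    acc = sum16(copy.data(), copy.size(), acc);
    while (acc >> 16)                     // fold carries back into 16 bits
        acc = (acc & 0xFFFF) + (acc >> 16);
    uint16_t sum = uint16_t(~acc);
    return sum ? sum : 0xFFFF;            // a computed 0x0000 is transmitted as 0xFFFF
}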

Setting endianness of VS debugger

I am using VS 2012 and programming in C++. I have a wide string
wchar_t *str = L"Hello, world!";
Technically I read the string from a file but I don't know if that makes a difference. When I look at str in the memory window it looks like this:
00 48 00 65 00 6c 00 6c 00 6f 00 2c 00 20 00 77 00 6f 00 72 00 6c 00 64 00 21 00
As you can see the string is stored in memory as big-endian.
When I hover my mouse over the string I get:
L"䠀攀氀氀漀Ⰰ 眀漀爀氀搀℀"
And after I reverse the endianness of str the memory looks like:
48 00 65 00 6c 00 6c 00 6f 00 2c 00 20 00 77 00 6f 00 72 00 6c 00 64 00 21 00 00
And the hover over looks like:
L"Hello, world!"
It seems that the debugger displays UTF-16 in little-endian by default. My program reads big-endian files so it is very tedious to keep reversing the endianness of all strings to debug them. Is there any way to change the endianness of the debugger's display?
Except for debug purposes I can do all my processing in big endian.
It's not only the debugger. The wchar_t functions of Visual Studio are little endian, as the host is. When you want to process the data, you need to convert the strings to little endian anyway.
It's worth making this change even if you later output the strings to a file with a different endianness. Strings are defined as a byte sequence; applying your host's endianness to a string looks strange anyhow.
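A minimal sketch of that conversion (the helper is mine, assuming Windows' 16-bit wchar_t):

#include <cstdint>

// Swap each UTF-16 code unit of a big-endian string, in place, into the
// host's little-endian order (this does not handle a leading BOM).
void swapUtf16Endianness(wchar_t *s) {
    for (; *s != L'\0'; ++s) {
        const uint16_t v = static_cast<uint16_t>(*s);
        *s = static_cast<wchar_t>((v << 8) | (v >> 8));
    }
}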
Your best shot at getting this to work is to define your own type and create a debugger type visualizer for it (see Customizing the Visual Studio Debugger Display of Your Data).
Or maybe you can quick-hack it by shifting the address by one byte in the Watch window.
You're working with a non-native string format that just happens to "feel" similar to the native format, so you are tempted to think there should almost be a way to do it. But to the debugger it's just a foreign binary format. The debugger is not designed to handle foreign endianness, just as it is not designed to visualize an OGG stream packet.
If you want to use available tools for manipulating native-endian Unicode strings, you'll need to convert to native-endian Unicode format.
As has been pointed out, VS uses the native endianness, which is little endian on an Intel/AMD. The problem is that you're not reading the strings correctly; you should imbue the std::istream with a locale which reads UTF-16BE (since this is apparently the encoding form you're trying to read). std::istream (or rather the backing std::filebuf) will automatically do the code translation on the fly when reading and writing.
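A sketch of that suggestion (the file name and helper are hypothetical; std::codecvt_utf16 defaults to big-endian input and ships with VS 2012, although it was later deprecated in C++17):

#include <codecvt>
#include <fstream>
#include <locale>
#include <string>

std::wstring readBigEndianLine(const char *path) {
    std::wifstream in(path, std::ios::binary);
    // Imbue a UTF-16BE facet before reading, so the strings arrive as
    // native little-endian wchar_t and the debugger displays them normally.
    in.imbue(std::locale(in.getloc(),
                         new std::codecvt_utf16<wchar_t, 0x10FFFF>));
    std::wstring line;
    std::getline(in, line);               // e.g. L"Hello, world!"
    return line;
}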
You can set the endianness of the Memory window using the context menu. Right-click in the Memory window and check "Big Endian".

Binary through http

I'm using C++ to send a POST request with binary data. The code looks like this:
int binary[4] = { 1, 2, 3, 4 };
std::stringstream out;
out << "POST /address HTTP/1.1\r\n";
out << "Host: localhost\r\n";
out << "Connection: Keep-Alive\r\n";
out << "Content-Type: application/octet-stream\r\n";
out << "Content-Transfer-Encoding: binary\r\n";
out << "Content-Length: " << 4*sizeof(int) << "\r\n\r\n"; // 4 elements of integer type
And then I send the data over the open socket connection:
std::string headers = out.str();
socket.send(headers.c_str(), headers.size()); // Send headers first
socket.send(reinterpret_cast<char*>(&binary[0]), 4*sizeof(int)); // And the array of numbers
But I was told that sending raw bytes over HTTP is wrong. Is that right? For example, that I can't send 0 (zero) because it's used by the protocol.
If that's right (because I can't handle that POST request and get back the data I've sent), what could I use instead? Maybe convert the array to hex or base64url?
Thanks.
The problem that the people who say it's wrong are addressing is endianness. You can of course transfer binary data over HTTP, but when the other end receives it, it must be able to interpret it correctly. Let's suppose your machine is a little endian machine; your integers will be stored in memory as (32-bit ints)
01 00 00 00
02 00 00 00
03 00 00 00
04 00 00 00
and you send these 16 bytes as they "are". Now, suppose the receiving machine takes the data naively, disregarding who sent them and how, and suppose that machine is a big endian machine; on such a machine, the memory layout for the integers 1, 2, 3, 4 would be
00 00 00 01
00 00 00 02
00 00 00 03
00 00 00 04
This means that for the receiving machine the first integer is 0x01000000, which is not the 0x00000001 the sender intended.
If you decide that your integers must always be sent as big endian integers, then a little endian sender needs to re-arrange them properly before sending. There are functions like hton* (host to net) that transform host 16/32-bit integers to "net byte order", which is big endian (and vice versa with ntoh*, net to host).
Note that the data are not scrambled; they are sent as they "are", so to say. What changes is the way you store them in memory and the way you interpret them when reading. Usually it's not an issue, since data are sent according to a format that, if needed, specifies the endianness of non-single-byte data (e.g. see the PNG format spec, sec 2.1, integer byte order: PNG uses net byte order, i.e. big endian).
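A small sketch of that conversion applied to the array from the question (the send call is commented out because the question's socket wrapper isn't shown):

#include <cstdint>
#include <winsock2.h>   // htonl/ntohl on Windows; use <arpa/inet.h> on POSIX

// Rewrite the integers in network byte order (big endian) before sending,
// so a receiver on any platform can recover 1, 2, 3, 4 by applying ntohl().
void toNetworkOrder(const uint32_t host[4], uint32_t wire[4]) {
    for (int i = 0; i < 4; ++i)
        wire[i] = htonl(host[i]);
    // then: socket.send(reinterpret_cast<const char*>(wire), 4 * sizeof(uint32_t));
}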
But I was told that sending raw bytes over HTTP is wrong. Is that right?
No, it is fine in the body, depending on the Content-Type of course. "Octet-stream" should be fine in this regard, and yes it can contain zero bytes.
There is nothing wrong with sending binary data via HTTP.
This happens all the time with images and with file uploads.

What is the meaning of 6E 00 when I send a command to a SmartCard

I'm trying to access a smart card via C++.
I already have the connection and the card handle.
But when I send an APDU command via SCardTransmit, I get 6E 00 as the answer from the card.
No matter which APDU command I send, it's 6E 00 every time.
For example:
FF CA FA 00 00 (Card's ATR - Answer To Reset) or
FF CA FF 82 00 (Product name in ASCII)
The same thing happens when I send the command with a PC/SC test tool like "PC/SC Diag".
Does anybody have an idea what this error code means and how to solve the problem?
Please help me! ;-)
According to ISO 7816-4 0x6E00 means "Class not supported".
Are you using the correct CLA value in the APDU?
The class (CLA) byte is usually 0x00, 0xA0, 0xC0 or 0xF0, and it is sometimes masked with 0x0C, which indicates secure messaging on some cards. AFAIK, the only invalid CLA value is 0xFF.
But this varies from one card to another, do you have the card specification from the vendor?
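As a rough sketch (the helper is mine; it assumes the handle came from SCardConnect with the T=0 protocol and that you link against winscard.lib), you can check the status words SW1 SW2 at the end of the response while trying a CLA of 0x00, or whatever your card's specification requires, instead of 0xFF:

#include <winscard.h>

// Send one APDU and return the two status words from the end of the response.
bool transmitApdu(SCARDHANDLE hCard, const BYTE *apdu, DWORD apduLen,
                  BYTE &sw1, BYTE &sw2) {
    BYTE recv[258];
    DWORD recvLen = sizeof(recv);
    LONG rc = SCardTransmit(hCard, SCARD_PCI_T0, apdu, apduLen,
                            NULL, recv, &recvLen);
    if (rc != SCARD_S_SUCCESS || recvLen < 2)
        return false;
    sw1 = recv[recvLen - 2];   // 0x6E 0x00 = class not supported
    sw2 = recv[recvLen - 1];   // 0x90 0x00 = success
    return true;
}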
It means "Wrong Instruction Class". Maybe it's just the wrong type of card?
https://datatracker.ietf.org/doc/html/draft-urien-eap-smartcard-05
The BasicCard PDF manual has a list of error codes on pages 152-153.
The one you got is described there as "CLA byte of command not recognized".
"6A 86" is likely the response to a card-specific command, and I don't see it in the BasicCard list.