What is the meaning of 6E 00 when I send a command to a SmartCard - c++

I am trying to access a smart card via C++.
I already have the connection and the card handle.
But when I send an APDU command via SCardTransmit, I get 6E 00 as the answer from the card.
No matter which APDU command I send, the response is always 6E 00.
For example:
FF CA FA 00 00 (Card's ATR - Answer To Reset) or
FF CA FF 82 00 (Product name in ASCII)
The same thing happens when I send the command with a PC/SC test tool like "PC/SC Diag".
Does anybody have an idea what this error code means and how to solve the problem?
Please help me! ;-)

According to ISO 7816-4, 0x6E00 means "Class not supported".
Are you using the correct CLA value in the APDU?
The class (CLA) byte is usually 0x00, 0xA0, 0xC0 or 0xF0, and is sometimes masked with 0x0C, which indicates Secure Messaging on some cards. AFAIK, the only invalid CLA value is 0xFF.
But this varies from one card to another, do you have the card specification from the vendor?
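For reference, here is a minimal sketch of sending an APDU with SCardTransmit and inspecting the trailing status bytes SW1 SW2 (my own code, not from the question; the helper name is made up, and it assumes the card negotiated T=1, so use SCARD_PCI_T0 for T=0):

#include <windows.h>
#include <winscard.h>
#include <cstdio>

// Sends one APDU and prints the status word, e.g. for the GET DATA pseudo-APDU:
//   const BYTE apdu[] = { 0xFF, 0xCA, 0xFA, 0x00, 0x00 };
//   TransmitApdu(hCard, apdu, sizeof(apdu));
bool TransmitApdu(SCARDHANDLE hCard, const BYTE* apdu, DWORD apduLen)
{
    BYTE recv[258];                       // response data + SW1 SW2
    DWORD recvLen = sizeof(recv);
    LONG rc = SCardTransmit(hCard, SCARD_PCI_T1, apdu, apduLen,
                            NULL, recv, &recvLen);
    if (rc != SCARD_S_SUCCESS || recvLen < 2)
        return false;
    BYTE sw1 = recv[recvLen - 2], sw2 = recv[recvLen - 1];
    printf("SW1 SW2 = %02X %02X\n", sw1, sw2);   // 6E 00 -> class (CLA) not supported
    return sw1 == 0x90 && sw2 == 0x00;           // 90 00 -> success
}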

It means "Wrong Instruction Class". Maybe it's just the wrong type of card?
https://datatracker.ietf.org/doc/html/draft-urien-eap-smartcard-05

The BasicCard PDF manual has a list of error codes on pages 152-153.
The one you got is described there as "CLA byte of command not recognized".
"6A 86" is likely the response to a card-specific command; I don't see it in the BasicCard list.

Related

How to retrieve underlying block device IO error

Consider a device in the system, something under /dev/hdd[sg][nvme]xx
Open the device, get the file descriptor, and start working with it (read(v)/write(v)/lseek, etc.); at some point you may get EIO. How do you retrieve the underlying error reported by the device driver?
EDIT001: in case it is impossible using unistd functions, maybe there are other ways to work with block devices that can provide more low-level information, like sg_scsi_sense_hdr?
You can't get any more error detail out of the POSIX functions. You're onto the right track with the SCSI generic stuff though. But, boy, it's loaded with hair. Check out the example in sg3_utils of how to do a SCSI READ(16). This will let you look at the sense data when it comes back:
https://github.com/hreinecke/sg3_utils/blob/master/examples/sg_simple16.c
Of course, this technique doesn't work with NVMe drives. (At least, not to my knowledge).
One concept I've played with in the past is to use normal POSIX/libc block I/O functions like pread and pwrite until I get an EIO out. At that point, you can bring in the SCSI-generic versions to try to figure out what happened. In the ideal case, a pread or lseek/read fails with EIO. You then turn around and re-issue it using a SG READ (10) or (16). If it's not just a transient failure, this may return sense data that your application can use.
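A rough sketch of that re-issue step (my own code, not from sg3_utils; the function name and the 20-second timeout are arbitrary), using the SG_IO ioctl to send a READ(16) and inspect the fixed-format sense data:

#include <scsi/sg.h>
#include <sys/ioctl.h>
#include <cstdint>
#include <cstdio>
#include <vector>

// Re-reads a range via SCSI READ(16) after a normal pread() failed with EIO,
// so the sense key / additional sense code can be inspected.
bool ReissueAsSgRead16(int fd, uint64_t lba, uint32_t blocks, uint32_t block_size) {
    std::vector<unsigned char> data(blocks * block_size);
    unsigned char cdb[16] = {0};
    unsigned char sense[32] = {0};

    cdb[0] = 0x88;                              // READ(16) opcode
    for (int i = 0; i < 8; ++i)                 // 64-bit LBA, big-endian
        cdb[2 + i] = (lba >> (8 * (7 - i))) & 0xFF;
    for (int i = 0; i < 4; ++i)                 // transfer length in blocks, big-endian
        cdb[10 + i] = (blocks >> (8 * (3 - i))) & 0xFF;

    sg_io_hdr_t io{};
    io.interface_id = 'S';
    io.cmd_len = sizeof(cdb);
    io.cmdp = cdb;
    io.dxfer_direction = SG_DXFER_FROM_DEV;
    io.dxferp = data.data();
    io.dxfer_len = data.size();
    io.sbp = sense;
    io.mx_sb_len = sizeof(sense);
    io.timeout = 20000;                         // milliseconds

    if (ioctl(fd, SG_IO, &io) < 0) {
        perror("SG_IO");
        return false;
    }
    if (io.sb_len_wr > 0) {                     // the device returned sense data
        printf("sense key 0x%02X asc 0x%02X ascq 0x%02X\n",
               sense[2] & 0x0F, sense[12], sense[13]);
        return false;
    }
    return true;
}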
Here's an example, using the command-line sg_read program. I have an iSCSI attached disk that I'm reading and writing. On the target, I remove its LUN mapping. dd reports EIO:
# dd if=/dev/sdb of=/tmp/output bs=512 count=1 iflag=direct
dd: error reading ‘/dev/sdb’: Input/output error
but sg_read reports some more useful information:
[root@localhost src]# sg_read blk_sgio=1 bs=512 cdbsz=10 count=512 if=/dev/sdb odir=1 verbose=10
Opened /dev/sdb for SG_IO with flags=0x4002
read cdb: 28 00 00 00 00 00 00 00 80 00
duration=9 ms
reading: SCSI status: Check Condition
Fixed format, current; Sense key: Illegal Request
Additional sense: Logical unit not supported
Raw sense data (in hex):
70 00 05 00 00 00 00 0a 00 00 00 00 25 00 00 00
00 00
sg_read: SCSI READ failed
Some error occurred, remaining block count=512
0+0 records in
You can see the Logical unit not supported additional sense code in the above output, indicating that there's no such LU at the target.
Possible? Yes. But as you can see from the code in sg_simple16.c, it's not easy!

boost asio async_read() seems to be skipping some nulls

I'm going a bit crazy with a simple boost asio TCP conversation.
I have a server and a client. I use length-prefixed messages. The client sends "one" and the server responds with "two". So this is what I see happen:
The client sends, and the server receives, 00 00 00 03 6F 6E 65 (== 0x0003 one).
The server responds by sending 00 00 00 03 74 77 6F (== 0x0003 two).
Now here is where it is very strange (code below). If the client reads four bytes, I expect it to get 00 00 00 03. If it reads seven, I expect to see 00 00 00 03 74 77 6F. (In fact, it will read four (the length header), then three (the body).)
But what I actually see is that, while if I read seven at once I do see 00 00 00 03 74 77 6F, if I only ask for four, I see 74 77 6F 03. This doesn't make any sense to me.
Here is the code I'm using to receive it (minus some print statements and such):
const int kTcpHeaderSize = 4;
const int kTcpMessageSize = 2048;
std::array<char, kTcpMessageSize + kTcpHeaderSize> receive_buffer_;

void TcpConnection::ReceiveHeader() {
  boost::asio::async_read(
      socket_, boost::asio::buffer(receive_buffer_, kTcpHeaderSize),
      [this](boost::system::error_code error_code,
             std::size_t received_length) {
        if (error_code) {
          LOG_WARNING << "Header read error: " << error_code;
          socket_.close();  // TODO: Recover better.
          return;
        }
        if (received_length != kTcpHeaderSize) {
          LOG_ERROR << "Header length " << received_length
                    << " != " << kTcpHeaderSize;
          socket_.close();  // TODO: Recover better.
          return;
        }
        uint32_t read_length_network;
        memcpy(&read_length_network, receive_buffer_.data(), kTcpHeaderSize);
        uint32_t read_length = ntohl(read_length_network);
        // Error: read_length is in the billions.
        ReceiveBody(read_length);
      });
}
Note that kTcpHeaderSize is 4. If I change it to 7 (which makes no sense, but just for the experiment) I see the stream of 7 bytes I expect. When it is 4, I see a stream that is not the first four bytes of what I expect.
Any pointers on what I am doing wrong?
From what I can see in your code it should work according to the async_read documentation:
The asynchronous operation will continue until one of the following conditions is true:
The supplied buffers are full. That is, the bytes transferred is equal to the sum of the buffer sizes.
An error occurred.
However see the remark at the bottom:
This overload is equivalent to calling:
boost::asio::async_read(
    s, buffers,
    boost::asio::transfer_all(),
    handler);
It looks like the transfer_all condition might be the only thing checked.
Try using the transfer_exactly condition, and if it does work, report an issue on https://github.com/boostorg/asio/issues.
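For illustration, here is a minimal, self-contained sketch of that change (my own code, not the poster's; the address, port, and names are placeholders), reading a 4-byte length prefix with transfer_exactly:

#include <boost/asio.hpp>
#include <array>
#include <cstdint>
#include <iostream>

int main() {
    boost::asio::io_context io;
    boost::asio::ip::tcp::socket socket(io);
    // Placeholder endpoint; connect to wherever your length-prefixed server runs.
    socket.connect({boost::asio::ip::make_address("127.0.0.1"), 9000});

    std::array<unsigned char, 4> header{};
    boost::asio::async_read(
        socket, boost::asio::buffer(header),
        boost::asio::transfer_exactly(header.size()),  // complete only after exactly 4 bytes
        [&](boost::system::error_code ec, std::size_t /*n*/) {
            if (ec) { std::cerr << ec.message() << '\n'; return; }
            // Decode the big-endian length prefix without relying on ntohl.
            uint32_t body_len = (uint32_t(header[0]) << 24) | (uint32_t(header[1]) << 16) |
                                (uint32_t(header[2]) << 8)  |  uint32_t(header[3]);
            std::cout << "body length: " << body_len << '\n';
        });
    io.run();
}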
The suggestion by @sergiopm to use transfer_all was good, and I'm pretty sure it helped. The other issue involved buffer lifetimes in the asynchronous send/receive functions. I got a bit confused, apparently, about how long certain things would live and how long I needed them to live, and so I was overwriting things from time to time. That may have been more important than transfer_all, but I'm still happy to give @sergiopm credit for helping get me on my way.
The intent has just been to have a simple TCP client or server that I can declare, hand a callback, and then go on my way, knowing that I only need to pay attention to those callbacks.
I'm pretty sure something like this must exist (thousands of times over). Do feel free to comment below, both for me and for those who come after, if you think there are better libraries than asio for this task (i.e., ones that would involve substantially less code on my part). The principal constraint is that, due to multiple languages and services, we need to own the wire protocol. Otherwise we get into things like "does library X have a module for language Y?".
As an aside, it's interesting to me that essentially every example I've found does length-prefix encoding rather than beginning/end of packet encoding. Length prefix is really easy to implement but, unless I'm quite mistaken, suffers from re-sync hell: if a stream is interrupted ("I'm going to send you 100 bytes, here are the first 50 but then I died") it's not clear to me that there aren't scenarios where I'm unable to resync properly.
Anyway, I learned a lot along the way, I recommend the exercise.

Implementation of ISATAP Protocol

Can anybody help me figure out how to implement an ISATAP packet?
I'm creating packets in C++ (WinPcap), and I can't figure out what it should look like.
Specification: http://www.networksorcery.com/enp/protocol/isatap.htm
Is this an example of an ISATAP packet?
0000 5EFE C0A8 0110
(IP Address - 192.168.1.16)
4548 9559 (Some data)
According to the specification, the packet for 192.168.1.16, which is not globally unique (U bit set to 0), should look like this in hexadecimal:
00 00 5E FE
C0 A8 01 10
Some data
So it's correct
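If it helps, here is a small sketch (my own code; the function name is made up) that builds that 8-byte ISATAP interface identifier, with the U/L bit cleared for a non-globally-unique IPv4 address:

#include <array>
#include <cstdint>
#include <cstdio>

// Builds 00 00 5E FE | IPv4 for private addresses; the first byte becomes 0x02
// when the embedded IPv4 address is globally unique (U bit set).
std::array<uint8_t, 8> IsatapInterfaceId(uint32_t ipv4, bool globallyUnique) {
    std::array<uint8_t, 8> id{};
    id[0] = globallyUnique ? 0x02 : 0x00;
    id[1] = 0x00;
    id[2] = 0x5E;
    id[3] = 0xFE;
    id[4] = (ipv4 >> 24) & 0xFF;   // 192 -> C0
    id[5] = (ipv4 >> 16) & 0xFF;   // 168 -> A8
    id[6] = (ipv4 >> 8) & 0xFF;    //   1 -> 01
    id[7] = ipv4 & 0xFF;           //  16 -> 10
    return id;
}

int main() {
    auto id = IsatapInterfaceId((192u << 24) | (168u << 16) | (1u << 8) | 16u, false);
    for (uint8_t b : id) std::printf("%02X ", b);  // prints: 00 00 5E FE C0 A8 01 10
    std::printf("\n");
}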

Setting endianness of VS debugger

I am using VS 2012 and programming in C++. I have a wide string
wchar_t *str = L"Hello world";
Technically I read the string from a file but I don't know if that makes a difference. When I look at str in the memory window it looks like this:
00 48 00 65 00 6c 00 6c 00 6f 00 2c 00 20 00 77 00 6f 00 72 00 6c 00 64 00 21 00
As you can see the string is stored in memory as big-endian.
When I hover my mouse over the string I get:
L"䠀攀氀氀漀Ⰰ 眀漀爀氀搀℀"
And after I reverse the endianness of str the memory looks like:
48 00 65 00 6c 00 6c 00 6f 00 2c 00 20 00 77 00 6f 00 72 00 6c 00 64 00 21 00 00
And the hover over looks like:
L"Hello, world!"
It seems that the debugger displays UTF-16 in little-endian by default. My program reads big-endian files so it is very tedious to keep reversing the endianness of all strings to debug them. Is there any way to change the endianness of the debugger's display?
Except for debug purposes I can do all my processing in big endian.
It's not only the debugger. The wchar_t functions of Visual Studio are little-endian, as the host is. When you want to process the data, you need to reverse the string's endianness to little-endian anyway.
It's worth making this change even if you later output the strings to a file with a different endianness. Strings are defined as a byte sequence; applying your own endianness to a string looks strange anyhow.
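A trivial sketch of that conversion (my own code; it assumes 16-bit wchar_t, as on Windows):

#include <string>

// Swaps each UTF-16 code unit from big-endian to the native little-endian layout.
std::wstring SwapUtf16Endianness(const std::wstring& s) {
    std::wstring out(s);
    for (wchar_t& c : out)
        c = static_cast<wchar_t>(((c & 0x00FF) << 8) | ((c & 0xFF00) >> 8));
    return out;
}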
Your best shot at getting this to work is to define your own type and create a debugger type visualizer for it (see Customizing the Visual Studio Debugger Display of Your Data).
Or maybe you can quick-hack it by shifting the address by 1 byte in the watch window.
You're working with a non-native string format that just happens to "feel" similar to the native format. So you are tempted to think there should be almost a way to do it. But to the debugger, it's just a foreign binary format. The debugger is not designed to handle foreign endianness just as it does not handle visualizing an OGG stream packet.
If you want to use available tools for manipulating native-endian Unicode strings, you'll need to convert to native-endian Unicode format.
As has been pointed out, VS uses the native endianness, which is little endian on an Intel/AMD machine. The problem is that you're not reading the strings correctly; you should imbue the std::istream with a locale which reads UTF-16BE (since this is apparently the encoding form you're trying to read). std::istream (or rather the backing std::filebuf) will automatically do the code translation on the fly when reading and writing.
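A sketch of that imbue approach (the file name is made up; std::codecvt_utf16 is available in VS 2012, though it was later deprecated in C++17):

#include <codecvt>
#include <fstream>
#include <locale>
#include <string>

int main() {
    std::wifstream in("input_utf16be.txt", std::ios::binary);
    // codecvt_utf16 defaults to big-endian input; consume_header skips a BOM if present.
    in.imbue(std::locale(in.getloc(),
                         new std::codecvt_utf16<wchar_t, 0x10FFFF, std::consume_header>));
    std::wstring line;
    while (std::getline(in, line)) {
        // line now holds native (little-endian) wchar_t data, so the debugger
        // hover and watch windows display it as readable text.
    }
}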
You can set the endianness of the Memory window using the context menu. Right-click in the Memory window and check "Big Endian".

Dump only a portion of memory in VS 2005

Does anyone know if there is a way to dump only a chunk of memory to disk using VS? Basically, I want to give it an address and a length, and have it write the memory to disk. That way I can do a binary diff.
Thanks.
I'm kind of surprised VS won't let you do that from the Memory dump window...
You might be able to get what you want (or close to it) with the VS command window:
>Tools.LogCommandWindowOutput c:\temp\testdump.log /overwrite
>Debug.ListMemory /Count:16 0x00444B20
0x00444B20 00 00 00 00 00 00 00 00 13 00 12 00 86 07 19 00 ................
>Tools.LogCommandWindowOutput /off
If you're willing to use WinDbg (or ntsd/cdb), you can use the .writemem debugger command to do exactly what you want.
I believe you can only save a complete binary minidump. However, you can use the Debug Memory window and copy/paste to a text file to do memory diffs.
OK, I have tried this in VS 2008, but I believe VS 2005 should allow the same:
If the memory is a string (if it doesn't contain zero bytes), you can put the following into a watch window: (unsigned char*)(ptr),1024 to see 1kB in the text visualizer. However, this stops at zero bytes, so if you have binary data, this won't work.
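Another possible workaround, if you can modify the debuggee: compile in a tiny helper and call it from the Immediate window with the address and length you want (the function name and path below are made up):

#include <cstdio>
#include <cstddef>

// Writes [address, address + length) to a file; returns true if every byte was written.
// Callable from the Immediate window, e.g.:
//   DumpMemory((void*)0x00444B20, 4096, "c:\\temp\\chunk.bin")
extern "C" bool DumpMemory(const void* address, size_t length, const char* path) {
    FILE* f = std::fopen(path, "wb");
    if (!f)
        return false;
    size_t written = std::fwrite(address, 1, length, f);
    std::fclose(f);
    return written == length;
}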