I have a card and want to get the ATR from it (using a method from an SDK).
The implementation looks like this:
unsigned char ATR[128]={0};
int len=33;
int maxlen=33;
ret = sd7816_ATR(0,1,ATR,len,maxlen,1);
According to the SDK, the first, second, and last parameters to the sd7816_ATR function have to be these values.
I tried changing the length fields to different values, including 0, but that didn't help.
My concern is that the ATR buffer I pass in is empty to begin with, and I expect
something to be written into it after the call finishes (which actually returns success).
But after the call, ATR is still empty. What could be going wrong here?
(I want to find out whether the card is of ISO/IEC 14443 or ISO/IEC 7816 type.)
You are trying to receive an ATR using a command specific to ISO/IEC 7816-3 contact cards. In this particular case, it is requested from a (SIM-form-factor) SAM card reader. However, you are trying to read out a contactless reader.
Now, contactless cards do not have an ATR. Some cards do have an ATS (i.e. ISO/IEC 14443 Type A cards), but that has to be requested with the corresponding ISO/IEC 14443 RATS command. Some cards, particularly Type B cards, contain an EF.ATR to make up for the lack of (space within the) ATR. Still, an ATR/ATS has only limited use for identifying cards.
ISO/IEC 7816 is comprised of several parts: parts 1 to 3 describe contact cards, and parts 4 and higher describe the application-level APDU commands and file structure of processor cards. If your contactless card implements ISO/IEC 7816-4, then you can, in general, also directly use the PC/SC interface to send and receive APDUs to/from the card.
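For example, a minimal PC/SC sketch using the standard winscard API (the reader name below is a placeholder to be replaced with a real name from SCardListReaders, and the APDU simply selects the master file; adapt both to your card):

#include <windows.h>
#include <winscard.h>   // link with winscard.lib (pcsclite on Linux)
#include <cstdio>

int main() {
    SCARDCONTEXT ctx;
    SCARDHANDLE card;
    DWORD protocol;

    if (SCardEstablishContext(SCARD_SCOPE_SYSTEM, NULL, NULL, &ctx) != SCARD_S_SUCCESS)
        return 1;

    // Reader name is a placeholder; enumerate real names with SCardListReaders.
    if (SCardConnect(ctx, TEXT("ACME Contactless Reader 0"), SCARD_SHARE_SHARED,
                     SCARD_PROTOCOL_T0 | SCARD_PROTOCOL_T1,
                     &card, &protocol) != SCARD_S_SUCCESS)
        return 1;

    // Example APDU: SELECT the master file (3F00).
    BYTE apdu[] = { 0x00, 0xA4, 0x00, 0x00, 0x02, 0x3F, 0x00 };
    BYTE resp[258];
    DWORD respLen = sizeof(resp);
    const SCARD_IO_REQUEST* pci =
        (protocol == SCARD_PROTOCOL_T1) ? SCARD_PCI_T1 : SCARD_PCI_T0;

    if (SCardTransmit(card, pci, apdu, sizeof(apdu), NULL, resp, &respLen) == SCARD_S_SUCCESS)
        printf("SW: %02X %02X\n", resp[respLen - 2], resp[respLen - 1]);

    SCardDisconnect(card, SCARD_LEAVE_CARD);
    SCardReleaseContext(ctx);
    return 0;
}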
In general, readers are contact-only or contactless-only. If you have a reader that supports both contact and contactless operation, it will generally show up as two different readers in the operating system. So in general, if you know the reader, you know whether the card is a contact card or a contactless card.
SAM slots may not be identified as readers by the operating system; you may only be able to access them using a low-level interface. They are mainly used as secure storage for the keys of the terminal/inspection system/interface device, or whatever the name is of the system that reads out the card.
I'm trying to play with Kafka Streams to aggregate some attributes of People.
I have a Kafka Streams test like this:
val factory = new ConsumerRecordFactory[Array[Byte], Character]("input", new ByteArraySerializer(), new CharacterSerializer())
var i = 0
while (i != 5) {
  testDriver.pipeInput(factory.create("input", Character(123, 12), 15 * 10000L))
  i += 1
}
val output = testDriver.readOutput....
I'm trying to group the values by key like this:
streamBuilder.stream[Array[Byte], Character](inputKafkaTopic)
  .filter((key, _) => key == null)
  .mapValues(character => PersonInfos(character.id, character.id2, character.age)) // case class
  .groupBy((_, value) => CharacterInfos(value.id, value.id2)) // case class
  .count().toStream.print(Printed.toSysOut[CharacterInfos, Long])
When I run the code, I get this:
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 1
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 2
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 3
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 4
[KTABLE-TOSTREAM-0000000012]: CharacterInfos(123,12), 5
Why am I getting 5 rows instead of just one line with CharacterInfos and the count?
Doesn't groupBy just change the key?
If you use the TopologyTestDriver, caching is effectively disabled, and thus every input record will always produce an output record. This is by design, because caching implies non-deterministic behavior, which makes it very hard to write an actual unit test.
If you deploy the code in a real application, the behavior will be different and caching will reduce the output load -- which intermediate results you will get is not defined (i.e., non-deterministic); compare Michael Noll's answer.
For your unit test it should actually not really matter: you can either test for all output records (i.e., all intermediate results), or put all output records into a key-value Map and only test for the last emitted record per key (if you don't care about the intermediate results).
Furthermore, you could use the suppress() operator to get fine-grained control over what output messages you get. suppress()—in contrast to caching—is fully deterministic, and thus writing a unit test works well. However, note that suppress() is event-time driven, and thus, if you stop sending new records, time does not advance and suppress() does not emit data. For unit testing this is important to consider, because you might need to send some additional "dummy" data to trigger the output you actually want to test for. For more details on suppress(), check out this blog post: https://www.confluent.io/blog/kafka-streams-take-on-watermarks-and-triggers
Update: I didn't spot the line in the example code that refers to the TopologyTestDriver in Kafka Streams. My answer below is for the 'normal' KStreams application behavior, whereas the TopologyTestDriver behaves differently. See the answer by Matthias J. Sax for the latter.
This is expected behavior. Somewhat simplified, Kafka Streams emits, by default, a new output record as soon as a new input record is received.
When you are aggregating (here: counting) the input data, the aggregation result is updated (and thus a new output record produced) as soon as new input is received for the aggregation.
input record 1 ---> new output record with count=1
input record 2 ---> new output record with count=2
...
input record 5 ---> new output record with count=5
What to do about it: You can reduce the number of 'intermediate' outputs by configuring the size of the so-called record caches as well as the commit.interval.ms parameter. See Memory Management. However, how much reduction you will see depends not only on these settings but also on the characteristics of your input data, and because of that the extent of the reduction may also vary over time (think: could be 90% in the first hour of data, 76% in the second hour of data, etc.). That is, the reduction process is deterministic, but the resulting amount of reduction is difficult to predict from the outside.
Note: When doing windowed aggregations (like windowed counts), you can also use the suppress() API so that the number of intermediate updates is not only reduced, but only a single output per window is ever emitted. However, in your use case/code the aggregation is not windowed, so you cannot use the suppress() API.
To help you understand why the setup is this way: you must keep in mind that a streaming system generally operates on unbounded streams of data, which means the system doesn't know 'when it has received all the input data'. So even the term 'intermediate outputs' is actually misleading: at the time the second input record is received, for example, the system believes that the result of the (non-windowed) aggregation is '2' -- it's the correct result to the best of its knowledge at this point in time. It cannot predict whether (or when) another input record might arrive.
For windowed aggregations (where suppress() is supported) this is a bit easier, because the window size defines a boundary for the input data of a given window. Here, the suppress() API lets you trade off better latency with multiple outputs per window (default behavior, suppress() disabled) against longer latency with only a single output per window (suppress() enabled). In the latter case, if you have 1h windows, you will not see any output for a given window until 1h later, so to speak. For some use cases this is acceptable, for others it is not.
I have software written in C++ whose functionality is to connect to an OMNIKEY smart card reader and read/write some data. I'm using the following code to read:
m_Errorcode = SCard3WBPReadData(m_Handle, length, m_Data, m_ulOffset);
This worked fine with no problems, but after OMNIKEY changed the chip from the x-chip to the AVIATOR in their new product (HID Global OMNIKEY Smart Card Reader), my code stopped working and fails to read data with the preceding call.
I have read a lot, and I think the problem can be solved by changing the voltage sequence as described on page 13 of the developer guide:
https://www.hidglobal.com/doclib/files/resource_files/plt-03635_a.0_-_synchronous-api_software_developer_guide.pdf
There is also a chapter in the OMNIKEY Software Developer Guide (page 17) suggesting a hex value (0x1B) to make this change:
https://www.hidglobal.com/sites/default/files/resource_files/plt-03099_a.3_-_omnikey_sw_dev_guide.pdf
but so far I haven't been able to figure out which API function I must use to pass this suggested hex value.
I need a library for my C++ program.
The problem is, I don't know the name of the data type I'm looking for.
I have an NPAPI plugin (I know this API is deprecated and removed from modern browsers) which issues
HTTP range requests to a server. The requests are asynchronous, and the data may arrive in any order, in chunks of any size.
So I need to track the ranges I have already requested from the server.
For example, if I initially request bytes [10-20] (inclusive) and then request [30-40], the data type I need should keep them as two intervals:
[10-20],[30-40]
But if I then request [21-29], or even [15-35], it should be merged into one interval:
[10-20],[30-40] + [15-35] = [10-40]
I also need subtraction for when a requested block arrives:
[10-40] - [20-30] = [10-19],[31-40]
(requested - arrived = we're still waiting for)
I had a look at the boost::numeric::interval library, but at first glance it is too big for this task (1583 files, 13 MB of sources after './dist/bin/bcp numeric/interval ~/boost').
Also, GNU ddrescue has some similar arithmetic inside, but the code there isn't a library; it is coupled too tightly to the application's specifics.
UPDATE:
Here is what I've found along the way:
A container for integer intervals, such as RangeSet, for C++
https://en.wikipedia.org/wiki/Interval_tree
Boost.ICL
NCBI C++ Toolkit, CIntervalTree
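For what it's worth, Boost.ICL from the list above handles exactly this merge/subtract arithmetic. A minimal sketch, assuming closed integer intervals (note that subtracting closed [20-30] from [10-40] leaves the half-open pieces [10,20) and (30,40], i.e. [10-19] and [31-40] in integer terms, matching the example above):

#include <iostream>
#include <boost/icl/interval_set.hpp>

int main() {
    boost::icl::interval_set<int> pending;  // ranges requested but not yet received
    using ival = boost::icl::discrete_interval<int>;

    // Two separate requests stay as two intervals.
    pending += ival::closed(10, 20);
    pending += ival::closed(30, 40);
    std::cout << pending << '\n';  // {[10,20][30,40]}

    // An overlapping request merges them into one.
    pending += ival::closed(15, 35);
    std::cout << pending << '\n';  // {[10,40]}

    // A block arrives: subtract it from what we are still waiting for.
    pending -= ival::closed(20, 30);
    std::cout << pending << '\n';  // {[10,20)(30,40]}
    return 0;
}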
I'm currently building a robot which has some sensors attached to it. The control unit on the robot is an ARM Cortex-M3, all sensors are attached to it and it is connected via Ethernet to the "ground station".
Now I want to read and write settings on the robot via the ground station. Therefore I thought about implementing a "virtual register" on the robot that can be manipulated by the ground station.
It could be made up of structs and look like this:
// accelerometer register
struct accel_reg {
// accelerations
int32_t accelX;
int32_t accelY;
int32_t accelZ;
};
// infrared distance sensor register
struct ir_reg {
uint16_t dist; // distance
};
// robot's register table
struct {
uint8_t status; // current state
uint32_t faultFlags; // some fault flags
accel_reg accelerometer; // accelerometer register
ir_reg ir_sensors[4]; // 4 IR sensors connected
} robot;
// usage example:
robot.accelerometer.accelX = -981;
robot.ir_sensors[1].dist = 1024;
On the robot, the registers will be constantly filled with new values; configuration settings are set by the ground station and applied by the robot.
The ground station and the robot will be written in C++ so they both can use the same struct datatype.
The question I have now is: how do I encapsulate the read/write operations in a protocol without writing tons of metadata?
Let's say I want to read the register robot.ir_sensors[2].dist. How would I address this register in my protocol?
I already thought about transmitting a relative offset in bytes (i.e. the field's relative position in memory inside the struct), but I think memory alignment and padding may cause problems, especially since the ground station runs on an x86_64 architecture and the robot runs on a 32-bit ARM processor.
Thanks for any hints! :)
I'm also going to suggest Google Protocol Buffers.
In the simplest case, you could implement one message RobotState like this:
message RobotState {
    optional int32 status = 1;
    optional int32 distance = 2;
    optional int32 accelX = 3;
    ...
}
Then, when the robot receives the message, it will take new values from any optional field that is present. It will then reply with a message containing the current values of all fields.
This way it is quite easy to implement field updates using the "merge message" functionality of most protobuf implementations. Also, you can keep it very simple at the start because you only have one message type, but if you need to expand later you can add submessages.
It is true that protobuf does not support int8_t or int16_t. Just use int32 instead; on the wire, the varint encoding keeps small values small anyway.
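As a rough sketch of the merge-based update on the robot side (robot_state.pb.h and the field names are hypothetical, generated by protoc from a RobotState message like the one above):

#include <cstdio>
#include "robot_state.pb.h"  // hypothetical header generated by protoc

// Apply a (possibly partial) update from the ground station and
// return the full current state as the reply.
RobotState handleUpdate(RobotState& current, const RobotState& update) {
    // MergeFrom overwrites only the fields set in 'update';
    // all other fields keep their current values.
    current.MergeFrom(update);
    return current;  // the reply contains every field's current value
}

int main() {
    RobotState state;
    state.set_status(1);
    state.set_distance(1024);

    RobotState update;
    update.set_distance(512);  // the ground station changes one field only

    RobotState reply = handleUpdate(state, update);
    std::printf("%d %d\n", reply.status(), reply.distance());  // 1 512
    return 0;
}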
I think the Google protocol buffers are an excellent session/presentation-layer tool to use. Actually, Google protocol buffers do not support the syntax I was thinking of, so I will change this part of my answer to recommend XSD by Code Synthesis. Although it is primarily used with XML, it supports different presentation layers such as XDR and may be more efficient than protocol buffers with large amounts of optional data. The generated code is also very nice to work with. XSD is free to use with open-source software, and even for commercial use with limited message structures.
I don't believe you want to read/write register sets at random. You can prefix a message with an enum that denotes a message type, such as IR update, distance, accel, etc. These are register groups. Then the robot responds with the register set. All the registers you've given so far are sensors; the writable ones must be motor control?
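A minimal sketch of such a message-id prefix (the IDs and groups are made up for illustration); packed, fixed-width fields also sidestep the x86_64 vs. ARM padding problem raised in the question:

#include <cstdint>

// Hypothetical message IDs, one per register group.
enum class MsgId : uint8_t {
    Status   = 0x01,
    Accel    = 0x02,
    IrSensor = 0x03,
    MotorCmd = 0x10,  // a writable (control) group
};

// Fixed-layout header sent before every payload.
#pragma pack(push, 1)
struct MsgHeader {
    MsgId    id;      // which register group follows
    uint16_t length;  // payload size in bytes
};
#pragma pack(pop)

static_assert(sizeof(MsgHeader) == 3, "header must have no padding");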
You want to think about what control you want to perform and the type of telemetry you would like to receive, then come up with a message structure and bundle the information together. You could use sequence diagrams and remote-procedure APIs like SOA/SOAP, RPC, REST, etc. I don't mean these RPC frameworks directly, but the concepts, such as request/response, and perhaps messages that are just sent periodically (telemetry) without specific requests. So there would be a telemetry request from the ground station with some sort of interval, and then the robot would respond periodically with unsolicited data. You always need a message id (the enum above), unless your protocol is going to be stateful, which I would discourage for robustness reasons.
You haven't described how the control system might work or whether you wish to do this remotely. Describing that may lead to more ideas for the protocol. I believe we are talking about layers 5, 6 and 7 of the OSI model. Have fun.
Trying to read the sizes of disks that were created in multiple sessions using GetDiskFreeSpaceEx() gives the size of the last session only. How do I correctly read the number and sizes of all sessions in C/C++?
Thanks.
You might want to look at the DeviceIoControl API function. See here for control codes. Here is a code example that retrieves the size of a CD disk. Substitute
CreateFile(TEXT("\\\\.\\PhysicalDrive0")
with e.g.
CreateFile(TEXT("\\\\.\\F:") /* Drive is F: */
if you wish.
Note: The page says that DeviceIoControl can be used to "retrieve information about a floppy disk drive, hard disk drive, tape drive, or CD-ROM drive", but I have also tested it on a DVD and it seemed to work perfectly. I did not have access to any multisession DVDs to test with, so you'll have to check whether that works yourself. If it doesn't, I'd try some of the other control codes, at least IOCTL_DISK_GET_DRIVE_GEOMETRY_EX, IOCTL_DISK_GET_DRIVE_LAYOUT_EX, IOCTL_DISK_GET_LENGTH_INFO and IOCTL_DISK_GET_PARTITION_INFO_EX.
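For instance, a minimal sketch using IOCTL_DISK_GET_LENGTH_INFO (drive letter F: is just an example; whether this reports all sessions on a multisession disc is exactly what you would have to test):

#include <windows.h>
#include <winioctl.h>
#include <cstdio>

int main() {
    // Open the drive; change "F:" to your drive letter.
    HANDLE h = CreateFile(TEXT("\\\\.\\F:"), GENERIC_READ,
                          FILE_SHARE_READ | FILE_SHARE_WRITE,
                          NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        printf("CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    GET_LENGTH_INFORMATION info;
    DWORD bytes = 0;
    if (DeviceIoControl(h, IOCTL_DISK_GET_LENGTH_INFO, NULL, 0,
                        &info, sizeof(info), &bytes, NULL))
        printf("Disk size: %lld bytes\n", info.Length.QuadPart);
    else
        printf("DeviceIoControl failed: %lu\n", GetLastError());

    CloseHandle(h);
    return 0;
}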
If all else fails with DeviceIoControl, you could possibly make use of the Windows Image Mastering API (IMAPI). You'll need v2 of the API (included with Vista and later; can be added to XP & 2003 too, see here: What's new in IMAPIv2) for DVD support. This API is primarily for CD burning, but perhaps contains some functionality for retrieving the disk size; I'd find it weird if it didn't. In particular, this example seems interesting. I do not know if this one works for multisession disks either, but since it can create them, I guess it's likely.
Here are some resources for IMAPI:
MSDN - IMAPI
MSDN - IMAPI interfaces
MSDN - Creating multisession disks with IMAPI (note: example with VB, not C or C++)
Hey, I've got at least 2 solutions for you:
1) Download dvd+rw-mediainfo.exe from http://fy.chalmers.se/~appro/linux/DVD+RW/tools/win32/; it's a tool that reads info about your disc. Then just make a system call from your app and parse the results. Here's example output:
D:\Downloads>"dvd+rw-mediainfo.exe" f:
INQUIRY: [HL-DT-ST][DVDRAM GT30N ][1.01]
GET [CURRENT] CONFIGURATION:
Mounted Media: 10h, DVD-ROM
Current Write Speed: 1.0x1385=1385KB/s
Write Speed #0: 8.0x1385=11080KB/s
Write Speed #1: 4.0x1385=5540KB/s
Write Speed #2: 2.0x1385=2770KB/s
Write Speed #3: 1.0x1385=1385KB/s
Speed Descriptor#0: 00/2292991 R#8.0x1385=11080KB/s W#8.0x1385=11080KB/s
READ DVD STRUCTURE[#0h]:
Media Book Type: 01h, DVD-ROM book [revision 1]
Legacy lead-out at: 2292992*2KB=4696047616
READ DISC INFORMATION:
Disc status: complete
Number of Sessions: 1
State of Last Session: complete
Number of Tracks: 1
READ TRACK INFORMATION[#1]:
Track State: complete
Track Start Address: 0*2KB
Free Blocks: 0*2KB
Track Size: 2292992*2KB
Last Recorded Address: 2292991*2KB
FABRICATED TOC:
Track#1 : 17#0
Track#AA : 17#2292992
Multi-session Info: #1#0
READ CAPACITY: 2292992*2048=4696047616
2) Investigate mciSendString from [DllImport("winmm.dll", EntryPoint = "mciSendStringA", CharSet = CharSet.Ansi)]; I suspect you can send some command and get the desired results.
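A small sketch of that idea from C++ (the MCI command strings below are the standard "cdaudio" commands; whether they report what you need for multisession data discs is something to experiment with):

#include <windows.h>
#include <mmsystem.h>  // link with winmm.lib
#include <tchar.h>

int main() {
    TCHAR result[128] = { 0 };

    // Open the CD/DVD device and query it via MCI command strings.
    mciSendString(TEXT("open cdaudio alias cd wait"), NULL, 0, NULL);
    mciSendString(TEXT("status cd number of tracks"), result,
                  ARRAYSIZE(result), NULL);
    _tprintf(TEXT("Tracks: %s\n"), result);
    mciSendString(TEXT("close cd"), NULL, 0, NULL);
    return 0;
}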
PS: Of course you may download the dvd+rw-mediainfo.exe sources from here and investigate further; I am just giving you ideas to think about.
UPDATE
Link to source code updated, thanks @oystein
There are many ways to do this, since DVD drives have several interfaces for it due to legacy and backward-compatibility issues.
You could send an IOCTL_SCSI_PASS_THROUGH_DIRECT command to the DVD drive (the physical-device handle for it). With it you issue SCSI commands that will be answered by the drive. You can read session information, disc information, disc capacity and more.
I believe that dvd+rw-mediainfo.exe issues these.
Unfortunately, the interface is a bit tricky and obscure, since it is a command within a command. The pass-through structure has a byte buffer you will have to fill in yourself with the SCSI command structure.
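A rough sketch of that "command within a command", here issuing the MMC READ DISC INFORMATION command (opcode 0x51), which reports the number of sessions; the reply-field offsets follow the MMC spec, so treat them as an assumption to verify against your drive:

#include <windows.h>
#include <ntddscsi.h>
#include <cstdio>

int main() {
    HANDLE h = CreateFile(TEXT("\\\\.\\F:"), GENERIC_READ | GENERIC_WRITE,
                          FILE_SHARE_READ | FILE_SHARE_WRITE,
                          NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    BYTE reply[34] = { 0 };  // READ DISC INFORMATION standard reply
    SCSI_PASS_THROUGH_DIRECT sptd = { 0 };
    sptd.Length             = sizeof(sptd);
    sptd.CdbLength          = 10;
    sptd.DataIn             = SCSI_IOCTL_DATA_IN;
    sptd.DataTransferLength = sizeof(reply);
    sptd.DataBuffer         = reply;
    sptd.TimeOutValue       = 10;  // seconds

    sptd.Cdb[0] = 0x51;                         // READ DISC INFORMATION
    sptd.Cdb[7] = (sizeof(reply) >> 8) & 0xFF;  // allocation length MSB
    sptd.Cdb[8] = sizeof(reply) & 0xFF;         // allocation length LSB

    DWORD bytes = 0;
    if (DeviceIoControl(h, IOCTL_SCSI_PASS_THROUGH_DIRECT,
                        &sptd, sizeof(sptd), &sptd, sizeof(sptd),
                        &bytes, NULL)) {
        // Bytes 4 (LSB) and 9 (MSB) hold the number of sessions per MMC.
        int sessions = reply[4] | (reply[9] << 8);
        printf("Number of sessions: %d\n", sessions);
    }
    CloseHandle(h);
    return 0;
}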
Or you can call IOCTL_CDROM_READ_TOC_EX:
http://www.osronline.com/ddkx/storage/k306_2cs2.htm
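A minimal sketch of the READ_TOC_EX route, asking for the session variant of the TOC (the structures come from ntddcdrm.h; the drive letter is an example):

#include <windows.h>
#include <ntddcdrm.h>
#include <cstdio>

int main() {
    HANDLE h = CreateFile(TEXT("\\\\.\\F:"), GENERIC_READ,
                          FILE_SHARE_READ | FILE_SHARE_WRITE,
                          NULL, OPEN_EXISTING, 0, NULL);
    if (h == INVALID_HANDLE_VALUE) return 1;

    // Ask for the session data variant of the TOC.
    CDROM_READ_TOC_EX request = { 0 };
    request.Format = CDROM_READ_TOC_EX_FORMAT_SESSION;

    CDROM_TOC_SESSION_DATA session = { 0 };
    DWORD bytes = 0;
    if (DeviceIoControl(h, IOCTL_CDROM_READ_TOC_EX,
                        &request, sizeof(request),
                        &session, sizeof(session), &bytes, NULL)) {
        printf("First complete session: %d\n", session.FirstCompleteSession);
        printf("Last complete session:  %d\n", session.LastCompleteSession);
    }
    CloseHandle(h);
    return 0;
}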
I also believe that the exact set of IOCTLs / commands that will work depends on the drive and its firmware.
Older drives will not support the newer interfaces, and some of the newer drives will not support the legacy interfaces.
Thus, some of the libraries & tools might use one or more of these interfaces.
Accessing the older sessions is all quite messy, really, since most operating systems do not care about them, only the most recent one.