hidapi: Sending packet smaller than caps.OutputReportByteLength - c++

I am working with a device (the Wiimote) that takes commands through the DATA pipe and only accepts command packets that are EXACTLY as long as the command itself. For example, it will accept:
0x11 0x10
but it will not accept:
0x11 0x10 0x00 0x00 0x00 ... etc.
This is a problem on Windows, as WriteFile() requires that the buffer passed to it be at least as long as caps.OutputReportByteLength. On Mac, where this limitation isn't present, my code works correctly. Here is the code from hid.c that causes this issue:
/* Make sure the right number of bytes are passed to WriteFile. Windows
   expects the number of bytes which are in the _longest_ report (plus
   one for the report number) bytes even if the data is a report
   which is shorter than that. Windows gives us this value in
   caps.OutputReportByteLength. If a user passes in fewer bytes than this,
   create a temporary buffer which is the proper size. */
if (length >= dev->output_report_length) {
    /* The user passed the right number of bytes. Use the buffer as-is. */
    buf = (unsigned char *) data;
} else {
    /* Create a temporary buffer and copy the user's data
       into it, padding the rest with zeros. */
    buf = (unsigned char *) malloc(dev->output_report_length);
    memcpy(buf, data, length);
    memset(buf + length, 0, dev->output_report_length - length);
    length = dev->output_report_length;
}

res = WriteFile(dev->device_handle, buf, length, NULL, &ol);
Removing the above code, as mentioned in the comments, results in an error from WriteFile().
Is there any way that I can pass data to the device of arbitrary size? Thanks in advance for any assistance.

Solved. I used a solution similar to the one used by the folks over at Dolphin, a Wii emulator. Apparently, on the Microsoft Bluetooth stack, WriteFile() doesn't work correctly, causing the Wiimote to return an error. By using HidD_SetOutputReport() on the MS stack and WriteFile() on the BlueSoleil stack, I was able to successfully connect to the device (at least on my machine).
I haven't tested this on the BlueSoleil stack, but Dolphin uses this method, so it is safe to say it works.
Here is a gist containing an ugly implementation of this fix:
https://gist.github.com/Flafla2/d261a156ea2e3e3c1e5c
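For reference, here is a minimal sketch of the idea (not the exact gist code), assuming the write path inside hidapi's hid_write() is swapped out on the Microsoft stack; the helper name write_report_ms_stack is mine:

#include <windows.h>
#include <hidsdi.h>   // HidD_SetOutputReport; link against hid.lib

/* Sketch only: send an output report via HidD_SetOutputReport(), which
   works on the Microsoft Bluetooth stack where WriteFile() fails for the
   Wiimote. HidD_SetOutputReport issues IOCTL_HID_SET_OUTPUT_REPORT rather
   than a write IRP. */
static int write_report_ms_stack(HANDLE device_handle,
                                 const unsigned char *data, size_t length)
{
    if (!HidD_SetOutputReport(device_handle, (PVOID)data, (ULONG)length))
        return -1;
    return (int)length;
}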

Related

Using HidD_GetInputReport(..) to retrieve the XBOX ONE's button states

I am trying to talk to the XBOX ONE controller via the Microsoft HID API without using XINPUT. I'm currently able to control all the rumble motors (including the force-feedback triggers) by sending packets with HidD_SetOutputReport(HANDLE, VOID*, ULONG). But I'm stuck reading the button values, whether I use HidD_GetInputReport(HANDLE, VOID*, ULONG) or ReadFile()/ReadFileEx(), with and without the HANDLE being created with FILE_FLAG_OVERLAPPED and using OVERLAPPED structures and Windows events.
I have already reverse engineered the USB URB protocol with the help of the following article: https://github.com/quantus/xbox-one-controller-protocol. The main goal is to overcome the XINPUT overhead and to write a flexible framework so that I can integrate other gamepads as well.
That is what I accomplished:
I have connected the gamepad to my computer via USB (so that I can capture all the USB packets sent to and received from the device)
I have found the controller’s path using SetupDiGetClassDevs(...), SetupDiEnumDeviceInfo(...), SetupDiEnumDeviceInterfaces(...) and SetupDiGetDeviceInterfaceDetail(...)
I have created a handle to the device using HANDLE gamePad = CreateFile(path, GENERIC_READ | GENERIC_WRITE, FILE_SHARE_READ | FILE_SHARE_WRITE, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL)
Using HidP_GetCaps(HANDLE, HIDP_CAPS*) doesn't seem to return valid data, since it reports an OutputReportByteLength of 0, but I am able to send output reports of size 5 (turn ON) and 9 (set rumble motors); see the caps-query sketch after this list
All incoming and outgoing data (at least buttons and rumble motors) seem to follow this pattern:
byte 0: Package type
byte 1: was always 0x00
byte 2: Package number (always incrementing every package)
byte 3: Size of following data
byte 4+: <Data>
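For context (see the note in the list above), this is roughly how the capabilities are usually queried; a sketch assuming the standard HidD_GetPreparsedData path, which the question abbreviates as HidP_GetCaps(HANDLE, HIDP_CAPS*):

#include <windows.h>
#include <hidsdi.h>
#include <hidpi.h>    // link against hid.lib

// Sketch: fill a HIDP_CAPS structure for the already-opened gamePad handle.
bool queryCaps(HANDLE gamePad, HIDP_CAPS &caps)
{
    PHIDP_PREPARSED_DATA preparsed = nullptr;
    if (!HidD_GetPreparsedData(gamePad, &preparsed))
        return false;

    bool ok = (HidP_GetCaps(preparsed, &caps) == HIDP_STATUS_SUCCESS);
    HidD_FreePreparsedData(preparsed);
    return ok;  // caps.InputReportByteLength / caps.OutputReportByteLength
}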
With that packet layout I was able to make the motors and triggers rumble as I desired.
For example, two of my output rumble packets (using the pulse length to crudely turn the motors on and off) turn all the motors on and off with the rumble motors at speed 0xFF and the triggers at speed 0xF0. This is how I did it:
struct RumbleContinous {
    BYTE mode;
    BYTE mask;        // Motor mask: 0b 0 0 0 0 LT RT L R
    BYTE lTForce;
    BYTE rTForce;
    BYTE lForce;
    BYTE rForce;
    BYTE pulseLength;
    BYTE offTime;
    BYTE terminator;  // Terminator / dummy / ?? (XINPUT sends this as 0xEB!) / changing it seems to make no difference
};

RumbleContinous rc = {0x00, 0x0F, 0xF0, 0xF0, 0xFF, 0xFF, 0xFF, 0x00, 0xEB};
HidD_SetOutputReport(gamePad, (PVOID)&rc, sizeof(RumbleContinous));
Now to my problem
Looking at the input packets from the controller, it looks like you need to create a buffer of size 0x0E = 14, ZeroMemory it (or just set the first byte to 0, as MSDN suggests) and call HidD_GetInputReport(HANDLE, buffer, 14)
So what I did was call HidD_FlushQueue() to make sure the next packet is the input packet. Then I inserted a little delay so that I could change some controller values. After that I tried reading into a BYTE array with HidD_GetInputReport(HANDLE, cmd_in, 14), but the function always failed with GetLastError() == 0x00000057 // ERROR_INVALID_PARAMETER
Since HID is able to filter reports, it may be necessary to allocate a buffer one byte larger than expected and put the required report ID in the first byte of the buffer. This is what I did:
BYTE cmd_in[15];
ZeroMemory(cmd_in, 15);
cmd_in[0] = 0x20;
HidD_GetInputReport(gamePad, cmd_in, 15);
Still no success. Since HidP_GetCaps(...) reported an input report length of 16 (but I don't trust this, since it already fooled me with an output report size of 0), I tried sweeping over many buffer sizes:
BYTE cmd_in[30];
for (UINT bs = 0; bs < 30; bs++) {
    ZeroMemory(cmd_in, 30);
    HidD_FlushQueue(gamePad); // Flushing works
    Sleep(500);
    if (HidD_GetInputReport(gamePad, cmd_in, bs)) {
        // Output result (omitted)
    }
    else {
        // Print error (omitted)
    }
}
and
BYTE cmd_in[30];
for (UINT bs = 0; bs < 30; bs++) {
    ZeroMemory(cmd_in, 30);
    cmd_in[0] = 0x20;
    HidD_FlushQueue(gamePad); // Flushing works
    Sleep(500);
    if (HidD_GetInputReport(gamePad, cmd_in, bs)) {
        // Output result (omitted)
    }
    else {
        // Print error (omitted)
    }
}
Still no success. Given the special output format the driver requires and the wrong HidP_GetCaps(...) readings, I suspect that the XBOX ONE gamepad driver requires a special header to already be in the input buffer (as far as I know, HidD_GetInputReport(...) just invokes the user/kernel-mode driver callback, so the driver is free to perform checks and reject any data sent to it).
Maybe someone out there knows how to call HidD_GetInputReport(...) for the XBOX ONE controller.
I know that it is possible to retrieve that input data, since SimpleHIDWrite is able to see the button states, even though the format is totally different (the two triggers, for example, are combined into one byte, whereas in the USB packet each trigger has its own byte).
I should also mention that SimpleHIDWrite sees the data without any button being pressed! Looking at its log, it appears to read RD from 00, 15 bytes of data, i.e. a 16-byte array with element 0 at 00 (that didn't work in my application). Or does it just dump all incoming data? If so, how is that possible? That would be an option for me too!
I looked at what XINPUT is doing when executing the following code:
XINPUT_STATE s;
XInputGetState(0, &s);
It turned out that XINPUT does the same things I did until it comes to reading the data from the controller. Instead of HidD_GetInputReport(...), XINPUT calls DeviceIoControl(...). So what I did was a quick Google search for "DeviceIoControl xbox", and ta-da, here it is, without the need to figure out the memory layout on my own: Getting xbox controller input without xinput
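The general shape of that call looks roughly like the sketch below. This is not the actual protocol: the IOCTL code and the request bytes are placeholders that have to be taken from the linked answer (or your own captures), and readGamepadState is a name I made up:

#include <windows.h>

// Sketch only: placeholders must be replaced with the values from the linked answer.
bool readGamepadState(HANDLE gamePad, BYTE *out, DWORD outSize)
{
    const DWORD XUSB_IOCTL_GET_STATE = 0;  // placeholder: use the IOCTL from the linked answer
    BYTE request[3] = { 0 };               // placeholder request header, see linked answer
    DWORD returned = 0;
    return DeviceIoControl(gamePad, XUSB_IOCTL_GET_STATE,
                           request, sizeof(request),
                           out, outSize, &returned, NULL) != FALSE;
}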
Edit: Using DeviceIoControl(...) works even if the gamepad is connected via Bluetooth, while HidD_SetOutputReport(...) does not work over Bluetooth. I remember reading something about DeviceIoControl(...) over Bluetooth requiring an additional parameter to be present in the output buffer. But I'm currently trying to figure out a way to control the rumble motors via DeviceIoControl(...). If you have any suggestions, feel free to comment! The article in the link above only activates the two rumble motors, not the triggers!
Edit 2: I tried sweeping over DeviceIoControl(HANDLE, j, CHAR*, i, NULL, 0, DWORD*, NULL) with j from 0x0 to 0xFFFFFFFF and i from 0x1 to 0x3F. Well, it worked... at first... but after a few values of j I got a blue screen: WDF_VIOLATION (at least I know how to crash a computer ;) )

Send bytes to serial port

I'm trying to make a Windows program that sends data to a microcontroller through a serial port (USB emulating a COM port).
Until now I did it with ASCII strings, but I'm doing a project with a classmate, who told me I shouldn't do that: I have to send to the serial port the actual bytes he needs to use (he is programming the microcontroller, I am programming the Windows interface).
I always used WriteFile() with ASCII strings, in the form:
WriteFile(handlePort, bufferPort, strlen(bufferPort), &nBytes, NULL);
I have to send a byte sequence like 10000001 10010001 00000000 10100001 11101101.
The problem is that when WriteFile() reaches the third byte, 00000000, it is interpreted as a null character '\0' and no more bytes are sent.
Please, can anyone help me? Is there any way to send all of the bytes (after the third 00000000) without losing any information?
Is there another function apart from WriteFile() which can do that? How should I do it?
It's not WriteFile, it's strlen that's stopping at 0. You want:
...
int len = 5;
char bytes[] = {0x81, 0x91, 0x0, 0xa1, 0xed};
WriteFile(handlePort, bytes, len, &nBytes, NULL);
if (len != nBytes) {
    error("Not all bytes written!");
}
...
The problem is you're using strlen which is designed to stop at a zero byte. WriteFile is fine; it just needs you to tell it the right number of bytes to write.

Sending a fixed-length header

I'm trying to send data with a fixed-length header that tells the server how many bytes of data it needs to have available before it starts reading them. I'm having trouble doing this, though. The maximum number of bytes I want to be able to send at once is 65536, so I'm sending a uint16_t variable as the header of my data (it can represent values up to 65535).
The problem is, a uint16_t takes up two bytes, but numbers up to 255 only require one byte. So I have this code on the client side:
uint16_t messageSize = clientSendBuf.size(); //clientSendBuf is the data I want to send
char *bytes((char*)&messageSize);
clientSendBuf.prepend(bytes);
client.write(clientSendBuf);
And on the server, I handle receiving messages like this:
char serverReceiveBuf[65536];
uint16_t messageSize;
client->read((char*)&messageSize, sizeof(uint16_t));
client->read(serverReceiveBuf, messageSize);
I'm going to change this around a bit later because it's not the best solution (particularly for when all of the data isn't available yet), but I want to get this fixed first. My problem is that when clientSendBuf.size() is too small (in my test case it was 16 bytes, I assume this happens for every value under 255) reading data with
client->read((char*)&messageSize, sizeof(uint16_t));
reads a second byte that isn't part of the header, giving an incorrect value for messageSize and crashing the server. If I replace sizeof(uint16_t) with 1, then the server reads the data fine, as I'd expect, although then I have a messageSize maximum of 255, which is much lower than I want. How do I make it so that the messageSize prepended to clientSendBuf is always two bytes, even for values under 255?
Your
clientSendBuf.prepend(bytes);
should also be told that it needs to prepend 2 bytes; as written, it treats bytes as a zero-terminated string, which only accidentally appears to work because the second byte of the 16-bit value 0x0010 is zero (in little-endian order the bytes are 0x10, 0x00).
The prepend(char*, int) method will do the trick:
// use this instead:
clientSendBuf.prepend(bytes, sizeof(messageSize));
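For completeness, a minimal sketch of the sending side, assuming Qt's QByteArray as in the question; frameMessage is my name, and qToBigEndian pins the byte order explicitly so the header is always exactly two bytes:

#include <QByteArray>
#include <QtEndian>

// Sketch: prepend exactly two length bytes in a fixed (big-endian) order.
QByteArray frameMessage(const QByteArray &payload)
{
    quint16 messageSize = qToBigEndian<quint16>(static_cast<quint16>(payload.size()));
    QByteArray framed;
    framed.append(reinterpret_cast<const char *>(&messageSize), sizeof(messageSize));
    framed.append(payload);
    return framed;
}

The server would then read sizeof(quint16) header bytes, convert them back with qFromBigEndian, and only afterwards read that many payload bytes.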

What could cause my packet's byte order to become partially scrambled?

I am sending packets over a TCP socket between a Linux CentOS 4 machine and a Windows XP machine running Interix with Gentoo. When the packet is received on the Interix side, about 10% of the characters are consistently scrambled at exactly the same offsets from the beginning of the packet. On the sending Linux side, the packet has these correct contents:
-----BEGIN PUBLIC KEY-----
MIIBojCCARcGByqGSM4+AgEwggEKAoGBAP//////////yQ/aoiFowjTExmKLgNwc
^ ^^^^^^^^^^^^^
0SkCTgiKZ8x0Agu+pjsTmyJRSgh5jjQE3e+VGbPNOkMbMCsKbfJfFDdP4TVtbVHC
^^^^^^^^
ReSFtXZiXn7G9ExC6aY37WsL/1y29Aa37e44a/taiZ+lrp8kEXxLH+ZJKGZR7OZT
gf//////////AgECAoGAf//////////kh+1RELRhGmJjMUXAbg5olIEnBEUz5joB
Bd9THYnNkSilBDzHGgJu98qM2eadIY2YFYU2+S+KG6fwmra2qOEi8kLauzEvP2N6
JiF00xv2tYX/rlt6A1v29xw1/a1Ez9LXT5IIviWP8ySUMyj2cynA//////////8D
gYQAAoGAKcjWmS+h/a6xY6HfNeVBk+vU4ZQoi4ROBT8NXdiFQUeLwT/WpE/8oAxn
KCOssVcoF54bF8JlEL0McWjQUzMrqoQedizALRRdH7kTUM/yqZZdxLgRFmiFDUXT
XxsFFB5hlLpMqy9lqpNMN8+e5m9ISgu8zHMlTBQXsnwds0VkbeU=
-----END PUBLIC KEY-----
But on Interix, the packet contents are slightly scrambled (but the majority is correct):
-----BEGIN PUBLIC KEY-----
MIIBojCCARcGByqGSM4+AgEwggEKAoGBAP//////y////iFowjTExQ/aomKLgNwc
^ ^^^^^^^^^^^^^
KigTCkS0Z8x0Agu+pjsTmyJRSgh5jjQE3e+VGbPNOkMbMCsKbfJfFDdP4TVtbVHC
^^^^^^^^
ReSFtXZiXn7G9ExC6aY37WsL/1y29Aa37e44a/taiZ+lrp8kEXxLH+ZJKGZR7OZT
gf//////////AgECAoGAf//////////kh+1RELRhGmJjMUXAbg5olIEnBEUz5joB
Bd9THYnNkSilBDzHGgJu98qM2eadIY2YFYU2+S+KG6fwmra2qOEi8kLauzEvP2N6
JiF00xv2tYX/rlt6A1v29xw1/a1Ez9LXT5IIviWP8ySUMyj2cynA//////////8D
gYQAAoGAKcjWmS+h/a6xY6HfNeVBk+vU4ZQoi4ROBT8NXdiFQUeLwT/WpE/8oAxn
KCOssVcoF54bF8JlEL0McWjQUzMrqoQedizALRRdH7kTUM/yqZZdxLgRFmiFDUXT
XxsFFB5hlLpMqy9lqpNMN8+e5m9ISgu8zHMlTBQXsnwds0VkbeU=
-----END PUBLIC KEY-----
I've marked the differences with the ^ characters above. There could be a couple more differences around the 'y', since the repeated '/' characters would hide additional characters that were moved in that section.
This code works fine between several platform pairs:
Linux and Linux
Linux and BSD
Linux and Cygwin
Could this be a bug in the Interix and Gentoo code? I'm running on Windows XP, Interix v3.5. I notice that all the right characters are present, but their order is consistently scrambled: portions are reversed, others are cut and reinserted in a different place. The packet is being read on the receiving side with ::read() on the TCP socket file descriptor. There is a lot of code in play here, so I'm not sure which portions would be most relevant to include, but I will try to add more code if specific requests are made.
const int fd;         // Passed in by caller.
char *buf;            // Passed in by caller.
size_t want = count;  // This value is 625 for the packet in question.
// As ::read() is called, got is adjusted, until the whole packet is read.
size_t got = 0;
while (got < want) {
    // We call ::select() to ensure bytes are available before calling ::read().
    ssize_t result = ::read(fd, buf, want - got);
    if (result < 0) {
        // Handle error (not getting called, so omitted).
    } else {
        if (result != 0) {
            // We are coming in here in one try and got is set to 625, the amount we want...
            // Not an error, increment the byte counter 'got' & the read pointer, buf.
            got += result;
            buf += result;
        } else { // EOF because zero result from read.
            eof = true; // Connection reset by peer.
            break;
        }
    }
}
What experiments might I perform to help nail down where the error is coming from?
I would say you have a concurrency bug on 'buf', or possibly a duplicate free() or a re-use after free().
Mystery solved! The issue was that off_t was 32 bits wide on the Windows XP machine and 64 bits wide on the CentOS machine. When the packet is sent, its memory layout, which includes some off_t objects, is converted from host to network byte order (little-endian to big-endian); then on the Windows machine, when the packet arrives, it is converted back from network to host order. Because the memory layouts differed, I got the scrambling seen above.
I resolved the issue by using my own soff_t type, which is 64 bits wide everywhere.
However, I then ran into another issue where the compiler did not pack the structure the same way on both machines: on Windows it inserted 4 bytes to make the long long 8-byte aligned, whereas on CentOS it did not:
typedef struct Option
{
    char _otherStuff[56];
    int _cpuFreq;
    int _bufSize;
    soff_t _fileSize;   // Original bug fixed by forcing these 8 bytes wide.
    soff_t _seekTo;     // Original bug fixed by forcing these 8 bytes wide.
    int _optionBits;
    int _padding;       // To fix the next bug, I added these 4 bytes.
    long long _mtime;
    long long _mode;
} __attribute__ ((aligned(1), packed)) Option;
I had used __attribute__ ((aligned(1), packed)) to force the packing to be consistent and dense, but on Windows XP this was not, or could not be, honored. I solved this by adding _padding to force the next 8-byte member to be 8-byte aligned on CentOS and thus agree with Windows XP.
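A more robust alternative (not something from the original post, just a sketch) is to serialize the fields explicitly into a byte buffer, so struct padding and the width of off_t stop mattering. write_u32 and write_u64 are hypothetical helpers; the field order simply follows the Option struct above:

#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>   /* htonl */

/* Hypothetical helpers: append a value to the buffer in big-endian order. */
static unsigned char *write_u32(unsigned char *p, uint32_t v)
{
    v = htonl(v);
    memcpy(p, &v, 4);
    return p + 4;
}

static unsigned char *write_u64(unsigned char *p, uint64_t v)
{
    p = write_u32(p, (uint32_t)(v >> 32));           /* high word first */
    return write_u32(p, (uint32_t)(v & 0xFFFFFFFFu));
}

/* Serialize an Option field by field; the receiver parses it the same way. */
size_t serialize_option(const Option *opt, unsigned char *out)
{
    unsigned char *p = out;
    memcpy(p, opt->_otherStuff, sizeof(opt->_otherStuff));
    p += sizeof(opt->_otherStuff);
    p = write_u32(p, (uint32_t)opt->_cpuFreq);
    p = write_u32(p, (uint32_t)opt->_bufSize);
    p = write_u64(p, (uint64_t)opt->_fileSize);
    p = write_u64(p, (uint64_t)opt->_seekTo);
    p = write_u32(p, (uint32_t)opt->_optionBits);
    p = write_u64(p, (uint64_t)opt->_mtime);
    p = write_u64(p, (uint64_t)opt->_mode);
    return (size_t)(p - out);
}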

Arduino Ethernet Byte size problem

I'm using an Arduino (Duemilanove) with the official Ethernet shield to send data to the controller for controlling an LED matrix. I am trying to send some raw 32-bit unsigned int values (Unix timestamps) to the controller by taking the 4 bytes in the 32-bit value on the desktop and sending them to the Arduino as 4 consecutive bytes. However, whenever a byte value is larger than 127, the value returned by the Ethernet client library is 63.
The following is a basic example of what I'm doing on the Arduino side of things. Some things have been removed for neatness.
byte buffer[32];
memset(buffer, 0, 32);
int data;
int i = 0;

data = client.read();
while (data != -1 && i < 32)
{
    buffer[i++] = (byte)data;
    data = client.read();
}
So, whenever the input byte is bigger than 127, the variable "data" ends up set to 63! At first I thought the problem was further down the line (buffer used to be char instead of byte), but when I print out "data" right after the read, it's still 63.
Any ideas what could be causing this? I know client.read() is supposed to return an int, and internally it reads data from the socket as uint8_t, which is a full unsigned byte, so I should be able to go up to at least 255...
EDIT: Right you are, Hans. Didn't realize that Encoding.ASCII.GetBytes only supported the first 7 bits and not all 8.
I'm more inclined to suspect the transmit side. Are you positive the transmit side is working correctly? Have you verified with a wireshark capture or some such?
63 is the ASCII code for '?'. That's no coincidence: ASCII doesn't have character codes for values over 127. An ASCII encoder commonly replaces such invalid codes with a question mark; that's the default behavior of the .NET Encoding.ASCII encoder, for example.
It isn't exactly clear where that might happen. Definitely not in your snippet. Probably on the other end of the wire. Write bytes, not characters.
+1 for Hans Passant and Karl Bielefeldt.
Can you just send the data without encoding? How is the data being sent? TCP/UDP/IP/Ethernet definitely support sending binary data without restriction. If this isn't possible, perhaps converting the data to hex will solve the problem. Base64 will also work (better) but is considerably more work. For small amounts of data, hex is probably the easiest and fastest solution.
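If the raw bytes do arrive intact, reassembling the 32-bit timestamp on the Arduino side is then straightforward; a sketch, assuming the desktop sends the most significant byte first:

// Sketch: rebuild a 32-bit Unix timestamp from 4 raw bytes (MSB first).
uint32_t readTimestamp(const byte *buffer)
{
    return ((uint32_t)buffer[0] << 24) |
           ((uint32_t)buffer[1] << 16) |
           ((uint32_t)buffer[2] << 8)  |
            (uint32_t)buffer[3];
}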
+1 again to Karl and Ben for mentioning wireshark. Invaluable for debugging network problems like this.