Why do I get an error when reading or writing more than 3 bytes using libusb to communicate with a PIC 18F2550? - c++

I'm using libusb in Qt to communicate with a PIC microcontroller, 18F2550. The thing is that it's working OK until I try to send or read more than three bytes. Why does it happen?
I've tried using both bulk_read and interrupt_read transfers. When I make the buffer size three or less, the transmission works perfectly, with either bulk or interrupt. When the size is greater than three, I get buffer[1] and buffer[2] OK, but the rest are wrong.
The error I'm getting is a timeout. For input I'm using endpoint 0x81.
More information:
The return value from the bulk or interrupt read is -116. The number I'm sending from the PIC to the PC in the first two bytes ([0] and [1]) is 0x02D6 in hex. With this number, buffer[0] = -42 (when it should be 0xD6 = 214) and buffer[1] = 2, which is correct.
In bytes [2] and [3] the number is 0x033D, and I get [2] = 61 = 0x3D, which is correct, but [3] = -42??? (like [0]).
And the fifth byte is 1, but the software shows 2???. Might it be a problem in the microcontroller, since I'm programming it as a USB HID?
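A side note on the negative values: libusb hands back raw bytes, so if the receive buffer is a plain (signed) char array, the byte 0xD6 prints as -42 even though the bits are correct. A minimal sketch of viewing the same data unsigned, reusing the buffer from the question:
unsigned char lo = (unsigned char)buffer[0]; // 0xD6 stored in a signed char prints as -42; same bits, now 214
unsigned char hi = (unsigned char)buffer[1]; // 0x02
int value = (hi << 8) | lo;                  // reassembles 0x02D6 = 726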

I don't think that being an HID is the problem. I had a similar issue before; the PIC would randomly time out when large amounts of data were being transmitted. It turned out to be some voltage fluctuation on the MCU. How are you connecting the crystal? Do you have a capacitor on VUSB to regulate it?
Building a PIC18F USB device is a great tutorial on building a PIC HID, and even though it's based on the 18F4550 rather than the 18F2550, it should be quite similar, and I'm sure you can get a lot out of the schematics and hardware setup. It was the starting point for my PIC-USB projects.

Related

Decoding an unknown CRC or checksum?

I've been trying to decode the CRC or checksum algorithm used for the serial communication between a drone and its camera for about a week without a lot of luck, and I was wondering if anybody here sees something I am missing or has any suggestions.
A typical packet looks like this:
FE1A390100020001AE0BE0FF090046250B00040000004E0D32080008540D8808F4016B54
They always start with 0xFE. The 2nd byte is the total size of the packet minus 10 bytes. The packet sizes vary, but I think I am specifically interested in the 0x1A size. Byte 3 seems to be a packet counter, because it usually increases by 1, but sometimes I have seen it jump to a completely different number for a few packets (usually when changing to a 0x22-size packet) before resuming the increment-by-1 sequence. The last 2 bytes always change, and I believe they are the checksum or CRC. All the rest of the bytes seem to stay the same from one 0x1A packet to the next unless I manipulate the drone's radio controls.
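To make that layout concrete, here is a minimal sketch of pulling those fields out of one frame (the struct and field names are guesses based purely on the observations above, nothing documented):
#include <cstdint>
#include <vector>

struct Frame {
    uint8_t start;               // always 0xFE
    uint8_t length;              // total packet size minus 10
    uint8_t counter;             // usually increments by 1
    std::vector<uint8_t> body;   // everything between the counter and the trailer
    uint16_t trailer;            // presumed CRC/checksum; byte order is a guess
};

Frame parse(const std::vector<uint8_t>& raw)
{
    Frame f;
    f.start   = raw[0];
    f.length  = raw[1];          // e.g. 0x1A for the packets of interest
    f.counter = raw[2];
    f.body.assign(raw.begin() + 3, raw.end() - 2);
    f.trailer = static_cast<uint16_t>((raw[raw.size() - 2] << 8) | raw[raw.size() - 1]);
    return f;
}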
Right after powering up there is a series of packets that I assume is for initializing the communication. They are the shortest packets and have the least amount of change between them, so it seems like they might be the easiest to look at. Here are the first 7 packets sent after powering it on.
From Drone to camera
Time:
8.3982205 FE030001000000010200018F68
8.39934725 FE03010100000001020001A844
8.400473958 FE03020100000001020001C130
8.401600708 FE050301000000000000000001AAE8
8.402900792 FE1A040100020001000000000000000000000C000300000853060008AB028808F4014629
8.406020958 FE22050100030002000000000000000000000000000000000000B3FFFFFFDE22006300FF615110050000C956
8.4098345 FE1A060100020001000000000000000000000C000300000853060008AB028808F40180A9
If I put the first 3 packets into reveng with -w 16 -s then it comes back with:
reveng: warning: you have only given 3 samples
reveng: warning: to reduce false positives, give 4 or more samples
width=16 poly=0x1487 init=0x0334 refin=false refout=false xorout=0x0000 check=0xa5b9 residue=0x0000 name=(none)
If I add the 4th packet it finds the same poly, but the rest of it looks different:
width=16 poly=0x1487 init=0x417d refin=false refout=false xorout=0x5582 check=0xbfa2 residue=0xb059 name=(none)
If I add the 5th packet, reveng comes back with no model found.
However, if I remove packet 4 and run it with packets 1, 2, 3 and 5, it finds the same poly again, but different values for the rest:
width=16 poly=0x1487 init=0x804b refin=false refout=false xorout=0x0138 check=0x7dcc residue=0xc8ca name=(none)
Most combinations of a 0x1A-size packet plus the first 3 initialization packets that I run through reveng come back with 'no model found'. So far, every run of reveng with only 0x1A-sized packets has failed to find a model.
I think it is possible that after the initialization packets it somehow incorporates info received from the camera into the CRC calculation for the data going from the drone to the camera, but there isn't a lot of data in those packets. Here are the first 9 packets sent from the camera to the drone. Prior to the first 0x1A packet being sent from the drone, the only data sent from the camera seems to be 0x7D0001.
From camera to drone:
Time
3.474456792 FE0500020000000000007D00013D40
4.475220208 FE0501020000000000007D000168C5
5.476483875 FE0502020000000000007D00018642
6.477295958 FE0503020000000000007D0001D3C7
7.4783405 FE0504020000000000007D00014B45
8.479420458 FE06050200010003FA078538B838B3
8.480811667 FE0506020000000000007D0001F047
9.48057875 FE0507020000000000007D0001A5C2
9.481883 FE06080200010003F9078638B8386037
I have tried incorporating 0x7D0001 into the packets and running them through reveng, but that didn't seem to help.
I have also tried reveng -w 8 -s on various combinations of packets without finding a model, and I have tried various checksum algorithms manually (possibly incorrectly) without success.
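For testing candidate models by hand, a plain MSB-first CRC-16 routine parameterized the same way reveng reports them (refin=false, refout=false) is enough; feed it a packet minus its last two bytes together with whichever poly/init/xorout values reveng suggests and compare the result against those two bytes:
#include <cstdint>
#include <cstddef>

// Bitwise CRC-16, MSB first, no input/output reflection.
uint16_t crc16(const uint8_t* data, size_t len,
               uint16_t poly, uint16_t init, uint16_t xorout)
{
    uint16_t crc = init;
    for (size_t i = 0; i < len; ++i) {
        crc ^= static_cast<uint16_t>(data[i]) << 8;
        for (int bit = 0; bit < 8; ++bit)
            crc = (crc & 0x8000) ? static_cast<uint16_t>((crc << 1) ^ poly)
                                 : static_cast<uint16_t>(crc << 1);
    }
    return crc ^ xorout;
}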
I have a bunch more data that I have captured here:
https://drive.google.com/open?id=1v8MCaXOvP_2Wv_hcaqhUZnXvqNI1_2Ur
Any ideas? Suggestions? This has been driving me nuts for a week.

LibsUsbK buffers not being filled when using function UsbK_IsoReadPipe

I'm trying to write some code to read from an Isochronous pipe using LibUsbK in Win32. I have successfully initialised the device into the correct state to send and receive Isochronous data and I can see data being sent over the USB in my hardware USB analyser, but the buffers I am receiving are always unfilled even though the analyser shows that there was data in the packets sent to the PC.
I'm new to LibUsbK and to using isochronous transfers, though I'm not new to USB in general, and I've been really struggling with this.
The code I'm using to read from the device is something like this...
UsbK_SelectInterface(usbHandle,1,0);
UsbK_SetAltInterface(usbHandle,1,0,1);
IsoK_Init(&isoCtx, ISO_PACKETS_PER_XFER, 0);
IsoK_SetPackets(isoCtx, ISO_PACKET_SIZE); // Size of each individual packet
OvlK_Init(&ovlPool, usbHandle, 4, 0);
OvlK_ResetPipe(usbHandle, 0x83);
OvlK_Acquire(&ovlkHandle, ovlPool);
UsbK_IsoReadPipe(usbHandle, 0x83, inBuffer, sizeof(inBuffer), ovlkHandle, isoCtx);
while(!finished)
{
    if(OvlK_IsComplete(ovlkHandle))
    {
        fwrite(inBuffer, sizeof(inBuffer), 1, outFile);
        memset(inBuffer, 0xcc, sizeof(inBuffer));
        OvlK_ReUse(ovlkHandle);
        UsbK_IsoReadPipe(usbHandle, 0x83, inBuffer, sizeof(inBuffer), ovlkHandle, isoCtx);
    }
}
If I put a breakpoint at the fwrite line then the inBuffer is always full of 0xCC - ie, not having been filled by the iso read.
I've checked all the error return values from the UsbK/OvlK function calls and they are all as they should be. I've checked my buffers are sufficiently big to receive the data.
I use very similar code to write to the ISO out pipe on endpoint 0x02 and that works perfectly; the only real difference between the code above and my write code is that the fwrite/memset calls are replaced with a call to a "fillbuffer" function that populates my outBuffer before calling the UsbK_IsoWritePipe function.
I tried looking through any examples I could find in the samples and also online but struggled to understand/get them to work with my particular device.
Any suggestions or help greatly appreciated.
So it appears that the above code did work and I was being misled by the fact that the debugger was interrupting the flow of things - I keep forgetting that trying to debug real-time stuff can introduce its own issues.
The first issue was that stepping through the code in the debugger was interfering with the low-level libusbK code capturing the USB packets and filling my buffers correctly - once I let it run at full speed and found other ways to test the buffers, I did actually find there was some data in there.
The second problem was that quite often the buffer only started being filled part way through (not always right from the start), so when I examined the data I was only printing the first part of the buffer to the console, saw nothing but 0xCC, and assumed it hadn't worked.
Once I realised that there was actually some data later in the buffer, I just started looking through the buffer in packet-sized chunks: if a chunk consisted entirely of 0xCC I would skip it and move on, but if any of it was not 0xCC I would treat it as a valid packet - this worked perfectly and I was successfully receiving all the data. I'm sure there's a more "proper" way of doing this, but it works for me now.
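For reference, the chunk scan can be as simple as this sketch (reusing inBuffer, ISO_PACKET_SIZE and outFile from the code above):
// Walk the transfer buffer in packet-sized chunks and keep only the chunks
// the device actually touched (i.e. that are not still all 0xCC filler).
for (size_t offset = 0; offset + ISO_PACKET_SIZE <= sizeof(inBuffer); offset += ISO_PACKET_SIZE)
{
    unsigned char* packet = inBuffer + offset;
    bool untouched = true;
    for (size_t i = 0; i < ISO_PACKET_SIZE; ++i)
    {
        if (packet[i] != 0xCC) { untouched = false; break; }
    }
    if (!untouched)
        fwrite(packet, ISO_PACKET_SIZE, 1, outFile);   // looks like a real packet, keep it
}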

IRQ 8 isn't working... HW or SW?

First, I program for vintage computer groups. What I write is specifically for MS-DOS and not Windows, because that's what people are running. My current program is for later systems and not the 8086 line, so the plan was to use IRQ 8. This allows me to set the interrupt rate in power-of-two steps from 2/second to 8192/second (2, 4, 8, 16, etc.).
Only, for some reason, on the newer old systems (OK, that sounds weird) it doesn't seem to be working. In emulation, and on the 386 system I have access to, it works just fine, but on the P3 system I have (GA-6BXC motherboard with a P3 800 CPU) it just doesn't work.
The code
setting up the interrupt
disable();
oldrtc = getvect(0x70);    //Reads the old vector for IRQ 8
setvect(0x70,countdown);   //Points the IRQ 8 vector at my handler
outportb(0x70,0x8a);       //Select RTC register A (NMI disabled)
y = inportb(0x71) & 0xf0;  //Keep the upper bits, clear the rate-select bits
outportb(0x70,0x8a);
outportb(0x71,y | _MRATE_); //Adjustable value, set for 64 interrupts per second
outportb(0x70,0x8b);       //Select RTC register B
y = inportb(0x71);
outportb(0x70,0x8b);
outportb(0x71,y | 0x40);   //Enable the periodic interrupt (bit 6)
enable();
at the end of the interrupt
outportb(0x70,0x0c);       //Select RTC register C
inportb(0x71);             //Reading register C acknowledges and resets the interrupt
outportb(0xa0,0x20);       //EOI to the slave PIC (IRQ 8 is cascaded through it)
outportb(0x20,0x20);       //EOI to the master PIC - there are 2 PICs on AT machines and later
When closing the program down
disable();
outportb(0x70,0x8b);       //Select RTC register B
y = inportb(0x71);
outportb(0x70,0x8b);
outportb(0x71,y & 0xbf);   //Clear bit 6 to disable the periodic interrupt
setvect(0x70,oldrtc);      //Restore the original IRQ 8 vector
enable();
I don't see anything in the code that could be causing the problem, but it just doesn't make sense. While I don't completely trust the information, MSD "does" report IRQ 8 as the RTC Counter and says it is present and working just fine. Is it possible that later systems have moved the vector? Everything I find says that IRQ 8 is vector 0x70, but the interrupt never triggers on my Pentium III system. Is there some way to find out if the vectors have been changed?
It's been a LONG time since I've done any MS-DOS code and I don't think I ever worked with this particular interrupt (I'm pretty sure you can just read the memory location to fetch the time, and IRQ 0 can be used to trigger you at an interval too, so maybe that's better). Anyway, given my rustiness, forgive me for kinda link dumping.
http://wiki.osdev.org/Real_Time_Clock - the bottom of that page has someone saying they've had problems on some machines too. RBIL suggests it might be a BIOS thing: http://www.ctyme.com/intr/rb-7797.htm
Without DOS, I'd just capture IRQ 0 itself, remap all of them to my own interrupt numbers, and change the timing as needed. I've done that somewhat recently! I think that's a bad idea on DOS, though; this looks more recommended for that: http://www.ctyme.com/intr/rb-2443.htm
Anyway though, I betcha it has to do with the BIOS thing:
"Notes: Many BIOSes turn off the periodic interrupt in the INT 70h handler unless in an event wait (see INT 15/AH=83h,INT 15/AH=86h).. May be masked by setting bit 0 on I/O port A1h "

Character encoding problem with QextSerialPort (Qt/C++)

I am developing a Qt/C++ programme in QtCreator that reads and writes from/to the serial port using QextSerialPort. My programme sends commands to a Rhino Mark IV controller and must read the response of those commands (just in case they produce any response). My development and deployment platform is Windows XP Professional.
When the Mark IV sends a response to a command and my programme reads that response from the serial port buffer, the data are not properly encoded; my programme does not seem to get plain ASCII data. For example, when the Mark IV sends an ASCII "0" (decimal 48) followed by a carriage return (decimal 13), my buffer (char *) gets -80 and 13. Characters are not properly encoded, but carriage returns are indeed. I have tried using both read (char *data, qint64 maxSize) and readAll ().
I have been monitoring the serial port traffic using two monitors that interpret ASCII data and display the corresponding characters, and the data sent in both directions seem to be correctly encoded (they are actually displayed correctly). Given that QByteArray does not interpret any character encoding and that I have tried using both read (char *data, qint64 maxSize) and readAll (), I have ruled out the problem being caused by Qt. However, I am not sure whether the problem is caused by QextSerialPort, because my programme sends (writes) data properly but does not read the correct bytes.
I have also tried talking to the Mark IV controller by hand using HyperTerminal, and the communication takes place correctly, too. I set up the connection using HyperTerminal with the following parameters:
Baud rate: 9600
Data bits: 8
Parity: none
Stop bits: 1
Flow control: Hardware
My programme sets up the serial port using the same parameters. HyperTerminal works, my programme does not.
I started using QextSerialPort 1.1 from qextserialport.sourceforge.net and then tried with the latest source code from QextSerialPort on Google Code, and the problem remains.
What is causing the wrong character encoding?
What do I have to do to solve this issue?
48 vs. -80 smells like a signed char vs. unsigned char mismatch to me. Try with explicit unsigned char* instead of char*.
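A quick way to rule that in or out is to print each received byte as signed, as unsigned, and masked to 7 bits; a minimal sketch (the port object name is hypothetical, and it needs #include <QDebug>):
QByteArray data = port->readAll();
for (int i = 0; i < data.size(); ++i) {
    unsigned char u = static_cast<unsigned char>(data.at(i));  // same bits, unsigned view
    qDebug() << "signed:" << int(data.at(i))
             << "unsigned:" << int(u)
             << "low 7 bits:" << int(u & 0x7F);                // -80 (0xB0) masks down to 48 ('0')
}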
Finally, I have realized that I was not configuring the serial port correctly, as suggested by Judge Maygarden. I did not find that information in the device's manual, but in the manual of a software product developed for that device.
The correct way to set up the serial port for connecting to the Mark IV controller is:
Baud rate: 9600
Data bits: 7
Parity: even
Stop bits: 2 bits
Flow control: Hardware
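In QextSerialPort terms that configuration looks roughly like this (the enum names are from memory of the QextSerialPort headers, so double-check them against the version you build with):
port->setBaudRate(BAUD9600);
port->setDataBits(DATA_7);
port->setParity(PAR_EVEN);
port->setStopBits(STOP_2);
port->setFlowControl(FLOW_HARDWARE);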
However, I am still wondering why HyperTerminal showed the characters properly even with the wrong configuration.

Arduino Ethernet Byte size problem

I'm using an Arduino (Duemilanove) with the official Ethernet shield to send data to the controller for controlling an LED matrix. I am trying to send some raw 32-bit unsigned int values (Unix timestamps) to the controller by taking the 4 bytes in the 32-bit value on the desktop and sending them to the Arduino as 4 consecutive bytes. However, whenever a byte value is larger than 127, the value returned by the Ethernet client library is 63.
The following is a basic example of what I'm doing on the Arduino side of things. Some things have been removed for neatness.
byte buffer[32];
memset(buffer, 0, 32);
int data;
int i = 0;
data = client.read();
while(data != -1 && i < 32)
{
    buffer[i++] = (byte)data;
    data = client.read();
}
So, whenever the input byte is bigger than 127 the variable "data" will end up getting set to 63! At first I thought the problem was further down the line (buffer used to be char instead of byte) but when I print out "data" right after the read, it's still 63.
Any ideas what could be causing this? I know client.read() is supposed to return an int and internally reads data from the socket as a uint8_t, which is a full byte and unsigned, so I should be able to at least go up to 255...
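For reference, once the four raw bytes do arrive intact, putting the timestamp back together is just shifts; a minimal sketch that assumes the desktop sends the least-significant byte first (that byte order is an assumption, not something stated above):
// buffer[] is filled by the read loop above; bytes 0..3 hold one timestamp,
// least-significant byte first (assumed ordering).
unsigned long timestamp =
     (unsigned long)buffer[0]
  | ((unsigned long)buffer[1] << 8)
  | ((unsigned long)buffer[2] << 16)
  | ((unsigned long)buffer[3] << 24);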
EDIT: Right you are, Hans. Didn't realize that Encoding.ASCII.GetBytes only supported the first 7 bits and not all 8.
I'm more inclined to suspect the transmit side. Are you positive the transmit side is working correctly? Have you verified with a wireshark capture or some such?
63 is the ASCII code for '?'. That's no coincidence: ASCII doesn't have character codes for values over 127, and an ASCII encoder commonly replaces such invalid codes with a question mark. That's the default behavior for the .NET Encoding.ASCII encoder, for example.
It isn't exactly clear where that might happen. Definitely not in your snippet. Probably on the other end of the wire. Write bytes, not characters.
+1 for Hans Passant and Karl Bielefeldt.
Can you just send the data without encoding? How is the data being sent? TCP/UDP/IP/Ethernet definitely support sending binary data without restriction. If this isn't possible, perhaps converting the data to hex will solve the problem. Base64 will also work (better) but is considerably more work. For small amounts of data, hex is probably the easiest and fastest solution.
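If you go the hex route, the Arduino-side decode is only a few lines; a minimal sketch (the helper name hexToUint32 is made up here) that turns 8 ASCII hex characters back into the original 32-bit value, using only characters below 128 so an ASCII-only sender can't mangle them:
// Decode 8 ASCII hex characters (e.g. "499602D2" -> 1234567890) into a 32-bit value.
unsigned long hexToUint32(const char* hex)
{
    unsigned long value = 0;
    for (int i = 0; i < 8; i++) {
        char c = hex[i];
        unsigned char nibble;
        if (c >= '0' && c <= '9')      nibble = c - '0';
        else if (c >= 'A' && c <= 'F') nibble = c - 'A' + 10;
        else if (c >= 'a' && c <= 'f') nibble = c - 'a' + 10;
        else                           nibble = 0;   // this sketch just ignores bad input
        value = (value << 4) | nibble;
    }
    return value;
}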
+1 again to Karl and Ben for mentioning Wireshark. Invaluable for debugging network problems like this.