Hamming ECC computation - error-correction

Consider the example of Hamming ECC: https://en.wikipedia.org/wiki/Hamming_code
Suppose that after receiving you find that parity bits 16 and 8 are incorrect. Which bit do you correct?

The question is a bit vague, but here are some possible answers:
1) 20-bit (20, 15) SEC code
If you have the code shown in the table without the "..." section filled in, then it depends on the implementation of the decoder, but in theory it should be a detectable error: the syndrome points at bit 24, which does not exist in a 20-bit code. The decoder would probably raise the "detectable but uncorrectable error" (DUE) signal.
2) 31-bit (31, 26) SEC code
If you're talking about the code in the table with the "..." section filled in, this is a (31, 26) code. The decoder will miscorrect bit 8+16 = 24, causing silent data corruption (SDC); see the syndrome sketch after this list.
3) 21-bit (21, 15) or 32-bit (32, 26) SEC-DED code
If you add an overall parity bit to the code (see the section "Hamming codes with additional parity (SECDED)"), then the code can detect any two-bit error. This error will therefore be detected and the decoder will raise the DUE signal.
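To make option 2 concrete, here is a small sketch (mine, not from the linked article) of how a plain SEC decoder forms the syndrome from the failing parity checks:

#include <stdio.h>

/* In a Hamming code the parity bits sit at positions 1, 2, 4, 8, 16, ...
   (1-based), so a bare SEC decoder forms the syndrome by adding up the
   positions of the failing parity checks. */
int main(void)
{
    int failing[] = { 8, 16 };      /* the two parity checks that came out wrong */
    int syndrome = 0;

    for (int i = 0; i < 2; i++)
        syndrome |= failing[i];     /* positions are distinct powers of two, so OR equals sum */

    printf("syndrome = %d\n", syndrome);   /* prints 24 */
    /* A bare SEC decoder would flip bit 24 - exactly the miscorrection in
       option 2 if the real event was a double error hitting bits 8 and 16. */
    return 0;
}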

Is HAL_UARTEx_RxEventCallback Size parameter calculated programmatically or by hardware

I'm implementing UART DMA reception with the STM HAL library and I want to know whether the message size is counted by hardware (for example by counting clock ticks until the line is idle) or by some software method (something like strlen). So if Size in
HAL_UARTEx_RxEventCallback(UART_HandleTypeDef *huart, uint16_t Size)
is counted by hardware, I can send data in raw hex format, but if it is calculated by something like strlen, I may run into problems if the data contains 0x00 and would have to send data as ASCII.
I've tried to do some research in the generated code in Keil but failed (maybe I didn't try hard enough), so maybe somebody can help me.
If you are using UART DMA, it is calculated by hardware.
If you check the call hierarchy of HAL_UARTEx_RxEventCallback using your IDE, you can see how the Size variable is calculated.
The function is executed in the following flow (it may be slightly different depending on the version of the HAL driver):
UART idle interrupt occurs
Call HAL_UART_IRQHandler()
If DMA mode is enabled, call HAL_UARTEx_RxEventCallback(huart, (huart->RxXferSize - huart->RxXferCount))
Therefore, the Size variable is calculated as (huart->RxXferSize - huart->RxXferCount):
huart->RxXferSize is the value set when initializing the RX DMA transfer.
huart->RxXferCount is (huart->hdmarx)->Instance->NDTR.
NDTR is maintained by hardware: it counts how much of the buffer remains after the DMA has transferred data to memory.
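As a minimal sketch of how this is typically used (the buffer name and length are mine, and HAL_UARTEx_ReceiveToIdle_DMA is assumed to exist in your HAL version):

#include "stm32f4xx_hal.h"   /* adjust to your device family */

#define RX_BUF_LEN 64
static uint8_t rx_buf[RX_BUF_LEN];

void start_reception(UART_HandleTypeDef *huart)
{
    /* Arm a DMA reception that completes on an IDLE line (or a full/half buffer). */
    HAL_UARTEx_ReceiveToIdle_DMA(huart, rx_buf, RX_BUF_LEN);
}

void HAL_UARTEx_RxEventCallback(UART_HandleTypeDef *huart, uint16_t Size)
{
    /* Size = RxXferSize - RxXferCount, i.e. how many bytes the DMA actually
       wrote into rx_buf (taken from the hardware NDTR counter, not strlen),
       so payload bytes of 0x00 are fine. */

    /* ... handle the first Size bytes of rx_buf here ... */

    /* Re-arm for the next message. */
    HAL_UARTEx_ReceiveToIdle_DMA(huart, rx_buf, RX_BUF_LEN);
}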

How to determine CRC16 initial checksum so resulting checksum is zero

Working on a SPI communication bus between an array of SAMD MCUs.
I have an incoming packet which is something like { 0x00, 0xFF, 0x00, 0xFF }.
The receiver chip performs CRC16 check on the incoming packet.
Since I am expecting the exact same packet every time, I want a zero CRC checksum when the packet is valid and a non-zero checksum when there is a transfer error.
I know that I could append the calculated CRC16 to the packet when sending it, so that the CRC check on the receiver side outputs 0, but in this case it is impossible to add a CRC16 checksum to the packet, since the packet is constructed by multiple sender chips on the SPI line and each chip only fills in its own two bytes of the packet.
I need to load an initial CRC checksum on the receiver side, so that after the incoming packet is checked the resulting CRC equals zero (if the packet is intact).
The answer here on SO is actually what I am looking for, but it is for CRC32 and I don't really understand the principle of the code, so I can't rewrite it for CRC16.
Any help would be greatly appreciated!
The solution is simply to use a look-up table based CRC. If you can't append the checksum (aka the Frame Check Sequence, FCS) to the packet, then do the table look-up first and then simply compare the result against the expected value for your fixed data.
Please note that "CRC 16" could mean anything; there are multiple versions and (non)standards. The most common one is perhaps the one called "CRC-16-CCITT" with the 1021h polynomial and initial value FFFFh, but even for that one there are multiple algorithms out there - some are correct, some are broken. Your biggest challenge will be to find a trustworthy CRC algorithm.
However, I actually think SAMD specifically uses hardware-generated CRC-16-CCITT on-chip, for DMA purposes. Since this is SPI, it should be DMA-able, so perhaps investigate if you can use that one somehow.
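For illustration, here is a sketch of the table-driven approach, assuming the unreflected CRC-16-CCITT parameters mentioned above (poly 0x1021, init 0xFFFF); the receiver would compare the CRC of each received packet against the value precomputed for the known-good packet:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint16_t crc_table[256];

/* Build the table for the MSB-first CRC-16-CCITT polynomial 0x1021. */
static void crc16_init_table(void)
{
    for (int i = 0; i < 256; i++) {
        uint16_t crc = (uint16_t)(i << 8);
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
        crc_table[i] = crc;
    }
}

/* Table-driven CRC-16-CCITT, init 0xFFFF, no reflection, no final XOR. */
static uint16_t crc16_ccitt(const uint8_t *data, size_t len)
{
    uint16_t crc = 0xFFFF;
    for (size_t i = 0; i < len; i++)
        crc = (uint16_t)((crc << 8) ^ crc_table[(crc >> 8) ^ data[i]]);
    return crc;
}

int main(void)
{
    const uint8_t packet[] = { 0x00, 0xFF, 0x00, 0xFF };
    crc16_init_table();
    printf("CRC of the known-good packet: 0x%04X\n", crc16_ccitt(packet, sizeof packet));
    /* Compare the CRC of each received packet against this value. */
    return 0;
}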
I found a solution, thanks to the advice of Bastian Molkenthin, who made this great online CRC calculator.
He advised trying a brute-force calculation over all 2^16 possible CRC16 initial values. Indeed, after a few lines of code and a few microseconds, the SAMD51 found an initial value that gives a zero CRC for the given buffer.
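A sketch of that kind of search, assuming the plain (unreflected) CRC-16-CCITT with polynomial 0x1021 and no final XOR; in practice you would call whatever CRC routine the receiver hardware actually implements:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Bitwise CRC-16-CCITT (poly 0x1021), MSB first, with a caller-supplied
   initial value, no reflection, no final XOR. */
static uint16_t crc16_ccitt(uint16_t init, const uint8_t *data, size_t len)
{
    uint16_t crc = init;
    for (size_t i = 0; i < len; i++) {
        crc ^= (uint16_t)data[i] << 8;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ 0x1021)
                                 : (uint16_t)(crc << 1);
    }
    return crc;
}

int main(void)
{
    /* The fixed packet the receiver expects. */
    const uint8_t packet[] = { 0x00, 0xFF, 0x00, 0xFF };

    /* Try all 2^16 initial values until the CRC of the intact packet is zero.
       For a CRC this mapping is one-to-one, so exactly one value matches. */
    for (uint32_t init = 0; init <= 0xFFFF; init++) {
        if (crc16_ccitt((uint16_t)init, packet, sizeof packet) == 0) {
            printf("initial value 0x%04X gives CRC = 0\n", (unsigned)init);
            break;
        }
    }
    return 0;
}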

Identifying CRC-16 algorithm used

I'm trying to communicate with a device over a serial communication protocol and am having trouble finding out which checksum/CRC algorithm is used for the last 2 bytes of the messages. I've tried several CRC16 algorithms in various online CRC utilities, like:
http://www.sunshine2k.de/coding/javascript/crc/crc_js.html
http://www.zorc.breitbandkatze.de/crc.html
I've also tried reverse engineering with the help of the REVENG software, but it only gives occasional random hits (depending on which of the captured example messages I try together), and none of them seem to be a correct algorithm that matches all examples.
I have not found any documentation for the device that would indicate the CRC16 algorithm used, or whether it is some other variant such as the lowest bytes of a CRC32.
Below are 2 types of messages, each with some different examples and variations. The first 4 bytes of the message give the remaining number of bytes in the message. Most probably these first 4 bytes should not be included in the CRC calculation, but that is just a guess. What I believe is a 16-bit CRC is the last 2 bytes of each message.
Message type 1 (examples):
0000000908100300180a4621a8
0000000901100300180a463a11
0000000909100300180a461f26
0000000902100300180a4649f9
000000090a100300180a466cce
0000000903100300180a46fb58
000000090b100300180a46de6f
Message type 2 (examples):
0000001f09131900180a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a0a7be0
0000001f0913190018141414141414141414141414141414141414141414141414f3a5
0000001f09131900181e1e1e1e1e1e1e1e1e1e1e1e1e1e1e1e1e1e1e1e1e1e1e1e3d38
0000001f0913190018282828282828282828282828282828282828282828282828e82f
0000001f09131900183232323232323232323232323232323232323232323232321762
0000001f001319001814ffffffffffffffffffffffffffffffffffffffffffffff3d16
0000001f00131900181effffffffffffffffffffffffffffffffffffffffffffff2e93
0000001f001319001828ffffffffffffffffffffffffffffffffffffffffffffff3438
0000001f00131900185fffffffffffffffffffffffffffffffffffffffffffffffac2b
Is there anyone out there with some knowledge about CRCs who can point me in the right direction to figure this out?
It's not a CRC-16. I ran a simple brute-force search.
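For reference, here is a sketch of the kind of brute-force search referred to above (not the actual code used). It exploits the fact that for two equal-length messages the initial value and any final XOR cancel out of the XOR of their checksums, so only the polynomial and reflection have to be guessed at this stage. The two captures used and the refin = refout simplification are my own choices:

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

static uint8_t reflect8(uint8_t b)
{
    uint8_t r = 0;
    for (int i = 0; i < 8; i++)
        if (b & (1u << i)) r |= (uint8_t)(1u << (7 - i));
    return r;
}

static uint16_t reflect16(uint16_t v)
{
    uint16_t r = 0;
    for (int i = 0; i < 16; i++)
        if (v & (1u << i)) r |= (uint16_t)(1u << (15 - i));
    return r;
}

/* Generic bitwise CRC-16 with selectable polynomial and reflection;
   init is fixed to 0 and there is no final XOR (both cancel below). */
static uint16_t crc16(const uint8_t *d, size_t n, uint16_t poly, int refin, int refout)
{
    uint16_t crc = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t b = refin ? reflect8(d[i]) : d[i];
        crc ^= (uint16_t)b << 8;
        for (int bit = 0; bit < 8; bit++)
            crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ poly) : (uint16_t)(crc << 1);
    }
    return refout ? reflect16(crc) : crc;
}

int main(void)
{
    /* Two equal-length "message type 1" captures, payload without the 2 checksum bytes. */
    const uint8_t m1[] = { 0x00,0x00,0x00,0x09,0x08,0x10,0x03,0x00,0x18,0x0a,0x46 };
    const uint8_t m2[] = { 0x00,0x00,0x00,0x09,0x01,0x10,0x03,0x00,0x18,0x0a,0x46 };
    const uint16_t c1 = 0x21a8, c2 = 0x3a11;

    /* For equal-length messages, init and any final XOR cancel in c1 ^ c2. */
    const uint16_t target = c1 ^ c2;

    for (uint32_t poly = 0; poly <= 0xFFFF; poly++) {
        for (int ref = 0; ref <= 1; ref++) {
            uint16_t d = crc16(m1, sizeof m1, (uint16_t)poly, ref, ref)
                       ^ crc16(m2, sizeof m2, (uint16_t)poly, ref, ref);
            /* Check both byte orders, since the checksum storage order is unknown. */
            if (d == target || d == (uint16_t)((target << 8) | (target >> 8)))
                printf("candidate poly 0x%04X (reflected=%d)\n", (unsigned)poly, ref);
        }
    }
    /* Any candidate would still need an init/xorout search and a check against
       the other captures; the answer above reports that no CRC-16 survives this. */
    return 0;
}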
Here are some more messages, where only one or two of the last "data" bytes vary. They show somewhat more regular behaviour in the last 2 checksum bytes, but still give no clear view of how the algorithm is constructed.
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF065B7C
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF075A7C
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF08557C
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF09547C
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0A577C
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0B567C
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0C517C
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF06FFA285
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF07FFA284
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF08FFA28B
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF09FFA28A
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0AFFA289
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0BFFA288
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0CFFA28F
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0DFFA28E
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF06065B85
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF07075A84
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0808558B
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0909548A
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0A0A5789
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0B0B5688
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0C0C518F
0113190018FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF0D0D508E
Full set of 1200 messages at: http://dpaste.com/0DSMZJV

IRQ 8 isn't working... HW or SW?

First, I program for vintage computer groups. What I write is specifically for MS-DOS and not Windows, because that's what people are running. My current program is for later systems, not the 8086 line, so the plan was to use IRQ 8. This allows me to set the interrupt rate in binary values from 2 per second to 8192 per second (2, 4, 8, 16, etc.).
Only, for some reason, on the newer old systems (OK, that sounds weird) it doesn't seem to be working. In emulation, and on the 386 system I have access to, it works just fine, but on the P3 system I have (GA-6BXC MB w/P3 800 CPU) it just doesn't work.
The code
setting up the interrupt
disable();
oldrtc = getvect(0x70);      //Reads the vector for IRQ 8
setvect(0x70,countdown);     //Sets the vector for IRQ 8 to our handler
outportb(0x70,0x8a);         //Select RTC register A (with NMI disabled)
y = inportb(0x71) & 0xf0;    //Keep the upper bits, clear the rate-select bits
outportb(0x70,0x8a);
outportb(0x71,y | _MRATE_);  //Adjustable value, set for 64 interrupts per second
outportb(0x70,0x8b);         //Select RTC register B
y = inportb(0x71);
outportb(0x70,0x8b);
outportb(0x71,y | 0x40);     //Set bit 6 to enable the periodic interrupt
enable();
at the end of the interrupt
outportb(0x70,0x0c);         //Select RTC register C
inportb(0x71);               //Reading register C clears the interrupt flag
outportb(0xa0,0x20);         //EOI to the slave PIC (IRQ 8 lives on the slave)
outportb(0x20,0x20);         //EOI to the master PIC; there are 2 PICs on AT machines and later
When closing program down
disable();
outportb(0x70,0x8b);         //Select RTC register B
y = inportb(0x71);
outportb(0x70,0x8b);
outportb(0x71,y & 0xbf);     //Clear bit 6 to disable the periodic interrupt
setvect(0x70,oldrtc);        //Restore the original IRQ 8 vector
enable();
I don't see anything in the code that could be causing the problem, but it just doesn't seem to make sense. While I don't completely trust the information, MSD does report IRQ 8 as the RTC counter and says it is present and working just fine. Is it possible that later systems have moved the vector? Everything I find says that IRQ 8 is vector 0x70, but the interrupt never triggers on my Pentium III system. Is there some way to find out whether the vectors have been changed?
It's been a LONG time since I've done any MS-DOS code and I don't think I ever worked with this particular interrupt (I'm pretty sure you can just read a memory location to fetch the time, and IRQ 0 can be used to trigger you at an interval too, so maybe that's better). Anyway, given my rustiness, forgive me for kinda link dumping.
http://wiki.osdev.org/Real_Time_Clock - the bottom of that page has someone saying they've had problems on some machines too. RBIL suggests it might be a BIOS thing: http://www.ctyme.com/intr/rb-7797.htm
Without DOS, I'd just capture IRQ 0 itself, remap all of the IRQs to my own interrupt numbers, and change the timing as needed; I've done that somewhat recently! I think that's a bad idea on DOS though; this looks more recommended for that: http://www.ctyme.com/intr/rb-2443.htm
Anyway though, I betcha it has to do with the BIOS thing:
"Notes: Many BIOSes turn off the periodic interrupt in the INT 70h handler unless in an event wait (see INT 15/AH=83h,INT 15/AH=86h).. May be masked by setting bit 0 on I/O port A1h "

Why do I get an error when reading or writing more than 3 bytes using libusb to communicate with a PIC 18F2550?

I'm using libusb in Qt to communicate with a PIC microcontroller, an 18F2550. The thing is that it works OK until I try to send or read more than three bytes. Why does this happen?
I've tried using bulk_read and interrupt_read transfers. When I set the size of the buffer equal to or less than three, the transmission works perfectly, using either bulk or interrupt. When the size is greater than three, I get buffer[1] and buffer[2] OK, but the rest are wrong.
The error that I'm getting is a timeout. As the input endpoint I'm using 0x81.
More information:
The return value from the bulk or interrupt read is -116. The number I'm sending from the PIC to the PC in the first two bytes ([0] and [1]) is 0x02D6 in hex. With this number, buffer[0] = -42 (when it should be 0xD6 = 214) and buffer[1] = 2, which is correct.
In bytes [2] and [3] the number is 0x033D, and I get [2] = 61 = 0x3D, which is correct, and [3] = -42??? (like [0]).
And the fifth byte is 1, but the software shows 2???. Might it be a problem in the microcontroller, because I'm programming it as a USB HID?
I don't think that being a HID is the problem. I had a similar issue before; the PIC would randomly time out when a large amount of data was being transmitted. It turned out to be some voltage fluctuation on the MCU. How are you connecting the crystal? Do you have a capacitor on VUSB to regulate it?
Building a PIC18F USB device is a great tutorial on building a PIC HID, and even though it's not based on 18F2550 but on 18F4550, it should be quite similar, and I'm sure you can get a lot out of the schematics and hardware setup. It was the starting point for my PIC-USB projects.