Dividing the screen into windows when the valid value is 1 ~ 8 - C++

I have trouble understanding part of a protocol. The protocol is for the C-Power 5200 LED screen controller, and I must write the number of windows the screen is divided into as a hex value: if I have 8 windows, the value must be 0x08. But I want to split the screen into 15 windows, and the protocol documentation describes this field only as "number of windows, valid value 1 ~ 8".
I can't understand this part: how must I divide the screen if the valid value is only 1 ~ 8? What does it mean?
This is my full packet structure:
$window_collector_cc = "0x01,0x09,0x00,
0x00,0x00,0x00,0x00,0x08,0x00,0x08,0x00,
0x08,0x00,0x00,0x00,0x50,0x00,0x08,0x00,
0x58,0x00,0x00,0x00,0x08,0x00,0x08,0x00,
0x00,0x00,0x08,0x00,0x08,0x00,0x08,0x00,
0x08,0x00,0x08,0x00,0x50,0x00,0x08,0x00,
0x58,0x00,0x08,0x00,0x08,0x00,0x08,0x00,
0x00,0x00,0x10,0x00,0x08,0x00,0x08,0x00,
0x08,0x00,0x10,0x00,0x50,0x00,0x08,0x00,
0x58,0x00,0x10,0x00,0x08,0x00,0x08,0x00,
";
In this packet 0x09 is the number of windows, which does not work because of the 1 ~ 8 limit I can't understand; if I make it, for example, 0x08 or less, it works. Each line is one window with its options (X, Y, W, H). Please help me understand this line of the protocol. Sorry if the question is not opened correctly; just answer me, and I will delete it if something is wrong.
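If the controller really does cap a single command at 8 windows, it helps to validate the count when building the packet. Below is a minimal sketch in Python (rather than C++) based only on the bytes shown above; the header meaning (0x01 = command?, window count, 0x00 pad) and the little-endian 16-bit X, Y, W, H layout per line are guesses from the capture, not from the manual:

import struct

def build_window_packet(windows):
    # serialize a "divide screen into windows" command; layout guessed from
    # the captured packet: 3-byte header (0x01 = command?, count, 0x00 pad),
    # then four little-endian 16-bit values X, Y, W, H per window
    if not 1 <= len(windows) <= 8:
        raise ValueError("the protocol allows only 1 ~ 8 windows per command")
    packet = bytes([0x01, len(windows), 0x00])
    for x, y, w, h in windows:
        packet += struct.pack('<4H', x, y, w, h)
    return packet

# e.g. the first three windows from the capture above:
pkt = build_window_packet([(0, 0, 8, 8), (8, 0, 0x50, 8), (0x58, 0, 8, 8)])
print(pkt.hex())

For 15 windows you would presumably have to issue two commands (e.g. 8 + 7 windows), assuming the controller lets you redefine or append windows; only the C-Power documentation can confirm that.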

Related

Decoding an unknown CRC or checksum?

I've been trying to decode the CRC or checksum algorithm used for the serial communication between a drone and its camera for about a week without much luck, and I was wondering if anybody here sees something I am missing or has any suggestions.
A typical packet looks like this:
FE1A390100020001AE0BE0FF090046250B00040000004E0D32080008540D8808F4016B54
They always start with 0xFE. The 2nd byte is the total size of the packet minus 10 bytes. The packet sizes vary, but I think I am specifically interested in the 0x1A size. Byte 3 seems to be a packet counter, because it usually increases by 1, but sometimes I have seen it jump to a completely different number for a few packets (usually when changing to a 0x22-size packet) before resuming the increment-by-1 sequence. The last 2 bytes always change, and I believe they are the checksum or CRC. All the rest of the bytes seem to stay the same from one 0x1A packet to the next unless I manipulate the drone's radio controls.
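To make that structure concrete, here is a small Python sketch that slices a captured packet according to those rules (the byte order of the trailing checksum is an assumption):

def parse_packet(hexstr):
    pkt = bytes.fromhex(hexstr)
    assert pkt[0] == 0xFE              # packets always start with 0xFE
    assert len(pkt) == pkt[1] + 10     # 2nd byte = total size minus 10
    counter = pkt[2]                   # appears to increment packet to packet
    body = pkt[3:-2]                   # meaning mostly unknown
    checksum = pkt[-2:]                # the 2 trailing bytes that always change
    return counter, body, checksum

# the 0x1A packet above: 36 bytes total = 0x1A + 10
print(parse_packet("FE1A390100020001AE0BE0FF090046250B00040000004E0D32080008540D8808F4016B54"))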
Right after powering up there is a series of packets that I assume is for initializing the communication. They are the shortest packets and have the least amount of change between them, so they seemed like they might be the easiest to look at. Here are the first 7 packets sent after powering it on.
From drone to camera:
Time:
8.3982205 FE030001000000010200018F68
8.39934725 FE03010100000001020001A844
8.400473958 FE03020100000001020001C130
8.401600708 FE050301000000000000000001AAE8
8.402900792 FE1A040100020001000000000000000000000C000300000853060008AB028808F4014629
8.406020958 FE22050100030002000000000000000000000000000000000000B3FFFFFFDE22006300FF615110050000C956
8.4098345 FE1A060100020001000000000000000000000C000300000853060008AB028808F40180A9
If I put the first 3 packets into reveng with -w 16 -s then it comes back with:
reveng: warning: you have only given 3 samples
reveng: warning: to reduce false positives, give 4 or more samples
width=16 poly=0x1487 init=0x0334 refin=false refout=false xorout=0x0000 check=0xa5b9 residue=0x0000 name=(none)
If I add the 4th packet it finds the same poly, but the rest of it looks different:
width=16 poly=0x1487 init=0x417d refin=false refout=false xorout=0x5582 check=0xbfa2 residue=0xb059 name=(none)
If I add the 5th packet, reveng comes back with no model found.
However, if I remove packet 4 and then run it with packets 1, 2, 3 and 5, it finds the same poly again, but different values for the rest:
width=16 poly=0x1487 init=0x804b refin=false refout=false xorout=0x0138 check=0x7dcc residue=0xc8ca name=(none)
Most combinations of packets containing a 0x1A-size packet and the first 3 initialization packets come back from reveng with 'no model found'. So far, every run of reveng with only 0x1A-sized packets has failed to find a model.
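One way to sanity-check a reveng candidate by hand is to implement the model directly and compare it against the captured trailers. Here is a minimal non-reflected (refin=false, refout=false) CRC-16 in Python; which bytes the CRC covers and the byte order of the stored value are assumptions to experiment with:

def crc16(data, poly=0x1487, init=0x0334, xorout=0x0000):
    # plain MSB-first CRC-16, matching reveng's width=16 notation
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly if crc & 0x8000 else crc << 1) & 0xFFFF
    return crc ^ xorout

# reveng's "check" field is the CRC of the ASCII string "123456789", so this
# should print 0xa5b9 if the model above is implemented correctly
print(hex(crc16(b"123456789")))

# compare against a captured packet, assuming the CRC covers everything
# before the last two bytes and is stored big-endian
pkt = bytes.fromhex("FE030001000000010200018F68")
print(hex(crc16(pkt[:-2])), pkt[-2:].hex())

Don't expect the packet comparison to match until the right init/xorout values and byte coverage are found; the point is to be able to test hypotheses without re-running reveng.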
I think it is possible that after the initialization packets it somehow incorporates info it receives from the camera into the CRC calculation for the data going from the drone to the camera, but there isn't a lot of data in those packets. Here are the first 9 packets sent from the camera to the drone. Prior to the first 0x1A packet being sent from the drone, the only data sent from the camera seems to be 0x7D0001.
From camera to drone:
Time:
3.474456792 FE0500020000000000007D00013D40
4.475220208 FE0501020000000000007D000168C5
5.476483875 FE0502020000000000007D00018642
6.477295958 FE0503020000000000007D0001D3C7
7.4783405 FE0504020000000000007D00014B45
8.479420458 FE06050200010003FA078538B838B3
8.480811667 FE0506020000000000007D0001F047
9.48057875 FE0507020000000000007D0001A5C2
9.481883 FE06080200010003F9078638B8386037
I have tried incorporating 0x7D0001 into the packets and running them through reveng, but that didn't seem to help.
I have also tried reveng -w 8 -s on various combinations of packets without finding a model. And I have tried various checksum algos manually (possibly incorrectly) without success.
I have a bunch more data that I have captured here:
https://drive.google.com/open?id=1v8MCaXOvP_2Wv_hcaqhUZnXvqNI1_2Ur
Any ideas? Suggestions? This has been driving me nuts for a week

Python reading a constant serial byte length from device

I have a device that sends 23 characters (numbers and letters) via RS232 serial in the following format:
$02 T AAAAA Q CCC PP ZZZ S RR I I NFF $0D
(the spaces in the above string are for readability only)
In this string:
$02 represents the start-of-text control character, STX (hex 0x02);
$0D represents a carriage return, hex 0x0D (13 decimal).
I am currently reading this information in via Python, mostly successfully, but I still feel I am not doing it properly. I rarely program in Python, but I have to use a Raspberry Pi, so I decided to go with Python for the coding.
I setup my RPI serial port with the following function:
def setupSerialPort():
    ser = serial.Serial(
        port='/dev/ttyAMA0',
        baudrate=9600,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        bytesize=serial.EIGHTBITS,
        timeout=1,
        xonxoff=0,
        rtscts=0
    )
    return ser
From a while loop I read the port as follows:
# setup serial port
cSerial = setupSerialPort()
while 1:
    inbuff = cSerial.inWaiting()
    if inbuff > 0:
        msgCOM = cSerial.read(inbuff)
        #vMsgCOM = re.sub('[^A-Za-z0-9]+', '', msgCOM)
        # insert value into database
    sleep(1)
at which point I insert the "vMsgCOM" or "msgCOM" string into a MySQL database as I read/receive the data. At first I thought this worked pretty well, but after a week of data it became clear that I sometimes only capture partial data, which splits over two database rows. I'll give an example:
A correct 23-character string will look like this: K00000E1120002000063B00.
Now sometimes the string is split into two rows, like:
(1) K00000E11200020
(2) 00063B00
Another variation of the above is multiple 23-character chunks returned in a single read:
K00000E1120002000063B00K00000E1120002000063B00K00000E1120002000063B00
This happens roughly 15 times out of 400 reads.
Can anyone help me, in terms of coding, to somehow ensure that I always read the buffer correctly when the 23-character string arrives? I know timing can be an issue, hence the timeout=1, but somehow I read too quickly (or wait too long) when the read is not complete.
I had a look at this code (haven't tried it yet): pySerial inWaiting returns incorrect number of bytes (the def read_all(port, chunk_size=200) function part), but thought it best to rather ask advice from those in the know.
I have a bit of corrective code that concatenates the two rows and splits the multiple-chunk reads should these instances happen, but I still think it is not the best way of doing things.
If anyone can help me with some example code I will really appreciate it.
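Since the device frames every message with STX and CR, one robust approach is to accumulate all incoming bytes in a buffer and only extract complete frames, no matter how the bytes are split across reads. A minimal sketch, assuming each message is STX, then 23 payload characters, then CR (the function name and the drop-on-garbage policy are mine):

STX, CR = b'\x02', b'\r'

def read_messages(ser):
    # yield complete 23-character payloads, however the reads are chunked
    buf = b''
    while True:
        buf += ser.read(max(1, ser.inWaiting()))  # blocks up to `timeout`
        while CR in buf:
            frame, buf = buf.split(CR, 1)
            start = frame.rfind(STX)
            if start != -1 and len(frame) - start - 1 == 23:
                yield frame[start + 1:].decode('ascii')
            # else: incomplete or garbled frame, silently dropped

# usage:
# for msg in read_messages(setupSerialPort()):
#     insert_into_database(msg)  # hypothetical database helper

Partial frames stay in the buffer until the rest arrives, and multiple frames in one read are split correctly, which covers both failure cases you describe.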

Trying to decode an FM-like signal encoded in audio

I have an audio signal that has a kind of FM-encoded signal on it. The encoded signal uses the biphase mark coding technique.
This signal is a digital representation of a timecode in hours, minutes, seconds and frames. It basically works like this:
let's say we are working at 25 frames per second;
we know the code transmits 80 bits of information every frame (that is, 80 bits per frame x 25 frames per second = 2000 bits per second);
the wave is sampled at 44100 samples per second, so if we divide 44100 / 2000 we see that every bit occupies 22.05 samples;
a bit boundary happens when the signal changes sign;
if the wave changes sign and keeps its sign during the whole bit period, it is a ZERO; if the wave changes sign two times over one bit period, it is a ONE (see the sketch right after this list).
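As an illustration of those last two rules, here is a tiny Python sketch that generates the two half-bit levels per bit under biphase mark coding (purely illustrative; the function name and level convention are mine):

def bmc_encode(bits, level=1):
    # BMC toggles at every bit boundary; a 1 bit adds an extra mid-bit toggle
    halves = []
    for b in bits:
        level = -level                 # mandatory transition at the bit boundary
        if b == 0:
            halves += [level, level]   # hold the level for the whole bit period
        else:
            halves += [level, -level]  # extra transition in the middle
            level = -level
    return halves

print(bmc_encode([0, 1, 0]))  # [-1, -1, 1, -1, 1, 1]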
What my code does is this:
it detects the first zero crossing, which is the clock start (t0);
it measures the level at t = t0 + 0.75 * bitPeriod (the 0.75 gives some tolerance);
if that second level is different, we have a 1; if not, we have a 0.
This is the code:
// data is a C array of floats representing the audio levels
float bitPeriod = ceil(44100 / 2000);
int firstZeroCrossIndex = findNextZeroCross(data);
// firstZeroCrossIndex is the index where the signal changed sign,
// for example: data[0] = -0.23 and data[1] = 0.5
// -> firstZeroCrossIndex will be equal to 1
// if firstZeroCrossIndex is invalid, go away
if (firstZeroCrossIndex < 0) return;

float firstValue = data[firstZeroCrossIndex];
int lastSignal = sign(firstValue);
if (lastSignal == 0) return; // invalid, go away

while (1) {
    float newValue = data[(int)(firstZeroCrossIndex + 0.75f * bitPeriod)];
    int newSignal = sign(newValue);
    if (lastSignal == newSignal)
        printf("0");
    else
        printf("1");
    firstZeroCrossIndex += bitPeriod;
    // I think I must invert the signal here for the next loop iteration
    lastSignal = -newSignal;
    if (firstZeroCrossIndex > maximumPossibleIndex)
        break;
}
This code appears logical to me, but the result coming from it is total nonsense. What am I missing?
NOTE: this code executes over a live signal and reads values from a circular ring buffer. sign() returns -1 if the value is negative, 1 if the value is positive, or 0 if the value is zero.
Cool problem! :-)
The code fails in two independent ways:
You are searching for the first (any) zero crossing. This is good. But there is a 50% chance that this transition is the one which occurs at the boundary before every bit (0 or 1), rather than the mid-bit transition which marks a 1 bit. If you get it wrong in the beginning, you end up with nonsense.
You keep on adding bitPeriod (float, 22.05) to firstZeroCrossIndex (int). This means that your sampling points will slowly run out of phase with your analog signal, and you will see strange effects when your sampling point gets near the signal transitions. You will get nonsense, periodically at least.
Solution to 1: You must search for at least one 0 bit first, so you know which transition indicates just the next bit and which indicates a 1 bit. In practice you will want to re-synchronize your sampler at every 0 bit.
Solution to 2: Do not add bitPeriod to your sampling point. Instead, search for the next transition, like you did in the beginning. The next transition is either half a bit away or a complete bit away, which gives you the information you want. After a half-a-bit period you must see another half-a-bit period; if not, you must re-synchronize, since you took a middle transition for a start transition by accident. This is exactly the re-sync I was talking about in 1.
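A rough Python sketch of that transition-driven approach, measuring the gaps between zero crossings instead of sampling at fixed offsets (the 0.75-period threshold and the skip-one re-sync policy are my choices, not part of the answer above):

import numpy as np

SAMPLES_PER_BIT = 44100 / 2000.0  # 22.05

def decode_bmc(samples):
    # classify gaps between zero crossings: one full-period gap is a 0,
    # two consecutive half-period gaps are a 1
    crossings = np.where(np.diff(np.sign(samples)) != 0)[0] + 1
    gaps = np.diff(crossings)
    bits, i = [], 0
    while i < len(gaps):
        if gaps[i] > 0.75 * SAMPLES_PER_BIT:
            bits.append(0)   # one transition per bit period -> 0
            i += 1
        elif i + 1 < len(gaps) and gaps[i + 1] <= 0.75 * SAMPLES_PER_BIT:
            bits.append(1)   # two half-period gaps -> 1
            i += 2
        else:
            i += 1           # lone half-period gap: out of sync, re-align
    return bits

Because every bit is measured from actual transitions, the 22.05-sample period can never drift out of phase the way the fixed-increment version does.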

Why do I get an error when reading or writing more than 3 bytes using libusb to communicate with a PIC 18F2550?

I'm using libusb in Qt to communicate with a PIC microcontroller, an 18F2550. The thing is that it works OK until I try to send or read more than three bytes. Why does this happen?
I've tried using bulk read and interrupt read transfers. When I make the buffer size equal to or less than three, the transmission works perfectly, using bulk or interrupt. When the size is greater than three, I get buffer[1] and buffer[2] OK, but the rest are wrong.
The error that I'm getting is a timeout. As input I'm using endpoint 0x81.
More information:
The return value from the bulk or interrupt read is -116. The number that I'm sending from the PIC to the PC in the first two bytes ([0] and [1]) is 0x02D6 in hex. With this number, buffer[0] = -42 (when it should be 0xD6 = 214), and buffer[1] = 2, which is correct.
In bytes [2] and [3] the number is 0x033D, and I get [2] = 61 = 0x3D, which is correct, and [3] = -42??? (like [0]).
And the fifth byte is 1, but the software shows 2??? Might it be a problem in the microcontroller, because I'm programming it as a USB HID?
I don't think being a HID is the problem. I had a similar issue before; the PIC would randomly time out when large data was being transmitted. It turned out to be some voltage fluctuation on the MCU. How are you connecting the crystal? Do you have a capacitor on VUSB to regulate it?
Building a PIC18F USB device is a great tutorial on building a PIC HID, and even though it's not based on 18F2550 but on 18F4550, it should be quite similar, and I'm sure you can get a lot out of the schematics and hardware setup. It was the starting point for my PIC-USB projects.
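On the software side, two things may still be worth double-checking: HID interrupt endpoints deliver whole reports, so it is usually safest to request the full endpoint packet size (typically 64 bytes) per read, and the buffer should be treated as unsigned; note that -42 is exactly 0xD6 read as a signed char, so those bytes may be arriving correctly and only printing wrong. A quick way to test the device independently of the Qt code is pyusb (a different wrapper than the C libusb API); the VID/PID below are placeholders:

import usb.core

dev = usb.core.find(idVendor=0x04D8, idProduct=0x003F)  # placeholder VID/PID
if dev is None:
    raise SystemExit("device not found")
dev.set_configuration()

# read one full report from interrupt IN endpoint 0x81; requesting the
# full packet size avoids short-read and overflow surprises
data = dev.read(0x81, 64, timeout=1000)
print(list(data))  # unsigned bytes: 0xD6 prints as 214 here, not -42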

Arduino Ethernet Byte size problem

I'm using an Arduino (Duemilanove) with the official Ethernet shield to send data to the controller for controlling an LED matrix. I am trying to send some raw 32-bit unsigned int values (Unix timestamps) to the controller by taking the 4 bytes of the 32-bit value on the desktop and sending them to the Arduino as 4 consecutive bytes. However, whenever a byte value is larger than 127, the value returned by the Ethernet client library is 63.
The following is a basic example of what I'm doing on the arduino side of things. Some things have been removed for neatness.
byte buffer[32];
memset(buffer, 0, 32);
int data;
int i = 0;

data = client.read();
while (data != -1 && i < 32)
{
    buffer[i++] = (byte)data;
    data = client.read();
}
So, whenever the input byte is bigger than 127, the variable "data" ends up getting set to 63! At first I thought the problem was further down the line (buffer used to be char instead of byte), but when I print out "data" right after the read, it's still 63.
Any ideas what could be causing this? I know client.read() is supposed to return an int and internally reads data from the socket as uint8_t, which is a full byte and unsigned, so I should be able to at least go to 255...
EDIT: Right you are, Hans. Didn't realize that Encoding.ASCII.GetBytes only supported the first 7 bits and not all 8.
I'm more inclined to suspect the transmit side. Are you positive the transmit side is working correctly? Have you verified with a wireshark capture or some such?
63 is the ASCII code for '?', and that is the relevance of the value: ASCII doesn't have character codes for values over 127, and an ASCII encoder commonly replaces invalid codes with a question mark. That's the default behavior of the .NET Encoding.ASCII encoder, for example.
It isn't exactly clear where that might happen. Definitely not in your snippet; probably on the other end of the wire. Write bytes, not characters.
+1 for Hans Passant and Karl Bielefeldt.
Can you just send the data without encoding? How is the data being sent? TCP/UDP/IP/Ethernet definitely support sending binary data without restriction. If this isn't possible, perhaps converting the data to hex will solve the problem. Base64 will also work (better) but is considerably more work. For small amounts of data, hex is probably the easiest and fastest solution.
+1 again to Karl and Ben for mentioning wireshark. Invaluable for debugging network problems like this.
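To illustrate "write bytes, not characters": here is a Python stand-in for the desktop side (the original sender appears to be .NET) that sends the timestamp as 4 raw bytes instead of ASCII text; the address, port, and byte order are placeholders the Arduino side must agree on:

import socket
import struct
import time

# pack the 32-bit Unix timestamp as 4 raw bytes (big-endian chosen here;
# the Arduino must reassemble the bytes in the same order)
payload = struct.pack('>I', int(time.time()))

with socket.create_connection(('192.168.1.177', 8080)) as sock:  # placeholder address
    sock.sendall(payload)  # raw bytes arrive intact, no '?' substitution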