Fujitsu IR Remote Checksum Calculation - bit-manipulation

I am trying to reverse engineer the Fujitsu AC remote control protocol for a home automation project. I have gotten as far as identifying which bytes correspond to which control information; however, there is a checksum at the end.
I believe the checksum is calculated from three other bytes (temperature, mode and fan speed).
I have used a spreadsheet to try to reverse engineer what operations produce the checksum, and found that for a temperature of "00001010" and any mode/fan speed combination the following algorithm holds true:
Checksum = 392 - (Temperature + Mode + Fan Speed)
Example
392 - (10 + 64 + 128) = 190
392 - (10 + 192 + 128) = 62
392 - (10 + 32 + 128) = 222
However no other temperature (that I have tested) works this way. My current theory is that the temperature has some other operation performed on it first and that whatever this operation is results in the same value for a temperature of "00001010", but not other temperatures.
Raw data:
Temperature, Mode, Fan Speed, Checksum
00000110, 10000000, 10000000, 01110110
00001010, 10000000, 10000000, 01111110
00000010, 10000000, 10000000, 01110001
Full spreadsheet at: This link
I can't work out what operation(s) are being performed on the temperature, or in fact if I am even correct in my assumptions about what the algorithm is.
I'm wondering if there is anyone with more experience with this kind of problem that might be able to shed some light on this?
Extras:
The temperature value starts as the integer temperature, e.g. 21 degrees (00010101), transformed as follows:
1. Reversed to get 10101000
2. Only the first four bits taken: 1010
3. Then expanded to get a value of 00001010
So 00001010 in the raw data above represents a temperature of 21 degrees.
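Those steps can be sketched in Python (my own sketch of the transformation described above, not code taken from the remote):

```python
def encode_temp(temp_c):
    # Step 1: reverse the 8-bit integer temperature, e.g. 21 -> 00010101 -> 10101000
    reversed_bits = int(f"{temp_c:08b}"[::-1], 2)
    # Steps 2-3: keep only the first four bits (the high nibble of the reversed byte)
    return reversed_bits >> 4

print(f"{encode_temp(21):08b}")  # 00001010
```

Applying the same function to 22 degrees gives 00000110, matching the first row of the raw data.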
Original question has been edited as I was originally approaching this incorrectly and assuming my hypothesis was correct

I found the following solution after some more sifting through Google search results.
Thanks to George Dewar on GitHub.
1. Reverse (flip) bytes 8 - 13 (I - N in spreadsheet)
2. Sum those bytes
3. (208 - sum) % 256
4. Reverse (flip) bytes of result
E.g.
Data: 00000110, 10000000, 10000000, 00000000, 00000000, 00000000
1. Reverse:
01100000, 00000001, 00000001, 00000000, 00000000, 00000000
96, 1, 1, 0, 0, 0
2. Sum:
96 + 1 + 1 + 0 + 0 + 0 = 98
3. Calculate:
(208 - 98) % 256 = 110 (dec) or 01101110 (bin)
4. Reverse:
01110110
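The four steps can be sketched in Python as follows (my own sketch of George's algorithm, not his original code):

```python
def reverse_byte(b):
    # Flip the bit order within one byte, e.g. 00000110 -> 01100000
    return int(f"{b:08b}"[::-1], 2)

def checksum(data):
    # Steps 1-2: reverse each byte, then sum the reversed values
    total = sum(reverse_byte(b) for b in data)
    # Steps 3-4: subtract the sum from 208 modulo 256, then reverse the result
    return reverse_byte((208 - total) % 256)

data = [0b00000110, 0b10000000, 0b10000000, 0b00000000, 0b00000000, 0b00000000]
print(f"{checksum(data):08b}")  # 01110110
```

This reproduces the other checksums in the raw data table as well, e.g. temperature 00001010 gives 01111110.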
Answer provided by @george-dewar on GitHub, so a massive thank you to him. I would never have worked that out. Mine only differs in that my remote has fewer options and therefore fewer bytes to reverse and sum; otherwise it works exactly as George has it in his example code.

Related

Pyomo Gurobi solver error- "ERROR: Solver (gurobi) returned non-zero return code (137)"

I have a solution for solving an MIP problem on graphs which works fine and gives the following output when I run it on smaller graphs. I'm using the Gurobi solver with Pyomo.
Problem:
- Name: x73
Lower bound: 192.0
Upper bound: 192.0
Number of objectives: 1
Number of constraints: 10
Number of variables: 37
Number of binary variables: 36
Number of integer variables: 36
Number of continuous variables: 1
Number of nonzeros: 37
Sense: minimize
Solver:
- Status: ok
Return code: 0
Message: Model was solved to optimality (subject to tolerances), and an optimal solution is available.
Termination condition: optimal
Termination message: Model was solved to optimality (subject to tolerances), and an optimal solution is available.
Wall time: 0.03206682205200195
Error rc: 0
Time: 0.09361410140991211
Solution:
- number of solutions: 0
number of solutions displayed: 0
But I am getting the following error while running the code with larger graphs.
ERROR: Solver (gurobi) returned non-zero return code (137)
ERROR: Solver log:
Using license file /opt/shared/gurobi/gurobi.lic
Set parameter TokenServer to value gurobi.lm.udel.edu
Set parameter TSPort to value 40100
Read LP format model from file /tmp/tmpaud9ogrn.pyomo.lp
Reading time = 0.01 seconds
x1101: 56 rows, 551 columns, 551 nonzeros
Changed value of parameter TimeLimit to 600.0
   Prev: inf  Min: 0.0  Max: inf  Default: inf
Gurobi Optimizer version 9.0.1 build v9.0.1rc0 (linux64)
Optimize a model with 56 rows, 551 columns and 551 nonzeros
Model fingerprint: 0xafe0319a
Model has 15400 quadratic objective terms
Variable types: 1 continuous, 550 integer (550 binary)
Coefficient statistics:
  Matrix range     [1e+00, 1e+00]
  Objective range  [0e+00, 0e+00]
  QObjective range [4e+00, 8e+01]
  Bounds range     [1e+00, 1e+00]
  RHS range        [1e+00, 1e+00]
Found heuristic solution: objective 22880.000000
Presolve removed 1 rows and 1 columns
Presolve time: 0.01s
Presolved: 55 rows, 550 columns, 550 nonzeros
Presolved model has 15400 quadratic objective terms
Variable types: 0 continuous, 550 integer (550 binary)
Root simplex log...
Iteration    Objective       Primal Inf.    Dual Inf.      Time
  130920    9.8490000e+02   1.610955e+03   0.000000e+00      5s
  263917    1.0999000e+03   1.710649e+03   0.000000e+00     10s
  397157    1.0999000e+03   2.243077e+03   0.000000e+00     15s
  529512    1.0999000e+03   1.910603e+03   0.000000e+00     20s
  662404    1.0999000e+03   1.584650e+03   0.000000e+00     25s
  791296    1.0999000e+03   1.812443e+03   0.000000e+00     30s
  906473    1.3475000e+03   0.000000e+00   0.000000e+00     34s
Root relaxation: objective 1.347500e+03, 906473 iterations, 34.32 seconds
    Nodes    |    Current Node    |     Objective Bounds      |     Work
 Expl Unexpl |  Obj  Depth IntInf | Incumbent    BestBd   Gap | It/Node Time
H 0 0 1730.0000000 0.00000 100% - 52s
H 0 0 1654.0000000 0.00000 100% - 52s
H 0 0 1578.0000000 0.00000 100% - 52s
0 0 1347.50000 0 137 1578.00000 1347.50000 14.6% - 52s
0 0 1347.50000 0 137 1578.00000 1347.50000 14.6% - 53s
H 0 0 1540.0000000 1347.50000 12.5% - 53s
0 2 1347.50000 0 145 1540.00000 1347.50000 12.5% - 55s
101 118 1396.92351 10 140 1540.00000 1347.50000 12.5% 157 61s
490 591 1416.40484 18 128 1540.00000 1347.50000 12.5% 63.6 65s
2136 2347 1440.09938 42 100 1540.00000 1347.50000 12.5% 42.9 70s
3847 3402 1461.55736 81 80 1540.00000 1347.50000 12.5% 37.0 82s
/opt/shared/gurobi/9.0.1/bin/gurobi.sh: line 17: 23890 Killed
$PYTHONHOME/bin/python3.7 "$#"
Traceback (most recent call last):
File "/home/2925/EdgeColoring/main.py", line 91, in <module>
qubo_coloring, qubo_time = qubo(G, colors, edge_list, solver)
File "/home/2925/EdgeColoring/qubo.py", line 59, in qubo
result = solver.solve(model)
File "/home/2925/.conda/envs/qubo/lib/python3.9/site-packages/pyomo/opt/base/solvers.py", line 596, in solve
raise ApplicationError(
pyomo.common.errors.ApplicationError: Solver (gurobi) did not exit normally
Setting TimeLimit to up to 2 minutes stops the solve early without any error, but doesn't always give an optimal solution for larger graphs. Memory and processing power are not an issue here. I need to run the code without interruption for at least 10 minutes, if not hours.

T 103 - Negative Marking

Raju is giving his JEE Main exam. The exam has Q questions and Raju needs S marks to pass. Giving the correct answer to a question awards the student with 4 marks whereas giving the incorrect answer to a question awards the student with negative 3 (-3) marks. If a student chooses to not answer a question at all, he is awarded 0 marks.
Write a program to calculate the minimum accuracy that Raju will need in order to pass the exam.
Input
Input consists of multiple test cases.
Each test case consists of two integers Q and S
Output
Print the minimum accuracy up to 2 decimal places
Print -1 if it is impossible to pass the exam
Sample Input 0
2
10 40
10 33
Sample Output 0
100.00
90.00
Think of this as a simultaneous equation problem.
4x - 3y = S
x + y = Q
For the second scenario, your equations will be :
4x - 3y = 33
x + y = 10
After solving, 'x' will be the minimum number of questions he has to answer correctly. Calculate what percentage of 'Q' 'x' is.
That's the concept; think about how you would approach it programmatically :)
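A minimal Python sketch of that approach (assuming, as the equations imply, that Raju attempts all Q questions):

```python
import math

def min_accuracy(q, s):
    # Solve 4x - 3y = s together with x + y = q  =>  x = (s + 3q) / 7
    x = math.ceil((s + 3 * q) / 7)   # need a whole number of correct answers
    if x > q:
        return -1                    # impossible even with every answer correct
    return 100 * x / q

print(f"{min_accuracy(10, 40):.2f}")  # 100.00
print(f"{min_accuracy(10, 33):.2f}")  # 90.00
```

Both sample cases from the problem statement are reproduced; the -1 case triggers when even answering all Q questions correctly yields fewer than S marks.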

Gurobi: used model.write but cannot find the file

I am using Gurobi with C++ and want to save the LP as file.lp. Therefore, I used
model.write("model.lp");
model.optimize();
This is my output, and no error occurs:
Optimize a model with 105 rows, 58 columns and 186 nonzeros
Coefficient statistics:
Matrix range [1e+00, 1e+00]
Objective range [1e+00, 1e+00]
Bounds range [0e+00, 0e+00]
RHS range [1e+00, 6e+00]
Presolve removed 105 rows and 58 columns
Presolve time: 0.00s
Presolve: All rows and columns removed
Iteration Objective Primal Inf. Dual Inf. Time
0 -0.0000000e+00 0.000000e+00 0.000000e+00 0s
Solved in 0 iterations and 0.00 seconds
Optimal objective -0.000000000e+00
obj: -0
Status2
Process finished with exit code 0
So there is probably a mistake in my LP, since the optimal solution should not be 0. This is why I want to have a look at the model.lp file. However, I cannot find it. I searched my whole computer. Am I missing anything?

Writing both characters and digits in an array

I have a Fortran code which reads a txt file with separate lines of characters and digits and then writes them into a 1D array with 20 elements.
This code is not compatible with the Fortran 77 compiler Force 2.0.9. My question is: how can we apply the aforementioned procedure using a Fortran 77 compiler, i.e. defining a 1D array and then writing the txt file line by line into elements of the array?
Thank you in advance.
The txt file follows:
Case 1:
10 0 1 2 0
1.104 1.008 0.6 5.0
25 125.0 175.0 0.7 1000.0
0.60
1 5
Advanced Case
15 53 0 10 0 1 0 0 1 0 0 0 0
0 0 0 0
0 0 1500.0 0 0 .03
0 0.001 0
0.1 0 0.125 0.08 0.46
0.1 5.0 0.04
@Jason:
I am a beginner and still learning Fortran. I guess Force 2 uses g77.
The following is the corresponding part of the original code. The Force 2 editor returns an empty txt file as a result.
DIMENSION CARD(20)
CHARACTER*64 FILENAME
DATA XHEND / 4HEND /
OPEN(UNIT=3,FILE='CON')
OPEN(UNIT=4,FILE='CON')
OPEN(UNIT=7,STATUS='SCRATCH')
WRITE(3,9000) 'PLEASE ENTER THE INPUT FILE NAME : '
9000 FORMAT (A)
READ(4,9000) FILENAME
OPEN(UNIT=5,FILE=FILENAME,STATUS='OLD')
WRITE(3,9000) 'PLEASE ENTER THE OUTPUT FILE NAME : '
READ(4,9000) FILENAME
OPEN(UNIT=6,FILE=FILENAME,STATUS='NEW')
FILENAME = '...'
IR = 7
IW = 6
IP = 15
5 REWIND IR
I = 0
2 READ (5,7204,END=10000) CARD
IF (I .EQ. 0 ) WRITE (IW,7000)
7000 FORMAT (1H1 / 10X,15HINPUT DECK ECHO / 10X,15(1H-))
I= I + 1
WRITE (IW,9204) I,CARD
IF (CARD(1) .EQ. XHEND ) GO TO 7020
WRITE (IR,7204) CARD
7204 FORMAT (20A4)
9204 FORMAT (1X,I4,2X,20A4)
GO TO 2
7020 REWIND IR
It looks like CARD is being used to hold 20 4-character strings. I don't see a declaration as a character variable, only as an array, so perhaps, in extremely old FORTRAN style, a non-character variable is being used to hold characters. You are using a 20A4 format, so the values have to be positioned in the file precisely as 20 groups of 4 characters. You have to add blanks so that they are aligned into groups of 4 columns.
If you want to read numbers it would be much easier to read them into a numeric type and use list-directed IO:
real values (20)
read (5, *) values
Then you wouldn't have to worry about precise positioning of the values in the file.
This is really archaic FORTRAN ... even pre-FORTRAN-77 in style. I can't remember the last time that I saw Hollerith (H) formats! Where are you learning this from?
Edit: While I like Fortran for many programming tasks, I wouldn't use FORTRAN 66! Computers are supposed to make things easier ... there is no reason to have to count characters. Instead of
7000 FORMAT (1H1 / 10X,15HINPUT DECK ECHO / 10X,15(1H-))
You can use
7000 FORMAT ( / 10X, "INPUT DECK ECHO" / 10X, 15("-") )
I can think of only two reasons to use a Hollerith code: not bothering to change legacy source code (it is remarkable that a current Fortran compiler can process a feature that was obsolete 30 years ago! Fortran source code never dies!), or studying the history of computing languages. The name honors a great computing pioneer, whose invention accomplished the 1890 US Census in one year, when the 1880 Census took eight years: http://en.wikipedia.org/wiki/Herman_Hollerith
I much doubt that you will see the "1" in the first column performing "carriage control" today. I had to look up that "1" was the code for page eject. You are much more likely to see it in your output. See Are Fortran control characters (carriage control) still implemented in compilers?

Decode date format?

Given some short integers and the dates they represent, is there any systematic method to determine how they're stored in this format and decode other dates? The data stored is from another piece of software.
I initially thought that the days were represented by one of the bytes, since the first byte for May 1 minus the first byte for Feb 11 did equal the correct number of days (79 for year 2011). But it can't be that simple, not only because 8 bits can only store 256 days, but also because dates before 2000 store the year only, with both bytes.
Here's what I'm working with, but take the column headings with a grain of salt.
DDDDDDDD YYYYYYYY DD-MM-YY
01011010 10001010 1955
10110010 10010001 1960
11000000 10010001 1961
11100011 11000011 1996
01010001 11000110 1997
00001101 11001000 1999
10000000 11001001 10-02-00
11010101 11001010 16-01-01
10101010 11001101 11-01-03
00000101 11010000 05-09-04
10011101 11010101 07-08-08
11010000 11010101 27-09-08
00010000 11010110 30-11-08
00110100 11010110 05-01-09
11111110 11010110 26-07-09
10011101 11010111 01-01-10
10110111 11011000 10-10-10
00110011 11011001 11-02-11
00111010 11011001 18-02-11
10000010 11011001 01-05-11
10000101 11011001 04-05-11
01101100 11100110 19-05-20
I also see that 30-11-08 has the same second byte as 05-01-09, and conversely the two dates in 2010 have different values in the second byte.
EDIT: Thanks to the answers and some research, I see that the epoch is November 17, 1858. This is a standard format called the Modified Julian Day.
It looks like it's days since some point in the past ~1858 (I haven't worked out all the leap year magic), but the day of year is only displayed in your existing app for years >= 2000. The byte you marked year is the high order byte while the "day" byte is the low order byte.
Isn't it just a 16-bit days-since-epoch value? Feb 10 2000 is 51584. Feb 18 2011 is 55610. There are 4026 days between -- (11 * 365) + 3 leap days + the 8 days difference in the day of month. The start of the epoch would appear to be roughly 1860. Or, more likely, the high-order 1 bit turns on Jan 1, 1950.
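Based on that, here is a decoding sketch in Python, assuming the two bytes form a 16-bit Modified Julian Day number with the "year" column as the high-order byte (MJD epoch: 17 November 1858):

```python
from datetime import date, timedelta

MJD_EPOCH = date(1858, 11, 17)  # Modified Julian Day 0

def decode_date(low_byte, high_byte):
    # Combine the two bytes: "year" column is high-order, "day" column is low-order
    mjd = (high_byte << 8) | low_byte
    return MJD_EPOCH + timedelta(days=mjd)

# First full date in the table: bytes 10000000 11001001 -> 10-02-00
print(decode_date(0b10000000, 0b11001001))  # 2000-02-10
```

The other rows with full dates check out the same way, e.g. 00110011 11011001 decodes to 2011-02-11.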