Help with substitution model (SICP), using Clojure

I am studying the SICP book, and I have a question about the substitution model for a procedure:
(defn A [x y]
  (cond (= y 0) 0
        (= x 0) (* 2 y)
        (= y 1) 2
        :else (A (- x 1) (A x (- y 1)))))
This procedure is part of exercise 1.10.
If I run the function in the REPL with the arguments (A 1 10), the result is 1024. I decided to verify the result using the substitution model, but I got 2048.
This is the substitution model that I wrote. Something is wrong, but I don't know what.
(A 1 10)
(A (- 1 1) (A 1 (- 10 1))))
(A 0 (A 1 9)))
(A 0 (A (- 1 1) (A 1 (- 9 1)))))
(A 0 (A 0 (A 1 8))))
(A 0 (A 0 (A (- 1 1) (A 1 (- 8 1))))))
(A 0 (A 0 (A 0 (A 1 7)))))
(A 0 (A 0 (A 0 (A (- 1 1) (A 1 (- 7 1)))))))
(A 0 (A 0 (A 0 (A 0 (A 1 6))))))
(A 0 (A 0 (A 0 (A 0 (A (-1 1) (A 1 (- 6 1))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 5))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A (-1 1) (A 1 (- 5 1))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 4)))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A (-1 1) (A 1 (- 4 1)))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 3))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A (-1 1) (A 1 (-3 1))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 2)))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A (-1 1) (A 1 (- 2 1))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 1))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (* 2 1))))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 2)))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (* 2 2)))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 4))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (* 2 4))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 8)))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (* 2 8)))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 0 16))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (* 2 16)))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 32))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (* 2 32))))))
(A 0 (A 0 (A 0 (A 0 (A 0 64)))))
(A 0 (A 0 (A 0 (A 0 (* 2 64)))))
(A 0 (A 0 (A 0 (A 0 128))))
(A 0 (A 0 (A 0 (* 2 128))))
(A 0 (A 0 (A 0 256)))
(A 0 (A 0 (* 2 256)))
(A 0 (A 0 512))
(A 0 (* 2 512))
(A 0 1024)
2048 ????
Can anyone point out what I did wrong?
I am sorry for the length of the question.

Consider these lines:
(A 0 (A 0 (A 0 (A 1 7)))))
(A 0 (A 0 (A 0 (A (- 1 1) (A 1 (- 7 1)))))))
(A 0 (A 0 (A 0 (A 0 (A 1 6))))))
(A 0 (A 0 (A 0 (A 0 (A (-1 1) (A 1 (- 6 1))))))))
(A 0 (A 0 (A 0 (A 0 (A 0 (A 0 (A 1 5))))))))
Strip off the redundant outer layers:
(A 1 7))
(A (- 1 1) (A 1 (- 7 1))))
(A 0 (A 1 6)))
(A 0 (A (-1 1) (A 1 (- 6 1)))))
(A 0 (A 0 (A 0 (A 1 5)))))
Somewhere in here you've ended up with mismatched parentheses, but that's not important. Note that in going from (A 1 7) to (A 1 6), a single outer layer of (A 0 _) is created, as expected. But in going from (A 1 6) to (A 1 5), you've got two new layers of (A 0 _). Each (A 0 _) layer doubles the result, so the extra one is why your answer is off by a factor of 2.
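As a sanity check on the expected value, the procedure can be translated line-for-line into Python (a quick verification sketch; the original is Clojure, and A(1, n) comes out as 2^n):

```python
def A(x, y):
    # direct translation of the Clojure procedure from SICP exercise 1.10
    if y == 0:
        return 0
    if x == 0:
        return 2 * y
    if y == 1:
        return 2
    return A(x - 1, A(x, y - 1))

print(A(1, 10))  # 1024: each step of reducing (A 1 n) wraps exactly one (A 0 _), doubling once
```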

Related

Why so many 0s in the answer?

(take 100 (iterate rand-int 300))
evaluates differently, of course, each time... but usually with a ton of zeros. The result always leads with a 300. For example:
(300 93 59 58 25 14 9 4 2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0)
I would have expected 100 random integers between 0 and 300.
What am I not understanding?
See docs for iterate:
Returns a lazy sequence of x, (f x), (f (f x)) etc. f must be free of side-effects
So that's the reason your sequence always starts with 300.
And why are there so many zeros? When you use iterate like this, rand-int takes the previous result and uses it as the new (exclusive) upper limit for the next random number. So your results can look like this:
300
=> 300
(rand-int *1)
=> 174
(rand-int *1)
=> 124
(rand-int *1)
=> 29
(rand-int *1)
=> 17
(rand-int *1)
=> 16
(rand-int *1)
=> 7
...
You can check for yourself that each positive value is followed by a strictly smaller one, so the sequence quickly reaches zero and stays there.
If you really want to get 100 random integers between 0 and 300, use repeatedly instead:
(repeatedly 100 #(rand-int 300))
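The collapsing behaviour itself is easy to reproduce outside Clojure. In this Python sketch, random.randrange(n) plays the role of rand-int (uniform on [0, n)); the zero guard is my addition, since randrange(0) raises rather than returning 0:

```python
import random

def iterate(f, x, n):
    """First n values of x, f(x), f(f(x)), ... like Clojure's (take n (iterate f x))."""
    out = [x]
    for _ in range(n - 1):
        out.append(f(out[-1]))
    return out

# rand-int n is uniform on [0, n); clamp at zero because randrange(0) raises
shrink = lambda n: random.randrange(n) if n > 0 else 0

seq = iterate(shrink, 300, 100)
# seq[0] is always 300, and every later value is strictly smaller than
# its predecessor until the sequence hits 0 and stays there
```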

Efficient way of finding all feasible solutions to a set of Boolean constraints

I'm solving the following problem with cp_model from ortools.sat.python in Python, but I'm looking for a more efficient solver.
Problem:
Let's have n boolean variables A, B, C, ...
The goal is to find all feasible combinations of boolean values that satisfy a set of rules. There are 3 types of rules:
One and only one of (A, B) may be true. I'm applying this as:
model.AddBoolXOr([A,B])
model.Add(A == False).OnlyEnforceIf(B)
model.Add(B == False).OnlyEnforceIf(A)
At most one of (C, D, E) may be true. I'm applying this as:
model.Add(C == False).OnlyEnforceIf(D)
model.Add(C == False).OnlyEnforceIf(E)
model.Add(D == False).OnlyEnforceIf(C)
model.Add(D == False).OnlyEnforceIf(E)
model.Add(E == False).OnlyEnforceIf(C)
model.Add(E == False).OnlyEnforceIf(D)
F is only possible when (A and ~C) or (B and (C or E)). First I'm converting this to CNF: (A or B) and (B or ~C) and (A or C or E). Then I insert that to the model:
model.Add(F == False).OnlyEnforceIf([A.Not(), B.Not()])
model.Add(F == False).OnlyEnforceIf([B.Not(), C])
model.Add(F == False).OnlyEnforceIf([A.Not(), C.Not(), E.Not()])
The result for above looks like:
1 0 0 0 0 0
1 0 1 0 0 0
1 0 0 1 0 0
1 0 0 0 1 0
1 0 0 0 1 1
1 0 0 0 0 1
1 0 0 1 0 1
0 1 0 0 1 1
0 1 0 0 1 0
0 1 0 0 0 0
0 1 0 1 0 0
0 1 1 0 0 0
0 1 1 0 0 1
Since my problem is big, I'm looking for a more efficient approach. I found MiniSat, but I'm not sure whether it is possible to express the above constraints in DIMACS form and make MiniSat enumerate all feasible solutions (by default it finds the first solution and stops).
Is there any other solver capable of solving such a problem?
What a convoluted way of writing the model.
1)
model.Add(a + b == 1)
or
model.AddBoolOr([a, b])
model.AddImplication(a, b.Not())
model.AddImplication(b, a.Not())
2)
model.Add(c + d + e <= 1)
or
model.AddImplication(c, d.Not())
model.AddImplication(c, e.Not())
model.AddImplication(d, c.Not())
model.AddImplication(d, e.Not())
model.AddImplication(e, c.Not())
model.AddImplication(e, d.Not())
3)
Create one bool var for each conjunction, e.g.
(A and ~C) <=> G
model.AddImplication(G, A)
model.AddImplication(G, C.Not())
model.AddBoolOr([A.Not(), C, G.Not()])
then F is only possible if x1 or x2 or x3
model.AddBoolOr([F.Not(), x1, x2, x3])
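For six variables, the rule set (and the 13 rows listed in the question) can also be sanity-checked by brute force. This plain-Python sketch is independent of CP-SAT and only restates the three rules:

```python
from itertools import product

def feasible(a, b, c, d, e, f):
    if a + b != 1:                 # exactly one of A, B
        return False
    if c + d + e > 1:              # at most one of C, D, E
        return False
    # F is only possible when (A and not C) or (B and (C or E))
    if f and not ((a and not c) or (b and (c or e))):
        return False
    return True

solutions = [bits for bits in product((0, 1), repeat=6) if feasible(*bits)]
print(len(solutions))  # 13, matching the question's listing
```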

How to code: "If (A is True) and (B is True when C is True)"

How do I code this if statement (let's say in c++):
if (condition1 == true and condition2 == true (when condition3 == true))
{
// condition2 need to be true only when condition3 is true
}
When figuring out how to express any boolean predicate, it helps to build a truth table that lists the possible values of each boolean variable in separate columns and the expected output in its own column. The rows count upward as though each column were a binary digit in a base-2 number.
Like so:
A B C Output
---------------------
0 0 0 ?
0 0 1 ?
0 1 0 ?
0 1 1 ?
1 0 0 ?
1 0 1 ?
1 1 0 ?
1 1 1 ?
Based on your question's title (and not the example pseudocode that you posted), I assume you want this output:
A B C Output
---------------------
0 0 0 0
0 0 1 0
0 1 0 0
0 1 1 0
1 0 0 0
1 0 1 0
1 1 0 0
1 1 1 1
...which is just a trivial AND between all 3 values:
bool a = ...
bool b = ...
bool c = ...
if( a && b && c )
{
do_the_thing();
}

Recursive n-knights problem

I am trying to solve an n-knights problem on an 8x8 chessboard recursively. The n-knights problem is a variation of the n-queens problem, where the queens are replaced by knights. No piece may attack another piece.
My code so far: http://pastebin.com/TVza3jVU.
The input consists of the number of knights that have to be placed on the chessboard. My code prints a lot of correct boards.
Output looks like this (example):
0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0 2
0 0 0 0 0 0 0 0 3
0 0 0 0 0 0 0 0 4
0 0 0 0 0 0 0 0 5
0 0 0 0 0 0 1 0 6
1 1 0 1 0 1 0 0 7
0 1 2 3 4 5 6 7
nrBoards = 49
A '1' stands for a knight.
My problem is as follows:
0 1 1 1 1 1 0 0 0
0 0 0 0 0 0 0 0 1
0 0 0 0 0 0 0 0 2
0 0 0 0 0 0 0 0 3
0 0 0 0 0 0 0 0 4
0 0 0 0 0 0 0 0 5
0 0 0 0 0 0 0 0 6
0 0 0 0 0 0 0 0 7
0 1 2 3 4 5 6 7
This is the last board my script will print. It will never put a knight on [0][0], and I cannot figure out why. It also skips some configurations. Is there something wrong with my recursion?
From the code you have linked, it seems that one problem is in your checkPlace() function: you do not check whether the indices x+2, x-2, y+2, y-2, etc. fall inside the interval 0 to 7, so you read outside the board.
int checkPlace(int y, int x, chessboard boards) {
if (boards.board[y - 2][x - 1] == 1) {
return 0;
}
if (boards.board[y - 1][x - 2] == 1) {
return 0;
}
if (boards.board[y - 2][x + 1] == 1) {
return 0;
}
if (boards.board[y - 1][x + 2] == 1) {
return 0;
}
if (boards.board[y + 1][x + 2] == 1) {
return 0;
}
if (boards.board[y + 1][x - 2] == 1) {
return 0;
}
if (boards.board[y + 2][x - 1] == 1) {
return 0;
}
if (boards.board[y + 2][x + 1] == 1) {
return 0;
}
return 1;
}
Instead:
if ( x-1 >= 0 && y-2 >= 0 && boards.board[y - 2][x - 1] == 1) {
Similarly for others.
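A compact version of the same fix, sketched in Python for brevity (check_place and the 8x8 board layout mirror the linked C code; a 1 marks a knight):

```python
# all eight (dy, dx) knight offsets
KNIGHT_MOVES = [(-2, -1), (-2, 1), (-1, -2), (-1, 2),
                (1, -2), (1, 2), (2, -1), (2, 1)]

def check_place(board, y, x):
    """True if no already-placed knight attacks square (y, x)."""
    for dy, dx in KNIGHT_MOVES:
        ny, nx = y + dy, x + dx
        # guard the indices first, then look at the square
        if 0 <= ny < 8 and 0 <= nx < 8 and board[ny][nx] == 1:
            return False
    return True
```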

C++ map matrix column into rank - bit manipulation

I've got (binary) matrices represented by uint64_t (since C++11), and I'd like to be able to efficiently map any column onto the first rank. For example:
0 1 0 0 0 0 0 0
0 1 0 0 0 0 0 0
0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0
0 1 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 1 0 0 0 0 0 0
uint64_t matrice = 0x4040400040400040uLL;
uint64_t matrice_2 = map(matrice, ColumnEnum::Column2);
1 1 1 0 1 1 0 1
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0
matrice_2 contains 0xED00000000000000uLL;
Great question. I really enjoyed the hacking. Here is my solution:
uint64_t map(uint64_t x, int column)
{
x = (x >> (7 - column)) & 0x0101010101010101uLL;
x = (x | (x >> 7)) & 0x00FF00FF00FF00FFuLL;
x = (x | (x >> 14)) & 0x000000FF000000FFuLL;
x = (x | (x >> 28)) & 0x00000000000000FFuLL;
return x << 56;
}
A working example can be found at ideone, where the call is really map(matrice, ColumnEnum::Column2).
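The same trick is easy to experiment with in Python, where integers are unbounded, so the sketch below masks back to 64 bits; map_column mirrors the function above with a zero-based column index (the matrix's top row sits in the most significant byte):

```python
M64 = (1 << 64) - 1

def map_column(x, column):
    # pull the chosen column down to bit 0 of every byte
    x = (x >> (7 - column)) & 0x0101010101010101
    # fold byte groups together, doubling the packed width each step
    x = (x | (x >> 7)) & 0x00FF00FF00FF00FF
    x = (x | (x >> 14)) & 0x000000FF000000FF
    x = (x | (x >> 28)) & 0x00000000000000FF
    return (x << 56) & M64  # place the packed byte in the top rank

print(hex(map_column(0x4040400040400040, 1)))  # 0xed00000000000000
```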
A nice little riddle. Here is a reasonably readable version:
matrice = (matrice >> (8ull - column)) & 0x0101010101010101ull;
uint64_t result(( ((matrice >> 0ull) & 0x01ull)
| ((matrice >> 7ull) & 0x02ull)
| ((matrice >> 14ull) & 0x04ull)
| ((matrice >> 21ull) & 0x08ull)
| ((matrice >> 28ull) & 0x10ull)
| ((matrice >> 35ull) & 0x20ull)
| ((matrice >> 42ull) & 0x40ull)
| ((matrice >> 49ull) & 0x80ull)) << 56ull);
First define a bitmask for every column:
uint64_t columns[8] = {
0x8080808080808080uLL,
0x4040404040404040uLL,
//...
0x0101010101010101uLL
};
By applying the column bitmask to your matrice you get only that column:
uint64_t col1 = matrice & columns[1]; // second column - rest is empty
By shifting you can reduce it to the first-column case:
uint64_t col0 = (col1 << 1); // shifted into the first column
// ^ this number is just the zero-based index of the column...
Now the first bit is in the right place - just set the next 7 bits:
col0 |= (col0 & (1ULL << 55)) << 7; // second bit... (1ULL, since 1 << 55 overflows a 32-bit int)
// ....
Or just use std::bitset<64>, as I would do...