Einstein-convention-like implicit summation with SymPy

Given some sympy tensor
A = sympy.tensor.array.Array( <3rd-rank data consisting of sympy expressions> )
and matrices T and T1 = T.inv() representing a basis transform, is it somehow possible to use a notation like
B[i,j,k] = T[i,a] * A[a,b,c] * T1[b,j] * T1[c,k]
to calculate the transformed tensor?
It seems that it is, in principle, possible in sympy to use an Einstein summation convention, but I am running into multiple problems with it:
A code snippet I found,
from sympy import symbols, IndexedBase, EinsteinSum
TX, A, x = symbols('TX A x', cls=IndexedBase)
i, j = symbols('i j')
ein_sum = EinsteinSum(A[i, j] * x[j])
doesn't work, because EinsteinSum doesn't seem to exist anymore.
When trying to use expressions like
var("i")
Sum(A[i,i,i], (i, 1, 3))
I get a TypeError (apparently Array doesn't allow symbolic indices).

Some features have recently been added to the development version of SymPy; they will be available in the next release.
NOTE: EinsteinSum has never existed. There was just a proposal to add it, IIRC.
Let's suppose you have a rank-3 array A given by
In [13]: A
Out[13]:
⎡⎡1 0 0 0 ⎤ ⎡0 0 0 1⎤ ⎡0 0 0 -ⅈ⎤ ⎡0 0 1 0 ⎤⎤
⎢⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎥
⎢⎢0 1 0 0 ⎥ ⎢0 0 1 0⎥ ⎢0 0 ⅈ 0 ⎥ ⎢0 0 0 -1⎥⎥
⎢⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎥
⎢⎢0 0 -1 0 ⎥ ⎢0 -1 0 0⎥ ⎢0 ⅈ 0 0 ⎥ ⎢-1 0 0 0 ⎥⎥
⎢⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥⎥
⎣⎣0 0 0 -1⎦ ⎣-1 0 0 0⎦ ⎣-ⅈ 0 0 0 ⎦ ⎣0 1 0 0 ⎦⎦
and a simple matrix T given by:
In [15]: T
Out[15]:
⎡2 0 0 0⎤
⎢ ⎥
⎢0 2 0 0⎥
⎢ ⎥
⎢0 0 2 0⎥
⎢ ⎥
⎣0 0 0 2⎦
So that:
In [17]: T.inv()
Out[17]:
⎡1/2 0 0 0 ⎤
⎢ ⎥
⎢ 0 1/2 0 0 ⎥
⎢ ⎥
⎢ 0 0 1/2 0 ⎥
⎢ ⎥
⎣ 0 0 0 1/2⎦
I suggest converting everything to Array, as I fear that Matrix and Array objects won't interoperate that well.
Symbolic indices are now supported:
In [28]: T[i, j]
Out[28]:
⎡2 0 0 0⎤
⎢ ⎥
⎢0 2 0 0⎥
⎢ ⎥[i, j]
⎢0 0 2 0⎥
⎢ ⎥
⎣0 0 0 2⎦
As soon as the values i and j are replaced with numbers, the tensor expression will be evaluated.
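A minimal sketch of this behaviour (a toy 2x2 array of my own choosing, only for illustration):

```python
from sympy import Array, symbols

i, j = symbols('i j')
T = Array([[2, 0], [0, 2]])   # toy stand-in for the matrix above

expr = T[i, j]                # stays unevaluated while i, j are symbolic
expr.subs({i: 0, j: 0})       # evaluates once the indices are numbers
```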
If you want to operate on tensor products of arrays, use the tensorproduct and tensorcontraction functions. To emulate the matrix multiplication on Array objects:
In [29]: tensorcontraction(tensorproduct(T, T1), (1, 2))
Out[29]:
⎡1 0 0 0⎤
⎢ ⎥
⎢0 1 0 0⎥
⎢ ⎥
⎢0 0 1 0⎥
⎢ ⎥
⎣0 0 0 1⎦
NOTE: the latest SymPy release has a bug in tensorcontraction when contracting over many axes at the same time. This should be fixed in the development branch.
The expression Sum(A[i,i,i], (i, 0, 3)) works (NOTE: indices start at 0, not 1, in Python).
The expression Sum(T[i, j]*T1[j, k], (j, 0, 3)) does not give a nice result, as SymPy does not understand what to simplify. Compare:
In [38]: Sum(T[i, j]*T1[j, k], (j, 0, 3))
Out[38]:
3
______
╲
╲ ⎡1/2 0 0 0 ⎤ ⎡2 0 0 0⎤
╲ ⎢ ⎥ ⎢ ⎥
╲ ⎢ 0 1/2 0 0 ⎥ ⎢0 2 0 0⎥
╲ ⎢ ⎥[j, k]⋅⎢ ⎥[i, j]
╱ ⎢ 0 0 1/2 0 ⎥ ⎢0 0 2 0⎥
╱ ⎢ ⎥ ⎢ ⎥
╱ ⎣ 0 0 0 1/2⎦ ⎣0 0 0 2⎦
╱
╱
‾‾‾‾‾‾
j = 0
If you evaluate it, you get:
In [39]: Sum(T[i, j]*T1[j, k], (j, 0, 3)).doit()
Out[39]:
⎡1/2 0 0 0 ⎤ ⎡2 0 0 0⎤ ⎡1/2 0 0 0 ⎤ ⎡2 0 0 0⎤ ⎡1/2 0 0 0 ⎤ ⎡2 0 0 0⎤
⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎢ 0 1/2 0 0 ⎥ ⎢0 2 0 0⎥ ⎢ 0 1/2 0 0 ⎥ ⎢0 2 0 0⎥ ⎢ 0 1/2 0 0 ⎥ ⎢0 2 0 0⎥
⎢ ⎥[0, k]⋅⎢ ⎥[i, 0] + ⎢ ⎥[1, k]⋅⎢ ⎥[i, 1] + ⎢ ⎥[2, k]⋅⎢ ⎥[i,
⎢ 0 0 1/2 0 ⎥ ⎢0 0 2 0⎥ ⎢ 0 0 1/2 0 ⎥ ⎢0 0 2 0⎥ ⎢ 0 0 1/2 0 ⎥ ⎢0 0 2 0⎥
⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥ ⎢ ⎥
⎣ 0 0 0 1/2⎦ ⎣0 0 0 2⎦ ⎣ 0 0 0 1/2⎦ ⎣0 0 0 2⎦ ⎣ 0 0 0 1/2⎦ ⎣0 0 0 2⎦
⎡1/2 0 0 0 ⎤ ⎡2 0 0 0⎤
⎢ ⎥ ⎢ ⎥
⎢ 0 1/2 0 0 ⎥ ⎢0 2 0 0⎥
2] + ⎢ ⎥[3, k]⋅⎢ ⎥[i, 3]
⎢ 0 0 1/2 0 ⎥ ⎢0 0 2 0⎥
⎢ ⎥ ⎢ ⎥
⎣ 0 0 0 1/2⎦ ⎣0 0 0 2⎦
which is the correct result, but the representation is not nice-looking: SymPy fails to recognize the simplifications that could be performed.
To get back a matrix in a nice form, you could use:
In [45]: Matrix([[Sum(T[i, j]*T1[j, k], (j, 0, 3)).doit().subs({i: ni, k: nk}) for nk in range(4)] for ni in range(4)])
Out[45]:
⎡1 0 0 0⎤
⎢ ⎥
⎢0 1 0 0⎥
⎢ ⎥
⎢0 0 1 0⎥
⎢ ⎥
⎣0 0 0 1⎦
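Putting the pieces together, the transformation B[i,j,k] = T[i,a] * A[a,b,c] * T1[b,j] * T1[c,k] from the question can be expressed with tensorproduct plus tensorcontraction. A sketch with a toy 2x2x2 array of my own choosing (the axis bookkeeping in the comment is the part that needs care):

```python
from sympy import Array, eye, tensorproduct, tensorcontraction

# Toy third-rank array and a diagonal transform (example data, not the
# array shown in the answer above).
A = Array([[[1, 0], [0, 1]], [[0, 1], [1, 0]]])
T = Array(2 * eye(2))
T1 = Array(eye(2) / 2)   # T.inv() for this diagonal T

# Axes of tensorproduct(T, A, T1, T1):
#   T: 0=i, 1=a   A: 2=a, 3=b, 4=c   T1: 5=b, 6=j   T1: 7=c, 8=k
# Contract a over (1,2), b over (3,5), c over (4,7); axes 0, 6, 8
# survive in order and become (i, j, k).
B = tensorcontraction(tensorproduct(T, A, T1, T1), (1, 2), (3, 5), (4, 7))
```

For this diagonal T the result is simply A scaled by 2 * (1/2) * (1/2) = 1/2, which makes the bookkeeping easy to check by hand.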


Ultra fast lookup in small sized container of 64-bit integer values using dynamic perfect hashing

I have input keys that can cheaply be converted to a uint64_t (i.e., the input contains at most 64 bits).
Each unique key (not yet in the map) will be assigned some data (a pointer to an object), much like a std::map<uint64_t, Object*>.
Inserting a new key is not time-critical, because there will only be a small number of total keys (fewer than 100), which are never removed again.
A typical implementation would use a std::vector (because of the small number of elements) and either scan over all elements or use a binary search (e.g. boost::flat_map); but this is not fast enough for me. Once all elements have been inserted, the map is static, and it should be possible to find the pointer that belongs to a given key in a mere few clock cycles.
What I am thinking of is determining a perfect hash every time a new key is inserted and then use this hash (function) to find back the pointers.
This requires two things:
An algorithm to find a cheap hash function that converts a smallish list of given 64-bit values to 8, 9, 10 (or how many is required) bits with no collisions (aka, a perfect hash function), so that the latter can be used directly in a lookup table (that is 256, 512, 1024... in size).
A method to dynamically create such a hash function; where I have to admit that I am not willing to run an external compiler and load the new hash function dynamically ;-).
PART 3
I finished my implementation of UltraHash. The result is better than I had thought possible, and indeed much, much better than the "multiplication hash" approach.
Conversion of a 64-bit key to a lookup-table index takes 67 clock cycles (18 ns).
For 100 keys, the lookup-table index isn't 10 bits, or 9, or 8, but 7 bits! And if you have fewer than 64 keys, chances are you'll even get 6 bits. For 200 keys, the returned index is still only 8 bits!
Initializing the UltraHash object with the keys doesn't take 0.3 seconds either; it takes 1 to 3 ms!
The amount of code involved is too much to paste here (~800 lines), although a lot of those lines are comments: I explain in excruciating detail what the code is doing and how the algorithm works. If anyone is interested and wants it included here, let me know and I'll put some more time into doing that.
The main idea is as follows:
The n keys are 64 bits each, so we can view the whole as an n x 64 matrix. Each of the 64 columns is examined for randomness (extremely rudimentarily), after which the columns are sorted by how likely they are to contribute to an even partitioning of the keys into sets of 32 to 64 keys. Then a combination of columns is picked by brute force, partitioning the keys into such sets, and for each set we attempt to find a perfect hash. Upon failure, the number of columns is increased.
Finding the perfect hash for a set of 32 to 64 keys is done by means of linear algebra over GF(2), which involves Gaussian elimination etc.
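The linear-independence test that underlies this can be sketched in a few lines. A simplified Python illustration of Gaussian elimination over GF(2) on integer bit-vectors (not the actual C++ code):

```python
def gf2_rank(keys):
    """Rank over GF(2) of integer keys viewed as bit-vectors."""
    pivots = {}   # leading-bit position -> previously accepted (reduced) key
    rank = 0
    for row in keys:
        while row:
            msb = row.bit_length() - 1
            if msb in pivots:
                row ^= pivots[msb]   # eliminate the leading bit (XOR = subtraction)
            else:                    # new leading bit: row is independent
                pivots[msb] = row
                rank += 1
                break
    return rank

gf2_rank([27, 23, 15])   # the three 5-bit example keys used further below
```

A key that reduces to zero is linearly dependent on the keys already accepted; that is exactly the condition used to close one set and start the next.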
You can find the code here and the accompanying header here.
Since the code is part of my utils git submodule, it uses other files from that library, as well as debugging code from cwds, but it should be easy to make it work standalone. The project ai-utils-testsuite is a configurable and compilable project that contains benchmark and test code.
Below follows the comments that precede one of the functions
in UltraHash.cxx:
// This function checks if the n given keys (in K) are linear independent and
// if they are - fills in Mᵀ with the magic numbers to convert the keys to a
// number in the range [0, 64> (64 --> 6 bits); the perfect hash.
//
// This involves solving a linear (matrix) equation of the form:
//
// K M = L
//
// Where each row of K is a 64-bit key, M is to be solved, and L consists
// of rows that are - in fact - the values returned by UltraHash::index:
// 6 bit numbers.
//
// For example, let K be a 3x5 matrix (3 keys of 5 bits):
//
// Column 0 (lsb)
// |
// v
// ⎡ 1 1 0 1 1 ⎤ <-- row 0 = key 1
// K = ⎢ 1 1 1 0 1 ⎥ key 2
// ⎣ 1 1 1 1 0 ⎦ key 3
//
// Note that column 0 is stored in the least significant bit of the uint64_t,
// the uint64_t values of the keys are therefore 27, 23 and 15, in this example.
//
// In order to visualize the relationship with the std::array<uint64_t, sz>
// variables we'll put column 0 on the right, but then also invert the rows
// in order to keep an identity matrix visually unchanged (rotate the whole
// thing 180 degrees).
//
// Column 0 (lsb)
// |
// v
// ⎡ 0 1 1 1 1 ⎤ K[2] = 15
// K = ⎢ 1 0 1 1 1 ⎥ K[1] = 23
// ⎣ 1 1 0 1 1 ⎦ <-- row 0, K[0] = 27
//
// Furthermore, let L be such that it simply contains each row number in order:
//
// Column 0 (lsb)
// |
// v
// ⎡ 1 0 ⎤ row 2: 2
// L = ⎢ 0 1 ⎥ row 1: 1
// ⎣ 0 0 ⎦ <-- row 0: 0
//
// Note that values in the rows of L don't really matter, as long as they
// are all different.
//
// The matrix equation K M = L reads, in this example,
//
// ⎡ z e ⎤ <-- msb
// ⎡ 0 1 1 1 1 ⎤ ⎢ y d ⎥ ⎡ 1 0 ⎤
// ⎢ 1 0 1 1 1 ⎥ ⎢ x c ⎥ = ⎢ 0 1 ⎥
// ⎣ 1 1 0 1 1 ⎦ ⎢ w b ⎥ ⎣ 0 0 ⎦
// ⎣ v a ⎦
// ^
// |
// index 0
//
// where edcba is the binary representation of the value in the first position of MT, etc.
// Namely, if we transpose M, we get:
//
// lsb (column 0 of Mᵀ; row 0 of M).
// |
// v
// ⎡z y x w v ⎤
// Mᵀ = ⎣e d c b a ⎦ <-- MT[0] (row 0 of Mᵀ / column 0 of M).
//
// The matrix equation remains the same if we subtract row i from row j (i != j)
// in both K and L, provided all rows in both matrices are different.
//
// If the keys are linearly independent we'll be able to put K
// in row echelon form (K') where each next row has more zeroes in
// the leading columns (or since we put the matrix upside down,
// in the trailing columns):
//
// For example, subtracting (XOR-ing) the bottom row (row 0) from row 1 and 2,
// and then subtracting row 1 from row 2, gives
//
// ⎡ z e ⎤
// ⎡ 1 1 0 0 0 ⎤ ⎢ y d ⎥ ⎡ 1 1 ⎤
// K' = ⎢ 0 1 1 0 0 ⎥ ⎢ x c ⎥ = ⎢ 0 1 ⎥
// ⎣ 1 1 0 1 1 ⎦ ⎢ w b ⎥ ⎣ 0 0 ⎦
// ⎣ v a ⎦
//
// Where each row (starting at the bottom) has more zeroes on the
// right than the previous row (the row below it).
//
// This already proves that the keys were linearly independent,
// because there is no row that has all zeroes in K.
// Note that even if there was a row in K with all zeroes, we might
// still be able to find a solution if also L has all zeroes in that row.
//
// The above corresponds to the following set of equations:
// ⎡ 1 1 0 ⎤
// Lᵀ = ⎣ 1 0 0 ⎦
// 1 1 0 0 0
// e+d = 1
// z+y = 1
//
// 0 1 1 0 0
// d+c = 1
// y+x = 0
//
// 1 1 0 1 1
// e+d +b+a = 0
// z+y +w+v = 0
//
// and from top to bottom we want to
//
// 1) set d to 1 (keep e at 0).
// set y to 1 (keep z at 0).
// 2) set c to 1 + d.
// set x to 0 + y.
// 3) set a to 0 + e + d (keep b at 0).
// set v to 0 + z + y (keep w at 0).
//
// It is easy to verify that this answer also holds in the original
// equation:
// 1 0 <---- 1 0 <-- column m.
// .-- row n v v | | .-- row n.
// v ⎡ 0 0 ⎤ v v v
// 2 ⎡ 0 1 1 1 1 ⎤ ⎢ 1 1 ⎥ ⎡ 1 0 ⎤ 2
// 1 ⎢ 1 0 1 1 1 ⎥ ⎢ 1 0 ⎥ = ⎢ 0 1 ⎥ 1
// 0 ⎣ 1 1 0 1 1 ⎦ ⎢ 0 0 ⎥ ⎣ 0 0 ⎦ 0
// ⎣ 1 1 ⎦
//
// That is, the bit in L at row n and column m is the parity of
// the bit-wise AND between the key at row n in K and column m of M.
//
// Finally we transpose this result to get the output variable MT:
//
// ⎡ 0 1 1 0 1 ⎤
// Mᵀ = ⎣ 0 1 0 0 1 ⎦
//
// Since a zero key always results in a zero in L, we cannot use
// the value zero in L when one of the keys is zero. Instead we
// ignore that key and use distinct, non-zero values in L.
//
// For example, if we had the same keys as in the above example,
// but ALSO an all zeroes key:
//
// ⎡ 1 1 0 1 1 ⎤ <-- row 0 = key 1
// ⎢ 0 0 0 0 0 ⎥ key 2
// K = ⎢ 0 1 1 0 0 ⎥ key 3
// ⎣ 0 1 1 1 1 ⎦ key 4
//
// Then the zero key is removed and we use for L the matrix
//
// ⎡ 1 1 ⎤
// L = ⎢ 1 0 ⎥
// ⎣ 0 1 ⎦
//
// The algorithm to solve M from K' in row echelon form is, as you can see
// from the 1) 2) 3) steps above, done per whole row of M:
//
// First, we start with M being all zeroes:
//
// n=5 row
// ⎡ 0 0 ⎤ 4
// ⎢ 0 0 ⎥ 3
// ⎢ 0 0 ⎥ 2
// ⎢ 0 0 ⎥ 1
// ⎣ 0 0 ⎦ 0
//
// We skip over row 4, leaving it all zeroes, because the first row of
// K' is 1 1 0 0 0
// 4 3 <-- start with row 3.
// ^-- the last 1.
// And that row is made equal to the first row of L: 1 1.
// The result after step 1) is therefore:
//
// n=5 row
// ⎡ 0 0 ⎤ 4
// ⎢ 1 1 ⎥ 3 <-- last updated.
// ⎢ 0 0 ⎥ 2
// ⎢ 0 0 ⎥ 1
// ⎣ 0 0 ⎦ 0
//
// The next row of K' is 0 1 1 0 0
// 4 3 2 <-- continue with row 2.
// ^-- the last 1.
// And that row is made equal to L's second row [ 0 1 ]
// XOR the current row of K' times M so far:
//
// ⎡ 0 0 ⎤
// ⎢ 1 1 ⎥
// [ 0 1 1 0 0 ] ⎢ 0 0 ⎥ = [ 1 1 ]
// ⎢ 0 0 ⎥
// ⎣ 0 0 ⎦
//
// [ 0 1 ] + [ 1 1 ] = [ 1 0 ]
//
// So that after step 2) M has become:
//
// n=5 row
// ⎡ 0 0 ⎤ 4
// ⎢ 1 1 ⎥ 3
// ⎢ 1 0 ⎥ 2 <-- last updated.
// ⎢ 0 0 ⎥ 1
// ⎣ 0 0 ⎦ 0
//
// The final row of K' is 1 1 0 1 1
// 4 3 2 1 0 <-- continue with row 0.
// ^-- the last 1.
// The next row of L is [ 0 0 ] and
//
// ⎡ 0 0 ⎤
// ⎢ 1 1 ⎥
// [ 0 0 ] + [ 1 1 0 1 1 ] ⎢ 1 0 ⎥ = [ 1 1 ]
// ⎢ 0 0 ⎥
// ⎣ 0 0 ⎦
//
// So that after step 3) M has become:
//
// n=5 row
// ⎡ 0 0 ⎤ 4
// ⎢ 1 1 ⎥ 3
// ⎢ 1 0 ⎥ 2
// ⎢ 0 0 ⎥ 1
// ⎣ 1 1 ⎦ 0 <-- last updated.
//
And trust me, that's the longest comment I ever wrote for one function :).
ORIGINAL POST:
The random multiplication factor approach
I tried the same approach as samgak: my hash function was just a multiplication of the uint64_t key with a uint64_t multiplication factor, and then I tried random values for the factor until the smallest possible number of high bits of the result were all different. It turns out that this easily takes up to 0.3 seconds to reach a 9-bit output.
Here I used the following function to find the number of high bits required:
#include <algorithm>   // std::sort
#include <bit>         // std::countl_zero (C++20)
#include <cstdint>
#include <vector>

// Return the maximal right-shift that is possible on
// the values in `hashes` such that the results are still all different.
int max_shift(std::vector<uint32_t> hashes)
{
  std::sort(hashes.begin(), hashes.end());
  int sm = 0;                   // largest number of common leading bits
  int sz = hashes.size();
  for (int i = 1; i < sz; ++i)
  {
    uint32_t d = hashes[i - 1] ^ hashes[i];   // the bits that differ
    int s = std::countl_zero(d);
    if (s > sm)
      sm = s;
  }
  return 31 - sm;
}
The hashes here are, for example, just the most significant 32 bits of the result of the multiplication; the lower 32 bits would be shifted out / ignored anyway.
Considering that 100 values, being fewer than 128, theoretically fit into 7 bits (although I never even reached 8 bits with the above approach; which isn't strange when you realize that the chance for a random attempt corresponds to the birthday problem with 100 people and 256 birthdays, which has a 1 in 5807421181 chance of having no collisions), and considering that waiting up to 0.3 seconds, and theoretically much longer, was a bit annoying, I started to think about a way to calculate the hash function.
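That birthday-problem figure is easy to verify exactly with a few lines of Python:

```python
from fractions import Fraction

# Exact probability that 100 uniformly random values thrown into 256
# slots are all distinct (the birthday problem with 256 "birthdays").
p = Fraction(1)
for i in range(100):
    p *= Fraction(256 - i, 256)

odds = 1 / float(p)   # about 5.8e9, i.e. roughly 1 success in 5.8 billion tries
```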
In order to be able to do any calculations, I decided to use a linear algebra approach.
Using linear algebra
The idea is to use linear algebra (matrices, vectors). And since we are working with arbitrary bits, the most logical thing to do is to work over ℤ₂.
Let all the input keys be represented by 64-dimensional column vectors over ℤ₂: 𝐕ᵢ.
Note there can exist at most 64 such linearly independent vectors. Assume for a moment that we have 64 such keys. Then we can construct the matrix
K = [𝐕₀ 𝐕₁ … 𝐕ᵢ … 𝐕₆₃]
and calculate its inverse (which exists because the 𝐕ᵢ are linearly independent). Let's denote this inverse by K⁻¹.
Furthermore, note that we need 6 bits to enumerate 64 input keys. Now let the 6×64 matrix L be the matrix that consists of all 6-dimensional column vectors:
⎡ 0 0 0 0 0 1 1 ⎤
⎢ 0 0 0 0 0 1 1 ⎥
L = ⎢ 0 0 0 0 0 ... 1 1 ⎥
⎢ 0 0 0 0 1 1 1 ⎥
⎢ 0 0 1 1 0 1 1 ⎥
⎣ 0 1 0 1 0 0 1 ⎦
Let the 6×64 matrix M then be
M = L K⁻¹
Then
M K = L K⁻¹ K = L
and left-multiplying any input key 𝐕ᵢ with M will give a unique 6-bit column vector; aka - the perfect hash.
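A small sympy sketch of this construction, using toy 4-bit keys of my own choosing (sympy's Matrix.inv_mod(2) performs the inversion over ℤ₂):

```python
from sympy import Matrix

# Four 4-bit keys as the COLUMNS of K; upper triangular with a unit
# diagonal, hence guaranteed invertible over Z_2 (toy example data).
K = Matrix([
    [1, 1, 0, 1],
    [0, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
])
# L enumerates the keys with 2-bit labels, one column per key.
L = Matrix([
    [0, 0, 1, 1],
    [0, 1, 0, 1],
])
M = (L * K.inv_mod(2)).applyfunc(lambda e: e % 2)   # M = L K^-1 over Z_2
# M K = L (mod 2): multiplying each key by M yields its unique label.
hashes = [tuple((M * K.col(n)).applyfunc(lambda e: e % 2)) for n in range(4)]
```

Multiplying a bit-vector by a 0/1 matrix over ℤ₂ is exactly the AND-plus-parity operation that makes the final lookup so cheap.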
Linearly dependent keys
The main problem with this approach is of course that the input keys might not be linearly independent, which is certainly going to be the case when there are more than 64 keys.
In order to detect if/when keys are not linearly independent, we can simply triangulate the matrix K as new keys come in. To give examples, let's assume that we have vectors/keys of 4 bits instead of 64.
Consider the following five distinct input keys:
⎛0⎞ ⎛1⎞ ⎛1⎞ ⎛0⎞ ⎛1⎞
V₀=⎜1⎟, V₁=⎜0⎟, V₂=⎜0⎟, V₃=⎜0⎟, V₄=⎜1⎟
⎜1⎟ ⎜1⎟ ⎜0⎟ ⎜1⎟ ⎜0⎟
⎝0⎠ ⎝1⎠ ⎝1⎠ ⎝0⎠ ⎝1⎠
Then we start with an identity matrix (all ones on the main diagonal and the rest zeroes), and put the first key in such that its top 1 aligns with a 1 on the matrix diagonal:
⎡ 0 0 0 1 ⎤
K'= ⎢ 0 0 1 0 ⎥
⎢ 0 1 1 0 ⎥
⎣ 1 0 0 0 ⎦
* <-- used
As a result, this matrix has an inverse, because the upper-left triangle is all zeroes and the main diagonal is all ones.
The next key, V₁, can likewise be put into this matrix as-is:
⎡ 0 0 0 1 ⎤
K'= ⎢ 0 0 1 0 ⎥
⎢ 0 1 1 1 ⎥
⎣ 1 0 0 1 ⎦
* * <-- used
Now column four is used, so we can't use it for V₂ (which also has a 1 at the top). Therefore we first subtract the fourth column from V₂ (note that subtraction over ℤ₂ is addition modulo 2, aka an XOR operation). V₂ then becomes:
⎛1⎞ ⎛1⎞ ⎛0⎞
⎜0⎟ - ⎜0⎟ =⎜0⎟
⎜0⎟ ⎜1⎟ ⎜1⎟
⎝1⎠ ⎝1⎠ ⎝0⎠
and we can put that in column 2 of the matrix (which doesn't change it).
V₃ can now no longer be put in column 2, because that column is "used".
If we try the same trick, subtracting column 2 from V₃, we get
⎛0⎞ ⎛0⎞ ⎛0⎞
⎜0⎟ - ⎜0⎟ =⎜0⎟
⎜1⎟ ⎜1⎟ ⎜0⎟
⎝0⎠ ⎝0⎠ ⎝0⎠
all zeroes. Which indicates that the first four keys were linearly dependent.
We can mitigate this by extending the vector with a 1 at the bottom (and all the others with zeroes):
⎛0⎞ ⎛1⎞ ⎛1⎞ ⎛0⎞ ⎛1⎞
V₀=⎜1⎟, V₁=⎜0⎟, V₂=⎜0⎟, V₃=⎜0⎟, V₄=⎜1⎟
⎜1⎟ ⎜1⎟ ⎜0⎟ ⎜1⎟ ⎜0⎟
⎜0⎟ ⎜1⎟ ⎜1⎟ ⎜0⎟ ⎜1⎟
⎝0⎠ ⎝0⎠ ⎝0⎠ ⎝1⎠ ⎝0⎠
If we repeat the same procedure with these inputs, we get exactly the same matrix, but with an extra 1 in the bottom left:
⎡ 0 0 0 0 1 ⎤
K'= ⎢ 0 0 0 1 0 ⎥
⎢ 0 0 1 1 1 ⎥
⎢ 0 1 0 0 1 ⎥
⎣ 1 0 0 0 0 ⎦
* * * * <-- used
V₄ turns out to be linearly dependent as well: it has a 1 at the top, so we subtract the last column, which leaves the (now) fourth column, which was already used. Therefore we have to add another row:
⎛0⎞ ⎛1⎞ ⎛1⎞ ⎛0⎞ ⎛1⎞
V₀=⎜1⎟, V₁=⎜0⎟, V₂=⎜0⎟, V₃=⎜0⎟, V₄=⎜1⎟
⎜1⎟ ⎜1⎟ ⎜0⎟ ⎜1⎟ ⎜0⎟
⎜0⎟ ⎜1⎟ ⎜1⎟ ⎜0⎟ ⎜1⎟
⎜0⎟ ⎜0⎟ ⎜0⎟ ⎜1⎟ ⎜0⎟
⎝0⎠ ⎝0⎠ ⎝0⎠ ⎝0⎠ ⎝1⎠
and
⎡ 0 0 0 0 0 1 ⎤
K'= ⎢ 0 0 0 0 1 0 ⎥
⎢ 0 0 0 1 1 1 ⎥
⎢ 0 0 1 0 0 1 ⎥
⎢ 0 1 0 0 0 0 ⎥
⎣ 1 0 0 0 0 0 ⎦
* * * * * <-- used
Now let's construct K as described above (just put all the V's in it):
⎡ 0 1 1 . 0 1 ⎤
⎢ 1 0 0 . 0 1 ⎥
K = ⎢ 1 1 0 . 1 0 ⎥
⎢ 0 1 1 . 0 1 ⎥
⎢ 0 0 0 . 1 0 ⎥
⎣ 0 0 0 . 0 1 ⎦
I left the fourth column empty because we could only find three linearly independent keys, even though the (original) keys are 4 bits. Likewise, if the keys were 64 bits and we could only find 62 that were linearly independent, we would leave 2 columns empty. That is, we fill them in with pseudo-keys that ARE linearly independent of the previous ones; this leads to those columns having their lower bits zero:
⎡ 0 1 1 . 0 1 ⎤
⎢ 1 0 0 . 0 1 ⎥
K = ⎢ 1 1 0 . 1 0 ⎥
⎢ 0 1 1 . 0 1 ⎥
⎢ 0 0 0 0 1 0 ⎥
⎣ 0 0 0 0 0 1 ⎦
Moreover, the missing linearly independent keys are trivially the unused column(s) of K', so that K can be completed by extending the diagonal of ones from the bottom right to the top left:
⎡ 0 1 1 0 0 1 ⎤
⎢ 1 0 0 0 0 1 ⎥
K = ⎢ 1 1 0 0 1 0 ⎥
⎢ 0 1 1 1 0 1 ⎥
⎢ 0 0 0 0 1 0 ⎥
⎣ 0 0 0 0 0 1 ⎦
Because now all columns are linearly independent, this matrix has an inverse, in this case that is
⎡ 0 1 0 0 0 1 ⎤
⎢ 0 1 1 0 1 1 ⎥
K⁻¹=⎢ 1 1 1 0 1 0 ⎥
⎢ 1 0 0 1 0 0 ⎥
⎢ 0 0 0 0 1 0 ⎥
⎣ 0 0 0 0 0 1 ⎦
Originally we had 5 input keys, but even with the extra pseudo-key we can enumerate them with 3 bits. However, let's take into account that we're not really interested in what the pseudo-key maps to, so we use the following L:
⎡ 0 0 0 ? 0 1 ⎤
L = ⎢ 0 0 1 ? 1 0 ⎥
⎣ 0 1 0 ? 1 0 ⎦
where you can fill in whatever you want in column four.
If you fill in the next number (5):
⎡ 0 0 0 1 0 1 ⎤
L = ⎢ 0 0 1 0 1 0 ⎥
⎣ 0 1 0 1 1 0 ⎦
then
⎡ 1 0 0 1 0 1 ⎤
M = L K⁻¹ = ⎢ 1 1 1 0 0 0 ⎥
⎣ 1 1 1 1 0 1 ⎦
Constructing a Kₒ with the original keys (V₀, V₁, V₂, V₃ and V₄):
⎡ 0 1 1 0 1 ⎤
Kₒ=⎢ 1 0 0 0 1 ⎥
⎢ 1 1 0 1 0 ⎥
⎣ 0 1 1 0 1 ⎦
and using M without the last two columns,
⎡ 1 0 0 1 ⎤
M = ⎢ 1 1 1 0 ⎥
⎣ 1 1 1 1 ⎦
we get
⎡ 0 0 0 0 0 ⎤
M Kₒ = ⎢ 0 0 1 1 0 ⎥
⎣ 0 1 0 1 1 ⎦
showing that multiplying with this matrix M is not a perfect hash: the last column is the same as the second one.
The exact difference is obvious: if we had used the full matrix (including the last two columns) and the extended keys, then only the last key has a 1 in the last column and is therefore XOR-ed with the last column of the matrix:
⮶what we get
1 1 0
0 XOR 0 = 0
0 1 1
⮴expected
Thus, the only way to correct the output is by knowing in which column of K a key belongs (here, the last column) based on its first four bits, which is much like the original problem we are trying to solve to begin with.
PART 2
The trick is to divide the keys into sets of 64 or fewer linearly independent keys. This is easy, because for fully random 64-bit values, on average the first 62.18 keys will be linearly independent, which is close enough to 64 to be efficient.
The keys should be well hashed, however: if, for example, the 64-bit keys were uint64_t values between 0 and 63, using only 6 bits, then you'd obviously only find 6 linearly independent keys at a time and would need at least 64 / 6 ≈ 11 sets.
This would slow down the lookup, because you have to determine which set a key belongs to before you can multiply it with the correct matrix.
One way to do this is to sort the keys and remember the value of the first key of the next set (the first key that is linearly dependent on the keys in the previous set).
For example, lets say we have three sets:
S₀={V₀,V₁,V₂}, S₁={V₃,V₄} and S₂={V₅,V₆,V₇,V₈}
because V₃ can be expressed in terms of the keys in S₀ (e.g. V₀ + V₂) and V₅ could be expressed as V₃ + V₄.
Let's try to think of an example where this is the case. Remember that we sorted the keys, so V₀<V₁<V₂<V₃<V₄<V₅<V₆<V₇<V₈ when represented as numbers. Let's say that the least significant bit of the unsigned integers they are stored in corresponds to row 0 of the vectors; then possible values could be:
{ 0b00001, 0b00010, 0b00100 }, { 0b00101, 0b01000 }, { 0b01101, 0b01111, 0b11001, 0b11110 }
or
⎛0⎞ ⎛0⎞ ⎛0⎞ ⎛0⎞ ⎛0⎞ ⎛0⎞ ⎛0⎞ ⎛1⎞ ⎛1⎞
⎜0⎟ ⎜0⎟ ⎜0⎟ ⎜0⎟ ⎜1⎟ ⎜1⎟ ⎜1⎟ ⎜1⎟ ⎜1⎟
V₀=⎜0⎟, V₁=⎜0⎟, V₂=⎜1⎟, V₃=⎜1⎟, V₄=⎜0⎟, V₅=⎜1⎟, V₆=⎜1⎟,V₇=⎜0⎟, V₈=⎜1⎟
⎜0⎟ ⎜1⎟ ⎜0⎟ ⎜0⎟ ⎜0⎟ ⎜0⎟ ⎜1⎟ ⎜0⎟ ⎜1⎟
⎝1⎠ ⎝0⎠ ⎝0⎠ ⎝1⎠ ⎝0⎠ ⎝1⎠ ⎝1⎠ ⎝1⎠ ⎝0⎠
so that indeed V₃=V₀+V₂ and V₅=V₃+V₄.
But we can do better! There is no need to sort the keys and use < to specify the boundaries between sets. A very effective way would be to use a binary tree to find the correct set where each time one particular bit is used.
Note that with an expectation of ~100 keys and sets of size ~62, we expect to have only two sets! So only a single bit is needed to switch between sets. This bit can be found by trial and error. In the above example we can create the following two sets of linearly independent vectors:
S₀={V₁,V₂,V₄,V₈} and S₁={V₀,V₃,V₅,V₆,V₇}
by simply switching on the least significant bit.
The total lookup code for one hundred 64-bit keys then consists of testing 1 bit, which determines which table of six 64-bit values to use, followed by six XOR operations (which in theory can be done in parallel, so that should be around 3 clock cycles) and likewise 6 parity tests, which I'll count as 6 clock cycles (see this post for more on parity). That brings me to a dozen clock cycles or so. And the result would be 7 bits, not 9! Also, calculating these matrices can be done much, much faster than 0.3 seconds.
I'll post an update when I have working code.
If you have a hashing function that includes a constant, then each possible value of that constant gives you a "new" function. For example, you could hash a 64-bit value to a value between 0 and 1023 to use as an index into a lookup table, like this:
int HashValue(int64_t key, int mult)
{
return (int)(key ^ ((key * mult) >> 32)) & 1023;
}
where mult is a multiplier constant. For a given set of <= 100 keys, you can just try random values of mult until you find one that doesn't result in any collisions. I gave this a go and it typically finds a "perfect" hashing function after about 50 attempts when hashing to a range of 0-1023. For 0-511 it takes around 20000 attempts and for 0-255 it failed.
Example C++ implementation below:
#include <cstdlib>        // rand, srand
#include <ctime>          // time
#include <iostream>       // std::cout
#include <list>
#include <unordered_set>
int HashValue(int64_t key, int mult)
{
return (int)(key ^ ((key * mult) >> 32)) & 1023;
// slower alternative with more thorough mixing
// key = key ^ (key * mult);
// int hash = (int)(key ^ (key >> 32));
// hash ^= key >> 16;
// return (hash ^ (hash >> 8)) & 1023;
}
int FindHash(std::list<int64_t> keys)
{
for(int i = 0; i < 10000; i++) // try 10000 times
{
std::unordered_set<int> hashset;
bool collided = false;
int mult = rand();
for (std::list<int64_t>::iterator it = keys.begin(); it != keys.end(); it++)
{
int val = HashValue(*it, mult);
if(hashset.find(val) != hashset.end())
{
collided = true;
break;
}
hashset.insert(val);
}
if(!collided)
{
std::cout << "Found collision-free function with mult = " << mult << " on attempt " << i << std::endl;
return mult;
}
}
std::cout << "Failed to find collision-free hashing function" << std::endl;
return 0;
}
int main()
{
// test with 100 random keys
srand (time(NULL));
std::list<int64_t> keys = {};
for(int i = 0; i < 100; i++)
{
// 64 bit random number
keys.push_back(((int64_t)rand() << 32) | rand());
}
FindHash(keys);
return 0;
}

Is there an R function to mutate a variable based on multiple column conditions, using if-else?

I am trying to create a variable in my data based on following conditions:
x y Z S T G
1 0 1 0 1 0
1 0 0 0 0 0
1 1 1 0 0 0
1 1 1 1 1 1
if x = 1 then 1,
if y = 1 then 2, if S = 1 then 3,
if T = 1 then 4, if G = 1 then 5, if x == y == Z == 1 then 6, and so on.
Please tell me how I can write this using if-else.
Using if else?
You can calculate it without if else:
v <- 1:6
# this vector gives each column a value:
# 1 2 3 ... 6
# the most tedious part is to get your notes into the R terminal
# as an R matrix.
# I used the fact that a string in R can span multiple lines:
s <- "x y Z S T G
1 0 1 0 1 0
1 0 0 0 0 0
1 1 1 0 0 0
1 1 1 1 1 1"
# it looks like this:
s
## [1] "x y Z S T G\n1 0 1 0 1 0\n1 0 0 0 0 0 \n1 1 1 0 0 0 \n1 1 1 1 1 1"
# after fiddling around with the base R functions for a long time,
# which led to errors and diverse problems, I found that the most elegant
# way to transform this string into a matrix-like tabular form
# is to use tidyverse's read_delim().
# install.packages("tidyverse")
# load tidyverse:
require(tidyverse) # or: library(tidyverse)
tb <- read_delim(s, delim=" ") ## it complains about parsing failures, but
tb
# A tibble: 4 x 6
x y Z S T G
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 0 1 0 1 0
2 1 0 0 0 0 0
3 1 1 1 0 0 0
4 1 1 1 1 1 1
# so it is read in correctly!
# what you want to do actually is
# to multiply each row with `v` and sum this result:
tb[1, ]
# A tibble: 1 x 6
x y Z S T G
<dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
1 1 0 1 0 1 0
# you do:
v * tb[1, ]
x y Z S T G
1 1 0 3 0 5 0
# if you take the sum of this, you get your desired number
sum(v * tb[1, ])
## [1] 9
# row-wise manipulation of a matrix/data.frame/tibble is done with
apply(tb, MARGIN=1, FUN=function(row) v * row)
[,1] [,2] [,3] [,4]
x 1 1 1 1
y 0 0 2 2
Z 3 0 3 3
S 0 0 0 4
T 5 0 0 5
G 0 0 0 6
# very often such functions flip the results, so flip it back
# by the transpose function `t()`:
t(apply(tb, MARGIN=1, FUN=function(row) v * row))
x y Z S T G
[1,] 1 0 3 0 5 0
[2,] 1 0 0 0 0 0
[3,] 1 2 3 0 0 0
[4,] 1 2 3 4 5 6
# to get directly the sum by row, do:
apply(tb, MARGIN=1, FUN=function(row) sum(v * row))
## [1] 9 1 6 21
# these are the values you wanted, aren't they?
# One might now hope for plain vectorization:
#   tb * v
# but beware: R recycles `v` down the columns (column-major), not across
# the rows, so `tb * v` does NOT multiply each row of `tb` by `v` here
# (it only happened to work for the single-row `tb[1, ]` above).
# A matrix product gives the desired row sums directly:
as.vector(as.matrix(tb) %*% v)
## [1] 9 1 6 21
So this is the usual (messy) way one often arrives at a solution.
In the end it boils down to this (and on Stack Overflow you usually find just such a short answer):
Short answer
require(tidyverse)
v <- 1:6
s <- "x y Z S T G
1 0 1 0 1 0
1 0 0 0 0 0
1 1 1 0 0 0
1 1 1 1 1 1"
tb <- read_delim(s, delim=" ")
as.vector(as.matrix(tb) %*% v)
And this is the beauty of R: If you know exactly what to do, it is just 1-3 lines of code (or a little more) ...

sympy: given symbols u, v, w how do I tell it that u*u + v*v + w*w is 1?

I have a 3D unit vector with components (v1, v2, v3).
I do some manipulation and arrive at this matrix (it's a 2x2 matrix, in case the wide rendering makes that hard to see):
>>> Q
⎡ ⎛ _________________⎞ ⎛ _________________⎞ ⎤
⎢ ⎜ ╱ 2 2 2 ⎟ ⎜ ╱ 2 2 2 ⎟ ⎥
⎢ ⅈ⋅⎝v₃ - ╲╱ v₁ + v₂ + v₃ ⎠ ⅈ⋅⎝v₃ + ╲╱ v₁ + v₂ + v₃ ⎠ ⎥
⎢───────────────────────────────────────────────────── ─────────────────────────────────────────────────────⎥
⎢ __________________________________ __________________________________⎥
⎢ ╱ 2 ╱ 2 ⎥
⎢ ╱ │ _________________│ ╱ │ _________________│ ⎥
⎢ ╱ │ ╱ 2 2 2 │ ╱ │ ╱ 2 2 2 │ ⎥
⎢ ╱ │v₃ - ╲╱ v₁ + v₂ + v₃ │ ╱ │v₃ + ╲╱ v₁ + v₂ + v₃ │ ⎥
⎢(ⅈ⋅v₁ - v₂)⋅ ╱ │─────────────────────────│ + 1 (ⅈ⋅v₁ - v₂)⋅ ╱ │─────────────────────────│ + 1 ⎥
⎢ ╲╱ │ ⅈ⋅v₁ - v₂ │ ╲╱ │ ⅈ⋅v₁ - v₂ │ ⎥
⎢ ⎥
⎢ 1 1 ⎥
⎢ ───────────────────────────────────────── ───────────────────────────────────────── ⎥
⎢ __________________________________ __________________________________ ⎥
⎢ ╱ 2 ╱ 2 ⎥
⎢ ╱ │ _________________│ ╱ │ _________________│ ⎥
⎢ ╱ │ ╱ 2 2 2 │ ╱ │ ╱ 2 2 2 │ ⎥
⎢ ╱ │v₃ - ╲╱ v₁ + v₂ + v₃ │ ╱ │v₃ + ╲╱ v₁ + v₂ + v₃ │ ⎥
⎢ ╱ │─────────────────────────│ + 1 ╱ │─────────────────────────│ + 1 ⎥
⎣ ╲╱ │ ⅈ⋅v₁ - v₂ │ ╲╱ │ ⅈ⋅v₁ - v₂ │ ⎦
How do I tell sympy that, since I am dealing with a unit vector, v1*v1 + v2*v2 + v3*v3 is equal to 1, and have it simplify the result for me?
In simple cases like this you can use Q.subs(v1*v1 + v2*v2 + v3*v3, 1).
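A minimal sketch of why this works, on a toy expression of my own (declaring the symbols positive also helps sqrt simplifications):

```python
from sympy import symbols, sqrt

v1, v2, v3 = symbols('v1 v2 v3', positive=True)
expr = v3 + sqrt(v1**2 + v2**2 + v3**2)

# subs matches the exact Add subexpression inside the sqrt,
# so sqrt(1) collapses and only v3 + 1 remains.
unit = expr.subs(v1**2 + v2**2 + v3**2, 1)
```

Note that subs only replaces that exact subexpression; if simplification has rearranged the sum (say only v1**2 + v2**2 survives under the root), substituting v3**2 with 1 - v1**2 - v2**2 and simplifying again may work better.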

euler_equation from sympy.calculus.euler producing wrong derivatives

I've been trying to compute the first variation of a functional that depends on the function phi and on its first and second gradients. I'm probably doing something wrong.
the function phi:
φ(x, y, t)
its first gradient gradient_1_phi:
⎡∂ ∂ ⎤
⎢──(φ(x, y, t)) ──(φ(x, y, t))⎥
⎣∂x ∂y ⎦
its second gradient gradient_2_phi:
⎡ 2 2 ⎤
⎢ ∂ ∂ ⎥
⎢ ───(φ(x, y, t)) ─────(φ(x, y, t))⎥
⎢ 2 ∂y ∂x ⎥
⎢ ∂x ⎥
⎢ ⎥
⎢ 2 2 ⎥
⎢ ∂ ∂ ⎥
⎢─────(φ(x, y, t)) ───(φ(x, y, t)) ⎥
⎢∂y ∂x 2 ⎥
⎣ ∂y ⎦
For the sake of simplicity, I split the functional in three parts, one containing phi, another its first gradient, and the last containing the second gradient. These are potential_PFC, regularizers_PFC_1, and regularizers_PFC_2 respectively.
Problem: when computing the first variation of regularizers_PFC_2 with respect to the second gradient gradient_2_phi, the cross derivatives disappear.
In regularizers_PFC_2, I would expect to get
⎡ 4 4 4 ⎤
⎢ ∂ ∂ ∂ ⎥
⎢- 1.0⋅───(φ(x, y, t)) - 1.0⋅───(φ(x, y, t)) - 2.0⋅ ──────(φ(x, y, t)) = 0, True, True⎥
⎢ 4 4 2 2 ⎥
⎣ ∂x ∂y ∂x ∂y ⎦
but I get
⎡ 4 4 ⎤
⎢ ∂ ∂ ⎥
⎢- 1.0⋅───(φ(x, y, t)) - 1.0⋅───(φ(x, y, t)) = 0, True, True⎥
⎢ 4 4 ⎥
⎣ ∂x ∂y ⎦
Here is the code; any suggestions are much appreciated.
from sympy import Symbol, Function
from sympy.calculus.euler import euler_equations
from sympy import tensorcontraction, tensorproduct, eye
from sympy.tensor.array import derive_by_array
from sympy import init_printing
init_printing()
La = Lb = Lc = 1
t = Symbol('t')
x = Symbol('x')
y = Symbol('y')
phi = Function('phi')(x,y,t)
gradient_1_phi = Function('gradient_1_phi')(x,y,t)
gradient_2_phi = Function('gradient_2_phi')(x,y,t)
grad_1_phi = derive_by_array(phi,[x,y])
grad_2_phi = derive_by_array(grad_1_phi,[x,y])
regularizers_PFC_1 = -Lb*tensorcontraction(tensorproduct(grad_1_phi,grad_1_phi), (0, 1))
regularizers_PFC_2 = +0.5*Lc*tensorcontraction(tensorproduct(grad_2_phi,grad_2_phi),(0,1,2,3))
potential_PFC = La*(0.5*(1.0-0.2)*phi**2 + 0.25*phi*phi*phi*phi)
d_regularizers_PFC_1 = euler_equations(regularizers_PFC_1, [phi,gradient_1_phi,gradient_2_phi], [x,y,t])
d_regularizers_PFC_2 = euler_equations(-regularizers_PFC_2, [phi,gradient_1_phi,gradient_2_phi], [x,y,t])

What is stored in the modelview matrix?

I have been trying to learn more about matrices in OpenGL; right now I'm stuck trying to understand where things are stored inside the modelview matrix (location, scaling, rotations, etc.). This is obviously very important, as understanding matrices is one of the first steps to fully understanding modern OpenGL.
I have been trying to find some good articles, and I've currently found two: (1, 2)
However, I still don't understand where the values are stored; any help is very much appreciated (links, pointers, etc.).
Here is a reference of how different (affine) transformation matrices are constructed:
Identity:
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
Translate (x, y ,z):
1 0 0 x
0 1 0 y
0 0 1 z
0 0 0 1
Scale (sx, sy, sz):
sx 0 0 0
0 sy 0 0
0 0 sz 0
0 0 0 1
Rotate about the x axis (by angle t):
1 0 0 0
0 cos(t) -sin(t) 0
0 sin(t) cos(t) 0
0 0 0 1
Rotate about the y axis (by angle t):
cos(t) 0 sin(t) 0
0 1 0 0
-sin(t) 0 cos(t) 0
0 0 0 1
Rotate about the z axis (by angle t):
cos(t) -sin(t) 0 0
sin(t) cos(t) 0 0
0 0 1 0
0 0 0 1
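To make the memory layout concrete: legacy OpenGL (e.g. glLoadMatrixf) reads the 16 floats in column-major order, so the translation column of the matrix above lands in elements 12, 13 and 14 of the flat array. A small numpy sketch (numpy used purely for illustration):

```python
import numpy as np

# A translate-by-(2, 3, 4) matrix, written exactly as in the table above.
T = np.array([[1, 0, 0, 2],
              [0, 1, 0, 3],
              [0, 0, 1, 4],
              [0, 0, 0, 1]], dtype=float)

# OpenGL expects COLUMN-major storage; flatten accordingly.
gl_floats = T.flatten(order='F')

# The translation (x, y, z) now sits at flat indices 12, 13, 14.
gl_floats[12:15]
```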