is pyrr.Matrix44 layout actually column-major? - opengl

in the pyrr.Matrix docs it states:
Matrices are laid out in row-major format and can be loaded directly into OpenGL. To convert to column-major format, transpose the array using the numpy.array.T method.
Creating a transformation matrix gives me:
Matrix44.from_translation(np.array([1, 2, 3]))
Matrix44([[1, 0, 0, 0],
          [0, 1, 0, 0],
          [0, 0, 1, 0],
          [1, 2, 3, 1]])
If the layout is row-major, I would expect the output to be the transpose:
Matrix44([[1, 0, 0, 1],
          [0, 1, 0, 2],
          [0, 0, 1, 3],
          [0, 0, 0, 1]])
I'm most likely confused (I come from a C/OpenGL background), but could anyone please enlighten me?
Jonathan

I was writing up an answer, but then I found this really interesting link that I invite you to read!
Here is a short summary:
If it's a row-major matrix, then the translation is stored in the 3, 7, and 11th indices.
If it's column-major, then the translation is stored in the 12, 13, and 14th indices.
The difference behind the scenes is how the data is stored: the 16 floats are contiguous in memory, so you have to decide whether to store them as 4 rows of 4 floats or as 4 columns of 4 floats, and that choice changes the way you access and use the matrix.
You can look at this link too.
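To see where the translation actually lands in memory, you can flatten the pyrr matrix in C (row-major) order; a minimal check, assuming pyrr and numpy are installed:
import numpy as np
from pyrr import Matrix44

m = Matrix44.from_translation(np.array([1, 2, 3]))
flat = np.asarray(m).flatten()   # C order: rows are contiguous in memory
print(flat[12:15])               # [1. 2. 3.], i.e. flat indices 12, 13, 14
The translation sits at flat indices 12, 13, and 14, exactly where OpenGL expects it for a column-major matrix, which is why the array can be uploaded without transposing.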

Related

How to transfer data to train a pre-trained model with PyTorch?

I want to classify images using a pretrained ResNet50 model with PyTorch. I am stuck on getting the data from my dataset into the model.
As far as I understand, the images for training need to be passed as a tensor of shape (N, 4, 512, 512), where N is the number of images, 4 is the number of channels, and 512 is the width and height of the picture. The "targets" need to be passed as an array. Right now I have a Pandas DataFrame with columns "Image" and "Label"; the "Image" column holds nested lists of shape (512, 512, 4).
I tried writing the data into an array, but it takes too long and uses a lot of memory. Is there some other way to do this? So, my question is: how can I get the data into the model?
This is part of my database:
Number  Image                                                   Label
0       [[[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0], [0...      0
1       [[[4, 3, 0, 18], [82, 0, 0, 27], [11, 4, 0, 20...      14
2       [[[2, 2, 0, 0], [1, 5, 0, 1], [0, 5, 0, 0], [2...      14
3       [[[7, 1, 0, 24], [31, 0, 0, 14], [23, 3, 0, 13...      3
...     ...                                                     ...
I tried to do it in the following way:
import torch

x_train = []
y_train = []
for data in range(N):  # N is the number of rows in train_df
    x_train.append(train_df['Image'].iloc[data])
    y_train.append(train_df['Label'].iloc[data])
x_train = torch.tensor(x_train)           # (N, 512, 512, 4), slow for large N
y_train = torch.tensor(y_train)
y_train = y_train.view(-1)
x_train = x_train.permute(0, 3, 1, 2)     # (N, 4, 512, 512), channels first
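A minimal sketch of a lazier alternative, assuming train_df is the DataFrame shown above (the class name ImageFrameDataset is just illustrative): wrap the rows in a torch.utils.data.Dataset so each image is converted to a tensor only when a batch needs it.
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class ImageFrameDataset(Dataset):
    # Yields one (image, label) pair per DataFrame row, converted lazily.
    def __init__(self, df):
        self.df = df

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        img = np.asarray(self.df['Image'].iloc[idx], dtype=np.float32)  # (512, 512, 4)
        img = torch.from_numpy(img).permute(2, 0, 1)                    # (4, 512, 512)
        label = int(self.df['Label'].iloc[idx])
        return img, label

loader = DataLoader(ImageFrameDataset(train_df), batch_size=16, shuffle=True)
The DataLoader stacks each batch to shape (16, 4, 512, 512), so only one batch of converted images is in memory at a time.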

Python: How to make values of a list in a list of lists zeros

I would like to set all the values in the first list inside the list of lists named "child_Before" (shown below) to zero. The piece of code I wrote to accomplish this task is shown after the list:
child_Before = [[9, 12, 7, 3, 13, 14, 10, 5, 4, 11, 8, 6, 2],
                [1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1],
                [[1, 0], [1, 1]]]
for elem in range(len(child_Before[0])):
    child_Before[0][elem] = 0
Below is the expected result:
child_After = [[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
               [1, 0, 1, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1],
               [[1, 0], [1, 1]]]
However, I think there should be a more nimble way to accomplish this exercise. Hence, I welcome your help. Thank you in advance.
Just to add a creative answer:
import numpy as np
child_Before[0] = (np.array(child_Before[0]) & 0).tolist()
This is bad practice, though, since I'm using bitwise operations in a scenario where they are not intuitive, and I think there is a slight chance I'm making two passes over the data. On the bright side, the & that produces all the zeros is a single vectorized operation.
Just create a list of [0] with the same length as the original list.
# Answer to this question - make first list in the list to be all 0
child_Before[0] = [0] * len(child_Before[0])
As for your answer, I can correct it to make all the elements in the lists of this list zero.
# Make all elements 0
for child in range(len(child_Before)):
    child_Before[child] = [0] * len(child_Before[child])
Use a list comprehension:
child_after = [[i if n != 0 else 0 for i in j] for n, j in enumerate(child_Before)]

How to use block_diag repeatedly

I have a rather simple question but still couldn't make it work.
I want a block-diagonal n^2 x n^2 matrix. The blocks are sparse n x n matrices with just the main diagonal, the first off-diagonals, and the fourth off-diagonals. For the simple case of n = 4 this can easily be done:
from numpy import ones
from scipy import sparse

n = 4
datanew = ones((5, n))        # one row of data per diagonal
datanew[2] = -2*datanew[2]    # the main diagonal gets -2
diagsn = [-4, -1, 0, 1, 4]
DD2 = sparse.spdiags(datanew, diagsn, n, n)
new = sparse.block_diag([DD2, DD2, DD2, DD2])
Since this is only useful for small n, is there a better way to use block_diag? I'm thinking of n -> 1000.
A simple way of constructing a long list of DD2 matrices is with a list comprehension:
In [128]: sparse.block_diag([DD2 for _ in range(20)]).A
Out[128]:
array([[-2,  1,  0, ...,  0,  0,  0],
       [ 1, -2,  1, ...,  0,  0,  0],
       [ 0,  1, -2, ...,  0,  0,  0],
       ...,
       [ 0,  0,  0, ..., -2,  1,  0],
       [ 0,  0,  0, ...,  1, -2,  1],
       [ 0,  0,  0, ...,  0,  1, -2]])
In [129]: _.shape
Out[129]: (80, 80)
At least in my version, block_diag wants a list of arrays, not *args:
In [133]: sparse.block_diag(DD2,DD2,DD2,DD2)
...
TypeError: block_diag() takes at most 3 arguments (4 given)
In [134]: sparse.block_diag([DD2,DD2,DD2,DD2])
Out[134]:
<16x16 sparse matrix of type '<type 'numpy.int32'>'
with 40 stored elements in COOrdinate format>
This probably isn't the fastest way to construct such a block diagonal array, but it's a start.
================
Looking at the code for sparse.block_diag I deduce that it does:
In [145]: rows = []
In [146]: for i in range(4):
   .....:     arow = [None]*4
   .....:     arow[i] = DD2
   .....:     rows.append(arow)
   .....:
In [147]: rows
Out[147]:
[[<4x4 sparse matrix of type '<type 'numpy.int32'>'
with 10 stored elements (5 diagonals) in DIAgonal format>,
None,
None,
None],
[None,
<4x4 sparse matrix of type '<type 'numpy.int32'>'
...
None,
<4x4 sparse matrix of type '<type 'numpy.int32'>'
with 10 stored elements (5 diagonals) in DIAgonal format>]]
In other words, rows is a 'matrix' of None with DD2 along the diagonal. It then passes this to sparse.bmat.
In [148]: sparse.bmat(rows)
Out[148]:
<16x16 sparse matrix of type '<type 'numpy.int32'>'
with 40 stored elements in COOrdinate format>
bmat in turn collects the data, row, and col arrays from the coo format of all the input matrices, joins them into master arrays, and builds a new coo matrix from them.
So an alternative is to construct those 3 arrays directly.
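A minimal sketch of that idea (the helper repeat_block_diag is a made-up name, and it assumes a square block): tile the COO triplets of one block and shift the indices for each copy.
import numpy as np
from scipy import sparse

def repeat_block_diag(block, k):
    # Tile the COO triplets of `block` k times, shifting the row and
    # column indices by the block size for each copy on the diagonal.
    b = block.tocoo()
    n = b.shape[0]
    shift = np.repeat(np.arange(k) * n, b.nnz)
    data = np.tile(b.data, k)
    row = np.tile(b.row, k) + shift
    col = np.tile(b.col, k) + shift
    return sparse.coo_matrix((data, (row, col)), shape=(k * n, k * n))
For example, repeat_block_diag(DD2, 1000) builds the full matrix without first creating a 1000-element Python list of blocks; whether it actually beats sparse.block_diag is worth benchmarking.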

3D search using A* JPS

How can I generalize Jump Point Search to a 3D search volume?
So far, I have defined pruning rules for a 3D cube involving each of the three movement types: straight (0,0,1), first-order diagonal (0,1,1), and second-order diagonal (1,1,1).
What I'm mostly concerned about is the optimal turning points defined in the paper. I've been unable to ascertain exactly how they were derived, and therefore how to derive my own for three dimensions.
Any suggestions as to how this can be done?
Rather than attempting to derive turning points, it helps to use an intuitive understanding of the algorithm in 2D.
Because the shortest distance between two locations is a straight line, we know that moving diagonally is fastest, since it's equivalent to two steps in 1D. In 3D, this means a diagonal is equivalent to three steps (in reality these values are sqrt(2) and sqrt(3)). With this, we choose to optimize by moving across as many axes as possible: turning to move along a 2D axis is worse than turning to move along a 3D axis, and likewise moving along 1D (straight) is even worse than 2D movement. This is the core assumption jump point search makes.
There is, in the culling algorithm, the assumption that if you are jumping on the least optimal axis (1D), then there are no optimal turns of a higher axis order (turning onto a 2D axis) until there is a parallel wall on the same axis order. For example, look at figure 2(d), where the code sees a parallel wall in 1D and adds a 2D movement back into the list.
As a Heuristic
Evaluate forward until one space is left open (and a wall is 2 spaces away), and add this point to the jumplist. For any point on the jumplist, jump in a new direction, preferring: goal > 2D movements forward > 1D movements forward > 1D movements backward > 2D movements backward. We can generalize this heuristic to any n dimensions...
Evaluating the next direction, with + meaning towards the goal and n being the number of dimensions incremented, gives us the ordering:
+nD > +(n-1)D > ... > +1D > 0D > -1D > ... > -(n-1)D > -nD
The order of best-to-worst turning points in 3D:
3D+ = [1, 1, 1]
2D+ = [1, 1, 0], [1, 0, 1], [0, 1, 1]
1D+ = [1, 0, 0], [0, 1, 0], [0, 0, 1], [-1, 1, 1], [1, -1, 1], [1, 1, -1]
(suboptimal directions below; [0, 0, 0] is useless, so I didn't include it)
0D = [1, -1, 0], [1, 0, -1], [-1, 1, 0], [-1, 0, 1], [0, -1, 1], [0, 1, -1]
1D- = [-1, 0, 0], [0, -1, 0], [0, 0, -1], [-1, -1, 1], [1, -1, -1], [-1, 1, -1]
2D- = [-1, -1, 0], [-1, 0, -1], [0, -1, -1]
3D- = [-1, -1, -1]
Phew, typing that was a pain, but it should solve your problem.
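As an aside (not part of the original answer), the ranking above can be reproduced mechanically: a direction's rank is just the sum of its components, so a few lines of Python generate and sort all 26 moves. Ties within one rank come out in arbitrary order.
from itertools import product

# All 26 unit moves in 3D, excluding the null move [0, 0, 0].
# Sorting by component sum reproduces the +3D > +2D > ... > -3D ranking.
moves = [d for d in product((-1, 0, 1), repeat=3) if any(d)]
moves.sort(key=sum, reverse=True)
for d in moves:
    print(sum(d), list(d))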
Just remember that as you 'jump', you keep track of which axis order you are jumping along; you need to find parallel walls in the same axis order. Therefore, moving in the direction [1, 0, 1], you want to find walls that are at [1, 1, 0] and [0, 1, 1] in order to 'unlock' a jump point in the direction [1, 1, 1].
With the same logic, if you move in 1D [1, 0, 0], you check [0, 1, 0] for a wall to add [0, 1, 1] and [1, 1, 0]. You also check [0, 0, 1] in order to add [1, 0, 1] and [0, 1, 1] as jump points.
Hopefully you get what I mean, because it's really difficult to visualize and calculate, but easy to grasp once you have the mathematics of it.
Conclusion
Use the A* heuristics...
Dijkstra = distance from start
Greedy Best-First = distance from goal
Then add our new heuristics!
+nD > +(n-1)D > ... > +1D > -1D > ... > -(n-1)D > -nD
If any point of order nD has a parallel obstruction, you may add a jump point for each open (n+1)D direction.
EDIT:
The definition of 'parallel' for your code: any point that
is of the same axis order as the direction you are moving,
is not the next point in that direction, and
has the same number of positive and negative dimensional moves as the next point (e.g. [1, 1, -1] is parallel to [1, -1, 1] and [-1, 1, 1], but not to [1, 0, 0]).

Proper form for using a 2D array in Clojure and initializing each cell

(Lisp beginner)
I need to create a 2D array and initialize each cell in it. Each cell is initialized with a function based on the data in a preceding cell: the cell at 0,1 will be initialized with the result of a function that uses the data from cell 0,0, and so on.
I was wondering what the proper Clojure idiom is for setting up a data structure like this.
The representation of your array actually depends on how you need to use it, not on how you initialize it. For example, if you have a dense matrix, you most probably should use a vector of vectors like this:
[[0, 1, 2, 3, 4],
 [5, 6, 7, 8, 9],
 [9, 8, 7, 6, 5],
 [4, 3, 2, 1, 0],
 [0, 1, 2, 3, 4]]
or a single vector with some additional info on the row length:
{:length 5
 :data [0, 1, 2, 3, 4,
        5, 6, 7, 8, 9,
        9, 8, 7, 6, 5,
        4, 3, 2, 1, 0,
        0, 1, 2, 3, 4]}
and if you need a sparse matrix, you can use hash maps:
{0 {0 0, 4 4},
 2 {2 7},
 3 {0 4, 2 2}}
(Since your 2D array is small and you generate the next value based on the previous one, I believe the first option is better suited for you.)
If you are going to do a lot of matrix-specific manipulations (multiplication, decomposition, etc.) you may want to use an existing library like Incanter.
And as for filling, my proposal is to use transients and to store interim results, i.e. (for a one-dimensional vector):
(defn make-array [initial-value f length]
  (loop [result (transient []), length-left length, interim-value initial-value]
    (if (= length-left 0)
      (persistent! result)
      (recur (conj! result (f interim-value)) (- length-left 1) (f interim-value)))))
Transients will avoid creating a new data structure for each new element, and the interim value will avoid the need to read the previous element back from the transient structure.
I don't know if this is a bad technique, but I've used hash maps (or usually ordered maps) to specify 2D "arrays". They build up like this:
{ [x y] value ... }
There are cons to this, since you have to specify the limits of the array somehow. And it's probably very slow compared to the straight vector representations described in ffriend's post.