How does OpenGL calculate the new texture coordinate when wrapping with GL_MIRRORED_REPEAT? I mean given (x, y) what formula is applied? https://open.gl/textures
See OpenGL 4.6 API Core Profile Specification; 8.14.2 Coordinate Wrapping and Texel Selection; page 257, Table 8.20
MIRRORED_REPEAT : (size − 1) − mirror((coord mod (2 × size)) − size)
where mirror(a) returns a if a ≥ 0, and −(1 + a) otherwise.
This means that if the texture is tiled, the even tiles are drawn as the texture is and the odd tiles are drawn mirrored.
If the texture coordinate is in [0, 1], [2, 3], [4, 5], ..., then the wrap function returns a corresponding coordinate in the range [0, 1].
If the texture coordinate is in [1, 2], [3, 4], [5, 6], ..., then the wrap function returns a corresponding mirrored coordinate in the range [1, 0].
The wrap function is applied to each coordinate separately, and a different wrap mode can be set for each coordinate.
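For illustration, here is a small Python sketch of that table entry (my own translation of the spec's formula, assuming integer texel coordinates and a non-negative mod, which Python's % provides):

def mirror(a):
    # mirror(a) = a if a >= 0, otherwise -(1 + a)
    return a if a >= 0 else -(1 + a)

def wrap_mirrored_repeat(coord, size):
    # (size - 1) - mirror((coord mod (2 * size)) - size)
    return (size - 1) - mirror(coord % (2 * size) - size)

# With size = 4: coords 0..3 map to 0..3 (even tile),
# coords 4..7 map back to 3..0 (odd, mirrored tile).
print([wrap_mirrored_repeat(c, 4) for c in range(8)])  # [0, 1, 2, 3, 3, 2, 1, 0]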
I'm trying to calculate the alpha values as explained here.
I have as argument a tensor with shape (1, 512, 14, 14). To calculate the alpha values I need to take the average over all dimensions except the channel dimension, so the output will have the shape (1, k, 1, 1), which is essentially (k,).
How can I do this in PyTorch?
Thanks!
You could permute the first and second axes to keep the channel dimension on dim=0, then flatten all other dimensions, and lastly take the mean over that new axis:
x.permute(1, 0, 2, 3).flatten(start_dim=1).mean(dim=1)
Here are the shapes, step by step:
>>> x.permute(1, 0, 2, 3).shape
(512, 1, 14, 14)
>>> x.permute(1, 0, 2, 3).flatten(start_dim=1).shape
(512, 1, 196)
>>> x.permute(1, 0, 2, 3).flatten(start_dim=1).mean(dim=1).shape
(512,)
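As a side note, recent PyTorch versions also accept a tuple of dimensions in mean, so (if I'm not mistaken about your version) you can get the same result in one call:

>>> x.mean(dim=(0, 2, 3)).shape
(512,)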
In the pyrr.Matrix docs, it states:
Matrices are laid out in row-major format and can be loaded directly into OpenGL. To convert to column-major format, transpose the array using the numpy.array.T method.
Creating a transformation matrix gives me:
Matrix44.from_translation( np.array([1,2,3]))
Matrix44([[1, 0, 0, 0],
[0, 1, 0, 0],
[0, 0, 1, 0],
[1, 2, 3, 1]])
If the layout is row-major, I would expect the output to be the transpose:
Matrix44([[1, 0, 0, 1],
[0, 1, 0, 2],
[0, 0, 1, 3],
[0, 0, 0, 1]])
I'm most likely confused (I come from a C/OpenGL background), but could anyone please enlighten me?
I was writing up an answer, but then I found this really interesting link that I invite you to read!
Here is a short summary:
If it's a row-major matrix, the translation is stored at indices 3, 7, and 11.
If it's column-major, the translation is stored at indices 12, 13, and 14.
The difference behind the scenes is how the data is stored. A 4×4 matrix is 16 floats that sit contiguously in memory, so you have to decide whether to group them as 4 floats per column or 4 floats per row, and that changes how you access and use the matrix.
You can look at this link too.
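To make the storage difference concrete, here is a small numpy sketch (my own illustration, using the "math textbook" form of your matrix, with the translation in the last column):

import numpy as np

M = np.array([[1, 0, 0, 1],
              [0, 1, 0, 2],
              [0, 0, 1, 3],
              [0, 0, 0, 1]], dtype=np.float32)

row_major = M.flatten()         # rows are contiguous in memory
print(row_major[[3, 7, 11]])    # [1. 2. 3.] -> translation at indices 3, 7, 11

col_major = M.T.flatten()       # transpose + flatten = column-major order
print(col_major[[12, 13, 14]])  # [1. 2. 3.] -> translation at indices 12, 13, 14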
Let's say we have two matrices A and B.
A has the shape (r, k) and B has the shape (r, l).
Now I want to calculate the np.outer product of these two matrices row by row and then sum all the resulting values along axis 0, so my result matrix should have the shape (k, l).
E.g.:
The shape of A is (4, 2), and the shape of B is (4, 3).
import numpy as np
A = np.array([[0, 7], [4, 1], [0, 2], [0, 5]])
B = np.array([[9, 7, 7], [6, 7, 5], [2, 7, 9], [6, 9, 7]])
# This is the outer product of the first rows of A and B
print(np.outer(A[0], B[0]))
# [[ 0  0  0]
#  [63 49 49]]
# First possibility: a list comprehension summed along axis 0
sum1 = np.sum([np.outer(x, y) for x, y in zip(A, B)], axis=0)
# Second possibility: functools.reduce (note: tuple unpacking in a
# lambda is Python 2 only, so unpack inside the body instead)
from functools import reduce
sum2 = reduce(lambda acc, xy: acc + np.outer(xy[0], xy[1]), zip(A, B),
              np.zeros((A.shape[1], B.shape[1])))
# The result for sum1 or sum2 looks like this:
# array([[ 24.,  28.,  20.], [103., 115., 107.]])
I am asking myself: is there a better or faster solution? Because when I have, for example, two matrices with more than 10,000 rows, this takes some time.
Only using the np.outer function is not the solution, because np.outer(A, B) gives me a matrix of shape (8, 12), which is not what I want.
I need this for neural network backpropagation.
You could literally transfer the iterators as string notation to np.einsum -
np.einsum('rk,rl->kl',A,B)
Or with matrix-multiplication using np.dot -
A.T.dot(B)
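As a quick sanity check on the sample data from the question, both one-liners produce the same (k, l) result:

import numpy as np

A = np.array([[0, 7], [4, 1], [0, 2], [0, 5]])
B = np.array([[9, 7, 7], [6, 7, 5], [2, 7, 9], [6, 9, 7]])
print(np.allclose(np.einsum('rk,rl->kl', A, B), A.T.dot(B)))  # True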
I have a set of X and Y coordinates and each point has a different pixel value - the Z quantity. I would like to plot these values using a raster or contour plot.
I am having difficulty doing this because there is no mathematical relationship between the pixel value and the X and Y coordinates.
I have created an array for the range of x and y values and I have tried constructing a dictionary where I can look up the value of z using a concatenated x and y string. At the moment I am having an index issue and I am wondering if there is a better way of achieving this?
My code so far is:
import matplotlib.pyplot as plt
import numpy as np
XY_Zpoints = {'11':8,
'12':8,
'13':8,
'14':6,
'21':6,
'22':8,
'23':6,
'24':6,
'31':8,
'32':3,
'33':8,
'34':6,
'41':8,
'42':3,
'43':3,
'44':8,
}
x, y = np.meshgrid(np.linspace(1,4,4), np.linspace(1,4,4))
z = XY_Zpoints[str(x)+str(y)]
# Plot the grid
plt.imshow(z)
plt.spectral()
plt.show()
Thanks in advance for any help you can offer!
Instead of a dictionary, you can use a numpy array where the position of each pixel value corresponds to its x and y coordinates. For your example this array would look like:
z = np.array([[8, 8, 8, 6], [6, 8, 6, 6], [8, 3, 8, 6], [8, 3, 3, 8]])
To access the pixel value at x = 2 and y = 3, for example, you can do this:
x = 2
y = 3
pixel = z[x - 1][y - 1]
z can be displayed with:
plt.imshow(z)
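Putting it all together, here is a minimal sketch that builds z directly from your dictionary (assuming the keys follow the str(x) + str(y) convention for x, y in 1..4) and plots it:

import numpy as np
import matplotlib.pyplot as plt

XY_Zpoints = {'11': 8, '12': 8, '13': 8, '14': 6,
              '21': 6, '22': 8, '23': 6, '24': 6,
              '31': 8, '32': 3, '33': 8, '34': 6,
              '41': 8, '42': 3, '43': 3, '44': 8}

# One row per x value, one column per y value
z = np.array([[XY_Zpoints[str(x) + str(y)] for y in range(1, 5)]
              for x in range(1, 5)])

plt.imshow(z, interpolation='nearest')
plt.show()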
How can I generalize Jump Point Search to a 3D search volume?
So far, I have defined pruning rules for a 3D cube involving each of the three movement types: straight (0,0,1), first-order diagonal (0,1,1), and second-order diagonal (1,1,1).
What I'm mostly concerned about is the optimal turning points defined in the paper. I've been unable to ascertain exactly how they were derived, and therefore how to derive my own for three dimensions.
Any suggestions as to how this can be done?
Rather than attempting to derive turning points, it helps to use an intuitive understanding of the algorithm in 2D.
Because the shortest distance between two locations is a straight line, we know that moving diagonally is fastest: one diagonal step is equivalent to two steps in 1D. In 3D, a full diagonal is equivalent to three steps (in reality, these costs are sqrt(2) and sqrt(3)). With this, we choose to optimize by moving across as many axes as possible: turning to move along a 2D axis is worse than turning to move along a 3D axis, and likewise, moving along 1D (straight) is even worse than 2D movement. This is the core assumption jump point search makes.
There is, in the culling algorithm, the assumption that if you are jumping on the least optimal axis (1D), then there are no optimal turns of a higher axis order (turning onto a 2D axis) until there is a parallel wall on the same axis order. For example, look at figure 2(d), where the code sees a parallel wall in 1D and adds a 2D movement back into the list.
As a Heuristic
Evaluate forward until one space is left open (and a wall is 2 spaces away), and add this point to the jump list. From any point on the jump list, jump in a new direction: goal > 2D movements forward > 1D movements forward > 1D movements backward > 2D movements backward. We can generalize this heuristic to any n dimensions...
Evaluating the next direction, with + being towards the goal and n being the number of dimensions incremented, gives us the ordering:
+nD > +(n-1)D > ... > +1D > 0D > -1D > ... > -(n-1)D > -nD
The order of best to worst turning points in 3D:
3D+ = [1, 1, 1]
2D+ = [1, 1, 0], [1, 0, 1], [0, 1, 1]
1D+ = [1, 0, 0], [0, 1, 0], [0, 0, 1], [-1, 1, 1], [1, -1, 1], [1, 1, -1]
(suboptimals below; [0, 0, 0] is useless, so I didn't include it)
0D = [1, -1, 0], [1, 0, -1], [-1, 1, 0], [-1, 0, 1], [0, -1, 1], [0, 1, -1]
1D- = [-1, 0, 0], [0, -1, 0], [0, 0, -1], [-1, -1, 1], [1, -1, -1], [-1, 1, -1]
2D- = [-1, -1, 0], [-1, 0, -1], [0, -1, -1]
3D- = [-1, -1, -1]
Phew, typing that was a pain, but it should solve your problem.
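If it helps, the whole ranking above can be generated rather than typed by hand. With + meaning "towards the goal" on each axis, a direction's rank is simply the sum of its components (a small sketch, my own illustration):

from itertools import product

# All 26 unit moves in 3D, excluding the useless [0, 0, 0]
directions = [d for d in product((-1, 0, 1), repeat=3) if any(d)]
for d in sorted(directions, key=sum, reverse=True):
    print(sum(d), d)  # 3 = 3D+, 2 = 2D+, 1 = 1D+, 0 = 0D, -1 = 1D-, ...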
Just remember that as you 'jump', keep track of which order of axis you are jumping; you need to find parallel walls in the same axis. Therefore, moving in the direction [1, 0, 1], you want to find walls that are at [1, 1, 0] and [0, 1, 1] in order to 'unlock' a jump point in the direction [1, 1, 1].
With the same logic, if you move in 1D [1, 0, 0], you check [0, 1, 0] for a wall to add [0, 1, 1] and [1, 1, 0]. You also check [0, 0, 1] in order to add [1, 0, 1] and [0, 1, 1] as jump points.
Hopefully you get what I mean, because it's really difficult to visualize and calculate, but easy to grasp once you have the mathematics of it.
Conclusion
Use the A* heuristics...
Dijkstra = distance from start
Greedy First = distance from goal
Then add our new heuristics!
+nD > +(n-1)D > ... > +1D > -1D > ... > -(n-1)D > -nD
If any point reached by an nD move has a parallel obstruction, you may add a jump point for each open (n+1)D direction.
EDIT:
The definition of 'parallel' for your code
any point that is the same order as the direction you are moving,
that is not the next point in that direction,
and that has the same number of positive and negative dimensional moves as the next point (e.g., [1, 1, -1] is parallel to [1, -1, 1] and [-1, 1, 1], but not to [1, 0, 0])
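A tiny sketch of that "parallel" test, based on my reading of the definition above: two direction vectors are parallel for wall-checking purposes if they are distinct but have the same counts of positive and negative components.

def is_parallel(d, e):
    # Same axis order: matching counts of +1s and -1s, but not the same direction
    return (tuple(d) != tuple(e)
            and sum(c > 0 for c in d) == sum(c > 0 for c in e)
            and sum(c < 0 for c in d) == sum(c < 0 for c in e))

print(is_parallel([1, 1, -1], [1, -1, 1]))  # True
print(is_parallel([1, 1, -1], [1, 0, 0]))   # False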