I'm working with an OpenCL kernel where I need to use associated Legendre polynomials.
These are a set of fairly difficult-to-compute polynomials indexed by integer orders n and m and accepting a real argument. The specifics of the actual polynomials are irrelevant, since I have a (slow) host-side function that can generate them, but the kernel-side function would need to look something like:
float legendre(int n, int m, float z)
{
    float3 lookupCoords;
    lookupCoords.x = n;
    lookupCoords.y = m;
    lookupCoords.z = z;
    // Do something here to interpolate z for a given n and m...
}
I want to interpolate along the Z axis, but just have nearest neighbor for the n and m axes since they're only defined for integer values. A benefit of Z is that it's only defined between -1 and 1, so it already looks a lot like a texture coordinate.
How can I accomplish this with a sampler and lookup tables in OpenCL?
My first thought was to attempt to use a 3D texture filled with precomputed orders, but I only want to interpolate along one dimension (the real or Z argument), and I'm not sure what this would look like in OpenCL C.
In OpenCL 1.1, use read_imagef with an image3d_t as the first parameter, a sampler_t created with CLK_FILTER_LINEAR as the second parameter, and a float4 coord as the third parameter holding the coordinates to read from.
To interpolate along only one axis, let that coordinate take any float value, but set the other two coordinates to floor(value) + 0.5f. This keeps them from being interpolated. Like this (interpolating only along z):
float4 coordinate = (float4)(floor(x) + 0.5f, floor(y) + 0.5f, z, 0.0f);
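For concreteness, here is a minimal device-side sketch of such a lookup. It assumes the 3D image was filled so that texel (n, m, k) holds the value for orders n and m and the k-th sample of z spaced uniformly over [-1, 1]; that layout, the sampler flags other than CLK_FILTER_LINEAR, and the names lut and legendreSampler are assumptions, not something given above.

__constant sampler_t legendreSampler = CLK_NORMALIZED_COORDS_FALSE |
                                       CLK_ADDRESS_CLAMP_TO_EDGE |
                                       CLK_FILTER_LINEAR;

float legendre(read_only image3d_t lut, int n, int m, float z)
{
    // Map z from [-1, 1] onto the texel centers 0.5 .. depth - 0.5,
    // so only this axis is linearly filtered.
    float depth  = (float)get_image_depth(lut);
    float zCoord = (z + 1.0f) * 0.5f * (depth - 1.0f) + 0.5f;

    // n and m sit exactly on texel centers, so no filtering happens along them.
    float4 coord = (float4)((float)n + 0.5f, (float)m + 0.5f, zCoord, 0.0f);
    return read_imagef(lut, legendreSampler, coord).x;
}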
In OpenCL 1.2 you could use image arrays but I'm not sure it would be any faster and NVIDIA does not support OpenCL 1.2 on Windows.
In the deferred shading engine I'm working on, I currently store the normal vector in a buffer with the internal format GL_RGBA16F.
I was always aware that this was probably not the best solution, but I had no time to deal with it.
Recently I read "Survey of Efficient Representations for Independent Unit Vectors", which inspired me to use Octahedral Normal Vectors (ONV) and to change the buffer to GL_RG16_SNORM:
Encode the normal vector (vec3 to vec2):
// Returns +/- 1
vec2 signNotZero( vec2 v )
{
    return vec2((v.x >= 0.0) ? +1.0 : -1.0, (v.y >= 0.0) ? +1.0 : -1.0);
}
// Assume normalized input. Output is on [-1, 1] for each component.
vec2 float32x3_to_oct( in vec3 v )
{
    // Project the sphere onto the octahedron, and then onto the xy plane
    vec2 p = v.xy * (1.0 / (abs(v.x) + abs(v.y) + abs(v.z)));
    // Reflect the folds of the lower hemisphere over the diagonals
    return (v.z <= 0.0) ? ((1.0 - abs(p.yx)) * signNotZero(p)) : p;
}
Decode the normal vector (vec2 to vec3):
vec3 oct_to_float32x3( vec2 e )
{
    vec3 v = vec3(e.xy, 1.0 - abs(e.x) - abs(e.y));
    if (v.z < 0.0) v.xy = (1.0 - abs(v.yx)) * signNotZero(v.xy);
    return normalize(v);
}
Since I have now implemented an anisotropic lighting model, it is necessary to store the tangent vector as well as the normal vector. I want to store both vectors in one and the same color attachment of the framebuffer. That brings me to my question: what is an efficient way to pack a unit normal vector and a tangent vector into a buffer?
Of course it would be easy with the algorithms from the paper to store the normal vector in the RG channels and the tangent vector in the BA channels of a GL_RGBA16_SNORM buffer, and this is my current implementation too.
But since the normal vector and the tangent vector are always orthogonal, there must be a more elegant way that either increases accuracy or saves memory.
So the real question is: how can I take advantage of the fact that the two vectors are orthogonal? Can I store both vectors in a GL_RGB16_SNORM buffer, and if not, can I improve the accuracy when packing them into a GL_RGBA16_SNORM buffer?
The following considerations are purely mathematical and I have no experience with their practicality. However, I think that especially Option 2 might be a viable candidate.
Both of the following options state the problem in the same way: given a normal (which you can reconstruct using ONV), how can one encode the tangent with a single number?
Option 1
The first option is very close to what meowgoesthedog suggested. Define an arbitrary reference vector (e.g. (0, 0, 1)). Then encode the tangent as the angle (normalized to the [-1, 1] range) that you need to rotate this vector about the normal to match the tangent direction (after projecting on the tangent plane, of course). You will need two different reference vectors (or even three) and choose the correct one depending on the normal. You don't want the reference vector to be parallel to the normal. I assume that this is computationally more expensive than the second option but that would need measuring. But you would get a uniform error distribution in return.
Option 2
Let's consider the plane orthogonal to the tangent. This plane can be defined either by the tangent or by two vectors that lie in the plane. We know one vector: the surface normal. If we know a second vector v, we can calculate the tangent as t = normalize(cross(normal, v)). To encode this vector, we can prescribe two components and solve for the remaining one. E.g. let our vector be (1, 1, x). Then, to encode the vector, we need to find x, such that cross((1, 1, x), normal) is parallel to the tangent. This can be done with some simple arithmetic. Again, you would need a few different vector templates to account for all scenarios. In the end, you have a scheme whose encoder is more complex but whose decoder couldn't be simpler. The error distribution will not be as uniform as in Option 1, but should be ok for a reasonable choice of vector templates.
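To make Option 2 concrete, here is a small GLSL sketch using only the single template v = (1, 1, x) discussed above. The function names are made up, and the reconstructed tangent may come out with a flipped sign, which would need an extra sign bit or a suitable choice of template.

// Hypothetical sketch of Option 2 with the single template v = (1, 1, x).
// Encoding: choose x so that (1, 1, x) is orthogonal to the tangent, i.e.
// dot((1, 1, x), t) == 0  =>  x = -(t.x + t.y) / t.z.
// (Breaks down when |t.z| is small, which is why several templates are needed.)
float encodeTangent( vec3 n, vec3 t )
{
    return -(t.x + t.y) / t.z;
}

// Decoding: the tangent is orthogonal to both the normal and (1, 1, x).
// Note that cross() may return -t instead of t; resolving that needs an
// extra sign bit (or picking the template accordingly).
vec3 decodeTangent( vec3 n, float x )
{
    return normalize( cross( n, vec3( 1.0, 1.0, x ) ) );
}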
A university assignment requires me to use the vertex coordinates I have to calculate the normals, and then the tangent from the normal values, so that I can create an object-space to texture-space matrix.
I have the code needed to make the matrix and the binormal, but I don't have the code for calculating the tangent. I tried to look online, but the answers usually confuse me. Can you explain clearly how it works?
EDIT: I have corrected what I wrote previously as clearly I misunderstood the assignment. Thank you everyone for helping me see that.
A tangent in the mathematical sense is a property of a geometric object, not of the normal map. In the case of normal mapping, we are additionally searching for a very specific tangent (there are infinitely many at each point; basically every vector in the plane defined by the normal is a tangent).
But let's go one step back: we want a space where the u-direction of the texture is mapped onto the tangent direction, the v-direction onto the bitangent/binormal, and the up-vector of the normal map onto the normal of the object. Thus the tangent for a triangle (v0, v1, v2) with uv-coordinates (uv0, uv1, uv2) can be calculated as:
dv1 = v1 - v0;
dv2 = v2 - v0;
duv1 = uv1 - uv0;
duv2 = uv2 - uv0;
r = 1.0f / (duv1.x * duv2.y - duv1.y * duv2.x);
tangent = (dv1 * duv2.y - dv2 * duv1.y) * r;
bitangent = (dv2 * duv1.x - dv1 * duv2.x) * r;
Once this has been done for all triangles, we have to smooth the tangents at shared vertices (quite similar to what happens with the normal). There are several algorithms for doing this, depending on what you need. One can, for example, weight the tangents by the surface area of the adjacent triangles or by their incident angle; a minimal unweighted version is sketched below.
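As an illustration, here is a minimal host-side sketch of the simplest (unweighted) smoothing, assuming glm-style vectors and that the per-triangle tangents have already been computed as above; all names are placeholders.

#include <vector>
#include <glm/glm.hpp>

void smoothTangents(const std::vector<unsigned>& indices,
                    const std::vector<glm::vec3>& perTriangleTangent,
                    std::vector<glm::vec3>& vertexTangent) // sized to vertex count, zero-initialized
{
    // Accumulate each triangle's tangent into its three vertices ...
    for (size_t t = 0; t < indices.size() / 3; ++t)
        for (int k = 0; k < 3; ++k)
            vertexTangent[indices[3 * t + k]] += perTriangleTangent[t];

    // ... and normalize the sums (an unweighted average of the adjacent triangles).
    for (glm::vec3& tangent : vertexTangent)
        tangent = glm::normalize(tangent);
}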
An implementation of this whole calculation, along with a more detailed explanation, can be found here: http://www.opengl-tutorial.org/intermediate-tutorials/tutorial-13-normal-mapping/
I've read the documentation here: http://www.opengl.org/sdk/docs/man2/xhtml/glRotate.xml
It specifies that angle is in degrees. It says that X,Y,Z are vectors. If I say glRotate(2,1,0,0) that says I will rotate 2 degrees about the X axis.
What happens if I say glRotate(2,0.5,0,0) or glRotate(2,0.0174524,0,0)?
I don't understand what's really happening in that situation; can someone explain it to me?
Does it rotate by a percentage of the angle?
It will still rotate 2 degrees about the X axis. That page you linked also says the following:
x y z = 1 (if not, the GL will normalize this vector).
Meaning the vector (x,y,z) is a unit vector (of length 1), and if it's not, GL will normalize the vector (dividing it by its length, making it of length 1).
Conclusion: the x,y and z parameters define a vector, of which the direction is the only relevant part, the length will be dealt with by the function. Thus you can safely put in any vector and it will simply rotate about that vector.
It doesn't say that x, y and z are vectors. If you open the page with a MathML-capable browser, you'll see something like
glRotate produces a rotation of angle degrees around the vector (x,y,z).
I.e. x, y, z are components of a single vector. Similarly, it doesn't say "x y z = 1 (if not, the GL will normalize this vector)": instead, it says:
||(x,y,z)||=1 (if not, the GL will normalize this vector).
So, (x,y,z) is the vector, rotation around which the function will produce. If the vector you supply is not normalized, the GL will normalize it, so glRotate(2,0.5,0,0) and glRotate(2,0.0174524,0,0) are equivalent.
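For example, each of the following calls, taken on its own, produces exactly the same transformation:

/* Each call produces the same 2-degree rotation about the x-axis,
   because the GL normalizes the axis vector first. */
glRotatef(2.0f, 1.0f,       0.0f, 0.0f);
glRotatef(2.0f, 0.5f,       0.0f, 0.0f);
glRotatef(2.0f, 0.0174524f, 0.0f, 0.0f);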
glRotate rotates the current matrix about a given vector (x, y, z) by angle degrees, so the parameters x, y, z are just the three components of a vector in 3D space. A vector has a direction and a length, but in this function the length is irrelevant: whether your vector has length 1, 100, or 0.2 makes no difference. It is only a direction marker.
I'm calculating surface normals for my analytical surface.
Some of the normals I'm getting are correct, but not all.
The code is:
SurfaceVertices3f[pos]   = i;
SurfaceVertices3f[pos+1] = j;
SurfaceVertices3f[pos+2] = (cos(i)*sin(j));
/* a and b hold the output of the partial differentiation of the vertices above:
   a is the derivative with respect to i and b with respect to j */
a[0] = 1;
a[1] = 0;
a[2] = -sin(i)*sin(j);
b[0] = 0;
b[1] = 1;
b[2] = cos(i)*cos(j);
normal_var = Vec3Df::crossProduct(a, b);
normal_var.normalize();
My output looks like this; the right image is mine and the left one is the reference I'm using:
http://tinypic.com/view.php?pic=73l9co&s=5
Could anyone tell me what mistake I'm making?
Your normal calculation is correct. The reference image just uses a different way of mapping normals to colors.
If you have a look at the green ground color, you will see that the color's norm is not 1. But normals should have a norm of 1. If we assume another common mapping from normal to color like this one:
color.rgb = normal.xyz / 2 + 0.5
we see that this is not a unit vector either. So either they used yet another mapping, or they just don't have unit-length normals.
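For reference, a tiny fragment-shader sketch of that common mapping, assuming vNormal is the interpolated surface normal handed over from the vertex shader (the name is made up):

varying vec3 vNormal;   // assumed: interpolated surface normal from the vertex shader

void main()
{
    vec3 n = normalize(vNormal);              // renormalize after interpolation
    gl_FragColor = vec4(n * 0.5 + 0.5, 1.0);  // map [-1, 1] to the [0, 1] color range
}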
I have a function in my program which rotates a point (x_p, y_p, z_p) around another point (x_m, y_m, z_m) by the angles w_nx and w_ny.
The new coordinates are stored in the global variables x_n, y_n, and z_n. Rotation around the y-axis (changing the value of w_nx, so that the y values are not affected) works correctly, but as soon as I rotate around the x- or z-axis (changing the value of w_ny), the coordinates are no longer accurate. I commented the line I think the fault is in, but I can't figure out what's wrong with that code.
void rotate(float x_m, float y_m, float z_m, float x_p, float y_p, float z_p, float w_nx, float w_ny)
{
    float z_b = z_p - z_m;
    float x_b = x_p - x_m;
    float y_b = y_p - y_m;
    float length_ = sqrt((z_b*z_b)+(x_b*x_b)+(y_b*y_b));
    float w_bx = asin(z_b/sqrt((x_b*x_b)+(z_b*z_b))) + w_nx;
    float w_by = asin(x_b/sqrt((x_b*x_b)+(y_b*y_b))) + w_ny; // <- the fault must be here
    x_n = cos(w_bx)*sin(w_by)*length_ + x_m;
    z_n = sin(w_bx)*sin(w_by)*length_ + z_m;
    y_n = cos(w_by)*length_ + y_m;
}
What the code almost does:
compute difference vector
convert vector into spherical coordinates
add w_nx and w_ny to the inclination and azimuth angles (see link for terminology)
convert modified spherical coordinates back into Cartesian coordinates
There are two problems:
the conversion is not correct; the computation you do yields two inclination angles (one relative to the x-axis, the other relative to the y-axis)
even if the computation were correct, a transformation in spherical coordinates is not the same as rotating around two axes
Therefore in this case using matrix and vector math will help:
b = p - m
b = RotationMatrixAroundX(wn_x) * b
b = RotationMatrixAroundY(wn_y) * b
n = m + b
See Wikipedia for the basic rotation matrices.
Try to use vector math. Decide in which order you rotate; perhaps first around the x-axis, then around the y-axis.
If you rotate around the z-axis [z' = z]:
x' = x*cos a - y*sin a;
y' = x*sin a + y*cos a;
The same repeated for the y-axis [y'' = y']:
x'' = x'*cos b - z'*sin b;
z'' = x'*sin b + z'*cos b;
And again for the x-axis [x''' = x'']:
y''' = y''*cos c - z''*sin c;
z''' = y''*sin c + z''*cos c;
And finally the question of rotating around some specific "point":
First, subtract the point from the coordinates, then apply the rotations and finally add the point back to the result.
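Putting the three rotations and the translation together, a plain C sketch might look like this (the function name and signature are made up for illustration; angles are in radians):

#include <math.h>

/* Rotate point p around pivot m: first about z by a, then y by b, then x by c,
   following the formulas above. */
void rotate_around_point(float p[3], const float m[3], float a, float b, float c)
{
    /* subtract the pivot */
    float x = p[0] - m[0], y = p[1] - m[1], z = p[2] - m[2];

    /* rotation about the z-axis */
    float x1 = x * cosf(a) - y * sinf(a);
    float y1 = x * sinf(a) + y * cosf(a);
    float z1 = z;

    /* rotation about the y-axis */
    float x2 = x1 * cosf(b) - z1 * sinf(b);
    float z2 = x1 * sinf(b) + z1 * cosf(b);
    float y2 = y1;

    /* rotation about the x-axis */
    float y3 = y2 * cosf(c) - z2 * sinf(c);
    float z3 = y2 * sinf(c) + z2 * cosf(c);

    /* add the pivot back */
    p[0] = x2 + m[0];
    p[1] = y3 + m[1];
    p[2] = z3 + m[2];
}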
The problem, as far as I can see, is a close relative of gimbal lock. The angle w_ny can't be measured relative to the fixed xyz coordinate system, but rather relative to the coordinate system that has been rotated by the angle w_nx.
As kakTuZ observed, your code converts the point to spherical coordinates. There's nothing inherently wrong with that: with longitude and latitude, one can reach all the places on Earth. And if one doesn't care about tilting the Earth's equatorial plane relative to its trajectory around the Sun, that's fine with me.
The result of not rotating the next reference axis along with the first rotation is that two points that are 1 km apart at the equator move closer together towards the poles, and at a latitude of 90 degrees they touch, even though the apparent purpose is to keep them 1 km apart wherever they are rotated.
If you want to transform coordinate systems rather than only points, you need three angles. But you are right: for transforming points, two angles are enough. For details, ask Wikipedia...
But when you work with OpenGL, you really should use OpenGL functions like glRotatef. These functions are calculated on the GPU, not on the CPU like your function. The doc is here.
Like many others have said, you should use glRotatef to rotate it for rendering. For collision handling, you can obtain its world-space position by multiplying its position vector by the OpenGL ModelView matrix that is on top of the stack at the point where it is rendered. Obtain that matrix with glGetFloatv, and then multiply the vector using either your own vector-matrix multiplication function or one of the many you can easily find online.
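A small sketch of that matrix approach (px, py, pz stand for the object's position; remember that OpenGL stores the matrix in column-major order):

GLfloat m[16];
glGetFloatv(GL_MODELVIEW_MATRIX, m);   /* ModelView on top of the stack */

/* transform the position (px, py, pz, 1) by the column-major matrix */
GLfloat tx = m[0]*px + m[4]*py + m[8]*pz  + m[12];
GLfloat ty = m[1]*px + m[5]*py + m[9]*pz  + m[13];
GLfloat tz = m[2]*px + m[6]*py + m[10]*pz + m[14];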
But, that would be a pain! Instead, look into using the GL feedback buffer. This buffer will simply store the points where the primitive would have been drawn instead of actually drawing the primitive, and then you can access them from there.
This is a good starting point.