Are the counts of v, vn and vt the same in an .obj model? I ask because I can only use one index per draw call, so I have this structure for my VBO:
struct VertexCoord
{
    float x, y, z, w;
    float nx, ny, nz;
    float u, v;
};
so I can use one index for all attributes by specifying a stride and offsets.
No, the numbers of v, vt and vn entries can all be different.
Notice that there is a list of "v" lines, then a list of "vt" lines, then "vn", and so on.
At the end there is a list of faces: 1/2/3, 4/5/4, etc.
Each face corner indexes a vertex position, a texture coordinate and a normal, but since those indices are independent of each other, the counts of the three attribute lists can also differ.
Only when every face corner looks like "1/1/1", "4/4/4", ... do we have the same number of each attribute.
This is a bit tricky to explain, but I hope you get the point :)
So in general you cannot directly map OBJ data into your VBO structure.
In OpenGL you can of course use indexed geometry, but that means one index per vertex covering all of its attributes; you cannot index positions and texture coordinates separately. You have to rearrange the data somehow.
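For illustration, here is a minimal sketch (not a complete loader; the container names are my own) of the usual rearrangement: every distinct v/vt/vn index triplet becomes one vertex, and the faces are rewritten against a single index list:

// Minimal sketch: collapse OBJ "p/t/n" index triplets into a single index list.
// Assumes the v / vt / vn lists have already been parsed into the vectors below.
#include <array>
#include <map>
#include <vector>

struct VertexCoord
{
    float x, y, z, w;
    float nx, ny, nz;
    float u, v;
};

struct ObjData
{
    std::vector<std::array<float, 3>> positions;  // "v" lines
    std::vector<std::array<float, 2>> texcoords;  // "vt" lines
    std::vector<std::array<float, 3>> normals;    // "vn" lines
};

// Returns the single index to use for the face corner p/t/n (0-based indices).
unsigned addCorner(int p, int t, int n, const ObjData& obj,
                   std::vector<VertexCoord>& vertices,
                   std::map<std::array<int, 3>, unsigned>& cache)
{
    const std::array<int, 3> key = { p, t, n };
    auto it = cache.find(key);
    if (it != cache.end())
        return it->second;                        // this combination already exists

    VertexCoord vc = {};
    vc.x = obj.positions[p][0]; vc.y = obj.positions[p][1]; vc.z = obj.positions[p][2]; vc.w = 1.0f;
    vc.u = obj.texcoords[t][0]; vc.v = obj.texcoords[t][1];
    vc.nx = obj.normals[n][0];  vc.ny = obj.normals[n][1];  vc.nz = obj.normals[n][2];

    const unsigned index = static_cast<unsigned>(vertices.size());
    vertices.push_back(vc);
    cache[key] = index;
    return index;
}

For every corner of every face you push the returned index into your index buffer; note that OBJ indices are 1-based, so subtract 1 before calling.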
here are some links:
http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Load_OBJ
http://xiangchen.wordpress.com/2010/05/04/loading-a-obj-file-in-opengl/
In the deferred shading engine I'm working on, I currently store the normal vector in a buffer with the internal format GL_RGBA16F.
I was always aware that this was probably not the best solution, but I had no time to deal with it.
Recently I read "Survey of Efficient Representations for Independent Unit Vectors", which inspired me to use Octahedral Normal Vectors (ONV) and to change the buffer to GL_RG16_SNORM:
Encode the normal vector (vec3 to vec2):
// Returns +/- 1
vec2 signNotZero( vec2 v )
{
    return vec2((v.x >= 0.0) ? +1.0 : -1.0, (v.y >= 0.0) ? +1.0 : -1.0);
}
// Assume normalized input. Output is on [-1, 1] for each component.
vec2 float32x3_to_oct( in vec3 v )
{
    // Project the sphere onto the octahedron, and then onto the xy plane
    vec2 p = v.xy * (1.0 / (abs(v.x) + abs(v.y) + abs(v.z)));
    // Reflect the folds of the lower hemisphere over the diagonals
    return (v.z <= 0.0) ? ((1.0 - abs(p.yx)) * signNotZero(p)) : p;
}
Decode the normal vector (vec2 to vec3):
vec3 oct_to_float32x3( vec2 e )
{
    vec3 v = vec3(e.xy, 1.0 - abs(e.x) - abs(e.y));
    if (v.z < 0.0) v.xy = (1.0 - abs(v.yx)) * signNotZero(v.xy);
    return normalize(v);
}
Since I have now implemented an anisotropic lighting model, I need to store the tangent vector as well as the normal vector. I want to store both vectors in one and the same color attachment of the framebuffer. That brings me to my question: what is an efficient compromise for packing a unit normal vector and a tangent vector into a buffer?
Of course it would be easy with the algorithms from the paper to store the normal vector in the RG channels and the tangent vector in the BA channels of a GL_RGBA16_SNORM buffer, and this is my current implementation too.
But since the normal vector and the tangent vector are always orthogonal, there must be a more elegant way, one which either increases accuracy or saves memory.
So the real question is: how can I take advantage of the fact that the two vectors are orthogonal? Can I store both vectors in a GL_RGB16_SNORM buffer, and if not, can I improve the accuracy when packing them into a GL_RGBA16_SNORM buffer?
The following considerations are purely mathematical and I have no experience with their practicality. However, I think that especially Option 2 might be a viable candidate.
Both of the following options have in common how they state the problem: Given a normal (that you can reconstruct using ONV), how can one encode the tangent with a single number.
Option 1
The first option is very close to what meowgoesthedog suggested. Define an arbitrary reference vector (e.g. (0, 0, 1)). Then encode the tangent as the angle (normalized to the [-1, 1] range) that you need to rotate this vector about the normal to match the tangent direction (after projecting on the tangent plane, of course). You will need two different reference vectors (or even three) and choose the correct one depending on the normal. You don't want the reference vector to be parallel to the normal. I assume that this is computationally more expensive than the second option but that would need measuring. But you would get a uniform error distribution in return.
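To make Option 1 a bit more concrete, here is a rough C++ sketch (assuming GLM for the vector math; the reference-vector selection and the sign conventions are my own choices, not a definitive implementation):

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/constants.hpp>

// Pick a reference vector that is guaranteed not to be (nearly) parallel to the normal.
glm::vec3 referenceFor(const glm::vec3& n)
{
    return (std::fabs(n.z) < 0.9f) ? glm::vec3(0.0f, 0.0f, 1.0f)
                                   : glm::vec3(1.0f, 0.0f, 0.0f);
}

// Encode: signed angle (normalized to [-1, 1]) that rotates the projected
// reference vector about the normal onto the tangent.
float encodeTangentAngle(const glm::vec3& n, const glm::vec3& t)
{
    const glm::vec3 r = referenceFor(n);
    const glm::vec3 proj = glm::normalize(r - n * glm::dot(r, n)); // project into tangent plane
    const float angle = std::atan2(glm::dot(glm::cross(proj, t), n), glm::dot(proj, t));
    return angle / glm::pi<float>();
}

// Decode: rotate the projected reference about the normal by the stored angle
// (Rodrigues' formula; the last term vanishes because proj is orthogonal to n).
glm::vec3 decodeTangentAngle(const glm::vec3& n, float encoded)
{
    const glm::vec3 r = referenceFor(n);
    const glm::vec3 proj = glm::normalize(r - n * glm::dot(r, n));
    const float angle = encoded * glm::pi<float>();
    return proj * std::cos(angle) + glm::cross(n, proj) * std::sin(angle);
}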
Option 2
Let's consider the plane orthogonal to the tangent. This plane can be defined either by the tangent or by two vectors that lie in the plane. We know one vector: the surface normal. If we know a second vector v, we can calculate the tangent as t = normalize(cross(normal, v)). To encode this vector, we can prescribe two components and solve for the remaining one. E.g. let our vector be (1, 1, x). Then, to encode the vector, we need to find x, such that cross((1, 1, x), normal) is parallel to the tangent. This can be done with some simple arithmetic. Again, you would need a few different vector templates to account for all scenarios. In the end, you have a scheme whose encoder is more complex but whose decoder couldn't be simpler. The error distribution will not be as uniform as in Option 1, but should be ok for a reasonable choice of vector templates.
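And a sketch of Option 2 for the single template (1, 1, x) only, again assuming GLM; as described above, a complete implementation needs a few more templates (e.g. when t.z is near zero, when the reconstructed tangent comes out flipped, or when x falls outside the storable range):

#include <cmath>
#include <glm/glm.hpp>

// Encoder: solve (1, 1, x) . t = 0 for x, so that cross(normal, (1, 1, x)) is
// parallel to the tangent. Returns false if this particular template cannot
// represent the tangent and another template has to be used instead.
bool encodeTangentOpt2(const glm::vec3& n, const glm::vec3& t, float& x)
{
    if (std::fabs(t.z) < 1e-4f)
        return false;                      // would divide by (almost) zero
    x = -(t.x + t.y) / t.z;                // note: |x| can get large for small |t.z|
    const glm::vec3 rec = glm::normalize(glm::cross(n, glm::vec3(1.0f, 1.0f, x)));
    return glm::dot(rec, t) > 0.0f;        // flipped result -> use another template
}

// Decoder: couldn't be simpler.
glm::vec3 decodeTangentOpt2(const glm::vec3& n, float x)
{
    return glm::normalize(glm::cross(n, glm::vec3(1.0f, 1.0f, x)));
}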
I'm writing a C++ algorithm that returns an X,Y position on a 2D texture. Using the X,Y value I wish to find the u,v texture coordinates of a 3D object (already mapped in software).
I have these calculations:
u = X/texture_width
v = texture_height - Y/texture_height
However the values calculated can not be found under vt in my obj file.
Help would be appreciated, many thanks.
Assuming that your (u,v) coordinates are supposed to be within the range [0,1] x [0,1], your computation is not quite right. It should be
u = X/texture_width
v = 1 - Y/texture_height
Given an image pixel coordinate (X,Y), this will compute the corresponding texture (u,v) coordinate. However, if you pick a random image pixel and convert its (X,Y) coordinate into a (u,v) coordinate, this coordinate will most likely not show up in the list of vt entries in the OBJ file.
The reason is that (u,v) coordinates in the OBJ file are only specified at the corners of the faces of your 3D object. The coordinates that you compute from image pixels likely lie in the interior of the faces.
Assuming your OBJ file represents a triangle mesh with positions and texture coordinates, the entries for faces will look something like this:
f p1/t1 p2/t2 p3/t3
where p1, p2, p3 are position indices and t1, t2, t3 are texture coordinate indices.
To find whether your computed (u,v) coordinate maps to a given triangle, you'll need to
1. find the texture coordinates (u1,v1), (u2,v2), (u3,v3) of the corners by looking up the vt entries with the indices t1, t2, t3, and
2. find out whether the point (u,v) lies inside the triangle with corners (u1,v1), (u2,v2), (u3,v3). There are several ways to compute this.
If you repeat this check for all f entries of the OBJ file, you'll find the triangle(s) which the given image pixel maps to. If you don't find any matches then the pixel does not appear on the surface of the object.
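For the point-in-triangle test in step 2 above, one common approach is barycentric coordinates; a minimal C++ sketch (the UV type and names are just illustrative):

#include <cmath>

struct UV { float u, v; };

// Returns true if p lies inside (or on an edge of) the triangle (a, b, c) in uv space.
bool pointInTriangle(UV p, UV a, UV b, UV c)
{
    const float den = (b.v - c.v) * (a.u - c.u) + (c.u - b.u) * (a.v - c.v);
    if (std::fabs(den) < 1e-12f)
        return false;                                 // degenerate triangle
    const float w1 = ((b.v - c.v) * (p.u - c.u) + (c.u - b.u) * (p.v - c.v)) / den;
    const float w2 = ((c.v - a.v) * (p.u - c.u) + (a.u - c.u) * (p.v - c.v)) / den;
    const float w3 = 1.0f - w1 - w2;
    return w1 >= 0.0f && w2 >= 0.0f && w3 >= 0.0f;
}

The weights w1, w2, w3 can then also be reused to interpolate positions or other per-vertex data at that pixel.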
I'm working with an OpenCL kernel where I need to use associated Legendre polynomials.
These are a set of fairly-difficult-to-compute polynomials indexed by integer orders n and m and accepting a real argument. The specifics of the actual polynomials are irrelevant, since I have a (slow) host-side function that can generate them, but the kernel-side function would need to look something like:
float legendre(int n, int m, float z)
{
    float3 lookupCoords;
    lookupCoords.x = n;
    lookupCoords.y = m;
    lookupCoords.z = z;
    // Do something here to interpolate Z for a given N and M...
}
I want to interpolate along the Z axis, but just have nearest neighbor for the n and m axes since they're only defined for integer values. A benefit of Z is that it's only defined between -1 and 1, so it already looks a lot like a texture coordinate.
How can I accomplish this with a sampler and lookup tables in OpenCL?
My first thought was to attempt to use a 3D texture filled with precomputed orders, but I only want to interpolate along one dimension (the real or Z argument), and I'm not sure what this would look like in OpenCL C.
In OpenCL 1.1, use read_imagef with an image3d_t as the first parameter, a sampler_t created with CLK_FILTER_LINEAR as the second parameter, and finally a float4 coord as the third parameter with the coordinates to read from.
To interpolate along only one axis, let that coordinate be any float value, but set the other two coordinates to floor(value) + 0.5f. This keeps them from being interpolated. Like this (only interpolating z):
float4 coordinate = (float4)(floor(x) + 0.5f, floor(y) + 0.5f, z, 0.0f);
In OpenCL 1.2 you could use image arrays, but I'm not sure it would be any faster, and NVIDIA does not support OpenCL 1.2 on Windows.
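For completeness, the host-side setup (OpenCL 1.1 C API, usable from C++) could look roughly like this sketch; the single-channel CL_R / CL_FLOAT image format, the clamp addressing mode and the function name are my own assumptions:

#include <CL/cl.h>

// data points to width*height*depth floats: one precomputed value per (n, m, z) sample.
cl_int createLegendreTable(cl_context ctx, size_t width, size_t height, size_t depth,
                           float* data, cl_mem* imageOut, cl_sampler* samplerOut)
{
    cl_int err = CL_SUCCESS;

    cl_image_format fmt;
    fmt.image_channel_order     = CL_R;       // single channel
    fmt.image_channel_data_type = CL_FLOAT;

    // OpenCL 1.1 style image creation (later deprecated in favour of clCreateImage).
    *imageOut = clCreateImage3D(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR,
                                &fmt, width, height, depth, 0, 0, data, &err);
    if (err != CL_SUCCESS) return err;

    // Unnormalized coordinates, so the kernel can use floor(value) + 0.5f as above.
    *samplerOut = clCreateSampler(ctx, CL_FALSE, CL_ADDRESS_CLAMP_TO_EDGE,
                                  CL_FILTER_LINEAR, &err);
    return err;
}

Declaring the sampler inside the kernel instead, with CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_LINEAR, would work just as well.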
I would like to plot a bunch of curves from multi-dimensional data. For each curve I have a dataset of M variables, where each variable is either a vector of length N or just a scalar value:
x1 = [x11,x12,.......,x1N] OR x1 = X1 (scalar value)
x2 = [x21,x22,.......,x2N] OR x2 = X2
....
xM = [xM1,xM2,.......,xMN] OR xM = XM
My curve shader takes three float attributes x,y,z which represent the variables that are currently on display.
For each curve and each of x, y, z, if the data for that variable is a vector, I bind a vertex buffer containing it to the corresponding attribute. Drawing multiple curves with only vector data works fine.
If the data for some variable is just a scalar number, I disable the attribute array and set the attribute value (for example X1) with:
glDisableVertexAttribArray(xLocation);
glVertexAttrib1f(xLocation,X1);
Now to my question: it seems that all curves use the same value in the shader for any vertex attribute whose array is disabled (namely the one set for the last curve that I draw), even though I reset the values between glDrawArrays() calls. Is it just not possible to use more than one value for an attribute with a disabled array, or should it be possible and I have a bug?
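For reference, a simplified sketch of the per-curve setup described above (the Curve type and names are made up for illustration; assumes a current GL context):

#include <vector>
#include <GL/glew.h>   // or whatever GL loader the project already uses

struct Curve
{
    bool    xIsVector;
    GLuint  xBuffer;      // VBO with per-vertex x data (used if xIsVector)
    float   xScalar;      // constant x value (used otherwise)
    GLsizei vertexCount;
};

void drawCurves(const std::vector<Curve>& curves, GLint xLocation)
{
    for (const Curve& curve : curves)
    {
        if (curve.xIsVector)
        {
            glEnableVertexAttribArray(xLocation);
            glBindBuffer(GL_ARRAY_BUFFER, curve.xBuffer);
            glVertexAttribPointer(xLocation, 1, GL_FLOAT, GL_FALSE, 0, nullptr);
        }
        else
        {
            glDisableVertexAttribArray(xLocation);
            glVertexAttrib1f(xLocation, curve.xScalar);  // constant value for this draw call
        }
        // ... y and z are handled the same way ...
        glDrawArrays(GL_LINE_STRIP, 0, curve.vertexCount);
    }
}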
I am making an OBJ importer and I happen to be stuck on how to construct the mesh from a given set of vertices. Consider a cube with these vertices (OBJ format, faces are triangles):
v -2.767533 -0.000000 2.927381
v 3.017295 -0.000000 2.927381
v -2.767533 6.311718 2.927381
v 3.017295 6.311718 2.927381
v -2.767533 6.311718 -2.845727
v 3.017295 6.311718 -2.845727
v -2.767533 -0.000000 -2.845727
v 3.017295 -0.000000 -2.845727
I know how to construct meshes using GLUT (making my calls to glBegin(GL_TRIANGLES), glVertex3f(x, y, z), glEnd(), etc.). It's just that I don't know how to combine the vertices to recreate the object. I thought the rule was to go v1, v2, v3, then v2, v3, v4, and so on until I have made enough triangles (with wrap-around combinations like v7, v8, v1 counting too). So 8 vertices gives 12 triangles for the cube, and for, say, a sphere with 56 vertices, (56 vertices * 2) - 4 = 108 triangles. For the cube, making the 12 triangles this way is OK, but for the sphere, making the 108 triangles from the 56 vertices does not work. So how do I combine the vertices in my glVertex calls to make it work for any mesh? Thank you!
There should be a bunch of "face" lines in the file (lines beginning with the letter "f") that tell you how to combine the vertices into an object. For example,
f 1 2 3
would mean a triangle composed of the first three vertices in the file. You might also see something like
f 1/1 2/2 3/3
which is a triangle that also includes texture coordinates,
f 1//1 2//2 3//3
which includes vertex normal vectors, or
f 1/1/1 2/2/2 3/3/3
which is one that includes both.
Wikipedia has an article that includes an overview of the format: https://en.wikipedia.org/wiki/Wavefront_.obj_file
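To connect this back to the glBegin/glVertex3f approach from the question, here is a minimal sketch that assumes the f lines contain plain triangles and have already been parsed into a flat list of 1-based position indices:

#include <array>
#include <vector>
#include <GL/gl.h>   // or however GL headers are included in your project

// positions holds the "v" lines; faceIndices holds the indices from the "f" lines,
// three per triangle, exactly as they appear in the file (1-based).
void drawMesh(const std::vector<std::array<float, 3>>& positions,
              const std::vector<int>& faceIndices)
{
    glBegin(GL_TRIANGLES);
    for (int idx : faceIndices)
    {
        const std::array<float, 3>& p = positions[idx - 1];  // OBJ indices start at 1
        glVertex3f(p[0], p[1], p[2]);
    }
    glEnd();
}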