OpenGL Vertex Attribute Reuse

In the default mesh vertex construction we do this.
//v = vertex, p = position, c = color
buffer = { v0 , p0, c0,
v1 , p0, c0,
v2 , p0, c0 };
And we have a triangle.
But I want to reuse the repeated attributes, something like this:
//p and c are the same for all vertexes
buffer = { v0, p0, c0 ,
v1,
v2 };
We could do this with uniform values in the shader, but I will render thousands of triangles with different positions from the same buffer:
buffer = { v0, p0, c0 ,
v1,
v2,
v3, p1, c1,
v4,
v5,
v6, p2, c2,
...};
My candidate solutions for now are:
1) Send a copy of the attributes for each vertex, like the first example (I don't want to, but it may be the best solution).
2) Send an index attribute (for lookup) and a uniform array for position/color (uniform size limit problem?).
3) Is there a better solution?

//p and c are the same for all vertexes
No, they are not. A vertex is the whole combination of v, p and c in your case. Change one of them, and you get an entirely different vertex. It's a common misconception, but that's how it works.

Assuming you are not on ancient hardware, vertex memory isn't really that significant; just copy the data and keep each vertex with a full set of attributes. It certainly won't be slower.
"Reusing" data in such a way usually results in a performance drop, because the GPU can't optimize the data access; it has to perform the complicated lookups you asked for instead.

Packing the normal vector and tangent vector

In the deferred shading engine I'm working on, I currently store the normal vector in a buffer with the internal format GL_RGBA16F.
I was always aware that this was probably not the best solution, but I had no time to deal with it.
Recently I read "Survey of Efficient Representations for Independent Unit Vectors", which inspired me to use Octahedral Normal Vectors (ONV) and to change the buffer to GL_RG16_SNORM:
Encode the normal vector (vec3 to vec2):
// Returns +/- 1
vec2 signNotZero( vec2 v )
{
    return vec2((v.x >= 0.0) ? +1.0 : -1.0, (v.y >= 0.0) ? +1.0 : -1.0);
}
// Assume normalized input. Output is on [-1, 1] for each component.
vec2 float32x3_to_oct( in vec3 v )
{
    // Project the sphere onto the octahedron, and then onto the xy plane
    vec2 p = v.xy * (1.0 / (abs(v.x) + abs(v.y) + abs(v.z)));
    // Reflect the folds of the lower hemisphere over the diagonals
    return (v.z <= 0.0) ? ((1.0 - abs(p.yx)) * signNotZero(p)) : p;
}
Decode the normal vector (vec2 to vec3):
vec3 oct_to_float32x3( vec2 e )
{
    vec3 v = vec3(e.xy, 1.0 - abs(e.x) - abs(e.y));
    if (v.z < 0) v.xy = (1.0 - abs(v.yx)) * signNotZero(v.xy);
    return normalize(v);
}
Since I have now implemented an anisotropic lighting model, I also need to store the tangent vector in addition to the normal vector, and I want to store both vectors in one and the same color attachment of the framebuffer. That brings me to my question: what is an efficient compromise for packing a unit normal vector and a tangent vector into a buffer?
Of course it would be easy with the algorithms from the paper to store the normal vector in the RG channels and the tangent vector in the BA channels of a GL_RGBA16_SNORM buffer, and this is my current implementation too.
But since the normal vector and the tangent vector are always orthogonal, there must be a more elegant way that either increases accuracy or saves memory.
So the real question is: how can I take advantage of the fact that the 2 vectors are orthogonal? Can I store both vectors in a GL_RGB16_SNORM buffer, and if not, can I improve the accuracy when I pack them into a GL_RGBA16_SNORM buffer?
The following considerations are purely mathematical and I have no experience with their practicality. However, I think that Option 2 in particular might be a viable candidate.
Both of the following options state the problem the same way: given a normal (that you can reconstruct using ONV), how can one encode the tangent with a single number?
Option 1
The first option is very close to what meowgoesthedog suggested. Define an arbitrary reference vector (e.g. (0, 0, 1)). Then encode the tangent as the angle (normalized to the [-1, 1] range) that you need to rotate this vector about the normal to match the tangent direction (after projecting on the tangent plane, of course). You will need two different reference vectors (or even three) and choose the correct one depending on the normal. You don't want the reference vector to be parallel to the normal. I assume that this is computationally more expensive than the second option but that would need measuring. But you would get a uniform error distribution in return.
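A rough sketch of Option 1 in C++ with GLM; the single reference vector (0, 0, 1) and all names are my assumptions, and a real implementation would switch to a second reference vector when the normal gets close to parallel with it:
#include <cmath>
#include <glm/glm.hpp>
// Build an orthonormal basis of the tangent plane from the reference vector.
static void tangentBasis(const glm::vec3& n, glm::vec3& u, glm::vec3& v)
{
    const glm::vec3 ref(0.0f, 0.0f, 1.0f);          // must not be parallel to n
    u = glm::normalize(ref - n * glm::dot(ref, n)); // ref projected onto the tangent plane
    v = glm::cross(n, u);                           // second in-plane axis
}
// Encode: signed angle of the tangent within that basis, normalized to [-1, 1].
float encodeTangentAngle(const glm::vec3& n, const glm::vec3& t)
{
    glm::vec3 u, v;
    tangentBasis(n, u, v);
    return std::atan2(glm::dot(t, v), glm::dot(t, u)) / 3.14159265358979f;
}
// Decode: rotate the projected reference by the stored angle.
glm::vec3 decodeTangentAngle(const glm::vec3& n, float a)
{
    glm::vec3 u, v;
    tangentBasis(n, u, v);
    float angle = a * 3.14159265358979f;
    return std::cos(angle) * u + std::sin(angle) * v;
}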
Option 2
Let's consider the plane orthogonal to the tangent. This plane can be defined either by the tangent or by two vectors that lie in the plane. We know one vector: the surface normal. If we know a second vector v, we can calculate the tangent as t = normalize(cross(normal, v)). To encode this vector, we can prescribe two components and solve for the remaining one. E.g. let our vector be (1, 1, x). Then, to encode the vector, we need to find x, such that cross((1, 1, x), normal) is parallel to the tangent. This can be done with some simple arithmetic. Again, you would need a few different vector templates to account for all scenarios. In the end, you have a scheme whose encoder is more complex but whose decoder couldn't be simpler. The error distribution will not be as uniform as in Option 1, but should be ok for a reasonable choice of vector templates.
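And a rough sketch of Option 2, again with GLM and the single template vector (1, 1, x). The names are illustrative; a fallback template for the degenerate case (tangent z near zero) and a sign bit are left out, since cross(normal, v) only recovers the tangent up to sign:
#include <glm/glm.hpp>
// Encode: find x so that (1, 1, x) lies in the plane orthogonal to the tangent,
// i.e. dot((1, 1, x), t) == 0.
float encodeTangent(const glm::vec3& t)
{
    // Assumes |t.z| is not too small; otherwise switch to another template.
    return -(t.x + t.y) / t.z;
}
// Decode: rebuild the in-plane vector and cross it with the (already decoded)
// normal. The result may need its sign flipped.
glm::vec3 decodeTangent(const glm::vec3& n, float x)
{
    return glm::normalize(glm::cross(n, glm::vec3(1.0f, 1.0f, x)));
}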

Algorithm for coloring a triangle by vertex color

I'm working on a toy raytracer using vertex-based triangles, similar to OpenGL. Each vertex has its own color, and the coloring of a triangle at each point should be a weighted average of the colors of the vertices, weighted by how close the point is to each vertex.
I can't figure out how to calculate the weight of each color at a given point on the triangle to mimic the color shading done by OpenGL, as shown by many examples here. I have several thoughts, but I'm not sure which one is correct (V is a vertex, U and W are the other two vertices, P is the point to color, C is the centroid of the triangle, and |PQ| is the distance from point P to point Q):
Weight is equal to 1-(|VP|/|VC|), but this would leave black at the centroid (all colors are weighted 0), which is not correct.
Weight is equal to 1-(|VP|/max(|VU|,|VW|)), so V has non-zero weight at the closer of the two vertices, which I don't think is correct.
Weight is equal to 1-(|VP|/min(|VU|,|VW|)), so V has zero weight at the closer of the two vertices, and negative weight (which would saturate to 0) at the further of the two. I'm not sure if this is right or not.
Line segment L extends from V through P to the opposite side of the triangle (UW): weight is 1 minus the ratio of |VP| to |L|, so the weight of V would be 0 all along the opposite side.
The last one seems like the most likely, but I'm having trouble implementing it, so I'm not sure if it's correct.
OpenGL uses barycentric coordinates (linear interpolation, to be precise, although you can change that using interpolation functions or qualifiers such as centroid or noperspective in recent versions).
In case you don't know, barycentric coordinates work like this:
For a location P in a triangle made of vertices V1, V2 and V3, with respective coefficients C1, C2, C3 such that C1+C2+C3=1 (those coefficients describe the influence of each vertex on the color at P), OpenGL must calculate the coefficients so that the result is equivalent to:
C1 = (AreaOfTriangle PV2V3) / (AreaOfTriangle V1V2V3)
C2 = (AreaOfTriangle PV3V1) / (AreaOfTriangle V1V2V3)
C3 = (AreaOfTriangle PV1V2) / (AreaOfTriangle V1V2V3)
and the area of a triangle can be calculated as half the length of the cross product of two vectors defining it (taken in a consistent winding order), for example AreaOfTriangle V1V2V3 = length(cross(V2-V1, V3-V1)) / 2. We then have something like:
float areaOfTriangle = length(cross(V2-V1, V3-V1)); //Two times the area of the triangle
float C1 = length(cross(V2-P, V3-P)) / areaOfTriangle; //Because A1*2/A*2 = A1/A
float C2 = length(cross(V3-P, V1-P)) / areaOfTriangle; //Because A2*2/A*2 = A2/A
float C3 = 1.0f - C1 - C2; //Because C1 + C2 + C3 = 1
But after some math (and a little bit of web research :D), the most efficient way I found of doing this was:
YOURVECTYPE sideVec1 = V2 - V1, sideVec2 = V3 - V1, sideVec3 = P - V1;
float dot11 = dot(sideVec1, sideVec1);
float dot12 = dot(sideVec1, sideVec2);
float dot22 = dot(sideVec2, sideVec2);
float dot31 = dot(sideVec3, sideVec1);
float dot32 = dot(sideVec3, sideVec2);
float denom = dot11 * dot22 - dot12 * dot12;
float C1 = (dot22 * dot31 - dot12 * dot32) / denom;
float C2 = (dot11 * dot32 - dot12 * dot31) / denom;
float C3 = 1.0f - C1 - C2;
Then, to interpolate things like colors (color1, color2 and color3 being the colors of your vertices), you do:
float color = C1*color1 + C2*color2 + C3*color3;
But beware that this doesn't work properly if you're using perspective transformations (or any transformation of the vertices involving the w component), so in that case you'll have to use:
float color = (C1*color1/w1 + C2*color2/w2 + C3*color3/w3)/(C1/w1 + C2/w2 + C3/w3);
w1, w2, and w3 are respectively the fourth components of the original vertices that made V1, V2 and V3.
V1, V2 and V3 in the first calculation must be 3-dimensional because of the cross product. In the second (more efficient) one, they can be either 2-dimensional or 3-dimensional; the results will be the same (and, as you probably guessed, 2D is faster). In both cases, if you're doing perspective transformations, don't forget to divide them by the fourth component of their original vectors and to use the second formula for interpolation. (And to be clear, none of the vectors in these calculations should include a fourth component!)
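If it helps, here is how the above could look wrapped up in C++ with GLM; the function names are my own, and the second function is just the perspective-correct formula from above:
#include <glm/glm.hpp>
glm::vec3 baryCoords(const glm::vec3& P, const glm::vec3& V1,
                     const glm::vec3& V2, const glm::vec3& V3)
{
    glm::vec3 side1 = V2 - V1, side2 = V3 - V1, side3 = P - V1;
    float dot11 = glm::dot(side1, side1);
    float dot12 = glm::dot(side1, side2);
    float dot22 = glm::dot(side2, side2);
    float dot31 = glm::dot(side3, side1);
    float dot32 = glm::dot(side3, side2);
    float denom = dot11 * dot22 - dot12 * dot12;
    float C1 = (dot22 * dot31 - dot12 * dot32) / denom;
    float C2 = (dot11 * dot32 - dot12 * dot31) / denom;
    return glm::vec3(C1, C2, 1.0f - C1 - C2); // (C1, C2, C3)
}
// Perspective-correct interpolation: divide each color by its vertex's w,
// blend with the barycentric weights, then renormalize by the blended 1/w.
glm::vec3 interpolateColor(const glm::vec3& C, const glm::vec3& col1,
                           const glm::vec3& col2, const glm::vec3& col3,
                           float w1, float w2, float w3)
{
    glm::vec3 numer = C.x * col1 / w1 + C.y * col2 / w2 + C.z * col3 / w3;
    float denom = C.x / w1 + C.y / w2 + C.z / w3;
    return numer / denom;
}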
And one last thing: I strongly advise you to use OpenGL, simply rendering a big quad on the screen and putting all your raytracing code in the shaders (although you'll need very strong OpenGL knowledge for advanced use), because you'll benefit from parallelism (even on a s#!+ video card), unless you're writing this on a 30-year-old computer or just doing it to see how it works.
IIRC, for this you don't really need to do anything in GLSL -- the interpolated color will already be the input color to your fragment shader if you just pass on the vertex color in the vertex shader.
Edit: Yes, this doesn't answer the question -- the correct answer is in the first comment above already: use barycentric coordinates (which is what GL does).

OpenGL C++ Plane Subdivision in QUADS (Radiosity) patches on arrays

I am trying to implement radiosity in OpenGL for my project. First, I need to be able to draw a plane (representing a wall), and then subdivide that plane into patches, i.e. smaller quads within that plane, by calling a method.
The difficulty is drawing the planes in such a way that when I draw another plane (another wall) with a different height or width, the vertices still line up; otherwise I get T-vertices, which I want to avoid.
I was thinking of something like
void drawPlaneMethod(float width, float height, int numberOfSubDivisions) {}
However, I might need to use ratios or something similar. I don't care about the Z coordinate, as I can rotate my planes after they are constructed. The number of subdivisions along the height and width must be proportional to that of the other wall.
If this is not possible, then I can make do with planes of the same height and width; however, that looks unrealistic, as I end up with a very high ceiling, and I can't make windows or doors without carefully creating many planes to represent a single wall.
Then I face another problem: storing information for each patch, such as colour, radiosity values, etc. I was thinking of using arrays of objects (patches), with the planes holding indexes to access the patch objects. As I am not very good with C++, I am finding it hard to use arrays of any sort (2-dimensional arrays would be ideal, I guess).
Any insight on this problem?
PS: I am using glBegin(GL_QUADS); I can switch to VBOs later, once I have the basics of my project done.
Usually, you will want to create more complex geometry (windows, doors, stairs) in a 3D modeling application, export it from there and import it into your application.
If you build your geometry that way, you can also enforce that all edges of all planes/quads are connected - and when you divide them all evenly into the same number of faces, the new vertices will naturally meet at the edges:
Assuming you have a 3D vector class with some basic arithmetic operators...
using Position3 = std::array<float, 3>; //math operators left as exercise :)
you could represent a plane/quad simply as an array of its 4 vertices' positions...
using Plane = std::array<Position3, 4>;
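For completeness, here's a minimal sketch of those math operators; the snippets below only need subtraction, addition and scaling by a float:
// assuming the Position3 alias and #include <array> from above
inline Position3 operator-(const Position3& a, const Position3& b)
{
    return { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
}
inline Position3 operator+(const Position3& a, const Position3& b)
{
    return { a[0] + b[0], a[1] + b[1], a[2] + b[2] };
}
inline Position3 operator*(const Position3& a, float s)
{
    return { a[0] * s, a[1] * s, a[2] * s };
}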
Let's assume the vertices are in counter clockwise order.
If we want to subdivide a quad into four quads, we will need 5 new vertices; let's call them p1 through p5:
These aren't hard to calculate:
Position3 e1 = plane[1] - plane[0];
Position3 e2 = plane[2] - plane[3];
Position3 e3 = plane[3] - plane[0];
Position3 e4 = plane[2] - plane[1];
Position3 p1 = e1 * 0.5f + plane[0];
Position3 p2 = e2 * 0.5f + plane[3];
Position3 p3 = e3 * 0.5f + plane[0];
Position3 p4 = e4 * 0.5f + plane[1];
Position3 e5 = p2 - p1;
Position3 p5 = e5 * 0.5f + p1;
and from these and our original vertices we can build the 4 new quads:
{{ plane[0], p1, p5, p3 },
{ p1, plane[1], p4, p5 },
{ p5, p4, plane[2], p2 },
{ p3, p5, p2, plane[3] }}
Now with a simple recursive function we can divide any planar quad into 4, 16, 64, ... smaller quads.
If you also want to be able to divide it into NxN smaller quads, you'll want to calculate N-1 points along each edge, e.g. for 3x3 at e1 * 1/3, e1 * 2/3 and so on. An iterative approach would probably be easier there, and you could even implement it in a geometry shader if you wanted.
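For illustration, here's one way that iterative NxN version could look on the CPU; subdividePlane is my own name, and it just bilinearly interpolates the quad's 4 corner positions:
#include <vector>
std::vector<Plane> subdividePlane(const Plane& plane, int n)
{
    // corner(u, v) bilinearly blends the 4 corner positions, with u, v in [0, 1]
    auto corner = [&](float u, float v) -> Position3 {
        Position3 bottom = plane[0] * (1.0f - u) + plane[1] * u;
        Position3 top = plane[3] * (1.0f - u) + plane[2] * u;
        return bottom * (1.0f - v) + top * v;
    };
    std::vector<Plane> result;
    result.reserve(n * n);
    for (int j = 0; j < n; ++j) {
        for (int i = 0; i < n; ++i) {
            float u0 = float(i) / n, v0 = float(j) / n;
            float u1 = float(i + 1) / n, v1 = float(j + 1) / n;
            // keep the counter-clockwise vertex order of the original quad
            Plane q = { corner(u0, v0), corner(u1, v0), corner(u1, v1), corner(u0, v1) };
            result.push_back(q);
        }
    }
    return result;
}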
Here's the result of running my little example algorithm (full source here) on a quad:
If you are trying to implement radiosity on the GPU though, you might not even have to implement any of this yourself if you use hardware tessellation.

About .obj 3d model format and VBO

Are the counts of v, vn and vt the same in an .obj model? I ask because I can only use one index per vertex when drawing, so I have this struct to use for my VBO:
struct VertexCoord
{
    float x, y, z, w;
    float nx, ny, nz;
    float u, v;
};
so I can use one index for all attributes by setting strides and offsets.
No, the numbers of v, vt and vn entries can be different.
Notice that there is a list of "v", then a list of "vt", then "vn", etc...
At the end there is a list of faces: 1/2/3, 4/5/4, etc.
Faces index vertex positions, texture coords and normals, but since those indexes are not related to each other, the counts of each attribute can also be different.
Only when the list of faces looks like "1/1/1", "4/4/4", ... would we have the same number of each attribute.
This is a bit tricky to explain, but I hope you get the point :)
So in general you cannot directly map obj data into your VBO structure.
In OpenGL you can use indexed geometry, of course, but that means one index covering all attributes of a particular vertex. You cannot index positions and texture coords separately; you have to somehow rearrange the data.
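To give an idea of that rearranging step, here's a rough sketch that expands each face corner (a v/vt/vn index triple) into one full VertexCoord from your struct. The container names and 0-based indices are my assumptions, and deduplicating identical triples (e.g. with a map) is left out for brevity:
#include <array>
#include <vector>
struct ObjCorner { int v, vt, vn; }; // 0-based indices into the lists below
std::vector<VertexCoord> buildVertices(const std::vector<ObjCorner>& corners,
                                       const std::vector<std::array<float, 4>>& positions,
                                       const std::vector<std::array<float, 2>>& texCoords,
                                       const std::vector<std::array<float, 3>>& normals)
{
    std::vector<VertexCoord> out;
    out.reserve(corners.size());
    for (const ObjCorner& c : corners)
    {
        VertexCoord vert;
        vert.x = positions[c.v][0]; vert.y = positions[c.v][1];
        vert.z = positions[c.v][2]; vert.w = positions[c.v][3];
        vert.nx = normals[c.vn][0]; vert.ny = normals[c.vn][1]; vert.nz = normals[c.vn][2];
        vert.u = texCoords[c.vt][0]; vert.v = texCoords[c.vt][1];
        out.push_back(vert);
    }
    return out;
}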
here are some links:
http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_Load_OBJ
http://xiangchen.wordpress.com/2010/05/04/loading-a-obj-file-in-opengl/

Generating a normal map from a height map?

I'm working on procedurally generating patches of dirt using randomized fractals for a video game. I've already generated a height map using the midpoint displacement algorithm and saved it to a texture. I have some ideas for how to turn that into a texture of normals, but some feedback would be much appreciated.
My height texture is currently a 257 x 257 gray-scale image (height values are scaled for visibility purposes):
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
So given the 3D coordinates of A, B, C, and D, would it make sense to:
split the four into two triangles: ABC and BCD
calculate the normals of those two faces via cross product
split into two triangles: ACD and ABD
calculate the normals of those two faces
average the four normals
...or is there a much easier method that I'm missing?
Example GLSL code from my water surface rendering shader:
#version 130
uniform sampler2D unit_wave;
noperspective in vec2 tex_coord;
const vec2 size = vec2(2.0,0.0);
const ivec3 off = ivec3(-1,0,1);
vec4 wave = texture(unit_wave, tex_coord);
float s11 = wave.x;
float s01 = textureOffset(unit_wave, tex_coord, off.xy).x;
float s21 = textureOffset(unit_wave, tex_coord, off.zy).x;
float s10 = textureOffset(unit_wave, tex_coord, off.yx).x;
float s12 = textureOffset(unit_wave, tex_coord, off.yz).x;
vec3 va = normalize(vec3(size.xy,s21-s01));
vec3 vb = normalize(vec3(size.yx,s12-s10));
vec4 bump = vec4( cross(va,vb), s11 );
The result is a bump vector: xyz=normal, a=height
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
No. Each pixel of the image represents a vertex of the grid, so intuitively, from symmetry, its normal is determined by heights of neighboring pixels (i-1,j), (i+1,j), (i,j-1), (i,j+1).
Given a function f : ℝ² → ℝ that describes a surface in ℝ³, a unit normal at (x,y) is given by
v = (−∂f/∂x, −∂f/∂y, 1) and n = v/|v|.
It can be proven that the best approximation to ∂f/∂x by two samples is achieved by:
∂f/∂x(x,y) = (f(x+ε,y) − f(x−ε,y))/(2ε)
To get a better approximation you need to use at least four points, thus adding a third point (i.e. (x,y)) doesn't improve the result.
Your heightmap is a sampling of some function f on a regular grid. Taking ε=1 you get:
2v = (f(x−1,y) − f(x+1,y), f(x,y−1) − f(x,y+1), 2)
Putting it into code would look like:
// sample the height map:
float fx0 = f(x-1,y), fx1 = f(x+1,y);
float fy0 = f(x,y-1), fy1 = f(x,y+1);
// the spacing of the grid in same units as the height map
float eps = ... ;
// plug into the formulae above:
vec3 n = normalize(vec3((fx0 - fx1)/(2*eps), (fy0 - fy1)/(2*eps), 1));
A common method is using a Sobel filter for a weighted/smooth derivative in each direction.
Start by sampling a 3x3 area of heights around each texel (here, [4] is the pixel we want the normal for).
[6][7][8]
[3][4][5]
[0][1][2]
Then,
//float s[9] contains above samples
vec3 n;
n.x = scale * -(s[2]-s[0]+2*(s[5]-s[3])+s[8]-s[6]);
n.y = scale * -(s[6]-s[0]+2*(s[7]-s[1])+s[8]-s[2]);
n.z = 1.0;
n = normalize(n);
Where scale can be adjusted to match the heightmap's real-world depth relative to its size.
If you think of each pixel as a vertex rather than a face, you can generate a simple triangular mesh.
+--+--+
|\ |\ |
| \| \|
+--+--+
|\ |\ |
| \| \|
+--+--+
Each vertex has an x and y coordinate corresponding to the x and y of the pixel in the map. The z coordinate is based on the value in the map at that location. Triangles can be generated explicitly or implicitly by their position in the grid.
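For example, the implicit version might look like this; the function name and the row-major vertex layout (index = y * width + x) are my assumptions:
#include <vector>
std::vector<unsigned int> buildGridIndices(int width, int height)
{
    std::vector<unsigned int> indices;
    for (int y = 0; y < height - 1; ++y) {
        for (int x = 0; x < width - 1; ++x) {
            unsigned int tl = y * width + x; // top-left vertex of this cell
            unsigned int tr = tl + 1;        // top-right
            unsigned int bl = tl + width;    // bottom-left
            unsigned int br = bl + 1;        // bottom-right
            // two triangles split along the tl-br diagonal, consistent winding
            indices.insert(indices.end(), { tl, bl, br });
            indices.insert(indices.end(), { tl, br, tr });
        }
    }
    return indices;
}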
What you need is the normal at each vertex.
A vertex normal can be computed by taking an area-weighted average of the surface normals for each of the triangles that meet at that point.
If you have a triangle with vertices v0, v1, v2, then you can use a vector cross product (of two vectors that lie on two of the sides of the triangle) to compute a vector in the direction of the normal and scaled proportionally to the area of the triangle.
Vector3 contribution = Cross(v1 - v0, v2 - v1);
Each of your vertices that isn't on the edge will be shared by six triangles. You can loop through those triangles, summing up the contributions, and then normalize the vector sum.
Note: You have to compute the cross products in a consistent way to make sure the normals are all pointing in the same direction. Always pick two sides in the same order (clockwise or counterclockwise). If you mix some of them up, those contributions will be pointing in the opposite direction.
For vertices on the edge, you end up with a shorter loop and a lot of special cases. It's probably easier to create a border around your grid of fake vertices and then compute the normals for the interior ones and discard the fake borders.
for each interior vertex V {
    Vector3 sum(0.0, 0.0, 0.0);
    for each of the six triangles T that share V {
        const Vector3 side1 = T.v1 - T.v0;
        const Vector3 side2 = T.v2 - T.v1;
        const Vector3 contribution = Cross(side1, side2);
        sum += contribution;
    }
    sum.Normalize();
    V.normal = sum;
}
If you need the normal at a particular point on a triangle (other than one of the vertices), you can interpolate by weighing the normals of the three vertices by the barycentric coordinates of your point. This is how graphics rasterizers treat the normal for shading. It allows a triangle mesh to appear like smooth, curved surface rather than a bunch of adjacent flat triangles.
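A tiny sketch of that interpolation, assuming the same Vector3 type as above (with scalar multiplication) and barycentric weights b0 + b1 + b2 = 1:
Vector3 InterpolatedNormal(const Vector3& n0, const Vector3& n1, const Vector3& n2,
                           float b0, float b1, float b2)
{
    Vector3 n = n0 * b0 + n1 * b1 + n2 * b2; // weighted blend of the vertex normals
    n.Normalize();                           // renormalize after blending
    return n;
}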
Tip: For your first test, use a perfectly flat grid and make sure all of the computed normals are pointing straight up.