obj file - averaging normals - opengl

I have an OBJ file which stores data this way:
v value1 value2 value3
f value1 value2 value3
First I calculate a normal for each face and then assign it to each vertex of that face:
for (int i = 0; i < verticesInd.size(); i += 3)
{
    glm::vec3 normal = glm::normalize(glm::cross(
        glm::vec3(vertices[verticesInd[i + 1]]) - glm::vec3(vertices[verticesInd[i]]),
        glm::vec3(vertices[verticesInd[i + 2]]) - glm::vec3(vertices[verticesInd[i]])));
    out_Normals[i] = normal;
    out_Normals[i + 1] = normal;
    out_Normals[i + 2] = normal;
}
To achieve flat shading I can duplicate the vertices:
for (int i = 0; i < verticesInd.size(); i++)
{
    out_Vertices.push_back(vertices[verticesInd[i]]);
}
and then draw the object using glDrawArrays:
glDrawArrays(GL_TRIANGLES, 0, out_Vertices.size());
To achieve smooth shading I need to average the normals for each vertex, but I have no idea how to find the adjacent faces.
Edit 1: I didn't notice the single s parameter before the f lines:
v value1 value2 value3
s 1
f value1 value2 value3
Edit 2: Normals averaging:
glm::vec3 tNormal;
for (int i = 0; i < vertices.size(); i++)
{
    for (int j = 0; j < verticesInd.size(); j++)
    {
        if (verticesInd[j] == i)
        {
            tNormal += faceNormals[j / 3];
        }
    }
    aNormals.push_back(glm::normalize(tNormal));
    tNormal = glm::vec3(0, 0, 0);
}
Edit 3: Face normals:
for (int i = 0; i < verticesInd.size(); i += 3)
{
    glm::vec3 normal = glm::normalize(glm::cross(
        glm::vec3(vertices[verticesInd[i + 1]]) - glm::vec3(vertices[verticesInd[i]]),
        glm::vec3(vertices[verticesInd[i + 2]]) - glm::vec3(vertices[verticesInd[i]])));
    faceNormals.push_back(normal);
}

In most object formats adjacent faces share vertices. Finding the smooth-shaded normal at a vertex is then just a question of averaging the normals of all the faces which use that vertex.
I suggest that you create an additional array the same size as your existing vertex array.
Iterate over each face, and for each vertex index, add that vertex's face normal to the new array.
At the end of the process, normalise the resulting normal vectors and then use those instead of the previously computed face normals.
If I understand your data structures correctly, it would look something like this:
std::vector<glm::vec3> aNormals(vertices.size(), glm::vec3(0.0f)); // one for each vertex, zero-initialised
for (int i = 0; i < verticesInd.size(); ++i) {
    int f = i / 3;          // which face this index is part of (3 per face)
    int v = verticesInd[i]; // which vertex number is being used
    aNormals[v] += faceNormals[f]; // add the face normal to this vertex
}
// now normalise aNormals
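The final normalisation pass might look like this (a sketch, reusing the names above). Note that dividing by the number of contributing faces is unnecessary: normalising the sum gives the same direction as normalising the average.
for (glm::vec3& n : aNormals)
    n = glm::normalize(n); // the averaged direction is just the normalised sum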

I have no idea how to find adjacent faces.
Add a list of faces to your temporary, in-OBJ-loader vertex storage. As you're processing the face lines, add each new face to the face list of every vertex it references.
That way you can spin over all the vertices at the end, look up the faces each one belongs to, grab the face normals, and average them.
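As a sketch, assuming the vertices, verticesInd, faceNormals and aNormals vectors from the question, those per-vertex face lists could be built and consumed like this:
// For each vertex, record every face (triangle) that references it.
std::vector<std::vector<int>> facesOfVertex(vertices.size());
for (size_t i = 0; i < verticesInd.size(); ++i)
    facesOfVertex[verticesInd[i]].push_back((int)(i / 3)); // 3 indices per face

// Average the face normals over each vertex's face list.
for (size_t v = 0; v < vertices.size(); ++v)
{
    glm::vec3 sum(0.0f);
    for (int f : facesOfVertex[v])
        sum += faceNormals[f];
    aNormals.push_back(glm::normalize(sum));
}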
If your OBJ doesn't have such nice baked-in connectivity information (and there's no requirement that it does) then you'll have to do a nearest-neighbor search on each vertex to find vertices (and corresponding faces) that are near it. Use your favorite spatial index to speed those sorts of queries up.
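A cheap stand-in for a full spatial index is to quantize positions and weld vertices that land in the same cell; a sketch, where the merge tolerance eps is an assumption you'd tune to your data:
#include <cmath>
#include <map>
#include <tuple>
// Merge positionally-duplicated vertices so faces end up sharing indices.
std::map<std::tuple<long, long, long>, int> cells;
std::vector<int> remap(vertices.size());
const float eps = 1e-5f; // merge tolerance (assumed)
for (size_t i = 0; i < vertices.size(); ++i)
{
    auto key = std::make_tuple((long)std::lround(vertices[i].x / eps),
                               (long)std::lround(vertices[i].y / eps),
                               (long)std::lround(vertices[i].z / eps));
    auto result = cells.emplace(key, (int)i);
    remap[i] = result.first->second; // first vertex seen in this cell wins
}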

Follow this guy https://www.youtube.com/watch?v=MRD_zN0SWh0&feature=plcp on per-fragment lighting. It contains the same equations for finding the normals on a per-vertex basis, as far as I know.
You would need a list of faces. If you've ever looked inside a Wavefront OBJ, they contain groups of triangle indices, e.g. 4,7,2 and 1,2,3, with each group of 3 numbers representing a face. In a typical mesh each vertex is shared by several faces. Find every group that contains a given vertex index and you have that vertex's faces; look up their corresponding normal values and then average them.
Alnitak has valid info too: you have a list of vertices, then list them off in groups of 3 so that you can reuse (share) vertices as face data.

Related

Compute normal based on Voronoi pattern

I am applying a 3D Voronoi pattern to a mesh. Using the loops below, I am able to compute the cell position, an id and the distance.
But I would like to compute a normal based on the generated pattern.
How can I generate a normal, or reorient the current normal, based on this pattern and its associated cells?
The aim is to give the mesh a faceted look: each cell's normal should point in one direction, and adjacent cells should point in different directions. Those directions should be based on the original mesh normals; I don't want to totally break the mesh normals and have them point in random directions.
Here's how I generate the Voronoi pattern.
float3 p = floor(position);
float3 f = frac(position);
float id = 0.0;
float distance = 10.0;
for (int k = -1; k <= 1; k++)
{
    for (int j = -1; j <= 1; j++)
    {
        for (int i = -1; i <= 1; i++)
        {
            float3 cell = float3(float(i), float(j), float(k));
            float3 random = hash3(p + cell);
            float3 r = cell - f + random * angleOffset;
            float d = dot(r, r);
            if (d < distance)
            {
                id = random;
                distance = d;
                cellPosition = cell + p;
                normal = ?
            }
        }
    }
}
And here's the hash function:
float3 hash3(float3 x)
{
    x = float3(dot(x, float3(127.1, 311.7, 74.7)),
               dot(x, float3(269.5, 183.3, 246.1)),
               dot(x, float3(113.5, 271.9, 124.6)));
    return frac(sin(x) * 43758.5453123);
}
This looks like a fairly expensive fragment shader; it might make more sense to bake out a normal map than to try to do this in real time.
It's hard to tell what your shader is doing, but I think it's checking every pixel against a 3x3x3 grid of Voronoi cells. One weird thing is that random is a float3 that somehow gets assigned to id, which is just a scalar.
Anyway, it sounds like you would like to perturb the mesh-supplied normal by a random vector, but you'd like all pixels corresponding to a particular Voronoi cell to be perturbed in the same way.
Since you already have a variable called random which presumably represents a random value generated deterministically as a function of the voronoi cell, you could just use that. For example, the following would perturb the normal by a small amount:
normal = normalize(meshNormal + 0.2 * normalize(random));
If you want to give more weight to the random component, just increase the 0.2 constant.

Procedurally generate seamless fractal noise textures

I have been generating noise textures to use as height maps for terrain generation. In this application, initially there is a 256x256 noise texture that is used to create a block of land that the user is free to roam around. When the user reaches a certain boundary in-game the application generates a new texture and thus another block of terrain.
In the code, a table of 64x64 random values is generated, and the values in the texture are the result of interpolating between these points at various 'frequencies' and 'wavelengths' using a smoothstep function, which are then combined to form the final noise texture; finally, the values in the texture are divided through by the largest value to effectively normalize it. When the player is at the boundary and a new texture is created, the random number table re-uses the values from the appropriate edge of the previous texture (e.g. if the new texture is for a block of land on the +X side of the previous one, the last value in every row of the previous texture is used as the first value in every row of random numbers in the next).
My problem is this: even though the same values are being used across the edges of adjacent textures, they are nowhere near seamless - some neighboring points on the terrain are mismatched by many, many metres. My guess is that the changing frequencies used to sample the random number table are having a significant effect on all areas of the texture. So how might one generate fractal noise procedurally, i.e. as needed, AND have it look continuous with adjacent values?
Here is a section of the code that returns a value interpolated between the points on the random number table given a point P:
float MainApp::assessVal(glm::vec2 P){
    // Integer component of P
    int xi = (int)P.x;
    int yi = (int)P.y;
    // Decimal component of P
    float xr = P.x - xi;
    float yr = P.y - yi;
    // Find the grid square P lies inside of
    int x0 = xi % randX;
    int x1 = (xi + 1) % randX;
    int y0 = yi % randY;
    int y1 = (yi + 1) % randY;
    // Get random values for the 4 nodes
    float r00 = randNodes->randNodes[y0][x0];
    float r10 = randNodes->randNodes[y0][x1];
    float r01 = randNodes->randNodes[y1][x0];
    float r11 = randNodes->randNodes[y1][x1];
    // Smoother interpolation so the
    // texture appears less blocky
    float sx = smoothstep(xr);
    float sy = smoothstep(yr);
    // Find the weighted value of the 4
    // random values. This will be the
    // final value in the noise texture
    float sx0 = mix(r00, r10, sx);
    float sx1 = mix(r01, r11, sx);
    return mix(sx0, sx1, sy);
}
Where randNodes is a 2 dimensional array containing the random values.
And here is the code that takes all the values returned from the above function and constructs texture data:
int layers = 5;
float wavelength = 1, frequency = 1;
for (int k = 0; k < layers; k++) {
    for (int i = 0; i < stepsY; i++) {
        for (int j = 0; j < stepsX; j++) {
            // Compute value for (stepsX * stepsY) interpolation points
            // across the grid of random numbers
            glm::vec2 P = glm::vec2((float)j / stepsX * randX, (float)i / stepsY * randY);
            buf[i * stepsX + j] += assessVal(P * wavelength) * frequency; // row-major: stepsX values per row
        }
    }
    // repeat (layers) times with different signals
    wavelength *= 0.5;
    frequency *= 2;
}
for (int i = 0; i < buf.size(); i++) {
    // divide all data by the largest value.
    // this normalises the data to avoid saturation
    buf[i] /= largestVal;
}
Finally, here is an example of two textures generated by these functions that should be seamless, but aren't: placed side by side, the two images are obviously mismatched.
Your code wraps the values only in the domain of the noise texture you read from, but not in the domain of the texture being generated.
For the texture T of size stepsX to be repeatable (let's consider the 1-D case for simplicity) you must have
T(0) == T(stepsX)
Or in your case (substitute j = 0 and j = stepsX):
assessVal(0) == assessVal(randX * wavelength)
When k >= 1 this is clearly not true in your code, because
(randX / pow(2, k)) % randX != 0
For example, with randX = 64 and k = 1, the right-hand edge of the generated texture samples assessVal at 32, right in the middle of the random grid's period.
One solution is to decrease randX and randY as you go up in frequency.
But my typical approach would rather be to start from a 2x2 random texture, upscale it to 4x4 with GL_REPEAT, add a bit more per-pixel noise, and continue upscaling to 8x8 etc. until I get to the desired size.
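A rough CPU-side sketch of that idea (assumptions: square, single-channel float textures stored row-major, and bilinear upsampling with wrapped lookups to mimic GL_REPEAT):
#include <cstdlib>
#include <vector>

// Upscale an n x n tileable texture to 2n x 2n. Wrapping the bilinear
// lookups keeps the result tileable, mimicking GL_REPEAT.
std::vector<float> upscaleWrapped(const std::vector<float>& src, int n)
{
    int m = n * 2;
    std::vector<float> dst(m * m);
    for (int y = 0; y < m; ++y)
        for (int x = 0; x < m; ++x)
        {
            float u = x * 0.5f, v = y * 0.5f;
            int x0 = (int)u % n, y0 = (int)v % n;
            int x1 = (x0 + 1) % n, y1 = (y0 + 1) % n; // wrap around the edge
            float fx = u - (int)u, fy = v - (int)v;
            float a = src[y0 * n + x0] * (1 - fx) + src[y0 * n + x1] * fx;
            float b = src[y1 * n + x0] * (1 - fx) + src[y1 * n + x1] * fx;
            dst[y * m + x] = a * (1 - fy) + b * fy;
        }
    return dst;
}

// Start from 2x2 random values, then repeatedly upscale and add
// smaller-amplitude per-pixel noise until the target size is reached.
std::vector<float> makeNoise(int targetSize)
{
    int n = 2;
    std::vector<float> tex(n * n);
    for (float& t : tex) t = rand() / (float)RAND_MAX;
    float amplitude = 0.5f;
    while (n < targetSize)
    {
        tex = upscaleWrapped(tex, n);
        n *= 2;
        for (float& t : tex)
            t += amplitude * (rand() / (float)RAND_MAX - 0.5f);
        amplitude *= 0.5f;
    }
    return tex;
}
Because every lookup wraps, each intermediate level stays tileable, so the final texture repeats seamlessly.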
The root cause, of course, is that your smoothing changes pixels to match their neighbors, but you later add new neighbors and do not re-smooth the pixels that got them.
One simple and common workaround is to keep an edge of invisible pixels whose width is half that of your smoothing kernel. Now, when expanding the area, you can re-smooth those invisible pixels just before they're revealed. Don't forget to add a new edge of invisible pixels!

Silhouette detection (geometry shader) for edges that connect only one triangle

I want to draw a mesh silhouette using a geometry shader (line_strip).
The problem occurs when the mesh has edges that belong to only one triangle (like the edge of a piece of cloth). Closed meshes (where every edge connects two triangles) work.
I built an adjacency index buffer, using RESTART_INDEX where a neighbor vertex didn't exist, and during rendering I tried to use:
glEnable(GL_PRIMITIVE_RESTART);
glPrimitiveRestartIndex(RESTART_INDEX); // RESTART_INDEX = ushort(-1)
The result:
As you can see, the head, feet and hands are OK; other parts of the model, not so much.
// silhouette.gs.glsl
void main(void) {
vec3 e1 = gs_in[2].vPosition - gs_in[0].vPosition; // 1 ---- 2 ----- 3
vec3 e2 = gs_in[4].vPosition - gs_in[0].vPosition; // \ / \ /
vec3 e3 = gs_in[1].vPosition - gs_in[0].vPosition; // \ e1 \ /
vec3 e4 = gs_in[3].vPosition - gs_in[2].vPosition; // \ / \ /
vec3 e5 = gs_in[4].vPosition - gs_in[2].vPosition; // 0 -e2-- 4
vec3 e6 = gs_in[5].vPosition - gs_in[0].vPosition; // \ /
// // \ /
vec3 vN = cross(e1, e2); // \ /
vec3 vL = u_vLightPos - gs_in[0].vPosition; // 5
How does the geometry shader handle vertices when it encounters a primitive restart index? For some triangles, adjacency slots 1, 3 or 5 wouldn't have any vertex.
If a triangle edge is not part of any other triangle then it is still adjacent to the back face of its own triangle. For example: if there is no other triangle attached to e1 in your diagram, you can use the third vertex of the triangle (4 in this case, as e1 consists of 0 and 2) in place of 1. This will not require any additional check in the geometry shader.
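In index-buffer terms, that fallback might look like this (a sketch using the triangle/index variables from the adjacency-builder snippet further down):
// Hypothetical fallback while building the adjacency index buffer:
// if edge (index[i], index[(i+1)%3]) has no opposite triangle, use the
// triangle's own third vertex, so the "adjacent" face is its own back face.
if (tj == (ushort)(-1))
    index = triangle->index[(i + 2) % 3];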
I removed:
glEnable(GL_PRIMITIVE_RESTART);
glPrimitiveRestartIndex(RESTART_INDEX);
during the draw pass, and this is the result:
which is closer to the desired result.
Another possible solution to this problem could be tackled while constructing the adjacency index buffer.
// for every mesh
//   for every triangle in the mesh // first pass - edges & neighbors
...
//   for every triangle in the mesh // second pass - build indices
const triangle_t* triangle = &tcache[ti]; // unique index triangles
for (i = 0; i < 3; ++i) // triangle = 3 edges
{
    edge_t edge(triangle->index[i], triangle->index[(i + 1) % 3]); // get edge
    neighbors_t neighbors = oEdgeMap[edge]; // get edge neighbors
    ushort tj = getOther(neighbors, ti);    // get the opposite triangle index
    ushort index = RESTART_INDEX;           // = ushort(-1)
    if (tj != (ushort)(-1))
    {
        const triangle_t& opposite = tcache[tj];  // opposite triangle
        index = getOppositeIndex(opposite, edge); // opposite vertex from the other triangle
    }
    else
    {
        index = triangle->index[i];
    }
    indices[ii + i * 2 + 0] = triangle->index[i];
    indices[ii + i * 2 + 1] = index;
}
Instead of using a RESTART_INDEX as mentioned in the first post, this uses the same vertex. Border triangles will then have a neighbor triangle with a zero-length edge (built from the same vertices). I think this needs to be checked for in the geometry shader.
Does the geometry shader even fire for triangles with incomplete adjacency information?

Calculating offset of linear gaussian blur based on variable amount of weights

I've recently been looking into optimising a Gaussian blur shader by using a linear sampling method instead of a discrete one.
I read a very informative article:
Efficient Gaussian Blur With Linear Sampling
In case of such a merge of two texels we have to adjust the coordinates that the distance of the determined coordinate from the texel #1 center should be equal to the weight of texel #2 divided by the sum of the two weights. In the same style, the distance of the determined coordinate from the texel #2 center should be equal to the weight of texel #1 divided by the sum of the two weights.
While I understand the logic behind this, I'm not sure how they arrived at the figures for the offsets with the given weights. Would anyone be kind enough to shed more light on this for me, and also explain how, given uniform weight variables, we could calculate the correct offsets?
Regarding non-hardcoded offsets, I found another post which recommended a method of calculating the offsets; however, no solution was posted for a variable number of samples. How could I achieve that?
vec2 offsets[3];
offsets[0] = vec2(0.0, 0.0);
offsets[1] = vec2(dFdx(gl_TexCoord[0].s), dFdy(gl_TexCoord[0].t));
offsets[2] = offsets[1] + offsets[1];
I just came across the same article, and found it tremendously useful as well. The formula to calculate the weights and offsets is given in it:
weight(t1, t2) = weight(t1) + weight(t2)
offset(t1, t2) = (offset(t1) * weight(t1) + offset(t2) * weight(t2)) / weight(t1, t2)
(source: rastergrid.com)
The author arrived at the weights by using the 12th row in Pascal's triangle. So, for example, the second offset is calculated by:
1.3846153846 = (1 * 792 + 2 * 495) / (792 + 495)
The second weight is calculated by:
0.3162162162 = (792 + 495) / 4070
I'm not sure what you mean by calculating the offsets given uniform weight variables, but if it's of help I've included a C++ program at the end of this post that outputs the offsets and weights for an arbitrary row of Pascal's triangle.
If I understand your question about non-hardcoded offsets correctly, you want to be able to calculate the offsets on the fly in GLSL? You could do that by porting the program below, but you'll still need to hardcode the binomial coefficients, or calculate those on the fly as well. However, that would be expensive since it would have to be done for every pixel. I think a better alternative is to precalculate the offsets and weights in C++ (or whatever programming language you're using), and then bind them to a uniform array in GLSL. Here's the GLSL snippet for what I mean:
uniform float offset[5];
uniform float weight[5];
uniform int numOffsets;
You'll want to replace "5" with the maximum number of offsets/weights you plan to use, and set numOffsets to the number you're using for a particular operation.
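On the C++ side, uploading the precomputed values might then look something like this (a sketch; prog is assumed to be the linked and currently bound blur program, and offsets/weights the std::vector<float> results of the program below):
// Bind the precomputed blur parameters to the uniforms declared above.
GLint locOffset = glGetUniformLocation(prog, "offset");
GLint locWeight = glGetUniformLocation(prog, "weight");
GLint locCount  = glGetUniformLocation(prog, "numOffsets");
glUniform1fv(locOffset, (GLsizei)offsets.size(), offsets.data());
glUniform1fv(locWeight, (GLsizei)weights.size(), weights.data());
glUniform1i(locCount, (GLint)offsets.size());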
Here's the program that outputs the weights and offsets. "coeffs" should be replaced with the binomial coefficients of the row you want from Pascal's triangle, listed from the center coefficient outward. The ones included here are from the 22nd row:
#include <iostream>
#include <vector>
using namespace std;

int main(int argc, char* argv[])
{
    float coeffs[] = { 705432, 646646, 497420, 319770, 170544, 74613, 26334, 7315, 1540, 231 };
    double total = coeffs[0];
    for (int i = 1; i < sizeof(coeffs) / sizeof(float); i++)
    {
        total += 2 * coeffs[i];
    }
    vector<float> offsets;
    vector<float> weights;
    offsets.push_back(0);
    weights.push_back(coeffs[0] / total);
    for (int i = 1; i <= (sizeof(coeffs) / sizeof(float) - 1) / 2; i++)
    {
        int index = (i - 1) * 2 + 1;
        float weight = coeffs[index] + coeffs[index + 1];
        offsets.push_back((coeffs[index] * index + coeffs[index + 1] * (index + 1)) / weight);
        weights.push_back(weight / total);
    }
    for (int i = 0; i < offsets.size(); i++)
    {
        cout << offsets[i] << ", ";
    }
    cout << "\n";
    for (int i = 0; i < weights.size(); i++)
    {
        cout << weights[i] << ", ";
    }
    cout << "\n";
}
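As a sanity check: feeding it the 12th-row coefficients { 924, 792, 495, 220, 66 } reproduces the figures quoted above from the article, i.e. offsets 0, 1.3846153846, 3.2307692308 and weights 0.2270270270, 0.3162162162, 0.0702702703.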

Generating a normal map from a height map?

I'm working on procedurally generating patches of dirt using randomized fractals for a video game. I've already generated a height map using the midpoint displacement algorithm and saved it to a texture. I have some ideas for how to turn that into a texture of normals, but some feedback would be much appreciated.
My height texture is currently a 257 x 257 gray-scale image (height values are scaled for visibility purposes):
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
So given the 3D coordinates of A, B, C, and D, would it make sense to:
split the four into two triangles: ABC and BCD
calculate the normals of those two faces via cross product
split into two triangles: ACD and ABD
calculate the normals of those two faces
average the four normals
...or is there a much easier method that I'm missing?
Example GLSL code from my water surface rendering shader:
#version 130
uniform sampler2D unit_wave;
noperspective in vec2 tex_coord;
const vec2 size = vec2(2.0,0.0);
const ivec3 off = ivec3(-1,0,1);
vec4 wave = texture(unit_wave, tex_coord);
float s11 = wave.x;
float s01 = textureOffset(unit_wave, tex_coord, off.xy).x;
float s21 = textureOffset(unit_wave, tex_coord, off.zy).x;
float s10 = textureOffset(unit_wave, tex_coord, off.yx).x;
float s12 = textureOffset(unit_wave, tex_coord, off.yz).x;
vec3 va = normalize(vec3(size.xy,s21-s01));
vec3 vb = normalize(vec3(size.yx,s12-s10));
vec4 bump = vec4( cross(va,vb), s11 );
The result is a bump vector: xyz=normal, a=height
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
No. Each pixel of the image represents a vertex of the grid, so intuitively, from symmetry, its normal is determined by heights of neighboring pixels (i-1,j), (i+1,j), (i,j-1), (i,j+1).
Given a function f : ℝ² → ℝ that describes a surface in ℝ³, a unit normal at (x, y) is given by
v = (−∂f/∂x, −∂f/∂y, 1) and n = v/|v|.
It can be proven that the best approximation to ∂f/∂x by two samples is achieved by:
∂f/∂x(x,y) = (f(x+ε,y) − f(x−ε,y))/(2ε)
To get a better approximation you need to use at least four points, thus adding a third point (i.e. (x,y)) doesn't improve the result.
Your heightmap is a sampling of some function f on a regular grid. Taking ε=1 you get:
2v = (f(x−1,y) − f(x+1,y), f(x,y−1) − f(x,y+1), 2)
Putting it into code would look like:
// sample the height map:
float fx0 = f(x-1,y), fx1 = f(x+1,y);
float fy0 = f(x,y-1), fy1 = f(x,y+1);
// the spacing of the grid in same units as the height map
float eps = ... ;
// plug into the formulae above:
vec3 n = normalize(vec3((fx0 - fx1)/(2*eps), (fy0 - fy1)/(2*eps), 1));
A common method is using a Sobel filter for a weighted/smooth derivative in each direction.
Start by sampling a 3x3 area of heights around each texel (here, [4] is the pixel we want the normal for).
[6][7][8]
[3][4][5]
[0][1][2]
Then,
//float s[9] contains above samples
vec3 n;
n.x = scale * -(s[2]-s[0]+2*(s[5]-s[3])+s[8]-s[6]);
n.y = scale * -(s[6]-s[0]+2*(s[7]-s[1])+s[8]-s[2]);
n.z = 1.0;
n = normalize(n);
Where scale can be adjusted to match the heightmap real world depth relative to its size.
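Filling s[] on the CPU side might look like this (a sketch; h(x, y) is an assumed clamped or wrapped heightmap lookup, with y increasing upward to match the layout above):
// Gather the 3x3 neighborhood in the layout shown above
// (s[0] = bottom-left, s[4] = center, s[8] = top-right).
float s[9];
for (int dy = -1; dy <= 1; ++dy)
    for (int dx = -1; dx <= 1; ++dx)
        s[(dy + 1) * 3 + (dx + 1)] = h(x + dx, y + dy);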
If you think of each pixel as a vertex rather than a face, you can generate a simple triangular mesh.
+--+--+
|\ |\ |
| \| \|
+--+--+
|\ |\ |
| \| \|
+--+--+
Each vertex has an x and y coordinate corresponding to the x and y of the pixel in the map. The z coordinate is based on the value in the map at that location. Triangles can be generated explicitly or implicitly by their position in the grid.
What you need is the normal at each vertex.
A vertex normal can be computed by taking an area-weighted average of the surface normals for each of the triangles that meet at that point.
If you have a triangle with vertices v0, v1, v2, then you can use a vector cross product (of two vectors that lie on two of the sides of the triangle) to compute a vector in the direction of the normal and scaled proportionally to the area of the triangle.
Vector3 contribution = Cross(v1 - v0, v2 - v1);
Each of your vertices that isn't on the edge will be shared by six triangles. You can loop through those triangles, summing up the contributions, and then normalize the vector sum.
Note: You have to compute the cross products in a consistent way to make sure the normals are all pointing in the same direction. Always pick two sides in the same order (clockwise or counterclockwise). If you mix some of them up, those contributions will be pointing in the opposite direction.
For vertices on the edge, you end up with a shorter loop and a lot of special cases. It's probably easier to create a border around your grid of fake vertices and then compute the normals for the interior ones and discard the fake borders.
for each interior vertex V {
    Vector3 sum(0.0, 0.0, 0.0);
    for each of the six triangles T that share V {
        const Vector3 side1 = T.v1 - T.v0;
        const Vector3 side2 = T.v2 - T.v1;
        const Vector3 contribution = Cross(side1, side2);
        sum += contribution;
    }
    sum.Normalize();
    V.normal = sum;
}
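Equivalently, you can loop over the faces instead of the vertices and accumulate their contributions, which sidesteps the edge special cases entirely. A sketch for the grid above, assuming a row-major heights array of W x H values, glm for the vector math, and the diagonal direction from the diagram:
// Accumulate per-face normals into each face's vertices, then normalize.
// Unnormalized cross products weight each face by its area automatically.
std::vector<glm::vec3> normals(W * H, glm::vec3(0.0f));
auto P = [&](int x, int y) {
    return glm::vec3((float)x, (float)y, heights[y * W + x]);
};
for (int y = 0; y + 1 < H; ++y)
    for (int x = 0; x + 1 < W; ++x)
    {
        int ia = y * W + x, ib = ia + 1, ic = ia + W, id = ic + 1;
        glm::vec3 a = P(x, y), b = P(x + 1, y);
        glm::vec3 c = P(x, y + 1), d = P(x + 1, y + 1);
        // Each cell is split along the a-d diagonal, as in the diagram.
        glm::vec3 n1 = glm::cross(d - a, c - d); // triangle a, d, c
        glm::vec3 n2 = glm::cross(b - a, d - b); // triangle a, b, d
        normals[ia] += n1 + n2;
        normals[id] += n1 + n2;
        normals[ic] += n1;
        normals[ib] += n2;
    }
for (glm::vec3& n : normals)
    n = glm::normalize(n);
With a perfectly flat grid this yields (0, 0, 1) everywhere, matching the tip below.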
If you need the normal at a particular point on a triangle (other than one of the vertices), you can interpolate by weighting the normals of the three vertices by the barycentric coordinates of your point. This is how graphics rasterizers treat the normal for shading. It allows a triangle mesh to appear like a smooth, curved surface rather than a bunch of adjacent flat triangles.
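As a one-line sketch, with barycentric weights a, b and c (where a + b + c == 1) and the three vertex normals n0, n1, n2 (names assumed):
// Interpolate the vertex normals by the barycentric weights of the point.
glm::vec3 n = glm::normalize(a * n0 + b * n1 + c * n2);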
Tip: For your first test, use a perfectly flat grid and make sure all of the computed normals are pointing straight up.