Silhouette detection (geometry shader) for edges that connect only one triangle - C++

I want to draw a mesh silhouette using a geometry shader (line_strip).
The problem occurs when the mesh has edges that belong to only one triangle (like the edge of a cloth). Closed meshes (where every edge connects two triangles) work fine.
I built the adjacency index buffer using RESTART_INDEX wherever a neighbor vertex didn't exist, and during rendering I used:
glEnable(GL_PRIMITIVE_RESTART);
glPrimitiveRestartIndex(RESTART_INDEX); // RESTART_INDEX = ushort(-1)
The result:
As you can see, the head, feet, and hands are OK; other parts of the model, not so much.
// silhouette.gs.glsl
void main(void) {
    vec3 e1 = gs_in[2].vPosition - gs_in[0].vPosition; // 1 ---- 2 ----- 3
    vec3 e2 = gs_in[4].vPosition - gs_in[0].vPosition; //  \    / \     /
    vec3 e3 = gs_in[1].vPosition - gs_in[0].vPosition; //   \  e1  \   /
    vec3 e4 = gs_in[3].vPosition - gs_in[2].vPosition; //    \ /     \ /
    vec3 e5 = gs_in[4].vPosition - gs_in[2].vPosition; //     0 -e2-- 4
    vec3 e6 = gs_in[5].vPosition - gs_in[0].vPosition; //      \     /
                                                        //       \   /
    vec3 vN = cross(e1, e2);                            //        \ /
    vec3 vL = u_vLightPos - gs_in[0].vPosition;         //         5
How does the geometry shader handle its input vertices when it encounters a primitive restart index? For example, for boundary triangles, adjacency slots 1, 3, or 5 shouldn't contain any vertex at all.

If a triangle edge is not part of any other triangle then it is still adjacent to the back face of its own triangle. For example: if there is no other triangle attached to e1 in your diagram, you can use the third vertex of the triangle (4 in this case, as e1 consists of 0 and 2) in place of 1. This will not require any additional check in the geometry shader.
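For illustration, here is a rough sketch of that fallback in the adjacency-building pass. It reuses the names from the index builder posted further down in this thread (tcache, oEdgeMap, getOther, getOppositeIndex), so treat it as a sketch of the idea rather than drop-in code:

// For each of the triangle's three edges, emit (edge vertex, adjacency vertex).
// If the edge has no neighboring triangle, fall back to this triangle's own
// third vertex instead of a restart index.
for(int i = 0; i < 3; ++i)
{
    edge_t edge(triangle->index[i], triangle->index[(i+1) % 3]);
    ushort tj = getOther(oEdgeMap[edge], ti);          // neighbor triangle, or ushort(-1)

    ushort adjacent;
    if(tj != (ushort)(-1))
        adjacent = getOppositeIndex(tcache[tj], edge); // real neighbor's far vertex
    else
        adjacent = triangle->index[(i+2) % 3];         // boundary edge: use our own third vertex

    indices[ii + i*2 + 0] = triangle->index[i];
    indices[ii + i*2 + 1] = adjacent;
}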

I removed:
glEnable(GL_PRIMITIVE_RESTART);
glPrimitiveRestartIndex(RESTART_INDEX);
during the draw pass, and this is the result:
which is closer to the desired result.

Another possible solution is to tackle the problem while constructing the adjacency index buffer.
// for every mesh
//   for every triangle in the mesh          // first pass - edges & neighbors
...
//   for every triangle in the mesh          // second pass - build indices
const triangle_t* triangle = &tcache[ti];    // unique index triangles
for(i = 0; i < 3; ++i)                       // triangle = 3 edges
{
    edge_t edge(triangle->index[i], triangle->index[(i+1) % 3]); // get edge
    neighbors_t neighbors = oEdgeMap[edge];  // get edge neighbors
    ushort tj = getOther(neighbors, ti);     // get the opposite triangle index
    ushort index = RESTART_INDEX;            // = ushort(-1)
    if(tj != (ushort)(-1))
    {
        const triangle_t& opposite = tcache[tj];      // opposite triangle
        index = getOppositeIndex(opposite, edge);     // opposite vertex from the other triangle
    }
    else
    {
        index = triangle->index[i];
    }
    indices[ii+i*2+0] = triangle->index[i];
    indices[ii+i*2+1] = index;
}
Instead of using RESTART_INDEX as mentioned in the first post, use the same vertex. Boundary triangles will then have a "neighbor" triangle with a zero-length edge (built from the same vertices). I think this needs to be checked for in the geometry shader.
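A minimal sketch of that degeneracy test, written host-side with glm so it stays self-contained; in the geometry shader the equivalent test would compare the adjacency vertex against the edge's endpoints, or check the "neighbor" face normal for near-zero length. The function name and epsilon are my assumptions:

#include <glm/glm.hpp>

// Sketch only: decide whether the edge (a, b) with adjacency vertex adj is a
// boundary edge. With the "same vertex" trick above, the neighbor triangle
// (a, adj, b) collapses to zero area, which we detect via its face normal.
bool isBoundaryEdge(const glm::vec3& a, const glm::vec3& b, const glm::vec3& adj)
{
    const glm::vec3 n = glm::cross(adj - a, b - a); // normal of the "neighbor" triangle
    return glm::dot(n, n) < 1e-12f;                 // ~zero area => no real neighbor
}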
Does the geometry shader even fire for triangles with incomplete adjacency information?

Related

triangle texture is rotating and flipping horizontally

I'm writing a ray tracer for class, and I'm getting an odd issue I can't seem to nail down the source of. I've got my texture, and for some reason it's rotated 90° clockwise and then flipped horizontally. I'm using barycentric coordinates to navigate my UV-space coordinates.
I've already attempted to play with how I'm generating u, v, w, but it seems to result in the same issue.
(Images: the issue visible in-program, and my actual test texture.)
// how I'm generating my barycentric coordinates:
// Ph is the point in space being tested; I'm generating u,v,w while testing
// inside/outside the triangle
// (Pe = eye point, Npe = vector from eye to point on triangle, Th = hit time)
Ph = Pe + Npe*Th;
A = glm::cross(P1 - P0, P2 - P0);
//glm::vec3 A0 = glm::cross(Ph - P1, Ph - P2);
//glm::vec3 A1 = glm::cross(Ph - P2, Ph - P0);
//glm::vec3 A2 = glm::cross(Ph - P0, Ph - P1);
glm::vec3 A0 = glm::cross(P1-Ph, P2-Ph);
glm::vec3 A1 = glm::cross(P2-Ph, P0-Ph);
glm::vec3 A2 = glm::cross(P0-Ph, P1-Ph);
// normalize and check dot products to determine if they are facing the right way
if (glm::dot(n0, glm::normalize(A0)) < 0 || glm::dot(n0, glm::normalize(A1)) < 0 || glm::dot(n0, glm::normalize(A2)) < 0)
{
    // point is outside triangle
    return -1;
}
u = glm::length(A0) / glm::length(A);
v = glm::length(A1) / glm::length(A);
w = 1 - u - v;
And then here is the portion that uses that to calculate the texture coordinates.
//portion of code calculating texture coordinates
//calculate new location of texture coordinate, assume z position is 0
glm::vec3 textureCo = P0TexCo*this->u + P1TexCo*this->v + P2TexCo*this->w;
u = textureCo[0];
v = textureCo[1];
Found the issue: it had to do with how OpenGL interprets texture coordinates and which corner it starts in when reading pixels from an array. The solution is to flip the image vertically after you load it.
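For illustration, a minimal sketch of that vertical flip on a raw, tightly packed RGBA8 pixel buffer; the buffer layout and names are assumptions about the loader, not part of the original post:

#include <algorithm>
#include <cstdint>
#include <vector>

// Flip an image stored as tightly packed RGBA8 rows (top row first)
// by swapping row y with row (height - 1 - y).
void flipVertically(std::vector<std::uint8_t>& pixels, int width, int height)
{
    const int rowBytes = width * 4; // 4 bytes per RGBA pixel
    for (int y = 0; y < height / 2; ++y)
    {
        std::swap_ranges(pixels.begin() + y * rowBytes,
                         pixels.begin() + (y + 1) * rowBytes,
                         pixels.begin() + (height - 1 - y) * rowBytes);
    }
}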

Algorithm for coloring a triangle by vertex color

I'm working on a toy raytracer using vertex based triangles, similar to OpenGL. Each vertex has its own color and the coloring of a triangle at each point should be based on a weighted average of the colors of the vertex, weighted by how close the point is to each vertex.
I can't figure out how to calculate the weight of each color at a given point on the triangle to mimic the color shading done by OpenGL, as shown by many examples here. I have several thoughts, but I'm not sure which one is correct (V is a vertex, U and W are the other two vertices, P is the point to color, C is the centroid of the triangle, and |PQ| is the distance form point P to point Q):
Have weight equal to 1 - (|VP|/|VC|), but this would leave black at the centroid (all colors are weighted 0), which is not correct.
Weight is equal to 1-(|VP|/max(|VU|,|VW|)), so V has non-zero weight at the closer of the two vertices, which I don't think is correct.
Weight is equal to 1-(|VP|/min(|VU|,|VW|)), so V has zero weight at the closer of the two vertices, and negative weight (which would saturate to 0) at the further of the two. I'm not sure if this is right or not.
Line segment L extends from V through P to the opposite side of the triangle (UW): weight is the ratio of |VP| to |L|. So the weight of V would be 0 all along the opposite side.
The last one seems like the most likely, but I'm having trouble implementing it, so I'm not sure if it's correct.
OpenGL uses barycentric coordinates (linear interpolation, precisely, although you can change that using interpolation qualifiers such as centroid or noperspective in recent versions).
In case you don't know, barycentric coordinates work like this:
For a location P in a triangle made of vertices V1, V2 and V3, with respective coefficients C1, C2, C3 such that C1+C2+C3=1 (those coefficients describe the influence of each vertex on the color of P), OpenGL must calculate them such that the result is equivalent to:
C1 = (AreaOfTriangle PV2V3) / (AreaOfTriangle V1V2V3)
C2 = (AreaOfTriangle PV3V1) / (AreaOfTriangle V1V2V3)
C3 = (AreaOfTriangle PV1V2) / (AreaOfTriangle V1V2V3)
The area of a triangle can be calculated as half the length of the cross product of two vectors defining it (taken in a consistent winding order), for example AreaOfTriangle V1V2V3 = length(cross(V2-V1, V3-V1)) / 2. We then have something like:
float areaOfTriangle = length(cross(V2-V1, V3-V1)); //Two times the area of the triangle
float C1 = length(cross(V2-P, V3-P)) / areaOfTriangle; //Because A1*2/A*2 = A1/A
float C2 = length(cross(V3-P, V1-P)) / areaOfTriangle; //Because A2*2/A*2 = A2/A
float C3 = 1.0f - C1 - C2; //Because C1 + C2 + C3 = 1
But after some math (and a little bit of web research :D), the most efficient way of doing this that I found was:
YOURVECTYPE sideVec1 = V2 - V1, sideVec2 = V3 - V1, sideVec3 = P - V1;
float dot11 = dot(sideVec1, sideVec1);
float dot12 = dot(sideVec1, sideVec2);
float dot22 = dot(sideVec2, sideVec2);
float dot31 = dot(sideVec3, sideVec1);
float dot32 = dot(sideVec3, sideVec2);
float denom = dot11 * dot22 - dot12 * dot12;
float C1 = (dot22 * dot31 - dot12 * dot32) / denom;
float C2 = (dot11 * dot32 - dot12 * dot31) / denom;
float C3 = 1.0f - C1 - C2;
Then, to interpolate things like colors (color1, color2 and color3 being the colors of your vertices), you do:
float color = C1*color1 + C2*color2 + C3*color3;
But beware that this doesn't work properly if you're using perspective transformations (or any transformation of vertices involving the w component), so in that case you'll have to use:
float color = (C1*color1/w1 + C2*color2/w2 + C3*color3/w3)/(C1/w1 + C2/w2 + C3/w3);
w1, w2, and w3 are respectively the fourth components of the original vertices that made V1, V2 and V3.
V1, V2 and V3 in the first calculation must be 3-dimensional because of the cross product. In the second (more efficient) one they can be 2-dimensional as well as 3-dimensional; the results will be the same (and, as you probably guessed, 2D is faster in the second calculation). In both cases, don't forget to divide the vertices by the fourth component of their original vector if you're doing perspective transformations, and to use the second formula for interpolation in that case. (And in case it wasn't clear, the vectors in these calculations should NOT include a fourth component!)
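Putting the two interpolation formulas together, here is a small glm-based sketch (the function and parameter names are mine, not from the answer), assuming C1, C2, C3 were computed as above and w1, w2, w3 are the clip-space w values of the three vertices:

#include <glm/glm.hpp>

// Perspective-correct interpolation of per-vertex colors using precomputed
// barycentric coefficients C1, C2, C3 (with C1 + C2 + C3 == 1).
glm::vec3 interpolateColor(glm::vec3 c1, glm::vec3 c2, glm::vec3 c3,
                           float C1, float C2, float C3,
                           float w1, float w2, float w3)
{
    // Affine version (fine without perspective): C1*c1 + C2*c2 + C3*c3
    // Perspective-correct version:
    glm::vec3 numerator = C1 * c1 / w1 + C2 * c2 / w2 + C3 * c3 / w3;
    float denominator   = C1 / w1 + C2 / w2 + C3 / w3;
    return numerator / denominator;
}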
And one last thing: I strongly advise you to use OpenGL, just rendering a big quad on the screen and putting all your code in the shaders (although you'll need very strong knowledge of OpenGL for advanced use), because you'll benefit from parallelism (even on a s#!+ video card), unless you're writing this on a 30-year-old computer or are just doing it to see how it works.
IIRC, for this you don't really need to do anything in GLSL -- the interpolated color will already be the input color to your fragment shader if you just pass on the vertex color in the vertex shader.
Edit: Yes, this doesn't answer the question -- the correct answer is already in the answer above: use barycentric coordinates (which is what GL does).

glsl fragment shader calculate texture position

I'm writing a fragment shader for rendering a 1D texture containing an arbitrary byte array into a kind of barcode.
My idea is to encode each byte into a square divided diagonally (so each of the 4 triangles represents 2 bits), like so:
_____
|\ A /| each byte encoded as binary is DDCCBBAA,
| \ / | the colors are: Red if 11
|D X B| Green if 10
| / \ | Blue if 01
|/ C \| Black if 00
¯¯¯¯¯ so color can be calculated as: [(H & L), (H & !L), (!H & L)]
so for example: 198 == 11 00 01 10 would be:
_____ DD CC BB AA
|\ G /|
| \ / | A=10=Green
|R X B| B=01=Blue
| / \ | C=00=Black
|/ b \| D=11=Red
¯¯¯¯¯ (B=Blue, b=Black)
What I've got so far is a function for encoding two bools (H, L in the example notation) into a vec3 color, and a function for encoding a byte and a "corner index" (A/B/C/D in the example) into the color:
#version 400

out vec4 gl_FragColor;          // the output fragment
in vec2 vf_texcoord;            // normalized texture coords, 0/0 = top/left
uniform isampler1D uf_texture;  // the input data
uniform int uf_texLen;          // the input data's byte count

vec3 encodeColor(bool H, bool L){
    return vec3(H&&L, H&&!L, !H&&L);
}

vec3 encodeByte(int data, int corner){
    int maskL = 1 << corner;
    int maskH = maskL << 1;
    int shiftL = corner/2;
    int shiftH = shiftL+1;
    bool H = bool((data&maskH)>>shiftH);
    bool L = bool((data&maskL)>>shiftL);
    return encodeColor(H,L);
}

void main(void) {
    // the part I can't figure out
    gl_FragColor.rgb = encodeByte(/* some stuff calculated by the part above */);
    gl_FragColor.a = 1;
}
The problem is I can't figure out how to calculate which byte to encode and which "corner" of the square the current fragment is in.
(Note, the code here is off the top of my head and untested, and I haven't written a lot of GLSL. The variable names are sloppy and I've probably made some stupid syntax mistakes, but it should be enough to convey the idea.)
The first thing you need to do is translate the texture coordinates into a data index (which square to display colors for) and a modified set of texture coordinates that represent the position within that square.
For a horizontal arrangement, you could do something like:
float temp = vf_texcoord.x * uf_texLen;
float temp2 = floor(temp);
int dataIndex = int(temp2);
vec2 squareTexcoord = vec2(temp - temp2, vf_texcoord.y);
Then you'd use squareTexcoord to decide which quadrant of the square you're in:
int corner;
vec2 squareTexcoord2 = squareTexcoord - vec2(0.5, 0.5);
if (abs(squareTexcoord2.x) > abs(squareTexcoord2.y)) { // Left or right triangle
    if (squareTexcoord2.x > 0.0) { // Right triangle
        corner = 0;
    }
    else { // Left triangle
        corner = 1;
    }
}
else { // Top or bottom triangle
    if (squareTexcoord2.y > 0.0) { // Bottom triangle
        corner = 2;
    }
    else { // Top triangle
        corner = 3;
    }
}
And now you have all you need for shading:
gl_FragColor = vec4(encodeByte(int(texelFetch(uf_texture, dataIndex, 0).r), corner), 1.0);
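For completeness, a hedged sketch of how the host side might upload the byte array for the isampler1D above (this texture setup is my assumption, not part of the original answer, and it presumes an OpenGL loader such as GLEW is already initialised). Note that integer textures must use NEAREST filtering, and that uf_texLen still has to be set to data.size() via glUniform1i:

#include <GL/glew.h>
#include <cstdint>
#include <vector>

// Upload an arbitrary byte array as a 1D signed-integer texture that the
// shader samples through "uniform isampler1D uf_texture".
GLuint uploadBarcodeData(const std::vector<std::int8_t>& data)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_1D, tex);
    // Integer textures require nearest filtering.
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    // GL_R8I matches the isampler1D declaration; bytes >= 128 read back as
    // negative values, so mask with & 0xFF in the shader (or use GL_R8UI
    // with a usampler1D instead).
    glTexImage1D(GL_TEXTURE_1D, 0, GL_R8I, (GLsizei)data.size(), 0,
                 GL_RED_INTEGER, GL_BYTE, data.data());
    return tex;
}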

obj file - averaging normals

I have an OBJ file which stores data this way:
v value1 value2 value3
f value1 value2 value3
First I calculate a normal for the face and then assign it to each vertex of that face:
for(int i = 0; i < verticesInd.size(); i += 3)
{
    glm::vec3 normal = glm::normalize(glm::cross(
        glm::vec3(vertices[verticesInd[i + 1]]) - glm::vec3(vertices[verticesInd[i]]),
        glm::vec3(vertices[verticesInd[i + 2]]) - glm::vec3(vertices[verticesInd[i]])));
    out_Normals[i] = normal;
    out_Normals[i + 1] = normal;
    out_Normals[i + 2] = normal;
}
To achieve flat shading I can duplicate the vertices:
for(int i = 0; i < verticesInd.size(); i++)
{
out_Vertices.push_back(vertices[verticesInd[i]]);
}
and then draw the object using glDrawArrays:
glDrawArrays(GL_TRIANGLES, 0, out_Vertices.size());
To achieve smooth shading I need to average the normals for each vertex, but I have no idea how to find adjacent faces.
Edit 1: I didn't notice the single s parameter before f:
v value1 value2 value3
s 1
f value1 value2 value3
Edit 2: Normals averaging
glm::vec3 tNormal;
for(int i = 0; i < vertices.size(); i++)
{
    for(int j = 0; j < verticesInd.size(); j++)
    {
        if(verticesInd[j] == i)
        {
            tNormal += faceNormals[j / 3];
        }
    }
    aNormals.push_back(glm::normalize(tNormal));
    tNormal = glm::vec3(0, 0, 0);
}
Edit 3: Face normals:
for(int i = 0; i < verticesInd.size(); i += 3)
{
    glm::vec3 normal = glm::normalize(glm::cross(
        glm::vec3(vertices[verticesInd[i + 1]]) - glm::vec3(vertices[verticesInd[i]]),
        glm::vec3(vertices[verticesInd[i + 2]]) - glm::vec3(vertices[verticesInd[i]])));
    faceNormals.push_back(normal);
}
In most object formats, adjacent faces share vertices. Finding the smooth-shaded normal at a vertex should then just be a question of averaging the normals of all faces which use that vertex.
I suggest that you create an additional new array the same size as your existing vertex array.
Iterate over each face, and for each vertex index, add that vertex's face normal to the new array.
At the end of the process, normalise the resulting normal vectors and then use those instead of the previously computed face normals.
If I understand your data structures correctly, it would look something like this:
std::vector<glm::vec3> aNormals(vertices.size()); // one (zero-initialised) normal per vertex
for (int i = 0; i < verticesInd.size(); ++i) {
    int f = i / 3;                 // which face is this index part of (3 per face?)
    int v = verticesInd[i];        // which vertex number is being used
    aNormals[v] += faceNormals[f]; // add the face normal to this vertex
}
// now normalise aNormals
I have no idea how to find adjacent faces.
Add a list of faces to your temporary, in-OBJ-loader vertex storage. As you're processing the face lines add the new face to the face list of each vertex it references.
That way you can spin over all the vertices at the end, look up the face(s) each one belonged to, grab the face normals, and average them.
If your OBJ doesn't have such nice baked-in connectivity information (and there's no requirement that it has to) then you'll have to do a nearest-neighbor search on each vertex to find vertexes (and corresponding faces) that are near it. Use your favorite spatial index to speed those sorts of queries up.
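A minimal sketch of that bookkeeping, assuming the index-based layout used earlier in this thread (verticesInd with three indices per face, faceNormals with one normal per face); the function and variable names here are my assumptions, not the poster's code:

#include <cstddef>
#include <vector>
#include <glm/glm.hpp>

// Build, for every vertex, the list of faces that reference it, then average
// the normals of those faces to get a smooth per-vertex normal.
std::vector<glm::vec3> averageNormals(const std::vector<unsigned>& verticesInd,
                                      const std::vector<glm::vec3>& faceNormals,
                                      std::size_t vertexCount)
{
    std::vector<std::vector<std::size_t>> facesOfVertex(vertexCount);
    for (std::size_t i = 0; i < verticesInd.size(); ++i)
        facesOfVertex[verticesInd[i]].push_back(i / 3); // face index = i / 3

    std::vector<glm::vec3> normals(vertexCount, glm::vec3(0.0f));
    for (std::size_t v = 0; v < vertexCount; ++v)
    {
        for (std::size_t f : facesOfVertex[v])
            normals[v] += faceNormals[f];
        if (!facesOfVertex[v].empty())
            normals[v] = glm::normalize(normals[v]);
    }
    return normals;
}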
Follow this guy (https://www.youtube.com/watch?v=MRD_zN0SWh0&feature=plcp) on per-fragment lighting. It contains the same equations for finding the normals on a per-vertex basis, as far as I know.
You would need a list of faces. If you've ever looked inside a Wavefront OBJ, it contains groups of triangle indices such as 4,7,2 and 1,2,3, each group of three numbers representing a face. With a normal mesh each point can only be used 3 times. If you find each group that contains a given vertex index (a 3, say), you have found that vertex's adjacent faces; look up their corresponding normal values and then average.
Alnitak has valid info too: you have a list of vertices, and then you list them off in groups of 3 so that you can reuse (share) vertices as face data.

Generating a normal map from a height map?

I'm working on procedurally generating patches of dirt using randomized fractals for a video game. I've already generated a height map using the midpoint displacement algorithm and saved it to a texture. I have some ideas for how to turn that into a texture of normals, but some feedback would be much appreciated.
My height texture is currently a 257 x 257 gray-scale image (height values are scaled for visibility purposes):
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
So given the 3D coordinates of A, B, C, and D, would it make sense to:
split the four into two triangles: ABC and BCD
calculate the normals of those two faces via cross product
split into two triangles: ACD and ABD
calculate the normals of those two faces
average the four normals
...or is there a much easier method that I'm missing?
Example GLSL code from my water surface rendering shader:
#version 130
uniform sampler2D unit_wave;
noperspective in vec2 tex_coord;
const vec2 size = vec2(2.0,0.0);
const ivec3 off = ivec3(-1,0,1);
vec4 wave = texture(unit_wave, tex_coord);
float s11 = wave.x;
float s01 = textureOffset(unit_wave, tex_coord, off.xy).x;
float s21 = textureOffset(unit_wave, tex_coord, off.zy).x;
float s10 = textureOffset(unit_wave, tex_coord, off.yx).x;
float s12 = textureOffset(unit_wave, tex_coord, off.yz).x;
vec3 va = normalize(vec3(size.xy,s21-s01));
vec3 vb = normalize(vec3(size.yx,s12-s10));
vec4 bump = vec4( cross(va,vb), s11 );
The result is a bump vector: xyz=normal, a=height
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
No. Each pixel of the image represents a vertex of the grid, so intuitively, from symmetry, its normal is determined by heights of neighboring pixels (i-1,j), (i+1,j), (i,j-1), (i,j+1).
Given a function f : ℝ² → ℝ that describes a surface in ℝ³, a unit normal at (x,y) is given by
v = (−∂f/∂x, −∂f/∂y, 1) and n = v/|v|.
It can be proven that the best approximation to ∂f/∂x by two samples is achieved by:
∂f/∂x(x,y) = (f(x+ε,y) − f(x−ε,y))/(2ε)
To get a better approximation you need to use at least four points, thus adding a third point (i.e. (x,y)) doesn't improve the result.
Your heightmap is a sampling of some function f on a regular grid. Taking ε=1 you get:
2v = (f(x−1,y) − f(x+1,y), f(x,y−1) − f(x,y+1), 2)
Putting it into code would look like:
// sample the height map:
float fx0 = f(x-1,y), fx1 = f(x+1,y);
float fy0 = f(x,y-1), fy1 = f(x,y+1);
// the spacing of the grid in same units as the height map
float eps = ... ;
// plug into the formulae above:
vec3 n = normalize(vec3((fx0 - fx1)/(2*eps), (fy0 - fy1)/(2*eps), 1));
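As an illustration of that same formula on the CPU side, here is a small sketch that walks a height array and produces one normal per texel; the array layout, border clamping, and names are my assumptions, not part of the answer:

#include <algorithm>
#include <vector>
#include <glm/glm.hpp>

// Compute per-texel normals from a width x height grid of heights using the
// central-difference formula above. Border texels clamp to the nearest sample.
std::vector<glm::vec3> normalsFromHeights(const std::vector<float>& h,
                                          int width, int height, float eps)
{
    auto sample = [&](int x, int y) {
        x = std::min(std::max(x, 0), width - 1);
        y = std::min(std::max(y, 0), height - 1);
        return h[static_cast<std::size_t>(y) * width + x];
    };

    std::vector<glm::vec3> normals(h.size());
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            float fx0 = sample(x - 1, y), fx1 = sample(x + 1, y);
            float fy0 = sample(x, y - 1), fy1 = sample(x, y + 1);
            normals[static_cast<std::size_t>(y) * width + x] =
                glm::normalize(glm::vec3((fx0 - fx1) / (2.0f * eps),
                                         (fy0 - fy1) / (2.0f * eps), 1.0f));
        }
    return normals;
}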
A common method is using a Sobel filter for a weighted/smooth derivative in each direction.
Start by sampling a 3x3 area of heights around each texel (here, [4] is the pixel we want the normal for).
[6][7][8]
[3][4][5]
[0][1][2]
Then,
//float s[9] contains above samples
vec3 n;
n.x = scale * -(s[2]-s[0]+2*(s[5]-s[3])+s[8]-s[6]);
n.y = scale * -(s[6]-s[0]+2*(s[7]-s[1])+s[8]-s[2]);
n.z = 1.0;
n = normalize(n);
Where scale can be adjusted to match the heightmap's real-world depth relative to its size.
If you think of each pixel as a vertex rather than a face, you can generate a simple triangular mesh.
+--+--+
|\ |\ |
| \| \|
+--+--+
|\ |\ |
| \| \|
+--+--+
Each vertex has an x and y coordinate corresponding to the x and y of the pixel in the map. The z coordinate is based on the value in the map at that location. Triangles can be generated explicitly or implicitly by their position in the grid.
What you need is the normal at each vertex.
A vertex normal can be computed by taking an area-weighted average of the surface normals for each of the triangles that meet at that point.
If you have a triangle with vertices v0, v1, v2, then you can use a vector cross product (of two vectors that lie on two of the sides of the triangle) to compute a vector in the direction of the normal and scaled proportionally to the area of the triangle.
Vector3 contribution = Cross(v1 - v0, v2 - v1);
Each of your vertices that aren't on the edge will be shared by six triangles. You can loop through those triangles, summing up the contributions, and then normalize the vector sum.
Note: You have to compute the cross products in a consistent way to make sure the normals are all pointing in the same direction. Always pick two sides in the same order (clockwise or counterclockwise). If you mix some of them up, those contributions will be pointing in the opposite direction.
For vertices on the edge, you end up with a shorter loop and a lot of special cases. It's probably easier to create a border around your grid of fake vertices and then compute the normals for the interior ones and discard the fake borders.
for each interior vertex V {
    Vector3 sum(0.0, 0.0, 0.0);
    for each of the six triangles T that share V {
        const Vector3 side1 = T.v1 - T.v0;
        const Vector3 side2 = T.v2 - T.v1;
        const Vector3 contribution = Cross(side1, side2);
        sum += contribution;
    }
    sum.Normalize();
    V.normal = sum;
}
If you need the normal at a particular point on a triangle (other than one of the vertices), you can interpolate by weighing the normals of the three vertices by the barycentric coordinates of your point. This is how graphics rasterizers treat the normal for shading. It allows a triangle mesh to appear like smooth, curved surface rather than a bunch of adjacent flat triangles.
Tip: For your first test, use a perfectly flat grid and make sure all of the computed normals are pointing straight up.