I have a list of face indices that a list of vertices and normals are mapped to. I generate offsets at each vertex to produce terrain variation, but by doing so the vertex normals stop working as they should. This is what I do to make the shading work again, i.e. recompute the vertex normals (vn):
for (auto &x : normals) x = vec3(0); // zero the normals first
vec3 facenormal; // buffer
for (size_t i = 0; i < indices.size();) // iterate 3 points per face
{
    // find the face normal (b - a) x (c - a)
    facenormal = cross(
        (shape[indices[i + 1]] - shape[indices[i]]),
        (shape[indices[i + 2]] - shape[indices[i]])
    );
    // add this face normal to each of the 3 vn slots nearby
    normals[indices[i++]] += facenormal; // note +=
    normals[indices[i++]] += facenormal;
    normals[indices[i++]] += facenormal;
}
for (auto &x : normals) x = normalize(x); // then normalize them
According to this reply it should do the trick. But something is wrong with the approach.
It causes artifacting, as shown in the lower half of the image, even though the shading mostly seems to work.
The question is: how should I calculate the vertex normals to avoid these lines?
Remaking the parser: The way I was parsing (loading) the model removed duplicate index values, which saves on the face references I have to pass to the GPU, but causes some of the values to stack/be reused. At least I suspect this was the problem, since it looks a lot cleaner now. The only problem is that I'm suddenly getting flat shading. The flat shading also makes it hard to see whether the problem is really gone, but I suspect it is. Why the new parsing produces flat shading after recalculation is beyond me. But these are the results I'm looking at.
I still have to find out the correct formula to calculate vertex normals. I don't understand why the shading was smooth before but is now flat, all because I stopped stacking a quarter of the index values.
Finding the right solution, finally! As NicoSchertler pointed out, the vertex normals I was recalculating were not those of the base mesh. The way I load the vertices/uv/normals to fit the index for the glDrawElementsBaseVertex render call means that cycling through the vertices and normals as they are loaded does not touch the base data, only copies built to fit the indexing for uv + normals.
So what I end up getting is weird artifacting and flat shading, because the values I'm modifying are post-parsing. Making every face non-unique does not solve anything, but it clarified part of the problem: I needed to modify the base index values, not the loaded ones.
After I recalculate for the base mesh (pre-indexing) I get this result: smooth shading and visible differences on the mountainsides. The model is higher poly (and serialized in binary) and the coloring is still in its early stages. The shading, however, is correct.
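For reference, a minimal sketch of that idea. It assumes hypothetical containers basePositions (one entry per original position), baseIndices (position-only face indices) and expandedToBase (mapping each duplicated/expanded vertex back to its base vertex); the names are illustrative, not my actual code.

#include <vector>
#include <glm/glm.hpp>
using glm::vec3;

// Recompute smooth normals on the base mesh (pre-indexing), then copy them
// onto the expanded vertices that glDrawElementsBaseVertex actually renders.
void RecalculateBaseNormals(const std::vector<vec3> &basePositions,
                            const std::vector<unsigned> &baseIndices,    // 3 per face, position-only
                            const std::vector<unsigned> &expandedToBase, // expanded vertex -> base vertex
                            std::vector<vec3> &expandedNormals)
{
    std::vector<vec3> baseNormals(basePositions.size(), vec3(0.0f));

    for (size_t i = 0; i + 2 < baseIndices.size(); i += 3)
    {
        const vec3 &a = basePositions[baseIndices[i]];
        const vec3 &b = basePositions[baseIndices[i + 1]];
        const vec3 &c = basePositions[baseIndices[i + 2]];
        vec3 faceNormal = glm::cross(b - a, c - a); // area-weighted, normalized later

        baseNormals[baseIndices[i]]     += faceNormal;
        baseNormals[baseIndices[i + 1]] += faceNormal;
        baseNormals[baseIndices[i + 2]] += faceNormal;
    }
    for (auto &n : baseNormals) n = glm::normalize(n);

    // Copy the smooth base normals onto the duplicated (expanded) vertices.
    for (size_t v = 0; v < expandedNormals.size(); ++v)
        expandedNormals[v] = baseNormals[expandedToBase[v]];
}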
Related
I am attempting to load models exported from Blender into OpenGL. In particular, I followed the source code from this tutorial to help me get started. Because the loader is fairly simple, it only reads in the vertex coordinates and face indices, ignoring the normals and texture coordinates.
It then calculates the normal for each face:
float coord1[3] = { Faces_Triangles[triangle_index],     Faces_Triangles[triangle_index + 1], Faces_Triangles[triangle_index + 2] };
float coord2[3] = { Faces_Triangles[triangle_index + 3], Faces_Triangles[triangle_index + 4], Faces_Triangles[triangle_index + 5] };
float coord3[3] = { Faces_Triangles[triangle_index + 6], Faces_Triangles[triangle_index + 7], Faces_Triangles[triangle_index + 8] };
float norm[3];
this->calculateNormal( coord1, coord2, coord3, norm ); // norm receives the unit face normal
void Model_OBJ::calculateNormal( float *coord1, float *coord2, float *coord3, float *norm )
{
    /* calculate Vector1 and Vector2 (two edges of the triangle) */
    float va[3], vb[3], vr[3], val;
    va[0] = coord1[0] - coord2[0];
    va[1] = coord1[1] - coord2[1];
    va[2] = coord1[2] - coord2[2];
    vb[0] = coord1[0] - coord3[0];
    vb[1] = coord1[1] - coord3[1];
    vb[2] = coord1[2] - coord3[2];

    /* cross product */
    vr[0] = va[1] * vb[2] - vb[1] * va[2];
    vr[1] = vb[0] * va[2] - va[0] * vb[2];
    vr[2] = va[0] * vb[1] - vb[0] * va[1];

    /* normalization factor (length of the cross product) */
    val = sqrt( vr[0] * vr[0] + vr[1] * vr[1] + vr[2] * vr[2] );

    /* write the unit normal into the caller-provided array; the tutorial's original
       version returned a pointer to a local array, which is undefined behaviour */
    norm[0] = vr[0] / val;
    norm[1] = vr[1] / val;
    norm[2] = vr[2] / val;
}
I have 2 questions.
How do I know if the normal is facing inwards or outwards? Is there some ordering of the vertices in each row in the .obj file that gives indication how to calculate the normal?
In the initialization function, he uses GL_SMOOTH. Is this incorrect, since I need to provide normals for each vertex if using GL_SMOOTH instead of GL_FLAT?
Question 1
glFrontFace determines winding order
Winding order means the order in which a set of vertices must appear for the face's normal to be considered positive. Consider the triangle below. Its vertices are defined clockwise. If we told OpenGL glFrontFace(GL_CW) (that clockwise means front-facing), then the normal would essentially be sticking right out of the screen towards you in order to be considered "outward".
On a side note, counter-clockwise is the default and what you should stick with.
No matter what, you should really define normals, especially if you want to do any lighting in your scene, as they are used in the lighting calculation. glFrontFace just lets you tell OpenGL how to interpret which side of a polygon is the front.
In the above example and the diagram below, if we told OpenGL that we define faces counter-clockwise, and also enabled culling with glEnable(GL_CULL_FACE) and set glCullFace to GL_BACK, then our triangle wouldn't show up, because we would be looking at the back of it and we told OpenGL not to show the back of polygons.
You can read more about face culling here: Face Culling.
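For concreteness, a minimal sketch of that state setup in plain legacy OpenGL (counter-clockwise front faces, which is the default, with back faces culled):

// Counter-clockwise triangles are treated as front-facing (the default),
// and back faces are skipped entirely during rasterization.
glFrontFace(GL_CCW);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);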
Wavefront .obj has support for declaring normals in a file if you don't want to create them yourself. Just make sure your exporter adds them.
Additionally, Wavefront wants each vertex to have a normal defined:
f v1//vn1 v2//vn2 v3//vn3 ...
Where vN is the vertex of the f face and vnN is the normal of the vertex. By providing a normal for each vertex, you achieve a smoother looking surface than you would by defining a normal per face or by setting all of the normals of vertexes of the same face to be the same. Take a look at this question to see the difference you can make on a sphere: OpenGL: why do I have to set a normal with glNormal?
If your .obj file doesn't have normals defined, I would use the face definition order and cross two edges of the defined face. Consider the method used here: Calculating a Surface Normal
Edit
I think I may have been a little confusing. The front face of a polygon is only loosely related to its normals. Normals are only really used for lighting calculations. You don't have to have them, but they are one of the big variables used in calculating how lit your object is.
I am explaining the "front-faced-ness" of a polygon at the same time because it sort of makes sense, when talking about convex polygons, that your normal would stick out of the "front" of your triangle with respect to the shape you are making.
If you created a huge cave, or if your camera were to mostly reside inside of some concave shape, then it would make sense to have your normals point inwards since your light sources are probably going to want to bounce off of the inside of your shape.
Question 2
GL_SMOOTH selects one of the shading models you can use with glShadeModel
GL_SMOOTH means smooth shading, where color is interpolated between the vertices, while GL_FLAT means flat shading, where only one color is used per face. Typically, you'll use the default value, GL_SMOOTH.
You don't have to define normals for each vertex in either case. However, if you want GL_SMOOTH to look good, you'll probably want to, since it interpolates between the vertices as it renders rather than picking a single vertex's properties.
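As a small fixed-function illustration (the vertex and normal values below are made up; the point is only that each vertex gets its own normal while the shade model decides whether the result is interpolated):

glShadeModel(GL_SMOOTH);  // lighting is evaluated per vertex and interpolated across the face
// glShadeModel(GL_FLAT); // alternative: one color for the whole face

glBegin(GL_TRIANGLES);
    // a different normal per vertex gives a visibly rounded result with GL_SMOOTH
    glNormal3f(-0.3f, 0.0f, 0.95f); glVertex3f(-1.0f, -1.0f, 0.0f);
    glNormal3f( 0.3f, 0.0f, 0.95f); glVertex3f( 1.0f, -1.0f, 0.0f);
    glNormal3f( 0.0f, 0.3f, 0.95f); glVertex3f( 0.0f,  1.0f, 0.0f);
glEnd();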
Also, bear in mind that all of this goes out the window whenever you leave the fixed-function pipeline and start using shaders.
I've got a model that I've loaded from a JSON file (stored as tiles with lots of bools for height, slope, smooth, etc.). I've then computed face normals for all of its faces and copied them to each of their vertices. What I now want to do (and have been trying for days) is to smooth the vertex normals, in the simplest way possible. What I'm trying to do is set each vertex normal to a normalized sum of its surrounding face normals. Now, my problem is this:
The two circled vertices should end up with perfectly mirrored normals. However, the one on the left has 2 light faces and 4 dark faces. The one on the right has 1 light face and 6 dark faces. As such, they both end up with completely different normals.
What I can't work out is how to do this properly. What faces should I be summing up? Or perhaps there is a completely different method I should be using? All of my attempts so far have come up with junk and / or consisted of hundreds of (almost certainly pointless) special cases.
Thanks for any advice, James
Edit: Actually, I just had a thought about what to try next. Would adding only a percentage of each triangle's normal, based on its angle at the vertex, work (if that makes sense)? I mean, for the left one, clockwise: x1/8, x1/8, x1/4, x1/8, x1/8, x1/4?
And then not normalize it?
That solution worked wonderfully. Final result:
Based on the image it looks like you might want to take the average of all unique normals of all adjacent faces. This avoids double counting faces with the same normal.
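A minimal sketch of that idea, assuming a hypothetical list adjacentFaceNormals holding the face normals of all faces touching one vertex (the names are illustrative):

#include <vector>
#include <glm/glm.hpp>
using glm::vec3;

// Average only the *unique* normals of the faces adjacent to one vertex,
// so coplanar faces are counted once instead of being double counted.
vec3 SmoothedVertexNormal(const std::vector<vec3> &adjacentFaceNormals,
                          float epsilon = 1e-4f)
{
    std::vector<vec3> unique;
    for (const vec3 &n : adjacentFaceNormals)
    {
        bool seen = false;
        for (const vec3 &u : unique)
            if (glm::dot(glm::normalize(n), glm::normalize(u)) > 1.0f - epsilon)
            { seen = true; break; }
        if (!seen) unique.push_back(n);
    }

    vec3 sum(0.0f);
    for (const vec3 &u : unique) sum += glm::normalize(u);
    return glm::normalize(sum);
}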
I'm trying to construct a proper destructible terrain, just for research purposes.
Well, everything went fine, but the resolution is not satisfying enough. I have seen a lot of examples of how people implement the MC algorithm, but most of them, as far as I understand, use functions to triangulate the final mesh, which is not appropriate for me.
I will briefly explain how I'm constructing my terrain, and maybe one of you can suggest how to improve it, or how to increase the resolution of the final terrain.
1) Precalculating MC triangles.
I'm running a simple loop through the MC lookup tables for each case (0-255) and calculating triangles in the range [0,0,0] - [1,1,1].
No problems here.
2) Terrain
I have terrain class, which stores my voxels.
In general, it looks like this:
int size = 32;//Size of each axis.
unsigned char *voxels = new unsigned char[(size * size * size)/8];
So each axis is 32 units long, but I store the voxel information per bit,
meaning that if a bit is turned on (1), there is something there and something should be drawn.
I have a couple of functions:
void TurnOn(int x, int y, int z);
void TurnOff(int x, int y, int z);
to turn the voxel at a given location on or off (they hide the bit manipulation).
Once the terrain is allocated, I run Perlin noise over it and turn bits on or off.
My terrain class has one more function, to extract the Marching Cubes case number (0-255) for an x,y,z location:
unsigned char GetCaseNumber(int x, int y, int z);
by determining which neighbours of that voxel are turned on or off.
No problems here.
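For reference, a minimal sketch of how that bit-packed storage and case extraction might look. The member names follow the description above, and the corner-to-bit ordering in GetCaseNumber is an assumption that has to match whatever MC lookup table was precalculated.

#include <vector>

class Terrain
{
public:
    explicit Terrain(int size)
        : size(size), voxels((size * size * size) / 8, 0) {} // one bit per voxel

    void TurnOn (int x, int y, int z) { int i = Index(x, y, z); voxels[i >> 3] |=  (1 << (i & 7)); }
    void TurnOff(int x, int y, int z) { int i = Index(x, y, z); voxels[i >> 3] &= ~(1 << (i & 7)); }

    bool IsOn(int x, int y, int z) const
    {
        if (x < 0 || y < 0 || z < 0 || x >= size || y >= size || z >= size) return false;
        int i = Index(x, y, z);
        return (voxels[i >> 3] >> (i & 7)) & 1;
    }

    // Build the 8-bit marching-cubes case for the cube whose minimum corner is (x,y,z).
    // The corner-to-bit mapping must match the lookup table used for the precalculated triangles.
    unsigned char GetCaseNumber(int x, int y, int z) const
    {
        unsigned char c = 0;
        if (IsOn(x,     y,     z    )) c |= 1;
        if (IsOn(x + 1, y,     z    )) c |= 2;
        if (IsOn(x + 1, y,     z + 1)) c |= 4;
        if (IsOn(x,     y,     z + 1)) c |= 8;
        if (IsOn(x,     y + 1, z    )) c |= 16;
        if (IsOn(x + 1, y + 1, z    )) c |= 32;
        if (IsOn(x + 1, y + 1, z + 1)) c |= 64;
        if (IsOn(x,     y + 1, z + 1)) c |= 128;
        return c;
    }

private:
    int Index(int x, int y, int z) const { return x + y * size + z * size * size; }
    int size;
    std::vector<unsigned char> voxels;
};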
3) Rendering part
I loop over each axis, extract the case number, fetch the precalculated triangles for that case,
translate them to the x,y,z coordinates, and draw those triangles.
No problems here.
So result looks like this:
But as you can see, the resolution at any single location is not comparable to, for example, this (source: angelfire.com):
I have seen in MC examples that people use something called "iso values", which I don't understand.
Any suggestions on how to improve my work, or an explanation of what iso values are and how to implement them on a uniform grid, would be truly lovely.
The problem is that your voxels are a binary mask (just on or off).
This is great for the "default" marching cubes algorithm, but it does mean you get sharp edges in your mesh.
The smooth example is probably generated from smooth scalar data.
Imagine that your data varies smoothly between 0.0 and 1.0, and you set your threshold to 0.5. Now, after you detect which configuration a given cube is, you look at all the vertices generated.
Say, that you have a vertex on an edge between two voxels, one with value 0.4 and the other 0.7. Then you move the vertex to the position where you would get exactly 0.5 (the threshold) when interpolating between 0.4 and 0.7. So it will be closer to the 0.4 vertex.
This way, each vertex is exactly on the interpolated iso surface and you will generate much smoother triangles.
But it does require that your input voxels are scalar (and vary smoothly). If your voxels are bi-level (all either 0 or 1), this will produce the same triangles as you got earlier.
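A minimal sketch of that edge interpolation, assuming scalar voxel values and a chosen iso level (the names are illustrative):

#include <glm/glm.hpp>
using glm::vec3;

// Place the marching-cubes vertex on the edge between two voxel corners so that
// the interpolated density equals the iso level. For values 0.4 and 0.7 with an
// iso level of 0.5, this puts the vertex one third of the way from p0 to p1.
vec3 InterpolateEdgeVertex(const vec3 &p0, float value0,
                           const vec3 &p1, float value1,
                           float isoLevel)
{
    float denom = value1 - value0;
    if (glm::abs(denom) < 1e-6f)        // corners nearly equal: just take the midpoint
        return 0.5f * (p0 + p1);
    float t = glm::clamp((isoLevel - value0) / denom, 0.0f, 1.0f);
    return p0 + t * (p1 - p0);
}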
Another idea (not the answer to your question but perhaps useful):
To just get smoother rendering, without mathematical correctness, it could be worthwhile to compute an average normal vector for each vertex, and use that normal for each triangle connecting to it. This will hide the sharp edges.
Okay forgive me if this is at all vague, but I've been up all night trying to catch up with some coding.
I have a reasonably large and defined terrain with minimal optimisations in place and I have just started to introduce predefined meshes (.X objects) which come with materials and textures. Previously I was working in a fixed-function-pipeline approach as I have only recently started working with DirectX9. It has become apparent that FFP is old school and deprecated in DirectX10, so I have been moving relevant code to using a HLSL approach.
In my initial approach I had loaded my models into a std::vector container of models and created another std::vector of objects which contain a reference to which model to display. In my render loop, I would iterate this container and check to see if the objects were within the Field-of-view of my camera. If so I would first translate the meshes to their positions, using a SetTransform() call, then DrawSubset().
However, it has become clear that SetTransform() is not applicable to the HLSL approach; therefore I'm a little stumped as to how I can pre-translate these meshes to their relevant positions, or whether I should be translating them within the vertex shader. The meshes are stored within an ID3DXMESH type, and it seems that I can access the index and vertex buffers of these meshes. Am I supposed to take the contents of these buffers, translate the contents and then draw them? Or am I going about this the wrong way?
I am familiar with the Vertex Buffer approach, but not sure what the vertex format is within the mesh itself.
Any help would be appreciated as I'm about to tear my eyeballs out.
Edit
I'll accept Sergio's answer as it pushed me in the right direction, although the solution came when I noticed a line in my debug output about committing changes.
Solution
After transforming my mesh I needed to call
g_pEffect->CommitChanges();
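For context, a sketch of the kind of per-object draw path this implies. This is not my exact code: the Object struct, visibleObjects container and models vector stand in for my own types, view and proj come from the camera, and g_worldViewProj is the parameter name from the shader in the answer below.

// Fragment of the render loop: the effect's matrix changes per object inside the
// pass, which is why CommitChanges() is required before each DrawSubset().
UINT passes = 0;
g_pEffect->Begin(&passes, 0);
g_pEffect->BeginPass(0);

for (const Object &obj : visibleObjects)                // objects within the camera's FOV
{
    D3DXMATRIX world, wvp;
    D3DXMatrixTranslation(&world, obj.x, obj.y, obj.z); // replaces the old SetTransform() call
    D3DXMatrixMultiply(&wvp, &world, &view);
    D3DXMatrixMultiply(&wvp, &wvp, &proj);

    g_pEffect->SetMatrix("g_worldViewProj", &wvp);
    g_pEffect->CommitChanges();                         // without this, the mesh keeps the old transform
    models[obj.modelIndex]->DrawSubset(0);
}

g_pEffect->EndPass();
g_pEffect->End();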
As someone said earlier, you should be doing the translation in your vertex shader. If you have a worldViewProjection matrix constant within your shader, then you need to multiply the vertex position by that matrix and return the transformed position in your output before you move on to your pixel shader.
Make sure the world transform you are passing in with your view and projection is not just an identity, as that won't transform the verts at all.
This is a sample on how you can achieve this on your vertex shader, which is essentially multiplying the incoming untransformed vertex position by the worldViewProj matrix.
VS_OUTPUT vs_main( float4 inPos : POSITION )
{
    VS_OUTPUT output;
    output.pos = mul( inPos, g_worldViewProj );
    return output;
}
I draw lots of quadratic Bézier curves in my OpenGL program. Right now, the curves are one-pixel thin and software-generated, because I'm at a rather early stage, and it is enough to see what works.
Simply enough, given 3 control points (P0 to P2), I evaluate the following equation with t varying from 0 to 1 (with steps of 1/8) in software and use GL_LINE_STRIP to link them together:
B(t) = (1 - t)²P0 + 2(1 - t)tP1 + t²P2
Where B, obviously enough, results in a 2-dimensional vector.
This approach worked 'well enough', since even my largest curves don't need much more than 8 steps to look curved. Still, one-pixel-thin curves are ugly.
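For reference, a minimal sketch of that software evaluation using legacy immediate mode (my actual code differs in detail, but the math is the same):

#include <GL/gl.h>

struct Vec2 { float x, y; };

// Evaluate B(t) = (1 - t)^2 P0 + 2(1 - t) t P1 + t^2 P2 at steps + 1 samples
// and link them with GL_LINE_STRIP (one pixel thin, hence the question).
void DrawQuadraticBezier(Vec2 p0, Vec2 p1, Vec2 p2, int steps = 8)
{
    glBegin(GL_LINE_STRIP);
    for (int i = 0; i <= steps; ++i)
    {
        float t = (float)i / (float)steps;
        float a = (1.0f - t) * (1.0f - t);
        float b = 2.0f * (1.0f - t) * t;
        float c = t * t;
        glVertex2f(a * p0.x + b * p1.x + c * p2.x,
                   a * p0.y + b * p1.y + c * p2.y);
    }
    glEnd();
}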
I wanted to write a GLSL shader that would accept control points and a uniform thickness variable to, well, make the curves thicker. At first I thought about writing only a pixel shader that would color pixels within a distance of thickness / 2 from the curve, but doing so requires solving a third-degree polynomial, and choosing between three solutions inside a shader doesn't look like the best idea ever.
I then tried to look up whether other people had already done it. I stumbled upon a white paper by Loop and Blinn from Microsoft Research where they show an easy way of filling the area under a curve. While it works well to that extent, I'm having trouble adapting the idea to drawing between two bounding curves.
Finding bounding curves that match a single curve is rather easy with a geometry shader. The problems come with the fragment shader that should fill the whole thing. Their approach uses the interpolated texture coordinates to determine whether a fragment falls over or under the curve, but I couldn't figure out a way to do it with two curves (I'm pretty new to shaders and not a maths expert, so the fact I didn't figure out how to do it certainly doesn't mean it's impossible).
My next idea was to separate the filled curve into triangles and only use the Bézier fragment shader on the outer parts. But for that I need to split the inner and outer curves at variable spots, and that means again that I have to solve the equation, which isn't really an option.
Are there viable algorithms for stroking quadratic Bézier curves with a shader?
This partly continues my previous answer, but is actually quite different since I got a couple of central things wrong in that answer.
To allow the fragment shader to only shade between two curves, two sets of "texture" coordinates are supplied as varying variables, to which the technique of Loop-Blinn is applied.
varying vec2 texCoord1, texCoord2;
varying float insideOutside;
varying vec4 col;

void main()
{
    float f1 = texCoord1[0] * texCoord1[0] - texCoord1[1];
    float f2 = texCoord2[0] * texCoord2[0] - texCoord2[1];
    float alpha = (sign(insideOutside * f1) + 1) * (sign(-insideOutside * f2) + 1) * 0.25;
    gl_FragColor = vec4(col.rgb, col.a * alpha);
}
So far, easy. The hard part is setting up the texture coordinates in the geometry shader. Loop-Blinn specifies them for the three vertices of the control triangle, and they are interpolated appropriately across the triangle. But, here we need to have the same interpolated values available while actually rendering a different triangle.
The solution to this is to find the linear function mapping from (x,y) coordinates to the interpolated/extrapolated values. Then, these values can be set for each vertex while rendering a triangle. Here's the key part of my code for this part.
vec2[3] tex = vec2[3]( vec2(0,0), vec2(0.5,0), vec2(1,1) );

mat3 uvmat;
uvmat[0] = vec3(pos2[0].x, pos2[1].x, pos2[2].x);
uvmat[1] = vec3(pos2[0].y, pos2[1].y, pos2[2].y);
uvmat[2] = vec3(1, 1, 1);
mat3 uvInv = inverse(transpose(uvmat));

vec3 uCoeffs = vec3(tex[0][0], tex[1][0], tex[2][0]) * uvInv;
vec3 vCoeffs = vec3(tex[0][1], tex[1][1], tex[2][1]) * uvInv;

float[3] uOther, vOther;
for (int i = 0; i < 3; i++) {
    uOther[i] = dot(uCoeffs, vec3(pos1[i].xy, 1));
    vOther[i] = dot(vCoeffs, vec3(pos1[i].xy, 1));
}

insideOutside = 1;
for (int i = 0; i < gl_VerticesIn; i++) {
    gl_Position = gl_ModelViewProjectionMatrix * pos1[i];
    texCoord1 = tex[i];
    texCoord2 = vec2(uOther[i], vOther[i]);
    EmitVertex();
}
EndPrimitive();
Here pos1 and pos2 contain the coordinates of the two control triangles. This part renders the triangle defined by pos1, but with texCoord2 set to the translated values from the pos2 triangle. Then the pos2 triangle needs to be rendered similarly. Then the gap between these two triangles at each end needs to be filled, with both sets of coordinates translated appropriately.
The calculation of the matrix inverse requires either GLSL 1.50 or it needs to be coded manually. It would be better to solve the equation for the translation without calculating the inverse. Either way, I don't expect this part to be particularly fast in the geometry shader.
You should be able to use the technique of Loop and Blinn in the paper you mentioned.
Basically you'll need to offset each control point in the normal direction, both ways, to get the control points for two curves (inner and outer). Then follow the technique in Section 3.1 of Loop and Blinn - this breaks up sections of the curve to avoid triangle overlaps, and then triangulates the main part of the interior (note that this part requires the CPU). Finally, these triangles are filled, and the small curved parts outside of them are rendered on the GPU using Loop and Blinn's technique (at the start and end of Section 3).
An alternative technique that may work for you is described here:
Thick Bezier Curves in OpenGL
EDIT:
Ah, you want to avoid even the CPU triangulation - I should have read more closely.
One issue you have is the interface between the geometry shader and the fragment shader - the geometry shader will need to generate primitives (most likely triangles) that are then individually rasterized and filled via the fragment program.
In your case, with constant thickness, I think quite a simple triangulation will work, using Loop and Blinn for all the "curved bits". When the two control triangles don't intersect it's easy. When they do, the part outside the intersection is easy. So the only hard part is within the intersection (which should be a triangle).
Within the intersection you want to shade a pixel only if both control triangles lead to it being shaded via Loop and Blinn. So the fragment shader needs to be able to do the texture lookups for both triangles. One can be done as standard, and you'll need to add a vec2 varying variable for the second set of texture coordinates, which you'll need to set appropriately for each vertex of the triangle. You'll also need a uniform sampler2D variable for the texture, which you can then sample via texture2D. Then you just shade fragments that satisfy the checks for both control triangles (within the intersection).
I think this works in every case, but it's possible I've missed something.
I don't know exactly how to solve this, but it's very interesting. I think you'll need every programmable stage of the GPU:
Vertex shader
Throw an ordinary line of points at your vertex shader, and let the vertex shader displace the points onto the Bézier.
Geometry shader
Let your geometry shader create an extra point per vertex.
foreach (point p in bezierCurve)
    new point(p + (0, thickness, 0)) // in tangent with p1-p2
Fragment shader
To stroke your Bézier with a special stroke, you can use a texture with an alpha channel. Check the alpha value; if it is zero, clip (discard) the pixel. This way you can still make the system think it is a solid line instead of a half-transparent one, and you could put patterns in the alpha channel.
I hope this helps you on your way. You will have to figure out a lot yourself, but I think the geometry shader will speed your Bézier up.
Still, for the stroking I would stick with my choice of creating a GL_QUAD_STRIP and an alpha-channel texture.