Strange 'striping' issue when rendering terrain normals - OpenGL

I have a strange issue where my normals just do not work when I render terrain. My terrain renders just fine, so I left out all the code for calculating the terrain points from a height map and how I calculated the indices. I know I should be using shaders, but I want to get this fixed first before I move on. I am assuming the issue comes from something obvious I have overlooked in my normals generation code, which is as follows:
for (currentind = 0; currentind < indices.size() - 3; currentind += 3)
{
    // first index: fetch the triangle's first vertex
    indtopt = indices[currentind] * 3;
    point1.vects[0] = terrainpoints[indtopt];     //x
    point1.vects[1] = terrainpoints[indtopt+1];   //y
    point1.vects[2] = terrainpoints[indtopt+2];   //z

    // second index: fetch the triangle's second vertex
    indtopt = indices[currentind+1] * 3;
    point2.vects[0] = terrainpoints[indtopt];     //x
    point2.vects[1] = terrainpoints[indtopt+1];   //y
    point2.vects[2] = terrainpoints[indtopt+2];   //z

    // third index: fetch the triangle's third vertex
    indtopt = indices[currentind+2] * 3;
    point3.vects[0] = terrainpoints[indtopt];     //x
    point3.vects[1] = terrainpoints[indtopt+1];   //y
    point3.vects[2] = terrainpoints[indtopt+2];   //z

    //--------------------------------------------------
    // two edge vectors of the triangle
    point4.vects[0] = (point2.vects[0] - point1.vects[0]);
    point4.vects[1] = (point2.vects[1] - point1.vects[1]);
    point4.vects[2] = (point2.vects[2] - point1.vects[2]);

    point5.vects[0] = (point3.vects[0] - point2.vects[0]);
    point5.vects[1] = (point3.vects[1] - point2.vects[1]);
    point5.vects[2] = (point3.vects[2] - point2.vects[2]);

    //-------------------------------------------------
    // cross product of the two edges gives the face normal
    point6.vects[0] = point4.vects[1]*point5.vects[2] - point4.vects[2]*point5.vects[1];
    point6.vects[1] = point4.vects[2]*point5.vects[0] - point4.vects[0]*point5.vects[2];
    point6.vects[2] = point4.vects[0]*point5.vects[1] - point4.vects[1]*point5.vects[0];

    point6 = point6.normalize();

    // store the normal for this triangle
    ternormals[currentind]   = point6.vects[0];
    ternormals[currentind+1] = point6.vects[1];
    ternormals[currentind+2] = point6.vects[2];
}
Below is a picture of what the issue is in both wireframe and triangle renders:
I can post more code if need be, but I just wanted to keep this post short, so I tried to find where I thought the issue might be.

Well, for every "dark" band you're accidentally flipping the normal, probably because the surface tangent vectors are passed into the cross product in the wrong order:
a × b = -(b × a)
If your terrain is made of triangle strips, then the winding order alternates, which means you have to either swap the operands or negate the result of the cross product for every odd row.
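A minimal sketch of that fix (glm is used for brevity rather than your own vector type, and triIndex is a hypothetical counter of the triangle's position in the strip):
#include <glm/glm.hpp>

// Face normal for one triangle of a strip. Because the winding order of a
// triangle strip alternates, the cross product of every odd triangle points
// the wrong way, so we negate it - equivalent to swapping the operands,
// since b x a = -(a x b).
glm::vec3 faceNormal(const glm::vec3& p1, const glm::vec3& p2,
                     const glm::vec3& p3, unsigned int triIndex)
{
    glm::vec3 n = glm::cross(p2 - p1, p3 - p2);
    if (triIndex % 2 == 1)
        n = -n;
    return glm::normalize(n);
}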

Related

How to draw a terrain model efficiently from Esri Grid (osg)?

I have many Esri Grid files (https://en.wikipedia.org/wiki/Esri_grid#ASCII) and I would like to render them in 3D without losing precision; I am using OpenSceneGraph.
The problem is that these grids are around 1000x1000 (or more) points, so when I extract the vertices and then compute the triangles to create the geometry, I end up with millions of them and interaction with the scene becomes impossible (the frame rate drops to 0).
I've tried several approaches:
Triangle list
Basically, as I read the file, I fill an array with 3 vertices per triangle (this leads to duplication);
osg::ref_ptr<osg::Geode> l_pGeodeSurface = new osg::Geode;
osg::ref_ptr<osg::Geometry> l_pGeometrySurface = new osg::Geometry;
osg::ref_ptr<osg::Vec3Array> l_pvTrianglePoints = new osg::Vec3Array;
osg::ref_ptr<osg::Vec3Array> l_pvOriginalPoints = new osg::Vec3Array;

... // Read the file and fill l_pvOriginalPoints

for(*triangle inside the file*)
{
    ... // Compute correct triangle indices (l_iP1, l_iP2, l_iP3)

    // Push triangle vertices inside the array
    l_pvTrianglePoints->push_back(l_pvOriginalPoints->at(l_iP1));
    l_pvTrianglePoints->push_back(l_pvOriginalPoints->at(l_iP2));
    l_pvTrianglePoints->push_back(l_pvOriginalPoints->at(l_iP3));
}

l_pGeometrySurface->setVertexArray(l_pvTrianglePoints.get());
l_pGeometrySurface->addPrimitiveSet(new osg::DrawArrays(GL_TRIANGLES, 0, l_pvTrianglePoints->size()));
Indexed triangle list
Same as before, but the array contains every vertex just once and I create a second array of indices (basically I tell osg how to build the triangles, with no duplication):
osg::ref_ptr<osg::Geode> l_pGeodeSurface = new osg::Geode;
osg::ref_ptr<osg::Geometry> l_pGeometrySurface = new osg::Geometry;
osg::ref_ptr<osg::DrawElementsUInt> l_pIndices = new osg::DrawElementsUInt(osg::PrimitiveSet::TRIANGLES, *number of indices*);
osg::ref_ptr<osg::Vec3Array> l_pvOriginalPoints = new osg::Vec3Array;

... // Read the file and fill l_pvOriginalPoints

for(i = 0; i < *number of indices*; i += 3)
{
    ... // Compute correct triangle indices (l_iP1, l_iP2, l_iP3)

    // Push vertex indices inside the array
    l_pIndices->at(i)   = l_iP1;
    l_pIndices->at(i+1) = l_iP2;
    l_pIndices->at(i+2) = l_iP3;
}

l_pGeometrySurface->setVertexArray(l_pvOriginalPoints.get());
l_pGeometrySurface->addPrimitiveSet(l_pIndices.get());
Instancing
This was a bit of an experiment. Since I've never used shaders, I thought I could instance a single triangle and then manipulate its coordinates in a vertex shader for every triangle in my scene, using transformation matrices (passing the matrices as a uniform array, one per triangle). I ended up with too many uniforms even with a 20x20 grid.
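(To see why this blows up so quickly: each mat4 uniform consumes 16 uniform components, and the driver's limit can be queried, assuming a current OpenGL context. A 4096-component limit, for example, only fits 256 matrices.)
// Query how many uniform components the vertex stage allows.
GLint maxVertexUniformComponents = 0;
glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &maxVertexUniformComponents);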
I used these links as a reference:
https://learnopengl.com/Advanced-OpenGL/Instancing,
https://books.google.it/books?id=x_RkEBIJeFQC&pg=PT265&lpg=PT265&dq=osg+instanced+geometry&source=bl&ots=M8ii8zn8w7&sig=ACfU3U0_92Z5EGCyOgbfGweny4KIUfqU8w&hl=en&sa=X&ved=2ahUKEwj-7JD0nq7qAhUXxMQBHcLaAiUQ6AEwAnoECAkQAQ#v=onepage&q=osg%20instanced%20geometry&f=false
None of the above solved my issue. What else can I try? Am I missing something in terms of rendering techniques? I thought it was a fairly simple task, but I'm kind of stuck.
I feel like you should consider taking a step back. If you're visualizing GIS-based terrain data, osgEarth is really designed for this and has fairly efficient LOD tools for large terrains. Do you need the data always represented at full LOD, or are you looking for dynamic LOD to improve the frame rate?
Depending on your goals and requirements, you might want to look at some more advanced terrain rendering techniques, such as heightfield ray tracing. If the terrain is always static, you can precompute quadtrees and signed distance functions and trace against the heightfield.
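As a rough sketch of the osgEarth route (assuming osgEarth is installed and "terrain.earth" is a hypothetical .earth file describing your elevation layer), the terrain engine then handles tiling and LOD for you:
#include <osgViewer/Viewer>
#include <osgDB/ReadFile>

int main()
{
    // osgEarth's plugin builds a paged, LOD'd terrain from the layers
    // declared in the .earth file.
    osg::ref_ptr<osg::Node> terrain = osgDB::readNodeFile("terrain.earth");
    if (!terrain) return 1;

    osgViewer::Viewer viewer;
    viewer.setSceneData(terrain.get());
    return viewer.run();
}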

Skinning Animation - Weights Destroy Mesh

I am in the process of writing an animation system with my own Collada parser and am running into an issue that I can't wrap my head around.
I have collected my mesh/skin information (vertices, normals, jointIds, weights, etc), my skeleton information (joints, their local transforms, inverse bind position, hierarchy structure), and my animation (keyframe transform position for each joint, timestamp).
My issue is that with everything calculated and then implemented in the shader (the summation of weights multiplied by the joint transform and vertex position) - I get the following:
When I remove the weight multiplication, the mesh remains fully intact - however the skin doesn't actually follow the animation. I am at a loss, as I feel the math is correct, but very obviously I am going wrong somewhere. Would someone be able to shed light on the aspect I have misinterpreted?
Here is my current understanding and implementation:
After collecting all of the joints' localTransforms and the hierarchy, I calculate their inverse bind transformation matrices. To do this I multiply each joint's localTransform with its parent's localTransform to get a bindTransform. Inverting that bindTransform results in its inverseBindTransform. Below is my code for that:
// Recursively collect each joint's InverseBindTransform -
// the root joint's local position is an identity matrix.
// This function is only called once, after data collection.
void Joint::CalcInverseBindTransform(glm::mat4 parentLocalPosition)
{
    glm::mat4 bindTransform = parentLocalPosition * m_LocalTransform;
    m_InverseBindPoseMatrix = glm::inverse(bindTransform);

    // iterate by reference so the actual children are updated, not copies
    for (Joint& child : Children) {
        child.CalcInverseBindTransform(bindTransform);
    }
}
Within my animator, during an animation, for each joint I take the JointTransforms of the two frames my currentTime is in between and calculate the interpolated JointTransform (a JointTransform simply has a vec3 for position and a quaternion for rotation). I do this for every joint and then apply those interpolated values to each joint by again recursively multiplying the new frameLocalTransform by its parentLocalTransform. I take that bindTransform, multiply it by the invBindTransform and then transpose the matrix. Below is the code for that:
std::unordered_map<int, glm::mat4> Animator::InterpolatePoses(float time) {
    std::unordered_map<int, glm::mat4> poses;
    if (IsPlaying()) {
        for (std::pair<int, JointTransform> keyframe : m_PreviousFrame.GetJointKeyFrames()) {
            JointTransform previousFrame = m_PreviousFrame.GetJointKeyFrames()[keyframe.first];
            JointTransform nextFrame = m_NextFrame.GetJointKeyFrames()[keyframe.first];
            JointTransform interpolated = JointTransform::Interpolate(previousFrame, nextFrame, time);
            poses[keyframe.first] = interpolated.getLocalTransform();
        }
    }
    return poses;
}
void Animator::ApplyPosesToJoints(std::unordered_map<int, glm::mat4> newPose, Joint* j, glm::mat4 parentTransform)
{
    if (IsPlaying()) {
        glm::mat4 currentPose = newPose[j->GetJointId()];
        glm::mat4 modelSpaceJoint = parentTransform * currentPose;

        // iterate by reference so the recursion writes to the real joints
        for (Joint& child : j->GetChildren()) {
            ApplyPosesToJoints(newPose, &child, modelSpaceJoint);
        }

        modelSpaceJoint = glm::transpose(j->GetInvBindPosition() * modelSpaceJoint);
        j->SetAnimationTransform(modelSpaceJoint);
    }
}
I then collect all the newly AnimatedTransforms for each joint and send them to the shader:
void AnimationModel::Render(bool& pass)
{
    [...]
    std::vector<glm::mat4> transforms = GetJointTransforms();
    for (int i = 0; i < transforms.size(); ++i) {
        m_Shader->SetMat4f(transforms[i], ("JointTransforms[" + std::to_string(i) + "]").c_str());
    }
    [...]
}
void AnimationModel::AddJointsToArray(Joint current, std::vector<glm::mat4>& matrix)
{
    glm::mat4 finalMatrix = current.GetAnimatedTransform();
    matrix.push_back(finalMatrix);

    for (Joint child : current.GetChildren()) {
        AddJointsToArray(child, matrix);
    }
}
In the shader, I simply follow the summation formula that can be found all over the web when researching this topic:
for (int i = 0; i < total_weight_amnt; ++i) {
    mat4 jointTransform = JointTransforms[jointIds[i]];
    vec4 newVertexPos = jointTransform * vec4(pos, 1.0);
    total_pos += newVertexPos * weights[i];
    [...]
}
---------- Reply to Normalizing Weights ------------
There were a few weights summing above 1, but after solving the error in my code the model looked like this:
For calculating the weights, I loop through all pre-added weights in the vector, and if I find a weight that is less than the weight I'm looking to add, I replace the weight at that position. Otherwise, I append the weight onto the end of the vector. If there are fewer weights in my vector than my specified max_weights (which is 4), I fill in the remaining weights/jointIds with 0.
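Roughly, the idea looks like this (a sketch with illustrative names, not my exact code): keep only the four largest weights per vertex, zero-pad the rest, and renormalize so they sum to 1 before uploading:
#include <algorithm>
#include <array>
#include <cstddef>

struct VertexSkin {
    std::array<int, 4>   jointIds{ {0, 0, 0, 0} };
    std::array<float, 4> weights { {0.f, 0.f, 0.f, 0.f} };

    // Replace the current smallest slot if the new weight is larger.
    void add(int jointId, float weight)
    {
        auto smallest = std::min_element(weights.begin(), weights.end());
        if (weight > *smallest) {
            auto i = std::distance(weights.begin(), smallest);
            weights[i]  = weight;
            jointIds[i] = jointId;
        }
    }

    // Rescale so the four kept weights sum to exactly 1.
    void normalize()
    {
        float sum = weights[0] + weights[1] + weights[2] + weights[3];
        if (sum > 0.f)
            for (float& w : weights) w /= sum;
    }
};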
I understand that when something goes wrong in skinning animations, there can be a lot of different areas where the problem could be occurring. As such, for future googlers experiencing the same issue I did, take this more as a list of suggestions of what you could be doing wrong rather than things you are absolutely doing wrong.
For my problem, I had the right idea but the wrong approach in a lot of minor areas, which brought me fairly close but, as they say, no cigar.
I had no need to calculate the inverse bind pose myself; Collada's inverse bind pose (sometimes/often declared as an "offsetMatrix") is more than perfect. This wasn't so much a problem as just unnecessary calculation.
In a Collada file, you are often provided with more "joints" or "nodes" in the hierarchy than are needed for the animation. Prior to the start of your actual animated "joints", there is the scene and an initial armature "node" type. The scene is typically an identity matrix that was manipulated based on your "up axis" when reading in the Collada file. The node type determines the overall size of each joint in the skeleton, so if it wasn't resized, it's probably the identity matrix. Make sure your hierarchy still contains ALL nodes/joints listed in the hierarchy. I very much was not doing so, which greatly distorted my globalPosition (bind pose).
If you are representing your joints' rotations with quaternions (which is highly recommended), make sure the resulting quaternion is normalized after interpolating between two rotated positions.
On the same note, when combining the rotation and translation into your final matrix, make sure your order of multiplication and the final output are correct.
Finally, your last skinning matrix is composed of your joint's InvBindMatrix * GlobalPosition * GlobalInverseRootTransform (this last one is the inverse of the local transform from your "scene" node mentioned above, remember?).
Based on your prior matrix multiplications up to this point, you may or may not need to transpose this final matrix.
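As a sketch of that composition, written in the order given above (the parameter names here are mine; depending on your conventions you may need to reverse the order and/or transpose, as just noted):
#include <glm/glm.hpp>

// Final per-joint skinning matrix sent to the shader.
glm::mat4 skinningMatrix(const glm::mat4& invBindMatrix,
                         const glm::mat4& globalPosition,
                         const glm::mat4& globalInverseRootTransform)
{
    return invBindMatrix * globalPosition * globalInverseRootTransform;
}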
And with that - I was able to successfully animate my model!
One final note - my mesh and animation files are added in separately. If your animations are in separate files from your mesh, make sure you collect the skinning/joint information from the files with an animation rather than the file with the mesh. I list my steps for loading in a model and then giving it multiple animations through different files:
1. Load in the Mesh (this contains Vertices, Normals, TexCoords, JointIds, Weights).
2. Load in the animation file (this gives the Skeleton, InverseBindPositions, and the other info needed to bind the skeleton to the mesh). Once the skeleton and binding info are collected, gather the first animation's info from that file as well.
3. For another animation, the above skeleton should work fine for any other animation on the same mesh/model; just read in the animation information and store it in your chosen data structure. Repeat step 3 until happy.

Marching Cubes Issues

I've been trying to implement the marching cubes algorithm with C++ and Qt. So far all the steps have been written, but I'm getting a really bad result. I'm looking for guidance or advice about what could be going wrong. I suspect one of the problems may be with the voxel conception, specifically which vertex goes in which corner (0, 1, ..., 7). Also, I'm not 100% sure about how to interpret the input for the algorithm (I'm using datasets). Should I read it in ZYX order and move the marching cube in the same way, or does it not matter at all? (Leaving aside the fact that not every dimension has to have the same size.)
Here is what I'm getting against what it should look like...
http://i57.tinypic.com/2nb7g46.jpg
http://en.wikipedia.org/wiki/Marching_cubes
http://en.wikipedia.org/wiki/Marching_cubes#External_links
Paul Bourke. "Overview and source code".
http://paulbourke.net/geometry/polygonise/
Qt_MARCHING_CUBES.zip: Qt/OpenGL example courtesy Dr. Klaus Miltenberger.
http://paulbourke.net/geometry/polygonise/Qt_MARCHING_CUBES.zip
The example requires boost, but looks like it probably should work.
His example has, in marchingcubes.cpp, a few different methods for calculating the marching cubes: vMarchCube1 and vMarchCube2.
In the comments it says vMarchCube2 performs the Marching Tetrahedrons algorithm on a single cube by making six calls to vMarchTetrahedron.
Below is the source for the first one vMarchCube1:
//vMarchCube1 performs the Marching Cubes algorithm on a single cube
GLvoid GL_Widget::vMarchCube1(const GLfloat &fX, const GLfloat &fY, const GLfloat &fZ, const GLfloat &fScale, const GLfloat &fTv)
{
    GLint iCorner, iVertex, iVertexTest, iEdge, iTriangle, iFlagIndex, iEdgeFlags;
    GLfloat fOffset;
    GLvector sColor;
    GLfloat afCubeValue[8];
    GLvector asEdgeVertex[12];
    GLvector asEdgeNorm[12];

    //Make a local copy of the values at the cube's corners
    for(iVertex = 0; iVertex < 8; iVertex++)
    {
        afCubeValue[iVertex] = (this->*fSample)(fX + a2fVertexOffset[iVertex][0]*fScale,
                                                fY + a2fVertexOffset[iVertex][1]*fScale,
                                                fZ + a2fVertexOffset[iVertex][2]*fScale);
    }

    //Find which vertices are inside of the surface and which are outside
    iFlagIndex = 0;
    for(iVertexTest = 0; iVertexTest < 8; iVertexTest++)
    {
        if(afCubeValue[iVertexTest] <= fTv) iFlagIndex |= 1<<iVertexTest;
    }

    //Find which edges are intersected by the surface
    iEdgeFlags = aiCubeEdgeFlags[iFlagIndex];

    //If the cube is entirely inside or outside of the surface, then there will be no intersections
    if(iEdgeFlags == 0)
    {
        return;
    }

    //Find the point of intersection of the surface with each edge
    //Then find the normal to the surface at those points
    for(iEdge = 0; iEdge < 12; iEdge++)
    {
        //if there is an intersection on this edge
        if(iEdgeFlags & (1<<iEdge))
        {
            fOffset = fGetOffset(afCubeValue[ a2iEdgeConnection[iEdge][0] ],
                                 afCubeValue[ a2iEdgeConnection[iEdge][1] ], fTv);

            asEdgeVertex[iEdge].fX = fX + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][0] + fOffset * a2fEdgeDirection[iEdge][0]) * fScale;
            asEdgeVertex[iEdge].fY = fY + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][1] + fOffset * a2fEdgeDirection[iEdge][1]) * fScale;
            asEdgeVertex[iEdge].fZ = fZ + (a2fVertexOffset[ a2iEdgeConnection[iEdge][0] ][2] + fOffset * a2fEdgeDirection[iEdge][2]) * fScale;

            vGetNormal(asEdgeNorm[iEdge], asEdgeVertex[iEdge].fX, asEdgeVertex[iEdge].fY, asEdgeVertex[iEdge].fZ);
        }
    }

    //Draw the triangles that were found. There can be up to five per cube
    for(iTriangle = 0; iTriangle < 5; iTriangle++)
    {
        if(a2iTriangleConnectionTable[iFlagIndex][3*iTriangle] < 0) break;

        for(iCorner = 0; iCorner < 3; iCorner++)
        {
            iVertex = a2iTriangleConnectionTable[iFlagIndex][3*iTriangle+iCorner];

            vGetColor(sColor, asEdgeVertex[iVertex], asEdgeNorm[iVertex]);
            glColor4f(sColor.fX, sColor.fY, sColor.fZ, 0.6);
            glNormal3f(asEdgeNorm[iVertex].fX, asEdgeNorm[iVertex].fY, asEdgeNorm[iVertex].fZ);
            glVertex3f(asEdgeVertex[iVertex].fX, asEdgeVertex[iVertex].fY, asEdgeVertex[iVertex].fZ);
        }
    }
}
UPDATE: Github working example, tested
https://github.com/peteristhegreat/qt-marching-cubes
Hope that helps.
Finally, I found what was wrong.
I use a VBO indexer class to reduce the amount of duplicated vertices and make rendering faster. This class is implemented with a std::map to find and discard already existing vertices, using a tuple of <vec3, unsigned short>. As you may imagine, a marching cubes algorithm generates structures with thousands if not millions of vertices. The highest number a common unsigned short can hold is 65535 (2^16 - 1). So, when the output geometry had more vertices than that, the map index started to overflow and the result was a mess, since it started to overwrite vertices with the new ones. I just changed my implementation to draw with a plain, non-indexed VBO while I fix my class to support millions of vertices.
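For anyone hitting the same wall, the fix boils down to indexing with a 32-bit type; a minimal sketch of such an indexer (illustrative names, not my exact class):
#include <cstdint>
#include <map>
#include <vector>
#include <glm/glm.hpp>

// Strict weak ordering so glm::vec3 can be used as a std::map key.
struct Vec3Less {
    bool operator()(const glm::vec3& a, const glm::vec3& b) const
    {
        if (a.x != b.x) return a.x < b.x;
        if (a.y != b.y) return a.y < b.y;
        return a.z < b.z;
    }
};

// Return the 32-bit index of v, adding it to uniqueVertices if unseen.
std::uint32_t indexOf(const glm::vec3& v,
                      std::map<glm::vec3, std::uint32_t, Vec3Less>& lookup,
                      std::vector<glm::vec3>& uniqueVertices)
{
    auto it = lookup.find(v);
    if (it != lookup.end())
        return it->second;                       // reuse the existing vertex
    std::uint32_t idx = static_cast<std::uint32_t>(uniqueVertices.size());
    uniqueVertices.push_back(v);
    lookup[v] = idx;
    return idx;                                  // new vertex, new index
}
The index buffer is then drawn with GL_UNSIGNED_INT in glDrawElements instead of GL_UNSIGNED_SHORT.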
The result, with some minor vertex normal issues, speaks for itself:
http://i61.tinypic.com/fep2t3.jpg

Why is detail lost when computing shadow and reflections in my ray tracer

I am building a ray tracer and I am able to correctly render diffuse and specular parts of my sphere. When I come to calculate shadows and reflections however I end up with a very pixelated result as shown in the below image:
I can see that the shadow is cast in the correct place and if you zoom in the reflection is also visible but again pixelated. I call this method to determine if a pixel is in shade and it is also called recursively by my reflect ray method to determine the reflected colours.
RGBColour Scene::illumination(Ray incidentRay, Shape *closestShape, RGBColour shapeColour, Ray ray)
{
    RGBColour diffuseLight = _backgroundColour;
    RGBColour specularLight = _backgroundColour;
    double projectionNormalToSource = 0.0;

    for (int i = 0; i < _lightSources.size(); i++)
    {
        Ray shadowRay(incidentRay.Direction(), (_lightSources.at(i).GetPosition() - incidentRay.Direction()).UnitVector());
        Vector surfaceNormal = closestShape->SurfaceNormal(incidentRay);

        //lambertian shading.
        projectionNormalToSource = surfaceNormal.ScalarProduct(shadowRay.Direction());

        if (projectionNormalToSource > 0)
        {
            bool isShadow = false;
            std::vector<double> shadowIntersections;
            Ray temp(incidentRay.Direction(), (_lightSources.at(i).GetPosition() - incidentRay.Direction()));

            for (int j = 0; j < _sceneObjects.size(); j++)
            {
                shadowIntersections.push_back(_sceneObjects.at(j)->Intersection(temp));
            }

            //Test each point to see if it is in shadow.
            for (int j = 0; j < shadowIntersections.size(); j++)
            {
                if (shadowIntersections.at(j) != -1)
                {
                    if (shadowIntersections.at(j) > _epsilon && shadowIntersections.at(j) <= temp.Direction().Magnitude() && closestShape != _sceneObjects.at(j))
                    {
                        isShadow = true;
                    }
                    break;
                }
            }

            if (!isShadow)
            {
                diffuseLight = diffuseLight + (closestShape->Colour() * projectionNormalToSource * closestShape->DiffuseCoefficient() * _lightSources.at(i).DiffuseIntensity());
                specularLight = specularLight + specularReflection(_lightSources.at(i), projectionNormalToSource, closestShape, incidentRay, temp, ray);
            }
        }
    }
    return diffuseLight + specularLight;
}
As I am able to correctly render the spheres apart from these aspects, I am convinced the problem must lie within this particular method, so I have not posted the others. What I think is happening is that, where pixels retain their initial colour instead of being shaded, I must be incorrectly calculating very small values. The other possibility is that the calculated ray did not intersect anything, but I do not think that is the case, because the same intersection method would then return incorrect results elsewhere in the program, yet the spheres render correctly (excluding the shading and reflection).
So typically what causes results like this and can you spot any obvious logic errors in my method?
Edit: I have moved my light source in front and I can now see that the shadow appears to be correctly cast for the green sphere and the blue one becomes pixelated. So I think on any subsequent shape iterations something must not be updating correctly.
Edit 2: The first issue has been fixed and the shadows are no longer pixelated; the fix was to move the break statement into the if statement directly preceding it. The reflections, however, are still pixelated.
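For clarity, the corrected inner loop from the method above now only breaks once an occluder has actually been found:
//Test each point to see if it is in shadow.
for (int j = 0; j < shadowIntersections.size(); j++)
{
    if (shadowIntersections.at(j) != -1)
    {
        if (shadowIntersections.at(j) > _epsilon && shadowIntersections.at(j) <= temp.Direction().Magnitude() && closestShape != _sceneObjects.at(j))
        {
            isShadow = true;
            break; // stop searching only after a real occluder is found
        }
    }
}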
Pixelation like this could occur due to numerical instability. An example: Suppose you calculate an intersection point that lies on a curved surface. You then use that point as the origin of a ray (a shadow ray, for example). You would assume that the ray wouldn't intersect that curved surface, but in practice it sometimes can. You could check for this by discarding such self intersections, but that could cause problems if you decide to implement concave shapes. Another approach could be to move the origin of the generated ray along its direction vector by some infinitesimally small amount, so that no unwanted self-intersection occurs.
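A minimal sketch of that second suggestion, using glm for brevity (the question's own Vector/Ray types would work the same way; the bias value here is an assumption and depends on scene scale):
#include <glm/glm.hpp>

struct SimpleRay { glm::dvec3 origin, dir; };

// Offset the secondary ray's origin slightly off the surface so it cannot
// immediately re-intersect the object it starts on.
SimpleRay makeSecondaryRay(const glm::dvec3& hitPoint,
                           const glm::dvec3& surfaceNormal,
                           const glm::dvec3& towards)
{
    const double bias = 1e-4;                              // scene-scale dependent
    glm::dvec3 origin = hitPoint + surfaceNormal * bias;   // step off the surface
    return { origin, glm::normalize(towards - origin) };
}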

3D intersection (Radiosity) - OpenGl

I am trying to compute the visibility between two planes or patches.
I have a wireframe of quads. Each quad has a normal vector with X, Y and Z coordinates. Each quad has 4 vertices. Each vertex has X, Y and Z coordinates.
Given two quads, how can I know if there is an occluder or another object in between these two patches (quads)?
Therefore, I need to create a method that returns 1 if the patches have no occluder, or 0 if they have one.
The method I picture would be something like this:
GLint visibility(Patch i, Patch j) {
    GLboolean isVisible;
    vector<Patch> allPatches; // can be used to get all patches in the scene

    // Check if there is any occluder between patch i and patch j
    // ... some computations here ...

    if(isVisible) {
        return 1;
    } else {
        return 0;
    }
}
I've heard of z-buffer algorithms and the hemicube implementation that would get this done. I already have the form-factors computed. I just need to finish this step to get shadows.
Please give some form of answer with diagrams or methods, because I am not that much of a genius.
I found the solution. Basically, I needed to use ray tracing techniques: cast a ray from one patch to another and check whether the ray intersects any of the intervening planes, using a barycentric-coordinate computation. Once you have the intersection points, you need to check whether they actually lie on the quad.
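An illustrative sketch of that approach (not the exact code I used): split each quad patch into two triangles and test the ray between the two patch centres against every other patch with a barycentric (Moller-Trumbore) intersection test:
#include <cmath>
#include <vector>
#include <glm/glm.hpp>

struct Patch { glm::vec3 v[4]; };                       // 4 vertices per quad

// Moller-Trumbore: does the ray (orig, dir) hit triangle (a, b, c)
// strictly between the two patches (0 < t < maxT)?
static bool rayHitsTriangle(const glm::vec3& orig, const glm::vec3& dir,
                            const glm::vec3& a, const glm::vec3& b,
                            const glm::vec3& c, float maxT)
{
    const float eps = 1e-6f;
    glm::vec3 e1 = b - a, e2 = c - a;
    glm::vec3 p  = glm::cross(dir, e2);
    float det = glm::dot(e1, p);
    if (std::fabs(det) < eps) return false;             // ray parallel to triangle
    float inv = 1.0f / det;
    glm::vec3 t = orig - a;
    float u = glm::dot(t, p) * inv;
    if (u < 0.f || u > 1.f) return false;               // barycentric test
    glm::vec3 q = glm::cross(t, e1);
    float v = glm::dot(dir, q) * inv;
    if (v < 0.f || u + v > 1.f) return false;
    float tHit = glm::dot(e2, q) * inv;
    return tHit > eps && tHit < maxT - eps;             // occluder strictly in between
}

int visibility(const Patch& i, const Patch& j, const std::vector<Patch>& allPatches)
{
    glm::vec3 ci = (i.v[0] + i.v[1] + i.v[2] + i.v[3]) * 0.25f;
    glm::vec3 cj = (j.v[0] + j.v[1] + j.v[2] + j.v[3]) * 0.25f;
    glm::vec3 dir = cj - ci;
    float dist = glm::length(dir);
    dir /= dist;
    for (const Patch& p : allPatches) {
        if (&p == &i || &p == &j) continue;             // assumes i and j are elements of allPatches
        if (rayHitsTriangle(ci, dir, p.v[0], p.v[1], p.v[2], dist) ||
            rayHitsTriangle(ci, dir, p.v[0], p.v[2], p.v[3], dist))
            return 0;                                   // occluded
    }
    return 1;                                           // unobstructed
}
For proper radiosity you would cast several rays between random points on the two patches rather than just the centres, but the single-ray version shows the idea.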