Generating spheres with vertices and indices? - c++

I'm currently working on an OpenGL project where I need to generate spheres with vertices and indices.
Each vertex represents a point, and the indices tell the graphics card to link three points into a triangle.
Example: indices {0, 1, 2} will make a triangle out of the first, second and third vertex.
I've managed to make a UV sphere with correct vertices, but I don't know how to get the indices.
Here is my current result:
And here is my code:
Mesh ObjectFactory::createSphere() {
    // Texture loaded and binded to the shaderprogram
    Texture textures[]{
        Texture("resources/pop_cat.png", "diffuse", 0, GL_RGBA, GL_UNSIGNED_BYTE),
    };
    int numHorizontalSegments = 20;
    int numVerticalSegments = 20;
    Vertex vertices[numVerticalSegments * numVerticalSegments] = {};
    GLuint indices[numVerticalSegments * numVerticalSegments] = {};
    int i = 0;
    for (int h = 0; h < numHorizontalSegments; h++) {
        float angle1 = (h + 1) * M_PI / (numHorizontalSegments + 1);
        for (int v = 0; v < numVerticalSegments; v++) {
            i++;
            float angle2 = v * (2 * M_PI) / numVerticalSegments;
            float x = sinf(angle1) * cosf(angle2);
            float y = cosf(angle1);
            float z = sinf(angle1) * sinf(angle2);
            vertices[i] = Vertex{glm::vec3(x, y, z), glm::vec3(0.83f, 0.70f, 0.44f), glm::vec2(0.0f, 0.0f)};
            indices[i] = i;
        }
    }
    // Store mesh data in vectors for the mesh
    std::vector<Vertex> verts(vertices, vertices + sizeof(vertices) / sizeof(Vertex));
    std::vector<GLuint> ind(indices, indices + sizeof(indices) / sizeof(GLuint));
    std::vector<Texture> tex(textures, textures + sizeof(textures) / sizeof(Texture));
    // Create sphere mesh
    return {verts, ind, tex};
}
Thank you a lot for your help !

You have created a sphere of vertices by calculating horizontal circles in multiple vertical layers, a UV sphere. Good.
But you are only adding one index per vertex, which is not how index buffers work.
What you need to do, repeatedly, is find the three indices into your array of vertices that make up a usable triangle.
Among other things this means that you will reference the same vertex index multiple times, mostly six times, because most of your vertices are part of six triangles: at least one towards the "upper left", one towards the "upper right", one towards the "lower left" and one towards the "lower right", while two of those directions usually contribute two triangles each, e.g. two triangles to the upper right.
"Mostly six", because of edge cases like the "north pole" and "south pole", which participate in many triangles.
Let's look at a part of your UV sphere:
V03----------V02----------V01----------V00
|            |         __/|         __/|
|            |      __/   |   c  __/   |
|            |   __/      |   __/      |
|            | /    f     | /    d     |
V13----------V12----------V11----------V10
|            |         __/|   e     __/|
|            |   a  __/   |      __/   |
|            |   __/      |   __/      |
|            | /    b     | /          |
V23----------V22----------V21----------V20
|            |            |            |
|            |            |            |
|            |            |            |
|            |            |            |
V33----------V32----------V31----------V30
You can see that for the quad in the middle (represented by the two triangles "a" and "b")
you need six index entries, even though it only has four vertices, and each of those vertices will be used even more often by the other touching quads.
For triangle "a" you store the indices of the vertices V11, V12, V22 (mind the orientation, depending on which side you want the surface to face; consistently thinking either "counter-clockwise" or "clockwise" will get you to a point where only a few tries are needed to get the desired result).
For triangle "b" you store the indices of the vertices V11, V22, V21.
Also, the vertex V11 will have to be indexed again for the triangles "c", "d", "e" and "f", since it participates in six triangles across four quads.
You managed to do your UV sphere fine, so I do not think that I need to provide the full loop and selection code for getting this done. You already know how to visualise the result, so just check it, adjust and retry.
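That said, a minimal sketch of such an index loop might look like the following (my own illustration, not your exact layout: it assumes `rings` rings of `segments` vertices each, stored ring by ring so that vertex (h, v) sits at index h * segments + v, the seam is closed with a modulo, and the poles are handled separately):
#include <vector>
using GLuint = unsigned int; // matches the typedef from your GL headers
std::vector<GLuint> makeSphereIndices(int rings, int segments) {
    std::vector<GLuint> ind;
    for (int h = 0; h + 1 < rings; ++h) {
        for (int v = 0; v < segments; ++v) {
            GLuint i0 = h * segments + v;                        // current vertex
            GLuint i1 = h * segments + (v + 1) % segments;       // next vertex on the same ring (wraps at the seam)
            GLuint i2 = (h + 1) * segments + v;                  // vertex below i0
            GLuint i3 = (h + 1) * segments + (v + 1) % segments; // vertex below i1
            // triangle "a" and triangle "b" of this quad, with a consistent winding
            ind.insert(ind.end(), { i0, i1, i2 });
            ind.insert(ind.end(), { i1, i3, i2 });
        }
    }
    return ind;
}
Each quad between two neighbouring rings contributes the two triangles "a" and "b" from the picture above; flip the order inside the two insert calls if the surface ends up facing the wrong way.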

Related

Problem multiplying vec3 by model matrice (scaling problem)

In my 3D application I have an Object3D class which constructs its bounding box at creation; this bounding box is a simple cuboid with a min and a max point.
However, when I translate my Object3D, I want to update my bounding box using the model matrix of my object:
BoundingBox& TransformBy(const glm::mat4& modelMatrice){
    //extract position of my model matrix
    glm::vec3 pos( modelMatrice[3] );
    //extract scale of my model matrix
    glm::vec3 scale( glm::length(glm::vec3(modelMatrice[0])), glm::length(glm::vec3(modelMatrice[1])), glm::length(glm::vec3(modelMatrice[2])) );
    //Creating vec4 min/max from vec3
    glm::vec4 min_(min.x, min.y, min.z, 1.0f);
    glm::vec4 max_(max.x, max.y, max.z, 1.0f);
    //Inverting the matrice to multiply my vec4 with it in order to rotate my min max
    glm::mat4 inverse = glm::inverse(modelMatrice);
    min_ = min_ * inverse;
    max_ = max_ * inverse;
    //Scale my min_ max_
    min_ *= glm::vec4(scale, 1.0f);
    max_ *= glm::vec4(scale, 1.0f);
    //adding model matrice translation to min_ and max_
    min_ += glm::vec4(pos, 0.0f);
    max_ += glm::vec4(pos, 0.0f);
    //Redefining max and min
    max = glm::vec3( (min_.x > max_.x)? min_.x : max_.x, (min_.y > max_.y)? min_.y : max_.y, (min_.z > max_.z)? min_.z : max_.z );
    min = glm::vec3( (min_.x < max_.x)? min_.x : max_.x, (min_.y < max_.y)? min_.y : max_.y, (min_.z < max_.z)? min_.z : max_.z );
    return *this;
}
Here is how I call my TransformBy function :
box.TransformBy(transform.GetModelMatrix());
For some reason this rotates and translates my min and max points correctly, but the scaling is not applied correctly, leaving my bounding box with a bigger scale than my 3D object.
Why is my scaling not working properly?
Maybe there is a less complicated way of doing what I want?
If I understand correctly how the result of GetModelMatrix() is supposed to look, i.e. that it is just a normal 4x4 transformation matrix, then I think I can guess what the problem is. I find it easiest to visualize/explain with an example, so hopefully that's alright.
Suppose your result of GetModelMatrix() is the following, a simple transformation matrix with no rotational component, just a translation by [5 6 7].
| 1 0 0 5 |
| 0 1 0 6 |
| 0 0 1 7 |
| 0 0 0 1 |
The inverse of that, which your code computes as inverse and then multiplies min_ and max_ by, is just:
| 1 0 0 -5 |
| 0 1 0 -6 |
| 0 0 1 -7 |
| 0 0 0 1 |
After calling glm::vec4 min_(min.x, min.y, min.z, 1.0f);, let us represent min_ by the vector [x y z 1]. Then min_ * inverse looks like:
            | 1 0 0 -5 |
            | 0 1 0 -6 |
[x y z 1] * | 0 0 1 -7 | = [x y z (-5x-6y-7z+1)]
            | 0 0 0 1 |
That is, the x, y, and z components are not altered because it is the |0 0 0 1| row that gets applied to them in the multiplication, not the column containing the translation components.
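To make this concrete, here is a tiny glm sketch (my own illustration, reusing the translation-by-[5 6 7] idea) showing how the two multiplication orders differ:
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <iostream>
int main() {
    // A model matrix that only translates by (5, 6, 7).
    glm::mat4 model = glm::translate(glm::mat4(1.0f), glm::vec3(5.0f, 6.0f, 7.0f));
    glm::vec4 p(1.0f, 2.0f, 3.0f, 1.0f);
    glm::vec4 colMul = model * p; // column-vector convention: translation applied -> (6, 8, 10, 1)
    glm::vec4 rowMul = p * model; // row-vector convention: x, y, z untouched, the offset ends up in w
    std::cout << colMul.x << ' ' << colMul.y << ' ' << colMul.z << ' ' << colMul.w << '\n';
    std::cout << rowMul.x << ' ' << rowMul.y << ' ' << rowMul.z << ' ' << rowMul.w << '\n';
}
With your min_ and max_ that is exactly the difference between modelMatrice * min_ (translation applied) and min_ * modelMatrice (x, y and z left untouched by the translation).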
May I ask, do you have a particular reason for (a) inverting GetModelMatrix() and (b) multiplying the matrix by the vector as opposed to vice versa? I'd guess that perhaps your code should instead look as follows:
BoundingBox& TransformBy(const glm::mat4 modelMatrice){
...
min_ = modelMatrice * min_;
max_ = modelMatrice * max_;
...
return *this;
}
Called with box.TransformBy(transform.GetModelMatrix());
Update:
In regards to your new code, I question the need to invert the transformation matrix for the rotation, and the need to split the scaling, translation and rotation into three separate steps. It feels like, unless there is an error somewhere else, the process should be as simple as taking your object's transformation matrix, multiplying min_ and max_ by it, and then doing the test you currently have at the end of the function for whether any min/max components swapped:
BoundingBox& TransformBy(const glm::mat4& modelMatrice){
min_ = modelMatrice * min_;
max_ = modelMatrice * max_;
max = glm::vec3( (min_.x > max_.x)? min_.x:max_.x, (min_.y > max_.y)? min_.y: max_.y, (min_.z > max_.z)? min_.z : max_.z);
min = glm::vec3( (min_.x < max_.x)? min_.x:max_.x, (min_.y < max_.y)? min_.y: max_.y, (min_.z < max_.z)? min_.z : max_.z);
return *this;
}
If the above five-line function does not work, please let me know how it fails; in that case, I feel the problem may lie elsewhere in your code.
However, to answer your updated question, as to why the scaling does not work: When you invert the transformation matrix, you are also inverting the scaling. Thus, you are not just rotating when you multiply by inverse: you are also applying the inverted scale. While you could fix this by multiplying by the scale again, the far simpler solution seems to be trying to get the above five-line solution working.

How to get texture coordinates from VTK IntersectWithLine?

I've loaded a texture mapped OBJ via vtkOBJReader and loaded it into a vtkModifiedBSPTree:
auto readerOther(vtkSmartPointer<vtkOBJReader>::New());
auto rawOtherPath(modelPathOther.toLatin1());
readerOther->SetFileName(rawOtherPath.data());
readerOther->Update();
auto meshDataOther(readerOther->GetOutput());
auto bspTreeOther(vtkSmartPointer<vtkModifiedBSPTree>::New());
bspTreeOther->SetDataSet(meshDataOther);
bspTreeOther->BuildLocator();
I then compute my line segment start and end and feed that into
if (bspTreeOther->IntersectWithLine(p1, p2, tolerance, distanceAlongLine, intersectionCoords, pcoords, subId, cellId, cell))
With all the relevant predefined variables of course.
What I need is the texture's UV coordinates at the point of intersection.
I'm so new to VTK that I haven't yet caught the logic of how it's put together; the abstraction layers are still losing me while I'm digging through the source.
I've hunted for this answer across SO and the VTK users archives and found vague hints given by those who understood VTK deeply to those who were nearly there themselves, and thus of little help to me thus far.
(Appended 11/9/2018)
To clarify, I'm working with non-degenerate triangulated meshes created by a single 3D scanner shot, so quads and other higher polygons are never going to be seen by my code. A general solution should account for such things, but that can be accomplished by triangulating the mesh first with a good application of handwavium.
Code
Note that if one vertex belongs to several polygons and has different texture coordinates, VTK will create duplicates of the vertex.
I don't use vtkCleanPolyData, because VTK will merge such "duplicates" and we will lose needed information, as far as I know.
I use vtkCellLocator instead of vtkModifiedBSPTree,
because in my case it was faster.
The main file is main.cpp.
You can find magic numbers in the start and end arrays — these are your p1 and p2;
I've set these values just as an example.
#include <vtkSmartPointer.h>
#include <vtkPointData.h>
#include <vtkCellLocator.h>
#include <vtkGenericCell.h>
#include <vtkOBJReader.h>
#include <vtkTriangleFilter.h>
#include <vtkMath.h>
#include <iostream>
int main(int argc, char * argv[])
{
    if (argc < 2)
    {
        std::cerr << "Usage: " << argv[0] << " OBJ_file_name" << std::endl;
        return EXIT_FAILURE;
    }
    auto reader{vtkSmartPointer<vtkOBJReader>::New()};
    reader->SetFileName(argv[1]);
    reader->Update();
    // Triangulate the mesh if needed
    auto triangleFilter{vtkSmartPointer<vtkTriangleFilter>::New()};
    triangleFilter->SetInputConnection(reader->GetOutputPort());
    triangleFilter->Update();
    auto mesh{triangleFilter->GetOutput()};
    // Use `auto mesh(reader->GetOutput());` instead if no triangulation needed
    // Build a locator to find intersections
    auto locator{vtkSmartPointer<vtkCellLocator>::New()};
    locator->SetDataSet(mesh);
    locator->BuildLocator();
    // Initialize variables needed for intersection calculation
    double start[3]{-1, 0, 0.5};
    double end[3]{ 1, 0, 0.5};
    double tolerance{1E-6};
    double relativeDistanceAlongLine;
    double intersectionCoordinates[3];
    double parametricCoordinates[3];
    int subId;
    vtkIdType cellId;
    auto cell{vtkSmartPointer<vtkGenericCell>::New()};
    // Find intersection
    int intersected = locator->IntersectWithLine(
        start,
        end,
        tolerance,
        relativeDistanceAlongLine,
        intersectionCoordinates,
        parametricCoordinates,
        subId,
        cellId,
        cell.Get()
    );
    // Get points of intersection cell
    auto pointsIds{vtkSmartPointer<vtkIdList>::New()};
    mesh->GetCellPoints(cellId, pointsIds);
    // Store coordinates and texture coordinates of vertices of the cell
    double meshTrianglePoints[3][3];
    double textureTrianglePoints[3][2];
    auto textureCoordinates{mesh->GetPointData()->GetTCoords()};
    for (unsigned pointNumber = 0; pointNumber < cell->GetNumberOfPoints(); ++pointNumber)
    {
        mesh->GetPoint(pointsIds->GetId(pointNumber), meshTrianglePoints[pointNumber]);
        textureCoordinates->GetTuple(pointsIds->GetId(pointNumber), textureTrianglePoints[pointNumber]);
    }
    // Normalize the coordinates
    double movedMeshTrianglePoints[3][3];
    for (unsigned i = 0; i < 3; ++i)
    {
        movedMeshTrianglePoints[0][i] = 0;
        movedMeshTrianglePoints[1][i] =
            meshTrianglePoints[1][i] -
            meshTrianglePoints[0][i];
        movedMeshTrianglePoints[2][i] =
            meshTrianglePoints[2][i] -
            meshTrianglePoints[0][i];
    }
    // Normalize the texture coordinates
    double movedTextureTrianglePoints[3][2];
    for (unsigned i = 0; i < 2; ++i)
    {
        movedTextureTrianglePoints[0][i] = 0;
        movedTextureTrianglePoints[1][i] =
            textureTrianglePoints[1][i] -
            textureTrianglePoints[0][i];
        movedTextureTrianglePoints[2][i] =
            textureTrianglePoints[2][i] -
            textureTrianglePoints[0][i];
    }
    // Calculate SVD of a matrix consisting of normalized vertices
    double U[3][3];
    double w[3];
    double VT[3][3];
    vtkMath::SingularValueDecomposition3x3(movedMeshTrianglePoints, U, w, VT);
    // Calculate pseudo inverse of a matrix consisting of normalized vertices
    double pseudoInverse[3][3]{0};
    for (unsigned i = 0; i < 3; ++i)
    {
        for (unsigned j = 0; j < 3; ++j)
        {
            for (unsigned k = 0; k < 3; ++k)
            {
                if (w[k] != 0)
                {
                    pseudoInverse[i][j] += VT[k][i] * U[j][k] / w[k];
                }
            }
        }
    }
    // Calculate interpolation matrix
    double interpolationMatrix[3][2]{0};
    for (unsigned i = 0; i < 3; ++i)
    {
        for (unsigned j = 0; j < 2; ++j)
        {
            for (unsigned k = 0; k < 3; ++k)
            {
                interpolationMatrix[i][j] += pseudoInverse[i][k] * movedTextureTrianglePoints[k][j];
            }
        }
    }
    // Calculate interpolated texture coordinates of the intersection point
    double interpolatedTexturePoint[2]{textureTrianglePoints[0][0], textureTrianglePoints[0][1]};
    for (unsigned i = 0; i < 2; ++i)
    {
        for (unsigned j = 0; j < 3; ++j)
        {
            interpolatedTexturePoint[i] += (intersectionCoordinates[j] - meshTrianglePoints[0][j]) * interpolationMatrix[j][i];
        }
    }
    // Print the result
    std::cout << "Interpolated texture coordinates";
    for (unsigned i = 0; i < 2; ++i)
    {
        std::cout << " " << interpolatedTexturePoint[i];
    }
    std::cout << std::endl;
    return EXIT_SUCCESS;
}
CMake project file CMakeLists.txt
cmake_minimum_required(VERSION 3.1)
PROJECT(IntersectInterpolate)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
find_package(VTK REQUIRED)
include(${VTK_USE_FILE})
add_executable(IntersectInterpolate MACOSX_BUNDLE main.cpp)
if(VTK_LIBRARIES)
target_link_libraries(IntersectInterpolate ${VTK_LIBRARIES})
else()
target_link_libraries(IntersectInterpolate vtkHybrid vtkWidgets)
endif()
Math
What we need
Suppose you have a mesh consisting of triangles and your vertices have texture coordinates.
Given the vertices of a triangle A, B and C and the corresponding texture coordinates A', B' and C', you want to find a mapping (an interpolation) from any other inner or boundary point of the triangle to the texture.
Let's make some reasonable assumptions:
Points A, B, C should correspond to their texture coordinates A', B', C';
Each point X on an edge, say AB, should correspond to the point X' on the A'B' line in the following way: |AX| / |AB| = |A'X'| / |A'B'| — halfway along the original edge should be halfway along the texture edge;
Centroid of the triangle (A + B + C) / 3 should correspond to centroid of the texture triangle (A' + B' + C') / 3.
Equations to solve
It looks like we want an affine mapping: the coordinates of the vertices of the original triangle should be multiplied by some coefficients and added to some constants.
Let's construct the system of equations
Ax * Mxx + Ay * Myx + Az * Mzx + M0x = A'x
Ax * Mxy + Ay * Myy + Az * Mzy + M0y = A'y
Ax * Mxz + Ay * Myz + Az * Mzz + M0z = 0
and the same for B and C.
You can see that we have 9 equations and 12 unknowns.
However, the equations containing Miz (for i in {x, y, z}) are satisfied by setting those unknowns to 0, and they play no role in the further computations, so we can simply drop them.
Thus, we have a system with 6 equations and 8 unknowns:
Ax * Mxx + Ay * Myx + Az * Mzx + M0x = A'x
Ax * Mxy + Ay * Myy + Az * Mzy + M0y = A'y
Let's write the entire system in matrix form:
 --             --     --          --     --          --
|  1  Ax  Ay  Az |     | M0x   M0y  |     | A'x   A'y  |
|  1  Bx  By  Bz |  x  | Mxx   Mxy  |  =  | B'x   B'y  |
|  1  Cx  Cy  Cz |     | Myx   Myy  |     | C'x   C'y  |
 --             --     | Mzx   Mzy  |      --          --
                        --          --
I subtract the coordinates of vertex A from B and C,
and the texture coordinates A' from B' and C',
so we now have a triangle whose first vertex sits at the origin of the coordinate system,
together with the corresponding texture coordinates.
This means the triangles are no longer translated (moved) relative to one another
and we don't need the M0 part of the interpolation matrix:
 --          --     --          --     --          --
| Bx  By  Bz  |     | Mxx   Mxy  |     | B'x   B'y  |
| Cx  Cy  Cz  |  x  | Myx   Myy  |  =  | C'x   C'y  |
 --          --     | Mzx   Mzy  |      --          --
                     --          --
Solution
Let's call the first matrix P, the second M and the last one T
P M = T
The matrix P is not square.
If we add a zero row to it, the matrix becomes singular.
So, we have to calculate its pseudo-inverse in order to solve the equation.
There's no function for calculating the pseudo-inverse of a matrix in VTK.
We go to the Moore–Penrose inverse article on Wikipedia and see that it can be calculated using SVD.
The vtkMath::SingularValueDecomposition3x3 function allows us to do it.
The function gives us the U, S and VT matrices.
I'll write the pseudo-inverse of matrix P as P",
the transpose of U as UT and the transpose of VT as V.
The pseudo-inverse of the diagonal matrix S is the diagonal matrix whose entries are 1 / Sii
where Sii is non-zero, and 0 where Sii is zero:
P = U S VT
P" = V S" UT
M = P" T
Usage
To apply the interpolation matrix,
we must not forget to translate the input and output vectors:
A' is a 2D vector of texture coordinates of the first vertex in the triangle,
A is a 3D vector of coordinates of the vertex,
M is the found interpolation matrix,
p is a 3D intersection point we want to get texture coordinates for,
t' is the resulting 2D vector with interpolated texture coordinates
t' = A' + (p - A) M
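Written as code, that last step is just a couple of loops; here is a small sketch (my own, mirroring the final loop of the main.cpp above, with illustrative names):
#include <array>
// Apply the interpolation matrix M to an intersection point p, following t' = A' + (p - A) M.
std::array<double, 2> interpolateTexture(const double M[3][2],
                                         const double A[3], const double Aprime[2],
                                         const double p[3])
{
    std::array<double, 2> t{Aprime[0], Aprime[1]};
    for (unsigned i = 0; i < 2; ++i)
        for (unsigned j = 0; j < 3; ++j)
            t[i] += (p[j] - A[j]) * M[j][i];
    return t;
}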
[Rewritten 2019/5/7 to reflect an updated understanding.]
After finding out that the parametric coordinates are inputs to a function from which one can get barycentric coordinates in the case of triangular cells, and then learning about what barycentric coordinates are, I was able to work out the following.
const auto readerOther(vtkSmartPointer<vtkOBJReader>::New());
const auto rawOtherPath(modelPathOther.toLatin1());
readerOther->SetFileName(rawOtherPath.data());
readerOther->Update();
const auto meshDataOther(readerOther->GetOutput());
const auto bspTreeOther(vtkSmartPointer<vtkModifiedBSPTree>::New());
bspTreeOther->SetDataSet(meshDataOther);
bspTreeOther->BuildLocator();

double point1[3]{0.0, 0.0, 0.0};  // start of line segment used to intersect the model.
double point2[3]{0.0, 0.0, 10.0}; // end of line segment
double distanceAlongLine;
double intersectionCoords[3]; // The coordinate of the intersection.
double parametricCoords[3];   // Parametric Coordinates of the intersection - see https://lorensen.github.io/VTKExamples/site/VTKBook/08Chapter8/#82-interpolation-functions
int subId;                    // ?
vtkIdType cellId;
double intersectedTextureCoords[2];

if (bspTreeOther->IntersectWithLine(point1, point2, TOLERANCE, distanceAlongLine, intersectionCoords, parametricCoords, subId, cellId))
{
    const auto textureCoordsOther(meshDataOther->GetPointData()->GetTCoords());
    const auto pointIds{meshDataOther->GetCell(cellId)->GetPointIds()};
    const auto vertexIndex0{pointIds->GetId(0)};
    const auto vertexIndex1{pointIds->GetId(1)};
    const auto vertexIndex2{pointIds->GetId(2)};
    double texCoord0[2];
    double texCoord1[2];
    double texCoord2[2];
    textureCoordsOther->GetTuple(vertexIndex0, texCoord0);
    textureCoordsOther->GetTuple(vertexIndex1, texCoord1);
    textureCoordsOther->GetTuple(vertexIndex2, texCoord2);
    const auto parametricR{parametricCoords[0]};
    const auto parametricS{parametricCoords[1]};
    const auto barycentricW0{1 - parametricR - parametricS};
    const auto barycentricW1{parametricR};
    const auto barycentricW2{parametricS};
    intersectedTextureCoords[0] =
        barycentricW0 * texCoord0[0] +
        barycentricW1 * texCoord1[0] +
        barycentricW2 * texCoord2[0];
    intersectedTextureCoords[1] =
        barycentricW0 * texCoord0[1] +
        barycentricW1 * texCoord1[1] +
        barycentricW2 * texCoord2[1];
}
Please note that this code is an interpretation of the actual code I'm using; I'm using Qt and its QVector2D and QVector3D classes along with some interpreter glue functions to go to and from arrays of doubles.
See https://lorensen.github.io/VTKExamples/site/VTKBook/08Chapter8 for details about the parametric coordinate systems of various cell types.

Explanation of the Perspective Projection Matrix (Second row)

I'm trying to figure out how the perspective projection matrix works.
According to this: https://www.opengl.org/sdk/docs/man2/xhtml/gluPerspective.xml
f = cotangent(fovy/2)
Intuitively I understand how it works (x and y values move further away from, or closer to, the bounding box), but I need a mathematical explanation of why this works. Maybe because of the intercept theorem (the theorem of intersecting lines)?
I found an explanation here: http://www.songho.ca/opengl/gl_projectionmatrix.html
But I don't understand the relevant part of it.
In my opinion, the explanation of the perspective projection matrix at songho.ca is the best one.
I'll try to retell the main idea without going into details. But first of all, let's clarify why the cotangent is used in the OpenGL docs.
What is the cotangent? According to Wikipedia:
The cotangent of an angle is the ratio of the length of the adjacent side to the length of the opposite side.
In the usual side view of the view frustum, near is the length of the adjacent side and top is the length of the opposite side.
The angle fovy/2 is the one we are interested in.
The angle fovy is the angle between the top and bottom planes of the frustum, so fovy/2 is the angle between the top (or bottom) plane and the symmetry axis.
So the [1,1] element of the projection matrix, which is defined as cotangent(fovy/2) in the OpenGL docs, is equivalent to the ratio near/top.
Let's take a point A somewhere inside the frustum and find the y' coordinate of the point A', the projection of A onto the near plane.
Using the ratio of similar triangles, the following relation can be inferred:
y' / near = y / -z
Or:
y' = near * y / -z
The y coordinate in normalized device coordinates can be obtained by dividing by the value top (the range (-top, top) is mapped to the range (-1.0, 1.0)), so:
y_ndc = near / top * y / -z
The coefficient near / top is a constant, but what about z? There is one very important detail about normalized device coordinates.
The output of the vertex shader is a four-component vector that is transformed into a three-component vector by dividing the first three components by the fourth one:
(x, y, z, w)  ->  (x / w, y / w, z / w)
So we can make the fourth component equal to -z. This can be done by setting the element [2,3] of the projection matrix to -1.
Similar reasoning can be done for the x coordinate.
We have found the following elements of projection matrix:
| near / right 0 0 0 |
| 0 near / top 0 0 |
| 0 0 ? ? |
| 0 0 -1 0 |
There are two elements that we haven't found yet; they are marked with '?'.
To make things clear, let's project an arbitrary point (x, y, z) to normalized device coordinates:
| near / right       0        0    0 |     | x |
|      0        near / top    0    0 |  X  | y |  =
|      0             0        ?    ? |     | z |
|      0             0       -1    0 |     | 1 |

     | near / right * x |
  =  | near / top  * y  |
     |         ?        |
     |        -z        |
And finally, after dividing by the w component we will get:
    | - near / right * x / z |
    | - near / top  * y / z  |
    |            ?           |
Note that the result matches the equation inferred earlier.
As for the third component marked with '?', more complex reasoning is needed to find out how to calculate it. Refer to songho.ca for more information.
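To connect this back to code, here is a small standalone sketch (my own, not part of the original explanation): it fills in the full matrix, using the standard OpenGL values -(far+near)/(far-near) and -2*far*near/(far-near) for the two '?' entries, and checks that a point on the near plane ends up at z = -1 in normalized device coordinates. glm::frustum(-r, r, -t, t, n, f) builds the same matrix.
#include <glm/glm.hpp>
#include <iostream>
int main() {
    float n = 0.1f, f = 100.0f, r = 0.08f, t = 0.06f;
    glm::mat4 proj(0.0f);                 // glm is column-major: proj[column][row]
    proj[0][0] = n / r;
    proj[1][1] = n / t;
    proj[2][2] = -(f + n) / (f - n);      // first '?'
    proj[3][2] = -2.0f * f * n / (f - n); // second '?'
    proj[2][3] = -1.0f;                   // copies -z into w, which enables the perspective divide
    // A point lying on the near plane, halfway towards the right/top clipping planes.
    glm::vec4 clip = proj * glm::vec4(0.04f, 0.03f, -n, 1.0f);
    glm::vec3 ndc = glm::vec3(clip) / clip.w;
    std::cout << ndc.x << " " << ndc.y << " " << ndc.z << "\n"; // prints roughly 0.5 0.5 -1
}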
I hope that my explanations make things a bit more clear.

glsl fragment shader calculate texture position

I'm writing a fragment shader for rendering a 1D texture containing an arbitrary byte array into a kind of barcode.
My idea is to encode each byte as a square divided diagonally (so each of the 4 triangles represents 2 bits), like so:
 _____
|\ A /|     each byte encoded as binary is DDCCBBAA,
| \ / |     the colors are:  Red   if 11
|D X B|                      Green if 10
| / \ |                      Blue  if 01
|/ C \|                      Black if 00
 ¯¯¯¯¯      so color can be calculated as: [(H & L), (H & !L), (!H & L)]

            so for example: 198 == 11 00 01 10 would be:
 _____                             DD CC BB AA
|\ G /|
| \ / |     A=10=Green
|R X B|     B=01=Blue
| / \ |     C=00=Black
|/ b \|     D=11=Red
 ¯¯¯¯¯      (B=Blue, b=Black)
What I've got so far is a function for encoding 2 bools (H and L in the example notation) into a vec3 color, and a function for encoding a byte plus a "corner index" (A/B/C/D in the example) into that color:
#version 400
out vec4 gl_FragColor;         // the output fragment
in vec2 vf_texcoord;           // normalized texture coords, 0/0=top/left
uniform isampler1D uf_texture; // the input data
uniform int uf_texLen;         // the input data's byte count

vec3 encodeColor(bool H, bool L){
    return vec3(H&&L, H&&!L, !H&&L);
}

vec3 encodeByte(int data, int corner){
    int maskL = 1 << (corner * 2); // low bit of the 2-bit field for this corner (A=0 ... D=3)
    int maskH = maskL << 1;        // high bit of that field
    bool H = bool(data & maskH);
    bool L = bool(data & maskL);
    return encodeColor(H, L);
}

void main(void) {
    // the part I can't figure out
    gl_FragColor.rgb = encodeByte(/* some stuff calculated by the part above */);
    gl_FragColor.a = 1;
}
the problem is I can't figure out how to calculate which byte to encode and in what "corner" the current fragment is.
(Note: the code here is off the top of my head and untested, and I haven't written a lot of GLSL. The variable names are sloppy and I've probably made some stupid syntax mistakes, but it should be enough to convey the idea.)
The first thing you need to do is translate the texture coordinates into a data index (which square to display colors for) and a modified set of texture coordinates that represent the position within that square.
For a horizontal arrangement, you could do something like:
float temp = vf_texcoord.x * float(uf_texLen);
float temp2 = floor(temp);
int dataIndex = int(temp2);
vec2 squareTexcoord = vec2(temp - temp2, vf_texcoord.y); // fractional part = position within the square
Then you'd use squareTexcoord to decide which quadrant of the square you're in:
int corner;
vec2 squareTexcoord2 = squareTexcoord - vec2(0.5);
if (abs(squareTexcoord2.x) > abs(squareTexcoord2.y)) { // left or right triangle
    if (squareTexcoord2.x > 0.0) { // right triangle -> "B"
        corner = 1;
    }
    else { // left triangle -> "D"
        corner = 3;
    }
}
else { // top or bottom triangle
    if (squareTexcoord2.y > 0.0) { // bottom triangle -> "C" (texcoord y grows downwards)
        corner = 2;
    }
    else { // top triangle -> "A"
        corner = 0;
    }
}
And now you have all you need for shading:
gl_FragColor = vec4(encodeByte(int(texelFetch(uf_texture, dataIndex, 0).r), corner), 1.0);

Calculating normals in a triangle mesh

I have drawn a triangle mesh with 10000 vertices (100x100) and it will be a grass ground. I used glDrawElements() for it. I have looked all day and still can't understand how to calculate the normals for this. Does each vertex have its own normal, or does each triangle have its own normal? Can someone point me in the right direction on how to edit my code to incorporate normals?
struct vertices {
GLfloat x;
GLfloat y;
GLfloat z;
}vertices[10000];
GLuint indices[60000];
/*
99..9999
98..9998
........
01..9901
00..9900
*/
void CreateEnvironment() {
int count=0;
for (float x=0;x<10.0;x+=.1) {
for (float z=0;z<10.0;z+=.1) {
vertices[count].x=x;
vertices[count].y=0;
vertices[count].z=z;
count++;
}
}
count=0;
for (GLuint a=0;a<99;a++){
for (GLuint b=0;b<99;b++){
GLuint v1=(a*100)+b;indices[count]=v1;count++;
GLuint v2=(a*100)+b+1;indices[count]=v2;count++;
GLuint v3=(a*100)+b+100;indices[count]=v3;count++;
}
}
count=30000;
for (GLuint a=0;a<99;a++){
for (GLuint b=0;b<99;b++){
indices[count]=(a*100)+b+100;count++;//9998
indices[count]=(a*100)+b+1;count++;//9899
indices[count]=(a*100)+b+101;count++;//9999
}
}
}
void ShowEnvironment(){
//ground
glPushMatrix();
GLfloat GroundAmbient[]={0.0,0.5,0.0,1.0};
glMaterialfv(GL_FRONT,GL_AMBIENT,GroundAmbient);
glEnableClientState(GL_VERTEX_ARRAY);
glIndexPointer( GL_UNSIGNED_BYTE, 0, indices );
glVertexPointer(3,GL_FLOAT,0,vertices);
glDrawElements(GL_TRIANGLES,60000,GL_UNSIGNED_INT,indices);
glDisableClientState(GL_VERTEX_ARRAY);
glPopMatrix();
}
EDIT 1
Here is the code I have written out. I just used arrays instead of vectors and I stored all of the normals in the struct called normals. It still doesn't work however. I get an unhandled exception at *indices.
struct Normals {
GLfloat x;
GLfloat y;
GLfloat z;
}normals[20000];
Normals* normal = normals;
//***************************************ENVIRONMENT*************************************************************************
struct vertices {
GLfloat x;
GLfloat y;
GLfloat z;
}vertices[10000];
GLuint indices[59403];
/*
99..9999
98..9998
........
01..9901
00..9900
*/
void CreateEnvironment() {
int count=0;
for (float x=0;x<10.0;x+=.1) {
for (float z=0;z<10.0;z+=.1) {
vertices[count].x=x;
vertices[count].y=rand()%2-2;;
vertices[count].z=z;
count++;
}
}
//calculate normals
GLfloat vector1[3];//XYZ
GLfloat vector2[3];//XYZ
count=0;
for (int x=0;x<9900;x+=100){
for (int z=0;z<99;z++){
vector1[0]= vertices[x+z].x-vertices[x+z+1].x;//vector1x
vector1[1]= vertices[x+z].y-vertices[x+z+1].y;//vector1y
vector1[2]= vertices[x+z].z-vertices[x+z+1].z;//vector1z
vector2[0]= vertices[x+z+1].x-vertices[x+z+100].x;//vector2x
vector2[1]= vertices[x+z+1].y-vertices[x+z+100].y;//vector2y
vector2[2]= vertices[x+z+1].z-vertices[x+z+100].z;//vector2z
normals[count].x= vector1[1] * vector2[2]-vector1[2]*vector2[1];
normals[count].y= vector1[2] * vector2[0] - vector1[0] * vector2[2];
normals[count].z= vector1[0] * vector2[1] - vector1[1] * vector2[0];count++;
}
}
count=10000;
for (int x=100;x<10000;x+=100){
for (int z=0;z<99;z++){
vector1[0]= vertices[x+z].x-vertices[x+z+1].x;//vector1x -- JUST ARRAYS
vector1[1]= vertices[x+z].y-vertices[x+z+1].y;//vector1y
vector1[2]= vertices[x+z].z-vertices[x+z+1].z;//vector1z
vector2[0]= vertices[x+z+1].x-vertices[x+z-100].x;//vector2x
vector2[1]= vertices[x+z+1].y-vertices[x+z-100].y;//vector2y
vector2[2]= vertices[x+z+1].z-vertices[x+z-100].z;//vector2z
normals[count].x= vector1[1] * vector2[2]-vector1[2]*vector2[1];
normals[count].y= vector1[2] * vector2[0] - vector1[0] * vector2[2];
normals[count].z= vector1[0] * vector2[1] - vector1[1] * vector2[0];count++;
}
}
count=0;
for (GLuint a=0;a<99;a++){
for (GLuint b=0;b<99;b++){
GLuint v1=(a*100)+b;indices[count]=v1;count++;
GLuint v2=(a*100)+b+1;indices[count]=v2;count++;
GLuint v3=(a*100)+b+100;indices[count]=v3;count++;
}
}
count=30000;
for (GLuint a=0;a<99;a++){
for (GLuint b=0;b<99;b++){
indices[count]=(a*100)+b+100;count++;//9998
indices[count]=(a*100)+b+1;count++;//9899
indices[count]=(a*100)+b+101;count++;//9999
}
}
}
void ShowEnvironment(){
//ground
glPushMatrix();
GLfloat GroundAmbient[]={0.0,0.5,0.0,1.0};
GLfloat GroundDiffuse[]={1.0,0.0,0.0,1.0};
glMaterialfv(GL_FRONT,GL_AMBIENT,GroundAmbient);
glMaterialfv(GL_FRONT,GL_DIFFUSE,GroundDiffuse);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer( GL_FLOAT, 0, normal);
glVertexPointer(3,GL_FLOAT,0,vertices);
glDrawElements(GL_TRIANGLES,60000,GL_UNSIGNED_INT,indices);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glPopMatrix();
}
//***************************************************************************************************************************
Does each vertex have its own normals or does each triangle have its own normals?
Like so often, the answer is: "It depends". Since a normal is defined as the vector perpendicular to all vectors within a given plane (in N dimensions), you need a plane to calculate a normal. A vertex position is just a point and thus singular, so you actually need a face to calculate a normal. Thus, naively, one could assume that normals are per face, as the first step in normal calculation is determining the face normals by evaluating the cross product of the face's edges.
Say you have a triangle with points A, B, C, then these points have position vectors ↑A, ↑B, ↑C and the edges have vectors ↑B - ↑A and ↑C - ↑A so the face normal vector is ↑Nf = (↑B - ↑A) × (↑C - ↑A)
Note that the magnitude of ↑Nf as it's stated above is directly proportional to the face's area.
In smooth surfaces vertices are shared between faces (or you could say those faces share a vertex). In that case the normal at the vertex is not one of the face normals of the faces it is part of, but a linear combination of them:
↑Nv = ∑ p ↑Nf ; where p is a weighting for each face.
One could assume an equal weighting between the participating face normals, but it makes more sense to assume that the larger a face is, the more it contributes to the normal.
Now recall that you normalize a vector ↑v by scaling it with its reciprocal length: ↑vi = ↑v / |↑v|. But as already mentioned, the length of each face normal already depends on the face's area. So the weighting factor p given above is already contained in the vector itself: its length, a.k.a. magnitude. So we can get the vertex normal vector by simply summing up all the face normals.
In lighting calculations the normal vector must be unit length, i.e. normalized, to be usable. So after summing up, we normalize the newly found vertex normal and use that.
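A minimal sketch of this accumulate-and-normalize approach (my own illustration using glm, assuming an indexed triangle list) could look like this:
#include <glm/glm.hpp>
#include <cstdint>
#include <vector>
std::vector<glm::vec3> computeVertexNormals(const std::vector<glm::vec3>& positions,
                                            const std::vector<std::uint32_t>& indices)
{
    std::vector<glm::vec3> normals(positions.size(), glm::vec3(0.0f));
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3)
    {
        const glm::vec3& a = positions[indices[i]];
        const glm::vec3& b = positions[indices[i + 1]];
        const glm::vec3& c = positions[indices[i + 2]];
        // Unnormalized face normal; its magnitude is proportional to the face area,
        // which is exactly the weighting p discussed above.
        glm::vec3 faceNormal = glm::cross(b - a, c - a);
        normals[indices[i]]     += faceNormal;
        normals[indices[i + 1]] += faceNormal;
        normals[indices[i + 2]] += faceNormal;
    }
    for (glm::vec3& n : normals)
        if (glm::dot(n, n) > 0.0f)   // avoid normalizing a zero vector (isolated vertices)
            n = glm::normalize(n);
    return normals;
}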
The careful reader may have noticed that I specifically said smooth surfaces share vertices. And in fact, if you have creases / hard edges in your geometry, then the faces on either side don't share vertices. In OpenGL a vertex is the whole combination of
position
normal
(colour)
N texture coordinates
M further attributes
Change one of these and you get a completely different vertex. Now some 3D modelers see a vertex only as a point's position and store the rest of those attributes per face (Blender is such a modeler). This saves some memory (or considerable memory, depending on the number of attributes). But OpenGL needs the whole thing, so when working with such a mixed-paradigm file you will have to decompose it into OpenGL-compatible data first. Have a look at one of Blender's export scripts, like the PLY exporter, to see how it's done.
Now to cover some other thing. In your code you have this:
glIndexPointer( GL_UNSIGNED_BYTE, 0, indices );
The index pointer has nothing to do with vertex array indices! It is an anachronism from the days when graphics still used palettes instead of true color. A pixel's colour wasn't set by giving its RGB values, but by a single number offsetting into a limited palette of colours. Palette colours can still be found in several graphics file formats, but no decent piece of hardware uses them anymore.
Please erase glIndexPointer (and glIndex) from your memory and your code; they don't do what you think they do. The whole indexed color mode is arcane to use, and frankly I don't know of any hardware built after 1998 that still supports it.
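For reference, a sketch of the client-array setup without glIndexPointer (my own, assuming a current legacy OpenGL context, three floats per position, and one normal per vertex; the function and parameter names are illustrative) might look like this:
#include <GL/gl.h>
void drawGroundMesh(const GLfloat* positions, const GLfloat* normals,
                    const GLuint* indices, GLsizei indexCount)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, positions); // 3 floats per position, tightly packed
    glNormalPointer(GL_FLOAT, 0, normals);      // 3 floats per normal, one per vertex
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, indices);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}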
Thumbs up for datenwolf! I completely agree with his approach. Adding the normal vectors of the adjacent triangles for each vertex and then normalising is the way to go. I just want to push the answer a little bit and have a closer look at the particular but quite common case of a rectangular, smooth mesh that has a constant x/y step. In other words, a rectangular x/y grid with a variable height at each point.
Such a mesh is created by looping over x and y and setting a value for z and can represent things like the surface of a hill. So each point of the mesh is represented by a vector
P = (x, y, f(x,y))
where f(x,y) is a function giving the z of each point on the grid.
Usually to draw such a mesh we use a triangle strip or a triangle fan, but any technique should give a similar topology for the resulting triangles.
      |  /   |  /   |  /   |  /
 ...--+------U------UR-----+--...
      |     /|     /|     /|            Y
      |    / |  2 / |    / |            ^
      |   /  |   /  |   /  |            |
      |  / 1 |  / 3 |  /   |            |
 ...--L------P------R------+--...       +-----> X
      |     /|     /|     /|
      |  6 / |  4 / |    / |
      |   /  |   /  |   /  |
      |  / 5 |  /   |  /   |
 ...--DL-----D------+------+--...
      |     /|     /|     /|
For a triangleStrip each vertex P=(x0, y0, z0) has 6 adjacent vertices denoted
up = (x0 , y0 + ay, Zup)
upright = (x0 + ax, y0 + ay, Zupright)
right = (x0 + ax, y0 , Zright)
down = (x0 , y0 - ay, Zdown)
downleft = (x0 - ax, y0 - ay, Zdownleft)
left = (x0 - ax, y0 , Zleft)
where ax/ay is the constant grid step on the x/y axis respectively. On a square grid ax = ay.
ax = width / (nColumns - 1)
ay = height / (nRows - 1)
Thus each vertex has 6 adjacent triangles, each one with its own normal vector (denoted N1 to N6). These can be calculated as the cross product of the two edge vectors running from P along the sides of the triangle, being careful about the order in which we take the cross product so that the normal points in the Z direction, towards you:
N1 = (up - P) x (left - P)
   = ( ay * (Zleft - z0),
      -ax * (Zup - z0),
       ax * ay )
N2 = (upright - P) x (up - P)
N3 = (right - P) x (upright - P)
N4 = (down - P) x (right - P)
N5 = (downleft - P) x (down - P)
N6 = (left - P) x (downleft - P)
And the resulting normal vector for each point P is the sum of N1 to N6. We normalise after summing. It's very easy to create a loop, calculate the values of each normal vector, add them and then normalise. However, as pointed out by Mr. Shickadance, this can take quite a while, especially for large meshes and/or on embedded devices.
If we have a closer look and perform the calculations by hand, we will find out that most of the terms cancel each other out, leaving us with a very elegant and easy-to-calculate final solution for the resulting vector N. The point here is to speed up calculations by avoiding computing the coordinates of N1 to N6 and doing 6 cross products and 6 additions for each point. Algebra helps us to jump straight to the solution, using less memory and less CPU time.
I will not show the details of the calculations, as they are long but straightforward, and will jump to the final expression of the normal vector for any point on the grid. Only N1 is decomposed above for the sake of clarity; the other vectors look alike. After summing we obtain N, which is not yet normalized:
N = N1 + N2 + ... + N6
  = .... (long but easy algebra, dropping the common positive factor ax*ay, which does not change the direction) ...
  = ( (2*(Zleft - Zright) - Zupright + Zdownleft + Zup    - Zdown) / ax,
      (2*(Zdown - Zup)    - Zupright + Zdownleft + Zright - Zleft) / ay,
      6 )
There you go! Just normalise this vector and you have the normal vector for any point on the grid, provided you know the Z values of its surrounding points and the horizontal/vertical step of your grid.
Note that this is the weighted average of the surrounding triangles' normal vectors. The weight is the area of each triangle and is already included in the cross product.
You can even simplify it more by only taking into account the Z values of four surrounding points (up,down,left and right). In that case you get :
N = N1 + N2 + N3 + N4                       ..--+----U----+--..
  = ( (Zleft - Zright) / ax,                    | \ 1|2 / |
      (Zdown - Zup   ) / ay,                    |  \ | /  |
      2 )                                   ..--L----P----R--..
                                                |  / | \  |
                                                | / 4|3 \ |
                                            ..--+----D----+--..
which is even more elegant and even faster to calculate.
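As a quick illustration, here is a sketch of that four-neighbour formula in C++ (my own, assuming a row-major height field heights[row * nColumns + col] with columns along X, rows along Y and border samples clamped to the edge):
#include <algorithm>
#include <vector>
#include <glm/glm.hpp>
std::vector<glm::vec3> gridNormals(const std::vector<float>& heights,
                                   int nRows, int nColumns, float ax, float ay)
{
    // Height lookup with the borders clamped to the nearest valid sample.
    auto z = [&](int row, int col) {
        row = std::max(0, std::min(row, nRows - 1));
        col = std::max(0, std::min(col, nColumns - 1));
        return heights[row * nColumns + col];
    };
    std::vector<glm::vec3> normals(heights.size());
    for (int row = 0; row < nRows; ++row)
        for (int col = 0; col < nColumns; ++col)
        {
            float zLeft  = z(row, col - 1), zRight = z(row, col + 1);
            float zDown  = z(row - 1, col), zUp    = z(row + 1, col);
            // N = ( (Zleft - Zright) / ax, (Zdown - Zup) / ay, 2 ), then normalize.
            glm::vec3 n((zLeft - zRight) / ax, (zDown - zUp) / ay, 2.0f);
            normals[row * nColumns + col] = glm::normalize(n);
        }
    return normals;
}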
Hope this will make some meshes faster.
Cheers
Per-vertex.
Use cross-products to calculate the face normals for the triangles surrounding a given vertex, add them together, and normalize.
As simple as it may seem, calculating the normal of a triangle is only part of the problem. The cross product of 2 sides of the polygon is sufficient in triangular cases, unless the triangle is collapsed onto itself and degenerate; in that case there is no one valid normal, so you can select one to your liking.
So why is the normalized cross product only part of the problem? The winding order of the vertices in that polygon defines the direction of the normal, i.e. if one pair of vertices is swapped in place, the normal will point in the opposite direction. So in fact this can be problematic if the mesh itself contains inconsistencies in that regard, i.e. parts of it assume one ordering, while other parts assume different orderings. One famous example is the original Stanford Bunny model, where some parts of the surface point inwards, while others point outwards. The reason for that is that the model was constructed using a scanner, and no care was taken to produce triangles with regular winding patterns. (Obviously, clean versions of the bunny also exist.)
The winding problem is even more prominent if polygons can have multiple vertices, because in that case you would be averaging partial normals of the semi-triangulation of that polygon. Consider the case where partial normals are pointing in opposite directions, resulting in normal vectors of length 0 when taking the mean!
In the same sense, disconnected polygon soups and point clouds present challenges for accurate reconstruction due to the ill-defined winding number.
One potential strategy that is often used to solve this problem is to shoot random rays from outside towards the center of each semi-triangulation (i.e. ray stabbing). But one cannot assume that the triangulation is valid if polygons can contain multiple vertices, so rays may miss that particular sub-triangle. If a ray hits, the normal facing opposite to the ray direction, i.e. the one with dot(ray, n) < 0, can be used as the normal for the entire polygon. Obviously this is rather expensive and scales with the number of vertices per polygon.
Thankfully, there's great new work that describes an alternative method that is not only faster (for large and complex meshes) but also generalizes the 'winding order' concept for constructions beyond polygon meshes, such as point clouds and polygon soups, iso-surfaces, and point-set surfaces, where connectivity may not even be defined!
As outlined in the paper, the method constructs a hierarchical splitting tree representation that is refined progressively, taking the parent 'dipole' orientation into account at every split operation. A polygon normal would then simply be an integration (mean) over all di-poles (i.e. point+normal pairs) of the polygon.
For people who are dealing with unclean mesh / point-cloud data from lidar scanners or other sources, this could definitely be a game-changer.
For those like me who came across this question, the answer might be this:
// Compute Vertex Normals
std::vector<sf::Glsl::Vec3> verticesNormal;
verticesNormal.resize(verticesCount);
for (std::size_t i = 0; i < indices.size(); i += 3)
{
// Get the face normal
auto vector1 = verticesPos[indices[(size_t)i + 1]] - verticesPos[indices[i]];
auto vector2 = verticesPos[indices[(size_t)i + 2]] - verticesPos[indices[i]];
auto faceNormal = sf::VectorCross(vector1, vector2);
sf::Normalize(faceNormal);
// Add the face normal to the 3 vertices normal touching this face
verticesNormal[indices[i]] += faceNormal;
verticesNormal[indices[(size_t)i + 1]] += faceNormal;
verticesNormal[indices[(size_t)i + 2]] += faceNormal;
}
// Normalize vertices normal
for (std::size_t i = 0; i < verticesNormal.size(); i++)
sf::Normalize(verticesNormal[i]);
The easy way is to translate one of the triangle's points (p1, p2, p3), say p1, to (0,0,0), which means (x2,y2,z2) -> (x2-x1, y2-y1, z2-z1) and (x3,y3,z3) -> (x3-x1, y3-y1, z3-z1). Then you perform a dot product on the transformed points to obtain the planar slope, or a cross product to obtain the outward normal.
See:
https://en.wikipedia.org/wiki/Cross_product#/media/File:Cross_product_vector.svg
for a simple visual representation of the difference between cross product and dot product.
Moving one of the points to the origin is basically equivalent to generating vectors along p1p2 and p1p3.