glsl fragment shader calculate texture position - opengl

I'm writing a fragment shader for rendering a 1D texture containing an arbitrary byte array into a kind of barcode.
My idea is to encode each byte into a square divided diagonally (so each of the 4 triangles represents 2 bits), like so:
 _____
|\ A /|     each byte encoded as binary is DDCCBBAA,
| \ / |     the colors are: Red   if 11
|D X B|                     Green if 10
| / \ |                     Blue  if 01
|/ C \|                     Black if 00
 ¯¯¯¯¯      so color can be calculated as: [(H & L), (H & !L), (!H & L)]
so for example: 198 == 11 00 01 10 would be:
                       DD CC BB AA
 _____
|\ G /|
| \ / |     A = 10 = Green
|R X B|     B = 01 = Blue
| / \ |     C = 00 = Black
|/ b \|     D = 11 = Red
 ¯¯¯¯¯      (B=Blue, b=Black)
What I have so far is a function for encoding 2 bools (H and L in the notation above) into a vec3 color, and a function for encoding a byte plus a "corner index" (A/B/C/D in the example) into that color:
#version 400
out vec4 gl_FragColor;          // the output fragment
in vec2 vf_texcoord;            // normalized texture coords, 0/0 = top/left
uniform isampler1D uf_texture;  // the input data
uniform int uf_texLen;          // the input data's byte count
vec3 encodeColor(bool H, bool L){
    return vec3(H && L, H && !L, !H && L);
}
vec3 encodeByte(int data, int corner){
    int shiftL = corner * 2;    // bit position of this corner's low bit
    int shiftH = shiftL + 1;    // bit position of this corner's high bit
    int maskL = 1 << shiftL;
    int maskH = 1 << shiftH;
    bool H = bool((data & maskH) >> shiftH);
    bool L = bool((data & maskL) >> shiftL);
    return encodeColor(H, L);
}
void main(void) {
    // the part I can't figure out
    gl_FragColor.rgb = encodeByte(/* some stuff calculated by the part above */);
    gl_FragColor.a = 1.0;
}
The problem is that I can't figure out how to calculate which byte to encode and which "corner" the current fragment is in.

(Note, the code here is off the top of my head and untested, and I haven't written a lot of GLSL. The variable names are sloppy and I've probably made some stupid syntax mistakes, but it should be enough to convey the idea.)
The first thing you need to do is translate the texture coordinates into a data index (which square to display colors for) and a modified set of texture coordinates that represent the position within that square.
For a horizontal arrangement, you could do something like:
float temp = vf_texcoord.x * float(uf_texLen);
float temp2 = floor(temp);
int dataIndex = int(temp2);
vec2 squareTexcoord = vec2(temp - temp2, vf_texcoord.y);
Then you'd use squareTexcoord to decide which quadrant of the square you're in:
int corner;
vec2 squareTexcoord2 = squareTexcoord - vec2(0.5, 0.5);
if (abs(squareTexcoord2.x) > abs(squareTexcoord2.y)) { // Left or right triangle
    if (squareTexcoord2.x > 0.0) { // Right triangle
        corner = 0;
    }
    else { // Left triangle
        corner = 1;
    }
}
else { // Top or bottom triangle
    if (squareTexcoord2.y > 0.0) { // Bottom triangle
        corner = 2;
    }
    else { // Top triangle
        corner = 3;
    }
}
And now you have all you need for shading:
gl_FragColor = vec4(encodeByte(int(texelFetch(uf_texture, dataIndex, 0).r), corner), 1.0);
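Putting the pieces together (my own assembly of the snippets above, untested like the rest; the corner numbering still has to be matched to the A/B/C/D layout from the question), main() could look roughly like this:
void main(void) {
    // which square along the strip, and where we are inside it
    float scaled = vf_texcoord.x * float(uf_texLen);
    int dataIndex = int(floor(scaled));
    vec2 squareTexcoord = vec2(scaled - floor(scaled), vf_texcoord.y);
    // which of the four triangles of the square the fragment is in
    vec2 c = squareTexcoord - vec2(0.5);
    int corner;
    if (abs(c.x) > abs(c.y))
        corner = (c.x > 0.0) ? 0 : 1;   // right : left
    else
        corner = (c.y > 0.0) ? 2 : 3;   // bottom : top (texcoord y grows downwards)
    gl_FragColor.rgb = encodeByte(texelFetch(uf_texture, dataIndex, 0).r, corner);
    gl_FragColor.a = 1.0;
}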


Generating spheres with vertices and indices?

I'm currently working on an OpenGL project where I should currently generate spheres with vertices and indices.
All vertices represent points, and indices tell the graphics card which 3 points to link into a triangle.
Example: indices {0,1,2} will make a triangle with the first, second and third points.
I've managed to make a UV sphere with correct vertices, but I don't know how I can get the indices.
Here is my current result:
And here is my code:
Mesh ObjectFactory::createSphere() {
    // Texture loaded and binded to the shaderprogram
    Texture textures[]{
        Texture("resources/pop_cat.png", "diffuse", 0, GL_RGBA, GL_UNSIGNED_BYTE),
    };
    int numHorizontalSegments = 20;
    int numVerticalSegments = 20;
    Vertex vertices[numVerticalSegments * numVerticalSegments] = {};
    GLuint indices[numVerticalSegments * numVerticalSegments] = {};
    int i = 0;
    for (int h = 0; h < numHorizontalSegments; h++) {
        float angle1 = (h + 1) * M_PI / (numHorizontalSegments + 1);
        for (int v = 0; v < numVerticalSegments; v++) {
            i++;
            float angle2 = v * (2 * M_PI) / numVerticalSegments;
            float x = sinf(angle1) * cosf(angle2);
            float y = cosf(angle1);
            float z = sinf(angle1) * sinf(angle2);
            vertices[i] = Vertex{glm::vec3(x, y, z), glm::vec3(0.83f, 0.70f, 0.44f), glm::vec2(0.0f, 0.0f)};
            indices[i] = i;
        }
    }
    // Store mesh data in vectors for the mesh
    std::vector<Vertex> verts(vertices, vertices + sizeof(vertices) / sizeof(Vertex));
    std::vector<GLuint> ind(indices, indices + sizeof(indices) / sizeof(GLuint));
    std::vector<Texture> tex(textures, textures + sizeof(textures) / sizeof(Texture));
    // Create sphere mesh
    return {verts, ind, tex};
}
Thank you a lot for your help!
You have created a sphere of vertices by calculating horizontal circles in multiple vertical layers, a UV sphere. Good.
You are just adding one index for each vertex, for a total of one index per vertex; that is not how the concept works.
What you need to do, repeatedly, is find the three indices into your array of vertices that make up a usable triangle.
Among other things this means that you will name the same vertex index multiple times; mostly six times, because most of your vertices are part of six triangles: at least one towards the "upper left", one towards the "upper right", one to the "lower left" and one to the "lower right", and usually two of those directions are doubled, e.g. two triangles towards the upper right.
"Mostly six", because of edge cases like the "north pole" and "south pole", which participate in many triangles and quads.
Let's look at a part of your UV sphere:
V03----------V02----------V01----------V00
|            |         __/|         __/|
|            |      __/   |   c  __/   |
|            |   __/      |   __/      |
|            |  /    f    |  /    d    |
V13----------V12----------V11----------V10
|            |         __/|  e      __/|
|            |   a  __/   |      __/   |
|            |   __/      |   __/      |
|            |  /    b    |  /         |
V23----------V22----------V21----------V20
|            |            |            |
|            |            |            |
|            |            |            |
|            |            |            |
V33----------V32----------V31----------V30
You can see that for the quad in the middle (represented by the two triangles "a" and "b"),
you need six index entries, even though it only has 4 vertices, and each of those vertices will be used even more often by the other quads touching it.
For triangle "a" you get the indices of the vertices V11, V12, V22 (mind the winding order: depending on which side you want the surface to face, consistently thinking either "counter-clockwise" or "clockwise" will get you to the desired result within a few tries).
For triangle "b" you get the indices of the vertices V11, V22, V21.
Also, the vertex V11 will have to be indexed again for the triangles "c", "d", "e" and "f", since it participates in six triangles across four quads.
You managed to do your UV sphere fine, so I do not think I need to provide the full loop and selection code for getting this done. You already managed to visualise the result of your UV sphere, so just try, check the result, and retry.

How to get texture coordinates from VTK IntersectWithLine?

I've loaded a texture-mapped OBJ via vtkOBJReader and put it into a vtkModifiedBSPTree:
auto readerOther(vtkSmartPointer<vtkOBJReader>::New());
auto rawOtherPath(modelPathOther.toLatin1());
readerOther->SetFileName(rawOtherPath.data());
readerOther->Update();
auto meshDataOther(readerOther->GetOutput());
auto bspTreeOther(vtkSmartPointer<vtkModifiedBSPTree>::New());
bspTreeOther->SetDataSet(meshDataOther);
bspTreeOther->BuildLocator();
I then compute my line segment start and end and feed that into
if (bspTreeOther->IntersectWithLine(p1, p2, tolerance, distanceAlongLine, intersectionCoords, pcoords, subId, cellId, cell))
With all the relevant predefined variables of course.
What I need is the texture's UV coordinates at the point of intersection.
I'm so very new to VTK that I've not yet caught the logic of how it's put together; the abstraction layers are still losing me while I'm digging through the source.
I've hunted for this answer across SO and the VTK users archives and found vague hints given by those who understood VTK deeply to those who were nearly there themselves, and thus of little help to me thus far.
(Appended 11/9/2018)
To clarify, I'm working with non-degenerate triangulated meshes created by a single 3D scanner shot, so quads and other higher polygons are not going to be ever seen by my code. A general solution should account for such things, but that can be accomplished via triangulating the mesh first via a good application of handwavium.
Code
Note that if one vertex belongs to several polygons and has different texture coordinates, VTK will create duplicates of the vertex.
I don't use vtkCleanPolyData, because VTK will merge such "duplicates" and we will lose needed information, as far as I know.
I use vtkCellLocator instead of vtkModifiedBSPTree,
because in my case it was faster.
The main file is main.cpp.
The magic numbers in the start and end arrays are your p1 and p2;
I've set these values just as an example.
#include <vtkSmartPointer.h>
#include <vtkPointData.h>
#include <vtkCellLocator.h>
#include <vtkGenericCell.h>
#include <vtkOBJReader.h>
#include <vtkTriangleFilter.h>
#include <vtkMath.h>
#include <iostream>
int main(int argc, char * argv[])
{
if (argc < 2)
{
std::cerr << "Usage: " << argv[0] << " OBJ_file_name" << std::endl;
return EXIT_FAILURE;
}
auto reader{vtkSmartPointer<vtkOBJReader>::New()};
reader->SetFileName(argv[1]);
reader->Update();
// Triangulate the mesh if needed
auto triangleFilter{vtkSmartPointer<vtkTriangleFilter>::New()};
triangleFilter->SetInputConnection(reader->GetOutputPort());
triangleFilter->Update();
auto mesh{triangleFilter->GetOutput()};
// Use `auto mesh(reader->GetOutput());` instead if no triangulation needed
// Build a locator to find intersections
auto locator{vtkSmartPointer<vtkCellLocator>::New()};
locator->SetDataSet(mesh);
locator->BuildLocator();
// Initialize variables needed for intersection calculation
double start[3]{-1, 0, 0.5};
double end[3]{ 1, 0, 0.5};
double tolerance{1E-6};
double relativeDistanceAlongLine;
double intersectionCoordinates[3];
double parametricCoordinates[3];
int subId;
vtkIdType cellId;
auto cell{vtkSmartPointer<vtkGenericCell>::New()};
// Find intersection
int intersected = locator->IntersectWithLine(
start,
end,
tolerance,
relativeDistanceAlongLine,
intersectionCoordinates,
parametricCoordinates,
subId,
cellId,
cell.Get()
);
// Get points of intersection cell
auto pointsIds{vtkSmartPointer<vtkIdList>::New()};
mesh->GetCellPoints(cellId, pointsIds);
// Store coordinates and texture coordinates of vertices of the cell
double meshTrianglePoints[3][3];
double textureTrianglePoints[3][2];
auto textureCoordinates{mesh->GetPointData()->GetTCoords()};
for (unsigned pointNumber = 0; pointNumber < cell->GetNumberOfPoints(); ++pointNumber)
{
mesh->GetPoint(pointsIds->GetId(pointNumber), meshTrianglePoints[pointNumber]);
textureCoordinates->GetTuple(pointsIds->GetId(pointNumber), textureTrianglePoints[pointNumber]);
}
// Normalize the coordinates
double movedMeshTrianglePoints[3][3];
for (unsigned i = 0; i < 3; ++i)
{
movedMeshTrianglePoints[0][i] = 0;
movedMeshTrianglePoints[1][i] =
meshTrianglePoints[1][i] -
meshTrianglePoints[0][i];
movedMeshTrianglePoints[2][i] =
meshTrianglePoints[2][i] -
meshTrianglePoints[0][i];
}
// Normalize the texture coordinates
double movedTextureTrianglePoints[3][2];
for (unsigned i = 0; i < 2; ++i)
{
movedTextureTrianglePoints[0][i] = 0;
movedTextureTrianglePoints[1][i] =
textureTrianglePoints[1][i] -
textureTrianglePoints[0][i];
movedTextureTrianglePoints[2][i] =
textureTrianglePoints[2][i] -
textureTrianglePoints[0][i];
}
// Calculate SVD of a matrix consisting of normalized vertices
double U[3][3];
double w[3];
double VT[3][3];
vtkMath::SingularValueDecomposition3x3(movedMeshTrianglePoints, U, w, VT);
// Calculate pseudo inverse of a matrix consisting of normalized vertices
double pseudoInverse[3][3]{0};
for (unsigned i = 0; i < 3; ++i)
{
for (unsigned j = 0; j < 3; ++j)
{
for (unsigned k = 0; k < 3; ++k)
{
if (w[k] != 0)
{
pseudoInverse[i][j] += VT[k][i] * U[j][k] / w[k];
}
}
}
}
// Calculate interpolation matrix
double interpolationMatrix[3][2]{0};
for (unsigned i = 0; i < 3; ++i)
{
for (unsigned j = 0; j < 2; ++j)
{
for (unsigned k = 0; k < 3; ++k)
{
interpolationMatrix[i][j] += pseudoInverse[i][k] * movedTextureTrianglePoints[k][j];
}
}
}
// Calculate interpolated texture coordinates of the intersection point
double interpolatedTexturePoint[2]{textureTrianglePoints[0][0], textureTrianglePoints[0][1]};
for (unsigned i = 0; i < 2; ++i)
{
for (unsigned j = 0; j < 3; ++j)
{
interpolatedTexturePoint[i] += (intersectionCoordinates[j] - meshTrianglePoints[0][j]) * interpolationMatrix[j][i];
}
}
// Print the result
std::cout << "Interpolated texture coordinates";
for (unsigned i = 0; i < 2; ++i)
{
std::cout << " " << interpolatedTexturePoint[i];
}
std::cout << std::endl;
return EXIT_SUCCESS;
}
CMake project file CMakeLists.txt
cmake_minimum_required(VERSION 3.1)
PROJECT(IntersectInterpolate)
set(CMAKE_CXX_STANDARD 11)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
find_package(VTK REQUIRED)
include(${VTK_USE_FILE})
add_executable(IntersectInterpolate MACOSX_BUNDLE main.cpp)
if(VTK_LIBRARIES)
target_link_libraries(IntersectInterpolate ${VTK_LIBRARIES})
else()
target_link_libraries(IntersectInterpolate vtkHybrid vtkWidgets)
endif()
Math
What we need
Suppose you have a mesh consisting of triangles and your vertices have texture coordinates.
Given the vertices A, B and C of a triangle and their corresponding texture coordinates A', B' and C', you want to find a mapping (an interpolation) from the other inner and boundary points of the triangle to the texture.
Let's make some rational assumptions:
Points A, B, C should correspond to their texture coordinates A', B', C';
Each point X on an edge, say AB, should correspond to the points of the A'B' line in the following way: |AX| / |AB| = |A'X'| / |A'B'|, i.e. halfway along the original edge should be halfway along the corresponding texture edge;
Centroid of the triangle (A + B + C) / 3 should correspond to centroid of the texture triangle (A' + B' + C') / 3.
Equations to solve
It looks like we want an affine mapping: the coordinates of the original triangle's vertices should be multiplied by some coefficients and added to some constants.
Let's construct the system of equations
Ax * Mxx + Ay * Myx + Az * Mzx + M0x = A'x
Ax * Mxy + Ay * Myy + Az * Mzy + M0y = A'y
Ax * Mxz + Ay * Myz + Az * Mzz + M0z = 0
and the same for B and C.
You can see that we have 9 equations and 12 unknowns.
However, the equations containing Miz (for i in {x, y, z}) have the solution 0 and don't play any role in the further computations, so we can simply set them to 0.
Thus, we have a system with 6 equations and 8 unknowns:
Ax * Mxx + Ay * Myx + Az * Mzx + M0x = A'x
Ax * Mxy + Ay * Myy + Az * Mzy + M0y = A'y
Let's write the entire system in matrix form:
 --            --     --         --     --         --
|  1 Ax Ay Az  |     |  M0x M0y  |     |  A'x A'y  |
|  1 Bx By Bz  |  x  |  Mxx Mxy  |  =  |  B'x B'y  |
|  1 Cx Cy Cz  |     |  Myx Myy  |     |  C'x C'y  |
 --            --    |  Mzx Mzy  |      --         --
                      --         --
I subtract the coordinates of vertex A from B and C,
and the texture coordinates A' from B' and C',
so we now have a triangle with its first vertex at the origin of the coordinate system, and the same for the corresponding texture coordinates.
This means the triangles are no longer translated (moved) relative to one another,
and we don't need the M0 part of the interpolation matrix:
 --          --     --         --     --         --
|  Bx By Bz  |     |  Mxx Mxy  |     |  B'x B'y  |
|  Cx Cy Cz  |  x  |  Myx Myy  |  =  |  C'x C'y  |
 --          --    |  Mzx Mzy  |      --         --
                    --         --
Solution
Let's call the first matrix P, the second one M and the last one T:
P M = T
The matrix P is not square.
If we add a zero row to it, the matrix becomes singular.
So we have to calculate its pseudo-inverse in order to solve the equation.
There's no function for calculating pseudo-inverse matrix in VTK.
We go to Moore–Penrose inverse article on Wikipedia and see that it can be calculated using SVD.
The vtkMath::SingularValueDecomposition3x3 function allows us to do it.
The function gives us U, S and VT matrices.
I'll write the pseudo-inverse of matrix P as P",
the transpose of U as UT and the transpose of VT as V.
The pseudo-inverse of the diagonal matrix S is a diagonal matrix S" with elements 1 / Sii
where Sii is non-zero, and 0 where Sii is zero:
P = U S VT
P" = V S" UT
M = P" T
Usage
To apply the interpolation matrix,
we must not forget to translate the input and output vectors.
A' is a 2D vector of texture coordinates of the first vertex in the triangle,
A is a 3D vector of coordinates of the vertex,
M is the found interpolation matrix,
p is a 3D intersection point we want to get texture coordinates for,
t' is the resulting 2D vector with interpolated texture coordinates
t' = A' + (p - A) M
[Rewritten 2019/5/7 to reflect an updated understanding.]
After finding out that the parametric coordinates are inputs to a function from which one can get barycentric coordinates in the case of triangular cells, and then learning about what barycentric coordinates are, I was able to work out the following.
const auto readerOther(vtkSmartPointer<vtkOBJReader>::New());
const auto rawOtherPath(modelPathOther.toLatin1());
readerOther->SetFileName(rawOtherPath.data());
readerOther->Update();
const auto meshDataOther(readerOther->GetOutput());
const auto bspTreeOther(vtkSmartPointer<vtkModifiedBSPTree>::New());
bspTreeOther->SetDataSet(meshDataOther);
bspTreeOther->BuildLocator();
double point1[3]{0.0, 0.0, 0.0}; // start of line segment used to intersect the model.
double point2[3]{0.0, 0.0, 10.0}; // end of line segment
double distanceAlongLine;
double intersectionCoords[3]; // The coordinate of the intersection.
double parametricCoords[3]; // Parametric Coordinates of the intersection - see https://lorensen.github.io/VTKExamples/site/VTKBook/08Chapter8/#82-interpolation-functions
int subId; // ?
vtkIdType cellId;
double intersectedTextureCoords[2];
if (bspTreeOther->IntersectWithLine(point1, point2, TOLERANCE, distanceAlongLine, intersectionCoords, parametricCoords, subId, cellId))
{
const auto textureCoordsOther(meshDataOther->GetPointData()->GetTCoords());
const auto pointIds{meshDataOther->GetCell(cellId)->GetPointIds()};
const auto vertexIndex0{pointIds->GetId(0)};
const auto vertexIndex1{pointIds->GetId(1)};
const auto vertexIndex2{pointIds->GetId(2)};
double texCoord0[2];
double texCoord1[2];
double texCoord2[2];
textureCoordsOther->GetTuple(vertexIndex0, texCoord0);
textureCoordsOther->GetTuple(vertexIndex1, texCoord1);
textureCoordsOther->GetTuple(vertexIndex2, texCoord2);
const auto parametricR{parametricCoords[0]};
const auto parametricS{parametricCoords[1]};
const auto barycentricW0{1 - parametricR - parametricS};
const auto barycentricW1{parametricR};
const auto barycentricW2{parametricS};
intersectedTextureCoords[0] =
barycentricW0 * texCoord0[0] +
barycentricW1 * texCoord1[0] +
barycentricW2 * texCoord2[0];
intersectedTextureCoords[1] =
barycentricW0 * texCoord0[1] +
barycentricW1 * texCoord1[1] +
barycentricW2 * texCoord2[1];
}
Please note that this code is an interpretation of the actual code I'm using; I'm using Qt and its QVector2D and QVector3D classes along with some interpreter glue functions to go to and from arrays of doubles.
See https://lorensen.github.io/VTKExamples/site/VTKBook/08Chapter8 for details about the parametric coordinate systems of various cell types.

Silhouette detection (geometry shader) for edges that connect only one triangle

I want to draw a mesh silhouette using a geometry shader (line_strip).
The problem occurs when the mesh has edges that belong to only one triangle (like the edge of a piece of cloth). Closed meshes (where all edges connect 2 triangles) work.
I built the adjacency index buffer using RESTART_INDEX where the neighbour vertex didn't exist; during rendering I tried to use:
glEnable(GL_PRIMITIVE_RESTART);
glPrimitiveRestartIndex(RESTART_INDEX); // RESTART_INDEX = ushort(-1)
The result:
As you can see, the head, feet and hands are OK; other parts of the model, not so much.
// silhouette.gs.glsl
void main(void) {
    vec3 e1 = gs_in[2].vPosition - gs_in[0].vPosition;  //    1-----2-----3
    vec3 e2 = gs_in[4].vPosition - gs_in[0].vPosition;  //     \   / \   /
    vec3 e3 = gs_in[1].vPosition - gs_in[0].vPosition;  //      \ e1  \ /
    vec3 e4 = gs_in[3].vPosition - gs_in[2].vPosition;  //       \ /   \
    vec3 e5 = gs_in[4].vPosition - gs_in[2].vPosition;  //        0-e2--4
    vec3 e6 = gs_in[5].vPosition - gs_in[0].vPosition;  //         \   /
                                                        //          \ /
    vec3 vN = cross(e1, e2);                            //           5
    vec3 vL = u_vLightPos - gs_in[0].vPosition;
How does the geometry shader manage vertices when it encounters a primitive restart index? For example, for some triangles the adjacency vertices 1, 3 or 5 shouldn't have any vertex at all.
If a triangle edge is not part of any other triangle, then it is still adjacent to the back face of its own triangle. For example: if there is no other triangle attached to e1 in your diagram, you can use the third vertex of the triangle (4 in this case, as e1 consists of 0 and 2) in place of 1. The "adjacent" triangle is then the same triangle with opposite winding, so the facing test automatically flags the edge as a silhouette edge whenever the triangle itself faces the light, and this will not require any additional check in the geometry shader.
I removed:
glEnable(GL_PRIMITIVE_RESTART);
glPrimitiveRestartIndex(RESTART_INDEX);
during the draw pass, and this is the result:
which is closer to the desired result.
Another possible solution to this problem could be tackled while constructing the adjacency index buffer.
// for every mesh
//   for every triangle in the mesh                              // first pass - edges & neighbors
...
//   for every triangle in the mesh                              // second pass - build indices
const triangle_t* triangle = &tcache[ti];                        // unique index triangles
for(i = 0; i < 3; ++i)                                           // triangle = 3 edges
{
    edge_t edge(triangle->index[i], triangle->index[(i+1) % 3]); // get edge
    neighbors_t neighbors = oEdgeMap[edge];                      // get edge neighbors
    ushort tj = getOther(neighbors, ti);                         // get the opposite triangle index
    ushort index = RESTART_INDEX;                                // = ushort(-1)
    if(tj != (ushort)(-1))
    {
        const triangle_t& opposite = tcache[tj];                 // opposite triangle
        index = getOppositeIndex(opposite, edge);                // opposite vertex from the other triangle
    }
    else
    {
        index = triangle->index[i];
    }
    indices[ii+i*2+0] = triangle->index[i];
    indices[ii+i*2+1] = index;
}
Instead of using a RESTART_INDEX as mentioned in the first post, use the same vertex. Edge triangles will then have a neighbour triangle with a zero-length edge (built from the same vertices). I think this needs to be checked during the geometry shader stage.
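For what it's worth, a sketch of such a check (my own assumption, untested; it reuses e1, e3, vN and vL from the shader snippet above): with the duplicated-vertex scheme the adjacent triangle is degenerate, so its face normal has (near) zero length:
// Sketch (assumption, not from the original post): detect the degenerate neighbour across edge 0-2.
vec3 nAdj = cross(e3, e1);                    // normal of the adjacent triangle (0, 1, 2)
bool noNeighbour = dot(nAdj, nAdj) < 1e-12;   // duplicated vertex => zero-area triangle
// treat a missing neighbour like a back-facing one: the edge is a silhouette edge
// whenever the triangle itself faces the light
if (dot(vN, vL) > 0.0 && (noNeighbour || dot(nAdj, vL) <= 0.0)) {
    // emit the line strip for edge 0-2 here
}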
Does the geometry shader fire for triangles with incomplete adjacency information?

Calculating normals in a triangle mesh

I have drawn a triangle mesh with 10000 vertices (100x100) and it will be a grass ground. I used glDrawElements() for it. I have looked all day and still can't understand how to calculate the normals for this. Does each vertex have its own normals or does each triangle have its own normals? Can someone point me in the right direction on how to edit my code to incorporate normals?
struct vertices {
GLfloat x;
GLfloat y;
GLfloat z;
}vertices[10000];
GLuint indices[60000];
/*
99..9999
98..9998
........
01..9901
00..9900
*/
void CreateEnvironment() {
int count=0;
for (float x=0;x<10.0;x+=.1) {
for (float z=0;z<10.0;z+=.1) {
vertices[count].x=x;
vertices[count].y=0;
vertices[count].z=z;
count++;
}
}
count=0;
for (GLuint a=0;a<99;a++){
for (GLuint b=0;b<99;b++){
GLuint v1=(a*100)+b;indices[count]=v1;count++;
GLuint v2=(a*100)+b+1;indices[count]=v2;count++;
GLuint v3=(a*100)+b+100;indices[count]=v3;count++;
}
}
count=30000;
for (GLuint a=0;a<99;a++){
for (GLuint b=0;b<99;b++){
indices[count]=(a*100)+b+100;count++;//9998
indices[count]=(a*100)+b+1;count++;//9899
indices[count]=(a*100)+b+101;count++;//9999
}
}
}
void ShowEnvironment(){
//ground
glPushMatrix();
GLfloat GroundAmbient[]={0.0,0.5,0.0,1.0};
glMaterialfv(GL_FRONT,GL_AMBIENT,GroundAmbient);
glEnableClientState(GL_VERTEX_ARRAY);
glIndexPointer( GL_UNSIGNED_BYTE, 0, indices );
glVertexPointer(3,GL_FLOAT,0,vertices);
glDrawElements(GL_TRIANGLES,60000,GL_UNSIGNED_INT,indices);
glDisableClientState(GL_VERTEX_ARRAY);
glPopMatrix();
}
EDIT 1
Here is the code I have written out. I just used arrays instead of vectors and I stored all of the normals in the struct called normals. It still doesn't work however. I get an unhandled exception at *indices.
struct Normals {
GLfloat x;
GLfloat y;
GLfloat z;
}normals[20000];
Normals* normal = normals;
//***************************************ENVIRONMENT*************************************************************************
struct vertices {
GLfloat x;
GLfloat y;
GLfloat z;
}vertices[10000];
GLuint indices[59403];
/*
99..9999
98..9998
........
01..9901
00..9900
*/
void CreateEnvironment() {
int count=0;
for (float x=0;x<10.0;x+=.1) {
for (float z=0;z<10.0;z+=.1) {
vertices[count].x=x;
vertices[count].y=rand()%2-2;;
vertices[count].z=z;
count++;
}
}
//calculate normals
GLfloat vector1[3];//XYZ
GLfloat vector2[3];//XYZ
count=0;
for (int x=0;x<9900;x+=100){
for (int z=0;z<99;z++){
vector1[0]= vertices[x+z].x-vertices[x+z+1].x;//vector1x
vector1[1]= vertices[x+z].y-vertices[x+z+1].y;//vector1y
vector1[2]= vertices[x+z].z-vertices[x+z+1].z;//vector1z
vector2[0]= vertices[x+z+1].x-vertices[x+z+100].x;//vector2x
vector2[1]= vertices[x+z+1].y-vertices[x+z+100].y;//vector2y
vector2[2]= vertices[x+z+1].z-vertices[x+z+100].z;//vector2z
normals[count].x= vector1[1] * vector2[2]-vector1[2]*vector2[1];
normals[count].y= vector1[2] * vector2[0] - vector1[0] * vector2[2];
normals[count].z= vector1[0] * vector2[1] - vector1[1] * vector2[0];count++;
}
}
count=10000;
for (int x=100;x<10000;x+=100){
for (int z=0;z<99;z++){
vector1[0]= vertices[x+z].x-vertices[x+z+1].x;//vector1x -- JUST ARRAYS
vector1[1]= vertices[x+z].y-vertices[x+z+1].y;//vector1y
vector1[2]= vertices[x+z].z-vertices[x+z+1].z;//vector1z
vector2[0]= vertices[x+z+1].x-vertices[x+z-100].x;//vector2x
vector2[1]= vertices[x+z+1].y-vertices[x+z-100].y;//vector2y
vector2[2]= vertices[x+z+1].z-vertices[x+z-100].z;//vector2z
normals[count].x= vector1[1] * vector2[2]-vector1[2]*vector2[1];
normals[count].y= vector1[2] * vector2[0] - vector1[0] * vector2[2];
normals[count].z= vector1[0] * vector2[1] - vector1[1] * vector2[0];count++;
}
}
count=0;
for (GLuint a=0;a<99;a++){
for (GLuint b=0;b<99;b++){
GLuint v1=(a*100)+b;indices[count]=v1;count++;
GLuint v2=(a*100)+b+1;indices[count]=v2;count++;
GLuint v3=(a*100)+b+100;indices[count]=v3;count++;
}
}
count=30000;
for (GLuint a=0;a<99;a++){
for (GLuint b=0;b<99;b++){
indices[count]=(a*100)+b+100;count++;//9998
indices[count]=(a*100)+b+1;count++;//9899
indices[count]=(a*100)+b+101;count++;//9999
}
}
}
void ShowEnvironment(){
//ground
glPushMatrix();
GLfloat GroundAmbient[]={0.0,0.5,0.0,1.0};
GLfloat GroundDiffuse[]={1.0,0.0,0.0,1.0};
glMaterialfv(GL_FRONT,GL_AMBIENT,GroundAmbient);
glMaterialfv(GL_FRONT,GL_DIFFUSE,GroundDiffuse);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
glNormalPointer( GL_FLOAT, 0, normal);
glVertexPointer(3,GL_FLOAT,0,vertices);
glDrawElements(GL_TRIANGLES,60000,GL_UNSIGNED_INT,indices);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glPopMatrix();
}
//***************************************************************************************************************************
Does each vertex have its own normals or does each triangle have its own normals?
Like so often, the answer is: "It depends". Since a normal is defined as the vector perpendicular to all vectors within a given plane (in N dimensions), you need a plane to calculate a normal. A vertex position is just a point and thus singular, so you actually need a face to calculate the normal. Thus, naively, one could assume that normals are per face, as the first step in normal calculation is determining the face normals by evaluating the cross product of the face's edges.
Say you have a triangle with points A, B, C, then these points have position vectors ↑A, ↑B, ↑C and the edges have vectors ↑B - ↑A and ↑C - ↑A so the face normal vector is ↑Nf = (↑B - ↑A) × (↑C - ↑A)
Note that the magnitude of ↑Nf as it's stated above is directly proportional to the face's area.
In smooth surfaces vertices are shared between faces (or you could say those faces share a vertex). In that case the normal at the vertex is not one of the face normals of the faces it is part of, but a linear combination of them:
↑Nv = ∑ p ↑Nf ; where p is a weighting for each face.
One could either assume an equal weighting of the participating face normals, but it makes more sense to assume that the larger a face is, the more it contributes to the normal.
Now recall that you normalize a vector ↑v by scaling it with its reciprocal length: ↑vi = ↑v/|↑v|. But as already said, the length of each face normal is already proportional to the face's area. So the weighting factor p given above is already contained in the vector itself: its length, a.k.a. magnitude. We can therefore get the vertex normal vector by simply summing up all the face normals.
In lighting calculations the normal vector must be unit length, i.e. normalized, to be usable. So after summing up, we normalize the newly found vertex normal and use that.
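As a sketch of my own (not part of the original answer), the accumulate-and-normalize step can look like this, using plain float arrays so it stays self-contained:
#include <cmath>
#include <cstddef>
// Sketch of "sum the face normals, then normalize". verts holds xyz triplets,
// indices holds triangles, out_normals receives one xyz normal per vertex and
// must be zero-initialized by the caller.
void accumulateVertexNormals(const float* verts, std::size_t vertexCount,
                             const unsigned* indices, std::size_t indexCount,
                             float* out_normals)
{
    for (std::size_t t = 0; t + 2 < indexCount; t += 3) {
        const float* a = &verts[3 * indices[t + 0]];
        const float* b = &verts[3 * indices[t + 1]];
        const float* c = &verts[3 * indices[t + 2]];
        // two edges of the triangle
        float e1[3] = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
        float e2[3] = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
        // face normal = e1 x e2; its length is proportional to the face area,
        // which gives the area weighting for free
        float n[3] = { e1[1] * e2[2] - e1[2] * e2[1],
                       e1[2] * e2[0] - e1[0] * e2[2],
                       e1[0] * e2[1] - e1[1] * e2[0] };
        for (std::size_t k : { t, t + 1, t + 2 })
            for (int i = 0; i < 3; ++i)
                out_normals[3 * indices[k] + i] += n[i];
    }
    // normalize the per-vertex sums
    for (std::size_t v = 0; v < vertexCount; ++v) {
        float* n = &out_normals[3 * v];
        float len = std::sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        if (len > 0.0f) { n[0] /= len; n[1] /= len; n[2] /= len; }
    }
}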
The careful reader may have noticed I specifically said smooth surfaces share vertices. And in fact, if you have some creases / hard edges in your geometry, then the faces on either side don't share vertices. In OpenGL a vertex is the whole combination of
position
normal
(colour)
N texture coordinates
M further attributes
Change one of these and you get a completely different vertex. Now some 3D modellers see a vertex only as a point's position and store the rest of those attributes per face (Blender is such a modeller). This saves some memory (or considerable memory, depending on the number of attributes). But OpenGL needs the whole thing, so if working with such a mixed-paradigm file you will have to decompose it into OpenGL-compatible data first. Have a look at one of Blender's export scripts, like the PLY exporter, to see how it's done.
Now to cover some other thing. In your code you have this:
glIndexPointer( GL_UNSIGNED_BYTE, 0, indices );
The index pointer has nothing to do with vertex array indices! This is an anachronism from the days when graphics still used palettes instead of true colour. A pixel's colour wasn't set by giving its RGB values, but by a single number indexing into a limited palette of colours. Palette colours can still be found in several graphics file formats, but no decent piece of hardware uses them anymore.
Please erase glIndexPointer (and glIndex) from your memory and your code; they don't do what you think they do. The whole indexed colour mode is arcane to use, and frankly I don't know of any hardware built after 1998 that still supports it.
Thumbs up for datenwolf! I completely agree with his approach. Adding the normal vectors of the adjacent triangles for each vertex and then normalising is the way to go. I just want to push the answer a little bit and have a closer look at the particular but quite common case of a rectangular, smooth mesh that has a constant x/y step. In other words, a rectangular x/y grid with a variable height at each point.
Such a mesh is created by looping over x and y and setting a value for z and can represent things like the surface of a hill. So each point of the mesh is represented by a vector
P = (x, y, f(x,y))
where f(x,y) is a function giving the z of each point on the grid.
Usually to draw such a mesh we use a TriangleStrip or a TriangleFan, but any technique should give a similar topology for the resulting triangles.
     |/    |/    |/    |/
...--+-----U-----UR----+--...
    /|    /| 2  /|    /|           Y
   / |   / |   / |   / |           ^
     |  /  |  /  |  /  |  /        |
     |/ 1  |/ 3  |/    |/          |
...--L-----P-----R-----+--...      +-----> X
    /| 6  /| 4  /|    /|
   / |   / |   / |   / |
     |  / 5|  /  |  /  |  /
     |/    |/    |/    |/
...--DL----D-----+-----+--...
    /|    /|    /|    /|
For a triangleStrip each vertex P=(x0, y0, z0) has 6 adjacent vertices denoted
up = (x0 , y0 + ay, Zup)
upright = (x0 + ax, y0 + ay, Zupright)
right = (x0 + ax, y0 , Zright)
down = (x0 , y0 - ay, Zdown)
downleft = (x0 - ax, y0 - ay, Zdownleft)
left = (x0 - ax, y0 , Zleft)
where ax/ay is the constant grid step on the x/y axis respectively. On a square grid ax = ay.
ax = width / (nColumns - 1)
ay = height / (nRows - 1)
Thus each vertex has 6 adjacent triangles, each one with its own normal vector (denoted N1 to N6). These can be calculated as the cross product of the two edge vectors that go from P to the triangle's other two vertices, being careful about the order in which we do the cross product so that the normal vector points in the +Z direction, towards you. Writing Z0 for the height at P:
N1 = (up - P) x (left - P)
   = (0, ay, Zup - Z0) x (-ax, 0, Zleft - Z0)
   = ( ay*(Zleft - Z0),
       -ax*(Zup - Z0),
       ax*ay )
N2 = upright x up
N3 = right x upright
N4 = down x right
N5 = downleft x down
N6 = left x downleft
And the resulting normal vector for each point P is the sum of N1 to N6. We normalise after summing. It's very easy to create a loop, calculate the values of each normal vector, add them and then normalise. However, as pointed out by Mr. Shickadance, this can take quite a while, especially for large meshes and/or on embedded devices.
If we have a closer look and perform the calculations by hand, we will find out that most of the terms cancel each other out, leaving us with a very elegant and easy-to-calculate final solution for the resulting vector N. The point here is to speed up calculations by avoiding computing the coordinates of N1 to N6 and doing 6 cross products and 6 additions for each point. Algebra helps us jump straight to the solution, using less memory and less CPU time.
I will not show the details of the calculations, as they are long but straightforward, and will jump to the final expression of the normal vector for any point on the grid. Only N1 is decomposed above, for the sake of clarity; the other vectors look alike. After summing we obtain N, which is not yet normalised:
N = N1 + N2 + ... + N6
  = .... (long but easy algebra) ...
  = ( (2*(Zleft - Zright) - Zupright + Zdownleft + Zup - Zdown) / ax,
      (2*(Zdown - Zup)    - Zupright + Zdownleft + Zright - Zleft) / ay,
      6 )
There you go! Just normalise this vector and you have the normal vector for any point on the grid, provided you know the Z values of its surrounding points and the horizontal/vertical step of your grid.
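For reference, a small helper that evaluates this (my own sketch, not from the original answer; glm is assumed only for the vector type):
#include <glm/glm.hpp>
// Sketch only: normal at a grid point from the six surrounding heights
// zU/zD/zL/zR/zUR/zDL and the grid steps ax/ay, per the formula above.
glm::vec3 gridNormal(float zU, float zD, float zL, float zR,
                     float zUR, float zDL, float ax, float ay)
{
    glm::vec3 n((2.0f * (zL - zR) - zUR + zDL + zU - zD) / ax,
                (2.0f * (zD - zU) - zUR + zDL + zR - zL) / ay,
                6.0f);
    return glm::normalize(n);
}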
Note that this is the weighted average of the surrounding triangles' normal vectors. The weight is the area of each triangle and is already included in the cross product.
You can even simplify it more by only taking into account the Z values of the four surrounding points (up, down, left and right). In that case you get:
N = N1 + N2 + N3 + N4
  = ( (Zleft - Zright) / ax,
      (Zdown - Zup)    / ay,
      2 )
    |   \|/   |
..--+----U----+--..
    |   /|\   |
    |  / | \  |
  \ | / 1|2 \ | /
   \|/   |   \|/
..--L----P----R--..
   /|\   |   /|\
  / | \ 4|3 / | \
    |  \ | /  |
    |   \|/   |
..--+----D----+--..
    |   /|\   |
Hope this will make some meshes faster.
Cheers
Per-vertex.
Use cross-products to calculate the face normals for the triangles surrounding a given vertex, add them together, and normalize.
As simple as it may seem, calculating the normal of a triangle is only part of the problem. The cross product of 2 sides of the polygon is sufficient in triangular cases, unless the triangle is collapsed onto itself and degenerate; in that case there is no one valid normal, so you can select one to your liking.
So why is the normalized cross product only part of the problem? The winding order of the vertices in the polygon defines the direction of the normal, i.e. if one pair of vertices is swapped in place, the normal will point in the opposite direction. This can be problematic if the mesh itself contains inconsistencies in that regard, i.e. parts of it assume one ordering while other parts assume different orderings. One famous example is the original Stanford Bunny model, where some parts of the surface point inwards while others point outwards. The reason is that the model was constructed using a scanner, and no care was taken to produce triangles with a consistent winding. (Obviously, clean versions of the bunny also exist.)
The winding problem is even more prominent if polygons can have multiple vertices, because in that case you would be averaging partial normals of the semi-triangulation of that polygon. Consider the case where partial normals are pointing in opposite directions, resulting in normal vectors of length 0 when taking the mean!
In the same sense, disconnected polygon soups and point clouds present challenges for accurate reconstruction due to the ill-defined winding number.
One potential strategy that is often used to solve this problem is to shoot random rays from outward to the center of each semi-triangulation (i.e. ray-stabbing). But one cannot assume that the triangulation is valid if polygons can contain multiple vertices, so rays may miss that particular sub-triangle. If a ray hits, then normal opposite to the ray direction, i.e. with dot(ray, n) < .5 satisfied, can be used as the normal for the entire polygon. Obviously this is rather expensive and scales with the number of vertices per polygon.
Thankfully, there's great new work that describes an alternative method that is not only faster (for large and complex meshes) but also generalizes the 'winding order' concept for constructions beyond polygon meshes, such as point clouds and polygon soups, iso-surfaces, and point-set surfaces, where connectivity may not even be defined!
As outlined in the paper, the method constructs a hierarchical splitting tree representation that is refined progressively, taking the parent 'dipole' orientation into account at every split operation. A polygon normal would then simply be an integration (mean) over all di-poles (i.e. point+normal pairs) of the polygon.
For people who are dealing with unclean mesh/pcl data from Lidar scanners or other sources, this could def. be a game-changer.
For those like me who came across this question, your answer might be this :
// Compute Vertex Normals
std::vector<sf::Glsl::Vec3> verticesNormal;
verticesNormal.resize(verticesCount);
for (i = 0; i < indices.size(); i += 3)
{
    // Get the face normal
    auto vector1 = verticesPos[indices[(size_t)i + 1]] - verticesPos[indices[i]];
    auto vector2 = verticesPos[indices[(size_t)i + 2]] - verticesPos[indices[i]];
    auto faceNormal = sf::VectorCross(vector1, vector2);
    sf::Normalize(faceNormal);
    // Add the face normal to the 3 vertices normal touching this face
    verticesNormal[indices[i]] += faceNormal;
    verticesNormal[indices[(size_t)i + 1]] += faceNormal;
    verticesNormal[indices[(size_t)i + 2]] += faceNormal;
}
// Normalize vertices normal
for (i = 0; i < verticesNormal.size(); i++)
    sf::Normalize(verticesNormal[i]);
The easy way is to translate the triangle's points (p1, p2, p3) so that one of them (say p1) lands on (0,0,0); that means (x2,y2,z2) -> (x2-x1, y2-y1, z2-z1) and (x3,y3,z3) -> (x3-x1, y3-y1, z3-z1). Then you perform a dot product on the transformed points to obtain the planar slope, or a cross product to obtain the outward normal.
See:
https://en.wikipedia.org/wiki/Cross_product#/media/File:Cross_product_vector.svg
for a simple visual representation of the difference between cross product and dot product.
Moving one of the points to the origin is basically equivalent to generating vectors along p1p2 and p1p3.

Generating a normal map from a height map?

I'm working on procedurally generating patches of dirt using randomized fractals for a video game. I've already generated a height map using the midpoint displacement algorithm and saved it to a texture. I have some ideas for how to turn that into a texture of normals, but some feedback would be much appreciated.
My height texture is currently a 257 x 257 gray-scale image (height values are scaled for visibility purposes):
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
So given the 3D coordinates of A, B, C, and D, would it make sense to:
split the four into two triangles: ABC and BCD
calculate the normals of those two faces via cross product
split into two triangles: ACD and ABD
calculate the normals of those two faces
average the four normals
...or is there a much easier method that I'm missing?
Example GLSL code from my water surface rendering shader:
#version 130
uniform sampler2D unit_wave;
noperspective in vec2 tex_coord;
const vec2 size = vec2(2.0,0.0);
const ivec3 off = ivec3(-1,0,1);
vec4 wave = texture(unit_wave, tex_coord);
float s11 = wave.x;
float s01 = textureOffset(unit_wave, tex_coord, off.xy).x;
float s21 = textureOffset(unit_wave, tex_coord, off.zy).x;
float s10 = textureOffset(unit_wave, tex_coord, off.yx).x;
float s12 = textureOffset(unit_wave, tex_coord, off.yz).x;
vec3 va = normalize(vec3(size.xy,s21-s01));
vec3 vb = normalize(vec3(size.yx,s12-s10));
vec4 bump = vec4( cross(va,vb), s11 );
The result is a bump vector: xyz=normal, a=height
My thinking is that each pixel of the image represents a lattice coordinate in a 256 x 256 grid (hence, why there are 257 x 257 heights). That would mean that the normal at coordinate (i, j) is determined by the heights at (i, j), (i, j + 1), (i + 1, j), and (i + 1, j + 1) (call those A, B, C, and D, respectively).
No. Each pixel of the image represents a vertex of the grid, so intuitively, from symmetry, its normal is determined by heights of neighboring pixels (i-1,j), (i+1,j), (i,j-1), (i,j+1).
Given a function f : ℝ² → ℝ that describes a surface in ℝ³, a unit normal at (x,y) is given by
v = (−∂f/∂x, −∂f/∂y, 1) and n = v/|v|.
It can be proven that the best approximation to ∂f/∂x by two samples is achieved by:
∂f/∂x(x,y) = (f(x+ε,y) − f(x−ε,y))/(2ε)
To get a better approximation you need to use at least four points, thus adding a third point (i.e. (x,y)) doesn't improve the result.
Your heightmap is a sampling of some function f on a regular grid. Taking ε=1 you get:
2v = (f(x−1,y) − f(x+1,y), f(x,y−1) − f(x,y+1), 2)
Putting it into code would look like:
// sample the height map:
float fx0 = f(x-1,y), fx1 = f(x+1,y);
float fy0 = f(x,y-1), fy1 = f(x,y+1);
// the spacing of the grid in same units as the height map
float eps = ... ;
// plug into the formulae above:
vec3 n = normalize(vec3((fx0 - fx1)/(2*eps), (fy0 - fy1)/(2*eps), 1));
A common method is using a Sobel filter for a weighted/smooth derivative in each direction.
Start by sampling a 3x3 area of heights around each texel (here, [4] is the pixel we want the normal for).
[6][7][8]
[3][4][5]
[0][1][2]
Then,
//float s[9] contains above samples
vec3 n;
n.x = scale * -(s[2]-s[0]+2*(s[5]-s[3])+s[8]-s[6]);
n.y = scale * -(s[6]-s[0]+2*(s[7]-s[1])+s[8]-s[2]);
n.z = 1.0;
n = normalize(n);
Where scale can be adjusted to match the heightmap real world depth relative to its size.
If you think of each pixel as a vertex rather than a face, you can generate a simple triangular mesh.
+--+--+
|\ |\ |
| \| \|
+--+--+
|\ |\ |
| \| \|
+--+--+
Each vertex has an x and y coordinate corresponding to the x and y of the pixel in the map. The z coordinate is based on the value in the map at that location. Triangles can be generated explicitly or implicitly by their position in the grid.
What you need is the normal at each vertex.
A vertex normal can be computed by taking an area-weighted average of the surface normals for each of the triangles that meet at that point.
If you have a triangle with vertices v0, v1, v2, then you can use a vector cross product (of two vectors that lie on two of the sides of the triangle) to compute a vector in the direction of the normal and scaled proportionally to the area of the triangle.
Vector3 contribution = Cross(v1 - v0, v2 - v1);
Each of your vertices that aren't on the edge will be shared by six triangles. You can loop through those triangles, summing up the contributions, and then normalize the vector sum.
Note: You have to compute the cross products in a consistent way to make sure the normals are all pointing in the same direction. Always pick two sides in the same order (clockwise or counterclockwise). If you mix some of them up, those contributions will be pointing in the opposite direction.
For vertices on the edge, you end up with a shorter loop and a lot of special cases. It's probably easier to create a border around your grid of fake vertices and then compute the normals for the interior ones and discard the fake borders.
for each interior vertex V {
    Vector3 sum(0.0, 0.0, 0.0);
    for each of the six triangles T that share V {
        const Vector3 side1 = T.v1 - T.v0;
        const Vector3 side2 = T.v2 - T.v1;
        const Vector3 contribution = Cross(side1, side2);
        sum += contribution;
    }
    sum.Normalize();
    V.normal = sum;
}
If you need the normal at a particular point on a triangle (other than one of the vertices), you can interpolate by weighing the normals of the three vertices by the barycentric coordinates of your point. This is how graphics rasterizers treat the normal for shading. It allows a triangle mesh to appear like smooth, curved surface rather than a bunch of adjacent flat triangles.
Tip: For your first test, use a perfectly flat grid and make sure all of the computed normals are pointing straight up.