I'm trying to use a geometry shader to display a tile map created in Tiled. I'm having some trouble implementing how Tiled handles rotation and flipping, though.
Tiled reserves the highest four bits of a tile ID for specifying how the tile should be oriented (as described here), with each bit specifying which axis to flip the tile across. My geometry shader starts by creating four vertices and then swapping their positions based on how each flag is set:
void createVertices(bool flipDiagonal, bool flipVertical, bool flipHorizontal, out vec4 vertices[4])
{
    vec4 verts[4];
    verts[0] = gl_in[0].gl_Position + vec4( 0.0,  0.0, 0.0, 1.0); // top left
    verts[1] = gl_in[0].gl_Position + vec4( 1.0,  0.0, 0.0, 1.0); // top right
    verts[2] = gl_in[0].gl_Position + vec4( 0.0, -1.0, 0.0, 1.0); // bottom left
    verts[3] = gl_in[0].gl_Position + vec4( 1.0, -1.0, 0.0, 1.0); // bottom right

    vec4 swap;
    if (flipDiagonal)
    {
        swap = verts[1];
        verts[1] = verts[2];
        verts[2] = swap;
    }
    if (flipHorizontal)
    {
        swap = verts[0];
        verts[0] = verts[1];
        verts[1] = swap;
        swap = verts[2];
        verts[2] = verts[3];
        verts[3] = swap;
    }
    if (flipVertical)
    {
        swap = verts[0];
        verts[0] = verts[2];
        verts[2] = swap;
        swap = verts[1];
        verts[1] = verts[3];
        verts[3] = swap;
    }
    for (int i = 0; i < vertices.length(); i++)
    {
        vertices[i] = u_projection * u_camera * u_transform * verts[i];
    }
}
The problem is that when the diagonal bit is set together with either the horizontal or the vertical bit, the tile flips across the opposite axis from the one I expect. Tiles that should be flipped horizontally are flipped vertically, and tiles that should be flipped vertically appear horizontally flipped. Tiles with all three flags set render correctly, however.
Here are some images for reference. The one on the left was exported directly from Tiled and is what should be displayed. The one on the right is what my program is rendering, with the incorrect tiles circled:
Now, I could easily work around this with a couple of if-statements, but I want to know why this is happening. I've tried working through this on paper, but to no avail. What am I missing?
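For concreteness, here is the corner trace I get when I walk the swaps above through by hand with flipDiagonal and flipHorizontal both set (offsets relative to gl_in[0], corner names from the original layout):

// start:            v0 = (0, 0)   v1 = (1, 0)   v2 = (0,-1)   v3 = (1,-1)
// after diagonal:   v0 = (0, 0)   v1 = (0,-1)   v2 = (1, 0)   v3 = (1,-1)
// after horizontal: v0 = (0,-1)   v1 = (0, 0)   v2 = (1,-1)   v3 = (1, 0)

After the diagonal swap, index 1 no longer sits at the top right, so the index-based "horizontal" swap actually exchanges the top and bottom rows in position space. That matches the vertical flip I'm seeing, but I still don't understand where my reasoning diverges from what Tiled does.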
I want to move from basic shadow mapping on to adaptive biased shadow mapping.
I found a paper which describes how to do it, but I am not sure how to achieve a certain step in the process:
The idea is to have a plane P (defined by the normal of the current fragment's surface in the fragment shader stage) and the world space position of the fragment (F1 in the picture above).
In order to calculate the correct bias (to fight shadow acne) I need to find the world space position of F2, which I can get by shooting a ray from the light source through the center of the shadow map texel. This ray eventually hits the plane P, which gives me the needed point F2.
With F1 and F2 known, I can then calculate the distance between F1 and F2 along the light ray (I guess) and thus get the ideal bias to fight shadow acne.
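For reference, the F2 step is a standard ray-plane intersection: with surface normal n, fragment position F1 (so the plane is n · x = n · F1), light position L, and normalized ray direction r through the texel center, the hit parameter is t = (n · F1 - n · L) / (n · r), and F2 = L + t · r. This breaks down when n · r is close to zero, i.e. when the ray runs parallel to the plane.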
Right now my basic shader code looks like this:
Vertex shader:
in vec3 aLocalObjectPos;

uniform mat4 modelMatrix;        // model-to-world transform
uniform mat4 viewProjShadowMap;  // light's view-projection matrix

out vec4 vShadowCoord;
out vec3 vF1;

// to shift the coordinates from [-1;1] to [0;1]
const mat4 biasMatrix = mat4(
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.5, 0.5, 0.5, 1.0
);

void main()
{
    // get the vertex position in the light's view space:
    vShadowCoord = (biasMatrix * viewProjShadowMap * modelMatrix) * vec4(aLocalObjectPos, 1.0);
    vF1 = (modelMatrix * vec4(aLocalObjectPos, 1.0)).xyz;
}
Helper method in fragment shader:
in vec4 vShadowCoord; // from the vertex shader
uniform sampler2DShadow uTextureShadowMap;

float calculateShadow(float bias)
{
    // fragment shader inputs are read-only in GLSL, so apply the bias to a copy:
    vec4 coord = vShadowCoord;
    coord.z -= bias;
    return textureProjOffset(uTextureShadowMap, coord, ivec2(0, 0));
}
My problem now is:
How do I get the light ray that goes from the light source through the shadow map's texel center?
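(For reference, snapping a shadow-map coordinate to its texel center is the easy part: with uv in [0;1] and size being the shadow map dimensions, the center is uvCenter = (floor(uv * size) + 0.5) / size. Reconstructing a world space ray through that point is what I'm stuck on.)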
I already found this topic: Adaptive Depth Bias for Shadow Maps Ray Casting
Unfortunately there is no answer and I don't quite get all the things the author is talking about :-/
So, I think I have figured it out myself. I followed the directions in this paper:
http://cwyman.org/papers/i3d14_adaptiveBias.pdf
Vertex Shader (not much going on there):
const mat4 biasMatrix = mat4(
    0.5, 0.0, 0.0, 0.0,
    0.0, 0.5, 0.0, 0.0,
    0.0, 0.0, 0.5, 0.0,
    0.5, 0.5, 0.5, 1.0
);

in vec4 aPosition;          // vertex in model's local space (not modified in any way)

uniform mat4 uVPShadowMap;  // light's view-projection matrix
uniform mat4 uModelMatrix;  // model-to-world transform

out vec4 vShadowCoord;

void main()
{
    // ...
    vShadowCoord = (biasMatrix * uVPShadowMap * uModelMatrix) * aPosition;
    // ...
}
Fragment Shader:
#version 450

in vec3 vFragmentWorldSpace; // fragment position in world space
in vec4 vShadowCoord;        // texture coordinates for shadow map lookup (see vertex shader)

uniform sampler2DShadow uTextureShadowMap;
uniform vec4 uLightPosition; // light's position in world space
uniform vec2 uLightNearFar;  // light's zNear and zFar values
uniform float uK;            // variable offset factor to tweak the computed bias a little bit
uniform mat4 uVPShadowMap;   // light's view-projection matrix

const vec4 corners[2] = vec4[]( // frustum diagonal points in the light's NDC, range [-1;+1]
    vec4(-1.0, -1.0, -1.0, 1.0), // left bottom near
    vec4( 1.0,  1.0,  1.0, 1.0)  // right top far
);

float calculateShadowIntensity(vec3 fragmentNormal)
{
    // get the fragment's position in light space:
    vec4 fragmentLightSpace = uVPShadowMap * vec4(vFragmentWorldSpace, 1.0);
    vec3 fragmentLightSpaceNormalized = fragmentLightSpace.xyz / fragmentLightSpace.w;              // range [-1;+1]
    vec3 fragmentLightSpaceNormalizedUV = fragmentLightSpaceNormalized * 0.5 + vec3(0.5, 0.5, 0.5); // range [ 0; 1]

    // get the shadow map's texture size:
    ivec2 textureDimensions = textureSize(uTextureShadowMap, 0);
    vec2 delta = vec2(textureDimensions.x, textureDimensions.y);

    // get the width of every texel:
    vec2 textureStep = vec2(1.0 / textureDimensions.x, 1.0 / textureDimensions.y);

    // get the UV coordinates of the texel center:
    vec2 fragmentLightSpaceUVScaled = fragmentLightSpaceNormalizedUV.xy * delta;
    vec2 texelCenterUV = floor(fragmentLightSpaceUVScaled) * textureStep + textureStep / 2.0;

    // convert the texel center back to light space, range [-1;+1]:
    vec2 texelCenterLightSpaceNormalized = 2.0 * texelCenterUV - vec2(1.0, 1.0);

    // recreate the light ray in world space:
    vec4 recreatedVec4 = vec4(texelCenterLightSpaceNormalized.x, texelCenterLightSpaceNormalized.y, -uLightNearFar.x, 1.0);
    mat4 vpShadowMapInversed = inverse(uVPShadowMap);
    vec4 texelCenterWorldSpace = vpShadowMapInversed * recreatedVec4;
    vec3 lightRayNormalized = normalize(texelCenterWorldSpace.xyz - uLightPosition.xyz);

    // compute the scene scale for the epsilon computation:
    vec4 frustum1 = vpShadowMapInversed * corners[0];
    frustum1 = frustum1 / frustum1.w;
    vec4 frustum2 = vpShadowMapInversed * corners[1];
    frustum2 = frustum2 / frustum2.w;

    float ln = uLightNearFar.x;
    float lf = uLightNearFar.y;

    // compute the light ray's intersection with the fragment plane:
    float dotLightRayFragmentNormal = dot(fragmentNormal, lightRayNormalized);
    float d = dot(fragmentNormal, vFragmentWorldSpace);
    float x = (d - dot(fragmentNormal, uLightPosition.xyz)) / dotLightRayFragmentNormal;
    vec4 intersectionWorldSpace = vec4(uLightPosition.xyz + lightRayNormalized * x, 1.0);

    // compute the bias:
    vec4 texelInLightSpace = uVPShadowMap * intersectionWorldSpace;
    float intersectionDepthTexelCenterUV = (texelInLightSpace.z / texelInLightSpace.w) / 2.0 + 0.5;
    float fragmentDepthLightSpaceUV = fragmentLightSpaceNormalizedUV.z;
    float bias = intersectionDepthTexelCenterUV - fragmentDepthLightSpaceUV;

    float depthCompressionResult = pow(lf - fragmentLightSpaceNormalizedUV.z * (lf - ln), 2.0) / (lf * ln * (lf - ln));
    float epsilon = depthCompressionResult * length(frustum1.xyz - frustum2.xyz) * uK;
    bias += epsilon;

    vec4 shadowCoord = vShadowCoord;
    shadowCoord.z -= bias;
    float shadowValue = textureProj(uTextureShadowMap, shadowCoord);
    return max(shadowValue, 0.0);
}
Please note that this is a very verbose method (you could optimise several steps, I know), written this way to better explain what I did to make it work.
All my shadow casting lights use perspective projection.
I tested the results on the CPU side in a separate project (plain C# with the math structs from the OpenTK package) and they seem reasonable. I used several light positions, texture sizes, etc., and the bias values looked OK in all my tests. Of course, this is no proof, but I have a good feeling about this.
In the end:
The benefits were very small. The visual results are good (especially for shadow maps with >= 2048 samples per dimension), but I still had to tweak the offset value (uniform float uK in the fragment shader) for each of my scenes. I found values from 0.01 to 0.03 to deliver usable results.
I lost about 10% performance (fps-wise) compared to my previous approach (slope-scaled bias) and gained maybe 1% of visual fidelity when it comes to shadows (acne, peter panning). The 1% is not measured - only felt by me :-)
I wanted this approach to be the "one-solution-to-all-problems", but I guess there is no "fire-and-forget" solution when it comes to shadow mapping ;-/
I have a basic square. How can I rotate it?
let vertices = vec![
    x,     y,     uv1.0, uv1.1, layer, // top left
    // ---
    x + w, y,     uv2.0, uv2.1, layer, // top right
    // ---
    x + w, y + h, uv3.0, uv3.1, layer, // bottom right
    // ---
    x,     y + h, uv4.0, uv4.1, layer, // bottom left
];
This is my orthographic projection matrix.
let c_str_vert = CString::new("MVP".as_bytes()).unwrap();
let modelLoc = gl::GetUniformLocation(shaderProgram, c_str_vert.as_ptr());
let model = cgmath::ortho(0.0, SCR_WIDTH as f32, SCR_HEIGHT as f32, 0.0, -1.0, 1.0);
gl::UniformMatrix4fv(modelLoc, 1, gl::FALSE, model.as_ptr());
#version 430 core
layout(location = 0) in vec2 position;
// ...
uniform mat4 MVP;
void main() {
gl_Position = MVP * vec4(position.x, position.y, 0.0, 1.0);
// ...
}
I have a lot of squares but I don't want to rotate them all.
The width and height are 100px; how can I turn my square to make it look like this?
I know that I can achieve this by using transformation matrices. I did it in web development while working with SVG, but I have no idea how to get this working in OpenGL.
I would like to know how I can multiply a matrix and a vector in OpenGL.
I would like to know how I can multiply a matrix and a vector in OpenGL.
You are already multiplying a matrix with a vector here:
gl_Position = MVP * vec4(position.x, position.y, 0.0, 1.0);
All you have to do is multiply your MVP matrix with your rotation matrix and then with your vector:
gl_Position = MVP * rotationMatrix * vec4(position.x, position.y, 0.0, 1.0);
The rotation matrix should also be a 4x4 matrix.
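As a minimal sketch of that idea (untested against your setup; uAngle and uCenter are uniforms I made up for illustration), the rotation about the square's center can even be built directly in the vertex shader:

#version 430 core
layout(location = 0) in vec2 position;

uniform mat4 MVP;
uniform float uAngle; // rotation angle in radians (hypothetical uniform)
uniform vec2 uCenter; // pivot point, e.g. the square's center (hypothetical uniform)

void main() {
    float c = cos(uAngle);
    float s = sin(uAngle);
    // GLSL mat4 constructors are column-major: the first four values form column 0
    mat4 rotationZ = mat4( c,   s,   0.0, 0.0,
                          -s,   c,   0.0, 0.0,
                           0.0, 0.0, 1.0, 0.0,
                           0.0, 0.0, 0.0, 1.0);
    mat4 toPivot   = mat4(1.0);
    mat4 fromPivot = mat4(1.0);
    toPivot[3]   = vec4(-uCenter, 0.0, 1.0); // move the pivot to the origin
    fromPivot[3] = vec4( uCenter, 0.0, 1.0); // and move it back afterwards
    gl_Position = MVP * fromPivot * rotationZ * toPivot * vec4(position, 0.0, 1.0);
}

Squares that should not rotate can simply use an angle of 0, which makes the rotation the identity; building the combined matrix once on the CPU and uploading it per square works just as well.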
I tried the nalgebra and nalgebra-glm libraries; the cgmath API confuses me.
let mut ortho = glm::ortho(0.0, SCR_WIDTH as f32, SCR_HEIGHT as f32, 0.0, -1.0, 1.0);
ortho = glm::translate(&ortho, &glm::vec3(0.0, 50.0, 0.0));
let rad = 30.0 * (PI / 180.0); // degrees to radians
ortho = glm::rotate(&ortho, rad, &glm::vec3(0.0, 0.0, 1.0));
ortho = glm::translate(&ortho, &glm::vec3(0.0, -50.0, 0.0) );
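One note on the ordering here, based on my reading of the glm API (worth double-checking): each glm::translate/glm::rotate call post-multiplies, so the transform written last is applied to the vertex first. The sequence above therefore moves the pivot to the origin, rotates, and then moves it back, which is exactly the translate-rotate-translate pattern needed to spin a square around its own center.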
Thank you for all the answers
I'm trying to make a cube whose faces are irregularly triangulated, but virtually coplanar, shade correctly.
Here is the current result I have:
With wireframe:
Normals calculated in my program:
Normals calculated by meshlabjs.net:
The lighting works properly when using regular size triangles for the cube. As you can see, I'm duplicating vertices and using angle weighting.
lighting.frag
vec4 scene_ambient = vec4(1, 1, 1, 1.0);

struct material
{
    vec4 ambient;
    vec4 diffuse;
    vec4 specular;
    float shininess;
};

material frontMaterial = material(
    vec4(0.25, 0.25, 0.25, 1.0),
    vec4(0.4, 0.4, 0.4, 1.0),
    vec4(0.774597, 0.774597, 0.774597, 1.0),
    76
);

struct lightSource
{
    vec4 position;
    vec4 diffuse;
    vec4 specular;
    float constantAttenuation, linearAttenuation, quadraticAttenuation;
    float spotCutoff, spotExponent;
    vec3 spotDirection;
};

lightSource light0 = lightSource(
    vec4(0.0, 0.0, 0.0, 1.0),
    vec4(100.0, 100.0, 100.0, 100.0),
    vec4(100.0, 100.0, 100.0, 100.0),
    0.1, 1, 0.01,
    180.0, 0.0,
    vec3(0.0, 0.0, 0.0)
);
vec4 light(lightSource ls, vec3 norm, vec3 deviation, vec3 position)
{
    vec3 viewDirection = normalize(vec3(1.0 * vec4(0, 0, 0, 1.0) - vec4(position, 1)));
    vec3 lightDirection;
    float attenuation;

    //ls.position.xyz = cameraPos;
    ls.position.z += 50;

    if (0.0 == ls.position.w) // directional light?
    {
        attenuation = 1.0; // no attenuation
        lightDirection = normalize(vec3(ls.position));
    }
    else // point light or spotlight (or other kind of light)
    {
        vec3 positionToLightSource = vec3(ls.position - vec4(position, 1.0));
        float distance = length(positionToLightSource);
        lightDirection = normalize(positionToLightSource);
        attenuation = 1.0 / (ls.constantAttenuation
                             + ls.linearAttenuation * distance
                             + ls.quadraticAttenuation * distance * distance);

        if (ls.spotCutoff <= 90.0) // spotlight?
        {
            float clampedCosine = max(0.0, dot(-lightDirection, ls.spotDirection));
            if (clampedCosine < cos(radians(ls.spotCutoff))) // outside of spotlight cone?
            {
                attenuation = 0.0;
            }
            else
            {
                attenuation = attenuation * pow(clampedCosine, ls.spotExponent);
            }
        }
    }

    vec3 ambientLighting = vec3(scene_ambient) * vec3(frontMaterial.ambient);

    vec3 diffuseReflection = attenuation
        * vec3(ls.diffuse) * vec3(frontMaterial.diffuse)
        * max(0.0, dot(norm, lightDirection));

    vec3 specularReflection;
    if (dot(norm, lightDirection) < 0.0) // light source on the wrong side?
    {
        specularReflection = vec3(0.0, 0.0, 0.0); // no specular reflection
    }
    else // light source on the right side
    {
        specularReflection = attenuation * vec3(ls.specular) * vec3(frontMaterial.specular)
            * pow(max(0.0, dot(reflect(lightDirection, norm), viewDirection)), frontMaterial.shininess);
    }

    return vec4(ambientLighting + diffuseReflection + specularReflection, 1.0);
}

vec4 generateGlobalLighting(vec3 norm, vec3 position)
{
    return light(light0, norm, vec3(2, 0, 0), position);
}
mainmesh.frag
#version 430
in vec3 f_color;
in vec3 f_normal;
in vec3 f_position;
in float f_opacity;
out vec4 fragColor;
vec4 generateGlobalLighting(vec3 norm, vec3 position);
void main()
{
    vec3 norm = normalize(f_normal);
    vec4 l0 = generateGlobalLighting(norm, f_position);
    fragColor = vec4(f_color, f_opacity) * l0;
}
Below is the code that generates the vertices, normals and faces for the painter.
m_vertices_buf.resize(m_mesh.num_faces() * 3, 3);
m_normals_buf.resize(m_mesh.num_faces() * 3, 3);
m_faces_buf.resize(m_mesh.num_faces(), 3);

std::map<vertex_descriptor, std::list<Vector3d>> map;
GLDebugging* deb = GLDebugging::getInstance();

auto getAngle = [](Vector3d a, Vector3d b) {
    double angle = 0.0;
    angle = std::atan2(a.cross(b).norm(), a.dot(b));
    return angle;
};

for (const auto& f : m_mesh.faces()) {
    auto f_hh = m_mesh.halfedge(f);
    //auto n = PMP::compute_face_normal(f, m_mesh);

    vertex_descriptor vs[3];
    Vector3d ps[3];

    int i = 0;
    for (const auto& v : m_mesh.vertices_around_face(f_hh)) {
        auto p = m_mesh.point(v);
        ps[i] = Vector3d(p.x(), p.y(), p.z());
        vs[i++] = v;
    }

    auto n = (ps[1] - ps[0]).cross(ps[2] - ps[0]).normalized();

    auto a1 = getAngle((ps[1] - ps[0]).normalized(), (ps[2] - ps[0]).normalized());
    auto a2 = getAngle((ps[2] - ps[1]).normalized(), (ps[0] - ps[1]).normalized());
    auto a3 = getAngle((ps[0] - ps[2]).normalized(), (ps[1] - ps[2]).normalized());

    auto area = PMP::face_area(f, m_mesh);

    map[vs[0]].push_back(n * a1);
    map[vs[1]].push_back(n * a2);
    map[vs[2]].push_back(n * a3);

    auto p = m_mesh.point(vs[0]);
    deb->drawLine(Vector3d(p.x(), p.y(), p.z()), Vector3d(p.x(), p.y(), p.z()) + Vector3d(n.x(), n.y(), n.z()) * 4);

    p = m_mesh.point(vs[1]);
    deb->drawLine(Vector3d(p.x(), p.y(), p.z()), Vector3d(p.x(), p.y(), p.z()) + Vector3d(n.x(), n.y(), n.z()) * 4);

    p = m_mesh.point(vs[2]);
    deb->drawLine(Vector3d(p.x(), p.y(), p.z()), Vector3d(p.x(), p.y(), p.z()) + Vector3d(n.x(), n.y(), n.z()) * 4);
}

int j = 0;
int i = 0;
for (const auto& f : m_mesh.faces()) {
    auto f_hh = m_mesh.halfedge(f);
    for (const auto& v : m_mesh.vertices_around_face(f_hh)) {
        const auto& p = m_mesh.point(v);
        m_vertices_buf.row(i) = RowVector3d(p.x(), p.y(), p.z());

        Vector3d n(0, 0, 0);
        //auto n = PMP::compute_face_normal(f, m_mesh);
        Vector3d norm = Vector3d(n.x(), n.y(), n.z());

        for (auto val : map[v]) {
            norm += val;
        }
        norm.normalize();

        deb->drawLine(Vector3d(p.x(), p.y(), p.z()), Vector3d(p.x(), p.y(), p.z()) + Vector3d(norm.x(), norm.y(), norm.z()) * 3,
                      Vector3d(1.0, 0, 0));

        m_normals_buf.row(i++) = RowVector3d(norm.x(), norm.y(), norm.z());
    }
    m_faces_buf.row(j++) = RowVector3i(i - 3, i - 2, i - 1);
}
The painter code follows:
m_vertexAttrLoc = program.attributeLocation("v_vertex");
m_colorAttrLoc = program.attributeLocation("v_color");
m_normalAttrLoc = program.attributeLocation("v_normal");
m_mvMatrixLoc = program.uniformLocation("v_modelViewMatrix");
m_projMatrixLoc = program.uniformLocation("v_projectionMatrix");
m_normalMatrixLoc = program.uniformLocation("v_normalMatrix");
//m_relativePosLoc = program.uniformLocation("v_relativePos");
m_opacityLoc = program.uniformLocation("v_opacity");
m_colorMaskLoc = program.uniformLocation("v_colorMask");
//bool for unmapping depth color
m_useDepthMap = program.uniformLocation("v_useDepthMap");
program.setUniformValue(m_mvMatrixLoc, modelView);
//uniform used for Color map to regular model switch
program.setUniformValue(m_useDepthMap, (m_showColorMap &&
(m_showProblemAreas || m_showPrepMap || m_showDepthMap || m_showMockupMap)));
QMatrix3x3 normalMatrix = modelView.normalMatrix();
program.setUniformValue(m_normalMatrixLoc, normalMatrix);
program.setUniformValue(m_projMatrixLoc, projection);
//program.setUniformValue(m_relativePosLoc, m_relativePos);
program.setUniformValue(m_opacityLoc, m_opacity);
program.setUniformValue(m_colorMaskLoc, m_colorMask);
glEnableVertexAttribArray(m_vertexAttrLoc);
m_vertices.bind();
glVertexAttribPointer(m_vertexAttrLoc, 3, GL_DOUBLE, false, 3 * sizeof(GLdouble), NULL);
m_vertices.release();
glEnableVertexAttribArray(m_normalAttrLoc);
m_normals.bind();
glVertexAttribPointer(m_normalAttrLoc, 3, GL_DOUBLE, false, 0, NULL);
m_normals.release();
glEnableVertexAttribArray(m_colorAttrLoc);
if (m_showProblemAreas) {
    m_problemColorMap.bind();
    glVertexAttribPointer(m_colorAttrLoc, 3, GL_DOUBLE, false, 0, NULL);
    m_problemColorMap.release();
}
else if (m_showPrepMap) {
    m_prepColorMap.bind();
    glVertexAttribPointer(m_colorAttrLoc, 3, GL_DOUBLE, false, 0, NULL);
    m_prepColorMap.release();
}
else if (m_showMockupMap) {
    m_mokupColorMap.bind();
    glVertexAttribPointer(m_colorAttrLoc, 3, GL_DOUBLE, false, 0, NULL);
    m_mokupColorMap.release();
}
else {
    //m_colors.bind();
    //glVertexAttribPointer(m_colorAttrLoc, 3, GL_DOUBLE, false, 0, NULL);
    //m_colors.release();
}
m_indices.bind();
glDrawElements(GL_TRIANGLES, m_indices.size() / sizeof(int), GL_UNSIGNED_INT, NULL);
m_indices.release();
glDisableVertexAttribArray(m_vertexAttrLoc);
glDisableVertexAttribArray(m_normalAttrLoc);
glDisableVertexAttribArray(m_colorAttrLoc);
EDIT: Sorry for not being clear enough. The cube is merely an example. My requirement is that the shading works for any kind of mesh: those with very sharp edges, and those that are very organic (like humans or animals).
The issue is clearly explained by the image "Normals calculated in my program" from your question. The normal vectors at the corners and edges of the cube are not perpendicular to the faces:
For a proper specular reflection on plane faces, the normal vectors have to be perpendicular to the sides of the cube.
The vertex coordinate and its normal vector form a tuple with 6 components (x, y, z, nx, ny, nz).
A vertex coordinate on an edge of the cube is adjacent to 2 sides of the cube and 2 (face) normal vectors. The 8 vertex coordinates on the 8 corners of the cube are each adjacent to 3 sides (3 normal vectors).
To define the vertex attributes with face normal vectors (perpendicular to a side) you have to define multiple tuples with the same vertex coordinate but different normal vectors. You have to use the different attribute tuples to form the triangle primitives on the different sides of the cube.
e.g. If you have defined a cube with the left, front, bottom coordinate of (-1, -1, -1) and the right, back, top coordinate of (1, 1, 1), then the vertex coordinate (-1, -1, -1) is adjacent to the left, front and bottom side of the cube:
x y z nx ny nz
left: -1 -1 -1 -1 0 0
front: -1 -1 -1 0 -1 0
bottom: -1 -1 -1 0 0 -1
Use the left attribute tuple to form the triangle primitives on the left side, the front tuple for the front side, and the bottom tuple for the triangles on the bottom.
In general you have to decide what you want. There is no general approach for all meshes.
Either you have a finely granulated mesh and you want a smooth appearance (e.g. a sphere). In that case your approach is fine: it will generate a smooth light transition on the edges between the primitives.
Or you have a mesh with hard edges, like a cube. In that case you have to "duplicate" vertices: if 2 (or more) triangles share a vertex coordinate but their face normal vectors differ, then you have to create a separate tuple for each combination of the vertex coordinate and face normal vector.
For a general "smooth" solution you would have to interpolate the normal vectors of the vertex coordinates which are in the middle of plane surfaces, according to the surrounding geometry. That means if a bunch of triangle primitives form a plane, then all the normal vectors of the vertices have to be computed dependent on their position on the plane. At the centroid the normal vector is equal to the face normal vector. For all other points the normal vector has to be interpolated with the normal vectors of the surrounding faces.
Anyway, that seems to be an XY problem. Why is there a "vertex" somewhere in the middle of a plane? Probably the plane is tessellated. But if the plane is tessellated, why are the normal vectors not interpolated too during the tessellation process?
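As an aside that is not from the answer above: if flat shading is all that is needed, the face normal can also be derived per fragment from screen-space derivatives, which sidesteps vertex duplication entirely (the sign depends on winding and coordinate handedness, so it may need a flip):

// face normal from derivatives of the interpolated world-space position
// (f_position as declared in mainmesh.frag)
vec3 faceNormal = normalize(cross(dFdx(f_position), dFdy(f_position)));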
As mentioned in the other answers, the problem is your mesh normals.
Computing an average normal, like you are doing currently, is what you would want to do for a smooth object like a sphere; CGAL has a function for that: CGAL::Polygon_mesh_processing::compute_vertex_normal. For a cube, what you want is normals perpendicular to the faces; CGAL has a function for that too: CGAL::Polygon_mesh_processing::compute_face_normal.
To debug the normals, you can just set fragColor = vec4(norm, 1); in mainmesh.frag. Here the cubes on the left have averaged (smooth) normals and on the right have face (flat) normals. And shaded they look like this:
shading has to work for any kind of mesh (a cube or any organic mesh)
For that you can use something like per_corner_normals, which:
Implements a simple scheme which computes corner normals as averages
of normals of faces incident on the corresponding vertex which do not
deviate by more than a specified dihedral angle (e.g. 20°)
And this is what it looks like with angles of 1°, 20°, and 100°:
In your image, we can see that the inner triangle (the one that doesn't have a point on the cube's edges, in the top left quarter) has a homogeneous color.
My interpretation is that the triangles that have points on the edge/corner of the cube share the same vertex, and therefore share the same normal; somehow the normals are averaged, so they are not perpendicular to the faces.
To debug this, you should create a simple geometry of a cube with 6 faces and 2 triangles per face, which makes 12 triangles.
Two options:
If you have 8 vertices in the geometry, the corners are shared between triangles of different faces, and the issue comes from the geometry generator.
If you have 6×4 = 24 vertices in the geometry, the truth lies elsewhere.
I am trying to create a triangle in OpenGL that won't stretch when I resize the window. I pass this orthographic matrix to my vertex shader.
void resize(int w, int h) {
    float orthMat[4][4];

    orthMat[0][0] = 2.0 / w;
    orthMat[1][1] = -2.0 / h;
    orthMat[3][3] = 1.0;
    orthMat[3][3] = 1.0;

    orthMat[3][0] = -1.0;
    orthMat[3][1] = -1.0;

    orthMat[0][1] = 0.0;
    orthMat[0][2] = 0.0;
    orthMat[0][3] = 0.0;
    orthMat[1][0] = 0.0;
    orthMat[1][3] = 0.0;
    orthMat[1][2] = 0.0;
    orthMat[2][0] = 0.0;
    orthMat[2][1] = 0.0;
    orthMat[2][2] = 0.0;
    orthMat[2][3] = 0.0;
    orthMat[3][2] = 0.0;

    glUniformMatrix4fv(uniformLocationIndex, 1, GL_TRUE, &orthMat[0][0]);
    glViewport(0, 0, w, h);
}
That uniformLocationIndex points to "projMat" in my vertex shader.
#version 330
in vec2 position;
in mat4 projMat;

void main() {
    //vec4 contains normalized x and y coordinates
    gl_Position = projMat * vec4(((-1.0) + position.x * (2.0 / (2.0 / projMat[0][0]))),
                                 (1.0 - position.y * (2.0 / (-2.0 / projMat[1][1]))),
                                 0.0, 1.0);
}
No matter where I put the points, they all go to the center and won't scale correctly when I resize the window. It is supposed to keep the aspect ratio and scale down.
I did some math, and I think the problem is your shader calculation.
I created a projection matrix with:
Left clipping plane = -1
Right clipping plane = 1
Top clipping plane = 1
Bottom clipping plane = -1
Near clipping plane = 0.001
Far clipping plane = 2
Comparing this matrix I created to yours, there are significant differences in rows 2 and 3, but since it looks like you don't use these, I ignored that.
I then did the math involved in your gl_Position calculation, and unless I got it wrong, I got:
Point (1, 1) transforms to (0, 0)
Point (1, -1) transforms to (0, 2) (outside the field of view)
Point (-1, 1) transforms to (-2, 0) (outside the field of view)
Point (-1, -1) transforms to (-2, 2) (outside the field of view)
I think you don't have rows 2 and 3 right in your matrix (0 and 1 look OK), but again since you don't use them you should be able to get by if in your shader you say something like:
vec4 Position = projMat * vec4(position, 0.0, 1.0);
gl_Position = vec4(Position.x, Position.y, 0.0, 1.0);
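One more thing worth checking, which is my own observation and not part of the math above: in the vertex shader, projMat is declared as a vertex attribute (in mat4), but glUniformMatrix4fv only writes to uniforms, so the uniform location lookup cannot refer to that attribute. Presumably something like this was intended:

#version 330
in vec2 position;
uniform mat4 projMat; // a uniform, so glUniformMatrix4fv can set it

void main() {
    gl_Position = projMat * vec4(position, 0.0, 1.0);
}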