How to draw rotating cube glium? - opengl

Why doesn't my rotating cube look like a rotating cube?
Do I need to move the camera?
I have no idea what is wrong.
I am using glium in Rust.
24 vertices:
const P: f32 = 0.5;
let vertex_buffer = glium::VertexBuffer::new(&display, &vec![
// front 0-3
Vertex { position: (-P, -P, 0.0) },
Vertex { position: (P, P, 0.0) },
Vertex { position: (P, -P, 0.0) },
Vertex { position: (-P, P, 0.0) },
// back 4-7
Vertex { position: (-P, -P, 1.0) },
Vertex { position: (P, P, 1.0) },
Vertex { position: (P, -P, 1.0) },
Vertex { position: (-P, P, 1.0) },
// up 8-11
Vertex { position: (-P, P, 0.0) },
Vertex { position: (P, P, 0.0) },
Vertex { position: (P, P, 1.0) },
Vertex { position: (-P, P, 1.0) },
// down 12-15
Vertex { position: (-P, -P, 0.0) },
Vertex { position: (P, -P, 0.0) },
Vertex { position: (P, -P, 1.0) },
Vertex { position: (-P, -P, 1.0) },
// right 16-19
Vertex { position: (P, -P, 0.0) },
Vertex { position: (P, -P, 1.0) },
Vertex { position: (P, P, 1.0) },
Vertex { position: (P, P, 0.0) },
// left 20-23
Vertex { position: (-P, -P, 0.0) },
Vertex { position: (-P, -P, 1.0) },
Vertex { position: (-P, P, 1.0) },
Vertex { position: (-P, P, 0.0) },
]
).unwrap();
Indices for generating the 6 faces of the cube (4 vertices per face, 2 triangles per face):
let indices = glium::IndexBuffer::new(&display, glium::index::PrimitiveType::TrianglesList,
&[
// front
0, 2, 1,
0, 3, 1,
// back
4, 6, 5,
4, 7, 5,
// up
8, 9, 10,
8, 11, 10,
// down
12, 13, 14,
12, 15, 14,
// right
16, 17, 18,
16, 19, 18,
// left
20, 21, 22,
20, 23, 22u16,
]
).unwrap();
I am using Gouraud shading (from the glium tutorial) for lighting.
I found the normals for the cube on Stack Overflow.
Normals:
let normals = glium::VertexBuffer::new(&display, &vec![
// front
Normal { normal: (1.0, 0.0, 0.0) },
// back
Normal { normal: (0.0, -1.0, 0.0) },
// up
Normal { normal: (1.0, 0.0, -1.0) },
// down
Normal { normal: (0.0, 0.0, 1.0) },
// right
Normal { normal: (1.0, 0.0, 0.0) },
// left
Normal { normal: (-1.0, 0.0, 0.0) },
]
).unwrap();
Vertex shader (GLSL):
#version 150
in vec3 position;
in vec3 normal;
out vec3 v_normal;
uniform mat4 m;
void main() {
v_normal = transpose(inverse(mat3(m))) * normal;
gl_Position = m * vec4(position, 1.0);
}
Fragment shader (GLSL):
#version 150
in vec3 v_normal;
out vec4 color;
uniform vec3 u_light;
void main() {
float brightness = dot(normalize(v_normal), normalize(u_light));
vec3 dark_color = vec3(0.6, 0.0, 0.0);
vec3 regular_color = vec3(1.0, 0.0, 0.0);
color = vec4(mix(dark_color, regular_color, brightness), 1.0);
}
Rotation is done with a 4×4 matrix:
let mut t: f32 = 0.0;
let mut s: f32 = 0.002;
// ... loop.run
t += s;
if t > 180.0 || t < -180.0 {
s = -s;
}
let m = [
[1.0, 0.0, 0.0, 0.0],
[0.0, t.cos(), -t.sin(), 0.0],
[0.0, t.sin(), t.cos(), 0.0],
[0.0, 0.0, 0.0, 1.0f32]
];
let light = [-1.0, -0.4, 0.9f32];
// params
target.draw((&vertex_buffer, &normals), &indices, &program, &uniform! { m: m, u_light: light }, &params).unwrap();
target.finish().unwrap();
Any ideas what's wrong?
I'm sorry for such a long question with so much code, but I don't know what else I can provide.
Here are some images of my "rotating cube":

Your rotation matrix is a rotation about the X axis, and you are applying no perspective projection matrix, so you get the implied default orthographic projection. Your pictures are more or less typical examples of what you get in that case, though with what might be further errors, perhaps in the vertex data (I'm noticing the visible diagonal line).
The vertices are being moved in circles about the X axis, in the YZ plane, and you can't see Z directly, so you just see them moving up and down in Y.
In order to get a picture that looks like a rotating cube, you will want to set up a perspective projection, and probably a camera position/rotation (view matrix) that is looking at the cube.
Or you could try changing your matrix to a matrix for rotation about the Z axis:
let m = [
[t.cos(), -t.sin(), 0.0, 0.0],
[t.sin(), t.cos(), 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.0, 1.0f32]
];
That will be visibly a rotation.
But in general, I highly recommend getting perspective projection in your program. This will reveal motion in the Z axis (currently invisible) and make the images make more intuitive sense to human vision. It is also useful to add more objects and more complex ones, making a more complete “scene” — then if you have code bugs, you can see how they're affecting all the geometry being displayed, rather than seeing only a very abstract bunch of rectangles as you have now.
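To make that suggestion concrete, here is a minimal sketch of a perspective matrix in the `[[f32; 4]; 4]` layout glium accepts, modeled on the matrix used in the glium tutorial. The field of view, aspect ratio, and clip planes below are illustrative values, not from the question; you would combine it with your rotation (and a view matrix) before, or instead of, passing `m` alone.

```rust
// Sketch: a perspective matrix in the column-major [[f32; 4]; 4] layout
// glium expects, following the form used in the glium tutorial.
// `aspect`, `fov`, `znear`, `zfar` are illustrative parameters.
fn perspective(aspect: f32, fov: f32, znear: f32, zfar: f32) -> [[f32; 4]; 4] {
    let f = 1.0 / (fov / 2.0).tan();
    [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (zfar + znear) / (zfar - znear), 1.0],
        [0.0, 0.0, -(2.0 * zfar * znear) / (zfar - znear), 0.0],
    ]
}

fn main() {
    // 60° vertical field of view, 4:3 aspect, near/far planes at 0.1/100.
    let p = perspective(4.0 / 3.0, std::f32::consts::FRAC_PI_3, 0.1, 100.0);
    // The 1.0 in column 2, row 3 copies eye-space z into clip-space w,
    // so the perspective divide makes distant vertices appear smaller.
    println!("{:?}", p);
}
```

In the vertex shader you would then compute something like `gl_Position = perspective * view * model * vec4(position, 1.0);` (passing each matrix as its own uniform, or premultiplying them on the CPU).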

Related

Why drawing a texture using Opengl ( using rust ) is showing border colour

I am drawing this texture using OpenGL and Rust, but for some reason the image has a border around it when I draw away from the top left. I think there is something wrong with the matrix / viewport; I'm not entirely sure.
let width = 1280; // Window is initialised to these values. Hard coded for context.
let height = 720;
unsafe {
gl::Viewport(0, 0, width, height);
}
let orth = cgmath::ortho::<f32>(0.0, width as f32, height as f32, 0.0, -1.00, 100.0);
unsafe {
gl::TexParameteri(gl::TEXTURE_2D, gl::TEXTURE_WRAP_S, gl::CLAMP_TO_BORDER as i32);
gl::TexParameteri(gl::TEXTURE_2D, gl::TEXTURE_WRAP_T, gl::CLAMP_TO_BORDER as i32);
gl::TexParameteri(gl::TEXTURE_2D, gl::TEXTURE_MIN_FILTER, gl::LINEAR as i32);
gl::TexParameteri(gl::TEXTURE_2D, gl::TEXTURE_MAG_FILTER, gl::LINEAR as i32);
let border_colour = [0.9, 0.2, 0.5, 1.0];
gl::TexParameterfv(gl::TEXTURE_2D, gl::TEXTURE_BORDER_COLOR, border_colour.as_ptr());
gl::TexImage2D(
gl::TEXTURE_2D,
0,
gl::RGBA as i32,
img.width as i32,
img.height as i32,
0,
gl::RGBA,
gl::UNSIGNED_BYTE,
data.as_ptr() as *const c_void,
);
gl::GenerateMipmap(gl::TEXTURE_2D);
}
Before drawing the texture, I clear the display:
pub fn clear_frame_buffer() {
unsafe {
gl::Clear(gl::COLOR_BUFFER_BIT | gl::DEPTH_BUFFER_BIT);
gl::ClearColor(0.0, 0.0, 0.0, 1.0);
}
}
Drawing the texture:
pub fn draw_quad(tex: &Texture2D, program : &Program, mesh : &Mesh, projection : &Matrix4<f32>, position : Vector3<f32>) {
program.bind();
mesh.vao.bind();
tex.bind();
if let Some(location) = program.get_uniform_location("view") {
let camera_transform = Matrix4::<f32>::identity() * Matrix4::from_scale(1.0);
program.set_matrix(location, &[camera_transform]);
}
if let Some(location) = program.get_uniform_location("projection") {
program.set_matrix(location, &[*projection]);
}
if let Some(location) = program.get_uniform_location("model") {
let model = {
let translate = Matrix4::<f32>::from_translation(position);
let (w, h) = tex.get_dimensions();
let scale = Matrix4::<f32>::from_nonuniform_scale(w as f32, h as f32, 1.0);
translate * scale
};
program.set_matrix(location, &[model]);
}
let count = mesh.count as i32;
unsafe {
gl::DrawArrays(gl::TRIANGLES, 0, count);
}
}
Then I move the texture.
I know the pink is coming from my border colour texture parameter, but I shouldn't be able to see it: I should only be drawing the image, not the border.
let quad_vertices: [f32; 30] = [
// Positions // Texcoords
1.0, -1.0, 0.0, 1.0, 0.0,
-1.0, -1.0, 0.0, 0.0, 0.0,
-1.0, 1.0, 0.0, 0.0, 1.0,
1.0, 1.0, 0.0, 1.0, 1.0,
1.0, -1.0, 0.0, 1.0, 0.0,
-1.0, 1.0, 0.0, 0.0, 1.0,
];
Vertex Shader
#version 330 core
layout (location = 0) in vec3 vertex;
layout (location = 2) in vec2 texcoords;
out vec2 TexCoords;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
TexCoords = texcoords;
gl_Position = projection * view * model * vec4(vertex.xy, 0.0, 1.0);
}
Fragment Shader
#version 330 core
in vec2 TexCoords;
out vec4 color;
uniform sampler2D image;
void main()
{
color = texture(image, TexCoords);
}

glow depth buffer does not clear

renderer.draw
pub fn draw(&self, rotation: f32) {
unsafe {
// Shader sources
const TEXTURE_VS_SRC: &str = "
#version 330 core
layout (location = 0) in vec3 aposition;
layout (location = 1) in vec3 acolor;
layout (location = 2) in vec2 atexture_coordinate;
out vec3 color;
out vec2 texture_coordinate;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view * model * vec4(aposition, 1.0);
color = acolor;
texture_coordinate = atexture_coordinate;
}
";
const TEXTURE_FS_SRC: &str = "
#version 330 core
// Outputs colors in RGBA
out vec4 FragColor;
// Inputs the color from the Vertex Shader
in vec3 color;
// Inputs the texture coordinates from the Vertex Shader
in vec2 texture_coordinate;
// Gets the Texture Unit from the main function
uniform sampler2D tex0;
void main()
{
FragColor = texture(tex0, texture_coordinate);
}
";
// Vertices coordinates
let VERTICES: Vec<f32> = vec![
// COORDINATES / COLORS / TexCoord //
-0.5, 0.0, 0.5, 0.83, 0.70, 0.44, 0.0, 0.0,
-0.5, 0.0, -0.5, 0.83, 0.70, 0.44, 5.0, 0.0,
0.5, 0.0, -0.5, 0.83, 0.70, 0.44, 0.0, 0.0,
0.5, 0.0, 0.5, 0.83, 0.70, 0.44, 5.0, 0.0,
0.0, 0.8, 0.0, 0.92, 0.86, 0.76, 2.5, 5.0
];
// Indices for vertices order
let INDICES: Vec<u32> = vec![
0, 1, 2,
0, 2, 3,
0, 1, 4,
1, 2, 4,
2, 3, 4,
3, 0, 4
];
self.gl.clear_color(0.3, 0.3, 0.3, 1.0);
self.gl.clear(glow::COLOR_BUFFER_BIT | glow::DEPTH_BUFFER_BIT);
self.gl.clear_depth_f32(1.0);
self.gl.depth_func(glow::LESS);
self.gl.depth_mask(true);
self.gl.enable(glow::DEPTH_TEST);
let shader = Shader::new(&self.gl, TEXTURE_VS_SRC, TEXTURE_FS_SRC);
shader.bind(&self.gl);
shader.upload_uniform_mat4(&self.gl, "model", &glm::rotate(&glm::identity(), rotation, &glm::vec3(0.0, 1.0, 0.0)));
shader.upload_uniform_mat4(&self.gl, "view", &glm::translate(&glm::identity(), &glm::vec3(0.0, -0.5, -2.0)));
shader.upload_uniform_mat4(&self.gl, "projection", &Mat4::new_perspective((800.0 / 600.0), 45.0, 0.01, 100.0));
let mut texture = Texture::new(String::from("sandbox/assets/textures/checkerboard.png"), 1.0);
texture.init(&self.gl);
Texture::bind(&self.gl, texture.get_renderer_id().unwrap(), 0);
shader.upload_uniform_integer1(&self.gl, "tex0", 0);
let layout = BufferLayout::new(
vec![
BufferElement::new("aposition".parse().unwrap(), ShaderDataType::Float3, false),
BufferElement::new("acolor".parse().unwrap(), ShaderDataType::Float3, false),
BufferElement::new("atexture_coordinate".parse().unwrap(), ShaderDataType::Float2, false),
]
);
let index_buffer = IndexBuffer::new(&self.gl, INDICES);
let vertex_buffer = VertexBuffer::new(&self.gl, VERTICES, layout);
let vertex_array = VertexArray::new(&self.gl, index_buffer, vertex_buffer);
self.gl.draw_elements(glow::TRIANGLES, vertex_array.get_indices_len() as i32, glow::UNSIGNED_INT, 0);
}
}
I'm using egui with the glow backend, which has its own GL context, so I made sure to reset everything before drawing. Obviously this needs refactoring, since the resources shouldn't be created on every draw, but I wanted to get it working first.
texture.init
pub(crate) fn init(&mut self, gl: &glow::Context) {
match image::open(String::from(self.get_path())) {
Err(err) => panic!("Could not load image {}: {}", self.get_path(), err),
Ok(img) => unsafe {
let (width, height) = img.dimensions();
let (image, internal_format, data_format) = match img {
DynamicImage::ImageRgb8(img) => (img.into_raw(), glow::RGB8, glow::RGB),
DynamicImage::ImageRgba8(img) => (img.into_raw(), glow::RGBA8, glow::RGBA),
img => (img.to_rgb8().into_raw(), glow::RGB8, glow::RGB)
};
let renderer_id = gl.create_texture().unwrap();
gl.bind_texture(glow::TEXTURE_2D, Some(renderer_id));
gl.tex_storage_2d(glow::TEXTURE_2D, 1, internal_format, width as i32, height as i32);
gl.tex_parameter_i32(glow::TEXTURE_2D, glow::TEXTURE_MIN_FILTER, glow::NEAREST as i32);
gl.tex_parameter_i32(glow::TEXTURE_2D, glow::TEXTURE_MAG_FILTER, glow::NEAREST as i32);
gl.tex_parameter_i32(glow::TEXTURE_2D, glow::TEXTURE_WRAP_S, glow::REPEAT as i32);
gl.tex_parameter_i32(glow::TEXTURE_2D, glow::TEXTURE_WRAP_T, glow::REPEAT as i32);
gl.tex_sub_image_2d(glow::TEXTURE_2D, 0, 0, 0, width as i32, height as i32, data_format, glow::UNSIGNED_BYTE, PixelUnpackData::Slice(image.as_slice()));
gl.generate_mipmap(glow::TEXTURE_2D);
self.set_renderer_id(renderer_id);
}
}
Nsight depth state:
This is what it looks like.
EDIT:
How the window is created:
fn create_display(
event_loop: &glutin::event_loop::EventLoop<()>,
title: &str
) -> (
glutin::WindowedContext<glutin::PossiblyCurrent>,
glow::Context,
) {
let window_builder = glutin::window::WindowBuilder::new()
.with_resizable(true)
.with_inner_size(glutin::dpi::LogicalSize {
width: 800.0,
height: 600.0,
})
.with_title(title);
let gl_window = unsafe {
glutin::ContextBuilder::new()
.with_depth_buffer(0)
.with_srgb(true)
.with_stencil_buffer(0)
.with_vsync(true)
.build_windowed(window_builder, event_loop)
.unwrap()
.make_current()
.unwrap()
};
let gl = unsafe { glow::Context::from_loader_function(|s| gl_window.get_proc_address(s)) };
unsafe {
use glow::HasContext as _;
gl.enable(glow::FRAMEBUFFER_SRGB);
}
(gl_window, gl)
}
The default framebuffer does not have a depth buffer. Therefore, the Depth Test does not work at all. You need to specify the depth buffer bits when creating the OpenGL window. e.g. 24 bits:
let gl_window = unsafe {
glutin::ContextBuilder::new()
.with_depth_buffer(24)
.with_srgb(true)
.with_stencil_buffer(0)
.with_vsync(true)
.build_windowed(window_builder, event_loop)
.unwrap()
.make_current()
.unwrap()
};

Quadratic curve stroke width on GPU

I was wondering how I can draw curves using a triangle, so I ended up reading these two articles.
http://commaexcess.com/articles/6/vector-graphics-on-the-gpu
http://www.mdk.org.pl/2007/10/27/curvy-blues
As I understand it, the hardware interpolates u and v across the three vertices.
Here is my result
My coordinates for the triangle
vec![100.0, 100.0,
100.0, 100.0,
200.0, 100.0];
UV coordinates:
vec![0.0, 0.0,
0.5, 0.0,
1.0, 1.0];
My fragment shader looks like this
#version 430 core
out vec4 frag_color;
in vec2 uv;
in vec4 cords;
void main() {
float x = uv.x;
float y = uv.y;
float result = (x*x) - y;
if(result > 0.0) {
frag_color = vec4(1.0, 0.1, 0.5, 1.0);
} else {
frag_color = vec4(2.0, 0.2, 3.0, 1.0);
}
}
I would like a clean stroke where the curve passes (where result is near zero). I tried the following:
if(result > 0.01) {
frag_color = vec4(1.0,0.1,0.5, 1.0);
} else if(result < -0.01) {
frag_color = vec4(2.0,0.2,1.0, 1.0);
} else {
frag_color = vec4(2.0, 0.2, 3.0, 0.0); // mid point
}
I got this:
But when I try to resize the triangle, I get this result:
The stroke width is thicker, but I want to keep the width from the previous image.
I have no idea how to achieve this; is it even possible?

Too many arguments to constructor of TouchDesigner's

I'm using GLSL in TouchDesigner. I want to make the color transparent via alpha, but I get the error "too many arguments to constructor of XXX".
void main()
{
vec2 r = vUV.st;
vec3 backgroundColor = vec3(1.0,0,0,0);
vec3 axesColor = vec3(0.0, 0.0, 1.0);
vec3 gridColor = vec3(0.5);
// start by setting the background color. If pixel's value
// is not overwritten later, this color will be displayed.
vec3 pixel = backgroundColor;
// Draw the grid lines
// we used "const" because loop variables can only be manipulated
// by constant expressions.
const float tickWidth = 0.1;
for(float i=0.0; i<1.0; i+=tickWidth) {
// "i" is the line coordinate.
if(abs(r.x - i)<0.002) pixel = gridColor;
if(abs(r.y - i)<0.002) pixel = gridColor;
}
// Draw the axes
if( abs(r.x)<0.005 ) pixel = axesColor;
if( abs(r.y)<0.006 ) pixel = axesColor;
fragColor = TDOutputSwizzle(vec4(pixel, 1.0));
}
The issue is the line
vec3 backgroundColor = vec3(1.0,0,0,0);
You have passed 4 arguments to the constructor of vec3, but a vector constructor takes exactly the number of values the vector stores.
Pass 3 single floating-point values to solve the issue:
vec3 backgroundColor = vec3(1.0, 0.0, 0.0);
The alpha channel is the 4th component. If you want to add an alpha channel, then you have to use vec4 instead of vec3:
void main()
{
vec2 r = vUV.st;
vec4 backgroundColor = vec4(1.0, 0.0, 0.0, 0.0); // red: 1, green: 0, blue: 0, alpha: 0
vec4 axesColor = vec4(0.0, 0.0, 1.0, 1.0); // red: 0, green: 0, blue: 1, alpha: 1
vec4 gridColor = vec4(0.5, 0.5, 0.5, 1.0); // gray: 0.5, alpha: 1
// start by setting the background color. If pixel's value
// is not overwritten later, this color will be displayed.
vec4 pixel = backgroundColor;
// Draw the grid lines
// we used "const" because loop variables can only be manipulated
// by constant expressions.
const float tickWidth = 0.1;
for(float i=0.0; i<1.0; i+=tickWidth) {
// "i" is the line coordinate.
if (abs(r.x - i) < 0.002 || abs(r.y - i) < 0.002)
pixel = gridColor;
}
// Draw the axes
if (abs(r.x) < 0.005 || abs(r.y) < 0.006)
pixel = axesColor;
fragColor = TDOutputSwizzle(pixel);
}

Bad lighting using Phong Method

I'm trying to make a cube, which is irregularly triangulated, but virtually coplanar, shade correctly.
Here is the current result I have:
With wireframe:
Normals calculated in my program:
Normals calculated by meshlabjs.net:
The lighting works properly when using regular size triangles for the cube. As you can see, I'm duplicating vertices and using angle weighting.
lighting.frag
vec4 scene_ambient = vec4(1, 1, 1, 1.0);
struct material
{
vec4 ambient;
vec4 diffuse;
vec4 specular;
float shininess;
};
material frontMaterial = material(
vec4(0.25, 0.25, 0.25, 1.0),
vec4(0.4, 0.4, 0.4, 1.0),
vec4(0.774597, 0.774597, 0.774597, 1.0),
76
);
struct lightSource
{
vec4 position;
vec4 diffuse;
vec4 specular;
float constantAttenuation, linearAttenuation, quadraticAttenuation;
float spotCutoff, spotExponent;
vec3 spotDirection;
};
lightSource light0 = lightSource(
vec4(0.0, 0.0, 0.0, 1.0),
vec4(100.0, 100.0, 100.0, 100.0),
vec4(100.0, 100.0, 100.0, 100.0),
0.1, 1, 0.01,
180.0, 0.0,
vec3(0.0, 0.0, 0.0)
);
vec4 light(lightSource ls, vec3 norm, vec3 deviation, vec3 position)
{
vec3 viewDirection = normalize(vec3(1.0 * vec4(0, 0, 0, 1.0) - vec4(position, 1)));
vec3 lightDirection;
float attenuation;
//ls.position.xyz = cameraPos;
ls.position.z += 50;
if (0.0 == ls.position.w) // directional light?
{
attenuation = 1.0; // no attenuation
lightDirection = normalize(vec3(ls.position));
}
else // point light or spotlight (or other kind of light)
{
vec3 positionToLightSource = vec3(ls.position - vec4(position, 1.0));
float distance = length(positionToLightSource);
lightDirection = normalize(positionToLightSource);
attenuation = 1.0 / (ls.constantAttenuation
+ ls.linearAttenuation * distance
+ ls.quadraticAttenuation * distance * distance);
if (ls.spotCutoff <= 90.0) // spotlight?
{
float clampedCosine = max(0.0, dot(-lightDirection, ls.spotDirection));
if (clampedCosine < cos(radians(ls.spotCutoff))) // outside of spotlight cone?
{
attenuation = 0.0;
}
else
{
attenuation = attenuation * pow(clampedCosine, ls.spotExponent);
}
}
}
vec3 ambientLighting = vec3(scene_ambient) * vec3(frontMaterial.ambient);
vec3 diffuseReflection = attenuation
* vec3(ls.diffuse) * vec3(frontMaterial.diffuse)
* max(0.0, dot(norm, lightDirection));
vec3 specularReflection;
if (dot(norm, lightDirection) < 0.0) // light source on the wrong side?
{
specularReflection = vec3(0.0, 0.0, 0.0); // no specular reflection
}
else // light source on the right side
{
specularReflection = attenuation * vec3(ls.specular) * vec3(frontMaterial.specular)
* pow(max(0.0, dot(reflect(lightDirection, norm), viewDirection)), frontMaterial.shininess);
}
return vec4(ambientLighting + diffuseReflection + specularReflection, 1.0);
}
vec4 generateGlobalLighting(vec3 norm, vec3 position)
{
return light(light0, norm, vec3(2,0,0), position);
}
mainmesh.frag
#version 430
in vec3 f_color;
in vec3 f_normal;
in vec3 f_position;
in float f_opacity;
out vec4 fragColor;
vec4 generateGlobalLighting(vec3 norm, vec3 position);
void main()
{
vec3 norm = normalize(f_normal);
vec4 l0 = generateGlobalLighting(norm, f_position);
fragColor = vec4(f_color, f_opacity) * l0;
}
Follows the code to generate the verts, normals and faces for the painter.
m_vertices_buf.resize(m_mesh.num_faces() * 3, 3);
m_normals_buf.resize(m_mesh.num_faces() * 3, 3);
m_faces_buf.resize(m_mesh.num_faces(), 3);
std::map<vertex_descriptor, std::list<Vector3d>> map;
GLDebugging* deb = GLDebugging::getInstance();
auto getAngle = [](Vector3d a, Vector3d b) {
double angle = 0.0;
angle = std::atan2(a.cross(b).norm(), a.dot(b));
return angle;
};
for (const auto& f : m_mesh.faces()) {
auto f_hh = m_mesh.halfedge(f);
//auto n = PMP::compute_face_normal(f, m_mesh);
vertex_descriptor vs[3];
Vector3d ps[3];
int i = 0;
for (const auto& v : m_mesh.vertices_around_face(f_hh)) {
auto p = m_mesh.point(v);
ps[i] = Vector3d(p.x(), p.y(), p.z());
vs[i++] = v;
}
auto n = (ps[1] - ps[0]).cross(ps[2] - ps[0]).normalized();
auto a1 = getAngle((ps[1] - ps[0]).normalized(), (ps[2] - ps[0]).normalized());
auto a2 = getAngle((ps[2] - ps[1]).normalized(), (ps[0] - ps[1]).normalized());
auto a3 = getAngle((ps[0] - ps[2]).normalized(), (ps[1] - ps[2]).normalized());
auto area = PMP::face_area(f, m_mesh);
map[vs[0]].push_back(n * a1);
map[vs[1]].push_back(n * a2);
map[vs[2]].push_back(n * a3);
auto p = m_mesh.point(vs[0]);
deb->drawLine(Vector3d(p.x(), p.y(), p.z()), Vector3d(p.x(), p.y(), p.z()) + Vector3d(n.x(), n.y(), n.z()) * 4);
p = m_mesh.point(vs[1]);
deb->drawLine(Vector3d(p.x(), p.y(), p.z()), Vector3d(p.x(), p.y(), p.z()) + Vector3d(n.x(), n.y(), n.z()) * 4);
p = m_mesh.point(vs[2]);
deb->drawLine(Vector3d(p.x(), p.y(), p.z()), Vector3d(p.x(), p.y(), p.z()) + Vector3d(n.x(), n.y(), n.z()) * 4);
}
int j = 0;
int i = 0;
for (const auto& f : m_mesh.faces()) {
auto f_hh = m_mesh.halfedge(f);
for (const auto& v : m_mesh.vertices_around_face(f_hh)) {
const auto& p = m_mesh.point(v);
m_vertices_buf.row(i) = RowVector3d(p.x(), p.y(), p.z());
Vector3d n(0, 0, 0);
//auto n = PMP::compute_face_normal(f, m_mesh);
Vector3d norm = Vector3d(n.x(), n.y(), n.z());
for (auto val : map[v]) {
norm += val;
}
norm.normalize();
deb->drawLine(Vector3d(p.x(), p.y(), p.z()), Vector3d(p.x(), p.y(), p.z()) + Vector3d(norm.x(), norm.y(), norm.z()) * 3,
Vector3d(1.0, 0, 0));
m_normals_buf.row(i++) = RowVector3d(norm.x(), norm.y(), norm.z());
}
m_faces_buf.row(j++) = RowVector3i(i - 3, i - 2, i - 1);
}
Follows the painter code:
m_vertexAttrLoc = program.attributeLocation("v_vertex");
m_colorAttrLoc = program.attributeLocation("v_color");
m_normalAttrLoc = program.attributeLocation("v_normal");
m_mvMatrixLoc = program.uniformLocation("v_modelViewMatrix");
m_projMatrixLoc = program.uniformLocation("v_projectionMatrix");
m_normalMatrixLoc = program.uniformLocation("v_normalMatrix");
//m_relativePosLoc = program.uniformLocation("v_relativePos");
m_opacityLoc = program.uniformLocation("v_opacity");
m_colorMaskLoc = program.uniformLocation("v_colorMask");
//bool for unmapping depth color
m_useDepthMap = program.uniformLocation("v_useDepthMap");
program.setUniformValue(m_mvMatrixLoc, modelView);
//uniform used for Color map to regular model switch
program.setUniformValue(m_useDepthMap, (m_showColorMap &&
(m_showProblemAreas || m_showPrepMap || m_showDepthMap || m_showMockupMap)));
QMatrix3x3 normalMatrix = modelView.normalMatrix();
program.setUniformValue(m_normalMatrixLoc, normalMatrix);
program.setUniformValue(m_projMatrixLoc, projection);
//program.setUniformValue(m_relativePosLoc, m_relativePos);
program.setUniformValue(m_opacityLoc, m_opacity);
program.setUniformValue(m_colorMaskLoc, m_colorMask);
glEnableVertexAttribArray(m_vertexAttrLoc);
m_vertices.bind();
glVertexAttribPointer(m_vertexAttrLoc, 3, GL_DOUBLE, false, 3 * sizeof(GLdouble), NULL);
m_vertices.release();
glEnableVertexAttribArray(m_normalAttrLoc);
m_normals.bind();
glVertexAttribPointer(m_normalAttrLoc, 3, GL_DOUBLE, false, 0, NULL);
m_normals.release();
glEnableVertexAttribArray(m_colorAttrLoc);
if (m_showProblemAreas) {
m_problemColorMap.bind();
glVertexAttribPointer(m_colorAttrLoc, 3, GL_DOUBLE, false, 0, NULL);
m_problemColorMap.release();
}
else if (m_showPrepMap) {
m_prepColorMap.bind();
glVertexAttribPointer(m_colorAttrLoc, 3, GL_DOUBLE, false, 0, NULL);
m_prepColorMap.release();
}
else if (m_showMockupMap) {
m_mokupColorMap.bind();
glVertexAttribPointer(m_colorAttrLoc, 3, GL_DOUBLE, false, 0, NULL);
m_mokupColorMap.release();
}
else {
//m_colors.bind();
//glVertexAttribPointer(m_colorAttrLoc, 3, GL_DOUBLE, false, 0, NULL);
//m_colors.release();
}
m_indices.bind();
glDrawElements(GL_TRIANGLES, m_indices.size() / sizeof(int), GL_UNSIGNED_INT, NULL);
m_indices.release();
glDisableVertexAttribArray(m_vertexAttrLoc);
glDisableVertexAttribArray(m_normalAttrLoc);
glDisableVertexAttribArray(m_colorAttrLoc);
EDIT: Sorry for not being clear enough. The cube is merely an example. My requirements are that the shading works for any kind of mesh. Those with very sharp edges, and those that are very organic (like humans or animals).
The issue is clearly shown by the image "Normals calculated in my program" in your question. The normal vectors at the corners and edges of the cube are not perpendicular to the faces:
For a proper specular reflection on plane faces, the normal vectors have to be perpendicular to the sides of the cube.
The vertex coordinate and its normal vector form a tuple with 6 components (x, y, z, nx, ny, nz).
A vertex coordinate on an edge of the cube is adjacent to 2 sides of the cube and 2 (face) normal vectors. The 8 vertex coordinates on the 8 corners of the cube are adjacent to 3 sides (3 normal vectors) each.
To define the vertex attributes with face normal vectors (perpendicular to a side) you have to define multiple tuples with the same vertex coordinate but different normal vectors. You have to use the different attribute tuples to form the triangle primitives on the different sides of the cube.
e.g. If you have defined a cube with the left, front, bottom coordinate of (-1, -1, -1) and the right, back, top coordinate of (1, 1, 1), then the vertex coordinate (-1, -1, -1) is adjacent to the left, front and bottom side of the cube:
x y z nx ny nz
left: -1 -1 -1 -1 0 0
front: -1 -1 -1 0 -1 0
bottom: -1 -1 -1 0 0 -1
Use the left attribute tuple to form the triangle primitives on the left side, the front tuple for the triangles on the front, and the bottom tuple for the triangles on the bottom.
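The table above can be sketched directly in code. This is a minimal Rust illustration of the duplicated-tuple idea; the `VertexN` struct and its field names are illustrative, not glium's or the question's types:

```rust
// Sketch: the corner (-1, -1, -1) appears three times, once per adjacent
// face, each copy carrying that face's normal. Struct name is illustrative.
#[derive(Debug, Clone, Copy, PartialEq)]
struct VertexN {
    position: (f32, f32, f32),
    normal: (f32, f32, f32),
}

fn corner_tuples() -> [VertexN; 3] {
    let p = (-1.0, -1.0, -1.0);
    [
        VertexN { position: p, normal: (-1.0, 0.0, 0.0) }, // left face
        VertexN { position: p, normal: (0.0, -1.0, 0.0) }, // front face
        VertexN { position: p, normal: (0.0, 0.0, -1.0) }, // bottom face
    ]
}

fn main() {
    // Same coordinate, three different face normals:
    for v in corner_tuples() {
        println!("{:?}", v);
    }
}
```

A full cube built this way has 6 faces × 4 corners = 24 tuples, which is why a flat-shaded cube needs 24 vertices rather than 8.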
In general you have to decide what you want. There is no general approach for all meshes.
Either you have a fine granulated mesh and you want a smooth appearance (e.g a sphere). In that case your approach is fine, it will generate a smooth light transition on the edges between the primitives.
Or you have a mesh with hard edges like a cube. In that case you have to "duplicate" vertices. If 2 (or even more) triangles share a vertex coordinate, but the face normal vectors are different, then you have to create a separate tuple, for all the combinations of the vertex coordinate and the face normal vector.
For a general "smooth" solution you would have to interpolate the normal vectors of the vertex coordinates which are in the middle of plane surfaces, according to the surrounding geometry. That means if a bunch of triangle primitives form a plane, then all the normal vectors of the vertices have to be computed depending on their position on the plane. At the centroid the normal vector is equal to the face normal vector. For all other points the normal vector has to be interpolated with the normal vectors of the surrounding faces.
Anyway, that seems to be an XY problem. Why is there a "vertex" somewhere in the middle of a plane? Probably the plane is tessellated. But if the plane is tessellated, why are the normal vectors not interpolated too during the tessellation process?
As mentioned in the other answers, the problem is your mesh normals.
Computing an average normal, as you are doing currently, is what you would want for a smooth object like a sphere. CGAL has a function for that: CGAL::Polygon_mesh_processing::compute_vertex_normal. For a cube, what you want is normals perpendicular to the faces;
CGAL has a function for that too: CGAL::Polygon_mesh_processing::compute_face_normal.
To debug the normals you can just set fragColor = vec4(norm, 1); in mainmesh.frag. Here the cubes on the left have averaged (smooth) normals and those on the right have face (flat) normals:
And shaded they look like this:
shading has to work for any kind of mesh (a cube or any organic mesh)
For that you can use something like per_corner_normals, which:
Implements a simple scheme which computes corner normals as averages
of normals of faces incident on the corresponding vertex which do not
deviate by more than a specified dihedral angle (e.g. 20°)
And this is what it looks like with an angle of 1°, 20°, and 100°:
In your image, we can see that the inner triangle (the one that doesn't have a point on the cube's edges, in the top-left quarter) has a homogeneous color.
My interpretation is that the triangles that have points on the edges/corners of the cube share the same vertex, and therefore the same normal, which has somehow been averaged. So it's not perpendicular to the faces.
To debug this, you should create a simple cube geometry with 6 faces and 2 triangles per face. That makes 12 triangles.
Two options:
If you have 8 vertices in the geometry, the corners are shared between triangles of different faces, and the issue comes from the geometry generator.
If you have 6×4 = 24 vertices in the geometry, the truth lies elsewhere.