Cannot apply blending to cube located behind half-transparent textured surface - c++

Following the tutorial from learnopengl.com about rendering semi-transparent window glass using blending, I tried to apply the same principle to my simple scene (navigable with the mouse), which contains:
Cube: 6 faces, each made of 2 triangles, built from two attributes (position and color) defined in its vertex shader and passed on to its fragment shader.
Grass: a 2D surface (two triangles) to which a PNG texture is applied through a sampler2D uniform (the background of the PNG is transparent).
Window: a semi-transparent 2D surface based on the same shaders (vertex and fragment) as the grass above. Both textures were downloaded from learnopengl.com.
The issue I'm facing: I can see the grass through the window, but not the cube!
My code is structured as follows (I left the rendering of the window to the very last on purpose):
// enable depth test & blending
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);

while (true) {
    glClearColor(background.r, background.g, background.b, background.a);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    cube.draw();
    grass.draw();
    window.draw();
}
Edit: I'll share below the vertex and fragment shaders used to draw the two textured surfaces (grass and window):
#version 130
in vec2 position;
in vec2 texture_coord;
// opengl transformation matrices
uniform mat4 model;      // object coord -> world coord
uniform mat4 view;       // world coord -> camera coord
uniform mat4 projection; // camera coord -> ndc coord
out vec2 texture_coord_vert;

void main() {
    gl_Position = projection * view * model * vec4(position, 0.0, 1.0);
    texture_coord_vert = texture_coord;
}
#version 130
in vec2 texture_coord_vert;
uniform sampler2D texture2d;
out vec4 color_out;

void main() {
    vec4 color = texture(texture2d, texture_coord_vert);
    // manage transparency
    if (color.a == 0.0)
        discard;
    color_out = color;
}
And the ones used to render the colored cube:
#version 130
in vec3 position;
in vec3 color;
// opengl transformation matrices
uniform mat4 model;      // object coord -> world coord
uniform mat4 view;       // world coord -> camera coord
uniform mat4 projection; // camera coord -> ndc coord
out vec3 color_vert;

void main() {
    gl_Position = projection * view * model * vec4(position, 1.0);
    color_vert = color;
}
#version 130
in vec3 color_vert;
out vec4 color_out;

void main() {
    color_out = vec4(color_vert, 1.0);
}
P.S.: My shader programs use GLSL v1.30, because my integrated GPU didn't seem to support later versions.
Regarding the code that does the actual drawing: I have one instance of a Renderer class for each type of geometry (one shared by both textured surfaces, and one for the cube). This class manages the creation/binding/deletion of VAOs and the binding/deletion of VBOs (VBO creation happens outside the class so vertices can be shared between similar shapes). Its constructor takes the shader program and the vertex attributes as arguments. The relevant code is below:
Renderer::Renderer(Program program, vector attributes) {
    vao.bind();
    vbo.bind();
    define_attributes(attributes);
    vao.unbind();
    vbo.unbind();
}

void Renderer::draw(Uniforms uniforms) {
    vao.bind();
    program.use();
    set_uniforms(uniforms);
    glDrawArrays(GL_TRIANGLES, 0, n_vertexes);
    vao.unbind();
    program.unuse();
}

Your blend function depends on the destination's alpha channel (GL_ONE_MINUS_DST_ALPHA):
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_DST_ALPHA);
dest = src * src_alpha + dest * (1-dest_alpha)
The cube is drawn with an alpha of 1.0 (its fragment shader writes vec4(color_vert, 1.0)), so the destination factor (1 - dest_alpha) is 0 and the cube's color is not mixed with the color of the window at all.
The traditional alpha blending function depends only on the source alpha channel:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
dest = src * src_alpha + dest * (1-src_alpha)
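To see the difference with concrete numbers (a worked example added for clarity): take a window texel with src_alpha = 0.5 drawn over a cube pixel whose framebuffer alpha is dest_alpha = 1.0.
With GL_ONE_MINUS_DST_ALPHA: dest = src * 0.5 + dest * (1 - 1.0) = 0.5 * src, so the cube vanishes.
With GL_ONE_MINUS_SRC_ALPHA: dest = src * 0.5 + dest * (1 - 0.5) = 0.5 * src + 0.5 * dest, so the cube shows through.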
See also glBlendFunc and Blending
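As a minimal sketch of the corrected setup (reusing the names from the question's pseudocode; cube, grass, window and background are the asker's objects, not a fixed API):

glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // depends on source alpha only

while (true) {
    glClearColor(background.r, background.g, background.b, background.a);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    cube.draw();   // opaque geometry first
    grass.draw();  // alpha-tested (discard), ordering not critical
    window.draw(); // blended geometry last, sorted back-to-front if there are several
}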

Related

OpenGL shapes look darker when camera is below them

I have a problem with rendering my quads in OpenGL. They look darker when translucency is applied and the camera is below a certain point. How can I fix this? The objects are lots of quads with tiny Z differences. I have implemented rendering of translucent objects following this webpage: http://www.alecjacobson.com/weblog/?p=2750
Render code:
double alpha_factor = 0.75;
double alpha_frac = (r_alpha - alpha_factor * r_alpha) / (1.0 - alpha_factor * r_alpha);
double prev_alpha = r_alpha;
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
// quintuple pass to get the rendering of translucent objects, somewhat correct
// reverse render order for getting alpha going!
// 1st pass: only depth checks
glDisable(GL_CULL_FACE);
glDepthFunc(GL_LESS);
r_alpha = 0;
// send alpha for each pass
// reverse order
drawobjects(RENDER_REVERSE);
// 2nd pass: guaranteed back face display with normal alpha
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
glDepthFunc(GL_ALWAYS);
r_alpha = alpha_factor * (prev_alpha + 0.025);
// reverse order
drawobjects(RENDER_REVERSE);
// 3rd pass: depth checked version of fraction of calculated alpha. (minus 1)
glEnable(GL_CULL_FACE);
glCullFace(GL_FRONT);
glDepthFunc(GL_LEQUAL);
r_alpha = alpha_frac + 0.025;
// normal order
drawobjects(RENDER_NORMAL);
// 4th pass: same for back face
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glDepthFunc(GL_ALWAYS);
r_alpha = alpha_factor * (prev_alpha + 0.025);
// reverse order
drawobjects(RENDER_REVERSE);
// 5th pass: just put out the entire thing now
glDisable(GL_CULL_FACE);
glDepthFunc(GL_LEQUAL);
r_alpha = alpha_frac + 0.025;
// normal order
drawobjects(RENDER_NORMAL);
glDisable(GL_BLEND);
r_alpha = prev_alpha;
GLSL shaders:
Vertex shader:
#version 330 core
layout(location = 0) in vec3 vPos_ModelSpace;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in mat4 model_instance;
out vec2 UV;
out float alpha;
flat out uint alpha_mode;
// view + projection matrices (the model matrix comes in per instance)
uniform mat4 proj;
uniform mat4 view;
uniform float v_alpha;
uniform uint v_alpha_mode;

void main() {
    gl_Position = proj * view * model_instance * vec4(vPos_ModelSpace, 1.0);
    // send to frag shader
    UV = vertexUV;
    alpha = v_alpha;
    alpha_mode = v_alpha_mode;
}
Fragment shader:
#version 330 core
// texture UV coordinate
in vec2 UV;
in float alpha;
flat in uint alpha_mode;
out vec4 color;
// Values that stay constant for the whole mesh.
uniform sampler2D texSampler;

void main() {
    int amode = int(alpha_mode);
    color.rgb = texture(texSampler, UV).rgb;
    color.a = alpha;
    if (amode == 1)
        color.rgb *= alpha;
}
Image when problem happens:
Image comparison for how it should look regardless of my position:
It fades away in the center because you are looking at the infinitely thin sides of the planes, so they disappear. As for the brightness change between top and bottom, it's due to how your passes treat surface normals: the dark planes are the ones whose normals face away from the camera, with no camera-facing planes to lighten them up.
It looks like you are rendering many translucent planes in a cube to estimate a volume. Here is a simple example of volume rendering: https://www.shadertoy.com/view/lsG3D3
http://developer.download.nvidia.com/books/HTML/gpugems/gpugems_ch39.html is a fantastic resource. It explains different ways to render volumes and shows how awesome they can look. For reference, that last example uses a sphere as proxy geometry to raymarch a volume fractal.
Happy coding!

Silhouette-Outlined shader

I'm trying to implement a GLSL shader that highlights the outer edges of a rendered 3D mesh. The problem is that I do not have access to the OpenGL client-side code, so this must be done in GLSL shaders only.
My first attempt was to adapt this shader from Unity to OpenGL GLSL. Here is how it should look:
And here is what I got:
I'm not sure if I'm computing things correctly, but as you can see the output is nowhere near my expectations.
Here is the Ogre material
material Chassis
{
    technique
    {
        pass standard
        {
            cull_software back
            scene_blend zero one
        }
        pass psssm
        {
            cull_software front
            scene_blend src_alpha one_minus_src_alpha
            vertex_program_ref reflection_cube_specularmap_normalmap_vs100
            {
                param_named_auto modelViewProjectionMatrix worldviewproj_matrix
                param_named_auto normalMatrix inverse_transpose_world_matrix
                param_named_auto modelView worldview_matrix
                param_named_auto camera_world_position camera_position
                param_named_auto inverse_projection_matrix inverse_projection_matrix
                param_named_auto projection_matrix projection_matrix
                param_named_auto p_InverseModelView inverse_worldview_matrix
            }
            fragment_program_ref reflection_cube_specularmap_normalmap_fs100
            {
            }
        }
    }
}
Here is the vertex shader:
#version 140
#define lowp
#define mediump
#define highp
in vec4 vertex;
in vec3 normal;
uniform mat4 normalMatrix;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelView;
uniform vec3 camera_world_position;
uniform mat4 projection_matrix;
uniform mat4 inverse_projection_matrix;

void main()
{
    vec4 pos = modelViewProjectionMatrix * vertex;
    mat4 modelView = inverse_projection_matrix * modelViewProjectionMatrix;
    vec4 norm = inverse(transpose(modelView)) * vec4(normal, 0.0);
    vec2 offset = vec2(norm.x * projection_matrix[0][0], norm.y * projection_matrix[1][1]);
    pos.xy += offset * pos.z * 0.18;
    gl_Position = pos;
}
EDIT: I have added the material script that Ogre uses, as well as the vertex shader code.
I assume a single complex 3D mesh. I would do this with 2-pass rendering:
1. clear screen
Let's use (0,0,0) as the clear color.
2. render mesh
Disable depth output/test (or clear the depth buffer afterwards). Do not use shading; just fill with some predefined color, for example (1,1,1). Let's do this for a simple cube:
3. read the frame buffer and use it as a texture
So either use an FBO and render to texture for #1 and #2, or use glReadPixels instead and load the result back to the GPU as a texture (I know it is slower, but it also works on Intel). For more info see both answers in here:
OpenGL Scale Single Pixel Line
4. clear screen with background color
5. render
So either render a GL_QUAD covering the whole screen, or render your mesh with shading and whatever you want. You also need to pass the texture from the previous step into GLSL.
In the fragment shader render as usual ... but at the end also add this:
Scan all texels around the current fragment's screen position, up to a distance equal to the outline thickness, in the texture from the previous step. If any black texel is found, override the output color with your outline color. You can even modulate it with the smallest distance to a black texel.
This is very similar to this:
How to implement 2D raycasting light effect in GLSL
but much simpler. Here is the result:
I took this example of mine, Analysis of a shader in VR, and converted it to this:
Fragment:
// Fragment
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location =64) uniform vec3 lt_pnt_pos;// point light source position [GCS]
layout(location =67) uniform vec3 lt_pnt_col;// point light source color&strength
layout(location =70) uniform vec3 lt_amb_col;// ambient light source color&strength
in vec3 LCS_pos;       // fragment position [LCS]
in vec3 pixel_pos;     // fragment position [GCS]
in vec3 pixel_col;     // fragment surface color
in vec3 pixel_nor;     // fragment surface normal [GCS]
out vec4 col;
// outline
uniform sampler2D txr; // texture from previous pass
uniform int thickness; // [pixels] outline thickness
uniform float xs,ys;   // [pixels] texture/screen resolution
void main()
    {
    // standard rendering
    float li;
    vec3 c,lt_dir;
    lt_dir=normalize(lt_pnt_pos-pixel_pos); // vector from fragment to point light source in [GCS]
    li=dot(pixel_nor,lt_dir);
    if (li<0.0) li=0.0;
    c=pixel_col*(lt_amb_col+(lt_pnt_col*li));
    // outline effect
    if (thickness>0) // thickness effect in second pass
        {
        int i,j,r=thickness;
        float xx,yy,rr,x,y,dx,dy;
        dx=1.0/xs; // texel size
        dy=1.0/ys;
        x=gl_FragCoord.x*dx;
        y=gl_FragCoord.y*dy;
        rr=thickness*thickness;
        for (yy=y-(float(thickness)*dy),i=-r;i<=r;i++,yy+=dy)
         for (xx=x-(float(thickness)*dx),j=-r;j<=r;j++,xx+=dx)
          if ((i*i)+(j*j)<=rr)
           if ((texture(txr,vec2(xx,yy)).r)<0.01)
            {
            c=vec3(1.0,0.0,0.0); // outline color
            i=r+r+1;
            j=r+r+1;
            break;
            }
        }
    else c=vec3(1.0,1.0,1.0); // render with white in first pass
    // output color
    col=vec4(c,1.0);
    }
The vertex shader is unchanged:
// Vertex
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location = 0) in vec3 pos;
layout(location = 2) in vec3 nor;
layout(location = 3) in vec3 col;
layout(location = 0) uniform mat4 m_model;  // model matrix
layout(location =16) uniform mat4 m_normal; // model matrix with origin=(0,0,0)
layout(location =32) uniform mat4 m_view;   // inverse of camera matrix
layout(location =48) uniform mat4 m_proj;   // projection matrix
out vec3 LCS_pos;   // fragment position [LCS]
out vec3 pixel_pos; // fragment position [GCS]
out vec3 pixel_col; // fragment surface color
out vec3 pixel_nor; // fragment surface normal [GCS]
void main()
    {
    LCS_pos=pos;
    pixel_col=col;
    pixel_pos=(m_model*vec4(pos,1)).xyz;
    pixel_nor=(m_normal*vec4(nor,1)).xyz;
    gl_Position=m_proj*m_view*m_model*vec4(pos,1);
    }
And the CPU-side code looks like this:
//---------------------------------------------------------------------------
#include <vcl.h>
#pragma hdrstop
#include "Unit1.h"
#include "gl_simple.h"
//---------------------------------------------------------------------------
#pragma package(smart_init)
#pragma resource "*.dfm"
TForm1 *Form1;
//---------------------------------------------------------------------------
GLfloat lt_pnt_pos[3]={+2.5,+2.5,+2.5};
GLfloat lt_pnt_col[3]={0.8,0.8,0.8};
GLfloat lt_amb_col[3]={0.2,0.2,0.2};
GLuint txrid=0;
GLfloat animt=0.0;
//---------------------------------------------------------------------------
// https://stackoverflow.com/q/46603878/2521214
//---------------------------------------------------------------------------
void gl_draw()
{
// load values into shader
GLint i,id;
GLfloat m[16];
glUseProgram(prog_id);
GLfloat x,y,z,d=0.25;
id=glGetUniformLocation(prog_id,"txr"); glUniform1i(id,0);
id=glGetUniformLocation(prog_id,"xs"); glUniform1f(id,xs);
id=glGetUniformLocation(prog_id,"ys"); glUniform1f(id,ys);
id=64; glUniform3fv(id,1,lt_pnt_pos);
id=67; glUniform3fv(id,1,lt_pnt_col);
id=70; glUniform3fv(id,1,lt_amb_col);
glGetFloatv(GL_MODELVIEW_MATRIX,m);
id=0; glUniformMatrix4fv(id,1,GL_FALSE,m);
m[12]=0.0; m[13]=0.0; m[14]=0.0;
id=16; glUniformMatrix4fv(id,1,GL_FALSE,m);
for (i=0;i<16;i++) m[i]=0.0; m[0]=1.0; m[5]=1.0; m[10]=1.0; m[15]=1.0;
id=32; glUniformMatrix4fv(id,1,GL_FALSE,m);
glGetFloatv(GL_PROJECTION_MATRIX,m);
id=48; glUniformMatrix4fv(id,1,GL_FALSE,m);
// draw VAO cube (no outline)
id=glGetUniformLocation(prog_id,"thickness"); glUniform1i(id,0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
vao_draw(); // render cube
// copy frame buffer to CPU memory and then back to GPU as a texture
BYTE *map=new BYTE[xs*ys*4];
glReadPixels(0,0,xs,ys,GL_RGB,GL_UNSIGNED_BYTE,map); // framebuffer -> map[]
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,txrid);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, xs, ys, 0, GL_RGB, GL_UNSIGNED_BYTE, map); // map[] -> texture txrid
delete[] map;
// draw VAO cube (outline)
id=glGetUniformLocation(prog_id,"thickness"); glUniform1i(id,5);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
vao_draw(); // render cube
glDisable(GL_TEXTURE_2D);
// turn off shader
glUseProgram(0);
// rotate the cube to see animation
glMatrixMode(GL_MODELVIEW);
// glRotatef(1.0,0.0,1.0,0.0);
// glRotatef(1.0,1.0,0.0,0.0);
glFlush();
SwapBuffers(hdc);
}
//---------------------------------------------------------------------------
__fastcall TForm1::TForm1(TComponent* Owner):TForm(Owner)
{
gl_init(Handle);
glGenTextures(1,&txrid);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D,txrid);
glPixelStorei(GL_UNPACK_ALIGNMENT, 4);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T,GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER,GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER,GL_NEAREST);
glTexEnvf(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE,GL_COPY);
glDisable(GL_TEXTURE_2D);
int hnd,siz; char vertex[4096],fragment[4096];
hnd=FileOpen("normal_shading.glsl_vert",fmOpenRead); siz=FileSeek(hnd,0,2); FileSeek(hnd,0,0); FileRead(hnd,vertex ,siz); vertex [siz]=0; FileClose(hnd);
hnd=FileOpen("normal_shading.glsl_frag",fmOpenRead); siz=FileSeek(hnd,0,2); FileSeek(hnd,0,0); FileRead(hnd,fragment,siz); fragment[siz]=0; FileClose(hnd);
glsl_init(vertex,fragment);
// hnd=FileCreate("GLSL.txt"); FileWrite(hnd,glsl_log,glsl_logs); FileClose(hnd);
int i0,i;
mm_log->Lines->Clear();
for (i=i0=0;i<glsl_logs;i++)
if ((glsl_log[i]==13)||(glsl_log[i]==10))
{
glsl_log[i]=0;
mm_log->Lines->Add(glsl_log+i0);
glsl_log[i]=13;
for (;((glsl_log[i]==13)||(glsl_log[i]==10))&&(i<glsl_logs);i++);
i0=i;
}
if (i0<glsl_logs) mm_log->Lines->Add(glsl_log+i0);
vao_init();
}
//---------------------------------------------------------------------------
void __fastcall TForm1::FormDestroy(TObject *Sender)
{
glDeleteTextures(1,&txrid);
gl_exit();
glsl_exit();
vao_exit();
}
//---------------------------------------------------------------------------
void __fastcall TForm1::FormResize(TObject *Sender)
{
gl_resize(ClientWidth,ClientHeight-mm_log->Height);
glMatrixMode(GL_PROJECTION);
glTranslatef(0,0,-15.0);
glMatrixMode(GL_MODELVIEW);
glRotatef(-15.0,0.0,1.0,0.0);
glRotatef(-125.0,1.0,0.0,0.0);
}
//---------------------------------------------------------------------------
void __fastcall TForm1::FormPaint(TObject *Sender)
{
gl_draw();
}
//---------------------------------------------------------------------------
void __fastcall TForm1::Timer1Timer(TObject *Sender)
{
gl_draw();
animt+=0.02; if (animt>1.5) animt=-0.5;
Caption=animt;
}
//---------------------------------------------------------------------------
void __fastcall TForm1::FormMouseWheel(TObject *Sender, TShiftState Shift, int WheelDelta, TPoint &MousePos, bool &Handled)
{
GLfloat dz=2.0;
if (WheelDelta<0) dz=-dz;
glMatrixMode(GL_PROJECTION);
glTranslatef(0,0,dz);
gl_draw();
}
//---------------------------------------------------------------------------
As usual, the code is using/based on this:
complete GL+GLSL+VAO/VBO C++ example
[Notes]
In case you have multiple objects, use a different color for each object in #2. Then in #5, scan for any color different from the one in the texel at the current position, instead of scanning for black.
This can also be done on a 2D image instead of a mesh. You just need to know the background color, so you can use pre-rendered/grabbed/screenshotted images for this too.
You can add discard and/or change the final if logic to alter the behaviour (e.g. if you want just the outline and no mesh inside, etc.). Or you can add the outline color to the render color instead of assigning it directly, to get the impression of a highlight instead of a hard coloring.
See the a), b), c) options in the modified fragment shader:
// Fragment
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location =64) uniform vec3 lt_pnt_pos;// point light source position [GCS]
layout(location =67) uniform vec3 lt_pnt_col;// point light source color&strength
layout(location =70) uniform vec3 lt_amb_col;// ambient light source color&strength
in vec3 LCS_pos;       // fragment position [LCS]
in vec3 pixel_pos;     // fragment position [GCS]
in vec3 pixel_col;     // fragment surface color
in vec3 pixel_nor;     // fragment surface normal [GCS]
out vec4 col;
// outline
uniform sampler2D txr; // texture from previous pass
uniform int thickness; // [pixels] outline thickness
uniform float xs,ys;   // [pixels] texture/screen resolution
void main()
    {
    // standard rendering
    float li;
    vec3 c,lt_dir;
    lt_dir=normalize(lt_pnt_pos-pixel_pos); // vector from fragment to point light source in [GCS]
    li=dot(pixel_nor,lt_dir);
    if (li<0.0) li=0.0;
    c=pixel_col*(lt_amb_col+(lt_pnt_col*li));
    // outline effect
    if (thickness>0) // thickness effect in second pass
        {
        int i,j,r=thickness;
        float xx,yy,rr,x,y,dx,dy;
        dx=1.0/xs; // texel size
        dy=1.0/ys;
        x=gl_FragCoord.x*dx;
        y=gl_FragCoord.y*dy;
        rr=thickness*thickness;
        for (yy=y-(float(thickness)*dy),i=-r;i<=r;i++,yy+=dy)
         for (xx=x-(float(thickness)*dx),j=-r;j<=r;j++,xx+=dx)
          if ((i*i)+(j*j)<=rr)
           if ((texture(txr,vec2(xx,yy)).r)<0.01)
            {
            c =vec3(1.0,0.0,0.0); // a) assign outline color
//          c+=vec3(1.0,0.0,0.0); // b) add outline color
            i=r+r+1;
            j=r+r+1;
            r=0;
            break;
            }
//      if (r!=0) discard; // c) do not render inside
        }
    else c=vec3(1.0,1.0,1.0); // render with white in first pass
    // output color
    col=vec4(c,1.0);
    }
[Edit1] Single-pass approach for smooth edges
As you cannot access the client-side code, this approach works in the shaders only. For smooth (curved) shapes, the silhouette is where the surface normal is nearly perpendicular to the camera view axis (z), so the dot product between the two is near zero. This can be exploited directly... Here is the update of the shaders:
Vertex
// Vertex
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location = 0) in vec3 pos;
layout(location = 2) in vec3 nor;
layout(location = 3) in vec3 col;
layout(location = 0) uniform mat4 m_model;  // model matrix
layout(location =16) uniform mat4 m_normal; // model matrix with origin=(0,0,0)
layout(location =32) uniform mat4 m_view;   // inverse of camera matrix
layout(location =48) uniform mat4 m_proj;   // projection matrix
out vec3 pixel_pos; // fragment position [GCS]
out vec3 pixel_col; // fragment surface color
out vec3 pixel_nor; // fragment surface normal [GCS]
out vec3 view_nor;  // surface normal in camera [LCS]
void main()
    {
    pixel_col=col;
    pixel_pos=(m_model*vec4(pos,1)).xyz;
    pixel_nor=(m_normal*vec4(nor,1)).xyz;
    mat4 m;
    m=m_model*m_view;           // model view matrix
    m[3].xyz=vec3(0.0,0.0,0.0); // with origin set to (0,0,0)
    view_nor=(m*vec4(nor,1.0)).xyz; // object local normal to camera local normal
    gl_Position=m_proj*m_view*m_model*vec4(pos,1);
    }
Fragment
// Fragment
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location =64) uniform vec3 lt_pnt_pos;// point light source position [GCS]
layout(location =67) uniform vec3 lt_pnt_col;// point light source color&strength
layout(location =70) uniform vec3 lt_amb_col;// ambient light source color&strength
in vec3 pixel_pos; // fragment position [GCS]
in vec3 pixel_col; // fragment surface color
in vec3 pixel_nor; // fragment surface normal [GCS]
out vec4 col;
// outline
in vec3 view_nor;  // surface normal in camera [LCS]
void main()
    {
    // standard rendering
    float li;
    vec3 c,lt_dir;
    lt_dir=normalize(lt_pnt_pos-pixel_pos); // vector from fragment to point light source in [GCS]
    li=dot(pixel_nor,lt_dir);
    if (li<0.0) li=0.0;
    c=pixel_col*(lt_amb_col+(lt_pnt_col*li));
    // outline effect
    if (abs(dot(view_nor,vec3(0.0,0.0,1.0)))<=0.5) c=vec3(1.0,0.0,0.0);
    // output color
    col=vec4(c,1.0);
    }
Here is a preview:
As you can see, it works properly for smooth objects, but for sharp edges like those on a cube it does not work at all... You can use the same combinations (a, b, c) as in the previous approach.
The m matrix holds the modelview matrix with its origin set to (0,0,0), which makes it usable for transforming vectors (no translation). For more info see Understanding 4x4 homogenous transform matrices.
The 0.5 threshold on the dot product result is the thickness of the outline: 0.0 means no outline and 1.0 means the whole object becomes outline.
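To make the threshold concrete (a small worked step, not part of the original answer): both vectors are unit length, so dot(view_nor, vec3(0.0,0.0,1.0)) = cos(θ), where θ is the angle between the surface normal and the view axis. The test |cos(θ)| <= 0.5 selects 60° <= θ <= 120°, i.e. normals within 30° of the screen plane are painted as outline; raising the threshold widens that band and thickens the outline.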

Why does OpenSceneGraph map all Sampler2D to the first texture

I am currently writing a program with OpenSceneGraph (3.4.0) and my own GLSL (330) shaders.
It uses multiple textures as input, then renders into multiple render targets with a pre-render camera, and reads those render-target textures back in with a second camera for deferred shading. Each camera therefore has its own shaders (called geometry_pass and lighting_pass here).
My problem: when reading, both shaders sample the same texture in all sampler2D uniforms.
//in geometry_pass.frag
uniform sampler2D uAlbedoMap;
uniform sampler2D uHeightMap;
uniform sampler2D uNormalMap;
uniform sampler2D uRoughnessMap;
uniform sampler2D uSpecularMap;
[...]
layout (location = 0) out vec4 albedo;
layout (location = 1) out vec4 height;
layout (location = 2) out vec4 normal;
layout (location = 3) out vec4 position;
layout (location = 4) out vec4 roughness;
layout (location = 5) out vec4 specular;
[...]
albedo = vec4(texture(uAlbedoMap, vTexCoords).rgb, 1.0);
height = vec4(texture(uHeightMap, vTexCoords).rgb, 1.0);
normal = vec4(texture(uNormalMap, vTexCoords).rgb, 1.0);
position = vec4(vPosition_WorldSpace, 1.0);
roughness = vec4(texture(uRoughnessMap, vTexCoords).rgb, 1.0);
specular = vec4(texture(uSpecularMap, vTexCoords).rgb, 1.0);
Here the output is always the color of uAlbedoMap, except for the position, which gets exported correctly.
In the lighting pass, when I read the textures of the geometry pass back in, again all input textures are the same:
//in lighting_pass.frag
uniform sampler2D uAlbedoMap;
uniform sampler2D uHeightMap;
uniform sampler2D uNormalMap;
uniform sampler2D uPositionMap;
uniform sampler2D uRoughnessMap;
uniform sampler2D uSpecularMap;
[...]
vec3 albedo = texture(uAlbedoMap, vTexCoord).rgb;
vec3 height = texture(uHeightMap, vTexCoord).rgb;
vec3 normal_TangentSpace = texture(uNormalMap, vTexCoord).rgb;
vec3 position_WorldSpace = texture(uPositionMap, vTexCoord).rgb;
vec3 roughness = texture(uRoughnessMap, vTexCoord).rgb;
vec3 specular = texture(uSpecularMap, vTexCoord).rgb;
i.e. the position map that was correctly exported has the color of the albedo in the lighting pass as well.
Thus, what seems to be working correctly is the texture output, but what is obviously not working is the input.
I have tried to debug this with CodeXL, and there I can see that all the images for the geometry_pass have (at some point at least) been correctly bound; they're all visible. The output textures of the framebuffer object confirm that the position texture of the geometry_pass is correct.
As far as I can see when going step by step through this, the textures are correctly bound (i.e. the uniform locations are correct).
Now the obvious question: How can I get those textures to be correctly used in the shaders?
Construction of the program
The viewer is an osgViewer::Viewer, so there is only one view.
The scene graph is as follows:
The displayCamera is the camera from the viewer. Since I'm working with Qt (5.9.1), I reset the GraphicsContext before I do anything else with the scene graph.
osg::ref_ptr<osg::Camera> camera = viewer.getCamera();
osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
traits->windowDecoration = false;
traits->x = 0;
traits->y = 0;
traits->width = 640;
traits->height = 480;
traits->doubleBuffer = true;
camera->setGraphicsContext(new osgQt::GraphicsWindowQt(traits.get()));
camera->getGraphicsContext()->getState()->setUseModelViewAndProjectionUniforms(true);
camera->getGraphicsContext()->getState()->setUseVertexAttributeAliasing(true);
camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
camera->setClearColor(osg::Vec4(0.2f, 0.2f, 0.6f, 1.0f));
camera->setViewport(new osg::Viewport(0, 0, traits->width, traits->height));
camera->setViewMatrix(osg::Matrix::identity());
I then set displayCamera to this viewer camera, create a second camera for render-to-texture (hence called rttCamera) and add it as a child of the displayCamera. I add the scene (consisting of a group node containing a geode containing a hardcoded geometry) to the rttCamera, and finally create a screen quad geometry (below a geode, which in turn is a child of a matrix transform; this matrix transform is what gets added as a child to the displayCamera).
Thus the displayCamera has two children: the rttCamera and the matrixtransform->screenQuad. The rttCamera has the child scene->geode.
Both cameras have their own render mask; the screen quad uses the displayCamera's render mask, the scene the rttCamera's.
For the scene node I read in 5 textures from file (all bitmaps), and then render the rttCamera into the framebuffer object with multiple render targets (for deferred shading).
//model is the geode in the scene group node
osg::ref_ptr<osg::StateSet> ss = model->getOrCreateStateSet();
ss->addUniform(new osg::Uniform(name.toStdString().c_str(), counter));
ss->setTextureAttributeAndModes(counter, pairNameTexture.second, osg::StateAttribute::ON | osg::StateAttribute::PROTECTED);
//camera is the rttCamera
//bufferComponent is constructed by osg::Camera::COLOR_BUFFER0+counter
//(where counter is just an integer that gets incremented)
//texture is an osg::Texture2D that is newly created
camera->attach(bufferComponent, texture);
//the textures get stored to assign them later on
gBufferTextures[name] = texture;
These MRT textures are bound to the screen quad as textures:
//ssQuad is the stateset of the screen quad geode
QString uniformName = "u" + name + "Map";
uniformName[1] = uniformName[1].toUpper();
ssQuad->addUniform(new osg::Uniform(uniformName.toStdString().c_str(), counter));
osg::ref_ptr<osg::Texture2D> tex = gBufferTextures[name];
ssQuad->setTextureAttributeAndModes(counter, gBufferTextures[name], osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);
Other setups: the render target (FBO for the rttCamera, framebuffer for the displayCamera) and lighting (off for both cameras). The rttCamera gets the same graphics context that was created for the displayCamera (i.e. the graphics context object is passed to the rttCamera and set as its own graphics context).
The texture attachments are created as follows (it makes no difference whether I use width and height or the power-of-two values for the size):
osg::ref_ptr<osg::Texture2D> Utils::createTextureAttachment(int width, int height)
{
osg::Texture2D* texture = new osg::Texture2D();
//texture->setTextureSize(width, height);
texture->setTextureSize(512, 512);
texture->setInternalFormat(GL_RGBA);
texture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
texture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);
return texture;
}
Let me know if there is more crucial-for-solving code or information missing.
So I finally found the error. My counter was an unsigned int, which apparently is not allowed. Since OSG hides so many errors from me, I didn't see that this was an issue...
After changing it to just a normal int, I now get different textures into my shader.
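In other words (a minimal sketch of the fix, reusing the names from the snippets above): a GLSL sampler2D must be fed a plain int, because sampler uniforms are set through glUniform1i under the hood, and the unsigned-int overload of osg::Uniform creates a uint uniform that does not match the sampler type.

int counter = 0; // texture unit index: must be int, not unsigned int
ssQuad->addUniform(new osg::Uniform(uniformName.toStdString().c_str(), counter));
ssQuad->setTextureAttributeAndModes(counter, gBufferTextures[name],
    osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);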

Depth Buffer seems to not work - OpenGL Shader

I'm using OpenGL with GLFW and GLEW. I'm rendering everything using shaders, but it seems like the depth buffer doesn't work.
The shaders I'm using for 3D rendering are:
Vertex Shader
#version 410
layout(location = 0) in vec3 vertex_position;
layout(location = 1) in vec2 vt;
uniform mat4 view, proj, model;
out vec2 texture_coordinates;

void main() {
    texture_coordinates = vt;
    gl_Position = proj * view * model * vec4(vertex_position, 1.0);
}
Fragment Shader
#version 410
in vec2 texture_coordinates;
uniform sampler2D basic_texture;
out vec4 frag_colour;

void main() {
    vec4 texel = texture(basic_texture, vec2(texture_coordinates.x, 1.0 - texture_coordinates.y));
    frag_colour = texel;
}
and I'm also enabling the depth buffer and cull face
glEnable(GL_DEPTH_BUFFER);
glDepthFunc(GL_NEVER);
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);
This is how it is looking:
The cube is being rendered first, because it is the first group of the mesh, and the monkey is always rendered in front; if I change the rendering order, the cube ends up in front.
Another example: you can see the ear of the monkey being rendered in front.
You're not enabling depth testing. Change glEnable(GL_DEPTH_BUFFER); into glEnable(GL_DEPTH_TEST); This error could have been detected using glGetError().
Like SurvivalMachine said, change GL_DEPTH_BUFFER to GL_DEPTH_TEST. And also make sure that in your main loop you are calling glClear(GL_DEPTH_BUFFER_BIT) before any drawing commands.
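Putting the two answers together, a minimal sketch of the corrected state setup (note that the snippet in the question also passes GL_NEVER to glDepthFunc, which would reject every fragment once the test actually runs; GL_LESS is the usual choice):

// enable the depth test proper (GL_DEPTH_BUFFER is not a valid glEnable cap)
glEnable(GL_DEPTH_TEST);
glDepthFunc(GL_LESS); // GL_NEVER would discard all fragments
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);
// and each frame, clear depth together with color before any drawing:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);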

OpenGL - displacement vertex shader

I'm working with the OpenTK wrapper and C#, trying to use displacement vertex shaders to generate 3D models.
I can run dummy shaders to render cubes and triangles, but now I want to create a 3D grid using texture data. As a first attempt I created an image (.png) with different areas in red and black.
For reference, here is the texture-loading function:
int loadImage(Bitmap image)
{
    int texID = GL.GenTexture();
    GL.BindTexture(TextureTarget.Texture2D, texID);
    System.Drawing.Imaging.BitmapData data = image.LockBits(
        new System.Drawing.Rectangle(0, 0, image.Width, image.Height),
        System.Drawing.Imaging.ImageLockMode.ReadOnly,
        System.Drawing.Imaging.PixelFormat.Format32bppArgb);
    GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba, data.Width, data.Height, 0,
        OpenTK.Graphics.OpenGL.PixelFormat.Bgra, PixelType.UnsignedByte, data.Scan0);
    image.UnlockBits(data);
    GL.GenerateMipmap(GenerateMipmapTarget.Texture2D);
    return texID;
}
As far as I read in the documentation, after loading the texture I bind both arrays (vertex positions and texcoords) and call GL.UseProgram. I assume the texture is then bound and loaded, isn't it?
GL.ActiveTexture(TextureUnit.Texture0);
GL.BindTexture(TextureTarget.Texture2D, objects[0].TextureID);
int loc = GL.GetUniformLocation(shaders[activeShader].ProgramID, "maintexture");
GL.Uniform1(loc, 0);
GL.UniformMatrix4(shaders[activeShader].GetUniform("modelview"), false, ref objects[0].ModelViewProjectionMatrix);
vertex shader:
#version 330
in vec3 vPosition;
in vec2 texcoord;
out vec2 f_texcoord;
uniform mat4 modelview;
uniform sampler2D maintexture;

void main()
{
    vec3 newPos = vPosition;
    newPos.y += texture(maintexture, texcoord).r;
    gl_Position = modelview * vec4(newPos, 1.0);
    f_texcoord = texcoord;
}
What I'm trying to achieve is that the red areas in the input texture appear as elevated vertices, and black areas produce vertices at 'ground' level, but I'm getting a perfectly flat grid and I can't understand why.
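The question ends here without an accepted fix, so the following is only a hedged debugging suggestion of mine, not from the original post. Vertex texture fetch has historically been unreliable with mipmapped lookups on some (especially integrated) GPUs, and a cheap experiment is to force mip level 0 with an explicit-LOD lookup; it is also worth double-checking that GL.UseProgram is called before GL.Uniform1, since glUniform* calls affect the currently bound program.

// hypothetical variant of the displacement line in the vertex shader:
newPos.y += textureLod(maintexture, texcoord, 0.0).r; // explicit LOD 0

If this version shows relief where texture() did not, the texture's mipmap/filter state was the culprit.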