I'm working on a deferred shading project and I have a problem with blending all the lights into the final render.
Basically I loop over each light and render a fullscreen quad with a shader that does the lighting calculations, but the final result is just a pure white screen. If I disable blending I can see the scene fine, but it's lit by only one light.
void Render()
{
    FirstPass();
    SecondPass();
}

void FirstPass()
{
    glDisable(GL_BLEND);
    glEnable(GL_DEPTH_TEST); // note: GL_DEPTH is not a valid cap for glEnable
    glDepthMask(GL_TRUE);
    renderTarget->BindFrameBuffer();
    gbufferShader->Bind();
    glViewport(0, 0, renderTarget->Width(), renderTarget->Height());
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (size_t i = 0; i < meshes.size(); ++i)
    {
        // set uniforms and render mesh
    }
    renderTarget->UnbindFrameBuffer();
}
EDIT: I'm not rendering light volumes/geometry; I'm just calculating the final pixel colours based on the lights (point/spot/directional).
void SecondPass()
{
    glDepthMask(GL_FALSE);
    glDisable(GL_DEPTH_TEST);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE); // additive blending
    renderTarget->BindTextures();
    pointLightShader->Bind();
    glViewport(0, 0, renderTarget->Width(), renderTarget->Height());
    glClear(GL_COLOR_BUFFER_BIT); // depth writes are masked off, so clearing depth here would be a no-op
    for (size_t i = 0; i < lights.size(); ++i)
    {
        // set uniforms
        // shader does lighting calculations
        screenQuad->Render();
    }
    renderTarget->UnbindTextures();
}
I can't imagine there's anything special to do in the shader other than outputting a vec4 for the final fragment colour each time?
This is the main part of the pointLight fragment shader:
out vec4 FragColour;

void main()
{
    vec4 posData  = texture(positionTexture, TexCoord);
    vec4 normData = texture(normalTexture, TexCoord);
    vec4 diffData = texture(diffuseTexture, TexCoord);
    vec3 pos  = vec3(posData);
    vec3 norm = vec3(normData);
    vec3 diff = vec3(diffData);
    float a = posData.w;  // spare w components, currently unused
    float b = normData.w;
    float c = diffData.w;
    FragColour = vec4(shadePixel(pos, norm, diff), 1.0);
}
But yeah, basically if I use this blend the whole screen is just white.
Well I fixed it, and I feel like an idiot now :)
My opengl was set to
glClearColor(1.0, 1.0, 1.0, 1.0);
which (obviously) is pure white.
I just changed it to black background
glClearColor(0.0, 0.0, 0.0, 1.0);
And now I see everything fine. It was additively blending the lights onto the white background, and anything added to white is still white.
Related
I want to create a 2D plotter in GLSL (with SFML for window handling). I import an empty texture into the fragment shader via uniform sampler2D texture (which works). Then I try iterating over gl_TexCoord and setting the pixels to a colour.
This changes the colour to red:
vec4 pixel = texture2D(texture, gl_TexCoord[0].xy);
pixel = vec4(1.0, 0.0, 0.0, 1.0);
gl_FragColor = pixel * gl_Color;
However, this turns the whole thing red as well:
for (int j = 0; j < gl_TexCoord[0].y; j++)
    for (int i = 0; i < gl_TexCoord[0].x; i++)
    {
        vec4 pixel = texture2D(texture, vec2(i, j));
        if (i * 2 == j) // y = 2x
        {
            pixel = vec4(1.0, 0.0, 0.0, 1.0);
        }
        else
        {
            pixel = vec4(0.0, 0.0, 0.0, 1.0);
        }
        gl_FragColor = pixel * gl_Color;
    }
This is supposed to colour only the pixels whose coordinates satisfy y = 2x.
I am not sure whether I've understood the idea of texture2D correctly. If this is not the way, then how do you change the pixels of an empty texture?
texture2D(texture, vec2(u,v))
texture2D samples a texel from the texture bound to the sampler, at texture coordinate (u, v).
The sampler is an input parameter: you don't write into the texture, you write into the framebuffer.
The fragment shader's main task is simply to output a colour (RGBA) that is written to the framebuffer (a colour texture; by default, the one you see on screen).
If you want to create a texture using GLSL, render into a framebuffer that has an (empty) colour texture attached; when you draw, the texture gets filled. See the details here:
Framebuffers
The other option is to write into the texture directly from the CPU, without GLSL. See glTexSubImage2D to modify a texture already uploaded to the GPU.
In your example, it could be easier to create a 2D array with the pixel colours on the CPU and use it to fill the texture with glTexImage2D:
unsigned char* vertexColors = new unsigned char[width * height * 4];
// fill vertexColors
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, vertexColors);
I have a problem with a simple shader.
I plan to draw a triangle (just one, for a start) in colour. What I want: I calculate a colour for each vertex of the triangle, give it to the vertex shader, then pass it on to the fragment shader and get a colourful triangle. What I get is nothing: no triangle. So I decided to simplify a little: I give the parameters to the shaders but don't use them, and I get the same result. This is the C++ code:
QVector4D colors[3];
...
glBegin(GL_TRIANGLES);
invers_sh.setAttributeValue("b_color", colors[1]);
glVertex2d(0, 0);
invers_sh.setAttributeValue("b_color", colors[1]);
glVertex2d(2.0, 0);
invers_sh.setAttributeValue("b_color", colors[2]);
glVertex2d(0, 2.0);
glEnd();
Vertex shader:
in vec4 vertex;
attribute vec4 b_color;
varying vec4 color_v;
uniform mat4 qt_ModelViewProjectionMatrix;

void main(void)
{
    gl_Position = qt_ModelViewProjectionMatrix * vertex;
    color_v = b_color;
}
Fragment shader:
varying vec4 color_v;

void main(void)
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 0.0);
}
I figured out that I get my red triangle if I comment out all the setAttributeValue calls in the C++ code and the line
color_v = b_color;
in the vertex shader.
Help me.
Can you test the following: replace the line
invers_sh.setAttributeValue("b_color", colors[0]);
with
invers_sh.setAttributeValue(b_colorLocation, colors[1]);
Declare a global for the location:
int b_colorLocation;
and add this where you compile your shaders, to get the location of b_color:
b_colorLocation = invers_sh.attributeLocation("b_color");
Edit: This turned out to be correct; hopefully it still helps others with similar issues.
Is there a piece I'm missing in setting up the depth-testing pipeline in OpenGL ES 2.0 (using EGL)?
I've found many questions about this, but all were solved either by correctly setting up the depth buffer at context initialization:
EGLint egl_attributes[] = {
    ...
    EGL_DEPTH_SIZE, 16,
    ...
    EGL_NONE };

if (!eglChooseConfig(
        m_eglDisplay, egl_attributes, &m_eglConfig, 1, &numConfigs)) {
    cerr << "Failed to set EGL configuration" << endl;
    return EGL_FALSE;
}
or by properly enabling and clearing the depth buffer, and doing so after the context has been initialized:
// Set the viewport
glViewport(0, 0, m_display->width(), m_display->height());
// Enable culling and depth testing
glEnable(GL_CULL_FACE);
glDepthFunc(GL_LEQUAL);
glEnable(GL_DEPTH_TEST);
// Clear the color and depth buffers
glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
glClearDepthf(1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
// Draw elements
m_Program->drawAll();
If I comment out glEnable(GL_DEPTH_TEST) I get a scene, but without the depth-test occlusion I would love to have.
In a shader, outputting the depth (z/w of gl_Position) visually works as expected (the values are in the range [0, 1]):
// Vertex Shader
uniform mat4 u_matrix;
attribute vec4 a_position;
varying float v_depth;

void main() {
    vec4 v_position = u_matrix * a_position;
    v_depth = v_position.z / v_position.w;
    gl_Position = v_position;
}

// Fragment shader
varying float v_depth;

void main() {
    gl_FragColor = vec4((v_depth < 0.0) ? 1.0 : 0.0,
                        v_depth,
                        (v_depth > 1.0) ? 1.0 : 0.0,
                        1.0);
}
All objects are a shade of pure green, darker when nearer and brighter when further, as expected. Sadly, some further (brighter) objects are drawn over nearer (darker) objects.
Any ideas what I'm missing? (If nothing else I hope this summarises some issues others have been having).
It appears I wasn't missing anything. I had a rogue polygon (in a different shader program) that occluded everything when depth testing was enabled. The above is a correct setup.
I implemented a new rendering pipeline in my engine and rendering is broken now. When I directly draw a texture of the G-Buffer to the screen, it shows up correctly, so the G-Buffer is fine. But somehow the lighting pass causes trouble: even if I don't use its resulting texture but try to display the albedo from the G-Buffer after the lighting pass, it shows a solid gray color.
I can't explain this behavior, and the strange thing is that there are no OpenGL errors at any point.
Vertex Shader to draw a fullscreen quad.
#version 330

in vec4 vertex;
out vec2 coord;

void main()
{
    coord = vertex.xy;
    gl_Position = vertex * 2.0 - 1.0;
}
Fragment Shader for lighting.
#version 330

in vec2 coord;
out vec3 image;

uniform int type = 0;
uniform sampler2D positions;
uniform sampler2D normals;
uniform vec3 light;
uniform vec3 color;
uniform float radius;
uniform float intensity = 1.0;

void main()
{
    if(type == 0) // directional light
    {
        vec3 normal = texture2D(normals, coord).xyz;
        float fraction = max(dot(normalize(light), normal) / 2.0 + 0.5, 0);
        image = intensity * color * fraction;
    }
    else if(type == 1) // point light
    {
        vec3 pixel = texture2D(positions, coord).xyz;
        vec3 normal = texture2D(normals, coord).xyz;
        float dist = max(distance(pixel, light), 1);
        float magnitude = 1 / pow(dist / radius + 1, 2);
        float cutoff = 0.4;
        float attenuation = clamp((magnitude - cutoff) / (1 - cutoff), 0, 1);
        float fraction = clamp(dot(normalize(light - pixel), normal), -1, 1);
        image = intensity * color * attenuation * max(fraction, 0.2);
    }
}
Targets and samplers for the lighting pass. Texture IDs are mapped to attachments and shader locations, respectively.
unordered_map<GLenum, GLuint> targets;
targets.insert(make_pair(GL_COLOR_ATTACHMENT2, ...)); // light
targets.insert(make_pair(GL_DEPTH_STENCIL_ATTACHMENT, ...)); // depth and stencil
unordered_map<string, GLuint> samplers;
samplers.insert(make_pair("positions", ...)); // positions from G-Buffer
samplers.insert(make_pair("normals", ...)); // normals from G-Buffer
Draw function for lighting pass.
void DrawLights(unordered_map<string, GLuint> Samplers, GLuint Program)
{
    auto lis = Entity->Get<Light>();
    glClear(GL_COLOR_BUFFER_BIT);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);
    glUseProgram(Program);
    int n = 0;
    for(auto i : Samplers)
    {
        glActiveTexture(GL_TEXTURE0 + n);
        glBindTexture(GL_TEXTURE_2D, i.second);
        glUniform1i(glGetUniformLocation(Program, i.first.c_str()), n);
        n++;
    }
    mat4 view = Entity->Get<Camera>(*Global->Get<unsigned int>("camera"))->View;
    for(auto i : lis)
    {
        int type = i.second->Type == Light::DIRECTIONAL ? 0 : 1;
        vec3 pos = vec3(view * vec4(Entity->Get<Form>(i.first)->Position(), !type ? 0 : 1));
        glUniform1i(glGetUniformLocation(Program, "type"), type);
        glUniform3f(glGetUniformLocation(Program, "light"), pos.x, pos.y, pos.z);
        glUniform3f(glGetUniformLocation(Program, "color"), i.second->Color.x, i.second->Color.y, i.second->Color.z);
        glUniform1f(glGetUniformLocation(Program, "radius"), i.second->Radius);
        glUniform1f(glGetUniformLocation(Program, "intensity"), i.second->Intensity);
        glBegin(GL_QUADS);
        glVertex2i(0, 0);
        glVertex2i(1, 0);
        glVertex2i(1, 1);
        glVertex2i(0, 1);
        glEnd();
    }
    glDisable(GL_BLEND);
    glActiveTexture(GL_TEXTURE0);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindTexture(GL_TEXTURE_2D, 0);
}
I found the error, and it was such a stupid one. The old rendering pipeline bound the correct framebuffer before calling the draw function of each pass. The new one doesn't, so each draw function has to do that itself. I therefore wanted to update all the draw functions, but I missed the draw function of the lighting pass.
So the framebuffer of the G-Buffer was still bound, and the lighting pass rendered into the G-Buffer's attachments instead of its own targets.
Thanks to you guys anyway; you had no chance of finding that error, since I hadn't posted my complete pipeline system.
I am learning volume rendering using the ray-casting algorithm. I found a good demo and tutorial here, but the problem is that I have an ATI graphics card instead of an NVIDIA one, which means I can't use the Cg shader in the demo, so I want to convert the Cg shader to GLSL. I have gone through the OpenGL Red Book (7th edition), but I'm not familiar with GLSL or Cg.
Can anyone help me convert the Cg shader in the demo to GLSL? Or is there any material for the simplest demo of volume rendering using ray casting (in GLSL, of course)?
Here is the Cg shader of the demo; it works on my friend's NVIDIA graphics card. What confuses me most is that I don't know how to translate the entry part of Cg to GLSL, for example:
struct vertex_fragment
{
    float4 Position : POSITION; // For the rasterizer
    float4 TexCoord : TEXCOORD0;
    float4 Color    : TEXCOORD1;
    float4 Pos      : TEXCOORD2;
};
What's more, I can write a program that binds two texture objects to two texture units for the shader, provided that I assign two texture coordinates when drawing the screen, for example:
glMultiTexCoord2f(GL_TEXTURE0, 1.0, 0.0);
glMultiTexCoord2f(GL_TEXTURE1, 1.0, 0.0);
In the demo, the program binds two textures (a 2D one for the backface_buffer and a 3D one for the volume texture), but with only one texture unit, like glMultiTexCoord3f(GL_TEXTURE1, x, y, z). I think the GL_TEXTURE1 unit is for the volume texture, but which texture unit is for the backface_buffer? As far as I know, in order to bind a texture object in a shader I must get a texture unit to bind it to, for example:
glLinkProgram(p);
texloc = glGetUniformLocation(p, "tex");
volume_texloc = glGetUniformLocation(p, "volume_tex");
stepsizeloc = glGetUniformLocation(p, "stepsize");
glUseProgram(p);
glUniform1i(texloc, 0);
glUniform1i(volume_texloc, 1);
glUniform1f(stepsizeloc, stepsize);
//When rendering an object with this program.
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, backface_buffer);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_3D, volume_texture);
The program compiles and links fine, but I get -1 for all three locations (texloc, volume_texloc and stepsizeloc). I know they may have been optimized out.
Can anyone help me translate the Cg shader to a GLSL shader?
Edit: if you are interested in a modern OpenGL API implementation (C++ source code) with GLSL, see Volume_Rendering_Using_GLSL.
Problem solved. The GLSL version of the demo:
vertex shader
void main()
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    //gl_FrontColor = gl_Color;
    gl_TexCoord[2] = gl_Position;
    gl_TexCoord[0] = gl_MultiTexCoord1;
    gl_TexCoord[1] = gl_Color;
}
fragment shader
uniform sampler2D tex;
uniform sampler3D volume_tex;
uniform float stepsize;

void main()
{
    vec2 texc = ((gl_TexCoord[2].xy / gl_TexCoord[2].w) + 1.0) / 2.0;
    vec4 start = gl_TexCoord[0];
    vec4 back_position = texture2D(tex, texc);
    vec3 dir = vec3(0.0);
    dir.x = back_position.x - start.x;
    dir.y = back_position.y - start.y;
    dir.z = back_position.z - start.z;
    float len = length(dir.xyz); // the length from front to back is calculated and used to terminate the ray
    vec3 norm_dir = normalize(dir);
    float delta = stepsize;
    vec3 delta_dir = norm_dir * delta;
    float delta_dir_len = length(delta_dir);
    vec3 vect = start.xyz;
    vec4 col_acc = vec4(0.0); // the dest color
    float alpha_acc = 0.0;    // the dest alpha for blending
    float length_acc = 0.0;
    vec4 color_sample;  // the src color
    float alpha_sample; // the src alpha
    for(int i = 0; i < 450; i++)
    {
        color_sample = texture3D(volume_tex, vect);
        // why multiply by the stepsize?
        alpha_sample = color_sample.a * stepsize;
        // why multiply by 3?
        col_acc += (1.0 - alpha_acc) * color_sample * alpha_sample * 3.0;
        alpha_acc += alpha_sample;
        vect += delta_dir;
        length_acc += delta_dir_len;
        if(length_acc >= len || alpha_acc > 1.0)
            break; // terminate if opacity > 1 or the ray is outside the volume
    }
    gl_FragColor = col_acc;
}
If you look at the original Cg shader, there is only a little difference between Cg and GLSL. The most difficult part of translating the demo to a GLSL version is the Cg runtime functions in the OpenGL code, such as:
param = cgGetNamedParameter(program, par);
cgGLSetTextureParameter(param, tex);
cgGLEnableTextureParameter(param);
These encapsulate the texture-unit and multitexture activation (using glActiveTexture) and deactivation, which is very important in this demo since it uses the fixed pipeline as well as the programmable pipeline. Here is the key segment changed in the function raycasting_pass() of main.cpp in Peter Trier's GPU ray-casting tutorial:
void raycasting_pass()
{
    // specify which texture to bind
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, final_image, 0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glUseProgram(p);
    glUniform1f(stepsizeIndex, stepsize);
    glActiveTexture(GL_TEXTURE1);
    glEnable(GL_TEXTURE_3D);
    glBindTexture(GL_TEXTURE_3D, volume_texture);
    glUniform1i(volume_tex, 1);
    glActiveTexture(GL_TEXTURE0);
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, backface_buffer);
    glUniform1i(tex, 0);
    glUseProgram(p);
    glEnable(GL_CULL_FACE);
    glCullFace(GL_BACK);
    drawQuads(1.0, 1.0, 1.0); // Draw a cube
    glDisable(GL_CULL_FACE);
    glUseProgram(0);
    // fall back to a single active texture unit for the fixed pipeline
    glActiveTexture(GL_TEXTURE1);
    glDisable(GL_TEXTURE_3D);
    glActiveTexture(GL_TEXTURE0);
}
That's it.