I have a strange issue with my GLSL shader. It renders nothing (i.e. a black screen) and makes my glDrawElements call generate GL_INVALID_OPERATION. The shader in use is shown below. When I comment out the line with v = texture3D(texVol,pos).r; and replace it with v = 0.4;, it outputs what is expected (an orange-like color) and no GL errors are generated.
uniform sampler2D texBack;
uniform sampler3D texVol;
uniform vec3 texSize;
uniform vec2 winSize;
uniform float iso;
varying vec3 inCoords;
vec4 raytrace(in vec3 entryPoint, in vec3 exitPoint){
    vec3 dir = exitPoint - entryPoint;
    vec3 pos = entryPoint;
    vec4 color = vec4(0.0,0.0,0.0,0.0);
    int steps = int(2.0*length(texSize));
    dir = dir * (1.0/float(steps));
    vec3 n;
    float v, m = 0.0, avg = 0.0, avg2 = 0.0;
    for(int i = 0; i < steps && i < 2500; i++){ // march at most 2500 steps
        v = texture3D(texVol,pos).r;
        m = max(v,m);
        avg += v;
        pos += dir;
    }
    return vec4(avg/float(steps), m, 0, 1);
}
void main()
{
    vec2 texCoord = gl_FragCoord.xy/winSize;
    vec3 exitPoint = texture2D(texBack,texCoord).xyz;
    gl_FragColor = raytrace(inCoords,exitPoint);
}
I am using a VBO for rendering a color cube as the entry and exit points for my rays. They are stored in FBOs and they look OK when I render them directly to the screen.
I have tried changing to glBegin/glEnd and drawing the cube with quads, and I get the same errors.
I can't find what I am doing wrong and now I need your help. Why is my texture3D call generating GL_INVALID_OPERATION?
Note:
I have enabled both 2D and 3D textures.
Edit:
I've just uploaded the whole project to GitHub; browse https://github.com/r-englund/rGraphicsLibrary for more code.
This is tested on both an Intel HD 3000 and an NVIDIA GT 550M.
According to the OpenGL specification, glDrawElements() generates GL_INVALID_OPERATION in the following cases:
If a geometry shader is active and mode is incompatible with the input primitive type of the geometry shader in the currently installed program object.
If a non-zero buffer object name is bound to an enabled array or the element array and the buffer object's data store is currently mapped.
This means the problem has nothing to do with your fragment shader. If you don't use geometry shaders, you should fix the buffer objects accordingly.
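For reference, a common way to hit the second case is leaving a buffer mapped at draw time. A minimal sketch of the pattern to check for (the handle names vbo, vertices and indexCount are illustrative, not from the question's code):
// If a buffer is filled through glMapBuffer, it must be unmapped before drawing.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
void* ptr = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
memcpy(ptr, vertices, sizeof(vertices));
glUnmapBuffer(GL_ARRAY_BUFFER); // a still-mapped buffer makes the draw call fail
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);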
It also looks like you are not providing all the relevant information in your question.
Related
What I am using: Qt 5.11.1, MinGW 5.3, Windows 10, C++11, GPU: NVIDIA 820M (supports OpenGL 4.5)
My task: I have a non-solid (surface only) object, rendered with glDrawArrays, and I need to get a cross-section of this object by a plane. I have found the ancient OpenGL function glClipPlane, but it is not compatible with VAOs and VBOs. I've also found out that it is possible to reimplement glClipPlane via a geometry shader.
My questions/problems:
Do you know of other ways to accomplish this task?
I really don't understand how to add a geometry shader in Qt Creator; there is no "icon" for a geometry shader. I tried to add a vertex shader and rename it to .gsh or just .glsl, and I tried to use QOpenGLShaderProgram::addShaderFromSourceCode(QOpenGLShader::Geometry, const QString &source) and write the shader code in the program, but every time I get "QOpenGLShader: could not create shader" on the line that adds the geometry shader.
(Screenshot: how the shader is added to the program.)
Geometry shader:
layout (triangles) in;
// the output primitive must be points, line_strip or triangle_strip
layout (triangle_strip, max_vertices = 3) out;
void main()
{
    for (int i = 0; i < gl_in.length(); i++)
    {
        gl_Position = gl_in[i].gl_Position;
        EmitVertex();
    }
    EndPrimitive();
}
Fragment shader:
precision mediump float;
uniform highp float u_lightPower;
uniform sampler2D u_texture;
uniform highp mat4 u_viewMatrix;
varying highp vec4 v_position;
varying highp vec2 v_texCoord;
varying highp vec3 v_normal;
void main(void)
{
    vec4 resultColor = vec4(0.25, 0.25, 0.25, 0.0);
    vec4 diffMatColor = texture2D(u_texture, v_texCoord);
    vec3 eyePosition = vec3(u_viewMatrix);
    vec3 eyeVect = normalize(v_position.xyz - eyePosition);
    float dist = length(v_position.xyz - eyePosition);
    vec3 reflectLight = normalize(reflect(eyeVect, v_normal));
    float specularFactor = 1.0;
    float ambientFactor = 0.05;
    vec4 diffColor = diffMatColor * u_lightPower * dot(v_normal, -eyeVect); // * (1.0 + 0.25 * dist * dist);
    resultColor += diffColor;
    gl_FragColor = resultColor;
}
Let's sort out a few misconceptions first:
I have found the ancient OpenGL function glClipPlane, but it is not compatible with VAOs and VBOs.
That is not correct. The user-defined clip planes via glClipPlane are indeed deprecated in modern GL and removed from core profiles. But if you use a context where they still exist, you can combine them with VAOs and VBOs without any issue.
I've also found out that it is possible to reimplement glClipPlane via a geometry shader.
You don't need a geometry shader for custom clip planes.
The modern way of user-defined clip planes is calculating gl_ClipDistance for each vertex. While you can modify this value in a geometry shader, you can also directly generate it in the vertex shader. If you don't otherwise need a geometry shader, there is absolutely no reason to add it just for the clip planes.
I really don't understand how to add a geometry shader in Qt Creator; there is no "icon" for a geometry shader. I tried to add a vertex shader and rename it to .gsh or just .glsl, and I tried to use QOpenGLShaderProgram::addShaderFromSourceCode(QOpenGLShader::Geometry, const QString &source) and write the shader code in the program, but every time I get "QOpenGLShader: could not create shader" on the line that adds the geometry shader.
You first need to find out which OpenGL version you're actually using. With Qt, you can easily end up with an OpenGL ES 2.0 context (depending on how you create the context, and also on how your Qt was compiled). Your shader code is either desktop GL 2.x (GLSL 1.10/1.20) or GLES 2.0 (GLSL 1.00 ES), but it is not valid in modern core profiles of OpenGL.
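One way to check what you actually got, and to request a modern context up front, uses standard Qt API (set the default format before any window or context is created):
// Request a 3.3 core profile before creating the window
QSurfaceFormat fmt;
fmt.setVersion(3, 3);
fmt.setProfile(QSurfaceFormat::CoreProfile);
QSurfaceFormat::setDefaultFormat(fmt);
// Later, with a context current, print what was actually created
qDebug() << QOpenGLContext::currentContext()->format();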
GLES2 does not support geometry shaders at all. It also does not support gl_ClipDistance, so if you really have to use GLES2, you can try to emulate the clipping in the fragment shader, as sketched below. But the better option would be switching to a modern core profile GL context.
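For completeness, a minimal sketch of emulating one clip plane in a GLES2 fragment shader; the varying and the plane uniform are illustrative, not from the question's code:
precision mediump float;
varying vec3 v_position_world; // world-space position from the vertex shader
uniform vec4 u_clip_plane;     // plane as (normal.xyz, d)
void main(void)
{
    // discard everything on the negative side of the plane
    if (dot(vec4(v_position_world, 1.0), u_clip_plane) < 0.0)
        discard;
    gl_FragColor = vec4(1.0);  // replace with the actual shading
}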
While glClipPlane is deprecated in modern OpenGL, the concept of clipping planes is not.
In your CPU code, before you start drawing the geometry to be clipped, you must enable one of the clipping planes.
glEnable(GL_CLIP_DISTANCE0);
Once you have finished drawing, you disable it in a similar way.
glDisable(GL_CLIP_DISTANCE0);
You are guaranteed to be able to enable a minimum of 8 clipping planes.
In your vertex or geometry shader you must then tell OpenGL the signed distance of your vertex from the plane so that it knows what to clip. To be clear, you don't need a geometry shader for clipping, but it can be done there if you wish. The shader code would look something like the following:
// vertex in world space
vec4 vert_pos_world = world_matrix * vec4(vert_pos_model, 1.0);
// a horizontal plane at a specified height with normal pointing up;
// note the negated offset: dot((x,y,z,1), (0,1,0,-h)) = y - h
// could be a uniform or hardcoded
vec4 plane = vec4(0, 1, 0, -clip_height_world);
// 0 index since that's the clipping plane we enabled
gl_ClipDistance[0] = dot(vert_pos_world, plane);
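Put together, a minimal complete vertex shader along these lines might look as follows; the matrix and attribute names are placeholders, not from any particular codebase:
#version 330 core
layout(location = 0) in vec3 vert_pos_model;
uniform mat4 world_matrix;        // model-to-world transform
uniform mat4 view_proj_matrix;    // combined view and projection
uniform float clip_height_world;  // height of the horizontal clip plane
void main()
{
    vec4 vert_pos_world = world_matrix * vec4(vert_pos_model, 1.0);
    vec4 plane = vec4(0.0, 1.0, 0.0, -clip_height_world);
    gl_ClipDistance[0] = dot(vert_pos_world, plane); // clipped where negative
    gl_Position = view_proj_matrix * vert_pos_world;
}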
I am currently writing a program with OpenSceneGraph (3.4.0) and my own GLSL (version 330) shaders.
It uses multiple textures for input, then does a multiple-render-target rendering with a pre-render camera, and reads those render-target textures back in with a second camera for deferred shading. Thus both cameras have their own shaders (called geometry_pass and lighting_pass here).
My problem: when reading, both shaders see the same texture in all sampler2D uniforms.
//in geometry_pass.frag
uniform sampler2D uAlbedoMap;
uniform sampler2D uHeightMap;
uniform sampler2D uNormalMap;
uniform sampler2D uRoughnessMap;
uniform sampler2D uSpecularMap;
[...]
layout (location = 0) out vec4 albedo;
layout (location = 1) out vec4 height;
layout (location = 2) out vec4 normal;
layout (location = 3) out vec4 position;
layout (location = 4) out vec4 roughness;
layout (location = 5) out vec4 specular;
[...]
albedo = vec4(texture(uAlbedoMap, vTexCoords).rgb, 1.0);
height = vec4(texture(uHeightMap, vTexCoords).rgb, 1.0);
normal = vec4(texture(uNormalMap, vTexCoords).rgb, 1.0);
position = vec4(vPosition_WorldSpace, 1.0);
roughness = vec4(texture(uRoughnessMap, vTexCoords).rgb, 1.0);
specular = vec4(texture(uSpecularMap, vTexCoords).rgb, 1.0);
Here the output is always the color of uAlbedoMap, except for the position, which gets exported correctly.
In the lighting pass, when I read in the textures of the geometry pass, again all input textures are the same:
//in lighting_pass.frag
uniform sampler2D uAlbedoMap;
uniform sampler2D uHeightMap;
uniform sampler2D uNormalMap;
uniform sampler2D uPositionMap;
uniform sampler2D uRoughnessMap;
uniform sampler2D uSpecularMap;
[...]
vec3 albedo = texture(uAlbedoMap, vTexCoord).rgb;
vec3 height = texture(uHeightMap, vTexCoord).rgb;
vec3 normal_TangentSpace = texture(uNormalMap, vTexCoord).rgb;
vec3 position_WorldSpace = texture(uPositionMap, vTexCoord).rgb;
vec3 roughness = texture(uRoughnessMap, vTexCoord).rgb;
vec3 specular = texture(uSpecularMap, vTexCoord).rgb;
i.e. the position map that was exported correctly also has the color of the albedo map in the lighting pass.
Thus, what seems to be working correctly is the texture output, but what is obviously not working is the input.
I have tried to debug this with CodeXL, and there I can see that all the images for the geometry_pass have (at some point at least) been correctly bound; they're all visible. The output textures of the framebuffer object confirm that the position texture of the geometry_pass is correct.
As far as I can see when going step by step through this, the textures are correctly bound (i.e. the uniform locations are correct).
Now the obvious question: How can I get those textures to be correctly used in the shaders?
Construction of the program
The viewer is an osgViewer::Viewer, so there is only one view.
The scene graph is as follows:
The displayCamera is the camera from the viewer. Since I'm working with Qt (5.9.1), I reset the GraphicsContext before I do anything else with the scene graph.
osg::ref_ptr<osg::Camera> camera = viewer.getCamera();
osg::ref_ptr<osg::GraphicsContext::Traits> traits = new osg::GraphicsContext::Traits;
traits->windowDecoration = false;
traits->x = 0;
traits->y = 0;
traits->width = 640;
traits->height = 480;
traits->doubleBuffer = true;
camera->setGraphicsContext(new osgQt::GraphicsWindowQt(traits.get()));
camera->getGraphicsContext()->getState()->setUseModelViewAndProjectionUniforms(true);
camera->getGraphicsContext()->getState()->setUseVertexAttributeAliasing(true);
camera->setClearMask(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
camera->setClearColor(osg::Vec4(0.2f, 0.2f, 0.6f, 1.0f));
camera->setViewport(new osg::Viewport(0, 0, traits->width, traits->height));
camera->setViewMatrix(osg::Matrix::identity());
I then set displayCamera to this viewer camera, create a second camera for render-to-texture (hence called rttCamera) and add it as a child to the displayCamera. I add the scene (consisting of a group node containing a geode containing a hardcoded geometry) to the rttCamera and finally create a screen-quad geometry (below a geode, which in turn is a child of a matrix transform; this matrix transform is what is added as a child to the displayCamera).
Thus the displayCamera has the two children rttCamera and matrixTransform->screenQuad. The rttCamera has the child scene->geode.
Both cameras have their own render mask; the screen quad uses the displayCamera's render mask, the scene the rttCamera's render mask.
With the scene node I read in 5 textures from file (all bitmaps) and then render the rttCamera into the framebuffer object with multiple render targets (for deferred shading).
//model is the geode in the scene group node
osg::ref_ptr<osg::StateSet> ss = model->getOrCreateStateSet();
ss->addUniform(new osg::Uniform(name.toStdString().c_str(), counter));
ss->setTextureAttributeAndModes(counter, pairNameTexture.second, osg::StateAttribute::ON | osg::StateAttribute::PROTECTED);
//camera is the rttCamera
//bufferComponent is constructed by osg::Camera::COLOR_BUFFER0+counter
//(where counter is just an integer that gets incremented)
//texture is an osg::Texture2D that is newly created
camera->attach(bufferComponent, texture);
//the textures get stored to assign them later on
gBufferTextures[name] = texture;
These MRT textures are then bound to the screen quad as textures:
//ssQuad is the stateset of the screen quad geode
QString uniformName = "u" + name + "Map";
uniformName[1] = uniformName[1].toUpper();
ssQuad->addUniform(new osg::Uniform(uniformName.toStdString().c_str(), counter));
osg::ref_ptr<osg::Texture2D> tex = gBufferTextures[name];
ssQuad->setTextureAttributeAndModes(counter, gBufferTextures[name], osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);
Other setup includes the render target (FBO for the rttCamera, framebuffer for the displayCamera) and lighting (off for both cameras). The rttCamera gets the same graphics context that was created for the displayCamera (i.e. the graphics context object is passed to the rttCamera and set as its own graphics context).
The texture attachments are created as follows (where it makes no difference whether I use width and height or the power-of-two values for the size):
osg::ref_ptr<osg::Texture2D> Utils::createTextureAttachment(int width, int height)
{
    osg::Texture2D* texture = new osg::Texture2D();
    //texture->setTextureSize(width, height);
    texture->setTextureSize(512, 512);
    texture->setInternalFormat(GL_RGBA);
    texture->setFilter(osg::Texture2D::MIN_FILTER, osg::Texture2D::LINEAR);
    texture->setFilter(osg::Texture2D::MAG_FILTER, osg::Texture2D::LINEAR);
    return texture;
}
Let me know if any code or information crucial for solving this is missing.
So I finally found the error. My counter was an unsigned int, which apparently is not allowed. Since OSG hides so many of the errors from me, I didn't see that this was the issue...
After changing it to just a normal int, I now get different textures into my shader.
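For context: sampler uniforms in GL are set with glUniform1i, and osg::Uniform infers its type from the C++ argument, so an unsigned counter creates an unsigned-int uniform that does not match a sampler2D. A minimal sketch of the working pattern (the texture name is illustrative):
// counter must be a plain int: osg::Uniform(name, int) maps to glUniform1i,
// which is what sampler2D uniforms require
int counter = 0;
ssQuad->addUniform(new osg::Uniform("uAlbedoMap", counter));
ssQuad->setTextureAttributeAndModes(counter, gBufferTextures["Albedo"],
    osg::StateAttribute::ON | osg::StateAttribute::OVERRIDE);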
UPDATE: So it turns out this was due to a bug on the C side of things, causing some of the matrices to become malformed. The shaders are all fine. So if adding uniforms causes weird things to happen, my advice would be to use a debugger to check the values of ALL uniforms and make sure they are all being set correctly, for example with the sketch below.
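A hypothetical helper for that kind of check, using only standard GL introspection calls (program is assumed to be the linked program object; only mat4 uniforms are printed here):
// Dump every active uniform; extend the type check for other types as needed
GLint count = 0;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
for (GLint i = 0; i < count; ++i) {
    char name[256];
    GLint size = 0;
    GLenum type = 0;
    glGetActiveUniform(program, i, sizeof(name), NULL, &size, &type, name);
    GLint loc = glGetUniformLocation(program, name);
    if (type == GL_FLOAT_MAT4) {
        float m[16];
        glGetUniformfv(program, loc, m);
        printf("%s: first column = %f %f %f %f\n", name, m[0], m[1], m[2], m[3]);
    }
}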
So I am trying to render depth to a cube map to use as a shadow map, but when I add and use a uniform in the fragment shader, everything becomes white, as if the shader isn't being used. No warnings or errors are generated when compiling/linking the shader.
The shader program I am using to render the depth map (setting the depth simply to the fragment z position as a test) is as follows:
//vertex shader
#version 430
layout(location = 0) in vec4 vertexPositionModel;
uniform mat4 modelToWorldMatrix;
void main() {
    gl_Position = modelToWorldMatrix * vertexPositionModel;
}
//geometry shader
#version 430
layout (triangles) in;
layout (triangle_strip, max_vertices=18) out;
out vec4 fragPositionWorld;
uniform mat4 projectionMatrices[6];
void main() {
    for (int face = 0; face < 6; face++) {
        gl_Layer = face;
        for (int i = 0; i < 3; i++) {
            fragPositionWorld = gl_in[i].gl_Position;
            gl_Position = projectionMatrices[face] * fragPositionWorld;
            EmitVertex();
        }
        EndPrimitive();
    }
}
//fragment shader
#version 430
in vec4 fragPositionWorld;
void main() {
    gl_FragDepth = abs(fragPositionWorld.z);
}
The main shader samples from the cubemap and simply renders the depth as a greyscale colour:
vec3 lightDirection = fragPositionWorld - pointLight.position;
float closestDepth = texture(shadowMap, lightDirection).r;
finalColour = vec4(vec3(closestDepth), 1.0);
The scene is a small cube in a larger cubic room, and it renders as expected: dark near z = 0 and the cube projected back onto the wall (the depth map is rendered from the centre of the room).
Good: (screenshot of the correct render omitted)
I can move the small cube around and the projection projects correctly onto all the sides of the cubemap. All good so far.
The problem arises when I add a uniform to the fragment shader, i.e.:
#version 430
in vec4 fragPositionWorld;
uniform vec3 lightPos;
void main() {
    gl_FragDepth = min(lightPos.y, 0.5);
}
Everything renders as white, the same as if the shader had failed to compile:
Bad: (screenshot of the all-white render omitted)
gDEBugger reports that the uniform is set correctly (0, 4, 0), but regardless of what lightPos is, gl_FragDepth should be set to a value of at most 0.5 and appear as a shade of grey (which is what happens if I set gl_FragDepth = 0.5 directly), so I can only conclude that the fragment shader is not being used for some reason and a default one is being used instead. Unfortunately I have no idea why.
This is how it should look. It uses the same vertices/UV coordinates as are used for DX11 and OpenGL. This scene was rendered in DirectX 10.
This is how it looks in DirectX 11 and OpenGL.
I don't know how this can happen. I am using the same code on top for both DX10 and DX11, and they both handle things very similarly. Do you have an idea what the problem may be and how to fix it?
I can send code if needed.
Edit: I also tried using another texture.
Edit: I changed the transparent part of the texture to red.
Fragment Shader GLSL
#version 330 core
in vec2 UV;
in vec3 Color;
uniform sampler2D Diffuse;
// the 330 core profile has no gl_FragColor; an explicit output is required
out vec4 fragColor;
void main()
{
    fragColor = texture( Diffuse, UV );
    //fragColor = vec4(Color, 1);
}
Vertex Shader GLSL
#version 330 core
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexColor;
layout(location = 3) in vec3 vertexNormal;
uniform mat4 Projection;
uniform mat4 View;
uniform mat4 World;
out vec2 UV;
out vec3 Color;
void main()
{
    mat4 MVP = Projection * View * World;
    gl_Position = MVP * vec4(vertexPosition, 1);
    UV = vertexUV;
    Color = vertexColor;
}
Quickly said, it looks like you are using back-face culling (which is good), and the other side of your model is wound incorrectly. You can confirm that this is the problem by turning back-face culling off (OpenGL: glDisable(GL_CULL_FACE)).
The real correction (if this was the problem) is to have correct winding of the faces; usually it is counter-clockwise. Where to fix it depends on where you got the model. If you generate it yourself, correct the winding in your model-generation routine. Model files created by 3D modeling software usually have correct face winding.
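As a quick sanity check, this is the relevant OpenGL state (a sketch; the counter-clockwise convention assumed here is the GL default):
// Diagnosis: if the missing faces appear with culling off, winding is the culprit
glDisable(GL_CULL_FACE);
// Normal setup: cull back faces, counter-clockwise triangles are front-facing
glEnable(GL_CULL_FACE);
glCullFace(GL_BACK);
glFrontFace(GL_CCW);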
This is just a guess, but are you telling the system the correct number of polygons to draw? Calls like glBufferData() take the size in bytes of the data, not the number of vertices or polygons. (Maybe they should have named the parameter numBytes instead of size?) Also, the size has to cover all of the data: if you have colors, normals, texture coordinates and positions interleaved, it needs to include the size of all of that.
This is made more confusing by the fact that glDrawElements() and other calls take the number of vertices as their size argument. That argument is named count, but it's not obvious that it means the vertex count, not the polygon count. A sketch of the distinction follows.
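A minimal sketch of the byte-size vs. index-count distinction (an interleaved position+UV layout is assumed; the counts correspond to a cube):
// glBufferData takes bytes; glDrawElements takes the number of indices
struct Vertex { float pos[3]; float uv[2]; };
Vertex vertices[24];        // filled elsewhere
unsigned short indices[36]; // filled elsewhere
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);
// count is 36 indices (12 triangles), not 12 polygons, and not a byte size
glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, 0);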
I found the error.
The reason is that I forgot to set the texture sampler state to Wrap/Repeat.
It was set to clamp, so the UV coordinates were clamped to 1.
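For reference, a sketch of that fix on both APIs (the variable names are illustrative):
// Direct3D 11: create the sampler with WRAP addressing
D3D11_SAMPLER_DESC sd = {};
sd.Filter = D3D11_FILTER_MIN_MAG_MIP_LINEAR;
sd.AddressU = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressV = D3D11_TEXTURE_ADDRESS_WRAP;
sd.AddressW = D3D11_TEXTURE_ADDRESS_WRAP;
device->CreateSamplerState(&sd, &samplerState);
// OpenGL: set the wrap mode on the bound texture
glBindTexture(GL_TEXTURE_2D, textureId);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);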
A few things that you could try:
Is depth testing enabled? It seems that the inner faces of the polygons on the 'other' side are being rendered over the polygons that are closer to the viewpoint. This can happen if depth testing is disabled; enable it just in case.
Is lighting enabled? If so, turn it off. Some flashes of white seem to appear in the rotating image, which could be caused by incorrect normals...
HTH
What is the correct way of doing the following:
Render a scene into a texture using a FBO (fbo-a)
Then apply an effect using the texture (tex-a) and render this into another texture (tex-b) using the same fbo (fbo-a)
Then render this second texture, with the applied effect (tex-b) as a full screen quad.
My approach is this, but it gives me a texture filled with "noise" in the window plus the applied effect (all pixels are randomly colored red, green, blue, white or black):
I'm using one FBO, with two textures set to GL_COLOR_ATTACHMENT0 (tex-a) and GL_COLOR_ATTACHMENT1 (tex-b).
I bind my FBO and make sure it's rendered into tex-a using glDrawBuffer(GL_COLOR_ATTACHMENT0).
Then I apply the effect in a shader with tex-a bound and set as a sampler2D on texture unit 1, switch to the second color attachment (glDrawBuffer(GL_COLOR_ATTACHMENT1)), and render a full-screen quad. Everything is now rendered into tex-b.
Then I switch back to the default framebuffer (0) and use tex-b on a full-screen quad to render the result.
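In GL calls, the sequence described above is roughly the following (a sketch with illustrative handle names and draw helpers, not the actual project code):
glBindFramebuffer(GL_FRAMEBUFFER, fbo_a);
glDrawBuffer(GL_COLOR_ATTACHMENT0);    // pass 1: scene into tex-a
drawScene();
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, tex_a);   // tex-a becomes the shader input
glDrawBuffer(GL_COLOR_ATTACHMENT1);    // pass 2: effect into tex-b
drawFullscreenQuad();
glBindFramebuffer(GL_FRAMEBUFFER, 0);  // pass 3: tex-b to the screen
glBindTexture(GL_TEXTURE_2D, tex_b);
drawFullscreenQuad();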
Example of the result when applying my shader
This is the shader I'm using. I'm not aware of anything in it that could cause this, but maybe the noise is caused by an overflow?
Vertex shader
attribute vec4 a_pos;
attribute vec2 a_tex;
varying vec2 v_tex;
void main() {
    mat4 ident = mat4(1.0);
    v_tex = a_tex;
    gl_Position = ident * a_pos;
}
Fragment shader
uniform int u_mode;
uniform sampler2D u_texture;
uniform float u_exposure;
uniform float u_decay;
uniform float u_density;
uniform float u_weight;
uniform float u_light_x;
uniform float u_light_y;
const int NUM_SAMPLES = 100;
varying vec2 v_tex;
void main() {
    if (u_mode == 0) {
        vec2 pos_on_screen = vec2(u_light_x, u_light_y);
        vec2 delta_texc = vec2(v_tex.st - pos_on_screen.xy);
        vec2 texc = v_tex;
        delta_texc *= 1.0 / float(NUM_SAMPLES) * u_density;
        float illum_decay = 1.0;
        for (int i = 0; i < NUM_SAMPLES; i++) {
            texc -= delta_texc;
            vec4 sample = texture2D(u_texture, texc);
            sample *= illum_decay * u_weight;
            gl_FragColor += sample;
            illum_decay *= u_decay;
        }
        gl_FragColor *= u_exposure;
    }
    else if (u_mode == 1) {
        gl_FragColor = texture2D(u_texture, v_tex);
        gl_FragColor.a = 1.0;
    }
}
I've read this FBO article on opengl.org, where they describe a feedback loop at the bottom of the article. The description is not completely clear to me, and I'm wondering whether I'm doing exactly what they describe there.
Update 1:
Link to source code
Update 2:
When I first set gl_FragColor.rgb = vec3(0.0, 0.0, 0.0); before I start the sampling loop (with NUM_SAMPLES), it works fine. No idea why, though.
The problem is that you're not initializing gl_FragColor, and you're modifying it with the lines
gl_FragColor += sample;
and
gl_FragColor *= u_exposure;
both of which depend on the previous value of gl_FragColor. So you're getting some random junk (whatever happened to be in the register the shader compiler chose for the gl_FragColor computation) added in. This may well work fine on some driver/hardware combinations (because the compiler happened to pick a register that was always 0) and not on others.
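A minimal sketch of the fix, accumulating into a fully initialized local variable and writing gl_FragColor once at the end (using the question's own uniforms and variables):
// inside the u_mode == 0 branch, replacing the gl_FragColor accumulation
vec4 result = vec4(0.0);
float illum_decay = 1.0;
vec2 texc = v_tex;
for (int i = 0; i < NUM_SAMPLES; i++) {
    texc -= delta_texc;
    result += texture2D(u_texture, texc) * illum_decay * u_weight;
    illum_decay *= u_decay;
}
gl_FragColor = result * u_exposure;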