OpenGL (2.1) shader program not running, using color from glColor instead - C++

I am working on an OpenGL 2.1 application and trying to shift the code from the fixed-function to the programmable pipeline piece by piece (i.e., replacing calls to deprecated functions such as glVertex* and glColor* with buffers and shaders).
The problem: the color I set for each fragment inside the fragment shader is not being used. Instead, the geometry is automatically painted with the color from the last glColor3f call, which happens somewhere else in the codebase, completely unrelated to the piece of code at hand.
My vertex/fragment shaders are pretty straightforward:
#version 120
attribute vec3 position;
attribute vec3 color;
varying vec3 Color;

void main()
{
    mat4 scaleMat = mat4(0.1, 0,   0,   0,
                         0,   0.1, 0,   0,
                         0,   0,   0.1, 0,
                         0,   0,   0,   1);
    Color = color;
    //gl_Position = scaleMat * vec4(position, 1.0); //scaling doesn't have any effect!
    gl_Position = vec4(position, 1.0);
}
and
#version 120
varying vec3 Color;

void main()
{
    //gl_FragColor = vec4(0.3, 0.3, 1.0, 1.0); //constant color does not work.
    gl_FragColor = vec4(Color, 1.0); //interpolated color doesn't work either!
}
One more thing: you may notice I applied a scaling matrix in the vertex shader to make the geometry 10x smaller, but it has no effect either! That makes me wonder whether the shader is even 'doing its thing' at all, because if I write the shader incorrectly I do get compile errors.
I'm using Qt; below is the code used for setting up the buffers and the shader program.
void myObject::init()
{
    //Shader related code. 'shader' is a QGLShaderProgram object
    shader.addShaderFromSourceFile(QGLShader::Vertex,
                                   ":/shaders/mainshader.vert");
    shader.addShaderFromSourceFile(QGLShader::Fragment,
                                   ":/shaders/mainshader.frag");
    shader.link();
    shader.bind();
    vertexLocation = shader.attributeLocation("position");
    colorLocation  = shader.attributeLocation("color");
    .......
    //bathBuffer and colorBuffer are QGLBuffer objects,
    //triList is a vector<vec3> containing triangle data
    bathBuffer.create();
    bathBuffer.bind();
    bathBuffer.setUsagePattern(QGLBuffer::StaticDraw);
    bathBuffer.allocate(sizeof(triList));
    bathBuffer.write(0, triList.data(), sizeof(triList));

    for (int i = 0; i < triList.size(); ++i)
    {
        colorList.push_back(Vec(1, 1, 0));
    }

    colorBuffer.create();
    colorBuffer.bind();
    colorBuffer.setUsagePattern(QGLBuffer::StaticDraw);
    colorBuffer.allocate(sizeof(colorList));
    colorBuffer.write(0, colorList.data(), sizeof(colorList));
    //copies both color and vert data
}
and here's the draw() function:
void myObject::draw()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    shader.bind();

    bathBuffer.bind();
    shader.enableAttributeArray(vertexLocation);
    shader.setAttributeArray(vertexLocation, GL_DOUBLE, triList.data(), 3, 0);

    colorBuffer.bind();
    shader.enableAttributeArray(colorLocation);
    shader.setAttributeArray(colorLocation, GL_DOUBLE, colorList.data(), 3, 0);

    glDrawArrays(GL_TRIANGLES, 0, triList.size());

    shader.disableAttributeArray(vertexLocation);
    shader.disableAttributeArray(colorLocation);
    //shader.release();
}
I even tried a GL profiler to figure out what might be going wrong beneath all the abstraction:
glCreateProgram() returns 1, but the subsequent glUseProgram call fails with an error saying the program handle does not refer to an object generated by OpenGL.
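A minimal sanity check I can add right after linking looks something like this (a sketch using the QGLShaderProgram API):
shader.bind();
if (!shader.isLinked())
    qDebug() << "link failed:" << shader.log();
// If this prints, the program was created in a different GL context
// than the one that is current here.
if (glIsProgram(shader.programId()) == GL_FALSE)
    qDebug() << "programId" << shader.programId() << "is not a program object in this context";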
Then I thought it might be some Qt thing where it creates a buggy context from the start, so I added this code to my main.cpp as well, just to make sure we get a correct context:
QGLFormat f;
f.setOption(QGL::DeprecatedFunctions | QGL::DepthBuffer | QGL::DoubleBuffer |
            QGL::Rgba | QGL::AlphaChannel);
f.setVersion(2, 1);
QGLContext ogl_context(f);
ogl_context.create();
ogl_context.makeCurrent();
....to no avail.
tl;dr:
I'm not sure whether the shaders are running at all.
Is there a way to ignore/'switch off' glColor calls so that I can paint the geometry from within the shaders themselves?
I have run out of things to try. I can provide more info if needed. Any help would be appreciated.
Thanks.
EDIT: I have added the whole class relevant to this question: bathymetry.h, bathymetry.cpp. The rest of the code is a custom widget class (not doing any OpenGL work) that handles the Qt side, plus a main.cpp.

Related

gl_PointCoord compiles and links, but crashes at runtime

I successfully wrote a standard, basic transform-feedback particle system with point sprites: no flickering, the particles update from one buffer into the next, the result is rendered, and the output buffer becomes the input buffer on the next iteration. All GPU-side, standard transform feedback.
Wonderful! ONE BIG PROBLEM: it only works if I don't use gl_PointCoord. Using a flat color for my point sprites works fine, but I need gl_PointCoord to do anything meaningful. All my shaders, whether or not they use gl_PointCoord, compile and link just fine. However, at runtime, if a shader uses gl_PointCoord (whether or not gl_PointCoord is actually in the execution path), the program crashes.
I explicitly glEnable(GL_POINT_SPRITE); this has no effect. Omitting gl_PointCoord, setting glPointSize(100.0f), and outputting vec4(1.0, 1.0, 1.0, 1.0), the particle system renders just fine as large white blocky squares (as expected). But using gl_PointCoord in any way (as a standard texture-lookup coordinate, as a procedural color, or anything else) crashes my shaders at runtime, after they successfully compile and link. I simply don't understand why. Everything passes glShaderSource, glCompileShader, glAttachShader, and glLinkProgram. I'm compiling my shaders as #version 430 and 440, and I even tried 300 es. All compile and link, and I checked the status of every compile and link. All good.
I'm on a high-end Microsoft Surface Book (NVIDIA GeForce GPU) with Visual Studio 2015, and I made sure all my drivers are up to date. Unfortunately, with point sprites I don't have billboard vertices from the vertex shader to interpolate into the fragment shader as texture coordinates. gl_FragCoord doesn't work either (as I would expect for point sprites). Does anyone know how to solve this, or another technique for getting texture coordinates for point sprites?
glBeginTransformFeedback(GL_POINTS);//if my fragment shader uses gl_PointCoord, it hard crashes here.
When replying, please understand that I'm very experienced in writing shaders (vertex, fragment/pixel, tessellation control, tessellation evaluation, and geometry, in both GLSL and HLSL), but I don't claim to know everything. I could have forgotten something simple; I just have no idea what that could be. My best guess is that it's a state I don't have enabled. As far as transform feedback goes, I did set up the varyings correctly via glTransformFeedbackVaryings.
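For reference, that setup amounts to something like this (a sketch; m_Program stands in for the GL program handle, and the varying names are the ones from the vertex shader below):
const char* varyings[] = { "Position", "Velocity", "StartTime", "Transp" };
glTransformFeedbackVaryings(m_Program, 4, varyings, GL_SEPARATE_ATTRIBS);
glLinkProgram(m_Program); // the varying setup only takes effect at link time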
C++:
void Render(void* pData)
{
    auto pOwner = static_cast<CPointSpriteSystem*>(pData);
    glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
    glEnable(GL_POINT_SPRITE);
    glEnable(GL_POINT_SMOOTH);
    //glPointParameteri(GL_POINT_SPRITE_COORD_ORIGIN, GL_LOWER_LEFT);
    m_Shader.Activate();
    auto num_particles = pOwner->m_NumPointSprites;
    FeedbackIndex = 0;
    while (true)
    {
        m_Shader.SetSubroutine(GL_VERTEX_SHADER, "RenderPass",
                               vssubroutines[FeedbackIndex],
                               vsprevSubLoc[FeedbackIndex],
                               vsupdateSub[FeedbackIndex]);
        m_Shader.SetSubroutine(GL_FRAGMENT_SHADER, "RenderPixelPass",
                               pssubroutines[0],
                               psprevSubLoc[0],
                               psrenderSub[0]);
        if (!FeedbackIndex)
        {
            glEnable(GL_RASTERIZER_DISCARD);
            glBindVertexArray(m_vao[bufferIndex]);
            glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, m_Feedback[bufferIndex]);
            glBeginTransformFeedback(GL_POINTS); //if the feedback fragment shader uses gl_PointCoord, it always hard-crashes here
            glDrawArrays(GL_POINTS, 0, num_particles);
            glEndTransformFeedback();
            glFlush();
        }
        else
        {
            m_Shader.SetSubroutine(GL_FRAGMENT_SHADER, "RenderPixelPass",
                                   pssubroutines[(int)pOwner->m_ParticleType],
                                   psprevSubLoc[(int)pOwner->m_ParticleType],
                                   psrenderSub[(int)pOwner->m_ParticleType]);
            glPointSize(100.0f);
            glDisable(GL_RASTERIZER_DISCARD);
            glDrawTransformFeedback(GL_POINTS, m_Feedback[bufferIndex]);
            bufferIndex = 1 - bufferIndex;
            break;
        }
        FeedbackIndex = 1 - FeedbackIndex;
    }
}
VS feedback:
#version 310 es
subroutine void RenderPassType();
subroutine uniform RenderPassType RenderPass;

layout(location=0) in vec3 VertexPosition;
layout(location=1) in vec3 VertexVelocity;
layout(location=2) in float VertexStartTime;
layout(location=3) in vec3 VertexInitialVelocity;

out vec3 Position;
out vec3 Velocity;
out float StartTime;
out float Transp;

uniform float g_fCurSeconds;
uniform float g_fElapsedSeconds;
uniform float Time;
uniform float H;
uniform vec3 Accel;

#ifdef USE_VIEW_BLOCK
layout(std140) uniform view_block
{
    mat4 g_mView,
         g_mInvView,
         g_mPrevView,
         g_mPrevInvView,
         g_mProj,
         g_mInvProj;
};
uniform mat4 g_mWorld;
#endif

subroutine(RenderPassType) void UpdateSphere()
{
    Position  = VertexPosition + VertexVelocity * g_fElapsedSeconds;
    Velocity  = VertexVelocity;
    StartTime = VertexStartTime;
}

subroutine(RenderPassType) void Render()
{
    gl_Position = g_mProj * g_mInvView * vec4(VertexPosition, 1.0);
}

void main()
{
    RenderPass();
}
PS feedback:
#version 310 es //version 430 and 440 give the same results
subroutine void RenderPixelType();
subroutine uniform RenderPixelType RenderPixelPass;

uniform sampler2D tex0;
layout(location=0) out vec4 g_FragColor;

subroutine(RenderPixelType) void Basic()
{
    g_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}

subroutine(RenderPixelType) void ProceduralSphere()
{
#if 1
    vec2 coord = gl_PointCoord; //at runtime: BOOM!
    coord = coord * 2.0 - 1.0;
    float len = length(coord);
    if (len > 1.0) discard;
    g_FragColor = vec4(1.0 - len, 1.0 - len, 1.0 - len, 1.0);
#else
    g_FragColor = vec4(1.0, 1.0, 1.0, 1.0); //always works
#endif
}

subroutine(RenderPixelType) void StandardImage()
{
    g_FragColor = texture2D(tex0, gl_PointCoord); //boom!!
    g_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}

void main()
{
    RenderPixelPass();
}
I solved the problem! The issue was that I never wrote a value to Transp (declared as out float Transp; in the vertex shader). I casually assumed I didn't have to. But as I started to trim some fat, the moment I wrote out a generic float (not actually used by any later shader stage: Transp = 0.0;) and compiled as #version 430, everything started to work as originally expected: little white spheres.
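In other words, the fix amounts to writing every declared output in the update pass, e.g. (a sketch based on the vertex shader above):
subroutine(RenderPassType) void UpdateSphere()
{
    Position  = VertexPosition + VertexVelocity * g_fElapsedSeconds;
    Velocity  = VertexVelocity;
    StartTime = VertexStartTime;
    Transp    = 0.0; // write *every* captured varying, even if no later stage reads it
}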

How to pass-through fixed-function material and lighting to fragment shader?

I am adding a vertex and fragment shader to my OpenGL 2.1/GLSL 1.2 application.
Vertex shader:
#version 120
void main(void)
{
    gl_Position = ftransform();
    gl_FrontColor = gl_Color;
}
Fragment shader:
#version 120
void main(void)
{
    if (/* test some condition */) {
        discard;
    } else {
        gl_FragColor = gl_Color;
    }
}
The problem is that if the condition fails, gl_FragColor just gets set to whatever the last call to gl.glColor3f() was in my fixed-function method.
Instead, I want to pass through the normal color, derived from the material and lighting parameters. For example, this:
gl.glLightfv(GLLightingFunc.GL_LIGHT0, GLLightingFunc.GL_AMBIENT, lightingAmbient, 0);
gl.glLightfv(GLLightingFunc.GL_LIGHT0, GLLightingFunc.GL_DIFFUSE, lightingDiffuse, 0);
gl.glLightfv(GLLightingFunc.GL_LIGHT0, GLLightingFunc.GL_SPECULAR, lightingSpecular, 0);
gl.glLightfv(GLLightingFunc.GL_LIGHT0, GLLightingFunc.GL_POSITION, directionalLightFront, 0);
gl.glMaterialfv(GL.GL_FRONT, GLLightingFunc.GL_AMBIENT, materialAmbient, 0);
gl.glMaterialfv(GL.GL_FRONT, GLLightingFunc.GL_DIFFUSE, materialDiffuse, 0);
gl.glMaterialfv(GL.GL_FRONT, GLLightingFunc.GL_SPECULAR, materialSpecular, 0);
gl.glMaterialfv(GL.GL_FRONT, GLLightingFunc.GL_EMISSION, materialEmissive, 0);
gl.glMaterialf(GL.GL_FRONT, GLLightingFunc.GL_SHININESS, shininess);
Is there a way to assign this value to gl_FragColor? Or do I need to implement the lighting from scratch in the fragment shader?
(Note, I'm not trying to do any kind of advanced lighting techniques. I'm using the shaders for clipping purposes and want to just use standard lighting methods.)
Unfortunately there is no way to do exactly what you want: the fixed-function pipeline and vertex/fragment shaders are mutually exclusive, and each primitive you render must use one or the other.
Thus, you must do the lighting computations yourself in the shader. This tutorial provides all the code you would need to do so. Since it does the lighting calculation for every pixel instead of every vertex, the results actually look far better!
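To sketch the idea (deliberately incomplete: a single directional light with ambient, diffuse, and emission only, using the GLSL 1.20 built-in material/light uniforms so your existing glLightfv/glMaterialfv calls still feed it):

// Vertex shader (GLSL 1.20)
#version 120
varying vec3 N, L;
void main(void)
{
    gl_Position = ftransform();
    N = normalize(gl_NormalMatrix * gl_Normal);
    L = normalize(gl_LightSource[0].position.xyz); // directional light
}

// Fragment shader (GLSL 1.20)
#version 120
varying vec3 N, L;
void main(void)
{
    if (/* test some condition */) {
        discard;
    }
    float d = max(dot(normalize(N), normalize(L)), 0.0);
    gl_FragColor = gl_FrontLightProduct[0].ambient
                 + gl_FrontLightProduct[0].diffuse * d
                 + gl_FrontMaterial.emission;
}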

Changing color of fragment

I have written a fragment shader with which I would like to change the color of a fragment. For example, if the color it receives is black, it should be changed to blue.
This is the shader that I am using:
uniform sampler2D mytex;
layout (pixel_center_integer) in vec4 gl_FragCoord;
uniform sampler2D texture1;

void main ()
{
    ivec2 screenpos = ivec2(gl_FragCoord.xy);
    vec4 color = texelFetch(mytex, screenpos, 0);
    if (color == vec4(0.0, 0.0, 0.0, 1.0)) {
        color = (0.0, 0.0, 0.0, 0.0);
    }
    gl_FragColor = texture2D(texture1, gl_TexCoord[0].st);
}
And here is the log that I am getting from it:
WARNING: -1:65535: 'GL_ARB_explicit_attrib_location' : extension is not available in current GLSL version
WARNING: 0:1: 'texelFetch' : function is not available in current GLSL version
I am aware of the warnings, but shouldn't it compile anyway?
The shader is not doing what I would like it to do; can someone explain why?
For one thing, you are using functions that are not available in your GLSL implementation. The result of calling these will be undefined.
However, the kicker here is that gl_FragColor has absolutely NOTHING to do with the value of color in this shader. So even if your texelFetch (...) logic actually did work correctly, changing the value of color does nothing to the final output. A smart compiler will see this as a no-op and effectively strip your shader down to this:
uniform sampler2D texture1;

void main ()
{
    gl_FragColor = texture2D(texture1, gl_TexCoord[0].st);
}
If that were not enough, texelFetch (...) is completely unnecessary in this shader. If you want to look up the texel that corresponds to the current fragment, and the texture has the same dimensions as the viewport you are drawing into, you can simply divide gl_FragCoord.xy by the texture's dimensions and do an ordinary texture2D (...) lookup. This is because the default behaviour in GLSL is for gl_FragCoord to supply the coordinate of the fragment's center (x+0.5, y+0.5), which is also the center of the corresponding texel in your texture (if it is the same resolution), so you can do a traditional texture lookup without worrying that texture filtering will alter the sampled result.
texelFetch (...) lets you fetch an explicit texel in a texture without using normalized coordinates, it is sort of like a "grownup" rectangle texture :) It is generally useful if you are using a multisample texture and want a specific sample, or if you want to bypass texture filtering (which includes mipmap level selection). In this case, it is not needed at all.
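To make the difference concrete (a sketch; assumes mytex is a regular non-multisample 2D texture and GLSL >= 1.30, which provides texelFetch and textureSize):
ivec2 ts = textureSize(mytex, 0);                       // texel dimensions of mip level 0
vec4 a = texelFetch(mytex, ivec2(gl_FragCoord.xy), 0);  // integer texel address, no filtering
vec4 b = texture2D(mytex, gl_FragCoord.xy / vec2(ts));  // normalized coords, filtered lookup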
This is probably what you really want (OpenGL 3.2):
#version 150
uniform sampler2D mytex;
uniform sampler2D texture1;

layout (location=0) out vec4 frag_color;
layout (location=1) out vec4 mytex_color;

void main ()
{
    mytex_color = texture2D(mytex, gl_FragCoord.xy / vec2(textureSize(mytex, 0)));
    // This is not black->blue like you explained in your question...
    // ... it is generally opaque->transparent, assuming 4th component = alpha
    if (mytex_color == vec4(0.0, 0.0, 0.0, 1.0)) {
        mytex_color = vec4(0.0);
    }
    frag_color = texture2D(texture1, gl_TexCoord[0].st);
}
In older GLSL versions, you will have to use glBindFragDataLocation (...) and set the data locations manually or use gl_FragData[n] instead of out variables.
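For the manual route, the bindings must be in place before the program is linked, roughly (a sketch):
// Map the fragment shader's out variables to color numbers, then (re)link.
glBindFragDataLocation(program, 0, "frag_color");
glBindFragDataLocation(program, 1, "mytex_color");
glLinkProgram(program);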
Now the real problem here is that you seem to be wanting to change the color of the texture you are sampling from. That will not work, at best you will have to use two fragment data outputs. Writing into the same texture you are sampling from can be done under some very controlled circumstances, but generally what you would do is ping-pong between textures. In other words, you would fetch from one texture, write to another texture and all subsequent render passes that reference to the original texture should be swapped with the one you just wrote to.
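A typical ping-pong loop looks something like this (a sketch; tex/fbo are two same-sized textures, each attached to its own framebuffer, and drawFullscreenQuad() is a hypothetical helper):
GLuint tex[2], fbo[2]; // setup omitted
int src = 0, dst = 1;
for (int pass = 0; pass < numPasses; ++pass) {
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]); // render into one texture...
    glBindTexture(GL_TEXTURE_2D, tex[src]);      // ...while sampling from the other
    drawFullscreenQuad();
    std::swap(src, dst); // what was just written becomes the next pass's input
}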
See "Fragment Data Location" for more information on Multiple Render Target drawing.

OpenGL issue: cannot render geometry on screen

My program was meant to draw a simple textured cube on screen; however, I cannot get it to render anything other than the clear color. This is my draw function:
void testRender() {
    glClearColor(0.25f, 0.35f, 0.15f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glUniformMatrix4fv(resources.uniforms.m4ModelViewProjection, 1, GL_FALSE,
                       (const GLfloat*)resources.modelviewProjection.modelViewProjection);
    glEnableVertexAttribArray(resources.attributes.vTexCoord);
    glEnableVertexAttribArray(resources.attributes.vVertex);
    //deal with vTexCoord first
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, resources.hiBuffer);
    glBindBuffer(GL_ARRAY_BUFFER, resources.htcBuffer);
    glVertexAttribPointer(resources.attributes.vTexCoord, 2, GL_FLOAT, GL_FALSE,
                          sizeof(GLfloat) * 2, (void*)0);
    //now the other one
    glBindBuffer(GL_ARRAY_BUFFER, resources.hvBuffer);
    glVertexAttribPointer(resources.attributes.vVertex, 3, GL_FLOAT, GL_FALSE,
                          sizeof(GLfloat) * 3, (void*)0);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, resources.htextures[0]);
    glUniform1i(resources.uniforms.colorMap, 0);
    glDrawElements(GL_TRIANGLES, 36, GL_UNSIGNED_SHORT, (void*)0);
    //clean up a bit
}
In addition, here is the vertex shader:
#version 330
in vec3 vVertex;
in vec2 vTexCoord;
uniform mat4 m4ModelViewProjection;
smooth out vec2 vVarryingTexCoord;

void main(void) {
    vVarryingTexCoord = vTexCoord;
    gl_Position = m4ModelViewProjection * vec4(vVertex, 1.0);
}
and the fragment shader (I have given up on textures for now):
#version 330
uniform sampler2D colorMap;
in vec2 vVarryingTexCoord;
out vec4 vVaryingFragColor;

void main(void) {
    vVaryingFragColor = texture(colorMap, vVarryingTexCoord);
    vVaryingFragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
The vertex buffer holds a simple cube (all coordinates are ±0.25), while the model-view-projection matrix is just the inverse camera matrix (moved back by a factor of two) applied to a perspective matrix. However, even without the matrix transformation, I am unable to see anything on screen. Originally I had two different buffers that needed two different element index lists, but now both buffers (containing the vertex and texture-coordinate data) are the same length and in the same order. The code itself is derived from the Durian Software tutorial and the latest OpenGL SuperBible. The rest of the code is here.
By this point I have tried nearly everything I can think of. Is this code even remotely close? If so, why can't I get anything to render on screen?
You're looking pretty good so far.
The only thing that I see right now is that you've got DEPTH_TEST enabled but never clear the depth buffer. Even if the buffer started out with good values, you would be drawing empty scenes on every frame after the first one, because the depth buffer is never cleared.
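That is, the clear at the top of testRender() should include the depth bit:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);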
If that does not help, can you make sure you have no glGetError() errors? You may have to clean up your unused texturing attributes/uniforms to get the error state clean, but that would be my next step.
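Something like this after each suspect block of calls works well enough (a sketch):
#include <cstdio>

// Drain and report every pending GL error (more than one can be queued).
static void checkGlErrors(const char* where)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        std::fprintf(stderr, "GL error 0x%04x after %s\n", err, where);
}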

Pass-through geometry shader for points

I'm having some problems writing a simple pass-through geometry shader for points. I figured it should be something like this:
#version 330
precision highp float;

layout (points) in;
layout (points) out;

void main(void)
{
    gl_Position = gl_in[0].gl_Position;
    EmitVertex();
    EndPrimitive();
}
I have a bunch of points displayed on screen when I don't specify a geometry shader, but when I try to link this shader to my shader program, no points show up and no error is reported.
I'm using C# and OpenTK, but I don't think that is the problem.
Edit: People requested the other shaders. I did test them without the geometry shader, and on their own they worked fine.
Vertex shader:
void main()
{
    gl_FrontColor = gl_Color;
    gl_Position = ftransform();
}
Fragment shader:
void main()
{
    gl_FragColor = gl_Color;
}
I'm not quite sure (I have no real experience with geometry shaders), but don't you have to specify the maximum number of output vertices? In your case it's just one, so try
layout (points, max_vertices=1) out;
Perhaps the shader compiles successfully because you could still specify the number of vertices through the API (at least in compatibility, I think).
EDIT: You use the built-in varying gl_FrontColor (and read gl_Color in the fragment shader), but in the geometry shader you don't propagate it to the fragment shader (it does not get propagated automatically).
This brings us to another problem: you mix new syntax (like gl_in) with old deprecated syntax (like ftransform and the built-in color varyings). That is probably not a good idea, and in this case it gets you into trouble, as gl_in has no gl_Color or gl_FrontColor member if I remember correctly. The best approach would be to use your own color variable as an out variable of the vertex and geometry shaders and as an in variable of the geometry and fragment shaders (but remember that the in has to be an array in the geometry shader).
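Putting the two fixes together, a pass-through geometry shader that forwards its own color varying might look like this (a sketch; vColor/gColor are made-up names, and the vertex and fragment shaders would have to be adapted to write and read them):
#version 330
layout (points) in;
layout (points, max_vertices=1) out;

in vec4 vColor[]; // written by the vertex shader; one entry per input vertex
out vec4 gColor;  // read by the fragment shader

void main(void)
{
    gl_Position = gl_in[0].gl_Position;
    gColor = vColor[0];
    EmitVertex();
    EndPrimitive();
}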