GLSL not behaving the same between graphics cards

I have the following GLSL shader that works on a Mac with an NVIDIA GeForce GT 330M, on a different Mac with an ATI Radeon HD 5750, and on an Ubuntu VM inside that second Mac, but not on an Ubuntu VM inside a Windows machine with a GeForce GTX 780 (all drivers up to date). The shader is pretty basic, so I'm looking for some help figuring out what might be wrong. The vertex shader looks like this (I'm using the cocos2d-x game engine, which is where all of the CC_{x} variables are defined):
varying vec4 v_fragmentColor;
void main() {
    gl_Position = CC_PMatrix * CC_MVMatrix * a_position;
    gl_PointSize = CC_MVMatrix[0][0] * u_size * 1.5f;
    v_fragmentColor = vec4(1, 1, 1, 1);
}
And the fragment shader:
varying vec4 v_fragmentColor;
void main() {
    gl_FragColor = texture2D(CC_Texture0, gl_PointCoord) * v_fragmentColor; // Can't see anything
    // gl_FragColor = texture2D(CC_Texture0, gl_PointCoord); // Produces the texture as expected, no problems!
    // gl_FragColor = v_fragmentColor; // Produces a white box as expected, no problems!
}
As you can see, I'm getting very strange behavior: both the sampler, CC_Texture0, and the varying vec4, v_fragmentColor, seem to work properly on their own, but multiplying them causes problems. I'm reasonably confident everything else is set up right, because I'm seeing it work properly on the other systems, so it seems to be related to the graphics card or to some undefined behavior that I'm not aware of. Also, I'm using #version 120 (which was needed for gl_PointCoord). Thanks for any help!
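One more thing I plan to try, purely as a debugging sketch (nothing here is a confirmed cause, and CC_Texture0 is still the sampler the engine provides), is rebuilding the multiply from explicit components to see whether one particular channel is what breaks on the GTX 780:
varying vec4 v_fragmentColor;
void main() {
    // Debugging variant: force alpha to 1.0 and multiply only the color
    // channels, to check whether the alpha of either factor is the problem.
    vec4 tex = texture2D(CC_Texture0, gl_PointCoord);
    gl_FragColor = vec4(tex.rgb * v_fragmentColor.rgb, 1.0);
}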

Related

GLSL alpha test optimized out on NVIDIA

I have a small fragment shader on glsl, that is used when I want to render shadow texture. To have the desired shadow from textures with alpha channel (like leaves), alpha test is used.
#version 130
in lowp vec2 UV; //comes from vertex shader, works fine
uniform sampler2D DiffuseTextureSampler; //ok, alpha channel IS present in passed texture
void main(){
    lowp vec4 MaterialDiffuseColor = texture( DiffuseTextureSampler, UV ).rgba;
    if(MaterialDiffuseColor.a < 0.5)
    {
        discard;
    }
    //no color output is required at this stage
}
This shader works fine on Intel cards (an HD 520 and an HD Graphics 3000), but on NVIDIA (a 420M and a GTX 660, on Windows 7 and Linux Mint 17.3 respectively, both using the latest drivers, 37x.xx something...) the alpha test does not work. The shader compiles without any errors, so it seems that the NVIDIA optimizer is doing something weird.
I copy-pasted parts of the 'fullbright' shader into the 'shadow' shader and arrived at this odd result, which does work as intended, even though it does a lot of work that is useless for shadow rendering.
#version 130
in lowp vec2 UV;
in lowp float fade;
out lowp vec4 color;
uniform sampler2D DiffuseTextureSampler;
uniform bool overlayBool;
uniform lowp float translucencyAlpha;
uniform lowp float useImageAlpha;
void main(){
    lowp vec4 MaterialDiffuseColor = texture( DiffuseTextureSampler, UV ).rgba;
    //notice the added OR :/
    if(MaterialDiffuseColor.a < 0.5 || overlayBool)
    {
        discard;
    }
    //next line is meaningless
    color = vec4(0.0, 0.0, 1.0, min(translucencyAlpha * useImageAlpha, fade));
}
If I remove any uniform, or change anything (for example, replace the min function with some arithmetic, or keep the declaration of a uniform variable but stop using it), the alpha test breaks again.
Just outputting a color does not work (i.e. color = vec4(1.0, 1.0, 1.0, 1.0); has no effect).
I tried using #pragma optimize (off), but it did not help.
By the way, when the alpha test is broken, the expression "MaterialDiffuseColor.a == 0.0" is true.
Sorry if this is a dumb question, but what causes such behaviour on NVIDIA cards, and what can I do to avoid it? Thank you.
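One workaround that follows from the observation above (a sketch only; I'm assuming the optimizer keeps the fetch as long as the shader's output depends on the sampled alpha) is to write a throwaway color that uses the alpha, instead of copy-pasting the whole 'fullbright' shader:
#version 130
in lowp vec2 UV;
out lowp vec4 color;
uniform sampler2D DiffuseTextureSampler;
void main(){
    lowp vec4 MaterialDiffuseColor = texture( DiffuseTextureSampler, UV ).rgba;
    if(MaterialDiffuseColor.a < 0.5)
    {
        discard;
    }
    // The color is irrelevant for the depth-only shadow pass, but because it
    // depends on the sampled alpha the compiler cannot drop the texture fetch.
    color = vec4(0.0, 0.0, 0.0, MaterialDiffuseColor.a);
}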

gl_PointCoord compiles and links, but crashes at runtime

I successfully wrote a standard, basic transform feedback particle system with point sprites. No flickering; the particles update from one buffer into the next, which is then rendered, and the output buffer becomes the input buffer on the next iteration. All GPU-side, standard transform feedback.
Wonderful! ONE BIG PROBLEM: it only works if I don't use gl_PointCoord. Using a flat color for my point sprites works fine, but I need gl_PointCoord to do anything meaningful. All my shaders, whether or not they use gl_PointCoord, compile and link just fine. However, at runtime, if the shader uses gl_PointCoord (whether or not gl_PointCoord is actually in the execution path), the program crashes.

I explicitly glEnable(GL_POINT_SPRITE); this doesn't have any effect. Omitting gl_PointCoord, setting glPointSize(100.0f), and outputting vec4(1.0, 1.0, 1.0, 1.0), the particle system renders just fine as large white blocky squares (as expected). But using gl_PointCoord in any way (as a standard texture lookup coordinate, a procedural color, or anything else) crashes my shaders at runtime, after they compile and link successfully. I simply don't understand why. They pass glShaderSource, glCompileShader, glAttachShader, and glLinkProgram.

I'm compiling my shaders as #version 430 and 440, and I even tried 300 es. All of them compile and link, and I checked the status of both the compile and the link: all good. I'm using a high-end Microsoft Surface Book Pro with an NVIDIA GeForce GPU, Visual Studio 2015, and I made sure all my drivers are up to date.

Unfortunately, with point sprites I don't have billboard vertices from the vertex shader to interpolate into the fragment shader as texture coordinates. gl_FragCoord doesn't work either (as I would expect for point sprites). Does anyone know how to solve this, or another technique for getting texture coordinates for point sprites?
glBeginTransformFeedback(GL_POINTS);//if my fragment shader uses gl_PointCoord, it hard crashes here.
When replying, please understand that I'm very experienced in writing shaders (vertex, pixel, tessellation control, tessellation evaluation, and geometry shaders) in GLSL and HLSL. But I don't claim to know everything; I could have forgotten something simple, I just have no idea what that could be. I'm figuring it could be a state I don't have enabled. As far as transform feedback goes, I also set up the varying attribs correctly via glTransformFeedbackVaryings.
C++ :
void Render(void* pData)
{
    auto pOwner = static_cast<CPointSpriteSystem*>(pData);
    glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
    glEnable(GL_POINT_SPRITE);
    glEnable(GL_POINT_SMOOTH);
    //glPointParameteri(GL_POINT_SPRITE_COORD_ORIGIN, GL_LOWER_LEFT);
    m_Shader.Activate();
    auto num_particles = pOwner->m_NumPointSprites;
    FeedbackIndex = 0;
    while (true)
    {
        m_Shader.SetSubroutine(GL_VERTEX_SHADER, "RenderPass",
                               vssubroutines[FeedbackIndex],
                               vsprevSubLoc[FeedbackIndex],
                               vsupdateSub[FeedbackIndex]);
        m_Shader.SetSubroutine(GL_FRAGMENT_SHADER, "RenderPixelPass",
                               pssubroutines[0],
                               psprevSubLoc[0],
                               psrenderSub[0]);
        if (!FeedbackIndex)
        {
            glEnable(GL_RASTERIZER_DISCARD);
            glBindVertexArray(m_vao[bufferIndex]);
            glBindTransformFeedback(GL_TRANSFORM_FEEDBACK, m_Feedback[bufferIndex]);
            glBeginTransformFeedback(GL_POINTS); // if the feedback fragment shader uses gl_PointCoord, it always hard-crashes here
            glDrawArrays(GL_POINTS, 0, num_particles);
            glEndTransformFeedback();
            glFlush();
        }
        else
        {
            m_Shader.SetSubroutine(GL_FRAGMENT_SHADER, "RenderPixelPass",
                                   pssubroutines[(int)pOwner->m_ParticleType],
                                   psprevSubLoc[(int)pOwner->m_ParticleType],
                                   psrenderSub[(int)pOwner->m_ParticleType]);
            glPointSize(100.0f);
            glDisable(GL_RASTERIZER_DISCARD);
            glDrawTransformFeedback(GL_POINTS, m_Feedback[bufferIndex]);
            bufferIndex = 1 - bufferIndex;
            break;
        }
        FeedbackIndex = 1 - FeedbackIndex;
    }
}
VS feedback:
#version 310 es
subroutine void RenderPassType();
subroutine uniform RenderPassType RenderPass;
layout(location=0) in vec3 VertexPosition;
layout(location=1) in vec3 VertexVelocity;
layout(location=2) in float VertexStartTime;
layout(location=3) in vec3 VertexInitialVelocity;
out vec3 Position;
out vec3 Velocity;
out float StartTime;
out float Transp;
uniform float g_fCurSeconds;
uniform float g_fElapsedSeconds;
uniform float Time;
uniform float H;
uniform vec3 Accel;
#ifdef USE_VIEW_BLOCK
layout(std140) uniform view_block{
    mat4 g_mView,
         g_mInvView,
         g_mPrevView,
         g_mPrevInvView,
         g_mProj,
         g_mInvProj;
};
uniform mat4 g_mWorld;
#endif
subroutine(RenderPassType) void UpdateSphere(){
    Position=VertexPosition+VertexVelocity*g_fElapsedSeconds;
    Velocity=VertexVelocity;
    StartTime=VertexStartTime;
}
subroutine(RenderPassType) void Render(){
    gl_Position=g_mProj*g_mInvView*vec4(VertexPosition,1.0);
}
void main(){
    RenderPass();
}
PS feedback:
#version 310 es //version 430 and 440 same results
subroutine void RenderPixelType();
subroutine uniform RenderPixelType RenderPixelPass;
uniform sampler2D tex0;
layout(location=0) out vec4 g_FragColor;
subroutine(RenderPixelType) void Basic(){
    g_FragColor=vec4(1.0,1.0,1.0,1.0);
}
subroutine(RenderPixelType) void ProceduralSphere(){
#if 1
    vec2 coord=gl_PointCoord; //at runtime: BOOM!
    coord=coord*2.0-1.0;
    float len=length(coord);
    if(len>1.0) discard;
    g_FragColor=vec4(1.0-len,1.0-len,1.0-len,1.0);
#else
    g_FragColor=vec4(1.0,1.0,1.0,1.0); //always works
#endif
}
subroutine(RenderPixelType) void StandardImage(){
    g_FragColor=texture2D(tex0,gl_PointCoord); //boom!!
    g_FragColor=vec4(1.0,1.0,1.0,1.0);
}
void main(){
    RenderPixelPass();
}
I solved the problem! The issue was that I wasn't writing a value to Transp (declared in the vertex shader as out float Transp;). I had casually assumed I didn't have to. But as I started to trim some fat, as soon as I wrote out a generic float that isn't actually used by any later shader stage (Transp = 0.0f) and compiled as #version 430, everything started to work as originally expected: little white spheres.
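For completeness, the fix amounts to writing the otherwise unused output in the Render() subroutine of the feedback vertex shader; a minimal sketch of just that subroutine, based on the description above (the rest of the shader is unchanged):
subroutine(RenderPassType) void Render(){
    gl_Position = g_mProj * g_mInvView * vec4(VertexPosition, 1.0);
    // Writing every declared out, even one that no later stage reads,
    // is what stopped gl_PointCoord from crashing at runtime here.
    Transp = 0.0;
}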

OpenGL shader version error

I am using Visual Studio 2013, but building with the Visual Studio 2010 compiler toolset.
I am running Windows 8 in Boot Camp on a MacBook Pro with Intel Iris Pro 5200 graphics.
I have a very simple vertex and fragment shader. I am just displaying simple primitives in a window, but I am getting warnings in the console stating:
OpenGL Debug Output: Source(Shader Comiler), type(Other), Priority(Medium), GLSL compile warning(s) for shader 3, "": WARNING: -1:65535: #version : version number deprecated in OGL 3.0 forward compatible context driver
Does anyone have any idea how to get rid of these annoying warnings?
Vertex Shader Code:
#version 330 core
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projMatrix;
in vec3 position;
in vec2 texCoord;
in vec4 colour;
out Vertex {
    vec2 texCoord;
    vec4 colour;
} OUT;
void main(void) {
    gl_Position = (projMatrix * viewMatrix * modelMatrix) * vec4(position, 1.0);
    OUT.texCoord = texCoord;
    OUT.colour = colour;
}
Fragment shader code:
#version 330 core
in Vertex {
    vec2 texCoord;
    vec4 colour;
} IN;
out vec4 color;
void main() {
    color = IN.colour;
    //color = vec4(1,1,1,1);
}
I always knew Intel drivers were bad, but this is ridiculous. The #version directive is NOT deprecated in GL 3.0. In fact it is more important than ever beginning with GL 3.2, because in addition to the number you can also specify core (default) or compatibility.
Nevertheless, that is not an actual error. It is an invalid warning, and having OpenGL debug output set up is why you keep seeing it. You can ignore it. AMD seems to be the only vendor that uses debug output in a useful way. NV almost never outputs anything, opting instead to crash... and Intel appears to be spouting nonsense.
It is possible that what the driver is really trying to tell you is that you have an OpenGL 3.0 context and you are using a GLSL 3.30 shader. If that is the case, this has got to be the stupidest way I have ever seen of doing that.
Have you tried #version 130 instead? If you do this, the interface blocks (e.g. in Vertex { ...) should generate parse errors, but it would at least rule out the only interpretation of this warning that makes any sense.
There is another possibility, that makes a lot more sense in the end. The debug output mentions this is related to shader object #3. While there is no guarantee that shader names are assigned sequentially beginning with 0, this is usually the case. You have only shown a total of 2 shaders here, #3 would imply the 4th shader your software loaded.
Are you certain that these are the shaders causing the problem?

Why does this GLSL shader not work on an Intel G41?

Some info about my graphic card:
GL_RENDERER: Intel(R) G41 Express Chipset
OpenGL_VERSION: 2.1.0 - Build 8.15.10.1986
GLSL_VERSION: 1.20 - Intel Build 8.15.10.1986
Vertex shader 1:
#version 110
attribute vec3 vertexPosition_modelspace;
varying vec3 normal;
varying vec3 vertex;
void light(inout vec3 ver, out vec3 nor);
void main()
{
    gl_Position = vec4(vertexPosition_modelspace, 1.0);
    light(vertex, normal);
}
Vertex shader 2:
#version 110
void light(inout vec3 ver, out vec3 nor)
{
    ver = vec3(0.0,1.0,0.0);
    //vec3 v = -ver; // wrong line
    nor = vec3(0.0,0.0,1.0);
    //float f = dot(ver, nor); // wrong line
}
Fragment shader:
#version 110
varying vec3 normal;
varying vec3 vertex;
void main()
{
    gl_FragColor = vec4(vertex, 1.0);
}
These shaders work well as long as the two lines marked "wrong line" in the second vertex shader stay commented out. However, once either of them is enabled, we get an error. The error occurs in the OpenGL function glDrawArrays.
It seems that an out/inout parameter cannot be used as an r-value.
I have run the same program on an Intel HD Graphics 3000, whose OpenGL version is 3.1 and GLSL version is 1.40, and there it works well. Is this a bug in Intel's driver, or am I using GLSL incorrectly?
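For what it's worth, a workaround I am considering (assuming the driver only chokes when an out/inout parameter is read back) is to do all the work in local temporaries and assign the outputs once at the end:
#version 110
void light(inout vec3 ver, out vec3 nor)
{
    // Never read the out/inout parameters themselves; use locals instead.
    vec3 v = vec3(0.0, 1.0, 0.0);
    vec3 n = vec3(0.0, 0.0, 1.0);
    vec3 negV = -v;        // reading a local works everywhere
    float f = dot(v, n);   // likewise
    ver = v;
    nor = n;
}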
Because the Intel G41 is an extremely weak GPU.
The only way through it is to upgrade your GPU.

converting GLSL #130 segment to #330

I have the following piece of shader code that works perfectly with GLSL #version 130, but I would like to convert it to code that works with #version 330 (somehow the 130 version does nothing on my Ubuntu machine with a GeForce 210). After several failed attempts (I keep getting link errors with no description) I've decided to ask for some help. The code below dynamically changes the contrast and brightness of a texture using the uniform variables Brightness and Contrast. I have implemented it in Python using PyOpenGL:
def createShader():
    """
    Compile a shader that adjusts contrast and brightness of active texture
    Returns
        OpenGL.shader - reference to shader
        dict - reference to variables that can be passed to the shader
    """
    fragmentShader = shaders.compileShader("""#version 130
        uniform sampler2D Texture;
        uniform float Brightness;
        uniform float Contrast;
        uniform vec4 AverageLuminance;
        void main(void)
        {
            vec4 texColour = texture2D(Texture, gl_TexCoord[0].st);
            gl_FragColor = mix(texColour * Brightness,
                mix(AverageLuminance, texColour, Contrast), 0.5);
        }
        """, GL_FRAGMENT_SHADER)
    shader = shaders.compileProgram(fragmentShader)
    uniform_locations = {
        'Brightness': glGetUniformLocation( shader, 'Brightness' ),
        'Contrast': glGetUniformLocation( shader, 'Contrast' ),
        'AverageLuminance': glGetUniformLocation( shader, 'AverageLuminance' ),
        'Texture': glGetUniformLocation( shader, 'Texture' )
    }
    return shader, uniform_locations
I've looked up the changes that need to be made for the new GLSL version and tried changing the fragment shader code to the following, but then I only get non-descriptive link errors:
fragmentShader = shaders.compileShader("""#version 330
    uniform sampler2D Texture;
    uniform float Brightness;
    uniform float Contrast;
    uniform vec4 AverageLuminance;
    in vec2 TexCoord;
    out vec4 FragColor;
    void main(void)
    {
        vec4 texColour = texture2D(Texture, TexCoord);
        FragColor = mix(texColour * Brightness,
            mix(AverageLuminance, texColour, Contrast), 0.5);
    }
    """, GL_FRAGMENT_SHADER)
Is there anyone that can help me with this conversion?
I doubt that raising the shader version profile will solve any issue. #version 330 is OpenGL-3.3, and according to the NVidia product website the maximum OpenGL version supported by the GeForce 210 is OpenGL-3.1, i.e. #version 140.
I created no vertex shader because I didn't think I'd need one (I wouldn't know what I should make it do). It worked before without any vertex shader as well.
Probably only as long as you didn't use a fragment shader, or before you were attempting to use a texture. The fragment shader needs input variables, coming from a vertex shader, to have something it can use as texture coordinates. TexCoord is not a built-in variable (and in higher GLSL versions the built-in variables suitable for the job have been removed), so you need to fill it with a value (and meaning) in a vertex shader.
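For example, a minimal #version 330 vertex shader that feeds TexCoord could look roughly like this; the attribute names, locations, and the clip-space pass-through are my assumptions, since the original program's vertex setup isn't shown:
#version 330
layout(location = 0) in vec2 position;   // assumed: quad vertices already in clip space
layout(location = 1) in vec2 texCoord;   // assumed: per-vertex texture coordinates
out vec2 TexCoord;                       // must match "in vec2 TexCoord" in the fragment shader
void main(void)
{
    TexCoord = texCoord;
    gl_Position = vec4(position, 0.0, 1.0);
}
The attribute locations would still need to be wired up on the Python side (for example via glVertexAttribPointer, or glGetAttribLocation if you don't use explicit locations) before this does anything useful.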
glGetString(GL_VERSION) on the NVidia machine reads out OpenGL version 3.3.0. This is Ubuntu, so it might be possible that it differs from the Windows specifications?
Do you have the NVidia proprietary drivers installed? And are they actually being used? Check with glxinfo or glGetString(GL_RENDERER). OpenGL-3.3 is not too far from OpenGL-3.1, and in theory OpenGL major versions map to hardware capabilities.