gl_TexCoord[].st doesn't work on new hardware - c++

Since switching hardware from AMD to Intel, something that worked on AMD causes a fatal GLSL error on Intel, and I had to comment it out:
gl_TexCoord[0].st is not recognised and breaks the shader.
I am looking for help for an alternative method or maybe a workaround for this piece of code:
gl_TexCoord[0].s = r.x / m + 0.5;
gl_TexCoord[0].t = r.y / m + 0.5;
vec4 rS = texture(reflectionSampler, gl_TexCoord[0].st);
OpenGL 3.3, GLSL 3.3 - both vertex & fragment shaders 3.30 core.

gl_TexCoord was removed from core profile GLSL. The easiest way to achieve the same effect is to define a vec2 output variable in the vertex shader:
out vec2 texCoord;
[..]
texCoord = vec2(r.x / m + 0.5, r.y / m + 0.5);
and a matching input variable in the fragment shader:
in vec2 texCoord;
[..]
vec4 rS = texture(reflectionSampler, texCoord.xy);
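For completeness, a minimal 3.30 core pair tying the pieces together could look like the sketch below. Here r and m are shown as uniforms purely as placeholders for whatever the original shader computes per vertex, and the fragment shader declares its own color output (the usual practice in the core profile, where gl_FragColor is deprecated):

// vertex shader (sketch)
#version 330 core
layout(location = 0) in vec3 position;
uniform mat4 mvp;
uniform vec2 r;     // placeholder: computed per vertex in the real shader
uniform float m;    // placeholder
out vec2 texCoord;  // replaces gl_TexCoord[0].st

void main()
{
    texCoord = r / m + 0.5;
    gl_Position = mvp * vec4(position, 1.0);
}

// fragment shader (sketch)
#version 330 core
uniform sampler2D reflectionSampler;
in vec2 texCoord;   // name and type must match the vertex shader output
out vec4 fragColor; // user-declared color output

void main()
{
    vec4 rS = texture(reflectionSampler, texCoord);
    fragColor = rS;
}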

Related

Unexpected crashes in Vulkan geometry shader

I am experiencing odd crashes when doing float comparisons in a Vulkan geometry shader. The shader code is as follows:
#version 450
#extension GL_ARB_separate_shader_objects : enable
#extension GL_ARB_shading_language_420pack : enable

layout (triangles) in;
layout (triangle_strip, max_vertices=3) out;

layout(binding = 0) uniform UniformBufferObject {
    mat4 modelView;
    mat4 staticModelView;
} ubo;

in vec2 texCoordGeom[];
layout(location = 0) out vec2 texCoord;

void main() {
    float dist0 = length(gl_in[0].gl_Position.xyz - gl_in[1].gl_Position.xyz);
    float dist1 = length(gl_in[1].gl_Position.xyz - gl_in[2].gl_Position.xyz);
    float dist2 = length(gl_in[0].gl_Position.xyz - gl_in[2].gl_Position.xyz);
    float maxDist = max(dist0, max(dist1, dist2));
    if(maxDist < 0.01) {
        gl_Position = ubo.modelView * gl_in[0].gl_Position;
        texCoord = texCoordGeom[0];
        EmitVertex();
        gl_Position = ubo.modelView * gl_in[1].gl_Position;
        texCoord = texCoordGeom[1];
        EmitVertex();
        gl_Position = ubo.modelView * gl_in[2].gl_Position;
        texCoord = texCoordGeom[2];
        EmitVertex();
        EndPrimitive();
    }
}
It appears to crash at the conditional:
if(maxDist < 0.01)
When I remove this conditional the code runs without issues. If I change the value of the threshold from 0.01 to something larger, such as 0.1 or 1, again the code runs without issues.
Note that I am using the glslangValidator.exe from the VulkanSDK to compile the shader code. No validation errors are thrown except for the warning:
Warning, version 450 is not yet complete; most version-specific features are present, but some are missing.
Also note that no helpful errors are thrown when the program does crash; the entire GPU freezes (the screen goes black momentarily) and the program exits.
For future readers: this appeared to be a driver issue. Since updating to the latest driver (Radeon Driver Packaging Version 16.50.2011-161219a-309792E) along with the latest LunarG Vulkan SDK (1.0.37.0), the problem has resolved itself. Note I was running on an AMD Radeon R9 380 Series.

OpenGL ES 2.0 shader integer operations

I am having trouble getting integer operations to work in OpenGL ES 2.0 shaders.
GL_SHADING_LANGUAGE_VERSION: OpenGL ES GLSL ES 1.00
One of the example lines where I'm having issues is: color.r = floor(f / 65536);
I get this error:
Error linking program: ERROR:SEMANTIC-4 (vertex shader, line 13) Operator not supported for operand types
To give more context: within the library I am working with, the only way to pass the color is as a single float into which three 8-bit integers have been bit-shifted. The flow is: 3 (8-bit) ints -> float passed to the shader | float -> r, g, b (unpacked using integer manipulation). This all works fine on regular OpenGL, but I am having trouble making it work on the Raspberry Pi.
Full vertex shader code here:
attribute vec4 position;
varying vec3 Texcoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 proj;

vec3 unpackColor(float f)
{
    vec3 color;
    f -= 0x1000000;
    color.r = floor(f / 65536);
    color.g = floor((f - color.r * 65536) / 256.0);
    color.b = floor(f - color.r * 65536 - color.g * 256.0);
    return color / 256.0;
}

void main()
{
    Texcoord = unpackColor(position.w);
    gl_Position = proj * view * model * vec4(position.xyz, 1.0);
}
Any ideas how to get this working?
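For what it's worth, GLSL ES 1.00 performs no implicit int-to-float conversions, so expressions that mix a float with integer literals such as 65536 or 0x1000000 can produce exactly this kind of "operator not supported" error. A minimal sketch of the same unpack written with float literals only (keeping the original packing scheme and the original / 256.0 normalization) might look like this:

// GLSL ES 1.00: keep every literal a float so no int/float mixing occurs
vec3 unpackColor(float f)
{
    vec3 color;
    f -= 16777216.0;                                    // 0x1000000 as a float literal
    color.r = floor(f / 65536.0);
    color.g = floor((f - color.r * 65536.0) / 256.0);
    color.b = floor(f - color.r * 65536.0 - color.g * 256.0);
    return color / 256.0;                               // normalized as in the original
}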

LibGdx Shader ("no uniform with name 'u_texture' in shader")

The shader compiles successfully, but the program crashes as soon as rendering starts... This is the error I get: "no uniform with name 'u_texture' in shader". This is what my shader looks like:
#ifdef GL_ES
precision mediump float;
#endif
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
varying vec2 surfacePosition;
#define MAX_ITER 10
void main( void ) {
    vec2 p = surfacePosition*4.0;
    vec2 i = p;
    float c = 0.0;
    float inten = 1.0;
    for (int n = 0; n < MAX_ITER; n++) {
        float t = time * (1.0 - (1.0 / float(n+1)));
        i = p + vec2(
            cos(t - i.x) + sin(t + i.y),
            sin(t - i.y) + cos(t + i.x)
        );
        c += 1.0/length(vec2(
            p.x / (sin(i.x+t)/inten),
            p.y / (cos(i.y+t)/inten)
        ));
    }
    c /= float(MAX_ITER);
    gl_FragColor = vec4(vec3(pow(c,1.5))*vec3(0.99, 0.97, 1.8), 1.0);
}
Can someone please help me? I don't know what I'm doing wrong. BTW, this is a shader I found on the internet, so I know it works; the only problem is making it work with libGDX.
libGDX's SpriteBatch assumes that your shader will have a u_texture uniform. To overcome this, just add
ShaderProgram.pedantic = false; (Javadoc) before putting your shader program into the SpriteBatch.
UPDATE: raveesh is right about the shader compiler removing unused uniforms and attributes, but libGDX wraps the OpenGL shader in a custom ShaderProgram.
Not only should you add the uniform u_texture to your shader program, you should also use it, otherwise it will be optimized away by the shader compiler.
But looking at your shader, you don't seem to need the uniform anyway, so check your program for something like shader.setUniformi("u_texture", 0); and remove that line. It should work fine then.

Use of undeclared identifier 'gl_LightSource'

It's really strange. Here is some of the log:
OpenGL Version = 4.1 INTEL-10.2.40
vs shaderid = 1, file = shaders/pointlight_shadow.vert
- Shader 1 (shaders/pointlight_shadow.vert) compile error: ERROR: 0:39: Use of undeclared identifier 'gl_LightSource'
BTW, I'm using C++/OpenGL/GLFW/GLEW on Mac OS X 10.10. Is there a way to check all the versions or attributes required to use "gl_LightSource" in the shader language?
Shader file:
#version 330
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexNormal_modelspace;
layout(location = 3) in vec3 vertexTangent_modelspace;
layout(location = 4) in vec3 vertexBitangent_modelspace;
out vec4 diffuse,ambientGlobal, ambient;
out vec3 normal,lightDir,halfVector;
out float dist;
out vec3 fragmentcolor;
out vec4 ShadowCoord;
//Model, view, projection matrices
uniform mat4 MVP;
uniform mat4 V;
uniform mat4 M;
uniform mat3 MV3x3;
uniform mat4 DepthBiasMVP;
void main()
{
    //shadow coordinate in light space...
    ShadowCoord = DepthBiasMVP * vec4(vertexPosition_modelspace,1);

    // first transform the normal into camera space and normalize the result
    normal = normalize(MV3x3 * vertexNormal_modelspace);

    // now normalize the light's direction. Note that according to the
    // OpenGL specification, the light is stored in eye space.
    gl_Position = MVP * vec4(vertexPosition_modelspace,1);
    vec3 vertexPosition_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz;
    vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz;

    //light
    vec3 light0_camerapace = (V* vec4(gl_LightSource[0].position.xyz,1) ).xyz;
    vec3 L_cameraspace= light0_camerapace-vertexPosition_cameraspace;
    lightDir = normalize(L_cameraspace);

    // compute the distance to the light source to a varying variable
    dist = length(L_cameraspace);

    // Normalize the halfVector to pass it to the fragment shader
    {
        // compute eye vector and normalize it
        vec3 eye = normalize(-vertexPosition_cameraspace);
        // compute the half vector
        halfVector = normalize(lightDir + eye);
    }

    // Compute the diffuse, ambient and globalAmbient terms
    diffuse = gl_FrontMaterial.diffuse * gl_LightSource[0].diffuse;
    ambient = gl_FrontMaterial.ambient * gl_LightSource[0].ambient;
    ambientGlobal = gl_LightModel.ambient * gl_FrontMaterial.ambient;
}
You're not specifying a profile in your shader version:
#version 330
The default in this case is core, corresponding to the OpenGL core profile. On some platforms, you could change this to use the compatibility profile:
#version 330 compatibility
But since you say that you're working on Mac OS, that's not an option for you. Mac OS only supports the core profile for OpenGL 3.x and later.
The reason your shader does not compile with the core profile is that you're using a bunch of deprecated pre-defined variables. For example:
gl_FrontMaterial
gl_LightSource
gl_LightModel
All of these go along with the old style fixed function pipeline, which is not available anymore in the core profile. You will have to define your own uniform variables for these values, and pass the values into the shader with glUniform*() calls.
I wrote a more detailed description of what happened to built-in GLSL variables in the transition to the core profile in an answer here: GLSL - Using custom output attribute instead of gl_Position.
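As a hedged illustration of that replacement (all names below, such as LightSource, light0, frontMaterial and globalAmbient, are made up for this sketch and are not standard GLSL), the vertex shader could declare its own uniforms and compute the same terms:

#version 330 core
// User-declared stand-ins for the removed built-ins; fill them from C++ with
// glGetUniformLocation() / glUniform*() calls.
struct LightSource { vec4 position; vec4 diffuse; vec4 ambient; };
struct Material    { vec4 diffuse;  vec4 ambient; };

uniform LightSource light0;        // replaces gl_LightSource[0]
uniform Material    frontMaterial; // replaces gl_FrontMaterial
uniform vec4        globalAmbient; // replaces gl_LightModel.ambient
uniform mat4 MVP;

layout(location = 0) in vec3 vertexPosition_modelspace;

out vec4 diffuse, ambient, ambientGlobal;

void main()
{
    diffuse       = frontMaterial.diffuse * light0.diffuse;
    ambient       = frontMaterial.ambient * light0.ambient;
    ambientGlobal = globalAmbient * frontMaterial.ambient;
    gl_Position   = MVP * vec4(vertexPosition_modelspace, 1.0);
}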

OSG: GLSL Shader working on AMD but not on NVIDIA

Currently I am working on an OSG project for my studies and wrote a cel-shading shader (alongside a simple fog shader). I first render with the cel shader, along with the depth buffer, to a texture and then apply the fog shader. Everything works fine on my AMD Radeon HD 7950 and on my Intel HD 4400 (although it is slow on the latter), both running Windows. However, on a Quadro 600 running Linux, the shader compiles without error but is still wrong: the light is dulled and, because some light spots are missing, it seems that not every light in the scene is used. The whole toon effect is also gone.
I confirmed the shader works on another AMD, an ATI Mobility HD 3400.
But on other NVIDIAs, like a GTX 670, 660 Ti or 560 Ti (this time Windows), the shader is not working. First it was totally messed up because of non-uniform flow, but after I fixed that it is still not working.
I have had this problem for some days now and it is giving me a headache. I do not know what I am missing; why is it working on a simple Intel HD 4400 but not on high-end NVIDIA cards?
Strangely, the fog shader works perfectly on every system and gives me the nice fog I want.
Does anyone have an idea? The uniform is set for toonTex, but texture0 is not set, because the model is UV-mapped with Blender, and the textures seem to work just fine (look at the pony in the screenshots). I assume 0 is used as the layout for texture0, which is perfectly valid as far as I know. Here is a video showing the shader on a GTX 660 Ti. Something seems to work if there is only one light, but it is not how it should look; on a Radeon HD 7950 it looks like this (ignore the black border, a screenshot issue).
The light is clearly different.
EDIT: I just did another test: on the Intel HD 4400 under Windows it is working, but the same system running Linux shows only a whole lot of white with some outlines and no textures at all.
Does anyone have any suggestions?
The sources for the shaders are here:
celShader.vert
#version 120
varying vec3 normalModelView;
varying vec4 vertexModelView;
uniform bool zAnimation;
uniform float osg_FrameTime;
void main()
{
    normalModelView = gl_NormalMatrix * gl_Normal;
    vertexModelView = gl_ModelViewMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    vec4 vertexPos = gl_Vertex;
    if(zAnimation){//
        vertexPos.z = sin(5.0*vertexPos.z + osg_FrameTime)*0.25;//+ vertexPos.z;
    }
    gl_Position = gl_ModelViewProjectionMatrix * vertexPos;
}
celShader.frag
#version 120
#define NUM_LIGHTS 5
uniform sampler2D texture0;
uniform sampler2D toonTex;
uniform float osg_FrameTime;
uniform bool tex;
varying vec3 normalModelView;
varying vec4 vertexModelView;
vec4 calculateLightFromLightSource(int lightIndex, bool front){
    vec3 lightDir;
    vec3 eye = normalize(-vertexModelView.xyz);
    vec4 curLightPos = gl_LightSource[lightIndex].position;
    //curLightPos.z = sin(10*osg_FrameTime)*4+curLightPos.z;
    lightDir = normalize(curLightPos.xyz - vertexModelView.xyz);
    float dist = distance( gl_LightSource[lightIndex].position, vertexModelView );
    float attenuation = 1.0 / (gl_LightSource[lightIndex].constantAttenuation
                               + gl_LightSource[lightIndex].linearAttenuation * dist
                               + gl_LightSource[lightIndex].quadraticAttenuation * dist * dist);
    float z = length(vertexModelView);
    vec4 color;
    vec3 n = normalize(normalModelView);
    vec3 nBack = normalize(-normalModelView);
    float intensity = dot(n,lightDir); //NdotL, Lambert
    float intensityBack = dot(nBack,lightDir); //NdotL, Lambert
    //-Phong Modell
    vec3 reflected = normalize(reflect( -lightDir, n));
    float specular = pow(max(dot(reflected, eye), 0.0), gl_FrontMaterial.shininess);
    vec3 reflectedBack = normalize(reflect( -lightDir, nBack));
    float specularBack = pow(max(dot(reflectedBack, eye), 0.0), gl_BackMaterial.shininess);
    //Toon-Shading
    //2D Toon http://www.cs.rpi.edu/~cutler/classes/advancedgraphics/S12/final_projects/hutchins_kim.pdf
    vec4 toonColor = texture2D(toonTex,vec2(intensity,specular));
    vec4 toonColorBack = texture2D(toonTex,vec2(intensityBack,specularBack));
    if(front){
        color += gl_FrontMaterial.ambient * gl_LightSource[lightIndex].ambient[lightIndex];
        if(intensity > 0.0){
            color += gl_FrontMaterial.diffuse * gl_LightSource[lightIndex].diffuse * intensity * attenuation ;
            color += gl_FrontMaterial.specular * gl_LightSource[lightIndex].specular * specular *attenuation ;
        }
        return color * toonColor;
    } else {//back
        color += gl_BackMaterial.ambient * gl_LightSource[lightIndex].ambient[lightIndex];
        if(intensity > 0.0){
            color += gl_BackMaterial.diffuse * gl_LightSource[lightIndex].diffuse * intensityBack * attenuation ;
            color += gl_BackMaterial.specular * gl_LightSource[lightIndex].specular * specularBack *attenuation ;
        }
        return color * toonColorBack;
    }
}

void main(void) {
    vec4 color = vec4(0.0);
    bool front = true;
    //non-uniform-flow error correction
    //see more here: http://www.opengl.org/wiki/GLSL_Sampler#Non-uniform_flow_control
    //and here: http://gamedev.stackexchange.com/questions/32543/glsl-if-else-statement-unexpected-behaviour
    vec4 texColor = texture2D(texture0,gl_TexCoord[0].xy);
    if(!gl_FrontFacing)
        front = false;
    for(int i = 0; i < NUM_LIGHTS; i++){
        color += calculateLightFromLightSource(i,front);
    }
    if(tex)
        gl_FragColor = color * texColor;
    else
        gl_FragColor = color;
}
fogShader.vert
#version 120
varying vec4 vertexModelView;
void main()
{
    gl_Position = ftransform();
    vertexModelView = gl_ModelViewMatrix * gl_Vertex;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
fogShader.frag
varying vec4 vertexModelView;
uniform sampler2D texture0;
uniform sampler2D deepth;
uniform vec3 fogColor;
uniform float zNear;
uniform float zFar;
float linearDepth(float z){
    return (2.0 * (zNear+zFar)) / ((zFar + zNear) - z * (zFar - zNear));// -1.0;
}

void main(void){
    //Literature
    //http://www.ozone3d.net/tutorials/glsl_fog/p04.php and depth_of_field example OSG Cookbook
    vec2 deepthPoint = gl_TexCoord[0].xy;
    float z = texture2D(deepth, deepthPoint).x;
    //fogFactor = (end - z) / (end - start)
    z = linearDepth(z);
    float fogFactor = (4000*4-z) / (4000*4 - 30*4);
    fogFactor = clamp(fogFactor, 0.0, 1.0);
    vec4 texColor = texture2D(texture0,gl_TexCoord[0].xy);
    gl_FragColor = mix(vec4(fogColor,1.0), texColor,fogFactor);
}
Program linking
osg::ref_ptr<osg::Shader> toonFrag = osgDB::readShaderFile("../Shader/celShader.frag");
osg::ref_ptr<osg::Shader> toonVert = osgDB::readShaderFile("../Shader/" + _vertSource);
osg::ref_ptr<osg::Program> celShadingProgram = new osg::Program;
celShadingProgram->addShader(toonFrag);
celShadingProgram->addShader(toonVert);
osg::ref_ptr<osg::Texture2D> toonTex = new osg::Texture2D;
toonTex->setImage(osgDB::readImageFile("../BlenderFiles/Texturen/toons/" + _toonTex));
toonTex->setFilter(osg::Texture::MIN_FILTER, osg::Texture::NEAREST);
toonTex->setFilter(osg::Texture::MAG_FILTER, osg::Texture::NEAREST);
osg::ref_ptr<osg::StateSet> ss = new osg::StateSet;
ss->setTextureAttributeAndModes(1, toonTex, osg::StateAttribute::OVERRIDE | osg::StateAttribute::ON);
ss->addUniform(new osg::Uniform("toonTex", 1));
ss->setAttributeAndModes(celShadingProgram, osg::StateAttribute::OVERRIDE | osg::StateAttribute::ON);
//TODO NEEED?
ss->setTextureMode(1, GL_TEXTURE_1D, osg::StateAttribute::OVERRIDE | osg::StateAttribute::OFF);
ss->addUniform(new osg::Uniform("tex", true));
ss->addUniform(new osg::Uniform("zAnimation", false));
Okay, I finally found the error.
There was a faulty line, present since version zero of my shader, which I overlooked for a whole week (and I am surprised my AMD driver did not give me an error, it was just plain wrong! EDIT: not wrong at all, see the comment below).
These two lines were broken:
color += gl_FrontMaterial.ambient * gl_LightSource[lightIndex].ambient[lightIndex];
color += gl_BackMaterial.ambient * gl_LightSource[lightIndex].ambient[lightIndex];
ambient is of course not an array...
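For reference, a sketch of the fix (assuming the intent was simply to use the light's ambient term) is to drop the component index:

color += gl_FrontMaterial.ambient * gl_LightSource[lightIndex].ambient;
color += gl_BackMaterial.ambient  * gl_LightSource[lightIndex].ambient;

Indexing a vec4 such as ambient with [lightIndex] is syntactically legal (it selects a single component), which is presumably why the shader still compiled everywhere.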