Shaders not working as expected - c++

Using OpenGL 3.2 / GLSL 150 with OpenFrameworks v0.8.3
I'm trying to implement shaders into my programs. My program successfully loads the correct frag and vert files, but I get this glitched visual and error:
[ error ] ofShader: setupShaderFromSource(): GL_FRAGMENT_SHADER shader failed to compile
[ error ] ofShader: GL_FRAGMENT_SHADER shader reports:
ERROR: 0:39: Use of undeclared identifier 'gl_FragColor'
I read an SO answer explaining that gl_FragColor is not supported in GLSL 300, but I'm (pretty sure I'm) not using that version. Regardless, when I replace gl_FragColor with an outputColor out variable, my screen just appears black with no error.
Why isn't my shader appearing as expected? I have a feeling it is either my .vert file, a fundamental misunderstanding of how shapes are drawn from within shaders, or a versioning problem.
My simplified program:
.h
#pragma once
#include "ofMain.h" //includes all openGL libs/reqs
#include "GL/glew.h"
#include "ofxGLSLSandbox.h" //addon lib for runtime shader editing capability
class ofApp : public ofBaseApp{
public:
    void setup();
    void draw();

    ofxGLSLSandbox *glslSandbox; //an object from the addon lib
};
.cpp
#include "ofApp.h"
//--------------------------------------------------------------
void ofApp::setup(){
    // create new ofxGLSLSandbox instance
    glslSandbox = new ofxGLSLSandbox();

    // setup shader width and height
    glslSandbox->setResolution(800, 480);

    // load fragment shader file
    glslSandbox->loadFile("shader"); // shorthand for loading both "shader.frag" and "shader.vert", placed in the correct dir
}
//--------------------------------------------------------------
void ofApp::draw(){
    glslSandbox->draw();
}
.vert (just meant to be a pass-through... if that makes sense)
#version 150

uniform mat4 modelViewProjectionMatrix;
in vec4 position;

void main(){
    gl_Position = modelViewProjectionMatrix * position;
}
.frag (see 3rd interactive code block down this page for intended result)
#version 150

#ifdef GL_ES
precision mediump float;
#endif

out vec4 outputColor;

uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;

float circle(in vec2 _st, in float _radius){
    vec2 dist = _st - vec2(0.5);
    return 1. - smoothstep(_radius - (_radius * 0.01),
                           _radius + (_radius * 0.01),
                           dot(dist, dist) * 4.0);
}

void main(){
    vec2 st = gl_FragCoord.xy / u_resolution.xy;
    vec3 color = vec3(circle(st, 0.9));
    outputColor = vec4(color, 1.0);
}
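For reference, the circle() function above is easy to sanity-check on the CPU. Below is a direct C++ port (smoothstep re-implemented by hand, since it is a GLSL built-in); it returns 1.0 well inside the circle and 0.0 well outside, which suggests the fragment logic itself is fine:

```cpp
#include <algorithm>
#include <cassert>

// CPU port of the GLSL built-in smoothstep(edge0, edge1, x).
float smoothstepf(float edge0, float edge1, float x) {
    float t = std::min(std::max((x - edge0) / (edge1 - edge0), 0.0f), 1.0f);
    return t * t * (3.0f - 2.0f * t);
}

// Direct port of the shader's circle(): st lies in [0,1]^2, centered at 0.5.
float circle(float stX, float stY, float radius) {
    float dx = stX - 0.5f;
    float dy = stY - 0.5f;
    float d = dx * dx + dy * dy;  // dot(dist, dist)
    return 1.0f - smoothstepf(radius - radius * 0.01f,
                              radius + radius * 0.01f,
                              d * 4.0f);
}
```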

I don't see the uniform variables getting set anywhere, which means they keep their default values. In particular, modelViewProjectionMatrix defaults to all zeroes, so every vertex collapses to the clip-space origin and nothing visible is drawn; that is why the screen is black.
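For illustration, this is the kind of matrix the application (or the addon) has to supply. A minimal sketch in plain C++ (ortho2D and transform are my own helper names, not openFrameworks API) of a top-left-origin orthographic projection matching the 800x480 resolution set above, which would then be uploaded with glUniformMatrix4fv:

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Column-major 4x4 orthographic projection with a top-left origin:
// maps x in [0,w] to [-1,1] and y in [0,h] to [1,-1].
std::array<float, 16> ortho2D(float w, float h) {
    std::array<float, 16> m{};  // value-initialized: all zeroes, like an unset uniform
    m[0]  =  2.0f / w;          // x scale
    m[5]  = -2.0f / h;          // y scale, flipped so y grows downward
    m[10] = -1.0f;              // z passthrough (unused for 2D)
    m[12] = -1.0f;              // x translation
    m[13] =  1.0f;              // y translation
    m[15] =  1.0f;
    return m;
}

// Apply the matrix to a point (x, y, 0, 1), column-major convention.
void transform(const std::array<float, 16>& m, float x, float y,
               float& outX, float& outY) {
    outX = m[0] * x + m[4] * y + m[12];
    outY = m[1] * x + m[5] * y + m[13];
}
```

With the all-zero default, every input maps to (0, 0); with the matrix above, the screen corners land on the clip-space corners as expected.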

Related

Do GLSL code snippets stored in named strings have to be compileable code to #include via ARB_shading_language_include extension?

I learned how to use the ARB_shading_language_include extension (https://registry.khronos.org/OpenGL/extensions/ARB/ARB_shading_language_include.txt) thanks to a Stack Overflow question about using #include in GLSL with ARB_shading_language_include. I prepared the following named string and shader:
option A
// NamedString /lib/VertexData.glsl
struct VertexData {
    vec3 objectPosition;
    vec3 worldPosition;
};

VertexData fillVertexData() {
    VertexData v;
    v.objectPosition = objectPosition;
    v.worldPosition = vec3(worldFromObject * vec4(objectPosition, 1));
    return v;
}
// MyShader.vert
#version 430
#extension GL_ARB_shading_language_include : require

layout(location = 0) in vec3 objectPosition;

uniform mat4 worldFromObject;    // Model
uniform mat4 viewFromWorld;      // View
uniform mat4 projectionFromView; // Projection

#include "/lib/VertexData.glsl"
out VertexData v;

void main() {
    v = fillVertexData();
    gl_Position = projectionFromView * viewFromWorld * vec4(v.worldPosition, 1);
}
When compiling the shader via glCompileShaderIncludeARB(vertex, 0, NULL, NULL); I get the following error: Error. Message: 0:18(21): error: 'objectPosition' undeclared. The objectPosition vertex attribute declared in MyShader.vert is not recognized.
Whereas if I move the fillVertexData() function to MyShader.vert shader, it works fine.
option B
// NamedString /lib/VertexData.glsl
struct VertexData {
    vec3 objectPosition;
    vec3 worldPosition;
};

// MyShader.vert
#version 430
#extension GL_ARB_shading_language_include : require

layout(location = 0) in vec3 objectPosition;

uniform mat4 worldFromObject;    // Model
uniform mat4 viewFromWorld;      // View
uniform mat4 projectionFromView; // Projection

#include "/lib/VertexData.glsl"
out VertexData v;

VertexData fillVertexData() {
    VertexData v;
    v.objectPosition = objectPosition;
    v.worldPosition = vec3(worldFromObject * vec4(objectPosition, 1));
    return v;
}

void main() {
    v = fillVertexData();
    gl_Position = projectionFromView * viewFromWorld * vec4(v.worldPosition, 1);
}
This made me think that the extension checks the scope of variables per named string. But that's not the behavior I expect from an #include preprocessor macro system. It should NOT care about variable scopes and just preprocess MyShader.vert and compile that one.
I also tried option A via GL_GOOGLE_include_directive: glslangValidator -l MyShader.vert does NOT throw any errors for either option, and the GLSL code generated via -E looks correct, which matches my expectation.
I read the extension specification and it didn't mention that variables used in a named string have to be declared by the time the extension processes that named string. Am I doing something wrong? Or is this by design of ARB_shading_language_include? Any suggestions on how I can keep fillVertexData() in the named string?
By the way, before writing my own #include implementation for my own app, I wanted to exhaust existing solutions. I first tried the glslang library, but the preprocessor output I get from it is not compileable GLSL: my version of OpenGL does not support GL_GOOGLE_include_directive, and it fills the code with #line directives whose second parameter is NOT an integer but a string (the filename), which is not valid GLSL.
Using ARB_shading_language_include was my second attempt at having reusable GLSL code via #include.
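For what it's worth, if the ARB extension keeps fighting you, a hand-rolled #include pass is not much code. A minimal sketch in C++ (my own helper, no error handling, assuming every include is of the form #include "name") that expands named strings before the source ever reaches glShaderSource:

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>

// Expand lines of the form   #include "name"   using a map of named
// strings, before handing the result to the GL compiler.
std::string expandIncludes(const std::string& src,
                           const std::map<std::string, std::string>& named) {
    std::istringstream in(src);
    std::ostringstream out;
    std::string line;
    while (std::getline(in, line)) {
        if (line.find("#include") != std::string::npos) {
            std::size_t first = line.find('"');
            std::size_t last  = line.rfind('"');
            std::string name  = line.substr(first + 1, last - first - 1);
            // Recurse so included strings may themselves include others.
            out << expandIncludes(named.at(name), named) << '\n';
        } else {
            out << line << '\n';
        }
    }
    return out.str();
}
```

Because the expansion happens entirely before compilation, the compiler sees one flat translation unit and scoping behaves exactly like a textual preprocessor, which is the behavior you expected.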

glUniform1i has no effect

I am trying to assign texture unit 0 to a sampler2D uniform but the uniform's value does not change.
My program is coloring points based on their elevation (Y coordinates). Their color is looked up in a texture.
Here is my vertex shader code:
#version 330 core
#define ELEVATION_MODE

layout (location = 0) in vec3 position;
layout (location = 1) in float intensity;

uniform mat4 vpMat;

flat out vec4 f_color;

#ifdef ELEVATION_MODE
uniform sampler2D elevationTex;
#endif
#ifdef INTENSITY_MODE
uniform sampler2D intensityTex;
#endif

// texCoords is the result of calculations done on vertex coords, I removed the calculation for clarity
vec4 elevationColor() {
    return vec4(textureLod(elevationTex, elevationTexCoords, 0.0).rgb, 1.0);
}

vec4 intensityColor() {
    return vec4(textureLod(intensityTex, intensityTexCoords, 0.0).rgb, 1.0);
}

void main() {
    gl_Position = vpMat * vec4(position.xyz, 1.0);

#ifdef ELEVATION_MODE
    f_color = elevationColor();
#endif
#ifdef COLOR_LODDEPTH
    f_color = getNodeDepthColor();
#endif
}
Here is my fragment shader:
#version 330 core

out vec4 color;
flat in vec4 f_color;

void main() {
    color = f_color;
}
When this shader is executed, I have 2 textures bound:
elevation texture in texture unit 0
intensity texture in texture unit 1
I am using glUniform1i to set the uniform's value:
glUniform1i(elevationTexLocation, (GLuint)0);
But when I run my program, the value of the uniform elevationTex is 1 instead of 0.
If I remove the glUniform1i call, the uniform value does not change (still 1) so I think the call is doing nothing (but generates no error).
If I change the uniform's type to float and the call from glUniform1i to:
glUniform1f(elevationTexLocation, 15.0f);
The value in the uniform is now 15.0f. So there is no problem in my program with the location from which I call glUniform1i, it just has no impact on the uniform's value.
Any idea about what I could be doing wrong?
I could give you more code, but it is not easily accessible, so if you know the answer without it that's great. If you need the C++ part of the code, ask and I'll try to retrieve the important parts.

ARB Assembly to GLSL

I have an NVidia example that uses an ARB Assembly shader:
!!ARBfp1.0
TEX result.color, fragment.texcoord, texture[0], RECT;
END
Now I would like to translate that into a GLSL shader. This is what I've come up with:
uniform sampler2D tex;

void main(void)
{
    vec4 col = texture2D(tex, gl_TexCoord[0].st);
    gl_FragColor = vec4(col.r, col.g, col.b, col.a);
}
I was hoping to see no change in the resulting rendering, but sadly I only get a black texture.
I've already made sure that the tex sampler is set correctly. Also my GLSL code compiles with no errors. For debugging I tried to make my shader even simpler:
void main(void)
{
    gl_FragColor = vec4(1, 0, 0, 1);
}
This gives me a red texture. Thus my basic setup seems to be OK.
Pay attention to the 4th parameter of TEX. It says RECT, so the sampler needs to have the sampler2DRect type:
uniform sampler2DRect tex;

void main(void) {
    gl_FragColor = texture2DRect(tex, gl_TexCoord[0].st);
}

Translate ARB assembly to GLSL?

I'm trying to translate some old OpenGL code to modern OpenGL. This code is reading data from a texture and displaying it. The fragment shader is currently created using ARB_fragment_program commands:
static const char *gl_shader_code =
"!!ARBfp1.0\n"
"TEX result.color, fragment.texcoord, texture[0], RECT; \n"
"END";
GLuint program_id;
glGenProgramsARB(1, &program_id);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, program_id);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, (GLsizei) strlen(gl_shader_code ), (GLubyte *) gl_shader_code );
I'd simply like to translate this into GLSL code. I think the fragment shader should look something like this:
#version 430 core

uniform sampler2DRect s;

void main(void)
{
    gl_FragColor = texture2DRect(s, ivec2(gl_FragCoord.xy), 0);
}
But I'm not sure of a couple of details:
Is this the right usage of texture2DRect?
Is this the right usage of gl_FragCoord?
The texture is being fed with a pixel buffer object using GL_PIXEL_UNPACK_BUFFER target.
I think you can just use the standard sampler2D instead of sampler2DRect (if you do not have a real need for it) since, quoting the wiki, "From a modern perspective, they (rectangle textures) seem essentially useless.".
You can then change your texture2DRect(...) to texture(...) or texelFetch(...) (to mimic your rectangle fetching).
Since you seem to be using OpenGL 4, you do not need to (should not ?) use gl_FragColor but instead declare an out variable and write to it.
Your fragment shader should look something like this in the end:
#version 430 core

uniform sampler2D s;
out vec4 out_color;

void main(void)
{
    out_color = texelFetch(s, ivec2(gl_FragCoord.xy), 0);
}
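To make the texture()/texelFetch() distinction above concrete: texture() takes normalized [0,1] coordinates (and filters), while texelFetch() takes integer texel indices with no filtering, which is effectively what RECT sampling gave you. A CPU analogy of the two lookup styles on a made-up 1D "texture" array:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// texelFetch-style lookup: direct integer texel index, no filtering.
float fetchTexel(const std::vector<float>& tex, int i) {
    return tex[static_cast<std::size_t>(i)];
}

// texture()-style lookup with nearest filtering: a normalized
// coordinate u in [0,1) is scaled by the texture size first.
float sampleNearest(const std::vector<float>& tex, float u) {
    std::size_t i = static_cast<std::size_t>(u * static_cast<float>(tex.size()));
    return tex[i];
}
```

Both reach the same texel once the coordinate convention is accounted for, which is why either call works in the translated shader.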
@Zouch, thank you very much for your response. I took it and worked on this for a bit. My final shaders were very similar to what you suggested. For the record, the final vertex and fragment shaders I implemented were as follows:
Vertex Shader:
#version 330 core

layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;

out vec2 UV;

uniform mat4 MVP;

void main()
{
    gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
    UV = vertexUV;
}
Fragment Shader:
#version 330 core

in vec2 UV;
out vec3 color;

uniform sampler2D myTextureSampler;

void main()
{
    color = texture(myTextureSampler, UV).rgb;
}
That seemed to work.

no uniform with name 'u_proj' in shader

I wrote a pair of shaders to display the textures as greyscale instead of full color. I used these shaders with libGDX's built-in SpriteBatch class and it worked. Then when I tried to use it with the built-in SpriteCache class it didn't work. I looked at the SpriteCache code and saw that it sets some different uniforms, which I tried to take into account, but I seem to have gone wrong somewhere.
The SpriteCache class in libGDX sets the following uniforms:
customShader.setUniformMatrix("u_proj", projectionMatrix);
customShader.setUniformMatrix("u_trans", transformMatrix);
customShader.setUniformMatrix("u_projTrans", combinedMatrix);
customShader.setUniformi("u_texture", 0);
This is my vertex shader:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;

uniform mat4 u_proj;
uniform mat4 u_projTrans;
uniform mat4 u_trans;

varying vec4 v_color;
varying vec2 v_texCoords;

void main() {
    v_color = a_color;
    v_texCoords = a_texCoord0;
    gl_Position = a_position * u_proj * u_trans;
}
and this is the fragment shader:
varying vec4 v_color;
varying vec2 v_texCoords;

uniform sampler2D u_texture;
uniform u_projTrans;

void main() {
    vec4 color = texture2D(u_texture, v_texCoords).rgba;
    float gray = (color.r + color.g + color.b) / 3.0;
    vec3 grayscale = vec3(gray + 0.0 * u_projTrans[0][0]);
    gl_FragColor = vec4(grayscale, color.a);
}
The error I get is:
Exception in thread "LWJGL Application" java.lang.IllegalArgumentException: no uniform with name 'u_proj' in shader
...
at com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop(LwjglApplication.java:206)
at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:114)
Do any of you know why this isn't working? There is a uniform with the name u_proj.
Thank you all!
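As an aside, the grayscale conversion in the fragment shader above is the equal-weight average of the three channels (a luminance-weighted blend such as 0.299/0.587/0.114 is another common choice). The same formula on the CPU, just as a sanity check:

```cpp
#include <cassert>

// Equal-weight grayscale, matching the shader's (r + g + b) / 3.0.
float grayscaleAverage(float r, float g, float b) {
    return (r + g + b) / 3.0f;
}
```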
What Reto Koradi said was true: I had forgotten to put the mat4 type before u_projTrans; that helped me.
Then what Tenfour04 said was a huge help too! I hadn't known about:
if (!shader.isCompiled()) throw new GdxRuntimeException("Couldn't compile shader: " + shader.getLog());
What helped me most in the long run was finding that GLSL compilers optimize away unused uniforms and varyings; if you can't trick the compiler into thinking an unused variable is actually being used, the shader will still compile, and the crash only shows up at runtime when you try to set that variable.
In libGDX there is a static "pedantic" variable that you can set. If it is set to false, the application won't crash if values are sent to the shader that the shader isn't using; they will simply be ignored. The code in my libGDX program looked something like this:
ShaderProgram.pedantic = false;
Thanks for your help all! I hope this can help someone in the future
Make sure that you check the success of your shader compilation/linking. Your fragment shader will not compile:
uniform u_projTrans;
This variable declaration needs a type. It should be:
uniform mat4 u_projTrans;
You can use the following calls to test for errors while setting up your shader programs:
glGetShaderiv(shaderId, GL_COMPILE_STATUS, ...);
glGetShaderInfoLog(shaderId, ...);
glGetProgramiv(programId, GL_LINK_STATUS, ...);
glGetProgramInfoLog(programId, ...);