I have a couple of examples that I want to run on my PC. The problem is that they are written for GLSL version 150, and my PC only supports version 120. I'm pretty sure the program itself is simple enough not to require any functionality specific to OpenGL 3.1. I have found some information on the steps needed to convert the GLSL (e.g. changing in to attribute and out to varying), but it still doesn't compile (is it actually possible to get a meaningful error message out of this?).
original .vert
#version 150
in vec2 in_Position;
in vec3 in_Color;
out vec3 ex_Color;
void main(void) {
gl_Position = vec4(in_Position.x, in_Position.y, 0.0, 1.0);
ex_Color = in_Color;
}
original .frag
#version 150
precision highp float;
in vec3 ex_Color;
out vec4 gl_FragColor;
void main(void) {
gl_FragColor = vec4(ex_Color,1.0);
}
changed .vert
#version 120
attribute vec2 in_Position;
attribute vec3 in_Color;
varying vec3 ex_Color;
void main(void) {
gl_Position = vec4(in_Position.x, in_Position.y, 0.0, 1.0);
ex_Color = in_Color;
}
changed .frag
#version 120
precision highp float;
attribute vec3 ex_Color;
void main(void) {
gl_FragColor = vec4(ex_Color,1.0);
}
So can anyone spot a problem here?
To get the compile/link error messages, you need to use the commands glGetShaderInfoLog for shaders, and glGetProgramInfoLog for programs.
These will tell you what your particular errors are.
Just taking a stab at what the error might be: you're declaring an attribute input in the fragment shader, which should be a varying. Attributes carry data from the application into the vertex shader, and varyings carry data from the vertex shader to the fragment shader.
The GLSL 1.20 spec also says that the precision qualifier is "reserved for future use", so it may not be usable in version 120. You can probably just leave it out.
But you should still get familiar with the info log functions regardless; you'll definitely need them eventually.
You can get the compile errors by retrieving the "info log":
GLuint nVertexShader, nPixelShader; // handles to objects
GLint vertCompiled, fragCompiled; // status values
GLint linked;
glCompileShader(nVertexShader);
glGetShaderiv(nVertexShader, GL_COMPILE_STATUS, &vertCompiled);
if vertCompiled (or fragCompiled) == 0, do this to see why:
int infologLength = 0;
int charsWritten = 0;
glGetShaderiv(nVertexShader, GL_INFO_LOG_LENGTH, &infologLength);
if (infologLength > 0)
{
    GLchar* infoLog = (GLchar *)malloc(infologLength);
    if (infoLog == NULL)
    {
        printf("ERROR: Could not allocate InfoLog buffer");
        exit(1);
    }
    glGetShaderInfoLog(nVertexShader, infologLength, &charsWritten, infoLog);
    printf("Shader InfoLog:\n%s", infoLog);
    free(infoLog);
}
You can do the same with linking; just check for linked == 0 and retrieve the log as above:
glLinkProgram(m_nProgram);
glGetProgramiv(m_nProgram, GL_LINK_STATUS, &linked);
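For completeness, the program-object variant looks like this (a minimal sketch along the same lines, using glGetProgramInfoLog instead of glGetShaderInfoLog):
if (linked == 0)
{
    int infologLength = 0;
    int charsWritten = 0;
    glGetProgramiv(m_nProgram, GL_INFO_LOG_LENGTH, &infologLength);
    if (infologLength > 0)
    {
        GLchar* infoLog = (GLchar *)malloc(infologLength);
        if (infoLog != NULL)
        {
            glGetProgramInfoLog(m_nProgram, infologLength, &charsWritten, infoLog);
            printf("Program InfoLog:\n%s", infoLog);
            free(infoLog);
        }
    }
}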
precision highp float;
That is not legal in GLSL 1.20. I don't even know why it was put in the 1.50 shader anyway, since the precision qualifiers only do useful things in GLSL ES, and you can't share 1.50 shaders with GLSL ES.
When I try to compile this GLSL code in OpenGL 4.0 I get error 1282
Vertex shader:
#version 330 core
layout(location = 0) in vec2 aPos;
uniform mat4 model[100];
uniform mat4 projection;
out int instanceID;
void main()
{
instanceID = gl_InstanceID;
gl_Position = projection * model[gl_InstanceID] * vec4(aPos.x, aPos.y, 0.0, 1.0);
}
Fragment shader:
#version 330 core
in int instanceID;
out vec4 FragColor;
uniform vec4 color[100];
void main()
{
FragColor = color[instanceID];
}
The relevant code is just the regular shader program creation, since I get the error before any drawing or anything like that, but just in case here is the code:
unsigned int vertex, fragment;
vertex = glCreateShader(GL_VERTEX_SHADER);
std::string vShaderCode = ReadEntireTextFile(vertPath);
const char* c_vShaderCode = vShaderCode.c_str();
glShaderSource(vertex, 1, &c_vShaderCode, NULL);
glCompileShader(vertex);
std::string fShaderCode = ReadEntireTextFile(fragPath);
fragment = glCreateShader(GL_FRAGMENT_SHADER);
const char* c_fShaderCode = fShaderCode.c_str();
glShaderSource(fragment, 1, &c_fShaderCode, NULL);
glCompileShader(fragment);
// shader Program
m_RendererID = glCreateProgram();
glAttachShader(m_RendererID, vertex);
glAttachShader(m_RendererID, fragment);
glLinkProgram(m_RendererID);
// delete the shaders as they're linked into our program now and no longer necessary
glDeleteShader(vertex);
glDeleteShader(fragment);
glUseProgram(m_RendererID);
I know for sure that the error is because of instanceID, because everything worked when the shader didn't have it, but I have tried to find where exactly the problem is with no luck.
UPDATE:
I made sure the shaders compile, and it turns out they both compile successfully.
I was also able to pinpoint where the error occurs: it is the glUseProgram call.
I think the error is because of instanceID because when I change:
FragColor = color[instanceID];
to
FragColor = color[0];
the program works.
UPDATE(2):
I solved the problem.
It turns out I had two problems. One was that I was using too many uniform components, which I fixed thanks to one of the answers alerting me to the limit.
The other was that indexing the color uniform array directly in the fragment shader didn't work the way I thought it would, so I moved the lookup into the vertex shader and passed the single color I needed on to the fragment shader.
Thanks for the help!
uniform mat4 model[100];
That is way outside of what is guaranteed by the spec. The limit here is GL_MAX_VERTEX_UNIFORM_COMPONENTS, which the spec guarantees to be at least 1024. Since a mat4 consumes 16 components, that's 64 matrices at most. Now your particular limit might be higher, but you also have the other uniforms in your program, like color[100].
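You can query the actual limit of your implementation at runtime. A minimal sketch (plain GL calls; the variable name is just for illustration):
GLint maxVertexUniformComponents = 0;
glGetIntegerv(GL_MAX_VERTEX_UNIFORM_COMPONENTS, &maxVertexUniformComponents);
printf("GL_MAX_VERTEX_UNIFORM_COMPONENTS = %d\n", maxVertexUniformComponents);
// mat4 model[100] alone needs 100 * 16 = 1600 components,
// already more than the guaranteed minimum of 1024.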
(from comments):
The info log does not return anything for either the fragment or the vertex shader, and glGetShaderiv(shader, GL_COMPILE_STATUS, &output); returns true for both shaders.
But that does not imply that the program object linked successfully. Such resource limits are usually enforced during linking.
I'm pretty sure the program object is created by OpenGL successfully; I'm not really sure about the others though. You see, if I change the fragment shader main function to FragColor = color[0]; it works, so I think the issue is with instanceID.
That conclusion does not follow. If you write color[0], the compiler can optimize your vec4 color[100] array down to a single element, and that might get you below your particular limit.
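Checking the link status directly (not just the compile status) would make such a failure visible. A minimal sketch using the m_RendererID handle from the question:
GLint linkStatus = GL_FALSE;
glGetProgramiv(m_RendererID, GL_LINK_STATUS, &linkStatus);
if (linkStatus != GL_TRUE) {
    char log[1024];
    glGetProgramInfoLog(m_RendererID, sizeof(log), NULL, log);
    printf("Program link failed:\n%s\n", log);
}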
I am using OpenGL 3 and following the tutorial from BennyBox on YouTube.
Using this method:
static void CheckShaderError(GLuint shader, GLuint flag, bool isProgram, const std::string& errorMessage){
    GLint success = 0;
    GLchar error[1024] = {0};

    if(isProgram){
        glGetProgramiv(shader, flag, &success);
    }else{
        glGetShaderiv(shader, flag, &success);
    }

    if(success == GL_FALSE){
        if(isProgram){
            glGetProgramInfoLog(shader, sizeof(error), NULL, error);
        }else{
            glGetShaderInfoLog(shader, sizeof(error), NULL, error);
        }
        std::cerr << errorMessage << ": '" << error << "'" << std::endl;
    }
}
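For reference, the checks are presumably invoked along these lines (a sketch; the handle variable names are assumed, while the message strings match the errors shown below):
CheckShaderError(vertexShaderHandle, GL_COMPILE_STATUS, false, "Error compiling shader!");
CheckShaderError(programHandle, GL_LINK_STATUS, true, "Error linking shader program");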
I should be able to load the shader files (fragment and vertex shaders). It works with basic shaders, but when I modify them to this point:
#version 120
attribute vec3 position;
attribute vec2 texCoord;
varying vec2 texCoord;
void main(){
gl_Position = vec4(position, 1.0);
texCoord0 = texCoord;
}
Then I get:
Error compiling shader!: 'ERROR: 0:6: 'texCoord' : redefinition
ERROR: 0:11: 'texCoord0' : undeclared identifier
ERROR: 0:11: 'assign' : cannot convert from 'attribute 2-component vector of float' to 'float'
and the fragment shader:
#version 120
uniform sampler2D diffuse;
varying vec2 texCoord0;
void main(){
gl_FragColor = texture2D(diffuse, texCoord0);
}
gives:
Unable to load shader: ./res/basicShader.fs
Error linking shader program: 'Attached vertex shader is not compiled.
I have the exact same code as in the video, and it runs fine with the basic coloring shader. I am on Visual Studio 2012.
There are two errors in your vertex shader.
First, you are declaring texCoord twice.
attribute vec2 texCoord;
varying vec2 texCoord;
Then you are trying to use a varying called texCoord0, without ever declaring it.
texCoord0 = texCoord;
Your vertex shader should look like this:
#version 120
attribute vec3 position;
attribute vec2 texCoord;
varying vec2 texCoord0;
void main(){
gl_Position = vec4(position, 1.0);
texCoord0 = texCoord;
}
I wrote a pair of shaders to display the textures as greyscale instead of full color. I used these shaders with libGDX's built-in SpriteBatch class and it worked. Then when I tried to use them with the built-in SpriteCache class it didn't work. I looked at the SpriteCache code and saw that it sets some different uniforms, which I tried to take into account, but I seem to have gone wrong somewhere.
The SpriteCache class in libGDX sets the following uniforms:
customShader.setUniformMatrix("u_proj", projectionMatrix);
customShader.setUniformMatrix("u_trans", transformMatrix);
customShader.setUniformMatrix("u_projTrans", combinedMatrix);
customShader.setUniformi("u_texture", 0);
This is my vertex shader:
attribute vec4 a_position;
attribute vec4 a_color;
attribute vec2 a_texCoord0;
uniform mat4 u_proj;
uniform mat4 u_projTrans;
uniform mat4 u_trans;
varying vec4 v_color;
varying vec2 v_texCoords;
void main() {
v_color = a_color;
v_texCoords = a_texCoord0;
gl_Position = a_position* u_proj * u_trans;
}
and this is the fragment shader:
varying vec4 v_color;
varying vec2 v_texCoords;
uniform sampler2D u_texture;
uniform u_projTrans;
void main() {
vec4 color = texture2D(u_texture, v_texCoords).rgba;
float gray = (color.r + color.g + color.b) / 3.0;
vec3 grayscale = vec3(gray + 0* u_projTrans[0][0]);
gl_FragColor = vec4(grayscale, color.a);
}
The error I get is:
Exception in thread "LWJGL Application" java.lang.IllegalArgumentException: no uniform with name 'u_proj' in shader
...
at com.badlogic.gdx.backends.lwjgl.LwjglApplication.mainLoop(LwjglApplication.java:206)
at com.badlogic.gdx.backends.lwjgl.LwjglApplication$1.run(LwjglApplication.java:114)
Do any of you know why this isn't working? There is a uniform with the name u_proj.
Thank you all!
What Reto Koradi said was true: I had forgotten to put the mat4 type before u_projTrans, and that helped me.
Then what Tenfour04 said was a huge help too! I hadn't known about:
if (!shader.isCompiled()) throw new GdxRuntimeException("Couldn't compile shader: " + shader.getLog());
What helped me most in the long run was finding out that GLSL, when compiling, optimizes away unused uniforms, and that unless you can trick the compiler into thinking those uniforms are actually used, the shader will compile fine and then fail at runtime when the application tries to set them.
In libGDX there is a static "pedantic" flag on ShaderProgram that you can set. If it is set to false, the application won't crash when values are sent to uniforms the shader isn't using; they are simply ignored. The code in my libGDX program looked something like this:
ShaderProgram.pedantic = false;
Thanks for your help all! I hope this can help someone in the future
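In raw OpenGL terms, the behaviour described above shows up as glGetUniformLocation returning -1 for a uniform the compiler has stripped out. A minimal sketch of a defensive check (plain GL calls, not libGDX; program and projectionMatrix are placeholder names):
// If "u_proj" is never actually used by the shader code, the compiler may strip it
// and glGetUniformLocation returns -1. Raw OpenGL silently ignores glUniform* calls
// with location -1; libGDX's pedantic mode is what turns this into an exception.
GLint loc = glGetUniformLocation(program, "u_proj");
if (loc != -1) {
    glUniformMatrix4fv(loc, 1, GL_FALSE, projectionMatrix);
} else {
    printf("u_proj was optimized out (or the name is misspelled)\n");
}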
Make sure that you check the success of your shader compilation/linking. Your fragment shader will not compile:
uniform u_projTrans;
This variable declaration needs a type. It should be:
uniform mat4 u_projTrans;
You can use the following calls to test for errors while setting up your shader programs:
glGetShaderiv(shaderId, GL_COMPILE_STATUS, ...);
glGetShaderInfoLog(shaderId, ...);
glGetProgramiv(programId, GL_LINK_STATUS, ...);
glGetProgramInfoLog(programId, ...);
I want to debug my application, but when I run it I get errors from glLinkProgram and I cannot see any visual output while running under the debugger.
While debugging, in the "shader" code view I see a red line saying that the program cannot be linked because I have to write to gl_Position.
But my shaders are like this:
#version 440
in vec3 VertexPosition;
in vec3 VertexColor;
in vec2 UV;
in float MaxVal;
out vec3 outColor;
out vec2 outUV;
out float outMaxVal;
void main()
{
outColor = VertexColor;
outUV = UV;
outMaxVal = MaxVal;
gl_Position = vec4( VertexPosition, 1.0 );
}
And fragment:
#version 440
in vec3 outColor;
in vec2 outUV;
in float outMaxVal;
uniform sampler2D filterTex;
out vec4 FragColor;
void main() {
float texColor = texture( filterTex, outUV ).r;
texColor = texColor * outMaxVal;
FragColor = vec4(outColor.rgb * texColor,1.0f);
}
When I execute my code I can see the result of the compilation, and the shaders apparently work perfectly, but I want to debug this, and it makes no sense.
Besides, using QGLWidget I see three contexts created in gDEBugger, the first two of which are deleted; the active one is the third. I do nothing with them, I just use the widget. Who creates those contexts? Could they be the source of my errors?
* UPDATE * Here is what I did to check for errors. All the strings come back empty... shouldn't they contain some kind of information?
GLint status;
glFuncs.glGetProgramiv(programHandle, GL_LINK_STATUS, &status);
if (status == GL_FALSE) {
    int loglen;
    char logbuffer[1000];
    glFuncs.glGetProgramInfoLog(programHandle, sizeof(logbuffer), &loglen, logbuffer);
    fprintf(stderr, "OpenGL Program Linker Error \n%.*s", loglen, logbuffer);
} else {
    int loglen;
    char logbuffer[1000];
    glFuncs.glGetProgramInfoLog(programHandle, sizeof(logbuffer), &loglen, logbuffer);
    if (loglen > 0) {
        fprintf(stderr, "OpenGL Program Link OK \n%.*s", loglen, logbuffer);
    }
    glFuncs.glValidateProgram(programHandle);
    glFuncs.glGetProgramInfoLog(programHandle, sizeof(logbuffer), &loglen, logbuffer);
    if (loglen > 0) {
        fprintf(stderr, "OpenGL Program Validation results:\n%.*s", loglen, logbuffer);
    }
}
I'm using Xcode to make a game with OpenGL. I have used GLUT to initialise a window. I have shaders that I wish to implement, but when I try to compile them I get two compile errors in the info log. My shaders look like this:
//FirstShader.vsh
#version 150 core
in vec3 position;
void main()
{
gl_Position = vec4(position, 1.0);
}
//FirstShader.fsh
#version 150 core
out vec4 fragData;
void main()
{
fragData = vec4(0.0, 0.0, 1.0, 1.0);
}
I'm reading the file and compiling it with this code:
GLuint createShaderFromFile(const GLchar *path, GLenum shaderType){
    GLuint shaderID = glCreateShader(shaderType);

    std::ifstream fin;
    fin.open(path);
    if(!fin.is_open()){
        fin.close();
        std::cout << "Shader Not Found" << std::endl;
        return 0;
    }

    std::string source((std::istreambuf_iterator<GLchar>(fin)), std::istreambuf_iterator<GLchar>());
    fin.close();

    const GLchar* shaderSource = source.c_str();
    glShaderSource(shaderID, 1, &shaderSource, NULL);
    glCompileShader(shaderID);

    GLint compileStatus;
    glGetShaderiv(shaderID, GL_COMPILE_STATUS, &compileStatus);
    if (compileStatus != GL_TRUE) {
        std::cout << "Shader failed to compile" << std::endl;
        GLint infoLoglength;
        glGetShaderiv(shaderID, GL_INFO_LOG_LENGTH, &infoLoglength);
        GLchar* infoLog = new GLchar[infoLoglength + 1];
        glGetShaderInfoLog(shaderID, infoLoglength, NULL, infoLog);
        std::cout << infoLog << std::endl;
        delete[] infoLog; // array new must be paired with delete[]
        return 0;
    }
    return shaderID;
}
I'm getting these errors:
ERROR: 0:1: '' : version '150' is not supported
ERROR: 0:1: '' : syntax error #version
My OpenGL version is 2.1 and my GLSL version is 1.20. Does anybody know how I can fix this?
You can tell OSX to use a newer version of OpenGL by setting it in your pixel format when you create the context. Here is an example of how to set it up.
Or, if you're using GLUT, I think you want glutInitContextVersion(x, y), where x and y are the major and minor version numbers. GLSL 1.50 is supported from OpenGL 3.2 onward. (You might also want glutInitContextProfile(GLUT_CORE_PROFILE).)
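For example, a minimal setup sketch assuming freeglut (glutInitContextVersion and glutInitContextProfile are freeglut extensions and may not exist in Apple's bundled GLUT):
#include <GL/freeglut.h>

int main(int argc, char** argv) {
    glutInit(&argc, argv);
    glutInitContextVersion(3, 2);              // request OpenGL 3.2
    glutInitContextProfile(GLUT_CORE_PROFILE); // core profile, so GLSL 1.50 is available
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH);
    glutCreateWindow("GLSL 150 test");
    // ... load and compile the shaders, register callbacks, enter glutMainLoop() ...
    return 0;
}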
I re-wrote your shaders in a way that they will actually work in GLSL 1.20.
in must be replaced with attribute in a GLSL 1.20 vertex shader.
out for fragment shader output is invalid; use gl_FragColor or gl_FragData[n] instead.
Declaring a vertex attribute as vec3 and then doing something like vec4(vtx, 1.0) is completely redundant.
If you declare the attribute as vec4 and give it data using fewer than 4 components, GLSL will automatically fill in the attribute's missing components this way: vec4(0.0, 0.0, 0.0, 1.0).
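On the application side, that last point just means you can keep supplying 2- or 3-component positions even though the attribute is declared as vec4. A minimal sketch (positionVbo and positionLoc are placeholder names):
// Feeding 3 floats per vertex into an attribute declared as vec4 in the shader;
// unspecified components default to z = 0.0 and w = 1.0.
glBindBuffer(GL_ARRAY_BUFFER, positionVbo);
glVertexAttribPointer(positionLoc, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glEnableVertexAttribArray(positionLoc);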
Vertex Shader:
#version 120
attribute vec4 position;
void main()
{
gl_Position = position;
}
Fragment Shader:
#version 120
void main()
{
gl_FragColor = vec4(0.0, 0.0, 1.0, 1.0);
}
Of course this does not really solve your problem, because if you have a version of OS X 10.7 or newer, it supports OpenGL 3.2 and therefore GLSL 1.50 core. You need to request a core profile context for this to work, however - otherwise you will get OpenGL 2.1 and GLSL 1.20.