GLSL vertex shader works with Qt but not with plain OpenGL - C++

I'm currently developing an OpenGL widget in Qt, based on QOpenGLWidget. I followed some examples and used Qt's GLSL wrapper classes for demo purposes. The application itself should depend on the GUI framework as little as possible, so that switching to another framework later stays easy.
When the Qt code takes care of the shader, the app works fine:
QOpenGLShader *vshader = new QOpenGLShader(QOpenGLShader::Vertex, this);
const char *vsrc =
    "uniform mediump mat4 matrix;\n"
    "void main(void)\n"
    "{\n"
    "    gl_Position = matrix * gl_Vertex;\n"
    "    gl_FrontColor = gl_Color;\n"
    "}\n";
bool success = vshader->compileSourceCode(vsrc);
program = new QOpenGLShaderProgram();
program->addShader(vshader);
program->link();
Next, I upload and compile the shader on my own:
const char *vsrc =
    "uniform mediump mat4 matrix;\n"
    "void main(void)\n"
    "{\n"
    "    gl_Position = matrix * gl_Vertex;\n"
    "    gl_FrontColor = gl_Color;\n"
    "}\n";
GLuint programmID = glCreateProgram();
GLuint shaderID = glCreateShader(GL_VERTEX_SHADER);
int length = (int) std::char_traits<char>::length(vsrc);
glShaderSource(shaderID, 1, &vsrc, &length);
glCompileShader(shaderID);
char *error = new char[1000];
int *messagelength = new int;
glGetShaderInfoLog(shaderID, (GLsizei)1000, messagelength, error);
string str = string(error, *messagelength);
std::cout << str << std::endl << std::flush;
delete[] error;        // array allocation must be released with delete[], not delete
delete messagelength;
glAttachShader(programmID, shaderID);
glDeleteShader(shaderID);   // only flags the shader for deletion while it is still attached
glLinkProgram(programmID);
glUseProgram(programmID);
However, this results in the following errors:
0(1) : error C0000: syntax error, unexpected type identifier, expecting '{' at token "mat4"
0(4) : warning C7506: OpenGL does not define the global type matrix
0(4) : warning C7531: pointers requires "#extension GL_NV_shader_buffer_load : enable" before use
0(4) : error C0000: syntax error, unexpected identifier, expecting '(' at token "gl_Vertex"
How do I make this work?

Well, your code is invalid in desktop GL. Since your shader does not contain a #version directive, it is interpreted as GLSL 1.10, and that version does not know precision qualifiers like mediump. (Later versions simply accept those keywords, for improved compatibility with GLSL ES.)
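For example (a sketch, not from the original answer): on a desktop compatibility context the hand-compiled path works once the ES-only precision qualifier is dropped, since without a #version directive the source is treated as GLSL 1.10, which still provides gl_Vertex, gl_Color and gl_FrontColor:
const char *vsrc =
    "uniform mat4 matrix;\n"      // no mediump: GLSL 1.10 has no precision qualifiers
    "void main(void)\n"
    "{\n"
    "    gl_Position = matrix * gl_Vertex;\n"
    "    gl_FrontColor = gl_Color;\n"
    "}\n";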
Note that Qt might very well use GL ES 2.0 as the default (and not desktop GL), depending on your local configuration and also on how the Qt libs were built.
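As an illustration (a sketch, not part of the original answer), one way to force desktop GL rather than GL ES before any widget is created is via QSurfaceFormat:
// Sketch: request a desktop OpenGL context as the application-wide default,
// so QOpenGLWidget does not silently pick OpenGL ES on this platform.
QSurfaceFormat fmt;
fmt.setRenderableType(QSurfaceFormat::OpenGL);   // desktop GL instead of OpenGLES
QSurfaceFormat::setDefaultFormat(fmt);           // must run before the widgets are created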
Also note that your shader is totally invalid in a modern core profile of desktop GL.
The only recommendation I can give you is to first decide which version (or versions) of OpenGL you want/have to target.

Related

OpenGLES 3.0: "Only consts can be used in a global initializer "

I am a newbie following chapter 2 of the book OpenGL ES 3.0 Cookbook, but I get stuck with this error when installing the demo app:
2020-12-06 16:16:10.888 7549-7578/com.demo.hellosquare E/glOpenGLES3Native: Could not compile shader 35633:
ERROR: 0:6: 'RadianAngle' : Only consts can be used in a global initializer
ERROR: 0:6: 'RadianAngle' : Only consts can be used in a global initializer
ERROR: 0:6: 'RadianAngle' : Only consts can be used in a global initializer
ERROR: 0:6: 'RadianAngle' : Only consts can be used in a global initializer
ERROR: 4 compilation errors. No code generated.
The problem is, I don't understand what that message is trying to tell me (Google has no relevant results).
The code that involves "RadianAngle" appears in the following places.
At the top of my single CPP file, I declared:
GLuint radianAngle;
And then my shader, also at the top of the same file:
static const char vertexShader[] =
"#version 300 es \n"
"in vec4 VertexPosition; \n"
"in vec4 VertexColor; \n"
"uniform float RadianAngle; \n"
"out vec4 TriangleColor; \n"
"mat2 rotation = mat2(cos(RadianAngle),sin(RadianAngle), \
-sin(RadianAngle),cos(RadianAngle)); \n"
"void main() { \n"
" gl_Position = mat4(rotation)*VertexPosition; \n"
" TriangleColor = VertexColor; \n"
"}\n";
Finally, inside my render function (called through JNI) in the same file:
radianAngle = glGetUniformLocation(programID, "RadianAngle");
glUniform1f(radianAngle, radian);
Strangely, I copied it exactly from the book, sigh...
The issue is related to the line:
mat2 rotation = mat2(cos(RadianAngle),sin(RadianAngle),
-sin(RadianAngle),cos(RadianAngle));
rotation is a variable in global scope. Global variables can only be initialized with constant expressions. RadianAngle is not constant because it is a uniform variable. This causes the error:
ERROR: 0:6: 'RadianAngle' : Only consts can be used in a global initializer
The error occurs 4 times, because RadianAngle is used 4 times in the initializer of rotation.
You have to set the value of rotation in main:
mat2 rotation;
void main()
{
    rotation = mat2(cos(RadianAngle), sin(RadianAngle),
                    -sin(RadianAngle), cos(RadianAngle));
    // [...]
}
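Putting that back into the C++ source string from the question, the corrected shader would look roughly like this (a sketch):
static const char vertexShader[] =
    "#version 300 es                                          \n"
    "in vec4 VertexPosition;                                  \n"
    "in vec4 VertexColor;                                     \n"
    "uniform float RadianAngle;                               \n"
    "out vec4 TriangleColor;                                  \n"
    "mat2 rotation;        // declared globally, but not initialized here \n"
    "void main() {                                            \n"
    "  rotation = mat2(cos(RadianAngle), sin(RadianAngle),    \n"
    "                 -sin(RadianAngle), cos(RadianAngle));   \n"
    "  gl_Position = mat4(rotation) * VertexPosition;         \n"
    "  TriangleColor = VertexColor;                           \n"
    "}\n";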

Using OpenGL on Mac, version '150' is not supported error [closed]

I'm trying to set up my assignment from school, but the only guidance is for Windows. We're using Qt for the shaders, and I'm working in Visual Studio Code and compiling from the terminal. The problem is that whatever version of OpenGL I try, I end up with the same error:
QOpenGLShader::compile(Vertex): ERROR: 0:1: '' : version '150' is not supported
I'm using a mid-2012 MacBook, and it seems I'm running OpenGL 4.1. I've tried multiple versions such as 3.3, 2.1, 4.1 and so on; nothing seems to do the trick. I have also tried to override the version, but that doesn't work. Am I looking at the problem the wrong way?
This is the source code that is causing the error:
#version 150 core <----
// input from application
uniform vec3 vecLight;
uniform sampler2D samTexture;
// input from geometry shader
smooth in vec3 normal;
smooth in vec3 vertex;
smooth in vec3 cartescoord;
// material constants
float ka = 0.1;
float kd = 0.6;
float ks = 0.3;
float spec_exp = 50.0;
// useful constants
float PI = 3.14159265;
// output color
out vec4 outFragColor;
vec2 cartesian2normspherical(vec3 cart)
{
float fPhi = 0.0;
float fTheta = 0.0;
float fRadius = length(cart);
fTheta = acos (cart.z / fRadius) / (PI);
fPhi = atan(cart.y, cart.x)/ (PI);
//transform phi from [-1,1] to [0,1]
fPhi = (fPhi+1.0)/2.0;
return vec2(fPhi, fTheta);
}
float calcPhongBlinn(vec3 vecV, vec3 vecN, vec3 vecL)
{
float fLightingIntesity = 1.0;
float fDiffuseIntensity = clamp(dot(normal, vecLight), 0, 1);
vec3 vecHalfway = normalize(vecL + vecV);
float fSpecularIntensity = pow(clamp(dot(vecHalfway, vecN), 0, 1), spec_exp);
fLightingIntesity = ka + kd*fDiffuseIntensity + ks*fSpecularIntensity;
return fLightingIntesity;
}
void main(void)
{
vec2 sphCoord = cartesian2normspherical(cartescoord);
vec2 texcoord = sphCoord.xy;
// this vector is constant since we assume that we look orthogonally at the computer screen
vec3 vecView = vec3(0.0, 0.0, 1.0);
float fI = calcPhongBlinn(vecView, normal, vecLight);
vec4 vecColor = texture2D(samTexture, texcoord.xy);
outFragColor = vec4(vecColor.rgb*fI, 1.0);
}
This is the full error output:
QOpenGLShader::compile(Vertex): ERROR: 0:1: '' : version '150' is not supported
*** Problematic Vertex shader source code ***
QOpenGLShader: could not create shader
QOpenGLShader::link: ERROR: One or more attached shaders not successfully compiled
With my script file running:
make && ./myMain.app/Contents/MacOS/myMain
EDIT1: Adding my window file to the question
Window::Window()
{
    QGridLayout *mainLayout = new QGridLayout;

    QGLFormat glFormat;
    glFormat.setVersion(3, 3);
    glFormat.setProfile(QGLFormat::CoreProfile);
    glFormat.setSampleBuffers(true);

    GLWidget *glWidget = new GLWidget(/*glFormat,0*/);
    mainLayout->addWidget(glWidget, 0, 0);

    setLayout(mainLayout);
    setWindowTitle(tr("Rendering with OpenGL"));
}
EDIT2:
After a lot of research, I have concluded that first and foremost I have to use OpenGL 3.3 for the shaders to work, which my "OpenGL Extensions Viewer" says my Mac does not support. I'm still wondering if there is a loophole I can exploit, so any information about whether or how this is possible would help.
LAST EDIT:
I found the solution: you have to pass the QGLFormat to QGLWidget as a constructor parameter. Found some of the solution here:
Qt5 OpenGL GLSL version error
Thank you for all the help!
Copied from the question above:
After a lot of research, I have concluded that first and foremost I have to use OpenGL 3.3 for the shaders to work, which my "OpenGL Extensions Viewer" says my Mac does not support. I'm still wondering if there is a loophole I can exploit, so any information about whether or how this is possible would help.
So the initial question is answered, and the second part regarding the "loophole" should be asked as a separate question (relating to this question).
Update:
LAST EDIT:
I found the solution: you have to pass the QGLFormat to QGLWidget as a constructor parameter. Found some of the solution here:
Qt5 OpenGL GLSL version error
Thank you for all the help!
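With the Qt 4-style classes used in the question, that fix would look roughly like this (a sketch; it assumes GLWidget forwards the format to its QGLWidget base class):
QGLFormat glFormat;
glFormat.setVersion(3, 3);
glFormat.setProfile(QGLFormat::CoreProfile);   // a core profile is required for #version 150 on macOS
glFormat.setSampleBuffers(true);

// Pass the format to the widget instead of relying on the default context:
GLWidget *glWidget = new GLWidget(glFormat);
mainLayout->addWidget(glWidget, 0, 0);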
Update 2023-01-01 Qt 5.15:
int main(int argc, char *argv[])
{
    QSurfaceFormat glFormat;
    glFormat.setVersion(3, 3); // or (4,1) or whatever
    glFormat.setDepthBufferSize(24);
    glFormat.setProfile(QSurfaceFormat::CoreProfile);
    QSurfaceFormat::setDefaultFormat(glFormat);

    QApplication app(argc, argv);
    ...
}
class MainWidget : public QOpenGLWidget, protected QOpenGLFunctions_3_3_Core // or QOpenGLFunctions_4_1_Core, etc.
{
    ...
protected:
    void initializeGL() override;
    ...
};
void MainWidget::initializeGL()
{
    QOpenGLWidget::initializeGL();

    QOpenGLContext* glContext = this->context();
    int glMajorVersion = glContext->format().majorVersion();
    int glMinorVersion = glContext->format().minorVersion();

    qDebug() << "Running MyProgram";
    qDebug() << "Checking QOpenGLWidget:";
    qDebug() << "Widget OpenGL:" << QString("%1.%2").arg(glMajorVersion).arg(glMinorVersion);
    qDebug() << "Context valid:" << glContext->isValid();
    qDebug() << "OpenGL information:";
    qDebug() << "VENDOR:" << (const char*)glGetString(GL_VENDOR);
    qDebug() << "RENDERER:" << (const char*)glGetString(GL_RENDERER);
    qDebug() << "VERSION:" << (const char*)glGetString(GL_VERSION);
    qDebug() << "GLSL VERSION:" << (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION);
    ...
}
Possible output:
Running MyProgram
Checking QOpenGLWidget:
Widget OpenGL: "4.1"
Context valid: true
OpenGL information:
VENDOR: ATI Technologies Inc.
RENDERER: AMD Radeon R9 M370X OpenGL Engine
VERSION: 4.1 ATI-4.8.101
GLSL VERSION: 4.10

What is the syntax for 'pixel_interlock_ordered' in GLSL?

I'm trying out the ARB_fragment_shader_interlock extension in OpenGL 4.5 and am failing to get the shader to compile when trying to use pixel_interlock_ordered.
#version 430
#extension GL_ARB_shading_language_420pack : require
#extension GL_ARB_shader_image_load_store : require
#extension GL_ARB_fragment_shader_interlock : require
layout(location = 0, rg8, pixel_interlock_ordered) uniform image2D image1;
void main()
{
    beginInvocationInterlockARB();

    ivec2 coords = ivec2(gl_FragCoord.xy);
    vec4 pixel = imageLoad(image1, coords);
    pixel.g = pixel.g + 0.01;
    if (pixel.g > 0.5)
        pixel.r = pixel.r + 0.01;
    else
        pixel.r = pixel.r + 0.02;
    imageStore(image1, coords, pixel);

    endInvocationInterlockARB();
}
This shader fails to compile with:
0(6) : error C7600: no value specified for layout qualifier 'pixel_interlock_ordered'
This is the same error you would get for any random name instead of pixel_interlock_ordered. I guess the syntax is different somehow, but the spec (https://www.khronos.org/registry/OpenGL/extensions/ARB/ARB_fragment_shader_interlock.txt) refers to it as a "layout qualifier".
Googling "pixel_interlock_ordered" comes up short with just links to the official specs, so I can't find an example. What is the correct syntax?
Layout qualifiers in GLSL are a bit weird. They usually apply to declarations, but some of them effectively apply to the shader as a whole. Such qualifiers are basically shader-specific options you set from within the shader.
The interlock qualifiers are those kinds of qualifiers. You're not saying that this variable will be accessed via interlocking, because that's not what interlocking means. It means that the execution of the interlock-bound code will have a certain property, relative to executing interlock-bound code on other invocations of the same shader. The qualifier specifies the details of the execution restriction.
Qualifiers that apply to the shader as a whole are grammatically specified as qualifiers on in or out (most such qualifiers use in, but a few use out):
layout(pixel_interlock_ordered) in;
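Applied to the shader from the question, the declarations would be split roughly like this (a sketch):
#version 430
#extension GL_ARB_shading_language_420pack : require
#extension GL_ARB_shader_image_load_store : require
#extension GL_ARB_fragment_shader_interlock : require

// Shader-wide ordering option, declared on "in" rather than on a variable:
layout(pixel_interlock_ordered) in;

// The image declaration keeps only its own qualifiers:
layout(location = 0, rg8) uniform image2D image1;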

shader compilation error on const value

Hi, I'm having a bug with a fragment shader that doesn't compile on certain computers. The program using this shader runs on my computer (Quadro K1000M, OpenGL 4.2) but crashes at launch on my friend's computer (AMD FirePro M4100 FireGL V, OpenGL 4.2).
The error is:
Failed to compile fragment shader:
Fragment shader failed to compile with the following errors:
ERROR: 0:2: error(#207) Non-matching types for const initializer: const
ERROR: error(#273) 1 compilation errors. No code generated
The shader code is as follows:
uniform sampler2D source;
const float Threshold = 0.75;
const float Factor = 4.0;
#define saturate(x) clamp(x, 0.f, 1.f)
void main()
{
    vec4 sourceFragment = texture2D(source, gl_TexCoord[0].xy);
    float luminance = sourceFragment.r * 0.2126 + sourceFragment.g * 0.7152 + sourceFragment.b * 0.0722;
    sourceFragment *= saturate(luminance - Threshold) * Factor;
    gl_FragColor = sourceFragment;
}
I didn't find any info about this error. I asked my friend to edit the shaders and remove the const keyword, and the program then worked. But I don't understand why this keyword generates a compilation error on his computer, since he seems to have the same OpenGL version as I have. I tried to search for known issues between const and shaders, but I didn't find anything.
Do you have any idea what's happening?
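For reference, the workaround described above (removing const) amounts to something like this (a sketch; it sidesteps the AMD compiler's complaint rather than explaining it):
uniform sampler2D source;

// Workaround from the question: plain globals instead of const.
// A literal is still a valid initializer for a non-const global.
float Threshold = 0.75;
float Factor = 4.0;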

Invalid value GLSL?

After letting my OpenGL program run for a while and viewing the scene from different angles, I am getting an OpenGL "invalid value" error in my shader program. This is literally my program:
Vertex
#version 420
in vec4 Position;
uniform mat4 modelViewProjection;
void main()
{
    gl_Position = modelViewProjection * Position;
}
Fragment
#version 420
out vec4 fragment;
void main()
{
fragment = vec4(1,0,0,1);
}
This error occurs right after the function call that tells OpenGL to use my shader program. What could the cause of this be? It happens regardless of the object I call it on. How can I get more information on what is going on? The error occurs almost randomly for a series of frames, then works again for a while, fails again after a bit, etc.
If it helps, here is what my program linking looks like:
...
myShader = glCreateProgram();
CreateShader(myShader,GL_VERTEX_SHADER, "shaders/prog.vert");
CreateShader(myShader,GL_FRAGMENT_SHADER, "shaders/prog.frag");
glLinkProgram(myShader);
PrintProgramLog(myShader);
...
void CreateShader(int prog, const GLenum type, const char* file)
{
    int shad = glCreateShader(type);
    char* source = ReadText(file);
    glShaderSource(shad, 1, (const char**)&source, NULL);
    free(source);               // safe: glShaderSource copies the source string
    glCompileShader(shad);
    PrintShaderLog(shad, file);
    glAttachShader(prog, shad);
}
This is what I'm using to get the error:
void ErrCheck(const char* where)
{
int err = glGetError();
if (err) fprintf(stderr,"ERROR: %s [%s]\n",gluErrorString(err),where);
}
And here is what is being printed out at me:
ERROR: invalid value [drawThing]
It happens after I call to use the program:
glUseProgram(_knightShaders[0]);
ErrCheck("drawThing");
or glGetUniformLocation:
glGetUniformLocation(myShader, "modelViewProjection");
ErrCheck("drawThing2");
So I fixed the problem. What I had above wasn't the whole truth; what I actually had was:
myShader[0] = glCreateProgram();
myShader was an array of 4 GLuints, each entry being a different shader program (although at this point they were all copies of the shader program I posted above). The problem was fixed when I stopped using an array and instead used:
GLuint myShader0;
GLuint myShader1;
GLuint myShader2;
GLuint myShader3;
Why this fixed the problem makes no sense to me. It's also pretty annoying, because rather than being able to index the shader I want, such as:
int mode = ... (code to determine which shader to use here)
glUseProgram(myShader[mode]);
I have to instead use conditionals:
int mode = ... (code to determine which shader to use here)
if (mode == 0) glUseProgram(myShader0);
else if (mode == 1) glUseProgram(myShader1);
else if (mode == 2) glUseProgram(myShader2);
else glUseProgram(myShader3);
If any of you know why this fixes the problem, I would very much appreciate the knowledge!