I used a shader that worked in another program (in the same environment, as far as I know), but now it fails to compile for some reason:
// Vertex Shader
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
out vec2 fragmentUV;
uniform mat4 ortho_matrix;
void main()
{
gl_Position = ortho_matrix * vec4(vertexPosition_modelspace, 1);
fragmentUV = vertexUV;
}
// Fragment Shader
#version 330 core
in vec2 fragmentUV;
uniform sampler2D texture;
out vec4 color;
void main()
{
color.rgba = texture(texture, fragmentUV).rgba;
}
It's a super basic shader, and now it suddenly starts throwing errors.
Windows 8.1
Nvidia GeForce 1080 (this one is new; maybe that's the problem?)
This is what's being output by Visual Studio:
uniform sampler2D texture;
out vec4 color;
void main()
{
color.rgba = texture(texture, fragmentUV).rgba;
}
I'm amazed this compiled at all in a different setting. You've given your sampler the same name as the built-in function used to make texture lookups, so the call texture(texture, fragmentUV) no longer resolves. You need to rename uniform sampler2D texture; to something else.
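For example, a fixed version of the fragment shader with the sampler renamed to tex (the name itself is arbitrary):
// Fragment Shader
#version 330 core
in vec2 fragmentUV;
uniform sampler2D tex; // renamed: "texture" shadowed the built-in lookup function
out vec4 color;
void main()
{
    color = texture(tex, fragmentUV);
}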
Related
I was making a GLSL shader to apply texture arrays to a cube when I ran into an issue: layout isn't supported in the GLSL version I'm using. The easy fix would be to use a later version, but my hardware doesn't support any later version that does support layout. So I was wondering: is it possible to get support for later versions of GLSL, and if so, how would I do that?
This is my current GLSL code for applying textures
Vertex Shader:
#version 130
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aColor;
layout (location = 2) in vec2 aTexCoord;
out vec3 ourColor;
out vec2 TexCoord;
void main() {
gl_Position = vec4(aPos, 1.0);
ourColor = aColor;
TexCoord = aTexCoord;
}
Fragment Shader:
#version 130
out vec4 FragColor;
in vec3 ourColor;
in vec2 TexCoord;
uniform sampler2DArray texture;
uniform index;
void main()
{
FragColor = texture(texture, vec3(TexCoord, index));
}
Result of running:
print(glGetString(GL_VENDOR))
print(glGetString(GL_RENDERER))
print(glGetString(GL_VERSION))
print(glGetString(GL_SHADING_LANGUAGE_VERSION))
Intel Open Source Technology Center
Mesa DRI Intel(R) Sandybridge Mobile
3.0 Mesa 18.3.6
1.30
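Note: two fixes covered elsewhere on this page apply to the shaders above. Explicit attribute locations via layout(location = ...) require #version 330, so under 1.30 the qualifiers have to be dropped and the locations bound or queried from the host code; and a sampler shouldn't share the name of the built-in texture function. A minimal 1.30-compatible sketch of the fragment shader (renaming the sampler to tex and typing the untyped index uniform as int are my assumptions):
#version 130
out vec4 FragColor;
in vec3 ourColor;
in vec2 TexCoord;
uniform sampler2DArray tex; // renamed from "texture", which shadows the built-in lookup
uniform int index;          // the original "uniform index;" was missing a type
void main()
{
    FragColor = texture(tex, vec3(TexCoord, float(index)));
}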
I want to port the Box2D demo project from VS2013 to Qt, but I have one unusual problem. When I run the demo project in VS, I get a message with info about my OpenGL configuration.
I ported the code to Qt and replaced the shader initialization and the OpenGL function calls with the QOpenGL wrappers.
First, I change the default surface format like this:
QSurfaceFormat format;
format.setVersion(3, 1);
format.setProfile(QSurfaceFormat::CoreProfile);
QSurfaceFormat::setDefaultFormat(format);
But when I debug the program, the format of my surface stays at 3.0.
And when I try to create the shaders, I get an error message like this:
QOpenGLShader::compile(Vertex): ERROR: 0:1: '140' : version number not supported
ERROR: 0:2: 'layout' : syntax error
*** Problematic Vertex shader source code ***
#version 140
#line 1
uniform mat4 projectionMatrix;
layout(location = 0) in vec2 v_position;
layout(location = 1) in vec4 v_color;
layout(location = 2) in float v_size;
out vec4 f_color;
void main(void)
{
f_color = v_color;
gl_Position = projectionMatrix * vec4(v_position, 0.0f, 1.0f);
gl_PointSize = v_size;
}
Why can VS create the shaders and work with OpenGL 3.1/GLSL 1.40, while Qt can't?
GL initialization (m_functions is a QOpenGLExtraFunctions):
void MainOpenGLWidget::initializeGL()
{
makeCurrent();
qDebug() << "OPENGL " << QSurfaceFormat::defaultFormat().majorVersion() <<
"." << QSurfaceFormat::defaultFormat().minorVersion();
m_functions.initializeOpenGLFunctions();
m_functions.glClearColor(0, 0, 0, 0);
g_camera.m_width = 1024;
g_camera.m_height = 640;
g_debugDraw.Create(&m_functions);
}
The g_debugDraw.Create(...) function:
void Create(QOpenGLExtraFunctions *functions)
{
const char *vs = \
"#version 140\n"
"uniform mat4 projectionMatrix;\n"
"layout(location = 0) in vec2 v_position;\n"
"layout(location = 1) in vec4 v_color;\n"
"layout(location = 2) in float v_size;\n"
"out vec4 f_color;\n"
"void main(void)\n"
"{\n"
" f_color = v_color;\n"
" gl_Position = projectionMatrix * vec4(v_position, 0.0f, 1.0f);\n"
" gl_PointSize = v_size;\n"
"}\n";
...
m_program.addShaderFromSourceCode(QOpenGLShader::Vertex, vs);
...
m_functions = *functions;
m_functions.glGenVertexArrays(1, &m_vaoId);
m_functions.glGenBuffers(3, m_vboIds);
m_functions.glBindVertexArray(m_vaoId);
m_functions.glEnableVertexAttribArray(m_vertexAttribute);
m_functions.glEnableVertexAttribArray(m_colorAttribute);
m_functions.glEnableVertexAttribArray(m_sizeAttribute);
...
}
I get the error when calling m_program.addShaderFromSourceCode(QOpenGLShader::Vertex, vs);
If you really want to stick to GLSL #version 140, then replace the following lines
layout(location = 0) in vec2 v_position;
layout(location = 1) in vec4 v_color;
layout(location = 2) in float v_size;
by
in vec2 v_position;
in vec4 v_color;
in float v_size;
Explicit attribute location is not part of GLSL until #version 330.
Be careful after that to query your shader's attribute locations before enabling them in your C++ code. The shader compiler may or may not assign them in declaration order.
const int POS_LOCATION = glGetAttribLocation(program_id, "v_position");
if(-1 != POS_LOCATION)
{
glEnableVertexAttribArray(POS_LOCATION);
...
glVertexAttribPointer(POS_LOCATION, ... );
...
}
I've found no mention of layout(location = x) in the documentation for GLSL #version 140:
https://www.khronos.org/registry/OpenGL/specs/gl/GLSLangSpec.1.40.pdf
It is not mentioned until #version 330, on page 35:
https://www.khronos.org/registry/OpenGL/specs/gl/GLSLangSpec.3.30.pdf
In your case the shader compiler's error message is a little misleading. Maybe if you update your graphics card driver you'll get a better shader compiler.
I hadn't taken into account that desktop GLSL is not the same as OpenGL ES GLSL. This topic helped me a little. I changed every in to attribute and every out to varying, and also set the precision on float. After that change, my shader compiles. My final shader code looks like this:
precision lowp float;
uniform mat4 projectionMatrix;
attribute vec2 v_position;
attribute vec4 v_color;
attribute float v_size;
varying vec4 f_color;
void main(void)
{
f_color = v_color;
gl_Position = projectionMatrix * vec4(v_position, 0.0, 1.0);
gl_PointSize = v_size;
}
Thanks to Octo for the support!
I'm trying to translate some old OpenGL code to modern OpenGL. This code is reading data from a texture and displaying it. The fragment shader is currently created using ARB_fragment_program commands:
static const char *gl_shader_code =
"!!ARBfp1.0\n"
"TEX result.color, fragment.texcoord, texture[0], RECT; \n"
"END";
GLuint program_id;
glGenProgramsARB(1, &program_id);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, program_id);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, (GLsizei) strlen(gl_shader_code ), (GLubyte *) gl_shader_code );
I'd simply like to translate this into GLSL code. I think the fragment shader should look something like this:
#version 430 core
uniform sampler2DRect s;
void main(void)
{
gl_FragColor = texture2DRect(s, ivec2(gl_FragCoord.xy), 0);
}
But I'm not sure of a couple of details:
Is this the right usage of texture2DRect?
Is this the right usage of gl_FragCoord?
The texture is being fed with a pixel buffer object using GL_PIXEL_UNPACK_BUFFER target.
I think you can just use the standard sampler2D instead of sampler2DRect (if you do not have a real need for it) since, quoting the wiki, "From a modern perspective, they (rectangle textures) seem essentially useless."
You can then change your texture2DRect(...) to texture(...) or texelFetch(...) (to mimic your rectangle fetching).
Since you seem to be using OpenGL 4, you do not need to (and arguably should not) use gl_FragColor; instead, declare an out variable and write to it.
Your fragment shader should look something like this in the end:
#version 430 core
uniform sampler2D s;
out vec4 out_color;
void main(void)
{
out_color = texelFetch(s, ivec2(gl_FragCoord.xy), 0);
}
@Zouch, thank you very much for your response. I took it and worked on this for a bit. My final code was very similar to what you suggested. For the record, the final vertex and fragment shaders I implemented were as follows:
Vertex Shader:
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
out vec2 UV;
uniform mat4 MVP;
void main()
{
gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
UV = vertexUV;
}
Fragment Shader:
#version 330 core
in vec2 UV;
out vec3 color;
uniform sampler2D myTextureSampler;
void main()
{
color = texture2D(myTextureSampler, UV).rgb;
}
That seemed to work.
There's a uniform vec3 in my shader that causes some odd behavior. If I use it in any way inside the shader, even if it has no actual effect on anything, the shader breaks and nothing that uses it is rendered.
This is the (vertex) shader:
#version 330 core
layout(std140) uniform ViewProjection
{
mat4 V;
mat4 P;
};
layout(location = 0) in vec3 vertexPosition_modelspace;
smooth out vec3 UVW;
uniform mat4 M;
uniform vec3 cameraPosition;
void main()
{
vec3 vtrans = vec3(vertexPosition_modelspace.x,vertexPosition_modelspace.y,vertexPosition_modelspace.z);
// if(cameraPosition.y == 123456)
// {}
mat4 MVP = P *V *M;
vec4 MVP_Pos = MVP *vec4(vtrans,1);
gl_Position = MVP_Pos;
UVW = vertexPosition_modelspace;
}
If I use it like this, it works fine, but as soon as I uncomment the commented lines, the shader breaks. There's no error on compiling or linking the shader, and glGetError() reports no errors either. It happens if cameraPosition is used in ANY way, even if the use is meaningless.
This only happens on my laptop, however, which is running OpenGL 3.1. On my PC with OpenGL 4.* I don't have this issue.
What's going on here?
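For anyone debugging something similar: a driver can accept a shader yet still emit useful warnings, so it's worth pulling the compile and link info logs explicitly instead of relying on glGetError(). A minimal sketch using standard OpenGL calls (shader and program stand for the object IDs in question; this is not from the original post):
GLint ok = GL_FALSE;
GLchar log[1024] = {0};
glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
glGetShaderInfoLog(shader, sizeof(log), NULL, log); // may hold warnings even on success
fprintf(stderr, "compile status %d, log: %s\n", ok, log);
glGetProgramiv(program, GL_LINK_STATUS, &ok);
glGetProgramInfoLog(program, sizeof(log), NULL, log);
fprintf(stderr, "link status %d, log: %s\n", ok, log);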
I'm currently trying out some of the OpenGL components in Qt 5. I'm compiling on Mac OS X 10.8 with Qt Creator 2.6.2 and clang 4.2.
I've written a very basic GLSL shader that compiles and links well in OpenGL Shader Builder, but when I try to load it using a QGLShader, it fails to compile and the log function returns no error message.
Vertex Shader:
attribute vec4 in_position;
attribute vec3 in_normal;
attribute vec3 in_color;
attribute vec2 in_tex;
uniform vec4 lightPosition;
varying vec2 texCoords;
varying vec3 normal;
varying vec3 vertToLightDir;
void main(void)
{
gl_Position = gl_ModelViewProjectionMatrix * in_position;
vec4 worldVert = gl_ModelViewMatrix * in_position;
vertToLightDir = normalize(vec3(lightPosition - worldVert));
normal = gl_NormalMatrix * in_normal;
texCoords = in_tex;
}
Fragment Shader:
uniform sampler2D texture0;
uniform vec4 lightColor;
varying vec2 texCoords;
varying vec3 normal;
varying vec3 vertToLightDir;
void main(void)
{
float lightIntensity = clamp(dot(normal, vertToLightDir), 0.0, 1.0);
gl_FragColor = (texture2D(texture0, texCoords) + (lightColor * lightIntensity)) * 0.5;
}
The code that loads the shaders:
QGLShader fragShader(QGLShader::Fragment);
bool success = fragShader.compileSourceFile("Fragment.glsl");
qDebug() << fragShader.log();
Using the debugger, I saw that the compileSourceFile function returns false. I also used access("Fragment.glsl", F_OK) to check whether the program can find the file, and it does; the same goes for the vertex shader file. I can't seem to find the reason they won't compile. Is there something I'm doing wrong?
After some debugging I realized I was trying to compile the shaders before a valid context was created. My bad.
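In other words, QGLShader objects have to be created and compiled while a valid GL context is current, for example inside initializeGL(), which Qt only calls once the context exists. A minimal sketch, assuming a QGLWidget subclass (MyGLWidget is an illustrative name):
void MyGLWidget::initializeGL()
{
    // By the time Qt calls initializeGL(), the widget's context is current,
    // so compiling the shader here succeeds (or at least yields a real log).
    QGLShader fragShader(QGLShader::Fragment);
    if (!fragShader.compileSourceFile("Fragment.glsl"))
        qDebug() << fragShader.log();
}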