I was writing a GLSL shader to apply texture arrays to a cube when I ran into an issue: layout qualifiers aren't supported in the GLSL version I'm using. The easy fix would be to use a later version, but my system doesn't support the later versions that have layout. Is it possible to get support for later versions of GLSL, and if so, how would I do that?
This is my current GLSL code for applying the textures:
Vertex Shader:
#version 130
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aColor;
layout (location = 2) in vec2 aTexCoord;
out vec3 ourColor;
out vec2 TexCoord;
void main() {
gl_Position = vec4(aPos, 1.0);
ourColor = aColor;
TexCoord = aTexCoord;
}
Fragment Shader:
#version 130
out vec4 FragColor;
in vec3 ourColor;
in vec2 TexCoord;
uniform sampler2DArray texArray; // renamed: "texture" collides with the texture() builtin
uniform int index; // the original "uniform index;" was missing its type
void main()
{
FragColor = texture(texArray, vec3(TexCoord, index));
}
Result of running:
print(glGetString(GL_VENDOR))
print(glGetString(GL_RENDERER))
print(glGetString(GL_VERSION))
print(glGetString(GL_SHADING_LANGUAGE_VERSION))
Intel Open Source Technology Center
Mesa DRI Intel(R) Sandybridge Mobile
3.0 Mesa 18.3.6
1.30
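Depending on the driver, requesting a core profile context may expose a higher GL/GLSL version than the default compatibility context reports. Independent of that, attribute locations can be assigned without layout qualifiers on GLSL 1.30 by binding them from the host side before linking. A minimal sketch in C (the program handle name is a placeholder):
/* Drop the layout qualifiers, keep plain "in" declarations, then: */
glBindAttribLocation(program, 0, "aPos");
glBindAttribLocation(program, 1, "aColor");
glBindAttribLocation(program, 2, "aTexCoord");
glLinkProgram(program); /* bindings take effect at link time */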
When loading a jpg image using SOIL, the image is displayed slanted and the colors are not right (I don't know how to describe it; it is in color, but it looks almost black and white).
The intended version
However, it is displayed like this
The shaders are:
vertex shader
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 1) in vec3 color;
layout (location = 2) in vec2 texCoord;
out vec3 ourColor;
out vec2 TexCoord;
void main(){
gl_Position = vec4(position, 1.0f);
ourColor = color;
TexCoord = texCoord;
}
fragment shader
#version 330 core
in vec3 ourColor;
in vec2 TexCoord;
out vec4 color;
uniform sampler2D ourTexture;
void main(){
color = texture(ourTexture, TexCoord);
}
How do I display it correctly, the intended way?
By default OpenGL assumes that the start of each row of an image is aligned to 4 bytes. This is because the GL_UNPACK_ALIGNMENT parameter defaults to 4. If the format of the image is RGB and width*3 is not divisible by 4, you must change the parameter before specifying the two-dimensional texture image (glTexImage2D):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
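In context, the upload would look like this. A minimal sketch in C, assuming tightly packed RGB data as SOIL returns it (width, height, and image are placeholder names):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1); /* rows are tightly packed */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, image);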
For example, I have code like this (fragment shader):
Option 1
#version 330 core
uniform sampler2D baseImage;
in vec2 texCoord;
out vec3 fragColor;
void main() {
fragColor = texture(baseImage, texCoord).rgb;
// some calculations that don't use fragColor
vec3 calcResult = ...
fragColor = calcResult;
}
Will the compiler remove fragColor = texture(baseImage, texCoord).rgb?
Option 2
#version 330 core
uniform sampler2D baseImage;
in vec2 texCoord;
out vec3 fragColor;
void init0() {
fragColor = texture(baseImage, texCoord).rgb;
}
void init1() {
fragColor = texture(baseImage, texCoord).rgb;
}
void main() {
init0();
init1();
}
Will the compiler use the code from init1()?
What the compiler has to optimize is not specified anywhere. The only thing that is specified is the behavior of active program resources. See OpenGL 4.6 Core Profile Specification - 7.3.1 Program Interfaces, page 102.
The GLSL shader code is compiled by the graphics driver (unless you pre-compile it to SPIR-V), so the exact behavior depends on the driver. A modern graphics driver, however, will perform such optimizations.
Optimizations are switched on by default, but they can be switched off with
#pragma optimize(off)
See OpenGL Shading Language 4.60 Specification - 3.3. Preprocessor
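For illustration, the pragma has to appear outside of function definitions, e.g. near the top of the shader:
#version 330 core
#pragma optimize(off) // disable optimizations for this shader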
I am using a shader which worked in another program (in the same environment, as far as I know) but which now fails to compile for some reason:
// Vertex Shader
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
out vec2 fragmentUV;
uniform mat4 ortho_matrix;
void main()
{
gl_Position = ortho_matrix * vec4(vertexPosition_modelspace, 1);
fragmentUV = vertexUV;
}
// Fragment Shader
#version 330 core
in vec2 fragmentUV;
uniform sampler2D texture;
out vec4 color;
void main()
{
color.rgba = texture(texture, fragmentUV).rgba;
}
It's a super basic shader, and now it starts to throw errors all of a sudden.
Windows 8.1
Nvidia GeForce 1080 (this one is new; maybe that's the problem?)
I'm amazed this compiled at all in a different setting. You've named your texture the same as the function used to make texture lookups. You need to rename uniform sampler2D texture; to something else.
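For example (a sketch; the name tex is an arbitrary choice, and any host-side glGetUniformLocation call has to be updated to match):
#version 330 core
in vec2 fragmentUV;
uniform sampler2D tex; // no longer shadows the texture() builtin
out vec4 color;
void main()
{
    color.rgba = texture(tex, fragmentUV).rgba;
}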
I'm trying to implement some basic lighting and shading, following the tutorials over here and here.
Everything is more or less working, but I get some strange flickering on object surfaces due to the shading.
I have attached two images to show what the problem looks like.
I think the problem is related to the fact that I'm passing vertex coordinates from the vertex shader to the fragment shader to compute some lighting variables, as described in the linked tutorials.
Here is some source code (stripped out unrelated code).
Vertex Shader:
#version 150 core
in vec4 pos;
in vec4 in_col;
in vec2 in_uv;
in vec4 in_norm;
uniform mat4 model_view_projection;
out vec4 out_col;
out vec2 passed_uv;
out vec4 out_vert;
out vec4 out_norm;
void main(void) {
gl_Position = model_view_projection * pos;
out_col = in_col;
out_vert = pos;
out_norm = in_norm;
passed_uv = in_uv;
}
and Fragment Shader:
#version 150 core
uniform sampler2D tex;
uniform mat4 model_mat;
in vec4 in_col;
in vec2 passed_uv;
in vec4 vert_pos;
in vec4 in_norm;
out vec4 col;
void main(void) {
mat3 norm_mat = mat3(transpose(inverse(model_mat)));
vec3 norm = normalize(norm_mat * vec3(in_norm));
vec3 light_pos = vec3(0.0, 6.0, 0.0);
vec4 light_col = vec4(1.0, 0.8, 0.8, 1.0);
vec3 col_pos = vec3(model_mat * vert_pos);
vec3 s_to_f = light_pos - col_pos;
float brightness = dot(norm, normalize(s_to_f));
brightness = clamp(brightness, 0, 1);
gl_FragColor = out_col;
gl_FragColor = vec4(brightness * light_col.rgb * gl_FragColor.rgb, 1.0);
}
As I said earlier, I guess the problem has to do with the way the vertex position is passed to the fragment shader. If I change the position values to something static, no more flickering occurs.
I changed all the other values to static ones, too, with the same result: no flickering as long as I am not using the vertex position data passed from the vertex shader.
So, if there is someone out there with some GL-wisdom .. ;)
Any help would be appreciated.
Side note: running all this stuff on an Intel HD 4000 if that may provide further information.
Thanks in advance!
Ivan
The names of the out variables in the vertex shader and the in variables in the fragment shader need to match. You have this in the vertex shader:
out vec4 out_col;
out vec2 passed_uv;
out vec4 out_vert;
out vec4 out_norm;
and this in the fragment shader:
in vec4 in_col;
in vec2 passed_uv;
in vec4 vert_pos;
in vec4 in_norm;
These variables are associated by name, not by order. Except for passed_uv, the names do not match here. For example, you could use these declarations in the vertex shader:
out vec4 passed_col;
out vec2 passed_uv;
out vec4 passed_vert;
out vec4 passed_norm;
and these in the fragment shader:
in vec4 passed_col;
in vec2 passed_uv;
in vec4 passed_vert;
in vec4 passed_norm;
Based on the way I read the spec, your shader program should actually fail to link. At least in the GLSL 4.50 spec, in the table on page 43, it lists "Link-Time Error" for this situation. The rules seem somewhat ambiguous in earlier specs, though.
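Putting it together, the fragment shader would read as follows. A sketch under one extra assumption: the original also writes to gl_FragColor (and reads the undeclared out_col) while declaring its own output col, and gl_FragColor is not available alongside a user-declared output in a version 150 core shader, so this version writes to the declared output instead:
#version 150 core
uniform sampler2D tex;
uniform mat4 model_mat;
in vec4 passed_col;
in vec2 passed_uv;
in vec4 passed_vert;
in vec4 passed_norm;
out vec4 col;
void main(void) {
    mat3 norm_mat = mat3(transpose(inverse(model_mat)));
    vec3 norm = normalize(norm_mat * vec3(passed_norm));
    vec3 light_pos = vec3(0.0, 6.0, 0.0);
    vec4 light_col = vec4(1.0, 0.8, 0.8, 1.0);
    vec3 col_pos = vec3(model_mat * passed_vert);
    vec3 s_to_f = light_pos - col_pos;
    float brightness = clamp(dot(norm, normalize(s_to_f)), 0.0, 1.0);
    col = vec4(brightness * light_col.rgb * passed_col.rgb, 1.0);
}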
I'm writing an application using OpenGL 4.3 and GLSL, and I need the shader to do basic UV mapping. The problem is that the GLSL compiler seems to be optimising out the UV coordinates; I cannot access them from the application side of things.
Vertex shader:
#version 330 core
uniform mat4 projection;
layout (location = 0) in vec4 position;
layout (location = 1) in vec2 uvCoord;
out vec2 texCoord;
void main(void)
{
texCoord = uvCoord;
gl_Position = position;
}
Fragment shader:
#version 330 core
in vec2 texCoord;
out vec4 color;
uniform sampler2D tex;
void main(void)
{
color = texture2D(tex, texCoord);
}
Both the vertex and fragment shaders compile and link without errors, but when I query the attributes using the following code:
GLint effectPositionLocation = glGetAttribLocation(effect->getEffect(), "position");
GLint effectUVLocation = glGetAttribLocation(effect->getEffect(), "uvCoord");
I get 0 for position and -1 for uvCoord, so I can only assume that uvCoord has been optimised out, even though I am using it to pass data from the vertex shader to the fragment shader.
The result is that the geometry is displayed, but only in black; no texture mapping.
I have written similar applications in Direct3D and HLSL with no problem of attributes being optimised out. I'm thinking that it is something simple that I am forgetting or not doing, but I have not found out what.
Replace texture2D with texture, and your attribute will be used.
Bad GLSL compiler: it should not compile your shader, since texture2D is not available in the core profile.
EDIT: You may also have forgotten to call glEnableVertexAttribArray(1); after setting up your glVertexAttribPointer calls.
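For reference, a minimal sketch of the host-side attribute setup in C (the interleaved layout, four position floats followed by two UV floats, is an assumption based on the shader's declarations):
/* position: location 0, vec4; UV: location 1, vec2 */
GLsizei stride = 6 * sizeof(GLfloat);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, stride, (void*)0);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, stride,
                      (void*)(4 * sizeof(GLfloat)));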