For example, I have code like this (fragment shader):
Option 1
#version 330 core
uniform sampler2D baseImage;
in vec2 texCoord;
out vec3 fragColor;
void main() {
fragColor = texture(baseImage, texCoord).rgb;
// some calculations that don't use fragColor
vec3 calcResult = ...
fragColor = calcResult;
}
Will the compiler remove fragColor = texture(baseImage, texCoord).rgb?
Option 2
#version 330 core
uniform sampler2D baseImage;
in vec2 texCoord;
out vec3 fragColor;
void init0() {
fragColor = texture(baseImage, texCoord).rgb;
}
void init1() {
fragColor = texture(baseImage, texCoord).rgb;
}
void main() {
init0();
init1();
}
Will the compiler use the code from init1()?
What the compiler has to optimize is not specified.
The only thing that is specified is the behavior of active program resources. See OpenGL 4.6 Core Profile Specification - 7.3.1 Program Interfaces, page 102.
GLSL shader code is compiled by the graphics driver (unless you generate SPIR-V), so the answer depends on the driver. But a modern graphics driver will typically do such optimizations: in Option 1 the first assignment to fragColor is a dead store that can be removed, and in Option 2 one of the two identical assignments can be eliminated.
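A practical consequence of the active-resource rule: if the optimizer removes every read of baseImage, that uniform stops being an active resource, which you can observe from the API. A minimal C sketch, assuming program is a valid, linked program handle:
/* If the dead texture fetch in Option 1 is eliminated, "baseImage" is no
   longer an active uniform and glGetUniformLocation() returns -1. */
GLint loc = glGetUniformLocation(program, "baseImage");
if (loc == -1) {
    /* the sampler was optimized away together with the dead store */
}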
By default, optimizations are switched on, but they can be switched off with
#pragma optimize(off)
See OpenGL Shading Language 4.60 Specification - 3.3. Preprocessor
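For illustration, a minimal sketch of Option 1 with optimizations disabled. Per section 3.3 the pragma may only appear outside function definitions, and how much it actually influences code generation is left to the implementation:
#version 330 core
#pragma optimize(off) // hint: keep even dead stores like the first fragColor write
uniform sampler2D baseImage;
in vec2 texCoord;
out vec3 fragColor;
void main() {
    fragColor = texture(baseImage, texCoord).rgb; // now less likely to be removed
    fragColor = vec3(0.0);
}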
I was writing a GLSL shader to apply a texture array to a cube when I ran into an issue: layout isn't supported in the version I'm using. The easy fix would be to use a later version, but I don't have support for the later versions that include layout. I was wondering: is it possible to get support for later versions of GLSL, and if so, how would I do that?
This is my current GLSL code for applying textures
Vertex Shader:
#version 130
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aColor;
layout (location = 2) in vec2 aTexCoord;
out vec3 ourColor;
out vec2 TexCoord;
void main() {
gl_Position = vec4(aPos, 1.0);
ourColor = aColor;
TexCoord = aTexCoord;
}
Fragment Shader:
#version 130
out vec4 FragColor;
in vec3 ourColor;
in vec2 TexCoord;
uniform sampler2DArray texture;
uniform int index;
void main()
{
FragColor = texture(texture, vec3(TexCoord, index));
}
Result of running:
print(glGetString(GL_VENDOR))
print(glGetString(GL_RENDERER))
print(glGetString(GL_VERSION))
print(glGetString(GL_SHADING_LANGUAGE_VERSION))
Intel Open Source Technology Center
Mesa DRI Intel(R) Sandybridge Mobile
3.0 Mesa 18.3.6
1.30
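For what it's worth, the only feature in these shaders that seems to need a newer version is the layout qualifiers; with GLSL 1.30 you can drop the three layout(...) lines and bind the attribute locations from the application instead, before linking. A hedged C sketch, assuming program is your shader program object. (Note that the uniform named texture would also need renaming, since GLSL 1.30 added a built-in function of the same name; a later question below runs into exactly that.)
/* GLSL 1.30 alternative to layout(location = N): bind the locations
   from the host side, then link. */
glBindAttribLocation(program, 0, "aPos");
glBindAttribLocation(program, 1, "aColor");
glBindAttribLocation(program, 2, "aTexCoord");
glLinkProgram(program);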
I'm trying to translate some old OpenGL code to modern OpenGL. This code is reading data from a texture and displaying it. The fragment shader is currently created using ARB_fragment_program commands:
static const char *gl_shader_code =
"!!ARBfp1.0\n"
"TEX result.color, fragment.texcoord, texture[0], RECT; \n"
"END";
GLuint program_id;
glGenProgramsARB(1, &program_id);
glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, program_id);
glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, (GLsizei) strlen(gl_shader_code ), (GLubyte *) gl_shader_code );
I'd simply like to translate this into GLSL code. I think the fragment shader should look something like this:
#version 430 core
uniform sampler2DRect s;
void main(void)
{
gl_FragColor = texture2DRect(s, ivec2(gl_FragCoord.xy), 0);
}
But I'm not sure of a couple of details:
Is this the right usage of texture2DRect?
Is this the right usage of gl_FragCoord?
The texture is being fed with a pixel buffer object using GL_PIXEL_UNPACK_BUFFER target.
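For context, that kind of PBO-driven upload might look roughly like this; a hedged C sketch where pbo, tex, width, height, size and pixels are placeholders, the format/type pair is an assumption, and the texture is assumed to already have storage:
/* With a buffer bound to GL_PIXEL_UNPACK_BUFFER, the data argument of
   glTexSubImage2D is a byte offset into that buffer, not a client pointer. */
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
glBufferData(GL_PIXEL_UNPACK_BUFFER, size, pixels, GL_STREAM_DRAW);
glBindTexture(GL_TEXTURE_2D, tex);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, (const void *)0);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);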
I think you can just use the standard sampler2D instead of sampler2DRect (if you do not have a real need for it) since, quoting the wiki, "From a modern perspective, they (rectangle textures) seem essentially useless.".
You can then change your texture2DRect(...) to texture(...) or texelFetch(...) (to mimic your rectangle fetching).
Since you seem to be using OpenGL 4, you do not need to (should not ?) use gl_FragColor but instead declare an out variable and write to it.
Your fragment shader should look something like this in the end:
#version 430 core
uniform sampler2D s;
out vec4 out_color;
void main(void)
{
out_color = texelFetch(s, ivec2(gl_FragCoord.xy), 0); // integer texel coords, explicit LOD
}
@Zouch, thank you very much for your response. I took it and worked on this for a bit. My final code was very similar to what you suggested. For the record, the final vertex and fragment shaders I implemented were as follows:
Vertex Shader:
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
out vec2 UV;
uniform mat4 MVP;
void main()
{
gl_Position = MVP * vec4(vertexPosition_modelspace, 1);
UV = vertexUV;
}
Fragment Shader:
#version 330 core
in vec2 UV;
out vec3 color;
uniform sampler2D myTextureSampler;
void main()
{
color = texture(myTextureSampler, UV).rgb; // texture() replaces the deprecated texture2D() in core profile
}
That seemed to work.
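For completeness, the host-side code that feeds those two uniforms might look like this; a sketch where program, mvp and tex are assumed names:
GLint mvp_loc = glGetUniformLocation(program, "MVP");
GLint tex_loc = glGetUniformLocation(program, "myTextureSampler");
glUseProgram(program);
glUniformMatrix4fv(mvp_loc, 1, GL_FALSE, mvp); /* mvp: column-major float[16] */
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glUniform1i(tex_loc, 0); /* the sampler reads from texture unit 0 */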
I used a shader that worked in another program (in the same environment, AFAIK) but now it fails to compile for some reason:
// Vertex Shader
#version 330 core
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
out vec2 fragmentUV;
uniform mat4 ortho_matrix;
void main()
{
gl_Position = ortho_matrix * vec4(vertexPosition_modelspace, 1);
fragmentUV = vertexUV;
}
// Fragment Shader
#version 330 core
in vec2 fragmentUV;
uniform sampler2D texture;
out vec4 color;
void main()
{
color.rgba = texture(texture, fragmentUV).rgba;
}
It's a super basic shader, and now it throws errors all of a sudden.
Windows 8.1
Nvidia GeForce 1080 (this one is new, maybe that's the problem?)
This is what's being output by Visual Studio:
uniform sampler2D texture;
out vec4 color;
void main()
{
color.rgba = texture(texture, fragmentUV).rgba;
}
I'm amazed this compiled at all in a different setting. You've named your texture the same as the function used to make texture lookups. You need to rename uniform sampler2D texture; to something else.
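A fixed version of the fragment shader might look like this; the name baseTexture is just a placeholder, any identifier that doesn't collide with a built-in works:
#version 330 core
in vec2 fragmentUV;
uniform sampler2D baseTexture; // renamed: no longer shadows the texture() built-in
out vec4 color;
void main()
{
    color = texture(baseTexture, fragmentUV);
}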
There's a uniform vec3 in my shader that causes some odd behavior. If I use it in any way inside the shader - even if it has no actual effect on anything - the shader breaks and nothing that uses it is rendered.
This is the (vertex) shader:
#version 330 core
layout(std140) uniform ViewProjection
{
mat4 V;
mat4 P;
};
layout(location = 0) in vec3 vertexPosition_modelspace;
smooth out vec3 UVW;
uniform mat4 M;
uniform vec3 cameraPosition;
void main()
{
vec3 vtrans = vec3(vertexPosition_modelspace.x,vertexPosition_modelspace.y,vertexPosition_modelspace.z);
// if(cameraPosition.y == 123456)
// {}
mat4 MVP = P *V *M;
vec4 MVP_Pos = MVP *vec4(vtrans,1);
gl_Position = MVP_Pos;
UVW = vertexPosition_modelspace;
}
If I use it like this, it works fine, but as soon as I uncomment the commented lines, the shader breaks. There's no error on compiling or linking the shader, glGetError() reports no errors either. It happens if 'cameraPosition' is used in ANY way, even if it's meaningless.
This only happens on my laptop however, which is running OpenGL 3.1. On my PC with OpenGL 4.* I don't have this issue.
What's going on here?
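One way to get more diagnostics than glGetError() offers is to read the compile and link logs explicitly. A minimal C fragment, assuming shader and program are valid GL handles:
GLint ok = GL_FALSE;
char log[1024];
/* per-stage compile log */
glGetShaderiv(shader, GL_COMPILE_STATUS, &ok);
if (!ok) {
    glGetShaderInfoLog(shader, sizeof log, NULL, log);
    fprintf(stderr, "compile: %s\n", log);
}
/* whole-program link log */
glGetProgramiv(program, GL_LINK_STATUS, &ok);
if (!ok) {
    glGetProgramInfoLog(program, sizeof log, NULL, log);
    fprintf(stderr, "link: %s\n", log);
}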
Some info about my graphic card:
GL_RENDERER: Intel(R) G41 Express Chipset
OpenGL_VERSION: 2.1.0 - Build 8.15.10.1986
GLSL_VERSION: 1.20 - Intel Build 8.15.10.1986
Vertex shader 1:
#version 110
attribute vec3 vertexPosition_modelspace;
varying vec3 normal;
varying vec3 vertex;
void light(inout vec3 ver, out vec3 nor);
void main()
{
gl_Position = vec4(vertexPosition_modelspace, 1.0);
light(vertex, normal);
}
Vertex shader 2:
#version 110
void light(inout vec3 ver, out vec3 nor)
{
ver = vec3(0.0,1.0,0.0);
//vec3 v = -ver; // wrong line
nor = vec3(0.0,0.0,1.0);
//float f = dot(ver, nor); // wrong line
}
Fragment shader:
#version 110
varying vec3 normal;
varying vec3 vertex;
void main()
{
gl_FragColor = vec4(vertex, 1.0);
}
These shaders work well as long as the two marked lines in the second vertex shader stay commented out. However, once either of them is enabled, we get an error. The error occurs in the OpenGL function glDrawArrays.
It seems that an out/inout parameter cannot be used as an r-value.
I have run the same program on an Intel HD Graphics 3000, whose OpenGL version is 3.1 and GLSL version is 1.40, and there the program works well. Is this a bug in Intel's driver, or am I using something incorrectly?
Because the Intel G41 is an extremely weak GPU.
The only way around it is to upgrade your GPU.
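If replacing the GPU is not an option, a workaround that sometimes helps with fragile drivers is to do all arithmetic in local variables and write the out/inout parameters only once, at the end. A hedged GLSL sketch of the second vertex shader, not a guaranteed fix:
#version 110
void light(inout vec3 ver, out vec3 nor)
{
    vec3 v = vec3(0.0, 1.0, 0.0); // work in locals...
    vec3 n = vec3(0.0, 0.0, 1.0);
    vec3 neg = -v;                // ...which can be read freely
    float f = dot(v, n);
    ver = v;                      // parameters are written exactly once
    nor = n;
}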