I wonder whether I should worry about unused variables in GLSL.
Consider the following situation (which is just sample code for illustration).
If "trigger" is set (1), "position" and "normal" are not used in the fragment shader.
Are "position" and "normal" then discarded, or are they still interpolated by the rasterizer?
Vertex shader:
#version 330

layout(location = 0) in vec3 vertice;
layout(location = 1) in vec2 uv;
layout(location = 2) in vec3 normal;

out VertexData {
    vec3 position;
    vec2 uv;
    vec3 normal;
    flat int trigger; // interface-block members cannot be bool, so a flat int is used
} VertexOut;

void main() {
    gl_Position = vec4(vertice, 1.0);
    VertexOut.uv = uv;
    if (uv.x > 0.0) {
        VertexOut.trigger = 1;
    }
    else {
        VertexOut.trigger = 0;
        VertexOut.position = vertice;
        VertexOut.normal = normal;
    }
}
Fragment shader:
#version 330

in VertexData {
    vec3 position;
    vec2 uv;
    vec3 normal;
    flat int trigger;
} VertexIn;

out vec4 color;

uniform sampler2D tex;

void main() {
    if (VertexIn.trigger == 1) {
        color = texture(tex, VertexIn.uv);
    }
    else {
        vec4 result = texture(tex, VertexIn.uv);
        /*
        Lighting with position and normal...
        */
        color = result;
    }
}
It depends on the hardware. On newer hardware, a smart compiler can avoid interpolating/fetching these values. Older hardware doesn't let the fragment shader control these operations, so the interpolation would be done anyway (interpolating from an undefined value, which is still undefined).
A conditional operation isn't free either, especially if your fragment branches are highly divergent.
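If the branch is constant per draw call, a common alternative (just a sketch, not something the answer above prescribes) is to compile two program variants and strip the unused inputs at compile time, so the question never arises. The application would inject "#define USE_LIGHTING" right after the #version line for the lit variant; the vertex shader must be compiled with the same define so the interface blocks keep matching:
#version 330
// Hypothetical lit variant: the application inserts "#define USE_LIGHTING" here;
// the unlit variant omits it, so position/normal and the branch vanish entirely.

in VertexData {
    vec2 uv;
#ifdef USE_LIGHTING
    vec3 position;
    vec3 normal;
#endif
} VertexIn;

out vec4 color;
uniform sampler2D tex;

void main() {
    vec4 result = texture(tex, VertexIn.uv);
#ifdef USE_LIGHTING
    // Lighting with position and normal...
#endif
    color = result;
}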
Related
I have code trying to upload data into an OpenGL shader, but when I call glGetAttribLocation() for the attribute array I am looking for, it always returns -1 as the location (thus not found). I have no idea how to debug this issue in the first place, since the variables are in the code (albeit the vertex shader only passes them on to the geometry shader).
Can someone help me figure out why glGetAttribLocation returns not found? Other items, like worldMatrix for example, work just fine when using glGetUniformLocation().
C++ code trying to get the attribute location:
for (unsigned int i = 0; i < _nNumCameras; ++i) {
    const auto glName = "texcoords[" + std::to_string(i) + "]";
    const auto location = glGetAttribLocation(id(), glName.c_str());
    if (location == -1) {
        continue;
    }
    // ...
}
Vertex Shader:
#version 430
#define NUM_CAMERAS 3

uniform mat4 worldMatrix;
uniform mat4 viewProjMatrix;

layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in float radius;
in vec2 texcoords[NUM_CAMERAS];
in uint cameraIds[NUM_CAMERAS];

out VS_OUT {
    vec2 v_texCoords[NUM_CAMERAS];
    vec3 v_normal;
    uint cameraIDs[NUM_CAMERAS];
} vs_out;

void main()
{
    gl_Position = vec4(position, 1.0);
    gl_Position = worldMatrix * gl_Position;
    gl_Position = viewProjMatrix * gl_Position;
    vs_out.v_texCoords = texcoords;
    vs_out.cameraIDs = cameraIds;
    vs_out.v_normal = normal;
}
Geometry Shader:
#version 430
#define NUM_CAMERAS 3

layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;

in VS_OUT {
    vec2 v_texCoords[NUM_CAMERAS];
    vec3 v_normal;
    uint cameraIDs[NUM_CAMERAS];
} gs_in[];

out GS_OUT {
    vec2 v_texcoord;
} gs_out;

flat out uint camera_id_unique;

void main() {
    // Code selecting best camera texture
    ...
    ///
    gl_Position = gl_in[0].gl_Position;
    gs_out.v_texcoord = gs_in[0].v_texCoords[camera_id_unique];
    EmitVertex();
    gl_Position = gl_in[1].gl_Position;
    gs_out.v_texcoord = gs_in[1].v_texCoords[camera_id_unique];
    EmitVertex();
    gl_Position = gl_in[2].gl_Position;
    gs_out.v_texcoord = gs_in[2].v_texCoords[camera_id_unique];
    EmitVertex();
    EndPrimitive();
}
Fragment Shader:
#version 430
#define NUM_CAMERAS 3

uniform sampler2D colorTextures[NUM_CAMERAS];

in GS_OUT {
    vec2 v_texcoord;
} fs_in;

flat in uint camera_id_unique;

out vec4 color;

void main() {
    color = texture(colorTextures[camera_id_unique], fs_in.v_texcoord);
}
Arrayed program resources work in different ways depending on whether they are arrays of basic types or arrays of structs (or arrays of arrays). Resources that are arrays of basic types only expose the entire array as a single resource, whose name is "name[0]" and which has an explicit array size (if you query that property). Other arrayed resources expose separate names for each array element.
Since texcoords is an array of basic types, there is no "texcoords[2]" or "texcoords[1]"; there is only "texcoords[0]". Arrayed attributes are always assigned contiguous locations, so the locations for indices 1 and 2 are simply 1 or 2 plus the location of index 0.
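So the lookup only needs the base name; a minimal sketch against the question's loop (reusing its hypothetical id() and _nNumCameras):
// Arrayed attributes occupy contiguous locations, so query "texcoords[0]"
// once and offset from it for the remaining elements.
const GLint base = glGetAttribLocation(id(), "texcoords[0]");
if (base != -1) {
    for (unsigned int i = 0; i < _nNumCameras; ++i) {
        const GLint location = base + static_cast<GLint>(i);
        // ... set up the vertex attribute for element i at 'location' ...
    }
}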
I have been trying to implement parallax-corrected local cubemaps in OpenGL for a little while now, but I've not really managed to get anywhere. Does anybody know where to start?
Here is my current shader code:
Fragment:
#version 330

in vec2 TexCoord;
in vec3 Normal;
in vec3 Position;
in vec3 Color;

out vec4 color;

uniform samplerCube CubeMap;
uniform vec3 CameraPosition;

void main() {
    vec4 OutColor = vec4(Color, 1.0);
    vec3 normal = normalize(Normal);
    vec3 view = normalize(Position - CameraPosition);
    vec3 ReflectionVector = reflect(view, normal);
    vec4 ReflectionColor = texture(CubeMap, ReflectionVector);
    OutColor = mix(OutColor, ReflectionColor, 0.5);
    color = OutColor;
}
Vertex:
#version 330

layout (location = 0) in vec3 position;
layout (location = 2) in vec3 normal;
layout (location = 1) in vec2 texCoord;
layout (location = 3) in vec3 color;

out vec2 TexCoord;
out vec3 Normal;
out vec3 Position;
out vec3 Color;

uniform mat4 ModelMatrix;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;

void main()
{
    gl_Position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(position, 1.0f);
    TexCoord = texCoord;
    Normal = normal;
    Position = vec4(ModelMatrix * vec4(position,1.0)).xyz;
    Color = color;
}
Position = vec4(ModelMatrix * vec4(position,1.0)).xyz;
This puts the Position output variable into world space. Assuming the CameraPosition uniform is also in world space, then:
vec3 view = normalize(Position-CameraPosition);
view will also be in world space.
However, Normal comes directly from the vertex attribute. So unless you are updating each vertex's Normal value every time you rotate the object, that value will probably be in model space. And therefore:
vec3 ReflectionVector = reflect(view,normal);
This statement is incoherent. view is in one space, while normal is in another. And you can't really perform reasonable operations on vectors that are in different coordinate systems.
You need to make sure that Normal, Position, and CameraPosition are all in the same space. If you want that space to be world space, then you need to transform Normal into world space. Which in the general case requires computing the inverse/transpose of your model-to-world matrix.
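A minimal sketch of that transform in the question's vertex shader (in real code you would precompute this matrix on the CPU and pass it as a uniform rather than inverting per vertex):
// The inverse-transpose of the model matrix keeps the normal perpendicular
// to the surface even under non-uniform scaling.
Normal = mat3(transpose(inverse(ModelMatrix))) * normal;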
I created two shaders for my program to render simple objects.
Vertex shader source:
#version 400 core

layout (location = 1) in vec4 i_vertexCoords;
layout (location = 2) in vec3 i_textureCoords;
layout (location = 3) in vec3 i_normalCoords;
layout (location = 4) in int i_material;

uniform mat4 u_transform;

out VertexData {
    vec3 textureCoords;
    vec3 normalCoords;
    int material;
} vs_out;

void main() {
    vs_out.textureCoords = i_textureCoords;
    vs_out.material = i_material;
    gl_Position = u_transform * i_vertexCoords;
    vs_out.normalCoords = gl_Position.xyz;
}
Fragment shader source:
#version 400 core

struct MaterialStruct {
    int ambientTexture;
    int diffuseTexture;
    int specularTexture;
    int bumpTexture;
    vec4 ambientColor;
    vec4 diffuseColor;
    vec4 specularColor;
    float specularComponent;
    float alpha;
    int illuminationModel;
};

in VertexData {
    vec3 textureCoords;
    vec3 normalCoords;
    int material;
} vs_out;

layout (std140) uniform MaterialsBlock {
    MaterialStruct materials[8];
} u_materials;

uniform sampler2D u_samplers[16];

out vec4 fs_color;

void main() {
    MaterialStruct m = u_materials.materials[vs_out.material];
    fs_color = vec4(m.diffuseColor.rgb, m.diffuseColor.a * m.alpha);
}
The program created with these two shaders renders picture 2.
When I change the main() function contents to the following:
void main() {
    MaterialStruct m = u_materials.materials[vs_out.material];
    fs_color = vec4(m.diffuseColor.rgb * (vs_out.normalCoords.z + 0.5), m.diffuseColor.a * m.alpha);
}
it renders picture 1, but the materials still exist (if I select a material from u_materials.materials manually, it works). The shader acts as if vs_out.material were constant and equal to 0, but it isn't. The data has not changed (excluding the transformation matrix).
Could someone explain the solution to this problem?
The GLSL 4.5 spec states in section 4.3.4 "Input Variables":
Fragment shader inputs that are signed or unsigned integers, integer vectors, or any double-precision
floating-point type must be qualified with the interpolation qualifier flat.
You can't use interpolation with those types, and actually, your code shouldn't compile on a strict implementation.
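A minimal sketch of the fix, applied to both interface blocks so they continue to match:
// Vertex shader
out VertexData {
    vec3 textureCoords;
    vec3 normalCoords;
    flat int material;   // integer members must be qualified flat
} vs_out;

// Fragment shader
in VertexData {
    vec3 textureCoords;
    vec3 normalCoords;
    flat int material;
} vs_out;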
Currently I am trying to implement bump mapping for my OpenGL project.
The problem is that some parts of my cube are black, like shown in this picture:
I am almost certain that I just don't understand how the shaders work, so I used the shader from the OpenGL SuperBible.
Here is my code:
Vertex Shader
#version 330

layout(location = 0) in vec4 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec2 texCoord;
layout(location = 3) in vec3 tangent;
layout(location = 4) in vec3 bitangent;

uniform mat4 matModel;
uniform mat4 matNormal;
uniform mat4 matMVP;
uniform mat4 matMV;
uniform vec3 light_pos = vec3(0.0, 0.0, 100.0);

out VS_OUT {
    vec3 eyeDir;
    vec3 lightDir;
    vec2 fragTexCoord;
} vs_out;

void main()
{
    vec4 P = matMV * position;
    vec3 L = normalize(light_pos - P.xyz);
    vec3 N = normalize(mat3(matNormal) * normal);
    vec3 T = normalize(mat3(matNormal) * tangent);
    vec3 B = cross(N, T);
    vs_out.lightDir = normalize(vec3(dot(L, T),
                                     dot(L, B),
                                     dot(L, N)));
    vec3 V = -P.xyz;
    vs_out.eyeDir = normalize(vec3(dot(V, T),
                                   dot(V, B),
                                   dot(V, N)));
    vs_out.fragTexCoord = texCoord;
    gl_Position = matMVP * position;
}
And the fragment shader:
#version 330

uniform sampler2D diffuseTex;
uniform sampler2D heightTex;
uniform vec3 heightColor;

in vec3 fragNormal;
in vec2 fragTexCoord;
in vec3 tangent_out_normalized;
in vec3 bitangent_out;
in vec3 normal_out_normalized;

in VS_OUT {
    vec3 eyeDir;
    vec3 lightDir;
    vec2 fragTexCoord;
} fs_in;

out vec4 outputColor;

void main()
{
    vec3 V = normalize(fs_in.eyeDir);
    vec3 L = normalize(fs_in.lightDir);
    vec3 N = normalize(texture(heightTex, fs_in.fragTexCoord).rgb * 2.0 - vec3(1.0));
    vec3 R = reflect(-L, N);
    vec3 diffuse_albedo = texture(diffuseTex, fs_in.fragTexCoord).rgb;
    vec3 diffuse = max(dot(N, L), 0.0) * diffuse_albedo;
    vec3 specular_albedo = vec3(1.0);
    vec3 specular = max(pow(dot(R, V), 25.0), 0.0) * specular_albedo;
    outputColor = vec4(diffuse + specular, 1.0);
}
What am I missing?
It seems very wrong that you don't use fragNormal anywhere. You should use it to rotate the texture normal. To make it obvious: if the bump map is flat, you should still get the usual surface lighting.
The next thing that looks suspect is the unpacking, where you multiply your bump normal by 2 and subtract {1,1,1}. The normal should never flip, and I suspect that is what is going on in your case. When it flips, you will suddenly go from lit to shadowed, and that might cause the black areas.
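As a sketch of one guard against the flip (this assumes the texture really stores a tangent-space normal whose z component should point out of the surface):
vec3 N = texture(heightTex, fs_in.fragTexCoord).rgb * 2.0 - vec3(1.0);
N.z = abs(N.z);      // keep the normal on the outward-facing hemisphere
N = normalize(N);    // renormalize after filtering/unpacking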
Hi, I'm new to GLSL and I'm having a few problems.
I'm trying to create a pair of GLSL shaders that use either color or texture, but I must be doing something wrong.
The problem is that if I set uUseTexture to 0 (which should indicate color), it doesn't work (the object is not colored). I know the coloring code works separately; any hints why it does not work with the if statement?
Here is the code:
// Fragment
precision mediump float;

uniform int uUseTexture;
uniform sampler2D uSampler;

varying vec4 vColor;
varying vec2 vTextureCoord;

void main(void) {
    if (uUseTexture == 1) {
        gl_FragColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
    } else {
        gl_FragColor = vColor;
    }
}
// Vertex
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
attribute vec2 aTextureCoord;

uniform int uUseTexture;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;

varying vec4 vColor;
varying vec2 vTextureCoord;

void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    if (uUseTexture == 1) {
        vTextureCoord = aTextureCoord;
    } else {
        vColor = aVertexColor;
    }
}
Nothing springs to mind immediately glancing over your code, but I'd like to take a moment to point out that this use case can be covered without needing an if statement. For example, let's treat uUseTexture as a float instead of an int (you could cast it in the shader, but this is more interesting):
// Vertex
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
attribute vec2 aTextureCoord;

uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;

varying vec4 vColor;
varying vec2 vTextureCoord;

void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    // It may actually be faster to just assign both of these anyway
    vTextureCoord = aTextureCoord;
    vColor = aVertexColor;
}
// Fragment
precision mediump float; // a default float precision is required in WebGL fragment shaders

uniform float uUseTexture;
uniform sampler2D uSampler;

varying vec4 vColor;
varying vec2 vTextureCoord;

void main(void) {
    // vTextureCoord is already a vec2, BTW
    vec4 texColor = texture2D(uSampler, vTextureCoord) * uUseTexture;
    vec4 vertColor = vColor * (1.0 - uUseTexture);
    gl_FragColor = texColor + vertColor;
}
Now uUseTexture simply acts as a modulator for how much of each color source you want to use. And it's more flexible in that you could set it to 0.5 and get half texture/half vertex color too!
The thing that may surprise you is that there's a good likelihood that this is what the shader compiler is doing behind the scenes anyway when you use an if statement like that. It's typically more efficient for the hardware that way.
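Host-side, the blend factor is then just a float uniform (a sketch in C, assuming prog is your linked program object):
// 0.0 = vertex color only, 1.0 = texture only, 0.5 = an even mix of both.
GLint loc = glGetUniformLocation(prog, "uUseTexture");
glUniform1f(loc, 0.5f);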