I'm trying to use a shader program that consists of two shaders.
Ortho.vert:
uniform vec2 ViewOrigin;
uniform vec2 ViewSize;
in vec2 Coord;
void main ()
{
    gl_Position = vec4((Coord.x - ViewOrigin.x) / ViewSize.x,
                       1.0f - (Coord.y - ViewOrigin.y) / ViewSize.y,
                       0.0f, 1.0f);
}
Tiles.frag:
uniform sampler2D Texture;
uniform sampler2D LightMap;
in vec2 TextureCoord;
in vec2 LightMapCoord;
void main ()
{
    vec4 textureColor = texture2D(Texture, TextureCoord);
    vec4 lightMapColor = texture2D(LightMap, LightMapCoord);
    gl_FragColor = vec4(textureColor.rgb * lightMapColor.rgb, 1.0f);
}
glGetAttribLocation is returning -1 for TextureCoord and LightMapCoord. I've read about instances where the compiler optimizes away attributes that aren't used, but in this example you can see that they are clearly used. Is there something about the OpenGL state (glEnable, etc.) that is required in order to enable samplers? I'm not sure what else could be wrong here. Any help is appreciated.
Attributes cannot go directly into the fragment shader. Their full name is vertex attributes, which means that they provide values per vertex, and are inputs to the vertex shader.
Fragment shaders can have in variables, but they do not correspond to vertex attributes. They need to match up with out variables of the vertex shader.
So what you need to do to get this working is to define the attributes as inputs to the vertex shader, and then pass the values from the vertex shader to the fragment shader. In the vertex shader, this could look like this:
uniform vec2 ViewOrigin;
uniform vec2 ViewSize;
in vec2 Coord;
in vec2 TextureCoord;
in vec2 LightMapCoord;
out vec2 FragTextureCoord;
out vec2 FragLightMapCoord;
void main ()
{
    gl_Position = vec4((Coord.x - ViewOrigin.x) / ViewSize.x,
                       1.0f - (Coord.y - ViewOrigin.y) / ViewSize.y,
                       0.0f, 1.0f);
    FragTextureCoord = TextureCoord;
    FragLightMapCoord = LightMapCoord;
}
Then in the fragment shader, you declare in variables that match the out variables of the vertex shader. These variables will receive the per-fragment interpolated values of what you wrote to the corresponding out variables in the vertex shader:
uniform sampler2D Texture;
uniform sampler2D LightMap;
in vec2 FragTextureCoord;
in vec2 FragLightMapCoord;
void main ()
{
    vec4 textureColor = texture2D(Texture, FragTextureCoord);
    vec4 lightMapColor = texture2D(LightMap, FragLightMapCoord);
    gl_FragColor = vec4(textureColor.rgb * lightMapColor.rgb, 1.0f);
}
Having to receive the attribute values in the vertex shader, and explicitly passing the exact same values through to the fragment shader, may look cumbersome. The important thing to realize is that this is just a special case of a much more generic mechanism. Instead of simply passing the attribute value directly to the out variable in the vertex shader, you can obviously apply any kind of computation to calculate the values of the out variables, which is often necessary in more complex shaders.
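For completeness, here is a minimal sketch of the matching host-side setup (hypothetical names; it assumes a linked program object named program and a VBO where each vertex is packed as position, texture coordinate and light map coordinate, 2 floats each). Since all three attributes now feed the vertex shader, none of them should be optimized away, and all three locations should come back non-negative:

GLint coordLoc = glGetAttribLocation(program, "Coord");
GLint texLoc   = glGetAttribLocation(program, "TextureCoord");
GLint lightLoc = glGetAttribLocation(program, "LightMapCoord");

GLsizei stride = 6 * sizeof(GLfloat); /* 2 + 2 + 2 floats per vertex */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(coordLoc, 2, GL_FLOAT, GL_FALSE, stride, (void *)0);
glVertexAttribPointer(texLoc,   2, GL_FLOAT, GL_FALSE, stride, (void *)(2 * sizeof(GLfloat)));
glVertexAttribPointer(lightLoc, 2, GL_FLOAT, GL_FALSE, stride, (void *)(4 * sizeof(GLfloat)));
glEnableVertexAttribArray(coordLoc);
glEnableVertexAttribArray(texLoc);
glEnableVertexAttribArray(lightLoc);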
I have been trying to add a heightmap to my terrain shader, but the terrain remains flat. The texture is properly loaded in the vertex shader, and I try to use the greyscale values of the texture, based on the mesh's uvs, to adjust the vertex height:
//DIFFUSE VERTEX SHADER
#version 330
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
in vec3 vertex;
in vec3 normal;
in vec2 uv;
uniform sampler2D heightmap;
out vec2 texCoord;
void main( void ){
    vec3 _vertex = vertex;
    _vertex.y = texture(heightmap, uv).r * 2.f;
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(_vertex, 1.f);
    texCoord = uv;
}
Fragment: (the splatmap works so ignore that)
uniform sampler2D splatmap;
uniform sampler2D diffuse1;
uniform sampler2D diffuse2;
uniform sampler2D diffuse3;
uniform sampler2D diffuse4;
in vec2 texCoord;
out vec4 fragment_color;
void main( void ) {
    ///Loading the splatmap and the diffuse textures
    vec4 splatTexture = texture2D(splatmap, texCoord);
    vec4 diffuseTexture1 = texture2D(diffuse1, texCoord);
    vec4 diffuseTexture2 = texture2D(diffuse2, texCoord);
    vec4 diffuseTexture3 = texture2D(diffuse3, texCoord);
    vec4 diffuseTexture4 = texture2D(diffuse4, texCoord);
    //Interpolate between the different textures using the splatmap's rgb values (works)
    diffuseTexture1 *= splatTexture.r;
    diffuseTexture2 = mix (diffuseTexture1, diffuseTexture2, splatTexture.g);
    diffuseTexture3 = mix (diffuseTexture2, diffuseTexture3, splatTexture.b);
    vec4 outcolor = mix (diffuseTexture3, diffuseTexture4, splatTexture.a);
    fragment_color = outcolor;
}
Some additional info:
All textures are loaded like this in my terrain material and passed to the shader (works properly):
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, heightMap->getId());
glUniform1i (_shader->getUniformLocation("heightMap"),0);
...
The plane mesh uvs are mapped like this:
(0,1) (1,1)
(0,0) (1,0)
I guess I am doing something horribly wrong, but I can't figure out what. Any help is appreciated!
Does your writing this:
The plane mesh uvs are mapped like this:
(0,1) (1,1)
(0,0) (1,0)
… mean that your mesh consists of just 4 vertices? If so, then that's your problem right there: the vertex shader cannot magically create "new" vertices, so your heightmap texture is sampled at only 4 points (and nothing in between).
And because your texture coordinates sit at exactly 0 and 1, with wrapping you're effectively sampling the very same spot in the texture at every corner, so you're going to see the same displacement for all four vertices.
Solution: Tessellate your base mesh so that there are actually vertices available to displace. A tessellation shader is perfectly fine for that.
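To illustrate, here is a minimal sketch of generating a tessellated base mesh on the CPU instead (a hypothetical helper, not from the question's code; it fills an n×n grid of positions and uvs that you would upload to your VBO as usual — the index buffer, two triangles per cell, is omitted for brevity):

/* Fills an n*n vertex grid spanning [0,1]x[0,1] in x/z and in uv space. */
void buildGrid(int n, float *positions /* n*n*3 floats */, float *uvs /* n*n*2 floats */)
{
    for (int row = 0; row < n; ++row) {
        for (int col = 0; col < n; ++col) {
            int   i = row * n + col;
            float u = (float)col / (float)(n - 1);
            float v = (float)row / (float)(n - 1);
            positions[i * 3 + 0] = u;   /* x */
            positions[i * 3 + 1] = 0.f; /* y, displaced by the heightmap in the shader */
            positions[i * 3 + 2] = v;   /* z */
            uvs[i * 2 + 0] = u;
            uvs[i * 2 + 1] = v;
        }
    }
}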
EDIT:
BTW, you can simplify your vertex shader a bit: for the attributes, make it
in vec2 vertex;
which requires just 2/3 of the space of vec3, since you're not using the z component anyway.
float y = texture(heightmap, uv).r * 2.f;
gl_Position = projectionMatrix
            * viewMatrix
            * modelMatrix
            * vec4(vertex.x, y, vertex.y, 1.f);
I'm trying to implement some basic lighting and shading following the tutorial over here and here.
Everything is more or less working but I get some kind of strange flickering on object surfaces due to the shading.
I have two images attached to show you guys how this problem looks.
I think the problem is related to the fact that I'm passing vertex coordinates from vertex shader to fragment shader to compute some lighting variables as stated in the above linked tutorials.
Here is some source code (stripped out unrelated code).
Vertex Shader:
#version 150 core
in vec4 pos;
in vec4 in_col;
in vec2 in_uv;
in vec4 in_norm;
uniform mat4 model_view_projection;
out vec4 out_col;
out vec2 passed_uv;
out vec4 out_vert;
out vec4 out_norm;
void main(void) {
    gl_Position = model_view_projection * pos;
    out_col = in_col;
    out_vert = pos;
    out_norm = in_norm;
    passed_uv = in_uv;
}
and Fragment Shader:
#version 150 core
uniform sampler2D tex;
uniform mat4 model_mat;
in vec4 in_col;
in vec2 passed_uv;
in vec4 vert_pos;
in vec4 in_norm;
out vec4 col;
void main(void) {
    mat3 norm_mat = mat3(transpose(inverse(model_mat)));
    vec3 norm = normalize(norm_mat * vec3(in_norm));
    vec3 light_pos = vec3(0.0, 6.0, 0.0);
    vec4 light_col = vec4(1.0, 0.8, 0.8, 1.0);
    vec3 col_pos = vec3(model_mat * vert_pos);
    vec3 s_to_f = light_pos - col_pos;
    float brightness = dot(norm, normalize(s_to_f));
    brightness = clamp(brightness, 0, 1);
    gl_FragColor = out_col;
    gl_FragColor = vec4(brightness * light_col.rgb * gl_FragColor.rgb, 1.0);
}
As I said earlier, I guess the problem has to do with the way the vertex position is passed to the fragment shader. If I change the position values to something static, no more flickering occurs.
I changed all other values to static ones, too. It's the same result - no flickering as long as I am not using the vertex position data passed from the vertex shader.
So, if there is someone out there with some GL-wisdom .. ;)
Any help would be appreciated.
Side note: I'm running all this on an Intel HD 4000, in case that provides further information.
Thanks in advance!
Ivan
The names of the out variables in the vertex shader and the in variables in the fragment shader need to match. You have this in the vertex shader:
out vec4 out_col;
out vec2 passed_uv;
out vec4 out_vert;
out vec4 out_norm;
and this in the fragment shader:
in vec4 in_col;
in vec2 passed_uv;
in vec4 vert_pos;
in vec4 in_norm;
These variables are associated by name, not by order. Except for passed_uv, the names do not match here. For example, you could use these declarations in the vertex shader:
out vec4 passed_col;
out vec2 passed_uv;
out vec4 passed_vert;
out vec4 passed_norm;
and these in the fragment shader:
in vec4 passed_col;
in vec2 passed_uv;
in vec4 passed_vert;
in vec4 passed_norm;
Based on the way I read the spec, your shader program should actually fail to link. At least in the GLSL 4.50 spec, in the table on page 43, it lists "Link-Time Error" for this situation. The rules seem somewhat ambiguous in earlier specs, though.
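Either way, it is worth checking the link status and info log after linking, since drivers differ in how strictly they enforce this. A minimal sketch (assuming a program object named program):

GLint linked = GL_FALSE;
glLinkProgram(program);
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (linked != GL_TRUE) {
    GLchar log[4096];
    glGetProgramInfoLog(program, sizeof(log), NULL, log);
    fprintf(stderr, "Program link failed: %s\n", log); /* requires <stdio.h> */
}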
Is there any way to get the TexCoords of the vertices inside the fragment shader, or to give them to the fragment shader from the vertex shader, without passing them from the application code to the vertex shader?
If you have read any tutorial, you'd know this is pretty much required for any reasonable texturing.
VS:
attribute vec2 in_texCoord;
out vec2 fs_texCoord;
void main () {
    fs_texCoord = in_texCoord;
}
FS:
in vec2 fs_texCoord;
out vec4 out_color;
void main () {
    out_color = vec4(fs_texCoord, 0.0, 1.0);
}
All of the rules regarding interpolation/filtering still apply.
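For example, if you ever want the value to come through without interpolation, GLSL's interpolation qualifiers apply to exactly this in/out pair (a sketch, GLSL 1.30+; the qualifier must match on both sides):

// Vertex shader
flat out vec2 fs_texCoord; // value taken from the provoking vertex, no interpolation
// Fragment shader
flat in vec2 fs_texCoord;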
My vertex shader looks as follows:
#version 120
uniform float m_thresh;
varying vec2 texCoord;
void main(void)
{
    gl_Position = ftransform();
    texCoord = gl_TexCoord[0].xy;
}
and my fragment shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
varying vec2 texCoord;
void main(void)
{
    vec4 grab = vec4(texture2D(grabTexture, texCoord.xy));
    vec3 colour = vec3(grab.xyz * m_thresh);
    gl_FragColor = vec4( colour, 0.5 );
}
Basically I am getting the error message "Error in shader -842150451 - 0<9> : error C7565: assignment to varying 'texCoord'".
But I have another shader which does the exact same thing and I get no error when I compile that and it works!!!
Any ideas what could be happening?
For starters, there is no sensible reason to construct a vec4 from texture2D (...). Texture functions in GLSL always return a vec4. Likewise, grab.xyz * m_thresh is always a vec3, because a scalar multiplied by a vector does not change the dimensions of the vector.
Now, here is where things get interesting... the gl_TexCoord [n] GLSL built-in you are using is actually a pre-declared varying. You should not be reading from this in a vertex shader, because it defines a vertex shader output / fragment shader input.
The appropriate vertex shader built-in variable in GLSL 1.2 for getting the texture coordinates for texture unit N is actually gl_MultiTexCoord<N>.
Thus, your vertex and fragment shaders should look like this:
Vertex Shader:
#version 120
//varying vec2 texCoord; // You actually do not need this
void main(void)
{
    gl_Position = ftransform();
    //texCoord = gl_MultiTexCoord0.st; // Same as comment above
    gl_TexCoord [0] = gl_MultiTexCoord0;
}
Fragment Shader:
#version 120
uniform float m_thresh;
uniform sampler2D grabTexture;
//varying vec2 texCoord;
void main(void)
{
    //vec4 grab = texture2D (grabTexture, texCoord.st);
    vec4 grab = texture2D (grabTexture, gl_TexCoord [0].st);
    vec3 colour = grab.xyz * m_thresh;
    gl_FragColor = vec4( colour, 0.5 );
}
Remember how I said gl_TexCoord [n] is a built-in varying? You can read/write to this instead of creating your own custom varying vec2 texCoord; in GLSL 1.2. I commented out the lines that used a custom varying to show you what I meant.
The OpenGL® Shading Language (1.2) - 7.6 Varying Variables - pp. 53
The following built-in varying variables are available to write to in a vertex shader. A particular one should be written to if any functionality in a corresponding fragment shader or fixed pipeline uses it or state derived from it.
[...]
varying vec4 gl_TexCoord[]; // at most will be gl_MaxTextureCoords
The OpenGL® Shading Language (1.2) - 7.3 Vertex Shader Built-In Attributes - pp. 49
The following attribute names are built into the OpenGL vertex language and can be used from within a vertex shader to access the current values of attributes declared by OpenGL.
[...]
attribute vec4 gl_MultiTexCoord0;
The bottom line is that gl_MultiTexCoord<N> defines vertex attributes (vertex shader input), gl_TexCoord [n] defines a varying (vertex shader output, fragment shader input). It is also worth mentioning that these are not available in newer (core) versions of GLSL.
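In case it is not obvious where gl_MultiTexCoord0 gets its data from: it is fed by the fixed-function texture coordinate array on the client side. A minimal sketch (assuming a texCoords array holding 2 floats per vertex):

glClientActiveTexture(GL_TEXTURE0);           /* selects the array behind gl_MultiTexCoord0 */
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glTexCoordPointer(2, GL_FLOAT, 0, texCoords); /* 2 floats per vertex, tightly packed */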
Could someone assist me or point me in the right direction to implement the basic FVFs from DirectX in GLSL code? I completely understand how to create a program, apply VBOs and all that, but I'm having great difficulty with the actual creation of the shaders. Namely:
transformed+lit (x,y,color,specular,tu,tv)
lit (x,y,z,color,specular,tu,tv)
unlit (x,y,z,nx,ny,nz,tu,tv) [material/lights]
With this, I'd be given enough to implement far more interesting shaders.
So, I'm not asking for a mechanism to deal with FVFs. I'm simply asking for the shader code, given the proper streams. I understand that the unlit and lit versions rely on passing in matrices, and I completely understand the concept. I am just having trouble finding shader examples showing these concepts.
Okay. If you are having trouble finding working shaders, here is an example (honestly, you can find one in any OpenGL book).
This shader program will use your object's world matrix and the camera's matrices to transform vertices, then map one texture to the pixels and light them with one directional light, according to the material properties and the light direction.
Vertex shader:
#version 330
// Vertex input layout
in vec3 inPosition;
in vec3 inNormal;
in vec4 inVertexCol;
in vec2 inTexcoord;
in vec3 inTangent;
in vec3 inBitangent;
// Output (names must match the fragment shader's in variables)
out vec3 position;
out vec3 normal;
out vec4 vertexColor;
out vec2 texcoord;
out vec3 tangent;
out vec3 bitangent;
// Uniform buffers
layout(std140)
uniform CameraBuffer
{
    mat4 mtxView;
    mat4 mtxProj;
    vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
    mat4 mtxWorld;
};
void main()
{
    // transform position
    vec4 pos = vec4(inPosition, 1.0f);
    pos = mtxWorld * pos;
    position = pos.xyz; // world-space position, used for lighting
    pos = mtxView * pos;
    pos = mtxProj * pos;
    gl_Position = pos;
    // just pass-through other stuff
    normal = inNormal;
    tangent = inTangent;
    bitangent = inBitangent;
    texcoord = inTexcoord;
    vertexColor = inVertexCol;
}
And fragment shader:
#version 330
// Input
in vec3 position;
in vec3 normal;
in vec4 vertexColor;
in vec2 texcoord;
in vec3 tangent;
in vec3 bitangent;
// Output
out vec4 fragColor;
// Uniforms
uniform sampler2D sampler0;
layout(std140)
uniform CameraBuffer
{
    mat4 mtxView;
    mat4 mtxProj;
    vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
    mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
    vec3 lightDirection;
};
struct Material
{
    float Ka; // ambient quotient
    float Kd; // diffuse quotient
    float Ks; // specular quotient
    float A;  // shininess
};
layout(std140)
uniform MaterialBuffer
{
    Material material;
};
// function to calculate pixel lighting
float Lit( Material material, vec3 pos, vec3 nor, vec3 lit, vec3 eye )
{
    vec3 V = normalize( eye - pos );
    vec3 R = reflect( lit, nor );
    float Ia = material.Ka;
    float Id = material.Kd * clamp( dot(nor, -lit), 0.0f, 1.0f );
    float Is = material.Ks * pow( clamp(dot(R, V), 0.0f, 1.0f), material.A );
    return Ia + Id + Is;
}
void main()
{
    vec3 nnormal = normalize(normal);
    vec3 ntangent = normalize(tangent);
    vec3 nbitangent = normalize(bitangent);
    vec4 outColor = texture(sampler0, texcoord); // texture mapping
    outColor *= Lit( material, position, nnormal, lightDirection, cameraPosition ); // lighting
    outColor.w = 1.0f;
    fragColor = outColor;
}
If you don't want texturing, just don't sample the texture; set outColor to vertexColor instead.
If you don't need lighting, just comment out the Lit() call.
Edit:
For 2D objects you can still use the same program, but much of the functionality will be redundant. You can strip out:
camera
light
material
all of the vertex attributes except inPosition and inTexcoord (maybe also inVertexCol, if you need vertices to have color), and all of the code related to the unneeded attributes
inPosition can be a vec2
you will need to pass an orthographic projection matrix instead of a perspective one
you can even strip out the matrices and pass a vertex buffer with positions in pixels. See my answer here about how to transform those pixel positions to screen-space positions (a GLSL sketch follows this list). You can do it either in C/C++ code or in GLSL/HLSL.
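That transform is tiny. A sketch of the GLSL variant (assuming a hypothetical uniform vec2 viewportSize in pixels, with pixel y growing downwards):

// Map pixel coordinates to normalized device coordinates ([-1,1] range).
vec2 ndc = vec2(inPosition.x, viewportSize.y - inPosition.y) / viewportSize * 2.0 - 1.0;
gl_Position = vec4(ndc, 0.0, 1.0);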
Hope it helps somehow.
Intro
You haven't specified the OpenGL/GLSL version you're targeting, so I'll assume it is at least OpenGL 3.
One of the main advantages of the programmable pipeline, compared with the fixed-function pipeline, is the fully customizable vertex input. I'm not quite sure it is a good idea to introduce such constraints as a fixed vertex format. For what?.. (You will find the modern approach in the paragraph "Another way" of my post.)
But, if you really want to emulate fixed-function...
I think you'll need a vertex shader for each vertex format you have, or to somehow generate vertex shaders on the fly. Or even all of the shader stages.
For example, for an x, y, color, tu, tv input you would have a vertex shader such as:
attribute vec2 inPosition;
attribute vec4 inCol;
attribute vec2 inTexcoord;
void main()
{
    ...
}
As OpenGL 3 has no fixed-function transform, lighting, or material functionality, you must implement it yourself:
You must pass matrices for the transformations
For a lit shader you must pass additional variables, such as the light direction
For a material shader you must pass the material properties as input
Typically, in shader, you do it with uniforms or uniform blocks:
layout(std140)
uniform CameraBuffer
{
    mat4 mtxView;
    mat4 mtxProj;
    vec3 cameraPosition;
};
layout(std140)
uniform ObjectBuffer
{
    mat4 mtxWorld;
};
layout(std140)
uniform LightBuffer
{
    vec3 lightDirection;
};
struct Material
{
    float Ka;
    float Kd;
    float Ks;
    float A;
};
layout(std140)
uniform MaterialBuffer
{
    Material material;
};
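On the host side, each std140 block is backed by a buffer object attached to a binding point. A minimal sketch for the CameraBuffer block (hypothetical names; the CameraData struct must be laid out to match the std140 rules, e.g. a vec3 is padded to 16 bytes):

GLuint blockIndex = glGetUniformBlockIndex(program, "CameraBuffer");
glUniformBlockBinding(program, blockIndex, 0); /* block -> binding point 0 */

GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(CameraData), &cameraData, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);   /* buffer -> binding point 0 */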
You can probably somehow combine all of the shaders with different formats, uniforms, etc. into one big ubershader with branching.
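A crude fragment shader sketch of that idea, branching on a hypothetical ShadingMode uniform:

#version 330
uniform int ShadingMode;     // 0 = unlit, 1 = directional light (hypothetical switch)
uniform vec3 lightDirection;
in vec3 normal;
in vec4 vertexColor;
out vec4 fragColor;
void main()
{
    float shade = 1.0;
    if (ShadingMode == 1)
        shade = clamp(dot(normalize(normal), -lightDirection), 0.0, 1.0);
    fragColor = vec4(vertexColor.rgb * shade, vertexColor.a);
}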
Another way
You can stick to the modern approach and just allow the user to declare the vertex format they want (the format they used in their shader). Just implement a concept similar to IDirect3DDevice9::CreateVertexDeclaration or ID3D11Device::CreateInputLayout: you will make use of glVertexAttribPointer() and, probably, VAOs. This way you can also abstract out the vertex layout in an API-independent way.
The main ideas are:
the user passes an array of structures that describes the format in an API-independent way to your function (this struct can be similar to D3DVERTEXELEMENT9 or D3D11_INPUT_ELEMENT_DESC)
that function interprets the array's elements one by one and builds some kind of internal info that describes the format in an API-specific way (such as IDirect3DVertexDeclaration9 for D3D9, ID3D11InputLayout for D3D11, or a custom struct or VAO for OpenGL)
when it's time to set the vertex format, you just use this info (see the sketch after this list)
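A minimal OpenGL sketch of those steps (hypothetical struct and function names; it assumes all attributes are interleaved in a single VBO):

typedef struct {
    GLuint index;      /* attribute location            */
    GLint  components; /* e.g. 3 for a vec3             */
    GLenum type;       /* e.g. GL_FLOAT                 */
    size_t offset;     /* byte offset within one vertex */
} VertexElement;

void applyVertexLayout(GLuint vao, GLuint vbo, GLsizei stride,
                       const VertexElement *elems, int count)
{
    glBindVertexArray(vao); /* the VAO records the layout set below */
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    for (int i = 0; i < count; ++i) {
        glEnableVertexAttribArray(elems[i].index);
        glVertexAttribPointer(elems[i].index, elems[i].components,
                              elems[i].type, GL_FALSE, stride,
                              (const void *)elems[i].offset);
    }
}

At draw time, binding the VAO is then all it takes to "set the vertex format".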
P.S. If you need ideas on how to properly implement lighting and materials in GLSL (I mean the algorithms here), you'd better pick up a book or some online tutorials than ask here. Or just Google "GLSL lighting".
You may find these links interesting:
Good resources for learning modern OpenGL (3.0 or later)?
OpenGL documentation
Select Books on OpenGL and 3D Graphics Coding
Happy coding!