I'm using a scene graph for my scene. It contains lights and meshes. Meshes have local transforms, and if a node is the child of a mesh node, its own local transform gets composed with the transform of its parent.
Now I am trying to implement lights. I send the light coordinates (world-space coordinates) as a uniform to my shader.
Do I have to transform these light coordinates? It does not make sense to me to transform them with a transform that is specific to one mesh.
Do I need to send another uniform matrix to the shader to transform the lights in a different way?
My code:
#version 330
const int numOfLights = 2;
// shader input
in vec2 vUV; // vertex uv coordinate
in vec3 vNormal; // untransformed vertex normal
in vec3 vPosition; // untransformed vertex position
uniform vec3 pointLights[numOfLights];
// shader output
out vec4 normal; // transformed vertex normal
out vec2 uv;
out vec4 position;
out vec3 lights[numOfLights];
uniform mat4 transform; //transform unique for this mesh containing the camera transformation
// vertex shader
void main()
{
position = transform * vec4(vPosition, 1.0);
// transform vertex using supplied matrix
gl_Position = position;
// forward normal and uv coordinate; will be interpolated over triangle
normal = transform * vec4( vNormal, 0.0f );
uv = vUV;
for (int i = 0; i < numOfLights; i++) {
//lights[i] = (transform * vec4(pointLights[i], 1.0)).xyz; //**Transform**?
lights[i] = pointLights[i];
}
}
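For illustration, here is a minimal sketch of what such a split could look like, assuming the single transform uniform is broken into a per-mesh model matrix and shared view/projection matrices (these uniform names are assumptions, not part of the original code). The lights are transformed with the view matrix only, never with the per-mesh model matrix, so vertex and light end up in the same space:
#version 330
const int numOfLights = 2;
in vec3 vPosition;
uniform mat4 model;        // per-mesh transform from the scene graph (assumed name)
uniform mat4 view;         // camera transform, shared by all meshes (assumed name)
uniform mat4 projection;   // projection, shared by all meshes (assumed name)
uniform vec3 pointLights[numOfLights]; // world-space light positions
out vec4 position;                     // vertex position in view space
out vec3 lights[numOfLights];          // light positions in view space
void main()
{
    position    = view * model * vec4(vPosition, 1.0);
    gl_Position = projection * position;
    for (int i = 0; i < numOfLights; i++)
        lights[i] = (view * vec4(pointLights[i], 1.0)).xyz;
}
Alternatively, lighting can be done entirely in world space: then the lights need no transform at all, and only the vertex position and normal are brought into world space with the model matrix.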
Related
I have a deferred renderer which appears to work correctly: depth, colour and shading come out correctly. However, while the position buffer is fine for an orthographic projection, the geometry appears 'inverted' (or as if depth testing were disabled) when using a perspective projection.
I am getting the following buffer outputs for orthographic.
With the final 'shaded' image currently looking correct.
However when I am using a perspective projection I get the following buffers coming out...
And the final image is fine, although I don't incorporate any position-buffer information at the moment (N.B. only doing 'headlight' shading for now).
While the final image appears correct, the depth buffer appears to be ignored for my position buffer (there is no glDisable(GL_DEPTH_TEST) in the code).
The depth and normal buffers look OK to me; it's only the 'position' buffer which appears to be ignoring the depth. The render pipeline is exactly the same for ortho and perspective, with the only difference being the projection matrix.
I use glm::ortho, and glm::perspective and I calculate my near/far clipping distances on the fly based on the scene AABB. For orthographic my near/far is 1 & 11.4734 respectively, and for perspective it is 11.0875 & 22.5609... The width and height values are the same, fov is 45 for perspective projection.
I do have these calls before drawing any geometry...
glEnable(GL_DEPTH_TEST);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
Which I use for compositing different layers as part of the render pipeline.
Am I doing anything wrong here? or am I misunderstanding something?
Here are my shaders...
Vertex shader of gBuffer...
#version 430 core
layout (std140) uniform MatrixPV
{
mat4 P;
mat4 V;
};
layout(location = 0) in vec3 InPoint;
layout(location = 1) in vec3 InNormal;
layout(location = 2) in vec2 InUV;
uniform mat4 M;
out vec4 Position;
out vec3 Normal;
out vec2 UV;
void main()
{
mat4 VM = V * M;
gl_Position = P * VM * vec4(InPoint, 1.0);
Position = P * VM * vec4(InPoint, 1.0);
Normal = mat3(M) * InNormal;
UV = InUV;
}
Fragment shader of gBuffer...
#version 430 core
layout(location = 0) out vec4 gBufferPicker;
layout(location = 1) out vec4 gBufferPosition;
layout(location = 2) out vec4 gBufferNormal;
layout(location = 3) out vec4 gBufferDiffuse;
in vec3 Normal;
in vec4 Position;
vec4 Diffuse();
uniform vec4 PickerColour;
void main()
{
gBufferPosition = Position;
gBufferNormal = vec4(Normal.xyz, 1.0);
gBufferPicker = PickerColour;
gBufferDiffuse = Diffuse();
}
And here is the 'second pass' shader to visualise the position buffer...
#version 430 core
uniform sampler2D debugBufferPosition;
in vec2 UV;
out vec4 frag;
void main()
{
vec3 val = texture(debugBufferPosition, UV).xyz;
frag = vec4(val.xyz, 1.0);
}
I haven't used the position buffer data yet, and I know I can reconstruct positions without storing them in another buffer; however, the positions are useful to me for other reasons, and I would like to know why they come out as they do for perspective.
What you actually write to the position buffer is the clip-space coordinate:
Position = P * VM * vec4(InPoint, 1.0);
The clip-space coordinate is a homogeneous coordinate; it is transformed to the normalized device coordinate (which is a Cartesian coordinate) by the perspective divide:
ndc = gl_Position.xyz / gl_Position.w;
With an orthographic projection the w component is 1, but with a perspective projection the w component depends on the z component (depth) of the (Cartesian) view-space coordinate (for the usual OpenGL perspective matrix, w_clip = -z_view).
I recommend storing the normalized device coordinate in the position buffer rather than the clip-space coordinate, e.g.:
gBufferPosition = vec4(Position.xyz / Position.w, 1.0);
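Alternatively (a sketch, not something from the code above): store the Cartesian view-space position instead, which also behaves the same for both projection types, by keeping the projection out of the varying:
// gBuffer vertex shader (sketch): pass the view-space position
mat4 VM = V * M;
Position    = VM * vec4(InPoint, 1.0);  // Cartesian view-space position, w stays 1.0
gl_Position = P * Position;             // clip-space position only for rasterization
The fragment shader's gBufferPosition = Position; then works identically for orthographic and perspective projections.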
I would like to create a shader like that: one that takes world coordinates and creates waves. I would like to analyse the video and understand the steps that are required.
I'm not looking for code, I'm just looking for ideas on how to implement that using GLSL or HLSL or any other language.
Here is a low-quality, low-FPS GIF in case the link breaks.
Here is the fragment shader:
#version 330 core
// Interpolated values from the vertex shaders
in vec2 UV;
in vec3 Position_worldspace;
in vec3 Normal_cameraspace;
in vec3 EyeDirection_cameraspace;
in vec3 LightDirection_cameraspace;
// highlight effect
in float pixel_z; // fragment z coordinate in [LCS]
uniform float animz; // highlight animation z coordinate [GCS]
// Output data
out vec4 color;
vec3 c;
// Values that stay constant for the whole mesh.
uniform sampler2D myTextureSampler;
uniform mat4 MV;
uniform vec3 LightPosition_worldspace;
void main(){
// Light emission properties
// You probably want to put them as uniforms
vec3 LightColor = vec3(1,1,1);
float LightPower = 50.0f;
// Material properties
vec3 MaterialDiffuseColor = texture( myTextureSampler, UV ).rgb;
vec3 MaterialAmbientColor = vec3(0.1,0.1,0.1) * MaterialDiffuseColor;
vec3 MaterialSpecularColor = vec3(0.3,0.3,0.3);
// Distance to the light
float distance = length( LightPosition_worldspace - Position_worldspace );
// Normal of the computed fragment, in camera space
vec3 n = normalize( Normal_cameraspace );
// Direction of the light (from the fragment to the light)
vec3 l = normalize( LightDirection_cameraspace );
// Cosine of the angle between the normal and the light direction,
// clamped above 0
// - light is at the vertical of the triangle -> 1
// - light is perpendicular to the triangle -> 0
// - light is behind the triangle -> 0
float cosTheta = clamp( dot( n,l ), 0,1 );
// Eye vector (towards the camera)
vec3 E = normalize(EyeDirection_cameraspace);
// Direction in which the triangle reflects the light
vec3 R = reflect(-l,n);
// Cosine of the angle between the Eye vector and the Reflect vector,
// clamped to 0
// - Looking into the reflection -> 1
// - Looking elsewhere -> < 1
float cosAlpha = clamp( dot( E,R ), 0,1 );
c =
// Ambient : simulates indirect lighting
MaterialAmbientColor +
// Diffuse : "color" of the object
MaterialDiffuseColor * LightColor * LightPower * cosTheta / (distance*distance) +
// Specular : reflective highlight, like a mirror
MaterialSpecularColor * LightColor * LightPower * pow(cosAlpha,5) / (distance*distance);
float z;
z=abs(pixel_z-animz); // distance to animated z coordinate
z*=1.5; // scale to change highlight width
if (z<1.0)
{
z*=0.5*3.1415926535897932384626433832795; // z=<0,M_PI/2> 0 in the middle
z=0.5*cos(z);
c+=vec3(0.0,z,z); // modulate c, not color: color is a vec4, and the assignment below would discard the highlight anyway
}
color=vec4(c,1.0);
}
here is the vertex shader:
#version 330 core
// Input vertex data, different for all executions of this shader.
layout(location = 0) in vec3 vertexPosition_modelspace;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexNormal_modelspace;
// Output data ; will be interpolated for each fragment.
out vec2 UV;
out vec3 Position_worldspace;
out vec3 Normal_cameraspace;
out vec3 EyeDirection_cameraspace;
out vec3 LightDirection_cameraspace;
out float pixel_z; // fragment z coordinate in [LCS]
// Values that stay constant for the whole mesh.
uniform mat4 MVP;
uniform mat4 V;
uniform mat4 M;
uniform vec3 LightPosition_worldspace;
void main(){
pixel_z=vertexPosition_modelspace.z;
// Output position of the vertex, in clip space : MVP * position
gl_Position = MVP * vec4(vertexPosition_modelspace,1);
// Position of the vertex, in worldspace : M * position
Position_worldspace = (M * vec4(vertexPosition_modelspace,1)).xyz;
// Vector that goes from the vertex to the camera, in camera space.
// In camera space, the camera is at the origin (0,0,0).
vec3 vertexPosition_cameraspace = ( V * M * vec4(vertexPosition_modelspace,1)).xyz;
EyeDirection_cameraspace = vec3(0,0,0) - vertexPosition_cameraspace;
// Vector that goes from the vertex to the light, in camera space. M is ommited because it's identity.
vec3 LightPosition_cameraspace = ( V * vec4(LightPosition_worldspace,1)).xyz;
LightDirection_cameraspace = LightPosition_cameraspace + EyeDirection_cameraspace;
// Normal of the the vertex, in camera space
Normal_cameraspace = ( V * M * vec4(vertexNormal_modelspace,0)).xyz; // Only correct if ModelMatrix does not scale the model ! Use its inverse transpose if not.
// UV of the vertex. No special space for this one.
UV = vertexUV;
}
there are 2 approaches I can think of for this:
3D reconstruction based
so you need to reconstruct the 3D scene from motion (not an easy task, and well outside my cup of tea). Then you simply apply modulation to the selected mesh texture based on u,v texture-mapping coordinates and the animation time.
Describing such a topic will not fit in an SO answer, so you should google some CV books/papers on the subject instead.
Image processing based
you simply segment the image based on color continuity/homogeneity, i.e. you group neighboring pixels that have similar color and intensity (region growing). When done, try to fake a 3D surface reconstruction based on intensity gradients, similar to this:
Turn any 2D image into 3D printable sculpture with code
and after that create u,v mapping where one axis is depth.
When done, just apply your sine-wave effect as a modulation of the color (see the sketch below).
I would divide this into 2 stages: the 1st pass segments the image (I would choose the CPU side for this) and the 2nd renders the effect (on the GPU).
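For the effect-rendering stage, the modulation itself can be a small fragment-shader sketch along these lines (all names here are assumptions; the segmentation stage is assumed to provide a faked per-fragment depth coordinate):
#version 330 core
in vec2 screenUV;        // fragment position in the camera image (assumed input)
in float surfDepth;      // faked depth coordinate from the segmentation stage (assumed input)
uniform sampler2D scene; // original camera image (assumed uniform)
uniform float time;      // animation time in seconds (assumed uniform)
out vec4 frag;
void main()
{
    vec3 c = texture(scene, screenUV).rgb;
    // travelling sine wave along the faked depth axis
    float w = 0.5 + 0.5 * sin(6.2831853 * (4.0 * surfDepth - time));
    frag = vec4(c + vec3(0.0, 0.25 * w, 0.25 * w), 1.0); // cyan-ish highlight, like the effect below
}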
As this is a form of augmented reality you should also read this:
Augment reality like zookazam
Btw, what is done in that video is neither of the above options. They most likely already have the mesh of that car in vector form and use silhouette matching to obtain its orientation in the image ... then render it as usual ... so it would not work for an arbitrary object in the scene, only for that car ... Something like this:
How to get the transformation matrix of a 3d model to object in a 2d image
[Edit1] GLSL highlight effect
I took this example:
complete GL+GLSL+VAO/VBO C++ example
And added the highlight to it like this:
On the CPU side I added an animz variable.
It determines the z coordinate in the object's local coordinate system (LCS) where the highlight is placed, and I animate it in a timer between the min and max z value of the rendered mesh (cube) +/- some margin so the highlight does not teleport at once from one side of the object to the other...
// global
float animz=-1.0;
// in timer
animz+=0.05; if (animz>1.5) animz=-1.5; // my object z = <-1,+1> 0.5 is margin
// render
id=glGetUniformLocation(prog_id,"animz"); glUniform1f(id,animz);
Vertex shader
I just take the vertex z coordinate and pass it, untransformed, to the fragment shader:
out float pixel_z; // fragment z coordinate in [LCS]
pixel_z=pos.z;
Fragment shader
After computing the target color c (by standard rendering) I compute the distance between pixel_z and animz, and if it is small I modulate c with a sine wave that depends on the distance.
// highlight effect
float z;
z=abs(pixel_z-animz); // distance to animated z coordinate
z*=1.5; // scale to change highlight width
if (z<1.0)
{
z*=0.5*3.1415926535897932384626433832795; // z=<0,M_PI/2> 0 in the middle
z=0.5*cos(z);
c+=vec3(0.0,z,z);
}
Here are the full GLSL shaders...
Vertex:
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location = 0) in vec3 pos;
layout(location = 2) in vec3 nor;
layout(location = 3) in vec3 col;
layout(location = 0) uniform mat4 m_model; // model matrix
layout(location =16) uniform mat4 m_normal; // model matrix with origin=(0,0,0)
layout(location =32) uniform mat4 m_view; // inverse of camera matrix
layout(location =48) uniform mat4 m_proj; // projection matrix
out vec3 pixel_pos; // fragment position [GCS]
out vec3 pixel_col; // fragment surface color
out vec3 pixel_nor; // fragment surface normal [GCS]
// highlight effect
out float pixel_z; // fragment z coordinate in [LCS]
void main()
{
pixel_z=pos.z;
pixel_col=col;
pixel_pos=(m_model*vec4(pos,1)).xyz;
pixel_nor=(m_normal*vec4(nor,1)).xyz;
gl_Position=m_proj*m_view*m_model*vec4(pos,1);
}
Fragment:
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location =64) uniform vec3 lt_pnt_pos;// point light source position [GCS]
layout(location =67) uniform vec3 lt_pnt_col;// point light source color&strength
layout(location =70) uniform vec3 lt_amb_col;// ambient light source color&strength
in vec3 pixel_pos; // fragment position [GCS]
in vec3 pixel_col; // fragment surface color
in vec3 pixel_nor; // fragment surface normal [GCS]
out vec4 col;
// highlight effect
in float pixel_z; // fragment z coordinate in [LCS]
uniform float animz; // highlight animation z coordinate [GCS]
void main()
{
// standard rendering
float li;
vec3 c,lt_dir;
lt_dir=normalize(lt_pnt_pos-pixel_pos); // vector from fragment to point light source in [GCS]
li=dot(pixel_nor,lt_dir);
if (li<0.0) li=0.0;
c=pixel_col*(lt_amb_col+(lt_pnt_col*li));
// highlight effect
float z;
z=abs(pixel_z-animz); // distance to animated z coordinate
z*=1.5; // scale to change highlight width
if (z<1.0)
{
z*=0.5*3.1415926535897932384626433832795; // z=<0,M_PI/2> 0 in the middle
z=0.5*cos(z);
c+=vec3(0.0,z,z);
}
col=vec4(c,1.0);
}
And preview:
This approach does not require textures nor u,v mapping.
[Edit2] highlight with start point
There are many ways to implement it. I chose the distance from the start point as the highlight parameter, so the highlight grows from that point in all directions. Here is a preview for two different touch point locations:
The bold white cross is the location of the touch point, rendered as a visual check. Here is the code:
Vertex:
// Vertex
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location = 0) in vec3 pos;
layout(location = 2) in vec3 nor;
layout(location = 3) in vec3 col;
layout(location = 0) uniform mat4 m_model; // model matrix
layout(location =16) uniform mat4 m_normal; // model matrix with origin=(0,0,0)
layout(location =32) uniform mat4 m_view; // inverse of camera matrix
layout(location =48) uniform mat4 m_proj; // projection matrix
out vec3 LCS_pos; // fragment position [LCS]
out vec3 pixel_pos; // fragment position [GCS]
out vec3 pixel_col; // fragment surface color
out vec3 pixel_nor; // fragment surface normal [GCS]
void main()
{
LCS_pos=pos;
pixel_col=col;
pixel_pos=(m_model*vec4(pos,1)).xyz;
pixel_nor=(m_normal*vec4(nor,1)).xyz;
gl_Position=m_proj*m_view*m_model*vec4(pos,1);
}
Fragment:
// Fragment
#version 400 core
#extension GL_ARB_explicit_uniform_location : enable
layout(location =64) uniform vec3 lt_pnt_pos;// point light source position [GCS]
layout(location =67) uniform vec3 lt_pnt_col;// point light source color&strength
layout(location =70) uniform vec3 lt_amb_col;// ambient light source color&strength
in vec3 LCS_pos; // fragment position [LCS]
in vec3 pixel_pos; // fragment position [GCS]
in vec3 pixel_col; // fragment surface color
in vec3 pixel_nor; // fragment surface normal [GCS]
out vec4 col;
// highlight effect
uniform vec3 touch; // highlight start point [GCS]
uniform float animt; // animation parameter <0,1> or -1 for off
uniform float size; // highlight size
void main()
{
// standard rendering
float li;
vec3 c,lt_dir;
lt_dir=normalize(lt_pnt_pos-pixel_pos); // vector from fragment to point light source in [GCS]
li=dot(pixel_nor,lt_dir);
if (li<0.0) li=0.0;
c=pixel_col*(lt_amb_col+(lt_pnt_col*li));
// highlight effect
float t=length(LCS_pos-touch)/size; // distance from start point
if (t<=animt)
{
t*=0.5*3.1415926535897932384626433832795; // z=<0,M_PI/2> 0 in the middle
t=0.75*cos(t);
c+=vec3(0.0,t,t);
}
col=vec4(c,1.0);
}
You control this with uniforms:
uniform vec3 touch; // highlight start point [GCS]
uniform float animt; // animation parameter <0,1> or -1 for off
uniform float size; // max distance of any point of object from touch point
I have been trying to add a heightmap to my terrain shader, but the terrain remains flat. The texture is properly loaded in the vertex shader, and I try to use the greyscale values of the texture, sampled at the mesh's UVs, to adjust the vertex height:
//DIFFUSE VERTEX SHADER
#version 330
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
in vec3 vertex;
in vec3 normal;
in vec2 uv;
uniform sampler2D heightmap;
out vec2 texCoord;
void main( void ){
vec3 _vertex = vertex;
_vertex.y = texture(heightmap, uv).r * 2.f;
gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(_vertex, 1.f);
texCoord = uv;
}
Fragment: (the splatmap works so ignore that)
uniform sampler2D splatmap;
uniform sampler2D diffuse1;
uniform sampler2D diffuse2;
uniform sampler2D diffuse3;
uniform sampler2D diffuse4;
in vec2 texCoord;
out vec4 fragment_color;
void main( void ) {
///Loading the splatmap and the diffuse textures
vec4 splatTexture = texture2D(splatmap, texCoord);
vec4 diffuseTexture1 = texture2D(diffuse1, texCoord);
vec4 diffuseTexture2 = texture2D(diffuse2, texCoord);
vec4 diffuseTexture3 = texture2D(diffuse3, texCoord);
vec4 diffuseTexture4 = texture2D(diffuse4, texCoord);
//Interpolate between the different textures using the splatmap's rgb values (works)
diffuseTexture1 *= splatTexture.r;
diffuseTexture2 = mix (diffuseTexture1, diffuseTexture2, splatTexture.g);
diffuseTexture3 = mix (diffuseTexture2,diffuseTexture3, splatTexture.b);
vec4 outcolor = mix (diffuseTexture3, diffuseTexture4, splatTexture.a);
fragment_color = outcolor;
}
Some additional info:
All textures are loaded like this in my terrain material and passed to the shader (works properly):
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, heightMap->getId());
glUniform1i (_shader->getUniformLocation("heightmap"),0);
...
The plane mesh uvs are mapped like this:
(0,1) (1,1)
(0,0) (1,0)
I guess I am doing something horribly wrong, but I can't figure out what. Any help is appreciated!
Does your writing this:
The plane mesh uvs are mapped like this:
(0,1) (1,1)
(0,0) (1,0)
… mean that your mesh consists of just 4 vertices? If so, then that's your problem right there: the vertex shader cannot magically create "new" vertices, so your heightmap texture is sampled at only 4 points (and nothing in between).
And because your texture coordinates sit at the integer values 0 and 1, you're effectively sampling the very same texture location at every corner, so you get the same displacement for all four vertices.
Solution: tessellate your base mesh so that there are actually vertices available to displace. A tessellation shader is perfectly fine for that; see the sketch below.
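For illustration, a rough sketch of what the displacement could look like in a tessellation evaluation shader. This assumes GL 4.0+, a patch size of 4, and a control shader that sets the tessellation levels and passes the corner UVs through as tcUV; only the heightmap and matrix uniform names are taken from the shader above:
#version 400 core
layout(quads, fractional_odd_spacing, ccw) in;
uniform sampler2D heightmap;
uniform mat4 projectionMatrix;
uniform mat4 viewMatrix;
uniform mat4 modelMatrix;
in vec2 tcUV[];      // per-corner UVs from the control shader (assumed name)
out vec2 texCoord;
void main()
{
    // bilinear interpolation across the patch; gl_in is assumed to hold
    // untransformed model-space positions passed through by the vertex shader
    vec2 uv = mix(mix(tcUV[0], tcUV[1], gl_TessCoord.x),
                  mix(tcUV[3], tcUV[2], gl_TessCoord.x), gl_TessCoord.y);
    vec4 p  = mix(mix(gl_in[0].gl_Position, gl_in[1].gl_Position, gl_TessCoord.x),
                  mix(gl_in[3].gl_Position, gl_in[2].gl_Position, gl_TessCoord.x), gl_TessCoord.y);
    p.y = texture(heightmap, uv).r * 2.0;   // displace the generated vertex
    texCoord = uv;
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * p;
}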
EDIT:
BTW, you can simplify your vertex shader a bit: for the position attribute make it
in vec2 vertex;
which requires just 2/3 of the space of vec3, since you're not using the z component anyway.
float y = texture(heightmap, uv).r * 2.f;
gl_Position =
projectionMatrix
* viewMatrix
* modelMatrix
* vec4(vertex.x, y, vertex.y, 1.f);
I've been trying to implement a simple light / shading system, a simple Phong lighting system without specular lights to be precise. It basically works, except it has some (in my opinion) nasty artifacts.
My first thought was that maybe this is a problem of the texture mipmaps, but disabling them didn't work. My next best guess would be a shader issue, but I can't seem to find the error.
Has anybody ever experienced a similar issue, or does anyone have an idea how to solve this?
Image of the artifacts
Vertex shader:
#version 330 core
// Vertex shader
layout(location = 0) in vec3 vpos;
layout(location = 1) in vec2 vuv;
layout(location = 2) in vec3 vnormal;
out vec2 uv; // UV coordinates
out vec3 normal; // Normal in camera space
out vec3 pos; // Position in camera space
out vec3 light[3]; // Vertex -> light vector in camera space
uniform mat4 mv; // View * model matrix
uniform mat4 mvp; // Proj * View * Model matrix
uniform mat3 nm; // Normal matrix for transforming normals into c-space
void main() {
// Pass uv coordinates
uv = vuv;
// Adjust normals
normal = nm * vnormal;
// Calculation of vertex in camera space
pos = (mv * vec4(vpos, 1.0)).xyz;
// Vector vertex -> light in camera space
light[0] = (mv * vec4(0.0,0.3,0.0,1.0)).xyz - pos;
light[1] = (mv * vec4(-6.0,0.3,0.0,1.0)).xyz - pos;
light[2] = (mv * vec4(0.0,0.3,4.8,1.0)).xyz - pos;
// Pass position after projection transformation
gl_Position = mvp * vec4(vpos, 1.0);
}
Fragment shader:
#version 330 core
// Fragment shader
layout(location = 0) out vec3 color;
in vec2 uv; // UV coordinates
in vec3 normal; // Normal in camera space
in vec3 pos; // Position in camera space
in vec3 light[3]; // Vertex -> light vector in camera space
uniform sampler2D tex;
uniform float flicker;
void main() {
vec3 n = normalize(normal);
// Ambient
color = 0.05 * texture(tex, uv).rgb;
// Diffuse lights
for (int i = 0; i < 3; i++) {
vec3 l = normalize(light[i]); // direction from fragment to light i
float cosTheta = clamp(dot(n, l), 0.0, 1.0); // Lambert term
float dist = length(light[i]); // distance to light i
color += 0.6 * texture(tex, uv).rgb * cosTheta / pow(dist, 2.0);
}
}
As the first comment says, it looks like your color computation is using insufficient precision. Try using mediump or highp floats.
Additionally, computing length(light[i]) and then squaring it with pow is quite inefficient, and could also be a source of the observed banding; you should use dot(light[i], light[i]) instead.
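Roughly, the loop body could then become (a sketch using the names from the shader above):
vec3 l = normalize(light[i]);
float cosTheta = clamp(dot(n, l), 0.0, 1.0);
float dist2 = dot(light[i], light[i]);   // squared distance, no sqrt and no pow
color += 0.6 * texture(tex, uv).rgb * cosTheta / dist2;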
So I found information about my problem, described as "gradient banding", also discussed here. The problem appears to lie in the nature of my textures: both the plain "white" texture and the real texture are mostly grey/white, and there are effectively only 256 levels of grey when using 8 bits per color channel.
The solution would be to implement post-processing dithering or to use better textures.
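A minimal sketch of such a dithering step (the hash-based noise is an assumption, not something from this thread): add a little screen-space noise at the end of the fragment shader, just before the result is quantized to 8 bits.
// +-0.5/255 of screen-space noise to break up the banding
float noise = fract(sin(dot(gl_FragCoord.xy, vec2(12.9898, 78.233))) * 43758.5453);
color += (noise - 0.5) / 255.0;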
I'm getting some pretty freaky results from my tangent-space normal mapping shader :). In the scene I show here, the teapot and checkered walls are being shaded with my ordinary Phong-Blinn shader (obviously the teapot's backface culling gives it a slightly ephemeral look and feel :-) ). I've tried to add normal mapping to the sphere, with psychedelic results:
The light is coming from the right (just about visible as a black blob). The normal map I'm using on the sphere looks like this:
I'm using AssImp to process input models, so it calculates tangents and bi-normals for each vertex automatically for me.
The pixel and vertex shaders are below. I'm not too sure what's going wrong, but it wouldn't surprise me if the tangent basis matrix is somehow wrong. I assume I have to compute things in eye space and then transform the eye and light vectors into tangent space, and that this is the correct way to go about it. Note that the light position comes into the shader already in view space.
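For comparison (a common construction, not a claim about where the bug is): one usual approach is to bring the tangent frame into view space and then use the transpose of that orthonormal basis to carry the view-space light and eye vectors into tangent space. A sketch using the names from the vertex shader below:
// tangent frame expressed in view space
vec3 T = normalize((Model_ViewModelSpace * vec4(attrib_Tangent,  0.0)).xyz);
vec3 B = normalize((Model_ViewModelSpace * vec4(attrib_BiNormal, 0.0)).xyz);
vec3 N = normalize((Model_ViewModelSpace * vec4(attrib_Normal,   0.0)).xyz);
mat3 viewToTangent = transpose(mat3(T, B, N));   // inverse of an orthonormal TBN matrix
attrib_Fragment_Light = viewToTangent * (Light_Position - position.xyz);
attrib_Fragment_Eye   = viewToTangent * (-position.xyz);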
// Vertex Shader
#version 420
// Uniform Buffer Structures
// Camera.
layout (std140) uniform Camera
{
mat4 Camera_Projection;
mat4 Camera_View;
};
// Matrices per model.
layout (std140) uniform Model
{
mat4 Model_ViewModelSpace;
mat4 Model_ViewModelSpaceInverseTranspose;
};
// Spotlight.
layout (std140) uniform OmniLight
{
float Light_Intensity;
vec3 Light_Position; // Already in view space.
vec4 Light_Ambient_Colour;
vec4 Light_Diffuse_Colour;
vec4 Light_Specular_Colour;
};
// Streams (per vertex)
layout(location = 0) in vec3 attrib_Position;
layout(location = 1) in vec3 attrib_Normal;
layout(location = 2) in vec3 attrib_Tangent;
layout(location = 3) in vec3 attrib_BiNormal;
layout(location = 4) in vec2 attrib_Texture;
// Output streams (per vertex)
out vec3 attrib_Fragment_Normal;
out vec4 attrib_Fragment_Position;
out vec3 attrib_Fragment_Light;
out vec3 attrib_Fragment_Eye;
// Shared.
out vec2 varying_TextureCoord;
// Main
void main()
{
// Compute normal.
attrib_Fragment_Normal = (Model_ViewModelSpaceInverseTranspose * vec4(attrib_Normal, 0.0)).xyz;
// Compute position.
vec4 position = Model_ViewModelSpace * vec4(attrib_Position, 1.0);
// Generate matrix for tangent basis.
mat3 tangentBasis = mat3( attrib_Tangent,
attrib_BiNormal,
attrib_Normal);
// Light vector.
attrib_Fragment_Light = tangentBasis * normalize(Light_Position - position.xyz);
// Eye vector.
attrib_Fragment_Eye = tangentBasis * normalize(-position.xyz);
// Return position.
gl_Position = Camera_Projection * position;
}
... and the pixel shader looks like this:
// Pixel Shader
#version 420
// Samplers
uniform sampler2D Map_Normal;
// Global Uniforms
// Material.
layout (std140) uniform Material
{
vec4 Material_Ambient_Colour;
vec4 Material_Diffuse_Colour;
vec4 Material_Specular_Colour;
vec4 Material_Emissive_Colour;
float Material_Shininess;
float Material_Strength;
};
// Spotlight.
layout (std140) uniform OmniLight
{
float Light_Intensity;
vec3 Light_Position;
vec4 Light_Ambient_Colour;
vec4 Light_Diffuse_Colour;
vec4 Light_Specular_Colour;
};
// Input streams (per vertex)
in vec3 attrib_Fragment_Normal;
in vec3 attrib_Fragment_Position;
in vec3 attrib_Fragment_Light;
in vec3 attrib_Fragment_Eye;
// Shared.
in vec2 varying_TextureCoord;
// Result
out vec4 Out_Colour;
// Main
void main(void)
{
// Compute normals.
vec3 N = normalize(texture(Map_Normal, varying_TextureCoord).xyz * 2.0 - 1.0);
vec3 L = normalize(attrib_Fragment_Light);
vec3 V = normalize(attrib_Fragment_Eye);
vec3 R = normalize(-reflect(L, N));
// Compute products.
float NdotL = max(0.0, dot(N, L));
float RdotV = max(0.0, dot(R, V));
// Compute final colours.
vec4 ambient = Light_Ambient_Colour * Material_Ambient_Colour;
vec4 diffuse = Light_Diffuse_Colour * Material_Diffuse_Colour * NdotL;
vec4 specular = Light_Specular_Colour * Material_Specular_Colour * (pow(RdotV, Material_Shininess) * Material_Strength);
// Final colour.
Out_Colour = ambient + diffuse + specular;
}
Edit: 3D Studio Render of the scene (to show the UV's are OK on the sphere):
I think your shaders are okay, but your texture coordinates on the sphere are totally off. It's as if they got distorted towards the poles along the longitude.