Explanation of working principle of OpenGL [closed]

I'm trying to understand how coding in OpenGL works. I found this code on the internet and I want to understand it clearly.
For my vertex shader I have:
uniform vec3 fvLightPosition;
varying vec2 Texcoord;
varying vec2 Texcoordcut;
varying vec3 ViewDirection;
varying vec3 LightDirection;
uniform mat4 extra;
attribute vec3 rm_Binormal;
attribute vec3 rm_Tangent;
uniform float fSinTime0_X;
uniform float fCosTime0_X;
void main( void )
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex * extra;
    Texcoord = gl_MultiTexCoord0.xy;
    Texcoordcut = gl_MultiTexCoord0.xy;
    vec4 fvObjectPosition = gl_ModelViewMatrix * gl_Vertex;
    vec3 rotationLight = vec3( fCosTime0_X, 0, fSinTime0_X );
    ViewDirection = -fvObjectPosition.xyz;
    LightDirection = ( -rotationLight ) * ( gl_NormalMatrix );
}
And for my fragment shader (white areas of the picture become a hole):
uniform vec4 fvAmbient;
uniform vec4 fvSpecular;
uniform vec4 fvDiffuse;
uniform float fSpecularPower;
uniform sampler2D baseMap;
uniform sampler2D bumpMap;
varying vec2 Texcoord;
varying vec2 Texcoordcut;
varying vec3 ViewDirection;
varying vec3 LightDirection;
void main( void )
{
    vec3 fvLightDirection = normalize( LightDirection );
    vec3 fvNormal = normalize( ( texture2D( bumpMap, Texcoord ).xyz * 2.0 ) - 1.0 );
    float fNDotL = dot( fvNormal, fvLightDirection );
    vec3 fvReflection = normalize( ( ( 2.0 * fvNormal ) * fNDotL ) - fvLightDirection );
    vec3 fvViewDirection = normalize( ViewDirection );
    float fRDotV = max( 0.0, dot( fvReflection, fvViewDirection ) );
    vec4 fvBaseColor = texture2D( baseMap, Texcoord );
    vec4 fvTotalAmbient = fvAmbient * fvBaseColor;
    vec4 fvTotalDiffuse = fvDiffuse * fNDotL * fvBaseColor;
    vec4 fvTotalSpecular = fvSpecular * ( pow( fRDotV, fSpecularPower ) );
    if( fvBaseColor == vec4( 1, 1, 1, 1 ) ){
        discard;
    } else {
        gl_FragColor = ( fvTotalDiffuse + fvTotalSpecular );
    }
}
Can somebody explain to me in detail what everything does? I understand the basic idea of it, but often not why it is needed, or what happens when you use other variables. What happens is that light appears and fades around the teapot over time. How is this linked to the cosine and sine variables? What if I want the light to come from above and move toward the bottom of the teapot?
Also, what do these lines mean?
vec4 fvObjectPosition = gl_ModelViewMatrix * gl_Vertex;
And why is there a minus sign before the variable?
ViewDirection = - fvObjectPosition.xyz;
Why do we use a negative rotationLight?
LightDirection = (-rotationLight ) * (gl_NormalMatrix);
Why do they use * 2.0 - 1.0 for calculating the normal vector? Isn't that possible with Normal = normalize( gl_NormalMatrix * gl_Normal );?
vec3 fvNormal = normalize( ( texture2D( bumpMap, Texcoord ).xyz * 2.0 ) - 1.0 );

Too lazy to fully analyze the code without the proper context of what you are sending to the shaders ... but your subquestions are easy enough:
What do these lines mean? vec4 fvObjectPosition = gl_ModelViewMatrix * gl_Vertex;
This converts gl_Vertex (polygon edge points) from the object/model coordinate system to the camera coordinate system. In other words, it applies all the rotations and translations of your vertices. The z axis is the camera view axis, pointing to or from the screen, and the x,y axes are the same as the screen's. No projections/clippings/clampings are applied yet !!! The resulting point is stored in the fvObjectPosition 4D vector (x,y,z,w). I strongly recommend you read Understanding 4x4 homogenous transform matrices; the sub-links there are also worth looking into.
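A minimal sketch of the same step with hand-passed matrices (viewMatrix and modelMatrix are hypothetical uniforms, not part of the original code; gl_ModelViewMatrix is simply their product):
uniform mat4 viewMatrix;   // world -> camera (hypothetical uniform)
uniform mat4 modelMatrix;  // object -> world (hypothetical uniform)

void main( void )
{
    // equivalent to gl_ModelViewMatrix * gl_Vertex
    vec4 fvObjectPosition = viewMatrix * modelMatrix * gl_Vertex;
    gl_Position = gl_ProjectionMatrix * fvObjectPosition;
}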
And why is there a minus sign before the variable? ViewDirection = -fvObjectPosition.xyz;
Most likely because you need the direction from the surface to the camera, so direction_from_surface = camera_pos - surface_pos. As your surface_pos is already in the camera coordinate system, the camera position in the same coordinates is (0,0,0), so the result is direction_from_surface = (0,0,0) - surface_pos = -surface_pos. Or you have a negative-Z-axis view direction (it depends on the format of your matrices); it is hard to determine without background info.
Why do we use a negative rotationLight? LightDirection = (-rotationLight ) * (gl_NormalMatrix);
Most likely for the same reasons as bullet 2.
Why do they use * 2.0 - 1.0 for calculating the normal vector?
The shader uses normal/bump mapping, which means you have a texture with normal vectors encoded as RGB. As RGB textures are clamped to the range <0,1> and normal vector coordinates are in the range <-1,+1>, you just need to rescale the texel. So:
RGB*2.0 is in range <0,2>
RGB*2.0-1.0 is in range <-1,+1>
This gives you the normal vector in the polygon's (tangent) coordinate system, so you need to convert it to the coordinate system your lighting equations work in, usually global world space or camera space (a sketch follows below). The normalize is not necessary if your normal/bump map is already normalized. Normal textures are distinctive in color:
a flat surface has normal=(0.0,0.0,+1.0), so in RGB it would be (0.5,0.5,1.0)
That is the common bluish/magenta color often seen in textures (see the link above).
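As a hedged sketch of that conversion, done the other way around (using the rm_Tangent / rm_Binormal attributes the vertex shader above declares but never uses): build the tangent-space basis in the vertex shader and express the light in that space, so it matches the space of the bump-map texel:
attribute vec3 rm_Tangent;
attribute vec3 rm_Binormal;
uniform float fSinTime0_X;
uniform float fCosTime0_X;
varying vec3 LightDirection;

void main( void )
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;

    vec3 n = normalize( gl_NormalMatrix * gl_Normal );
    vec3 t = normalize( gl_NormalMatrix * rm_Tangent );
    vec3 b = normalize( gl_NormalMatrix * rm_Binormal );

    vec3 lightEye = -vec3( fCosTime0_X, 0.0, fSinTime0_X ); // as in the original
    // light direction expressed in tangent space, same space as the texel normal
    LightDirection = vec3( dot( lightEye, t ), dot( lightEye, b ), dot( lightEye, n ) );
}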
But yes, you can use Normal = normalize( gl_NormalMatrix * gl_Normal );
But that will eliminate the bump/normal map and you would just get flat surfaces instead. Something like this:
GL+GLSL normal shading complete example (C++)
Light
vec3(fCosTime0_X, 0, fSinTime0_X) looks like the light direction. This one rotates around the y axis. If you want to change the light direction to something else, just make it a uniform and pass it directly to the shader instead of deriving it from fCosTime0_X, fSinTime0_X.
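For example, a minimal sketch of that change (fvLightDir is a hypothetical uniform name):
uniform vec3 fvLightDir;   // e.g. (0.0, -1.0, 0.0) for light from above
varying vec3 LightDirection;

void main( void )
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    // same as the original, but driven by the uniform instead of sin/cos
    LightDirection = ( -normalize( fvLightDir ) ) * gl_NormalMatrix;
}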

How is this correctly linked with the sine and cosine variables?
You can send data to a shader uniform variable via the glUniform family of functions. For example: in your vertex shader you have 2 float values, so you would call glUniform1f twice, each time with a different location and a different value.
Or you can pack the two floats into one vec2 variable like so:
uniform vec2 fSinValues; and fill it with glUniform2f(location, sinVal, cosVal);
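On the host side, a minimal sketch in C/C++ (assuming an OpenGL 2.0+ context and a linked program object prog; the names are illustrative, not from the original code):
#include <math.h>
#include <GL/gl.h>

/* Upload the packed sin/cos pair once per frame; t is elapsed time in seconds. */
void update_light_angle(GLuint prog, float t)
{
    GLint loc = glGetUniformLocation(prog, "fSinValues");
    glUseProgram(prog);
    glUniform2f(loc, sinf(t), cosf(t));
}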
What if I want the light to come from above and move toward the bottom of the teapot?
If you want your light to rotate in a different plane, just put the sin and cos values into different coordinates, right here: vec3 rotationLight = vec3(fCosTime0_X, fSinTime0_X, 0);

Related

GLSL point light shader moving with camera

I've been trying to make a basic static point light using shaders for an LWJGL game, but it appears as if the light is moving as the camera's position is being translated and rotated. These shaders are slightly modified from the OpenGL 4.3 guide, so I'm not sure why they aren't working as intended. Can anyone explain why these shaders aren't working as intended and what I can do to get them to work?
Vertex Shader:
varying vec3 color, normal;
varying vec4 vertexPos;
void main() {
    color = vec3(0.4);
    normal = normalize(gl_NormalMatrix * gl_Normal);
    vertexPos = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
Fragment Shader:
varying vec3 color, normal;
varying vec4 vertexPos;
void main() {
    vec3 lightPos = vec3(4.0);
    vec3 lightColor = vec3(0.75);
    vec3 lightDir = lightPos - vertexPos.xyz;
    float lightDist = length(lightDir);
    float attenuation = 1.0 / (3.0 + 0.007 * lightDist + 0.000008 * lightDist * lightDist);
    float diffuse = max(0.0, dot(normal, lightDir));
    vec3 ambient = vec3(0.4, 0.4, 0.4);
    vec3 finalColor = color * (ambient + lightColor * diffuse * attenuation);
    gl_FragColor = vec4(finalColor, 1.0);
}
If anyone's interested, I ended up finding the solution. Removing the calls to gl_NormalMatrix and gl_ModelViewMatrix solved the problem.
The critical value here, lightPos, was being compared against vertexPos, which you have expressed in eye (view) space (this happened because its original world-space form was multiplied by modelView). Eye space moves with the camera, not with anything in the 3D world. So to have a light source that is stationary with respect to some absolute point in world space (like [4.0, 4.0, 4.0]), you could just leave your object's points in that space, as you found out.
But getting rid of modelview is not a good idea, since the whole point of the model matrix is to place your objects where they belong (so you can re-use your vertex arrays with changes only to the model matrix, instead of burdening them with specifying every single shape's vertex positions from scratch).
A better way is to perform the modelView multiplication on both vertexPos AND lightPos. This way you're treating lightPos as a quantity originally given in world space, but doing the comparison in eye space. The math to get light intensities from normals works out the same in either space, and you'll get a correct-looking light source.
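A hedged sketch of that fix in the vertex shader above (uLightPosWorld is a hypothetical uniform; this assumes the model part of gl_ModelViewMatrix is identity when applied to the light, i.e. it acts as the view matrix):
varying vec3 color, normal;
varying vec4 vertexPos;
varying vec3 lightPosEye;      // light position, now in eye space
uniform vec3 uLightPosWorld;   // e.g. set to vec3(4.0)

void main() {
    color = vec3(0.4);
    normal = normalize(gl_NormalMatrix * gl_Normal);
    vertexPos = gl_ModelViewMatrix * gl_Vertex;
    // transform the light with the SAME matrix, so both live in eye space
    lightPosEye = (gl_ModelViewMatrix * vec4(uLightPosWorld, 1.0)).xyz;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}
The fragment shader would then read lightPosEye instead of the hard-coded lightPos.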

Creating a rectangular light source in OpenGL?

I am trying to create a rectangular, sharp-edged light source in OpenGL for one application. My idea is to create a spot light and somehow mask the shape of the shade into a rectangle; the mask of course has to be invisible to the camera. When I tried to implement this idea, it turned out that OpenGL simply skips rendering objects outside the camera's view, although a light source outside the camera is still valid. This has prevented me from creating the effect I wanted, and I am wondering if any of you have come across similar problems before.
To make my question more specific, consider the following case of my question:
spot light at 0,0,5
target object at 0,0,0
mask object (a simple quad parallel to x-axis) at 0,0,3.
When the camera is at 0,0,4, light passes through the mask object and leaves a rectangular shape on the target object (which is what I wanted), but I can also see the mask object! (while I need the mask object to be invisible)
When I move the camera closer to the target object, say to 0,0,2, the mask object is behind the camera and therefore invisible. However, since it's invisible, OpenGL stops rendering it, so the mask object no longer has any effect on the target object, and the light shade is still round!
My guess would be to start from a spot light, but separate the angle calculation:
* Project the L vector on the YZ plane to calculate the angle on the X axis
* Project the L vector on the XZ plane to calculate the angle on the Y axis
A very naive implementation of this could be (GLSL):
varying vec3 v_V; // World-space position
varying vec3 v_N; // World-space normal
uniform float time; // global time in seconds since shaderprogram link
uniform vec2 uSpotSize; // Spot size, on X and Y axes
vec3 lp = vec3(0.0, 0.0, 7.0 + cos(time) * 5.0); // Light world-space position
vec3 lz = vec3(0.0, 0.0, -1.0); // Light direction (Z vector)
// Light radius (for attenuation calculation)
float lr = 3.0;
void main()
{
    // Calculate L, the vector from model surface to light
    vec3 L = lp - v_V;
    // Project L on the YZ / XZ plane
    vec3 LX = normalize(vec3(L.x, 0.0, L.z));
    vec3 LY = normalize(vec3(0.0, L.y, L.z));
    // Calculate the angle on X and Y axis using the projected vectors just above
    float ax = dot(LX, -lz);
    float ay = dot(LY, -lz);
    // Light attenuation
    float d = distance(lp, v_V);
    float attenuation = 1.0 / (1.0 + (2.0/lr)*d + (1.0/(lr*lr))*d*d);
    float shaded = max(0.0, dot(v_N, L)) * attenuation;
    if(ax > cos(uSpotSize.x) && ay > cos(uSpotSize.y))
        gl_FragColor = vec4(shaded); // Inside the light influence zone, light it up!
    else
        gl_FragColor = vec4(0.1); // Outside the light influence zone.
}
Again, this is very naive. For instance, the X/Y projection is done in world space. If you want to be able to rotate the light rectangle, you might have to introduce a vector pointing to the right of the light. That way you'll be able to get the fragment coordinate in the light's coordinate frame, and with it you can decide whether to shade the fragment or not.
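A minimal sketch of that test (lightRight and lightUp are hypothetical unit vectors spanning the rectangle's plane, halfSize its half-extents; all of these are assumptions, not part of the code above):
bool insideRectangle(vec3 fragPos, vec3 lightPos,
                     vec3 lightRight, vec3 lightUp, vec2 halfSize)
{
    vec3 d = fragPos - lightPos;          // light -> fragment offset
    vec2 p = vec2(dot(d, lightRight),     // fragment in the light's 2D frame
                  dot(d, lightUp));
    return all(lessThan(abs(p), halfSize));
}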
One solution might be adapting the calculations used for projective texture lookups to simulate a rectangular light source. You did not specify which OpenGL version you're using, but projective texture lookups can even be achieved with the fixed-function pipeline, although they're arguably easier to do in a shader.
Of course, this would not simulate a rectangular area light source, just a point light source that is constrained to a rectangular region.
Using this approach, you'd have to specify view & projection matrices for the light source, where the view matrix is essentially generated by a 'look at' with the light's position & direction, and the projection matrix encodes a perspective projection with your desired horizontal & vertical 'field of view'.
If you just want a rectangular area, you don't even need a texture; a simple vertex/fragment shader pair could look like this (the vertex shader basically transforms the position to the light's clip space; the fragment shader performs the clipping & computes a Lambert shading if the fragment is inside the light frustum):
#version 330 core
layout ( location = 0 ) in vec3 vertexPosition;
layout ( location = 1 ) in vec3 vertexNormal;
layout ( location = 3 ) in vec3 vertexDiffuse;
uniform mat4 modelTf;
uniform mat3 normalTf;
uniform mat4 viewTf; // view matrix for render camera
uniform mat4 projectiveTf; // projection matrix for render camera
uniform mat4 viewTf_lightCam; // view matrix of light source
uniform mat4 projectiveTf_lightCam; // projective matrix of light source
uniform vec4 lightPosition_worldSpace;
out vec3 diffuseColor;
out vec3 normal_worldSpace;
out vec3 toLight_worldSpace;
out vec4 position_lightClipSpace;
void main()
{
    diffuseColor = vertexDiffuse;
    vec4 vertexPosition_worldSpace = modelTf * vec4( vertexPosition, 1.0 );
    normal_worldSpace = normalTf * vertexNormal;
    toLight_worldSpace = normalize( lightPosition_worldSpace - vertexPosition_worldSpace ).xyz;
    position_lightClipSpace = projectiveTf_lightCam * viewTf_lightCam * vertexPosition_worldSpace;
    gl_Position = projectiveTf * viewTf * vertexPosition_worldSpace;
}
#version 330 core
layout ( location=0 ) out vec4 fragColor;
in vec3 diffuseColor;
in vec3 normal_worldSpace;
in vec3 toLight_worldSpace;
in vec4 position_lightClipSpace;
uniform vec3 ambientLight;
void main()
{
    // clipping against the light frustum
    bool isInsideX = ( position_lightClipSpace.x <= position_lightClipSpace.w && position_lightClipSpace.x >= -position_lightClipSpace.w );
    bool isInsideY = ( position_lightClipSpace.y <= position_lightClipSpace.w && position_lightClipSpace.y >= -position_lightClipSpace.w );
    bool isInsideZ = ( position_lightClipSpace.z <= position_lightClipSpace.w && position_lightClipSpace.z >= -position_lightClipSpace.w );
    bool isInside = isInsideX && isInsideY && isInsideZ;
    vec3 N = normalize( normal_worldSpace );
    vec3 L = normalize( toLight_worldSpace );
    vec3 lightColor = isInside ? max( dot( N, L ), 0.0 ) * vec3( 0.99, 0.66, 0.33 ) : vec3( 0.0 );
    fragColor = vec4( clamp( ( ambientLight + lightColor ) * diffuseColor, vec3( 0.0 ), vec3( 1.0 ) ), 1.0 );
}
There are a lot of good papers on this, Brian Karis wrote about it in 2013 (in regards to UE4) here:
https://de45xmedrsdbp.cloudfront.net/Resources/files/2013SiggraphPresentationsNotes-26915738.pdf
And more recently Michal Drobot wrote an article about area lights in GPU Pro 5.
If you are using a metalness workflow you can also crank up the roughness as an approximation to area lighting, a technique introduced by Tri-Ace:
http://www.fxguide.com/featured/game-environments-parta-remember-me-rendering/

GLSL Normal Mapping (Areas With 0.0 Lambert Gets Lit)

When I use the model's normals, the result is fine (there are dark areas and lit areas, as I would expect from a simple Lambert diffuse shader), but when I use a normal map, the dark areas get lit!
I want to use a normal map and still get correct diffuse lighting, like these examples.
Here is the code with and without normal mapping, and here is the code that uses the normal map:
Vertex Shader
varying vec3 normal,lightDir;
attribute vec3 vertex,normalVec,tangent;
attribute vec2 UV;
void main(){
    gl_TexCoord[0] = gl_TextureMatrix[0] * vec4(UV, 0.0, 0.0);
    normal = normalize(gl_NormalMatrix * normalVec);
    vec3 t = normalize(gl_NormalMatrix * tangent);
    vec3 b = cross(normal, t);
    vec3 vertexPosition = normalize(vec3(gl_ModelViewMatrix * vec4(vertex, 1.0)));
    vec3 v;
    v.x = dot(lightDir, t);
    v.y = dot(lightDir, b);
    v.z = dot(lightDir, normal);
    lightDir = normalize(v);
    lightDir = normalize(vec3(1.0, 0.5, 1.0) - vertexPosition);
    gl_Position = gl_ModelViewProjectionMatrix * vec4(vertex, 1.0);
}
Fragment Shader
vec4 computeDiffuseLight (const in vec3 direction, const in vec4 lightcolor, const in vec3 normal, const in vec4 mydiffuse){
    float nDotL = dot(normal, direction);
    vec4 lambert = mydiffuse * lightcolor * max(nDotL, 0.0);
    return lambert;
}
varying vec3 normal, lightDir;
uniform sampler2D textures[8];
void main(){
    vec3 normalVector = normalize( 2 * texture2D(textures[0], gl_TexCoord[0].st).rgb - 1 );
    vec4 diffuse = computeDiffuseLight(lightDir, vec4(1,1,1,1), normalVector, vec4(0.7,0.7,0.7,0));
    gl_FragColor = diffuse;
}
Note: the actual normal mapping works correctly, as seen in the specular highlights.
I used Assimp to load the model (md5mesh) and calculated the tangents using Assimp too, then sent them to the shaders as attributes.
Here is a link to the code and screenshots of the problem:
https://dl.dropboxusercontent.com/u/32670019/code%20and%20screenshots.zip
Is this a problem in the code, or am I having a misconception?
Updated code and screenshots:
https://dl.dropboxusercontent.com/u/32670019/updated%20code%20and%20screenshots.zip
Now the normal map works with the diffuse, but the diffuse alone is not correct.
For the answer, see below.
Quick (possibly wrong) observation:
The line
vec3 normalVector = normalize( 2 * texture2D(textures[0],gl_TexCoord[0].st).rgb - 1 );
in your fragment shader correctly rescales your normal to allow for negative values. If your normal map is incorrect, negative values might occur where you do not want them (on your Y axis, I presume), and negative values in a normal can result in reversed lighting.
My question to you: Is your normal map correct?
ANSWER: After a bit of discussion we found the problem. I've edited this post to keep the thread clean; the solution to Darko's problem is in the comments here. It came down to an uninitialized varying called lightDir.
Original comment:
lightDir = normalize (v); lightDir = normalize(vec3(1.0,0.5,1.0) - vertexPosition); This is strange, you overwrite it instantly, is this wrong? You don't seem to keep the correctly transformed lightDir... Or am I crazy? Also, this lightDir is a varying, but you don't set it before reading it, so you calculate the v vector from nothing?
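For reference, a hedged sketch of the corrected ordering discussed above: compute the eye-space light direction first, then rotate it into tangent space (this sketch also passes 1.0 as the w of the UV vector so gl_TextureMatrix transforms it as a point):
varying vec3 normal, lightDir;
attribute vec3 vertex, normalVec, tangent;
attribute vec2 UV;

void main(){
    gl_TexCoord[0] = gl_TextureMatrix[0] * vec4(UV, 0.0, 1.0);
    normal = normalize(gl_NormalMatrix * normalVec);
    vec3 t = normalize(gl_NormalMatrix * tangent);
    vec3 b = cross(normal, t);

    vec3 vertexPosition = vec3(gl_ModelViewMatrix * vec4(vertex, 1.0));
    vec3 l = normalize(vec3(1.0, 0.5, 1.0) - vertexPosition); // eye-space light dir

    // only now rotate it into tangent space (the original overwrote this result)
    lightDir = normalize(vec3(dot(l, t), dot(l, b), dot(l, normal)));

    gl_Position = gl_ModelViewProjectionMatrix * vec4(vertex, 1.0);
}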

Odd effect with GLSL normals

As this is somewhat similar to a problem I had and posted about before, I'm trying to get normals to display correctly in my GLSL app.
For the purposes of my explanation, I'm using the ninjaHead.obj model provided with RenderMonkey for testing (you can grab it here). In the preview window in RenderMonkey, everything looks great:
and the generated vertex and fragment code is, respectively:
Vertex:
uniform vec4 view_position;
varying vec3 vNormal;
varying vec3 vViewVec;
void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    // World-space lighting
    vNormal = gl_Normal;
    vViewVec = view_position.xyz - gl_Vertex.xyz;
}
Fragment:
uniform vec4 color;
varying vec3 vNormal;
varying vec3 vViewVec;
void main(void)
{
    float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
    gl_FragColor = v * color;
}
I based my GLSL code on this but I'm not quite getting the expected results...
My vertex shader code:
uniform mat4 P;
uniform mat4 modelRotationMatrix;
uniform mat4 modelScaleMatrix;
uniform mat4 modelTranslationMatrix;
uniform vec3 cameraPosition;
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
    vec4 pos = gl_ProjectionMatrix * P * modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex;
    gl_Position = pos;
    gl_TexCoord[0] = gl_MultiTexCoord0;
    gl_FrontColor = gl_Color;
    vec4 normal4 = vec4(gl_Normal.x, gl_Normal.y, gl_Normal.z, 0);
    // World-space lighting
    vNormal = normal4 * modelRotationMatrix;
    vec4 tempCameraPos = vec4(cameraPosition.x, cameraPosition.y, cameraPosition.z, 0);
    //vViewVec = cameraPosition.xyz - pos.xyz;
    vViewVec = tempCameraPos - pos;
}
My fragment shader code:
varying vec4 vNormal;
varying vec4 vViewVec;
void main()
{
    //gl_FragColor = gl_Color;
    float v = 0.5 * (1.0 + dot(normalize(vViewVec), vNormal));
    gl_FragColor = v * gl_Color;
}
However my render produces this...
Does anyone know what might be causing this and/or how to make it work?
EDIT
In response to kvark's comments, here is the model rendered without any normal/lighting calculations to show all triangles being rendered.
And here is the model shaded with the normals used as colors. I believe the problem has been found! Now the question is why it is being rendered like this, and how to solve it. Suggestions are welcome!
SOLUTION
Well everyone the problem has been solved! Thanks to kvark for all his helpful insight that has definitely helped my programming practice but I'm afraid the answer comes from me being a MASSIVE tit... I had an error in the display() function of my code that set the glNormalPointer offset to a random value. It used to be this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, getNormalsBufferObject());
But should have been this:
gl.glEnableClientState(GL.GL_NORMAL_ARRAY);
gl.glBindBuffer(GL.GL_ARRAY_BUFFER, getNormalsBufferObject());
gl.glNormalPointer(GL.GL_FLOAT, 0, 0);
So I guess this is a lesson. NEVER mindlessly Ctrl+C and Ctrl+V code to save time on a Friday afternoon AND... When you're sure the part of the code you're looking at is right, the problem is probably somewhere else!
What is your P matrix? (I suppose it's a world->camera view transform).
vNormal = normal4*modelRotationMatrix; Why did you change the order of arguments? Doing that, you are multiplying the normal by the inverse rotation, which you don't really want. Use the standard order instead (modelRotationMatrix * normal4).
vViewVec = tempCameraPos - pos. This is entirely incorrect. pos is your vertex in homogeneous clip space, while tempCameraPos is in world space (I suppose). You need the result in the same space as your normal (world space), so use the world-space vertex position (modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix * gl_Vertex) in this equation.
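Putting both fixes together, a hedged sketch of the corrected vertex shader (reusing the uniforms from the code above; note the varyings become vec3, so the fragment shader must be changed to match):
uniform mat4 P;
uniform mat4 modelRotationMatrix;
uniform mat4 modelScaleMatrix;
uniform mat4 modelTranslationMatrix;
uniform vec3 cameraPosition;
varying vec3 vNormal;
varying vec3 vViewVec;

void main()
{
    mat4 modelTf = modelTranslationMatrix * modelRotationMatrix * modelScaleMatrix;
    vec4 worldPos = modelTf * gl_Vertex;              // world-space position
    gl_Position = gl_ProjectionMatrix * P * worldPos;
    vNormal = mat3(modelRotationMatrix) * gl_Normal;  // standard order, rotation only
    vViewVec = cameraPosition - worldPos.xyz;         // both now in world space
}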
You seem to be mixing GL versions a bit? You are passing the matrices manually via uniforms, but use fixed function to pass vertex attributes. Hm. Anyway...
I sincerely don't like what you're doing to your normals. Have a look:
vec4 normal4 = vec4(gl_Normal.x,gl_Normal.y,gl_Normal.z,0);
vNormal = normal4*modelRotationMatrix;
A normal only stores directional data; why use a vec4 for it? I believe it's more elegant to use just vec3. Furthermore, look what happens next: you multiply the normal by the 4x4 model rotation matrix, and your normal's fourth coordinate is equal to 0, so it represents a direction, not a point, in homogeneous coordinates. I'm not sure that's the main problem here, but I wouldn't be surprised if that multiplication gave you rubbish.
The standard way to transform normals is to multiply a vec3 by the 3x3 submatrix of the model-view matrix (since you're only interested in the orientation, not the translation). To be precise, the most correct approach is to use the inverse transpose of that 3x3 submatrix (this becomes important when you have scaling). In old OpenGL versions you had it precalculated as gl_NormalMatrix.
So instead of the above, you should use something like
// (...)
varying vec3 vNormal;
// (...)
mat3 normalMatrix = transpose(inverse(mat3(modelRotationMatrix)));
// or if you don't need scaling, this one should work too-
mat3 normalMatrix = mat3(modelRotationMatrix);
vNormal = normalMatrix * gl_Normal;
That's certainly one thing to fix in your code - I hope it solves your problem.

OpenGL refraction: texture is repeating on ellipsoid object

I have a query regarding refraction.
I am using a texture image for refraction (see test_car.png), but somehow the texture is getting multiplied, giving a distorted image (see Screenshot.png).
I am using the following shader:
attribute highp vec4 vertex;
attribute mediump vec3 normal;
uniform highp mat4 matrix;
uniform highp vec3 diffuse_color;
uniform highp mat3 matrixIT;
uniform mediump mat4 matrixMV;
uniform mediump vec3 EyePosModel;
uniform mediump vec3 LightDirModel;
varying mediump vec4 color;
const mediump float cShininess = 3.0;
const mediump float cRIR = 1.015;
varying mediump vec2 RefractCoord;
vec3 SpecularColor = vec3(1.0, 1.0, 1.0);
void main(void)
{
    vec3 toLight = normalize(vec3(1.0, 1.0, 1.0));
    mediump vec3 eyeDirModel = normalize(vertex.xyz - EyePosModel);
    mediump vec3 refractDir = refract(eyeDirModel, normal, cRIR);
    refractDir = (matrix * vec4(refractDir, 0.0)).xyw;
    RefractCoord = 0.5 * (refractDir.xy / refractDir.z) + 0.5;
    vec3 normal_cal = normalize(matrixIT * normal);
    float NDotL = max(dot(normal_cal, toLight), 0.0);
    vec4 ecPosition = normalize(matrixMV * vertex);
    vec3 eyeDir = vec3(1.0, 1.0, 1.0);
    float NDotH = 0.0;
    vec3 SpecularLight = vec3(0.0, 0.0, 0.0);
    if(NDotL > 0.0)
    {
        vec3 halfVector = normalize(eyeDirModel + LightDirModel);
        float NDotH = max(dot(normal_cal, halfVector), 0.0);
        float specular = pow(NDotH, 3.0);
        SpecularLight = specular * SpecularColor;
    }
    color = vec4((NDotL * diffuse_color.xyz) + (SpecularLight.xyz), 1.0);
    gl_Position = matrix * vertex;
}
And the fragment shader:
varying mediump vec2 RefractCoord;
uniform sampler2D sTexture;
varying mediump vec4 color;
void main(void)
{
    lowp vec3 refractColor = texture2D(sTexture, RefractCoord).rgb;
    gl_FragColor = vec4(color.xyz + refractColor, 1.0);
}
Can anyone let me know the solution to this problem? Thanks for any help.
Sorry guys, I am not able to attach the images.
It seems that you are calculating the refraction vector incorrectly. However, the answer to your question is already in its title. If you are looking at an ellipsoid, the rays from the viewer span a cone wrapping the ellipsoid; after refraction, the cone may be much wider, reaching beyond the edges of your image, giving texture coordinates outside the 0 - 1 range and leading to the texture being wrapped. So we need to take care of that as well.
First, the refraction coordinate should be calculated in the vertex shader, as follows:
vec3 eyeDirModel = normalize((-vertex * matrix).xyz); // .xyz so the vec4 product fits the vec3
vec3 refractDir = refract(eyeDirModel, normal, cRIR);
RefractCoord = normalize((matrix * vec4(refractDir, 0.0)).xyz); // no dehomog!
RefractCoord now contains refracted eye-space vectors (note that the RefractCoord varying must become a vec3 for this). This counts on "matrix" being the modelview matrix (that is not clear from your code, but I suspect it is). You could possibly skip the normalization if you wish the shader to run faster; it shouldn't cause noticeable errors. Now a little bit of modification to your fragment shader:
vec3 refractColor = texture2D(sTexture, normalize(RefractCoord).xy * .5 + .5).rgb;
Here, using normalize() makes sure that the texture coordinates do not cause the texture to repeat.
Note that using a 2D texture for refractions is only really justified if you generate it on the fly (as e.g. Half-Life 2 does); otherwise you should probably use a cube-map texture, which does the normalization for you and gives you a color based on a 3D direction - which is what you need.
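A sketch of that cube-map variant (sCubeTexture is a hypothetical samplerCube uniform, and RefractCoord is the varying vec3 from the fix above):
varying mediump vec3 RefractCoord;
uniform samplerCube sCubeTexture;
varying mediump vec4 color;

void main(void)
{
    // a 3D direction lookup: no manual dehomogenization, no wrapping issues
    lowp vec3 refractColor = textureCube(sCubeTexture, RefractCoord).rgb;
    gl_FragColor = vec4(color.xyz + refractColor, 1.0);
}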
Hope this helps ... (and, oh yeah, I wrote this from memory; in case there are any errors, please comment).