Reflection/refraction with chromatic aberration - eye correction - opengl

I am writing a GLSL shader that simulates chromatic aberration for simple objects. I am staying OpenGL 2.0 compatible, so I use the built-in OpenGL matrix stack. This is the simple vertex shader:
uniform vec3 cameraPos;
varying vec3 incident;
varying vec3 normal;
void main(void) {
vec4 position = gl_ModelViewMatrix * gl_Vertex;
incident = position.xyz / position.w - cameraPos;
normal = gl_NormalMatrix * gl_Normal;
gl_Position = ftransform();
}
The cameraPos uniform is the position of the camera in model space, as one might imagine. Here is the fragment shader:
const float etaR = 1.14;
const float etaG = 1.12;
const float etaB = 1.10;
const float fresnelPower = 2.0;
const float F = ((1.0 - etaG) * (1.0 - etaG)) / ((1.0 + etaG) * (1.0 + etaG));
uniform samplerCube environment;
varying vec3 incident;
varying vec3 normal;
void main(void) {
vec3 i = normalize(incident);
vec3 n = normalize(normal);
float ratio = F + (1.0 - F) * pow(1.0 - dot(-i, n), fresnelPower);
vec3 refractR = vec3(gl_TextureMatrix[0] * vec4(refract(i, n, etaR), 1.0));
vec3 refractG = vec3(gl_TextureMatrix[0] * vec4(refract(i, n, etaG), 1.0));
vec3 refractB = vec3(gl_TextureMatrix[0] * vec4(refract(i, n, etaB), 1.0));
vec3 reflectDir = vec3(gl_TextureMatrix[0] * vec4(reflect(i, n), 1.0));
vec4 refractColor;
refractColor.ra = textureCube(environment, refractR).ra;
refractColor.g = textureCube(environment, refractG).g;
refractColor.b = textureCube(environment, refractB).b;
vec4 reflectColor;
reflectColor = textureCube(environment, reflectDir);
vec3 combinedColor = mix(refractColor, reflectColor, ratio).rgb;
gl_FragColor = vec4(combinedColor, 1.0);
}
The environment is a cube map that is rendered live from the drawn object's environment.
Under normal circumstances, the shader behaves (I think) as expected, yielding this result:
However, when the camera is rotated 180 degrees around its target, so that it now points at the object from the other side, the refracted/reflected image gets warped like so (This happens gradually for angles between 0 and 180 degrees, of course):
Similar artifacts appear when the camera is lowered/raised; it only seems to behave 100% correctly when the camera is directly over the target object (pointing towards negative Z, in this case).
I am having trouble figuring out which transformation in the shader is responsible for this warped image, but it is probably something obvious related to how cameraPos is handled. What is causing the image to warp in this way?

This looks suspect to me:
vec4 position = gl_ModelViewMatrix * gl_Vertex;
incident = position.xyz / position.w - cameraPos;
Is your cameraPos defined in world space? You're subtracting a view-space vector (position) from a supposedly world-space cameraPos vector. You need to do the calculation either entirely in world space or entirely in view space; you can't mix them.
To do this correctly in world space you'll have to upload the model matrix separately to get the world space incident vector.
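For illustration, here is a minimal sketch of the world-space variant, assuming the application also uploads its model matrix; the modelMatrix and worldCameraPos names are placeholders, not part of the original code:
uniform mat4 modelMatrix;      // model -> world, uploaded by the application (assumption)
uniform vec3 worldCameraPos;   // camera position in world space (assumption)
varying vec3 incident;
varying vec3 normal;
void main(void) {
    vec4 worldPos = modelMatrix * gl_Vertex;
    incident = worldPos.xyz / worldPos.w - worldCameraPos;
    // This simple normal transform assumes uniform scaling; otherwise use the
    // inverse-transpose of the model matrix.
    normal = mat3(modelMatrix[0].xyz, modelMatrix[1].xyz, modelMatrix[2].xyz) * gl_Normal;
    gl_Position = ftransform();
}
Alternatively, you can transform cameraPos into view space on the CPU each frame and keep the rest of the shader as it is; either way, both operands of the subtraction end up in the same space.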

Related

How to handle lighting (ambient, diffuse, specular) for point spheres in OpenGL

Initial situation
I want to visualize simulation data in OpenGL.
My data consists of particle positions (x, y, z) where each particle has some properties (like density, temperature, ...) which will be used for coloring. Those (SPH) particles (100k to several millions), grouped together, actually represent planets, in case you wonder. I want to render those particles as small 3D spheres and add ambient, diffuse and specular lighting.
Status quo and questions
In MY case: in which coordinate frame do I do the lighting calculations? Which way is the "best" to pass the various components through the pipeline?
I saw that it is common to do it in view space, which is also very intuitive. However: the normals at the different fragment positions are calculated in the fragment shader in clip space coordinates (see the appended fragment shader). Can I actually convert them "back" into view space to do the lighting calculations in view space for all the fragments? Would there be any advantage compared to doing it in clip space?
It would be easier to get the normals in view space if I used meshes for each sphere, but I think with several million particles this would decrease performance drastically, so it is better to do it with sphere intersection. Would you agree?
PS: I don't need a model matrix since all the particles are already in place.
//VERTEX SHADER
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 2) in float density;
uniform float radius;
uniform vec3 lightPos;
uniform vec3 viewPos;
out vec4 lightDir;
out vec4 viewDir;
out vec4 viewPosition;
out vec4 posClip;
out float vertexColor;
// transformation matrices
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
lightDir = projection * view * vec4(lightPos - position, 1.0f);
viewDir = projection * view * vec4(viewPos - position, 1.0f);
viewPosition = projection * view * vec4(lightPos, 1.0f);
posClip = projection * view * vec4(position, 1.0f);
gl_Position = posClip;
gl_PointSize = radius;
vertexColor = density;
}
I know that perspective division happens for the gl_Position variable, but does that actually happen to ALL vec4s which are passed from the vertex to the fragment shader? If not, the calculations in the fragment shader might be wrong.
And the fragment shader, where the normal and the diffuse/specular lighting calculations are done in clip space:
//FRAGMENT SHADER
#version 330 core
in float vertexColor;
in vec4 lightDir;
in vec4 viewDir;
in vec4 posClip;
in vec4 viewPosition;
uniform vec3 lightColor;
vec4 colormap(float x); // returns vec4(r, g, b, a)
out vec4 vFragColor;
void main(void)
{
// AMBIENT LIGHT
float ambientStrength = 0.0;
vec3 ambient = ambientStrength * lightColor;
// Normal calculation done in clip space (first from point-sprite texture coords (gl_PointCoord, 0 to 1) to NDC (-1 to 1))
vec3 normal;
normal.xy = gl_PointCoord * 2.0 - vec2(1.0); // transform from 0->1 point primitive coords to NDC -1->1
float mag = dot(normal.xy, normal.xy); // squared distance from the sprite center; no sqrt needed for the comparison
if (mag > 1.0) // discard fragments outside sphere
discard;
normal.z = sqrt(1.0 - mag); // because x^2 + y^2 + z^2 = 1
// DIFFUSE LIGHT
float diff = max(0.0, dot(vec3(lightDir), normal));
vec3 diffuse = diff * lightColor;
// SPECULAR LIGHT
float specularStrength = 0.1;
vec3 viewDir = normalize(vec3(viewPosition) - vec3(posClip));
vec3 reflectDir = reflect(-vec3(lightDir), normal);
float shininess = 64;
float spec = pow(max(dot(vec3(viewDir), vec3(reflectDir)), 0.0), shininess);
vec3 specular = specularStrength * spec * lightColor;
vFragColor = colormap(vertexColor / 8) * vec4(ambient + diffuse + specular, 1);
}
Now this actually "kind of" works, but I have the feeling that the sides of the sphere which do NOT face the light source are also being illuminated, which shouldn't happen. How can I fix this?
Some weird effect: at this moment the light source is actually BEHIND the left planet (it just peeks out a little bit at the top left), but still there are diffuse and specular effects going on. This side should actually be pretty dark! =(
Also, at this moment I get a glError 1282 (GL_INVALID_OPERATION) and I don't know where it comes from, since the shader program actually compiles and runs. Any suggestions? :)
The things that you are drawing aren't actually spheres. They just look like them from afar. This is absolutely ok if you are fine with that. If you need geometrically correct spheres (with correct sizes and with a correct projection), you need to do proper raycasting. This seems to be a comprehensive guide on this topic.
1. What coordinate system?
In the end, it is up to you. The coordinate system just needs to fulfill some requirements. It must be angle-preserving (because lighting is all about angles). And if you need distance-based attenuation, it should also be distance-preserving. The world and the view coordinate systems usually fulfill these requirements. Clip space is not suited for lighting calculations as neither angles nor distances are preserved. Furthermore, gl_PointCoord is in none of the usual coordinate systems. It is its own coordinate system and you should only use it together with other coordinate systems if you know their relation.
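As a rough sketch (not the poster's exact code), the vertex shader could hand view-space quantities to the fragment stage and reserve the projection matrix for gl_Position only:
#version 330 core
layout (location = 0) in vec3 position;
uniform vec3 lightPos;          // light position in world space
uniform mat4 view;
uniform mat4 projection;
out vec3 posView;               // fragment position in view space
out vec3 lightPosView;          // light position in view space
void main()
{
    vec4 p = view * vec4(position, 1.0);
    posView = p.xyz;
    lightPosView = (view * vec4(lightPos, 1.0)).xyz;
    gl_Position = projection * p;   // projection only for rasterization
}
The fragment shader can then work with normalize(lightPosView - posView), and, since the camera sits at the origin in view space, normalize(-posView) serves as the view direction.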
2. Meshes or what?
Meshes are absolutely not suited to render spheres. As mentioned above, raycasting or some screen-space approximation are better choices. Here is an example shader that I used in my projects:
#version 330
out vec4 result;
in fData
{
vec4 toPixel; //fragment coordinate in particle coordinates
vec4 cam; //camera position in particle coordinates
vec4 color; //sphere color
float radius; //sphere radius
} frag;
uniform mat4 p; //projection matrix
void main(void)
{
vec3 v = frag.toPixel.xyz - frag.cam.xyz;
vec3 e = frag.cam.xyz;
float ev = dot(e, v);
float vv = dot(v, v);
float ee = dot(e, e);
float rr = frag.radius * frag.radius;
float radicand = ev * ev - vv * (ee - rr);
if(radicand < 0)
discard;
float rt = sqrt(radicand);
float lambda = max(0, (-ev - rt) / vv); //first intersection on the ray
float lambda2 = (-ev + rt) / vv; //second intersection on the ray
if(lambda2 < lambda) //if the first intersection is behind the camera
discard;
vec3 hit = lambda * v; //intersection point
vec3 normal = (frag.cam.xyz + hit) / frag.radius;
vec4 proj = p * vec4(hit, 1); //intersection point in clip space
gl_FragDepth = ((gl_DepthRange.diff * proj.z / proj.w) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
vec3 vNormalized = -normalize(v);
float nDotL = dot(vNormalized, normal);
vec3 c = frag.color.rgb * nDotL + vec3(0.5, 0.5, 0.5) * pow(nDotL, 120);
result = vec4(c, frag.color.a);
}
3. Perspective division
Perspective division is not applied to your attributes. The GPU does perspective division on the data that you pass via gl_Position on the way to transforming them to screen space. But you will never actually see this perspective-divided position unless you do it yourself.
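If you do want the NDC position of a fragment, you have to divide yourself; a short sketch using the question's posClip varying:
// In the fragment shader: manual perspective division of the interpolated clip-space position.
vec3 ndc = posClip.xyz / posClip.w;   // now in the -1..1 NDC range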
4. Light in the dark
This might be the result of you mixing different coordinate systems or doing lighting calculations in clip space. Btw, the specular part is usually not multiplied by the material color. This is light that gets reflected directly at the surface. It does not penetrate the surface (which would absorb some colors depending on the material). That's why those highlights are usually white (or whatever light color you have), even on black objects.
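In terms of the question's shader, that combination would look roughly like this (a sketch, reusing the existing variable names):
vec3 materialColor = colormap(vertexColor / 8.0).rgb;
// Ambient and diffuse are filtered by the material color; specular keeps the light's color.
vFragColor = vec4((ambient + diffuse) * materialColor + specular, 1.0);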

opengl - spot light moves when camera rotates when using a shader

I implemented a simple shader for the lighting; it kind of works, but the light seems to move when the camera rotates (and only when it rotates).
I'm experimenting with a spotlight; this is what it looks like (it's the spot in the center):
If I now rotate the camera, the spot moves around; for example, here I looked down (I didn't move at all, just looked down) and it ended up at my feet:
I've looked it up and seen that this is a common mistake caused by mixing reference systems in the shader and/or by setting the light's position before moving the camera.
The thing is, I'm pretty sure I'm not doing these two things, but apparently I'm wrong; it's just that I can't find the bug.
Here's the shader:
Vertex Shader
varying vec3 vertexNormal;
varying vec3 lightDirection;
void main()
{
vertexNormal = gl_NormalMatrix * gl_Normal;
lightDirection = vec3(gl_LightSource[0].position.xyz - (gl_ModelViewMatrix * gl_Vertex).xyz);
gl_Position = ftransform();
}
Fragment Shader
uniform vec3 ambient;
uniform vec3 diffuse;
uniform vec3 specular;
uniform float shininess;
varying vec3 vertexNormal;
varying vec3 lightDirection;
void main()
{
vec3 color = vec3(0.0, 0.0, 0.0);
vec3 lightDirNorm;
vec3 eyeVector;
vec3 half_vector;
float diffuseFactor;
float specularFactor;
float attenuation;
float lightDistance;
vec3 normalDirection = normalize(vertexNormal);
lightDirNorm = normalize(lightDirection);
eyeVector = vec3(0.0, 0.0, 1.0);
half_vector = normalize(lightDirNorm + eyeVector);
diffuseFactor = max(0.0, dot(normalDirection, lightDirNorm));
specularFactor = max(0.0, dot(normalDirection, half_vector));
specularFactor = pow(specularFactor, shininess);
color += ambient * gl_LightSource[0].ambient.rgb;
color += diffuseFactor * diffuse * gl_LightSource[0].diffuse.rgb;
color += specularFactor * specular * gl_LightSource[0].specular.rgb;
lightDistance = length(lightDirection);
float constantAttenuation = 1.0;
float linearAttenuation = (0.02 / SCALE_FACTOR) * lightDistance;
float quadraticAttenuation = (0.0 / SCALE_FACTOR) * lightDistance * lightDistance;
attenuation = 1.0 / (constantAttenuation + linearAttenuation + quadraticAttenuation);
// If it's a spotlight
if(gl_LightSource[0].spotCutoff <= 90.0)
{
float spotEffect = dot(normalize(gl_LightSource[0].spotDirection), normalize(-lightDirection));
if (spotEffect > gl_LightSource[0].spotCosCutoff)
{
spotEffect = pow(spotEffect, gl_LightSource[0].spotExponent);
attenuation = spotEffect / (constantAttenuation + linearAttenuation + quadraticAttenuation);
}
else
attenuation = 0.0;
}
color = color * attenuation;
// Multiply the color by the attenuation factor
gl_FragColor = vec4(color, 1.0);
}
Now, I can't show you the code where I render things, because it's a custom language that integrates OpenGL and is designed to create 3D applications (it wouldn't help to show you); but what I do is something like this:
SetupLights();
UpdateCamera();
RenderStuff();
Where:
SetupLights contains actual OpenGL calls that set up the lights and their positions;
UpdateCamera updates the camera's position using the built-in classes of the language; I don't have much power here;
RenderStuff calls the built-in functions of the language to draw the scene; I don't have much power here either.
So, either I'm doing something wrong in the shader or there's something in the language that "behind the scenes" breaks things.
Can you point me in the right direction?
You wrote:
"the light's position is already in world coordinates, and that is where I'm doing the computations"
However, since you're applying gl_ModelViewMatrix to your vertex and gl_NormalMatrix to your normal, these values are probably in view space, which might cause the moving light.
As an aside, your eye vector looks like it should be in view coordinates; however, view space is a right-handed coordinate system, so "forward" points along the negative z-axis. Also, your specular computation will likely be off, since you're using the same eye vector for all fragments; it should probably point towards that fragment's position on the near/far planes.
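A minimal sketch of keeping everything in view space and computing the eye vector per fragment might look like this (it assumes a positional light whose position was specified with glLightfv after the camera transform was applied, so gl_LightSource[0].position is already stored in view space):
// Vertex shader (sketch)
varying vec3 vertexNormal;
varying vec3 viewPos;                      // fragment position in view space
void main()
{
    vertexNormal = gl_NormalMatrix * gl_Normal;
    viewPos = (gl_ModelViewMatrix * gl_Vertex).xyz;
    gl_Position = ftransform();
}

// Fragment shader (sketch)
varying vec3 vertexNormal;
varying vec3 viewPos;
void main()
{
    vec3 n = normalize(vertexNormal);
    vec3 lightDir = normalize(gl_LightSource[0].position.xyz - viewPos); // both in view space
    vec3 eyeDir = normalize(-viewPos);     // the camera sits at the origin in view space
    vec3 halfVec = normalize(lightDir + eyeDir);
    float diff = max(dot(n, lightDir), 0.0);
    float spec = pow(max(dot(n, halfVec), 0.0), 32.0);
    gl_FragColor = vec4(vec3(diff + spec), 1.0);
}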

Reconstructed position from depth leads to incorrect lighting

I am attempting to reconstruct my fragment's position from a depth value stored in a GL_DEPTH_ATTACHMENT. To do this, I linearize the depth, then multiply the depth by a ray from the camera position to the corresponding point on the far plane.
This method is the second one described here. In order to get the ray from the camera to the far plane, I retrieve rays to the four corners of the far plane, pass them to my vertex shader, then interpolate them into the fragment shader. I am using the following code to get the rays from the camera to the far plane's corners in world space.
std::vector<float> Camera::GetFlatFarFrustumCorners() {
// rotation is the orientation of my camera in a quaternion.
glm::quat inverseRotation = glm::inverse(rotation);
glm::vec3 localUp = glm::normalize(inverseRotation * glm::vec3(0.0f, 1.0f, 0.0f));
glm::vec3 localRight = glm::normalize(inverseRotation * glm::vec3(1.0f, 0.0f, 0.0f));
float farHeight = 2.0f * tan(90.0f / 2) * 100.0f;
float farWidth = farHeight * aspect;
// 100.0f is the distance to the far plane. position is the location of the camera in world space.
glm::vec3 farCenter = position + glm::vec3(0.0f, 0.0f, -1.0f) * 100.0f;
glm::vec3 farTopLeft = farCenter + (localUp * (farHeight / 2)) - (localRight * (farWidth / 2));
glm::vec3 farTopRight = farCenter + (localUp * (farHeight / 2)) + (localRight * (farWidth / 2));
glm::vec3 farBottomLeft = farCenter - (localUp * (farHeight / 2)) - (localRight * (farWidth / 2));
glm::vec3 farBottomRight = farCenter - (localUp * (farHeight / 2)) + (localRight * (farWidth / 2));
return {
farTopLeft.x, farTopLeft.y, farTopLeft.z,
farTopRight.x, farTopRight.y, farTopRight.z,
farBottomLeft.x, farBottomLeft.y, farBottomLeft.z,
farBottomRight.x, farBottomRight.y, farBottomRight.z
};
}
Is this a correct way to retrieve the corners of the far plane in world space?
When I use these corners with my shaders, the results are incorrect, and what I get seems to be in view space. These are the shaders I am using:
Vertex Shader:
layout(location = 0) in vec2 vp;
layout(location = 1) in vec3 textureCoordinates;
uniform vec3 farFrustumCorners[4];
uniform vec3 cameraPosition;
out vec2 st;
out vec3 frustumRay;
void main () {
st = textureCoordinates.xy;
gl_Position = vec4 (vp, 0.0, 1.0);
frustumRay = farFrustumCorners[int(textureCoordinates.z)-1] - cameraPosition;
}
Fragment Shader:
in vec2 st;
in vec3 frustumRay;
uniform sampler2D colorTexture;
uniform sampler2D normalTexture;
uniform sampler2D depthTexture;
uniform vec3 cameraPosition;
uniform vec3 lightPosition;
out vec3 color;
void main () {
// Far and near distances; Used to linearize the depth value.
float f = 100.0;
float n = 0.1;
float depth = (2 * n) / (f + n - (texture(depthTexture, st).x) * (f - n));
vec3 position = cameraPosition + (normalize(frustumRay) * depth);
vec3 normal = texture(normalTexture, st).xyz;
float k = 0.00001;
vec3 distanceToLight = lightPosition - position;
float distanceLength = length(distanceToLight);
float attenuation = (1.0 / (1.0 + (0.1 * distanceLength) + k * (distanceLength * distanceLength)));
float diffuseTemp = max(dot(normalize(normal), normalize(distanceToLight)), 0.0);
vec3 diffuse = vec3(1.0, 1.0, 1.0) * attenuation * diffuseTemp;
vec3 gamma = vec3(1.0/2.2);
color = pow(texture(colorTexture, st).xyz+diffuse, gamma);
//color = texture(colorTexture, st);
//colour.r = (2 * n) / (f + n - texture( tex, st ).x * (f - n));
//colour.g = (2 * n) / (f + n - texture( tex, st ).y* (f - n));
//colour.b = (2 * n) / (f + n - texture( tex, st ).z * (f - n));
}
This is what my scene's lighting looks like under these shaders:
I am pretty sure that this is the result of either my reconstructed position being completely wrong, or it being in the wrong space. What is wrong with my reconstruction, and what can I do to fix it?
What you will first want to do is develop a temporary addition to your G-Buffer setup that stores the initial position of each fragment in world/view space (really, whatever space you are trying to reconstruct here). Then write a shader that does nothing but reconstruct these positions from the depth buffer. Set everything up so that half of your screen displays the original G-Buffer and the other half displays your reconstructed position. You should be able to immediately spot discrepancies this way.
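A minimal debug fragment shader along those lines might look like this (a sketch; positionTexture is a hypothetical extra G-Buffer attachment holding the stored world-space positions, and the reconstruction line is copied from the question's shader):
#version 330 core
in vec2 st;
in vec3 frustumRay;
uniform sampler2D positionTexture;   // hypothetical attachment with the original positions
uniform sampler2D depthTexture;
uniform vec3 cameraPosition;
out vec3 color;
void main () {
    vec3 stored = texture(positionTexture, st).xyz;
    float f = 100.0;
    float n = 0.1;
    float depth = (2.0 * n) / (f + n - texture(depthTexture, st).x * (f - n));
    vec3 reconstructed = cameraPosition + normalize(frustumRay) * depth;   // the question's current method
    // Left half of the screen: stored positions; right half: reconstruction error, amplified to be visible.
    color = (st.x < 0.5) ? stored : abs(stored - reconstructed) * 10.0;
}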
That said, you might want to take a look at an implementation I have used in the past to reconstruct (object space) position from the depth buffer. It basically gets you into view space first, then uses the inverse modelview matrix to go to object space. You can adjust it for world space trivially. It is probably not the most flexible implementation, what with FOV being hard-coded and all, but you can easily modify it to use uniforms instead...
Trimmed down fragment shader:
flat in mat4 inv_mv_mat;
in vec2 uv;
...
float linearZ (float z)
{
#ifdef INVERT_NEAR_FAR
const float f = 2.5;
const float n = 25000.0;
#else
const float f = 25000.0;
const float n = 2.5;
#endif
return n / (f - z * (f - n)) * f;
}
vec4
reconstruct_pos (float depth)
{
depth = linearZ (depth);
vec4 pos = vec4 (uv * depth, -depth, 1.0);
vec4 ret = (inv_mv_mat * pos);
return ret / ret.w;
}
It takes a little additional setup in the vertex shader stage of the deferred shading lighting pass, which looks like this:
#version 150 core
in vec4 vtx_pos;
in vec2 vtx_st;
uniform mat4 modelview_mat; // Matrix used when the G-Buffer was built
uniform mat4 camera_matrix; // Matrix used to stretch the G-Buffer over the viewport
uniform float buffer_res_x;
uniform float buffer_res_y;
out vec2 tex_st;
flat out mat4 inv_mv_mat;
out vec2 uv;
// Hard-Coded 45 degree FOV
//const float fovy = 0.78539818525314331; // NV pukes on the line below!
//const float fovy = radians (45.0);
//const float tan_half_fovy = tan (fovy * 0.5);
const float tan_half_fovy = 0.41421356797218323;
float aspect = buffer_res_x / buffer_res_y;
vec2 inv_focal_len = vec2 (tan_half_fovy * aspect,
tan_half_fovy);
const vec2 uv_scale = vec2 (2.0, 2.0);
const vec2 uv_translate = vec2 (1.0, 1.0);
void main (void)
{
inv_mv_mat = inverse (modelview_mat);
tex_st = vtx_st;
gl_Position = camera_matrix * vtx_pos;
uv = (vtx_st * uv_scale - uv_translate) * inv_focal_len;
}
Depth range inversion is something you might find useful for deferred shading: normally a perspective depth buffer gives you more precision than you need at close range and not enough far away for quality reconstruction. If you flip things on their head by inverting the depth range, you can even things out a little bit while still using the hardware depth buffer. This is discussed in detail here.

Calculate tangent Space in FragmentShader

I want to calculate the tangent space in GLSL.
Here is the important part from my code:
// variables passed from vertex to fragment program //
in vec3 vertexNormal;
in vec2 textureCoord;
in vec3 lightPosition;
in vec3 vertexPos;
in mat4 modelView;
in mat4 viewMatrix;
// TODO: textures for color and normals //
uniform sampler2D normal;
uniform sampler2D texture;
// this defines the fragment output //
out vec4 color;
void main() {
// ###### TANGENT SPACE STUFF ############
vec4 position_eye = modelView * vec4(vertexPos,1.0);
vec3 q0 = dFdx(position_eye.xyz);
vec3 q1 = dFdy(position_eye.xyz);
vec2 st0 = dFdx(textureCoord.st);
vec2 st1 = dFdy(textureCoord.st);
float Sx = ( q0.x * st1.t - q1.x * st0.t) / (st1.t * st0.s - st0.t * st1.s);
float Tx = (-q0.x * st1.s + q1.x * st0.s) / (st1.t * st0.s - st0.t * st1.s);
q0.x = st0.s * Sx + st0.t * Tx;
q1.x = st1.s * Sx + st1.t * Tx;
vec3 S = normalize( q0 * st1.t - q1 * st0.t);
vec3 T = normalize(-q0 * st1.s + q1 * st0.s);
vec3 n = texture2D(normal,textureCoord).xyz;
n = smoothstep(-1,1,n);
mat3 tbn = (mat3(S,T,n));
// #######################################
n = tbn * n; // transfer the read normal to worldSpace;
vec3 eyeDir = - (modelView * vec4(vertexPos,1.0)).xyz;
vec3 lightDir = (modelView * vec4(lightPosition.xyz, 1.0)).xyz;
After this code there is Phong shading, which is mixed with the texture. When I apply the shader to a regular texture without normal mapping, everything works fine.
I need to calculate this in the shader for other dynamic parts later on.
Can someone tell me what is going wrong?
This is what it currently looks like:
Can someone tell me what is going wrong?
You're trying to compute the tangent-space basis matrix in your shader; that's what's wrong. You can't actually do that.
dFdx/y computes the rate-of-change of the given value, locally in screen-space, across the surface of a primitive. In other words, it computes the derivative of the given value over the primitive. Your input values are linearly interpolated.
The derivative of a line is a constant. And linear interpolation produces linear results. Therefore, every fragment from each primitive will get the same derivative for the inputs/outputs. Therefore, every fragment will compute the same S and T values, since they're based entirely on the derivatives.
That's why you're getting a faceted surface: two of the three matrix components will be identical across a triangle's surface.
Your computation doesn't work because it can't work. You're going to have to do what everyone else does: calculate the NBT matrix offline and pass them as per-vertex attributes. Or use some known property of the mesh to compute them. But this? It isn't going to work.
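For reference, the conventional per-vertex route looks roughly like this (a sketch, not your code; the vertexTangent attribute is assumed to be precomputed offline and uploaded with the mesh):
// Vertex shader (sketch): tangent comes in as an attribute, TBN is built per vertex.
#version 330 core
in vec3 vertexPos;
in vec3 vertexNormal;
in vec3 vertexTangent;        // precomputed offline (assumption)
in vec2 textureCoord;
uniform mat4 modelView;
uniform mat4 projection;
uniform mat3 normalMatrix;    // inverse-transpose of the upper 3x3 of modelView
out vec2 uv;
out mat3 tbn;                 // tangent space -> eye space
void main() {
    vec3 N = normalize(normalMatrix * vertexNormal);
    vec3 T = normalize(normalMatrix * vertexTangent);
    T = normalize(T - dot(T, N) * N);   // re-orthogonalize against the normal
    vec3 B = cross(N, T);
    tbn = mat3(T, B, N);
    uv = textureCoord;
    gl_Position = projection * modelView * vec4(vertexPos, 1.0);
}

// Fragment shader (sketch): unpack the normal map and move it into eye space.
#version 330 core
in vec2 uv;
in mat3 tbn;
uniform sampler2D normalMap;
out vec4 color;
void main() {
    vec3 n = texture(normalMap, uv).xyz * 2.0 - 1.0;  // unpack from [0,1] to [-1,1]
    n = normalize(tbn * n);                           // now in eye space, ready for Phong shading
    color = vec4(n * 0.5 + 0.5, 1.0);                 // visualized here; plug into your lighting instead
}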

Opengl Refraction ,Texture is repeating on ellipsoid object

I have a query regarding refraction.
I am using a texture image for refraction (refer test_car.png).
But somehow the texture is getting multiplied and giving a distorted image (refer Screenshot.png).
I am using the following shader.
attribute highp vec4 vertex;
attribute mediump vec3 normal;
uniform highp mat4 matrix;
uniform highp vec3 diffuse_color;
uniform highp mat3 matrixIT;
uniform mediump mat4 matrixMV;
uniform mediump vec3 EyePosModel;
uniform mediump vec3 LightDirModel;
varying mediump vec4 color;
const mediump float cShininess = 3.0;
const mediump float cRIR = 1.015;
varying mediump vec2 RefractCoord;
vec3 SpecularColor = vec3(1.0, 1.0, 1.0);
void main(void)
{
vec3 toLight = normalize(vec3(1.0,1.0,1.0));
mediump vec3 eyeDirModel = normalize(vertex.xyz - EyePosModel);
mediump vec3 refractDir = refract(eyeDirModel, normal, cRIR);
refractDir = (matrix * vec4(refractDir, 0.0)).xyw;
RefractCoord = 0.5 * (refractDir.xy / refractDir.z) + 0.5;
vec3 normal_cal = normalize(matrixIT * normal);
float NDotL = max(dot(normal_cal, toLight), 0.0);
vec4 ecPosition = normalize(matrixMV * vertex);
vec3 eyeDir = vec3(1.0,1.0,1.0);
float NDotH = 0.0;
vec3 SpecularLight = vec3(0.0,0.0,0.0);
if(NDotL > 0.0)
{
vec3 halfVector = normalize( eyeDirModel + LightDirModel);
float NDotH = max(dot(normal_cal, halfVector), 0.0);
float specular = pow(NDotH, 3.0);
SpecularLight = specular * SpecularColor;
}
color = vec4((NDotL * diffuse_color.xyz) + (SpecularLight.xyz) ,1.0);
gl_Position = matrix * vertex;
}
And
varying mediump vec2 RefractCoord;
uniform sampler2D sTexture;
varying mediump vec4 color;
void main(void)
{
lowp vec3 refractColor = texture2D(sTexture,RefractCoord).rgb;
gl_FragColor = vec4(color.xyz + refractColor,1.0);
}
Can anyone let me know the solution to this problem?
Thanks for any help.
Sorry guys, I am not able to attach the images.
It seems that you are calculating the refraction vector incorrectly. However, the answer to your question is already in its title. If you are looking at an ellipsoid, the rays from the viewer span a cone wrapping the ellipsoid. But after refraction, the cone may be much wider, reaching beyond the edges of your image, therefore giving texture coordinates outside the 0 to 1 range and leading to the texture being repeated. So we need to take care of that as well.
First, the refraction coordinate should be calculated in vertex shader as follows:
vec3 eyeDirModel = normalize((-vertex * matrix).xyz);
vec3 refractDir = refract(eyeDirModel, normal, cRIR);
RefractCoord = normalize((matrix * vec4(refractDir, 0.0)).xyz); // no dehomog!
RefractCoord now contains refracted eye-space vectors. This counts on "matrix" being the modelview matrix (that is not clear from your code, but I suspect it is). You could possibly skip the normalization if you want the shader to run faster; it shouldn't cause noticeable errors. Now a little modification to your fragment shader:
vec3 refractColor = texture2D(sTexture, normalize(RefractCoord).xy * .5 + .5).rgb;
Here, using normalize() makes sure that the texture coordinates do not cause the texture to repeat.
Note that using a 2D texture for refractions is only really justified if you generate it on the fly (as e.g. Half-Life 2 does); otherwise you should probably use a cube-map texture, which does the normalization for you and gives you a color based on a 3D direction, which is what you need.
Hope this helps... (and, oh yeah, I wrote this from memory, so in case there are any errors, please comment).
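For the cube-map route, you would pass the refracted direction itself as a varying and the fragment shader reduces to something like this (a sketch; sEnvMap is a hypothetical samplerCube bound to an environment cube map):
uniform samplerCube sEnvMap;        // cube map in place of the 2D sTexture (assumption)
varying mediump vec3 RefractDir;    // refracted direction passed from the vertex shader
varying mediump vec4 color;
void main(void)
{
    lowp vec3 refractColor = textureCube(sEnvMap, RefractDir).rgb;  // the direction need not be normalized
    gl_FragColor = vec4(color.xyz + refractColor, 1.0);
}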