I'm creating a terrain mesh. Following this SO answer, I'm trying to migrate my CPU-computed normals to a shader-based version, in order to improve performance by reducing my mesh resolution and using a normal map computed in the fragment shader.
I'm using a MapBox height map for the terrain data. Tiles look like this:
And elevation at each pixel is given by the following formula:
const elevation = -10000.0 + ((red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1);
My original code first creates a dense mesh (256 * 256 squares of 2 triangles each) and then computes triangle and vertex normals. To get a visually satisfying result, I was dividing the elevation by 5000 to match the tile's width & height in my scene (in the future I'll do a proper computation to display the real elevation).
I was drawing with these simple shaders:
Vertex shader:
uniform mat4 u_Model;
uniform mat4 u_View;
uniform mat4 u_Projection;
attribute vec3 a_Position;
attribute vec3 a_Normal;
attribute vec2 a_TextureCoordinates;
varying vec3 v_Position;
varying vec3 v_Normal;
varying mediump vec2 v_TextureCoordinates;
void main() {
v_TextureCoordinates = a_TextureCoordinates;
v_Position = vec3(u_View * u_Model * vec4(a_Position, 1.0));
v_Normal = vec3(u_View * u_Model * vec4(a_Normal, 0.0));
gl_Position = u_Projection * u_View * u_Model * vec4(a_Position, 1.0);
}
Fragment shader:
precision mediump float;
varying vec3 v_Position;
varying vec3 v_Normal;
varying mediump vec2 v_TextureCoordinates;
uniform sampler2D texture;
void main() {
vec3 lightVector = normalize(-v_Position);
float diffuse = max(dot(v_Normal, lightVector), 0.1);
highp vec4 textureColor = texture2D(texture, v_TextureCoordinates);
gl_FragColor = vec4(textureColor.rgb * diffuse, textureColor.a);
}
It was slow but gave visually satisfying results:
Now, I removed all the CPU-based normal computation code and replaced my shaders with these:
Vertex shader:
#version 300 es
precision highp float;
precision highp int;
uniform mat4 u_Model;
uniform mat4 u_View;
uniform mat4 u_Projection;
in vec3 a_Position;
in vec2 a_TextureCoordinates;
out vec3 v_Position;
out vec2 v_TextureCoordinates;
out mat4 v_Model;
out mat4 v_View;
void main() {
v_TextureCoordinates = a_TextureCoordinates;
v_Model = u_Model;
v_View = u_View;
v_Position = vec3(u_View * u_Model * vec4(a_Position, 1.0));
gl_Position = u_Projection * u_View * u_Model * vec4(a_Position, 1.0);
}
Fragment shader:
#version 300 es
precision highp float;
precision highp int;
in vec3 v_Position;
in vec2 v_TextureCoordinates;
in mat4 v_Model;
in mat4 v_View;
uniform sampler2D u_dem;
uniform sampler2D u_texture;
out vec4 color;
const vec2 size = vec2(2.0,0.0);
const ivec3 offset = ivec3(-1,0,1);
float getAltitude(vec4 pixel) {
float red = pixel.x;
float green = pixel.y;
float blue = pixel.z;
return (-10000.0 + ((red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1)) * 6.0; // Why * 6 and not / 5000 ??
}
void main() {
float s01 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.xy));
float s21 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.zy));
float s10 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.yx));
float s12 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.yz));
vec3 va = (vec3(size.xy, s21 - s01));
vec3 vb = (vec3(size.yx, s12 - s10));
vec3 normal = normalize(cross(va, vb));
vec3 transformedNormal = normalize(vec3(v_View * v_Model * vec4(normal, 0.0)));
vec3 lightVector = normalize(-v_Position);
float diffuse = max(dot(transformedNormal, lightVector), 0.1);
highp vec4 textureColor = texture(u_texture, v_TextureCoordinates);
color = vec4(textureColor.rgb * diffuse, textureColor.a);
}
It now loads nearly instantly, but something is wrong:
- in the fragment shader I had to multiply the elevation by 6 rather than divide it by 5000 to get something close to my original code
- the result is not as good; especially when I tilt the scene, the shadows are very dark (the more I tilt, the darker they get):
Can you spot what causes that difference?
EDIT: I created two JSFiddles:
first version with CPU computed vertices normals: http://jsfiddle.net/tautin/tmugzv6a/10
second version with GPU computed normal map: http://jsfiddle.net/tautin/8gqa53e1/42
The problem appears when you play with the tilt slider.
There were three problems I could find.
One you saw and fixed by trial and error: the scale of your height calculation was wrong. On the CPU, your color components vary from 0 to 255, but in GLSL, texture values are normalized to range from 0 to 1, so the correct height calculation is:
return (-10000.0 + ((red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1 * 256.0)) / Z_SCALE;
But for this shader's purpose, the -10000.0 offset doesn't matter, so you can simply do:
return (red * 256.0 * 256.0 + green * 256.0 + blue) * 0.1 * 256.0 / Z_SCALE;
The second problem is that the scale of your x and y coordinates was also wrong. In the CPU code, the distance between two neighboring points is (SIZE * 2.0 / (RESOLUTION + 1)), but on the GPU you had set it to 1. The correct way to define your size variable is:
const float SIZE = 2.0;
const float RESOLUTION = 255.0;
const vec2 size = vec2(2.0 * SIZE / (RESOLUTION + 1.0), 0.0);
Notice that I increased the resolution to 255, because I assume this is what you want (the texture resolution minus one). This is also needed to match the value of offset, which you defined as:
const ivec3 offset = ivec3(-1,0,1);
To use a different RESOLUTION value, you will have to adjust offset accordingly, e.g. for RESOLUTION == 127, offset = ivec3(-2, 0, 2). That is, the offset must be <real texture resolution> / (RESOLUTION + 1), which limits the possibilities for RESOLUTION, since offset must be an integer.
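If you would rather not tie RESOLUTION to the texture size, a workaround is to sample with an explicit UV step instead of integer texel offsets. A minimal sketch (assuming the same u_dem, v_TextureCoordinates and getAltitude as above):
float texelStep = 1.0 / (RESOLUTION + 1.0); // UV distance between neighboring mesh points
float s01 = getAltitude(texture(u_dem, v_TextureCoordinates + vec2(-texelStep, 0.0)));
float s21 = getAltitude(texture(u_dem, v_TextureCoordinates + vec2(texelStep, 0.0)));
float s10 = getAltitude(texture(u_dem, v_TextureCoordinates + vec2(0.0, -texelStep)));
float s12 = getAltitude(texture(u_dem, v_TextureCoordinates + vec2(0.0, texelStep)));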
The third problem is that you used a different normal calculation algorithm on the GPU, which strikes me as having a lower resolution than the one used on the CPU, because you use the four outer pixels of a cross but ignore the central one. That doesn't seem to be the full story, though; I can't explain why they are so different. I tried to implement the exact CPU algorithm as I thought it should be, but it yielded different results. Instead, I had to use the following algorithm, which is similar but not exactly the same, to get an almost identical result (if you increase the CPU resolution to 255):
float s11 = getAltitude(texture(u_dem, v_TextureCoordinates));
float s21 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.zy));
float s10 = getAltitude(textureOffset(u_dem, v_TextureCoordinates, offset.yx));
vec3 va = (vec3(size.xy, s21 - s11));
vec3 vb = (vec3(size.yx, s10 - s11));
vec3 normal = normalize(cross(va, vb));
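For reference, the constants and the height function with all three fixes applied would look something like this (a sketch; Z_SCALE = 5000.0 is an assumption matching the divisor from the CPU version, and the normal is then computed from s11/s21/s10 exactly as above):
const float SIZE = 2.0;
const float RESOLUTION = 255.0;
const float Z_SCALE = 5000.0; // assumed: the same divisor used by the CPU code
const vec2 size = vec2(2.0 * SIZE / (RESOLUTION + 1.0), 0.0);
const ivec3 offset = ivec3(-1, 0, 1);
float getAltitude(vec4 pixel) {
// texture values are normalized to [0, 1], hence the extra * 256.0
return (pixel.r * 256.0 * 256.0 + pixel.g * 256.0 + pixel.b) * 0.1 * 256.0 / Z_SCALE;
}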
This is the original CPU solution, but with RESOLUTION=255: http://jsfiddle.net/k0fpxjd8/
This is the final GPU solution: http://jsfiddle.net/7vhpuqd8/
I have a simple normal map shader for 7 lights, and it works on the entire screen. How do I make it work only within a limited distance? I tried calculating the distance between the light and the pixel, and using a simple 'if' when the distance is too big, but that didn't work for me.
varying vec4 v_color;
varying vec2 v_texCoords;
uniform vec3 lightColor[7];
uniform vec3 light[7];
uniform sampler2D u_texture;
uniform sampler2D u_normals;
uniform vec2 resolution;
uniform bool useNormals;
uniform bool useShadow;
uniform float strength;
uniform bool yInvert;
uniform bool xInvert;
uniform vec4 ambientColor;
void main() {
// sample color & normals from our textures
vec4 color = texture2D(u_texture, v_texCoords.st);
vec3 nColor = texture2D(u_normals, v_texCoords.st).rgb;
// some bump map programs will need the Y value flipped..
nColor.g = yInvert ? 1.0 - nColor.g : nColor.g;
nColor.r = xInvert ? 1.0 - nColor.r : nColor.r;
// this is for debugging purposes, allowing us to lower the intensity of our bump map
vec3 nBase = vec3(0.5, 0.5, 1.0);
nColor = mix(nBase, nColor, strength);
// normals need to be converted to [-1.0, 1.0] range and normalized
vec3 normal = normalize(nColor * 2.0 - 1.0);
vec3 sum = vec3(0.0);
for ( int i = 0; i < 7; ++i ){
vec3 currentLight = light[i];
vec3 currentLightColor = lightColor[i];
// here we do a simple distance calculation
vec3 deltaPos = vec3( (currentLight.xy - gl_FragCoord.xy) / resolution.xy, currentLight.z );
vec3 lightDir = normalize(deltaPos * 1);
float lambert = clamp(dot(normal, lightDir), 0.0, 1.0);
vec3 result = color.rgb;
result = (currentLightColor.rgb * lambert);
result *= color.rgb;
sum += result;
}
vec3 ambient = ambientColor.rgb * ambientColor.a;
vec3 intensity = min(vec3(1.0), ambient + sum); // don't remember if min is critical, but I think it might be to avoid shifting the hue when multiple lights add up to something very bright.
vec3 finalColor = color.rgb * intensity;
//finalColor *= (ambientColor.rgb * ambientColor.a);
gl_FragColor = v_color * vec4(finalColor, color.a);
}
EDIT: screenshots (my map editor screen; close-up of details).
You need to measure the length of the light delta vector and use that to attenuate.
Right after the lightDir line, you can put something like this, but you'll have to adjust the FALLOFF constant to get the distance you want. FALLOFF must be greater than 0. As a starting point, a value of 0.1 will give you a light radius of about 4 units. Smaller values enlarge the radius. You might even want to define it as a parameter of each light (make them vec4s).
float distance = length(deltaPos);
float attenuation = 1.0 / (1.0 + FALLOFF * distance * distance);
float lambert = attenuation * clamp(dot(normal, lightDir), 0.0, 1.0);
This attenuation formula has a bell curve. If you want the curve to have a pointy tip, which is maybe more realistic (though probably pointless for 2D lighting), you can add a second parameter (which you can initially give a value of 0.1 and increase from there):
float attenuation = 1.0 / (1.0 + SHARPNESS * distance + FALLOFF * distance * distance);
Someone on this question posted this helpful chart you can play with to visually see how the parameters change the curve.
Also, don't multiply by an integer. This will cause the shader to fail to compile on some devices:
vec3 lightDir = normalize(deltaPos * 1); // The integer 1 is unsupported.
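Putting the pieces together, the loop body might end up looking like this (a sketch; FALLOFF is a tuning constant as described above, not part of the original shader):
const float FALLOFF = 0.1; // assumed starting value; smaller values enlarge the radius
// ... inside the light loop:
vec3 deltaPos = vec3((currentLight.xy - gl_FragCoord.xy) / resolution.xy, currentLight.z);
vec3 lightDir = normalize(deltaPos); // float-only math, no integer multiply
float distance = length(deltaPos);
float attenuation = 1.0 / (1.0 + FALLOFF * distance * distance);
float lambert = attenuation * clamp(dot(normal, lightDir), 0.0, 1.0);
sum += currentLightColor.rgb * lambert * color.rgb;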
I'm facing a very strange problem which seems to originate from a simple multiplication in the fragment shader.
I'm trying to calculate shadows using a framebuffer that renders only the depths from the light's perspective, which is a common technique that is easier for beginners to implement.
Fragment Shader:
#version 330 core
uniform sampler2D parquet;
uniform samplerCube depthMaps[15];
in vec2 TexCoords;
out vec4 color;
in vec3 Normal;
in vec3 FragPos;
uniform vec3 lightPos[15];
uniform vec3 lightColor[15];
uniform float intensity[15];
uniform float far_plane;
uniform vec3 viewPos;
float ShadowCalculation(vec3 fragPos, vec3 lightPost, samplerCube depthMaps)
{
vec3 fragToLight = fragPos - lightPost;
float closestDepth = texture(depthMaps, fragToLight).r;
// original depth value
closestDepth *= far_plane;
float currentDepth = length(fragToLight);
float bias = 0.05;
float shadow = currentDepth - bias > closestDepth ? 1.0 : 0.0;
return shadow;
}
void main()
{
vec3 norm = normalize(Normal);
vec3 lightDir = normalize(lightPos[0] - FragPos);
float diff = max(dot(norm, lightDir), 0.0);
vec3 diffuse = diff * lightColor[0];
float _distance = length(vec3(FragPos - lightPos[0]));
float attenuation = 1.0 / pow(_distance +1, 2);
if(attenuation > 1.0) attenuation = 1.0;
float intens = intensity[0];
if(intensity[0] > 150) intens = 150.0f;
vec3 resulta = (diffuse * attenuation) * intens;
//texture color
vec3 tCol = vec3(texture(parquet, TexCoords));
//gamma correction
tCol.rgb = pow(tCol.rgb, vec3(0.45));
vec3 colors = resulta * tCol * (1.0f - ShadowCalculation(FragPos, lightPos[0], depthMaps[0]));
color = vec4(colors, 1.0f);
}
The last multiplication inside main() behaves strangely. Multiplying the result of the diffuse light by the texture color renders nicely (so we have no shadows, just diffuse lighting):
//works
vec3 colors = resulta * tCol;
Multiplying the diffuse light by the shadow result also renders nicely (now we have no textures):
//works
vec3 colors = resulta * (1.0f - ShadowCalculation(FragPos, lightPos[0], depthMaps[0]));
Doing it all together renders just a black screen. I've tried all sorts of things in the fragment shader, but none worked.
Lastly, here is the fragment shader used to render the cubemap:
#version 330 core
in vec4 FragPos;
uniform vec3 lightPos;
uniform float far_plane;
void main()
{
float lightDistance = length(FragPos.xyz - lightPos);
// map to [0;1] range by dividing by far_plane
lightDistance = lightDistance / far_plane;
gl_FragDepth = lightDistance;
}
Can you spot any logical error? I'm using uniform arrays since I'll later need multiple lights at once.
After a while spent trying to visually debug the shader's output, I finally found the error: I was binding the depth map's cubemap texture incorrectly, and this caused the strange behaviour I was seeing in the last multiplication.
Lesson learned: it's not always the fragment shader's fault.
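A quick way to catch this class of bug is to visualize the suspect term on its own, for example (a debugging sketch, not part of the original shader):
// Output the raw shadow term as grayscale; an all-black or all-white screen
// here points at the cubemap sampler/binding rather than the lighting math.
float shadow = ShadowCalculation(FragPos, lightPos[0], depthMaps[0]);
color = vec4(vec3(1.0 - shadow), 1.0);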
I am working on the beginnings of omnidirectional shadow mapping in my engine. For now I am only producing one shadowmap as a test. I am getting an odd result when using my current shaders. Here is a screenshot which shows the problem:
I am using a near value of 0.5 and a far value of 5.0 in the projection matrix for the shadowmap render. As near as I can tell, any value with a light-space z larger than my far plane distance is being computed by my fragment shader as in shadow.
This is my fragment shader:
in vec2 st;
uniform sampler2D colorTexture;
uniform sampler2D normalTexture;
uniform sampler2D depthTexture;
uniform sampler2D shadowmapTexture;
uniform mat4 invProj;
uniform mat4 lightProj;
uniform vec3 lightPosition;
out vec3 color;
void main () {
vec3 clipSpaceCoords;
clipSpaceCoords.xy = st.xy * 2.0 - 1.0;
clipSpaceCoords.z = texture(depthTexture, st).x * 2.0 - 1.0;
vec4 position = invProj * vec4(clipSpaceCoords,1.0);
position.xyz /= position.w;
vec4 lightSpace = lightProj * vec4(position.xyz,1.0);
lightSpace.xyz /= lightSpace.w;
lightSpace.xyz = lightSpace.xyz * 0.5 + 0.5;
float lightDepth = texture(shadowmapTexture, lightSpace.xy).x;
vec3 normal = texture(normalTexture, st).xyz; // .xyz needed: texture() returns a vec4
vec3 diffuse;
float shadowFactor = 1.0;
if(lightSpace.w > 0.0 && lightSpace.z > lightDepth+0.0042) {
shadowFactor = 0.2;
}
else {
float k = 0.00001;
vec3 distanceToLight = lightPosition - position.xyz;
float distanceLength = length(distanceToLight);
float attenuation = (1.0 / (1.0 + (0.1 * distanceLength) + k * (distanceLength * distanceLength)));
float diffuseTemp = max(dot(normalize(normal), normalize(distanceToLight)), 0.0);
diffuse = vec3(1.0, 1.0, 1.0) * attenuation * diffuseTemp;
}
vec3 gamma = vec3(1.0/2.2);
color = pow(texture(colorTexture, st).xyz*shadowFactor+diffuse, gamma);
}
How can I fix this issue (Other than increasing my far plane distance)?
One other question, as this is the first time I have attempted shadowmapping: am I doing the lighting in relation to the shadows correctly?
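For the z-beyond-far-plane issue specifically, a common guard (offered here only as a sketch, using the variables from the shader above) is to treat fragments that project outside the shadow map's depth range as lit:
if (lightSpace.w > 0.0 && lightSpace.z <= 1.0 && lightSpace.z > lightDepth + 0.0042) {
shadowFactor = 0.2; // only fragments inside the [0, 1] depth range can be shadowed
}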
When I run the program on my computer, it works exactly as I expect. However, when I try to run it on my campus lab computers, the fragment shader is all kinds of strange.
Right now it's just a simple Phong lighting calculation with a point source light at the origin. However, on the lab computers, it looks more like some strange cross between cel shading and a flashlight. I run an ATI graphics card, while the lab computers run NVIDIA. The shading works as expected on Macs as well (no idea about the graphics card).
The NVIDIA cards support up to OpenGL 3.1, though on this Linux distribution I run it at 2.1. I've tried clamping the shader version to GLSL 1.2, among a slew of other things, but they all achieve the same results. The strangest thing is that when I do vertex shading rather than pixel shading, the result is the same on both computers... I've exhausted my ideas about how to fix this.
Here's the vertex shader:
#version 120
attribute vec2 aTexCoord;
attribute vec3 aPosition;
attribute vec3 aNormal;
attribute vec3 camLoc;
attribute float mat;
varying vec3 vColor;
varying vec2 vTexCoord;
varying vec3 normals;
varying vec3 lightPos;
varying vec3 camPos;
varying float material;
uniform mat4 uProjMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uModelMatrix;
uniform mat4 uNormMatrix;
uniform vec3 uLight;
uniform vec3 uColor;
void main()
{
//set up object position in world space
vec4 vPosition = uModelMatrix * vec4(aPosition, 1.0);
vPosition = uViewMatrix * vPosition;
vPosition = uProjMatrix * vPosition;
gl_Position = vPosition;
//set up light vector in world space
vec4 vLight = vec4(uLight, 1.0) * uViewMatrix;
lightPos = vLight.xyz - vPosition.xyz;
//set up normal vector in world space
normals = (vec4(aNormal,1.0) * uNormMatrix).xyz;
//set up view vector in world space
camPos = camLoc.xyz - vPosition.xyz;
//set up material shininess
material = mat;
//pass color and vertex
vColor = uColor;
vTexCoord = aTexCoord;
}
And the fragment shader:
#version 120
varying vec3 lightPos;
varying vec3 normals;
varying vec3 camPos;
varying vec2 vTexCoord;
varying vec3 vColor;
varying float material;
uniform sampler2D uTexUnit;
uniform mat4 uProjMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uModelMatrix;
void main(void)
{
float diffuse;
float diffuseRed, diffuseBlue, diffuseGreen;
float specular;
float specRed, specBlue, specGreen;
vec3 lightColor = vec3(0.996, 0.412, 0.706); //color of light (HOT PINK) **UPDATE WHEN CHANGED**
vec4 L; //light vector
vec4 N; //normal vector
vec4 V; //view vector
vec4 R; //reflection vector
vec4 H; //halfway vector
float red;
float green;
float blue;
vec4 texColor1 = texture2D(uTexUnit, vTexCoord);
//diffuse calculations
L = vec4(normalize(lightPos),0.0);
N = vec4(normalize(normals),0.0);
N = uModelMatrix * N;
//calculate RGB of diffuse light
diffuse = max(dot(N,L),0.0);
diffuseRed = diffuse*lightColor[0];
diffuseBlue = diffuse*lightColor[1];
diffuseGreen = diffuse*lightColor[2];
//specular calculations
V = vec4(normalize(camPos),0.0);
V = uModelMatrix * V;
R = vec4(-1.0 * L.x, -1.0 * L.y, -1.0 * L.z, 0.0);
float temp = 2.0*dot(L,N);
vec3 tempR = vec3(temp * N.x, temp * N.y, temp * N.z);
R = vec4(R.x + tempR.x, R.y + tempR.y, R.z + tempR.z, 0.0);
R = normalize(R);
H = normalize(L + V);
specular = dot(H,R);
specular = pow(specular,material);
specRed = specular*lightColor[0];
specBlue = specular*lightColor[1];
specGreen = specular*lightColor[2];
//set new colors
//textures
red = texColor1[0]*diffuseRed + texColor1[0]*specRed*0.7 + texColor1[0]*.05;
green = texColor1[1]*diffuseBlue + texColor1[1]*specBlue*0.7 + texColor1[1]*.05;
blue = texColor1[2]*diffuseGreen + texColor1[2]*specGreen*0.7 + texColor1[2]*.05;
//colors
red = vColor[0]*diffuseRed + vColor[0]*specRed*0.7 + vColor[0]*.05;
green = vColor[1]*diffuseBlue + vColor[1]*specBlue*0.7 + vColor[1]*.05;
blue = vColor[2]*diffuseGreen + vColor[2]*specGreen*0.7 + vColor[2]*.05;
gl_FragColor = vec4(red, green, blue, 1.0);
}
Your code looks wrong in many, many ways.
The following code is wrong. The comment is misleading (it's in clip space, not world space). But the major problem is that you overwrite vPosition with the clip-space coordinates while using it as if it were in view space several lines after this part.
//set up object position in world space
vec4 vPosition = uModelMatrix * vec4(aPosition, 1.0);
vPosition = uViewMatrix * vPosition;
vPosition = uProjMatrix * vPosition;
gl_Position = vPosition;
The following code is wrong, too. First, you need matrix * vector, not vector * matrix. But also, the comment says world space, yet you compute vLight in view space and subtract vPosition, which is in clip space!
//set up light vector in world space
vec4 vLight = vec4(uLight, 1.0) * uViewMatrix;
lightPos = vLight.xyz - vPosition.xyz;
Again here, matrix * vector:
//set up normal vector in world space
normals = (vec4(aNormal,1.0) * uNormMatrix).xyz;
Now what is this? camPos is computed in world coordinates, yet you apply the model matrix which converts model space to world space.
//specular calculations
V = vec4(normalize(camPos),0.0);
V = uModelMatrix * V;
I have no idea why your shader performs differently on different computers, but I am pretty sure none of these computers shows anything remotely close to the expected result.
You really need to read your shaders again, and each time you see a vector, ask yourself “in what coordinate space is this vector meaningful?” and each time you see a matrix, ask yourself “what coordinate spaces does this matrix convert from and to?”
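For illustration, a consistent view-space version of those vertex-shader computations might look like this (a sketch; it assumes uNormMatrix is the inverse-transpose of the modelview matrix, which the original code does not confirm):
// all of the following vectors live in view (eye) space
vec4 viewPos = uViewMatrix * uModelMatrix * vec4(aPosition, 1.0);
gl_Position = uProjMatrix * viewPos;
lightPos = (uViewMatrix * vec4(uLight, 1.0)).xyz - viewPos.xyz; // fragment-to-light vector
normals = mat3(uNormMatrix) * aNormal; // uses the GLSL 1.20 matrix-from-matrix constructor
camPos = -viewPos.xyz; // the camera sits at the origin in view space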
I have a query regarding refraction.
I am using a texture image for refraction (refer to test_car.png).
But somehow the texture is getting multiplied, giving a distorted image (refer to Screenshot.png).
I am using the following shader:
attribute highp vec4 vertex;
attribute mediump vec3 normal;
uniform highp mat4 matrix;
uniform highp vec3 diffuse_color;
uniform highp mat3 matrixIT;
uniform mediump mat4 matrixMV;
uniform mediump vec3 EyePosModel;
uniform mediump vec3 LightDirModel;
varying mediump vec4 color;
const mediump float cShininess = 3.0;
const mediump float cRIR = 1.015;
varying mediump vec2 RefractCoord;
vec3 SpecularColor = vec3(1.0, 1.0, 1.0);
void main(void)
{
vec3 toLight = normalize(vec3(1.0, 1.0, 1.0));
mediump vec3 eyeDirModel = normalize(vertex.xyz - EyePosModel);
mediump vec3 refractDir = refract(eyeDirModel, normal, cRIR);
refractDir = (matrix * vec4(refractDir, 0.0)).xyw;
RefractCoord = 0.5 * (refractDir.xy / refractDir.z) + 0.5;
vec3 normal_cal = normalize(matrixIT * normal);
float NDotL = max(dot(normal_cal, toLight), 0.0);
vec4 ecPosition = normalize(matrixMV * vertex);
vec3 eyeDir = vec3(1.0, 1.0, 1.0);
float NDotH = 0.0;
vec3 SpecularLight = vec3(0.0, 0.0, 0.0);
if (NDotL > 0.0)
{
vec3 halfVector = normalize(eyeDirModel + LightDirModel);
float NDotH = max(dot(normal_cal, halfVector), 0.0);
float specular = pow(NDotH, 3.0);
SpecularLight = specular * SpecularColor;
}
color = vec4((NDotL * diffuse_color.xyz) + (SpecularLight.xyz), 1.0);
gl_Position = matrix * vertex;
}
And the fragment shader:
varying mediump vec2 RefractCoord;
uniform sampler2D sTexture;
varying mediump vec4 color;
void main(void)
{
lowp vec3 refractColor = texture2D(sTexture, RefractCoord).rgb;
gl_FragColor = vec4(color.xyz + refractColor, 1.0);
}
Can anyone let me know the solution to this problem?
Thanks for any help.
Sorry guys, I am not able to attach the images.
It seems that you are calculating the refraction vector incorrectly. However, the answer to your question is already in its title. If you are looking at an ellipsoid, the rays from the view span a cone wrapping the ellipsoid. But after the refraction, the cone may be much wider, reaching beyond the edges of your image, giving texture coordinates outside the 0 to 1 range and leading to the texture being wrapped. So we need to take care of that as well.
First, the refraction coordinate should be calculated in the vertex shader as follows:
vec3 eyeDirModel = normalize((-vertex * matrix).xyz);
vec3 refractDir = refract(eyeDirModel, normal, cRIR);
RefractCoord = normalize((matrix * vec4(refractDir, 0.0)).xyz); // no dehomog!
RefractCoord now contains refracted eye-space vectors (note that it is now a vec3, so its varying declaration must change accordingly). This counts on "matrix" being the modelview matrix (that is not clear from your code, but I suspect it is). You could possibly skip the normalization if you want the shader to run faster; it shouldn't cause noticeable errors. Now a little bit of modification to your fragment shader:
vec3 refractColor = texture2D(sTexture, normalize(RefractCoord).xy * .5 + .5).rgb;
Here, using normalize() makes sure that the texture coordinates do not cause the texture to repeat.
Note that using a 2D texture for refractions is really only justified if you generate it on the fly (as e.g. Half-Life 2 does); otherwise you should probably use a cube-map texture, which does the normalization for you and gives you a color based on a 3D direction - which is what you need.
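With a cube map, the fragment shader reduces to a direct lookup. A minimal sketch, assuming a hypothetical samplerCube uniform named sCubeTexture and the vec3 RefractCoord computed above:
varying mediump vec3 RefractCoord;
varying mediump vec4 color;
uniform samplerCube sCubeTexture; // hypothetical cube-map sampler
void main(void)
{
lowp vec3 refractColor = textureCube(sCubeTexture, RefractCoord).rgb;
gl_FragColor = vec4(color.xyz + refractColor, 1.0);
}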
Hope this helps... (and, oh yeah, I wrote this from memory; in case there are any errors, please comment).