Ray box intersection computing wrong distances - opengl

I have a ray box intersection algorithm that supposedly returns the distance to the intersected plane; however, it is not meeting my expectations.
I am outputting the absolute value of the intersection point's position as a color. My expectation is that the color should be the same wherever the camera is, since the intersection point hasn't moved.
However, my cube has different colors depending on where it is viewed from:
Front view:
Slightly up and right view (same face):
As you can see, the color changes based on the camera position.
I am ray tracing the entire structure in the fragment shader as follows:
#version 430
in vec2 f_coord;
out vec4 fragment_color;
uniform vec3 camera_pos;
uniform float aspect_ratio;
uniform float cube_dim;
#define EPSILON 0.0001
// Check whether the position is inside of the specified box
bool inBoxBounds(vec3 corner, float size, vec3 position)
{
bool inside = true;
//Put the position in the coordinate frame of the box
position-=corner;
//The point is inside only if all of its components are inside
for(int i=0; i<3; i++)
{
inside = inside && (position[i] > -EPSILON);
inside = inside && (position[i] < size+EPSILON);
}
return inside;
}
//Calculate the distance to the intersection with a box, or infinity if the box cannot be hit
float boxIntersection(vec3 origin, vec3 dir, vec3 corner0, float size)
{
dir = normalize(dir);
//calculate opposite corner
vec3 corner1 = corner0 + vec3(size,size,size);
//Set the ray plane intersections
float coeffs[6];
coeffs[0] = (corner0.x - origin.x)/(dir.x);
coeffs[1] = (corner0.y - origin.y)/(dir.y);
coeffs[2] = (corner0.z - origin.z)/(dir.z);
coeffs[3] = (corner1.x - origin.x)/(dir.x);
coeffs[4] = (corner1.y - origin.y)/(dir.y);
coeffs[5] = (corner1.z - origin.z)/(dir.z);
float t = 1.f/0.f;
//Check for the smallest valid intersection distance
for(uint i=0; i<6; i++)
t = coeffs[i]>=0&& inBoxBounds(corner0,size,origin+dir*coeffs[i])?
min(coeffs[i],t) : t;
return t;
}
void main()
{
vec3 r = vec3(f_coord.x, f_coord.y, 1.f/tan(radians(40)));
vec3 dir = r;
dir.y /= aspect_ratio;
r = camera_pos;
float t = boxIntersection(r, dir, vec3(-cube_dim), cube_dim*2);
if(isinf(t))
discard;
r += dir*(t);
fragment_color = vec4(abs(r)/100,0);
}
Edit:
f_coord is the normalized coordinate, ranging from -1 to 1 (the normalized screen coordinate in the OpenGL window)
camera_pos is the position of the camera in the 3D world coordinate system.

The reason is this line in boxIntersection():
dir = normalize(dir);
The t that you are calculating is the ray parameter in x = origin + t * dir. If you normalize dir, then t is equal to the Euclidean distance.
But in main(), you use a different dir:
r += dir*(t);
Here, dir is not normalized, hence you get a different intersection point.
The solution is simple: Do not normalize at all. Or if you need the actual distance, normalize in main() instead of boxIntersection(). Alternatively, you can make the dir parameter an inout parameter. This way, any change to dir from within the function is reflected back to the caller:
float boxIntersection(vec3 origin, inout vec3 dir, vec3 corner0, float size)
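For illustration, a minimal sketch of the second option: normalize dir in main() before the call, so the t returned by boxIntersection() and the dir used afterwards refer to the same unit-length direction (the rest of the shader stays as in the question):
void main()
{
    vec3 r = vec3(f_coord.x, f_coord.y, 1.f/tan(radians(40)));
    vec3 dir = r;
    dir.y /= aspect_ratio;
    dir = normalize(dir); // normalize once, here, so t stays a Euclidean distance
    r = camera_pos;
    float t = boxIntersection(r, dir, vec3(-cube_dim), cube_dim*2);
    if(isinf(t))
        discard;
    r += dir*t;           // dir is the same unit vector the intersection test used
    fragment_color = vec4(abs(r)/100,0);
}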

Related

Godot shader swap materials by world position on 3d mesh

I am trying to replicate something similar to this from Unity in Godot Engine with shaders; however, I am not able to find a solution. Calculating the position of the effect is the problem. How can I get the position in Godot, where I don't have access to the worldPos variable used in the video? A full code snippet of the shader would be really appreciated.
Currently, my shader code looks like this. ob_position is the position passed from the node.
shader_type spatial;
uniform vec2 ob_position = vec2(1, 0.68);
uniform float ob_radius = 0.01;
float circle(vec2 position, float radius, float feather)
{
return smoothstep(radius, radius + feather, length(position - vec2(0.5)));
}
void fragment() {
ALBEDO.rgb = vec3(circle(UV * (ob_position), ob_radius, 0.001) );
}
The video says:
Send the sphere position to the shader in script.
We can do that. First define a uniform:
uniform vec3 sphere_position;
And we can set it from code:
material.set_shader_param("sphere_position", global_transform.origin)
Since you need to set this every time the sphere moves, you can use NOTIFICATION_TRANSFORM_CHANGED which you enable by calling set_notify_local_transform(true).
Get the distance between the sphere and World Position.
To do that we need to figure out the world position of the fragment. Let us start by looking at the Fragment Build-ins. We find that:
VERTEX is the position of the fragment in view space.
CAMERA_MATRIX is the transform from view space to world space.
Yes, the naming is confusing.
So we can do this (in fragment):
vec3 pixel_world_pos = (CAMERA_MATRIX * vec4(VERTEX, 1.0)).xyz;
You can use this to debug: ALBEDO.rgb = pixel_world_pos;. In general, output whatever variable you want to visualize for debugging to ALBEDO.
And now the distance is:
float dist = distance(sphere_position, pixel_world_pos);
Control the size by dividing by radius.
While we don't have a direct translation for the code in the video… sure, we can divide by the radius (dist / radius), where radius would be a uniform float.
Create a cutoff with Step.
That would be something like this: step(0.5, dist / radius).
Honestly, I would rather do this: step(radius, dist).
Your mileage may vary.
Lerp two different textures over the cutoff.
For that we can use mix. But first, define your textures as uniform sampler2D. Then you can do something like this:
float threshold = step(radius, dist);
ALBEDO.rgb = mix(texture(tex1, UV).rgb, texture(tex2, UV).rgb, threshold);
Moving worldspace noise.
Add one more uniform sampler2D and set a NoiseTexture (make sure to set its noise and set its seamless property to true), and then we can query it with the world coordinates we already have.
float noise_value = texture(noise_texture, pixel_world_pos.xy + vec2(TIME)).r;
Add worldspace to noise.
I'm not sure what they mean. But from the visual, they use the noise to distort the cutoff. I'm not sure if this yields the same result, but it looks good to me:
vec3 pixel_world_pos = (CAMERA_MATRIX * vec4(VERTEX, 1.0)).xyz;
float noise_value = texture(noise_texture, pixel_world_pos.xy + vec2(TIME)).r;
float dist = distance(sphere_position, pixel_world_pos) + noise_value;
float threshold = step(radius, dist);
ALBEDO.rgb = mix(texture(tex1, UV).rgb, texture(tex2, UV).rgb, threshold);
Add a line to Emission (glow).
I don't understand what they did originally, so I came up with my own solution:
EMISSION = vec3(step(dist, edge + radius) * step(radius, dist));
What is going on here is that we will have a white EMISSION when dist < edge + radius and radius < dist. To reiterate, we will have white EMISSION when the distance is greater than the radius (radius < dist) and lesser than the radius plus some edge (dist < edge + radius). The comparisons become step functions, which return 0.0 or 1.0, and the AND operation is a multiplication.
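For clarity, the same band written with an explicit condition instead of multiplied step functions would be (a hypothetical equivalent, not the form used in the shader below):
EMISSION = (dist >= radius && dist <= edge + radius) ? vec3(1.0) : vec3(0.0);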
Reveal object by clipping instead of adding a second texture.
I suppose that means there is another version of the shader that either uses discard or ALPHA and it is used for other objects.
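As a guess at that variant, a minimal sketch using the same dist and radius as the shader below, discarding everything outside the cutoff (flip the comparison to clip the inside instead):
float threshold = step(radius, dist);
if (threshold > 0.5) {
    discard; // outside the radius: clip instead of blending a second texture
}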
This is the shader I wrote to test this:
shader_type spatial;
uniform vec3 sphere_position;
uniform sampler2D noise_texture;
uniform sampler2D tex1;
uniform sampler2D tex2;
uniform float radius;
uniform float edge;
void fragment()
{
vec3 pixel_world_pos = (CAMERA_MATRIX * vec4(VERTEX, 1.0)).xyz;
float noise_value = texture(noise_texture, pixel_world_pos.xy + vec2(TIME)).r;
float dist = distance(sphere_position, pixel_world_pos) + noise_value;
float threshold = step(radius, dist);
ALBEDO.rgb = mix(texture(tex1, UV).rgb, texture(tex2, UV).rgb, threshold);
EMISSION = vec3(step(dist, edge + radius) * step(radius, dist));
}
The answer from Theraot was a lifesaver for me; however, I also needed support for multiple positions, using arrays: uniform vec3 sphere_position[];
So I came up with this:
shader_type spatial;
uniform uint ob_position_size;
uniform vec3 sphere_position[2];
uniform sampler2D noise_texture;
uniform sampler2D tex1;
uniform float radius;
uniform float edge;
void fragment()
{
vec3 pixel_world_pos = (INV_VIEW_MATRIX * vec4(VERTEX, 1.0)).xyz;
float noise_value = texture(noise_texture, pixel_world_pos.xy + vec2(TIME)).r;
ALBEDO = texture(SCREEN_TEXTURE, SCREEN_UV).rgb;
for(int i = 0; i < sphere_position.length(); i++) {
float dist = distance(sphere_position[i], pixel_world_pos) + noise_value;
float threshold = step(radius, dist);
ALBEDO.rgb = mix(texture(tex1, UV).rgb, ALBEDO.rgb, threshold);
//EMISSION = vec3(step(dist, edge + radius) * step(radius, dist));
}
}

How to generate camera rays for ray casting

I am trying to make a simple voxel engine with OpenGL and C++. My first step is to send out rays from the camera and detect if a ray intersects with something (for testing purposes it's just two planes). I have got it working without camera rotation by creating a full-screen quad and programming the fragment shader to send out a ray for every fragment (for now I'm just assuming a fragment is a pixel) in the direction (texCoord.x, texCoord.y, -1). Now I am trying to implement camera rotation.
I have tried to generate a rotation matrix on the CPU and send it to the shader, which multiplies it with every ray. However, when I rotate the camera, the planes start to stretch in a way I can only describe with this video:
https://www.youtube.com/watch?v=6NScMwnPe8c
Here is the code that creates the matrix and is run every frame:
float pi = 3.141592;
// camRotX and Y are defined elsewhere and can be controlled from the keyboard during runtime.
glm::vec3 camEulerAngles = glm::vec3(camRotX, camRotY, 0);
std::cout << "X: " << camEulerAngles.x << " Y: " << camEulerAngles.y << "\n";
// Convert to radians
camEulerAngles.x = camEulerAngles.x * pi / 180;
camEulerAngles.y = camEulerAngles.y * pi / 180;
camEulerAngles.z = camEulerAngles.z * pi / 180;
// Generate quaternion
glm::quat camRotation;
camRotation = glm::quat(camEulerAngles);
// Generate rotation matrix from quaternion
glm::mat4 camToWorldMatrix = glm::toMat4(camRotation);
// No transformation matrix is created because the rays should be relative to 0,0,0
// Send the rotation matrix to the shader
int camTransformMatrixID = glGetUniformLocation(shader, "cameraTransformationMatrix");
glUniformMatrix4fv(camTransformMatrixID, 1, GL_FALSE, glm::value_ptr(camToWorldMatrix));
And the fragment shader:
#version 330 core
in vec4 texCoord;
layout(location = 0) out vec4 color;
uniform vec3 cameraPosition;
uniform vec3 cameraTR;
uniform vec3 cameraTL;
uniform vec3 cameraBR;
uniform vec3 cameraBL;
uniform mat4 cameraTransformationMatrix;
uniform float fov;
uniform float aspectRatio;
float pi = 3.141592;
int RayHitCell(vec3 origin, vec3 direction, vec3 cellPosition, float cellSize)
{
if(direction.z != 0)
{
float multiplicationFactorFront = cellPosition.z - origin.z;
if(multiplicationFactorFront > 0){
vec2 interceptFront = vec2(direction.x * multiplicationFactorFront + origin.x,
direction.y * multiplicationFactorFront + origin.y);
if(interceptFront.x > cellPosition.x && interceptFront.x < cellPosition.x + cellSize &&
interceptFront.y > cellPosition.y && interceptFront.y < cellPosition.y + cellSize)
{
return 1;
}
}
float multiplicationFactorBack = cellPosition.z + cellSize - origin.z;
if(multiplicationFactorBack > 0){
vec2 interceptBack = vec2(direction.x * multiplicationFactorBack + origin.x,
direction.y * multiplicationFactorBack + origin.y);
if(interceptBack.x > cellPosition.x && interceptBack.x < cellPosition.x + cellSize &&
interceptBack.y > cellPosition.y && interceptBack.y < cellPosition.y + cellSize)
{
return 2;
}
}
}
return 0;
}
void main()
{
// For now I'm not accounting for FOV and aspect ratio because I want to get the rotation working first
vec4 beforeRotateRayDirection = vec4(texCoord.x,texCoord.y,-1,0);
// Apply the rotation matrix that was generated on the cpu
vec3 rayDirection = vec3(cameraTransformationMatrix * beforeRotateRayDirection);
int t = RayHitCell(cameraPosition, rayDirection, vec3(0,0,5), 1);
if(t == 1)
{
// Hit front plane
color = vec4(0, 0, 1, 0);
}else if(t == 2)
{
// Hit back plane
color = vec4(0, 0, 0.5, 0);
}else{
// background color
color = vec4(0, 1, 0, 0);
}
}
Okay. It's really hard to know what is wrong, but I will try nonetheless.
Here are a few tips and notes:
1) You can debug directions by mapping them to RGB colors. Keep in mind you should normalize the vectors and map from (-1,1) to (0,1). Just do the dir * 0.5 + 0.5 type of thing. Example:
color = vec4(normalize(rayDirection) * 0.5 + 0.5, 1.0);
2) You can get the rotation matrix in a more straightforward manner. Start from an identity quaternion and apply the rotations explicitly: first around the Y axis (horizontal look), and only then around the X axis (vertical look). Keep in mind that the rotation order is implementation dependent if you initialize from Euler angles. Use mat4_cast to avoid the experimental glm extension (gtx) whenever possible. Example:
// Define rotation quaternion starting from look rotation
glm::quat camRotation = glm::quat(glm::vec3(0, 0, 0));
camRotation = glm::rotate(camRotation, glm::radians(camRotY), glm::vec3(0, 1, 0));
camRotation = glm::rotate(camRotation, glm::radians(camRotX), glm::vec3(1, 0, 0));
glm::mat4 camToWorldMatrix = glm::mat4_cast(camRotation);
3) Your beforeRotateRayDirection is a vector that (probably) ranges from (-1,-1,-1) all the way to (1,1,-1), which is not normalized; the length of (1,1,1) is √3 ≈ 1.732. Be sure you have taken that into account in your collision math, or just normalize the vector.
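For example (a minimal sketch reusing the names from your shader):
vec3 rayDirection = normalize(vec3(cameraTransformationMatrix * beforeRotateRayDirection));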
My partial answer so far...
Your collision test is a bit weird... It appears you want to cast the ray into the Z plane for the given cell position (but twice, once for the front and once for the back). I have reviewed your code logic and it makes some sense, but without the vertex shader (and thus without knowing the texCoord value range) it is not possible to be sure. You might want to rethink your logic into something like this:
int RayHitCell(vec3 origin, vec3 direction, vec3 cellPosition, float cellSize)
{
//Get triangle side vectors
vec3 tu = vec3(cellSize,0,0); //Triangle U component
vec3 tv = vec3(0,cellSize,0); //Triangle V component
//Determinant for inverse matrix
vec3 q = cross(direction, tv);
float det = dot(tu, q);
//if(abs(det) < 0.0000001) //If too close to zero
// return;
float invdet = 1.0/det;
//Solve component parameters
vec3 s = origin - cellPosition;
float u = dot(s, q) * invdet;
if(u < 0.0 || u > 1.0)
return 0;
vec3 r = cross(s, tu);
float v = dot(direction, r) * invdet;
if(v < 0.0 || v > 1.0)
return 0;
float t = dot(tv, r) * invdet;
if(t <= 0.0)
return 0;
return 1;
}
void main()
{
// For now I'm not accounting for FOV and aspect ratio because I want to get the
// rotation working first
vec4 beforeRotateRayDirection = vec4(texCoord.x, texCoord.y, -1, 0);
// Apply the rotation matrix that was generated on the cpu
vec3 rayDirection = vec3(cameraTransformationMatrix * beforeRotateRayDirection);
int t = RayHitCell(cameraPosition, normalize(rayDirection), vec3(0,0,5), 1);
if (t == 1)
{
// Hit front plane
color = vec4(0, 0, 1, 0);
}
else
{
// background color
color = vec4(0, 1, 0, 0);
}
}
This should give you a plane, let me know if it works. A cube will be very easy to do.
PS.: u and v can be used for texture mapping.

Volume rendering from inside volume

We've been doing lots of work trying to volume render 3D cloud fields in WebGL. The approach we've taken so far is outlined here - the start position of each ray is the current position on the front face of the volume cube, and the end position is calculated from a previous pass, which encodes the xyz values as a back-face texture.
How can we extend this to work when the camera is inside the volume? Do we need to create smaller volume cubes on the fly? Can we just change the shader to start marching from the camera instead of the front face, and project onto the back of the cube?
We're not really sure where to start with this!
Thanks in advance
Render only a single pass.
In that pass you render the back faces only. The camera position needs to be translated from world coordinates into the coordinate system built from the three axes of the volume box you render. Your goal is to create a 4x4 matrix whose first three column vectors are vec4(...,0), where the x, y, z of these vectors are the box's x, y and z axis directions scaled to its size along each axis. If the box is parallel to the x axis, that vector is (1,0,0); if it is stretched to (2,0,0), then that is the box's own x axis and it becomes column 0 of the matrix. Do the same with the y and z axes and their lengths. The last column vector in the matrix is the position of the box as vec4(tx,ty,tz,1). This matrix defines a coordinate system, and you use it to transform the camera position into the uniform (0,0,0)-(1,1,1) box of the volume.
Create the inverse of that volume box matrix and multiply vec4(camPos, 1) on the right of invVolMatrix. Send the resulting vec3 as a uniform to the shader.
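Sketched in GLSL-style pseudocode (in practice you would build this matrix and its inverse on the CPU; xAxis, yAxis, zAxis and boxOrigin are placeholders for the volume box's scaled axes and position):
mat4 volBoxMatrix = mat4(
    vec4(xAxis, 0.0),     // column 0: x axis, scaled to the box size along x
    vec4(yAxis, 0.0),     // column 1: y axis, scaled to the box size along y
    vec4(zAxis, 0.0),     // column 2: z axis, scaled to the box size along z
    vec4(boxOrigin, 1.0)  // column 3: box position
);
// camera position mapped into the unit (0,0,0)-(1,1,1) box of the volume
vec3 localCamPos = (inverse(volBoxMatrix) * vec4(worldCamPos, 1.0)).xyz;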
Render only back faces with (0,0,0) to (1,1,1) coordinates on their respective volume box corners, as you already did. Now you have in your shader:
the uniform camera position (in the box's local space)
the back-face volume texture coordinate
the knowledge that your volume box is a unit cube in a local coordinate system with a diagonal from (0,0,0) to (1,1,1)
In the shader do:
varying vec3 vLocalUnitTexCoord; // backface interpolated coordinate
uniform vec3 LOCAL_CAM_POS; // localised camPos
struct AABB {
vec3 min; // (0,0,0)
vec3 max; // (1,1,1)
};
struct Ray {
vec3 origin; vec3 dir;
};
float getUnitAABBEntry( in Ray r ) {
AABB b;
b.min = vec3( 0 );
b.max = vec3( 1 );
// compute clipping for box.min and box.max corner
vec3 rInvDir = vec3( 1.0 ) / r.dir;
vec3 tMinima = ( b.min - r.origin ) * rInvDir;
vec3 tMaxima = ( b.max - r.origin ) * rInvDir;
// sort for nearest corner
vec3 tEntries = min( tMinima, tMaxima );
// find first real entry value of 3 t-distance values in vec3 container
vec2 tMaxEntryCandidates = max( vec2( tEntries.st ), vec2( tEntries.pp ) );
float tMaxEntry = max( tMaxEntryCandidates.s, tMaxEntryCandidates.t );
return tMaxEntry;
}
vec3 getCloserPos( in vec3 camera, in vec3 frontFaceIntersection, in float t ) {
float useFrontCoord = 0.5 + 0.5 * sign( t );
vec3 startPos = mix( camera, frontFaceIntersection, useFrontCoord );
return startPos;
}
void main(void)
{
Ray r;
r.origin = LOCAL_CAM_POS;
r.dir = normalize( vLocalUnitTexCoord - LOCAL_CAM_POS );
float t = getUnitAABBEntry( r );
vec3 frontFaceLocalUnitTexCoord = r.origin + r.dir * t;
vec3 startPos = getCloserPos( LOCAL_CAM_POS, frontFaceLocalUnitTexCoord, t );
// loop for integration follows here
vec3 start = startPos;
vec3 end = vLocalUnitTexCoord;
...for loop..etc...
}
Happy coding!

GPU Pro 5 Area Lights

I'm trying to implement the area lights described in GPU Pro 5 in GLSL, but I'm having some trouble with the projections.
Here is the shader code I'm currently using for diffuse lighting:
vec3 linePlaneIntersection
(
vec3 linePoint, vec3 lineNormal,
vec3 planeCenter, vec3 planeNormal
)
{
float t = (dot(planeNormal, planeCenter - linePoint) / dot(planeNormal, lineNormal));
return linePoint + lineNormal * t;
}
vec3 projectOnPlane(vec3 point, vec3 planeCenter, vec3 planeNormal)
{
float distance = dot(planeNormal, point - planeCenter);
return point - distance * planeNormal;
}
vec3 diffuse_color(vec3 p, vec3 surfaceDiffuseColor, vec3 n)
{
// for point p with normal n
vec3 directionToLight = normalize(light.position.xyz - p);
vec3 planeNormal = directionToLight;
planeNormal = light.orientation.xyz;
// intersect a ray from p in direction nII with light plane,
// creating point pI
vec3 nII = n;
if(dot(n, light.orientation.xyz) > 0.0)
{
// light plane points away from n, skew in direction of light plane
nII = -light.orientation.xyz;
}
vec3 pI = linePlaneIntersection(p, nII, light.position.xyz, planeNormal);
// project point p on the light plane, creating point pII
vec3 pII = projectOnPlane(p, light.position.xyz, planeNormal);
// create halfway vector h between ppI and ppII
vec3 ppI = pI - p;
vec3 ppII = pII - p;
vec3 h = normalize(ppI + ppII);
// intersect ray from p in direction h with the light plane,
// creating point pd
vec3 pd = linePlaneIntersection(p, h, light.position.xyz, planeNormal);
// treat vector ppd as light vector for diffuse equation
vec3 ppd = normalize(pd - p);
// distance from point p to point pI on dArea
float r = distance(p, pI);
// angle between light vector ppd and surface normal n
float cosP = clamp(dot(ppd, n), 0.0, 1.0);
// angle between surface normal and light plane orientation normal
float cosO = clamp(dot(n, -light.orientation.xyz), 0.0, 1.0);
float dArea = light.dAreaRadiance.x;
float radiance = light.dAreaRadiance.y;
return radiance * surfaceDiffuseColor * cosP * cosO * dArea / (r * r);
}
The light has the position {0, 100, 0} and the orientation {0, -1, 0}.
If I use the light orientation as the plane normal for the projections, the light always comes directly from the top, even when I change the position on the x axis.
When I use the direction to the light position as the plane normal, it seems to work, but I'm pretty sure it is still not correct.

Drawing circles on a sphere

I'm trying to draw lots of circles on a sphere using shaders. The basic algorithm is like this:
calculate the distance from the fragment (using its texture coordinates) to the location of the circle's center (the circle's center is also specified in texture coordinates)
calculate the angle from the fragment to the center of the circle.
based on the angle, access a texture (which has 360 pixels in it and the red channel specifies a radius distance) and retrieve the radius for the given angle
if the distance from the fragment to the circle's center is less than the retrieved radius then the fragment's color is red, otherwise blue.
I would like to draw ... say 60 red circles on a blue sphere. I got my shader to work for one circle, but how do I do 60? Here's what I've tried so far....
I passed in a data texture that specifies the radius for a given angle, but I notice artifacts creep in. I believe this is due to linear interpolation when I try to retrieve information from the data texture using:
float returnV = texture2D(angles, vec2(x, y)).r;
where angles is the data texture (Sampler2D) that contains the radius for a given angle, and x = angle / 360.0 (angle is 0 to 360) and y = 0 to 60 (y is the circle number)
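One way to reduce that interpolation bleed is to sample at texel centers (a hedged sketch, assuming the texture is 360 texels wide with one row per circle; angle and circleNumber are integer indices and numCircles is a placeholder). With GLSL 1.30+, texelFetch would give an exact, unfiltered lookup instead:
float x = (angle + 0.5) / 360.0;              // centre of this angle's texel
float y = (circleNumber + 0.5) / numCircles;  // centre of this circle's row
float returnV = texture2D(angles, vec2(x, y)).r;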
I tried passing in a uniform float radii[360], but I cannot access radii with dynamic indexing. I even tried this mess ...
float getArrayValue(int index) {
if (index == 0) {
return radii[0];
}
else if (index == 1) {
return radii[1];
}
and so on ...
If I create a texture, place all of the circles on it, and then multi-texture the blue sphere with the texture containing the circles, it works, but as you would expect I get really bad aliasing. I like the idea of procedurally generating the circles based on the position of the fragment and that of the circle, because there is virtually no aliasing. However, how do I do more than one?
Thx!!!
~Bolt
I have a shader that draws a circle on the terrain. It moves as the mouse moves.
Maybe it gives you some inspiration?
This is a fragment program. It is not the main program, but you can add it to your program.
Try this...
For now you can hardcode some of the uniform parameters.
uniform float showCircle;
uniform float radius;
uniform vec4 mousePosition;
varying vec3 vertexCoord;
void calculateTerrainCircle(inout vec4 pixelColor)
{
if(showCircle == 1)
{
float xDist = vertexCoord.x - mousePosition.x;
float yDist = vertexCoord.y - mousePosition.y;
float dist = xDist * xDist + yDist * yDist;
float radius2 = radius * radius;
if (dist < radius2 * 1.44f && dist > radius2 * 0.64f)
{
vec4 temp = pixelColor;
float diff;
if (dist < radius2)
diff = (radius2 - dist) / (0.36f * radius2);
else
diff = (dist - radius2) / (0.44f * radius2);
pixelColor = vec4(1, 0, 0, 1.0) * (1 - diff) + pixelColor * diff;
pixelColor = mix(pixelColor, temp, diff);
}
}
}
and in vertex shader you add:
varying vec3 vertexCoord;
void main()
{
gl_Position = ftransform();
vec4 v = vec4(gl_ModelViewMatrix * gl_Vertex);
vertexCoord = vec3(gl_ModelViewMatrixInverse * v);
}
ufukgun, if you multiply a matrix by its inverse you get the identity.
Your:
vec4 v = vec4(gl_ModelViewMatrix * gl_Vertex);
vertexCoord = vec3(gl_ModelViewMatrixInverse * v);
is therefore equivalent to
vertexCoord = vec3(gl_Vertex);