I'm trying to get LOD working with the tessellation shader. I have a simple sphere which starts out tessellated with 5 rings and 5 sectors. I would like the sphere to increase its detail as the camera approaches. But the new primitives generated by the tessellation are mapped onto a flat plane; I tried to change their positions, but I couldn't get it working.
Here is an illustration of the problem :
As you can see, I'm not getting a sphere when the camera approaches. This is what I would like to get when I'm near the sphere:
Here is the code in the tessellation evaluation shader :
void main(void){
    float u = gl_TessCoord.x;
    float v = gl_TessCoord.y;

    vec4 pos0 = gl_in[0].gl_Position;
    vec4 pos1 = gl_in[1].gl_Position;
    vec4 pos2 = gl_in[2].gl_Position;
    vec4 pos3 = gl_in[3].gl_Position;

    vec4 a = mix(pos1, pos0, u);
    vec4 b = mix(pos2, pos3, u);
    float l = length(a - b);
    vec4 position = mix(a, b, v);

    gl_Position = u_transformMatrix * position;
    tes_positions = (u_transformMatrix * position).xyz;
}
Geometry shader:
layout(triangles) in;
layout(triangle_strip, max_vertices = 3) out;
void main(void){
    for(int i = 0; i < 3; i++){
        vec4 pos = gl_in[i].gl_Position;
        vec4 normal = normalize(pos);
        pos = normal * u_radius;
        gl_Position = u_projectionMatrix * u_viewMatrix * pos;
        EmitVertex();
    }
    EndPrimitive();
}
Thank you for your help! If you need anything else, please ask and I'll post it.
So slicer4ever found the answer; all credit goes to him (thank you, by the way!). He doesn't have an SO account, so he can't post it himself, unfortunately.
I quote him: "you're normalizing the vec4, which might be messing up the w component of your vertex?"
And that was it: the w coordinate was the problem.
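In code, the fix just amounts to normalizing only the xyz part in the geometry shader so that w stays 1 (a sketch based on the shader above):

void main(void){
    for(int i = 0; i < 3; i++){
        vec4 pos = gl_in[i].gl_Position;
        // Normalize only the direction; keep w = 1 so the projection is not skewed.
        vec3 dir = normalize(pos.xyz);
        pos = vec4(dir * u_radius, 1.0);
        gl_Position = u_projectionMatrix * u_viewMatrix * pos;
        EmitVertex();
    }
    EndPrimitive();
}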
And here is the output now :
I have a very simple shader program that takes in a bunch of position data as GL_POINTS, which generate screen-aligned squares of fragments as normal, with a size depending on depth. In the fragment shader I then wanted to draw a very simple ray-traced sphere for each one, with just the shadow on the side of the sphere facing away from the light. I went to this shadertoy to try to figure it out on my own. I used the sphIntersect function for ray-sphere intersection, and sphNormal to get the normal vectors on the sphere for lighting. The problem is that the spheres do not align with the squares of fragments, causing them to be cut off. This is because I am not sure how to match the projections of the spheres and the vertex positions so that they line up. Can I have an explanation of how to do this?
Here is a picture for reference.
Here are my vertex and fragment shaders for reference:
//vertex shader:
#version 460
layout(location = 0) in vec4 position; // position of each point in space
layout(location = 1) in vec4 color; //color of each point in space
layout(location = 2) uniform mat4 view_matrix; // projection * camera matrix
layout(location = 6) uniform mat4 cam_matrix; //just the camera matrix
out vec4 col; // color of vertex
out vec4 posi; // position of vertex
void main() {
vec4 p = view_matrix * vec4(position.xyz, 1.0);
gl_PointSize = clamp(1024.0 * position.w / p.z, 0.0, 4000.0);
gl_Position = p;
col = color;
posi = cam_matrix * position;
}
//fragment shader:
#version 460
in vec4 col; // color of vertex associated with this fragment
in vec4 posi; // position of the vertex associated with this fragment relative to camera
out vec4 f_color;
layout (depth_less) out float gl_FragDepth;
float sphIntersect( in vec3 ro, in vec3 rd, in vec4 sph )
{
vec3 oc = ro - sph.xyz;
float b = dot( oc, rd );
float c = dot( oc, oc ) - sph.w*sph.w;
float h = b*b - c;
if( h<0.0 ) return -1.0;
return -b - sqrt( h );
}
vec3 sphNormal( in vec3 pos, in vec4 sph )
{
return normalize(pos-sph.xyz);
}
void main() {
vec4 c = clamp(col, 0.0, 1.0);
vec2 p = ((2.0*gl_FragCoord.xy)-vec2(1920.0, 1080.0)) / 2.0;
vec3 ro = vec3(0.0, 0.0, -960.0 );
vec3 rd = normalize(vec3(p.x, p.y,960.0));
vec3 lig = normalize(vec3(0.6,0.3,0.1));
vec4 k = vec4(posi.x, posi.y, -posi.z, 2.0*posi.w);
float t = sphIntersect(ro, rd, k);
vec3 ps = ro + (t * rd);
vec3 nor = sphNormal(ps, k);
if(t < 0.0) c = vec4(1.0);
else c.xyz *= clamp(dot(nor,lig), 0.0, 1.0);
f_color = c;
gl_FragDepth = t * 0.0001;
}
Looks like you have many spheres so I would do this:
Input data
I would have a VBO containing x,y,z,r describing your spheres. You will also need your view transform (uniform) so that each fragment can get a ray direction and start position. Something like my vertex shader here:
Reflection and refraction impossible without recursive ray tracing?
Create a BBOX in the geometry shader and convert your POINT to a QUAD or POLYGON
Note that you have to account for perspective. If you are not familiar with geometry shaders, see:
rendering cubics in GLSL
where I emit a sequence of OBBs from input lines...
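As a rough illustration of that step (a sketch only; the vs_sphere/gs_sphere names and the u_projection uniform are my assumptions, and the radius is assumed to arrive in the point's w), a geometry shader could expand each view-space point into a simple screen-aligned quad with a safety margin; the answer further down shows a tighter, properly camera-facing construction:

#version 330 core
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
uniform mat4 u_projection;   // assumed projection uniform
in  vec4 vs_sphere[];        // xyz = sphere centre in view space, w = radius (assumed)
out vec4 gs_sphere;          // handed to the fragment shader for the ray test
void main(void){
    vec4 s = vs_sphere[0];
    float r = s.w * 1.5;     // enlarge the quad a bit to stay safe under perspective
    vec2 offs[4] = vec2[4](vec2(-1,-1), vec2(1,-1), vec2(-1,1), vec2(1,1));
    for(int i = 0; i < 4; i++){
        gs_sphere = s;
        // offset in view-space xy, then project
        gl_Position = u_projection * vec4(s.xyz + vec3(offs[i] * r, 0.0), 1.0);
        EmitVertex();
    }
    EndPrimitive();
}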
In the fragment shader, ray-trace the sphere
You have to compute the intersection between the sphere and the ray, choose the closer intersection, and compute its depth and normal (for lighting). If there is no intersection you have to discard the fragment (discard;)!
From what I can see in your images, your QUADs do not correspond to your spheres, hence the clipping. You also do not discard fragments with no intersection, so you overwrite already-rendered content around the last rendered spheres with the background colour; that is why only a single sphere is left per QUAD regardless of how many spheres are really there...
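In the asker's fragment shader above, that last point would look something like this (reusing the existing ro, rd, k, lig and c variables):

float t = sphIntersect(ro, rd, k);
if (t < 0.0) discard;            // no hit: keep whatever was already rendered here
vec3 ps  = ro + t * rd;
vec3 nor = sphNormal(ps, k);
c.xyz *= clamp(dot(nor, lig), 0.0, 1.0);
f_color = c;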
To create a ray direction that matches a perspective matrix from screen space, the following ray direction formula can be used:
vec3 rd = normalize(vec3(((2.0 / screenWidth) * gl_FragCoord.xy) - vec2(aspectRatio, 1.0), -proj_matrix[1][1]));
The value of 2.0 / screenWidth can be pre-computed, or the OpenGL built-in uniform structs can be used.
To get a bounding box or other shape for your spheres, it is very important to use camera-facing shapes, and not camera-plane-facing shapes. Use the following process where position is the incoming VBO position data, and the w-component of position is the radius:
vec4 p = vec4((cam_matrix * vec4(position.xyz, 1.0)).xyz, position.w);
o.vpos = p;
float l2 = dot(p.xyz, p.xyz);
float r2 = p.w * p.w;
float k = 1.0 - (r2 / l2);
float radius = p.w * sqrt(k);
if (l2 < r2) {
    p = vec4(0.0, 0.0, -p.w * 0.49, p.w);
    radius = p.w;
    k = 0.0;
}
vec3 hx = radius * normalize(vec3(-p.z, 0.0, p.x));
vec3 hy = radius * normalize(vec3(-p.x * p.y, p.z * p.z + p.x * p.x, -p.z * p.y));
p.xyz *= k;
Then use hx and hy as basis vectors for any 2D shape that you want the billboard to be shaped like for the vertices. Don't forget later to multiply each vertex by a perspective matrix to get the final position of each vertex. Here is a visualization of the billboarding on desmos using a hexagon shape: https://www.desmos.com/calculator/yeeew6tqwx
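For instance (a sketch; the c0..c3 names are mine, and proj_matrix is the same projection uniform used in the ray-direction formula above), a quad billboard would take the four sign combinations of hx and hy around the scaled centre and project each corner:

// Four corners of a quad billboard built from the hx/hy basis computed above;
// p.xyz has already been scaled by k at this point.
vec3 c0 = p.xyz - hx - hy;
vec3 c1 = p.xyz + hx - hy;
vec3 c2 = p.xyz - hx + hy;
vec3 c3 = p.xyz + hx + hy;
// each corner is then multiplied by the perspective matrix, e.g.
gl_Position = proj_matrix * vec4(c0, 1.0);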
The normal mapping looks great when the objects aren't rotated from the origin, and spot lights and directional lights work, but when I spin an object on the spot it darkens and then lightens again, just on the top face.
I'm testing using a cube. I've used a geometry shader to visualise my calculated normals (after multiplying by a TBN matrix), and they appear to be in the correct places. If I take the normal map out of the equation then the lighting is fine.
Here's where the TBN is calculated:
void calculateTBN()
{
    //get the normal matrix
    mat3 model = mat3(transpose(inverse(mat3(transform))));
    vec3 T = normalize(vec3(model * tangent.xyz));
    vec3 N = normalize(vec3(model * normal));
    vec3 B = cross(N, T);
    mat3 TBN = mat3(T, B, N);
    outputVertex.TBN = TBN;
}
And the normal is sampled and transformed:
vec3 calculateNormal()
{
    //Remap the sample so that the normal is between -1 and 1 instead of 0 and 1
    vec3 input = texture2D(normalMap, inputFragment.textureCoord).xyz;
    input = 2.0 * input - vec3(1.0, 1.0, 1.0);
    vec3 newNormal = normalize(inputFragment.TBN * input);
    return newNormal;
}
My lighting is in world space (as far as I understand the term: it takes into account the transform matrix but not the camera or projection matrix).
I did try the technique where I pass down the inverse (or transpose) of the TBN and then multiply every vector apart from the normal by it. That had the same effect. I'd rather work in world space anyway, as apparently this is better for deferred lighting? Or so I've heard.
If you'd like to see any of the lighting code and so on, I'll add it in, but I didn't think it was necessary since it works apart from this.
EDIT:
As requested, here are the vertex shader and part of the fragment shader.
#version 330
uniform mat4 T; // Translation matrix
uniform mat4 S; // Scale matrix
uniform mat4 R; // Rotation matrix
uniform mat4 camera; // camera matrix
uniform vec4 posRelParent; // the position relative to the parent
// Input vertex packet
layout (location = 0) in vec4 position;
layout (location = 2) in vec3 normal;
layout (location = 3) in vec4 tangent;
layout (location = 4) in vec4 bitangent;
layout (location = 8) in vec2 textureCoord;
// Output vertex packet
out packet {
vec2 textureCoord;
vec3 normal;
vec3 vert;
mat3 TBN;
vec3 tangent;
vec3 bitangent;
vec3 normalTBN;
} outputVertex;
mat4 transform;
mat3 TBN;
void calculateTBN()
{
    //get the normal matrix: the object's transform with translation dropped and non-uniform scaling corrected
    mat3 model = mat3(transpose(inverse(transform)));
    vec3 T = normalize(model * tangent.xyz);
    vec3 N = normalize(model * normal);
    //I used to derive the bitangent by crossing the normal and tangent, but now they are calculated independently
    vec3 B = normalize(model * bitangent.xyz);
    TBN = mat3(T, B, N);
    outputVertex.TBN = TBN;
    //Pass the TBN vectors through for colour debugging in the fragment shader
    outputVertex.tangent = T;
    outputVertex.bitangent = B;
    outputVertex.normalTBN = N;
}
void main(void) {
    outputVertex.textureCoord = textureCoord;
    // Setup local variable pos in case we want to modify it (since position is constant)
    vec4 pos = vec4(position.x, position.y, position.z, 1.0) + posRelParent;
    //Work out the transform matrix
    transform = T * R * S;
    //Work out the normal for lighting
    mat3 normalMat = transpose(inverse(mat3(transform)));
    outputVertex.normal = normalize(normalMat * normal);
    calculateTBN();
    outputVertex.vert = (transform * pos).xyz;
    //Work out the final pos of the vertex
    gl_Position = camera * transform * pos;
}
And the lighting function from the fragment shader:
vec3 applyLight(Light thisLight, vec3 baseColor, vec3 surfacePos, vec3 surfaceToCamera)
{
    float attenuation = 1.0f;
    vec3 lightPos = (thisLight.finalLightMatrix * thisLight.position).xyz;
    vec3 surfaceToLight;
    vec3 coneDir = normalize(thisLight.coneDirection);

    if (thisLight.position.w == 0.0f)
    {
        //Directional Light (all rays same angle, use position as direction)
        surfaceToLight = normalize((thisLight.position).xyz);
        attenuation = 1.0f;
    }
    else
    {
        //Point light
        surfaceToLight = normalize(lightPos - surfacePos);
        float distanceToLight = length(lightPos - surfacePos);
        attenuation = 1.0 / (1.0f + thisLight.attenuation * pow(distanceToLight, 2));

        //Work out the Cone restrictions
        float lightToSurfaceAngle = degrees(acos(dot(-surfaceToLight, normalize(coneDir))));
        if (lightToSurfaceAngle > thisLight.coneAngle)
        {
            attenuation = 0.0;
        }
    }
}
Here's the main of the frag shader too:
void main(void) {
    //get the base colour from the texture
    vec4 tempFragColor = texture2D(textureImage, inputFragment.textureCoord).rgba;

    //Support for objects with and without a normal map
    if (useNormalMap == 1)
    {
        calcedNormal = calculateNormal();
    }
    else
    {
        calcedNormal = inputFragment.normal;
    }

    vec3 surfaceToCamera = normalize((cameraPos_World) - (inputFragment.vert));
    vec3 tempColour = vec3(0.0, 0.0, 0.0);
    for (int count = 0; count < numLights; count++)
    {
        tempColour += applyLight(allLights[count], tempFragColor.xyz, inputFragment.vert, surfaceToCamera);
    }

    vec3 gamma = vec3(1.0 / 2.2);
    fragmentColour = vec4(pow(tempColour, gamma), tempFragColor.a);
    //fragmentColour = vec4(calcedNormal, 1);
}
Edit 2:
The geometry shader used to visualize the "sampled" normals transformed by the TBN matrix, as shown here:
void GenerateLineAtVertex(int index)
{
vec3 testSampledNormal = vec3(0, 0, 1);
vec3 bitangent = cross(gs_in[index].normal, gs_in[index].tangent);
mat3 TBN = mat3(gs_in[index].tangent, bitangent, gs_in[index].normal);
testSampledNormal = TBN * testSampledNormal;
gl_Position = gl_in[index].gl_Position;
EmitVertex();
gl_Position =
gl_in[index].gl_Position
+ vec4(testSampledNormal, 0.0) * MAGNITUDE;
EmitVertex();
EndPrimitive();
}
And its vertex shader:
void main(void) {
// Setup local variable pos in case we want to modify it (since position is constant)
vec4 pos = vec4(position.x, position.y, position.z, 1.0);
mat4 transform = T* R * S;
// Apply transformation to pos and store result in gl_Position
gl_Position = projection* camera* transform * pos;
mat3 normalMatrix = mat3(transpose(inverse(camera * transform)));
vs_out.tangent = normalize(vec3(projection * vec4(normalMatrix * tangent.xyz, 0.0)));
vs_out.normal = normalize(vec3(projection * vec4(normalMatrix * normal , 0.0)));
}
Here are the TBN vectors visualized. The slight angles on the points are due to an issue with how I'm applying the projection matrix rather than mistakes in the actual vectors. The red lines just show where the arrows I've drawn on the texture are; they're not very clear from that angle, that's all.
Problem Solved!
It actually had nothing to do with the code above, although thanks to everyone who helped.
I was importing the texture using my own texture loader, which by default uses non-gamma-corrected sRGB colour at 32 bits. I switched it to 24-bit plain RGB and it worked straight away. Typical developer problems...
While implementing SSLR, I ran into the problem of objects being displayed incorrectly: they are projected infinitely "downwards" and are not displayed in the mirror at all. I give the code and a screenshot below.
Fragment SSLR shader:
#version 330 core
uniform sampler2D normalMap; // in view space
uniform sampler2D depthMap; // in view space
uniform sampler2D colorMap;
uniform sampler2D reflectionStrengthMap;
uniform mat4 projection;
uniform mat4 inv_projection;
in vec2 texCoord;
layout (location = 0) out vec4 fragColor;
vec3 calcViewPosition(in vec2 texCoord) {
    // Combine UV & depth into XY & Z (NDC)
    vec3 rawPosition = vec3(texCoord, texture(depthMap, texCoord).r);
    // Convert from (0, 1) range to (-1, 1)
    vec4 ScreenSpacePosition = vec4(rawPosition * 2 - 1, 1);
    // Undo Perspective transformation to bring into view space
    vec4 ViewPosition = inv_projection * ScreenSpacePosition;
    // Perform perspective divide and return
    return ViewPosition.xyz / ViewPosition.w;
}

vec2 rayCast(vec3 dir, inout vec3 hitCoord, out float dDepth) {
    dir *= 0.25f;
    for (int i = 0; i < 20; i++) {
        hitCoord += dir;
        vec4 projectedCoord = projection * vec4(hitCoord, 1.0);
        projectedCoord.xy /= projectedCoord.w;
        projectedCoord.xy = projectedCoord.xy * 0.5 + 0.5;
        float depth = calcViewPosition(projectedCoord.xy).z;
        dDepth = hitCoord.z - depth;
        if (dDepth < 0.0) return projectedCoord.xy;
    }
    return vec2(-1.0);
}

void main() {
    vec3 normal = texture(normalMap, texCoord).xyz * 2.0 - 1.0;
    vec3 viewPos = calcViewPosition(texCoord);

    // Reflection vector
    vec3 reflected = normalize(reflect(normalize(viewPos), normalize(normal)));

    // Ray cast
    vec3 hitPos = viewPos;
    float dDepth;
    float minRayStep = 0.1f;
    vec2 coords = rayCast(reflected * max(minRayStep, -viewPos.z), hitPos, dDepth);

    if (coords != vec2(-1.0)) fragColor = mix(texture(colorMap, texCoord), texture(colorMap, coords), texture(reflectionStrengthMap, texCoord).r);
    else fragColor = texture(colorMap, texCoord);
}
Screenshot:
Also, the lamp is not reflected at all.
I will be grateful for any help.
UPDATE:
colorMap:
normalMap:
depthMap:
UPDATE: I solved the problem with the wrong reflection, but there are still problems.
I solved it as follows: ViewPosition.y *= -1
Now, as you can see in the screenshot, the lower parts of the objects are not reflected for some reason.
The question still remains open.
I'm struggling to get a good SSR too. I found two things that could help.
To get the view-space normals you have to keep only the rotation of the camera and remove the translation, because if you don't, the normals get stretched in the direction opposite to the camera movement and no longer point the right way even if you normalize them again. For a column-major mat4 you can do it like this:
mat4 viewNoTranslation = view;
viewNoTranslation[3] = vec4(0.0, 0.0, 0.0, 1.0);
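and then, for example, transform the normal with it (a sketch; the worldNormal and viewNormal names are mine):

// w = 0 so only the camera rotation affects the normal
vec3 viewNormal = normalize((viewNoTranslation * vec4(worldNormal, 0.0)).xyz);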
The depth sampled from the depth image is non-linear, and if you linearize it you will indeed get values from 0 to 1, but they will not be precise enough. I tried to get the depth value straight from the vertex shader instead:
gl_Position = ubo.projection * ubo.view * ubo.model * inPos;
depth = gl_Position.z;
I don't know if it is right, but the depth is more accurate now.
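For completeness, the matching fragment side would just forward that value into the G-buffer (a sketch; the in/out names and the attachment index are my assumptions, and the other G-buffer outputs are omitted):

in float depth;                           // the varying written in the vertex shader above
layout(location = 3) out float outDepth;  // hypothetical dedicated depth channel
void main() {
    outDepth = depth;
}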
If you make progress, please update :)
When I run the program on my computer, it works exactly how I expected it to be working. However, when I try to run it on my campus lab computers, the fragment shader is all kinds of strange.
Right now it's just a simple Phong lighting calculation with a point source light at the origin. However, on the lab computers, it looks more like some strange cross between cel shading and a flashlight. I run an ATI graphics card, while the lab computers run NVIDIA. The shading works as expected on Macs as well (no idea about the graphics card).
The NVIDIA cards support up to OpenGL 3.1, though I run it on this Linux distribution at 2.1. I've tried clamping the shader version to 1.2 (GLSL), among a slew of other things, but they all give the same results. The strangest thing is that when I do vertex shading rather than pixel shading, the result is the same on both computers... I've exhausted my ideas about how to fix this.
Here's the vertex shader:
#version 120
attribute vec2 aTexCoord;
attribute vec3 aPosition;
attribute vec3 aNormal;
attribute vec3 camLoc;
attribute float mat;
varying vec3 vColor;
varying vec2 vTexCoord;
varying vec3 normals;
varying vec3 lightPos;
varying vec3 camPos;
varying float material;
uniform mat4 uProjMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uModelMatrix;
uniform mat4 uNormMatrix;
uniform vec3 uLight;
uniform vec3 uColor;
void main()
{
//set up object position in world space
vec4 vPosition = uModelMatrix * vec4(aPosition, 1.0);
vPosition = uViewMatrix * vPosition;
vPosition = uProjMatrix * vPosition;
gl_Position = vPosition;
//set up light vector in world space
vec4 vLight = vec4(uLight, 1.0) * uViewMatrix;
lightPos = vLight.xyz - vPosition.xyz;
//set up normal vector in world space
normals = (vec4(aNormal,1.0) * uNormMatrix).xyz;
//set up view vector in world space
camPos = camLoc.xyz - vPosition.xyz;
//set up material shininess
material = mat;
//pass color and vertex
vColor = uColor;
vTexCoord = aTexCoord;
}
And the fragment shader:
#version 120
varying vec3 lightPos;
varying vec3 normals;
varying vec3 camPos;
varying vec2 vTexCoord;
varying vec3 vColor;
varying float material;
uniform sampler2D uTexUnit;
uniform mat4 uProjMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uModelMatrix;
void main(void)
{
float diffuse;
float diffuseRed, diffuseBlue, diffuseGreen;
float specular;
float specRed, specBlue, specGreen;
vec3 lightColor = vec3(0.996, 0.412, 0.706); //color of light (HOT PINK) **UPDATE WHEN CHANGED**
vec4 L; //light vector
vec4 N; //normal vector
vec4 V; //view vector
vec4 R; //reflection vector
vec4 H; //halfway vector
float red;
float green;
float blue;
vec4 texColor1 = texture2D(uTexUnit, vTexCoord);
//diffuse calculations
L = vec4(normalize(lightPos),0.0);
N = vec4(normalize(normals),0.0);
N = uModelMatrix * N;
//calculate RGB of diffuse light
diffuse = max(dot(N,L),0.0);
diffuseRed = diffuse*lightColor[0];
diffuseBlue = diffuse*lightColor[1];
diffuseGreen = diffuse*lightColor[2];
//specular calculations
V = vec4(normalize(camPos),0.0);
V = uModelMatrix * V;
R = vec4(-1.0 * L.x, -1.0 * L.y, -1.0 * L.z, 0.0);
float temp = 2.0*dot(L,N);
vec3 tempR = vec3(temp * N.x, temp * N.y, temp * N.z);
R = vec4(R.x + tempR.x, R.y + tempR.y, R.z + tempR.z, 0.0);
R = normalize(R);
H = normalize(L + V);
specular = dot(H,R);
specular = pow(specular,material);
specRed = specular*lightColor[0];
specBlue = specular*lightColor[1];
specGreen = specular*lightColor[2];
//set new colors
//textures
red = texColor1[0]*diffuseRed + texColor1[0]*specRed*0.7 + texColor1[0]*.05;
green = texColor1[1]*diffuseBlue + texColor1[1]*specBlue*0.7 + texColor1[1]*.05;
blue = texColor1[2]*diffuseGreen + texColor1[2]*specGreen*0.7 + texColor1[2]*.05;
//colors
red = vColor[0]*diffuseRed + vColor[0]*specRed*0.7 + vColor[0]*.05;
green = vColor[1]*diffuseBlue + vColor[1]*specBlue*0.7 + vColor[1]*.05;
blue = vColor[2]*diffuseGreen + vColor[2]*specGreen*0.7 + vColor[2]*.05;
gl_FragColor = vec4(red, green, blue, 1.0);
}
Your code looks wrong in many, many ways.
The following code is wrong. The comment is misleading (it's in clip space, not world space). But the major problem is that you overwrite vPosition with the clip space coordinates while using it several lines later as if it were in view space.
//set up object position in world space
vec4 vPosition = uModelMatrix * vec4(aPosition, 1.0);
vPosition = uViewMatrix * vPosition;
vPosition = uProjMatrix * vPosition;
gl_Position = vPosition;
The following code is wrong, too. First, you need matrix * vector, not vector * matrix. But also, the comment says world space, yet you compute vLight in view space and then subtract vPosition, which is in clip space!
//set up light vector in world space
vec4 vLight = vec4(uLight, 1.0) * uViewMatrix;
lightPos = vLight.xyz - vPosition.xyz;
Again here, matrix * vector:
//set up normal vector in world space
normals = (vec4(aNormal,1.0) * uNormMatrix).xyz;
Now what is this? camPos is computed in world coordinates, yet you apply the model matrix which converts model space to world space.
//specular calculations
V = vec4(normalize(camPos),0.0);
V = uModelMatrix * V;
I have no idea why your shader performs differently on different computers, but I am pretty sure none of these computers shows anything remotely close to the expected result.
You really need to read your shaders again, and each time you see a vector, ask yourself “in what coordinate space is this vector meaningful?” and each time you see a matrix, ask yourself “what coordinate spaces does this matrix convert from and to?”
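To make that concrete, here is a sketch (not a drop-in replacement: the texture coordinate, material and colour plumbing are omitted, and uNormMatrix is assumed to be the inverse-transpose of the model-view matrix) of a vertex shader that keeps every lighting vector consistently in view space:

#version 120
attribute vec3 aPosition;
attribute vec3 aNormal;
varying vec3 normals;
varying vec3 lightPos;
varying vec3 camPos;
uniform mat4 uProjMatrix;
uniform mat4 uViewMatrix;
uniform mat4 uModelMatrix;
uniform mat4 uNormMatrix;
uniform vec3 uLight;

void main()
{
    // Keep the view-space position separate from the clip-space position.
    vec4 viewPos = uViewMatrix * uModelMatrix * vec4(aPosition, 1.0);
    gl_Position  = uProjMatrix * viewPos;

    // Light position into view space (matrix * vector), then the surface-to-light vector.
    vec3 lightView = (uViewMatrix * vec4(uLight, 1.0)).xyz;
    lightPos = lightView - viewPos.xyz;

    // Normal into view space; no further matrix multiplies are needed in the fragment shader.
    normals = (uNormMatrix * vec4(aNormal, 0.0)).xyz;

    // In view space the camera sits at the origin, so the view vector is simply -viewPos.
    camPos = -viewPos.xyz;
}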
I am just messing around with some geometry shaders, taking a list of GL_POINTS and outputting a box with triangle strips. I have it basically working, but when I zoom in/out or pan around, the triangle strips go all over the place and do not maintain their position in the world, though the box itself still draws correctly.
For example, if I give the input (5,5,0) it will draw a triangle strip with these points to make a box:
(5 , 5 , 0)
(5.5, 5 , 0)
(5 , 5.5, 0)
(5.5, 5.5, 0)
Vertex Shader:
// Vertex Shader
#version 130
in vec4 vVertex;
void main(void)
{
gl_Position = gl_ModelViewProjectionMatrix * vVertex;
}
Geometry Shader:
#version 130
#extension GL_EXT_geometry_shader4 : enable
void main(void)
{
    vec4 a;
    vec4 b;
    vec4 c;
    vec4 d;

    int i = 0;
    for(i = 0; i < gl_VerticesIn; i++)
    {
        a = gl_PositionIn[i];
        //a.x -= 0.5;
        //a.y -= 0.5;
        //a.z = 0.0;
        gl_Position = a;
        EmitVertex();

        b = gl_PositionIn[i];
        b.x += 0.5;
        //b.y -= 0.5;
        //b.z = 0.0;
        gl_Position = b;
        EmitVertex();

        d = gl_PositionIn[i];
        //d.x -= 0.5;
        d.y += 0.5;
        //d.z = 0.0;
        gl_Position = d;
        EmitVertex();

        c = gl_PositionIn[i];
        c.x += 0.5;
        c.y += 0.5;
        //c.z = 0.0;
        gl_Position = c;
        EmitVertex();
    }
    EndPrimitive();
}
I'm probably missing something dumb.
Multiply each vertex by gl_ModelViewMatrix in your vertex shader instead. It's far easier to reason in world space.
After that you can do what you do in the geometry shader, but don't forget to multiply the vertices by your projection matrix before emitting them. This should fix your issue.
Edit: I forgot that the ModelViewMatrix transforms to view space, sorry. Just pass the vertex through the VS without doing anything to it. That means you will still be in model space in the GS. Do your offset work in the GS, then, before emitting, transform with gl_ModelViewProjectionMatrix.
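A sketch of that suggestion applied to the shaders above: the vertex shader passes the model-space vertex straight through, and the geometry shader projects each corner only when emitting it.

// Vertex shader: leave the vertex in model space.
#version 130
in vec4 vVertex;
void main(void)
{
    gl_Position = vVertex;
}

// Geometry shader: offset in model space, project only when emitting.
#version 130
#extension GL_EXT_geometry_shader4 : enable
void main(void)
{
    for(int i = 0; i < gl_VerticesIn; i++)
    {
        vec4 p = gl_PositionIn[i];
        gl_Position = gl_ModelViewProjectionMatrix * p;
        EmitVertex();
        gl_Position = gl_ModelViewProjectionMatrix * (p + vec4(0.5, 0.0, 0.0, 0.0));
        EmitVertex();
        gl_Position = gl_ModelViewProjectionMatrix * (p + vec4(0.0, 0.5, 0.0, 0.0));
        EmitVertex();
        gl_Position = gl_ModelViewProjectionMatrix * (p + vec4(0.5, 0.5, 0.0, 0.0));
        EmitVertex();
        EndPrimitive();
    }
}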
The geometry shader runs after the vertex shader. So those changes you're making to the vertices are being made in screen coordinates, not world.