Mirroring using the geometry shader - OpenGL

I'm trying to perform simple geometry mirroring at the geometry shader stage. My vertex data comes out correctly but the vertex lighting is not correct for the mirrored geometry.
For the vertex positions I'm simply mirroring about the XZ plane, which works fine for my needs:
vec3 a = VPosition[0];
a.y = -a.y; // reflect about the XZ plane
vec3 b = VPosition[1];
b.y = -b.y;
vec3 c = VPosition[2];
c.y = -c.y;
gl_Position = ModelViewProj * vec4(a, 1.0);
EmitVertex();
gl_Position = ModelViewProj * vec4(b, 1.0);
EmitVertex();
gl_Position = ModelViewProj * vec4(c, 1.0);
EmitVertex();
EndPrimitive();
But how do I mirror the vertex normals? Simply negating the normal's y in the same way as the vertex position doesn't seem to work.
E.g.
vec3 an = VNormal[0];
an.y = -an.y;
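For what it's worth, negating y is the correct reflection for a normal about the XZ plane, but reflecting geometry also flips each triangle's winding order, so with back-face culling or one-sided lighting the mirrored copy shades as if it faced away. A sketch that mirrors the normals and emits the mirrored triangle in reversed order (the Normal output name is illustrative, assuming the fragment shader consumes it):
vec3 na = VNormal[0]; na.y = -na.y; // mirror the normals the same way
vec3 nb = VNormal[1]; nb.y = -nb.y;
vec3 nc = VNormal[2]; nc.y = -nc.y;
// Emit in reverse order (c, b, a) so the mirrored triangle
// keeps a front-facing winding.
gl_Position = ModelViewProj * vec4(c, 1.0);
Normal = normalize(nc);
EmitVertex();
gl_Position = ModelViewProj * vec4(b, 1.0);
Normal = normalize(nb);
EmitVertex();
gl_Position = ModelViewProj * vec4(a, 1.0);
Normal = normalize(na);
EmitVertex();
EndPrimitive();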

Related

OpenGL texture transformation when two textures on same mesh

I have a situation where I have two textures on a single mesh, and I want to transform these textures independently. I have base code in which I was able to load and transform one texture. Now I have code that loads two textures, but the issue is that when I try to transform the first texture, both of them get transformed, since we are modifying the shared texture coordinates.
The green one is the first texture and the star is the second texture.
I have no idea how to transform just the second texture. Please guide me to any solution you have.
You can do it in many ways; one of them would be to have two different texture matrices and pass them to the vertex shader:
#version 400 compatibility
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aNormal;
layout (location = 2) in vec2 aTexCoord;
out vec2 TexCoord;
out vec2 TexCoord2;
out vec3 Normal;
out vec3 FragPos;
out mat4 Tex2Matrix;
out mat4 ViewDirMatrix;
uniform mat4 textureMatrix;
uniform mat4 textureMatrix2;
uniform mat4 NormalMatrix;
uniform mat4 ubo_model;
uniform mat4 ubo_view;
uniform mat4 ubo_projection;
void main()
{
    Normal = mat3(NormalMatrix) * aNormal;
    Tex2Matrix = textureMatrix2;
    ViewDirMatrix = textureMatrix;
    // Each texture has its own matrix, so the two coordinate
    // sets can be transformed independently.
    vec4 mTex = textureMatrix * vec4(aTexCoord.x, aTexCoord.y, 0.0, 1.0);
    vec4 mTex2 = textureMatrix2 * vec4(aTexCoord.x, aTexCoord.y, 0.0, 1.0);
    TexCoord = vec2(mTex.x, mTex.y);
    TexCoord2 = vec2(mTex2.x, mTex2.y);
    FragPos = vec3(ubo_model * vec4(aPos, 1.0));
    gl_Position = ubo_projection * ubo_view * vec4(FragPos, 1.0);
}
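A matching fragment shader then samples each texture with its own coordinate set. A minimal sketch, assuming two samplers named texture1 and texture2 and an alpha blend (both are illustrative, not from the question):
#version 400 compatibility
in vec2 TexCoord;
in vec2 TexCoord2;
out vec4 FragColor;
uniform sampler2D texture1;
uniform sampler2D texture2;
void main()
{
    vec4 base = texture(texture1, TexCoord);  // moved by textureMatrix
    vec4 star = texture(texture2, TexCoord2); // moved by textureMatrix2
    FragColor = mix(base, star, star.a);      // composite the star over the base
}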
This is how you can create a texture matrix.
glm::mat4x4 GetTextureMatrix()
{
    glm::mat4x4 matrix = glm::mat4x4(1.0f);
    // Note: GLM post-multiplies, so these transforms apply to a
    // texture coordinate in reverse order (bottom line first).
    matrix = glm::translate(matrix, glm::vec3(-PositionX + 0.5, PositionY + 0.5, 0.0));
    matrix = glm::scale(matrix, glm::vec3(1.0 / ScalingX, 1.0 / ScalingY, 0.0));
    matrix = glm::rotate(matrix, glm::radians(RotationX), glm::vec3(1.0, 0.0, 0.0));
    matrix = glm::rotate(matrix, glm::radians(RotationY), glm::vec3(0.0, 1.0, 0.0));
    matrix = glm::rotate(matrix, glm::radians(-RotationZ), glm::vec3(0.0, 0.0, 1.0));
    matrix = glm::translate(matrix, glm::vec3(-PositionX - 0.5, -PositionY - 0.5, 0.0));
    matrix = glm::translate(matrix, glm::vec3(PositionX, PositionY, 0.0));
    return matrix;
}

How do I translate clip planes in GLSL?

So the short of it is that I'm trying to switch from old OpenGL's glClipPlane function to gl_ClipDistance[0].
For glClipPlane I could intuitively do (pseudocode):
pushMatrix()
glRotatef(camera.xRot, 1,0,0)
glRotatef(camera.yRot + 180, 0,1,0)
glTranslate(x-camera.x, y-camera.y, z-camera.z)
glClipPlane(plane_equation)
popMatrix()
and this would translate the plane to the correct location, and face it in the right direction.
For the life of me I cannot get the plane to translate with GLSL - I've tried passing various model matrices, model/view matrices, and altering the plane equation, but no matter what I do the plane is attached to the "Camera" instead of being attached to the "object". As in, moving the camera also moves the portion of the object being clipped, which is less than ideal.
Here are some things I've tried in my vertex shader based on random Google searches:
vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(modelPos,uClipPlane);
or:
// vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(gl_Position,uClipPlane);
or:
vec4 modelPos = ModelMat * vec4( Position, 1.0 );
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(uClipPlane, ModelMat);
Is it just that I don't understand how to properly calculate the model matrix? Or is there some obvious plane-translation step that I'm missing that could solve my problems?
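One standard approach is to transform the plane itself, not just the position: a plane equation transforms by the inverse transpose of the matrix applied to positions. A sketch of that, assuming uClipPlane is given in object space and a GLSL version with inverse() (1.40+); if the plane is already in world space, dot(modelPos, uClipPlane) alone should do:
vec4 modelPos = ModelMat * vec4(Position, 1.0);
// Planes transform with the inverse transpose of the position transform,
// so an object-space plane becomes a world-space plane like this:
vec4 worldPlane = transpose(inverse(ModelMat)) * uClipPlane;
gl_Position = ProjMat * ModelViewMat * vec4(Position, 1.0);
gl_ClipDistance[0] = dot(modelPos, worldPlane);
In practice you would compute the inverse transpose once on the CPU and pass it as a uniform rather than inverting per vertex.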

Write positions to texture OpenGL/GLSL

I want to write the model-space vertex positions of a 3D mesh to a texture in OpenGL. Currently, in order to write to a texture, I render a fullscreen quad to it in a separate pass (based on a tutorial seen here).
The problem is that, from what I understand, I cannot pass more than one vertex list to the shader: the vertex shader can only be bound to one vertex list at a time, and that slot is currently occupied by the screen-space quad.
Vertex Shader code:
layout(location = 0) in vec4 in_position;
out vec4 vs_position;
void main() {
    vs_position = in_position;
    gl_Position = vec4(in_position.xy, 0.0, 1.0);
}
Fragment Shader code:
in vec4 vs_position; // coordinate in the screen-space quad (must match the vertex shader's out)
out vec4 outColor;
void main() {
    vec2 uv = vec2(0.5, 0.5) * vs_position.xy + vec2(0.5, 0.5);
    outColor = ?? // Here I need my vertex position
}
Possible solution (?):
My idea was to introduce another shader pass before this one to output the positions as r, g, b, so that the position of the current texel can be retrieved from the texture (a texture being the only input format large enough to store that many vertices).
Although not 100% accurate, this image might give you an idea of what I want to do:
(image: model-space coordinate map)
Is there a way to encode the positions to the texture without using a fullscreen quad on the GPU?
Please let me know if you want to see more code.
Instead of generating the quad CPU-side, I would attach a geometry shader and create the quad there; that should free up the slot for your model geometry to be passed in.
Geometry shader:
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;
out vec2 texcoord;
void main()
{
    // Expand the single input point into a fullscreen triangle strip.
    gl_Position = vec4( 1.0,  1.0, 0.5, 1.0);
    texcoord = vec2(1.0, 1.0);
    EmitVertex();
    gl_Position = vec4(-1.0,  1.0, 0.5, 1.0);
    texcoord = vec2(0.0, 1.0);
    EmitVertex();
    gl_Position = vec4( 1.0, -1.0, 0.5, 1.0);
    texcoord = vec2(1.0, 0.0);
    EmitVertex();
    gl_Position = vec4(-1.0, -1.0, 0.5, 1.0);
    texcoord = vec2(0.0, 0.0);
    EmitVertex();
    EndPrimitive();
}
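The quad then no longer needs its own vertex buffer; it can be produced by drawing a single point with a trivial vertex stage. A sketch of the pass-through (illustrative, with no attributes bound):
// Pass-through vertex shader: the geometry shader generates the quad,
// so no vertex attributes are needed and the vertex-list slot stays free.
void main()
{
    gl_Position = vec4(0.0, 0.0, 0.0, 1.0);
}
On the application side this pairs with a single-point draw such as glDrawArrays(GL_POINTS, 0, 1).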

OpenGL reconstructing eye-view position from linearized depth incorrect

I have been trying to implement deferred rendering for the past 2 weeks. I have finally come to the spot-lighting pass, using the stencil buffer and linearized depth. I hold 3 framebuffer textures: albedo, normal + depth (X, Y, Z, eye-view linear depth), and a lighting texture. So I draw my light volume (a sphere) and apply this fragment shader:
void main(void)
{
    vec2 texCoord = gl_FragCoord.xy * u_inverseScreenSize.xy;
    float linearDepth = texture2D(u_normalDepth, texCoord.st).a;
    // vector to far plane
    vec3 viewRay = vec3(v_vertex.xy * (-farClip / v_vertex.z), -farClip);
    // scale viewRay by linear depth to get view space position
    vec3 vertex = viewRay * linearDepth;
    vec3 normal = texture2D(u_normalDepth, texCoord.st).xyz * 2.0 - 1.0;
    vec4 ambient = vec4(0.0, 0.0, 0.0, 1.0);
    vec4 diffuse = vec4(0.0, 0.0, 0.0, 1.0);
    vec4 specular = vec4(0.0, 0.0, 0.0, 1.0);
    vec3 lightDir = lightpos - vertex;
    vec3 R = normalize(reflect(lightDir, normal));
    vec3 V = normalize(vertex);
    float lambert = max(dot(normal, normalize(lightDir)), 0.0);
    if (lambert > 0.0) {
        float distance = length(lightDir);
        if (distance <= u_lightRadius) {
            //CLASSICAL LIGHTING COMPUTATION PART
        }
    }
    vec4 final_color = vec4(ambient + diffuse + specular);
    gl_FragColor = vec4(final_color.xyz, 1.0);
}
The variables you need to know: v_vertex is the eye-space position of the vertex (of the sphere), lightpos is the position/center of the light in eye space, and linearDepth is generated in the geometry pass, in eye space.
The problem is that the code fails the check if (distance <= u_lightRadius). The light is never computed unless I remove the distance check. I am sure that I pass these values correctly: the radius is 170.0, and the light position is only 40-50 units away from the model. There is definitely something wrong, but I can't find it. I have tried many combinations of radius and the other variables.
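One thing worth checking (an assumption on my part, since the geometry-pass code isn't shown): the reconstruction only works if the stored depth uses the matching convention. If the write side stores raw eye-space z while the read side expects z normalized by the far plane, the reconstructed vertex lands far from the light and the radius test always fails. A sketch of a consistent pairing, with illustrative variable names:
// Geometry pass (fragment shader): store linear eye-space depth in [0, 1].
// v_eyePos is the interpolated eye-space position of the scene geometry.
float linearDepth = -v_eyePos.z / farClip;
gl_FragColor = vec4(normal * 0.5 + 0.5, linearDepth);
// Lighting pass: the reconstruction expects exactly that convention.
vec3 viewRay = vec3(v_vertex.xy * (-farClip / v_vertex.z), -farClip);
vec3 vertex = viewRay * linearDepth; // eye-space position of the shaded pixel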

Geometry shader - Point to triangle strip not maintaining world position

I am just messing around with some geometry shaders, taking a list of GL_POINTS and outputting a box as a triangle strip. I have it basically working, but when I zoom in/out or pan around, the triangle strips go all over the place and do not maintain their position in the world, though they still correctly draw the box.
For example, if I give the input (5, 5, 0), it will draw a triangle strip with these points to make a box:
(5 , 5 , 0)
(5.5, 5 , 0)
(5 , 5.5, 0)
(5.5, 5.5, 0)
Vertex Shader:
#version 130
in vec4 vVertex;
void main(void)
{
    gl_Position = gl_ModelViewProjectionMatrix * vVertex;
}
Geometry Shader:
#version 130
#extension GL_EXT_geometry_shader4 : enable
void main(void)
{
    vec4 a;
    vec4 b;
    vec4 c;
    vec4 d;
    int i = 0;
    for (i = 0; i < gl_VerticesIn; i++)
    {
        a = gl_PositionIn[i];
        //a.x -= 0.5;
        //a.y -= 0.5;
        //a.z = 0.0;
        gl_Position = a;
        EmitVertex();
        b = gl_PositionIn[i];
        b.x += 0.5;
        //b.y -= 0.5;
        //b.z = 0.0;
        gl_Position = b;
        EmitVertex();
        d = gl_PositionIn[i];
        //d.x -= 0.5;
        d.y += 0.5;
        //d.z = 0.0;
        gl_Position = d;
        EmitVertex();
        c = gl_PositionIn[i];
        c.x += 0.5;
        c.y += 0.5;
        //c.z = 0.0;
        gl_Position = c;
        EmitVertex();
    }
    EndPrimitive();
}
I'm probably missing something dumb.
Multiply each vertex by gl_ModelViewMatrix in your vertex shader instead; it's far easier to reason in world space.
After that you can do what you do in the geometry shader, but don't forget to multiply the vertices by your projection matrix before emitting them. That should fix your issue.
Edit: I forgot that gl_ModelViewMatrix transforms to view space, sorry. Just pass the vertex through the VS without doing anything to it. That means you will still be in model space in the GS. Do your offset work in the GS, then, before emitting, transform with gl_ModelViewProjectionMatrix.
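A minimal sketch of that fix, reusing the question's shader names (and relying on the compatibility-profile built-in matrices being visible in the geometry shader, as in the question's setup):
// Vertex shader: pass the model-space position through untouched.
#version 130
in vec4 vVertex;
void main(void)
{
    gl_Position = vVertex;
}
// Geometry shader: offset in model space, project only when emitting.
a = gl_PositionIn[i];
a.x += 0.5;
gl_Position = gl_ModelViewProjectionMatrix * a;
EmitVertex();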
The geometry shader runs after the vertex shader. So those changes you're making to the vertices are being made in clip coordinates, not world coordinates.