Wide lines in geometry shader behave oddly - OpenGL

I'm trying to render arbitrarily wide lines (in screen space) using a geometry shader. At first all seems well, but from certain view positions the lines are rendered incorrectly:
The image on the left shows the correct rendering (three lines on the positive X, Y and Z axes, 2 pixels wide).
When the camera moves near the origin (and indeed near the lines), the lines are rendered like the right image. The shader seems straightforward, and I don't understand what's going on in my GPU:
--- Vertex Shader
#version 410 core
// Modelview-projection matrix
uniform mat4 ds_ModelViewProjection;
// Vertex position
in vec4 ds_Position;
// Vertex color
in vec4 ds_Color;
// Processed vertex color
out vec4 ds_VertexColor;
void main()
{
    gl_Position = ds_ModelViewProjection * ds_Position;
    ds_VertexColor = ds_Color;
}
--- Geometry Shader
#version 410 core
// Viewport size, in pixels
uniform vec2 ds_ViewportSize;
// Line width, in pixels
uniform float ds_LineWidth = 2.0;
// Processed vertex color (from VS, in clip space)
in vec4 ds_VertexColor[2];
// Processed primitive vertex color
out vec4 ds_GeoColor;
layout (lines) in;
layout (triangle_strip, max_vertices = 4) out;
void main()
{
    vec3 ndc0 = gl_in[0].gl_Position.xyz / gl_in[0].gl_Position.w;
    vec3 ndc1 = gl_in[1].gl_Position.xyz / gl_in[1].gl_Position.w;
    vec2 lineScreenForward = normalize(ndc1.xy - ndc0.xy);
    vec2 lineScreenRight = vec2(-lineScreenForward.y, lineScreenForward.x);
    vec2 lineScreenOffset = (vec2(ds_LineWidth) / ds_ViewportSize) * lineScreenRight;
    gl_Position = vec4(ndc0.xy + lineScreenOffset, ndc0.z, 1.0);
    ds_GeoColor = ds_VertexColor[0];
    EmitVertex();
    gl_Position = vec4(ndc0.xy - lineScreenOffset, ndc0.z, 1.0);
    ds_GeoColor = ds_VertexColor[0];
    EmitVertex();
    gl_Position = vec4(ndc1.xy + lineScreenOffset, ndc1.z, 1.0);
    ds_GeoColor = ds_VertexColor[1];
    EmitVertex();
    gl_Position = vec4(ndc1.xy - lineScreenOffset, ndc1.z, 1.0);
    ds_GeoColor = ds_VertexColor[1];
    EmitVertex();
    EndPrimitive();
}
--- Fragment Shader
#version 410 core // (assumed; the other stages use 410 core and the original snippet omitted it)
// Processed primitive vertex color
in vec4 ds_GeoColor;
// The fragment color.
out vec4 ds_FragColor;
void main()
{
    ds_FragColor = ds_GeoColor;
}

Your mistake is in this:
gl_Position = vec4(ndc0.xy + lineScreenOffset, ndc0.z, 1.0 /* WRONG */);
To fix it:
vec4 cpos = gl_in[0].gl_Position;
gl_Position = vec4(cpos.xy + lineScreenOffset*cpos.w, cpos.z, cpos.w);
What you did was lose the information carried in W, which effectively detunes the hardware clipper, downgrading it from a 3D clipper into a 2D one.
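Applied to all four emitted vertices, the fix looks roughly like this (a sketch assembled from the snippet above; the offset is still computed from the NDC direction as before):
vec4 cpos0 = gl_in[0].gl_Position;
vec4 cpos1 = gl_in[1].gl_Position;
// scale the NDC-space offset by each vertex's own w, so that the
// perspective divide brings it back to the intended pixel offset
gl_Position = vec4(cpos0.xy + lineScreenOffset * cpos0.w, cpos0.z, cpos0.w);
ds_GeoColor = ds_VertexColor[0];
EmitVertex();
gl_Position = vec4(cpos0.xy - lineScreenOffset * cpos0.w, cpos0.z, cpos0.w);
ds_GeoColor = ds_VertexColor[0];
EmitVertex();
gl_Position = vec4(cpos1.xy + lineScreenOffset * cpos1.w, cpos1.z, cpos1.w);
ds_GeoColor = ds_VertexColor[1];
EmitVertex();
gl_Position = vec4(cpos1.xy - lineScreenOffset * cpos1.w, cpos1.z, cpos1.w);
ds_GeoColor = ds_VertexColor[1];
EmitVertex();
EndPrimitive();
With the clip-space w restored, the hardware clipper can again clip the quad against the near plane instead of receiving already-divided (and possibly sign-flipped) coordinates.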

Today I found the answer by myself. I don't understand it entirely, but it solved the question.
The problem arises when the line vertices go beyond the near plane of the projection matrix defined for the scene (in my case, the end vertices of all three lines). The solution is to manually clip the line vertices against the view frustum (that way the vertices cannot go beyond the near plane!).
What happens to ndc0 and ndc1 when they are outside the view frustum? Looking at the images, it seems that the XY components have their sign flipped (after they are transformed into clip space!): that would mean the W coordinate has the opposite sign of the normal one, wouldn't it?
Without the geometry shader, the rasterizer would have taken responsibility for clipping those primitives against the view frustum, but since I've introduced the geometry shader, I need to compute that result myself. Can anyone suggest some links about this matter?
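For illustration, here is a minimal sketch of such a manual clip, done in clip space before the perspective divide. It clips the segment against the w = epsilon plane through the camera (which is where w changes sign and the XY flip happens); the epsilon value and the structure are assumptions, not code from the original post:
vec4 p0 = gl_in[0].gl_Position;
vec4 p1 = gl_in[1].gl_Position;
const float eps = 1e-4; // keep w strictly positive to avoid the sign flip
if (p0.w < eps && p1.w < eps)
    return; // the whole segment is behind the camera
if (p0.w < eps) // move p0 onto the w = eps plane
    p0 = mix(p0, p1, (eps - p0.w) / (p1.w - p0.w));
else if (p1.w < eps) // move p1 onto the w = eps plane
    p1 = mix(p1, p0, (eps - p1.w) / (p0.w - p1.w));
vec3 ndc0 = p0.xyz / p0.w; // now safe to divide
vec3 ndc1 = p1.xyz / p1.w;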

I met a similar problem where I was trying to draw the normals of vertices as colored lines. The way I drew the normals was to draw all the vertices as points, and then use the GS to expand each vertex into a line. The GS was straightforward, and I found that there were random incorrect lines running across the entire screen. Then I added the line marked by the comment below into the GS, and the problem was fixed. It seems the problem occurred because one end of the line was within the frustum while the other was outside, so I ended up with lines running across the entire screen.
// Draw normal of a vertex by expanding a vertex into a line
[maxvertexcount(2)]
void GSExpand2( point PointVertex points[ 1 ], inout LineStream< PointInterpolants > stream )
{
    PointInterpolants v;
    float4 pos0 = mul(float4(points[0].pos, 1), g_viewproj);
    pos0 /= pos0.w;
    float4 pos1 = mul(float4(points[0].pos + points[0].normal * 0.1f, 1), g_viewproj);
    pos1 /= pos1.w;
    // seems like I need to manually clip the lines, otherwise I end up with incorrect lines running across the entire screen
    if (pos0.z < 0 || pos1.z < 0 || pos0.z > 1 || pos1.z > 1)
        return;
    v.color = float3(0, 0, 1);
    v.pos = pos0;
    stream.Append(v);
    v.color = float3(1, 0, 0);
    v.pos = pos1;
    stream.Append(v);
    stream.RestartStrip();
}
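For readers working in GLSL rather than HLSL, a rough port of that early-out test for the line-input geometry shader from the question might look like this (hedged: GL clips z to [-w, w], unlike D3D's [0, 1] after the divide, so the test is done in clip space; like the HLSL version, it drops the whole primitive rather than truly clipping it):
vec4 p0 = gl_in[0].gl_Position;
vec4 p1 = gl_in[1].gl_Position;
// discard the segment if either endpoint is outside the near/far range
if (p0.z < -p0.w || p0.z > p0.w ||
    p1.z < -p1.w || p1.z > p1.w)
    return;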

Related

GLSL: Fade 2D grid based on distance from camera

I am currently trying to draw a 2D grid on a single quad using only shaders. I am using SFML as the graphics library and sf::View to control the camera. So far I have been able to draw an anti-aliased, multi-level grid. The first level (blue) outlines a chunk and the second level (grey) outlines the tiles within a chunk.
I would now like to fade grid levels based on the distance from the camera. For example, the chunk grid should fade in as the camera zooms in. The same should be done for the tile grid after the chunk grid has been completely faded in.
I am not sure how this could be implemented as I am still new to OpenGL and GLSL. If anybody has any pointers on how this functionality can be implemented, please let me know.
Vertex Shader
#version 130
out vec2 texCoords;
void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    texCoords = (gl_TextureMatrix[0] * gl_MultiTexCoord0).xy;
}
Fragment Shader
#version 130
uniform vec2 chunkSize = vec2(64.0, 64.0);
uniform vec2 tileSize = vec2(16.0, 16.0);
uniform vec3 chunkBorderColor = vec3(0.0, 0.0, 1.0);
uniform vec3 tileBorderColor = vec3(0.5, 0.5, 0.5);
uniform bool drawGrid = true;
in vec2 texCoords;
void main() {
    vec2 uv = texCoords.xy * chunkSize;
    vec3 color = vec3(1.0, 1.0, 1.0);
    if (drawGrid) {
        float aa = length(fwidth(uv));
        vec2 halfChunkSize = chunkSize / 2.0;
        vec2 halfTileSize = tileSize / 2.0;
        vec2 a = abs(mod(uv - halfChunkSize, chunkSize) - halfChunkSize);
        vec2 b = abs(mod(uv - halfTileSize, tileSize) - halfTileSize);
        color = mix(color, tileBorderColor, smoothstep(aa, 0.0, min(b.x, b.y)));
        color = mix(color, chunkBorderColor, smoothstep(aa, 0.0, min(a.x, a.y)));
    }
    gl_FragColor.rgb = color;
    gl_FragColor.a = 1.0;
}
You need to split the multiplication in your vertex shader into two parts:
// have a variable to be interpolated per fragment
out vec2 vertex_coordinate;
...
{
    // this will store the coordinates of the vertex
    // before it is projected (i.e. its "world" coordinates)
    vec4 world_vertex = gl_ModelViewMatrix * gl_Vertex;
    vertex_coordinate = world_vertex.xy;
    // get your projected vertex position as before
    gl_Position = gl_ProjectionMatrix * world_vertex;
    ...
}
Then in the fragment shader you change the color based on the world vertex coordinate and the camera position:
in vec2 vertex_coordinate;
// this has to be updated every time your camera changes position
uniform vec2 camera_world_position = vec2(64.0, 64.0);
...
{
    ...
    // distance from the fragment (in world coordinates) to the camera
    float fade_factor = length(camera_world_position - vertex_coordinate);
    // make it 1 near the camera and 0 when more than 100 units away
    fade_factor = clamp(1.0 - fade_factor / 100.0, 0.0, 1.0);
    // apply the factor to the final color
    gl_FragColor.rgb = color * fade_factor;
    ...
}
The second way to do it is to use the projected coordinate's w component. I personally prefer to calculate the distance in units of space. I did not test this code and it might have some trivial syntax errors, but if you understand the idea, you can apply it in any other way.
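To fade the two grid levels independently, as the question asks, one hedged variation on the distance-based approach above is to compute a separate fade factor per level and multiply it into the existing mix() calls; the threshold distances below are made-up values to tune:
// assumed thresholds: the chunk grid fades in between 200 and 100 units
// from the camera, the tile grid between 100 and 50 units
float dist = length(camera_world_position - vertex_coordinate);
float chunkFade = clamp((200.0 - dist) / (200.0 - 100.0), 0.0, 1.0);
float tileFade  = clamp((100.0 - dist) / (100.0 -  50.0), 0.0, 1.0);
color = mix(color, tileBorderColor,
            tileFade * smoothstep(aa, 0.0, min(b.x, b.y)));
color = mix(color, chunkBorderColor,
            chunkFade * smoothstep(aa, 0.0, min(a.x, a.y)));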

Textures in the OpenGL Rendering pipeline

Let's say I start with a quad that just covers the entire screen. I then put it through a projection matrix so that it appears as a trapezoid on the screen. There is a texture on it. As the base of the trapezoid is meant to be closer to the camera, OpenGL correctly renders the texture such that things in the texture appear bigger at the base of the trapezoid (as this is seemingly closer to the camera).
How does OpenGL know to render the texture in this perspective-correct way rather than just stretching the sides of the texture into the trapezoid shape? Certainly it must be using the vertex z values, but how does it use those to map to textures in the fragment shader? In the fragment shader it feels like I am just working with the x and y coordinates of textures, with no z values being relevant.
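(For reference, the short answer is perspective-correct interpolation: the rasterizer does not interpolate attributes linearly in screen space; it interpolates u/w and 1/w, where w is each vertex's clip-space w, and divides per fragment. Along an edge with endpoint texture coordinates $u_0, u_1$ and clip-space $w_0, w_1$, the value at screen-space fraction $\lambda$ is

$$u(\lambda) = \frac{(1-\lambda)\,u_0/w_0 + \lambda\,u_1/w_1}{(1-\lambda)/w_0 + \lambda/w_1}.$$

With $w_0 = w_1 = 1$, as in the shader below, which writes 1.0 into gl_Position.w, this collapses to plain linear interpolation per triangle, which is exactly the seam visible in the edit.)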
EDIT:
I tried using the information provided in the links in the comments. I am not sure if there is information I am missing related to my question specifically, or if I am doing something incorrectly.
What I am trying to do is make a pseudo-3D, SNES Mode 7-like projection (if you don't know what this is, it's OK; I explain below what I'm trying to do).
Here's how it's coming out now.
As you can see, something funny is happening. You can clearly see that the quad is actually 2 triangles, and the black text area at the top should be straight, not crooked.
Whatever is happening, it's clear that the triangle on the left and the triangle on the right have their textures rendered differently. The z-values are not being changed. Based on the info in the links in the comments, I thought that I could simply move the top two vertices of my rectangular quad inward so that it became a trapezoid instead, and this would act like a projection.
I know that a "normal" thing to do would be to use glm::lookat for a view matrix and glm::perspective for a projection matrix, but these are a bit of a black box to me and I would rather find a more easy-to-understand way.
I may have already provided enough info for someone to answer, but just in case, here is my code:
Vertex Shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 2) in vec2 texCoord;
out vec2 TexCoord;
void main()
{
    // adjust vertex positions to make the rectangle into a trapezoid
    if (position.y < 0) {
        gl_Position = vec4(position.x * 2.0, position.y * 2.0, 0.0, 1.0);
    } else {
        gl_Position = vec4(position.x * 1.0, position.y * 2.0, 0.0, 1.0);
    }
    TexCoord = vec2(texCoord.x, 1.0 - texCoord.y);
}
Fragment Shader:
#version 330 core
in vec2 TexCoord;
out vec4 color;
uniform sampler2D ourTexture1;
uniform mat3 textures_transform_mat_input;
mat3 TexCoord_to_mat3;
mat3 foo_mat3;
void main()
{
    TexCoord_to_mat3 = mat3(1.0); // start from identity; elements left unset would otherwise be undefined
    TexCoord_to_mat3[0][2] = TexCoord.x;
    TexCoord_to_mat3[1][2] = TexCoord.y;
    foo_mat3 = TexCoord_to_mat3 * textures_transform_mat_input;
    vec2 foo = vec2(foo_mat3[0][2], foo_mat3[1][2]);
    vec2 bar = vec2(TexCoord.x, TexCoord.y);
    color = texture(ourTexture1, foo);
    vec2 center = vec2(0.5, 0.5);
}
Relevant code in main (note that I am using a C library, CGLM, which is like GLM; also, the "center" and "center undo" matrices are just to make sure rotation happens about the center rather than a corner):
if (!init_complete) {
    glm_mat3_identity(textures_scale_mat);
    textures_scale_mat[0][0] = 1.0/ASPECT_RATIO / 3.0;
    textures_scale_mat[1][1] = 1.0/1.0 / 3.0;
}
mat3 center_mat;
glm_mat3_identity(center_mat); /* start from identity; the remaining elements are otherwise uninitialized */
center_mat[0][2] = -0.5;
center_mat[1][2] = -0.5;
mat3 center_undo_mat;
glm_mat3_identity(center_undo_mat);
center_undo_mat[0][2] = 0.5;
center_undo_mat[1][2] = 0.5;
glm_mat3_identity(textures_position_mat);
textures_position_mat[0][2] = player.y / 1.0;
textures_position_mat[1][2] = player.x / 1.0;
glm_mat3_identity(textures_orientation_mat);
textures_orientation_mat[0][0] = cos(player_rotation_radians);
textures_orientation_mat[0][1] = sin(player_rotation_radians);
textures_orientation_mat[1][0] = -sin(player_rotation_radians);
textures_orientation_mat[1][1] = cos(player_rotation_radians);
glm_mat3_identity(textures_transform_mat);
glm_mat3_mul(center_mat, textures_orientation_mat, textures_transform_mat);
glm_mat3_mul(textures_transform_mat, center_undo_mat, textures_transform_mat);
glm_mat3_mul(textures_transform_mat, textures_scale_mat, textures_transform_mat);
glm_mat3_mul(textures_transform_mat, textures_position_mat, textures_transform_mat);
glUniformMatrix3fv(glGetUniformLocation(shader_perspective, "textures_transform_mat_input"), 1, GL_FALSE, textures_transform_mat);
glBindTexture(GL_TEXTURE_2D, texture_mute_city);
glDrawArrays(GL_TRIANGLES, 0, 6);
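One hedged alternative, instead of shifting the top vertices inward in NDC (which keeps w = 1 and forces affine texturing), is to encode the "depth" of the top edge in w and let the perspective divide narrow it; a minimal sketch of such a vertex shader, where the factor 2.0 is an arbitrary choice of how much "deeper" the top edge sits:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 2) in vec2 texCoord;
out vec2 TexCoord;
void main()
{
    // give the top edge a larger w; the perspective divide then narrows it
    float w = (position.y < 0.0) ? 1.0 : 2.0;
    // multiplying y by w cancels the divide for y, so only x is narrowed;
    // texture coordinates are now interpolated perspective-correctly
    gl_Position = vec4(position.x, position.y * w, 0.0, w);
    TexCoord = vec2(texCoord.x, 1.0 - texCoord.y);
}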

GBUFFER Decal Projection and scaling

I have been working on projecting decals onto anything that the decal's bounding box encapsulates. After reading and trying numerous code snippets (usually in HLSL), I have a somewhat working method in GLSL for projecting the decals.
Let me start with trying to explain what I'm doing and how this works (so far).
The code below is now fixed and works!
This all is while in the perspective view mode.
I send 2 uniforms to the fragment shader, "tr" and "bl". These are the 2 corners of the bounding box. I can and will replace these with hard-coded sizes because they are the size of the decal's original bounding box: tr = vec3(.5, .5, .5) and bl = vec3(-.5, -.5, -.5). I'd prefer to find a way to do the position tests in the decal's transformed state (more about this at the end).
Adding this for clarity: the vertex emitted from the vertex program is the bounding box multiplied by the decal's matrix and then by the model-view-projection matrix. I use this for the next step:
With that vertex, I get the depth value from the depth texture and, with it, calculate the position in world space using the inverse of the projection matrix.
Next, I transform this position using the inverse of the decal's matrix (the matrix that scales, rotates and translates the 1x1x1 cube to its world location). I thought that by using the inverse of the decal's transform matrix, the correct size and rotation of the screen point would be handled, but it is not.
Vertex Program:
//Decals color pass.
#version 330 compatibility
out mat4 matPrjInv;
out vec4 positionSS;
out vec4 positionWS;
out mat4 invd_mat;
uniform mat4 decal_matrix;
void main(void)
{
    gl_Position = decal_matrix * gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Position;
    positionWS = decal_matrix * gl_Vertex;
    positionSS = gl_Position;
    matPrjInv = inverse(gl_ModelViewProjectionMatrix);
    invd_mat = inverse(decal_matrix);
}
Fragment Program:
#version 330 compatibility
layout (location = 0) out vec4 gPosition;
layout (location = 1) out vec4 gNormal;
layout (location = 2) out vec4 gColor;
uniform sampler2D depthMap;
uniform sampler2D colorMap;
uniform sampler2D normalMap;
uniform mat4 matrix;
uniform vec3 tr;
uniform vec3 bl;
in vec2 TexCoords;
in vec4 positionSS; // screen space
in vec4 positionWS; // world space
in mat4 invd_mat; // inverse decal matrix
in mat4 matPrjInv; // inverse projection matrix
void clip(vec3 v){
    if (v.x > tr.x || v.x < bl.x) { discard; }
    if (v.y > tr.y || v.y < bl.y) { discard; }
    if (v.z > tr.z || v.z < bl.z) { discard; }
}
vec2 postProjToScreen(vec4 position)
{
    vec2 screenPos = position.xy / position.w;
    return 0.5 * (vec2(screenPos.x, screenPos.y) + 1);
}
void main(){
    // Calculate UVs
    vec2 UV = postProjToScreen(positionSS);
    // sample the depth from the depth sampler
    float Depth = texture2D(depthMap, UV).x * 2.0 - 1.0;
    // recreate the world position out of the screen coordinates and depth sample
    vec4 ScreenPosition;
    ScreenPosition.xy = UV * 2.0 - 1.0;
    ScreenPosition.z = Depth;
    ScreenPosition.w = 1.0;
    // transform position from screen space to world space
    vec4 WorldPosition = matPrjInv * ScreenPosition;
    WorldPosition.xyz /= WorldPosition.w;
    WorldPosition.w = 1.0;
    // transform to the decal's original position and size (1 x 1 x 1 cube)
    WorldPosition = invd_mat * WorldPosition;
    clip(WorldPosition.xyz);
    // Get UVs for the textures
    WorldPosition.xy += 0.5;
    WorldPosition.y *= -1.0;
    vec4 bump = texture2D(normalMap, WorldPosition.xy);
    gColor = texture2D(colorMap, WorldPosition.xy);
    // Going to have to do decals in 2 passes..
    // Blend doesn't work with GBUFFER.
    // Lots more to sort out.
    gNormal.xyz = bump.xyz;
    gPosition = positionWS;
}
And here are a couple of images showing what's wrong.
What I get for the projection:
And this is the actual size of the decals: much larger than what my shader is creating!
I have tried creating a new matrix using the decal and projection matrices to construct a sort of "lookat" matrix and translate the screen position into the decal's post-transform state, but I have not been able to get this working. Somewhere I am missing something, but where? I thought that transforming with the inverse of the decal's matrix would deal with the transform and put the screen position into the proper transformed state. Ideas?
I've updated the code above for the texture UVs. You may have to fiddle with the y and x depending on whether your texture is flipped on x or y. I also fixed the clip function so it works correctly. As it is, this code now works. I will update it further if needed so others don't have to go through the pain I did to get it working.
Some issues remain, such as decals lying over each other: the one on top overwrites the one below. I think I will have to accumulate the colors and normals into the default FBO and then blend (add) them into the GBUFFER textures before or during the lighting pass. Adding more screen-size textures is not a great idea, so I will need to be creative and recycle any textures I can.
I found the solution to decals overlaying each other.
Turn OFF depth masking while drawing the decals and turn it back on afterwards:
glDepthMask(GL_FALSE)
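In context, the pass might look roughly like this (a sketch; drawDecals() is a hypothetical stand-in for whatever issues the decal draw calls):
glDepthMask(GL_FALSE);  /* decals must not write depth, or they occlude each other */
drawDecals();           /* hypothetical: bind the decal shader and draw the boxes */
glDepthMask(GL_TRUE);   /* restore depth writes for the rest of the frame */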
OK.. I'm so excited. I found the issue.
I updated the code above again.
I had a mistake in what I was sending the shader for tr and bl:
Here is the change to clip:
void clip(vec3 v){
    if (v.x > tr.x || v.x < bl.x) { discard; }
    if (v.y > tr.y || v.y < bl.y) { discard; }
    if (v.z > tr.z || v.z < bl.z) { discard; }
}

3D texture get hidden when viewed from different angle

I have encountered a problem of rendering artifacts in a 3D texture, as below:
I have searched the net for a solution to this problem, and most answers pointed towards a problem with the depth buffer bit depth.
I have tried changing the depth buffer to 24 bits (from GL_DEPTH to GL_STENCIL in GLUT), but the result remains the same: the texture (or the geometry, I'm not really sure which) gets hidden when viewed from certain angles.
So, can anyone tell me what exactly the problem is that results in this kind of artifact?
Below is the fragment shader code snippet (from the OpenGL Development Cookbook):
#version 330 core
// NOTE: the declarations below are assumed from context; the original snippet started at main()
smooth in vec3 vUV;            // 3D texture coordinates from the vertex shader
uniform sampler3D volume;      // the volume dataset
uniform vec3 camPos;           // camera position
uniform float step_size;       // ray-marching step size
const int MAX_SAMPLES = 300;   // assumed maximum number of samples along the ray
const vec3 texMin = vec3(0.0); // minimum texture bound
const vec3 texMax = vec3(1.0); // maximum texture bound
layout(location = 0) out vec4 vFragColor;
void main()
{
    vFragColor = vec4(0.0); // start fully transparent (out variables are otherwise undefined)
    //get the 3D texture coordinates for lookup into the volume dataset
    vec3 dataPos = vUV;
    vec3 geomDir = normalize((vec3(0.556,0.614,0.201)*vUV - vec3(0.278,0.307,0.1005)) - camPos);
    vec3 dirStep = geomDir * step_size;
    //flag to indicate if the ray-march loop should terminate
    bool stop = false;
    //for all samples along the ray
    for (int i = 0; i < MAX_SAMPLES; i++) {
        // advance ray by dirStep
        dataPos = dataPos + dirStep;
        // all three sign tests pass (dot == 3) only while dataPos stays inside [texMin, texMax]
        stop = dot(sign(dataPos-texMin), sign(texMax-dataPos)) < 3.0f;
        //if the stopping condition is true we break out of the ray-marching loop
        if (stop)
            break;
        // data fetching from the red channel of the volume texture
        float sample = texture(volume, dataPos).r;
        // front-to-back alpha compositing
        float prev_alpha = sample - (sample * vFragColor.a);
        vFragColor.rgb = prev_alpha * vec3(sample) + vFragColor.rgb;
        vFragColor.a += prev_alpha;
        // early ray termination once nearly opaque
        if (vFragColor.a > 0.99)
            break;
    }
}
FYI, below is the vertex shader snippet:
#version 330 core
layout(location = 0) in vec3 vVertex; //object space vertex position
//uniform
uniform mat4 MVP; //combined modelview projection matrix
smooth out vec3 vUV; //3D texture coordinates for texture lookup in the fragment shader
void main()
{
    //get the clip-space position
    gl_Position = MVP * vec4(vVertex.xyz, 1);
    //get the 3D texture coordinates by adding (0.5,0.5,0.5) to the object-space
    //vertex position. Since the unit cube is at the origin (min: (-0.5,-0.5,-0.5), max: (0.5,0.5,0.5)),
    //adding (0.5,0.5,0.5) to the unit cube object-space position gives values from (0,0,0) to (1,1,1)
    //vUV = (vVertex + vec3(0.278,0.307,0.1005)) / vec3(0.556,0.614,0.201);
    vUV = vVertex / vec3(0.556,0.614,0.201); //after moving the cube to the coordinate range 0-1
}
EDITED: The artifacts appear especially when viewing from close to the edge.
FYI, glm::perspective(45.0f,(float)w/h, 1.0f,10.0f);

Resizing point sprites based on distance from the camera

I'm writing a clone of Wolfenstein 3D using only core OpenGL 3.3 for university, and I've run into a bit of a problem with the sprites, namely getting them to scale correctly based on distance.
From what I can tell, previous versions of OpenGL would in fact do this for you, but that functionality has been removed, and all my attempts to reimplement it have resulted in complete failure.
My current implementation is passable at a distance, not too shabby at mid range, and bizarre at close range.
The main problem (I think) is that I have no understanding of the maths I'm using.
The target size of the sprite is slightly bigger than the viewport, so it should 'go out of the picture' as you get right up to it, but it doesn't. It gets smaller, and that's confusing me a lot.
I recorded a small video of this, in case words are not enough. (Mine is on the right)
Can anyone direct me to where I'm going wrong, and explain why?
Code:
C++
// setup
glPointParameteri(GL_POINT_SPRITE_COORD_ORIGIN, GL_LOWER_LEFT);
glEnable(GL_PROGRAM_POINT_SIZE);
// Drawing
glUseProgram(StaticsProg);
glBindVertexArray(statixVAO);
glUniformMatrix4fv(uStatixMVP, 1, GL_FALSE, glm::value_ptr(MVP));
glDrawArrays(GL_POINTS, 0, iNumSprites);
Vertex Shader
#version 330 core
layout(location = 0) in vec2 pos;
layout(location = 1) in int spriteNum_;
flat out int spriteNum;
uniform mat4 MVP;
const float constAtten = 0.9;
const float linearAtten = 0.6;
const float quadAtten = 0.001;
void main() {
    spriteNum = spriteNum_;
    gl_Position = MVP * vec4(pos.x + 1, pos.y, 0.5, 1); // Note: I have fiddled the MVP so that z is height rather than depth, since this is how I learned my vectors.
    float dist = distance(gl_Position, vec4(0,0,0,1));
    float attn = constAtten / ((1 + linearAtten * dist) * (1 + quadAtten * dist * dist));
    gl_PointSize = 768.0 * attn;
}
Fragment Shader
#version 330 core
flat in int spriteNum;
out vec4 color;
uniform sampler2DArray Sprites;
void main() {
    color = texture(Sprites, vec3(gl_PointCoord.s, gl_PointCoord.t, spriteNum));
    if (color.a < 0.2)
        discard;
}
First of all, I don't really understand why you use pos.x + 1.
Next, like Nathan said, you shouldn't use the clip-space point, but the eye-space point. This means you only use the modelview-transformed point (without projection) to compute the distance:
uniform mat4 MV; //modelview matrix
vec3 eyePos = (MV * vec4(pos.x, pos.y, 0.5, 1)).xyz; //note the .xyz: the matrix product is a vec4
Furthermore I don't completely understand your attenuation computation. At the moment a higher constAtten value means less attenuation. Why don't you just use the model that OpenGL's deprecated point parameters used:
float dist = length(eyePos); //since the distance to (0,0,0) is just the length
float attn = inversesqrt(constAtten + linearAtten*dist + quadAtten*dist*dist);
EDIT: But in general I think this attenuation model is not a good approach, because often you just want the sprite to keep its object-space size, and you would have to fiddle quite a bit with the attenuation factors to achieve that.
A better way is to input its object space size and just compute the screen space size in pixels (which is what gl_PointSize actually is) based on that using the current view and projection setup:
uniform mat4 MV; //modelview matrix
uniform mat4 P; //projection matrix
uniform float spriteWidth; //object space width of sprite (maybe an per-vertex in)
uniform float screenWidth; //screen width in pixels
vec4 eyePos = MV * vec4(pos.x, pos.y, 0.5, 1);
vec4 projCorner = P * vec4(0.5*spriteWidth, 0.5*spriteWidth, eyePos.z, eyePos.w);
gl_PointSize = screenWidth * projCorner.x / projCorner.w;
gl_Position = P * eyePos;
This way the sprite always gets the size it would have when rendered as a textured quad with a width of spriteWidth.
EDIT: Of course you also should keep in mind the limitations of point sprites. A point sprite is clipped based of its center position. This means when its center moves out of the screen, the whole sprite disappears. With large sprites (like in your case, I think) this might really be a problem.
Therefore I would rather suggest you to use simple textured quads. This way you circumvent this whole attenuation problem, as the quads are just transformed like every other 3d object. You only need to implement the rotation toward the viewer, which can either be done on the CPU or in the vertex shader.
Based on Christian Rau's answer (last edit), I implemented a geometry shader that builds a billboard in ViewSpace, which seems to solve all my problems:
Here are the shaders: (Note that I have fixed the alignment issue that required the original shader to add 1 to x)
Vertex Shader
#version 330 core
layout (location = 0) in vec4 gridPos;
layout (location = 1) in int spriteNum_in;
flat out int spriteNum;
// simple pass-thru to the geometry generator
void main() {
gl_Position = gridPos;
spriteNum = spriteNum_in;
}
Geometry Shader
#version 330 core
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;
flat in int spriteNum[];
smooth out vec3 stp;
uniform mat4 Projection;
uniform mat4 View;
void main() {
    // Transform the point into view (eye) space.
    vec4 pos = View * gl_in[0].gl_Position;
    int snum = spriteNum[0];
    // Bottom left corner
    gl_Position = pos;
    gl_Position.x += 0.5;
    gl_Position = Projection * gl_Position;
    stp = vec3(0, 0, snum);
    EmitVertex();
    // Top left corner
    gl_Position = pos;
    gl_Position.x += 0.5;
    gl_Position.y += 1;
    gl_Position = Projection * gl_Position;
    stp = vec3(0, 1, snum);
    EmitVertex();
    // Bottom right corner
    gl_Position = pos;
    gl_Position.x -= 0.5;
    gl_Position = Projection * gl_Position;
    stp = vec3(1, 0, snum);
    EmitVertex();
    // Top right corner
    gl_Position = pos;
    gl_Position.x -= 0.5;
    gl_Position.y += 1;
    gl_Position = Projection * gl_Position;
    stp = vec3(1, 1, snum);
    EmitVertex();
    EndPrimitive();
}
Fragment Shader
#version 330 core
smooth in vec3 stp;
out vec4 colour;
uniform sampler2DArray Sprites;
void main() {
    colour = texture(Sprites, stp);
    if (colour.a < 0.2)
        discard;
}
I don't think you want to base the distance calculation in your vertex shader on the projected position. Instead, just calculate the position relative to your view, i.e. use the model-view matrix instead of the model-view-projection one.
Think about it this way: in projected space, as an object gets closer to you, its distances in the horizontal and vertical directions become exaggerated. You can see this in the way the lamps move away from the center toward the top of the screen as you approach them. That exaggeration of those dimensions makes the distance get larger as you get really close, which is why you're seeing the object shrink.
At least in OpenGL ES 2.0, there is a maximum size limitation on gl_PointSize imposed by the OpenGL implementation. You can query the supported range with GL_ALIASED_POINT_SIZE_RANGE.
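For instance, a quick sketch of that query (desktop GL exposes the same enum, so this should work in both):
GLfloat range[2];
glGetFloatv(GL_ALIASED_POINT_SIZE_RANGE, range); /* range[0] = min, range[1] = max supported point size */
printf("point sprite size range: %.1f .. %.1f\n", range[0], range[1]);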