Textures in the OpenGL rendering pipeline

Let's say I start with a quad that just covers the entire screen. I then put it through a projection matrix so that it appears as a trapezoid on the screen, and there is a texture on it. As the base of the trapezoid is meant to be closer to the camera, OpenGL correctly renders the texture so that things in it appear bigger at the base of the trapezoid (as this is seemingly closer to the camera).
How does OpenGL know to render the texture itself in this perspective-correct way rather than just stretching the sides of the texture into the trapezoid shape? Certainly it must be using the vertex z values, but how does it use those to map the texture in the fragment shader? In the fragment shader it feels like I am working only with the x and y coordinates of the texture, with no z values involved.
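(Background, since the question hinges on it; this is the standard rule, not something stated in the post: the rasterizer interpolates every varying perspective-correctly by linearly interpolating $v/w$ and $1/w$ in screen space and dividing per fragment, with $\lambda_i$ the screen-space barycentric weights and $w_i$ the clip-space w of each vertex:

$$v_{\mathrm{frag}} = \frac{\sum_i \lambda_i \, v_i / w_i}{\sum_i \lambda_i / w_i}$$

So it is not z itself but the w produced by the projection matrix that drives the effect; with w = 1 at every vertex, this degenerates to plain affine stretching.)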
EDIT:
I tried using the information provided in the links in the comments. I am not sure if there is information I am missing related to my question specifically, or if I am doing something incorrectly.
What I am trying to do is make a pseudo-3D, SNES Mode 7-like projection (if you don't know what that is, it's OK; I explain below what I'm trying to do).
Here's how it's coming out now.
As you can see, something funny is happening. You can clearly see that the quad is actually two triangles, and the black text area at the top should be straight, not crooked.
Whatever is happening, it is clear that the left triangle and the right triangle have their textures rendered differently. The z values are not being changed. Based on the info in the links in the comments, I thought that I could simply move the top two vertices of my rectangular quad inward so that it became a trapezoid instead, and that this would act like a projection.
I know that a "normal" thing to do would be to use glm::lookAt for a view matrix and glm::perspective for a projection matrix, but these are a bit of a black box to me and I would rather find a more easy-to-understand way.
I may have already provided enough info for someone to answer, but just in case, here is my code:
Vertex Shader:
#version 330 core
layout (location = 0) in vec3 position;
layout (location = 2) in vec2 texCoord;

out vec2 TexCoord;

void main()
{
    // adjust vertex positions to make the rectangle into a trapezoid
    if (position.y < 0.0) {
        gl_Position = vec4(position.x * 2.0, position.y * 2.0, 0.0, 1.0);
    } else {
        gl_Position = vec4(position.x * 1.0, position.y * 2.0, 0.0, 1.0);
    }
    TexCoord = vec2(texCoord.x, 1.0 - texCoord.y);
}
Fragment Shader:
#version 330 core
in vec2 TexCoord;
out vec4 color;

uniform sampler2D ourTexture1;
uniform mat3 textures_transform_mat_input;

mat3 TexCoord_to_mat3;
mat3 foo_mat3;

void main()
{
    TexCoord_to_mat3[0][0] = 1.0;
    TexCoord_to_mat3[1][1] = 1.0;
    TexCoord_to_mat3[2][2] = 1.0;
    TexCoord_to_mat3[0][2] = TexCoord.x;
    TexCoord_to_mat3[1][2] = TexCoord.y;

    foo_mat3 = TexCoord_to_mat3 * textures_transform_mat_input;
    vec2 foo = vec2(foo_mat3[0][2], foo_mat3[1][2]);
    vec2 bar = vec2(TexCoord.x, TexCoord.y);

    color = texture(ourTexture1, foo);
    vec2 center = vec2(0.5, 0.5);
}
Relevant code in main (note that I am using a C library, CGLM, which is like GLM; also, the "center" and "center undo" matrices are just to make sure rotation happens about the center rather than a corner):
if (!init_complete) {
    glm_mat3_identity(textures_scale_mat);
    textures_scale_mat[0][0] = 1.0 / ASPECT_RATIO / 3.0;
    textures_scale_mat[1][1] = 1.0 / 1.0 / 3.0;
}

mat3 center_mat;
center_mat[0][0] = 1.0;
center_mat[1][1] = 1.0;
center_mat[2][2] = 1.0;
center_mat[0][2] = -0.5;
center_mat[1][2] = -0.5;

mat3 center_undo_mat;
center_undo_mat[0][0] = 1.0;
center_undo_mat[1][1] = 1.0;
center_undo_mat[2][2] = 1.0;
center_undo_mat[0][2] = 0.5;
center_undo_mat[1][2] = 0.5;

glm_mat3_identity(textures_position_mat);
textures_position_mat[0][2] = player.y / 1.0;
textures_position_mat[1][2] = player.x / 1.0;

glm_mat3_identity(textures_orientation_mat);
textures_orientation_mat[0][0] = cos(player_rotation_radians);
textures_orientation_mat[0][1] = sin(player_rotation_radians);
textures_orientation_mat[1][0] = -sin(player_rotation_radians);
textures_orientation_mat[1][1] = cos(player_rotation_radians);

glm_mat3_identity(textures_transform_mat);
glm_mat3_mul(center_mat, textures_orientation_mat, textures_transform_mat);
glm_mat3_mul(textures_transform_mat, center_undo_mat, textures_transform_mat);
glm_mat3_mul(textures_transform_mat, textures_scale_mat, textures_transform_mat);
glm_mat3_mul(textures_transform_mat, textures_position_mat, textures_transform_mat);

/* cglm's mat3 is float[3][3], so cast for glUniformMatrix3fv */
glUniformMatrix3fv(glGetUniformLocation(shader_perspective, "textures_transform_mat_input"), 1, GL_FALSE, (float *)textures_transform_mat);

glBindTexture(GL_TEXTURE_2D, texture_mute_city);
glDrawArrays(GL_TRIANGLES, 0, 6);
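For reference, the diagonal seam is the classic symptom of affine interpolation: since the vertex shader writes w = 1.0 for every vertex, each of the two triangles interpolates TexCoord linearly on its own, and they disagree along the shared edge. A common fix, sketched here for the same trapezoid shape as the vertex shader above (an illustration, not code from the original post), is to let w carry the narrowing so the hardware's perspective divide and perspective-correct interpolation do the rest:

// In the vertex shader: give the top vertices a larger w and store the
// desired NDC position times w in gl_Position (NDC = clip.xy / w). The
// rasterizer then interpolates TexCoord perspective-correctly across
// both triangles, and the seam disappears.
float w = (position.y < 0.0) ? 1.0 : 2.0;               // top edge half as wide
vec2 ndc = vec2(position.x * (2.0 / w), position.y * 2.0);
gl_Position = vec4(ndc * w, 0.0, w);

The on-screen trapezoid is unchanged, but texels now bunch up toward the top edge the way a real projected quad's would.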

Related

How do I align the raytraced spheres from my fragment shader with GL_POINTS?

I have a very simple shader program that takes in a bunch of position data as GL_POINTS, which generate screen-aligned squares of fragments as normal, with a size depending on depth. In the fragment shader I then wanted to draw a very simple ray-traced sphere for each one, with just the shadow on the side of the sphere opposite the light. I went to this shadertoy to try to figure it out on my own. I used the sphIntersect function for ray-sphere intersection, and sphNormal to get the normal vectors on the sphere for lighting. The problem is that the spheres do not align with the squares of fragments, causing them to be cut off. This is because I am not sure how to match the projections of the spheres and the vertex positions so that they line up. Can I have an explanation of how to do this?
Here is a picture for reference.
Here are my vertex and fragment shaders for reference:
// vertex shader:
#version 460

layout(location = 0) in vec4 position; // position of each point in space
layout(location = 1) in vec4 color;    // color of each point in space
layout(location = 2) uniform mat4 view_matrix; // projection * camera matrix
layout(location = 6) uniform mat4 cam_matrix;  // just the camera matrix

out vec4 col;  // color of vertex
out vec4 posi; // position of vertex

void main() {
    vec4 p = view_matrix * vec4(position.xyz, 1.0);
    gl_PointSize = clamp(1024.0 * position.w / p.z, 0.0, 4000.0);
    gl_Position = p;
    col = color;
    posi = cam_matrix * position;
}
// fragment shader:
#version 460

in vec4 col;  // color of vertex associated with this fragment
in vec4 posi; // position of the vertex associated with this fragment, relative to the camera
out vec4 f_color;
layout (depth_less) out float gl_FragDepth;

float sphIntersect( in vec3 ro, in vec3 rd, in vec4 sph )
{
    vec3 oc = ro - sph.xyz;
    float b = dot( oc, rd );
    float c = dot( oc, oc ) - sph.w*sph.w;
    float h = b*b - c;
    if( h<0.0 ) return -1.0;
    return -b - sqrt( h );
}

vec3 sphNormal( in vec3 pos, in vec4 sph )
{
    return normalize(pos-sph.xyz);
}

void main() {
    vec4 c = clamp(col, 0.0, 1.0);
    vec2 p = ((2.0*gl_FragCoord.xy) - vec2(1920.0, 1080.0)) / 2.0;
    vec3 ro = vec3(0.0, 0.0, -960.0);
    vec3 rd = normalize(vec3(p.x, p.y, 960.0));
    vec3 lig = normalize(vec3(0.6, 0.3, 0.1));
    vec4 k = vec4(posi.x, posi.y, -posi.z, 2.0*posi.w);
    float t = sphIntersect(ro, rd, k);
    vec3 ps = ro + (t * rd);
    vec3 nor = sphNormal(ps, k);
    if(t < 0.0) c = vec4(1.0);
    else c.xyz *= clamp(dot(nor, lig), 0.0, 1.0);
    f_color = c;
    gl_FragDepth = t * 0.0001;
}
Looks like you have many spheres, so I would do this:
Input data
I would have a VBO containing x, y, z, r describing your spheres. You will also need your view transform (uniform) so you can create a ray direction and start position for each fragment, something like my vertex shader in here:
Reflection and refraction impossible without recursive ray tracing?
Create a BBOX in the geometry shader and convert your POINT to a QUAD or POLYGON
Note that you have to account for perspective. If you are not familiar with geometry shaders, see:
rendering cubics in GLSL
where I emit a sequence of OBBs from input lines...
In the fragment shader, ray trace the sphere
You have to compute the intersection between the sphere and the ray, choose the closer intersection, and compute its depth and normal (for lighting). In case of no intersection you have to discard the fragment!
From what I can see in your images, your QUADs do not correspond to your spheres, hence the clipping. You also do not discard fragments with no intersection, so you overwrite already-rendered content around the last rendered sphere with the background color, leaving only a single sphere visible in each QUAD regardless of how many spheres are really there...
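As a minimal illustration of that last point, here is the relevant change to the question's fragment shader (a sketch: replace the write of white with a discard, so misses leave the framebuffer untouched):

float t = sphIntersect(ro, rd, k);
if (t < 0.0) discard;  // ray missed this sphere: keep whatever is already there
vec3 ps = ro + t * rd;
vec3 nor = sphNormal(ps, k);
c.xyz *= clamp(dot(nor, lig), 0.0, 1.0);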
To create a ray direction that matches a perspective matrix from screen space, the following ray direction formula can be used:
vec3 rd = normalize(vec3(((2.0 / screenWidth) * gl_FragCoord.xy) - vec2(aspectRatio, 1.0), -proj_matrix[1][1]));
The value of 2.0 / screenWidth can be pre-computed, or the OpenGL built-in uniforms can be used. (In a standard perspective matrix, proj_matrix[1][1] is cot(fovY / 2), so using it as the z component yields rays that match the projection's field of view.)
To get a bounding box or other shape for your spheres, it is very important to use camera-facing shapes rather than camera-plane-facing shapes. Use the following process, where position is the incoming VBO position data and the w component of position is the radius:
vec4 p = vec4((cam_matrix * vec4(position.xyz, 1.0)).xyz, position.w);
o.vpos = p;
float l2 = dot(p.xyz, p.xyz);
float r2 = p.w * p.w;
float k = 1.0 - (r2 / l2);
float radius = p.w * sqrt(k);
if (l2 < r2) {
    p = vec4(0.0, 0.0, -p.w * 0.49, p.w);
    radius = p.w;
    k = 0.0;
}
vec3 hx = radius * normalize(vec3(-p.z, 0.0, p.x));
vec3 hy = radius * normalize(vec3(-p.x * p.y, p.z * p.z + p.x * p.x, -p.z * p.y));
p.xyz *= k;
Then use hx and hy as basis vectors for whatever 2D shape you want the billboard to have. Don't forget to multiply each vertex by the perspective matrix later to get its final position. Here is a visualization of the billboarding on Desmos, using a hexagon shape: https://www.desmos.com/calculator/yeeew6tqwx
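For concreteness, here is a sketch of how hx and hy might be consumed in a geometry shader (the interface block and the proj_matrix uniform are assumed names, not from the answer above):

#version 460
layout (points) in;
layout (triangle_strip, max_vertices = 4) out;
// assumed interface: view-space center p and the basis vectors computed above
in VertexData { vec4 p; vec3 hx; vec3 hy; } v[];
uniform mat4 proj_matrix;
void main() {
    for (int i = 0; i < 4; ++i) {
        // corners (-1,-1), (1,-1), (-1,1), (1,1) in triangle-strip order
        vec2 s = vec2(float(i & 1), float(i >> 1)) * 2.0 - 1.0;
        vec3 corner = v[0].p.xyz + s.x * v[0].hx + s.y * v[0].hy;
        gl_Position = proj_matrix * vec4(corner, 1.0); // project each vertex last
        EmitVertex();
    }
    EndPrimitive();
}

The same two basis vectors work for any polygonal billboard shape; the hexagon in the Desmos link just uses six corners instead of four.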

GLSL: Fade 2D grid based on distance from camera

I am currently trying to draw a 2D grid on a single quad using only shaders. I am using SFML as the graphics library and sf::View to control the camera. So far I have been able to draw an anti-aliased multi level grid. The first level (blue) outlines a chunk and the second level (grey) outlines the tiles within a chunk.
I would now like to fade grid levels based on the distance from the camera. For example, the chunk grid should fade in as the camera zooms in. The same should be done for the tile grid after the chunk grid has been completely faded in.
I am not sure how this could be implemented as I am still new to OpenGL and GLSL. If anybody has any pointers on how this functionality can be implemented, please let me know.
Vertex Shader
#version 130

out vec2 texCoords;

void main() {
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    texCoords = (gl_TextureMatrix[0] * gl_MultiTexCoord0).xy;
}
Fragment Shader
#version 130

uniform vec2 chunkSize = vec2(64.0, 64.0);
uniform vec2 tileSize = vec2(16.0, 16.0);
uniform vec3 chunkBorderColor = vec3(0.0, 0.0, 1.0);
uniform vec3 tileBorderColor = vec3(0.5, 0.5, 0.5);
uniform bool drawGrid = true;

in vec2 texCoords;

void main() {
    vec2 uv = texCoords.xy * chunkSize;
    vec3 color = vec3(1.0, 1.0, 1.0);
    if (drawGrid) {
        float aa = length(fwidth(uv));
        vec2 halfChunkSize = chunkSize / 2.0;
        vec2 halfTileSize = tileSize / 2.0;
        vec2 a = abs(mod(uv - halfChunkSize, chunkSize) - halfChunkSize);
        vec2 b = abs(mod(uv - halfTileSize, tileSize) - halfTileSize);
        color = mix(color, tileBorderColor, smoothstep(aa, 0.0, min(b.x, b.y)));
        color = mix(color, chunkBorderColor, smoothstep(aa, 0.0, min(a.x, a.y)));
    }
    gl_FragColor.rgb = color;
    gl_FragColor.a = 1.0;
}
You need to split your multiplication in the vertex shader into two parts:
// have a variable to be interpolated per fragment
out vec2 vertex_coordinate;
...
{
    // this stores the coordinates of the vertex
    // before it is projected (i.e. its "world" coordinates)
    vec4 world_position = gl_ModelViewMatrix * gl_Vertex;
    vertex_coordinate = world_position.xy;
    // get your projected vertex position as before
    gl_Position = gl_ProjectionMatrix * world_position;
    ...
}
Then in the fragment shader you change the color based on the world vertex coordinate and the camera position:
in vec2 vertex_coordinate;
// this value has to be updated every time your camera changes position
uniform vec2 camera_world_position = vec2(64.0, 64.0);
...
{
    ...
    // calculate the distance from the fragment (in world coordinates) to the camera
    float fade_factor = length(camera_world_position - vertex_coordinate);
    // make it 1 near the camera and 0 when it's more than 100 units away
    fade_factor = clamp(1.0 - fade_factor / 100.0, 0.0, 1.0);
    // update your final color with this factor
    gl_FragColor.rgb = color * fade_factor;
    ...
}
The second way to do it is to use the projected coordinate's w; I personally prefer to calculate the distance in units of space. I did not test this code and it might have some trivial syntax errors, but if you understand the idea, you can apply it in any other way.
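A sketch of that second option (assumed names; an illustration of the idea, not tested code from the answer): pass the clip-space w out of the vertex shader and fade with it, since it grows with eye-space depth under a perspective projection.

// vertex shader
#version 130
out float view_depth;
void main() {
    vec4 eye_position = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ProjectionMatrix * eye_position;
    view_depth = gl_Position.w;  // grows with distance from the camera
}

// fragment shader
#version 130
in float view_depth;
void main() {
    // same mapping as above: 1 near the camera, 0 at 100 units or more
    float fade_factor = clamp(1.0 - view_depth / 100.0, 0.0, 1.0);
    gl_FragColor = vec4(vec3(fade_factor), 1.0);
}

Note that with a purely orthographic projection w stays constant, which is one reason to prefer the distance-based version above.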

Resizing window cause my 2D Lighting to stretch

I am trying to implement simple artificial 2D lighting. I am not using an algorithm like Phong's. However, I am having some difficulty ensuring that my lighting does not stretch/squeeze whenever the window resizes. Any tips and suggestions will be appreciated. I have tried converting my radius into a vec2 so that I can scale it according to the aspect ratio, but it doesn't work properly. Also, I am aware that my code is not the most efficient; any feedback is appreciated as well, as I am still learning! :D
I have an orthographic projection matrix transforming the light position so that it appears at the correct spot in the viewport; this fixed the position but not the radius (as I am calculating per fragment). How would I go about transforming the radius based on the aspect ratio?
void LightSystem::Update(const OrthographicCamera& camera)
{
    std::vector<LightComponent> lights;
    for (auto& entity : m_Entities)
    {
        auto& light = g_ECSManager.GetComponent<LightComponent>(entity);
        auto& trans = g_ECSManager.GetComponent<TransformComponent>(entity);
        if (light.lightEnabled)
        {
            light.pos = trans.Position;
            glm::mat4 viewProjMat = camera.GetViewProjectionMatrix();
            light.pos = viewProjMat * glm::vec4(light.pos, 1.f);
            // Need to store all the light attributes in an array
            lights.emplace_back(light);
        }
        // Create a function in Render2D.cpp, pass all the arrays as a uniform variable to the shader, call this function here
        glm::vec2 res{ camera.GetWidth(), camera.GetHeight() };
        Renderer2D::DrawLight(lights, camera, res);
    }
}
Here is my shader:
#type fragment
#version 330 core

layout (location = 0) out vec4 color;

#define MAX_LIGHTS 10

uniform struct Light
{
    vec4 colour;
    vec3 position;
    float radius;
    float intensity;
} allLights[MAX_LIGHTS];

in vec4 v_Color;
in vec2 v_TexCoord;
in float v_TexIndex;
in float v_TilingFactor;
in vec4 fragmentPosition;

uniform sampler2D u_Textures[32];
uniform vec4 u_ambientColour;
uniform int numLights;
uniform vec2 resolution;

vec4 calculateLight(Light light)
{
    float lightDistance = distance(fragmentPosition.xy, light.position.xy);
    //float ar = resolution.x / resolution.y;
    if (lightDistance >= light.radius)
    {
        return vec4(0, 0, 0, 1); // outside of radius, make it black
    }
    return light.intensity * (1 - lightDistance / light.radius) * light.colour;
}

void main()
{
    vec4 texColor = v_Color;
    vec4 netLightColour = vec4(0, 0, 0, 1);
    if (numLights == 0)
        color = texColor;
    else
    {
        for (int i = 0; i < numLights; ++i) // loop through lights
            netLightColour += calculateLight(allLights[i]) + u_ambientColour;
        color = texColor * netLightColour;
    }
}
You must use an orthographic projection matrix in the vertex shader. Modify the clip space position through the projection matrix.
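A sketch of that first suggestion (assumed names, not code from the thread): fold the aspect ratio into the orthographic scale so one world unit covers the same number of pixels in x and y:

#version 330 core
layout (location = 0) in vec2 position;  // view-space position (assumed layout)
uniform vec2 resolution;                 // window size in pixels (assumed uniform)
void main()
{
    float aspect = resolution.x / resolution.y;
    // orthographic scale: dividing x by the aspect ratio keeps circles round
    gl_Position = vec4(position.x / aspect, position.y, 0.0, 1.0);
}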
Alternatively, consider the aspect ratio when calculating the distance to the light:
float aspectRatio = resolution.x / resolution.y;
vec2 pos = fragmentPosition.xy * vec2(aspectRatio, 1.0);
float lightDistance = distance(pos, light.position.xy);
I'm going to compile all the answers to my question here, as I had done a bad job asking and everything turned out to be a mess.
As the other answers suggest, I first had to use an orthographic projection matrix to ensure that the light source position was displayed at the correct spot in the viewport.
Next, because of the way I did my lighting, the projection matrix alone would not fix the stretch effect, as my light wasn't an actual circle object made with actual vertices. I had to turn radius into a vec2, representing the radius vectors along the x and y axes. This let me modify the vectors based on the aspect ratio:
if (aspectRatio > 1.0)
    light.radius.x /= aspectRatio;  // wide window: shrink the x radius
else
    light.radius.y *= aspectRatio;  // tall window: shrink the y radius
I had posted another question here about modifying my lighting algorithm to support an ellipse shape. That allowed me to perform the scaling needed to counter the stretching along the x/y axes whenever the aspect ratio changed. Thank you all for the answers.
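A sketch of what the vec2-radius (ellipse) falloff can look like, adapted from the calculateLight above (an assumed final form, not the asker's posted code):

vec4 calculateLight(Light light)  // assumes light.radius is now a vec2
{
    // normalize the offset by the per-axis radii; length(d) == 1.0 on the ellipse edge
    vec2 d = (fragmentPosition.xy - light.position.xy) / light.radius;
    float lightDistance = length(d);
    if (lightDistance >= 1.0)
        return vec4(0, 0, 0, 1);  // outside the ellipse, make it black
    return light.intensity * (1.0 - lightDistance) * light.colour;
}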

GBUFFER Decal Projection and scaling

I have been working on projecting decals onto anything that the decal's bounding box encapsulates. After reading and trying numerous code snippets (usually in HLSL), I have a somewhat working method in GLSL for projecting the decals.
Let me start by trying to explain what I'm doing and how it works (so far).
The code below is now fixed and works!
This is all done in perspective view mode.
I send two uniforms to the fragment shader, "tr" and "bl". These are the two corners of the bounding box. I can and will replace these with hard-coded sizes, because they are the size of the decal's original bounding box: tr = vec3(.5, .5, .5) and bl = vec3(-.5, -.5, -.5). I'd prefer to find a way to do the position tests in the decal's transformed state (more about this at the end).
Adding this for clarity: the vertex emitted from the vertex program is the bounding box multiplied by the decal's matrix and then by the model-view-projection matrix. I use this for the next step:
With that vertex, I get the depth value from the depth texture and, with it, calculate the position in world space using the inverse of the projection matrix.
Next, I translate this position using the inverse of the decal's matrix (the matrix that scales, rotates, and translates the 1x1x1 cube to its world location). I thought that using the inverse of the decal's transform matrix would handle the correct size and rotation of the screen point, but it does not.
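In equation form (a restatement of what the fragment program below does, with $M$ the model-view-projection matrix, $(u, v)$ the screen-space UV, and $d$ the sampled depth):

$$\hat{p} = (2u - 1,\; 2v - 1,\; 2d - 1,\; 1)^{T}, \qquad P_{\mathrm{world}} = \frac{M^{-1}\hat{p}}{\left(M^{-1}\hat{p}\right)_{w}}$$

$P_{\mathrm{world}}$ is then taken into the decal's local space by the inverse decal matrix and clipped against the unit box.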
Vertex Program:
// Decal color pass.
#version 330 compatibility

out mat4 matPrjInv;
out vec4 positionSS;
out vec4 positionWS;
out mat4 invd_mat;

uniform mat4 decal_matrix;

void main(void)
{
    gl_Position = decal_matrix * gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Position;
    positionWS = decal_matrix * gl_Vertex;
    positionSS = gl_Position;
    matPrjInv = inverse(gl_ModelViewProjectionMatrix);
    invd_mat = inverse(decal_matrix);
}
Fragment Program:
#version 330 compatibility

layout (location = 0) out vec4 gPosition;
layout (location = 1) out vec4 gNormal;
layout (location = 2) out vec4 gColor;

uniform sampler2D depthMap;
uniform sampler2D colorMap;
uniform sampler2D normalMap;

uniform mat4 matrix;
uniform vec3 tr;
uniform vec3 bl;

in vec2 TexCoords;
in vec4 positionSS;  // screen space
in vec4 positionWS;  // world space
in mat4 invd_mat;    // inverse decal matrix
in mat4 matPrjInv;   // inverse projection matrix

void clip(vec3 v)
{
    if (v.x > tr.x || v.x < bl.x) { discard; }
    if (v.y > tr.y || v.y < bl.y) { discard; }
    if (v.z > tr.z || v.z < bl.z) { discard; }
}

vec2 postProjToScreen(vec4 position)
{
    vec2 screenPos = position.xy / position.w;
    return 0.5 * (vec2(screenPos.x, screenPos.y) + 1.0);
}

void main()
{
    // Calculate UVs
    vec2 UV = postProjToScreen(positionSS);

    // Sample the depth from the depth sampler
    float Depth = texture2D(depthMap, UV).x * 2.0 - 1.0;

    // Recreate the world position out of the screen coordinates and the depth sample
    vec4 ScreenPosition;
    ScreenPosition.xy = UV * 2.0 - 1.0;
    ScreenPosition.z = Depth;
    ScreenPosition.w = 1.0;

    // Transform the position from screen space to world space
    vec4 WorldPosition = matPrjInv * ScreenPosition;
    WorldPosition.xyz /= WorldPosition.w;
    WorldPosition.w = 1.0;

    // Transform to the decal's original position and size (1 x 1 x 1 cube)
    WorldPosition = invd_mat * WorldPosition;
    clip(WorldPosition.xyz);

    // Get UVs for the textures
    WorldPosition.xy += 0.5;
    WorldPosition.y *= -1.0;
    vec4 bump = texture2D(normalMap, WorldPosition.xy);
    gColor = texture2D(colorMap, WorldPosition.xy);

    // Going to have to do decals in 2 passes..
    // Blend doesn't work with GBUFFER.
    // Lots more to sort out.
    gNormal.xyz = bump.xyz;
    gPosition = positionWS;
}
And here are a couple of images showing what's wrong.
What I get for the projection:
And this is the actual size of the decals. Much larger than what my shader is creating!
I have tried creating a new matrix using the decal and projection matrices to construct a sort of "lookAt" matrix and translate the screen position into the decal's post-transform state. I have not been able to get this working. Somewhere I am missing something, but where? I thought that translating using the inverse of the decal's matrix would deal with the transform and put the screen position in the proper transformed state. Ideas?
Updated the code for the texture UVs. You may have to fiddle with the y and x depending on whether your texture is flipped on x or y. I also fixed the clip function so it works correctly. As it is, this code now works. I will update this more if needed, so others don't have to go through the pain I did to get it working.
Some issues remain, such as decals lying over each other: the one on top overwrites the one below. I think I will have to accumulate the colors and normals into the default FBO and then blend (add) them into the GBUFFER textures before or during the lighting pass. Adding more screen-size textures is not a great idea, so I will need to be creative and recycle any textures I can.
I found the solution to decals overlaying each other.
Turn OFF depth masking while drawing the decals and turn it back on afterwards:
glDepthMask(GL_FALSE);
OK.. I'm so excited. I found the issue.
I updated the code above again.
I had a mistake in what I was sending the shader for tr and bl:
Here is the change to clip:
void clip(vec3 v)
{
    if (v.x > tr.x || v.x < bl.x) { discard; }
    if (v.y > tr.y || v.y < bl.y) { discard; }
    if (v.z > tr.z || v.z < bl.z) { discard; }
}

Resizing point sprites based on distance from the camera

I'm writing a clone of Wolfenstein 3D using only core OpenGL 3.3 for university, and I've run into a bit of a problem with the sprites, namely getting them to scale correctly based on distance.
From what I can tell, previous versions of OpenGL would in fact do this for you, but that functionality has been removed, and all my attempts to reimplement it have resulted in complete failure.
My current implementation is passable at a distance, not too shabby at mid range, and bizarre at close range.
The main problem (I think) is that I have no understanding of the maths I'm using.
The target size of the sprite is slightly bigger than the viewport, so it should 'go out of the picture' as you get right up to it, but it doesn't: it gets smaller, and that's confusing me a lot.
I recorded a small video of this, in case words are not enough. (Mine is on the right)
Can anyone direct me to where I'm going wrong, and explain why?
Code:
C++
// setup
glPointParameteri(GL_POINT_SPRITE_COORD_ORIGIN, GL_LOWER_LEFT);
glEnable(GL_PROGRAM_POINT_SIZE);
// Drawing
glUseProgram(StaticsProg);
glBindVertexArray(statixVAO);
glUniformMatrix4fv(uStatixMVP, 1, GL_FALSE, glm::value_ptr(MVP));
glDrawArrays(GL_POINTS, 0, iNumSprites);
Vertex Shader
#version 330 core

layout(location = 0) in vec2 pos;
layout(location = 1) in int spriteNum_;

flat out int spriteNum;

uniform mat4 MVP;

const float constAtten = 0.9;
const float linearAtten = 0.6;
const float quadAtten = 0.001;

void main() {
    spriteNum = spriteNum_;
    // Note: I have fiddled the MVP so that z is height rather than depth,
    // since this is how I learned my vectors.
    gl_Position = MVP * vec4(pos.x + 1, pos.y, 0.5, 1);
    float dist = distance(gl_Position, vec4(0, 0, 0, 1));
    float attn = constAtten / ((1 + linearAtten * dist) * (1 + quadAtten * dist * dist));
    gl_PointSize = 768.0 * attn;
}
Fragment Shader
#version 330 core

flat in int spriteNum;
out vec4 color;

uniform sampler2DArray Sprites;

void main() {
    color = texture(Sprites, vec3(gl_PointCoord.s, gl_PointCoord.t, spriteNum));
    if (color.a < 0.2)
        discard;
}
First of all, I don't really understand why you use pos.x + 1.
Next, like Nathan said, you shouldn't use the clip-space point, but the eye-space point. This means you only use the modelview-transformed point (without projection) to compute the distance.
uniform mat4 MV; // modelview matrix
vec3 eyePos = (MV * vec4(pos.x, pos.y, 0.5, 1)).xyz;
Furthermore I don't completely understand your attenuation computation. At the moment a higher constAtten value means less attenuation. Why don't you just use the model that OpenGL's deprecated point parameters used:
float dist = length(eyePos); //since the distance to (0,0,0) is just the length
float attn = inversesqrt(constAtten + linearAtten*dist + quadAtten*dist*dist);
EDIT: In general, though, I don't think this attenuation model is a good approach, because often you just want the sprite to keep its object-space size, and you would have to fiddle quite a bit with the attenuation factors to achieve that.
A better way is to input the sprite's object-space size and compute the screen-space size in pixels (which is what gl_PointSize actually is) from it, using the current view and projection setup:
uniform mat4 MV;           // modelview matrix
uniform mat4 P;            // projection matrix
uniform float spriteWidth; // object-space width of sprite (could also be a per-vertex attribute)
uniform float screenWidth; // screen width in pixels

vec4 eyePos = MV * vec4(pos.x, pos.y, 0.5, 1);
vec4 projCorner = P * vec4(0.5 * spriteWidth, 0.5 * spriteWidth, eyePos.z, eyePos.w);
gl_PointSize = screenWidth * projCorner.x / projCorner.w;
gl_Position = P * eyePos;
This way the sprite always gets the size it would have when rendered as a textured quad with a width of spriteWidth.
EDIT: Of course, you should also keep in mind the limitations of point sprites. A point sprite is clipped based on its center position: when the center moves out of the screen, the whole sprite disappears. With large sprites (as in your case, I think) this might really be a problem.
Therefore I would rather suggest you use simple textured quads. This way you circumvent the whole attenuation problem, as the quads are transformed just like every other 3D object. You only need to implement the rotation toward the viewer, which can be done either on the CPU or in the vertex shader.
Based on Christian Rau's answer (last edit), I implemented a geometry shader that builds a billboard in view space, which seems to solve all my problems:
Here are the shaders: (Note that I have fixed the alignment issue that required the original shader to add 1 to x)
Vertex Shader
#version 330 core

layout (location = 0) in vec4 gridPos;
layout (location = 1) in int spriteNum_in;

flat out int spriteNum;

// simple pass-thru to the geometry generator
void main() {
    gl_Position = gridPos;
    spriteNum = spriteNum_in;
}
Geometry Shader
#version 330 core

layout (points) in;
layout (triangle_strip, max_vertices = 4) out;

flat in int spriteNum[];
smooth out vec3 stp;

uniform mat4 Projection;
uniform mat4 View;

void main() {
    // Put us into view space.
    vec4 pos = View * gl_in[0].gl_Position;
    int snum = spriteNum[0];

    // Bottom left corner
    gl_Position = pos;
    gl_Position.x += 0.5;
    gl_Position = Projection * gl_Position;
    stp = vec3(0, 0, snum);
    EmitVertex();

    // Top left corner
    gl_Position = pos;
    gl_Position.x += 0.5;
    gl_Position.y += 1;
    gl_Position = Projection * gl_Position;
    stp = vec3(0, 1, snum);
    EmitVertex();

    // Bottom right corner
    gl_Position = pos;
    gl_Position.x -= 0.5;
    gl_Position = Projection * gl_Position;
    stp = vec3(1, 0, snum);
    EmitVertex();

    // Top right corner
    gl_Position = pos;
    gl_Position.x -= 0.5;
    gl_Position.y += 1;
    gl_Position = Projection * gl_Position;
    stp = vec3(1, 1, snum);
    EmitVertex();

    EndPrimitive();
}
Fragment Shader
#version 330 core

smooth in vec3 stp;
out vec4 colour;

uniform sampler2DArray Sprites;

void main() {
    colour = texture(Sprites, stp);
    if (colour.a < 0.2)
        discard;
}
I don't think you want to base the distance calculation in your vertex shader on the projected position. Instead just calculate the position relative to your view, i.e. use the model-view matrix instead of the model-view-projection one.
Think about it this way -- in projected space, as an object gets closer to you, its distance in the horizontal and vertical directions becomes exaggerated. You can see this in the way the lamps move away from the center toward the top of the screen as you approach them. That exaggeration of those dimensions is going to make the distance get larger when you get really close, which is why you're seeing the object shrink.
At least in OpenGL ES 2.0, there is a maximum size limit on gl_PointSize, imposed by the OpenGL implementation. You can query the supported range with ALIASED_POINT_SIZE_RANGE.