I'm working on a game engine with LWJGL3 in which all objects are point sprites. It uses an orthographic camera, and I wrote a vertex shader that computes each sprite's on-screen position from its 3D position (which also produces the fish-eye lens effect). I calculate the distance to the camera and use that value as the depth value for each point sprite. The data for these sprites is stored in 16x16 chunks in VBOs.
The issue that I'm having is that the sprites are not always rendered front to back. When looking away from the origin, the depth testing works as intended, but when looking in the direction of the origin, sprites are rendered from back to front which causes a big performance drop.
This might look like depth testing simply isn't enabled, but that's not the case: when I disable depth testing, sprites in the back are drawn on top of the ones in front.
Here's the full vertex shader:
#version 330 core
#define M_PI 3.1415926535897932384626433832795
uniform mat4 camRotMat; // Virtual camera rotation
uniform vec3 camPos; // Virtual camera position
uniform vec2 fov; // Virtual camera field of view
uniform vec2 screen; // Screen size (pixels)
in vec3 pos;
out vec4 vColor;
void main() {
// Compute distance and rotated delta position to camera
float dist = distance(pos, camPos);
vec3 dXYZ = (camRotMat * vec4(camPos - pos, 0)).xyz;
// Compute angles of this 3D position relative center of camera FOV
// Distance is never negative, so negate it manually when behind camera
vec2 rla = vec2(atan(dXYZ.x, length(dXYZ.yz)),
atan(dXYZ.z, length(dXYZ.xy) * sign(-dXYZ.y)));
// Find sprite size and coordinates of the center on the screen
float size = screen.y / dist * 2; // Sprites become smaller based on their distance
vec2 px = -rla / fov * 2; // Find pixel position on screen of this object
// Output
vColor = vec4((1 - (dist * dist) / (64 * 64)) + 0.5); // Brightness
gl_Position = vec4(px, dist / 1000, 1.0); // Position on the screen
gl_PointSize = size; // Sprite size
}
In the first image, you can see how the game normally looks. In the second one, I've disabled alpha-testing, so you can see sprites are rendered front to back. But in the third image, when looking in the direction of the origin, sprites are being drawn back to front.
Edit:
I am almost 100% certain the depth value is set correctly. The size of the sprites is directly linked to the distance, and they resize correctly when moving around. I also set the color to be brighter when the distance is lower, which works as expected.
I also set the following flags (and of course clear the frame and depth buffers):
GL11.glEnable(GL11.GL_DEPTH_TEST);
GL11.glDepthFunc(GL11.GL_LEQUAL);
Edit2:
Here's a gif of what it looks like when you rotate around: https://i.imgur.com/v4iWe9p.gifv
Edit3:
I think I misunderstood how depth testing works. Here is a video of how the sprites are drawn over time: https://www.youtube.com/watch?v=KgORzkM9U2w
That explains the initial problem, so now I just need to find a way to render them in a different order depending on the camera rotation.
I am using this code to generate sphere vertices and texture coordinates, but as you can see in the image, when I rotate it I can see a dark band.
for (int i = 0; i <= stacks; ++i)
{
float s = (float)i / (float) stacks;
float theta = s * 2 * glm::pi<float>(); // longitude: 0 .. 2*pi across the stacks
for (int j = 0; j <= slices; ++j)
{
float sl = (float)j / (float) slices;
float phi = sl * (glm::pi<float>()); // colatitude: 0 .. pi across the slices
const float x = cos(theta) * sin(phi);
const float y = sin(theta) * sin(phi);
const float z = cos(phi);
sphere_vertices.push_back(radius * glm::vec3(x, y, z));
sphere_texcoords.push_back((glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0)));
}
}
// get the indices
for (int i = 0; i < stacks * slices + slices; ++i)
{
sphere_indices.push_back(i);
sphere_indices.push_back(i + slices + 1);
sphere_indices.push_back(i + slices);
sphere_indices.push_back(i + slices + 1);
sphere_indices.push_back(i);
sphere_indices.push_back(i + 1);
}
I can't figure out a way to make it right, whatever texture coordinates I use.
Hmm... If I use another image, then the mapping is different (and worse!)
vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aTexCoord;
out vec4 vertexColor;
out vec2 TexCoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
vertexColor = vec4(0.5, 0.2, 0.5, 1.0);
TexCoord = vec2(aTexCoord.x, aTexCoord.y);
}
fragment shader:
#version 330 core
out vec4 FragColor;
in vec4 vertexColor;
in vec2 TexCoord;
uniform sampler2D sphere_texture;
void main()
{
FragColor = texture(sphere_texture, TexCoord);
}
I am not using any lighting conditions.
If I use FragColor = vec4(TexCoord.x, TexCoord.y, 0.0f, 1.0f); in the fragment shader (for debugging purposes), I get a nice sphere.
I am using this as texture:
That image of the tennis ball that you linked reveals the problem. I'm glad you ultimately provided it.
Your image is a four-channel PNG with transparency (an alpha channel). There are transparent pixels all around the outside of the yellow part of the ball that have (R,G,B,A) = (0, 0, 0, 0), so if you're ignoring the A channel then (R, G, B) will be (0, 0, 0) = black.
Here are just the Red, Green, and Blue (RGB) channels:
And here is just the Alpha (A) channel.
The important thing to notice is that the circle of the ball does not fill the square: there is a significant margin of 53 black pixels from the extent of the ball to the edge of the texture. We can calculate the radius of the ball from this. Half the width of the image is 1000 pixels, of which 53 pixels are not used, so the ball's radius is 1000 - 53 = 947 pixels, or about 94.7% of the distance from the center to the edge of the texture. The remaining 5.3% of that distance is black.
Side note: I also notice that your ball doesn't quite reach 100% opacity. The yellow part of the ball has an alpha channel value of 254 (of 255), meaning 99.6% opaque. The white lines and the shiny hot spot do actually reach 100% opacity, giving it sort of a Death Star look. ;)
To fix your problem, there's an intuitive approach (which won't quite get you all the way there) and then a couple of approaches that will. Here are a few things you can do:
Intuitive Solution:
This won't quite get you 100% there.
1) Resize the ball to fill the texture. Use image editing software to enlarge the ball to fill the texture, or to trim off the black pixels. For one, this makes more efficient use of pixels, but more importantly it ensures that useful pixels are being sampled at the boundary. You'll probably want to expand the image to be slightly larger than 100%. I'll explain why below.
2) Remap your texture coordinates to only extend to 94.7% of the radius of the ball. (Similar to approach 1, but doesn't require image editing). This just uses coordinates that actually correspond to the image you provided. Your x and y coordinates need to be scaled about the center of the image and reduced to about 94.7%.
x2 = 0.5 + (x - 0.5) * 0.947;
y2 = 0.5 + (y - 0.5) * 0.947;
Suggested Solution:
This will ensure no more black.
3) Fill the "black" portion of your ball texture with a less objectionable colour - probably the colour that is at the circumference of the tennis ball. This ensures that any texels that are sampled at exactly the edge of the ball won't be linearly combined with black to produce an unsightly dark-but-not-quite-black band, which is almost the problem you have right now anyway. You can do this in two ways. A) Image editing software. Remove the transparency from your image and matte it against a dark yellow colour. B) Use the shader to detect pixels that are outside the image and replace them with a border colour (this is clever, but probably more trouble than it's worth.)
Different Texture Coordinates
The last thing you can do is avoid this degenerate texture mapping coordinate problem altogether. At the equator, you're not really sure which pixels to sample: the black (transparent) pixels or the coloured pixels of the ball. The discrete nature of square pixels is fighting against the polar nature of your texture map. You'll never find the exact colour you need near the edge to produce a continuous, seamless map. Instead, you can use a different coordinate system. I hope you're not attached to how that ball looks, because let me introduce you to the equirectangular projection. It's the same projection that you can naively use to map the globe of the Earth to a typical rectangular map of the world you're likely familiar with, where the north and south poles get all the distortion but the equatorial regions look pretty good.
Here's your image mapped to equirectangular coordinates:
Notice that black bar at the bottom...we're onto something! That black bar is actually exactly what appears around the equator of your ball with your current texture mapping coordinate system. But with this coordinate system, you can see easily that if we just remapped the ball to fill the square we'd completely eliminate any transparent pixels at all.
It may be inconvenient to work in this coordinate system, but you can transform your image in Photoshop using Filter > Distort > Polar Coordinates... > Polar to Rectangular.
Sigismondo's answer already suggests how to adjust your texture mapping coordinates to do this.
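Equivalently, if you'd rather not bake the new coordinates into the vertex data, here is a per-fragment sketch of the same equirectangular mapping. It assumes the model-space position is passed through to the fragment shader as modelPos (which is not in your current code) and that the sphere is centred at the origin:
vec3 dir = normalize(modelPos); // unit direction on the sphere
float u = atan(dir.y, dir.x) / (2.0 * 3.14159265) + 0.5; // longitude (theta)
float v = acos(clamp(dir.z, -1.0, 1.0)) / 3.14159265; // colatitude (phi)
FragColor = texture(sphere_texture, vec2(u, v));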
And finally, here's a texture that is both enlarged to fill the texture space, and remapped to equirectangular coordinates. No black bars, minimal distortion. But you'll have to use Sigismondo's texture mapping coordinates. Again, this may not be for you, especially if you're attached to the idea of the direct projection for your texture (i.e.: if you don't want to manipulate your tennis ball image and you want to use that projection.) But if you're willing to remap your data, you can rest easy that all the black pixels will be gone!
Good luck! Feel free to ask for clarifications.
I cannot test it, since the code is incomplete, but from a rough look I have spotted this problem:
sphere_texcoords.push_back((glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0)));
The texture coordinates should not be evaluated from x and y, which are:
const float x = cos(theta) * sin(phi);
const float y = sin(theta) * sin(phi);
but from the angles theta/phi, or equivalently from stacks/slices. This could work better (untested):
sphere_texcoords.push_back(glm::vec2(s,sl));
with s and sl already defined as:
float s = (float)i / (float) stacks;
float sl = (float)j / (float) slices;
Furthermore, in your code you are treating the first and the last "slices" of the sphere the same as the rest... Shouldn't they be handled differently? This seems quite odd to me, but I don't know whether your implementation is just a simpler one that works fine.
Compare with this explanation, for example: http://www.songho.ca/opengl/gl_sphere.html
I am attempting to write to and read from a 3D texture, but it seems my mapping is wrong. I have used RenderDoc to check the textures and they look OK.
A random layer of this volumetric texture looks like:
So, just some blue to denote absence and some green values to denote presence.
The coordinates used when I write to each layer are calculated in the vertex shader as:
pos.x = (2.f*pos.x-width+2)/(width-2);
pos.y = (2.f*pos.y-depth+2)/(depth-2);
pos.z -= level;
pos.z *= 1.f/voxel_size;
gl_Position = pos;
Since the texture itself looks OK, it seems these coordinates achieve my goal.
It's important to note that right now voxel_size is 1 and the scale of the texture is supposed to be 1 to 1 with the scene dimensions. In essence, each pixel in the texture represents a 1x1x1 voxel in the scene.
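Given that 1:1 mapping, a sanity check I can do is to fetch by integer voxel index instead of normalized coordinates (just a sketch; it assumes the voxel grid is axis-aligned and starts at the scene origin):
ivec3 voxelIndex = ivec3(floor(vertexPos)); // world units == voxel units when voxel_size is 1
outColor = texelFetch(voxel_map, voxelIndex, 0); // exact texel, no filtering or normalization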
Next I attempt to fetch the texture values as follows:
vec3 pos = vertexPos;
pos.x = (2.f*pos.x-width+2)/(width-2);
pos.y = (2.f*pos.y-depth+2)/(depth-2);
pos.z *= 1.f/(4*16);
outColor = texture(voxel_map, pos);
Where vertexPos is the global vertex position in the scene. The z coordinate may be completely wrong (I am not sure whether I am supposed to normalize the depth component or not), but that is not the only issue. If you look at the final result:
There is a horizontal scale problem. Since each texel represents a voxel, the color of a cube should always be a fixed color. But as you can see, I am getting multiple colors for a single cube on the top faces. So my horizontal scale is wrong.
What am I doing wrong when fetching the texels from the texture?
I have the following fragment shader:
#version 330
layout(location=0) out vec4 frag_colour;
in vec2 texelCoords;
uniform sampler2D uTexture; // the color
uniform sampler2D uTextureHeightmap; // the heightmap
uniform float uSunDistance = -10000000.0; // really far away vertically
uniform float uSunInclination; // height from the heightmap plane
uniform float uSunAzimuth; // clockwise rotation point
uniform float uQuality; // used to determine number of steps and steps size
void main()
{
vec4 c = texture(uTexture,texelCoords);
vec2 textureD = textureSize(uTexture,0);
float d = max(textureD.x,textureD.y); // use the largest dimension to determine stepsize etc
// position the sun in the centre of the screen and convert from spherical to cartesian coordinates
vec3 sunPosition = vec3(textureD.x/2,textureD.y/2,0) + vec3( uSunDistance*sin(uSunInclination)*cos(uSunAzimuth),
uSunDistance*sin(uSunInclination)*sin(uSunAzimuth),
uSunDistance*cos(uSunInclination) );
float height = texture(uTextureHeightmap, texelCoords).r; // starting height
vec3 direction = normalize(vec3(texelCoords,height) - sunPosition); // sunlight direction
float sampleDistance = 0;
float samples = d*uQuality;
float stepSize = 1.0 / ((samples/d) * d);
for(int i = 0; i < samples; i++)
{
sampleDistance += stepSize; // increase the sample distance
vec3 newPoint = vec3(texelCoords,height) + direction * sampleDistance; // get the coord for the next sample point
float newHeight = texture(uTextureHeightmap, newPoint.xy).r; // get the height of that sample point
// put it in shadow if we hit something that is higher than our starting point AND is higher than the ray we're casting
if(newHeight > height && newHeight > newPoint.z)
{
c *= 0.5;
break;
}
}
frag_colour = c;
}
The purpose is for it to cast shadows based on a heightmap. Pretty nifty, and the results look good.
However, there's a problem where the shadows appear longer when they are horizontal compared to vertical. If I make the window size different, with a window that is taller than wide, I get the opposite effect. I.e., the shadows are casting longer in the longer dimension.
This tells me that it's to do with the way I'm stepping in the above shader, but I can't tell the problem.
To illustrate, here is the result with a uSunAzimuth that results in a horizontally cast shadow:
And here is the exact same code with a uSunAzimuth for a vertical shadow:
It's not very pronounced in these low-resolution images, but at larger resolutions the effect gets more exaggerated. Essentially, if you measure how the shadow casts across all 360 degrees of azimuth, it sweeps out an ellipse instead of a circle.
The shadow fragment shader operates on a "snapshot" of the viewport. When your scene is rendered and this "snapshot" is generated, the vertex positions are transformed by the projection matrix. The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport and takes into account the aspect ratio of the viewport.
(see Both depth buffer and triangle face orientation are reversed in OpenGL,
and Transform the modelMatrix).
This means that the height map (uTextureHeightmap) represents a rectangular field of view that depends on the aspect ratio.
But the texture coordinates, which you use to access the height map, describe a quad in the range (0, 0) to (1, 1).
This mismatch must be balanced by scaling the direction with the aspect ratio:
vec3 direction = ....;
float aspectRatio = textureD.x / textureD.y;
direction.xy *= vec2( 1.0/aspectRatio, 1.0 );
I just needed to adjust the direction slightly.
float aspectCorrection = textureD.x / textureD.y;
...
vec3 direction = normalize(vec3(texelCoords,height) - sunPosition);
direction.y *= aspectCorrection;
I'm attempting to implement shadow mapping into my deferred rendering pipeline, but I'm running into a few issues actually generating the shadow map, then shadowing the pixels – pixels that I believe should be shadowed simply aren't.
I have a single directional light, which is the 'sun' in my engine. I have deferred rendering set up for lighting, which works properly thus far. I render the scene again into a depth-only FBO for the shadow map, using the following code to generate the view matrix:
glm::vec3 position = r->getCamera()->getCameraPosition(); // position of level camera
glm::vec3 lightDir = this->sun->getDirection(); // sun direction vector
glm::mat4 depthProjectionMatrix = glm::ortho<float>(-10,10,-10,10,-10,20); // ortho projection
glm::mat4 depthViewMatrix = glm::lookAt(position + (lightDir * 20.f / 2.f), -lightDir, glm::vec3(0,1,0));
glm::mat4 lightSpaceMatrix = depthProjectionMatrix * depthViewMatrix;
Then, in my lighting shader, I use the following code to determine whether a pixel is in shadow or not:
// lightSpaceMatrix is the same as above, FragWorldPos is the world position of the texel
vec4 FragPosLightSpace = lightSpaceMatrix * vec4(FragWorldPos, 1.0f);
// multiply non-ambient light values by ShadowCalculation(FragPosLightSpace)
// ... do more stuff ...
float ShadowCalculation(vec4 fragPosLightSpace) {
// perform perspective divide
vec3 projCoords = fragPosLightSpace.xyz / fragPosLightSpace.w;
// vec3 projCoords = fragPosLightSpace.xyz;
// Transform to [0,1] range
projCoords = projCoords * 0.5 + 0.5;
// Get closest depth value from light's perspective (using [0,1] range fragPosLight as coords)
float closestDepth = texture(gSunShadowMap, projCoords.xy).r;
// Get depth of current fragment from light's perspective
float currentDepth = projCoords.z;
// Check whether current frag pos is in shadow
float bias = 0.005;
float shadow = (currentDepth - bias) > closestDepth ? 1.0 : 0.0;
// Keep fragments beyond the light's far plane (Z larger than 1) out of shadow
if(projCoords.z > 1.0) {
shadow = 0.0;
}
return shadow;
}
However, that doesn't really get me what I'm after. Here's a screenshot of the output after shadowing, as well as the shadow map half-assedly converted to an image in Photoshop:
Render output
Shadow Map
Since the directional light is the only light in my shader, it seems that the shadow map is being rendered pretty close to correctly, since the perspective/direction roughly match. However, what I don't understand is why none of the teapots actually end up casting a shadow on the others.
I'd appreciate any pointers on what I might be doing wrong. I think that my issue lies either in the calculation of that light space matrix (I'm not sure how to properly calculate that, given a moving camera, such that the stuff that's in view will be updated), or in the way I determine whether the texel the deferred renderer is shading is in shadow or not. (FWIW, I determine the world position from the depth buffer, but I've proven that this calculation works correctly.)
Thanks for any help.
Debugging shadow problems can be tricky. Let's start with a few points:
If you look at your render closely, you will actually see a shadow on one of the pots in the top left corner.
Try rotating your sun; this usually helps to see if there are any problems with the light transform matrix. From your output, it seems the sun is very horizontal and might not cast shadows in this setup (another angle might show more shadows).
It appears as though you are calculating the matrix correctly, but try shrinking the depth range in glm::ortho(-10,10,-10,10,-10,20) to tightly fit your scene. If the depth range is too large, you will lose precision and the shadows will have artifacts.
To further narrow down where the problem is coming from, try outputting the result of your shadow map lookup from here:
closestDepth = texture(gSunShadowMap, projCoords.xy).r
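For example, a quick debug output could look like this (a sketch; FragColor stands in for whatever your lighting shader's output variable actually is):
FragColor = vec4(vec3(closestDepth), 1.0); // visualize the sampled shadow-map depth as grayscale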
If the shadow map is being projected correctly, then you know you have a problem in your depth comparisons. Hope this helps!
I've read nearly all the ocean animation topics (programming and math too), and finally I decided to render it with Gerstner waves, with reflection, refraction and caustics.
Well, my reflection currently works with a flat plane and only vertical displacements, but with Gerstner waves I displace the x and z coords too, and my reflection texture coordinates go out of range when my camera is below a specific height or when the angle changes.
(the closer the ocean surface is, the more the texture wraps)
So, my shader codes:
Gerstner Wave:
vec3 calcWave(vec2 X,float t,float A,vec2 K,float L)
{
vec3 wave;
float k = 2*pi/L;
float W = sqrt(g*k);
wave.xz = -((K/k)*A*sin(dot(K,X)-W*t));
wave.y = A*cos(dot(K,X)-W*t)/2-A/2;
//I do it this way so the max wave amplitude is always
//lower than the plane of the reflection, so I can't see below it
return wave;
};
Texture Coordinates
VERTEX SHADER:
undisplaced_world_Vertex.xzw=world_Vertex.xzw;
//this is before the wave calculation, so it contains simple grid coordinates without any displacement
undisplaced_world_Vertex.y = waterLevel;
FRAGMENT SHADER:
vec4 screenCoord = mvp_matrix * undisplaced_world_Vertex;
vec2 projCoord = vec2((screenCoord.x / screenCoord.w + 1) / 2, (screenCoord.y / screenCoord.w + 1) / 2);
I've just read the reflection part of the Deep-Water Animation and Rendering article, but I have no idea how to implement it.
My question is how to project the texture coordinates of the reflection texture, so it fits my 2 expectations:
the texture coordinates are always in range, or there is a minimal wrap at the edge of the screen
from every angle, I can't see "under" the texture (or only a very little :P)
Also, the lookup will be displaced by the normals (see the sketch below).
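For reference, the kind of thing I have in mind is roughly this (only a sketch; normal is the per-fragment water normal, and distortionStrength and reflectionTex are hypothetical names, not from the article):
vec2 reflUV = (screenCoord.xy / screenCoord.w) * 0.5 + 0.5; // project the undisplaced vertex
reflUV += normal.xz * distortionStrength; // displace the lookup with the surface normal
reflUV = clamp(reflUV, vec2(0.001), vec2(0.999)); // keep the coordinates in range
vec4 reflection = texture(reflectionTex, reflUV);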
EDIT:
Problems with understanding:
vec3 R = ReflectLeave(viewerDir,normal);
float4 projR = float4(Rh,0)*ReflViewProjTM;
float2 reflUV = (projR.xy / projR.z) * float2(0.5,-0.5)+float2(0.5,0.5);
half4 refl = tex2D(reflTex,reflUV);
:/