How can I make GL_POINTS overlap to look like spheres? - c++

I am attempting to create a voxel-style game, and I want to use GL_POINTS to simulate spherical voxels.
I am aiming to have them look like 3D spheres without having to render an actual sphere with many vertices.
However, when I create a mass of GL_POINTS, they overlap in a way that makes it obvious that they are flat circular sprites.
Here is an example:
My example image of GL_POINTS overlapping, showing flat circular sprites:
I would like to have the circular GL_POINTS overlap in a way that makes them look like spheres being squished together and hiding parts of each other.
For an example of what I would like to achieve, here is an image showing Star Defenders 3D by Eric Gurt, in which he used spherical points as voxels in JavaScript for his levels:
Example image showing points that look like spheres:
As you can see, where the points overlap, they hide parts of each other, creating the illusion that they are 3D spheres instead of circular sprites.
Is there a way to replicate this in OpenGL?
I am using OpenGL 3.3.0.

I have finally implemented a way to make points look like spheres by changing gl_FragDepth.
This is the code from my fragment shader that makes a square GL_POINT look like a sphere (no lighting):
void makeSphere()
{
    // Clamp fragments to a circle shape.
    vec2 mapping = gl_PointCoord * 2.0F - 1.0F;
    float d = dot(mapping, mapping);
    if (d >= 1.0F)
    {
        // Discard fragments outside the unit circle (squared distance >= 1.0).
        discard;
    }
    float z = sqrt(1.0F - d);

    // Build the sphere-surface normal for this fragment and rotate it into world space.
    vec3 normal = vec3(mapping, z);
    normal = mat3(transpose(viewMatrix)) * normal;
    vec3 cameraPos = vec3(worldPos) + rad * normal;

    // Set the depth based on the new cameraPos.
    vec4 clipPos = projectionMatrix * viewMatrix * vec4(cameraPos, 1.0);
    float ndcDepth = clipPos.z / clipPos.w;
    gl_FragDepth = ((gl_DepthRange.diff * ndcDepth) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;

    // Calculate ambient occlusion for the circle.
    if (bool(fAoc))
        ambientOcclusion = sqrt(1.0F - d * 0.5F);
}
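For this to work, the point sprite also has to cover the sphere's screen-space footprint, which means writing gl_PointSize in the vertex shader (and enabling GL_PROGRAM_POINT_SIZE on the C++ side). A minimal sketch of such a vertex shader, reusing the viewMatrix, projectionMatrix, rad and worldPos names from the fragment shader above (aPos and viewportHeight are assumed names, and a perspective projection is assumed):

#version 330 core
layout (location = 0) in vec3 aPos;

uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform float rad;            // sphere radius in world units
uniform float viewportHeight; // framebuffer height in pixels

out vec4 worldPos;

void main()
{
    worldPos = vec4(aPos, 1.0);
    vec4 viewPos = viewMatrix * worldPos;
    gl_Position = projectionMatrix * viewPos;

    // Approximate projected diameter in pixels: the radius scaled by the
    // projection's vertical focal length, divided by the view-space distance.
    gl_PointSize = viewportHeight * projectionMatrix[1][1] * rad / -viewPos.z;
}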

Related

OpenGL Normal Mapping

I'm trying to implement normal mapping, using a simple cube that I created. I followed this tutorial https://learnopengl.com/Advanced-Lighting/Normal-Mapping but I can't really work out how normal mapping should be done when drawing 3D objects, since the tutorial uses a 2D object.
In particular, my cube seems almost correctly lit, but there's something that I think isn't working as it should. To help me debug, I'm using a geometry shader that outputs the normals as green vectors and the tangents as red vectors. Here I post three screenshots of my work.
Directly lit
Side lit
Here I actually tried calculating my normals and tangents in a different way (quite wrong).
In the first image I calculate my cube's normals and tangents one face at a time. This seems to work for that face, but if I rotate my cube I think the lighting on the adjacent face is wrong. As you can see in the second image, it's not totally absent.
In the third image, I tried summing all normals and tangents per vertex, as I think it should be done, but the result seems quite wrong, since there is too little lighting.
In the end, my question is how I should calculate normals and tangents.
Should I do per-face calculations, or sum the vectors per vertex across all adjacent faces, or something else?
EDIT --
I'm passing the normal and tangent to the vertex shader and setting up my TBN matrix. But as you can see in the first image, drawing my cube face by face, the faces adjacent to the one I'm looking at directly (which is well lit) are not correctly lit, and I don't know why. I thought that I wasn't correctly calculating my per-face normal and tangent, and that calculating a normal and tangent that account for the object as a whole might be the right way.
If it's right to calculate the normal and tangent as shown in the second image (green normal, red tangent) to set up the TBN matrix, why does the right face seem poorly lit?
EDIT 2 --
Vertex shader:
void main(){
    texture_coordinates = textcoord;
    fragment_position = vec3(model * vec4(position, 1.0));

    mat3 normalMatrix = transpose(inverse(mat3(model)));
    vec3 T = normalize(normalMatrix * tangent);
    vec3 N = normalize(normalMatrix * normal);
    T = normalize(T - dot(T, N) * N);
    vec3 B = cross(N, T);
    mat3 TBN = transpose(mat3(T, B, N));

    view_position = TBN * viewPos; // camera position
    light_position = TBN * lightPos; // light position
    fragment_position = TBN * fragment_position;
    gl_Position = projection * view * model * vec4(position, 1.0);
}
In the vertex shader I set up my TBN matrix and transform the light, fragment, and view vectors to tangent space; doing so, I won't have to do any other calculations in the fragment shader.
Fragment shader:
void main() {
    vec3 Normal = texture(TextSamplerNormals, texture_coordinates).rgb; // extract normal
    Normal = normalize(Normal * 2.0 - 1.0); // correct range
    material_color = texture(TextSampler, texture_coordinates.st); // diffuse map

    vec3 I_amb = AmbientLight.color * AmbientLight.intensity;

    vec3 lightDir = normalize(light_position - fragment_position);
    vec3 I_dif = vec3(0, 0, 0);
    float DiffusiveFactor = max(dot(lightDir, Normal), 0.0);
    vec3 I_spe = vec3(0, 0, 0);
    float SpecularFactor = 0.0;

    if (DiffusiveFactor > 0.0) {
        I_dif = DiffusiveLight.color * DiffusiveLight.intensity * DiffusiveFactor;

        vec3 vertex_to_eye = normalize(view_position - fragment_position);
        vec3 light_reflect = reflect(-lightDir, Normal);
        light_reflect = normalize(light_reflect);

        SpecularFactor = pow(max(dot(vertex_to_eye, light_reflect), 0.0), SpecularLight.power);
        if (SpecularFactor > 0.0) {
            I_spe = DiffusiveLight.color * SpecularLight.intensity * SpecularFactor;
        }
    }

    color = vec4(material_color.rgb * (I_amb + I_dif + I_spe), material_color.a);
}
Handling discontinuity vs continuity
You are thinking about this the wrong way.
Depending on the use case, your normal map may be continuous or discontinuous. For example, in your cube, imagine each face had a different surface type; then the normals would be different depending on which face you are currently on.
Which normal you use is determined by the texture itself and not by any blending in the fragment shader.
The actual algorithm is (a rough sketch follows below):
1) Load the RGB values of the normal.
2) Convert them to the -1 to 1 range.
3) Rotate the result by the model matrix.
4) Use the new value in the shading calculations.
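A minimal GLSL sketch of those four steps (the sampler and matrix names are placeholders rather than names from the question, and it assumes an object-space normal map since step 3 rotates by the model matrix):

uniform sampler2D normalMap;
uniform mat3 normalMatrix; // e.g. transpose(inverse(mat3(model)))

vec3 shadingNormal(vec2 uv)
{
    vec3 n = texture(normalMap, uv).rgb; // 1) load the RGB values of the normal
    n = n * 2.0 - 1.0;                   // 2) convert from [0, 1] to [-1, 1]
    n = normalize(normalMatrix * n);     // 3) rotate by the model matrix
    return n;                            // 4) use the new value in shading
}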
If you want continuous normals, then you need to make sure that the charts in texture space that you use are built so that the limits of the texture coordinates agree.
Mathematically, that means that if U and V are regions of R^2 that map to the normal field N of your shape, and the mapping function is f, then it should hold that:
If lim S(x_1, x_2) = lim S(y_1, y_2), where {x_1, x_2} ⊂ U and {y_1, y_2} ⊂ V, then lim f(x_1, x_2) = lim f(y_1, y_2).
In plain English: if the coordinates in your charts map to positions that are close on the shape, then the normals they map to should also be close in normal space.
TL;DR: do not blend in the fragment shader. This is something that should be done by the normal map itself when it's baked, not by you when rendering.
Handling the tangent space
You have 2 options. Option 1: you pass the tangent T and the normal N to the shader. In that case the binormal B is T × N, and the basis {T, N, B} gives you the space in which the normal-map normals need to be expressed.
Assume that in tangent space x is side, y is forward, and z is up. Your transformed normal then becomes (xB, yT, zN).
If you do not pass the tangent, you must first create an arbitrary vector that is orthogonal to the normal, then use this as the tangent.
(Note: N is the model normal, while (x, y, z) is the normal-map normal.)
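A rough sketch of that second option, building a tangent frame in the shader when no tangent attribute is available (worldNormal, normalMap and uv are assumed names; it uses the common convention of mapping the sample's x along T and y along B, which differs slightly from the axis naming above):

vec3 shadingNormal(vec3 worldNormal, sampler2D normalMap, vec2 uv)
{
    vec3 N = normalize(worldNormal);
    // Pick a helper axis that is not parallel to N, then build an orthonormal frame.
    vec3 helper = abs(N.y) < 0.99 ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
    vec3 T = normalize(cross(helper, N));
    vec3 B = cross(N, T);
    // Bring the normal-map sample into the same space as N.
    vec3 m = texture(normalMap, uv).rgb * 2.0 - 1.0;
    return normalize(m.x * T + m.y * B + m.z * N);
}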

texture a ball on a sphere has a dark band

I am using this code to generate sphere vertices and texture coordinates, but as you can see in the image, when I rotate the sphere I can see a dark band.
for (int i = 0; i <= stacks; ++i)
{
    float s = (float) i / (float) stacks;
    float theta = s * 2 * glm::pi<float>();
    for (int j = 0; j <= slices; ++j)
    {
        float sl = (float) j / (float) slices;
        float phi = sl * (glm::pi<float>());

        const float x = cos(theta) * sin(phi);
        const float y = sin(theta) * sin(phi);
        const float z = cos(phi);

        sphere_vertices.push_back(radius * glm::vec3(x, y, z));
        sphere_texcoords.push_back((glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0)));
    }
}
// get the indices
for (int i = 0; i < stacks * slices + slices; ++i)
{
    sphere_indices.push_back(i);
    sphere_indices.push_back(i + slices + 1);
    sphere_indices.push_back(i + slices);

    sphere_indices.push_back(i + slices + 1);
    sphere_indices.push_back(i);
    sphere_indices.push_back(i + 1);
}
I can't figure out a way to make it look right, whatever texture coordinates I use.
Hmm... If I use another image, then the mapping is different (and worse!)
vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aTexCoord;

out vec4 vertexColor;
out vec2 TexCoord;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
    vertexColor = vec4(0.5, 0.2, 0.5, 1.0);
    TexCoord = vec2(aTexCoord.x, aTexCoord.y);
}
fragment shader:
#version 330 core
out vec4 FragColor;

in vec4 vertexColor;
in vec2 TexCoord;

uniform sampler2D sphere_texture;

void main()
{
    FragColor = texture(sphere_texture, TexCoord);
}
I am not using any lighting conditions.
If I use FragColor = vec4(TexCoord.x, TexCoord.y, 0.0f, 1.0f); in fragment shader (for debugging purposes) , I am receiving a nice sphere.
I am using this as texture:
That image of the tennis ball that you linked reveals the problem. I'm glad you ultimately provided it.
Your image is a four-channel PNG with transparency (an alpha channel). There are transparent pixels all around the outside of the yellow part of the ball that have (R, G, B, A) = (0, 0, 0, 0), so if you're ignoring the A channel, then (R, G, B) will be (0, 0, 0) = black.
Here are just the Red, Green, and Blue (RGB) channels:
And here is just the Alpha (A) channel.
The important thing to notice is that the circle of the ball does not fill the square. There is a significant margin of 53 pixels of black from the extent of the ball to the edge of the texture. We can calculate the radius of the ball from this. Half the width is 1000 pixels, of which 53 pixels are not used. The ball's radius is 1000-53, which is 947 pixels. Or about 94.7% of the distance from the center to the edge of the texture. The remaining 5.3% of the distance is black.
Side note: I also notice that your ball doesn't quite reach 100% opacity. The yellow part of the ball has an alpha channel value of 254 (of 255), meaning 99.6% opaque. The white lines and the shiny hot spot do actually reach 100% opacity, giving it sort of a Death Star look. ;)
To fix your problem, there's the intuitive approach (which may not quite get you there) and then there are approaches that will work. Here are a few things you can do:
Intuitive Solution:
This won't quite get you 100% there.
1) Resize the ball to fill the texture. Use image editing software to enlarge the ball to fill the texture, or to trim off the black pixels. This will just make more efficient use of pixels, for one, but it will ensure that there are useful pixels being sampled at the boundary. You'll probably want to expand the image to be slightly larger than 100%. I'll explain why below.
2) Remap your texture coordinates to only extend to 94.7% of the radius of the ball. (Similar to approach 1, but doesn't require image editing). This just uses coordinates that actually correspond to the image you provided. Your x and y coordinates need to be scaled about the center of the image and reduced to about 94.7%.
x2 = 0.5 + (x - 0.5) * 0.947;
y2 = 0.5 + (y - 0.5) * 0.947;
Suggested Solution:
This will ensure no more black.
3) Fill the "black" portion of your ball texture with a less objectionable colour - probably the colour that is at the circumference of the tennis ball. This ensures that any texels that are sampled at exactly the edge of the ball won't be linearly combined with black to produce an unsightly dark-but-not-quite-black band, which is almost the problem you have right now anyway. You can do this in two ways. A) Image editing software. Remove the transparency from your image and matte it against a dark yellow colour. B) Use the shader to detect pixels that are outside the image and replace them with a border colour (this is clever, but probably more trouble than it's worth.)
Different Texture Coordinates
The last thing you can do is avoid this degenerate texture-mapping coordinate problem altogether. At the equator, you're not really sure which pixels to sample: the black (transparent) pixels, or the coloured pixels of the ball. The discrete nature of square pixels is fighting against the polar nature of your texture map. You'll never find the exact colour you need near the edge to produce a continuous, seamless map. Instead, you can use a different coordinate system. I hope you're not attached to how that ball looks, because let me introduce you to the equirectangular projection. It's the same projection that you can naively use to map the globe of the Earth to the typical rectangular map of the world you're likely familiar with, where the north and south poles get all the distortion but the equatorial regions look pretty good.
Here's your image mapped to equirectangular coordinates:
Notice that black bar at the bottom...we're onto something! That black bar is actually exactly what appears around the equator of your ball with your current texture mapping coordinate system. But with this coordinate system, you can see easily that if we just remapped the ball to fill the square we'd completely eliminate any transparent pixels at all.
It may be inconvenient to work in this coordinate system, but you can transform your image in Photoshop using Filter > Distort > Polar Coordinates... > Polar to Rectangular.
Sigismondo's answer already suggests how to adjust your texture mapping coordinates to do this.
And finally, here's a texture that is both enlarged to fill the texture space, and remapped to equirectangular coordinates. No black bars, minimal distortion. But you'll have to use Sigismondo's texture mapping coordinates. Again, this may not be for you, especially if you're attached to the idea of the direct projection for your texture (i.e.: if you don't want to manipulate your tennis ball image and you want to use that projection.) But if you're willing to remap your data, you can rest easy that all the black pixels will be gone!
Good luck! Feel free to ask for clarifications.
I cannot test it, since the code is incomplete, but from a quick look I have spotted this problem:
sphere_texcoords.push_back((glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0)));
The texture coordinates should not be computed from x and y, which are:
const float x = cos(theta) * sin(phi);
const float y = sin(theta) * sin(phi);
but from the angles theta and phi, i.e. from stacks and slices. This could work better (untested):
sphere_texcoords.push_back(glm::vec2(s,sl));
using the already-defined:
float s = (float)i / (float) stacks;
float sl = (float)j / (float) slices;
Furthermore, in your code you are treating the first and the last "slices" of the sphere like all the rest... Shouldn't they be treated differently? This seems quite odd to me, but I don't know whether your implementation is just a simpler one that still works fine.
Compare with this explanation, for example: http://www.songho.ca/opengl/gl_sphere.html

Billboarded sprites intersecting 3D objects

I'm using spherically billboarded sprites along with 3D objects. Because the quad leans backwards to match the camera angle, it intersects with 3D objects immediately behind it. It is more noticeable when the camera angle is very large. The following link provides a very clear visual:
http://answers.unity3d.com/questions/582680/billboard-issue-in-front-of-3d-object.html
Is there an efficient way to resolve this?
The best solution I could come up with was to use cylindrical billboarding for depth calculations and spherical for the quad's actual position. This allows you to use spherical billboarding while ensuring the quad's depth remains constant.
For reference, here are the billboarding ModelView matrices. [x] means the value is left as is.
Cylindrical mvMatrix      Spherical mvMatrix
[1][x][0][x]              [1][0][0][x]
[0][x][0][x]              [0][1][0][x]
[0][x][1][x]              [0][0][1][x]
[x][x][x][x]              [x][x][x][x]
First modify the ModelViewMatrix for cylindrical billboarding and generate a depth vertex as such:
depthV = projectionMatrix * (mvm * vertex);
Next set the second column values for spherical billboarding and create the quad as usual:
mvm[1][0] = 0; mvm[1][2] = 0; mvm[1][1] = 1;
gl_Position = projectionMatrix * (mvm * vertex);
Finally send depthV to the fragment shader and use it for the depth calculation.
float ndcDepth = depthV.z / depthV.w;
gl_FragDepth = ((gl_DepthRange.diff * ndcDepth ) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;
Scaling should be done before applying the ModelView matrices.
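Putting these steps together, a minimal vertex-shader sketch might look like this (the uniform and attribute names here are placeholders, not from the original answer):

#version 330 core
layout (location = 0) in vec3 aPos;

uniform mat4 projectionMatrix;
uniform mat4 modelViewMatrix;

out vec4 depthV; // clip-space position of the cylindrically billboarded vertex

void main()
{
    mat4 mvm = modelViewMatrix;

    // Cylindrical billboard: reset the first and third columns, keep the up axis.
    mvm[0][0] = 1.0; mvm[0][1] = 0.0; mvm[0][2] = 0.0;
    mvm[2][0] = 0.0; mvm[2][1] = 0.0; mvm[2][2] = 1.0;
    depthV = projectionMatrix * (mvm * vec4(aPos, 1.0));

    // Spherical billboard: also reset the second column, then place the quad.
    mvm[1][0] = 0.0; mvm[1][1] = 1.0; mvm[1][2] = 0.0;
    gl_Position = projectionMatrix * (mvm * vec4(aPos, 1.0));
}

The fragment shader then writes gl_FragDepth from depthV exactly as in the two-line snippet above.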

Variance Shadow Map Depth Issue

I have been trying to get variance shadow mapping to work in my WebGL application, but I seem to be having an issue that I could use some help with. In short, my shadows seem to vary over a much smaller distance than in the examples I have seen out there. For example, the shadow range is from 0 to 500 units, but the shadow is black 5 units away and almost non-existent 10 units away. The examples I am following are based on these two links:
VSM from Florian Boesch
VSM from Fabian Sanglard
In both of those examples, the authors use a spot light's perspective projection to map the variance values to a floating-point texture. In my engine, I have so far tried to use the same logic, except that I am using a directional light and an orthographic projection. I tried both techniques and the result always seems to be the same for me. I'm not sure if it's because I'm using an orthographic matrix for the projection, but I suspect it might be. Here is a picture of the problem:
Notice how the box is only a few units away from the circle, but the shadow is much darker even though the shadow camera's range is 0.1 to 500 units.
In the light shadow pass my code looks like this:
// viewMatrix is a uniform holding the inverse world matrix of the camera
// vWorldPosition is a varying vec4: the vertex position times the world matrix
vec3 lightPos = (viewMatrix * vWorldPosition).xyz;
float depth = clamp(length(lightPos) / 40.0, 0.0, 1.0);

float moment1 = depth;
float moment2 = depth * depth;

// Adjusting the moments (this is a sort of per-pixel bias) using partial derivatives
float dx = dFdx(depth);
float dy = dFdy(depth);
moment2 += pow(depth, 2.0) + 0.25 * (dx * dx + dy * dy);

gl_FragColor = vec4(moment1, moment2, 0.0, 1.0);
Then in my shadow pass:
// lightViewMatrix is the light camera's inverse world matrix
// vertWorldPosition is the attribute position times the world matrix
vec3 lightViewPos = (lightViewMatrix * vertWorldPosition).xyz;
float lightDepth2 = clamp(length(lightViewPos) / 40.0, 0.0, 1.0);

float illuminated = vsm(shadowMap[i], shadowCoord.xy, lightDepth2, shadowBias[i]);
shadowColor = shadowColor * illuminated;
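For reference, vsm() here is the usual Chebyshev upper-bound test from the two linked articles; a rough sketch of such a function (not necessarily the exact one used in this engine) looks like:

float vsm(sampler2D map, vec2 uv, float fragDepth, float bias)
{
    vec2 moments = texture2D(map, uv).rg;       // (E[x], E[x^2]) written in the light pass
    if (fragDepth <= moments.x)
        return 1.0;                             // in front of the stored occluder: fully lit

    float variance = max(moments.y - moments.x * moments.x, bias);
    float d = fragDepth - moments.x;
    // Chebyshev's inequality gives an upper bound on the probability of being lit.
    return variance / (variance + d * d);
}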
Firstly, should I be doing anything differently with orthographic projection? (It's probably not this, but I don't know what else it might be, since it happens with both techniques above :( ) If not, what might I be able to do to get a more even spread of the shadow?
Many thanks

How to make radial gradient on each face using shader in OpenGL

Using simple shaders, I've found a way to create gradients.
Here's the result of my work:
http://goo.gl/A7pY01 (slightly updated after the "OpenGL ES 2.0 Shader - 2D Radial Gradient in Polygon" question)
It's nice, but I still need to display this gradient pattern on each face of my meshes, or on a billboard face, just as if it were a texture.
The GLSL built-in gl_FragCoord holds window-space coordinates. Could someone explain to me how to translate these into face-related coordinates and then draw my pattern?
Okay. A little surfing of Stack Overflow gave me this topic: OpenGL: How to render perfect rectangular gradient?
Here is the key line: gl_FragColor = mix(color0, color1, uv.u + uv.v - 2 * uv.u * uv.v);
Of course we cannot translate window-space coordinates into something "face-related", but we can use the UV coordinates of a face. So I decided: what if we have a square face with UV coordinates corresponding to a full-sized texture (like 0,0; 0,1; 1,0; 1,1)? Then the center of the face is at 0.5, 0.5, and this can be the center of my radial gradient.
So my fragment shader code is:
vec2 u_c = vec2(0.5,0.5);
float distanceFromLight = length(uv - u_c);
gl_FragColor = mix(vec4(1.,0.5,1.,1.), vec4(0.,0.,0.,1.), distanceFromLight*2.0);
Vertex shader:
gl_Position = _mvProj * vec4(vertex, 1.0);
uv = uv1;
Of course, we need to supply correct UV coordinates, but the idea should be clear.
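For completeness, a self-contained OpenGL ES 2.0-style version of that fragment shader might look like this (the precision qualifier and the varying declaration are assumptions filled in around the snippets above):

precision mediump float;

varying vec2 uv; // per-face UV coordinates in [0, 1], passed from the vertex shader

void main()
{
    vec2 u_c = vec2(0.5, 0.5);                   // gradient center in UV space
    float distanceFromCenter = length(uv - u_c);
    // 0.0 at the face center, 1.0 at a distance of 0.5 (the edge midpoints).
    gl_FragColor = mix(vec4(1.0, 0.5, 1.0, 1.0),
                       vec4(0.0, 0.0, 0.0, 1.0),
                       distanceFromCenter * 2.0);
}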
Here's an example:
http://goo.gl/A7pY01