3D Volume rendering and multiple point of view occlusion - c++

I have a `W x H x D` volumetric dataset that is zero everywhere except for small spherical volumes containing 1.
I have written the shader to extract the "intersection" of that 3D volume with a generic object made of vertices.
Vertex shader
varying vec3 textureCoordinates;
uniform float objectSize;
uniform vec3 objectTranslation;
void main()
{
    vec4 v = gl_Vertex;
    textureCoordinates = vec3(((v.xz - objectTranslation.xz) / objectSize + 1.0) * 0.5,
                              ((v.y  - objectTranslation.y)  / objectSize + 1.0) * 0.5);
    gl_Position = gl_ModelViewProjectionMatrix * v;
}
Fragment shader
varying vec3 textureCoordinates;
uniform sampler3D volumeSampler;
void main()
{
    vec4 uniformColor = vec4(1.0, 1.0, 0.0, 1.0); // patch color (yellow)
    if (textureCoordinates.x <= 0.0 || textureCoordinates.x >= 1.0 || textureCoordinates.z <= 0.0 || textureCoordinates.z >= 1.0)
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); // can be uniformColor to color the whole object again
    else
        gl_FragColor = uniformColor * texture3D(volumeSampler, textureCoordinates);
}
In the OpenGL program I am looking at the centered object, with those almost-spherical patches of white on it, from the eye position (0, 100, 0). For another viewer at (0, 0, 0), however, I want the spheres that lie on the same line of sight to be correctly occluded, so that only the parts I underlined in red in the picture are emitted.
Is this an application of raycasting or similar?

It seems what you want is occlusion culling. You have two main options to implement it:
Using GPU occlusion queries
This is essentially asking the hardware whether a certain fragment will be drawn or not; if not, you can cull the object.
Occlusion queries count the number of fragments (or samples) that pass the depth test, which is useful to determine visibility of objects.
The algorithm is more complex than can be explained here; here is an excellent Nvidia article on the topic.
Using CPU ray casting
This simply checks each object (or possibly its bounding volume); if a ray hits the object, then it possibly hides other objects behind it. The objects need to be spatially sorted using an octree or a BSP tree, so that you don't end up checking every object and only check objects near the camera.
For more on culling techniques check my answer here.

Is this an application of raycasting or similar?
This is in essence the ray-tracing shadow algorithm: once you have hit a (visible) surface with your view ray, you take that point as the origin for a trace toward the other point (a light source, or whatever), and if you can reach that point without "bumping" into something else, you use that information as further input into the rendering calculations.
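Since the occluders here already live in a 3D texture, one way to apply that idea is to march a short ray through volumeSampler from each lit fragment toward the second viewpoint and darken the fragment if anything non-zero lies along the way. Below is a minimal sketch; the step count, the secondViewerTexCoord uniform and the occlusion threshold are illustrative assumptions, not part of the original shader:

varying vec3 textureCoordinates;
uniform sampler3D volumeSampler;
uniform vec3 secondViewerTexCoord; // position of the second viewer, already in texture space

void main()
{
    vec4 uniformColor = vec4(1.0, 1.0, 0.0, 1.0);
    vec4 baseColor = uniformColor * texture3D(volumeSampler, textureCoordinates);

    // March from this fragment's texture coordinate toward the second viewer
    // and check whether any occupied voxel (value ~1) blocks the line of sight.
    const int STEPS = 64;
    vec3 dir = (secondViewerTexCoord - textureCoordinates) / float(STEPS);
    float blocked = 0.0;
    for (int i = 1; i < STEPS; ++i)
    {
        vec3 p = textureCoordinates + dir * float(i);
        blocked = max(blocked, texture3D(volumeSampler, p).r);
    }

    // If something was hit between the fragment and the second viewer, occlude it.
    gl_FragColor = (blocked > 0.5) ? vec4(0.0, 0.0, 0.0, 1.0) : baseColor;
}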

Related

Converting an equiangular cubemap to an equirectangular one

I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks boxy, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u = 4/pi * atan(u)
v = 4/pi * atan(v)
Let's recognize first that the term is misleading, because even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact no 2d projection of any part of a sphere can truly be equi-angular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in GLSL fragment shader as:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d)/atan(1);
gives the following result:
Compare it with the uncorrected d:
As you can see the EAC projection shrinks the pixels in the middle by a little bit, and expands them near the corners, so that they cover more equal area.
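For context, here is a minimal sketch of how that correction would slot into the fragment shader from the question (same in/out and uniform names; the correction itself is just the two lines above):

#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;

void main(void) {
    vec3 d = cube_edge;
    d /= max(abs(d.x), max(abs(d.y), abs(d.z))); // project the direction onto the cube face
    d = atan(d) / atan(1.0);                     // per-component 4/pi * atan remap
    color = texture(skybox_sampler, d).rgb;
}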
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy)/atan(1);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom-line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead re-focus your effort on alternative ways of authoring your environment map. I recommend you try using an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
Your problem is that the size of the geometry on which the environment is placed is too small. You are not looking at the environment but at the inside of a small cube in which you are sitting. The environment map should behave as if you are always in the center of the map and the environment is infinitely far away.
I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader. If you set z to w, you guarantee that the final z value of the position will be 1.0, which is the z value of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.)
It is quite sufficient to draw a cube and wrap the environment by looking up the map with the interpolated vertices of the cube. In the case of a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinate of the cube to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.
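Putting the snippets together, a complete vertex shader might look like this (the names inVertex, projection and view are taken from the snippet above; the declarations around them are assumptions):

#version 400 core
in vec3 inVertex;          // cube vertex position, also used as the lookup direction
out vec3 cube_edge;

uniform mat4 projection;
uniform mat4 view;         // view matrix, typically with the translation removed for a skybox

void main(void) {
    cube_edge = inVertex;
    vec4 clipPos = projection * view * vec4(inVertex, 1.0);
    gl_Position = clipPos.xyww; // force z = w, so the depth ends up on the far plane
}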

Bypass classical deferred shading light volumes

I would like to "bypass" the classical light volume approach of deferred lighting.
Usually, when you want to affect pixels within a pointlight volume, you can simply render a sphere mesh.
I would like to try another way to do that. The idea is to render a cube that encompasses the sphere; the cube circumscribes the sphere, so each face's center is a point on the sphere. Then you only have to know, from your point of view, which fragments would be part of the circle (the sphere on your screen) if you had rendered the sphere instead.
So the main problem is to know which fragment will have to be discarded.
How could I do that:
In the fragment shader, I have my "camera" world coordinates, my fragment world coordinates, my sphere world center, and my sphere radius.
Thus I have the straight line whose direction vector is given by the camera and fragment world positions.
And I can build my sphere equation.
Finally I can know if the line intersect the sphere.
Is it correct to say that, from my point of view, if the line intersects the sphere, then this fragment must be considered a highlighted fragment (a fragment that would have been rendered if I had rendered a sphere instead)?
Thus the check "length(fragment - sphereCenter) <= sphereRadius" doesn't really mean anything here, because the fragment is not on the sphere.
So what?
The standard deferred shading solution for lights is to render a full-screen quad. The purpose of rendering a sphere instead is to avoid doing a bunch of per-fragment calculations for fragments which are outside of the light source's effect. This means that the center of that sphere is the light source, and its radius represents the maximum distance for which the source has an effect.
So the length from the fragment (that is, reconstructed from your g-buffer data, not the fragment produced by the cube) to the sphere's center is very much relevant. That's the length between the fragment and the light source. If that is larger than the sphere radius (AKA: maximum reach of the light), then you can cull the fragment.
Or you can just let your light attenuation calculations do the same job. After all, in order for lights to not look like they are being cropped, that sphere radius must also be used with some form of light attenuation. That is, when a fragment is at that distance, the attenuation of the light must be either 0 or otherwise negligibly small.
As such... it doesn't matter if you're rendering a sphere, cube, or a full-screen quad. You can either cull the fragment or let the light attenuation do its job.
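For the "let the attenuation do it" route, one common approach is an attenuation term that is explicitly forced to reach zero at the light's radius, for example (a sketch with illustrative names, not code from this answer):

// Attenuation that smoothly falls to exactly 0 at lightRadius, so the
// light-volume boundary (sphere, cube or full-screen quad) never shows a hard edge.
float attenuate(float dist, float lightRadius)
{
    float x = clamp(dist / lightRadius, 0.0, 1.0);
    float falloff = 1.0 - x * x;            // reaches 0 at dist == lightRadius
    return falloff * falloff / (1.0 + dist * dist);
}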
However, if you want to possibly save performance by discarding the fragment before reading any of the g-buffers, you can do this. Assuming you have access to the camera-space position of the sphere/cube's center in the FS:
Convert the position of the cube's fragment into camera-space. You can do this by reverse-transforming gl_FragCoord, but it'd probably be faster to just pass the camera-space position to the fragment shader. It's not like your VS is doing a lot of work or anything.
Because the camera-space position is in camera space, it already represents a direction from the camera into the scene. So now, use this direction to perform part of ray/sphere intersection. Namely, you stop once you compute the discriminant (to avoid an expensive square-root). The discriminant is:
float A = dot(cam_position, cam_position);
float B = -2.0 * dot(cam_position, cam_sphere_center);
float C = dot(cam_sphere_center, cam_sphere_center) - (radius * radius);
float Discriminant = (B * B) - 4.0 * A * C;
If the discriminant is negative, discard the fragment. Otherwise, do your usual stuff.
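In a fragment shader that would look roughly like the following sketch (the in/uniform declarations are assumptions matching the names used above):

in vec3 cam_position;           // camera-space position of the light-volume fragment
uniform vec3 cam_sphere_center; // camera-space position of the light
uniform float radius;           // maximum reach of the light
out vec4 fragColor;             // final lit color, written by the usual lighting code

void main()
{
    // Ray/sphere test for the ray from the camera (origin) through this fragment.
    float A = dot(cam_position, cam_position);
    float B = -2.0 * dot(cam_position, cam_sphere_center);
    float C = dot(cam_sphere_center, cam_sphere_center) - radius * radius;

    if (B * B - 4.0 * A * C < 0.0)
        discard; // the view ray misses the light sphere: skip all g-buffer work

    // ... otherwise read the g-buffer, compute lighting and write fragColor ...
}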

atmospheric scattering and sky geometry

I'm trying to implement atmospheric scattering in my graphics (game) engine based on the GPU Gems article: link. The example implementation from that article uses a skydome. My scene is different: I don't render a whole earth with an atmosphere that can also be visible from space, but some finite flat (rectangular) area with objects, for example a race track. In fact this is the most common scenario in many games. Now I wonder how to render a sky in such a case:
1. What kind of geometry should I use: a skydome, a skybox, or a full-screen quad? With a quad I would have to move almost all calculations to the fragment shader, but I don't know if that makes sense in terms of quality/performance.
2. How do I place the sky geometry in the scene? My idea:
I have a hemisphere (skydome) geometry with radius = 1 and center at vec3(0, 0, 0), in object space.
Those vertices are sent to the atmospheric scattering vertex shader:
layout(location=0) in vec3 inPosition;
Next, in the vertex shader I transform the vertex this way:
v3Pos = inPosition * 0.25f + 10.0f;
The uniforms are: v3CameraPos = vec3(0.0f, 10.0f, 0.0f), fInnerRadius = 10.0f, fCameraHeight = 10.0f.
Then I have the correct inner/outer radius proportion (10/10.25), right? I also send to the vertex shader a model matrix which sets the position of the hemisphere to the position of the moving camera, vec3(myCamera.x, myCamera.y, myCamera.z):
vec4 position = ProjectionMatrix * ViewMatrix * ModelMatrix * vec4(inPosition, 1.0);
gl_Position = position.xyww; // always fails depth test.
The hemisphere moves together with the camera (it encloses only some space around the camera with radius = 1, but it also always fails the depth test).
Unfortunately the sky color I get is not correct: screen1
3. What about a "sky curve"? Here is a picture which demonstrates what I mean: image1
How should I set the sky curve?
Edit 1 - debugging: in the vertex shader I assigned to v3Pos the position of the "highest" vertex in the hemisphere:
vec3 v3Pos = vec3(0.0f, 10.25f, 0.0f);
Now the whole sky has the color of that vertex:
screen2
Here is my simplified atmospheric scattering:
https://stackoverflow.com/a/19659648/2521214
It works both outside and inside the atmosphere.
For realistic scattering you need to compute the curve integral through the atmosphere's depth,
so you need to know how thick the atmosphere is in each direction. Your terrain is a 'flat quad +/- some bumps', but you must know where on Earth it sits: position (x,y,z) and normal (nx,ny,nz). For the atmosphere you can use a sphere or, like me, an ellipsoid. Unless you want to implement the whole scattering process (integration through an area, not a curve), what you really need is just the ray length from your camera to the end of the atmosphere.
You can also precompute a cube-map texture with the atmosphere edge distance, because the movement inside your flat area does not matter much.
You also need the Sun position.
This can be simplified to just rotating the normal around the Earth's axis one revolution per day, or you can also implement the seasonal trajectory shift.
Now, just before rendering, fill the screen with the sky.
You can do it all inside shaders; one quad is enough.
In the fragment shader: get the atmosphere thickness from the cube-map, or from the intersection of the pixel's direction with the atmosphere sphere/ellipsoid, and then do the integration (full or simplified). Output the color to the screen pixel.
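A minimal sketch of that full-screen-quad route, assuming a sphere for the atmosphere (all names here, including v3Direction, cameraPos and atmosphereRadius, are illustrative; the linked answer's real shader does considerably more):

#version 330 core
in vec3 v3Direction;             // per-pixel world-space view direction, interpolated from the quad
out vec4 fragColor;

uniform vec3  cameraPos;         // world-space camera position
uniform vec3  planetCenter;      // center of the atmosphere sphere
uniform float atmosphereRadius;  // outer radius of the atmosphere

// Distance along the (normalized) ray to the far edge of the atmosphere sphere,
// or 0.0 if the ray misses it entirely.
float atmosphereDepth(vec3 origin, vec3 dir)
{
    vec3  oc = origin - planetCenter;
    float b  = dot(oc, dir);
    float c  = dot(oc, oc) - atmosphereRadius * atmosphereRadius;
    float h  = b * b - c;
    if (h < 0.0) return 0.0;
    return max(0.0, -b + sqrt(h));
}

void main()
{
    float depth = atmosphereDepth(cameraPos, normalize(v3Direction));
    // ... run the (full or simplified) scattering integral over 'depth' here ...
    fragColor = vec4(vec3(depth * 0.1), 1.0); // placeholder visualization of the depth
}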
It really doesn't matter what geometry you use, as long as (a) it covers the screen and (b) you feed the fragment shader a reasonable value for v3Direction. In any case, for a typical on-the-earth game like one with a race track, the sky will be behind all other objects, so accurate Z is not such a big deal.

Low polygon cone - smooth shading at the tip

If you subdivide a cylinder into an 8-sided prism, calculating vertex normals based on their position ("smooth shading"), it looks pretty good.
If you subdivide a cone into an 8-sided pyramid, calculating normals based on their position, you get stuck on the tip of the cone (technically the vertex of the cone, but let's call it the tip to avoid confusion with the mesh vertices).
For each triangular face, you want to match the normals along both edges. But because you can only specify one normal at each vertex of a triangle, you can match one edge or the other, but not both. You can compromise by choosing a tip normal that is the average of the two edges, but now none of your edges look good. Here is a detail of what choosing the average normal for each tip vertex looks like.
In a perfect world, the GPU could rasterize a true quad, not just triangles. Then we could specify each face with a degenerate quad, allowing us to specify a different normal for the two adjoining edges of each triangle. But all we have to work with are triangles... We can cut the cone into multiple "stacks", so that the edge discontinuities are only visible at the tip of the cone rather than along the whole thing, but there will still be a tip!
Anybody have any tricks for smooth-shaded low-poly cones?
I was struggling a bit with cones made up from triangles in modern OpenGL (i.e. with shaders), but then I found a surprisingly simple solution! I would say it is much better and simpler than what is suggested in the currently accepted answer.
I have an array of triangles (obviously each has 3 vertices) which form the cone surface. I did not care about the bottom face (circular base) as this is really straightforward. In all my work I use the following simple vertex structure:
position: vec3 (was automatically converted to vec4 in the shader by adding 1.0f as the last element)
normal_vector: vec3 (was kept as vec3 in the shaders as it was used for calculation dot product with the light direction)
color: vec3 (I did not use transparency)
In my vertex shader I was only transforming the vertex positions (multiplying by projection and model-view matrix) and also transforming the normal vectors (multiplying by transformed inverse of model-view matrix). Then the transformed positions, normal vectors and untransformed colors were passed to fragment shader where I calculated the dot product of light direction and normal vector and multiplied this number with the color.
Let me start with what I did and found unsatisfactory:
Attempt#1: Each cone face (triangle) was using a constant normal vector, i.e. all vertices of one triangle had the same normal vector.
This was simple but did not achieve smooth lighting, each face had a constant color because all fragments of the triangle had the same normal vector. Wrong.
Attempt#2: I calculated the normal vector for each vertex separately. This was easy for the vertices on the circular base of the cone, but what should be used for the tip of the cone? I used the normal vector of the whole triangle (i.e. the same value as in attempt#1). Well, this was better, because I had smooth lighting in the part closer to the base of the cone, but not smooth near the tip. Wrong.
But then I found the solution:
Attempt#3: I did everything as in attempt#2, except I set the normal vector at the cone-tip vertices to the zero vector vec3(0.0f, 0.0f, 0.0f). This is the key to the trick! This zero normal vector is passed to the fragment shader (i.e. between the vertex and fragment shaders it is automatically interpolated with the normal vectors of the other two vertices). Of course you then need to normalize the vector in the fragment (!) shader, because it no longer has a constant length of 1 (which I need for the dot product). So I normalize it; of course this is not possible at the very tip of the cone, where the interpolated normal has zero length, but it works for all other points. And that's it.
There is one important thing to remember: you can only do this normalization in the fragment shader. You will get an error if you try to normalize a zero-length vector in C++, so if you need normalization before the fragment shader for some reason, make sure you exclude the zero-length normal vectors (i.e. the tip of the cone), or you will get an error.
This produces smooth shading of the cone at all points except the very tip. But that point is just not important (who cares about one pixel...), or you can handle it in a special way. Another advantage is that you can use even a very simple shader; the only change is to normalize the normal vectors in the fragment shader rather than in the vertex shader or even earlier.
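To illustrate, the fragment-shader side of that trick is just one normalize on the interpolated normal. A minimal sketch with assumed variable names (the actual shaders described above also handle the transformations):

#version 330 core
in vec3 interpNormal;     // per-vertex normals interpolated by the rasterizer; zero at the tip vertex
in vec3 interpColor;
out vec4 fragColor;

uniform vec3 lightDir;    // normalized direction toward the light

void main()
{
    // Normalize *after* interpolation; this is what makes the zero-normal tip trick work.
    vec3 n = normalize(interpNormal);
    float diffuse = max(dot(n, lightDir), 0.0);
    fragColor = vec4(interpColor * diffuse, 1.0);
}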
Yes, it certainly is a limitation of triangles. I think showing the issue as you approach a cone from a cylinder makes the problem quite clear:
Here's some things you could try...
Use quads (as @WhitAngl says). To hell with new OpenGL, there is a use for quads after all.
Tessellate a bit more evenly. Setting the normal at the tip to a common up vector removes any harsh edges, though it looks a bit strange against the unlit side. Unfortunately this goes against your question title, low polygon cone.
Making sure your cone is centred around the object space origin (or procedurally generating it in the vertex shader), use the fragment position to generate the normal...
in vec2 coneSlope; //normal x/z magnitude and y
in vec3 objectSpaceFragPos;
uniform mat3 normalMatrix;
void main()
{
    vec3 osNormal = vec3(normalize(objectSpaceFragPos.xz) * coneSlope.x, coneSlope.y);
    vec3 esNormal = normalMatrix * osNormal;
    ...
}
Maybe there's some fancy tricks you can do to reduce fragment shader ops too.
Then there's the whole balance of tessellating more vs more expensive shaders.
A cone is a fairly simple object and, while I like the challenge, in practice I can't see this being an issue unless you want lots of cones. In which case you might get into geometry shaders or instancing. Better yet you could draw the cones using quads and raycast implicit cones in the fragment shader. If the cones are all on a plane you could try normal mapping or even parallax mapping.

How to apply texture to a part of a sphere

I am trying to put a texture in only a part of a sphere.
I have a sphere representing the earth with its topography, and a terrain texture for a part of the globe, say a satellite map of Italy.
I want to show that terrain over the part of the sphere where Italy is.
I'm creating my sphere drawing a set of triangle strips.
As far as I understand, if I want to use a texture I need to specify a texture coord for each vertex (glTexCoord2*). But I do not have a valid texture for all of them.
So how do I tell OpenGL to skip the texture for those vertices?
I'll assume you have two textures or a color attribute for the remainder of the sphere ("not Italy").
The easiest way to do this would be to create a texture that covers the whole sphere, but use the alpha channel. For example, use alpha = 1 for "not Italy" and alpha = 0 for "Italy". Then you could do something like this in your fragment shader (pseudo-code, I did not test anything):
...
uniform sampler2D extra_texture;
in vec2 texture_coords;
out vec3 final_color;
...
void main() {
...
// Assume color1 to be the base color for the sphere, no matter how you get it (attribute/texture), it has at least 3 components.
vec4 color2 = texture(extra_texture, texture_coords);
final_color = mix(vec3(color2), vec3(color1), color2.a);
}
The colors in mix are combined as follows: mix(x, y, a) = x*(1-a) + y*a, done component-wise for vectors. So you can see that if alpha = 1 ("not Italy"), color1 will be picked, and vice versa for alpha = 0.
You could extend this to multiple layers using texture arrays or something similar, but I'd keep it simple 2-layer to begin with.
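For completeness, a self-contained version of that idea might look as follows, assuming the base color of the sphere also comes from a texture (the sampler and variable names here are illustrative):

#version 330 core
in vec2 texture_coords;
out vec4 final_color;

uniform sampler2D base_texture;   // color for the whole sphere (e.g. the earth topography)
uniform sampler2D extra_texture;  // regional overlay; alpha = 0 over Italy, 1 elsewhere

void main() {
    vec3 color1 = texture(base_texture, texture_coords).rgb;
    vec4 color2 = texture(extra_texture, texture_coords);
    // alpha = 1 ("not Italy") picks color1, alpha = 0 ("Italy") picks the overlay
    final_color = vec4(mix(color2.rgb, color1, color2.a), 1.0);
}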