Show common part of 2 semitransparent objects? - opengl

I'm creating a math app that uses OpenGL to show several geometric 3D shapes. In some cases these shapes intersect, and when that happens I want to show the intersecting part of each shape inside the other. As both shapes are translucent, what I need is more or less something like this:
However, with 2 translucent spheres I get this result instead:
I know that, to achieve a correct transparency effect, depth testing should be turned off before the transparent shapes are drawn. However, this causes another side effect when a transparent shape is behind a non-transparent shape:
So, is there a way to show correctly the intersecting part of 2 volumetric shapes, each inside the other, without breaking the depth testing?

So, is there a way to show (…) volumetric shapes
OpenGL (by itself) doesn't know about "volumes". It knows flat triangles, lines and points, which, by pure happenstance, may also end up depth-sorted by a depth buffer test as a rendering side effect.
Technically it is possible to chain a series of drawing and stencil buffer operations to perform CSG (constructive solid geometry); see ftp://ftp.sgi.com/opengl/contrib/blythe/advanced99/notes/node22.html for details.
However, what you want to do is more easily implemented through a simple raytracer executed in the fragment shader. Raytracing is a broad subject in its own right (whole books have been written about it), so it's best to refer to an example. In this case I refer to the following ShaderToy https://www.shadertoy.com/view/ldS3DW – a slightly stripped-down version of that shader draws the kind of intersection geometry you're interested in:
float sphere(vec3 ray, vec3 dir, vec3 center, float radius)
{
    // Solve the ray/sphere quadratic; d is the discriminant.
    vec3 rc = ray - center;
    float c = dot(rc, rc) - (radius * radius);
    float b = dot(dir, rc);
    float d = b * b - c;
    float t = -b - sqrt(abs(d));
    // st is 1.0 when the ray hits the sphere (t >= 0.0 and d >= 0.0), else 0.0.
    float st = step(0.0, min(t, d));
    // Return the hit distance along the ray, or -1.0 for a miss.
    return mix(-1.0, t, st);
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    // Map the pixel coordinate to [-1, 1], corrected for aspect ratio.
    vec2 uv = (-1.0 + 2.0 * fragCoord.xy / iResolution.xy) *
              vec2(iResolution.x / iResolution.y, 1.0);
    // Camera at z = -3, rays pointing towards +z.
    vec3 ro = vec3(0.0, 0.0, -3.0);
    vec3 rd = normalize(vec3(uv, 1.0));
    // Two unit-radius spheres, offset along x so that they overlap.
    vec3 p0 = vec3(0.5, 0.0, 0.0);
    float t0 = sphere(ro, rd, p0, 1.0);
    vec3 p1 = vec3(-0.05, 0.0, 0.0);
    float t1 = sphere(ro, rd, p1, 1.0);
    // Shade only the pixels where the ray hits both spheres.
    fragColor = vec4(step(0.0, t0) * step(0.0, t1) * rd, 1.0);
}
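The last line is what restricts shading to the common part: step(0.0, t0) * step(0.0, t1) is 1.0 only where the ray hits both spheres. Other boolean combinations can be masked the same way; a rough sketch (note these are per-pixel silhouette masks rather than a general CSG solution, which is sufficient for two overlapping convex shapes like these):

float hitA = step(0.0, t0);                // 1.0 where sphere A is hit
float hitB = step(0.0, t1);                // 1.0 where sphere B is hit
float maskIntersect = hitA * hitB;         // A AND B (used above)
float maskUnion     = max(hitA, hitB);     // A OR B
float maskDiffAB    = hitA * (1.0 - hitB); // A AND NOT B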

Related

GLSL compute shader flickering blocks/squares artifact

I'm trying to write a bare minimum GPU raycaster using compute shaders in OpenGL. I'm confident the raycasting itself is functional, as I've gotten clean outlines of bounding boxes via a ray-box intersection algorithm.
However, when attempting ray-triangle intersection, I get strange artifacts. My shader is programmed to simply test for a ray-triangle intersection, and color the pixel white if an intersection was found and black otherwise. Instead of the expected behavior, when the triangle should be visible onscreen, the screen is instead filled with black and white squares/blocks/tiles which flicker randomly like TV static. The squares are at most 8x8 pixels (the size of my compute shader blocks), although there are dots as small as single pixels as well. The white blocks generally lie in the expected area of my triangle, although sometimes they are spread out across the bottom of the screen as well.
Here is a video of the artifact. In my full shader the camera can be rotated around and the shape appears more triangle-like, but the flickering artifact is the key issue and still appears in this video which I generated from the following minimal version of my shader code:
layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in;

uvec2 DIMS = gl_NumWorkGroups.xy * gl_WorkGroupSize.xy;
uvec2 UV = gl_GlobalInvocationID.xy;
vec2 uvf = vec2(UV) / vec2(DIMS);

layout(location = 1, rgba8) uniform writeonly image2D brightnessOut;

struct Triangle
{
    vec3 v0;
    vec3 v1;
    vec3 v2;
};

struct Ray
{
    vec3 origin;
    vec3 direction;
    vec3 inv;
};

// Wikipedia Moller-Trumbore algorithm, GLSL-ified
bool ray_triangle_intersection(vec3 rayOrigin, vec3 rayVector,
    in Triangle inTriangle, out vec3 outIntersectionPoint)
{
    const float EPSILON = 0.0000001;
    vec3 vertex0 = inTriangle.v0;
    vec3 vertex1 = inTriangle.v1;
    vec3 vertex2 = inTriangle.v2;
    vec3 edge1 = vec3(0.0);
    vec3 edge2 = vec3(0.0);
    vec3 h = vec3(0.0);
    vec3 s = vec3(0.0);
    vec3 q = vec3(0.0);
    float a = 0.0, f = 0.0, u = 0.0, v = 0.0;
    edge1 = vertex1 - vertex0;
    edge2 = vertex2 - vertex0;
    h = cross(rayVector, edge2);
    a = dot(edge1, h);
    // Test if ray is parallel to this triangle.
    if (a > -EPSILON && a < EPSILON)
    {
        return false;
    }
    f = 1.0 / a;
    s = rayOrigin - vertex0;
    u = f * dot(s, h);
    if (u < 0.0 || u > 1.0)
    {
        return false;
    }
    q = cross(s, edge1);
    v = f * dot(rayVector, q);
    if (v < 0.0 || u + v > 1.0)
    {
        return false;
    }
    // At this stage we can compute t to find out where the intersection point is on the line.
    float t = f * dot(edge2, q);
    if (t > EPSILON) // ray intersection
    {
        outIntersectionPoint = rayOrigin + rayVector * t;
        return true;
    }
    return false;
}

void main()
{
    // Generate rays by calculating the distance from the eye
    // point to the screen and combining it with the pixel indices
    // to produce a ray through this invocation's pixel
    const float HFOV = (3.14159265359 / 180.0) * 45.0;
    const float WIDTH_PX = 1280.0;
    const float HEIGHT_PX = 720.0;
    float VIEW_PLANE_D = (WIDTH_PX / 2.0) / tan(HFOV / 2.0);
    vec2 rayXY = vec2(UV) - vec2(WIDTH_PX / 2.0, HEIGHT_PX / 2.0);

    // Rays have origin at (0, 0, 20) and generally point towards (0, 0, -1)
    Ray r;
    r.origin = vec3(0.0, 0.0, 20.0);
    r.direction = normalize(vec3(rayXY, -VIEW_PLANE_D));
    r.inv = 1.0 / r.direction;

    // Triangle in XY plane at Z=0
    Triangle debugTri;
    debugTri.v0 = vec3(-20.0, 0.0, 0.0);
    debugTri.v1 = vec3(20.0, 0.0, 0.0);
    debugTri.v0 = vec3(0.0, 40.0, 0.0);

    // Test triangle intersection; write 1.0 if hit, else 0.0
    vec3 hitPosDebug = vec3(0.0);
    bool hitDebug = ray_triangle_intersection(r.origin, r.direction, debugTri, hitPosDebug);

    imageStore(brightnessOut, ivec2(UV), vec4(vec3(float(hitDebug)), 1.0));
}
I render the image to a fullscreen triangle using a normal sampler2D and rasterized triangle UVs chosen to map to screen space.
None of this code should be time-dependent, and I've tried multiple ray-triangle algorithms from various sources, including both branching and branch-free versions; all exhibit the same problem. This leads me to suspect some sort of memory-incoherency behavior I'm not familiar with, a driver issue, or a mistake I've made in configuring or dispatching my compute work (I dispatch 160x90x1 of my 8x8x1 blocks to cover my 1280x720 framebuffer texture).
I've found a few similar issues like this one on SE and the general internet, but they seem to almost exclusively be caused by using uninitialized variables, which I am not doing as far as I can tell. They mention that the pattern continues to move when viewed in the NSight debugger; while RenderDoc doesn't do that, the contents of the image do vary between draw calls even after the compute shader has finished. E.g. when inspecting the image in the compute draw call there is one pattern of artifacts, but when I scrub to the subsequent draw calls which use my image as input, the pattern in the image has changed despite nothing writing to the image.
I also found this post which seems very similar, but that one also seems to be caused by an uninitialized variable, which again I've been careful to avoid. I've also not been able to alleviate the issue by tweaking the code as they have done.
This post has a similar looking artifact which was a memory model problem, but I'm not using any shared memory.
I'm running the latest NVIDIA drivers (461.92) on a GTX 1070. I've tried inserting glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT); (as well as some of the other barrier types) after my compute shader dispatch, which I believe is the correct barrier when drawing with a sampler2D from a texture previously modified by image load/store, but it doesn't seem to change anything.
I just tried re-running it with glMemoryBarrier(GL_ALL_BARRIER_BITS); both before and after my dispatch call, so synchronization doesn't seem to be the issue.
Odds are that the cause of the problem sits somewhere between my chair and keyboard, but this kind of problem is outside my usual shader-debugging abilities, as I'm relatively new to OpenGL. Any ideas would be appreciated! Thanks.
I've fixed the issue, and it was (unsurprisingly) simply a stupid mistake on my own part.
Observe the following lines from my code snippet:

debugTri.v0 = vec3(-20.0, 0.0, 0.0);
debugTri.v1 = vec3(20.0, 0.0, 0.0);
debugTri.v0 = vec3(0.0, 40.0, 0.0);

The third line assigns v0 a second time instead of v2, which leaves my v2 vertex quite uninitialized.
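For completeness, the corrected triangle setup assigns each vertex exactly once:

debugTri.v0 = vec3(-20.0, 0.0, 0.0);
debugTri.v1 = vec3(20.0, 0.0, 0.0);
debugTri.v2 = vec3(0.0, 40.0, 0.0); // v2, as the "Triangle in XY plane" comment intended

With that fixed, the flickering artifact is gone.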
The moral of this story is that if you have a similar issue to the one I described above, and you swear up and down that you've initialized all your variables and it must be a driver bug or someone else's fault... quadruple-check your variables, you probably forgot to initialize one.

How can I make GL_POINTS overlap to look like spheres?

I am attempting to create a voxel style game, and I want to use GL_POINTS to simulate spherical voxels.
I am aiming to have them look like 3d spheres without having to render an actual sphere with many vertices.
However, when I create a mass of GL_POINTS, they overlap in a way that makes it obvious that they are flat circular sprites.
Here is an example:
(image: my GL_POINTS overlapping, showing the flat circular sprites)
I would like to have the circular GL_POINTS overlap in a way that makes them look like spheres being squished together and hiding parts of each other.
For an example of what I would like to achieve, here is an image showing Star Defenders 3D by Eric Gurt, in which he used spherical points as voxels in Javascript for his levels:
(image: points that look like spheres)
As you can see, where the points overlap, they hide parts of each other creating the illusion that they are 3d spheres instead of circular sprites.
Is there a way to replicate this in OpenGL?
I am using OpenGL 3.3.0.
I have finally implemented a way to make points look like spheres, by writing to gl_FragDepth.
This is the code from my fragment shader that turns a square GL_POINT into a sphere (no lighting):
void makeSphere()
{
    // Map gl_PointCoord from [0, 1] to [-1, 1] and clamp fragments to a circle.
    vec2 mapping = gl_PointCoord * 2.0F - 1.0F;
    float d = dot(mapping, mapping);
    if (d >= 1.0F)
    {
        // Discard fragments outside the circle (offset > 0.5 in gl_PointCoord units).
        discard;
    }
    // Reconstruct the sphere normal from the 2D offset.
    float z = sqrt(1.0F - d);
    vec3 normal = vec3(mapping, z);
    // Rotate the view-space normal into world space.
    normal = mat3(transpose(viewMatrix)) * normal;
    // World-space position of this fragment on the sphere's surface.
    vec3 cameraPos = vec3(worldPos) + rad * normal;

    // Set the depth based on the new cameraPos.
    vec4 clipPos = projectionMatrix * viewMatrix * vec4(cameraPos, 1.0);
    float ndcDepth = clipPos.z / clipPos.w;
    // Map NDC depth into the current depth range (window-space depth).
    gl_FragDepth = ((gl_DepthRange.diff * ndcDepth) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;

    // Calculate a cheap ambient-occlusion term for the circle.
    if (bool(fAoc))
        ambientOcclusion = sqrt(1.0F - d * 0.5F);
}
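For this to work, the vertex shader must set gl_PointSize so the sprite covers the sphere's projected size (and the application must enable GL_PROGRAM_POINT_SIZE). A minimal sketch, assuming the uniform names viewMatrix, projectionMatrix and rad match the fragment shader above, and that the application supplies a viewportHeight uniform:

#version 330 core
layout(location = 0) in vec3 aPos;

uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform float rad;            // sphere radius in world units
uniform float viewportHeight; // viewport height in pixels (assumed uniform)

out vec4 worldPos;

void main()
{
    worldPos = vec4(aPos, 1.0);
    vec4 viewPos = viewMatrix * worldPos;
    gl_Position = projectionMatrix * viewPos;
    // projectionMatrix[1][1] is cot(fovy/2); this converts the world-space
    // radius to a pixel diameter at the point's view-space depth.
    gl_PointSize = viewportHeight * projectionMatrix[1][1] * rad / -viewPos.z;
}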

Implementing anisotropic lighting for planar surfaces

I am trying to emulate some kind of specular reflection through anisotropy in my WebGL shader, something like the effect described in this tutorial.
I realized that while it is easy to get convincing results with spherical shapes like the Utah teapot, it is somewhat more difficult to get nice-looking lighting effects by implementing anisotropic lighting for planar or box-shaped geometries.
After reading the NVIDIA paper Anisotropic Lighting using HLSL, I started playing with the following shader:
vec3 surfaceToLight(vec3 p) {
    return normalize(p - v_worldPos);
}

vec3 anisoVector() {
    return vec3(0.0, 0.0, 1.0);
}

void main() {
    vec3 texColor = vec3(0.0);
    vec3 N = normalize(v_normal); // Interpolated directions need to be re-normalized
    vec3 L = surfaceToLight(lightPos);
    vec3 E = anisoVector();
    vec3 H = normalize(E + L);
    vec2 tCoords = vec2(0.0);
    tCoords.x = 2.0 * dot(H, N) - 1.0; // The value varies with the line of sight
    tCoords.y = 2.0 * dot(L, N) - 1.0; // Each vertex has a fixed value
    vec4 tex = texture2D(s_texture, tCoords);
    //vec3 anisoColor = tex.rgb; // show texture color only
    //vec3 anisoColor = tex.aaa; // show anisotropic term only
    vec3 anisoColor = tex.rgb * tex.aaa;
    texColor += material.specular * light.color * anisoColor;
    gl_FragColor = vec4(texColor, u_opacity);
}
Here is what I actually get (the geometries have no texture coordinates and no creased normals):
I am aware that I can't simply use the method described in the above-mentioned paper for everything but, at least, it seems a good starting point for achieving fast simulated anisotropic lighting.
Sadly, I am not able to fully understand the math used to create the texture, so I am asking if there is any method to either
tweak the texture coordinates in this part of the fragment shader
tCoords.x = 2.0 * dot(H, N) - 1.0;
tCoords.y = 2.0 * dot(L, N) - 1.0;
tweak the shape of the alpha channel inside the texture (below the RGBA layers and the result)
...to get nice-looking vertical specular reflections for planar geometries, like the cube on the right side.
Does anyone know anything about this?
BTW, for anyone interested, note that the texture must be mirrored:
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.MIRRORED_REPEAT);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.MIRRORED_REPEAT);
...and here is the texture as PNG (the original one from NVIDIA is in TGA format).

Strange Voxel Cone Tracing Results

I'm currently in the process of writing a Voxel Cone Tracing rendering engine with C++ and OpenGL. Everything is going rather well, except that I'm getting rather strange results for wider cone angles.
Right now, for the purposes of testing, all I am doing is shoot out one single cone along the fragment normal. I am only calculating 'indirect light'. For reference, here is the rather simple fragment shader I'm using:
#version 450 core

out vec4 FragColor;

in vec3 pos_fs;
in vec3 nrm_fs;

uniform sampler3D tex3D;

vec3 indirectDiffuse();
vec3 voxelTraceCone(const vec3 from, vec3 direction);

void main()
{
    FragColor = vec4(0, 0, 0, 1);
    FragColor.rgb += indirectDiffuse();
}

vec3 indirectDiffuse() {
    // Single cone in the direction of the normal
    vec3 ret = voxelTraceCone(pos_fs, nrm_fs);
    return ret;
}

vec3 voxelTraceCone(const vec3 origin, vec3 dir) {
    float max_dist = 1.0f;
    dir = normalize(dir);
    float current_dist = 0.01f;
    float apperture_angle = 0.01f; // Angle in radians.
    vec3 color = vec3(0.0f);
    float occlusion = 0.0f;
    float vox_size = 128.0f; // voxel map size
    while (current_dist < max_dist && occlusion < 1) {
        // Get cone diameter (tan = opposite / adjacent)
        float current_coneDiameter = 2.0f * current_dist * tan(apperture_angle * 0.5f);
        // Get mipmap level which should be sampled according to the cone diameter
        float vlevel = log2(current_coneDiameter * vox_size);

        vec3 pos_worldspace = origin + dir * current_dist;
        vec3 pos_texturespace = (pos_worldspace + vec3(1.0f)) * 0.5f; // [-1,1] coordinates to [0,1]

        vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel); // sample voxel
        vec3 color_read = voxel.rgb;
        float occlusion_read = voxel.a;

        // Front-to-back accumulation of color and occlusion
        color = occlusion * color + (1 - occlusion) * occlusion_read * color_read;
        occlusion = occlusion + (1 - occlusion) * occlusion_read;

        float dist_factor = 0.3f; // Lower = better results but higher performance hit
        current_dist += current_coneDiameter * dist_factor;
    }
    return color;
}
The tex3D uniform is the voxel 3d-texture.
Under a regular Phong shader (under which the voxel values are calculated) the scene looks like this:
For reference, this is what the voxel map (tex3D) (128x128x128) looks like when visualized:
Now we get to the actual problem I'm having. If I apply the shader above to the scene, I get following results:
For very small cone angles (apperture_angle = 0.01) I get roughly what you might expect: the voxelized scene is essentially 'reflected' on each surface:
Now if I increase the apperture angle to, for example 30 degrees (apperture_angle=0.52), I get this really strange 'wavy'-looking result:
I would have expected a much more similar result to the earlier one, just less specular. Instead I mostly get the outline of each object reflected in a specular manner, with some occasional pixels inside the outline. Considering this is meant to be the 'indirect lighting' in the scene, it won't look good even once I add the direct light.
I have tried different values for max_dist, current_dist etc., as well as shooting several cones instead of just one. The result remains similar, if not worse.
Does someone know what I'm doing wrong here, and how to get actual remotely realistic indirect light?
I suspect that the textureLod function somehow yields the wrong result for any LOD levels above 0, but I haven't been able to confirm this.
The mipmaps of the 3D texture were not being generated correctly.
In addition, there was no hard cap on vlevel, so every textureLod call that accessed a mipmap level above 1 returned a #000000 color.
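For reference, a minimal sketch of the vlevel cap inside voxelTraceCone; generating the mipmaps themselves happens on the host side (e.g. with glGenerateMipmap on the 3D texture after each voxelization pass):

// A 128^3 texture has mip levels 0..7, so clamp the sampled LOD accordingly.
float maxLevel = log2(vox_size); // 7.0 for vox_size = 128.0
float vlevel = clamp(log2(current_coneDiameter * vox_size), 0.0, maxLevel);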

What exactly happens with alpha colors after blending?

I am working on a test project where one big 3D quad is intersected by several other 3D quads (rotated differently). All quads have transparency (ranging from fully opaque to fully transparent). The small quads never overlap (well, they might, but the camera placement and the Z-buffer make sure that only those parts that may overlap actually do).
To render this, I first render the big quad to a different rgba framebuffer for later lookup.
Now I start rendering to the screen. A skybox is rendered first, then the small quads are rendered with alpha blending enabled: GL_SRC_ALPHA and GL_ONE_MINUS_SRC_ALPHA. After that I render the big quad again, this time also with alpha blending enabled. Of course, only the pixels in front of the quads will be rendered, because of the Z-buffer.
That's why, in the fragment shader of the small quads, I do a raytrace to find the intersection with the big quad. If the intersection point is BEHIND the small quad, I fetch the texel from the originally created framebuffer and blend this texel manually with the calculated small quad texel color.
But the result is not the same: the colors in front are rendered correctly (the GPU handles the blending there), but the colors behind them are "weird": lighter or darker, never quite matching. After consideration, I think this must be because I am not emitting the correct alpha value from my small-quads shader, so the GPU-performed blending changes the colors even more.
So, how exactly is the alpha value calculated when blending on the GPU with the above-mentioned blending method? In the OpenGL manual I find: A * Srgba + (1 - A) * Drgba. When I emit that result, the blending is not the same. I am quite sure this is because the result passes through the GPU blending again.
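For reference, with glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) the hardware applies the same source-alpha factors to the alpha channel as to the color channels, so what lands in the framebuffer is (written as a GLSL sketch):

// What fixed-function blending computes per fragment:
vec4 blend(vec4 src, vec4 dst)
{
    return vec4(src.a * src.rgb + (1.0 - src.a) * dst.rgb,  // color
                src.a * src.a   + (1.0 - src.a) * dst.a);   // alpha
}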
The ray tracing is correct, I'm certain of that.
So, what should I do with my manual blending?
Or, should I use another method to get the right effect?
Not really necessary, I believe, but here is some code (optimization is for later):
vec4 vRayOrg = vec4(0.0, 0.0, 0.0, 1.0);
vec4 vRayDir = vec4(normalize(vBBPosition.xyz), 0.0);
vec4 vPlaneOrg = mView * vec4(0.0, 0.0, 0.0, 1.0);
vec4 vPlaneNormal = mView * vec4(0.0, 1.0, 0.0, 0.0);

// Ray/plane intersection in view space
float div = dot(vRayDir.xyz, vPlaneNormal.xyz);
float t = dot(vPlaneOrg.xyz - vRayOrg.xyz, vPlaneNormal.xyz) / div;
vec4 pIntersection = vRayOrg + t * vRayDir;

vec3 normal = normalize(vec4(vCoord, 0.0, 0.0)).xyz;
vec3 vLight = normalize(vPosition.xyz / vPosition.w);
float distance = length(vCoord) - 1.0;
float distf = clamp(0.225 - distance, 0.0, 1.0);
float diffuse = clamp(0.25 + dot(-normal, vLight), 0.0, 1.0);
vec4 cOut = diffuse * 9.0 * distf * cGlow;

// If the big quad's intersection point lies behind this fragment,
// blend manually with the texel from the lookup framebuffer.
if (distance > 0.0 && t >= 0.0 && pIntersection.z <= vBBPosition.z)
{
    vec2 vTexcoord = (vProjectedPosition.xy / vProjectedPosition.w) * 0.5 + 0.5;
    vec4 cLookup = texture(tLookup, vTexcoord);
    cOut = cLookup.a * cLookup + (1.0 - cLookup.a) * cOut;
}

vFragColor = cOut;