I'm trying to write a bare minimum GPU raycaster using compute shaders in OpenGL. I'm confident the raycasting itself is functional, as I've gotten clean outlines of bounding boxes via a ray-box intersection algorithm.
However, when attempting ray-triangle intersection, I get strange artifacts. My shader is programmed to simply test for a ray-triangle intersection, and color the pixel white if an intersection was found and black otherwise. Instead of the expected behavior, when the triangle should be visible onscreen the screen is filled with black and white squares/blocks/tiles which flicker randomly like TV static. The squares are at most 8x8 pixels (the size of my compute shader blocks), although there are dots as small as single pixels as well. The white blocks generally lie in the expected area of my triangle, although sometimes they are spread out across the bottom of the screen as well.
Here is a video of the artifact. In my full shader the camera can be rotated around and the shape appears more triangle-like, but the flickering artifact is the key issue and still appears in this video which I generated from the following minimal version of my shader code:
layout(local_size_x = 8, local_size_y = 8, local_size_z = 1) in;
uvec2 DIMS = gl_NumWorkGroups.xy*gl_WorkGroupSize.xy;
uvec2 UV = gl_GlobalInvocationID.xy;
vec2 uvf = vec2(UV) / vec2(DIMS);
layout(location = 1, rgba8) uniform writeonly image2D brightnessOut;
struct Triangle
{
vec3 v0;
vec3 v1;
vec3 v2;
};
struct Ray
{
vec3 origin;
vec3 direction;
vec3 inv;
};
// Wikipedia Moller-Trumbore algorithm, GLSL-ified
bool ray_triangle_intersection(vec3 rayOrigin, vec3 rayVector,
in Triangle inTriangle, out vec3 outIntersectionPoint)
{
const float EPSILON = 0.0000001;
vec3 vertex0 = inTriangle.v0;
vec3 vertex1 = inTriangle.v1;
vec3 vertex2 = inTriangle.v2;
vec3 edge1 = vec3(0.0);
vec3 edge2 = vec3(0.0);
vec3 h = vec3(0.0);
vec3 s = vec3(0.0);
vec3 q = vec3(0.0);
float a = 0.0, f = 0.0, u = 0.0, v = 0.0;
edge1 = vertex1 - vertex0;
edge2 = vertex2 - vertex0;
h = cross(rayVector, edge2);
a = dot(edge1, h);
// Test if ray is parallel to this triangle.
if (a > -EPSILON && a < EPSILON)
{
return false;
}
f = 1.0/a;
s = rayOrigin - vertex0;
u = f * dot(s, h);
if (u < 0.0 || u > 1.0)
{
return false;
}
q = cross(s, edge1);
v = f * dot(rayVector, q);
if (v < 0.0 || u + v > 1.0)
{
return false;
}
// At this stage we can compute t to find out where the intersection point is on the line.
float t = f * dot(edge2, q);
if (t > EPSILON) // ray intersection
{
outIntersectionPoint = rayOrigin + rayVector * t;
return true;
}
return false;
}
void main()
{
// Generate rays by calculating the distance from the eye
// point to the screen and combining it with the pixel indices
// to produce a ray through this invocation's pixel
const float HFOV = (3.14159265359/180.0)*45.0;
const float WIDTH_PX = 1280.0;
const float HEIGHT_PX = 720.0;
float VIEW_PLANE_D = (WIDTH_PX/2.0)/tan(HFOV/2.0);
vec2 rayXY = vec2(UV) - vec2(WIDTH_PX/2.0, HEIGHT_PX/2.0);
// Rays have origin at (0, 0, 20) and generally point towards (0, 0, -1)
Ray r;
r.origin = vec3(0.0, 0.0, 20.0);
r.direction = normalize(vec3(rayXY, -VIEW_PLANE_D));
r.inv = 1.0 / r.direction;
// Triangle in XY plane at Z=0
Triangle debugTri;
debugTri.v0 = vec3(-20.0, 0.0, 0.0);
debugTri.v1 = vec3(20.0, 0.0, 0.0);
debugTri.v0 = vec3(0.0, 40.0, 0.0);
// Test triangle intersection; write 1.0 if hit, else 0.0
vec3 hitPosDebug = vec3(0.0);
bool hitDebug = ray_triangle_intersection(r.origin, r.direction, debugTri, hitPosDebug);
imageStore(brightnessOut, ivec2(UV), vec4(vec3(float(hitDebug)), 1.0));
}
I render the image to a fullscreen triangle using a normal sampler2D and rasterized triangle UVs chosen to map to screen space.
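For reference, here is a minimal sketch of the kind of fullscreen-triangle pass I mean (placeholder names and the gl_VertexID trick, not my actual code):
// --- vertex shader: one oversized triangle generated from gl_VertexID,
// --- with UVs chosen so that [0,1]x[0,1] covers the visible screen
#version 430 core
out vec2 uv;
void main()
{
    vec2 pos = vec2((gl_VertexID << 1) & 2, gl_VertexID & 2); // (0,0), (2,0), (0,2)
    uv = pos;
    gl_Position = vec4(pos * 2.0 - 1.0, 0.0, 1.0);
}
// --- fragment shader: sample the image written by the compute shader
#version 430 core
uniform sampler2D brightnessTex; // placeholder name for the compute output texture
in vec2 uv;
out vec4 fragColor;
void main()
{
    fragColor = texture(brightnessTex, uv);
}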
None of my code should be time dependent, and I've tried multiple ray-triangle algorithms from various sources, including both branching and branch-free versions, and all exhibit the same problem. This leads me to suspect some sort of memory incoherency behavior I'm not familiar with, a driver issue, or a mistake I've made in configuring or dispatching my compute work (I dispatch 160x90x1 of my 8x8x1 blocks to cover my 1280x720 framebuffer texture).
I've found a few similar issues like this one on SE and around the internet, but they seem to be caused almost exclusively by uninitialized variables, which I am not using as far as I can tell. They mention that the pattern continues to move when viewed in the NSight debugger; RenderDoc doesn't show that, but the contents of the image do vary between draw calls even after the compute shader has finished. E.g. when inspecting the image at the compute dispatch there is one pattern of artifacts, but when I scrub to the subsequent draw calls which use my image as input, the pattern in the image has changed despite nothing writing to the image.
I also found this post which seems very similar, but that one also seems to be caused by an uninitialized variable, which again I've been careful to avoid. I've also not been able to alleviate the issue by tweaking the code as they have done.
This post has a similar looking artifact which was a memory model problem, but I'm not using any shared memory.
I'm running the latest NVidia drivers (461.92) on a GTX 1070. I've tried inserting glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT); (as well as some of the other barrier types) after my compute shader dispatch which I believe is the correct barrier to use if using a sampler2D to draw a texture that was previously modified by an image load/store operation, but it doesn't seem to change anything.
I just tried re-running it with glMemoryBarrier(GL_ALL_BARRIER_BITS); both before and after my dispatch call, so synchronization doesn't seem to be the issue.
Odds are that the cause of the problem lies somewhere between my chair and keyboard, but this kind of problem lies outside my usual shader debugging abilities as I'm relatively new to OpenGL. Any ideas would be appreciated! Thanks.
I've fixed the issue, and it was (unsurprisingly) simply a stupid mistake on my own part.
Observe the following lines from my code snippet:
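debugTri.v0 = vec3(-20.0, 0.0, 0.0);
debugTri.v1 = vec3(20.0, 0.0, 0.0);
debugTri.v0 = vec3(0.0, 40.0, 0.0);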
Note the third line: it assigns to v0 a second time instead of v2, which leaves my v2 vertex quite uninitialized.
The moral of this story is that if you have a similar issue to the one I described above, and you swear up and down that you've initialized all your variables and it must be a driver bug or someone else's fault... quadruple-check your variables, you probably forgot to initialize one.
Related
I'm trying to implement Normal Mapping, using a simple cube that I created. I followed this tutorial https://learnopengl.com/Advanced-Lighting/Normal-Mapping but I can't really work out how normal mapping should be done when drawing 3D objects, since the tutorial uses a 2D object.
In particular, my cube seems almost correctly lit, but I think there's something that isn't working the way it should. To help me out, I'm using a geometry shader that outputs the normals as green vectors and the tangents as red vectors. Here I post three screenshots of my work.
Directly lighted
Side lighted
Here I actually tried calculating my normals and tangents in a different way (quite wrong).
In the first image I calculate my cube's normals and tangents one face at a time. This seems to work for that face, but if I rotate my cube I think the lighting on the adjacent face is wrong. As you can see in the second image, it's not totally absent.
In the third image I tried summing all normals and tangents per vertex, as I think it should be done, but the result seems quite wrong, since there is too little lighting.
In the end, my question is how I should calculate normals and tangents.
Should I do per-face calculations, or sum the vectors per vertex across all the faces that share it, or something else?
EDIT --
I'm passing the normal and tangent to the vertex shader and setting up my TBN matrix. But as you can see in the first image, when drawing my cube face by face, the faces adjacent to the one I'm looking at directly (which is well lit) are not lit correctly, and I don't know why. I thought I wasn't correctly calculating my per-face normal and tangent, and that calculating a normal and tangent that take the object as a whole into account could be the right way.
If it's right to calculate the normal and tangent as shown in the second image (green normal, red tangent) to set up the TBN matrix, why does the right face seem poorly lit?
EDIT 2 --
Vertex shader:
void main(){
texture_coordinates = textcoord;
fragment_position = vec3(model * vec4(position,1.0));
mat3 normalMatrix = transpose(inverse(mat3(model)));
vec3 T = normalize(normalMatrix * tangent);
vec3 N = normalize(normalMatrix * normal);
T = normalize(T - dot(T, N) * N);
vec3 B = cross(N, T);
mat3 TBN = transpose(mat3(T,B,N));
view_position = TBN * viewPos; // camera position
light_position = TBN * lightPos; // light position
fragment_position = TBN * fragment_position;
gl_Position = projection * view * model * vec4(position,1.0);
}
In the VS I set up my TBN matrix and transform the light, view, and fragment positions to tangent space; doing so, I won't have to do any other calculation in the fragment shader.
Fragment shader:
void main() {
vec3 Normal = texture(TextSamplerNormals,texture_coordinates).rgb; // extract normal
Normal = normalize(Normal * 2.0 - 1.0); // correct range
material_color = texture2D(TextSampler,texture_coordinates.st); // diffuse map
vec3 I_amb = AmbientLight.color * AmbientLight.intensity;
vec3 lightDir = normalize(light_position - fragment_position);
vec3 I_dif = vec3(0,0,0);
float DiffusiveFactor = max(dot(lightDir,Normal),0.0);
vec3 I_spe = vec3(0,0,0);
float SpecularFactor = 0.0;
if (DiffusiveFactor>0.0) {
I_dif = DiffusiveLight.color * DiffusiveLight.intensity * DiffusiveFactor;
vec3 vertex_to_eye = normalize(view_position - fragment_position);
vec3 light_reflect = reflect(-lightDir,Normal);
light_reflect = normalize(light_reflect);
SpecularFactor = pow(max(dot(vertex_to_eye,light_reflect),0.0),SpecularLight.power);
if (SpecularFactor>0.0) {
I_spe = DiffusiveLight.color * SpecularLight.intensity * SpecularFactor;
}
}
color = vec4(material_color.rgb * (I_amb + I_dif + I_spe),material_color.a);
}
Handling discontinuity vs continuity
You are thinking about this the wrong way.
Depending on the use case, your normal map may be continuous or discontinuous. For example, on your cube, imagine each face had a different surface type; then the normals would be different depending on which face you are currently on.
Which normal you use is determined by the texture itself and not by any blending in the fragment shader.
The actual algorithm is
Load rgb values of normal
Convert to -1 to 1 range
Rotate by the model matrix
Use new value in shading calculations
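A minimal GLSL sketch of those four steps (normalMap, lightDir and the mat3 parameter are placeholder names; for a tangent-space map like the one in the question you would rotate by the TBN basis rather than the model matrix):
float diffuseFromNormalMap(sampler2D normalMap, vec2 uv, mat3 model, vec3 lightDir)
{
    vec3 n = texture(normalMap, uv).rgb;   // 1. load the RGB values of the normal
    n = n * 2.0 - 1.0;                     // 2. convert to the -1 to 1 range
    n = normalize(model * n);              // 3. rotate by the model matrix (passed here as a mat3)
    return max(dot(n, lightDir), 0.0);     // 4. use the new value in the shading calculations
}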
If you want continuous normals, then you need to make sure that the charts in texture space that you use agree at their boundaries, i.e. the limits of the texture coordinates have to match up.
Mathematically, that means: if U and V are regions of R^2 that map to the normal field N of your shape, and f is the mapping function, then it should hold that:
If lim S(x_1, x_2) = lim S(y_1, y_2), where {x_1, x_2} \subset U and {y_1, y_2} \subset V, then lim f(x_1, x_2) = lim f(y_1, y_2).
In plain English: if the coordinates in your chart map to positions that are close on the shape, then the normals they map to should also be close in normal space.
TL;DR: do not blend in the fragment shader. This is something that should be done by the normal map itself when it is baked, not by you when rendering.
Handling the tangent space
You have 2 options. Option 1: you pass the tangent T and the normal N to the shader, in which case the binormal B is T × N and the basis {T, N, B} gives you the true space where normals need to be expressed.
Assume that in tangent space x is side, y is forward and z is up. Your transformed normal then becomes x·B + y·T + z·N.
Option 2: if you do not pass the tangent, you must first create an arbitrary vector that is orthogonal to the normal, then use this as the tangent.
(Note: N is the model normal, while (x, y, z) is the normal read from the normal map.)
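For option 2, a minimal GLSL sketch of building such a basis from the normal alone (the helper-vector choice is just one common heuristic, and the names are placeholders):
vec3 perturbNormal(vec3 N, vec3 sampleRGB) // N: model normal, sampleRGB: raw normal-map texel in [0,1]
{
    // pick any helper vector that is not parallel to N
    vec3 helper = abs(N.y) < 0.99 ? vec3(0.0, 1.0, 0.0) : vec3(1.0, 0.0, 0.0);
    vec3 T = normalize(cross(helper, N)); // arbitrary tangent orthogonal to N
    vec3 B = cross(N, T);                 // binormal completing the basis
    vec3 n = sampleRGB * 2.0 - 1.0;       // convert to the -1 to 1 range
    // following the convention above (x -> B, y -> T, z -> N); with an
    // arbitrary tangent the choice between T and B is immaterial
    return normalize(n.x * B + n.y * T + n.z * N);
}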
I am trying to emulate some kind of specular reflection through anisotropy in my WebGL shader, something like what is described in this tutorial here.
I realized that while it is easy to get convincing results with spherical shapes like the Utah teapot, it is somewhat more difficult to get nice-looking lighting effects by implementing anisotropic lighting for planar or squared geometries.
After reading the paper Anisotropic Lighting using HLS from nVIDIA, I started playing with the following shader:
vec3 surfaceToLight(vec3 p) {
return normalize(p - v_worldPos);
}
vec3 anisoVector() {
return vec3(0.0,0.0,1.0);
}
void main() {
vec3 texColor = vec3(0.0);
vec3 N = normalize(v_normal); // Interpolated directions need to be re-normalized
vec3 L = surfaceToLight(lightPos);
vec3 E = anisoVector();
vec3 H = normalize(E + L);
vec2 tCoords = vec2(0.0);
tCoords.x = 2.0 * dot(H, N) - 1.0; //The value varies with the line of sight
tCoords.y = 2.0 * dot(L, N) - 1.0; //each vertex has a fixed value
vec4 tex = texture2D(s_texture, tCoords);
//vec3 anisoColor = tex.rgb; // show texture color only
//vec3 anisoColor = tex.aaa; // show anisotropic term only
vec3 anisoColor = tex.rgb * tex.aaa;
texColor += material.specular * light.color * anisoColor;
gl_FragColor = vec4(texColor, u_opacity);
}
Here is what I actually get (the geometries are without texture coordinates and without creased normals):
I am aware that I can't simply use the method described in the above-mentioned paper for everything, but at least it seems to me a good starting point to achieve fast simulated anisotropic lighting.
Sadly, I am not able to fully understand the math used to create the texture and so I am asking if there is any method to either
tweak the texture coordinates in this part of the fragment shader
tCoords.x = 2.0 * dot(H, N) - 1.0;
tCoords.y = 2.0 * dot(L, N) - 1.0;
tweak the shape of the alpha channel inside the texture (below the RGBA layers and the result)
...to get nice-looking vertical specular reflections for planar geometries, like the cube on the right side.
Does anyone know anything about this?
BTW, if anyone is interested, note that the texture has to be mirrored:
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.MIRRORED_REPEAT);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.MIRRORED_REPEAT);
...and here is the texture as PNG (the original one from nVIDIA is in TGA format).
I'm currently in the process of writing a Voxel Cone Tracing rendering engine with C++ and OpenGL. Everything is going rather well, except that I'm getting strange results for wider cone angles.
Right now, for the purposes of testing, all I am doing is shoot out one single cone along the fragment normal. I am only calculating 'indirect light'. For reference, here is the rather simple fragment shader I'm using:
#version 450 core
out vec4 FragColor;
in vec3 pos_fs;
in vec3 nrm_fs;
uniform sampler3D tex3D;
vec3 indirectDiffuse();
vec3 voxelTraceCone(const vec3 from, vec3 direction);
void main()
{
FragColor = vec4(0, 0, 0, 1);
FragColor.rgb += indirectDiffuse();
}
vec3 indirectDiffuse(){
// singular cone in direction of the normal
vec3 ret = voxelTraceCone(pos_fs, nrm_fs);
return ret;
}
vec3 voxelTraceCone(const vec3 origin, vec3 dir) {
float max_dist = 1.0f;
dir = normalize(dir);
float current_dist = 0.01f;
float apperture_angle = 0.01f; //Angle in Radians.
vec3 color = vec3(0.0f);
float occlusion = 0.0f;
float vox_size = 128.0f; //voxel map size
while(current_dist < max_dist && occlusion < 1) {
//Get cone diameter (tan = cathetus / cathetus)
float current_coneDiameter = 2.0f * current_dist * tan(apperture_angle * 0.5f);
//Get mipmap level which should be sampled according to the cone diameter
float vlevel = log2(current_coneDiameter * vox_size);
vec3 pos_worldspace = origin + dir * current_dist;
vec3 pos_texturespace = (pos_worldspace + vec3(1.0f)) * 0.5f; //[-1,1] Coordinates to [0,1]
vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel); //get voxel
vec3 color_read = voxel.rgb;
float occlusion_read = voxel.a;
color = occlusion*color + (1 - occlusion) * occlusion_read * color_read;
occlusion = occlusion + (1 - occlusion) * occlusion_read;
float dist_factor = 0.3f; //Lower = better results but higher performance hit
current_dist += current_coneDiameter * dist_factor;
}
return color;
}
The tex3D uniform is the voxel 3d-texture.
Under a regular Phong shader (under which the voxel values are calculated) the scene looks like this:
For reference, this is what the voxel map (tex3D) (128x128x128) looks like when visualized:
Now we get to the actual problem I'm having. If I apply the shader above to the scene, I get following results:
For very small cone angles (apperture_angle=0.01) I get roughly what you might expect: The voxelized scene is essentially 'reflected' perpendicularly on each surface:
Now if I increase the apperture angle to, for example 30 degrees (apperture_angle=0.52), I get this really strange 'wavy'-looking result:
I would have expected a result much more similar to the earlier one, just less specular. Instead I get mostly the outline of each object reflected in a specular manner, with some occasional pixels inside the outline. Considering this is meant to be the 'indirect lighting' in the scene, it won't look exactly good even if I add the direct light.
I have tried different values for max_dist, current_dist etc. as well as shooting several cones instead of just one. The result remains similar, if not worse.
Does someone know what I'm doing wrong here, and how to get actual remotely realistic indirect light?
I suspect that the textureLod function somehow yields the wrong result for any LOD levels above 0, but I haven't been able to confirm this.
The Mipmaps of the 3D texture were not being generated correctly.
In addition, there was no hard cap on vlevel, which led to every textureLod call that accessed any mipmap level above 1 returning a #000000 color.
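A minimal sketch of the cap, inside the marching loop, assuming the 128³ texture from the question (so only mip levels 0 through log2(128) = 7 exist); the exact clamp shown here is an illustration rather than the original fix:
float vlevel = log2(current_coneDiameter * vox_size);
vlevel = clamp(vlevel, 0.0, log2(vox_size)); // never sample a mipmap level that was not generated
vec4 voxel = textureLod(tex3D, pos_texturespace, vlevel);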
I'm creating a math app that uses OpenGL to show several geometric 3D shapes. In some cases these shapes intersect and when this happen I want to show the intersecting part of each shape inside the other. As the both shapes are translucent, what I need is more or less something like this:
However, with 2 translucent spheres I get that result instead:
I know that, to achieve a correct transparency effect, depth testing should be turned off before the transparent shapes are drawn. However, this causes another side effect when a transparent shape is behind another, non-transparent shape:
So, is there a way to show correctly the intersecting part of 2 volumetric shapes, each inside the other, without breaking the depth testing?
So, is there a way to show (…) volumetric shapes
OpenGL (by itself) doesn't know about "volumes". It knows flat triangles, lines and points, which by pure happenstance also may cause rendering side effects like depth sorting by a depth buffer test.
Technically it is possible to chain a series of drawing and stencil buffer operations to perform CSG (constructive solid geometry); see ftp://ftp.sgi.com/opengl/contrib/blythe/advanced99/notes/node22.html for details.
However, what you want to do is more easily implemented through a simple raytracer executed in the fragment shader. Now, raytracing by itself is a broad subject and you could fill books on it (actually, lots of books have been written on the subject). It's probably best to refer to an example. In this case I refer to the following ShaderToy https://www.shadertoy.com/view/ldS3DW – a slightly stripped-down version of that shader draws the kind of intersectional geometry you're interested in:
float sphere(vec3 ray, vec3 dir, vec3 center, float radius)
{
vec3 rc = ray-center; // ray origin relative to the sphere center
float c = dot(rc, rc) - (radius*radius); // quadratic terms of |rc + t*dir|^2 = radius^2
float b = dot(dir, rc);
float d = b*b - c; // discriminant
float t = -b - sqrt(abs(d)); // nearer root, i.e. distance to the first intersection
float st = step(0.0, min(t,d)); // 1.0 only if d >= 0 and t >= 0 (a real hit in front of the ray)
return mix(-1.0, t, st); // hit distance, or -1.0 for a miss
}
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
vec2 uv = (-1.0 + 2.0*fragCoord.xy / iResolution.xy) *
vec2(iResolution.x/iResolution.y, 1.0);
vec3 ro = vec3(0.0, 0.0, -3.0);
vec3 rd = normalize(vec3(uv, 1.0));
vec3 p0 = vec3(0.5, 0.0, 0.0);
float t0 = sphere(ro, rd, p0, 1.0);
vec3 p1 = vec3(-0.05, 0.0, 0.0);
float t1 = sphere(ro, rd, p1, 1.0);
fragColor = vec4( step(0.0,t0)*step(0.0,t1)*rd, 1.0 );
}
I'm trying to create geometry to represent the Earth in OpenGL. I have what's more or less a sphere (closer to the elliptical geoid that Earth is though). I map a texture of the Earth's surface (that's probably a mercator projection or something similar). The texture's UV coordinates correspond to the geometry's latitude and longitude. I have two issues that I'm unable to solve. I am using OpenSceneGraph but I think this is a general OpenGL / 3D programming question.
There's a texture seam that's very apparent. I'm sure this occurs because I don't know how to map the UV coordinates to XYZ where the seam occurs. I only map UV coords up to the last vertex before wrapping around... You'd need to map two different UV coordinates to the same XYZ vertex to eliminate the seam. Is there a commonly used trick to get around this, or am I just doing it wrong?
There's crazy swirly distortion going on at the poles. I'm guessing this is because I map a single UV point at the poles (for Earth, I use [0.5, 1] for the North Pole, and [0.5, 0] for the South Pole). What else would you do, though? I can sort of live with this... but it's extremely noticeable at lower-resolution meshes.
I've attached an image to show what I'm talking about.
The general way this is handled is by using a cube map, not a 2D texture.
However, if you insist on using a 2D texture, you have to create a break in your mesh's topology. The reason you get that longitudinal line is because you have one vertex with a texture coordinate of something like 0.9 or so, and its neighboring vertex has a texture coordinate of 0.0. What you really want is that the 0.9 one neighbors a 1.0 texture coordinate.
Doing this means replicating the position down one line of the sphere. So you have the same position used twice in your data. One is attached to a texture coordinate of 1.0 and neighbors a texture coordinate of 0.9. The other has a texture coordinate of 0.0, and neighbors a vertex with 0.1.
Topologically, you need to take a longitudinal slice down your sphere.
Your link really helped me out, furqan, thanks.
Why couldn't you figure it out? A point where I stumbled was that I didn't know you can exceed the [0, 1] interval when calculating the texture coordinates. That makes it a lot easier to jump from one side of the texture to the other, with OpenGL doing all the interpolation and without having to calculate the exact position where the texture actually ends.
You can also go a dirty way: interpolate the X,Y positions between the vertex shader and fragment shader, and recalculate the correct texture coordinate in the fragment shader. This may be somewhat slower, but it doesn't involve duplicate vertices and it's simpler, I think.
For example:
vertex shader:
#version 150 core
uniform mat4 projM;
uniform mat4 viewM;
uniform mat4 modelM;
in vec4 in_Position;
in vec2 in_TextureCoord;
out vec2 pass_TextureCoord;
out vec2 pass_xy_position;
void main(void) {
gl_Position = projM * viewM * modelM * in_Position;
pass_xy_position = in_Position.xy; // 2d spinning interpolates good!
pass_TextureCoord = in_TextureCoord;
}
fragment shader:
#version 150 core
uniform sampler2D texture1;
in vec2 pass_xy_position;
in vec2 pass_TextureCoord;
out vec4 out_Color;
#define PI 3.141592653589793238462643383279
void main(void) {
vec2 tc = pass_TextureCoord;
tc.x = (PI + atan(pass_xy_position.y, pass_xy_position.x)) / (2 * PI); // calculate angle and map it to 0..1
out_Color = texture(texture1, tc);
}
It took a long time to figure this extremely annoying issue out. I'm programming in C# in Unity and I didn't want to duplicate any vertices (that would cause future issues with my concept), so I went with the shader idea, and it works out pretty well. Although I'm sure the code could use some heavy-duty optimization, and I had to figure out how to port it over to Cg from this, it works. This is in case someone else runs across this post, as I did, looking for a solution to the same problem.
Shader "Custom/isoshader" {
Properties {
decal ("Base (RGB)", 2D) = "white" {}
}
SubShader {
Pass {
Fog { Mode Off }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#define PI 3.141592653589793238462643383279
sampler2D decal;
struct appdata {
float4 vertex : POSITION;
float4 texcoord : TEXCOORD0;
};
struct v2f {
float4 pos : SV_POSITION;
float4 tex : TEXCOORD0;
float3 pass_xy_position : TEXCOORD1;
};
v2f vert(appdata v){
v2f o;
o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
o.pass_xy_position = v.vertex.xyz;
o.tex = v.texcoord;
return o;
}
float4 frag(v2f i) : COLOR {
float3 tc = i.tex;
tc.x = (PI + atan2(i.pass_xy_position.x, i.pass_xy_position.z)) / (2 * PI);
float4 color = tex2D(decal, tc);
return color;
}
ENDCG
}
}
}
As Nicol Bolas said, some triangles have UV coordinates going from ~0.9 back to 0, so the interpolation messes up the texture around the seam. In my code, I've created this function to duplicate the vertices around the seam. This will create a sharp line splitting those vertices. If your texture has only water around the seam (the Pacific Ocean?), you may not notice this line. Hope it helps.
/**
* After spherical projection, some triangles have vertices with
* UV coordinates that are far away (0 to 1), because the Azimuth
* at 2*pi = 0. Interpolating between 0 to 1 creates artifacts
* around that seam (the whole texture is thinly repeated at
* the triangles around the seam).
* This function duplicates vertices around the seam to avoid
* these artifacts.
*/
void PlatonicSolid::SubdivideAzimuthSeam() {
if (m_texCoord == NULL) {
ApplySphericalProjection();
}
// to take note of the trianges in the seam
int facesSeam[m_numFaces];
// check all triangles, looking for triangles with vertices
// separated ~2π. First count.
int nSeam = 0;
for (int i=0;i < m_numFaces; ++i) {
// check the 3 vertices of the triangle
int a = m_faces[3*i];
int b = m_faces[3*i+1];
int c = m_faces[3*i+2];
// just check the seam in the azimuth
float ua = m_texCoord[2*a];
float ub = m_texCoord[2*b];
float uc = m_texCoord[2*c];
if (fabsf(ua-ub)>0.5f || fabsf(ua-uc)>0.5f || fabsf(ub-uc)>0.5f) {
//test::printValue("Face: ", i, "\n");
facesSeam[nSeam] = i;
++nSeam;
}
}
if (nSeam==0) {
// no changes
return;
}
// reserve more memory
int nVertex = m_numVertices;
m_numVertices += nSeam;
m_vertices = (float*)realloc((void*)m_vertices, 3*m_numVertices*sizeof(float));
m_texCoord = (float*)realloc((void*)m_texCoord, 2*m_numVertices*sizeof(float));
// now duplicate vertices in the seam
// (the number of triangles/faces is the same)
for (int i=0; i < nSeam; ++i, ++nVertex) {
int t = facesSeam[i]; // triangle index
// check the 3 vertices of the triangle
int a = m_faces[3*t];
int b = m_faces[3*t+1];
int c = m_faces[3*t+2];
// just check the seam in the azimuth
float u_ab = fabsf(m_texCoord[2*a] - m_texCoord[2*b]);
float u_ac = fabsf(m_texCoord[2*a] - m_texCoord[2*c]);
float u_bc = fabsf(m_texCoord[2*b] - m_texCoord[2*c]);
// select the vertex further away from the other 2
int f = 2;
if (u_ab >= 0.5f && u_ac >= 0.5f) {
c = a;
f = 0;
} else if (u_ab >= 0.5f && u_bc >= 0.5f) {
c = b;
f = 1;
}
m_vertices[3*nVertex] = m_vertices[3*c]; // x
m_vertices[3*nVertex+1] = m_vertices[3*c+1]; // y
m_vertices[3*nVertex+2] = m_vertices[3*c+2]; // z
// repeat u from texcoord
m_texCoord[2*nVertex] = 1.0f - m_texCoord[2*c];
m_texCoord[2*nVertex+1] = m_texCoord[2*c+1];
// change this face so all the vertices have close UV
m_faces[3*t+f] = nVertex;
}
}
One approach is like the one in the accepted answer. In the code generating the array of vertex attributes, you will have code like this:
// FOR EVERY TRIANGLE
const float threshold = 0.7;
if(tcoords_1.s > threshold || tcoords_2.s > threshold || tcoords_3.s > threshold)
{
if(tcoords_1.s < 1. - threshold)
{
tcoords_1.s += 1.;
}
if(tcoords_2.s < 1. - threshold)
{
tcoords_2.s += 1.;
}
if(tcoords_3.s < 1. - threshold)
{
tcoords_3.s += 1.;
}
}
If you have triangles which are not meridian-aligned you will also want glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);. You also need to use glDrawArrays since vertices with the same position will have different texture coords.
I think the better way to go is to eliminate the root of all evil, which is texture coords interpolation in this case. Since you know basically all about your sphere/ellipsoid, you can calculate texture coords, normals, etc. in the fragment shader based on position. This means that your CPU code generating vertex attributes will be much simpler and you can use indexed drawing again. And I don't think this approach is dirty. It's clean.
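A minimal fragment-shader sketch of that idea, assuming a unit sphere centered at the origin with the interpolated object-space position passed in as pass_position (the names here are placeholders):
#version 150 core
uniform sampler2D texture1;
in vec3 pass_position;   // object-space position, interpolated by the rasterizer
out vec4 out_Color;
#define PI 3.141592653589793238462643383279
void main(void) {
    vec3 p = normalize(pass_position);             // point on the unit sphere
    float u = 0.5 + atan(p.y, p.x) / (2.0 * PI);   // longitude mapped to [0, 1]
    float v = 0.5 + asin(p.z) / PI;                // latitude mapped to [0, 1]
    out_Color = texture(texture1, vec2(u, v));
}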