My aim is to pass an array of points to the shader, calculate each point's distance to the fragment, and paint the points as circles colored with a gradient that depends on that distance.
For example:
(from a working example I set up on Shadertoy)
Unfortunately it isn't clear to me how I should calculate and convert the coordinates passed for processing inside the shader.
What I'm currently trying is to pass two arrays of floats - one for the x position and one for the y position of each point - to the shader through uniforms, and then iterate over each point inside the shader like so:
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif
uniform float sourceX[100];
uniform float sourceY[100];
uniform vec2 resolution;
in vec4 gl_FragCoord;
varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;
void main()
{
    float intensity = 0.0;
    for(int i=0; i<100; i++)
    {
        vec2 source = vec2(sourceX[i],sourceY[i]);
        vec2 position = ( gl_FragCoord.xy / resolution.xy );
        float d = distance(position, source);
        intensity += exp(-0.5*d*d);
    }
    intensity=3.0*pow(intensity,0.02);
    if (intensity<=1.0)
        gl_FragColor=vec4(0.0,intensity*0.5,0.0,1.0);
    else if (intensity<=2.0)
        gl_FragColor=vec4(intensity-1.0, 0.5+(intensity-1.0)*0.5,0.0,1.0);
    else
        gl_FragColor=vec4(1.0,3.0-intensity,0.0,1.0);
}
But that doesn't work - and I believe it may be because I'm trying to work with the pixel coordinates without properly translating them. Could anyone explain to me how to make this work?
Update:
The current result is:
The sketch's code is:
PShader pointShader;
float[] sourceX;
float[] sourceY;
void setup()
{
  size(1024, 1024, P3D);
  background(255);
  sourceX = new float[100];
  sourceY = new float[100];
  for (int i = 0; i<100; i++)
  {
    sourceX[i] = random(0, 1023);
    sourceY[i] = random(0, 1023);
  }
  pointShader = loadShader("pointfrag.glsl", "pointvert.glsl");
  shader(pointShader, POINTS);
  pointShader.set("sourceX", sourceX);
  pointShader.set("sourceY", sourceY);
  pointShader.set("resolution", float(width), float(height));
}

void draw()
{
  for (int i = 0; i<100; i++) {
    strokeWeight(60);
    point(sourceX[i], sourceY[i]);
  }
}
while the vertex shader is:
#define PROCESSING_POINT_SHADER
uniform mat4 projection;
uniform mat4 transform;
attribute vec4 vertex;
attribute vec4 color;
attribute vec2 offset;
varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;
void main() {
  vec4 clip = transform * vertex;
  gl_Position = clip + projection * vec4(offset, 0, 0);
  vertColor = color;
  center = clip.xy;
  pos = offset;
}
Update:
Based on the comments it seems you have confused two different approaches:
1. Draw a single full-screen polygon, pass in the points, and calculate the final value once per fragment using a loop in the shader.
2. Draw bounding geometry for each point, calculate the density for just that one point in the fragment shader, and use additive blending to sum the densities of all points.
The other issue is that your points are given in pixels while the code expects a 0 to 1 range, so d is large and the points come out black. Fixing this as @RetoKoradi describes should address the black points, but I suspect you'll then run into ramp clipping issues when many points are in close proximity. Passing all the points into the shader also limits scalability and is inefficient unless the points cover most of the viewport.
As below, I think sticking with approach 2 is better. To restructure your code for it, remove the loop, don't pass in the array of points and use center as the point coordinate instead:
//calc center in pixel coordinates
vec2 centerPixels = (center * 0.5 + 0.5) * resolution.xy;
//find the distance in pixels (avoiding aspect ratio issues)
float dPixels = distance(gl_FragCoord.xy, centerPixels);
//scale down to the 0 to 1 range
float d = dPixels / resolution.y;
//write out the intensity
gl_FragColor = vec4(exp(-0.5*d*d));
Draw this to a texture (from comments: opengl-tutorial.org code and this question) with additive blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
Now that texture will contain intensity as it was after your original loop. In another fragment shader during a full screen pass (draw a single triangle that covers the whole viewport), continue with:
uniform sampler2D intensityTex;
...
float intensity = texture2D(intensityTex, gl_FragCoord.xy/resolution.xy).r;
intensity = 3.0*pow(intensity, 0.02);
...
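Putting those pieces together, the full-screen ramp pass could look roughly like this. This is only a sketch that reuses the ramp from your original shader and assumes resolution is set the same way as before:
uniform sampler2D intensityTex;
uniform vec2 resolution;

void main()
{
    float intensity = texture2D(intensityTex, gl_FragCoord.xy / resolution.xy).r;
    intensity = 3.0*pow(intensity, 0.02);
    if (intensity <= 1.0)
        gl_FragColor = vec4(0.0, intensity*0.5, 0.0, 1.0);
    else if (intensity <= 2.0)
        gl_FragColor = vec4(intensity-1.0, 0.5+(intensity-1.0)*0.5, 0.0, 1.0);
    else
        gl_FragColor = vec4(1.0, 3.0-intensity, 0.0, 1.0);
}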
The code you have shown is fine, assuming you're drawing a full screen polygon so the fragment shader runs once for each pixel. Potential issues are:
resolution isn't set correctly
The point coordinates aren't in the range 0 to 1 on the screen.
Although minor, d will be stretched by the aspect ratio, so you might be better off scaling the points up to pixel coordinates and dividing the distance by resolution.y (see the sketch just below this list).
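For instance, keeping your original loop (approach 1), the distance could be computed in pixels and scaled by resolution.y only, so the falloff is isotropic. A sketch of the loop body, assuming sourceX/sourceY are already in pixels as in your sketch:
vec2 sourcePixels = vec2(sourceX[i], sourceY[i]);               // already in pixels
float d = distance(gl_FragCoord.xy, sourcePixels) / resolution.y; // isotropic, 0..1-ish range
intensity += exp(-0.5*d*d);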
This looks pretty similar to creating a density field for 2D metaballs. For performance you're best off limiting the density function for each point so it doesn't go on forever, then splatting discs into a texture using additive blending. This saves processing the pixels a point doesn't affect (just like in deferred shading). The result is the density field, or in your case the per-pixel intensity.
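A minimal sketch of what such a clamped per-point falloff could look like when splatting, assuming the same centerPixels/resolution setup as the snippet above; the 0.1 cutoff radius is an arbitrary illustrative value:
float d = distance(gl_FragCoord.xy, centerPixels) / resolution.y;
float cutoff = 0.1;                                        // limit of the point's influence, same 0..1 units as d
float window = 1.0 - smoothstep(cutoff * 0.8, cutoff, d);  // fades to exactly zero at the cutoff
gl_FragColor = vec4(vec3(exp(-0.5*d*d) * window), 1.0);    // summed into the texture by additive blending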
These are a little related:
2D OpenGL ES Metaballs on android (no answers yet)
calculate light volume radius from intensity
gl_PointSize Corresponding to World Space Size
It looks like the point center and fragment position are in different coordinate spaces when you subtract them:
vec2 source = vec2(sourceX[i],sourceY[i]);
vec2 position = ( gl_FragCoord.xy / resolution.xy );
float d = distance(position, source);
Based on your explanation and code, sourceX and sourceY are in window coordinates, meaning that they are in units of pixels. gl_FragCoord is in the same coordinate space. And even though you don't show it directly, I assume that resolution is the size of the window in pixels.
This means that:
vec2 position = ( gl_FragCoord.xy / resolution.xy );
calculates the normalized position of the fragment within the window, in the range [0.0, 1.0] for both x and y. But then on the next line:
float d = distance(position, source);
you subtract source, which is still in window coordinates, from this position in normalized coordinates.
Since it looks like you wanted the distance in normalized coordinates, which makes sense, you'll also need to normalize source:
vec2 source = vec2(sourceX[i],sourceY[i]) / resolution.xy;
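For clarity, here is that fix in the context of the original loop (everything else stays the same):
for (int i = 0; i < 100; i++)
{
    vec2 source = vec2(sourceX[i], sourceY[i]) / resolution.xy; // now normalized like position
    vec2 position = gl_FragCoord.xy / resolution.xy;
    float d = distance(position, source);
    intensity += exp(-0.5*d*d);
}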
Related
I'm working on a game using GLSL shaders
I'm using Go with the Pixel library; it's a 2D game and there's no "camera" (I've had people suggest using a second camera to achieve this).
My current shader is just a basic grayscale shader
#version 330 core
in vec2 vTexCoords;
out vec4 fragColor;
uniform vec4 uTexBounds;
uniform sampler2D uTexture;
void main() {
    // Get our current screen coordinate
    vec2 t = (vTexCoords - uTexBounds.xy) / uTexBounds.zw;

    // Sum our 3 color channels
    float sum = texture(uTexture, t).r;
    sum += texture(uTexture, t).g;
    sum += texture(uTexture, t).b;

    // Divide by 3, and set the output to the result
    vec4 color = vec4( sum/3, sum/3, sum/3, 1.0);
    fragColor = color;
}
I want to carve a circle out of the shader's effect so that objects inside it show their true colors, almost like light is shining on them.
This is an example of what I'm trying to achieve
I can't really figure out what to search to find a shadertoy example or something that does this, but I've seen something similar before so I'm pretty sure it's possible.
To restate: I basically just want to exclude part of the screen from the shader's effect.
Not sure if using shaders is the best way to approach this, if there's another way then please let me know and I will remake the question.
You can easily extend this to use any arbitrary position as the "light."
Declare uniforms for the light's current location and a radius.
If the squared distance from that location to the current pixel is less than the squared radius, return the current color.
Otherwise, return its greyscale.
uniform vec2 light_location; // the light's position, in the same 0..1 space as t
uniform float radius;
...
vec2 displacement = t - light_location;
float distanceSq = displacement.x * displacement.x + displacement.y * displacement.y;
float radiusSq = radius * radius;
if (distanceSq < radiusSq) {
    fragColor = texture(uTexture, t);
} else {
    float sum = texture(uTexture, t).r;
    sum += texture(uTexture, t).g;
    sum += texture(uTexture, t).b;
    float grey = sum / 3.0;
    fragColor = vec4(grey, grey, grey, 1.0);
}
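If the light position is only available in the same pixel/world units that uTexBounds uses, one option is to normalize it the same way the shader already normalizes vTexCoords and use the result in place of light_location. This is just a sketch; light_pixels is a hypothetical uniform name:
uniform vec2 light_pixels; // hypothetical: light position in the same units as uTexBounds
...
vec2 light_uv = (light_pixels - uTexBounds.xy) / uTexBounds.zw; // now comparable to t
vec2 displacement = t - light_uv;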
I'm trying to achieve a circular vignette with GLSL, but the result is elliptical when the texture is rectangular. What is the correct way to keep it circular regardless of the texture size? The input texture size (resolution) can be either rectangular or square.
I tried a solution using the discard method, but this doesn't suit what I require, as I need to use smoothstep to get a gradient edge.
Current result:
GLSL shader:
varying vec2 v_texcoord;
uniform sampler2D u_texture;
uniform vec2 u_resolution;
vec4 applyVignette(vec4 color)
{
    vec2 position = (gl_FragCoord.xy / u_resolution) - vec2(0.5);
    float dist = length(position);
    float radius = 0.5;
    float softness = 0.02;
    float vignette = smoothstep(radius, radius - softness, dist);
    color.rgb = color.rgb - (1.0 - vignette);
    return color;
}

void main()
{
    vec4 color = texture2D(u_texture, v_texcoord);
    color = applyVignette(color);
    gl_FragColor = color;
}
You have to respect the aspect ratio when you calculate the distance to the center point of the circular view:
float dist = length(position * vec2(u_resolution.x/u_resolution.y, 1.0));
Note, if you have a rectangular viewport where the width is greater than the height, then a perfect circle is squeezed at its left and right into an ellipse when the coordinates are transformed from view space to normalized device space.
You must counteract this squeezing by scaling up the x axis of the distance vector.
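Putting the correction into the shader from the question, applyVignette could look like this (only the dist line changes):
vec4 applyVignette(vec4 color)
{
    vec2 position = (gl_FragCoord.xy / u_resolution) - vec2(0.5);
    // scale x by the aspect ratio so the vignette stays circular
    float dist = length(position * vec2(u_resolution.x/u_resolution.y, 1.0));
    float radius = 0.5;
    float softness = 0.02;
    float vignette = smoothstep(radius, radius - softness, dist);
    color.rgb = color.rgb - (1.0 - vignette);
    return color;
}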
I have the following fragment shader:
#version 330
layout(location=0) out vec4 frag_colour;
in vec2 texelCoords;
uniform sampler2D uTexture; // the color
uniform sampler2D uTextureHeightmap; // the heightmap
uniform float uSunDistance = -10000000.0; // really far away vertically
uniform float uSunInclination; // height from the heightmap plane
uniform float uSunAzimuth; // clockwise rotation point
uniform float uQuality; // used to determine number of steps and steps size
void main()
{
    vec4 c = texture(uTexture,texelCoords);
    vec2 textureD = textureSize(uTexture,0);
    float d = max(textureD.x,textureD.y); // use the largest dimension to determine stepsize etc

    // position the sun in the centre of the screen and convert from spherical to cartesian coordinates
    vec3 sunPosition = vec3(textureD.x/2,textureD.y/2,0) + vec3( uSunDistance*sin(uSunInclination)*cos(uSunAzimuth),
                                                                 uSunDistance*sin(uSunInclination)*sin(uSunAzimuth),
                                                                 uSunDistance*cos(uSunInclination) );

    float height = texture2D(uTextureHeightmap, texelCoords).r; // starting height
    vec3 direction = normalize(vec3(texelCoords,height) - sunPosition); // sunlight direction

    float sampleDistance = 0;
    float samples = d*uQuality;
    float stepSize = 1.0 / ((samples/d) * d);

    for(int i = 0; i < samples; i++)
    {
        sampleDistance += stepSize; // increase the sample distance
        vec3 newPoint = vec3(texelCoords,height) + direction * sampleDistance; // get the coord for the next sample point
        float newHeight = texture2D(uTextureHeightmap,newPoint.xy).r; // get the height of that sample point

        // put it in shadow if we hit something that is higher than our starting point AND is higher than the ray we're casting
        if(newHeight > height && newHeight > newPoint.z)
        {
            c *= 0.5;
            break;
        }
    }
    frag_colour = c;
}
The purpose is for it to cast shadows based on a heightmap. Pretty nifty, and the results look good.
However, there's a problem where the shadows appear longer when they are horizontal compared to vertical. If I make the window size different, with a window that is taller than wide, I get the opposite effect. I.e., the shadows are casting longer in the longer dimension.
This tells me that it's to do with the way I'm stepping in the above shader, but I can't tell the problem.
To illustrate, here is the result with a uSunAzimuth that produces a horizontally cast shadow:
And here is the exact same code with a uSunAzimuth for a vertical shadow:
It's not very pronounced in these low-resolution images, but at larger resolutions the effect gets more exaggerated. Essentially, if you measure how the shadow casts through all 360 degrees of azimuth, it sweeps out an ellipse instead of a circle.
The shadow fragment shader operates on a "snapshot" of the viewport. When your scene is rendered and this "snapshot" is generated, the vertex positions are transformed by the projection matrix. The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport, and it takes the aspect ratio of the viewport into account.
(see Both depth buffer and triangle face orientation are reversed in OpenGL,
and Transform the modelMatrix).
As a result, the height map (uTextureHeightmap) represents a rectangular field of view that depends on the aspect ratio.
But the texture coordinates, which you use to access the height map, describe a quad in the range (0, 0) to (1, 1).
This mismatch has to be balanced out by scaling with the aspect ratio.
vec3 direction = ....;
float aspectRatio = textureD.x / textureD.y;
direction.xy *= vec2( 1.0/aspectRatio, 1.0 );
I just needed to adjust the direction slightly.
float aspectCorrection = textureD.x / textureD.y;
...
vec3 direction = normalize(vec3(texelCoords,height) - sunPosition);
direction.y *= aspectCorrection;
I'm following the tutorial by John Chapman (http://john-chapman-graphics.blogspot.nl/2013/01/ssao-tutorial.html) to implement SSAO in a deferred renderer. The input buffers to the SSAO shaders are:
World-space positions with linearized depth as w-component.
World-space normal vectors
Noise 4x4 texture
I'll first list the complete shader and then briefly walk through the steps:
#version 330 core
in VS_OUT {
vec2 TexCoords;
} fs_in;
uniform sampler2D texPosDepth;
uniform sampler2D texNormalSpec;
uniform sampler2D texNoise;
uniform vec3 samples[64];
uniform mat4 projection;
uniform mat4 view;
uniform mat3 viewNormal; // transpose(inverse(mat3(view)))
const vec2 noiseScale = vec2(800.0f/4.0f, 600.0f/4.0f);
const float radius = 5.0;
void main( void )
{
    float linearDepth = texture(texPosDepth, fs_in.TexCoords).w;

    // Fragment's view space position and normal
    vec3 fragPos_World = texture(texPosDepth, fs_in.TexCoords).xyz;
    vec3 origin = vec3(view * vec4(fragPos_World, 1.0));
    vec3 normal = texture(texNormalSpec, fs_in.TexCoords).xyz;
    normal = normalize(normal * 2.0 - 1.0);
    normal = normalize(viewNormal * normal); // Normal from world to view-space

    // Use change-of-basis matrix to reorient sample kernel around origin's normal
    vec3 rvec = texture(texNoise, fs_in.TexCoords * noiseScale).xyz;
    vec3 tangent = normalize(rvec - normal * dot(rvec, normal));
    vec3 bitangent = cross(normal, tangent);
    mat3 tbn = mat3(tangent, bitangent, normal);

    // Loop through the sample kernel
    float occlusion = 0.0;
    for(int i = 0; i < 64; ++i)
    {
        // get sample position
        vec3 sample = tbn * samples[i]; // From tangent to view-space
        sample = sample * radius + origin;

        // project sample position (to sample texture) (to get position on screen/texture)
        vec4 offset = vec4(sample, 1.0);
        offset = projection * offset;
        offset.xy /= offset.w;
        offset.xy = offset.xy * 0.5 + 0.5;

        // get sample depth
        float sampleDepth = texture(texPosDepth, offset.xy).w;

        // range check & accumulate
        // float rangeCheck = abs(origin.z - sampleDepth) < radius ? 1.0 : 0.0;
        occlusion += (sampleDepth <= sample.z ? 1.0 : 0.0);
    }
    occlusion = 1.0 - (occlusion / 64.0f);

    gl_FragColor = vec4(vec3(occlusion), 1.0);
}
The result is however not pleasing. The occlusion buffer is mostly all white and doesn't show any occlusion. However, if I move really close to an object I can see some weird noise-like results as you can see below:
This is obviously not correct. I've done a fair share of debugging and believe all the relevant variables are correctly passed around (they all visualize as colors). I do the calculations in view-space.
I'll briefly walk through the steps (and choices) I've taken in case any of you figure something goes wrong in one of the steps.
view-space positions/normals
John Chapman retrieves the view-space position using a view ray and a linearized depth value. Since I use a deferred renderer that already has the world-space positions per fragment I simply take those and multiply them with the view matrix to get them to view-space.
I take a similar approach for the normal vectors. I take the world-space normal vectors from a buffer texture, transform them to [-1,1] range and multiply them with transpose(inverse(mat3(..))) of view matrix.
The view-space position and normals are visualized as below:
This looks correct to me.
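For reference, the normal transform described above can also be computed directly in GLSL (3.30 provides inverse() and transpose() for mat3), which can help rule out a stale viewNormal uniform. This is only a debugging sketch using the same decode as the shader above:
mat3 viewNormalInShader = transpose(inverse(mat3(view)));  // same matrix as the viewNormal uniform
vec3 n = normalize(viewNormalInShader * normalize(texture(texNormalSpec, fs_in.TexCoords).xyz * 2.0 - 1.0));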
Orient hemisphere around normal
The steps to create the tbn matrix are the same as described in John Chapman's tutorial. I create the noise texture as follows:
std::vector<glm::vec3> ssaoNoise;
for (GLuint i = 0; i < noise_size; i++)
{
    glm::vec3 noise(randomFloats(generator) * 2.0 - 1.0, randomFloats(generator) * 2.0 - 1.0, 0.0f);
    noise = glm::normalize(noise);
    ssaoNoise.push_back(noise);
}
...
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB16F, 4, 4, 0, GL_RGB, GL_FLOAT, &ssaoNoise[0]);
I can visualize the noise in the fragment shader so that seems to work.
sample depths
I transform all samples from tangent space to view space (the samples are random in [-1,1] on the x and y axes and [0,1] on the z axis) and translate them to the fragment's current view-space position (origin).
I then sample from linearized depth buffer (which I visualize below when looking close to an object):
and finally compare the sampled depth values to the current fragment's depth value and accumulate occlusion. Note that I do not perform a range check, since I don't believe it is the cause of this behavior and I'd rather keep things as minimal as possible for now.
I don't know what is causing this behavior. I believe it is somewhere in sampling the depth values. As far as I can tell I am working in the right coordinate system, linearized depth values are in view-space as well and all variables are set somewhat properly.
I'm trying to create geometry to represent the Earth in OpenGL. I have what's more or less a sphere (closer to the elliptical geoid that Earth is though). I map a texture of the Earth's surface (that's probably a mercator projection or something similar). The texture's UV coordinates correspond to the geometry's latitude and longitude. I have two issues that I'm unable to solve. I am using OpenSceneGraph but I think this is a general OpenGL / 3D programming question.
There's a texture seam that's very apparent. I'm sure this occurs because I don't know how to map the UV coordinates to XYZ where the seam occurs. I only map UV coords up to the last vertex before wrapping around... You'd need to map two different UV coordinates to the same XYZ vertex to eliminate the seam. Is there a commonly used trick to get around this, or am I just doing it wrong?
There's crazy swirly distortion going on at the poles. I'm guessing this is because I map a single UV point at the poles (for Earth, I use [0.5,1] for the North Pole and [0.5,0] for the South Pole). What else would you do, though? I can sort of live with this... but it's extremely noticeable at lower-resolution meshes.
I've attached an image to show what I'm talking about.
The general way this is handled is by using a cube map, not a 2D texture.
However, if you insist on using a 2D texture, you have to create a break in your mesh's topology. The reason you get that longitudinal line is because you have one vertex with a texture coordinate of something like 0.9 or so, and its neighboring vertex has a texture coordinate of 0.0. What you really want is that the 0.9 one neighbors a 1.0 texture coordinate.
Doing this means replicating the position down one line of the sphere. So you have the same position used twice in your data. One is attached to a texture coordinate of 1.0 and neighbors a texture coordinate of 0.9. The other has a texture coordinate of 0.0, and neighbors a vertex with 0.1.
Topologically, you need to take a longitudinal slice down your sphere.
Your link really helped me out, furqan, thanks.
Why couldn't you figure it out? A point where I stumbled was that I didn't know you can exceed the [0,1] interval when calculating the texture coordinates. That makes it a lot easier to jump from one side of the texture to the other, with OpenGL doing all the interpolation and without having to calculate the exact position where the texture actually ends.
You can also go a dirty way: interpolate the X,Y positions between the vertex shader and the fragment shader and recalculate the correct texture coordinate in the fragment shader. This may be somewhat slower, but it doesn't involve duplicate vertices and it's simpler, I think.
For example:
vertex shader:
#version 150 core
uniform mat4 projM;
uniform mat4 viewM;
uniform mat4 modelM;
in vec4 in_Position;
in vec2 in_TextureCoord;
out vec2 pass_TextureCoord;
out vec2 pass_xy_position;
void main(void) {
    gl_Position = projM * viewM * modelM * in_Position;
    pass_xy_position = in_Position.xy; // 2d spinning interpolates good!
    pass_TextureCoord = in_TextureCoord;
}
fragment shader:
#version 150 core
uniform sampler2D texture1;
in vec2 pass_xy_position;
in vec2 pass_TextureCoord;
out vec4 out_Color;
#define PI 3.141592653589793238462643383279
void main(void) {
    vec2 tc = pass_TextureCoord;
    tc.x = (PI + atan(pass_xy_position.y, pass_xy_position.x)) / (2 * PI); // calculate angle and map it to 0..1
    out_Color = texture(texture1, tc);
}
It took a long time to figure this extremely annoying issue out. I'm programming with C# in Unity and I didn't want to duplicate any vertices (that would cause future issues with my concept), so I went with the shader idea and it works out pretty well. Although I'm sure the code could use some heavy-duty optimization, and I had to figure out how to port it over to Cg from the answer above, it works. This is in case someone else runs across this post, as I did, looking for a solution to the same problem.
Shader "Custom/isoshader" {
Properties {
decal ("Base (RGB)", 2D) = "white" {}
}
SubShader {
Pass {
Fog { Mode Off }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#define PI 3.141592653589793238462643383279
sampler2D decal;
struct appdata {
float4 vertex : POSITION;
float4 texcoord : TEXCOORD0;
};
struct v2f {
float4 pos : SV_POSITION;
float4 tex : TEXCOORD0;
float3 pass_xy_position : TEXCOORD1;
};
v2f vert(appdata v){
v2f o;
o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
o.pass_xy_position = v.vertex.xyz;
o.tex = v.texcoord;
return o;
}
float4 frag(v2f i) : COLOR {
float3 tc = i.tex;
tc.x = (PI + atan2(i.pass_xy_position.x, i.pass_xy_position.z)) / (2 * PI);
float4 color = tex2D(decal, tc);
return color;
}
ENDCG
}
}
}
As Nicol Bolas said, some triangles have UV coordinates going from ~0.9 back to 0, so the interpolation messes up the texture around the seam. In my code, I've created this function to duplicate the vertices around the seam. This will create a sharp line splitting those vertices. If your texture has only water around the seam (the Pacific ocean?), you may not notice this line. Hope it helps.
/**
* After spherical projection, some triangles have vertices with
* UV coordinates that are far away (0 to 1), because the Azimuth
* at 2*pi = 0. Interpolating between 0 to 1 creates artifacts
* around that seam (the whole texture is thinly repeated at
* the triangles around the seam).
* This function duplicates vertices around the seam to avoid
* these artifacts.
*/
void PlatonicSolid::SubdivideAzimuthSeam() {
    if (m_texCoord == NULL) {
        ApplySphericalProjection();
    }

    // to take note of the triangles in the seam
    int facesSeam[m_numFaces];

    // check all triangles, looking for triangles with vertices
    // separated ~2π. First count.
    int nSeam = 0;
    for (int i=0; i < m_numFaces; ++i) {
        // check the 3 vertices of the triangle
        int a = m_faces[3*i];
        int b = m_faces[3*i+1];
        int c = m_faces[3*i+2];
        // just check the seam in the azimuth
        float ua = m_texCoord[2*a];
        float ub = m_texCoord[2*b];
        float uc = m_texCoord[2*c];
        if (fabsf(ua-ub)>0.5f || fabsf(ua-uc)>0.5f || fabsf(ub-uc)>0.5f) {
            //test::printValue("Face: ", i, "\n");
            facesSeam[nSeam] = i;
            ++nSeam;
        }
    }
    if (nSeam==0) {
        // no changes
        return;
    }

    // reserve more memory
    int nVertex = m_numVertices;
    m_numVertices += nSeam;
    m_vertices = (float*)realloc((void*)m_vertices, 3*m_numVertices*sizeof(float));
    m_texCoord = (float*)realloc((void*)m_texCoord, 2*m_numVertices*sizeof(float));

    // now duplicate vertices in the seam
    // (the number of triangles/faces is the same)
    for (int i=0; i < nSeam; ++i, ++nVertex) {
        int t = facesSeam[i]; // triangle index
        // check the 3 vertices of the triangle
        int a = m_faces[3*t];
        int b = m_faces[3*t+1];
        int c = m_faces[3*t+2];
        // just check the seam in the azimuth
        float u_ab = fabsf(m_texCoord[2*a] - m_texCoord[2*b]);
        float u_ac = fabsf(m_texCoord[2*a] - m_texCoord[2*c]);
        float u_bc = fabsf(m_texCoord[2*b] - m_texCoord[2*c]);
        // select the vertex further away from the other 2
        int f = 2;
        if (u_ab >= 0.5f && u_ac >= 0.5f) {
            c = a;
            f = 0;
        } else if (u_ab >= 0.5f && u_bc >= 0.5f) {
            c = b;
            f = 1;
        }

        m_vertices[3*nVertex] = m_vertices[3*c];     // x
        m_vertices[3*nVertex+1] = m_vertices[3*c+1]; // y
        m_vertices[3*nVertex+2] = m_vertices[3*c+2]; // z
        // repeat u from texcoord
        m_texCoord[2*nVertex] = 1.0f - m_texCoord[2*c];
        m_texCoord[2*nVertex+1] = m_texCoord[2*c+1];
        // change this face so all the vertices have close UV
        m_faces[3*t+f] = nVertex;
    }
}
One approach is like in the accepted answer. In the code that generates the array of vertex attributes you will have code like this:
// FOR EVERY TRIANGLE
const float threshold = 0.7;
if(tcoords_1.s > threshold || tcoords_2.s > threshold || tcoords_3.s > threshold)
{
    if(tcoords_1.s < 1. - threshold)
    {
        tcoords_1.s += 1.;
    }
    if(tcoords_2.s < 1. - threshold)
    {
        tcoords_2.s += 1.;
    }
    if(tcoords_3.s < 1. - threshold)
    {
        tcoords_3.s += 1.;
    }
}
If you have triangles which are not meridian-aligned you will also want glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);. You also need to use glDrawArrays since vertices with the same position will have different texture coords.
I think the better way to go is to eliminate the root of all evil, which in this case is texture coordinate interpolation. Since you know basically everything about your sphere/ellipsoid, you can calculate texture coords, normals, etc. in the fragment shader based on position. This means that your CPU code generating vertex attributes will be much simpler and you can use indexed drawing again. And I don't think this approach is dirty. It's clean. A sketch of the idea follows below.
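As a sketch of that idea, the fragment shader could derive both texture coordinates from the interpolated object-space position, assuming the poles lie on the z axis. pass_position is a hypothetical varying carrying the (unit-sphere) object-space position; the azimuth formula is the same one used in the answer above:
#version 150 core
uniform sampler2D texture1;
in vec3 pass_position;  // hypothetical: object-space position on the unit sphere
out vec4 out_Color;
#define PI 3.141592653589793

void main(void) {
    vec3 p = normalize(pass_position);
    vec2 tc;
    tc.x = (PI + atan(p.y, p.x)) / (2.0 * PI); // longitude mapped to 0..1
    tc.y = asin(p.z) / PI + 0.5;               // latitude mapped to 0..1
    out_Color = texture(texture1, tc);
}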