Efficiently iterate through neighbouring pixels in fragment shader - c++

For context:
I'm working on a generative 2D animation with OpenFrameworks.
I'm trying to implement a shader that fills some shapes with a color, depending on the orientation of the shapes' edges.
Basically it takes an image like this one:
and spits out something like this:
Note that it intentionally only takes the color from the left side of the shape.
Right now my fragment shader looks like this:
#version 150

out vec4 outputColor;

uniform sampler2DRect fbo;
uniform sampler2DRect mask;

vec2 point;
vec4 col;
float x, i;
float delta = 200;

// Walk left from the current fragment, up to `delta` pixels,
// and return the first non-black colour found in the fbo.
vec4 extrudeColor()
{
    x = gl_FragCoord.x > delta ? gl_FragCoord.x - delta : 0;

    for(i = gl_FragCoord.x; i > x; i--)
    {
        point = vec2(i, gl_FragCoord.y);
        if(texture(fbo, point) != vec4(0,0,0,0)){
            col = texture(fbo, point);
            // alpha encodes how far away the colour was found, normalized by delta
            return vec4(col.r, col.g, col.b, (i-x)/delta);
        }
    }
    return vec4(0,0,0,1);
}

void main()
{
    // only fill pixels that are inside the mask but still black in the fbo
    outputColor = texture(mask, gl_FragCoord.xy) == vec4(1,1,1,1) && texture(fbo, gl_FragCoord.xy) == vec4(0,0,0,0) ? extrudeColor() : vec4(0,0,0,0);
}
The mask sampler is just a black-and-white version of the second image that I use to avoid calculating pixels outside of the shapes.
The shader I have works, but it is slow, and I feel like I'm not thinking or coding in a GPU-friendly way.
The actual, more general question:
I'm totally new to glsl and opengl. Is there a way to make this kind of iteration through neighbouring pixels more efficient, and without this many texture() reads?
Maybe using matrices? IDK!

This is a highly inefficient way to approach this problem. Try to avoid conditionals (if's) and loops (for's) in your shader. I would suggest loading or generating a single texture, and then using an alpha mask to create the shape you need. The texture could remain constant, while the 2 or 8-bit mask could be generated per frame.
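For illustration, here is a minimal sketch of that texture-plus-mask idea in a fragment shader. The sampler names _Gradient and _Mask, and the use of sampler2DRect to match the question's setup, are assumptions for the sketch, not code from this answer:

#version 150

// Hypothetical inputs: a precomputed colour/gradient texture that stays constant,
// and a per-frame mask whose red channel is 1 inside the shapes and 0 outside.
uniform sampler2DRect _Gradient;
uniform sampler2DRect _Mask;

out vec4 outputColor;

void main()
{
    vec4 col = texture(_Gradient, gl_FragCoord.xy);
    float a  = texture(_Mask, gl_FragCoord.xy).r;
    // The mask alone decides visibility; no loops or per-pixel searches needed.
    outputColor = vec4(col.rgb, col.a * a);
}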
An alternative method would be to use a few uniforms and upload "per-line" data in an array:
#version 440 core

uniform sampler2D _Texture;   // The texture to draw
uniform vec2      _Position;  // The 'object' position (screen coordinate)
uniform int       _RowOffset; // The offset in the 'object' to start drawing
uniform int       _RowLength; // The length of the 'row' to draw
uniform int       _Height;    // The height of the 'object' to draw

in  vec2 _TexCoord;  // The texture coordinate passed from the vertex shader
out vec4 _FragColor; // The output color (gl_FragColor is deprecated)

void main () {
    if (gl_FragCoord.x < (_Position.x + _RowOffset)) discard;
    if (gl_FragCoord.x > (_Position.x + _RowOffset + _RowLength)) discard;
    _FragColor = texture(_Texture, _TexCoord.st);
}
Or, without sampling a texture at all, you could generate a linear gradient function and sample the color from it using the Y coordinate:
const vec4 _Red   = vec4(1, 0, 0, 1);
const vec4 _Green = vec4(0, 1, 0, 0);

vec4 _GetGradientColor (float _P /* percentage */) {
    float _R = _Red.r * _P + _Green.r * (1 - _P);
    float _G = _Red.g * _P + _Green.g * (1 - _P);
    float _B = _Red.b * _P + _Green.b * (1 - _P);
    float _A = _Red.a * _P + _Green.a * (1 - _P);
    return vec4(_R, _G, _B, _A);
}
Then in your Frag Shader,
float _P = (gl_FragCoord.y - _Position.y) / _Height;
_FragColor = _GetGradientColor(_P);
Shader Output
Of course this all could be optimised a bit, and it only generates a 2-color gradient, whereas it looks like you need several colors. A quick Google search for "linear gradient generator" can land you some nicer alternatives. I should also note that this simple example will not work for shapes with 'holes' in them, but it can be revised to do so. If the shader math gets too heavy, then choose the texture-with-alpha-mask option.
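As a side note, GLSL's built-in mix() already performs exactly this interpolation, so (assuming the _Red/_Green constants above) the helper can be collapsed to a one-liner:

// Same linear blend, using the built-in mix():
// mix(_Green, _Red, _P) == _Green * (1.0 - _P) + _Red * _P
vec4 _GetGradientColor (float _P) {
    return mix(_Green, _Red, clamp(_P, 0.0, 1.0)); // clamp guards against out-of-range heights
}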

Related

Why are my texture coordinates inverted each time I call my glsl shader in p5js?

I am trying to use a glsl shader with p5js to create a simulation like the Game of Life. To do that I want to create a shader which takes a texture as a uniform and draws a new texture based on this previous texture. In the next iteration this new texture will be used as the uniform, which should allow me to create a simulation following the idea exposed here. I am experienced with p5.js, but I'm completely new to shader programming, so I'm probably missing something.
For now my code is as straightforward as possible:
In the preload() function, I create a texture using the createImage() function and set some pixels to be white and the others to be black.
In the setup() function I use this texture to run the shader a first time to create a new texture. I also set a timer to run the shader at regular intervals and draw the result in a buffer.
In the draw() function I draw the buffer in the canvas.
To keep things simple I keep the canvas and the texture the same size.
My issue is that at some point the y coordinates in my code seem to get inverted and I don't understand why. My understanding is that my code should show a still image, but each time I run the shader the image is inverted. Here is what I mean:
I am not sure if my issue comes from how I use glsl or how I use p5 or a mix of both. Can someone explain to me where this weird y inversion comes from?
Here is my minimal reproducible example (which is also in the p5 editor here):
The sketch file:
const sketch = (p5) => {
  const D = 100;
  let initialTexture;

  p5.preload = () => {
    // Create the initial image
    initialTexture = p5.createImage(D, D);
    initialTexture.loadPixels();
    for (let i = 0; i < initialTexture.width; i++) {
      for (let j = 0; j < initialTexture.height; j++) {
        const alive = i === j || i === 10 || j === 40;
        const color = p5.color(250, 250, 250, alive ? 250 : 0);
        initialTexture.set(i, j, color);
      }
    }
    initialTexture.updatePixels();

    // Initialize the shader
    shader = p5.loadShader('uniform.vert', 'test.frag');
  };

  p5.setup = () => {
    const canvas = p5.createCanvas(D, D, p5.WEBGL);
    canvas.parent('canvasDiv');

    // Create the buffer the shader will draw on
    graphics = p5.createGraphics(D, D, p5.WEBGL);
    graphics.shader(shader);

    /*
     * Initial step to setup the initial texture
     */
    // Used to normalize the frag coordinates
    shader.setUniform('u_resolution', [p5.width, p5.height]);
    // First state of the simulation
    shader.setUniform('u_texture', initialTexture);
    graphics.rect(0, 0, p5.width, p5.height);

    // Call the shader each time interval
    setInterval(updateSimulation, 1009);
  };

  const updateSimulation = () => {
    // Use the previous state as a texture
    shader.setUniform('u_texture', graphics);
    graphics.rect(0, 0, p5.width, p5.height);
  };

  p5.draw = () => {
    p5.background(0);
    // Use the buffer on the canvas
    p5.image(graphics, -p5.width / 2, -p5.height / 2);
  };
};

new p5(sketch);
The fragment shader which for now only takes the color of the texture and reuses it (I tried using st instead of uv to no avail):
precision highp float;

uniform vec2 u_resolution;
uniform sampler2D u_texture;

// grab texcoords from vert shader
varying vec2 vTexCoord;

void main() {
    // Normalize the position between 0 and 1
    vec2 st = gl_FragCoord.xy / u_resolution.xy;
    // Get the texture coordinate from the vertex shader
    vec2 uv = vTexCoord;
    // Get the color at the texture coordinate
    vec4 c = texture2D(u_texture, uv);
    // Reuse the same color
    gl_FragColor = c;
}
And the vertex shader, which I took from an example and which does nothing except pass the coordinates along:
/*
 * vert file and comments from adam ferriss https://github.com/aferriss/p5jsShaderExamples with additional comments from Louise Lessel
 */
precision highp float;

// This “vec3 aPosition” is a built in shader functionality. You must keep that naming.
// It automatically gets the position of every vertex on your canvas
attribute vec3 aPosition;
attribute vec2 aTexCoord;

varying vec2 vTexCoord;

// We always must do at least one thing in the vertex shader:
// tell the pixel where on the screen it lives:
void main() {
    // copy the texcoords
    vTexCoord = aTexCoord;
    // copy the position data into a vec4, using 1.0 as the w component
    vec4 positionVec4 = vec4(aPosition, 1.0);
    positionVec4.xy = positionVec4.xy * 2.0 - 1.0;
    // Send the vertex information on to the fragment shader
    // this is done automatically, as long as you put it into the built in shader function “gl_Position”
    gl_Position = positionVec4;
}
Long story short: the texture coordinates for a rectangle or a plane drawn with p5.js are (0, 0) in the bottom left and (1, 1) in the top right, whereas the coordinate system for sampling values from a texture has (0, 0) in the top left and (1, 1) in the bottom right. You can verify this by commenting out your color sampling code in your fragment shader and using the following:
float val = (uv.x + uv.y) / 2.0;
gl_FragColor = vec4(val, val, val, 1.0);
As you can see by the resulting image:
The value (0 + 0) / 2 results in black in the lower left, and (1 + 1) / 2 results in white in the upper right.
So, to sample the correct portion of the texture you just need to flip the y component of the uv vector:
texture2D(u_texture, vec2(uv.x, 1.0 - uv.y));
Here is the full corrected sketch (it also swaps the test pattern for a checkerboard, which makes the orientation easier to see):
const sketch = (p5) => {
  const D = 200;
  let initialTexture;

  p5.preload = () => {
    // This doesn't actually need to go in preload
    // Create the initial image
    initialTexture = p5.createImage(D, D);
    initialTexture.loadPixels();
    for (let i = 0; i < initialTexture.width; i++) {
      for (let j = 0; j < initialTexture.height; j++) {
        // draw a big checkerboard
        const alive = (p5.round(i / 10) + p5.round(j / 10)) % 2 == 0;
        const color = alive ? p5.color('white') : p5.color(150, p5.map(j, 0, D, 50, 200), p5.map(i, 0, D, 50, 200));
        initialTexture.set(i, j, color);
      }
    }
    initialTexture.updatePixels();
  };

  p5.setup = () => {
    const canvas = p5.createCanvas(D, D, p5.WEBGL);

    // Create the buffer the shader will draw on
    graphics = p5.createGraphics(D, D, p5.WEBGL);
    // Initialize the shader
    shader = graphics.createShader(vert, frag);
    graphics.shader(shader);

    /*
     * Initial step to setup the initial texture
     */
    // Used to normalize the frag coordinates
    shader.setUniform('u_resolution', [p5.width, p5.height]);
    // First state of the simulation
    shader.setUniform('u_texture', initialTexture);
    graphics.rect(0, 0, p5.width, p5.height);

    // Call the shader each time interval
    setInterval(updateSimulation, 100);
  };

  const updateSimulation = () => {
    // Use the previous state as a texture
    shader.setUniform('u_texture', graphics);
    graphics.rect(0, 0, p5.width, p5.height);
  };

  p5.draw = () => {
    p5.background(0);
    // Use the buffer on the canvas
    p5.texture(graphics);
    p5.rect(-p5.width / 2, -p5.height / 2, p5.width, p5.height);
  };

  const frag = `
precision highp float;

uniform vec2 u_resolution;
uniform sampler2D u_texture;

// grab texcoords from vert shader
varying vec2 vTexCoord;
varying vec2 vPos;

void main() {
  // Get the texture coordinate from the vertex shader
  vec2 uv = vTexCoord;
  gl_FragColor = texture2D(u_texture, vec2(uv.x, 1.0 - uv.y));

  //// For debugging uv coordinate orientation
  // float val = (uv.x + uv.y) / 2.0;
  // gl_FragColor = vec4(val, val, val, 1.0);
}
`;

  const vert = `
/*
 * vert file and comments from adam ferriss https://github.com/aferriss/p5jsShaderExamples with additional comments from Louise Lessel
 */
precision highp float;

// This “vec3 aPosition” is a built in shader functionality. You must keep that naming.
// It automatically gets the position of every vertex on your canvas
attribute vec3 aPosition;
attribute vec2 aTexCoord;

varying vec2 vTexCoord;

// We always must do at least one thing in the vertex shader:
// tell the pixel where on the screen it lives:
void main() {
  // copy the texcoords
  vTexCoord = aTexCoord;
  // copy the position data into a vec4, using 1.0 as the w component
  vec4 positionVec4 = vec4(aPosition, 1.0);
  // This maps positions 0..1 to -1..1
  positionVec4.xy = positionVec4.xy * 2.0 - 1.0;
  // Send the vertex information on to the fragment shader
  // this is done automatically, as long as you put it into the built in shader function “gl_Position”
  gl_Position = positionVec4;
}`;
};

new p5(sketch);
<script src="https://cdn.jsdelivr.net/npm/p5@1.3.1/lib/p5.js"></script>

Artifacts when sampling texture in OpenGL

I've been trying to code a fragment shader such that I can pass it an arbitrary image and it would convert it into a 9-box (a repeating center and static borders). A sample input image would be this one:
My code then creates a single square (2 triangles), with a 1-to-1 orthographic projection that draws images fine, and calls the following fragment shader:
#version 330

in vec2 outTexCoord;
out vec4 fragColor;

uniform sampler2D texture_sampler;
uniform vec4 colour;
uniform vec2 top;
uniform vec2 mid;
uniform vec2 bottom;
uniform ivec2 repeat;

void main()
{
    vec2 boxSize = top + bottom + repeat * mid;
    vec2 boxCoord = outTexCoord * boxSize;
    vec2 textCoord = boxCoord;

    // Fiddle with the X coordinate, for items in the middle or bottom.
    if(boxSize.x - boxCoord.x <= bottom.x)
        textCoord.x = 1.0 - (boxSize.x - boxCoord.x);
    else if(boxCoord.x > top.x) {
        float m = boxCoord.x - top.x;
        m = m / mid.x;
        textCoord.x = top.x + mid.x * (m - floor(m));
    }

    // Fiddle with the Y coordinate, for items in the middle or bottom.
    if(boxSize.y - boxCoord.y <= bottom.y)
        textCoord.y = 1.0 - (boxSize.y - boxCoord.y);
    else if(boxCoord.y > top.y) {
        float m = boxCoord.y - top.y;
        m = m / mid.y;
        textCoord.y = top.y + mid.y * (m - floor(m));
    }

    fragColor = colour * texture(texture_sampler, textCoord);
}
The uniforms are filled so that the sizes of top, mid, bottom correspond to A, B and C respectively, with top + mid + bottom = (1,1). The shader then extrapolates the texture coordinates that map to the original texture coordinates. The new coordinates should always fall within (0,0) and (1,1). Problem is: it works but for some reason a 2-pixel horizontal distortion appears every time I "repeat":
Another example with a different image and more vertical repeats.
What bothers me enormously is that nowhere in the original is there any empty or gray pixel to sample from. Even if these were the wrong coordinates, it should still be sampling from the texture itself (I thought it could be a sampling issue, but the error occurs neither at the boundary of the triangles nor at the boundary of the texture, nor does it seem to be interpolating from nearby texture pixels). I literally don't know where those color values are coming from! Also, no such problem seems to occur on the X axis, even though the code is equivalent :-(

Scene voxelization not working due to lack of comprehension of texture coordinates

The goal is to take an arbitrary geometry and create a 3D texture containing the voxel approximation of the scene. However right now we only have cubes.
The scene looks as follows:
The two most important aspects of this scene are the following:
- Each cube in the scene is supposed to correspond to a voxel in the 3D texture.
- The scene geometry becomes smaller as the height increases (similar to a pyramid), and it is hollow (i.e. if you go inside one of these hills the interior has no cubes, only the outline does).
To voxelize the scene we render layer by layer as follows:
glViewport(0, 0, 7*16, 7*16);
glBindFramebuffer(GL_FRAMEBUFFER, FBOs[FBO_TEXTURE]);

for(int i=0; i<4*16; i++)
{
    glFramebufferTexture3D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_3D,
                           vMap->textureID, 0, i);
    glClearColor(0.f, 0.f, 0.f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    load_uniform((float)i, "level");
    draw();
}
Where "level" corresponds to the current layer.
Then in the vertex shader we attempt to create a single layer as follows:
#version 450

layout(location = 0) in vec3 position; // (x,y,z) coordinates of a vertex

layout(std430, binding = 3) buffer instance_buffer
{
    vec4 cubes_info[]; // first 3 values are position of object
};

out vec3 normalized_pos;
out float test;

uniform float width = 128;
uniform float depth = 128;
uniform float height = 128;
uniform float voxel_size = 1;
uniform float level = 0;

void main()
{
    vec4 pos = (vec4(position, 1.0) + vec4(vec3(cubes_info[gl_InstanceID]), 0));
    pos.x = (2.f*pos.x - width)/(width);
    pos.y = (2.f*pos.y - depth)/(depth);
    pos.z = floor(pos.z);
    test = pos.z;
    pos.z -= level;
    gl_Position = pos;
}
Finally the fragment shader:
#version 450
in vec3 normalized_pos;
in float l;
in float test;
out vec4 outColor;//Final color of the pixel
void main()
{
outColor = vec4(vec3(test)/10.f, 1.0);
}
Using renderdoc I have taken some screenshots of what the resulting texture looks like:
Layer 0:
Layer 2:
The two immediately noticeable problems are:
- A layer should not have multiple tones of gray, only one (since each layer corresponds to a different height, there should not be multiple heights being rendered to the same layer).
- The darkest section of layer 2 looks like what layer 0 should look like (i.e. a filled shape with no "holes"). So not only does it seem I am rendering multiple heights to the same layer, it also seems I have an offset of 2 when rendering, which should not happen.
Does anyone have any idea as to what the problem could be?
EDIT:
In case anyone is wondering, the cubes have dimensions of [1,1,1] and their coordinate system is aligned with the texture, i.e. the bottom, left, front corner of the first cube is at (0,0,0).
EDIT 2:
Changing
pos.z = floor(pos.z);
To:
pos.z = floor(pos.z)+0.1;
Partially fixes the problem. The lowest layer is now correct; however, instead of 3 different colors (height values) there are now only 2.
EDIT 3:
It seems the problem comes from drawing the geometry multiple times.
i.e. my actual draw call looks like:
for(uint i=0; i<render_queue.size(); i++)
{
    Object_3D *render_data = render_queue[i];
    //Render multiple instances of the current object
    multi_render(render_data->VAO, &(render_data->VBOs),
                 &(render_data->types), render_data->layouts,
                 render_data->mesh_indices, render_data->render_instances);
}

void Renderer::multi_render(GLuint VAO, vector<GLuint> *VBOs,
                            vector<GLuint> *buffer_types, GLuint layout_num,
                            GLuint index_num, GLuint instances)
{
    //error check
    if(VBOs->size() != buffer_types->size())
    {
        cerr << "Mismatching VBOs' and buffer_types sizes" << endl;
        return;
    }

    //Bind vertex array object and rendering program
    glBindVertexArray(VAO);
    glUseProgram(current_program);

    //enable shader layouts
    for(int i=0; i<layout_num; i++)
        glEnableVertexAttribArray(i);

    //Bind VBOs storing rendering data
    for(uint i=0; i<buffer_types->size(); i++)
    {
        if((*buffer_types)[i]==GL_SHADER_STORAGE_BUFFER)
        {
            glBindBuffer((*buffer_types)[i], (*VBOs)[i]);
            glBindBufferBase(GL_SHADER_STORAGE_BUFFER, i, (*VBOs)[i]);
        }
    }

    //Draw call
    glDrawElementsInstanced(GL_TRIANGLES, index_num, GL_UNSIGNED_INT, (void*)0, instances);
}
It seems then that due to rendering multiple subsets of the scene at a time I end up with different cubes being mapped to the same voxel in 2 different draw calls.
I have figured out the problem. Since my geometry matches the voxel grid 1 to 1, different layers could be mapped to the same voxel, causing them to overlap in the same layer.
Modifying the vertex shader to the following:
#version 450

layout(location = 0) in vec3 position; // (x,y,z) coordinates of a vertex

layout(std430, binding = 3) buffer instance_buffer
{
    vec4 cubes_info[]; // first 3 values are position of object
};

out vec3 normalized_pos;
out float test;

uniform float width = 128;
uniform float depth = 128;
uniform float height = 128;
uniform float voxel_size = 1;
uniform float level = 0;

void main()
{
    vec4 pos = (vec4(position, 1.0) + vec4(vec3(cubes_info[gl_InstanceID]), 0));
    pos.x = (2.f*pos.x - width)/(width);
    pos.y = (2.f*pos.y - depth)/(depth);
    pos.z = cubes_info[gl_InstanceID].z;
    test = pos.z + 1;
    pos.z -= level;

    // Keep only geometry belonging to the current layer; push everything else outside the clip volume
    if(pos.z >= 0 && pos.z < 0.999f)
        pos.z = 1;
    else
        pos.z = 2;

    gl_Position = pos;
    normalized_pos = vec3(pos);
}
Fixes the issue.
The if statement check guarantees that geometry from a different layer that could potentially be mapped to the current layer is discarded.
There are probably better ways to do this. So I will accept as an answer anything that produces an equivalent result in a more elegant way.
This is what layer 0 looks like now:
And this is what layer 2 looks like:

Opengl texture flickering when used with mix()

I'm rendering a terrain with multiple textures that includes smooth transitions between the textures, based on the height of each fragment.
Here's my fragment shader:
#version 430

uniform sampler2D tex[3];
uniform float renderHeight;

in vec3 fsVertex;
in vec2 fsTexCoords;

out vec4 color;

void main()
{
    float height = fsVertex.y / renderHeight;

    const float range1 = 0.2;
    const float range2 = 0.35;
    const float range3 = 0.7;
    const float range4 = 0.85;

    if(height < range1)
        color = texture(tex[0], fsTexCoords);
    else if(height < range2) //smooth transition
        color = mix( texture(tex[0], fsTexCoords), texture(tex[1], fsTexCoords), (height - range1) / (range2 - range1) );
    else if(height < range3)
        color = texture(tex[1], fsTexCoords);
    else if(height < range4) //smooth transition
        color = mix( texture(tex[1], fsTexCoords), texture(tex[2], fsTexCoords), (height - range3) / (range4 - range3) );
    else
        color = texture(tex[2], fsTexCoords);
}
'height' will always be in the range [0,1].
Here's the weird flickering I get. From what I can see, it happens when 'height' equals one of the rangeN variables used with mix().
What may be the cause of this? I also tried playing around with adding and subtracting a 'bias' variable in some computations, but had no luck.
Your problem is non-uniform flow control.
Basically, you should not call texture() inside an if whose condition varies per fragment: the implicit derivatives texture() uses to select a mip level are undefined in non-uniform control flow, which is exactly what produces artifacts at the branch boundaries.
Two solutions:
1. Make all the calls to texture() first, then blend the results with mix().
2. Calculate the partial derivatives of the texture coordinates yourself (with dFdx & co., there is an example in the link above) and use textureGrad() instead of texture().
In very simple cases, the first solution may be slightly faster. The second one is the way to go if you want to have many textures (normal maps etc.). But don't take my word for it, measure.
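For illustration, here is a minimal sketch of the first option as a drop-in replacement for the question's main(). The clamp-based blend weights are my own branch-free way of expressing the same ranges, not code from the original answer:

#version 430

uniform sampler2D tex[3];
uniform float renderHeight;

in vec3 fsVertex;
in vec2 fsTexCoords;

out vec4 color;

void main()
{
    float height = fsVertex.y / renderHeight;

    const float range1 = 0.2;
    const float range2 = 0.35;
    const float range3 = 0.7;
    const float range4 = 0.85;

    // Sample unconditionally: uniform control flow, so implicit derivatives stay defined.
    vec4 c0 = texture(tex[0], fsTexCoords);
    vec4 c1 = texture(tex[1], fsTexCoords);
    vec4 c2 = texture(tex[2], fsTexCoords);

    // Blend weights: 0 below each transition band, 1 above it, a linear ramp inside it.
    float t01 = clamp((height - range1) / (range2 - range1), 0.0, 1.0);
    float t12 = clamp((height - range3) / (range4 - range3), 0.0, 1.0);

    color = mix(mix(c0, c1, t01), c2, t12);
}

This reproduces the original piecewise blend (texture 0 below range1, a fade to texture 1 between range1 and range2, and so on) without any divergent branches around the texture() calls.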

Seam issue when mapping a texture to a sphere in OpenGL

I'm trying to create geometry to represent the Earth in OpenGL. I have what's more or less a sphere (closer to the elliptical geoid that Earth is though). I map a texture of the Earth's surface (that's probably a mercator projection or something similar). The texture's UV coordinates correspond to the geometry's latitude and longitude. I have two issues that I'm unable to solve. I am using OpenSceneGraph but I think this is a general OpenGL / 3D programming question.
There's a texture seam that's very apparent. I'm sure this occurs because I don't know how to map the UV coordinates to XYZ where the seam occurs. I only map UV coords up to the last vertex before wrapping around... You'd need to map two different UV coordinates to the same XYZ vertex to eliminate the seam. Is there a commonly used trick to get around this, or am I just doing it wrong?
There's crazy swirly distortion going on at the poles. I'm guessing this is because I map a single UV point at the poles (for Earth, I use [0.5, 1] for the North Pole, and [0.5, 0] for the South Pole). What else would you do though? I can sort of live with this... but it's extremely noticeable at lower-resolution meshes.
I've attached an image to show what I'm talking about.
The general way this is handled is by using a cube map, not a 2D texture.
However, if you insist on using a 2D texture, you have to create a break in your mesh's topology. The reason you get that longitudinal line is because you have one vertex with a texture coordinate of something like 0.9 or so, and its neighboring vertex has a texture coordinate of 0.0. What you really want is that the 0.9 one neighbors a 1.0 texture coordinate.
Doing this means replicating the position down one line of the sphere. So you have the same position used twice in your data. One is attached to a texture coordinate of 1.0 and neighbors a texture coordinate of 0.9. The other has a texture coordinate of 0.0, and neighbors a vertex with 0.1.
Topologically, you need to take a longitudinal slice down your sphere.
Your link really helped me out, furqan, thanks.
In case someone else stumbles at the same point I did: I didn't know you can exceed the [0, 1] interval when calculating the texture coordinates. That makes it a lot easier to jump from one side of the texture to the other, with OpenGL doing all the interpolation and without having to calculate the exact position where the texture actually ends.
You can also go a dirty way: interpolate the X,Y positions between the vertex shader and the fragment shader and recalculate the correct texture coordinate in the fragment shader. This may be somewhat slower, but it doesn't involve duplicate vertices and it's simpler, I think.
For example:
vertex shader:
#version 150 core

uniform mat4 projM;
uniform mat4 viewM;
uniform mat4 modelM;

in vec4 in_Position;
in vec2 in_TextureCoord;

out vec2 pass_TextureCoord;
out vec2 pass_xy_position;

void main(void) {
    gl_Position = projM * viewM * modelM * in_Position;
    pass_xy_position = in_Position.xy; // 2d spinning interpolates good!
    pass_TextureCoord = in_TextureCoord;
}
fragment shader:
#version 150 core

uniform sampler2D texture1;

in vec2 pass_xy_position;
in vec2 pass_TextureCoord;

out vec4 out_Color;

#define PI 3.141592653589793238462643383279

void main(void) {
    vec2 tc = pass_TextureCoord;
    tc.x = (PI + atan(pass_xy_position.y, pass_xy_position.x)) / (2 * PI); // calculate angle and map it to 0..1
    out_Color = texture(texture1, tc);
}
It took a long time to figure this extremely annoying issue out. I'm programming in C# in Unity and I didn't want to duplicate any vertices (that would cause future issues with my concept), so I went with the shader idea and it works out pretty well. I'm sure the code could use some heavy-duty optimization, and I had to figure out how to port it over to CG, but it works. I'm posting it in case someone else runs across this post, as I did, looking for a solution to the same problem.
Shader "Custom/isoshader" {
    Properties {
        decal ("Base (RGB)", 2D) = "white" {}
    }
    SubShader {
        Pass {
            Fog { Mode Off }
            CGPROGRAM
            #pragma vertex vert
            #pragma fragment frag
            #define PI 3.141592653589793238462643383279

            sampler2D decal;

            struct appdata {
                float4 vertex : POSITION;
                float4 texcoord : TEXCOORD0;
            };

            struct v2f {
                float4 pos : SV_POSITION;
                float4 tex : TEXCOORD0;
                float3 pass_xy_position : TEXCOORD1;
            };

            v2f vert(appdata v){
                v2f o;
                o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
                o.pass_xy_position = v.vertex.xyz;
                o.tex = v.texcoord;
                return o;
            }

            float4 frag(v2f i) : COLOR {
                float3 tc = i.tex;
                tc.x = (PI + atan2(i.pass_xy_position.x, i.pass_xy_position.z)) / (2 * PI);
                float4 color = tex2D(decal, tc);
                return color;
            }
            ENDCG
        }
    }
}
As Nicol Bolas said, some triangles have UV coordinates going from ~0.9 back to 0, so the interpolation messes the texture around the seam. In my code, I've created this function to duplicate the vertices around the seam. This will create a sharp line splitting those vertices. If your texture has only water around the seam (the Pacific ocean?), you may not notice this line. Hope it helps.
/**
 * After spherical projection, some triangles have vertices with
 * UV coordinates that are far apart (0 to 1), because the azimuth
 * wraps from 2*pi back to 0. Interpolating between 0 and 1 creates
 * artifacts around that seam (the whole texture is thinly repeated
 * at the triangles around the seam).
 * This function duplicates vertices around the seam to avoid
 * these artifacts.
 */
void PlatonicSolid::SubdivideAzimuthSeam() {
    if (m_texCoord == NULL) {
        ApplySphericalProjection();
    }

    // to take note of the triangles in the seam
    int facesSeam[m_numFaces];

    // check all triangles, looking for triangles with vertices
    // separated ~2π. First count.
    int nSeam = 0;
    for (int i = 0; i < m_numFaces; ++i) {
        // check the 3 vertices of the triangle
        int a = m_faces[3*i];
        int b = m_faces[3*i+1];
        int c = m_faces[3*i+2];
        // just check the seam in the azimuth
        float ua = m_texCoord[2*a];
        float ub = m_texCoord[2*b];
        float uc = m_texCoord[2*c];
        if (fabsf(ua-ub)>0.5f || fabsf(ua-uc)>0.5f || fabsf(ub-uc)>0.5f) {
            //test::printValue("Face: ", i, "\n");
            facesSeam[nSeam] = i;
            ++nSeam;
        }
    }

    if (nSeam==0) {
        // no changes
        return;
    }

    // reserve more memory
    int nVertex = m_numVertices;
    m_numVertices += nSeam;
    m_vertices = (float*)realloc((void*)m_vertices, 3*m_numVertices*sizeof(float));
    m_texCoord = (float*)realloc((void*)m_texCoord, 2*m_numVertices*sizeof(float));

    // now duplicate vertices in the seam
    // (the number of triangles/faces is the same)
    for (int i = 0; i < nSeam; ++i, ++nVertex) {
        int t = facesSeam[i]; // triangle index
        // check the 3 vertices of the triangle
        int a = m_faces[3*t];
        int b = m_faces[3*t+1];
        int c = m_faces[3*t+2];
        // just check the seam in the azimuth
        float u_ab = fabsf(m_texCoord[2*a] - m_texCoord[2*b]);
        float u_ac = fabsf(m_texCoord[2*a] - m_texCoord[2*c]);
        float u_bc = fabsf(m_texCoord[2*b] - m_texCoord[2*c]);
        // select the vertex further away from the other 2
        int f = 2;
        if (u_ab >= 0.5f && u_ac >= 0.5f) {
            c = a;
            f = 0;
        } else if (u_ab >= 0.5f && u_bc >= 0.5f) {
            c = b;
            f = 1;
        }

        m_vertices[3*nVertex]   = m_vertices[3*c];   // x
        m_vertices[3*nVertex+1] = m_vertices[3*c+1]; // y
        m_vertices[3*nVertex+2] = m_vertices[3*c+2]; // z
        // repeat u from texcoord
        m_texCoord[2*nVertex]   = 1.0f - m_texCoord[2*c];
        m_texCoord[2*nVertex+1] = m_texCoord[2*c+1];
        // change this face so all the vertices have close UV
        m_faces[3*t+f] = nVertex;
    }
}
One approach is like in the accepted answer. In the code generating the array of vertex attributes you will have code like this:
// FOR EVERY TRIANGLE
const float threshold = 0.7;
if(tcoords_1.s > threshold || tcoords_2.s > threshold || tcoords_3.s > threshold)
{
    if(tcoords_1.s < 1. - threshold)
    {
        tcoords_1.s += 1.;
    }
    if(tcoords_2.s < 1. - threshold)
    {
        tcoords_2.s += 1.;
    }
    if(tcoords_3.s < 1. - threshold)
    {
        tcoords_3.s += 1.;
    }
}
If you have triangles which are not meridian-aligned you will also want glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);. You also need to use glDrawArrays since vertices with the same position will have different texture coords.
I think the better way to go is to eliminate the root of all evil, which is texture coords interpolation in this case. Since you know basically all about your sphere/ellipsoid, you can calculate texture coords, normals, etc. in the fragment shader based on position. This means that your CPU code generating vertex attributes will be much simpler and you can use indexed drawing again. And I don't think this approach is dirty. It's clean.
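For illustration, a minimal fragment-shader sketch of that last idea: compute the equirectangular UV from the interpolated object-space position instead of interpolating texture coordinates. The varying name v_objPosition and the assumption that the texture is an equirectangular (lat/long) map with +z as the north pole are mine:

#version 150 core

uniform sampler2D texture1;

// Object-space position of the fragment, passed down from the vertex shader.
in vec3 v_objPosition;

out vec4 out_Color;

#define PI 3.14159265358979323846

void main() {
    vec3 p = normalize(v_objPosition);
    // longitude (azimuth) mapped to 0..1
    float u = (atan(p.y, p.x) + PI) / (2.0 * PI);
    // latitude mapped to 0..1, assuming +z points to the north pole
    float v = acos(-p.z) / PI;
    out_Color = texture(texture1, vec2(u, v));
}

One caveat: u still jumps from 1 back to 0 along one meridian, so with mipmapping enabled the derivative discontinuity can produce a thin line of the wrong mip level there; textureGrad() or textureLod() can be used to work around that if it shows up.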