Seam issue when mapping a texture to a sphere in OpenGL - C++

I'm trying to create geometry to represent the Earth in OpenGL. I have what's more or less a sphere (closer to the elliptical geoid that the Earth actually is). I map a texture of the Earth's surface (probably a Mercator projection or something similar) onto it; the texture's UV coordinates correspond to the geometry's latitude and longitude. I have two issues that I'm unable to solve. I am using OpenSceneGraph, but I think this is a general OpenGL / 3D programming question.
There's a texture seam that's very apparent. I'm sure this occurs because I don't know how to map the UV coordinates to XYZ where the seam occurs. I only map UV coords up to the last vertex before wrapping around... You'd need to map two different UV coordinates to the same XYZ vertex to eliminate the seam. Is there a commonly used trick to get around this, or am I just doing it wrong?
There's crazy swirly distortion going on at the poles. I'm guessing this is because I map a single UV point at each pole (for Earth, I use [0.5,1] for the North Pole and [0.5,0] for the South Pole). What else would you do, though? I can sort of live with this... but it's extremely noticeable at lower-resolution meshes.
I've attached an image to show what I'm talking about.

The general way this is handled is by using a cube map, not a 2D texture.
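For reference, a minimal sketch of the cube-map lookup; the names earthCube and objPos are invented for the example, not taken from the question:
#version 150 core
// Fragment shader sketch: the object-space direction picks the texel, so there is no UV seam.
uniform samplerCube earthCube; // hypothetical cube map of the Earth
in vec3 objPos;                // object-space position passed from the vertex shader
out vec4 out_Color;
void main(void) {
    out_Color = texture(earthCube, normalize(objPos));
}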
However, if you insist on using a 2D texture, you have to create a break in your mesh's topology. The reason you get that longitudinal line is that one vertex has a texture coordinate of something like 0.9 or so, while its neighboring vertex has a texture coordinate of 0.0. What you really want is for the 0.9 vertex to neighbor a 1.0 texture coordinate.
Doing this means replicating the position down one line of the sphere. So you have the same position used twice in your data. One is attached to a texture coordinate of 1.0 and neighbors a texture coordinate of 0.9. The other has a texture coordinate of 0.0, and neighbors a vertex with 0.1.
Topologically, you need to take a longitudinal slice down your sphere.
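To make that concrete, here is a minimal sketch of building such a mesh (this is not the asker's or answerer's code; slices, stacks and the Vertex layout are invented for the example):
#include <cmath>
#include <vector>
struct Vertex { float x, y, z, u, v; };
// Build a UV sphere whose seam column is duplicated: the last longitude column
// repeats the positions of the first one, but carries u = 1.0 instead of u = 0.0.
std::vector<Vertex> buildSphere(int slices, int stacks, float radius)
{
    const float PI = 3.14159265358979f;
    std::vector<Vertex> verts;
    for (int i = 0; i <= stacks; ++i) {               // latitude rings, pole to pole
        float v   = float(i) / float(stacks);
        float lat = (v - 0.5f) * PI;                  // -pi/2 .. +pi/2
        for (int j = 0; j <= slices; ++j) {           // note <=: one extra column for the seam
            float u   = float(j) / float(slices);     // 0.0 .. 1.0 inclusive
            float lon = u * 2.0f * PI;
            Vertex vert;
            vert.x = radius * std::cos(lat) * std::cos(lon);
            vert.y = radius * std::cos(lat) * std::sin(lon);
            vert.z = radius * std::sin(lat);
            vert.u = u;
            vert.v = v;
            verts.push_back(vert);                    // j == 0 and j == slices share a position, not a u
        }
    }
    return verts;
}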

Your link really helped me out, furqan, thanks.
Why couldn't you figure it out? A point where I stumbled was that I didn't know you can exceed the [0,1] interval when calculating the texture coordinates. That makes it a lot easier to jump from one side of the texture to the other, with OpenGL doing all the interpolation and without having to calculate the exact position where the texture actually ends.
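For example, a hedged sketch of what that looks like on the CPU side (earthTexture is an illustrative name; it assumes the wrap mode is set to repeat):
glBindTexture(GL_TEXTURE_2D, earthTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
// With GL_REPEAT, u = 1.05 samples the same texel as u = 0.05, so the triangle
// crossing the seam can use 0.95 -> 1.00 -> 1.05 instead of 0.95 -> 0.0 -> 0.05
// and the interpolation never sweeps across the whole texture.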

You can also go a dirty way: interpolate the X,Y positions between the vertex shader and the fragment shader, and recalculate the correct texture coordinate in the fragment shader. This may be somewhat slower, but it doesn't involve duplicate vertices and it's simpler, I think.
For example:
vertex shader:
#version 150 core
uniform mat4 projM;
uniform mat4 viewM;
uniform mat4 modelM;
in vec4 in_Position;
in vec2 in_TextureCoord;
out vec2 pass_TextureCoord;
out vec2 pass_xy_position;
void main(void) {
    gl_Position = projM * viewM * modelM * in_Position;
    pass_xy_position = in_Position.xy; // 2d spinning interpolates good!
    pass_TextureCoord = in_TextureCoord;
}
fragment shader:
#version 150 core
uniform sampler2D texture1;
in vec2 pass_xy_position;
in vec2 pass_TextureCoord;
out vec4 out_Color;
#define PI 3.141592653589793238462643383279
void main(void) {
    vec2 tc = pass_TextureCoord;
    tc.x = (PI + atan(pass_xy_position.y, pass_xy_position.x)) / (2 * PI); // calculate angle and map it to 0..1
    out_Color = texture(texture1, tc);
}

It took a long time to figure this extremely annoying issue out. I'm programming with C# in Unity and I didn't want to duplicate any vertices. (Would cause future issues with my concept) So I went with the shader idea and it works out pretty well. Although I'm sure the code could use some heavy duty optimization, I had to figure out how to port it over to CG from this but it works. This is in case someone else runs across this post, as I did, looking for a solution to the same problem.
Shader "Custom/isoshader" {
Properties {
decal ("Base (RGB)", 2D) = "white" {}
}
SubShader {
Pass {
Fog { Mode Off }
CGPROGRAM
#pragma vertex vert
#pragma fragment frag
#define PI 3.141592653589793238462643383279
sampler2D decal;
struct appdata {
float4 vertex : POSITION;
float4 texcoord : TEXCOORD0;
};
struct v2f {
float4 pos : SV_POSITION;
float4 tex : TEXCOORD0;
float3 pass_xy_position : TEXCOORD1;
};
v2f vert(appdata v){
v2f o;
o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
o.pass_xy_position = v.vertex.xyz;
o.tex = v.texcoord;
return o;
}
float4 frag(v2f i) : COLOR {
float3 tc = i.tex;
tc.x = (PI + atan2(i.pass_xy_position.x, i.pass_xy_position.z)) / (2 * PI);
float4 color = tex2D(decal, tc);
return color;
}
ENDCG
}
}
}

As Nicol Bolas said, some triangles have UV coordinates going from ~0.9 back to 0, so the interpolation messes up the texture around the seam. In my code, I've created this function to duplicate the vertices around the seam. This will create a sharp line splitting those vertices. If your texture has only water around the seam (the Pacific Ocean?), you may not notice this line. Hope it helps.
/**
* After spherical projection, some triangles have vertices with
* UV coordinates that are far away (0 to 1), because the Azimuth
* at 2*pi = 0. Interpolating between 0 to 1 creates artifacts
* around that seam (the whole texture is thinly repeated at
* the triangles around the seam).
* This function duplicates vertices around the seam to avoid
* these artifacts.
*/
void PlatonicSolid::SubdivideAzimuthSeam() {
    if (m_texCoord == NULL) {
        ApplySphericalProjection();
    }
    // to take note of the triangles in the seam
    int facesSeam[m_numFaces];
    // check all triangles, looking for triangles with vertices
    // separated ~2π. First count.
    int nSeam = 0;
    for (int i=0; i < m_numFaces; ++i) {
        // check the 3 vertices of the triangle
        int a = m_faces[3*i];
        int b = m_faces[3*i+1];
        int c = m_faces[3*i+2];
        // just check the seam in the azimuth
        float ua = m_texCoord[2*a];
        float ub = m_texCoord[2*b];
        float uc = m_texCoord[2*c];
        if (fabsf(ua-ub)>0.5f || fabsf(ua-uc)>0.5f || fabsf(ub-uc)>0.5f) {
            //test::printValue("Face: ", i, "\n");
            facesSeam[nSeam] = i;
            ++nSeam;
        }
    }
    if (nSeam==0) {
        // no changes
        return;
    }
    // reserve more memory
    int nVertex = m_numVertices;
    m_numVertices += nSeam;
    m_vertices = (float*)realloc((void*)m_vertices, 3*m_numVertices*sizeof(float));
    m_texCoord = (float*)realloc((void*)m_texCoord, 2*m_numVertices*sizeof(float));
    // now duplicate vertices in the seam
    // (the number of triangles/faces is the same)
    for (int i=0; i < nSeam; ++i, ++nVertex) {
        int t = facesSeam[i]; // triangle index
        // check the 3 vertices of the triangle
        int a = m_faces[3*t];
        int b = m_faces[3*t+1];
        int c = m_faces[3*t+2];
        // just check the seam in the azimuth
        float u_ab = fabsf(m_texCoord[2*a] - m_texCoord[2*b]);
        float u_ac = fabsf(m_texCoord[2*a] - m_texCoord[2*c]);
        float u_bc = fabsf(m_texCoord[2*b] - m_texCoord[2*c]);
        // select the vertex further away from the other 2
        int f = 2;
        if (u_ab >= 0.5f && u_ac >= 0.5f) {
            c = a;
            f = 0;
        } else if (u_ab >= 0.5f && u_bc >= 0.5f) {
            c = b;
            f = 1;
        }
        m_vertices[3*nVertex] = m_vertices[3*c];     // x
        m_vertices[3*nVertex+1] = m_vertices[3*c+1]; // y
        m_vertices[3*nVertex+2] = m_vertices[3*c+2]; // z
        // repeat u from texcoord
        m_texCoord[2*nVertex] = 1.0f - m_texCoord[2*c];
        m_texCoord[2*nVertex+1] = m_texCoord[2*c+1];
        // change this face so all the vertices have close UV
        m_faces[3*t+f] = nVertex;
    }
}

One approach is like in the accepted answer. In the code generating the array of vertex attributes you will have code like this:
// FOR EVERY TRIANGLE
const float threshold = 0.7;
if(tcoords_1.s > threshold || tcoords_2.s > threshold || tcoords_3.s > threshold)
{
    if(tcoords_1.s < 1. - threshold)
    {
        tcoords_1.s += 1.;
    }
    if(tcoords_2.s < 1. - threshold)
    {
        tcoords_2.s += 1.;
    }
    if(tcoords_3.s < 1. - threshold)
    {
        tcoords_3.s += 1.;
    }
}
If you have triangles which are not meridian-aligned you will also want glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);. You also need to use glDrawArrays since vertices with the same position will have different texture coords.
I think the better way to go is to eliminate the root of all evil, which in this case is texture coordinate interpolation. Since you know basically everything about your sphere/ellipsoid, you can calculate texture coordinates, normals, etc. in the fragment shader based on position. This means that your CPU code generating vertex attributes will be much simpler and you can use indexed drawing again. And I don't think this approach is dirty. It's clean.
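A minimal sketch of that idea, assuming a unit sphere centred at the origin and an equirectangular texture (the variable names are illustrative, not taken from the answer above):
#version 150 core
uniform sampler2D texture1;
in vec3 pass_obj_position; // object-space position interpolated from the vertex shader
out vec4 out_Color;
#define PI 3.141592653589793238462643383279
void main(void) {
    vec3 p = normalize(pass_obj_position);
    float u = (PI + atan(p.y, p.x)) / (2.0 * PI); // azimuth mapped to 0..1
    float v = 0.5 + asin(p.z) / PI;               // latitude mapped to 0..1
    vec3 normal = p;                              // for a sphere the normal comes for free (unused here)
    out_Color = texture(texture1, vec2(u, v));
}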

Related

Scene voxelization not working due to lack of comprehension of texture coordinates

The goal is to take an arbitrary geometry and create a 3D texture containing the voxel approximation of the scene. However right now we only have cubes.
The scene looks as follows:
The 2 most important aspects of this scene are the following:
Each cube in the scene is supposed to correspond to a voxel in the 3D texture. The scene geometry becomes smaller as the height increases (similar to a pyramid). The scene geometry is hollow (i.e. if you go inside one of these hills, the interior has no cubes, only the outline does).
To voxelize the scene we render layer by layer as follows:
glViewport(0, 0, 7*16, 7*16);
glBindFramebuffer(GL_FRAMEBUFFER, FBOs[FBO_TEXTURE]);
for(int i=0; i<4*16; i++)
{
    glFramebufferTexture3D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_3D,
                           vMap->textureID, 0, i);
    glClearColor(0.f, 0.f, 0.f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    load_uniform((float)i, "level");
    draw();
}
Where "level" corresponds to the current layer.
Then in the vertex shader we attempt to create a single layer as follows:
#version 450
layout(location = 0) in vec3 position; //(x,y,z) coordinates of a vertex
layout(std430, binding = 3) buffer instance_buffer
{
    vec4 cubes_info[];//first 3 values are position of object
};
out vec3 normalized_pos;
out float test;
uniform float width = 128;
uniform float depth = 128;
uniform float height = 128;
uniform float voxel_size = 1;
uniform float level=0;
void main()
{
    vec4 pos = (vec4(position, 1.0) + vec4(vec3(cubes_info[gl_InstanceID]),0));
    pos.x = (2.f*pos.x-width)/(width);
    pos.y = (2.f*pos.y-depth)/(depth);
    pos.z = floor(pos.z);
    test = pos.z;
    pos.z -= level;
    gl_Position = pos;
}
Finally the fragment shader:
#version 450
in vec3 normalized_pos;
in float l;
in float test;
out vec4 outColor;//Final color of the pixel
void main()
{
    outColor = vec4(vec3(test)/10.f, 1.0);
}
Using renderdoc I have taken some screenshots of what the resulting texture looks like:
Layer 0:
Layer 2:
The immediate 2 noticeable problems are that:
A layer should not have multiple tones of gray, only one (since each layer corresponds to a different height there should not be multiple heights being rendered to the same layer)
The darkest section of layer 2 looks like what layer 0 should look like (i.e. a filled shape with no "holes"). So not only does it seem I am rendering multiple heights to the same layer, it also seems I have an offset of 2 when rendering, which should not happen.
Does anyone have any idea as to what the problem could be?
EDIT:
In case anyone is wondering, the cubes have dimensions of [1,1,1] and their coordinate system is aligned with the texture, i.e. the bottom, left, front corner of the first cube is at (0,0,0).
EDIT 2:
Changing
pos.z = floor(pos.z);
To:
pos.z = floor(pos.z)+0.1;
Partially fixes the problem. The lowest layer is now correct; however, instead of 3 different colors (height values) there are now 2.
EDIT 3:
It seems the problem comes from drawing the geometry multiple times.
i.e. my actual draw call looks like:
for(uint i=0; i<render_queue.size(); i++)
{
    Object_3D *render_data = render_queue[i];
    //Render multiple instances of the current object
    multi_render(render_data->VAO, &(render_data->VBOs),
        &(render_data->types), render_data->layouts,
        render_data->mesh_indices, render_data->render_instances);
}
void Renderer::multi_render(GLuint VAO, vector<GLuint> *VBOs,
    vector<GLuint> *buffer_types, GLuint layout_num,
    GLuint index_num, GLuint instances)
{
    //error check
    if(VBOs->size() != buffer_types->size())
    {
        cerr << "Mismatching VBOs' and buffer_types sizes" << endl;
        return;
    }
    //Bind Vertex array object and rendering program
    glBindVertexArray(VAO);
    glUseProgram(current_program);
    //enable shader layouts
    for(int i=0; i<layout_num; i++)
        glEnableVertexAttribArray(i);
    //Bind VBOs storing rendering data
    for(uint i=0; i<buffer_types->size(); i++)
    {
        if((*buffer_types)[i]==GL_SHADER_STORAGE_BUFFER)
        {
            glBindBuffer((*buffer_types)[i], (*VBOs)[i]);
            glBindBufferBase(GL_SHADER_STORAGE_BUFFER, i, (*VBOs)[i]);
        }
    }
    //Draw call
    glDrawElementsInstanced(GL_TRIANGLES, index_num, GL_UNSIGNED_INT, (void*)0, instances);
}
It seems then that due to rendering multiple subsets of the scene at a time I end up with different cubes being mapped to the same voxel in 2 different draw calls.
I have figured out the problem. Since my geometry matches the voxel grid 1 to 1, different layers could be mapped to the same voxel, causing them to overlap in the same layer.
Modifying the vertex shader to the following:
#version 450
layout(location = 0) in vec3 position; //(x,y,z) coordinates of a vertex
layout(std430, binding = 3) buffer instance_buffer
{
    vec4 cubes_info[];//first 3 values are position of object
};
out vec3 normalized_pos;
out float test;
uniform float width = 128;
uniform float depth = 128;
uniform float height = 128;
uniform float voxel_size = 1;
uniform float level=0;
void main()
{
    vec4 pos = (vec4(position, 1.0) + vec4(vec3(cubes_info[gl_InstanceID]),0));
    pos.x = (2.f*pos.x-width)/(width);
    pos.y = (2.f*pos.y-depth)/(depth);
    pos.z = cubes_info[gl_InstanceID].z;
    test = pos.z + 1;
    pos.z -= level;
    if(pos.z >= 0 && pos.z < 0.999f)
        pos.z = 1;
    else
        pos.z = 2;
    gl_Position = pos;
    normalized_pos = vec3(pos);
}
Fixes the issue.
The if statement check guarantees that geometry from a different layer that could potentially be mapped to the current layer is discarded.
There are probably better ways to do this. So I will accept as an answer anything that produces an equivalent result in a more elegant way.
This is what layer 0 looks like now:
And this is what layer 2 looks like:

Why is my frag shader casting long shadows horizontally and short shadows vertically?

I have the following fragment shader:
#version 330
layout(location=0) out vec4 frag_colour;
in vec2 texelCoords;
uniform sampler2D uTexture; // the color
uniform sampler2D uTextureHeightmap; // the heightmap
uniform float uSunDistance = -10000000.0; // really far away vertically
uniform float uSunInclination; // height from the heightmap plane
uniform float uSunAzimuth; // clockwise rotation point
uniform float uQuality; // used to determine number of steps and steps size
void main()
{
    vec4 c = texture(uTexture,texelCoords);
    vec2 textureD = textureSize(uTexture,0);
    float d = max(textureD.x,textureD.y); // use the largest dimension to determine stepsize etc
    // position the sun in the centre of the screen and convert from spherical to cartesian coordinates
    vec3 sunPosition = vec3(textureD.x/2,textureD.y/2,0) + vec3( uSunDistance*sin(uSunInclination)*cos(uSunAzimuth),
                                                                 uSunDistance*sin(uSunInclination)*sin(uSunAzimuth),
                                                                 uSunDistance*cos(uSunInclination) );
    float height = texture2D(uTextureHeightmap, texelCoords).r; // starting height
    vec3 direction = normalize(vec3(texelCoords,height) - sunPosition); // sunlight direction
    float sampleDistance = 0;
    float samples = d*uQuality;
    float stepSize = 1.0 / ((samples/d) * d);
    for(int i = 0; i < samples; i++)
    {
        sampleDistance += stepSize; // increase the sample distance
        vec3 newPoint = vec3(texelCoords,height) + direction * sampleDistance; // get the coord for the next sample point
        float newHeight = texture2D(uTextureHeightmap,newPoint.xy).r; // get the height of that sample point
        // put it in shadow if we hit something that is higher than our starting point AND is higher than the ray we're casting
        if(newHeight > height && newHeight > newPoint.z)
        {
            c *= 0.5;
            break;
        }
    }
    frag_colour = c;
}
The purpose is for it to cast shadows based on a heightmap. Pretty nifty, and the results look good.
However, there's a problem where the shadows appear longer when they are horizontal compared to vertical. If I make the window size different, with a window that is taller than wide, I get the opposite effect. I.e., the shadows are casting longer in the longer dimension.
This tells me that it's to do with the way I'm stepping in the above shader, but I can't tell the problem.
To illustrate, here is the result with a uSunAzimuth that results in a horizontally cast shadow:
And here is the exact same code with a uSunAzimuth for a vertical shadow:
It's not very pronounced in these low-resolution images, but in larger resolutions the effect gets more exaggerated. Essentially, if you measure how the shadow casts across all 360 degrees of azimuth, it sweeps out an ellipse instead of a circle.
The shadow fragment shader operates on a "snapshot" of the viewport. When your scene is rendered and this "snapshot" is generated, the vertex positions are transformed by the projection matrix. The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport, and takes into account the aspect ratio of the viewport.
(see Both depth buffer and triangle face orientation are reversed in OpenGL,
and Transform the modelMatrix).
This causes the height map (uTextureHeightmap) to represent a rectangular field of view, dependent on the aspect ratio.
But the texture coordinates, which you use to access the height map describe a quad in the range (0, 0) to (1, 1).
This mismatch must be balanced by scaling with the aspect ratio.
vec3 direction = ....;
float aspectRatio = textureD.x / textureD.y;
direction.xy *= vec2( 1.0/aspectRatio, 1.0 );
I just needed to adjust the direction slightly.
float aspectCorrection = textureD.x / textureD.y;
...
vec3 direction = normalize(vec3(texelCoords,height) - sunPosition);
direction.y *= aspectCorrection;

GBUFFER Decal Projection and scaling

I have been working on projecting decals on to anything that the decal's bounding box encapsulates. After reading and trying numerous code snippets (usually in HLSL), I have a somewhat working method in GLSL for projecting the decals.
Let me start with trying to explain what I'm doing and how this works (so far).
The code below is now fixed and works!
This is all while in perspective view mode.
I send 2 uniforms to the fragment shader, "tr" and "bl". These are the 2 corners of the bounding box. I can and will replace these with hard-coded sizes because they are the size of the decal's original bounding box: tr = vec3(.5, .5, .5) and bl = vec3(-.5, -.5, -.5). I'd prefer to find a way to do the position tests in the decal's transformed state (more about this at the end).
Adding this for clarity. The vertex emitted from the vertex program is the bounding box multiplied by the decal's matrix and then by the model-view-projection matrix. I use this for the next step:
With that vertex, I get the depth value from the depth texture and with it, calculate the position in world space using the inverse of the projection matrix.
Next, I transform this position using the inverse of the decal's matrix (the matrix that scales, rotates and translates the 1,1,1 cube to its world location). I thought that by using the inverse of the decal's transform matrix, the correct size and rotation of the screen point would be handled, but it is not.
Vertex Program:
//Decals color pass.
#version 330 compatibility
out mat4 matPrjInv;
out vec4 positionSS;
out vec4 positionWS;
out mat4 invd_mat;
uniform mat4 decal_matrix;
void main(void)
{
    gl_Position = decal_matrix * gl_Vertex;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Position;
    positionWS = (decal_matrix * gl_Vertex);
    positionSS = gl_Position;
    matPrjInv = inverse(gl_ModelViewProjectionMatrix);
    invd_mat = inverse(decal_matrix);
}
Fragment Program:
#version 330 compatibility
layout (location = 0) out vec4 gPosition;
layout (location = 1) out vec4 gNormal;
layout (location = 2) out vec4 gColor;
uniform sampler2D depthMap;
uniform sampler2D colorMap;
uniform sampler2D normalMap;
uniform mat4 matrix;
uniform vec3 tr;
uniform vec3 bl;
in vec2 TexCoords;
in vec4 positionSS; // screen space
in vec4 positionWS; // world space
in mat4 invd_mat; // inverse decal matrix
in mat4 matPrjInv; // inverse projection matrix
void clip(vec3 v){
    if (v.x > tr.x || v.x < bl.x ) { discard; }
    if (v.y > tr.y || v.y < bl.y ) { discard; }
    if (v.z > tr.z || v.z < bl.z ) { discard; }
}
vec2 postProjToScreen(vec4 position)
{
    vec2 screenPos = position.xy / position.w;
    return 0.5 * (vec2(screenPos.x, screenPos.y) + 1);
}
void main(){
    // Calculate UVs
    vec2 UV = postProjToScreen(positionSS);
    // sample the Depth from the Depthsampler
    float Depth = texture2D(depthMap, UV).x * 2.0 - 1.0;
    // Calculate Worldposition by recreating it out of the coordinates and depth-sample
    vec4 ScreenPosition;
    ScreenPosition.xy = UV * 2.0 - 1.0;
    ScreenPosition.z = (Depth);
    ScreenPosition.w = 1.0f;
    // Transform position from screen space to world space
    vec4 WorldPosition = matPrjInv * ScreenPosition;
    WorldPosition.xyz /= WorldPosition.w;
    WorldPosition.w = 1.0f;
    // transform to decal original position and size.
    // 1 x 1 x 1
    WorldPosition = invd_mat * WorldPosition;
    clip (WorldPosition.xyz);
    // Get UV for textures
    WorldPosition.xy += 0.5;
    WorldPosition.y *= -1.0;
    vec4 bump = texture2D(normalMap, WorldPosition.xy);
    gColor = texture2D(colorMap, WorldPosition.xy);
    //Going to have to do decals in 2 passes..
    //Blend doesn't work with GBUFFER.
    //Lots more to sort out.
    gNormal.xyz = bump.xyz;
    gPosition = positionWS;
}
And here are a couple of images showing what's wrong.
What I get for the projection:
And this is the actual size of the decals. Much larger than what my shader is creating!
I have tried creating a new matrix using the decal and the projection matrix to construct a sort of "lookat" matrix and translate the screen position into the decal's post-transform state. I have not been able to get this working. Somewhere I am missing something, but where? I thought that translating using the inverse of the decal's matrix would deal with the transform and put the screen position in the proper transformed state. Ideas?
Updated the code for the texture UVs. You may have to fiddle with the y and x depending on whether your texture is flipped on x or y. I also fixed the clip sub so it works correctly. As it is, this code now works. I will update this more if needed so others don't have to go through the pain I did to get it working.
Some issues to resolve are decals laying over each other. The one on top overwrites the one below. I think I will have to accumulate the colors and normals into the default FBO and then blend (add) them to the GBUFFER textures before or during the lighting pass. Adding more screen-size textures is not a great idea, so I will need to be creative and recycle any textures I can.
I found the solution to decals overlaying each other.
Turn OFF depth masking while drawing the decals and turn it back on afterwards:
glDepthMask(GL_FALSE)
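A minimal sketch of where that call sits in the frame (drawDecals() is a hypothetical stand-in for whatever issues the decal draw calls):
glDepthMask(GL_FALSE); // decals still depth-test, but no longer write depth
drawDecals();          // hypothetical: issue the decal draw calls here
glDepthMask(GL_TRUE);  // restore depth writes for the rest of the frame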
OK.. I'm so excited. I found the issue.
I updated the code above again.
I had a mistake in what I was sending the shader for tr and bl:
Here is the change to clip:
void clip(vec3 v){
    if (v.x > tr.x || v.x < bl.x ) { discard; }
    if (v.y > tr.y || v.y < bl.y ) { discard; }
    if (v.z > tr.z || v.z < bl.z ) { discard; }
}

Uniform point arrays and managing fragment shader coordinates systems

My aim is to pass an array of points to the shader, calculate their distance to the fragment and paint them with a circle colored with a gradient depending on that computation.
For example:
(From a working example I set up on shader toy)
Unfortunately it isn't clear to me how I should calculate and convert the coordinates passed for processing inside the shader.
What I'm currently trying is to pass two arrays of floats - one for x positions and one for y positions of each point - to the shader through a uniform. Then inside the shader I iterate through each point like so:
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif
uniform float sourceX[100];
uniform float sourceY[100];
uniform vec2 resolution;
in vec4 gl_FragCoord;
varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;
void main()
{
    float intensity = 0.0;
    for(int i=0; i<100; i++)
    {
        vec2 source = vec2(sourceX[i],sourceY[i]);
        vec2 position = ( gl_FragCoord.xy / resolution.xy );
        float d = distance(position, source);
        intensity += exp(-0.5*d*d);
    }
    intensity=3.0*pow(intensity,0.02);
    if (intensity<=1.0)
        gl_FragColor=vec4(0.0,intensity*0.5,0.0,1.0);
    else if (intensity<=2.0)
        gl_FragColor=vec4(intensity-1.0, 0.5+(intensity-1.0)*0.5,0.0,1.0);
    else
        gl_FragColor=vec4(1.0,3.0-intensity,0.0,1.0);
}
But that doesn't work - and I believe it may be because I'm trying to work with the pixel coordinates without properly translating them. Could anyone explain to me how to make this work?
Update:
The current result is:
The sketch's code is:
PShader pointShader;
float[] sourceX;
float[] sourceY;
void setup()
{
    size(1024, 1024, P3D);
    background(255);
    sourceX = new float[100];
    sourceY = new float[100];
    for (int i = 0; i<100; i++)
    {
        sourceX[i] = random(0, 1023);
        sourceY[i] = random(0, 1023);
    }
    pointShader = loadShader("pointfrag.glsl", "pointvert.glsl");
    shader(pointShader, POINTS);
    pointShader.set("sourceX", sourceX);
    pointShader.set("sourceY", sourceY);
    pointShader.set("resolution", float(width), float(height));
}
void draw()
{
    for (int i = 0; i<100; i++) {
        strokeWeight(60);
        point(sourceX[i], sourceY[i]);
    }
}
while the vertex shader is:
#define PROCESSING_POINT_SHADER
uniform mat4 projection;
uniform mat4 transform;
attribute vec4 vertex;
attribute vec4 color;
attribute vec2 offset;
varying vec4 vertColor;
varying vec2 center;
varying vec2 pos;
void main() {
    vec4 clip = transform * vertex;
    gl_Position = clip + projection * vec4(offset, 0, 0);
    vertColor = color;
    center = clip.xy;
    pos = offset;
}
Update:
Based on the comments it seems you have confused two different approaches:
1. Draw a single full-screen polygon, pass in the points and calculate the final value once per fragment using a loop in the shader.
2. Draw bounding geometry for each point, calculate the density for just one point in the fragment shader and use additive blending to sum the densities of all points.
The other issue is your points are given in pixels but the code expects a 0 to 1 range, so d is large and the points are black. Fixing this issue as #RetoKoradi describes should address the points being black, but I suspect you'll find ramp clipping issues when many are in close proximity. Passing points into the shader limits scalability and is inefficient unless the points cover the whole viewport.
As below, I think sticking with approach 2 is better. To restructure your code for it, remove the loop, don't pass in the array of points and use center as the point coordinate instead:
//calc center in pixel coordinates
vec2 centerPixels = (center * 0.5 + 0.5) * resolution.xy;
//find the distance in pixels (avoiding aspect ratio issues)
float dPixels = distance(gl_FragCoord.xy, centerPixels);
//scale down to the 0 to 1 range
float d = dPixels / resolution.y;
//write out the intensity
gl_FragColor = vec4(exp(-0.5*d*d));
Draw this to a texture (from comments: opengl-tutorial.org code and this question) with additive blending:
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
Now that texture will contain intensity as it was after your original loop. In another fragment shader during a full screen pass (draw a single triangle that covers the whole viewport), continue with:
uniform sampler2D intensityTex;
...
float intensity = texture2D(intensityTex, gl_FragCoord.xy/resolution.xy).r;
intensity = 3.0*pow(intensity, 0.02);
...
The code you have shown is fine, assuming you're drawing a full screen polygon so the fragment shader runs once for each pixel. Potential issues are:
resolution isn't set correctly
The point coordinates aren't in the range 0 to 1 on the screen.
Although minor, d will be stretched by the aspect ratio, so you might be better off scaling the points up to pixel coordinates and dividing the distance by resolution.y.
This looks pretty similar to creating a density field for 2D metaballs. For performance you're best off limiting the density function for each point so it doesn't go on forever, then splatting discs into a texture using additive blending. This saves processing those pixels a point doesn't affect (just like in deferred shading). The result is the density field, or in your case per-pixel intensity.
These are a little related:
2D OpenGL ES Metaballs on android (no answers yet)
calculate light volume radius from intensity
gl_PointSize Corresponding to World Space Size
It looks like the point center and fragment position are in different coordinate spaces when you subtract them:
vec2 source = vec2(sourceX[i],sourceY[i]);
vec2 position = ( gl_FragCoord.xy / resolution.xy );
float d = distance(position, source);
Based on your explanation and code, sourceX and sourceY are in window coordinates, meaning that they are in units of pixels. gl_FragCoord is in the same coordinate space. And even though you don't show that directly, I assume that resolution is the size of the window in pixels.
This means that:
vec2 position = ( gl_FragCoord.xy / resolution.xy );
calculates the normalized position of the fragment within the window, in the range [0.0, 1.0] for both x and y. But then on the next line:
float d = distance(position, source);
you subtract source, which is still in window coordinates, from this position in normalized coordinates.
Since it looks like you wanted the distance in normalized coordinates, which makes sense, you'll also need to normalize source:
vec2 source = vec2(sourceX[i],sourceY[i]) / resolution.xy;

3D texture get hidden when viewed from different angle

I have encountered a problem with rendering artifacts in a 3D texture, as below:
I have searched the net for a solution to this problem, and most answers pointed towards the depth buffer bit depth.
While I have tried to change the depth buffer to 24 bits, and from GL_DEPTH to GL_STENCIL in GLUT, the result remains the same: the texture (or the geometry, not really sure) gets hidden when viewed from certain angles.
So, can I know what exactly the problem is that results in this kind of artifact?
Below is the fragment shader code snippet (OpenGL Development Cookbook):
void main()
{
    //get the 3D texture coordinates for lookup into the volume dataset
    vec3 dataPos = vUV;
    vec3 geomDir = normalize((vec3(0.556,0.614,0.201)*vUV-vec3(0.278,0.307,0.1005)) - camPos);
    vec3 dirStep = geomDir * step_size;
    //flag to indicate if the raymarch loop should terminate
    bool stop = false;
    //for all samples along the ray
    for (int i = 0; i < MAX_SAMPLES; i++) {
        // advance ray by dirstep
        dataPos = dataPos + dirStep;
        stop = dot(sign(dataPos-texMin),sign(texMax-dataPos)) < 3.0f;
        //if the stopping condition is true we break out of the ray marching loop
        if (stop)
            break;
        // data fetching from the red channel of volume texture
        float sample = texture(volume, dataPos).r;
        float prev_alpha = sample - (sample * vFragColor.a);
        vFragColor.rgb = (prev_alpha) * vec3(sample) + vFragColor.rgb;
        vFragColor.a += prev_alpha;
        if( vFragColor.a>0.99)
            break;
    }
}
FYI, below is the vertex shader snippet:
#version 330 core
layout(location = 0) in vec3 vVertex; //object space vertex position
//uniform
uniform mat4 MVP; //combined modelview projection matrix
smooth out vec3 vUV; //3D texture coordinates for texture lookup in the fragment shader
void main()
{
    //get the clipspace position
    gl_Position = MVP*vec4(vVertex.xyz,1);
    //get the 3D texture coordinates by adding (0.5,0.5,0.5) to the object space
    //vertex position. Since the unit cube is at origin (min: (-0.5,-0.5,-0.5) and max: (0.5,0.5,0.5))
    //adding (0.5,0.5,0.5) to the unit cube object space position gives us values from (0,0,0) to
    //(1,1,1)
    //vUV = (vVertex + vec3(0.278,0.307,0.1005))/vec3(0.556,0.614,0.201);
    vUV = vVertex/vec3(0.556,0.614,0.201); //after moving the cube to coordinates range of 0-1
}
EDITED: The artifacts appear especially when viewing from near the edge.
FYI, glm::perspective(45.0f,(float)w/h, 1.0f,10.0f);