Texturing a ball on a sphere shows a dark band - C++

I am using this code to generate sphere vertices and texture coordinates, but as you can see in the image, when I rotate the sphere I can see a dark band.
for (int i = 0; i <= stacks; ++i)
{
    float s = (float)i / (float)stacks;
    float theta = s * 2 * glm::pi<float>();

    for (int j = 0; j <= slices; ++j)
    {
        float sl = (float)j / (float)slices;
        float phi = sl * glm::pi<float>();

        const float x = cos(theta) * sin(phi);
        const float y = sin(theta) * sin(phi);
        const float z = cos(phi);

        sphere_vertices.push_back(radius * glm::vec3(x, y, z));
        sphere_texcoords.push_back(glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0));
    }
}
// get the indices
for (int i = 0; i < stacks * slices + slices; ++i)
{
    sphere_indices.push_back(i);
    sphere_indices.push_back(i + slices + 1);
    sphere_indices.push_back(i + slices);

    sphere_indices.push_back(i + slices + 1);
    sphere_indices.push_back(i);
    sphere_indices.push_back(i + 1);
}
I can't find a way to make it right, whatever texture coordinates I use.
Hmm... If I use another image, then the mapping is different (and worse!)
vertex shader:
#version 330 core

layout (location = 0) in vec3 aPos;
layout (location = 1) in vec2 aTexCoord;

out vec4 vertexColor;
out vec2 TexCoord;

uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;

void main()
{
    gl_Position = projection * view * model * vec4(aPos, 1.0);
    vertexColor = vec4(0.5, 0.2, 0.5, 1.0);
    TexCoord = aTexCoord;
}
fragment shader:
#version 330 core

out vec4 FragColor;

in vec4 vertexColor;
in vec2 TexCoord;

uniform sampler2D sphere_texture;

void main()
{
    FragColor = texture(sphere_texture, TexCoord);
}
I am not using any lighting conditions.
If I use FragColor = vec4(TexCoord.x, TexCoord.y, 0.0f, 1.0f); in the fragment shader (for debugging purposes), I get a nice sphere.
I am using this as texture:

That image of the tennis ball that you linked reveals the problem. I'm glad you ultimately provided it.
Your image is a four-channel PNG with transparency (Alpha channel). There are transparent pixels all around the outside of the yellow part of the ball that have (R,G,B,A) = (0, 0, 0, 0), so if you're ignoring the A channel then (R, G, B) will be (0, 0, 0) = black.
Here are just the Red, Green, and Blue (RGB) channels:
And here is just the Alpha (A) channel.
The important thing to notice is that the circle of the ball does not fill the square. There is a significant margin of 53 pixels of black from the extent of the ball to the edge of the texture. We can calculate the radius of the ball from this: half the width is 1000 pixels, of which 53 pixels are not used, so the ball's radius is 1000 - 53 = 947 pixels, or about 94.7% of the distance from the center to the edge of the texture. The remaining 5.3% of the distance is black.
Side note: I also notice that your ball doesn't quite reach 100% opacity. The yellow part of the ball has an alpha channel value of 254 (of 255), meaning 99.6% opaque. The white lines and the shiny hot spot do actually reach 100% opacity, giving it sort of a Death Star look. ;)
To fix your problem, there is an intuitive approach (which may not get you all the way there) and there are approaches that will. Here are a few things you can do:
Intuitive Solution:
This won't quite get you 100% there.
1) Resize the ball to fill the texture. Use image editing software to enlarge the ball to fill the texture, or to trim off the black pixels. This will just make more efficient use of pixels, for one, but it will ensure that there are useful pixels being sampled at the boundary. You'll probably want to expand the image to be slightly larger than 100%. I'll explain why below.
2) Remap your texture coordinates to only extend to 94.7% of the radius of the ball. (Similar to approach 1, but doesn't require image editing). This just uses coordinates that actually correspond to the image you provided. Your x and y coordinates need to be scaled about the center of the image and reduced to about 94.7%.
x2 = 0.5 + (x - 0.5) * 0.947;
y2 = 0.5 + (y - 0.5) * 0.947;
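If it's easier to experiment without regenerating the vertex data, the same scaling can be applied to the interpolated coordinates in the fragment shader instead. A rough, untested sketch using the names from your shaders (the 0.947 factor is only my estimate from the image):

// Inside the fragment shader's main(): shrink the sampling footprint about
// the texture centre so the lookup never reaches the black margin.
vec2 scaledUV = vec2(0.5) + (TexCoord - vec2(0.5)) * 0.947;
FragColor = texture(sphere_texture, scaledUV);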
Suggested Solution:
This will ensure no more black.
3) Fill the "black" portion of your ball texture with a less objectionable colour - probably the colour that is at the circumference of the tennis ball. This ensures that any texels that are sampled at exactly the edge of the ball won't be linearly combined with black to produce an unsightly dark-but-not-quite-black band, which is almost the problem you have right now anyway. You can do this in two ways. A) Image editing software. Remove the transparency from your image and matte it against a dark yellow colour. B) Use the shader to detect pixels that are outside the image and replace them with a border colour (this is clever, but probably more trouble than it's worth.)
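For illustration, here is a rough, untested sketch of option B, reusing the fragment shader from the question. The radius and the border colour are assumptions: the radius comes from the 94.7% estimate above, and the colour is just a hand-picked dark yellow.

#version 330 core
out vec4 FragColor;
in vec2 TexCoord;
uniform sampler2D sphere_texture;

void main()
{
    // Assumed constants: ball radius as a fraction of the texture
    // (94.7% of the half-width, i.e. 0.4735 in texture space)
    // and a hand-picked dark-yellow border colour.
    const float ballRadius   = 0.947 * 0.5;
    const vec4  borderColour = vec4(0.55, 0.55, 0.10, 1.0);

    vec2 fromCentre = TexCoord - vec2(0.5);
    vec4 sampled    = texture(sphere_texture, TexCoord);

    // If the sample point lies outside the ball, fall back to the border colour.
    FragColor = (length(fromCentre) > ballRadius) ? borderColour : sampled;
}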
Different Texture Coordinates
The last thing you can do is avoid this degenerate texture-mapping-coordinate problem altogether. At the equator, you're not really sure which pixels to sample: the black (transparent) pixels or the coloured pixels of the ball. The discrete nature of square pixels is fighting against the polar nature of your texture map, and you'll never find the exact colour you need near the edge to produce a continuous, seamless map. Instead, you can use a different coordinate system. I hope you're not attached to how that ball looks, because let me introduce you to the equirectangular projection. It's the same projection that you can naively use to map the globe of the Earth to the typical rectangular world map you're likely familiar with, where the north and south poles get all the distortion but the equatorial regions look pretty good.
Here's your image mapped to equirectangular coordinates:
Notice that black bar at the bottom... we're onto something! That black bar is exactly what appears around the equator of your ball with your current texture-mapping coordinates. But with this coordinate system you can easily see that if we just remapped the ball to fill the square, we'd eliminate the transparent pixels entirely.
It may be inconvenient to work in this coordinate system, but you can transform your image in Photoshop using Filter > Distort > Polar Coordinates... > Polar to Rectangular.
Sigismondo's answer already suggests how to adjust your texture mapping coordinates to do this.
And finally, here's a texture that is both enlarged to fill the texture space, and remapped to equirectangular coordinates. No black bars, minimal distortion. But you'll have to use Sigismondo's texture mapping coordinates. Again, this may not be for you, especially if you're attached to the idea of the direct projection for your texture (i.e.: if you don't want to manipulate your tennis ball image and you want to use that projection.) But if you're willing to remap your data, you can rest easy that all the black pixels will be gone!
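As a side note, if you do switch to an equirectangular texture, the coordinates don't even have to be stored per vertex. Here is a rough, untested sketch that derives them per fragment from the object-space direction; the dir input and the longitude/latitude convention are assumptions that would have to match your sphere generation:

#version 330 core
out vec4 FragColor;
in vec3 dir;                      // assumed: normalized object-space position from the vertex shader
uniform sampler2D sphere_texture; // assumed: an equirectangular texture

const float PI = 3.14159265359;

void main()
{
    // Longitude (theta) maps to U; latitude measured from the +Z pole (phi) maps to V.
    float u = atan(dir.y, dir.x) / (2.0 * PI) + 0.5;
    float v = acos(clamp(dir.z, -1.0, 1.0)) / PI;
    FragColor = texture(sphere_texture, vec2(u, v));
}

Computing u with atan per fragment can produce a one-pixel seam where the texture wraps if mipmapping is enabled; the per-vertex coordinates from Sigismondo's answer avoid that, since your loop already duplicates the seam column.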
Good luck! Feel free to ask for clarifications.

I cannot test it, since the code is incomplete, but from a rough look I have spotted this problem:
sphere_texcoords.push_back((glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0)));
The texture coordinates should not be computed from x and y, which are:
const float x = cos(theta) * sin(phi);
const float y = sin(theta) * sin(phi);
but from the angles theta and phi, i.e. from stacks and slices. This could work better - untested:
sphere_texcoords.push_back(glm::vec2(s,sl));
being already defined:
float s = (float)i / (float) stacks;
float sl = (float)j / (float) slices;
Furthermore, in your code you are treating the first and the last "slices" of the sphere the same as the rest... Shouldn't they be handled differently? This seems quite odd to me - but I don't know whether your implementation is just a simpler one that works fine anyway.
Compare with this explanation, for example: http://www.songho.ca/opengl/gl_sphere.html

Related

How can I make GL_POINTS overlap to look like spheres?

I am attempting to create a voxel style game, and I want to use GL_POINTS to simulate spherical voxels.
I am aiming to have them look like 3d spheres without having to render an actual sphere with many vertices.
However, when I created a mass of GL_POINTS, they overlap in a way that makes it obvious that they are flat circle sprites.
Here is an example:
my image example of gl_points overlapping showing circular sprite:
I would like to have the circular GL_POINTS overlap in a way that makes them look like spheres being squished together and hiding parts of each other.
For an example of what I would like to achieve, here is an image showing Star Defenders 3D by Eric Gurt, in which he used spherical points as voxels in Javascript for his levels:
Example image showing points that look like spheres:
As you can see, where the points overlap, they hide parts of each other creating the illusion that they are 3d spheres instead of circular sprites.
Is there a way to replicate this in OpenGL?
I am using OpenGL 3.3.0.
I have finally implemented a way to make points look like spheres by changing gl_FragDepth.
This is the code from my fragment shader to make a square gl_point into a sphere. (no lighting)
void makeSphere()
{
    // Map gl_PointCoord from [0,1] to [-1,1] and clamp fragments to a circle shape.
    vec2 mapping = gl_PointCoord * 2.0F - 1.0F;
    float d = dot(mapping, mapping);
    if (d >= 1.0F)
    {
        // Discard fragments outside the unit circle (squared length >= 1).
        discard;
    }
    float z = sqrt(1.0F - d);

    vec3 normal = vec3(mapping, z);
    normal = mat3(transpose(viewMatrix)) * normal;
    vec3 cameraPos = vec3(worldPos) + rad * normal;

    // Set the depth based on the new cameraPos.
    vec4 clipPos = projectionMatrix * viewMatrix * vec4(cameraPos, 1.0);
    float ndcDepth = clipPos.z / clipPos.w;
    gl_FragDepth = ((gl_DepthRange.diff * ndcDepth) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;

    // Calculate fake ambient occlusion for the circle.
    if (bool(fAoc))
        ambientOcclusion = sqrt(1.0F - d * 0.5F);
}
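For context, a minimal self-contained fragment shader built around that function might look like the sketch below (untested). The names viewMatrix, projectionMatrix, worldPos, rad and fAoc come from the snippet above; sphereColor and the declarations themselves are assumptions that have to match your own pipeline. Also remember that writing gl_FragDepth disables early depth testing, and the point size has to be set up on the CPU side (e.g. glEnable(GL_PROGRAM_POINT_SIZE)).

#version 330 core

in vec4 worldPos;               // assumed: the point's centre in world space, from the vertex shader
out vec4 fragColor;

uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform float rad;              // assumed: world-space radius of one point-sphere
uniform int fAoc;               // assumed: toggle for the fake ambient occlusion
uniform vec3 sphereColor;       // assumed: flat colour of the voxel

void main()
{
    // Map gl_PointCoord from [0,1] to [-1,1] and reject fragments outside the unit circle.
    vec2 mapping = gl_PointCoord * 2.0 - 1.0;
    float d = dot(mapping, mapping);
    if (d >= 1.0)
        discard;

    // Reconstruct the world-space normal of the sphere surface at this fragment.
    float z = sqrt(1.0 - d);
    vec3 normal = mat3(transpose(viewMatrix)) * vec3(mapping, z);

    // Push the fragment's depth out to where the sphere surface actually is.
    vec3 surfacePos = worldPos.xyz + rad * normal;
    vec4 clipPos = projectionMatrix * viewMatrix * vec4(surfacePos, 1.0);
    float ndcDepth = clipPos.z / clipPos.w;
    gl_FragDepth = ((gl_DepthRange.diff * ndcDepth) + gl_DepthRange.near + gl_DepthRange.far) / 2.0;

    // Optional fake ambient occlusion, darker towards the silhouette.
    float ambientOcclusion = bool(fAoc) ? sqrt(1.0 - d * 0.5) : 1.0;

    fragColor = vec4(sphereColor * ambientOcclusion, 1.0);
}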

Showing Red Light on Green Surfaces LWJGL

Hey everyone, I'm working with lighting in a 2D tile-based game and have run into a problem with my lighting calculations. In my game I take greyscale images and then color them with shaders in whatever color I like, whether that is green (rgb = (0,1,0)), red (rgb = (1,0,0)), or any other color. I then apply my lighting calculations to that textured and colored pixel. The lighting works fine when the light is white (rgb = (1,1,1)), but when it is, say, red or green it won't show the way I want it to. I know why this is happening: realistically, a pure red light in a pure green room would reflect no red light, so the room would remain dark. What I really want is to see a red light appear over a green surface. So my question is: how can I show a red light clearly on a green surface? (Or really any color of light on any color of surface.)
This is the code for my fragment shader, where attenuation is simply the attenuation for each light, lightColor is the light's RGB value, distance is the distance from the given vertex to that light (calculated in the vertex shader), and finally color is the RGB value that is applied to the texture.
Thanks in advance for your help!
vec3 totalDiffuse = vec3(0.0);

for (int i = 0; i < 4; i++)
{
    float attFactor = attenuation[i].x + (attenuation[i].y * distance[i]) + (attenuation[i].z * distance[i] * distance[i]);
    totalDiffuse = totalDiffuse + (lightColor[i]) / attFactor;
}

totalDiffuse = max(totalDiffuse, 0.2);
out_Color = texture(textureSampler, pass_textureCoords) * vec4(color, alpha) * vec4(totalDiffuse, 1);
And here is an image of what a pure red light currently looks like on a surface. It should be inside the white circle; you may be able to see that it affects the water a little, because I give the water a small red component:
Light Demo Image
One possibility would be to change the light calculation.
Calculate grayscale values of the light color and the surface color. Multiply the surface color by the grayscale of the light color, multiply the light color by the grayscale of the surface color, and finally sum them up:
vec4 texCol = texture(textureSampler, pass_textureCoords);
float grayTex = dot(texCol.rgb, vec3(0.2126, 0.7152, 0.0722));
float grayCol = dot(color.rgb, vec3(0.2126, 0.7152, 0.0722));
vec3 mixCol = texCol.rgb * grayCol + color.rgb * grayTex;
out_Color = vec4(mixCol * totalDiffuse, texCol.a * alpha);
Note, this algorithm emphasizes the color of the light at the expense of the color of the surface. But that is what you wanted when bathing a green area in red light. Of course, it contradicts the desire to illuminate an area in its own color: if the light is white, then the surface will also shine white.
If you want some light sources to have the effect described above, and other sources to keep the original effect from the question, then I recommend introducing a parameter that mixes the two effects:
uniform float u_lightTint;

void main()
{
    .....

    vec3 mixCol = texCol.rgb * grayCol + color.rgb * grayTex;
    mixCol = mix(texCol.rgb * color.rgb, mixCol.rgb, u_lightTint);

    out_Color = vec4(mixCol * totalDiffuse, texCol.a * alpha);
}
If u_lightTint is set to 1.0, then the "new" light calculation is used; if it is set to 0.0, then the original light calculation is used. The two algorithms can be interpolated linearly via u_lightTint.
Alternatively the u_lightTint parameter can be encoded in the alpha channel of the light color:
mixCol = mix(texCol.rgb * color.rgb, mixCol.rgb, color.a);
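Put together, the whole fragment shader could look roughly like the sketch below (untested). The names are taken from the question and the snippets above where they exist; the in/uniform declarations themselves are assumptions and have to match your vertex shader and Java-side code:

#version 330 core

in vec2 pass_textureCoords;
in float lightDistance[4];       // the per-light distance (called "distance" in the question)

out vec4 out_Color;

uniform sampler2D textureSampler;
uniform vec3 lightColor[4];
uniform vec3 attenuation[4];     // (constant, linear, quadratic)
uniform vec3 color;              // tint applied to the greyscale texture
uniform float alpha;
uniform float u_lightTint;       // 0 = original lighting, 1 = "light colour wins"

void main()
{
    vec3 totalDiffuse = vec3(0.0);
    for (int i = 0; i < 4; i++)
    {
        float attFactor = attenuation[i].x
                        + attenuation[i].y * lightDistance[i]
                        + attenuation[i].z * lightDistance[i] * lightDistance[i];
        totalDiffuse += lightColor[i] / attFactor;
    }
    totalDiffuse = max(totalDiffuse, 0.2);

    vec4 texCol   = texture(textureSampler, pass_textureCoords);
    float grayTex = dot(texCol.rgb, vec3(0.2126, 0.7152, 0.0722));
    float grayCol = dot(color,      vec3(0.2126, 0.7152, 0.0722));

    // Blend between the original tinting (texture * colour) and the
    // "light colour wins" variant described above.
    vec3 mixCol = texCol.rgb * grayCol + color * grayTex;
    mixCol = mix(texCol.rgb * color, mixCol, u_lightTint);

    out_Color = vec4(mixCol * totalDiffuse, texCol.a * alpha);
}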

Why is my frag shader casting long shadows horizontally and short shadows vertically?

I have the following fragment shader:
#version 330

layout(location=0) out vec4 frag_colour;
in vec2 texelCoords;

uniform sampler2D uTexture;           // the color
uniform sampler2D uTextureHeightmap;  // the heightmap
uniform float uSunDistance = -10000000.0; // really far away vertically
uniform float uSunInclination;        // height from the heightmap plane
uniform float uSunAzimuth;            // clockwise rotation point
uniform float uQuality;               // used to determine number of steps and step size

void main()
{
    vec4 c = texture(uTexture, texelCoords);
    vec2 textureD = textureSize(uTexture, 0);
    float d = max(textureD.x, textureD.y); // use the largest dimension to determine step size etc.

    // position the sun in the centre of the screen and convert from spherical to cartesian coordinates
    vec3 sunPosition = vec3(textureD.x / 2, textureD.y / 2, 0)
                     + vec3(uSunDistance * sin(uSunInclination) * cos(uSunAzimuth),
                            uSunDistance * sin(uSunInclination) * sin(uSunAzimuth),
                            uSunDistance * cos(uSunInclination));

    float height = texture2D(uTextureHeightmap, texelCoords).r;          // starting height
    vec3 direction = normalize(vec3(texelCoords, height) - sunPosition); // sunlight direction

    float sampleDistance = 0;
    float samples = d * uQuality;
    float stepSize = 1.0 / ((samples / d) * d);

    for (int i = 0; i < samples; i++)
    {
        sampleDistance += stepSize; // increase the sample distance
        vec3 newPoint = vec3(texelCoords, height) + direction * sampleDistance; // coord of the next sample point
        float newHeight = texture2D(uTextureHeightmap, newPoint.xy).r;          // height at that sample point

        // put it in shadow if we hit something that is higher than our starting point AND higher than the ray we're casting
        if (newHeight > height && newHeight > newPoint.z)
        {
            c *= 0.5;
            break;
        }
    }

    frag_colour = c;
}
The purpose is for it to cast shadows based on a heightmap. Pretty nifty, and the results look good.
However, there's a problem: the shadows appear longer when they are horizontal compared to vertical. If I make the window a different size, with a window that is taller than it is wide, I get the opposite effect. That is, the shadows cast longer in the longer dimension.
This tells me that it's to do with the way I'm stepping in the above shader, but I can't tell the problem.
To illustrate, here is the result with a uSunAzimuth that produces a horizontally cast shadow:
And here is the exact same code with a uSunAzimuth for a vertical shadow:
It's not very pronounced in these low-resolution images, but at larger resolutions the effect gets more exaggerated. Essentially, if you measure how the shadow casts over all 360 degrees of azimuth, it sweeps out an ellipse instead of a circle.
The shadow fragment shader operates on a "snapshot" of the viewport. When your scene is rendered and this "snapshot" is generated, the vertex positions are transformed by the projection matrix. The projection matrix describes the mapping from 3D points of a scene to 2D points of the viewport, and it takes into account the aspect ratio of the viewport.
(see Both depth buffer and triangle face orientation are reversed in OpenGL,
and Transform the modelMatrix).
This causes the height map (uTextureHeightmap) to represent a rectangular field of view that depends on the aspect ratio.
But the texture coordinates which you use to access the height map describe a square in the range (0, 0) to (1, 1).
This mismatch has to be balanced out by scaling with the aspect ratio.
vec3 direction = ....;
float aspectRatio = textureD.x / textureD.y;
direction.xy *= vec2( 1.0/aspectRatio, 1.0 );
I just needed to adjust the direction slightly.
float aspectCorrection = textureD.x / textureD.y;
...
vec3 direction = normalize(vec3(texelCoords,height) - sunPosition);
direction.y *= aspectCorrection;

distortion correction with gpu shader bug

So I have a camera with a wide angle lens. I know the distortion coefficients, the focal length, the optical center. I want to undistort the image I get from this camera. I used OpenCV for the first try (cv::undistort), which worked well, but was way too slow.
Now I want to do this on the gpu. There is a shader doing exactly this documented in http://willsteptoe.com/post/67401705548/ar-rift-aligning-tracking-and-video-spaces-part-5
the formulas can be seen here:
http://en.wikipedia.org/wiki/Distortion_%28optics%29#Software_correction
So I went and implemented my own version as a glsl shader. I am sending a quad with texture coordinates on the corners between 0..1.
I assume the texture coordinates that arrive are the coordinates of the undistorted image. I calculate the coordinates for the distorted point corresponding to my texture coordinates. Then I sample the distorted image texture.
With this shader nothing in the final image changes. The problem I identified through a CPU implementation is that the coefficient term is very close to zero: the numbers get smaller and smaller through radius squaring, etc. So I have a scaling problem - I can't figure out what to do differently! I tried everything... I guess it is something quite obvious, since this kind of process seems to work for a lot of people.
I left out the tangential distortion correction for simplicity.
#version 330 core

in vec2 UV;
out vec4 color;
uniform sampler2D textureSampler;

void main()
{
    vec2 focalLength = vec2(438.568f, 437.699f);
    vec2 opticalCenter = vec2(667.724f, 500.059f);
    vec4 distortionCoefficients = vec4(-0.035109f, -0.002393f, 0.000335f, -0.000449f);

    const vec2 imageSize = vec2(1280.f, 960.f);

    vec2 opticalCenterUV = opticalCenter / imageSize;
    vec2 shiftedUVCoordinates = (UV - opticalCenterUV);
    vec2 lensCoordinates = shiftedUVCoordinates / focalLength;

    float radiusSquared = sqrt(dot(lensCoordinates, lensCoordinates));
    float radiusQuadrupled = radiusSquared * radiusSquared;

    float coefficientTerm = distortionCoefficients.x * radiusSquared + distortionCoefficients.y * radiusQuadrupled;

    vec2 distortedUV = ((lensCoordinates + lensCoordinates * (coefficientTerm))) * focalLength;
    vec2 resultUV = (distortedUV + opticalCenterUV);

    color = texture2D(textureSampler, resultUV);
}
I see two issues with your solution. The main issue is that you mix two different spaces. You seem to work in [0,1] texture space by converting the optical center to that space, but you did not adjust focalLength. The key point is that for such a distortion model, the focal length is determined in pixels. However, now a pixel is not 1 base unit wide anymore, but 1/width and 1/height units, respectively.
You could add vec2 focalLengthUV = focalLength / imageSize, but you will see that the two divisions cancel each other out when you calculate lensCoordinates. It is much more convenient to convert the texture-space UV coordinates to pixel coordinates and use that space directly:
vec2 lensCoordinates = (UV * imageSize - opticalCenter) / focalLength;
(and the calculations for distortedUV and resultUV have to be changed accordingly).
There is still one issue with the approach I have sketched so far: the conventions of that pixel space I mentioned earlier. In GL, the origin will be the lower left corner, while in most pixel spaces, the origin is at the top left. You might have to flip the y coordinate when doing the conversion. Another thing is where exactly pixel centers are located. So far, the code assumes that pixel centers are at integer + 0.5. The texture coordinate (0,0) is not the center of the lower left pixel, but the corner point. The parameters you use for the distortion might (I don't know OpenCV's conventions) assume the pixel centers at integers, so that instead of the conversion pixelSpace = uv * imageSize, you might need to offset this by half a pixel like pixelSpace = uv * imageSize - vec2(0.5).
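For example (a sketch only, since I don't know which convention your calibration actually uses), the conversion with a flipped y axis and a half-pixel offset could look like this:

// Texture space (origin bottom-left, (0,0) at the outer corner of the first texel)
// to a top-left-origin pixel space with pixel centres at integer coordinates:
vec2 pixelSpace = vec2(UV.x, 1.0 - UV.y) * imageSize - vec2(0.5);
vec2 lensCoordinates = (pixelSpace - opticalCenter) / focalLength;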
The second issue I see is
float radiusSquared = sqrt(dot(lensCoordinates, lensCoordinates));
That sqrt is not correct here, as dot(a, a) already gives the squared length of vector a.
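Putting both fixes together, a corrected version of the shader might look roughly like this (untested; it keeps the question's camera parameters and leaves the y-flip / half-pixel questions from above for you to check against your calibration's conventions):

#version 330 core

in vec2 UV;
out vec4 color;
uniform sampler2D textureSampler;

void main()
{
    vec2 focalLength = vec2(438.568, 437.699);
    vec2 opticalCenter = vec2(667.724, 500.059);
    vec4 distortionCoefficients = vec4(-0.035109, -0.002393, 0.000335, -0.000449);
    const vec2 imageSize = vec2(1280.0, 960.0);

    // Work in pixel space: normalized lens coordinates relative to the optical centre.
    vec2 lensCoordinates = (UV * imageSize - opticalCenter) / focalLength;

    // dot(a, a) is already the squared length; no sqrt here.
    float radiusSquared = dot(lensCoordinates, lensCoordinates);
    float radiusQuadrupled = radiusSquared * radiusSquared;

    float coefficientTerm = distortionCoefficients.x * radiusSquared
                          + distortionCoefficients.y * radiusQuadrupled;

    // Back to pixel space, then to [0,1] texture space for the lookup.
    vec2 distortedPixel = (lensCoordinates + lensCoordinates * coefficientTerm) * focalLength + opticalCenter;
    vec2 resultUV = distortedPixel / imageSize;

    color = texture(textureSampler, resultUV);
}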

Cross-fade between two textures on a sphere

I have a 3D scene with only one sphere in it, and I have two textures - one for the night side and one for the day side of this planet.
In addition I have a lightSource at (15,15,15) in my scene. For each vertex on the sphere I also have the normal.
Now I want to blend between the two textures so that the fading between day and night looks realistic.
To do that I calculate the angle between the normal and the light direction using the dot product, but with this approach I get a hard crossover if I just check whether the angle is > 0 (which is the day side).
I need to mix the textures based on the angle so that there is a soft crossover.
Can anyone tell me how I can mix the textures? My code so far:
float angle = dot(L, N);

vec4 texture = texture2D(day, textureCoord);
texture = texture2D(night, textureCoord) * (1 - angle) + texture * angle;

vec4 light = vec4(ambientTerm + diffuseTerm + specularTerm, 1);

if (angle > 0) {
    color = light * texture;
} else if (angle >= -0.25) {
    color = texture2D(night, textureCoord) * (angle - 1) + texture * (angle);
} else if (angle < -0.25) {
    color = texture2D(night, textureCoord);
}
You can use the smoothstep function to turn a continuous value into a 0/1 decision with a small smooth transition in between. So basically you define a range where the transition happens; let's just take [-0.25, 0.25], which would be an angle range from about 75 to 105 degrees (take larger values for a larger transition area, but try to make it symmetrical, thus centered around 90 degrees, since people switch on their lights at dusk already ;)). Then we transform our [-1,1] monotonic cosine using
angle = smoothstep(-0.25, 0.25, angle)
which will result in angle (though that name is a bit unfortunate, given that it isn't an angle and behaves inversely to the actual angle) being 0 if it was < -0.25, 1 if it was > 0.25, and a smooth transition in between. The whole texturing would then look like this (cleaned of the strange ifs and making use of the built-in mix function):
float angle = dot(N, L);
vec4 nightColor = texture2D(night, textureCoord);
vec4 dayColor = texture2D(day, textureCoord);
color = light * mix(nightColor, dayColor, smoothstep(-0.25, 0.25, angle));
Or, for fun and for the sake of completeness, the more streamlined
color = light * mix(texture2D(night, textureCoord),
                    texture2D(day, textureCoord),
                    smoothstep(-0.25, 0.25, dot(N, L)));