Using shader to move texel to its center - opengl

I'm trying to move the texels of a square texture toward the texture's center over time.
The following code does its job, except that I want the drawn pixels to vanish once they reach the center of the geometry (a plane); for now the picture just becomes smaller and smaller as time increases, and the texture looks contracted.
uniform float time;
uniform sampler2D texture; // sampler declaration was missing
varying vec2 vUv;
void main() {
vec2 center = vec2(0.5, 0.5);
// move each texel toward the center as time increases
vec2 newPosition = vec2(vUv.x + time * (center.x - vUv.x), vUv.y + time * (center.y - vUv.y));
gl_FragColor = texture2D( texture, newPosition );
}
Edit :
Think of this as a black hole in the texture center.

As I understand it, you want the texels to be at vUv when t = 0 and at the center after a while.
The result is a zoom-in on the center of the texture.
Actually your code does this from t = 0 to t = 1. When t = 1 the texel position is center.
You have the same behavior using the mix function.
vec2 newPosition = mix(vUv, center, t);
Also, when t = 1 all the texels are at center and the image is a single color image (the color of the center of the texture).
Your problem is that t keeps growing. When t > 1 the texels continue on their paths: after they all meet at the center, they stray from each other again. The effect is that the texture is reversed and you see a zoom-out.
Depending on how you want it to end there are multiple solutions:
You want to go to the maximal zoom and keep that image: clamp t to the range [0, 1], like this: t = clamp(t, 0.0, 1.0);.
You want to go to the maximal zoom and then have the image vanish: stop drawing it when t > 1 (or t >= 1 if you do not want the single-color image).
You want an infinite zoom-in, i.e. texels getting closer and closer to the center but never reaching it.
For this third behavior, you can use a new t, say t2:
t2 = 1 - 1/(t*k+1); // Where k > 0
t2 = 1 - pow(k, -t); // Where k > 1
t2 = f(t); // Where f(0) = 0, f is increasing and continuous, and its limit at +∞ is 1
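For illustration, here is a minimal fragment-shader sketch of that third option, assuming the same time uniform, vUv varying, and texture sampler as the question (the constant k is an arbitrary speed factor):
uniform float time;
uniform sampler2D texture;
varying vec2 vUv;
void main() {
vec2 center = vec2(0.5, 0.5);
float k = 2.0; // zoom speed, k > 0
float t2 = 1.0 - 1.0 / (time * k + 1.0); // rises from 0 toward 1 but never reaches it
vec2 newPosition = mix(vUv, center, t2); // texels approach the center forever
gl_FragColor = texture2D( texture, newPosition );
}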

Recently, I faced the same issue. I found an alternative solution somewhere on the Internet: use a tunnel shader instead of moving the texture's texels to the center. It has behaviour similar to what you want.
float time = -u_time * 0.15; // scroll speed (negative = inward)
vec2 p = 2.0 * gl_FragCoord.xy / u_res.xy - 1.0; // screen position in [-1, 1]
vec2 uv;
float a = atan(p.y, p.x); // polar angle around the screen center
float r = sqrt(dot(p, p)); // distance from the screen center
uv.x = time + .1 / r; // 1/r sends the center to infinity, giving the tunnel depth
uv.y = a / 3.1416; // angle mapped to the other texture axis
vec4 bg = texture2D(u_texture, uv);
I hope it will be helpful.

Related

texture a ball on a sphere has a dark band

I am using this code to generate sphere vertices and texture coordinates, but as you can see in the image, when I rotate the sphere I can see a dark band.
for (int i = 0; i <= stacks; ++i)
{
float s = (float)i / (float) stacks;
float theta = s * 2 * glm::pi<float>();
for (int j = 0; j <= slices; ++j)
{
float sl = (float)j / (float) slices;
float phi = sl * (glm::pi<float>());
const float x = cos(theta) * sin(phi);
const float y = sin(theta) * sin(phi);
const float z = cos(phi);
sphere_vertices.push_back(radius * glm::vec3(x, y, z));
sphere_texcoords.push_back((glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0)));
}
}
// get the indices
for (int i = 0; i < stacks * slices + slices; ++i)
{
sphere_indices.push_back(i);
sphere_indices.push_back(i + slices + 1);
sphere_indices.push_back(i + slices);
sphere_indices.push_back(i + slices + 1);
sphere_indices.push_back(i);
sphere_indices.push_back(i + 1);
}
I can't figure out a way to make it right, whatever texture coordinates I use.
Hmm.. if I use another image, the mapping is different (and worse!)
vertex shader:
#version 330 core
layout (location = 0) in vec3 aPos;
layout (location = 1) in vec3 aTexCoord;
out vec4 vertexColor;
out vec2 TexCoord;
uniform mat4 model;
uniform mat4 view;
uniform mat4 projection;
void main()
{
gl_Position = projection * view * model * vec4(aPos.x, aPos.y, aPos.z, 1.0);
vertexColor = vec4(0.5, 0.2, 0.5, 1.0);
TexCoord = vec2(aTexCoord.x, aTexCoord.y);
}
fragment shader:
#version 330 core
out vec4 FragColor;
in vec4 vertexColor;
in vec2 TexCoord;
uniform sampler2D sphere_texture;
void main()
{
FragColor = texture(sphere_texture, TexCoord);
}
I am not using any lighting conditions.
If I use FragColor = vec4(TexCoord.x, TexCoord.y, 0.0f, 1.0f); in the fragment shader (for debugging purposes), I get a nice sphere.
I am using this as texture:
That image of the tennis ball that you linked reveals the problem. I'm glad you ultimately provided it.
Your image is a four-channel PNG with transparency (an alpha channel). There are transparent pixels all around the outside of the yellow part of the ball that have (R,G,B,A) = (0, 0, 0, 0), so if you're ignoring the A channel then (R, G, B) will be (0, 0, 0) = black.
Here are just the Red, Green, and Blue (RGB) channels:
And here is just the Alpha (A) channel.
The important thing to notice is that the circle of the ball does not fill the square. There is a significant margin of 53 pixels of black from the extent of the ball to the edge of the texture. We can calculate the radius of the ball from this. Half the width is 1000 pixels, of which 53 pixels are not used. The ball's radius is 1000-53, which is 947 pixels. Or about 94.7% of the distance from the center to the edge of the texture. The remaining 5.3% of the distance is black.
Side note: I also notice that your ball doesn't quite reach 100% opacity. The yellow part of the ball has an alpha channel value of 254 (of 255), meaning 99.6% opaque. The white lines and the shiny hot spot do actually reach 100% opacity, giving it sort of a Death Star look. ;)
To fix your problem, there's an intuitive approach (which may not quite work on its own) and then there are approaches that will. Here are a few things you can do:
Intuitive Solution:
This won't quite get you 100% there.
1) Resize the ball to fill the texture. Use image editing software to enlarge the ball to fill the texture, or to trim off the black pixels. This will just make more efficient use of pixels, for one, but it will ensure that there are useful pixels being sampled at the boundary. You'll probably want to expand the image to be slightly larger than 100%. I'll explain why below.
2) Remap your texture coordinates to only extend to 94.7% of the radius of the ball. (Similar to approach 1, but doesn't require image editing). This just uses coordinates that actually correspond to the image you provided. Your x and y coordinates need to be scaled about the center of the image and reduced to about 94.7%.
x2 = 0.5 + (x - 0.5) * 0.947;
y2 = 0.5 + (y - 0.5) * 0.947;
Suggested Solution:
This will ensure no more black.
3) Fill the "black" portion of your ball texture with a less objectionable colour - probably the colour that is at the circumference of the tennis ball. This ensures that any texels that are sampled at exactly the edge of the ball won't be linearly combined with black to produce an unsightly dark-but-not-quite-black band, which is almost the problem you have right now anyway. You can do this in two ways. A) Image editing software. Remove the transparency from your image and matte it against a dark yellow colour. B) Use the shader to detect pixels that are outside the image and replace them with a border colour (this is clever, but probably more trouble than it's worth.)
Different Texture Coordinates
The last thing you can do is avoid this degenerate texture-mapping coordinate problem altogether. At the equator you're not really sure which pixels to sample: the black (transparent) pixels or the coloured pixels of the ball. The discrete nature of square pixels is fighting against the polar nature of your texture map, and you'll never find the exact colour you need near the edge to produce a continuous, seamless map. Instead, you can use a different coordinate system. I hope you're not attached to how that ball looks, because let me introduce you to the equirectangular projection. It's the same projection you can naively use to map the globe of the Earth to the typical rectangular map of the world you're likely familiar with, where the north and south poles get all the distortion but the equatorial regions look pretty good.
Here's your image mapped to equirectangular coordinates:
Notice that black bar at the bottom...we're onto something! That black bar is actually exactly what appears around the equator of your ball with your current texture mapping coordinate system. But with this coordinate system, you can see easily that if we just remapped the ball to fill the square we'd completely eliminate any transparent pixels at all.
It may be inconvenient to work in this coordinate system, but you can transform your image in Photoshop using Filter > Distort > Polar Coordinates... > Polar to Rectangular.
Sigismondo's answer already suggests how to adjust your texture mapping coordinates to do this.
And finally, here's a texture that is both enlarged to fill the texture space, and remapped to equirectangular coordinates. No black bars, minimal distortion. But you'll have to use Sigismondo's texture mapping coordinates. Again, this may not be for you, especially if you're attached to the idea of the direct projection for your texture (i.e.: if you don't want to manipulate your tennis ball image and you want to use that projection.) But if you're willing to remap your data, you can rest easy that all the black pixels will be gone!
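If you would rather compute the equirectangular lookup per fragment instead of per vertex, here is a minimal sketch, assuming a hypothetical vNormal varying carrying the unit-sphere position and the same z = cos(phi) convention as the question's generation code:
#version 330 core
in vec3 vNormal; // hypothetical: unit-sphere position passed from the vertex shader
out vec4 FragColor;
uniform sampler2D sphere_texture;
const float PI = 3.14159265358979;
void main()
{
vec3 n = normalize(vNormal);
float u = atan(n.y, n.x) / (2.0 * PI) + 0.5; // longitude (theta) mapped to [0, 1]
float v = acos(clamp(n.z, -1.0, 1.0)) / PI; // colatitude (phi) mapped to [0, 1]
FragColor = texture(sphere_texture, vec2(u, v));
}
Computing the coordinates per fragment also avoids the interpolated wrap-around seam you would get with per-vertex coordinates (though mipmapping can still produce a one-column artifact where u wraps).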
Good luck! Feel free to ask for clarifications.
I cannot test it, since the code is incomplete, but from a rough look I have spotted this problem:
sphere_texcoords.push_back((glm::vec2((x + 1.0) / 2.0, (y + 1.0) / 2.0)));
The texture coordinates should not be computed from x and y, which are:
const float x = cos(theta) * sin(phi);
const float y = sin(theta) * sin(phi);
but from the angles theta and phi, i.e. from stacks and slices. This could work better (untested):
sphere_texcoords.push_back(glm::vec2(s,sl));
being already defined:
float s = (float)i / (float) stacks;
float sl = (float)j / (float) slices;
Furthermore, in your code you are treating the first and the last "slices" of the sphere like all the others... Shouldn't they be handled differently? This seems quite odd to me, but I don't know whether your implementation is just a simpler one that works fine.
Compare with this explanation, for example: http://www.songho.ca/opengl/gl_sphere.html

OpenGL GLSL bloom effect bleeds on edges

I have a framebuffer called "FBScene" that renders to a texture TexScene.
I have a framebuffer called "FBBloom" that renders to a texture TexBloom.
I have a framebuffer called "FBBloomTemp" that renders to a texture TexBloomTemp.
First I render all my blooming / glowing objects to FBBloom and thus into TexBloom. Then I play ping pong with FBBloom and FBBloomTemp, alternatingly blurring horizontally / vertically to get a nice bloom texture.
Then I pass the final "TexBloom" texture and the TexScene to a screen shader that draws a screen filling quad with both textures:
gl_FragColor = texture(TexBloom, uv) + texture(TexScene, uv);
The problem is:
While blurring the images, the bloom effect bleeds into the opposite edges of the screen if the glowing object is too close to the screen border.
This is my blur shader:
vec4 color = vec4(0.0);
// the 1.3333333 offset and the weights below implement a 5-tap Gaussian blur
// with only 3 fetches, exploiting linear filtering between texel pairs
vec2 off1 = vec2(1.3333333333333333) * direction;
vec2 off1DivideByResolution = off1 / resolution;
vec2 uvPlusOff1 = uv + off1DivideByResolution;
vec2 uvMinusOff1 = uv - off1DivideByResolution;
color += texture(image, uv) * 0.29411764705882354; // center tap
color += texture(image, uvPlusOff1) * 0.35294117647058826; // combined taps +1 and +2
color += texture(image, uvMinusOff1) * 0.35294117647058826; // combined taps -1 and -2
gl_FragColor = color;
I think I need to prevent uvPlusOff1 and uvMinusOff1 from being outside of the -1 and +1 uv range. But I don't know how to do that.
I tried to clamp the uv values at the gap in the code above with:
float px = clamp(uvPlusOff1.x, -1, 1);
float py = clamp(uvPlusOff1.y, -1, 1);
float mx = clamp(uvMinusOff1.x, -1, 1);
float my = clamp(uvMinusOff1.y, -1, 1);
uvPlusOff1 = vec2(px, py);
uvMinusOff1 = vec2(mx, my);
But it did not work as expected. Any help is highly appreciated.
Bleeding to the other side of the screen usually happens when the wrap-mode is set to GL_REPEAT. Set it to GL_CLAMP_TO_EDGE and it shouldn't happen anymore.
Edit - To explain a little bit more why this happens in your case: a texture coordinate of [1,1] refers to the very corner of the corner texel. When linear filtering is enabled, a lookup at this location reads four texels around that corner, and with a repeating texture three of them come from the opposite sides of the screen. If you want to prevent the problem manually, you have to clamp to the range [0.5/texture_size, 1 - 0.5/texture_size], i.e. keep every sample at least half a texel away from the edge.
I'm also not sure why you even clamp to [-1, 1], because texture coordinates usually range from [0, 1]. Negative values will be outside of the texture and are handled by the wrap mode.
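If you do want the manual route as well, here is a minimal sketch of that half-texel clamp inside the blur shader, reusing the resolution uniform and the uv variables from the question:
vec2 halfTexel = 0.5 / resolution; // half a texel, in uv units
// keep both samples inside the texture so linear filtering never reads wrapped texels
uvPlusOff1 = clamp(uvPlusOff1, halfTexel, 1.0 - halfTexel);
uvMinusOff1 = clamp(uvMinusOff1, halfTexel, 1.0 - halfTexel);
color += texture(image, uv) * 0.29411764705882354;
color += texture(image, uvPlusOff1) * 0.35294117647058826;
color += texture(image, uvMinusOff1) * 0.35294117647058826;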

Why is my frag shader casting long shadows horizontally and short shadows vertically?

I have the following fragment shader:
#version 330
layout(location=0) out vec4 frag_colour;
in vec2 texelCoords;
uniform sampler2D uTexture; // the color
uniform sampler2D uTextureHeightmap; // the heightmap
uniform float uSunDistance = -10000000.0; // really far away vertically
uniform float uSunInclination; // height from the heightmap plane
uniform float uSunAzimuth; // clockwise rotation point
uniform float uQuality; // used to determine number of steps and steps size
void main()
{
vec4 c = texture(uTexture,texelCoords);
vec2 textureD = textureSize(uTexture,0);
float d = max(textureD.x,textureD.y); // use the largest dimension to determine stepsize etc
// position the sun in the centre of the screen and convert from spherical to cartesian coordinates
vec3 sunPosition = vec3(textureD.x/2,textureD.y/2,0) + vec3( uSunDistance*sin(uSunInclination)*cos(uSunAzimuth),
uSunDistance*sin(uSunInclination)*sin(uSunAzimuth),
uSunDistance*cos(uSunInclination) );
float height = texture2D(uTextureHeightmap, texelCoords).r; // starting height
vec3 direction = normalize(vec3(texelCoords,height) - sunPosition); // sunlight direction
float sampleDistance = 0;
float samples = d*uQuality;
float stepSize = 1.0 / ((samples/d) * d);
for(int i = 0; i < samples; i++)
{
sampleDistance += stepSize; // increase the sample distance
vec3 newPoint = vec3(texelCoords,height) + direction * sampleDistance; // get the coord for the next sample point
float newHeight = texture2D(uTextureHeightmap,newPoint.xy).r; // get the height of that sample point
// put it in shadow if we hit something that is higher than our starting point AND is heigher than the ray we're casting
if(newHeight > height && newHeight > newPoint.z)
{
c *= 0.5;
break;
}
}
frag_colour = c;
}
The purpose is for it to cast shadows based on a heightmap. Pretty nifty, and the results look good.
However, there's a problem where the shadows appear longer when they are horizontal compared to vertical. If I change the window size to be taller than it is wide, I get the opposite effect, i.e. the shadows cast longer in the longer dimension.
This tells me that it's to do with the way I'm stepping in the above shader, but I can't tell the problem.
To illustrate, here is the result with a uSunAzimuth that produces a horizontally cast shadow:
And here is the exact same code with a uSunAzimuth for a vertical shadow:
It's not very pronounced in these low-resolution images, but at larger resolutions the effect gets more exaggerated. Essentially, when you measure how the shadow casts across all 360 degrees of azimuth, it sweeps out an ellipse instead of a circle.
The shadow fragment shader operates on a "snapshot" of the viewport. When your scene is rendered and this "snapshot" is generated, the vertex positions are transformed by the projection matrix. The projection matrix describes the mapping from the 3D points of a scene to the 2D points of the viewport, and it takes into account the aspect ratio of the viewport.
(See Both depth buffer and triangle face orientation are reversed in OpenGL,
and Transform the modelMatrix.)
Because of this, the height map (uTextureHeightmap) represents a rectangular field of view that depends on the aspect ratio.
But the texture coordinates which you use to access the height map describe a square in the range (0, 0) to (1, 1).
This mismatch must be balanced by scaling with the aspect ratio.
vec3 direction = ....;
float aspectRatio = textureD.x / textureD.y;
direction.xy *= vec2( 1.0/aspectRatio, 1.0 );
I just needed to adjust the direction slightly.
float aspectCorrection = textureD.x / textureD.y;
...
vec3 direction = normalize(vec3(texelCoords,height) - sunPosition);
direction.y *= aspectCorrection;

OpenGL screen postprocessing effects [closed]

I've built a nice music visualizer using OpenGL in Java. It already looks pretty neat, but I've thought about adding some post-processing to it. At the moment, it looks like this:
There is already a framebuffer recording the output, so I have the texture available. Now I wonder if someone has ideas for some effects. The current fragment shader looks like this:
#version 440
in vec3 position_FS_in;
in vec2 texCoords_FS_in;
out vec4 out_Color;
//the texture of the last Frame by now exactly the same as the output
uniform sampler2D textureSampler;
//available data:
//the average height of the lines seen in the screenshot, ranging from 0 to 1
uniform float mean;
//the array of heights of the lines seen in the screenshot
uniform float music[512];
void main()
{
vec4 texColor = texture(textureSampler, texCoords_FS_in);
//insert post processing here
out_Color = texColor;
}
Most post processing effects vary with time, so it is common to have a uniform that varies with the passage of time. For example, a "wavy" effect might be created by offsetting texture coordinates using sin(elapsedSec * wavyRadsPerSec + (PI * gl_FragCoord.y * 0.5 + 0.5) * wavyCyclesInFrame).
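As a concrete sketch of that wavy effect in the question's shader, replacing the body of main() and assuming a hypothetical elapsedSec uniform fed from the application (the constants are arbitrary):
uniform float elapsedSec; // hypothetical: seconds since startup
const float PI = 3.14159265358979;
void main()
{
float wavyRadsPerSec = 2.0; // wave speed
float wavyCyclesInFrame = 3.0; // vertical wave count on screen
float amplitude = 0.01; // horizontal displacement, in texture coordinates
vec2 uv = texCoords_FS_in;
uv.x += amplitude * sin(elapsedSec * wavyRadsPerSec + uv.y * wavyCyclesInFrame * 2.0 * PI);
out_Color = texture(textureSampler, uv);
}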
Some "postprocessing" effects can be done very simply, for example, instead of clearing the back buffer with glClear you can blend a nearly-black transparent quad over the whole screen. This will create a persistence effect where the past frames fade to black behind the current one.
A directional blur can be implemented by taking multiple samples at various distances from each point, and weighting the closer ones more strongly and summing. If you track the motion of a point relative to the camera position and orientation, it can be made into a motion blur implementation.
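Here is a sketch of such a directional blur in the question's shader, with a hypothetical dir uniform giving the blur direction and step length in texture coordinates:
uniform vec2 dir; // hypothetical: blur direction and step length, in texture coordinates
void main()
{
const int TAPS = 8;
vec4 sum = vec4(0.0);
float totalWeight = 0.0;
for (int i = 0; i < TAPS; ++i) {
float w = 1.0 - float(i) / float(TAPS); // closer samples weighted more strongly
sum += texture(textureSampler, texCoords_FS_in - dir * float(i)) * w;
totalWeight += w;
}
out_Color = sum / totalWeight;
}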
Color transformations are very simple as well, simply treat the RGB as though they are the XYZ of a vector, and do interesting transformations on it. Sepia and "psychedelic" colors can be produced this way.
You might find it helpful to convert the color into something like HSV, do transformations on that representation, and convert it back to RGB for the framebuffer write. You could affect hue, saturation, for example, fading to black and white, or intensifying the color saturation smoothly.
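A sketch of that HSV round trip: the two conversion functions below are the compact GLSL versions commonly passed around (an assumption here, not part of the original answer), and the snippet at the end shows a hue shift driven by the question's existing mean uniform.
vec3 rgb2hsv(vec3 c)
{
vec4 K = vec4(0.0, -1.0 / 3.0, 2.0 / 3.0, -1.0);
vec4 p = mix(vec4(c.bg, K.wz), vec4(c.gb, K.xy), step(c.b, c.g));
vec4 q = mix(vec4(p.xyw, c.r), vec4(c.r, p.yzx), step(p.x, c.r));
float d = q.x - min(q.w, q.y);
float e = 1.0e-10;
return vec3(abs(q.z + (q.w - q.y) / (6.0 * d + e)), d / (q.x + e), q.x);
}
vec3 hsv2rgb(vec3 c)
{
vec4 K = vec4(1.0, 2.0 / 3.0, 1.0 / 3.0, 3.0);
vec3 p = abs(fract(c.xxx + K.xyz) * 6.0 - K.www);
return c.z * mix(K.xxx, clamp(p - K.xxx, 0.0, 1.0), c.y);
}
// inside main(), after sampling texColor:
vec3 hsv = rgb2hsv(texColor.rgb);
hsv.x = fract(hsv.x + mean * 0.25); // hue rotation driven by the music's mean level
hsv.y = clamp(hsv.y * 1.2, 0.0, 1.0); // gentle saturation boost
out_Color = vec4(hsv2rgb(hsv), texColor.a);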
A "smearing into the distance" effect can be done by blending the framebuffer onto the framebuffer, by reading from texcoord that is slightly scaled up from gl_FragCoord, like texture(textureSampler, (gl_FragCoord * 1.01).xy).
On that note, you should not need those texture coordinate attributes, you can use gl_FragCoord to find out where you are on the screen, and use (an adjusted copy of) that for your texture call.
Have a look at a few shaders on GLSLSandbox for inspiration.
I have done a simple emulation of the trail effect on GLSLSandbox. In the real one, the loop would not exist; it would take one sample from a small offset. The "loop" effect would happen by itself because its input includes the output from the last frame. To emulate having a texture of the last frame, I simply made it so I can calculate what the other pixel is. You would read the last-frame texture instead of calling something like pixelAt when doing the trail effect.
You can use your music data (the wave) instead of my faked sine wave: use uv.x, scaled appropriately, to select an index.
GLSL
#ifdef GL_ES
precision mediump float;
#endif
uniform float time;
uniform vec2 mouse;
uniform vec2 resolution;
const float PI = 3.14159265358979323;// lol ya right, but hey, I memorized it
vec4 pixelAt(vec2 uv)
{
vec4 result;
float thickness = 0.05;
float movementSpeed = 0.4;
float wavesInFrame = 5.0;
float waveHeight = 0.3;
float point = (sin(time * movementSpeed +
uv.x * wavesInFrame * 2.0 * PI) *
waveHeight);
const float sharpness = 1.40;
float dist = 1.0 - abs(clamp((point - uv.y) / thickness, -1.0, 1.0));
float val;
float brightness = 0.8;
// All of the threads go the same way so this if is easy
if (sharpness != 1.0)
dist = pow(dist, sharpness);
dist *= brightness;
result = vec4(vec3(0.3, 0.6, 0.3) * dist, 1.0);
return result;
}
void main( void ) {
vec2 fc = gl_FragCoord.xy;
vec2 uv = fc / resolution - 0.5;
vec4 pixel;
pixel = pixelAt(uv);
// I can't really do postprocessing in this shader, so instead of
// doing the texturelookup, I restructured it to be able to compute
// what the other pixel might be. The real code would lookup a texel
// and there would be one sample at a small offset, the feedback
// replaces the loop.
const float e = 64.0, s = 1.0 / e;
for (float i = 0.0; i < e; ++i) {
pixel += pixelAt(uv + (uv * (i*s))) * (0.3-i*s*0.325);
}
pixel /= 1.0;
gl_FragColor = pixel;
}

Texture repeating and clamping in shader

I have the following fragment and vertex shader, in which I repeat a texture:
//Fragment
vec2 texcoordC = gl_TexCoord[0].xy;
texcoordC *= 10.0;
texcoordC.x = mod(texcoordC.x, 1.0);
texcoordC.y = mod(texcoordC.y, 1.0);
texcoordC.x = clamp(texcoordC.x, 0.0, 0.9);
texcoordC.y = clamp(texcoordC.y, 0.0, 0.9);
vec4 texColor = texture2D(sampler, texcoordC);
gl_FragColor = texColor;
//Vertex
gl_TexCoord[0] = gl_MultiTexCoord0;
colorC = gl_Color.r;
gl_Position = ftransform();
ADDED: After this process, I fetch the texture coordinates and use a texture pack:
vec4 textureGet(vec2 texcoord) {
// Tile is 1.0/16.0 part of texture, on x and y
float tileSp = 1.0 / 16.0;
vec4 color = texture2D(sampler, texcoord);
// Get tile x and y by red color stored
float texTX = mod(color.r, tileSp);
float texTY = color.r - texTX;
texTX /= tileSp;
// Testing tile
texTX = 1.0 - tileSp;
texTY = 1.0 - tileSp;
vec2 savedC = color.yz;
// This if else statement can be ignored. I use time to move the texture. Seams show without this as well.
if (color.r > 0.1) {
savedC.x = mod(savedC.x + sin(time / 200.0 * (color.r * 3.0)), 1.0);
savedC.y = mod(savedC.y + cos(time / 200.0 * (color.r * 3.0)), 1.0);
} else {
savedC.x = mod(savedC.x + time * (color.r * 3.0) / 1000.0, 1.0);
savedC.y = mod(savedC.y + time * (color.r * 3.0) / 1000.0, 1.0);
}
vec2 texcoordC = vec2(texTX + savedC.x * tileSp, texTY + savedC.y * tileSp);
vec4 res = texture2D(texturePack, texcoordC);
return res;
}
However, I have trouble with seams showing (of 1 pixel, it seems). If I leave out texcoordC *= 10.0 no seams are shown (or barely any); if I leave it in, they appear. I clamp the coordinates (I even tried lower than 1.0 and bigger than 0.0) to no avail. I have a strong feeling it is a rounding error somewhere, but I have no idea where. ADDED: Something to note is that in the actual case I convert the texcoordC x and y to 8-bit floats. I think the cause lies here; I added another shader describing this above.
The case I show is a little more complicated in reality, so there is no use for me to do this outside the shader(!). I added the previous question which explains a little about the case.
EDIT: As you can see, the natural texture span is divided by 10 and the texture is repeated (10 times). The seams appear at the border of every repeated texture. I also added a screenshot; the seams are the very thin lines (~1 pixel). The picture is a cut-out from a screenshot, not scaled. The repeated texture is 16x16, with 256 subpixels total.
EDIT: This is a follow-up to this question, although all necessary info should be included here.
The last picture has no time offset applied.
Looking at the render of the UV coordinates, they are being filtered, which will cause the same issue as in your previous question, but on a smaller scale. What is happening is that by sampling the UV coordinate texture at a point between two discontinuous values (i.e. two adjacent points where the texture coordinates wrapped), you get an interpolated value which isn't in the right part of the texture. Thus the boundary between texture tiles is a mess of pixels from all over that tile.
You need to get the mapping 1:1 between screen pixels and the captured UV values. Using nearest sampling might get you some of the way there, but it should be possible to do without using that, if you have the right texture and pixel coordinates in the first place.
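One way to guarantee an unfiltered 1:1 read of the captured UV texture, assuming GLSL 1.30 or later is available, is texelFetch, which addresses texels by integer pixel coordinates and bypasses filtering entirely:
// read the captured UV texture exactly 1:1 with screen pixels, no filtering
vec4 uvTexel = texelFetch(sampler, ivec2(gl_FragCoord.xy), 0);
vec2 savedC = uvTexel.yz; // same channel packing as in textureGet() above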
Secondly, you may find you get bleeding effects due to the way you are doing the texture atlas lookup, as you don't account for the way texels are sampled. This will be amplified if you use any mipmapping. Ideally you need a border, and possibly some massaging of the coordinates to account for half-texel offsets. However I don't think that's the main issue you're seeing here.
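For that atlas-bleeding point, here is a minimal sketch of keeping each lookup at least half a texel inside its tile, reusing the question's texturePack sampler and 16x16 tile layout; the packSize uniform is hypothetical:
uniform vec2 packSize; // hypothetical: texture pack size in pixels, e.g. vec2(256.0)
vec4 atlasLookup(vec2 tileOrigin, vec2 innerUV) {
float tileSp = 1.0 / 16.0; // one tile's span in uv units
vec2 halfTexel = 0.5 / packSize; // half a texel in uv units
// clamp so bilinear filtering never reads the neighbouring tile
vec2 uv = tileOrigin + clamp(innerUV * tileSp, halfTexel, vec2(tileSp) - halfTexel);
return texture2D(texturePack, uv);
}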