Repeating part of texture over another texture - glsl

So I'm trying to repeat part of one texture over another in GLSL, the first step in a grander scheme.
I have a 2048x2048 image with three textures packed into the top-left corner, each 512x512. For testing purposes I'm trying to just repeatedly draw the first one.
//get coord within the smaller 512x512 sub-texture
vec2 coord = vec2(int(gl_TexCoord[0].s) % 512, int(gl_TexCoord[0].t) % 512);
//grab color from it and return it
vec4 fragment = texture2D(textures, coord);
gl_FragColor = fragment;
It seems to grab only a single texel; I get one color back from the texture and everything ends up grey. Anyone know what's off?

Unless that's a rectangle texture (which it isn't, since you're using texture2D), your texture coordinates are normalized. That means the range [0, 1] maps to the entire range of the texture: 0.5 always means halfway, whether for a 256-texel texture or an 8192-texel one.
Therefore, you need to stop passing non-normalized texture coordinates (texel values). Pass normalized texture coordinates and adjust those.
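As an illustration of that advice, here is a minimal fragment-shader sketch in the same legacy GLSL style as the question, assuming the 512x512 tile sits at the top-left of the 2048x2048 atlas; the repeat count of 4 is an arbitrary assumption, not from the question:
uniform sampler2D textures;

void main()
{
    // The 512x512 tile covers 512/2048 = 0.25 of the atlas per axis.
    const vec2 tileScale = vec2(512.0 / 2048.0);
    // Wrap the incoming coords into [0, 1), then squeeze the result
    // into the tile's normalized sub-rectangle at the top-left.
    vec2 wrapped = fract(gl_TexCoord[0].st * 4.0); // 4.0 = repeats (an assumption)
    gl_FragColor = texture2D(textures, wrapped * tileScale);
}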

Related

OpenGL - tex coord in fragment shader outside of specified range

I'm trying to draw a rectangle with a texture in OpenGL. I'm simply trying to render an entire .jpg image, so I specify the texture coordinates as [0, 0] to [1, 1] in the vertex buffer. I expect all the interpolated texture coordinates in the fragment shader to be between [0, 0] and [1, 1], however, depending on where the texture is drawn, I sometimes get a texture coordinate that is less than 0 (I know this is the case because I tried outputting red from the fragment shader if the tex coord is less than 0).
How come I get an interpolated value outside of the specified range? I currently visualize vertices/fragments like the following image (https://learnopengl.com/Advanced-OpenGL/Anti-Aliasing):
If I imagine a rectangle instead, then if the pixel sample is inside the rectangle, then the interpolated texture coord must be at least 0, since the very left of the rectangle represents 0, right? So how do I end up with a value less than 0?
Edit: after some basic testing, it looks like the fragment shader is called if a shape simply intersects that pixel, not only if the pixel's sample point is inside the shape. I tested this by placing the start of the rectangle slightly before and slightly after the middle of a pixel: when it starts slightly before the middle, I don't get a negative value, but when it starts slightly after the middle, I do. This contradicts what the website I linked says - perhaps it's driver-dependent?
Edit: the previous test I did was with multisampling on. If I turn multisampling off, then even if the shape is past the middle, I don't get a negative value...
Turns out I just needed to keep reading the article I linked:
This is where multisampling becomes interesting. We determined that 2 subsamples were covered by the triangle so the next step is to determine a color for this specific pixel. Our initial guess would be that we run the fragment shader for each covered subsample and later average the colors of each subsample per pixel. In this case we'd run the fragment shader twice on the interpolated vertex data at each subsample and store the resulting color in those sample points. This is (fortunately) not how it works, because this basically means we need to run a lot more fragment shaders than without multisampling, drastically reducing performance.
How MSAA really works is that the fragment shader is only run once per pixel (for each primitive) regardless of how many subsamples the triangle covers. The fragment shader is run with the vertex data interpolated to the center of the pixel and the resulting color is then stored inside each of the covered subsamples. Once the color buffer's subsamples are filled with all the colors of the primitives we've rendered, all these colors are then averaged per pixel resulting in a single color per pixel. Because only two of the 4 samples were covered in the previous image, the color of the pixel was averaged with the triangle's color and the color stored at the other 2 sample points (in this case: the clear color) resulting in a light blue-ish color.
So I was getting a negative value because the fragment shader was being run on a pixel that had at least one of its sub-sample points covered by the shape, but the shape was slightly after the mid-point of the pixel, and since "the fragment shader is run with the vertex data interpolated to the center of the pixel", I was getting a negative value.
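One practical consequence: if those extrapolated values are a problem (e.g. they index outside an atlas region), GLSL's centroid qualifier asks for interpolation at a covered sample instead of the pixel center. A minimal sketch of the declarations (the variable name is a placeholder); the qualifier must match between stages:
// vertex shader (GLSL 1.30+; older versions spell this "centroid varying")
centroid out vec2 vTexCoord; // interpolated at a covered sample, never extrapolated

// fragment shader
centroid in vec2 vTexCoord;  // stays inside the primitive even under MSAA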

Why do we need texture filtering in OpenGL?

When mapping a texture onto geometry, we can choose the filtering method: GL_NEAREST or GL_LINEAR.
In the examples, we have a texture coordinate surrounded by texels, and it's explained how each algorithm chooses the fragment's color; for example, GL_LINEAR interpolates all the neighboring texels based on their distance from the texture coordinate.
Isn't each texture coordinate essentially the fragment's position, which is mapped to a pixel on screen? So how can these coordinates be smaller than texels, which are essentially pixels and the same size as fragments?
A (2D) texture can be looked at as a function t(u, v), whose output is a "color" value. This is a pure function, so it will return the same value for the same u and v values. The value comes from a lookup table stored in memory, indexed by u and v, rather than through some kind of computation.
Texture "mapping" is the process whereby you associate a particular location on a surface with a particular location in the space of a texture. That is, you "map" a surface location to a location in a texture. As such, the inputs to the texture function t are often called "texture coordinates". Some surface locations may map to the same position on a texture, and some texture positions may not have surface locations mapped to them. It all depends on the mapping
An actual texture image is not a smooth function; it is a discrete function. It has a value at the texel locations (0, 0), and another value at (1, 0), but the value of a texture at (0.5, 0) is undefined. In image space, u and v are integers.
Your picture of a zoomed in part of the texture is incorrect. There are no values "between" the texels, because "between the texels" is not possible. There is no number between 0 and 1 on an integer number line.
However, any useful mapping from surface to the texture function is going to need to happen in a continuous space, not a discrete space. After all, it's unlikely that every fragment will land exactly on a location that maps to an exact integer within a texture. After all, especially in shader-based rendering, a shader can just invent a mapping arbitrarily. The "mapping" could be based on light directions (projective texturing), the elevation of a fragment relative to some surface, or anything a user might want. To a fragment shader, a texture is just a function t(u, v) which can be evaluated to produce a value.
So we really want that function to be in a continuous space.
The purpose of filtering is to create a continuous function t by inventing values in-between the discrete texels. This allows you to declare that u and v are floating-point values, rather than integers. We also get to normalize the texture coordinates, so that they're on the range [0, 1] rather than being based on the texture's size.
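To make "inventing values in between" concrete, here is a sketch of what GL_LINEAR effectively computes, written out by hand with texelFetch; the helper name bilinearSample is mine, and border handling is omitted:
// What GL_LINEAR effectively does, spelled out manually (GLSL 1.30+).
vec4 bilinearSample(sampler2D tex, vec2 uv)
{
    vec2 size = vec2(textureSize(tex, 0));
    // Shift so that texel centers sit at integer coordinates.
    vec2 pos = uv * size - 0.5;
    ivec2 i  = ivec2(floor(pos));
    vec2 f   = fract(pos); // distance past the lower-left texel center
    // Fetch the four surrounding texels (a real version would clamp i to the edges).
    vec4 t00 = texelFetch(tex, i, 0);
    vec4 t10 = texelFetch(tex, i + ivec2(1, 0), 0);
    vec4 t01 = texelFetch(tex, i + ivec2(0, 1), 0);
    vec4 t11 = texelFetch(tex, i + ivec2(1, 1), 0);
    // Blend horizontally, then vertically, weighted by distance.
    return mix(mix(t00, t10, f.x), mix(t01, t11, f.x), f.y);
}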
Texture filtering does not decide what color the fragment should be; that is the fragment shader's job. However, the fragment shader may sample a texture at a given position to get a color. It may return that color directly, or process it further (e.g. apply shading).
Texture filtering happens at sampling time. The texture coordinates are not necessarily perfect pixel positions. E.g., the texture could be the material of a 3D model shown in a perspective view; a fragment may then cover more than a single texel, or less, or it may not be aligned with the texture grid. In all of these cases you need some kind of filtering.
For applications that render a sprite at its original size without any deformation, you usually don't need filtering as you have a 1:1 mapping from screen pixels to texels.

How to wrap texture coordinates manually?

I am using C++ and HLSL and need to have my texture coordinates wrap so that the texture is tiled across a triangle.
After the coordinates are "wrapped" into the 0-1 range they will be rotated, so I can't simply set the texture sampler's AddressU and AddressV properties to wrap: the coordinates need to be wrapped and THEN rotated, which can't happen inside the sampler.
The solution here is simple, just use the fractional part of the texture coordinate and it will wrap.
Here is an example of a pixel shader that will tile the texture 36 times (6 * 6):
input.tex *= 6.0f; //tile 6 times along each axis (6 * 6 = 36 tiles)
input.tex = frac(input.tex); //wraps texCoords to 0-1 range
return shaderTexture.Sample(SampleType, input.tex);
This does tile the texture, but it can create a problem at the border where the texture wraps. The tiles have to divide evenly into the space they are displayed on, or a seam appears where the borders meet. My square that the texture is drawn to is 800x600 pixels, so tiling by 5 divides evenly, but 6 does not and causes seams along the Y axis.
I have tried using the modulus operator input.tex = input.tex % 1 to wrap the coordinates but I get the exact same results. I have also tried changing the texture filtering method and the AddressU and AddressV properties along with countless different methods of debugging.
I had some luck using this code. If the x coordinate is too high it gets set to 0, and if it is too low it gets set to 1.
input.tex *= 6.0f;
input.tex = frac(input.tex);
if (input.tex.x > 0.999f) input.tex.x = 0;
if (input.tex.x < 0.001f) input.tex.x = 1;
return shaderTexture.Sample(SampleType, input.tex);
This only fixes the problem in certain spots though, so it is definitely not a solution.
Here is a picture that shows a texture (left) and what it looks like when wrapped manually (right). You can see that not everywhere the borders touch has this error.
I have also tried not changing the texture coordinates to 0-1 range and rotating them around the center of each tile instead of (0.5, 0.5) but I get identical results. Also my texture coordinates are completely independent of the vertices and are calculated inside the pixel shader.
Anything I have seen relating to this issue has to do with a high value at one pixel followed by a low value at the next, for example u = 0.95 at one pixel and u = 0.03 at the next, which causes the interpolation to run backwards across the texture. But when I rotate my texture coordinates nothing changes at all, even when each tile has a random rotation applied to it. In that case the edges have all sorts of different values bordering each other, not just a high value on the left side and a low value on the right side, yet the area where the seam occurs never changes.
As MuertoExcobito said, the main problem is that at the borders the texture coordinate jumps from 1 to 0 within a single pixel. It is semantically right to say that the entire texture gets averaged into that pixel, but the cause is not interpolation from 1 to 0 across the pixel. The real reason is mipmapping.
As your texture is loaded, mipmaps are generated for it: a chain of levels, each half the size of the one before.
If a minified texture were always sampled at the full-resolution level, you would get oversampling (point-filtering-like artifacts). To fight oversampling, the texture lookup chooses the appropriate mipmap level depending on how much the texture coordinate changes in screen space. In your case the border is a very large change in a very small space, which leads to the smallest possible mipmap being used (which, as you can see, is a small red dot, hence the red border).
Returning to your problem, you should take control of the mipmapping by using the texture lookup method SampleGrad (Docs). To get the current change of the texture coordinate at your pixel you can use the intrinsics ddx and ddy. For an arbitrary variable in your shader, they return how it changes between adjacent pixels (the full explanation would go too deep for this topic). So the following code shouldn't change anything, because it is semantically identical:
input.tex *= 6.0f; //tile 6 times along each axis (6 * 6 = 36 tiles)
input.tex = frac(input.tex); //wraps texCoords to 0-1 range
float2 xchange = ddx(input.tex);
float2 ychange = ddy(input.tex);
return shaderTexture.SampleGrad(SampleType, input.tex, xchange, ychange);
Now you can apply your code that prevents big changes in xchange and ychange, forcing the graphics device to use a higher-resolution mipmap. This should remove your artifacts.
If your texture doesn't need mipmapping, because you render it screen-aligned and its size doesn't lead to oversampling, you can use the simpler alternative SampleLevel (Docs), which takes a parameter that picks a specific mipmap you can determine yourself.
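For readers using OpenGL rather than Direct3D: the same fix translates to GLSL, where dFdx/dFdy and textureGrad correspond to ddx/ddy and SampleGrad. A hedged sketch under that correspondence; the sampler and input names are placeholders:
#version 130
uniform sampler2D tex;  // placeholder name
in vec2 vTexCoord;      // placeholder name
out vec4 fragColor;

void main()
{
    vec2 uv = fract(vTexCoord * 6.0); // wrap to the 0-1 range
    vec2 dx = dFdx(uv);               // screen-space coordinate change
    vec2 dy = dFdy(uv);
    // Zero out the derivative spike at the wrap seam so the lookup
    // doesn't fall to the smallest mipmap there (as the answer suggests).
    if (dx.x < 0.0) dx = vec2(0.0);
    if (dy.y < 0.0) dy = vec2(0.0);
    fragColor = textureGrad(tex, uv, dx, dy);
    // Without mipmapping, textureLod(tex, uv, 0.0) is the GLSL
    // equivalent of SampleLevel(SampleType, input.tex, 0).
}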
The code here causes the entire texture to be sampled within the span of a single pixel. For example, for two adjacent pixels at the seam, one 'u' sample could be 1.0 - eps and the next 0.0 + eps, where eps is a number smaller than the width of a texel. When the output pixels are interpolated, you interpolate from 1.0 to 0.0, sampling the entire texture between those two samples. Averaging the entire texture causes the 'greyness', even though your input texture doesn't actually contain any pixel that is exactly grey.
If you need to rotate the texture coordinates within each range (e.g. 0..1 and 1..2 rotated independently), there are a few ways this could be solved. First, you could change the interpolation from linear to point, which avoids the interpolation between texels. However, if you require bilinear interpolation, this might not be acceptable. In that case, you could construct a 'grid' mesh and map the input texture 0..1 across each tile, with each tile's texture coordinates rotated independently in the shader.
Another possible solution, would be to transform the coordinates to 0..1 space, perform the rotation, and then translate them back into their original space. For example, you would do:
// pseudo-code:
float2 whole;
input.tex = modf(input.tex, whole); // split into fractional and whole parts (HLSL modf wants a matching float type)
input.tex = Rotate(input.tex);      // do whatever rotation is needed
input.tex += whole;                 // restore the original tile offset
This would ensure that the wrapping does not have any discontinuities. Alternatively, you could have your rotation code take into account non-unity space texture coordinates.
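In GLSL terms, and rotating about each tile's center as the original poster needs, the same idea looks roughly like this; the helper name rotateInTile, the angle parameter, and the 6.0 tiling factor are placeholders:
// Rotate the coordinates within each tile, then restore the tile
// offset so the result stays continuous across tile borders.
vec2 rotateInTile(vec2 texCoord, float angle) // hypothetical helper
{
    vec2 tiled = texCoord * 6.0; // 6 tiles per axis, as in the question
    vec2 whole = floor(tiled);   // which tile we are in
    vec2 local = fract(tiled);   // 0-1 position inside the tile
    float c = cos(angle);
    float s = sin(angle);
    local = mat2(c, s, -s, c) * (local - 0.5) + 0.5; // rotate about the tile center
    return (whole + local) / 6.0; // back to continuous coordinate space
}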
Gnietschow posted the answer to this question, but I am going to add an answer that shows exactly how I applied it.
I'm actually not even sure why this works; I just know it does, even with other tiling multiples, various textures, and random rotation.
input.tex *= 6.0f; //tile 6 times along each axis (6 * 6 = 36 tiles)
input.tex = frac(input.tex); //wraps texCoords to 0-1 range
float2 xchange = ddx(input.tex);
float2 ychange = ddy(input.tex);
if (xchange.x < 0) xchange = float2(0, 0); //suppress the derivative spike at the seam
if (ychange.y < 0) ychange = float2(0, 0);
return shaderTexture.SampleGrad(SampleType, input.tex, xchange, ychange);
If you don't need mipmapping at all, the other method Gniet mentioned works fine too:
input.tex *= 6.0f; //tile 6 times along each axis (6 * 6 = 36 tiles)
input.tex = frac(input.tex); //wraps texCoords to 0-1 range
return shaderTexture.SampleLevel(SampleType, input.tex, 0);

Manually change color of framebuffer

I have a scene containing thousands of little planes, set up so that the planes can occlude each other in depth.
The planes are red and green. Now I want to do the following in a shader:
Render all the planes. Wherever a plane is red, subtract 0.5 from the currently bound framebuffer; wherever it is green, add 0.5.
Then, for each pixel of the framebuffer's texture, I should be able to tell: < 0 means more red planes at this pixel, = 0 means the same amount of red and green, and > 0 means more green planes, and I can read off the difference.
This is a very rough simplification of what I need to do, but the core is to change a pixel of a texture/framebuffer depending on the values of the planes influencing the current fragment. This should happen in the fragment shader.
So how do I change the values of the framebuffer from GLSL? Using gl_FragColor just sets a new color; it does not combine with the color written before.
P.S. I'm also going to deactivate depth testing.
The fragment shader cannot read the (old) value from the framebuffer; it just generates a new value to put into the framebuffer. When multiple fragments output to the same pixel (overlapping planes in your example), how those value combine is controlled by the BLEND function of the pipeline.
What you appear to want can be done by setting a custom blending function. The GL_FUNC_ADD blending function allows adding the old value and new value (with weights); what you want is probably something like:
glBlendEquationSeparate(GL_FUNC_ADD, GL_FUNC_ADD);
glBlendFuncSeparate(GL_ONE, GL_ONE, GL_ONE, GL_ONE);
this will simply add each output pixel to the old pixel in the framebuffer (in all four channels; it's not clear from your question whether you're using a 1-channel, 3-channel, or 4-channel framebuffer). Then you just have your fragment shader output 0.5 or -0.5 depending on the plane's color. For this to make sense, you need a framebuffer format that supports values outside the normal [0..1] range, such as GL_RGBA32F or GL_R32F.
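For completeness, a minimal sketch of a matching fragment shader, assuming blending is enabled with glEnable(GL_BLEND), a single-channel GL_R32F attachment, and a uniform that tells the shader the plane's color (the uniform is my assumption; the classification could also come from a texture lookup):
#version 330 core
uniform bool isGreen;  // assumption: set per draw call by the application
out float signedCount; // written to a single-channel GL_R32F attachment

void main()
{
    // +0.5 per green fragment, -0.5 per red one; the additive
    // blend state above accumulates these values per pixel.
    signedCount = isGreen ? 0.5 : -0.5;
}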

How to use GL_REPEAT to repeat only a selection of a texture atlas? (OpenGL)

How can I repeat a selection of a texture atlas?
For example, my sprite (selection) is within the texture coordinates:
GLfloat textureCoords[]=
{
    .1f, .1f,
    .3f, .1f,
    .1f, .3f,
    .3f, .3f
};
Then I want to repeat that sprite N times across a triangle strip (or quad) defined by:
GLfloat vertices[]=
{
    -100.f, -100.f,
    100.f, -100.f,
    -100.f, 100.f,
    100.f, 100.f
};
I know it has something to do with GL_REPEAT and textureCoords going past the range [0,1]. This, however, doesn't work (trying to repeat N = 10):
GLfloat textureCoords[]=
{
    10.1f, 10.1f,
    10.3f, 10.1f,
    10.1f, 10.3f,
    10.3f, 10.3f
};
We're seeing our full texture atlas repeated...
How would I do this the right way?
It can't be done the way it's described in the question. OpenGL's texture coordinate wrap modes only apply to the entire texture. When using an atlas, you're using "sub-textures", so your texture coordinates never come close to 0 and 1, the normal limits where wrapping and clamping occur. There might be extensions to deal with this; I haven't checked.
Normally, to repeat a texture, you'd draw a polygon that is "larger" than your texture implies. For instance, if you had a square texture that you wanted to repeat a number of times (say six) over a bigger area, you'd draw a rectangle that's six times as wide as it is tall. Then you'd set the texture coordinates to (0,0)-(6,1), and the texture mode to "repeat". When interpolating across the polygon, the texture coordinate that goes beyond 1 will, due to repeat being enabled, "wrap around" in the texture, causing the texture to be mapped six times across the rectangle. This is a bit crude to explain without images.
But when you're texturing using just a part of the texture, there's no way to specify a larger texture coordinate in a way that makes OpenGL repeat it inside only the sub-rectangle. None of the texture wrap modes support that kind of operation; they all map to the full [0,1] range, not some arbitrary subset. You basically have two choices: either create a new texture that only has the sprite you need from the existing texture, or write a GLSL fragment program to map the texture coordinates appropriately.
While this may be an old topic, here's how I ended up doing it:
A workaround is to create multiple meshes, glued together, each containing the sub-texture's UVs.
E.g.:
I have a laser texture contained within a larger texture atlas, at U[0.05 - 0.1] & V[0.05 - 0.1].
I would then construct N meshes, each having U[0.05 - 0.1] & V[0.05 - 0.1] coordinates.
(N = length / texture.height, height being the dimension of the texture I would like to repeat. Or more simply: the number of times I want to repeat the texture.)
This solution would be more cost effective than having to reload texture after texture.
Especially if you batch all render calls (as you should).
(OpenGL ES 1.0,1.1,2.0 - Mobile Hardware 2011)
It can be done with a modulo of your tex coords in the shader; the mod repeats your sub-range coords.
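A minimal GLSL sketch of that approach, using the selection from the question ((0.1, 0.1) to (0.3, 0.3)) and N = 10; the sampler and varying names are placeholders:
uniform sampler2D atlas;  // placeholder name
varying vec2 vTexCoord;   // 0-1 across the whole quad

void main()
{
    const vec2 tileOrigin = vec2(0.1, 0.1); // sprite's lower-left corner in the atlas
    const vec2 tileSize   = vec2(0.2, 0.2); // sprite extent: 0.3 - 0.1 per axis
    // Repeat 10 times, wrap into 0-1, then remap into the sprite's rectangle.
    vec2 uv = tileOrigin + fract(vTexCoord * 10.0) * tileSize;
    gl_FragColor = texture2D(atlas, uv);
}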
I ran into your question while working on the same issue, although in HLSL and DirectX. I also needed mipmapping, and had to solve the related texture bleeding too.
I solved it this way:
min16float4 sample_atlas(Texture2D<min16float4> atlasTexture, SamplerState samplerState, float2 uv, AtlasComponent atlasComponent)
{
    //Get LOD
    //Never wrap these as that will cause the LOD value to jump on wrap
    //xy is left-top, zw is width-height of the atlas texture component
    float2 lodCoords = atlasComponent.Extent.xy + uv * atlasComponent.Extent.zw;
    uint lod = ceil(atlasTexture.CalculateLevelOfDetail(samplerState, lodCoords));

    //Get texture size
    float2 textureSize;
    uint levels;
    atlasTexture.GetDimensions(lod, textureSize.x, textureSize.y, levels);

    //Calculate component size and calculate edge thickness - this is to avoid bleeding
    //Note my atlas components are well behaved, that is they are all power of 2 and mostly similar size, they are tightly packed, no gaps
    float2 componentSize = textureSize * atlasComponent.Extent.zw;
    float2 edgeThickness = 0.5 / componentSize;

    //Calculate texture coordinates
    //We only support wrap for now
    float2 wrapCoords = clamp(wrap(uv), edgeThickness, 1 - edgeThickness);
    float2 texCoords = atlasComponent.Extent.xy + wrapCoords * atlasComponent.Extent.zw;

    return atlasTexture.SampleLevel(samplerState, texCoords, lod);
}
Note the limitation: the mip levels are not blended this way (the LOD is rounded to a whole level), but in our use-case that is completely fine.
Can't be done...