How to wrap texture coordinates manually? - c++

I am using C++ and HLSL and need to have my texture coordinates wrap so that the texture is tiled across a triangle.
After the coordinates are "wrapped" into the 0-1 range they will be rotated, so I can't simply set the sampler's AddressU and AddressV properties to wrap: the coordinates need to be wrapped and THEN rotated, which can't be done inside the sampler.
The solution here is simple, just use the fractional part of the texture coordinate and it will wrap.
Here is an example of a pixel shader that will tile the texture 36 times (6 * 6):
input.tex *= 6.0f; //number of times to tile ^ 2
input.tex = frac(input.tex); //wraps texCoords to 0-1 range
return shaderTexture.Sample(SampleType, input.tex);
This does tile the texture, but it can create a problem at the border where the texture wraps. The tiles have to divide evenly into the space they are being displayed on, or a seam appears where the borders meet. The square that the texture is drawn to is 800x600 pixels, so tiling by 5 divides evenly but 6 does not and causes seams along the Y axis.
I have tried using the modulus operator input.tex = input.tex % 1 to wrap the coordinates but I get the exact same results. I have also tried changing the texture filtering method and the AddressU and AddressV properties along with countless different methods of debugging.
I had some luck using this code. If the x coordinate is too high it gets set to 0, and if it is too low it gets set to 1.
input.tex *= 6.0f;
input.tex = frac(input.tex);
if (input.tex.x > 0.999f) input.tex.x = 0;
if (input.tex.x < 0.001f) input.tex.x = 1;
return shaderTexture.Sample(SampleType, input.tex);
This only fixes the problem in certain spots though, so it is definitely not a solution.
Here is a picture that shows a texture (left) and what it looks like when wrapped manually (right). You can see that the error does not appear everywhere the borders touch.
I have also tried leaving the texture coordinates outside the 0-1 range and rotating them around the center of each tile instead of around (0.5, 0.5), but I get identical results. Also, my texture coordinates are completely independent of the vertices and are calculated inside the pixel shader.
Everything I have seen relating to this issue has to do with a high value at one pixel followed by a low value at the next, for example u = 0.95 at one pixel and u = 0.03 at the next, which causes the sampler to interpolate backwards across the texture. But when I rotate my texture coordinates nothing changes at all, even when each tile has a random rotation applied to it. In that case the edges have all sorts of different values bordering each other, not just a high value on the left side and a low value on the right side, yet the area where the seam occurs never changes.

As MuertoExcobito said, the main problem is that at the borders the texture coordinate jumps from 1 to 0 within a single pixel. Semantically it is right to say that the entire texture gets averaged into this pixel, but that is not caused by interpolating the texture from 1 to 0 across the pixel. The real reason is mipmapping.
Mipmaps are generated for your texture as you load it. This means the texture gets multiple mipmap levels, each half the size of the previous one.
If a texture is minified, sampling only the full-resolution level would lead to oversampling (point-filtering-like artifacts). To fight oversampling, the texture lookup chooses an appropriate mipmap level depending on how the texture coordinate changes in screen space. In your case the borders represent a very large change over a very small area, which leads to the lowest mipmap possible being used (which, as you can see, is a small red dot, and is the reason for the red border).
Returning to your problem, you should take control of the mipmapping by using the texture lookup method SampleGrad (Docs). To get the current rate of change of the texture coordinate at your pixel you can use the intrinsics ddx and ddy. For an arbitrary variable in your shader, they return how it changes relative to adjacent pixels (the full explanation would go too deep for this topic). So the following code shouldn't change anything, because it is semantically identical:
input.tex *= 6.0f; //number of times to tile ^ 2
input.tex = frac(input.tex); //wraps texCoords to 0-1 range
float2 xchange = ddx(input.tex);
float2 ychange = ddy(input.tex);
return shaderTexture.SampleGrad(SampleType, input.tex, xchange, ychange);
Now you can apply your code that prevents big changes in xchange and ychange, to force the graphics device to use a higher-resolution mipmap. This should remove your artifacts.
If your texture doesn't need mipmapping, because you are rendering the texture screen-aligned and the texture size doesn't lead to oversampling, you can use the simpler alternative SampleLevel (Docs). There you pass a parameter that picks a specific mipmap, which you can determine yourself.

The code here causes the entire texture to be sampled within the span of a single pixel. For example, for two adjacent pixels at the seam, one 'u' sample could be 1.0-eps and the next sample 0.0+eps, where eps is a number smaller than the width of a texel. When the output pixels are interpolated, you will interpolate from 1.0 .. 0.0, sampling the entire texture between those two samples. Averaging the entire texture causes the 'greyness', even though your input texture doesn't actually contain any pixels that are exactly grey.
If you need the texture coordinates rotated within each range (e.g. 0..1 and 1..2 rotated independently), there are a few ways this could be solved. First, you could change the interpolation from linear to point, which will avoid the interpolation between texels. However, if you require bilinear interpolation, this might not be acceptable. In that case, you could construct a 'grid' mesh and map the input texture 0..1 across each tile, with each tile's texture coordinates rotated independently in the shader.
Another possible solution would be to transform the coordinates into 0..1 space, perform the rotation, and then translate them back into their original space. For example, you could do:
// pseudo-code:
float2 whole;                       // modf's integer-part output has the same type as its input
input.tex = modf(input.tex, whole); // split into fractional (input.tex) and whole parts
input.tex = Rotate(input.tex);      // do whatever rotation is needed
input.tex += whole;
This would ensure that the wrapping does not have any discontinuities. Alternatively, you could have your rotation code take into account non-unity space texture coordinates.

Gnietschow posted the answer to this question, but I am going to add an answer that shows exactly how I used it.
I'm actually not even sure why this works; I just know it does, even with other tiling multiples, various textures, and random rotation.
input.tex *= 6.0f; //number of times to tile ^ 2
input.tex = frac(input.tex); //wraps texCoords to 0-1 range
float2 xchange = ddx(input.tex);
float2 ychange = ddy(input.tex);
if ((xchange.x < 0)) xchange = float2(0, 0);
if ((ychange.y < 0)) ychange = float2(0, 0);
return shaderTexture.SampleGrad(SampleType, input.tex, xchange, ychange);
If you don't need mipmapping at all, this other method Gniet mentioned works fine too:
input.tex *= 6.0f; //number of times to tile ^ 2
input.tex = frac(input.tex); //wraps texCoords to 0-1 range
return shaderTexture.SampleLevel(SampleType, input.tex, 0);

Related

Getting depth from Float texture in post process

I'm having a bit of trouble with a depth value that I'm storing in a float texture (or rather, I don't understand the values). Essentially I am creating a deferred renderer, and in one of the passes I am storing the depth in the alpha component of a floating-point render target. The code for that shader looks something like this:
Define the clip position as a varying
varying vec4 clipPos;
...
In the vertex shader assign the position
clipPos = gl_Position;
Now in the fragment shader I store the depth:
gl_FragColor.w = clipPos.z / clipPos.w;
This by and large works. When I access this render target in any subsequent shader I can get the depth, i.e. something like this:
float depth = depthMap.w;
Am I right to assume that 0.0 is right in front of the camera and 1 is in the distance? I am doing some fog calculations based on this, but they don't seem to be correct.
fogFactor = smoothstep( fogNear, fogFar, depth );
fogNear and fogFar are uniforms I send to the shader. When fogNear is set to 0, I would have expected a smooth transition of fog from right in front of the camera to its draw distance. However, this is what I see:
When I set fogNear to 0.995, I get something more like what I'm expecting:
Is that correct? It just doesn't seem right to me. (The scale of the geometry is not really small or too large, and neither are the camera near and far planes. All the values are pretty reasonable.)
There are two issues with your approach:
You assume the depth is in the range [0,1], but what you use is clipPos.z / clipPos.w, which is the NDC z coordinate in the range [-1,1]. You might be better off directly writing the window-space z coordinate to your depth texture, which is in [0,1] and is simply gl_FragCoord.z.
The more serious issue is that you assume a linear depth mapping. However, that is not the case: the NDC and window-space z values are not a linear representation of the distance to the camera plane. It is not surprising that everything you see in the screenshot is very close to 1. Typically, fog calculations are done in eye space. However, since you only need the z coordinate here, you could simply store the clip-space w coordinate, since typically that is just -z_eye (look at the last row of your projection matrix). The resulting value will not be in any normalized range, but in the [near,far] range of your projection matrix; specifying fog distances in eye-space units (which are normally identical to world-space units) is more intuitive anyway.
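To make that concrete, here is a minimal old-style GLSL sketch of storing the eye-space distance instead; the varying name eyeDist and the fog-pass coordinate uv are placeholders, not names from the original code.
In the vertex shader, compute the eye-space depth and pass it along:
varying float eyeDist;
void main()
{
    vec4 eyePos = gl_ModelViewMatrix * gl_Vertex;
    eyeDist = -eyePos.z; // distance in front of the camera, in eye/world units
    gl_Position = gl_ProjectionMatrix * eyePos;
}
In the geometry-pass fragment shader, write it to the float render target:
varying float eyeDist;
void main()
{
    gl_FragColor.w = eyeDist;
}
The fog pass can then take fogNear and fogFar directly in eye-space units:
float depth = texture2D(depthMap, uv).w;
float fogFactor = smoothstep(fogNear, fogFar, depth);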

why did the width/height of viewport change from int to float in glViewportIndexed

glViewport: width/height are integers (which are pixels).
But glViewportIndexed has these values as floats. What is the advantage of having them as floats? My understanding is based on the fact that pixels are always integers.
It may look like the glViewport*() calls specify pixel rectangles. But if you look at the details of the OpenGL rendering pipeline, that's not the case. They specify the parameters for the viewport transformation. This is the transformation that maps normalized device coordinates (NDC) to window coordinates.
If x, y, w and h are your specified viewport dimensions, xNdc and yNdc your NDC coordinates, the viewport transformation can be written like this:
xWin = x + 0.5 * (xNdc + 1.0) * w;
yWin = y + 0.5 * (yNdc + 1.0) * h;
In this calculation, xNdc and yNdc are of course floating-point values, in their usual [-1.0, 1.0] range. I do not see any good reason why x, y, w and h should be restricted to integer values in this calculation. This transformation is applied before rasterization, so there is no need to round anything to pixel units.
Not needing integer values for the viewport dimensions could even be practically useful. Say you have a window of size 1000x1000, and you want to render 9 sub-views of equal size in the window. There's no reason for the API to stop you from doing what's most natural: Make each sub-view the size 333.3333x333.3333, and use those sizes for the parameters of glViewport().
If you look at glScissorIndexed() for comparison, you will notice that it still takes integer coordinates. This makes complete sense, because glScissor() does in fact specify a region of pixels in the window, unlike glViewport().
Answering your new questions in comments would have proved difficult, so even though Reto Koradi has already answered your question I will attempt to answer them here.
@AndonM.Coleman, ok got it. But then why does glViewport have x, y, w, h in integers?
Probably because back when glViewport (...) was created, there was no programmable pipeline. Even back then, sub-pixel offsets were sometimes used (particularly when trying to match rasterization coverage rules for things like GL_LINES and GL_TRIANGLES) but they had to be applied to the transformation matrices.
Now you can do the same thing using the viewport transform instead, which is a heck of a lot simpler (4 scalars needed for the viewport) than passing a giant mat4 (16 scalars) into a Geometry Shader.
Does it apply the viewport transformation to all the viewports or only the first viewport?
From the GL_ARB_viewport_array extension specification:
glViewport sets the parameters for all viewports to the same values
and is equivalent (assuming no errors are generated) to:
for (GLuint i = 0; i < GL_MAX_VIEWPORTS; i++)
glViewportIndexedf(i, (GLfloat)x, (GLfloat)y, (GLfloat)w, (GLfloat)h);
@AndonM.Coleman, 2nd question: if VIEWPORT_SUBPIXEL_BITS returns 4, then will gl_FragCoord.xy have values with offsets (0,0), (0.5, 0), (0, 0.5) and (0.5, 0.5)?
If you have 4 bits of sub-pixel precision, that means vertex positions after transformation will be snapped to a grid 1/16th the width of a pixel. GL actually does not require any sub-pixel bits here; in that case your vertex positions after transformation into window space would jump by whole-pixel distances at a time and you would see a lot of "sparklies" as you move anything in the scene.
This animation demonstrates "sparklies"; it originated from here:
See the white dots as the camera moves? If you do not have enough sub-pixel precision when you transform your vertices, the rasterizer has difficulty properly dealing with edges that are supposed to be adjacent. The technical term for this is T-Junction Error, but I am quite fond of the word "sparkly" ;)
As for gl_FragCoord.xy, no that is actually unaffected by your sub-pixel precision during vertex transform. That is the sample location within your fragment (usually aligned to ... + 0.5 as you point out), and it is unrelated to vertex processing.

The result of projMat * viewMat * modelMat * vertPos should result in a screen-space position... right?

And given that, shouldn't all position values that end up being rendered be between the values -1 and 1?
I tried passing said position value as the "color" value from my vertex shader to my fragment shader to see what would happen, and expected a gradient the full way across the screen (at least, where geometry exists).
I would EXPECT the top-right corner of the screen to have color value rgb(1.0, 1.0, ?.?) (? because the z value might vary), and that it would fade towards (0.0, 0.0, ?.?) at the very center of the screen (and anything to the bottom left would have zeroed r and g components because their values would be negative).
But instead, what I'm getting looks like the gradient happens on a smaller scale towards the center of my screen (attached):
Why is this? This makes it look like the geometry position resulting from the composition of my matrices and position is a value between -10ish and 10ish...?
Any ideas what I might be missing?
edit- don't worry about the funky geometry. if it helps, what's being rendered is a quad behind ~100 unit triangles randomly rotated about the origin. debugging stuff.
You are confusing several things here.
projMat * viewMat * modelMat * vPos will transform vPos into clip space, not screen space. However, for using your coordinates as colors you don't even want screen space (which is the pixel coordinate relative to the output window, accessible via gl_FragCoord in the fragment shader); you want [-1,1] normalized device coordinates. You get to NDC by dividing by the w component of your clip-space value.
From the images you get, I can guess that projMat is a perspective projection, since in the orthogonal case clip.w will typically be 1 for all vertices (not necessarily, but very likely), and the results would look more like what you expect.
You can see from your image that x and y (r and g) are zero in the center. However, those values are not limited to the interval [-1,1]; your shader output values are clamped to [0,1], so you don't see much of a gradient. Your z coordinate is >= 1 everywhere, so it does not change at all in the image.
If you used the NDC coordinates as colors, you would indeed see a red gradient in the right half and a green gradient in the upper half, but the areas where the value is below 0 would still be clamped to zero. You would also get some more information in the blue channel (although it is possible that NDC z is <= 0 for your whole scene), but you should be aware of the nonlinear z distortion introduced by the perspective divide.
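For illustration, here is a minimal fragment-shader sketch of that, assuming the vertex shader passes the clip-space position (projMat * viewMat * modelMat * vertPos) in a varying named clipPos; this is not code from the original post:
varying vec4 clipPos; // clip-space position written by the vertex shader
void main()
{
    vec3 ndc = clipPos.xyz / clipPos.w; // perspective divide: NDC in [-1,1]
    gl_FragColor = vec4(ndc * 0.5 + 0.5, 1.0); // remap to [0,1] so negative values are visible instead of clamped
}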

Repeating part of texture over another texture

So I'm trying to replace a part of a texture over another in GLSL, first step in a grand scheme.
So I have an image, 2048x2048, with 3 textures in the top left, each 512x512. For testing purposes I'm trying to just repeatedly draw the first one.
//get coord of smaller texture
coord = vec2(int(gl_TexCoord[0].s)%512,int(gl_TexCoord[0].t)%512);
//grab color from it and return it
fragment = texture2D(textures, coord);
gl_FragColor = fragment;
It seems that it only grabs the same pixel; I get one color from the texture returned to me. Everything ends up grey. Anyone know what's off?
Unless that's a rectangle texture (which it isn't, since you're using texture2D), your texture coordinates are normalized. That means that the range [0, 1] maps to the entire range of the texture. 0.5 always means halfway, whether for a 256-texel texture or an 8192-texel one.
Therefore, you need to stop passing non-normalized texture coordinates (texel values). Pass normalized texture coordinates and adjust those.
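For example, a minimal fragment-shader sketch, assuming the incoming coordinates are normalized over the whole 2048x2048 atlas, the first 512x512 sub-texture occupies the range [0, 0.25] in both s and t, and you want to tile it 4 times (the tile count is only for illustration):
uniform sampler2D textures; // the 2048x2048 atlas sampler from the question
void main()
{
    float subSize = 512.0 / 2048.0; // size of one sub-texture in normalized atlas space (0.25)
    vec2 wrapped = fract(gl_TexCoord[0].st * 4.0); // wrap into [0,1], repeating 4 times
    gl_FragColor = texture2D(textures, wrapped * subSize); // remap into the first 512x512 tile
}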

How to use GL_REPEAT to repeat only a selection of a texture atlas? (OpenGL)

How can I repeat a selection of a texture atlas?
For example, my sprite (selection) is within the texture coordinates:
GLfloat textureCoords[]=
{
.1f, .1f,
.3f, .1f,
.1f, .3f,
.3f, .3f
};
Then I want to repeat that sprite N times to a triangle strip (or quad) defined by:
GLfloat vertices[]=
{
-100.f, -100.f,
100.f, -100.f,
-100.f, 100.f,
100.f, 100.f
};
I know it has something to do with GL_REPEAT and textureCoords going past the range [0,1]. This, however, doesn't work (trying to repeat N = 10):
GLfloat textureCoords[]=
{
10.1f, 10.1f,
10.3f, 10.1f,
10.1f, 10.3f,
10.3f, 10.3f
};
We're seeing our full texture atlas repeated...
How would I do this the right way?
I'm not sure you can do that. I think OpenGL's texture coordinate modes only apply for the entire texture. When using an atlas, you're using "sub-textures", so that your texture coordinates never come close to 0 and 1, the normal limits where wrapping and clamping occurs.
There might be extensions to deal with this, I haven't checked.
EDIT: Normally, to repeat a texture, you'd draw a polygon that is "larger" than your texture implies. For instance, if you had a square texture that you wanted to repeat a number of times (say six) over a bigger area, you'd draw a rectangle that's six times as wide as it is tall. Then you'd set the texture coordinates to (0,0)-(6,1), and the texture mode to "repeat". When interpolating across the polygon, the texture coordinate that goes beyond 1 will, due to repeat being enabled, "wrap around" in the texture, causing the texture to be mapped six times across the rectangle.
This is a bit crude to explain without images.
Anyway, when you're texturing using just a part of the texture, there's no way to specify that larger texture coordinate in a way that makes OpenGL repeat it inside only the sub-rectangle.
None of the texture wrap modes support the kind of operation you are looking for, i.e. they all map to the full [0,1] range, not some arbitrary subset. You basically have two choices: Either create a new texture that only has the sprite you need from the existing texture or write a GLSL pixel program to map the texture coordinates appropriately.
While this may be an old topic, here's how I ended up doing it:
A workaround would be to create multiple meshes, glued together, each containing the subset of the texture UVs.
E.g.:
I have a laser texture contained within a larger texture atlas, at U[0.05 - 0.1] & V[0.05-0.1].
I would then construct N meshes, each having U[0.05-0.1] & V[0.05-0.1] coordinates.
(N = length / texture.height; height being the dimension of the texture I would like to repeat. Or easier: the amount of times I want to repeat the texture.)
This solution would be more cost effective than having to reload texture after texture.
Especially if you batch all render calls (as you should).
(OpenGL ES 1.0,1.1,2.0 - Mobile Hardware 2011)
This can be done with a modulo of your tex-coords in the shader. The mod will repeat your sub-range coordinates.
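A minimal GLSL fragment-shader sketch of that idea, using the selection from the question ([0.1, 0.3] in both axes) and N = 10; the sampler and varying names are assumptions:
uniform sampler2D atlas; // assumed sampler name
varying vec2 vTexCoord; // mesh UVs in [0,1] (assumed varying name)
void main()
{
    vec2 subOrigin = vec2(0.1); // lower-left corner of the selection in atlas space
    vec2 subSize = vec2(0.2); // selection extent: 0.3 - 0.1
    vec2 wrapped = fract(vTexCoord * 10.0); // repeat the selection 10 times
    gl_FragColor = texture2D(atlas, subOrigin + wrapped * subSize);
}
As the HLSL answer below shows, mipmapping and bleeding at the selection edges need extra care on top of this.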
I ran into your question while working on the same issue, although in HLSL and DirectX. I also needed mipmapping and had to solve the related texture bleeding.
I solved it this way:
min16float4 sample_atlas(Texture2D<min16float4> atlasTexture, SamplerState samplerState, float2 uv, AtlasComponent atlasComponent)
{
//Get LOD
//Never wrap these as that will cause the LOD value to jump on wrap
//xy is left-top, zw is width-height of the atlas texture component
float2 lodCoords = atlasComponent.Extent.xy + uv * atlasComponent.Extent.zw;
uint lod = ceil(atlasTexture.CalculateLevelOfDetail(samplerState, lodCoords));
//Get texture size
float2 textureSize;
uint levels;
atlasTexture.GetDimensions(lod, textureSize.x, textureSize.y, levels);
//Calculate component size and calculate edge thickness - this is to avoid bleeding
//Note my atlas components are well behaved, that is they are all power of 2 and mostly similar size, they are tightly packed, no gaps
float2 componentSize = textureSize * atlasComponent.Extent.zw;
float2 edgeThickness = 0.5 / componentSize;
//Calculate texture coordinates
//We only support wrap for now
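//Note: wrap() is a helper from this answer, not an HLSL intrinsic; for plain repeat addressing it is presumably equivalent to frac(uv)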
float2 wrapCoords = clamp(wrap(uv), edgeThickness, 1 - edgeThickness);
float2 texCoords = atlasComponent.Extent.xy + wrapCoords * atlasComponent.Extent.zw;
return atlasTexture.SampleLevel(samplerState, texCoords, lod);
}
Note the limitation is that the mip levels are blended this way, but in our use-case that is completely fine.
Can't be done...