How to (fake) mipmap equirectangular rendering?

When rendering an equirectangular encoded 360 texture, there is usually a lookup like
u = atan(x,z)
v = acos(y)
The equirectangular texture is already heavily prefiltered. Just turning on mipmapping does not work: u is not continuous, and the texture itself is non-uniform in uv. Creating mipmaps with a 2x2 box downsample is also not right for an equirectangular projection.
But assuming 2x2 box-filtered mip levels and the hardware mipmap lookup, is there a good way to compute either the lod or the gradients that makes any sense?
Using just dFdx(v) and dFdy(v) kind of works for small viewports, but there must be a better way?

You can generate the mipmaps with the glGenerateTextureMipmap() function and use the regular texture(tex, uv) lookup; it will already do the job.
You should, however, normalize the uv coordinates so that they are in the [0,1] range. Note that atan(x, z) spans [-pi, pi] while acos(y) spans [0, pi], so the two need different scale factors:
u = atan(x, z) / 6.283185 + 0.5
v = acos(y) / 3.141593
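For reference, a minimal fragment-shader sketch of that lookup (the input direction and sampler names here are assumptions, not from the answer above):
#version 330 core
in vec3 dir;                     // assumed: interpolated view direction
out vec4 fragColor;
uniform sampler2D equirectTex;   // assumed sampler name

const float PI = 3.14159265;

void main() {
    vec3 d = normalize(dir);
    // atan(x, z) is in [-pi, pi]; acos(y) is in [0, pi]
    vec2 uv = vec2(atan(d.x, d.z) / (2.0 * PI) + 0.5,
                   acos(d.y) / PI);
    fragColor = texture(equirectTex, uv);  // hardware selects the mip level
}
Note that the derivative of u still jumps where atan wraps from pi to -pi, which is exactly the seam problem the question asks about.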

Related

Converting an equiangular cubemap to an equirectangular one

I am making a retro-style game with OpenGL, and I want to draw my own cubemaps for it. Here is an example of one:
As you can tell, there is no perspective warping anywhere; each face is fully equiangular. When using this as a cubemap, the result is this:
As you can see, it looks box-y, and not spherical at all. I know of a solution to this, which is to remap each point on the cubemap to a sphere position. I have done this manually by creating a sphere mesh and mapping the cubemap texture onto it (and then rendering that to an environment map), but this is time-consuming and complicated.
I seek a different solution: in my fragment shader, I hope to remap the sampling ray to a sphere position, instead of a cube position. Here is my original fragment shader, without any changes:
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
color = texture(skybox_sampler, cube_edge).rgb;
}
I can get a ray that maps to the sphere by just normalizing cube_edge, but that doesn't change anything, for some reason. After messing around a bit, I tried this mapping, which almost works, but not quite:
vec3 sphere_edge = vec3(cube_edge.x, normalize(cube_edge).y, cube_edge.z);
As you can see, some faces become spherical in nature, whereas the top face warps inwards, instead of outwards.
I also tried the results from this site: http://mathproofs.blogspot.com/2005/07/mapping-cube-to-sphere.html, but the faces were not curved outwards enough.
I have been stuck on this for so long now - if you know how I can change my cube to sphere mapping in my fragment shader, or if that's even possible, please let me know!
As you can tell, there is no perspective warping anywhere; each face is fully equiangular.
This premise is incorrect. You hand-drew some images; this doesn't make them equiangular.
'Equiangular cubemap' (EAC) specifically means a cubemap remapped by this formula (section 2.4):
u = 4/pi * atan(u)
v = 4/pi * atan(v)
Let's recognize first that the term is misleading, because even though EAC aims at reducing the variation in sampling rate, the sampling rate is not constant. In fact no 2d projection of any part of a sphere can truly be equi-angular; this is a mathematical fact.
Nonetheless, we can try to apply this correction. Implemented in a GLSL fragment shader as:
d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
d = atan(d)/atan(1.0);
gives the following result:
Compare it with the uncorrected d:
As you can see the EAC projection shrinks the pixels in the middle by a little bit, and expands them near the corners, so that they cover more equal area.
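As a concrete sketch, that correction drops into the question's fragment shader like this (variable and sampler names follow the shader quoted above; the rest is a sketch, not the answerer's exact code):
#version 400 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    vec3 d = cube_edge;
    // Project onto the unit cube, then apply the EAC remap per component.
    d /= max(abs(d.x), max(abs(d.y), abs(d.z)));
    d = atan(d) / atan(1.0);  // i.e. 4/pi * atan(d)
    color = texture(skybox_sampler, d).rgb;
}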
Instead, it appears that you want a cylindrical projection around the horizon. It can be implemented like so:
d /= length(d.xy);
d.xy /= max(abs(d.x), abs(d.y));
d.xy = atan(d.xy)/atan(1);
Which gives the following result:
However there's no artifact-free way to fit the top/bottom square faces of the cube onto the circular faces of the cylinder -- which is why you see the artifacts there.
Bottom-line: you cannot fit the image that you drew onto a sphere in a visually pleasing way. You should instead re-focus your effort on alternative ways of authoring your environment map. I recommend you try using an equidistant cylindrical projection for the horizon, cap it with solid colors above/below a fixed latitude, and use billboards for objects that cannot be represented in that projection.
Your problem is that the geometry on which the environment is placed is too small. You are not looking at the environment, but at the inside of a small cube in which you are sitting. The environment map should behave as if you are always at the center of the map and the environment is infinitely far away.

I suggest drawing the environment map on the far plane of the viewing frustum. You can do this by setting the z component of the clip-space position equal to the w component in the vertex shader. If you set z to w, you guarantee that the final depth value of the position will be 1.0, which is the depth of the far plane. (You can do that with swizzling: gl_Position = clipPos.xyww.)

It is quite sufficient to draw a cube and wrap the environment around it by looking up the map with the interpolated vertex positions of the cube. In the case of a samplerCube, the 3-dimensional texture coordinate is treated as a direction vector, so you can use the vertex coordinates of the cube directly to look up the texture.
Vertex shader:
cube_edge = inVertex.xyz;
vec4 clipPos = projection * view * vec4(inVertex.xyz, 1.0);
gl_Position = clipPos.xyww;
Fragment shader:
color = texture(skybox_sampler, cube_edge).rgb;
The solution is also explained in detail at LearnOpenGL - Cubemap.
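Filled out into a complete pair of shaders, this looks roughly as follows (a sketch; the attribute and uniform names are assumptions, and the view matrix is assumed to have its translation stripped, e.g. mat4(mat3(view))):
// Vertex shader
#version 330 core
layout(location = 0) in vec3 inVertex;  // a unit cube around the origin
out vec3 cube_edge;
uniform mat4 projection;
uniform mat4 view;
void main(void) {
    cube_edge = inVertex;
    vec4 clipPos = projection * view * vec4(inVertex, 1.0);
    gl_Position = clipPos.xyww;  // force the depth to the far plane
}

// Fragment shader
#version 330 core
in vec3 cube_edge;
out vec3 color;
uniform samplerCube skybox_sampler;
void main(void) {
    color = texture(skybox_sampler, cube_edge).rgb;
}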

How to wrap texture coordinates manually?

I am using C++ and HLSL and need to have my texture coordinates wrap so that the texture is tiled across a triangle.
After the coordinates are "wrapped" into 0-1 range they will be rotated, so I can't simply use the texture sampler AddressU and AddressV properties set to wrap, because they need to be wrapped and THEN rotated, so it can't be done inside the sampler.
The solution here is simple, just use the fractional part of the texture coordinate and it will wrap.
Here is an example of a pixel shader that will tile the texture 36 times (6 * 6):
input.tex *= 6.0f; //number of times to tile ^ 2
input.tex = frac(input.tex); //wraps texCoords to 0-1 range
return shaderTexture.Sample(SampleType, input.tex);
This does tile the texture, but it can create a problem at the border where the texture wraps. The tiles have to divide evenly into the space they are being displayed on, or it creates a seam where the borders meet. My square that the texture is drawn to is 800x600 pixels, so tiling by 5 divides evenly, but tiling by 6 does not and causes seams along the Y axis.
I have tried using the modulus operator input.tex = input.tex % 1 to wrap the coordinates but I get the exact same results. I have also tried changing the texture filtering method and the AddressU and AddressV properties along with countless different methods of debugging.
I had some luck using this code. If the x coordinate is too high it gets set to 0, and if it is too low it gets set to 1.
input.tex *= 6.0f;
input.tex = frac(input.tex);
if (input.tex.x > 0.999f) input.tex.x = 0;
if (input.tex.x < 0.001f) input.tex.x = 1;
return shaderTexture.Sample(SampleType, input.tex);
This only fixes the problem in certain spots though, so it is definitely not a solution.
Here is a picture that shows a texture (left) and what it looks like when wrapped manually (right). You can see that not everywhere the borders touch has this error.
I have also tried not changing the texture coordinates to 0-1 range and rotating them around the center of each tile instead of (0.5, 0.5) but I get identical results. Also my texture coordinates are completely independent of the vertices and are calculated inside the pixel shader.
Anything I have seen relating to this issue has to do with having a high value at one pixel and then a low value at the next, for example u = 0.95 and the next pixel u = 0.03, which causes it to interpolate backwards across the texture. But when I rotate my texture coordinates nothing changes at all. Even when each tile has a random rotation applied to it. In this case the edges have all sorts of different values bordering each other, not just a high value on the left side and a low value on the right side, but the area where the seam occurs never changes.
As MuertoExcobito said, the main problem is that at the borders the texture coordinate jumps from 1 to 0 within a single pixel. Semantically it is right to say that the entire texture gets averaged in this pixel, but this is not caused by interpolating the texture from 1 to 0 across the pixel. The real reason is the mipmapping.
Mipmaps are generated for your texture as you load it. This means the texture gets multiple mipmap levels, each half the size of the level before.
If a texture is minified, sampling the highest-resolution level would lead to oversampling (point-filtering-like artifacts). To fight oversampling, the texture lookup chooses an appropriate mipmap level depending on how the texture coordinate changes in screen space. In your case, the border is a very large change in a very small area, which leads to the smallest possible mipmap being used (which, as you can see, is a small red dot, and the reason for the red border).
Returning to your problem, you should take control over the mipmapping by using the texture lookup method SampleGrad (Docs). To get the current changes of the texture coordinates at your pixel, you can use the intrinsic functions ddx and ddy. For an arbitrary variable in your shader, they return how it changes relative to the adjacent pixels (the exact explanation would go too deep for this topic). So the following code shouldn't change anything, because it should be semantically identical:
input.tex *= 6.0f; //number of times to tile ^ 2
input.tex = frac(input.tex); //wraps texCoords to 0-1 range
float2 xchange = ddx(input.tex);
float2 ychange = ddy(input.tex);
return shaderTexture.SampleGrad(SampleType, input.tex, xchange, ychange);
Now you can apply your code that prevents big changes in xchange and ychange, forcing the graphics device to use a higher-resolution mipmap. This should remove your artifacts.
If your texture doesn't need mipmapping, because you render it screen-aligned and its size doesn't lead to oversampling, you can use the simpler alternative SampleLevel (Docs). There you can pass a parameter that picks a specific mipmap level, which you can determine yourself.
The code here causes the entire texture to be sampled in the span of a single pixel. For example, for two adjacent pixels at the seam, one 'u' sample could be 1.0-eps, and the next sample 0.0+eps, where eps is a number smaller than the width of a texel. When the output pixels are interpolated, you will interpolate from 1.0 .. 0.0, sampling the entire texture between those two samples. The averaging of the entire texture causes the 'greyness', even though your input texture doesn't actually contain any pixels that are exactly grey.
If you require the texture coordinates to be rotated within each range (e.g. 0..1 and 1..2 rotated independently), there are a few ways this could be solved. First, you could change the interpolation from linear to point, which avoids the interpolation between texels. However, if you require bilinear interpolation, this might not be acceptable. In that case, you could construct a 'grid' mesh and map the input texture 0..1 across each tile, with each tile's texture coordinates rotated independently in the shader.
Another possible solution, would be to transform the coordinates to 0..1 space, perform the rotation, and then translate them back into their original space. For example, you would do:
// pseudo-code:
float2 whole;
input.tex = modf(input.tex, whole); // split off the integer part
input.tex = Rotate(input.tex);      // do whatever rotation is needed
input.tex += whole;                 // translate back into the original space
This would ensure that the wrapping does not have any discontinuities. Alternatively, you could have your rotation code take into account non-unity space texture coordinates.
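The same idea in GLSL (a sketch; the rotation angle, the 6x tiling, and the sampler name are assumptions):
vec2 tiled = uv * 6.0;               // 6 x 6 tiles, as in the question
vec2 whole = floor(tiled);           // integer tile index
vec2 local = fract(tiled) - 0.5;     // tile-local coords around the tile center
float s = sin(angle);
float c = cos(angle);
local = mat2(c, -s, s, c) * local;   // rotate about the tile center
vec2 restored = whole + local + 0.5; // back into the original, continuous space
color = texture(tileTex, restored);  // sampler set to repeat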
Gnietschow posted the answer to this question but I am going to add an answer that shows exactly how I used the answer.
I'm actually not even sure why this works; I just know it does, even with other tiling multiples, various textures, and random rotations.
input.tex *= 6.0f; //number of times to tile ^ 2
input.tex = frac(input.tex); //wraps texCoords to 0-1 range
float2 xchange = ddx(input.tex);
float2 ychange = ddy(input.tex);
if (xchange.x < 0) xchange = float2(0, 0);
if (ychange.y < 0) ychange = float2(0, 0);
return shaderTexture.SampleGrad(SampleType, input.tex, xchange, ychange);
If you don't need mipmapping at all, the other method Gniet mentioned works fine too:
input.tex *= 6.0f; //number of times to tile ^ 2
input.tex = frac(input.tex); //wraps texCoords to 0-1 range
return shaderTexture.SampleLevel(SampleType, input.tex, 0);

Getting depth from Float texture in post process

I'm having a bit of trouble with a depth value that I'm storing in a float texture (or rather, I don't understand the values). Essentially I am creating a deferred renderer, and in one of the passes I store the depth in the alpha component of a floating-point render target. The code for that shader looks something like this.
Define the clip position as a varying
varying vec4 clipPos;
...
In the vertex shader assign the position
clipPos = gl_Position;
Now in the fragment shader I store the depth:
gl_FragColor.w = clipPos.z / clipPos.w;
This by and large works. When I access this render target in any subsequent shader I can get the depth, i.e. something like this:
float depth = depthMap.w;
Am I right to assume that 0.0 is right in front of the camera and 1.0 is in the distance? I am doing some fog calculations based on this, but they don't seem to be correct.
fogFactor = smoothstep( fogNear, fogFar, depth );
fogNear and fogFar are uniforms I send to the shader. When fogNear is set to 0, I would have expected a smooth transition of fog from right in front of the camera out to the draw distance. However, this is what I see:
When I set fogNear to 0.995, I get something more like what I'm expecting:
Is that correct? It just doesn't seem right to me. (The scale of the geometry is not unusually small or large, and neither are the camera near and far planes; all the values are pretty reasonable.)
There are two issues with your approach:
You assume the depth is in the range [0,1], but what you use is clipPos.z / clipPos.w, which is the NDC z coordinate, in the range [-1,1]. You might be better off directly writing the window-space z coordinate to your depth texture; it is in [0,1] and is simply gl_FragCoord.z.
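As a one-line sketch of that first point, written into the same alpha channel the question uses:
// Store window-space depth instead of NDC z; gl_FragCoord.z is already in [0,1].
gl_FragColor.w = gl_FragCoord.z;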
The more serious issue is that you assume a linear depth mapping. That is not the case: neither the NDC nor the window-space z value is a linear representation of the distance to the camera plane. It is not surprising that everything you see in the screenshot is very close to 1. Typically, fog calculations are done in eye space. Since you only need the z coordinate here, you could simply store the clip-space w coordinate instead, since typically that is just -z_eye (look at the last row of your projection matrix). The resulting value will not be in any normalized range, but in the [near, far] range of your projection matrix; specifying fog distances in eye-space units (which are normally identical to world-space units) is more intuitive anyway.
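If you do keep the stored window-space depth, you can convert it back to an eye-space distance before the fog test. A sketch, assuming a standard perspective projection, the default glDepthRange of [0,1], and near/far uniforms matching the projection matrix (the uv varying and sampler names are also assumptions):
float depth = texture2D(depthMap, uv).w;  // gl_FragCoord.z written earlier
float ndcZ = depth * 2.0 - 1.0;           // window space back to NDC
float eyeZ = 2.0 * near * far / (far + near - ndcZ * (far - near));
float fogFactor = smoothstep(fogNear, fogFar, eyeZ); // fog range in eye-space units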

Sampling data from a shadow map texture using automatic comparison via the texture2D function

I've got a sampler2DShadow in my shader and I want to use it to implement shadow mapping. My shadow texture is set up with GL_TEXTURE_COMPARE_MODE set to GL_COMPARE_R_TO_TEXTURE and GL_TEXTURE_COMPARE_FUNC set to GL_LEQUAL (meaning that the comparison should return 1 if the r value of my coordinates is less than or equal to the depth value fetched from the texture). This texture is bound to the GL_DEPTH_ATTACHMENT of an FBO rendered in light-space coordinates.
What coordinates should I give the texture2D function in my final fragment shader? I currently have a
smooth in vec4 light_vert_pos
set in my fragment shader that is defined in the vertex shader by the function
light_vert_pos = light_projection_camera_matrix*modelview*in_Vertex;
I would assume I could multiply my lighting by the value
texture2D(shadowmap,(light_vert_pos.xyz)/light_vert_pos.w)
but this does not seem to work. Since light_vert_pos is only in post projective coordinates (the matrix used to create it is the matrix I use to create the depth buffer in the FBO), should I manually clamp the 3 x/y/z variables to [0,1]?
You don't say how you generated your depth values. So I'll assume you generated your depth values by rendering triangles using normal projection. That is, you transform the geometry to camera space, transform it to projection space, and let the rasterization pipeline handle things from there as normal.
In order to make shadow mapping work, your texture coordinates must match what the rasterizer did.
The output of a vertex shader is clip-space. From there, you get the perspective divide, followed by the viewport transform. The latter uses the values from glViewport and glDepthRange to compute the window-space XYZ. The window-space Z is the depth value written to the depth buffer.
Note that this is all during the depth pass: the generation of the depth values for the shadow map.
However, you can take some shortcuts. If your glViewport range was set to the same size as the texture (which is generally how it's done), then you can ignore the viewport transform. You will still need the glDepthRange you used in the depth pass.
In your fragment shader, you can perform the perspective divide, which puts the coordinates in normalized device coordinate (NDC) space. That space is [-1, 1] in all directions. Your texture coordinates are [0, 1], so you need to divide the X and Y by two and add 0.5 to them:
vec3 ndc_space_values = light_vert_pos.xyz / light_vert_pos.w;
vec3 texCoords;
texCoords.xy = ndc_space_values.xy * 0.5 + 0.5;
To compute the Z value, you need to know the near and far values you use for glDepthRange.
texCoords.z = ((f-n) * 0.5) * ndc_space_values.z + ((n+f) * 0.5);
Where n and f are the glDepthRange near and far values. You can of course precompute some of these and pass them as uniforms. Or, if you use the default range of near=0 and far=1, you get
texCoords.z = ndc_space_values.z * 0.5 + 0.5;
Which looks familiar somehow.
Aside:
Since you defined your inputs with in rather than varying, you have to be using GLSL 1.30 or above. So why are you using texture2D (which is an old function) rather than texture?
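Putting the pieces together, a sketch of the final lookup (assumes the default glDepthRange of near=0, far=1, and GLSL 1.30+):
uniform sampler2DShadow shadowmap;
smooth in vec4 light_vert_pos;

float shadowFactor(void) {
    // Perspective divide: clip space -> NDC, components in [-1, 1].
    vec3 ndc = light_vert_pos.xyz / light_vert_pos.w;
    // NDC -> [0, 1] for the XY texture coordinates and the Z reference value.
    vec3 shadowCoord = ndc * 0.5 + 0.5;
    // The GL_LEQUAL comparison happens inside this lookup:
    // 1.0 where the fragment is lit, 0.0 where it is shadowed.
    return texture(shadowmap, shadowCoord);
}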

How to use GL_REPEAT to repeat only a selection of a texture atlas? (OpenGL)

How can I repeat a selection of a texture atlas?
For example, my sprite (selection) is within the texture coordinates:
GLfloat textureCoords[]=
{
.1f, .1f,
.3f, .1f,
.1f, .3f,
.3f, .3f
};
Then I want to repeat that sprite N times to a triangle strip (or quad) defined by:
GLfloat vertices[]=
{
-100.f, -100.f,
100.f, -100.f,
-100.f, 100.f,
100.f, 100.f
};
I know it has something to do with GL_REPEAT and textureCoords going past the range [0,1]. This, however, doesn't work (trying to repeat N = 10):
GLfloat textureCoords[]=
{
10.1f, 10.1f,
10.3f, 10.1f,
10.1f, 10.3f,
10.3f, 10.3f
};
We're seeing our full texture atlas repeated...
How would I do this the right way?
I'm not sure you can do that. I think OpenGL's texture coordinate modes only apply to the entire texture. When using an atlas, you're using "sub-textures", so your texture coordinates never come close to 0 and 1, the normal limits where wrapping and clamping occur.
There might be extensions to deal with this, I haven't checked.
EDIT: Normally, to repeat a texture, you'd draw a polygon that is "larger" than your texture implies. For instance, if you had a square texture that you wanted to repeat a number of times (say six) over a bigger area, you'd draw a rectangle that's six times as wide as it is tall. Then you'd set the texture coordinates to (0,0)-(6,1), and the texture mode to "repeat". When interpolating across the polygon, the texture coordinate that goes beyond 1 will, due to repeat being enabled, "wrap around" in the texture, causing the texture to be mapped six times across the rectangle.
This is a bit crude to explain without images.
Anyway, when you're texturing using just a part of the texture, there's no way to specify that larger texture coordinate in a way that makes OpenGL repeat it inside only the sub-rectangle.
None of the texture wrap modes support the kind of operation you are looking for, i.e. they all map to the full [0,1] range, not some arbitrary subset. You basically have two choices: Either create a new texture that only has the sprite you need from the existing texture or write a GLSL pixel program to map the texture coordinates appropriately.
While this may be an old topic, here's how I ended up doing it:
A workaround would be to create multiple meshes, glued together containing the subset of the Texture UV's.
E.g.:
I have a laser texture contained within a larger texture atlas, at U[0.05 - 0.1] & V[0.05-0.1].
I would then construct N meshes, each having U[0.05-0.1] & V[0.05-0.1] coordinates.
(N = length / texture.height, where height is the dimension of the texture I would like to repeat; or, more simply, N is the number of times I want to repeat the texture.)
This solution would be more cost effective than having to reload texture after texture.
Especially if you batch all render calls (as you should).
(OpenGL ES 1.0,1.1,2.0 - Mobile Hardware 2011)
It can be done with a modulo of your tex coords in the shader; the mod will repeat your sub-range coords.
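For instance, a fragment-shader sketch using the sub-rectangle from the question (the sampler name and repeat count are assumptions):
#version 330 core
uniform sampler2D atlasTex; // assumed sampler name
in vec2 uv;                 // base coords covering the quad once
out vec4 fragColor;

void main(void) {
    vec2 local = fract(uv * 10.0);                   // repeat N = 10 times
    vec2 atlasUV = mix(vec2(0.1), vec2(0.3), local); // remap into the sub-rect
    fragColor = texture(atlasTex, atlasUV);
}
Note that the fract() jump disturbs the derivative-based mip selection at the tile edges, which is exactly the bleeding issue the next answer deals with.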
I ran into your question while working on the same issue, although in HLSL and DirectX. I also needed mipmapping and had to solve the related texture bleeding too.
I solved it this way:
min16float4 sample_atlas(Texture2D<min16float4> atlasTexture, SamplerState samplerState, float2 uv, AtlasComponent atlasComponent)
{
//Get LOD
//Never wrap these as that will cause the LOD value to jump on wrap
//xy is left-top, zw is width-height of the atlas texture component
float2 lodCoords = atlasComponent.Extent.xy + uv * atlasComponent.Extent.zw;
uint lod = ceil(atlasTexture.CalculateLevelOfDetail(samplerState, lodCoords));
//Get texture size
float2 textureSize;
uint levels;
atlasTexture.GetDimensions(lod, textureSize.x, textureSize.y, levels);
//Calculate component size and calculate edge thickness - this is to avoid bleeding
//Note my atlas components are well behaved, that is they are all power of 2 and mostly similar size, they are tightly packed, no gaps
float2 componentSize = textureSize * atlasComponent.Extent.zw;
float2 edgeThickness = 0.5 / componentSize;
//Calculate texture coordinates
//We only support wrap for now
float2 wrapCoords = clamp(wrap(uv), edgeThickness, 1 - edgeThickness);
float2 texCoords = atlasComponent.Extent.xy + wrapCoords * atlasComponent.Extent.zw;
return atlasTexture.SampleLevel(samplerState, texCoords, lod);
}
Note the limitation is that mip levels are not blended this way (SampleLevel with a rounded-up LOD gives no trilinear filtering), but in our use-case that is completely fine.
Can't be done...