Need to create a custom data 2D texture with reasonable precision - C++

The idea
I need to create a 2D texture to be fed with reasonably precise float values (I mean at least as precise as a GLSL mediump float). I want to store each pixel's distance from the camera in it. I don't want the GL Z-buffer's distance to the near plane, only my own lovely custom data :>
The problem/What I've tried
By using a standard texture as a color attachment, I don't get enough precision. Or maybe I missed something?
By using a GL_DEPTH_COMPONENT32 texture as a depth attachment, I get the clamped distance to the near plane - rubbish.
So it seems I am stuck not using a depth attachment, even though those seem to hold more precision. Is there a way to get mediump float precision with standard textures?
I find it strange that OpenGL doesn't have a generic container for arbitrary data with a custom bit depth. Or maybe I missed something again!

You can use floating point textures instead of an RGBA texture with 8 bits per channel. However, support for these depends on the devices you want to target; older mobile devices in particular often lack support for these formats.
Example for GL_RGBA16F (not tested):
glTexImage2D(GL_TEXTURE_2D, mipmap, GL_RGBA16F, mipmapWidth, mipmapHeight, 0, GL_RGBA, GL_HALF_FLOAT, nullptr);
Now you can write the data from your fragment shader manually. However, clamping still occurs depending on your MVP, and you need to pass the data to the fragment shader yourself.
There are also 32-bit formats.

There are a number of options for texture formats that give you more than 8-bit component precision.
If your values are in a pre-defined range, meaning that you can easily map your values into the [0.0, 1.0] interval, normalized 16-bit formats are your best option. For a single component texture, the format would be GL_R16. You can allocate a texture of this format using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, 512, 512, 0, GL_RED, GL_UNSIGNED_SHORT, NULL);
There are matching formats for 2, 3, and 4 components (GL_RG16, GL_RGB16, GL_RGBA16).
If you need a larger range of values that is not easily constrained, float textures become more attractive. The corresponding calls for 1 component textures are:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16F, 512, 512, 0, GL_RED, GL_HALF_FLOAT, NULL);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, NULL);
Again, there are matching formats for 2, 3, and 4 components.
The advantage of float textures is that, just based on the nature of float values encoded with a mantissa and exponent, they can cover a wide range. This comes at the price of less precision. For example, GL_R16F gives you 11 bits of precision, while GL_R16 gives you a full 16 bits. Of course GL_R32F gives you plenty of precision (24 bits) as well as a wide range, but it uses twice the amount of storage.
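To tie this back to the original question: below is a minimal sketch (untested; width, height, and the variable names are placeholders, and GL 3.0-style FBO functions are assumed) of attaching a GL_R32F texture to a framebuffer object so your fragment shader can write the camera distance into it:
GLuint distTex, fbo;
glGenTextures(1, &distTex);
glBindTexture(GL_TEXTURE_2D, distTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, width, height, 0, GL_RED, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, distTex, 0);
// The fragment shader can now write length(viewSpacePos) to its single red output.
You will typically still attach a regular depth buffer for depth testing; the custom distance simply lives in the color attachment, unclamped.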

You would probably have an easier time accomplishing this in GLSL as opposed to the C API. However, any custom depth buffer will be consistently, considerably slower than the one provided by OpenGL. Don't expect it to operate in real-time.
If your aim is to have access to the raw distance of any fragment from the camera, remember that the stored depth value is z/w after the perspective divide; for a typical perspective projection, w is the fragment's view-space distance along the camera axis. So it is possible to extract the distance quickly with an acceptable amount of precision. However, you are still faced with your original problem: fragments between the camera and the near plane will not be in the depth buffer.
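For illustration, a common way to recover that view-space distance from a [0,1] depth buffer value looks roughly like this (a standard perspective projection is assumed, and uNear/uFar are assumed uniforms holding the near and far plane distances):
float linearizeDepth(float d)
{
    float zNdc = d * 2.0 - 1.0; // back to NDC [-1, 1]
    return (2.0 * uNear * uFar) / (uFar + uNear - zNdc * (uFar - uNear));
}
Note that this yields the distance along the camera's view axis, not the Euclidean distance to the fragment.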

Related

glReadPixels from back buffer. Problem with float precision

I'm trying to get the color for my color picker, but the float value I read differs from the value I stored. For example I set 0.5, but I get back 0.498039 (these are the actual numbers).
I don't build any FBO and read the color from GL_BACK directly:
glReadBuffer(GL_BACK);
glReadPixels(x, y, 1, 1, GL_RGB, GL_FLOAT, &color);
How can I preserve the precision of the floating point value? Is it possible to change GL_FLOAT to another type that would preserve precision? Is it possible to get numbers greater than 1.0 in &color?
The precision is limited by the precision of the framebuffer (the back buffer). This precision cannot be set individually and is (most likely) limited to 8 bits per channel; the default framebuffer is created once, when the OpenGL window and OpenGL context are created.
Thus it makes no sense to read the buffer into a 32-bit float target, because the source buffer only has 8 bits per channel. That is exactly what you are seeing: 0.5 is quantized to 127/255 ≈ 0.498039.
Anyway, it is possible to render the scene to a named framebuffer object where the color plane of the attached renderbuffer has a floating point format (e.g. GL_RGBA32F). See LearnOpenGL - Framebuffers.
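A minimal sketch of that setup (error checking omitted; width, height, and the variable names are placeholders):
GLuint fbo, rbo;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA32F, width, height);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo);
// ... render the pickable scene into the FBO here ...
float color[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_FLOAT, color); // now reads back at full float precision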

How do I determine the dimension arguments for glTexImage3D?

I'm working on a graphics project involving using a 3D texture to do some volume rendering on data stored in the form of a rectilinear grid, and I was a little confused on the width, height, and depth arguments for glTexImage3D. For a 1D texture, I know that you can use something like this:
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, data);
where the width is the 256 possible values for each color stream. Here, I'm still using colors in the form of unsigned bytes (and the main purpose of the texture here is still to interpolate the colors of the points in the grid, along with the transparencies), so it makes sense to me that one of the arguments would still be 256. It's a 64 X 64 X 64 grid, so it makes sense that one of the arguments (maybe the depth?) would be 64, but I'm not sure about the third, or even if I'm on the right track there. Could anyone enlighten me on the proper use of those three parameters? I looked at both of these discussions, but still came away confused:
regarding glTexImage3D
OPENGL how to use glTexImage3D function
It looks like you misunderstood the 1D case. In your example, 256 is the size of the 1D texture, i.e. the number of texels.
The number of possible values for each color component is given by the internal format, which is the 3rd argument. GL_RGB actually leaves it up to the implementation what the color depth should be. It is supported for backwards compatibility with old code. It gets clearer if you specify a sized format like GL_RGB8 for the internal format, which requests 8 bits per color component, which corresponds to 256 possible values each for R, G, and B.
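So the 1D call from the question, written with a sized internal format, would be:
glTexImage1D(GL_TEXTURE_1D, 0, GL_RGB8, 256, 0, GL_RGB, GL_UNSIGNED_BYTE, data);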
For the 3D case, if you want a 64 x 64 x 64 grid, you simply specify 64 for the size in each of the 3 dimensions (width, height, and depth). For this size, using RGBA with a color depth of 8 bits, the call is:
glTexImage3D(GL_TEXTURE_3D, 0, GL_RGBA8, 64, 64, 64, 0,
GL_RGBA, GL_UNSIGNED_BYTE, data);
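For completeness, data is expected to hold width × height × depth texels, with x varying fastest, then y, then z. A sketch of filling such a buffer (using std::vector from <vector>; the assigned values are placeholders):
const int N = 64;
std::vector<GLubyte> data(N * N * N * 4); // 4 bytes (RGBA8) per texel
for (int z = 0; z < N; ++z)
    for (int y = 0; y < N; ++y)
        for (int x = 0; x < N; ++x)
        {
            size_t i = 4 * (x + N * (y + N * z)); // start of texel (x, y, z)
            data[i + 0] = 0;   // R (placeholder)
            data[i + 1] = 0;   // G (placeholder)
            data[i + 2] = 0;   // B (placeholder)
            data[i + 3] = 255; // A (placeholder)
        }
data.data() is then what you pass as the last argument of glTexImage3D().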

How to make a 1D lut in C++ for GLSL

I'm beginning to understand how to implement a fragment shader to do a 1D LUT but I am struggling to find any good resources that tell you how to make the 1D LUT in C++ and then texture it.
So for a simple example, given the 1D LUT below:
Would I make an array with the following data?
int colorLUT[256] = { 255, 254, 253, ..., 3, 2, 1, 0 };
or unsigned char I guess since I'm going to be texturing it.
If this is how to create the LUT, then how would I convert it to a texture? Should I use glTexImage1D? Or is there a better method? I'm really at a loss here; any advice would be helpful.
I'm sorry to be so brief, but I haven't seen any tutorials about how to actually make and link the LUT; every tutorial on GLSL only tells you about the shaders and neglects the linking part.
My end goal is I would like to know how to take different 1D LUTs as seen below and apply them all to images.
Yes, you can use 1D textures as lookup tables.
You can load the data into a 1D texture with glTexImage1D(). Using GL_R8 as the internal texture format, and specifying the data as GL_UNSIGNED_BYTE when passing it to glTexImage1D(), is your best choice if 8 bits of precision are enough for the value. Your call will look like this, with lutData being a pointer/array to GLubyte data, and lutSize the size of your LUT:
glTexImage1D(GL_TEXTURE_1D, 0, GL_R8, lutSize, 0, GL_RED, GL_UNSIGNED_BYTE, lutData);
If you need higher precision than 8 bits, you can use formats like GL_R16 or GL_R32F.
Make sure that you also set the texture parameters correctly, e.g. for linear sampling between values in the lookup table:
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
You then bind the texture to a sampler1D uniform in your shader, and use the regular texture sampling functions to retrieve the new value. Remember that texture coordinates are in the range 0.0 to 1.0, so you need to map the range of your original values to [0.0, 1.0] before you pass it into the texture sampling function. The new value you receive from the texture sampling function will also be in the range [0.0, 1.0].
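Put together, the shader side might look roughly like this (the uniform and range names are placeholders, and a GLSL version with the texture() function is assumed):
uniform sampler1D u_lut;  // bound to the texture unit holding the LUT
uniform float u_minVal;   // lower bound of your input range (placeholder)
uniform float u_maxVal;   // upper bound of your input range (placeholder)
float applyLut(float v)
{
    float coord = (v - u_minVal) / (u_maxVal - u_minVal); // map input to [0.0, 1.0]
    return texture(u_lut, coord).r;                       // result is also in [0.0, 1.0]
}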
Note that as long as your lookup is a relatively simple function, it might be more efficient to calculate the function in the shader. But if the LUT can contain completely arbitrary mappings, using a 1D texture is a good way to go.
In OpenGL variations that do not have 1D textures, like OpenGL ES, you can use a 2D texture with height set to 1 instead.
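For example, assuming a context where GL_R8 is available (OpenGL ES 3.0 or later), the equivalent 2D call would be roughly:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, lutSize, 1, 0, GL_RED, GL_UNSIGNED_BYTE, lutData);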
If you need lookup tables that are larger than the maximum supported texture size, you can also look into buffer textures, as suggested by Andon in his comment.

Deferred Rendering with OpenGL, experiencing heavy pixelization near lit boundaries on surfaces

Problem Explanation
I am currently implementing point lights for a deferred renderer and am having trouble determining where the heavy pixelization/triangulation, which is only noticeable near the borders of lights, is coming from.
The problem appears to be caused by a loss of precision somewhere, but I have been unable to track down the precise source. Normals are an obvious possibility, but I have a classmate who is using DirectX and handling his normals in a similar manner with no issues.
From about 2 meters away in our game's units (64 units/meter):
A few centimeters away. Note that the "pixelization" does not change size in the world as I approach it. However, it will appear to swim if I change the camera's orientation:
A comparison with a closeup from my forward renderer which demonstrates the spherical banding that one would expect with a RGBA8 render target (only 0-255 possible values for each color). Note that in my deferred picture the back walls exhibit normal spherical banding:
The light volume is shown here as the green wireframe:
As can be seen the effect isn't visible unless you get close to the surface (around one meter in our game's units).
Position reconstruction
First, I should mention that I am using a spherical mesh to render only the portion of the screen that the light overlaps. I render only the back-faces, with the depth test passing when the fragment's depth is greater than or equal to the stored depth, as suggested here.
To reconstruct the camera-space position of a fragment, I take the camera-space position of the corresponding fragment on the light volume as a view ray, normalize it, and scale it by the linear distance stored in my g-buffer. This is sort of a hybrid of the methods discussed here (using linear depth) and here (spherical light volumes).
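In shader terms, the reconstruction described above looks roughly like this (the names are placeholders; u_gbuffer_dist is the linear-distance target from the g-buffer described below):
// vViewRay: interpolated camera-space position of the light volume fragment.
vec3 ray = normalize(vViewRay);
float dist = texture(u_gbuffer_dist, screenUV).r; // linear camera distance (GL_R32F)
vec3 viewPos = ray * dist;                        // reconstructed camera-space position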
Geometry Buffer
My gBuffer setup is:
enum render_targets { e_dist_32f = 0, e_diffuse_rgb8, e_norm_xyz8_specpow_a8, e_light_rgb8_specintes_a8, num_rt };
//...
GLint internal_formats[num_rt] = { GL_R32F,  GL_RGBA8, GL_RGBA8, GL_RGBA8 };
GLint formats[num_rt]          = { GL_RED,   GL_RGBA,  GL_RGBA,  GL_RGBA  };
GLint types[num_rt]            = { GL_FLOAT, GL_FLOAT, GL_FLOAT, GL_FLOAT };
for (uint i = 0; i < num_rt; ++i)
{
    glBindTexture(GL_TEXTURE_2D, _render_targets[i]);
    glTexImage2D(GL_TEXTURE_2D, 0, internal_formats[i], _width, _height, 0, formats[i], types[i], nullptr);
}
// Separate non-linear depth buffer used for depth testing
glBindTexture(GL_TEXTURE_2D, _depth_tex_id);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, _width, _height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, nullptr);
Normal Precision
The problem was that my normals just didn't have enough precision. At 8 bits per component there are only 256 discrete possible values. Examining the normals in my g-buffer overlaid on top of the lighting showed a 1:1 correspondence between normal values and lit "pixel" values.
I am unsure why my classmate does not get the same issue (he is going to investigate further).
After some more research I found that a term for this is quantization. Another example of it can be seen here with a specular highlight on page 19.
Solution
After changing my normal render target to RG16F, the problem was resolved.
Using the method suggested here to store and retrieve normals, I get the following results:
I now need to store my normals more compactly (I only have room for 2 components). This is a good survey of techniques if anyone finds themselves in the same situation.
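For reference, one technique from that survey is the spheremap transform; a sketch (valid for unit view-space normals, degenerate at n.z = -1):
// Encode a unit normal into two components in [0, 1].
vec2 encodeNormal(vec3 n)
{
    float f = sqrt(8.0 * n.z + 8.0);
    return n.xy / f + 0.5;
}
// Decode the two components back into a unit normal.
vec3 decodeNormal(vec2 enc)
{
    vec2 fenc = enc * 4.0 - 2.0;
    float f = dot(fenc, fenc);
    float g = sqrt(1.0 - f / 4.0);
    return vec3(fenc * g, 1.0 - f / 2.0);
}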
[EDIT 1]
As both Andon and GuyRT have pointed out in the comments, 16 bits is a bit overkill for what I need. I've switched to RGB10_A2 as they suggested and it gives very satisfactory results, even on rounded surfaces. The extra 2 bits help a lot (256 vs 1024 discrete values).
Here's what it looks like now.
It should also be noted (for anyone that references this post in the future) that the image I posted for RG16F has some undesirable banding from the method I was using to compress/decompress the normal (there was some error involved).
[EDIT 2]
After discussing the issue some more with a classmate (who is using RGB8 with no ill effects), I think it is worth mentioning that I might just have the perfect combination of elements to make this appear. The game I'm building this renderer for is a horror game that places you in pitch black environments with a sonar-like ability. Normally in a scene you would have a number of lights at different angles (my classmate's environments are all very well lit - they're making an outdoor racing game). That combined with the fact that it only appears on very round objects relatively close up might be why I provoked this. This is all just a (slightly educated) guess on my part.

What exactly is a floating point texture?

I tried reading the OpenGL ARB_texture_float spec, but I still cannot get it into my head.
And how is floating point data related to just normal 8-bit per channel RGBA or RGB data from an image that I am loading into a texture?
You can read a little bit about it here.
Basically, a floating point texture is a texture in which the data is of floating point type :)
That is, it is not clamped. So if you have 3.14f in your texture, you will read back the same value in the shader.
You may create them with different numbers of channels. Also, you may create 16- or 32-bit textures depending on the format, e.g.:
// create 32bit 4 component texture, each component has type float
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 16, 16, 0, GL_RGBA, GL_FLOAT, data);
where data could be like this:
float data[16*16*4]; // 16x16 texels, 4 float components each
for (int i = 0; i < 16*16*4; ++i) data[i] = sin(i*M_PI/180.0f); // whatever
Then in the shader you can read back exactly the same value (if you use a FLOAT32 texture), e.g.:
uniform sampler2D myFloatTex;
float value = texture2D(myFloatTex, texcoord.xy).r;
If you were using a 16-bit format, say GL_RGBA16F, then every read in the shader involves a conversion to 32-bit float. In Cg/HLSL you can avoid this with the half4 type (standard GLSL has no half type):
half4 value = texture2D(my16BitTex, texcoord.xy);
So, basically, the difference between a normalized 8-bit texture and a floating point texture is that in the first case the values are brought into the [0..1] range and clamped, whereas in the latter you receive your values as-is (except for the 16<->32 bit conversion, see my example above).
Note that you'd probably want to use them with an FBO as a render target; in this case you need to know that not all of the formats may be attached as a render target. E.g. you cannot attach luminance and intensity formats.
Also, not all hardware supports filtering of floating point textures, so you need to check for that first if you need filtering.
Hope this helps.
FP textures have their own designated range of internal formats (GL_RGBA16F, GL_RGBA32F, etc.).
Regular textures store fixed-point data, so reading from them gives you values in the [0,1] range. In contrast, FP textures give you values in the [-inf,+inf] range (not necessarily with higher precision).
In many cases (like HDR rendering) you can easily get by without FP textures, just by transforming the values to fit into the [0,1] range. But there are cases, like deferred rendering, where you may want to store, for example, world-space coordinates without caring about their range.
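To make the contrast concrete, here is a sketch of allocating both kinds of targets (width and height are placeholders):
// Normalized 8-bit: written values are clamped to [0, 1].
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
// Floating point: values such as world-space positions are stored as-is.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);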