glReadPixels from back buffer. Problem with float precision - c++

I'm trying to get the color for my color picker, but the float value I read back differs from the value I stored. For example, I set 0.5 but read back 0.498039 (these are the actual numbers).
I don't build any FBO and read color from GL_BACK directly:
glReadBuffer(GL_BACK);
glReadPixels(x, y, 1, 1, GL_RGB, GL_FLOAT, &color);
How can I preserve the precision of the floating-point value? Is it possible to change GL_FLOAT to another type that would preserve the precision? Is it possible to get numbers greater than 1.0 in &color?

The precision is limited by the precision of the framebuffer (the back buffer). This precision cannot be set individually and is (most likely) limited to 8 bits per channel. The default framebuffer is created once, when the OpenGL window and OpenGL context are created.
Thus it makes no sense to read the buffer into a 32-bit float target, because the source buffer only holds 8 bits per channel.
However, it is possible to render the scene to a named Framebuffer Object whose attached Renderbuffer has a floating-point color format (e.g. GL_RGBA32F). See LearnOpenGL - Framebuffers.
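A minimal sketch of that approach might look like this (fboWidth, fboHeight, x and y are placeholders, and error/completeness checks are omitted):
GLuint fbo, rbo;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA32F, fboWidth, fboHeight); // float color plane
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo);
// ... render the picking pass into the FBO here ...
GLfloat color[4];
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(x, y, 1, 1, GL_RGBA, GL_FLOAT, color); // no 8-bit quantization on the way out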

Related

Understanding glClearBuffer* (difference between fixed- and floating-point color buffers)

Imagine having a framebuffer color attachment, a texture of format:
glTexImage2D(target, 0, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
RGBA8, Unsigned byte.
Normally I would use "glClear()" to clear the framebuffer as a whole.
Even with multiple attachments of various formats, using "glClear()" clears all of them
(using the clear color set with "glClearColor()").
And, confusingly, it clears the RGBA8 (uint) format texture with THAT FLOATING-POINT clear color.
So if I want to clear a single fbo attachment one possible option would be to use:
glClearBuffer*
The *fv, *iv and *uiv forms of these commands should be used to clear fixed- and floating-point, signed integer, and unsigned integer color buffers respectively.
But which one should I use to clear the RGBA8 uint attachment? I have tried both:
glClearBufferfv(GL_COLOR,0,new float[4]);
and
glClearBufferuiv(GL_COLOR, 0, new GLuint[4]);
And both work.
What would be the correct way of clearing this particular attachment?
GL_RGBA8 is a normalized, fixed-point image format. Therefore, you should use the fv function.
That being said, the standard says that if the clear buffer function you use is not appropriate for the format of the buffer, you get undefined behavior. But you don't get an error.
Undefined behavior can also include "works as I expected".
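For the GL_RGBA8 attachment above, the fixed-point-appropriate call would look something like this sketch (the clear color is just an example):
const GLfloat clearColor[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
glClearBufferfv(GL_COLOR, 0, clearColor); // drawbuffer 0 = the RGBA8 attachment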

Difference between glColor3f and glColor3d

The code I'm working on, in a nutshell, uses OpenGL to represent data (stored as double) as the colour of some geometry. Using the default colour shading between vertices, it then samples the colour back over the geometry to see the values at different pixels and converts them back to my data. Currently it's done using 1D texture mapping, such that any value I draw and sample back can be located exactly on a scale. However, since the data I'm working with is stored as double, a lot of precision is lost when all the values drawn and sampled back are mapped onto a 14-bit texture map.
So to address that issue, now I'm implementing the code using floating-point colour.
I'm using an FBO with a GL_RGBA32F renderbuffer colour attachment. The thing which I don't understand is what will change if I set the colour of my vertices using glColor3d versus glColor3f.
If I'm understanding correctly, with a single-precision floating-point renderbuffer, the values I sample back for RGB will basically be of GLfloat type, not GLdouble.
Also, is there any way I can configure my renderbuffer such that I can draw with GLdouble colour values and sample back GLdouble values? Looking at the OpenGL 4.5 spec (p. 198), there aren't any colour formats with more than 32F precision per colour channel. From my understanding, double-precision colour is fairly modern tech supported only on newer systems, which only confuses me more about the existence of glColor3d.

Need to create a custom data 2D texture with reasonable precision

The idea
I need to create a 2D texture to be fed with reasonably precise float values (I mean at least as precise as a GLSL mediump float). I want to store each pixel's distance from the camera in it. I don't want the GL Z-buffer distance to the near plane, only my own lovely custom data :>
The problem/What I've tried
By using a standard texture as a color attachment, I don't get enough precision. Or maybe I missed something?
By using a depth attachment texture such as GL_DEPTH_COMPONENT32, I get the clamped near-plane distance - rubbish.
So it seems I'm stuck not using a depth attachment, even though they seem to hold more precision. So is there a way to get mediump float precision with standard textures?
I find it strange that OpenGL doesn't have a generic container for arbitrary data, I mean with custom bit depth. Or maybe I missed something again!
You can use floating-point textures instead of an RGBA texture with 8 bits per channel. However, support for these depends on the devices you want to support; older mobile devices in particular lack support for these formats.
Example for GL_RGBA16F (not tested):
glTexImage2D(GL_TEXTURE_2D, mipmap, GL_RGBA16F, mipmapWidth, mipmapHeight, 0, GL_RGBA, GL_HALF_FLOAT, NULL);
Now you can store the data in your fragment shader manually. However, clamping still occurs depending on your MVP. Also, you need to pass the data to the fragment shader.
There are also 32-bit formats.
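For example, the 32-bit variant of the call above might look like this (equally untested, same placeholder names):
glTexImage2D(GL_TEXTURE_2D, mipmap, GL_RGBA32F, mipmapWidth, mipmapHeight, 0, GL_RGBA, GL_FLOAT, NULL);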
There are a number of options for texture formats that give you more than 8-bit component precision.
If your values are in a pre-defined range, meaning that you can easily map your values into the [0.0, 1.0] interval, normalized 16-bit formats are your best option. For a single component texture, the format would be GL_R16. You can allocate a texture of this format using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, 512, 512, 0, GL_RED, GL_UNSIGNED_SHORT, NULL);
There are matching formats for 2, 3, and 4 components (GL_RG16, GL_RGB16, GL_RGBA16).
If you need a larger range of values that is not easily constrained, float textures become more attractive. The corresponding calls for 1 component textures are:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16F, 512, 512, 0, GL_RED, GL_HALF_FLOAT, NULL);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, NULL);
Again, there are matching formats for 2, 3, and 4 components.
The advantage of float textures is that, just based on the nature of float values encoded with a mantissa and exponent, they can cover a wide range. This comes at the price of less precision. For example, GL_R16F gives you 11 bits of precision, while GL_R16 gives you a full 16 bits. Of course GL_R32F gives you plenty of precision (24 bits) as well as a wide range, but it uses twice the amount of storage.
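To make that tradeoff concrete, here is a small back-of-the-envelope sketch of the representable step just below 1.0 for each format (the R16 step is exact; the float steps follow from the mantissa widths quoted above):
#include <cmath>
#include <cstdio>

int main() {
    double stepR16  = 1.0 / 65535.0;        // GL_R16: uniform steps across [0, 1], ~1.5e-5
    double stepR16F = std::ldexp(1.0, -11); // GL_R16F: 11 bits of precision, ~4.9e-4 just below 1.0
    double stepR32F = std::ldexp(1.0, -24); // GL_R32F: 24 bits of precision, ~6.0e-8 just below 1.0
    std::printf("R16:  %g\nR16F: %g\nR32F: %g\n", stepR16, stepR16F, stepR32F);
    return 0;
}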
You would probably have an easier time accomplishing this in GLSL as opposed to the C API. However, any custom depth buffer will be consistently, considerably slower than the one provided by OpenGL. Don't expect it to operate in real-time.
If your aim is to have access to the raw distance of any fragment from the camera, remember that depth buffer values are z/w, where z is the distance from the near plane and w is the distance from the camera. So it is possible to extract the distance quickly, with an acceptable amount of precision. However, you are still faced with your original problem: fragments between the camera and the near plane will not be in the depth buffer.
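As a sketch of the manual approach, a fragment shader for such a pass could write the camera distance straight into a GL_R32F color attachment; the names vWorldPos and uCameraPos here are illustrative assumptions, not part of the question:
// GLSL fragment shader, embedded as a C++ string
const char* distanceFrag = R"GLSL(
    #version 330 core
    in vec3 vWorldPos;        // world-space position passed from the vertex shader
    uniform vec3 uCameraPos;  // camera position in world space
    out float distanceOut;    // lands in the R channel of a GL_R32F attachment
    void main() {
        distanceOut = distance(vWorldPos, uCameraPos); // unclamped float distance
    }
)GLSL";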

How to get a floating-point color from GLSL

I am currently faced with a problem closely related to the OpenGL pipeline, and the use of shaders.
Indeed, I work on a project in which one of the steps consists of reading pixels from an image that we generate using OpenGL, with as much accuracy as possible: instead of reading integers, I would like to read float numbers. (So, instead of reading the value (134, 208, 108) for a pixel, I would like to obtain something like (134.180, 207.686, 108.413), for example.)
For this project, I used both vertex and fragment shaders to render my scene. I assume that the color computed and returned by the fragment shader is a vector of 4 floats (one per RGBA component) belonging to the "continuous" [0, 1] interval. But how can I get it in my C++ file? Is there a way of doing it?
I thought of calling glReadPixels() just after rendering my scene into a buffer, setting the format argument to GL_RGBA and the pixel data type to GL_FLOAT. But I have the feeling that the pixel values have already been cast to integers in the meantime, because the float numbers I finally get correspond to the interval [0, 255] clamped to [0, 1], without any gain in precision. A closer look at the OpenGL specification strengthens this idea: I think there is indeed a cast somewhere between rendering my scene and calling glReadPixels().
Do you have any idea how I can reach my objective?
By default the framebuffer stores pixel components as 8-bit integers, so the fragment shader's output is quantized before you can read it back. You should render to a floating-point color format instead, such as GL_RGBA16F or GL_RGBA32F, where 16 and 32 are the bit depths of each component.
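A possible sketch of that setup, using a GL_RGBA32F texture as the color attachment and reading back unclamped floats (width and height are placeholders; completeness checks are omitted):
GLuint fbo, tex;
glGenFramebuffers(1, &fbo);
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// ... render the scene with your shaders ...
std::vector<GLfloat> pixels(width * height * 4); // needs <vector>
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels.data());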

What exactly is a floating point texture?

I tried reading the OpenGL ARB_texture_float spec, but I still can't get it into my head.
And how does floating-point data relate to the normal 8-bit-per-channel RGBA or RGB data from an image that I load into a texture?
You can read a little bit about it here.
Basically a floating-point texture is a texture in which the data is of floating-point type :)
That is, it is not clamped. So if you have 3.14f in your texture, you will read the same value in the shader.
You may create them with different numbers of channels. You may also create 16-bit or 32-bit textures, depending on the format, e.g.
// create 32bit 4 component texture, each component has type float
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 16, 16, 0, GL_RGBA, GL_FLOAT, data);
where data could be like this:
float data[16*16*4]; // 4 floats (RGBA) per texel
for(int i=0;i<16*16*4;++i) data[i] = sin(i*M_PI/180.0f); // whatever
Then in the shader you can get exactly the same value (if you use a FLOAT32 texture).
e.g.
uniform sampler2D myFloatTex;
float value = texture2D(myFloatTex, texcoord.xy).r;
If you were using a 16-bit format, say GL_RGBA16F, then whenever you read it in the shader you will have a conversion. So, to avoid this you may use the half4 type:
half4 value = texture2D(my16BitTex, texcoord.xy);
So, basically, the difference between a normalized 8-bit texture and a floating-point texture is that in the first case your values are brought into the [0..1] range and clamped, whereas in the latter you receive your values as-is (except for the 16<->32 conversion, see my example above).
Note that you'd probably want to use them with an FBO as a render target; in this case you need to know that not all of the formats may be attached as a render target. E.g. you cannot attach luminance and intensity formats.
Also, not all hardware supports filtering of floating-point textures, so check for support first if you need filtering.
Hope this helps.
FP textures have their own designated set of internal formats (GL_RGBA16F, GL_RGBA32F, etc.).
Regular textures store fixed-point data, so reading from them gives you values in the [0, 1] range. By contrast, FP textures give you values in the [-inf, +inf] range (not necessarily with a higher precision).
In many cases (like HDR rendering) you can easily get by without FP textures, just by transforming the values to fit in the [0, 1] range. But there are cases, like deferred rendering, when you may want to store, for example, world-space coordinates without caring about their range.
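For instance, a G-buffer position attachment might be allocated like this sketch (GL_RGBA16F is chosen as an example; this assumes the G-buffer FBO is already bound, and renderability/filtering of float formats varies by hardware):
GLuint posTex;
glGenTextures(1, &posTex);
glBindTexture(GL_TEXTURE_2D, posTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // sidestep float filtering support
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, posTex, 0); // holds world-space positions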