What exactly is a floating point texture?

I tried reading the OpenGL ARB_texture_float spec, but I still can't get my head around it.
How is floating point data related to the normal 8-bit-per-channel RGBA or RGB data from an image that I am loading into a texture?

I read a little bit about it, so here is a short summary.
Basically, a floating point texture is a texture in which the data is of floating point type :)
That is, it is not clamped. So if you have 3.14f in your texture, you will read back the same value in the shader.
You may create them with different numbers of channels. You may also create 16-bit or 32-bit textures depending on the internal format, e.g.
// create 32bit 4 component texture, each component has type float
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 16, 16, 0, GL_RGBA, GL_FLOAT, data);
where data could be like this:
float data[16*16*4]; // 16x16 texels, 4 float components (RGBA) each
for (int i = 0; i < 16*16*4; ++i) data[i] = sinf(i*M_PI/180.0f); // whatever
Then in the shader you can read back exactly the same value (if you use a 32-bit float texture), e.g.:
uniform sampler2D myFloatTex;
float value = texture2D(myFloatTex, texcoord.xy).r;
If you were using a 16-bit format, say GL_RGBA16F, then every read in the shader involves a conversion to 32-bit float. To avoid this you may use a half-precision type (half4 is a Cg type; desktop GLSL has no half type, GLSL ES would use a mediump qualifier instead):
half4 value = texture2D(my16BitTex, texcoord.xy);
So, basically, the difference between a normalized 8-bit texture and a floating point texture is that in the first case your values are brought into the [0..1] range and clamped, whereas in the latter you receive your values as-is (except for the 16<->32-bit conversion, see the example above).
Note that you'd probably want to use them with an FBO as a render target; in that case you need to know that not all formats may be attached as a render target. E.g. you cannot attach luminance and intensity formats.
Also, not all hardware supports filtering of floating point textures, so if you need filtering you should check for support first.
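If you need to check, a minimal sketch of a runtime query might look like this (assuming a GL 3.0+ context; on GL ES you would look for OES_texture_float_linear instead):
GLint numExt = 0;
int floatTexturesAvailable = 0;
glGetIntegerv(GL_NUM_EXTENSIONS, &numExt);
for (GLint i = 0; i < numExt; ++i) {
    const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
    if (strcmp(ext, "GL_ARB_texture_float") == 0) // filterable float formats
        floatTexturesAvailable = 1;
}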
Hope this helps.

FP textures have their own designated range of internal formats (GL_RGBA16F, GL_RGBA32F, etc.).
Regular textures store fixed-point data, so reading from them gives you values in the [0, 1] range. FP textures, by contrast, give you values in the [-inf, +inf] range (though not necessarily with higher precision).
In many cases (like HDR rendering) you can easily get by without FP textures, just by transforming the values to fit into the [0, 1] range. But there are cases, like deferred rendering, where you may want to store, for example, world-space coordinates without caring about their range.
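For illustration, allocating such a position attachment for a G-buffer might look like this (a hedged sketch; width, height and the currently bound FBO are assumptions, not part of the answer above):
// World-space positions can be negative and unbounded, so use a float format
GLuint posTex;
glGenTextures(1, &posTex);
glBindTexture(GL_TEXTURE_2D, posTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, posTex, 0);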

Related

glReadPixels from back buffer. Problem with float precision

I'm trying to get the color for my color picker, but I get a float value that differs from the value I stored. For example I set 0.5, but I read back 0.498039 (these are the actual numbers).
I don't build any FBO and read color from GL_BACK directly:
glReadBuffer(GL_BACK);
glReadPixels(x, y, 1, 1, GL_RGB, GL_FLOAT, &color);
How can I preserve the precision of the floating point value? Is it possible to change GL_FLOAT to something else that would preserve precision? Is it possible to get values greater than 1.0 in &color?
The precision is limited by the precision of the framebuffer (back buffer). This precision cannot be set individually and is (most likely) limited to 8 bits per channel. The default framebuffer is created once, when the OpenGL window and OpenGL context are created.
Thus it makes no sense to read the buffer into a 32-bit float target, because the source buffer only has 8 bits per channel.
However, it is possible to render the scene to a named Framebuffer Object whose attached renderbuffer has a floating point color format (e.g. GL_RGBA32F). See LearnOpenGL - Framebuffers.
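A sketch of that setup (fboWidth, fboHeight, x and y are placeholders):
GLuint fbo, rbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA32F, fboWidth, fboHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo);
// ... render the scene into the FBO ...
float color[3];
glReadPixels(x, y, 1, 1, GL_RGB, GL_FLOAT, color); // 0.5 now reads back as 0.5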

read and write integer 1-channel texture opengl

I want to:
create a readable and writable 1-channel texture that contains integers.
using a shader, write integer "I" to the texture.
use the texture as a source, sample it, and check whether the sample is equal to the integer I.
All this with core profile 3.3.
This is what I've got so far:
I create the texture like so:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0, GL_RED, GL_INT, (java.nio.ByteBuffer) null);
I've also tried GL_R8I and GL_RED_INTEGER, but that won't work.
I bind this texture to my FBO, set the blend func to (GL_ONE, GL_ONE) (additive) and render one quad using a shader that simply does:
layout(location = 2) out float value;
void main() { value = 1.0; }
Again, integers won't work here!
Next, I bind another FBO and use the above as a source. I use a shader that samples the above with:
"if (texture2D(Tshadow, v_sampler_coo).r == 1.0){discard;}" + "\n"
The problem is that the only way I've managed to get it to somehow work is by outputting:
out_diffuse = 1.0;
and comparing it with:
if (texture2D(Tshadow, v_sampler_coo).r == 1.0){discard;}
Any other value, even a power of two (2^-2, 2^-3, etc.), will not make the comparison true; nothing works except 1.0.
Now what I'd ideally like is to be able to write an integer and sample that integer. But I can't manage it. And even if I succeeded in writing an integer, the sampler2D type only returns normalized floats, right? And is that the only way to sample a color attachment?
UPDATE:
I found that if I write 0.5, then this works:
if (texture2D(Tshadow, v_sampler_coo).r == 0.4980392307){discard;}
The internal format GL_R8I is actually the correct format for your use case, and the spec explicitly lists that format as color-renderable.
layout(location = 2) out float value;
Why are you using a float output type if you intend to store integers? The OpenGL 3.3 core profile specification explicitly states the following:
Color values written by a fragment shader may be floating-point, signed integer, or unsigned integer. If the color buffer has a signed or unsigned normalized fixed-point format, color values are assumed to be floating-point and are converted to fixed-point as described in equations 2.6 or 2.4, respectively; otherwise no type conversion is applied.
However, as you put it:
Now what I'd ideally like is to be able to write an integer and sample that integer.
You can do that, assuming you use the correct integer sampler type in the shader. What you can't do is use blending with integers. To quote the spec:
Blending applies only if the color buffer has a fixed-point or
floating-point format. If the color buffer has an integer format,
proceed to the next operation.
So if you need the blending, the best option is to work with GL_R8 normalized integer format, and with floating point outputs in the shader.
With more modern GL, you could simply emulate the additive blending by directly sampling the previous value in the shader which is updating the texture. Such feedback loops where a shader reads exactly the texel location it is later going to write to are well-defined and allowed in recent GL versions, but unfortunately not in GL3.x.
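For reference, the non-blended integer path could look roughly like this (a sketch for GL 3.3 core; texture size and shader scaffolding are assumed):
// Integer formats need the *_INTEGER pixel format enum, which may be why
// the GL_R8I attempts above failed:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8I, width, height, 0,
             GL_RED_INTEGER, GL_BYTE, NULL);
// Writing fragment shader:
//   layout(location = 2) out int value;
//   void main() { value = 1; }
// Sampling fragment shader (note isampler2D, not sampler2D):
//   uniform isampler2D Tshadow;
//   if (texelFetch(Tshadow, ivec2(gl_FragCoord.xy), 0).r == 1) discard;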
UPDATE
I found that if I write 0.5, then this works:
if (texture2D(Tshadow, v_sampler_coo).r == 0.4980392307){discard;}
Well, 0.5 is not representable by normalized integer formats, and the bit depth does not even matter: for b bits the scale factor 2^b - 1 is odd, so 0.5 times it is never an integer. In the 8-bit case, the GL will convert 0.5 to 0.5*255 = 127.5. It is then implementation-specific whether this rounds to 127 or 128, so you will end up with either 127.0/255.0 (which is what you got) or 128.0/255.0.
Note on the rounding rules: the GL 3.3 spec states that the value after the multiplication
is then cast to an unsigned binary integer value with exactly b bits
with no rounding at all, so 127 should always be the result. However, the latest version, GL 4.5, states that it
returns one of the two unsigned binary integer values with exactly b bits which are closest to the floating-point value r (where rounding to nearest is preferred).
So actually, any rounding behavior is allowed...
As a final note: you should never compare floats for equality.
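If you stay with the normalized format, a tolerant comparison sidesteps the implementation-defined rounding, e.g. (a sketch using the question's names, with a one-step tolerance for 8-bit data):
float v = texture2D(Tshadow, v_sampler_coo).r;
// Both possible results (127/255 and 128/255) lie within 1/255 of 0.5
if (abs(v - 0.5) < 1.0 / 255.0) { discard; }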

Need to create a custom data 2D texture with reasonable precision

The idea
I need to create a 2D texture that will be fed with reasonably precise float values (I mean at least as precise as a GLSL mediump float). I want to store each pixel's distance from the camera in it. I don't want the GL Z-buffer's distance to the near plane, only my own lovely custom data :>
The problem/What I've tried
Using a standard texture as a color attachment, I don't get enough precision. Or maybe I missed something?
Using a depth attachment texture with GL_DEPTH_COMPONENT32, I get the clamped near-plane distance - rubbish.
So it seems I am stuck with not using a depth attachment, even though those seem to hold more precision. Is there a way to get mediump float precision with standard textures?
I find it strange that OpenGL doesn't have a generic container for arbitrary data, I mean with a custom bit depth. Or maybe I missed something again!
You can use floating point textures instead of an RGBA texture with 8 bits per channel. However, support for these depends on the devices you want to target; older mobile devices in particular often lack support for these formats.
Example for GL_RGBA16F (not tested):
glTexImage2D(GL_TEXTURE_2D, mipmap, GL_RGBA16F, mipmapWidth, mipmapHeight, 0, GL_RGBA, GL_HALF_FLOAT, null);
Now you can store the data from your fragment shader manually. However, clamping still occurs depending on your MVP. You also need to pass the data to the fragment shader.
There are also 32bit formats.
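The corresponding 32-bit call would be (same caveats, also untested):
glTexImage2D(GL_TEXTURE_2D, mipmap, GL_RGBA32F, mipmapWidth, mipmapHeight, 0, GL_RGBA, GL_FLOAT, null);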
There are a number of options for texture formats that give you more than 8-bit component precision.
If your values are in a pre-defined range, meaning that you can easily map your values into the [0.0, 1.0] interval, normalized 16-bit formats are your best option. For a single component texture, the format would be GL_R16. You can allocate a texture of this format using:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16, 512, 512, 0, GL_RED, GL_UNSIGNED_SHORT, NULL);
There are matching formats for 2, 3, and 4 components (GL_RG16, GL_RGB16, GL_RGBA16).
If you need a larger range of values that is not easily constrained, float textures become more attractive. The corresponding calls for 1 component textures are:
glTexImage2D(GL_TEXTURE_2D, 0, GL_R16F, 512, 512, 0, GL_RED, GL_HALF_FLOAT, NULL);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32F, 512, 512, 0, GL_RED, GL_FLOAT, NULL);
Again, there are matching formats for 2, 3, and 4 components.
The advantage of float textures is that, just based on the nature of float values encoded with a mantissa and exponent, they can cover a wide range. This comes at the price of less precision. For example, GL_R16F gives you 11 bits of precision, while GL_R16 gives you a full 16 bits. Of course GL_R32F gives you plenty of precision (24 bits) as well as a wide range, but it uses twice the amount of storage.
You would probably have an easier time accomplishing this in GLSL than through the C API. However, any custom depth buffer will be consistently and considerably slower than the one provided by OpenGL. Don't expect it to operate in real-time.
If your aim is to have access to the raw distance of any fragment from the camera, remember that depth buffer values are z/w, where z is the distance from the near plane and w is the distance from the camera. So it is possible to extract the distance quickly and with an acceptable amount of precision. However, you are still faced with your original problem: fragments between the camera and the near plane will not be in the depth buffer.
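To sketch the custom-data route for the original goal (assuming a GL_R32F color attachment and a view-space position varying passed from the vertex shader; the names are made up for illustration):
in vec3 vViewPos;            // view-space position from the vertex shader
out float distanceToCamera;  // rendered into a GL_R32F attachment
void main() {
    // In view space the camera sits at the origin, so the unclamped
    // per-fragment distance is simply the length of the position
    distanceToCamera = length(vViewPos);
}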

How to get a floating-point color from GLSL

I am currently faced with a problem closely related to the OpenGL pipeline, and the use of shaders.
Indeed, I am working on a project in which one of the steps consists of reading pixels from an image that we generate with OpenGL, with as much accuracy as possible: instead of reading integers, I would like to read floats. (So, instead of reading the value (134, 208, 108) for a pixel, I would like to obtain something like (134.180, 207.686, 108.413), for example.)
For this project, I use both vertex and fragment shaders to render my scene. I assume that the color computed and returned by the fragment shader is a vector of 4 floats (one per RGBA component) belonging to the "continuous" [0, 1] interval. But how can I get it back in my C++ code? Is there a way to do it?
I thought of calling glReadPixels() just after rendering my scene into a buffer, setting the format argument to GL_RGBA and the pixel data type to GL_FLOAT. But I have the feeling that the values associated with the pixels I read have already been cast to integers in the meantime, because the float numbers I finally get correspond to the interval [0, 255] scaled into [0, 1], without any gain in precision. A closer look at the OpenGL specification strengthens this idea: I think there is indeed a cast somewhere between rendering my scene and calling glReadPixels().
Do you have any idea how I can reach my objective?
The problem is not the fragment shader but the framebuffer you render into: the default framebuffer stores pixel components as 8-bit integers. You should render into an attachment with a floating point internal format, such as GL_RGBA16F or GL_RGBA32F, where 16 and 32 are the bit depths of each component, and read your pixels back from that.
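Putting it together, a hedged sketch with a texture attachment (width, height, x and y are placeholders; an FBO is assumed to be bound):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// ... render the scene ...
float pixel[4];
glReadPixels(x, y, 1, 1, GL_RGBA, GL_FLOAT, pixel); // unquantized, may exceed 1.0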

Fragment shader output values

I'm using my alpha channel as an 8-bit integer index for something unrelated to blending, so I want to carefully control the bit values. In particular, I need all of the pixels with a particular alpha value in one FBO-rendered texture to match all of the pixels with the same alpha value in the shader. Experience has taught me to be careful when comparing floating point values for equality...
Setting the color values via a floating point vec4 might not cause me issues, and my understanding is that even a half-precision 16-bit float can differentiate all 8-bit integer (0-255) values. But I would prefer to perform integer operations in the fragment shader so I can be certain of the values.
Am I likely to incur a performance hit by performing integer ops in the fragment shader?
How is the output scaled? I read somewhere that it is valid to send integer vectors as color output for a fragment, but how is it scaled? If I send a uvec4 with integers 0-255, will it scale them appropriately? I'd like it to write the integer value directly into the pixel format; for integer formats I don't want any scaling. Perhaps for RGBA8, sending an int value above 255 would clamp it to 255, clamp negative ints to zero, and so on.
This issue is made difficult by the fact that I cannot debug by printing out the color values unless I grab the rendered images and examine them carefully. Perhaps I can draw a bright color if something fails to match.
Here is a relevant thread I found on this topic. It has confused me even more than before.
I suggest not using the color attachment's alpha channel, but an additional render target with an explicit integer format. This has been available since at least OpenGL 3.1 (the oldest spec I looked at for this answer). See the OpenGL function glBindFragDataLocation, which binds a fragment shader out variable - in your case an int out $VARIABLENAME - to a color attachment. For input into the next stage, use an integer sampler. I refer you to the OpenGL 3.1 and GLSL 1.30 specifications for the details.
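A minimal sketch of that suggestion (all names are made up for illustration; the output location must match the color attachment holding your integer texture):
// Before linking: route the integer out variable to color attachment 1,
// whose texture should have an integer format such as GL_R8UI
// (allocated with GL_RED_INTEGER / GL_UNSIGNED_BYTE)
glBindFragDataLocation(program, 1, "index");
glLinkProgram(program);
// Fragment shader:             out uint index;   ...   index = 200u;
// Next pass (integer sampler): uniform usampler2D indexTex;
//                              uint i = texture(indexTex, uv).r; // the integer written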