OpenGL/GLSL Color Attachment range

Is there a way, in GLSL/OpenGL textures, to store floats which are higher than 1 or lower than 0?
I'm working on a deferred rendering framework, but when I try to store the positions as non-homogeneous values (first shader) I only get values between 0 and 1 in my Phong shader (second shader).
Same with the normals; the lighting was displayed wrong.
The way to fix this was, in the first shader:
gbuffer[x] = normal * 0.5 + 0.5; and in the Phong shader: normal * 2.0 - 1.0; // (non-)homogeneous conversion
But I don't want to use this method.
So my current texture format is GL_RGB. I tried GL_RGB16 but then I get a black window.

GL_RGB16 is still a normalized format, which means that the sampled value is between 0.0 and 1.0.
What you need to get a range outside of [0.0, 1.0] is a floating point texture. The formats with 3 components are GL_RGB16F for 16-bit float components and GL_RGB32F for 32-bit float components. Those two formats are not guaranteed to be supported for render targets, though (see table 8.12 in GL 4.5 spec). You will need to use their 4-component versions if you need to render to them: GL_RGBA16F or GL_RGBA32F.
If you still have a fixed range for your values, the approach you tried, where you map your given range into [0.0, 1.0] by applying an offset/scale, actually looks very valid to me. Using for example GL_RGBA16 gives you 16 bits of precision per component, while GL_RGBA16F with the same memory usage gives you only about 11 bits of mantissa precision (10 stored bits plus the implicit leading bit), since 5 bits are used for the exponent and 1 for the sign.
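For illustration, a minimal sketch of attaching a half-float color buffer to a G-buffer FBO, assuming an OpenGL 3.0+ context (gbufferFBO, positionTex, width and height are placeholder names, not from the question):

// Create a G-buffer FBO with a GL_RGBA16F color attachment,
// so positions/normals outside [0,1] are stored unclamped.
const GLsizei width = 1280, height = 720; // example size
GLuint gbufferFBO, positionTex;
glGenFramebuffers(1, &gbufferFBO);
glBindFramebuffer(GL_FRAMEBUFFER, gbufferFBO);

glGenTextures(1, &positionTex);
glBindTexture(GL_TEXTURE_2D, positionTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, positionTex, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

GL_RGBA16F is one of the formats the spec requires to be color-renderable, which is why the 4-component variant is the safer choice here.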

Related

Difference between glColor3f and glColor3d

The code I'm working on, in a nutshell, uses OpenGL to represent data (stored as double) as the colour of some geometry. Using the default colour shading between vertices, it then samples the colour back over the geometry to see the values at different pixels and converts them back to my data. Currently this is done using 1D texture mapping, such that any value I draw and sample back can be exactly located on a scale. However, since the data I'm working with is stored as a double, a lot of precision is lost when all values drawn and sampled back are mapped onto a 14-bit texture map.
So to address that issue, now I'm implementing the code using floating-point colour.
I'm using an FBO with a GL_RGBA32F renderbuffer colour attachment. The thing which I don't understand is what will change if I set the colour of my vertices using glColor3d versus glColor3f.
If I'm understanding correctly, with a single precision floating point renderbuffer, the values I sample back for RGB will basically be GLfloat type, not GLdouble.
Also, is there any way I can configure my renderbuffer such that I can draw with GLdouble colour values and be able to sample back GLdouble values? Looking at the OpenGL 4.5 spec (p. 198) there aren't any colour formats with more than 32-bit float precision per colour channel. From my understanding, double-precision colour is fairly modern tech only supported on newer systems, which only confuses me more about the presence of glColor3d.
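For reference, a minimal sketch of the renderbuffer setup described above, assuming an OpenGL 3.0+ context (fbo, rbo, width and height are placeholder names):

// Single-precision float is the widest color-renderable format; there is no
// 64-bit ("double") color format, so values passed via glColor3d end up stored
// with at most float precision in this attachment.
GLuint fbo, rbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);

glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA32F, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rbo);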

How does opengl depth testing use the 24bit depth buffer?

I am using 32-bit float values as input to the vertex shader for the x, y, z positions of every vertex. But I have read that OpenGL uses 24 bits for the depth buffer and 8 bits for the stencil buffer.
Since I am copying the same 32-bit float that I receive as input into gl_Position in the vertex shader, I want to understand how OpenGL converts this 32-bit float to 24 bits for depth testing.
The gl_Position in the vertex shader is a clip space coordinate. There will be a division by w to generate normalized device coordinates, where the visible range is [-1,1] in OpenGL (by default, can be changed nowadays). Those values will be transformed according to the currently set glDepthRange parameters to finally get the window space z value, which is in the range [0,1].
The depth buffer just has to store these values, and, very much like color values, which are often stored with just 8 bits per channel, an integer depth buffer is used to represent fixed-point values in that range.
Quoting from section 13.6 "Coordinate Transformations" of the OpenGL 4.5 core profile spec (emphasis mine):
z_w may be represented using either a fixed-point or floating-point representation. However, a floating-point representation must be used if the draw framebuffer has a floating-point depth buffer. If an m-bit fixed-point representation is used, we assume that it represents each value k / (2^m - 1), where k in {0, 1, ..., 2^m - 1}, as k (e.g. 1.0 is represented in binary as a string of all ones).
So, the window space z_w value (which is in [0,1]) is just multiplied by 2^m - 1, rounded to an integer, and the result is stored in the buffer.
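As an illustration (not from the answer), the fixed-point conversion for a 24-bit depth buffer amounts to something like this:

#include <cmath>
#include <cstdint>

// Window-space depth z_w in [0,1] -> 24-bit fixed-point value, assuming m = 24
// and the k / (2^m - 1) representation quoted above.
uint32_t depth24_from_zw(double z_w)
{
    const uint32_t max24 = (1u << 24) - 1u; // 2^24 - 1, i.e. "all ones"
    return static_cast<uint32_t>(std::round(z_w * max24));
}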

How to get a floating-point color from GLSL

I am currently faced with a problem closely related to the OpenGL pipeline, and the use of shaders.
Indeed, I work on a project in which one of the steps consists of reading pixels from an image that we generate using OpenGL, with as much accuracy as possible: I mean that instead of reading integers, I would like to read floats. (So, instead of reading the value (134, 208, 108) for a pixel, I would like to obtain something like (134.180, 207.686, 108.413), for example.)
For this project, I used both vertex and fragment shaders to render my scene. I assume that the color computed and returned by the fragment shader is a vector of 4 floats (one per RGBA component) belonging to the "continuous" [0, 1] interval. But how can I get it back in my C++ code? Is there a way of doing it?
I thought of calling the glReadPixels() function just after having rendered my scene into a buffer, setting the format argument to GL_RGBA and the data type of the pixel data to GL_FLOAT. But I have the feeling that the values associated with the pixels that we read have already been cast to integers in the meantime, because the float numbers that I finally get correspond to integers in [0, 255] mapped back to [0, 1], without any gain in precision. A closer look at the OpenGL specification strengthens this idea: I think there is indeed a cast somewhere between rendering my scene and calling glReadPixels().
Do you have any idea about the way I can reach my objective ?
A default GL_RGBA8 render target stores each pixel component as an 8-bit normalized integer, so the precision is lost before you ever read anything back. You should render to a floating-point format such as GL_RGBA16F or GL_RGBA32F, where 16 and 32 are the bit depths of each component.
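A minimal sketch of the readback, assuming the scene was rendered into an FBO with a GL_RGBA32F color attachment (fbo, px and py are placeholder names):

// Read one pixel back as floats; no 8-bit quantization happens if the
// attachment really is a floating-point format such as GL_RGBA32F.
float pixel[4] = {0.f, 0.f, 0.f, 0.f};
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glReadBuffer(GL_COLOR_ATTACHMENT0);
glReadPixels(px, py, 1, 1, GL_RGBA, GL_FLOAT, pixel);
// pixel[] now holds the unclamped float values written by the fragment shader.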

Fragment shader output values

I'm using my alpha channel as an 8 bit integer index for something unrelated to blending so I want to carefully control the bit values. In particular, I need for all of the pixels from one FBO-rendered texture with a particular alpha value to match all of the pixels with the same alpha value in the shader. Experience has taught me to be careful when comparing floating point values for equality...
Setting the color values using a floating-point vec4 might not cause me issues, and my understanding is that even a half-precision 16-bit float can differentiate all 8-bit integer (0-255) values. But I would prefer to perform integer operations in the fragment shader so I am certain of the values.
Am I likely to incur a performance hit by performing integer ops in the fragment shader?
How is the output scaled? I read somewhere that it is valid to output integer vectors as the color of a fragment. But how is it scaled? If I send a uvec4 with integers 0-255, will it scale them appropriately? I'd like it to write the integer value directly into the pixel format; for integer formats I don't want any scaling. Perhaps for RGBA8, sending an int value above 255 would clamp it to 255, clamp negative ints to zero, and so on.
This issue is made difficult by the fact that I cannot debug by printing out the color values unless I grab the rendered images and examine them carefully. Perhaps I can draw a bright color if something fails to match.
Here is a relevant thread I found on this topic. It has confused me even more than before.
I suggest not using the color attachment's alpha channel, but an additional render target with an explicit integer format. This is available since at least OpenGL 3.1 (the oldest spec I looked at for this answer). See the OpenGL function glBindFragDataLocation, which binds a fragment shader out variable, in your case an int out $VARIABLENAME. For input into the next stage use an integer sampler. I refer you to the OpenGL 3.1 and GLSL 1.30 specifications for the details.
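A sketch of what that can look like, assuming GL 3.1+ (indexTex, program, width and height are names made up for this illustration):

// Host side: create an 8-bit unsigned-integer texture as a second color attachment.
GLuint indexTex;
glGenTextures(1, &indexTex);
glBindTexture(GL_TEXTURE_2D, indexTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, width, height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // integer textures need nearest filtering
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1,
                       GL_TEXTURE_2D, indexTex, 0);

const GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);

// Bind the fragment shader output "index" (declared as "out uint index;" in
// GLSL) to color attachment 1; this must be done before linking the program.
glBindFragDataLocation(program, 1, "index");

// In a later pass the attachment is sampled with a usampler2D, so the 0-255
// values come back as integers with no normalization or scaling involved.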

What exactly is a floating point texture?

I tried reading the OpenGL ARB_texture_float spec, but I still cannot get it into my head.
And how is floating point data related to just normal 8-bit per channel RGBA or RGB data from an image that I am loading into a texture?
You can read a little bit about it here.
Basically, a floating-point texture is a texture in which the data is of floating-point type. :)
That is, it is not clamped. So if you have 3.14f in your texture, you will read the same value in the shader.
You may create them with different numbers of channels. You may also create 16- or 32-bit textures depending on the format, e.g.
// create 32bit 4 component texture, each component has type float
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 16, 16, 0, GL_RGBA, GL_FLOAT, data);
where data could be like this:
float data[16 * 16 * 4]; // 16x16 texels, 4 float components (RGBA) each
for (int i = 0; i < 16 * 16 * 4; ++i) data[i] = sin(i * M_PI / 180.0f); // whatever
Then in the shader you can get exactly the same value (if you use a FLOAT32 texture).
e.g.
uniform sampler2D myFloatTex;
float value = texture2D(myFloatTex, texcoord.xy).r;
If you were using a 16-bit format, say GL_RGBA16F, then whenever you read it in the shader there will be a conversion. To avoid it you may use the half4 type (a half-precision vector type available in Cg/HLSL):
half4 value = texture2D(my16BitTex, texcoord.xy);
So, basically, the difference between a normalized 8-bit texture and a floating-point texture is that in the first case your values are brought into the [0..1] range and clamped, whereas in the latter you receive your values as they are (except for the 16 <-> 32 conversion, see my example above).
Note that you'd probably want to use them with an FBO as a render target; in that case you need to know that not all of the formats may be attached as a render target. E.g. you cannot attach luminance and intensity formats.
Also, not all hardware supports filtering of floating-point textures, so you need to check for it first if filtering matters for your case.
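For example, one quick way to find out whether a given floating-point format actually works as a render target on the current implementation is to attach it and ask the driver (a sketch; floatTex is a placeholder for an already-created FP texture):

// Attach the float texture to the bound FBO and check completeness.
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, floatTex, 0);
if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
    // This format cannot be used as a render target here; fall back or report.
}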
Hope this helps.
FP textures have a special designated range of internal formats (GL_RGBA16F, GL_RGBA32F, etc.).
Regular textures store fixed-point data, so reading from them gives you values in the [0, 1] range. In contrast, FP textures give you values in the [-inf, +inf] range (not necessarily with higher precision).
In many cases (like HDR rendering) you can easily get by without FP textures, just by transforming the values to fit in the [0, 1] range. But there are cases, like deferred rendering, where you may want to store, for example, world-space coordinates without caring about their range.