I'm using my alpha channel as an 8-bit integer index for something unrelated to blending, so I want to carefully control the bit values. In particular, I need all of the pixels from one FBO-rendered texture with a particular alpha value to match all of the pixels with the same alpha value in the shader. Experience has taught me to be careful when comparing floating point values for equality...
Setting the color values using a floating-point vec4 might not cause me issues, and my understanding is that even a half-precision 16-bit float can differentiate all 8-bit integer (0-255) values. But I would prefer to perform integer operations in the fragment shader so I am certain of the values.
Am I likely to incur a performance hit by performing integer ops in the fragment shader?
How is the output scaled? I read somewhere that it is valid to send integer vectors as the color output of a fragment, but how are they scaled? If I send a uvec4 with integers 0-255, will it scale that appropriately? I'd like it to write the integer value directly into the pixel format; for integer formats I don't want any scaling at all. Perhaps for RGBA8, sending an int value above 255 would clamp it to 255, negative ints would clamp to zero, and so on.
This issue is made difficult by the fact that I cannot debug by printing out the color values unless I grab the rendered images and examine them carefully. Perhaps I can draw a bright color if something fails to match.
Here is a relevant thread I found on this topic. It has confused me even more than before.
I suggest not using the color attachment's alpha channel, but an additional render target with an explicit integer format. This has been available since at least OpenGL-3.1 (the oldest spec I looked at for this answer). See the OpenGL function glBindFragDataLocation, which binds a fragment shader out variable, in your case an out int $VARIABLENAME. For input into the next stage, use an integer sampler. I refer you to the specifications of OpenGL-3.1 and GLSL-1.30 for the details.
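A minimal sketch of what that setup can look like (the attachment index and the names indexTex / indexOut / program / width / height are just assumptions for illustration):

// Host side: create an 8-bit unsigned-integer color attachment
GLuint indexTex;
glGenTextures(1, &indexTex);
glBindTexture(GL_TEXTURE_2D, indexTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, width, height, 0,
             GL_RED_INTEGER, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // integer textures cannot be filtered
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, indexTex, 0);

const GLenum bufs[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
glDrawBuffers(2, bufs);

// Bind the fragment shader output to attachment 1 *before* linking
glBindFragDataLocation(program, 1, "indexOut");
glLinkProgram(program);

In the first pass the fragment shader declares out uint indexOut; and writes the index unscaled (integer formats are never normalized, so 42u stays 42). In the next pass you read it through a uniform usampler2D indexTex; for example uint index = texelFetch(indexTex, ivec2(gl_FragCoord.xy), 0).r; which avoids any filtering or conversion.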
Related
I want to count the number of fragments rendered to each pixel (with the depth test disabled). I have tried enabling blending and setting glBlendFunc(GL_ONE, GL_ONE) to accumulate them. This works just fine with a float32 format texture bound to an FBO, but I think a uint32 format texture (e.g. GL_R32UI) is more intuitive for this task. However, I can't get the expected behavior. It seems each fragment just overwrites the texture. I just wonder if there are other methods to do the accumulation on integer format textures.
However, I can't get the expected behavior. It seems each fragment just overwrites the texture.
That's because the blending stage is not available on pure integer framebuffer formats.
but I think a uint32 format texture (e.g. GL_R32UI) is more intuitive for this task.
Well, is it? What does "intuitive" even mean here? First of all, a GL_R16F format is probably enough for a reasonable amount of overdraw, and it would reduce bandwidth demands a lot (which seems to be the limiting factor for such a pass).
I just wonder if there are other methods to do the accumulation on integer format textures.
I can see two ways. I doubt that either of them is really more "intuitive", but if you absolutely need the result as an integer, you could try these:
Don't use a framebuffer at all, but use image load/store on an unsigned integer texture in the fragment shader. Use atomic operations, in particular imageAtomicAdd, to count the number of fragments at each fragment location. Note that if you go that route, you're outside of the GL's automatic synchronization paths, and you'll have to add an explicit glMemoryBarrier call after that render pass.
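A rough sketch of that route, assuming GL 4.2+ and an existing GL_R32UI texture called countTex (the names are made up for illustration):

// fragment shader: bump the per-pixel counter atomically
#version 420
layout(r32ui, binding = 0) uniform uimage2D fragCounts;
void main() {
    imageAtomicAdd(fragCounts, ivec2(gl_FragCoord.xy), 1u);
    // no color output needed; render without a color attachment or with glColorMask all-false
}

// host side
glBindImageTexture(0, countTex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_R32UI);
// ... draw the geometry ...
glMemoryBarrier(GL_SHADER_IMAGE_ACCESS_BARRIER_BIT | GL_TEXTURE_FETCH_BARRIER_BIT);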
You could also just use a standard normalized integer format like GL_R8 (or GL_R16), use blending as before, but have the fragment shader output 1.0/255.0 (or 1.0/65535.0, respectively). The data which ends up in the framebuffer will be integer in the end. If you need this data on the CPU, you can directly read it back; if you need it on the GPU, you can use glTextureView to reinterpret the data as an unnormalized integer texture without a copy/conversion step.
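That second option might look roughly like this (accumTex, width and height are assumed to already exist; glTextureView needs GL 4.3 and immutable storage):

// accumulate into a normalized GL_R8 target with additive blending
glBindTexture(GL_TEXTURE_2D, accumTex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_R8, width, height);   // immutable storage, required for glTextureView
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, accumTex, 0);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
// the fragment shader simply writes:  out float count;  ...  count = 1.0 / 255.0;

// later: reinterpret the very same storage as unnormalized integers, no copy involved
GLuint accumView;
glGenTextures(1, &accumView);
glTextureView(accumView, GL_TEXTURE_2D, accumTex, GL_R8UI, 0, 1, 0, 1);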
I'm writing a simple OpenGL program that involves rendering to a depth texture offscreen. However, I'm dealing with large depths that exceed what can be represented by a float's precision. As a result I need to use unsigned int for drawing my points. I run into two issues when I try to implement this.
1) Whenever I attempt to draw a VBO that uses unsigned int (screen coordinates) for drawing, the values don't fall within the -1 to 1 range, so none of them draw to the screen. The only way I can find to fix this problem is by using an orthographic projection matrix to adjust them so they draw to screen coordinates.
Is this understanding correct, or is there an easier way to do it?
If it is correct, how do I properly implement it for what I want?
2) Secondly, when drawing this way, is there any way to preserve the initial values (not converting them to floats when drawing) so that they are unchanged when you read them back again? This is necessary because my objective is to create a depth buffer of random points with random depths up to 2^32. If the values get converted to floats, precision is lost, so the data read out again is not the same as what was put in.
This is the wrong solution to the problem. To answer your question itself, gl_Position is a vec4. And therefore, the depth that OpenGL sees is a float. There's nothing you can do to change that, short of ignoring the depth buffer entirely and doing "depth tests" yourself in the fragment shader.
The preferred solution to the problem is to use a floating-point depth buffer, using GL_DEPTH_COMPONENT32F or something of the kind. But that alone is insufficient, due to an unfortunate legacy issue with how OpenGL defines its coordinate transforms. See, floats put a lot of precision into the range [0, 1], but it's biased closer to zero. Because of the way OpenGL defines its transforms, that precision gets lost along the way; effectively, the exponent part of the float never gets used. It makes a 32-bit float behave like a 24-bit fixed-point value.
OpenGL has fixed that problem with ARB_clip_control, which restores the ability to use full 32-bit floats effectively. You should attempt to employ that if possible.
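The usual "reversed-Z" setup with a float depth attachment looks something like the following sketch (depthTex, width and height are placeholders; glClipControl is core in GL 4.5):

// 32-bit float depth attachment
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32F, width, height, 0,
             GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, depthTex, 0);

// switch to a [0, 1] clip-space depth range and reverse the depth mapping
glClipControl(GL_LOWER_LEFT, GL_ZERO_TO_ONE);
glClearDepth(0.0);              // with reversed depth, "far" clears to 0.0
glDepthFunc(GL_GREATER);
// the projection matrix must also be built so that near maps to 1 and far maps to 0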
I am currently faced with a problem closely related to the OpenGL pipeline, and the use of shaders.
Indeed, I work on a project in which one of the steps consists of reading pixels from an image that we generate using OpenGL, with as much accuracy as possible: I mean that instead of reading integers, I would like to read float numbers. (So, instead of reading the value (134, 208, 108) for a pixel, I would like to obtain something like (134.180, 207.686, 108.413), for example.)
For this project, I used both vertex and fragment shaders to render my scene. I assume that the color computed and returned by the fragment shader is a vector of 4 floats (one per RGBA component) belonging to the "continuous" [0, 1] interval. But how can I get it in my C++ code? Is there a way of doing it?
I thought of calling the glReadPixels() function just after having rendered my scene to a buffer, setting the format argument to GL_RGBA and the data type of the pixel data to GL_FLOAT. But I have the feeling that the values associated with the pixels I read have already been cast to integers in the meantime, because the float numbers that I finally get correspond to the [0, 255] values mapped back into [0, 1], without any gain in precision. A closer look at the OpenGL specifications strengthens this idea: I think there is indeed a cast somewhere between rendering my scene and calling glReadPixels().
Do you have any idea about the way I can reach my objective ?
The GL_RGBA format of the framebuffer you are rendering to stores pixel components as 8-bit integers, so the precision is lost before glReadPixels() is ever called. You should use a floating-point format, such as GL_RGBA16F or GL_RGBA32F, where 16 and 32 are the bit depths of each component.
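A sketch of the whole round trip under that assumption (colorTex, width and height are assumed to already exist, and <vector> is included):

// render target with real float storage
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

// ... render the scene into this FBO ...

// read the unclamped, unconverted floats back
std::vector<float> pixels(width * height * 4);
glReadPixels(0, 0, width, height, GL_RGBA, GL_FLOAT, pixels.data());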
I'm building a LIDAR simulator in OpenGL. This means that the fragment shader returns the length of the light vector (the distance) in place of one of the color channels, normalized by the distance to the far plane (so it'll be between 0 and 1). In other words, I use red to indicate light intensity and blue to indicate distance; and I set green to 0. Alpha is unused, but I keep it at 1.
Here's my test object, which happens to be a rock:
I then write the pixel data to a file and load it into a point cloud visualizer (one point per pixel) — basically the default. When I do that, it becomes clear that all of my points are in discrete planes each located at a different depth:
I tried plotting the same data in R. It doesn't show up initially with the default histogram because the density of the planes is pretty high. But when I set the breaks to about 60, I get this:
I've tried shrinking the distance between the near and far planes, in case it was a precision issue. First I was doing 1–1000, and now I'm at 1–500. It may have decreased the distance between planes, but I can't tell, because it means the camera has to be closer to the object.
Is there something I'm missing? Does this have to do with the fact that I disabled anti-aliasing? (Anti-aliasing was causing even worse periodic artifacts, but between the camera and the object instead. I disabled line smoothing, polygon smoothing, and multisampling, and that took care of that particular problem.)
Edit
These are the two places the distance calculation is performed:
The vertex shader calculates ec_pos, the position of the vertex relative to the camera.
The fragment shader calculates light_dir0 from ec_pos and the camera position and uses this to compute a distance.
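Roughly, that pair of shaders might look like the sketch below; this is only a reconstruction from the description above, and the uniforms far_plane and light_intensity are made-up stand-ins:

// vertex shader (legacy-style GLSL)
varying vec4 ec_pos;                         // eye-space position of the vertex
void main() {
    ec_pos      = gl_ModelViewMatrix * gl_Vertex;
    gl_Position = gl_ProjectionMatrix * ec_pos;
}

// fragment shader
varying vec4 ec_pos;                         // interpolated per fragment
uniform float far_plane;
uniform float light_intensity;
void main() {
    vec3  light_dir0 = ec_pos.xyz;           // the camera sits at the eye-space origin
    float dist       = length(light_dir0) / far_plane;    // normalized to [0, 1]
    gl_FragColor = vec4(light_intensity, 0.0, dist, 1.0); // red = intensity, blue = distance
}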
Is it because I'm calculating ec_pos in the vertex shader? How can I calculate ec_pos in the fragment shader instead?
There are several possible issues I can think of.
(1) Your depth precision. The far plane has very little effect on resolution; the near plane is what's important. See Learning to Love your Z-Buffer.
(2) The more probable explanation, based on what you've provided, is the conversion/saving of the pixel data. The shader outputs floating point values, but these are stored in the framebuffer, which will typically have only 8 bits per channel. For color, what that means is that your floats will be mapped to the underlying 8-bit (fixed-width, integer) representation, so each channel can only take 256 distinct values.
If you want to output pixel data as the true floats they are, you should make a 32-bit floating point RGBA FBO (with e.g. GL_RGBA32F or something similar). This will store actual floats. Then, when you read your data back from the GPU, you will get the original shader values.
I suppose you could alternatively encode a single float in a vec4 with some multiplication, if you don't have a floating-point FBO implementation handy.
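That encoding trick usually looks roughly like this (a sketch only; it assumes the value is already in [0, 1) and an 8-bit-per-channel RGBA target):

// pack a [0, 1) float into four 8-bit channels
vec4 packFloat(float v) {
    vec4 enc = fract(vec4(1.0, 255.0, 65025.0, 16581375.0) * v);
    enc -= enc.yzww * vec4(1.0/255.0, 1.0/255.0, 1.0/255.0, 0.0);
    return enc;
}

// recover the float after reading the four channels back
float unpackFloat(vec4 enc) {
    return dot(enc, vec4(1.0, 1.0/255.0, 1.0/65025.0, 1.0/16581375.0));
}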
I tried reading the OpenGL ARB_texture_float spec, but I still cannot get it into my head...
And how is floating point data related to just normal 8-bit per channel RGBA or RGB data from an image that I am loading into a texture?
You can read a little bit about it here.
Basically, a floating point texture is a texture in which the data is of floating point type :)
That is, it is not clamped. So if you store 3.14f in your texture, you will read back the same value in the shader.
You may create them with different numbers of channels. You may also create 16- or 32-bit textures depending on the format, e.g.
// create 32bit 4 component texture, each component has type float
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, 16, 16, 0, GL_RGBA, GL_FLOAT, data);
where data could be like this:
float data[16*16*4];   // 4 floats (RGBA) per texel
for(int i=0;i<16*16*4;++i) data[i] = sin(i*M_PI/180.0f); // whatever
Then in the shader you can get back exactly the same value (if you use a FLOAT32 texture).
e.g.
uniform sampler2D myFloatTex;
float value = texture2D(myFloatTex, texcoord.xy).r;
If you were using a 16-bit format, say GL_RGBA16F, then whenever you read it in the shader there is a conversion to 32-bit float. To avoid this you may use the half4 type (Cg/HLSL syntax; core GLSL has no half type):
half4 value = texture2D(my16BitTex, texcoord.xy);
So, basically, the difference between a normalized 8-bit texture and a floating point texture is that in the first case your values will be brought into the [0..1] range and clamped, whereas in the latter you will receive your values as-is (except for the 16<->32 conversion, see my example above).
Note that you'd probably want to use them with an FBO as a render target; in that case you need to know that not all of the formats can be attached as a render target. E.g. you cannot attach luminance and intensity formats.
Also, not all hardware supports filtering of floating point textures, so you need to check that first for your case if you need filtering.
Hope this helps.
FP textures have their own designated set of internal formats (GL_RGBA16F, GL_RGBA32F, etc.).
Regular textures store fixed-point data, so reading from them gives you values in the [0, 1] range. FP textures, in contrast, give you values in the [-inf, +inf] range (not necessarily with a higher precision).
In many cases (like HDR rendering) you can easily get by without FP textures, just by transforming the values to fit into the [0, 1] range. But there are cases, like deferred rendering, where you may want to store, for example, world-space coordinates without caring about their range.