What is the advantage of, or need for, integer and SNORM textures? When do we use the [-1.0, 1.0] range for textures?
Is there any example where we specifically need integer or SNORM textures?
According to https://www.opengl.org/registry/specs/EXT/texture_snorm.txt,
signed normalized integer texture formats are used to represent floating-point values in the range [-1.0, 1.0], with an exact representation of floating-point zero.
It is nice for storing noise, especially Perlin noise: better than float textures, because no bits are wasted on an exponent.
Unfortunately, as of GL 4.4, none of the SNORM formats can be rendered to.
You can use the UNORM formats and remap with * 2 - 1 in the shader for the same effect, although there may be issues with getting an exact 0.
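As a concrete sketch of the upload side (illustrative sizes and names, no error checking, assuming a GL 3.0+ header/loader such as glad or GLEW), a signed-byte noise table could go into an 8-bit SNORM texture like this, with the shader then reading values in [-1.0, 1.0] directly:

/* Sketch: upload signed-byte noise into an 8-bit SNORM texture. */
static GLuint create_snorm_noise_texture(const GLbyte *noise /* 256*256 values in [-127, 127] */)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    /* GL_R8_SNORM maps the signed bytes to [-1.0, 1.0], with 0 represented exactly. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8_SNORM, 256, 256, 0,
                 GL_RED, GL_BYTE, noise);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    return tex;
}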
I have an ARGB float texture that contains data. One of the data entries is an index that points to another pixel in the texture.
This is the code that decodes the index into a UV coordinate, which can be used with texelFetch to read the pixel it points to:
ivec2 getTexelfetchCoord_TEXTURE(float index)
{
    return ivec2( mod(index, TEXTURE_WIDTH), int(index / TEXTURE_WIDTH) );
}
This works fine as long as the texture is no larger than 4096x4096. Any larger than that, and the floating-point index value becomes inaccurate due to precision issues.
The problem is that a 32-bit float can only represent integers exactly up to 24 bits, which means the U and V components get only 12 bits each. 2^12 = 4096, so the maximum range is 4096. Using the sign I could extend this to 8192, but that is still not enough for my purpose.
The obvious solution would be to store the index as two separate entries, U and V coordinates. However, for reasons that are too complex to get into here, this option is not available to me.
So, I wonder, is there a way to pack a signed 32-bit integer into a float, and unpack it back into an int with the full 32 bits of precision? Actually, I am not even sure whether ints in OpenGL are really 32-bit, or whether they are in fact internally stored as floats with the same 24-bit range...
Basically, I would like to pack UV coordinates into a single float and unpack them into a UV coordinate again, with an accurate range beyond 4096x4096. Is this possible in GLSL?
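For what it's worth, the usual trick is to reinterpret the bits of the float rather than convert its numeric value; GLSL 3.30+ exposes this as floatBitsToUint/uintBitsToFloat. A minimal C sketch of the idea (hypothetical helper names, and note the caveat that an arbitrary bit pattern can alias a NaN, which some float-texture paths may not preserve; an integer texture format avoids that pitfall entirely):

#include <stdint.h>
#include <string.h>

/* Pack a 16-bit U/V pair into the bit pattern of a 32-bit float so it
   survives storage in a float texture channel without losing mantissa
   precision. The float is only used as a 32-bit container. */
static float pack_uv(uint16_t u, uint16_t v)
{
    uint32_t bits = ((uint32_t)v << 16) | u;
    float f;
    memcpy(&f, &bits, sizeof f); /* bit-cast, not a numeric conversion */
    return f;
}

static void unpack_uv(float f, uint16_t *u, uint16_t *v)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);
    *u = (uint16_t)(bits & 0xFFFFu);
    *v = (uint16_t)(bits >> 16);
}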
glTexImage2D takes an internalFormat (which specifies the number of bits and the data type/encoding), a format (without number of bits or encoding), and a type.
Is it possible, for example, to let OpenGL convert passed pixel data containing 32-bit integers from format GL_RGB_INTEGER and type GL_INT to the internal format GL_RGB32F?
The wiki article https://www.khronos.org/opengl/wiki/Pixel_Transfer#Format_conversion suggests to me that this is possible, as it states:
Pixels specified by the user must be converted between the user-specified format (with format and type) and the internal representation controlled by the image format of the image.
But I wasn't able to read from a floating-point sampler in the shader.
The _INTEGER pixel transfer formats are only to be used for transferring data to integer image formats. You are filling in a floating-point texture, so that doesn't qualify. You should have gotten an OpenGL Error.
Indeed, the very article you linked to spells this out:
Also, if "_INTEGER" is specified but the image format is not integral, then the transfer fails.
GL_RGB32F is not an integral image format.
If you remove the _INTEGER part, then the pixel transfer will "work". OpenGL will assume that the integer data are normalized values, and therefore you will get floating-point values on the range [-1, 1]. That is, if you pass a 32-bit integer value of 1, the corresponding floating-point value will be 1/(2^31-1), which is a very small number (and thus, almost certainly just 0.0).
If you want OpenGL to cast the integer as if by a C cast (float)1... well, there's actually no way to do that. You'll just have to convert the data yourself.
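A minimal sketch of both workable routes, assuming hypothetical width, height and src variables, no error checking, and a GL 3.0+ header/loader: either keep the data integral (GL_RGB32I plus an isampler2D in the shader), or do the cast yourself before uploading as GL_FLOAT.

#include <stdlib.h>

void upload_int_data(int width, int height, const GLint *src)
{
    /* (a) Keep the data integral: integer internal format, read in the
           shader with an isampler2D. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32I, width, height, 0,
                 GL_RGB_INTEGER, GL_INT, src);

    /* (b) Or convert to float yourself, then upload to GL_RGB32F as plain
           GL_FLOAT data (no _INTEGER format, so no normalization occurs). */
    size_t count = (size_t)width * height * 3;
    float *tmp = malloc(count * sizeof *tmp);
    for (size_t i = 0; i < count; ++i)
        tmp[i] = (float)src[i]; /* the C cast OpenGL won't do for you */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB32F, width, height, 0,
                 GL_RGB, GL_FLOAT, tmp);
    free(tmp);
}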
The OpenGL documentation says very little about these two functions. When would it make sense to use glTexParameterIiv instead of glTexParameteriv, or even glTexParameterfv?
If the values for GL_TEXTURE_BORDER_COLOR are specified with glTexParameterIiv or glTexParameterIuiv, the values are stored unmodified with an internal data type of integer. If specified with glTexParameteriv, they are converted to floating point with the following equation: f = (2c + 1)/(2^b - 1). If specified with glTexParameterfv, they are stored unmodified as floating-point values.
You sort of answered your own question with the snippet you pasted. Traditional textures are fixed-point (unsigned normalized, where values like 255 are converted to 1.0 through normalization), but GL 3.0 introduced integral (signed / unsigned integer) texture types (where integer values stay integers).
If you had an integer texture and wanted to assign a border color (for use with the GL_CLAMP_TO_BORDER wrap mode), you would use one variant of those two functions (depending on whether you want signed or unsigned).
You cannot filter integer textures, but you can still have texture coordinate wrap behavior. Since said textures are integer and glTexParameteriv (...) normalizes the color values it is passed, an extra function had to be created to keep the color data integer.
You will find this same sort of thing with glVertexAttribIPointer (...) and so forth; adding support for integer data (as opposed to simply converting integer data to floating-point) to the GL pipeline required a lot of new commands.
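For example, a sketch of setting a border color on an integer texture versus a conventional one (illustrative values, no error checking, assumes a GL 3.0+ header/loader):

/* For an integer texture (e.g. GL_RGBA32I) used with GL_CLAMP_TO_BORDER,
   the border color must stay integer: */
GLint border_i[4] = { 0, 0, 0, 1 };
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
glTexParameterIiv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border_i);

/* For a normalized or float texture you would use the float variant: */
GLfloat border_f[4] = { 0.0f, 0.0f, 0.0f, 1.0f };
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border_f);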
In OpenGL, vertices are specified in the -1.0 to 1.0 range in NDC and are then mapped to the actual screen. But isn't it possible that, with a very large screen resolution, it becomes impossible to address an exact pixel location with this limited floating-point range?
So, mathematically, how large would the screen resolution have to be for that to happen?
A standard (IEEE 754) 32-bit float has 24 bits of precision in the mantissa. 23 bits are stored, plus an implicit leading 1. Since we're looking at a range of -1.0 to 1.0 here, we can also include the sign bit when estimating the precision. So that gives 25 bits of precision.
25 bits of precision is enough to cover 2^25 values. 2^25 = 33,554,432. So with float precision, we could handle a resolution of about 33,554,432 x 33,554,432 pixels. I think we're safe for a while!
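A quick way to sanity-check that figure is to look at the spacing of 32-bit floats just below 1.0 (the coarsest spacing inside the NDC range) and divide the width of [-1, 1] by it; a small C sketch:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Spacing between adjacent floats just below 1.0 is 2^-24. */
    float step = 1.0f - nextafterf(1.0f, 0.0f);
    printf("spacing just below 1.0: %g\n", step);                     /* ~5.96e-08 */
    printf("distinct positions across [-1, 1]: %.0f\n", 2.0 / step);  /* 33554432 */
    return 0;
}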
Generally the coordinates used for rasterization are not floating-point at all.
They are fixed-point, with a few bits reserved for sub-pixel accuracy (you absolutely need this, since pixel coverage for things like triangles is based on distance from the pixel center).
The amount of sub-pixel precision you are afforded really depends on the value of GL_MAX_VIEWPORT_DIMS. But if GL_MAX_VIEWPORT_DIMS did not exist, then it would certainly make sense to use floating-point pixel coordinates, since you would want to support a massive (potentially unknown) range of coordinates.
A minimal OpenGL implementation must provide 4 bits of sub-pixel precision (GL_SUBPIXEL_BITS), so if your GPU used 16 bits for raster coordinates, that would give you 12 bits (integer) + 4 bits (fractional) to spread across GL_MAX_VIEWPORT_DIMS (the value would probably be 4096 for 12.4 fixed-point). Such an implementation would limit the integer coordinates to the range [0, 4095] and would divide each of those integer coordinates into 16 sub-pixel positions.
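To make the 12.4 idea concrete, here is a hypothetical sketch of how such a rasterizer might snap a floating-point window coordinate onto its fixed-point grid (the constant and helper name are made up for illustration):

#include <stdint.h>
#include <stdio.h>

#define SUBPIXEL_BITS 4  /* matches the minimum GL_SUBPIXEL_BITS */

/* Snap a window-space coordinate to 12.4 fixed point: 12 integer bits,
   16 sub-pixel positions per pixel. */
static uint16_t to_fixed_12_4(float window_coord)
{
    return (uint16_t)(window_coord * (1 << SUBPIXEL_BITS) + 0.5f);
}

int main(void)
{
    printf("%u\n", to_fixed_12_4(123.37f)); /* 1974, i.e. 123 + 6/16 */
    return 0;
}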
Relevant reading: http://www.opengl.org/wiki/Image_Format#Color_formats
Normalized texture formats (e.g., GL_RGB8_SNORM and GL_RGB16) store integers that map to floating-point ranges ([-1.0, 1.0] for signed normalized, [0.0, 1.0] for unsigned normalized).
It seems to me like there's a very good reason for having GL_RGB32, GL_RGBA32_SNORM, etc. tokens: their precision would surpass dedicated floating-point formats like GL_RGB32F. Also, for completeness: why have normalized formats for 8 and 16 bits, but not 32?
So, why don't GL_RGB32, GL_RGBA32 exist?
It seems to me like there's a very good reason for having GL_RGB32, GL_RGBA32_SNORM, etc. tokens: their precision would surpass dedicated floating-point formats like GL_RGB32F.
You've just answered your own question. They don't allow it because to allow it would mean either discarding 8 bits of that extra precision in the conversion to single-precision floats, or converting those 32-bit normalized integers to double-precision floats.
And you'll notice that not even GL 4.2 allows for double-precision float textures.
It's not allowed because it simply wouldn't be useful on current hardware. Current hardware doesn't support it because supporting it would mean fetching double-precision values from textures.
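To illustrate the precision argument, here's a small C sketch showing that two adjacent codes of a hypothetical 32-bit unsigned-normalized channel would collapse to the same single-precision float:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Adjacent 32-bit codes differ by ~2.3e-10 after normalization, far
       below the ~6e-8 a 24-bit float mantissa can resolve near 1.0. */
    uint32_t a = 0xFFFFFFFEu, b = 0xFFFFFFFFu;
    float fa = (float)((double)a / 4294967295.0);
    float fb = (float)((double)b / 4294967295.0);
    printf("%s\n", fa == fb ? "indistinguishable" : "distinct"); /* indistinguishable */
    return 0;
}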