Why do the OpenGL 4 GL_RGBA2 and GL_RGBA4 sized formats have a base format of GL_RGB instead of GL_RGBA? - opengl

According to the OpenGL 4 documentation, GL_RGBA2 and GL_RGBA4 have a base format of GL_RGB even though they have alpha bits.
This is inconsistent with the OpenGL ES 3 documentation. Is this due to some sort of legacy issue?

Since this question has no official answer after a week, I'll post on Yakov Galka's behalf.
"They don't; That documentation has a typo. The authoritative source is the OpenGL spec, where in table 8.12 RGBA2 and RGBA4 clearly listed as having RGBA base internal format."

Related

Using `GL_UNSIGNED_INT_24_8` with `glTexImage2D`

According to the wiki and this answer, it should be possible to use the enums GL_UNSIGNED_INT_24_8 and GL_FLOAT_32_UNSIGNED_INT_24_8_REV with glTexImage2D to upload image data for packed depth-stencil formats, but according to the reference pages, these types are not supported by that function (they are listed in the OpenGL ES reference pages).
Is this a mistake in the reference pages, or is it not possible to use these formats for pixel upload? If so, is there a way to upload to this type of texture (other than rendering to it)?
The reference page is missing information (as it is for glTexSubImage2D). And that's not the only missing information. For example, GL_UNSIGNED_INT_5_9_9_9_REV isn't listed as a valid type, but it is listed in the errors section as if it were a valid type. For whatever reason, they've been doing a better job keeping the ES pages updated and accurate than the desktop GL pages.
It's best to look at the OpenGL specification for these kinds of details, especially if you see a contradiction like this.
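A minimal sketch of such an upload for desktop GL 3.0+ (the helper name is illustrative, and a GL loader such as glad is assumed to supply the enums): the spec permits GL_UNSIGNED_INT_24_8 with glTexImage2D even though the reference page omits it.

```c
#include <glad/glad.h>

/* Upload packed depth-stencil data: one 32-bit value per texel,
   24 bits of depth followed by 8 bits of stencil. */
void upload_depth_stencil(GLuint tex, GLsizei width, GLsizei height,
                          const GLuint *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8, width, height, 0,
                 GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8, pixels);
}
```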

DXT Texture working despite S3TC not being supported

The topic involves OpenGL ES 2.0.
I have a device that, when queried for its supported extensions via
glGetString(GL_EXTENSIONS)
returns a list of supported extensions, none of which is GL_EXT_texture_compression_s3tc.
AFAIK, not having GL_EXT_texture_compression_s3tc shouldn't allow using DXT-compressed textures.
However, when DXT-compressed textures are used on the device, they render without any problems.
Texture data is committed using glCompressedTexImage2D.
Tried with DXT1, DXT3 and DXT5.
Why does it work? Is it safe to use a texture compression format even though it seems not to be supported?
I think that missing support for GL_EXT_texture_compression_s3tc does not mean that you can't use compressed formats. They may be supported anyway.
Quote from the glCompressedTexImage2D doc page for ES 2:
The texture image is decoded according to the extension specification defining the specified internalformat. OpenGL ES (...) provide a mechanism to obtain symbolic constants for such formats provided by extensions. The number of compressed texture formats supported can be obtained by querying the value of GL_NUM_COMPRESSED_TEXTURE_FORMATS. The list of specific compressed texture formats supported can be obtained by querying the value of GL_COMPRESSED_TEXTURE_FORMATS.
Note that there is nothing there about GL_EXT_texture_compression_s3tc. Support for various capabilities may be implemented even though their 'pseudo-standardized' (I mean extension-defined) substitutes are not listed as supported.
You should query those constants (GL_NUM_COMPRESSED_TEXTURE_FORMATS and GL_COMPRESSED_TEXTURE_FORMATS) using glGetIntegerv() to learn which compressed formats are actually supported, as in the sketch below.
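A minimal sketch of that query, assuming a current OpenGL ES 2.0 context (the helper name and the use of `printf` for output are illustrative, not from the original post):

```c
#include <stdio.h>
#include <stdlib.h>
#include <GLES2/gl2.h>

/* List every compressed texture format the implementation exposes,
   regardless of which extension strings are advertised. */
void print_compressed_formats(void)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS, &count);

    GLint *formats = malloc(count * sizeof(GLint));
    glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, formats);

    for (GLint i = 0; i < count; ++i) {
        /* The DXT1/DXT3/DXT5 enums from EXT_texture_compression_s3tc are
           0x83F0-0x83F1, 0x83F2 and 0x83F3 respectively. */
        printf("supported compressed format: 0x%04X\n", (unsigned)formats[i]);
    }
    free(formats);
}
```

If the DXT enums show up in this list, the device supports them even though the extension string is missing; if they do not, rendering from them relies on behaviour the implementation does not promise.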

Incompatibility of CL_UNSIGNED_INT8 and CL_RGB

Do you know why it is not allowed to create an Image2D instance (I am using the C++ API) whose image format (ImageFormat class) is set to
CL_RGB (cl_channel_order)
CL_UNSIGNED_INT8 (cl_channel_type)?
It looks like, if I have an image with an RGB layout in which every value (R, G and B) is an eight-bit number, I have to either
manually add alpha values (CL_UNSIGNED_INT8 and CL_RGBA is allowed)
write a kernel which accepts the image as "unsigned char*" and not use the Image2D class at all
Here is the compatibility table: khronos page.
To summarize:
Why am I not able to create a CL_UNSIGNED_INT8 and CL_RGB Image2D object?
Is there a way to work around this?
Should I even work around this? Or should I just use one of my options above ("CL_UNSIGNED_INT8 and CL_RGBA" or "unsigned char*") to process the image?
PS: I saw e.g. this one, but it does not explain why the incompatibility occurs.
See table "Table 5.7 Min. list of supported image formats" in the OpenCL 1.1 specification. An implementation is not required to support any of the CL_RGB image types. You can query your device using clGetSupportedImageFormats to see which formats it supports, but it appears that CL_RGB is not one of them. Use CL_RGBA; it is required for most pixel depths.

What data type for internalformat specified as GL_RGBA?

In the OpenGL wiki entry for glTexImage2D, it says:
internalFormat Specifies the number of color components in the texture. Must be one of base internal formats given in Table 1, one of the sized internal formats given in Table 2, or one of the compressed internal formats given in Table 3, below.
In the OpenGL Programming Guide, Chapter 9, Texture Mapping:
By definition, GL_LUMINANCE, GL_LUMINANCE_ALPHA, GL_RGB, and GL_RGBA are lenient, because they do not ask for a specific resolution.
So if we pass GL_RGBA as the internalformat, what data type is used? Is a default chosen by the GPU/driver?
The size used for GL_RGBA is specifically undefined.
From the OpenGL 4.5 spec, section "8.5 Texture Image Specification", page 153 (emphasis added):
The internal component resolution is the number of bits allocated to each value in a texture image. If internalformat is specified as a base internal format, the GL stores the resulting texture with internal component resolutions of its own choosing.
where "base internal format" refers to the formats listed in table 8.11, which includes GL_RGBA.
I would expect the chosen format to typically be GL_RGBA8, but there's really no guarantee. If you care about the size, you should use a sized format. In fact, I think you always should. The unsized formats seem to still be there to maintain backwards compatibility. I was always surprised that they were not removed in the Core Profile. For example the newer glTexStorage*() entry points, which serve as better replacements for glTexImage*() in many use cases, only accept sized internal formats.
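A minimal sketch of that recommendation (GL 4.2+ or ARB_texture_storage; the helper name and parameters are illustrative): allocate with an explicit sized format via glTexStorage2D, then upload the pixel data with glTexSubImage2D.

```c
#include <glad/glad.h>  /* any GL loader exposing GL 4.2+ entry points */

/* Create an immutable-storage texture with an explicit GL_RGBA8 format. */
GLuint create_rgba8_texture(GLsizei width, GLsizei height, const void *pixels)
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height); /* sized format required */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}
```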
I'm not sure what your question here is, but GL_RGBA is normally a 4x8-bit (4-byte/GL_BYTE type) format in R, G, B, A order respectively; the constant just says that the buffer is composed in this order, not exactly how wide each channel is.
Edit: for some methods you also need to specify this channel width explicitly (e.g. glReadPixels() or glDrawPixels()); see the sketch below.
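A minimal sketch of that point (desktop GL; the helper name and the malloc-based buffer are illustrative): glReadPixels requires both the channel order and the per-channel type to be spelled out.

```c
#include <stdlib.h>
#include <glad/glad.h>

/* Read the current framebuffer back as tightly packed 8-bit RGBA. */
unsigned char *read_back_rgba8(GLsizei width, GLsizei height)
{
    unsigned char *pixels = malloc((size_t)width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return pixels;
}
```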

Does a conversion take place when uploading a texture of GL_ALPHA format (glTexImage2D)?

The documentation for glTexImage2D says
GL_RED (for GL) / GL_ALPHA (for GL ES). "The GL converts it to floating point and assembles it into an RGBA element by attaching 0 for green and blue, and 1 for alpha. Each component is clamped to the range [0,1]."
I've read through the GL ES specs to see if they specify whether the GPU memory is actually 32-bit vs 8-bit, but they seem rather vague. Can anyone confirm whether a texture uploaded as GL_RED / GL_ALPHA gets converted from 8-bit to 32-bit on the GPU?
I'm interested in answers for GL and GL ES.
I've read through the GL ES specs to see if they specify whether the GPU memory is actually 32-bit vs 8-bit, but they seem rather vague.
Well, that's what it is: the details are left for the implementation to decide. Giving such liberties in the specification allows implementations to contain optimizations tightly tailored to the target system. For example, a certain GPU may cope better with a 10-bits-per-channel format, so it's then at liberty to convert to such a format.
So it's impossible to say in general, but for a specific implementation (i.e. GPU + driver) a certain format will likely be chosen. Which one depends on the GPU and driver.
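A minimal sketch of how a specific implementation's choice can be inspected (desktop GL only; glGetTexLevelParameteriv is not available in ES 2.0, and the helper name is illustrative):

```c
#include <stdio.h>
#include <glad/glad.h>

/* After uploading with an unsized internalformat, ask the driver which
   component resolution and internal format it actually chose. */
void print_chosen_format(GLuint tex)
{
    GLint red_bits = 0, internal_format = 0;
    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &red_bits);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT,
                             &internal_format);
    printf("red bits: %d, internal format enum: 0x%04X\n",
           red_bits, (unsigned)internal_format);
}
```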
Following on from what datenwolf has said, I found the following in the "POWERVR SGX OpenGL ES 2.0 Application Development Recommendations" document:
6.3. Texture Upload
When you upload textures to the OpenGL ES driver via glTexImage2D, the input data is usually in linear scanline format. Internally, though, POWERVR SGX uses a twiddled layout (i.e. following a plane-filling curve) to greatly improve memory access locality when texturing. Because of this different layout uploading textures will always require a somewhat expensive reformatting operation, regardless of whether the input pixel format exactly matches the internal pixel format or not.
For this reason we recommend that you upload all required textures at application or level start-up time in order to not cause any framerate dips when additional textures are uploaded later on.
You should especially avoid uploading texture data mid-frame to a texture object that has already been used in the frame.
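A minimal sketch of that recommendation (GL ES 2.0; the function name and the data arrays are hypothetical application-side names): do every upload once at load time so the driver's reformatting cost is never paid mid-frame.

```c
#include <GLES2/gl2.h>

/* Upload all level textures up front; the expensive relayout into the GPU's
   internal layout happens here, at start-up, rather than during rendering. */
void load_level_textures(GLuint *textures, GLsizei count,
                         const unsigned char *const *pixel_data,
                         const GLsizei *widths, const GLsizei *heights)
{
    glGenTextures(count, textures);
    for (GLsizei i = 0; i < count; ++i) {
        glBindTexture(GL_TEXTURE_2D, textures[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, widths[i], heights[i], 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixel_data[i]);
    }
}
```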