glTexImage2D - channel count as the internal format - OpenGL

The OpenGL 2.1 documentation says that glTexImage2D accepts a channel count as the internal format. How does choosing the internal format work in that scenario? Is specifying a channel count as the internal format an error in OpenGL > 2.1?

Is specifying a channel count as the internal format an error in OpenGL > 2.1?
Yes, for the Core Profile at least. See the OpenGL 3.2 (or later) Core specification, section "3.8.1 Texture Image Specification":
internalformat may be specified as one of the internal format symbolic constants listed in table 3.11, as one of the sized internal format symbolic constants listed in tables 3.12–3.13, as one of the generic compressed internal format symbolic constants listed in table 3.14, or as one of the specific compressed internal format symbolic constants (if listed in table 3.14). Specifying a value for internalformat that is not one of the above values generates the error INVALID_VALUE.
Contrast this with the additional sentence in the OpenGL 2.1 specification in that same section 3.8.1 (which is omitted in the 3.2 core specification):
internalformat may (for backwards compatibility with the 1.0 version of the GL) also take on the integer values 1, 2, 3, and 4, which are equivalent to symbolic constants LUMINANCE, LUMINANCE_ALPHA, RGB, and RGBA respectively.
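For illustration, a minimal sketch of the two styles (width, height and pixels are placeholders, assuming an RGBA source image):
/* legacy style, accepted in GL 2.1 and the compatibility profile:
   the integer 4 means "four channels", i.e. RGBA */
glTexImage2D(GL_TEXTURE_2D, 0, 4, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);

/* core profile style: a symbolic internal format is required;
   passing 4 here generates GL_INVALID_VALUE */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, pixels);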

Related

glTexImage2D - GL_TEXTURE_RECTANGLE_NV and compressed internal format

I have modified legacy code (OpenGL 2.1) which uses glTexImage2D with the GL_TEXTURE_RECTANGLE_NV texture target. I have noticed that when I set a compressed internal format, for example GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, it doesn't work with GL_TEXTURE_RECTANGLE_NV (I get a white texture). I have tested other scenarios and everything works fine, i.e. GL_TEXTURE_2D with a compressed internal format, and GL_TEXTURE_RECTANGLE_NV with a non-compressed internal format. Does this mean that GL_TEXTURE_RECTANGLE_NV can't be used with compressed formats?
Here's what the spec for NV_texture_rectangle extension says about compressed formats:
Can compressed texture images be specified for a rectangular texture?
RESOLUTION: The generic texture compression internal formats
introduced by ARB_texture_compression are supported for rectangular
textures because the image is not presented as compressed data and
the ARB_texture_compression extension always permits generic texture
compression internal formats to be stored in uncompressed form.
Implementations are free to support generic compression internal
formats for rectangular textures if supported but such support is
not required.
This extension makes a blanket statement that specific compressed
internal formats for use with CompressedTexImage<n>DARB are NOT
supported for rectangular textures. This is because several
existing hardware implementations of texture compression formats
such as S3TC are not designed for compressing rectangular textures.
This does not preclude future texture compression extensions from
supporting compressed internal formats that do work with rectangular
extensions (by relaxing the current blanket error condition).
So your specific format GL_COMPRESSED_RGBA_S3TC_DXT5_EXT is not necessarily supported: S3TC is explicitly mentioned as being "not designed for compressing rectangular textures".
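If you want to detect this case at runtime rather than rely on a white texture, one option is a sketch like the following (assuming a desktop GL 2.1 context; width, height and pixels are placeholders): upload with the compressed format, then ask the driver whether the level actually ended up compressed and fall back to an uncompressed format otherwise.
glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
             width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

GLint compressed = GL_FALSE;
glGetTexLevelParameteriv(GL_TEXTURE_RECTANGLE_NV, 0,
                         GL_TEXTURE_COMPRESSED, &compressed);
if (glGetError() != GL_NO_ERROR || !compressed) {
    /* the driver rejected or ignored the compressed format for the
       rectangle target: re-upload with an uncompressed internal format */
    glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_RGBA8,
                 width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}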

DXT Texture working despite S3TC not being supported

The topic involves OpenGL ES 2.0.
I have a device that, when queried for its extensions via
glGetString(GL_EXTENSIONS)
returns a list of supported extensions, none of which is GL_EXT_texture_compression_s3tc.
AFAIK, not having GL_EXT_texture_compression_s3tc shouldn't allow using DXT compressed textures.
However, when DXT compressed textures are used on the device, they render without any problems.
Texture data is committed using glCompressedTexImage2D.
I tried DXT1, DXT3 and DXT5.
Why does it work? Is it safe to use a texture compression format even though it seems not to be supported?
I think that missing support for GL_EXT_texture_compression_s3tc does not mean you can't use compressed formats. They may be supported anyway.
Quote from the glCompressedTexImage2D doc page for OpenGL ES 2.0:
The texture image is decoded according to the extension specification
defining the specified internalformat. OpenGL ES (...) provide a
mechanism to obtain symbolic constants for such formats provided by
extensions. The number of compressed texture formats supported can be
obtained by querying the value of GL_NUM_COMPRESSED_TEXTURE_FORMATS.
The list of specific compressed texture formats supported can be
obtained by querying the value of GL_COMPRESSED_TEXTURE_FORMATS.
Note that there is nothing there about GL_EXT_texture_compression_s3tc. A capability may be implemented even though its 'pseudo-standardized' (i.e. extension-defined) counterpart is not listed as supported.
You should query those constants (GL_NUM_COMPRESSED_TEXTURE_FORMATS and GL_COMPRESSED_TEXTURE_FORMATS) using glGetIntegerv() to learn which compressed formats are actually supported.
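A minimal sketch of that query, assuming an active ES 2.0 context (0x83F3 is the value of GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, written numerically because the ES 2.0 headers may not define it):
#include <stdlib.h>  /* malloc, free */

GLint count = 0;
glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS, &count);

GLint *formats = malloc(count * sizeof(GLint));
glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, formats);

int dxt5Supported = 0;
for (GLint i = 0; i < count; ++i) {
    if (formats[i] == 0x83F3)  /* GL_COMPRESSED_RGBA_S3TC_DXT5_EXT */
        dxt5Supported = 1;
}
free(formats);
If dxt5Supported ends up non-zero, the device accepts DXT5 data even though the extension string does not advertise GL_EXT_texture_compression_s3tc.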

Is it possible to use EXT texture formats in OpenGL ES 2.0?

Now I have to rewrite my working OpenGL code for OpenGL ES 2.0 (to use ANGLE).
I use OpenTK and see EXT texture formats like OpenTK.Graphics.ES20.TextureComponentCount.R32fExt in the "ES20" namespace. But when I try to use it I get an OpenGL error, "InvalidEnum".
I need something like this (one channel, >= 16 bits):
GL.TexImage2D(TextureTarget2d.Texture2D, 0, OpenTK.Graphics.ES20.TextureComponentCount.R32fExt, 1296, 1296, 0, OpenTK.Graphics.ES20.PixelFormat.Red, PixelType.UnsignedShort, (IntPtr)dataPtr);
Is this possible in ES 2.0?
UPD: I need to attach that texture to a framebuffer, so I can't use an ALPHA or LUMINANCE format.
If you want your code to run across a wide range of devices, you need runtime checks for the extensions you are using.
The general mechanism is that you get the extension string:
const GLubyte* extStr = glGetString(GL_EXTENSIONS);
and then check if the extension in question is part of this string. The list of all registered extensions can be found in the bottom section of the OpenGL ES registry page on www.khronos.org.
For example, to use float textures, the necessary extension is OES_texture_float. If the name of this extension is part of the extension string, you can use GL_FLOAT as a texture format. Note that ES 2.0 still uses unsized internal formats, so for a 1-component float texture you would use for example GL_ALPHA as the internal format, and GL_FLOAT as the type.
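A minimal sketch of that runtime check, assuming an ES 2.0 context (strstr is a simplification; a strict check would match whole space-delimited extension names; 1296 and dataPtr are taken from the question as placeholders):
#include <string.h>  /* strstr */

const char *ext = (const char *)glGetString(GL_EXTENSIONS);
if (ext && strstr(ext, "OES_texture_float")) {
    /* unsized internal format plus GL_FLOAT type, as ES 2.0 expects */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, 1296, 1296, 0,
                 GL_ALPHA, GL_FLOAT, dataPtr);
}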

Incompatibility of CL_UNSIGNED_INT8 and CL_RGB

Do you know why it is not allowed to create an Image2D instance (I am using the C++ API) whose image format (ImageFormat class) is set to
CL_RGB (cl_channel_order)
CL_UNSIGNED_INT8 (cl_channel_type)?
It looks like, if I have an image with an RGB layout in which every value (R, G and B) is an eight-bit number, I now have to either
manually add alpha values (CL_UNSIGNED_INT8 and CL_RGBA is allowed)
write a kernel which accepts the image as an "unsigned char*" and not use the Image2D class at all
Here is the table of compatibility: khronos page.
To summarize:
Why am I not able to create a CL_UNSIGNED_INT8 and CL_RGB Image2D object?
Is there a way to work around this?
Should I even work around this? Or should I just use one of my two approaches ("CL_UNSIGNED_INT8 and CL_RGBA" or "unsigned char*") to process the image?
PS: I saw e.g. this one, but it does not explain why the incompatibility occurs.
See table "Table 5.7 Min. list of supported image formats" in the OpenCL 1.1 specification. An implementation is not required to support any of the CL_RGB image types. You can query your device using clGetSupportedImageFormats to see which formats it supports, but it appears that CL_RGB is not one of them. Use CL_RGBA; it is required for most pixel depths.

What data type for internalformat specified as GL_RGBA?

In the OpenGL wiki entry for glTexImage2D, it says:
internalFormat Specifies the number of color components in the
texture. Must be one of base internal formats given in Table 1, one of
the sized internal formats given in Table 2, or one of the compressed
internal formats given in Table 3, below.
In the OpenGL Programming Guide, Chapter 9, Texture Mapping:
By definition, GL_LUMINANCE, GL_LUMINANCE_ALPHA, GL_RGB, and GL_RGBA
are lenient, because they do not ask for a specific resolution.
So if we assign GL_RGBA to internalformat, what data type is used?
Is the default chosen by the GPU?
The size used for GL_RGBA is specifically undefined.
From the OpenGL 4.5 spec, section "8.5 Texture Image Specification", page 153 (emphasis added):
The internal component resolution is the number of bits allocated to each value in a texture image. If internalformat is specified as a base internal format, the GL stores the resulting texture with internal component resolutions of its own choosing.
where "base internal format" refers to the formats listed in table 8.11, which includes GL_RGBA.
I would expect the chosen format to typically be GL_RGBA8, but there's really no guarantee. If you care about the size, you should use a sized format. In fact, I think you always should. The unsized formats seem to still be there to maintain backwards compatibility. I was always surprised that they were not removed in the Core Profile. For example the newer glTexStorage*() entry points, which serve as better replacements for glTexImage*() in many use cases, only accept sized internal formats.
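As an illustration of the sized-format route, a sketch using immutable storage (assumes a GL 4.2+ or ARB_texture_storage context; width, height and pixels are placeholders):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);  /* explicit 8 bits per channel */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA, GL_UNSIGNED_BYTE, pixels);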
I'm not sure exactly what your question is, but GL_RGBA is normally a 4×8-bit (4-byte) format in R, G, B, A order. The constant only says that the buffer has this channel order, not exactly how wide each channel is.
More info here
Edit: For some functions you also need to specify this channel width explicitly (e.g. glReadPixels() or glDrawPixels()).
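For example (width, height and pixels are placeholders), glReadPixels spells out the client-side layout through its format and type arguments:
glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);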