Incompatibility of CL_UNSIGNED_INT8 and CL_RGB - c++

Do you know why it is not allowed to create an Image2D instance - I am using the C++ API - whose image format (ImageFormat class) is set to
CL_RGB (cl_channel_order)
CL_UNSIGNED_INT8 (cl_channel_type)?
It looks like, if I have an image with an RGB layout in which every value (R, G and B) is an eight-bit number, I now have to either
manually add alpha values (CL_UNSIGNED_INT8 with CL_RGBA is allowed), or
write a kernel that accepts the image as an "unsigned char*" and does not use the Image2D class at all.
Here is the table of compatibility: khronos page.
To summarize:
Why am I not able to create a CL_UNSIGNED_INT8 and CL_RGB Image2D object?
Is there a way to work around this?
Should I even work around it? Or should I just use one of my approaches above ("CL_UNSIGNED_INT8 and CL_RGBA" or "unsigned char*") to process the image?
PS: I saw e.g. this one, but it does not explain why the incompatibility occurs.

See table "Table 5.7 Min. list of supported image formats" in the OpenCL 1.1 specification. An implementation is not required to support any of the CL_RGB image types. You can query your device using clGetSupportedImageFormats to see which formats it supports, but it appears that CL_RGB is not one of them. Use CL_RGBA; it is required for most pixel depths.

Related

OpenCV cvtColor without having to know source type

Is there a way of converting images in OpenCV (>= 2) without having to know their source types? I realize there is cvtColor, but you have to specify the conversion code, which always forces me to write a corresponding switch block, which is really tedious. I would be surprised if there were no helper functions for something as common as that.
Thanks
The OpenCV cv::Mat does not store the format or color space that the image data represents (only things related to memory storage, such as depth and channel count).
This information is therefore external and must be managed by the user.
That also means that if you want to convert to another color space, you need to know and specify both the source and the target color spaces.
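To make that concrete, here is a minimal sketch of managing the color space alongside the cv::Mat yourself; the ColorSpace enum and TaggedImage struct are hypothetical, not part of OpenCV:

#include <opencv2/imgproc.hpp>   // opencv2/imgproc/imgproc.hpp on OpenCV 2.x

enum class ColorSpace { BGR, RGB, GRAY };

struct TaggedImage {
    cv::Mat mat;
    ColorSpace space;   // cv::Mat itself does not carry this information
};

cv::Mat toGray(const TaggedImage& img) {
    cv::Mat out;
    switch (img.space) {
        case ColorSpace::BGR:  cv::cvtColor(img.mat, out, cv::COLOR_BGR2GRAY); break;
        case ColorSpace::RGB:  cv::cvtColor(img.mat, out, cv::COLOR_RGB2GRAY); break;
        case ColorSpace::GRAY: out = img.mat.clone(); break;
    }
    return out;
}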

Vulkan vkCreateImage with 3 components

I am trying to use vkCreateImage with a 3-component image (rgb).
But all of the RGB formats give:
vkCreateImage format parameter (VK_FORMAT_R8G8B8_xxxx) is an unsupported format
Does this mean that I have to reshape the data in memory, i.e. add an empty byte after every three bytes and then load it as RGBA?
I also noticed that the R8 and R8G8 formats do work, so I would guess the only reason RGB is not supported is that 3 is not a power of two.
Before I actually do this reshaping I'd like to know for sure that it is the only way, because it is not great for performance, and maybe there is some offset or padding value somewhere that would allow loading the RGB data into an RGBA format. So can somebody confirm that reshaping into RGBA is a necessary step to load RGB formats (albeit with 33% overhead)?
Thanks in advance.
First, you're supposed to check to see what is supported before you try to create an image. You shouldn't rely on validation layers to stop you; that's just a debugging aid to catch something when you forgot to check. What is and is not supported is dynamic, not static. It's based on your implementation. So you have to ask every time your application starts whether the formats you intend to use are available.
And if they are not, then you must plan accordingly.
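As a minimal sketch of such a check (plain Vulkan C calls; physicalDevice is assumed to come from your instance setup, and the fallback choice here is just an example):

#include <vulkan/vulkan.h>

VkFormat pickSampledFormat(VkPhysicalDevice physicalDevice) {
    VkFormatProperties props = {};
    vkGetPhysicalDeviceFormatProperties(physicalDevice,
                                        VK_FORMAT_R8G8B8_UNORM, &props);
    // If the 3-channel format can be sampled with optimal tiling, use it...
    if (props.optimalTilingFeatures & VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT)
        return VK_FORMAT_R8G8B8_UNORM;
    // ...otherwise fall back to RGBA and expand the pixel data to match.
    return VK_FORMAT_R8G8B8A8_UNORM;
}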
Second, yes, if your implementation does not support 3-channel formats, then you'll need to emulate them with a 4-channel format. You will have to re-adjust your data to fit your new format.
If you don't like doing that, I'm sure there are image editors you can use to load your image, add an opaque alpha of 1.0, and save it again.
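And a minimal sketch of the reshaping itself, assuming tightly packed 8-bit RGB data being expanded to RGBA with an opaque alpha:

#include <cstddef>
#include <cstdint>
#include <vector>

std::vector<uint8_t> expandRgbToRgba(const std::vector<uint8_t>& rgb) {
    std::vector<uint8_t> rgba;
    rgba.reserve(rgb.size() / 3 * 4);
    for (std::size_t i = 0; i + 2 < rgb.size(); i += 3) {
        rgba.push_back(rgb[i]);      // R
        rgba.push_back(rgb[i + 1]);  // G
        rgba.push_back(rgb[i + 2]);  // B
        rgba.push_back(0xFF);        // opaque alpha
    }
    return rgba;
}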

DXT Texture working despite S3TC not being supported

The topic involves OpenGL ES 2.0.
I have a device that, when queried for its extensions via
glGetString(GL_EXTENSIONS)
returns a list of supported extensions, none of which is GL_EXT_texture_compression_s3tc.
AFAIK, not having GL_EXT_texture_compression_s3tc should mean that DXT-compressed textures cannot be used.
However, when DXT-compressed textures are used on the device, they render without any problems.
Texture data is committed using glCompressedTexImage2D.
Tried with DXT1, DXT3 and DXT5.
Why does it work? Is it safe to use a texture compression format even though it appears not to be supported?
I think that missing support for GL_EXT_texture_compression_s3tc does not mean you can't use compressed formats. They may be supported anyway.
Quote from the glCompressedTexImage2D doc page for ES2:
The texture image is decoded according to the extension specification
defining the specified internalformat. OpenGL ES (...) provide a
mechanism to obtain symbolic constants for such formats provided by
extensions. The number of compressed texture formats supported can be
obtained by querying the value of GL_NUM_COMPRESSED_TEXTURE_FORMATS.
The list of specific compressed texture formats supported can be
obtained by querying the value of GL_COMPRESSED_TEXTURE_FORMATS.
Note that there is nothing in there about GL_EXT_texture_compression_s3tc. Support for various capabilities may be implemented even though their 'pseudo-standardized' (I mean, extensionized) counterparts are not listed as supported.
You should query those constants (GL_NUM_COMPRESSED_TEXTURE_FORMATS and GL_COMPRESSED_TEXTURE_FORMATS) using glGetIntegerv() to learn which compressed formats are actually supported.
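A minimal sketch of that query (OpenGL ES 2.0, assuming a current GL context):

#include <GLES2/gl2.h>
#include <vector>

std::vector<GLint> compressedFormats() {
    GLint count = 0;
    glGetIntegerv(GL_NUM_COMPRESSED_TEXTURE_FORMATS, &count);

    std::vector<GLint> formats(count);
    if (count > 0)
        glGetIntegerv(GL_COMPRESSED_TEXTURE_FORMATS, formats.data());
    // Check whether the DXT1/DXT3/DXT5 enums appear in this list.
    return formats;
}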

How to obtain raw (unconverted) texture data in OpenGL?

I need to serialise an arbitrary OpenGL texture object to be able to restore it later with the exact same state and data.
I'm looking for a way to get the texture image data. Here's what I've found so far:
There's glGetTexImage.
It lets you get the texture image, but it requires a specified format/type pair (like (GL_RGB, GL_HALF_FLOAT)) to which it performs a conversion.
The allowed formats and types don't map 1:1 to image formats though, and won't let you get more obscure formats like GL_R3_G3_B2 without additional conversion.
Also, correctly determining the C type for base internal formats (like GL_RGB with no size) involves some non-trivial labour.
There's ARB_internalformat_query2, which allows asking for GL_GET_TEXTURE_IMAGE_FORMAT and GL_GET_TEXTURE_IMAGE_TYPE, which represent the best choices for glGetTexImage for a given texture.
Nice, but it suffers from the same limitations as glGetTexImage and isn't widely available.
There's the wonderful glGetCompressedTexImage, which elegantly returns the compressed texture's data as-is, but it neither works for non-compressed images nor has a counterpart that would.
None of these allows getting or setting the raw data of non-compressed textures. Is there a way?
The trick is to find matches of format and type that yield the right data layout.
The allowed formats and types don't map 1:1 to image formats though, and won't let you get more obscure formats like GL_R3_G3_B2 without additional conversion.
That would be GL_RGB, GL_UNSIGNED_BYTE_3_3_2
Also, correctly determining the C type for base internal formats (like GL_RGB with no size) involves some non-trivial labour.
Yes it does. *puts on sunglasses* Deal with it! ;)
As for the internal format, I hereby refer you to
glGetTexLevelParameter(GL_TEXTURE_…, …, GL_TEXTURE_INTERNAL_FORMAT,…);
glGetTexLevelParameter(GL_TEXTURE_…, …, GL_TEXTURE_{RED,GREEN,BLUE,ALPHA,DEPTH}_TYPE, …);
glGetTexLevelParameter(GL_TEXTURE_…, …, GL_TEXTURE_{RED,GREEN,BLUE,ALPHA,DEPTH}_SIZE, …);
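For example, a minimal sketch for the GL_R3_G3_B2 case above (desktop GL, texture bound to GL_TEXTURE_2D, and headers or a loader exposing the GL 1.2+ enums assumed):

#include <GL/gl.h>   // or a loader header such as GLEW/glad for GL 1.2+ enums
#include <vector>

std::vector<unsigned char> readR3G3B2Level0() {
    GLint internalFormat = 0, width = 0, height = 0;
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
    // internalFormat should report GL_R3_G3_B2 here.

    // One byte per pixel: 3 bits red, 3 bits green, 2 bits blue.
    glPixelStorei(GL_PACK_ALIGNMENT, 1);   // avoid row padding for 1-byte pixels
    std::vector<unsigned char> pixels(width * height);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE_3_3_2, pixels.data());
    return pixels;
}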

Difference between OpenGL and D3D pixel formats

I'm continuing to try to develop an OpenGL path for my software. I'm using abstract classes with concrete implementations for both APIs, but obviously I need a common pixel format enumerator so that I can describe texture, backbuffer/frontbuffer and render target formats between the two. I provide a function in each concrete implementation that accepts my abstract identifier for, say, R8G8B8A8, and gives the concrete implementation an enum suitable for either D3D or OpenGL.
I can easily enumerate all D3D pixel formats using CheckDeviceFormat. For OpenGL, I'm firstly iterating through Win32 available accelerated formats (using DescribePixelFormat) and then looking at the PIXELFORMATDESCRIPTOR to see how it's made up, so I can assign it one of my enums. This is where my problems start:
I want to be able to discover all accelerated formats that OpenGL supports on any given system, comparable to the D3D formats. But according to the format descriptors, there aren't any RGB formats (they're all BGR). Further, things like the DXT1-5 formats, enumerable in D3D, aren't enumerable using the above method. For the latter, I suppose I can just assume that if the extension is available, the format is hardware accelerated.
For the former (how to interpret the format descriptor in terms of RGB/BGR, etc.), I'm not too sure how it works.
Anyone know about this stuff?
Responses appreciated.
Ok, I think I found what I was looking for:
OpenGL image formats
Some image formats are required by the spec (for the backbuffer/depth-stencil, textures, render targets, etc.), so there is a guarantee, to an extent, that these will be available (they don't need enumerating). The pixel format descriptor can still be used to work out the available front buffer formats for a given window.
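For the front-buffer side, a minimal sketch of walking the Win32 pixel formats and reading the channel layout out of the descriptor (hdc is assumed to be the window's device context):

#include <windows.h>

void enumerateFrontBufferFormats(HDC hdc) {
    // Passing NULL for the descriptor returns the number of pixel formats.
    const int count = DescribePixelFormat(hdc, 1, sizeof(PIXELFORMATDESCRIPTOR), NULL);
    for (int i = 1; i <= count; ++i) {
        PIXELFORMATDESCRIPTOR pfd = {};
        DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);
        if (!(pfd.dwFlags & PFD_SUPPORT_OPENGL))
            continue;
        // The bit counts and shifts describe the channel layout: e.g. a
        // cBlueShift of 0 with a cRedShift of 16 indicates a BGR(A) ordering.
        // Relevant fields: cRedBits/cGreenBits/cBlueBits/cAlphaBits and
        // cRedShift/cGreenShift/cBlueShift/cAlphaShift.
    }
}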