I have read the 'Pixel Transfer' page of the wiki, but one issue remains annoying to me. Given the following statement:
Adding "_INTEGER" to any of the color formats represent transferring
data to/from integral image formats
Given that _INTEGER only affects what happens to the pixel data when it is transferred to/from the internalFormat, does the pixel type of the data affect whether _INTEGER can be used?
Can the GL_RED_INTEGER, GL_RG_INTEGER, GL_RGB_INTEGER, GL_BGR_INTEGER, GL_RGBA_INTEGER, and GL_BGRA_INTEGER formats be used with any pixel type as long as their component counts match?
From the same page
If the format parameter specifies "_INTEGER", but the type is of a
floating-point type (GL_FLOAT, GL_HALF_FLOAT, or similar), then an
error results and the pixel transfer fails
Guess that answers my question. I'm still not 100% sure how _INTEGER interacts with packed types (like 10F_11F_11F_REV), but it looks like I need to read the wiki harder anyway, so I expect that it is in there!
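For what it's worth, here's a minimal sketch of how the pairing works, assuming an integral internal format such as GL_RGBA8UI (the texture object and the one-texel data below are just placeholders):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

const GLubyte pixels[4] = { 1, 2, 3, 4 };        /* one RGBA8UI texel (placeholder data) */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI,       /* integral internal format */
             1, 1, 0,
             GL_RGBA_INTEGER,                     /* "_INTEGER" transfer format */
             GL_UNSIGNED_BYTE,                    /* integer type: allowed */
             pixels);
/* Pairing GL_RGBA_INTEGER with GL_FLOAT or GL_HALF_FLOAT would instead
   produce the error described in the quote above. */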
Related
According to the wiki and this answer, it should be possible to use the enums GL_UNSIGNED_INT_24_8 and GL_FLOAT_32_UNSIGNED_INT_24_8_REV with glTexImage2D to upload image data for packed depth/stencil formats, but according to the reference pages, these types are not supported by that function (they are listed in the OpenGL ES reference pages).
Is this a mistake in the reference pages, or is it not possible to use these formats for pixel upload? If so, is there a way to upload to this type of texture (other than rendering to it)?
The reference page is missing information (as it is for glTexSubImage2D). And that's not the only missing information. For example, GL_UNSIGNED_INT_5_9_9_9_REV isn't listed as a valid type, but it is listed in the errors section as if it were a valid type. For whatever reason, they've been doing a better job keeping the ES pages updated and accurate than the desktop GL pages.
It's best to look at the OpenGL specification for these kinds of details, especially if you see a contradiction like this.
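For reference, a hedged sketch of what the specification (as opposed to the reference page) permits for these packed types; width, height and the data pointers are placeholders:

/* 24-bit depth + 8-bit stencil packed into 32 bits per pixel */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH24_STENCIL8,
             width, height, 0,
             GL_DEPTH_STENCIL, GL_UNSIGNED_INT_24_8,
             depthStencilData);

/* 32-bit float depth + 8-bit stencil (64 bits per pixel, 24 bits unused) */
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH32F_STENCIL8,
             width, height, 0,
             GL_DEPTH_STENCIL, GL_FLOAT_32_UNSIGNED_INT_24_8_REV,
             depthStencil32FData);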
I am trying to use vkCreateImage with a 3-component image (rgb).
But all of the RGB formats give:
vkCreateImage format parameter (VK_FORMAT_R8G8B8_xxxx) is an unsupported format
Does this mean that I have to reshape the data in memory? So add an empty byte after each 3, and then load it as RGBA?
I also noticed that the R8 and R8G8 formats do work, so I would guess the only reason RGB is not supported is that 3 is not a power of two.
Before I actually do this reshaping of the data, I'd like to know for sure that it is the only way, because it is not very good for performance, and maybe there is some offset or padding value somewhere that would help load the RGB data into an RGBA format. So can somebody confirm that reshaping into RGBA is a necessary step to load RGB formats (albeit with 33% overhead)?
Thanks in advance.
First, you're supposed to check to see what is supported before you try to create an image. You shouldn't rely on validation layers to stop you; that's just a debugging aid to catch something when you forgot to check. What is and is not supported is dynamic, not static. It's based on your implementation. So you have to ask every time your application starts whether the formats you intend to use are available.
And if they are not, then you must plan accordingly.
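As a sketch of that check (physicalDevice is assumed to be a valid VkPhysicalDevice; the RGBA fallback is just one possible plan):

VkFormatProperties props;
vkGetPhysicalDeviceFormatProperties(physicalDevice, VK_FORMAT_R8G8B8_UNORM, &props);

VkFormat chosen = VK_FORMAT_R8G8B8_UNORM;
if ((props.optimalTilingFeatures & VK_FORMAT_FEATURE_SAMPLED_IMAGE_BIT) == 0) {
    /* The 3-channel format can't be sampled on this implementation; fall back to RGBA. */
    chosen = VK_FORMAT_R8G8B8A8_UNORM;
}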
Second, yes, if your implementation does not support 3-channel formats, then you'll need to emulate them with a 4-channel format. You will have to re-adjust your data to fit your new format.
If you don't like doing that, I'm sure there are image editors you can use to load your image, add an opaque alpha of 1.0, and save it again.
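If you'd rather do the reshaping in code, a minimal sketch assuming tightly packed 8-bit RGB input (the function name is made up):

#include <stdlib.h>

/* Expand pixelCount tightly packed RGB triples into RGBA with opaque alpha. */
unsigned char *expand_rgb_to_rgba(const unsigned char *rgb, size_t pixelCount)
{
    unsigned char *rgba = malloc(pixelCount * 4);
    if (!rgba)
        return NULL;
    for (size_t i = 0; i < pixelCount; ++i) {
        rgba[i * 4 + 0] = rgb[i * 3 + 0];
        rgba[i * 4 + 1] = rgb[i * 3 + 1];
        rgba[i * 4 + 2] = rgb[i * 3 + 2];
        rgba[i * 4 + 3] = 0xFF;   /* opaque alpha */
    }
    return rgba;
}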
I have read that compressed textures are not readable and are not color render-able.
Though I have some idea of why it's not allowed, can someone explain in a little more detail?
What exactly does it mean that they are not readable? I can't read from them in a shader using, say, imageLoad etc.? Or I can't even sample from them?
What does it mean that they are not renderable? Is it because the user is going to see all garbage anyway, so it's not allowed?
I have not tried using compressed textures.
Compressed textures are "readable", by most useful definitions of that term. You can read from them via samplers. However, you can't use imageLoad operations on them. Why? Because reading such memory is not a simple memory fetch. It involves fetching lots of memory and doing a decompression operation.
Compressed images are not color-renderable, which means they cannot be attached to an FBO and used as a render target. One might think the reason for this is obvious, but in case you need it spelled out: writing to a compressed image requires doing image compression on the fly. And most texture compression formats (or compressed formats of any kind) are not designed to easily deal with changing a few values. Not to mention, most compressed texture formats are lossy, so every time you do a decompress/write/recompress operation, you lose image fidelity.
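To illustrate the render-target half of this, a hedged sketch (tex is assumed to already hold a compressed image, e.g. an S3TC/DXT one); the attachment is expected to leave the FBO incomplete:

GLuint fbo;
glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);

GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    /* Typically GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT for a compressed color format. */
}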
From the OpenGL Wiki:
Despite being color formats, compressed images are not color-renderable, for obvious reasons. Therefore, attaching a compressed image to a framebuffer object will cause that FBO to be incomplete and thus unusable. For similar reasons, no compressed formats can be used as the internal format of renderbuffers.
So "not color render-able" means that they can't be used in FBOs.
I'm not sure what "not readable" means; it may mean that you can't bind them to an FBO and read from the FBO (since you can't bind them to an FBO in the first place).
I need to serialise an arbitrary OpenGL texture object to be able to restore it later with the exact same state and data.
I'm looking for a way to get the texture image data. Here's what I've found so far:
There's glGetTexImage.
It lets you get the texture image, but it requires a specified format/type pair (like (GL_RGB, GL_HALF_FLOAT)) to which it performs a conversion.
The allowed formats and types don't map 1:1 to image formats though, and won't allow getting more obscure formats like GL_R3_G3_B2 without additional conversion.
Also correctly determining the C type for base internal formats (like GL_RGB with no size) involves some non-trivial labour.
There's ARB_internalformat_query2, which lets you ask for GL_GET_TEXTURE_IMAGE_FORMAT and GL_GET_TEXTURE_IMAGE_TYPE, which represent the best choices for glGetTexImage for a given texture.
Nice, but it suffers from the same limitations as glGetTexImage and isn't widely available.
There's the wonderful glGetCompressedTexImage that elegantly returns the compressed texture's data as-is, but it neither works for non-compressed images nor has a counterpart that would.
None of these allows getting or setting raw data for non-compressed textures. Is there a way?
The trick is to find matches of format and type that yield the right data layout.
The allowed formats and types don't map 1:1 to image formats though, and won't allow getting more obscure formats like GL_R3_G3_B2 without additional conversion.
That would be GL_RGB, GL_UNSIGNED_BYTE_3_3_2
Also correctly determining the C type for base internal formats (like GL_RGB with no size) involves some non-trivial labour.
Yes it does. *puts on sunglasses* Deal with it! ;)
As for the internal format. I hereby refer you to
glGetTexLevelParameter(GL_TEXTURE_…, …, GL_TEXTURE_INTERNAL_FORMAT,…);
glGetTexLevelParameter(GL_TEXTURE_…, …, GL_TEXTURE_{RED,GREEN,BLUE,ALPHA,DEPTH}_TYPE, …);
glGetTexLevelParameter(GL_TEXTURE_…, …, GL_TEXTURE_{RED,GREEN,BLUE,ALPHA,DEPTH}_SIZE, …);
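Putting that together, a sketch of those queries followed by a readback (tex and buffer are placeholders, and mapping the queried types/sizes to a format/type pair is left to the application; GL_R3_G3_B2 is shown only as the example from the question):

GLint internalFormat, redType, redSize;
glBindTexture(GL_TEXTURE_2D, tex);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_TYPE, &redType);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &redSize);
/* ...repeat for GREEN/BLUE/ALPHA/DEPTH, then pick a matching pair, e.g.: */
if (internalFormat == GL_R3_G3_B2)
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGB, GL_UNSIGNED_BYTE_3_3_2, buffer);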
As we know, glReadPixels() will block the pipeline and use the CPU to convert the data format, especially when I want to read depth values out to system RAM.
I tried the PBO approach provided by Songho, but I found it was only useful when the format param of glReadPixels() was set to GL_BGRA.
When I use a PBO with the param GL_BGRA, the read time is almost 0.1 ms and CPU usage is 4%.
When I change the param to GL_RGBA, the read takes 2 ms with 50% CPU usage.
It is the same when I try GL_DEPTH_COMPONENT. Apparently the slowness is caused by the conversion, so does anyone know how to stop it from converting the data format?
In my program, I have to read the depth value and do the calculation 16*25 times in less than one second, so 2 ms is not acceptable.
so does anyone know how to stop it from converting the data format?
D'uh, by reading a data format that does not need converting. On-screen framebuffers are typically formatted as BGRA, and if you want something different, the data needs to be converted first.
You could use an FBO with texture/renderbuffer attachments that are in the expected format and render to that.
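A rough sketch of that suggestion for the depth case (width and height are placeholders, and GL_DEPTH_COMPONENT24 is just an example choice):

GLuint fbo, rbo;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &rbo);

glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                          GL_RENDERBUFFER, rbo);
/* ...attach a color buffer, render, then glReadPixels from this FBO. */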
Desktop OpenGL will give you the data in whatever format you want, so unless you specify the format that doesn't require conversion, it will convert it for you. Because that's what you asked for.
Given an implementation that supports ARB_internalformat_query2 (just NVIDIA right now), you can simply ask. You ask for the GL_READ_PIXELS_FORMAT and GL_READ_PIXELS_TYPE, and then use those. It should return a format that doesn't require conversion.
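A hedged sketch, assuming GL 4.3+ or ARB_internalformat_query2, a previously created pixel-pack buffer named pbo, and GL_RGBA8 as the internal format being read:

GLint fmt, type;
glGetInternalformativ(GL_RENDERBUFFER, GL_RGBA8, GL_READ_PIXELS_FORMAT, 1, &fmt);
glGetInternalformativ(GL_RENDERBUFFER, GL_RGBA8, GL_READ_PIXELS_TYPE, 1, &type);

glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);                              /* asynchronous readback */
glReadPixels(0, 0, width, height, (GLenum)fmt, (GLenum)type, NULL);   /* offset 0 into the PBO */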