I need to serialise an arbitrary OpenGL texture object to be able to restore it later with the exact same state and data.
I'm looking for a way to get the texture image data. Here's what I've found so far:
There's glGetTexImage.
It lets you get the texture image, but it requires a specified format/type pair (like (GL_RGB, GL_HALF_FLOAT)) to which it performs a conversion.
The allowed formats and types don't map 1:1 to image formats though, and won't let you get more obscure formats like GL_R3_G3_B2 without additional conversion.
Also, correctly determining the C type for base internal formats (like GL_RGB with no size) involves some non-trivial labour.
There's ARB_internalformat_query2 that allows to ask for GL_GET_TEXTURE_IMAGE_FORMAT and GL_GET_TEXTURE_IMAGE_TYPE which represent the best choices for glGetTexImage for a given texture.
Nice, but suffers from the same limitations as glGetTexImage and isn't widely available.
There's the wonderful glGetCompressedTexImage that elegantly returns the compressed texture's data as-is, but it neither works for non-compressed images nor has a counterpart that would.
None of these allows getting or setting raw data for non-compressed textures. Is there a way?
The trick is to find the format and type combinations that yield the right data layout.
The allowed formats and types don't map 1:1 to image formats though, and won't let you get more obscure formats like GL_R3_G3_B2 without additional conversion.
That would be GL_RGB, GL_UNSIGNED_BYTE_3_3_2
Also, correctly determining the C type for base internal formats (like GL_RGB with no size) involves some non-trivial labour.
Yes it does. *puts on sunglasses* Deal with it! ;)
As for the internal format, I hereby refer you to:
glGetTexLevelParameter(GL_TEXTURE_…, …, GL_TEXTURE_INTERNAL_FORMAT,…);
glGetTexLevelParameter(GL_TEXTURE_…, …, GL_TEXTURE_{RED,GREEN,BLUE,ALPHA,DEPTH}_TYPE, …);
glGetTexLevelParameter(GL_TEXTURE_…, …, GL_TEXTURE_{RED,GREEN,BLUE,ALPHA,DEPTH}_SIZE, …);
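Putting that together, a rough sketch (untested; it assumes a current GL context with a loader such as GLEW already initialised, and a simple RGBA8-style texture — for anything more exotic you'd map the queried internal format to a matching format/type pair as discussed above):

#include <GL/glew.h>  /* or whichever loader/header you already use */
#include <stdlib.h>

/* Query level 0 of a 2D texture and read its pixels back.
   GL_RGBA / GL_UNSIGNED_BYTE is only an example pair. */
void dump_texture_level0(GLuint tex)
{
    GLint width = 0, height = 0, internalFormat = 0, redType = 0, redSize = 0;

    glBindTexture(GL_TEXTURE_2D, tex);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_WIDTH, &width);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_HEIGHT, &height);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_TYPE, &redType);
    glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &redSize);

    /* 4 bytes per pixel matches the GL_RGBA / GL_UNSIGNED_BYTE pair below. */
    unsigned char *pixels = malloc((size_t)width * (size_t)height * 4);
    glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* ... serialise width, height, internalFormat and pixels here ... */
    free(pixels);
}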
Related
I would like to use a texture with internal format GL_R11F_G11F_B10F as a framebuffer attachment (postprocessing effects, HDR rendering). I'm not sure which data type I should choose (the 8th parameter of glTexImage2D). Here are the possible options:
GL_HALF_FLOAT
GL_FLOAT
GL_UNSIGNED_INT_10F_11F_11F_REV
Could you please explain on what criteria I should base the choice of that type?
The format and type of glTexImage2D instruct OpenGL how to interpret the image that you pass to that function through the data argument. Since you're merely allocating your texture without specifying any image (i.e. set data = NULL) the exact values of format and type do not matter. The only requirement for them is to be compatible with the internalformat, or else glTexImage2D will generate GL_INVALID_OPERATION when validating the arguments.
However, since you're not specifying an image, it's best to use glTexStorage2D here. This function has simpler semantics and you don't need to specify format, type and data at all.
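A rough sketch of both options (untested; width and height are assumed to exist, an FBO is assumed to be bound to GL_FRAMEBUFFER, and glTexStorage2D needs GL 4.2 or ARB_texture_storage):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

/* Immutable storage: no format/type/data arguments at all. */
glTexStorage2D(GL_TEXTURE_2D, 1, GL_R11F_G11F_B10F, width, height);

/* Classic alternative: GL_RGB + GL_FLOAT is only a compatible placeholder
   pair here and does not change the internal storage:
   glTexImage2D(GL_TEXTURE_2D, 0, GL_R11F_G11F_B10F, width, height, 0,
                GL_RGB, GL_FLOAT, NULL); */

glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, tex, 0);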
The OpenGL wiki page for glTexImage2D says:
internalFormat: Specifies the number of color components in the texture. Must be one of base internal formats given in Table 1, one of the sized internal formats given in Table 2, or one of the compressed internal formats given in Table 3, below.
In the OpenGL Programming Guide, Chapter 9, Texture Mapping:
By definition, GL_LUMINANCE, GL_LUMINANCE_ALPHA, GL_RGB, and GL_RGBA
are lenient, because they do not ask for a specific resolution.
So if we assign GL_RGBA as the internalformat, what data type is used?
Is it a default chosen by the GPU/driver?
The size used for GL_RGBA is specifically undefined.
From the OpenGL 4.5 spec, section "8.5 Texture Image Specification", page 153 (emphasis added):
The internal component resolution is the number of bits allocated to each value in a texture image. If internalformat is specified as a base internal format, the GL stores the resulting texture with internal component resolutions of its own choosing.
where "base internal format" refers to the formats listed in table 8.11, which includes GL_RGBA.
I would expect the chosen format to typically be GL_RGBA8, but there's really no guarantee. If you care about the size, you should use a sized format. In fact, I think you always should. The unsized formats seem to still be there to maintain backwards compatibility. I was always surprised that they were not removed in the Core Profile. For example the newer glTexStorage*() entry points, which serve as better replacements for glTexImage*() in many use cases, only accept sized internal formats.
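If you're curious what your driver actually picked, you can allocate with the unsized format and then query it back, along these lines (untested; assumes a 2D texture is bound):

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

GLint redBits = 0, chosenFormat = 0;
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_RED_SIZE, &redBits);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &chosenFormat);
/* redBits is typically 8 and chosenFormat typically GL_RGBA8,
   but neither is guaranteed by the spec. */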
I'm not sure what your question here is, but GL_RGBA is normally a 4x8-bit format (four byte-sized channels) in R, G, B, A order. The constant only says that the buffer is composed of channels in that order, not exactly how wide each channel is.
More info here
Edit: For some methods, you also need to specify this channel width (e.g. glReadPixels() or glDrawPixels())
I have read that compressed textures are not readable and are not color render-able.
Though I have some idea of why it's not allowed, can someone explain in a little more detail?
What exactly does it mean that they're not readable? That I cannot read from them in a shader using, say, imageLoad? Or that I can't even sample from them?
And what does it mean that they're not renderable to? Is it because the user is going to see garbage anyway, so it's not allowed?
I have not tried using compressed textures.
Compressed textures are "readable", by most useful definitions of that term. You can read from them via samplers. However, you can't use imageLoad operations on them. Why? Because reading such memory is not a simple memory fetch. It involves fetching lots of memory and doing a decompression operation.
Compressed images are not color-renderable, which means they cannot be attached to an FBO and used as a render target. One might think the reason for this is obvious, but in case it needs spelling out: writing to a compressed image requires doing image compression on the fly. Most texture compression formats (or compressed formats of any kind) are not designed to easily deal with changing a few values. Not to mention, most compressed texture formats are lossy, so every time you do a decompress/write/recompress operation, you lose image fidelity.
From the OpenGL Wiki:
Despite being color formats, compressed images are not color-renderable, for obvious reasons. Therefore, attaching a compressed image to a framebuffer object will cause that FBO to be incomplete and thus unusable. For similar reasons, no compressed formats can be used as the internal format of renderbuffers.
So "not color render-able" means that they can't be used in FBOs.
I'm not sure what "not readable" means; it may mean that you can't bind them to an FBO and read from the FBO (since you can't bind them to an FBO in the first place).
I have read the 'Pixel Transfer' page of the wiki, but one issue remains unclear to me. Given the following statement:
Adding "_INTEGER" to any of the color formats represent transferring
data to/from integral image formats
Given that _INTEGER only affects what happens to the pixel data when it is transferred to/from the internal format, does the pixel type of the data affect whether _INTEGER can be used?
Can the GL_RED_INTEGER, GL_RG_INTEGER, GL_RGB_INTEGER, GL_BGR_INTEGER, GL_RGBA_INTEGER, GL_BGRA_INTEGER formats be used with any pixel type as long as their component counts match?
From the same page:
If the format parameter specifies "_INTEGER", but the type is of a
floating-point type (GL_FLOAT, GL_HALF_FLOAT, or similar), then an
error results and the pixel transfer fails
Guess that answers my question. I'm still not 100% sure how _INTEGER interacts with packed types (like 10F_11F_11F_REV), but it looks like I need to read the wiki more carefully anyway, so I expect the answer is in there!
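For what it's worth, here's roughly how the rule plays out with an integral texture (untested sketch): the internal format GL_RGBA8UI forces the _INTEGER transfer format, and a floating-point type in its place would fail. I believe the packed float type GL_UNSIGNED_INT_10F_11F_11F_REV is only ever paired with plain GL_RGB rather than the _INTEGER formats, but check the spec's format/type tables to be sure.

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, 16, 16, 0,
             GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, NULL);

/* Read-back: an _INTEGER format plus an integer type is fine... */
GLuint pixels[16 * 16 * 4];  /* 4 components per pixel, one GLuint each */
glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA_INTEGER, GL_UNSIGNED_INT, pixels);

/* ...whereas GL_RGBA + GL_FLOAT against this texture would fail with
   GL_INVALID_OPERATION. */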
I'm continuing to try to develop an OpenGL path for my software. I'm using abstract classes with concrete implementations for both D3D and OpenGL, but obviously I need a common pixel format enumeration so that I can describe texture, backbuffer/frontbuffer and render target formats between the two. I provide a function in each concrete implementation that accepts my abstract identifier for, say, R8G8B8A8 and maps it to an enum suitable for either D3D or OpenGL.
I can easily enumerate all D3D pixel formats using CheckDeviceFormat. For OpenGL, I'm first iterating through the available accelerated Win32 formats (using DescribePixelFormat) and then looking at the PIXELFORMATDESCRIPTOR to see how it's made up, so I can assign it one of my enums. This is where my problems start:
I want to be able to discover all accelerated formats that OpenGL supports on any given system, comparable to the D3D formats. But according to the format descriptors, there aren't any RGB formats (they're all BGR). Further, formats like DXT1-5, which are enumerable in D3D, aren't enumerable using the above method. For the latter, I suppose I can just assume that if the extension is available, it's a hardware-accelerated format.
For the former (how to interpret the format descriptor in terms of RGB/BGR, etc.), I'm not too sure how it works.
Anyone know about this stuff?
Responses appreciated.
Ok, I think I found what I was looking for:
OpenGL image formats
Some image formats are defined by the spec (for backbuffer/depth-stencil, textures, render-targets, etc.), so there is a guarantee, to an extent, that these will be available (they don't need enumerating). The pixel format descriptor can still be used to work out available front buffer formats for the given window.
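For reference, this is roughly how I walk the descriptors now (untested sketch; hdc is assumed to be a valid window DC). Note that the descriptor only gives you bit counts and shifts, so the shift values are what tell you whether the layout is effectively RGB or BGR:

#include <windows.h>
#include <stdio.h>

void list_pixel_formats(HDC hdc)
{
    /* Passing 0/NULL returns the maximum pixel format index. */
    int count = DescribePixelFormat(hdc, 1, 0, NULL);
    for (int i = 1; i <= count; ++i) {
        PIXELFORMATDESCRIPTOR pfd;
        DescribePixelFormat(hdc, i, sizeof(pfd), &pfd);

        if (!(pfd.dwFlags & PFD_SUPPORT_OPENGL))
            continue;
        /* PFD_GENERIC_FORMAT without PFD_GENERIC_ACCELERATED usually means
           the software renderer, so skip those for "accelerated only". */
        if ((pfd.dwFlags & PFD_GENERIC_FORMAT) && !(pfd.dwFlags & PFD_GENERIC_ACCELERATED))
            continue;

        printf("format %d: color %d bits, shifts R:%d G:%d B:%d A:%d, depth %d, stencil %d\n",
               i, pfd.cColorBits,
               pfd.cRedShift, pfd.cGreenShift, pfd.cBlueShift, pfd.cAlphaShift,
               pfd.cDepthBits, pfd.cStencilBits);
    }
}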