How to create a one-channel 32-bit float compressed texture in OpenGL

According to the documentation I can create an HDR compressed texture, so I do this:
funcs->glGenTextures(1, &newCompressedTexture);
funcs->glActiveTexture(GL_TEXTURE0 + g_cTileTextureUnit);
funcs->glBindTexture(GL_TEXTURE_2D, newCompressedTexture);
funcs->glTexStorage2D(GL_TEXTURE_2D, 1, oglTileInfo.m_compressedInternalFormat, g_cTileWidthWithBorder, g_cTileWidthWithBorder);
funcs->glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, g_cTileWidthWithBorder, g_cTileWidthWithBorder, GL_RED, GL_FLOAT, getUncompressedData().rData()._m_data.data());
Where oglTileInfo.m_compressedInternalFormat is GL_COMPRESSED_RED_RGTC1 or GL_COMPRESSED_RGBA_ASTC_4x4_KHR.
There are no GL errors and the shader works correctly, but the value I get from texture() is clamped to [0, 1] and has only 8 bits of depth. Everything works fine if I use GL_R32F as the texture internal format, but I need compression. Thanks.

If your OpenGL implementation supports ASTC textures, you can upload pre-compressed ASTC data to the texture. However, ASTC support does not require implementations to compress texture data for you, so uploading uncompressed data is simply not allowed. You have to compress your data elsewhere before uploading it.
Also, good ASTC compression is not particularly fast, so if you're generating this floating-point data in your application, that's going to be problematic.
As for GL_COMPRESSED_RED_RGTC1, that is a normalized format. So while implementations are required to compress data for you, it's still going to clamp that data to [0, 1].
BPTC requires implementations to be able to compress such textures inline. So using one of its floating-point formats would probably be functional. However, such compression will not be particularly fast nor will it be particularly good.
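For example, a minimal sketch of that route, assuming the same hypothetical funcs wrapper and tile variables as the code above and a GL 4.2+ context with ARB_texture_compression_bptc, might look like this (note that BC6H stores half-float RGB, so you pay for three channels and lose some precision relative to GL_R32F):
GLuint tex = 0;
funcs->glGenTextures(1, &tex);
funcs->glBindTexture(GL_TEXTURE_2D, tex);
// BPTC float format (BC6H): drivers must accept uncompressed uploads and
// compress them, unlike ASTC, and the values are not clamped to [0, 1].
// Use GL_COMPRESSED_RGB_BPTC_SIGNED_FLOAT instead if your data can be negative.
funcs->glTexStorage2D(GL_TEXTURE_2D, 1, GL_COMPRESSED_RGB_BPTC_UNSIGNED_FLOAT,
                      g_cTileWidthWithBorder, g_cTileWidthWithBorder);
funcs->glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0,
                       g_cTileWidthWithBorder, g_cTileWidthWithBorder,
                       GL_RED, GL_FLOAT,
                       getUncompressedData().rData()._m_data.data());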

Related

glTexImage2D - GL_TEXTURE_RECTANGLE_NV and compressed internal format

I have modified legacy code (OpenGL 2.1) which uses glTexImage2D with the GL_TEXTURE_RECTANGLE_NV texture target. I have noticed that when I set a compressed internal format, for example GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, it doesn't work with GL_TEXTURE_RECTANGLE_NV (I get a white texture). I have tested other scenarios and everything works fine, i.e. GL_TEXTURE_2D with a compressed internal format, and GL_TEXTURE_RECTANGLE_NV with a non-compressed internal format. Does that mean GL_TEXTURE_RECTANGLE_NV can't be used with compressed formats?
Here's what the spec for NV_texture_rectangle extension says about compressed formats:
Can compressed texture images be specified for a rectangular texture?
RESOLUTION: The generic texture compression internal formats introduced by ARB_texture_compression are supported for rectangular textures because the image is not presented as compressed data and the ARB_texture_compression extension always permits generic texture compression internal formats to be stored in uncompressed form. Implementations are free to support generic compression internal formats for rectangular textures if supported, but such support is not required.
This extension makes a blanket statement that specific compressed internal formats for use with CompressedTexImage<n>DARB are NOT supported for rectangular textures. This is because several existing hardware implementations of texture compression formats such as S3TC are not designed for compressing rectangular textures. This does not preclude future texture compression extensions from supporting compressed internal formats that do work with rectangular textures (by relaxing the current blanket error condition).
So your specific format, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, is not required to be supported for rectangle textures; S3TC is exactly the kind of format the spec calls out as "not designed for compressing rectangular textures".
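If you need to handle this robustly at runtime, one hedged option (width, height and pixels are placeholders here) is to ask the driver after the upload whether the image was actually stored compressed, and fall back to an uncompressed internal format otherwise:
glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
             width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
GLint isCompressed = GL_FALSE, actualFormat = 0;
// Ask whether the image was really stored in a compressed form, and in which format.
glGetTexLevelParameteriv(GL_TEXTURE_RECTANGLE_NV, 0, GL_TEXTURE_COMPRESSED, &isCompressed);
glGetTexLevelParameteriv(GL_TEXTURE_RECTANGLE_NV, 0, GL_TEXTURE_INTERNAL_FORMAT, &actualFormat);
if (isCompressed == GL_FALSE || glGetError() != GL_NO_ERROR)
{
    // Fall back: re-specify the image with a plain uncompressed format.
    glTexImage2D(GL_TEXTURE_RECTANGLE_NV, 0, GL_RGBA8,
                 width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
}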

Lossless texture compression for OpenGL

I have several 32-bit (with alpha channel) bitmap images which I'm using as essential information in my game. The slightest change in RGBA values breaks everything, so I can't use lossy compression methods like S3TC.
Are there any feasible lossless compression algorithms I can use with OpenGL? I'm using fragment shaders and I want to use the glCompressedTexImage2D() method to define the texture. I haven't tried compressing the texture with OpenGL using the GL_COMPRESSED_RGBA parameter; is there any chance I can get lossless compression that way?
Texture compression, as opposed to regular image compression, is designed for one specific purpose: being a texture. And that means fast random access of data.
Lossless compression formats do not tend to do well when it comes to random access patterns. The major lossless compression formats are some form of RLE or table-based encoding. These are adequate for decompressing the entire dataset at once, but they're terrible at being able to know in which memory location the value for texel (U,V) is.
And that question gets asked a lot when accessing textures.
As such, there are no lossless hardware texture compression formats.
Your options are limited to the following:
Use texture memory as a kind of cache. That is, when you determine that you will need a particular image in this frame, decompress it. This could be done on the CPU or GPU (via compute shaders or the like). Note that for fast GPU decompression, you will have to come up with a compression scheme that takes advantage of parallel execution. Most lossless compression formats are not particularly parallel.
If a particular image has not been used in some time, you put it in a "subject to be reused" pile. And if you need to decompress a new image, you can take the least-recently-used image off of that pile, rather than constantly creating/destroying OpenGL texture objects.
Build your own lossless compression scheme, designed for your specific needs. If you absolutely need exact texel values from the texture, I assume that you aren't using linear filtering when accessing these textures. So these aren't really colors; they're arbitrary information about a texel.
I might suggest field compression (improved packing of your bits in the available space). But without knowing what your data actually is or means, I can't say whether your particular use case is amenable to it.
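As a purely hypothetical illustration of such field compression, suppose each texel really holds a 10-bit height and a 6-bit material ID (names invented for this sketch); you could pack both into a single 16-bit integer texel, upload it as GL_R16UI / GL_RED_INTEGER / GL_UNSIGNED_SHORT, and unpack it in the shader with bitfieldExtract() on a usampler2D:
#include <cstdint>
#include <vector>

// Pack a 10-bit height (0..1023) and a 6-bit material ID (0..63) per texel.
std::vector<std::uint16_t> packTexels(const std::vector<std::uint16_t>& height,
                                      const std::vector<std::uint8_t>& material)
{
    std::vector<std::uint16_t> packed(height.size());
    for (std::size_t i = 0; i < height.size(); ++i)
        packed[i] = static_cast<std::uint16_t>((height[i] & 0x3FFu) | ((material[i] & 0x3Fu) << 10));
    return packed;
}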

Frostbite PBR: different compression on separate texture channels of the same texture

Is it possible, and if so how would it be possible, to create and read an RGBA texture with different compression algorithms on separate channels in OpenGL 4.x?
Example A, without real meaning:
RG channels store a normal map encoded in 3Dc
B channel stores height values for, let's say, tessellation, with some encoding
A channel stores a raw MaterialID without compression
Example B:
RGB stores material parameters compressed with DXT1
A saves MaterialID without compression
Background:
In the Frostbite implementation of Physically Based Rendering PBR (PDF), on pages 15 and 18, the authors describe how they structured material parameters into different texture channels. They also mention that they avoided texture compression on some channels, without going into detail about which channels they mean.
Page 15
All basic attributes (Normal, BaseColor, Smoothness, MetalMask, Reflectance) need to be blendable to support deferred decals. Unblendable attributes, like MaterialId are stored into the alpha channel. We have also avoided compression and encoding mechanisms which could affect blending quality.
Page 18
We chose to avoid compression on few of our attributes and rely on simple, linearly-interpolating alpha blending in this case.
There is no OpenGL or hardware support for reading a texture with different compression on different channels. Of course, one could add a prepass which splits such a handcrafted texture into separate textures and then decompresses these separately.
As Nicol Bolas suggested, the Frostbite PBR implementation avoids compression due to blending.
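A rough sketch of that practical alternative, using placeholder names (w, h, rgbPixels, idPixels): keep the blendable parameters in a driver-compressed DXT1 texture and the MaterialId in a separate uncompressed, unfiltered integer texture, then sample both in the shader:
GLuint paramsTex = 0, idTex = 0;
glGenTextures(1, &paramsTex);
glBindTexture(GL_TEXTURE_2D, paramsTex);
// Blendable material parameters, compressed by the driver to DXT1 (lossy).
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
             w, h, 0, GL_RGB, GL_UNSIGNED_BYTE, rgbPixels);

glGenTextures(1, &idTex);
glBindTexture(GL_TEXTURE_2D, idTex);
// MaterialId must survive exactly, so keep it uncompressed, integer and unfiltered.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8UI, w, h, 0, GL_RED_INTEGER, GL_UNSIGNED_BYTE, idPixels);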

Compressed Textures in OpenGL

I have read that compressed textures are not readable and are not color-renderable.
Though I have some idea of why it's not allowed, can someone explain in a little more detail?
What exactly does it mean that they're not readable? Can I not read from them in a shader using, say, imageLoad, etc.? Or can I not even sample from them?
What does it mean that they're not renderable to? Is it because the user is going to see all garbage anyway, so it's not allowed?
I have not tried using compressed textures.
Compressed textures are "readable", by most useful definitions of that term. You can read from them via samplers. However, you can't use imageLoad operations on them. Why? Because reading such memory is not a simple memory fetch. It involves fetching lots of memory and doing a decompression operation.
Compressed images are not color-renderable, which means they cannot be attached to an FBO and used as a render target. One might think the reason for this was obvious, but if you need it spelled out: writing to a compressed image requires doing image compression on the fly. And most texture compression formats (or compressed formats of any kind) are not designed to easily deal with changing a few values. Not to mention, most compressed texture formats are lossy, so every time you do a decompress/write/recompress operation, you lose image fidelity.
From the OpenGL Wiki:
Despite being color formats, compressed images are not color-renderable, for obvious reasons. Therefore, attaching a compressed image to a framebuffer object will cause that FBO to be incomplete and thus unusable. For similar reasons, no compressed formats can be used as the internal format of renderbuffers.
So "not color render-able" means that they can't be used in FBOs.
I'm not sure what "not readable" means; it may mean that you can't bind them to an FBO and read from the FBO (since you can't bind them to an FBO in the first place).
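As a rough illustration (sizes and names here are arbitrary), attaching a compressed texture as a color attachment should leave the FBO incomplete, while sampling the very same texture from a shader remains legal:
GLuint tex = 0, fbo = 0;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
             256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// Expect something other than GL_FRAMEBUFFER_COMPLETE (typically
// GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT), because the format is not color-renderable.
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);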

What is the most efficient process to push YUV texture data onto a GPU in OpenGL?

Does anyone know of an efficient way to push 2vuy non-planar data onto a GPU in a way that doesn't require swizzling?
I am grabbing the raw 2vuy data from an h264 video file and successfully loading it into a texture that I map to an OpenGL object. I notice that my code spends a fair amount of time in glgProcessPixelsWithProcessor. My glTexImage2D call looks like the following:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_YCBCR_422_APPLE,
             GL_UNSIGNED_SHORT_8_8_APPLE, data);
Apple says in its OpenGL guide that GL_YCBCR_422_APPLE provides "acceptable" performance (p. 103), but that:
Note: If your data needs only to be swizzled, glgProcessPixels performs the swizzling reasonably fast although not as fast as if the data didn't need swizzling. But non-native data formats are converted one byte at a time and incurs a performance cost that is best to avoid.
I assume that there is some kind of internal format conversion going on on the CPU. I noticed in another thread that glgProcessPixels is running a block method as well.
Is my path the most efficient? If not, what is?
Your code, as it stands right now, depends on Apple extensions, so I can't tell what's happening inside them.
However, what I suggest is that you create three 2D textures, each with exactly one channel, where each texture receives one of the color planes; using independent textures makes supporting chroma subsampling (that 4:2:2) simpler.
In a shader you'd then perform the colorspace conversion. When writing down the math I suggest you go via a contact color space like XYZ, as this allows you to take the color profile of the output device into account; ICC profiles provide the conversion data from XYZ color space coordinates to device color space (RGB) coordinates.
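A sketch of that setup, assuming you first de-interleave the 2vuy buffer into yPlane, cbPlane and crPlane on the CPU (placeholder names), would be three GL_R8 textures with the chroma planes at half horizontal width; a fragment shader then samples all three, converts Y'CbCr to RGB and writes the result:
GLuint planes[3];
glGenTextures(3, planes);

glBindTexture(GL_TEXTURE_2D, planes[0]);   // Y, full resolution
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, yPlane);

glBindTexture(GL_TEXTURE_2D, planes[1]);   // Cb, half width (4:2:2)
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width / 2, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, cbPlane);

glBindTexture(GL_TEXTURE_2D, planes[2]);   // Cr, half width (4:2:2)
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width / 2, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, crPlane);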