Is it possible, and if so how, to create and read an RGBA texture with different compression algorithms on separate channels in OpenGL 4.x?
Example A (without real meaning):
The RG channels store a normal map encoded in 3Dc
The B channel stores height values (for, let's say, tessellation) with some encoding
The A channel stores a raw MaterialID without compression
Example B:
RGB stores material parameters compressed with DXT1
A stores a MaterialID without compression
Background:
In the Frostbite implementation of Physically Based Rendering (PBR) (PDF), on pages 15 and 18, the authors describe how they structured material parameters into different texture channels. They also mention that they avoided texture compression on some channels, without going into detail about which channels they mean.
Page 15
All basic attributes (Normal, BaseColor, Smoothness, MetalMask, Reflectance) need to be blendable to support deferred decals. Unblendable attributes, like MaterialId are stored into the alpha channel. We have also avoided compression and encoding mechanisms which could affect blending quality.
Page 18
We chose to avoid compression on few of our attributes and rely on simple, linearly-interpolating alpha blending in this case.
There is no OpenGL or hardware support for reading a texture with different compression on different channels. Of course, one could create a prepass that splits such a handcrafted texture into separate textures and then decompresses these separately.
As Nicol Bolas suggested, the Frostbite PBR implementation avoids compression due to blending.
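To make the prepass idea above concrete, here is a minimal sketch (the function name and layout are my own, not from the Frostbite paper): split an interleaved RGBA8 image into an RGB buffer that could be handed to an offline compressor (e.g. DXT1/BC1) and a raw single-channel buffer that keeps the MaterialID in the alpha channel lossless.

```cpp
#include <cstddef>
#include <cstdint>
#include <utility>
#include <vector>

// Split an interleaved RGBA8 image into two buffers: an RGB8 buffer that
// can be fed to a block compressor, and a raw R8 buffer holding the
// uncompressed MaterialID taken from the alpha channel.
std::pair<std::vector<uint8_t>, std::vector<uint8_t>>
splitRgbaIntoRgbAndId(const std::vector<uint8_t>& rgba)
{
    const size_t pixelCount = rgba.size() / 4;
    std::vector<uint8_t> rgb(pixelCount * 3);
    std::vector<uint8_t> materialId(pixelCount);
    for (size_t i = 0; i < pixelCount; ++i) {
        rgb[i * 3 + 0] = rgba[i * 4 + 0];
        rgb[i * 3 + 1] = rgba[i * 4 + 1];
        rgb[i * 3 + 2] = rgba[i * 4 + 2];
        materialId[i]  = rgba[i * 4 + 3]; // stays lossless
    }
    return {rgb, materialId};
}
```

The two buffers would then be uploaded as two separate textures (one compressed, one not) and sampled together in the shader.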
Related
I have modified legacy code (OpenGL 2.1) which uses glTexImage2D with the GL_TEXTURE_RECTANGLE_NV texture target. I have noticed that when I set a compressed internal format, for example GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, it doesn't work with GL_TEXTURE_RECTANGLE_NV (I get a white texture). I have tested other scenarios and everything works fine, i.e. GL_TEXTURE_2D with a compressed internal format, and GL_TEXTURE_RECTANGLE_NV with a non-compressed internal format. Does this mean that GL_TEXTURE_RECTANGLE_NV can't be used with compressed formats?
Here's what the spec for the NV_texture_rectangle extension says about compressed formats:
Can compressed texture images be specified for a rectangular texture?
RESOLUTION: The generic texture compression internal formats
introduced by ARB_texture_compression are supported for rectangular
textures because the image is not presented as compressed data and
the ARB_texture_compression extension always permits generic texture
compression internal formats to be stored in uncompressed form.
Implementations are free to support generic compression internal
formats for rectangular textures if supported but such support is
not required.
This extension makes a blanket statement that specific compressed
internal formats for use with CompressedTexImage<n>DARB are NOT
supported for rectangular textures. This is because several
existing hardware implementations of texture compression formats
such as S3TC are not designed for compressing rectangular textures.
This does not preclude future texture compression extensions from
supporting compressed internal formats that do work with rectangular
textures (by relaxing the current blanket error condition).
So your specific format, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT, is not necessarily supported, as it is one of the formats "not designed for compressing rectangular textures".
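The block-based nature of S3TC that the spec alludes to is easy to see from its storage math: texels are stored in fixed 4x4 blocks, with dimensions rounded up to whole blocks. This is a standard formula (it matches what GL reports via GL_TEXTURE_COMPRESSED_IMAGE_SIZE), shown here only for illustration:

```cpp
#include <cstddef>

// S3TC/DXT stores texels in fixed-size 4x4 blocks: 8 bytes per block for
// DXT1, 16 bytes per block for DXT3/DXT5. Dimensions that are not
// multiples of 4 are rounded up to a whole number of blocks.
size_t s3tcImageSize(size_t width, size_t height, size_t bytesPerBlock)
{
    const size_t blocksX = (width  + 3) / 4;
    const size_t blocksY = (height + 3) / 4;
    return blocksX * blocksY * bytesPerBlock;
}
```

A 256x256 DXT5 image therefore occupies 64*64 blocks of 16 bytes, and a 5x5 image still pays for 2x2 full blocks.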
I have written a C/C++ implementation of what I term a "compositor" (I come from a video background) to composite/overlay video and graphics on top of a video source. My current compositor implementation is rather naive, and there is room for CPU optimization improvements (e.g. SIMD, threading, etc.).
I've created a high-level diagram of what I am currently doing:
The diagram is self-explanatory. Nonetheless, I'll elaborate on some of the constraints:
The main video always comes served in an 8-bit YUV 4:2:2 packed format
The secondary video (optional) will come served in either an 8-bit YUV 4:2:2 or YUVA 4:2:2:4 packed format.
The output from the overlay must come out in an 8-bit YUV 4:2:2 packed format
Some other bits of information:
The number of graphics inputs will vary; it may (or may not) be a constant value.
The colour format of the graphics can be pinned to either ARGB or YUVA (i.e. I can provide it however you see fit). At the moment, I pin it to YUVA to keep a consistent colour format.
The potential of using OpenGL and accompanying shaders is rather appealing:
No need to reinvent the wheel (in terms of actually performing the composition)
The possibility of using GPU where available.
My concern with using OpenGL is performance. From looking around on the web, my understanding is that a YUV surface would be converted to RGB internally; I would like to minimize the number of colour format conversions and ensure optimal performance. Since I have no prior OpenGL experience, I hope someone can shed some light and suggest whether I'm about to venture down the wrong path.
Perhaps my concern relating to performance is less of an issue when using a dedicated GPU? Do I need to consider separate code paths:
Hardware with GPU(s)
Hardware with only CPU(s)?
Additionally, am I going to struggle when I need to process 10-bit YUV?
You should be able to treat YUV as independent channels throughout. OpenGL shaders will be calling them r, g, and b, but it's just data that can be treated as whatever you want.
Most GPUs will support 10 bits per channel (plus 2 alpha bits). Various GPUs will support 16 bits per channel for all 4 channels, but I'm a little rusty here, so I have no idea how common support for this is. I'm not sure about the 4:2:2 data, but you can always treat it as 3 separate surfaces.
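Treating packed 4:2:2 as three separate surfaces might look like the following sketch. I'm assuming YUYV byte order (Y0 U0 Y1 V0 per two-pixel group) and an even width; other packings such as UYVY would just reorder the indices:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Yuv422Planes {
    std::vector<uint8_t> y, u, v; // u and v are half the width of y
};

// Unpack packed 8-bit 4:2:2 in YUYV order into three planes, which can
// then be uploaded as three single-channel textures (the chroma planes
// at half horizontal resolution).
Yuv422Planes unpackYuyv(const std::vector<uint8_t>& packed, int width, int height)
{
    Yuv422Planes p;
    p.y.reserve(size_t(width) * height);
    p.u.reserve(size_t(width) * height / 2);
    p.v.reserve(size_t(width) * height / 2);
    for (size_t i = 0; i + 3 < packed.size(); i += 4) {
        p.y.push_back(packed[i + 0]);
        p.u.push_back(packed[i + 1]);
        p.y.push_back(packed[i + 2]);
        p.v.push_back(packed[i + 3]);
    }
    return p;
}
```

In the shader, each plane is then just single-channel data; whether you interpret it as luma or chroma is entirely up to your code.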
The number of graphics inputs will vary; it may (or may not) be a constant value.
This is something I'm a little less sure about. Shaders like their workload to be predictable. If your implementation allows you to add each input iteratively, then you should be fine.
As an alternative suggestion, have you looked into OpenCL?
I have several 32-bit (with alpha channel) bitmap images which I'm using as essential information in my game. The slightest change in RGBA values breaks everything, so I can't use lossy compression methods like S3TC.
Are there any feasible lossless compression algorithms I can use with OpenGL? I'm using fragment shaders and I want to use the glCompressedTexImage2D() method to define the texture. I haven't tried compressing a texture with OpenGL using the GL_COMPRESSED_RGBA parameter; is there any chance I can get lossless compression that way?
Texture compression, as opposed to regular image compression, is designed for one specific purpose: being a texture. And that means fast random access of data.
Lossless compression formats do not tend to do well with random access patterns. The major lossless compression formats are some form of RLE or table-based encoding. These are adequate for decompressing the entire dataset at once, but they're terrible when you need to know which memory location holds the value for texel (U,V).
And that question gets asked a lot when accessing textures.
As such, there are no lossless hardware texture compression formats.
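To see what "fast random access" demands, consider how a fixed-rate format like DXT1 locates the block containing a texel: it is a constant-time address computation, whereas an RLE or table-based stream has no closed-form address and would have to be decoded from the start. A sketch of the fixed-rate case:

```cpp
#include <cstddef>

// With a fixed-rate block format such as DXT1 (8 bytes per 4x4 block),
// the byte offset of the block containing texel (u, v) is a closed-form,
// O(1) computation -- exactly what texture-fetch hardware needs.
size_t dxtBlockOffset(size_t u, size_t v, size_t width, size_t bytesPerBlock)
{
    const size_t blocksPerRow = (width + 3) / 4; // round width up to blocks
    return ((v / 4) * blocksPerRow + (u / 4)) * bytesPerBlock;
}
```

Lossless schemes have variable bit rates by nature, so no such formula can exist for them; that is the core conflict with texturing.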
Your options are limited to the following:
Use texture memory as a kind of cache. That is, when you determine that you will need a particular image in this frame, decompress it. This could be done on the CPU or GPU (via compute shaders or the like). Note that for fast GPU decompression, you will have to come up with a compression scheme that takes advantage of parallel execution. Most lossless compression formats are not particularly parallel.
If a particular image has not been used in some time, you put it in a "subject to be reused" pile. And if you need to decompress a new image, you can take the least-recently-used image off of that pile, rather than constantly creating/destroying OpenGL texture objects.
Build your own lossless compression scheme, designed for your specific needs. If you absolutely need exact texel values from the texture, I assume that you aren't using linear filtering when accessing these textures. So these aren't really colors; they're arbitrary information about a texel.
I might suggest field compression (improved packing of your bits in the available space). But without knowing what your data actually is or means, I can't say whether your particular use case is amenable to it.
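As one hypothetical example of field compression (the field names and bit widths below are invented for illustration, not taken from any real use case): if each texel is really a record of small integer fields rather than a color, packing the fields tightly is lossless by construction and can halve the footprint.

```cpp
#include <cstdint>

// Hypothetical per-texel record: a 5-bit material ID, 3 bits of flags,
// and an 8-bit height fit exactly into 16 bits instead of 32.
struct TexelInfo {
    uint8_t materialId; // 5 bits -> 0..31
    uint8_t flags;      // 3 bits -> 0..7
    uint8_t height;     // 8 bits
};

uint16_t packTexel(const TexelInfo& t)
{
    return uint16_t((t.materialId & 0x1F) |
                    ((t.flags & 0x07) << 5) |
                    (uint16_t(t.height) << 8));
}

TexelInfo unpackTexel(uint16_t bits)
{
    return { uint8_t(bits & 0x1F),
             uint8_t((bits >> 5) & 0x07),
             uint8_t(bits >> 8) };
}
```

The packed values can be stored in a GL_R16UI texture and unpacked in the fragment shader with the same shifts and masks; the round trip is exact, which is the whole point here.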
I have read that compressed textures are not readable and are not color-renderable.
Though I have some idea of why this is not allowed, can someone explain it in a little more detail?
What exactly does "not readable" mean? Can I not read from them in a shader using, say, imageLoad? Or can I not even sample from them?
What does "not renderable" mean? Is it because the user is going to see garbage anyway, so it's not allowed?
I have not tried using compressed textures.
Compressed textures are "readable", by most useful definitions of that term. You can read from them via samplers. However, you can't use imageLoad operations on them. Why? Because reading such memory is not a simple memory fetch. It involves fetching lots of memory and doing a decompression operation.
Compressed images are not color-renderable, which means they cannot be attached to an FBO and used as a render target. One might think the reason for this is obvious, but to spell it out: writing to a compressed image requires doing image compression on the fly, and most texture compression formats (or compressed formats of any kind) are not designed to easily handle changing a few values. Not to mention, most compressed texture formats are lossy, so every decompress/write/recompress cycle loses image fidelity.
From the OpenGL Wiki:
Despite being color formats, compressed images are not color-renderable, for obvious reasons. Therefore, attaching a compressed image to a framebuffer object will cause that FBO to be incomplete and thus unusable. For similar reasons, no compressed formats can be used as the internal format of renderbuffers.
So "not color render-able" means that they can't be used in FBOs.
I'm not sure what "not readable" means; it may mean that you can't bind them to an FBO and read from the FBO (since you can't bind them to an FBO in the first place).
What is the easiest format for reading a texture into OpenGL? Are there any good tutorials for loading image formats like JPG, PNG, or raw into an array which can be used for texture mapping (preferably without the use of a library like libpng)?
OpenGL itself knows nothing about common image formats (other than the natively supported S3TC/DXT compressed formats and the like, but they are a different story). You need to expand your source images into RGBA arrays. A number of formats and combinations are supported; you need to choose one that suits you, e.g. GL_ALPHA4 for masks, GL_RGB5_A1 for 1-bit transparency, GL_BGRA/GL_RGBA for full colour, etc.
For me the easiest (not the fastest) way is PNG, for its lossless compression and full alpha support. I read the PNG and write the RGBA values into an array, which I then hand over to OpenGL texture creation. If you don't need alpha, you may as well use JPG or BMP. The pipeline is common: Source -> expanded RGBA array -> OpenGL texture.
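The "expanded RGBA array" step can be sketched as follows (a minimal example, assuming the decoder already gave you tightly packed RGB8; real loaders also handle stride and row order):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Expand a tightly packed RGB8 buffer into the RGBA8 layout expected by
// glTexImage2D(..., GL_RGBA, GL_UNSIGNED_BYTE, data), filling the alpha
// channel with 255 (fully opaque).
std::vector<uint8_t> expandRgbToRgba(const std::vector<uint8_t>& rgb)
{
    std::vector<uint8_t> rgba;
    rgba.reserve(rgb.size() / 3 * 4);
    for (size_t i = 0; i + 2 < rgb.size(); i += 3) {
        rgba.push_back(rgb[i]);
        rgba.push_back(rgb[i + 1]);
        rgba.push_back(rgb[i + 2]);
        rgba.push_back(255);
    }
    return rgba;
}
```

The resulting buffer can be passed directly as the `data` pointer of glTexImage2D with format GL_RGBA and type GL_UNSIGNED_BYTE.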
There is a handy OpenGL texture tutorial available at the link: http://www.nullterminator.net/gltexture.html