Is it possible in OpenGL 4.3 to use imageLoad and imageStore on GL_RGB textures?
The supported formats listed for glBindImageTexture only seem to cover 1-, 2-, and 4-channel textures...
No, it is not. Or at least, not generally.
Exactly one RGB format is supported: r11f_g11f_b10f. The rgb10_a2 and rgb10_a2ui formats are also almost RGB.
But as far as general RGB formats are concerned? No. It should also be noted that the general RGB formats are not required formats for render targets either.
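For completeness, a minimal sketch of using that one supported RGB format (here `tex` is assumed to be a texture already allocated with GL_R11F_G11F_B10F):
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_WRITE, GL_R11F_G11F_B10F);
/* Matching GLSL declaration, for reference:
 *   layout(binding = 0, r11f_g11f_b10f) uniform image2D img;
 */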
Related
I'm porting some OpenGL code from a technical paper to use with Metal. In it, they use a render target with only one channel - a 16-bit float buffer. But then they set blending operations on it like this:
glBlendFunci(1, GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);
With only one channel, does that mean that with OpenGL, the target defaults to being an alpha channel?
Does anyone know if it is the same with Metal? I am not seeing the results I expect and I am wondering if Metal differs, or if there is a setting that controls how single-channel targets are treated with regards to blending.
In OpenGL, image formats are labeled explicitly with their channels. There is only one family of one-channel color formats: GL_R* (obviously with different bitdepths and other info). That is, red-only. And while texture swizzling can make the red channel appear in other components of a texture fetch, that doesn't work for framebuffer writes.
Furthermore, that blend function doesn't actually use the destination alpha. It only uses the source alpha, which has the value the FS gave it. So the fact that the framebuffer doesn't store an alpha is essentially irrelevant.
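As a sketch, here is the full GL state that blend call implies (assuming the default GL_FUNC_ADD blend equation), just to make explicit what gets computed for draw buffer 1:
glEnablei(GL_BLEND, 1);
glBlendEquationi(1, GL_FUNC_ADD);                  /* result = src*srcFactor + dst*dstFactor */
glBlendFunci(1, GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);  /* result = dst * (1 - src.a)             */
Only the source alpha produced by the fragment shader feeds the (1 - src.a) factor, so the single-channel target never needs to store an alpha of its own.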
It looks like glBindImageTexture does not have an image format for 24-bit RGB (3 8-bit channels) images. My texture has an internal format of the type GL_RGB8 (a 24-bit RGB image). Unfortunately I cannot easily change the type of my texture that I'm binding to the image unit at runtime -- is it possible to use a different image format with imageLoad and still access the 24-bit RGB data?
No, you cannot use GL_RGB8 with image load/store. This is because implementations are allowed to support GL_RGB8 by substituting it with GL_RGBA8. But they are also allowed not to do that if they can support 3-component formats directly. So OpenGL as a specification does not know whether the implementation can actually handle 24 bits per pixel or whether it's pretending to do so with a 32-bit texture.
So OpenGL just forces you to do the substitution explicitly.
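A hedged sketch of doing that substitution up front (names like `tex`, `width`, `height`, and `pixels` are placeholders):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
/* Allocate as GL_RGBA8 instead of GL_RGB8 so the texture is image-compatible. */
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);
/* The client data can still be 3-channel; the missing alpha is filled with 1.0. */
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGB, GL_UNSIGNED_BYTE, pixels);
/* Now it can legally be bound to an image unit. */
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA8);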
How does one pass an 8-bit alpha-only texture to GLSL?
You don't say which OpenGL version you're working with. But really, since you're using GLSL, you shouldn't care whether the 8-bit-per-pixel data is in the alpha component or not. What you care about is that your texture data has only one channel, that it's 8 bits per pixel, and that it is accessible through a known component.
GL 3.x+ provides the GL_R8 image format. Before that, you could just use GL_INTENSITY8 (which was removed from core OpenGL 3.1). The difference is that GL_R8 puts the single channel into only the red component, so green and blue will be 0 and alpha will be 1. The intensity format broadcasts the single channel into all four components, so R, G, B, and A will all have the same value.
Your shader doesn't need to be changed. Just get the red component of the sampled value.
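A sketch, assuming GL 3.x+ (`width`, `height`, and `data` are placeholders):
/* Upload the single-channel, 8-bit data as red-only. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, width, height, 0,
             GL_RED, GL_UNSIGNED_BYTE, data);
/* In GLSL, just read the red component:
 *   float a = texture(alphaTex, uv).r;
 */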
In my renderer, I produce an anti-aliased scene on a multisampled FBO, which is blitted to an FBO whose color attachment is a texture. The texture is then read during rendering to the framebuffer.
I'd like to update it so that I get gamma-correct results. The benefit of using an sRGB framebuffer is that it gives me somewhat better color precision by storing nonlinear sRGB values directly in the framebuffer.
What I'm not sure about is what changes should I be making to get this, and what is being changed by the different settings.
It looks like the ARB_framebuffer_sRGB extension only deals with reading and blending operations on sRGB framebuffers. In my situation I'll need to use a texture with an sRGB internal format, which means I'd be using the EXT_texture_sRGB extension... using a linear texture format would disable the sRGB translation.
Edit: But I just saw this:
3) Should the ability to support sRGB framebuffer update and blending
be an attribute of the framebuffer?
RESOLVED: Yes. It should be a capability of some pixel formats
(mostly likely just RGB8 and RGBA8) that says sRGB blending can
be enabled.
This allows an implementation to simply mark the existing RGB8
and RGBA8 pixel formats as supporting sRGB blending and then
just provide the functionality for sRGB update and blending for
such formats.
Now I'm not so sure what to specify for my texture's pixel format.
Okay, and what about renderbuffers? The ARB_framebuffer_sRGB doc does not mention anything about renderbuffers. Is it possible to use glRenderbufferStorageMultisample with an sRGB format, so I can get sRGB blending on that storage?
Also, what is the difference between GL_SRGB_ALPHA and GL_SRGB8_ALPHA8 when specifying the internal format for glTexImage2D?
What I'm not sure about is what changes should I be making to get this
That's because you seem unsure about what you're trying to do.
The key to all of this stuff is to at all times know what your input data is and what your output data is.
Your first step is to know what is stored in each of your textures. Does a particular texture store linear data or data in the sRGB colorspace? If it stores linear data, then use one of the linear image formats. If it stores sRGB colorspace data, then use one of the sRGB image formats.
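As a concrete sketch (assuming GL 4.2+ immutable storage; glTexImage2D with the same internal formats works just as well; `levels`, `w`, and `h` are placeholders):
/* Texture authored in the sRGB colorspace (e.g. a diffuse/albedo map): */
glTexStorage2D(GL_TEXTURE_2D, levels, GL_SRGB8_ALPHA8, w, h);
/* Texture storing linear data (e.g. a normal map or lookup table): */
glTexStorage2D(GL_TEXTURE_2D, levels, GL_RGBA8, w, h);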
This ensures that you are fetching the data you want in your shaders. When it comes time to write/blend them to the framebuffer, you now need to decide how to handle that.
Your screen expects values that have been pre-gamma corrected to the gamma of the display device. As such, if you provide linear values, you will get incorrect color output.
However, sometimes you want to write intermediate values. For example, if you're doing forward or deferred rendering, you will write accumulated lighting to a floating-point buffer, then use HDR tone mapping to boil it down to a [0, 1] image for display. Post-processing passes can then be applied on top of that. Only the final [0, 1] outputs need to go to images in the sRGB colorspace.
When writing linear RGB values that you want converted into sRGB, you must enable GL_FRAMEBUFFER_SRGB. This is a special enable (note that textures don't have a way to turn off sRGB decoding) because sometimes you want to write values that are already in sRGB. This is often the case for GUI widgets, which were designed and built using colors already in the sRGB colorspace.
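A sketch of what that toggle looks like in practice (assuming an sRGB-capable color attachment is already bound):
/* Linear shader outputs: let the hardware encode to sRGB on write/blend. */
glEnable(GL_FRAMEBUFFER_SRGB);
/* ... render the scene ... */

/* GUI colors that are already sRGB: turn the conversion off. */
glDisable(GL_FRAMEBUFFER_SRGB);
/* ... render the UI ... */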
I cover issues relating to writing gamma-correct values and reading them from textures in my tutorial series. The first one explains why gamma is important and does explicit gamma correction in the shader. The second covers how to use sRGB images, both in textures and framebuffers.
Okay, and what about renderbuffers? the ARB_framebuffer_sRGB doc does not mention anything about renderbuffers.
And why would it? ARB_framebuffer_sRGB is only interested in the framebuffer and the nature of images in it. It neither knows nor cares where those images come from. It doesn't care if it's talking about the default framebuffer, a texture attached to an FBO, a renderbuffer attached to an FBO, or something entirely new someone comes up with tomorrow.
The extension states what happens when the destination image is in the sRGB colorspace and when GL_FRAMEBUFFER_SRGB is enabled. Where that image comes from is up to you.
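So yes, an sRGB renderbuffer is just a renderbuffer with an sRGB internal format. A sketch (`rbo`, `w`, and `h` are placeholders):
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, 4, GL_SRGB8_ALPHA8, w, h);
glFramebufferRenderbuffer(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rbo);
/* Writes/blends to this attachment convert to and from linear
 * when GL_FRAMEBUFFER_SRGB is enabled. */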
Also, what is the difference between GL_SRGB_ALPHA and GL_SRGB8_ALPHA8 when specifying the internal format for glTexImage2D?
One is sized. The other is not. In theory, GL_SRGB_ALPHA could give you any bitdepth the implementation wanted. It could give you 2 bits per component. You're giving the implementation freedom to pick what it wants.
In practice, I doubt you'll find a difference. That being said, always use sized internal formats whenever possible. It's good to be specific about what you want, and to prevent the implementation from doing something stupid. OpenGL even specifies some sized formats that implementations are explicitly required to support.
I have a rendering context with a color depth of 32 bits. I will be using some alpha blending, but not much, so using 24-bit images for most of the textures I need will greatly reduce the memory requirements. My question is, can I use 24-bit RGB textures with a 32-bit rendering context and expect the same performance as a 32-bit ARGB texture? I understand that the internal format is probably neither to begin with, but the target format of the RC is 32-bit ARGB.
Also, I am planning on using some form of texture compression. The platform will be Windows, exclusively. Which would provide the best compression and widest compatibility? I am also hoping to use 24-bit compressed textures since I won't be using the alpha bits; but the rendering context will remain 32 bits.
It looks like DXT5 should work for you.
Here are the details for GL_EXT_texture_compression_s3tc
There is no problem at all using any texture format as an input texture when rendering to a 32-bit render context. A texture format is either supported by the platform or not; if it is supported, you can use it without any consideration of the render context's format.
To compress your 24-bit textures, you could take a look at DXT1 texture compression, which is supported by all OpenGL implementations on Windows PCs. Note that depending on your content and the compression quality you want to obtain, there are 'tricks' that can improve image quality with DXT compression, using alternative color spaces and shader code for decoding; here is an example.
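A sketch of requesting DXT1 compression at upload time, assuming GL_EXT_texture_compression_s3tc is available (`width`, `height`, and `pixels` are placeholders; pre-compressed data would instead go through glCompressedTexImage2D):
/* The driver compresses the 24-bit RGB source to DXT1 on upload. */
glTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
             width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, pixels);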