I have a rendering context with a color depth of 32 bits. I will be using some alpha blending, but not much, so using 24-bit images for most of my textures would greatly reduce memory requirements. My question is: can I use 24-bit RGB textures with a 32-bit rendering context and expect the same performance as a 32-bit ARGB texture? I understand that the internal format is probably neither to begin with, but the target format of the RC is 32-bit ARGB.
Also, I am planning on using some form of texture compression. The platform will be Windows, exclusively. Which format would provide the best compression and the widest compatibility? I am also hoping to use 24-bit compressed textures, since I won't be using the alpha bits, but the rendering context will remain 32-bit.
It looks like DXT5 should work for you.
Here are the details for GL_EXT_texture_compression_s3tc
There is no problem at all using any texture format as an input texture when rendering to a 32-bit render context. A texture format is either supported by the platform or it isn't; if it is supported, you can use it without any consideration of the format of the render context.
To compress your 24-bit textures, you could take a look at DXT1 texture compression, which is supported by all OpenGL implementations on Windows PCs. Note that, depending on your content and the compression quality you want to obtain, there are 'tricks' that improve image quality with DXT compression, such as alternative color spaces and shader code for decoding; here is an example.
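If you end up going the DXT1 route, the upload itself is straightforward. Here is a minimal sketch, assuming GL_EXT_texture_compression_s3tc is available and that width, height, and dxt1_blocks (the pre-compressed block data) are placeholders you provide yourself:

// Sketch: uploading pre-compressed DXT1 data (24-bit RGB, no alpha).
// width, height and dxt1_blocks are placeholders for your own data.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

// DXT1 packs each 4x4 pixel block into 8 bytes.
GLsizei imageSize = ((width + 3) / 4) * ((height + 3) / 4) * 8;

glCompressedTexImage2D(GL_TEXTURE_2D, 0,
                       GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                       width, height, 0, imageSize, dxt1_blocks);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);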
Related
I'm porting some OpenGL code from a technical paper to use with Metal. In it, they use a render target with only one channel - a 16-bit float buffer. But then they set blending operations on it like this:
glBlendFunci(1, GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);
With only one channel, does that mean that with OpenGL, the target defaults to being an alpha channel?
Does anyone know if it is the same with Metal? I am not seeing the results I expect and I am wondering if Metal differs, or if there is a setting that controls how single-channel targets are treated with regards to blending.
In OpenGL, image formats explicitly name their channels. There is only one one-channel color format family: GL_R* (with different bitdepths and other variations), that is, red-only. And while texture swizzling can make the red channel appear in other channels of a texture fetch, that doesn't work for framebuffer writes.
Furthermore, that blend function doesn't actually use the destination alpha. It only uses the source alpha, which has whatever value the fragment shader gave it. So the fact that the framebuffer doesn't store an alpha is essentially irrelevant.
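Spelled out, that call configures the blend equation for color attachment 1 roughly as follows (a sketch of the math only; the glEnablei call is an assumption about how the rest of the pipeline is set up):

// glBlendFunci(1, GL_ZERO, GL_ONE_MINUS_SRC_ALPHA) means, per fragment:
//   new_dst = src * 0 + dst * (1.0 - src_alpha)
// src_alpha is whatever the fragment shader outputs; the stored contents of
// the single-channel (red-only) target contribute only through dst.
glEnablei(GL_BLEND, 1);
glBlendFunci(1, GL_ZERO, GL_ONE_MINUS_SRC_ALPHA);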
It looks like glBindImageTexture does not have an image format for 24-bit RGB (three 8-bit channels) images. My texture has an internal format of GL_RGB8 (a 24-bit RGB image). Unfortunately I cannot easily change the type of the texture I'm binding to the image unit at runtime -- is it possible to use a different image format with imageLoad and still access the 24-bit RGB data?
No, you cannot use GL_RGB8 with image load/store. This is because implementations are allowed to support GL_RGB8 by substituting it with GL_RGBA8, but they are also allowed not to do that if they can support 3-component formats directly. So OpenGL, as a specification, does not know whether the implementation can actually handle 24 bits per pixel or whether it's pretending to by using a 32-bit texture.
So OpenGL just forces you to do the substitution explicitly.
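In practice that means allocating the texture as GL_RGBA8 yourself and ignoring the alpha channel. A minimal sketch (tex, width, and height are placeholders; glTexStorage2D assumes GL 4.2+, which image load/store already requires):

// Allocate the storage as RGBA8 instead of RGB8 so it is a legal image format.
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, width, height);

// Bind level 0 to image unit 0 for reading; GL_RGBA8 is in the supported list.
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA8);

// In GLSL: layout(binding = 0, rgba8) readonly uniform image2D img;
// then simply ignore the .a component of imageLoad(img, coord).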
I'm investigating Direct3D11 for displaying video output; in particular, I'm trying to figure out if there's a way to give a YUV surface to Direct3D11 and have it automatically (i.e. in hardware) convert it to RGB and present it as such. The documentation of DXGI_MODE_DESC states:
Because of the relaxed render target creation rules that Direct3D 11 has for back buffers, applications can create a DXGI_FORMAT_B8G8R8A8_UNORM_SRGB render target view from a DXGI_FORMAT_B8G8R8A8_UNORM swap chain so they can use automatic color space conversion when they render the swap chain.
What does this "automatic color space conversion" refer to? Is there anything in Direct3D11 that does what I'm looking for or must this be performed either pre-render or through a shader?
When creating a texture in DX11 you can choose from a number of formats that tell shaders how the data is laid out. These formats belong to the DXGI_FORMAT enum. They are basically various configurations of an ARGB color format, so they allow you to specify, for example, B8G8R8A8 or R16G16B16A16. There is, however, no option for a YUV format.
The best thing you can do is pass your YUV data to the shader pipeline, "pretending" that it's RGB, and then perform the conversion to real RGB in the pixel shader. This should be efficient enough, because the conversion is executed on the GPU, in parallel for every visible pixel of your texture.
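For reference, this is roughly the math such a pixel shader would perform, written here as plain C. It assumes 8-bit BT.601 "video range" YUV (Y in 16..235, U/V in 16..240); BT.709 or full-range data needs different constants:

// Rough per-pixel YUV -> RGB conversion (BT.601, video range).
static unsigned char clamp8(float v)
{
    return (unsigned char)(v < 0.0f ? 0.0f : (v > 255.0f ? 255.0f : v));
}

void yuv_to_rgb(unsigned char y, unsigned char u, unsigned char v,
                unsigned char *r, unsigned char *g, unsigned char *b)
{
    float c = (float)y - 16.0f;
    float d = (float)u - 128.0f;
    float e = (float)v - 128.0f;

    *r = clamp8(1.164f * c + 1.596f * e);
    *g = clamp8(1.164f * c - 0.392f * d - 0.813f * e);
    *b = clamp8(1.164f * c + 2.017f * d);
}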
In my renderer, I produce an anti-aliased scene on a multisampled FBO, which is blitted to an FBO whose color attachment is a texture. The texture is then read during rendering to the framebuffer.
I'd like to update it so that I get gamma-correct results. The benefit of using an sRGB framebuffer is that it allows me to have a somewhat better color precision by storing nonlinear sRGB values directly in the framebuffer.
What I'm not sure about is what changes should I be making to get this, and what is being changed by the different settings.
It looks like the ARB_framebuffer_sRGB extension only deals with reading and blending operations on sRGB framebuffers. In my situation I'll need a texture with an sRGB internal format, which means I'd be using the EXT_texture_sRGB extension... using a linear texture format would disable the sRGB translation.
Edit: But I just saw this:
3) Should the ability to support sRGB framebuffer update and blending be an attribute of the framebuffer?

RESOLVED: Yes. It should be a capability of some pixel formats (mostly likely just RGB8 and RGBA8) that says sRGB blending can be enabled.

This allows an implementation to simply mark the existing RGB8 and RGBA8 pixel formats as supporting sRGB blending and then just provide the functionality for sRGB update and blending for such formats.
Now I'm not so sure what to specify for my texture's pixel format.
Okay, and what about renderbuffers? The ARB_framebuffer_sRGB doc does not mention anything about renderbuffers. Is it possible to use glRenderbufferStorageMultisample with an sRGB format, so I can get sRGB blending on that storage?
Also, what is the difference between GL_SRGB_ALPHA and GL_SRGB8_ALPHA8 when specifying the internal format for glTexImage2D?
What I'm not sure about is what changes should I be making to get this
That's because your question seems unsure about what you're trying to do.
The key to all of this is to know, at all times, what your input data is and what your output data is.
Your first step is to know what is stored in each of your textures. Does a particular texture store linear data or data in the sRGB colorspace? If it stores linear data, then use one of the linear image formats. If it stores sRGB colorspace data, then use one of the sRGB image formats.
This ensures that you are fetching the data you want in your shaders. When it comes time to write/blend them to the framebuffer, you now need to decide how to handle that.
Your screen expects values that have been pre-gamma corrected to the gamma of the display device. As such, if you provide linear values, you will get incorrect color output.
However, sometimes you want to write intermediate values. For example, if you're doing forward or deferred rendering, you will write accumulated lighting to a floating-point buffer, then use HDR tone mapping to boil it down to a [0, 1] image for display. Post-processing passes can be applied along the way. Only the final outputs to [0, 1] need to go to images in the sRGB colorspace.
When writing linear RGB values that you want converted into sRGB, you must enable GL_FRAMEBUFFER_SRGB. This is a special enable (note that textures don't have a way to turn off sRGB decoding) because sometimes, you want to write values that already are in sRGB. This is often the case for GUI interface widgets, which were designed and built using colors already in the sRGB colorspace.
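Putting those pieces together, a minimal sketch might look like this (fbo, width and height are placeholders; error and completeness checks are omitted):

// An sRGB color attachment: values are stored sRGB-encoded,
// but reads/blends happen in linear space when GL_FRAMEBUFFER_SRGB is enabled.
GLuint colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, colorTex, 0);

glEnable(GL_FRAMEBUFFER_SRGB);   // linear shader output -> sRGB on write/blend
// ... draw lit geometry ...
glDisable(GL_FRAMEBUFFER_SRGB);  // GUI etc. already authored in sRGB
// ... draw UI ...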
I cover issues relating to writing gamma-correct values and reading them from textures in my tutorial series. The first one explains why gamma is important and does explicit gamma correction in the shader. The second link covers how to use sRGB images, both in textures and in framebuffers.
Okay, and what about renderbuffers? the ARB_framebuffer_sRGB doc does not mention anything about renderbuffers.
And why would it? ARB_framebuffer_sRGB is only interested in the framebuffer and the nature of images in it. It neither knows nor cares where those images come from. It doesn't care if it's talking about the default framebuffer, a texture attached to an FBO, a renderbuffer attached to an FBO, or something entirely new someone comes up with tomorrow.
The extension states what happens when the destination image is in the sRGB colorspace and when GL_FRAMEBUFFER_SRGB is enabled. Where that image comes from is up to you.
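So a multisampled sRGB renderbuffer is set up just like any other; a sketch, with samples, width and height as placeholders:

GLuint rbo;
glGenRenderbuffers(1, &rbo);
glBindRenderbuffer(GL_RENDERBUFFER, rbo);
glRenderbufferStorageMultisample(GL_RENDERBUFFER, samples,
                                 GL_SRGB8_ALPHA8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                          GL_RENDERBUFFER, rbo);
// With GL_FRAMEBUFFER_SRGB enabled, update/blend into this attachment
// is performed in linear space and encoded to sRGB on store.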
Also, what is the difference between GL_SRGB_ALPHA and GL_SRGB8_ALPHA8 when specifying the internal format for glTexImage2D?
One is sized. The other is not. In theory, GL_SRGB_ALPHA could give you any bitdepth the implementation wanted. It could give you 2 bits per component. You're giving the implementation freedom to pick what it wants.
In practice, I doubt you'll find a difference. That being said, always use sized internal formats whenever possible. It's good to be specific about what you want, and it prevents the implementation from doing something stupid. OpenGL even has a list of sized formats that implementations are required to support exactly as specified.
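Concretely, the difference is just this (a minimal illustration; w, h and data are placeholders):

// Unsized: the implementation picks the actual bit depth.
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB_ALPHA,   w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);
// Sized: explicitly request 8 bits per component.
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, data);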
I want to mix two (or more) 16-bit audio streams using OpenGL, and I need a bit of help.
Basically, what I want to do is put the audio data into a texture, draw that to a framebuffer object, and then read it back. That part is not a problem; however, drawing the data in a way that gives correct results is a bit more problematic.
I have basically two questions.
1. In order to mix the data by drawing, I need to use blending (alpha = 0.5), but the result should not have any alpha channel. So if I render to, say, a framebuffer with an RGB format, will alpha blending still work as I expect, with the resulting alpha simply not written to the FBO? (I want to avoid having to read back the FBO for each render pass.)
texture              |sR|sG|sB|
framebuffer (before) |dR|dG|dB|
framebuffer (after)  |dR*0.5+sR*0.5|dG*0.5+sG*0.5|dB*0.5+sB*0.5|
2. The audio samples are signed 16-bit integer values. Is it possible to do signed calculations this way? Or will I need to first convert the values to unsigned on the CPU, draw them, and then make them signed again on the CPU?
EDIT:
I was a bit unclear. My hardware is restricted to OpenGL 3.3. I would prefer not to use CUDA or OpenCL, since I'm already using OpenGL for other things.
Each audio sample will be rendered in a separate pass, which means it has to "mix" with what has already been rendered to the framebuffer. The problem is how the output from the pixel shader is written to the framebuffer (this blending is not accessible through programmable shaders, as far as I know; one has to use glBlendFunc).
EDIT2:
Each audio sample will be rendered in a separate pass, so only one audio sample will be available in the shader at a time, which means they need to be accumulated in the FBO:
foreach (var audio_sample in audio_samples)
    draw(audio_sample);

and not

for (int n = 0; n < audio_samples.size(); ++n)
{
    glActiveTexture(GL_TEXTURE0 + n);
    glBindTexture(GL_TEXTURE_2D, audio_samples[n]);
}
draw_everything();
Frankly, why wouldn't you just use programmable pixel shaders for that?
Do you have to use the OpenGL 1 fixed-function pipeline?
I'd just go with a programmable shader operating on signed 16-bit grayscale linear textures.
Edit:
foreach(var audio_sample in audio_samples)
blend FBO1 + audio_sample => FBO2
swap FBO2, FBO1
It ought to be just as fast, if not faster (thanks to streaming pipelines).
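In OpenGL terms, that loop might look roughly like this (a sketch only; fbo, acc_tex, sample_tex, num_samples and draw_fullscreen_quad are placeholder names, and the shader itself does the 50/50 mix):

int src = 0, dst = 1;
for (int i = 0; i < num_samples; ++i)
{
    glBindFramebuffer(GL_FRAMEBUFFER, fbo[dst]);

    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, acc_tex[src]);   // running mix so far
    glActiveTexture(GL_TEXTURE1);
    glBindTexture(GL_TEXTURE_2D, sample_tex[i]);  // current audio sample

    draw_fullscreen_quad();   // fragment shader outputs 0.5*acc + 0.5*sample

    // the new result becomes the input of the next pass
    int tmp = src; src = dst; dst = tmp;
}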
I agree with QDot. Could you, however, tell us a bit about the hardware restrictions you are facing? If you have reasonably modern hardware, I might even suggest going the CUDA or OpenCL route instead of going through OpenGL.
You should be able to do blending even if the destination buffer does not have alpha. That said, rendering to formats whose pixel size is not a power of two (RGB16 = 6 bytes/pixel) usually incurs performance penalties.
Signed is not your typical render target format, but it does exist in the OpenGL 4.0 specification (Table 3.12, called RGB16_SNORM or RGB16I, depending on whether you want a normalized representation or not).
As a side note, you also have glBlendFunc(GL_CONSTANT_ALPHA,GL_ONE_MINUS_CONSTANT_ALPHA) to not even have to specify an alpha per-pixel. That may not be available on all GL implementations though.
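For the blend-into-one-FBO approach from the question, that would look roughly like this (a sketch; check on your target hardware whether constant-alpha blending is supported, as noted above):

// 50/50 mix via constant alpha: no alpha channel needed in the source
// texture or in the RGB render target.
glEnable(GL_BLEND);
glBlendColor(0.0f, 0.0f, 0.0f, 0.5f);
glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
// result = 0.5 * incoming_sample + 0.5 * current_framebuffer_contents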