I’m rendering to a DIB section with OpenGL on Windows XP and want to use blending. Specifically, I want to multiply the source and destination colour components together, as in:
glEnable(GL_BLEND);
glBlendFunc(GL_DST_COLOR, GL_ZERO);
However, it fails to draw a blended image. By changing the type of blending I ask for, I can make it draw as if without blending, or not draw at all. But it refuses to blend.
Here are details about the OpenGL version I’m using:
Vendor: Microsoft Corporation
Renderer: GDI Generic
Version: 1.1.0
Extensions: GL_WIN_swap_hint GL_EXT_bgra GL_EXT_paletted_texture
I was aware that I’m limited to “generic” (software) rendering with DIB sections, but I did not expect blending to fail. I have searched for confirmation about whether blending is or is not supported in such cases, but to no avail.
glBlendFunc(GL_DST_COLOR, GL_ZERO);
^^^ oh?
Transparency, Translucency, and Blending:
15.060 I want to use blending but can’t get destination alpha to work. Can I blend or create a transparency effect without destination alpha?
Many OpenGL devices don't support destination alpha. In particular, the OpenGL 1.1 software rendering libraries from Microsoft don't support it. The OpenGL specification doesn't require it.
Also:
No Alpha in the Framebuffer:
If you are doing Blending and you need a destination alpha, you need to make sure that your render target has one. This is easy to ensure when rendering to a Framebuffer Object. But with a Default Framebuffer, it depends on how you created your OpenGL Context.
For example, if you are using GLUT, you need to make sure you pass GLUT_ALPHA to the glutInitDisplayMode function.
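In the DIB-section case from the question, that comes down to the pixel format the context is created with. A rough sketch of requesting alpha bits (field values are illustrative, and the generic renderer may still not provide destination alpha):
/* Sketch: asking for destination alpha when setting up a DIB-section pixel format. */
PIXELFORMATDESCRIPTOR pfd = {0};
pfd.nSize      = sizeof(pfd);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_BITMAP | PFD_SUPPORT_OPENGL | PFD_SUPPORT_GDI;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 32;
pfd.cAlphaBits = 8;    /* request destination alpha; not guaranteed to be honoured */
pfd.cDepthBits = 24;
pfd.iLayerType = PFD_MAIN_PLANE;
int format = ChoosePixelFormat(hdc, &pfd);  /* hdc: the memory DC the DIB section is selected into */
SetPixelFormat(hdc, format, &pfd);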
OK, I made a silly mistake: I misread my own script code and ended up applying the texture in the wrong rendering pass. OpenGL wasn’t to blame, and blending DOES work.
Related
First, some context:
The 3D engine I wrote for my game allows me to switch between DirectX 9 and OpenGL, thanks to an intermediate API layer.
Both allow the user to enable multisampling (via GL_ARB_multisample for OpenGL, D3DMULTISAMPLE_x_SAMPLES for DirectX). Multisampling is enabled for the game window buffer.
The models for my characters use one big texture atlas, so I disabled mipmapping there in order to avoid texture bleeding.
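For reference, this is roughly how the two back-ends get multisampling turned on in my layer (a simplified sketch; the sample count and variable names are illustrative):
/* OpenGL: the pixel format / FBO must already have sample buffers, then: */
glEnable(GL_MULTISAMPLE_ARB);    /* GL_MULTISAMPLE in core GL 1.3+ */
/* Direct3D 9: requested in the present parameters when creating the device: */
D3DPRESENT_PARAMETERS pp = {0};
pp.MultiSampleType    = D3DMULTISAMPLE_4_SAMPLES;
pp.MultiSampleQuality = 0;
pp.SwapEffect         = D3DSWAPEFFECT_DISCARD;   /* multisampling requires the DISCARD swap effect */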
I get the following results:
As expected, I get the same result in DirectX and OpenGL when multisampling is disabled.
As expected, I get edge smoothing on polygons when multisampling is enabled in both.
However, in OpenGL, multisampling also seems to have an effect akin to texture filtering, presumably because the texture is sampled at several positions per pixel. The result is comparable to what mipmapping would achieve, but without texture bleeding, which is obviously great. DirectX, on the other hand, doesn't seem to provide this benefit: the texture mapping isn't anti-aliased, and it looks the same as when multisampling is disabled.
I would very much like to know if there is anything I can do to get the same result in DirectX as in OpenGL. Maybe I don't know the right keywords, but I haven't been able to find documentation that covers this specific aspect of multisampling.
I have a transparent OpenGL texture which has some simple shapes drawn on it by OpenGL:
circles, polygons, and lines. They are drawn without anti-aliasing, multisampling, etc., so they have jagged edges.
I don't have access to the texture-creation process, so I cannot enable multisampling there.
Is there a way to smooth those shapes AFTER the drawing is done?
There are image-based anti-aliasing filters such as FXAA and MLAA that will work in this situation. I hesitate to call them anti-aliasing because they do not really avoid aliasing; they just hide it after the fact. They are more akin to intelligent blur filters.
I know from your other question that you do not want to use FBOs, which leads me to believe you are on an OpenGL 2.1 or older codebase. FXAA can be implemented in GLSL 1.20, but it works better in 1.30 (GL 3.0). The one thing I am unsure about is using FXAA on an image that includes transparency: FXAA expects luminance to be encoded in the alpha channel (or sRGB, which is not a GL 2.1 feature).
You will probably not want to apply FXAA to your texture directly; rather, you would need to draw into a PBuffer and apply FXAA after blending your input texture.
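For what it's worth, the luma that stock FXAA expects is usually written into the alpha channel by a small pre-pass over the input, which is exactly where a real alpha channel gets in the way. A rough sketch in GLSL 1.20, written here as a C string literal (the uniform name is made up):
static const char *luma_prepass_fs =
    "#version 120\n"
    "uniform sampler2D u_input;\n"
    "void main() {\n"
    "    vec4 c = texture2D(u_input, gl_TexCoord[0].st);\n"
    "    float luma = dot(c.rgb, vec3(0.299, 0.587, 0.114));\n"
    "    gl_FragColor = vec4(c.rgb, luma);   /* FXAA reads luma from alpha */\n"
    "}\n";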
I'm using GLFW3 to create a context, and I've noticed that the GLFW_SRGB_CAPABLE hint seems to have no effect. Regardless of what I set it to, I always get sRGB conversion when GL_FRAMEBUFFER_SRGB is enabled. My understanding is that when GL_FRAMEBUFFER_SRGB is enabled, you should get sRGB conversion only if the framebuffer has an sRGB format. To add to the confusion, if I query GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING I get GL_LINEAR regardless of what I set GLFW_SRGB_CAPABLE to. This doesn't appear to be an issue with GLFW: I created a window and context manually and made sure to set WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB to true.
I'm using an Nvidia GTX 760 with the 340.76 drivers. I'm checking the format like this:
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_FRONT_LEFT, GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &enc);
This should return GL_SRGB, should it not? If the driver applies sRGB correction regardless of what WGL_FRAMEBUFFER_SRGB_CAPABLE_ARB is set to, then isn't Nvidia's driver broken? Has nobody noticed this until now?
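For reference, the setup I'm describing is roughly this (a minimal sketch; window size and title are arbitrary, error handling omitted):
glfwInit();
glfwWindowHint(GLFW_SRGB_CAPABLE, GL_TRUE);   /* request an sRGB-capable default framebuffer */
GLFWwindow *win = glfwCreateWindow(800, 600, "srgb test", NULL, NULL);
glfwMakeContextCurrent(win);
glEnable(GL_FRAMEBUFFER_SRGB);                /* should only convert if the buffer really is sRGB-capable */
GLint enc = 0;
glGetFramebufferAttachmentParameteriv(GL_FRAMEBUFFER, GL_FRONT_LEFT,
                                      GL_FRAMEBUFFER_ATTACHMENT_COLOR_ENCODING, &enc);
/* expected: GL_SRGB if the hint was honoured, GL_LINEAR otherwise */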
It seems that this is only an issue with the default framebuffer, so it must be a bug in Nvidia's WGL implementation. I've pointed it out to them, and hopefully it will be fixed.
With GLX (Linux), I see the same behaviour: the query reports linear despite the framebuffer clearly rendering as sRGB. One way to verify that it is in fact working is to use an sRGB texture with texel value 1, render it to your sRGB framebuffer, and check that it shows up as a dark-grey square. (For comparison, do the same with a non-sRGB texture, still with texel value 1; that should give a lighter-grey square.)
You can see this example: https://github.com/julienaubert/glsrgb
Interestingly, with an OpenGL ES context, the (almost) same code does not render correctly.
There is a topic about this on Nvidia's OpenGL developer forum:
https://devtalk.nvidia.com/default/topic/776591/opengl/gl_framebuffer_srgb-functions-incorrectly/
At the office we're working with an old GLX/Motif application that uses OpenGL's accumulation buffer to implement anti-aliasing when saving images.
Our problem is that Apple removed the accumulation buffer from all of its drivers (starting from OS X 10.7.5), and some Linux drivers, such as those for Intel HDxxxx GPUs, don't support it either.
I would therefore like to update the software's anti-aliasing code to make it compatible with most current OSs and GPUs, while keeping the generated images as good as they were before (we need them for scientific publications).
Supersampling seems to be the oldest and highest-quality anti-aliasing method, but I can't find any example of SSAA that doesn't use the accumulation buffer. Is there a different way to implement supersampling with OpenGL/GLX?
You can use FBOs to implement the same kind of anti-aliasing that you most likely used with accumulation buffers. The process is almost the same, except that you use a texture/renderbuffer as your "accumulation buffer". You can either use two FBOs for the process, or change the attached render target of a single render FBO.
In pseudo-code, using two FBOs, the flow looks roughly like this:
create renderbuffer rbA
create fboA (will be used for accumulation)
bind fboA
attach rbA to fboA
clear
create texture texB
create fboB (will be used for rendering)
attach texB to fboB
(create and attach a renderbuffer for the depth buffer)
loop over jitter offsets
bind fboB
clear
render scene, with jitter offset applied
bind fboA
bind texB for texturing
set blend function GL_CONSTANT_ALPHA, GL_ONE
set blend color 0.0, 0.0, 0.0, 1.0 / #passes
enable blending
render screen size quad with simple texture sampling shader
disable blending
end loop
bind fboA as read_framebuffer
bind default framebuffer as draw framebuffer
blit framebuffer
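In plain GL calls, the accumulation step of each pass looks roughly like this (a sketch; num_passes and draw_fullscreen_quad are placeholders):
glBindFramebuffer(GL_FRAMEBUFFER, fboA);
glBindTexture(GL_TEXTURE_2D, texB);
glEnable(GL_BLEND);
glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE);                    /* dst += src * constant alpha */
glBlendColor(0.0f, 0.0f, 0.0f, 1.0f / (float)num_passes);
draw_fullscreen_quad();                                    /* textured quad with a simple sampling shader */
glDisable(GL_BLEND);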
Full super-sampling is also possible. As Andon suggested in the comment above, you create an FBO with a render target that is a multiple of your window size in each dimension, and at the end do a down-scaling blit to your window. The whole thing tends to be slow and use a lot of memory, even with just a factor of 2.
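A rough sketch of that super-sampled variant at a factor of 2 (win_w, win_h and the buffer formats are illustrative):
GLuint fbo, color, depth;
glGenFramebuffers(1, &fbo);
glGenRenderbuffers(1, &color);
glGenRenderbuffers(1, &depth);
glBindRenderbuffer(GL_RENDERBUFFER, color);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8, 2 * win_w, 2 * win_h);
glBindRenderbuffer(GL_RENDERBUFFER, depth);
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, 2 * win_w, 2 * win_h);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, color);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depth);
glViewport(0, 0, 2 * win_w, 2 * win_h);
/* ... render the scene once at the doubled resolution ... */
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
glBlitFramebuffer(0, 0, 2 * win_w, 2 * win_h,   /* source: high-resolution FBO */
                  0, 0, win_w, win_h,           /* destination: the window */
                  GL_COLOR_BUFFER_BIT, GL_LINEAR);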
Simple task: draw a fullscreen quad with a texture, nothing more, so we can be sure the texture fills the whole screen. (We will do some more shader magic later.)
Drawing a fullscreen quad with a simple fragment shader was easy, but we have now been stuck for a whole day trying to make it textured. We read plenty of tutorials, but none of them helped: those about SDL mostly use OpenGL 1.x, and those about OpenGL 2.0 are not about texturing, or not about SDL. :(
The code is here. Everything is in colorLUT.c, and the fragment shader is in colorLUT.fs. The result is a window of the same size as the image, and if you comment out the last line of the shader, you get a nice red/green gradient, so the shader itself is fine.
Texture initialization hasn't changed since OpenGL 1.4, so older tutorials will work fine.
If the fragment shader works but you don't see the texture (and get a black screen), texture loading is broken or the texture hasn't been set up correctly. Disable the shader and try displaying a textured polygon with the fixed-function pipeline, as in the sketch below.
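A quick sanity check along those lines could look like this (a sketch; "tex" stands for your texture object, and the coordinates assume an identity projection):
glUseProgram(0);                     /* turn the shader off */
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tex);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(-1.0f, -1.0f);
    glTexCoord2f(1.0f, 0.0f); glVertex2f( 1.0f, -1.0f);
    glTexCoord2f(1.0f, 1.0f); glVertex2f( 1.0f,  1.0f);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(-1.0f,  1.0f);
glEnd();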
You may want to call glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before initializing the texture; the default value is 4.
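For example (the format and data arguments are illustrative):
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   /* rows of tightly packed RGB data are not 4-byte aligned */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);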
An easier way to align the texture to the screen is to add a vertex shader and pass texture coordinates through it, instead of trying to calculate them from gl_FragCoord.
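A minimal shader pair for that approach might look like this (GLSL 1.10-style, using the fixed-function attributes, sketched as C string literals):
static const char *quad_vs =
    "void main() {\n"
    "    gl_TexCoord[0] = gl_MultiTexCoord0;   /* pass the texture coordinate through */\n"
    "    gl_Position    = gl_Vertex;           /* quad vertices already in clip space */\n"
    "}\n";
static const char *quad_fs =
    "uniform sampler2D tex;\n"
    "void main() {\n"
    "    gl_FragColor = texture2D(tex, gl_TexCoord[0].st);\n"
    "}\n";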
You're passing the surface size into the "resolution" uniform. This is an error; you should be passing the viewport size instead.
You may want to generate mipmaps: either generate them yourself, or use GL_GENERATE_MIPMAP, which is available in OpenGL 2 (but has been deprecated in later versions).
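For example (the upload arguments are illustrative):
glTexParameteri(GL_TEXTURE_2D, GL_GENERATE_MIPMAP, GL_TRUE);   /* available since GL 1.4, deprecated in GL 3.0 */
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0,
             GL_RGB, GL_UNSIGNED_BYTE, pixels);                /* mipmaps are built on upload */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);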
OpenGL.org has specifications for OpenGL 2.0 and GLSL 1.5. Download them and use them as a reference when in doubt.
The NVIDIA OpenGL SDK has examples you may want to check; they cover shaders.
And there's the OpenGL "Orange Book" (OpenGL Shading Language), which deals specifically with shaders.
Next time, include the code in the question itself.