Setting up mipmapping and anti-aliasing/filtering with normal SCNMaterials is pretty straightforward. In my case, my materials contain text and intricate line patterns, so mipmapping is really necessary.
However, I need to do this with an existing custom OpenGL texture that is applied through a SCNMaterial with a custom SCNProgram. An SCNProgram overrides normal SceneKit rendering, and setting up the mipmaps as described above has zero effect.
How can I go about achieving the same mipmap effect on a custom OpenGL texture in SceneKit?
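For reference, the standard GL-side mipmap setup that works for an ordinary texture outside SceneKit looks something like the sketch below (the function name and texture id are placeholders of mine):

    #include <GL/gl.h>

    /* Plain-GL trilinear mipmap setup for an existing texture; `tex` is a
       hypothetical id. glGenerateMipmap needs GL 3.0 / ARB_framebuffer_object,
       with entry points beyond GL 1.1 resolved by a loader. */
    static void enable_trilinear_mipmaps(GLuint tex)
    {
        glBindTexture(GL_TEXTURE_2D, tex);
        glGenerateMipmap(GL_TEXTURE_2D);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    }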
I wish to make a universal OpenGL texture-transparency hack for the DxWnd tool (an open-source program hosted on SourceForge). The hack should work for every program that uses OpenGL to render RGBA textures. DxWnd can hook and redirect all calls from libraries, including opengl32.dll.
I've read and tried to implement all the suggestions about making a texture transparent: enabling GL_BLEND, disabling GL_CULL_FACE, and setting glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). In addition, there's a routine that forces the alpha bits of all texture pixels.
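Concretely, the state I'm forcing amounts to something like this minimal sketch (the function names are mine, and an RGBA8 texel layout is assumed):

    #include <GL/gl.h>
    #include <stddef.h>

    /* Minimal sketch of the blending state described above. */
    static void enable_transparency_state(void)
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glDisable(GL_CULL_FACE);
    }

    /* The "force the alpha bits" routine, applied to RGBA8 data before upload. */
    static void force_alpha(unsigned char *rgba, size_t pixel_count, unsigned char alpha)
    {
        for (size_t i = 0; i < pixel_count; ++i)
            rgba[i * 4 + 3] = alpha; /* overwrite each texel's alpha byte */
    }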
I expected that, once all this was in place, the result would be a semi-transparent scene, but that doesn't happen.
For instance, the following is a 3D scene from GL Hexen II:
and this is the final result, with some textures not transparent and most pixel colors lost:
Just to demonstrate that DxWnd is able to manipulate pixel colors (so this should not be the cause of the problem), this is the same scene with a filter that recolors every texture:
What could be the reason for the problem? How should I fix it? Please be aware that since DxWnd hooks generic programs, it may well encounter OpenGL calls that work against the hack's purpose!
What you want is not generally possible just from hooking onto some other application.
You may be able to force blending to be on. But correct transparent rendering is a fundamentally different task from rendering an opaque scene. Because alpha-blended transparency is based on doing per-triangle blending operations with the background, it only really works if you render everything in a back-to-front order.
But as far as the program is concerned, it thinks it is doing opaque rendering. So it's going to render in the order it sees fit to use. Which for more modern applications is probably front-to-back, to take advantage of early depth testing.
And that's the exact opposite order you need to make transparency work. And there's no generic way to control the order of rendering just by hooking onto a few OpenGL functions.
Furthermore, applications tend to try to avoid rendering parts of the scene that are obviously not visible. So if the application thinks that a particular room is not visible because the door to that room isn't visible, then the room and its contents won't be rendered. So even if you could get the order of rendering correct, you'd also need to make the program change what it renders in order to correctly see through stuff.
It should also be noted that doing alpha blending requires that the fragments being rendered have a useful alpha value. But most fragment computations for opaque surfaces will have an alpha value of 1.0. And thus: no blending. And, unless you're dealing with fixed-function OpenGL rendering, or you're willing to manually patch shaders to add your own alpha uniform values, there's no way to change this from outside of the application.
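One partial workaround for that last point, if a single global opacity is acceptable, is to make the hook substitute constant-alpha blending, which takes its alpha from glBlendColor instead of from the fragment (GL 1.4 / EXT_blend_color; enums and entry points beyond GL 1.1 need glext.h and runtime loading on Windows). A rough sketch, where real_glBlendFunc stands for the trampoline to the original entry point that a hooking framework such as DxWnd provides:

    #include <GL/gl.h>

    /* Hypothetical hook body: ignore the application's blend factors and force
       a fixed global opacity via constant-alpha blending. */
    extern void (*real_glBlendFunc)(GLenum sfactor, GLenum dfactor);

    static void hooked_glBlendFunc(GLenum sfactor, GLenum dfactor)
    {
        (void)sfactor; (void)dfactor;          /* discard the app's choice      */
        glBlendColor(0.0f, 0.0f, 0.0f, 0.5f);  /* 0.5f = desired global opacity */
        real_glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
    }

Note that this does nothing about the draw-order and visibility problems described above.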
I am trying to experiment with different alpha-blending equations for transparent objects in OpenGL, but it looks like fragment shaders only operate on the color of fragments of a single object and can't take into account the scene behind that object.
On the other hand, there doesn't seem to be a way to intercept the blending stage with arbitrary GLSL code; for example, I can't think of a way to reproduce the soft-light blend mode with the current OpenGL primitives.
Is there a way to reconcile these?
There are a couple of relatively well-supported extensions:
KHR_blend_equation_advanced - implements common blending modes (including soft light).
EXT_shader_framebuffer_fetch - provides destination color from the framebuffer for fully custom blending in the shader.
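With the first extension, using a mode like soft light is mostly a matter of declaring support in the fragment shader and switching the blend equation. A minimal sketch, assuming the extension is advertised and its enums are available via glext.h (ES 3.1 shown; desktop GL works analogously with the same extension):

    #include <GL/gl.h>

    /* Fragment shader declaring advanced-blend support (KHR_blend_equation_advanced). */
    static const char *frag_src =
        "#version 310 es\n"
        "#extension GL_KHR_blend_equation_advanced : require\n"
        "layout(blend_support_softlight) out;\n"
        "precision mediump float;\n"
        "out vec4 color;\n"
        "void main() { color = vec4(1.0, 0.5, 0.25, 0.8); }\n";

    static void setup_softlight_blend(void)
    {
        glEnable(GL_BLEND);
        glBlendEquation(GL_SOFTLIGHT_KHR); /* advanced modes replace the classic blend func */
        /* glBlendBarrierKHR() is required between overlapping draws unless the
           _coherent variant of the extension is supported. */
    }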
Blending is still one of the few parts of the fragment pipeline that is a hardwired circuit on the GPU, hence not programmable. Your best bet is rendering to a texture and doing the blending in a post-processing pass.
Copy the render target and draw your object with that copy bound as a texture.
If there are many small objects, you can copy just the part of the render target you need.
First pass: draw the object into texture_2, with the render target bound as a texture;
second pass: draw the object into the render target, with texture_2 bound.
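The copy step in GL 2.1-era terms might look like this sketch, where scratch is a hypothetical pre-allocated RGBA texture at least as large as the region behind the object:

    #include <GL/gl.h>

    /* Copy a framebuffer region into a texture for manual blending. */
    static void grab_background(GLuint scratch, int x, int y, int w, int h)
    {
        glBindTexture(GL_TEXTURE_2D, scratch);
        /* Copy the framebuffer region behind the object into the texture... */
        glCopyTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, x, y, w, h);
        /* ...then bind `scratch` in the object's fragment shader and compute the
           custom blend (e.g. soft light) against it manually. */
    }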
First, some context:
The 3D engine I wrote for my game allows me to switch between DirectX 9 and OpenGL, thanks to an intermediate API layer.
Both allow the user to enable multisampling (via GL_ARB_multisample for OpenGL, D3DMULTISAMPLE_x_SAMPLES for DirectX). Multisampling is enabled for the game window buffer.
The models for my characters use one big texture with texture atlases, so I disabled mipmapping there in order to avoid texture bleeding.
I experience the following results:
As I should, I get the same result when disabling multi-sampling for DirectX or OpenGL.
As I should, I correctly get edge smoothing on polygons when enabling multi-sampling for both.
However, in OpenGL, multi-sampling also seems to have an effect akin to texture filtering, probably because it samples the texture at different spots within each pixel; the results are comparable to what mipmapping would achieve, without the texture bleeding - which is obviously great. DirectX, on the other hand, doesn't seem to provide this benefit: the texture mapping isn't anti-aliased and looks the same as when multi-sampling is disabled.
I would very much like to know if there is anything I can do to get the same result in DirectX as in OpenGL. Maybe I am not aware of the right keywords, but I haven't been able to find documentation on this specific aspect of multisampling.
I have a transparent OpenGL texture which has some simple shapes drawn on it by OpenGL:
circles, polygons, and lines. They are drawn without anti-aliasing, multi-sampling, etc., and therefore have jagged borders.
I don't have access to the texture-creation process, so I cannot enable multi-sampling.
Is there a way to make those smooth AFTER drawing is done?
There are image-based anti-aliasing filters such as FXAA and MLAA that will work in this situation. I hesitate to call them anti-aliasing because they do not really avoid aliasing, they just hide it after the fact. They are more akin to intelligent blur filters.
I know from your other question that you do not want to use FBOs, which leads me to believe you are using an OpenGL 2.1 or older codebase. FXAA can be implemented in GLSL 1.20, but it works better in 1.30 (GL 3.0). The one thing I am unsure about is using FXAA on an image that includes transparency: FXAA expects luminance to be encoded in the alpha channel (or sRGB, which is not a GL 2.1 feature).
You will probably not want to apply FXAA to your texture directly; rather, you would need to draw into a PBuffer and apply FXAA after you blend your input texture.
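The overall shape of such a pass, in GLSL 1.20 terms, might be as below. Note that the shader body here is only a placeholder 5-tap neighborhood average, standing in for the actual FXAA 3.11 reference code; the uniform names are mine:

    /* Skeleton of the image-space pass, as a GLSL 1.20 fragment shader in a C
       string; u_texel would be set to 1.0 / texture size. */
    static const char *post_aa_frag_src =
        "#version 120\n"
        "uniform sampler2D u_scene;\n"
        "uniform vec2 u_texel;\n"
        "void main() {\n"
        "    vec2 uv = gl_TexCoord[0].st;\n"
        "    vec4 c = texture2D(u_scene, uv);\n"
        "    c += texture2D(u_scene, uv + vec2( u_texel.x, 0.0));\n"
        "    c += texture2D(u_scene, uv + vec2(-u_texel.x, 0.0));\n"
        "    c += texture2D(u_scene, uv + vec2(0.0,  u_texel.y));\n"
        "    c += texture2D(u_scene, uv + vec2(0.0, -u_texel.y));\n"
        "    gl_FragColor = c / 5.0;\n"
        "}\n";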
I'm fairly new to OpenGL and trying to figure out how to add a post-processing stage to my scene rendering. What I believe I know so far is that I create an FBO, render the scene to that, and then I can render to the back buffer using my post-processing shader with the texture from the FBO as the input.
But where this goes beyond my knowledge is when multisampling gets thrown in. The FBO must be multisampled. That leaves two possibilities: 1. the post-process shader operates 1:1 on subsamples to generate the final multisampled screen output, or 2. the shader must resolve the multiple samples and output a single screen fragment for each screen pixel. How can these be done?
Well, option 1 is supported in the GL via the features brought in by GL_ARB_texture_multisample (in core since GL 3.2). Basically, this introduces new multisample texture types and the corresponding samplers like sampler2DMS, where you can explicitly fetch from a particular sample index. Whether this approach can be used to implement your post-processing effect efficiently, I don't know.
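To illustrate, fetching and trivially averaging the samples of a multisample texture might look like this sketch (the trivial average is just a resolve; a real effect would do its per-sample work inside the loop, and the uniform names are mine):

    /* GLSL 1.50 fragment shader using per-sample access (GL 3.2 /
       ARB_texture_multisample), stored in a C string. */
    static const char *ms_resolve_frag_src =
        "#version 150\n"
        "uniform sampler2DMS u_scene;\n"
        "uniform int u_samples;\n"
        "out vec4 color;\n"
        "void main() {\n"
        "    ivec2 p = ivec2(gl_FragCoord.xy);\n"
        "    vec4 sum = vec4(0.0);\n"
        "    for (int i = 0; i < u_samples; ++i)\n"
        "        sum += texelFetch(u_scene, p, i); // explicit sample index\n"
        "    color = sum / float(u_samples);\n"
        "}\n";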
Option 2 is a little different from what you describe: the shader will not do the multisample resolve. You render into a multisample FBO (you don't need a texture for that; a renderbuffer will do as well) and do the resolve explicitly with glBlitFramebuffer into another, non-multisampled FBO (this time with a texture). That non-multisampled texture can then be used as input for the post-processing, and neither the post-processing nor the default framebuffer needs to be aware of multisampling at all.
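A sketch of that resolve step, with hypothetical FBO ids and dimensions:

    #include <GL/gl.h>

    /* Resolve a multisampled FBO into a single-sampled one (GL 3.0 /
       ARB_framebuffer_object). The source and destination rectangles must
       match exactly for a multisample resolve. */
    static void resolve_msaa(GLuint ms_fbo, GLuint resolve_fbo, int w, int h)
    {
        glBindFramebuffer(GL_READ_FRAMEBUFFER, ms_fbo);
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, resolve_fbo);
        glBlitFramebuffer(0, 0, w, h, 0, 0, w, h, GL_COLOR_BUFFER_BIT, GL_NEAREST);
        /* The texture attached to resolve_fbo is now plain single-sample input
           for the post-processing pass. */
    }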