An OpenGL universal texture transparency hack - C++

I wish to make a universal OpenGL transparent-texture hack for the DxWnd tool (an open-source program hosted on SourceForge). The hack should work for every program that uses OpenGL to render RGBA textures. DxWnd can hook and redirect all calls from libraries, including opengl32.dll.
I've read and tried to implement all the usual suggestions for making a texture transparent: enabling GL_BLEND, disabling GL_CULL_FACE and setting glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). In addition, there's a routine that forces the alpha bits of every texture pixel.
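For reference, this is roughly the state the hack forces before each draw call (standard OpenGL calls; the hooking wrapper itself is omitted):

glEnable(GL_BLEND);                                 // blending on
glDisable(GL_CULL_FACE);                            // don't cull back faces
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // classic alpha blending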
I expected that, once this was done, the result would be a semi-transparent scene, but that doesn't happen.
For instance, the following is a 3D scene from glHexen II:
and this is the final result, with some textures not transparent and most pixel colors lost:
Just to demonstrate that DxWnd is able to manipulate pixel colors (so this should not be the cause of the problem), here is the same scene with a filter that recolors every texture:
What could be the reason for the problem, and how should I fix it? Please be aware that, since DxWnd hooks generic programs, it may easily run into OpenGL calls that work against the hack's purpose!

What you want is not generally possible just from hooking onto some other application.
You may be able to force blending on, but correct transparent rendering is a fundamentally different task from rendering an opaque scene. Because alpha-blended transparency is based on doing per-triangle blending operations with the background, it only really works if you render everything in back-to-front order.
But as far as the program is concerned, it is doing opaque rendering, so it's going to render in whatever order it sees fit. For more modern applications, that is probably front-to-back, to take advantage of early depth testing.
And that's the exact opposite of the order you need to make transparency work. There's no generic way to control the order of rendering just by hooking a few OpenGL functions.
Furthermore, applications tend to try to avoid rendering parts of the scene that are obviously not visible. So if the application thinks that a particular room is not visible because the door to that room isn't visible, then the room and its contents won't be rendered. So even if you could get the order of rendering correct, you'd also need to make the program change what it renders in order to correctly see through stuff.
It should also be noted that alpha blending requires the fragments being rendered to have a useful alpha value, but most fragment computations for opaque surfaces produce an alpha of 1.0. And thus: no blending. Unless you're dealing with fixed-function OpenGL rendering, or you're willing to manually patch shaders to add your own alpha uniform values, there's no way to change this from outside the application.
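To see why an alpha of 1.0 defeats blending, write out what glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) computes per fragment:

result = src.rgb * src.a + dst.rgb * (1 - src.a)

With src.a = 1.0 this collapses to result = src.rgb: the background contributes nothing, exactly as if blending were disabled.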

Can I carry out MSAA for deferred rendering by just rendering the geometry twice?

I have a question about 3D rendering.
Deferred rendering is very powerful, but it's notorious for not playing nicely with MSAA.
I can clearly see why, but I suddenly came up with an idea that might solve that.
It's simple: just do the deferred rendering completely and get the screen image into a texture. This texture (attached to a framebuffer or whatever) is of course not antialiased.
Then comes the further processing: draw the full scene again, but this time the fragment shader looks up the exact same position in the pre-rendered texture using texelFetch() and outputs that. Done.
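A sketch of the second-pass fragment shader I have in mind (GLSL in a C++ string; the sampler name is made up):

const char* resolveFragSrc = R"(
    #version 330 core
    uniform sampler2D preRendered;   // the finished, non-antialiased deferred image
    out vec4 color;
    void main() {
        // fetch the exact same pixel from the pre-rendered texture
        color = texelFetch(preRendered, ivec2(gl_FragCoord.xy), 0);
    }
)";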
It sounds silly, but I think it might work. If we draw the geometry again with the deferred-rendered result as the output color, we are effectively re-rendering the scene with real geometry.
So we can now provide super-sampled depth information, and the GPU should be able to perform MSAA with aliased color but super-sampled depth geometry. (It's similar to picking only the 'center' of a fragment and evaluating it, as in the ordinary MSAA process.)
I'm not sure whether this description makes sense. I tested it using OpenGL, but it made no difference compared to plain deferred rendering.
Does my idea work?
No, your idea does not work.
If you did not render the initial image with multisampling, reading from it later while doing multisampling will not magically create information that doesn't exist in that image.
In your method, every sample which corresponds to a particular pixel in the multisampled rendering will have the same color value. So if two primitives overlap in a pixel, writing to different samples, it won't matter, since both primitives will be generating the same color. All you would be doing is generating multiple different depth values within a pixel, and that doesn't actually contribute to an antialiased output (directly).

Should I enable GL_BLEND once in create()?

I want to enable transparency for my graphic objects. I have found out that just setting alpha to a value between 0 and 1 is not enough, and that I have to call Gdx.gl.glEnable(GL20.GL_BLEND) before calling shapeRenderer.begin(), and then glDisable(GL20.GL_BLEND) after the render call, below shapeRenderer.end(). However, my question is: can I call Gdx.gl.glEnable(GL20.GL_BLEND) in the create method instead of in render, and leave it enabled for the whole game runtime? I once tried not disabling it, and I haven't faced any errors or performance issues. So what are the use cases: when should I use glDisable(GL20.GL_BLEND), and is there another way of setting alpha on shapes without calling that GL function?
Blending in OpenGL is not independent of order. It also doesn't work well with objects that are not convex. As such, you generally don't just throw objects at the GPU with blending on.
Also, having blending enabled has a performance cost. Typically more on mobile hardware than desktop, but it's not exactly free even on desktop hardware.
Therefore, the general rule is to render all opaque surfaces first, then sort the transparent ones back-to-front, then render the transparent ones in that order. Also, when doing the blended rendering, you need to turn off depth writes; depth testing is still needed, but writes will cause problems.
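A minimal sketch of that ordering in raw OpenGL calls (the draw and sort helpers are hypothetical; in libGDX the same calls go through Gdx.gl):

drawOpaqueObjects();                                 // depth writes on, blending off

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);                               // keep the depth *test*, stop depth *writes*
sortBackToFront(transparentObjects);                 // hypothetical helper
drawTransparentObjects();                            // furthest first
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);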

Partially render a 3D scene

I want to partially render a 3D scene, by this I mean I want to render some pixels and skip others. There are many non-realtime renderers that allow selecting a section that you want to render.
Example: a fully rendered image (all pixels rendered) vs. a partially rendered one:
I want to make the renderer skip part of the scene; in that case it would simply not render those areas and save resources (memory/CPU).
If this isn't possible in OpenGL, can someone suggest another open-source renderer? It could even be a software renderer.
If you're talking about rendering rectangular subportions of a display, you'd use glViewport and adjust your projection appropriately.
If you want to decide whether to render or not per pixel, especially with the purely fixed pipeline, you'd likely use a stencil buffer. It does pretty much what the name says: you paint as though spraying through a stencil. It's a per-pixel mask, reliably at least 8 bits per pixel, and it has been supported in hardware for at least the last fifteen years. Amongst other uses, it used to be how you could render a stipple pattern without paying for the 'professional' cards that officially supported glPolygonStipple.
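A minimal sketch of that mask-then-draw pattern (the two draw helpers are hypothetical):

glEnable(GL_STENCIL_TEST);

// pass 1: write 1 into the stencil wherever drawing should be allowed
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // leave the colour buffer alone
drawMaskShape();                                      // hypothetical: covers the pixels to keep
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);

// pass 2: draw the scene only where the stencil holds 1
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawScene();                                          // hypothetical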
With GLSL there is also the discard statement, which immediately ends processing of a fragment and produces no output. The main caveat is that on some GPUs, especially embedded ones, the advice is to prefer returning a colour with an alpha of 0 (assuming that has no effect under your blend mode) if doing so lets you avoid a conditional. Conditionals and discards can otherwise have a strong negative effect on parallelism: fragment shaders are usually implemented on SIMD units that process multiple pixels simultaneously, so any time a shader program looks like it might diverge, there can be a [potentially unnecessary] splitting of tasks. This is very GPU-dependent stuff though, so be sure to profile in real life.
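For illustration, the discard form might look like this (GLSL in a C++ string; the mask texture is an assumption):

const char* fragSrc = R"(
    uniform sampler2D mask;       // hypothetical per-pixel mask texture
    varying vec2 uv;
    void main() {
        if (texture2D(mask, uv).r < 0.5)
            discard;              // this fragment produces no output at all
        gl_FragColor = vec4(1.0); // placeholder shading
    }
)";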
EDIT: as pointed out in the comments, using a scissor rectangle would be smarter than adjusting the viewport. That means both that you don't have to adjust your projection and, equally, that rounding errors in any adjustment can't possibly create seams.
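A sketch of the scissor version (x, y, width and height describe the window-space rectangle you want to keep):

glEnable(GL_SCISSOR_TEST);
glScissor(x, y, width, height);   // pixels outside this rectangle are untouched
drawScene();                      // hypothetical
glDisable(GL_SCISSOR_TEST);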
It's also struck me that an alternative to using the stencil for a strict binary test is to pre-populate the z-buffer with the closest possible value on pixels you don't want redrawn; use the colour mask to draw to the depth buffer only.
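A sketch of that depth-buffer variant (again with hypothetical draw helpers):

glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // write depth only
glDepthFunc(GL_ALWAYS);
drawBlockedRegionsAtNearPlane();                      // hypothetical: geometry at the nearest depth
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glDepthFunc(GL_LESS);
drawScene();                                          // masked pixels now fail the depth test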
You can split the scene and render it in parts; this way you render with less memory consumption and can simply skip unnecessary parts or regions.

C++ OpenGL array of coordinates to draw lines/borders and filled rectangles?

I'm working on a simple GUI for my application in OpenGL, and all I need is to draw a bunch of rectangles with a 1px border around them. Instead of going with glBegin and glEnd for each widget that has to be drawn (which can hurt performance), I'd like to know whether this can be done with some sort of arrays/lists (batched data) of coordinates and their colors.
Requirements:
Rectangles are simply filled, either with one color for all corners or with a color per corner (mainly to form gradients).
Lines/borders are simple, with one color and 1px thickness, but they may not always be closed (i.e., they don't form a loop).
Use of textures/images is excluded. Only geometry data.
Must be compatible with older OpenGL versions (down to version 1.3)
Is there a way to achieve this with some sort of arrays rather than glBegin and glEnd? I'm not sure how to do it for the lines/borders.
I've seen this kind of implementation in Gwen GUI but it uses textures.
Example: jQuery EasyUI Metro Theme
In any case, in modern OpenGL you should refrain from using old-fashioned API calls like glBegin and the like. You should use the purer approach that was introduced with core contexts in OpenGL 3.0. The philosophy behind it is to get much closer to the way modern hardware actually functions. DirectX 10 took this approach, and so, to some extent, did OpenGL ES.
That means no more display lists, no more immediate mode, no more glVertex or glTexCoord. In any case, the drivers were already constructing VBOs behind this API, because the hardware only understands that. So the OpenGL core "initiative" is to reduce OpenGL implementation complexity and let the vendors focus on the hardware instead of producing bad drivers with buggy support.
Considering that, you should go with VBOs: make one interleaved buffer (or multiple separate buffers) to store positions and color information, then bind them to vertex attributes and use a shader combination to render the whole thing. The attributes you declare in the vertex shader are the ones you set up on the bound buffer (with glVertexAttribPointer, or glBindVertexBuffer on newer GL).
There's a good explanation here:
http://www.opengl.org/wiki/Vertex_Specification
The recommended way is then to make one vertex buffer for the whole GUI, with every element simply placed one after another in the buffer; then you can render your whole GUI in one draw call. This is how you will get the best performance.
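A minimal sketch of such an interleaved position+colour buffer (core-profile calls; the struct and variable names are made up):

#include <cstddef>   // offsetof

struct GuiVertex { float x, y; unsigned char r, g, b, a; };   // interleaved layout

GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertexCount * sizeof(GuiVertex), vertices, GL_STATIC_DRAW);

// attribute 0 = position, attribute 1 = colour (locations assumed)
glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, sizeof(GuiVertex), (void*)0);
glVertexAttribPointer(1, 4, GL_UNSIGNED_BYTE, GL_TRUE, sizeof(GuiVertex),
                      (void*)offsetof(GuiVertex, r));
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);

glDrawArrays(GL_TRIANGLES, 0, vertexCount);   // the whole GUI in one call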
If your GUI has dynamic elements, that is no longer possible, except by using glBufferSubData or the like, which has complex performance implications. You're better off cutting your vertex buffer into as many buffers as are needed to separate the independent parts; then you can modify uniforms between draw calls at will to configure whatever change of look the dynamic parts need.

OpenGL to DirectX translation - alpha blending

I'm trying to translate an OpenGL renderer into DirectX9. It mostly seems to work, but the two don't seem to agree on the settings for alpha blending. In OpenGL, I'm using:
glDepthFunc(GL_LEQUAL);
glBlendFunc(GL_SRC_ALPHA,GL_ONE_MINUS_SRC_ALPHA);
glEnable(GL_BLEND);
and never actually setting the GL_DEST_ALPHA, so it's whatever the default is. This works fine. Translating to DirectX, I get:
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_INVSRCALPHA);
which should do about the same thing, but totally doesn't. The closest I can get is:
device->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_DESTALPHA);
which is almost right, but if the geometry overlaps itself, the alpha in front overrides the alpha in back, and makes the more distant faces invisible. For the record, the other potentially related render states I've got going on are:
device->SetRenderState(D3DRS_LIGHTING, FALSE);
device->SetRenderState(D3DRS_ZENABLE, TRUE);
device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
device->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_MODULATE);
At this point, I feel like I'm just changing states at random to see which combination gives the best results, but nothing is working as well as it did in OpenGL. Not sure what I'm missing here...
The alpha blending itself is performed correctly; otherwise, every particle would look strange. The reason some parts of some particles are not drawn is that they are behind the transparent parts of other particles.
To solve this problem you have two options:
Turn off ZWriteEnable for the particles. With that, every object drawn after a particle will appear in front of it. This could lead to problems if you have objects that should actually be behind the particles but are drawn afterwards.
Enable alpha testing for the particles. Alpha testing is a technique that removes transparent pixels (given a certain threshold) from the render target, and this includes the ZBuffer. (See the render-state sketch after this list.)
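In render-state terms, the two options look roughly like this (the ALPHAREF threshold value is an assumption):

// option 1: stop the particles from writing depth
device->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);

// option 2: alpha-test away (nearly) transparent pixels
device->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
device->SetRenderState(D3DRS_ALPHAREF, 0x80);                 // threshold; assumption
device->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL); // keep pixels at or above it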
By the way: when rendering transparent objects, it is almost always necessary to sort them to resolve ZBuffer issues. The above solutions work only for some special cases.