OpenGL transparent effects display quite badly on MeeGo - c++

We've been rendering several half-transparent 3D cubes in a scene with OpenGL. It displays very well on Windows 7 and Fedora 15, but looks quite awful on MeeGo.
This is what it looks like on my Fedora 15 system:
This is what it looks like on MeeGo. We changed the color of the line ourselves; otherwise the cubes you see would look even more pathetic:
The effect is implemented simply with the usual glColor4f call, and the cubes are made transparent just by setting the alpha value. How could it turn out like this?
We have tried both freeglut and OpenGLUT on the MeeGo system, and neither displays any better.
I've even tried to implement this with an engine like Irrlicht instead, but then there was nothing but black on the screen when the zBuffer argument of beginScene was set to false (and it looked normal when set to true, but that is not what we want).
This should not be a problem with the graphics card or the driver, because we've seen a 3D game featuring a transparent ball running on the very same netbook and system.
We haven't been able to find the reason. Could anyone offer any help on why this is happening?

It sounds as if you may be relying on default settings (or behavior), which may be different between platforms.
Are you explicitly setting any of OpenGL's blend properties, such as glBlendFunc? If you are, it may help to post the relevant code that does this.
One of the comments mentioned sorting your transparent objects. If you aren't, that's something you might want to consider to achieve more accurate results. In either case, that behavior should be the same from platform to platform so I would have guessed that's not your issue.
Edit:
One other thought. Are you enabling face culling (glEnable(GL_CULL_FACE) / glCullFace)? It could be that your transparent faces are being culled because of their vertex winding.
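If you want to rule culling out quickly, here is a minimal sketch of that check (assuming legacy fixed-function GL, as in the question; the exact settings are illustrative):

// Quick test: draw the transparent cubes with culling disabled entirely.
glDisable(GL_CULL_FACE);
// Once the winding of all faces is known to be consistent, re-enable it explicitly:
// glEnable(GL_CULL_FACE);
// glFrontFace(GL_CCW);   // declare counter-clockwise faces as front-facing
// glCullFace(GL_BACK);   // and cull only the back faces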

We have tried both freeglut and OpenGLUT on the MeeGo system, and neither displays any better.
Those are just simple windowing frameworks and have no effect whatsoever on the OpenGL execution.
Somewhere in your blending code you're messing up. From the looks of the correct rendering, I'd say your blend function there is glBlendFunc(GL_ONE, GL_ONE), while on MeeGo it's something like glBlendFunc(GL_SRC_ALPHA, GL_ONE).
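For what it's worth, a minimal sketch of setting the blend state explicitly every frame, instead of trusting platform defaults (the factors and color shown here are assumptions; pick whichever look you actually intend):

glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // ordinary alpha blending; use GL_ONE, GL_ONE for an additive look
glDepthMask(GL_FALSE);               // don't write depth for transparent geometry
glColor4f(0.2f, 0.6f, 1.0f, 0.35f);  // the alpha value only has an effect once blending is on
// ... draw the cubes here, sorted back to front ...
glDepthMask(GL_TRUE);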

Related

Supersampling AA with PyOpenGL and GLFW

I am developing an application with OpenGL+GLFW, with Linux as the target platform.
The default rasterizing has VERY strong aliasing. I have implemented FXAA on top of my pipeline, and I still get pretty strong aliasing. Especially when there's some kind of animation or movement, the edges of meshes flicker. This literally renders the whole project useless.
So I thought I would also add supersampling. I have been trying to implement it for two weeks already and still can't make it work; I'm starting to think it's not possible with the combination of PyOpenGL + GLFW + Ubuntu 18.04.
So, the question is: can I do supersampling by hand (without OpenGL extensions)? At the end of my (deferred) rendering pipeline I save all the data from the different passes to the hard drive, so I thought I would do something like this:
Render the image at 2x/3x resolution to a texture.
Save the texture buffer to an array.
Get the average pixel value from each 2x2/3x3/4x4 block of this array.
Save it to the hard drive.
Obviously, it's going to be slower than multisampling with the OpenGL extensions and require more memory, but I don't need high fps and I have a pretty small resolution (like 480x640 or similar), so it might work out.
Do you guys have any thoughts about it? I would be glad of any advice.
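A rough sketch of the manual downsample step described above (written in plain C++/desktop GL here rather than PyOpenGL, but the calls map one-to-one; the function name and the fixed RGB format are assumptions, and the high-resolution FBO setup is elided):

#include <GL/gl.h>
#include <vector>
#include <cstdint>

// Read back a render that is `factor` times larger than the target size and
// box-filter it down to outW x outH by averaging each factor x factor block.
std::vector<std::uint8_t> downsample(int outW, int outH, int factor)
{
    const int inW = outW * factor, inH = outH * factor;

    // 1. Read the high-resolution image back to the CPU.
    std::vector<std::uint8_t> hi(inW * inH * 3);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(0, 0, inW, inH, GL_RGB, GL_UNSIGNED_BYTE, hi.data());

    // 2. Average every factor x factor block into one output pixel.
    std::vector<std::uint8_t> lo(outW * outH * 3);
    for (int y = 0; y < outH; ++y)
        for (int x = 0; x < outW; ++x)
            for (int c = 0; c < 3; ++c) {
                int sum = 0;
                for (int dy = 0; dy < factor; ++dy)
                    for (int dx = 0; dx < factor; ++dx)
                        sum += hi[((y * factor + dy) * inW + (x * factor + dx)) * 3 + c];
                lo[(y * outW + x) * 3 + c] = static_cast<std::uint8_t>(sum / (factor * factor));
            }
    return lo;  // ready to be written to disk
}

Note that glBlitFramebuffer with GL_LINEAR filtering only averages a 2x2 neighbourhood, so for 3x/4x factors a loop like this (or an extra shader pass) is still needed.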

Intercept the OpenGL calls and make multi-viewpoints and multi-viewports

I want to create a layer between any OpenGL-based application and the original OpenGL library. It seamlessly intercepts the OpenGL calls made by the application, then either renders and sends images to the display or sends the OpenGL stream to a rendering cluster.
I have completed my opengl32.dll to replace the original library, but I don't know what to do next.
How do I convert OpenGL calls to images, and what is an OpenGL stream?
For a more accurate description, visit the OpenGL Wrapper.
First and foremost, OpenGL is not a library. It's an API. The opengl32.dll you have on your system is a library that provides the API and acts as an anchoring point for the actual graphics driver to attach to programs.
Next, it's a terrible idea to intercept OpenGL calls and turn them into something different, like multiple viewports. It may work for the fixed-function pipeline, but as soon as shaders get involved it will break the program you hooked into. OpenGL is designed as an API to draw things to the screen; it's not a scene graph. Programs expect that when they make OpenGL calls, they will produce an image in a pixel buffer according to their drawing commands. If you hook into that process and wildly alter the outcome, any graphics algorithm that relies on the visual outcome of the previous rendering for the following steps will break. For example, any form of shadow mapping will be broken by what you do.
Also things like multiple viewport hacks will likely not work if the program does things like frustum culling internally, before making the actual OpenGL calls. Again this is because OpenGL is a drawing API, not a scene graph.
In the end, yes, you can hook into OpenGL, but whatever you do, you must make sure that the OpenGL calls made by the application are executed according to the specification. There is an authoritative OpenGL specification for a reason, namely that programs rely on it for predictable results.
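As a concrete illustration of "executed according to the specification", here is a minimal, purely hypothetical sketch of one intercepted entry point in a wrapper opengl32.dll: it does whatever inspection you need and then forwards the call unchanged to the real system library (loaded by an explicit path so the wrapper does not end up loading itself). The names and the hard-coded path are assumptions for illustration only.

#include <windows.h>

typedef int GLint;
typedef int GLsizei;

static HMODULE g_realGL = nullptr;

// Resolve a symbol from the real system opengl32.dll.
static FARPROC realProc(const char* name)
{
    if (!g_realGL)
        g_realGL = LoadLibraryA("C:\\Windows\\System32\\opengl32.dll");
    return GetProcAddress(g_realGL, name);
}

// The wrapper export: record, redirect, or stream the call, then forward it
// unchanged so the hooked application still sees spec-conforming behaviour.
extern "C" __declspec(dllexport) void WINAPI glViewport(GLint x, GLint y,
                                                        GLsizei width, GLsizei height)
{
    typedef void (WINAPI *PFN)(GLint, GLint, GLsizei, GLsizei);
    static PFN real = reinterpret_cast<PFN>(realProc("glViewport"));
    /* intercept here: log the parameters or send them to the rendering cluster */
    real(x, y, width, height);
}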
OpenGL almost undoubtedly allows you to do the things you want to do without crazy modifications to it. Multiple viewpoints can be achieved by doing the following in your render function:
glViewport(/* view 1 window coords */ 0, 0, window_width, window_height / 2);
// Do all of your rendering for the first camera here.
glViewport(/* view 2 window coords */ 0, window_height / 2, window_width, window_height / 2);
glMatrixMode(GL_MODELVIEW);
// Reload the modelview matrix for the second viewpoint here, then re-render the scene.
It's as simple as rendering twice into two areas that you specify with glViewport. If you Google around you can find a more detailed tutorial. I strongly recommend against messing with OpenGL itself, as a good deal of it is implemented by the graphics card, and you should really just use what you're given. Chances are that if you're modifying it, you're doing it wrong. It probably allows you to do what you want in a FAR better way.
Good luck!

Replicating Cathode retro terminal effect?

I'm trying to replicate the effect of Cathode, but I'm not really aware of any rendering effects in SDL. Does anyone know the technique used in Cathode? Are they maybe using OpenGL and shaders?
If you are still interested in the subject, I'm working on a similar project. The effects were obtained using GLSL shaders.
You can grab the source code here: https://github.com/Swordifish90/cool-old-term/
The shader strings might not be extremely readable due to the extensive use of ternary operators (needed to customize the appearance), but they should give you a really good idea.
If you poke around a bit in the application bundle, you'll find that the only relevant framework is GLKit which, according to Apple, will "reduce the effort required to create new shader-based apps".
There's also a bunch of ".fragdata", ".vertdata", and ".glsldata" files, which are encrypted.
Very unfortunate for you.
So I would say: Yes, it's OpenGL shaders all the way.
Unfortunately, since the shaders are encrypted, you're going to have to locate suitable algorithms elsewhere.
(Perhaps it's possible to use the OpenGL debugging and profiling tools to capture the shader source as it is compiled, but I doubt it.)
You may have realized that Android phones have (had?) such an animation when you put them to sleep. That code is available in a file named ElectronBeam.java.
However, it is Java code and uses GLES 1.0 with GLES 1.1 extensions, but the algorithm for bending the screen should be understandable.
It seems to be based on GLTerminal, which uses OpenGL; it would have to use OpenGL and shaders for speed.
I guess the fastest approximation would be to render the text to buffers within OpenGL and use a deformed 2D grid to create the "rounded corners" radial distortion.
But it would take a lot of work to add all the features that Cathode has, not to mention making them run quickly.
I suspect emulating a CRT perfectly is a bit like emulating an analog synth perfectly - hard to impossible.
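To make the "deformed grid" idea a bit more concrete, here is a hypothetical fragment shader (GLSL embedded as a C++ string) that radially warps the texture lookup to bend the flat terminal image like a curved CRT face and darkens alternate lines to fake scanlines. The uniform and variable names are illustrative, not taken from Cathode.

static const char* kCrtFragmentShader = R"(
#version 120
uniform sampler2D terminalTexture;  // the text already rendered into a buffer
uniform float curvature;            // e.g. 0.1; strength of the barrel distortion
varying vec2 texCoord;

void main()
{
    vec2 centered = texCoord - vec2(0.5);
    float r2 = dot(centered, centered);
    vec2 warped = vec2(0.5) + centered * (1.0 + curvature * r2);  // barrel warp

    vec3 color = texture2D(terminalTexture, warped).rgb;
    float scanline = 0.85 + 0.15 * sin(warped.y * 800.0);         // cheap scanlines
    gl_FragColor = vec4(color * scanline, 1.0);
}
)";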
If you want to work quickly without killing the CPU, the GPU is the best solution, which means pixel shaders. Pixel shaders can do all of these effects. I once made such an application; I wrote it in Silverlight, but that doesn't matter, since I used pixel shaders.
I suggest writing this in Qt4 and adding pixel shader effects to the QWidget.

Enable antialiasing using Xlib

I'm trying to develop a custom set of libraries for creating GUIs in Linux, with, you know, widgets, buttons, etc. So I'm now learning to create user interfaces using X11 and its Xlib. I got to the point of having a nice window of a specified size, at a specified position, with a specified background color, and the ability to draw points, rectangles, and arcs. However, as soon as I drew my first circle I was really disappointed to see that it is not antialiased: I can see every single pixel as a square.
Now the question is easy. Is there any way to tell X: please antialias anything before drawing? Or do I have to avoid using XDrawArc and use a custom function which calls XDrawPoint for each point of the circle? Or is there a third solution?
Thanks in advance.
The short answer is "no". Xlib doesn't do anti-aliasing.
The longer answer is "you can use a higher level API such as Cairo Graphics". It's not necessary to roll your own.
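To make the Cairo suggestion concrete, here is a minimal, hypothetical sketch (it assumes Cairo was built with Xlib support and that the Display and window already exist; the function name and colors are illustrative):

#include <cairo/cairo.h>
#include <cairo/cairo-xlib.h>
#include <X11/Xlib.h>

// Draw an antialiased circle outline onto an existing X11 window via Cairo.
void draw_circle(Display* dpy, Window win, int width, int height)
{
    cairo_surface_t* surface =
        cairo_xlib_surface_create(dpy, win, DefaultVisual(dpy, DefaultScreen(dpy)),
                                  width, height);
    cairo_t* cr = cairo_create(surface);

    cairo_set_antialias(cr, CAIRO_ANTIALIAS_BEST);   // smooth edges, unlike XDrawArc
    cairo_set_line_width(cr, 2.0);
    cairo_set_source_rgb(cr, 0.1, 0.1, 0.1);
    cairo_arc(cr, width / 2.0, height / 2.0, 80.0, 0.0, 2.0 * 3.14159265358979);
    cairo_stroke(cr);

    cairo_destroy(cr);
    cairo_surface_destroy(surface);
}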
What you encountered are the limitations of the X11 core protocol; technically it would be perfectly possible to add antialiasing to it, but that didn't happen.
Instead there's the XRender extension, that provides nice antialiased primitives. You'll also want to look into Xft to render antialiased text using vector fonts.
You can roll your own antialiasing algorithm. You have the only 2 primitives you need: 1) a function to draw TrueColor points (namely, xcb_poly_point(), if you're using XCB), and 2) for loops.
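A small, hypothetical sketch of that do-it-yourself approach (plain Xlib rather than XCB here, and it assumes a standard 24-bit TrueColor visual and a known solid background color; every name is illustrative):

#include <X11/Xlib.h>
#include <cmath>

// Pack 8-bit channels into a pixel value for a standard 24-bit TrueColor visual.
static unsigned long rgb(int r, int g, int b) { return (r << 16) | (g << 8) | b; }

// Draw a 1-pixel-wide antialiased circle outline: for each pixel near the
// outline, estimate how much of it the stroke covers (4x4 subsamples) and blend
// the stroke color against the background before plotting the point.
void draw_aa_circle(Display* dpy, Drawable win, GC gc,
                    double cx, double cy, double radius)
{
    const int fg[3] = {16, 16, 16}, bg[3] = {255, 255, 255};  // stroke / background

    for (int y = int(cy - radius) - 2; y <= int(cy + radius) + 2; ++y)
        for (int x = int(cx - radius) - 2; x <= int(cx + radius) + 2; ++x) {
            int hits = 0;
            for (int sy = 0; sy < 4; ++sy)
                for (int sx = 0; sx < 4; ++sx) {
                    double px = x + (sx + 0.5) / 4.0, py = y + (sy + 0.5) / 4.0;
                    if (std::fabs(std::hypot(px - cx, py - cy) - radius) <= 0.5)
                        ++hits;
                }
            if (hits == 0) continue;
            double cov = hits / 16.0;  // fraction of the pixel covered by the stroke
            XSetForeground(dpy, gc, rgb(int(fg[0] * cov + bg[0] * (1 - cov)),
                                        int(fg[1] * cov + bg[1] * (1 - cov)),
                                        int(fg[2] * cov + bg[2] * (1 - cov))));
            XDrawPoint(dpy, win, gc, x, y);
        }
}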

Rendering problem with the OpenGL GL_POINTS primitive

When using software rendering, or any graphics card in our development office, our little coloured GL_POINTS render in exactly the colour we expect. Out in the field, some users report points rendered in the wrong colours. Getting them to turn off hardware acceleration fixed their problem, so we've been putting the whole thing down to a third-party issue and using a workaround (tiny pixel-sized rectangles whose colour remains unproblematic). The snag: we are taking a huge performance hit.
My question is, has anyone else had a similar issue, and, if so, did they come up with a way to keep their GL_POINTS and get the colour right?
I haven't encountered a similar problem, but the solution is simple: get the card your user is using and set up the same environment.
Maybe the problem is something as stupid as old drivers. I don't see what else could render the wrong color.