I am working on a project that uses OpenGL only (it's supposed to become a game eventually). After some weeks of development I stumbled across the possibility of catching OpenGL errors with GL.GetError().
I dislike that it only says what went wrong but not where, but I still want to get the error that occurs fixed.
So here is what happens:
When launching the app there are a few frames (three or four) reporting StackUnderflow, then it switches to StackOverflow and stays that way.
I checked my matrix push/pop consistency and didn't find any unbalanced pushes. It might also be relevant that, from what I can see, lighting doesn't work (all faces of the various objects have the very same brightness).
Is there any other possible cause?
(If you want to see source, there is plenty at: http://galwarcom.svn.sourceforge.net/viewvc/galwarcom/trunk/galwarcom/ )
You need to set the matrix mode before popping since each mode has a separate stack. If you do something like this, it will underflow:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
// ... stuff with the modelview matrix ...
glMatrixMode(GL_PROJECTION);
glPushMatrix();
// ... stuff with the projection matrix ...
glPopMatrix(); // pops the projection stack
glPopMatrix(); // pops the projection stack again -> underflow; modelview is never popped
You are doing something like this in drawHUD(), and probably in other places.
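A balanced version of the same pattern might look something like the sketch below (the glOrtho call and the width/height variables are just placeholders for whatever HUD projection the code actually sets up):

glMatrixMode(GL_PROJECTION);
glPushMatrix();                        // save the 3D projection
glLoadIdentity();
glOrtho(0, width, height, 0, -1, 1);   // placeholder 2D HUD projection

glMatrixMode(GL_MODELVIEW);
glPushMatrix();                        // save the 3D modelview
glLoadIdentity();
// ... draw the HUD ...
glPopMatrix();                         // pops MODELVIEW, the current mode

glMatrixMode(GL_PROJECTION);
glPopMatrix();                         // pops PROJECTION, now that it is current again
glMatrixMode(GL_MODELVIEW);            // leave modelview active for the rest of the frame

Every glPushMatrix is then popped on the same stack it was pushed on, so neither stack can under- or overflow.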
I've made a simple OpenGL application (link). There you can see an image of how it is supposed to look - and how it does look on my computer (OSX):
Problem is: when my client clones and compiles it on his computer (Ubuntu), this is what he sees:
(Screenshot of the incorrect rendering: http://dl.dropboxusercontent.com/u/62862049/Screenshots/wl.png)
I'm really puzzled by that. This would be no issue if I could reproduce the bug, but not being able to do so makes me clueless about how to even start fixing it. How can I approach this issue?
"How can I approach this issue?"
I suggest using VirtualBox to create a virtual Ubuntu environment on your machine, so that you can compile and debug the issue yourself.
If it runs as intended on your virtual machine, then the issue is probably driver-related on your client's side.
I took the liberty of correcting what I see as a huge obstacle to getting this code to behave predictably. gluPerspective (...) is supposed to be used to set up the projection matrix; you can sometimes cram everything into a single matrix, but it does not make a lot of sense.
void GLWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* Original code that does really bad things ...
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluPerspective(60.0f, (GLfloat)width/(GLfloat)height, 0.01f, 650.0f);
    */

    // The perspective transform belongs on the projection matrix stack
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0f, (GLfloat)width/(GLfloat)height, 0.01f, 650.0f);

    // Camera and object transforms belong on the modelview stack
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, cam_radius, cam_radius,
              0.0, 0.0, 0.0,
              0.0, 0.0, 1.0);

    glViewport(0, 0, width, height);

    glRotatef(rotate_x, 1.0, 0.0, 0.0);
    glRotatef(rotate_y, 0.0, 0.0, 1.0);

    ...
}
As for debugging something you cannot reproduce, the first step is to think about every piece of state that may produce an effect similar to the one you are (or rather, someone else is) experiencing. This is what I do most of the time on StackOverflow when someone presents a random bit of code and a screenshot. Often the code they initially provide is unhelpful, but the screenshot and description of the problem lead to the right solution; thankfully the comments section allows us to ask for more specific code before committing to an answer.
The first thing that came to my mind when I heard your problem was projection and viewport mapping, which led me to qglwidget.cpp, where I discovered some naughty code. It may not necessarily be your entire problem, but it is definitely a problem that fits all of the criteria.
If you do not want to follow the suggestion to use VirtualBox, another option is to get the same setup as your client (at least similar hardware, and the same OS and installed packages). That way it may prove easier to reproduce and debug.
Sometimes applications behave differently with different GPUs due to driver problems, even if the drivers are up to date.
Some of the weird problems I've had such as this stem from uninitialized data somewhere. Valgrind is a godsend for finding such issues imo. Valkyrie is a nice app to organise its output.
In this specific case I'm going to throw out a wild guess. I've seen this happen before when the window manager sends delayed resize events, or no initial resize event. For example if your code expects to have a resize event sent before the first draw call (or whenever you store the initial window size for setting the viewport and aspect ratio) and the event doesn't happen straight away then you've got the wrong values. My GL framework injects a resize event internally if a real resize event hasn't occurred before entering the main loop (pretty fiddly if you want to ignore the real one when it does finally come along).
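As a rough illustration of that workaround (GLUT is used here purely for the sketch, and handleResize is a placeholder name, not something from the project in question):

// Reshape callback: records the size and rebuilds viewport + projection
void handleResize(int w, int h)
{
    glViewport(0, 0, w, h);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, h ? (double)w / (double)h : 1.0, 0.1, 100.0);
    glMatrixMode(GL_MODELVIEW);
}

// ... in main(), after creating the window but before entering the main loop:
glutReshapeFunc(handleResize);
// Inject an initial "resize" in case the window manager never sends one
// before the first frame is drawn.
handleResize(glutGet(GLUT_WINDOW_WIDTH), glutGet(GLUT_WINDOW_HEIGHT));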
I am using the SFML library (C++) and I copy-pasted Laurent Gomila's example (http://www.sfml-dev.org/tutorials/1.6/window-opengl.php) to test OpenGL.
It worked well, but at some point I started playing with some of the gl functions... When I changed the first parameter of this:
gluPerspective(100.f, 1.f, 0.1f, 500.f);
I could notice some differences when I executed the program, but the 3rd or 4th time I changed that parameter and compiled, it stopped displaying the graphics. I backtracked to get them displayed again but... guess what? THEY DIDN'T! Same code as before, but still no graphics!
What could possibly be happening?
It ended up being that my noobness got my RAM in trouble: I was piling up memory allocations without freeing any of them.
After implementing proper deletion of what I allocated, it worked as it should have.
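For anyone who hits the same thing, the mistake had the general shape below (a generic sketch, not the actual project code):

#include <vector>

int main()
{
    // Leaky pattern: allocate every frame, never free
    for (int frame = 0; frame < 1000; ++frame) {
        float* vertices = new float[3000];   // memory use keeps growing
        // ... fill the array and draw with it ...
    }                                        // never deleted -> leak

    // Fixed: let a container release the memory (or call delete[] yourself),
    // or better still, allocate once outside the loop.
    for (int frame = 0; frame < 1000; ++frame) {
        std::vector<float> vertices(3000);   // freed automatically at end of scope
        // ... fill the array and draw with it ...
    }
    return 0;
}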
In a 2D platform game, how can I create a flashlight effect (like in this video at around 0:41: http://www.youtube.com/v/DHbjped9gM8&hl=en_US&start=42)?
I'm using OpenGL for my lighting.
PS: I've seen effects like this a few times, but I really don't know how to create them. I know that I can create new light sources with glEnable, but they always shine onto my stage as circles at a 90° angle, so that's quite different from what I am looking for.
You have to tell OpenGL that you want a spot light, and what kind of cone you want. Let's guess that a typical flash-light covers around a 30 degree angle. For that you'd use:
glLightf(GL_LIGHTn, GL_SPOT_CUTOFF, 15.0f);
[where GL_LIGHTn would be GL_LIGHT1 for light 1, GL_LIGHT2 for light 2, and so on]
You'll also need to use glLightfv with GL_SPOT_DIRECTION to specify the direction the flashlight is pointing. You may also want to use GL_SPOT_EXPONENT to specify how the light falls off toward the edges of the cone. Oh, and you may want to use one of the attenuation parameters (GL_CONSTANT_ATTENUATION, GL_LINEAR_ATTENUATION or GL_QUADRATIC_ATTENUATION) as well, but a lot of the time that's unnecessary.
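Putting those pieces together, a minimal setup might look like the following sketch (GL_LIGHT1 and the position, direction, exponent and attenuation values are just example choices):

// Flashlight-style spot light with the fixed-function pipeline.
// Note: GL_POSITION and GL_SPOT_DIRECTION are transformed by the current
// modelview matrix, so set them after your camera transform.
GLfloat position[]  = { 0.0f, 0.0f, 1.0f, 1.0f };    // w = 1.0 makes it a positional light
GLfloat direction[] = { 1.0f, 0.0f, 0.0f };          // pointing along +X

glEnable(GL_LIGHTING);
glEnable(GL_LIGHT1);
glLightfv(GL_LIGHT1, GL_POSITION, position);
glLightfv(GL_LIGHT1, GL_SPOT_DIRECTION, direction);
glLightf(GL_LIGHT1, GL_SPOT_CUTOFF, 15.0f);          // half-angle: a 30 degree cone in total
glLightf(GL_LIGHT1, GL_SPOT_EXPONENT, 10.0f);        // softer falloff toward the cone's edge
glLightf(GL_LIGHT1, GL_LINEAR_ATTENUATION, 0.05f);   // optional falloff with distance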
If you want to support shadows being cast, that's another, much more complex, subject of its own (probably too much to try to cover in an answer here).
What platform (as in hardware/operating system) are you developing for? As the previous post mentioned, it sounds like you're using fixed-function OpenGL, which is considered "deprecated" today. You might want to look into OpenGL 3.2 and take a fully shader-based approach. This means handling all the light sources yourself, but it will also allow you to create real-time shadows and other nice effects!
We've been creating several half-transparent 3D cubes in a scene with OpenGL. They display very well on Windows 7 and Fedora 15, but look quite awful on a Meego system.
This is what it looks like on my Fedora 15 system:
This is what it looks like on Meego. We changed the color of the lines; otherwise the cubes you see would look even worse:
The effect is implemented just by using the normal glColor4f function and setting the alpha value to make the cubes transparent. How could this happen?
Both freeglut and openglut have been tried on the Meego system and failed to display any better.
I've even tried using an engine like irrlicht to implement this instead, but there was nothing but black on the screen when the zBuffer argument of the beginScene method was set to false (and it looked normal when set to true, but that is not what we want).
This should not be a problem with the graphics card or the driver, because we've seen a 3D game involving a transparent ball run on the very same netbook and system.
We have failed to find the reason. Could anyone help explain why this is happening?
It sounds as if you may be relying on default settings (or behavior), which may be different between platforms.
Are you explicitly setting any of OpenGL's blend properties, such as glBlendFunc? If you are, it may help to post the relevant code that does this.
One of the comments mentioned sorting your transparent objects. If you aren't, that's something you might want to consider to achieve more accurate results. In either case, that behavior should be the same from platform to platform so I would have guessed that's not your issue.
Edit:
One other thought. Are you setting glCullFace? It could be that your transparent faces are being culled because of your vertex winding.
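For example, pinning down the relevant state explicitly instead of relying on defaults might look like this (the blend function shown is just the common alpha-blend choice, not necessarily what your scene needs):

// Set the blending/culling state explicitly so every platform starts
// from the same configuration.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glDepthMask(GL_FALSE);       // common when drawing transparent geometry back-to-front
glDisable(GL_CULL_FACE);     // rules out winding/culling differences while debugging
// ... draw the transparent cubes, ideally sorted back-to-front ...
glDepthMask(GL_TRUE);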
"Both freeglut and openglut have been tried on the Meego system and failed to display any better."
Those are just simple windowing frameworks and have no effect whatsoever on the OpenGL execution.
Somewhere in your blending code you're messing up. From the looks of the correct rendering I'd say your blend function there is glBlendFunc(GL_ONE, GL_ONE), while on Meego it's something like glBlendFunc(GL_SRC_ALPHA, GL_ONE).
I'm having a rough time trying to set up this behavior in my program.
Basically, I want a new sphere to be displayed on the screen when the user presses the "a" key.
How can you do that?
I would probably do it by simply having some kind of data structure (array, linked list, whatever) holding the current "scene". Initially this is empty. Then when the event occurs, you create some kind of representation of the new desired geometry, and add that to the list.
On each frame, you clear the screen and go through the data structure, mapping each representation into a suitable set of OpenGL commands. This is really standard.
The data structure is often referred to as a scene graph; it usually takes the form of a tree or graph, where geometry can have child geometries and so on.
If you're using the GLUT library (which is pretty standard), you can take advantage of its automatic primitive generation functions, like glutSolidSphere. You can find the API docs here. Take a look at section 11, 'Geometric Object Rendering'.
As unwind suggested, your program could keep some sort of list, but of the parameters for each primitive rather than the actual geometry. In the case of the sphere, this would be position/radius/slices. You can then use the GLUT functions to easily draw the objects. Obviously this limits you to what GLUT can draw, but that's usually fine for simple cases.
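Combining the two suggestions (a parameter list plus GLUT's glutSolidSphere), a minimal sketch could look like this; the window size, sphere radius and random positions are arbitrary choices for illustration:

#include <GL/glut.h>
#include <cstddef>
#include <cstdlib>
#include <vector>

struct Sphere { float x, y, z, radius; };
std::vector<Sphere> spheres;                      // the "scene": one entry per sphere

void display()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    for (std::size_t i = 0; i < spheres.size(); ++i) {
        glPushMatrix();
        glTranslatef(spheres[i].x, spheres[i].y, spheres[i].z);
        glutSolidSphere(spheres[i].radius, 20, 20);   // radius, slices, stacks
        glPopMatrix();
    }
    glutSwapBuffers();
}

void keyboard(unsigned char key, int, int)
{
    if (key == 'a') {
        Sphere s = { (std::rand() % 100 - 50) / 100.0f,   // x roughly in [-0.5, 0.5]
                     (std::rand() % 100 - 50) / 100.0f,   // y roughly in [-0.5, 0.5]
                     0.0f, 0.1f };
        spheres.push_back(s);
        glutPostRedisplay();                              // ask GLUT to redraw
    }
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(640, 480);
    glutCreateWindow("press 'a' to add a sphere");
    glEnable(GL_DEPTH_TEST);
    glutDisplayFunc(display);
    glutKeyboardFunc(keyboard);
    glutMainLoop();
    return 0;
}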
Without some more details of what environment you are using it's difficult to be specific, but here are a few pointers to things that can easily go wrong when setting up OpenGL:
Make sure you have the camera set up to look at the point where you are drawing the sphere. This can be surprisingly hard to get right, and the simplest approach is to use gluLookAt from the OpenGL Utility Library (GLU). Make sure your near and far clipping planes are set to sensible values.
Turn off backface culling, at least to start with. Sure, in production code backface culling gives you a quick performance gain, but it's remarkably easy to set up the winding or normals incorrectly on an object and not see it because you're looking at the invisible face.
Remember to call glFlush to make sure that all commands are executed. Drawing to the back buffer and then failing to swap buffers (e.g. with glutSwapBuffers) is also a common mistake.
Occasionally you can run into issues with buffer formats - although if you copy from sample code that works on your system this is less likely to be a problem.
Graphics coding tends to be quite straightforward to debug once you have the basic environment correct, because the output is visual; however, setting up the rendering environment on a new system can always be a bit tricky until you have that first cube or sphere rendered. I would recommend obtaining a sample or template and modifying that to start with, rather than trying to set up the rendering window from scratch. Using GLUT to check out first drafts of OpenGL calls is a good technique too.