I've made a simple OpenGL application (link). There you can see an image of how it is supposed to look - and how it does look on my computer (OSX):
Problem is: when my client clones and compiles it on his computer (Ubuntu), this is what he sees:
(Screenshot of the incorrect result: http://dl.dropboxusercontent.com/u/62862049/Screenshots/wl.png)
I'm really puzzled by that. This would be no issue if I could reproduce the bug, but not being able to do so makes me clueless about how to even start fixing it. How can I approach this issue?
I suggest using VirtualBox to create a virtual Ubuntu environment on your machine, so that you can compile and debug the issue yourself.
If it runs as intended on your virtual machine, then the issue is probably driver-related on your client's side.
I took the liberty of correcting what I see as a huge obstacle to getting this code to behave predictably. gluPerspective (...) is supposed to be used to set up the projection matrix; you can sometimes cram everything into a single matrix, but it does not make a lot of sense.
void GLWidget::paintGL(){
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    /* Original code that does really bad things ...
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluPerspective(60.0f,(GLfloat)width/(GLfloat)height,0.01f,650.0f);
    */

    // The perspective projection belongs on the projection stack
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0f, (GLfloat)width / (GLfloat)height, 0.01f, 650.0f);

    // Camera and model transforms go on the modelview stack
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, cam_radius, cam_radius,
              0.0, 0.0,        0.0,
              0.0, 0.0,        1.0);

    glViewport(0, 0, width, height);

    glRotatef(rotate_x, 1.0, 0.0, 0.0);
    glRotatef(rotate_y, 0.0, 0.0, 1.0);
    ...
}
As for debugging something you cannot reproduce, the first step is to think about every state that could produce an effect similar to the one you are (or rather, someone else is) experiencing. This is what I do most of the time on Stack Overflow when someone presents a random bit of code and a screenshot: the code they initially provide is often unhelpful, but the screenshot and description of the problem lead to the right solution, and thankfully the comments section lets us ask for more specific code before committing to an answer.
The first thing that came to my mind when I heard your problem was projection and viewport mapping, which led me to qglwidget.cpp, where I discovered some naughty code. It may not necessarily be your entire problem, but it is definitely a problem that fits all of the criteria.
If you do not want to follow this suggestion and use VirtualBox, another option is to get the same setup as your client (at least similar hardware, and the same OS and installed packages). That way it may prove easier to reproduce and debug.
Sometimes applications behave differently on different GPUs because of driver problems, even when the drivers are up to date.
Some of the weird problems I've had like this stem from uninitialized data somewhere. Valgrind is a godsend for finding such issues, in my opinion, and Valkyrie is a nice app for organising its output.
In this specific case I'm going to throw out a wild guess: I've seen this happen before when the window manager sends delayed resize events, or no initial resize event at all. For example, if your code expects a resize event to arrive before the first draw call (or wherever you store the initial window size for setting the viewport and aspect ratio) and that event doesn't happen straight away, then you end up working with the wrong values. My GL framework injects a resize event internally if a real one hasn't occurred before entering the main loop (which is pretty fiddly if you then want to ignore the real event when it finally does come along).
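If stale size values are indeed the problem, one defensive pattern is to query the widget's current size at draw time rather than relying on a value cached in a resize handler. Here is a minimal sketch against Qt's QGLWidget (which the code above appears to use); the class name and the perspective parameters are just illustration, not a drop-in fix:
#include <QGLWidget>
#include <GL/glu.h>

// Sketch only: read the widget size when drawing, so a missing or delayed
// resize event cannot leave the viewport and aspect ratio with stale values.
class SafeGLWidget : public QGLWidget
{
protected:
    void paintGL()
    {
        const int w = qMax(1, width());   // standard QWidget accessors;
        const int h = qMax(1, height());  // clamp to avoid a zero-sized viewport

        glViewport(0, 0, w, h);

        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        gluPerspective(60.0, double(w) / double(h), 0.01, 650.0);

        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        // ... draw the scene as in the code above ...
    }
};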
I have the following piece of code:
glBegin( GL_QUADS );
    glColor3f(0.0f, 0.7f, 0.7f);
    glVertex2f(x1, y1);
    glVertex2f(x2, y2);
    glVertex2f(x3, y3);
    glVertex2f(x4, y4);
glEnd();
The question is: If I apply a rotation, let's say, of 20 degrees, how can I know where these vertices are then?
Because later I need to be able to click on the square and identify if the place where I am clicking is, indeed, inside the square or not.
While I hope that nobody has used it in this millennium, there actually was a mechanism for getting transformed vertices in legacy OpenGL. It's called "feedback mode". Explaining it in detail is beyond the scope of an answer. But if you want to see how it worked, you can read up on it in the freely available online version of the Red Book.
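For the curious, the mechanics looked roughly like this (a sketch of the legacy API only; drawScene() is a placeholder for whatever draw calls you already make):
#include <GL/gl.h>

void drawScene();   // placeholder for your existing rendering code

void readBackTransformedVertices()
{
    GLfloat feedback[1024];
    glFeedbackBuffer(1024, GL_3D, feedback); // 3 floats (window-space x, y, z) per vertex
    glRenderMode(GL_FEEDBACK);               // nothing is rasterized in this mode

    drawScene();                             // issue the usual draw calls

    GLint count = glRenderMode(GL_RENDER);   // leave feedback mode; returns floats written
    // 'feedback' now holds tokens such as GL_POLYGON_TOKEN followed by the
    // transformed vertex coordinates of each primitive; parse the first 'count' floats.
}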
The "click and identify" you talk about in your question is often called "picking" or "selection". There are numerous approaches to implement it, and the one to choose depends somewhat on your application. To give you a quick overview of some common approaches:
Selection mode. This is almost as obsolete as feedback mode. It is just as old, but my impression is that it was much more commonly used, so it might have better support. Still, I wouldn't recommend using it in new code. Again, if you want to learn about it anyway, the explanation can be found in the Red Book.
Modern OpenGL has a feature called Transform Feedback. While its primary purpose is different, it can be used to read back transformed vertices similar to legacy Feedback Mode.
Draw the scene to an off screen buffer, with each object rendered in a different color. Then read back the color at the selection position, and map it to an object. This is a fairly elegant and efficient approach, and can be recommended if it works for your requirements.
Perform the calculations in your own code on the CPU. Instead of transforming all objects, the much more efficient approach is normally to apply the inverse transformation to your pick point (which actually becomes a ray), and intersect it with the geometry.
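For the concrete case in the question (a quad rotated about the Z axis), the CPU approach boils down to applying the same rotation to the quad's corners yourself and then doing a point-in-convex-polygon test against the click position. A minimal sketch, assuming the quad lies in the XY plane and the click has already been converted into the same coordinate space (the names are mine, not from the question):
#include <cmath>

struct Vec2 { float x, y; };

// Rotate a point about the origin by 'degrees', matching the convention of
// glRotatef(degrees, 0, 0, 1).
Vec2 rotate(const Vec2 &p, float degrees)
{
    const float r = degrees * 3.14159265f / 180.0f;
    const float c = std::cos(r), s = std::sin(r);
    return Vec2{ p.x * c - p.y * s, p.x * s + p.y * c };
}

// Point-in-convex-quad test: the point must lie on the same side of all four
// edges, i.e. all edge cross products have the same sign.
bool pointInQuad(const Vec2 quad[4], const Vec2 &p)
{
    bool allNonPositive = true, allNonNegative = true;
    for (int i = 0; i < 4; ++i)
    {
        const Vec2 &a = quad[i];
        const Vec2 &b = quad[(i + 1) % 4];
        const float cross = (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
        allNonPositive = allNonPositive && cross <= 0.0f;
        allNonNegative = allNonNegative && cross >= 0.0f;
    }
    return allNonPositive || allNonNegative;
}

// Rotate the original corners by the same angle passed to glRotatef, then
// test the click position against the rotated quad.
bool clickedOnQuad(const Vec2 corners[4], float angleDeg, const Vec2 &click)
{
    Vec2 rotated[4];
    for (int i = 0; i < 4; ++i)
        rotated[i] = rotate(corners[i], angleDeg);
    return pointInQuad(rotated, click);
}
Equivalently, and in line with the last point above, you can rotate the click point by -angleDeg instead and test it against the original, unrotated corners; for a single quad the cost is the same, but the inverse-transform-the-pick-point approach scales better when many objects are involved.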
I'm porting/testing my code on a Raspberry Pi running Pidora, all updated.
The test is a very simple OpenGL program, which works fine on two other computers. I narrowed down a problem to a glPopMatrix call, which causes a segfault. A trivial reproduction of the problem (the entire draw function) is:
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glViewport(0, 0, 200, 200);

float d1 = 0.0f, d2 = 0.1f;

glPushMatrix(); // gratuitous push/pop pair to demonstrate
glBegin(GL_TRIANGLES);
    glColor4f(1.0f, 0.0f, 0.0f, 0.5f); glVertex3f( 0.00f,  1.0f, d1);
    glColor4f(0.0f, 1.0f, 0.0f, 0.5f); glVertex3f( 0.87f, -0.5f, d1);
    glColor4f(0.0f, 0.0f, 1.0f, 0.5f); glVertex3f(-0.87f, -0.5f, d1);

    glColor4f(1, 0, 0, 1);
    glVertex3f(0, 0, d2);
    glVertex3f(1, 0, d2);
    glVertex3f(0, 1, d2);

    glColor4f(1, 0, 1, 1);
    glVertex3f( 0,  0, -d2);
    glVertex3f(-1,  0, -d2);
    glVertex3f( 0, -1, -d2);
glEnd();
glPopMatrix(); // segfault here!
The only reference I could find was this 2011 bug report, which describes effectively the same problem. So far, it seems they only have a workaround:
export GALLIUM_DRIVER=softpipe
export DRAW_USE_LLVM=no
I found that only the first line was necessary. However, as suggested above, it looks like this forces Mesa to fall back to its software rasterizer, and it shows: the program (which, as above, draws just three triangles) runs at about 10 Hz.
It's common knowledge that the OpenGL matrix stack is deprecated, but simple usage cases like the above are useful for testing. I would expect glPopMatrix to at least not crash if it's present. Is there a way I can get hardware acceleration but still use this?
The Raspberry Pi has hardware support for OpenGL ES 1.x/2.x via vendor-specific libraries, but none for desktop GL. Immediate mode (glBegin/glEnd) was never part of GLES, so you have to use vertex arrays; the matrix stack, however, is available in GLES 1.x. You have to use EGL to get a hardware-accelerated context. On the upside, GL on the RPi does not require X11, so you can have a GL overlay directly on the console, which is very cool. The official RPi firmware comes with the hello_triangle demo, which shows you how to get a valid context; the source can be found in /opt/vc/src/hello_pi/hello_triangle. There are also Ben O. Steen's RPi ports of the examples from the OpenGL ES Programming Guide.
You are currently using the Mesa software renderer, which will be extremely slow on that platform. The crash looks like a Mesa bug, but since Mesa doesn't have any hardware-acceleration support for the RPi's GPU, this should not really matter. The code you have pasted is valid desktop GL 1.x/2.x and should not crash on a conforming implementation.
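To make the vertex-array point concrete, the three triangles from the question could be drawn roughly like this under GLES 1.x (a sketch only; it assumes an EGL context already set up the way hello_triangle does it):
#include <GLES/gl.h>

// Same geometry as the glBegin/glEnd code above, expressed with client-side
// vertex arrays, which GLES 1.x supports (immediate mode is not available).
void drawTriangles(float d1, float d2)
{
    const GLfloat vertices[] = {
         0.00f,  1.0f,  d1,     // first triangle (z = d1)
         0.87f, -0.5f,  d1,
        -0.87f, -0.5f,  d1,
         0.0f,   0.0f,  d2,     // second triangle (z = d2)
         1.0f,   0.0f,  d2,
         0.0f,   1.0f,  d2,
         0.0f,   0.0f, -d2,     // third triangle (z = -d2)
        -1.0f,   0.0f, -d2,
         0.0f,  -1.0f, -d2,
    };
    const GLfloat colors[] = {
        1,0,0,0.5f,  0,1,0,0.5f,  0,0,1,0.5f,   // per-vertex colours, triangle 1
        1,0,0,1,     1,0,0,1,     1,0,0,1,      // triangle 2
        1,0,1,1,     1,0,1,1,     1,0,1,1,      // triangle 3
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_COLOR_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, vertices);
    glColorPointer(4, GL_FLOAT, 0, colors);    // GLES 1.x requires 4 components here

    glDrawArrays(GL_TRIANGLES, 0, 9);

    glDisableClientState(GL_COLOR_ARRAY);
    glDisableClientState(GL_VERTEX_ARRAY);
}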
A laptop user running an ATI Radeon HD 5800 Series just showed me a video of my 2D game running on his machine, and it would appear my glScissor code was not working as intended there (despite working on so many other computers).
Whenever I want to restrict rendering to a certain rectangle on the screen, I call my SetClip function to either the particular rectangle, or Rectangle.Empty to reset the scissor.
public void SetClip(Rectangle theRect)
{
    if (theRect.IsEmpty)
        OpenGL.glDisable(OpenGL.EnableCapp.ScissorTest);
    else
    {
        if (!OpenGL.glIsEnabled(OpenGL.EnableCapp.ScissorTest))
            OpenGL.glEnable(OpenGL.EnableCapp.ScissorTest);
        OpenGL.glScissor(theRect.Left, myWindow.clientSizeHeight - theRect.Bottom,
                         theRect.Width, theRect.Height);
        CheckError();
    }
}
Is this approach wrong? For instance, I have a feeling that glEnable / glDisable might require a glFlush or glFinish to guarantee it's executed in the order in which I call them.
I'd put my money on a driver bug. Your code looks fine.
The only thing I suggest is changing this:
if (!OpenGL.glIsEnabled(OpenGL.EnableCapp.ScissorTest))
OpenGL.glEnable(OpenGL.EnableCapp.ScissorTest);
into a mere
OpenGL.glEnable(OpenGL.EnableCapp.ScissorTest);
Changing glEnable state comes practically for free, so that test is a micro-optimization. In fact, testing with glIsEnabled first probably costs about as much as the overhead caused by redundantly setting the state. On top of that, some drivers may be buggy in what they report through glIsEnabled, so I'd remove the check to cut out another potential error source.
I have a feeling that glEnable / glDisable might require a glFlush or glFinish to guarantee it's executed in the order in which I call them.
No, they don't. In fact glFlush and glFinish are not required for most programs; if you're using a double-buffered context you don't need them at all.
I am using the SFML library (C++) and I copy-pasted Laurent Gomila's example (http://www.sfml-dev.org/tutorials/1.6/window-opengl.php) to test OpenGL.
It worked well, but at some point I started playing with some of the gl functions. When I changed the first parameter of this:
gluPerspective(100.f, 1.f, 0.1f, 500.f);
I could notice some differences when I executed the program, but the 3rd or 4th time I changed that parameter and compiled, it stopped displaying the graphics. I backtracked to get them displayed again but... guess what? They didn't come back! Same code as before, but still no graphics!
What could possibly be happening?
It ended up being that my noobness got my RAM in trouble: I kept piling up memory allocations without freeing any of them.
After properly freeing the allocated memory, it worked as it should have.
I am working on a project which uses OpenGL only (to be specific, it's supposed to become a game at some point). After some weeks of development I stumbled across the possibility of catching OpenGL errors with GL.GetError().
My complaint is that it only says what went wrong, not where; still, I want to track down and fix the error that occurs.
So here is what happens:
When launching the app there are a few frames (three or four) with StackUnderflow, then it switches to StackOverflow and stays that way.
I checked my matrix push/pop consistency and didn't find any unbalanced pairs. It might be interesting to know that, from what I can see, lighting doesn't work either (all faces of the various objects have the very same brightness).
Is there any other possible cause?
(If you want to see source, there is plenty at: http://galwarcom.svn.sourceforge.net/viewvc/galwarcom/trunk/galwarcom/ )
You need to set the matrix mode before popping since each mode has a separate stack. If you do something like this, it will underflow:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
// ... stuff with the modelview matrix ...
glMatrixMode(GL_PROJECTION);
glPushMatrix();
// ... stuff with the projection matrix ...
glPopMatrix(); // projection popped
glPopMatrix(); // projection popped again -> underflow; the modelview push is never popped
You are doing something like this in drawHUD(), and probably in other places as well.
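A corrected version of that sequence, for comparison, matches each pop to the stack it is meant to affect:
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
// ... stuff with the modelview matrix ...

glMatrixMode(GL_PROJECTION);
glPushMatrix();
// ... stuff with the projection matrix ...
glPopMatrix();               // pops the projection stack

glMatrixMode(GL_MODELVIEW);  // switch back before the second pop
glPopMatrix();               // pops the modelview stack; both stacks stay balanced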