I'm porting/testing my code on a Raspberry Pi running Pidora, all updated.
The test is a very simple OpenGL program, which works fine on two other computers. I narrowed down a problem to a glPopMatrix call, which causes a segfault. A trivial reproduction of the problem (the entire draw function) is:
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
glViewport(0,0,200,200);
float d1=0.0f, d2=0.1f;
glPushMatrix(); //gratuitous push/pop pair to demonstrate
glBegin(GL_TRIANGLES);
glColor4f(1.0f,0.0f,0.0f,0.5f); glVertex3f( 0.00f, 1.0f,d1);
glColor4f(0.0f,1.0f,0.0f,0.5f); glVertex3f( 0.87f,-0.5f,d1);
glColor4f(0.0f,0.0f,1.0f,0.5f); glVertex3f(-0.87f,-0.5f,d1);
glColor4f(1,0,0,1);
glVertex3f(0,0,d2);
glVertex3f(1,0,d2);
glVertex3f(0,1,d2);
glColor4f(1,0,1,1);
glVertex3f( 0, 0,-d2);
glVertex3f(-1, 0,-d2);
glVertex3f( 0,-1,-d2);
glEnd();
glPopMatrix(); //segfault here!
The only reference I could find was this 2011 bug report, which describes effectively the same problem. So far, it seems they only have a workaround:
export GALLIUM_DRIVER=softpipe
export DRAW_USE_LLVM=no
I found that only the first line was necessary. However, as the variable names suggest, it appears to force a software fallback, and it shows: the program (which, as above, draws three triangles) runs at about 10 Hz.
It's common knowledge that the OpenGL matrix stack is deprecated, but simple usage cases like the above are useful for testing. I would expect glPopMatrix to at least not crash if it's present. Is there a way I can get hardware acceleration but still use this?
The Raspberry Pi has hardware support for OpenGL ES 1.x/2.x via vendor-specific libraries, but none for desktop GL. Immediate mode (glBegin/glEnd) was never in GLES, so you have to use vertex arrays. The matrix stack, however, is available in GLES 1.x. You have to use EGL to get a hardware-accelerated context. On the upside, GL on the RPi does not require X11, so you can have a GL overlay directly on the console, which is very cool. The official RPi firmware comes with the hello_triangle demo, which shows you how to get a valid context; the source can be found in /opt/vc/src/hello_pi/hello_triangle. There are also Ben O. Steen's RPi ports of the examples from the OpenGL ES Programming Guide.
You are currently using the Mesa software renderer, which will be extremely slow on that platform. The crash seems to be a Mesa bug, but since Mesa has no hardware-acceleration support for the RPi's GPU, this should not really matter. The code you have pasted is valid desktop GL 1.x/2.x and should not crash on a conforming implementation.
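To illustrate the vertex-array path mentioned above, here is a minimal, untested sketch of the first triangle from the question drawn with GLES 1.x client-side arrays instead of glBegin/glEnd (it assumes a valid EGL context has already been created, e.g. as in hello_triangle):
GLfloat verts[]  = {  0.00f,  1.0f, 0.0f,
                      0.87f, -0.5f, 0.0f,
                     -0.87f, -0.5f, 0.0f };
GLfloat colors[] = { 1.0f, 0.0f, 0.0f, 0.5f,
                     0.0f, 1.0f, 0.0f, 0.5f,
                     0.0f, 0.0f, 1.0f, 0.5f };
glEnableClientState(GL_VERTEX_ARRAY);      // client-side vertex array
glEnableClientState(GL_COLOR_ARRAY);       // per-vertex colors
glVertexPointer(3, GL_FLOAT, 0, verts);    // 3 floats per position
glColorPointer(4, GL_FLOAT, 0, colors);    // 4 floats per color
glDrawArrays(GL_TRIANGLES, 0, 3);          // one triangle, no glBegin/glEnd
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);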
I recently tried to write an .obj mesh loader in C++ with OpenGL, and I am confronted with a strange problem.
I have a std::vector<Vector3f> that represents the coordinates of the vertices of the faces, and another one that represents their normals. My Vector3f contains a std::array<float,3>, so contiguity between elements is preserved.
// Vertex Pointer to triangle array
glVertexPointer(3,GL_FLOAT, sizeof(Vector3f), &(_triangles[0].data[0]));
// Normal pointer to normal array
glNormalPointer(GL_FLOAT,sizeof(Vector3f),&(_normals[0].data[0]));
When I compile the program on my school computers, it gives the correct results, but when I compile it on my desktop computer the lighting is strange: it looks as if all the faces reflect light straight into the camera, so they all appear white.
Do you have any idea what my problem could be?
EDIT:
My computer runs Arch Linux, my window manager is Awesome, and this is written on a sticker on my PC:
Intel Core i7-3632QM 2.2GHz with Turbo Boost up to 3.2GHz.
NVIDIA GeForce GT 740M
I don't know much about my school computers, but I think they run Ubuntu.
I figured it out.
Of course, with so little information, it would have been difficult for anyone else to find the answer.
The code was based on sources given by my school, and at one point the shininess of the mesh was defined this way:
glMaterialf (GL_FRONT, GL_SHININESS, 250);
However, the OpenGL documentation specifies that
Only values in the range [0, 128] are accepted.
So I guess the different OpenGL implementations reacted differently to this mistake:
- my school's implementation probably clamped the shininess value to [0, 128];
- my desktop's implementation probably saturated the shininess, which is why I got such bright results.
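As a small defensive sketch (not part of the original sources), the value can be clamped before it is handed to OpenGL so that every driver sees a legal shininess:
float shininess = 250.0f;                      // value from the original sources
if (shininess > 128.0f) shininess = 128.0f;    // GL_SHININESS must be in [0, 128]
if (shininess < 0.0f)   shininess = 0.0f;
glMaterialf(GL_FRONT, GL_SHININESS, shininess);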
In any case, thank you very much for your help, and for taking the time to read this post.
I have set out to learn OpenGL using this tutorial.
I followed the instructions, installed the libraries and compiled the tutorial source code and when I tried to run it I got:
Failed to open GLFW window. If you have an Intel GPU, they are not 3.3
compatible. Try the 2.1 version of the tutorials.
So I checked out the FAQ on this particular issue and got this advice:
However, I do not fully understand this advice. I have a 5-year-old laptop with Ubuntu 13.10 and a Mobile Intel® GM45 Express Chipset x86/MMX/SSE2. According to the FAQ, OpenGL 3.3 is not supported for me. The FAQ suggests that I learn OpenGL 3.3 anyhow.
But how can I learn it without actually running the code?
Is there a way to emulate OpenGL 3.3 somehow on older hardware?
I think the sad truth is that you have to update your hardware. That's relatively cheap on desktop computers (3.3-capable GPUs can be had for coffee money, really), but on mobile you are more limited, I guess.
The available emulators, such as ANGLE or the ARM Mali one, focus mostly on ES, and the latter requires 3.2/3.3 support anyway.
That being said, you absolutely can learn OpenGL without running the code, although it's certainly less fun. Aside from GL 2.1, I'd explore WebGL too; maybe it's not cutting edge, but it's fun enough for a lot of people to dig it.
Perhaps you can set out to learn OpenGL 2.1 instead; however, I wouldn't recommend sticking with it! A ton of changes happened in OpenGL 3.0, where a lot of the old functionality you could use in 2.1 became deprecated.
Modern versions of the OpenGL specification force developers to use the 'programmable pipeline' via shader programs in order to render.
While 2.1 supports some shader features, it also still supports the 'fixed-function pipeline' for rendering.
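For a flavour of what the programmable pipeline asks of you, here is a minimal, illustrative GLSL 3.30 shader pair (not taken from the tutorial) that just passes positions through and outputs a solid colour; in fixed-function 2.1 none of this is required:
const char* vertexSrc =
    "#version 330 core\n"
    "layout(location = 0) in vec3 position;\n"
    "void main() { gl_Position = vec4(position, 1.0); }\n";
const char* fragmentSrc =
    "#version 330 core\n"
    "out vec4 color;\n"
    "void main() { color = vec4(1.0, 0.0, 0.0, 1.0); }\n";
// These strings still have to be compiled and linked with
// glCreateShader / glCompileShader / glCreateProgram / glLinkProgram.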
Personally, I started learning OpenGL through its Java bindings (this may simplify things compared with using the Windows API directly). However, no matter which bindings you use, the OpenGL specification remains the same. All implementations of OpenGL require you to create some window/display to render to and to respond to some basic rendering events (initialization and window resize, for example).
Within the fixed-function pipeline, you can make calls such as the following to render a triangle to the screen. The vertices and colors for those vertices are described within the glBegin/End block.
glBegin(GL_TRIANGLES);
glColor3d(1, 0, 0);
glVertex3d(-1, 0, 0);
glColor3d(0, 1, 0);
glVertex3d(1, 0, 0);
glColor3d(0, 0, 1);
glVertex3d(0, 1, 0);
glEnd();
Here are some links you may want to visit to learn more:
- OpenGL Version History
- Swiftless Tutorials (I highly recommend this one!)
- Lighthouse 3D (good for GLSL)
- Java OpenGL Tutorial
I am trying to write a video player that plays at EXACTLY the same FPS as the monitor refresh rate (let's say it is 60 Hz).
I am writing C++ (VS2010) on Windows and using OpenGL.
I have a very powerful PC; when no sync is set I can reach 500 FPS.
This is the relevant code:
glfwSwapInterval(1);
while (1)
{
glBindTexture(GL_TEXTURE_2D,Texture[frameIndexInArray]);
glBegin(GL_POLYGON);
glNormal3f(0.0f, 0.0f, 1.0f);
...
glVertex3f(-1.0f, 1.0f, 0.0f);
glEnd();
glFinish();
glfwSwapBuffers(window);
glfwPollEvents();
...
}
The vertical sync option in the graphics driver is set to "on",
and I have a grabber that records my output via a DisplayPort cable (I know for a fact that it works fine).
My problem is that my player gets out of sync once every few hundred frames;
the output is: frame(n-1), frame(n), frame(n), frame(n+1) ... (double frame)
or it can also be: frame(n-1), frame(n), frame(n), frame(n+2) ... (double and skip frame)
I tried glfwSwapInterval(0) with vsync in the graphics driver set to "application settings", I tried without glFinish(), and I even tried giving the thread high priority with SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_TIME_CRITICAL);.
Is it possible to get exactly 60 FPS? And if so, how? I could use any advice you have, because I have literally tried everything I know.
If for some reason you are ever late on a buffer swap, you will wind up drawing 1 frame for at least two monitor refreshes while VSYNC is enabled.
If your driver supports adaptive VSYNC (most NV and AMD drivers do), I would suggest trying that first. That way it will never draw faster than 60 FPS, but if you are slightly late it is not going to penalize you. Check for this WGL extension: WGL_EXT_swap_control_tear and if it is supported, you can pass -1 to glfwSwapInterval (...). The extension does not add any new functions, it just gives meaning to negative values for the swap interval.
GLFW may be too stupid to accept a negative value (I have never tried it), and you might have to interface directly with the platform-specific function: wglSwapIntervalEXT (...)
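As a rough, untested sketch of that fallback logic (it assumes the wglGetExtensionsStringEXT and wglSwapIntervalEXT function pointers have already been loaded via wglGetProcAddress, and strstr is available from <cstring>):
const char* ext = wglGetExtensionsStringEXT();
if (ext && strstr(ext, "WGL_EXT_swap_control_tear"))
    wglSwapIntervalEXT(-1);   // adaptive vsync: tear instead of stalling when late
else
    wglSwapIntervalEXT(1);    // plain vsync, as before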
I've made a simple OpenGL application (link). There you can see an image of how it is supposed to look - and how it does look on my computer (OSX):
Problem is: when my client clones and compiles it on his computer (Ubuntu), this is what he sees:
Wrong rendering: http://dl.dropboxusercontent.com/u/62862049/Screenshots/wl.png
I'm really puzzled by that. This would be no issue if I could reproduce the bug, but not being able to do so makes me clueless about how to even start fixing it. How can I approach this issue?
How can I approach this issue?
I suggest using VirtualBox to create a virtual Ubuntu environment on your machine, so that you can compile and debug the issue yourself.
If it runs as intended on your virtual machine, then the issue is probably driver-related on your client's side.
I took the liberty of correcting what I see as a huge obstacle to getting this code to behave predictably. gluPerspective (...) is supposed to be used to set up the projection matrix; you can sometimes cram everything into a single matrix, but it does not make a lot of sense.
void GLWidget::paintGL(){
glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT);
/* Original code that does really bad things ...
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluPerspective(60.0f,(GLfloat)width/(GLfloat)height,0.01f,650.0f);
*/
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0f,(GLfloat)width/(GLfloat)height,0.01f,650.0f);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(0.0, cam_radius, cam_radius,
0.0, 0.0, 0.0,
0.0, 0.0, 1.0);
glViewport(0, 0, width, height);
glRotatef( rotate_x, 1.0, 0.0, 0.0 );
glRotatef( rotate_y, 0.0, 0.0, 1.0 );
...
}
As for debugging something you cannot reproduce, the first step is to think about every state that may produce an effect similar to the one you are (or rather, someone else is) experiencing. This is what I do most of the time on Stack Overflow when someone presents a random bit of code and a screenshot. Often the code initially provided is unhelpful, but the screenshot and description of the problem lead to the right solution; thankfully the comments section allows us to ask for more specific code before committing to an answer.
The first thing that came to my mind when I heard about your problem was projection and viewport mapping, which led me to qglwidget.cpp, where I discovered some naughty code. It may not necessarily be your entire problem, but it is definitely a problem that fits all of the criteria.
If you do not want to follow the suggestion to use VirtualBox, another option is to get the same setup as your client (at least similar hardware, and the same OS and installed packages). That way it may prove easier to reproduce and debug.
Sometimes applications behave differently on different GPUs due to driver problems, even if the drivers are up to date.
Some of the weird problems I've had like this stem from uninitialized data somewhere. Valgrind is a godsend for finding such issues, in my opinion, and Valkyrie is a nice app for organising its output.
In this specific case I'm going to throw out a wild guess. I've seen this happen before when the window manager sends delayed resize events, or no initial resize event at all. For example, if your code expects a resize event before the first draw call (or wherever you store the initial window size for setting the viewport and aspect ratio) and that event doesn't arrive straight away, then you're working with the wrong values. My GL framework injects a resize event internally if a real one hasn't occurred before entering the main loop (which is pretty fiddly if you then want to ignore the real one when it finally arrives).
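As an illustrative sketch of that workaround, using GLFW rather than Qt (so onResize and drawFrame are hypothetical names, not from the project in question): query the real framebuffer size once before the main loop and run the resize handler manually instead of waiting for the first resize event.
int w, h;
glfwGetFramebufferSize(window, &w, &h);   // actual size, even if no resize event has fired yet
onResize(w, h);                           // hypothetical handler: sets viewport + aspect ratio
while (!glfwWindowShouldClose(window)) {
    drawFrame();                          // hypothetical per-frame rendering
    glfwSwapBuffers(window);
    glfwPollEvents();
}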
I am currently taking a Game Console Programming module at Sunderland University.
What they are teaching in this module is OpenGL and Phyre Engine for developing PS3 games.
The fact that the PS3 SDK is not available for free (it is quite expensive) makes it really difficult for me to work around problems when they arise.
Apparently, the PS3 framework doesn't support most of the GL function calls like glGenLists, glBegin, glEnd and so on.
glBegin(GL_QUADS);
glTexCoord2f(TEXTURE_SIZE, m_fTextureOffset);
glVertex3f(-100, 0, -100);
//some more
glEnd();
I get errors when debugging with PS3 debug mode at glBegin, glEnd and glTexCoord2f.
Is there any way to get around it?
Like a different way of drawing objects, perhaps?
Most games developed for the PS3 don't use OpenGL at all, but are programmed "on the metal", i.e. they make direct use of the GPU without an intermediate, abstract API. Yes, there is an OpenGL-esque API for the PS3, but it is actually based on OpenGL ES.
In OpenGL ES there is no immediate mode. Immediate mode is the cumbersome method of passing geometry to OpenGL by starting a primitive with glBegin, chaining calls that set vertex attribute state, submitting each vertex by its position with glVertex, and finishing with glEnd. Nobody wants to use this! Especially not on a system with limited resources.
You have the geometry data in memory anyway, so why not simply point OpenGL at what's already there? Well, that's exactly what you do: Vertex Arrays. You give OpenGL pointers to where it can find the data (the generic glVertexAttribPointer in modern OpenGL, or, in the old fixed-function pipeline, the predefined fixed attributes glVertexPointer, glTexCoordPointer, glNormalPointer, glColorPointer) and then have it draw a whole bunch of it using glDrawElements or glDrawArrays.
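As a rough illustration of that approach (the texture coordinates below are placeholder values, not the question's TEXTURE_SIZE/m_fTextureOffset, and GL_QUADS does not exist in GLES, so a triangle strip is used), the quad from the question could be drawn like this:
GLfloat quadVerts[] = { -100.0f, 0.0f, -100.0f,
                         100.0f, 0.0f, -100.0f,
                        -100.0f, 0.0f,  100.0f,
                         100.0f, 0.0f,  100.0f };
GLfloat quadTexCoords[] = { 0.0f, 0.0f,
                            1.0f, 0.0f,
                            0.0f, 1.0f,
                            1.0f, 1.0f };
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, quadVerts);         // positions straight from memory
glTexCoordPointer(2, GL_FLOAT, 0, quadTexCoords);   // matching texture coordinates
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);              // the whole quad in one call
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);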
In modern OpenGL the drawing process is controlled by user-programmable shaders. In fixed-function OpenGL all you can do is parametrize an inflationary number of state variables.
The OpenGL used by the PlayStation 3 is a variant of OpenGL ES 1.0 (according to Wikipedia, with some features of ES 2.0).
The specification is at http://www.khronos.org/opengles/1_X, and there don't seem to be any glBegin/glEnd functions there. Those fixed-pipeline functions are deprecated (and removed in the OpenGL 3.1+ core profile and in OpenGL ES 2.0) in favor of things like VBOs anyway, so there probably isn't much point in learning how to work with them.
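For context, here is a minimal, illustrative sketch of the VBO route (desktop GL 1.5+ / GLES 1.1 style, not PhyreEngine-specific): the vertex data is uploaded to a buffer object once and then drawn from there.
GLfloat tri[] = { -1.0f, 0.0f, 0.0f,
                   1.0f, 0.0f, 0.0f,
                   0.0f, 1.0f, 0.0f };
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(tri), tri, GL_STATIC_DRAW);  // upload once

// At draw time:
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, (void*)0);   // offset into the bound buffer
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableClientState(GL_VERTEX_ARRAY);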
If you are using PhyreEngine, you should generally avoid calling the graphics API directly, as PhyreEngine sits on top of different APIs on different platforms.
On PC it uses GL (or D3D), but on PS3 it uses a lower-level API. So even if you used GL ES functionality, and even if it compiled, it would likely not function. It's not surprising that you are seeing errors when building for PS3.
Ideally you should use PhyreEngine's pipeline for drawing, which is platform-agnostic. If you stick to that API, you can in principle compile your code for any supported platform.
There is a limit to how much I can comment on PhyreEngine publicly (sorry), but if you are on a university course, your university should have access to the official support forums where you could get more specific help.
If you really must target the underlying graphics API directly, be aware that you may need to write/modify your code per-platform, and that you will need to 'play nice' with any contextual state that PhyreEngine may rely on.