Low OpenGL performance on my computer - C++

We began learning OpenGL at school and, in particular, implemented a .obj mesh loader. When I run my code at school with quite heavy meshes (4M up to 17M faces), I have to wait a few seconds for the mesh to load, but once it is done, I can rotate and move around the scene with perfect fluidity.
I compiled the same code at home, and I get very poor performance when moving in a scene where heavy meshes are displayed.
I'm using OpenGL "3.0 Mesa 10.1.3" (this is the output of cout << glGetString(GL_VERSION) << endl) and compiling with g++-4.9. I don't remember the version numbers at school, but I'll update my message as soon as possible if needed. Finally, I'm on Ubuntu 14.04, my graphics card is an Nvidia GeForce 605, my CPU is an Intel(R) Core(TM) i5-2320 CPU @ 3.00GHz, and I have 8 GB of RAM.
If you have any idea that could help me understand (and fix) why it runs so slowly on a reasonably good computer (certainly not a racehorse, but good enough for this), please tell me. Thanks in advance!

TL;DR: You're using the wrong driver. Install the proprietary, closed source binary drivers from NVidia and you'll get very good performance. Also with a GeForce 605 you should get some OpenGL-4.x support.
I'm using OpenGL "3.0 Mesa 10.1.3"
(…)
my graphics card is an Nvidia GeForce 605
That's your problem right there. The open source "Nouveau" drivers for NVidia GPUs that are part of Mesa are a very long way from offering any kind of reasonable HW acceleration support. This is because NVidia doesn't publish openly available documentation on their GPUs' low-level programming interfaces.
So at the moment the only option for getting HW-accelerated OpenGL on your GPU is to install NVidia's proprietary drivers. They are available on NVidia's website; however, since your GPU isn't "bleeding edge" right now, I recommend you use the ones installable through the package manager; you'll have to add a "nonfree" package source repository, though.
This is in stark contrast to AMD GPUs, which have full, openly accessible documentation coverage. Because of that, the Mesa "radeon" drivers are quite mature: full OpenGL-3.3 core support, with performance good enough for most applications, in some applications even outperforming AMD's proprietary drivers. OpenGL-4 support is work in progress for Mesa as a whole, and last time I checked, the "radeon" drivers' development was actually moving at a faster pace than the Mesa OpenGL state tracker itself.
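If you want to verify which driver your program actually ended up on, you can query the implementation strings at runtime. A minimal sketch in C++ (it assumes an OpenGL context has already been created and made current):

#include <GL/gl.h>
#include <iostream>

// Must be called after an OpenGL context is created and made current.
void printDriverInfo() {
    std::cout << "Vendor:   " << glGetString(GL_VENDOR)   << '\n'
              << "Renderer: " << glGetString(GL_RENDERER) << '\n'
              << "Version:  " << glGetString(GL_VERSION)  << std::endl;
}

With Nouveau the renderer string typically reads something like "Gallium ... on NV...", and with a software rasterizer "llvmpipe"; after installing the proprietary driver it should report NVidia's own OpenGL implementation instead.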

Related

Who runs OpenGL shaders if there is no video card

I am writing a very basic OpenGL C++ program (Linux 64 bits).
In fact, I have 3 programs:
a main C++ program
a vertex shader
a fragment shader
The 2 shaders are compiled at runtime. I suppose these programs are run in parallel on the video card by the GPU.
My question is: what happens if my computer has only a very basic video card with no GPU?
I have tried to run my program on VirtualBox with "3d acceleration" disabled and the program works!
Does that mean OpenGL detects the video card and runs the shaders on the CPU automatically if there is no GPU?
OpenGL is just a standard, and that standard has different implementations. Normally, you'd rely on the implementation provided by your graphics driver, which is obviously going to be using the GPU.
However, most desktop Linux distros also include a software implementation of OpenGL (part of the Mesa project), which is what gets used if you don't have video drivers installed that support OpenGL. (It's very rare these days to find any video hardware, even integrated video on the CPU, that doesn't support OpenGL shaders, but on Linux drivers can be an issue, and in your case the VM is not making hardware acceleration available.)
So, the short answer is yes, your shaders can run on the CPU, but that may or may not happen, and it may or may not be automatic; it depends on what video drivers (or other OpenGL implementation) you have installed.
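If you want your program to detect this situation itself, a rough heuristic is to inspect the renderer string. The exact names are implementation-specific, so treat this sketch as an approximation:

#include <GL/gl.h>
#include <cstring>

// Heuristic: Mesa's software rasterizers identify themselves in GL_RENDERER.
// Requires a current OpenGL context.
bool isLikelySoftwareRenderer() {
    const char* renderer = reinterpret_cast<const char*>(glGetString(GL_RENDERER));
    if (!renderer) return false;
    return std::strstr(renderer, "llvmpipe") != nullptr ||
           std::strstr(renderer, "softpipe") != nullptr ||
           std::strstr(renderer, "Software") != nullptr;
}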
On any modern personal computer there is a GPU. If you don't have a dedicated GPU card from a vendor like NVidia or AMD, you will probably have a so-called "on-board" (integrated) video chip from Intel or another hardware manufacturer. The good news is that even the on-board GPUs today are pretty good (Intel has finally started doing a good job), and the chance is high that such hardware already supports a modern programmable OpenGL version. Maybe not the latest one, but from my personal experience, most of Intel's on-board GPUs from 2-3 years ago should support up to OpenGL 4.1/4.2. So as long as you are not running on really old hardware, you should have full access to GPU-accelerated APIs. Otherwise, there is the Mesa library, which comes with a software (non-GPU-accelerated) implementation of the OpenGL API.

OpenGL GLSL shader versions

Recently I have had some problems with GLSL shader versions on different computers. I know every GPU can have different support for shaders, but I don't know how to write one shader that will work on all GPUs. If I write shaders on my PC (GPU: AMD HD7770), I don't even have to specify the version, but on some older PCs, or on PCs with nVidia GPUs, the driver is stricter about versions, so I have to specify a version that the GPU supports.
Now here comes the real problem: if I specify e.g. version 330 on my PC, it works as it should, but on other PCs which should support version 330 it does not seem to work, so I have to rewrite the shader to make it work there. And if I then switch back to my PC with the newer GPU, the rewritten shader doesn't work either.
Does anyone know how I have to write shaders so they can run on all GPUs?
Writing portable OpenGL code isn't as straightforward as you might like.
nVidia drivers are permissive. You can get away with a lot of things on nVidia drivers that you can't get away with on other systems.
It's easy to accidentally use extra features. For example, I wrote a program targeting the 3.2 core profile, but used GL_INT_2_10_10_10_REV as a vertex format. The GL_INT_2_10_10_10_REV symbol is defined in 3.2, but it's not allowed as a vertex format until 3.3, and you won't get any error messages for using it by accident.
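For illustration, the offending call looked roughly like this (hypothetical attribute index and layout):

// Accepted without a GL error by permissive drivers in a 3.2 core context,
// but GL_INT_2_10_10_10_REV is only legal as a vertex format from 3.3 on.
glVertexAttribPointer(0, 4, GL_INT_2_10_10_10_REV, GL_TRUE, 0, nullptr);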
Lots of people run old drivers. According to the Steam survey, in 2013, 38% of customers with OpenGL 3.x drivers didn't have 3.3 support, even though any hardware capable of 3.0 is also capable of 3.3 with an updated driver.
You will always have to test. This is the unfortunate reality.
My recommendations are:
Always target the core profile (this and the next two recommendations are sketched in code after this list).
Always specify shader language version.
Check the driver version and abort if it is too old.
If you can, use OpenGL headers/bindings that only expose symbols in the version you are targeting.
Get a copy of the spec for the target version, and use that as a reference instead of the OpenGL man pages.
Write your code so that it can also run on OpenGL ES, if that's feasible.
Test on different systems. One PC is probably not going to cut it. If you can dig up a second PC with a graphics card from a different vendor (don't forget Intel's integrated graphics), that would be better. You can probably get an OpenGL 3.x desktop for a couple hundred dollars, or if you want to save the money, ask to use a friend's computer for some quick testing. You could also buy a second video card (think under $40 for a low-end card with OpenGL 4.x support), just be careful when swapping them out.
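A minimal sketch of the first three recommendations, assuming GLFW 3 for context creation (an assumption; any toolkit with explicit version hints works the same way):

#include <GLFW/glfw3.h>
#include <cstdio>
#include <cstdlib>

const char* vertexSrc =
    "#version 330 core\n"              // always state the GLSL version explicitly
    "layout(location = 0) in vec4 position;\n"
    "void main() { gl_Position = position; }\n";

int main() {
    if (!glfwInit()) return EXIT_FAILURE;
    // Explicitly request a 3.3 core profile context...
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    GLFWwindow* window = glfwCreateWindow(640, 480, "test", nullptr, nullptr);
    if (!window) {                      // ...and abort if the driver can't provide it
        std::fprintf(stderr, "OpenGL 3.3 core profile not available\n");
        glfwTerminate();
        return EXIT_FAILURE;
    }
    glfwMakeContextCurrent(window);
    /* compile vertexSrc with glCreateShader/glShaderSource, render, etc. */
    glfwTerminate();
    return EXIT_SUCCESS;
}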
The main reason that commercial games run on a variety of systems is that they have a QA budget. If you can afford a QA team, do it! If you don't have a QA team, then you're going to have to do both QA and development -- two jobs is more work, but that's the price you pay for quashing bugs.

Using OpenGL on lower-power side of Hybrid Graphics chip

I have hit a brick wall and I wonder if someone here can help. My program opens an OpenGL surface for very minor rendering needs. It seems that on the MacBook Pro this causes the graphics driver to switch the hybrid setup from the low-performance Intel graphics to the high-performance AMD ATI graphics.
This causes me problems, as there seems to be an issue with the AMD driver when putting the Mac to sleep, but it also drains the battery unnecessarily fast. I only need OpenGL to create a static 3D image on occasion; I do not require a fast frame rate!
Is there a way in a Cocoa app to prevent OpenGL switching a hybrid graphics card into performance mode?
The relevant documentation for this is QA1734, “Allowing OpenGL applications to utilize the integrated GPU”:
… On OS X 10.6 and earlier, you are not allowed to choose to run on the integrated GPU instead. …
On OS X 10.7 and later, there is a new attribute called NSSupportsAutomaticGraphicsSwitching. To allow your OpenGL application to utilize the integrated GPU, you must add in the Info.plist of your application this key with a Boolean value of true…
So you can only do this on Lion, and “only … on the dual-GPU MacBook Pros that were shipped Early 2011 and after.”
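For reference, the corresponding Info.plist entry (standard plist XML; key name as given in QA1734) is just:

<key>NSSupportsAutomaticGraphicsSwitching</key>
<true/>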
There are a couple of other important caveats:
Additionally, you must make sure that your application works correctly with multiple GPUs or else the system may continue forcing your application to use the discrete GPU. TN2229 Supporting Multiple GPUs on Mac OS X discusses in detail the required steps that you need to follow.
and:
Features that are available on the discrete GPU may not be available on the integrated GPU. You must check that features you desire to use exist on the GPU you are using. For a complete listing of supported features by GPU class, please see: OpenGL Capabilities Tables.
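Checking for a feature before using it can be as simple as scanning the extension list. A sketch for a 3.2 core context, where extensions are enumerated with the indexed query (the header path shown is the OS X convention):

#include <OpenGL/gl3.h>  // OS X header for the 3.2+ core profile
#include <cstring>

// True if the current context advertises the named extension.
bool hasExtension(const char* name) {
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i) {
        const char* ext = reinterpret_cast<const char*>(
            glGetStringi(GL_EXTENSIONS, static_cast<GLuint>(i)));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}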

Which version of OpenGL to use?

I currently run a machine that allows me to program in OpenGL 2.1. If I were to make a program, should I use the power of the current OpenGL versions like 3.x/4.x or use 2.1?
On a side question: How can I tell what's the highest version of OpenGL my computer can run?
On another side question: is upgrading my video card all it takes to be able to program with newer versions of OpenGL?
OpenGL versions (for AMD and NVIDIA GPUs) roughly correspond to levels of hardware. 2.x OpenGL versions are for DX9-level hardware. 3.x represents DX10-level hardware, and 4.x represents DX11-class hardware. So the version you pick restricts where your code can run.
In general, any AMD or NVIDIA GPU you can actually buy new from a store will be 3.x or better (more than likely, 4.x). Even integrated GPUs, motherboard or CPU, from AMD are 3.x or better. I do some home development work on an HD 3300 motherboard GPU, and it works reasonably well.
Intel is a problem. Intel's OpenGL driver quality is pretty poor. Many old Intel machines can only support GL 1.4, which is pre-DX9 class functionality. They do support some higher-level extensions (shaders, but only vertex shaders, since they run them in software).
More recent Intel GPUs are a bit better, but their GL drivers are still rather buggy.
The above describes the situation for Windows. Linux is a bit fuzzier, because there are drivers from NVIDIA/AMD, and open-source community written drivers. The latter are generally not as good, but they are improving. These tend to be for 3.x-class hardware.
The Mac OS X world is a bit different. Mac OS X Lion (10.7), recently released, adds support for OpenGL 3.2 (sadly not 3.3, for some reason). Apple rigidly controls how OpenGL works on their platform, but hopefully they will be updating GL versions more frequently than they have been recently.
So on Macs, you really have two choices: 2.1 or 3.2. Note that Lion's 3.2 support only exposes core OpenGL functionality. See this page for details on what that means.
You cannot ask up front what the highest version your particular computer is capable of; there is simply the version you get when you create a context. In general, unless you specifically ask for a version (and even then, usually not), you will get the highest version your hardware and drivers can handle.
Oh, and yes: the OpenGL version is controlled by your video card's capabilities (and installed drivers).
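In code, that means creating a context first and then asking what you actually got. A minimal sketch (GL_MAJOR_VERSION/GL_MINOR_VERSION exist from OpenGL 3.0 on; on older contexts you have to parse glGetString(GL_VERSION) instead):

#include <GL/gl.h>   // GL_MAJOR_VERSION may live in glext.h or your loader's header
#include <cstdio>

// Requires a current OpenGL context.
void printContextVersion() {
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    std::printf("Context is OpenGL %d.%d\n", major, minor);
}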
The following advice assumes that you're developing a serious application that you intend for others to use. This isn't for little demo apps or whatever.
In general, I would advise against explicitly restricting your code to 4.x. While 4.x adoption increases every day (there are two hardware generations from both NVIDIA and AMD with 4.x support, a third will likely be out by year's end from AMD, and AMD is starting to embed 4.x-capable GPUs in their CPUs), there is still a lot of 3.x hardware out there. 4.x doesn't buy you a whole lot, and you can easily add code paths to conditionally support 4.x features if they are available.
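Such a conditional code path can be as simple as one branch at startup. A hypothetical sketch (initTessellationPath and initBasicPath are placeholders for your own renderer setup):

#include <GL/gl.h>

void initTessellationPath();  // hypothetical 4.x path (e.g. tessellation shaders)
void initBasicPath();         // hypothetical 3.x fallback for the same visuals

// Pick a render path once, based on the version the context actually provides.
void chooseRenderPath() {
    GLint major = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    if (major >= 4)
        initTessellationPath();
    else
        initBasicPath();
}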
In order to use OpenGL 3.x you need a card that supports DirectX 10 and proper drivers that support it.
The advantage over DirectX is that you can also use OpenGL 3 and 4 on Windows XP; there is no need for Vista or 7.
Which version you should use depends on your audience. If your audience is gamers, go ahead and use 3; I wouldn't go 4-exclusive yet, as DX11-class cards are still rare.
For a first look at how gamers use their computers and what hardware they have, Steam is a good source:
http://store.steampowered.com/hwsurvey
You can determine the version by running:
std::cout << glGetString(GL_VERSION) << std::endl;
A good OpenGL3 Tutorial:
http://arcsynthesis.org/gltut/
The OpenGL 3.3 SDK Reference:
http://www.opengl.org/sdk/docs/man3/
Hope this helps a bit :).
Lots of embedded Intel graphics are limited to 1.4 or 1.5.
Mac OSX is stuck on 2.1 I hear.
All Radeon and GeForce cards can do 3+ (may need a driver update).
And you can program with any version, but if your hardware doesn't support it, you'll end up testing under a software renderer (slow!).
On a side question: How can I tell what's the highest version of OpenGL my computer can run?
Let me answer that side question.
I came across the tool below; it's quite complete and lets me see every OpenGL version my system currently supports (from 1.0 up to the maximum it actually handles), as well as the extensions available on my system, and not only ARB ones: it covers NV, ATI, OES, etc.
http://www.realtech-vr.com/glview/download.html

I need OpenGL 2.0 but my graphics card supports 1.5

I want to start on my WebGL project, and the minimal requirement is that my graphics card supports OpenGL 2.0.
The problem is that I have an Intel laptop with an integrated Intel 965 graphics media accelerator; the driver is up to date, and it supports OpenGL 1.5.
Is there any way to update my graphics card to support 2.0? Is this possible?
Okay, just stay patient, because ANGLE is coming. It seems to me that your hardware is able to run DirectX 9, and ANGLE is a project from Google that provides WebGL support on top of DirectX. But as the others say, you can't upgrade OpenGL drivers just like that. Or you could try Mesa in the Firefox build.
For more information, see Learningwebgl.com.
Sadly, no. With a little more effort you can still develop against OpenGL 2.0, but you'll need to use another machine (or just buy a better graphics card) to test anything 2.0-specific (pixel shaders, for instance).
OK, that's not entirely true. You could download the Mesa library, compile it for Win32, and get some of the OpenGL 2.0 functionality emulated in a software renderer, but it would be very slow.
It's possible that updating drivers might help some, but probably won't make that jump. Otherwise, you could use something like Mesa3D, which does the rendering in software. It can be slow, but does support up through OpenGL 2.1 (including shaders), if memory serves.
If there's no other way, you could try http://www.mesa3d.org/ . I haven't followed this project for quite some time, but apparently they currently provide OpenGL 2.1 software rendering.
I just updated the drivers on my HP 6710b with the Mobile Intel 965 Express Chipset, and now WebGL is working in Firefox 4 RC1!
I put instructions on this site.
It is not pretty but it works!
The ANGLE project is your best bet. Check which exact 965 chip you have from here (search for 'Intel GMA' on Wikipedia), which also lists the OpenGL support version for these chips. It might take a couple of months, though, before you can use ANGLE to accelerate your WebGL application.
I have a slightly newer 4500MHD, and I have the same problem. WebGL works in Firefox 3.7a4 but fails in the later versions a5 and a6. I had to use the latest drivers from Intel, which claim to support OpenGL 2.0. The Microsoft drivers don't ship with OpenGL support.
I have reported an issue in the Firefox bug tracker: https://bugzilla.mozilla.org/show_bug.cgi?id=570474. It looks like support for Intel cards might be fixed by the time the releases reach beta.