Who runs OpenGL shaders if there is no video card? - C++

I am writing a very basic OpenGL C++ program (Linux 64 bits).
In fact, I have three programs:
a main C++ program
a vertex shader
a fragment shader
The two shaders are compiled at runtime. I assume these programs run in parallel on the video card, executed by the GPU.
My question is: what happens if my computer contains a very basic video card with no GPU?
I have tried to run my program on VirtualBox with "3d acceleration" disabled and the program works!
Does that mean OpenGL detects the video card and runs the shaders on the CPU automatically if there is no GPU?

OpenGL is just a standard, and that standard has different implementations. Normally, you'd rely on the implementation provided by your graphics driver, which is obviously going to be using the GPU.
However, most desktop Linux distros also include Mesa, an OpenGL implementation that can fall back to a software rasterizer, which is what gets used if you don't have video drivers installed that support OpenGL. (It's very rare these days to find any video hardware, even integrated video on the CPU, that doesn't support OpenGL shaders, but on Linux drivers can be an issue, and in your case the VM is not making hardware acceleration available.)
So, the short answer is yes, your shaders can run on the CPU. Whether that happens, and whether it happens automatically, depends on what video drivers (or other OpenGL implementation) you have installed.
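One way to check at runtime which implementation you ended up with is to inspect the strings OpenGL reports. The helper below only does the string matching; the names "llvmpipe", "softpipe", and "swrast" are what Mesa's CPU rasterizers typically report, and the glGetString call that would feed it (which requires a current GL context) is only shown in a comment:

```cpp
#include <algorithm>
#include <cassert>
#include <cctype>
#include <string>

// Returns true if a GL_RENDERER string looks like one of Mesa's
// software rasterizers rather than a hardware-accelerated driver.
bool looksLikeSoftwareRenderer(std::string renderer) {
    std::transform(renderer.begin(), renderer.end(), renderer.begin(),
                   [](unsigned char c) { return std::tolower(c); });
    // Names Mesa's CPU rasterizers typically report.
    for (const char* name : {"llvmpipe", "softpipe", "swrast"}) {
        if (renderer.find(name) != std::string::npos)
            return true;
    }
    return false;
}

// In a real program, after creating an OpenGL context, you would call:
//   looksLikeSoftwareRenderer(
//       reinterpret_cast<const char*>(glGetString(GL_RENDERER)));
```

Checking GL_VENDOR and GL_RENDERER this way is a common diagnostic; it tells you which implementation you actually got, not which one you asked for.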

On any modern personal computer there is a GPU. If you don't have a dedicated GPU card from a vendor like NVIDIA or AMD, you will probably have a so-called "on-board" (integrated) video chip from Intel or another hardware manufacturer. The good news is that even today's on-board GPUs are pretty good (Intel has finally started doing a good job), and chances are high that such hardware already supports a modern programmable OpenGL version. Maybe not the latest one, but in my personal experience, most Intel on-board GPUs from the last 2-3 years support up to OpenGL 4.1/4.2. So as long as you are not running on really old hardware, you should have full access to GPU-accelerated APIs. Otherwise, you have the Mesa library, which comes with a software (non-GPU-accelerated) implementation of the OpenGL API.

Related

Image Geometrical remapping on OpenGL ES

I have an algorithm that runs on a PC and uses OpenCV's remap. It is slow and I need to run it on an embedded system (for example a device such as this: http://www.hardkernel.com/main/products/prdt_info.php).
It has OpenGL 3.0, and I am wondering if it is possible to write an OpenGL shader to do the remapping (what OpenCV's remap does).
I have another device that has OpenGL 2.0. Can that device do shader programming?
Where can I learn about shader programming in OpenGL?
I am using Linux.
Edit 1
The code runs on a PC in around 1 minute; on an embedded system it takes around 2 hours!
I need to run it on an embedded system, and for that reason I am thinking of using OpenGL or OpenCL (the board has an OpenCL 1.1 driver).
What is the best option for this? Can I use OpenGL 2 or OpenGL 3?
A PC with a good graphics card (compatible with OpenCV) is much faster than a small embedded computer like an ODROID or a Banana Pi. I mean that the computational-power/price and computational-power/energy ratios are lower on these platforms.
If your algorithm is slow:
Are you sure your graphics driver is correctly configured to support OpenCV?
Try to improve your algorithm. On a current PC it is easy to get 1 TFLOP with OpenCL, so if your program really requires more, you should think about compute clouds and the like. Check that you configured the appropriate buffer types, etc.
OpenGL 3 allows general-purpose shaders, but OpenGL 2 is very different, and it may be much harder or even impossible to port your algorithm to it.
To learn OpenGL/GLSL, be very careful, because many pages teach bad or outdated code.
I recommend a good book, like: http://www.amazon.com/OpenGL-Shading-Language-Cookbook-Edition/dp/1782167021/ref=dp_ob_title_bk
EDIT 1
OpenGL 3+ and OpenGL ES 3+ have general-purpose shaders and may be used for fast computing, so yes, you will see a performance increase. But the GPUs on these platforms are very small and slow (usually fewer than 10 cores). Do not expect to get the same one-minute result on this ODROID as on your PC with 500-2000 GPU cores.
OpenGL 2 is built around the fixed-function pipeline, and its shaders are far more limited, so it is hard to use for general parallel computing.
If you really need to use an embedded platform, maybe you could use a cluster of them?
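For reference, remap is conceptually just a gather: for every output pixel, look up a source coordinate in two map arrays and sample the input image there. A fragment shader does the same thing with one texture fetch per pixel. Below is a minimal CPU sketch of the nearest-neighbour case; the function name and the flat row-major single-channel layout are my own illustration, not OpenCV's API:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Nearest-neighbour remap: dst(x, y) = src(mapX(x, y), mapY(x, y)).
// Images are stored flat, row-major, one byte per pixel.
std::vector<unsigned char> remapNearest(const std::vector<unsigned char>& src,
                                        int srcW, int srcH,
                                        const std::vector<float>& mapX,
                                        const std::vector<float>& mapY,
                                        int dstW, int dstH) {
    std::vector<unsigned char> dst(dstW * dstH, 0);
    for (int y = 0; y < dstH; ++y) {
        for (int x = 0; x < dstW; ++x) {
            // Round the floating-point source coordinate to the nearest pixel.
            int sx = static_cast<int>(std::lround(mapX[y * dstW + x]));
            int sy = static_cast<int>(std::lround(mapY[y * dstW + x]));
            if (sx >= 0 && sx < srcW && sy >= 0 && sy < srcH)
                dst[y * dstW + x] = src[sy * srcW + sx];
            // Out-of-range pixels are left at 0 (a simple border policy).
        }
    }
    return dst;
}
```

In a fragment shader the two loops disappear: each shader invocation handles one output pixel, reads the map (often packed into a texture) and samples the source texture, so the GPU runs the gather for all pixels in parallel. With GL_LINEAR filtering on the source texture you get bilinear interpolation essentially for free.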

OpenGL low performances on my computer

We began learning OpenGL at school and, in particular, implemented a .obj mesh loader. When I run my code at school with quite heavy meshes (4M up to 17M faces), I have to wait a few seconds for the mesh to be loaded but once it is done, I can rotate and move the scene with a perfect fluidity.
I compiled the same code at home, and I have very low performances when moving in a scene where heavy meshes are displayed.
I'm using OpenGL 3.0 via Mesa 10.1.3 (this is the output of cout << glGetString(GL_VERSION) << endl) and compiling with g++-4.9. I don't remember the version numbers at my school, but I'll update my message as soon as possible if needed. Finally, I'm on Ubuntu 14.04, my graphics card is an NVIDIA GeForce 605, my CPU is an Intel(R) Core(TM) i5-2320 CPU @ 3.00GHz, and I have 8 GB of RAM.
If you have any idea to help me understand (and fix) why it is running so slowly on quite a good computer (certainly not a racehorse, but good enough for that), please tell me. Thanks in advance!
TL;DR: You're using the wrong driver. Install the proprietary, closed source binary drivers from NVidia and you'll get very good performance. Also with a GeForce 605 you should get some OpenGL-4.x support.
I'm using the 3.0 Mesa 10.1.3 version of OpenGL
(…)
my graphic card is a Nvidia Geforce 605
That's your problem right there. The open source "Nouveau" drivers for NVIDIA GPUs that are part of Mesa are a very long way from offering any kind of reasonable HW acceleration support. This is because NVIDIA doesn't publish openly available documentation on their GPUs' low-level programming.
So at the moment the only option for getting HW accelerated OpenGL on your GPU is to install NVidia's proprietary drivers. They are available on NVidia's website; however since your GPU isn't "bleeding edge" right now I recommend you use those installable through the package manager; you'll have to add a "nonfree" package source repository though.
This is in stark contrast to AMD GPUs, which have full, openly accessible documentation coverage. Because of that, the Mesa "radeon" drivers are quite mature: full OpenGL-3.3 core support, with performance good enough for most applications, in some applications even outperforming AMD's proprietary drivers. OpenGL-4 support is a work in progress for Mesa as a whole, and last time I checked, the "radeon" drivers' development was actually moving at a faster pace than the Mesa OpenGL state tracker itself.
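A quick way to confirm which driver you actually ended up on is to look at the version string the context reports: "3.0 Mesa 10.1.3" means a Mesa driver exposing at most OpenGL 3.0. Here is a small parser sketch; it assumes the conventional "major.minor[.release] vendor-info" layout of GL_VERSION strings, and in a real program the input would come from glGetString(GL_VERSION) on a current context:

```cpp
#include <cassert>
#include <cstdio>
#include <string>

struct GLVersion {
    int major = 0;
    int minor = 0;
    bool mesa = false;  // does the vendor part mention Mesa?
};

// Parses strings like "3.0 Mesa 10.1.3" or "4.4.0 NVIDIA 331.113".
GLVersion parseGLVersion(const std::string& s) {
    GLVersion v;
    std::sscanf(s.c_str(), "%d.%d", &v.major, &v.minor);
    v.mesa = s.find("Mesa") != std::string::npos;
    return v;
}
```

If the parsed version is far below what the hardware should support (a GeForce 605 can do OpenGL 4.x), that is a strong hint the wrong driver is loaded.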

Using OpenGL on lower-power side of Hybrid Graphics chip

I have hit a brick wall and I wonder if someone here can help. My program opens an OpenGL surface for very minor rendering needs. It seems that on the MacBook Pro this causes the graphics driver to switch the hybrid card from the low-performance Intel graphics to the high-performance AMD ATI graphics.
This causes me problems, as there seems to be an issue with the AMD driver and putting the Mac to sleep, but it also drains the battery unnecessarily fast. I only need OpenGL to create a static 3D image on occasion; I do not require a fast frame rate!
Is there a way in a Cocoa app to prevent OpenGL switching a hybrid graphics card into performance mode?
The relevant documentation for this is QA1734, “Allowing OpenGL applications to utilize the integrated GPU”:
… On OS X 10.6 and earlier, you are not allowed to choose to run on the integrated GPU instead. …
On OS X 10.7 and later, there is a new attribute called NSSupportsAutomaticGraphicsSwitching. To allow your OpenGL application to utilize the integrated GPU, you must add in the Info.plist of your application this key with a Boolean value of true…
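For reference, the NSSupportsAutomaticGraphicsSwitching key described in QA1734 is just a Boolean entry in the application's Info.plist; a minimal fragment:

```xml
<key>NSSupportsAutomaticGraphicsSwitching</key>
<true/>
```

With this set, OS X 10.7 and later is allowed to keep the app on the integrated GPU instead of waking the discrete one.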
So you can only do this on Lion, and “only … on the dual-GPU MacBook Pros that were shipped Early 2011 and after.”
There are a couple of other important caveats:
Additionally, you must make sure that your application works correctly with multiple GPUs or else the system may continue forcing your application to use the discrete GPU. TN2229 Supporting Multiple GPUs on Mac OS X discusses in detail the required steps that you need to follow.
and:
Features that are available on the discrete GPU may not be available on the integrated GPU. You must check that features you desire to use exist on the GPU you are using. For a complete listing of supported features by GPU class, please see: OpenGL Capabilities Tables.

graphics libraries and GPUs [duplicate]

This question already has answers here:
Closed 10 years ago.
Possible Duplicate:
How does OpenGL work at the lowest level?
When we write a program that uses the OpenGL library, for example for the Windows platform, and we have a graphics card that supports OpenGL, what happens is this:
We develop our program in a programming language, making the graphics calls through OpenGL (e.g. in Visual C++).
We compile and link the program for the target platform (e.g. Windows).
When we run the program, since we have a graphics card that supports OpenGL, the driver installed on Windows is responsible for managing the graphics: the CPU sends the required data to the chip on the graphics card (e.g. an NVIDIA GPU), which renders the result.
In this context we talk about hardware acceleration: the work of computing the final framebuffer of our graphical output is offloaded from the CPU to the GPU.
In this environment, when the GPU's driver receives the data, how does it leverage the capabilities of the GPU to accelerate the drawing? Does it translate the received instructions and data into something like CUDA to exploit the GPU's parallelism? Or does it just copy the data received from the CPU into specific areas of device memory? I don't quite understand this part.
Finally, if I had a card that did not support OpenGL, would the driver installed in Windows detect the problem? Would I get an error, or would the CPU compute our framebuffer?
You would do well to look into computer-gaming sites. They frequently publish articles on how 3D graphics works and how "artefacts" present themselves when there are errors in games or drivers.
You can also read articles on the architecture of 3D libraries like Mesa or Gallium.
Overall, drivers have a set of methods for implementing this or that piece of functionality of Direct3D, OpenGL, or another standard API. When they load, they check the hardware. You can have a cheap video card or an expensive one, a recent one or one released three years ago: that is different hardware. So drivers try to map each API feature to an implementation that can be used on the given computer, whether accelerated by the GPU, accelerated by the CPU (e.g. with SSE4), or even some more basic implementation.
Then the driver tries to estimate the GPU load. Sometimes a function can be accelerated, yet the GPU (especially a low-end one) is already overloaded by another task; in that case the driver may calculate on the CPU instead of waiting for a GPU time slot.
When you make a mistake, there are always several possible outcomes, depending on the intelligence and quality of the driver.
Maybe the driver will fix the error for you, ignoring your commands and running its own set of them instead.
Maybe the driver will return some error code to your program.
Maybe the driver will execute the command as is. If you painted with red colour instead of green, that is an error, but of a kind the driver cannot know about. Search for "3d artefacts" on PC-gaming sites.
In the worst case, your error interacts with a bug in the driver and your computer crashes and reboots.
Of course, all those adaptive strategies are rather complex and indeterminate, which is one reason 3D drivers are closed source and the know-how of their internals is closely guarded.
Search sites dedicated to 3D gaming, and perhaps also to 3D modelling; they rate video cards ("which are better to buy"), and sometimes, when they review new chip families, they publish rather detailed essays about the technical internals of all this.
Regarding the question about what the driver does:
Some of the things that a driver does: It compiles your GPU programs (vertex,fragment, etc. shaders) to the machine instructions of the specific card, uploads the compiled programs to the appropriate area of the device memory and arranges the programs to be executed in parallel on the many many graphics cores on the card.
It uploads the graphics data (vertex coordinates, textures, etc.) to the appropriate type of graphics-card memory, using various hints from the programmer, for example whether the data is updated frequently, infrequently, or not at all.
It may utilize special units in the graphics card for transferring data to/from host memory; for example, some NVIDIA cards have a DMA unit (some Quadro cards may have two or more), which can upload, say, textures in parallel with the usual driver operation (other transfers, drawing, etc.).

Where are the OpenGL commands run?

I'm writing a simple OpenGL program on a multi-core computer that has a GPU. The GPU is a simple GeForce with PhysX, CUDA, and OpenGL 2.1 support. When I run this program, is it the host CPU that executes the OpenGL-specific commands, or are they transferred directly to the GPU?
Normally that's a function of the drivers you're using. If you're just using vanilla VGA drivers, then all of the OpenGL computations are done on your CPU. Normally, however, with modern graphics cards and production drivers, calls to OpenGL routines that your graphics card's GPU can handle in hardware are performed there; others that the GPU can't perform are handed off to the CPU.