When we write a program that uses the OpenGL library, for example on the Windows platform, and we have a graphics card that supports OpenGL, what happens is this:
We develop our program in a programming language, making the graphics calls through OpenGL (e.g. in Visual C++).
We compile and link the program for the target platform (e.g. Windows).
When we run the program, since we have a graphics card that supports OpenGL, the driver installed on Windows takes care of managing the graphics. To do this, the CPU sends the required data to the chip on the graphics card (e.g. an NVIDIA GPU), which renders the result.
This is what we mean by graphics acceleration: the work of computing the final framebuffer of our graphical output is offloaded from the CPU.
In this environment, when the GPU driver receives the data, how does it use the capabilities of the GPU to accelerate drawing? Does it translate the received instructions and data into something like CUDA to exploit the hardware's parallelism? Or does it just copy the data received from the CPU into specific areas of device memory? I don't quite understand this part.
Finally, if I had a card that did not support OpenGL, would the driver installed on Windows detect the problem? Would I get an error, or would the CPU compute our framebuffer instead?
You'd do better to look at computer gaming sites. They frequently publish articles on how 3D graphics works and how "artefacts" show up when there are errors in games or drivers.
You can also read articles on the architecture of 3D libraries like Mesa or Gallium.
Overall, drivers have a set of methods for implementing each piece of functionality of Direct3D, OpenGL, or another standard API. When they load, they check the hardware. You can have a cheap video card or an expensive one, a recent one or one released 3 years ago: that is all different hardware. So the driver tries to map each API feature to an implementation that can be used on the given computer: accelerated by the GPU, accelerated by the CPU (e.g. via SSE4), or even some more basic fallback implementation.
Then the driver tries to estimate the GPU load. Sometimes a function could be accelerated, yet the GPU (especially a low-end one) is already overloaded by other work; in that case the driver may calculate on the CPU instead of waiting for a GPU time slot.
When you make a mistake, there are several possible outcomes, depending on the intelligence and quality of the driver.
Maybe the driver will fix the error for you, ignoring your commands and running its own set of them instead.
Maybe the driver will return some error code to your program.
Maybe the driver will execute the command as is. If you painted with red colour instead of green, that is an error, but the kind of error the driver cannot know about. Search for "3D artefacts" on PC gaming sites.
In the worst case, your error interferes with an error in the driver, and your computer crashes and reboots.
Of course, all those adaptive strategies are rather complex and indeterminate, which is why 3D drivers tend to be closed source, their internals closely guarded know-how.
Search sites dedicated to 3D gaming, and perhaps also to 3D modelling: they rate which video cards are better to buy, and sometimes, when they review new chip families, they write rather detailed essays about the technical internals of all this.
Regarding question 5:
Some of the things that a driver does: it compiles your GPU programs (vertex, fragment, etc. shaders) to the machine instructions of the specific card, uploads the compiled programs to the appropriate area of device memory, and arranges for the programs to be executed in parallel on the many, many graphics cores on the card.
It uploads the graphics data (vertex coordinates, textures, etc.) to the appropriate type of graphics card memory, using various hints from the programmer, for example whether the data is updated frequently, infrequently, or not at all.
It may utilize special units in the graphics card for transferring data to/from host memory; for example, some NVIDIA cards have a DMA unit (some Quadro cards may have two or more) which can upload, say, textures in parallel with the usual driver operation (other transfers, drawing, etc.).
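To make that concrete, here is a minimal sketch (C, desktop OpenGL) of the application-side calls that trigger the driver work described above. It assumes a current GL context and a loader such as GLEW already initialized; vertex_src is a hypothetical GLSL source string.

    #include <GL/glew.h>  /* assumes glewInit() was called after context creation */

    extern const char *vertex_src;  /* hypothetical GLSL vertex shader source */

    void upload_example(const float *vertices, GLsizeiptr n_bytes)
    {
        /* The driver compiles this GLSL down to the card's machine instructions. */
        GLuint vs = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(vs, 1, &vertex_src, NULL);
        glCompileShader(vs);

        /* The usage hint tells the driver how often the data will change,
           so it can choose an appropriate kind of device memory:
           GL_STATIC_DRAW  - set once, drawn many times
           GL_DYNAMIC_DRAW - updated often
           GL_STREAM_DRAW  - updated roughly every frame */
        GLuint vbo;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, n_bytes, vertices, GL_STATIC_DRAW);
    }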
I am writing a very basic OpenGL C++ program (Linux 64 bits).
In fact, I have 3 programs:
a main C++ program
a vertex shader
a fragment shader
The 2 shaders are compiled at runtime. I suppose these programs are run in parallel on the video card by the GPU.
My question is: what happens if my computer contains a very basic video card with no GPU?
I have tried to run my program on VirtualBox with "3d acceleration" disabled and the program works!
Does that mean OpenGL detects the video card and runs the shaders on the CPU automatically if there is no GPU?
OpenGL is just a standard, and that standard has different implementations. Normally, you'd rely on the implementation provided by your graphics driver, which is obviously going to be using the GPU.
However, most desktop Linux distros also include a software implementation of OpenGL, called Mesa, which is what gets used if you don't have video drivers installed that support OpenGL. (It's very rare these days to find any video hardware, even integrated video on the CPU, that doesn't support OpenGL shaders, but on Linux drivers can be an issue, and in your case the VM is not making hardware acceleration available.)
So the short answer is: yes, your shaders can run on the CPU. But that may or may not happen, and it may or may not be automatic; it depends on what video drivers (or other OpenGL implementation) you have installed.
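One way to see which implementation you actually got is to query it at runtime. A minimal sketch (C, assuming a current GL context); Mesa's software renderer typically reports something like "llvmpipe" in the renderer string:

    #include <stdio.h>
    #include <GL/gl.h>

    void print_gl_implementation(void)
    {
        /* These strings identify the vendor and driver/implementation in use. */
        printf("Vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
        printf("Renderer: %s\n", (const char *)glGetString(GL_RENDERER));
        printf("Version:  %s\n", (const char *)glGetString(GL_VERSION));
    }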
On any modern personal computer there is a GPU. If you don't have a dedicated GPU card from a vendor like NVIDIA or AMD, you will probably have a so-called "on-board" (integrated) video chip by Intel or another hardware manufacturer. The good thing is that even the on-board GPUs today are pretty good (Intel finally started doing a good job), and the chance is high that such hardware on your PC already supports a modern programmable OpenGL version. Maybe not the latest one, but from my personal experience, most of Intel's on-board GPUs from 2-3 years ago should support up to OpenGL 4.1/4.2. So as long as you are not running on really old hardware, you should have full access to GPU-accelerated APIs. Otherwise, you have the Mesa library, which comes with a software (non-GPU-accelerated) implementation of the OpenGL API.
I have an algorithm that runs on a PC and uses OpenCV's remap. It is slow, and I need to run it on an embedded system (for example a device such as this: http://www.hardkernel.com/main/products/prdt_info.php).
It has OpenGL 3.0, and I am wondering if it is possible to write code in an OpenGL shader to do the remapping (OpenCV's remap).
I have another device that has OpenGL 2.0. Can that device do shader programming?
Where can I learn about shader programming in OpenGL?
I am using Linux.
Edit 1
The code takes around 1 min on a PC; on an embedded system it takes around 2 hours!
I need to run it on an embedded system, and for that reason I am thinking of using OpenGL or OpenCL (the board has an OpenCL 1.1 driver).
What is the best option for this? Can I use OpenGL 2 or OpenGL 3?
A PC with a good graphics card (compatible with OpenCV) is much faster than a small embedded PC like an Odroid or a Banana Pi. What I mean is that the ratio of computational power to price, or to energy consumption, is lower on these platforms.
If your algorithm is slow:
Are you sure your graphics driver is correctly configured to support OpenCV?
Try to improve your algorithm. On a current PC it is easy to reach 1 TFLOP with OpenCL, so if your program really requires more, you should think about compute clouds and the like. Also check that you configured the appropriate buffer types, etc.
OpenGL 3 allows general-purpose shaders, but OpenGL 2 is very different, and it would be much harder, or impossible, to make your algorithm work with it.
To learn OpenGL/GLSL, be very careful, because most pages teach bad/old code.
I recommend a good book, like: http://www.amazon.com/OpenGL-Shading-Language-Cookbook-Edition/dp/1782167021/ref=dp_ob_title_bk
EDIT 1
OpenGL 3+ and OpenGL ES 3+ have general-purpose shaders and can be used for fast computing. So yes, you will get a performance increase. But the graphics chips on these platforms are very small/slow (usually fewer than 10 cores). Do not expect to get the same 1-minute result on this ODROID as on your PC with 500-2000 GPU cores.
OpenGL 2 has a fixed pipeline, and it is hard to use it for parallel computing.
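To give an idea of why remap fits GL 3+ well: the core of cv::remap is just a per-pixel dependent texture read. Here is a sketch of the idea, with the shader source held in a C string as it would be passed to glShaderSource at runtime (the uniform names src and map_xy are hypothetical):

    /* cv::remap as a fragment shader: each output pixel reads its source
       coordinates from a lookup texture, then samples the input image there. */
    static const char *remap_fs =
        "#version 300 es\n"
        "precision highp float;\n"
        "uniform sampler2D src;    /* image to be remapped */\n"
        "uniform sampler2D map_xy; /* RG float texture: normalized source coords */\n"
        "in  vec2 uv;\n"
        "out vec4 color;\n"
        "void main() {\n"
        "    vec2 srcCoord = texture(map_xy, uv).rg; /* where to sample from */\n"
        "    color = texture(src, srcCoord); /* bilinear if GL_LINEAR, like INTER_LINEAR */\n"
        "}\n";

Drawing a single full-screen quad with this shader then performs the whole remap in one pass on the GPU.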
If you really need to use an embedded platform, maybe you can use a cluster of them?
Motivation - Write a program in C (and Assembly, if required) to color a rectangular area of the screen red.
STRICT requirements - GNU/Linux running with the bare minimum utilities and interfaces in text/console mode. So no X (or equivalents like Wayland/Mir), no non-default libraries or interfaces (outside POSIX, LSB, etc. provided by the kernel), and no extra assumptions except the presence of the device driver for the monitor.
Effectively, what I am looking for is information on how to write a program which will eventually send a signal through the VGA port and cable to the monitor to color a particular portion of the screen red.
Apologies if this sounds rude, but no "Why do you want to do this?" or "Why don't you use the ABC library?" answers. I am trying to understand how to write an implementation of the X server or of a kernel framebuffer (/dev/fb0) library, for example. It is OK to provide a link to the source of a C library.
no extra assumptions except the presence of the device driver for the monitor.
That means you can use X or Wayland, because those are the graphics driver infrastructure on Linux.
Linux (the kernel) by itself doesn't contain any graphics primitives. It provides some interfaces to talk to the GPU, allocate memory on it, and configure the on-screen framebuffer, but apart from raw framebuffer memory access the Linux kernel has no way to perform drawing operations. For that you need some infrastructure in userspace.
Wayland builds on top of DRI2, which in turn talks to the DRM kernel API. On top of that you need a GPU-dependent state tracker. Mesa has state trackers for a number of GPUs and provides OpenGL and OpenVG frontends.
The NVidia and ATI proprietary, closed-source graphics drivers are designed to work with X only. So with those, to make use of the GPU, you must use X. That's the way it is.
Outside of that, you can manipulate the on-screen framebuffer memory through the fbdev interface (/dev/fb0), but that's mere pixel pushing, without any GPU acceleration.
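A minimal sketch of that pixel pushing, in the spirit of the original goal (C; assumes a 32 bpp XRGB mode and that nothing else owns the display; error handling kept short):

    #include <fcntl.h>
    #include <linux/fb.h>
    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = open("/dev/fb0", O_RDWR);
        if (fd < 0) return 1;

        struct fb_var_screeninfo var;  /* resolution, bits per pixel */
        struct fb_fix_screeninfo fix;  /* bytes per scanline */
        ioctl(fd, FBIOGET_VSCREENINFO, &var);
        ioctl(fd, FBIOGET_FSCREENINFO, &fix);

        size_t len = (size_t)fix.line_length * var.yres;
        uint8_t *fb = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (fb == MAP_FAILED) return 1;

        /* Fill a 100x100 square at (50,50) with red, assuming 32 bpp XRGB. */
        for (unsigned y = 50; y < 150 && y < var.yres; y++)
            for (unsigned x = 50; x < 150 && x < var.xres; x++)
                *(uint32_t *)(fb + y * fix.line_length + x * 4) = 0x00FF0000;

        munmap(fb, len);
        close(fd);
        return 0;
    }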
Once upon a time we had svgalib (or was it called vgalib?), which did exactly what you are trying to do. I would recommend that you look at its source code. I don't know if it is still possible to find it anywhere, or if it would actually work with a modern kernel. Just whatever you do, be prepared to reboot often.
For whatever it's worth, for any future viewers, I have found a decent tutorial at http://betteros.org/tut/graphics1.php. It goes through Framebuffer, DRI/DRM, and X Windows at "the lowest level possible" (basically ioctls and file read/write).
I have gotten both the Framebuffer and DRI/DRM examples to work on both QEMU Debian (on a MacOS Host) and a Raspberry Pi, with a bit of modification for the latter.
I have hit a brick wall, and I wonder if someone here can help. My program opens an OpenGL surface for very minor rendering needs. It seems that on the MacBook Pro this causes the graphics driver to switch the hybrid setup from the low-performance Intel graphics to the high-performance AMD/ATI graphics.
This causes me problems, as there seems to be an issue with the AMD driver and putting the Mac to sleep, but it also drains the battery unnecessarily fast. I only need OpenGL to create a static 3D image on occasion; I do not require a fast frame rate!
Is there a way in a Cocoa app to prevent OpenGL switching a hybrid graphics card into performance mode?
The relevant documentation for this is QA1734, “Allowing OpenGL applications to utilize the integrated GPU”:
… On OS X 10.6 and earlier, you are not allowed to choose to run on the integrated GPU instead. …
On OS X 10.7 and later, there is a new attribute called NSSupportsAutomaticGraphicsSwitching. To allow your OpenGL application to utilize the integrated GPU, you must add in the Info.plist of your application this key with a Boolean value of true…
So you can only do this on Lion, and “only … on the dual-GPU MacBook Pros that were shipped Early 2011 and after.”
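Concretely, that is a single entry in your application's Info.plist; a minimal fragment (the surrounding plist structure is omitted):

    <key>NSSupportsAutomaticGraphicsSwitching</key>
    <true/>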
There are a couple of other important caveats:
Additionally, you must make sure that your application works correctly with multiple GPUs or else the system may continue forcing your application to use the discrete GPU. TN2229 Supporting Multiple GPUs on Mac OS X discusses in detail the required steps that you need to follow.
and:
Features that are available on the discrete GPU may not be available on the integrated GPU. You must check that features you desire to use exist on the GPU you are using. For a complete listing of supported features by GPU class, please see: OpenGL Capabilities Tables.
Ok, I know that online there are millions of answers to what OpenGL is, but I'm trying to figure out what it is in terms of a file on my computer. I've researched and found that OpenGL acts as a multi-platform translator for different computer graphics cards. So, then, is it a dll?
I was wondering, if it's a dll, then couldn't I download any version of the dll (preferably the latest), and then use it, knowing what it has?
EDIT: Ok, so if it's a windows dll, and I make an OpenGL game that uses a late version, what if it's not supported on someone else's computer with an earlier version? Am I allowed to carry the dll with my game so that it's supported on other windows computers? Or is the dll set up to communicate with the graphics card strictly on specific computers?
OpenGL is constantly being updated (whatever it is). How can this be done if all it's doing is communicating with a graphics card on a bunch of different computers that have graphics cards that are never going to be updated since they're built in?
There are two "parts" to OpenGL - the specification that's updated by the Khronos Group once every few months, and the driver that's written by your graphics card manufacturer specifically for your graphics card model.
The OpenGL specification essentially details how everything about the OpenGL API should work - what the expected behavior should be, when something is considered unexpected behavior, when to throw which errors, etc. The specification lets the driver writers know exactly what they need to do and lets application writers know what to expect from a driver. This is what OpenGL really "is" - the glue that holds applications and drivers together. You can read all the specifications for each version here.
Then there's drivers that implement the OpenGL API and are considered compliant to the specification. The driver does exactly what you'd expect it to do - copy data to and from the graphics card's memory, write data to graphics card registers, keep track of state, process vertices, compile shaders, instruct hundreds of stream processors to simultaneously transform vertices and fill pixels, etc. Without OpenGL, each graphics card model would have a separate, slightly faster API that would only work for that one graphics card because of the way it was structured. With OpenGL, the drivers are all written against the same API and an application's code will run on all graphics cards.
Compliance to the OpenGL specification doesn't change with driver updates. Most driver updates will either fix minor bugs or do some internal optimizing.
I know at one point there was a small bug with the ATI driver where you had to call glEnable(GL_TEXTURE_2D); before you could generate mipmaps the OpenGL 3 way (glGenerateMipmap()), despite GL_TEXTURE_2D being deprecated as a possible value for glEnable(). I'm not sure if it's fixed now, but it's certainly the type of edge case that can easily be overlooked by driver writers.
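For reference, the workaround amounted to one extra line before the mipmap call (C):

    glEnable(GL_TEXTURE_2D);          /* deprecated in core GL 3, but the buggy driver required it */
    glGenerateMipmap(GL_TEXTURE_2D);  /* the GL 3 way to build the mipmap chain */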
As for optimizations, there's a lot to optimize. Maybe there's another way to optimize shaders when they're being compiled, maybe there's a more efficient way to distribute work between the stream processors, I don't know.
OpenGL is a cross-platform API for graphics programming. In terms of compiled code, it will be available as an OS-specific library - e.g. a DLL (opengl32.dll) in Windows, or an SO in Linux.
You can get the SDK and binary redistributables from OpenGL.org
Depending on which language you're using, there may be OpenGL wrappers available. These are classes or API libraries designed to specifically work with your language. For example, there are .NET OpenGL wrappers available for C# / VB.NET developers. A quick Google search on the subject should give you some results.
The OpenGL API occasionally has new versions released, but these updates are backwards-compatible in nature. Moreover, new features are generally added as extensions, and it's possible to detect which extensions are present and only use those which are locally available and supported... so you can build your software to take advantage of new features when they're available but still be able to run when they aren't.
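Detecting an extension takes only a couple of GL calls. A minimal sketch (C, assuming a GL 3.0+ context and headers/loader that declare glGetStringi):

    #include <string.h>

    int has_extension(const char *name)
    {
        GLint n = 0;
        glGetIntegerv(GL_NUM_EXTENSIONS, &n);  /* how many extensions the driver reports */
        for (GLint i = 0; i < n; i++)
            if (strcmp((const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i), name) == 0)
                return 1;
        return 0;
    }

For example, has_extension("GL_ARB_debug_output") tells you whether you can rely on that extension on the current machine.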
The API has nothing to do with individual drivers -- drivers can be updated without changing the API, and so the fact that drivers are constantly updated does not matter for purposes of compatibility with your software. On that account, you can choose an API version to develop against, and as long as your target operating systems ships with a version of the OpenGL library compatible with that API, you don't need to worry about driver updates breaking your software's ability to be dynamically linked against the locally available libraries.