Frame buffer object support - OpenGL

Do most OpenGL 2.0 and 2.1 graphics cards that are still in use support framebuffer objects (through the GL_ARB_framebuffer_object or GL_EXT_framebuffer_object extensions)?

In my experience, they do.
Among NVIDIA GPUs, cards at least as far back as the GeForce FX 5xxx series (which support OpenGL 2.0) have FBO support, and I suspect even older cards do.
Among ATI GPUs old enough to support only OpenGL 2.0, I have seen cards such as the HD 2400 and the X1300, and they all support FBOs.
Among Intel GPUs, I think that it is mainly the HD Graphics families that have OpenGL 2.0 support at all, and all the HD Graphics GPUs I've seen have FBO support. I have also seen some other GPUs with 2.0 and FBO support, including some versions of the 965, and something called the "Eaglelake". I'm not sure why only some 965s support OpenGL 2.0, though. It could be a driver issue.
I have, on the other hand, not yet found any 2.0-compatible GPUs that do not support FBOs.
I hope this purely empirical answer helps somewhat.
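As a minimal sketch of the runtime check the question implies (my addition, assuming a current GL 2.x context and the legacy GL_EXTENSIONS string):

#include <string.h>
#include <GL/gl.h>

/* Returns non-zero if either framebuffer-object extension is advertised.
   Note: strstr can false-positive when one extension name is a prefix of
   another; tokenizing the space-separated list is more robust. */
int has_fbo_support(void)
{
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    if (!exts)
        return 0;
    return strstr(exts, "GL_ARB_framebuffer_object") != NULL ||
           strstr(exts, "GL_EXT_framebuffer_object") != NULL;
}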

I'd say yes. My Intel GMA 950's Windows 7 driver (at least) unofficially exposes OpenGL 2.0 features, and framebuffer objects are supported through the EXT_framebuffer_object extension.

Related

Support for Cg profiles in modern hardware

I have an in-house application that uses the now-deprecated NVIDIA SceniX scene graph and Cg shaders. It works fine, and since it is in-house we can choose what hardware to run it on.
The shaders currently use the vp40/fp40 profiles (though I can change them to use later profiles like GLSLV/GLSLF). I am trying to confirm whether the current crop of NVIDIA hardware still supports Cg shaders, i.e. if we purchase the latest OpenGL 4 GeForce or Quadro cards, will they still support the Cg profiles? I have asked on the NVIDIA forum but got no answer. Eventually we will have to upgrade to a new scene graph and GLSL, but I want to know what 'legacy' support there is for the Cg shaders.
Thanks
Yes, you're perfectly fine. In fact, the GLSL implementation in the NVIDIA drivers is actually an add-on to the Cg compiler. Even on latest-generation GPUs, the NVIDIA driver internally first translates GLSL to NV/ARB_program_… assembly (source code, in fact) and runs this through the assembler. It's unlikely NVIDIA is going to change that in the near future (although the introduction of SPIR-V may force their hand). And all the legacy OpenGL ARB/NV_program interfaces are still supported just fine as extensions (even up to the OpenGL 4 core profile).
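As a rough sketch (my addition, not part of the answer), you can ask the Cg runtime at startup which profiles the installed driver supports; cgGLIsProfileSupported and cgGLGetLatestProfile are part of the Cg GL runtime, and this assumes the Cg toolkit headers and a current GL context:

#include <stdio.h>
#include <Cg/cg.h>
#include <Cg/cgGL.h>

void report_cg_profiles(void)
{
    /* Directly check the assembly-level profiles the question mentions. */
    printf("vp40:  %s\n", cgGLIsProfileSupported(CG_PROFILE_VP40)  ? "yes" : "no");
    printf("fp40:  %s\n", cgGLIsProfileSupported(CG_PROFILE_FP40)  ? "yes" : "no");
    printf("glslv: %s\n", cgGLIsProfileSupported(CG_PROFILE_GLSLV) ? "yes" : "no");
    printf("glslf: %s\n", cgGLIsProfileSupported(CG_PROFILE_GLSLF) ? "yes" : "no");

    /* The best profile the driver offers for vertex programs. */
    CGprofile best_vp = cgGLGetLatestProfile(CG_GL_VERTEX);
    printf("latest vertex profile: %s\n", cgGetProfileString(best_vp));
}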

Is OpenGL processor independent?

In other words, is there any GPU that does not support OpenGL and instead supports other graphics rendering libraries like DirectX or OpenCL?
"GPU support of OpenGL" is not uniquely defined. It takes much more than hardware to make OpenGL work. Notably, OS driver infrastructure, and driver itself.
Therefore, it is possible to have a GPU that is capable of all OpenGL features, but have no OpenGL software implementation (either not exists, not installed etc.). Ex.: because of marketing reasons Microsoft does not support OpenGL on XBox. Same thing with Windows: often there is only basic OpenGL available with default Windows graphics drivers. It could be easily fixed by installing vendor driver, but most users don't bother.
And other way around, there are GPUs that are not capable of running some or all of the OpenGL features in hardware. Those features could be implemented in software. Ex.: First Android OS versions had software implementations of OpenGL ES in case phone didn't have dedicated GPU or if GPU was not fully capable of OpenGL ES.
Also, there are platforms that do not support OpenGL or DirectX and use their own APIs. Ex.: Sony use custom API for their Playstations.
In this day and age, no, you'll not find a GPU that doesn't support some version of OpenGL, with the possible exception of some super-specialised chips - but those won't support DirectX either.
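As an illustration (my addition), once a context exists you can check which implementation you actually got; on Windows without a vendor driver this typically reports Microsoft's "GDI Generic" software renderer:

#include <stdio.h>
#include <GL/gl.h>

void print_gl_implementation(void)
{
    printf("Vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("Renderer: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("Version:  %s\n", (const char *)glGetString(GL_VERSION));
}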

Image load store equivalent in OpenGL 3

My project would greatly benefit from arbitrary/atomic read and write operations in a texture from GLSL shaders. The image load store extension is what I need. The only problem is that my target platform does not support OpenGL 4.
Is there an extension for OGL 3 that achieves similar results? I mean, atomic read/write operations in a texture or shared buffer of some sort from fragment shaders.
Image load store and, especially, atomic operations are features that must be backed by specific hardware capabilities, very similar to those used in compute shaders. Only some GL3-class hardware can handle them, and only in a limited way.
Image load store has been in the core profile since 4.2, so if your hardware (and driver) is capable of OpenGL 4.2, then you don't need any extensions at all.
If your hardware (and driver) capabilities are lower than GL 4.2 but at least GL 3.0, you can probably use the ARB_shader_image_load_store extension.
Quote from the extension spec: "OpenGL 3.0 and GLSL 1.30 are required."
Obviously, not all 3.0 hardware (and drivers) will support this extension, so you must check for its support before using it (a sketch of such a check follows below).
I believe most NVIDIA GL 3.3 hardware supports it, but not AMD or Intel (that's my subjective observation ;) ).
If your hardware is lower than GL 4.2 and not capable of this extension, there is not much you can do. Just have an alternative code path with texture sampling and render-to-texture and no atomics (as I understand it, this is possible, but without the "great benefit" of atomics), or simply report an error to those users who have not yet upgraded their rigs.
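Here is a rough sketch of that capability check (my addition), assuming a GL 3.x context and a function loader such as GLEW or glad that has already resolved the 3.0 entry points:

#include <string.h>
/* GL declarations are assumed to come from your loader, e.g. GLEW or glad. */

int has_image_load_store(void)
{
    GLint n = 0, i;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);      /* core-profile style, since 3.0 */
    for (i = 0; i < n; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, (GLuint)i);
        if (ext && strcmp(ext, "GL_ARB_shader_image_load_store") == 0)
            return 1;                          /* image/atomic path available */
    }
    return 0;                                  /* fall back: render-to-texture, no atomics */
}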
Hope it helps.

Which version of OpenGL to use?

I currently run a machine that allows me to program in OpenGL 2.1. If I were to make a program, should I use the power of the current OpenGL versions like 3.x/4.x or use 2.1?
On a side question: How can I tell what's the highest version of OpenGL my computer can run?
On another side question: does only upgrading my video card allow me to program in upgraded versions of OpenGL?
OpenGL versions (for AMD and NVIDIA GPUs) roughly correspond to levels of hardware. 2.x OpenGL versions are for DX9-level hardware. 3.x represents DX10-level, and 4.x represents DX11-class hardware. So the version you pick restricts the hardware your code can run on.
In general, any AMD or NVIDIA GPU you can actually buy new from a store will be 3.x or better (more than likely, 4.x). Even AMD's integrated GPUs, whether on the motherboard or the CPU, are 3.x or better. I do some home development work on an HD 3300 motherboard GPU, and it works reasonably well.
Intel is a problem. Intel's OpenGL driver quality is pretty poor. Many old Intel machines can only support GL 1.4, which is pre-DX9 class functionality. They do support some higher-level extensions (shaders, but only vertex shaders, since they run them in software).
More recent Intel GPUs are a bit better, but their GL drivers are still rather buggy.
The above describes the situation for Windows. Linux is a bit fuzzier, because there are both drivers from NVIDIA/AMD and open-source, community-written drivers. The latter are generally not as good, but they are improving. These tend to target 3.x-class hardware.
The Mac OS X world is a bit different. Mac OS X Lion (10.7), recently released, adds support for OpenGL 3.2 (sadly not 3.3, for some reason). Apple rigidly controls how OpenGL works on their platform, but hopefully they will be updating GL versions more frequently than they have been recently.
So on Macs, you really have two choices: 2.1 or 3.2. Note that Lion's 3.2 support only exposes core OpenGL functionality. See this page for details on what that means.
There is no way to directly query the highest version your particular computer is capable of; there is simply the version you get when you create a context. In general, unless you specifically ask for a version (and even then, usually not), you will get the highest version your hardware and drivers can handle.
Oh, and yes: the OpenGL version is controlled by your video card's capabilities (and installed drivers).
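A minimal sketch of that (my addition): after creating the context you simply ask what you were given. The integer queries only exist on 3.0+ contexts; on older contexts you would parse the glGetString(GL_VERSION) string instead.

/* Assumes a current context and GL declarations from your loader/header. */
#include <stdio.h>

void print_context_version(void)
{
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);   /* defined since GL 3.0 */
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    printf("Got an OpenGL %d.%d context\n", major, minor);
}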
The following advice assumes that you're developing a serious application that you intend for others to use. This isn't for little demo apps or whatever.
In general, I would advise against explicitly restricting your code to 4.x. While 4.x adoption increases every day (there are two hardware generations from both NVIDIA and AMD with 4.x support, a third will likely be out from AMD by year's end, and AMD is starting to embed 4.x-capable GPUs in their CPUs), there is still a lot of 3.x hardware. 4.x doesn't buy you a whole lot, and you can easily add code paths to conditionally support 4.x features if they are available.
In order to use OpenGL 3.x you need a card that supports DirectX 10 and proper drivers that support it.
The advantage over DirectX is that you can also use OpenGL 3 and 4 on Windows XP; there is no need for Vista or 7.
Which version you should use depends on your audience. If your audience is gamers, go ahead and use 3. I wouldn't go 4-exclusive yet; DX11-class cards are still rare.
For a first look at how gamers use their computers and what hardware they have, the Steam hardware survey is a good source:
http://store.steampowered.com/hwsurvey
You can determine the version by calling:
glGetString(GL_VERSION);
A good OpenGL3 Tutorial:
http://arcsynthesis.org/gltut/
The OpenGL 3.3 SDK Reference:
http://www.opengl.org/sdk/docs/man3/
Hope this helps a bit :).
Lots of embedded Intel graphics are limited to 1.4 or 1.5.
Mac OS X is stuck on 2.1, I hear.
All Radeon and GeForce cards can do 3+ (may need a driver update).
And you can program with any version, but if your hardware doesn't support it, you'll end up testing under a software renderer (slow!).
On a side question: How can I tell what's the highest version of OpenGL my computer can run?
I'll answer the side question above.
I came across the tool below. It's really complete in itself and lets me see every OpenGL version that my system currently supports (from 1.0 up to whatever it actually supports), as well as the extensions available for my system to use - not only ARB extensions, but also NV, ATI, OES, etc.
http://www.realtech-vr.com/glview/download.html

What OpenGL version to choose for cross-platform desktop application

I'm working on a cross-platform desktop application with heavy 2-D graphics. I use the OpenGL 2.0 specification because I need vertex shaders. I like the 3.2+ core API because of its simplicity and power. I think that 3.2+ core could be the choice for the future. But I'm afraid that nowadays this functionality may not be available on some platforms (I mean old graphics cards and a lack (?) of modern Linux drivers). Maybe I should use an OpenGL ES 2.0-like API for easy future porting.
What's the state of affairs with 3.2+ core, cards, and Linux drivers?
Older Intel chips only support OpenGL 1.5. The later chips (since about two years ago) have 2.1, but that performs worse than 1.5. Sandy Bridge claims to support "OpenGL 3" without specifying whether it is capable of doing 3.3 (as Damon suggests), but Linux drivers only do 2.1 for now. All remotely recent Radeons and NVIDIA hardware with closed-source drivers support 3.3 (geometry shaders), and the 400-500 series support 4.1 (tessellation shaders).
Therefore, the versions you want to aim for are 1.5 (if you care about pre-Sandy-Bridge Intel crap), 2.1 (for pretty much all hardware), 3.3 (for decent hardware & closed-source drivers) or 4.1 (bleeding edge).
I have vertex and fragment shaders written with #version 120 and geometry shaders written in #version 330, to make fallback on old hardware easier.
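As an illustrative sketch of that fallback idea (my addition; the shader strings and helper names are hypothetical): pick the shader source at run time depending on the context version you ended up with.

/* Hypothetical example: two versions of the same trivial vertex shader. */
static const char *vs_120 =
    "#version 120\n"
    "attribute vec2 pos;\n"
    "void main() { gl_Position = vec4(pos, 0.0, 1.0); }\n";

static const char *vs_330 =
    "#version 330 core\n"
    "in vec2 pos;\n"
    "void main() { gl_Position = vec4(pos, 0.0, 1.0); }\n";

/* True if the current context is at least the requested version.
   GL_MAJOR/MINOR_VERSION only exist on 3.0+ contexts; on a 2.1 context the
   query fails and major stays 0, which correctly selects the fallback. */
static int gl_at_least(int want_major, int want_minor)
{
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    return major > want_major || (major == want_major && minor >= want_minor);
}

static const char *choose_vertex_source(void)
{
    return gl_at_least(3, 3) ? vs_330 : vs_120;
}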
You can stay on OpenGL ES 2.0. Even though ES means Embedded, it's a good approach because it removes all the fixed-function calls (glBegin, etc.): you are using a subset of OpenGL 2.x. So if you write your software with only OpenGL ES 2.0 in mind, it will be fast and work on the majority of hardware.
In reality, OpenGL ES 2.0 and desktop GL have some differences, but I don't think they will be anything you would use. If the extension GL_ARB_ES2_compatibility is supported, you have a "desktop" card that supports the complete embedded subset (four functions and some constants).
Now, the real question is: how many years of hardware do you want to support? There is still a lot of very old hardware with very poor GL support. It would be best to support the less-old hardware (OpenGL 2.0 is already old) :)
I would personally go for OpenGL 3.3, optionally with a fallback for 3.2 plus extensions (which is basically the same). It is the most convenient way of using OpenGL 3.x, and widely supported.
Targeting 3.1 or 3.0 is not really worth it any more, except if you really want to run on Sandy Bridge (which, for some obscure reason, only supports 3.0 although the hardware is very well capable of doing 3.3). Also, 3.1 and 3.0 have very considerable changes in shader code, which in my opinion are a maintenance nightmare if you want to support many versions (no such problem with 3.2 and 3.3).
All hardware that supports 3.2 can also support 3.3; the only hindrance may be that IHVs don't provide a recent driver or a user may be too lazy to update. Therefore you cannot assume that "3.3 works everywhere". The older drivers will usually expose the same functionality via ARB extensions anyway, though.
Mac OS X doesn't support GL 3 contexts at the moment. This summer may change the situation, but I would recommend sticking with GL 2 plus extensions nevertheless.
It depends on your target market's average machine, although to be honest, OpenGL 3.2+ is pretty ubiquitous these days.