I am very new to OpenGL, so perhaps this is obvious, but is it possible to determine whether a specific function is supported by a given video card? This came up because I am using an old computer with an ATI Radeon 9550 video card running Lubuntu 12.10, and discovered that it did not support the use of dFdx and dFdy. I was able to work around this problem, but now I am curious whether I can detect that a failure has occurred due to an issue like this and take action based on that, possibly falling back to alternative methods.
You can check the supported OpenGL version using glGetString(GL_VERSION), or glGetIntegerv(GL_MAJOR_VERSION, …) and glGetIntegerv(GL_MINOR_VERSION, …). From OpenGL 3 onwards, the OpenGL major version directly corresponds to hardware capabilities.
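For illustration, a minimal version check might look like this (assuming a context is current and the GL headers or your loader are included; GL_MAJOR_VERSION/GL_MINOR_VERSION only exist on 3.0+ contexts, so on older drivers you have to parse the version string instead):

    // Works on any context; the string starts with "major.minor", e.g. "2.1 ...".
    const char *versionString = (const char *)glGetString(GL_VERSION);

    // Only valid on OpenGL 3.0 and newer contexts:
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major);
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    printf("OpenGL %d.%d (%s)\n", major, minor, versionString);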
With older versions things are not as strict, and due to OpenGL's abstract device model you cannot really "query" hardware capabilities. You can check which extensions are supported, which is a good indicator of the capabilities, since many ARB extensions made it into core functionality. If the extension on which a certain core feature is based is not supported, that core feature is probably being emulated.
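A sketch of the extension check (GL_ARB_framebuffer_object is just a stand-in for whatever extension you care about; assumes the GL headers and <string.h> are included and a context is current):

    // Pre-3.0 style: one big space-separated string. Note that a plain strstr()
    // can give false positives when one extension name is a prefix of another.
    const char *exts = (const char *)glGetString(GL_EXTENSIONS);
    if (exts && strstr(exts, "GL_ARB_framebuffer_object") != NULL) {
        // extension advertised
    }

    // Core 3.0+ style: enumerate and compare exact names.
    GLint n = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &n);
    for (GLint i = 0; i < n; ++i) {
        const char *ext = (const char *)glGetStringi(GL_EXTENSIONS, i);
        if (strcmp(ext, "GL_ARB_framebuffer_object") == 0) {
            // extension advertised
        }
    }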
I know it's very vague and shaky, but that's how it is. The only other option is keeping around a database of GL_RENDERER strings and matching against that.
Generally speaking, you cannot. Not for the kind of thing you're talking about.
dFdx and dFdy have been part of GLSL since version 1.10 (the first version in core OpenGL 2.0). Support for them is not optional.
Your problem is that ATI/AMD wants to claim that their older card supports 2.x, but their hardware can't actually do all the things that 2.x requires. So they lie about it, claiming support while silently turning these functions into null operations.
OpenGL doesn't have a way to detect perfidy. The only thing you can do is keep around a list of cards and use the GL_VENDOR and GL_RENDERER strings to test against.
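If you go down that road, fetching the strings themselves is trivial; the matching logic and the actual list of problem cards are up to you (the renderer substring below is only an illustration):

    const char *vendor   = (const char *)glGetString(GL_VENDOR);    // e.g. "ATI Technologies Inc."
    const char *renderer = (const char *)glGetString(GL_RENDERER);  // e.g. "RADEON 9550"
    if (vendor && renderer && strstr(renderer, "RADEON 95") != NULL) {
        // known-problematic hardware: switch to the fallback code path
    }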
I am currently using an NVIDIA GeForce GTX 780 (from Gigabyte, if that matters - I don't know how much this could be affected by the onboard BIOS. I've also got two of them installed, but since Vulkan has no SLI support I only use one device at a time in my code; in the NVIDIA control center, however, SLI is activated. I use the official driver version 375.63). That GPU is fully capable of geometry shaders, of course.
I am using a geometry shader with the Vulkan API and it works all right and does everything I expect it to do. However, I get the following validation layer report: #[SC]: Shader requires VkPhysicalDeviceFeatures::geometryShader but is not enabled on the device.
Is this a bug? Does anyone have similar issues?
PS: http://vulkan.gpuinfo.org/displayreport.php?id=777#features says support for "Geometry Shader" is "true", as expected. I am using the Vulkan 1.0.30.0 SDK.
Vulkan features work differently from OpenGL extensions. In OpenGL, if an extension is supported, then it's always active. In Vulkan, the fact that a feature is available is not sufficient. When you create a VkDevice, you must explicitly ask for all features you intend to use.
If you didn't ask for the Geometry Shader feature, then you can't use GS's, even if the VkPhysicalDevice advertises support for it.
So the sequence of steps should be to check to see if a VkPhysicalDevice supports the features you want to use, then supply those features in VkDeviceCreateInfo::pEnabledFeatures when you call vkCreateDevice.
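A minimal sketch of that sequence (assuming physicalDevice and a filled-in queueCreateInfo already exist in your setup code):

    VkPhysicalDeviceFeatures supported = {};
    vkGetPhysicalDeviceFeatures(physicalDevice, &supported);

    VkPhysicalDeviceFeatures enabled = {};            // everything off by default
    if (supported.geometryShader == VK_TRUE)
        enabled.geometryShader = VK_TRUE;             // explicitly opt in

    VkDeviceCreateInfo createInfo = {};
    createInfo.sType                = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    createInfo.queueCreateInfoCount = 1;
    createInfo.pQueueCreateInfos    = &queueCreateInfo;
    createInfo.pEnabledFeatures     = &enabled;       // the part that was missing

    VkDevice device = VK_NULL_HANDLE;
    vkCreateDevice(physicalDevice, &createInfo, NULL, &device);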
Since Vulkan doesn't do validation checking on most of its inputs, the actual driver will likely assume you enabled the feature and just do what it normally would. But it is not required to do so; using a feature which has not been enabled is undefined behavior. So the validation layer is right to stop you.
We're working with OpenGL 4.3. However, we're afraid that we're relying on features that work with our graphics card but are not part of the minimum required specification for OpenGL 4.3.
Is there any possibility to emulate the minimum behaviour? For example, to make the graphics card reject any non-standard texture formats etc.? (Could also be in software, speed doesn't matter for testing compatibility...)
Update
Ideally, a minimal configuration in all aspects would be perfect, so that the application is guaranteed to work on all graphics cards supporting OpenGL 4.3. This emulation mode should:
Reject all features/extensions deprecated in 4.3
Reject all features/extensions newer than 4.3
Only support required formats, no optional formats (for example for textures and renderbuffers)
Only support the minimum required precision for calculations
Report only the minimum values for the limits that can be queried via GetInteger (for example, a MAX_TEXTURE_IMAGE_UNITS of 16; see the sketch below)
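For the queryable limits, the closest thing we have right now is clamping what we use to the spec minimum by hand, regardless of what the local driver reports - roughly like this:

    GLint reported = 0;
    glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &reported);   // e.g. 32 on our card
    const GLint specMinimum = 16;                           // guaranteed minimum in GL 4.3
    GLint usableTextureUnits = (reported < specMinimum) ? reported : specMinimum;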
There is a reference GLSL compiler that will solve half of this problem. But, as for the rest ... AMD, NV and Intel all have their own compliance issues and policies regarding how loosely they believe in following the specification.
I have seen each one of these vendors implicitly enable extensions from versions of OpenGL they should not have (without so much as a warning in the compiler log), and that is just the GLSL side of things. It is likely that Mesa can serve the role of greatest common factor for feature testing, but for OpenGL versions much older than 4.3. Mesa is effectively a minimalist implementation, and usually a few years behind the big hardware vendors.
Ideally GL's debug output extension, which is conveniently a core feature in GL 4.3, would issue API warnings if you use a feature your requested context version does not support. However, each vendor has different levels of support for this; AMD is generally the best. NVIDIA may even require you to enable "OpenGL Expert" mode before it spits out any genuinely useful information.
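Hooking up debug output is cheap enough that it is worth doing regardless; a minimal sketch for a 4.3 context (ideally one created with the debug context flag; GLAPIENTRY/APIENTRY comes from your GL headers and matters on Windows):

    static void GLAPIENTRY onGlDebugMessage(GLenum source, GLenum type, GLuint id,
                                            GLenum severity, GLsizei length,
                                            const GLchar *message, const void *userParam)
    {
        fprintf(stderr, "GL debug [type 0x%x, severity 0x%x]: %s\n", type, severity, message);
    }

    // Somewhere in your initialisation, after context creation:
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);   // deliver messages at the offending call
    glDebugMessageCallback(onGlDebugMessage, NULL);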
If all else fails, there is an XML file published by Khronos that you can parse to figure out which version and/or extension ANY OpenGL constant, function or enumerant is provided by. I wrote a simple project to do this with half a day's effort: https://github.com/Andon13/glvs. You could write some sort of validator yourself based on that principle.
There are a number of OpenGL Loading Libraries that will do what you need to some degree. GLEW just gives you everything and lets you pick and choose what you want. But there are others which generate more specific loaders.
GL3w for example generates only the core OpenGL functions, ignoring extensions entirely.
For a more comprehensive solution, there are glLoadGen or GLad. Both of these are generators for the headers and loading code. But both of them allow you to specify exactly which version of OpenGL you want and exactly which extensions you want. GLad even has a web application that can generate headers and download them to your computer.
In the interests of full disclosure, I wrote glLoadGen.
I'm a relative beginner with OpenGL (I'm not counting the ver. 1.1 NeHe tutorials I've done, because I'm trying to learn to do it the modern way with custom shaders), and I don't quite grasp how the different versions work, which ones require hardware changes, and which ones only require updates to the driver. Also, I've tried to find more details about how GLEW works (without diving into the code - yet), and it's still not clicking. While learning, I'm trying to find a balance between forward and backward compatibility in my code, especially since I'm working with older hardware, and it could become the basis of a game down the road. I'm trying to decide which version of GL and GLSL to code for.
My specific question is this: Why, when I use the GLEW (2.7) library (also using GLFW), does GLEW_VERSION_3_2 evaluate to true, even though the advertising for my GPU says it's only 2.0 compliant? Is it emulating higher-version functionality in software? Is it exposing hardware extensions in a way that makes it behave transparently like 3.2? Is it just a bug in GLEW?
It is an integrated Radeon HD 4250.
Then whatever advertisement you were looking at was wrong. All HD-4xxx class GPUs (whether integrated, mobile, or discrete cards) are perfectly capable of OpenGL 3.3. That ad was either extremely old, simply incorrect, or you read it wrong.
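If you want to see what the driver actually exposes (rather than what an advertisement or GLEW's macros claim), a quick check after creating the context might look like this (assuming <GL/glew.h> and <stdio.h> are included):

    glewExperimental = GL_TRUE;                // helps older GLEW versions on core contexts
    if (glewInit() != GLEW_OK)
        fprintf(stderr, "glewInit failed\n");

    if (GLEW_VERSION_3_2)                      // what GLEW derived from the running context
        printf("GLEW reports OpenGL 3.2 support\n");

    printf("GL_VERSION:  %s\n", glGetString(GL_VERSION));   // what the driver itself reports
    printf("GL_RENDERER: %s\n", glGetString(GL_RENDERER));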
I'm working on a cross-platform desktop application with heavy 2-D graphics. I use the OpenGL 2.0 specification because I need vertex shaders. I like the 3.2+ core API because of its simplicity and power. I think that 3.2+ core could be the choice for the future. But I'm afraid that nowadays this functionality may not be available on some platforms (I mean old graphics cards and a lack (?) of modern Linux drivers). Maybe I should use an OpenGL ES 2.0-like API for easy future porting.
What's the state of affairs with 3.2+ core, cards, and Linux drivers?
Older Intel chips only support OpenGL 1.5. The later chips (since about two years ago) have 2.1, but that performs worse than 1.5. Sandy Bridge claims to support "OpenGL 3" without specifying whether it is capable of doing 3.3 (as Damon suggests), but Linux drivers only do 2.1 for now. All remotely recent Radeons and Nvidia hardware with closed-source drivers support 3.3 (geometry shaders), and the 400-500 series support 4.1 (tessellation shaders).
Therefore, the versions you want to aim for are 1.5 (if you care about pre-Sandy-Bridge Intel crap), 2.1 (for pretty much all hardware), 3.3 (for decent hardware & closed-source drivers) or 4.1 (bleeding edge).
I have vertex and fragment shaders written with #version 120 and geometry shaders written in #version 330, to make fallback on old hardware easier.
You can stay on OpenGL ES 2.0. Even though ES stands for Embedded Systems, it's a good approach because it removes all the fixed-function calls (glBegin, etc.): you are using a subset of OpenGL 2.x. So if you write your software thinking only in terms of OpenGL ES 2.0, it will be fast and work on the majority of hardware.
In reality, OpenGL ES 2.0 and desktop GL have some differences, but I don't think they involve anything you would actually use. If the GL_ARB_ES2_compatibility extension is supported, you have a "desktop" card that supports the complete embedded subset (a few functions and some constants).
Now, the real question is how many years of hardware you want to support. There is still a lot of very old hardware with very poor GL support. It's best to target the less-old hardware (OpenGL 2.0 is already old) :)
I would personally go for OpenGL 3.3, optionally with a fallback for 3.2 plus extensions (which is basically the same). It is the most convenient way of using OpenGL 3.x, and widely supported.
Targeting 3.1 or 3.0 is not really worth it any more, except if you really want to run on Sandy Bridge (which, for some obscure reason, only supports 3.0 although the hardware is perfectly capable of 3.3). Also, 3.1 and 3.0 have considerable differences in shader code, which in my opinion are a maintenance nightmare if you want to support many versions (no such problem with 3.2 and 3.3).
Any hardware that supports 3.2 can also support 3.3; the only hindrance may be that the IHV doesn't provide a recent driver or a user is too lazy to update. Therefore you cannot assume that "3.3 works everywhere". Older drivers will usually expose the same functionality via ARB extensions anyway, though.
Mac OS X doesn't support GL-3 contexts at the moment. This summer may change the situation, but I would recommend sticking with GL-2 plus extensions nevertheless.
Depends on your target market's average machine. Although to be honest, OpenGL 3.2+ is pretty ubiquitous these days.
I am working on a gaming framework of sorts, and am a newcomer to OpenGL. Most books seem to not give a terribly clear answer to this question, and I want to develop on my desktop using OpenGL, but execute the code in an OpenGL ES 2.0 environment. My question is twofold then:
If I target my framework for OpenGL on the desktop, will it just run without modification in an OpenGL ES 2.0 environment?
If not, then is there a good emulator out there, PC or Mac; is there a script that I can run that will convert my OpenGL code into OpenGL ES code, or flag things that won't work?
It's been about three years since I was last doing any ES work, so I may be out of date or simply remembering some stuff incorrectly.
No, targeting desktop OpenGL does not equal targeting OpenGL ES, because ES is a subset. ES does not implement the immediate-mode functions (glBegin()/glEnd(), glVertex*(), ...). Vertex arrays are the main way of sending geometry into the pipeline.
Additionally, it depends on which profile you are targeting: at least in the Lite profile, ES does not need to implement the floating-point functions. Instead you get fixed-point functions; think 32-bit integers where the upper 16 bits hold the integer part and the lower 16 bits hold the fractional part.
In other words, even simple code might be unportable if it uses floats (you'd have to replace calls to gl*f() functions with calls to gl*x() functions).
See how you might solve this problem in Trolltech's example (specifically the qtwidget.cpp file; it's a Qt example, but still...). You'll see they make this call:
q_glClearColor(f2vt(0.1f), f2vt(0.1f), f2vt(0.2f), f2vt(1.0f));
This is meant to replace a call to glClearColorf(). Additionally, they use the macro f2vt() - meaning "float to vertex type" - which automagically converts the argument from float to the correct data type.
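For reference, a conversion like f2vt() typically just scales by 2^16; a minimal sketch (floatToFixed is an illustrative name, not Qt's actual macro):

    #include <GLES/gl.h>   // OpenGL ES 1.x header, provides GLfixed and glClearColorx

    // 16.16 fixed point: multiply by 2^16 and truncate.
    static inline GLfixed floatToFixed(float f)
    {
        return (GLfixed)(f * 65536.0f);
    }

    // Usage, mirroring the Qt call above:
    // glClearColorx(floatToFixed(0.1f), floatToFixed(0.1f),
    //               floatToFixed(0.2f), floatToFixed(1.0f));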
While I was developing some small demos for a company three years ago, I had success working with PowerVR's SDK. It's for Visual C++ under Windows; I haven't tried it under Linux (no need, since I was working on a company PC).
A small update to reflect my recent experiences with ES. (June 7th 2011)
Today's platforms probably don't use the Lite profile, so you probably don't have to worry about fixed-point decimals
When porting your desktop code for mobile (e.g. iOS), quite probably you'll have to do primarily these, and not much else:
replace glBegin()/glEnd() with vertex arrays (see the sketch after this list)
replace some calls to functions such as glClearColor() with calls such as glClearColorf()
rewrite your windowing and input system
if targeting OpenGL ES 2.0 to get shader functionality, you'll now have to completely replace the fixed-function pipeline's built-in behavior with shaders - at least basic ones that reimplement the fixed-function pipeline
Really important: unless your mobile system has memory to spare, you really want to look into using texture compression for your graphics chip; for example, on iOS devices you'll be uploading PVRTC-compressed data to the chip
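To illustrate the first point, this is roughly what the glBegin()/glEnd() to vertex-array conversion looks like for a single triangle (client-side arrays as in ES 1.x; under ES 2.0 you would use glVertexAttribPointer and a shader instead):

    // Immediate mode (desktop GL only):
    //     glBegin(GL_TRIANGLES);
    //     glVertex2f( 0.0f,  0.5f);
    //     glVertex2f(-0.5f, -0.5f);
    //     glVertex2f( 0.5f, -0.5f);
    //     glEnd();

    // Vertex-array equivalent, valid on OpenGL ES 1.x and desktop GL:
    static const GLfloat triangle[] = {
         0.0f,  0.5f,
        -0.5f, -0.5f,
         0.5f, -0.5f,
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, triangle);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableClientState(GL_VERTEX_ARRAY);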
In OpenGL ES 2.0, which is what new gadgets use, you also have to provide your own vertex and fragment shaders because the old fixed-function pipeline is gone. This means doing any shading calculations etc. yourself, which can be quite complex, but you can find existing implementations in GLSL tutorials.
Still, as GLES is a subset of desktop OpenGL, it is possible to run the same program on both platforms.
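To give a feel for the scale of that: the bare minimum replacement for the fixed-function pipeline is a pair of tiny shaders along these lines (shown as C string literals; a_position, u_mvp and u_color are illustrative names, not anything mandated by the API):

    // Minimal OpenGL ES 2.0 shaders: transform by one matrix, output one flat colour.
    static const char *vertexShaderSrc =
        "attribute vec4 a_position;\n"
        "uniform mat4 u_mvp;\n"
        "void main() {\n"
        "    gl_Position = u_mvp * a_position;\n"
        "}\n";

    static const char *fragmentShaderSrc =
        "precision mediump float;\n"
        "uniform vec4 u_color;\n"
        "void main() {\n"
        "    gl_FragColor = u_color;\n"
        "}\n";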
I know of two projects to provide GL translation between desktop and ES:
glshim: Substantial fixed pipeline to 1.x support, basic ES 2.x support.
Regal: Anything to ES 2.x.
From my understanding, OpenGL ES is a subset of OpenGL. I think if you refrain from using immediate-mode stuff like glBegin() and glEnd(), you should be all right. I haven't done much with OpenGL in the past couple of months, but when I was working with ES 1.0, as long as I didn't use glBegin/glEnd, all the code I had learned from standard OpenGL worked.
I know the iPhone simulator runs OpenGL ES code. I'm not sure about the Android one.
Here is a Windows emulator.
Option 3) You could use a library like Qt to handle your OpenGL code using their built-in wrapper functions. This gives you the option of using one code base (or minimally different code bases) for OpenGL and building for almost any platform you want. You wouldn't need to port it for each platform you wanted to support. Qt can even choose the OpenGL context based on the functions that you use.