Find out if hardware supports a specific OpenGL feature - opengl

How do I find out whether a specific OpenGL feature is supported in hardware or not?
In my case I want to know whether two-sided lighting is available in hardware.
An approach using OpenInventor would be just as welcome.

In general, you don't.
If something is part of core OpenGL, then it should be implemented by the OpenGL implementation. Whether this happens "in hardware" or not is not something that you can detect.
For extension-based features, you can obviously check for the presence of the extension. But otherwise, there's nothing you can do except know the hardware your code is running on.
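For the extension case, a minimal sketch of such a check (assuming a current GL 3.0+ context and a loader such as GLEW; the function name hasExtension is just for illustration):

```cpp
#include <GL/glew.h>
#include <cstring>

// Returns true if the implementation advertises the named extension.
// On GL 3.0+ the indexed query is the way to go; the classic
// glGetString(GL_EXTENSIONS) form was removed from core profiles.
bool hasExtension(const char* name)
{
    GLint count = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &count);
    for (GLint i = 0; i < count; ++i)
    {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (ext && std::strcmp(ext, name) == 0)
            return true;
    }
    return false;
}

// Usage: hasExtension("GL_ARB_texture_storage");
```

Note that this only tells you whether the feature is exposed, not whether it runs in hardware.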

Related

Are gluTess* functions deprecated?

I'm working on an OpenGL project, and I'm looking for a triangulation/tessellation functionality. I see a lot of references to the GLUtessellator and related gluTess* functions (e.g., here).
I'm also using GLFW, which repeats over and over again in its guides that:
GLU has been deprecated and should not be used in new code, but some legacy code requires it.
Does this include the tessellation capability? Would it be wise to look into a different library to create complex polygons in OpenGL?
GLU is a library. While it makes OpenGL calls, it is not actually part of OpenGL. It is not defined by the OpenGL specification. So that specification cannot "deprecate" or remove it.
However, GLU does most of its work through OpenGL functions that were removed from core OpenGL. GLU should not be used if you are trying to use core OpenGL stuff.
As a small addition: there are also more modern alternatives to the original gluTess* implementation that follow the original concept of simplicity and generality.
A notable alternative is Libtess2, a refactored version of the original libtess.
https://github.com/memononen/libtess2
It uses a different API, which loosely resembles the OpenGL vertex array API, and libtess2 seems to outperform the original GLU reference implementation by orders of magnitude. ;-)
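To give an idea of that API, here is a rough sketch of triangulating a single 2D contour with libtess2 (function names as in the project's tesselator.h; verify against the version you actually use):

```cpp
#include "tesselator.h"  // from libtess2
#include <vector>

// Triangulates one closed 2D contour (x0,y0, x1,y1, ...) and returns the
// resulting triangles as a flat list of x,y pairs, 3 vertices per triangle.
std::vector<float> triangulate(const std::vector<float>& contourXY)
{
    std::vector<float> triangles;
    TESStesselator* tess = tessNewTess(nullptr);  // nullptr = default allocator

    // One contour, 2 floats per vertex, tightly packed.
    tessAddContour(tess, 2, contourXY.data(), sizeof(float) * 2,
                   static_cast<int>(contourXY.size() / 2));

    // Odd winding rule, output plain triangles (polySize = 3), 2D vertices.
    if (tessTesselate(tess, TESS_WINDING_ODD, TESS_POLYGONS, 3, 2, nullptr))
    {
        const TESSreal*  verts = tessGetVertices(tess);
        const TESSindex* elems = tessGetElements(tess);
        const int nTriangles   = tessGetElementCount(tess);
        for (int i = 0; i < nTriangles * 3; ++i)
        {
            triangles.push_back(verts[elems[i] * 2 + 0]);
            triangles.push_back(verts[elems[i] * 2 + 1]);
        }
    }
    tessDeleteTess(tess);
    return triangles;
}
```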
The official tessellation stages introduced with OpenGL 4.0 require GPU hardware with explicit support for them. (This is most likely the case for all DirectX 11 and newer compatible hardware.) More information regarding the current OpenGL tessellation concept can be found here:
https://www.khronos.org/opengl/wiki/Tessellation
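Whether a given context actually exposes those stages can be checked at run time; a minimal sketch, assuming GLEW has been initialised against a current context:

```cpp
#include <GL/glew.h>

// Returns true if the context exposes GL 4.0 tessellation
// (either via the core version or via ARB_tessellation_shader).
bool hasHardwareTessellation()
{
    if (!GLEW_VERSION_4_0 && !GLEW_ARB_tessellation_shader)
        return false;
    GLint maxPatchVertices = 0;
    glGetIntegerv(GL_MAX_PATCH_VERTICES, &maxPatchVertices);
    return maxPatchVertices > 0;
}
```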
This newer algorithm is better when you need mesh quality, robustness, Delaunay conformance and other optimisations.
It generates the mesh and the outline in one pass for cases where gluTess needs several.
It supports many more modes, but is 100% compatible with the glu modes.
It is programmed and optimised for C++ x86/x64 with an installable COM interface.
It can also be used from C# without COM registration.
The same implementation is also available as a C# version, which is about half as fast (on the current .NET Framework 4.7.2).
It computes with a variant type that supports different number formats: float, double and rational for unlimited precision and robustness. In the rational mode the machine epsilon is effectively 0, so no rounding errors can occur. Absolute precision.
If you want to achieve similar quality with the gluTess algorithm, you need about three times as long, including complex corrections to remove T-junctions and optimise the mesh.
The code is free at:
https://github.com/c-ohle/CSG-Project

OpenGL Stencil: Availability of GL_REPLACE_VALUE_AMD

OpenGL Stenciling, separating ref from value written?
In the answer to this question, a vendor-specific extension, GL_REPLACE_VALUE_AMD, is able to do exactly what I'm struggling to do in OpenGL, but I'm worried it will limit which computers and platforms my program can run on, and I've had no luck researching where it would not be available.
My goal is for the program to run on any computer that supports OpenGL 2.0, without any functional differences between them. Should I compile a program that uses this extension, what computers/platforms in this set would no longer be able to run the program without problems, if any?
The fact that it's a vendor extension should be an immediate clue that there's a good chance that you'd be limiting yourself to that vendor's hardware. It's not a 100% guarantee; NV_texture_barrier has been implemented for years on pretty much anything that can run GL 3.3 or better.
Further research indicates that the publication date for that extension is from 2012. That suggests that the extension would likely be implemented by more recent, GL 4.x-capable hardware.
If you want more accurate information, there are databases of extension usage that give a clearer picture. From those, we see that the extension is only implemented on AMD hardware. While it is available on AMD's GL 3.x-class hardware, it is not available on any of AMD's 2.x-class hardware.
So if your goal is to support GL 2.0 (why not 2.1?) as a maximum, then you can't use that extension.
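If you do decide to use it, a runtime branch at least keeps the program working on hardware that lacks the extension. A rough sketch; the GL_REPLACE_VALUE_AMD token and the glStencilOpValueAMD entry point come from the AMD_stencil_operation_extended spec, and the GLEW_* flag assumes your loader exposes that extension:

```cpp
#include <GL/glew.h>

// Configure the stencil state so that passing fragments write 'writeValue'.
// With the AMD extension the test reference and the written value can differ;
// without it, GL_REPLACE forces them to be the same value.
void setupStencilWrite(GLint ref, GLuint writeValue)
{
    if (GLEW_AMD_stencil_operation_extended)
    {
        glStencilFunc(GL_ALWAYS, ref, 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE_VALUE_AMD);
        glStencilOpValueAMD(GL_FRONT_AND_BACK, writeValue);  // value written on pass
    }
    else
    {
        // Fallback for plain GL 2.0: the reference value doubles as the
        // written value, so test and write cannot be separated here.
        glStencilFunc(GL_ALWAYS, static_cast<GLint>(writeValue), 0xFF);
        glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
    }
}
```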

Strict nVidia OpenGL support (OpenGL 3.2)

While AMD follows the OpenGL specification very strictly, nVidia often works even when the specification is not followed. One example is that nVidia accepts element indices (used in glDrawElements) from CPU memory, whereas AMD only accepts element indices from an element array buffer.
My question is: Is there a way to enforce strict OpenGL behaviour using a nVidia driver? Currently I'm interested in a solution for a Windows/OpenGL 3.2/FreeGlut/GLEW setup.
Edit: If it is not possible to enforce strict behaviour on the driver itself - is there some OpenGL proxy that guarantees strict behaviour (such as GLIntercept)?
No vendor enforces the specification strictly. Be it AMD, nVidia, Intel, PowerVR, ... they all have their idiosyncrasies and you have to learn to live with them, sadly. That is one of the annoying things about having each vendor implement their own GLSL compiler, as opposed to Microsoft implementing the one and only HLSL compiler in D3D.
The ANGLE project tries to mitigate this to a certain extent by providing a single shader validator shared across many of the major web browsers, but it is an uphill battle and this only applies to WebGL for the most part. You will always have implementation differences when every vendor implements the entire API themselves.
Now that Khronos group has seriously taken on the task of establishing a set of conformance tests for desktop OpenGL like they have for WebGL / OpenGL ES, things might start to get a little bit better. But forcing a driver to operate in a strict conformance mode is not really a standard thing - there may be #pragmas and such that hint the compiler to behave more strictly, but these are all vendor specific.
By the way, I realize this question has nothing to do with GLSL per se, but it was the best example I could give.
Unfortunately, the only way you can be sure that your OpenGL code will work on your target hardware is to test it. In theory simply writing standard compliant code should work everywhere, but sadly this isn't always the case.
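As a concrete example of writing the standard-compliant version of the glDrawElements case from the question: keep the indices in a buffer object rather than in client memory. A minimal sketch, assuming a GLEW-style loader and an existing vertex setup:

```cpp
#include <GL/glew.h>

// Upload indices into a GL_ELEMENT_ARRAY_BUFFER once...
GLuint uploadIndices(const GLushort* indices, GLsizei count)
{
    GLuint ibo = 0;
    glGenBuffers(1, &ibo);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, count * sizeof(GLushort),
                 indices, GL_STATIC_DRAW);
    return ibo;
}

// ...and draw with a byte offset into that buffer, not a CPU pointer.
// Passing a client-memory pointer here only happens to work on some drivers
// when a core profile is current.
void drawIndexed(GLuint ibo, GLsizei count)
{
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glDrawElements(GL_TRIANGLES, count, GL_UNSIGNED_SHORT, nullptr);
}
```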

Internal workings of OpenGL

How does OpenGL work, internally?
We will use OpenGL for our 2D game project, and think that it is important for us to first find out more about how OpenGL actually works before diving right into it.
What we need isn't a getting-started tutorial, but rather basic information on how OpenGL internally handles textures and draw calls, and how it interacts with the graphics card.
We have already searched for a while yet couldn't find anything suitable.
OpenGL is just an interface. How it works depends on the implementation, that is, the drivers and the hardware. For example: if the hardware doesn't support some feature, then the implementation is free to implement it on the client side (CPU) rather than on the GPU. Moreover, there are software-only implementations.
In general you can think of it as you sending commands to the graphics card that are buffered somewhere and executed with some ordering constraints on the graphics card.
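To see which implementation you actually got, you can ask it to identify itself; a minimal sketch (requires a current context):

```cpp
#include <GL/glew.h>
#include <cstdio>

static const char* glStr(GLenum name)
{
    return reinterpret_cast<const char*>(glGetString(name));
}

// Prints the vendor, renderer and version strings - in other words,
// which implementation is answering your OpenGL calls.
void printImplementationInfo()
{
    std::printf("Vendor:   %s\n", glStr(GL_VENDOR));
    std::printf("Renderer: %s\n", glStr(GL_RENDERER));
    std::printf("Version:  %s\n", glStr(GL_VERSION));
    std::printf("GLSL:     %s\n", glStr(GL_SHADING_LANGUAGE_VERSION));
}
```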
Note: your question is too general.
You might be interested in Mesa. It is an open-source OpenGL implementation. Most implementations are trade secrets, so you will never know how ATI/Nvidia implemented anything except what you can infer from the results of interacting with their implementations. You might find Intel's drivers informative as they are open source as well.
If by "internally" you mean the work that OpenGL does with what you draw, you would be interested in the rendering pipeline.

OpenGL: What's the deal with deprecation?

OpenGL 3.0 and 3.1 have deprecated quite a few features I find essential. In particular, the use of fixed-function shading.
Can anyone explain what's really the deal with that?
Why do they feel the need to deprecate such a useful feature that obviously everybody uses and that no sane hardware company is going to remove support for?
As you said, no hardware company will remove support for fixed-function shaders, because there are so many existing applications that use them. What they don't want to do, though, is figure out how to specify the interactions between FF shaders and every future extension they add. Those interactions are very complicated (partly because FF shaders are so complicated), which leads to bugs and inconsistent implementations between vendors -- both of which are bad for developers and end users.
So they're drawing a line: if you want to use FF shaders, you don't get any of the new functionality. If you want new functionality, you can't use FF shaders. This is very similar to what Microsoft did in D3D10: it added a whole bunch of new functionality, but at the same time completely removed fixed-function shaders. The belief is that the set of developers who need the new non-shader functionality but who don't also need programmable shaders is very small.
It should be clarified that a feature that is marked "deprecated" is not actually removed. For example, an OpenGL 3.0 context has all of the features - nothing is gone. Further, some vendors will ship drivers that can create 3.1 and 3.2 contexts using a compatibility profile which will also enable the deprecated features. So, look closely at what vendor hardware you are going to support and ask about the ARB compatibility mode for old features. (There is also the "core" profile as of 3.2, which allows vendors to create a more lean and mean driver if they wish to make such a thing)
Note that no current card really has a fixed-function hardware section any more - they only run shaders. When you ask for FF behavior, the GL runtime is authoring shaders on your behalf.
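To make that concrete, the GLSL a driver effectively generates for basic fixed-function behavior looks roughly like the following (a simplified sketch, not actual driver output; it covers only the transform, the vertex colour and one texture unit):

```cpp
// Legacy-style GLSL, kept here as C++ string constants.
static const char* kFixedFunctionLikeVS = R"(
#version 110
void main()
{
    // gl_ModelViewProjectionMatrix, gl_Vertex, etc. are the legacy built-ins
    // that the compatibility profile still exposes.
    gl_Position    = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor  = gl_Color;
    gl_TexCoord[0] = gl_MultiTexCoord0;
}
)";

static const char* kFixedFunctionLikeFS = R"(
#version 110
uniform sampler2D tex0;
void main()
{
    gl_FragColor = texture2D(tex0, gl_TexCoord[0].st) * gl_Color;
}
)";
```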
Why do they feel the need to deprecate such a useful feature that obviously everybody uses and that no sane hardware company is going to remove support for?
I suppose then Apple must be insane, because MacOSX 10.7 supports only 3.2 core. No compatibility specification support, no ARB_compatibility extension, nothing. You can either create a 2.1 context or a 3.2 core context.
However, if you want reasons:
For the sake of completeness: what Jesse Hall said. The ARB no longer has to consider the interaction between fixed function and new features. Integer math, array textures, and various other features are defined to not be usable with the fixed function pipeline. OpenGL has really improved over the last 3 years since GL 3.0 came out; the pace of the ARB's changes is quite substantial. Would that have been possible if they had to find a way to make all of those features interact with fixed function? And if they didn't have fixed function interactions, would you not then be complaining how you can't access new features from your old code? Which leads nicely into:
It serves as a strong indication of what one ought to be using. Even if the compatibility context is always available, you can look at core OpenGL to see how one ought to be approaching problem solving.
It makes the eventual desktop GL and GL ES unification much more reasonable. ES 2.0 threw out all of the old stuff and just adopted what you might think of as core GL 2.1. The ultimate goal will be to only have one OpenGL. To do that, you have to be able to rid the desktop GL of all of the cruft.
Fixed function shaders are quite easily replaced with standard GLSL shaders so it's difficult to see why logically they shouldn't be deprecated.
I'm less certain than you that they won't be dropped from much hardware in the foreseeable future, since OpenGL ES 2.0 doesn't support the FF pipeline (and so isn't backwards compatible with OpenGL ES 1.x). It seems to me that much of the momentum behind OpenGL these days comes from the widespread adoption of OpenGL ES on mobile platforms, and with FF functionality gone from there, there will be considerable pressure to move away from its use.
Indeed, I'd expect the leaner OpenGL ES implementations to replace standard OpenGL quite widely over the next few years, and FF functionality may disappear more because most hardware will implement OpenGL ES rather than because it is removed from hardware implementing the full OpenGL.
OpenGL allows for both a 'core' profile and a 'compatibility' profile, so on most systems you won't lose access to deprecated or removed functions.
But if you want to ensure compatibility, it is best to stick to the core stuff. You are not guaranteed a compatibility profile (even though most hardware has one; at the moment it is more likely that you will encounter an out-of-date OpenGL than a core-only one). Also, OpenGL ES is now close to a subset of OpenGL: it is possible to write an OpenGL ES 2.x/3.x program and have it run on OpenGL 4.3 with almost no changes.
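For completeness, explicitly requesting a core profile is only a few hints in GLFW (mentioned earlier in this thread); a minimal sketch, to be called after glfwInit():

```cpp
#include <GLFW/glfw3.h>

// Ask for a 3.2 core-profile context so that deprecated functionality is
// simply unavailable and portability problems surface early.
GLFWwindow* createCoreProfileWindow()
{
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GLFW_TRUE);  // needed on macOS
    return glfwCreateWindow(800, 600, "core profile", nullptr, nullptr);
}
```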
Game consoles like the PlayStation and the Nintendo ones shipped with their own graphics libraries rather than using OpenGL.
They were based on OpenGL but were stripped down in a similar way to ES (I don't think ES 2.0 was out then). Those systems need their own graphics drivers and libraries, and asking a hardware vendor to write what is basically a whole load of legacy wrapper libraries is a bit much (all the fixed-function stuff would just end up being implemented in shaders at some stage, and glBegin/glEnd would likely just be turned into a VBO automatically anyway).
I think it has also been important to ensure that developers are made aware of the current way they should be programming. For decades people have been taught the 'wrong' way to do things by default and vertex buffer objects have been taught as an extra.