I want to know if there is a way (ideally cross-platform, but if not then POSIX-compliant, or at least something that works on Linux) to query the highest OpenGL version supported on the current system.
I would like to be able to instantiate GLFW windows to be the highest supported version, rather than blindly trying different versions until one allows for valid context initialization.
EDIT:
Assume I am not creating a context. Imagine I want to replicate what glxinfo does. In other words, I want to be able to query the installed OpenGL version without EVER creating a context.
When you ask for version X.Y, you are not asking for version X.Y; you're asking for at least version X.Y. Implementations are permitted to give you the highest version which is backwards compatible with X.Y.
So if you ask for 3.3 core profile, you may well get 4.6 core profile. There's no point in the implementation returning a lower version.
So just write your code against a minimum OpenGL version, then ask for that. You'll get whatever the implementation gives you, and you can query what you got later.
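With GLFW, for example, that minimum-version pattern looks roughly like the sketch below (it assumes GLFW 3 and a driver that can give at least a 3.3 core profile; on macOS you would also set GLFW_OPENGL_FORWARD_COMPAT):

#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    /* Ask for *at least* 3.3 core profile; the driver is free to hand back 4.6. */
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow *win = glfwCreateWindow(640, 480, "version probe", NULL, NULL);
    if (!win) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(win);

    /* Query what the implementation actually gave us. */
    printf("GL_VERSION: %s\n", (const char *)glGetString(GL_VERSION));

    glfwDestroyWindow(win);
    glfwTerminate();
    return 0;
}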
Imagine I want to replicate what glxinfo does.
glxinfo does what it does by creating a context. Just look at its source code.
However, the GLX_MESA_query_renderer extension does allow asking about the context that would be created for a particular display/screen/renderer combination. Of course, that only works through MESA; drivers that don't go through MESA's system will not be visible through this.
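If you are on Mesa, a rough sketch of querying that extension without creating a context looks like this (it assumes GL/glxext.h headers new enough to declare the MESA tokens; link with -lGL -lX11; everything here is specific to GLX_MESA_query_renderer):

#include <GL/glx.h>
#include <GL/glxext.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return 1;
    int screen = DefaultScreen(dpy);

    const char *exts = glXQueryExtensionsString(dpy, screen);
    if (!exts || !strstr(exts, "GLX_MESA_query_renderer")) {
        fprintf(stderr, "GLX_MESA_query_renderer not available (non-Mesa driver?)\n");
        return 1;
    }

    PFNGLXQUERYRENDERERINTEGERMESAPROC queryRenderer =
        (PFNGLXQUERYRENDERERINTEGERMESAPROC)
            glXGetProcAddressARB((const GLubyte *)"glXQueryRendererIntegerMESA");

    unsigned int version[2] = {0, 0}; /* major, minor of the highest core profile */
    if (queryRenderer &&
        queryRenderer(dpy, screen, 0 /* first renderer */,
                      GLX_RENDERER_OPENGL_CORE_PROFILE_VERSION_MESA, version))
        printf("Highest core profile: %u.%u\n", version[0], version[1]);

    XCloseDisplay(dpy);
    return 0;
}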
Have you tried to create an OpenGL context without querying for a specific version?
When I use the native functions (glXCreateContext() or wglCreateContext()) to create a new OpenGL context, they always create a context with the highest OpenGL version supported by my graphics card (4.6).
I don't know if GLFW has the same behavior...
ANSWER FOR QUESTION EDIT:
It's impossible to query any OpenGL information WITHOUT creating a context, since you cannot call any glXXX() function without one. What you can do instead is create a dummy context (never shown to the user, it just sits somewhere in memory), query all the OpenGL information you want, and delete it when you are done. Don't worry about doing this; many programs, libraries and game engines do exactly that (including my own).
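For illustration, a minimal sketch of that dummy-context trick with GLFW (the window is created invisible, used only to read the version strings, then destroyed):

#include <GLFW/glfw3.h>
#include <stdio.h>

int main(void)
{
    if (!glfwInit())
        return 1;

    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);   /* the user never sees this window */
    GLFWwindow *dummy = glfwCreateWindow(1, 1, "", NULL, NULL);
    if (!dummy) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(dummy);

    printf("Vendor:   %s\n", (const char *)glGetString(GL_VENDOR));
    printf("Renderer: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("Version:  %s\n", (const char *)glGetString(GL_VERSION));

    glfwDestroyWindow(dummy);                   /* done: throw the dummy context away */
    glfwTerminate();
    return 0;
}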
Related
Not sure if this question has already been asked, or if I just didn't know the right wording to search for, but basically I want to be able to use a specific version of my application and disable all features that have been added in newer versions.
For example, I currently have version 1.4 of an application I created, but I only want to use features available in v1.2 and make it so that I cannot use anything that has been added since, e.g. in 1.3.
An example of this is GLSL: you declare a specific version when writing a shader, and you can't use any features from a higher version than the one you declared.
Is this possible, and if so how to approach it?
I have been thinking about a project for quite a while that would require extracting the color and depth buffers from OpenGL applications, in particular games. It has absolutely nothing to do with modding in the sense of manipulating the game itself, nor is it intended for "cheating" purposes; it is purely for data gathering.
So now I'm trying to figure out possible ways to accomplish this. Being able to do it with Direct3D under Windows would of course open up even more applications, but as I'm pretty familiar with OpenGL under Linux, I would start that way.
As there are many modding/cheating applications that manipulate the color/depth buffer of video games in various ways (e.g. wallhacks in first-person shooters), it seems that this definitely has to be possible somehow.
Now the question is, what would be the best way to accomplish this? Reading out the GPU memory directly would most probably not work according to this thread as memory mapping in OpenGL is completely dependent on vendor implementation and there is no trivial way to get the VRAM addresses of the corresponding data.
Alternative approaches I can think of might now be categorized as extravenous or intravenous ones:
Extravenous: extract the OpenGL context from a process and access buffers, shaders, etc. from a third application, without really manipulating the target application's binary directly.
Intravenous: manipulate the target application's binary/code in such a way that it writes the corresponding buffers/data either to a specific place in memory or directly saves them somewhere.
The latter approach should definitely work, but involves a larger effort and would need to be done per application. Hence the first would definitely be preferred, but is the described way even applicable at all? Is it even possible to access OpenGL resources from a different process when you have that process's OpenGL context handle? Does anyone have experience with this?
Update:
After some research I found out that what I was looking for is pretty common and called "Hooking" or "Interception" in general.
For OpenGL and Direct3D there are many different libraries and programs to do this:
glintercept: OpenGL (Windows)
Indicium-Supra: DirectX
apitrace: OpenGL + Direct3D (Windows, macOS, Linux)
D3D9Interceptor: Direct3D
Nvidia Nsight: Direct3D + OpenGL + Vulkan (Windows, Linux)
and many others.
The de facto standard way of doing this would be to hook yourself into the process and redirect the graphics API calls the application (game) makes to your own code. Your hook can then record whatever data it needs and perform whatever action it wants before passing on the call to the actual API implementation. There are many ways of doing this with different pros and cons, ranging from building a fake library with the same interface and tricking the game into loading that one instead of the actual graphics library (DLL injection) to modifying the machine code in the loaded process image to make function calls jump into your code. Which approaches are applicable and best for your case will highly depend on many factors such as target platform, target applications, the API you want to hook into, and so on.
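To make the "fake library" flavor concrete, an LD_PRELOAD shim on Linux might look roughly like the sketch below (a sketch only, not a drop-in tool; games that resolve glXSwapBuffers via glXGetProcAddress will bypass it, and error handling is omitted):

/* Build: gcc -shared -fPIC hook.c -o libhook.so -ldl
   Run:   LD_PRELOAD=./libhook.so ./the_game */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <stdlib.h>
#include <GL/glx.h>

void glXSwapBuffers(Display *dpy, GLXDrawable drawable)
{
    static void (*real_swap)(Display *, GLXDrawable) = NULL;
    if (!real_swap)
        real_swap = (void (*)(Display *, GLXDrawable))dlsym(RTLD_NEXT, "glXSwapBuffers");

    /* The game has finished drawing the frame at this point: read back what we need. */
    GLint vp[4];
    glGetIntegerv(GL_VIEWPORT, vp);
    unsigned char *pixels = malloc((size_t)vp[2] * vp[3] * 4);
    if (pixels) {
        glReadPixels(0, 0, vp[2], vp[3], GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        /* ... store or stream the frame somewhere ... */
        free(pixels);
    }

    real_swap(dpy, drawable);   /* hand the call on to the real GLX implementation */
}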
The main issue with any such approach, however, is that that's exactly how many cheats work. Thus, many video games will come with built-in protection to prevent precisely this kind of stuff from working. With many online games, you might even risk having your account suspended for suspected cheating by trying to do stuff like this. But that's just something to be aware of. In my experience, it will still work with many games, particularly single-player games.
"Extracting the OpenGL context from another process" will not work, at least not on any proper operating system. The whole point of having the process abstraction in the first place is to isolate applications from each other…
Since I don't have enough reputation to ask this in a comment, let me ask you here what exactly your goal is. Do you want to get this information once for a single frame or do you want to record this for many frames over a period of time? Do you need an automated solution in form of a custom application? If neither of those, you might be able to just use a graphics debugging tool like Nsight Graphics to capture and export the frame you want…
OpenGL is great for creating UIs (especially in games) and it is highly portable. Is it unusual for an ordinary (not graphically intensive) application to use OpenGL for its UI? If so, why? Is it about performance or ease of use?
For example, an Apple developer can use the ready-made buttons, sliders, etc. provided by Apple; they could also create the UI using OpenGL. The second method makes the code more flexible and portable. Why don't people do this?
Does using OpenGL make sense if portability is our goal?
There's a lot more to UI than graphics.
As one example: fonts. Rendering Chinese, Japanese, Korean, Arabic or Thai in the right directions with the right strokes etc. is a TON of work. There are whole teams dedicated to that topic at Microsoft, Google, Apple, Adobe and other companies. If you go straight to GL you're going to have to solve that problem yourself.
Another example, native controls. iOS users expect certain controls to work a certain way. Android users expect something different. For a game that's usually not a problem, games usually have a very unique UI. Apps on the other hand are generally expected to stick to the conventions of their target platform. Generally users get upset when the controls don't match their native platform. Using GL for your UI means you won't get the native controls.
Similarly text editing is a very platform specific feature. Is it drag to select, right click to select, hold to select? What keys go which way, how do they work? Is it Ctrl-V or ⌘-V. All of that is platform dependent as well. You can use the native text editing controls and have the problem solved or you can use GL and have to reproduce not only all the code to edit text but try to make it work the right way on each platform in each configuration. Does your GL text editor handle Japanese keyboards? German Keyboards? Does it handle Input Method Editors for CJK and others?
So it's more a matter of the right tool for the job. There are whole UI platforms written on top of GL; before Metal, OSX was probably one of them (or still is?). But if you skip that and build your UI directly on GL, you'll end up having to implement all those in-between pieces yourself.
Certainly GL might be the right way to go for certain non-game apps. Paper comes to mind as an app that could be 98% GL and therefore gain portability. On the other hand, Evernote is probably at the far other end. It needs to handle every language, different fonts, input for users with disabilities, etc. All of that is built into the OS, not GL.
Yes, what you suggest is possible. Just have a look at Blender. Blender implements its own UI using OpenGL, for the exact portability reasons you gave.
However there's a lot more to user interfaces than just getting things drawn to the screen. Event management, input handling, interoperability with other applications. All that depends on functions that are not covered by OpenGL.
But there are cross platform application framework libraries, like Qt. And the whole drawing stuff makes only a small portion of what those frameworks do.
One problem you run into when using OpenGL for drawing the GUI, though, is that there's huge variation in the OpenGL profiles supported by the systems out there. It can range from a mere OpenGL-1.1 software fallback on an old Windows XP machine, over OpenGL-1.4 on a Windows Vista machine with only the default drivers installed by Windows setup, up to OpenGL-4.5 on the same machine once the user installs the proper drivers. And the way you use OpenGL-4.5 is largely incompatible with OpenGL-1.4.
For a UI toolkit written with an OpenGL backend this means you must implement at least three codepaths: an OpenGL-1.1 variant that uses the fixed-function pipeline and client-side vertex arrays, an OpenGL-3 compatibility profile path, and an OpenGL-4 core profile path.
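As a sketch of what that means in practice, the toolkit might pick its codepath at runtime like this (the ui_backend values stand in for three hypothetical drawing backends; it assumes a context is already current):

#include <GL/gl.h>

typedef enum { UI_GL1_FIXED, UI_GL3_COMPAT, UI_GL4_CORE } ui_backend;

/* Decide which of the toolkit's drawing codepaths to use for this context. */
static ui_backend pick_ui_backend(void)
{
    const char *ver = (const char *)glGetString(GL_VERSION); /* e.g. "1.4", "4.6.0 NVIDIA ..." */
    int major = (ver && ver[0] >= '1' && ver[0] <= '9') ? ver[0] - '0' : 1;

    if (major >= 4) return UI_GL4_CORE;      /* shaders, VAOs, no fixed function */
    if (major >= 3) return UI_GL3_COMPAT;    /* compatibility profile path */
    return UI_GL1_FIXED;                     /* fixed function + client-side vertex arrays */
}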
That is much more work than just using the OS-specific methods, which you have to use anyway to create the window and receive user input.
The title pretty much says it. I was thinking about making a simple video editor, and I was unsure about the "logistics" of various effects and filters and such. Let's say I want to make it possible for an external program to apply some effect to the image. Does it have to be an executed program necessarily, or can it simply be a set of OpenGL instructions that the video editor can parse, and essentially "pass along" to OpenGL. Technically, that would be a program, but it seems more elegant and standardized than making a full fledged secondary program just to apply an effect. Maybe there's a better way?
Edit: Thanks for the answers guys. Here's a follow up though: How do other video editors implement this? The reason I ask is because the answers seem to be rather negative on the above point, so I was wondering how it is done by professional applications.
Does it have to be an executed program necessarily, or can it simply be a set of OpenGL instructions that the video editor can parse, and essentially "pass along" to OpenGL.
There are several ways to approach the problem ("make video editor with ability to create custom effects").
Make your editor support plugins, and provide an API for making new video effect plugins. Macromedia Director worked this way. A new effect has to be implemented as a plugin library (.dll/.so, depending on your platform).
Embed a scripting language with OpenGL bindings into your application for making video effects, and provide basic functions for interacting with the internal state of your application. This allows faster development of effects, but performance will suffer greatly for certain operations. Blender (the 3D editor) works this way, and Maya provides a similar framework (I think). Basically, this is #1 implemented as a scripting language.
Make your editor load GLSL shaders (a minimal compile sketch follows this answer). This lets you put together some effects fairly quickly, but beyond that it is not a good idea: while GLSL has a decent amount of "power" (functions/maths), GLSL alone won't let you make anything you want, because it can't interact with your application at all (pure GLSL can't create textures, framebuffers, and so on, which limits your ability to make something more interesting). Most likely you'll only be able to make some kind of filter, nothing more. #1 and #2 give the "power" to the end user.
Implement a node-based effect editor. I.e. there are several types of nodes the user can drag around that represent certain operations; they have inputs/outputs and the user can connect them. Blender 3D and the UDK (Unreal Development Kit) have such a feature. I think .kkrieger (site is down, google it) used a similar technique to make a 96 kB first-person shooter. This can be quite powerful, but programmers are very likely to hate dragging graphical entities around with the mouse, unless you also provide a way to build such a node graph via a scripting language (AviSynth had something similar, but not quite like that). This can be as powerful as #1 or #2, but will cost you extra development time.
Aside from that, there is no "language-independent" way to make an effect. Either you use an existing language to let the effect plugin interact with your application, or you make your own "language" to describe the effect. The only truly language-independent way to deal with this is to hire a programmer and tell him what kind of effect you want. But then again, that requires natural language.
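To illustrate option 3, the editor side of "load GLSL shaders" might compile a user-supplied fragment filter roughly like this (a sketch; it assumes a GL 2.0+ context and an extension loader such as GLEW providing the shader entry points):

#include <GL/glew.h>
#include <stdio.h>

/* Compile a user-supplied GLSL fragment shader to be used as a per-frame filter. */
static GLuint compile_filter(const char *fragment_src)
{
    GLuint sh = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(sh, 1, &fragment_src, NULL);
    glCompileShader(sh);

    GLint ok = GL_FALSE;
    glGetShaderiv(sh, GL_COMPILE_STATUS, &ok);
    if (!ok) {                                  /* report the user's syntax errors */
        char log[1024];
        glGetShaderInfoLog(sh, sizeof log, NULL, log);
        fprintf(stderr, "filter failed to compile:\n%s\n", log);
        glDeleteShader(sh);
        return 0;
    }
    return sh; /* link it into a program and draw a fullscreen quad with the video frame bound */
}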
Let's say I want to make it possible for an external program to apply some effect to the image. Does it have to be an executed program necessarily, or can it simply be a set of OpenGL instructions that the video editor can parse, and essentially "pass along" to OpenGL.
Either thing works. However, in the OpenGL standard itself there's no such thing as "OpenGL instructions" or "opcodes". But at least on X11-based systems with indirect GLX this is possible, because GLX actually defines opcodes, and it is possible for multiple X clients to operate on the same context if it is indirect. Unfortunately you won't have that option most of the time, as you probably want a direct context, because you may want OpenGL-3 (for which not all operations have opcodes defined, which makes indirect rendering impossible for OpenGL-3), or because you're not using GLX at all.
So the next option is that you provide the other process with some kind of command/interpreter prompt for OpenGL. If you're lazy, I suggest you just embed a Python interpreter in your program, together with the Python OpenGL bindings. Those operate on whatever context is currently active, allowing the other program to send over a Python script that does its stuff.
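A minimal sketch of that embedded-interpreter idea (it assumes CPython's embedding API and the PyOpenGL package; run_effect_script is a hypothetical host function that is called while a GL context is current and after Py_Initialize()):

#include <Python.h>

/* Run an effect script received from elsewhere; its GL calls land on the current context. */
void run_effect_script(const char *script_text)
{
    PyRun_SimpleString("from OpenGL.GL import *");  /* PyOpenGL bindings */
    PyRun_SimpleString(script_text);                /* e.g. bind an FBO, draw, read back */
}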
And last but not least you could provide OpenGL through some RPC interface.
Or you provide some plugin system, where you load some DLL, which does the deed.
Sure, but it would be basically the equivalent of you creating your own language. You can accept OpenGL "instructions" from the user via some text interface (or if you want to somehow put something together as a GUI), then parse those "instructions", and the underlying implementation would execute those instructions in whatever language your application is written in.
The short answer is no. What you are looking for is an HCI guy's dream: a DSL that would describe presentation regardless of the underlying technology.
Probably not quite what you wanted, but you might want to look at GLSL. GLSL is a C-like language that is compiled by the graphics card driver into native "graphics assembly language".
I'm trying to understand how graphics card versions, OpenGL versions and the API headers work together. I've read up on the OpenGL 2.0/3.0/3.1 debacle on the OpenGL forums and elsewhere but it's still not clear how this will work out for me as a developer (new to OpenGL). (btw I'm using nVidia in this question as an example because I'm buying one of their cards, but obviously I'd like to be vendor agnostic for the software I develop).
First, there is the GPU which needs to support an OpenGL version. But e.g. nVidia have drivers for older video cards to support OpenGL 3. Does that mean that these drivers implement certain functionality in software to emulate new functionality that isn't in the hardware?
Then there are the API headers. If I decide to write an application for OpenGL 3, should I wait until Microsoft releases an updated version of the platform SDK with headers that support this version? How do you select which version of the API to use - through a preprocessor define in the code, or does updating to the latest platform SDK simply upgrade to whatever is the latest version (can't imagine this last option, but you never know...).
What about backward compatibility? If I write an application targeting OpenGL 1.2, will users who have installed drivers for a card supporting OpenGL 3 still be able to run it, or should I test the card's features/supported version in my application? Reading http://developer.nvidia.com/object/opengl_3_driver.html seems to confirm that at least for nVidia cards applications written against 1.2 will continue to work, but this also implies that other vendors may stop supporting the 1.2 API. Basically that could put me in a position in the future where the software wouldn't work with recent cards because they no longer support 1.2. But if I develop for OpenGL 3 or even 2 today, I may shut out users whose GPUs only support 1.2.
I don't need the fancy features in OpenGL - I hardly use any shading at all, the fixed pipeline works fine for me (my application is CAD-like). What is the best version to base a new application on, with the expectation that it will be a long-lived application with incremental updates over the years to come?
I'm probably forgetting other issues that are relevant in this context, any insights are much appreciated.
From my OpenGL experience, it seems that "targeting" a given version is just accessing the various extensions that have been added for that version. So the only reason you would "target" OpenGL version 3 is if you want to use some of the extensions that are new to version 3. If you don't really use version 3 extensions (if you're just doing basic OpenGL stuff), then you naturally aren't "targeting" version 3.
In Visual Studio, you will always link your application with opengl32.lib, and opengl32.lib doesn't change across different OpenGL versions. OpenGL instead uses wglGetProcAddress() to dynamically access OpenGL extensions/versions at run time instead of at compile time. Namely, if a given driver doesn't support an extension, then wglGetProcAddress() will return NULL at run time when that extension's procedure is requested. So in your code you will need to implement logic that handles the NULL return case. In the simplest scenario, you could just print an error and say "this feature isn't available, so this program will behave ...". Or you can find alternative methods for doing the same thing that don't use the extension.
For the most part, you'll only get NULL returns from wglGetProcAddress() if your application is running on old hardware/drivers that don't support the OpenGL version that added the extension you're looking for. However, in future years you'll want to keep abreast of the things that newer OpenGL versions decide to deprecate. I haven't read too much of the 3.1 spec, but apparently they're introducing a deprecation model where older technology/extensions may be deprecated, which will open the door for newer hardware/drivers to no longer support deprecated extensions, in which case wglGetProcAddress() will again return NULL for those extensions. So if you put logic in for handling the NULL return from wglGetProcAddress(), you should still be fine even for deprecated extensions. It just might become necessary for you to implement better alternatives, or to make newer extensions the default.
As far as the versioned API headers go, the changes to the headers are mostly just changes to allow access to new functions returned by wglGetProcAddress(). So if you include the API header for version 2, you're good to go as long as you only need the extensions for OpenGL 2. If you need to access functions/extensions that were added in version 3, then you just replace your version 2 header with the version 3 header, which just adds some additional function pointer typedefs associated with the new extensions, so that when you call wglGetProcAddress(), you can cast the return value to the right function pointer. Example:
PFNGLGENQUERIESARBPROC glGenQueriesARB = NULL;   // function-pointer typedef comes from the API headers
...
glGenQueriesARB = (PFNGLGENQUERIESARBPROC)wglGetProcAddress("glGenQueriesARB");
// a NULL result here means the driver doesn't expose the extension: fall back or report an error
In the above example, the typedef for PFNGLGENQUERIESARBPROC is defined in the API headers. glGenQueriesARB was added in 1.2 I believe, so I'd need at least the 1.2 API headers to get the definition of PFNGLGENQUERIESARBPROC. That's really all the headers do.
One more thing I want to mention about 3.1. Apparently with 3.1 they're deprecating a lot of OpenGL functionality that my company has used pretty ubiquitously, including display lists, the glBegin/glEnd mechanisms, and the GL_SELECT render mode. I don't know much about the details, but I don't see how they can do that without having to create a new opengl32.lib to link with, because it seems that most of that functionality is embedded in opengl32.lib and not accessed through wglGetProcAddress(). Additionally, is Microsoft going to include that new opengl32.lib in their Visual Studio versions? I don't have an answer for those questions, but I would think that, even though 3.1 deprecates it, this functionality is going to be around for a long time. If you keep linking with your current opengl32.lib, it should continue to work almost indefinitely, although you may lose hardware acceleration at some point. The vast majority of OpenGL tutorials available on the web use the glBegin/glEnd methods for drawing primitives. The same is true for GL_SELECT, although a lot of hardware no longer accelerates the GL_SELECT render mode. Even opengl.org's tutorials use the supposedly deprecated glBegin/glEnd methods, and I have yet to find a "getting started" tutorial that uses only 3.1 features and avoids the deprecated functionality (certainly if someone knows of one, link me to it). Anyway, while it seems 3.1 has thrown away a lot of the old for all the new stuff, I think the old stuff will still be around for quite a while.
Long story short, here's my advice. If your OpenGL needs are simple, just use the basic OpenGL functionality. Vertex arrays are supported in versions 1.1 through 3.1, so if you're looking for maximum lifetime and you're starting fresh, that's probably what you should use. However, my opinion is that glBegin/glEnd and display lists are still going to be around for a while even though they are deprecated in version 3, so if you want to use them, I wouldn't fret too much. I would avoid GL_SELECT mode for picking in favor of an alternative method. Many hardware vendors have considered GL_SELECT deprecated for years now, even though it only officially got deprecated with version 3. In our application we get lots of issues where it doesn't work on ATI cards and integrated GMA cards. Consequently we implemented a picking method using occlusion queries, which seems to fix the problem. So to do it right the first time, avoid GL_SELECT.
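For reference, occlusion-query picking along those lines might look roughly like this (a sketch, not our actual code; draw_object() and the one-pixel scissor around the cursor are hypothetical, and it assumes GL 1.5+ entry points via a loader such as GLEW):

#include <GL/glew.h>

extern void draw_object(int index);   /* hypothetical: issues one object's geometry */

/* Returns the index of an object covering the pixel under the cursor, or -1. */
int pick_object(int object_count, int cursor_x, int cursor_y)
{
    GLuint query;
    glGenQueries(1, &query);

    glEnable(GL_SCISSOR_TEST);
    glScissor(cursor_x, cursor_y, 1, 1);          /* only the pixel under the cursor counts */

    int hit = -1;
    for (int i = 0; i < object_count; ++i) {
        glBeginQuery(GL_SAMPLES_PASSED, query);
        draw_object(i);
        glEndQuery(GL_SAMPLES_PASSED);

        GLuint samples = 0;
        glGetQueryObjectuiv(query, GL_QUERY_RESULT, &samples);
        if (samples > 0)
            hit = i;                              /* this object touched that pixel */
    }

    glDisable(GL_SCISSOR_TEST);
    glDeleteQueries(1, &query);
    return hit;
}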
Good Luck.
As far as drivers go, I think that in some cases missing functionality is implemented in software. The whole point of using OpenGL (aside from the acceleration) is to write to an API and not care HOW it's implemented.
In my experience, you don't declare OpenGL versions. The function calls between API versions are non-overlapping. Be aware of the spec, and know that if, for example, you call a 2.0 method, your application's minimum version just became 2.0. I have display applications written to target old OpenGL (stupid old Sun video cards) and they work just fine on brand-new nVidia 200-series cards.
I don't think there is any way to guarantee that an API won't change in the future, especially if you don't control it. Your application may stop working in 10 years when we're on OpenGL 6.4. Hopefully you've written a good enough application that customers will be willing to pay for an upgrade.
This fellow, Mark J. Kilgard, has been publishing nVidia source code and documents on OpenGL since the '90s, and has been doing so on behalf of one of the two biggest names in gaming hardware.
//-------------------------------------------------------------------------------------
... the notion that an OpenGL application is "wrong" to ever use immediate mode is overzealous. The OpenGL 3.0 specification has even gone so far as to mark immediate mode in OpenGL for "deprecation" (whatever that means!); such extremism is counter-productive and foolish. The right way to encourage good API usage isn't to try to deprecate or ban API usage, but rather educate developers about the right API usage for particular situations.