Targeting a specific version of an application - C++

I'm not sure if this question has already been asked, or if I just didn't know the correct wording to search for, but basically I want to be able to use a specific version of my application and disable all features that have been added in newer versions.
For example, I currently have version 1.4 of an application I created, but I only want to use the features available in v1.2, and make it so that I cannot use any features that have been added since (e.g. in 1.3).
An example of this is GLSL: you declare a specific version when writing a shader, and you can't use any features from a higher version than the one you declared.
Is this possible, and if so, how should I approach it?
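To make the idea concrete, here is a minimal sketch of the kind of compile-time gating I have in mind, with all macro and function names made up:

```cpp
// Sketch only: gate features at compile time with a target-version macro.
// APP_TARGET_VERSION and featureAddedIn1_3 are hypothetical names.
#define APP_VERSION_1_2 10200   // encoded as major * 10000 + minor * 100
#define APP_VERSION_1_3 10300

#ifndef APP_TARGET_VERSION
#define APP_TARGET_VERSION APP_VERSION_1_2   // "target" v1.2, GLSL-style
#endif

#if APP_TARGET_VERSION >= APP_VERSION_1_3
// Only compiled, and therefore only usable, when targeting 1.3 or newer.
void featureAddedIn1_3();
#endif
```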

Related

Quarto: Facilities for multiple **natural** languages/translation?

In the context of setting up a homepage using Quarto, I have failed to find facilities for, or accounts of, doing so with multiple natural target languages.
I'm trying to achieve two versions of the same content (in my case one in English, one in German) within an identical navigational structure and a language name link to switch between them ...
Is this beyond the scope of Quarto? If not: how might this be achieved?
Thanks for any pointers.
A newer version of Quarto, 1.2, available as a pre-release (as of 09/2022), has a feature that should allow this more easily:
https://quarto.org/docs/projects/profiles.html#language-content
This is the feature to use when looking to create a multi-language book.
The pre-release can be downloaded here: https://quarto.org/docs/download/

Query highest supported OpenGL version before context initialization?

I want to know if there is a way (ideally cross-platform, but if not, just POSIX-compliant, or at least working on Linux) to query the highest supported OpenGL version on the current system.
I would like to be able to instantiate GLFW windows to be the highest supported version, rather than blindly trying different versions until one allows for valid context initialization.
EDIT:
Assume I am not creating a context. Imagine I want to replicate what glxinfo does. In other words, I want to be able to query the installed OpenGL version without EVER creating a context.
When you ask for version X.Y, you are not asking for version X.Y; you're asking for at least version X.Y. Implementations are permitted to give you the highest version which is backwards compatible with X.Y.
So if you ask for 3.3 core profile, you may well get 4.6 core profile. There's no point in the implementation returning a lower version.
So just write your code against a minimum OpenGL version, then ask for that. You'll get whatever the implementation gives you, and you can query what you got later.
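To make that concrete, here is a minimal sketch using GLFW (assuming GLFW 3 and its bundled GL header): request at least 3.3 core, then query what the implementation actually handed back.

```cpp
#include <cstdio>
#include <GLFW/glfw3.h>

int main() {
    if (!glfwInit()) return 1;

    // Ask for *at least* 3.3 core; the implementation may return any newer
    // context that is backwards compatible with 3.3.
    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);

    GLFWwindow* window = glfwCreateWindow(640, 480, "probe", nullptr, nullptr);
    if (!window) { glfwTerminate(); return 1; }
    glfwMakeContextCurrent(window);

    // Query what you actually got, e.g. "4.6.0 ..." on a 4.6 driver.
    std::printf("GL_VERSION: %s\n", (const char*)glGetString(GL_VERSION));

    glfwDestroyWindow(window);
    glfwTerminate();
    return 0;
}
```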
Imagine I want to replicate what glxinfo does.
glxinfo does what it does by creating a context. Just look at its source code.
However, the GLX_MESA_query_renderer extension does allow asking about the context that would be created for a particular display/screen/renderer combination. Of course, that only works through MESA; drivers that don't go through MESA's system will not be visible through this.
Have you tried to create an OpenGL context without querying for a specific version?
When I use the native OpenGL functions (glXCreateContext() or wglCreateContext()) to create a new OpenGL context, they always create a context with the highest OpenGL version supported by my graphics card (4.6).
I don't know if GLFW has the same behavior...
ANSWER FOR QUESTION EDIT:
It's impossible to query any OpenGL information without creating a context, since you are not able to call any of the glXXX() functions without a current context. You can instead create a dummy context (which is not shown to the user, but stays somewhere in memory) to query all the OpenGL information you want, and delete it when you are done. (Don't worry about doing that; many, many programs, libraries, and game engines do it, including my own.)
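A minimal sketch of that dummy-context approach, here using GLFW to create an invisible 1x1 window (any context-creation API would work the same way):

```cpp
#include <cstdio>
#include <string>
#include <GLFW/glfw3.h>

// Create an invisible dummy context, read GL_VERSION, then tear it all down.
std::string queryGLVersion() {
    std::string result = "unknown";
    if (!glfwInit()) return result;

    glfwWindowHint(GLFW_VISIBLE, GLFW_FALSE);      // never shown to the user
    GLFWwindow* window = glfwCreateWindow(1, 1, "", nullptr, nullptr);
    if (window) {
        glfwMakeContextCurrent(window);
        result = (const char*)glGetString(GL_VERSION);
        glfwDestroyWindow(window);                 // delete the dummy context
    }
    glfwTerminate();
    return result;
}

int main() {
    std::printf("Highest version offered: %s\n", queryGLVersion().c_str());
    return 0;
}
```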

"More Like This" on CLucene

Java Lucene currently has a feature called “More Like This”, which is used to find the representative terms of a document; these can then be searched for to find similar documents.
I looked in the latest CLucene code, but could not find this functionality.
Is it there in CLucene, or something related to it? If not, are there any plans to include it?
If someone has done some work on this, or on a similar area of CLucene, it would be great to hear from them.
I guess CLucene is just dead.
> Currently you can get CLucene in two flavors - one is the 0.9.21 release, which has been proven to be stable over time, but is only compatible with Java Lucene 1.9.1. Another option is our current working copy on git, which conforms with Java Lucene 2.3.2
And according to this quote from SourceForge, you will never get the MoreLikeThis feature, because a port of Lucene 1.x or 2.x is simply too old.
Code: http://sourceforge.net/p/clucene/code/ci/master/tree/ (look at the commit date)

Mapforce: merging changes in Perforce version control

In my project I am using MapForce to create XSLT transforms. The problem is that MapForce generates different output even after a minor change (different variable names, different object ordering, etc.).
If I implement some functionality in, say, project branch 1.2, and another developer adds other functionality in branch 1.3, and we both submit changes to branches 1.2 and 1.3 respectively, there is no way to simply integrate the changes into version 1.3 (I am using Perforce for version control); the functionality has to be reimplemented.
Is there any way I could overcome this? Maybe a version-control plug-in for MapForce?
I'm afraid this is just a limitation of code generators. Most just assign new labels (or numbers) as they go along, so depending on where the added code is, everything generated after that point will be changed.
To fix it, the MapForce code generator would have to save the metadata used when generating code, and re-use it the next time you generate code. The generator would have to distinguish new items from those that have just moved, and re-use all existing ones. It might be a major change to their software to do this, depending on how it is currently implemented.
It can't hurt to request/suggest such a feature, and having it would help Altova position MapForce as more of a production-ready solution, able to support version control of the customer's work.

OpenGL versions and gpus - what kind of compatibility is there?

I'm trying to understand how graphics card versions, OpenGL versions and the API headers work together. I've read up on the OpenGL 2.0/3.0/3.1 debacle on the OpenGL forums and elsewhere, but it's still not clear how this will work out for me as a developer (new to OpenGL). (By the way, I'm using nVidia in this question as an example because I'm buying one of their cards, but obviously I'd like to be vendor-agnostic in the software I develop.)
First, there is the GPU, which needs to support an OpenGL version. But nVidia, for example, has drivers for older video cards that support OpenGL 3. Does that mean these drivers implement certain functionality in software to emulate new functionality that isn't in the hardware?
Then there are the API headers. If I decide to write an application for OpenGL 3, should I wait until Microsoft releases an updated version of the Platform SDK with headers that support this version? How do you select which version of the API to use - through a preprocessor define in the code, or does updating to the latest Platform SDK simply upgrade you to whatever the latest version is (I can't imagine this last option, but you never know...)?
What about backward compatibility? If I write an application targeting OpenGL 1.2, will users who have installed drivers for a card supporting OpenGL 3 still be able to run it, or should I test the card's features/supported version in my application? Reading http://developer.nvidia.com/object/opengl_3_driver.html seems to confirm that, at least for nVidia cards, applications written against 1.2 will continue to work, but this also implies that other vendors may stop supporting the 1.2 API. Basically, that could put me in a position in the future where the software wouldn't work with recent cards because they no longer support 1.2. But if I develop for OpenGL 3, or even 2, today, I may shut out users whose GPUs only support 1.2.
I don't need the fancy features in OpenGL - I hardly use any shading at all, the fixed pipeline works fine for me (my application is CAD-like). What is the best version to base a new application on, with the expectation that it will be a long-lived application with incremental updates over the years to come?
I'm probably forgetting other issues that are relevant in this context, any insights are much appreciated.
From my OpenGL experience, it seems that "targeting" a given version is just accessing the various extensions that have been added for that version. So the only reason you would "target" OpenGL version 3 is if you want to use some of the extensions that are new to version 3. If you don't really use version 3 extensions (if you're just doing basic OpenGL stuff), then you naturally aren't "targeting" version 3.
In Visual Studio, you will always link your application with opengl32.lib, and opengl32.lib doesn't change across different OpenGL versions. OpenGL instead uses wglGetProcAddress() to dynamically access OpenGL extensions/versions at run time rather than at compile time. Namely, if a given driver doesn't support an extension, then wglGetProcAddress() will return NULL at run time when that extension's procedure is requested. So your code will need logic that handles the NULL return case. In the simplest scenario, you could just print an error saying "this feature isn't available, so this program will behave ...". Or you can find alternative methods for doing the same thing without the extension.
For the most part, you'll only get NULL returns from wglGetProcAddress() if your application is running on old hardware/drivers that don't support the OpenGL version that added the extension you're looking for. However, in future years you'll want to keep abreast of the things that newer OpenGL versions decide to deprecate. I haven't read too much of the 3.1 spec, but apparently it introduces a deprecation model where older technology/extensions may be deprecated, which opens the door for newer hardware/drivers to no longer support deprecated extensions, in which case wglGetProcAddress() will again return NULL for those extensions. So if you put logic in for handling the NULL return from wglGetProcAddress(), you should still be fine even for deprecated extensions. It just might become necessary to implement better alternatives, or to make newer extensions the default.
As far as the versioned API headers go, the changes to the headers are mostly just changes to allow access to new functions returned by wglGetProcAddress(). So if you include the API header for version 2, you're good to go as long as you only need the extensions for OpenGL 2. If you need to access functions/extensions that were added in version 3, then you just replace your version 2 header with the version 3 header, which just adds some additional function pointer typedefs associated with the new extensions, so that when you call wglGetProcAddress(), you can cast the return value to the right function pointer. Example:
PFNGLGENQUERIESARBPROC glGenQueriesARB = NULL;
...
glGenQueriesARB = (PFNGLGENQUERIESARBPROC)wglGetProcAddress("glGenQueriesARB");
if (glGenQueriesARB == NULL) {
    // The driver doesn't expose this extension: report it or fall back.
}
In the above example, the typedef for PFNGLGENQUERIESARBPROC is defined in the API headers. glGenQueriesARB comes from the ARB_occlusion_query extension (the equivalent glGenQueries() entered core OpenGL in 1.5), so I'd need headers recent enough to get the definition of PFNGLGENQUERIESARBPROC. That's really all the headers do.
One more thing I want to mention about 3.1. Apparently with 3.1 they're deprecating a lot of OpenGL functionality that my company has used pretty ubiquitously, including display lists, the glBegin/glEnd mechanisms, and the GL_SELECT render mode. I don't know much about the details, but I don't see how they can do that without having to create a new opengl32.lib to link with, because it seems that most of that functionality is embedded in opengl32.lib rather than accessed through wglGetProcAddress(). Additionally, is Microsoft going to include that new opengl32.lib in their Visual Studio versions? I don't have answers to those questions, but I would think that, even though 3.1 deprecates this functionality, it is going to be around for a long time.
If you keep linking with your current opengl32.lib, it should continue to work almost indefinitely, although you may lose hardware acceleration at some point. The vast majority of OpenGL tutorials available on the web use the glBegin/glEnd methods for drawing primitives. The same is true for GL_SELECT, although a lot of hardware no longer accelerates the GL_SELECT render mode. Even opengl.org's tutorials use the supposedly deprecated glBegin/glEnd methods, and I have yet to find a "getting started" tutorial that uses only 3.1 features, avoiding the deprecated functionality (certainly, if someone knows of one, link me to it). Anyway, while it seems 3.1 has thrown away a lot of the old in favor of all-new stuff, I think the old stuff will still be around for quite a while.
Long story short, here's my advice. If your OpenGL needs are simple, just use the basic OpenGL functionality. Vertex arrays are supported in versions 1.1 through 3.1, so if you're looking for maximum lifetime and you're starting fresh, that's probably what you should use. However, in my opinion glBegin/glEnd and display lists are still going to be around for a while even though they are deprecated in version 3, so if you want to use them, I wouldn't fret too much. I would avoid GL_SELECT mode for picking in favor of an alternate method; many of the hardware vendors have considered GL_SELECT deprecated for years, even though it only officially got deprecated with version 3. In our application we get lots of issues where it doesn't work on ATI cards and integrated GMA cards. Consequently we implemented a picking method using occlusion queries, which seems to fix the problem (sketched below). So to do it right the first time, avoid GL_SELECT.
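For reference, here is a rough sketch of that occlusion-query picking approach. It assumes a current context with the GL 1.5+ query entry points already loaded (e.g. via wglGetProcAddress() as above); drawObject and the pixel coordinates are hypothetical, and a real implementation also has to decide what to do when several objects cover the picked pixel.

```cpp
#include <vector>
#include <GL/gl.h>
// glGenQueries() etc. are assumed to be loaded already (GLEW, manual
// wglGetProcAddress, ...); with only the ARB extension, use the
// ARB-suffixed names instead.

// Returns the index of one object covering pixel (x, y), or -1 if none.
// drawObject is a caller-supplied routine that renders object i.
int pickWithOcclusionQueries(int x, int y, int numObjects,
                             void (*drawObject)(int)) {
    std::vector<GLuint> queries(numObjects);
    glGenQueries(numObjects, queries.data());

    glEnable(GL_SCISSOR_TEST);
    glScissor(x, y, 1, 1);                                // just the picked pixel
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);  // no visible output

    for (int i = 0; i < numObjects; ++i) {
        glBeginQuery(GL_SAMPLES_PASSED, queries[i]);
        drawObject(i);
        glEndQuery(GL_SAMPLES_PASSED);
    }

    int picked = -1;
    for (int i = 0; i < numObjects; ++i) {
        GLuint samples = 0;
        glGetQueryObjectuiv(queries[i], GL_QUERY_RESULT, &samples);
        if (samples > 0) picked = i;   // depth testing decides what is visible
    }

    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDisable(GL_SCISSOR_TEST);
    glDeleteQueries(numObjects, queries.data());
    return picked;
}
```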
Good Luck.
As far as drivers go, I think that in some cases missing functionality is implemented in software. The whole point of using OpenGL (aside from the acceleration) is to write to an API and not care HOW it's implemented.
In my experience, you don't declare OpenGL versions. The function calls between API versions don't overlap. Be aware of the spec: if, for example, you call a 2.0 method, your application's minimum version just became 2.0. I have display applications written to target OpenGL on stupid old Sun video cards, and they work just fine on brand-new nVidia 200-series cards.
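A minimal sketch of checking at run time that the context actually meets the minimum version your calls imply, by parsing the GL_VERSION string (which works even on pre-3.0 contexts; the function name is made up):

```cpp
#include <cstdio>
#include <GL/gl.h>

// Call with a current context. Returns true if the context reports at
// least major.minor in its GL_VERSION string (e.g. "2.1.2 NVIDIA ...").
bool hasMinimumGLVersion(int major, int minor) {
    const char* version = (const char*)glGetString(GL_VERSION);
    if (!version) return false;  // no current context
    int ctxMajor = 0, ctxMinor = 0;
    if (std::sscanf(version, "%d.%d", &ctxMajor, &ctxMinor) != 2) return false;
    return ctxMajor > major || (ctxMajor == major && ctxMinor >= minor);
}

// Usage: if a code path calls 2.0 entry points, refuse to take it otherwise.
// if (!hasMinimumGLVersion(2, 0)) { /* report the error and fall back */ }
```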
I don't think there is any way to guarantee that an API won't change in the future, especially if you don't control it. Your application may stop working in 10 years when we're on OpenGL 6.4. Hopefully you've written a good enough application that customers will be willing to pay for an upgrade.
This fellow, Mark J. Kilgard, has been publishing nVidia source code and documents on OpenGL since the '90s, on behalf of one of the two biggest names in gaming hardware.
> ... the notion that an OpenGL application is "wrong" to ever use immediate mode is overzealous. The OpenGL 3.0 specification has even gone so far as to mark immediate mode in OpenGL for "deprecation" (whatever that means!); such extremism is counter-productive and foolish. The right way to encourage good API usage isn't to try to deprecate or ban API usage, but rather educate developers about the right API usage for particular situations.