GL_SHADING_LANGUAGE_VERSION returns a single version string - c++

When using glGetString with the enum GL_SHADING_LANGUAGE_VERSION, I get only one value in return, while I was expecting space-separated values as with glGetString(GL_EXTENSIONS).
What is even more confusing is that when I use glGetIntegerv(GL_NUM_SHADING_LANGUAGE_VERSIONS, *), I get a number bigger than one, and when I use glGetStringi(GL_SHADING_LANGUAGE_VERSION, i) and iterate, I can get all the values. So why don't I get all the values with glGetString(GL_SHADING_LANGUAGE_VERSION)? I just get one value.

It is this way because that's how the feature was originally defined. All GLSL versions are backwards compatible with prior ones, so the expectation was that if you had a 1.10 shader, you could feed it to any implementation that accepted 1.10 or higher.
But with the break between core and compatibility profiles, that became untenable. Making core implementations tacitly support GLSL shaders that included removed constructs made no sense. So there had to be a way for an implementation to specify exactly which GLSL versions it supported. But they couldn't change the existing version string's definition, so they (eventually) just added a new, index-based query.
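A minimal sketch of the indexed query pattern follows. The GL entry points are stubbed with made-up data here so the control flow is visible without creating a GL context; in real code you would call the actual glGetIntegerv(GL_NUM_SHADING_LANGUAGE_VERSIONS, ...) and glGetStringi(GL_SHADING_LANGUAGE_VERSION, i) on a GL 4.3+ context.

```cpp
#include <string>
#include <vector>

// Stub data standing in for a real driver, so the query pattern can be
// shown without a live GL context (these version strings are made up).
static const char* kStubVersions[] = {"460", "450 core", "330 core", "100 es"};

static void stub_glGetIntegerv(unsigned /*pname*/, int* out) { *out = 4; }
static const char* stub_glGetStringi(unsigned /*pname*/, int i) { return kStubVersions[i]; }

// The indexed query: ask how many versions there are, then fetch each one.
std::vector<std::string> supportedGlslVersions() {
    int count = 0;
    stub_glGetIntegerv(/* GL_NUM_SHADING_LANGUAGE_VERSIONS */ 0, &count);
    std::vector<std::string> versions;
    for (int i = 0; i < count; ++i)
        versions.push_back(stub_glGetStringi(/* GL_SHADING_LANGUAGE_VERSION */ 0, i));
    return versions;
}
```

By contrast, glGetString(GL_SHADING_LANGUAGE_VERSION) hands back only a single string for the latest version the context supports.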


Maxima: creating a function that acts on parts of a string

Context: I'm using Maxima on a platform that also uses KaTeX. For various reasons related to content management, this means that we are regularly using Maxima functions to generate the necessary KaTeX commands.
I'm currently trying to develop a group of functions that will facilitate generating different sets of strings corresponding to KaTeX commands for various symbols related to vectors.
Problem
I have written the following function makeKatexVector(x), which takes a string, list or list-of-lists and returns the same type of object, with each string wrapped in \vec{} (i.e. makeKatexVector(string) returns \vec{string} and makeKatexVector(["a","b"]) returns ["\vec{a}", "\vec{b}"] etc).
/* Flexible Make KaTeX Vector Version of List Items */
makeKatexVector(x) := block([placeHolderList : x],
    if stringp(x) /* Special Handling if x is Just a String */
    then placeHolderList : concat("\vec{", x, "}")
    else if listp(x[1]) /* check to see if it is a list of lists */
    then for j:1 thru length(x)
         do placeHolderList[j] : makelist(concat("\vec{", k, "}"), k, x[j])
    else if listp(x) /* check to see if it is just a list */
    then placeHolderList : makelist(concat("\vec{", k, "}"), k, x)
    else placeHolderList : "makeKatexVector error: not a list-of-lists, a list or a string",
    return(placeHolderList));
Although I have my doubts about the efficiency or elegance of the above code, it seems to return the desired expressions; however, I would like to modify this function so that it can distinguish between single- and multi-character strings.
In particular, I'd like multi-character strings like x_1 to be returned as \vec{x}_1 and not \vec{x_1}.
In fact, I'd simply like to modify the above code so that \vec{} is wrapped around the first character of the string, regardless of how many characters there may be.
My Attempt
I was ready to tackle this with brute force (e.g. transcribing each character of a string into a list and then reassembling); however, the real programmer on the project suggested I look into "Regular Expressions". After exploring that endless rabbit hole, I found the command regex_subst; however, I can't find any Maxima documentation for it, and am struggling to reproduce the examples in the related documentation here.
Once I can work out the appropriate regex to use, I intend to implement this in the above code using an if statement, such as:
if slength(x) >1
then {regex command}
else {regular treatment}
If anyone knows of helpful resources on any of these fronts, I'd greatly appreciate any pointers at all.
Looks like you got the regex approach working; that's great. My advice about handling subscripted expressions in TeX, however, is to avoid working with names which contain underscores in Maxima, and instead to work with Maxima expressions with indices, e.g. foo[k] instead of foo_k. While writing foo_k is a minor convenience in Maxima, you'll run into problems pretty quickly, and in order to straighten them out you might end up piling one complication on another.
E.g. Maxima doesn't know there's any relation between foo, foo_1, and foo_k -- those have no more in common than foo, abc, and xyz. What if there are 2 indices? foo_j_k will become something like foo_{j_k} by the preceding approach -- what if you want foo_{j, k} instead? (Incidentally the two are foo[j[k]] and foo[j, k] when represented by subscripts.) Another problematic expression is something like foo_bar_baz. Does that mean foo_bar[baz], foo[bar_baz] or foo_bar_baz?
The code for tex(x_y) yielding x_y in TeX is pretty old, so it's unlikely to go away, but over the years I've come to increasingly feel that it should be avoided. However, the last time it came up and I proposed disabling it, there were enough people who supported it that we ended up keeping it.
Something that might be helpful: there is a function texput which allows you to specify how a symbol should appear in TeX output. For example:
(%i1) texput (v, "\\vec{v}");
(%o1) "\vec{v}"
(%i2) tex ([v, v[1], v[k], v[j[k]], v[j, k]]);
$$\left[ \vec{v} , \vec{v}_{1} , \vec{v}_{k} , \vec{v}_{j_{k}} ,
\vec{v}_{j,k} \right] $$
(%o2) false
texput can modify various aspects of TeX output; you can take a look at the documentation (see ? texput).
While I didn't expect that I'd work this out on my own, after several hours I made some progress, so I figured I'd share it here, in case anyone else may benefit from the time I put in.
To load regex support in wxMaxima, at least in the macOS version, simply type load("sregex");. I didn't have this loaded, and was trying to work through our custom platform, which cost me several hours.
Take note that many of the arguments in the linked documentation by Dorai Sitaram occur in the reverse, or a different, order than they do in their corresponding Maxima versions.
Not all the "pregexp" functions exist in Maxima.
In addition to this, escaping special characters varied in important ways between wxMaxima, the inline Maxima compiler (running within the Ace editor) and the actual rendered version on our platform; in particular, the inline compiler often returned false for expressions that compiled properly in wxMaxima and on the platform. Because I didn't have sregex loaded in wxMaxima from the beginning, I lost a lot of time to this.
Finally, the regex expression that achieved the desired substitution, in my case, was:
regex_subst("\vec{\\1}", "([[:alpha:]])", "v_1");
which returns vec{v}_1 in wxMaxima (N.B. none of my attempts to get wxMaxima to return \vec{v}_1 were successful; escaping the backslash just does not seem to work; fortunately, the usual escaped version \\vec{\\1} does return the desired form).
I have yet to adjust the code for the rest of the function, but I doubt that will be of use to anyone else, and wanted to be sure to post an update here, before anyone else took time to assist me.
Always interested in better methods / practices or any other pointers / feedback.
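For comparison, the same "wrap only the first alphabetic character" substitution can be sketched outside Maxima. Here is a C++ analogue using std::regex, with the same POSIX character class as the regex_subst call above; the function name mirrors the post's Maxima function but is otherwise hypothetical.

```cpp
#include <regex>
#include <string>

// Wrap only the first alphabetic character in \vec{...}, mirroring
// regex_subst("\\vec{\\1}", "([[:alpha:]])", s); format_first_only
// restricts the replacement to the first match.
std::string makeKatexVector(const std::string& s) {
    static const std::regex firstAlpha("([[:alpha:]])");
    return std::regex_replace(s, firstAlpha, "\\vec{$1}",
                              std::regex_constants::format_first_only);
}
```

So makeKatexVector("x_1") yields \vec{x}_1 rather than \vec{x_1}, which is the behavior the question asks for.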

What does the suffix on the end of glGetProgramiv signify?

glGetProgramiv is referenced in the documentation as glGetProgram, but no mention is made of there being multiple versions, and I see no indication through Google of other versions such as glGetProgrammiii or glGetProgramii. Is the iv suffix Roman numerals related to some type of internal versioning during the specification process, or does it hold some other significance?
Those suffixes are used to distinguish "overloaded" functions that have the same name but different parameter types. iv means this version operates on a vector of integers. It's the only version of glGetProgram at the moment.

Extracting wide chars w/ attributes in ncurses

[Please note I am using _XOPEN_SOURCE_EXTENDED 1 and setlocale(LC_CTYPE, "").]
Curses includes various functions for extracting characters from the screen; they can be divided into those which grab just the text and those which grab the text plus attributes (bold, color, etc.). The former use wchar_t (or char) and the latter curses' own chtype.
There are constants to mask a chtype to get just the character or just the attributes -- A_CHARTEXT and A_ATTRIBUTES. However, from the value of these, it is easy to see that there will be collisions with wchar_t values over 255. A_ATTRIBUTES is 64-bits and only the lower 8 are unset.
If the base type internally is chtype, this would mean ncurses was unworkable with most of unicode, but it isn't -- you can use hardcoded strings in UTF-8 source and write them out with attributes no problem. Where it gets interesting is getting them back again.
wchar_t s[] = L"\412";
This character has a value of 266 and displays as Ċ. However, when extracted into a chtype using, e.g., mvwinchnstr(), it is exactly the same as a space (10) with the COLOR_PAIR(1) attribute (256) set. And in fact, if you take the extracted chtype and redisplay it, you get just that -- a space with COLOR_PAIR(1) set.
But if you extract it instead into a wchar_t with, e.g., mvwinnwstr(), it's correct, as is a colored space. The problem with this, of course, is that the attributes are gone. This implies the attributes are being masked out correctly, which is demonstrably impossible with a chtype, since a chtype for both of these has the same value (266). In other words, the internal representation is obviously neither a chtype nor a wchar_t.
I do not use ncurses much, and I notice there are other curses implementations (e.g. Oracle's) with functions that imply the chtype there might not have this problem. In any case, is there a way w/ ncurses to unambiguously extract wide chars together with their attributes?
[I've tagged this C and C++ since it is applicable in both contexts.]
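The ambiguity described above can be reproduced with plain arithmetic, no curses required. This sketch assumes the text mask is the low 8 bits, which is what A_CHARTEXT works out to on common ncurses builds; check your own curses.h.

```cpp
// Illustrative 8-bit text mask (assumption: A_CHARTEXT == 0xFF,
// as on common ncurses builds).
constexpr unsigned long kCharTextMask = 0xFFUL;

unsigned long text_part(unsigned long ch) { return ch & kCharTextMask; }
unsigned long attr_part(unsigned long ch) { return ch & ~kCharTextMask; }

// U+010A ('Ċ') has the value 0x10A == 266, and 266 == 10 | 256 -- i.e.
// character 10 with the 256 attribute bit set. The two cells are therefore
// indistinguishable once stored in a single chtype-style value.
```

Masking 266 yields "character" 10 and "attribute" 256, exactly the confusion the question observes.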
It is more complicated than that. But briefly:
In the SVr4 implementation, there was just chtype.
X/Open work for standardization added on the multibyte characters, represented in cchar_t.
Not blatantly obvious in the X/Open documentation, but seen in the corresponding Unix implementations, the chtype and cchar_t were not envisioned as possibly different views of the same data. You can only make 8-bit encodings with the former.
Not many applications really delve into Unix implementations to make it apparent (in fact, at least one vendor's XPG4 implementation never worked well enough to do useful testing — so much for the state of the art).
The integration (or lack of same) was overlooked in ncurses, where it seemed a natural thing to do.
ncurses accepts multibyte strings in addstr (none of the Unix implementations do).
ncurses attempts to provide the same information via either style of interface which was set via the other.
There are obviously limitations: chtype corresponds to a single cell on the screen, and can hold only an 8-bit character. Interfaces such as winnstr which return a string will work within that constraint. The winchnstr function does return an array of chtype values.
If you want the attributes for a cell which is not an 8-bit character, you are best off retrieving them via the analogous win_wchnstr.

Declaring an enum and setting values manually, not in increasing order

Is defining an enum in C++ and setting its values in arbitrary order valid? Is it used in any well-known code?
e.g.:
enum ESampleType { STPositiveControl = 'P', STNegativeControl = 'N', STSample = 'S' };
VS2008 compiles without warnings. I wonder whether gcc would.
In my opinion, it at least violates the "principle of least surprise", as iterating over all values using
for (int i = STPositiveControl; i <= STSample; ++i)
would fail.
P.S.: The rationale for this approach: in my DB application I'm defining wrapper methods. Some columns contain "constants" encoded as a single char. I'm trying to make this a little more type-safe.
It's a standard and widely used feature. Most of the time (but not always) it will be used to create bitmasks, but it can also be used as you show, to give "printable" values to the enum constants.
It shouldn't surprise anyone who knows C++, as it is widely used.
Yes it is valid.
There is an example use of it in the book The C++ Programming Language (3rd Edition) by Bjarne Stroustrup, section "6.1 A Desk Calculator [expr.calculator]" (and more precisely "6.1.1 The Parser [expr.parser]"), for the parser code of a simple arithmetic calculator. Here's an excerpt:
The parser uses a function get_token() to get input. The value of the most recent call of get_token() can be found in the global variable curr_tok. The type of curr_tok is the enumeration Token_value:
enum Token_value {
    NAME, NUMBER, END,
    PLUS='+', MINUS='-', MUL='*', DIV='/',
    PRINT=';', ASSIGN='=', LP='(', RP=')'
};
Token_value curr_tok = PRINT;
Representing each token by the integer value of its character is convenient and efficient and can be a help to people using debuggers. This works as long as no character used as input has a value used as an enumerator – and no character set I know of has a printing character with a single-digit integer value. (...)
(That last sentence is specific to this example, because the enumeration mixes "default-value" and "explicit-value" enumerators and wants each one to be unique.)
However, it's only an educative example (and notably it uses a global variable, and ALL-CAPS names for enumerators, which you should reserve for macros (but Stroustrup doesn't like macros :p)).
Now, indeed you can't iterate over it (at least not with a plain for loop; but see this question). (And as James Kanze pointed out, an enum's values are not always ordered, contiguous, and unique.)
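To make both points concrete, here is a small runnable sketch using the ESampleType enum from the question: the enumerators are simply character codes, and a naive increment loop both visits values that are not enumerators and skips STNegativeControl entirely, since 'N' sorts below 'P'.

```cpp
enum ESampleType { STPositiveControl = 'P', STNegativeControl = 'N', STSample = 'S' };

// Count how many integers a naive "iterate over the enum" loop visits.
// From 'P' (80) up to 'S' (83) it touches 80, 81, 82, 83 -- four values,
// of which 'Q' (81) and 'R' (82) are not enumerators at all, while
// STNegativeControl ('N', 78) is never reached.
int valuesVisited() {
    int n = 0;
    for (int i = STPositiveControl; i <= STSample; ++i) ++n;
    return n;
}
```

So the definition is perfectly valid, but increment-based iteration silently does the wrong thing, which is the "least surprise" concern from the question.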

Embedding other language in Flex/Bison

The bottom line:
If you would like to add one, very small feature into C++ using Flex/Bison, how would you do that? For example, ability to declare void xxx() functions with syntax: foo%%: xxx?
The whole story:
Once I coded a custom shader-processing program that built ready-to-use Cg/GLSL pixel and vertex shaders from a number of blocks. I added a few features, mostly related to static compilation (something like a "better preprocessor").
For example, something that would look like
#if LIGHT_TYPE == POINT
float lightStrength = dot(N, normalize(pos - lightPos));
#elif LIGHT_TYPE == DIRECTIONAL
float lightStrength = dot(N, lightDir);
#endif
with pure macros, looks like
[LightType = Point]
[Require: pos, lightPos]
float LightStrength()
{
!let %N Normal
return dot(%N, normalize(pos - lightPos));
}
in my "language". As you can see, "functions" can be provided for various light/material types. There is also the possibility to "call" other functions and to mark which uniform/varying attributes are required for a specific shader.
Why all this effort? Because (especially on early cards like SM 2.0) there were numerous limits on attributes, and my "compiler" produced a ready-to-use shader with a list of required attributes/variables, managed parameters between pixel and vertex shaders, and optimized some static stuff (just for readability, as the Cg compiler would optimize it later anyway).
OK, but I am not writing all of this to praise myself or something.
I wrote this "compiler" in C#, and it initially was a very small program. But as time passed, many features were added, and now the program is a complete mess with no realistic option of refactoring. Also, being coded in C# means I cannot embed the compiler directly in a C++ game, forcing me to spawn processes to compile shaders (which takes a lot of time).
I would like to rewrite my language/compiler in C++ using the Flex/Bison toolkit. I've already coded an efficient math parser in Flex/Bison, so I am somewhat experienced in this matter. However, there is one thing that I cannot resolve myself, and this is the very topic of my question.
How can I possibly embed GLSL into my language? In the C# compiler, I just went line by line and checked whether the line started with special characters (like % or [), and later did many tricks and hacks using string replaces and/or regexes.
Now I would like to write down a clean Bison grammar and do it properly. But including the whole syntax of GLSL scares me. It is quite a big and complicated language, and it is constantly evolving. And mostly I would just be passing all this GLSL code through anyway.
If you are not experienced with GLSL/Cg/graphics programming at all, that is not terribly important: the question can be reduced to the "bottom line" above.
So, if you would like to add one, very small feature into C++ using Flex/Bison, how would you do that? For example, ability to declare void xxx() functions with syntax: foo%%: xxx?
I added multithreading mechanisms to Pascal for a school project once, so I've had to deal with this. What I did was find a BNF definition of Pascal and copy that, making my output tokens equal to the input text in 99% of the cases, and adding the new intermediate-language code in the 1% of new tokens I added. Pascal is simple, so the BNF definition is relatively simple. C++, not so much.
The C++ Programming Language by Stroustrup has the language grammar in a form that is practically parser-ready, but that's a lot of code to copy by hand. Maybe a certain website can help you find a bison/lex set for "standard" C++, and then you can modify that.
EDIT:
I found my yacc files, so here's a small example from my pascal code
procheader: PROCEDURE ID paramslist ';'
        { thread::inLocalDecl = true; $$ = "procedure " + $2 + $3 + ";\n"; }
    | FUNCTION ID paramslist ':' ID ';'
        { thread::inLocalDecl = true; $$ = "function " + $2 + $3 + ":" + $5 + ";\n"; }
    | THREAD { thread::inThreadDecl = true;
               thread::inArgDecl = true; }
      ID { thread::curName = $3; } paramslist { thread::inArgDecl = false; } ';'
        {
            /* a lot of code to transform this custom construct into standard pascal */
        }
The first two alternatives were just standard Pascal: I copied the input tokens into the output strings verbatim (i.e. I didn't actually modify anything from the input file).
The third alternative (my THREAD keyword) was, obviously, not standard Pascal, so I transformed the output into something that I could actually compile as standard Pascal.
Basically, to compile my "threaded" Pascal, I had to take my source file, pass it through my parser, and then compile the output.
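The same pass-through strategy can also be sketched without a grammar at all, line by line, as the question's original C# compiler did. This is a hypothetical illustration (the "%%:" marker and the emitted declaration come from the question's foo%%: xxx example): marker lines are rewritten, everything else is emitted verbatim.

```cpp
#include <sstream>
#include <string>

// Hypothetical line-based preprocessor: rewrite lines containing the
// custom "%%:" function marker (e.g. "foo%%: xxx" -> "void xxx() {}"),
// and pass every other line through untouched.
std::string preprocess(const std::string& source) {
    std::istringstream in(source);
    std::ostringstream out;
    std::string line;
    while (std::getline(in, line)) {
        std::string::size_type pos = line.find("%%:");
        if (pos != std::string::npos) {
            std::string name = line.substr(pos + 3);
            name.erase(0, name.find_first_not_of(' '));  // trim leading spaces
            out << "void " << name << "() {}\n";         // emit plain C++
        } else {
            out << line << '\n';                         // verbatim pass-through
        }
    }
    return out.str();
}
```

A Bison grammar buys you real structure, but for a single tiny extension with everything else passed through, a filter like this is often all the "embedding" that is actually needed.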