Performance implications of setting readonly/writeonly on image units in GLSL

In glBindImageTexture(), access can be GL_READ_ONLY, GL_WRITE_ONLY, or GL_READ_WRITE.
I assume that these should match the readonly and writeonly qualifiers on image variables (e.g. image2D or imageBuffer) in GLSL.
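For reference, here is roughly how I am pairing them (the texture and variable names are just illustrative):

    // C++ side: bind level 0 of texture `tex` to image unit 0, write-only.
    glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA8);

    // Matching GLSL declaration, held in a string for reference:
    const char* decl =
        "layout(rgba8, binding = 0) writeonly uniform image2D uOutput;";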
I haven't noticed any difference when setting readonly or writeonly (2013, NVIDIA 319.49 drivers). Granted, I might not be doing anything that would otherwise cause a slowdown and hence don't see any improvement. These qualifiers may also simply be ignored by current GL implementations.
In what cases could a GL implementation make use of readonly/writeonly, why do they exist?
Cache coherency comes to mind, but don't the coherent/volatile qualifiers cover this already? I've used these and they appear to work.
Has anyone experienced a difference in performance or results when using readonly/writeonly? How important are they?
Don't be afraid to reply if it's been years; I'm interested to know, and I'm sure others are too.

It is only a hint to the driver about how the memory will be used; it's up to the driver to perform optimizations or not. I have not seen any difference on NVIDIA drivers. One thing is certain: if you write to a read-only image, the result is undefined, and the same goes for reading a write-only image. But even if it makes no difference on today's drivers, you should still use these qualifiers consistently with what you are actually doing with the images: future drivers may use the hints to optimize memory access.

See, the thing is there is no actual enforcement of these access policies. OpenGL intentionally leaves the behavior of writing to a read-only image undefined for reasons that will be explained below. Among other things, this allows a lot of flexibility on the end of the implementation to use the read-only attribute for a number of different optimization strategies (think of it more like a hint than a strict rule). Because the behavior is undefined, you are right, they could be ignored by the implementation, but should not be ignored by the application developer. Invoking undefined behavior is a ticking time bomb.
To that end, there exists no mechanism in GLSL (or shaders in general) to emit an error when you try to store something in a read-only image, making enforcing this policy significantly more difficult. The best you could hope for is some kind of static analysis at compile-time that is later verified against the access policies of the bound image texture at runtime. That is not to say it may not happen in the future, but at present no such feature exists. The only thing that GLSL provides is a compile-time guarantee that readonly variables cannot be used in conjunction with imageStore (...) in the shader. In short, the GLSL qualifier is enforced at compile-time, but the access policy of the bound image is not at run-time.
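For example, a shader along these lines must be rejected by the GLSL compiler (a sketch; the names are illustrative), whereas binding a GL_READ_ONLY image to an unqualified image2D and writing to it is merely undefined at run-time:

    // GLSL, held in a C++ string for reference. The imageStore call must
    // fail to compile because uImg is declared readonly.
    const char* badShader = R"GLSL(
        #version 420
        layout(rgba8, binding = 0) readonly uniform image2D uImg;
        void main() {
            imageStore(uImg, ivec2(0), vec4(1.0));  // rejected at compile time
        }
    )GLSL";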
The fact that you are not seeing any performance improvement is not surprising. This is a pretty new feature; you will have to give it years before drivers do anything profound with the access policy. It does not hurt anything to provide the hint, so long as you yourself do not violate it.

Related

Is undefined behavior only an issue if you are deploying on several platforms?

Most of the conversations around undefined behavior (UB) talk about how there are some platforms that can do this, or some compilers do that.
What if you are only interested in one platform and only one compiler (same version) and you know you will be using them for years?
Nothing is changing but the code, and the UB is not implementation-defined.
Once the UB has manifested for that architecture and that compiler and you have tested, can't you assume that from then on whatever the compiler did with the UB the first time, it will do that every time?
Note: I know undefined behavior is very, very bad, but when I pointed out UB in code written by somebody in this situation, they asked this question, and I didn't have anything better to say than: if you ever have to upgrade or port, all the UB will be very expensive to fix.
It seems there are different categories of behavior:

Defined - behavior documented to work by the standards

Supported - behavior documented to be supported, a.k.a. implementation-defined

Extensions - a documented addition; support for low-level bit operations like popcount and branch hints falls into this category

Constant - while not documented, behaviors that will likely be consistent on a given platform; things like endianness and sizeof(int), while not portable, are unlikely to change

Reasonable - generally safe and usually legacy: casting from unsigned to signed, using the low bit of a pointer as temp space

Dangerous - reading uninitialized or unallocated memory, returning a temp variable, using memcpy on a non-POD class

It would seem that Constant might be invariant within a patch version on one platform. The line between Reasonable and Dangerous seems to be moving more and more behavior towards Dangerous as compilers become more aggressive in their optimizations.
OS changes, innocuous system changes (different hardware version!), or compiler changes can all cause previously "working" UB to not work.
But it is worse than that.
Sometimes a change to an unrelated compilation unit, or far-away code in the same compilation unit, can cause previously "working" UB to not work. An example: two inline functions or methods with different definitions but the same signature. One is silently discarded during linking, and completely innocuous code changes can change which one is discarded.
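A minimal sketch of that inline-function trap (file and function names are hypothetical):

    // a.cpp
    inline int f() { return 1; }        // one definition...
    int callA() { return f(); }

    // b.cpp
    inline int f() { return 2; }        // ...and a conflicting one: an ODR
    int callB() { return f(); }         // violation, which is UB

    // The linker silently keeps a single f(). Whether callA() and callB()
    // return 1 or 2 can change with link order or unrelated code changes.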
The code that works in one context can suddenly stop working with the same compiler, OS and hardware when you use it in a different context. An example of this is violating strict aliasing: the compiled code might work when called at spot A, but when inlined (possibly at link time!) the code can change meaning.
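A minimal sketch of such an aliasing violation (hypothetical code):

    #include <cstring>

    int bitsOf(float f) {
        return *(int*)&f;   // UB: reads a float object through an int*,
                            // violating strict aliasing; it may "work" until
                            // inlining lets the optimizer exploit it
    }

    int bitsOfSafe(float f) {
        int i;
        std::memcpy(&i, &f, sizeof i);  // defined way to get the same bits
        return i;
    }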
Your code, if part of a larger project, could conditionally call some 3rd party code (say, a shell extension that previews an image type in a file open dialog) that changes the state of some flags (floating point precision, locale, integer overflow flags, division by zero behavior, etc). Your code, which worked fine before, now exhibits completely different behavior.
Next, many kinds of undefined behavior are inherently non-deterministic. Accessing the contents of a pointer after it is freed (even writing to it) might be safe 99/100, but 1/100 the page was swapped out, or something else was written there before you got to it. Now you have memory corruption. It passes all your tests, but you lacked complete knowledge of what can go wrong.
By using undefined behavior, you commit yourself to a complete understanding of the C++ standard, everything your compiler can do in that situation, and every way the runtime environment can react. You have to audit the produced assembly, not the C++ source, possibly for the entire program, every time you build it! You also commit everyone who reads that code, or who modifies that code, to that level of knowledge.
It is sometimes still worth it.
Fastest Possible Delegates uses UB and knowledge about calling conventions to be a really fast non-owning std::function-like type.
Impossibly Fast Delegates competes. It is faster in some situations, slower in others, and is compliant with the C++ standard.
Using the UB might be worth it, for the performance boost. It is rare that you gain something other than performance (speed or memory usage) from such UB hackery.
Another example I've seen is when we had to register a callback with a poor C API that just took a function pointer. We'd create a function (compiled without optimization), copy it to another page, modify a pointer constant within that function, then mark that page as executable, allowing us to secretly pass a pointer along with the function pointer to the callback.
An alternative implementation would be to have some fixed size set of functions (10? 100? 1000? 1 million?) all of which look up a std::function in a global array and invoke it. This would put a limit on how many such callbacks we install at any one time, but practically was sufficient.
No, that's not safe. First of all, you would have to fix everything, not only the compiler version. I do not have particular examples, but I guess that a different (upgraded) OS, or even an upgraded processor, might change UB results.
Moreover, even feeding different input data to your program can change UB behavior. For example, an out-of-bounds array access (at least without optimizations) usually depends on whatever is in memory after the array.
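For instance (a minimal sketch):

    int a[4] = {0, 1, 2, 3};
    int x = a[4];   // UB: one past the end; without optimizations this
                    // typically reads whatever sits after the array, so the
                    // result can change with layout, input, or build flags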
UPD: see a great answer by Yakk for more discussion on this.
And a bigger problem is optimization and other compiler flags. UB may manifest itself in different ways depending on optimization flags, and it's quite difficult to imagine somebody always using the same optimization flags (at least you'll use different flags for debug and release).
UPD: I just noticed that you never mentioned fixing the compiler version, only fixing the compiler itself. Then everything is even more unsafe: new compiler versions can definitely change UB behavior. From this series of blog posts:
The important and scary thing to realize is that just about any optimization based on undefined behavior can start being triggered on buggy code at any time in the future. Inlining, loop unrolling, memory promotion and other optimizations will keep getting better, and a significant part of their reason for existing is to expose secondary optimizations like the ones above.
This is basically a question about a specific C++ implementation. "Can I assume that a specific behavior, undefined by the standard, will continue to be handled by ($CXX) on platform XYZ in the same way under circumstances UVW?"
I think you should either clarify by saying exactly what compiler and platform you are working with, and then consult their documentation to see if they make any guarantees; otherwise the question is fundamentally unanswerable.
The whole point of undefined behavior is that the C++ standard doesn't specify what happens, so if you are looking for some kind of guarantee from the standard that it's "ok" you aren't going to find it. If you are asking whether the "community at large" considers it safe, that's primarily opinion based.
Once the UB has manifested for that architecture and that compiler and you have tested, can't you assume that from then on whatever the compiler did with the UB the first time, it will do that every time?
Only if the compiler makers guarantee that you can do this, otherwise, no, it's wishful thinking.
Let me try to answer again in a slightly different way.
As we all know, in normal software engineering, and engineering at large, programmers / engineers are taught to do things according to a standard, the compiler writers / parts manufacturers produce parts / tools that meet a standard, and at the end you produce something where "under the assumptions of the standards, my engineering work shows that this product will work", and then you test it and ship it.
Suppose you had a crazy uncle jimbo and one day, he got all his tools out and a whole bunch of two by fours, and worked for weeks and made a makeshift roller coaster in your backyard. And then you run it, and sure enough it doesn't crash. And you even run it ten times, and it doesn't crash. Now jimbo is not an engineer, so this is not made according to standards. But if it didn't crash after even ten times, that means it's safe and you can start charging admission to the public, right?
To a large extent what's safe and what isn't is a sociological question. But if you want to just make it a simple question of "when can I reasonably assume that no one would get hurt by me charging admission, when I can't really assume anything about the product", this is how I would do it. Suppose I estimate that, if I start charging admission to the public, I'll run it for X years, and in that time, maybe 100,000 people will ride it. If it's basically a biased coin flip whether it breaks or not, then what I would want to see is something like, "this device has been run a million times with crash dummies, and it never crashed or showed hints of breaking." Then I could quite reasonably believe that if I start charging admission to the public, the odds that anyone will ever get hurt are quite low, even though there are no rigorous engineering standards involved. That would just be based on a general knowledge of statistics and mechanics.
In relation to your question, I would say, if you are shipping code with undefined behavior, which no one, either the standard, the compiler maker, or anyone else will support, that's basically "crazy uncle jimbo" engineering, and it's only "okay" if you do vastly increased amounts of testing to verify that it meets your needs, based on a general knowledge of statistics and computers.
What you are referring to is more likely implementation defined and not undefined behavior. The former is when the standard doesn't tell you what will happen but it should work the same if you are using the same compiler and the same platform. An example for this is assuming that an int is 4 bytes long. UB is something more serious. There the standard doesn't say anything. It is possible that for a given compiler and platform it works, but it is also possible that it works only in some of the cases.
An example is using uninitialized values. If you use an uninitialized bool in an if, you may get true or false, and it may happen that it is always what you want, but the code will break in several surprising ways.
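A minimal sketch of that (hypothetical code):

    #include <cstdio>

    int main() {
        bool b;                      // never initialized: reading it is UB
        if (b)                       // may take either branch, differ between
            std::puts("true");       // runs, or be "optimized" into something
        else                         // else entirely
            std::puts("false");
    }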
Another example is dereferencing a null pointer. While it will probably result in a segfault in most cases, the standard doesn't even require the program to produce the same results every time it is run.
In summary, if you are doing something that is implementation-defined, then you are safe if you are only developing for one platform and you have tested that it works. If you are doing something that is undefined behavior, then you are probably not safe in any case. It may be that it works, but nothing guarantees it.
Think about it a different way.
Undefined behavior is ALWAYS bad, and should never be used, because you never know what you will get.
However, you can temper that with
Behavior can be defined by parties other than just the language specification
Thus you should never rely on UB, ever, but you can find alternate sources which state that a certain behavior is DEFINED behavior for your compiler in your circumstances.
Yakk gave great examples regarding the fast delegate classes. In those cases, the author explicitly claims that they are engaging in undefined behavior according to the spec. However, they then go on to explain a business reason why the behavior is better defined than that. For example, they declare that the memory layout of a member function pointer is unlikely to change in Visual Studio because there would be rampant business costs due to incompatibilities, which are distasteful to Microsoft. Thus they declare that the behavior is "de facto defined behavior."
Similar behavior can be seen in the typical Linux implementation of pthreads (compiled with gcc). There are cases where it makes assumptions about what optimizations a compiler is allowed to invoke in multithreaded scenarios. Those assumptions are stated plainly in comments in the source code. How is this "de facto defined behavior?" Well, pthreads and gcc go kind of hand in hand. It would be considered unacceptable to add an optimization to gcc which broke pthreads, so nobody will ever do it.
However, you cannot make the same assumption. You may say "pthreads does it, so I should be able to as well." Then, someone makes an optimization, and updates gcc to work with it (perhaps using __sync calls instead of relying on volatile). Now pthreads keeps functioning... but your code doesn't anymore.
Also consider the case of MySQL (or was it Postgre?) where they found a buffer overflow error. The overflow had actually been caught in the code, but it did so using undefined behavior, so the latest gcc started optimizing the entire check out.
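A minimal sketch of the kind of check that can be silently removed (hypothetical code, not the actual database source):

    // Signed overflow is UB, so the compiler may assume `len + offset`
    // never wraps, and delete the entire "caught it" branch.
    bool fits(int len, int offset, int max) {
        if (len + offset < len)   // relies on wraparound: UB for signed int
            return false;         // newer compilers may optimize this out
        return len + offset <= max;
    }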
So, in all, look for an alternate source defining the behavior, rather than relying on it while it is undefined. It is totally legit to find a reason why you know 1.0/0.0 evaluates to infinity under IEEE 754, rather than causing a floating-point trap to occur. But never use that assumption without first proving that it is a valid definition of behavior for you and your compiler.
And please oh please oh please remember that we upgrade compilers every now and then.
Historically, C compilers have generally tended to act in somewhat-predictable fashion even when not required to do so by the Standard. On most platforms, for example, a comparison between a null pointer and a pointer to a dead object will simply report that they are not equal (useful if code wishes to safely assert that the pointer is null and trap if it isn't). The Standard does not require compilers to do these things, but historically compilers which could do them easily have done so.
Unfortunately, some compiler writers have gotten the idea that if such a comparison could not be reached while the pointer was validly non-null, the compiler should omit the assertion code. Worse, if it can also determine that certain input would cause the code to be reached with an invalid non-null pointer, it should assume that such input will never be received, and omit all code which would handle such input.
Hopefully such compiler behavior will turn out to be a short-lived fad. Supposedly it's driven by a desire to "optimize" code, but for most applications robustness is more important than speed, and having compilers mess with code that would have limited the damage caused by errant inputs or errant program behavior is a recipe for disaster.
Until then, however, one must be very careful when using compilers to read the documentation carefully, since there's no guarantee that a compiler writer won't have decided that it was less important to support useful behaviors which, though widely supported, aren't mandated by the Standard (such as being able to safely check whether two arbitrary objects overlap), than to exploit every opportunity to eliminate code which the Standard doesn't require it to execute.
Undefined behavior can be altered by things such as the ambient temperature, which causes rotating hard disk latencies to change, which causes thread scheduling to change, which in turn changes the contents of the random garbage that's getting evaluated.
In short, not safe unless the compiler or the OS specifies the behavior (since the language standard didn't).
There is a fundamental problem with undefined behavior of any kind: it is diagnosed by sanitizers and exploited by optimizers. A compiler can silently change the behavior of such code from one version to another (e.g. by expanding its repertoire of optimizations), and suddenly you'll have some untraceable error in your program. This should be avoided.
There is undefined behavior that is made "defined" by your particular implementation, though. A left shift by a negative amount of bits can be defined by your machine, and it would be safe to use it there, as breaking changes of documented features occur quite rarely. One more common example is strict aliasing: GCC can disable this restriction with -fno-strict-aliasing.
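For instance (a sketch; whether this has a documented result is up to your vendor):

    int y = 1 << -1;   // undefined by the C++ standard; a specific compiler
                       // and machine may document a result, but that is the
                       // vendor's promise, not the language's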
While I agree with the answers that say that it's not safe even if you don't target multiple platforms, every rule can have exceptions.
I would like to present two examples where I'm confident that allowing undefined / implementation-defined behavior was the right choice.
1. A single-shot program. It's not a program intended to be used by anyone else; it's a small, quickly written program created to calculate or generate something now. In such a case a "quick and dirty" solution can be the right choice: for example, if I know the endianness of my system, I may not want to bother writing code that also works with the other endianness. For example, I only needed it to perform a mathematical proof to know whether I would be able to use a specific formula in my other, user-oriented program.
2. Very small embedded devices. The cheapest microcontrollers have memory measured in a few hundred bytes. If you develop a small toy with blinking LEDs or a musical postcard, etc., every penny counts, because it will be produced in the millions with a very low profit per unit. Neither the processor nor the code ever changes, and if you have to use a different processor for the next generation of your product, you will probably have to rewrite your code anyway. A good example of undefined behavior in this case is that there are microcontrollers which guarantee a value of zero (or 255) for every memory location at power-up. In this case you can skip the initialization of your variables. If your microcontroller has only 256 bytes of memory, this can make the difference between a program which fits into memory and code which doesn't.
Anyone who disagrees with point 2, please imagine what would happen if you said something like this to your boss:
"I know the hardware costs only $0.40 and we plan on selling it for $0.50. However, the 40-line program I've written for it only works for this very specific type of processor, so if in the distant future we ever change to a different processor, the code will not be usable and I'll have to throw it out and write a new one. A standard-conforming program which works for every type of processor will not fit into our $0.40 processor. Therefore I request that we use a processor which costs $0.60, because I refuse to write a program which is not portable."
"Software that doesn't change, isn't being used."
If you are doing something unusual with pointers, there's probably a way to use casts to define what you want. Because of their nature, they will not be "whatever the compiler did with the UB the first time".
For example, when you refer to memory pointed at by an uninitialized pointer, you get a random address that may be different every time you run the program.
Undefined behavior generally means you are doing something tricky, and you would be better off doing the task another way.
For instance, this is undefined:
printf("%d %d", ++i, ++i);
It's hard to know what the intent would even be here, and should be re-thought.
Changing the code without breaking it requires reading and understanding the current code. Relying on undefined behavior hurts readability: If I can't look it up, how am I supposed to know what the code does?
While portability of the program might not be an issue, portability of the programmers might be. If you need to hire someone to maintain the program, you'll want to be able to look simply for a '<language x> developer with experience in <application domain>' that fits well into your team rather than having to find a capable '<language x> developer with experience in <application domain> knowing (or willing to learn) all the undefined behavior intrinsics of version x.y.z on platform foo when used in combination with bar while having baz on the furbleblawup'.
Nothing is changing but the code, and the UB is not implementation-defined.
Changing the code is sufficient to trigger different behavior from the optimizer with respect to undefined behavior, so code that seemed to work can easily break due to seemingly minor changes that expose more optimization opportunities, for example a change that allows a function to be inlined. This is covered well in What Every C Programmer Should Know About Undefined Behavior #2/3, which says:
While this is intentionally a simple and contrived example, this sort of thing happens all the time with inlining: inlining a function often exposes a number of secondary optimization opportunities. This means that if the optimizer decides to inline a function, a variety of local optimizations can kick in, which change the behavior of the code. This is both perfectly valid according to the standard, and important for performance in practice.
Compiler vendors have become very aggressive with optimizations around undefined behavior and upgrades can expose previously unexploited code:
The important and scary thing to realize is that just about any optimization based on undefined behavior can start being triggered on buggy code at any time in the future. Inlining, loop unrolling, memory promotion and other optimizations will keep getting better, and a significant part of their reason for existing is to expose secondary optimizations like the ones above.

Force two threads to access a global variable directly in memory?

I know that volatile in C++ does not have the same meaning as in Java, so if I'm writing a C++ application for Windows, how can I share a variable between two threads without allowing each thread to cache its own copy of the variable?
Does using critical sections solve this problem, or does it only provide atomicity?
Actually, on Visual Studio, volatile does have pretty much the same meaning as in Java (or C#). Or at least, it used to, and still does by default; see Microsoft's documentation for details.
That said, in standard C++, it is true that volatile means approximately nothing. Also, in standard terms, threads do not "cache" anything and your question is ill-formed. The relevant concepts are atomicity and ordering, the standard term for the latter being the "happens-before" relationship. Everything you need to design, implement, and reason about multi-threaded algorithms is captured in these concepts; the notion of "cache" has nothing to do with it.
Standard C++11 provides many mechanisms for enforcing atomicity and ordering. You will get a better answer if you ask a specific question about implementing a specific algorithm.
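For example, a minimal sketch using std::atomic (C++11):

    #include <atomic>
    #include <thread>

    std::atomic<int> shared{0};

    void writer() { shared.store(42, std::memory_order_release); }

    void reader() {
        while (shared.load(std::memory_order_acquire) != 42) { }
        // Everything the writer did before its store is visible here:
        // that is the happens-before relationship, no talk of caches needed.
    }

    int main() {
        std::thread t1(writer), t2(reader);
        t1.join(); t2.join();
    }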
[Update, to clarify]
Note that I am not saying you are using the wrong terminology; I am saying you are using the wrong concepts.
The standard does not talk about "cached variables" using different words... It does not talk about cached variables at all. That is because the concept is neither necessary nor sufficient for reasoning about threads. You can know everything about caches and still be unable to analyze concurrent algorithms, and you can know nothing about caches and be able to analyze them perfectly.
Similarly, "accessing a variable directly" is not just the wrong way to talk; the very concept is meaningless in (standard) C++. The notion of "do it right now" means nothing when each thread is progressing at a different rate and observing state changes in a different order. In standard C++, there simply is no "access directly" or "right now"; there is only happens-before.
This is not an academic point. The wrong mental model for concurrency is almost guaranteed to lead to fuzzy thinking and sloppy, buggy code.
Your question really does have no answer as phrased. The right answer could be to use std::atomic or to use std::mutex or to use std::atomic_thread_fence, depending on exactly what it is you are actually trying to do. I am suggesting you ask a question that states clearly what that is.

Why do glBindRenderbuffer and glRenderbufferStorage each take a "target" parameter?

It takes a target parameter, but the only viable target is GL_RENDERBUFFER.
http://www.opengl.org/wiki/Renderbuffer_Object
https://www.khronos.org/opengles/sdk/docs/man/xhtml/glBindRenderbuffer.xml
http://www.opengl.org/wiki/GlRenderbufferStorage
(I'm just learning OpenGL, and already found these two today; maybe I can expect this seemingly-useless target parameter to be common in many functions?)
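For concreteness, the calls in question look like this (the format and dimensions are just illustrative):

    GLuint rbo;
    glGenRenderbuffers(1, &rbo);
    glBindRenderbuffer(GL_RENDERBUFFER, rbo);          // only one legal target
    glRenderbufferStorage(GL_RENDERBUFFER,             // ...and again here
                          GL_DEPTH_COMPONENT24, 640, 480);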
There is a bit of the rationale behind the target parameter in issue 30 of the original EXT_framebuffer_object extension specification. (I generally recommend reading the relevant extension specs even for features which have become core GL features, since those specs often have more details, and sometimes contain bits of the ARB's (or vendors') reasoning for doing things one way or the other, especially in the "issues" sections.):
(30) Do the calls to deal with renderbuffers need a target parameter? It seems unlikely this will be used for anything.
RESOLUTION: resolved, yes
Whether we call it a "target" or not, there is some piece of state in the context to hold the current renderbuffer binding. This is required so that we can call routines like RenderbufferStorage and {Get}RenderbufferParameter() without passing in an object name. It is also possible we may decide to use the renderbuffer target parameter to distinguish between multisample and non multisample buffers. Given those reasons, the precedent of texture objects, and the possibility we may come up with some other renderbuffer target types in the future, it seems prudent and not all that costly to just include the target type now.
It's frequently the case that core OpenGL only defines one legal value for certain parameters, but extensions add others. Whether or not there are any more values defined in extensions today, clearly the architects wanted to leave that door open to future extensions.

When should you not use [[carries_dependency]]?

I've found questions (like this one) asking what [[carries_dependency]] does, and that's not what I'm asking here.
I want to know when you shouldn't use it, because the answers I've read all make it sound like you can plaster this code everywhere and magically you'd get equal or faster code. One comment said the code can be equal or slower, but the poster didn't elaborate.
I imagine appropriate places to use this are on any function return or parameter that is a pointer or reference and that will be passed or returned within the calling thread, and it shouldn't be used on callbacks or thread entry points.
Can someone comment on my understanding and elaborate on the subject in general, of when and when not to use it?
EDIT: I know there's this tome on the subject, should any other reader be interested; it may contain my answer, but I haven't had the chance to read through it yet.
In modern C++ you should generally not use std::memory_order_consume or [[carries_dependency]] at all. They're essentially deprecated while the committee comes up with a better mechanism that compilers can practically implement.
And that hopefully doesn't require sprinkling [[carries_dependency]] and kill_dependency all over the place.
2016-06 P0371R1: Temporarily discourage memory_order_consume
It is widely accepted that the current definition of memory_order_consume in the standard is not useful. All current compilers essentially map it to memory_order_acquire. The difficulties appear to stem both from the high implementation complexity, from the fact that the current definition uses a fairly general definition of "dependency", thus requiring frequent and inconvenient use of the kill_dependency call, and from the frequent need for [[carries_dependency]] annotations. Details can be found in e.g. P0098R0.
Notably, in C++ x - x still carries a dependency, but most compilers would naturally break the dependency and replace that expression with a constant 0. Compilers also sometimes turn data dependencies into control dependencies when they can prove something about value ranges after a branch.
On modern compilers that just promote mo_consume to mo_acquire, fully aggressive optimizations can always happen; there's never anything to gain from [[carries_dependency]] and kill_dependency even in code that uses mo_consume, let alone in other code.
This strengthening to mo_acquire has potentially-significant performance cost (an extra barrier) for real use-cases like RCU on weakly-ordered ISAs like POWER and ARM. See this video of Paul E. McKenney's CppCon 2015 talk C++ Atomics: The Sad Story of memory_order_consume. (Link includes a summary).
If you want real dependency-ordering read-only performance, you have to "roll your own", e.g. by using mo_relaxed and checking the asm to verify it compiled to asm with a dependency. (Avoid doing anything "weird" with such a value, like passing it across functions.) DEC Alpha is basically dead and all other ISAs provide dependency ordering in asm without barriers, as long as the asm itself has a data dependency.
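A sketch of that roll-your-own approach (the names are illustrative; you must inspect the generated asm to confirm the dependency survives):

    #include <atomic>

    struct Data { int value; };
    std::atomic<Data*> ptr;     // assume a writer publishes a Data* here

    int readViaDependency() {
        Data* p = ptr.load(std::memory_order_relaxed);  // no barrier emitted
        // On ARM/POWER the address dependency from p to p->value orders the
        // two loads in hardware, but the compiler makes no such promise:
        // keep the use local and simple, and re-check the asm on each build.
        return p->value;
    }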
If you don't want to roll your own and live dangerously, it might not hurt to keep using mo_consume in "simple" use-cases where it should be able to work; perhaps some future mo_consume implementation will have the same name and work in a way that's compatible with C++11.
There is ongoing work on making a new consume, e.g. 2018's http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0750r1.html
because the answers I've read all make it sound like you can plaster this code everywhere and magically you'd get equal or faster code
The only way you can get faster code is when that annotation allows the omission of a fence.
So the only case where it could possibly be useful is:
your program uses consume ordering on an atomic load operation, in an important frequently executed code;
the "consume value" isn't just used immediately and locally, but also passed to other functions;
the target CPU gives specific guarantees for consuming operations (as strong as a given fence before that operation, just for that operation);
the compiler writers take their job seriously: they manage to translate high level language consuming of a value to CPU level consuming, to get the benefit from CPU guarantees.
That's a bunch of necessary conditions to possibly get measurably faster code.
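For reference, the pattern those conditions describe looks roughly like this (a sketch; on current compilers the consume load is simply strengthened to acquire):

    #include <atomic>

    struct Node { int payload; };
    std::atomic<Node*> head;    // assume a writer publishes nodes here

    [[carries_dependency]] Node* consumeHead() {
        // The dependency is meant to flow out through the returned pointer...
        return head.load(std::memory_order_consume);
    }

    int readPayload() {
        Node* n = consumeHead();    // ...into this dereference, ordering it
        return n->payload;          // without a fence on weakly-ordered CPUs
    }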
(And the latest trend in the C++ community is to give up inventing a proper compiling scheme that's safe in all cases and to come up with a completely different way for the user to instruct the compiler to produce code that "consumes" values, with much more explicit, naively translatable, C++ code.)
One comment said the code can be equal or slower, but the poster didn't elaborate.
Of course, annotations of the kind that you can randomly put on programs simply cannot make code more efficient in general! That would be too easy, and also self-contradictory.
Either an annotation specifies a constraint on your code, that is, a promise to the compiler, and you can't put it anywhere it doesn't correspond to a guarantee in the code (like noexcept in C++ or restrict in C), or it breaks the code in various ways (an exception in a noexcept function stops the program; aliasing of restricted pointers can cause strange miscompilation and bad behavior). The compiler can then use the promise to optimize the code in specific ways.
Or the annotation doesn't constrain the code in any way, in which case the compiler can't count on anything and the annotation does not create any new optimization opportunity.
If you get more efficient code in some cases, at no cost of breaking the program, with an annotation, then you must potentially get less efficient code in other cases. That's true in general, and specifically true with consume semantics, which impose the previously described constraints on the translation of C++ constructs.
I imagine appropriate places to use this are on any function return or parameter that is a pointer or reference and that will be passed or returned within the calling thread
No, the one and only case where it might be useful is when the intended calling function will probably use consume memory order.

OpenGL separate program stages

I am exploring the relatively new feature GL_ARB_separate_shader_objects. What I understand is that I have to create a pipeline object containing shaders for the stages that are mapped to it via
glUseProgramStages
This makes me think of two possibilities when using multiple shaders:
1. Creating multiple pipelines with different vertex/fragment shader pairs (not using other shader types for now), mapped once to each pipeline.
2. Creating a single pipeline and switching the mapping to different shaders at runtime using
glUseProgramStages
I am mostly concerned with performance. Which option is more performant?
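For reference, this is roughly what I mean by the two options (names are illustrative; vsProg, fsProgA, fsProgB are assumed to be separable program objects created with glCreateShaderProgramv):

    GLuint pipeline;
    glGenProgramPipelines(1, &pipeline);
    glBindProgramPipeline(pipeline);

    // Option 2: a single pipeline whose stages are remapped at runtime.
    glUseProgramStages(pipeline, GL_VERTEX_SHADER_BIT,   vsProg);
    glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fsProgA);
    // ...draw...
    glUseProgramStages(pipeline, GL_FRAGMENT_SHADER_BIT, fsProgB);
    // ...draw...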
Your question cannot really be answered, as it would vary with driver implementations and such. However, the facts and history of the functionality should be informative.
EXT_separate_shader_objects was the first incarnation of this functionality. The biggest difference between them was this: you could not use user-defined varyings with the EXT version. You had to use the old compatibility input/outputs like gl_TexCoord.
Issue #2 in the EXT_separate_shader_objects specification attempts to justify this decision as follows:
It is undesirable from a performance standpoint to attempt to support "rendezvous by name" for arbitrary separate shaders because the separate shaders won't be naturally compiled to match their varying inputs and outputs of the same name without a special link step. Such a special link would introduce an extra validation overhead to binding separate shaders. The link itself would have to be deferred until glBegin time since separate shaders won't match when transitioning from one set of consistent shaders to another. This special link would still create errors or undefined behavior when the names of input and output varyings matched but their types did not match.
This suggests that the reason not to rely on name matching, besides incompetence, was performance-related (if you can't tell, I don't think very highly of EXT_SSO). The cost of "rendezvous by name" comes from having to do it at every draw call, rather than being able to do it once.
ARB_separate_shader_objects encapsulates the collection of programs in an object. Therefore, the object can store all of the "rendezvous" data. The first draw call may be slower, but subsequent uses of the same PPO will be fast, as long as you don't attach new programs to it.
So I would take that as evidence that PPOs should have programs set on them once and then be left alone. In general, modifying an object's attachments should be avoided whenever possible. That's why you're encouraged not to go adding or removing textures/renderbuffers from FBOs.