I am trying to use the << and >> operators within my GLSL shader (to unpack an int from a byte texture). However, if I use them, the shader stops working and the compiler reports no error. Other operators like | and & work.
> and < are operators that perform comparisons. The bit shifting operators are >> and <<.
Although these operators are recognized in GLSL, they were "reserved for future use" in version 1.20. They are legal in version 4.10, according to the specification. I don't know in which version they were introduced though.
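For reference, here is a small standalone sketch of the unpacking arithmetic the question is after, written in C++ for quick checking (the packed value is made up); the same >> and & expressions are valid in GLSL versions that support integer bit operations (1.30+ / ES 3.00+):

#include <cstdio>

int main()
{
    // Pull the four bytes back out of a 32-bit value with >> and &.
    unsigned packed = 0x12345678u;
    unsigned b0 =  packed        & 0xFFu;  // 0x78
    unsigned b1 = (packed >> 8)  & 0xFFu;  // 0x56
    unsigned b2 = (packed >> 16) & 0xFFu;  // 0x34
    unsigned b3 = (packed >> 24) & 0xFFu;  // 0x12
    std::printf("%02X %02X %02X %02X\n", b3, b2, b1, b0); // prints 12 34 56 78
}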
Related
OK, this is probably an easy one for the pros out there. I want to use an enum in GLSL in order to do a bitwise AND check on it in an if statement, like in C++.
Pseudo C++ code:
enum PolyFlags
{
    Invisible   = 0x00000001,
    Masked      = 0x00000002,
    Translucent = 0x00000004,
    ...
};

...

if (Flag & Masked)
    Alphathreshold = 0.5;
But I am lost right at the beginning, because it already fails to compile with:
'enum' : Reserved word
I read that enums in GLSL are supposed to work, as is the bitwise AND, but I can't find a working example.
So, is it actually working/supported, and if so, how? I already tried different #version directives in the shader, but no luck so far.
The OpenGL Shading Language does not have enumeration types. However, enum is a reserved keyword, which is why you got that particular compiler error.
C enums are really just syntactic sugar for a value (C++ gives them some type-safety, with enum classes having much more). So you can emulate them in a number of ways. Perhaps the most traditional (and dangerous) is with #defines:
#define Invisible 0x00000001u
#define Masked 0x00000002u
#define Translucent 0x00000004u
A more reasonable way is to declare compile-time const qualified global variables. Any GLSL compiler worth using will optimize them away to nothingness, so they won't take up any more resources than the #define. And it won't have any of the drawbacks of the #define.
const uint Invisible = 0x00000001u;
const uint Masked = 0x00000002u;
const uint Translucent = 0x00000004u;
Obviously, you need to be using a version of GLSL that supports unsigned integers and bitwise operations (aka: GLSL 1.30+, or GLSL ES 3.00+).
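Putting the const approach together, here is a minimal standalone sketch of the flag test, written in C++ for quick checking (the Flag value is made up); note that GLSL requires the if condition to be a bool, so the explicit != 0u comparison is what you would write in the shader as well:

#include <cstdio>

// Sketch only: the flag values mirror the consts shown above.
const unsigned Invisible   = 0x00000001u;
const unsigned Masked      = 0x00000002u;
const unsigned Translucent = 0x00000004u;

int main()
{
    unsigned Flag = Masked | Translucent;      // made-up input value
    float Alphathreshold = 0.0f;
    if ((Flag & Masked) != 0u)                 // bitwise AND, then explicit comparison
        Alphathreshold = 0.5f;
    std::printf("%f\n", Alphathreshold);       // prints 0.500000
}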
GLSL has component-wise comparison functions such as lessThan, greaterThan, etc., which return a bvec.
There's also any() and all(), but there seems to be no and().
If I have two bvec3s and want a new bvec3, equivalent to:
bvec3 new = bvec3(two.x && one.x, two.y && one.y, two.z && one.z);
Is there a faster or more optimized way to do this?
I'm trying to write highly optimized GLSL code.
I'm not sure at all whether this would be more efficient, but I believe you could do the AND of two bvec3 values by converting them to another vector type like uvec3 or vec3, using the more extensive operations available on those types (like bitwise AND or multiplication), and then converting back.
With your bvec3 values one and two, here are a few options:
bvec3(uvec3(one) & uvec3(two))
bvec3(uvec3(one) * uvec3(two))
bvec3(vec3(one) * vec3(two))
You should definitely benchmark before using this. There's a good chance that the component-wise expression is faster.
On Nvidia GPUs you can just write bvec3 new = two && one;.
The specification is confusing on this. It states:
The logical binary operators and (&&), or (||), and exclusive or (^^) operate only on two Boolean expressions and result in a Boolean expression
and throughout it lists bvec* among the "Boolean types", like here:
It is a compile-time error to declare a vertex shader input containing any of the
following:
A Boolean type (bool, bvec2, bvec3, bvec4)
The glslang compiler does not interpret the spec this way and will give you a compile error.
New to C++; I know the use of << in, e.g.:
cout<<"hi";
but the following:
int a = 1<<3;
cout<<a;
will output 8. Why is this, I simply ask? How do I interpret the use of << here?
The << operator performs a bitshift when applied to integers.
1 in binary is 00000001. Shift it by three bits and you get 00001000, which is 8.
The << in 1<<3 is the bitwise left shift operator, not stream insertion. It shifts 1 to the left by three bits.
The operation that a particular operator performs, and the result of that operation, depend on what kind of type/object stands on its left and right side, so expect different results for different operands. As already explained, in this case it does a bit shift, because that is the operation defined for this operator when both the left and right operands are ints. In the case of
cout<<"hi";
you have a std::ostream object on the left and a string literal (const char*) on the right, and that is why the result of this operation is different: << is in this case defined as an insertion operator; in the case of two ints it is defined as a bit shift operator.
int a = 1<<3;
This is the "real" << operator. It performs bit-shifting. You normally do not need such an operation in high-level code; it dates back to the old days, long before C++ existed, when there was only C and when programmers had to manually tinker with such things much more frequently than in today's world.
In order to understand what's going on, you need to know how binary numbers work.
A decimal 1 remains, coincidentally, the same in binary:
1
Now picture some zeroes in front of it (this usually makes it easier to understand for beginners -- leading zeroes do not change a number's meaning, neither in decimal nor in binary):
...00000001
The << operation moves the bit to the left:
    moved by 3 positions
       +--+
       |  |
       v  |
...00001000
Now remove the leading zeroes:
1000
Here we go. Binary 1000 is 8 in decimal (1*8 + 0*4 + 0*2 + 0*1). For details and interesting corner cases, consult any good C++ book or online tutorial.
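To make the pattern concrete, here is a tiny standalone example (purely illustrative):

#include <iostream>

int main()
{
    // Left-shifting 1 by n positions multiplies it by 2 to the power n.
    for (int n = 0; n < 5; ++n)
        std::cout << "1 << " << n << " = " << (1 << n) << '\n';
    // Prints 1, 2, 4, 8 and 16.
}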
Now let's look at the other meaning of << in C++:
cout<<"hi";
This is an overloaded operator. C++ allows you to take almost any built-in operator and give it a new meaning for different classes. The language's standard library does exactly that with std::ostream (of which std::cout is an instance). The meaning does not have anything to do with bit-shifting anymore. It roughly means "output".
Nowadays, it is considered bad programming style to take an operator and give it a meaning completely unrelated to its original one. Accepted uses of operator overloading are those where the new meaning fits the original semantics, e.g. when you overload + or - for mathematical matrix classes. However, when C++ was invented, this feature was apparently used more liberally.
This is why we are stuck with the completely different meaning of << (and >>) today. In fact, the new semantics even seem to overshadow the original one for quite a few beginners (as evidenced by your question).
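To illustrate the mechanism, here is a minimal sketch of overloading << for a made-up type; it is the same mechanism the standard library uses for std::ostream:

#include <iostream>

struct Point { int x, y; };

// Give << a new meaning when the left operand is a std::ostream
// and the right operand is a Point.
std::ostream& operator<<(std::ostream& os, const Point& p)
{
    return os << '(' << p.x << ", " << p.y << ')';
}

int main()
{
    Point p{1, 2};
    std::cout << p << '\n'; // prints (1, 2)
}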
In this MSDN article on file sharing mode with std::ofstream, Microsoft writes:
To combine the filebuf::sh_read and filebuf::sh_write modes, use the logical OR (||) operator.
Both constants are of type int, as far as I can see, so I don't understand why we should use the logical OR instead of the bitwise OR (|). I always thought that the logical OR produces a Boolean value, so there is no way of interpreting the result?
It is a documentation error. In later versions, they have restructured the documentation, delegating the explanation of bitmask types to the following page:
A bitmask type can be implemented as either an integer type or an enumeration. In either case, you can perform bitwise operations (such as AND and OR) on values of the same bitmask type. The elements A and B of a bitmask type are nonzero values such that A & B is zero.
Get there via Google:
http://msdn.microsoft.com/en-us/library/5785s5ts(v=vs.71).aspx
http://msdn.microsoft.com/en-us/library/7z434859(v=vs.71).aspx
http://msdn.microsoft.com/en-us/library/t60aakye(v=VS.71).aspx
http://msdn.microsoft.com/en-us/library/y1et11xw(v=VS.71).aspx
http://msdn.microsoft.com/en-us/library/5kb732k7(v=VS.71).aspx
Yay for MSDN navigation. Also, the VS2010 documentation has been altered again: the newest page doesn't even describe the semantics of the flags fields anymore (although you could take the one mention of _Mode | ios_base::out to imply that the parameters are bitmask combinations).
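As a present-day illustration of the same idea (the filebuf::sh_* constants come from the old iostream library; the file name below is made up), standard openmode flags are combined with the bitwise operator:

#include <fstream>

int main()
{
    // std::ios_base::openmode is a bitmask type: combine flags with
    // bitwise OR (|), not logical OR (||).
    std::ofstream out("example.txt", std::ios::out | std::ios::app);
}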
In C++ it's possible to use a logical operator where a bitwise operator was intended:
int unmasked = getUnmasked(); // some wide value
int masked = unmasked & 0xFF; // isolate lowest 8 bits
The second statement could easily be mistyped:
int masked = unmasked && 0xFF; // && used instead of &
This will cause incorrect behaviour: masked will now be either 0 or 1 when it is intended to be anywhere from 0 to 255. And C++ will never complain.
Is it possible to design code in such a way that such errors are detected at the compiler level?
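For the record, here is a small standalone reproduction of the mix-up described above (the values are made up):

#include <cstdio>

int main()
{
    int unmasked   = 0x1234;
    int masked_ok  = unmasked & 0xFF;   // 0x34: the intended bitmask
    int masked_bug = unmasked && 0xFF;  // 1: logical AND of two nonzero values
    std::printf("%d %d\n", masked_ok, masked_bug); // prints 52 1
}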
In your coding standards, ban the direct use of bitwise operations in arbitrary parts of the code. Make it mandatory to call a function instead.
So instead of:
int masked = unmasked & 0xFF; // isolate lowest 8 bits
You write:
int masked = GetLowestByte(unmasked);
As a bonus, you'll get a code base which doesn't have dozens of error-prone bitwise operations spread all over it.
Only in one place (the implementation of GetLowestByte and its sisters) will you have the actual bitwise operations. Then you can read those lines two or three times to see if you blew it. Even better, you can unit test that part.
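One possible shape of such a wrapper (the name GetLowestByte comes from the suggestion above; the implementation shown is simply the obvious one):

#include <cstdio>

// The single place in the code base that performs the bitwise operation.
inline int GetLowestByte(int value)
{
    return value & 0xFF; // isolate the lowest 8 bits
}

int main()
{
    std::printf("0x%X\n", GetLowestByte(0x1234)); // prints 0x34
}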
This is a bit Captain Obvious, but you could of course apply encapsulation and just hide the bitmask inside a class. Then you can use operator overloading to handle the boolean operator&&() as you see fit.
I guess that a decent implementation of such a "safe mask" need not be overly expensive performance-wise, either.
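A rough sketch of what such a "safe mask" type might look like (the class name and interface are made up for illustration); logical AND is explicitly deleted for it, so the & / && typo no longer compiles:

#include <cstdio>

struct Mask
{
    unsigned bits;
};

inline Mask operator&(Mask a, Mask b) { return Mask{a.bits & b.bits}; }
bool operator&&(Mask, Mask) = delete;       // the typo becomes a compile error

int main()
{
    Mask unmasked{0x1234u};
    Mask masked = unmasked & Mask{0xFFu};   // fine: bitwise AND
    // Mask oops = unmasked && Mask{0xFFu}; // error: use of deleted operator&&
    std::printf("0x%X\n", masked.bits);     // prints 0x34
}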
In some instances you might get a compiler warning (I wouldn't expect one in your example though). A tool like lint might be able to spot possible mistakes.
I think the only way to be sure is to define your coding standards to make the difference between the two operators more obvious - something like:
template<typename T>
T BitwiseAnd( T value, T mask ) { return value & mask; }
and then ban direct use of the bitwise operators & and |.
Both operators represent valid operations on integers, so I don't see any way of detecting a problem. How is the compiler supposed to know which operation you really wanted?