OpenGL ES 2 glGetActiveAttrib and non-floats - C++

I'm porting an engine from DX9/10/11 over to OpenGL ES 2. I'm having a bit of a problem with glGetActiveAttrib though.
According to the docs the type returned can only be one of the following:
The symbolic constants GL_FLOAT, GL_FLOAT_VEC2, GL_FLOAT_VEC3,
GL_FLOAT_VEC4, GL_FLOAT_MAT2, GL_FLOAT_MAT3, or GL_FLOAT_MAT4 may be
returned.
This seems to imply that you cannot have an integer vertex attribute? Am I missing something? Does this really mean you HAVE to implement everything as floats? Does this mean I can't implement a colour as 4 byte values?
If so, this seems very strange as this would be a horrific waste of memory ... if not, can someone explain where I'm going wrong?
Cheers!

Attributes must be declared as floats in a GLSL ES shader, but you can pass them SHORTs or any of the other supported types listed here. The conversion will happen automatically.
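For example, a colour can stay as four bytes per vertex and still feed an attribute declared as vec4 in the shader. A minimal sketch, assuming an ES 2.0 context; colorAttrib, stride, and offset are placeholders for your own attribute location and vertex layout:

// The buffer stores the colour as 4 unsigned bytes per vertex.
// Passing GL_TRUE for "normalized" makes the GL convert 0..255 to 0.0..1.0
// before the value reaches the `attribute vec4 a_color;` in the shader.
glVertexAttribPointer(colorAttrib,       // location from glGetAttribLocation
                      4,                 // four components
                      GL_UNSIGNED_BYTE,  // stored as bytes, not floats
                      GL_TRUE,           // normalize to [0, 1]
                      stride,
                      (const void*)offset);
glEnableVertexAttribArray(colorAttrib);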

Related

Why are the OpenGL functions using a mix of GLuint and GLint for attribute locations/indices?

I'm trying to write a very simple OpenGL program, and I'm getting a bit confused by the mix of using GLint/GLuint.
glGetAttribLocation returns the attribute location (index?) as a GLint, while
glVertexAttribPointer accepts a GLuint as the attribute index. Why aren't both using the same type?
It has to do with the etymology of these functions.
glVertexAttribPointer was not created for GLSL specifically. It originally came from ARB_vertex_program, which is the old assembly shading language extension. glVertexAttribPointer used unsigned attribute indices. Also, it never had an API to query attribute indices; after all, it was for assembly where you hard-coded your attribute indices directly into your shader. Why would you need to query something you provided?
So, along comes GLSL, first defined by the ARB_shader_object extension (and written by the people at 3D Labs, a thankfully defunct organization, considering all of the mistakes they made with GLSL). They used signed integers for their locations. Note that the glUniform*ARB functions all take GLint rather than GLuint. So GLSL consistently uses signed integers for this sort of thing.
However, ARB_vertex_program already had functions for specifying attribute arrays to vertex shaders. Rather than create an entire new series of functions that do the same thing, their ARB_vertex_shader extension just used the ones we already had. This allowed existing code to be able to use GLSL relatively painlessly.
But it created this inconsistency, because the GLSL extensions all use GLint, while glVertexAttribPointer used GLuint.
From the glGetAttribLocation documentation: "if name starts with the reserved prefix "gl_", a value of -1 is returned."
The signed GLint return type is what leaves room for that -1 error case.
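In practice the mismatch is handled with a check and a cast; a minimal sketch, where program and "a_position" are placeholder names:

GLint loc = glGetAttribLocation(program, "a_position");
if (loc == -1) {
    // attribute is inactive, misspelled, or uses the reserved "gl_" prefix
} else {
    // once -1 has been ruled out, the value fits safely in the GLuint the array APIs expect
    glEnableVertexAttribArray((GLuint)loc);
    glVertexAttribPointer((GLuint)loc, 3, GL_FLOAT, GL_FALSE, 0, 0);
}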

Making a NaN on purpose in WebGL

I have a GLSL shader that's supposed to output NaNs when a condition is met. I'm having trouble actually making that happen.
Basically I want to do this:
float result = condition ? NaN : whatever;
But GLSL doesn't seem to have a constant for NaN, so that doesn't compile. How do I make a NaN?
I tried making the constant myself:
float NaN = 0.0/0.0; // doesn't work
That works on one of the machines I tested, but not on another. Also it causes warnings when compiling the shader.
Given that the obvious computation didn't work on one of the machines I tried, I get the feeling that doing this correctly is quite tricky and involves knowing a lot of real-world facts about the inconsistencies between various types of GPUs.
Don't use NaNs here.
Section 2.3.4.1 from the OpenGL ES 3.2 Spec states that
The special values Inf and −Inf encode values with magnitudes too large to be represented; the special value NaN encodes “Not A Number” values resulting from undefined arithmetic operations such as 0/0. Implementations are permitted, but not required, to support Inf's and NaN's in their floating-point computations.
So it really seems to depend on the implementation. You should be outputting another value instead of NaN.
Pass it in as a uniform
Instead of trying to make the NaN in GLSL, make it in JavaScript and pass it in as a uniform:
// GLSL
uniform float u_NaN;
// JavaScript ("program" is your linked WebGLProgram)
gl.uniform1f(gl.getUniformLocation(program, "u_NaN"), NaN);
Fool the Optimizer
It seems like the issue is the shader compiler performing an incorrect optimization. Basically, it replaces a NaN expression with 0.0. I have no idea why it would do that... but it does. Maybe the spec allows for undefined behavior?
Based on that assumption, I tried making an obfuscated method that produces a NaN:
float makeNaN(float nonneg) {
    return sqrt(-nonneg - 1.0);
}
...
float NaN = makeNaN(some_variable_I_know_isnt_negative);
The idea is that the optimizer isn't clever enough to see through this.
And, on the test machine that was failing, this works! I also tried simplifying the function to just return sqrt(-1.0), but that brought back the failure (further reinforcing my belief that the optimizer is at fault).
This is a workaround, not a solution.
A sufficiently clever optimizer could see through the obfuscation and start breaking things again.
I only tested it on a couple of machines, and this is clearly something that varies a lot.
The Unity GLSL compiler will convert 0.0f/0.0f to intBitsToFloat(int(0xFFC00000u)). Since intBitsToFloat is supported from OpenGL ES 3.0 onwards, this is a solution that works in WebGL2 but not in WebGL1.

Is it legal to reuse Bindings for several Shader Storage Blocks

Suppose that I have one shader storage buffer and want to have several views into it, e.g. like this:
layout(std430,binding=0) buffer FloatView { float floats[]; };
layout(std430,binding=0) buffer IntView { int ints[]; };
Is this legal GLSL?
opengl.org says no:
Two blocks cannot use the same index.
However, I could not find such a statement in the GL 4.5 Core Spec or GLSL 4.50 Spec (or the ARB_shader_storage_buffer_object extension description) and my NVIDIA Driver seems to compile such code without errors or warnings.
Does the OpenGL specification expressly forbid this? Apparently not. Or at least, if it does, I can't see where.
But that doesn't mean that it will work cross-platform. When dealing with OpenGL, it's always best to take the conservative path.
If you need to "cast" memory from one representation to another, you should just use separate binding points. It's safer.
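A minimal sketch of that conservative approach, assuming a GL 4.3+ context and a buffer object ssbo that already holds the data:

// GLSL side: give each view its own binding point.
//   layout(std430, binding = 0) buffer FloatView { float floats[]; };
//   layout(std430, binding = 1) buffer IntView   { int   ints[];   };
// API side: bind the same buffer object to both indices, so both views
// alias the same memory without two blocks sharing one binding number.
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, ssbo);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, ssbo);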
There is some official word on this now. I filed a bug on this issue, and they've read it and decided some things. Specifically, the conclusion was:
There are separate binding namespaces for: atomic counters, images, textures, uniform buffers, and SSBOs.
We don't want to allow aliasing on any of them except atomic counters, where aliasing with different offsets (e.g. sharing a binding) is allowed.
In short, don't do this. Hopefully, the GLSL specification will be clarified in this regard.
This was "fixed" in the revision 7 of GLSL 4.5:
It is a compile-time or link-time error to use the same binding number for more than one uniform block or for more than one buffer block.
I say "fixed" because you can still perform aliasing manually via glUniform/ShaderStorageBlockBinding. And the specification doesn't say how this will work exactly.

What is the best way to subtype numeric parameters for OpenGL?

In the OpenGL specification there are certain parameters which take a set of values of the form GL_OBJECTENUMERATIONi, with i ranging from 0 to some number indicated by something like GL_MAX_OBJECT. (Lights being an 'object', as one example.) It seems obvious that the number indicating the upper range is to be obtained through the glGet function, providing some indirection.
However, according to a literal interpretation of the OpenGL specification, the "texture" parameter for glActiveTexture must be one of GL_TEXTUREi, where i ranges from 0 to (GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1). This would mean that the set of accepted constants must be GL_TEXTURE0 to GL_TEXTURE35660, because that constant has the value 35661.
Language-lawyering aside, this setup means that the subtype can be not only disjoint, but out of order as well, such that the following C-ish mapping would be valid:
#define GL_TEXTURE0 0x84C0
#define GL_TEXTURE1 0x84C1
#define GL_TEXTURE2 0x84C2
#define GL_TEXTURE3 0x84A0
#define GL_TEXTURE4 0x84A4
#define GL_TEXTURE5 0x84A5
#define GL_TEXTURE6 0x84A8
#define GL_TEXTURE7 0x84A2
First, is this actually an issue, or are the constants always laid out as if GL_OBJECTi = GL_OBJECT(i-1) + 1?
If that relationship holds true then there is the possibility of using Ada's subtype feature to avoid passing in invalid parameters...
Ideally, something like:
-- This is an old [and incorrect] declaration using constants.
-- It's just here for an example.
SubType Texture_Number is Enum Range
    GL_TEXTURE0 .. Enum'Max(
        GL_MAX_TEXTURE_COORDS - 1,
        GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1);
But, if the maximum is dynamically determined then we have to do some monkeying about:
With GL_Constants;
Generic
    GL_MAX_TEXTURE : Integer;
    -- ...and one of those for EACH maximum for the ranges.
Package Types is
    Use GL_Constants;
    SubType Texture_Number is Enum Range
        GL_TEXTURE0 .. GL_MAX_TEXTURE;
End Types;
with an instantiation of Package GL_TYPES is new Types( GL_MAX_TEXTURE => glGet(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS) ); and then using this new GL_TYPES package... a little more work, and a little more cumbersome than straight-out subtyping.
Most of this comes from being utterly new to OpenGL and not fully knowing/understanding it; but it does raise interesting questions as to the best way to proceed in building a good, thick Ada binding.
This would mean that the set of accepted constants must be GL_TEXTURE0 to GL_TEXTURE35660, because that constant has the value 35661.
No, it doesn't mean this. GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is an implementation-dependent value that is to be queried at runtime using glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &out).
Regarding the rest: the OpenGL specification states that GL_TEXTUREi = GL_TEXTURE0 + i, and similar for all other object types, with i < n, where n is some reasonable number.
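Putting those two together, a minimal sketch of how the limit and the derived constants are meant to be used:

// Query the implementation-dependent limit at runtime...
GLint maxUnits = 0;
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &maxUnits);
// ...then derive the texture-unit enums arithmetically, since the spec
// guarantees GL_TEXTUREi = GL_TEXTURE0 + i for 0 <= i < maxUnits.
for (GLint i = 0; i < maxUnits; ++i) {
    glActiveTexture(GL_TEXTURE0 + i);
    // bind or configure the texture for unit i here
}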
This is one of those situations where I don't think getting extra-sexy with the types buys you a whole lot.
If you were to just make a special integer type for GL_TEXTURE (type GL_TEXTURE is range 16#84C0# .. 16#8B4C#;) and use that type for all parameters looking for GL textures, the compiler would prevent the user from doing math between those and other integer objects. That would probably be plenty. It is certainly way better than what the poor C/C++ coders are stuck with!
Then again, I've never been a proponent of super-thick Ada bindings. Ada bindings should be used to make the types more Ada-like, and to convert C error codes into exceptions. If there are other ways to save the user a bit of work, go ahead and do it. However, do not abstract away any of the power of the API!
There were multiple questions in the comments about my choice of using a separate numeric type rather than an Integer subtype.
It is in fact a common Ada noob mistake to start making yourself custom numeric types when integer subtypes will do, and then getting annoyed at all the type conversions you have to do. The classic example is someone making a type for velocity, then another type for distance, then another for force, and then finding they have to do a type conversion on every single damn math operation.
However, there are times when custom numeric types are called for. In particular, you want to use a custom numeric type whenever objects of that type should live in a separate type universe from normal integers. The most common occurrence of this happens in API bindings, where the number in question is actually a C-ish designation for some resource. That is the exact situation we have here. The only math you will ever want to do on GL_Textures is comparison with the type's bounds, and simple addition and subtraction by a literal amount. (Most likely GL_Texture'Succ will be sufficient.)
As a huge bonus, making it a custom type will prevent the common error of plugging a GL_Texture value into the wrong parameter in the API call. C API calls do love their ints...
In fact, if it were reasonable to sit and type them all in, I suspect you'd be tempted to just make the thing an enumeration. That'd be even less compatible with Integer without conversions, but nobody here would think twice about it.
OK, first rule you need to know about OpenGL: whenever you see something that says, "goes from X to Y", and one of those values is a GL_THINGY, they are not talking about the numeric value of GL_THINGY. They are talking about an implementation-dependent value that you query with GL_THINGY. This is typically an integer, so you use some form of glGetIntegerv to query it.
Next:
this setup means that the subtype can be not only disjoint, but out of order as well, such that the following C-ish mapping would be valid:
No, it wouldn't.
Every actual enumerator in OpenGL is assigned a specific value by the ARB. And the ARB-assigned values for the named GL_TEXTUREi enumerators are:
#define GL_TEXTURE0 0x84C0
#define GL_TEXTURE1 0x84C1
#define GL_TEXTURE2 0x84C2
#define GL_TEXTURE3 0x84C3
#define GL_TEXTURE4 0x84C4
#define GL_TEXTURE5 0x84C5
#define GL_TEXTURE6 0x84C6
#define GL_TEXTURE7 0x84C7
#define GL_TEXTURE8 0x84C8
Notice how they are all in a sequential ordering.
As for the rest, let me quote you from the OpenGL 4.3 specification on glActiveTexture:
An INVALID_ENUM error is generated if an invalid texture is specified. texture is a symbolic constant of the form TEXTUREi, indicating that texture unit i is to be modified. The constants obey TEXTUREi = TEXTURE0 + i, where i is in the range 0 to k - 1, and k is the value of MAX_COMBINED_TEXTURE_IMAGE_UNITS.
If you're creating a binding in some language, the general idea is this: don't strongly type certain values. This one in particular. Just take whatever the user gives you and pass it along. If the user gets an error, they get an error.
Better yet, expose a more reasonable version of glActiveTexture that takes an integer instead of an enumerator and do the addition yourself.
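A sketch of that wrapper idea (the name ActiveTextureUnit is just an illustration, not part of any API):

// Takes a plain zero-based unit index and hides the enum arithmetic.
void ActiveTextureUnit(GLuint unit) {
    // Valid because the spec defines GL_TEXTUREi = GL_TEXTURE0 + i
    // for i < GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS.
    glActiveTexture(GL_TEXTURE0 + unit);
}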

What defines those OpenGL render dimension limits?

On my system, anything I draw with OpenGL outside of the range of around (-32700, 32700) is not rendered (or is folded back into the range; I can't figure out which).
What defines those limits? Can they be modified?
Edit: Thanks all for pointing the right direction. It turned out my drawing code was using GLshort values. I replaced those by GLint values and I don't see those limits anymore.
I don't know what exactly you are doing, but this looks like a numeric overflow of a signed 16-bit integer (-32768..32767).
Are you calling glVertex3s to draw your vertices? As Malte Clasen pointed out, your vertices would overflow at 2^15-1.
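To illustrate the fix the questioner describes, a minimal legacy-GL sketch (coordinate values are made up; requires a compatibility context):

// With GLshort data, any coordinate past 32767 cannot be represented and
// typically wraps to a negative value before OpenGL ever sees it.
// Storing the same values as GLint is fine.
GLint verts[] = { 0, 0,   40000, 0,   40000, 40000 };
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_INT, 0, verts);   // GL_INT instead of GL_SHORT
glDrawArrays(GL_TRIANGLES, 0, 3);
glDisableClientState(GL_VERTEX_ARRAY);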