OpenGL - layout information

I have a GLSL compute shader. A very simple one. I specify an input texture and an output image, mapped to the texture.
layout (binding=2) uniform sampler2D srcTex;
layout (binding=5, r32f) writeonly uniform image2D destTex;
In my code, I need to call glBindImageTexture to attach the texture to the image. Now, the first parameter of this function is
unit
Specifies the index of the image unit to which to bind the texture
I know that I can set this value to 5 from code manually, but how can I do this automatically?
If I create a shader I use refraction to get variable names and their locations.
How can I get the binding ID?
How can I get the texture format from layout (r32f) to set it in my code automatically?

If I create a shader I use refraction to get variable names and their locations.
I think you mean reflection and not refraction, assuming you mean the programming language concept.
Now, the interesting thing here is that image2D and sampler2D are what are known as opaque types in GLSL (handles to an image unit). Ordinarily, you could use the modern glGetProgramResource* (...) API (GL_ARB_program_interface_query or core in 4.3) to query really detailed information about uniforms, buffers, etc. in a GLSL shader. However, opaque types are not considered program resources by GLSL and are not compatible with that feature - the information you want is related to the image unit the uniform references and not the uniform itself.
How can I get the binding ID?
This is fairly straightforward.
You can call glGetUniformiv (...) to get the value assigned to your image2D. On the GL side of things, opaque uniforms work no differently than any other uniform (for assigning and querying values), but in GLSL you cannot assign a value to an opaque data type using the = operator. That is why the layout (binding = ...) semantics were created; they allow you to assign the binding in the shader itself rather than having to call an API function, and they are completely optional.
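For example, a minimal sketch (assuming program is your linked program object and texture is the texture you want to attach):
// Query the image unit that destTex was assigned via layout (binding=5).
GLint location = glGetUniformLocation(program, "destTex");
GLint unit = 0;
glGetUniformiv(program, location, &unit); // unit is now 5
// Bind the texture to that unit; the format still has to be known client-side.
glBindImageTexture(unit, texture, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_R32F);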
How can I get the texture format from layout (r32f) to set it in my code automatically?
That is not currently possible, and in the future it may become irrelevant for loads (it already is for stores). You do not technically need an exact match here; as long as the image format size / class match, you can do an image load without problems.
In truth, the only reason you have to declare the format in GLSL is so that the return format of imageLoad (...) is known at compile time (which makes it irrelevant for writeonly-qualified images). There is an EXT extension right now (GL_EXT_shader_image_load_formatted) that completely eliminates the need to declare the image format for non-writeonly images.
Even for non-writeonly images, since only the size / class need to match, I do not think you really need this. r32f has an image class of 1x32 and a size of 32, but the fact that it is floating-point is not relevant to anything. Thus, what you might really consider is naming your uniforms by their image class instead - call this uniform something like destTex_1x32 and it will be obvious that it's a 1-component 32-bit image.

Related

Possible to find internal format of texture in shader?

Is it possible to find out the internal format of a texture from within the shader (GLSL)?
For example, if I have a texture with the format GL_RG, is it possible to recognize in the shader that the blue and alpha values are "constant" and can be ignored?
I know I can use a uniform to pass the texture type from C++ to the shaders. But is there an "intrinsic" way to find out from within the shader?
No, I don't believe there is anything that would give you this information directly.
Looking at the latest GLSL spec (4.50 at this time), I would expect a hypothetical function to get this information to be listed in section "8.9.1. Texture Query Functions" starting on page 158. But the only functions listed there are:
textureSize: Get size of texture.
textureQueryLod: Get the level of detail used for the given texture coordinates.
textureQueryLevels: Get the number of mipmap levels in the texture.
textureSamples: Get the number of samples for a multisampled texture.
So unless there is something completely different I missed, what you're looking for does not exist.
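The usual workaround is the uniform approach you already mentioned: query the internal format on the C++ side and hand it to the shader yourself. A minimal sketch (the uniform name srcComponents is made up for illustration):
// Read back the internal format of the bound texture's base level.
GLint internalFormat = 0;
glBindTexture(GL_TEXTURE_2D, texture);
glGetTexLevelParameteriv(GL_TEXTURE_2D, 0, GL_TEXTURE_INTERNAL_FORMAT, &internalFormat);
// Tell the shader how many components are meaningful.
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "srcComponents"),
            internalFormat == GL_RG ? 2 : 4);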

Create an image2D from a uint64_t image handle

To use bindless images in OpenGL, you need to create a GLuint64 handle using glGetImageHandleARB. You can then set this handle to a uniform image2D variable and use the image as if you had bound it the old way. No problems with that. With textures/samplers, it is further possible to set the (texture) handle not to a sampler2D, but to a plain uniform uint64_t variable. This handle can then be used to "construct" a sampler object at runtime with the constructor sampler2D(handle).
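For context, the image-handle path looks roughly like this sketch (assuming texture, program and location are set up elsewhere):
// Create a bindless handle for level 0, make it resident, hand it to the shader.
GLuint64 handle = glGetImageHandleARB(texture, 0, GL_FALSE, 0, GL_RGBA8);
glMakeImageHandleResidentARB(handle, GL_READ_WRITE);
glProgramUniformHandleui64ARB(program, location, handle);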
The extension description says:
Samplers are represented using 64-bit integer handles, and may be
converted to and from 64-bit integers using constructors.
and
Images are represented using 64-bit integer handles, and may be
converted to and from 64-bit integers using constructors.
So I would assume that the construction for images works the same way as it does for samplers, but this is not the case. Sample code:
#version 450
#extension GL_ARB_bindless_texture : enable
#extension GL_NV_gpu_shader5 : enable
layout(bindless_image, rgba8) uniform image2D myBindlessImage;
uniform uint64_t textureHandle;
uniform uint64_t imageHandle;
void main()
{
sampler2D mySampler = sampler2D(textureHandle); // works like a charm
... = texture(mySampler, texCoord);
... = imageLoad(myBindlessImage, texCoordI); // works like a charm
layout(rgba8) image2D myImage = image2D(imageHandle); // error C7011: implicit cast from "uint64_t" to "int"
... = imageLoad(myImage, texCoordI);
}
Apparently, neither the image2D(uint64_t) constructor nor the image2D(uvec2) constructor mentioned in the extension description are known to the compiler. Am I missing something here or is this simply not implemented right now, although it should be? The video driver I am using right now is Nvidia's 355.82. I would be glad if someone could shed some light on whether this works with any other driver/vendor's card.
By the way, why would I need that feature: In contrast to texture handles, image handles do not identify the whole underlying data, but only one texture level. If you want to do any mipmap or otherwise hierarchical work in shaders and need to bind several/all texture levels, you could provide the handles of all levels in a buffer and then construct them at shader runtime as needed. Right now, you have to define n different uniform image2Ds for your n texture levels, which is rather tedious, especially if the image size changes.
Addendum: The fastest way to reproduce the compile error is to just put image2D(0lu); somewhere in your shader code.
The syntax you're using is wrong. The correct syntax to cast a uint64_t to an image is:
layout(rgba8) image2D myImage = layout(rgba8) image2D(imageHandle);
It is required to specify the format multiple times. I have no idea why, nor why it's even required to specify the format at all. The spec is woefully vague on this.

OpenGL Texture Usage

So recently I started reading through the OpenGL wiki articles. This is how I picture the OpenGL texturing pipeline as described there. However, several points are still not clear to me.
Will the following statements be true, false, or does it depend?
1. Binding two textures to the same texture unit is impossible.
2. Binding two samplers to the same texture unit is impossible.
3. Binding one texture to two different texture units is impossible.
4. Binding one sampler to two different texture units is impossible.
5. It is the application's responsibility to be clear about what sampler type is passed to what uniform variable.
6. It is the shader program's responsibility to make sure it takes the sampler as the correct type of uniform variable.
7. The number of texture units is large enough. Let each mesh loaded into the application occupy as many texture units as it pleases.
8. Some sampler parameters duplicate texture parameters. They will override the texture parameter settings.
9. Some sampler parameters duplicate the sampler description in the shader program. The shader program's description will override the sampler parameters.
I'll go through your statements one by one in the following. Sometimes I will argue with quotes from the OpenGL 4.5 core profile specification. None of this is specific to GL 4.5; I just chose it because it is the most recent version.
1. Binding two textures to the same texture unit is impossible.
If I'd say "false", it would probably be misleading. The exact statement would be "Binding two textures to the same target of the same texture unit is impossible." Technically, you can, say, bind a 2D texture and a 3D texture to the same unit. But you cannot use both in the same draw call. Note that this is a dynamic error condition which depends on what values you set the sampler uniforms to. Quote from section 7.10 "Samplers" of the GL spec:
It is not allowed to have variables of different sampler types
pointing to the same texture image unit within a program object. This
situation can only be detected at the next rendering command issued
which triggers shader invocations, and an INVALID_OPERATION error will
then be generated.
So the GL will detect this error condition as soon as you actually try to draw something (or otherwise trigger shader invocations) with that shader while it is configured such that two sampler uniforms reference different targets of the same unit. But it is not an error before that. If you temporarily set both uniforms to the same value but do not try to draw in that state, no error is ever generated.
2. Binding two samplers to the same texture unit is impossible.
You probably mean Sampler Objects (as opposed to just "sampler" types in GLSL), so this is true.
3. Binding one texture to two different texture units is impossible.
False. You can bind the same texture object to as many units as are available. However, that is a rather useless operation. Back in the days of the fixed-function pipeline, there were some corner cases where it was of limited use. For example, I've seen someone bind the same texture twice and use register combiners to multiply the two together, because he needed a square operation. However, with shaders, you can sample the texture once and do anything you want with the result, so there is really no use case left for this.
4. Binding one sampler to two different texture units is impossible.
False. A single sampler object can be referenced by multiple texture units. You can just create one sampler object for each sampling state you need; there is no need to create redundant ones.
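For illustration, a minimal sketch of sharing one sampler object across two units:
// One sampler object providing the same sampling state to two texture units.
GLuint sampler = 0;
glGenSamplers(1, &sampler);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glSamplerParameteri(sampler, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glBindSampler(0, sampler); // texture unit 0
glBindSampler(1, sampler); // texture unit 1 -- same object, perfectly valid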
5. It is the application's responsibility to be clear about what sampler type is passed to what uniform variable.
6. It is the shader program's responsibility to make sure it takes the sampler as the correct type of uniform variable.
I'm not really sure what exactly you are asking here. The sampler variable in your shader selects the texture target and must also match the internal data format of the texture object you want to use (i.e., for isampler or usampler, you'll need unnormalized integer texture formats, otherwise results are undefined).
But I don't know what "what sampler type is passed to what uniform variable" is supposed to mean here. As far as the GL client side is concerned, the opaque sampler uniforms are just something which can be set to the index of the texture unit which is to be used, and that is done as an integer via glUniform1i or the like. There is no "sampler type" passed to a uniform variable.
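To make that concrete, here is a sketch of all the client side ever does (the uniform name diffuseMap is made up):
// A sampler uniform is just an integer selecting a texture unit.
glActiveTexture(GL_TEXTURE0 + 3);      // make unit 3 active
glBindTexture(GL_TEXTURE_2D, texture); // attach the texture object to it
glUseProgram(program);
glUniform1i(glGetUniformLocation(program, "diffuseMap"), 3); // sampler2D -> unit 3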
7. The number of texture units is large enough. Let each mesh loaded into the application occupy as many texture units as it pleases.
Not in the general case. The required GL_MAX_TEXTURE_IMAGE_UNITS (which defines how many different texture units a fragment shader can access) is just 16 in the GL 4.5 spec. (There are separate limits per shader stage, so there are GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, GL_MAX_GEOMETRY_TEXTURE_IMAGE_UNITS, and so on. They are all required to be at least 16 in the current spec.)
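If you want the actual limits of your implementation rather than the spec minimums, a quick sketch:
GLint fragmentUnits = 0, vertexUnits = 0, combinedUnits = 0;
glGetIntegerv(GL_MAX_TEXTURE_IMAGE_UNITS, &fragmentUnits);          // fragment stage
glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &vertexUnits);     // vertex stage
glGetIntegerv(GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS, &combinedUnits); // all stages combined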
Usually, you have to switch textures in between draw calls. The use of array textures and texture atlases can further reduce the number of necessary state switches (and, ultimately, draw calls).
Very modern GPUs also support GL_ARB_bindless_texture, which completely bypasses the "texture unit" layer of indirection and allows the shader to directly reference a texture object by some opaque handle (which basically boils down to referencing some virtual GPU memory address under the hood). However, that feature is not yet part of the OpenGL standard.
8. Some sampler parameters duplicate texture parameters. They will override the texture parameter settings.
Yes. Traditionally, there were no separate sampler objects in the GL. Instead, sampler states like filtering or wrap modes were part of the texture object itself. But modern hardware does not operate this way, so the sampler objects API was introduced as the GL_ARB_sampler_objects extension (nowadays a core feature of GL). If a sampler object is bound to a texture unit, its settings override the sampler state present in the texture object.
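Concretely (a sketch):
// The texture object says NEAREST, the sampler object says LINEAR.
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glSamplerParameteri(sampler, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glBindSampler(0, sampler); // while this sampler is bound, unit 0 filters with LINEAR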
9. Some sampler parameters duplicate the sampler description in the shader program. The shader program's description will override the sampler parameters.
I'm not sure what you mean by that. What "sampler descriptions" does a shader program define? There is only the declaration of the uniform and possibly the initialization via layout(binding=...). However, that is just the initial value. The client can update it at any time by setting the uniform to another value, so it does not really "override" anything. But I'm not sure if that is what you mean.

How Many Shader Programs Do I Really Need?

Let's say I have a shader set up to use 3 textures, and that I need to render some polygon that needs all the same shader attributes except that it requires only 1 texture. I have noticed on my own graphics card that I can simply call glDisableVertexAttribArray() to disable the other two textures, and that doing so apparently causes the disabled texture data received by the fragment shader to be all white (1.0f). In other words, if I have a fragment shader instruction (pseudo-code)...
final_red = tex0.red * tex1.red * tex2.red
...the operation produces the desired final value regardless of whether I have 1, 2, or 3 textures enabled. From this, a number of questions arise:
Is it legitimate to disable expected textures like this, or is it a coincidence that my particular graphics card has this apparent mathematical safeguard?
Is the "best practice" to create a separate shader program that only expects a single texture for single texture rendering?
If either approach is valid, is there a benefit to creating a second shader program? I'm thinking it would cost less time to make 2 glDisableVertexAttribArray() calls than to make a glUseProgram() + 5-6 glGetUniformLocation() calls, but maybe #4 addresses that issue.
When changing the active shader program with glUseProgram(), do I need to call glGetUniformLocation() every time to re-establish the location of each uniform in the program, or is the location of each expected to remain consistent until the shader program is deallocated?
Disabling vertex attribute arrays does not really disable your textures; it just gives you undefined texture coordinates. That might produce an effect similar to disabling a certain texture, but to do this properly you should use a uniform, or possibly subroutines (if you have dozens of variations of the same shader).
As for the time taken to disable a vertex array state, that is probably going to be slower than changing a uniform value. Setting uniform values doesn't really affect the render pipeline state; they are just small changes to memory. Likewise, constantly swapping the current GLSL program does things like invalidating the shader cache, so that is also significantly more expensive than setting a uniform value.
If you're on a modern GL implementation (GL 4.1+ or one that implements GL_ARB_separate_shader_objects) you can even set uniform values without binding a GLSL program at all, simply by calling glProgramUniform* (...)
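For example (a sketch, assuming useTex1 and useTex2 are hypothetical uniforms toggling your optional textures):
// No glUseProgram needed; the program is addressed directly.
glProgramUniform1i(program, glGetUniformLocation(program, "useTex1"), 0);
glProgramUniform1i(program, glGetUniformLocation(program, "useTex2"), 0);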
I am most concerned with the fact that you think you need to call glGetUniformLocation (...) each time you set a uniform's value. The only time the location of a uniform in a GLSL program changes is when you link it. Assuming you don't constantly re-link your GLSL program, you only need to query those locations once and store them persistently.
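A minimal sketch of that pattern (the uniform names follow the pseudo-code in the question):
// Query once after linking, cache the locations, then reuse them every frame.
struct ShaderLocations { GLint tex0, tex1, tex2; };
ShaderLocations CacheLocations(GLuint program)
{
    ShaderLocations loc;
    loc.tex0 = glGetUniformLocation(program, "tex0");
    loc.tex1 = glGetUniformLocation(program, "tex1");
    loc.tex2 = glGetUniformLocation(program, "tex2");
    return loc; // valid until the program is re-linked
}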

Vertex shader attribute mapping in GLSL

I'm coding a small rendering engine with GLSL shaders:
Each Mesh (well, submesh) has a number of vertex streams (e.g. position, normal, texture, tangent, etc.) packed into one big VBO, plus a MaterialID.
Each Material has a set of textures and properties (e.g. specular color, diffuse color, color texture, normal map, etc.)
Then I have a GLSL shader, with its uniforms and attributes. Let's say:
uniform vec3 DiffuseColor;
uniform sampler2D NormalMapTexture;
attribute vec3 Position;
attribute vec2 TexCoord;
I'm a little stuck trying to design a way for the GLSL shader to define the stream mappings (semantics) for the attributes and uniforms, and then bind the vertex streams to the appropriate attributes.
Something along the lines of telling the mesh: "put your position stream in attribute "Position" and your tex coordinates in "TexCoord". Also put your material's diffuse color in "DiffuseColor" and your material's second texture in "NormalMapTexture"."
At the moment I am using hard-coded names for the attributes (i.e. the vertex position is always "Position", etc.) and checking each uniform and attribute name to work out what the shader is using it for.
I guess I'm looking for some way of creating a "vertex declaration", but including uniforms and textures too.
So I'm just wondering how people do this in large-scale rendering engines.
Edit:
Recap of suggested methods:
1. Attribute/uniform semantics are given by the name of the variable (what I'm doing now)
Using pre-defined names for each possible attribute. The GLSL binder will query the name of each attribute and link the vertex array based on the name of the variable:
//global static variable
semantics (name, normalize, offset) = {"Position", false, 0}, {"Normal", true, 1}, {"TextureUV", false, 2}
...when linking
for (int index = 0; index < allAttribs; index++)
{
    glGetActiveAttrib(program, index, bufSize, &length, &size[index], &type[index], name);
    semantics[index] = GetSemanticsFromGlobalHardCodedList(name);
}
... when binding vertex arrays for render
for (int index = 0; index < allAttribs; index++)
{
    glVertexAttribPointer(index, size[index], type[index], semantics[index]->normalized,
                          bufferStride, (const void*)semantics[index]->offset);
}
2. Predefined locations for each semantic
The GLSL binder will always bind the vertex arrays to the same locations. It is up to the shader to use the appropriate names to match. (This seems awfully similar to method 1, but unless I misunderstood, it implies binding ALL available vertex data, even if the shader does not consume it.)
.. when linking the program...
glBindAttribLocation(prog, 0, "mg_Position");
glBindAttribLocation(prog, 1, "mg_Color");
glBindAttribLocation(prog, 2, "mg_Normal");
3. Dictionary of available attributes from Material, Engine globals, Renderer and Mesh
Maintain a list of available attributes published by the active Material, the Engine globals, the current Renderer and the current Scene Node.
eg:
Material has (uniformName,value) = {"ambientColor", (1.0,1.0,1.0)}, {"diffuseColor",(0.2,0.2,0.2)}
Mesh has (attributeName,offset) = {"Position",0,},{"Normals",1},{"BumpBlendUV",2}
then in shader:
uniform vec3 ambientColor, diffuseColor;
attribute vec3 Position;
When binding the vertex data to the shader, the GLSL binder will loop over the attributes and bind each one found (or not?) in the dictionary:
for (int index = 0; index < allAttribs; index++)
{
    glGetActiveAttrib(program, index, bufSize, &length, &size[index], &type[index], name);
    semantics[index] = Mesh->GetAttributeSemantics(name);
}
and the same with uniforms, only querying the active Material and the globals as well.
Attributes:
Your mesh has a number of data streams. For each stream you can keep the following info: (name, type, data).
Upon linking, you can query the GLSL program for active attributes and form an attribute dictionary for this program. Each element here is just (name, type).
When you draw a mesh with a specified GLSL program, you go through the program's attribute dictionary and bind the corresponding mesh streams (or report an error in case of inconsistency).
Uniforms:
Let the shader parameter dictionary be a set of (name, type, data link) entries. Typically, you can have the following dictionaries:
Material (diffuse,specular,shininess,etc) - taken from the material
Engine (camera, model, lights, timers, etc) - taken from engine singleton (global)
Render (custom parameters related to the shader creator: SSAO radius, blur amount, etc) - provided exclusively by the shader creator class (render)
After linking, the GLSL program is given a set of parameter dictionaries in order to populate its own dictionary, whose elements have the format (location, type, data link). This population is done by querying the list of active uniforms and matching each (name, type) pair with the ones in the dictionaries.
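A rough sketch of that population step (the dictionary plumbing is left out):
// Enumerate the active uniforms and match (name, type) against the dictionaries.
GLint count = 0;
glGetProgramiv(program, GL_ACTIVE_UNIFORMS, &count);
for (GLint i = 0; i < count; ++i)
{
    GLchar name[256];
    GLsizei length = 0;
    GLint size = 0;
    GLenum type = 0;
    glGetActiveUniform(program, i, sizeof(name), &length, &size, &type, name);
    GLint location = glGetUniformLocation(program, name);
    // Look up (name, type) in the material / engine / render dictionaries
    // and store (location, type, data link) in the program's own dictionary.
}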
Conclusion:
This method allows any custom vertex attributes and shader uniforms to be passed without hard-coding names/semantics in the engine. Basically, only the loader and the render know about the particular semantics:
Loader fills out the mesh data streams declarations and materials dictionaries.
Render uses a shader that is aware of the names, provides additional parameters and selects proper meshes to be drawn with.
From my experience, OpenGL does not define the concept of attributes or uniforms semantics.
All you can do is define your own way of mapping your semantics to OpenGL variables, using the only parameter you can control about these variables: their location.
If you're not constrained by platform issues, you could try the 'new' GL_ARB_explicit_attrib_location extension (core in OpenGL 3.3 if I'm not mistaken), which allows shaders to explicitly state which location is intended for which attribute. This way, you can hardcode (or configure) which data you want to bind to which attribute location, and query the shaders' locations after compilation. It seems that this feature is not yet mature, though, and perhaps subject to bugs in various drivers.
The other way around is to bind the locations of your attributes using glBindAttribLocation. For this, you have to know the names of the attributes that you want to bind, and the locations you want to assign them.
To find out the names used in a shader, you can:
query the shader for active attributes
parse the shader source code to find them yourself
I would not recommend the GLSL parsing route (although it may suit your needs in simple enough contexts): the parser can easily be defeated by the preprocessor. Supposing that your shader code becomes somewhat complex, you may want to start using #includes, #defines, #ifdefs, etc. Robust parsing presupposes a robust preprocessor, which can become quite a heavy lift to set up.
Anyway, with your active attributes names, you have to assign them locations (and/or semantics), for this, you're alone with your use case.
In our engine, we happily hardcode locations of predefined names to specific values, such as:
glBindAttribLocation(prog, 0, "mg_Position");
glBindAttribLocation(prog, 1, "mg_Color");
glBindAttribLocation(prog, 2, "mg_Normal");
...
After that, it's up to the shader writer to conform to the predefined semantics of the attributes.
AFAIK it's the most common way of doing things, OGRE uses it for example. It's not rocket science but works well in practice.
If you want to add some control, you could provide an API to define the semantics on a shader basis, perhaps even having this description in an additional file, easily parsable, living near the shader source code.
I won't get into uniforms, where the situation is almost the same, except that the 'newer' extensions allow you to force GLSL uniform blocks into a memory layout that is compatible with your application.
I'm not satisfied by all this myself, so I'll be happy to have some contradictory information :)
You may want to consider actually parsing the GLSL itself.
The uniform/attribute declaration syntax is pretty simple. You can come up with a small manual parser that looks for lines starting with uniform or attribute, gets the type and name, and then exposes some C++ API using strings. This will save you the trouble of hard-coded names. If you don't want to get your hands dirty with manual parsing, a couple of lines of Boost.Spirit would do the trick.
You probably won't want to fully parse GLSL, so you'll need to make sure you don't do anything funny in the declarations that might alter their actual meaning. One complication that comes to mind is conditional compilation using macros in the GLSL.
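To give an idea of the manual scan, a very rough sketch (deliberately not a full parser; it misses multi-declarations, arrays, and anything hidden behind the preprocessor):
#include <regex>
#include <string>
#include <vector>

struct Declaration { std::string qualifier, type, name; };

// Scan GLSL source for simple "uniform/attribute <type> <name>;" lines.
std::vector<Declaration> ScanDeclarations(std::string source)
{
    std::vector<Declaration> decls;
    std::regex decl(R"((uniform|attribute)\s+(\w+)\s+(\w+)\s*;)");
    std::smatch match;
    while (std::regex_search(source, match, decl))
    {
        decls.push_back({match[1], match[2], match[3]});
        source = match.suffix();
    }
    return decls;
}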