Is there a way to bind a sampler and a texture object to different slots in OpenGL GLSL?

I would like to do separate bindings for texture and the sampler.
Like the code below:
layout(set = 1, binding = 1) uniform texture2D SurfaceTexture;
layout(set = 1, binding = 2) uniform sampler SurfaceSampler;
But I get this error when linking the shaders:
error C7548: 'layout(set)' requires "#extension GL_KHR_vulkan_glsl : enable" before use
I'm trying to create an API-agnostic renderer that will also support D3D11, D3D12 and Vulkan, so I need to bind textures and samplers to separate slots.

I would like to do separate bindings for texture and the sampler.
OpenGL doesn't want you to. So you can't.
GL doesn't allow you to do the separate texture/sampler thing (there isn't even an extension for it). D3D requires you to do the separate texture/sampler thing. Vulkan will let you do either.
So you're going to have to pick a side. Or you're going to have to make your renderer translate one to the other.
Also, OpenGL doesn't have descriptor sets, so layout(set = #) is meaningless.
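Since GL only accepts the combined form in GLSL, a translation layer can collapse the pair on the GL path. A sketch (assuming a GL 4.2+ backend for layout(binding) and GL 3.3+ sampler objects; the binding numbers and names are illustrative): the GL flavor of the shader declares one combined sampler2D, and the backend recreates the texture/sampler pairing on the CPU side via glBindSampler, which GL does support at the API level even though GLSL doesn't:

```glsl
// Vulkan GLSL (separate objects) would be sampled as:
//   texture(sampler2D(SurfaceTexture, SurfaceSampler), uv)
//
// OpenGL GLSL equivalent: one combined sampler on a texture unit.
layout(binding = 3) uniform sampler2D SurfaceCombined;

// The GL backend recreates the pairing outside the shader:
//   glActiveTexture(GL_TEXTURE0 + 3);
//   glBindTexture(GL_TEXTURE_2D, textureName);   // the texture object
//   glBindSampler(3, samplerName);               // GL 3.3 sampler object
```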

Related

Vulkan material resource binding

I want to render a scene with multiple objects and different materials. When I started implementing it myself, I found it difficult to bind the material resources before iterating over the objects. I originally planned to use one descriptor set per material, containing several parameters and textures, and pass an array of them to the shader, but this is not possible because the GLSL shader requires me to refer to the descriptor sets and bindings explicitly, like
layout(set = 0, binding = 0) uniform someUniformBuffer/someTextureSampler
Also, the number of descriptor sets and bindings need to be hard-coded into my shader code. Therefore, I do not think it would be possible to do it this way.
After some searching I found two existing ways to bind multiple material resources, basically from SaschaWillems Repo and Vulkan texture rendering on multiple meshes:
Group an object's faces by material, and bind the material resources when drawing each group (you have to do this because the materialId is specified per face). The framework would then look like:
// example of typical loops in rendering
for each view {
    bind view resources            // camera, environment...
    for each shader {
        bind shader pipeline
        bind shader resources      // shader control values
        for each object {
            bind object resources  // object transforms
            for each material group {
                bind material resources  // material parameters and textures
                draw faces
            }
        }
    }
}
This way works on all devices with Vulkan support, but is a little complicated because I will have to group the faces by their materialId when building my objects. It is similar to the framework proposed in the NVIDIA post Vulkan Shader Resource Binding, where they group the objects of the same material (in my case I have multiple materials inside the same object).
Use an array of textures and a separate sampler. In this case, there is an upper bound on the size of my texture array, which could be as low as 128.
Use an array of combined samplers and use
layout(set = 0, binding = 0) uniform sampler2DArray
in my shader to deal with an array of textures. This requires the physical device feature shaderSampledImageArrayDynamicIndexing which is not supported by most mobile devices.
In this case, I think I prefer the first method. However, with solution 1 the same material could be bound multiple times, once for each object that uses it. I am wondering whether this impacts performance, and how to improve it, given that each object I want to draw uses several materials for different faces.

Best way to change the texture of a CombinedImageSampler2D (sampler2D GLSL)?

I have a Vulkan application that uses a single Graphics Pipeline:
The Graphics Pipeline has a uniform MVP matrix inside the vertex shader, and a uniform sampler2D in the fragment shader for texturing.
The Graphics Pipeline is used to render a cube; the problem is that I sometimes need to change its texture (outside the RenderPass), but I don't know how to do it.
This forces me to create multiple Descriptor Sets, each with its own sampler2D, and then choose the one I need.
P.S. The sampler of the CombinedImageSampler2D is always the same, I just need a way to change the Image it manages.
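One option (a hedged sketch, not from the question's code; all variable names are assumptions) is to rewrite the existing descriptor with vkUpdateDescriptorSets, keeping the same sampler but pointing the descriptor at a different image view. The set must not be in use by any in-flight command buffer when it is updated:

```
VkDescriptorImageInfo imageInfo = {
    .sampler     = sameOldSampler,   /* sampler stays the same        */
    .imageView   = newImageView,     /* the new image to sample       */
    .imageLayout = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL,
};
VkWriteDescriptorSet write = {
    .sType           = VK_STRUCTURE_TYPE_WRITE_DESCRIPTOR_SET,
    .dstSet          = descriptorSet,
    .dstBinding      = 1,            /* the sampler2D binding         */
    .dstArrayElement = 0,
    .descriptorCount = 1,
    .descriptorType  = VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER,
    .pImageInfo      = &imageInfo,
};
vkUpdateDescriptorSets(device, 1, &write, 0, NULL);
```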

How does OpenGL differentiate binding points in VAO from ones defined with glBindBufferBase?

I am writing a particle simulation which uses OpenGL >= 4.3 and came upon a "problem" (or rather the lack of one), which confuses me.
For the compute shader part, I use various GL_SHADER_STORAGE_BUFFERs which are bound to binding points via glBindBufferBase().
One of these GL_SHADER_STORAGE_BUFFERs is also used in the vertex shader to supply normals needed for rendering.
The binding in both the compute and vertex shader GLSL (these are called shaders 1 below) looks like this:
OpenGL part:
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 1, normals_ssbo);
GLSL part:
...
layout(std430, binding = 1) buffer normals_ssbo
{
vec4 normals[];
};
...
The interesting part is that in a separate shader program with a different vertex shader (below called shader 2), binding point 1 is (re-)used like this:
GLSL:
layout(location = 1) in vec4 Normal;
but in this case, the normals come from a different buffer object and the binding is done using a VAO, like this:
OpenGL:
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, 0);
As you can see, the binding point and the layout of the data (both are vec4) are the same, but the actual buffer objects differ.
Now to my questions:
Why does the VAO of shader 2, which is created and used after setting up shaders 1 (which use glBindBufferBase for binding), seemingly overwrite (?) the binding point, while shaders 1 still remember the SSBO binding and work fine without calling glBindBufferBase again before using them?
How does OpenGL know which of those two buffer objects the binding point (which in both cases is 1) should use? Are binding points created via the VAO and via glBindBufferBase simply completely separate things? If that's the case, why does something like this NOT work:
layout(std430, binding = 1) buffer normals_ssbo
{
vec4 normals[];
};
layout(location = 1) in vec4 Normal;
Are binding points created via the VAO and via glBindBufferBase simply completely separate things?
Yes, they are. That's why they're set by two different functions.
If that's the case, why does something like this NOT work:
Two possibilities present themselves. You implemented it incorrectly on the rendering side, or your driver has a bug. Which is which cannot be determined without seeing your actual code.

opengl pass texture to program: once or at every rendering?

I've a program with two texture: one from a video, and one from an image.
For the image texture, do I have to pass it to the program at each render, or can I do it just once? I.e. can I do
glActiveTexture(GLenum(GL_TEXTURE1))
glBindTexture(GLenum(GL_TEXTURE_2D), texture.id)
glUniform1i(textureLocation, 1)
just once? I believed so, but in my experiments this works fine if there's no video texture involved; as soon as I add the video texture, which I attach at every render pass (since it's changing), the only way to get the image is to run the above code at each rendering frame.
Let's dissect what you're doing, including some unnecessary stuff, and what the GL does.
First of all, none of the C-style casts you're doing in your code are necessary. Just use GL_TEXTURE_2D and so on instead of GLenum(GL_TEXTURE_2D).
glActiveTexture(GL_TEXTURE0 + i), where i is in the range [0, GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS - 1], selects the currently active texture unit. Commands that alter texture unit state will affect unit i as long as you don't call glActiveTexture with another valid unit identifier.
As soon as you call glBindTexture(target, name) with the current active texture unit i, the state of the texture unit is changed to refer to name for the specified target when sampling it with the appropriate sampler in a shader (i.e. name might be bound to TEXTURE_2D and the corresponding sample would have to be a sampler2D). You can only bind one texture object to a specific target for the currently active texture unit - so, if you need to sample two 2D textures in your shader, you'd need to use two texture units.
From the above, it should be obvious what glUniform1i(samplerLocation, i) does.
So, if you have two 2D textures you need to sample in a shader, you need two texture units and two samplers, each referring to one specific unit:
GLuint regularTextureName = 0;
GLuint videoTextureName = 0;
GLint regularTextureSamplerLocation = ...;
GLint videoTextureSamplerLocation = ...;
GLint regularTextureUnit = 0;
GLint videoTextureUnit = 1;
// setup texture objects and shaders ...
// make successfully linked shader program current and query
// locations, or better yet, assign locations explicitly in
// the shader (see below) ...
glActiveTexture(GL_TEXTURE0 + regularTextureUnit);
glBindTexture(GL_TEXTURE_2D, regularTextureName);
glUniform1i(regularTextureSamplerLocation, regularTextureUnit);
glActiveTexture(GL_TEXTURE0 + videoTextureUnit);
glBindTexture(GL_TEXTURE_2D, videoTextureName);
glUniform1i(videoTextureSamplerLocation, videoTextureUnit);
Your fragment shader, where I assume you'll be doing the sampling, would have to have the corresponding samplers:
layout(binding = 0) uniform sampler2D regularTextureSampler;
layout(binding = 1) uniform sampler2D videoTextureSampler;
And that's it. If both texture objects bound to the above units are set up correctly, it doesn't matter if the contents of the texture change dynamically before each fragment shader invocation - there are numerous scenarios where this is commonplace, e.g. deferred rendering or any other render-to-texture algorithm, so you're not exactly breaking new ground with a video texture.
As to the question on how often you need to do this: you need to do it when you need to do it - don't change state that doesn't need changing. If you never change the bindings of the corresponding texture unit, you don't need to rebind the texture at all. Set them up once correctly and leave them alone.
The same goes for the sampler bindings: if you don't sample other texture objects with your shader, you don't need to change the shader program state at all. Set it up once and leave it alone.
In short: don't change state if you don't have to.
EDIT: I'm not quite sure if this is the case or not, but if you're using the same shader with one sampler for both textures in separate shader invocations, you'd have to change something, but guess what, it's as simple as letting the sampler refer to another texture unit:
// same texture unit setup as before
// shader program is current
while (rendering)
{
    glUniform1i(samplerLocation, regularTextureUnit);
    // draw call sampling the regular texture
    glUniform1i(samplerLocation, videoTextureUnit);
    // draw call sampling the video texture
}
You should bind the texture before every draw. You only need to set the sampler location uniform once; you can also use layout(binding = 1) in your shader code for that. The uniform value stays with the program, but the texture binding is global GL state, as is the active texture unit, so be careful with glActiveTexture.
Good practice would be:
On program creation, once, set texture location (uniform)
On draw: glActiveTexture(GL_TEXTURE0 + i), glBindTexture(GL_TEXTURE_2D, texture), draw; optionally unbind afterwards (glBindTexture(GL_TEXTURE_2D, 0)) and reset the active unit to GL_TEXTURE0
Then optimize later for redundant calls.

OpenGL rendering with multiple textures

Is there a way in OpenGL to render a vertex buffer using multiple independent textures in VRAM without manually binding them (i.e. returning control to the CPU) in between?
Edit: So I'm currently rendering objects with multiple textures by rendering with a single texture, binding a new texture, and repeating, until everything is done. This is slow and requires returning control to CPU and making syscalls for every texture. Is there a way to avoid this switching, and make multiple textures available to the shaders to choose based on vertex data?
As mentioned in the comments on the question, glActiveTexture is the key - samplers in GLSL bind to texture units (e.g. GL_TEXTURE0), not specific texture targets (e.g. GL_TEXTURE_2D), so you can bind a GL_TEXTURE_2D texture under glActiveTexture(GL_TEXTURE0), another under glActiveTexture(GL_TEXTURE1), and then set your GLSL sampler2D values to 0, 1, etc. (NB: do not set your sampler2D values to GL_TEXTURE0, GL_TEXTURE1, etc. - the values are offsets from GL_TEXTURE0).
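To let the shader choose among several pre-bound units from vertex data, a hedged GLSL sketch (names invented; assumes GL 4.2+ for layout(binding) on samplers). Note that in core GLSL the index into an array of samplers must be dynamically uniform; for genuinely per-vertex selection, a sampler2DArray texture with a freely indexed layer is the usual alternative:

```glsl
#version 430 core
// Four texture units bound once at startup; layout(binding = 0)
// assigns the array elements to units 0..3.
layout(binding = 0) uniform sampler2D materials[4];

in vec2 uv;
flat in int materialIndex;  // driven by vertex data / per-draw state
out vec4 fragColor;

void main() {
    // materialIndex must be dynamically uniform here (core GLSL);
    // otherwise switch to a sampler2DArray and index the layer.
    fragColor = texture(materials[materialIndex], uv);
}
```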