What happens if Vertex Attributes do not match Vertex Shader Inputs - opengl

As far as I know, if the vertex buffer has an attribute that the shader does not use, there is no problem.
But what happens in OpenGL if the vertex buffer does not provide an attribute that the vertex shader uses?
I know that in DirectX 11 nothing is drawn if an attribute the shader needs is not provided by the vertex buffer.
Example
The vertex buffer only has: position
vertex shader:
attribute vec3 position;
attribute vec4 color;
varying vec4 out_color;
void main()
{
    gl_Position = vec4(position, 1.0);
    out_color = color;
}
fragment shader:
varying vec4 out_color;
void main()
{
    gl_FragColor = out_color;
}
What is the pixel color after the shaders execute?

There are two scenarios:
If the attribute array is enabled (i.e. glEnableVertexAttribArray() was called for the attribute), but you didn't make a glVertexAttribPointer() call pointing it at data in a valid VBO, bad things can happen. I believe the exact outcome is implementation dependent: the draw call could crash, or it could render garbage. The best language I can find in the spec, which still sounds somewhat vague to me, is:
Most, but not all GL commands operating on buffer objects will detect attempts to read from or write to a location in a bound buffer object at an offset less than zero, or greater than or equal to the buffer’s size. When such an attempt is detected, a GL error will be generated. Any command which does not detect these attempts, and performs such an invalid read or write, has undefined results, and may result in GL interruption or termination.
If the attribute array is not enabled, the current attribute value is used for all vertices. This is the value set with glVertexAttrib4fv() and similar calls. If no such call was made, the default for the current attribute value is (0.0, 0.0, 0.0, 1.0).
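For the second case, a minimal sketch (this assumes the color attribute ended up at location 1, which is hypothetical):

GLint colorLoc = 1;                                   // assumed location of "color"
glDisableVertexAttribArray(colorLoc);                 // no array is supplied for this attribute
glVertexAttrib4f(colorLoc, 1.0f, 0.0f, 0.0f, 1.0f);   // current value used for every vertex
// If glVertexAttrib4f is never called, the current value defaults to (0.0, 0.0, 0.0, 1.0).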

Related

Can I modify a vertex buffer on the GPU through the vertex shader?

For some reason I cannot find the answer on the web. I want to update vertex attributes on the GPU through the shader, in a form similar to this:
#version 330 core
layout(location = 0) in vec4 position;
uniform mat4 someTransformation;
void main()
{
    position = position * someTransformation;
    gl_Position = position;
}
Is it possible?
Can you write code like that? Not quite as written: vertex shader inputs are read-only, so assigning to position will not compile; you would have to copy it into a local variable first.
Will even that change the contents of any GPU storage? No.
While there are ways for a VS to directly manipulate the contents of a buffer, if the buffer region being manipulated is also potentially being used as an attribute array for a rendering command, then you will have undefined behavior.
You can use SSBOs to manipulate other storage which is not being used as the input for rendering. And you can use transform feedback to accumulate data output from vertex processing. But you cannot have a VS directly modify its own input array.
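A minimal sketch of the SSBO route, assuming GL 4.3+ and a driver that exposes shader storage blocks in the vertex stage (the block and buffer names are made up, and a glMemoryBarrier() would be needed before reading the results back):

#version 430 core
layout(location = 0) in vec4 position;
uniform mat4 someTransformation;

// Hypothetical output buffer; it must NOT be backed by the same buffer region
// that feeds the "position" attribute, or the results are undefined.
layout(std430, binding = 0) buffer TransformedPositions {
    vec4 transformed[];
};

void main()
{
    vec4 p = position * someTransformation;  // inputs are read-only, so work on a copy
    transformed[gl_VertexID] = p;            // write to separate storage instead
    gl_Position = p;
}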

How does OpenGL know the input position (vertex shader)

I am still kind of confused by the position input in the vertex shader, because you never actually assign that variable. Does OpenGL just know it?
It's the same with sampler2D. You never assign that variable either, and in all of the tutorials I've seen or read, that's never addressed.
layout(location = 0) in vec3 position;
uniform sampler2D textureSampler;
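For context, a minimal sketch of the application-side calls that connect those two variables to data (vbo, tex, and program are hypothetical names; this assumes a VAO is already bound):

// Attribute at location 0: glVertexAttribPointer ties it to the currently bound GL_ARRAY_BUFFER.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glEnableVertexAttribArray(0);

// A sampler uniform holds a texture *unit* index, not a texture object.
glUseProgram(program);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tex);
glUniform1i(glGetUniformLocation(program, "textureSampler"), 0);  // sample from unit 0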

OpenGL Bindless Textures: Bind to uniform sampler2D array

I am looking into using bindless textures to rapidly display a series of images. My reference is the OpenGL 4.5 redbook. The book says I can sample bindless textures in a shader with this fragment shader:
#version 450 core
#extension GL_ARB_bindless_texture : require
in FS_INPUTS {
    vec2 i_texcoord;
    flat int i_texindex;
};
layout (binding = 0) uniform ALL_TEXTURES {
    sampler2D fs_textures[200];
};
out vec4 color;
void main(void) {
    color = texture(fs_textures[i_texindex], i_texcoord);
}
I created a vertex shader that looks like this:
#version 450 core
in vec2 vert;
in vec2 texcoord;
uniform int texindex;
out FS_INPUTS {
    vec2 i_texcoord;
    flat int i_texindex;
} tex_data;
void main(void) {
    tex_data.i_texcoord = texcoord;
    tex_data.i_texindex = texindex;
    gl_Position = vec4(vert.x, vert.y, 0.0, 1.0);
}
As you may notice, my grasp of what's going on is a little weak.
In my OpenGL code, I create a bunch of textures, get their handles, and make them resident. The function I am using to get the texture handles is glGetTextureHandleARB. There is another function that could be used instead, glGetTextureSamplerHandleARB, which also takes a sampler object. Here is what I did:
Texture* textures = new Texture[load_limit];
GLuint64* tex_handles = new GLuint64[load_limit];
for (int i = 0; i < load_limit; ++i)
{
    textures[i].bind();
    textures[i].data(new CvImageFile(image_names[i]));
    tex_handles[i] = glGetTextureHandleARB(textures[i].id());
    glMakeTextureHandleResidentARB(tex_handles[i]);
    textures[i].unbind();
}
My question is how do I bind my texture handles to the ALL_TEXTURES uniform attribute of the fragment shader? Also, what should I use to update the vertex attribute 'texindex' - an actual index into my texture handle array or a texture handle?
It's bindless texturing. You do not "bind" such textures to anything.
In bindless texturing, the data value of a sampler is a number. Specifically, the number returned by glGetTextureHandleARB. Texture handles are 64-bit unsigned integers.
In a shader, values of sampler types in buffer-backed interface blocks (UBOs and SSBOs) are 64-bit unsigned integers. So an array of samplers is equivalent in structure to an array of 64-bit unsigned integers.
So in C++, a struct equivalent to your ALL_TEXTURES block would be:
struct AllTextures
{
    GLuint64 textures[200];
};
Well, assuming you properly use std140 layout, of course. Otherwise, you'd have to query the layout of the structure.
At this point, you treat the buffer as no different from any other UBO usage. Build the data for the shader by sticking an AllTextures into a buffer object, then bind that buffer as a UBO to binding 0. You just need to fill the array in with the actual texture handles.
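A rough sketch of that upload, reusing the AllTextures struct above (this assumes load_limit <= 200 and that the std140 layout really does pack the handles the way the plain C++ array does, which is worth verifying against your driver):

AllTextures block = {};
for (int i = 0; i < load_limit; ++i)
    block.textures[i] = tex_handles[i];       // 64-bit handles from glGetTextureHandleARB

GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(block), &block, GL_STATIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);  // matches "binding = 0" in the shader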
Also, what should I use to update the vertex attribute 'texindex' - an actual index into my texture handle array or a texture handle?
Well, neither one will work. Not the way you've written it.
See, ARB_bindless_texture does not allow you to access any texture you want in any way at any time from any shader invocation. Unless you are using NV_gpu_shader5, the code leading to the texture access must be based on dynamically uniform expressions.
So unless every vertex in your rendering command gets the same index or handle... you cannot use them to pick which texture to use. Even instancing will not save you, since dynamically uniform expressions don't care about instancing.
If you want to render a bunch of quads without having to change uniforms between them (and without having to rely on an NVIDIA extension), then you have a few options. Most hardware that supports bindless texture also supports ARB_shader_draw_parameters. This gives you access to gl_DrawID, which represents the current index of a rendering command within a glMultiDraw-style command. And that extension explicitly declares that gl_DrawID is dynamically uniform.
So you could use that to select which texture to render. You simply need to issue a multi-draw command where you render the same mesh data over and over, but it gets a different gl_DrawID index in each case.
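A hedged sketch of that approach: the vertex shader takes the texture index from gl_DrawIDARB instead of a uniform (this assumes GL_ARB_shader_draw_parameters and that the draws are issued with something like glMultiDrawArraysIndirect, one sub-draw per quad):

#version 450 core
#extension GL_ARB_shader_draw_parameters : require
in vec2 vert;
in vec2 texcoord;
out FS_INPUTS {
    vec2 i_texcoord;
    flat int i_texindex;
} tex_data;
void main(void) {
    tex_data.i_texcoord = texcoord;
    tex_data.i_texindex = gl_DrawIDARB;   // index of the sub-draw; dynamically uniform
    gl_Position = vec4(vert, 0.0, 1.0);
}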

Accessing VBO/VAO Data in a GLSL Shader

In a vertex shader, how can a function within the shader access a specific value from an attribute array after the vertex data has been buffered to a VBO?
In the shader below the cmp() function is supposed to compare a uniform variable with vertex i.
#version 150 core
in vec2 vertices;
in vec3 color;
out vec3 Color;
uniform mat4 projection;
uniform mat4 view;
uniform mat4 model;
uniform vec2 cmp_vertex; // Vertex to compare
out int isEqual; // Output variable for cmp()
// Comparator
vec2 cmp(){
    int i = 3;
    return (cmp_vertex == vertices[i]);
}
void main() {
    Color = color;
    gl_Position = projection * view * model * vec4(vertices, 0.0, 1.0);
    isEqual = cmp();
}
Also, can cmp() be modified so that it does the comparison in parallel?
Based on the naming in your shader code, and the wording of your question, it looks like you misunderstood the concept of vertex shaders.
The vertex shader is invoked once for each vertex. So when your vertex shader code executes, it always operates on a single vertex. This means that the name of your in variable is misleading:
in vec2 vertices;
This variable gives you the position of the one and only vertex your shader is working on. So it would probably be clearer if you used a name in singular form:
in vec2 vertex;
Once you realize that you're operating on a single vertex, the rest becomes easy. For the comparison:
bool cmp() {
    return (cmp_vertex == vertex);
}
Vertex shaders are typically already invoked in parallel, meaning that many instances can execute at the same time, each one on its own vertex. So there is no need for parallelism within a single shader instance.
You'll probably have more issues achieving what you're after. But I hope that this gets you at least over the initial hurdle.
For example, the following out variable is problematic:
out int isEqual;
out variables of the vertex shader have matching in variables in the fragment shader. By default, the value written by the vertex shader is linearly interpolated across triangles, and the fragment shader gets the interpolated values. This is not supported for variables of type int. They only support flat interpolation:
flat out int isEqual;
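The matching input on the fragment shader side has to carry the same qualifier (a sketch):

flat in int isEqual;   // receives the provoking vertex's value, not an interpolated one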
But this will probably not give you what you're after, since the value you see in the fragment shader will always be the same across an entire triangle.

Is automatic vertex attribute assignment guaranteed to be in the correct order?

When specifying the vertex attribute location in the shader code using layout(location = ...) I do not need to fetch the locations using glGetAttribLocation in my C++ program.
If I neither define the locations in the shader using layout qualifiers nor fetch them in my C++ program, they are assigned automatically. My question is about this automatic assignment: does the order match the order of declaration in the shader code?
For example, are the locations in the first shader code guaranteed to be the same as in the second shader code?
// first shader code
#version 330
in vec3 position;
in vec3 normal;
in vec2 texcoord;
// second shader code
#version 330
layout(location = 0) in vec3 position;
layout(location = 1) in vec3 normal;
layout(location = 2) in vec2 texcoord;
Moreover, does the same rule apply to fragment shader outputs? For now I use glBindFragDataLocation to fetch them.
Automatic attribute location assignment is arbitrary; it follows whatever algorithm the implementation chooses.
If you didn't assign an attribute location, then you cannot assume anything about it.
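If you want deterministic locations without layout qualifiers, you can also assign them yourself before linking; a sketch, reusing the attribute names from the question:

glBindAttribLocation(shaderProgram, 0, "position");
glBindAttribLocation(shaderProgram, 1, "normal");
glBindAttribLocation(shaderProgram, 2, "texcoord");
glLinkProgram(shaderProgram);   // the bindings only take effect at link time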
No -- at least one driver I'm aware of sorts the attributes into alphabetical order before assigning locations when they don't have explicit layout qualifiers. In addition, any attribute that is unused in the program will almost certainly be optimized away and not assigned a location at all.
You can find out which location was assigned to each attribute by querying it after the program is linked:
positionAttribute = glGetAttribLocation(shaderProgram, "position");
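Or you can enumerate every active attribute the linker kept (a sketch, again using shaderProgram):

GLint count = 0;
glGetProgramiv(shaderProgram, GL_ACTIVE_ATTRIBUTES, &count);
for (GLint i = 0; i < count; ++i)
{
    char name[256];
    GLint size;
    GLenum type;
    glGetActiveAttrib(shaderProgram, i, sizeof(name), NULL, &size, &type, name);
    printf("%s -> location %d\n", name, glGetAttribLocation(shaderProgram, name));
}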