Per-vertex value with element buffer - OpenGL

Say that I have a vertex shader. Its input section looks like this (simplified):
layout(location = 0) in vec3 V_pos;
layout(location = 1) in vec3 V_norm;
layout(location = 2) in vec2 V_texcoord1;
layout(location = 3) in vec2 V_texcoord2;
layout(location = 4) in int V_texNum;
What I want is to have the first four inputs come from an element buffer, while the last comes from a regular buffer. E.g., in this example, each element has two UV pairs, and I want to be able to give certain faces different textures to sample from.
Can this be done? One other option would be to give the shader a huge uniform array of integers containing the values for texNum, and access that with gl_VertexID. But that seems like a really ugly way to do it.
I'm using OpenGL 3.3 (happy to use extensions though) and C++.
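For reference, a minimal sketch of that gl_VertexID workaround (the array name, its size, and the upload code are illustrative, not from the question; the array is capped by GL_MAX_VERTEX_UNIFORM_COMPONENTS, which is part of what makes this approach ugly):
// GLSL side, replacing the V_texNum attribute:
//   uniform int texNums[1024];          // hypothetical size, limited by uniform storage
//   int texNum = texNums[gl_VertexID];
// C++ side, uploading one int per vertex:
GLint loc = glGetUniformLocation(program, "texNums");
glUniform1iv(loc, (GLsizei)texNums.size(), texNums.data()); // texNums: std::vector<GLint>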

Related

How to instance draw with different transformations for multiple objects

I'm having a little problem with glDrawArraysInstanced().
Right now I'm trying to draw a chess board with pieces.
I have all the models loaded in properly.
I've tried drawing only the pawns with instanced drawing and it worked. I would send an array of transformation vec3s to the shader through a uniform and move through the array with gl_InstanceID.
That would be done with this for loop (individual draw call for each model):
for (auto& i : this->models) {
    i->draw(this->shaders[0], count);
}
which eventually leads to:
glDrawArraysInstanced(GL_TRIANGLES, 0, vertices.size(), count);
where the vertex shader is:
#version 460
layout(location = 0) in vec3 vertex_pos;
layout(location = 1) in vec2 vertex_texcoord;
layout(location = 2) in vec3 vertex_normal;
out vec3 vs_pos;
out vec2 vs_texcoord;
out vec3 vs_normal;
flat out int InstanceID;
uniform mat4 modelMatrix;
uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;
uniform vec3 offsets[16];
void main(void){
    vec3 offset = offsets[gl_InstanceID]; //saving transformation in the offset
    InstanceID = gl_InstanceID; //unimportant
    vs_pos = vec4(modelMatrix * vec4(vertex_pos + offset, 1.f)).xyz; //using the offset
    vs_texcoord = vec2(vertex_texcoord.x, 1.f - vertex_texcoord.y);
    vs_normal = mat3(transpose(inverse(modelMatrix))) * vertex_normal;
    gl_Position = projectionMatrix * viewMatrix * modelMatrix * vec4(vertex_pos + offset, 1.f); //using the offset
}
Now my problem is that I don't know how to draw multiple objects this way and change their transformations, since gl_InstanceID starts from 0 on each draw call, and thus my array of transformations would be used again from the beginning (which would just draw the next pieces on the pawns' positions).
Any help will be appreciated.
You've got two problems. Or rather, you have one problem, but the natural solution will create a second problem for you.
The natural solution to your problem is to use one of the base-instance rendering functions, like glDrawElementsInstancedBaseInstance. These allow you to specify a starting instance for your instanced rendering calls.
This will precipitate a second problem: gl_InstanceID does not respect the base instance. It will always be in the range [0, instancecount). Only instance arrays respect the base instance. So instead of using a uniform to provide your per-instance data, you must use instanced array rendering. This means storing the per-instance data in a buffer object (which you should have done anyway) and accessing it via a VS input whose VAO state marks that particular attribute as instanced.
This also has the advantage of not restricting your instance count to uniform limitations.
OpenGL 4.6/ARB_shader_draw_parameters allows access to the gl_BaseInstance vertex shader input, which provides the baseinstance value specified by the draw command. So if you don't want to/can't use instanced arrays (for example, the amount of per-instance data is too big for the attribute limitations), you will have to rely on that extension/4.6 functionality. Recent desktop GL drivers offer this functionality, so if your hardware is decently new, you should be able to use it.
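A minimal sketch of the instanced-array route (buffer names, attribute location 3, and the piece counts are illustrative; the draw shown is the arrays variant of the base-instance call, matching the question's glDrawArraysInstanced, and needs GL 4.2 or ARB_base_instance):
// GLSL side: layout(location = 3) in vec3 offset;  // used in place of offsets[gl_InstanceID]
glBindVertexArray(vao);
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO); // one vec3 per instance, all pieces back to back
glEnableVertexAttribArray(3);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
glVertexAttribDivisor(3, 1); // advance once per instance, not once per vertex

// 16 pawns read offsets [0, 16); two rooks then start at instance 16
// (in real code, bind each model's own VAO before its draw call):
glDrawArraysInstancedBaseInstance(GL_TRIANGLES, 0, pawnVertexCount, 16, 0);
glDrawArraysInstancedBaseInstance(GL_TRIANGLES, 0, rookVertexCount, 2, 16);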

Why does a GLSL warning tell me a varying isn't written to, when it clearly is?

I have never had any problems passing variables from the vertex shader to the fragment shader. But today I added a new "out" variable in the VS and a corresponding "in" variable in the FS. GLSL says the following:
Shader Program: The fragment shader uses varying tbn, but previous shader does not write to it.
Just to confirm, here's the relevant part of the VS:
#version 330 core
layout(location = 0) in vec4 position;
layout(location = 1) in vec2 uv;
// plus other layout & uniform inputs here
out DATA
{
    vec2 uv;
    vec3 tangentViewDir;
    mat3 tbn;
} vs_out;
void main()
{
    vs_out.uv = uv;
    vs_out.tangentViewDir = vec3(1.0);
    vs_out.tbn = mat3(1.0);
    gl_Position = sys_ProjectionViewMatrix * sys_ModelMatrix * position;
}
And in the FS, it is declared as:
in DATA
{
    vec2 uv;
    vec3 tangentViewDir;
    mat3 tbn;
} fs_in;
Interestingly, all other varyings, like "uv", work. They are declared the same way.
Also interesting: even though GLSL says the variable isn't written to, it still recognizes the changes when I write to it, and displays those changes.
So is it just a false warning or a bug? Even though it tells me otherwise, the value seems to be passed correctly. Why do I receive this warning?
HolyBlackCat pointed me in the right direction - it was indeed a shader mismatch!
I had two shader programs, with the same FS in both but different VSs, and I forgot to update the outputs of the second VS to match the output layout of the first, so that both would work with the same FS.
Ouch. Now that I've run into this error once, lesson learnt.
Thank you HolyBlackCat!

Is glVertexAttribPointer used only for vertices, UVs, colors, and normals? Nothing else?

I want to incorporate a custom attribute that varies per vertex. In this case it is assigned to location=4 ... but nothing happens: the other four attributes vary properly, but that one doesn't. At the bottom, I added a test to produce a specific color if it encounters the value '1' (which I know exists in the buffer, because I queried the buffer earlier). Attribute 4 is stuck at the first value of its array and never moves.
Am I missing a setting? (Something that needs to be enabled, maybe?) Or does OpenGL only vary a handful of attributes and nothing else?
#version 330 //for OpenGL 3.3
//uniform variables stay constant for the whole glDraw call
uniform mat4 ProjViewModelMatrix;
uniform vec4 DefaultColor; //x=-1 signifies no default color
//non-uniform variables get fed per vertex from the buffers
layout (location=0) in vec3 coords; //feeding from attribute=0 of the main code
layout (location=1) in vec4 color; //per vertex color, feeding from attribute=1 of the main code
layout (location=2) in vec3 normals; //per vertex normals
layout (location=3) in vec2 UVcoord; //texture coordinates
layout (location=4) in int vertexTexUnit; //per vertex texture unit index
//Output
out vec4 thisColor;
out vec2 vertexUVcoord;
flat out int TexUnitIdx;
void main ()
{
    vertexUVcoord = UVcoord;
    TexUnitIdx = vertexTexUnit;
    if (DefaultColor.x == -1) { thisColor = color; } //If no default color is set, use per vertex colors
    else { thisColor = DefaultColor; }
    gl_Position = ProjViewModelMatrix * vec4(coords, 1.0); //This outputs the position to the graphics card.
    //TESTING
    if (vertexTexUnit == 1) thisColor = vec4(1, 1, 0, 1); //Never receives value of 1, but the buffer does contain such values
}
Because the vertexTexUnit attribute is an integer, you must use glVertexAttribIPointer() instead of glVertexAttribPointer().
You can use vertex attributes for whatever you want. OpenGL doesn't know or care what you're using them for.
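As a sketch, assuming an interleaved layout (stride and offset are placeholders for whatever your buffer actually uses):
// Integer attributes need the I variant; glVertexAttribPointer would convert
// the ints to floats, which is why location 4 never sees the value 1.
glEnableVertexAttribArray(4);
glVertexAttribIPointer(4, 1, GL_INT, stride, (void*)offset); // note the I: the data stays integral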

DirectX11 / OpenGL only renders half of the texture

This is how it should look. It uses the same vertices/UV coordinates that are used for DX11 and OpenGL. This scene was rendered in DirectX10.
This is how it looks in DirectX11 and OpenGL.
I don't know how this can happen. I am using the same code on top for both DX10 and DX11, and they both handle things very similarly. Do you have an idea what the problem may be and how to fix it?
I can send code if needed.
Also, using another texture.
Changed the transparent part of the texture to red.
Fragment Shader GLSL
#version 330 core
in vec2 UV;
in vec3 Color;
out vec4 FragColor; // explicit output; gl_FragColor is not available in core profile
uniform sampler2D Diffuse;
void main()
{
    //FragColor = vec4(texture(Diffuse, UV).rgb, 1.0);
    FragColor = texture(Diffuse, UV); // 'texture' replaces the deprecated texture2D
    //FragColor = vec4(Color, 1);
}
Vertex Shader GLSL
#version 330 core
layout(location = 0) in vec3 vertexPosition;
layout(location = 1) in vec2 vertexUV;
layout(location = 2) in vec3 vertexColor;
layout(location = 3) in vec3 vertexNormal;
uniform mat4 Projection;
uniform mat4 View;
uniform mat4 World;
out vec2 UV;
out vec3 Color;
void main()
{
    mat4 MVP = Projection * View * World;
    gl_Position = MVP * vec4(vertexPosition, 1);
    UV = vertexUV;
    Color = vertexColor;
}
Quickly said, it looks like you are using back-face culling (which is good), and the other side of your model is wound incorrectly. You can check whether this is the problem by turning back-face culling off (OpenGL: glDisable(GL_CULL_FACE)).
The real correction (if this was the problem) is to have correct face winding, usually counter-clockwise. This depends on where you got the model: if you generate it yourself, correct the winding in your model-generation routine. Model files created by 3D modeling software usually have correct face winding.
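If winding turns out to be the issue, the usual culling setup looks like this (all standard GL calls):
glEnable(GL_CULL_FACE); // cull hidden faces
glCullFace(GL_BACK);    // discard back faces
glFrontFace(GL_CCW);    // counter-clockwise faces are front faces (the GL default)
// Quick test: glDisable(GL_CULL_FACE); -- if the missing half reappears, the winding is wrong.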
This is just a guess, but are you telling the system the correct number of polygons to draw? Calls like glBufferData() take the size in bytes of the data, not the number of vertices or polygons. (Maybe they should have named the parameter numBytes instead of size?) Also, the size has to cover all the data: if you have colors, normals, texture coordinates and vertices all interleaved, it needs to include the size of all of that.
This is made more confusing by the fact that glDrawElements() and friends take the number of vertices as their size argument. The argument is named count, but it's not obvious that it's the vertex count, not the polygon count.
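A sketch of the distinction, assuming a hypothetical interleaved Vertex struct:
// glBufferData takes a size in BYTES:
glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(Vertex), vertices.data(), GL_STATIC_DRAW);
// ...while glDrawElements takes a COUNT of indices -- not bytes, not polygons:
glDrawElements(GL_TRIANGLES, (GLsizei)indices.size(), GL_UNSIGNED_INT, 0);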
I found the error.
The reason is that I forgot to set the texture SamplerState to Wrap/Repeat.
It was set to clamp, so the UV coordinates were clamped to 1.
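For reference, the OpenGL equivalent of that fix is a repeat wrap mode (the D3D11 analogue is D3D11_TEXTURE_ADDRESS_WRAP in the sampler description); a minimal sketch with a hypothetical texture handle:
glBindTexture(GL_TEXTURE_2D, textureHandle);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT); // UVs > 1 now tile...
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT); // ...instead of clamping to the edge texel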
A few things that you could try:
Is depth testing enabled? It seems that the inner faces of the polygons from the 'other' side are being rendered over the polygons that are closer to the viewpoint. This could happen if depth testing is disabled. Enable it just in case (a minimal sketch follows below).
Is lighting enabled? If so, turn it off. Some flashes of white seem to appear in the rotating image; that could be caused by incorrect normals...
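For the depth-test suggestion, a minimal sketch:
glEnable(GL_DEPTH_TEST); // reject fragments behind already-drawn ones
glDepthFunc(GL_LESS);    // the GL default comparison
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // remember to clear depth each frame too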
HTH

Can I pack both floats and ints into the same array buffer?

...because the floats seem to be coming out fine, but there's something wrong with the ints.
Essentially I have a struct called "BlockInstance" which holds a vec3 and an int. I've got an array of these BlockInstances which I buffer like so (translating from C# to C for clarity):
glBindBuffer(GL_ARRAY_BUFFER, bufferHandle);
glBufferData(GL_ARRAY_BUFFER, sizeof(BlockInstance) * numBlocks, blockData, GL_DYNAMIC_DRAW);
glVertexAttribPointer(3, 3, GL_FLOAT, false, 16, 0);
glVertexAttribPointer(4, 1, GL_INT, false, 16, 12);
glVertexAttribDivisor(3, 1);
glVertexAttribDivisor(4, 1);
And my vertex shader looks like this:
#version 330
layout (location = 0) in vec3 Position;
layout (location = 1) in vec2 TexCoord;
layout (location = 2) in vec3 Normal;
layout (location = 3) in vec3 Translation;
layout (location = 4) in int TexIndex;
uniform mat4 ProjectionMatrix;
out vec2 TexCoord0;
void main()
{
    mat4 trans = mat4(
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        Translation.x, Translation.y, Translation.z, 1);
    gl_Position = ProjectionMatrix * trans * vec4(Position, 1.0);
    TexCoord0 = vec2(TexCoord.x + TexIndex, TexCoord.y) / 16;
}
When I replace TexIndex on the last line of my GLSL shader with a constant like 0, 1, or 2, my textures come out fine, but if I leave it as is, they come out all mangled, so there must be something wrong with the number, right? But I don't know what it's coming out as, so it's hard to debug.
I've looked at my array of BlockInstances, and they're all set to 1, 2, or 19, so I don't think my input is wrong...
What else could it be?
Note that I'm using a sprite-map texture where each tile is 16x16 px, but my TexCoords are in the range 0-1, so I add a whole number to choose which tile, and then divide by 16 (the map is also 16x16 tiles) to put it back into the proper range. The idea is that I'll replace the last line with
TexCoord0 = vec2(TexCoord.x+(TexIndex%16),TexCoord.y+(TexIndex/16))/16;
-- GLSL does integer math, right? An int divided by an int will come out as a whole number?
If I try this:
TexCoord0 = vec2(TexCoord.x+(TexIndex%16),TexCoord.y)/16;
The texture looks fine, but it's not using the right sprite. (Looks to be using the first sprite)
If I do this:
TexCoord0 = vec2(TexCoord.x+(TexIndex%16),TexCoord.y+(TexIndex/16))/16;
It comes out all white. This leads me to believe that TexIndex is coming out as a very large number (bigger than 256, anyway), and that it's probably a multiple of 16.
layout (location = 4) in int TexIndex;
There's your problem.
glVertexAttribPointer is used to feed floating-point attributes: whatever data you hand it will be converted to floating-point values. Passing integers is possible, but those integers get converted to floats, because that's what glVertexAttribPointer is for.
What you need is glVertexAttribIPointer (notice the I). This is used for providing signed and unsigned integer data.
So if you declare a vertex shader input as a float or some non-prefixed vec, you use glVertexAttribPointer to feed it. If you declare the input as int, uint, ivec or uvec, then you use glVertexAttribIPointer.
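Putting it together with the question's layout (a sketch; it assumes BlockInstance really is a tightly packed vec3 plus int, i.e. a 16-byte stride):
glBindBuffer(GL_ARRAY_BUFFER, bufferHandle);
glBufferData(GL_ARRAY_BUFFER, sizeof(BlockInstance) * numBlocks, blockData, GL_DYNAMIC_DRAW);
glVertexAttribPointer(3, 3, GL_FLOAT, GL_FALSE, 16, (void*)0); // Translation: floats, unchanged
glVertexAttribIPointer(4, 1, GL_INT, 16, (void*)12);           // TexIndex: integer path; note there is no 'normalized' argument
glVertexAttribDivisor(3, 1);
glVertexAttribDivisor(4, 1);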