OpenGL Rendering Issue With glDrawElements

I am a little confused about which function I should be using for my OpenGL deferred rendering.
I have a list of VBOs (vertex buffer objects);
each one represents one vertex format, for example (Position + Color).
I also have one for indices.
My problem comes when I want to render different objects.
I'm used to the DirectX way, where you:
1) activate your vertex buffer
2) activate your index buffer
3) then, on your draw call, specify the primitive type,
the start vertex in the vertex buffer, the minimum vertex index, the vertex count, the start index and the primitive count (roughly the call sketched below)
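For reference, the D3D9 call I have in mind is roughly this (just a sketch; g_pDevice and the parameter names are placeholders):
// After SetStreamSource() and SetIndices(), a single call picks a sub-range of both buffers.
g_pDevice->DrawIndexedPrimitive(D3DPT_TRIANGLELIST,
                                baseVertexIndex,  // added to every index that is fetched
                                minVertexIndex,   // smallest vertex index used, relative to the base
                                numVertices,      // number of vertices in the range
                                startIndex,       // first index to read from the index buffer
                                primitiveCount);  // number of primitives (triangles) to draw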
With OpenGL I'm lost, because all I can find is:
glDrawElements - but this only takes the render mode, the element count, the index type and the indices.
That basically tells me it will render from the start of the VBO, which won't help me at all if I have multiple objects in there.
Does anyone know which OpenGL draw function lets me specify where in the VBO to start rendering from, i.e. my start vertex position? I hope someone has an idea.

glDrawElements does not have to start at the beginning of the VBO; it starts wherever you point the attribute pointers.
If you have a VBO and you want to start rendering from the second half of it, you set the byte offset to start from as the last argument of glVertexAttribPointer.
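For example, a minimal sketch assuming a shader-based pipeline, a VBO bound to GL_ARRAY_BUFFER with positions stored as 4 floats per vertex at attribute location 0, and an element array buffer bound for the indices (firstVertex and indexCount are placeholders you track per object):
// Point attribute 0 at the object's first vertex inside the shared VBO.
GLintptr byteOffset = firstVertex * 4 * sizeof(float);   // 4 floats per position
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0,
                      (const void*)byteOffset);          // last argument = byte offset into the VBO
glEnableVertexAttribArray(0);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0); // offset 0 into the bound index buffer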

You have several options.
You can reposition the start of each particular object's vertex data by issuing more glVertexAttribPointer calls before you draw that object. Or you can use glDrawElementsBaseVertex, where you specify a base vertex that gets added to every index fetched from the index buffer.
The latter should be available on any non-Intel hardware that is still supported.
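A sketch of that route, assuming OpenGL 3.2+ (or GL_ARB_draw_elements_base_vertex), with the vertex and index buffers already bound; indexCount, firstIndexByteOffset and objectFirstVertex are values you would track per object:
// Draw one object whose vertices start at objectFirstVertex within the shared VBO.
// objectFirstVertex is added to every index fetched from the element array buffer.
glDrawElementsBaseVertex(GL_TRIANGLES,
                         indexCount,                        // number of indices for this object
                         GL_UNSIGNED_SHORT,
                         (const void*)firstIndexByteOffset, // byte offset into the index buffer
                         objectFirstVertex);                // base vertex added to every index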

Edit: I just realized that this might not be applicable if you're not using shaders, but since I'm not familiar with legacy OpenGL I'm not sure what benefit my post has for you. What version are you using/targeting?
I recently asked a question regarding glDrawElements here. Hopefully what I posted gives some explanation/example.
The way I understand it, your indices are determined by you, so if you want to start partway into the VBO (say, at the second triangle), you could try something like this (a basic skeleton; it does not have the glVertexAttribPointer calls and such, but I can add those if you want):
const float VBOdata[] = {
    //Some X and Y coordinates for triangle1 (in the first quadrant)
    //(z = 0, w = 1 and are not strictly necessary here)
    0.0f, 1.0f, 0.0f, 1.0f,   //index 0
    0.0f, 0.0f, 0.0f, 1.0f,   //1
    1.0f, 0.0f, 0.0f, 1.0f,   //2
    //Some X and Y coordinates for triangle2 (in the third quadrant)
    0.0f, -1.0f, 0.0f, 1.0f,  //3
    0.0f, 0.0f, 0.0f, 1.0f,   //4
    -1.0f, 0.0f, 0.0f, 1.0f,  //5
    //Now some color data (all white)
    1.0f, 1.0f, 1.0f, 1.0f,   //0
    1.0f, 1.0f, 1.0f, 1.0f,   //1
    1.0f, 1.0f, 1.0f, 1.0f,   //2
    1.0f, 1.0f, 1.0f, 1.0f,   //3
    1.0f, 1.0f, 1.0f, 1.0f,   //4
    1.0f, 1.0f, 1.0f, 1.0f,   //5
};
const GLubyte indices[] = {
    3, 4, 5,
};
glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_BYTE, indices);
Where GL_TRIANGLES is the type of primitive, 3 is the number of elements inside of "indices", GL_UNSIGNED_BYTE is how to interpret them, and "indices" is just the name of the array.
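For completeness, the glVertexAttribPointer calls mentioned above might look like this (a sketch assuming VBOdata has been copied into a buffer bound to GL_ARRAY_BUFFER and the shader uses location 0 for position and 1 for color):
// Positions are the first 6 vertices (4 floats each); the colors follow right after them.
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 0, (const void*)0);                        // positions at offset 0
glVertexAttribPointer(1, 4, GL_FLOAT, GL_FALSE, 0, (const void*)(6 * 4 * sizeof(float)));  // colors after the positions
glEnableVertexAttribArray(0);
glEnableVertexAttribArray(1);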
This should draw only the second triangle. You can also try glDrawElementsBaseVertex, but I haven't tried it myself. What I gather is that you would generate/explicitly write your indices starting from 0, but give an offset so that it actually reads them as 0+offset, 1+offset, and so on.
Does this help?

Related

What causes glTexSubImage1D() - GL_INVALID_VALUE error in this example when loading a texture?

I'm trying to load a transfer function as 1D texture as follows:
glBindTexture(GL_TEXTURE_1D, transfer);
glTexStorage1D(GL_TEXTURE_1D, 1, GL_RGBA32F, 11);
glTexSubImage1D(GL_TEXTURE_1D, 1, 0, 11, GL_RGBA, GL_FLOAT, props.transColors);
where props.transColors is a float array (these values can be changed from the GUI during runtime):
float transColors[4 * 11] =
{
1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f,
1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f, 1.0f
};
The plan was to load the array as a 1D texture every frame using glTexSubImage1D().
The function glTexSubImage1D() always throws a GL_INVALID_VALUE error. From the documentation, the error can be caused by one of the following:
GL_INVALID_VALUE is generated if level is less than 0.
GL_INVALID_VALUE may be generated if level is greater than log2 max,
where max is the returned value of GL_MAX_TEXTURE_SIZE.
GL_INVALID_VALUE is generated if xoffset<−b, or if
(xoffset+width)>(w−b), where w is the GL_TEXTURE_WIDTH, and b is the
width of the GL_TEXTURE_BORDER of the texture image being modified.
Note that w includes twice the border width.
GL_INVALID_VALUE is generated if width is less than 0.
I don't think any of the listed points applies to my case. So I'm wondering, what causes this issue?
The second parameter (after GL_TEXTURE_1D) is the mipmap level, and level 0 is the largest mipmap level. Your image is allocated with one level (that's the second parameter to glTexStorage1D), but you are trying to upload the second level (since levels start at 0). Pass 0 to glTexSubImage1D as the level number instead of 1.
Even if your texture did have more than one mipmap level, the data you're uploading would be too big for the second level.
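In other words, only the level argument of the upload needs to change:
glTexStorage1D(GL_TEXTURE_1D, 1, GL_RGBA32F, 11);                               // allocate one mipmap level
glTexSubImage1D(GL_TEXTURE_1D, 0, 0, 11, GL_RGBA, GL_FLOAT, props.transColors); // upload to level 0, not 1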

GLSL. Change texture coordinates (or texture) for one object in shader

My little 3D editor needs a transform gizmo that gives the user clear and convenient interaction. For this reason my gizmo is not just polylines, but textured quads. The gizmo has 3 quads and 2 textures per quad with arrow pictures, one for the active and one for the inactive state (or, as an option, a single texture where the current texture coordinates can be offset from one arrow to the other).
Texture arrows:
When the cursor is over one of the arrows, I need the corresponding arrow to change color (by changing the texture or the texture coordinates).
I managed to change the coordinates by offsetting fs_in.texCoords in the fragment shader, but that changes the coordinates for all of the quads, which is not what I need.
My arrows are similar, but I don't want to draw the same VAO 3 times or reinitialize a new VAO with new texture coordinates.
My VAO just a one array:
vertices =
{ //POSITION FIRST QUAD //TEXTURE FIRST QUAD
-1.0f, -1.0f, 0.0f, 0.3f, 0.3f,
-1.0f, 1.0f, 0.0f, 0.3f, 0.0f,
1.0f, 1.0f, 0.0f, 0.0f, 0.0f,
1.0f, -1.0f, 0.0f, 0.0f, 0.3f,
...
//POSITION SECOND QUAD //TEXTURE SECOND QUAD
...
//POSITION THIRD QUAD //TEXTURE THIRD QUAD
};
I want to render the gizmo in one go from the one VAO, but in that case I don't know how to change the texture for only one quad in the shader. Is there a trick or cheat for implementing this in the shader?

OpenGL odd z-axis behaviour when drawing square

I'm a newcomer to OpenGL and I was playing around with drawing triangles with different z-coordinates. From what I understand, the z-axis points out of the screen, and the -z axis points into the screen.
When I draw a square with 3 corners at a 0.0 z-coordinate, and the last corner at, say, -3.0 z-coordinate, I get this:
I don't understand how it's making this shape... I thought it would be something like this, since the 4th vertex is just 'far away'.
Can someone explain?
Edit: This is my vertex data
// vertex array
float vertices[] = {
-0.5f, 0.5f, 0.0f, 1.0f, 0.0f, 0.0f, // top left, first 3 are location, last 3 are color
0.5f, 0.5f, 0.0f, 0.0f, 1.0f, 0.0f, // top right
-0.5f, -0.5f, -2.0f, 0.0f, 0.0f, 1.0f, // bottom left
0.5f, -0.5f, 0.0f, 1.0f, 1.0f, 1.0f // bottom right
};
// element buffer array
GLuint elements[] = {
0, 1, 2,
2, 1, 3
};
And I am calling the draw like:
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0);
I assume that you've just begun learning OpenGL. The problem is that any coordinate outside the range [-1, 1] is simply "out of bounds" and gets clipped. All values are normalized in OpenGL, which improves portability. Just think of the whole world (if you're familiar with game development) as a cube with sides 2 units long; go any further and you're somewhere else entirely. Hope it helps!

Why is my OpenGL program using matrix rotations displaying nothing?

I can't find how to create the view matrix with yaw, pitch and roll. I'm working with LWJGL and have a rotate function available.
viewMatrix.setZero();
viewMatrix.rotate(pitch, new Vector3f(1.0f, 0.0f, 0.0f));
viewMatrix.rotate(yaw, new Vector3f(0.0f, 1.0f, 0.0f));
viewMatrix.rotate(roll, new Vector3f(0.0f, 0.0f, 1.0f));
viewMatrix.m33 = 1.0f;
viewMatrix.translate(position);
I am doing something fundamentally wrong, and I hate the fact that I can't fix it due to the lack of documentation (or my lack of Google skills).
I do not transpose the matrix.
As a note, position is a zero vector and I do not see anything on the screen (when view matrix is zero I do).
Added: I am trying to reach the equivalent of the following:
GL11.glRotatef(pitch, 1.0f, 0.0f, 0.0f);
GL11.glRotatef(yaw, 0.0f, 1.0f, 0.0f);
GL11.glRotatef(roll, 0.0f, 0.0f, 1.0f);
GL11.glTranslatef(position.x, position.y, position.z);
You should use viewMatrix.setIdentity() instead of viewMatrix.setZero(), so that the matrix starts out as an identity matrix rather than all zeros.
Compounding rotations like that is the wrong way to go about it; try this: http://tutorialrandom.blogspot.com/2012/08/how-to-rotate-in-3d-using-opengl-proper.html

DirectX 9 hiding first triangle in TRIANGLEFAN

My question is: how can I draw a correct pyramid (a square-based pyramid) using D3DPT_TRIANGLEFAN?
I used as points:
CUSTOMVERTEX vertices[] =
{
{ 0.0f, 3.0f, 0.0f, 0x00ff0000, }, //The top Vertex
{ 1.0f, 0.0f, -1.0f, 0xff00ff00, }, //(A) vertex
{ 1.0f, 0.0f, 1.0f, 0xff0000ff, }, //(B) vertex
{ -1.0f, 0.0f, 1.0f, 0xffffff00, }, //(C) vertex
{ -1.0f, 0.0f, -1.0f, 0xffff00ff, }, //(D) vertex
{ 1.0f, 0.0f, -1.0f, 0xff00ff00, }, //(A) vertex
};
where a CUSTOMVERTEX is:
struct CUSTOMVERTEX
{
float x, y, z;
DWORD color;
};
and I call it by:
g_pD3DDevice->DrawPrimitive(D3DPT_TRIANGLEFAN, 0, 5);
The pyramid draws correctly, but there is an additional triangle drawn to the screen, made from the top vertex and the first base vertex (a right triangle with its right angle at the base of the pyramid, the other point being (1.0f, 0.0f, -1.0f), the first point (A)).
So what I want is to hide that triangle. I tried making the device draw from 1 to 5, but that only gives me the base (the (A)-(B)-(C)-(D) plane). I also tried setting the cull mode to D3DCULL_CW, but when I rotated the pyramid, half the time I could see the additional triangle and half the time it was hidden by another face.
The last parameter to IDirect3DDevice9::DrawPrimitive() is the primitive count, which should be 4 in your case (a fan of 6 vertices produces 6 - 2 = 4 triangles).
If you want to include the base, you'll have to render the pyramid as a triangle list instead, since a complete pyramid can't be represented by a single fan.
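For illustration, a minimal sketch of the triangle-list version, reusing the CUSTOMVERTEX layout and the corner values from the question (the winding order is an assumption and may need flipping to match your cull mode):
// Label the corners as in the question: top T, base A, B, C, D.
CUSTOMVERTEX T = {  0.0f, 3.0f,  0.0f, 0x00ff0000 };
CUSTOMVERTEX A = {  1.0f, 0.0f, -1.0f, 0xff00ff00 };
CUSTOMVERTEX B = {  1.0f, 0.0f,  1.0f, 0xff0000ff };
CUSTOMVERTEX C = { -1.0f, 0.0f,  1.0f, 0xffffff00 };
CUSTOMVERTEX D = { -1.0f, 0.0f, -1.0f, 0xffff00ff };

// 4 side faces + 2 triangles for the base = 6 triangles, 18 vertices.
CUSTOMVERTEX listVertices[] =
{
    T, A, B,   T, B, C,   T, C, D,   T, D, A,  // the four sides
    A, D, C,   A, C, B,                        // the base, split into two triangles
};
// Fill the vertex buffer with listVertices exactly as before, then:
g_pD3DDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 6);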