I'm trying to render textured meshes with OpenGL. Currently, my main class holds state consisting of:
std::vector<vec3d> vertices
std::vector<face> mesh
std::vector<vec3d> colors
vec3d is an implementation of a 3D vector - nothing special - and face is a class holding 3 integers indexing a vertex in vertices.
So far, I have rendered my meshes without a texture using the following code (which works fine):
glShadeModel(params.smooth ? GL_SMOOTH : GL_FLAT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
/* This is my attempt to add the texture:
 *
 * if (_colors.size() != 0) {
 *     cout << "Hello" << endl;
 *     glClientActiveTexture(GL_TEXTURE0);
 *     glEnableClientState(GL_TEXTURE_COORD_ARRAY);
 *     glTexCoordPointer(3, GL_FLOAT, 0, &_colors[0].x);
 * }
 */
glNormalPointer(GL_FLOAT,0,&normals[0].x);
glVertexPointer(3,GL_FLOAT,0,&vertices[0].x);
glDrawElements(GL_TRIANGLES,mesh.size()*3,GL_UNSIGNED_INT,&mesh[0].v1);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
My texture is stored in colors as a list of triples of floats between 0 and 1. However, the colors are not applied. I have read many examples of texture mapping and tried to do the same, with no luck. Any idea what I'm doing wrong?
As seen from your comments, you are using the wrong OpenGL feature to achieve what you want. Texturing means sticking a 2D image onto a mesh, e.g. via UV coordinates.
What you are actually doing is specifying a color per vertex, so you need to enable GL_COLOR_ARRAY instead of GL_TEXTURE_COORD_ARRAY and use the corresponding functions (glColorPointer instead of glTexCoordPointer).
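For example (an untested sketch using the member names from your question; note that if lighting is enabled, you may also need glEnable(GL_COLOR_MATERIAL) so the per-vertex colors actually affect the lit result):

glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_NORMAL_ARRAY);
if (!colors.empty()) {
    glEnableClientState(GL_COLOR_ARRAY);
    glColorPointer(3, GL_FLOAT, 0, &colors[0].x);   // 3 floats per vertex, tightly packed
}
glNormalPointer(GL_FLOAT, 0, &normals[0].x);
glVertexPointer(3, GL_FLOAT, 0, &vertices[0].x);
glDrawElements(GL_TRIANGLES, mesh.size() * 3, GL_UNSIGNED_INT, &mesh[0].v1);
glDisableClientState(GL_COLOR_ARRAY);
glDisableClientState(GL_NORMAL_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);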
One additional hint: if you are learning OpenGL from scratch, you should consider using only modern OpenGL (3.2+).
To answer the last comment:
Well, I read those colors from a texture file, that's what I meant. Is there a way to use such an array to display my mesh in color?
Yes and no: you will most probably not get the result you expect by doing this. In general there will be multiple texels in a texture that map to a single face, whereas with vertex colors you can only apply one color value per vertex, which then gets interpolated across the triangle. Have a look at how to apply textures to a mesh properly; you should be able to find plenty of resources on the internet.
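For comparison, a proper texturing setup in the same legacy pipeline would look roughly like this, where uv is a hypothetical std::vector of per-vertex 2D texture coordinates and textureId a texture object previously filled with glTexImage2D (both are placeholders, not from your code):

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, textureId);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, &vertices[0].x);
glTexCoordPointer(2, GL_FLOAT, 0, &uv[0].x);      // 2D UV coordinates, not RGB triples
glDrawElements(GL_TRIANGLES, mesh.size() * 3, GL_UNSIGNED_INT, &mesh[0].v1);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
glDisableClientState(GL_VERTEX_ARRAY);
glDisable(GL_TEXTURE_2D);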
Related
I'm using OpenGL 4 and C++11.
Currently I make a whole bunch of individual calls to glDrawElements using separate VAOs, each with its own VBO and IBO.
I do this because the texture coords change for each one, and my vertex data includes the texture coords. I understand that there's some redundant position information in this vertex data; however, it's always -1,-1 to 1,1 because I use a translation and a scale matrix in my vertex shader to position and scale the vertex data.
The VAO, VBO, IBO, position and scale matrix and texture ID are stored in an object. It's one object per quad.
Currently, some of the drawing would occur like this:
Draw a quad object via glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_INT, 0). The bound VBO holds just the -1,-1 to 1,1 positions plus the texture coords of a common texture (the same texture is used for all drawn quads), the IBO draws a quad, and the matrix transformations in the shader position it.
Repeat with another quad object
glEnable(GL_SCISSOR_TEST) is called and the position information of the previous quad is used in a call to glScissor
Next quad object is drawn; only the parts of it that fall within the previous quad are actually shown.
Draw another quad object
The performance I'm getting now is acceptable, but I want it faster because I've only scratched the surface of what I have in mind. So I'm looking at optimizing. So far I've read that I should:
Remove the position information from my vertex data and just keep texture coords. Instead bind a single position VBO at the start of drawing quads so it's used by all of them.
But I'm unsure how this would work, because I can only have one VBO active at any one time.
Would I then have to call glBufferSubData to update the texture coordinates prior to drawing each quad? Would that be better or worse for performance (a call to glBindVertexArray for every object versus a call to glBufferSubData)?
Would I still pass the position and scale as matrices to the shader, or would I take that opportunity to also update the position info of the vertices as well as the texture coords? Which would be faster?
Create one big VBO, with or without an IBO, and update the vertex data for the position of each quad within it (rather than using a translation and scale matrix). It seems like this would be difficult to manage.
Even if I did manage to do this, I would only have a single glDraw call, which sounds fast. Is this true? What sort of performance impact does a single glBindVertexArray call have compared to multiple?
I don't think there's any way to use this method to implement something like the glScissor call that I'm making now?
Another option I've read about is instancing. So I draw the quad however many times I need it, which means I would pass the shader an array of translation matrices and an array of texture coords?
Would this be a lot faster?
I think I could do something like the glScissor test by passing an additional array of booleans which defines whether the current quad should only be drawn within the bounds of the previous one. However, I think this means that for each gl_InstanceID I would have to traverse all previous instances looking for true and false values, and that seems like it would be slow.
I'm trying to save time by not implementing all of these individually. Hopefully an expert can point me towards which is probably better. If anyone has an even better idea, please let me know.
You can have multiple VBOs attached to different attributes!
The following sequence binds two VBOs to attributes 0 and 1. Note that glBindBuffer() only sets the current buffer binding temporarily; the actual VBO-to-attribute association is made by the glVertexAttribPointer() call.
glBindBuffer(GL_ARRAY_BUFFER,buf1);
glVertexAttribPointer(0, ...);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ARRAY_BUFFER,buf2);
glVertexAttribPointer(1, ...);
glEnableVertexAttribArray(1);
The fastest way to provide quad positions and sizes is to put them in a texture and sample it inside the vertex shader. Of course you'd need at least an RGBA (x, y, width, height) texture with 16 bits per channel. But then you can update quad positions using glTexSubImage2D(), or you could even render into that texture via an FBO.
Everything other than that will perform slower. Of course, if you want, we can elaborate on using uniforms, attributes in VBOs, or attributes without enabled arrays for them.
Putting it all together:
use a single VBO, storing the quad id in it (as an int) plus your texturing data
prepare the x,y,w,h texture and define a mapping from quad id to a texcoord in that texture, i.e. u = quad_id & 0xFF, v = quad_id >> 8 (for a 256x256 texture, max 65536 quads)
use the vertex shader to sample displacement and size from that texture for the given quad_id stored in an attribute (or use gl_VertexID / 4 or gl_VertexID / 6)
fill the VBO and the texture
draw everything with a single glDrawArrays or glDrawElements call
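A rough C++-side sketch of that idea (posTex, quadData, first, count and quadCount are placeholder names; the vertex shader would fetch x,y,w,h with a texelFetch on posTex using the quad id stored per vertex):

// One RGBA16 texel per quad: (x, y, width, height).
GLuint posTex;
glGenTextures(1, &posTex);
glBindTexture(GL_TEXTURE_2D, posTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16, 256, 256, 0,
             GL_RGBA, GL_UNSIGNED_SHORT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

// Whenever quads move: upload new x,y,w,h for quads [first, first + count),
// assuming here they all map into one row of the texture.
// quadData points to count * 4 unsigned shorts.
glTexSubImage2D(GL_TEXTURE_2D, 0,
                first & 0xFF, first >> 8,   // u, v derived from the quad id as above
                count, 1,
                GL_RGBA, GL_UNSIGNED_SHORT, quadData);

// Then the whole batch is a single draw call (6 indices per quad):
glDrawElements(GL_TRIANGLES, quadCount * 6, GL_UNSIGNED_INT, 0);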
I want to render multiple 3D cubes from one vbo. Each cube has a uniform color.
At the moment, I create a VBO where each vertex carries its own color information.
Is it possible to upload only one color for one shape (list of vertices)?
I also want to mix GL_TRIANGLES and GL_LINES in glDrawElements calls with the same shader. Is that possible?
//Edit: I only have OpenGL 2.1. Later I want to build this project on Android.
//Edit 2:
I want to render a large number of cubes (up to 150,000). One cube has 24 vertices of geometry and color, and 34 indices. Now my idea is to create several VBOs (maybe 50) and distribute the cubes among them. I hope this minimizes the overhead.
Drawing lots of cubes
Yes, if you want to draw a bunch of cubes, you can specify the color for each cube once.
Create a VBO containing the vertexes for one cube.
// cube = 36 vertexes with glDrawArrays(GL_TRIANGLES)
vbo1 = [v1] [v2] [v3] ... [v36]
Create another VBO with the transformation matrix and color for each cube, and use an attribute divisor of 1. (You could use the same VBO, but I would use a separate one.)
vbo2 = [cube 1 mat, color] [cube 2 mat, color] ... [cube N mat, color]
Call glDrawElementsInstanced() or glDrawArraysInstanced(). This will draw the cube over and over again.
Alternatively, you can use glUniform() for each cube, but this will limit the number of cubes you can draw. The above method will let you draw thousands, easily.
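A simplified, untested sketch of that setup, using a per-instance offset and color instead of a full matrix (instanceVBO, offsetLoc, colorLoc and buildInstances() are placeholder names; note this needs GL 3.3 or ARB_instanced_arrays, which goes beyond the OpenGL 2.1 you mentioned):

struct InstanceData { GLfloat offset[3]; GLfloat color[3]; };
std::vector<InstanceData> instances = buildInstances();   // placeholder: one entry per cube

GLuint instanceVBO;
glGenBuffers(1, &instanceVBO);
glBindBuffer(GL_ARRAY_BUFFER, instanceVBO);
glBufferData(GL_ARRAY_BUFFER, instances.size() * sizeof(InstanceData),
             instances.data(), GL_STATIC_DRAW);

glEnableVertexAttribArray(offsetLoc);
glVertexAttribPointer(offsetLoc, 3, GL_FLOAT, GL_FALSE, sizeof(InstanceData),
                      (void*)offsetof(InstanceData, offset));
glVertexAttribDivisor(offsetLoc, 1);   // advance once per instance, not per vertex

glEnableVertexAttribArray(colorLoc);
glVertexAttribPointer(colorLoc, 3, GL_FLOAT, GL_FALSE, sizeof(InstanceData),
                      (void*)offsetof(InstanceData, color));
glVertexAttribDivisor(colorLoc, 1);

// With vbo1 (the 36 cube vertices) bound to the position attribute as usual:
glDrawArraysInstanced(GL_TRIANGLES, 0, 36, (GLsizei)instances.size());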
Mixing GL_TRIANGLES and GL_LINES
You will have to call one of the glDraw*() functions once for each primitive type. You can use the same shader for both calls, if you like.
Regarding your questions:
Is it possible to upload only one color for one shape?
Yes, you can use a uniform instead of a vertex attribute (of course this means changes in more places). However, you will need to set the uniform for each shape and issue a separate draw call for each differently colored shape.
Is it possible to mix GL_TRIANGLES and GL_LINES in glDrawElements?
Yes and no. Yes, but you will need a new draw call (which is obvious). You cannot, in the same draw call, draw some shapes with GL_TRIANGLES and some shapes with GL_LINES.
In pseudocode this looks like:
draw shapes 1,2,10 from the vbo using color red and GL_TRIANGLES
draw shapes 3,4,6 from the vbo using color blue and GL_LINES
draw shapes 7,8,9 from the vbo using color blue and GL_TRIANGLES
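In actual GL calls that could look roughly like this (colorLoc and the per-batch index counts and offsets are placeholders; indices are assumed to be GLuint in one shared index buffer):

// Red triangles: shapes 1, 2, 10 batched into one index range.
glUniform3f(colorLoc, 1.0f, 0.0f, 0.0f);
glDrawElements(GL_TRIANGLES, redTriIndexCount, GL_UNSIGNED_INT,
               (void*)(redTriFirstIndex * sizeof(GLuint)));

// Blue lines: shapes 3, 4, 6.
glUniform3f(colorLoc, 0.0f, 0.0f, 1.0f);
glDrawElements(GL_LINES, blueLineIndexCount, GL_UNSIGNED_INT,
               (void*)(blueLineFirstIndex * sizeof(GLuint)));

// Blue triangles: shapes 7, 8, 9 (the uniform is still blue).
glDrawElements(GL_TRIANGLES, blueTriIndexCount, GL_UNSIGNED_INT,
               (void*)(blueTriFirstIndex * sizeof(GLuint)));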
With OpenGL 2.1, I don't think there's a reasonable way of specifying the color only once per cube, and still draw everything in a single draw call.
The most direct approach is that, instead of having the color attribute in a VBO, you specify it directly before the draw call. Assuming that you're using generic vertex attributes, where you would currently have:
glEnableVertexAttribArray(colorLoc);
glVertexAttribPointer(colorLoc, ...);
you do this:
glDisableVertexAttribArray(colorLoc);
glVertexAttrib3f(colorLoc, r, g, b);
where glDisableVertexAttribArray() is only needed if the array was previously enabled for the location.
The big disadvantage is that you can only draw cubes with the same color in one draw call. In the extreme case, that's one draw call per cube. Of course if you have multiple cubes with the same color, you could still batch those into a single draw call.
You wonder whether this is more efficient than having a color for each vertex in the VBO? Impossible to say in general. You'll always get the same answer in cases like this: try both, and benchmark. I'm skeptical that you will find it beneficial. In my experience, it's fairly rare for fetching vertex data to be a major performance bottleneck, so cutting out one attribute will likely not give you much of a gain. On the other hand, making many small draw calls absolutely can (and often will) hurt performance.
There is one option you can use that is sort of a hybrid. I'm not necessarily recommending it, but just in the interest of brainstorming. If you use a fairly limited number of colors, you can use a single scalar attribute in the VBO that encodes a "color index". Then in the vertex shader, you can use a texture lookup to translate the "color index" to the actual color.
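The C++ side of such a palette could look like this (paletteTex and the palette contents are placeholders; the vertex shader would then look up the color using the per-vertex index, which on GL 2.1 relies on vertex texture fetch support):

// 3x1 palette texture, one RGB texel per color index.
const GLubyte palette[] = {
    255, 0, 0,     // index 0: red
    0, 255, 0,     // index 1: green
    0, 0, 255,     // index 2: blue
};
GLuint paletteTex;
glGenTextures(1, &paletteTex);
glBindTexture(GL_TEXTURE_2D, paletteTex);
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);            // rows are not 4-byte aligned
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, 3, 1, 0, GL_RGB, GL_UNSIGNED_BYTE, palette);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);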
The really good options are beyond OpenGL 2.1. @DietrichEpp nicely explained instanced rendering, which is an elegant solution for cases like this.
And no, you can not have lines and triangles in the same draw call. Even the most flexible draw calls in OpenGL 4.x, like glDrawElementsIndirect(), still take only one primitive type.
Attempting to switch the drawing mode to GL_LINES, GL_LINE_STRIP or GL_LINE_LOOP when your cube's vertex data is constructed mainly for use with GL_TRIANGLES produces some interesting results, but none that provide a good wireframe representation of the cube.
Is there a way to construct the cube's vertex and index data so that simply toggling the draw mode between GL_LINES/GL_LINE_STRIP/GL_LINE_LOOP and GL_TRIANGLES provides nice results? Or is the only way to get a good wireframe to re-create the vertices specifically for use with one of the line modes?
The most practical approach is most likely the simplest one: Use separate index arrays for line and triangle rendering. There is certainly no need to replicate the vertex attributes, but drawing entirely different primitive types with the same indices sounds highly problematic.
To implement this, you could use two different index (GL_ELEMENT_ARRAY_BUFFER) buffers. Or, more elegantly IMHO, use a single buffer and store both sets of indices in it. Say you need triIdxCount indices for triangle rendering and lineIdxCount for line rendering. You can then set up your index buffer with:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuf);
glBufferData(GL_ELEMENT_ARRAY_BUFFER,
(triIdxCount + lineIdxCount) * sizeof(GLushort), 0,
GL_STATIC_DRAW);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER,
0, triIdxCount * sizeof(GLushort), triIdxArray);
glBufferSubData(GL_ELEMENT_ARRAY_BUFFER,
triIdxCount * sizeof(GLushort), lineIdxCount * sizeof(GLushort),
lineIdxArray);
Then, when you're ready to draw, set up all your state, including the index buffer binding (ideally using a VAO for all of the state setup) and then render conditionally:
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indexBuf);
if (renderTri) {
    glDrawElements(GL_TRIANGLES, triIdxCount, GL_UNSIGNED_SHORT, 0);
} else {
    glDrawElements(GL_LINES, lineIdxCount, GL_UNSIGNED_SHORT,
                   (const GLvoid*)(triIdxCount * sizeof(GLushort)));
}
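For the cube case specifically, the two index sets could look like this, assuming the usual 8-corner vertex layout (the exact ordering here is an assumption, not from the question):

// 8 corners: 0..3 = back face (z = -1), 4..7 = front face (z = +1),
// each face in the order (-1,-1), (1,-1), (1,1), (-1,1).
static const GLushort triIdxArray[36] = {
    0, 1, 2,  0, 2, 3,   // back
    4, 6, 5,  4, 7, 6,   // front
    0, 4, 5,  0, 5, 1,   // bottom
    3, 2, 6,  3, 6, 7,   // top
    0, 3, 7,  0, 7, 4,   // left
    1, 5, 6,  1, 6, 2    // right
};
// 12 edges as GL_LINES pairs, reusing the same 8 vertices.
static const GLushort lineIdxArray[24] = {
    0, 1,  1, 2,  2, 3,  3, 0,   // back face edges
    4, 5,  5, 6,  6, 7,  7, 4,   // front face edges
    0, 4,  1, 5,  2, 6,  3, 7    // connecting edges
};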
From a memory usage point of view, having two sets of indices is a moderate amount of overhead. The actual vertex attribute data is normally much bigger than the index data, and the key point here is that the attribute data is not replicated.
If you don't strictly want to render lines, but just have a requirement for wireframe types of rendering, there are other options. There is for example an elegant approach (never implemented it myself, but it looks clever) where you only draw pixels close to the boundary of polygons, and discard the interior pixels in the fragment shader based on the distance to the polygon edge. This question (where I contributed an answer) elaborates on the approach: Wireframe shader - Issue with Barycentric coordinates when using shared vertices.
I'm migrating our graphics engine from the old fixed-function pipeline to the programmable pipeline. Our simplest model is just a collection of points in space, where each point can be represented by different shapes, one of these being a cube.
I'm basing my code off the cube example from the OpenGL superbible.
In this example the cubes are placed at somewhat random places, whereas I will have a fixed list of points in space. I'm wondering if there is a way to pass that list to my shader so that a cube is drawn at each point, versus looping through the list and calling glDrawElements each time. Is that even worth the trouble (performance-wise)?
PS we are limited to OpenGL 3.3 functionality.
Is that even worth the trouble (performance wise)?
Probably yes, but try to profile nonetheless.
What you are looking for is instanced rendering, take a look at glDrawElementsInstanced and glVertexAttribDivisor.
What you want to do is store the 8 vertices of a generic cube (centered on the origin) in one buffer, and also store the coordinates of the center of each cube in another vertex attribute buffer.
Then you can use glDrawElementsInstanced to draw N cubes taking the vertices from the first buffer, and translating them in the shader using the specific position stored in the second buffer.
Something like this:
glVertexAttribPointer( vertexPositionIndex, /** Blah .. */ );
glVertexAttribPointer( cubePositionIndex, /** Blah .. */ );
glVertexAttribDivisor( cubePositionIndex, 1 ); // Advance one vertex attribute per instance
glDrawElementsInstanced( GL_TRIANGLES, 36, GL_UNSIGNED_BYTE, indices, NumberOfCubes );
In your vertex shader you need two attributes:
in vec3 vertexPosition; // the coordinates of a vertex of the generic cube
in vec3 cubePosition;   // the coordinates of the center of the specific cube being rendered
// ....
vec3 vertex = vertexPosition + cubePosition;
Obviously you can also have a buffer storing the size of each cube, or another one for the orientation; the idea remains the same.
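To make the elided setup above a bit more concrete, here is a sketch of how the second buffer might be filled and wired up (cubeCenters, loadPoints() and an already-bound VAO are assumptions):

std::vector<float> cubeCenters = loadPoints();   // placeholder: x,y,z per cube

GLuint centerVBO;
glGenBuffers(1, &centerVBO);
glBindBuffer(GL_ARRAY_BUFFER, centerVBO);
glBufferData(GL_ARRAY_BUFFER, cubeCenters.size() * sizeof(float),
             cubeCenters.data(), GL_STATIC_DRAW);

glEnableVertexAttribArray(cubePositionIndex);
glVertexAttribPointer(cubePositionIndex, 3, GL_FLOAT, GL_FALSE, 0, (void*)0);
glVertexAttribDivisor(cubePositionIndex, 1);   // one cube center per instance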
In your example every cube uses its own model matrix per frame.
If you want to keep that, you need multiple glDrawElements calls.
If some cubes don't move (i.e. don't need a per-frame model matrix), you should combine those cubes into one VBO.
I have a beamforming program running on CUDA and I have to display the output of the beam in OpenGL. I have to draw a rectangle composed of an array of 24x12 small squares and color each of these squares with a different color, based on an output from the CUDA program doing the beamforming. I have been able to draw the rectangle using a VBO to which I pass an array containing the vertices of the squares and the color of each vertex, using the structure below. The overall summary of the problem I am facing is that I am not able to assign the colors to each of the squares correctly. Some excerpts from the code:
struct attributes {
GLfloat coords[2]; //co-ordinates of the vertices
GLfloat color[3]; //color of the vertices
};
glGenBuffers(1, &vbo_romanis); // vbo_romanis is the VBO for drawing the frame
glBindBuffer(GL_ARRAY_BUFFER, vbo_romanis);
glBufferData(GL_ARRAY_BUFFER, sizeof(Vertices), Vertices, GL_STREAM_DRAW);
glShadeModel (GL_SMOOTH);
glUseProgram(program);
glEnableVertexAttribArray(attribute_coord);
glEnableVertexAttribArray(attribute_color);
glBindBuffer(GL_ARRAY_BUFFER, vbo_romanis);
glVertexAttribPointer(
attribute_coord2d, // attribute
2, // number of elements per vertex, here (x,y)
GL_FLOAT, // the type of each element
GL_FALSE, // take our values as-is
sizeof(struct attributes), // next coord2 appears every 5 floats
0 // offset of first element
);
glVertexAttribPointer(
attribute_color, // attribute
3, // number of elements per vertex, here (r,g,b)
GL_FLOAT, // the type of each element
GL_FALSE, // take our values as-is
sizeof(struct attributes), // stride
(GLvoid*) offsetof(struct attributes, color) // offset
);
/* Push each element in buffer_vertices to the vertex shader */
glDrawArrays(GL_QUADS, 0, 4*NUM_SQRS);
So I am facing 2 issues when I draw the array:
The colors are not appearing as I want them to. From what I have read about OpenGL, the color of a vertex, once assigned, cannot be changed. But since all the squares share vertices among them, the colors are probably getting messed up. If I give the same color to all the vertices, it works fine, but not when I want to draw the squares in different colors. So if someone can point out how I can assign a different color to each of the squares, that would be really helpful.
How do I update the colors of the vertices for each frame? Do I need to redraw the entire frame, or is there a way to update only the colors of the vertices?
I am completely new to OpenGL programming and any help would be much appreciated.
It is not clear what your vertex data actually is, but this:
But since all the squares share vertices among them, the colors are
probably messed up.
implies to me that you are trying to use the following data for two adjacent squares (A-F being the vertices):
A---B---C
| | |
| | |
D---E---F
However, in OpenGL, a vertex is the set of all its attributes, not just the position. What you get here is that the colors are smoothly interpolated between the squares. So technically you need to duplicate the vertices B and E into B1/B2 and E1/E2, with B1, E1 carrying the color of the left square and B2, E2 that of the right square, while sharing the same coordinates.
However, for your problem there might be a shortcut, in the form of flat shading: declare your vertex shader outputs as flat. Vertex shader outputs (varyings) are by default interpolated across the whole primitive; defining them as flat prevents that interpolation, and instead the value from just one vertex is used for the whole primitive. OpenGL uses the concept of the provoking vertex to define which vertex of a primitive provides the values for such flat outputs.
The command glProvokingVertex() can be used to specify the general rule for which vertex is selected; you can choose between the first and the last vertex of the primitive. If you construct your vertex data cleverly, you can arrange for one shared vertex to be the provoking vertex of both triangles of a square, so you can define the color of each "grid cell" with just the color of one corner vertex of that cell, with no need to duplicate vertices.
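A sketch of that idea, assuming each cell's color is stored on its lower-left corner vertex and each cell is drawn as two triangles (the index names LL, LR, UR, UL are placeholders):

// Make the first vertex of each triangle the provoking one (requires GL 3.2+).
glProvokingVertex(GL_FIRST_VERTEX_CONVENTION);

// For a cell with corners LL (lower left), LR, UR, UL, emit both triangles so
// that LL comes first; with the color output declared 'flat' in the shaders,
// LL's color is then used for the whole cell.
GLuint cellIndices[6] = { LL, LR, UR,   LL, UR, UL };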
As a side note: you have the command glShadeModel(GL_SMOOTH); in your code. This is deprecated and also totally useless when you use the programmable pipeline, as your comments imply. Conceptually, however, it is the exact opposite of the flat shading approach I'm suggesting here.
How do I update the colors of the vertices for each frame, Do i need
to redraw the entire frame or is there a way to just update the colors
of the vertices only.
OpenGL is not a scene graph library. It does not remember which objects you have drawn in the past and does not allow changing their attributes. OpenGL is a rendering API, so if you want something different to appear on the screen, you have to tell it to draw again. If you plan on updating the colors without changing the positions of the squares themselves, you might be even better off using two non-interleaved VBOs to split the color and position data. That way, you can keep the positions static in one buffer and stream only the color updates into the other.
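A sketch of that split, with placeholder names (positionVBO, colorVBO, positions, newColors, and numVerts = 4 * NUM_SQRS); positions are uploaded once with GL_STATIC_DRAW, and colors are re-uploaded every frame with GL_STREAM_DRAW:

// One-time setup: static positions, streamed colors.
glBindBuffer(GL_ARRAY_BUFFER, positionVBO);
glBufferData(GL_ARRAY_BUFFER, numVerts * 2 * sizeof(GLfloat), positions, GL_STATIC_DRAW);
glVertexAttribPointer(attribute_coord2d, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(attribute_coord2d);

glBindBuffer(GL_ARRAY_BUFFER, colorVBO);
glBufferData(GL_ARRAY_BUFFER, numVerts * 3 * sizeof(GLfloat), NULL, GL_STREAM_DRAW);
glVertexAttribPointer(attribute_color, 3, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(attribute_color);

// Per frame: overwrite only the colors (e.g. copied back from the CUDA output),
// then redraw the whole grid.
glBindBuffer(GL_ARRAY_BUFFER, colorVBO);
glBufferSubData(GL_ARRAY_BUFFER, 0, numVerts * 3 * sizeof(GLfloat), newColors);
glDrawArrays(GL_QUADS, 0, numVerts);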