I'm using the GPC tessellation library and it outputs triangle strips.
The example shows rendering like this:
for (s = 0; s < tri.num_strips; s++)
{
    glBegin(GL_TRIANGLE_STRIP);
    for (v = 0; v < tri.strip[s].num_vertices; v++)
        glVertex2d(tri.strip[s].vertex[v].x, tri.strip[s].vertex[v].y);
    glEnd();
}
The problem is that this renders multiple triangle strips. My application renders with VBOs, specifically one VBO per polygon. I need a way to modify the above code so that it looks something more like this:
glBegin(GL_TRIANGLES);
for (s = 0; s < tri.num_strips; s++)
{
    // How should I specify vertices here?
}
glEnd();
How could I do this?
specifically one VBO per polygon
Whoa. One VBO per polygon won't be efficient; it defeats the whole point of a vertex buffer. The idea of a vertex buffer is to cram as many vertices into it as you can. You can put multiple triangle strips into one vertex buffer, or render separate primitives that are stored in one buffer. For example:
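A minimal sketch of packing every strip into one buffer and drawing them all with a single call (buffer names and the use of client-side vertex arrays are illustrative, not part of the GPC API; error checking omitted):
std::vector<GLint>    first(tri.num_strips);  // starting vertex of each strip
std::vector<GLsizei>  count(tri.num_strips);  // vertex count of each strip
std::vector<GLdouble> verts;                  // x,y pairs for all strips
for (int s = 0; s < tri.num_strips; s++)
{
    first[s] = (GLint)(verts.size() / 2);
    count[s] = tri.strip[s].num_vertices;
    for (int v = 0; v < tri.strip[s].num_vertices; v++)
    {
        verts.push_back(tri.strip[s].vertex[v].x);
        verts.push_back(tri.strip[s].vertex[v].y);
    }
}
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(GLdouble), verts.data(), GL_STATIC_DRAW);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(2, GL_DOUBLE, 0, 0);  // reads from the bound VBO
// All strips from one buffer, one call:
glMultiDrawArrays(GL_TRIANGLE_STRIP, first.data(), count.data(), tri.num_strips);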
I need a way to modify the above code so that it looks something more like this:
This should work:
glBegin(GL_TRIANGLES);
for (s = 0; s < tri.num_strips; s++)
    for (v = 0; v < tri.strip[s].num_vertices - 2; v++)
    {
        if (v & 1)  // odd triangle: keep the strip's vertex order
        {
            glVertex2d(tri.strip[s].vertex[v].x,   tri.strip[s].vertex[v].y);
            glVertex2d(tri.strip[s].vertex[v+1].x, tri.strip[s].vertex[v+1].y);
            glVertex2d(tri.strip[s].vertex[v+2].x, tri.strip[s].vertex[v+2].y);
        }
        else        // even triangle: swap the last two vertices
        {
            glVertex2d(tri.strip[s].vertex[v].x,   tri.strip[s].vertex[v].y);
            glVertex2d(tri.strip[s].vertex[v+2].x, tri.strip[s].vertex[v+2].y);
            glVertex2d(tri.strip[s].vertex[v+1].x, tri.strip[s].vertex[v+1].y);
        }
    }
glEnd();
Because triangle strip triangulation goes like this (numbers represent vertex indexes):
0----2
| /|
| / |
| / |
|/ |
1----3
Note: I assume that vertices in triangle strips are stored in the same order as in my picture AND that you want triangle vertices to be sent in counter-clockwise order. If you want them in CW order instead, use this code:
glBegin(GL_TRIANGLES);
for (s = 0; s < tri.num_strips; s++)
    for (v = 0; v < tri.strip[s].num_vertices - 2; v++)
    {
        if (v & 1)  // odd triangle: swap the last two vertices
        {
            glVertex2d(tri.strip[s].vertex[v].x,   tri.strip[s].vertex[v].y);
            glVertex2d(tri.strip[s].vertex[v+2].x, tri.strip[s].vertex[v+2].y);
            glVertex2d(tri.strip[s].vertex[v+1].x, tri.strip[s].vertex[v+1].y);
        }
        else        // even triangle: keep the strip's vertex order
        {
            glVertex2d(tri.strip[s].vertex[v].x,   tri.strip[s].vertex[v].y);
            glVertex2d(tri.strip[s].vertex[v+1].x, tri.strip[s].vertex[v+1].y);
            glVertex2d(tri.strip[s].vertex[v+2].x, tri.strip[s].vertex[v+2].y);
        }
    }
glEnd();
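Since you build the buffer up front anyway, the same even/odd trick can unroll every strip into one flat GL_TRIANGLES vertex array that suits your one-VBO-per-polygon path. A sketch, using the CCW ordering from the first snippet (the container is illustrative):
std::vector<GLdouble> verts;  // three x,y pairs per triangle
for (int s = 0; s < tri.num_strips; s++)
    for (int v = 0; v + 2 < tri.strip[s].num_vertices; v++)
    {
        // pick the order that keeps every triangle's winding consistent
        int b = (v & 1) ? v + 1 : v + 2;
        int c = (v & 1) ? v + 2 : v + 1;
        verts.push_back(tri.strip[s].vertex[v].x); verts.push_back(tri.strip[s].vertex[v].y);
        verts.push_back(tri.strip[s].vertex[b].x); verts.push_back(tri.strip[s].vertex[b].y);
        verts.push_back(tri.strip[s].vertex[c].x); verts.push_back(tri.strip[s].vertex[c].y);
    }
// upload verts to the polygon's VBO, then:
// glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(verts.size() / 2));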
Related
I'm building a heightmap out of a 2D array of shorts. The code to generate the vertices is
MeshVertex v; // has position, normal, and texCoord fields
v.position = glm::vec3(
    (float) x * 150,
    height * 64,
    (float) z * 150
);
When generating the indices for the terrain mesh, I followed the tutorial on learnopengl.com, where I loop through each row, loop through each column, and assign the two sets of indices for the two triangles in each cell.
for(unsigned int i = 0; i < mapSize-1; i++) // for each row a.k.a. each strip
{
for(unsigned int j = 0; j < mapSize; j++) // for each column
{
for(unsigned int k = 0; k < 2; k++) // for each side of the strip
{
indices.push_back(j + mapSize * (i + k));
}
}
}
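For reference: with mapSize = 4, the first pass of the outer loop (i = 0) pushes the indices 0, 4, 1, 5, 2, 6, 3, 7, which form that row's strip.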
Lastly, I thought GL_TRIANGLE_STRIP draws the triangles in a counter-clockwise direction, using indices 0, 1, 2; 2, 1, 3 as per Wikipedia. Assuming 0-based numbering, wouldn't this mean that vertices 1 and 4 share a texture coordinate, and that vertices 2 and 3 share a texture coordinate? I'm building my texture coordinates based on this with the following code:
switch(index % 6) {
    case 0:
        v.texCoords = glm::vec2(0, (bandHeight * 1) + bandHeight);
        break;
    case 1:
    case 4:
        v.texCoords = glm::vec2(0, (bandHeight * 1));
        break;
    case 2:
    case 3:
        v.texCoords = glm::vec2(1, (bandHeight * 1));
        break;
    case 5:
        v.texCoords = glm::vec2(1, (bandHeight * 1) + bandHeight);
        break;
}
I don't know whether it's relevant, but the bandHeight variable exists because my textures are a variable-length vertical image of subimages, where the first texture is row 0, the second texture is row 1, and so on. The hardcoded 1 means we're looking at the second texture; it's there just for testing.
When I render the terrain mesh, only one quad in every 6 looks right. The rest are warped, and I'm not sure why. What is the right formula for generating texture coordinates based on which index the loop is processing?
Here's what a full picture of the terrain looks like, with each cell having the texture coordinates generated with the current cell's intended texture.
Edit
As there have been some votes to close the question as "not enough code to reproduce", here's a link to the full repository. You'll need your own copy of HUNTDAT from Carnivores 2; I can't provide that for copyright reasons. It can easily be found through some Google searches, though.
It compiles on Windows 10 with MinGW 8, although the CMakeLists file should be easy enough to adapt to other platforms.
The file that generates the terrain is found here.
I have vertex positions and indices, and I want vertex normals:
// input
vector<Vec3f> points = ... // position
vector<Vec3i> facets = ... // index (triangles)
// output
vector<Vec3f> norms; // normal
Method 1
I compute the normals like this:
norms.resize(points.size()); // for each vertex there is a normal
for (Vec3i f : facets) {
    int i0 = f.x();
    int i1 = f.y(); // index
    int i2 = f.z();
    Vec3f pos0 = points.at(i0);
    Vec3f pos1 = points.at(i1); // position
    Vec3f pos2 = points.at(i2);
    Vec3f N = triangleNormal(pos0, pos1, pos2); // face/triangle normal
    norms[i0] = N;
    norms[i1] = N; // Use the same normal for all 3 vertices
    norms[i2] = N;
}
Then, the output mesh is rendered like this with a Phong material:
Method 1 with reversed normal
When I reverse the normal direction in method 1:
norms[i0] = -N;
norms[i1] = -N;
norms[i2] = -N;
The dark and light regions are swapped:
The same happens when swapping position 0 with position 1:
// Vec3f N = triangleNormal(pos0, pos1, pos2);
Vec3f N = triangleNormal(pos1, pos0, pos2); // Swap pos0 with pos1
Method 2
I compute the normals with this method:
// Count how many faces/triangles a vertex is shared by
vector<int> counters;
counters.resize(points.size());
norms.resize(points.size());
for (Vec3i f : facets) {
    int i0 = f.x();
    int i1 = f.y(); // index
    int i2 = f.z();
    Vec3f pos0 = points.at(i0);
    Vec3f pos1 = points.at(i1); // position
    Vec3f pos2 = points.at(i2);
    Vec3f N = triangleNormal(pos0, pos1, pos2);
    // Must be normalized
    // https://stackoverflow.com/a/21930058/3405291
    N.normalize();
    norms[i0] += N;
    norms[i1] += N; // add normal to all vertices used in face
    norms[i2] += N;
    counters[i0]++;
    counters[i1]++; // increment count for all vertices used in face
    counters[i2]++;
}
// https://stackoverflow.com/a/21930058/3405291
for (int i = 0; i < static_cast<int>(norms.size()); ++i) {
    if (counters[i] > 0)
        norms[i] /= counters[i];
    else
        norms[i].normalize();
}
This method yields a totally dark final render with a Phong material:
I also tried methods suggested here and there which are similar to method 2. They all result in a final render that looks like that of method 2, i.e. all dark regions without any light ones.
Method 2 with reversed normal
I used method 2, but at the end, I reversed the normal direction by:
for (Vec3f & n : norms) {
    n = -n;
}
To my surprise, the final render is all dark:
Also in method 2, I tried swapping position 0 with position 1:
// Vec3f N = triangleNormal(pos0, pos1, pos2);
Vec3f N = triangleNormal(pos1, pos0, pos2); // swap pos0 with pos1
The final render is all dark regions without any light ones.
How?
Any idea how I can get my final render to be all light, without any dark regions?
That looks like your mesh does not have a consistent winding rule: some triangles/faces are defined in CW order and others in CCW order, causing some of your normals to face in the opposite direction. There are a few things you can do to remedy this:
use double-sided lighting
This is the easiest: somewhere in your fragment shader, or wherever you compute the shading, you have something like this:
out_color = face_color*(ambient_light+diffuse_light*max(0.0,dot(face_normal,light_direction)));
When the normal points in the wrong direction, the result of the dot product is negative, leading to a dark color, so just use the absolute value instead:
out_color = face_color*(ambient_light+diffuse_light*abs(dot(face_normal,light_direction)));
In the fixed-function pipeline there is even a switch for this, IIRC:
glLightModeli(GL_LIGHT_MODEL_TWO_SIDE, GL_TRUE);
repair mesh winding
There are 3D tools that can do this (Blender, 3DS, ...), or, if your mesh is generated on the fly, you can update your code to create consistent winding on your own.
Correct winding lets you use GL_CULL_FACE, which speeds up rendering considerably. It also enables more advanced stuff like this:
OpenGL - How to create Order Independent transparency?
repair normals
In some cases there are ways to detect whether a normal is pointing outwards from or inwards into the mesh, for example like this:
Determining the direction of face normals consistently?
Then just negate the wrong ones during normal computation and that is it. However, if your mesh is too complicated (too far from convex), this is not so easily done, as you need to use local "centers" of the mesh or even inside-polygon tests, which are expensive.
The averaging method of generating normals gives you dark colors for both normal directions, which means you computed them wrongly and they are most likely zero: with inconsistent winding, opposite-facing face normals cancel out when summed. For more info about such an approach see:
How to achieve smooth tangent space normals?
Anyway, to debug problems like this it's best to render your normals as lines going from the vertices of your mesh (use wireframe). Then you can see directly which normals are good and which are bad. Here is an example:
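A minimal immediate-mode sketch of such a debug view, using the points/norms vectors from the question (the scale factor is arbitrary and should be tuned to your mesh size):
// Draw each vertex normal as a short line segment for debugging.
const float scale = 0.1f;
glBegin(GL_LINES);
for (size_t i = 0; i < points.size(); ++i)
{
    glVertex3f(points[i].x(), points[i].y(), points[i].z());
    glVertex3f(points[i].x() + scale * norms[i].x(),
               points[i].y() + scale * norms[i].y(),
               points[i].z() + scale * norms[i].z());
}
glEnd();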
I've been trying to load a Wavefront OBJ model with ASSIMP. However, I can't get the MTL material colors (Kd RGB) to work. I know how to load them, but I don't know how to get the corresponding color for each vertex.
usemtl material_11
f 7//1 8//1 9//2
f 10//1 11//1 12//2
For example, the Wavefront OBJ snippet above means that those vertices use material_11.
Q: So how can I get the material corresponding to each vertex?
Error
The Wavefront OBJ materials aren't applied to the right vertices:
Original Model (Rendered with ASSIMP model Viewer):
Model rendered with my code:
Code:
Code that I use for loading the MTL material colors:
std::vector<color4<float>> colors = std::vector<color4<float>>();
...
for (unsigned int i = 0; i < scene->mNumMeshes; i++)
{
    const aiMesh* model = scene->mMeshes[i];
    const aiMaterial *mtl = scene->mMaterials[model->mMaterialIndex];
    color4<float> color = color4<float>(1.0f, 1.0f, 1.0f, 1.0f);
    aiColor4D diffuse;
    if (AI_SUCCESS == aiGetMaterialColor(mtl, AI_MATKEY_COLOR_DIFFUSE, &diffuse))
        color = color4<float>(diffuse.r, diffuse.g, diffuse.b, diffuse.a);
    colors.push_back(color);
    ...
}
Code for creating the vertices:
vertex* vertices_arr = new vertex[positions.size()];
for (unsigned int i = 0; i < positions.size(); i++)
{
    vertices_arr[i].SetPosition(positions.at(i));
    vertices_arr[i].SetTextureCoordinate(texcoords.at(i));
}
// Code for setting vertex colors (I'm just setting them in ASSIMP mesh order,
// since I don't know how to set them in the correct order).
for (unsigned int i = 0; i < scene->mNumMeshes; i++)
{
    const unsigned int vertices_size = scene->mMeshes[i]->mNumVertices;
    for (unsigned int k = 0; k < vertices_size; k++)
    {
        vertices_arr[k].SetColor(colors.at(i));
    }
}
EDIT:
It looks like the model's vertex positions aren't being loaded correctly either, even when I disable face culling and change the background color.
Ignoring issues related to the number of draw calls and the extra storage, bandwidth, and processing for vertex colours: it looks like vertex* vertices_arr = new vertex[positions.size()]; is one big array you're creating to hold the entire model (which has many meshes, each with one material). I'll assume your first loop is correct and positions contains all positions for all meshes of your model. The second loop then duplicates each mesh's colour for every vertex within that mesh. However, vertices_arr[k] always starts at zero and needs to begin after the last vertex of the previous mesh. Instead, try:
int colIdx = 0;
for (unsigned int i = 0; i < scene->mNumMeshes; i++)
{
    const unsigned int vertices_size = scene->mMeshes[i]->mNumVertices;
    for (unsigned int k = 0; k < vertices_size; k++)
    {
        vertices_arr[colIdx++].SetColor(colors.at(i));
    }
}
assert(colIdx == positions.size()); // double check
As you say, if the geometry isn't drawing properly, maybe positions doesn't contain all the vertex data, perhaps a similar issue to the code above. Another issue could be in joining the indices for each mesh: the indices will all need to be updated with an offset to the new vertex locations within the vertices_arr array (see the sketch below). Though now I'm just throwing out guesses.
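A sketch of that index offsetting with Assimp's face data (assuming triangulated meshes and a single flat index array; the container names are illustrative):
// Merge per-mesh indices into one array, offsetting each mesh's
// indices by the number of vertices already emitted.
std::vector<unsigned int> indices;
unsigned int baseVertex = 0;
for (unsigned int i = 0; i < scene->mNumMeshes; i++)
{
    const aiMesh* mesh = scene->mMeshes[i];
    for (unsigned int f = 0; f < mesh->mNumFaces; f++)
    {
        const aiFace& face = mesh->mFaces[f];
        for (unsigned int j = 0; j < face.mNumIndices; j++)
            indices.push_back(baseVertex + face.mIndices[j]);
    }
    baseVertex += mesh->mNumVertices; // shift the next mesh's indices
}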
So a subset of my code (within a rendering loop of course) is:
for(int x = 0; x < data.length; x++)
    for(int y = 0; y < data[0].length; y++) {
        float r = colorData[x][y][0], g = colorData[x][y][1], b = colorData[x][y][2];
        glGetError(); // clear any previously set error flag
        glColor3f(r, g, b);
        int xCoord = x * 100;
        int yCoord = y * 100;
        int height = (int) Math.round(data[y][x] * 25);
        glBegin(GL_QUADS);
        //Top
        glVertex3f(xCoord, yCoord, height);
        glVertex3f(xCoord + 100, yCoord, height);
        glVertex3f(xCoord + 100, yCoord + 100, height);
        glVertex3f(xCoord, yCoord + 100, height);
        //*/
        //Sides
        glColor3f(r / 2, g / 2, b / 2);
        glVertex3f(xCoord, yCoord, height);
        glVertex3f(xCoord, yCoord, 0);
        glVertex3f(xCoord + 100, yCoord, 0);
        glVertex3f(xCoord + 100, yCoord, height);
        glVertex3f(xCoord, yCoord + 100, height);
        glVertex3f(xCoord + 100, yCoord + 100, height);
        glVertex3f(xCoord + 100, yCoord + 100, 0);
        glVertex3f(xCoord, yCoord + 100, 0);
        glVertex3f(xCoord, yCoord + 100, height);
        glVertex3f(xCoord, yCoord + 100, 0);
        glVertex3f(xCoord, yCoord, 0);
        glVertex3f(xCoord, yCoord, height);
        glVertex3f(xCoord + 100, yCoord + 100, height);
        glVertex3f(xCoord + 100, yCoord, height);
        glVertex3f(xCoord + 100, yCoord, 0);
        glVertex3f(xCoord + 100, yCoord + 100, 0);
        //Bottom
        glColor3f(r / 4, g / 4, b / 4);
        glVertex3f(xCoord, yCoord, 0);
        glVertex3f(xCoord, yCoord + 100, 0);
        glVertex3f(xCoord + 100, yCoord + 100, 0);
        glVertex3f(xCoord + 100, yCoord, 0);
        glEnd();
        int err = glGetError();
        if(err != GL_NO_ERROR)
            System.out.println("An error has occurred: #" + err);
        glColor3f(1, 0, 0);
        glBegin(GL_LINES);
        //Top
        glVertex3f(xCoord, yCoord, height);
        glVertex3f(xCoord + 100, yCoord, height);
        glVertex3f(xCoord + 100, yCoord + 100, height);
        glVertex3f(xCoord, yCoord + 100, height);
        glEnd();
    }
The code works correctly, drawing a two-dimensional grid of boxes, with the sides of each box being slightly darker than the top (poor man's lighting).
Just out of curiosity I took out the first call to glColor3f, expecting the tops of the boxes to become colorless and the sides to remain the same; however, the entire grid of boxes goes white. Why is this? Is it something in this code here, or is some other part of my render loop causing this?
(If it makes a difference, I'm using LWJGL's Java binding of OpenGL 1.1.)
OpenGL immediate mode works a bit like this (pseudocode):
glColor3f(r, g, b) {
    state.color.{r,g,b} = r, g, b;
}
glNormal(x, y, z) {
    state.normal.{x,y,z} = x, y, z;
}
/* and so on */
glVertex3f(x, y, z) {
    send_vertex_to_rasterizer(state.color, state.normal, ..., {x,y,z})
}
I.e., OpenGL remembers the last color you've set and will apply it to all following vertices until you change it to another color, which then applies.
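Applied to the code above: with the first glColor3f(r, g, b) removed, the top face of each box is drawn with whatever color happens to be current at that point, e.g. the red set for the previous box's outline or whatever color the rest of your render loop set last; there is no per-primitive default color.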
Right now I'm using glutSolidSphere to draw multiple spheres, 50k+ of them, and the speed is extremely low.
Is there any method or suggestion for increasing the speed?
Below is my code:
void COpenGlWnd::OnPaint()
{
    CPaintDC dc(this);
    ::wglMakeCurrent(m_hDC, m_hRC);
    for(int k = 0; k < m_nCountZ; k++)
    {
        for(int j = 0; j < m_nCountY; j++)
        {
            for(int i = 0; i < m_nCountX; i++)
            {
                ::glPushMatrix();
                ........
                ::glutSolidSphere(Size[i][j][k], 36, 36);
                ........
                ::glPopMatrix();
            }
        }
    }
    ::SwapBuffers(m_hDC);
}
For more information:
The spheres will always be at specific locations, but the user can rotate with the mouse and view all the spheres from different angles.
Here are a couple of suggestions:
Create a vertex buffer object (VBO) containing the sphere and render this instead of using glutSolidSphere.
Look into instancing, that is, drawing many spheres with a single draw call (see the sketch after the link below).
The following article does almost exactly what you want: http://sol.gfxile.net/instancing.html
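A minimal sketch of the instancing idea using a GL 3.3-style API (it assumes the sphere mesh is already in a VBO/IBO, a shader that adds the per-instance center to each vertex, and illustrative names like instanceVbo and sphereIndexCount):
// Per-instance sphere centers live in their own buffer.
glBindBuffer(GL_ARRAY_BUFFER, instanceVbo);
glBufferData(GL_ARRAY_BUFFER, numSpheres * 3 * sizeof(GLfloat),
             centers, GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribPointer(1, 3, GL_FLOAT, GL_FALSE, 0, 0);
glVertexAttribDivisor(1, 1); // advance this attribute once per instance, not per vertex

// One draw call renders all the spheres.
glDrawElementsInstanced(GL_TRIANGLES, sphereIndexCount,
                        GL_UNSIGNED_INT, 0, numSpheres);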
If you really want efficiency and are only dealing with spheres, you can actually draw a sphere with infinite resolution using only a single quad and a shader: use math to work out the sphere in the shader. Start with an untextured circle, then add depth, normals, lighting, texturing, and so on.
This calculates the sphere per pixel, making it as high-resolution as required.