glDeleteBuffers crashes during destructor call - c++

Hi, I am using a VBO to load an image texture and then draw it in C++. The VBO id is generated, bound, and drawn here:
void ViewManager::render()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glEnable(GL_TEXTURE_2D);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

    if(decompressTileImage->tileTexure == 0)
    {
        loadTexture(decompressTileImage);

        glGenBuffers(1, &decompressTileImage->VBOId);
        glBindBuffer(GL_ARRAY_BUFFER, decompressTileImage->VBOId);
        glBufferData(GL_ARRAY_BUFFER,
                     sizeof(*(this->tileCoordList)) + sizeof(*(this->tileTextureCoordList)),
                     0, GL_STATIC_DRAW);
        glBufferSubData(GL_ARRAY_BUFFER, 0,
                        sizeof(*(this->tileCoordList)), this->tileCoordList);
        glBufferSubData(GL_ARRAY_BUFFER, sizeof(*(this->tileCoordList)),
                        sizeof(*(this->tileTextureCoordList)), this->tileTextureCoordList);
    }
    else
    {
        glBindBuffer(GL_ARRAY_BUFFER, decompressTileImage->VBOId);
        glBindTexture(GL_TEXTURE_2D, decompressTileImage->tileTexure);
    }

    glColor4f(1.0f, 1.0f, 1.0f, textureAlpha);

    if(textureAlpha < 1.0)
    {
        textureAlpha = textureAlpha + .03;
        this->tiledMapView->renderNow();
    }

    glTexCoordPointer(3, GL_FLOAT, 0, (void*)sizeof(*(this->tileCoordList)));
    glVertexPointer(3, GL_FLOAT, 0, 0);
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glDisable(GL_BLEND);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
    glDisable(GL_TEXTURE_2D);
}
This function is in a class named MapTile. MapTile is created 35 times for 35 images downloaded from the internet, and a thread then calls this method for each of the 35 MapTile objects, over and over. That is why I first check whether the method is being called for the first time, so that I load the texture data and generate the VBO id only once per MapTile object. I check this with the line if(decompressTileImage->tileTexure == 0); on every later call I just bind the VBO id to draw, with no need to load the data again.
Here decompressTileImage is an instance of the TextureImageInfo class, whose implementation is:
#include "TextureImageInfo.h"
TextureImageInfo::TextureImageInfo(unsigned char * image,GLuint format,int texWidth,int texHeight,int imageWidth,int imageHeight,float tex_x,float tex_y)
{
// TODO Auto-generated constructor stub
this->format = format;
this->image = image;
this->imageHeight = imageHeight;
this->imageWidth = imageWidth;
this->texHeight = texHeight;
this->texWidth = texWidth;
this->tileTexure = 0;
this->VBOId = 0;
this->time = 0;
}
TextureImageInfo::~TextureImageInfo()
{
if(VBOId!=0)
glDeleteBuffers(1,&VBOId);
}
It draws and does everything fine, but it crashes when I try to clean up the memory in the destructor of the TextureImageInfo class given above, and I don't understand why. Note that I check in the destructor too, with the same kind of if condition, that the VBOId was actually generated and loaded.

As indicated in the comments, OpenGL ES commands should be submitted from the same thread where the context was created.
From the BlackBerry docs, Parallel processing with OpenGL ES:
It is important to note that each OpenGL ES rendering context targets a single thread of execution.
If you want to render multiple scenes, you can separate each scene into its own thread, making sure each thread has its own context.
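So the crash is consistent with the destructor running on a thread that does not own the context. One common pattern, sketched below with a hypothetical GLDeleteQueue helper (not something from the question's code base), is to have destructors only queue buffer ids for deletion and let the thread that owns the context flush the queue each frame:

#include <mutex>
#include <vector>
// (assumes the platform's GL/GLES header is already included for GLuint)

// Hypothetical helper: destructors push ids here instead of calling
// glDeleteBuffers directly from whatever thread they happen to run on.
class GLDeleteQueue {
public:
    void push(GLuint vboId) {
        std::lock_guard<std::mutex> lock(mutex_);
        pending_.push_back(vboId);
    }

    // Call at the start of each frame, on the thread that owns the context.
    void flush() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!pending_.empty()) {
            glDeleteBuffers(static_cast<GLsizei>(pending_.size()), pending_.data());
            pending_.clear();
        }
    }

private:
    std::mutex mutex_;
    std::vector<GLuint> pending_;
};

// In ~TextureImageInfo (any thread):  deleteQueue.push(VBOId);
// In the render loop (GL thread):     deleteQueue.flush();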

Related

Cannot set position of vertex in simple OpenGL 3.0 program [duplicate]

I'm following a tutorial on creating a Game Engine in Java using OpenGL.
I'm trying to render a triangle on the screen. Everything is running fine and I can change the background color but the triangle won't show. I've also tried running the code provided as part of the tutorial series and it still doesn't work.
Link to the tutorial: http://bit.ly/1EUnvz4
Link to the code used in the video: http://bit.ly/1z7XUlE
Setup
I've tried checking the OpenGL version and believe I have 2.1.
Mac OSX
Java - Eclipse
Mesh.java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL15.*;
import static org.lwjgl.opengl.GL20.*;

public class Mesh
{
    private int vbo;  // pointer to the buffer
    private int size; // size of the data to buffer

    public Mesh()
    {
        vbo = glGenBuffers();
        size = 0;
    }

    public void addVertices(Vertex[] vertices)
    {
        size = vertices.length;
        // add the data by first binding the buffer
        glBindBuffer(GL_ARRAY_BUFFER, vbo); // vbo is now the buffer
        // and then buffering the data
        glBufferData(GL_ARRAY_BUFFER, Util.createFlippedBuffer(vertices), GL_STATIC_DRAW);
    }

    public void draw()
    {
        glEnableVertexAttribArray(0); // divide up the data into a segment
        glBindBuffer(GL_ARRAY_BUFFER, vbo); // vbo is now the buffer
        // tell OpenGL more about the segment:
        // segment = 0, elements = 3, type = float, normalize? = false, vertex size, where to start = 0
        glVertexAttribPointer(0, 3, GL_FLOAT, false, Vertex.SIZE * 4, 0);
        // draw GL_TRIANGLES starting from '0' with a given 'size'
        glDrawArrays(GL_TRIANGLES, 0, size);
        glDisableVertexAttribArray(0);
    }
}
RenderUtil.java
import static org.lwjgl.opengl.GL11.*;
import static org.lwjgl.opengl.GL30.*;

public class RenderUtil
{
    public static void clearScreen()
    {
        // TODO: Stencil Buffer
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    }

    // set everything to engine defaults
    public static void initGraphics()
    {
        glClearColor(0.0f, 0.0f, 0.0f, 0.0f); // default color
        glFrontFace(GL_CW);                   // direction for visible faces
        glCullFace(GL_BACK);                  // direction for back faces
        glEnable(GL_CULL_FACE);               // don't draw back faces
        glEnable(GL_DEPTH_TEST);              // determines draw order by pixel depth testing
        // TODO: Depth clamp for later
        glEnable(GL_FRAMEBUFFER_SRGB);        // do exponential correction on gamma so we don't have to
    }
}
Util.java
import java.nio.FloatBuffer;
import org.lwjgl.BufferUtils;

public class Util
{
    // create a float buffer (we need this because java is weird)
    public static FloatBuffer createFloatBuffer(int size)
    {
        return BufferUtils.createFloatBuffer(size);
    }

    // flip the buffer to fit what OpenGL expects
    public static FloatBuffer createFlippedBuffer(Vertex[] vertices)
    {
        FloatBuffer buffer = createFloatBuffer(vertices.length * Vertex.SIZE);
        for (int i = 0; i < vertices.length; i++)
        {
            buffer.put(vertices[i].getPos().getX());
            buffer.put(vertices[i].getPos().getY());
            buffer.put(vertices[i].getPos().getZ());
        }
        buffer.flip();
        return buffer;
    }
}
You are using an invalid mix of legacy and modern OpenGL.
The glVertexAttribPointer() and glEnableVertexAttribArray() functions you are calling are used for setting up generic vertex attributes. This is the only way to set up vertex attributes in current versions of OpenGL (the Core Profile of desktop OpenGL, or OpenGL ES 2.0 and later). They can be used in older versions of OpenGL as well, but only in combination with providing your own shaders implemented in GLSL.
If you are just getting started, your best option is probably to stick with what you have, and study how to start implementing your own shaders. If you wanted to get the code working with the legacy fixed pipeline (which is only supported in the Compatibility Profile of OpenGL), you would need to use the glVertexPointer() and glEnableClientState() functions instead.
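For comparison, a fixed-pipeline version of the draw call would look roughly like the sketch below, written against the C API (LWJGL's static imports mirror these calls one-to-one). It is only a sketch: it assumes a compatibility context and that the VBO holds tightly packed 3-float positions.

// Legacy client-state equivalent of Mesh.draw(); Compatibility Profile only.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 3 * sizeof(float), (void*)0); // stride assumes xyz only
glDrawArrays(GL_TRIANGLES, 0, size);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);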
Try a single import?
import static org.lwjgl.opengl.GL11.*;
I only have one import on mine. Also try importing the packages you need separately; one thing you may well be doing wrong is importing multiple versions of OpenGL.

OpenGL only renders the first frame then blackness

I have created a deferred renderer using OpenGL that seems to be working great for exactly one frame. Then it renders just blackness. For the code below I have flattened the architecture of the renderer quite a lot, but I think everything relevant is included. If more context is needed, you can look here.
This first piece is run at program initialization:
// corresponds to deferredRenderer.Bind();
glViewport(0, 0, display.GetWidth(), display.GetHeight());
glClearColor(0, 0, 0, 1);
glFrontFace(GL_CW);
glCullFace(GL_BACK);
glDepthFunc(GL_LEQUAL);
Then the loop begins. First the renderer is bound for the object/material pass:
// corresponds to deferredRenderer.BindForObjectPass();
gBuffer.BindAsDrawFrameBuffer();
glEnable(GL_CULL_FACE);
glEnable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
This code is then run for every object (each has its own shader):
materialShader.Bind();
diffuseTexture->Bind(0);
shader.SetUniform("u_diffuse", 0);
glm::mat4 modelViewMatrix = camera.GetViewMatrix() * transform.GetModelMatrix();
glm::mat4 projectionMatrix = camera.GetProjectionMatrix();
materialShader.SetUniform("u_model_view_matrix", modelViewMatrix);
materialShader.SetUniform("u_projection_matrix", projectionMatrix);
glBindVertexArray(vertexArray);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, 0);
glBindVertexArray(0);
After all objects have been rendered, the light pass begins. At this stage in development it's just one shader with a hardcoded light:
// corresponds to deferredRenderer.RenderLightPass();
display.BindAsDrawFrameBuffer();
glDisable(GL_CULL_FACE);
glDisable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT);
screenSpaceShader.Bind();
gBuffer.GetAlbedoTexture().Bind(10);
screenSpaceShader.SetUniform("u_albedo", 10);
gBuffer.GetNormalTexture().Bind(11);
screenSpaceShader.SetUniform("u_normals", 11);
glBindVertexArray(vertexArray);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 8);
glBindVertexArray(0);
And finally the backbuffer is switched to the front:
SDL_GL_SwapWindow(window);
After this the renderer is bound for object pass again and continues to loop.
Note that the first frame renders exactly as it should, so I think it's safe to assume the setup is at least somewhat correct. The fact that it changes after one full frame also tells me that it probably has something to do with the GL state being left in a strange condition after the first loop around. I have also made sure that the gBuffer framebuffer is complete, so that shouldn't be the problem.
I fixed it by using some debugging tools that told me which GL calls caused errors.
The issue was that a shader was accidentally destroyed as soon as it was constructed, because I was initializing it on the stack in an initializer list. All of the code in the question should be working correctly, though.
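For anyone hitting the same thing, the pitfall looks roughly like this sketch (hypothetical Shader and Renderer classes, not the project's actual code): a temporary constructed in a constructor's initializer list is destroyed as soon as the constructor finishes, and an RAII wrapper takes its GL object with it.

// Hypothetical RAII wrapper that owns a GL program.
class Shader {
public:
    explicit Shader(const char* path) { /* glCreateProgram(), compile, link */ }
    ~Shader() { /* glDeleteProgram(...) runs here */ }
};

class Renderer {
public:
    // BUG: Shader("material.glsl") is a temporary; it is destroyed when the
    // constructor finishes, its destructor deletes the GL program, and the
    // reference member is left dangling.
    Renderer() : shader(Shader("material.glsl")) {}

private:
    const Shader& shader;
    // Fix: store the Shader by value (with proper move semantics) or in a
    // std::unique_ptr so its lifetime matches the Renderer's.
};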

Rendering mesh polygons in OpenGL - very slow

I recently switched from immediate mode and have a new rendering process. There must be something I am not understanding; I think it has something to do with the indices.
Here is my diagram: Region -> Mesh -> Polygon Array -> 3 vertex indices, which reference the master list of vertices.
Here is my render code:
// Render the mesh
void WLD::render(GLuint* textures, long curRegion, CFrustum cfrustum)
{
    int num = 0;

    // Set up rendering states
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    // Set up my indices
    GLuint indices[3];

    // Cycle through the PVS
    while(num < regions[curRegion].visibility.size())
    {
        int i = regions[curRegion].visibility[num];

        // Make sure the region is not "dead"
        if(!regions[i].dead && regions[i].meshptr != NULL)
        {
            // Check to see if the mesh is in the frustum
            if(cfrustum.BoxInFrustum(regions[i].meshptr->min[0], regions[i].meshptr->min[2], regions[i].meshptr->min[1],
                                     regions[i].meshptr->max[0], regions[i].meshptr->max[2], regions[i].meshptr->max[1]))
            {
                // Cycle through every polygon in the mesh and render it
                for(int j = 0; j < regions[i].meshptr->polygonCount; j++)
                {
                    // Assign the index for the polygon to the index in the huge vertex array
                    // This, I think, is redundant
                    indices[0] = regions[i].meshptr->poly[j].vertIndex[0];
                    indices[1] = regions[i].meshptr->poly[j].vertIndex[1];
                    indices[2] = regions[i].meshptr->poly[j].vertIndex[2];

                    // Enable texturing and bind the appropriate texture
                    glEnable(GL_TEXTURE_2D);
                    glBindTexture(GL_TEXTURE_2D, textures[regions[i].meshptr->poly[j].tex]);

                    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &vertices[0].x);
                    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &vertices[0].u);

                    // Draw
                    glDrawElements(GL_TRIANGLES, 3, GL_UNSIGNED_INT, indices);
                }
            }
        }
        num++;
    }

    // End of rendering - disable states
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
}
Sorry if I left anything out. I really appreciate feedback and help with this; I would even consider paying someone who is good with OpenGL and optimization to help me.
There is no point in using array rendering if you're only rendering 3 vertices at a time. The idea is to send thousands through with a single call. That is, you render a single "Polygon Array" or "Mesh" with one call.
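To make that concrete, here is a sketch of the restructured inner loop (mesh is a hypothetical alias for regions[i].meshptr, and it assumes every polygon in the mesh shares one texture, which the original loop does not guarantee): build the mesh's full index list once, then draw it with a single call.

#include <vector>

// Collect the indices of every polygon in the mesh up front...
std::vector<GLuint> meshIndices;
meshIndices.reserve(mesh->polygonCount * 3);
for (int j = 0; j < mesh->polygonCount; j++) {
    meshIndices.push_back(mesh->poly[j].vertIndex[0]);
    meshIndices.push_back(mesh->poly[j].vertIndex[1]);
    meshIndices.push_back(mesh->poly[j].vertIndex[2]);
}

// ...then one texture bind and one draw call per mesh.
glBindTexture(GL_TEXTURE_2D, textures[mesh->poly[0].tex]);
glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &vertices[0].x);
glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &vertices[0].u);
glDrawElements(GL_TRIANGLES, (GLsizei)meshIndices.size(),
               GL_UNSIGNED_INT, meshIndices.data());

If polygons use different textures, sort them by texture once at load time and issue one draw per texture group rather than one per triangle.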

Texturing VBOs (Vertex Buffer Objects)

I'm currently working on a procedural planet generation tool that works by taking a cube, mapping it to a sphere and then applying a heightmap to each face to generate terrain.
I'm using a VBO for each face which is created using the following method:
void Planet::setVertexBufferObject()
{
    Vertex* vertices;
    int currentVertex;
    Vertex* vertex;

    for(int i = 0; i < 6; i++)
    {
        // bottom face
        if(i == 0)
        {
            glBindBuffer(GL_ARRAY_BUFFER, bottomVBO);
        }
        // top face
        else if(i == 1)
        {
            glBindBuffer(GL_ARRAY_BUFFER, topVBO);
        }
        // front face
        else if(i == 2)
        {
            glBindBuffer(GL_ARRAY_BUFFER, frontVBO);
        }
        // back face
        else if(i == 3)
        {
            glBindBuffer(GL_ARRAY_BUFFER, backVBO);
        }
        // left face
        else if(i == 4)
        {
            glBindBuffer(GL_ARRAY_BUFFER, leftVBO);
        }
        // right face
        else
        {
            glBindBuffer(GL_ARRAY_BUFFER, rightVBO);
        }

        vertices = (Vertex*)glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY);
        currentVertex = 0;

        for(int x = 0; x < size; x++)
        {
            for(int z = 0; z < size; z++)
            {
                currentVertex = z * size + x;
                vertex = &vertices[currentVertex];

                vertex->xTextureCoord = (x * 1.0f) / 512.0f;
                vertex->zTextureCoord = (z * 1.0f) / 512.0f;

                Vector3 normal;
                vertex->xNormal = normal.x;
                vertex->yNormal = normal.y;
                vertex->zNormal = normal.z;

                vertex->x = heightMapCubeFace[i][x][z][0];
                vertex->y = heightMapCubeFace[i][x][z][1];
                vertex->z = heightMapCubeFace[i][x][z][2];

                vertex->x *= (1.0f + ((heightMaps[i][z][x] / 256.0f) * 0.1));
                vertex->y *= (1.0f + ((heightMaps[i][z][x] / 256.0f) * 0.1));
                vertex->z *= (1.0f + ((heightMaps[i][z][x] / 256.0f) * 0.1));
            }
        }

        glUnmapBuffer(GL_ARRAY_BUFFER);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }
}
I have left out the setIndexBufferObject() method as that is working OK.
I am then rendering the sphere using this method:
void Planet::render()
{
    // bottom face
    glBindBuffer(GL_ARRAY_BUFFER, bottomVBO);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, bottomIBO);

    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(6 * sizeof(float)));

    glEnableClientState(GL_NORMAL_ARRAY);
    glNormalPointer(GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(3 * sizeof(float)));

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), BUFFER_OFFSET(0));

    TextureManager::Inst()->BindTexture(textueIDs[0]);
    glClientActiveTexture(GL_TEXTURE0 + textueIDs[0]);

    glDrawElements(GL_TRIANGLE_STRIP, numberOfIndices, GL_UNSIGNED_INT, BUFFER_OFFSET(0));

    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

    // next face and so on...
The textures are being loaded using FreeImage; as you can see from the above code, I am just using the example texture manager that came with FreeImage.
Why is binding the texture not working?
Why is binding the texture not working?
Describing very specifically the behavior you're seeing, or, better, attaching a URL to a screenshot, goes a long way when trying to get through graphics questions.
Next, you need to include the GL error state, or at least demonstrate within the code that you're checking for it and failing if glGetError() doesn't return GL_NO_ERROR.
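Something as simple as the following sketch works while debugging (GL_CHECK is a hypothetical macro, not part of OpenGL):

#include <cstdio>

// Hypothetical debug helper: sprinkle after suspect GL calls.
#define GL_CHECK(where)                                                   \
    do {                                                                  \
        GLenum err = glGetError();                                        \
        if (err != GL_NO_ERROR)                                           \
            std::fprintf(stderr, "GL error 0x%04X at %s\n", err, where);  \
    } while (0)

// Usage:  glDrawElements(...);  GL_CHECK("draw bottom face");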
Your VBO / draw element state looks reasonable though you've deliberately omitted the definition of the index buffer object so we'll take it on faith that you're sending real indices into your glDrawElements() call. For a sanity check, do the glDrawElements() call with the raw indices pointer rather than binding a VBO for the elements.
You've also omitted the type definition of Vertex which is required to know if the offsets you're supplying to glTexCoordPointer() are consistent with the struct definition.
Lastly, I'm guessing most people on the forum don't know FreeImage, which is what you've stated you're using to load textures. If there's a texturing problem, it's impossible to see it because of the opaque nature of using this 3rd-party library to set up texturing on your behalf.
If the library has confused texture IDs, set up a wrap mode that isn't supported, set a minification filter inconsistent with the expected mipmapping (on/off), not enabled texturing, or set the texture environment mode such that the modulation with the base geometry color is not what you expect, then texturing will not work or will appear not to work.
To troubleshoot just use the library for texture load and use the texture ID they provide. Then set up the filter modes and texture environment yourself. Turn off mipmapping while troubleshooting. Incomplete mipmap chains is a very common error when texturing.
You can also turn off per-fragment operations to simplify your troubleshooting. Disable blending, depth testing, and scissoring.
Assuming your texture has power-of-two dimensions or your implementation supports NPOT, try these settings immediately before drawing:
glDisable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
glDisable(GL_SCISSOR_TEST);
glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
glBindTexture(GL_TEXTURE_2D, texID);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE); // turn off modulation
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR); // turn off mipmapping
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP); // turn off repeats
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
glClientActiveTexture(GL_TEXTURE0+textueIDs[0]);
What are you trying to do here?
The active texture unit has nothing to do with a random texture object.
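Texture units and texture objects live in separate namespaces. A conventional single-texture setup looks like this sketch, with texID standing in for whatever id the texture manager returned:

// Texture *units* (GL_TEXTURE0, GL_TEXTURE1, ...) select which slot the next
// bind and the texcoord array apply to; texture *objects* (ids from
// glGenTextures) hold the image data. Adding an object id to GL_TEXTURE0
// mixes the two up.
glClientActiveTexture(GL_TEXTURE0);   // unit that consumes the texcoord array
glActiveTexture(GL_TEXTURE0);         // unit the next bind applies to
glBindTexture(GL_TEXTURE_2D, texID);  // bind the object to unit 0
glEnable(GL_TEXTURE_2D);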

Using VBO's and CPU usage is very high

I'm really not sure what to do anymore. I've made my application use VBOs and my CPU usage still goes into the 70s and 80s. My render procedure works like this:
Set the camera transformation.
If the shape has not been tessellated, tessellate it.
Create its VBO.
If it has a VBO, use it.
You will notice I have display lists too; I might use these if VBOs are not supported. I went and found an OpenGL demo that renders a 32,000-poly mesh at 60 fps on my PC using 4% CPU. I'm rendering about 10,000 polys @ 60 fps using VBOs and it's using 70-80%.
Here is my render proc:
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    POINT hh = controls.MainGlFrame.GetMousePos();
    POINTFLOAT S;
    S.x = static_cast<float>(hh.x);
    S.y = static_cast<float>(hh.y);
    POINTFLOAT t;
    t.x = 256;
    t.y = 256;
    POINT dimensions;
    dimensions.x = 512;
    dimensions.y = 512;

    glDeleteTextures(1, &texName);
    texName = functions.CreateGradient(col, t, S, 512, 512, true);
    itt = true;
}

HDC hdc;
PAINTSTRUCT ps;

glEnable(GL_MULTISAMPLE_ARB);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

hdc = BeginPaint(controls.MainGlContext.mhWnd, &ps);

// start OGL code
glClearColor(1.0f, 1.0f, 1.0f, 0.0f);
if(!current.isdrawing)
    glClear(GL_COLOR_BUFFER_BIT);

glPushMatrix();
glTranslatef(controls.MainGlFrame.GetCameraX(),
             controls.MainGlFrame.GetCameraY(), 0);
//glTranslatef(current.ScalePoint.x, current.ScalePoint.y, 0);
glScalef(current.ScaleFactor, current.ScaleFactor, current.ScaleFactor);
//glTranslatef(-current.ScalePoint.x, -current.ScalePoint.y, 0);

if(!current.isdrawing)
{
    for(unsigned int currentlayer = 0; currentlayer < layer.size(); ++currentlayer)
    {
        PolygonTesselator.Init();

        for(unsigned int i = 0; i < layer[currentlayer].Shapes.size(); i++)
        {
            if(layer[currentlayer].Shapes[i].DisplayListInt == -999)
            {
                gluTessNormal(PolygonTesselator.tobj, 0, 0, 1);
                PolygonTesselator.Set_Winding_Rule(layer[currentlayer].Shapes[i].WindingRule);

                glEnable(GL_TEXTURE_2D);
                glBindTexture(GL_TEXTURE_2D, texName);

                layer[currentlayer].Shapes[i].DisplayListInt = glGenLists(1);
                glNewList(layer[currentlayer].Shapes[i].DisplayListInt, GL_COMPILE);

                PolygonTesselator.SetDimensions(layer[currentlayer].Shapes[i].Dimensions, layer[currentlayer].Shapes[i].minima);
                PolygonTesselator.Begin_Polygon();
                for(unsigned int c = 0; c < layer[currentlayer].Shapes[i].Contour.size(); ++c)
                {
                    if(layer[currentlayer].Shapes[i].Color.a != 0)
                    {
                        PolygonTesselator.Begin_Contour();
                        for(unsigned int j = 0; j < layer[currentlayer].Shapes[i].Contour[c].DrawingPoints.size(); ++j)
                        {
                            gluTessVertex(PolygonTesselator.tobj,
                                          &layer[currentlayer].Shapes[i].Contour[c].DrawingPoints[j][0],
                                          &layer[currentlayer].Shapes[i].Contour[c].DrawingPoints[j][0]);
                        }
                        PolygonTesselator.End_Contour();
                    }
                }
                PolygonTesselator.End_Polygon();
                glEndList();

                PolygonTesselator.TransferVerticies(layer[currentlayer].Shapes[i].OutPoints);

                glGenBuffersARB(1, &layer[currentlayer].Shapes[i].VBOInt);
                glBindBufferARB(GL_ARRAY_BUFFER_ARB, layer[currentlayer].Shapes[i].VBOInt);
                glBufferDataARB(GL_ARRAY_BUFFER_ARB,
                                sizeof(GLfloat) * layer[currentlayer].Shapes[i].OutPoints.size(),
                                &layer[currentlayer].Shapes[i].OutPoints[0], GL_STATIC_DRAW_ARB);

                InvalidateRect(controls.MainGlFrame.framehWnd, NULL, false);
            }
            else // run vbo
            {
                //glEnable(GL_TEXTURE_2D);
                //glDisable(GL_TEXTURE_2D);
                //glBindTexture(GL_TEXTURE_2D, texName);
                glColor4f(layer[currentlayer].Shapes[i].Color.r,
                          layer[currentlayer].Shapes[i].Color.g,
                          layer[currentlayer].Shapes[i].Color.b,
                          layer[currentlayer].Shapes[i].Color.a);
                //glColor4f(1,1,1,1);

                glBindBufferARB(GL_ARRAY_BUFFER_ARB, layer[currentlayer].Shapes[i].VBOInt);
                //glCallList(layer[currentlayer].Shapes[i].DisplayListInt);
                glEnableClientState(GL_VERTEX_ARRAY);
                glVertexPointer(2, GL_FLOAT, 0, 0);
                glDrawArrays(GL_TRIANGLES, 0, layer[currentlayer].Shapes[i].OutPoints.size() / 2);
                glDisableClientState(GL_VERTEX_ARRAY);
                glBindBufferARB(GL_ARRAY_BUFFER_ARB, 0);
            }

            glDisable(GL_TEXTURE_2D);

            // Draw outlines
            if(layer[currentlayer].Shapes[i].Outline.OutlinePoints.size() > 4)
            {
                glColor4f(layer[currentlayer].Shapes[i].Outline.OutlineColor.r,
                          layer[currentlayer].Shapes[i].Outline.OutlineColor.g,
                          layer[currentlayer].Shapes[i].Outline.OutlineColor.b,
                          layer[currentlayer].Shapes[i].Outline.OutlineColor.a);
            }
        }
        PolygonTesselator.End();
    }
}

glPopMatrix();
// end OGL code
glFlush();
SwapBuffers(hdc);
glDisable(GL_MULTISAMPLE_ARB);
EndPaint(controls.MainGlContext.mhWnd, &ps);
}
Why could I be getting such high cpu usage?
Under what conditions is that first bit of code run? There are a couple of suspicious-looking lines in there:
glDeleteTextures(1,&texName);
texName = functions.CreateGradient(col,t,S,512,512,true);
If you're deleting and recreating a texture every time you paint, that could get expensive. I couldn't say how expensive the OpenGL parts would be; I'd expect uploading texture data to be reasonably efficient, even if deleting and creating texture names might be less so. But perhaps CreateGradient is inherently slow. Or maybe you're accidentally hitting some kind of slow path for your graphics card. Or the function is creating all the mipmap levels. And so on.
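A sketch of the obvious fix, using a hypothetical gradientDirty flag next to the names from the question: regenerate the gradient only when its inputs actually change.

// Hypothetical: set gradientDirty = true wherever col, t, or S change.
if (gradientDirty) {
    if (texName != 0)
        glDeleteTextures(1, &texName);
    texName = functions.CreateGradient(col, t, S, 512, 512, true);
    gradientDirty = false;
}
// Every other paint just reuses texName via glBindTexture().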
Aside from that, some random ideas:
What is the present interval? If the buffer swap is set to sync with the monitor, you may incur a delay because of that. (You can use the WGL_EXT_swap_control extension to tweak this value; see the sketch after this list.)
If all of this is being run in response to a WM_PAINT, check that you aren't getting unexpected extra WM_PAINTs for some reason.
Check that the polygon tesselator's Init and End functions aren't doing any real work, since they're being called every time, even if there's no tessellating to be done.
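For the present-interval point above, a minimal sketch of using WGL_EXT_swap_control (error handling omitted; call once after the GL context is made current):

#include <windows.h>
#include <GL/gl.h>

typedef BOOL (WINAPI *PFNWGLSWAPINTERVALEXTPROC)(int interval);

// interval 1 = sync buffer swaps to the monitor, 0 = swap immediately.
PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT =
    (PFNWGLSWAPINTERVALEXTPROC)wglGetProcAddress("wglSwapIntervalEXT");
if (wglSwapIntervalEXT)
    wglSwapIntervalEXT(1);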
Based on the code snippet you have provided, you have (at one point) loops nested four layers deep. You may be seeing high CPU load due to running each of these loops an extremely large number of times. Can you give us any idea how many iterations these loops are having to run through?
Try grabbing a timestamp inside each loop iteration and compare it against the previous to see how long it is taking to run one iteration of each particular loop. This should help you determine what part of the function is taking up the bulk of your CPU time.
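A sketch of that measurement with std::chrono (the loop bound and body are stand-ins for one of the real loops):

#include <chrono>
#include <cstdio>

using Clock = std::chrono::steady_clock;

auto prev = Clock::now();
for (unsigned int i = 0; i < iterationCount; ++i) {  // hypothetical bound
    // ... one iteration of the suspect loop ...
    auto now = Clock::now();
    long long us = std::chrono::duration_cast<std::chrono::microseconds>(now - prev).count();
    std::printf("iteration %u took %lld us\n", i, us);
    prev = now;
}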