QOpenGLWidget only works correctly on the first paint - c++

I developed a simple point cloud data viewer (just for learning) using the Qt library. My first implementation used QOpenGLWindow, but now I want to use a QOpenGLWidget so that I can use it as a dynamic library.
With QOpenGLWidget, painting only works the first time (the call to paintGL() when I create the widget) and nothing happens on successive paintGL() calls. This problem does not appear with QOpenGLWindow, which works perfectly.
My paintGL() paints using the following method of my class:
void AEOpenGLViewer::renderGL() {
    // Clear
    glClear(GL_COLOR_BUFFER_BIT);
    shader_program->bind();
    shader_program->setUniformValue(u_worldToCamera, camera.toMatrix());
    shader_program->setUniformValue(u_cameraToView, projection);
    {
        for (size_t globjects_index = 0; globjects_index < objects.size(); ++globjects_index) {
            objects.at(globjects_index).vao->bind();
            shader_program->setUniformValue(u_modelToWorld, objects.at(globjects_index).transform.toMatrix());
            glDrawArrays(objects.at(globjects_index).primitive_type, 0, objects.at(globjects_index).vertices.size());
            objects.at(globjects_index).vao->release();
        }
    }
    shader_program->release();
}
where "objects" is a std::vector of a local struct that define an OpenGL's object in my applicacion:
typedef struct {
    QOpenGLBuffer *vbo;
    QOpenGLVertexArrayObject *vao;
    GLenum primitive_type;
    std::vector<Vertex> vertices;
    Transform3D transform;
} GLObject;
The slot "update()" is called using input events (KeyPressEvent, MouseMoveEvent..) just in order to reduce the GPU load.
In case someone is interested, the vbo is allocated as follows:
void AEOpenGLViewer::prepareObjectInGPU(GLObject globject) {
    shader_program->bind();
    {
        // Create Buffer (do not release until the VAO is created) (vbo)
        globject.vbo->create();
        globject.vbo->bind();
        globject.vbo->setUsagePattern(QOpenGLBuffer::StaticDraw);
        globject.vbo->allocate(globject.vertices.data(), globject.vertices.size() * sizeof(Vertex));

        // Create Vertex Array Object (vao)
        globject.vao->create();
        globject.vao->bind();
        shader_program->enableAttributeArray(0);
        shader_program->enableAttributeArray(1);
        shader_program->setAttributeBuffer(0, GL_FLOAT, Vertex::positionOffset(), Vertex::PositionTupleSize, Vertex::stride());
        shader_program->setAttributeBuffer(1, GL_FLOAT, Vertex::colorOffset(), Vertex::ColorTupleSize, Vertex::stride());

        // Release (unbind) all
        globject.vbo->release();
        globject.vao->release();
    }
    shader_program->release();
}

The problem is solved. It happened because I have to make the corresponding context current during the allocation of the object in the GPU (my prepareObjectInGPU() method). So the problem is solved as follows:
void AEOpenGLViewer::prepareObjectInGPU(GLObject globject) {
    makeCurrent();
    shader_program->bind();
    {
        ...
    }
    shader_program->release();
    doneCurrent();
}
With QOpenGLWindow this was not necessary, but I recommend doing it anyway as good practice. Remember that during paintGL() it is not necessary to call makeCurrent(), because it is called automatically before paintGL() is invoked.
However, if someone wants to post an answer, you are welcome.
Thanks!

Related

How to delete a texture object in a different thread with a shared QOpenGLContext, on Ubuntu 20.04 with the Mesa driver

I have a class called "ShowVideo", which inherits from QOpenGLWidget and QOpenGLFunctions.
In its initializeGL() function, I create a worker thread and a sub-renderer object; within the sub-renderer I create a QOpenGLContext object, which calls setShareContext with the QOpenGLWidget's context for texture sharing.
The "ShowVideo" class has a member function called "onSceneSwitch" (which needs to be called many times), in which I first release all the sub-renderer's OpenGL resources (QOpenGLFramebufferObject, textures, etc., by calling glDeleteTextures), and then recreate the OpenGL resources (for example, using glGenTextures).
On Windows with an NVIDIA graphics card it works fine (the OpenGL resources are successfully released, and the texture object name is available for reuse after glDeleteTextures, even though the shared OpenGL context is not destroyed).
However, on Ubuntu 20.04 with the Mesa graphics driver the OpenGL resources cannot be released. If I use glGenTextures to generate a texture object, I get a new texture object id every time I call onSceneSwitch, which quickly results in exceeding GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS.
The code example is as follows:
class ShowVideo : public QOpenGLWidget, protected QOpenGLFunctions
{
protected:
    void initializeGL()
    {
        m_mainRendererContext = new QOpenGLContext();
        QSurfaceFormat format;
        format.setDepthBufferSize(24);
        format.setStencilBufferSize(8);
        format.setRedBufferSize(8);
        format.setBlueBufferSize(8);
        format.setGreenBufferSize(8);
        format.setAlphaBufferSize(8);
        format.setVersion(2, 0);
        format.setProfile(QSurfaceFormat::CoreProfile);
        m_mainRendererContext->setFormat(format);
        m_mainRendererContext->setShareContext(this->context());
        m_mainRendererContext->create();

        m_mainRenderThread = new QThread;
        m_mainRenderer = new MainRenderer();
        m_mainRenderer->moveToThread(m_mainRenderThread);
        m_mainRendererContext->moveToThread(m_mainRenderThread);
        m_mainRenderer->setGLContext(m_mainRendererContext);
    }

    void onSceneSwitch()
    {
        emit m_mainRenderer->releaseGLResourceSignal();
        ...
        emit m_mainRenderer->reCreateGLResourceSignal();
    }
};
class MainRenderer : public QObject, protected QOpenGLFunctions
{
private:
    GLuint m_texUnit;
    QOpenGLFramebufferObject *m_Fbo = nullptr;
    QOpenGLContext *mContext;
    QOffscreenSurface *mSurface;
public:
    void setGLContext(QOpenGLContext *context)
    {
        mContext = context;
    }
    void releaseGLResource()
    {
        mContext->makeCurrent(mSurface);
        glDeleteTextures(1, &m_texUnit);
        delete m_Fbo;
        m_Fbo = nullptr;
    }
    void reCreateGLResource()
    {
        mContext->makeCurrent(mSurface);
        glGenTextures(1, &m_texUnit);
        qDebug() << "the m_texUnit is : " << m_texUnit;
        m_Fbo = new QOpenGLFramebufferObject(size, format);
    }
};
For example, if m_texUnit is 3 after glGenTextures on initialization, then on Windows I get 3 again from glGenTextures on each onSceneSwitch; but on Ubuntu, because the two OpenGL context objects are shared, the texture object name is never freed for reuse.
By the way, the reason I make the two OpenGL contexts shared is that in the ShowVideo member function paintGL I use the texture id generated in MainRenderer for rendering.
"if i use glGenTextures to gen texture object, i would get new texture object id every time" - as far as I know, this is correct behaviour of an OpenGL implementation. It is not guaranteed that glGenTexture will ever return ids that were previously deleted. The correct way is to reuse texture objects rather than frequently create and destroy them (note that is texture is not equal to texture data; you can change what a texture contains at any time).
Btw, GL_MAX_COMBINED_TEXTURE_IMAGE_UNITS is irrelevant here - it is not the number of textures you can create (this is only limited by technical stuff like sizeof(GLuint), available memory, or implementation-specific limitations), but the number of active texture units (see glActiveTexture), i.e. roughly the maximum number of textures that can be simultaneously used by a shader program.
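A minimal sketch of that reuse pattern, using the names from the question (switchSceneTexture(), the RGBA format and the assumption that m_texUnit starts at 0 are all hypothetical; note that m_texUnit actually holds a texture object name, not a texture unit):

void MainRenderer::switchSceneTexture(int w, int h)
{
    mContext->makeCurrent(mSurface);
    if (m_texUnit == 0)
        glGenTextures(1, &m_texUnit);   // create the texture object exactly once
    glBindTexture(GL_TEXTURE_2D, m_texUnit);
    // Re-specify the storage: the texture object (and its id) is reused;
    // only its contents change for the new scene.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
}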

How to set a uniform variable on user input only?

I have a RenderWidget class inherited from QOpenGLWidget, which has the following two methods, among others:
class RenderWidget : public QOpenGLWidget, protected QOpenGLFunctions_4_3_Core {
    // ...
public slots:
    void setSmthEnabled(bool enabled) {
        float val = enabled ? 1.0f : 0.0f;
        shader.setUniformValue("uniformUserInput", val);
    }
public:
    void paintGL() {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ...
        shader.setUniformValue("uniform1", value1);
        shader.setUniformValue("uniform2", value2);
        // ...
        mesh->draw();
    }
};
paintGL is called 100 times per second, while setSmthEnabled is only called when the user toggles a checkbox in the UI, which happens occasionally. The uniformUserInput uniform does not need to be set every frame, so I try to set it only in the user input slot, but it does not work: the uniform keeps the value it was given on initialization.
I guess this happens because rendering is asynchronous and the uniform cannot be updated while the pipeline is busy. That's why I tried calling glFinish() before setting uniformUserInput in the setSmthEnabled slot, but it didn't help. The only solution I found is to rewrite the class in the following manner:
class RenderWidget : public QOpenGLWidget, protected QOpenGLFunctions_4_3_Core {
    // ...
private:
    float val = 1.0f;
public slots:
    void setSmthEnabled(bool enabled) {
        val = enabled ? 1.0f : 0.0f;
    }
public:
    void paintGL() {
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ...
        shader.setUniformValue("uniform1", value1);
        shader.setUniformValue("uniform2", value2);
        // ...
        shader.setUniformValue("uniformUserInput", val);
        mesh->draw();
    }
};
The real-world code contains many more user-input uniforms, so I would rather not update all of them every frame and keep redundant member variables for that purpose. How do I update uniforms on user input only?
shader.setUniformValue in the code above only calls QOpenGLShaderProgram::setUniformValue on the relevant QOpenGLShaderProgram object and returns.
so I try to set it only in the user input slot, but it does not work
Which is to be expected. OpenGL is a stateful API, and that state is encapsulated in a context. Any number of contexts may be used in a single program; a context must be bound (made current) to be used, and can be unbound. For the sake of clarity in Qt's programming model, an OpenGL context is bound only in very specific circumstances, namely inside initializeGL, paintGL and resizeGL. You can also make a context current yourself with QOpenGLContext::makeCurrent.
Only while an OpenGL context is bound current can you do things with it, like setting uniform values.
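For instance, a minimal sketch in the asker's class (noting that QOpenGLShaderProgram::setUniformValue also requires the program itself to be bound, not just the context current):

void RenderWidget::setSmthEnabled(bool enabled) {
    makeCurrent();   // bind this widget's context outside of paintGL()
    shader.bind();   // uniforms are set on the bound program
    shader.setUniformValue("uniformUserInput", enabled ? 1.0f : 0.0f);
    shader.release();
    doneCurrent();
}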
I guess this happens because rendering is asynchronous and the uniform cannot be updated while the pipeline is busy.
That's not the reason. OpenGL is perfectly capable of this; in fact it is specified that you can make any call at any time and everything gets queued up. You can even replace texture images (even through a PBO) while the GPU is still using the texture object for rendering (the driver has to keep track of these changes and defer execution until all affected resources are free to use).
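For reference, a minimal sketch of such a PBO-based replacement (pboId, texId, w, h and pixels are hypothetical and assumed to be created elsewhere):

glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pboId);
glBufferData(GL_PIXEL_UNPACK_BUFFER, w * h * 4, pixels, GL_STREAM_DRAW);
glBindTexture(GL_TEXTURE_2D, texId);
// With a PBO bound, the last argument is an offset into the buffer, not a pointer.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);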

2 QOpenGLWidget shared context causing crash

I would like to solve a problem I'm still dealing with: rendering two QOpenGLWidgets at the same time, in different top-level windows, with shared shader programs etc.
My first attempt was to use one context; that didn't work.
Is this even possible with the current QOpenGLWidget, or do I have to go back to the older QGLWidget? Or use something else?
testAttribute for Qt::AA_ShareOpenGLContexts returns true, so there is no problem with sharing,
and even QOpenGLContext::areSharing returns true. So there is something I am missing or don't know. I am not using threads.
Debug output:
MapExplorer true true true
QOpenGLShaderProgram::bind: program is not valid in the current context.
MapExlorer paintGL ends
MapExplorer true true true
QOpenGLShaderProgram::bind: program is not valid in the current context.
MapExlorer paintGL ends
QOpenGLFramebufferObject::bind() called from incompatible context
QOpenGLShaderProgram::bind: program is not valid in the current context.
QOpenGLShaderProgram::bind: program is not valid in the current context.
QOpenGLShaderProgram::bind: program is not valid in the current context.
QOpenGLFramebufferObject::bind() called from incompatible context
QOpenGLFramebufferObject::bind() called from incompatible context
MapView initializeGL:
void MapView::initializeGL()
{
    this->makeCurrent();
    initializeOpenGLFunctions();
    // Initialize World
    world->initialize(this->context(), size(), worldCoords);
    // Initialize camera shader
    camera->initialize();
    // Enable depth testing
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    glDepthFunc(GL_LEQUAL); // just testing a new depth func
    glClearColor(0.65f, 0.77f, 1.0f, 1.0f);
}
MapView paintGL:
void MapView::paintGL()
{
    this->makeCurrent();
    glDrawBuffer(GL_FRONT);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    world->draw(...);
}
MapExplorer initializeGL:
void MapExplorer::initializeGL()
{
    this->makeCurrent();
    QOpenGLContext* _context = _mapView->context();
    _context->setShareContext(this->context());
    _context->create();
    this->context()->create();
    this->makeCurrent();
    initializeOpenGLFunctions();
    // Enable depth testing
    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);
    glDepthFunc(GL_LEQUAL); // just testing a new depth func
    glClearColor(0.65f, 0.77f, 1.0f, 1.0f);
}
MapExplorer paintGL:
void MapExplorer::paintGL()
{
    this->makeCurrent();
    qDebug() << "MapExplorer" << QOpenGLContext::areSharing(this->context(), _mapView->context()) << (QOpenGLContext::currentContext() == this->context());
    QOpenGLShaderProgram* shader = world->getTerrainShader();
    qDebug() << shader->create();
    shader->bind(); // debug error "QOpenGLShaderProgram::bind: program is not valid in the current context."
    // We need the viewport size to calculate tessellation levels, and the geometry shader also needs the viewport matrix
    shader->setUniformValue("viewportSize", viewportSize);
    shader->setUniformValue("viewportMatrix", viewportMatrix);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    qDebug() << "MapExlorer paintGL ends";
    //world->drawExplorerView(...);
}
Hi, I've been hacking at this for two days and finally got something to work.
The main reference is Qt's threadrenderer example.
Basically I have a QOpenGLWidget with its own context, and a background thread drawing to a context shared from the QOpenGLWidget. The framebuffer drawn by the background thread can be used directly by the QOpenGLWidget.
Here are the steps to make things work:
I have the QOpenGLWidget RenderEngine and the background thread RenderWorker.
// the worker is a thread
class RenderWorker : public QThread, protected QOpenGLFunctions
{
    // the background thread's context and surface
    QOffscreenSurface *surface = nullptr;
    QOpenGLContext *context = nullptr;

    RenderWorker()
    {
        context = new QOpenGLContext();
        surface = new QOffscreenSurface();
    }
    ...
};

// the engine is a QOpenGLWidget
class RenderEngine : public QOpenGLWidget, protected QOpenGLFunctions
{
protected:
    // overrides
    void initializeGL() override;
    void resizeGL(int w, int h) override;
    void paintGL() override;

private:
    // the engine has a background worker
    RenderWorker *m_worker = nullptr;
    ...
};
Create and set up the background thread in the QOpenGLWidget's initializeGL():
void RenderEngine::initializeGL()
{
    initializeOpenGLFunctions();

    // done with the current (QOpenGLWidget's) context
    QOpenGLContext *current = context();
    doneCurrent();

    // create the background thread
    m_worker = new RenderWorker();

    // the background thread's context is shared from the current one
    QOpenGLContext *shared = m_worker->context;
    shared->setFormat(current->format());
    shared->setShareContext(current);
    shared->create();

    // must move the shared context to the background thread
    shared->moveToThread(m_worker);

    // set up the background thread's surface
    // must be created here, in the main thread
    QOffscreenSurface *surface = m_worker->surface;
    surface->setFormat(shared->format());
    surface->create();

    // worker signal
    connect(m_worker, SIGNAL(started()), m_worker, SLOT(initializeGL()));

    // must move the thread to itself
    m_worker->moveToThread(m_worker);

    // the worker can finally start
    m_worker->start();
}
The background thread has to initialize the shared context in its own thread:
void RenderWorker::initializeGL()
{
    context->makeCurrent(surface);
    initializeOpenGLFunctions();
}
Now any framebuffer drawn in the background thread can be used directly (as a texture, etc.) by the QOpenGLWidget, e.g. in its paintGL() function.
As far as I know, an OpenGL context is in a sense bound to a thread. A shared context and the corresponding surface must be created and set up in the main thread, moved to another thread, and initialized there before they can finally be used.
In your code I don't see where you link your shader program. You should:
Remove shader->create() from paintGL; create the program once, in the initializeGL function of one of the views, and share it among the other views.
Link it in the paintGL functions this way before binding it:
if (!shader->isLinked())
    shader->link();
The link status of a shader program is context dependent (see OpenGL, and take a look at the QOpenGLShaderProgram::link() source code).
In MapExplorer::initializeGL(), remove _context (it's not used at all). Also remove this->context()->create() (this is done by QOpenGLWidget).
In your main function, put this on the first line (or anywhere before the QApplication instantiation):
QApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
I do this in a multi-QOpenGLWidget app and it works fine.
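For reference, a minimal sketch of that entry point (MainWindow is a hypothetical host for the QOpenGLWidgets):

int main(int argc, char *argv[])
{
    // must be set before the QApplication object is constructed
    QApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
    QApplication app(argc, argv);

    MainWindow w;   // hypothetical window hosting the QOpenGLWidgets
    w.show();
    return app.exec();
}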
One reason your context is giving you grief is that you are trying to create your own in MapExplorer::initializeGL. QOpenGLWidget already creates its own context in its private initialize function; you need to use the one it creates. Its context is also made current before each of initializeGL, paintGL and resizeGL. Making your own context current is probably causing errors and is not how that widget is designed to be used.
Context sharing between widgets needs to be done with QOpenGLContext::globalShareContext(). That static member of QOpenGLContext gets initialized when QGuiApplication is created, and that static member plus the defaultFormat are what a QOpenGLWidget's context is automatically initialized from.
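A minimal sketch of checking that sharing (globalShareContext() exists since Qt 5.5; myWidget is hypothetical):

QOpenGLContext *global = QOpenGLContext::globalShareContext();
qDebug() << "widget shares with global:"
         << QOpenGLContext::areSharing(myWidget->context(), global);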

Use OpenGL context in an external class in Qt

I have a window in Qt that inherits from QGLViewer. If I create a shader program in that class, e.g. QGLShaderProgram myShader, everything runs fine.
However, when I start moving some rendering calls to classes outside the class that has the draw() call, things break.
The application compiles without errors, but when executing it I receive the error "The program has unexpectedly finished."
I found out that from Qt4 to Qt5 the shader class changed, QOpenGLShaderProgram being the one used in Qt5. I gave it a try and the same problem occurred; nevertheless, I got a different error message: "QOpenGLFunctions created with non-current context".
This makes me think that when calling OpenGL functions from a class that has no direct relation to the class that actually does the drawing, the OpenGL context is "lost".
How can I make the context visible across all the classes? In general my code looks like:
MyViewer.hpp
class MyViewer : public QGLViewer
{
public:
    MyViewer(const QGLFormat format);
    ~MyViewer();

protected:
    void init();
    void draw()
    {
        // Clear color buffer and depth buffer
        // Do stuff
        m_cube.render();
    }

private:
    ...
    ...
    Cube m_cube;
};
Cube.cpp
class Cube
{
public:
    Cube()
    {
        m_shaderProgram.addShaderFromSourceFile(QGLShader::Vertex, ":/vertex.glsl");
        m_shaderProgram.addShaderFromSourceFile(QGLShader::Fragment, ":/fragment.glsl");
        m_shaderProgram.link();
        // Initialize VAO and VBOs
    }

    void render() { /* render OpenGL calls */ }

private:
    QGLShaderProgram m_shaderProgram;
};
OpenGL contexts are global, but you can explicitly share a context between two viewers like so:
QGLViewer(QGLContext *context,
          QWidget *parent = 0,
          const QGLWidget *shareWidget = 0,
          Qt::WindowFlags flags = 0)
According to the documentation:
Same as QGLViewer(), but a QGLContext can be provided so that viewers share GL contexts, even with QGLContext sub-classes (use shareWidget otherwise).
So first check the order in which your classes are created, because the Cube could be calling OpenGL functions while your viewer is still incomplete.
If you call OpenGL functions before the QGLViewer has created a context, you get an error.
If so, a quick workaround would be to create a new QGLContext in the constructor of Cube and pass it back to the viewer.
Otherwise, do this:
Cube() {} // empty cube constructor

void initShaders()
{
    m_shaderProgram.addShaderFromSourceFile(QGLShader::Vertex, ":/vertex.glsl");
    m_shaderProgram.addShaderFromSourceFile(QGLShader::Fragment, ":/fragment.glsl");
    m_shaderProgram.link();
    // Initialize VAO and VBOs
}
and in the constructor of the viewer do:
MyViewer(const QGLFormat format)
{
    m_cube.initShaders();
}
I have not tested this code but it should alter the order of initialization.
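A possibly safer variant (a sketch assuming QGLViewer's usual lifecycle, where init() is the first method called with a current GL context) is to defer the shader setup to init():

void MyViewer::init()
{
    // The GL context is current here, so creating shaders/VBOs is safe.
    m_cube.initShaders();
}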

Drawing with OpenGL without killing the CPU and without parallelizing

I'm writing a simple but useful OpenGL program for my work, which consists of showing what a vector field looks like. The program simply takes the data from a file and draws arrows; I need to draw a few thousand of them. I'm using Qt for the windowing and the OpenGL API.
The arrow unit is a cylinder and a cone, combined together in the function Arrow().
for (long i = 0; i < modifiedArrows.size(); i++) {
    glColor4d(modifiedArrows[i].color.redF(), modifiedArrows[i].color.greenF(),
              modifiedArrows[i].color.blueF(), modifiedArrows[i].opacity);
    openGLobj->Arrow(modifiedArrows[i].fromX, modifiedArrows[i].fromY,
                     modifiedArrows[i].fromZ, modifiedArrows[i].toX,
                     modifiedArrows[i].toY, modifiedArrows[i].toZ,
                     simulationSettings->vectorsThickness);
}
Now the problem is that running an infinite loop to keep drawing this keeps the CPU fully busy, which is not so nice. I tried as much as possible to remove all calculations from the paintGL() function, and only simple ones remain. I end the paintGL() function with glFlush() and glFinish(), and yet the main CPU core is always maxed out.
If I remove this loop, the CPU is no longer too busy. But I have to draw thousands of arrows anyway.
Is there any solution to this other than parallelizing?
You didn't point out how you implemented your openGLobj->Arrow method, but if you are using 100% CPU time on this, you are probably painting the arrows in immediate mode. This is really CPU intensive, because you have to transfer data from the CPU to the GPU for every single instruction between glBegin() and glEnd(). Using GLUT to draw your data is really inefficient too.
The way to go here is to use GPU memory and processing power to display your data. Phyatt has already pointed you in some directions, but I will try to be more specific: use a Vertex Buffer Object (VBO).
The idea is to pre-allocate the memory needed to display your data on the GPU and just update this chunk of memory when needed. This will probably make a huge difference in the efficiency of your code, because you will let the efficient video card driver handle the CPU->GPU transfers.
To illustrate the concept, I will show you just some pseudo-code at the end of the answer, but it's by no means completely correct. I haven't tested it and didn't have time to implement the drawing for you, but it's a concept that can clarify your mind.
class Form
{
public:
    virtual ~Form()
    {
        // delete the VBO when the program terminates
        // (the vertices vector cleans itself up)
        glDeleteBuffers(1, &vboId);
    }

    // Implemented as virtual: if you reimplement it in a child class, the child
    // method gets called :)
    // Generally you will not need to reimplement this.
    virtual void draw()
    {
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, 0);
        // I am drawing the form as triangles; maybe you want to do it your own
        // way. Do it as you need! :)
        // Look! No glBegin() and glEnd() - the video card driver handles the
        // CPU->GPU transfer in a single instruction!
        glDrawArrays(GL_TRIANGLES, 0, vertices.size() / 3);
        glDisableClientState(GL_VERTEX_ARRAY);
        // bind 0 to switch back to normal pointer operation
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }

protected:
    // Called from the child class constructor, after generateVertices().
    // (Form's own constructor cannot call generateVertices(): a virtual call
    // inside a base constructor would not dispatch to the child class.)
    void uploadVertices()
    {
        // generate a new VBO and get the associated id
        glGenBuffers(1, &vboId);
        // bind the VBO in order to use it
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        // upload the data to the VBO
        glBufferData(GL_ARRAY_BUFFER, vertices.size() * sizeof(GLfloat),
                     vertices.data(), GL_STATIC_DRAW);
    }

    // Populate the vertices vector with the form's vertices.
    // Remember, any geometric form in OpenGL is rendered as primitives
    // (points, quads, triangles, etc); the common way is multiple triangles.
    // You can draw it with glBegin() and glEnd() just to debug; after that,
    // put the generated vertices inside the vertices buffer.
    // Consider that the form is at the origin; use pushes and pops to apply
    // transformations. Each form (cone or cilinder) has its own way of drawing.
    virtual void generateVertices() = 0;

    GLuint vboId = 0;
    std::vector<GLfloat> vertices;
};
class Cone : public Form
{
public:
    Cone()
    {
        generateVertices();
        uploadVertices();
    }

private:
    void generateVertices()
    {
        // Populate the vertices with the cone's formula. Good exercise :)
        // Reference: http://mathworld.wolfram.com/Cone.html
    }
};
class Cilinder : public Form
{
public:
    Cilinder()
    {
        generateVertices();
        uploadVertices();
    }

private:
    void generateVertices()
    {
        // Populate the vertices with the cilinder's formula. Good exercise :)
        // Reference: http://math.about.com/od/formulas/ss/surfaceareavol_3.htm
    }
};
class Visualizer : public QOpenGLWidget
{
public:
    // Reimplement the paint function to draw one arrow per data entry,
    // using the classes above.
    void paintGL()
    {
        for (uint i = 0; i < data.size(); i++)
        {
            // I really don't have a clue how you position your arrows around
            // your world model. Keep in mind that glPushMatrix, glPopMatrix and
            // glMultMatrix are deprecated. I recommend reading
            // http://duriansoftware.com/joe/An-intro-to-modern-OpenGL.-Chapter-3:-3D-transformation-and-projection.html
            // if you want to implement this in the most efficient way.
            glPushMatrix();
            glMultMatrixf(data[i].transform());
            cilinder.draw();
            cone.draw();
            glPopMatrix();
        }
    }

private:
    Cone cone;
    Cilinder cilinder;
    std::vector<Data> data;
};
As a final note, I can't assure you that this is the most efficient way of doing things. If you have a HUGE amount of data, you would probably need a data structure like an octree, or a scene graph, to optimize your code.
I recommend taking a look at OpenSceneGraph or the Visualization Toolkit to see whether these methods are not already implemented for you, which would save you a lot of time.
Try this link for some ideas:
What are some best practices for OpenGL coding (esp. w.r.t. object orientation)?
Basically, what I've seen people do to increase their FPS and drop quality includes the following:
Using DisplayLists (cache complex or repetitive matrix stacks); see the sketch after this list.
Using Vertex Arrays.
Using simpler geometry with fewer faces.
Using simpler lighting.
Using simpler textures.
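As a minimal sketch of the display-list idea (deprecated fixed-function GL, matching the style of the question; arrowTransform is a hypothetical per-arrow 4x4 column-major matrix):

// Compile the unit arrow once, at initialization.
GLuint arrowList = glGenLists(1);
glNewList(arrowList, GL_COMPILE);
openGLobj->Arrow(0.0, 0.0, 0.0, 0.0, 0.0, 1.0,   // unit arrow along +Z
                 simulationSettings->vectorsThickness);
glEndList();

// Per frame: position and replay the list for every vector.
glPushMatrix();
glMultMatrixf(arrowTransform); // hypothetical per-arrow transform
glCallList(arrowList);
glPopMatrix();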
The main advantage of OpenGL is that it works with a lot of graphics cards, which are built to do many of those 4x4 matrix transformations and multiplications very quickly, and which provide more RAM for storing rendered or partially rendered objects.
Assuming that all the vectors change so much and so often that you can't cache any of the renderings...
My approach to this problem would be to simplify the drawing down to just lines and points and get that to draw at the desired frame rate (a line for your cylinder and a colored point on the end for the direction).
After that draws fast enough, try making the drawing more complex, like a rectangular prism instead of a line and a pyramid instead of a colored point.
Rounded objects typically require a lot more surfaces and calculations.
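A minimal sketch of that lines-and-points fallback (assuming the arrow fields from the question and client-side vertex arrays):

// Pack one GL_LINES segment per arrow and draw them all in a single call.
std::vector<GLfloat> lineVerts;
lineVerts.reserve(modifiedArrows.size() * 6);
for (size_t i = 0; i < modifiedArrows.size(); ++i) {
    const GLfloat seg[6] = {
        GLfloat(modifiedArrows[i].fromX), GLfloat(modifiedArrows[i].fromY), GLfloat(modifiedArrows[i].fromZ),
        GLfloat(modifiedArrows[i].toX),   GLfloat(modifiedArrows[i].toY),   GLfloat(modifiedArrows[i].toZ)
    };
    lineVerts.insert(lineVerts.end(), seg, seg + 6);
}
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, lineVerts.data());
glDrawArrays(GL_LINES, 0, GLsizei(lineVerts.size() / 3));
glDisableClientState(GL_VERTEX_ARRAY);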
I am not an expert on the subject, but I would google other OpenGL tutorials that deal with optimization.
Hope that helps.
EDIT: Removed references to NeHe tutorials because of comments.