It seems that the paintEvent method of QGLWidget is called before initializeGL, so where am I supposed to put my OpenGL initialization code?
I'm putting it into the paintEvent method like this:
void MyGLWidget::paintEvent(...)
{
makeCurrent();
..save modelview and projection matrices..
// This is initialization code
GLenum init = glewInit();
if (GLEW_OK != init)
{
/* Problem: glewInit failed, something is seriously wrong. */
qWarning() << glewGetErrorString(init);
}
// Dark blue background
glClearColor(0.2f, 0.0f, 0.5f, 0.0f);
// Enable depth test
glEnable(GL_DEPTH_TEST);
// End initialization code
... drawing code
QPainter painter(this);
...overpainting..
}
I really don't like the idea of having my GLEW library initialization function called every time a paint event is raised... although it does work.
Any suggestions?
You have to initialize OpenGL in initializeGL(); there is no other option.
But you also have to draw inside paintGL(), not inside paintEvent(), and that is where your mistake is.
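A minimal sketch of that split, reusing the GLEW setup and overpainting from the question (the actual drawing stays a placeholder):

void MyGLWidget::initializeGL()
{
    // one-time setup: GLEW, clear color, depth test
    GLenum init = glewInit();
    if (GLEW_OK != init)
        qWarning() << glewGetErrorString(init);

    glClearColor(0.2f, 0.0f, 0.5f, 0.0f); // dark blue background
    glEnable(GL_DEPTH_TEST);
}

void MyGLWidget::paintGL()
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // ... drawing code ...
}

void MyGLWidget::paintEvent(QPaintEvent *event)
{
    // the base-class paintEvent() runs initializeGL() once, then paintGL()
    QGLWidget::paintEvent(event);
    QPainter painter(this);
    // ... overpainting ...
}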
Override the initializeGL() function of QGLWidget. It was created exactly for the purpose you describe.
From its documentation:
This virtual function is called once before the first call to
paintGL() or resizeGL(), and then once whenever the widget has been
assigned a new QGLContext. Reimplement it in a subclass.
link to documentation: http://doc.qt.io/archives/qt-4.7/qglwidget.html#initializeGL
I have this function that should draw a rect based on where the user clicks and drags, but nothing is drawn. I have tried using the updateGL() function, but that returns an error. I've also tried to use QPainter, but I was not sure how to make it work with the code below. I've also tried using makeCurrent() and doneCurrent(), but that makes the program "unexpectedly finish".
shape drawing functions:
void drawingArea::paintEvent(QPaintEvent *event)
{
paintGL();
}
void drawingArea::mousePressEvent(QMouseEvent *event)
{
Shape newShape;
int x = event->x(),y = event->y();
shapes.push_back(newShape);
Point newPoint(x,y);
shapes[0].addPoint(newPoint);
}
void drawingArea::mouseReleaseEvent(QMouseEvent *event)
{
QVector<int> startPoint = shapes[0].points[0].getPoint();
int vertices[]
{
event->x(),event->y(),0,
event->x(),startPoint[1],0,
event->x(),event->y(),0,
startPoint[0],event->y(),0
};
glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3,GL_INT,0,vertices);
glDrawArrays(GL_QUADS,0,4);
QPainter painter(this);
glDisableClientState(GL_VERTEX_ARRAY);
update();
}
I also have this initializer function, and maybe that is the reason, but when I try to run context->makeCurrent() it throws an error, so I've commented it out. I'll include the function in case it is the cause:
drawingArea::drawingArea(QWidget *parent) : QOpenGLWidget(parent)
{
//initializeGL();
//paintGL();
QSurfaceFormat format;
format.setProfile(QSurfaceFormat::CompatibilityProfile);
format.setVersion(2,1);
setFormat(format);
context = new QOpenGLContext;
context->setFormat(format);
context->create();
//context->makeCurrent(this->);
openGLFunctions = context->functions();
}
To execute an OpenGL instruction, a valid and current OpenGL context is needed. QOpenGLWidget provides such an OpenGL context while the event callback methods initializeGL(), paintGL() and resizeGL() are executed.
If you want to use an OpenGL instruction in a mouse event callback, then you have to make the OpenGL context current with makeCurrent() and release it with doneCurrent().
For instance:
void drawingArea::mouseReleaseEvent(QMouseEvent *event)
{
makeCurrent();
// [...]
doneCurrent();
}
Side note: never call the paintGL() callback function directly. Use update() to schedule a paint event.
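Putting both points together, the handler from the question could look roughly like this (the corner ordering of the rectangle is an assumption about what was intended):

void drawingArea::mouseReleaseEvent(QMouseEvent *event)
{
    makeCurrent(); // the widget's OpenGL context is now current in this callback

    QVector<int> startPoint = shapes[0].points[0].getPoint();
    int vertices[] {
        startPoint[0], startPoint[1], 0,
        event->x(),    startPoint[1], 0,
        event->x(),    event->y(),    0,
        startPoint[0], event->y(),    0
    };

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(3, GL_INT, 0, vertices);
    glDrawArrays(GL_QUADS, 0, 4);
    glDisableClientState(GL_VERTEX_ARRAY);

    doneCurrent(); // release the context again
    update();      // schedule a repaint instead of calling paintGL() directly
}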
I developed a simple Point Cloud Data viewer (just for learning) and I am using the Qt library. My first implementation used QOpenGLWindow, but now I want to use a QOpenGLWidget so that I can use it as a dynamic library.
With QOpenGLWidget it only works on the first "painting" (the call to paintGL() when I create the widget) and nothing happens on successive paintGL() calls. This problem does not appear with QOpenGLWindow, which works perfectly.
My paintGL() paints using the following method of my class:
void AEOpenGLViewer::renderGL() {
// Clear
glClear(GL_COLOR_BUFFER_BIT);
shader_program->bind();
shader_program->setUniformValue(u_worldToCamera, camera.toMatrix());
shader_program->setUniformValue(u_cameraToView, projection);
{
for (size_t globjects_index = 0; globjects_index < objects.size(); ++globjects_index) {
objects.at(globjects_index).vao->bind();
shader_program->setUniformValue(u_modelToWorld, objects.at(globjects_index).transform.toMatrix());
glDrawArrays(objects.at(globjects_index).primitive_type, 0, objects.at(globjects_index).vertices.size());
objects.at(globjects_index).vao->release();
}
}
shader_program->release();
}
where "objects" is a std::vector of a local struct that define an OpenGL's object in my applicacion:
typedef struct {
QOpenGLBuffer *vbo;
QOpenGLVertexArrayObject *vao;
GLenum primitive_type;
std::vector<Vertex> vertices;
Transform3D transform;
} GLObject;
The slot "update()" is called using input events (KeyPressEvent, MouseMoveEvent..) just in order to reduce the GPU load.
In case someone is interested, the VBO is allocated as follows:
void AEOpenGLViewer::prepareObjectInGPU(GLObject globject) {
shader_program->bind();
{
// Create Buffer (Do not release until VAO is created) (vbo)
globject.vbo->create();
globject.vbo->bind();
globject.vbo->setUsagePattern(QOpenGLBuffer::StaticDraw);
globject.vbo->allocate(globject.vertices.data(), globject.vertices.size() * sizeof(Vertex));
// Create Vertex Array Object (vao)
globject.vao->create();
globject.vao->bind();
shader_program->enableAttributeArray(0);
shader_program->enableAttributeArray(1);
shader_program->setAttributeBuffer(0, GL_FLOAT, Vertex::positionOffset(), Vertex::PositionTupleSize, Vertex::stride());
shader_program->setAttributeBuffer(1, GL_FLOAT, Vertex::colorOffset(), Vertex::ColorTupleSize, Vertex::stride());
// Release (unbind) all
globject.vbo->release();
globject.vao->release();
}
shader_program->release();
}
The problem is solved. It happened because I have to make the corresponding context current during the allocation of the object in the GPU (my prepareObjectInGPU() method). So the problem is solved as follows:
void AEOpenGLViewer::prepareObjectInGPU(GLObject globject) {
makeCurrent();
shader_program->bind();
{
...
}
shader_program->release();
doneCurrent();
}
In QOpenGLWindow it was not necessary, but I recommend doing it anyway as good practice. Remember that during paintGL() it is not necessary to call makeCurrent(), because it is called automatically before paintGL() is invoked.
However, if someone wants to add an answer, you are welcome.
Thanks!
I would like to solve a problem that I am still struggling with: rendering two QOpenGLWidgets at the same time in different top-level windows with shared shader programs, etc.
My first attempt was to use one context, which wasn't working.
Is it even possible at the moment with QOpenGLWidget? Or do I have to go back to the older QGLWidget? Or use something else?
testAttribute() for Qt::AA_ShareOpenGLContexts returns true, so sharing itself is not the problem;
even QOpenGLContext::areSharing() returns true. So there is something I am missing or don't know. I am not using threads.
Debug output:
MapExplorer true true true
QOpenGLShaderProgram::bind: program is not valid in the current context.
MapExlorer paintGL ends
MapExplorer true true true
QOpenGLShaderProgram::bind: program is not valid in the current context.
MapExlorer paintGL ends
QOpenGLFramebufferObject::bind() called from incompatible context
QOpenGLShaderProgram::bind: program is not valid in the current context.
QOpenGLShaderProgram::bind: program is not valid in the current context.
QOpenGLShaderProgram::bind: program is not valid in the current context.
QOpenGLFramebufferObject::bind() called from incompatible context
QOpenGLFramebufferObject::bind() called from incompatible context
MapView initializeGL:
void MapView::initializeGL()
{
this->makeCurrent();
initializeOpenGLFunctions();
// Initialize World
world->initialize(this->context(), size(), worldCoords);
// Initialize camera shader
camera->initialize();
// Enable depth testing
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glDepthFunc(GL_LEQUAL); // just testing new depth func
glClearColor(0.65f, 0.77f, 1.0f, 1.0f);
}
MapView paintGL:
void MapView::paintGL()
{
this->makeCurrent();
glDrawBuffer(GL_FRONT);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
world->draw(...);
}
MapExplorer initializeGL:
void MapExplorer::initializeGL()
{
this->makeCurrent();
QOpenGLContext* _context = _mapView->context();
_context->setShareContext(this->context());
_context->create();
this->context()->create();
this->makeCurrent();
initializeOpenGLFunctions();
// Enable depth testing
glEnable(GL_DEPTH_TEST);
glEnable(GL_CULL_FACE);
glDepthFunc(GL_LEQUAL); // just testing new depth func
glClearColor(0.65f, 0.77f, 1.0f, 1.0f);
}
MapExplorer paintGL:
void MapExplorer::paintGL()
{
this->makeCurrent();
qDebug() << "MapExplorer" << QOpenGLContext::areSharing(this->context(), _mapView->context()) << (QOpenGLContext::currentContext() == this->context());
QOpenGLShaderProgram* shader = world->getTerrainShader();
qDebug() << shader->create();
shader->bind(); // debug error "QOpenGLShaderProgram::bind: program is not valid in the current context."
// We need the viewport size to calculate tessellation levels and the geometry shader also needs the viewport matrix
shader->setUniformValue("viewportSize", viewportSize);
shader->setUniformValue("viewportMatrix", viewportMatrix);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
qDebug() << "MapExlorer paintGL ends";
//world->drawExplorerView(...);
}
Hi, I've been hacking at this for two days and finally got something to work.
The main reference is Qt's threadrenderer example.
Basically I have a QOpenGLWidget with its own context, and a background thread drawing into a context shared with the QOpenGLWidget's. The framebuffer drawn by the background thread can be used directly by the QOpenGLWidget.
Here are the steps to make things work.
I have the QOpenGLWidget RenderEngine and the background thread RenderWorker:
// the worker is a thread
class RenderWorker : public QThread, protected QOpenGLFunctions
{
public:
// the background thread's context and surface
QOffscreenSurface *surface = nullptr;
QOpenGLContext *context = nullptr;
RenderWorker()
{
context = new QOpenGLContext();
surface = new QOffscreenSurface();
}
...
};
// the engine is a QOpenGLWidget
class RenderEngine : public QOpenGLWidget, protected QOpenGLFunctions
{
protected:
// override
void initializeGL() override;
void resizeGL(int w, int h) override;
void paintGL() override;
private:
// the engine has a background worker
RenderWorker *m_worker = nullptr;
...
};
Create and set up the background thread in the QOpenGLWidget's initializeGL():
void RenderEngine::initializeGL()
{
initializeOpenGLFunctions();
// done with current (QOpenglWidget's) context
QOpenGLContext *current = context();
doneCurrent();
// create the background thread
m_worker = new RenderWorker();
// the background thread's context is shared from current
QOpenGLContext *shared = m_worker->context;
shared->setFormat(current->format());
shared->setShareContext(current);
shared->create();
// must move the shared context to the background thread
shared->moveToThread(m_worker);
// setup the background thread's surface
// must be created here in the main thread
QOffscreenSurface *surface = m_worker->surface;
surface->setFormat(shared->format());
surface->create();
// worker signal
connect(m_worker, SIGNAL(started()), m_worker, SLOT(initializeGL()));
// must move the thread to itself
m_worker->moveToThread(m_worker);
// the worker can finally start
m_worker->start();
}
The background thread has to initialize the shared context in its own thread:
void RenderWorker::initializeGL()
{
context->makeCurrent(surface);
initializeOpenGLFunctions();
}
Now any framebuffer drawn in the background thread can be used directly (as a texture, etc.) by the QOpenGLWidget, e.g. in the paintGL() function.
As far as I know, an OpenGL context is essentially bound to a thread. The shared context and the corresponding surface must be created and set up in the main thread, moved to the other thread and initialized there, before they can finally be used.
In your code I don't see where you link your shader program. You should:
Remove shader->create() from paintGL(); create it once in the initializeGL() function of one of the views and share it among the other views.
Link it in the paintGL() functions before binding it, this way:
if (!shader->isLinked())
shader->link();
The link status of a shader program is context-dependent (this is how OpenGL works; take a look at the QOpenGLShaderProgram::link() source code).
In MapExplorer::initializeGL(), remove _context (it is not used at all). Also remove this->context()->create() (QOpenGLWidget does that for you).
In your main function, put this on the first line (or at least before the QApplication is instantiated):
QApplication::setAttribute(Qt::AA_ShareOpenGLContexts);
I do this in a multi-QOpenGLWidget app and it works fine.
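For reference, a minimal sketch of such a main(); MainWindow is only a placeholder for whatever top-level widget hosts the MapView and MapExplorer views:

#include <QApplication>

int main(int argc, char *argv[])
{
    // must be set before the QApplication object is constructed
    QApplication::setAttribute(Qt::AA_ShareOpenGLContexts);

    QApplication app(argc, argv);
    MainWindow window; // hypothetical host widget containing the two views
    window.show();
    return app.exec();
}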
One reason your context is giving you grief is that you are trying to create your own in MapExplorer::initializeGL(). QOpenGLWidget already creates its own context in its private initialize function; you need to use the one it creates. That context is also made current before each of initializeGL(), paintGL() and resizeGL(). Making your own context current is probably causing errors and is not how that widget is designed to be used.
Context sharing between widgets needs to be done with QOpenGLContext::globalShareContext(). That is a static member of QOpenGLContext that gets initialized when the QGuiApplication is created. That static member and QSurfaceFormat::defaultFormat() are what the QOpenGLWidget's context is automatically initialized from.
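If you want to check this at runtime, a small sketch (assuming Qt 5.5 or later, where QOpenGLContext::globalShareContext() is available, and that Qt::AA_ShareOpenGLContexts has been set before creating the application) could be:

void MapExplorer::initializeGL()
{
    initializeOpenGLFunctions();

    // the widget's context should already share with the global share context
    QOpenGLContext *global = QOpenGLContext::globalShareContext();
    qDebug() << "sharing with global context:"
             << QOpenGLContext::areSharing(context(), global);

    // ... the rest of the GL state setup from the question ...
}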
I have been through all the custom-draw questions related to cocos2d-x on Stack Overflow and other places. I would just like to know if it is possible to override the draw function in CCSprite, not in CCLayer. I want to draw a rounded rectangle. I am used to controlling what is drawn on screen using sprite batches in libgdx and XNA (R.I.P. :( ). I am a bit lost as to how I can do custom drawing for my sprite objects. Below is what I have, and it does not work.
// object.h
private:
std::string _colourLabel;
int width;
int height;
cocos2d::Vec2 position;
public:
PLOPPP(std::string colour, cocos2d::Vec2 pos, int width, int height);
virtual void draw(cocos2d::Renderer *renderer, const cocos2d::kmMat4 &transform, bool transformUpdated);
void onDraw(const Mat4 &transform, uint32_t flags);
//object.cpp
void PLOPPP::draw(cocos2d::Renderer* renderer, const cocos2d::kmMat4& transform, bool transformUpdated)
{
cocos2d::ccDrawColor4F(1.0f, 0.0f, 0.0f, 1.0f);
cocos2d::ccDrawLine(ccp(0, 0), ccp(100, 100));
}
void PLOPPP::onDraw(const Mat4 &transform, uint32_t flags)
{
cocos2d::ccDrawColor4F(1.0f, 0.0f, 0.0f, 1.0f);
cocos2d::ccDrawLine(ccp(0, 0), ccp(100, 100));
}
I have also put breakpoints in both methods and they are never called. I also tried to use the sprite->visit() method to direct the render component into the onDraw method, as recommended in other questions. Cocos2d-x has horrible documentation, so I use their test projects and read the header files. I see that there is a draw method to override, but it is never hit. I am not very experienced in C++, but I have a good grip on programming concepts and writing games in general, so you can use any terms you would use to explain to other programmers.
The reason I want to use CCSprite is to get the benefit of the actions. I also tried to use the CCNode class as a base, but it still does not get hit. I don't want to make every object a layer, because that would go completely against the concept of having objects in a game drawn on a layer. If anyone can suggest any way I can draw custom shapes as sprites that are also affected by the actions applied to them, it would be greatly appreciated.
If you think I could benefit from other posts, please point me to them. This should not be a difficult process, but it seems to be taking up a lot of my time. If I could just draw a line, I'd be able to fly through the rest of the challenges. I just want to draw a custom something.
This should be simple: I'm using a QGLWidget to draw some OpenGL graphics, and I want to be able to write something on top of the rendered OpenGL graphics, so I'm using overpainting with QPainter, as in the Qt overpainting demo.
Here are my two working options for structuring my program:
// This works but it's probably stupid
paintEvent()
{
makeCurrent();
glewInit();
loadShaders();
loadTextures();
loadBuffers();
... actually paint something with openGL ...
QPainter painter(this);
... overpainting ...
}
------------------------------------------------------------------------------------
// This works and may probably be better
paintEvent()
{
QGLWidget::paintEvent(event); // Base class call, this calls initializeGL ONCE and then paintGL each time it's needed
QPainter painter(this);
... overpainting ...
}
initializeGL()
{
glewInit();
}
paintGL()
{
loadShaders();
loadTextures();
loadBuffers();
... actually paint something with openGL ...
}
Considering that the textures and shaders aren't always going to be the same, is either of these options acceptable (performance-wise and in terms of design)?
If not, how would you structure the program?
Thank you for any help.
Load/compile/link shaders in the initializeGL() method, because that is a relatively slow operation (especially if they are read from disk).
Load textures in the initializeGL() method as well.
I'm not sure what your buffers are, but it sounds like setting them up also belongs in initialization, since it is done only once.
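Under that split, the second structure from the question would end up roughly like this (the class name MyGLWidget is just a placeholder; loadShaders(), loadTextures() and loadBuffers() are the question's own placeholders):

void MyGLWidget::initializeGL()
{
    glewInit();
    loadShaders();  // relatively slow: compile/link only once, here
    loadTextures(); // upload once
    loadBuffers();  // allocate once
}

void MyGLWidget::paintGL()
{
    // per-frame work only: bind what initializeGL() created and draw
    // ... actually paint something with OpenGL ...
}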