I am trying to use QOpenGLWidget without subclassing.
When I try to make OpenGL calls outside of QOpenGLWidget's methods or signals, nothing seems to happen. For example, the following code clears the window to black even though I set glClearColor:
MainWindow::MainWindow(QWidget *parent)
    : QMainWindow(parent)
{
    auto glw = new QOpenGLWidget( this );
    glw->makeCurrent();
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glw->doneCurrent();
    connect( glw, &QOpenGLWidget::aboutToCompose, [=] {
        glClear( GL_COLOR_BUFFER_BIT );
    });
    setCentralWidget( glw );
}
However, when I move glClearColor inside the lambda connected to the aboutToCompose signal, the widget is cleared with white.
As essentially explained in the comments section by Fabio and G.M., QOpenGLWidget::makeCurrent won't work if called before the widget is sufficiently set up.
As of Qt 5.11 (and probably other releases), QOpenGLWidget::makeCurrent works by calling the QOpenGLContext::makeCurrent method. That call only happens if the QOpenGLWidget is already in the initialized state, however. Additionally, QOpenGLContext::makeCurrent can itself fail. The latter at least gives some indication of failure via its bool return value. Unfortunately, QOpenGLWidget::makeCurrent gives no indication at all: it fails silently.
Besides knowing this and heeding the advice in the comments, one can use the QOpenGLWidget::context method to determine whether the QOpenGLWidget is in the initialized state. According to the linked documentation (and as seen in practice), the context method returns "0 if not yet initialized" and a non-null pointer otherwise. So it's a means to determine whether QOpenGLWidget::makeCurrent will actually call QOpenGLContext::makeCurrent, and a partial workaround for QOpenGLWidget::makeCurrent returning void. That's probably not particularly useful in this case, but it can be useful in related contexts, so it seemed worth mentioning.
So, to get QOpenGLWidget::makeCurrent to actually succeed, it has to be called after the QOpenGLWidget has been initialized.
Reading between the lines of this question, it sounds as if the author is wondering what needs to be done for GL calls to work. As the question's author recognizes, delaying the GL calls until the aboutToCompose signal has fired works for that (at least in the context of this user's code). Another way is to make the QOpenGLWidget visible first and then call the GL code.
Hope this answers your question completely, or at least helpfully.
You can call auto *ctx = QOpenGLContext::currentContext(); to check whether makeCurrent succeeded.
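Putting those checks together, a minimal sketch (not the question's code, and assuming glw is the QOpenGLWidget created earlier) might look like this:

if (glw->context() != nullptr) {                        // non-null once the widget is initialized
    glw->makeCurrent();
    if (QOpenGLContext::currentContext() != nullptr) {  // makeCurrent actually succeeded
        glClearColor(1.0f, 1.0f, 1.0f, 1.0f);           // GL calls are now safe
        glw->doneCurrent();
    }
}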
Both the functions SDL_GetRenderer(SDL_Window*) and SDL_CreateRenderer(SDL_Window*, int, Uint32) seem to do the same thing: return a pointer to an SDL_Renderer from the window. However, which function is more appropriate for which task? The SDL wiki does not provide much information on where each should be used, so please explain what each function does, how they differ, and where each should be used.
SDL_CreateRenderer creates a renderer for a window, letting you specify some options. The renderer is stored in the window's data, which you can query with SDL_GetRenderer (so the latter is roughly equivalent to (SDL_Renderer *)SDL_GetWindowData(window, SDL_WINDOWRENDERDATA)).
If you call SDL_GetRenderer without having created it beforehand, you'll get a NULL pointer.
If you call SDL_CreateRenderer on the same window twice, the second call will fail with SDL_SetError("Renderer already associated with window"); (see line 805).
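A short sketch of the intended division of labor (window title and flags here are arbitrary): create the renderer once, then retrieve that same pointer anywhere you only have the window.

SDL_Window *window = SDL_CreateWindow("demo", SDL_WINDOWPOS_CENTERED,
                                      SDL_WINDOWPOS_CENTERED, 640, 480, 0);
// Create the renderer once, with the options you want.
SDL_Renderer *renderer = SDL_CreateRenderer(window, -1, SDL_RENDERER_ACCELERATED);

// Later, in code that only has the window pointer:
SDL_Renderer *same = SDL_GetRenderer(window);  // same pointer as created above
if (same == NULL) {
    // No renderer has been created for this window yet.
}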
I would like to have a window in which a picture changes depending on what happens during an infinite loop.
Imagine someone walking along a track: when he leaves it, the program should display an arrow pointing back towards the track. I have a program that determines the distance between the user and the track, but I have no idea how to update the image.
I use Code::Blocks with wxWidgets and think I have to use the wxStaticBitmap class. (If there is a better way, please tell me.)
I tried with:
while(true)
{
    updatePosition();
    if(userNotOnTrack)
    {
        if(trackRightOfUser)
        {
            StaticDirectionBitmap->SetBitmap("D:\\WindowsDgps\\WindowsDgpsGraphic\\arrow_right.png");
        }
    }
}
(Note that this snippet is mostly pseudocode, except the StaticDirectionBitmap part.)
By default the bitmap shows a "no_arrow" image. With this I get an error: error: no matching function for call to 'wxStaticBitmap::SetBitmap(const char [51])'. I see from the documentation that this cannot work, but I have no idea what would.
If anyone knows how to handle this, I would be happy to hear it. I remember a few years back, when I tried something similar in C# and failed completely because of thread safety... I hope it is not as hard in C++ with wxWidgets.
SetBitmap takes a wxBitmap parameter, not a string. So the call should look something like:
SetBitmap(wxBitmap("D:\\WindowsDgps\\WindowsDgpsGraphic\\arrow_right.png", wxBITMAP_TYPE_PNG));
Make sure that, prior to making this call, the PNG handler has been added with a call like one of the following:
wxImage::AddHandler(new wxPNGHandler);
or
::wxInitAllImageHandlers();
The easiest place to do this is in the application's OnInit() method.
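For example, a minimal sketch (the class name MyApp is a placeholder):

#include <wx/wx.h>
#include <wx/image.h>

class MyApp : public wxApp
{
public:
    bool OnInit() override
    {
        ::wxInitAllImageHandlers();  // or: wxImage::AddHandler(new wxPNGHandler);
        // ... create and show the main frame here ...
        return true;
    }
};

wxIMPLEMENT_APP(MyApp);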
If you want to update the static bitmap from a worker thread, you should throw a wxThreadEvent and then make the call to SetBitmap in the event handler. The sample here shows how to generate and handle these events.
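A rough sketch of that pattern (MyFrame, OnUpdateArrow, NotifyArrowChanged and the string payload are made-up names for illustration, not part of the question's code):

#include <wx/wx.h>
#include <wx/statbmp.h>

class MyFrame : public wxFrame
{
public:
    MyFrame() : wxFrame(nullptr, wxID_ANY, "Track")
    {
        m_bitmap = new wxStaticBitmap(this, wxID_ANY, wxNullBitmap);
        // Route wxEVT_THREAD events posted by the worker to our handler.
        Bind(wxEVT_THREAD, &MyFrame::OnUpdateArrow, this);
    }

    // Runs on the GUI thread, so calling SetBitmap here is safe.
    void OnUpdateArrow(wxThreadEvent &evt)
    {
        m_bitmap->SetBitmap(wxBitmap(evt.GetString(), wxBITMAP_TYPE_PNG));
    }

private:
    wxStaticBitmap *m_bitmap;
};

// Called from the worker thread: post an event instead of touching the GUI.
void NotifyArrowChanged(MyFrame *frame, const wxString &pngPath)
{
    wxThreadEvent *evt = new wxThreadEvent();
    evt->SetString(pngPath);   // payload: which image to show
    wxQueueEvent(frame, evt);  // thread-safe; the event queue takes ownership
}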
I am populating a QGraphicsScene with instances of a custom item class (inheriting QGraphicsPathItem). At some point during runtime, I try to remove an item (plus its children) from the scene by calling:
delete pItem;
This automatically calls QGraphicsScene::removeItem(); however, it also leads to a crash in the class QGraphicsSceneFindItemBspTreeVisitor during the next repaint.
TL;DR: The solution is to ensure that QGraphicsItem::prepareGeometryChange() gets called before the item's removal from the scene.
The problem is that during the item's removal from the scene, the scene's internal index was not properly updated, resulting in a crash upon the next attempt to draw the scene.
Since in my case I use a custom subclass of QGraphicsPathItem, I simply put the call to QGraphicsItem::prepareGeometryChange() into its destructor: I do not manually remove the item from the scene (via QGraphicsScene::removeItem()) but instead simply call delete pItem;, which in turn triggers the item's destructor and, later on, removeItem().
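In code, the idea looks roughly like this (the class name is a placeholder for the custom subclass described above):

class MyPathItem : public QGraphicsPathItem
{
public:
    ~MyPathItem() override
    {
        // Notify the scene's index that this item's geometry is about to
        // become invalid, so the next repaint doesn't visit a stale entry.
        prepareGeometryChange();
    }
};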
I ran into the same issue using PySide2.
Disabling BSP indexing (as mentioned here) does work for me and is most likely the actual solution to the problem. But it is a sub-optimal one, because the scene I am working with can get arbitrarily large. I also tried calling prepareGeometryChange before removing the item, and while that did seem to work for a while, the error re-appeared just a few weeks later.
What has worked for me (so far) is manually removing all child items before removing the item itself...
To that end, I am overriding the QGraphicsScene::removeItem method in Python:
class GraphicsScene(QtWidgets.QGraphicsScene):
    def removeItem(self, item: QtWidgets.QGraphicsItem) -> None:
        # Remove all children first, then the item itself.
        for child_item in item.childItems():
            super().removeItem(child_item)
        super().removeItem(item)
Note that this will not work quite the same way in C++, because QGraphicsScene::removeItem is not a virtual method, so you will probably have to add your own method, removeItemSafely or whatever.
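A rough C++ equivalent might look like this (removeItemSafely is the made-up name suggested above; after removeItem, ownership of the items passes back to the caller):

void removeItemSafely(QGraphicsScene *scene, QGraphicsItem *item)
{
    // Remove all children first, mirroring the Python workaround above,
    // then remove the item itself.
    const QList<QGraphicsItem *> children = item->childItems();
    for (QGraphicsItem *child : children)
        scene->removeItem(child);
    scene->removeItem(item);
}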
Disclaimer: Other methods have worked for me as well ... until they didn't. I have not seen a crash in QGraphicsSceneFindItemBspTreeVisitor::visit since introducing this workaround, but that does not mean that this is actually the solution. Use at your own risk.
I had this issue and it was a real pain to fix. Besides the crash, I was also getting "ghost" items appearing on the screen.
I was changing the boundingRect size twice inside a custom updateGeometry() method that updates the bounding-box and shape caches of the item.
I was initializing the bounding rectangle as QRectF():
boundingBox = QRectF();
... then doing some processing (and taking the opportunity to clean up unneeded objects in the scene).
And finally setting the bounding rectangle to its new size:
boundingBox = polygon.boundingRect();
Calling prepareGeometryChange() at the beginning, alone, didn't solve the issue, since I was changing the size twice.
The solution was to remove the first assignment.
It seems this issue has been around for a long time, and there are still open bugs about it.
But there is a workaround which I found useful after hours of debugging, reading, and investigation; I found it here:
https://forum.qt.io/topic/71316/qgraphicsscenefinditembsptreevisitor-visit-crashes-due-to-an-obsolete-paintevent-after-qgraphicsscene-removeitem/17
Some other tips and tricks regarding the graphics scene are here:
https://tech-artists.org/t/qt-properly-removing-qgraphicitems/3063/6
I have a question about how to (correctly) use glewInit().
Assume I have a multiple-window application; should I call glewInit() exactly once at the application (i.e., global) level, or call glewInit() for each window (i.e., each OpenGL rendering context)?
Depending on the GLEW build being used, the watertight method is to call glewInit after each and every context change!
With X11/GLX, function pointers are invariant.
But on Windows, OpenGL function pointers are specific to each context. Some builds of GLEW are multi-context aware, while others are not. So to cover that case, technically you have to call it every time the context changes.
(EDIT: due to request for clarification)
for each window (i.e., each OpenGL rendering context)?
First things first: OpenGL contexts are not tied to windows. It is perfectly fine to have a single window but multiple rendering contexts. In Microsoft Windows what matters to OpenGL is the device context (DC) associated with a window. But it also works the other way round: You can have a single OpenGL context, but multiple windows using it (as long as the window's pixelformat is compatible with the OpenGL context).
So this is legitimate:
HWND wnd = create_a_window();
HDC dc = GetDC(wnd);
PIXELFORMATDESCRIPTOR pf = select_pixelformat();
SetPixelFormat(dc, pf);
HGLRC rc0 = create_opengl_context(dc);
HGLRC rc1 = create_opengl_context(dc);
wglMakeCurrent(dc, rc0);
draw_stuff(); // uses rc0
wglMakeCurrent(dc, rc1);
draw_stuff(); // uses rc1
And so is this:
HWND wnd0 = create_a_window();
HDC dc0 = GetDC(wnd0);
HWND wnd1 = create_a_window();
HDC dc1 = GetDC(wnd1);
PIXELFORMATDESCRIPTOR pf = select_pixelformat();
SetPixelFormat(dc0, pf);
SetPixelFormat(dc1, pf);
HGLRC rc = create_opengl_context(dc0); // works also with dc1
wglMakeCurrent(dc0, rc);
draw_stuff();
wglMakeCurrent(dc1, rc);
draw_stuff();
Here's where extensions enter the picture. A function like glActiveTexture is not part of the OpenGL specification that has been pinned down into the Windows Application Binary Interface (ABI). Hence you have to get a function pointer to it at runtime. That's what GLEW does. Internally it looks like this:
First it defines types for the function pointers, declares them as extern variables, and uses a little preprocessor magic to avoid namespace collisions.
typedef void (*PFNGLACTIVETEXTURE)(GLenum);
extern PFNGLACTIVETEXTURE glew_ActiveTexture;
#define glActiveTexture glew_ActiveTexture
In glewInit the function pointer variables are set to the values obtained using wglGetProcAddress (for the sake of readability I omit the type casts).
int glewInit(void)
{
    /* ... */
    if( openglsupport >= gl1_2 ) {
        /* ... */
        glew_ActiveTexture = wglGetProcAddress("glActiveTexture");
        /* ... */
    }
    /* ... */
}
Now the important part: wglGetProcAddress works with the OpenGL rendering context that is current at the time of calling, i.e. whichever context the very last wglMakeCurrent call before it made current. As already explained, extension function pointers are tied to their OpenGL context, and different OpenGL contexts may give different function pointers for the same function.
So if you do this
wglMakeCurrent(…, rc0);
glewInit();
wglMakeCurrent(…, rc1);
glActiveTexture(…);
it may fail. So in general, with GLEW, every call to wglMakeCurrent must immediately be followed by a glewInit. Some builds of GLEW are multi-context aware and do this internally; others are not. However, it is perfectly safe to call glewInit multiple times, so the safe way is to call it, just to be sure.
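So the watertight pattern, as a sketch (reusing the dc, rc0 and rc1 from the examples above):

wglMakeCurrent(dc, rc0);
glewInit();                    // resolve pointers for rc0
glActiveTexture(GL_TEXTURE0);  // safe: pointers match the current context

wglMakeCurrent(dc, rc1);
glewInit();                    // re-resolve pointers for rc1
glActiveTexture(GL_TEXTURE0);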
It should not be necessary to get multiple function pointers, one per context, according to this 2016 discussion: https://github.com/nigels-com/glew/issues/38
nigels-com answers this question from kest-relm…
do you think it is correct to call glewInit() for every context change?
Is the above the valid way to go for handling multiple opengl contexts?
…with…
I don't think calling glewInit for each context change is desirable, or even necessary, depending on the circumstances.
Obviously this scheme would not be appropriate for multi-threading, anyway.
Kest-relm then says…
From my testing it seems like calling glewInit() repeatedly is not required; the code runs just fine with multiple contexts
It is documented here:
https://www.opengl.org/wiki/Load_OpenGL_Functions
where it states:
"In practice, if two contexts come from the same vendor and refer to the same GPU, then the function pointers pulled from one context will work in the other."
I assume this should be true for most mainstream Windows GL drivers?
So when I run the app, at the beginning everything runs smoothly, but the longer it runs, the slower it gets. I looked at the memory it was using, and when it reaches 400 MB it completely stops for 30 seconds and then drops back to 200.
I am pretty new to SDL2, and I assume it is because each frame I call:
optionsTS = TTF_RenderText_Blended(font, "Options.", blanc);
optionsT = SDL_CreateTextureFromSurface(renderer, optionsTS);
for example, and I have plenty of them.
The problem is that I don't know how to properly free the objects each frame, because if I call SDL_FreeSurface I get an error.
I won't publish my whole code because it's a mess, but if you want it, feel free to ask.
Do you know how to fix that?
Just thought I would turn my comment into an answer.
In your code you call
optionsTS = TTF_RenderText_Blended(font, "Options.", blanc);
optionsT = SDL_CreateTextureFromSurface(renderer, optionsTS);
every frame. I suspect that if you remove them from there, initialise them outside of the render loop, and simply pass them in as arguments, you should lose the memory leak: the reason being that you will create only one in-memory instance of each and can then reuse them as needed. On looking at it again, I suspect that you could destroy optionsTS once you have made optionsT; that way you will save even more memory. (Not tested yet, as my main machine crashed this weekend and I am still re-installing drivers and VS2010.)
As a general rule, try not to create or destroy any objects in the render loop; things tend to get big and messy fast.
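As a sketch of that suggestion (reusing the question's font, blanc and renderer variables, plus a hypothetical running flag):

// Created once, before the render loop:
SDL_Surface *optionsTS = TTF_RenderText_Blended(font, "Options.", blanc);
SDL_Texture *optionsT = SDL_CreateTextureFromSurface(renderer, optionsTS);
SDL_FreeSurface(optionsTS);   // the surface is no longer needed
optionsTS = NULL;

while (running) {
    // ... handle events, update state ...
    SDL_RenderClear(renderer);
    SDL_RenderCopy(renderer, optionsT, NULL, NULL);
    SDL_RenderPresent(renderer);
}

SDL_DestroyTexture(optionsT); // freed once, after the loop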
Consider taking advantage of RAII in C++ if possible.
For example, create a class that wraps an SDL_Surface and calls SDL_FreeSurface in the destructor.
class MySurface
{
public:
    explicit MySurface(SDL_Surface *surface) : m_surface(surface) {}
    ~MySurface() { SDL_FreeSurface(m_surface); }  // safe even if null

    // Non-copyable: two copies would free the same surface twice.
    MySurface(const MySurface &) = delete;
    MySurface &operator=(const MySurface &) = delete;

    SDL_Surface *GetSDLSurface() { return m_surface; }

private:
    SDL_Surface *m_surface;
};
You would then create an instance of MySurface every time you grab an SDL_Surface from the SDL API, and you won't have to worry about when or whether to free that surface: it will be freed as soon as your instance of MySurface goes out of scope.
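Hypothetical usage with the question's variables:

{
    MySurface surface(TTF_RenderText_Blended(font, "Options.", blanc));
    SDL_Texture *optionsT = SDL_CreateTextureFromSurface(renderer,
                                                         surface.GetSDLSurface());
    // ... use optionsT ...
}   // surface goes out of scope here and SDL_FreeSurface is called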
I'm certain better implementations can be written and tailored to your needs, but at a minimum something similar to this may prevent you from having leaks in the future.