Use of QMathGL to paint realtime data? - c++

Got really stuck, need some advice or real examples.
1) I have a producer thread (boost::thread) filling a vector with data that arrives fast (~100 samples per second)
2) I want QMathGL to paint the data as it arrives
3) I don't want my Qt GUI to freeze
I tried to move QMathGL::update() to a separate thread - Qt complains that QPixmap is not allowed outside the GUI thread.
What should I try, without modifying QMathGL?
The only thing that comes to mind is to repaint on a timer (at a fixed FPS?), but I don't like this solution - please tell me if I am wrong.

I would strongly advise going with a timer. Repaint operations are costly, and I would assume that no user could realistically process more than 10 painted vectors a second. So I can't see a real benefit for the end user, apart from maybe the display updating more "smoothly", entry by entry. But you could achieve these effects far more easily with animations ;)
When repainting on every data change, you get the annoying behaviour you describe. Working around that is (IMHO) not worth the trouble.
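For illustration, a minimal sketch of the timer approach, assuming Qt 5.7+ (for qOverload) and a hypothetical member named graph pointing at the QMathGL widget:

// In the window's constructor: repaint at a fixed rate instead of
// once per incoming sample.
QTimer *timer = new QTimer(this);
connect(timer, &QTimer::timeout, graph, qOverload<>(&QWidget::update));
timer->start(100); // ~10 repaints per second

This connects to the plain QWidget::update(); if the plot data needs recalculating rather than just repainting, connect to QMathGL's own update slot instead, if your version provides one.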

I've also run into a similar problem.
The usual solution I use is to buffer the data and repaint on a timer. It goes along the lines of this (pseudocode):
// Called on the producer thread: just enqueue the sample.
void Widget::OnNewData(void *dataSample)
{
    this->threadSafebuffer->appendData(dataSample);
}

// Called on the GUI thread by a QTimer: swap out the filled buffer
// and render its contents.
void Widget::OnTimeout()
{
    DataBuffer renderBatch = this->threadSafebuffer->interlockedExchange();
    /* Do UI updates according to renderBatch */
}
This assumes that OnNewData is called on a background thread and that OnTimeout is called from a QTimer on the UI event loop. To prevent contention, it just does an interlocked exchange of the current buffer pointer with a second buffer, so no heavy synchronization (e.g. a mutex or semaphore) is needed.
This will only work if the amount of work needed to render a renderBatch is less than the timer interval.
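To make the pseudocode concrete, here is a minimal sketch in Qt. The names are hypothetical, and a std::mutex-guarded std::vector stands in for the thread-safe buffer; with only two threads, a brief lock around a swap behaves much like the interlocked exchange described above.

#include <QtWidgets>
#include <mutex>
#include <vector>

class PlotWidget : public QWidget {
public:
    explicit PlotWidget(QWidget *parent = nullptr) : QWidget(parent) {
        QTimer *timer = new QTimer(this);
        connect(timer, &QTimer::timeout, this, &PlotWidget::onTimeout);
        timer->start(100); // drain the buffer and repaint ~10x per second
    }

    // Called from the producer thread for every incoming sample.
    void onNewData(double sample) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_pending.push_back(sample);
    }

private:
    void onTimeout() {
        std::vector<double> batch;
        {
            // Swap the filled buffer for an empty one; the lock is held
            // only for the O(1) swap, so contention stays negligible.
            std::lock_guard<std::mutex> lock(m_mutex);
            batch.swap(m_pending);
        }
        // ...append 'batch' to the plot's dataset here, then schedule
        // a single repaint on the GUI thread:
        update();
    }

    std::mutex m_mutex;
    std::vector<double> m_pending;
};

The producer calls onNewData() at whatever rate samples arrive; the GUI thread only ever repaints at the timer's pace.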

Related

QProgressBar::setValue(int) causes a memory leak?

I am using Qt 5.7 for a GUI application with a QProgressBar. I suspect that there might be a memory leak, since memory usage increases during runtime at about 50 MB/s. I could narrow the problem down to one line of code.
QProgressBar *pbarQuality;
...
int curQuality = data.getQuality();
if (curQuality < 0) {
    curQuality = 0;
    qWarning("Value set to 0. ");
}
if (curQuality > 100) {
    curQuality = 100;
    qWarning("Value set to 100. ");
}
ui.pbarQuality->setValue(curQuality); // The memory problem doesn't occur when this single line is commented out
The value of the QProgressBar (pbarQuality) is only for display; it isn't used anywhere else.
I find this very strange behaviour. Am I missing something?
Here is the auto-generated code from Qt Designer:
pbarQuality = new QProgressBar(frame_5);
pbarQuality->setObjectName(QStringLiteral("pbarQuality"));
pbarQuality->setGeometry(QRect(10, 50, 130, 23));
pbarQuality->setValue(24);
Try replacing setValue with pbarQuality->update(); QCoreApplication::processEvents(); and see if that reproduces the problem. If it does, you're leveraging the nested event loop to keep the GUI responsive while your blocking code runs, and that's a bad thing. setValue calls processEvents as a naive way to work around broken user code; IMHO it's a dangerous favor. The only fix then is to unbreak your code and return control to the main event loop instead of blocking.
This answer shows how to avoid the effects of an image storm by leveraging the QImage's RAII behavior, and links to another answer that demonstrates free image scaling by leveraging OpenGL.
My application runs another thread besides the GUI thread which periodically (up to 60 times per second) sends information (images) to the GUI thread. I do some minor image editing (resizing) within the GUI thread. It turns out this takes too long to keep up with the data posted by the other thread. Consequently the event queue gets bigger and bigger, and so does the RAM usage.
Lesson learned: be aware of the processing speed of the receiving thread when data gets posted periodically. Processing of one batch needs to finish before new data arrives.
Thanks to @KubaOber for giving me the hint.
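One way to keep such an image storm from piling up in the event queue is a latest-frame-wins handoff. The sketch below is my own illustration, not the poster's code; it assumes Qt 5.10+ for the functor overload of QMetaObject::invokeMethod.

#include <QtWidgets>

class FrameViewer : public QWidget {
public:
    // Called from the worker thread for every new frame. Only the most
    // recent frame is kept; unrendered older frames are simply dropped,
    // so the GUI event queue can never pile up.
    void deliverFrame(const QImage &frame) {
        {
            QMutexLocker lock(&m_mutex);
            m_latest = frame; // QImage is implicitly shared: a cheap copy
        }
        // Queue one repaint on the GUI thread; repeated update() calls
        // are coalesced by Qt into a single paint event.
        QMetaObject::invokeMethod(this, [this] { update(); },
                                  Qt::QueuedConnection);
    }

protected:
    void paintEvent(QPaintEvent *) override {
        QImage frame;
        {
            QMutexLocker lock(&m_mutex);
            frame = m_latest;
        }
        QPainter painter(this);
        painter.drawImage(rect(), frame);
    }

private:
    QMutex m_mutex;
    QImage m_latest;
};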

C++ - Execute function every X milliseconds

I can't seem to find a good answer to this:
I'm making a game, and I want the logic loop to be separate from the graphics loop. In other words I want the game to go through a loop every X milliseconds regardless of how many frames/second it is displaying.
Obviously they will both be sharing a lot of variables, so I can't have a thread/timer passing one variable back and forth... I'm basically just looking for a way to have a timer in the background that every X milliseconds sends out a flag to execute the logic loop, regardless of where the graphics loop is.
I'm open to any suggestions. It seems like the best option is to have 2 threads, but I'm not sure what the best way to communicate between them is, without constantly synchronizing large amounts of data.
You can very well do multithreading by having your "world view" exchanged every tick. So here is how it works:
1. Your current world view is pointed to by a single smart pointer and is read-only, so no locking is necessary.
2. Your logic creates your (first) world view, publishes it and schedules the renderer.
3. Your renderer grabs a copy of the pointer to your world view and renders it (remember, read-only).
4. In the meantime, your logic creates a new, slightly different world view.
5. When it's done, it exchanges the pointer to the current world view, publishing it as the current one.
6. Even if the renderer is still busy with the old world view, no locking is necessary.
7. Eventually the renderer finishes rendering the (old) world. It grabs the new world view and starts another run.
8. In the meantime, ... (goto step 4)
The only locking you need is for the moment when you publish or grab the pointer to the world view. As an alternative you can do an atomic exchange, but then you have to make sure you use smart pointer operations that support it (e.g. std::atomic_load/std::atomic_store on std::shared_ptr).
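A minimal sketch of that exchange, assuming C++11's atomic free functions for shared_ptr (C++20 replaces these with std::atomic<std::shared_ptr<T>>):

#include <memory>

struct WorldView { /* immutable snapshot of the game state */ };

// The currently published snapshot. Readers and the writer touch it
// only through the atomic free functions below.
std::shared_ptr<const WorldView> g_current;

// Logic thread: build the next snapshot off to the side, then publish it.
void publish(std::shared_ptr<const WorldView> next) {
    std::atomic_store(&g_current, std::move(next));
}

// Render thread: grab whatever snapshot is current and render it.
// The shared_ptr keeps the old world alive until rendering finishes.
std::shared_ptr<const WorldView> grab() {
    return std::atomic_load(&g_current);
}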
Most toolkits have an event loop (built above some multiplexing syscall like poll(2), or the obsolete select), e.g. GTK has g_application_run, which sits above gtk_main, which is built above the GLib main event loop (which in fact does a poll or something similar). Likewise, Qt has QApplication and its exec method.
Very often, you can register timers within the event loop. For GTK, use g_timeout_add etc. For Qt, learn about its timers.
Very often, you can also register some idle or background processing: one of your functions is started by the event loop after other events and timeouts have been processed. Your idle function is expected to run quickly (usually it does a small step of some computation in a few milliseconds, to keep the GUI responsive). For GTK, use g_idle_add etc. IIRC, in Qt you can use a timer with a 0 delay.
So you could code even a (conceptually) single-threaded application, using timeouts and idle processing.
Of course, you could use multi-threading: generally the main thread runs the event loop, and other threads do other things, which brings synchronization issues. On POSIX systems, a nice synchronization trick is to use a pipe(7) to self: you set up a pipe before running the event loop, and your computation threads write a few bytes to it, while the main event loop "listens" on it (with GTK, using g_source_add_poll or async IO or GUnixInputStream etc.; with Qt, using QSocketNotifier etc.). Then, in the handler for that pipe running in the main loop, you can access traditional global data protected by mutexes etc.
Conceptually, read about continuations. It is a relevant notion.
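As a small, self-contained sketch of the timer route in Qt (the Game struct and its tick() method are hypothetical stand-ins for your logic):

#include <QApplication>
#include <QTimer>

struct Game { void tick() { /* advance the simulation one step */ } };

int main(int argc, char **argv)
{
    QApplication app(argc, argv);
    Game game;

    // Fixed-rate logic tick driven by the event loop: fires roughly
    // every 16 ms (~60 ticks/s) regardless of how often you render.
    QTimer logicTimer;
    QObject::connect(&logicTimer, &QTimer::timeout, [&game] { game.tick(); });
    logicTimer.start(16);

    return app.exec();
}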
You could have a Draw and an Update method attached to all your game components. That way, while your game is running, you can have Update called and Draw ignored, or any combination of the two. It also has the benefit of keeping logic and graphics completely separate.
Couldn't you just have a draw method for each object that needs to be drawn and make those objects globals? Then just run your rendering thread with a sleep delay in it. As long as your rendering thread doesn't write any information to the globals, you should be fine. Look up SFML to see an example of this in action.
If you are running on a Unix system you could use usleep(); however, that is not available on Windows, so you might want to look here for alternatives.

SDL_PollEvent vs SDL_WaitEvent

So I was reading this article which contains 'Tips and Advice for Multithreaded Programming in SDL' - https://vilimpoc.org/research/portmonitorg/sdl-tips-and-tricks.html
It talks about SDL_PollEvent being inefficient as it can cause excessive CPU usage and so recommends using SDL_WaitEvent instead.
It shows an example of both loops, but I can't see how this would work with a game loop. Is it the case that SDL_WaitEvent should only be used by things which don't require constant updates, i.e. if you had a game running you would perform game logic each frame?
The only thing I can think it could be used for is a program like a paint program, where action is only required on user input.
Am I correct in thinking I should continue to use SDL_PollEvent for generic game programming?
If your game only updates/repaints on user input, then you could use SDL_WaitEvent. However, most games have animation/physics going on even when there is no user input. So I think SDL_PollEvent would be best for most games.
One case in which SDL_WaitEvent might be useful is if you have it in one thread and your animation/logic on another thread. That way even if SDL_WaitEvent waits for a long time, your game will continue painting/updating. (EDIT: This may not actually work. See Henrik's comment below)
As for SDL_PollEvent using 100% CPU as the article indicated, you could mitigate that by adding a sleep to your loop when you detect that your game is running faster than the required frame rate.
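A sketch of that mitigation (SDL2; UpdateGame and RenderFrame are hypothetical stand-ins for your logic and rendering):

#include <SDL.h>

void UpdateGame();  // hypothetical game logic
void RenderFrame(); // hypothetical rendering

void runMainLoop()
{
    const Uint32 kFrameMs = 1000 / 60; // target ~60 FPS
    bool running = true;
    while (running) {
        const Uint32 frameStart = SDL_GetTicks();

        SDL_Event event;
        while (SDL_PollEvent(&event)) { // returns 0 once the queue is empty
            if (event.type == SDL_QUIT)
                running = false;
        }

        UpdateGame();
        RenderFrame();

        // Sleep off whatever remains of the frame budget instead of spinning.
        const Uint32 elapsed = SDL_GetTicks() - frameStart;
        if (elapsed < kFrameMs)
            SDL_Delay(kFrameMs - elapsed);
    }
}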
If you don't need sub-frame precision in your input, and your game is constantly animating, then SDL_PollEvent is appropriate.
Sub-frame precision can be important for, e.g., games where the player might want very small increments in movement. Quickly tapping and releasing a key has unpredictable behavior if you use the classic lazy method of keydown meaning "velocity = 1" and keyup meaning "velocity = 0" and only update position once per frame: if the tap happens to overlap with the frame render, you get one frame-duration of movement, and if it does not, you get no movement at all, whereas what you really want is an amount of movement smaller than the length of a frame, based on the timestamps at which the events occurred.
Unfortunately SDL's events don't include the actual event timestamps from the operating system, only the timestamp of the PumpEvents call, and WaitEvent effectively polls at 10ms intervals, so even with WaitEvent running in a separate thread, the most precision you'll get is 10ms (you could maybe approximate smaller by saying if you get a keydown and keyup in the same poll cycle then it's ~5ms).
So if you really want precision timing on your input, you might actually need to write your own version of SDL_WaitEventTimeout with a smaller SDL_Delay, and run that in a separate thread from your main game loop.
Further unfortunately, SDL_PumpEvents must be run on the thread that initialized the video subsystem (per https://wiki.libsdl.org/SDL_PumpEvents ), so the whole idea of running your input loop on another thread to get sub-frame timing is nixed by the SDL framework.
In conclusion, for SDL applications with animation there is no reason to use anything other than SDL_PollEvent. The best you can do for sub-framerate input precision is, if you have time to burn between frames, to be precise during that window; but then each frame has a render-duration window in which your input loses precision, so you end up with a different kind of inconsistency.
In general, you should use SDL_WaitEvent rather than SDL_PollEvent, to release the CPU to the operating system so it can handle other tasks, like processing user input. Busy-polling can manifest to your users as sluggish reaction to input, since it can delay the point at which your application processes an event after they enter a command. By using SDL_WaitEvent instead, the OS can post events to your application more quickly, which improves the perceived performance.
As a side benefit, users on battery-powered systems, like laptops and portable devices, should see slightly lower battery usage, since the OS has the opportunity to reduce overall CPU usage: your game isn't using the CPU 100% of the time, only when an event actually occurs.
This is a very late response, I know. But this is the thread that tops a Google search on this, so it seems the place to add an alternative suggestion to dealing with this that some might find useful.
You could write your code using SDL_WaitEvent, so that, when your application is not actively animating anything, it'll block and hand the CPU back to the OS.
But then you can send a user-defined message to the queue, from another thread (e.g. the game logic thread), to wake up the main rendering thread with that message. The rendering thread then goes through the loop to render a frame, swaps buffers, and returns to SDL_WaitEvent again, where another of these user-defined messages may already be waiting, telling it to loop once more.
This sort of structure might be good for an application (or game) where there's a "burst" of animation, but otherwise it's best for it to block and go idle (and save battery on laptops).
For example, a GUI where it animates when you open or close or move windows or hover over buttons, but it's otherwise static content most of the time.
(Or, for a game, though it's animating all the time in-game, it might not need to do that for the pause screen or the game menus. So, you could send the "SDL_ANIMATEEVENT" user-defined message during gameplay, but then, in the game menus and pause screen, just wait for mouse / keyboard events and actually allow the CPU to idle and cool down.)
Indeed, you could have self-triggering animation events: the rendering thread is woken up by an "SDL_ANIMATEEVENT" and does one more frame of the animation. Because the animation is not yet complete, the rendering thread posts another "SDL_ANIMATEEVENT" to its own queue, which will wake it up again when it reaches SDL_WaitEvent.
And another idea there is that SDL events can carry data too. So you could supply, say, an animation ID in "data1" and a "current frame" counter in "data2" with the event. So that when the thread picks up the "SDL_ANIMATEEVENT", the event itself tells it which animation to do and what frame we're currently on.
This is a "best of both worlds" solution, I feel. It can behave like SDL_WaitEvent or SDL_PollEvent at the application's discretion by just sending messages to itself.
For a game, this might not be worth it, as you're updating frames constantly, so there's no big advantage to this and maybe it's not worth bothering with (though even games could benefit from going to 0% CPU usage in the pause screen or in-game menus, to let the CPU cool down and use less laptop battery).
But for something like a GUI - which has more "burst-y" animation - then a mouse event can trigger an animation (e.g. opening a new window, which zooms or slides into view) that sends "SDL_ANIMATEEVENT" back to the queue. And it keeps doing that until the animation is complete, then falls back to normal SDL_WaitEvent behaviour again.
It's an idea that might fit what some people need, so I thought I'd float it here for general consumption.
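A sketch of that wake-up scheme (SDL2): a custom event type is registered once and can be pushed from any thread to rouse the SDL_WaitEvent loop; renderOneFrame is a hypothetical stand-in.

#include <SDL.h>

void renderOneFrame(int animationId, int frame); // hypothetical renderer

static Uint32 g_animateEvent; // allocated once via SDL_RegisterEvents

void InitAnimateEvent() {
    g_animateEvent = SDL_RegisterEvents(1); // reserve one user event type
}

// Any thread: request one more frame of the given animation.
void RequestFrame(int animationId, int frame) {
    SDL_Event e;
    SDL_zero(e);
    e.type = g_animateEvent;
    e.user.data1 = reinterpret_cast<void *>(static_cast<intptr_t>(animationId));
    e.user.data2 = reinterpret_cast<void *>(static_cast<intptr_t>(frame));
    SDL_PushEvent(&e); // thread-safe, wakes a blocked SDL_WaitEvent
}

// Main thread: block until something happens; render only when asked to.
void EventLoop() {
    SDL_Event e;
    while (SDL_WaitEvent(&e)) {
        if (e.type == SDL_QUIT)
            break;
        if (e.type == g_animateEvent) {
            int animationId = static_cast<int>(reinterpret_cast<intptr_t>(e.user.data1));
            int frame = static_cast<int>(reinterpret_cast<intptr_t>(e.user.data2));
            renderOneFrame(animationId, frame);
        }
    }
}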
You could initialize SDL and the window in the main thread and then create two more threads, one for updates (which just advances game state and variables as time passes) and one for rendering (which renders the surfaces accordingly).
Then, after all that is done, use SDL_WaitEvent in your main thread to manage SDL_Events. This way you can ensure that events are managed in the same thread that called SDL_Init.
I have been using this method for a long time to make my games work on Windows and Linux, and have been able to successfully run the 3 threads mentioned above at the same time.
I had to use a mutex to make sure that textures/surfaces could also be transformed/changed in the update thread, by pausing the render thread; the lock is only taken once every 60 frames, so it's not going to cause major performance issues.
This model works best for event-driven games, real-time games, or both.

wxWidgets - multitasking with single thread

I have a GUI app that I am creating with wxWidgets. As part of the functionality, I have to run "tasks" simultaneously with manipulation of the GUI window. For example, I may run the code:
long currentTime = wxGetLocalTime();
long stopTime = wxGetLocalTime() + 3;
while (wxGetLocalTime() != stopTime) {}
wxMessageBox("DONE IN APP");
For the duration of those 3 seconds, my application would essentially be frozen until the wxMessageBox is shown. Is there a way to have this run in the background without the use of multiple threads? Using them creates problems for the application that I'm developing.
I was wondering if there are some types of event handling that could be used. Any sort of help is greatly appreciated.
There are 3 ways to run time-consuming tasks in GUI wx applications:
By far the most preferred is to use a different thread. The explanation of the application being "very GUI intensive" really doesn't make any sense to me, I think you should seriously reconsider your program design if its GUI intensity (whatever it is) prevents you from using background worker threads. If you do use this approach, it's pretty simple but pay special attention to the thread/program termination issues. In particular, you will need to either wait for the thread to finish (acceptable if it doesn't take a long time to run) or cancel it explicitly before exiting the program.
Use the EVT_IDLE event to perform your task whenever there are no other events to process (see the sketch after this list). This is not too bad for small tasks which can be broken into small enough pieces, as you need to be able to resume processing in your handler. Don't forget to call event.RequestMore() to continue getting idle events even when nothing else is happening.
The worst and most dangerous way is to call wxYield() as suggested by another answer. This can seem simple initially, but you will regret doing it later, because it can create reentrancy problems in your code that are extremely difficult to debug. If you do use it, you need to guard against reentrancy everywhere yourself, and you should really understand what exactly this function does.
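A minimal sketch of the EVT_IDLE approach (m_step, m_totalSteps and DoOneSmallChunkOfWork are hypothetical):

#include <wx/wx.h>

// Minimal frame that processes a long task in small idle-time chunks.
class MyFrame : public wxFrame
{
public:
    MyFrame() : wxFrame(nullptr, wxID_ANY, "Idle demo") {}

    void OnIdle(wxIdleEvent& event)
    {
        if (m_step < m_totalSteps)
        {
            DoOneSmallChunkOfWork(m_step); // must return quickly to keep the GUI live
            ++m_step;
            event.RequestMore();           // keep the idle events coming
        }
    }

private:
    void DoOneSmallChunkOfWork(int /*step*/) { /* one small piece of the task */ }
    int m_step = 0;
    int m_totalSteps = 1000;

    wxDECLARE_EVENT_TABLE();
};

wxBEGIN_EVENT_TABLE(MyFrame, wxFrame)
    EVT_IDLE(MyFrame::OnIdle)
wxEND_EVENT_TABLE()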
Try this:
long currentTime = wxGetLocalTime();
long stopTime = wxGetLocalTime() + 3;
while (wxGetLocalTime() != stopTime) {
    wxYield();
}
wxMessageBox("DONE IN APP");
I know this is late to the game, but...
I've successfully used the EVT_IDLE method for YEARS (back in the '90s, with Motif originally). The main idea is to break your task up into small pieces, where each piece calls the next piece (think linked list). The mechanism for this is the CallAfter() method (using C++, of course): you just CallAfter() as the last step in each piece, which lets the GUI main loop run another iteration, possibly updating GUI elements and such, before calling your next piece. Just remember to keep the pieces small.
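A sketch of that chaining (wxWidgets 3.x for the functor form of CallAfter(); m_pieceCount and ProcessPiece are hypothetical):

// Inside a wxFrame-derived class.
void MyFrame::DoPiece(int index)
{
    if (index >= m_pieceCount)
        return;                  // task finished; re-enable GUI elements here
    ProcessPiece(index);         // one small, quick unit of work
    // Queue the next piece; the GUI event loop runs between pieces.
    CallAfter([this, index] { DoPiece(index + 1); });
}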
Using a background thread is really nice, but can be trickier than you imagine... eventually. As long as you know the data you're working on in the background won't be touched/viewed by anything else, you're OK. If you know this is the case, then that is the way to go. This method allows the GUI to remain fully responsive during background calculations (resizing/moving the window, etc.)
In either case, just don't forget to desensitize appropriate GUI elements as the first step so you won't accidentally launch the same background task multiple times (for example, accidentally clicking a push button multiple times in succession that launches the background thread).

QGraphicsView/Scene - multithreading nightmare

Am I correct in thinking that the QGraphics* classes are not thread-safe? I am porting an old app to Qt and in the process attempting to make it multi-threaded too. I looked at the update code and I see no locks whatsoever.
I started off right, all my processing is done in a set of worker threads so that the GUI does not have to block. But as soon as I come to display my visual representation the whole thing falls down like a pack of cards as the update code attempts to read from the buffer the other thread is writing to.
Here is my test case:
Create a bunch of ellipse objects
Create a thread and pass it the scene pointer
In a loop modify any setting on any object in the scene.
Test function:
bool CBasicDocument::Update( float fTimeStep )
{
    const QList<QGraphicsItem*> tObjects = items();
    for( QList<QGraphicsItem*>::const_iterator tIter = tObjects.constBegin();
         tIter != tObjects.constEnd();
         ++tIter )
    {
        QGraphicsEllipseItem* pElipse = (QGraphicsEllipseItem*)(*tIter);
        if( pElipse )
        {
            pElipse->setPen( QPen( QColor( (int)(255.0f * sinf( fTimeStep )),
                                           (int)(255.0f * cosf( fTimeStep )),
                                           (int)(255.0f * sinf( fTimeStep )) ) ) );
        }
    }
    return true;
}
I have been thinking about ways I can fix this and none of them are particularly pretty.
Ideally what I want to happen is when I change a setting on an object it is buffered until the next render call, but for the time being I'll settle with it not crashing!
At the moment I have four options:
Double buffer the whole scene maintaining two scene graphs in lockstep (one rendering, one updating). This is how our multithreaded game engine works. This is horrible here though because it will require double the CPU time and double the memory. Not to mention the logistics of maintaining both scene graphs.
Modify QGraphics* to be thread safe as above. This is probably the most practical approach but it will be a lot of work to get it done.
Push modifications to the scene into a queue and process them from the main thread.
Throw multithreading to the wind for the time being and just let my app stall when the document updates. (Not pretty given the data size for some documents)
None of them are particularly appealing and all require a massive amount of work.
Does anybody have any ideas or attempted multithreading QGraphicsScene before?
Cheers
I've always read that it's better to have all GUI work happen in the main thread, no matter whether you're using GTK, Qt, or something else.
In your case I would go for option 3: push modifications into a queue and process them from the main thread.
Modifying Qt's code to put locks here and there is the worst thing to do IMHO, as you won't be able to keep your modified Qt in sync with upstream without a significant amount of work.
I notice in your test function that you are already casting (using C-style casts, at that) the QGraphicsItem to a QGraphicsEllipseItem. If you only use a certain number of item types, it might be easier to subclass those types and provide thread-safe functions in those subclasses. Then cast down to those subclasses and call what you want from the other thread. They can handle buffering the changes, or mutex locking, or whatever you want to do for those changes.
An alternative would be to have one mutex that each thread locks while processing. This would likely slow you down a lot as you wait for the mutex to become available. You could also maintain a map of items to mutexes somewhere that both threads can reach, and have each thread lock the mutex for the item it is working on while it is working with it. This would be simpler to layer in if you think the subclassing option would be too complex.
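For completeness, here is a minimal sketch of the queue-to-the-main-thread option (option 3), assuming Qt 5.10+ for the functor overload of QMetaObject::invokeMethod. It mirrors the test function above, with fabs added to keep the color components in a valid range:

#include <QtWidgets>
#include <cmath>

void updateFromWorker(QGraphicsScene *scene, float fTimeStep)
{
    // The lambda is queued onto the thread that owns the scene (the GUI
    // thread) and runs there later; the worker thread returns at once and
    // never touches the QGraphics* items directly.
    QMetaObject::invokeMethod(scene, [scene, fTimeStep] {
        const QList<QGraphicsItem *> items = scene->items();
        for (QGraphicsItem *item : items) {
            if (auto *ellipse = qgraphicsitem_cast<QGraphicsEllipseItem *>(item))
                ellipse->setPen(QPen(QColor(int(255.0f * std::fabs(std::sin(fTimeStep))),
                                            int(255.0f * std::fabs(std::cos(fTimeStep))),
                                            int(255.0f * std::fabs(std::sin(fTimeStep))))));
        }
    }, Qt::QueuedConnection);
}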