I'm working on a piece of software built in C++ with C++ Builder which freezes about once a month.
I've been looking through the code, but it is too big to find the cause by inspection.
When it freezes, the UI turns completely white. I tried to reproduce the error with some deliberately bad code (null pointer dereferences, while(1) loops and that kind of thing) but never got the same blank UI.
I ran WhatIsHang while it was frozen but got nothing useful from it.
Does anyone know what I can do the next time it happens to gather more information that could help me find the reason for the freeze?
A blank (white) UI generally occurs when a UI paint message is queued but not processed. Simply blocking the message queue from processing new messages is not enough if you don't do something within the UI to trigger a repaint in the first place.
As for troubleshooting the original problem - you should be looking for any code in the main thread that runs long loops without processing new messages, long waits on waitable objects using WaitForSingleObject() or WaitForMultipleObjects() instead of MsgWaitForMultipleObjects(), calls to TThread::WaitFor() for threads that do not terminate in a timely manner, etc.
It is hard to troubleshoot this kind of problem without knowing what steps the user performs to lead up to the frozen UI so you know what code to start looking at.
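For illustration, here is a rough sketch (plain Win32; the helper name is made up) of a wait that keeps pumping messages instead of blocking the UI thread outright:

#include <windows.h>

// Wait for an event handle without starving the UI thread's message queue.
void WaitWithoutFreezingUI(HANDLE hEvent)
{
    for (;;)
    {
        DWORD result = MsgWaitForMultipleObjects(1, &hEvent, FALSE, INFINITE, QS_ALLINPUT);
        if (result == WAIT_OBJECT_0)
            break;                          // the awaited event is signaled
        // WAIT_OBJECT_0 + 1 means messages arrived - pump them so the UI keeps painting
        MSG msg;
        while (PeekMessage(&msg, NULL, 0, 0, PM_REMOVE))
        {
            TranslateMessage(&msg);
            DispatchMessage(&msg);
        }
    }
}

In a C++ Builder application, the inner PeekMessage loop can typically be replaced by a call to Application->ProcessMessages().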
Hello, I am looking for a signal in gtkmm. Basically I am running some simulations and what I want is something like this:
Assume I run 5 simulations:
progressBar.set_fraction(0);
// simulation 1
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
// simulation 2
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
// simulation 3
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
// simulation 4
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
// simulation 5
progressBar.set_fraction(progressBar.get_fraction() + 1.0/5);
But I don't know which signal I have to use or how to translate this into code.
Thank you very much for your help!
The pseudocode which you presented in your question should actually work - no signal is necessary. However, you could introduce a signal into your simulation to update the progress bar. IMHO this will not solve your problem, and I will try to explain why, and what to do to solve it:
You provided a little too little context, so I will introduce some assumptions: you have a main window with a button, toolbar item, or menu item (or even all of them) which starts the simulation.
Let's imagine you set a breakpoint at Gtk::ProgressBar::set_fraction().
Once the debugger stops at this breakpoint, you will find the following calls on the stack trace (probably with many other calls in between):
Gtk::Main::run()
the signal handler of the widget or action which started the simulation
the function which runs the five simulations
and, last, the call to Gtk::ProgressBar::set_fraction().
If you could inspect the internals of Gtk::ProgressBar you would notice that everything in Gtk::ProgressBar::set_fraction() is done properly. So what's wrong?
When you call Gtk::ProgressBar::set_fraction() it probably generates an expose event (i.e. it adds an event to the event queue inside Gtk::Main requesting its own refresh). The problem is that you probably do not process that request until all five runs of the simulation are done. (Remember that Gtk::Main::run(), which is responsible for this, is the uppermost/outermost call of my imaginary stack trace.) Thus, the refresh does not happen until the simulation is over - too late. (By the way, the Gtk+ authors state somewhere in the manual that they cleverly optimize events, i.e. there might end up being only one expose event for the Gtk::ProgressBar in the event queue, but this does not make your situation any better.)
Thus, after you have called Gtk::ProgressBar::set_fraction() you must somehow flush the event queue before continuing with your simulation.
This sounds like leaving the simulation, leaving the calling widget's signal handler, returning to Gtk::Main::run() for further event processing, and finally coming back for the next simulation step - a terrible idea. But it can be done much more simply. For this, we essentially use the following code (in gtkmm 2.4):
while (Gtk::Main::events_pending()) Gtk::Main::iteration(false);
(This should hopefully be the same in the gtkmm version you use but if in doubt consult the manual.)
It should be done immediately after updating the progress bar fraction and before the simulation continues.
This recursively enters (parts of) the main loop and processes all pending events in the event queue of Gtk::Main, so the progress bar is redrawn before the simulation continues. You may be concerned about "recursively entering the main loop", but I read somewhere in the GTK+ manual that it is allowed (and a reasonable way to solve problems like this), along with what to take care of (i.e. to limit the number of recursions and to provide a proper "roll-back").
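To make this concrete, here is a minimal sketch of the whole pattern - assuming gtkmm 2.4, a member m_progressBar, and a hypothetical run_simulation() function for one simulation step:

void SimulationWindow::on_start_clicked()
{
    const int steps = 5;
    m_progressBar.set_fraction(0.0);
    for (int i = 0; i < steps; ++i)
    {
        run_simulation(i);                                  // one long-running step
        m_progressBar.set_fraction(double(i + 1) / steps);  // compute the fraction in floating point
        // flush the event queue so the progress bar is actually repainted now
        while (Gtk::Main::events_pending())
            Gtk::Main::iteration(false);
    }
}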
What in your case is the simulation, we call in general a long-running function. Because such long-running functions are algorithms (in libraries of whatever kind) which should not be polluted with any GUI code, we built some administrative infrastructure around this basic concept, including
a progress "proxy" object with an update(double) method and a signal slot
a customized progress dialog which can connect a signal handler to such a progress object (i.e. its signal slot).
The long-running function gets a progress object (as an argument) and is responsible for calling the Progress::update() method at appropriate intervals with an appropriate progress factor. (We simply use values in the range [0, 1].)
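A stripped-down sketch of such a progress proxy (illustrative names; sigc++ 2.x as shipped with gtkmm 2.4):

#include <sigc++/sigc++.h>

class Progress
{
public:
    // the long-running function reports a value in [0, 1] here
    void update(double fraction) { m_signal.emit(fraction); }

    // the progress dialog connects its handler to this signal
    sigc::signal<void, double> &signal_update() { return m_signal; }

private:
    sigc::signal<void, double> m_signal;
};

// The long-running function sees only the Progress object - no GUI code at all:
void long_running_function(Progress &progress)
{
    const int n = 1000;
    for (int i = 0; i < n; ++i)
    {
        // ... one slice of the actual work ...
        progress.update(double(i + 1) / n);
    }
}

The dialog side simply connects, e.g. progress.signal_update().connect(sigc::mem_fun(*this, &ProgressDialog::on_progress_update)).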
One issue is the interval at which the progress update is called. If it is called too often, the GUI will slow down your long-running function significantly. The opposite case (calling it not often enough) results in a less responsive GUI. Thus, we decided in favor of more frequent progress updates. To lower the GUI overhead, we remember the time of the last update in our progress dialog and skip further refreshes until a certain duration since the last refresh has passed. Thus, the long-running function still incurs some extra effort for progress updates, but it is no longer noticeable. (A good refresh interval is IMHO 0.1 s - roughly the perception threshold of humans - but you may choose 0.05 s if in doubt.)
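The throttling can be as simple as the following sketch in the dialog's update handler (m_sinceLastRefresh is a Glib::Timer member, which starts running on construction; names are illustrative):

void ProgressDialog::on_progress_update(double fraction)
{
    // skip this refresh if the previous one was less than ~0.1 s ago
    if (m_sinceLastRefresh.elapsed() < 0.1 && fraction < 1.0)
        return;
    m_sinceLastRefresh.reset();
    m_progressBar.set_fraction(fraction);
    while (Gtk::Main::events_pending())     // flush so the new fraction is drawn immediately
        Gtk::Main::iteration(false);
}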
Flushing all pending events also results in the processing of mouse events (and other GTK+ signals). This allows another useful feature: aborting the long-running function.
When the "Cancel" button of our progress dialog is pressed it sets an internal flag. If the progress is updated next time it checks the flag. If the flag became true it throws a special exception. The throw aborts the caller of the progress update (the long running function) immediately. This exception must be catched in the signal handler of the button (or whatever called the long running function). Otherwise, it would "fall through" to the event dispatcher in Gtk::Main where it is catched definitely which would abort your application. (I saw it often enough whenever I forgot to catch.) On the other hand: catching the special exception tells clearly that the long running function has been aborted (in opposition to ended by regulary return). This may or may not be something which can be stated on GUI also.
Finally, the above solution can cause another issue: it makes it possible to start the simulation (via the GUI) while a simulation is already running. This is possible because button presses for starting the simulation can be processed during a progress update. To prevent this, there is a simple solution: set a flag in the GUI when the simulation starts, clear it when it has finished, and prevent further starts while the flag is set. Another option is to make the widget/action insensitive while the simulation is running. This topic becomes more complicated if you have multiple distinct long-running functions in your application which may or may not exclude each other - it leads to something like an exclusion matrix. Well, we solved it pragmatically... (but without the matrix).
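On the caller side, the re-entry guard and the catch for the abort exception fit together naturally (sketch, illustrative names):

void SimulationWindow::on_start_clicked()
{
    if (m_simulationRunning)                // ignore clicks while a run is in progress
        return;
    m_simulationRunning = true;
    try
    {
        long_running_function(m_progress);
    }
    catch (const AbortedByUser &)
    {
        // the user pressed Cancel - report the abort if desired, but do not rethrow
    }
    m_simulationRunning = false;            // a scope guard would also cover other exceptions
}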
And last but not least, I want to mention that we use a similar concept for the output of log views (e.g. visual logging of infos, warnings, and errors while anything long-running is in progress). IMHO it is always good to provide some visible activity for end users. Otherwise, they might get bored and use the telephone to complain about the (too) slow software, which in turn steals the very time you need to make it faster (a vicious cycle you have to break...).
I have a UI application in Qt. I have a couple of functions that run large-scale SQL queries that return thousands of results.
When the button that runs this query is clicked, the UI window instantly goes to 'not responding'; however, I can see from console output that everything is still actually running in the background. As soon as the function ends, the data is presented as expected and the UI is responsive and fully functional again.
I know this is because the function loops thousands of times due to the large number of results, but I was hoping I could add a loading bar that progresses as the search does, instead of locking up the window and making it look like the program has crashed. AFAIK I don't have memory leaks, so does anyone have any suggestions?
Oh, also, I'm thinking it's not a memory leak because when I click that button, Task Manager shows only a couple of MB of memory being used for this process, and the processor is by no means maxing out either.
In an application, there's one thread that's responsible for handling UI events, messages, whatever you want to call them. Suppose you have a button click event. As long as you don't return from the callback function, no other UI event can be handled (repainting, updating, etc.) and the UI becomes unresponsive.
To mitigate this, you should consider performing time-consuming tasks in a separate thread and, once they're complete, updating the UI accordingly. If you need to block the UI while the task is processed, you can disable your controls, display a pop-up progress bar, whatever - but keep the UI thread relatively unoccupied to avoid the "not responding" problem.
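A hedged sketch of how that can look with Qt's worker-object pattern (class, slot, and signal names here are illustrative, not from your code):

#include <QObject>
#include <QThread>

class QueryWorker : public QObject
{
    Q_OBJECT
public slots:
    void run()
    {
        // ... run the SQL query and the per-row processing here ...
        emit finished();
    }
signals:
    void finished();
};

// In the widget, when the button is clicked:
void MyWindow::onSearchClicked()
{
    ui->searchButton->setEnabled(false);                                    // block re-entry while working
    QThread *thread = new QThread(this);
    QueryWorker *worker = new QueryWorker;
    worker->moveToThread(thread);
    connect(thread, SIGNAL(started()),  worker, SLOT(run()));
    connect(worker, SIGNAL(finished()), this,   SLOT(onSearchFinished()));  // update the UI here
    connect(worker, SIGNAL(finished()), thread, SLOT(quit()));
    connect(thread, SIGNAL(finished()), worker, SLOT(deleteLater()));
    connect(thread, SIGNAL(finished()), thread, SLOT(deleteLater()));
    thread->start();
}

The one rule to respect: widgets must only be touched from the GUI thread, so the worker should just emit signals (optionally carrying result data) and let slots in the widget do the actual UI updates.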
A simpler solution than using threads is to call QCoreApplication::processEvents(). Suppose your code is something like this:
void slowFunction()
{
    auto lotsOfResults = makeSqlQuery(...);   // quite fast
    for (const auto &r : lotsOfResults)
        doSomethingWithResult(r);             // one call is quite fast
}
If a single SQL query and a single doSomethingWithResult() call don't take too much time, you can process pending events with QCoreApplication::processEvents() like this:
void slowFunction()
{
    auto lotsOfResults = makeSqlQuery(...);
    for (const auto &r : lotsOfResults)
    {
        doSomethingWithResult(r);
        QCoreApplication::processEvents();    // let the GUI handle pending events between iterations
    }
}
Now the GUI events are processed and the program doesn't freeze. But if the SQL query alone takes a long time (several seconds), this doesn't help; in that case you should consider a separate thread.
I am currently writing a game in C++ for Windows. The server counterpart creates two additional threads at the very start. One of them handles receiving new data, and the other handles movement calculation of the objects in the game. What I managed to find out is that the last thread function (called TickFunc) is the one that slows everything down. My music freezes, I can't open new tabs in my browser, everything is slow and freezes. Even if I comment everything within the TickFunc out (leaving an empty while loop that executes forever), it still freezes, but if I do not create that thread at all, it's fine. It seems as though it slows the system down regardless of the intensity of calculation performed within the TickFunc. I would really appreciate any hints concerning what may be causing this. Thank you.
Regards,
Neob91
Put a small delay inside your infinite loop.
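For example (a sketch; 'running' is a stand-in for whatever stop flag your code uses):

#include <windows.h>

volatile bool running = true;   // hypothetical flag, set to false elsewhere to stop the loop

void TickFunc()
{
    while (running)
    {
        // ... movement calculations ...
        Sleep(1);               // yield the rest of the time slice instead of spinning at 100% CPU
    }
}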
I am developing a simple WinAPI application and started by writing my own assertion system.
I have a macro defined as ASSERT(X) which does pretty much the same thing as assert(X), but with more information, more options, etc.
At some point (when that assertion system was already running and working) I realized there is a problem.
Suppose I wrote code that performs some action on a timer and (just as a simple example) this action is done while handling the WM_TIMER message. Now the situation changes so that this code starts triggering an assert. The assert message would be shown every TIMER_RESOLUTION milliseconds and would simply flood the screen.
Options for handling this situation could be:
1) Completely pause the application (probably also suspend all threads) while the assertion message box is shown and continue running after it is closed.
2) Keep a static counter of shown asserts and don't show an assert when one is already showing (but this doesn't pause the application).
3) Group similar asserts and show only one for each assert type (but this also doesn't pause the application).
4) Modify the application code (for example, the Get/Translate/Dispatch message loop) so that it suspends itself when there are any asserts. This works, but it is not universal and looks like a hack.
To my mind, option 1 is the best, but I don't know how it can be achieved. What I'm looking for is a way to pause the runtime (something similar to the Pause button in a debugger). Does somebody know how to achieve this?
Also, if somebody knows a better way to handle this problem, I would appreciate your help. Thank you.
It is important to understand how Windows UI programs work in order to answer this question.
At the core of the Windows UI programming model is, of course, the message queue. Messages arrive in message queues and are retrieved using message pumps. A message pump is not special. It's merely a loop that retrieves one message at a time, blocking the thread if none are available.
Now why are you getting all these dialogs? Dialog boxes, including MessageBox, also have a message pump. As such, they will retrieve messages from the message queue (it doesn't matter much who is pumping messages in the Windows model). This allows painting, mouse movement, and keyboard input to keep working. It will also trigger additional timers and therefore additional dialog boxes.
So, the canonical Windows approach is to handle each message whenever it arrives. They are a fact of life and you deal with them.
In your situation, I would consider a slight variation. You really want to save the state of your stack at the point where the assert happened. That's a particularity of asserts that deserves to be respected. Therefore, spin off a thread for your dialog, and create it without a parent HWND. This gives the dialog an isolated message queue, independent of the original window. Since there's also a new thread for it, you can suspend the original thread, the one where WM_TIMER arrives.
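A minimal sketch of that idea (plain Win32; function names are made up). The asserting thread simply blocks in WaitForSingleObject - which does not pump messages, so no further WM_TIMER is dispatched on it - while the message box runs on its own thread with its own message queue:

#include <windows.h>

static DWORD WINAPI AssertDialogThread(LPVOID param)
{
    // no parent HWND, so the box gets its own, isolated message pump
    MessageBoxA(NULL, static_cast<const char *>(param),
                "Assertion failed", MB_OK | MB_ICONERROR);
    return 0;
}

void ReportAssert(const char *text)
{
    HANDLE h = CreateThread(NULL, 0, AssertDialogThread, (LPVOID)text, 0, NULL);
    if (h != NULL)
    {
        WaitForSingleObject(h, INFINITE);   // park the asserting thread until the box is closed
        CloseHandle(h);
    }
}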
Don't show a prompt - either log to a file/debug output, or just forcibly break into the debugger (usually platform-specific, e.g. Microsoft's __debugbreak()). You have to do something more passive than showing a dialog if there are threads involved which could fire lots of failures.
Create a worker thread for your debugging code. When an assert happens, send a message to the worker thread. The worker thread would call SuspendThread on each thread in the process (except itself) to stop it, and then display a message box.
To get the threads in a process, create a DLL and monitor DllMain for thread attach (and detach) notifications - each call is made in the context of the thread being created (or destroyed), so you can get the current thread ID and create a handle to use with SuspendThread.
Alternatively, the toolhelp API will help you find the threads to pause.
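For the toolhelp route, a sketch of suspending every other thread in the current process (a matching resume function would mirror it with ResumeThread):

#include <windows.h>
#include <tlhelp32.h>

void SuspendAllOtherThreads()
{
    const DWORD pid  = GetCurrentProcessId();
    const DWORD self = GetCurrentThreadId();
    HANDLE snapshot = CreateToolhelp32Snapshot(TH32CS_SNAPTHREAD, 0);
    if (snapshot == INVALID_HANDLE_VALUE)
        return;

    THREADENTRY32 entry;
    entry.dwSize = sizeof(entry);
    if (Thread32First(snapshot, &entry))
    {
        do
        {
            // the snapshot covers all threads in the system - filter to our process, skip ourselves
            if (entry.th32OwnerProcessID == pid && entry.th32ThreadID != self)
            {
                HANDLE thread = OpenThread(THREAD_SUSPEND_RESUME, FALSE, entry.th32ThreadID);
                if (thread)
                {
                    SuspendThread(thread);
                    CloseHandle(thread);
                }
            }
        } while (Thread32Next(snapshot, &entry));
    }
    CloseHandle(snapshot);
}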
The reason I prefer this approach is that I don't like asserts that cause side effects. Too often I've had asserts fire from asynchronous socket-processing or window-message-processing code; then the assert message box is created on that thread, which corrupts the state of the thread through a totally unexpected re-entrancy point. MessageBox also discards any messages sent to the thread, so it messes up any worker threads that use thread message queues to queue jobs.
My own ASSERT implementation calls DebugBreak(), or alternatively INT 3 (__asm int 3 in MS VC++). An ASSERT should break into the debugger.
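For example, a minimal MSVC-flavoured sketch of such a macro:

#include <windows.h>
#include <intrin.h>

#ifdef _DEBUG
#define ASSERT(x)                                                  \
    do {                                                           \
        if (!(x)) {                                                \
            OutputDebugStringA("Assertion failed: " #x "\n");      \
            if (IsDebuggerPresent())                               \
                __debugbreak(); /* stop right here, no message pump involved */ \
        }                                                          \
    } while (0)
#else
#define ASSERT(x) ((void)0)
#endif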
Use the MessageBox function. This will block until the user clicks "OK". After that, you can choose to discard further assertion-failure messages or keep displaying them, as you prefer.
I have an NSDocument-based Cocoa app and I have a couple of secondary threads that I need to terminate gracefully (wait for them to run through the current loop) when the users closes the document window or when the application quits. I'm using canCloseDocumentWithDelegate to send a flag to the threads when the document is closing and then when they're done, one of them calls [NSDocument close]. This seems to work peachy keen when the user closes the document window, but when you quit the app, it goes all kinds of wrong (crashes before it calls anything). What is the correct procedure for something like this?
The best possible way is for the threads to own the objects necessary for the thread to finish doing whatever it is doing to the point of being able to abort processing and terminate as quickly as possible.
Under non-GC, this means a -retain that the thread -releases when done. For GC, it is just a hard reference to the object(s) desired.
If there is some kind of lengthy processing that must go on and must complete before the document is closed, then drop a sheet with a progress bar and leave the document modal until done (both Aperture and iPhoto do exactly this).