How do I get QWebView to release threads? - c++

I am creating a lot of QWebViews which each create QThreads. The problem is that I am running out of stack space to create more threads. Therefore, I was wondering if there is a way to clean up existing threads. To be clear, I am not the one creating these threads: Qt creates the threads when I show a QWebView.
Most of the threads (roughly 400 of the 500) have the exact same stack:
0 ntdll!RtlEnableEarlyCriticalSectionEventCreation C:\Windows\system32\ntdll.dll 0 0x770b013d
1 KERNEL32!GetVolumePathNamesForVolumeNameA C:\Windows\syswow64\kernel32.dll 0 0x766d1a2c
2 USER32!MessageBoxA C:\Windows\syswow64\user32.dll 0 0x74cd086a
3 QEventDispatcherWin32::processEvents qeventdispatcher_win.cpp 831 0x69de3948
4 QEventLoop::processEvents qeventloop.cpp 149 0x69dbf0c5
5 QEventLoop::exec qeventloop.cpp 204 0x69dbf223
6 QThread::exec qthread.cpp 501 0x69cd412b
7 QThread::run qthread.cpp 568 0x69cd4283
8 QThreadPrivate::start qthread_win.cpp 346 0x69cd54d1
9 msvcrt!_itow_s C:\Windows\syswow64\msvcrt.dll 0 0x75401287
10 msvcrt!_endthreadex C:\Windows\syswow64\msvcrt.dll 0 0x75401328
11 KERNEL32!BaseCleanupAppcompatCacheSupport C:\Windows\syswow64\kernel32.dll 0 0x766d339a
12 ntdll!RtlpNtSetValueKey C:\Windows\system32\ntdll.dll 0 0x770c9ef2
13 ntdll!RtlpNtSetValueKey C:\Windows\system32\ntdll.dll 0 0x770c9ec5
14 ?? 0
Is there any way to clean them up?

Since I realized that the threads were being allocated for the QNetworkAccessManager, I created a single global QNetworkAccessManager instance and set it as the network access manager of every QWebView's page. This lets all of the pages reuse the same group of threads and therefore doesn't leave many threads lying around.
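Below is a minimal sketch of that approach, assuming Qt 4's QtWebKit API; the sharedManager() and createWebView() helpers are placeholders, and the relevant call is QWebPage::setNetworkAccessManager():

#include <QWebView>
#include <QWebPage>
#include <QNetworkAccessManager>

// One application-wide network access manager, created on first use.
// Hypothetical helper; the point is that every page shares the same instance.
static QNetworkAccessManager *sharedManager()
{
    static QNetworkAccessManager manager;
    return &manager;
}

QWebView *createWebView(QWidget *parent)
{
    QWebView *view = new QWebView(parent);
    // Make the page use the shared manager instead of creating its own,
    // so its network worker threads are reused across all views.
    view->page()->setNetworkAccessManager(sharedManager());
    return view;
}

The manager must outlive every page that uses it, which the function-local static guarantees here.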

Related

How to get details of context switches happening in a multithreaded embedded application?

I am trying to profile a C++ application on an embedded device. Using VTune, I found out that the app is launching hundreds of threads, most of which are active for only a small percentage of the total time.
I want to get details of the context switches that are happening (preferably in some kind of timeline view). I have yet to come across a tool that can show the context switch information. Is there some kind of profiler that provides this, or some other way to get this info?
Thanks.
On Linux you could use cat /proc/{PID}/status to get some information on a process's threads, including the voluntary_ctxt_switches and nonvoluntary_ctxt_switches counters.
for example,
uname#hostname:/$ cat /proc/1357/status
Name: avahi-daemon
Umask: 0022
State: S (sleeping)
Tgid: 1357
Ngid: 0
Pid: 1357
PPid: 1
TracerPid: 0
Uid: 107 107 107 107
Gid: 114 114 114 114
FDSize: 128
Groups: 114
NStgid: 1357
NSpid: 1357
NSpgid: 1357
NSsid: 1357
VmPeak: 10500 kB
VmSize: 8288 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 3664 kB
VmRSS: 2700 kB
RssAnon: 328 kB
RssFile: 2372 kB
RssShmem: 0 kB
VmData: 468 kB
VmStk: 132 kB
VmExe: 92 kB
VmLib: 3720 kB
VmPTE: 52 kB
VmSwap: 0 kB
HugetlbPages: 0 kB
CoreDumping: 0
Threads: 1
SigQ: 0/31668
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000001000
SigCgt: 0000000180004203
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
NoNewPrivs: 0
Seccomp: 0
Speculation_Store_Bypass: thread vulnerable
Cpus_allowed: ffffffff,ffffffff,ffffffff,ffffffff
Cpus_allowed_list: 0-127
Mems_allowed: 00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 1610
nonvoluntary_ctxt_switches: 25
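If you want to read those two counters from inside the application rather than from a shell, a minimal sketch (assuming a Linux /proc filesystem; the field names are the ones shown in the dump above) could look like this:

#include <fstream>
#include <iostream>
#include <string>

// Print the context switch counters of the current process by scanning
// /proc/self/status for the two relevant fields.
int main()
{
    std::ifstream status("/proc/self/status");
    std::string line;
    while (std::getline(status, line)) {
        if (line.compare(0, 24, "voluntary_ctxt_switches:") == 0 ||
            line.compare(0, 27, "nonvoluntary_ctxt_switches:") == 0) {
            std::cout << line << '\n';
        }
    }
    return 0;
}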
This answer is specific to Linux. It would be good if you specified which OS you are using, because otherwise you may get a solution you don't need.
If you have Linux Perf events, you can get a visual timeline of the context switches in your application using perf timechart record and perf timechart. If the duration of the record is large it may take a while to process the result.
If you want to know which parts of your program are the culprit, it may be better to use perf record -e context-switches --call-graph XXX to sample the backtrace whenever a context switch happens. Look into the perf manual for more details on the command line options. Once you have collected some trace data, you can visualise it with perf report. I believe Intel VTune is still able to open perf traces, but you need to rename the files from the default perf.data to a file name ending with a .perf extension.

Ejabberd Server Application CPU Overload

We have built Ejabberd on AWS EC2 instances and have enabled clustering across the 6 Ejabberd servers in the Tokyo, Frankfurt, and Singapore regions.
The OS, middleware, applications and settings for each EC2 instance are exactly the same.
But currently, the Ejabberd CPUs in the Frankfurt and Singapore regions are overloaded.
The CPU of Ejabberd in the Japan region is normal.
Could you please point me to what might be the suspicious part?
You can take a look at the ejabberd log files of the problematic (and the good) nodes; maybe you will find some clue there.
You can also use the undocumented "ejabberdctl etop" shell command on the problematic nodes. It's similar to "top", but runs inside the Erlang virtual machine that runs ejabberd:
ejabberdctl etop
========================================================================================
ejabberd#localhost 16:00:12
Load: cpu 0 Memory: total 44174 binary 1320
procs 277 processes 5667 code 20489
runq 1 atom 984 ets 5467
Pid Name or Initial Func Time Reds Memory MsgQ Current Function
----------------------------------------------------------------------------------------
<9135.1252.0> caps_requests_cache 2393 1 2816 0 gen_server:loop/7
<9135.932.0> mnesia_recover 480 39 2816 0 gen_server:loop/7
<9135.1118.0> dets:init/2 71 2 5944 0 dets:open_file_loop2
<9135.6.0> prim_file:start/0 63 1 2608 0 prim_file:helper_loo
<9135.1164.0> dets:init/2 56 2 4072 0 dets:open_file_loop2
<9135.818.0> disk_log:init/2 49 2 5984 0 disk_log:loop/1
<9135.1038.0> ejabberd_listener:in 31 2 2840 0 prim_inet:accept0/3
<9135.1213.0> dets:init/2 31 2 5944 0 dets:open_file_loop2
<9135.1255.0> dets:init/2 30 2 5944 0 dets:open_file_loop2
<9135.0.0> init 28 1 3912 0 init:loop/1
========================================================================================

libcurl strange crashes after Idle time

I use libcurl for FTP work and it works fine, but if left idle for some time it just crashes. Here is the backtrace, which I cannot make sense of despite reading it for some time. The trace does not show where in my functions the crash originates, so the debugger leaves me stranded. I use threads, if that adds anything.
Compiler is GCC 4.7 on Linux
0 0x00007fff8e09b524 addbyter /home/stefano/Desktop/myproject/curl-7.33.0/lib/mprintf.c 914
1 0x00007fff8e09a32f dprintf_formatf /home/stefano/Desktop/myproject/curl-7.33.0/lib/mprintf.c 572
2 0x00007fff8e09b5a4 curl_mvsnprintf /home/stefano/Desktop/myproject/curl-7.33.0/lib/mprintf.c 932
3 0x00007fff8e089510 Curl_failf /home/stefano/Desktop/myproject/curl-7.33.0/lib/sendf.c 152
4 0x00007fff8e07dbf4 Curl_resolv_timeout /home/stefano/Desktop/myproject/curl-7.33.0/lib/hostip.c 618
5 0x00007fff78012bf8 ??
6 0x000000c300000016 ??
7 0x00007fff8e0d3604 ??
8 0x0000000000000002 ??
9 0x00000000001b7740 ??
10 0x0000000000000000 ??
UPDATE 1
I ran it again under the debugger and hit a crash at this line:
FILE *fd;
fd = fopen(files[i].c_str(), "rb"); //<---here goes the crash!
files[i].c_str() is supposed to give a const char* from a wxString.
The new backtrace is:
0 0x00007fff8e08952a Curl_failf /home/stefano/Desktop/myproject/curl-7.33.0/lib/sendf.c 154
1 0x00007fff8e07dbf4 Curl_resolv_timeout /home/stefano/Desktop/myproject/curl-7.33.0/lib/hostip.c 618
2 0x00007fff780158c8 ??
3 0x00000000001b7730 ??
4 0x00007fff78009808 ??
5 0x00007fff78015e79 ??
6 0x00007fff78009808 ??
7 0x00007fff8c8a04a0 ??
8 0x00007fff8e0c84ca ftp_multi_statemach /home/stefano/Desktop/myproject/curl-7.33.0/lib/ftp.c 3113
Such an error can be caused if you're using curl in a non-main thread. When curl can't resolve a DNS entry, it uses a signal (by default) to interrupt the lookup after a timeout. Signals are not thread safe and can cause a crash. You should compile libcurl with --enable-threaded-resolver or with c-ares support.
It was also useful for me to disable signals altogether:
curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L);
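A minimal sketch of where that option fits in a typical easy-handle setup (the URL is a placeholder; error handling is reduced to the essentials):

#include <curl/curl.h>
#include <cstdio>

int main()
{
    curl_global_init(CURL_GLOBAL_DEFAULT);

    CURL *curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "ftp://example.com/file.txt"); // placeholder URL
        // Disable signal-based DNS timeouts; important when libcurl is used
        // from worker threads and was not built with the threaded resolver or c-ares.
        curl_easy_setopt(curl, CURLOPT_NOSIGNAL, 1L);

        CURLcode res = curl_easy_perform(curl);
        if (res != CURLE_OK)
            std::fprintf(stderr, "curl_easy_perform() failed: %s\n", curl_easy_strerror(res));

        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}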

OpenGL issue when 3rd party plug-ins also use OpenGL

I'm working on a program containing an OpenGL view (using Ogre3D); this program hosts third-party plug-ins (namely, VST) which can have their own UI opened. Some plug-ins also use OpenGL for their UI and make the program crash in the Ogre Render System as soon as this plug-in-specific OpenGL UI is opened (there is no crash with other, non-OpenGL plug-in UIs).
Exception Type: EXC_BAD_ACCESS (SIGBUS)
Exception Codes: KERN_PROTECTION_FAILURE at 0x0000000000000000
Crashed Thread: 0 Dispatch queue: com.apple.main-thread
Thread 0 Crashed: com.apple.main-thread
0 GLEngine gleRunVertexSubmitImmediate + 722
1 GLEngine gleLLVMArrayFunc + 60
2 GLEngine gleSetVertexArrayFunc + 116
3 GLEngine gleDrawArraysOrElements_ExecCore + 1514
4 GLEngine glDrawElements_Exec + 834
5 libGL.dylib glDrawElements + 52
6 RenderSystem_GL.dylib Ogre::GLRenderSystem::_Render(...)...
...
22 Ogre Ogre::Root::renderOneFrame() + 30
23 com.mycompany.myapp MyOgreWidget::paint()
...
(apparently a third-party thread from the plug-in)
Thread 10: Dispatch queue: com.apple.opengl.glvmDoWork
0 libSystem.B.dylib mach_msg_trap + 10
1 libSystem.B.dylib mach_msg + 68
2 libCoreVMClient.dylib cvmsServ_BuildModularFunction + 195
3 libCoreVMClient.dylib CVMSBuildModularFunction + 98
4 libGLProgrammability.dylib glvm_deferred_build_modular(void*) + 254
5 libSystem.B.dylib _dispatch_queue_drain + 249
6 libSystem.B.dylib _dispatch_queue_invoke + 50
7 libSystem.B.dylib _dispatch_worker_thread2 + 249
8 libSystem.B.dylib _pthread_wqthread + 390
9 libSystem.B.dylib start_wqthread + 30
I suspected that the OpenGL Context was not properly managed, either in Ogre3D or in the plug-in's UI, but it is not possible to access the plug-ins' render callbacks.
I tested with Ogre3D 1.7.1 and 1.7.3. My UI toolkit is Qt (version 4.6.3 and 4.7.4). Same issues with MacOSX and Windows.
I know other programs with OpenGL views that don't have this issue, even with the exact same plug-ins; I wonder how they handle such situations.
Any idea how to handle that?
Thanks for any help. All the best.
Any idea how to handle that?
I'd add a call to QGLWidget::doneCurrent right after finishing your own (=Ogre3D's) OpenGL work, and do a QGLWidget::makeCurrent before doing your own OpenGL work.
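A rough sketch of that idea, assuming MyOgreWidget (seen in the crash log above) derives from QGLWidget and owns a pointer to the Ogre root; the member names are placeholders:

#include <QGLWidget>
#include <OgreRoot.h>

class MyOgreWidget : public QGLWidget
{
public:
    void paint()
    {
        // Make Ogre's OpenGL context current before rendering, in case a
        // plug-in UI left a different context current in the meantime.
        makeCurrent();

        mOgreRoot->renderOneFrame();   // as seen in the backtrace above

        // Release the context afterwards so plug-in GL calls cannot land
        // in Ogre's context by accident.
        doneCurrent();
    }

private:
    Ogre::Root *mOgreRoot;             // assumed member, set up elsewhere
};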

Non voluntary context switches: How can I prevent them?

I have a small application that I was running just now, and I wanted to check whether it has any memory leaks, so I put in this piece of code:
for (unsigned int i = 0; i < 10000; i++) {
    for (unsigned int j = 0; j < 10000; j++) {
        std::ifstream &a = s->fhandle->open("test");
        char temp[30];
        a.getline(temp, 30);
        s->fhandle->close("test");
    }
}
When I ran the application I cat'ed /proc/<pid>/status to see if the memory increases.
The output is the following after about 2 minutes of runtime:
Name: origin-test
State: R (running)
Tgid: 7267
Pid: 7267
PPid: 6619
TracerPid: 0
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
FDSize: 256
Groups: 4 20 24 46 110 111 119 122 1000
VmPeak: 183848 kB
VmSize: 118308 kB
VmLck: 0 kB
VmHWM: 5116 kB
VmRSS: 5116 kB
VmData: 9560 kB
VmStk: 136 kB
VmExe: 28 kB
VmLib: 11496 kB
VmPTE: 240 kB
VmSwap: 0 kB
Threads: 2
SigQ: 0/16382
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000002004
SigCgt: 00000001800044c2
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: ffffffffffffffff
Cpus_allowed: 3f
Cpus_allowed_list: 0-5
Mems_allowed: 00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 120
nonvoluntary_ctxt_switches: 26475
None of the values changed except the last one, so does that mean there are no memory leaks?
But more importantly, what I would like to know is whether it is bad that the last value is increasing rapidly (about 26475 switches in about 2 minutes!).
I looked at some other applications to compare how many non-voluntary switches they have:
Firefox: about 200
Gdm: 2
Netbeans: 19
Then I googled and found some material, but it's too technical for me to understand.
What I got from it is that this happens when the application is switched off the processor, or something like that? (I have an AMD 6-core processor, btw.)
How can I prevent my application from doing that, and to what extent could this be a problem when running the application?
Thanks in advance,
Robin.
A voluntary context switch occurs when your application is blocked in a system call and the kernel decides to give its time slice to another process.
A non-voluntary context switch occurs when your application has used up the time slice the scheduler attributed to it (the kernel tries to pretend that each application has the whole computer to itself and can use as much CPU as it wants, but it has to switch from one process to another so that the user has the illusion that they are all running in parallel).
In your case, since you're opening, closing and reading from the same file, it probably stays in the virtual file system cache during the whole execution of the process, and your program is being preempted by the kernel because it never blocks (thanks to the system and library caches). On the other hand, Firefox, Gdm and Netbeans spend most of their time waiting for input from the user or from the network, so they rarely need to be preempted by the kernel.
Those context switches are not harmful. On the contrary, they allow the processor to be used fairly by all applications, even when one of them is waiting for some resource.
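If you want to watch those two counters from inside the program itself, getrusage() exposes the same numbers (a minimal sketch, POSIX/Linux only; the fields correspond to the two /proc status lines above):

#include <sys/resource.h>
#include <cstdio>

int main()
{
    struct rusage usage;
    // ru_nvcsw / ru_nivcsw are the voluntary / involuntary context switch
    // counts accumulated by the calling process.
    if (getrusage(RUSAGE_SELF, &usage) == 0) {
        std::printf("voluntary:   %ld\n", usage.ru_nvcsw);
        std::printf("involuntary: %ld\n", usage.ru_nivcsw);
    }
    return 0;
}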
And BTW, to detect memory leaks, a better solution would be to use a tool dedicated to this, such as valgrind.
To add to @Sylvain's info, there is a nice background article on Linux scheduling here: "Inside the Linux scheduler" (developerWorks, June 2006).
To look for a memory leak it is much better to install and use Valgrind (http://www.valgrind.org/). It will identify memory leaks on the heap and memory error conditions (use of uninitialized memory, and tons of other problems). I use it almost every day.