I am building a program with an FLTK interface. I need to be able to fork off a callback, because it takes parameters from a form and launches the work when 'run' is hit.
I cannot just fork at the start of the function to have one thread come back to the UI instantly. The documentation says that XInitThreads() takes no arguments and returns zero on failure, anything else on success.
My check does not show XInitThreads() returning 0, so this part seems to be working. Yet I still get an error:
[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
rc: ../../src/xcb_io.c:260: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.
And this appears twice, once per launched thread.
I check the call with:
if (XInitThreads() == 0)
{
    fprintf(stderr, "Warning ! No forking available.\n");
}
The warning doesn't appear.
Using Ubuntu 20.10
g++
FLTK-1.1
ARCH amd64
// by using std::thread:
Fl_Button b;
b.callback(wrapper, data);

void wrapper(Fl_Widget *w, void *v) {
    std::thread th(task, v);
    th.detach();
}
The wrapper function starts a thread, and we don't join it inside the callback function, or anywhere else.
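For completeness, here is roughly how the pieces fit together in a stripped-down form (the window geometry, the task body and the data pointer are placeholders, not my real code):

#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <FL/Fl_Button.H>
#include <X11/Xlib.h>
#include <cstdio>
#include <thread>

static void task(void *v) {
    // placeholder for the long-running work started from the form
}

static void wrapper(Fl_Widget *w, void *v) {
    std::thread th(task, v);
    th.detach();              // return to the UI immediately
}

int main(int argc, char **argv) {
    if (XInitThreads() == 0)  // must be the first Xlib-related call
        std::fprintf(stderr, "Warning ! No forking available.\n");

    Fl_Window win(200, 100, "run");
    Fl_Button b(50, 35, 100, 30, "run");
    void *data = nullptr;     // placeholder for the form parameters
    b.callback(wrapper, data);
    win.show(argc, argv);
    return Fl::run();
}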
Related
I've hit a bit of an issue and I'm not sure what to make of it.
I'm running Qt 4.8.6 and Qt Creator 3.3.2 on Ubuntu 12.04, cross compiling to a BeagleBone Black running Debian 7 (kernel 3.8.13).
The issue that I'm seeing is that this code:
if (qApp->hasPendingEvents())
{
    qDebug() << "pending events";
}
qApp->processEvents(QEventLoop::AllEvents, 10);
does not function as it should according to (at least my interpretation of) the Qt documentation. I would expect the process events loop to function for AT MOST the 10 milliseconds specified.
What happens is that the qDebug statement is never printed. I would therefore expect that there are no events to be processed, and that the processEvents statement goes in and out very quickly. Most of the time this is the case.
What happens (not every time, but often enough) is that the qDebug statement is skipped, yet the processEvents statement executes for somewhere between 1 and 2 seconds.
Is there some way that I can dig into what is happening in the process events and find out what is causing the delay?
Qt is processing events for longer than specified for QApplication::processEvents call on Linux system.

Is there some way that I can dig into what is happening in the process events and find out what is causing the delay?
Yes, looking at the Qt source code may help. On my machine it is in /home/myname/software/Qt/5.5/Src/qtbase/src/corelib/kernel/qeventdispatcher_unix.cpp, or somewhere around there:
bool QEventDispatcherUNIX::processEvents(QEventLoop::ProcessEventsFlags flags)
{
    Q_D(QEventDispatcherUNIX);
    d->interrupt.store(0);

    // we are awake, broadcast it
    emit awake();

    // This statement implies forcing events from system event queue
    // to be processed now with doSelect below
    QCoreApplicationPrivate::sendPostedEvents(0, 0, d->threadData);

    int nevents = 0;
    const bool canWait = (d->threadData->canWaitLocked()
                          && !d->interrupt.load()
                          && (flags & QEventLoop::WaitForMoreEvents));

    if (canWait)
        emit aboutToBlock();

    if (!d->interrupt.load()) {
        // return the maximum time we can wait for an event.
        timespec *tm = 0;
        timespec wait_tm = { 0l, 0l };
        if (!(flags & QEventLoop::X11ExcludeTimers)) {
            if (d->timerList.timerWait(wait_tm))
                tm = &wait_tm;
        }

        if (!canWait) {
            if (!tm)
                tm = &wait_tm;

            // no time to wait
            tm->tv_sec = 0l;
            tm->tv_nsec = 0l;
        }

        // runs actual event loop with POSIX select
        nevents = d->doSelect(flags, tm);
It seems there are system-posted events that are not accounted for by qApp->hasPendingEvents(). The call QCoreApplicationPrivate::sendPostedEvents(0, 0, d->threadData); then flushes those events so that they are processed by d->doSelect. If I were solving this task, I would try either to flush those posted events out, or to work out whether and why the flags parameter has the QEventLoop::WaitForMoreEvents bit set. I usually either build Qt from source or point the debugger at its symbols/source, so it is possible to dig in there.
P.S. I glanced at the Qt 5.5.1 event-processing source, but that should be very similar to what you are dealing with. Or could the implementation in your case actually be bool QEventDispatcherGlib::processEvents(QEventLoop::ProcessEventsFlags flags)? That is easy to find out on an actual system.
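If you want to confirm that theory without rebuilding Qt, a small instrumented wrapper could time the call and flush posted events explicitly first (QElapsedTimer and QCoreApplication::sendPostedEvents() are standard Qt API; the 50 ms threshold below is arbitrary):

#include <QCoreApplication>
#include <QElapsedTimer>
#include <QEventLoop>
#include <QDebug>

void pumpEventsBounded()
{
    QElapsedTimer t;
    t.start();

    // Flush posted events ourselves, so the call below mostly deals with
    // whatever arrives from the system event queue.
    QCoreApplication::sendPostedEvents();

    QCoreApplication::processEvents(QEventLoop::AllEvents, 10);

    if (t.elapsed() > 50)
        qDebug() << "processEvents overran:" << t.elapsed() << "ms";
}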
My OpenCL program doesn't always finish before further host (C++) code is executed. The OpenCL code is only executed up to a certain point (which appears to be random). The code is shortened a bit, so there may be a few things missing:
cl::Program::Sources sources;
string code = ResourceLoader::loadFile(filename);
sources.push_back({ code.c_str(),code.length() });
program = cl::Program(OpenCL::context, sources);
if (program.build({ OpenCL::default_device }) != CL_SUCCESS)
{
    exit(-1);
}
queue = CommandQueue(OpenCL::context, OpenCL::default_device);
kernel = Kernel(program, "main");
Buffer b(OpenCL::context, CL_MEM_READ_WRITE, size);
queue.enqueueWriteBuffer(b, CL_TRUE, 0, size, arg);
buffers.push_back(b);
kernel.setArg(0, this->buffers[0]);
vector<Event> wait{ Event() };
Version 1:
queue.enqueueNDRangeKernel(kernel, NDRange(), range, NullRange, NULL, &wait[0]);
Version 2:
queue.enqueueNDRangeKernel(kernel, NDRange(), range, NullRange, &wait, NULL);
wait[0].wait();
queue.finish();
Version 1 just does not wait for the OpenCL program. Version 2 crashes the program (at queue.enqueueNDRangeKernel):
Exception thrown at 0x51D99D09 (nvopencl.dll) in foo.exe: 0xC0000005: Access violation reading location 0x0000002C.
How would one make the host wait for the GPU to finish here?
EDIT: queue.enqueueNDRangeKernel returns -1000, while it returns 0 for a rather small kernel.
Version 1 says to signal wait[0] when the kernel is finished - which is the right thing to do.
Version 2 is asking your clEnqueueNDRangeKernel() to wait for the events in wait before it starts that kernel [which clearly won't work].
On its own, queue.finish() [or clFinish()] should be enough to ensure that your kernel has completed.
Since you haven't called clCreateUserEvent, and you haven't passed the event into anything else that initializes it, the second variant doesn't work.
It is rather bad that it crashes [it should return "invalid event" or some such - but presumably the driver you are using doesn't have a way to check that the event hasn't been initialized]. I'm reasonably sure the driver I work with will issue an error for this case - but I try to avoid getting it wrong...
I have no idea where -1000 comes from - it is neither a valid error code, nor a reasonable return value from the CL C++ wrappers. Whether the kernel is small or large [and/or completes in short or long time] shouldn't affect the return value from the enqueue, since all that SHOULD do is to enqueue the work [with no guarantee that it starts until a queue.flush() or clFlush is performed]. Waiting for it to finish should happen elsewhere.
I do most of my work via the raw OpenCL API, not the C++ wrappers, which is why I'm referring to what they do, rather than the C++ wrappers.
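Spelled out, the "Version 1" pattern I mean looks roughly like this (same queue, kernel and range objects as in your snippet; treat the wrapper calls as a sketch):

cl::Event done;
queue.enqueueNDRangeKernel(kernel, cl::NullRange, range, cl::NullRange,
                           nullptr, &done); // 'done' is signalled when the kernel completes

done.wait();     // block the host until this particular kernel has finished
queue.finish();  // or simply wait for everything enqueued on the queue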
I faced a similar problem with OpenCL, where some packets of a data stream were not processed by OpenCL.
I realized it only happened while the notebook was plugged into a docking station.
Maybe this helps someone.
(There were no clFlush or clFinish calls.)
I'm using Live555 to implement a C++ RTSP client for IP cameras.
I'm using most of the testRTSPClient code.
I use the Poco library and the Poco::Thread class too.
In other words, the client for each camera runs in a separate thread that owns its own Live555 objects (as the live555-devel list suggests, each thread uses its own UsageEnvironment and TaskScheduler). This avoids shared variables and synchronization issues. It seems to work well and fast.
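For reference, the per-thread setup looks roughly like this (the function name and the watch variable are just illustrative):

#include "BasicUsageEnvironment.hh"

void runCameraThread(char volatile& watchVariable)
{
    TaskScheduler* scheduler = BasicTaskScheduler::createNew();
    UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

    // ... create and start the RTSPClient for this camera against *env ...

    env->taskScheduler().doEventLoop(&watchVariable); // returns when watchVariable != 0

    env->reclaim();
    delete scheduler;
}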
My runnable object IPCamera (written to follow the Poco library requirements) has a run method as simple as:
void IPCamera::run()
{
    openURL(_myEnv, "", _myRtspCommand.c_str(), *this); // taken from the testRTSPClient example

    _myEnv->TaskScheduler().doEventLoop(&_watchEventLoopVariable);
    // runs until _watchEventLoopVariable changes to a value != 0,
    // then we exit from run()
}
When run() has finished I call join() to close the thread (by the way, I found that if I don't call myThread->join(), the memory is not freed completely).
Upon shutdown, following the advice on the live555-devel list, I put this in my code:
void IPCamera::shutdown()
{
    ...
    _myEnv->reclaim();
    delete _myScheduler;
}
Using Valgrind to detect memory leaks I saw a strange behaviour:
1) Case: run the program, then close it, with all the IPCameras running properly.
a) At the end of the program all the destructors are invoked.
b) doEventLoop() is exited.
c) The thread is joined (it has actually already terminated, because it exited from the run method).
d) _myEnv and _myScheduler are destroyed as shown.
e) All the other objects are destroyed, including each IPCamera and its associated Thread.
-> No memory leaks are found by Valgrind. OK.
Now comes the problem.
2) Case: I'm implementing a use case where a Poco::Timer checks every X seconds whether the camera is alive, using an ICMP ping. If the camera doesn't answer because the network is down, an event is raised (using Poco events) and I do the following.
When the IPCamera is down:
a) set _watchEventLoopVariable = 1 to exit from the run method;
b) shut down the client associated with the IPCamera as shown;
c) join the thread.
I don't destroy the thread, because I would like to reuse it when the network is up again and the camera works again. In that case:
a) I set _eventWatchVariable = 0.
b) I start the thread again with: myThread->run()
Valgrind tells me that memory leaks are found: 60 bytes directly and 20,000 bytes indirectly are lost in the thread, in H264BufferdPackedFactory::createNewPacket(...), a Live555 class.
SOLVED
I found out that the problem was tunneling over TCP. In LIVE555 you can select the kind of protocol.
If I select:
#define REQUEST_STREAMING_OVER_TCP false
I don't have any leak. I used Valgrind many times to be sure (that is how the problem was discovered).
If I use TCP, then the above problem shows up.
I am having a problem getting my stack trace output to stderr or dumped to a log file. I am running the code on Kubuntu 10.04 with gcc (4.4.3). The issue is that in normal running mode (without gdb), the program does not output anything except 'Segmentation fault'. I wish to output the backtrace as in the print statements below. When I run gdb with my application, it comes to the printf/fprintf/(function call) statement and then crashes with the following:
669 {
(gdb)
670 printf("Testing for stability.\n");
(gdb)
Program received signal SIGTRAP, Trace/breakpoint trap.
0x00007ffff68b1f45 in puts () from /lib/libc.so.6
The strange thing is that if the crash happens in a function within the same file, it works fine and prints the output properly. But if the program crashes in a function outside this file, it does not print any output.
So no printf, file-dumping statement, or function call gets processed. I am using the following sample code:
#include <stdio.h>
#include <stdlib.h>
#include <signal.h>
#include <execinfo.h>
#include <ucontext.h>

void bt_sighandler(int sig, siginfo_t *info,
                   void *secret) {

    void *trace[16];
    char **messages = (char **)NULL;
    int i, trace_size = 0;
    ucontext_t *uc = (ucontext_t *)secret;

    /* Do something useful with siginfo_t */
    if (sig == SIGSEGV)
        printf("Got signal %d, faulty address is %p, "
               "from %p\n", sig, info->si_addr,
               uc->uc_mcontext.gregs[0]);
    else
        printf("Got signal %d\n", sig);

    trace_size = backtrace(trace, 16);
    /* overwrite sigaction with caller's address */
    trace[1] = (void *) uc->uc_mcontext.gregs[0];
    messages = backtrace_symbols(trace, trace_size);

    /* skip first stack frame (points here) */
    printf("[bt] Execution path:\n");
    for (i = 1; i < trace_size; ++i)
        printf("[bt] %s\n", messages[i]);

    exit(0);
}

int main() {
    /* Install our signal handler */
    struct sigaction sa;

    sa.sa_sigaction = bt_sighandler;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = SA_RESTART | SA_SIGINFO;

    sigaction(SIGSEGV, &sa, NULL);
    sigaction(SIGUSR1, &sa, NULL);

    /* Do something that crashes (func_b() is defined elsewhere) */
    printf("%d\n", func_b());
}
Thanks in advance for any help.
Unfortunately you just can't reliably do much of anything in a SIGSEGV handler. Think about it this way: your program has hit a serious error and its state (including system-level state such as the heap) is inconsistent.
In such a case, you can't expect the OS to magically fix up the heap and the other internals it needs in order to execute arbitrary code inside your signal handler.
If the SEGV happens in your own code, the right solution is to use the core dump and fix the root problem. If the crash happens in other code, say via a shared library, I'd suggest isolating that code in an entirely separate binary and communicating between the two binaries. Then, if the library crashes, your main program does not.
You are supposed to do very little in a signal handler; in principle, only access variables of type sig_atomic_t and volatile data.
Doing I/O is definitely out of the question. See this page from the glibc manual:
http://www.gnu.org/s/libc/manual/html_node/Nonreentrancy.html#Nonreentrancy
Try using simpler functions, such as strcat() and write().
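If you do want to try printing something anyway, a sketch that stays mostly on the safe side would use only write() and backtrace_symbols_fd(), avoiding printf() and the malloc() inside backtrace_symbols() (note that backtrace() itself is still not formally async-signal-safe):

#include <execinfo.h>
#include <unistd.h>

static void safe_handler(int sig)
{
    void *trace[16];
    int n = backtrace(trace, 16);

    const char msg[] = "Caught signal, backtrace:\n";
    write(STDERR_FILENO, msg, sizeof(msg) - 1);

    /* write the symbolised frames straight to a file descriptor,
       instead of going through printf()/malloc() */
    backtrace_symbols_fd(trace, n, STDERR_FILENO);
    _exit(1);
}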
Is there a reason you can't use valgrind?
When an application crashes, Linux can create a core dump with the state of the application at the time of the crash. The core file can be examined using gdb.
If no core file is created, try changing the core file size limit with
ulimit -c unlimited
in the same shell and before the program is started.
The name of the core file is usually core.PID where PID is the pid of the program.
The core file is usually placed somewhere in /tmp or the directory where the program was started.
A lot more info on core files is available on the man page for core. Use
man core
to read the man page.
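For example (the program name and PID below are made up), you would then load the dump and print the stack trace like this:
gdb ./myprogram core.12345
(gdb) bt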
I managed to get it partially working. I was actually running the application in 'sudo' mode; running it in user mode gives me the call stack. However, running in user mode disables hardware acceleration (NVIDIA graphics drivers). To resolve that, I added myself to the 'video' group, so that I have access to /dev/nvidia0 and /dev/nvidiactl. However, once I have that access, the stack is no longer generated. The stack only appears when I am in user mode with hardware acceleration disabled. But I can't run my application without hardware acceleration (some important functionality would be disabled). Please let me know if anyone has any idea.
Thanks.
EDIT: I have now edited my code a bit to give a rough idea of "all" the code. Maybe this is helpful for identifying the problem ;)
I have integrated the following simple code fragment, which either cancels the timer if data is read from the TCP socket, or otherwise cancels the read from the socket:
// file tcp.cpp
void CheckTCPSocket()
{
    TRequestStatus iStatus;
    TSockXfrLength len;
    int timeout = 1000;
    RTimer timer;
    TRequestStatus timerstatus;
    TPtr8 buff;

    iSocket.RecvOneOrMore( buff, 0, iStatus, len );
    timer.CreateLocal();
    timer.After(timerstatus, timeout);

    // Wait for two requests – if timer completes first, we have a
    // timeout.
    User::WaitForRequest(iStatus, timerstatus);
    if(timerstatus.Int() != KRequestPending)
    {
        iSocket.CancelRead();
    }
    else
    {
        timer.Cancel();
    }
    timer.Close();
}
// file main.cpp
void TestActiveObject::RunL()
{
    TUint Data;
    MQueue.ReceiveBlocking(Data);
    CheckTCPSocket();
    SetActive();
}
This part is executed within an active object, and since integrating the code piece above I always get the following panic:
E32User-CBase 46: This panic is raised by an active scheduler, a CActiveScheduler. It is caused by a stray signal.
I never had any problem with my code until this piece of code was added. The code itself executes fine: data is read from the socket, and then the timer is cancelled and closed. I do not understand how the timer object can have any influence on the AO here.
It would be great if someone could point me in the right direction.
Thanks
This could be a problem with another active object completing (not one of these two), or SetActive() not being called. See Forum Nokia. Hard to say without seeing all your code!
BTW User::WaitForRequest() is nearly always a bad idea. See why here.
Never mix active objects and User::WaitForRequest().
(Well, almost never. When you know exactly what you are doing it can be ok, but the code you posted suggests you still have some learning to do.)
You get the stray-signal panic when the thread request semaphore is signalled with RThread::RequestComplete() by the asynchronous service provider, and the active scheduler, which was waiting on the semaphore with User::WaitForAnyRequest(), looks for a completed active object whose RunL() it could call, but cannot find any in its list of active objects.
In this case you have two ongoing requests, neither of which is controlled by the active scheduler (for example, not using CActive::iStatus as the TRequestStatus; issuing SetActive() on an object where CActive::iStatus is not involved in an async request is another error in your code, but not the reason for the stray signal). You wait for either one of them to complete with WaitForRequest(), but you don't wait for the other to complete at all. The other request's completion signal then goes to the active scheduler's WaitForAnyRequest(), resulting in a stray signal. Even if you cancel a request, you still need to wait on the thread request semaphore.
The best solution is to make the timeout timer an active object as well. Have a look at the CTimer class.
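Purely as an illustration (the class and member names below are invented, and I reuse the CancelRead() call from your snippet), a CTimer-based timeout object could look roughly like this:

#include <e32base.h>  // CTimer, CActiveScheduler
#include <es_sock.h>  // RSocket

class CSocketTimeout : public CTimer
    {
public:
    static CSocketTimeout* NewL(RSocket& aSocket)
        {
        CSocketTimeout* self = new (ELeave) CSocketTimeout(aSocket);
        CleanupStack::PushL(self);
        self->ConstructL(); // CTimer::ConstructL()
        CleanupStack::Pop(self);
        return self;
        }

private:
    CSocketTimeout(RSocket& aSocket)
        : CTimer(CActive::EPriorityStandard), iSocket(aSocket)
        {
        CActiveScheduler::Add(this);
        }

    void RunL() // timer fired: the socket read has timed out
        {
        iSocket.CancelRead();
        }

private:
    RSocket& iSocket;
    };

// Usage: instead of User::WaitForRequest(), start the timer with
// iTimeout->After(1000000); // one second, in microseconds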
Another solution is just to add another WaitForRequest on the request not yet completed.
You are calling TestActiveObject::SetActive() but there is no call to any method that sets TestActiveObject::iStatus to KRequestPending. This will create the stray signal panic.
The only iStatus variable in your code is local to the CheckTCPSocket() method.