Gtk::Label update label speed - c++

I have a program that tries to update a Gtk::Label at a very high frequency, and it is exhibiting very unstable behaviour. I get several of these errors:
(gtkWindow:26559): Pango-CRITICAL **: pango_layout_copy: assertion 'PANGO_IS_LAYOUT (src)' failed
(gtkWindow:26559): Pango-CRITICAL **: pango_layout_set_width: assertion 'layout != NULL' failed
(gtkWindow:26559): Pango-CRITICAL **: pango_layout_get_pixel_extents: assertion 'PANGO_IS_LAYOUT (layout)' failed
(gtkWindow:26559): GLib-GObject-CRITICAL **: g_object_unref: assertion 'G_IS_OBJECT (object)' failed
until the program eventually crashes with:
Pango:ERROR:pango-layout.c:3916:pango_layout_check_lines: assertion failed: (!layout->log_attrs)
2921
Aborted (core dumped)
The relevant code lines:
while (1) {
    std::string sensorLine = "";
    _serial.readLine(&sensorLine);         // read serial data with boost::asio
    _output->set_label(sensorLine.data()); // _output -> Gtk::Label*
    std::cout << sensorLine << std::endl;
    //sleep(1);
}
I only get the errors if I use _output->set_label; if I comment out that line, everything runs smoothly and the output is printed to the console. Likewise, if I call sleep() inside the loop, the Gtk::Label is updated along with the command line output and no errors are thrown.
This loop runs on a separate thread that receives _output as an argument.

Use g_idle_add (which actually is thread-safe) with a callback which in turn modifies (read: calls set_label on) your Gtk::Label.
Do not call UI functions from a different thread! Never ever! You open Pandora's box if you do.
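For illustration, a minimal sketch of that pattern, assuming gtkmm; LabelUpdate, apply_label_update and queue_label_update are hypothetical names, not taken from the original program:
#include <glib.h>
#include <gtkmm.h>
#include <string>

// Data handed from the worker thread to the idle callback.
struct LabelUpdate {
    Gtk::Label *label;
    std::string text;
};

// Runs in the GTK main loop, so touching the widget here is safe.
static gboolean apply_label_update(gpointer data)
{
    auto *update = static_cast<LabelUpdate *>(data);
    update->label->set_label(update->text);
    delete update;
    return G_SOURCE_REMOVE;   // run once, then drop the idle source
}

// Called from the reader thread instead of _output->set_label(...).
void queue_label_update(Gtk::Label *label, const std::string &text)
{
    g_idle_add(apply_label_update, new LabelUpdate{label, text});
}
The reader loop would then call queue_label_update(_output, sensorLine) and leave the actual set_label call to the main loop.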


XInitThreads and libfltk

I am compiling code with an FLTK interface, and I need to fork from a callback: it takes parameters from a form and launches the work when 'run' is hit.
I cannot simply fork at the start of the function to have one thread return to the UI instantly. XInitThreads() is documented to take no argument and to return zero on failure; any other value means success.
My check never shows XInitThreads() returning 0, so that part is working. Yet I still get this error:
[xcb] Unknown sequence number while processing queue
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
rc: ../../src/xcb_io.c:260: poll_for_event: Assertion `!xcb_xlib_threads_sequence_lost' failed.
This appears two times, one per launched thread.
I check the call with:
if (XInitThreads() == 0)
{
    fprintf(stderr, "Warning ! No forking available.\n");
}
The warning never appears.
Using ubuntu 20.10
g++
FLTK-1.1
ARCH amd64
The work is launched by using std::thread:
void task(void *v);   // does the actual work off the UI thread

void wrapper(Fl_Widget *w, void *v) {
    std::thread th(task, v);
    th.detach();
}

Fl_Button b;
b.callback(wrapper, data);
The wrapper function starts a detached thread; we do not join it inside the callback function, or at all.
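One thing worth checking (a sketch under the assumption of a standard FLTK main; the program structure below is illustrative, not the original code): XInitThreads() only helps if it is the very first Xlib-related call in the process, i.e. before FLTK opens the display or shows the first window.
#include <X11/Xlib.h>
#include <FL/Fl.H>
#include <FL/Fl_Window.H>
#include <cstdio>

int main(int argc, char **argv) {
    // Must run before any other Xlib call, so before any FLTK window is shown.
    if (XInitThreads() == 0) {
        std::fprintf(stderr, "Warning ! No forking available.\n");
        return 1;
    }

    Fl_Window win(300, 200, "demo");
    win.show(argc, argv);
    return Fl::run();
}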

Removing RTP Extensions in GStreamer

In a GStreamer (v1.17.0) application written in C++, I have an MPEG-TS/RTP stream carrying a special-purpose RTP extension. I suspect that rtpmp2tdepay has a problem with RTP extensions, because when I bypass the part of the chain that adds the extensions, everything seems OK. So I decided to remove the RTP extension before the rtpmp2tdepay element.
But since there is no remove_extension kind of function in GstRTPBuffer, I have run into problems.
This is my code, which results in segmentation faults:
GstRTPBuffer rtp = GST_RTP_BUFFER_INIT;
GstBuffer *buf = gst_buffer_make_writable(inputbuf);
gst_rtp_buffer_map(buf, GST_MAP_READWRITE, &rtp);
gst_rtp_buffer_set_extension(&rtp, 0);
if ((&rtp)->map[1].memory != NULL)
{
    gst_buffer_unmap(buf, &((&rtp)->map[1]));
}
gst_rtp_buffer_unmap(&rtp);
return buf;
The error I encounter is this:
(receiver:1903): GStreamer-CRITICAL **: 13:19:43.409: gst_mini_object_unlock: assertion '(state & access_mode) == access_mode' failed
(receiver:1903): GStreamer-WARNING **: 13:19:43.409: free_priv_data: object finalizing but still has parent (object:0x7f4ea800d000, parent:0x7f4eb0107d80)
and later on:
(receiver:1903): GStreamer-CRITICAL **: 13:19:43.409: gst_mini_object_lock: assertion 'GST_MINI_OBJECT_IS_LOCKABLE (object)' failed
What is wrong with this code? Is there a better way to do this?
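For reference, a minimal sketch of the same idea that leaves the internal map[] array to gst_rtp_buffer_unmap() instead of unmapping rtp.map[1] by hand; drop_rtp_extension_flag is a placeholder name. Note that this only clears the X bit in the RTP header, the extension words themselves stay in the packet:
#include <gst/gst.h>
#include <gst/rtp/gstrtpbuffer.h>

static GstBuffer *
drop_rtp_extension_flag (GstBuffer *inputbuf)
{
    GstRTPBuffer rtp = GST_RTP_BUFFER_INIT;
    GstBuffer *buf = gst_buffer_make_writable (inputbuf);

    if (!gst_rtp_buffer_map (buf, GST_MAP_READWRITE, &rtp))
        return buf;                       /* not a valid RTP buffer */

    gst_rtp_buffer_set_extension (&rtp, FALSE);

    /* gst_rtp_buffer_unmap() releases every region acquired by the map call,
     * including the extension area, so no manual gst_buffer_unmap() is needed. */
    gst_rtp_buffer_unmap (&rtp);

    return buf;
}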

MPI_Comm_spawn throws Assertion failed in file src/util/procmap/local_proc.c at line 127: node_id <= max_node_id

I am using MPICH. I start an MPI program with one process, using:
mpiexec -n 1 -f hostfile ./master [arguments]
where the arguments are the set of arguments required by the MPI process. The master calculates the optimal number of worker processes to be spawned and then calls the following function:
MPI_Comm_spawn("./worker", argv, nprocs, MPI_INFO_NULL, 0, MPI_COMM_SELF,&intercomm,NULL);
Now this program works fine when my machinefile looks like this:
node01:2
node02:1
node03:1
node04:1
So on node01 I have 2 processes running (master + 1 worker) and the rest of the nodes run one process each. However, I want each node to run only 1 process, so I modified the hostfile as follows. Now, after MPI_Comm_spawn(), the child processes aren't initialized and I end up with a failed assertion.
node01:1
node02:1
node03:1
node04:1
ERROR
Assertion failed in file src/util/procmap/local_proc.c at line 127: node_id <= max_node_id
internal ABORT - process 0
Assertion failed in file src/util/procmap/local_proc.c at line 112: my_node_id <= max_node_id
internal ABORT - process 2
Assertion failed in file src/util/procmap/local_proc.c at line 127: node_id <= max_node_id
internal ABORT - process 1
I am unable to figure out what is going wrong in this case. Has anyone else faced a similar issue? What is the probable cause of it?

How to gracefully shutdown a boost asio ssl client?

The client does some ssl::stream<tcp_socket>::async_read_some() / ssl::stream<tcp_socket>::async_write() calls and at some point needs to exit, i.e. it needs to shut down the connection.
Calling ssl::stream<tcp_socket>::lowest_layer().close() works, but (as is to be expected) the server (an openssl s_server -state ... command) reports an error when the connection is closed.
Looking at the API, the right way seems to be to call ssl::stream<tcp_socket>::async_shutdown().
Now there are basically two situations where a shutdown is needed:
1) The client is in the async_read_some() callback and reacts to a 'quit' command from the server. Calling async_shutdown() from there yields a 'short read' error in the shutdown callback.
This is surprising, but after googling around it seems to be normal behaviour; one apparently has to check whether it is a real error or not, like this:
// const boost::system::error_code &ec
if (ec.category() == asio::error::get_ssl_category() &&
    ec.value() == ERR_PACK(ERR_LIB_SSL, 0, SSL_R_SHORT_READ)) {
    // -> not a real error, just a normal TLS shutdown
}
The TLS server seems to be happy, though - it reports:
DONE
shutting down SSL
CONNECTION CLOSED
2) An async_read_some() is active, but the user decides to exit the client (e.g. via a command from stdin). When calling async_shutdown() from that context, the following happens:
the async_read_some() callback is executed with a 'short read' error code - kind of expected by now
the async_shutdown() callback is executed with a 'decryption failed or bad record mac' error code - this is unexpected
The server side does not report an error.
Thus my question: how does one properly shut down a TLS client with Boost Asio?
One way to resolve the 'decryption failed or bad record mac' error code in the second situation is:
a) from inside the stdin handler call:
ssl::stream<tcp_socket>::lowest_layer().shutdown(tcp::socket::shutdown_receive)
b) this results in the async_read_some() callback getting executed with a 'short read' 'error' code
c) in that callback, under that 'error' condition, async_shutdown() is called:
// const boost::system::error_code &ec
if (ec.category() == asio::error::get_ssl_category() &&
    ec.value() == ERR_PACK(ERR_LIB_SSL, 0, SSL_R_SHORT_READ)) {
    // -> not a real error:
    do_ssl_async_shutdown();
}
d) the async_shutdown() callback is executed with a 'short read' error code, from where we finally call:
ssl::stream::lowest_layer().close()
These steps result in a connection shutdown without any weird error messages on either the client or the server side; a condensed sketch of this sequence follows the server output below.
For example, when using openssl s_server -state ... as the server, it reports on shutdown:
SSL3 alert read:warning:close notify
DONE
shutting down SSL
CONNECTION CLOSED
ACCEPT
(the last line is because the command accepts new connections)
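For illustration, a condensed sketch of the a)-d) sequence, assuming a client class holding an ssl::stream<tcp::socket>; socket_, on_read and request_shutdown are placeholder names, not from the original code, and the 'short read' check reuses the ERR_PACK test shown above:
#include <boost/asio.hpp>
#include <boost/asio/ssl.hpp>
#include <openssl/err.h>

namespace asio = boost::asio;
using tcp = asio::ip::tcp;

struct Client {
    asio::ssl::stream<tcp::socket> socket_;

    Client(asio::io_context &io, asio::ssl::context &ctx) : socket_(io, ctx) {}

    // Step a): e.g. called from the stdin handler.
    void request_shutdown() {
        // Stops the pending read; its callback fires with 'short read'.
        socket_.lowest_layer().shutdown(tcp::socket::shutdown_receive);
    }

    // async_read_some() completion handler.
    void on_read(const boost::system::error_code &ec, std::size_t /*bytes*/) {
        if (ec.category() == asio::error::get_ssl_category() &&
            ec.value() == ERR_PACK(ERR_LIB_SSL, 0, SSL_R_SHORT_READ)) {
            // Step c): not a real error, start the TLS shutdown handshake.
            socket_.async_shutdown([this](const boost::system::error_code &) {
                // Step d): close the underlying socket once close_notify is exchanged.
                socket_.lowest_layer().close();
            });
        }
        // ... handle real errors and normal reads here ...
    }
};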
Alternative
Instead of lowest_layer().shutdown(tcp::socket::shutdown_receive) we can also call
ssl::stream<tcp_socket>::lowest_layer().cancel()
to initiate a proper shutdown. It has the same effect, i.e. it yields the execution of the scheduled async_read_some() callback (but with an operation_aborted error code). Thus, one can call async_shutdown() from there:
if (ec.value() == asio::error::operation_aborted) {
    cout << "(not really an error)\n";
    do_async_ssl_shutdown();
}

Linux kill() error unexpected

kill(pid, 0) seems to not set the error code correctly, i.e. to one of the values stated in the man page for kill:
Errors
The kill() function shall fail if:
EINVAL  The value of the sig argument is an invalid or unsupported signal number.
EPERM   The process does not have permission to send the signal to any receiving process.
ESRCH   No process or process group can be found corresponding to that specified by pid.
Instead it returns ENOENT (no such file or directory), and sometimes it returns EINTR (interrupted system call)...
Here is what I am doing:
kill(g_StatusInstance[i].pid, SIGTERM) == -1 && log_fatal_syscall("kill-sigterm");
kill(g_StatusInstance[i].pid, 0);
log_info_console("Checking process for errors: %s\n", strerror(errno));
if (errno != ENOENT)
{
    kill(g_StatusInstance[i].pid, SIGKILL) == -1 && log_fatal_syscall("kill-sigkill");
}
Am I doing something wrong?
Yes. You are not checking the return value of the kill() system call. kill() does not set errno to any particular value in the successful case.
Try this:
if (kill(g_StatusInstance[i].pid, 0) == -1) {
    log_info_console("Checking process for errors: %s\n", strerror(errno));
} else {
    log_info_console("kill returned 0, process still alive\n");
}
More generally, you ought to check the return value of every system call or library call, unless it is declared to return void.
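As an aside, a small sketch of how kill(pid, 0) is usually interpreted as an existence probe (check_process is just an illustrative name); kill() reports EINVAL, EPERM or ESRCH, never ENOENT:
#include <errno.h>
#include <signal.h>
#include <stdio.h>

static void check_process(pid_t pid)
{
    if (kill(pid, 0) == 0) {
        printf("process %ld exists and can be signalled\n", (long) pid);
    } else if (errno == ESRCH) {
        printf("process %ld does not exist\n", (long) pid);
    } else if (errno == EPERM) {
        printf("process %ld exists but we lack permission to signal it\n", (long) pid);
    } else {
        perror("kill");    /* anything else, e.g. an interrupted call */
    }
}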
Based on the discussion, your actual question is probably "Why did my kill() not have the effect I expected?"
To understand why, first try strace on the process that is the target of the kill(). Attach it to the existing process by pid, or invoke the program under strace from the start. strace will show modifications to the signal mask and indicate when signals arrive. If your signal is arriving, debug the process targeted by the kill() and try to understand what the installed/default signal handler is expected to do.