::avahi_client_new fails with error 'An unexpected D-Bus error occured' - c++

I am using Avahi for service advertisement and discovery.
Avahi also needs D-Bus, so the dbus-1.6.8 library is linked in as well.
I start dbus-daemon and avahi-daemon at startup. Both daemons are running, which I can see in the process list.
But when I try to create an Avahi client, the ::avahi_client_new call fails with the error "An unexpected D-Bus error occured", which is AVAHI_ERR_DBUS_ERROR = -22, /**< An unexpected D-Bus error occured */
Below is my function call.
Client = ::avahi_client_new(
    ::avahi_threaded_poll_get(Poll),
    static_cast<AvahiClientFlags>(0),
    &AvahiWrapper::OnClientStateChange,
    NULL,
    &error);
PS: Poll = ::avahi_threaded_poll_new(); succeeds.
Please let me know if anyone has a clue about this problem, or at least how to debug it.
Thanks in advance.

Related

Poco::Data::MySQL 'Got packets out of order' error

I got an ER_NET_PACKETS_OUT_OF_ORDER error when running a multithreaded C++ app using Poco::Data::MySQL and Poco::Data::SessionPool. The error message looks like this:
MySQL: [MySQL]: [Comment]: mysql_stmt_prepare error [mysql_stmt_error]: Got packets out of order [mysql_stmt_errno]: 1156 [mysql_stmt_sqlstate]: 08S01 [statemnt]: ...
The app is making queries from multiple threads every 100ms. The connections are provided by a common SessionPool.
I got around this problem by adding reset=true to the connection string. However, as stated in the official docs, adding this option may result in problems with encoding.

Upgrading to TLS 1.2 (Linux C++ gSOAP), encountering SSL_ERROR_SYSCALL

Q1: We would like to know the possible root cause of the following:
After upgrading from gSOAP 2.8.21 to 2.8.70, we encountered an issue upon executing SSL_Connect (during the handshake) when trying to use one of the methods of the generated gSOAP proxy classes. Below is the error we encountered:
Issue:
Error 30 fault detected [no subcode]
"SSL_ERROR_SYSCALL
Error observed by underlying SSL/TLS BIO: Connection reset by peer"
Detail: SSL_connect() error in tcp_connect()
Result of initial investigation:
Upon debugging we gathered some information about the problem:
The issue occurs inside the tcp_connect function when ssl_connect is executed. It returns -1; since it is inside a loop, the initial value of SSL_get_error is 2, then tcp_select is executed and returns 1.
On the second loop iteration of ssl_connect, still inside tcp_connect, the return value is still -1, but the SSL_get_error value becomes 5, which means SSL_ERROR_SYSCALL; when we look at errno, its value is 104.
The return value of tcp_connect is 30.
Note:
The endpoints (web service addresses) that we use work when we try them from a Windows platform (.NET Framework). The above issue is encountered only on ARM Linux devices.
Thanks and best regards,
JC

websocketpp "Underlying Transport Error" when listening

I use websocketpp in my program as a WebSocket server. But recently, in some users' environments, listening on certain ports fails. I printed the error_code message; it is "Underlying Transport Error". Does that mean the listening was blocked by a firewall or some third-party security software?
The code is as below:
std::error_code ec;
server_->set_message_handler(boost::bind(&on_message, server_, ::_1, ::_2));
server_->set_tls_init_handler(boost::bind(&on_tls_init, MOZILLA_INTERMEDIATE, ::_1));
server_->init_asio(ec);
server_->listen(2007, ec);
After init_asio executes, no error is returned, but after listen the error appears.
Thanks all
Firstly ensure that your program is not swallowing up any of the logging information from websocketpp. For example, the commented lines below would block logging:
websocketpp::server<websocketpp::config::asio> server;
//server.set_access_channels(websocketpp::log::alevel::none);
//server.set_error_channels(websocketpp::log::elevel::none);
Then, when you're running the program you should get something like this:
[2018-07-25 10:33:10] [info] asio listen error: system:98 (Address already in use)
This corresponds to the error codes in: boost/asio/error.hpp. See also: Boost error codes reference
My guess would be you have two instances of the program trying to run on the same port.

Some questions about protobuf

We are building an RTB (real-time bidding) platform, using nginx as the HTTP server, a bidder written in Lua, Google Protocol Buffers for serializing data, and Zlog for logs. After test runs, we got three error messages in the nginx error log:
"[libprotobuf Error, google/protobuf/wire_format.cc:1053]
String field contains invalid UTF-8 data when parsing a protocol buffer.
Use the 'bytes' type if you intend to send raw bytes."
So we went back to check the source code of Protocol Buffers, and found that this check is controlled by a macro (-DNDEBUG: it means NOT debug mode, according to the comment), and that -DNDEBUG disables GOOGLE_PROTOBUF_UTF8_VALIDATION (we think). So we enabled this macro (-DNDEBUG) in the configuration. However, after testing we still got the same error message. We then changed all the "string" types to "bytes" type in XXX.proto. After testing, the same error message showed.
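For reference, the UTF-8 check applies only to `string` fields, so the change described above would look like the sketch below (the message and field names are made up for illustration; the original .proto is not shown in the question):

```proto
// Hypothetical bid request message; names are illustrative only.
message BidRequest {
  optional string page_url  = 1;  // must be valid UTF-8
  optional bytes  user_data = 2;  // raw bytes, no UTF-8 validation
}
```

One thing worth checking: after editing the .proto, the code has to be regenerated with protoc and the regenerated module redeployed. If the nginx/Lua workers still load the old generated code, the string-to-bytes change has no effect, which would explain seeing the same error afterwards.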
worker process 53574 exited on signal 11(core dumped),then process died.
lua entry thread aborted: runtime error:/home/bilin/rtb/src/lua/shared/log.lua:34: 'short' is not callable"
Hope somebody can help us solving those problems.
Thank you.

Jetty 8.1 flooding the log file with "Dispatched Failed" messages

We are using Jetty 8.1 as an embedded HTTP server. Under overload conditions the server sometimes starts flooding the log file with these messages:
warn: java.util.concurrent.RejectedExecutionException
warn: Dispatched Failed! SCEP#76107610{l(...)<->r(...),d=false,open=true,ishut=false,oshut=false,rb=false,wb=false,w=true,i=1r}...
The same message is repeated thousands of times, and the amount of logging appears to slow down the whole system. The messages themselves are fine; our request handler is just too slow to process the requests in time. But the huge number of repeated messages actually makes things worse and makes it more difficult for the system to recover from the overload.
So, my question is: is this a normal behaviour, or are we doing something wrong?
Here is how we set up the server:
Server server = new Server();
SelectChannelConnector connector = new SelectChannelConnector();
connector.setAcceptQueueSize( 10 );
server.setConnectors( new Connector[]{ connector } );
server.setThreadPool( new ExecutorThreadPool( 32, 32, 60, TimeUnit.SECONDS,
        new ArrayBlockingQueue<Runnable>( 10 )));
The SelectChannelEndPoint is the origin of this log message.
To not see it, just set your named logger of org.eclipse.jetty.io.nio.SelectChannelEndPoint to LEVEL=OFF.
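Assuming you are using Jetty's default StdErrLog, that means an entry like the following in jetty-logging.properties (if you route Jetty's logging through slf4j/log4j instead, silence the same logger name in that framework's configuration):

```properties
# jetty-logging.properties -- silence only the noisy endpoint logger,
# leaving the rest of Jetty's logging intact
org.eclipse.jetty.io.nio.SelectChannelEndPoint.LEVEL=OFF
```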
Now as for why you see it, that is more interesting to the developers of Jetty. Can you detail what specific version of Jetty you are using and also what specific JVM you are using?