Multithreading issue with UDP client class implemented in C++

Since I am developing some small audio applications that share audio content over the network through the UDP protocol, I am currently drafting the code for a UDP client class.
This class should receive the audio content of the other clients connected to the network and also send the audio content processed on the local machine; all this content is exchanged with a server that works as a kind of content router.
Since the audio content is generated by a process() method that is periodically called by the audio application, in order not to lose packets, each audio application should have a kind of UDP listener that is independent from the process() method and that should always be active; they should only share a buffer, or memory allocation, where audio data can be temporarily saved and later processed.
Taking all this into account, I coded this method:
void udp_client::listen_to_packets() {
    while (udp_client::is_listening) {
        if ((udp_client::message_len = recvfrom(udp_client::socket_file_descr, udp_client::buffer, _BUFFER_SIZE, MSG_WAITALL,
                                                (struct sockaddr*) &(udp_client::client_struct), &(udp_client::address_len))) < 0) {
            throw udp_client_exception("Error on receiving message.");
        }
        std::cout << "New message received!" << std::endl;
    }
    std::cout << "Stop listening for messages!" << std::endl;
}
As you can see, the function uses udp_client::buffer, which is the shared memory allocation I previously mentioned. To keep the listener always active, I was thinking of starting a new thread or process at class construction and stopping its execution at class destruction:
udp_client::udp_client():
    is_listening(true) {
    std::cout << "Constructing udp_client..." << std::endl;
    std::thread listener = std::thread(udp_client::listen_to_packets);
}
udp_client::~udp_client() {
    std::cout << "Destructing udp_client..." << std::endl;
    udp_client::is_listening = false;
}
Of course, the code listed above doesn't work; as #user4581301 suggested, the listener and is_listening variable definitions have been moved to class attributes:
private:
    std::atomic<bool> is_listening;
    std::thread listener;
Furthermore, the constructor and destructor have been modified a little:
udp_client::udp_client():
    listener(&udp_client::listen_to_packets, this),
    is_listening(true) {
    std::cout << "Constructing udp_client..." << std::endl;
}
udp_client::~udp_client() {
    std::cout << "Destructing udp_client..." << std::endl;
    udp_client::is_listening = false;
    listener.join();
}
Unfortunately, g++ still returns an error, saying that there is no constructor with two arguments for the std::thread class:
error: no matching constructor for initialization of 'std::thread'
listener(&udp_client::listen_to_packets, this)
So, what should I modify to make the code work properly?
Here you can see the implementation of the class (hoping this link is allowed under Stack Overflow rules):
https://www.dropbox.com/sh/lzxlp3tyvoncvxo/AAApN5KLf3YAsOD0PV7wJJO4a?dl=0

Related

Confusion about boost::asio::io_context::run

I am currently working on a project where I use the MQTT protocol for communication.
There is a Session class in a dedicated file which basically just sets up the publish handler, i.e. the callback that is invoked when this client receives a message (the handler checks whether the topic matches "ZEUXX/var", then deserializes the binary content of the frame and subsequently unsubscribes from the topic):
session.hpp:
class Session
{
public:
    Session()
    {
        comobj = MQTT_NS::make_sync_client(ioc, "localhost", "1883", MQTT_NS::protocol_version::v5);
        using packet_id_t = typename std::remove_reference_t<decltype(*comobj)>::packet_id_t;
        // Setup client
        comobj->set_client_id(clientId);
        comobj->set_clean_session(true);
        /* If someone sends commands to this client */
        comobj->set_v5_publish_handler( // use v5 handler
            [&](MQTT_NS::optional<packet_id_t> /*packet_id*/,
                MQTT_NS::publish_options pubopts,
                MQTT_NS::buffer topic_name,
                MQTT_NS::buffer contents,
                MQTT_NS::v5::properties /*props*/) {
                std::cout << "[client] publish received. "
                          << " dup: " << pubopts.get_dup()
                          << " qos: " << pubopts.get_qos()
                          << " retain: " << pubopts.get_retain() << std::endl;
                std::string_view topic = std::string_view(topic_name.data(), topic_name.size());
                std::cout << " -> topic: " << topic << std::endl;
                if (topic.substr(0, 9) == "ZEUXX/var") // preceding branch elided in this excerpt
                {
                    std::cout << "[client] reading variable name: " << topic.substr(10, topic.size() - 9) << std::endl;
                    auto result = 99; // dummy variable, normally a std::variant of float, int32_t, uint8_t
                                      // obtained by deserializing the binary content of the frame
                    std::cout << comobj->unsubscribe(std::string{topic});
                }
                return true;
            });
    }
    void readvar(const std::string &varname)
    {
        comobj->publish(serialnumber + "/read", varname, MQTT_NS::qos::at_most_once);
        comobj->subscribe(serialnumber + "/var/" + varname, MQTT_NS::qos::at_most_once);
    }
    void couple()
    {
        comobj->connect();
        ioc.run();
    }
    void decouple()
    {
        comobj->disconnect();
        std::cout << "[client] disconnected..." << std::endl;
    }
private:
    std::shared_ptr<
        MQTT_NS::callable_overlay<
            MQTT_NS::sync_client<MQTT_NS::tcp_endpoint<as::ip::tcp::socket, as::io_context::strand>>>>
        comobj;
    boost::asio::io_context ioc;
};
The client is based on a boost::asio::io_context object which happens to be the origin of my confusion. In my main file I have the following code.
main.cpp:
#include "session.hpp"

int main()
{
    Session session;
    session.couple();
    session.readvar("speedcpu");
}
Essentially, this creates an instance of the class Session and the couple member invokes the boost::asio::io_context::run member. This runs the io_context object's event processing loop and blocks the main thread, i.e. the third line in the main function will never be reached.
I would like to initiate a connection (session.couple) and subsequently do my publish and subscribe commands (session.readvar). My question is: How do I do that correctly?
Conceptually, what I aim for is best expressed by the following Python code:
client.connect("localhost", 1883)
# client.loop_forever() is what happens at the moment; the program
# doesn't continue from here.
# With loop_start(), the processing loop gets started without blocking the
# program, and one can send publish commands subsequently.
client.loop_start()
while True:
    client.publish("ZEUXX/read", "testread")
    time.sleep(20)
Running the io_context object in a separate thread doesn't seem to work the way I tried it; any suggestions on how to tackle this problem? What I tried is the following:
Adaptation in session.hpp:
// Adapt the couple function to run io_context in a separate thread
void couple()
{
    comobj->connect();
    std::thread t(boost::bind(&boost::asio::io_context::run, &ioc));
    t.detach();
}
Adaptations in main.cpp:
int main(int argc, char** argv)
{
    Session session;
    session.couple();
    std::cout << "successfully started io context in separate thread" << std::endl;
    session.readvar("speedcpu");
}
The std::cout line is now reached, i.e. the program no longer gets stuck in the couple member of the class by io_context.run(). However, directly after this line I get an error: "The network connection was aborted by the local system".
The interesting thing about this is that when I use t.join() instead of t.detach(), there is no error; however, with t.join() I get the same behavior as when I call io_context.run() directly, namely blocking the program.
Given your comment to the existing answer:
io_context.run() never returns because it never runs out of work (it is being kept alive by the MQTT server). As a result, the thread gets blocked as soon as I enter the run() method and I cannot send any publish and subscribe frames anymore. That was when I thought it would be clever to run the io_context in a separate thread so as not to block the main thread. However, when I detach this separate thread, the connection runs into an error; if I use join, it works fine but the main thread gets blocked again.
I'll assume you know how to get this running successfully in a separate thread. The "problem" you're facing is that, since io_context doesn't run out of work, calling thread::join will block as well, because it waits for the thread to stop executing. The simplest solution is to call io_context::stop before the thread::join. From the official docs:
This function does not block, but instead simply signals the io_context to stop. All invocations of its run() or run_one() member functions should return as soon as possible. Subsequent calls to run(), run_one(), poll() or poll_one() will return immediately until restart() is called.
That is, calling io_context::stop will cause the io_context::run call to return ("as soon as possible") and thus make the related thread joinable.
You will also want to save the reference to the thread somewhere (possibly as an attribute of the Session class) and only call thread::join after you've done the rest of the work (e.g. called the Session::readvar) and not from within the Session::couple.
When io_context runs out of work, it returns from run().
If you don't post any work, run() will always return immediately. Any subsequent run() also returns immediately, even if new work was posted.
To re-use an io_context after it has completed, use io_context.restart() (formerly reset()). In your case, it is better to:
- use a work guard (https://www.boost.org/doc/libs/1_73_0/doc/html/boost_asio/reference/executor_work_guard.html); see many of the library examples
- not "run" the ioc in couple() at all if you already run it on a background thread
- not run it on a background thread if you need synchronous behaviour
Also keep in mind that you need to afford graceful shutdown, which is strictly harder with a detached thread: after all, now you can't join() it to know when it exited.

Synchronize object

I have an object with an extensive API.
What is the best way to synchronize this object? The object already exists in legacy code and is used in hundreds of lines of code.
The naive way is to wrap each API call to the object with a std::mutex. Is there an easier or more elegant way to do it?
I have tried the code below, but would like to get opinions on it or alternative solutions.
Below is a template wrapper class that locks the object during usage in an automatic way, i.e. it locks the object on creation and unlocks upon destruction.
This pattern is very similar to a scoped lock; however, it's useful only for static objects/singletons, and it wouldn't work for different instances of a given object.
template <typename T> class Synced
{
    static std::mutex _lock;
    T& _value;
public:
    Synced(T& val) : _value(val)
    {
        std::cout << "lock" << std::endl;
        _lock.lock();
    }
    virtual ~Synced()
    {
        std::cout << "unlock" << std::endl;
        _lock.unlock();
    }
    T& operator()()
    {
        return _value;
    }
};
template <class T> std::mutex Synced<T>::_lock;
An example class to be used with the Synced template class; this could be an example of the class mentioned above, with tens of APIs:
class Board
{
public:
    virtual ~Board() { cout << "Test dtor " << endl; }
    void read() { cout << "read" << endl; }
    void write() { cout << "write" << endl; }
    void capture() { cout << "capture" << endl; }
};
An example of usage with basic calls; the Synced object isn't bound to a scope, so the destructor is called immediately after the semicolon:
int main(int argc, char* argv[])
{
    Board b;
    Synced<Board>(b)().read();
    cout << " " << endl;
    Synced<Board>(b)().write();
    cout << " " << endl;
    Synced<Board>(b)().capture();
    cout << " " << endl;
    return 1;
}
Below is the output of the above example run:
lock
read
unlock
lock
write
unlock
lock
capture
unlock
Test dtor
I only use mutexes for very small critical sections, a few lines of code maybe, and only if I control all possible error conditions. For a complex API you may end up with the mutex in an unexpected state. I tend to tackle this sort of thing with the reactor pattern. Whether or not that is practical depends on whether you can reasonably use serialization/deserialization for this object. If you have to write the serialization yourself, then consider things like API stability and complexity. I personally prefer ZeroMQ for this sort of thing when using it is practical; your mileage may vary.

C++ Boost::asio::io_service: how can I safely destroy io_service resources when the program is finished?

I run an async job thread for async io_service work.
I want to destroy the resources used for this async job:
boost::asio::io_service
boost::asio::io_service::work
boost::asio::steady_timer
boost::thread
I manage the singleton object by shared pointer, as in the AsyncTraceProcessor code below. As you know, shared_ptr automatically calls the destructor when the use count reaches 0. I want to destroy all resources in a safe way at that time.
I wrote the code below, but there is a SIGSEGV error on the JVM (this program is a Java native library).
How can I solve it? In my opinion, work that is already queued but not yet executed causes this error. In this case, how can I treat the remaining work in a safe way?
AsyncTraceProcessor::~AsyncTraceProcessor() {
    cout << "AsyncTraceProcessor Desructor In, " << instance.use_count() << endl;
    _once_flag;
    cout << "++++++++++flag reset success" << endl;
    traceMap.clear();
    cout << "++++++++++traceMap reset success" << endl;
    timer.cancel();
    cout << "++++++++++timer reset success" << endl;
    async_work.~work();
    cout << "++++++++++work reset success" << endl;
    async_service.~io_service();
    cout << "++++++++++io_service reset success" << endl;
    async_thread.~thread();
    cout << "++++++++++thread reset success" << endl;
    instance.reset();
    cout << "++++++++++instance reset success" << endl;
    cout << "AsyncTraceProcessor Desructor Out " << endl;
}
Error Log
AsyncTraceProcessor Desructor In, 0
Isn't Null
++++++++++flag reset success
++++++++++traceMap reset success
++++++++++timer reset success
++++++++++work reset success
A fatal error has been detected by the Java Runtime Environment:
++++++++++io_service reset success
++++++++++thread reset success
SIGSEGV
++++++++++instance reset success
AsyncTraceProcessor Desructor Out
C++ is unlike Java or C#, or basically any garbage-collecting language runtime: it has deterministic destruction. Lifetimes of objects are very tangible and reliable.
async_service.~io_service();
This is explicitly invoking a destructor without deleting the object, or before the lifetime of the automatic-storage variable ends.
The consequence is that the language will still invoke the destructor when the lifetime does end.
This is not what you want.
If you need to clear the work, make it a unique_ptr<io_service::work> so you can work_p.reset() instead (which does call its destructor, once).
After that, just wait for the threads to complete io_service::run(), meaning you should thread::join() them before the thread object gets destructed.
Member objects of classes have automatic storage duration and will be destructed when leaving the destructor body. They will be destructed in the reverse order in which they are declared.
Sample
struct MyDemo {
    boost::asio::io_service _ios;
    std::unique_ptr<boost::asio::io_service::work> _work_p { new boost::asio::io_service::work(_ios) };
    std::thread _thread { [this] { _ios.run(); } };
    ~MyDemo() {
        _work_p.reset();
        if (_thread.joinable())
            _thread.join();
    } // members are destructed by the language
};

Unable to Call Tcp Socket Received Callback Function on NS3

I am new to NS3. I am trying to create a custom application and currently have difficulty calling a socket callback function using socket->SetRecvCallback. This problem occurs when I use TcpSocketFactory; another socket type such as UDP does not produce this issue.
In main:
Ptr<Socket> ns3TcpSocket = Socket::CreateSocket(nodes.Get(0), TcpSocketFactory::GetTypeId());
Custom TCP application class:
this->socket->SetRecvCallback(MakeCallback(&CustomTcpApplication::RecvCallback, this));
this->socket->SetSendCallback(MakeCallback(&CustomTcpApplication::WriteUntilBufferFull, this));
My callback functions:
void CustomTcpApplication::RecvCallback(Ptr<Socket> socket)
{
    std::cout << "On Receive Callback Function" << std::endl;
}
void CustomTcpApplication::WriteUntilBufferFull(Ptr<Socket> localSocket, uint32_t txSpace)
{
    std::cout << "On Send Callback Function" << std::endl;
}
Also, I read from this answer to implement SetAcceptCallback: ns-3 wlan grid TCP not working while UDP is
this->socket->SetAcceptCallback(MakeNullCallback<bool, Ptr<Socket>, const Address &>(), MakeCallback(&CustomTcpApplication::Accept, this));
Callback Function
void CustomTcpApplication::Accept(Ptr<Socket> socket, const ns3::Address& from)
{
    std::cout << "Connection accepted" << std::endl;
    socket->SetRecvCallback(MakeCallback(&CustomTcpApplication::MainRecvCallback, this));
}
However, I still cannot see the log output from the function. Am I missing any step?
I ran into the same problem today. After tracing the code for hours, I found that there is no native support for a TCP data-receiving callback.
tcp-socket-base.cc forks a new TcpSocketBase object for the subsequent data transmission in TcpSocketBase::ProcessListen(). The copy constructor of TcpSocketBase resets the new socket's callback functions, including m_receivedData. That's why the data-receiving callback doesn't work for TCP.
A simple workaround is to preserve the callback variable from the original TcpSocketBase and make m_receivedDataTCP public in src/network/model/socket.h:
TcpSocketBase::TcpSocketBase (const TcpSocketBase& sock)
{
    ...
    m_receivedDataTCP = sock.m_receivedDataTCP;
}

Invalid instance variable in Asio completion handler

I've set up a simple async TCP server using Asio (non-Boost), which pretty much follows the code used here: http://think-async.com/Asio/asio-1.11.0/doc/asio/tutorial/tutdaytime3.html
I'm experiencing an issue where attempting to access a member variable of the current tcp_connection instance inside the completion handler for async_read_some/async_receive causes an error. The variable in question is simply a pointer to an instance of an encryption class that I have created. It seems that this pointer becomes invalid (address of 0xFEEEFEEE) once the completion handler is called. Here's the tcp_connection class that gets created once a connection from a client is made:
class tcp_connection
    : public enable_shared_from_this<tcp_connection> {
public:
    typedef shared_ptr<tcp_connection> pointer;
    static pointer create(asio::io_service &ios) {
        return pointer(new tcp_connection(ios));
    }
    tcp::socket &socket() {
        return socket_;
    }
    void start() {
        byte* buf = new byte[4096];
        socket_.async_receive(asio::buffer(buf, 4096), 0,
            bind(&tcp_connection::handle_receive, this,
                 buf,
                 std::placeholders::_1, std::placeholders::_2));
    }
private:
    tcp_connection(asio::io_service &ios)
        : socket_(ios) {
        crypt_ = new crypt();
    }
    void handle_receive(byte* data, const asio::error_code &err, size_t len) {
        cout << "Received packet of length: " << len << endl;
        crypt_->decrypt(data, 0, len); // This line causes a crash, as the crypt_ pointer is invalid.
        for (int i = 0; i < len; ++i)
            cout << hex << setfill('0') << setw(2) << (int)data[i] << ", ";
        cout << endl;
    }
    tcp::socket socket_;
    crypt* crypt_;
};
I'm assuming this has something to do with the way Asio uses threads internally. I would have thought that the completion handler (handle_receive) would be invoked with the current tcp_connection instance, though.
Is there something I'm missing? I'm not too familiar with Asio. Thanks in advance.
Firstly, you should use shared_from_this to prevent the tcp_connection from being "collected" while the only references to it are held by extant async operations:
socket_.async_receive(asio::buffer(buf, 4096), 0,
    bind(&tcp_connection::handle_receive, shared_from_this() /*HERE!!*/,
         buf,
         std::placeholders::_1, std::placeholders::_2));
Secondly, your tcp_connection class should implement the Rule of Three (at least clean up crypt_ in the destructor and prohibit copy/assignment).
You also don't free up buf in your current sample.
Of course, in general, just use smart pointers for all of these.