I wrote a simple app based on pjsip (pjsua2).
If I close my app while there are active calls, I get a memory access error in Endpoint::on_call_state(pjsua_call_id call_id, pjsip_event *e).
Before closing I tried:
Endpoint::instance().hangupAllCalls();
pj_thread_sleep(2000);
Sometimes 2 seconds is enough and the app closes correctly, but sometimes it isn't.
What is the correct way to close a pjsua2 app?
And how do I wait until all calls are hung up?
From my experience working with PJSUA2, the correct way to exit is to ensure that the destructors of all calls run before the pj::Account destructor, and that the pj::Account destructor runs before the pj::Endpoint destructor.
In my applications I keep an integer call counter and an exit flag in the pj::Account-derived class, like:
class MyAccount : public pj::Account
{
public:
...
void incrementCallsCount() { ++_callsCount; }
void decrementCallsCount() { --_callsCount; }
size_t getCallsCount() const { return _callsCount; }
...
void setWantExit(bool wantExitFlag) { _wantExitFlag = wantExitFlag; }
void onIncomingCall(pj::OnIncomingCallParam &prm)
{
if (_wantExitFlag)
return;
MyCall *call = MyCall::create(*this, prm.callId);
// Do your stuff with call:
// - add to map id->call
// - answer SIP 180 Ringing / SIP 200 OK
}
...
private:
std::atomic<size_t> _callsCount{0};
std::atomic<bool> _wantExitFlag{false};
};
In the constructor of my pj::Call-derived class I call incrementCallsCount(), and in the destructor I call decrementCallsCount(), like:
class MyCall : public pj::Call
{
public:
typedef pj::Call base_class;
static MyCall *create(MyAccount &account, int callId)
{
return new MyCall(account, callId);
}
virtual void onCallState(pj::OnCallStateParam& prm)
{
pj::CallInfo ci = getInfo();
if (ci.state == PJSIP_INV_STATE_DISCONNECTED)
{
// You may call some onDisconnected() handler function here
delete this;
return;
}
}
...
private:
MyCall(MyAccount &account, int callId)
: base_class(account, callId)
, _myAccount(account)
{
_myAccount.incrementCallsCount();
}
virtual ~MyCall()
{
_myAccount.decrementCallsCount();
}
MyAccount &_myAccount;
};
Note that the constructor and destructor are declared private, to ensure that users create calls only through the static function MyCall::create(). The MyCall class takes responsibility for its own memory management: a call is deleted only when PJSUA2 signals that the call is finished (the PJSIP_INV_STATE_DISCONNECTED call state).
With these functions and classes in mind, if you want to exit the application almost immediately, you should:
Stop creating MyCall objects in MyAccount by calling setWantExit(true). When MyAccount::onIncomingCall() returns without creating the call, PJSUA2 handles this by executing hangup() for the call immediately.
Call Endpoint::instance().hangupAllCalls().
Wait until MyAccount::getCallsCount() returns 0.
Ensure MyAccount's destructor is called before the Endpoint's destructor.
Exit the application.
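To make the waiting step concrete, the wait can be a simple polling loop with a timeout, so shutdown never hangs forever. A minimal sketch, with a global atomic counter standing in for MyAccount::getCallsCount() (the function name and timeout value are illustrative, not from PJSUA2):

```cpp
#include <atomic>
#include <chrono>
#include <thread>

// Hypothetical stand-in for MyAccount::getCallsCount(); in a real app you
// would query the account object instead of this global counter.
std::atomic<size_t> g_callsCount{0};

// Poll the call counter until it reaches zero or the timeout expires.
// Returns true if all calls ended in time.
bool waitForCallsToEnd(std::chrono::milliseconds timeout)
{
    const auto deadline = std::chrono::steady_clock::now() + timeout;
    while (g_callsCount.load() != 0) {
        if (std::chrono::steady_clock::now() >= deadline)
            return false;
        std::this_thread::sleep_for(std::chrono::milliseconds(50));
    }
    return true;
}
```

Unlike a fixed pj_thread_sleep(2000), this returns as soon as the last call's destructor has run, and it reports failure instead of crashing if something is still stuck when the deadline passes.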
Considering the following code, where I declare a simple class for executing asynchronous/threaded operations:
#include <chrono>
#include <thread>
#include <mutex>
#include <future>
#include <iostream>
using namespace std::chrono_literals;
class basic_executor {
public:
basic_executor() {
_mx.lock();
printf("Ctor #%p\n", this);
_mx.unlock();
}
virtual ~basic_executor() {
_mx.lock();
printf("Dtor #%p\n", this);
_mx.unlock();
if (_thread.joinable()) {
_thread.join();
_mx.lock();
printf("Joined thread #%p\n", this);
_mx.unlock();
}
}
// sync call
void run() {
start();
execute();
stop();
}
// async call
void launch(bool detach = false) {
// create packaged task
std::packaged_task< void() > task([this] {
start();
execute();
stop();
});
// assign future object to function return
_done = task.get_future();
// launch function on a separate thread
_thread = std::thread(std::move(task));
// detach them from main thread in order to avoid waiting for them
if (detach == true) {
_thread.detach();
}
}
// blocking wait for async (detached/joinable)
void wait() const {
_done.wait();
}
protected:
virtual void start() { /* for derived types to implement */ }
virtual void stop() { /* for derived types to implement */ }
virtual void execute() { /* for derived types to implement */ }
std::mutex _mx;
std::thread _thread;
std::future< void > _done;
};
And using the following application example where I derive from it to create two logger objects that make dummy prints for a certain span of time:
class logger : public basic_executor {
public:
logger() { /* ... */}
~logger() {
_mx.lock();
std::cout << "logger destructor " << std::endl;
_mx.unlock();
}
void execute() override {
std::this_thread::sleep_for(1s);
for (int i = 0; i < 10; ++i) {
_mx.lock();
printf("L1: I am printing something\n");
_mx.unlock();
std::this_thread::sleep_for(1s);
}
}
void stop() override {
_mx.lock();
printf("L1: I am done!\n");
_mx.unlock();
}
};
class logger2 : public basic_executor {
public:
logger2() { /* ... */}
~logger2() {
_mx.lock();
printf("logger2 destructor\n");
_mx.unlock();
}
void execute() override {
for (int i = 0; i < 10; ++i) {
_mx.lock();
printf("L2: I am ALSO printing something\n");
_mx.unlock();
std::this_thread::sleep_for(2s);
}
}
void stop() override {
_mx.lock();
printf("L2: I am done!\n");
_mx.unlock();
}
};
int main(int argc, char const *argv[]) {
/* code */
// printf("log:\n");
logger log1;
// printf("log1:\n");
logger2 log2;
printf("----------------------------------!\n");
log2.launch();
log1.launch();
// log1.wait();
// log2.wait();
printf("----------------------------------!\n");
return 0;
}
I am getting an unexpected behavior from the program:
Ctor #0x7fff8b18c990
Ctor #0x7fff8b18c9e0
----------------------------------!
----------------------------------!
logger2 destructor
Dtor #0x7fff8b18c9e0
Joined thread #0x7fff8b18c9e0
logger destructor
Dtor #0x7fff8b18c990
L1: I am printing something
L1: I am printing something
L1: I am printing something
L1: I am printing something
L1: I am printing something
L1: I am printing something
L1: I am printing something
L1: I am printing something
L1: I am printing something
L1: I am printing something
Joined thread #0x7fff8b18c990
in that occasionally, the 'log2' object never starts its execution before being destroyed, or the 'join()' call in its destructor hangs indefinitely. Is there any obvious reason why this happens? What exactly am I missing here?
The bug can occur with either logging class. With undefined behavior you have no guarantees whatsoever, and no expectation of any kind of consistent results; you have simply only observed the bug, so far, with one of the two logging classes. I could explain why that is, but in practical terms it is immaterial: the bug can happen with either of the objects. Let's begin here:
_thread = std::thread(std::move(task));
You are not going to get any guarantees whatsoever that the new execution thread will immediately start executing any of the following before this code proceeds and returns from launch():
std::packaged_task< void() > task([this] {
start();
execute();
stop();
});
Most of the time, in practice, this will start running fairly quickly in the new execution thread. But you cannot rely on that. All that C++ guarantees you is that at some point after std::thread finishes constructing, a new execution thread will start running. It may be immediate, or it may be a few hundred milliseconds later, because your operating system had something more important on its plate.
You are expecting that the new execution thread will always start executing "immediately", simultaneous with std::thread getting constructed. That is not true. After all, you might be running with a single CPU core, and after constructing the std::thread object you're continuing to execute what follows in the same execution thread, and only a while later a context switch occurs, to the new execution thread.
Meanwhile:
launch() returns.
The parent execution thread reaches the end of main().
All of the objects in the automatic scope are going to get destroyed, one by one.
In C++, when an object consists of a superclass and a subclass, the subclass gets destroyed first, followed by the superclass. This is how C++ works.
So, the logger/logger2 subclass's destructor gets invoked immediately, and it destroys its part of the object (just the logger/logger2 subclass).
Now the superclass's destructor gets invoked, to destroy the superclass. ~basic_executor starts doing its thing, patiently waiting.
And now, finally, that new execution thread, remember that one? Guess what: it finally starts running, and valiantly tries to execute start(), execute(), and stop(). Or maybe it managed to get through start(), first, but hasn't reached execute() yet. But since the actual logger subclass is already destroyed now, guess what happens? Nothing. It's gone. The subclass is no more. It ceased to be. It joined the choir invisible. It is pining for the fjords. It's an ex-subclass. There is no logger::execute or logger2::execute() any more.
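One way to make this design safe is to guarantee that the thread is joined before any part of the object is destroyed: each most-derived class joins in its own destructor, while that object is still fully alive. A minimal sketch (class names are mine, not the original basic_executor):

```cpp
#include <atomic>
#include <thread>

std::atomic<int> g_steps{0};

class executor {
public:
    virtual ~executor() { join(); }   // safety net for the base part only
    void launch() { _thread = std::thread([this] { execute(); }); }
    void join() { if (_thread.joinable()) _thread.join(); }
protected:
    virtual void execute() {}
private:
    std::thread _thread;
};

class worker : public executor {
public:
    // The derived destructor body runs before the derived part is torn
    // down, so joining here guarantees execute() has finished while the
    // object (and its vtable) is still fully intact.
    ~worker() override { join(); }
private:
    void execute() override { ++g_steps; }
};
```

Equivalently, the original program is fixed by uncommenting the log1.wait()/log2.wait() calls before main() returns; the essential point is that the join must happen before the derived destructor completes.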
I am creating a C++ server application using standalone Asio and C++11 and am getting an error, which is why I am asking for help.
The error
In the class worker_thread, during the call to shared_from_this(), a bad_weak_ptr exception is raised, which causes the program to crash.
The layout
The class connection_manager creates and stores objects of type std::shared_ptr<worker_thread> inside a std::vector container
The class worker_thread inherits from std::enable_shared_from_this<worker_thread>.
The class worker_thread creates objects of type std::shared_ptr<connection>.
The class connection requires a pointer (a shared pointer) to the worker_thread class, so that it can call void handle_finish(std::shared_ptr<connection>)
Program flow
The class worker_thread is created via its constructor, from the class connection_manager using std::make_shared<worker_thread> with two shared pointers as parameters.
void init() is called on the worker_thread by the connection_manager
Later in the program, connection_manager calls std::shared_ptr<connection> get_available_connection() from worker_thread
During this method's execution, a new connection is created via std::make_shared<connection>, and one of the arguments is the shared pointer to the current worker_thread obtained via shared_from_this()
During the shared_from_this() call, the program crashes with a bad_weak_ptr exception.
Research
From my research, the most common causes of this error are:
When shared_from_this() is called within a constructor (or a function which is called by the constructor)
When there is no existing std::shared_ptr pointing to the object.
In my program:
The call to the constructor and the call to get_available_connection() are separate, and judging by lines output to the terminal, the worker_thread is constructed and initialised by the time get_available_connection() is called
The connection_manager class holds a shared pointer to every worker_thread object.
Code
All something_ptr are std::shared_ptr<something>
Header files
connection_manager.hpp
typedef asio::executor_work_guard<asio::io_context::executor_type>
io_context_work;
std::vector<worker_thread_ptr> workers;
std::vector<io_context_ptr> io_contexts;
std::vector<io_context_work> work;
worker_thread.hpp
class worker_thread : std::enable_shared_from_this<worker_thread> {
public:
/// Create a worker thread.
explicit worker_thread(io_context_ptr io, config_ptr vars_global);
void init();
void join();
connection_ptr get_available_connection();
//...
connection.hpp
explicit connection(std::shared_ptr<worker_thread> worker,
std::shared_ptr<asio::io_context> io,
config_ptr vars_parent);
Source files
connection_manager.cpp
connection_manager::connection_manager(config_ptr vars) {
std::size_t number_of_threads = vars->worker_threads;
while(number_of_threads > 0) {
io_context_ptr io_context(new asio::io_context);
io_contexts.push_back(io_context);
work.push_back(asio::make_work_guard(*io_context));
worker_thread_ptr worker =
std::make_shared<worker_thread>(io_context, vars);
workers.push_back(worker);
worker->init();
--number_of_threads;
}
}
connection_ptr connection_manager::get_available_connection() {
std::size_t index_of_min_thread = 0;
std::size_t worker_count = workers.size();
for(std::size_t i = 1; i < worker_count; ++i) {
if(workers[i]->active_connection_count() <
workers[index_of_min_thread]->active_connection_count())
index_of_min_thread = i;
}
return workers[index_of_min_thread]->get_available_connection();
}
worker_thread.cpp
worker_thread::worker_thread(io_context_ptr io,
config_ptr vars_global)
:io_context(io), active_conn_count(0), vars(vars_global),
worker(
[this]() {
if(io_context)
io_context->run();
}
) {}
void worker_thread::init() {
//Additional initialisation, this is called by connection_manager
//after this thread's construction
}
connection_ptr worker_thread::get_available_connection() {
connection_ptr conn;
if(!available_connections.empty()) {
conn = available_connections.front();
available_connections.pop();
active_connections.insert(conn);
return conn;
} else {
conn = std::make_shared<connection>(shared_from_this(), io_context, vars);
active_connections.insert(conn);
return conn;
}
}
I am sorry if this question has been answered before, but I tried to resolve this, and after trying for some time, I decided it would be better to ask for help.
EDIT
Here is a minimum test, which fails. It requires CMake, and you might have to change the minimum required version.
Google Drive link
I think your problem might be that you use the default (private) inheritance.
Here is a simple example of a program that crashes:
class GoodUsage : public std::enable_shared_from_this<GoodUsage>
{
public:
void DoSomething()
{
auto good = shared_from_this();
}
};
class BadUsage : std::enable_shared_from_this<BadUsage> // private inheritance
{
public:
void DoSomething()
{
auto bad = shared_from_this();
}
};
int main()
{
auto good = std::make_shared<GoodUsage>();
auto bad = std::make_shared<BadUsage>();
good->DoSomething(); // ok
bad->DoSomething(); // throws std::bad_weak_ptr
}
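Applied to the question's code, the fix is a one-word change: inherit publicly from std::enable_shared_from_this. A stripped-down sketch (constructor arguments and the rest of the class omitted):

```cpp
#include <memory>

// Public inheritance lets std::make_shared find and initialise the
// enable_shared_from_this base, so shared_from_this() works.
class worker_thread : public std::enable_shared_from_this<worker_thread> {
public:
    std::shared_ptr<worker_thread> self() {
        return shared_from_this();  // no longer throws std::bad_weak_ptr
    }
};
```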
I quickly wrote a wrapper to ensure that some functionality in a system is always executed in a defined thread context. To keep the code as small as possible, I simply use a pointer assignment to check whether the thread has started.
void waitForStart() {
while (_handler == nullptr) {
msleep(100); // Sleep for 100ms;
}
msleep(100); // Sleep for 100ms to make sure the pointer is assigned
}
In my opinion, this should work in any case, even if the assignment to _handler is for some unknown reason split into two operations on the CPU.
Is my assumption correct? Or did I miss a case where this could go wrong?
For reference a more complete example how the system looks like. There are the System, the Thread and the Handler classes:
class Handler {
public:
void doSomeWork() {
// things are executed here.
}
};
class Thread : public ThreadFromAFramework {
public:
Thread() : _handler(nullptr) {
}
void waitForStart() {
while (_handler == nullptr) {
msleep(100); // Sleep for 100ms;
}
msleep(100); // Sleep for 100ms to make sure the pointer is assigned
}
Handler* handler() const {
return _handler;
}
protected:
virtual void run() { // This method is executed as a new thread
_handler = new Handler();
exec(); // This will go into a event loop
delete _handler;
_handler = nullptr;
}
private:
Handler *_handler;
};
class System {
public:
System() {
_thread = new Thread();
_thread->start(); // Start the thread, this will call run() in the new thread
_thread->waitForStart(); // Make sure we can access the handler.
}
void doSomeWork() {
Handler *handler = _thread->handler();
// "Magically" call doSomeWork() in the context of the thread.
}
private:
Thread *_thread;
};
You missed a case where this can go wrong. The thread might exit 5 msec after it sets the pointer. Accessing any changing variable from two threads is never reliable without synchronization.
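A more robust alternative to polling a plain pointer is to publish the handler under a mutex and signal a condition variable; the extra 100 ms "safety" sleep then becomes unnecessary. A minimal sketch (class names are illustrative, and the framework's event loop is omitted):

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

class Handler {
public:
    void doSomeWork() { /* things are executed here */ }
};

class Worker {
public:
    ~Worker() {
        if (_thread.joinable()) _thread.join();
        delete _handler;
    }
    void start() { _thread = std::thread([this] { run(); }); }
    // Block until the thread has published the handler; no polling,
    // no guessing how long the assignment takes to become visible.
    void waitForStart() {
        std::unique_lock<std::mutex> lock(_mx);
        _cv.wait(lock, [this] { return _handler != nullptr; });
    }
    Handler* handler() {
        std::lock_guard<std::mutex> lock(_mx);
        return _handler;
    }
private:
    void run() {
        {
            std::lock_guard<std::mutex> lock(_mx);
            _handler = new Handler();   // published under the lock
        }
        _cv.notify_all();
        // the thread's event loop would run here
    }
    std::mutex _mx;
    std::condition_variable _cv;
    std::thread _thread;
    Handler* _handler = nullptr;
};
```

The mutex/condition-variable pair gives you the happens-before relationship the raw pointer read lacks, and the predicate form of wait() handles spurious wakeups.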
What would be a good or best way to ensure thread safety for callback objects? Specifically, I'm trying to prevent a callback object from being destroyed before all the threads are finished with it.
It is easy to write client code that ensures thread safety, but I'm looking for something a bit more streamlined, for example using a factory object to generate the callback objects. The trouble then lies in tracking the usage of the callback object.
Below is an example code that I'm trying to improve.
class CHandlerCallback
{
public:
CHandlerCallback(){ ... };
virtual ~CHandlerCallback(){ ... };
virtual void OnBegin(UINT nTotal ){ ... };
virtual void OnStep (UINT nIncrmt){ ... };
virtual void OnEnd(UINT nErrCode){ ... };
protected:
...
};
static DWORD WINAPI ThreadProc(LPVOID lpParameter)
{
CHandler* phandler = (CHandler*)lpParameter;
phandler ->ThreadProc();
return 0;
};
class CHandler
{
public:
CHandler(CHandlerCallback * sink = NULL) {
m_pSink = sink;
// Start the server thread. (ThreadProc)
};
~CHandler(){...};
VOID ThreadProc(LPVOID lpParameter) {
... do stuff
if (m_pSink) m_pSink->OnBegin(..)
while (not exit) {
... do stuff
if (m_pSink) m_pSink->OnStep(..)
... do stuff
}
if (m_pSink) m_pSink->OnEnd(..);
};
private:
CHandlerCallback * m_pSink;
};
class CSpecial1Callback: public CHandlerCallback
{
public:
CSpecial1Callback(){ ... };
virtual ~CSpecial1Callback(){ ... };
virtual void OnStep (UINT nIncrmt){ ... };
};
class CSpecial2Callback: public CHandlerCallback...
Then the code that runs everything in a way similar to the following:
int main {
CSpecial2Callback* pCallback = new CSpecial2Callback();
CHandler handler(pCallback );
// Right now the client waits for CHandler to finish before deleting
// pCallback
}
Thanks!
If you're using C++11, you can use smart pointers to keep the object around until the last reference to it disappears; see std::shared_ptr. If you're not on C++11, you can use Boost's version. If you don't want to include that library and aren't on C++11, you can resort to keeping an internal count of the threads using the object and destroying it when that count reaches 0. Note that tracking the counter yourself can be tricky, as you'll need atomic updates to the counter.
shared_ptr<CSpecial2Callback> pCallback(new CSpecial2Callback());
CHandler handler(pCallback); // You'll need to change this to take a shared_ptr
... //Rest of code -- when the last reference to
... //pCallback is used up it will be destroyed.
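Applied to the question's code, a sketch might look like this (the Win32 thread machinery is replaced by std::thread so the example is self-contained, and the callback bodies are reduced to a counter):

```cpp
#include <atomic>
#include <memory>
#include <thread>

std::atomic<int> g_steps{0};

class CHandlerCallback {
public:
    virtual ~CHandlerCallback() {}
    virtual void OnStep(unsigned nIncrmt) {}
};

class CCountingCallback : public CHandlerCallback {
public:
    void OnStep(unsigned nIncrmt) override { g_steps += nIncrmt; }
};

class CHandler {
public:
    explicit CHandler(std::shared_ptr<CHandlerCallback> sink)
        : m_pSink(sink) {}
    ~CHandler() { if (m_thread.joinable()) m_thread.join(); }
    void Start() {
        // The lambda copies the shared_ptr, so the callback stays alive
        // for as long as the worker thread needs it, even if the client
        // releases its own reference first.
        std::shared_ptr<CHandlerCallback> sink = m_pSink;
        m_thread = std::thread([sink] { if (sink) sink->OnStep(1); });
    }
private:
    std::shared_ptr<CHandlerCallback> m_pSink;
    std::thread m_thread;
};
```

The client no longer has to wait before releasing its pointer: whichever reference (client's or thread's) is dropped last destroys the callback.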
This is in the context of the Microsoft C++ Concurrency API.
There's a class called agent (under Concurrency namespace), and it's basically a state machine you derive and implement pure virtual agent::run.
Now, it is your responsibility to call agent::start, which will put it in a runnable state. You then call agent::wait*, or any of its variants, to actually execute the agent::run method.
But why do we have to call agent::done within the body? I mean, the obvious answer is that agent::wait* will wait until done is signaled or the timeout has elapsed, but...
What were the designers intending? Why not have the agent enter the done state when agent::run returns? That's what I want to know. Why do I have the option to not call done? The wait methods throw exceptions if the timeout has elapsed.
About the only reason I can see is that it lets you state you are done(), then do more work (say, cleanup) that you don't want your consumer to have to wait on.
Now, they could have done this:
private: void agent::do_run() {
run();
if (status() != agent_done)
done();
}
then have their framework call do_run() instead of run() directly (or the equivalent).
However, you'll note that you yourself can do this.
class myagent: public agent {
protected:
virtual void run() final override { /* see do_run above, except call do_run in it */ }
virtual void do_run() = 0;
};
and poof, if your do_run() fails to call done(), the wrapping function does it for you. If this second virtual function overhead is too high for you:
template<typename T>
class myagent: public agent {
private:
void call_do_run()
{
static_cast<T*>(this)->do_run();
}
protected:
virtual void run() final override { /* see do_run above, but call_do_run() */ }
};
This is the CRTP (curiously recurring template pattern), which lets you do compile-time dispatch. Use:
class foo: public myagent<foo>
{
public:
void do_run() { /* code */ }
};
... /shrug