How to make a C++ boost::signal be caught by an object which encapsulates the object that emits it?

I have a TcpDevice class which encapsulates a TCP connection and has an onRemoteDisconnect method which gets called whenever the remote end hangs up. Then there's a SessionManager object which creates TcpSession objects that take a TcpDevice as a communication channel and inserts them into an internal pointer container for the application to use. Should any of the managed TcpSessions end, I would like the SessionManager instance to be notified so it can remove the corresponding session from the container, freeing the resources associated with it.
I found my problem to be very similar to this question:
Object delete itself from container
but since he has a thread for checking the connections' state, his situation differs a bit from mine and from the way I intended to solve it using boost::signals, so I decided to open a new question geared towards it - I apologize if that's the wrong way to do it... I'm still getting a feel for how to properly use S.O. :)
Since I'm somewhat familiar with Qt signals/slots, I found that boost::signals offers a similar mechanism (I'm already using boost::asio and have no Qt in this project), so I decided to implement a remoteDeviceDisconnected signal to be emitted by TcpDevice's onRemoteDisconnect, and a corresponding slot in SessionManager which would then delete the disconnected session and device from the container.
To initially try it out I declared the signal as a public member of TcpDevice in tcpdevice.hpp:
class TcpDevice
{
    (...)
public:
    boost::signal <void ()> remoteDeviceDisconnected;
    (...)
};
Then I emitted it from TcpDevice's onRemoteDisconnect method like this:
remoteDeviceDisconnected();
Now, is there any way to connect this signal to my SessionManager slot from inside session manager? I tried this:
unsigned int SessionManager::createSession(TcpDevice* device)
{
    unsigned int session_id = session_counter++;
    boost::mutex::scoped_lock lock(sessions_mutex);
    sessions.push_back(new TcpSession(device, session_id));
    device->remoteDeviceDisconnected.connect(boost::bind(&SessionManager::removeDeadSessionSlot, this));
    return session_id;
}
It compiles fine but at link time it complains of multiple definitions of remoteDeviceDisconnected in several object code files:
tcpsession.cpp.o:(.bss+0x0): multiple definition of `remoteDeviceDisconnected'
tcpdevice.cpp.o: (.bss+0x0): first defined here
sessionmanager.cpp.o:(.bss+0x0): multiple definition of `remoteDeviceDisconnected'
tcpdevice.cpp.o: (.bss+0x0): first defined here
I found this strange, since I didn't redefine the signal anywhere; I only used it in the createSession method above.
Any tips would be greatly appreciated! Thank you!

My bad! As we all should expect, the linker was right... there was indeed a second definition. I just couldn't spot it right away because it wasn't defined in any of my classes; it was just "floating" around in one of my .cpp files, like the ones found in the boost::signals examples.
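For anyone hitting the same linker error, here is a minimal, hypothetical repro of that kind of stray definition (the file layout is illustrative, not my actual project):

// Hypothetical repro: a signal *defined* at namespace scope in a file that
// ends up compiled into several translation units (e.g. a header included by
// tcpdevice.cpp, tcpsession.cpp and sessionmanager.cpp, or a leftover copied
// from the boost::signals examples).
#include <boost/signal.hpp>

boost::signal<void ()> remoteDeviceDisconnected;  // one copy per object file -> "multiple definition"

// Keeping the signal as a class member (as in the question), or declaring it
// 'extern' here and defining it in exactly one .cpp, avoids the error.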
Just for the record, the initial idea worked like a charm: when a given TcpDevice gets disconnected from the remote end, it emits the remoteDeviceDisconnected signal, which is caught by the SessionManager object holding the TcpSession instance that points to that TcpDevice. Once notified, SessionManager's removeDeadSessionSlot method runs, iterating through the sessions ptr_list container and removing the one that was disconnected:
void SessionManager::removeDeadSessionSlot()
{
    boost::mutex::scoped_lock lock(sessions_mutex);
    TcpSession_ptr_list_it it = sessions.begin();
    while (it != sessions.end()) {
        if (!(*it).device->isConnected())
            it = sessions.erase(it);
        else
            ++it;
    }
}
Hope that may serve as a reference to somebody!

Related

Instantiating boost::beast in dynamic library causes a crash

I'm trying to implement a very simple, local HTTP server for my C++ application; I'm using Xcode on macOS. I have to implement it from within a dynamically loaded library rather than the "main" thread of the program. I decided to try boost::beast since another part of the application already uses Boost libraries. I'm trying to implement this example, but within the context of my library and not as part of its main program.
The host application for this library calls the following function to start a localhost server, but it crashes when instantiating "acceptor":
extern "C" BASICEXTERNALOBJECT_API long startLocalhost(TaggedData* argv, long argc, TaggedData * retval) {
try {
string status;
retval->type = kTypeString;
auto const address = net::ip::make_address("127.0.0.1");
unsigned short port = static_cast<unsigned short>(std::atoi("1337"));
net::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}}; // <-- crashes on this line
tcp::socket socket{ioc};
http_server(acceptor, socket);
ioc.run();
status = "{'status':'ok', 'message':'localhost server started!'}";
retval->data.string = getNewBuffer(status);
}
catch(std::exception const& e)
{
string status;
//err_msg = "Error: " << e.what() << std::endl;
status = "{'status':'fail', 'message':'Error starting web server'}";
retval->data.string = getNewBuffer(status);
}
return kESErrOK;
}
When stepping through the code, I see that Xcode reports an error when the line with tcp::acceptor ... is executed:
Thread 1: EXC_BAD_ACCESS (code=1, address=0x783c0a3e3f22650c)
and execution halts at a single line of code in a function in scheduler.h:
// Get the concurrency hint that was used to initialize the scheduler.
int concurrency_hint() const
{
    return concurrency_hint_; // Xcode halts here
}
I'm debating whether I should use a different C++ web server, like Drogon, instead of boost::beast, but I thought I would post here to see if anybody has any insight into why the crash is happening in this case.
Update
I found a fix that works around my particular circumstances; hopefully it can help others running into this issue.
The address of the service_registry::create static factory method resolves correctly when I add ASIO_DECL in front of the method's declaration in asio/detail/service_registry.hpp.
It should look like this:
// Factory function for creating a service instance.
template <typename Service, typename Owner>
ASIO_DECL static execution_context::service* create(void* owner);
By adding ASIO_DECL in front of it, the address resolves correctly and the scheduler and kqueue_reactor objects initialize properly, avoiding the bad access in concurrency_hint().
In my case I am trying to use non-Boost ASIO inside a VST3 audio plug-in running in Ableton Live 11 on macOS on an M1 processor. Using the VST3 plug-in there, I'm getting this same crash. Using the same plug-in in other DAW applications, such as Reaper, does not cause the crash, and it also does not occur in Ableton Live 11 on Windows.
I've got it narrowed down to the following issue:
In asio/detail/impl/service_registry.hpp, the following method takes the address of a create/factory method and passes it on:
template <typename Service>
Service& service_registry::use_service(io_context& owner)
{
    execution_context::service::key key;
    init_key<Service>(key, 0);
    factory_type factory = &service_registry::create<Service, io_context>;
    return *static_cast<Service*>(do_use_service(key, factory, &owner));
}
Specifically, this line: factory_type factory = &service_registry::create<Service, io_context>;
When debugging in Xcode in the hosts that work, inspecting factory shows the correct address of the service_registry::create<Service, io_context> static method.
However, in Ableton Live 11 it doesn't point to anything - somehow the address of the static method does not resolve correctly. This causes a cascade of issues, ultimately leading up to invoking the factory function pointer in asio/asio/detail/impl/service_registry.ipp, in the method service_registry::do_use_service. Since the pointer doesn't refer to a proper create method, nothing is created, which results in uninitialized objects, including the scheduler instance.
Therefore, when scheduler_.concurrency_hint() is called in kqueue_reactor.ipp, the scheduler is uninitialized and the EXC_BAD_ACCESS error results.
It's unclear to me why dynamically loading the plug-in fails to resolve the static method address under some host processes while others have no problem. In my case I compiled standalone ASIO (asio.hpp) into the plug-in directly; there was no separate linking.
The best guesses I can come up with are:
Maybe your http_server starts additional threads or even forks. This might cause io_context and friends to be accessed after startLocalhost has returned. To explain why the crash appears at the indicated line, I would add the heuristic that something is already off during the destructor of ioc.
The only other idea I have is that the opening/binding of the acceptor actually throws, but due to possible incompatibilities between the types in the shared module and in the main program, the thrown exception is not caught and causes abnormal termination. This can happen more easily if the main program also uses Boost libraries, but a different copy (build/version) of them.
In that case there's a simple thing you can do: split up the initialization and use the overloads that take an error_code instead.
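A minimal sketch of that split, assuming the net/tcp aliases and the ioc, address and port variables from the question's startLocalhost:

boost::system::error_code ec;
tcp::acceptor acceptor{ioc};

// Each step reports failure through 'ec' instead of throwing, so no
// exception has to cross the module boundary.
acceptor.open(tcp::v4(), ec);
if (!ec) acceptor.set_option(net::socket_base::reuse_address(true), ec);
if (!ec) acceptor.bind({address, port}, ec);
if (!ec) acceptor.listen(net::socket_base::max_listen_connections, ec);

if (ec)
{
    // build the failure JSON from ec.message(), just like the catch block does now
}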

UWP: How to download files when the app is in suspended mode

There is a queue of links to files to download. I'm trying to find a way to continue downloading when the application goes into suspended mode.
According to the official Microsoft documentation, the suitable class for this is BackgroundDownloader, but it handles only one download at a time. It seems wrong to call the CreateDownload() method in a loop for every link without waiting for the previous downloads to complete, doesn't it?
More logical, in my opinion, is using an in-process background task. I see it this way:
Implement the Run(IBackgroundTaskInstance) method of the IBackgroundTask interface (it should stay alive even when the app is suspended, right?)
Using a custom event, transmit the queue to the implemented method
Inside the Run(IBackgroundTaskInstance) method, use BackgroundDownloader (executing one instance at a time)
But I'm stuck even with a simple implementation that downloads one file. Below is my Run(IBackgroundTaskInstance) method implementation:
void Task::DownloaderTask::Run(IBackgroundTaskInstance ^ taskInstance)
{
    TaskDeferral = taskInstance->GetDeferral();
    std::wstring filename = L"Pleiades_large.jpg";
    Uri^ uri = ref new Uri(ref new Platform::String(L"https://upload.wikimedia.org/wikipedia/commons/4/4e/Pleiades_large.jpg"));
    Concurrency::create_task(KnownFolders::GetFolderForUserAsync(nullptr, KnownFolderId::PicturesLibrary))
        .then([this, filename, uri](StorageFolder^ picturesLibrary)
    {
        return picturesLibrary->CreateFileAsync(ref new Platform::String(filename.c_str()), CreationCollisionOption::GenerateUniqueName);
    }).then([this, filename, uri](StorageFile^ destinationFile) {
        BackgroundDownloader^ downloader = ref new BackgroundDownloader();
        DownloadOperation^ download = downloader->CreateDownload(uri, destinationFile);
        download->StartAsync();
    }).then([this](Concurrency::task<void> previousTask)
    {
        try
        {
            previousTask.get();
            TaskDeferral->Complete();
        }
        catch (Platform::Exception^ ex)
        {
            wchar_t buffer[1024];
            swprintf_s(buffer, L"Exception: %s", ex->Message->Data());
            OutputDebugString(buffer);
        }
    });
}
The code above only creates an empty file, but the same code without the background task works correctly. I didn't find any restrictions on using BackgroundDownloader inside a BackgroundTask.
So, my questions are:
Is this the right way to use a BackgroundTask?
Is there another approach to solving the problem?
Is this problem solvable at all?
I've found the cause of the unexpected behavior:
The line of code TaskDeferral->Complete(); was originally at the end of the method, while it should be at the end of the async chain.
Therefore, the initial implementation (published in the question) is correct.
All that had to be done was to rebuild the project.
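A minimal sketch of the difference described above (C++/CX; the names follow the question, and the real download chain is replaced by a trivial task):

void Task::DownloaderTask::Run(IBackgroundTaskInstance^ taskInstance)
{
    TaskDeferral = taskInstance->GetDeferral();

    Concurrency::create_task([]() { /* start the download chain here */ })
        .then([this](Concurrency::task<void> previousTask)
    {
        previousTask.get();
        TaskDeferral->Complete();   // correct: complete only after the async chain finishes
    });

    // Wrong: completing the deferral here tells the OS the background task is
    // done while the chain above may still be running.
    // TaskDeferral->Complete();
}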

How can I create a persistent list of globally accessible objects queried from threads?

I've been trying to get a persistent object from a thread for hours.
I want to write a shared library in C++ that starts a persistent loop in a function.
In the following code snippets there is a class called Process. Process initializes a TCP/IP interface to read and write data from a Simulink model.
This is only background and should not be important for this problem, but now you know what I'm talking about when I mention the processes.
main.cpp
I know, it looks kinda ugly/unprofessional, but I'm fairly new to C++..
// frustrated attempt to make everything persistent
static vector<std::thread> processThreads;
static ProcessHandle processHandle;
static vector<std::promise<Process>> promiseProcess;
static vector<std::future<Process>> futureProcess;
EXPORT int initializeProcessLoop(const char *host, int port)
{
    std::promise<Process> promiseObj;
    futureProcess.push_back(std::future<Process>(promiseObj.get_future()));
    processThreads.push_back(std::thread(&ProcessHandle::addProcess, processHandle, host, port, &promiseProcess[0]));
    Process val = futureProcess[0].get();
    processHandle.handleList.push_back(val);
    return (processHandle.handleList.size() - 1);
}
ProcessHandle.cpp
The addProcess function from ProcessHandle creates the Process that should be persistent, adds it to a static vector member of ProcessHandle and passes the promise to the execution loop.
int ProcessHandle::addProcess(const char *address, int port, std::promise<Process> * promiseObj) {
    Process process(address, port);
    handleList.push_back(process);
    handleList[handleList.size() - 1].exec(promiseObj);
    return handleList.size() - 1;
}
To the main problem now...
If I change "initializeProcessLoop" to include:
if (processHandle.handleList[0].isConnected())
{
    processHandle.handleList[0].poll("/Compare To Constant/const");
}
after I've pushed "val" to processHandle.handleList, everything works fine and I can poll the data as it should be.
If I instead poll it from - for example - the main function, the loop crashes inside "initializeProcessLoop" because "Process val" is reassigned (?) by futureProcess[0].get().
How can I get the Process variable and the threaded loop to be consistent after the function returns?
If there are any questions to the code (and I bet there will be), feel free to ask. Thanks in advance!
PS: Obligatory "English is not my native language, please excuse any spelling errors or gibberish"...
Okay, first I have to state that the coding style above and below is by no means best practice.
While Sam Varshavchik is still right about how to learn C++ the right way, just changing
Process val = futureProcess[0].get();
to
static Process val = futureProcess[0].get();
did the job.
To be clear: don't do this. It's a quick fix but it will backfire in the future. But I hope that it'll help anyone with a similar problem.
If anyone has a better solution (it can't get any worse, can it?), feel free to add your answer to this question.

Spawning new async request from an asio handler

I'm trying to get my feet wet with ASIO and thought a good first project would be a simple web crawler: download an HTML page, find the links in it, download all the links.
I have tried modifying the ASIO HTTP client example to use enable_shared_from_this instead of a raw pointer so that I can spawn a new async task from within the handler of the previous task without having to worry about the resources getting deleted in the middle of my work.
The problems started when I tried to subclass my client to handle different pages in different ways. The compiler complains that the type of the shared_ptr doesn't match the type of this.
Does anybody know how this is solved? I haven't been able to figure it out by myself.
This is unrelated to Asio.
If your base class inherits from enable_shared_from_this but you need the pointer in the derived one, use boost::static_pointer_cast:
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>

struct base : boost::enable_shared_from_this<base>
{
};

struct derived : base
{
    boost::shared_ptr<derived> shared_from_derived()
    {
        // shared_from_this() returns shared_ptr<base>; downcast it to the
        // derived type (the object is known to really be a 'derived').
        return boost::static_pointer_cast<derived>(shared_from_this());
    }
};
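As a usage sketch of the same pattern in an Asio handler chain (all of the names here - client, html_client, the socket and the handlers - are hypothetical, not the asker's actual code):

#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>
#include <boost/shared_ptr.hpp>

// Base client in the style of the Asio example code.
struct client : boost::enable_shared_from_this<client>
{
    explicit client(boost::asio::io_service& io) : socket_(io) {}
    virtual ~client() {}

    boost::asio::ip::tcp::socket socket_;
    boost::asio::streambuf response_;
};

// Specialised client: it casts the shared_ptr back to its own type before
// binding it into handlers, so its own member functions can be used.
struct html_client : client
{
    explicit html_client(boost::asio::io_service& io) : client(io) {}

    boost::shared_ptr<html_client> shared_from_derived()
    {
        return boost::static_pointer_cast<html_client>(shared_from_this());
    }

    void start_read()
    {
        // Binding shared_from_derived() (not 'this') keeps the object alive
        // until the pending operation and its handler have finished.
        boost::asio::async_read_until(socket_, response_, "\r\n\r\n",
            boost::bind(&html_client::handle_headers, shared_from_derived(),
                        boost::asio::placeholders::error));
    }

    void handle_headers(const boost::system::error_code& ec)
    {
        if (!ec)
            start_read();   // safe to spawn the next async request from here
    }
};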

unable to successfully call function in dynamically loaded plugin in c++

I've successfully loaded a C++ plugin using a custom plugin loader class. Each plugin has an extern "C" create_instance function that returns a new instance using "new".
A plugin is an abstract class with a few non-virtual functions and several protected variables (std::vector<std::string> refList being one of them).
The plugin_loader class successfully loads the plugin and even calls a virtual method on the loaded class (namely "std::string plugin::getName()").
The main function creates an instance of "host" which contains a vector of reference counted smart pointers, refptr, to the class "plugin". Then, main creates an instance of plugin_loader which actually does the dlopen/dlsym, and creates an instance of refptr passing create_instance() to it. Finally, it passes the created refptr back to host's addPlugin function. host::addPlugin successfully calls several functions on the passed plugin instance and finally adds it to a vector<refptr<plugin> >.
The main function then subscribes to several Apple events and calls RunApplicationEventLoop(). The event callback decodes the result and then calls a function in host, host::sendToPlugin, that identifies the plugin the event is intended for and then calls the handler in the plugin. It's at this point that things stop working.
host::sendToPlugin reads the result and determines the plugin to send the event off to.
I'm using an extremely basic plugin created as a debugging plugin that returns static values for every non-void function.
Any call on any virtual function in plugin in the vector causes a bad access exception. I've tried replacing the refptrs with regular pointers and also boost::shared_ptrs and I keep getting the same exception. I know that the plugin instance is valid as I can examine the instance in Xcode's debugger and even view the items in the plugin's refList.
I think it might be a threading problem because the plugins were created in the main thread while the callback is operating in a separate thread. I think things are still running in the main thread, judging by the backtrace when the program hits the error, but I don't know Apple's implementation of RunApplicationEventLoop so I can't be sure.
Any ideas as to why this is happening?
class plugin
{
public:
    virtual std::string getName();
protected:
    std::vector<std::string> refList;
};
and the pluginLoader class:
template<typename T> class pluginLoader
{
public:
    pluginLoader(std::string path);
    // initializes private mPath string with path to dylib

    bool open();
    // opens the dylib and looks up the createInstance function. Returns true if successful, false otherwise

    T * create_instance();
    // Returns a new instance of T, NULL if unsuccessful
};
class host
{
public:
    void addPlugin(int id, plugin * plug);
    void sendToPlugin(); // this is the problem method
    static host * me;
private:
    std::vector<plugin *> plugins; // or vector<shared_ptr<plugin> > or vector<refptr<plugin> >
};
Apple event code from host.cpp:
host * host::me;

pascal OSErr HandleSpeechDoneAppleEvent(const AppleEvent *theAEevt, AppleEvent *reply, SRefCon refcon) {
    // this is all boilerplate taken straight from an apple sample except for the host::me->ae_callback line
    OSErr status = 0;
    Result result = 0;
    // get the result
    if (!status) {
        host::me->ae_callback(result);
    }
    return status;
}

void host::ae_callback(Result result) {
    OSErr err;
    // again, boilerplate apple code
    // grab information from result
    if (!err)
        sendToPlugin();
}

void host::sendToPlugin() {
    // calling *any* method in plugin results in failure regardless of what I do
}
EDIT: This is being run on OSX 10.5.8 and I'm using GCC 4.0 with Xcode. This is not designed to be a cross platform app.
EDIT: To be clear, the plugin works up until the Apple-supplied event loop calls my callback function. Things stop working when the callback function calls back into host. This is the problem I'm having; everything else up to that point works.
Without seeing all of your code it isn't going to be easy to work out exactly what is going wrong. Some things to look at:
Make sure that the linker isn't throwing anything away. On gcc try the link option -Wl,-E -- we use this on Linux, but don't seem to have found a need for it on the Macs.
Make sure that you're not accidentally unloading the dynamic library before you've finished with it. RAII doesn't work for unloading dynamic libraries unless you also stop exceptions at the dynamic library border.
You may want to examine our plug in library which works on Linux, Macs and Windows. The dynamic loading code (along with a load of other library stuff) is available at http://svn.felspar.com/public/fost-base/trunk/
We don't use the dlsym mechanism -- it's kind of hard to use properly (and portably). Instead we create a library of plugins by name and put what are basically factories in there. You can examine how this works by looking at the way that .so's with test suites can be dynamically loaded. An example loader is at http://svn.felspar.com/public/fost-base/trunk/fost-base/Cpp/fost-ftest/ftest.cpp and the test suite registration is in http://svn.felspar.com/public/fost-base/trunk/fost-base/Cpp/fost-test/testsuite.cpp -- the threadsafe_store holds the factories by name and the suite constructor registers the factory.
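A minimal sketch of that "factories by name" idea (the names here are illustrative, not the actual fost-base API):

#include <map>
#include <string>

class plugin;

typedef plugin *(*plugin_factory)();

// Registry mapping plugin names to factory functions. A real implementation
// (like the threadsafe_store mentioned above) would guard this with a mutex.
std::map<std::string, plugin_factory> &factory_registry()
{
    static std::map<std::string, plugin_factory> registry;
    return registry;
}

// Each plugin library defines a static instance of this; its constructor runs
// when the library is loaded, registering the factory by name.
struct plugin_registrar
{
    plugin_registrar(const std::string &name, plugin_factory factory)
    {
        factory_registry()[name] = factory;
    }
};

// In a plugin .so (hypothetical):
//     static plugin_registrar reg("debug_plugin", &create_debug_plugin);
// After loading the library, the host simply calls
//     plugin *p = factory_registry()["debug_plugin"]();
// with no dlsym lookup needed.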
I completely missed the fact that I was calling dlclose in my plugin_loader's dtor, and for some reason the plugins were getting destructed between the RunApplicationEventLoop call and the call to sendToPlugin. I removed dlclose and things work now.
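A sketch of one way to arrange that: keep the library mapped for as long as the loader (and therefore its plugins) lives, and only dlclose when the loader itself is destroyed. The member names here are assumptions, not the actual pluginLoader internals.

#include <dlfcn.h>
#include <string>

template <typename T>
class pluginLoader
{
public:
    explicit pluginLoader(std::string path) : mPath(path), mHandle(0), mCreate(0) {}

    // Close the library only when the loader itself goes away, after the host
    // has already destroyed every plugin instance created from it.
    ~pluginLoader()
    {
        if (mHandle)
            dlclose(mHandle);
    }

    bool open()
    {
        mHandle = dlopen(mPath.c_str(), RTLD_NOW);
        if (!mHandle)
            return false;
        mCreate = reinterpret_cast<T *(*)()>(dlsym(mHandle, "create_instance"));
        return mCreate != 0;
    }

    T * create_instance()
    {
        return mCreate ? mCreate() : 0;
    }

private:
    std::string mPath;
    void *mHandle;
    T *(*mCreate)();
};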