I'm trying to implement a very simple local HTTP server for my C++ application; I'm using Xcode on macOS. I have to implement it from within a dynamically loaded library rather than the "main" thread of the program. I decided to try boost::beast, since another part of the application already uses Boost libraries. I'm trying to implement this example, but within the context of my library rather than as part of its main program.
The host application for this library calls the following function to start a localhost server, but crashes when instantiating "acceptor":
extern "C" BASICEXTERNALOBJECT_API long startLocalhost(TaggedData* argv, long argc, TaggedData * retval) {
try {
string status;
retval->type = kTypeString;
auto const address = net::ip::make_address("127.0.0.1");
unsigned short port = static_cast<unsigned short>(std::atoi("1337"));
net::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}}; // <-- crashes on this line
tcp::socket socket{ioc};
http_server(acceptor, socket);
ioc.run();
status = "{'status':'ok', 'message':'localhost server started!'}";
retval->data.string = getNewBuffer(status);
}
catch(std::exception const& e)
{
string status;
//err_msg = "Error: " << e.what() << std::endl;
status = "{'status':'fail', 'message':'Error starting web server'}";
retval->data.string = getNewBuffer(status);
}
return kESErrOK;
}
When stepping through the code, I see that Xcode reports an error when the line with tcp::acceptor ... is executed:
Thread 1: EXC_BAD_ACCESS (code=1, address=0x783c0a3e3f22650c)
and execution halts at the single line of code in a function in scheduler.h:
// Get the concurrency hint that was used to initialize the scheduler.
int concurrency_hint() const
{
    return concurrency_hint_; // Xcode halts here
}
I'm debating whether I should switch to a different C++ web server, like Drogon, instead of boost::beast, but I thought I would post here to see if anybody has any insight into why the crash is happening in this case.
Update
I found a fix that is a workaround for my particular circumstances, hopefully it can help others running into this issue.
The address of the service_registry::create static factory method resolves correctly when I add ASIO_DECL in front of the method's declaration in asio/detail/service_registry.hpp.
It should look like this:
// Factory function for creating a service instance.
template <typename Service, typename Owner>
ASIO_DECL static execution_context::service* create(void* owner);
By adding ASIO_DECL in front of it, the address resolves correctly, and the scheduler and kqueue_reactor objects initialize properly, avoiding the bad access in concurrency_hint().
In my case I am trying to use non-Boost ASIO inside a VST3 audio plug-in running in Ableton Live 11 on macOS on an M1 processor. When the plug-in is loaded there, I get this same crash. Using the same plug-in in other DAW applications, such as Reaper, does not cause the crash. It also does not occur in Ableton Live 11 on Windows.
I've got it narrowed down to the following issue:
In asio/detail/impl/service_registry.hpp, the following method attempts to return a function pointer to a create/factory method.
template <typename Service>
Service& service_registry::use_service(io_context& owner)
{
    execution_context::service::key key;
    init_key<Service>(key, 0);
    factory_type factory = &service_registry::create<Service, io_context>;
    return *static_cast<Service*>(do_use_service(key, factory, &owner));
}
Specifically, this line: factory_type factory = &service_registry::create<Service, io_context>;
When debugging in Xcode in the hosts that work, inspecting factory shows the correct address, pointing to the service_registry::create<Service, io_context> static method.
However, in Ableton Live 11 it doesn't point to anything; somehow the address of the static method does not resolve correctly. This causes a cascade of issues, ultimately leading to the factory function pointer being invoked in asio/asio/detail/impl/service_registry.ipp in the method service_registry::do_use_service. Since it doesn't point to a proper create method, nothing is created, and the result is uninitialized objects, including the scheduler instance.
Therefore, when scheduler_.concurrency_hint() is called in kqueue_reactor.ipp, the scheduler is uninitialized and the EXC_BAD_ACCESS error results.
It's unclear to me why dynamically loading the plug-in fails to resolve the static method's address under some host processes while others have no problem. In my case I compiled standalone ASIO (asio.hpp) directly into the plug-in; there was no linking.
The best guesses I can come up with are:
Maybe your http_server starts additional threads or even forks. This might cause io_context and friends to be accessed after startLocalhost has returned. To explain the crash location appearing to be at the indicated line, I would add the heuristic that something is already off during the destructor for ioc.
The only other idea I have is that the opening/binding of the acceptor actually throws, but due to possible incompatibilities of types in the shared module vs. the main program, the thrown exception is not actually caught and causes abnormal termination. This might happen more easily if the main program also uses Boost libraries, but a different copy (build/version) of them.
In that case there's a simple thing you can do: split up the initialization and use the overloads that take an error_code instead of the throwing ones, as in the sketch below.
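A minimal sketch of that split-up initialization, assuming the same net/tcp aliases and status variable as the question's code (untested; error handling reduced to a status string):

boost::system::error_code ec;
tcp::acceptor acceptor{ioc};
acceptor.open(tcp::v4(), ec);
if (!ec) acceptor.bind({address, port}, ec);
if (!ec) acceptor.listen(net::socket_base::max_listen_connections, ec);
if (ec) {
    // no exception has to cross the library boundary; report ec.message() instead
    status = "{'status':'fail', 'message':'" + ec.message() + "'}";
}

This way a failure to open, bind, or listen is reported through ec rather than a thrown exception, which sidesteps the suspected exception-type mismatch across the module boundary.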
Related
I've been trying to get a persistent object from a thread for hours.
I want to write a shared library in C++ that starts a persistent loop in a function.
In the following code snippets there is a class called Process. Process initializes a TCP/IP interface to read and write data from a Simulink model.
This is only background and should not be important for this problem, but now you know what I'm talking about when I mention the processes.
main.cpp
I know it looks kinda ugly/unprofessional, but I'm fairly new to C++...
// frustrated attempt to make everything persistent
static vector<std::thread> processThreads;
static ProcessHandle processHandle;
static vector<std::promise<Process>> promiseProcess;
static vector<std::future<Process>> futureProcess;

EXPORT int initializeProcessLoop(const char *host, int port)
{
    std::promise<Process> promiseObj;
    futureProcess.push_back(std::future<Process>(promiseObj.get_future()));
    processThreads.push_back(std::thread(&ProcessHandle::addProcess, processHandle, host, port, &promiseProcess[0]));

    Process val = futureProcess[0].get();
    processHandle.handleList.push_back(val);

    return (processHandle.handleList.size() - 1);
}
ProcessHandle.cpp
The addProcess function from ProcessHandle creates the Process that should be persistent, adds it to a static vector member of ProcessHandle and passes the promise to the execution loop.
int ProcessHandle::addProcess(const char *address, int port, std::promise<Process> *promiseObj) {
    Process process(address, port);
    handleList.push_back(process);
    handleList[handleList.size() - 1].exec(promiseObj);
    return handleList.size() - 1;
}
To the main problem now...
If I change "initializeProcessLoop" to include:
if (processHandle.handleList[0].isConnected())
{
    processHandle.handleList[0].poll("/Compare To Constant/const");
}
after I've pushed "val" to the processHandle.handleList, everything works fine and I can poll the data as it should be.
If I instead poll it from, for example, the main function, the loop crashes inside "initializeProcessLoop" because "Process val" is reassigned (?) by futureProcess[0].get().
How can I get the Process variable and the threaded loop to be consistent after the function returns?
If there are any questions to the code (and I bet there will be), feel free to ask. Thanks in advance!
PS: Obligatory "English is not my native language, please excuse any spelling errors or gibberish"...
Okay, first I have to declare that the coding style above and following is by no means best practice.
While Sam Varshavchik is still right about how to learn C++ the right way, just changing
Process val = futureProcess[0].get();
to
static Process val = futureProcess[0].get();
did the job.
To be clear: don't do this. It's a quick fix but it will backfire in the future. But I hope that it'll help anyone with a similar problem.
If anyone has a better solution (it can't get any worse, can it?), feel free to add your answer to this question.
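For anyone after something slightly less fragile, here is a hedged sketch of one alternative (untested, reusing the asker's names; assumes Process is movable). It addresses two problems in the original: &promiseProcess[0] was taken while promiseProcess was still empty, and the thread received a copy of processHandle rather than the handle itself. The Process ends up in stable heap storage, so no function-local value has to outlive the call:

#include <memory> // std::unique_ptr, std::make_unique

static vector<std::unique_ptr<Process>> processes;

EXPORT int initializeProcessLoop(const char *host, int port)
{
    promiseProcess.emplace_back(); // create the promise before taking its address
    futureProcess.push_back(promiseProcess.back().get_future());

    processThreads.push_back(std::thread(&ProcessHandle::addProcess, &processHandle,
                                         host, port, &promiseProcess.back()));

    // move the produced Process into heap storage whose address never changes
    processes.push_back(std::make_unique<Process>(futureProcess.back().get()));
    return static_cast<int>(processes.size() - 1);
}

Note that &promiseProcess.back() is only safe as long as nothing reallocates that vector while the thread is running; a std::deque instead of a vector would remove that caveat.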
I'm using gSOAP under Linux in one of my projects, and I have a problem when using the server for a pretty long time (actually not very long; I get this error after something like 10 hours...). I followed the example given some time ago here for multithreading in gSOAP. I create a soap service, then use the copy method and pass it to a thread. The thread function is something like this:
void MyClass::SoapServer(myservice::Service* soapService)
{
    int res = soapService->serve();
    if (res != SOAP_OK)
    {
        // log error
    }
    soapService->destroy();
    soap_free(soapService);
}
After a few hours, when there is a constant poller that calls SOAP functions, I get a segmentation fault in the gSOAP copy function. Below I attach the code that accepts the connection and creates the thread.
while (true)
{
    int error = mySoapService.accept();
    if (!soap_valid_socket(error))
    {
        // error
    }
    else
    {
        myservice::Service *soapServiceCopy = NULL;
        soapServiceCopy = mySoapService.copy();
        // create thread using the SoapServer function
        // and pass soapServiceCopy as an argument
    }
}
It seems to me that the soap service clean up is correctly performed, is there anything I'm missing?
Thanks
The difference between your code and my example that you link to is that you use soap_free() to free the soapService object, while my example uses delete. Changing my example code to use soap_free() and then running it under valgrind leads to free/delete/delete[] mismatches being reported, which makes me think that soap_free() is built on top of free() while the copy() method uses new to create the copy.
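Concretely, the thread function from the question would become something like this (a sketch of that one-line change, not verified against your gSOAP version):

void MyClass::SoapServer(myservice::Service* soapService)
{
    int res = soapService->serve();
    if (res != SOAP_OK)
    {
        // log error
    }
    soapService->destroy(); // free deserialized data, as before
    delete soapService;     // matches the new used inside copy()
}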
I have a TcpDevice class which encapsulates a TCP connection and has an onRemoteDisconnect method which gets called whenever the remote end hangs up. Then there's a SessionManager object which creates TcpSession objects, which take a TcpDevice as a communication channel, and inserts them in an internal pointer container for the application to use. In case any of the managed TcpSessions should end, I would like the SessionManager instance to be notified about it and then remove the corresponding session from the container, freeing up the resources associated with it.
I found my problem to be very similar to this question:
Object delete itself from container
but since he has a thread for checking the connections' state, his case is a little different from mine and from the way I intended to solve it using boost::signals, so I decided to go for a new question geared towards that. I apologize if it's the wrong way to do it... I'm still getting a feel for how to properly use S.O. :)
Since I'm kind of familiar with QT signals/slots, I found boost::signals offers a similar mechanism (I'm already using boost::asio and have no QT in this project), so I decided to implement a remoteDeviceDisconnected signal to be emitted by TcpDevice's onRemoteDisconnect, and for which I would have a slot in SessionManager, which would then delete the disconnected session and device from the container.
To initially try it out I declared the signal as a public member of TcpDevice in tcpdevice.hpp:
class TcpDevice
{
    (...)
public:
    boost::signal<void ()> remoteDeviceDisconnected;
    (...)
};
Then I emitted it from TcpDevice's onRemoteDisconnect method like this:
remoteDeviceDisconnected();
Now, is there any way to connect this signal to my SessionManager slot from inside session manager? I tried this:
unsigned int SessionManager::createSession(TcpDevice* device)
{
    unsigned int session_id = session_counter++;

    boost::mutex::scoped_lock lock(sessions_mutex);
    sessions.push_back(new TcpSession(device, session_id));
    device->remoteDeviceDisconnected.connect(boost::bind(&SessionManager::removeDeadSessionSlot, this));

    return session_id;
}
It compiles fine, but at link time it complains of multiple definitions of remoteDeviceDisconnected in several object files:
tcpsession.cpp.o:(.bss+0x0): multiple definition of `remoteDeviceDisconnected'
tcpdevice.cpp.o: (.bss+0x0): first defined here
sessionmanager.cpp.o:(.bss+0x0): multiple definition of `remoteDeviceDisconnected'
tcpdevice.cpp.o: (.bss+0x0): first defined here
I found this strange, since I didn't redefine the signal anywhere, but just used it at the createSession method above.
Any tips would be greatly appreciated! Thank you!
My bad! As we all should expect, the linker was right... there was indeed a second definition. I just couldn't spot it right away because it wasn't defined by any of my classes, but was just "floating" around in one of my .cpp files, like those found in the boost::signals examples.
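For the record, the offending definition presumably looked something like the line below, sitting at file scope (copied from a boost::signals example) in code shared between translation units, hence the duplicate .bss symbols in the linker output. This is a reconstruction, not the actual line:

boost::signal<void ()> remoteDeviceDisconnected; // a second, global definition of the same name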
Just for the record, the initial idea worked like a charm: when a given TcpDevice gets disconnected from the remote end, it emits the remoteDeviceDisconnected signal, which is then caught by the SessionManager object which holds the TcpSession instance that points to that TcpDevice. Once notified, SessionManager's method removeDeadSessionSlot gets executed, iterating through the sessions ptr_list container and removing the one which was disconnected:
void SessionManager::removeDeadSessionSlot()
{
    boost::mutex::scoped_lock lock(sessions_mutex);

    TcpSession_ptr_list_it it = sessions.begin();
    while (it != sessions.end()) {
        if (!(*it).device->isConnected())
            it = sessions.erase(it);
        else
            ++it;
    }
}
Hope that may serve as a reference to somebody!
Our unit tests fire off child processes, and sometimes these child processes crash. When this happens, a Windows Error Reporting dialog pops up, and the process stays alive until this is manually dismissed. This of course prevents the unit tests from ever terminating.
How can this be avoided?
Here's an example dialog in Win7 with the usual settings:
If I disable the AeDebug registry key, the JIT debugging option goes away:
If I disable checking for solutions (the only thing I seem to have control over via the control panel), the dialog looks different, but it still appears and still stops the program from dying until the user presses something. WerAddExcludedApplication is documented to also have this effect.
A summary from the answers by jdehaan and Eric Brown, as well as this question (see also this question):
N.B. These solutions may affect other error reporting as well, e.g. failure to load a DLL or open a file.
Option 1: Disable globally
Works globally on the entire user account or machine, which can be both a benefit and a drawback.
Set [HKLM|HKCU]\Software\Microsoft\Windows\Windows Error Reporting\DontShowUI to 1.
More info: WER settings.
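For reference, the same setting can be applied programmatically; a hedged sketch using the Win32 registry API (HKCU variant, error handling elided, link against Advapi32.lib):

#include <windows.h>

DWORD value = 1;
RegSetKeyValueW(HKEY_CURRENT_USER,
                L"Software\\Microsoft\\Windows\\Windows Error Reporting",
                L"DontShowUI", REG_DWORD, &value, sizeof(value));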
Option 2: Disable for the application
Requires modification to the crashing program, described in documentation as best practice, unsuitable for a library function.
Call SetErrorMode: SetErrorMode(SetErrorMode(0) | SEM_NOGPFAULTERRORBOX); (or with SEM_FAILCRITICALERRORS). More info: Disabling the program crash dialog (explains the odd arrangement of calls).
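The nesting is just a read-modify-write; unrolled, the same call looks like this:

UINT previous = SetErrorMode(0);                // returns the current mode
SetErrorMode(previous | SEM_NOGPFAULTERRORBOX); // keep existing flags, add ours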
Option 2a: Disable for a function
Requires modification to the crashing program, requires Windows 7/2008 R2 (desktop apps only) or higher, described in documentation as preferred to SetErrorMode, suitable for a thread-safe library function.
Call and reset SetThreadErrorMode:
DWORD OldThreadErrorMode = 0;
SetThreadErrorMode(SEM_FAILCRITICALERRORS, &OldThreadErrorMode);
…
SetThreadErrorMode(OldThreadErrorMode, NULL);
More info: not much available?
Option 3: Specify a handler
Requires modification to the crashing program.
Use SetUnhandledExceptionFilter to set your own structured exception handler that simply exits, probably with reporting and possibly an attempt at clean-up.
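A minimal sketch of such a filter, assuming an immediate exit is acceptable (any reporting or clean-up would go where the comment is):

#include <windows.h>
#include <cstdlib>

LONG WINAPI ExitingExceptionFilter(EXCEPTION_POINTERS* info)
{
    // report/clean up here, e.g. log info->ExceptionRecord->ExceptionCode
    _exit(3); // terminate right away, so WER never gets involved
    return EXCEPTION_EXECUTE_HANDLER; // not reached
}

// during start-up:
//   SetUnhandledExceptionFilter(ExitingExceptionFilter);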
Option 4: Catch as an exception
Requires modification to the crashing program. For .NET applications only.
Wrap all code into a global try/catch block. Specify the HandleProcessCorruptedStateExceptionsAttribute and possibly also the SecurityCriticalAttribute on the method catching the exceptions. More info: Handling corrupted state exceptions
Note: this might not catch crashes caused by the Managed Debugging Assistants; if so, these also need to be disabled in the application.
Option 5: Stop the reporting process
Works globally on the entire user account, but only for a controlled duration.
Kill the Windows Error Reporting process whenever it shows up:
var werKiller = new Thread(() =>
{
    while (true)
    {
        foreach (var proc in Process.GetProcessesByName("WerFault"))
            proc.Kill();
        Thread.Sleep(3000);
    }
});
werKiller.IsBackground = true;
werKiller.Start();
This is still not completely bullet-proof though, because a console application may crash with a different error message, apparently displayed by an internal function called NtRaiseHardError.
The only solution is to catch all exceptions at a very high level (for each thread) and terminate the application properly (or perform another action).
This is the only way to prevent the exception from escaping your app and activating WER.
Addition:
If the exception is something you do not expect to happen, you can use an AssertNoThrow (NUnit) or the like in another unit test framework to enclose the code firing the child processes. This way you would also get it into your unit test report. This is in my opinion the cleanest possible solution I can think of.
Addition2:
As the comments below show, I was mistaken: you cannot always catch the asynchronous exceptions; it depends on what the environment allows. In .NET some exceptions are prevented from being caught, which makes my idea worthless in this case...
For .NET: There are complicated workarounds involving the use of AppDomains, leading to an unload of an AppDomain instead of a crash of the whole application. Too bad...
http://www.bluebytesoftware.com/blog/PermaLink,guid,223970c3-e1cc-4b09-9d61-99e8c5fae470.aspx
http://www.develop.com/media/pdfs/developments_archive/AppDomains.pdf
EDIT:
I finally got it. With .NET 4.0 you can add the HandleProcessCorruptedStateExceptions attribute from System.Runtime.ExceptionServices to the method containing the try/catch block. This really worked! Maybe not recommended, but it works.
using System;
using System.Reflection;
using System.Runtime.InteropServices;
using System.Runtime.ExceptionServices;

namespace ExceptionCatching
{
    public class Test
    {
        public void StackOverflow()
        {
            StackOverflow();
        }

        public void CustomException()
        {
            throw new Exception();
        }

        public unsafe void AccessViolation()
        {
            byte b = *(byte*)(8762765876);
        }
    }

    class Program
    {
        [HandleProcessCorruptedStateExceptions]
        static void Main(string[] args)
        {
            Test test = new Test();
            try
            {
                //test.StackOverflow();
                test.AccessViolation();
                //test.CustomException();
            }
            catch
            {
                Console.WriteLine("Caught.");
            }
            Console.WriteLine("End of program");
        }
    }
}
Try setting
HKCU\Software\Microsoft\Windows\Windows Error Reporting\DontShowUI
to 1. (You can also set the same key in HKLM, but you need admin privs to do that.)
This should prevent WER from showing any UI.
I've successfully loaded a C++ plugin using a custom plugin loader class. Each plugin has an extern "C" create_instance function that returns a new instance using "new".
A plugin is an abstract class with a few non-virtual functions and several protected variables (std::vector<std::string> refList being one of them).
The plugin_loader class successfully loads the plugin and even calls a virtual method on the loaded class (namely "std::string plugin::getName()").
The main function creates an instance of "host" which contains a vector of reference counted smart pointers, refptr, to the class "plugin". Then, main creates an instance of plugin_loader which actually does the dlopen/dlsym, and creates an instance of refptr passing create_instance() to it. Finally, it passes the created refptr back to host's addPlugin function. host::addPlugin successfully calls several functions on the passed plugin instance and finally adds it to a vector<refptr<plugin> >.
The main function then subscribes to several Apple events and calls RunApplicationEventLoop(). The event callback decodes the result and then calls a function in host, host::sendToPlugin, that identifies the plugin the event is intended for and then calls the handler in the plugin. It's at this point that things stop working.
host::sendToPlugin reads the result and determines the plugin to send the event off to.
I'm using an extremely basic plugin created as a debugging plugin that returns static values for every non-void function.
Any call to any virtual function of a plugin in the vector causes a bad access exception. I've tried replacing the refptrs with regular pointers and also boost::shared_ptrs, and I keep getting the same exception. I know that the plugin instance is valid, as I can examine the instance in Xcode's debugger and even view the items in the plugin's refList.
I think it might be a threading problem because the plugins were created in the main thread while the callback is operating in a separate thread. I think things are still running in the main thread, judging by the backtrace when the program hits the error, but I don't know Apple's implementation of RunApplicationEventLoop so I can't be sure.
Any ideas as to why this is happening?
class plugin
{
public:
    virtual std::string getName();
protected:
    std::vector<std::string> refList;
};
and the pluginLoader class:
template <typename T>
class pluginLoader
{
public:
    pluginLoader(std::string path);
    // initializes private mPath string with path to dylib

    bool open();
    // opens the dylib and looks up the createInstance function. Returns true if successful, false otherwise

    T * create_instance();
    // Returns a new instance of T, NULL if unsuccessful
};
class host
{
public:
    void addPlugin(int id, plugin * plug);
    void sendToPlugin(); // this is the problem method
    static host * me;
private:
    std::vector<plugin *> plugins; // or vector<shared_ptr<plugin> > or vector<refptr<plugin> >
};
Apple event code from host.cpp:
host * host::me;

pascal OSErr HandleSpeechDoneAppleEvent(const AppleEvent *theAEevt, AppleEvent *reply, SRefCon refcon) {
    // this is all boilerplate taken straight from an apple sample except for the host::me->ae_callback line
    OSErr status = 0;
    Result result = 0;
    // get the result
    if (!status) {
        host::me->ae_callback(result);
    }
    return status;
}

void host::ae_callback(Result result) {
    OSErr err;
    // again, boilerplate apple code
    // grab information from result
    if (!err)
        sendToPlugin();
}

void host::sendToPlugin() {
    // calling *any* method in plugin results in failure regardless of what I do
}
EDIT: This is being run on OS X 10.5.8 and I'm using GCC 4.0 with Xcode. This is not designed to be a cross-platform app.
EDIT: To be clear, the plugin works up until the Apple-supplied event loop calls my callback function. When the callback function calls back into host is when things stop working. This is the problem I'm having, everything else up to that point works.
Without seeing all of your code it isn't going to be easy to work out exactly what is going wrong. Some things to look at:
Make sure that the linker isn't throwing anything away. On gcc, try the linker option -Wl,-E; we use this on Linux, but don't seem to have found a need for it on the Macs.
Make sure that you're not accidentally unloading the dynamic library before you've finished with it. RAII doesn't work for unloading dynamic libraries unless you also stop exceptions at the dynamic library border.
You may want to examine our plug in library which works on Linux, Macs and Windows. The dynamic loading code (along with a load of other library stuff) is available at http://svn.felspar.com/public/fost-base/trunk/
We don't use the dlsym mechanism; it's kind of hard to use properly (and portably). Instead we create a library of plugins by name and put what are basically factories in there. You can examine how this works by looking at the way that .so's with test suites can be dynamically loaded. An example loader is at http://svn.felspar.com/public/fost-base/trunk/fost-base/Cpp/fost-ftest/ftest.cpp and the test suite registration is in http://svn.felspar.com/public/fost-base/trunk/fost-base/Cpp/fost-test/testsuite.cpp The threadsafe_store holds the factories by name, and the suite constructor registers the factory.
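As a rough illustration of that "factories by name" idea (all names here are invented for the sketch; the linked library's real interfaces differ):

#include <map>
#include <memory>
#include <string>

struct plugin { virtual ~plugin() = default; }; // stand-in for the real base

using factory_fn = std::unique_ptr<plugin> (*)();

// single registry, built on first use so static-initialization order is a non-issue
std::map<std::string, factory_fn>& plugin_registry()
{
    static std::map<std::string, factory_fn> registry;
    return registry;
}

// each plugin translation unit defines one of these at namespace scope; its
// constructor runs when the .so/.dylib is loaded and registers the factory
struct plugin_registrar
{
    plugin_registrar(const std::string& name, factory_fn fn)
    {
        plugin_registry()[name] = fn;
    }
};

The host then looks plugins up by name in plugin_registry() instead of calling dlsym for each symbol.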
I completely missed the fact that I was calling dlclose in my plugin_loader's dtor, and for some reason the plugins were getting destructed between the RunApplicationEventLoop call and the call to sendToPlugin. I removed dlclose and things work now.
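If anyone wants to keep the dlclose rather than leak the handle, one hedged sketch (using C++11 for brevity, so not a drop-in for the GCC 4.0 setup above) is to let a shared_ptr deleter hold the library handle, so the unload can only happen after the last plugin reference is gone. This assumes plugin has a virtual destructor; otherwise substitute a destroy function exported by the plugin:

#include <dlfcn.h>
#include <memory>

std::shared_ptr<plugin> load_plugin(const char* path)
{
    // the handle is closed only when the last copy of 'lib' goes away
    std::shared_ptr<void> lib(dlopen(path, RTLD_NOW),
                              [](void* h) { if (h) dlclose(h); });
    if (!lib)
        return nullptr;

    auto create = reinterpret_cast<plugin* (*)()>(dlsym(lib.get(), "create_instance"));
    if (!create)
        return nullptr;

    // the deleter captures 'lib', so dlclose can only run after the plugin is deleted
    return std::shared_ptr<plugin>(create(), [lib](plugin* p) { delete p; });
}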