I am using Boost 1.55 (io_service docs). I need to destroy my io_service to reset it after power is cycled on my serial device so I can get new data. The problem is that when the destructor is called a second time (while retrying the connection), I get a segmentation fault.
In header file
boost::asio::io_service io_service_port_1;
In function that closes connection
io_service_port_1.stop();
io_service_port_1.reset();
io_service_port_1.~io_service(); // how to check for NULL?
// do I need to re-construct it?
The following does not work:
if (io_service_port_1)
if (io_service_port_1 == NULL)
Thank you.
If you need manual control over when the object is created and destroyed, you should wrap it in a std::unique_ptr.
std::unique_ptr<boost::asio::io_service> service_ptr =
std::make_unique<boost::asio::io_service>();
/*Do stuff until connection needs to be reset*/
service_ptr->stop();
//I don't know your specific use case, but the call to io_service's member function reset is probably unnecessary.
//service_ptr->reset();
service_ptr.reset();//"reset" is a member function of unique_ptr, will delete object.
/*For your later check*/
if(service_ptr) //returns true if a valid object exists in the pointer
if(!service_ptr) //returns true if no object is being pointed to.
Generally speaking, you should never directly call ~object_name();. Ever. Ever. Ever. There are several reasons why:
As a normal part of stack unwinding, the destructor gets called anyway when the object goes out of scope.
Calling delete on a pointer will call it.
"Smart Pointers" (like std::unique_ptr and std::shared_ptr) will call it when they self-destruct.
Directly calling ~object_name(); should only ever be done in rare cases, usually involving Allocators, and even then, there are usually cleaner solutions.
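To address the "do I need to re-construct it?" question: with the unique_ptr approach you simply create a fresh io_service when you retry the connection. A minimal sketch, assuming a hypothetical open/close retry flow (the serial-port details are omitted):
std::unique_ptr<boost::asio::io_service> service_ptr;

void close_connection()
{
    if (service_ptr)
        service_ptr->stop();
    service_ptr.reset();   // destroys the io_service exactly once; safe to call repeatedly
}

void open_connection()
{
    service_ptr = std::make_unique<boost::asio::io_service>();   // fresh io_service for the retry
    // re-open the serial port and start the async operations here
}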
Related
I know that when you create and release a CComPtr, the reference count is increased and decreased. When the reference count reaches 0, the COM object is destroyed.
I am not very sure how COM works regarding concurrency. If I reset/release my CComPtr holding the last reference, is there a way to fully guarantee that by the next line of code the destructor has fully executed?
I want to know whether decreasing the reference count and calling the destructor happen on the same thread on which I reset the last COM pointer. I have heard that it is usually some kind of COM thread that actually takes care of this. If this is the case, is there any way to synchronize the COM destruction with your main working thread?
Since setting the smart pointer to nullptr indirectly calls IUnknown::Release(), it's just another function call, so it depends on the current model (see CoInitializeEx()).
If it's MTA, the release occurs on the thread you call it from. If it's STA, releasing is serialized. The point of STA is to avoid manual synchronization.
More about COM apartments here.
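As a hedged illustration of that point (error handling omitted): the threading model is chosen per thread with CoInitializeEx, and that choice determines where the final Release(), and therefore the destruction, runs.
#include <objbase.h>

void init_com_for_this_thread(bool multithreaded)
{
    // MTA: the final Release() runs on whichever thread drops the last reference.
    // STA: calls into objects living in this apartment are serialized by COM.
    CoInitializeEx(nullptr,
                   multithreaded ? COINIT_MULTITHREADED : COINIT_APARTMENTTHREADED);
}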
EDIT: I never figured this out - I refactored the code to be pretty much identical to a Boost sample, and still had the problem. If anyone else has this problem, yours may be the more common shared_from_this() being called when no shared_ptr exists (or in the constructor). Otherwise, I recommend just rebuilding from the boost asio samples.
I'm trying to do something that I think is pretty common, but I am having some issues.
I'm using boost asio, and trying to create a TCP server. I accept connections with async_accept, and I create shared pointers. I have a long lived object (like a connection manager), that inserts the shared_ptr into a set. Here is a snippet:
std::shared_ptr<WebsocketClient> ptr = std::make_shared<WebsocketClient>(std::move(s));
directory.addPending(ptr);
ptr->onConnect(std::bind(&Directory::addClient, &directory, std::placeholders::_1));
ptr->onDisconnect(std::bind(&Directory::removeClient, &directory, std::placeholders::_1));
ptr->onMessage(std::bind(&Directory::onMessage, &directory, std::placeholders::_1, std::placeholders::_2));
ptr->start();
The Directory has std::set<std::shared_ptr<WebsocketClient>> pendingClients;
The function for adding a client is:
void Directory::addPending(std::shared_ptr<WebsocketClient> ptr){
std::cout << "Added pending client: " << ptr->getName() << std::endl;
pendingClients.insert(ptr);
}
Now, when the WebsocketClient starts, it tries to create a shared_ptr using shared_from_this() and then initiates an async_read_until ("\r\n\r\n"), and passes that shared_ptr to the lambda to keep ownership. It crashes before actually invoking the asio function, on shared_from_this().
Call stack looks like this:
server.exe!WebsocketClient::start()
server.exe!Server::acceptConnection::__l2::<lambda>(boost::system::error_code ec)
server.exe!boost::asio::asio_handler_invoke<boost::asio::detail::binder1<void <lambda>(boost::system::error_code),boost::system::error_code> >(boost::asio::detail::binder1<void <lambda>(boost::system::error_code),boost::system::error_code> & function, ...)
server.exe!boost::asio::detail::win_iocp_socket_accept_op<boost::asio::basic_socket<boost::asio::ip::tcp,boost::asio::stream_socket_service<boost::asio::ip::tcp> >,boost::asio::ip::tcp,void <lambda>(boost::system::error_code) ::do_complete(boost::asio::detail::win_iocp_io_service * owner, boost::asio::detail::win_iocp_operation * base, const boost::system::error_code & result_ec, unsigned __int64 __formal) Line 142 C++
server.exe!boost::asio::detail::win_iocp_io_service::do_one(bool ec, boost::system::error_code &)
server.exe!boost::asio::detail::win_iocp_io_service::run(boost::system::error_code & ec)
server.exe!Server::run()
server.exe!main(int argc, char * * argv)
However, I get a bad_weak_ptr when I call shared_from_this. I thought that was thrown when no shared_ptr owned this object, but when I call addPending, I insert "ptr" into a set, so there should still be a reference to it.
Any ideas? If you need more details please ask, and I'll provide them. This is my first post on StackOverflow, so let me know what I can improve.
You could be dealing with memory corruption. Whether that's the case or not, there are some troubleshooting steps you should definitely take:
Log the pointer value returned from make_shared, and again inside the member function just before calling shared_from_this. Check whether that pointer value exists in your running object table (which is effectively what that set<shared_ptr<...>> is)
Instrument constructor and destructor. If the shared_ptr count does actually hit zero, it'll call your destructor and the call stack will give you information on the problem.
If that doesn't help, the fact that you're using make_shared should be useful, because it guarantees that the metadata block is right next to the object.
Use memcpy to dump the raw bytes preceding your object at various times and watch for potential corruption.
Much of this logging will happen in a context that's exhibiting undefined behavior. If the compiler figures out that you're testing for something that's not supposed to be possible, it might actually remove the test. In that case, you can usually manage to make the tests work anyway by precise use of #pragma to disable optimization just on your debug logging code -- you don't want to change optimization settings on the rest of the code, because that might change the way corruption manifests without actually fixing it.
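A hedged sketch of the first two steps, building on the snippet from the question (the logging itself is illustrative):
std::shared_ptr<WebsocketClient> ptr = std::make_shared<WebsocketClient>(std::move(s));
std::cout << "created client " << static_cast<void*>(ptr.get())
          << " use_count=" << ptr.use_count() << std::endl;
directory.addPending(ptr);

// ... and in the member function, just before the failing call:
void WebsocketClient::start()
{
    std::cout << "start() on " << static_cast<void*>(this) << std::endl;
    auto self = shared_from_this();   // bad_weak_ptr is thrown here if no shared_ptr owns *this
    // ...
}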
It is difficult to determine the cause of the problem without the code.
But which enable_shared_from_this do you use, boost or std?
I see you use std::make_shared, so if WebsocketClient inherits from boost::enable_shared_from_this, that mismatch can cause the crash.
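A minimal sketch of that mismatch (assuming this is indeed the cause): the enable_shared_from_this base has to come from the same library as the shared_ptr that owns the object.
#include <memory>

// Throws bad_weak_ptr if the object is owned by a std::shared_ptr (e.g. via std::make_shared):
// class WebsocketClient : public boost::enable_shared_from_this<WebsocketClient> { ... };

// Matching combination:
class WebsocketClient : public std::enable_shared_from_this<WebsocketClient>
{
public:
    void start()
    {
        auto self = shared_from_this();   // works once a std::shared_ptr owns *this
        // ...
    }
};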
I have a class which calls an asynchronous task using std::async in its constructor to load its content. (I want the loading of the object done asynchronously.)
The code looks like this:
void loadObject(Object* object)
{
// ... load object
}
Object::Object()
{
auto future = std::async(std::launch::async, loadObject, this);
}
I have several instances of these objects getting created and deleted on my main thread; they can get deleted at any time, even before their loading has finished.
I'd like to know if it is dangerous to have an object destroyed while it is still being handled on another thread. And how can I stop the thread if the object gets destroyed?
EDIT: With the VS2013 compiler that I am using, the std::future destructor does not block my code, due to a compiler bug.
As MikeMB already mentioned, your constructor doesn't finish until the load has been completed. Check this question for how to overcome that: Can I use std::async without waiting for the future limitation?
I'd like to know if it is dangerous to have an object destroyed while it is still being handled on another thread.
Accessing the object's memory after deletion is certainly dangerous, yes. The behaviour will be undefined.
how can I stop the thread if the object gets destroyed?
What I recommend you to take care of first, is to make sure that the object doesn't get destroyed while it's still being pointed at by something that is going to use it.
One approach is to use a member flag signifying a completed load, which is updated in the async task and checked in the destructor, with access synchronized by a condition variable. That will allow the destructor to block until the async task is complete.
Once you've managed to prevent the object from being destroyed, you can use another synchronized member flag to signify that the object is being destroyed and skip the loading if it's set. That'll add synchronization overhead but may be worth it if loading is expensive.
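A minimal sketch of the flag-plus-condition-variable idea, using a detached std::thread so the temporary future's blocking destructor does not get in the way (all names are illustrative, not from the original code):
#include <condition_variable>
#include <mutex>
#include <thread>

class Object;
void loadObject(Object*) { /* ... load the object ... */ }

class Object
{
public:
    Object()
    {
        std::thread([this] {
            loadObject(this);                       // do the actual loading
            std::lock_guard<std::mutex> lock(m_);
            loaded_ = true;
            cv_.notify_all();                       // notify while holding the lock
        }).detach();
    }

    ~Object()
    {
        // Block destruction until the loading task is done with 'this'.
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return loaded_; });
    }

private:
    std::mutex m_;
    std::condition_variable cv_;
    bool loaded_ = false;
};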
Another approach, which avoids blocking the destructor, is to pass a std::shared_ptr to the async task and require all Object instances to be owned by a shared pointer. That limitation may not be very desirable, and you'll need to inherit from std::enable_shared_from_this to get the shared pointer in the constructor.
There is nothing asynchronous happening in your code, because the constructor blocks until loadObject() returns (The destructor of a future returned by std::async implicitly joins).
If it would not, it would depend on how you have written your code (and especially your destructor), but most probably, your code would incur undefined behavior.
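A hedged sketch of one way around that (not from the original post): keep the returned future alive as a member, so the implicit join moves from the constructor to the destructor, loading really does run in the background during the object's lifetime, and the task is guaranteed to have finished before the object goes away.
#include <future>

class Object;
void loadObject(Object*) { /* ... load the object ... */ }

class Object
{
public:
    Object()
    {
        loading_ = std::async(std::launch::async, loadObject, this);
    }

    ~Object()
    {
        if (loading_.valid())
            loading_.wait();    // make sure the background task is done with 'this'
    }

private:
    std::future<void> loading_;
};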
Yes, it is dangerous to have an object destroyed while it is still being handled on another thread.
You can implement a lot of strategies actually depending on requirements and desired behaviour.
I would implement a sort of pimpl strategy here; that means that all the actual data is stored in a pointer that your object holds. You load all the data into the data-pointer-object and store it in the public object atomically.
Technically speaking, an object should be fully constructed and ready to use by the time the constructor has finished. In your case the data-pointer-object will probably still not be ready to use, and you should make your class handle that state correctly.
So here we go:
class Object
{
    std::shared_ptr<Object_data> d;

public:
    Object() :
        d(std::make_shared<Object_data>())
    {
        some_futures_master.add_future(std::async(std::launch::async, loadObject, d));
    }
};
Then you make atomic flag in your data-object that will signal that loading is complete and object is ready to use.
class Object_data
{
// ...
std::atomic<bool> loaded {false};
};
void loadObject(std::shared_ptr<Object_data> d)
{
/// some load code here
d->loaded = true;
}
You have to check whether your object is fully constructed, every time you access it, in a thread-safe way through the loaded flag.
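A hedged usage sketch, building on the Object and Object_data classes above (someValue and some_value are hypothetical names, not from the original code):
int Object::someValue() const
{
    if (!d->loaded.load())
        return 0;               // or throw / return a default while loading is still in progress
    return d->some_value;       // safe: the data is fully loaded at this point
}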
I wonder, is it safe to implement it like this?
typedef shared_ptr<Foo> FooPtr;
FooPtr *gPtrToFooPtr; // global variable
// init (before any thread has been created)
void init()
{
gPtrToFooPtr = new FooPtr(new Foo);
}
// thread A, B, C, ..., K
// Once thread Z execute read_and_drop(),
// no more call to read() from any thread.
// But it is possible even after read_and_drop() has returned,
// some thread is still in read() function.
void read()
{
FooPtr a = *gPtrToFooPtr;
// do useful things (read only)
}
// thread Z (executed once)
void read_and_drop()
{
FooPtr b = *gPtrToFooPtr;
// do useful things with b (read only)
b.reset();
}
We do not know which thread would do the actual release.
Does boost's shared_ptr do the release safely under circumstance like this?
According to boost's document, thread safety of shared_ptr is:
A shared_ptr instance can be "read" (accessed using only const operations) simultaneously by multiple threads. Different shared_ptr instances can be "written to" (accessed using mutable operations such as operator= or reset) simultaneously by multiple threads.
As far as I can tell, the code above does not violate any of the thread-safety criteria mentioned above, and I believe the code should run fine. Can anyone tell me if I am right or wrong?
Thanks in advance.
Edited 2012-06-20 01:00 UTC+9
The pseudo code above works fine. The shared_ptr implementation guarantees correct behavior under circumstances where multiple threads are accessing instances of it (each thread MUST access its own instance of shared_ptr, instantiated by using the copy constructor).
Note that in the pseudo code above, you must delete gPtrToFooPtr to have the shared_ptr implementation finally release (drop the reference count by one) the object it owns (not a proper expression since it is not an auto_ptr, but who cares ;) ). And in this case, you must be aware that it may cause a SIGSEGV in a multithreaded application.
How do you define 'safe' here? If you define it as 'I want to make sure that the object is destroyed exactly once', then YES, the release is safe. However, the problem is that the two threads share one smart pointer in your example. This is not safe at all. The reset() performed by one thread might not be visible to the other thread.
As stated by the documentation, smart pointers offer the same guarantees as built-in types (i.e., pointers). Therefore, it is problematic to perform an unguarded write while another thread might still be reading. It is undefined when that other reading thread will see the writes of the first one. Therefore, while one thread calls reset() the pointer might NOT be reset in the other thread, since the shared_ptr instance itself is shared.
If you want some sort of thread safety, you have to use two shared pointer instances. Then, of course, resetting one of them WILL NOT release the object, since the other thread still has a reference to it. Usually this behaviour is intended.
However, I think the bigger problem is that you are misusing shared_ptrs. It is quite uncommon to use pointers of shared_ptrs and to allocate the shared_ptr on the heap (using new). If you do that, you have the problem you wanted to avoid using smart pointers again (you have to manage the lifetime of the shared_ptr now). Maybe check out some example code about smart pointers and their usage first.
For your own good, I will be honest.
Your code is doing many things, and almost all of them are simply useless and absurd.
typedef shared_ptr<Foo> FooPtr;
FooPtr *gPtrToFooPtr // global variable
A raw pointer to a smart pointer cancels the advantage of automatic resource management and does not solve any problem.
void read()
{
FooPtr a = *gPtrToFooPtr;
// do useful things (read only)
}
a is not used in any meaningful way.
{
FooPtr b = ...
b.reset();
}
b.reset() is useless here; b is about to be destroyed anyway. b has no purpose in this function.
I am afraid you have no idea what you are doing, what smart pointers are for, how to use shared_ptr, or how to do MT programming; so you end up with this absurd pile of useless features that does not solve the problem.
What about doing simple things simply:
Foo f;
// called before others functions
void init() {
// prepare f
}
// called in many threads {R1, R2, ... Rn} in parallel
void read()
{
// use f (read-only)
}
// called after all threads {R1, R2, ... Rn} have terminated
void read_and_drop()
{
// reset f
}
read_and_drop() must not be called before it can be guaranteed that other threads are not reading f.
To your edit:
Why not call reset() first on the global shared_ptr?
If you were the last one to access the object, fine, it is deleted; then you delete the shared_ptr on the heap.
If some other thread still uses it, you reduce the ref count by one, and "disconnect" the global ptr from the (still existing) object that is pointed-to. You can then safely delete the shared_ptr on the heap without affecting any thread that might still use it.
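A minimal sketch of that order, using the names from the question:
void drop_global_foo()
{
    gPtrToFooPtr->reset();      // drop this reference; Foo is destroyed only if this was the last one
    delete gPtrToFooPtr;        // now delete the heap-allocated shared_ptr object itself
    gPtrToFooPtr = 0;           // avoid leaving a dangling global behind
}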
I have a destructor that performs some necessary cleanup (it kills processes). It needs to run even when SIGINT is sent to the program. My code currently looks like:
typedef boost::shared_ptr<PidManager> PidManagerPtr;
void PidManager::handler(int sig)
{
std::cout << "Caught SIGINT\n";
instance_.~PidManagerPtr(); //PidManager is a singleton
exit(1);
}
//handler registered in the PidManager constructor
This works, but there seem to be numerous warnings against explicitly calling a destructor. Is this the right thing to do in this situation, or is there a "more correct" way to do it?
If that object is a singleton, you don't need to use a shared-pointer. (There's only one!)
If you switch it to auto_ptr you can call release() on it. Or perhaps scoped_ptr, calling reset().
This all said, I'm 99% certain that exit() will destruct statically constructed objects. (Which singletons tend to be.) What I do know is that exit() calls the registered atexit() functions.
If your singleton is not destructed automatically by exit, the proper thing to do in your case is to make an atexit hook:
void release_singleton(void)
{
//instance_.release();
instance_.reset();
}
// in main, probably
atexit(release_singleton);
Never explicitly call a destructor unless the object was constructed with placement new.
Move the cleanup code into a separate function and call it instead. Call the same function from the destructor.
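A minimal sketch of that suggestion, assuming the PidManager singleton from the question (cleanup() is an illustrative name):
void PidManager::cleanup()
{
    // kill the managed processes, release other resources, ...
}

PidManager::~PidManager()
{
    cleanup();
}

void PidManager::handler(int /*sig*/)
{
    instance_->cleanup();       // call the named cleanup function, not the destructor
    exit(1);
}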
Turns out that doing this was a very bad idea. The amount of weird stuff going on is tremendous.
What was happening
The shared_ptr had a use_count of two going into the handler. One reference was in PidManager itself, the other was in the client of PidManager. Calling the destructor of the shared_ptr (~PidManagerPtr()) reduced the use_count by one. Then, as GMan hinted at, when exit() was called, the destructor for the statically initialized PidManagerPtr instance_ was called, reducing the use_count to 0 and causing the PidManager destructor to be called. Obviously, if PidManager had had more than one client, the use_count would not have dropped to 0, and this wouldn't have worked at all.
This also gives some hints as to why calling instance_.reset() didn't work. The call does indeed reduce the reference count by 1. But the remaining reference is the shared_ptr in the client of PidManager. That shared_ptr is an automatic variable, so its destructor is not called at exit(). The instance_ destructor is called, but since it was reset(), it no longer points to the PidManager instance.
The Solution
I completely abandoned the use of shared_ptrs and decided to go with the Meyers Singleton instead. Now my code looks like this:
void handler(int sig)
{
exit(1);
}
typedef PidManager * PidManagerPtr;
PidManagerPtr PidManager::instance()
{
static PidManager instance_;
static bool handler_registered = false;
if(!handler_registered)
{
signal(SIGINT,handler);
handler_registered = true;
}
return &instance_;
}
Explicitly calling exit allows the destructor of the statically initialized PidManager instance_ to run, so no other cleanup code needs to be placed in the handler. This neatly avoids any issues with the handler being called while PidManager is in an inconsistent state.
You really don't want to do much of anything in a signal handler. The safest thing to do is just set a flag (e.g. a global volatile bool), and then have your program's regular event loop check that flag every so often and, if it has become true, call the cleanup/shutdown routine from there.
Because the signal handler runs asynchronously with the rest of the application, doing much more than that from inside the signal handler is unsafe -- whatever data you might want to interact with might be in an inconsistent state. (and you're not allowed to use mutexes or other synchronization from a signal handler, either -- signals are pretty evil that way)
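A minimal sketch of the flag approach described above (names are illustrative):
#include <csignal>

volatile std::sig_atomic_t g_got_sigint = 0;

void sigint_handler(int)
{
    g_got_sigint = 1;           // the only thing the handler does
}

int main()
{
    std::signal(SIGINT, sigint_handler);
    while (!g_got_sigint)
    {
        // ... regular event loop ...
    }
    // do the real cleanup (killing processes, etc.) here, outside the handler
}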
However, if you don't like the idea of having to poll a boolean all the time, one other thing you can do from within a signal handler (at least on most OS's) is send a byte on a socket. So you could set up a socketpair() in advance, and have your normal event loop select() (or whatever) on the other end of the socket pair; when it receives a byte on that socket, it knows your signal handler must have sent that byte, and therefore it's time to clean up.
One other way could be to have the singleton dynamically allocated (on first use or in main), and delete it for cleanup.
Eh. I guess your PidManagerPtr actually points to a dynamically allocated object ... But doesn't boost::shared_ptr actually clean up on reassignment? So it should be enough to:
instance_ = 0;
?
Just call reset() on the shared_ptr and it'll remove your instance for you.