Dynamically create objects in a loop - C++

I come from Java, so this is pretty hard for me to understand. I am writing a client/server program to start learning C++.
ServerSocket server(30000);
while (true) {
    ServerSocket new_sock;
    server.accept(new_sock);
    std::cout << "client connected...\n";
    ClientConnectionThread *cct = new ClientConnectionThread(new_sock);
    cct->start();
}
My problem occurs when I try to write to the socket in the ClientConnectionThread.
client_sock << someObj;
Exception was caught in cct: Could not write to socket.
My assumption is that after the cct->start(); call the ServerSocket goes out of scope, is popped off the stack, and is automatically closed. To fix this I changed the code to:
ServerSocket server(30000);
while (true) {
    ServerSocket *new_sock;    // <----
    server.accept(new_sock);
    std::cout << "client connected...\n";
    ClientConnectionThread *cct = new ClientConnectionThread(new_sock);
    cct->start();
}
But the program didn't even enter the loop, with no error messages telling me why it didn't work (changing, of course, the necessary code to accept the pointer).
In case it is not obvious what I am trying to do: I want to create a new thread for every client connection, to handle each client. The thread needs a reference to the socket to receive and send on, which is why I pass it to the ClientConnectionThread object.
If you need more code let me know.

Your first code does not work exactly because of what you said: the object is allocated on the stack, but once it goes out of scope it is destroyed and the underlying socket is closed as a consequence.
If you want to keep the object "alive", you need to use pointers. You got that right, but missed an important point: you have to allocate the object! To do so, use the new operator, as follows:
ServerSocket *new_sock = new ServerSocket;
Now here's the catch: in Java your object gets deallocated automatically by the GC, but C++ has no garbage collector, so you need to do it by hand. Once you are done using the object, you need to delete it:
delete new_sock;
This can be tricky and can cause crashes and even memory leaks. If you want behaviour more like Java's GC, you can use a shared_ptr, which will automatically deallocate the object when the last reference to it goes away (it's not quite that simple, but you will easily find more about it on Google):
std::shared_ptr<ServerSocket> new_sock = std::shared_ptr<ServerSocket>(new ServerSocket);
server.accept(*new_sock);
(assuming you are compiling against C++11)
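Putting it together, the accept loop could look roughly like this (a sketch only: it assumes ClientConnectionThread is changed to take a std::shared_ptr<ServerSocket>, and the thread objects themselves still need to be cleaned up separately):

ServerSocket server(30000);
while (true) {
    std::shared_ptr<ServerSocket> new_sock(new ServerSocket);
    server.accept(*new_sock);
    std::cout << "client connected...\n";
    // The thread stores its own copy of the shared_ptr, so the socket
    // stays alive until the last copy is destroyed.
    ClientConnectionThread *cct = new ClientConnectionThread(new_sock);
    cct->start();
}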

You could make your first version work if you pass a copy of the ServerSocket to your thread instead of a reference (if that is possible: ServerSocket would need a proper copy constructor for this). The original ServerSocket would go out of scope as you pointed out, but that is no longer a problem, as the copy is still valid.
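For example, a minimal sketch of the copy approach, assuming ServerSocket is copyable and ClientConnectionThread stores its ServerSocket by value rather than by reference:

ServerSocket server(30000);
while (true) {
    ServerSocket new_sock;
    server.accept(new_sock);
    // ClientConnectionThread copies new_sock into a member of its own,
    // so the local going out of scope at the end of the iteration is harmless.
    ClientConnectionThread *cct = new ClientConnectionThread(new_sock);
    cct->start();
}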
If this is not an option for you, go with the version Rogiel pointed out (and stick to resource handles like unique_ptr and shared_ptr; those make your life a lot easier if you are used to a GC :-) ).

Related

boost::asio::io_service check if null

I am using boost 1.55 (io_service doc). I need to call the destructor on my io_service to reset it after power is cycled on my serial device to get new data. The problem is that when the destructor is called twice (re-trying connection), I get a segmentation fault.
In header file
boost::asio::io_service io_service_port_1;
In function that closes connection
io_service_port_1.stop();
io_service_port_1.reset();
io_service_port_1.~io_service(); // how to check for NULL?
// do I need to re-construct it?
The following does not work:
if (io_service_port_1)
if (io_service_port_1 == NULL)
Thank you.
If you need manual control over when the object is created and destroyed, you should be wrapping it in a std::unique_ptr object.
std::unique_ptr<boost::asio::io_service> service_ptr =
    std::make_unique<boost::asio::io_service>();

/* Do stuff until the connection needs to be reset */
service_ptr->stop();
// I don't know your specific use case, but the call to io_service's member
// function reset() is probably unnecessary.
//service_ptr->reset();
service_ptr.reset(); // "reset" here is a member function of unique_ptr; it will delete the object.

/* For your later check */
if (service_ptr)  // true if the pointer holds a valid object
if (!service_ptr) // true if no object is being pointed to
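If you later need the io_service again (your "do I need to re-construct it?" question), simply create a fresh one into the same unique_ptr. A sketch of the idea:

// Re-create the io_service before the next connection attempt.
service_ptr = std::make_unique<boost::asio::io_service>();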
Generally speaking, you should never directly call ~object_name();. Ever. Ever. Ever. There are several reasons why:
As a normal part of stack unwinding, it will get called anyway when the method returns.
Calling delete on a pointer will call it.
"Smart pointers" (like std::unique_ptr and std::shared_ptr) will call it when they self-destruct.
Directly calling ~object_name(); should only ever be done in rare cases, usually involving allocators, and even then there are usually cleaner solutions.

C++ shared_ptr shared_from_this throws a bad_weak_ptr exception, even though I have a reference to it

EDIT: I never figured this out - I refactored the code to be pretty much identical to a Boost sample, and still had the problem. If anyone else has this problem, yours may be the more common shared_from_this() being called when no shared_ptr exists (or in the constructor). Otherwise, I recommend just rebuilding from the boost asio samples.
I'm trying to do something that I think is pretty common, but I am having some issues.
I'm using boost asio, and trying to create a TCP server. I accept connections with async_accept, and I create shared pointers. I have a long lived object (like a connection manager), that inserts the shared_ptr into a set. Here is a snippet:
std::shared_ptr<WebsocketClient> ptr = std::make_shared<WebsocketClient>(std::move(s));
directory.addPending(ptr);
ptr->onConnect(std::bind(&Directory::addClient, &directory, std::placeholders::_1));
ptr->onDisconnect(std::bind(&Directory::removeClient, &directory, std::placeholders::_1));
ptr->onMessage(std::bind(&Directory::onMessage, &directory, std::placeholders::_1, std::placeholders::_2));
ptr->start();
The Directory has std::set<std::shared_ptr<WebsocketClient>> pendingClients;
The function for adding a client is:
void Directory::addPending(std::shared_ptr<WebsocketClient> ptr) {
    std::cout << "Added pending client: " << ptr->getName() << std::endl;
    pendingClients.insert(ptr);
}
Now, when the WebsocketClient starts, it tries to create a shared_ptr using shared_from_this() and then initiates an async_read_until ("\r\n\r\n"), and passes that shared_ptr to the lambda to keep ownership. It crashes before actually invoking the asio function, on shared_from_this().
Call stack looks like this:
server.exe!WebsocketClient::start()
server.exe!Server::acceptConnection::__l2::<lambda>(boost::system::error_code ec)
server.exe!boost::asio::asio_handler_invoke<boost::asio::detail::binder1<void <lambda>(boost::system::error_code),boost::system::error_code> >(boost::asio::detail::binder1<void <lambda>(boost::system::error_code),boost::system::error_code> & function, ...)
server.exe!boost::asio::detail::win_iocp_socket_accept_op<boost::asio::basic_socket<boost::asio::ip::tcp,boost::asio::stream_socket_service<boost::asio::ip::tcp> >,boost::asio::ip::tcp,void <lambda>(boost::system::error_code) ::do_complete(boost::asio::detail::win_iocp_io_service * owner, boost::asio::detail::win_iocp_operation * base, const boost::system::error_code & result_ec, unsigned __int64 __formal) Line 142 C++
server.exe!boost::asio::detail::win_iocp_io_service::do_one(bool ec, boost::system::error_code &)
server.exe!boost::asio::detail::win_iocp_io_service::run(boost::system::error_code & ec)
server.exe!Server::run()
server.exe!main(int argc, char * * argv)
However, I get a bad_weak_ptr when I call shared_from_this. I thought that was thrown only when no shared_ptr owned the object, but when I call addPending I insert "ptr" into a set, so there should still be a reference to it.
Any ideas? If you need more details please ask, and I'll provide them. This is my first post on StackOverflow, so let me know what I can improve.
You could be dealing with memory corruption. Whether that's the case or not, there are some troubleshooting steps you should definitely take:
Log the pointer value returned from make_shared, and again inside the member function just before calling shared_from_this. Check whether that pointer value exists in your running object table (which is effectively what that set<shared_ptr<...>> is)
Instrument constructor and destructor. If the shared_ptr count does actually hit zero, it'll call your destructor and the call stack will give you information on the problem.
If that doesn't help, the fact that you're using make_shared should be useful, because it guarantees that the metadata block is right next to the object.
Use memcpy to dump the raw bytes preceding your object at various times and watch for potential corruption.
Much of this logging will happen in a context that's exhibiting undefined behavior. If the compiler figures out that you're testing for something that's not supposed to be possible, it might actually remove the test. In that case, you can usually manage to make the tests work anyway by precise use of #pragma to disable optimization just on your debug logging code -- you don't want to change optimization settings on the rest of the code, because that might change the way the corruption manifests without actually fixing it.
It is difficult to determine the cause of the problem without the code.
But which enable_shared_from_this do you use, boost or std?
I see you use std::make_shared, so if WebsocketClient inherits from boost::enable_shared_from_this, that mismatch can cause the crash.
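For example, a minimal sketch of the matching combination (the rest of WebsocketClient is omitted); the point is simply that the enable_shared_from_this base must come from the same library as the shared_ptr that owns the object:

class WebsocketClient : public std::enable_shared_from_this<WebsocketClient> {
public:
    void start() {
        // Works only because a std::shared_ptr already owns *this
        // (the one created by std::make_shared in the accept handler).
        std::shared_ptr<WebsocketClient> self = shared_from_this();
        // ... initiate async_read_until, capturing `self` in the handler ...
    }
    // ...
};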

g++ misunderstands C++ semantics

There are two possible explanations for the problem: either I don't understand the C++ semantics, or g++ doesn't.
I am programming a simple network game now. I have been building a library the game uses to communicate over the network. There is a class designated to handle the connection between the apps. Another class implements the server functionality, so it possesses a method accept(). The method is supposed to return a Connection object.
There are a few ways to return it. I have tried these three:
Connection accept() {
    ...
    return Connection(...);
}

Connection* accept() {
    ...
    return new Connection(...);
}

Connection& accept() {
    ...
    Connection *temp = new Connection(...);
    return *temp;
}
All three were accepted by g++. The problem is that the third is somehow faulty: when you use internal data of the Connection object, it fails. I don't know what is wrong, because all fields within the object look initialized. My problem is that when I use any function from the protocol buffers library, my program is terminated by a segmentation fault. The function below fails every time it calls the protobuf library.
Annoucement Connection::receive() throw(EmptySocket) {
    if (raw_input->GetErrno() != 0) throw EmptySocket();

    CodedInputStream coded_input(raw_input);

    google::protobuf::uint32 n;
    coded_input.ReadVarint32(&n);

    char *b;
    int m;
    coded_input.GetDirectBufferPointer((const void**)&b, &m);

    Annoucement ann;
    ann.ParseFromArray(b, n);
    coded_input.Skip(n);

    return ann;
}
I get this every time:
Program received signal SIGSEGV, Segmentation fault.
0x08062106 in google::protobuf::io::FileInputStream::CopyingFileInputStream::GetErrno (this=0x20)
    at /usr/include/google/protobuf/io/zero_copy_stream_impl.h:104
When I changed accept() to the second version, it finally worked (the first is good too, but I changed the design in the meantime).
Have you come across any problem similar to this one? Why is the third version of accept() wrong? How should I debug the program to find such a horrible bug (I thought protobuf needed a fix, whereas the problem was not there)?
First, returning by reference something allocated on the heap is a sure recipe for a memory leak, so I would never suggest actually doing that.
The second case can still result in a leak unless the ownership semantics are very well specified. Have you considered using a smart pointer instead of a raw pointer?
As for why it doesn't work, it probably has to do with ownership semantics and not because you're returning by reference, but I can't see a problem in the posted code.
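For instance, a sketch of the smart-pointer variant (the Connection constructor arguments are elided just as in your snippets):

std::unique_ptr<Connection> accept() {
    // ...
    return std::unique_ptr<Connection>(new Connection(/* ... */));
}

// At the call site the Connection is deleted automatically once the
// unique_ptr goes out of scope:
// std::unique_ptr<Connection> conn = server.accept();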
"How should I debug the program to find such a horrible bug?"
If you are on Linux, try running under Valgrind; that should pick up any memory scribbling going on.
You overlooked raw_input = 0x20, which is obviously an invalid pointer. It is right there in the helpful message you got from the debugger after the segfault.
For general problems of this type, learn to use Valgrind’s memcheck, which gives you messages about where your program abused memory.
Meanwhile I suggest you make sure you understand pass by value vs pass by reference (both pointer and C++ reference) and know when constructors, copy constructors and destructors are called.

Disposing of Object problem

I have a problem freeing up the memory of an object. Here is my code:
void Gateway::connect(DWORD dwIP)
{
    if (m_obj != NULL)
    {
        //delete m_obj;
        m_obj = NULL;
    }
    m_obj = new objClass();
    m_obj->SetCallBackFn(fncp);
    if (m_obj->OpenSocket(dwIP, 3002)) // 3002 - port number
    {
        m_bConnect = TRUE;
    }
    else
    {
        m_bConnect = FALSE;
        delete m_obj;
        m_obj = NULL;
    }
}
objClass is not my own class; it is imported from an external .dll.
The OpenSocket method opens a socket connection on port 3002, and then I get all the data through fncp.
This function works OK the first time I call it.
The problem appears when I call the function a second time. The problem I have is that there is no CloseSocket method I could call to reliably close the socket.
My question to you guys is: is there any way to dispose of an object and all of that object's dependencies?
I've tried calling delete m_obj; but this hangs the application.
You should read up on C++ destructors, which are meant to do what you are after.
The destructor is where resource clean-up is usually done, but that is up to the programmer of the class. In other words, it is likely that the objClass destructor does its resource clean-up there, but without reading the docs or the code, I cannot say.
The fact that your application hangs has nothing to do with C++ or destructors in themselves, anyway. Rather, it seems to be a question of the way you use your DLL, like calling delete at the wrong time, or before some manual clean-up.
But without knowing the objClass interface and semantics, I cannot help further.
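To illustrate the idea only (SocketWrapper and m_handle are made-up names; whether anything like this can work for objClass depends entirely on what the DLL allows):

class SocketWrapper {
public:
    explicit SocketWrapper(int handle) : m_handle(handle) {}
    ~SocketWrapper() {
        // Resource clean-up belongs here, so it runs automatically
        // whenever the wrapper is deleted or goes out of scope
        // (e.g. a close/shutdown call on m_handle, if the library exposed one).
    }
private:
    int m_handle;
};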
If there is no function to explicitly clean up the object or close the socket in the library documentation, does it automatically shut down the socket if there is no activity after a certain amount of time?
If you have a way of telling if the socket is still open, you could pass the object to a helper thread to delete it when it detects that the socket is closed.
The only other thing that I can think of is that it may be possible to reuse the object for the new connection.

Is it safe to destroy a socket object while an async_read might be going on in Boost.Asio?

In the following code:
tcp::socket socket(io_service);
tcp::endpoint ep(boost::asio::ip::address::from_string(addr), i);
socket.async_connect(ep, &connect_handler);
socket.close();
Is it correct to close the socket object here, or should I close it only in connect_handler(), resorting to a shared_ptr to prolong the life of the socket object?
Thanks.
Closing the socket isn't much of an issue, but the socket being destructed and deallocated is. One way to deal with it is to just make sure the socket outlives the io_service where work is being done. In other words, you just make sure to not delete it until after the io_service has exited. Obviously this won't work in every situation.
Under a variety of conditions it can be difficult or impossible to tell when all work on the socket is really done while it's active within the io_service, and Asio doesn't provide any mechanism to explicitly remove or disconnect an object's callbacks so they don't get called. So you should consider holding the connection in a shared_ptr, which will keep the connection object alive until the last reference inside the io_service has been released.
Meanwhile your handler functors should handle all possible errors passed in, including the connection being destroyed.
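A minimal sketch of that pattern (names like Session and do_connect are illustrative; the key point is that every handler captures a shared_ptr to the object owning the socket, so the socket cannot be destroyed while a handler is still pending):

#include <boost/asio.hpp>
#include <memory>

class Session : public std::enable_shared_from_this<Session> {
public:
    explicit Session(boost::asio::io_service& io) : socket_(io) {}

    void do_connect(const boost::asio::ip::tcp::endpoint& ep) {
        std::shared_ptr<Session> self = shared_from_this(); // keep *this alive
        socket_.async_connect(ep,
            [self](const boost::system::error_code& ec) {
                // Even if every other reference to the Session is dropped,
                // `self` keeps it (and socket_) alive until this handler runs.
                if (!ec) { /* start reading/writing here */ }
            });
    }

private:
    boost::asio::ip::tcp::socket socket_;
};

// Usage:
// std::shared_ptr<Session> session = std::make_shared<Session>(io_service);
// session->do_connect(ep);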
It is safe. The connect_handler will give you ec == boost::asio::error::operation_aborted. Of course, you need to call io_service.run() for the handler to be invoked.
As already answered by Chila, it's safe to close the socket whenever you want. If the socket in question has an outstanding operation at the time, the handler/callback will be invoked to notify you that you've cancelled the operation. That's where operation_aborted shows up.
As for your question about shared_ptr, I consider it a big win if you have another thread or other objects referencing your sockets; however, it isn't required in many cases. All you have to do is dynamically allocate them and deallocate them when they're no longer needed. Of course, if you have other objects or threads referencing your socket, you must update them prior to the delete/dealloc. Doing so, you avoid invalid memory accesses through pointers to an object that no longer exists (see: dangling pointer).