boost asio "A non-recoverable error occurred during database lookup" - c++

I'm currently stress testing my server.
Sometimes I get the error "A non-recoverable error occurred during database lookup" from error.message().
The error is passed to my handler through boost::asio::placeholders::error, bound in the async_read call.
I have no idea what this error means, and I cannot reproduce it on purpose; it only happens occasionally and appears to be random (of course it is not, but it seems that way).
Has anyone ever seen this error message and, if so, do you know where it comes from?
EDIT 1
Here's what I found in the Boost sources; the error is:
no_recovery = BOOST_ASIO_NETDB_ERROR(NO_RECOVERY)
But I can't figure out what this means...
EDIT 2
Just so you know everything about my problem, here is the design:
I have only one io_service.
Every time a user connects, an async_read is started, waiting for something to read.
When it reads something, most of the time it does some work on a thread (taken from a pool) and writes something back to the user synchronously (using boost::asio::write).
Even though Boost 1.37 claims that synchronous write is thread safe, I'm really worried that the error is coming from this.
If the user sends several messages very quickly, async_read and write can end up being called simultaneously; can that do any harm?
EDIT 3
Here's the portion of my code that Dave S asked for:
void TCPConnection::listenForCMD() {
    boost::asio::async_read(m_socket,
        boost::asio::buffer(m_inbound_data, 3),
        boost::asio::transfer_at_least(3),
        boost::bind(&TCPConnection::handle_cmd,
                    shared_from_this(),
                    boost::asio::placeholders::error));
}

void TCPConnection::handle_cmd(const boost::system::error_code& error) {
    if (error) {
        std::cout << "ERROR READING : " << error.message() << std::endl;
        return;
    }

    std::string str1(m_inbound_data, 3); // bounded: m_inbound_data is not null-terminated
    std::string str = str1.substr(0, 3);
    std::cout << "COMMAND FUNCTION: " << str << std::endl;

    a_fact func = CommandFactory::getInstance()->getFunction(str);
    if (func == NULL) {
        std::cout << "command doesn't exist: " << str << std::endl;
        return;
    }

    protocol::in::Command::pointer cmd = func(m_socket, client);
    cmd->setCallback(boost::bind(&TCPConnection::command_is_done,
                                 shared_from_this()));
    cmd->parse();
}
m_inbound_data is a char[3]
Once cmd->parse() is done, it calls the callback command_is_done:
void TCPConnection::command_is_done() {
    m_inbound_data[0] = '0';
    m_inbound_data[1] = '0';
    m_inbound_data[2] = '0';
    listenForCMD();
}
The error occurs in handle_cmd, in the error check on the first line.
As I said before, cmd->parse() will parse the command it just got, sometimes launching blocking code on a thread taken from a pool. On that thread it sends data back to the client with a synchronous write.
IMPORTANT: the callback command_is_done will always be called before said thread is launched. This means listenForCMD has already been called by the time the thread may send something back to the client with a synchronous write. Hence my initial worries.

When it reads something, most of the time it does some work on a thread (taken from a pool) and writes something back to the user synchronously (using boost::asio::write). Even though Boost 1.37 claims that synchronous write is thread safe, I'm really worried that the error is coming from this.
Emphasis added by me; this is incorrect. A single boost::asio::ip::tcp::socket is not thread safe; the documentation is very clear:
Thread Safety
Distinct objects: Safe.
Shared objects: Unsafe.
It is also very odd to mix async_read() with a blocking write().
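One way to avoid both problems is to never touch the socket from the pool thread at all: the worker hands its result back to the io_service, and the write is issued asynchronously from the io_service thread, so it cannot race with the pending async_read. Below is a minimal sketch of that idea, not taken from the original code; the m_io_service and m_outbound_data members and the function names are illustrative assumptions.

void TCPConnection::send_result(const std::string& payload) {
    // Called from the pool thread: do not write here, hand off to the io_service thread.
    m_io_service.post(boost::bind(&TCPConnection::start_write,
                                  shared_from_this(), payload));
}

void TCPConnection::start_write(std::string payload) {
    // Runs on the io_service thread, so it cannot race with the read handlers.
    m_outbound_data = payload; // keep the buffer alive until the write completes
    boost::asio::async_write(m_socket,
        boost::asio::buffer(m_outbound_data),
        boost::bind(&TCPConnection::handle_write,
                    shared_from_this(),
                    boost::asio::placeholders::error));
}

void TCPConnection::handle_write(const boost::system::error_code& ec) {
    if (ec)
        std::cout << "ERROR WRITING : " << ec.message() << std::endl;
}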

Concurrent request processing with Boost Beast

I'm referring to this sample program from the Beast repository: https://www.boost.org/doc/libs/1_67_0/libs/beast/example/http/server/fast/http_server_fast.cpp
I've made some changes to the code to check the ability to process multiple requests simultaneously.
boost::asio::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}};
std::list<http_worker> workers;

for (int i = 0; i < 10; ++i)
{
    workers.emplace_back(acceptor, doc_root);
    workers.back().start();
}

ioc.run();
My understanding of the above is that I will now have 10 worker objects to run I/O, i.e. handle incoming connections.
So, my first question: is the above understanding correct?
Assuming that the above is correct, I've made some changes to the lambda (handler) passed to the tcp::acceptor:
void accept()
{
    // Clean up any previous connection.
    boost::beast::error_code ec;
    socket_.close(ec);
    buffer_.consume(buffer_.size());

    acceptor_.async_accept(
        socket_,
        [this](boost::beast::error_code ec)
        {
            if (ec)
            {
                accept();
            }
            else
            {
                boost::system::error_code ec2;
                boost::asio::ip::tcp::endpoint endpoint = socket_.remote_endpoint(ec2);

                // Request must be fully processed within 60 seconds.
                request_deadline_.expires_after(
                    std::chrono::seconds(60));

                std::cerr << "Remote Endpoint address: " << endpoint.address() << " port: " << endpoint.port() << "\n";

                read_request();
            }
        });
}
And also in process_request():
void process_request(http::request<request_body_t, http::basic_fields<alloc_t>> const& req)
{
    switch (req.method())
    {
    case http::verb::get:
        std::cerr << "Simulate processing\n";
        std::this_thread::sleep_for(std::chrono::seconds(30));
        send_file(req.target());
        break;

    default:
        // We return responses indicating an error if
        // we do not recognize the request method.
        send_bad_response(
            http::status::bad_request,
            "Invalid request-method '" + req.method_string().to_string() + "'\r\n");
        break;
    }
}
And here's my problem: if I send 2 simultaneous GET requests to my server, they are processed sequentially. I know this because the second "Simulate processing" line is printed ~30 seconds after the first one, which means execution is blocked on the first request.
I've tried to read the documentation of boost::asio to better understand this, but to no avail.
The documentation for acceptor::async_accept says:
Regardless of whether the asynchronous operation completes immediately or not, the handler will not be invoked from within this function. Invocation of the handler will be performed in a manner equivalent to using boost::asio::io_service::post().
And the documentation for boost::asio::io_service::post() says:
The io_service guarantees that the handler will only be called in a thread in which the run(), run_one(), poll() or poll_one() member functions is currently being invoked.
So, if 10 workers are in the run() state, then why would the two requests get queued?
And also, is there a way to work around this behavior without adapting to a different example? (e.g. https://www.boost.org/doc/libs/1_67_0/libs/beast/example/http/server/async/http_server_async.cpp)
io_context does not create threads internally to execute the tasks; it uses the threads that explicitly call io_context::run. In the example, io_context::run is called from just one thread (the main thread). So you have only one thread for executing tasks, and that thread gets blocked in the sleep, leaving no other thread to execute the remaining tasks.
To make this example work you have to:
Add more threads into the pool (as in the second example you referred to):
size_t const threads_count = 4;
std::vector<std::thread> v;
v.reserve(threads_count - 1);
for(size_t i = 0; i < threads_count - 1; ++i) // add threads_count - 1 extra threads into the pool
{
    v.emplace_back([&ioc]{ ioc.run(); });
}
ioc.run(); // add the main thread into the pool as well
Add synchronization (for example, using a strand, as in the second example) where it is needed, at least for the socket reads and writes, because your application is now multi-threaded; a sketch of the strand approach follows below.
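For illustration, here is a minimal sketch (not part of the original example; the class, member and handler names are assumptions) of dispatching a connection's handlers through a strand, so that even with several threads calling ioc.run() the handlers for one connection never run concurrently:

#include <boost/asio.hpp>
#include <boost/beast.hpp>

class connection
{
    boost::asio::ip::tcp::socket socket_;
    boost::asio::strand<boost::asio::io_context::executor_type> strand_;
    boost::beast::flat_buffer buffer_;
    boost::beast::http::request<boost::beast::http::string_body> req_;

public:
    explicit connection(boost::asio::io_context& ioc)
        : socket_(ioc), strand_(ioc.get_executor()) {}

    void read_request()
    {
        // bind_executor makes the completion handler run through the strand,
        // serializing it with the other handlers of this connection.
        // (Lifetime management via shared_from_this() is omitted for brevity.)
        boost::beast::http::async_read(socket_, buffer_, req_,
            boost::asio::bind_executor(strand_,
                [this](boost::beast::error_code ec, std::size_t)
                {
                    if (!ec)
                        process_request();
                }));
    }

    void process_request()
    {
        // Application-specific work on req_ goes here.
    }
};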
UPDATE 1
Answering the question "What is the purpose of the list of workers in the Beast example (the first one you referred to) if in fact io_context is only running on one thread?":
Notice that, regardless of the thread count, the I/O operations here are asynchronous, meaning http::async_write(socket_...) does not block the thread. And note that I am explaining the original example here, not your modified version. One worker deals with one round trip of 'request-response'. Imagine the situation: there are two clients, client1 and client2. Client1 has a poor internet connection (or requests a very big file) and client2 has the opposite conditions. Client1 makes a request, then client2 makes a request. If there were just one worker, client2 would have to wait until client1 finished the whole 'request-response' round trip. But because there is more than one worker, client2 gets its response immediately, without waiting for client1 (keep in mind the I/O does not block your single thread). The example is optimized for the situation where the bottleneck is I/O rather than the actual work. In your modified example you have the opposite situation: the work (30 s) is very expensive compared to the I/O. For that case the second example is the better fit.

How can I read all available data with boost::asio's async_read_some() without waiting for new data to arrive?

I'm using boost::asio for serial communications and I'd like to listen for incoming data on a certain port. So, I register a ReadHandler using serial_port::async_read_some() and then create a separate thread to process the async handlers (it calls io_service::run()). My ReadHandler re-registers itself at its end by calling async_read_some() again, which seems to be a common pattern.
This all works, and my example can print data to stdout as it's received - except that I've noticed that data received while the ReadHandler is running will not be 'read' until the ReadHandler is done executing and new data is received after that happens. That is to say, when data is received while ReadHandler is running, although async_read_some is called at the conclusion of ReadHandler, it will not immediately invoke ReadHandler again for that data. ReadHandler will only be called again if additional data is received after the initial ReadHandler is completed. At this point, the data received while ReadHandler was running will be correctly in the buffer, alongside the 'new' data.
Here's my minimum-viable-example - I had initially put it in Wandbox but realized it won't help to compile it online because it requires a serial port to run anyway.
// Include standard libraries
#include <chrono>
#include <functional>
#include <iostream>
#include <memory>
#include <string>
#include <thread>
// Include ASIO networking library
#include <boost/asio.hpp>

class SerialPort
{
public:
    explicit SerialPort(const std::string& portName) :
        m_startTime(std::chrono::system_clock::now()),
        m_readBuf(new char[bufSize]),
        m_ios(),
        m_ser(m_ios)
    {
        m_ser.open(portName);
        m_ser.set_option(boost::asio::serial_port_base::baud_rate(115200));

        auto readHandler = [&](const boost::system::error_code& ec, std::size_t bytesRead) -> void
        {
            // Need to pass the lambda as an input argument rather than capturing it, because it has auto storage class,
            // so use the trick mentioned here: http://pedromelendez.com/blog/2015/07/16/recursive-lambdas-in-c14/
            // and here: https://stackoverflow.com/a/40873505
            auto readHandlerImpl = [&](const boost::system::error_code& ec, std::size_t bytesRead, auto& lambda) -> void
            {
                if (!ec)
                {
                    const auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now() - m_startTime);
                    std::cout << elapsed.count() << "ms: " << std::string(m_readBuf.get(), m_readBuf.get() + bytesRead) << std::endl;

                    // Simulate some kind of intensive processing before re-registering our read handler
                    std::this_thread::sleep_for(std::chrono::seconds(5));

                    //m_ser.async_read_some(boost::asio::buffer(m_readBuf.get(), bufSize), lambda);
                    m_ser.async_read_some(boost::asio::buffer(m_readBuf.get(), bufSize), std::bind(lambda, std::placeholders::_1, std::placeholders::_2, lambda));
                }
            };
            readHandlerImpl(ec, bytesRead, readHandlerImpl);
        };

        m_ser.async_read_some(boost::asio::buffer(m_readBuf.get(), bufSize), readHandler);

        m_asioThread = std::make_unique<std::thread>([this]()
        {
            this->m_ios.run();
        });
    }

    ~SerialPort()
    {
        m_ser.cancel();
        m_asioThread->join();
    }

private:
    const std::chrono::system_clock::time_point m_startTime;
    static const std::size_t bufSize = 512u;
    std::unique_ptr<char[]> m_readBuf;
    boost::asio::io_service m_ios;
    boost::asio::serial_port m_ser;
    std::unique_ptr<std::thread> m_asioThread;
};

int main()
{
    std::cout << "Type q and press enter to quit" << std::endl;
    SerialPort port("COM1");
    while (std::cin.get() != 'q')
    {
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
    return 0;
}
(Don't mind the weird lambda stuff going on)
This program just prints data to stdout as it's received, along with a timestamp (milliseconds since program started). By connecting a virtual serial device to a virtual serial port pair, I can send data to the program (just typing in RealTerm, really). I can see the problem when I type a short string.
In this case, I typed 'hi', and the 'h' was printed immediately. I had typed the 'i' very shortly after, but at computer speeds it was quite a while, so it wasn't part of the initial data read into the buffer. At this point, the ReadHandler executes, which takes 5 seconds. During that time, the 'i' was received by the OS. But the 'i' does not get printed after the 5 seconds is up - the next async_read_some ignores it until I then type a 't', at which point it suddenly prints both the 'i' and the 't'.
Example program output
Here's a clearer description of this test and what I want:
Test: Start program, wait 1 second, type hi, wait 9 seconds, type t
What I want to happen (printed to stdout by this program):
1000ms: h
6010ms: i
11020ms: t
What actually happens:
1000ms: h
10000ms: it
It seems very important that the program has a way to recognize data that was received between reads. I know there is no way to check if data is available (in the OS buffer) using ASIO serial ports (without using the native_handle, anyway). But I don't really need to, as long as the read call returns. One solution to this issue might be to just make sure ReadHandler finishes running as quickly as possible - obviously the 5-second delay in this example is contrived. But that doesn't strike me as a good solution; no matter how fast I make ReadHandler, it will still be possible to 'miss' data (in that it will not be seen until some new data is received later). Is there any way to ensure that my handler will read all data within some short time of it being received, without depending on the receipt of further data?
I've done a lot of searching on SO and elsewhere, but everything I've found so far is just discussing other pitfalls that cause the system to not work at all.
As an extreme measure, it looks like it may be possible to have my worker thread call io_service::run_for() with a timeout, rather than run(), and then every short while have that thread somehow trigger a manual read. I'm not sure what form that would take yet - it could just call serial_port::cancel() I suppose, and then re-call async_read_some. But this sounds hacky to me, even if it might work - and it would require a newer version of boost, to boot.
I'm building with boost 1.65.1 on Windows 10 with VS2019, but I really hope that's not relevant to this question.
Answering the question in the title: You can't. By the nature of async_read_some you're asking for a partial read and a call to your handler as soon as anything is read. You're then sleeping for a long time before another async_read_some is called.
no matter how fast I make ReadHandler, it will still be possible to 'miss' data (in that it will not be seen until some new data is received later)
If I'm understanding your concern correctly, it doesn't matter - you won't miss anything. The data is still there, waiting in the socket/port buffer, until the next time you read it.
If you only want to begin processing once a read is complete, you need one of the async_read overloads instead. This will essentially perform multiple read_somes on the stream until some condition is met. That could just mean everything on the port/socket, or you can provide some custom CompletionCondition. This is called on each read_some until it returns 0, at which point the read is considered complete and the ReadHandler is then called.
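To illustrate that last point, here is a minimal sketch (not from the original post; the free function, buffer and "read until newline" condition are assumptions for the example) of async_read with a custom CompletionCondition on a serial port. The condition keeps asking for more bytes until a newline has arrived, and only then is the ReadHandler invoked once:

#include <boost/asio.hpp>
#include <algorithm>
#include <array>
#include <cstddef>
#include <iostream>
#include <string>

void start_read(boost::asio::serial_port& port, std::array<char, 512>& buf)
{
    boost::asio::async_read(port,
        boost::asio::buffer(buf),
        // CompletionCondition: return 0 when the read is complete, otherwise
        // the maximum number of additional bytes to read.
        [&buf](const boost::system::error_code& ec, std::size_t bytesRead) -> std::size_t
        {
            if (ec) return 0;
            const bool haveLine =
                std::find(buf.begin(), buf.begin() + bytesRead, '\n') != buf.begin() + bytesRead;
            return haveLine ? 0 : buf.size() - bytesRead;
        },
        // ReadHandler: called once the completion condition returns 0.
        [&port, &buf](const boost::system::error_code& ec, std::size_t bytesRead)
        {
            if (!ec)
            {
                std::cout << std::string(buf.data(), bytesRead) << std::endl;
                start_read(port, buf); // re-arm for the next line
            }
        });
}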

Multithreading in C++, receive message from socket

I have studied Java for 8 months but decided to learn some C++ in my spare time.
I'm currently making a multithreaded server in Qt with MinGW. My problem is that when a client connects, I create an instance of Client (which is a class) and pass the socket to the Client constructor.
Then I start a thread on the client object (startClient()), which is supposed to wait for messages, but it doesn't. By the way, startClient is the method I create the thread from. See the code below.
What happens instead? When I try to send messages to the server I get only errors, the server doesn't print that a new client has connected, and for some reason my computer starts working really hard. Qt Creator also gets extremely slow until I close the server program.
What I'm actually trying to achieve is an object that derives from the thread class (as in Java), but I have heard that that isn't a very good idea in C++.
The listener loop in the server:
for (;;)
{
    if ((sock_CONNECTION = accept(sock_LISTEN, (SOCKADDR*)&ADDRESS, &AddressSize)))
    {
        cout << "\nClient connected" << endl;
        Client client(sock_CONNECTION);               // new object and pass the socket
        std::thread t1(&Client::startClient, client); // create thread from the method
        t1.detach();
    }
}
the Client class:
Client::Client(SOCKET socket)
{
    this->socket = socket;
    cout << "hello from clientconstructor ! " << endl;
}

void Client::startClient()
{
    cout << "hello from clientmethod ! " << endl;
    // WHEN I ADD THE CODE BELOW I DON'T GET ANY OUTPUT ON THE CONSOLE!
    // No messages gets received either.
    char RecvdData[100] = "";
    int ret;
    for(;;)
    {
        try
        {
            ret = recv(socket, RecvdData, sizeof(RecvdData), 0);
            cout << RecvdData << endl;
        }
        catch (int e)
        {
            cout << "Error sending message to client" << endl;
        }
    }
}
It looks like your Client object is going out of scope after you detach it.
if (/* ... */)
{
    Client client(sock_CONNECTION);
    std::thread t1(&Client::startClient, client);
    t1.detach();
} // <-- client GOES OUT OF SCOPE HERE
You'll need to allocate your Client object dynamically and manage its lifetime (for example through a smart pointer), or define it at a higher level where it won't go out of scope. A sketch of the first option follows below.
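As a rough sketch of that first option (not from the original code; the shared_ptr and lambda are the illustrative parts), the listener loop above can hand the thread shared ownership of the Client, so the object stays alive for as long as the thread runs:

// Requires <memory> for std::shared_ptr and <thread> for std::thread.
for (;;)
{
    if ((sock_CONNECTION = accept(sock_LISTEN, (SOCKADDR*)&ADDRESS, &AddressSize)))
    {
        cout << "\nClient connected" << endl;

        auto client = std::make_shared<Client>(sock_CONNECTION);
        // The lambda copies the shared_ptr, so the Client lives until startClient returns.
        std::thread t1([client]() { client->startClient(); });
        t1.detach();
    }
}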
The fact that you never see any output from the Server likely means that your client is unable to connect to your Server in the first place. Check that you are doing your IP addressing correctly in your connect calls. If that looks good, then maybe there is a firewall blocking the connection. Turn that off or open the necessary ports.
Your connecting client is likely getting an error from connect that it is interpreting as success and then trying to send lots of traffic on an invalid socket as fast as it can, which is why your machine seems to be working hard.
You definitely need to check the return values from accept, connect, read and write more carefully. Also, make sure that you aren't running your Server's accept socket in non-blocking mode. I don't think that you are because you aren't seeing any output, but if you did it would infinitely loop on error spawning tons of threads that would also infinitely loop on errors and likely bring your machine to its knees.
If I misunderstood what is happening and you do actually get a client connection and have "Client connected" and "hello from client method ! " output, then it is highly likely that your calls to recv() are failing and you are ignoring the failure. So, you are in a tight infinite loop that is repeatedly outputting "" as fast as possible.
You also probably want to change your catch block to catch (...) rather than int. I doubt either recv() or cout throw an int. Even so, that catch block won't be invoked when recv fails because recv doesn't throw any exceptions AFAIK. It returns its failure indicator through its return value.
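On that last point, here is a hedged sketch of the receive loop with the return value checked explicitly (Winsock-style, matching the SOCKET types in the question; the exact messages are illustrative):

for (;;)
{
    char RecvdData[100];
    int ret = recv(socket, RecvdData, sizeof(RecvdData) - 1, 0);
    if (ret == 0)
    {
        cout << "Client closed the connection" << endl;
        break;
    }
    if (ret == SOCKET_ERROR)
    {
        cout << "recv failed: " << WSAGetLastError() << endl;
        break;
    }
    RecvdData[ret] = '\0'; // recv does not null-terminate the buffer
    cout << RecvdData << endl;
}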

Boost asio exits with code 0 for no reason. Setting a breakpoint AFTER the problematic statement solves it

I'm writing a TCP server-client pair with boost asio. It's very simple and synchronous.
The server is supposed to transmit a large amount of binary data through several recursive calls to a function that transmits a packet of data over TCP. The client does the analogue, reading and appending the data through a recursive function that reads incoming packets from the socket.
However, in the middle of receiving this data, most times (around 80%) the client just stops recursion suddenly, always before one of the read calls (shown below). It shouldn't be able to do this, given that there are several other statements and function calls after the recursion.
size_t bytes_transferred = m_socket.read_some(boost::asio::buffer(m_fileReadBuffer, m_fileReadBuffer.size()));
m_fileReadBuffer is a boost::array of char, with size 4096 (although I have tried other buffer formats as well with no success).
There is absolutely no way I can conceive of deducing why this is happening.
The program exits immediately, so I can't pass an error code to read_some and read any error messages, since that would need to happen after the read_some statement
No exceptions are thrown
No errors or warnings on compile/runtime
If I put breakpoints inside the recursive function, the problem never happens (transfer completes successfully)
If I put breakpoints after the transfer, or trap the execution in a while loop after the transfer, the problem never happens and there is no sign of anything wrong
Also, it's important to note that the server ALWAYS successfully sends all the data. On top of that, the problem always happens at the very end of transmissions: I can send 8000 bytes and it will exit when around 6000 or 7000 bytes have been transferred, and I can send 8000000 bytes and it will exit when something like 7996000 bytes have been transferred.
I can provide any code necessary, I just have no idea of where the problem could be. Below is the recursive read function on the client:
void TCP_Client::receive_volScan_message()
{
    try
    {
        // If the transfer is complete, exit this loop
        if(m_rollingSum >= (std::streamsize)m_fileSize)
        {
            std::cout << "File transfer complete!\n";
            std::cout << m_fileSize << " " << m_fileData.size() << "\n\n";
            return;
        }

        boost::system::error_code error;

        // Transfer isn't complete, so we read some more
        size_t bytes_transferred = m_socket.read_some(boost::asio::buffer(m_fileReadBuffer, m_fileReadBuffer.size()));
        std::cout << "Received " << (std::streamsize)bytes_transferred << " bytes\n";

        // Copy bytes_transferred to the m_fileData vector. Only copies up to m_fileSize bytes into m_fileData
        if(bytes_transferred + m_rollingSum > m_fileSize)
        {
            //memcpy(&m_fileData[m_rollingSum], &m_fileReadBuffer, m_fileSize - m_rollingSum);
            m_rollingSum += m_fileSize - m_rollingSum;
        }
        else
        {
            //memcpy(&m_fileData[m_rollingSum], &m_fileReadBuffer, bytes_transferred);
            m_rollingSum += (std::streamsize)bytes_transferred;
        }

        std::cout << "rolling sum: " << m_rollingSum << std::endl;
        this->receive_volScan_message();
    }
    catch(...)
    {
        std::cout << "whoops";
    }
}
Following a suggestion, I have tried changing the recursive loops to for loops on both the client and the server. The problem somehow persists. The only difference is that now, instead of exiting with code 0 before the previously mentioned read_some call, it exits with code 0 at the end of one of the for-loop blocks, just before starting another pass of the loop.
EDIT: As it turns out, the error doesn't occur when I build the client in debug mode in my IDE.
I haven't completely understood the problem, however I have managed to fix it entirely.
The root of the issue was that on the client, the boost::asio::read calls made main exit with code 0 if the server messages hadn't arrived yet. That means that a simple
while(m_socket.available() == 0)
{
    ;
}
before all read calls completely prevented the problem. Both in debug and release mode.
This is very strange because, as I understand it, these functions should just block until there is something to read, and even if they encountered errors they should return zero.
I think the debug/release discrepancy happened because m_readBuffer wasn't initialized to anything when the read calls took place. This made the read call return some form of silent error. In debug builds, uninitialized variables get automatically set to NULL, stealthily fixing my problem.
I have no idea why adding a while loop after the transfer prevented the issue, though. Nor why it normally happened at the end of transfers, after m_readBuffer had been set and successfully used several times.
On top of that, I have never seen this type of "crash" before, where the program simply exits with code 0 in a random place, with no errors or exceptions thrown.
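For reference, and as an alternative to busy-waiting on available(), read_some also has a non-throwing overload that reports failures through a boost::system::error_code, which makes an unexpected EOF or connection reset visible instead of being swallowed by a catch-all block. A minimal sketch of that check (not from the original code):

boost::system::error_code error;
size_t bytes_transferred = m_socket.read_some(
    boost::asio::buffer(m_fileReadBuffer, m_fileReadBuffer.size()), error);

if (error == boost::asio::error::eof)
{
    std::cout << "Server closed the connection after " << m_rollingSum << " bytes\n";
    return;
}
else if (error)
{
    std::cout << "Read failed: " << error.message() << "\n";
    return;
}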

Creating a separate boost thread for endpoint.listen() in a multithreaded program using websocketpp library

I am trying to integrate a websocketpp server into a multithreaded project. Everything works fine with a single-threaded approach, but I encountered a problem when creating a separate boost::thread for endpoint.listen() that would run in the background (so it does not disrupt the execution of the main thread). I have tried the code with Boost v1.46.1 and v1.50.0 on Ubuntu 12.04 64-bit with the newest build of websocketpp. Below is a code sample and an explanation of my approach.
#include <websocketpp/websocketpp.hpp>
#include <boost/thread.hpp>
#include <boost/bind.hpp>
#include <exception>
#include <iostream>
#include <unistd.h>

using websocketpp::server;

class echo_server_handler : public server::handler {
public:
    void on_message(connection_ptr con, message_ptr msg) {
        con->send(msg->get_payload(), msg->get_opcode());
        std::cout << "Got message: " << msg->get_payload() << std::endl;
    }
};

int main(int argc, char* argv[]) {
    unsigned short port = 9002;
    try {
        server::handler::ptr h(new echo_server_handler());
        server echo_endpoint(h);

        echo_endpoint.alog().unset_level(websocketpp::log::alevel::ALL);
        echo_endpoint.elog().unset_level(websocketpp::log::elevel::ALL);
        echo_endpoint.alog().set_level(websocketpp::log::alevel::CONNECT);
        echo_endpoint.alog().set_level(websocketpp::log::alevel::DISCONNECT);
        echo_endpoint.elog().set_level(websocketpp::log::elevel::RERROR);
        echo_endpoint.elog().set_level(websocketpp::log::elevel::FATAL);

        std::cout << "Starting WebSocket echo server on port " << port << std::endl;

        // Getting a pointer to the right overload of listen
        void(websocketpp::role::server<websocketpp::server>::*f)(uint16_t, size_t) =
            &websocketpp::role::server<websocketpp::server>::listen;

        std::cout << "Starting WSServer thread... \n" << std::endl;
        boost::shared_ptr<boost::thread> ptr(new boost::thread(boost::bind(f, &echo_endpoint, port, 1)));
        //ptr->join();
    } catch (std::exception& e) {
        std::cerr << "Exception: " << e.what() << std::endl;
    }

    // Simulating processing in the main thread
    while(true) { std::cout << "Main thread processing..." << std::endl; sleep(5); }

    return 0;
}
If I compile the code with ptr->join(), the listening thread works fine, but it makes the main thread sleep. If I leave ptr->join() out and let the listening thread run in the background, I encounter an error after the thread creation:
/usr/local/boost_1_50_0/libbin/include/boost/thread/pthread/recursive_mutex.hpp:105:
void boost::recursive_mutex::lock(): Assertion
`!pthread_mutex_lock(&m)' failed.
I'm not very experienced with threading or boost threads, and quite new with websocketpp, so I'm not sure if I'm doing something wrong here. If there are any better (and working) ways to tackle this issue, I would love to see some examples. I have been trying to figure out this problem for a long time now, so any help would be priceless. Thanks in advance!
Check out also: gdb stacktrace and valgrind results
Edit:
The "while(true)" in the code sample is there just to simulate the processing in the main thread. I'm integrating a websocket server in a big project that has different types of socket connections, events, data processing, client synchronization etc. running in the background. The websocket connection provides just another way to connect to the server using a web client instead a native one. The main thread creates all the necessary stuff, but I can't really affect in which order they are created, so the websocket server must be started in its own thread.
You create all the objects within the scope of try/catch. When you leave this scope, these objects get destroyed.
So, either move the object definitions out of the try/catch, or move the while(true) loop into it.
Why are you creating the boost::thread on the heap, when it could be on the stack?
You don't need to use boost::bind with boost::thread, so it should be simply:
boost::thread t(f, &echo_endpoint, port, 1);
Much simpler, no?
As for your program's behaviour: if you call ptr->join() there, the main thread waits for the other thread to finish, which never happens, so of course it sleeps. If you don't join it, then ptr, echo_endpoint and h all go out of scope. The other thread will then be trying to use objects which no longer exist.
As #IgorR. said, you should put the while loop inside the try-catch block, so the work in the main loop can happen before the other thread and the objects it uses go out of scope.
From Boost 1.50, the boost::thread destructor matches the behaviour of std::thread, i.e. it calls terminate() if the thread is still joinable when its destructor runs. This is to prevent the sort of error you have, where the thread continues running even though the boost::thread handle referring to it and other stack objects no longer exist. If you want it to keep running you must detach it explicitly (but in your program that would still be wrong, because the echo_endpoint and h objects would still cease to exist and the thread would still try to use them). So before a boost::thread object goes out of scope you should either join it or detach it.
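Putting both answers together, here is a hedged sketch of how main could be restructured (keeping the names from the question and assuming the same websocketpp 0.2-era API; the logging setup is elided):

int main(int argc, char* argv[]) {
    unsigned short port = 9002;
    try {
        server::handler::ptr h(new echo_server_handler());
        server echo_endpoint(h);

        // ... logging setup as before ...

        void(websocketpp::role::server<websocketpp::server>::*f)(uint16_t, size_t) =
            &websocketpp::role::server<websocketpp::server>::listen;

        // Thread on the stack; no boost::bind needed.
        boost::thread listener(f, &echo_endpoint, port, 1);

        // The main-thread work stays in the same scope, so h, echo_endpoint and
        // the thread object all remain alive while the listener runs.
        while (true) {
            std::cout << "Main thread processing..." << std::endl;
            sleep(5);
        }

        listener.join(); // never reached here, but a boost::thread must be joined or detached before destruction
    } catch (std::exception& e) {
        std::cerr << "Exception: " << e.what() << std::endl;
    }
    return 0;
}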