Server does not receive complete requests in each read - c++

I'm trying to write an async TCP client (the client should be able to write to the socket without waiting for the results of previous operations to arrive).
std::future<void> AsyncClient::SomeMethod(sometype& numbers)
{
    return std::async(
        std::launch::async,
        [&]()
        {
            // Going to send a JSON request. A ';' at the end of a request separates the requests.
            const std::string requestJson = Serializer::ArraySumRequest(numbers) + ';';
            boost::system::error_code err;
            write(requestJson, err);
        });
}
The write method:
void AsyncClient::write(const std::string& strToWrite, boost::system::error_code& err)
{
    // m_writeMutex is a class member I use to synchronize writing.
    std::lock_guard<std::mutex> lock(m_writeMutex);
    boost::asio::write(m_socket,
                       boost::asio::buffer(strToWrite), err);
}
But the result is not what I expected. Most of the time, what I receive on the server side is not a complete request followed by a ;.
What happens is something like:
A request: {"Key":"Value"};{"Key":"Va
Next request: lue"};{"Key":"Value"};
Why is it like this?

You need to actually implement the protocol on the receiving end. If you haven't received an entire request, you need to call your receive function again. The socket doesn't understand your application protocol and has no idea what a "request" is -- that's the job of the code that implements the application protocol.
If you haven't received a complete request, you need to receive more. The socket has no idea what a "complete request" is; since your requests are JSON objects terminated by a ;, you need to buffer the bytes you have read and split them on that delimiter (or parse enough of the JSON to find where one request ends and the next begins).
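To make that concrete, here is a minimal sketch of what the receiving side could look like with your ';' delimiter, assuming a blocking server loop with an already-connected socket named socket and a hypothetical handleJsonRequest function:

boost::asio::streambuf buf;
for (;;)
{
    boost::system::error_code ec;
    // Keep reading until at least one ';' is in the buffer. read_until may read
    // past the delimiter; the extra bytes stay in buf for the next iteration.
    std::size_t n = boost::asio::read_until(socket, buf, ';', ec);
    if (ec)
        break; // peer closed the connection or a real error occurred

    // n is the number of bytes up to and including the first ';'.
    std::string request(boost::asio::buffers_begin(buf.data()),
                        boost::asio::buffers_begin(buf.data()) + n);
    buf.consume(n);
    request.pop_back(); // drop the trailing ';'

    handleJsonRequest(request); // hypothetical: parse and process one complete JSON request
}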

Related

using the boost asio server session from another thread

I have written a threaded server much like this one: https://www.boost.org/doc/libs/1_36_0/doc/html/boost_asio/example/echo/async_tcp_echo_server.cpp
And a client: https://www.boost.org/doc/libs/1_53_0/doc/html/boost_asio/example/timeouts/blocking_tcp_client.cpp
They seem to work fine together when the client is talking directly with the server session. But now I would like to create another thread that would use the server's io_service to send small messages. How would this be possible? I have read that a shared_ptr would be one option, but I have not got it working...
Inside the session class I define:
typedef boost::shared_ptr<session> session1;
static session1 create(boost::asio::io_service& io_service)
{
return session1(new session(io_service));
}
Then I define a global session_ptr as
session::session1 new_session1 = nullptr;
Then in the acceptor I start the session as:
new_session1 = session::create(tcp_io_serviceServer);
acceptor_.listen();
acceptor_.async_accept(new_session1->socket(), boost::bind(&server::handle_accept, this, boost::asio::placeholders::error));
and in the handle_accept:
new_session1->start();
Now what I would like to achieve is that when the async_read of the server session gets a message from the client, it starts a new thread:
if (dataReceived[0] == _dataStartCameraThread)
{
pthread = boost::thread(boost::bind(StartProcess, server));
}
Then in that thread I want to send messages to the client via new_session1->write1(error), where write1 is:
void write1(const boost::system::error_code& error)
{
    boost::asio::async_write(tcpsocket, boost::asio::buffer(sbuf, 1),
        boost::bind(&session::handle_dummy, this, boost::asio::placeholders::error));
}
But without the shared_ptr approach I cannot make this work. It claims that the file handle is not valid.
And using the shared_ptr approach I cannot seem to write anything from the server side, I can only read:
write failed. The file handle supplied is not valid
I checked, and the socket is already closed even though it just received the message.
Any suggestions on where I should go from here?
Thank you!
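One detail that is easy to miss here: an asio socket is not safe to use from several threads at once, so the extra thread would normally hand the write over to the thread that runs the io_service rather than calling the session directly. A rough sketch of that idea, assuming the worker thread has access to the same io_service the server runs (called io_service_ below) and to the global new_session1:

void StartProcess(boost::asio::io_service& io_service_)
{
    // ... camera processing in this thread ...

    // Post the write so the socket is only used from the io_service thread.
    io_service_.post([]()
    {
        if (new_session1) // the shared_ptr keeps the session and its socket alive
        {
            boost::system::error_code ec;
            new_session1->write1(ec);
        }
    });
}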

Segmentation fault in handlers with shared_ptr's

I am trying to make a proxy, but it works properly only for the first session in one execution of the app. It catches SIGSEGV while trying to handle the second one.
It works the following way:
client connects
proxy connects to end server (unique connection for each session)
proxy sends data to server, gets handled data from server and sends handled data to client
proxy breaks connection with server and client
The problem: when we start the app and the first client uses the proxy, everything works fine (the clients connect to the proxy sequentially, e.g. the first one gets its data and disconnects, and only then does the second one connect). But when that second client tries to connect, execution never even reaches handleAccept and catches SIGSEGV in the __atomic_add function in atomicity.h (I am working on Linux).
I cannot understand whether I am writing the handlers incorrectly, using shared_ptr's incorrectly, or both.
run is called once after creating Proxy object to make it accept and handle client connections:
void Proxy::run() // create the very first session and keep waiting for other connections
{
    auto newSession = std::make_shared<Session>(ioService_);
    acceptor_.async_accept(
        newSession->getClientSocket(),
        [&](const boost::system::error_code &error) // handler is made according to boost documentation
        {
            handleAccept(newSession, error);
        }
    );
    ioService_.run();
}
handleAccept does almost the same thing but also makes session start transferring data between client and end server:
void Proxy::handleAccept(std::shared_ptr<Session> session, const boost::system::error_code &error) // handle the new connection and keep waiting for other ones
{
    if (!error)
    {
        session->connectToServer(serverEndpoint_);
        session->run(); // two more shared_ptr's to the session appear here and we just let it go (details are further down)
    }
    auto newSession = std::make_shared<Session>(ioService_);
    acceptor_.async_accept(
        newSession->getClientSocket(),
        [&](const boost::system::error_code &error)
        {
            handleAccept(newSession, error);
        }
    );
}
Session contains two Socket objects (server and client), each of which holds a shared_ptr to the Session. When each of them has finished all its actions, or some error has occurred, it resets its shared_ptr to the Session so the Session is deallocated.
Why do you capture a local variable by reference in handleAccept(...)?
acceptor_.async_accept(
    newSession->getClientSocket(),
    [&](const boost::system::error_code &error)
    {
        handleAccept(newSession, error);
    }
);
You probably want to use:
acceptor_.async_accept(
    newSession->getClientSocket(),
    [this, newSession](const boost::system::error_code &error)
    {
        handleAccept(newSession, error);
    }
);
The lambda runs after the function has completed, and the local variable newSession will have been destroyed before that, so the captured reference dangles.
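Note that run() shown above sets up its accept with the same [&] capture. There it happens to survive, because ioService_.run() keeps run()'s locals alive until the handler has fired, but capturing by value in both places is safer and consistent:

void Proxy::run()
{
    auto newSession = std::make_shared<Session>(ioService_);
    acceptor_.async_accept(
        newSession->getClientSocket(),
        [this, newSession](const boost::system::error_code &error)
        {
            handleAccept(newSession, error);
        }
    );
    ioService_.run();
}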

Check for data with timing?

Is there a way to check for data for a certain time in asio?
I have a client with an asio socket which has a method:
bool ASIOClient::hasData()
{
return m_socket->available();
}
And I'd like to have some kind of delay here, so that it checks for data for at most 1 second and returns earlier if data arrives. Moreover, I don't want to poll, for the obvious reason that it might take a second. The reason I use this is that I send data to a client and wait for the response; if it doesn't respond within a certain time I'd like to close the socket. That's what hasData is meant for.
I know that it is natively possible with a select and an fd_set.
The asio Client is created in an Accept method of the server socket class and later used to handle requests and send back data to the one who connected here.
int ASIOServer::accept(const bool& blocking)
{
auto l_sock = std::make_shared<asio::ip::tcp::socket>(m_io_service);
m_acceptor.accept(*l_sock);
auto l_client = std::make_shared<ASIOClient>(l_sock);
return 0;
}
You just need to attempt to read.
The usual approach is to define deadlines for all asynchronous operations that could take "long" (or even indefinitely long).
This is quite natural in asynchronous executions:
Just add a deadline timer:
boost::asio::deadline_timer tim(svc);
tim.expires_from_now(boost::posix_time::seconds(2));
tim.async_wait([&](const boost::system::error_code& ec) {
    if (!ec) // timer was not canceled, so it expired
    {
        socket_.cancel(); // cancel pending async operations on the socket
    }
});
If you want to use it with synchronous calls, you can, with judicious use of poll() instead of run(). See this answer: boost::asio + std::future - Access violation after closing socket, which implements a helper await_operation that runs a single operation synchronously but under a timeout.
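Applied to the 1-second "is there data?" question, a minimal sketch could look like this (assuming the io_service is not being run anywhere else at that moment; the socket and service names are placeholders):

bool waitForData(boost::asio::ip::tcp::socket& socket, boost::asio::io_service& svc)
{
    bool gotData = false;

    // Deadline: if it fires first, cancel the outstanding receive.
    boost::asio::deadline_timer timer(svc);
    timer.expires_from_now(boost::posix_time::seconds(1));
    timer.async_wait([&](const boost::system::error_code& ec)
    {
        if (!ec)
            socket.cancel();
    });

    // Peek so the data stays in the socket for the real read later.
    char probe;
    socket.async_receive(boost::asio::buffer(&probe, 1),
        boost::asio::ip::tcp::socket::message_peek,
        [&](const boost::system::error_code& ec, std::size_t)
        {
            if (!ec)
            {
                gotData = true;
                timer.cancel(); // data won the race, stop the deadline
            }
        });

    svc.reset(); // allow run() to be called again if it returned earlier
    svc.run();   // returns once both handlers are done
    return gotData;
}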

boost asio for sync server keeping TCP session open (with google proto buffers)

I currently have a very simple boost::asio server that sends a status update upon connecting (using google proto buffers):
try
{
    boost::asio::io_service io_service;
    tcp::acceptor acceptor(io_service, tcp::endpoint(tcp::v4(), 13));
    for (;;)
    {
        tcp::socket socket(io_service);
        acceptor.accept(socket);
        ...
        std::stringstream message;
        protoMsg.SerializeToOstream(&message);
        boost::system::error_code ignored_error;
        boost::asio::write(socket, boost::asio::buffer(message.str()), ignored_error);
    }
}
catch (std::exception& e) { }
catch (std::exception& e) { }
I would like to extend it to first read after accepting a new connection, check what request was received, and send different messages back depending on this message. I'd also like to keep the TCP connection open so the client doesn't have to re-connect, and would like to handle multiple clients (not many, maybe 2 or 3).
I had a look at a few examples on boost asio, namely the async time tcp server and the chat server, but both are a bit over my head, to be honest. I don't even understand whether I need an async server. I guess I could just do a read after acceptor.accept(socket), but then I wouldn't keep listening for further requests. And if I go into a loop, I could only handle one client. So I guess that means I have to go async? Is there a simpler example that isn't 250 lines of code, or do I just have to bite my way through those examples? Thanks
The examples you mention from the Boost.Asio documentation are actually pretty good for seeing how things work. You're right that at first they might look a bit difficult to understand, especially if you're new to these concepts. However, I would recommend that you start with the chat server example and get it building on your machine. This will allow you to look more closely at things and start changing them in order to learn how they work. Let me guide you through a few things I find important for getting started.
From your description of what you want to do, it seems that the chat server gives you a good starting point, as it already has pieces similar to what you need. Having the server asynchronous is what you want, since you can then quite easily handle multiple clients with a single thread. Nothing too complicated from the start.
Simplified, asynchronous in this case means that your server works off a queue, taking a handler (task) and executes it. If there is nothing on the queue, it just waits for something to be put on the queue. In your case that means it could be a connect from a client, a new read of a message from a client or something like this. In order for this to work, each handler (the function handling the reaction to a particular event) needs to be set up.
Let me explain a bit using code from the chat server example.
In the server source file, you see the chat_server class which calls start_accept in the constructor. Here the accept handler gets set up.
void start_accept()
{
    chat_session_ptr new_session(new chat_session(io_service_, room_)); // 1
    acceptor_.async_accept(new_session->socket(),                       // 2
        boost::bind(&chat_server::handle_accept, this, new_session,     // 3
            boost::asio::placeholders::error));                         // 4
}
Line 1: A chat_session object is created which represents a session between one client and the server. A session is created for the accept (no client has connected yet).
Line 2: An asynchronous accept for the socket...
Line 3: ...bound to call chat_server::handle_accept when it happens. The session is passed along to be used by the first client which connects.
Now, if we look at the handle_accept we see that upon client connect, start is called for the session (this just starts stuff between the server and this client). Lastly a new accept is put outstanding in case other clients want to connect as well.
void handle_accept(chat_session_ptr session,
                   const boost::system::error_code& error)
{
    if (!error)
    {
        session->start();
    }
    start_accept();
}
This is what you want to have as well. An outstanding accept for incoming connections. And if multiple clients can connect, there should always be one of these outstanding so the server can handle the accept.
How the server and the client(s) interact is all in the session and you could follow the same design and modify this to do what you want. You mention that the server needs to look at what is sent and do different things. Take a look at chat_session and the start function which was called by the server in handle_accept.
void start()
{
    room_.join(shared_from_this());
    boost::asio::async_read(socket_,
        boost::asio::buffer(read_msg_.data(), chat_message::header_length),
        boost::bind(
            &chat_session::handle_read_header, shared_from_this(),
            boost::asio::placeholders::error));
}
What is important here is the call to boost::asio::async_read. This is what you want too. This puts an outstanding read on the socket, so the server can read what the client sends. There is a handler (function) which is bound to this event chat_session::handle_read_header. This will be called whenever the server reads something on the socket. In this handler function you could start putting your specific code to determine what to do if a specific message is sent and so on.
What is important to know is that whenever you call these asynchronous boost::asio functions, things will not happen within that call (i.e. the socket is not read just because you call the read function). This is the asynchronous aspect: you just register a handler for something, and your code is called back when that something happens. Hence, when this read is called it returns immediately and you're back in handle_accept for the server (if you follow how things get called). And if you remember, there we also call start_accept to set up another asynchronous accept. At this point you have two outstanding handlers, waiting for either another client to connect or the first client to send something. Depending on what happens first, that specific handler will be called.
Also, it is important to understand that whatever handler is running will run uninterrupted until it is done. Other handlers have to wait, even if there are outstanding events that would trigger them.
Finally, in order to run the server you'll need the io_service which is a central concept in Asio.
io_service.run();
This is one line you see in the main function. It just says that the thread (only one in the example) should run the io_service, which is the queue where handlers get enqueued when there is work to be done. When there is nothing to do, the io_service just waits (blocking the main thread there, of course).
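For orientation, the main function of the chat server example boils down to roughly this when simplified to a single server on one port (the real example reads the port from the command line and supports several servers):

int main()
{
    try
    {
        boost::asio::io_service io_service;

        tcp::endpoint endpoint(tcp::v4(), 12345); // port picked for illustration
        chat_server server(io_service, endpoint); // constructor sets up the first async_accept

        io_service.run(); // the single thread processes all handlers here
    }
    catch (std::exception& e)
    {
        std::cerr << "Exception: " << e.what() << "\n";
    }
    return 0;
}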
I hope this helps you get started with what you want to do. There is a lot of stuff you can do and things to learn. I find it a great piece of software! Good luck!
In case anyone else wants to do this, here is the minimum to get the above going (similar to the tutorials, but a bit shorter and a bit different):
class Session : public boost::enable_shared_from_this<Session>
{
    tcp::socket socket;
    char buf[1000];
public:
    Session(boost::asio::io_service& io_service)
        : socket(io_service) { }

    tcp::socket& SocketRef() { return socket; }

    void Read() {
        boost::asio::async_read(socket, boost::asio::buffer(buf),
            boost::asio::transfer_at_least(1),
            boost::bind(&Session::Handle_Read, shared_from_this(),
                boost::asio::placeholders::error));
    }

    void Handle_Read(const boost::system::error_code& error) {
        if (!error)
        {
            // read from buffer and handle requests
            // if you want to write something, you can do it synchronously here, e.g. boost::asio::write(socket, ..., ignored_error);
            Read();
        }
    }
};
typedef boost::shared_ptr<Session> SessionPtr;
class Server
{
    boost::asio::io_service io_service;
    tcp::acceptor acceptor;
public:
    Server() : acceptor(io_service, tcp::endpoint(tcp::v4(), 13)) { }
    ~Server() { }

    void operator()() { StartAccept(); io_service.run(); }

    void StartAccept() {
        SessionPtr session_ptr(new Session(io_service));
        acceptor.async_accept(session_ptr->SocketRef(),
            boost::bind(&Server::HandleAccept, this, session_ptr,
                boost::asio::placeholders::error));
    }

    void HandleAccept(SessionPtr session, const boost::system::error_code& error) {
        if (!error)
            session->Read();
        StartAccept();
    }
};
From what I gathered through trial and error and reading: I kick things off in operator()() so you can have the server run in the background in an additional thread. You run one Server instance. To handle multiple clients, you need an extra class; I called this a session class. For asio to clean up dead sessions, you need a shared pointer, as pointed out above. Otherwise the code above should get you started.
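For instance, running it in a background thread might look like this (Server is non-copyable because of its io_service member, hence the boost::ref):

Server server;
boost::thread serverThread(boost::ref(server)); // invokes server(): StartAccept() + io_service.run()

// ... main thread is free to do other work here ...

// io_service.run() only returns once the io_service is stopped (there is
// always an accept outstanding), so join() blocks until shutdown.
serverThread.join();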

boost::asio async_accept Refuse a connection

My application has an asio server socket that must accept connections only from a defined list of IPs.
This filtering must be done by the application (not by the system), because the list can change at any time (I must be able to update it at any time).
The client must receive an access-denied error.
I suppose that by the time the handle_accept callback is called, the SYN/ACK has already been sent, so I don't want to accept and then brutally close the connection when I detect that the connected IP is not allowed. I don't control the client's behavior; it may not act the same way when the connection is refused as when it is just closed by the peer, so I want to do everything cleanly.
(But that's what I'm doing for the moment.)
Do you know how I can do that?
My access list is a container of std::strings (but I can convert it to a container of something else...).
Thank you very much
The async_accept method has an overload that also gives you the peer endpoint. You can compare that value inside your async_accept handler. If it does not match an entry in your container, let the socket go out of scope. Otherwise, handle it as required by your application.
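A rough sketch of that overload, with the allowed list held as a std::set<std::string> of address strings (names are illustrative, and both the set and the acceptor must outlive the async call):

auto sock = std::make_shared<tcp::socket>(io_service);
auto peer = std::make_shared<tcp::endpoint>(); // filled in when the accept completes

acceptor.async_accept(*sock, *peer,
    [sock, peer, &allowedIps](const boost::system::error_code& ec)
    {
        if (ec)
            return;

        if (allowedIps.count(peer->address().to_string()) == 0)
            return; // not on the list: sock goes out of scope here and is closed

        // allowed: start the normal session with *sock
    });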
I don't know the details of your app, but this is how I'd do it.
In the accept handler/lambda
void onAccept(shared_ptr<connection> c, error_code ec)
{
    if (ec) { /* ... */ }

    if (isOnBlackList(c->endpoint_))
    {
        // tell the peer it is not welcome, then shut the connection down cleanly
        boost::asio::async_write(c->socket_, /* a refusal message */,
            [c](error_code, std::size_t)
            {
                c->socket_.shutdown(tcp::socket::shutdown_both);
                // c fizzles out of all contexts...
            });
    }
    else
    {
        // successful connection execution path
    }
}
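isOnBlackList (or an allow-list check, which is what the question actually asks for) can then be a simple lookup against the container of strings. Since the list can change at any time, guard it with a mutex. A sketch, assuming hypothetical members m_allowedIps (a std::set<std::string>) and m_listMutex:

bool isAllowed(const boost::asio::ip::tcp::endpoint& peer)
{
    std::lock_guard<std::mutex> lock(m_listMutex); // the list may be updated from another thread
    return m_allowedIps.find(peer.address().to_string()) != m_allowedIps.end();
}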