Using boost::asio with select? Blocking on TCP input OR file update - C++

I had intended to have a thread in my program that would wait on two file descriptors: one for a socket and a second one describing the file system (specifically, waiting to see whether a new file is added to a directory). Since I expect to rarely see either a new file added or new TCP messages coming in, I wanted to have one thread waiting for either input and handling whichever is detected when it occurs, rather than bothering with separate threads.
I then (finally!) got permission from the 'boss' to use Boost, so now I want to replace the basic sockets with boost::asio. Only I'm running into a small problem: it seems that asio implemented its own version of select rather than providing an FD I could use with select directly. This leaves me uncertain how I can block on both conditions, new file and TCP input, at the same time, when one works with select and the other doesn't seem to support it. Is there an easy workaround I'm missing?

ASIO is best used asynchronously (that's what the "a" stands for): you can set up handlers for both TCP reads and the file descriptor activity, and the handlers will be called for you.
Here's a demo example to get you started (written for Linux with inotify support):
#include <iostream>
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <sys/inotify.h>

namespace asio = boost::asio;

void start_notify_handler();
void start_accept_handler();

// this stuff goes into your class, only global for the simplistic demo
asio::streambuf buf(1024);
asio::io_service io_svc;
asio::posix::stream_descriptor stream_desc(io_svc);
asio::ip::tcp::socket sock(io_svc);
asio::ip::tcp::endpoint end(asio::ip::tcp::v4(), 1234);
asio::ip::tcp::acceptor acceptor(io_svc, end);

// this gets called on file system activity
void notify_handler(const boost::system::error_code&,
                    std::size_t transferred)
{
    buf.commit(transferred); // move the received bytes into the readable area
    size_t processed = 0;
    while(transferred - processed >= sizeof(inotify_event))
    {
        const char* cdata = processed
            + asio::buffer_cast<const char*>(buf.data());
        const inotify_event* ievent =
            reinterpret_cast<const inotify_event*>(cdata);
        processed += sizeof(inotify_event) + ievent->len;
        if(ievent->len > 0 && ievent->mask & IN_OPEN)
            std::cout << "Someone opened " << ievent->name << '\n';
    }
    buf.consume(processed); // discard the events we have handled
    start_notify_handler();
}

// this gets called when someone connects to you on TCP port 1234
void accept_handler(const boost::system::error_code&)
{
    std::cout << "Someone connected from "
              << sock.remote_endpoint().address() << '\n';
    sock.close(); // dropping connection: this is just a demo
    start_accept_handler();
}

void start_notify_handler()
{
    stream_desc.async_read_some(buf.prepare(buf.max_size()),
        boost::bind(&notify_handler, asio::placeholders::error,
                    asio::placeholders::bytes_transferred));
}

void start_accept_handler()
{
    acceptor.async_accept(sock,
        boost::bind(&accept_handler, asio::placeholders::error));
}

int main()
{
    int raw_fd = inotify_init(); // error handling ignored
    stream_desc.assign(raw_fd);
    inotify_add_watch(raw_fd, ".", IN_OPEN);
    start_notify_handler();
    start_accept_handler();
    io_svc.run();
}


Close Boost Websocket from Server side, C++, tcp::acceptor accept() timeout?

UPDATE:
Well, it appears that I need to address my issue with an asynchronous implementation. I will update my posting with a new direction once I've completed testing.
Original:
I'm currently writing a multi-server application that will collect, share, and request information from multiple machines. In some cases, Machine-A will request information from Machine-B but will need to send it to Machine-C, which will reply to A. Without getting too deep into what the application is going to do, I need some help with my client application.
I have my client application designed with two threads. I used this example from Boost as the basis for my design.
Thread one will open a client WebSocket with Machine-A and stream a series of data points and commands. Here is a stripped-down version of my code:
#include "Poco/Clock.h"
#include "Poco/Task.h"
#include "Poco/Thread.h"
#include <boost/asio.hpp>
#include <boost/beast.hpp>
#include <jsoncons/json.hpp>
namespace beast = boost::beast; // from <boost/beast.hpp>
namespace http = beast::http; // from <boost/beast/http.hpp>
namespace websocket = beast::websocket; // from <boost/beast/websocket.hpp>
namespace net = boost::asio; // from <boost/asio.hpp>
using tcp = net::ip::tcp; // from <boost/asio/ip/tcp.hpp>
class ResponseChannel : public Poco::Runnable {
void do_session(tcp::socket socket)
{
try {
websocket::stream<tcp::socket> ws{std::move(socket)};
ws.set_option(websocket::stream_base::decorator(
[](websocket::response_type& res) {
res.set(http::field::server,
std::string(BOOST_BEAST_VERSION_STRING) +
" websocket-server-sync");
}));
ws.accept();
for (;;) {
beast::flat_buffer buffer;
ws.read(buffer);
if (ws.got_binary()) {
// do something
}
}
} catch (beast::system_error const& se) {
if (se.code() != websocket::error::closed) {
std::cerr << "do_session1 ->: " << se.code().message()
<< std::endl;
return;
}
} catch (std::exception const& e) {
std::cerr << "do_session2 ->: " << e.what() << std::endl;
return;
}
}
virtual void run()
{
auto const address = net::ip::make_address(host);
auto const port = static_cast<unsigned short>(respPort);
try {
net::io_context ioc{1};
tcp::acceptor acceptor{ioc, {address, port}};
tcp::socket socket{ioc};
for (; keep_running;) {
acceptor.accept(socket);
std::thread(&ResponseChannel::do_session, this,
std::move(socket))
.detach();
}
} catch (const std::exception& e) {
std::cout << "run: " << e.what() << std::endl;
}
}
void _terminate() { keep_running = false; }
public:
std::string host;
int respPort;
bool keep_running = true;
int responseCount = 0;
std::vector<long long int> latency_times;
long long int time_sum;
Poco::Clock* responseClock;
};
int main()
{
using namespace std::chrono_literals;
Poco::Clock clock = Poco::Clock();
Poco::Thread response_thread;
ResponseChannel response_channel;
response_channel.responseClock = &clock;
response_channel.host = "0.0.0.0";
response_channel.respPort = 8080;
response_thread.start(response_channel);
response_thread.setPriority(Poco::Thread::Priority::PRIO_HIGH);
// doing some work here. work will vary depending on command-line arguments
std::this_thread::sleep_for(30s);
response_channel.keep_running = false;
response_thread.join();
}
The multi-machine design works as expected as far as sending commands to Machine-B and receiving results from Machine-C goes.
The issue I'm facing is closing out Thread 2, which contains my local response channel.
I went back and forth between Poco::Thread and Poco::Task, but I decided that I do not want to use Task, as it would be a mistake to be able to close the 2nd thread/task from the main thread. I need to know that all packets have been received before closing down the 2nd thread.
So I need to close things down only once I have received a websocket::error::closed flag from Machine-C. Shutting down the detached websocket thread is no issue: when the flag arrives, it takes care of that for me.
However, as part of the loop process for reconnecting after a closed socket, the thread just waits for a new connection.
acceptor.accept(socket);
It's blocking, and from the documentation there doesn't seem to be a timeout feature. I see that there is a close option, but my attempt to use close simply threw an exception, which ultimately added complexity I didn't want.
Ultimately, I want the server to loop continuously through a series of connections from both Machine-B and Machine-C, and only shut down after my client application has ended. The last thing I do before waiting for the Poco::Thread to complete is to set the flag that I no longer want the WebSocket server to run.
I check that flag before the blocking accept() call. This only works with perfect timing, though: after the flag goes up, a new connection has to be opened and then closed before the loop gets back to the flag check instead of waiting on a new connection.
Ideally, accept() would have a timeout so that the loop would come around, first check whether it timed out, and allow a periodic check of whether I want the thread to remain open.
Has anyone ever run into this?
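For what it's worth, a minimal sketch of the asynchronous direction mentioned in the update above: replace the blocking accept() with async_accept and drive the io_context in short slices so the keep_running flag gets checked periodically. The port and flag name are borrowed from the code above; everything else (the 250 ms slice, the free-standing main) is just an assumption for illustration, not the final design.
#include <boost/asio.hpp>
#include <atomic>
#include <chrono>
#include <functional>

namespace net = boost::asio;
using tcp = net::ip::tcp;

int main()
{
    net::io_context ioc{1};
    tcp::acceptor acceptor{ioc, {net::ip::make_address("0.0.0.0"), 8080}};
    std::atomic<bool> keep_running{true}; // cleared elsewhere when the client work is done

    // Re-arm an asynchronous accept each time one completes.
    std::function<void()> do_accept = [&] {
        acceptor.async_accept([&](boost::system::error_code ec, tcp::socket socket) {
            if (!ec) {
                // hand the socket off to a session thread/handler here
            }
            if (keep_running)
                do_accept();
        });
    };
    do_accept();

    // Instead of blocking forever in accept(), run the io_context in short
    // slices so the flag gets a periodic check.
    while (keep_running) {
        ioc.run_for(std::chrono::milliseconds(250));
    }
    acceptor.close(); // cancels any accept still pending
}
Closing the acceptor from another thread also completes a pending async_accept with operation_aborted, which is another way to wake the loop without waiting for a real connection.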

C++ multithreading closes TCP connection

I work on a C++ server where I wait for a network connection. If I get one, I put the socket into a new thread and listen for further input. But the problem is that as soon as I have the socket in a new thread, the TCP connection is disconnected. I'm using the SFML library.
Here's some code:
main.cpp:
int main() {
    std::list<std::thread> user_connections;
    sf::TcpListener listener;
    listener.listen(PORT);
    while (true)
    {
        sf::TcpSocket client;
        listener.accept(client);
        Protocol user_connection;
        std::thread new_con(&Protocol::connect, &user_connection, std::ref(client));
        new_con.detach();
        user_connections.push_back(std::move(new_con)); // user_connections is a list
    }
}
protocol.cpp:
class Protocol {
public:
    void connect(sf::TcpSocket& client)
    {
        std::cout << "Address: " << client.getRemoteAddress() << ":" << client.getRemotePort() << std::endl;
    }
};
This prints out:
Address: 0.0.0.0:0
And if I try to send any kind of message I get status 4, which according to the documentation is Disconnected.
EDIT:
According to @Ted Lyngmo it's because I need to put client in a list, because otherwise it goes out of scope. Now if I try to put it in a list via:
std::list<sf::TcpSocket> clients; // executed before while loop
// [...]
clients.push_back(client); // in the while loop
I get the error: (pastebin).
This is something built on your current threaded code. It may be a good idea to use a single-threaded design instead and use sf::SocketSelector to wait for events on the listener and all the connected clients; a rough sketch of that approach follows after the code below.
In this lazy solution, disconnected clients will not be removed from the server's list of clients until a new client is connected.
I've tried to explain it with comments in the code which is an echoing kind of server, so you can telnet to it, send messages and get them back.
#include <SFML/Network.hpp>
#include <atomic>
#include <iostream>
#include <list>
#include <thread>

constexpr uint16_t PORT = 2048; // what you have in your code.

// A simple struct to keep a client and thread
struct client_thread {
    sf::TcpSocket client{};
    std::thread thread{};
    // The main thread can check "done" to remove this client_thread from its list:
    std::atomic<bool> done{false};

    ~client_thread() {
        // instead of detaching, join()
        if(thread.joinable()) thread.join();
    }
};

// the connect function gets a reference to a client_thread instead
void connect(client_thread& clith) {
    constexpr std::size_t BufSize = 1024;
    auto& [client, thread, done] = clith; // for convenience
    std::cout << "thread: Address: " << client.getRemoteAddress() << ":"
              << client.getRemotePort() << std::endl;
    std::string buffer(BufSize, '\0');
    std::size_t received;
    while(client.receive(buffer.data(), buffer.size(), received) == sf::Socket::Done) {
        // remove ASCII control chars (cr and newline etc.)
        while(received && buffer[received - 1] < ' ') --received;
        buffer.resize(received);
        std::cout << buffer << std::endl;
        // send something back
        buffer = "You sent >" + buffer + "<\n";
        client.send(buffer.c_str(), buffer.size());
        // restore the size
        buffer.resize(BufSize);
    }
    std::cout << "thread: client disconnected\n";
    client.disconnect();
    // set done to true so the main thread can remove the client_thread
    done = true;
}

int main() {
    sf::TcpListener listener;
    // check that listening actually works
    if(listener.listen(PORT) != sf::Socket::Done) return 1;
    // now a list of client_thread instead:
    std::list<client_thread> user_connections;
    while(true) {
        // create a client_thread to use when listening
        auto& clith = user_connections.emplace_back();
        auto& [client, thread, _] = clith; // for convenience
        std::cout << "main: listening ...\n";
        sf::Socket::Status status = listener.accept(client);
        if(status == sf::Socket::Done) {
            std::cout << "main: got connection\n";
            thread = std::thread(connect, std::ref(clith));
        } else {
            std::cout << "main: accept not done\n";
        }
        // remove disconnected clients, pre C++20
        for(auto it = user_connections.begin(); it != user_connections.end();) {
            // check the atomic bool in all threads
            if(it->done) {
                std::cout << "main: removing old connection\n";
                it = user_connections.erase(it);
            } else {
                ++it;
            }
        }
        // remove disconnected clients, >= C++20
        //
        // std::erase_if(user_connections,
        //               [](auto& clith) -> bool { return clith.done; });
    }
}
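As mentioned at the top of this answer, the single-threaded alternative would let sf::SocketSelector wait on the listener and all connected clients at once. Here is a rough sketch of that idea; the 1-second timeout and buffer size are arbitrary choices for illustration, not something taken from your code:
#include <SFML/Network.hpp>
#include <cstdint>
#include <iostream>
#include <list>

int main() {
    constexpr uint16_t PORT = 2048;
    sf::TcpListener listener;
    if (listener.listen(PORT) != sf::Socket::Done) return 1;

    sf::SocketSelector selector;
    selector.add(listener);
    std::list<sf::TcpSocket> clients; // sockets live here, so no copy or move is needed

    while (true) {
        // Block until the listener or any client has activity (or 1 s passes).
        if (!selector.wait(sf::seconds(1.f))) continue;

        if (selector.isReady(listener)) {
            auto& client = clients.emplace_back(); // construct in place, like above
            if (listener.accept(client) == sf::Socket::Done)
                selector.add(client);
            else
                clients.pop_back();
        }

        for (auto it = clients.begin(); it != clients.end();) {
            char buf[1024];
            std::size_t received = 0;
            if (selector.isReady(*it) &&
                it->receive(buf, sizeof(buf), received) != sf::Socket::Done) {
                // disconnected (or error): drop the client
                selector.remove(*it);
                it = clients.erase(it);
            } else {
                if (received > 0) std::cout.write(buf, received) << '\n';
                ++it;
            }
        }
    }
}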
Edit regarding your edited question where you're trying to put the client in a list:
You're trying to copy the sf::TcpSocket and it's not copyable. What's worse, it's not even moveable. The reason the code in my answer works is because it avoids both copying and moving by using std::list::emplace_back to construct the element in place in the list.
Apparently both sf::TcpSocket client and Protocol user_connection are destroyed. It's no use to only keep the thread alive: your thread only holds references to client and user_connection, but both of them are destroyed soon after your thread is created (and maybe before it has even started running).
I read a little bit about the SFML library and unfortunately, at least the client, which is an object of TcpSocket, is neither copyable nor movable. The SFML library must be a very old library. Any modern socket library will design its socket type to be at least movable, meaning that you can move your socket into the thread, or move it into the std::list or std::vector you created.
So using the SFML library, which was written without modern C++11 support (move semantics were introduced in C++11), together with the C++11 library (std::thread) will be quite painful.
You can probably use std::shared_ptr to hold a newly created protocol & client, and pass the shared_ptr into the thread or into the list you created.
I don't know what Protocol does exactly; rough pseudocode is as follows:
std::shared_ptr<TcpSocket> client = std::make_shared<TcpSocket>();
listener.accept(*client);
std::shared_ptr<Protocol> protocol = std::make_shared<Protocol>();
// copy the pointer into thread, they will be deleted after the thread is done
std::thread new_con ( [client, protocol] () { protocol->connect(*client); } );
or, protocol can probably be defined in the thread,
std::shared_ptr<TcpSocket> client = std::make_shared<TcpSocket>();
listener.accept(*client);
std::thread new_con( [client] () {
    Protocol protocol;
    protocol.connect(*client);
} );

Integration between Node.js and C++

I have a Node.js application from which I want to send a JSON object into a C++ application.
The C++ application will use the Poco libraries (pocoproject.org).
I want the interaction to be lightning fast, so preferably no files or network sockets.
I have been looking into these areas:
Pipes
Shared memory
unixSockets
What should I focus on, and can someone point my direction to docs. and samples?
First of all, some more data is needed to give good advice.
In general shared memory is the fastest, since there's no transfer required, but it's also the hardest to get right. I'm not sure you'd be able to do that with Node, though.
If this program is just running for this one task and then closing, it might be worth just sending your JSON to the C++ program as a startup parameter:
myCPPProgram.exe "JsonDataHere"
The simplest thing with decent performance should be a socket connection using Unix domain sockets with some low-overhead data frame format. E.g., two-byte length followed by UTF-8 encoded JSON. On the C++ side this should be easy to implement using the Poco::Net::TCPServer framework. Depending on where your application will go in the future you may run into limits of this format, but if it's basically just streaming JSON objects it should be fine.
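For illustration, a sketch of what that framing could look like on the Poco side. The class name JsonFrameConnection and the receiveExact helper are made up for this sketch, and the exact frame layout (2-byte big-endian length, then UTF-8 JSON) is just the suggestion above, not a fixed protocol:
#include "Poco/Net/TCPServerConnection.h"
#include "Poco/Exception.h"
#include <iostream>
#include <string>
#include <vector>

// Reads frames of the form: 2-byte big-endian length, then <length> bytes of JSON.
class JsonFrameConnection : public Poco::Net::TCPServerConnection {
public:
    using Poco::Net::TCPServerConnection::TCPServerConnection;

    void run() override {
        try {
            for (;;) {
                unsigned char hdr[2];
                if (!receiveExact(hdr, 2)) break;          // peer closed the connection
                std::size_t len = (hdr[0] << 8) | hdr[1];  // big-endian length
                std::vector<char> payload(len);
                if (!receiveExact(payload.data(), len)) break;
                std::string json(payload.begin(), payload.end());
                std::cout << "Received JSON: " << json << std::endl;
                // ... parse and dispatch the JSON here ...
            }
        } catch (Poco::Exception& exc) {
            std::cerr << exc.displayText() << std::endl;
        }
    }

private:
    // Loop until exactly n bytes have been read (receiveBytes may return less).
    bool receiveExact(void* dst, std::size_t n) {
        char* p = static_cast<char*>(dst);
        std::size_t got = 0;
        while (got < n) {
            int r = socket().receiveBytes(p + got, static_cast<int>(n - got));
            if (r <= 0) return false;
            got += static_cast<std::size_t>(r);
        }
        return true;
    }
};
A connection class like this would be registered with a TCPServerConnectionFactory, much like the server code shown further down in this thread.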
To make it even simpler, you can use a WebSocket, which will take care of the framing for you, at the cost of the overhead for the initial connection setup (HTTP upgrade request). May even be possible to run the WebSocket protocol over a Unix domain socket.
However, the performance difference between a (localhost only) TCP socket and a Unix domain socket may not even be significant, given all the JavaScript/node.js overhead. Also, if performance is really a concern, JSON may not even be the right serialization format to begin with.
Anyway, without more detailed information (size of JSON data, message frequency) it's hard to give a definite recommendation.
I created a TCPServer, which seems to work. However if I close the server and start it again I get this error:
Net Exception: Address already in use: /tmp/app.SocketTest
Is it not possible to re-attach to the socket if it exists?
Here is the code for the TCPServer:
#include "Poco/Util/ServerApplication.h"
#include "Poco/Net/TCPServer.h"
#include "Poco/Net/TCPServerConnection.h"
#include "Poco/Net/TCPServerConnectionFactory.h"
#include "Poco/Util/Option.h"
#include "Poco/Util/OptionSet.h"
#include "Poco/Util/HelpFormatter.h"
#include "Poco/Net/StreamSocket.h"
#include "Poco/Net/ServerSocket.h"
#include "Poco/Net/SocketAddress.h"
#include "Poco/File.h"
#include <fstream>
#include <iostream>
using Poco::Net::ServerSocket;
using Poco::Net::StreamSocket;
using Poco::Net::TCPServer;
using Poco::Net::TCPServerConnection;
using Poco::Net::TCPServerConnectionFactory;
using Poco::Net::SocketAddress;
using Poco::Util::ServerApplication;
using Poco::Util::Option;
using Poco::Util::OptionSet;
using Poco::Util::HelpFormatter;
class UnixSocketServerConnection: public TCPServerConnection
/// This class handles all client connections.
{
public:
UnixSocketServerConnection(const StreamSocket& s):
TCPServerConnection(s)
{
}
void run()
{
try
{
/*char buffer[1024];
int n = 1;
while (n > 0)
{
n = socket().receiveBytes(buffer, sizeof(buffer));
EchoBack(buffer);
}*/
std::string message;
char buffer[1024];
int n = 1;
while (n > 0)
{
n = socket().receiveBytes(buffer, sizeof(buffer));
buffer[n] = '\0';
message += buffer;
if(sizeof(buffer) > n && message != "")
{
EchoBack(message);
message = "";
}
}
}
catch (Poco::Exception& exc)
{
std::cerr << "Error: " << exc.displayText() << std::endl;
}
std::cout << "Disconnected." << std::endl;
}
private:
inline void EchoBack(std::string message)
{
std::cout << "Message: " << message << std::endl;
socket().sendBytes(message.data(), message.length());
}
};
class UnixSocketServerConnectionFactory: public TCPServerConnectionFactory
/// A factory
{
public:
UnixSocketServerConnectionFactory()
{
}
TCPServerConnection* createConnection(const StreamSocket& socket)
{
std::cout << "Got new connection." << std::endl;
return new UnixSocketServerConnection(socket);
}
private:
};
class UnixSocketServer: public Poco::Util::ServerApplication
/// The main application class.
{
public:
UnixSocketServer(): _helpRequested(false)
{
}
~UnixSocketServer()
{
}
protected:
void initialize(Application& self)
{
loadConfiguration(); // load default configuration files, if present
ServerApplication::initialize(self);
}
void uninitialize()
{
ServerApplication::uninitialize();
}
void defineOptions(OptionSet& options)
{
ServerApplication::defineOptions(options);
options.addOption(
Option("help", "h", "display help information on command line arguments")
.required(false)
.repeatable(false));
}
void handleOption(const std::string& name, const std::string& value)
{
ServerApplication::handleOption(name, value);
if (name == "help")
_helpRequested = true;
}
void displayHelp()
{
HelpFormatter helpFormatter(options());
helpFormatter.setCommand(commandName());
helpFormatter.setUsage("OPTIONS");
helpFormatter.setHeader("A server application to test unix domain sockets.");
helpFormatter.format(std::cout);
}
int main(const std::vector<std::string>& args)
{
if (_helpRequested)
{
displayHelp();
}
else
{
// set-up unix domain socket
Poco::File socketFile("/tmp/app.SocketTest");
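// Note (suggestion, not in the original post): the "Address already in use"
// error when restarting comes from the stale socket file left behind by the
// previous run; Unix domain sockets are not automatically unlinked.
// Removing the file before binding avoids the need to "re-attach":
//   if (socketFile.exists()) socketFile.remove();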
SocketAddress unixSocket(SocketAddress::UNIX_LOCAL, socketFile.path());
// set-up a server socket
ServerSocket svs(unixSocket);
// set-up a TCPServer instance
TCPServer srv(new UnixSocketServerConnectionFactory, svs);
// start the TCPServer
srv.start();
// wait for CTRL-C or kill
waitForTerminationRequest();
// Stop the TCPServer
srv.stop();
}
return Application::EXIT_OK;
}
private:
bool _helpRequested;
};
int main(int argc, char **argv) {
UnixSocketServer app;
return app.run(argc, argv);
}
The solution I have gone for is to use Unix domain sockets. The solution will run on a Raspbian setup and the socket file is placed in /dev/shm, which is mounted in RAM.
On the C++ side, I use the Poco::Net::TCPServer framework as described elsewhere in this post.
On the Node.js side, I use the node-ipc module (http://riaevangelist.github.io/node-ipc/).

Two-way C++ communication over serial connection

I am trying to write a really simple C++ application to communicate with an Arduino. I would like to send the Arduino a character that it sends back immediately. The Arduino code that I took from a tutorial looks like this:
void setup()
{
    Serial.begin(9600);
}

void loop()
{
    // Have the Arduino wait to receive input
    while (Serial.available() == 0);
    // Read the input
    char val = Serial.read();
    // Echo
    Serial.println(val);
}
I can communicate with the Arduino easily using GNU screen, so I know that everything is working fine with the basic communication:
$ screen /dev/tty.usbmodem641 9600
The (broken) C++ code that I have looks like this:
#include <fstream>
#include <iostream>

int main()
{
    std::cout << "Opening fstream" << std::endl;
    std::fstream file("/dev/tty.usbmodem641");
    std::cout << "Sending integer" << std::endl;
    file << 5 << std::endl; // endl does flush, which may be important
    std::cout << "Data Sent" << std::endl;
    std::cout << "Awaiting response" << std::endl;
    std::string response;
    file >> response;
    std::cout << "Response: " << response << std::endl;
    return 0;
}
It compiles fine, but when running it, some lights flash on the Arduino and the terminal just hangs at:
Opening fstream
Where am I going wrong?
There are three points:
First: You don't initialize the serial port (TTY) on the Linux side. Nobody knows in what state it is.
To do this in your program you must use tcgetattr(3) and tcsetattr(3). You can find the required parameters by searching for these keywords on this site, the Arduino site, or Google. But just for quick testing I propose to issue this command before you run your own program:
stty -F /dev/tty.usbmodem641 sane raw pass8 -echo -hupcl clocal 9600
Especially the missing clocal might prevent you from opening the TTY.
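If you prefer to do the equivalent inside your program rather than calling stty, a minimal sketch using tcgetattr(3)/tcsetattr(3) might look like this (the function name open_arduino is made up for the sketch; the device path and baud rate are the ones from the question):
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

int open_arduino(const char* dev) // e.g. "/dev/tty.usbmodem641"
{
    // O_NOCTTY: don't make the device our controlling terminal
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return -1; }

    termios tio;
    if (tcgetattr(fd, &tio) < 0) { perror("tcgetattr"); close(fd); return -1; }

    cfmakeraw(&tio);               // raw mode, 8-bit clean (the "raw pass8" part)
    tio.c_cflag |= CLOCAL | CREAD; // ignore modem control lines, enable receiver
    tio.c_cflag &= ~HUPCL;         // the "-hupcl" part: don't reset the Arduino on close
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);

    if (tcsetattr(fd, TCSANOW, &tio) < 0) { perror("tcsetattr"); close(fd); return -1; }
    return fd; // raw file descriptor, ready for read(2)/write(2)
}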
Second: When the device is open, you should wait a little before sending anything. By default the Arduino resets when the serial line is opened or closed. You have to take this into account.
The -hupcl part will prevent this reset most of the time. But at least one reset is always necessary, because -hupcl can be set only when the TTY is already open and at that time the Arduino has received the reset signal already. So -hupcl will "only" prevent future resets.
Third: There is NO error handling in your code. Please add code after each IO operation on the TTY which checks for errors and - the most important part - prints helpful error messages using perror(3) or similar functions.
I found a nice example by Jeff Gray of how to make a simple minicom type client using boost::asio. The original code listing can be found on the boost user group. This allows connection and communication with the Arduino like in the GNU Screen example mentioned in the original post.
The code example (below) needs to be linked with the following linker flags
-lboost_system-mt -lboost_thread-mt
...but with a bit of tweaking, some of the dependence on boost can be replaced with new C++11 standard features. I'll post revised versions as and when I get around to it. For now, this compiles and is a solid basis.
/* minicom.cpp
A simple demonstration minicom client with Boost asio
Parameters:
baud rate
serial port (eg /dev/ttyS0 or COM1)
To end the application, send Ctrl-C on standard input
*/
#include <deque>
#include <iostream>
#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <boost/asio/serial_port.hpp>
#include <boost/thread.hpp>
#include <boost/lexical_cast.hpp>
#include <boost/date_time/posix_time/posix_time_types.hpp>
#ifdef POSIX
#include <termios.h>
#endif
using namespace std;
class minicom_client
{
public:
minicom_client(boost::asio::io_service& io_service, unsigned int baud, const string& device)
: active_(true),
io_service_(io_service),
serialPort(io_service, device)
{
if (!serialPort.is_open())
{
cerr << "Failed to open serial port\n";
return;
}
boost::asio::serial_port_base::baud_rate baud_option(baud);
serialPort.set_option(baud_option); // set the baud rate after the port has been opened
read_start();
}
void write(const char msg) // pass the write data to the do_write function via the io service in the other thread
{
io_service_.post(boost::bind(&minicom_client::do_write, this, msg));
}
void close() // call the do_close function via the io service in the other thread
{
io_service_.post(boost::bind(&minicom_client::do_close, this, boost::system::error_code()));
}
bool active() // return true if the socket is still active
{
return active_;
}
private:
static const int max_read_length = 512; // maximum amount of data to read in one operation
void read_start(void)
{ // Start an asynchronous read and call read_complete when it completes or fails
serialPort.async_read_some(boost::asio::buffer(read_msg_, max_read_length),
boost::bind(&minicom_client::read_complete,
this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void read_complete(const boost::system::error_code& error, size_t bytes_transferred)
{ // the asynchronous read operation has now completed or failed and returned an error
if (!error)
{ // read completed, so process the data
cout.write(read_msg_, bytes_transferred); // echo to standard output
read_start(); // start waiting for another asynchronous read again
}
else
do_close(error);
}
void do_write(const char msg)
{ // callback to handle write call from outside this class
bool write_in_progress = !write_msgs_.empty(); // is there anything currently being written?
write_msgs_.push_back(msg); // store in write buffer
if (!write_in_progress) // if nothing is currently being written, then start
write_start();
}
void write_start(void)
{ // Start an asynchronous write and call write_complete when it completes or fails
boost::asio::async_write(serialPort,
boost::asio::buffer(&write_msgs_.front(), 1),
boost::bind(&minicom_client::write_complete,
this,
boost::asio::placeholders::error));
}
void write_complete(const boost::system::error_code& error)
{ // the asynchronous read operation has now completed or failed and returned an error
if (!error)
{ // write completed, so send next write data
write_msgs_.pop_front(); // remove the completed data
if (!write_msgs_.empty()) // if there is anthing left to be written
write_start(); // then start sending the next item in the buffer
}
else
do_close(error);
}
void do_close(const boost::system::error_code& error)
{ // something has gone wrong, so close the socket & make this object inactive
if (error == boost::asio::error::operation_aborted) // if this call is the result of a timer cancel()
return; // ignore it because the connection cancelled the timer
if (error)
cerr << "Error: " << error.message() << endl; // show the error message
else
cout << "Error: Connection did not succeed.\n";
cout << "Press Enter to exit\n";
serialPort.close();
active_ = false;
}
private:
bool active_; // remains true while this object is still operating
boost::asio::io_service& io_service_; // the main IO service that runs this connection
boost::asio::serial_port serialPort; // the serial port this instance is connected to
char read_msg_[max_read_length]; // data read from the socket
deque<char> write_msgs_; // buffered write data
};
int main(int argc, char* argv[])
{
// on Unix POSIX based systems, turn off line buffering of input, so cin.get() returns after every keypress
// On other systems, you'll need to look for an equivalent
#ifdef POSIX
termios stored_settings;
tcgetattr(0, &stored_settings);
termios new_settings = stored_settings;
new_settings.c_lflag &= (~ICANON);
new_settings.c_lflag &= (~ISIG); // don't automatically handle control-C
tcsetattr(0, TCSANOW, &new_settings);
#endif
try
{
if (argc != 3)
{
cerr << "Usage: minicom <baud> <device>\n";
return 1;
}
boost::asio::io_service io_service;
// define an instance of the main class of this program
minicom_client c(io_service, boost::lexical_cast<unsigned int>(argv[1]), argv[2]);
// run the IO service as a separate thread, so the main thread can block on standard input
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
while (c.active()) // check the internal state of the connection to make sure it's still running
{
char ch;
cin.get(ch); // blocking wait for standard input
if (ch == 3) // ctrl-C to end program
break;
c.write(ch);
}
c.close(); // close the minicom client connection
t.join(); // wait for the IO service thread to close
}
catch (exception& e)
{
cerr << "Exception: " << e.what() << "\n";
}
#ifdef POSIX // restore default buffering of standard input
tcsetattr(0, TCSANOW, &stored_settings);
#endif
return 0;
}
You should check if you have access to /dev/tty.usbmodem641. The usual way in Linux is to add the user to the proper group with adduser.
By the way, to access a classic serial port one opens /dev/ttyS0 (COM1) up through /dev/ttyS3. See for example this example in C.

Consume only part of data in boost::asio basic_stream_socket::async_read_some handler

I am new to boost::asio, so my question might be dumb - sorry if it is.
I am writing asynchronous server application with keepalive (multiple requests may be sent on single connection).
Connection handling routine is simple:
In a loop:
schedule read request with socket->async_read_some(buffer, handler)
from handler schedule write response with async_write.
The problem I am facing is that when the handler passed to async_read_some is called by one of the io_service threads, the buffer may actually contain more data than a single request (e.g. part of the next request sent by the client).
I do not want to (and cannot, if it is only part of a request) handle these remaining bytes at that moment.
I would like to do it after handling of the previous request is finished.
It would be easy to address this if I had the possibility to re-inject the unneeded remaining data back into the socket, so it would be handled on the next async_read_some call.
Is there such a possibility in boost::asio, or do I have to store the remaining data somewhere aside and handle it myself with extra code?
I think what you are looking for is asio::streambuf.
Basically, you can inspect the data accumulated in the streambuf as a char*, read as much of it as you see fit, and then tell the streambuf how much was actually processed with consume(amount).
Working code-example to parse HTTP-header as a client:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <iostream>
#include <string>

namespace asio = boost::asio;

std::string LINE_TERMINATION = "\r\n";

class Connection {
    asio::streambuf _buf;
    asio::ip::tcp::socket _socket;

public:
    Connection(asio::io_service& ioSvc, asio::ip::tcp::endpoint server)
        : _socket(ioSvc)
    {
        _socket.connect(server);
        _socket.send(boost::asio::buffer("GET / HTTP/1.1\r\nHost: localhost\r\nConnection: close\r\n\r\n"));
        readMore();
    }

    void readMore() {
        // Allocate 13 bytes space on the end of the buffer. Evil prime number to prove algorithm works.
        asio::streambuf::mutable_buffers_type buf = _buf.prepare(13);
        // Perform read
        _socket.async_read_some(buf, boost::bind(
            &Connection::onRead, this,
            asio::placeholders::bytes_transferred, asio::placeholders::error
        ));
    }

    void onRead(size_t read, const boost::system::error_code& ec) {
        if ((!ec) && (read > 0)) {
            // Mark to buffer how much was actually read
            _buf.commit(read);

            // Use some ugly parsing to extract whole lines.
            const char* data_ = boost::asio::buffer_cast<const char*>(_buf.data());
            std::string data(data_, _buf.size());
            size_t start = 0;
            size_t end = data.find(LINE_TERMINATION, start);
            while (end < data.size()) {
                std::cout << "LINE:" << data.substr(start, end - start) << std::endl;
                start = end + LINE_TERMINATION.size();
                end = data.find(LINE_TERMINATION, start);
            }
            _buf.consume(start);

            // Wait for next data
            readMore();
        }
    }
};

int main(int, char**) {
    asio::io_service ioSvc;

    // Setup a connection and run
    asio::ip::address localhost = asio::ip::address::from_string("127.0.0.1");
    Connection c(ioSvc, asio::ip::tcp::endpoint(localhost, 80));
    ioSvc.run();
}
One way of tackling this when using a reliable and ordered transport like TCP is to:
- Write a header of known size, containing the size of the rest of the message
- Write the rest of the message
And on the receiving end:
- Read just enough bytes to get the header
- Read the rest of the message and no more (see the sketch below)
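A sketch of that variable-length, two-step read; the 4-byte network-byte-order header, the class name and its members are assumptions for illustration and are not part of the fixed-length code that follows:
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <arpa/inet.h> // ntohl
#include <cstdint>
#include <vector>

namespace asio = boost::asio;

// Two-step read: a fixed 4-byte header carrying the body length,
// followed by exactly that many bytes of payload.
class LengthPrefixedReader {
public:
    explicit LengthPrefixedReader(asio::ip::tcp::socket& socket)
        : m_socket(socket) {}

    void start() { readHeader(); }

private:
    void readHeader() {
        // async_read (unlike async_read_some) completes only once the
        // whole 4-byte header has arrived.
        asio::async_read(m_socket, asio::buffer(&m_header, sizeof(m_header)),
            boost::bind(&LengthPrefixedReader::onHeader, this,
                        asio::placeholders::error));
    }

    void onHeader(const boost::system::error_code& ec) {
        if (ec) return;
        m_body.resize(ntohl(m_header));                   // header is in network byte order
        asio::async_read(m_socket, asio::buffer(m_body),  // exactly body-size bytes, no more
            boost::bind(&LengthPrefixedReader::onBody, this,
                        asio::placeholders::error));
    }

    void onBody(const boost::system::error_code& ec) {
        if (ec) return;
        // m_body now holds one complete message and nothing of the next one;
        // process it here, then queue the next header read.
        readHeader();
    }

    asio::ip::tcp::socket& m_socket;
    uint32_t m_header = 0;
    std::vector<char> m_body;
};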
If you know the messages are going to be of a fixed length, you can do something like the following:
//-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~
void
Connection::readMore()
{
    if (m_connected)
    {
        // Asynchronously read some data from the connection into the buffer.
        // Using shared_from_this() will prevent this Connection object from
        // being destroyed while data is being read.
        boost::asio::async_read(
            m_socket,
            boost::asio::buffer(
                m_readMessage.getData(),
                MessageBuffer::MESSAGE_LENGTH
            ),
            boost::bind(
                &Connection::messageBytesRead,
                shared_from_this(),
                boost::asio::placeholders::error,
                boost::asio::placeholders::bytes_transferred
            ),
            boost::bind(
                &Connection::handleRead,
                shared_from_this(),
                boost::asio::placeholders::error
            )
        );
    }
}

//-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~
std::size_t
Connection::messageBytesRead(const boost::system::error_code& _errorCode,
                             std::size_t _bytesRead)
{
    return MessageBuffer::MESSAGE_LENGTH - _bytesRead;
}

//-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~-~
void
Connection::handleRead(const boost::system::error_code& _errorCode)
{
    if (!_errorCode)
    {
        /// Do something with the populated m_readMessage here.
        readMore();
    }
    else
    {
        disconnect();
    }
}
The messageBytesRead callback will indicate to boost::asio::async_read when a complete message has been read. This snippet was pulled from an existing Connection object from running code, so I know it works...