How do you properly Await/Await Async in C++?

Sorry if the question is not clear; I'll try to explain it here.
I am working on a test project where two nodes will be communicating specialty packets to each other. As in: Node A will be sending a packet to Node B, and while Node B is generating its packet, Node B should also be processing the information from Node A. Node A should be preparing another packet for when Node B is ready.
I've been reading into std::future but I'm not sure I understand how it works. The question I have is about the large section of commented-out code. This is just pseudocode so I can try to get a better understanding of asynchronous functions in C++. I normally program in C#/Java, where asynchronous functions are fairly simple(ish). I hope the above explains what I'm trying to do.
The code below might have some other errors; this is just a quick-and-dirty attempt to convey what I am asking.
My questions are:
What is the best way to check if Node B is ready, assuming that Node B is a separate instance of this theoretical app running on a different device? If it makes any difference, this is being written in C++ on Linux.
Is this even the correct way of doing Tasks/Await/Await Async in C++? If this is incorrect, what is the proper way?
Thanks in advance.
//Excerpt from pseudocode. The file would be kw_worker_delegate.h
#include <future>
#include <iostream>
#include <string>
#include <optional>
#include <system_error>
#include "kw_network.h"
class kw_worker
{
private:
/* ... */
std::future<kw_packet> *kw_packet_receive_delegate;
std::future<bool> *kw_packet_send_delegate;
bool worker_is_ready;
std::string kw_worker_address;
network_interface *kw_network_interface;
/* Base CTOR that provides Delegates */
public:
kw_worker(std::future<kw_packet> *kwpktr, std::future<bool> *kwpkts, network_interface *kwinetf, std::string kwaddr);
bool kw_worker_execute(bool isSendOrReceive, kw_worker *active_worker, kw_worker *target_worker, std::optional<kw_packet> packet_data = std::nullopt);
};
kw_worker::kw_worker(std::future<kw_packet> *kwpktr, std::future<bool> *kwpkts, network_interface *kwinetf, std::string kwaddr)
{
kw_packet_receive_delegate = kwpktr;
kw_packet_send_delegate = kwpkts;
kw_network_interface = kwinetf;
kw_worker_address = kwaddr;
worker_is_ready = true;
}
bool kw_worker::kw_worker_execute(bool isSendOrReceive, kw_worker *active_worker, kw_worker *target_worker, std::optional<kw_packet> packet_data)
{
try
{
//if(isSendOrReceive)
//{
// IS THIS CORRECT?
//
// if(!target_worker->worker_is_ready)
// {
// std::cout << "Worker B is not ready for a data submission...\n";
// do_something_or_await();
// }else{
// if(packet_data.has_value())
// *kw_packet_send_delegate = std::async(&network_interface::send_to, kw_network_interface, target_worker->kw_worker_address, packet_data.value());
// else throw -1;
// std::cout << "The data was sent to Worker B. Waiting for response on receipt of data...\n";
// active_worker->worker_is_ready = true;
//
// //Do something else...
// }
//}else{
// if(!active_worker->worker_is_ready)
// {
// std::cout << "Worker A is not ready to receive data...\n";
// do_something_or_await();
// }else{
// *kw_packet_receive_delegate = std::async(&network_interface::receive_from, kw_network_interface, target_worker->kw_worker_address);
// std::cout << "The data was received by Worker A. A will now process the data...\n";
// active_worker->worker_is_ready = false;
//
// //Worker B will set its worker_is_ready boolean value using its send function
//
// //process the data now
// }
//}
}catch(const std::system_error &e)
{
std::cout << "There are no threads available to complete delegation. Please try again later.";
return false;
}
catch(int)
{
std::cout << "Some other error occurred when getting data from worker " << target_worker->kw_worker_address << ". Please try again later.";
return false;
}
return true;
}
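For reference, my current understanding is that the closest standard analogue to C#'s await is std::async returning a std::future, whose .get() blocks until the result is ready. A minimal sketch of that pattern, with stub send_to/receive_from functions standing in for the real network interface (the names and addresses are placeholders, not the actual kw_network API):
#include <future>
#include <iostream>
#include <string>
// Placeholder stubs standing in for the real network operations.
bool send_to(const std::string& /*address*/, const std::string& /*packet*/) { return true; }
std::string receive_from(const std::string& /*address*/) { return "reply"; }
int main()
{
    // Launch both operations in the background (roughly "async" in C# terms).
    std::future<std::string> incoming = std::async(std::launch::async, receive_from, "10.0.0.2");
    std::future<bool> sent = std::async(std::launch::async, send_to, "10.0.0.2", "packet-data");
    // ...do other work here while both operations are in flight...
    // .get() is the closest thing to "await": it blocks until the result is ready.
    if (sent.get())
        std::cout << "Sent OK, received back: " << incoming.get() << "\n";
}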

Related

Reason for losing messages over NNG sockets in raw mode

Some context to my problem:
I need to establish inter-process communication using C++ and sockets, and I picked the NNG library for that along with the nngpp C++ wrapper. I need to use the push/pull protocol, so no context handling is available to me. I wrote some code based on the raw example from the nngpp demo. The difference here is that, by using the push/pull protocol, I split this into two separate programs: one for sending and one for receiving.
Problem description:
I need to receive let's say a thousand or more messages per second. For now, all messages are captured only when I send about 50/s. That is way too slow and I do believe it can be done faster. The faster I send, the more I lose. At the moment, when sending 1000msg/s I lose about 150 msgs.
Some words about the code
The code may use the C++17 standard. It is written in an object-oriented manner; in the end I want to have a class with a "receive" method that simply gives me the received messages. For now, I just print the results on screen. Below, I supply some parts of the project with descriptions:
NOTE: msgItem is a struct like this:
struct msgItem {
nng::aio aio;
nng::msg msg;
nng::socket_view itemSock;
explicit msgItem(nng::socket_view sock) : itemSock(sock) {}
};
It is taken from the example mentioned above.
The callback function is executed when a message is received by one of the aios (the callback is passed in the constructor of the aio object). It checks whether the transmission was OK, retrieves my Payload (just a string for now), and pushes it onto a queue while a flag is set. I then want to print those messages from the queue using a separate thread.
template <typename Payload>
void ReceiverBase<Payload>::aioCallback(void *arg) try {
msgItem *msgItem = (struct msgItem *)arg;
Payload retMsg{};
auto result = msgItem->aio.result();
if (result != nng::error::success) {
throw nng::exception(result);
}
//Here we extract the message
auto msg = msgItem->aio.release_msg();
auto const *data = static_cast<typename Payload::value_type *>(msg.body().data());
auto const count = msg.body().size()/sizeof(typename Payload::value_type);
std::copy(data, data + count, std::back_inserter(retMsg));
{
std::lock_guard<std::mutex> lk(m_msgMx);
newMessageFlag = true;
m_messageQueue.push(std::move(retMsg));
}
msgItem->itemSock.recv(msgItem->aio);
} catch (const nng::exception &e) {
fprintf(stderr, "server_cb: %s: %s\n", e.who(), e.what());
} catch (...) {
fprintf(stderr, "server_cb: unknown exception\n");
}
A separate thread listens for the flag change and prints. The while loop at the end keeps the program running. I use msgCounter to count successfully received messages.
template <typename Payload>
void ReceiverBase<Payload>::start() {
auto listenerLambda = [this](){
std::string temp;
while (true) {
std::lock_guard<std::mutex> lg(m_msgMx);
if(newMessageFlag) {
temp = std::move(m_messageQueue.front());
m_messageQueue.pop();
++msgCounter;
std::cout << msgCounter << "\n";
newMessageFlag = false;
}}};
std::thread listenerThread (listenerLambda);
while (true) {
std::this_thread::sleep_for(std::chrono::microseconds(1));
}
}
This is my sender application. I tweak the frequency of message sending by changing the value in std::chrono::milliseconds(val).
int main (int argc, char *argv[])
{
std::string connection_address{"ipc:///tmp/async_demo1"};
std::string longMsg{" here normally I have some long test text"};
std::cout << "Trying connecting sender:";
StringSender sender(connection_address);
sender.setupConnection();
for (int i=0; i<1000; ++i) {
std::this_thread::sleep_for(std::chrono::milliseconds(3));
sender.send(longMsg);
}
}
And this is receiver:
int main (int argc, char *argv[])
{
std::string connection_address{"ipc:///tmp/async_demo1"};
std::cout << "Trying connecting receiver:";
StringReceiver receiver(connection_address);
receiver.setupConnection();
std::cout<< "Connection set up. \n";
receiver.start();
return 0;
}
Nothing special in those two applications, as you can see. The setup method from StringReceiver is something like this:
template <typename Payload>
bool ReceiverBase<Payload>::setupConnection() {
m_connected = false;
try {
for (size_t i = 0; i < m_parallel; ++i) {
m_msgItems.at(i) = std::make_unique<msgItem>(m_sock);
m_msgItems.at(i)->aio =
nng::aio(ReceiverBase::aioCallback, m_msgItems.at(i).get());
}
m_sock.listen(m_adress.c_str());
m_connected = true;
for (size_t i = 0; i < m_parallel; ++i) {
m_msgItems.at(i)->itemSock.recv(m_msgItems.at(i)->aio);
}
} catch (const nng::exception &e) {
printf("%s: %s\n", e.who(), e.what());
}
return m_connected;
}
Do you have any suggestions why the performance is so low? Am I using the lock_guards properly here? What I want them to do is basically lock the flag and queue so that only one side has access to them at a time.
NOTE: Adding more listener threads does not affect the performance either way.
NOTE2: newMessageFlag is atomic
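For comparison, the conventional condition-variable form of such a listener loop avoids both the busy-wait and the one-message-per-flag limitation. A minimal free-standing sketch (the m_ names mirror the members above, but this is an illustration, not a drop-in replacement):
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <queue>
#include <string>
std::mutex m_msgMx;
std::condition_variable m_msgCv;
std::queue<std::string> m_messageQueue;
void listener() {
    std::size_t msgCounter = 0;
    while (true) {
        std::unique_lock<std::mutex> lk(m_msgMx);
        // Sleep until the producer signals; no flag polling, no 1 us spin.
        m_msgCv.wait(lk, [] { return !m_messageQueue.empty(); });
        // Drain everything that arrived while we slept, not just one message.
        while (!m_messageQueue.empty()) {
            std::string msg = std::move(m_messageQueue.front());
            m_messageQueue.pop();
            std::cout << ++msgCounter << "\n";
        }
    }
}
// Producer side (e.g. at the end of aioCallback):
void publish(std::string msg) {
    {
        std::lock_guard<std::mutex> lk(m_msgMx);
        m_messageQueue.push(std::move(msg));
    }
    m_msgCv.notify_one(); // wake the listener after releasing the lock
}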

C++ multithreading closes TCP connection

I work on a C++ server where I wait for a network connection. If I get one, I put the socket into a new thread and listen for further input. But the problem is that as soon as I have the socket in a new thread, the TCP connection is disconnected. I'm using the SFML library.
Here's some code:
main.cpp:
int main() {
std::list<std::thread> user_connections;
sf::TcpListener listener;
listener.listen(PORT);
while (true)
{
sf::TcpSocket client;
listener.accept(client);
Protocol user_connection;
std::thread new_con (&Protocol::connect, &user_connection, std::ref(client));
new_con.detach();
user_connections.push_back(std::move(new_con)); // user_connections is a list
}
}
protocol.cpp:
class Protocol {
public:
void connect(sf::TcpSocket& client)
{
std::cout << "Address: " << client.getRemoteAddress() << ":" << client.getRemotePort() << std::endl;
}
};
This prints out:
Address: 0.0.0.0:0
And if I try to send any kind of message I get the status 4 which is according to the documentation disconnected.
EDIT:
According to @Ted Lyngmo it's because I need to put client in a list, because otherwise it goes out of scope. Now if I try to put it in a list via:
std::list<sf::TcpSocket> clients; // executed before while loop
// [...]
clients.push_back(client); // in the while loop
I get the error: (pastebin).
This is something built on your current threaded code. It may be a good idea to use a single-threaded design instead and use sf::SocketSelector to wait for events on the listener and all the connected clients.
In this lazy solution, disconnected clients will not be removed from the server's list of clients until a new client is connected.
I've tried to explain it with comments in the code which is an echoing kind of server, so you can telnet to it, send messages and get them back.
#include <SFML/Network.hpp>
#include <atomic>
#include <iostream>
#include <list>
#include <thread>
constexpr uint16_t PORT = 2048; // what you have in your code.
// A simple struct to keep a client and thread
struct client_thread {
sf::TcpSocket client{};
std::thread thread{};
// The main thread can check "done" to remove this client_thread from its list:
std::atomic<bool> done{false};
~client_thread() {
// instead of detaching, join()
if(thread.joinable()) thread.join();
}
};
// the connect function gets a reference to a client_thread instead
void connect(client_thread& clith) {
constexpr std::size_t BufSize = 1024;
auto& [client, thread, done] = clith; // for convenience
std::cout << "thread: Address: " << client.getRemoteAddress() << ":"
<< client.getRemotePort() << std::endl;
std::string buffer(BufSize, '\0');
std::size_t received;
while(client.receive(buffer.data(), buffer.size(), received) == sf::Socket::Done) {
// remove ASCII control chars (cr and newline etc.)
while(received && buffer[received - 1] < ' ') --received;
buffer.resize(received);
std::cout << buffer << std::endl;
// send something back
buffer = "You sent >" + buffer + "<\n";
client.send(buffer.c_str(), buffer.size());
// restore the size
buffer.resize(BufSize);
}
std::cout << "thread: client disconnected\n";
client.disconnect();
// set done to true so the main thread can remove the client_thread
done = true;
}
int main() {
sf::TcpListener listener;
// check that listening actually works
if(listener.listen(PORT) != sf::Socket::Done) return 1;
// now a list of client_thread instead:
std::list<client_thread> user_connections;
while(true) {
// create a client_thread to use when listening
auto& clith = user_connections.emplace_back();
auto& [client, thread, _] = clith; // for convenience
std::cout << "main: listening ...\n";
sf::Socket::Status status = listener.accept(client);
if(status == sf::Socket::Done) {
std::cout << "main: got connection\n";
thread = std::thread(connect, std::ref(clith));
} else {
std::cout << "main: accept not done\n";
}
// remove disconnected clients, pre C++20
for(auto it = user_connections.begin(); it != user_connections.end();) {
// check the atomic bool in all threads
if(it->done) {
std::cout << "main: removing old connection\n";
it = user_connections.erase(it);
} else {
++it;
}
}
// remove disconnected clients, >= C++20
//
// std::erase_if(user_connections,
// [](auto& clith) -> bool { return clith.done; });
}
}
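For completeness, the single-threaded sf::SocketSelector design mentioned at the top of this answer could look roughly like this (an untested sketch; sf::TcpSocket is neither copyable nor movable, hence the std::list and emplace_back):
#include <SFML/Network.hpp>
#include <iostream>
#include <list>
int main() {
    constexpr uint16_t PORT = 2048;
    sf::TcpListener listener;
    if (listener.listen(PORT) != sf::Socket::Done) return 1;
    std::list<sf::TcpSocket> clients;
    sf::SocketSelector selector;
    selector.add(listener);
    while (true) {
        if (!selector.wait()) continue;      // block until something is readable
        if (selector.isReady(listener)) {    // new connection pending
            auto& client = clients.emplace_back(); // construct the socket in place
            if (listener.accept(client) == sf::Socket::Done)
                selector.add(client);
            else
                clients.pop_back();
        }
        // check all connected clients for incoming data
        for (auto it = clients.begin(); it != clients.end();) {
            if (selector.isReady(*it)) {
                char buf[1024];
                std::size_t received = 0;
                if (it->receive(buf, sizeof buf, received) == sf::Socket::Done) {
                    std::cout.write(buf, received);
                    ++it;
                } else {                     // disconnected: drop the client
                    selector.remove(*it);
                    it = clients.erase(it);
                }
            } else {
                ++it;
            }
        }
    }
}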
Edit regarding your edited question where you're trying to put the client in a list:
You're trying to copy the sf::TcpSocket and it's not copyable. What's worse, it's not even moveable. The reason the code in my answer works is because it avoids both copying and moving by using std::list::emplace_back to construct the element in place in the list.
Apparently both sf::TcpSocket client and Protocol user_connection are destroyed. It's no use to only keep the thread alive; your thread only holds references to client and user_connection, but both of them are destroyed soon after your thread is created (and maybe before it has even started running).
I read a little bit about the SFML library and unfortunately, at least the client, which is an object of TcpSocket, is neither copyable nor movable. The SFML library must be a very old library. Any modern socket library will design sockets to be at least movable, meaning that you can move your socket into the thread, or move it into the std::list or std::vector you created.
So using the SFML library, which was written without modern C++11 support (move semantics were introduced in C++11), together with the C++11 library (std::thread), will be quite painful.
You can probably use std::shared_ptr to hold a newly created protocol & client, and pass shared_ptr into thread or into the list you created.
I don't know what Protocol exactly does; rough pseudocode is as follows,
std::shared_ptr<TcpSocket> client = std::make_shared<TcpSocket>();
listener.accept(*client);
std::shared_ptr<Protocol> protocol = std::make_shared<Protocol>();
// copy the pointer into thread, they will be deleted after the thread is done
std::thread new_con ( [client, protocol] () { protocol->connect(*client); } );
or, protocol can probably be defined in the thread,
std::shared_ptr<TcpSocket> client = std::make_shared<TcpSocket>();
listener.accept(*client);
std::thread new_con ( [client] () {
Protocol protocol;
protocol.connect(*client);
} );

No need for mutex, race conditions are not always bad, are they?

I'm getting this crazy idea that mutex synchronization can be omitted in some cases where most of us would typically want, and would use, mutex synchronization.
Ok suppose you have this case:
Buffer *buffer = new Buffer(); // Initialized by main thread;
...
// The call to buffer's `accumulateSomeData` method is thread-safe
// and is heavily executed by many workers from different threads simultaneously.
buffer->accumulateSomeData(data); // While the code inside is equivalent to vector->push_back()
...
// All lines of code below are executed by a totally separate timer
// thread that executes once per second until the program is finished.
auto bufferPrev = buffer; // A temporary pointer to previous instance
// Switch buffers, put old one offline
buffer = new Buffer();
// As of this line of code all the threads will switch to new instance
// of buffer. Which yields that calls to `accumulateSomeData`
// are executed over new buffer instance. Which also means that old
// instance is kinda taken offline and can be safely operated from a
// timer thread.
bufferPrev->flushToDisk(); // Ok, so we can safely flush
delete bufferPrev;
It's obvious that while buffer = new Buffer(); executes there can still be uncompleted operations adding data to the previous instance, but since disk operations are slow we get a natural kind of barrier.
So how do you estimate the risk of running such code without mutex synchronisation?
Edit
It's so hard these days to ask a question on SO without getting mugged by a couple of angry guys for no reason.
Here is my code, correct in all respects:
#include <cassert>
#include "leveldb/db.h"
#include "leveldb/filter_policy.h"
#include <iostream>
#include <boost/asio.hpp>
#include <boost/chrono.hpp>
#include <boost/thread.hpp>
#include <boost/filesystem.hpp>
#include <boost/lockfree/stack.hpp>
#include <boost/lockfree/queue.hpp>
#include <boost/uuid/uuid.hpp> // uuid class
#include <boost/uuid/uuid_io.hpp> // streaming operators etc.
#include <boost/uuid/uuid_generators.hpp> // generators
#include <CommonCrypto/CommonDigest.h>
using namespace std;
using namespace boost::filesystem;
using boost::mutex;
using boost::thread;
enum FileSystemItemType : char {
Unknown = 1,
File = 0,
Directory = 4,
FileLink = 2,
DirectoryLink = 6
};
// Structure packing optimizations are used in the code below
// http://www.catb.org/esr/structure-packing/
class FileSystemScanner {
private:
leveldb::DB *database;
boost::asio::thread_pool pool;
leveldb::WriteBatch *batch;
std::atomic<int> queue_size;
std::atomic<int> workers_online;
std::atomic<int> entries_processed;
std::atomic<int> directories_processed;
std::atomic<uintmax_t> filesystem_usage;
boost::lockfree::stack<boost::filesystem::path*, boost::lockfree::fixed_sized<false>> directories_pending;
void work() {
workers_online++;
boost::filesystem::path *item;
if (directories_pending.pop(item) && item != NULL)
{
queue_size--;
try {
boost::filesystem::directory_iterator completed;
boost::filesystem::directory_iterator iterator(*item);
while (iterator != completed)
{
bool isFailed = false, isSymLink, isDirectory;
boost::filesystem::path path = iterator->path();
try {
isSymLink = boost::filesystem::is_symlink(path);
isDirectory = boost::filesystem::is_directory(path);
} catch (const boost::filesystem::filesystem_error& e) {
isFailed = true;
isSymLink = false;
isDirectory = false;
}
if (!isFailed)
{
if (!isSymLink) {
if (isDirectory) {
directories_pending.push(new boost::filesystem::path(path));
directories_processed++;
boost::asio::post(this->pool, [this]() { this->work(); });
queue_size++;
} else {
filesystem_usage += boost::filesystem::file_size(iterator->path());
}
}
}
int result = ++entries_processed;
if (result % 10000 == 0) {
cout << entries_processed.load() << ", " << directories_processed.load() << ", " << queue_size.load() << ", " << workers_online.load() << endl;
}
++iterator;
}
delete item;
} catch (boost::filesystem::filesystem_error &e) {
}
}
workers_online--;
}
public:
FileSystemScanner(int threads, leveldb::DB* database):
pool(threads), queue_size(), workers_online(), entries_processed(), directories_processed(), directories_pending(0), database(database)
{
}
void scan(string path) {
queue_size++;
directories_pending.push(new boost::filesystem::path(path));
boost::asio::post(this->pool, [this]() { this->work(); });
}
void join() {
pool.join();
}
};
int main(int argc, char* argv[])
{
leveldb::Options opts;
opts.create_if_missing = true;
opts.compression = leveldb::CompressionType::kSnappyCompression;
opts.filter_policy = leveldb::NewBloomFilterPolicy(10);
leveldb::DB* db;
leveldb::DB::Open(opts, "/temporary/projx", &db);
FileSystemScanner scanner(std::thread::hardware_concurrency(), db);
scanner.scan("/");
scanner.join();
return 0;
}
My question is: can I omit synchronization for batch, which I'm not using yet? Since it's thread-safe, shouldn't it be enough to just switch buffers before actually committing any results to disk?
You have a serious misunderstanding. You think that when you have a race condition, there is some specific list of things that can happen. This is not true. A race condition can cause any kind of failure, including crashes. So absolutely, definitely not. You absolutely cannot do this.
That said, even with this misunderstanding, this is still a disaster.
Consider:
buffer = new Buffer();
Suppose this is implemented by first allocating memory, then setting buffer to point to that memory, and then calling the constructor. Other threads may operate on the unconstructed buffer. Boom.
Now, you can fix this. But it's just one of the many ways I can imagine this screwing up. And it can screw up in ways that we're not clever enough to imagine. So, for all that is holy, do not even think of doing this ever again.
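One of those fixes, for illustration only: making the pointer itself atomic at least gives the publication a defined order, though it does nothing for threads that loaded the old pointer and are still inside accumulateSomeData when it gets deleted (a sketch with a dummy Buffer standing in for the real class):
#include <atomic>
#include <string>
struct Buffer { // dummy stand-in for the real class
    void accumulateSomeData(const std::string&) {}
    void flushToDisk() {}
};
std::atomic<Buffer*> buffer{new Buffer()};
// Worker threads: acquire-load so they see a fully constructed object.
void worker(const std::string& data) {
    buffer.load(std::memory_order_acquire)->accumulateSomeData(data);
}
// Timer thread, once per second: publish the new buffer, then flush the old one.
void rotate() {
    Buffer* prev = buffer.exchange(new Buffer(), std::memory_order_acq_rel);
    // UNSOLVED: a worker may still be inside accumulateSomeData on prev here.
    prev->flushToDisk();
    delete prev;
}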

strange behavior in concurrently executing a function for objects in queue

My program has a shared queue, and is largely divided into two parts:
one for pushing instances of class request to the queue, and the other accessing multiple request objects in the queue and processing these objects. request is a very simple class (just for test) with a string req field.
I am working on the second part, and in doing so, I want to keep one scheduling thread, and multiple (in my example, two) executing threads.
The reason I want to have a separate scheduling thread is to reduce the number of lock and unlock operations the multiple executing threads need to access the queue.
I am using the pthread library, and my scheduling and executing functions look like the following:
void * sched(void* elem) {
queue<request> *qr = static_cast<queue<request>*>(elem);
pthread_t pt1, pt2;
if(pthread_mutex_lock(&mut) == 0) {
if(!qr->empty()) {
int result1 = pthread_create(&pt1, NULL, execQueue, &(qr->front()));
if (result1 != 0) cout << "error sched1" << endl;
qr->pop();
}
if(!qr->empty()) {
int result2 = pthread_create(&pt2, NULL, execQueue, &(qr->front()));
if (result2 != 0) cout << "error sched2" << endl;
qr->pop();
}
pthread_join(pt1, NULL);
pthread_join(pt2, NULL);
pthread_mutex_unlock(&mut);
}
return 0;
}
void * execQueue(void* elem) {
request *r = static_cast<request*>(elem);
cout << "req is: " << r->req << endl; // req is a string field
return 0;
}
Simply, each execQueue call has one thread to be executed on, and just outputs the request passed to it through the void* elem parameter.
sched is called in main() with a thread (in case you're wondering how, it is called in main() like below):
pthread_t schedpt;
int schresult = pthread_create(&schedpt, NULL, sched, &q);
if (schresult != 0) cout << "error sch" << endl;
pthread_join(schedpt, NULL);
and the sched function itself creates multiple (here, two) executing threads, pops requests from the queue, and executes the requests by calling execQueue on multiple threads (pthread_create and then pthread_join).
The problem is the weird behavior by the program.
When I checked the size and the elements in the queue without creating threads and calling them on multiple threads, they were exactly what I expected.
However, when I run the program with multiple threads, it prints out
1 items are in the queue.
2 items are in the queue.
req is:
req is: FIRST! �(x'�j|1��rj|p�rj|1����FIRST!�'�j|!�'�j|�'�j| P��(�(��(1���i|p��i|
with the last line constantly varying.
The desired output is
1 items are in the queue.
2 items are in the queue.
req is: FIRST
req is: FIRST
I guess either the way I call the execQueue on multiple threads, or the way I pop() is wrong, but I could not figure out the problem, nor could I find any source to refer to for a correct usage.
Please help me on this. Bear with me for clumsy use of pthread, as I am a beginner.
Your queue holds objects, not pointers to objects. You can take the address of the object at the front of the queue via operator&() as you are doing, but as soon as you pop the queue that object is gone and that address is no longer valid. Of course, sched doesn't care, but the execQueue function you sent that address to certainly does.
The most immediate fix for your code is this:
Change this:
pthread_create(&pt1, NULL, execQueue, &(qr->front()));
To this:
// send a dynamic *copy* of the front queue node to the thread
pthread_create(&pt1, NULL, execQueue, new request(qr->front()));
And your thread proc should be changed to this:
void * execQueue(void* elem)
{
request *r = static_cast<request*>(elem);
cout << "req is: " << r->req << endl; // req is a string field
delete r;
return nullptr;
}
That said, I can think of better ways to do this, but this should address your immediate problem, assuming your request class is copy-constructible and, if it has dynamic members, follows the Rule of Three.
And here's your mildly sanitized C++11 version, just because I needed a simple test thingie for an MSVC2013 installation :)
See it Live On Coliru
#include <iostream>
#include <thread>
#include <future>
#include <mutex>
#include <queue>
#include <string>
struct request { std::string req; };
std::queue<request> q;
std::mutex queue_mutex;
void execQueue(request r) {
std::cout << "req is: " << r.req << std::endl; // req is a string field
}
bool sched(std::queue<request>& qr) {
std::thread pt1, pt2;
{
std::lock_guard<std::mutex> lk(queue_mutex);
if (!qr.empty()) {
pt1 = std::thread(&execQueue, std::move(qr.front()));
qr.pop();
}
if (!qr.empty()) {
pt2 = std::thread(&execQueue, std::move(qr.front()));
qr.pop();
}
}
if (pt1.joinable()) pt1.join();
if (pt2.joinable()) pt2.join();
return true;
}
int main()
{
auto fut = std::async(sched, std::ref(q));
if (!fut.get())
std::cout << "error" << std::endl;
}
Of course it doesn't actually do much now (because there are no tasks in the queue).

Two-way C++ communication over serial connection

I am trying to write a really simple C++ application to communicate with an Arduino. I would like to send the Arduino a character that it sends back immediately. The Arduino code that I took from a tutorial looks like this:
void setup()
{
Serial.begin(9600);
}
void loop()
{
//Have the Arduino wait to receive input
while (Serial.available()==0);
//Read the input
char val = Serial.read();
//Echo
Serial.println(val);
}
I can communicate with the Arduino easily using GNU screen, so I know that everything is working fine with the basic communication:
$ screen /dev/tty.usbmodem641 9600
The (broken) C++ code that I have looks like this:
#include <fstream>
#include <iostream>
int main()
{
std::cout << "Opening fstream" << std::endl;
std::fstream file("/dev/tty.usbmodem641");
std::cout << "Sending integer" << std::endl;
file << 5 << std::endl; // endl does flush, which may be important
std::cout << "Data Sent" << std::endl;
std::cout << "Awaiting response" << std::endl;
std::string response;
file >> response;
std::cout << "Response: " << response << std::endl;
return 0;
}
It compiles fine, but when I run it, some lights flash on the Arduino and the terminal just hangs at:
Opening fstream
Where am I going wrong?
There are three points:
First: You don't initialize the serial port (TTY) on the Linux side. Nobody knows in what state it is.
To do this in your program you must use tcgetattr(3) and tcsetattr(3). You can find the required parameters by using these keywords on this site, the Arduino site, or Google. But just for quick testing I propose issuing this command before you run your own program:
stty -F /dev/tty.usbmodem641 sane raw pass8 -echo -hupcl clocal 9600
Especially the missing clocal might prevent you from opening the TTY.
Second: When the device is open, you should wait a little before sending anything. By default the Arduino resets when the serial line is opened or closed. You have to take this into account.
The -hupcl part will prevent this reset most of the time. But at least one reset is always necessary, because -hupcl can be set only when the TTY is already open and at that time the Arduino has received the reset signal already. So -hupcl will "only" prevent future resets.
Third: There is NO error handling in your code. Please add code after each I/O operation on the TTY which checks for errors and - the most important part - prints helpful error messages using perror(3) or similar functions.
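To sketch what that tcgetattr(3)/tcsetattr(3) initialization might look like in the program itself, roughly mirroring the stty flags above and with the error checks the third point demands (details such as the device path will need adjusting):
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>
// Open and configure a serial device; returns the fd or -1 on error.
int open_serial(const char* dev) {
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return -1; }
    termios tio{};
    if (tcgetattr(fd, &tio) != 0) { perror("tcgetattr"); close(fd); return -1; }
    cfmakeraw(&tio);               // raw mode, 8-bit chars (like "raw pass8")
    tio.c_cflag |= CLOCAL | CREAD; // ignore modem control lines ("clocal")
    tio.c_cflag &= ~HUPCL;         // don't reset the Arduino on close ("-hupcl")
    cfsetispeed(&tio, B9600);      // 9600 baud, matching Serial.begin(9600)
    cfsetospeed(&tio, B9600);
    if (tcsetattr(fd, TCSANOW, &tio) != 0) { perror("tcsetattr"); close(fd); return -1; }
    return fd;
}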
I found a nice example by Jeff Gray of how to make a simple minicom type client using boost::asio. The original code listing can be found on the boost user group. This allows connection and communication with the Arduino like in the GNU Screen example mentioned in the original post.
The code example (below) needs to be linked with the following linker flags
-lboost_system-mt -lboost_thread-mt
...but with a bit of tweaking, some of the dependence on boost can be replaced with new C++11 standard features. I'll post revised versions as and when I get around to it. For now, this compiles and is a solid basis.
/* minicom.cpp
A simple demonstration minicom client with Boost asio
Parameters:
baud rate
serial port (eg /dev/ttyS0 or COM1)
To end the application, send Ctrl-C on standard input
*/
#include <deque>
#include <iostream>
#include <boost/bind.hpp>
#include <boost/asio.hpp>
#include <boost/asio/serial_port.hpp>
#include <boost/thread.hpp>
#include <boost/lexical_cast.hpp>
#include <boost/date_time/posix_time/posix_time_types.hpp>
#ifdef POSIX
#include <termios.h>
#endif
using namespace std;
class minicom_client
{
public:
minicom_client(boost::asio::io_service& io_service, unsigned int baud, const string& device)
: active_(true),
io_service_(io_service),
serialPort(io_service, device)
{
if (!serialPort.is_open())
{
cerr << "Failed to open serial port\n";
return;
}
boost::asio::serial_port_base::baud_rate baud_option(baud);
serialPort.set_option(baud_option); // set the baud rate after the port has been opened
read_start();
}
void write(const char msg) // pass the write data to the do_write function via the io service in the other thread
{
io_service_.post(boost::bind(&minicom_client::do_write, this, msg));
}
void close() // call the do_close function via the io service in the other thread
{
io_service_.post(boost::bind(&minicom_client::do_close, this, boost::system::error_code()));
}
bool active() // return true if the socket is still active
{
return active_;
}
private:
static const int max_read_length = 512; // maximum amount of data to read in one operation
void read_start(void)
{ // Start an asynchronous read and call read_complete when it completes or fails
serialPort.async_read_some(boost::asio::buffer(read_msg_, max_read_length),
boost::bind(&minicom_client::read_complete,
this,
boost::asio::placeholders::error,
boost::asio::placeholders::bytes_transferred));
}
void read_complete(const boost::system::error_code& error, size_t bytes_transferred)
{ // the asynchronous read operation has now completed or failed and returned an error
if (!error)
{ // read completed, so process the data
cout.write(read_msg_, bytes_transferred); // echo to standard output
read_start(); // start waiting for another asynchronous read again
}
else
do_close(error);
}
void do_write(const char msg)
{ // callback to handle write call from outside this class
bool write_in_progress = !write_msgs_.empty(); // is there anything currently being written?
write_msgs_.push_back(msg); // store in write buffer
if (!write_in_progress) // if nothing is currently being written, then start
write_start();
}
void write_start(void)
{ // Start an asynchronous write and call write_complete when it completes or fails
boost::asio::async_write(serialPort,
boost::asio::buffer(&write_msgs_.front(), 1),
boost::bind(&minicom_client::write_complete,
this,
boost::asio::placeholders::error));
}
void write_complete(const boost::system::error_code& error)
{ // the asynchronous write operation has now completed or failed and returned an error
if (!error)
{ // write completed, so send next write data
write_msgs_.pop_front(); // remove the completed data
if (!write_msgs_.empty()) // if there is anything left to be written
write_start(); // then start sending the next item in the buffer
}
else
do_close(error);
}
void do_close(const boost::system::error_code& error)
{ // something has gone wrong, so close the socket & make this object inactive
if (error == boost::asio::error::operation_aborted) // if this call is the result of a timer cancel()
return; // ignore it because the connection cancelled the timer
if (error)
cerr << "Error: " << error.message() << endl; // show the error message
else
cout << "Error: Connection did not succeed.\n";
cout << "Press Enter to exit\n";
serialPort.close();
active_ = false;
}
private:
bool active_; // remains true while this object is still operating
boost::asio::io_service& io_service_; // the main IO service that runs this connection
boost::asio::serial_port serialPort; // the serial port this instance is connected to
char read_msg_[max_read_length]; // data read from the socket
deque<char> write_msgs_; // buffered write data
};
int main(int argc, char* argv[])
{
// on Unix POSIX based systems, turn off line buffering of input, so cin.get() returns after every keypress
// On other systems, you'll need to look for an equivalent
#ifdef POSIX
termios stored_settings;
tcgetattr(0, &stored_settings);
termios new_settings = stored_settings;
new_settings.c_lflag &= (~ICANON);
new_settings.c_lflag &= (~ISIG); // don't automatically handle control-C
tcsetattr(0, TCSANOW, &new_settings);
#endif
try
{
if (argc != 3)
{
cerr << "Usage: minicom <baud> <device>\n";
return 1;
}
boost::asio::io_service io_service;
// define an instance of the main class of this program
minicom_client c(io_service, boost::lexical_cast<unsigned int>(argv[1]), argv[2]);
// run the IO service as a separate thread, so the main thread can block on standard input
boost::thread t(boost::bind(&boost::asio::io_service::run, &io_service));
while (c.active()) // check the internal state of the connection to make sure it's still running
{
char ch;
cin.get(ch); // blocking wait for standard input
if (ch == 3) // ctrl-C to end program
break;
c.write(ch);
}
c.close(); // close the minicom client connection
t.join(); // wait for the IO service thread to close
}
catch (exception& e)
{
cerr << "Exception: " << e.what() << "\n";
}
#ifdef POSIX // restore default buffering of standard input
tcsetattr(0, TCSANOW, &stored_settings);
#endif
return 0;
}
You should check that you have access to /dev/tty.usbmodem641. The usual way on Linux is to add the user to the proper group (often dialout) with adduser.
By the way, I know that to access the legacy serial ports one opens /dev/ttyS0 (for COM1) through /dev/ttyS3. See for example this example in C.