How to read from serial device in C++

I'm trying to figure out how I should read/write to my Arduino using serial communication in Linux C++. Currently I'm trying to read a response from my Arduino that I "trigger" with
echo "g" > /dev/ttyACM0
I've tried looking at the response from my Arduino in my terminal, by using the following command:
tail -f /dev/ttyACM0
This is working as it should. Now I want to do the same thing in a C++ application, so I made this test:
void testSerialComm()
{
    std::string port = "/dev/ttyACM0";
    int device = open(port.c_str(), O_RDWR | O_NOCTTY | O_SYNC);
    std::string response;
    char buffer[64];
    do
    {
        int n = read(device, buffer, sizeof buffer);
        if (n > 0) {
            response += std::string(buffer);
            std::cout << buffer;
        }
    } while (buffer[0] != 'X'); // 'X' means end of transmission
    std::cout << "Response is: " << std::endl;
    std::cout << response << std::endl;
    close(device);
}
After a few "messages", the transmission gets a little messed up: my test application prints the response characters in a scrambled order. I tried configuring the /dev/ttyACM0 device with this command:
stty -F /dev/ttyUSB0 cs8 115200 ignbrk -brkint -icrnl -imaxbel -opost -onlcr -isig -icanon -iexten -echo -echoe -echok -echoctl -echoke noflsh -ixon -crtscts
No dice. Can someone help me understand how to communicate with my Arduino in C++?

The shown code opens /dev/ttyACM0, attempts to seek to the end of this "file", and based on the resulting file position allocates an old-fashioned, C-style memory buffer.
The problem with this approach is that you can only seek through regular, plain, garden-variety files. /dev/ttyACM0 is not a regular file. It's a device. Although some devices are seekable, this one isn't. Which, according to the comments, you've discovered independently.
Serial port devices are readable and writable. They are not seekable. There's no such thing as "seek"ing on a serial port. That makes no sense.
To read from the serial port you just read it, that's all. The operating system maintains an internal buffer of some size, so if some characters have already been received over the serial port, the initial read will return them all (provided that the read() buffer size is sufficiently large). If you pass a 1024-character buffer, for example, and five characters have already arrived on the serial port, read() will return 5 to indicate that.
If no characters have been received, and you opened the serial port as a blocking device, read() will block until at least one character has been read from the serial port, and then return.
So, in order to read from the serial port all you have to do is read from it, until you've decided that you've read all there is to read. How do you decide that? That's up to you. You may decide to read only until a newline character arrives. Or you may decide to read until a fixed number of characters have been read. That's entirely up to you.
And, of course, if the hardware is suitably arranged and you configure the serial port device to respect the serial port control pins, then, depending on your configuration, when the DCD and/or DSR pins signal that the serial port device is no longer available, your read() will immediately return 0 to indicate a pseudo end-of-file condition on the serial port device. That's something you will also need to implement the necessary logic to handle.
P.S. Neither C-style stdio nor C++-style iostreams work very well with character devices, due to their own internal buffering. When working with serial ports it is better to use open(2), read(2), write(2), and close(2) directly. But all of the above still applies.
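To make the answer concrete, here is a minimal sketch of that approach: open the port with open(2), configure it with termios (roughly the stty flags from the question, expressed in code), and read in a loop that appends only the n bytes read() actually returned, stopping at the 'X' end-of-transmission marker the question uses. It is a sketch under those assumptions, not a drop-in fix.
#include <fcntl.h>     // open
#include <termios.h>   // termios, tcgetattr, tcsetattr, cfmakeraw
#include <unistd.h>    // read, close
#include <cstdio>      // perror
#include <iostream>
#include <string>

// Sketch: read from /dev/ttyACM0 until an 'X' arrives, using only the bytes
// that each read() call actually returned.
int main()
{
    const char *port = "/dev/ttyACM0";
    int fd = open(port, O_RDWR | O_NOCTTY | O_SYNC);
    if (fd < 0) {
        perror(port);
        return 1;
    }

    termios tty{};
    if (tcgetattr(fd, &tty) != 0) {
        perror("tcgetattr");
        return 1;
    }
    cfmakeraw(&tty);                  // raw mode: no canonical input, no echo, 8-bit characters
    cfsetispeed(&tty, B115200);
    cfsetospeed(&tty, B115200);
    tty.c_cflag |= (CLOCAL | CREAD);
    tty.c_cc[VMIN]  = 1;              // block until at least one byte is available
    tty.c_cc[VTIME] = 0;
    if (tcsetattr(fd, TCSANOW, &tty) != 0) {
        perror("tcsetattr");
        return 1;
    }

    std::string response;
    char buffer[64];
    for (;;) {
        ssize_t n = read(fd, buffer, sizeof buffer);
        if (n < 0) { perror("read"); break; }
        if (n == 0) break;                                // pseudo end-of-file
        response.append(buffer, static_cast<size_t>(n));  // append only what was actually read
        if (response.find('X') != std::string::npos)      // 'X' means end of transmission
            break;
    }

    std::cout << "Response is:\n" << response << std::endl;
    close(fd);
    return 0;
}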

Related

Qt Serial Communication not sending all data

I am writing a Qt application for serial communication with a Qorvo MDEK-1001. All built-in serial commands I've had to use work fine except for one: aurs n k, where n and k are integers corresponding to the desired rate of data transmission (e.g. "aurs 1 1\r"). The write function is:
void MainWindow::serialWrite(const QByteArray &command)
{
    if (mdek->isOpen())
    {
        mdek->write(command);
        qDebug() << "Command: " << command;
        //mdek->flush();
    }
}
If I send the command "aurs 1 1\r", it doesn't actually get sent to the device until I send another command, for some reason. So if I subsequently send the "quit" command to the device, the data returned from the device is "aurs 1quit", which registers as an unknown command. Any assistance getting this command to send properly is appreciated.
I've tried a bunch of stuff (setting bytes to write as second parameter in write(), using QDataStream, appending individual hex bytes onto QByteArray and writing that), but nothing has worked. This is the first time I've had to use Qt's serial communication software so I've probably missed something obvious.
On Linux Manjaro (same thing happens on Windows 8.1)
Connection settings: 8 data bits, Baud: 115200, No Flow Control, No Parity, One Stop Bit
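Building on the commented-out mdek->flush() in the question, here is a minimal sketch of the same write with the buffered bytes pushed out explicitly. flush() and waitForBytesWritten() are standard QSerialPort/QIODevice calls, but treating them as the fix for this particular device is only an assumption.
#include <QSerialPort>
#include <QDebug>
#include "mainwindow.h"   // assumed header declaring MainWindow and its QSerialPort* member 'mdek'

// Sketch: same serialWrite(), but the buffered bytes are flushed out right away.
void MainWindow::serialWrite(const QByteArray &command)
{
    if (mdek->isOpen())
    {
        qDebug() << "Command:" << command;
        mdek->write(command);                 // queues the bytes in QSerialPort's write buffer
        mdek->flush();                        // hand them to the OS driver without waiting
        if (!mdek->waitForBytesWritten(100))  // or block up to 100 ms until they are actually written
            qDebug() << "Write timed out for command:" << command;
    }
}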

Can't read from serial device

I'm trying to write a library to read data from a serial device, a Mipex-02 gas sensor. Unfortunately, my code doesn't seem to open the serial connection properly, and I can't figure out why.
The full source code is available on GitHub; specifically, here's the configuration of the serial connection:
MipexSensor::MipexSensor(string devpath) {
    if (!check_dev_path(devpath))
        throw "Invalid devpath";
    this->path = devpath;
    this->debugout_ = false;
    this->sensor.SetBaudRate(SerialStreamBuf::BAUD_9600);
    this->sensor.SetCharSize(SerialStreamBuf::CHAR_SIZE_8);
    this->sensor.SetNumOfStopBits(1);
    this->sensor.SetParity(SerialStreamBuf::PARITY_NONE);
    this->sensor.SetFlowControl(SerialStreamBuf::FLOW_CONTROL_NONE);
    this->last_concentration = this->last_um = this->last_ur = this->last_status = 0;
    cout << "Connecting to " << devpath << endl;
    this->sensor.Open(devpath);
}
I think the meaning of the enums here is obvious enough. The values are from the instruction manual:
UART characteristics:
exchange rate – 9600 baud,
8-bit message,
1 stop bit,
without check for parity
So at first I was using interceptty to test it, and it worked perfectly fine. But when I tried to connect to the device directly, I couldn't read anything. The RX LED flashes on the device, so clearly the program manages to send something, but, unlike with interceptty, the TX LED never flashes.
So I don't know whether it's sending data incorrectly or not sending all of it, and I can't even sniff the connection, since the problem only happens when interceptty isn't in the middle. Interceptty's command line is interceptty -s 'ispeed 9600 ospeed 9600 -parenb -cstopb -ixon cs8' -l /dev/ttyUSB0 (-s options are passed to stty), which in theory is the same set of options as in the code.
Thanks in advance.
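For comparison, here is the interceptty stty line from the question (ispeed/ospeed 9600, -parenb, -cstopb, -ixon, cs8) expressed as a plain termios setup. It is only a sketch for cross-checking what the SerialStream configuration should end up applying to /dev/ttyUSB0, not part of the libserial code above.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>

// Sketch: 9600 baud, 8 data bits, no parity, 1 stop bit, no XON/XOFF flow
// control, mirroring the stty options passed to interceptty.
int configure_mipex_port(const char *dev)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror(dev); return -1; }

    termios tio{};
    if (tcgetattr(fd, &tio) != 0) { perror("tcgetattr"); close(fd); return -1; }

    cfsetispeed(&tio, B9600);          // ispeed 9600
    cfsetospeed(&tio, B9600);          // ospeed 9600
    tio.c_cflag &= ~PARENB;            // -parenb : no parity
    tio.c_cflag &= ~CSTOPB;            // -cstopb : one stop bit
    tio.c_cflag &= ~CSIZE;
    tio.c_cflag |= CS8;                // cs8     : 8 data bits
    tio.c_iflag &= ~(IXON | IXOFF);    // -ixon   : no software flow control
    tio.c_cflag |= (CLOCAL | CREAD);

    if (tcsetattr(fd, TCSANOW, &tio) != 0) { perror("tcsetattr"); close(fd); return -1; }
    return fd;                         // ready for read()/write()
}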

How to receive more than 40Kb in C++ socket using read()

I am developing a client-server application (TCP) in Linux using C++. This application is in charge of testing the network performance.
The connection between client and server is established only once, and then data are transmitted/received using write()/read() with a protocol of our own.
When the data exceeds 40Kb, I receive just a part of the data, and only once (i.e. I receive about 48KB).
Please find down the relevant part of the code:
while (1) {
    servMtx.lock();
    ...
    serv_bytes = (byte *) malloc(size_bytes);
    n = read(newsockfd, serv_bytes, size_bytes);
    if (n != (int)size_bytes) {
        std::cerr << "No enough data available for msg. Received just: " << n << std::endl;
        continue;
    }
    receivedBytes += n + size_header_bytes + sizeof(ssize_t);
    ....
}
I increased the kernel buffer size to become 1MB using:
int buffsize = 1024*1024;
setsockopt(newsockfd, SOL_SOCKET, SO_RCVBUF, &buffsize, sizeof(buffsize));
and modified sysctl variables too:
sysctl -w net.core.rmem_max=8388608;
sysctl -w net.core.wmem_max=8388608;
as mentioned in this question: How to recive more than 65000 bytes in C++ socket using recv(), but nothing changed. Also, I tried changing the packet size, to no avail.
You should read or recv in several chunks (in general; if you are unlucky, the "several" becomes "one"). So you need to manage your buffering and keep (and use) the count of received bytes.
So at some point, you'll code
int nbrecv = recv(s, buffer + off, bufsize, 0);
if (nbrecv > 0) { off += nbrecv; bufsize -= nbrecv; }
and you probably should do that in your event loop (often around poll(2)...). And it does happen that nbrecv is a lot less than bufsize, and you should be handling that common case.
TCP does not guarantee that you'll get all the bytes in the same recv! It can depend on external factors (routing, network hardware, ...); it is a stream-oriented protocol, not a message-packet one. If your application wants messages, it should buffer the input and chunk that input into messages according to the content. Look at HTTP or SMTP: their messages have a well-defined boundary given by header information (Content-Length: in HTTP) or by an ending convention (a line with a single . in SMTP).
Please read carefully read(2), recv(2), socket(7), tcp(7), some sockets tutorial, Advanced Linux Programming.
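A minimal sketch of that read-in-chunks loop, using the names from the question (newsockfd, serv_bytes, size_bytes): keep calling recv() until the expected number of bytes has arrived or the peer closes the connection.
#include <sys/types.h>
#include <sys/socket.h>   // recv

// Sketch: read exactly 'expected' bytes from a connected TCP socket, looping
// because a single recv() may return fewer bytes than requested.
// Returns the number of bytes actually read (less than 'expected' on EOF),
// or -1 on error.
static ssize_t recv_all(int sockfd, unsigned char *buf, size_t expected)
{
    size_t off = 0;
    while (off < expected) {
        ssize_t n = recv(sockfd, buf + off, expected - off, 0);
        if (n == 0)   // peer closed the connection
            break;
        if (n < 0)    // error (check errno; retry on EINTR if desired)
            return -1;
        off += static_cast<size_t>(n);
    }
    return static_cast<ssize_t>(off);
}

// Usage in the question's loop:
//   if (recv_all(newsockfd, serv_bytes, size_bytes) != (ssize_t)size_bytes) { ... }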

New to linux: C++ opening and closing usb port issues

New developer, Linux, C++, USB - Serial Adapter.
I've completed a program where I am able to write to the USB port. However, if I change my code, make, log back in as root, and try to write to the port again, it doesn't go through. It only works if I remove the USB cable from the computer and reseat it before attempting to send data again. If you need more info, let me know.
I'm on two different computers and have no way of copying and pasting but here is the gist of what I'm doing.
int fd = 0;
int iOut = 0;
const char *ComPort = "/dev/ttyUSB0";
fd = open(ComPort, O_CREAT | O_RDWR | O_NOCTTY | O_NDELAY);
if (fd == -1)
    cout << "unable to open" << endl;
// blah blah getting data ready to be sent
// create a block of 50 hex characters to be sent : DB
iOut = write(fd, DB, sizeof(DB));
// blah blah error checking
close(fd);
return(0);
@Surt @alexfarber I had a talk with a coworker about this, and we concluded that this is most likely a hardware issue with my display or USB-to-serial adapter. I believe the only way this can work with this particular adapter is by turning off its power and turning it back on, to mirror what happens when it is removed and reseated manually. I don't believe this is possible, but I'll start another thread with anything I run into. I appreciate you all taking the time to help with this; I did learn a number of other things I didn't know beforehand, so this was still very helpful. Thank you once again.
Take a look at chapter 3.2 here: http://www.tldp.org/HOWTO/Serial-Programming-HOWTO/x115.html
Add some of the error checking first so you can see where it fails. The perror line will help there.
if (fd < 0) { perror(ComPort); exit(-1); } // note the exit, which your code doesn't have
This should now tell you some more info. Then add
if (errno) { perror(ComPort); exit(-1); }
after all operations: read, write, and anything that sets things on the fd.
Now add the newtio part of 3.2 to your program in case some handshake failed. You must change it so it conforms with the display.
The final version of your program might be more like 3.3.
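Putting the advice together, here is a sketch in the spirit of chapter 3.2 of the linked HOWTO: an error-checked open() followed by a newtio-style termios setup. The 9600 8N1 values are placeholders and must be changed to whatever the display actually expects.
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdio>
#include <cstdlib>

// Sketch: open the port with error checking, then apply a raw 'newtio'
// configuration as in Serial-Programming-HOWTO 3.2.
int open_display_port(const char *ComPort)
{
    int fd = open(ComPort, O_RDWR | O_NOCTTY);
    if (fd < 0) { perror(ComPort); exit(-1); }   // the exit the original code lacks

    termios oldtio{}, newtio{};
    if (tcgetattr(fd, &oldtio) < 0) { perror("tcgetattr"); exit(-1); }  // save current settings

    newtio.c_cflag = B9600 | CS8 | CLOCAL | CREAD;  // placeholder: must match the display
    newtio.c_iflag = IGNPAR;
    newtio.c_oflag = 0;
    newtio.c_lflag = 0;            // raw, non-canonical, no echo
    newtio.c_cc[VMIN]  = 0;
    newtio.c_cc[VTIME] = 10;       // 1-second read timeout (VTIME is in tenths of a second)

    tcflush(fd, TCIOFLUSH);        // drop anything left over from a previous run
    if (tcsetattr(fd, TCSANOW, &newtio) < 0) { perror(ComPort); exit(-1); }
    return fd;
}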

Payload split over two TCP packets when using Boost ASIO, when it fits within the MTU

I have a problem with a boost::asio::ip::tcp::iostream. I am trying to send about 20 raw bytes. The problem is that this 20-byte payload is split into two TCP packets with 1 byte, then 19 bytes. It seems like a simple problem, but I have no idea why it is happening. I am writing this for a legacy binary protocol that very much requires the payload to fit in a single TCP packet (groan).
Pasting the whole source of my program would be long and overly complex, so I've reduced the functional issue to two functions here (tested; it does reproduce the issue):
#include <iostream>
// BEGIN cygwin nastiness
// The following macros and conditions are to address a Boost compile
// issue on cygwin. https://svn.boost.org/trac/boost/ticket/4816
//
/// 1st issue
#include <boost/asio/detail/pipe_select_interrupter.hpp>
/// 2nd issue
#ifdef __CYGWIN__
#include <termios.h>
#ifdef cfgetospeed
#define __cfgetospeed__impl(tp) cfgetospeed(tp)
#undef cfgetospeed
inline speed_t cfgetospeed(const struct termios *tp)
{
    return __cfgetospeed__impl(tp);
}
#undef __cfgetospeed__impl
#endif /// cfgetospeed is a macro
/// 3rd issue
#undef __CYGWIN__
#include <boost/asio/detail/buffer_sequence_adapter.hpp>
#define __CYGWIN__
#endif
// END cygwin nastiness.
#include <boost/array.hpp>
#include <boost/asio.hpp>
#include <cassert>

typedef boost::asio::ip::tcp::iostream networkStream;

void writeTestingData(networkStream* out) {
    *out << "Hello world." << std::flush;
    // *out << (char) 0x1 << (char) 0x2 << (char) 0x3 << std::flush;
}

int main() {
    networkStream out("192.168.1.1", "502");
    assert(out.good());
    writeTestingData(&out);
    out.close();
}
To add to the strange issue, if I send the string "Hello world.", it goes in one packet. If I send 0x1, 0x2, 0x3 (the raw byte values), I get 0x1 in packet 1, then the rest of the data in the next TCP packet. I am using Wireshark to look at the packets; there is only a switch between the dev machine and 192.168.1.1.
Don't worry, you are far from the only one to have this problem. There is definitely a solution. In fact, you have TWO problems with your legacy protocol, not just one.
Your old legacy protocol requires one "application message" to fit in "one and only one TCP packet" (because it incorrectly uses TCP, a stream-oriented protocol, as a packet-oriented protocol). So we must make sure that:
no "application message" is split across multiple TCP packets (the problem you are seeing)
no TCP packet contains more than one "application message" (you are not seeing this but it may definitely happen)
The solution:
Problem 1
You must feed your socket with all your "message" data at once. This is currently not happening because, as other people have pointed out, the boost stream API you use puts data into the socket in separate calls when you use successive "<<", and the underlying TCP/IP stack of your OS doesn't buffer it enough (with good reason: better performance).
Multiple solutions:
you pass a char buffer instead of separate chars, so that you make only one call to <<
you forget about boost, open an OS socket and feed it in one call to send() (on Windows, look for the "winsock2" API; on unix/cygwin, look for "sys/socket.h")
Problem 2
You MUST activate the TCP_NODELAY option on your socket. This option is made especially for such legacy-protocol cases. It ensures that the OS TCP/IP stack sends your data "without delay" and doesn't buffer it together with another application message you may send later.
if you stick with Boost, look for the TCP_NODELAY option; it is in the docs
if you use OS sockets, you'll have to use the setsockopt() function on your socket.
Conclusion
If you solve those two problems, you should be fine!
The OS socket API, either on Windows or Linux, is a bit tricky to use, but you'll gain full control over its behaviour. Unix example
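As a sketch of the two fixes described above (send the whole "application message" in one call, and set TCP_NODELAY), here is one way to do it with Boost.Asio's socket API instead of the tcp::iostream used in the question. The host, port, and three raw bytes are the question's; everything else is illustrative rather than a confirmed drop-in replacement.
#include <boost/asio.hpp>
#include <array>

int main() {
    namespace asio = boost::asio;
    using asio::ip::tcp;

    asio::io_context io;              // io_service in older Boost versions
    tcp::socket sock(io);

    // Connect to the same endpoint as in the question.
    tcp::resolver resolver(io);
    asio::connect(sock, resolver.resolve("192.168.1.1", "502"));

    // Problem 2: disable Nagle's algorithm so the stack doesn't hold data back.
    sock.set_option(tcp::no_delay(true));

    // Problem 1: build the whole "application message" first, then send it in one call.
    std::array<char, 3> message = {0x1, 0x2, 0x3};
    asio::write(sock, asio::buffer(message));   // one write for the whole payload

    sock.close();
}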
Your code:
out << (char) 0x1 << (char) 0x2 << (char) 0x3;
will make 3 calls to the operator<< function.
Because of TCP's Nagle's algorithm, the TCP stack will send the available data ((char)0x1) to the peer immediately after/during the first operator<< call.
So the rest of the data (0x2 and 0x3) will go into the next packet.
Solution for avoiding 1-byte TCP segments:
Call the sending function with a bigger chunk of data.
I am not sure who would have imposed a requirement that an entire payload be within one TCP packet. TCP by its nature is a stream protocol, and details such as the number of packets sent and the payload size are left up to the operating system's TCP stack implementation.
I would double check to see if this is an actual restriction of your protocol or not.
I agree with User1's answer. You probably invoke operator<< several times; on the first invocation it immediately sends the first byte over the network, then Nagle's algorithm comes into play, hence the remaining data is sent within a single packet.
Nevertheless, even if packetization were not an issue, the very fact that you invoke a socket sending function frequently on small pieces of data is a big problem. Every function call on a socket involves a heavy kernel-mode transition (a system call); calling send for every byte is simply insane!
You should first format your message in memory, and then send it. For your design I'd suggest creating a sort of cache stream that accumulates the data in its internal buffer and sends it to the underlying stream all at once.
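A sketch of that "format first, then send" idea against the networkStream typedef from the question: accumulate the bytes in an in-memory stream and hand the finished buffer to the socket stream in a single write. The cache-stream class suggested above is not shown; a std::ostringstream stands in for it here.
#include <boost/asio.hpp>
#include <sstream>
#include <string>

typedef boost::asio::ip::tcp::iostream networkStream;

// Sketch: build the whole message in memory first, then write it once.
void writeTestingData(networkStream* out) {
    std::ostringstream msg;                       // stand-in for the "cache stream"
    msg << static_cast<char>(0x1)
        << static_cast<char>(0x2)
        << static_cast<char>(0x3);

    const std::string payload = msg.str();        // the complete application message
    out->write(payload.data(), payload.size());   // one call instead of one per byte
    out->flush();
}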
It is erroneous to think of data sent over a TCP socket as packets. It is a stream of bytes; how you frame the data is application-specific.
Any suggestions?
I suggest you implement a protocol such that the receiver knows how many bytes to expect. One popular way to accomplish this is to send a fixed-size header indicating the number of bytes in the payload.
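For illustration, a minimal sketch of that length-prefixed framing over plain POSIX sockets. The 4-byte big-endian header and the helper names are assumptions for the example, not part of any protocol discussed above: the sender writes the length header followed by the payload, and the receiver reads the header first, then loops until that many payload bytes have arrived.
#include <arpa/inet.h>    // htonl, ntohl
#include <sys/socket.h>   // send, recv
#include <cstdint>
#include <string>

// Sketch: length-prefixed framing. A 4-byte big-endian header carries the
// payload size, so the receiver always knows how many bytes to expect.

static bool send_exact(int fd, const void *data, size_t len)
{
    const char *p = static_cast<const char *>(data);
    while (len > 0) {
        ssize_t n = send(fd, p, len, 0);
        if (n <= 0) return false;
        p += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

static bool recv_exact(int fd, void *data, size_t len)
{
    char *p = static_cast<char *>(data);
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return false;   // error or peer closed
        p += n;
        len -= static_cast<size_t>(n);
    }
    return true;
}

bool send_message(int fd, const std::string &payload)
{
    uint32_t header = htonl(static_cast<uint32_t>(payload.size()));
    return send_exact(fd, &header, sizeof header) &&
           send_exact(fd, payload.data(), payload.size());
}

bool recv_message(int fd, std::string &payload)
{
    uint32_t header = 0;
    if (!recv_exact(fd, &header, sizeof header)) return false;
    payload.resize(ntohl(header));
    return payload.empty() || recv_exact(fd, &payload[0], payload.size());
}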