I am trying to use QSerialPort to implement one side of a simple serial communication protocol. The board on the receiving end of the protocol is not under my control, so I am trying to debug my side by using a null-modem DE-9 cable to loop back from one serial port to another on my own machine, and running a simple "listener" app that relies on the same underlying transport-protocol code.
The transport protocol requires instantly sending an "acknowledgment" packet when a valid message packet is received, so ports are always opened in read/write mode, and after each message packet is sent the code waits and listens for an acknowledgment; the port is not closed again until after a timeout period has passed. EDIT 4: The port is, however, closed when the program is not trying to send a message or specifically listening for data.
I've discovered that the code behaves differently depending on whether the baud rate the ports were set to before the Qt programs run matches the baud rate selected by the Qt programs. That is, I'm using setBaudRate() (in both the sender and the listener) to set the baud rate; if I set it to whatever the rate already was before running my programs, the listener sees the correct byte sequences, but if I set it to anything else, the listener sees only garbage on the serial port.
(Even when the listener sees the correct byte sequences and sends acks, these acks are not seen by the other program, and I'm not sure why; I suspect this problem may be related, but for this question it's not my focus.)
It does not appear to matter what baud rate is actually used; I just need to set it using stty before running the Qt programs, even though the Qt programs set the baud rate explicitly. If I use stty to set the baud rate to something other than what the Qt programs use, the listener sees garbage.
I have printed the values of baudRate(Input), baudRate(Output), dataBits(), flowControl(), parity(), and stopBits() for each port after setting the baud rates in order to assess whether the baud rate isn't being set correctly or whether some other property of the serial port is incorrect, but the values printed are identical in every case.
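For reference, the check looked roughly like this (a sketch, not my actual code; dumpPortConfig and port are illustrative names):
#include <QSerialPort>
#include <QDebug>
// Dump the settings I compared on both ports after setBaudRate().
// "port" is assumed to be an already-opened QSerialPort.
void dumpPortConfig(const QSerialPort &port)
{
    qDebug() << "baud in:"  << port.baudRate(QSerialPort::Input)
             << "baud out:" << port.baudRate(QSerialPort::Output)
             << "data:"     << port.dataBits()
             << "flow:"     << port.flowControl()
             << "parity:"   << port.parity()
             << "stop:"     << port.stopBits();
}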
I have experimented (though not extensively) with other properties of stty (for instance, setting both ports to raw or to cooked before running my programs), and have observed that no settings except the baud rate appear to have any effect.
Does anyone have any idea what might be going on here, how to debug it, and whether QSerialPort or QSerialPortInfo provide the proper tools to fix whatever inconsistency is manifesting?
EDIT: To clarify, I know that the Qt baud-rate-setting code is having some effect, because the remote end of the protocol (the one that's not under my control) uses 57600 baud, and I can send and receive some messages with that hardware after using Qt (not stty) to change my port's baud rate from the default 9600 to 57600. (If I do not change the baud rate, communication is not possible; I get characters that don't match what the hardware is actually sending.)
It should also be clear that the Qt code is having some effect: if setting the baud rate in my program had no effect at all, the loopback test would always work.
EDIT 2: Before setting the baud rate, Qt apparently perceives the rate, as determined by baudRate(), to be 9600 (the default rate), regardless of how it has been set by stty.
Also, if I use stty to set the "sending" side to the "right" baud rate and the "listening" side to the "wrong" baud rate, I get the partially-correct behavior that I get when both ports are set to the "wrong" rate in advance (i.e. the listener sees messages, but the sender never sees acknowledgments).
EDIT 3: I previously had an edit noting that some of these tests were done with the loopback cable unplugged, but I have just realized that I was mistaken.
Qt version: 5.4.0
OS: Debian 7
It turns out I was losing data because I was opening and closing the ports. When a QSerialPort is closed, incoming data is either not buffered by the driver in the first place, or it is discarded as soon as the port is re-opened.
Presumably, the reason that setting the baud rate with stty to match the rate used by Qt changed the behavior is this: when the baud rate is already the one Qt needs, the setBaudRate() call is effectively a no-op, and no clean-up is required on close to restore the old baud rate; when Qt does have to change the baud rate, it must also restore the old rate when closing the port, which takes a significant amount of processing time.
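In other words, the fix was to open the port once and keep it open for the whole session instead of opening and closing it around each exchange. A minimal sketch of the pattern that ended up working (names are illustrative, not my actual code):
#include <QSerialPort>
// Open once, configure once, then keep the port open so the driver keeps
// buffering incoming bytes between messages.
bool setupPort(QSerialPort &port, const QString &device)
{
    port.setPortName(device);
    if (!port.open(QIODevice::ReadWrite))
        return false;
    // Configure once, right after open(); then leave the port open.
    // Do NOT close() after each message: closing restores the old baud
    // rate and loses anything the driver had buffered in the meantime.
    return port.setBaudRate(QSerialPort::Baud57600);
}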
Related
I'm writing a small program using QModbusDevice over the serial port (using the QModbusRtuSerialMaster class) and have some problems.
One of the problems seems to be that the flow control of the serial port is incorrect. Checking in a serial port sniffer I see that a working client sets RTS on when it sends requests, and then RTS off to receive replies. When I use QModbusRtuSerialMaster to send messages that doesn't happen.
The message is sent correctly (sometimes, subject for another question) compared to the working client. It's just the flow control that doesn't happen, which leaves the servers unable to reply.
I have set the Windows port settings for the COM-port in question to hardware flow control but it doesn't matter, the sniffer still reports no flow control.
Is there a way to get QModbusRtuSerialMaster to set the flow control as I would like? Or is there a way to manually handle the flow control (which is what the working client does)? Or is the only solution to skip the Qt modbus classes and make up my own using the serial port directly?
A short summary of what I'm doing...
First the initialization of the QModbusRtuSerialMaster object:
// Needs to be a QModbusRtuSerialMaster* (a QModbusClient): the QModbusDevice
// base class does not expose setTimeout(), setNumberOfRetries() or sendReadRequest().
QModbusRtuSerialMaster* modbusDevice = new QModbusRtuSerialMaster(myMainWindow);
modbusDevice->setConnectionParameter(QModbusDevice::SerialPortNameParameter, "COM3");
modbusDevice->setConnectionParameter(QModbusDevice::SerialParityParameter, QSerialPort::NoParity);
modbusDevice->setConnectionParameter(QModbusDevice::SerialBaudRateParameter, QSerialPort::Baud115200);
modbusDevice->setConnectionParameter(QModbusDevice::SerialDataBitsParameter, QSerialPort::Data8);
modbusDevice->setConnectionParameter(QModbusDevice::SerialStopBitsParameter, QSerialPort::OneStop);
modbusDevice->setTimeout(100);          // per-request response timeout, in ms
modbusDevice->setNumberOfRetries(3);    // retries before the request is failed
modbusDevice->connectDevice();          // returns false if the port cannot be opened
Then how I send a request:
auto response = modbusDevice->sendReadRequest(QModbusDataUnit(QModbusDataUnit::Coils, 0, 1), 1); // returns a QModbusReply*
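For completeness, the reply completes asynchronously, so it normally gets consumed from its finished() signal. A minimal sketch (error handling trimmed, not the code from my app):
if (response && !response->isFinished()) {
    QObject::connect(response, &QModbusReply::finished, [response]() {
        if (response->error() == QModbusDevice::NoError)
            qDebug() << "coil 0 =" << response->result().value(0);
        else
            qDebug() << "read failed:" << response->errorString();
        response->deleteLater();
    });
} else if (response) {
    response->deleteLater(); // broadcast replies finish immediately
}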
Qt's Modbus classes do not implement automatic toggling of the RTS line because they expect your hardware to do it on its own (with a dedicated line instead).
This should be the case for most RS485 converters (even cheap ones). You would only need the RTS line if you have a separate transceiver like this one with a DE/~RE input.
If you were on Linux and had some specific hardware you could try to use the RS485 mode to toggle the RTS line for you automatically. But you don't seem to be on Linux and the supported hardware is certainly very limited.
You can also toggle the line manually with port.setRequestToSend(true), see here. But note that depending on the timing needs of the device you are talking to, this software solution might not be very reliable. This particular problem has been discussed at length here. Take a look at the links in my answer too; I made some benchmarks with libmodbus that show good results.
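If you do go the manual route with QSerialPort directly (bypassing the Modbus classes), the shape of it is roughly the sketch below; the delays are placeholders and would need tuning to your device's timing:
#include <QSerialPort>
#include <QThread>
// Rough sketch of software direction control. "port" is an open QSerialPort
// with flow control set to NoFlowControl; the guard time is illustrative only.
void sendFrame(QSerialPort &port, const QByteArray &frame)
{
    port.setRequestToSend(true);   // claim the bus (transceiver switched to TX)
    port.write(frame);
    port.waitForBytesWritten(100); // wait until the UART has taken the data
    QThread::usleep(500);          // small guard time, depends on baud rate
    port.setRequestToSend(false);  // release the bus so the slave can answer
}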
Enabling or disabling flow control on the driver won't have any effect on this issue because this is not actually a flow control problem but a direction control one. Modbus runs on two-wire half-duplex links very often, and that means you need a way to indicate which device is allowed to talk on the bus at all times. The RTS (flow control) from an RS232 port can be used for this purpose as a software workaround.
In the end, it would be much less of a headache to just replace your transceiver with one that supports hardware direction control. If you have a serial port with an FTDI chip you should be able to use the TXEN line for this purpose. Sometimes this hardware line is not routed directly to a pin, but you can reroute it with MProg.
I would like to highlight that you did not mention if you are running your Modbus on RS485. I guess it's fair to assume you are, but if you have only a couple of devices next to each other you might use RS232 (even on TTL levels) and forget about direction control (you would be running full-duplex with three wires: TX, RX and GND).
I've got a Qt app (Qt 4.8.1) that's doing some Windows serial port tasks. I'm finding that occasionally the CreateFileA call that I do to open the serial port is taking up to 30 seconds to complete! Obviously I'm doing something to trigger this odd behavior, and I want to know what it is I might be doing to cause this.
m_portHand = CreateFileA( portDevice.c_str(),
GENERIC_READ | GENERIC_WRITE,
0, // must be opened with exclusive-access
NULL, // default security attributes
OPEN_EXISTING, // must use OPEN_EXISTING
FILE_FLAG_OVERLAPPED, // overlapped I/O
NULL ); // hTemplate must be NULL for comm devices
m_portHand is a HANDLE, and portDevice is an std::string and contains "COM5".
This call is triggered by a button push in the main thread of my app. At the time it happens the app has at most one other thread, and that thread (if any) is idle.
The only major thing going on in the system is a VM running Linux, but the system is a quad-core and 3 of the cores are as near to idle as you see on a Windows box, with only one doing anything with the VM.
The serial ports are on an 8 port USB serial box, could that be related?
Is this related to the Overlapped IO in some way?
In response to comments:
Port is not open by another app. Port was previously open by a previous invocation of this app, which was properly closed, and the port closed with 'CloseHandle'.
I have not been able to determine any correlations between it taking 30 seconds and not - sometimes I start the app up, click the button and we're off to the races, sometimes it takes up to 30 seconds.
The VM is intercepting some other USB devices on the same serial box.
Other than the serial box (with the VM polling 4 ports looking for devices), the USB bus is unloaded.
I've not seen the behavior in other apps. I'll try switching to a built-in port (COM1 on the motherboard) to see if that has any effect.
A thought just occurred to me: could the form of the port addressing have anything to do with it? Other similar apps I work on use the qextserialport library, which opens ports using the '\\.\COM#' notation. Is there some way that the notation used could affect the timing?
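For reference, the two addressing forms differ only in the string handed to CreateFileA; whether that affects timing is exactly what I'm asking:
// The prefixed form of the same open; the "\\.\" prefix is mandatory for COM10
// and above and harmless for COM1-COM9 (the string below is the literal \\.\COM5).
HANDLE h = CreateFileA("\\\\.\\COM5", GENERIC_READ | GENERIC_WRITE, 0,
                       NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);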
The USB serial device says 'VScom' on it, and normally it opens up right away (< 10 milliseconds for the CreateFile call). It's just an occasional issue where things get stuffed up, and I've got other programs that NEVER seem to exhibit this behavior.
The device I'm talking to is a medical monitor using the IEEE 11073 protocol. Anyway, I have the connection to the device working just fine; it's ONLY the serial port open that's problematic. Could the state of the serial control lines at open time have something to do with this? The device at the other end is polling its ports looking for various things to talk to, so I have no idea what the serial lines look like at the exact moment things go wrong.
OK, problem is understood, if not solved. I was playing with a different serial device and the problem began manifesting even more frequently.
The problem seems to be that when the VM is in control of some of the serial ports, the drivers become intermittently slow to open the available ports.
My test program opens then closes the port 1000 times, timing the open call. It does NOT set the serial port parameters in any way. Prior to running the test program, I was doing actual work with a device that uses the baud rate 460800.
When the VM has possession of 4 of the ports, opens on the remaining 4 ports can sometimes (20-30 times out of 1000 attempts) take 20-30 seconds to complete. When the VM is not running, the opens happen quickly on all 1000 attempts. With the VM running but no USB serial ports in its possession, the opens also happen quickly on all 1000 attempts.
Since the VM is a development tool, not part of our intended deployment scenario, I can live with this issue.
Interestingly, this effect seems to depend on the baud rate the port was last used at. Prior to my initial inquiries I'd been operating at 9600 baud and below and don't recall ever seeing the problem. When I first asked the question, I was working with a device at 115200 baud and was having the problem intermittently. With the latest device at 460800 baud, I get the problem often enough to be able to hunt it down. No idea why, but there it is.
An interaction between the serial control lines and the device driver is a likely cause.
Do you have the control signals correctly connected?
If not, connect RTS to CTS and connect CD, DTR and DSR. On a DB25, this means connecting pins 4 and 5 and connecting pins 6, 8 and 20. On a DB9, connect pins 7 and 8 and connect pins 1, 4 and 6.
If this fixes the problem, you should look for driver settings to ignore the control signals on open.
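If rewiring isn't an option, the closest code-level equivalent (my assumption, using the plain Win32 API rather than anything Qt-specific) is to configure the DCB right after the open so the driver asserts DTR/RTS itself and ignores the incoming handshake lines. It won't speed up CreateFile itself, but it rules the control signals out for everything that follows:
#include <windows.h>
// Sketch: drive DTR/RTS and ignore CTS/DSR so a floating control line
// cannot stall reads or writes on this handle.
void ignoreHandshakeLines(HANDLE port)
{
    DCB dcb = {0};
    dcb.DCBlength = sizeof(dcb);
    if (GetCommState(port, &dcb)) {
        dcb.fOutxCtsFlow    = FALSE;              // don't gate transmit on CTS
        dcb.fOutxDsrFlow    = FALSE;              // don't gate transmit on DSR
        dcb.fDsrSensitivity = FALSE;              // don't drop bytes while DSR is low
        dcb.fDtrControl     = DTR_CONTROL_ENABLE; // assert DTR while open
        dcb.fRtsControl     = RTS_CONTROL_ENABLE; // assert RTS while open
        SetCommState(port, &dcb);
    }
}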
I have a program that opens a serial port using boost asio.
The serial port, by default, has a delay that keeps the line idle.
On windows platforms I saw a delay of 30ms and on Linux platforms the delay was 20ms.
For the Linux environment I found that the ioctl() call (with struct serial_struct and the flags from <linux/serial.h>) has a way to set the serial driver's settings, including the one I needed: ASYNC_LOW_LATENCY.
The code is as follows:
#include <sys/ioctl.h>
#include <linux/serial.h>   // struct serial_struct, ASYNC_LOW_LATENCY
// serial_port_ is boost::asio's serial port object; native() returns its file
// descriptor (on newer Boost this call is named native_handle()).
boost::asio::basic_serial_port<boost::asio::serial_port_service>::native_type native = serial_port_.native();
struct serial_struct serial;
ioctl(native, TIOCGSERIAL, &serial);   // read the driver's current settings
serial.flags |= ASYNC_LOW_LATENCY;     // (0x2000) minimise receive buffering latency
ioctl(native, TIOCSSERIAL, &serial);   // write them back
I want to reduce the delay on my windows platform as well.
Is there an equivalent way that does the same for windows with C++?
BTW, I saw some solutions that suggest changing the properties of the serial port in the Windows Device Manager, but I don't have the properties those solutions showed, and I need a code solution.
Take the native handle you get from boost asio on Windows and pass it to SetCommTimeouts: http://msdn.microsoft.com/en-us/library/windows/desktop/aa363437(v=vs.85).aspx
In particular, look at the ReadIntervalTimeout member of the COMMTIMEOUTS structure: http://msdn.microsoft.com/en-us/library/windows/desktop/aa363190(v=vs.85).aspx
ReadIntervalTimeout
The maximum time allowed to elapse between the arrival of two bytes on the communications line, in milliseconds. During a ReadFile operation, the time period begins when the first byte is received. If the interval between the arrival of any two bytes exceeds this amount, the ReadFile operation is completed and any buffered data is returned. A value of zero indicates that interval time-outs are not used.
A value of MAXDWORD, combined with zero values for both the ReadTotalTimeoutConstant and ReadTotalTimeoutMultiplier members, specifies that the read operation is to return immediately with the bytes that have already been received, even if no bytes have been received.
You can also query the current values with GetCommTimeouts: http://msdn.microsoft.com/en-us/library/windows/desktop/aa363261(v=vs.85).aspx
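As a concrete illustration, the MAXDWORD combination described above looks like this (a sketch; it assumes the native handle from asio is the raw Win32 HANDLE of the port):
#include <windows.h>
// Make ReadFile return immediately with whatever bytes are already buffered,
// removing the driver-imposed read latency at the cost of busier polling.
void makeReadsReturnImmediately(HANDLE port) // e.g. the handle from boost asio
{
    COMMTIMEOUTS to = {0};
    to.ReadIntervalTimeout         = MAXDWORD; // together with the two zero
    to.ReadTotalTimeoutConstant    = 0;        // values below: "return at once
    to.ReadTotalTimeoutMultiplier  = 0;        // with what has already arrived"
    to.WriteTotalTimeoutConstant   = 0;        // leave writes unaffected
    to.WriteTotalTimeoutMultiplier = 0;
    SetCommTimeouts(port, &to);
}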
I recall having this problem in the NT4/Win2k/WinXP era; I assume the problem you are having is similar.
Using the COMMTIMEOUTS structure always added a timeslice delay after each timeout, which meant that for variable-length packets you always added 10 ms or 16 ms of latency, depending on whether the machine was an SMP machine or not. You could not, for example, really get a 1 ms timeout. This was true even when using COMMTIMEOUTS with async operations on the serial port with completion ports.
To eliminate the problem I used IO Completion Ports and had a pipeline of single character read operations outstanding. The reads completed as bytes arrived on the port, and could be dequeued in the main IOCP event loop. This gave very high performance at the cost of some code complexity.
This approach will not work as readily with boost::asio because only one outstanding read is permitted. You could try to implement an IOCP-based serial port backend for asio on Windows, or you could use a separate IOCP and thread for serial comms.
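To make the pattern concrete, here is a rough sketch of the idea (not my original code; error handling is omitted, and how single-byte overlapped reads complete can vary between serial drivers):
#include <windows.h>
#include <cstdio>
struct ReadOp {      // OVERLAPPED is the first member so the pointer returned
    OVERLAPPED ov;   // by GetQueuedCompletionStatus can be cast straight back.
    char byte;
};
int main()
{
    HANDLE port = CreateFileA("\\\\.\\COM5", GENERIC_READ | GENERIC_WRITE, 0,
                              NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);
    if (port == INVALID_HANDLE_VALUE) return 1;
    HANDLE iocp = CreateIoCompletionPort(port, NULL, 0, 0);
    // Keep several single-byte reads pending so completions arrive per byte.
    ReadOp ops[8] = {};
    for (ReadOp &op : ops)
        ReadFile(port, &op.byte, 1, NULL, &op.ov); // pends with ERROR_IO_PENDING
    for (;;) {
        DWORD n = 0; ULONG_PTR key = 0; OVERLAPPED *ov = NULL;
        if (!GetQueuedCompletionStatus(iocp, &n, &key, &ov, INFINITE)) break;
        ReadOp *op = reinterpret_cast<ReadOp *>(ov);
        if (n == 1) std::printf("byte: 0x%02x\n", (unsigned char)op->byte);
        op->ov = OVERLAPPED{};                       // reset before reissuing
        ReadFile(port, &op->byte, 1, NULL, &op->ov); // queue the next read
    }
    CloseHandle(iocp);
    CloseHandle(port);
    return 0;
}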
I'm seeing some pretty odd behaviour from windows regarding my COM-Buffers.
I use 3 USB-serial converters with FTDI chips. I open the COM ports with CreateFile and it all works fine. All 3 ports have the same configuration except for the baud rates: 2 work at 38400 and one at 9600.
Here is the odd part:
I am able to successfully write out of the 9600 port and one of the 38400 ports. The second 38400 port seems to be buffering the data. I have connected to this port with HyperTerminal and see that on the working ports I immediately get a response, while on the "weird" port I only get the data when I close my application...
Has anyone else experienced this? How did you resolve this?
This is kind of a shot in the dark... but.
Check the flow control settings for both ends of the "weird" connection. I've seen strange things like this when the flow control is mismatched. The act of closing the port clears the bits and allows the buffered data to flow.
Having worked a bit with FTDI chips, I would suggest you check out the advanced driver settings for each port. The driver supports both buffering and latency control in order to allow you to compromise between high throughput and low latency. So check the settings that work and use the same for the one that doesn't (if they're not the same).
On a side note, by using FTDI's own API you don't have to keep track of COM-port reassignment and the like. The API is quite similar to the normal Win32 one but exposes more configuration options.
I'm having a peculiar problem with boost::asio and a boost::asio::serial_port device. The code is finally working pretty well with asynchronous reads and stuff, but I can't figure out how to change the speed of the serial port on the fly.
What I'm trying to do right now is just telling the device connected to my serial port to change the serial port speed to, say, 38400 baud, and then setting my computer's serial port to the same speed via:
port_.set_option(boost::asio::serial_port_base::baud_rate(rate));
But what's really happening is that if I do the set_option part, the device never receives the command to change the speed. If I don't do the set_option part, the device changes speed correctly. From what I gather, the (blocking, synchronous) write puts the data in my computer's hardware buffer and returns, then set_option discards the buffer (before it has had time to send the data to the device). So I need some way to check that the hardware buffer is empty and the device really has received the command to change the speed before reconfiguring my computer's serial port.
I also cannot find any info on whether I have to close() and open() the port for the speed change to take effect, and I'm wondering whether close() discards the stuff in the buffer or not. I'm using a USB->serial adapter and my platform is Ubuntu 10.10, if that makes any difference.
Have you looked at man 3 termios? It seems tcdrain does what you need
tcdrain() waits until all output written to the object referred to by fd has been transmitted.
You can get the native descriptor from the boost::asio::serial_port::native method.
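Putting the two together, the change-speed sequence would look roughly like this (a sketch; on older Boost the call is named native() rather than native_handle()):
#include <boost/asio.hpp>
#include <termios.h>   // tcdrain()
#include <string>
// Let the hardware finish sending the "switch speed" command before
// reconfiguring the local port, so set_option() can't discard pending output.
void changeSpeed(boost::asio::serial_port &port, const std::string &cmd, unsigned rate)
{
    boost::asio::write(port, boost::asio::buffer(cmd));              // tell the device to switch
    tcdrain(port.native_handle());                                   // block until the UART has drained
    port.set_option(boost::asio::serial_port_base::baud_rate(rate)); // now match it locally
}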
Did you try flushing the buffer or looking for an appropriate flush alternative?
Are the client and server in the same process?
See also: Boost.Asio iostream flush not working?