Qt modbus serial port flow control handling - c++

I'm writing a small program that uses QModbusDevice over a serial port (via the QModbusRtuSerialMaster class) and I'm running into some problems.
One of the problems seems to be that the flow control of the serial port is handled incorrectly. Watching with a serial port sniffer, I see that a working client turns RTS on when it sends a request and turns it off again to receive the reply. When I send messages with QModbusRtuSerialMaster, that doesn't happen.
The message itself is sent correctly (sometimes; that's the subject of another question) and matches what the working client sends. It's just the flow control that doesn't work, which leaves the servers unable to reply.
I have set the Windows port settings for the COM port in question to hardware flow control, but it doesn't matter: the sniffer still reports no flow control.
Is there a way to get QModbusRtuSerialMaster to set the flow control as I would like? Or is there a way to handle the flow control manually (which is what the working client does)? Or is the only solution to skip the Qt Modbus classes and roll my own on top of the serial port directly?
A short summary of what I'm doing...
First the initialization of the QModbusRtuSerialMaster object:
QModbusClient* modbusDevice = new QModbusRtuSerialMaster(myMainWindow);
modbusDevice->setConnectionParameter(QModbusDevice::SerialPortNameParameter, "COM3");
modbusDevice->setConnectionParameter(QModbusDevice::SerialParityParameter, QSerialPort::NoParity);
modbusDevice->setConnectionParameter(QModbusDevice::SerialBaudRateParameter, QSerialPort::Baud115200);
modbusDevice->setConnectionParameter(QModbusDevice::SerialDataBitsParameter, QSerialPort::Data8);
modbusDevice->setConnectionParameter(QModbusDevice::SerialStopBitsParameter, QSerialPort::OneStop);
modbusDevice->setTimeout(100);
modbusDevice->setNumberOfRetries(3);
modbusDevice->connectDevice();
Then how I send a request:
auto response = modbusDevice->sendReadRequest(QModbusDataUnit(QModbusDataUnit::Coils, 0, 1), 1);
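The reply from sendReadRequest() is delivered asynchronously. For context, a minimal sketch of consuming it (assuming the device connected successfully; the lambda body and the debug output are purely illustrative) could look like this:
if (auto *reply = modbusDevice->sendReadRequest(QModbusDataUnit(QModbusDataUnit::Coils, 0, 1), 1)) {
    if (!reply->isFinished()) {
        QObject::connect(reply, &QModbusReply::finished, [reply]() {
            if (reply->error() == QModbusDevice::NoError) {
                const QModbusDataUnit unit = reply->result();
                qDebug() << "Coil 0 =" << unit.value(0);   // needs <QDebug>
            } else {
                qDebug() << "Read error:" << reply->errorString();
            }
            reply->deleteLater();
        });
    } else {
        delete reply;   // broadcast replies finish immediately
    }
} else {
    qDebug() << "Send error:" << modbusDevice->errorString();
}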

QtModbus does not implement automatic toggling of the RTS line because it expects your hardware to handle the direction switching on its own (with a dedicated line instead).
This should be the case for most RS485 converters (even cheap ones). You would only need the RTS line if you have a separate transceiver like this one with a DE/~RE input.
If you were on Linux and had some specific hardware, you could try the RS485 mode, which toggles the RTS line for you automatically. But you don't seem to be on Linux, and the supported hardware is certainly very limited.
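For reference, on Linux that mode is requested with the TIOCSRS485 ioctl; a minimal sketch, assuming a driver that actually implements it (many don't) and an illustrative port name:
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/serial.h>   // struct serial_rs485, SER_RS485_* flags
#include <unistd.h>
#include <cstdio>

int main()
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);   // port name is just an example
    if (fd < 0) { perror("open"); return 1; }

    struct serial_rs485 rs485 = {};
    rs485.flags = SER_RS485_ENABLED | SER_RS485_RTS_ON_SEND;  // assert RTS while transmitting
    rs485.delay_rts_before_send = 0;
    rs485.delay_rts_after_send = 0;

    if (ioctl(fd, TIOCSRS485, &rs485) < 0)
        perror("TIOCSRS485 (driver probably lacks RS485 support)");

    close(fd);
    return 0;
}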
You can also toggle the line manually with port.setRequestToSend(true), see here. But note that, depending on the timing needs of the device you are talking to, this software solution might not be very reliable. This particular problem has been discussed at length here. Take a look at the links in my answer too; I made some benchmarks with libmodbus that show good results.
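If you end up driving the port yourself with QSerialPort instead of QModbusRtuSerialMaster, the manual toggle looks roughly like the following sketch (port is an open QSerialPort, adu holds a prepared Modbus RTU frame, the timeouts are illustrative, and flow control must not be set to HardwareControl or setRequestToSend() will be rejected; the timing runs in user space, which is exactly why it can be unreliable):
port.setRequestToSend(true);        // assert RTS: we are transmitting
port.write(adu);
port.waitForBytesWritten(100);      // wait until the frame has left the driver
port.setRequestToSend(false);       // release the bus so the slave can answer
port.waitForReadyRead(100);
QByteArray response = port.readAll();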
Enabling or disabling flow control on the driver won't have any effect on this issue because this is not actually a flow control problem but a direction control one. Modbus runs on two-wire half-duplex links very often, and that means you need a way to indicate which device is allowed to talk on the bus at all times. The RTS (flow control) from an RS232 port can be used for this purpose as a software workaround.
In the end, it would be much less of a headache if you just replaced your transceiver with one that supports hardware direction control. If you have a serial port with an FTDI engine, you should be able to use the TXEN line for this purpose. Sometimes this hardware line is not directly routed and available on a pin, but you can reroute it with MProg.
I would like to highlight that you did not mention if you are running your Modbus on RS485. I guess it's fair to assume you are, but if you have only a couple of devices next to each other you might use RS232 (even on TTL levels) and forget about direction control (you would be running full-duplex with three wires: TX, RX and GND).

Related

Create virtual serial port in Qt/C++

I would like to create a Linux app which appears as a serial port (e.g. /dev/ttyTEST). This app will listen for commands sent to the port and respond back.
Is this possible using Qt/C++? I haven't done kernel programming, so I'm hoping this is possible in user space.
Everything depends on what the application using such device expects.
If /dev/ttyTEST is to behave like a real serial device and respond properly to all ioctl's that set its speed etc., then this can't be done from userspace. It wouldn't be too hard to implement in the kernel space, though.
If /dev/ttyTEST only needs to be a tty, then provide a pseudo tty (a minimal sketch of this approach follows after this list of cases).
If /dev/ttyTEST is merely to be something another application can write to and read from then socketpair() does it.
If you have control over the application's code, then you can have it check whether the device is a socket pair or a real character device, and ignore the failures of the serial-port-specific APIs on a socket.
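A rough sketch of the pseudo-tty approach using the plain POSIX API (no Qt required; the printed slave path, e.g. /dev/pts/3, is what the other application would open, or what you would symlink to /dev/ttyTEST):
#include <cstdio>
#include <cstdlib>
#include <fcntl.h>
#include <unistd.h>

int main()
{
    // Allocate a pseudo-terminal master; the slave side behaves like a tty.
    int master = posix_openpt(O_RDWR | O_NOCTTY);
    if (master < 0 || grantpt(master) < 0 || unlockpt(master) < 0) {
        perror("pty setup");
        return 1;
    }
    printf("Slave side: %s\n", ptsname(master));   // the path the client opens

    // Toy command loop: read what the client writes, send a canned reply back.
    char buf[256];
    for (;;) {
        ssize_t n = read(master, buf, sizeof(buf));
        if (n <= 0)
            break;
        const char reply[] = "OK\r\n";             // a real app would parse buf here
        write(master, reply, sizeof(reply) - 1);
    }
    close(master);
    return 0;
}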

USB Serial Virtual COM Port : Read not working but write works

I use embedded hardware (by TI: Piccolo Control Stick xxx69) which uses FTDI USB-to-serial converter hardware.
On the PC, I have a simple VC++ application which tries to communicate with the hardware over a virtual COM port (typically COM7).
I am able to connect to the port properly.
I am able to send data from the application/PC to the hardware and it is received correctly, so Tx on the PC side is working fine. The application first opens the connection using the CreateFile(...) API and then uses the WriteFile(...) Windows API to write to the port directly.
Surprisingly, I am not able to read from the serial port into the application. When I call the ReadFile(...) API, it returns TRUE but zero bytes are read. I tried an API monitor, which shows that the kernel API NtReadFile(...) returns STATUS_TIMEOUT [0x00000102]. It is surprising because write works but read doesn't, although the data is there on the line.
The data is on the line: when I connect with a normal HyperTerminal session I can read the data from the controller correctly and it is visible. (So the controller side is all right, because we can see its data in HyperTerminal.)
I am not a Windows programmer; I deal with microcontrollers. Therefore, some pointers on how to pursue this issue would be of great help.
Best Regards,
-Varun
Here is a Reference
The issue is solved. I had to wait until cbInQue > 0 (meaning there is at least one byte in the receive buffer) or until a timeout (as a safety exit) expires. It is a blocking call, but that is OK for my application at the moment. WaitCommEvent() did not work well for me here.
sample snippet:
COMSTAT CST;
DWORD Err = 0;
DWORD count = 0;

while (1)
{
    // Ask the driver how many bytes are waiting in the receive buffer.
    ClearCommError((HANDLE)*h_drv, &Err, &CST);
    if ((CST.cbInQue > 0) || (count > 1000000))   // data available, or crude timeout
        break;
    count++;
}
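After the loop, the pending bytes can then be read in one go. A hedged follow-up sketch (again assuming h_drv holds the handle returned by CreateFile):
if (CST.cbInQue > 0)
{
    char  buf[512];
    DWORD bytesRead = 0;
    DWORD toRead = (CST.cbInQue < sizeof(buf)) ? CST.cbInQue : (DWORD)sizeof(buf);
    if (ReadFile((HANDLE)*h_drv, buf, toRead, &bytesRead, NULL))
    {
        // bytesRead bytes from the controller are now in buf.
    }
}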

Simulate network conditions with a C/C++ Socket

I'm looking for a way to add network emulation to a socket.
The basic solution would be some way to add bandwidth limitation to a connection.
The ideal solution for me would:
Support advanced network properties (latency, packet loss)
Be open source
Have an API similar to standard sockets (or wrap around them)
Work on both Windows and Linux
Support IPv4 and IPv6
I saw a few options that work at the system level, or even as a proxy (Dummynet, WANem, netem, etc.), but those won't work for me, because I want to be able to emulate each socket manually (for example, open one socket with modem emulation and another with 3G emulation). Basically, I want to know how these tools do it.
EDIT: I need to embed this functionality in my own product, therefore using an extra box or a third-party tool that needs manual configuration is not acceptable. I want to write code that does the same thing as those tools do, and my question is how to do it.
Epilogue: In hindsight, my question was a bit misleading. Apparently, there is no way to do what I wanted directly on the socket. There are two options:
Add delays to send/receive operation (Based on #PaulCoccoli's answer):
By adding a delay before sending and receiving you can get a very crude network simulation (a constant delay for latency; delaying sends, so as not to send more than X bytes per second, for bandwidth).
Paul's answer and comment were great inspiration for me, so I award him the bounty.
Add the network simulation logic as a proxy (Based on #m0she and others answer):
Either send the requests through the proxy, or use the proxy to intercept the requests, then add the desired simulation. However, it makes more sense to use a ready-made solution instead of writing your own proxy implementation - from what I've seen, Dummynet is probably the best choice (it is what webpagetest.org uses). Other options are in the answers below; I'll also add DonsProxy.
This is the better way to do it, so I'm accepting this answer.
You can compile a proxy into your software that would do that.
It can be some implementation of a full-fledged SOCKS proxy (like this) or, probably better, something simpler that serves only your purpose (and doesn't require prefixing your communication with the destination and other SOCKS overhead).
That code could run as a separate process or a thread within your process.
Adding throttling to a proxy shouldn't be too hard. You can:
delay forwarding of data if it passes some bandwidth limit
add latency by adding a timer before read/write operations on buffers.
If you're working with a connection-based protocol (like TCP), it would be senseless to drop packets, but with a datagram-based protocol (UDP) packet loss would also be simple to implement.
The connection creation API would be a bit different from normal posix/winsock (unless you do some macro or other magic), but everything else (send/recv/select/close/etc..) is the same.
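A very rough sketch of what such a throttled relay loop could look like (POSIX sockets for brevity; the buffer size, the one-second accounting window and the function name are all illustrative):
#include <chrono>
#include <cstddef>
#include <thread>
#include <sys/socket.h>   // POSIX sockets for brevity; WinSock needs small changes
#include <unistd.h>

// Forward data from 'src' to 'dst', never exceeding 'bytesPerSec'.
// Purely illustrative: a real proxy would also handle partial sends, errors
// and the opposite direction on a second thread.
void relayThrottled(int src, int dst, std::size_t bytesPerSec)
{
    using clock = std::chrono::steady_clock;
    char buf[1500];                         // roughly one Ethernet frame
    auto windowStart = clock::now();
    std::size_t sentInWindow = 0;

    for (;;) {
        ssize_t n = recv(src, buf, sizeof(buf), 0);
        if (n <= 0)
            break;                          // connection closed or error

        // If this chunk would exceed the budget of the current 1 s window,
        // sleep until the window rolls over, then start a new window.
        if (sentInWindow + static_cast<std::size_t>(n) > bytesPerSec) {
            std::this_thread::sleep_until(windowStart + std::chrono::seconds(1));
            windowStart = clock::now();
            sentInWindow = 0;
        }
        send(dst, buf, static_cast<std::size_t>(n), 0);
        sentInWindow += static_cast<std::size_t>(n);
    }
}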
If you're building this into your product, then you should implement a layer of abstraction over the sockets API so you can select your own implementation at run time. Alternatively, you can implement wrappers of each socket function and select whether to call your own version or the system's version.
As for adding latency, you could have your implementation of the sockets API spin off a thread. In that thread, have a priority queue ordered by time (i.e. this background thread does a very basic discrete event simulation). Each "packet" you send or receive could be enqueued along with a delivery time. Each delivery time should have some amount of delay added. I would use some kind of random number generator with a Gaussian distribution.
The background thread would also have to simulate the other side of the connection, though it sounds like you may have already implemented that part?
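A condensed sketch of that background thread: a priority queue ordered by delivery time, with Gaussian jitter added to each item (the class and member names are made up for illustration):
#include <algorithm>
#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <random>
#include <thread>
#include <vector>

using Clock = std::chrono::steady_clock;

// One queued "packet": the action that delivers it and the time it becomes due.
struct Pending {
    Clock::time_point due;
    std::function<void()> deliver;
    bool operator>(const Pending& o) const { return due > o.due; }
};

class LatencySimulator {
public:
    // Mean and standard deviation of the artificial latency, in milliseconds.
    LatencySimulator(double meanMs, double stddevMs)
        : dist_(meanMs, stddevMs), worker_([this] { run(); }) {}

    ~LatencySimulator() {
        { std::lock_guard<std::mutex> lk(m_); stop_ = true; }
        cv_.notify_all();
        worker_.join();
    }

    // Schedule an action (e.g. the real send()) after a random Gaussian delay.
    void submit(std::function<void()> deliver) {
        const double ms = std::max(0.0, dist_(rng_));
        const auto delay = std::chrono::duration_cast<Clock::duration>(
            std::chrono::duration<double, std::milli>(ms));
        std::lock_guard<std::mutex> lk(m_);
        q_.push({Clock::now() + delay, std::move(deliver)});
        cv_.notify_all();
    }

private:
    void run() {
        std::unique_lock<std::mutex> lk(m_);
        while (!stop_) {
            if (q_.empty()) { cv_.wait(lk); continue; }
            const auto due = q_.top().due;
            // Wake up either when the earliest item is due or when new work arrives.
            if (cv_.wait_until(lk, due) == std::cv_status::timeout && !q_.empty()) {
                Pending item = q_.top();
                q_.pop();
                lk.unlock();
                item.deliver();             // perform the delayed send/receive
                lk.lock();
            }
        }
    }

    std::mt19937 rng_{std::random_device{}()};
    std::normal_distribution<double> dist_;
    std::priority_queue<Pending, std::vector<Pending>, std::greater<Pending>> q_;
    std::mutex m_;
    std::condition_variable cv_;
    bool stop_ = false;
    std::thread worker_;
};
submit() would be called from your send()/recv() wrappers; the actual socket call happens inside the deliver callback, so the calling side stays oblivious to the delay.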
I only know of the Network Link Conditioner for Mac OS X Lion. You have to be a Mac developer to download it, so I cannot put a download link here. There is a description on 9to5mac.com: http://9to5mac.com/2011/08/10/new-in-os-x-lion-network-link-conditioner-utility-lets-you-simulate-internet-and-bandwidth-conditions/
This answer might be a partial solution for you when using linux:
Simulate delayed and dropped packets on Linux. It refers to a kernel module called netem, which can simulate all kinds of network problems.
If you want to work with TCP connections, having "packet loss" could be problematic, since a lot of the error handling (like recovering lost packets) is done in the kernel. Simulating this in a cross-platform way could be hard.
You usually add a network device to your network that throttles bandwidth or latency on a port-by-port basis. You can then achieve what you want just by connecting to the port allocated to the particular type of crappy network you want to test, with no code changes or modifications required.
The easiest way to do this is to add iptables rules to a Linux server acting as a proxy.
If you want it to work without a separate device, try trickle, a software package that throttles your network on the client PC (or for Windows).
You might like to check out WANem (http://wanem.sourceforge.net/). WANem is open source and licensed under the GNU General Public License.
WANem allows the application development team to set up a transparent application gateway which can be used to simulate WAN characteristics like network delay, packet loss, packet corruption, disconnections, packet re-ordering, jitter, etc.
I think you could use a tool like Network Simulator. It's free, for Windows.
The only thing to do is to set up your program to use the right ports (and the settings for the network, of course).
If you want a software only solution that you control, you will have to implement it yourself. I know of no such existing package.
While a wrapper layer over a socket may give you the ability to introduce delay, it won't be sufficient to introduce loss or out-of-order delivery. In order to simulate those behaviours, you actually need to intercept the data in transit between the two TCP stacks.
The approach I would recommend is to use a tunneling device (say, tunX). Routes should be set so the client believes the way to the server is through tunX. Additional code (perhaps running in a different thread) would promiscuously intercept traffic on tunX, apply your augmented behavior, and then forward the packets over the true physical interface that will get the traffic to your server. The reverse would happen for packets arriving from the server on the physical interface: those packets would be intercepted by the client code, have the augmented behavior applied, and then be forwarded through tunX.
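On Linux, creating and attaching to such a tunX device comes down to the TUNSETIFF ioctl; a bare-bones sketch (the interface still has to be brought up and routed separately, and the augmentation/forwarding logic is omitted):
#include <cstdio>
#include <cstring>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <linux/if.h>
#include <linux/if_tun.h>
#include <unistd.h>

int main()
{
    int fd = open("/dev/net/tun", O_RDWR);
    if (fd < 0) { perror("open /dev/net/tun"); return 1; }

    struct ifreq ifr = {};
    ifr.ifr_flags = IFF_TUN | IFF_NO_PI;                 // raw IP packets, no extra header
    std::strncpy(ifr.ifr_name, "tun0", IFNAMSIZ - 1);    // interface name is an example

    if (ioctl(fd, TUNSETIFF, &ifr) < 0) { perror("TUNSETIFF"); return 1; }

    // Each read() returns one IP packet routed into tun0; this is where the
    // delay/loss/reordering logic would run before the packet is re-sent
    // towards the real interface.
    unsigned char pkt[2048];
    for (;;) {
        ssize_t n = read(fd, pkt, sizeof(pkt));
        if (n <= 0)
            break;
        // ... augment and forward the packet here ...
    }
    close(fd);
    return 0;
}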
However, since you are testing client software, I am unclear as to why you would want to embed this code in your released software, unless the software itself is a WAN simulating client.

Windows does not flush COM-Buffer

I'm seeing some pretty odd behaviour from Windows regarding my COM buffers.
I use three USB-to-serial converters with FTDI chips. I open the COM ports with CreateFile and it all works fine. All three ports have the same configuration except for the baud rates: two run at 38400 and one at 9600.
Here is the odd part:
I am able to successfully write out of the 9600 port and one of the 38400 ports. The second 38400 port seems to be buffering the data. I have connected to this port with HyperTerminal and see that on the working ports I immediately get a response, while on the "weird" port I only get the data when I close my application...
Has anyone else experienced this? How did you resolve this?
This is kind of a shot in the dark... but.
Check the flow control settings for both ends of the "weird" connection. I've seen strange things like this when the flow control is mismatched. The act of closing the port clears the bits and allows the buffered data to flow.
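If you want to rule that out programmatically, the flow-control bits live in the DCB passed to SetCommState. A hedged sketch of turning every kind of flow control off on an already-opened handle (hPort stands in for whatever your CreateFile call returned):
DCB dcb = { 0 };
dcb.DCBlength = sizeof(dcb);
if (GetCommState(hPort, &dcb))
{
    dcb.fOutxCtsFlow = FALSE;            // don't wait for CTS before sending
    dcb.fOutxDsrFlow = FALSE;            // don't wait for DSR either
    dcb.fRtsControl  = RTS_CONTROL_ENABLE;
    dcb.fDtrControl  = DTR_CONTROL_ENABLE;
    dcb.fOutX = FALSE;                   // no XON/XOFF in either direction
    dcb.fInX  = FALSE;
    if (!SetCommState(hPort, &dcb))
    {
        // inspect GetLastError() here
    }
}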
Having worked a bit with FTDI chips, I would suggest you check out the advanced driver settings for each port. The driver supports both buffering and latency control in order to allow you to compromise between high throughput and low latency. So check the settings that work and use the same for the one that doesn't (if they're not the same).
On a side note, by using FTDI's own API you don't have to keep track of COM-port reassignment and the like. The API is quite similar to the normal Win32 one but exposes more configuration options.

Controlling individual pins on a serial port

I know that serial ports work by sending a single stream of bits in serial. I can write programs to send and receive data from that one pin.
However, there are several other pins on the serial port connector that normally aren't used, but which, according to the documentation, all seem to have some sort of signalling function as opposed to data transfer.
Is it possible in any way to cause the other pins that are not used for direct data transfer to be controlled individually? If so, how would I go about doing that?
EDIT: more information
I am working with a modern machine running Windows 7 64-bit on an Intel Core i7 870 processor. I'm using USB-to-serial adapters because it's impossible for me to do anything directly with a USB port, my computer does not come with serial ports, and for some inexplicable reason I have a bunch of these USB-to-serial adapters lying around.
My goal is to control multiple stepper motors (200 steps per rotation, 4-phase motors). My simple circuitry accepts single high pulses and interprets each one as a command to rotate the motor one step. The circuit itself will handle the power supply and phase switching. I wish to use the data transfer pin to send the rotation signals (we can control position and velocity by altering the number and frequency of high pulses through the pin; however, there is no real pulse-width modulation).
I have many motors to control, but they do not need to be controlled simultaneously. I hope to use the rest of the pins and run them through a simple combinational logic circuit to identify which motor is being moved and which direction it is to move in. This is part of the power switching circuitry.
The data transfer pin will operate normally at some low frequency. However, I want to control the other pins so that I can give a solid on or off signal (they won't be flipping very quickly; they only change when I switch to controlling another motor).
Based on the suggestion of Hans Passant, I'd like to suggest that you use an Arduino instead of a USB-to-serial converter. The "Duemilanove" is an Arduino board that provides 6 PWM outputs (as well as 8 other digital I/Os and 6 analog inputs). Some more specialized boards might be even cheaper (Arduino Pro Mini, $15 in volume, some soldering required).
Using the handshaking pins to send data can work very well, though probably not on a multitasking OS; it's just very processor intensive (because the port needs to be polled constantly) and requires some custom cables. In fact, back in the day, this is exactly how Laplink got such high transfer rates over serial connections (and why, to get those rates, you needed a special 'Laplink' cable). And you need both sides of the link to be aware of what's going on and be able to deal with the custom communications. Laplink would send a packet of data over the normal UART pins while trying to send data from the other end of the packet over the handshaking pins. If the correct cable wasn't used (or there was some other problem with sending over the handshaking pins) there was no problem - all the data would just get sent normally.
Embedded developers might know this as 'bit banging' - often on small embedded systems there's no dedicated UART circuitry - to get serial communications to work they have to toggle a general I/O pin with the correct timing. The same can be done on a UART's handshaking pins. But like I said, it can be detrimental to the system if other work needs to be done.
You can use DTR and RTS only, but that gives just four possible states. You do need to be careful that the device on the other end uses TTL levels. At the end of this link (Serial) there are tips on hardware if you need them.
What kind of data rate are you thinking of when you say high frequency? What kind of serial port do you have? With the old 9-pin connectors on the back of the computer, the best you can do is around 115 kbps. With a USB adapter I have done tests where I could push close to 1 Mbps through the port.
Here's an article from Microsoft that goes into great detail on how to work with serial ports:
http://msdn.microsoft.com/en-us/library/ms810467.aspx
It mentions EscapeCommFunction for directly controlling the DTR line.
Before you check out this information, I'm joining in with the others that say a serial port is inappropriate for your application.
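For what it's worth, driving those two lines from Win32 looks roughly like this sketch (hPort is assumed to be the handle CreateFile returned for the COM port):
#include <windows.h>

// Illustrative: raise or lower DTR and RTS as two general-purpose outputs.
// Note the UART drives them at RS-232 levels (roughly +/-12 V), not TTL.
void setControlPins(HANDLE hPort, bool dtr, bool rts)
{
    EscapeCommFunction(hPort, dtr ? SETDTR : CLRDTR);
    EscapeCommFunction(hPort, rts ? SETRTS : CLRRTS);
}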
I've been trying to find an answer to your question for 3 hours; it seems there is no "simple way" to get a simple boolean signal out of a computer...
But there is always a way, and yet, as simple (maybe even stupid) as this may sound, have you considered using the audio jack connector as an output? It is stereo, so you would have two outputs available; the programming is not that difficult, and you don't need to buy expensive shit to make it work.
If you also need an input, just disassemble a mouse... and bridge the sensors to the servos; probably the cheapest and easiest way of doing it...
Another way would be to use the LEDs for Num Lock, Caps Lock and Scroll Lock on the keyboard. These can be activated from software; you just need to take a cheap external keyboard and use the connectors for these three LEDs.
You are maybe describing a parallel port, where you can set bit patterns all at once and then toggle the xmit line to send it all...
Let's take a look from the "bottom up" point of view:
The serial port pins
Pins on the serial port may be connected to a "controller" or directly connected to the processor. In order for the processor to access (control) the pins, there must be an electrical connection from the pins to the processor. If there isn't, neither the processor nor the program can control the pins.
Using a serial controller
A controller, such as a USART, would be connected between the serial port and the processor. The controller may serve, for example, to convert 8 parallel data bits into a serial bitstream. In the big picture, the controller must provide access to the port pins in order for them to be controlled; if it doesn't, the pins can't be accessed. And if a controller is present, it must itself be connected to the processor for the pins to be controllable.
The Processor and the Serial port
Assuming that the pins you want to control are connected to the processor, the processor must be able to access them. Sometimes they are mapped to physical addresses (such as with an ARM processor), or they may be connected to a port (such as on the Intel 8086). A program would access the pins via a pointer or using an I/O instruction. In some processors, the I/O ports must be enabled and initialized before they can be used.
Support from the OS
Here's a big ticket item: If your platform has an Operating System, the Operating System must provide services to access the pins of the serial port. The services could be a driver or an API function call. If the OS doesn't provide services, you can't access the serial port pins.
Permission from the OS
Assuming the OS has support for the serial port, your program must now have permission to access the port. In some operating systems, permission may only be granted to root or drivers and not users. If your account does not have permission to access the pins, you are not going to read them.
Support from the Programming Language
Lastly, the programming language must have support for the port. If the language doesn't provide support for the port you may have to change languages, or even program in assembly.
Accessing the "unused" pins of a serial port require extensive research into the platform. Not all platforms have serial ports. Serial port access is platform dependent and may change across different platforms.
Ask another, more detailed question and you will get more detailed answers. Please provide the kind of platform and OS that you are using.