Expected output from an RM-1501 RS232 interface?

I have an old RM-1501 digital tachometer which I'm using to try to identify the speed of an object.
According to the manual I should be able to read the data over a serial link. Unfortunately, I don't appear to be able to get any sensible output from the device (it never gives a valid speed). I think it might be a signalling problem, because disconnecting the CTS line starts to get some data through.
Has anyone ever developed anything for one of these / had any success?

The manual does not specify that flow control is used. Open the port with hardware/software flow control disabled.
The manual does not specify the connection - whether it is DTE<->DCE or Null Modem; are you using the cable supplied with the device?
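If you end up using the Win32 API, a minimal sketch of opening the port with both hardware and software flow control disabled might look like the following; the port name and the 9600-8-N-1 line settings are assumptions, so substitute whatever the RM-1501 manual specifies.

#include <windows.h>
#include <iostream>

int main() {
    // "COM1" is a placeholder; use the port the tachometer is attached to.
    HANDLE h = CreateFileA("COM1", GENERIC_READ | GENERIC_WRITE,
                           0, nullptr, OPEN_EXISTING, 0, nullptr);
    if (h == INVALID_HANDLE_VALUE) { std::cerr << "open failed\n"; return 1; }

    DCB dcb = {};
    dcb.DCBlength = sizeof(dcb);
    GetCommState(h, &dcb);
    dcb.BaudRate = CBR_9600;        // assumed; take the real rate from the manual
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;
    dcb.fOutxCtsFlow = FALSE;       // no hardware (CTS/DSR) flow control
    dcb.fOutxDsrFlow = FALSE;
    dcb.fDtrControl  = DTR_CONTROL_ENABLE;
    dcb.fRtsControl  = RTS_CONTROL_ENABLE;
    dcb.fOutX = FALSE;              // no XON/XOFF software flow control
    dcb.fInX  = FALSE;
    if (!SetCommState(h, &dcb)) { std::cerr << "SetCommState failed\n"; return 1; }

    // ... read from h here ...
    CloseHandle(h);
    return 0;
}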

I don't know if this info is still useful, but I tried with even parity and got the data. I think the protocol in the document is incorrect (at least for the version I am using now): it is a 5-character display (9999), so we only need 3 bytes to get the required information and the 4th byte should always be zero. Hence 0x0D as the starting byte followed by 6 bytes makes the entire packet, i.e. 0x0D B1 B2 B3 D1 D2 D3. The B1, B2 and B3 bytes contain the divisor, status, units, function and error flags, whereas the last three bytes (D1, D2, D3) are the data, with D1 as the LSB and D3 as the MSB. I would also like to add that maybe the manufacturer changed the firmware without changing the user manual :), so my version of the protocol might be true for some and wrong for others.
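To make that layout concrete, here is a rough sketch of a decoder for the packet as described above (start byte 0x0D, three flag bytes, three data bytes with D1 as the LSB); the struct and function names are mine, and the whole thing only applies if your unit speaks this same firmware revision.

#include <cstdint>
#include <cstddef>
#include <optional>

// Hypothetical decoder for the 7-byte packet described above:
// 0x0D  B1 B2 B3  D1 D2 D3   (D1 = LSB, D3 = MSB)
struct Reading {
    uint8_t  b1, b2, b3;   // divisor / status / units / function / error flags
    uint32_t value;        // raw 24-bit reading
};

std::optional<Reading> decode(const uint8_t *buf, size_t len) {
    if (len < 7 || buf[0] != 0x0D)   // wait until a full packet starts with 0x0D
        return std::nullopt;
    Reading r;
    r.b1 = buf[1];
    r.b2 = buf[2];
    r.b3 = buf[3];
    r.value = buf[4] | (buf[5] << 8) | (static_cast<uint32_t>(buf[6]) << 16);
    return r;
}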

I tried every combination of hardware flow control (both enabled and disabled) I could think of, so I think it must be a hardware problem. Removing the CTS link between the PC and the device solved the issue.

Is it actually sending data to indicate speed, or is it providing make / break on one of the pins?

Related

What is wrong with COM port communication if I get CE_FRAME errors?

I am trying to understand why I get CE_FRAME error in a serial communication. The documentation reads:
The hardware detected a framing error. Returned when the SERIAL_LSR_FE bit is detected in the LSR hardware register.
This is the framing error indicator. It is set whenever the hardware detects that the incoming serial data unit does not have a valid stop bit. This bit is cleared by reading this register:
#define SERIAL_LSR_FE 0x08
But I don't really know what I am supposed to do about this missing valid stop bit. Can I just ignore the error?
I have no other issues with the communication. Every packet of data (sent by the device) is being captured on the PC. On the PC I am using ClearCommError() to gather statistics for the channel, and from time to time I get this CE_FRAME flag set.
I am not sure if I have to provide details about the CreateFile() and SetCommState() function calls in my code, as there is nothing 'special' about them. But if needed, I can.
If you are programming on Windows, the application programmer does not set the start and stop bits directly; the 'system' takes care of applying the start/stop bits, as well as any parity bits, the baud rate and some other settings. The critical ones are the baud rate, the start and stop bits, and the parity.
By 'the system' I mean the hardware or operating system. I think it is the UART chip which adds the start and stop bits, but you need to set the configuration to use in software.
What you do have to do is set the start and stop bits the same on both ends. So if you are communicating with a device which uses 1 start bit and 2 stop bits, then you have to use the same settings on your end of the communication.
You are likely to get framing errors if these settings are NOT the same on both ends of the communication. I have seen framing errors where, for example, I set the baud rate to 1200 on one end but 9600 on the other, even though my start and stop bits were correctly set on both ends. So it may well be something simple like that.
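On Windows those line settings end up in a DCB passed to SetCommState; a small sketch, where the 9600/8/N/1 values are placeholders that must be changed to match whatever the device actually uses:

#include <windows.h>

// The settings below must mirror the device's configuration exactly;
// a mismatch in baud rate, parity or stop bits tends to show up as CE_FRAME.
bool configurePort(HANDLE h) {
    DCB dcb = {};
    dcb.DCBlength = sizeof(dcb);
    if (!GetCommState(h, &dcb)) return false;
    dcb.BaudRate = CBR_9600;      // placeholder
    dcb.ByteSize = 8;             // data bits
    dcb.Parity   = NOPARITY;      // or EVENPARITY / ODDPARITY as required
    dcb.StopBits = ONESTOPBIT;    // or TWOSTOPBITS if the device uses 2
    return SetCommState(h, &dcb) != FALSE;
}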

Selectively ignoring framing errors on serial ports

For legacy reasons I need to make my program able to talk to a third-party device for which limited specs are available. This in itself is not really an issue and I have some code that can talk to it just fine when it ignores all serial errors.
I would like it to not ignore errors, though; the problem is that every single message received from the device produces a framing error on the first byte (due to some odd design decision on the part of the manufacturer).
When the device transmits a response, it seems to assert a space on the line for 6 bit times, then a mark for 2 bit times, and then settles into normal framing (1 space start bit, 8 data bits, 2 mark stop bits). Or to put it another way: the first byte transmitted appears to use 5-bit framing while every subsequent byte uses 8-bit framing; or that first byte is actually a really short break condition. (Other than this quirk, the message format is fairly well designed and unambiguous.) I'm assuming that this was intended as some sort of interrupt wake-up signal, though I have no idea why it doesn't use the same framing as the rest of the message, or a genuine longer-than-one-character break condition.
Unsurprisingly, this annoys the OS, generating a framing error when it sees that first "byte". Currently I'm using a Windows-based program to talk to this device (but may be migrating to Linux later). On Windows, I'm using overlapped I/O with ReadFileEx to read the actual data and ClearCommError to detect error conditions. Unfortunately, this means that I get frame errors reported independently of data -- this is then treated as an error for the entire chunk of data being read (typically 8 bytes at a time, though sometimes more) and I can't really seem to localise it further.
(The framing error does occasionally corrupt the second byte in the incoming message as well, but fortunately that does not cause any problems in interpreting this particular message format.)
I'd like to be able to identify which bytes specifically caused framing errors so that the port-handling code can pass this on to the protocol-handling code, and it can ignore errors that occur outside of the significant parts of the message. But I don't want to lower performance (which is what I suspect would happen if I tried to read byte-by-byte; and I'm not sure if that would even work anyway).
Is there a good way to do this? Or would I be better off forgetting the whole idea and just ignoring framing errors entirely?
I'm not 100% sure this is a robust solution, but it seems to work for me so far.
I've made it so that when it detects a framing error the next read will just read a single byte, and then the one after that (assuming there aren't still framing errors) will return to reading as much as possible. This seems to clear it past the error and receive the following bytes without issues. (At least when those bytes don't have any framing problems themselves. I'm not sure how to test what it does when they do.)
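Roughly, the logic looks like the sketch below (the function and variable names are mine, and I have used a plain blocking ReadFile rather than the overlapped ReadFileEx for brevity): ClearCommError reports whether a framing error has occurred, and if so the next read is limited to a single byte.

#include <windows.h>
#include <vector>
#include <cstdint>

// Sketch: after a framing error, read one byte at a time until the error
// flag stops appearing, then go back to reading as much as is available.
std::vector<uint8_t> readChunk(HANDLE h, bool &framingSeen) {
    DWORD errors = 0;
    COMSTAT stat = {};
    ClearCommError(h, &errors, &stat);          // also clears the error flags
    framingSeen = (errors & CE_FRAME) != 0;

    DWORD toRead = framingSeen ? 1 : (stat.cbInQue ? stat.cbInQue : 8);
    std::vector<uint8_t> buf(toRead);
    DWORD got = 0;
    if (!ReadFile(h, buf.data(), toRead, &got, nullptr))  // blocking read for brevity;
        got = 0;                                          // the original code uses ReadFileEx
    buf.resize(got);
    return buf;
}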
I had a maskable framing error about six months ago with the use of RTMmouse.
I had fixed it with DOS 7.10 debug, but now I have it again... why?
I had installed WIN95 on my DOS 7.10 primary partition and converted all my secondary partitions too, with the exception of my boot partition. I reinstalled Windows, after it was working fine on its partition, to a WIN95 primary partition. This turned the NMI from maskable to unmaskable. How do I find the error? It is right on the bootstrap [I have a breakpoint for this with RTM.EXE redirection through the mouse driver, after calling lockdrv.bat with the CSDPMI server provided].
So after initial boot I do this right off the bat:
C> debug
-u
I get a bunch of code generated from the execution of autoexec.bat.
Namely, tracing the 8-bit operands I can see that the CPU is raising the NMI through this method. I'm not sure of the accuracy of this structure from memory, but it is something like an AX calculation from lockdrv.bat for every %f in lock %%,
then a push AX. Then the CPU does something else: it pushes AX and then sets AH to zero:
push ax
mov ah,00
This is the bit being disabled, keeping the other data bits intact. This is the frame error in flat assembly; the AL calls are made through SI and DX in BX prior to this:
add [si+dx],al
Well, the computer recognizes modem data bits, but I have none to send or receive [I was up all night last night with Ralf Brown's interrupt list and debug; this was real fun]. However, it is a frame error. I verified the INT 14 error with interrupt 0C as a frame error. WIN95 must have generated it, as it does not like my breakpoint [which should no longer exist under V86, but is nevertheless generating an NMI]. A frame error, being a minor one, is still an NMI if the CPU does not recognize it; it is one of the simpler NMIs that can occur. NMI means non-maskable, or means no exceptions. An example is a divide overflow, whose absolute answer cannot be deviated from due to IRQ conflicts or other interrupt conflicts, which would form a 16-bit general protection fault; that usually occurs in real mode when two or more programs conflict, or in WIN95 in real mode when trying to use the DOS GUI for multitasking, as WIN95 does not support DOS's real-mode SHARE program for file locking and file sharing.

Getting notification that the serial port is ready to be read from

I have to write a C++ application that reads from the serial port byte by byte. This is an important requirement, as it is receiving messages over a radio transmission using Modbus, and the end of a transmission is defined by a gap of 3.5 character times, so I MUST be able to get the message byte by byte. The current system uses DOS to do this, relying on hardware interrupts. We wish to move this software to Linux, but we lack expertise in this area.
I have tried a number of things: polling with a non-blocking read, using select with very short timeout values, setting the size of the serial port's read buffer to one byte, and even using a signal handler on SIGIO, but none of these provides quite what I require.
My boss informs me that the DOS application we currently run uses hardware interrupts to get notification when there is something available to read from the serial port, and that the hardware is accessible directly. Is there any way I can get this functionality from a user-space Linux application? Could I do this if I wrote a custom driver (despite never having done this before and having close to zero knowledge of how the kernel works)?
I have heard that Linux is a very popular OS for hardware control and embedded devices, so I am guessing that this kind of thing must be possible somehow, but I have spent literally weeks on this so far and still have no concrete idea of how best to proceed.
I'm not quite sure how reading byte-by-byte helps you with fractional-character reception, unless the point is that there is information encoded in the duration of the intervals between characters, so that you need to know the timing of when they are received.
At any rate, I do suspect you are going to need to make custom modifications to the serial port kernel driver; that's really not that bad as projects go, and you will learn a lot. You will probably also need to change the configuration of the UART "chip" (really just a tiny corner of some larger device) to make it interrupt after every single byte (i.e. emulate a 16450) instead of when its typically-16-byte (emulating a 16550) buffer is partway full. The code of the DOS program might actually be a help there. An alternative, if the baud rate is not too fast, would be to poll the hardware in the kernel or a realtime extension (or, if it is really slow, as it might be on an HF radio link, maybe even in userspace).
If I'm right about needing to know the timing of the character reception, another option would be to offload the reception to a micro-controller with dual UARTs (or, even better, one UART and one USB interface). You could then have the micro watch the serial stream and output to the PC (either on the other serial port at a much faster baud rate, or over USB) little packages of data that each include one received character and a timestamp, or even have it decode the protocol for you. The nice thing about this is that it would give you operating-system independence and would work on legacy-free machines (byte-by-byte access is probably going to fail with an off-the-shelf USB-serial dongle). You can probably even build it from a cheap eval board, rather than having to manufacture any custom hardware.
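For what it's worth, the plain userspace version would look something like the sketch below: raw mode with VMIN=1/VTIME=0 so read() can return individual bytes, with each byte timestamped to spot the 3.5-character Modbus gap. The device path, baud rate and gap threshold are assumptions, and, as discussed above, scheduling latency means the timestamps are only approximate, which is exactly the weakness of doing this without hardware interrupts.

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>
#include <ctime>

int main() {
    // "/dev/ttyS0" and 9600 baud are placeholders.
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);
    if (fd < 0) { perror("open"); return 1; }

    termios tio = {};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);                 // raw mode: no line buffering or translation
    cfsetispeed(&tio, B9600);
    cfsetospeed(&tio, B9600);
    tio.c_cc[VMIN]  = 1;             // return as soon as one byte is available
    tio.c_cc[VTIME] = 0;             // no inter-byte timer at the tty layer
    tcsetattr(fd, TCSANOW, &tio);

    timespec last = {};
    uint8_t byte;
    while (read(fd, &byte, 1) == 1) {
        timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        long gap_us = (now.tv_sec - last.tv_sec) * 1000000L +
                      (now.tv_nsec - last.tv_nsec) / 1000L;
        // At 9600 baud, 3.5 character times is roughly 4 ms; treat a longer
        // gap as a frame boundary (the threshold is an assumption).
        if (last.tv_sec != 0 && gap_us > 4000)
            printf("--- frame boundary ---\n");
        printf("%02X ", byte);
        last = now;
    }
    close(fd);
    return 0;
}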

Linux serial port reading - can I change size of input buffer?

I am writing an application on Ubuntu Linux in C++ to read data from a serial port. It is working successfully by my code calling select() and then ioctl(fd,FIONREAD,&bytes_avail) to find out how many bytes are available before finally obtaining the data using read().
My question is this: Every time select returns with data, the number of bytes available is reported as 8. I am guessing that this is a buffer size set somewhere and that select returns notification to the user when this buffer is full.
I am new to Linux as a developer (but not new to C++) and I have tried to research (without success) if it is possible to change the size of this buffer, or indeed if my assumptions are even true. In my application timing is critical and I need to be alerted whenever there is a new byte on the read buffer. Is this possible, without delving into kernel code?
You want to use the serial IOCTL TIOCSSERIAL which allows changing both receive buffer depth and send buffer depth (among other things). The maximums depend on your hardware, but if a 16550A is in play, the max buffer depth is 14.
You can find code that does something similar to what you want to do here
The original link went bad: http://www.groupsrv.com/linux/about57282.html
The new one will have to do until I write another or find a better example.
You can try playing with the VMIN and VTIME values in the c_cc member of the termios struct.
Some info here, especially in section 3.2.
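A minimal sketch of that idea, assuming the port is already open on descriptor fd: VMIN=1 with VTIME=0 asks the tty layer to complete a read() as soon as a single byte is available rather than waiting for more.

#include <termios.h>

// fd is an already-opened serial port; switch to non-canonical mode so that
// c_cc[VMIN] and c_cc[VTIME] apply, then ask for single-byte reads.
bool byte_at_a_time(int fd) {
    termios tio;
    if (tcgetattr(fd, &tio) != 0) return false;
    tio.c_lflag &= ~ICANON;     // non-canonical (raw-ish) input
    tio.c_cc[VMIN]  = 1;        // read() may return after a single byte
    tio.c_cc[VTIME] = 0;        // no additional inter-byte timeout
    return tcsetattr(fd, TCSANOW, &tio) == 0;
}

Whether this actually changes the 8-byte batching you are seeing depends on the driver and the UART FIFO trigger level, so treat it as the first thing to try rather than a guaranteed fix.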

Timing of reads from serial port on Windows

I'm trying to implement a protocol over a serial port on a Windows (XP) machine.
The problem is that message synchronization in the protocol is done via a gap in the messages, i.e., an x-millisecond gap between sent bytes signifies a new message.
Now, I don't know if it is even possible to accurately detect this gap.
I'm using the win32/serport.h API to read, in one of the many threads of our server. Data from the serial port gets buffered, so if there is enough latency in our software (and there will be enough), I will get multiple messages from the port buffer in one sequence of reads.
Is there a way of reading from the serial port, so that I would detect gaps in when particular bytes were received?
If you want more control over a Windows serial port, you will have to write your own driver.
The problem I see is that Windows may be executing other tasks or programs (such as virus checking), which will cause timing issues for your application. Your application will not know when it has been swapped out for another application.
If possible, I suggest your program time-stamp the end of the last message. When the next message arrives, take another time stamp. The difference between the time stamps may help in detecting new messages.
I highly suggest changing the protocol so that timing is not a factor.
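A rough sketch of that time-stamping idea (QueryPerformanceCounter as the clock; the 5 ms threshold is a placeholder). Note that it only records when each read completed, not when each byte actually arrived, so it inherits the jitter problem described above.

#include <windows.h>
#include <cstdint>

// Elapsed microseconds between two QueryPerformanceCounter readings.
static int64_t elapsed_us(LARGE_INTEGER a, LARGE_INTEGER b, LARGE_INTEGER freq) {
    return (b.QuadPart - a.QuadPart) * 1000000LL / freq.QuadPart;
}

void readLoop(HANDLE h) {
    LARGE_INTEGER freq, last = {}, now;
    QueryPerformanceFrequency(&freq);

    uint8_t buf[64];
    DWORD got = 0;
    while (ReadFile(h, buf, sizeof(buf), &got, nullptr) && got > 0) {
        QueryPerformanceCounter(&now);
        // A gap longer than the protocol's inter-message silence (placeholder
        // value) suggests the bytes just read begin a new message.
        if (last.QuadPart != 0 && elapsed_us(last, now, freq) > 5000) {
            // start of a new message
        }
        // ... hand buf[0..got) to the protocol parser ...
        last = now;
    }
}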
I've had to do something similar in the past. Although the protocol in question did not use any delimiter bytes, it did have a crc and a few fixed value bytes at certain positions so I could speculatively decode the message to determine if it was a complete individual message.
It always amazes me when I encounter these protocols that have no context information in them.
Look for CRC fields, length fields, type fields with a corresponding indication of the expected message length, or any other fixed-offset fields with predictable values that could help you determine when you have a single complete message.
Another approach might be to use the CreateFile, ReadFile and WriteFile API functions. There are settings you can change using the SetCommTimeouts function that allow you to halt the I/O operation when a certain time gap is encountered.
Doing that along with some speculative decoding could be your best bet.
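The relevant piece of the SetCommTimeouts approach is ReadIntervalTimeout, which tells the driver to complete a pending ReadFile once the line has been quiet for the given number of milliseconds; a minimal sketch (the 5 ms value is a placeholder for the protocol's actual gap):

#include <windows.h>

// Ask the driver to end a ReadFile when the gap between two received bytes
// exceeds ReadIntervalTimeout milliseconds.
bool useGapTimeout(HANDLE h) {
    COMMTIMEOUTS to = {};
    to.ReadIntervalTimeout         = 5;   // ms of silence that ends a read (placeholder)
    to.ReadTotalTimeoutConstant    = 0;
    to.ReadTotalTimeoutMultiplier  = 0;
    to.WriteTotalTimeoutConstant   = 0;
    to.WriteTotalTimeoutMultiplier = 0;
    return SetCommTimeouts(h, &to) != FALSE;
}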
It sounds odd that there is no sort of data format delineating a "message" from the device. Every serial port device I've worked with has had some form of a header that described the data it transmitted.
Just throwing this out there, but could you use the Win32 Asynchronous ReadFileEx() and WriteFileEx() system calls? They allow you to attach a callback function, and then you might be able to manage a timer within the callback. The timer would only provide you a rough estimation, however.
If you need to write your own driver, the Windows Driver Kit has a sample that shows how to write a serial port driver. I can't imagine that you'll be able to override the Windows serial port bus driver (the driver that directly controls the serial port on your Windows machine), but you might be able to write a driver that sits on top of the bus driver.
I thought so. You all grew up with the web, I didn't, though I was present at the birth. Let me guess, the one byte is 1 (SOH) or 2 (STX)? In my view it is enough. You just need to think outside the box.
Say you receive a message_delimiter followed by 4 (as the length) and then 4 bytes of data. A valid message is not just those 6 bytes.
message_delimiter - 1 byte
4 - length - 1 byte
(4 data bytes) - 4 bytes
A valid message is always bounded by the message_delimiter, so it would look like
message_delimiter - 1 byte
4 - length - 1 byte
(4 data bytes) - 4 bytes
message_delimiter - 1 byte
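To make that framing concrete, here is a small sketch of a parser that only hands a message on once it has seen both bounding delimiters and a matching length; the delimiter value and the callback are assumptions, not part of the poster's protocol.

#include <cstdint>
#include <vector>
#include <functional>

// Hypothetical delimiter value; the real protocol would define it.
constexpr uint8_t kDelimiter = 0x01;  // e.g. SOH

// Feed received bytes in; invokes onMessage with the payload once a frame of
// the form  delimiter, length, payload[length], delimiter  is complete.
class Framer {
public:
    explicit Framer(std::function<void(const std::vector<uint8_t>&)> cb)
        : onMessage(std::move(cb)) {}

    void feed(uint8_t b) {
        if (buf.empty() && b != kDelimiter) return;   // wait for a start delimiter
        buf.push_back(b);
        if (buf.size() < 3) return;                   // need delimiter + length first
        size_t need = 3 + buf[1];                     // start, length, payload, end
        if (buf.size() < need) return;
        if (buf.back() == kDelimiter)                 // frame closed correctly
            onMessage(std::vector<uint8_t>(buf.begin() + 2, buf.end() - 1));
        buf.clear();                                  // drop the frame either way
    }

private:
    std::vector<uint8_t> buf;
    std::function<void(const std::vector<uint8_t>&)> onMessage;
};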