I'm trying to capture a signal much like UART communication.
This specific signal is composed of:
1 start bit (low)
16 data bits
1 stop bit (high)
From testing, I figured out that the signal runs at roughly 8-9 μs per bit, which led me to believe that the baud rate is around 115.2 kbps (1/115200 s ≈ 8.68 μs per bit).
My first idea was to try a "manual" approach, so I wrote a small C program, but I couldn't sample the signal with the proper timing.
From here, I decided to look for libraries that could do the job. I did try termios and Boost's asio::serial_port, but neither seems to be able to receive 16-bit characters.
Am I being naive in trying to configure a 16-bit receiver?
Does a "16-bit UART" even make sense?
Thanks!
-nls
There's nothing fundamentally wrong with the idea of a UART which supports a 16-data-bit configuration, but I'm not aware of any which do. 8 or 9 is usually the limit.
If you're communicating with a device which only supports that configuration (what the heck kind of device is that?), your only real option is bit-banging, which would be best done by an MCU dedicated to the purpose. You are not going to get microsecond-accurate timing in user space on a multitasking operating system, no matter what libraries you bring to bear.
EDIT: Note that you could do this, more or less, with bit-banging from a dedicated kernel-space driver. But that would make the system nearly unusable. The whole reason UARTs exist is because the CPU has better things to do than poll a line every few microseconds.
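For illustration only, here is a minimal bit-banging sketch in C++ for such a frame, assuming an MCU-style environment: read_pin() and delay_us() are hypothetical placeholders for whatever GPIO and timing primitives the target actually provides, and the ~9 μs bit time matches the figure measured in the question.

#include <cstdint>

bool read_pin();                       // placeholder: returns the current line level
void delay_us(unsigned microseconds);  // placeholder: busy-wait delay

// ~8.68 us per bit at 115200 baud; a real implementation should pace itself
// from a hardware timer instead of accumulating delay_us() rounding error.
constexpr unsigned BIT_TIME_US = 9;

// Waits for the start bit, samples 16 data bits LSB-first at mid-bit,
// then checks the stop bit. Returns false on a framing error.
bool receive_frame(std::uint16_t &value)
{
    while (read_pin()) { }                        // wait for the line to go low (start bit)
    delay_us(BIT_TIME_US + BIT_TIME_US / 2);      // skip start bit, land mid-way into bit 0

    std::uint16_t v = 0;
    for (int i = 0; i < 16; ++i) {
        if (read_pin())
            v |= std::uint16_t(1) << i;           // assume LSB-first, as a UART would send it
        delay_us(BIT_TIME_US);
    }
    bool stop_ok = read_pin();                    // stop bit should be high
    value = v;
    return stop_ok;
}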
I have an AHRS (attitude heading reference system) that interfaces with my C++ application. I receive a 50Hz stream of messages via Ethernet from the AHRS, and as part of this message, I get UTC time. My system will also have NTPD running as the time server for our embedded network. The AHRS also has a 1PPS output that indicates the second roll-over time for UTC. I would like to synchronize the NTPD time with the UTC. After some research, I have found that there are techniques that utilize a serial port as input for the 1PPS. From what I can find, these techniques use GPSD to read the 1PPS and communicate with NTPD to synchronize the system time. However, GPSD is expecting a NMEA formatted message from a GPS. I don't have that.
The way I see it now, I have a couple of possible approaches:
Don't use GPSD. Write a program that reads the 1PPS and the Ethernet message containing UTC, and then somehow communicates this information to NTPD.
Use GPSD. Write a program that repackages the Ethernet message into something that can be sent to GPSD, and let it handle the interaction with NTPD.
Something else?
Any suggestions would be very much appreciated.
EDIT:
I apologize for this poorly constructed question.
My solution to this problem is as follows:
1 - interface 1PPS to RS232 port, which as it turns out is a standard approach that is handled by GPSD.
2 - write a custom C++ application to read the Ethernet messages containing UTC, and from that build an NMEA message containing the UTC (a sketch of that part follows below).
3 - feed the NMEA message to GPSD, which in turn interfaces with NTPD to synchronize the GPS/1PPS information with system time.
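For what it's worth, the NMEA part of step 2 is mostly string formatting plus the standard XOR checksum. A rough sketch building a GPZDA (date/time) sentence, where UtcTime is just an assumed container for whatever the Ethernet message decodes to:

#include <cstdio>
#include <string>

struct UtcTime {                      // assumed: filled from the AHRS Ethernet message
    int hours, minutes, day, month, year;
    double seconds;
};

std::string make_gpzda(const UtcTime &t)
{
    char body[64];
    std::snprintf(body, sizeof body, "GPZDA,%02d%02d%05.2f,%02d,%02d,%04d,00,00",
                  t.hours, t.minutes, t.seconds, t.day, t.month, t.year);

    unsigned char csum = 0;
    for (const char *p = body; *p; ++p)
        csum ^= static_cast<unsigned char>(*p);   // XOR of everything between '$' and '*'

    char sentence[80];
    std::snprintf(sentence, sizeof sentence, "$%s*%02X\r\n", body, csum);
    return sentence;
}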
I don't know why you would want to drive a PPS device with a signal that is delivered via Ethernet frames. Moreover, PPS does not work the way you seem to think it does. There is no timecode in a PPS signal, so you can't sync the time to the PPS signal; the PPS signal is simply used to inform the computer of how long a second is.
There are examples that show how a PPS signal can be read in using a serial port, e.g. by attaching it to an interrupt-capable pin - that might be Ring Indicator (RI) or something else with comparable features. The problem I see there is that any sort of code-driven servicing of an interrupt has its latencies and jitter. This is defined by your system design (and, if you are doing it yourself, by your own system-tailored special interrupt handler routine - on a PC, even the good old ISA-bus NMI handlers might show such effects).
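As a concrete Linux-only illustration of that serial-port trick, the modem-status lines can be watched with the TIOCMIWAIT ioctl, which blocks until the chosen line (DCD here) changes state; the timestamp is then taken in user space, so all of the latency and jitter caveats above apply. The device path is an assumption:

#include <cstdio>
#include <ctime>
#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main()
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);   // assumed port with 1PPS wired to DCD
    if (fd < 0) { perror("open"); return 1; }

    for (;;) {
        if (ioctl(fd, TIOCMIWAIT, TIOCM_CD) < 0) {     // block until DCD changes state
            perror("TIOCMIWAIT");
            break;
        }
        timespec ts;
        clock_gettime(CLOCK_REALTIME, &ts);            // stamp the edge as soon as we wake up
        std::printf("PPS edge at %ld.%09ld\n", static_cast<long>(ts.tv_sec), ts.tv_nsec);
    }
    close(fd);
    return 0;
}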
To my best understanding, people who do time sync on a "computer" use a true hardware timer-counter (with e.g. 64 bits) and a latch that gets triggered to sample and hold the value of the timer on every incoming 1PPS pulse. Folks are already doing that with PTP over Ethernet, with the small variation that a specific edge of the incoming data is used as the trigger; sender and receiver can then be synchronized using further program logic that grabs the resulting value from the built-in PTP hardware latch.
see here: https://en.wikipedia.org/wiki/Precision_Time_Protocol
along with e.g. 802.1AS: http://www.ieee802.org/1/pages/802.1as.html
described on Wikipedia in the section "Related initiatives" as:
"IEEE 802.1AS-2011 is part of the IEEE Audio Video Bridging (AVB) group of standards, further extended by the IEEE 802.1 Time-Sensitive Networking (TSN) Task Group. It specifies a profile for use of IEEE 1588-2008 for time synchronization over a virtual bridged local area network (as defined by IEEE 802.1Q). In particular, 802.1AS defines how IEEE 802.3 (Ethernet), IEEE 802.11 (Wi-Fi), and MoCA can all be parts of the same PTP timing domain."
some article (in German): https://www.elektronikpraxis.vogel.de/ethernet-fuer-multimediadienste-im-automobil-a-157124/index4.html
and some presentation: http://www.ieee802.org/1/files/public/docs2008/as-kbstanton-8021AS-overview-for-dot11aa-1108.pdf
My rationale regarding your question is:
Yes, it's possible, but it is a precision-limited design due to various internal factors such as the latency and jitter of the interrupt handler you are forced to use. The achievable overall precision, per pulse and over the long term, is hard to say, but it might be in the range of some 10 ms at startup with a single pulse down to maybe (a guess) 0.1 ms. Doing it means proving it: long-term observations should help you unveil the true practical limits with your very specific computer and selected software environment.
For legacy reasons I need to make my program able to talk to a third-party device for which limited specs are available. This in itself is not really an issue and I have some code that can talk to it just fine when it ignores all serial errors.
I would like it to not ignore errors, though -- but the problem is that every single message received from the device produces a framing error on the first byte (due to some odd design decision on the part of the manufacturer).
When the device transmits a response, it seems to assert a space on the line for 6 bit times, then a mark for 2 bit times, and then settles into normal framing (1 space start bit, 8 data bits, 2 mark stop bits). Or to put it another way: the first byte transmitted appears to use 5-bit framing while every subsequent byte uses 8-bit framing; or that first byte is actually a really short break condition. (Other than this quirk, the message format is fairly well designed and unambiguous.) I'm assuming that this was intended as some sort of interrupt wake-up signal, though I have no idea why it doesn't use the same framing as the rest of the message, or a genuine longer-than-one-character break condition.
Unsurprisingly, this annoys the OS, generating a framing error when it sees that first "byte". Currently I'm using a Windows-based program to talk to this device (but may be migrating to Linux later). On Windows, I'm using overlapped I/O with ReadFileEx to read the actual data and ClearCommError to detect error conditions. Unfortunately, this means that I get frame errors reported independently of data -- this is then treated as an error for the entire chunk of data being read (typically 8 bytes at a time, though sometimes more) and I can't really seem to localise it further.
(The framing error does occasionally corrupt the second byte in the incoming message as well, but fortunately that does not cause any problems in interpreting this particular message format.)
I'd like to be able to identify which bytes specifically caused framing errors so that the port-handling code can pass this on to the protocol-handling code, and it can ignore errors that occur outside of the significant parts of the message. But I don't want to lower performance (which is what I suspect would happen if I tried to read byte-by-byte; and I'm not sure if that would even work anyway).
Is there a good way to do this? Or would I be better off forgetting the whole idea and just ignoring framing errors entirely?
I'm not 100% sure this is a robust solution, but it seems to work for me so far.
I've made it so that when it detects a framing error the next read will just read a single byte, and then the one after that (assuming there aren't still framing errors) will return to reading as much as possible. This seems to clear it past the error and receive the following bytes without issues. (At least when those bytes don't have any framing problems themselves. I'm not sure how to test what it does when they do.)
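For reference, a simplified sketch of that idea using the Win32 calls mentioned in the question (synchronous ReadFile here for brevity; the real code uses overlapped I/O, and the 8-byte chunk size is just the example figure from the question):

#include <windows.h>
#include <vector>

// Reads the next chunk; after ClearCommError reports CE_FRAME the read is
// limited to a single byte so the error can be pinned to that byte, and
// normal-sized reads resume afterwards.
std::vector<BYTE> read_chunk(HANDLE port, bool &had_frame_error)
{
    DWORD errors = 0;
    COMSTAT stat = {};
    ClearCommError(port, &errors, &stat);          // query and clear the error flags
    had_frame_error = (errors & CE_FRAME) != 0;

    DWORD want = had_frame_error ? 1 : 8;          // shrink the read right after an error
    std::vector<BYTE> buf(want);
    DWORD got = 0;
    if (!ReadFile(port, buf.data(), want, &got, nullptr))
        got = 0;
    buf.resize(got);
    return buf;
}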
I had a maskable framing error about six months ago with the use of RTMmouse.
I had fixed it with DOS 7.10 debug, but now I have it again... why?
I had installed WIN95 on my DOS 7.10 primary partition and converted all my secondary partitions too... with the exception of my boot partition. I reinstalled Windows, after it was working fine on its partition, to a WIN95 primary partition. This activated the NMI - which went from maskable to unmaskable. How do I find the error? It is right on the bootstrap [I have a breakpoint for this with RTM.EXE redirection through the mouse driver after calling lockdrv.bat, with the CSDPMI server provided].
So after initial boot, I do this right off the bat:
C> debug
-u
I get a bunch of code generated from the execution of autoexec.bat.
Namely, tracing the 8-bit operands I can see that the CPU is giving the NMI through this method - not sure of the accuracy of this structure from memory, but something like an AX calculation from lockdrv.bat for every %f in lock %%,
then a push AX. Then the CPU does something else - it pushes AX and then sets AH to zero:
push ax
mov ah,00
This is the bit being disabled, keeping the other data bits intact. This is the frame error in flat assembly - the AL calls are made through SI and DX in BX prior to this:
add [si+dx], al
Well, the computer recognizes modem data bits but I have none to send or receive [I was up all night last night with Ralf Brown's interrupt list and debug] - this was real fun. However, it is a frame error. I verified the int 14 error with interrupt 0C as a frame error - WIN95 must have generated it, as it does not like my breakpoint [which should no longer exist under V86 but is nevertheless generating an NMI]. A frame error, being a minor one, is still an NMI if the CPU is not recognizing it - it is one of the basic or simpler NMIs that can occur. NMI means non-maskable, or means no exceptions. An example is a divide overflow, whose absolute answer cannot be deviated from due to IRQ conflicts or other interrupt conflicts, which would form a 16-bit general protection fault - this usually occurs in real mode when two or more programs conflict, or in WIN95 in real mode when trying to use a DOS GUI for multitasking, as WIN95 does not support DOS's real-mode SHARE program for file locking and file sharing.
I have to write a C++ application that reads from the serial port byte by byte. This is an important requirement, as it is receiving messages over radio transmission using Modbus, and the end of a transmission is defined by a 3.5-character-length silence, so I MUST be able to get the message byte by byte. The current system uses DOS to do this, via hardware interrupts. We wish to move to Linux as the OS for this software, but we lack expertise in this area.
I have tried a number of things to do this - polling with a non-blocking read, using select with very short timeout values, setting the size of the serial port's read buffer to one byte, and even using a signal handler on SIGIO - but none of these provide quite what I require. My boss informs me that the DOS application we currently run uses hardware interrupts to get notified when there is something available to read from the serial port, and that the hardware is accessible directly.
Is there any way that I can get this functionality from a user-space Linux application? Could I do this if I wrote a custom driver (despite never having done this before and having close to zero knowledge of how the kernel works)? I have heard that Linux is a very popular OS for hardware control and embedded devices, so I am guessing that this kind of thing must be possible somehow, but I have spent literally weeks on this so far and still have no concrete idea of how best to proceed.
I'm not quite sure how reading byte-by-byte helps you with fractional-character reception, unless it's that there is information encoded in the duration of intervals between characters, so you need to know the timing of when they are received.
At any rate, I do suspect you are going to need to make custom modifications to the serial port kernel driver; that's really not all that bad as projects go, and you will learn a lot. You will probably also need to change the configuration of the UART "chip" (really just a tiny corner of some larger device) to make it interrupt after only a single byte (i.e. emulate a 16450) instead of when its typically-16-byte buffer (emulating a 16550) is partway full. The code of the DOS program might actually be a help there. An alternative, if the baud rate is not too fast, would be to poll the hardware in the kernel or a realtime extension (or, if it is really really slow as it might be on an HF radio link, maybe even in user space; a sketch of that last option follows).
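To make the user-space variant concrete, here is a rough termios sketch: VMIN=1/VTIME=0 makes read() return as soon as a single byte arrives, and the 3.5-character gap is then inferred from timestamps. The device path and 9600 baud are assumptions, and the gap detection is only as accurate as scheduling latency allows, which is exactly the limitation discussed above.

#include <cstdio>
#include <ctime>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

int main()
{
    int fd = open("/dev/ttyS0", O_RDWR | O_NOCTTY);   // assumed device
    if (fd < 0) { perror("open"); return 1; }

    termios tio{};
    tcgetattr(fd, &tio);
    cfmakeraw(&tio);
    cfsetispeed(&tio, B9600);                          // assumed baud rate
    cfsetospeed(&tio, B9600);
    tio.c_cc[VMIN]  = 1;                               // return as soon as one byte arrives
    tio.c_cc[VTIME] = 0;                               // no inter-byte timeout in the driver
    tcsetattr(fd, TCSANOW, &tio);

    const double gap_s = 3.5 * 11.0 / 9600.0;          // 3.5 character times (11 bits/char)
    timespec last{};
    unsigned char byte;
    while (read(fd, &byte, 1) == 1) {
        timespec now;
        clock_gettime(CLOCK_MONOTONIC, &now);
        double dt = (now.tv_sec - last.tv_sec) + (now.tv_nsec - last.tv_nsec) * 1e-9;
        if (last.tv_sec != 0 && dt > gap_s)
            std::printf("\n-- frame boundary --\n");   // Modbus RTU end-of-frame heuristic
        std::printf("%02X ", byte);
        last = now;
    }
    close(fd);
    return 0;
}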
If I'm right about needing to know the timing of the character reception, another option would be to offload the reception to a microcontroller with dual UARTs (or even better, one UART and one USB interface). You could then have the micro watch the serial stream and output to the PC (either on the other serial port at a much faster baud rate, or over USB) little packets of data that each include one received character and a timestamp - or even have it decode the protocol for you. The nice thing about this is that it gets you operating-system independence and works on legacy-free machines (byte-by-byte access is probably going to fail with an off-the-shelf USB-serial dongle). You can probably even build it from a cheap eval board rather than having to manufacture any custom hardware.
I'm developing a cross-platform tool that captures multiple UDP streams with various bit rates.
boost::asio is used for networking. Is there any way to detect the situation where the UDP buffer was full and data loss on the socket could have taken place? The only way I can see now is reading /proc/%pid%/net/udp, but that's not applicable on Windows, as you know :). Also, I'd like to use Boost features for this if possible.
If you need this capability, you have to code it into the protocol you are using. UDP is incapable of doing this by itself. For example, you could put a sequence number in each datagram. Missing datagrams would correspond to missing sequence numbers.
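A minimal sketch of that idea; the 32-bit counter, host byte order and framing here are arbitrary choices for illustration, not part of any standard:

#include <cstdint>
#include <cstring>
#include <vector>

// Sender side: prepend a monotonically increasing counter to each payload.
std::vector<std::uint8_t> wrap(std::uint32_t &next_seq, const std::vector<std::uint8_t> &payload)
{
    std::vector<std::uint8_t> out(4 + payload.size());
    std::uint32_t n = next_seq++;
    std::memcpy(out.data(), &n, 4);                   // host byte order, for brevity
    if (!payload.empty())
        std::memcpy(out.data() + 4, payload.data(), payload.size());
    return out;
}

// Receiver side: returns how many datagrams went missing before this one.
std::uint32_t count_missing(std::uint32_t &expected, const std::vector<std::uint8_t> &dgram)
{
    std::uint32_t seq;
    std::memcpy(&seq, dgram.data(), 4);
    std::uint32_t missed = seq - expected;            // unsigned wrap-around handles overflow
    expected = seq + 1;
    return missed;
}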
I've just hit the same issue (although for me Linux-specific), and despite the question being old might as well document my findings for others.
As far as I know, there is no portable way to do this, and nothing directly supported by Boost.
That said, there are some platform-specific ways of doing it. In Linux, it can be done by setting the SO_RXQ_OVFL socket-option, and then getting the replies using recvmsg(). It's poorly documented though, but you may be helped by http://lists.openwall.net/netdev/2009/10/09/75.
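Roughly, the Linux-specific version looks like the following. It works on the native socket descriptor (with Boost.Asio you can get it via socket.native_handle()), and the counter the kernel attaches as ancillary data is cumulative for the socket:

#include <cstdint>
#include <cstdio>
#include <cstring>
#include <sys/socket.h>

void read_with_drop_count(int fd)
{
    int one = 1;
    setsockopt(fd, SOL_SOCKET, SO_RXQ_OVFL, &one, sizeof one);   // ask for the drop counter

    char data[2048];
    char ctrl[CMSG_SPACE(sizeof(std::uint32_t))];

    iovec iov{data, sizeof data};
    msghdr msg{};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = ctrl;
    msg.msg_controllen = sizeof ctrl;

    if (recvmsg(fd, &msg, 0) < 0)
        return;

    for (cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
        if (c->cmsg_level == SOL_SOCKET && c->cmsg_type == SO_RXQ_OVFL) {
            std::uint32_t dropped;
            std::memcpy(&dropped, CMSG_DATA(c), sizeof dropped);
            std::printf("kernel has dropped %u datagrams on this socket so far\n", dropped);
        }
    }
}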
One way to avoid it in the first place is to increase the receive buffers (I assume you've investigated that already, but I'm including it for completeness). The SO_RCVBUF option seems fairly well supported cross-platform. http://pubs.opengroup.org/onlinepubs/7908799/xns/setsockopt.html http://msdn.microsoft.com/en-us/library/windows/hardware/ff570832(v=vs.85).aspx OSes put an upper limit on this though, which an administrator might have to increase. On Linux, for example, it can be increased using /proc/sys/net/core/rmem_max.
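Since the question mentions wanting to stay within Boost where possible, the buffer size itself can at least be set and read back through Boost.Asio's socket options; reading it back shows what the OS actually granted after clamping:

#include <boost/asio.hpp>
#include <iostream>

void grow_receive_buffer(boost::asio::ip::udp::socket &sock)
{
    // Request a 4 MiB buffer; the OS may silently clamp this to its maximum
    // (e.g. /proc/sys/net/core/rmem_max on Linux).
    sock.set_option(boost::asio::socket_base::receive_buffer_size(4 * 1024 * 1024));

    boost::asio::socket_base::receive_buffer_size actual;
    sock.get_option(actual);
    std::cout << "receive buffer is now " << actual.value() << " bytes\n";
}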
Finally, one way for your application to assess its "load", which with large input buffers might serve as early detection of overloading, could be to introduce a timestamp before and after the async operations. In pseudocode (not boost::asio-adapted):
work_time = 0
idle_time = 0
b = clock.now()
while running:
    a = clock.now()
    work_time += a-b
    data = wait_for_input()
    b = clock.now()
    idle_time += b-a
    process(data)
Then every second or so, you can check and reset work_time / (work_time+idle_time). If it approaches 1, you know you're heading for trouble and can send out an alert or take other actions.
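A rough C++ translation of the same loop using std::chrono, with the data source, processing and loop control left as placeholders:

#include <chrono>

struct Packet {};                        // placeholder payload type
extern bool running;                     // loop-control flag, assumed to exist elsewhere
Packet wait_for_input();                 // placeholder: blocks until data arrives
void process(const Packet &);            // placeholder: handles one datagram

// Returns the fraction of time spent working rather than waiting; in practice
// you would sample and reset the counters every second or so instead.
double work_fraction()
{
    using clock = std::chrono::steady_clock;
    clock::duration work_time{}, idle_time{};
    auto b = clock::now();
    while (running) {
        auto a = clock::now();
        work_time += a - b;              // time spent in process() and bookkeeping
        Packet data = wait_for_input();
        b = clock::now();
        idle_time += b - a;              // time spent blocked waiting for input
        process(data);
    }
    auto total = work_time + idle_time;
    if (total.count() == 0)
        return 0.0;
    return std::chrono::duration<double>(work_time).count() /
           std::chrono::duration<double>(total).count();
}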
I'm writing a program that implements the RFC 2544 network test. As the part of the test, I must send UDP packets at a specified rate.
For example, I should send 64-byte packets at 1 Gb/s. That means that I should send a UDP packet every 0.5 microseconds. The pseudocode for "sending UDP packets at a specified rate" could look like this:
while (true) {
    some_sleep(0.5);
    Send_UDP();
}
But I'm afraid there is no some_sleep() function on Windows, or on Linux either, that can give me 0.5-microsecond resolution.
Is it possible to do this task in C++, and if yes, what is the right way to do it?
Two approaches:
Implement your own sleep by busy-looping using a high-resolution timer such as Windows' QueryPerformanceCounter
Allow slight variations in rate, insert Sleep(1) when you're enough ahead of the calculated rate. Use timeBeginPeriod to get 1ms resolution.
For both approaches, you can't rely on the sleeps being exact. You will need to keep running totals and adjust the sleep period as you get ahead or behind; a sketch combining both ideas follows.
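A Windows-flavoured sketch of that combination (absolute deadlines from QueryPerformanceCounter, Sleep(1) only while comfortably ahead, busy-wait for the remainder); send_udp() and the 2 ms threshold are placeholders to adapt:

#include <windows.h>
#include <cstdint>

#pragma comment(lib, "winmm.lib")   // timeBeginPeriod/timeEndPeriod

void send_udp();                    // placeholder for the actual socket send

void paced_send(double packets_per_second, std::uint64_t count)
{
    timeBeginPeriod(1);                             // 1 ms Sleep granularity

    LARGE_INTEGER freq, now;
    QueryPerformanceFrequency(&freq);
    QueryPerformanceCounter(&now);

    const double ticks_per_packet = freq.QuadPart / packets_per_second;
    double next_deadline = now.QuadPart + ticks_per_packet;

    for (std::uint64_t i = 0; i < count; ++i) {
        send_udp();
        for (;;) {
            QueryPerformanceCounter(&now);
            double ahead = next_deadline - now.QuadPart;
            if (ahead <= 0)
                break;                              // deadline reached, send the next packet
            if (ahead > 0.002 * freq.QuadPart)
                Sleep(1);                           // far ahead of schedule: give the CPU back
            // otherwise busy-wait the last stretch for accuracy
        }
        next_deadline += ticks_per_packet;          // absolute schedule, so errors don't accumulate
    }

    timeEndPeriod(1);
}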
This might be helpful, but I doubt it's directly portable to anything but Windows. Implement a Continuously Updating, High-Resolution Time Provider for Windows by Johan Nilsson.
However, do keep in mind that for packets that small, the IP and UDP overhead is going to account for a large fraction of the actual on-the-wire data. This may be what you intended, or not. A very quick scan of RFC 2544 suggests that much larger packets are allowed; you may be better off going that route instead. Consistently delaying for as little as 0.5 microseconds between each Send_UDP() call is going to be difficult at best.
To transmit 64-byte Ethernet frames at line rate, you actually want to send one every 672 ns (64 bytes of frame plus 8 bytes of preamble and 12 bytes of inter-frame gap is 84 bytes, i.e. 672 bits, which takes 672 ns at 1 Gbit/s). I think the only way to do that is to get really friendly with the hardware. You'll be running up against bandwidth limitations with the PCI bus, etc. The system calls to send one packet will take significantly longer than 672 ns. A sleep function is the least of your worries.
I guess you should be able to do it with Boost.Asio's timer functionality. I haven't tried it myself, but I would expect deadline_timer to take a boost::posix_time::nanosec just as well as a boost::posix_time::seconds.
Check out an example here
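A rough sketch of what that might look like, shown with a 500 μs period for illustration (the 0.5 μs target from the question is far below what any general-purpose OS timer will actually honour, so treat this as a convenience wrapper rather than a precision guarantee):

#include <boost/asio.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>

int main()
{
    boost::asio::io_service io;
    boost::asio::deadline_timer timer(io);

    for (int i = 0; i < 1000; ++i) {
        timer.expires_from_now(boost::posix_time::microseconds(500));
        timer.wait();                 // blocking wait; async_wait() is the asio-style alternative
        // Send_UDP();                // placeholder from the question's pseudocode
    }
    return 0;
}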
Here's a native Windows implementation of nanosleep. If GPL is acceptable you can reuse the code, else you'll have to reimplement.