100 Hz Data From Serial - C++

I have a sensor which uses RS422 to spit out messages over serial (I think that's the right terminology). Anyway, I made my wiring harness, hooked it up to my RS422-to-USB converter, and tada, I got data in HyperTerminal. Good stuff.
Now the sensor has an odd baud rate: 1.5 Mbps (1,500,000 baud). I am doing this on Windows, so it actually wasn't that hard to set that rate. Initially, at power-on, the sensor sends out a 69-byte message at 10 Hz. I see this message, the correct bytes are read, and the message is very accurate (it includes a timestamp which, wait for it, increases by 0.1 s every message!). MOST IMPORTANTLY, I get the message on its boundary; in other words, every read was a new message.
Things were going well so far, so I took the next step: I sent a write command over the serial port to activate an additional sensor data message. This message is 76 bytes long and is sent at 100 Hz. Success again; more data begins appearing in reads. However, I am not getting it at 100 Hz: I get blocks of 3968 bytes. If I lower my buffer size, I get three very quick reads of 1024, then immediately a read of 896 (3968 bytes again). (Note that I am now receiving two messages, one at 10 Hz with size 69 and one at 100 Hz with size 76, and that no combination of the two evenly divides 3968.)
My question: somewhere, something is buffering my 100 Hz messages, and I am not getting them as they're received. I would like to change that, but I don't know what I'm looking for. I don't need the 100 Hz message on its boundary; I just don't want it at 2 Hz. I would be happy with 30 Hz or even 20 Hz.
Below I include my serial port setup code:
Port Open
serial_port_ = CreateFile(L"COM6", GENERIC_READ | GENERIC_WRITE, 0, 0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
CommState and Timeouts
COMMTIMEOUTS comm_timeouts;
ZeroMemory(&comm_timeouts, sizeof(COMMTIMEOUTS));
//comm_timeouts.ReadIntervalTimeout = MAXDWORD; //Instant Read, still get 3968 chunks
comm_timeouts.ReadIntervalTimeout = 1; //1ms timeout
comm_timeouts.ReadTotalTimeoutConstant = 1000; //1 s total read timeout
comm_timeouts.WriteTotalTimeoutConstant = 5000; //5 s total write timeout
SetCommTimeouts(serial_port_, &comm_timeouts);
DCB dcb_configuration;
ZeroMemory(&dcb_configuration, sizeof(DCB));
dcb_configuration.DCBlength = sizeof(dcb_configuration);
dcb_configuration.BaudRate = 1500000;
dcb_configuration.ByteSize = 8;
dcb_configuration.StopBits = ONESTOPBIT;
dcb_configuration.Parity = ODDPARITY;
if(!SetCommState(serial_port_, &dcb_configuration))
My Read
if(!ReadFile(serial_port_, read_buffer_, 1024, &bytes_read, NULL))

I would suspect your serial-to-USB converter is doing the buffering. Since USB is packet based, the converter has to buffer incoming bytes before handing them to the host. At 10 Hz there are probably big enough gaps between messages to flush that buffer after every message, but at 100 Hz the messages arrive so close together that the buffer is flushed by some other logic, for example only once it fills up.
Does that make sense?
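If the converter happens to be FTDI-based (an assumption; check your adapter), its driver holds received bytes until an internal latency timer expires, and that timer can be shortened. A minimal sketch using the FTDI D2XX library; the device index and the 2 ms value are illustrative only:
#include <ftd2xx.h>  // FTDI D2XX driver library (assumption: FTDI-based adapter)

// Shrink the adapter's latency timer so buffered bytes are handed to the
// host sooner, giving smaller, more frequent reads.
bool lower_ftdi_latency()
{
    FT_HANDLE handle;
    if (FT_Open(0, &handle) != FT_OK)   // 0 = first FTDI device on the system
        return false;
    FT_STATUS status = FT_SetLatencyTimer(handle, 2);  // 2 ms instead of the 16 ms default
    FT_Close(handle);
    return status == FT_OK;
}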

Related

ReadFile USB Serial port too slow

I need to save data coming from a GPS. I am using a Windows 7 system, and the GPS is connected via USB. I am using a Visual Studio dialog-based application.
The GPS data looks something like this:
"$GPGLL,2219.2500182,N,09019.0118688,E,055547.65,A,A*61"
I need to save this data in a file. I have thoroughly gone through this link and have set the parameters accordingly.
char buffer[56];
This is my code to open the port:
hcomm= CreateFile("COM8",
GENERIC_READ,
0,
NULL,
OPEN_EXISTING,
0,
NULL);
if (hcomm == INVALID_HANDLE_VALUE)
TRACE("%s","error");
memset(&port, 0, sizeof(port));
port.DCBlength = sizeof(port);
if ( !GetCommState(hcomm, &port))
TRACE("getting comm state");
if (!BuildCommDCB("baud=19200", &port))
TRACE("building comm DCB");
if (!SetCommState(hcomm, &port))
TRACE("adjusting port settings");
COMMTIMEOUTS timeouts = {0};
timeouts.ReadIntervalTimeout = 0;
timeouts.ReadTotalTimeoutMultiplier = 0;
timeouts.ReadTotalTimeoutConstant = 0;
if (!SetCommTimeouts(hcomm, &timeouts))
TRACE("setting port time-outs.");
And I am reading the data as follows:
while(loop which executes every 20 ms) {
    ReadFile(hcomm, buffer, sizeof(buffer), &read, NULL);
    if (read) {
        //code to write data to file
    }
}
Though I receive data, the speed is terribly low: I receive data at only 10 Hz, and I want to save 50 readings/second.
Can somebody help me?
EDIT:
As per @Paul R's suggestion, I increased the baud rate to 115200. Now it saves data at 20 messages/second. My GPS supports a maximum update rate of 20 Hz and a maximum baud rate of 115200. If I want to save data at 50 messages/second, what will I have to do?
For example, if the update rate is 20 Hz, each reading is available for 50 ms. So, if I am reading the port every 20 ms, shouldn't it save every entry twice, or in the appropriate proportion?
It's just basic arithmetic. At 19200 bps you can receive around 1920 characters per second (assuming each character = 8 data bits + 1 start bit + 1 stop bit). Your example message above is around 55 characters + line terminators etc, so that means a little over 30 messages per second best case. If you have to send a message to the device in between each received message then it will be even lower. So 50 messages / second is simply not possible at this data rate.
Simple solution: increase the data rate from 19200 bps to something much higher, e.g. 57600 bps.
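To make the arithmetic concrete, here is the same calculation in code (the 10 bits per character figure assumes 8N1 framing; the 60-byte message length is a rough allowance for ~55 characters plus terminators):
#include <cstdio>

int main()
{
    const double baud = 19200.0;        // bits per second on the wire
    const double bits_per_char = 10.0;  // 1 start + 8 data + 1 stop (8N1)
    const double msg_len = 60.0;        // ~55 chars + line terminators, rough guess

    double chars_per_sec = baud / bits_per_char;    // ~1920 characters/second
    double msgs_per_sec = chars_per_sec / msg_len;  // ~32 messages/second, best case

    std::printf("max ~%.0f messages/sec at %.0f baud\n", msgs_per_sec, baud);
    return 0;
}
At 115200 baud the same calculation gives roughly 190 messages/second, so at that rate the 20 Hz ceiling is the GPS update rate, not the link.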

TCP Packet size

I'm trying to send TCP packets from a client to a server, and I want to be able to select the size of each packet. The problem I'm facing is that the packet size is not correct unless I put in a Sleep(45) (milliseconds). I'm using Wireshark to see the size of the packets.
To make sure you guys are with me, I'm going to explain as clearly as possible.
I have tried to do it like this:
First I select the amount of data I want to send. For example, say 1 MB, or roughly 1,000,000 bytes. I allocate an array with that much space.
To send a specific packet size, I allocate a send buffer of the size I want (in my case 64, 512, 1024, or 1514 bytes) and fill it with letters. Say I want to send with 64 as the packet size:
for (int i = 0; i < packetSize; i++){
sendbuf[i] = 'a';
}
To know how many times I have to send a packet to reach 1 MB, I do this math:
nrOfTimes = dataSize / packetSize;
Then I send it in a loop:
for (int i = 0; i < nrOfTimes; i++) {
rc = send(sConnect, sendbuf, packetSize, 0); //rc and sConnect contains information where to send the data, if you wonder
Sleep(45); // If I don't use this, the packet size becomes 1514.
}
If I use the Sleep(45) it works, but it takes forever to finish, and since I'm supposed to measure the time, this approach is no good. If I go lower than Sleep(45), my network card ignores the packet size I chose and sends 1514-byte packets.
Does anyone have any ideas what to do? I can only assume it might have something to do with the network card's buffer.
TCP is a byte-streaming protocol, so it is incorrect to think of transmission in terms of discrete packets of bytes: the segment sizes you see in Wireshark are chosen by the TCP stack, not by your send() calls. It is likely you are interacting with the Nagle algorithm by injecting the 45 ms delays.
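If the goal is to get small writes onto the wire promptly, Nagle can be disabled per-socket with TCP_NODELAY. A sketch against the question's sConnect socket; note that even with Nagle off, TCP may still split or coalesce your data, so fixed on-the-wire packet sizes are never guaranteed:
// Disable the Nagle algorithm so small send() calls are not held back
// while waiting for ACKs. TCP remains a byte stream regardless.
BOOL noDelay = TRUE;
if (setsockopt(sConnect, IPPROTO_TCP, TCP_NODELAY,
               (const char*)&noDelay, sizeof(noDelay)) == SOCKET_ERROR)
{
    fprintf(stderr, "setsockopt(TCP_NODELAY) failed: %d\n", WSAGetLastError());
}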

What does "blocking" mean for boost::asio::write?

I'm using boost::asio::write() to write data from a buffer to a COM port. It's a serial port with a baud rate of 115200, which means (as far as my understanding goes) that I can write effectively 11520 bytes/s, or 11.52 kB/s, to the port.
Now I have a fairly big chunk of data (10015 bytes) that I want to write. I think this should take a little less than a second to actually go out on the port. But boost::asio::write() returns just 300 microseconds after the call, reporting 10015 bytes transferred. I think this is impossible at that baud rate?
So my question is: what is it actually doing? Really writing to the port, or just to some other kind of buffer that is later drained to the port?
I'd like write() to return only after all the bytes have really been written to the port.
EDIT with code example:
The problem is that I always run into the timeout for the future/promise, because transmitting the message alone takes more than 100 ms; but I thought the timer should only start after the last byte is sent, because write() is supposed to block?
void serial::write(std::vector<uint8_t> message) {
//create new promise for the request
promise = new boost::promise<deque<uint8_t>>;
boost::unique_future<deque<uint8_t>> future = promise->get_future();
// --- Write message to serial port --- //
boost::asio::write(serial_,boost::asio::buffer(message));
//wait for data or timeout
if (future.wait_for(boost::chrono::milliseconds(100))==boost::future_status::timeout) {
cout << "ACK timeout!" << endl;
//delete pointer and set it to 0
delete promise;
promise=nullptr;
}
//delete pointer and set it to 0 after getting a message
delete promise;
promise=nullptr;
}
How can I achieve this?
Thanks!
In short, boost::asio::write() blocks until all data has been written to the stream; it does not block until all data has been transmitted. To wait until data has been transmitted, consider using tcdrain().
Each serial port has both a receive and transmit buffer within kernel space. This allows the kernel to buffer received data if a process cannot immediately read it from the serial port, and allows data written to a serial port to be buffered if the device cannot immediately transmit it. To block until the data has been transmitted, one could use tcdrain(serial_.native_handle()).
These kernel buffers allow for the write and read rates to exceed that of the transmit and receive rates. However, while the application may write data at a faster rate than the serial port can transmit, the kernel will transmit at the appropriate rates.
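On a POSIX system, a minimal sketch of the write-then-wait pattern could look like this (assuming serial_ is a boost::asio::serial_port; tcdrain() does not exist on Windows, where a different mechanism would be needed):
#include <termios.h>       // tcdrain() -- POSIX only
#include <vector>
#include <cstdint>
#include <boost/asio.hpp>

void write_and_drain(boost::asio::serial_port& serial_,
                     const std::vector<uint8_t>& message)
{
    // Returns once the data has been handed to the kernel's transmit buffer...
    boost::asio::write(serial_, boost::asio::buffer(message));
    // ...so block here until the UART has actually clocked everything out.
    ::tcdrain(serial_.native_handle());
}
With this in place, starting the ACK timeout after write_and_drain() returns would measure from the last transmitted byte, as the question intends.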

How do I use COMMTIMEOUTS to wait until bytes are available but read more than one byte?

I have a C++ serial port class that has a non-blocking and a blocking mode for read operations. For blocking mode:
COMMTIMEOUTS cto;
GetCommTimeouts(m_hFile,&cto);
// Set the new timeouts
cto.ReadIntervalTimeout = 0;
cto.ReadTotalTimeoutConstant = 0;
cto.ReadTotalTimeoutMultiplier = 0;
SetCommTimeouts(m_hFile, &cto);
For non blocking mode:
COMMTIMEOUTS cto;
GetCommTimeouts(m_hFile,&cto);
// Set the new timeouts
cto.ReadIntervalTimeout = MAXDWORD;
cto.ReadTotalTimeoutConstant = 0;
cto.ReadTotalTimeoutMultiplier = 0;
SetCommTimeouts(m_hFile, &cto);
I would like to add another mode that waits for any number of bytes and reads them.
From MSDN, the COMMTIMEOUTS structure documentation:
If an application sets ReadIntervalTimeout and ReadTotalTimeoutMultiplier to MAXDWORD and sets ReadTotalTimeoutConstant to a value greater than zero and less than MAXDWORD, one of the following occurs when the ReadFile function is called:
If there are any bytes in the input buffer, ReadFile returns immediately with the bytes in the buffer.
If there are no bytes in the input buffer, ReadFile waits until a byte arrives and then returns immediately.
If no bytes arrive within the time specified by ReadTotalTimeoutConstant, ReadFile times out.
In code, this looks like:
COMMTIMEOUTS cto;
GetCommTimeouts(m_hFile,&cto);
// Set the new timeouts
cto.ReadIntervalTimeout = 100;
cto.ReadTotalTimeoutConstant = MAXDWORD;
cto.ReadTotalTimeoutMultiplier = MAXDWORD;
SetCommTimeouts(m_hFile, &cto);
But this returns immediately on the first byte. This is a problem: I am reading the port in a loop, and the handling of a byte is so fast that by the next read only one more byte has arrived. The end result is that I read one byte at a time around the loop, using 100% of the core running that thread.
I would like to use cto.ReadIntervalTimeout as in the MSDN documentation, but still wait until at least one byte is available. Does anyone have an idea?
Thanks.
The behavior you want will come from:
cto.ReadIntervalTimeout = 10;
cto.ReadTotalTimeoutConstant = 0;
cto.ReadTotalTimeoutMultiplier = 0;
It blocks arbitrarily long for the first byte (the total timeout is disabled by setting the latter two fields to zero, per the documentation), then reads up to the buffer size as long as data keeps streaming in. If there's a 10 ms gap in the data, it returns with whatever has been received so far.
If you're using 100% (or even close to it) of the CPU, it sounds like you're doing something wrong elsewhere. As I showed in a previous answer, for years I've used code with the timeouts all set to 1. I initially set it that way just as a wild guess at something that might at least sort of work, with the intent of tuning it later. It's worked well enough that I've never gotten around to tuning it at all. Just for example, it'll read input from my GPS (about the only thing I have that even imitates using a serial port any more) using an almost immeasurably tiny amount of CPU time -- after hours of reading a constant stream of data from the GPS, it still shows 0:00:00 seconds of CPU time used (and I can't see any difference in CPU usage whether it's running or not).
Now, I'll certainly grant that a GPS isn't (even close to) the fastest serial device around, but we're still talking about ~100% vs. ~0%. That's clearly a pretty serious difference.
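Putting the suggested values into the same GetCommTimeouts/SetCommTimeouts pattern used above (m_hFile as in the question):
COMMTIMEOUTS cto;
GetCommTimeouts(m_hFile, &cto);
// Total timeouts disabled: block indefinitely for the first byte...
cto.ReadTotalTimeoutConstant = 0;
cto.ReadTotalTimeoutMultiplier = 0;
// ...then keep reading until there is a 10 ms gap in the stream.
cto.ReadIntervalTimeout = 10;
SetCommTimeouts(m_hFile, &cto);

// One ReadFile call now returns a batch of bytes instead of a single byte.
BYTE buffer[4096];
DWORD dwBytesRead = 0;
if (ReadFile(m_hFile, buffer, sizeof(buffer), &dwBytesRead, NULL) && dwBytesRead > 0)
{
    // process dwBytesRead bytes
}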
For reference, here is the event-driven receive fragment from the discussion (the dwEvtMask presumably comes from WaitCommEvent):
if (dwEvtMask == EV_RXCHAR)
{
    Sleep(1);
    if (dwLength > 2)
    {
        Sleep(1);
        ReadFile(m_Serial->m_hCom, data, dwLength, &dwBytesRead, &Overlapped);
        pDlg->PostMessage(WM_RECEIVE, 0, 0);
    }
}

Calculating socket upload speed

I'm wondering if anyone knows how to calculate the upload speed of a Berkeley socket in C++. My send call isn't blocking: it takes 0.001 seconds to send 5 megabytes of data, but it takes a while to recv the response (so I know the data is still uploading).
This is a TCP socket to an HTTP server, and I need to asynchronously check how many bytes have been uploaded and how many remain. However, I can't find any API functions for this in Winsock, so I'm stumped.
Any help would be greatly appreciated.
I solved my issue thanks to bdolan's suggestion to reduce SO_SNDBUF. Note that this code requires Winsock 2 (for overlapped sockets and WSASend), and that your SOCKET handle must have been created similarly to:
SOCKET sock = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0, WSA_FLAG_OVERLAPPED);
Note the WSA_FLAG_OVERLAPPED flag as the final parameter.
In this answer I will go through the stages of uploading data to a TCP server, tracking each upload chunk and its completion status. The approach requires splitting your upload buffer into chunks (minimal modification to existing code), uploading it piece by piece, and tracking each chunk.
My code flow
Global variables
Your code must define the following global variables:
#define UPLOAD_CHUNK_SIZE 4096
int g_nUploadChunks = 0;
int g_nChunksCompleted = 0;
WSAOVERLAPPED *g_pSendOverlapped = NULL;
int g_nBytesSent = 0;
float g_flLastUploadTimeReset = 0.0f;
Note: in my tests, decreasing UPLOAD_CHUNK_SIZE increases upload speed accuracy but decreases overall upload speed, while increasing it does the opposite. 4 kilobytes (4096 bytes) was a good compromise for a file ~500 kB in size.
Callback function
This function increments the bytes-sent and chunks-completed counters; it is called after a chunk has been completely uploaded to the server:
void CALLBACK SendCompletionCallback(DWORD dwError, DWORD cbTransferred, LPWSAOVERLAPPED lpOverlapped, DWORD dwFlags)
{
g_nChunksCompleted++;
g_nBytesSent += cbTransferred;
}
Prepare socket
Initially, the socket must be prepared by reducing SO_SNDBUF to 0.
Note: In my tests, any value greater than 0 will result in undesirable behaviour.
int nSndBuf = 0;
setsockopt(sock, SOL_SOCKET, SO_SNDBUF, (char*)&nSndBuf, sizeof(nSndBuf));
Create WSAOVERLAPPED array
An array of WSAOVERLAPPED structures must be created to hold the overlapped status of all of our upload chunks. To do this I simply:
// Calculate the amount of upload chunks we will have to create.
// nDataBytes is the size of data you wish to upload
g_nUploadChunks = ceil(nDataBytes / float(UPLOAD_CHUNK_SIZE));
// Overlapped array, should be delete'd after all uploads have completed
g_pSendOverlapped = new WSAOVERLAPPED[g_nUploadChunks];
memset(g_pSendOverlapped, 0, sizeof(WSAOVERLAPPED) * g_nUploadChunks);
Upload data
All of the data that needs to be sent is, for example purposes, held in a variable called pszData. Then, using WSASend, the data is sent in blocks defined by the constant UPLOAD_CHUNK_SIZE.
WSABUF dataBuf;
DWORD dwBytesSent = 0;
int err;
int i, j;
for(i = 0, j = 0; i < nDataBytes; i += UPLOAD_CHUNK_SIZE, j++)
{
int nTransferBytes = min(nDataBytes - i, UPLOAD_CHUNK_SIZE);
dataBuf.buf = &pszData[i];
dataBuf.len = nTransferBytes;
// Now upload the data
int rc = WSASend(sock, &dataBuf, 1, &dwBytesSent, 0, &g_pSendOverlapped[j], SendCompletionCallback);
if ((rc == SOCKET_ERROR) && (WSA_IO_PENDING != (err = WSAGetLastError())))
{
fprintf(stderr, "WSASend failed: %d\n", err);
exit(EXIT_FAILURE);
}
}
The waiting game
Now we can do whatever we wish while all of the chunks upload.
Note: the thread which called WSASend must regularly be put into an alertable state so that our 'transfer completed' callback (SendCompletionCallback) is dequeued from the APC (Asynchronous Procedure Call) queue.
In my code, I loop continuously until g_nChunksCompleted == g_nUploadChunks. This is to show the end user the upload progress and speed (it can be modified to show estimated completion time, elapsed time, etc.).
Note 2: this code uses Plat_FloatTime as a seconds counter; replace it with whatever timer your code uses (or adjust accordingly).
g_flLastUploadTimeReset = Plat_FloatTime();
// Clear the line on the screen with some default data
printf("(0 chunks of %d) Upload speed: ???? KiB/sec", g_nUploadChunks);
// Keep looping until ALL upload chunks have completed
while(g_nChunksCompleted < g_nUploadChunks)
{
// Wait 10 ms in an alertable state so the completion callbacks can run
// and we aren't repeatedly updating the screen
SleepEx(10, TRUE);
// Update chunk count
printf("\r(%d chunks of %d) ", g_nChunksCompleted, g_nUploadChunks);
// Not enough time passed?
if(g_flLastUploadTimeReset + 1 > Plat_FloatTime())
continue;
// Reset timer
g_flLastUploadTimeReset = Plat_FloatTime();
// Calculate how many kibibytes have been transmitted in the last second
float flByteRate = g_nBytesSent/1024.0f;
printf("Upload speed: %.2f KiB/sec", flByteRate);
// Reset byte count
g_nBytesSent = 0;
}
// Delete overlapped data (not used anymore)
delete [] g_pSendOverlapped;
// Note that the transfer has completed
Msg("\nTransfer completed successfully!\n");
Conclusion
I really hope this helps somebody in the future who wants to calculate upload speed on a TCP socket without any server-side modifications. I have no idea how detrimental SO_SNDBUF = 0 is to performance, although I'm sure a socket guru will point that out.
You can get a lower bound on the amount of data received and acknowledged by subtracting the value of the SO_SNDBUF socket option from the number of bytes you have written to the socket. This buffer may be adjusted using setsockopt, although in some cases the OS may choose a length smaller or larger than you specify, so you must re-check after setting it.
To get more precise than that, however, you must have the remote side inform you of progress, as winsock does not expose an API to retrieve the amount of data currently pending in the send buffer.
Alternately, you could implement your own transport protocol on UDP, but implementing rate control for such a protocol can be quite complex.
Since you don't have control over the remote side, and you want to do it in the code, I'd suggest doing very simple approximation. I assume a long living program/connection. One-shot uploads would be too skewed by ARP, DNS lookups, socket buffering, TCP slow start, etc. etc.
Have two counters - length of the outstanding queue in bytes (OB), and number of bytes sent (SB):
increment OB by number of bytes to be sent every time you enqueue a chunk for upload,
decrement OB and increment SB by the number returned from send(2) (modulo -1 cases),
on a timer sample both OB and SB - either store them, log them, or compute running average,
compute outstanding bytes a second/minute/whatever, same for sent bytes.
Network stack does buffering and TCP does retransmission and flow control, but that doesn't really matter. These two counters will tell you the rate your app produces data with, and the rate it is able to push it to the network. It's not the method to find out the real link speed, but a way to keep useful indicators about how good the app is doing.
If the data production rate is below the network output rate, everything is fine. If it's the other way around and the network cannot keep up with the app, there's a problem: you need either a faster network, a slower app, or a different design.
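A minimal sketch of the two-counter idea (the names and the sampling mechanism here are illustrative, not from the original answer):
#include <atomic>
#include <cstdio>

// OB: bytes enqueued but not yet accepted by send(); SB: bytes accepted by send().
std::atomic<long long> g_outstandingBytes{0};
std::atomic<long long> g_sentBytes{0};

void onChunkEnqueued(int len)   // call when a chunk is queued for upload
{
    g_outstandingBytes += len;
}

void onSendReturned(int rc)     // call with the return value of send() (modulo -1 cases)
{
    if (rc > 0) {
        g_outstandingBytes -= rc;
        g_sentBytes += rc;
    }
}

void onTimerTick()              // call from a periodic timer; samples both counters
{
    long long ob = g_outstandingBytes.load();
    long long sb = g_sentBytes.exchange(0);   // bytes pushed since the last tick
    std::printf("queued: %lld bytes, sent since last tick: %lld bytes\n", ob, sb);
}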
For one-time experiments just take periodic snapshots of netstat -sp tcp output (or whatever that is on Windows) and calculate the send-rate manually.
Hope this helps.
If your app uses packet headers like
0001234DT
where 000123 is the packet length for a single packet, you can consider using MSG_PEEK + recv() to get the length of the packet before you actually read it with recv().
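A rough sketch of that idea, assuming (as in the example above) that the first 6 header bytes are the ASCII packet length; the 9-byte header size is read off the example and may not match your protocol:
// Peek at the fixed-size header without consuming it, parse the length,
// then do a real recv() once the whole packet is available.
char header[9];
int n = recv(sockfd, header, sizeof(header), MSG_PEEK);
if (n == (int)sizeof(header))
{
    char lenDigits[7] = {0};
    memcpy(lenDigits, header, 6);   // "000123" -> 123
    int packetLen = atoi(lenDigits);
    // ... now recv() the header plus packetLen bytes of payload ...
}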
The problem is that send() is NOT doing what you think: the data is buffered by the kernel.
int flag = 0;
int sz = sizeof(int);
ERR_CHK(getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &flag, &sz));
fprintf(stdout, "%s: listener socket send buffer = %d\n", now(), flag);
sz = sizeof(int);
ERR_CHK(getsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &flag, &sz));
fprintf(stdout, "%s: listener socket recv buffer = %d\n", now(), flag);
See what these show for you.
When you recv() on a NON-blocking socket that has data, there are normally not megabytes of data parked in the buffer ready to recv. In my experience the socket holds ~1500 bytes of data per recv. Since you are probably reading on a blocking socket, it takes a while for the recv() to complete.
Socket buffer size is probably the single best predictor of socket throughput. setsockopt() lets you alter the socket buffer size, up to a point. Note: these buffers are shared among sockets in some OSes, like Solaris, so you can kill performance by twiddling these settings too much.
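For example, to request a larger send buffer and then verify what the OS actually granted (as the earlier answer notes, the value may be clamped):
int requested = 256 * 1024;   // ask for a 256 KiB send buffer (illustrative size)
setsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, (const char*)&requested, sizeof(requested));

int granted = 0;
int sz = sizeof(granted);
getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, (char*)&granted, &sz);
fprintf(stdout, "SO_SNDBUF: requested %d, granted %d\n", requested, granted);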
Also, I don't think you are measuring what you think you are measuring. The real efficiency of send() is measured by the throughput on the recv() end, not the send() end. IMO.