ReadFile USB Serial port too slow - c++

I need to save data coming from a GPS. I am using a Windows 7 system and the GPS is connected via a USB port. I am using a Visual Studio dialog-based application.
The GPS data looks something like this:
"$GPGLL,2219.2500182,N,09019.0118688,E,055547.65,A,A*61"
I need to save this data to a file. I have thoroughly gone through this link and have set the parameters accordingly.
char buffer[56];
This is my code to open the port:
HANDLE hcomm;
DCB port;
COMMTIMEOUTS timeouts;

hcomm = CreateFile("COM8",
                   GENERIC_READ,
                   0,
                   NULL,
                   OPEN_EXISTING,
                   0,
                   NULL);
if (hcomm == INVALID_HANDLE_VALUE)
    TRACE("%s", "error");

// Configure the port: 19200 baud via BuildCommDCB
memset(&port, 0, sizeof(port));
port.DCBlength = sizeof(port);
if (!GetCommState(hcomm, &port))
    TRACE("getting comm state");
if (!BuildCommDCB("baud=19200", &port))
    TRACE("building comm DCB");
if (!SetCommState(hcomm, &port))
    TRACE("adjusting port settings");

// All-zero timeouts disable time-outs entirely, so ReadFile blocks
// until the requested number of bytes has arrived
timeouts.ReadIntervalTimeout = 0;
timeouts.ReadTotalTimeoutMultiplier = 0;
timeouts.ReadTotalTimeoutConstant = 0;
if (!SetCommTimeouts(hcomm, &timeouts))
    TRACE("setting port time-outs.");
And I am reading the data as follows:
DWORD read = 0;
while (/* loop that executes every 20 ms */) {
    ReadFile(hcomm, buffer, sizeof(buffer), &read, NULL);
    if (read) {
        // code to write data to file
    }
}
Though I receive data, the speed is terribly low: I only get data at 10 Hz. I want to save data at 50 readings per second.
Can somebody help me?
EDIT:
As per @Paul R's suggestion, I increased the baud rate to 115200. Now it saves data at 20 messages/second. My GPS supports a maximum update rate of 20 Hz and a maximum baud rate of 115200. If I want to save data at 50 messages/second, what will I have to do?
For example, if the update rate is 20 Hz, each reading is available for 50 ms. So, if I am reading the port every 20 ms, shouldn't it save every entry twice, or in the appropriate proportion?

It's just basic arithmetic. At 19200 bps you can receive around 1920 characters per second (assuming each character = 8 data bits + 1 start bit + 1 stop bit). Your example message above is around 55 characters + line terminators etc, so that means a little over 30 messages per second best case. If you have to send a message to the device in between each received message then it will be even lower. So 50 messages / second is simply not possible at this data rate.
Simple solution: increase the data rate from 19200 bps to something much higher, e.g. 57600 bps.
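If it helps, here is a quick back-of-the-envelope version of that arithmetic in C++ (a sketch; it assumes 8N1 framing, i.e. 10 bits on the wire per character, and roughly 58 bytes per NMEA sentence including the terminating CR/LF):

#include <cstdio>

int main() {
    const double baud          = 19200.0;  // bits per second on the wire
    const double bits_per_char = 10.0;     // 1 start + 8 data + 1 stop (8N1)
    const double chars_per_msg = 58.0;     // ~55-char sentence plus terminators
    const double chars_per_sec = baud / bits_per_char;          // ~1920
    const double msgs_per_sec  = chars_per_sec / chars_per_msg; // ~33
    std::printf("max ~%.0f chars/s => ~%.1f messages/s\n", chars_per_sec, msgs_per_sec);
    return 0;
}

Re-running the same numbers at 115200 bps gives roughly 198 messages per second, so once the link is that fast the 20 messages/second ceiling reported in the edit comes from the GPS's 20 Hz maximum update rate, not from the serial line.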

Related

Why can my DPDK application only send and receive packets of up to 60 bytes?

I have written a simple DPDK send and receive application. When the packet length is <= 60 bytes, the send and receive applications work, but when the packet length is > 60 bytes, the send application shows the packet as sent, yet the receive application does not receive anything.
In send application:
mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS,
MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
pkt = rte_pktmbuf_alloc(mbuf_pool);
pkt->data_len = packlen; //if packlen<=60, it works, but when packlen>60, receiver cannot receive anything.
I try both l2fwd and basicfwd as receive application. It is same result.
The issue is here:
pchar[12] = 0;
pchar[13] = 0;
This means the EtherType is 0. From the list of assigned EtherTypes:
https://www.iana.org/assignments/ieee-802-numbers/ieee-802-numbers.xhtml
We see that a value of 0 is not a real EtherType: values this low are interpreted as an IEEE 802.3 length field, so 0 declares a zero-length frame. Since the minimum Ethernet frame length is 64 bytes (60 bytes + 4 bytes FCS), that is why you have trouble sending packets longer than 60 bytes.
To fix the issue, simply put a reasonable EtherType from the list above into those two bytes.
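For example, a minimal sketch of that fix, keeping the pchar byte pointer from the question and assuming the payload is IPv4 (EtherType 0x0800, transmitted big-endian):

// Bytes 12-13 of the Ethernet header hold the EtherType in network byte order
pchar[12] = 0x08;  // 0x0800 = IPv4
pchar[13] = 0x00;

With a recent DPDK release you could equally fill the header through struct rte_ether_hdr and rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4) instead of poking raw bytes.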

getting started with Thorlabs APT

I'm hoping someone else out there has experience programming an APT - DC Servo controller.
My client wants a custom solution, so using the ActiveX control isn't viable.
I think once I can figure out how to send a basic message, I will be able to follow the API well enough, but I'm having difficulties getting started... and the documentation doesn't seem to clearly state how to actually send messages to the controller.
I.e., am I supposed to be using the FTDI interface, with the FT_Write/FT_Read commands, to operate the device?
I've run the following code, which goes through the initial setup but fails on the very last line, where I try to flash the LED.
//the following is per the user manual for thor device.
ftHandle = FT_W32_CreateFile(SerialNumber.c_str(),
GENERIC_READ|GENERIC_WRITE,
0,
0,
OPEN_EXISTING,
FILE_ATTRIBUTE_NORMAL | FILE_FLAG_OVERLAPPED | FT_OPEN_BY_SERIAL_NUMBER,
0); // Open device by serial number
assert (ftHandle != INVALID_HANDLE_VALUE);
// Set baud rate to 115200.
const int uBaudRate=115200;
auto ftStatus = FT_SetBaudRate(ftHandle, (ULONG)uBaudRate);
assert(ftStatus==FT_OK);
// 8 data bits, 1 stop bit, no parity
ftStatus = FT_SetDataCharacteristics(ftHandle, FT_BITS_8, FT_STOP_BITS_1, FT_PARITY_NONE);
assert(ftStatus==FT_OK);
// Pre purge dwell 50ms.
Sleep(50);
// Purge the device.
ftStatus = FT_Purge(ftHandle, FT_PURGE_RX | FT_PURGE_TX);
assert(ftStatus==FT_OK);
// Post purge dwell 50ms.
Sleep(50);
ftStatus = FT_ResetDevice(ftHandle);
assert(ftStatus==FT_OK);
// Set flow control to RTS/CTS.
ftStatus = FT_SetFlowControl(ftHandle, FT_FLOW_RTS_CTS, 0, 0);
// Set RTS.
ftStatus = FT_SetRts(ftHandle);
assert(ftStatus==FT_OK);
//lets flash the led, MGMSG_MOD_IDENTIFY
BYTE buf[6] ={0x23,0x2,0,0,0x21,0x1};
DWORD written=0;
/*******************/
ftStatus = FT_Write(ftHandle, buf, (DWORD)6, &written);//4= FT_IO_ERROR
assert(ftStatus==FT_OK); //this is where I'm failing
/*******************/
For reference, I'm building a 32-bit application, working on a 64-bit laptop.
Fixed by using FT_OpenEx instead of FT_W32_CreateFile.
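For anyone hitting the same wall, a rough sketch of what that change looks like with the FTDI D2XX API (assuming SerialNumber is a std::string holding the controller's serial number, and that the rest of the setup above stays the same):

FT_HANDLE ftHandle = NULL;
// Open directly by serial number instead of the Win32-style wrapper
FT_STATUS ftStatus = FT_OpenEx((PVOID)SerialNumber.c_str(),
                               FT_OPEN_BY_SERIAL_NUMBER,
                               &ftHandle);
assert(ftStatus == FT_OK);
// ...then FT_SetBaudRate, FT_SetDataCharacteristics, purge, flow control, etc.
// as in the code above, followed by FT_Write of the 6-byte MGMSG_MOD_IDENTIFY packet.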

How do I use COMMTIMEOUTS to wait until bytes are available but read more than one byte?

I have a C++ serial port class that has a non-blocking and a blocking mode for read operations. For blocking mode:
COMMTIMEOUTS cto;
GetCommTimeouts(m_hFile,&cto);
// Set the new timeouts
cto.ReadIntervalTimeout = 0;
cto.ReadTotalTimeoutConstant = 0;
cto.ReadTotalTimeoutMultiplier = 0;
SetCommTimeouts(m_hFile, &cto);
For non blocking mode:
COMMTIMEOUTS cto;
GetCommTimeouts(m_hFile,&cto);
// Set the new timeouts
cto.ReadIntervalTimeout = MAXDWORD;
cto.ReadTotalTimeoutConstant = 0;
cto.ReadTotalTimeoutMultiplier = 0;
SetCommTimeouts(m_hFile, &cto);
I would like to add another mode that waits for any number of bytes and read them.
From MSDN COMMTIMEOUTS structure:
If an application sets ReadIntervalTimeout and ReadTotalTimeoutMultiplier to MAXDWORD and sets ReadTotalTimeoutConstant to a value greater than zero and less than MAXDWORD, one of the following occurs when the ReadFile function is called:
If there are any bytes in the input buffer, ReadFile returns immediately with the bytes in the buffer.
If there are no bytes in the input buffer, ReadFile waits until a byte arrives and then returns immediately.
If no bytes arrive within the time specified by ReadTotalTimeoutConstant, ReadFile times out.
In code this looks like:
COMMTIMEOUTS cto;
GetCommTimeouts(m_hFile,&cto);
// Set the new timeouts
cto.ReadIntervalTimeout = 100;
cto.ReadTotalTimeoutConstant = MAXDWORD;
cto.ReadTotalTimeoutMultiplier = MAXDWORD;
SetCommTimeouts(m_hFile, &cto);
But this returns immediately on the first byte. That is a problem since I am reading the port in a loop, and handling a byte is so fast that the next time I read the port only one more byte is available. The end result is that I am reading one byte at a time in a loop and using 100% of the core running that thread.
I would like to use cto.ReadIntervalTimeout as in the MSDN documentation, but still wait until at least one byte is available. Does anyone have an idea?
Thanks.
The behavior you want will come from:
cto.ReadIntervalTimeout = 10;
cto.ReadTotalTimeoutConstant = 0;
cto.ReadTotalTimeoutMultiplier = 0;
It blocks arbitrarily long for the first byte (total timeout is disabled by setting the latter two fields to zero, per the documentation), then reads up to the buffer size as long as data is streaming in. If there's a 10ms gap in the data, it will return with what has been received so far.
If you're using 100% (or even close to it) of the CPU, it sounds like you're doing something wrong elsewhere. As I showed in a previous answer, for years I've used code with the timeouts all set to 1. I initially set it that way just as a wild guess at something that might at least sort of work, with the intent of tuning it later. It's worked well enough that I've never gotten around to tuning it at all. Just for example, it'll read input from my GPS (about the only thing I have that even imitates using a serial port any more) using an almost immeasurably tiny amount of CPU time -- after hours of reading a constant stream of data from the GPS, it still shows 0:00:00 seconds of CPU time used (and I can't see any difference in CPU usage whether it's running or not).
Now, I'll certainly grant that a GPS isn't (even close to) the fastest serial device around, but we're still talking about ~100% vs. ~0%. That's clearly a pretty serious difference.
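Putting those values together, a minimal sketch (my own wording, assuming m_hFile is an already opened, non-overlapped COM handle configured with the right DCB):

COMMTIMEOUTS cto = {0};
GetCommTimeouts(m_hFile, &cto);
cto.ReadIntervalTimeout        = 10; // return once the stream pauses for 10 ms
cto.ReadTotalTimeoutConstant   = 0;  // total timeout disabled:
cto.ReadTotalTimeoutMultiplier = 0;  //   block until the first byte arrives
SetCommTimeouts(m_hFile, &cto);

BYTE buffer[1024];
DWORD bytesRead = 0;
// Blocks for the first byte, then keeps reading until a 10 ms gap or a full buffer
if (ReadFile(m_hFile, buffer, sizeof(buffer), &bytesRead, NULL) && bytesRead > 0)
{
    // process bytesRead bytes from buffer
}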
if (dwEvtMask == EV_RXCHAR)
{
    Sleep(1);
    if (dwLength > 2)
    {
        Sleep(1);
        ReadFile(m_Serial->m_hCom, data, dwLength, &dwBytesRead, &Overlapped);
        pDlg->PostMessage(WM_RECEIVE, 0, 0);
    }
}

100 Hz Data From Serial

I have a sensor which uses RS422 to spit out messages over serial. (I think that's the right terminology.) Anyway, I made my wiring harness, hooked it up to my RS422-to-USB converter, and ta-da, I got data in HyperTerminal. Good stuff.
Now the sensor has an odd baud rate, 1500 kbps. I am doing this in Windows, so it actually wasn't that hard to set that baud rate. Initially, at power-on, the sensor sends out a 69-byte message at 10 Hz. I see this message, the correct bytes are read, and the message is very accurate (it includes a timestamp which, wait for it, increases by 0.1 s every message!). MOST IMPORTANTLY, I get the message on its boundary; in other words, every read was a new message.
Anyway, things were going well so far, so I took the next step: I sent a write command over the serial port to activate a sensor data message. This message is 76 bytes large and is sent out at 100 Hz. Success again, more data begins appearing in reads. However, I am not getting it at 100 Hz; I get blocks of 3968 bytes. If I lower my buffer, I get three very quick reads of 1024, then immediately a read of 896 (3968 bytes again). (Note that I am now receiving two messages, one at 10 Hz with size 69 and one at 100 Hz with size 76; note that no combination of the two messages evenly divides 3968.)
My question is: somewhere something is buffering my 100 Hz messages, and I am not getting them as they're being received. I would like to change that, but I do not know what I'm looking for. I don't need the 100 Hz message on its boundary, I just don't want it at 2 Hz. I would be happy with 30 Hz or even 20 Hz.
Below I include my Serial Port Set up code:
Port Open
serial_port_ = CreateFile(L"COM6", GENERIC_READ | GENERIC_WRITE, 0, 0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
CommState and Timeouts
COMMTIMEOUTS comm_timeouts;
ZeroMemory(&comm_timeouts, sizeof(COMMTIMEOUTS));
//comm_timeouts.ReadIntervalTimeout = MAXDWORD; //Instant Read, still get 3968 chunks
comm_timeouts.ReadIntervalTimeout = 1; //1ms timeout
comm_timeouts.ReadTotalTimeoutConstant = 1000; //Derp?
comm_timeouts.WriteTotalTimeoutConstant = 5000; //Derp.
SetCommTimeouts(serial_port_, &comm_timeouts);
DCB dcb_configuration;
ZeroMemory(&dcb_configuration, sizeof(DCB));
dcb_configuration.DCBlength = sizeof(dcb_configuration);
dcb_configuration.BaudRate = 1500000;
dcb_configuration.ByteSize = 8;
dcb_configuration.StopBits = ONESTOPBIT;
dcb_configuration.Parity = ODDPARITY;
if(!SetCommState(serial_port_, &dcb_configuration))
My Read
if(!ReadFile(serial_port_, read_buffer_, 1024, &bytes_read, NULL))
I would suspect your serial-to-USB converter is doing the buffering. Since USB is packet based, the converter has to buffer. At the 10 Hz rate the gaps between messages are probably big enough to flush the buffer after every message, but at 100 Hz the messages arrive so close together that the buffer gets flushed by some other logic.
Does that make sense?
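A rough sanity check on the numbers supports that (a sketch using only the figures reported in the question):

#include <cstdio>

int main() {
    // Bytes the sensor actually produces per second
    const double bytes_per_sec = 100.0 * 76 + 10.0 * 69;  // ~8290 B/s
    // Observed read chunk size
    const double chunk = 3968.0;
    std::printf("%.0f B/s -> one %.0f-byte chunk every %.2f s (~%.1f Hz)\n",
                bytes_per_sec, chunk, chunk / bytes_per_sec, bytes_per_sec / chunk);
    return 0;
}

That works out to roughly one chunk every half second, i.e. about 2 Hz, which matches the observed rate and is consistent with a fixed-size buffer somewhere between the sensor and ReadFile that is only flushed when it fills.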

Calculating socket upload speed

I'm wondering if anyone knows how to calculate the upload speed of a Berkeley socket in C++. My send call isn't blocking and takes 0.001 seconds to send 5 megabytes of data, but it takes a while to recv the response (so I know the data is still uploading).
This is a TCP socket to an HTTP server and I need to asynchronously check how many bytes of data have been uploaded / are remaining. However, I can't find any API functions for this in Winsock, so I'm stumped.
Any help would be greatly appreciated.
EDIT: I've found the solution and will post it as an answer as soon as possible!
EDIT 2: Proper solution added as an answer below.
I solved my issue thanks to bdolan's suggestion to reduce SO_SNDBUF. To use this code, note that it requires Winsock 2 (for overlapped sockets and WSASend). In addition, your SOCKET handle must have been created similarly to:
SOCKET sock = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0, WSA_FLAG_OVERLAPPED);
Note the WSA_FLAG_OVERLAPPED flag as the final parameter.
In this answer I will go through the stages of uploading data to a TCP server and tracking each upload chunk and its completion status. The concept requires splitting your upload buffer into chunks (minimal modification to existing code required), uploading it piece by piece, and then tracking each chunk.
My code flow
Global variables
Your code must have the following global variables:
#define UPLOAD_CHUNK_SIZE 4096
int g_nUploadChunks = 0;
int g_nChunksCompleted = 0;
WSAOVERLAPPED *g_pSendOverlapped = NULL;
int g_nBytesSent = 0;
float g_flLastUploadTimeReset = 0.0f;
Note: in my tests, decreasing UPLOAD_CHUNK_SIZE increased upload-speed accuracy but decreased overall upload speed, while increasing UPLOAD_CHUNK_SIZE decreased accuracy but increased overall speed. 4 kilobytes (4096 bytes) was a good compromise for a file ~500 kB in size.
Callback function
This function increments the bytes-sent and chunks-completed variables; it is called after a chunk has been completely uploaded to the server.
void CALLBACK SendCompletionCallback(DWORD dwError, DWORD cbTransferred, LPWSAOVERLAPPED lpOverlapped, DWORD dwFlags)
{
g_nChunksCompleted++;
g_nBytesSent += cbTransferred;
}
Prepare socket
Initially, the socket must be prepared by reducing SO_SNDBUF to 0.
Note: In my tests, any value greater than 0 will result in undesirable behaviour.
int nSndBuf = 0;
setsockopt(sock, SOL_SOCKET, SO_SNDBUF, (char*)&nSndBuf, sizeof(nSndBuf));
Create WSAOVERLAPPED array
An array of WSAOVERLAPPED structures must be created to hold the overlapped status of all of our upload chunks. To do this I simply:
// Calculate the amount of upload chunks we will have to create.
// nDataBytes is the size of data you wish to upload
g_nUploadChunks = ceil(nDataBytes / float(UPLOAD_CHUNK_SIZE));
// Overlapped array, should be delete'd after all uploads have completed
g_pSendOverlapped = new WSAOVERLAPPED[g_nUploadChunks];
memset(g_pSendOverlapped, 0, sizeof(WSAOVERLAPPED) * g_nUploadChunks);
Upload data
All of the data that needs to be sent is, for example purposes, held in a variable called pszData. Then, using WSASend, the data is sent in blocks defined by the constant UPLOAD_CHUNK_SIZE.
WSABUF dataBuf;
DWORD dwBytesSent = 0;
int err;
int i, j;
for(i = 0, j = 0; i < nDataBytes; i += UPLOAD_CHUNK_SIZE, j++)
{
int nTransferBytes = min(nDataBytes - i, UPLOAD_CHUNK_SIZE);
dataBuf.buf = &pszData[i];
dataBuf.len = nTransferBytes;
// Now upload the data
int rc = WSASend(sock, &dataBuf, 1, &dwBytesSent, 0, &g_pSendOverlapped[j], SendCompletionCallback);
if ((rc == SOCKET_ERROR) && (WSA_IO_PENDING != (err = WSAGetLastError())))
{
fprintf(stderr, "WSASend failed: %d\n", err);
exit(EXIT_FAILURE);
}
}
The waiting game
Now we can do whatever we wish while all of the chunks upload.
Note: the thread which called WSASend must regularly be put into an alertable state so that our 'transfer completed' callback (SendCompletionCallback) is dequeued from the APC (Asynchronous Procedure Call) queue.
In my code, I continuously loop until g_nChunksCompleted == g_nUploadChunks. This is to show the end user the upload progress and speed (it can be modified to show estimated completion time, elapsed time, etc.).
Note 2: this code uses Plat_FloatTime as a seconds counter; replace it with whatever seconds timer your code uses (or adjust accordingly).
g_flLastUploadTimeReset = Plat_FloatTime();
// Clear the line on the screen with some default data
printf("(0 chunks of %d) Upload speed: ???? KiB/sec", g_nUploadChunks);
// Keep looping until ALL upload chunks have completed
while(g_nChunksCompleted < g_nUploadChunks)
{
// Wait for 10ms so then we aren't repeatedly updating the screen
SleepEx(10, TRUE);
// Update chunk count
printf("\r(%d chunks of %d) ", g_nChunksCompleted, g_nUploadChunks);
// Not enough time passed?
if(g_flLastUploadTimeReset + 1 > Plat_FloatTime())
continue;
// Reset timer
g_flLastUploadTimeReset = Plat_FloatTime();
// Calculate how many kibibytes have been transmitted in the last second
float flByteRate = g_nBytesSent/1024.0f;
printf("Upload speed: %.2f KiB/sec", flByteRate);
// Reset byte count
g_nBytesSent = 0;
}
// Delete overlapped data (not used anymore)
delete [] g_pSendOverlapped;
// Note that the transfer has completed
Msg("\nTransfer completed successfully!\n");
Conclusion
I really hope this helps somebody in the future who wants to calculate upload speed on their TCP sockets without any server-side modifications. I have no idea how detrimental SO_SNDBUF = 0 is to performance, although I'm sure a socket guru will point that out.
You can get a lower bound on the amount of data received and acknowledged by subtracting the value of the SO_SNDBUF socket option from the number of bytes you have written to the socket. This buffer may be adjusted using setsockopt, although in some cases the OS may choose a length smaller or larger than you specify, so you must re-check after setting it.
To get more precise than that, however, you must have the remote side inform you of progress, as winsock does not expose an API to retrieve the amount of data currently pending in the send buffer.
Alternately, you could implement your own transport protocol on UDP, but implementing rate control for such a protocol can be quite complex.
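A minimal sketch of that lower-bound idea (assuming a connected SOCKET named sock and a running total, totalWritten, of bytes your code has successfully passed to send()):

int sndbuf = 0;
int optlen = sizeof(sndbuf);
// Re-read the send buffer size the OS actually settled on
if (getsockopt(sock, SOL_SOCKET, SO_SNDBUF, (char*)&sndbuf, &optlen) == 0)
{
    // At most sndbuf bytes can still be sitting in the local send buffer,
    // so at least this many bytes have been received and acknowledged:
    long long ackedLowerBound = totalWritten - sndbuf;
    if (ackedLowerBound < 0)
        ackedLowerBound = 0;
    printf("at least %lld of %lld bytes acknowledged\n", ackedLowerBound, totalWritten);
}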
Since you don't have control over the remote side and you want to do this in your code, I'd suggest a very simple approximation. I assume a long-lived program/connection; one-shot uploads would be too skewed by ARP, DNS lookups, socket buffering, TCP slow start, etc.
Keep two counters, the length of the outstanding queue in bytes (OB) and the number of bytes sent (SB), as sketched in the code after this answer:
increment OB by the number of bytes to be sent every time you enqueue a chunk for upload,
decrement OB and increment SB by the number returned from send(2) (modulo the -1 cases),
on a timer, sample both OB and SB - either store them, log them, or compute a running average,
compute outstanding bytes per second/minute/whatever, and the same for sent bytes.
The network stack does buffering and TCP does retransmission and flow control, but that doesn't really matter. These two counters tell you the rate at which your app produces data and the rate at which it is able to push it to the network. It's not a method for finding the real link speed, but a way to keep useful indicators of how well the app is doing.
If the data production rate is below the network output rate, everything is fine. If it's the other way around and the network cannot keep up with the app, there's a problem: you need either a faster network, a slower app, or a different design.
For one-time experiments just take periodic snapshots of netstat -sp tcp output (or whatever that is on Windows) and calculate the send-rate manually.
Hope this helps.
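A rough sketch of those two counters (the names and sampling scheme are my own, assuming a sender thread plus a periodic timer):

#include <atomic>
#include <cstdio>

std::atomic<long long> g_outstandingBytes{0}; // OB: queued but not yet accepted by send()
std::atomic<long long> g_sentBytes{0};        // SB: total bytes accepted by send()

// Call whenever a chunk is queued for upload
void OnEnqueue(long long nBytes) { g_outstandingBytes += nBytes; }

// Call with the return value of send(), skipping the -1 error case
void OnSendReturned(int nSent)
{
    if (nSent > 0) {
        g_outstandingBytes -= nSent;
        g_sentBytes        += nSent;
    }
}

// Call from a timer (e.g. once per second); diff SB against the previous sample
// to get the rate the app is pushing data to the network.
void OnTimerSample(long long &lastSentSample)
{
    const long long sent = g_sentBytes.load();
    std::printf("outstanding=%lld B, sent in last interval=%lld B\n",
                g_outstandingBytes.load(), sent - lastSentSample);
    lastSentSample = sent;
}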
If your app uses packet headers like
0001234DT
where 000123 is the packet length for a single packet, you can consider using MSG_PEEK + recv() to get the length of the packet before you actually read it with recv().
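A sketch of that peek-then-read pattern (the 9-byte header layout and 6-digit length field are just the hypothetical format from the example above):

char hdr[10] = {0};
// Peek at the fixed-size header without consuming it from the socket buffer
int n = recv(sock, hdr, 9, MSG_PEEK);
if (n == 9)
{
    char lenField[7] = {0};
    memcpy(lenField, hdr, 6);   // hypothetical 6-digit length prefix
    int packetLen = atoi(lenField);
    // Now read header + payload for real, looping until all bytes have arrived
    // recv(sock, buf, 9 + packetLen, 0);
}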
The problem is that send() is NOT doing what you think - it is buffered by the kernel.
int flag = 0;
int sz = sizeof(flag);
ERR_CHK(getsockopt(sockfd, SOL_SOCKET, SO_SNDBUF, &flag, &sz));
fprintf(stdout, "%s: listener socket send buffer = %d\n", now(), flag);
sz = sizeof(flag);
ERR_CHK(getsockopt(sockfd, SOL_SOCKET, SO_RCVBUF, &flag, &sz));
fprintf(stdout, "%s: listener socket recv buffer = %d\n", now(), flag);
See what these show for you.
When you recv() on a NON-blocking socket that has data, it normally does not have megabytes of data parked in the buffer ready to recv. In my experience the socket typically has ~1500 bytes of data per recv. Since you are probably reading on a blocking socket, it takes a while for the recv() to complete.
Socket buffer size is probably the single best predictor of socket throughput. setsockopt() lets you alter the socket buffer size, up to a point. Note: these buffers are shared among sockets in a lot of OSes, such as Solaris, and you can kill performance by twiddling these settings too much.
Also, I don't think you are measuring what you think you are measuring. The real efficiency of send() is the throughput measured at the recv() end, not at the send() end.
IMO.