I am currently trying to write a program that will read Bluetooth output from an Arduino HC-05 module on a Serial Communications Port.
http://cdn.makezine.com/uploads/2014/03/hc_hc-05-user-instructions-bluetooth.pdf
When I open a Putty terminal and tell it to listen to COM4, I am able to see the output that the program running on the Arduino is printing.
However, when I run the following program to try to process incoming data on the serial port programmatically, I get the output shown.
#include <Windows.h>
#include <string>
#include <atltrace.h>
#include <iostream>

int main(int argc, char** argv[]) {
    HANDLE hComm = CreateFile(
        L"COM4",
        GENERIC_READ | GENERIC_WRITE,
        0,
        0,
        OPEN_EXISTING,
        NULL,
        0
    );
    if (hComm == INVALID_HANDLE_VALUE) {
        std::cout << "Error opening COM4" << std::endl;
        return 1;
    }
    DWORD dwRead;
    BOOL fWaitingOnRead = false;
    OVERLAPPED osReader = { 0 };
    char message[100];
    osReader.hEvent = CreateEvent(NULL, TRUE, FALSE, NULL);
    if (osReader.hEvent == NULL) {
        std::cout << "Error creating overlapping event" << std::endl;
        return 2;
    }
    while (1) {
        if (!fWaitingOnRead) {
            if (!ReadFile(
                hComm,
                &message,
                sizeof(message),
                &dwRead,
                NULL
            )) {
                if (GetLastError() != ERROR_IO_PENDING) {
                    std::cout << "Communications error" << std::endl;
                    return 3;
                }
            }
            else {
                message[100] = '\0';
                std::cout << message << std::endl;
            }
        }
    }
    return 0;
}
I have made changes to the handle and the ReadFile call so that it makes the calls synchronously in an infinite loop. However, Visual Studio pops up a warning saying that the program has stopped working, then asks to debug or close the program. My assumption is that it must be stalling somewhere or failing to execute some Windows API function up the stack.
Any help, pointers, greatly appreciated.
At least IMO, using overlapped I/O for this job is pretty severe overkill. You could make it work, but it would take a lot of extra effort on your part, and probably accomplish very little.
The big thing with using comm ports under Windows is to set the timeouts to at least halfway meaningful values. When I first did this, I started by setting all of the values to 1, with the expectation that this would sort of work, but probably consume excessive CPU time, so I'd want to experiment with higher values to retain fast enough response, while reducing CPU usage.
So, I wrote some code that just set all the values in the COMMTIMEOUTS structure to 1, and setup the comm port to send/read data.
I've never gotten around to experimenting with longer timeouts to try to reduce CPU usage, because even on the machine I was using when I first wrote this (probably a Pentium II, or thereabouts), it was functional, and consumed too little CPU time to care about--I couldn't really see the difference between the machine completely idle, and this transferring data. There might be circumstances that would justify more work, but at least for any need I've had, it seems to be adequate as it is.
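For reference, a minimal sketch of that kind of setup, assuming hComm is the already-opened port handle, might look like this (the 1 ms values are just the starting point described above, not tuned recommendations):
COMMTIMEOUTS timeouts = { 0 };
timeouts.ReadIntervalTimeout         = 1;   // max ms allowed between two received bytes
timeouts.ReadTotalTimeoutMultiplier  = 1;   // per-byte part of the total read timeout
timeouts.ReadTotalTimeoutConstant    = 1;   // fixed part of the total read timeout
timeouts.WriteTotalTimeoutMultiplier = 1;
timeouts.WriteTotalTimeoutConstant   = 1;
if (!SetCommTimeouts(hComm, &timeouts)) {
    std::cout << "SetCommTimeouts failed: " << GetLastError() << std::endl;
}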
That's because message has the wrong type.
To contain a string, it should be an array of characters, not an array of pointers to characters.
Additionally, to treat it as a string, you need to set the array element after the last character to '\0'. ReadFile will put the number of characters it reads into dwRead.
Also, it appears that you are not using overlapped I/O correctly. This simple program has no need for overlapped I/O - remove it. (As pointed out by @EJP, you are checking for ERROR_IO_PENDING incorrectly. Remove that too.)
See comments below, in your program:
if (!fWaitingOnRead) {
    if (!ReadFile(          // here you make a non-blocking read.
        hComm,
        message,
        sizeof(*message),
        &dwRead,
        &osReader
    )) {
        // Windows reports you should wait for input.
        //
        if (GetLastError() != ERROR_IO_PENDING) {
            std::cout << "Communications error" << std::endl;
            return 3;
        }
        else { // <-- remove this.
            // insert call to GetOverlappedResult here.
            std::cout << message << std::endl;
        }
    }
}
return 0; // instead of waiting for input, you exit.
}
After you call ReadFile() you have to insert a call to GetOverlappedResult(hComm, &osReader, &dwBytesReceived, TRUE) to wait for the read operation to complete and to have some bytes in your buffer.
You will also need to have a loop in your program if you don't want to exit prematurely.
If you do not want to do overlapped I/O (which is a wise decision), do not pass an OVERLAPPED pointer to ReadFile. ReadFile will then block until it has some data to give you, and you obviously will not need to call GetOverlappedResult().
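A minimal sketch of that synchronous approach, assuming hComm is already open and the timeouts and DCB (see below) have been configured, might look like this:
char message[100];
DWORD dwRead = 0;
for (;;) {
    // No OVERLAPPED pointer: ReadFile blocks until data arrives or a timeout expires.
    if (!ReadFile(hComm, message, sizeof(message) - 1, &dwRead, NULL)) {
        std::cout << "Communications error: " << GetLastError() << std::endl;
        break;
    }
    if (dwRead > 0) {
        message[dwRead] = '\0';            // terminate using the count ReadFile reported
        std::cout << message << std::flush;
    }
}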
For the serial port, you also need to fill in a DCB structure. https://msdn.microsoft.com/en-us/library/windows/desktop/aa363214(v=vs.85).aspx
You can use BuildCommDCB() to initialize it (there is a link to it in the MS doc), then call SetCommState(hComm, &dcb) to configure the serial port hardware. The serial port needs to know which baud rate etc. you need for your app.
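A rough sketch of that configuration step, assuming 9600 baud and 8N1 (which may not match your module's settings), could be:
DCB dcb = { 0 };
dcb.DCBlength = sizeof(dcb);
if (GetCommState(hComm, &dcb)) {        // start from the port's current settings
    dcb.BaudRate = CBR_9600;            // assumed baud rate; adjust for your device
    dcb.ByteSize = 8;
    dcb.Parity   = NOPARITY;
    dcb.StopBits = ONESTOPBIT;
    if (!SetCommState(hComm, &dcb)) {   // apply the settings to the hardware
        std::cout << "SetCommState failed: " << GetLastError() << std::endl;
    }
}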
Related
I need some help with understanding how to use ReadFile and WriteFile in C++ while using the method shown in this guide:
https://www.delftstack.com/howto/cpp/cpp-serial-communication/
My question is: how do I use these two functions to send or receive anything? I don't know how to call them properly.
I start with the handle:
// Open serial port
HANDLE serialHandle;
serialHandle = CreateFile(L"COM3", GENERIC_READ | GENERIC_WRITE, 0, 0, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
Next I did some basic settings like setting the baud rate, byte size, etc.; I will skip that. And here we come to my problem.
I tried to send some data and receive it (my cable's output and input pins are connected). The problem is I don't know how to call ReadFile and WriteFile properly. Here's how I tried to do it:
char sBuff[n + 1] = { 0 };
DWORD send = 0;
cout << "Sent: " << WriteFile(serialHandle, sBuff, n, &send, NULL) << endl;
DWORD dwRead = 0;
cout << "Received: " << ReadFile(serialHandle, sBuff, n, &dwRead, NULL) << endl;
CloseHandle(serialHandle);
}
}
It's just an attempt to guess the correct method. Any example with a short explanation would be much appreciated.
Edit: removed a useless chunk of code; I hope my question is more understandable now.
It's all about the handle "serialHandle" being set to the COM3 port instead of a file.
For your first question: at the layer where you write code, you should not worry about synchronization or interference between sending and receiving; the serial port driver coordinates that.
For your second question: your data is sent as a string of ASCII characters. On the receiving side you have to write something like an extract(receivedData) function that parses the text and places the values into your integer, double, or string variables. In practice you need a protocol, for example NMEA.
Please search for the NMEA protocol on Google.
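As a rough illustration of the kind of extract() function meant above (the message format "T=23.5;H=40" and the field names are made up for the example):
#include <cstdio>
#include <string>

// Hypothetical parser: turns an ASCII line such as "T=23.5;H=40"
// back into numeric variables on the receiving side.
bool extract(const std::string& receivedData, double& temperature, int& humidity)
{
    // sscanf returns the number of fields it converted successfully.
    return std::sscanf(receivedData.c_str(), "T=%lf;H=%d",
                       &temperature, &humidity) == 2;
}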
Consider this little program, compiled as application.exe:
#include <stdio.h>

int main()
{
    char str[100];
    printf("Hello, please type something\n");
    scanf("%99[^\n]", str);
    printf("you typed: %s\n", str);
    return 0;
}
Now I use this code to start application.exe and fetch its output.
#include <stdio.h>
#include <iostream>
#include <stdexcept>

int main()
{
    char buffer[128];
    FILE* pipe = popen("application.exe", "r");
    while (!feof(pipe)) {
        if (fgets(buffer, 128, pipe) != NULL)
            printf("%s", buffer);
    }
    pclose(pipe);
    return 0;
}
My problem is that there is no output until I have done my input; then both output lines get fetched at once.
I can work around this problem by adding this line after the first printf statement:
fflush(stdout);
Then the first line is fetched before I do my input, as expected.
But how can I fetch the output of applications that I cannot modify and that do not use fflush(), in "real time" (that is, before they exit)?
And how does the Windows cmd do it?
You have been bitten by the fact that the buffering for the streams which are automatically opened in a C program changes with the type of device attached.
That's a bit odd — one of the things which make *nixes nice to play with (and which are reflected in the C standard library) is that processes don't care much about where they get data from and where they write it. You just pipe and redirect around at your leisure and it's usually plug and play, and pretty fast.
One obvious place where this rule breaks is interaction; you present a nice example. If the output of the program is block buffered you don't see it before maybe 4k data has accumulated, or the process exits.
A program can detect, though, whether it writes to a terminal via isatty() (and perhaps through other means as well). A terminal conceptually includes a user, suggesting an interactive program. The library code opening stdin and stdout checks for that and changes their buffering policy to line buffered: when a newline is encountered, the stream is flushed. That is perfect for interactive, line-oriented applications. (It is less than perfect for programs that do their own line editing, as bash does; those disable buffering completely.)
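A tiny probe makes the distinction visible; this sketch uses the Windows CRT names _isatty/_fileno from <io.h> (on POSIX it would be isatty/fileno from <unistd.h>):
#include <stdio.h>
#include <io.h>     // _isatty, _fileno (Windows CRT)

int main(void)
{
    if (_isatty(_fileno(stdout)))
        printf("stdout is a terminal: expect line buffering\n");
    else
        printf("stdout is a pipe or file: expect block buffering\n");
    fflush(stdout);  // force the message out regardless of the buffering mode
    return 0;
}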
The open group man page for stdin is fairly vague with respect to buffering in order to give implementations enough leeway to be efficient, but it does say:
the standard input and standard output streams are fully buffered if and only if the stream can be determined not to refer to an interactive device.
That's what happens to your program: The standard library sees that it is running "non-interactively" (writing to a pipe), tries to be smart and efficient and switches on block buffering. Writing a newline does not flush the output any longer. Normally that is a good thing: Imagine writing binary data, writing to disk every 256 bytes, on average! Terrible.
It is worth realizing that there is probably a whole cascade of buffers between you and, say, a disk; after the C standard library come the operating system's buffers, and then the disk's own.
Now to your problem: The standard library buffer used to store characters-to-be-written is in the memory space of the program. Despite appearances, the data has not yet left your program and hence is not (officially) accessible by other programs. I think you are out of luck. You are not alone: Most interactive console programs will perform badly when one tries to operate them through pipes.
IMHO, that is one of the less logical parts of IO buffering: it acts differently when directed to a terminal or to a file or pipe. If IO is directed to a file or a pipe, it is normally buffered, that means that output is actually written only when a buffer is full or when an explicit flush occurs => that is what you see when you execute a program through popen.
But when IO is directed to a terminal, a special case occurs: all pending output is automatically flushed before a read from the same terminal. That special case is necessary to allow interactive programs to display prompts before reading.
The bad thing is that if you try to drive an interactive application through pipes, you lose: the prompts can only be read when either the application ends or when enough text has been output to fill a buffer. That's the reason why Unix developers invented the so-called pseudo-ttys (ptys). They are implemented as terminal drivers, so that the application uses the interactive buffering, but the I/O is in fact manipulated by another program owning the master side of the pty.
Unfortunately, since you write application.exe, I assume that you use Windows, and I do not know an equivalent mechanism in the Windows API. The callee must use unbuffered I/O (stderr is unbuffered by default) to allow its prompts to be read by the caller before the caller sends the answer.
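If you can modify the callee, one alternative to sprinkling fflush() calls is to disable stdout buffering once at startup; a one-line sketch:
// In application.exe, before any output is produced:
setvbuf(stdout, NULL, _IONBF, 0);   // make stdout unbuffered, like stderr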
The problems in my original question are already very well explained in the other answers. Console applications use a function named isatty() to detect whether their stdout handle is connected to a pipe or a real console. In the case of a pipe, all output is buffered and flushed in chunks, unless you call fflush() directly. In the case of a real console the output is unbuffered and gets printed directly to the console output.
In Linux you can use openpty() to create a pseudoterminal and create your process in it. As a result the process thinks it runs in a real terminal and uses unbuffered output. Windows seems not to have such an option.
After a lot of digging through the WinAPI documentation I found that this is not true. You can actually create your own console screen buffer and use it as the stdout of your process, which will then be unbuffered.
Sadly this is not a very comfortable solution, because there is no event handler and we need to poll for new data. Also, at the moment I am not sure how to handle scrolling when this screen buffer is full. But even if there are still some problems left, I think I have created a very useful (and interesting) starting point for those of you who ever wanted to fetch unbuffered (and unflushed) Windows console process output.
#include <windows.h>
#include <stdio.h>
#include <vector>
int main(int argc, char* argv[])
{
char cmdline[] = "application.exe"; // process command
HANDLE scrBuff; // our virtual screen buffer
CONSOLE_SCREEN_BUFFER_INFO scrBuffInfo; // state of the screen buffer
// like actual cursor position
COORD scrBuffSize = {80, 25}; // size in chars of our screen buffer
SECURITY_ATTRIBUTES sa; // security attributes
PROCESS_INFORMATION procInfo; // process information
STARTUPINFO startInfo; // process start parameters
DWORD procExitCode; // state of process (still alive)
DWORD NumberOfCharsWritten; // output of fill screen buffer func
COORD pos = {0, 0}; // scr buff pos of data we have consumed
bool quit = false; // flag for reading loop
// 1) Create a screen buffer, set size and clear
sa.nLength = sizeof(sa);
scrBuff = CreateConsoleScreenBuffer( GENERIC_READ | GENERIC_WRITE,
FILE_SHARE_READ | FILE_SHARE_WRITE,
&sa, CONSOLE_TEXTMODE_BUFFER, NULL);
SetConsoleScreenBufferSize(scrBuff, scrBuffSize);
// clear the screen buffer
FillConsoleOutputCharacter(scrBuff, '\0', scrBuffSize.X * scrBuffSize.Y,
pos, &NumberOfCharsWritten);
// 2) Create and start a process
// [using our screen buffer as stdout]
ZeroMemory(&procInfo, sizeof(PROCESS_INFORMATION));
ZeroMemory(&startInfo, sizeof(STARTUPINFO));
startInfo.cb = sizeof(STARTUPINFO);
startInfo.hStdOutput = scrBuff;
startInfo.hStdError = GetStdHandle(STD_ERROR_HANDLE);
startInfo.hStdInput = GetStdHandle(STD_INPUT_HANDLE);
startInfo.dwFlags |= STARTF_USESTDHANDLES;
CreateProcess(NULL, cmdline, NULL, NULL, FALSE,
0, NULL, NULL, &startInfo, &procInfo);
CloseHandle(procInfo.hThread);
// 3) Read from our screen buffer while process is alive
while(!quit)
{
// check if process is still alive or we could quit reading
GetExitCodeProcess(procInfo.hProcess, &procExitCode);
if(procExitCode != STILL_ACTIVE) quit = true;
// get actual state of screen buffer
GetConsoleScreenBufferInfo(scrBuff, &scrBuffInfo);
// check if screen buffer cursor moved since
// last time means new output was written
if (pos.X != scrBuffInfo.dwCursorPosition.X ||
pos.Y != scrBuffInfo.dwCursorPosition.Y)
{
// Get new content of screen buffer
// [ calc len from pos to cursor pos:
// (curY - posY) * lineWidth + (curX - posX) ]
DWORD len = (scrBuffInfo.dwCursorPosition.Y - pos.Y)
* scrBuffInfo.dwSize.X
+(scrBuffInfo.dwCursorPosition.X - pos.X);
            std::vector<char> buffer(len);   // avoid a variable-length array (non-standard C++)
            ReadConsoleOutputCharacter(scrBuff, buffer.data(), len, pos, &len);
// Print new content
// [ there is no newline, unused space is filled with '\0'
// so we read char by char and if it is '\0' we do
// new line and forward to next real char ]
for(int i = 0; i < len; i++)
{
if(buffer[i] != '\0') printf("%c",buffer[i]);
else
{
printf("\n");
while((i + 1) < len && buffer[i + 1] == '\0')i++;
}
}
// Save new position of already consumed data
pos = scrBuffInfo.dwCursorPosition;
}
// no new output so sleep a bit before next check
else Sleep(100);
}
// 4) Cleanup and end
CloseHandle(scrBuff);
CloseHandle(procInfo.hProcess);
return 0;
}
You can't.
Because data that has not yet been flushed is owned by the program itself.
I think you can flush the data to stderr instead, or encapsulate fgetc and ungetc in a function so as not to corrupt the stream, or use system("application.exe >> log") and then mmap the log into memory to do what you want.
I need to read a file asynchronously:
string read(string path) {
    DWORD readenByte;
    int t;
    char* buffer = new char[512];
    HANDLE hEvent = CreateEvent(NULL, FALSE, FALSE, "read");
    OVERLAPPED overlap;
    overlap.hEvent = hEvent;
    HANDLE hFile = CreateFile(path.c_str(), GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    if(!hFile) {
        Debug::error(GetLastError(), "fileAsync.cpp::read - ");
    }
    t = ReadFile(hFile, buffer, MAX_READ - 1, &readenByte, &overlap);
    if(!t) {
        Debug::error(GetLastError(), "fileAsync.cpp::read - ");
    }
    t = WaitForSingleObject(hEvent, 5000);
    if(t == WAIT_TIMEOUT) {
        Debug::error("fail to read - timeout, fileAsync.cpp::read");
    }
    buffer[readenByte] = '\0';
    string str = buffer;
    return str;
}
I get the error at ReadFile - 38: reached the end of the file.
How do I read a file asynchronously in C++ using the WinAPI?
There are several bugs in your code that need to be addressed, some cause failure, others catastrophic failure.
The first bug leads to the error code you get: You have an uninitialized OVERLAPPED structure, instructing the following ReadFile call to read from the random file position stored in the Offset and OffsetHigh members. To fix this, initialize the data: OVERLAPPED overlap = {0};.
Next, you aren't opening the file for asynchronous access. To subsequently read asynchronously from a file, you need to call CreateFile passing FILE_FLAG_OVERLAPPED for dwFlagsAndAttributes. If you don't, you're in for hunting a bug for months (see What happens if you forget to pass an OVERLAPPED structure on an asynchronous handle?).
The documentation for ReadFile explains, that lpNumberOfBytesRead parameter is not used for asynchronous I/O, and you should pass NULL instead. This should be immediately obvious, since an asynchronous ReadFile call returns, before the number of bytes transferred is known. To get the size of the transferred payload, call GetOverlappedResult once the asynchronous I/O has finished.
The next bug only causes a memory leak. You are dynamically allocating buffer, but never call delete[] buffer;. Either delete the buffer, or allocate a buffer with automatic storage duration (char buffer[MAX_READ] = {0};), or use a C++ container (e.g. std::vector<char> buffer(MAX_READ);).
Another bug is where you try to construct a std::string from your buffer: the constructor you chose cannot deal with embedded NUL characters; it will just truncate whatever you have. You'd need to call a std::string constructor taking an explicit length argument. But even then, you may wind up with garbage if the character encoding of the file and of std::string do not agree.
Finally, issuing an asynchronous read followed by WaitForSingleObject is essentially a synchronous read and doesn't buy you anything. I'm assuming this is just for testing and not your final code. Just keep in mind when finishing this up that the OVERLAPPED structure needs to stay alive for as long as the asynchronous read operation is in flight.
Additional recommendations, that do not immediately address bugs:
You are passing a std::string to your read function, that is used in the CreateFile call. Windows uses UTF-16LE encoding throughout, which maps to wchar_t/std::wstring when using Visual Studio (and likely other Windows compilers as well). Passing a std::string/const char* has two immediate drawbacks:
Calling the ANSI API causes character strings to be converted from MBCS to UTF-16 (and vice versa). This both needlessly wastes resources and fails in very subtle ways, as it relies on the current locale.
Not every Unicode code point can be expressed using MBCS encoding. This means that some files cannot be opened when using MBCS character encoding.
Use the Unicode API (CreateFileW) and UTF-16 character strings (std::wstring/wchar_t) throughout. You can also define the preprocessor symbols UNICODE (for the Windows API) and _UNICODE (for the CRT) at the compiler's command line, to not accidentally call into any ANSI APIs.
You are creating an event object that is only ever accessed through its HANDLE value, not by its name. You can pass NULL as the lpName argument to CreateEvent. This prevents potential name clashes, which is all the more important with a name as generic as "read".
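Putting those fixes and recommendations together, a condensed sketch (error handling abbreviated; the 512-byte buffer size is just an assumption in place of MAX_READ) might look like this:
#include <Windows.h>
#include <string>
#include <vector>

std::string readAsync(const std::wstring& path)
{
    // Open for asynchronous access with the Unicode API.
    HANDLE hFile = CreateFileW(path.c_str(), GENERIC_READ, FILE_SHARE_READ, NULL,
                               OPEN_EXISTING,
                               FILE_ATTRIBUTE_NORMAL | FILE_FLAG_OVERLAPPED, NULL);
    if (hFile == INVALID_HANDLE_VALUE)
        return std::string();

    OVERLAPPED overlap = { 0 };                              // zero-initialized, offset 0
    overlap.hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);  // unnamed event

    std::vector<char> buffer(512);
    if (!ReadFile(hFile, buffer.data(), (DWORD)buffer.size(), NULL, &overlap)
        && GetLastError() != ERROR_IO_PENDING)
    {
        // A real failure, not just "operation in progress".
        CloseHandle(overlap.hEvent);
        CloseHandle(hFile);
        return std::string();
    }

    DWORD bytesRead = 0;
    GetOverlappedResult(hFile, &overlap, &bytesRead, TRUE);  // wait for completion

    CloseHandle(overlap.hEvent);
    CloseHandle(hFile);
    return std::string(buffer.data(), bytesRead);            // explicit length constructor
}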
1) You need to include the flag FILE_FLAG_OVERLAPPED in the 6th argument (dwFlagsAndAttributes) of the call to CreateFile. That is most likely why the overlapped read fails.
2) What is the value of MAX_READ? I hope it's less than 513 otherwise if the file is bigger than 512 bytes bad things will happen.
3) ReadFile with a non-NULL overlapped structure pointer will give you the error code 997 (ERROR_IO_PENDING), which is expected, so you cannot simply treat a FALSE return value (t) as a failure right after calling ReadFile.
4) In the case of asynchronous operation the ReadFile function does not store the bytes read in the pointer you pass in the call, you must query the overlapped result yourself after the operation is completed.
Here is a small working snippet, I hope you can build up from that:
#include <Windows.h>
#include <iostream>
#include <sstream>
class COverlappedCompletionEvent : public OVERLAPPED
{
public:
COverlappedCompletionEvent() : m_hEvent(NULL)
{
m_hEvent = CreateEvent(NULL, FALSE, FALSE, NULL);
if (m_hEvent == NULL)
{
auto nError = GetLastError();
std::stringstream ErrorStream;
ErrorStream << "CreateEvent() failed with " << nError;
throw std::runtime_error(ErrorStream.str());
}
ZeroMemory(this, sizeof(OVERLAPPED));
hEvent = m_hEvent;
}
~COverlappedCompletionEvent()
{
if (m_hEvent != NULL)
{
CloseHandle(m_hEvent);
}
}
private:
HANDLE m_hEvent;
};
int main(int argc, char** argv)
{
try
{
if (argc != 2)
{
std::stringstream ErrorStream;
ErrorStream << "usage: " << argv[0] << " <filename>";
throw std::runtime_error(ErrorStream.str());
}
COverlappedCompletionEvent OverlappedCompletionEvent;
char pBuffer[512];
auto hFile = CreateFileA(argv[1], GENERIC_READ, FILE_SHARE_READ, NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL | FILE_FLAG_OVERLAPPED, NULL);
if (hFile == INVALID_HANDLE_VALUE)
{
auto nError = GetLastError();
std::stringstream ErrorStream;
ErrorStream << "CreateFileA() failed with " << nError;
throw std::runtime_error(ErrorStream.str());
}
if (ReadFile(hFile, pBuffer, sizeof(pBuffer), nullptr, &OverlappedCompletionEvent) == FALSE)
{
auto nError = GetLastError();
if (nError != ERROR_IO_PENDING)
{
std::stringstream ErrorStream;
ErrorStream << "ReadFile() failed with " << nError;
throw std::runtime_error(ErrorStream.str());
}
}
::WaitForSingleObject(OverlappedCompletionEvent.hEvent, INFINITE);
DWORD nBytesRead = 0;
if (GetOverlappedResult(hFile, &OverlappedCompletionEvent, &nBytesRead, FALSE))
{
std::cout << "Read " << nBytesRead << " bytes" << std::endl;
}
CloseHandle(hFile);
}
catch (const std::exception& Exception)
{
std::cout << Exception.what() << std::endl;
return EXIT_FAILURE;
}
return EXIT_SUCCESS;
}
I attempted to modify Teunis van Beelen's RS232 library from polling to event-driven and non-overlapped to suit my project: RS232 Library
I expect to receive blocks of data (roughly 100 to 200 chars) every 200ms.
The problem I am having is that the received data is very inconsistent: it is cut off at random points and incomplete.
I would like ReadFile() to return only after reading one whole block of data (or something to that effect).
I feel like the problem is with the timeout settings, because altering the figures gives me different results, but I just can't get it right. My best result so far has been to set all timeout values to 0 and have ReadFile() expect 150 bytes; that way ReadFile() does not return unless it reads 150 chars, but this goes out of sync after a few transmissions, as I have no idea how much data to expect.
These are the main changes to the polling function in Teunis's code; besides the timeout settings, all other settings are unchanged:
//Using the EV_RXCHAR flag will notify the thread that a byte arrived at the port
DWORD dwError = 0;
//use SetCommMask and WaitCommEvent to see if a byte has arrived at the port
//SetCommMask sets the desired events that cause a notification.
if(!SetCommMask(Cport[comport_number], EV_RXCHAR)){
    printf("SetCommMask Error");
    dwError = GetLastError();
    // Error setting com mask
    return FALSE;
}
//WaitCommEvent function detects the occurrence of the events.
DWORD dwCommEvent;
for( ; ; )
{
    //wait for event to happen
    if (WaitCommEvent(Cport[comport_number], &dwCommEvent, NULL))
    {
        if(ReadFile(Cport[comport_number], buf, 1, (LPDWORD)((void *)&n), NULL)){
            //Byte has been read, buf is processed in main
        }
        else{
            //error occurred in ReadFile call
            dwError = GetLastError();
            break;
        }
    }
    else{
        //error in WaitCommEvent
        break;
    }
    break; //break after read file
}
Attempt 2, as suggested by the MSDN article on serial communications, uses a do-while loop to cycle ReadFile through every character in the buffer; this method did not yield any good results either.
DWORD dwError = 0;
/*
Using the EV_RXCHAR flag will notify the thread that a byte arrived at the port
*/
//use SetCommMask and WaitCommEvent to see if a byte has arrived at the port
//SetCommMask sets the desired events that cause a notification.
if(!SetCommMask(Cport[comport_number], EV_RXCHAR)){
    printf("SetCommMask Error");
    dwError = GetLastError();
    // Error setting com mask
    return FALSE;
}
//WaitCommEvent function detects the occurrence of the events.
DWORD dwCommEvent;
for( ; ; )
{
    //wait for event to happen
    if (WaitCommEvent(Cport[comport_number], &dwCommEvent, NULL))
    {
        //Do while loop will cycle ReadFile until bytes-read reach 0,
        do{
            if(ReadFile(Cport[comport_number], buf, size, (LPDWORD)((void *)&n), NULL)){
                //Byte has been read, buf is processed in main
            }
            else{
                //error occurred in ReadFile call
                dwError = GetLastError();
                break;
            }
        }while(n);
    }
    else{
        //error in WaitCommEvent
        break;
    }
    break; //break after read file
}
I am wondering if rewriting the code in overlapped mode would improve things, but I don't see the advantage, as I have no need for multithreading. Any suggestions would be great!
Thank you.
ReadFile has no way to detect what a "block of data" is. You should not expect it to understand your data or the timing of that data. The only fix for this issue is for you to process whatever it gives you, using your own knowledge of the data to divide it up into "blocks" for further processing. If you get a partial block keep it, and append to it with the next read.
There is no need to call WaitCommEvent for data. ReadFile will wait for data. But give it a suitably sized buffer and ask for a lot more than one byte at a time. It's extremely inefficient to call it for only one byte. Select the requested count and the timeouts so that ReadFile will return within an acceptable time, whether there is data or not.
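A sketch of that kind of reassembly, assuming blocks are terminated by a newline (the delimiter is an assumption; use whatever your own data format defines):
#include <string>
#include <vector>

std::string pending;   // carries an incomplete block between ReadFile calls

std::vector<std::string> splitBlocks(const char* data, size_t n)
{
    std::vector<std::string> blocks;
    pending.append(data, n);               // append exactly what ReadFile returned
    size_t pos;
    while ((pos = pending.find('\n')) != std::string::npos) {
        blocks.push_back(pending.substr(0, pos));   // one complete block
        pending.erase(0, pos + 1);
    }
    return blocks;   // anything left in 'pending' is a partial block, kept for next time
}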
This question has been asked a number of times, I have noted, but none of the solutions seem to be applicable to me. Before I continue I will post a little bit of code for you:
// Await the response and stream it to the buffer, with a physical limit of 1024 ASCII characters
stringstream input;
char buffer[4096*2];
while (recv(sock, buffer, sizeof(buffer) - 1, MSG_WAITALL) > 0)
input << buffer;
input << '\0';
// Close the TCP connection
close(sock);
freehostent(hostInfo);
And here is my request:
string data;
{
stringstream bodyStream;
bodyStream
<< "POST /api/translation/translate HTTP/1.1\n"
<< "Host: elfdict.com\n"
<< "Content-Type: application/x-www-form-urlencoded\n"
<< "Content-Length: " << (5 + m_word.length())
<< "\n\nterm=" << m_word;
data = bodyStream.str();
}
cout << "Sending HTTP request: " << endl << data << endl;
I am very new to this sort of programming (and to Stack Overflow, preferring to slog it out and bang my head against a wall until I solve problems myself, but I'm lost here!) and would really appreciate help working out why it takes so long. I've looked into setting the socket up so that it is non-blocking, but I had issues getting that to work as expected; maybe people here could point me in the right direction, if the non-blocking route is the way I need to go.
I have seen that a lot of people prefer to use libraries, but I want to learn to do this myself!
I'm also new to programming on the Mac and to working with sockets. Probably not the best first project, maybe, but I've started now, so I wish to continue :) Any help would be nice!
Thank you in advance!
The reason it takes a long time to receive is that you tell the system to wait until it has received all the data you asked for (i.e. 8k bytes), or an error occurs on the connection, or it is closed. That is what the MSG_WAITALL flag does.
One solution to this is to make the socket non-blocking, and do a continuous read in a loop until we get an error or the connection is closed.
How to make a socket non-blocking differs depending on the platform: on Windows it is done with the ioctlsocket function, on Linux and similar systems with the fcntl function:
int flags = fcntl(sock, F_GETFL, 0);
flags |= O_NONBLOCK;
fcntl(sock, F_SETFL, flags);
Then you read from the socket like this:
std::stringstream input;
for (;;)
{
    char buffer[256];
    ssize_t recvsize;
    recvsize = recv(sock, buffer, sizeof(buffer) - 1, 0);
    if (recvsize == -1)
    {
        if (errno != EAGAIN && errno != EWOULDBLOCK)
            break; // An error
        else
            continue; // No more data at the moment
    }
    else if (recvsize == 0)
        break; // Connection closed
    // Terminate buffer
    buffer[recvsize] = '\0';
    // Append to input
    input << buffer;
}
The problem with the above loop is that if no data is ever received, it will loop forever.
However, you have a much more serious problem in your code: You receive into a buffer, and then you append it to the stringstream, but you do not terminate the buffer. You do not need to terminate the string in the stream, it's done automatically, but you do need to terminate the buffer.
This can be solved like this:
int rc;
while ((rc = recv(sock, buffer, sizeof(buffer) - 1, MSG_WAITALL)) > 0)
{
buffer[rc] = '\0';
input << buffer;
}
The problem here happens because you are specifying the MSG_WAITALL flag. It forces recv to remain blocked until all the requested bytes are received (sizeof(buffer) - 1 in your case, while the message being sent by the other party is obviously smaller) or an error occurs, in which case it returns -1 with errno set appropriately.
I think a preferable option would be to call recv without any flags, in a loop, until the socket on the other end is closed (recv returns 0) or some separator is received.
However, you should be careful with input << buffer, because recv might return only a small portion of data (for example, 20 bytes) on each iteration, so you should put exactly that amount of data into the string stream. The number of bytes received is returned by recv.
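A sketch of that approach, using the count returned by recv so that only the bytes actually received are appended (assumes a POSIX socket as in the question; the function name is just for illustration):
#include <sys/types.h>
#include <sys/socket.h>
#include <sstream>
#include <string>

std::string receiveAll(int sock)
{
    std::ostringstream input;
    char buffer[4096];
    for (;;) {
        ssize_t n = recv(sock, buffer, sizeof(buffer), 0);  // no MSG_WAITALL
        if (n <= 0)
            break;                 // 0 = peer closed the connection, -1 = error
        input.write(buffer, n);    // append exactly n bytes; no terminator needed
    }
    return input.str();
}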