I have a client and a server communicating using a named pipe.
I'm trying to pass the address stored by an LPCWSTR variable from the client to the server.
To do this, I first write the address into a wchar_t buffer, then send the server the size of that buffer (as a DWORD) so the server knows how many bytes to read. I managed to send the buffer size successfully, but I'm unable to send the complete string.
Even though the server says it has read the required number of bytes, the buffer on the server side doesn't have the entire string.
Client:
wchar_t msgBuffer[1024];
LPCWSTR lpName = L"NameString";
_swprintf(msgBuffer, _T("%p\0"), lpName); //Write data to the buffer
DWORD nBytesToWrite = wcslen(msgBuffer); //Number of bytes to be written
bWriteFile = WriteFile( //Send the buffer size
hCreateFile,
&nBytesToWrite,
(DWORD)sizeof(nBytesToWrite),
&dwNoBytesWritten,
NULL
);
bWriteFile = WriteFile( //Send the data
hCreateFile,
msgBuffer,
(DWORD)wcslen(msgBuffer),
&dwNoBytesWritten,
NULL
);
Server:
DWORD dwBytesToRead = 0;
bReadFile = ReadFile( //Read the size of the next message
hCreateNamedPipe,
&dwBytesToRead,
sizeof(DWORD),
&dwNoBytesRead,
NULL);
std::cout << "\nBytes to be read: " << dwBytesToRead;
wchar_t msg[] = L"";
bReadFile = ReadFile( //Read the data
hCreateNamedPipe,
&msg,
dwBytesToRead,
&dwNoBytesRead,
NULL);
std::cout << "\nBytes Read: " << dwNoBytesRead;// << '\n' << msg;
wprintf(L"\nMessage: %s\nSize: %zu", msg, wcslen(msg));
This is what the output on the server side is:
Bytes to be read: 9
Bytes Read: 9
Message: 78E7
Size: 5
The address is 78E7325C on the client side, but my server only prints 78E7
Even though the server says it has read 9 bytes, the length of the resulting string is just 5. Why is this?
EDIT: I've checked the buffer on the client side; it has the correct address stored. And is it okay to send the DWORD variable using the address-of (&) operator in WriteFile()?
The Solution
Changed (DWORD)wcslen(msgBuffer) to (DWORD)(wcslen(msgBuffer) * sizeof(wchar_t)) in the second WriteFile call.
wcslen gives the number of characters, whereas WriteFile expects the number of bytes, and these aren't the same for wide strings.
C-style strings are represented as pointers to a character array, with an implied length. The length is the number of characters in the array up to the first NUL character. When you interpret binary data as a C-style string (which your call to wprintf does), it stops writing characters once it finds the first character with a value of zero.
You are indeed able to read the entire message. The bug is that your code to verify this condition is based on a wrong assumption. You'll have to output dwNoBytesRead bytes in a loop, and cannot take advantage of the built-in string facilities of wprintf.
Besides that, you are reading into unallocated memory. wchar_t msg[] = L"" allocates an array of exactly one character, but you are reading into it, as if it were able to grow. That's not how things work in C. You'll need to familiarize yourself with the basics of the programming language you are using.
In addition, you are sending only half of your payload. WriteFile expects the number of bytes to write, but you are passing the return value of wcslen, i.e. the number of characters. On Windows, a wchar_t is 2 bytes wide.
Related
I have a client-server application, with the server part written in C++ (Winsock) and the client part in Java.
When sending data from the client, I first send its length followed by the actual data. For sending the length, this is the code:
clientSender.print(text.length());
where clientSender is of type PrintWriter.
On the server side, the code that reads this is
int iDataLength;
if(recv(client, (char *)&iDataLength, sizeof(iDataLength), 0) != SOCKET_ERROR)
//do something
I tried printing the value of iDataLength within the if and it always turns out to be some random large integer. If I change iDataLength's type to char, I get the correct value. However, the actual value could well exceed a char's capacity.
What is the correct way to read an integer passed over a socket in C++ ?
I think the problem is that PrintWriter is writing text and you are trying to read a binary number.
Here is what PrintWriter does with the integer it sends:
http://docs.oracle.com/javase/7/docs/api/java/io/PrintWriter.html#print%28int%29
Prints an integer. The string produced by String.valueOf(int) is
translated into bytes according to the platform's default character
encoding, and these bytes are written in exactly the manner of the
write(int) method.
Try something like this:
#include <sys/socket.h>
#include <cerrno>   // for errno
#include <cstring>  // for std::strerror()
#include <iostream> // for std::cerr
#include <string>   // for std::string and std::stoi
// ... stuff
char buf[1024]; // buffer to receive text
int len;
if((len = recv(client, buf, sizeof(buf), 0)) == -1)
{
std::cerr << "ERROR: " << std::strerror(errno) << std::endl;
return 1;
}
std::string s(buf, len);
int iDataLength = std::stoi(s); // convert text back to integer
// use iDataLength here (after sanity checks)
Are you sure endianness is not the issue? (Java encodes integers as big-endian; you may be reading them as little-endian.)
Besides, you might need to implement a receive-all function (similar to sendall, as here) to make sure you receive exactly the number of bytes specified, because recv may return fewer bytes than it was told to read.
You have a confusion between numeric values and their ASCII representation.
When in Java you write clientSender.print(text.length());, you are actually writing an ASCII string: if the length is 15, you will send the characters '1' (ASCII 0x31) and '5' (ASCII 0x35).
So you must either:
send a binary length in a portable way (in C or C++ you have htonl and ntohl; I'm unsure about the Java side)
add a separator (a newline) after the textual length on the Java side and decode it in C++:
char buffer[1024]; // a size big enough to read the packet
int iDataLength, l;
l = recv(client, buffer, sizeof(buffer) - 1, 0); // read the text, not a binary int
if (l != SOCKET_ERROR) {
    buffer[l] = 0;                      // NUL-terminate what was received
    sscanf(buffer, "%d", &iDataLength); // parse the textual length
    char *ptr = strchr(buffer, '\n');
    if (ptr == NULL) {
        // should never happen : peer does not respect protocol
        ...
    }
    ptr += 1; // ptr now points after the length
    //do something
}
The Java part should then be: clientSender.println(text.length());
EDIT:
From Remy Lebeau's comment: there is no 1-to-1 relationship between sends and reads in TCP. recv() can and does return arbitrary amounts of data, so you cannot assume that a single recv() will read the entire line of text.
The code above should therefore not do a single recv, but be ready to concatenate multiple reads to find the separator (left as an exercise for the reader :-)).
I'd like to start with the fact that I'm still learning C++, and some things still baffle me.
What I'm trying to accomplish is to build a byte stream to send over a socket. I'm trying to create a packet 1536 bytes in length for a handshake:
std::stringstream s1Stream;
char randData[1528], zeroVal[4] = {0, 0, 0, 0};
memset(&randData, 1, sizeof(randData)); // Fill the buffer with data
s1Stream << timestampVal; // 4 bytes
s1Stream << zeroVal; // 4 bytes
s1Stream << randData; // 1528 bytes
When I convert s1Stream to a string and check the size() of that string, the program says the size is 1541.
What am I doing wrong?
std::stringstream's operator<<(char const*), which is used here, treats its argument as a zero-terminated C-style string, and your randData array is not zero-terminated.
Since randData is not really a C-style string and looks like it could end up containing null bytes, the fix is to use
s1Stream.write(randData, sizeof(randData));
Note that this problem applies with zeroVal as well, except nothing of zeroVal will be written to s1Stream because it is zero-terminated at the first byte.
First question: I am confused about buffers in TCP. I read this documentation, TCP Buffer; the author says a lot about the TCP buffer, and it is a really good explanation for a beginner. What I need to know is: is this TCP buffer the same buffer we use in our basic client-server program (char buffer[some_size]), or is it a different buffer held internally by TCP?
My second question: I am sending string data with a length prefix ("This is data From me") from the client over a socket to the server. When I print my data at the console, along with my string it prints some garbage values, like "This is data From me zzzzzz 1/2 1/2.....". However, I fixed it by right-shifting nlength by 3 bits in char *recvbuf = new char[nlength>>3];, but why do I need to do it this way?
My third question is related to the first: if there is nothing like a TCP buffer and it is only about char buffer[some_size], what difference will my program notice between using a statically allocated buffer and a dynamically allocated one (char *recvbuf = new char[nlength];)? In short, which is best, and why?
Client Code
int bytesSent;
int bytesRecv = SOCKET_ERROR;
char sendbuf[200] = "This is data From me";
int nBytes = 200, nLeft, idx;
nLeft = nBytes;
idx = 0;
uint32_t varSize = strlen (sendbuf);
bytesSent = send(ConnectSocket,(char*)&varSize, 4, 0);
assert (bytesSent == sizeof (uint32_t));
std::cout<<"length information is in:"<<bytesSent<<"bytes"<<std::endl;
// code to make sure all data has been sent
while (nLeft > 0)
{
bytesSent = send(ConnectSocket, &sendbuf[idx], nLeft, 0);
if (bytesSent == SOCKET_ERROR)
{
std::cerr<<"send() error: " << WSAGetLastError() <<std::endl;
break;
}
nLeft -= bytesSent;
idx += bytesSent;
}
std::cout<<"Client: Bytes sent:"<< bytesSent;
Server code:
int bytesSent;
char sendbuf[200] = "This string is a test data from server";
int bytesRecv;
int idx = 0;
uint32_t nlength;
int length_received = recv(m_socket,(char*)&nlength, 4, 0);//Data length info
char *recvbuf = new char[nlength];//dynamic memory allocation based on data length info
//code to make sure all data has been received
while (nlength > 0)
{
bytesRecv = recv(m_socket, &recvbuf[idx], nlength, 0);
if (bytesRecv == SOCKET_ERROR)
{
std::cerr<<"recv() error: " << WSAGetLastError() <<std::endl;
break;
}
idx += bytesRecv;
nlength -= bytesRecv;
}
cout<<"Server: Received complete data is:"<< recvbuf<<std::endl;
cout<<"Server: Received bytes are"<<bytesRecv<<std::endl;
WSACleanup();
system("pause");
delete[] recvbuf;
return 0;
}
You send 200 bytes from the client, unconditionally, but in the server you only receive the actual length of the string, and that length does not include the string terminator.
So first of all you don't receive all data that was sent (which means you will fill up the system buffers), and then you don't terminate the string properly (which leads to "garbage" output when trying to print the string).
To fix this, in the client only send the actual length of the string (the value of varSize), and in the receiving server allocate one more character for the terminator, which you of course need to add.
First question: I am confused about buffers in TCP. I read this documentation, TCP Buffer; the author says a lot about the TCP buffer, and it is a really good explanation for a beginner. What I need to know is: is this TCP buffer the same buffer we use in our basic client-server program (char buffer[some_size]), or is it a different buffer held internally by TCP?
When you call send(), the TCP stack will copy some of the bytes out of your char array into an in-kernel buffer, and send() will return the number of bytes that it copied. The TCP stack will then handle the transmission of those in-kernel bytes to its destination across the network as quickly as it can. It's important to note that send()'s return value is not guaranteed to be the same as the number of bytes you specified in the length argument you passed to it; it could be less. It's also important to note that sends()'s return value does not imply that that many bytes have arrived at the receiving program; rather it only indicates the number of bytes that the kernel has accepted from you and will try to deliver.
Likewise, recv() merely copies some bytes from an in-kernel buffer to the array you specify, and then drops them from the in-kernel buffer. Again, the number of bytes copied may be less than the number you asked for, and generally will be different from the number of bytes passed by the sender on any particular call of send(). (E.g if the sender called send() and his send() returned 1000, that might result in you calling recv() twice and having recv() return 500 each time, or recv() might return 250 four times, or (1, 990, 9), or any other combination you can think of that eventually adds up to 1000)
My second question: I am sending string data with a length prefix ("This is data From me") from the client over a socket to the server. When I print my data at the console, along with my string it prints some garbage values, like "This is data From me zzzzzz 1/2 1/2.....". However, I fixed it by right-shifting nlength by 3 bits in char *recvbuf = new char[nlength>>3];, but why do I need to do it this way?
Like Joachim said, this happens because C strings depend on the presence of a NUL terminator byte (i.e. a zero byte) to indicate their end. You are receiving strlen(sendbuf) bytes, and the value returned by strlen() does not include the NUL byte. When the receiver's string-printing routine tries to print the string, it keeps printing until it finds a NUL byte (by chance) somewhere later on in memory; in the meantime, you get to see all the random bytes that are in memory before that point. To fix the problem, either increase your sent-bytes counter to (strlen(sendbuf)+1), so that the NUL terminator byte gets received as well, or alternatively have your receiver manually place the NUL byte at the end of the string after it has received all of the bytes of the string. Either way is acceptable (the latter way might be slightly preferable, as that way the receiver isn't depending on the sender to do the right thing).
Note that if your sender is going to always send 200 bytes rather than just the number of bytes in the string, then your receiver will need to always receive 200 bytes if it wants to receive more than one block; otherwise when it tries to receive the next block it will first get all the extra bytes (after the string) before it gets the next block's send-length field.
My third question is related to the first: if there is nothing like a TCP buffer and it is only about char buffer[some_size], what difference will my program notice between using a statically allocated buffer and a dynamically allocated one (char *recvbuf = new char[nlength];)? In short, which is best, and why?
In terms of performance, it makes no difference at all. send() and recv() don't care a bit whether the pointers you pass to them point at the heap or the stack.
In terms of design, there are some tradeoffs: if you use new, there is a chance that you can leak memory if you don't always call delete[] when you're done with the buffer. (This can particularly happen when exceptions are thrown, or when error paths are taken). Placing the buffer on the stack, on the other hand, is guaranteed not to leak memory, but the amount of space available on the stack is finite so a really huge array could cause your program to run out of stack space and crash. In this case, a single 200-byte array on the stack is no problem, so that's what I would use.
What variable type should be used for lpBuffer of C++ ReadFile and WriteFile functions to communicate between a Windows XP based PC and a micro-controller based system? The PC has WinForm application in VS2010 C++/CLI. The micro-controller firmware is ANSI C.
My PC is supposed to transmit command characters (say 'S', 'C' etc) followed by command termination character 0xd (hex for decimal 13). The micro-controller based system would respond with 5 to 10 bytes that would be mix of ASCII characters and hex numbers e.g. 'V' followed by 0x41 0x72 etc.
PC transmits and micro-controller receives:
TxMessage, PP1 and pTx declared as char and keeping nNumberOfBytesToWrite as 2, makes the micro-controller receive 0x53 for 'S' followed by 0xC3 instead of 0xd.
TxMessage, PP1 and pTx declared as wchar_t and keeping nNumberOfBytesToWrite as 2, makes the micro-controller receive 0x53 for 'S' only.
TxMessage, PP1 and pTx declared as wchar_t and keeping nNumberOfBytesToWrite as 4, makes the micro-controller receive 0x53 for 'S' followed by 0xd correctly.
The third scheme above meets my expected behavior. But here is the confusion: although the PC might be transmitting 4 bytes (for two wchar_t values), the micro-controller receives 2 bytes, 0x53 for 'S', correctly followed by 0xD.
Micro-controller transmits and PC receives:
Assuming that wchar_t is the right choice for lpBuffer, what should be my nNumberOfBytesToRead for receiving 10 bytes from the micro-controller? ReadFile would expect 20 bytes by virtue of wchar_t, whereas the micro-controller would transmit 10 bytes only.
Amazingly, irrespective of declaring RxMessage, PP2 and pRx as wchar_t, char or unsigned char, ReadFile receives 10 bytes from the micro-controller (which meets my expected behavior). But the issue is that when the micro-controller transmits 'A' 10 times, ReadFile on the PC's end receives junk like 'S', 0x0, 0xd, 0x54, 0x29.
/// Required designer variable.
HANDLE hCommPort;
BOOL fSuccess;
array<wchar_t> ^ TxMessage;
array<unsigned char> ^ RxMessage;
TxMessage = gcnew array<wchar_t> (12);
RxMessage = gcnew array<unsigned char> (12);
{
TxMessage[0]='S';//target cmd
TxMessage[1]=0xd;//cmd termination character
DWORD dwhandled;
if (hCommPort != INVALID_HANDLE_VALUE)
{
pin_ptr<wchar_t> pp1 = &TxMessage[0];
wchar_t *pTx = pp1;
fSuccess = WriteFile(hCommPort, pTx, 4, &dwhandled, NULL);
PurgeComm(hCommPort, PURGE_RXABORT|PURGE_TXABORT|PURGE_RXCLEAR|PURGE_TXCLEAR);
pin_ptr<unsigned char> pp2 = &RxMessage[0];
unsigned char *pRx = pp2;
fSuccess = ReadFile(hCommPort, pRx, 10, &dwhandled, NULL);
}//if IsOpen
else{
this->toolStripStatusLabel4->Text="Port Not Opened";}
}
ReadFile/WriteFile do not care about C++ types; they operate in terms of bytes read/written. ReadFile reads the specified number of bytes (or fewer, if there are fewer bytes to read) from the file/device and puts them into the memory pointed to by lpBuffer. WriteFile writes the specified number of bytes to the file/device from the memory pointed to by lpBuffer. The memory buffer for these functions is simply a region of allocated memory that is at least as many bytes long as you tell those functions in the third parameter.
wchar_t is a multi-byte type. Its size can be bigger than one byte. Consequently, your TxMessage[0]='S'; TxMessage[1]=0xd; can actually fill not two bytes in memory but, say, four bytes: for example, 0x0053, 0x000D in the wchar_t representation. WriteFile does not care how and what you put into that memory; it reads the raw memory and writes it to the device. So if your device expects 0x53 0x0D, it might instead be getting 0x00 0x53.
Overall, think in bytes. If you need to write the 4 bytes 0x0A 0x0B 0x0C 0x0D to your device, it does not matter HOW you allocated the buffer holding this value. It can be a 4-byte unsigned int, it can be char[4] = {0x0A, 0x0B, 0x0C, 0x0D}, it can be short int[2] = {0x0A0B, 0x0C0D}; it can be ANY C++ type, including a custom class. But the first 4 bytes of the memory pointed to by the pointer passed to WriteFile should be 0x0A 0x0B 0x0C 0x0D.
Similarly, ReadFile will read the number of bytes you specify. If your device sends you, say, 2 bytes, ReadFile will put the 2 bytes it gets into the memory pointed to by the pointer you pass in (and it is your responsibility to ensure enough bytes are allocated). Again, it does not care how you allocated that memory, as long as 2 bytes fit. Afterwards, you can view those two bytes however you want: as char[2], as a short int, and so on.
Using bytes is the natural match; serial ports are fundamentally byte-oriented devices. You could make your transmitting code look like this:
bool SendCommand(HANDLE hCommPort) {
auto TxMessage = gcnew array<Byte> { 'S', '\r' };
pin_ptr<Byte> pbytes = &TxMessage[0];
DWORD bytesSent = 0;
BOOL fSuccess = WriteFile(hCommPort, pbytes, TxMessage->Length,
&bytesSent, NULL);
return fSuccess && bytesSent == TxMessage->Length;
}
Your receiving code needs to do more work; the number of bytes you get back from ReadFile() is unpredictable. A protocol is required to indicate when you should stop reading. A fixed-length response is common, or a special last character is very common. That could look like this:
bool ReceiveResponse(HANDLE hCommPort, array<Byte>^ RxMessage, int% RxCount) {
for (; RxCount < RxMessage->Length; ) {
DWORD bytesReceived = 0;
pin_ptr<Byte> pbytes = &RxMessage[RxCount]; // append after the bytes already read
BOOL fSuccess = ReadFile(hCommPort, pbytes, RxMessage->Length - RxCount,
&bytesReceived, NULL);
if (!fSuccess) return false;
int rxStart = RxCount;
RxCount += bytesReceived;
for (int ix = rxStart; ix < RxCount; ++ix) {
if (RxMessage[ix] == '\r') return true;
}
}
return true;
}
Don't overlook the .NET System::IO::Ports::SerialPort class. It has built-in Encoding support which makes it easier to work with characters vs bytes. Your ReceiveResponse() method could collapse to a simple ReadLine() call with the NewLine property set correctly.
Apologies for not being able to respond earlier than this. In fact, the problem was identified and corrected after posting my last comment. It was related to the 9-bit protocol being forced by the micro-controller. The original micro firmware uses a 9-bit protocol to address one of many slaves. For development testing, I had temporarily modified it to an 8-bit protocol. Unfortunately, the modifications missed the UART mode register, which remained in 9-bit mode. With a boxed/biased mind I kept debugging.
I am using named pipes configured as a single byte stream to send serialized data structures between two applications. The serialized data changes in size quite dramatically. On the sending side this is not a problem; I can adjust the number of bytes to send exactly.
How can I set the buffer on the receiving (reading) end to the exact number of bytes to read? Is there a way to know how big the data on the sending (writing) side is?
I have looked at PeekNamedPipe, but the function appears useless for byte-type named pipes:
lpBytesLeftThisMessage [out, optional]
A pointer to a variable that receives the number of bytes remaining in this message. This parameter will be zero for byte-type named pipes or for anonymous pipes. This parameter can be NULL if no data is to be read.
http://msdn.microsoft.com/en-us/library/windows/desktop/aa365779(v=vs.85).aspx
How does one best handle such a situation if the exact required buffer size cannot be determined?
Sending Code
string strData;
strData = "ShortLittleString";
DWORD numBytesWritten = 0;
result = WriteFile(
pipe, // handle to our outbound pipe
strData.c_str(), // data to send
strData.length(), // length of data to send (bytes)
&numBytesWritten, // will store actual amount of data sent
NULL // not using overlapped IO
);
Reading Code:
DWORD numBytesToRead0 = 0;
DWORD numBytesToRead1 = 0;
DWORD numBytesToRead2 = 0;
BOOL result = PeekNamedPipe(
pipe,
NULL,
42,
&numBytesToRead0,
&numBytesToRead1,
&numBytesToRead2
);
char * buffer ;
buffer = new char[numBytesToRead2];
char data[1024]; //1024 is way too big and numBytesToRead2 is always 0
DWORD _numBytesRead = 0;
BOOL result = ReadFile(
pipe,
data, // the data from the pipe will be put here
1024, // number of bytes allocated
&_numBytesRead, // this will store number of bytes actually read
NULL // not using overlapped IO
);
In the code above, buffer is always of size 0, as the PeekNamedPipe function returns 0 for all the numBytesToRead variables. Is there a way to set this buffer size exactly? If not, what is the best way to handle such a situation? Thanks for any help!
Why do you think you cannot use lpTotalBytesAvail to get the size of the sent data? It always works for me in byte mode. If it's always zero, you have probably done something wrong. I also suggest using a std::vector as the data buffer; it's much safer than juggling raw pointers and new.
lpTotalBytesAvail [out, optional] A pointer to a variable that receives the total number of bytes available to be read from the pipe. This parameter can be NULL if no data is to be read.
Sample code:
// Get data size available from pipe
DWORD bytesAvail = 0;
BOOL isOK = PeekNamedPipe(hPipe, NULL, 0, NULL, &bytesAvail, NULL);
if(!isOK)
{
// Check GetLastError() code
}
// Allocate buffer and peek data from pipe
DWORD bytesRead = 0;
std::vector<char> buffer(bytesAvail);
isOK = PeekNamedPipe(hPipe, &buffer[0], bytesAvail, &bytesRead, NULL, NULL);
if(!isOK)
{
// Check GetLastError() code
}
Well, you are using ReadFile(). The documentation says, among other things:
If a named pipe is being read in message mode and the next message is longer
than the nNumberOfBytesToRead parameter specifies, ReadFile returns FALSE and
GetLastError returns ERROR_MORE_DATA. The remainder of the message can be read
by a subsequent call to the ReadFile or PeekNamedPipe function.
Did you try that? I've never used a pipe like this :-), only used them to get to the stdin/out handles of a child process.
I'm assuming that the above can be repeated as often as necessary, making the "remainder of the message" a somewhat inaccurate description: I think if the "remainder" doesn't fit into your buffer you'll just get another ERROR_MORE_DATA so you know to get the remainder of the remainder.
Or, if I'm completely misunderstanding you and you're not actually using this "message mode" thing: maybe you are just reading things the wrong way. You could just use a fixed size buffer to read data into and append it to your final block, until you've reached the end of the data. Or optimize this a bit by increasing the size of the "fixed" size buffer as you go along.