First question: I am confused about buffers in TCP. I am trying to explain my problem: I read this documentation, TCP Buffer, and the author says a lot about the TCP buffer; that's fine and a really good explanation for a beginner. What I need to know is whether this TCP buffer is the same buffer as the one we use in our basic client/server program (char *buffer[Some_Size]) or some different buffer held by TCP internally?
My second question is that I am sending string data with a length prefix ("This is data From me") from the client over a socket to the server. When I print my data at the console, along with my string it also prints some garbage values, like this: "This is data From me zzzzzz 1/2 1/2.....". However, I fixed it by right-shifting nlength by 3 bits, char *recvbuf = new char[nlength>>3]; but why do I need to do it this way?
My third question relates to the first: if there is no such thing as a TCP buffer and it is only about the char *buffer[some_size], then what difference will my program notice between using such a statically allocated buffer and using a dynamically allocated buffer with char *recvbuf = new char[nlength];? In short, which is best and why?
Client code:
int bytesSent;
int bytesRecv = SOCKET_ERROR;
char sendbuf[200] = "This is data From me";
int nBytes = 200, nLeft, idx;
nLeft = nBytes;
idx = 0;
uint32_t varSize = strlen (sendbuf);
bytesSent = send(ConnectSocket,(char*)&varSize, 4, 0);
assert (bytesSent == sizeof (uint32_t));
std::cout<<"length information is in:"<<bytesSent<<"bytes"<<std::endl;
// code to make sure all data has been sent
while (nLeft > 0)
{
    bytesSent = send(ConnectSocket, &sendbuf[idx], nLeft, 0);
    if (bytesSent == SOCKET_ERROR)
    {
        std::cerr << "send() error: " << WSAGetLastError() << std::endl;
        break;
    }
    nLeft -= bytesSent;
    idx += bytesSent;
}
std::cout<<"Client: Bytes sent:"<< bytesSent;
Server code:
int bytesSent;
char sendbuf[200] = "This string is a test data from server";
int bytesRecv;
int idx = 0;
uint32_t nlength;
int length_received = recv(m_socket,(char*)&nlength, 4, 0);//Data length info
char *recvbuf = new char[nlength];//dynamic memory allocation based on data length info
//code to make sure all data has been received
while (nlength > 0)
{
    bytesRecv = recv(m_socket, &recvbuf[idx], nlength, 0);
    if (bytesRecv == SOCKET_ERROR)
    {
        std::cerr << "recv() error: " << WSAGetLastError() << std::endl;
        break;
    }
    idx += bytesRecv;
    nlength -= bytesRecv;
}
cout<<"Server: Received complete data is:"<< recvbuf<<std::endl;
cout<<"Server: Received bytes are"<<bytesRecv<<std::endl;
WSACleanup();
system("pause");
delete[] recvbuf;
return 0;
}
You send 200 bytes from the client, unconditionally, but in the server you only receive the actual length of the string, and that length does not include the string terminator.
So first of all you don't receive all data that was sent (which means you will fill up the system buffers), and then you don't terminate the string properly (which leads to "garbage" output when trying to print the string).
To fix this, in the client only send the actual length of the string (the value of varSize), and in the receiving server allocate one more character for the terminator, which you of course need to add yourself.
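A minimal sketch of that fix, reusing the names from the code above (error checking omitted):
// Client: prefix with the real length, then send only that many bytes.
uint32_t varSize = strlen(sendbuf);
send(ConnectSocket, (char*)&varSize, 4, 0);
nLeft = varSize;                  // instead of nBytes (200)
// ... existing send loop unchanged ...

// Server: allocate one extra byte and add the terminator yourself.
char *recvbuf = new char[nlength + 1];
// ... existing recv loop unchanged ...
recvbuf[nlength] = '\0';          // now printing recvbuf is safe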
First question: I am confused about buffers in TCP. I am trying to explain my problem: I read this documentation, TCP Buffer, and the author says a lot about the TCP buffer; that's fine and a really good explanation for a beginner. What I need to know is whether this TCP buffer is the same buffer as the one we use in our basic client/server program (char *buffer[Some_Size]) or some different buffer held by TCP internally?
When you call send(), the TCP stack will copy some of the bytes out of your char array into an in-kernel buffer, and send() will return the number of bytes that it copied. The TCP stack will then handle the transmission of those in-kernel bytes to their destination across the network as quickly as it can. It's important to note that send()'s return value is not guaranteed to be the same as the number of bytes you specified in the length argument you passed to it; it could be less. It's also important to note that send()'s return value does not imply that that many bytes have arrived at the receiving program; rather it only indicates the number of bytes that the kernel has accepted from you and will try to deliver.
Likewise, recv() merely copies some bytes from an in-kernel buffer to the array you specify, and then drops them from the in-kernel buffer. Again, the number of bytes copied may be less than the number you asked for, and generally will be different from the number of bytes passed by the sender on any particular call of send(). (E.g. if the sender called send() and that send() returned 1000, that might result in you calling recv() twice and having recv() return 500 each time, or recv() might return 250 four times, or (1, 990, 9), or any other combination you can think of that eventually adds up to 1000.)
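A minimal sketch of what that means for the receiver (the socket handle sock and the 1000-byte total are just illustrative): to get all 1000 bytes handed to one send() call, the receiver may have to call recv() several times and accumulate the results.
char chunk[512];
std::string assembled;
while (assembled.size() < 1000)        // 1000 = the total we expect, per the example above
{
    int n = recv(sock, chunk, sizeof chunk, 0);
    if (n <= 0)                        // 0: the peer closed the connection, < 0: error
        break;
    assembled.append(chunk, n);        // may take one call or many to add up to 1000
}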
My second question is that I am sending string data with a length prefix ("This is data From me") from the client over a socket to the server. When I print my data at the console, along with my string it also prints some garbage values, like this: "This is data From me zzzzzz 1/2 1/2.....". However, I fixed it by right-shifting nlength by 3 bits, char *recvbuf = new char[nlength>>3]; but why do I need to do it this way?
Like Joachim said, this happens because C strings depend on the presence of a NUL-terminator byte (i.e. a zero byte) to indicate their end. You are receiving strlen(sendbuf) bytes, and the value returned by strlen() does not include the NUL byte. When the receiver's string-printing routine tries to print the string, it keeps printing until it finds a NUL byte (by chance) somewhere later on in memory; in the meantime, you get to see all the random bytes that are in memory before that point. To fix the problem, either increase your sent-bytes counter to (strlen(sendbuf)+1), so that the NUL terminator byte gets received as well, or alternatively have your receiver manually place the NUL byte at the end of the string after it has received all of the bytes of the string. Either way is acceptable (the latter way might be slightly preferable, as that way the receiver isn't depending on the sender to do the right thing).
Note that if your sender is going to always send 200 bytes rather than just the number of bytes in the string, then your receiver will need to always receive 200 bytes if it wants to receive more than one block; otherwise when it tries to receive the next block it will first get all the extra bytes (after the string) before it gets the next block's send-length field.
My third question relates to the first: if there is no such thing as a TCP buffer and it is only about the char *buffer[some_size], then what difference will my program notice between using such a statically allocated buffer and using a dynamically allocated buffer with char *recvbuf = new char[nlength];? In short, which is best and why?
In terms of performance, it makes no difference at all. send() and recv() don't care one bit whether the pointers you pass to them point at the heap or the stack.
In terms of design, there are some tradeoffs: if you use new, there is a chance that you can leak memory if you don't always call delete[] when you're done with the buffer. (This can particularly happen when exceptions are thrown, or when error paths are taken). Placing the buffer on the stack, on the other hand, is guaranteed not to leak memory, but the amount of space available on the stack is finite so a really huge array could cause your program to run out of stack space and crash. In this case, a single 200-byte array on the stack is no problem, so that's what I would use.
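A small sketch of the two options discussed (sizes taken from the question); std::vector is not part of the answer above, but it is a common way to get heap storage without the delete[] pitfall:
char stackBuf[200];                  // stack: cannot leak, but stack space is limited
char *heapBuf = new char[nlength];   // heap: any size, but you must remember to delete[] it
// ... use heapBuf ...
delete[] heapBuf;

std::vector<char> autoBuf(nlength);  // heap storage that cleans up after itself (RAII)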
Related
I am new to socket programming. I have a client application written in C++ that connects to a camera. The camera then sends packets of frames in chunks of between 0 and 1460 bytes. I have used the recv function to receive the packets. I saw so many questions in which it was clearly mentioned that the recv function returns the bytes received, but in my case recv returns the value written in the 3rd parameter of recv, i.e. len. So is there any way through which I can find the actual number of bytes received?
I even tried to use char*, but that did not work either.
So, anyone please tell me the solution. Any help will be appreciated. Thanks in advance.
char *buf = new char[1461];
int bytes = recv(sock, buf, 2000, 0);
printf("%d", bytes);
That always prints 2000.
Because of that, after the valid bytes in buf there are unknown bytes, which results in an unexpected image.
First of all, your code has a bug (which leads to undefined behavior).
You have allocated 1461 bytes but you are trying to read more than that:
It should go like this:
std::vector<char> buf(5000); // you are using C++, not C
int bytes = recv(sock, buf.data(), buf.size(), 0);
std::cout << bytes;
Secondly, the result is as expected. The camera sends much more data than 2000 bytes, so I'm not surprised that the number of bytes read covers the whole requested size.
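If the goal is to read exactly one chunk whose size is known up front, a hedged sketch would loop until that many bytes have arrived (frameSize is an assumption here; it has to come from your camera's protocol):
std::vector<char> frame(frameSize);
size_t got = 0;
while (got < frame.size())
{
    int n = recv(sock, frame.data() + got, (int)(frame.size() - got), 0);
    if (n <= 0)
        break;              // error or connection closed
    got += n;               // `got` is the real number of bytes received so far
}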
I'm writing a distributed system in C++ using TCP/IP and sockets.
For each of my messages, I need to receive the first 5 bytes to know the full length of the incoming message.
What's the best way to do this?
1. recv() only 5 bytes, then recv() again. If I choose this, would it be safe to assume I'll get 0 or 5 bytes in the recv (i.e. not write a loop to keep trying)?
2. Use MSG_PEEK.
3. recv() into some larger buffer, then read the first 5 bytes and allocate the final buffer then.
You don't need to know anything. TCP is a stream protocol, and at any given moment you can get as little as one byte, or as much as multiple megabytes of data. The correct and only way to use a TCP socket is to read in a loop.
char buf[4096]; // or whatever
std::deque<char> data;
for (int res ; ; )
{
    res = recv(fd, buf, sizeof buf, MSG_DONTWAIT);
    if (res == -1)
    {
        if (errno == EAGAIN || errno == EWOULDBLOCK)
        {
            break; // done reading
        }
        else
        {
            // error, break, die
        }
    }
    if (res == 0)
    {
        // socket closed, finalise, break
    }
    else
    {
        data.insert(data.end(), buf, buf + res);
    }
}
The only purpose of the loop is to transfer data from the socket buffer to your application. Your application must then decide independently if there's enough data in the queue to attempt extraction of some sort of higher-level application message.
For example, in your case you would check if the queue's size is at least 5, then inspect the first five bytes, and then check if the queue holds a complete application message. If not, you stop and wait for more data; if yes, you extract the entire message and pop it off the front of the queue.
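As a rough sketch of that extraction step (continuing with the std::deque<char> data from the loop above; the 5-byte header layout is an assumption here: a 4-byte little-endian payload length followed by one type byte):
while (data.size() >= 5)
{
    unsigned char hdr[5];
    std::copy(data.begin(), data.begin() + 5, hdr);
    std::size_t payloadLen = hdr[0] | (hdr[1] << 8) | (hdr[2] << 16)
                           | (std::size_t(hdr[3]) << 24);

    if (data.size() < 5 + payloadLen)
        break;                                           // incomplete message, wait for more data

    std::vector<char> message(data.begin() + 5, data.begin() + 5 + payloadLen);
    data.erase(data.begin(), data.begin() + 5 + payloadLen);
    // ... hand `message` to the application here ...
}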
Use a state machine with two states:
First State.
Receive bytes as they arrive into a buffer. When there are 5 or more bytes perform your check on those first 5 and possibly process the rest of the buffer. Switch to the second state.
Second State.
Receive and process bytes as they arrive to the end of the message.
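A minimal sketch of that two-state receiver; parseHeader and handleMessage are assumed helpers, not functions from the question:
enum State { ReadingHeader, ReadingBody };

std::size_t parseHeader(const char *hdr);              // assumed helper: decode the 5-byte length prefix
void handleMessage(const char *msg, std::size_t len);  // assumed helper: consume one complete message

State state = ReadingHeader;
std::vector<char> pending;      // bytes accumulated across recv() calls
std::size_t bodyLength = 0;     // length decoded from the 5-byte header

void onBytes(const char *buf, std::size_t n)   // feed every successful recv() result in here
{
    pending.insert(pending.end(), buf, buf + n);
    for (;;)
    {
        if (state == ReadingHeader)
        {
            if (pending.size() < 5) return;              // first state: wait until the header is complete
            bodyLength = parseHeader(pending.data());
            pending.erase(pending.begin(), pending.begin() + 5);
            state = ReadingBody;
        }
        else
        {
            if (pending.size() < bodyLength) return;     // second state: wait until the message is complete
            handleMessage(pending.data(), bodyLength);
            pending.erase(pending.begin(), pending.begin() + bodyLength);
            state = ReadingHeader;
        }
    }
}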
To answer your question specifically:
1. It's not safe to assume you'll get 0 or 5; it is possible to get 1-4 as well. Loop until you get 5 bytes or an error, as others have suggested.
2. I wouldn't bother with MSG_PEEK; most of the time you'll block (assuming blocking calls) or get 5, so skip the extra call into the stack.
3. This is fine too, but it adds complexity for little gain.
I have a problem: my server application sends a packet 8 bytes long - AABBCC1122334455 - but my application receives this packet in two parts, AABBCC1122 and 334455, via the "recv" function. How can I fix that?
Thanks!
To sum up a little bit:
A TCP connection doesn't operate with packets or messages at the application level; you're dealing with a stream of bytes. From this point of view it's similar to writing to and reading from a file.
Both send and recv can send and receive less data than requested in the argument. You have to deal with this correctly (usually by applying a proper loop around the call).
As you're dealing with streams, you have to find a way to convert them to meaningful data in your application. In other words, you have to design a serialisation protocol.
From what you've already mentioned, you most probably want to send some kind of messages (well, it's usually what people do). The key thing is to discover the boundaries of messages properly. If your messages are of fixed size, you simply grab the same amount of data from the stream and translate it to your message; otherwise, you need a different approach:
If you can come up with a character which cannot exist in your message, it could be your delimiter. You can then read the stream until you reach the character and it'll be your message. If you transfer ASCII characters (strings) you can use zero as a separator.
If you transfer binary data (raw integers etc.), all characters can appear in your message, so nothing can act as a delimiter. Probably the most common approach in this case is to use a fixed-size prefix containing the size of your message. The size of this extra field depends on the maximum size of your message (you will probably be safe with 4 bytes, but if you know the maximum size, you can use a smaller field). Then your packet would look like SSSS|PPPPPPPPP... (a stream of bytes), where S is the additional size field and P is your payload (the real message in your application; the number of P bytes is determined by the value of S). You know every packet starts with 4 special bytes (the S bytes), so you can read them as a 32-bit integer. Once you know the size of the encapsulated message, you read all the P bytes. After you're done with one packet, you're ready to read another one from the socket.
The good news, though, is that you can come up with something completely different. All you need to know is how to deserialise your messages from a stream of bytes and how send/recv behave. Good luck!
EDIT:
Example of a function that receives an arbitrary number of bytes into an array:
bool recv_full(int sock, char *buffer, size_t size)
{
    size_t received = 0;
    while (received < size)
    {
        ssize_t r = recv(sock, buffer + received, size - received, 0);
        if (r <= 0) break;
        received += r;
    }
    return received == size;
}
And an example of receiving a packet with a 2-byte prefix defining the size of the payload (the payload is then limited to 65535 bytes):
uint16_t msgSize = 0;
char msg[0xffff];
if (recv_full(sock, reinterpret_cast<char *>(&msgSize), sizeof(msgSize)) &&
    recv_full(sock, msg, msgSize))
{
    // Got the message in the msg array
}
else
{
    // Something bad happened to the connection
}
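The same looping logic applies to the sending side, as mentioned in the summary above; a mirror-image sketch of recv_full:
bool send_full(int sock, const char *buffer, size_t size)
{
    size_t sent = 0;
    while (sent < size)
    {
        ssize_t s = send(sock, buffer + sent, size - sent, 0);
        if (s <= 0) break;
        sent += s;
    }
    return sent == size;
}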
That's just how recv() works on most platforms. You have to check the number of bytes you receive and continue calling it in a loop until you get the number that you need.
You "fix" that by reading from TCP socket in a loop until you get enough bytes to make sense to your application.
my server application sends a packet 8 bytes long
Not really. Your server sends 8 individual bytes, not a packet 8 bytes long. TCP data is sent over a byte stream, not a packet stream. TCP neither respects nor maintains any "packet" boundary that you might have in mind.
If you know that your data is provided in quanta of N bytes, then call recv in a loop:
std::vector<char> read_packet(int N) {
    std::vector<char> buffer(N);
    int total = 0, count;
    while ( total < N && (count = recv(sock_fd, &buffer[total], N - total, 0)) > 0 )
        total += count;
    return buffer;
}

std::vector<char> packet = read_packet(8);
If your packet is variable length, try sending its length before the data itself:
int read_int() {
    std::vector<char> buffer = read_packet(sizeof (int));
    int result;
    memcpy((void*)&result, (void*)&buffer[0], sizeof(int));
    return result;
}

int length = read_int();
std::vector<char> data = read_packet(length);
I'm having a problem with unix local sockets. While reading a message that's longer than my temp buffer size, the request takes too long (maybe indefinitely).
Added after some tests:
There is still a problem with the freeze at ::recv. When I send (1023*8) bytes or less to the UNIX socket, everything is OK, but when I send more than (1023*9), I get a freeze on the recv call.
Maybe it's a FreeBSD default UNIX socket limit or a default socket setting? Who knows?
I made some additional tests and I am 100% sure that it "freezes" on the last (9th) iteration of the ::recv call, when trying to read a message >= (1023*9) bytes long. (The first 8 iterations go fine.)
What I'm doing:
The idea is to read in a do/while loop from a socket with
::recv (current_socket, buf, 1024, 0);
and check buf for a SPECIAL SYMBOL. If not found:
append the content of the buffer to a string: xxx += buf;
bzero temp buf
continue the ::recv loop
How do I fix the issue with the request taking too long in the while loop?
Is there a better way to clear the buffer? Currently, it's:
char buf [1025];
bzero(buf, 1025);
But I know bzero is deprecated in the new C++ standard.
EDIT:
Why the buffer needs to be cleared:
I see questions about this in the comments. Without buffer cleanup, on the next (last) iteration of reading into the buffer, it will still contain the "tail" of the first part of the message.
Example:
// message at the socket is "AAAAAACDE"
char buf [6];
::recv (current_socket, buf, 6, 0); // read 6 symbols, buf = "AAAAAA"
// no cleanup, read the last part of the message with recv
::recv (current_socket, buf, 6, 0);
// asks for 6 symbols, but only the 3 remaining symbols are read, therefore
// buf now contains "CDEAAA" (not correct, we expect CDE only)
When your recv() enters an infinite loop, this probably means that it's not making any progress whatsoever on the iterations (i.e., you're always getting a short read of zero size immediately, so your loop never exits, because you're not getting any data). For stream sockets, a recv() that returns zero means that the remote end has disconnected (much like read()ing from a file positioned at EOF, which also gets you zero bytes), or at least that it has shut down the sending channel (that's for TCP specifically).
Check whether your PHP script is actually sending the amount of data you claim it sends.
To add a small (nonsensical) example of properly using recv() in a loop:
char buf[1024];
std::string data;

while( data.size() < 10000 ) { // what you wish to receive
    ::ssize_t rcvd = ::recv(fd, buf, sizeof(buf), 0);
    if( rcvd < 0 ) {
        std::cout << "Failed to receive\n"; // Receive failed - something broke, see errno.
        std::abort();
    } else if( !rcvd ) {
        break; // No data to receive, remote end closed connection, so quit.
    } else {
        data.append(buf, rcvd); // Received into buffer, attach to data buffer.
    }
}
if( data.size() < 10000 ) {
    std::cout << "Short receive, sender broken\n";
    std::abort();
}
// Do something with the buffer data.
Instead of bzero, you can just use
memset(buf, 0, 1025);
These are two separate issues. The long time is probably some infinite loop due to a bug in your code and has nothing to do with the way you clear your buffer. As a matter of fact you shouldn't need to clear the buffer at all; recv returns the number of bytes read, so you can scan the buffer for your SPECIAL_SYMBOL up to that point.
If you paste the code, maybe I can help more.
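A rough sketch of that approach (SPECIAL_SYMBOL stands in for the delimiter from the question; only the bytes recv() actually returned are scanned, so no bzero is needed):
char buf[1024];
std::string message;
for (;;)
{
    ssize_t n = ::recv(current_socket, buf, sizeof buf, 0);
    if (n <= 0)
        break;                                           // error or connection closed
    const char *end = (const char *)memchr(buf, SPECIAL_SYMBOL, n);
    if (end)
    {
        message.append(buf, end - buf);                  // everything before the delimiter
        break;                                           // complete message received
    }
    message.append(buf, n);                              // partial data, keep reading
}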
Just to clarify: bzero is not deprecated in C++11. Rather, it has never been part of any C or C++ standard. C started out with memset 20+ years ago. For C++, you might consider using std::fill_n instead (or just using std::vector, which can zero-fill automatically). Then again, I'm not sure there's a good reason to zero-fill the buffer in this case at all.
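For illustration, the alternatives mentioned above would look roughly like this:
char buf[1025];
std::fill_n(buf, sizeof buf, 0);   // standard C++ way to zero a raw array

std::vector<char> vbuf(1025);      // elements are zero-initialised automatically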
I am using the read function to read data from a socket, but when the data is more than 4K, read just reads part of the data, for example less than 4K. Here is the key code:
mSockFD = socket(AF_INET, SOCK_STREAM, 0);
if (connect(mSockFD, (const sockaddr*)(&mSockAdd), sizeof(mSockAdd)) < 0)
{
    cerr << "Error connecting in Crawl" << endl;
    perror("");
    return false;
}
n = write(mSockFD, httpReq.c_str(), httpReq.length());
bzero(mBuffer, BUFSIZE);
n = read(mSockFD, mBuffer, BUFSIZE);
Note that BUFSIZE is much larger than 4K.
When the data is just a few hundred bytes, read works as expected.
This is by design and to be expected.
The short answer to your question is you should continue calling "read" until you get all the data you expect. That is:
int total_bytes = 0;
int expected = BUFSIZE;
int bytes_read;
char *buffer = (char *)malloc(BUFSIZE + 1); // +1 for the null at the end

while (total_bytes < expected)
{
    bytes_read = read(mSockFD, buffer + total_bytes, BUFSIZE - total_bytes);
    if (bytes_read <= 0)
        break;
    total_bytes += bytes_read;
}
buffer[total_bytes] = 0; // null terminate - good for debugging as a string
From my experience, one of the biggest misconceptions (resulting in bugs) is that you'll receive as much data as you ask for. I've seen shipping code in real products written with the expectation that sockets work this way (and no one certain as to why it doesn't work reliably).
When the other side sends N bytes, you might get lucky and receive it all at once. But you should plan for receiving N bytes spread out across multiple recv calls. With the exception of a real network error, you'll eventually get all N bytes. Segmentation, fragmentation, TCP window size, MTU, and the socket layer's data chunking scheme are the reasons for all of this. When partial data is received, the TCP layer doesn't know about how much more is yet to come. It just passes what it has up to the app. It's up to the app to decide if it got enough.
Likewise, "send" calls can get coalesced together into the same packet.
There may be ioctls and such that will make a socket block until all the expected data is received. But I don't know of any off hand.
Also, don't use read and write for sockets. Use recv and send.
Read this book. It will change your life with regards to sockets and TCP: