I'm writing something server-client related, and I have this code snippet here:
char serverReceiveBuf[65536];
client->read(serverReceiveBuf, client->bytesAvailable());
handleConnection(serverReceiveBuf);
that reads data whenever the socket emits a readyRead() signal. Using bytesAvailable() is fine when I test on my local network since there's no latency, but when I deploy the program I want to make sure the entire message has been received before I call handleConnection().
I was thinking of ways to do this, but read and write only accept chars, so the largest message-size indicator I can send in one char is 127. I want the maximum size to be 65536, but the only way I can think of achieving that is to have a size-of-message-size variable first.
I reworked the code to look like this:
char serverReceiveBuf[65536];
char messageSizeBuffer[512];
int messageSize = 0, i = 0; //max value of messageSize = 65536
client->read(messageSizeBuffer,512);
while(i < 512 && (int)messageSizeBuffer[i] != 0){
    messageSize += (int)messageSizeBuffer[i];
    i++;
    //client will always send 512 bytes for size of message size
    //if message size < 512 bytes, rest of buffer will be 0
}
client->read(serverReceiveBuf, messageSize);
handleConnection(serverReceiveBuf);
but I'd like a more elegant solution if one exists.
It is a very common technique when sending messages over a stream to send a fixed-sized header before the message payload. This header can include many different pieces of information, but it always includes the payload size. In the simplest case, you can send the message size encoded as a uint16_t for a maximum payload size of 65535 (or uint32_t if that's not sufficient). Just make sure you handle byte ordering with ntohs and htons.
uint16_t messageSize;
client->read((char*)&messageSize, sizeof(uint16_t));
messageSize = ntohs(messageSize);
client->read(serverReceiveBuf, messageSize);
handleConnection(serverReceiveBuf);
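For completeness, here is a minimal sketch of the matching sender side, assuming a connected QTcpSocket* named socket and the payload held in a QByteArray (both names are illustrative; htons comes from <arpa/inet.h> on POSIX):
void sendMessage(QTcpSocket *socket, const QByteArray &payload)
{
    // Prefix the payload with its size as a big-endian uint16_t.
    uint16_t messageSize = htons(static_cast<uint16_t>(payload.size()));
    socket->write(reinterpret_cast<const char *>(&messageSize), sizeof(messageSize));
    socket->write(payload.constData(), payload.size());
}
Note that the reads above assume the header and payload have fully arrived; in practice you would buffer incoming data until bytesAvailable() covers the complete message before parsing it.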
read and write work with byte streams. It does not matter to them whether the bytes are chars or any other form of data. You can send a 4-byte integer by casting its address to char* and sending 4 bytes. On the receiving end, cast the 4 bytes back to an int. (If the machines are of different types you may also have endian issues, requiring the bytes to be rearranged into an int. See htonl and its cousins.)
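As a hedged illustration of that round trip over plain BSD sockets (htonl and ntohl come from <arpa/inet.h>; error handling and short reads are ignored for brevity):
// Sender: put the int into network byte order and send its 4 raw bytes.
uint32_t out = htonl(123456);
send(sock, reinterpret_cast<const char *>(&out), sizeof(out), 0);

// Receiver: read 4 bytes into an integer and convert back to host order.
uint32_t raw = 0;
recv(sock, reinterpret_cast<char *>(&raw), sizeof(raw), 0);
uint32_t value = ntohl(raw); // 123456 again
(A real receiver would loop until all four bytes have arrived, as other answers below show.)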
I am new to socket programming. I have a client application written in C++ that connects to a camera. The camera then sends packets of frame data in chunks of between 0 and 1460 bytes. I have used the recv function to receive the packets. I have seen many questions where it was clearly mentioned that recv returns the number of bytes received, but in my case recv returns the value written in its 3rd parameter, i.e. len. Is there any way I can find the actual number of bytes received?
I even tried to use char*, but that did not work.
So, can anyone please tell me the solution? Any help will be appreciated. Thanks in advance.
char *buf = new char[1461];
int bytes = recv(sock, buf, 2000, 0);
printf("%d", bytes);
That always prints 2000.
Because of that, there are unknown bytes after the valid bytes in buf, which results in an unexpected image.
First of all, your code has a bug (which leads to undefined behavior): you allocated 1461 bytes, but you are trying to read up to 2000 of them.
It should go like this:
std::vector<char> buf(5000); // you are using C++, not C
int bytes = recv(sock, buf.data(), buf.size(), 0);
std::cout << bytes;
Secondly, the result is as expected. The camera sends much more data than 2000 bytes, so it is no surprise that the number of bytes read covers the whole requested size.
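If you know how many bytes the current frame should contain, you can keep calling recv until you have them all. A rough sketch, where frameSize is an assumed, application-defined value:
std::vector<char> frame(frameSize);
size_t got = 0;
while (got < frame.size()) {
    int n = recv(sock, frame.data() + got, frame.size() - got, 0);
    if (n <= 0) break; // error or connection closed
    got += static_cast<size_t>(n);
}
// `frame' holds a complete frame only if got == frame.size()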
Good day.
I am sending a custom protocol for logging via TCP which looks like this:
Timestamp (uint32_t -> 4 bytes)
Length of message (uint8_t -> 1 byte)
Message (char -> Length of message)
The Timestamp is converted to BigEndian for the transport and everything goes out correctly, except for one little detail: Padding
The Timestamp is sent on its own; however, instead of just sending the timestamp (4 bytes), my application (using BSD sockets under Ubuntu) automatically appends two bytes of padding to the message.
Wireshark recognizes this correctly and marks the two extraneous bytes as padding, however the QTcpSocket (Qt 5.8, mingw 5.3.0) apparently assumes that the two extra bytes are actually payload, which obviously messes up my protocol.
Is there any way for me to 'teach' QTcpSocket to ignore the padding (like it should) or any way to get rid of the padding?
I'd like to avoid doing the whole 'create a sufficiently large buffer and preassemble the entire packet in it so it will be sent out in one go' method if possible.
Thank you very much.
Because it was asked, the code used to send the data is:
return
C->sendInt(entry.TS) &&
C->send(&entry.LogLen, 1) &&
C->send(&entry.LogMsg, entry.LogLen);
where sendInt is declared as (Src being the parameter):
Src = htonl(Src);
return send(&Src, 4);
where 'send' is declared as (Source and Len being the parameters):
char *Src = (char *)Source;
while(Len) {
int BCount = ::send(Sock, Src, Len, 0);
if(BCount < 1) return false;
Src += BCount;
Len -= BCount;
}
return true;
::send is the standard BSD send function.
Reading is done via QTcpSocket:
uint32_t timestamp;
if (Sock.read((char *)&timestamp, sizeof(timestamp)) > 0)
{
uint8_t logLen;
char message[256];
if (Sock.read((char *)&logLen, sizeof(logLen)) > 0 &&
logLen > 0 &&
Sock.read(message, logLen) == logLen
) addToLog(qFromBigEndian(timestamp), message);
}
Sock is the QTcpSocket instance, already connected to the host and addToLog is the processing function.
Also to be noted: the sending side needs to run on an embedded system; using QTcpServer is therefore not an option.
Your read logic appears to be incorrect. You have...
uint32_t timestamp;
if (Sock.read((char *)&timestamp, sizeof(timestamp)) > 0)
{
uint8_t logLen;
char message[256];
if (Sock.read((char *)&logLen, sizeof(logLen)) > 0 &&
logLen > 0 &&
Sock.read(message, logLen) == logLen
) addToLog(qFromBigEndian(timestamp), message);
}
From the documentation for QTcpSocket::read(data, MaxSize) it...
Reads at most maxSize bytes from the device into data, and returns the
number of bytes read
What if one of your calls to Sock.read reads partial data? You essentially discard that data rather than buffering it for reuse next time.
Assuming you have a suitably scoped QByteArray...
QByteArray data;
your reading logic should be more along the lines of...
/*
* Append all available data to `data'.
*/
data.append(Sock.readAll());
/*
* Now repeatedly read/trim messages from data until
* we have no further complete messages.
*/
while (contains_complete_log_message(data)) {
auto message = read_message_from_data(data);
data = data.right(data.size() - message.size());
}
/*
* At this point `data' may be non-empty but doesn't
* contain enough data for a complete message.
*/
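For this specific protocol (4-byte timestamp, 1-byte length, then the message), the two helpers above could be sketched as follows; these implementations are mine, not from the original answer, and assume the framing described in the question:
// A complete message needs a 5-byte header plus `logLen' payload bytes.
bool contains_complete_log_message(const QByteArray &data)
{
    if (data.size() < 5)
        return false;
    const quint8 logLen = static_cast<quint8>(data.at(4));
    return data.size() >= 5 + logLen;
}

// Precondition: contains_complete_log_message(data) returned true.
// Returns the framed bytes of the first message; the caller trims them off.
QByteArray read_message_from_data(const QByteArray &data)
{
    const quint8 logLen = static_cast<quint8>(data.at(4));
    return data.left(5 + logLen);
}
The caller can then decode the first four bytes with qFromBigEndian and hand the payload to addToLog.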
If the length of the padding is always fixed then just add socket->read(2); to ignore the 2 bytes.
On the other hand it might be just the tip of the iceberg. What are you using to read and write?
You should not invoke send three times but only once. For the conversion to big-endian you can use the Qt functions, write everything into a single buffer, and call send only once. It is not what you want, but I assume it is what you'll need to do, and it should be easy since you already know the size of your message. You also will not need to leave the Qt world to send the messages.
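A hedged sketch of that preassembly for the sender shown earlier (plain BSD-socket style, reusing the entry fields and the send wrapper from the question; the buffer size assumes LogLen is a uint8_t, and memcpy needs <cstring>):
// Assemble timestamp + length + message into one buffer, then send once.
char packet[4 + 1 + 255];
uint32_t ts = htonl(entry.TS);
memcpy(packet, &ts, 4);
packet[4] = static_cast<char>(entry.LogLen);
memcpy(packet + 5, &entry.LogMsg, entry.LogLen);
return C->send(packet, 5 + entry.LogLen);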
First question: I am confused about buffers in TCP. I will try to explain my problem: I read this documentation, TCP Buffer, where the author said a lot about the TCP buffer; that's fine and a really good explanation for a beginner. What I need to know is: is this TCP buffer the same buffer as the one we use in our basic client-server programs (char buffer[some_size]), or is it some different buffer held internally by TCP?
My second question: I am sending string data with a prefixed length ("This is data From me") from the client over a socket to the server. When I print my data at the console, along with my string it prints some garbage values, like this: "This is data From me zzzzzz 1/2 1/2.....". However, I fixed it by right shifting nlength by 3 bits in char *recvbuf = new char[nlength>>3]; but why do I need to do it this way?
My third question is related to the first: if there is nothing like a TCP buffer and it's only about the char buffer[some_size], then what difference will my program notice between using such a statically allocated buffer and using a dynamically allocated buffer via char *recvbuf = new char[nlength];? In short, which is best and why?
Client Code
int bytesSent;
int bytesRecv = SOCKET_ERROR;
char sendbuf[200] = "This is data From me";
int nBytes = 200, nLeft, idx;
nLeft = nBytes;
idx = 0;
uint32_t varSize = strlen (sendbuf);
bytesSent = send(ConnectSocket,(char*)&varSize, 4, 0);
assert (bytesSent == sizeof (uint32_t));
std::cout<<"length information is in:"<<bytesSent<<"bytes"<<std::endl;
// code to make sure all data has been sent
while (nLeft > 0)
{
bytesSent = send(ConnectSocket, &sendbuf[idx], nLeft, 0);
if (bytesSent == SOCKET_ERROR)
{
std::cerr<<"send() error: " << WSAGetLastError() <<std::endl;
break;
}
nLeft -= bytesSent;
idx += bytesSent;
}
std::cout<<"Client: Bytes sent:"<< bytesSent;
Server code:
int bytesSent;
char sendbuf[200] = "This string is a test data from server";
int bytesRecv;
int idx = 0;
uint32_t nlength;
int length_received = recv(m_socket,(char*)&nlength, 4, 0);//Data length info
char *recvbuf = new char[nlength];//dynamic memory allocation based on data length info
//code to make sure all data has been received
while (nlength > 0)
{
bytesRecv = recv(m_socket, &recvbuf[idx], nlength, 0);
if (bytesRecv == SOCKET_ERROR)
{
std::cerr<<"recv() error: " << WSAGetLastError() <<std::endl;
break;
}
idx += bytesRecv;
nlength -= bytesRecv;
}
cout<<"Server: Received complete data is:"<< recvbuf<<std::endl;
cout<<"Server: Received bytes are"<<bytesRecv<<std::endl;
WSACleanup();
system("pause");
delete[] recvbuf;
return 0;
}
You send 200 bytes from the client, unconditionally, but in the server you only receive the actual length of the string, and that length does not include the string terminator.
So first of all you don't receive all data that was sent (which means you will fill up the system buffers), and then you don't terminate the string properly (which leads to "garbage" output when trying to print the string).
To fix this, in the client only send the actual length of the string (the value of varSize), and in the receiving server allocate one more character for the terminator, which you of course need to add.
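A hedged sketch of both fixes, reusing the names from the question's code:
// Client: send only the actual string bytes, not the whole 200-byte array.
uint32_t varSize = strlen(sendbuf);
send(ConnectSocket, (char *)&varSize, 4, 0);
send(ConnectSocket, sendbuf, varSize, 0); // a real client keeps the send loop

// Server: allocate one extra byte and terminate after receiving.
// (The receive loop counts nlength down, so remember the total first.)
uint32_t total = nlength;
char *recvbuf = new char[total + 1];
// ... receive `total' bytes with the loop shown above ...
recvbuf[total] = '\0';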
First question: I am confused about buffers in TCP. I will try to explain my problem: I read this documentation, TCP Buffer, where the author said a lot about the TCP buffer; that's fine and a really good explanation for a beginner. What I need to know is: is this TCP buffer the same buffer as the one we use in our basic client-server programs (char buffer[some_size]), or is it some different buffer held internally by TCP?
When you call send(), the TCP stack will copy some of the bytes out of your char array into an in-kernel buffer, and send() will return the number of bytes that it copied. The TCP stack will then handle the transmission of those in-kernel bytes to their destination across the network as quickly as it can. It's important to note that send()'s return value is not guaranteed to be the same as the number of bytes you specified in the length argument you passed to it; it could be less. It's also important to note that send()'s return value does not imply that that many bytes have arrived at the receiving program; rather, it only indicates the number of bytes that the kernel has accepted from you and will try to deliver.
Likewise, recv() merely copies some bytes from an in-kernel buffer to the array you specify, and then drops them from the in-kernel buffer. Again, the number of bytes copied may be less than the number you asked for, and generally will be different from the number of bytes passed by the sender on any particular call of send(). (E.g. if the sender called send() and his send() returned 1000, that might result in you calling recv() twice and having recv() return 500 each time, or recv() might return 250 four times, or (1, 990, 9), or any other combination you can think of that eventually adds up to 1000.)
My second question: I am sending string data with a prefixed length ("This is data From me") from the client over a socket to the server. When I print my data at the console, along with my string it prints some garbage values, like this: "This is data From me zzzzzz 1/2 1/2.....". However, I fixed it by right shifting nlength by 3 bits in char *recvbuf = new char[nlength>>3]; but why do I need to do it this way?
Like Joachim said, this happens because C strings depend on the presence of a NUL-terminator byte (i.e. a zero byte) to indicate their end. You are receiving strlen(sendbuf) bytes, and the value returned by strlen() does not include the NUL byte. When the receiver's string-printing routine tries to print the string, it keeps printing until it finds a NUL byte (by chance) somewhere later on in memory; in the meantime, you get to see all the random bytes that are in memory before that point. To fix the problem, either increase your sent-bytes counter to (strlen(sendbuf)+1), so that the NUL terminator byte gets received as well, or alternatively have your receiver manually place the NUL byte at the end of the string after it has received all of the bytes of the string. Either way is acceptable (the latter way might be slightly preferable, as that way the receiver isn't depending on the sender to do the right thing).
Note that if your sender is going to always send 200 bytes rather than just the number of bytes in the string, then your receiver will need to always receive 200 bytes if it wants to receive more than one block; otherwise when it tries to receive the next block it will first get all the extra bytes (after the string) before it gets the next block's send-length field.
My third question is related to the first: if there is nothing like a TCP buffer and it's only about the char buffer[some_size], then what difference will my program notice between using such a statically allocated buffer and using a dynamically allocated buffer via char *recvbuf = new char[nlength];? In short, which is best and why?
In terms of performance, it makes no difference at all. send() and recv() don't care a bit whether the pointers you pass to them point at the heap or the stack.
In terms of design, there are some tradeoffs: if you use new, there is a chance that you can leak memory if you don't always call delete[] when you're done with the buffer. (This can particularly happen when exceptions are thrown, or when error paths are taken). Placing the buffer on the stack, on the other hand, is guaranteed not to leak memory, but the amount of space available on the stack is finite so a really huge array could cause your program to run out of stack space and crash. In this case, a single 200-byte array on the stack is no problem, so that's what I would use.
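If the size must stay dynamic (as with new char[nlength] above), one way to get heap allocation without the leak risk is std::vector, which releases its memory automatically; a small sketch:
std::vector<char> recvbuf(nlength); // freed automatically, even on early returns
bytesRecv = recv(m_socket, recvbuf.data(), static_cast<int>(recvbuf.size()), 0);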
I have a problem: my server application sends a packet 8 bytes long - AABBCC1122334455 - but my application receives this packet in two parts, AABBCC1122 and 334455, via the recv function. How can I fix that?
Thanks!
To sum up a little bit:
A TCP connection doesn't operate with packets or messages at the application level; you're dealing with a stream of bytes. From this point of view it's similar to writing to and reading from a file.
Both send and recv can send and receive less data than requested. You have to deal with this correctly (usually by wrapping the call in a proper loop).
As you're dealing with streams, you have to find a way to convert them into meaningful data in your application. In other words, you have to design a serialisation protocol.
From what you've already mentioned, you most probably want to send some kind of messages (well, it's usually what people do). The key thing is to discover the boundaries of messages properly. If your messages are of fixed size, you simply grab the same amount of data from the stream and translate it to your message; otherwise, you need a different approach:
If you can come up with a character which cannot exist in your message, it could be your delimiter. You can then read the stream until you reach the character and it'll be your message. If you transfer ASCII characters (strings) you can use zero as a separator.
If you transfer binary data (raw integers etc.), any byte value can appear in your message, so nothing can act as a delimiter. Probably the most common approach in this case is to use a fixed-size prefix containing the size of your message. The size of this extra field depends on the maximum size of your message (you will probably be safe with 4 bytes, but if you know the maximum size, you can use smaller values). Then your packet would look like SSSS|PPPPPPPPP... (stream of bytes), where S is the additional size field and P is your payload (the real message in your application; the number of P bytes is determined by the value of S). You know every packet starts with 4 special bytes (the S bytes), so you can read them as a 32-bit integer. Once you know the size of the encapsulated message, you read all the P bytes. After you're done with one packet, you're ready to read another one from the socket.
Good news though, you can come up with something completely different. All you need to know is how to deserialise your message from a stream of bytes and how send/recv behave. Good luck!
EDIT:
Example of a function receiving an arbitrary number of bytes into an array:
bool recv_full(int sock, char *buffer, size_t size)
{
size_t received = 0;
while (received < size)
{
ssize_t r = recv(sock, buffer + received, size - received, 0);
if (r <= 0) break;
received += r;
}
return received == size;
}
And an example of receiving a packet with a 2-byte prefix defining the size of the payload (payload size is thus limited to 65535 bytes):
uint16_t msgSize = 0; // note: assumes both ends use the same byte order; apply ntohs otherwise
char msg[0xffff];
if (recv_full(sock, reinterpret_cast<char *>(&msgSize), sizeof(msgSize)) &&
recv_full(sock, msg, msgSize))
{
// Got the message in msg array
}
else
{
// Something bad happened to the connection
}
That's just how recv() works on most platforms. You have to check the number of bytes you receive and continue calling it in a loop until you get the number that you need.
You "fix" that by reading from TCP socket in a loop until you get enough bytes to make sense to your application.
my server application sends packet 8 bytes length
Not really. Your server sends 8 individual bytes, not a packet 8 bytes long. TCP data is sent over a byte stream, not a packet stream. TCP neither respects nor maintains any "packet" boundary that you might have in mind.
If you know that your data is provided in quanta of N bytes, then call recv in a loop:
std::vector<char> read_packet(int N) {
    std::vector<char> buffer(N);
    int total = 0, count;
    while ( total < N && (count = recv(sock_fd, &buffer[total], N-total, 0)) > 0 )
        total += count;
    return buffer;
}
std::vector<char> packet = read_packet(8);
If your packet is variable length, try sending its length before the data itself:
int read_int() {
std::vector<char> buffer = read_packet(sizeof (int));
int result;
memcpy((void*)&result, (void*)&buffer[0], sizeof(int));
return result;
}
int length = read_int();
std::vector<char> data = read_packet(length);
I know that TCP provides stream-like data transmission, but the main question is - what situations can occur while sending data over TCP?
1. The message can be split into N chunks to fit within the MTU size.
2. Two messages can be read in 1 recv call.
Can the following situation occur?
MTU for example 1500 bytes.
Client calls send with 1498 bytes data.
Client calls send with 100 bytes data.
Server calls recv and receives 1500 bytes data.
Server calls recv and receives 98 bytes data.
So we end up with a situation where 2 bytes from the second client send are received by the first server recv.
My protocol is defined as follows:
4 bytes - data length
data content.
I wonder: can I end up with a situation where the 4 bytes (data length) are split into 2 chunks?
Yes, a stream of bytes may be split on any byte boundary. You certainly can have your 4-byte data length header split in any of 8 different ways:
4
1-3
2-2
3-1
1-1-2
1-2-1
2-1-1
1-1-1-1
Some of these are more likely to occur than others, but you must account for them. Code that could handle this might look something like the following:
unsigned char buf[4];
size_t len = 0;
while (len < sizeof(buf)) {
    ssize_t n = recv(s, buf+len, sizeof(buf)-len, 0);
    if (n <= 0) {
        // connection closed (n == 0) or error (n < 0): handle it here
    }
    len += n;
}
// note: this decodes the header as little-endian; use ntohl if the
// sender writes it in network byte order
uint32_t length = buf[0] | (buf[1] << 8) | (buf[2] << 16) | (buf[3] << 24);
I always write my applications in a manner that expects the data to become fragmented somehow. It's not hard to do once you come up with a good design.
What's the best way to monitor a socket for new data and then process that data?