I am dealing with a problem: after sending data successfully, I receive the first response from the client, but not the second one, sent after the user enters their details and submits.
Do you have any idea why this happens?
Here is my code:
sock->listenAndAccept();
string url="HTTP/1.1 302 Found\r\nContent-Type: text/html; charset=utf-8\r\nContent-Length: 279\r\n\r\n<!DOCTYPE html><html><head><title>Creating an HTML Element</title></head><body><form name=\"input\" action=\"login.html\" method=\"get\">user name: <input type=\"text\" name=\"user\"><br>password: <input type=\"text\" name=\"password\"><input type=\"submit\" value=\"Submit\"></form></body></html>";
sock->send(url.data(),url.length());
char buffer[1000];
sock->recv(buffer, 1000);
cout<<buffer<<endl;
sock->recv(buffer, 1000);
cout<<buffer<<endl;
The listenAndAccept function:
TCPSocket* TCPSocket::listenAndAccept(){
    int rc = listen(socket_fd, 1);
    if (rc < 0){
        return NULL;
    }
    socklen_t len = sizeof(peerAddr);
    bzero((char *) &peerAddr, sizeof(peerAddr));
    int connect_sock = accept(socket_fd, (struct sockaddr *)&peerAddr, &len);
    return new TCPSocket(connect_sock, serverAddr, peerAddr);
}
The recv function:
int TCPSocket::recv(char* buffer, int length){
    return read(socket_fd, buffer, length);
}
TCP is a stream-oriented protocol. It is possible that you read both messages in the first recv. Check the size of the received data and see whether it matches the expected output.
Always, always, always (I can't say that often enough) check the return value of recv. recv will read up to the amount you have requested. If you're certain the amount you've requested is on its way, then you must loop around recv, buffering the incoming data, until you've received what you expect to receive.
This kind of bug tends to sit there lurking unseen while you test on your local machine using the very fast localhost interface, and then surfaces as soon as you start running the client and server on different hosts.
When you move on from your test code to actual code, you must also deal with zero-length returns (client closed the socket) and error codes (a return value < 0).
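For illustration, a minimal sketch of such a loop, written against the TCPSocket wrapper from the question (how you decide that a complete message has arrived is up to your protocol):
char buffer[1000];
int n;
while ((n = sock->recv(buffer, sizeof(buffer) - 1)) > 0) {
    buffer[n] = '\0';   // the received bytes are not NUL-terminated
    cout << buffer;     // accumulate/parse here until a full message is seen
}
// n == 0 means the client closed the connection; n < 0 means an error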
Finally, please post your client code. There may be bugs there as well.
I'm writing a simple HTTP server for a test, and I'm rather confused as to how one is supposed to tell where the end of a request is.
recv() returns a negative number on error, 0 on connection close, and a positive number when data was received; when there is no more data, it just blocks.
I could create some Frankenstein that continuously recv's on one thread and checks on another thread whether it blocked, but there has got to be a better way to do this... How can I tell that there are no more bytes to read for the time being, without blocking?
First of all, you should follow the HTTP protocol when reading the HTTP request:
1. Continue reading from the socket until \r\n\r\n is received.
2. Parse the header.
3. If Content-Length is specified, additionally read that many bytes of request payload.
4. Process the HTTP request.
5. Send the HTTP response.
6. Close the socket (HTTP/1.0), or (HTTP/1.1) handle keep-alive, content-encoding, transfer-encoding, trailers, etc., potentially repeating from step 1.
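A rough sketch of step 1, assuming a connected blocking socket sock and a std::string accumulator (error handling abbreviated):
std::string request;
char buf[4096];
while (request.find("\r\n\r\n") == std::string::npos) {
    int n = recv(sock, buf, sizeof(buf), 0);
    if (n <= 0) break;   // error or connection closed
    request.append(buf, n);
}
// next: parse the headers; if Content-Length is present, keep reading
// until that many payload bytes have arrived after the \r\n\r\n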
To deal with potentially misbehaving clients when using blocking sockets, it is customary to set socket timeouts before issuing recv or send calls.
DWORD recvTimeoutMs = 20000;
setsockopt(socket, SOL_SOCKET, SO_RCVTIMEO, (const char *)&recvTimeoutMs, sizeof(recvTimeoutMs));
DWORD sendTimeoutMs = 30000;
setsockopt(socket, SOL_SOCKET, SO_SNDTIMEO, (const char *)&sendTimeoutMs, sizeof(sendTimeoutMs));
When a recv or send times out, it fails with WSAGetLastError returning WSAETIMEDOUT (10060).
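A timed-out call can then be told apart from a hard failure, along these lines (a sketch):
char buffer[1024];
int n = recv(socket, buffer, sizeof(buffer), 0);
if (n == SOCKET_ERROR && WSAGetLastError() == WSAETIMEDOUT) {
    closesocket(socket);   // peer is too slow or stuck: give up on it
}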
Sorry for the poor description of my question.
What my program does is connect to a server, send some data, and close the connection. I simplified my code as below:
WSAStartup(MAKEWORD(2, 2), &wsaData);
SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
connect(s, (const sockaddr*)&dstAddr, sizeof(dstAddr));
send(s, (const char*)pBuffer, fileLen, 0);
shutdown(s, SD_SEND);
closesocket(s);
WSACleanup();
Only partial data was received by the server before it saw an RST that shut the communication down.
I wrote a simulated server program to accept the connection and receive the data, and the simulator got all of it. Because I can't access the server's source code, I don't know whether something is wrong on its side. Is there a way to avoid this error by adding some code to the client, or can I prove that there is something wrong in the server program?
Setting the socket's linger option fixed the bug, but I had to pick a magic number for the linger time.
linger l;
l.l_onoff = 1;    // enable lingering close
l.l_linger = 30;  // let closesocket block up to 30 seconds while data drains
setsockopt(socket, SOL_SOCKET, SO_LINGER, (const char*)&l, sizeof(l));
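An alternative that avoids picking a magic linger value is the common graceful-shutdown pattern: signal end of data, then wait for the peer to close before closing yourself. The same sequence as above, sketched with a drain loop between shutdown and closesocket (not tested against your particular server):
send(s, (const char*)pBuffer, fileLen, 0);
shutdown(s, SD_SEND);                // tell the peer no more data is coming
char drain[512];
while (recv(s, drain, sizeof(drain), 0) > 0)
    ;                                // wait until the peer closes its side
closesocket(s);                      // nothing left for an RST to destroy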
WSASend returns before actually sending data to the device
Correct.
I created a non-blocking socket and tried to send data to the server.
WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0, WSA_FLAG_OVERLAPPED)
No you didn't. You created an overlapped I/O socket.
After it executed, the return value was SOCKET_ERROR and WSAGetLastError() returned WSA_IO_PENDING. Then I called WSAWaitForMultipleEvents to wait for the event to be set. After it returned WSA_WAIT_EVENT_0, I called WSAGetOverlappedResult to get the actual number of bytes sent, and it was the same value I sent.
So all the data got transferred into the socket send buffer.
I called WSASocket first, then WSASend/WSAWaitForMultipleEvents/WSAGetOverlappedResult several times to send a bunch of data, and closesocket at the end.
So at the end of that process all the data and the close had been transferred to the socket send buffer.
But the server couldn't receive all the data. I used Wireshark to view the TCP packets and found that the client sent an RST before all the packets were sent out.
That could be for a number of reasons, none of which is determinable without seeing some code.
If I slept for 1 minute before calling closesocket, the server would receive all the data.
Again this would depend on what else had happened in your code.
It seemed like WSASend/WSAWaitForMultipleEvents/WSAGetOverlappedResult returned before the data was actually sent to the server.
Correct.
The data was saved in the buffer, waiting to be sent out.
Correct.
When I called closesocket, communication was shut down.
Correct.
They didn't work as I expected.
So your expectation was wrong.
Where did I go wrong? This problem only occurred on specific PCs; the application ran well on others.
Impossible to answer without seeing some code. The usual reason for issuing an RST is that the peer wrote data to a connection that you had already closed: in other words, an application protocol error. But there are other possibilities.
I am writing a simple TCP server in C++ for Windows to echo incoming data, and I have a problem with it. Before I explain the problem, I should say that Winsock is properly set up, and this happens with any source IP address.
The general behaviour when a connection is established is this:
In the loop that runs while the connection is still alive, it must echo the received data, preceded by the word REPLY.
To do that, I'm currently using two send() calls:
One call sending "REPLY " alone.
Another call just sending back received data.
But using the PuTTY client, I'm getting this:
REPLY data_echoed REPLY.
Why is REPLY sent after the last send call if it was the first??? I'll post the bit of code where the problem happens:
//Reply to client
message = "HELLO CLIENT!! READY TO ECHO.\n";
send(new_socket, message, strlen(message), 0);
///Enter into a loop until the connection is finished.
printf("%s \n\n", "Incoming data goes here: ");
do {
    ///Clear the buffer and receive data.
    memset(buffer, 0, sizeof(buffer));
    ret = recv(new_socket, buffer, sizeof(buffer), 0);
    printf("%s", buffer);
    ///Send the REPLY word and the echoed data.
    send(new_socket, "REPLY\r\n", 7, 0);
    send(new_socket, buffer, sizeof(buffer), 0);
} while (ret != SOCKET_ERROR);
What is wrong with that? If I remove the first call, the double effect disappears. Why can't I make two send calls one after the other?
You ignore the return value of recv until after you send REPLY, so no matter what happens, you send REPLY followed by the contents of the buffer. After you echo the first time, something happens, and no matter what it is, you send REPLY.
Bluntly, it doesn't look like you understand the very basics of TCP. We used to have a rule that before anyone can do anything with TCP, they must memorize and completely understand the following statement: "TCP is a byte-stream protocol that does not preserve application message boundaries."
Your code pretends that it is receiving and echoing application-level messages. But there is no code to actually implement application-level messages. TCP has no support for application-level messages, so if you want them, you have to implement them. You clearly want them. You also have not implemented them.
Do newline characters delimit messages? If so, where's the code to find them in the data stream? If not, what does?
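For example, if a newline delimits messages, the receive side might look something like this sketch, which buffers leftovers between recv calls:
std::string pending;
char buf[512];
int n;
while ((n = recv(new_socket, buf, sizeof(buf), 0)) > 0) {
    pending.append(buf, n);
    size_t pos;
    while ((pos = pending.find('\n')) != std::string::npos) {
        std::string msg = pending.substr(0, pos + 1);   // one complete message
        pending.erase(0, pos + 1);
        send(new_socket, "REPLY ", 6, 0);
        send(new_socket, msg.c_str(), (int)msg.length(), 0);
    }
}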
I am very new to networking and have an issue with sending messages inside a while loop.
To my knowledge, I should do something along the lines of this:
Create Socket()
Connect()
While
    Do logic
    Send()
End while
Close Socket()
However, it sends once and returns -1 thereafter.
The code will only work when I create the socket inside the loop:
While
    Create Socket()
    Connect()
    Do logic
    Send()
    Close Socket()
End while
Here is a section of the code I am using that doesn't work:
//init winsock
WSAStartup(MAKEWORD(2, 0), &wsaData);
//open socket
sock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
//connect
memset(&serveraddr, 0, sizeof(serveraddr));
serveraddr.sin_family = AF_INET;
serveraddr.sin_addr.s_addr = inet_addr(ipaddress);
serveraddr.sin_port = htons((unsigned short) port);
connect(sock, (struct sockaddr *) &serveraddr, sizeof(serveraddr));
while(true) {
if (send(sock, request.c_str(), request.length(), 0)< 0 /*!= request.length()*/) {
OutputDebugString(TEXT("Failed to send."));
} else {
OutputDebugString(TEXT("Activity sent."));
}
Sleep(30000);
}
//disconnect
closesocket(sock);
//cleanup
WSACleanup();
The function CheckForLastError() returns 10053, WSAECONNABORTED: Software caused connection abort. An established connection was aborted by the software in your host machine, possibly due to a data transmission time-out or protocol error.
Thanks
I have been looking for a solution to this problem too. I am having the same problem with my server: when trying to send a response from inside the loop, the client never seems to receive it.
As I understand the problem, following user207421's suggestions: when you establish a connection between a client and a server, the protocol must carry enough information for the client to know when the server has finished sending the response. In this example there is a minimal HTTP server that responds to requests, which you can exercise with a browser or an application like Postman. If you look at the response message, you will see a header called Connection; setting its value to close tells the client that this is the last message from the server for that request. In my case the message was being sent, but the client kept waiting, perhaps because there was no closing element it could recognize. I was also missing the Content-Length header, so my HTTP response message was malformed and the client was lost.
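For reference, a minimal response that a client can frame unambiguously might look like this (the values are illustrative; the blank line separates headers from body):
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 5
Connection: close

hello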
This diagram shows what needs to be outside the loop and what needs to be inside.
To understand how and why your program fails, you have to understand the functions you use.
Some of them are blocking functions and some are not. Some of them require prior calls to other functions and some don't.
Now, from what I understand, we are talking about a client here, not a server.
The client has only non-blocking functions in this case. That means that whenever you call a function, it executes without waiting.
So send() will send data the second it is called, and execution will move on to the next line of code.
If the information to be sent was not ready yet, you will have a problem, since nothing will be sent.
To solve that you could use some sort of delay. The problem with delays is that they are blocking functions, meaning your program will stop once it hits the delay. To get around that you can create a thread and lock it until the information is ready to be sent.
But that would do the job for one send(): you send the info and that's that.
If you want to keep the communication open and send info repeatedly, you need a while loop. Once you have a while loop you don't have to worry about anything, because you can verify that the information is ready with flow control and call send over and over again before terminating the connection.
Now, the question is: what is happening on the server side of things?
"ipaddress" should hold the IP of the server. The server might reject your request to connect. Or, worse, it might accept your request while listening with settings that don't match your client, meaning the server may not be receiving (it may never call recv()) while you are trying to send; that can result in errors, crashes and whatnot.
The way my game and server work is like this:
I send messages encoded in a format I created. It starts with 'p', followed by an integer for the message length, then the message.
ex: p3m15
The message is 3 bytes long, and it corresponds to message 15.
The message is then parsed, and so forth.
It is designed around the fact that TCP may deliver as little as 1 byte per read (TCP preserves the byte stream, not message boundaries).
This message protocol I created is extremely lightweight and works great, which is why I use it over something like JSON or other formats.
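A hypothetical sketch of a receive-side parser for the format as described, assuming the length is the run of ASCII digits after 'p' (the function name is made up):
// Extracts one complete message from `pending` if enough bytes have arrived.
bool tryExtract(std::string& pending, std::string& payload) {
    if (pending.empty() || pending[0] != 'p') return false;   // bad frame
    size_t i = 1;
    while (i < pending.size() && isdigit((unsigned char)pending[i])) ++i;
    if (i == 1 || i == pending.size()) return false;   // length not complete yet
    int len = atoi(pending.substr(1, i - 1).c_str());
    if (pending.size() < i + len) return false;        // payload not complete yet
    payload = pending.substr(i, len);
    pending.erase(0, i + len);
    return true;
}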
My main concern is, how should the client and the server start talking?
The server expects clients to send messages in my format. The game will always do this.
The problem I ran into was when I tested my server on port 1720: there was BitTorrent traffic, and my server was picking it up. This caused all kinds of random 'clients' to connect to my server and send random garbage.
To 'solve' this, I made it so that the first thing a client must send me is the string "Hello Server".
If the first byte ever sent is != 'H', or if they have sent me more than 12 bytes and it's != "Hello Server", then I immediately disconnect them.
This is working great. I'm just wondering if I'm doing something a bit naive or if there are more standard ways to deal with:
-Clients starting communication with server
-Clients passing the Hello Server check, but somewhere along the line I get an invalid message. I can assume that my app will never send an invalid message; if it did, it would be a bug. Right now, if I detect an invalid message, I disconnect the client.
I noticed BitTorrent was sending '!!BitTorrent Protocol' before each message. Should I do something like that?
Any advice on this and making it safer and more secure would be very helpful.
Thanks
Perhaps use a magic number field embedded in your message:
struct Message
{
    ...
    unsigned magic_number = 0xbadbeef3;
    ...
};
So the first thing you do after receiving something is check whether the magic_number field is 0xbadbeef3.
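A sketch of that check, assuming the struct is sent verbatim and has been fully received (byte order and struct padding are ignored here):
Message msg;
int n = recv(s, (char*)&msg, sizeof(msg), 0);
if (n != (int)sizeof(msg) || msg.magic_number != 0xbadbeef3) {
    closesocket(s);   // not speaking our protocol: drop the connection
}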
Typically, I design protocols with a header something like this:
typedef struct {
    uint32_t signature;
    uint32_t length;
    uint32_t message_num;
} header_t;

typedef struct {
    uint32_t foo;
} message13_t;
Sending a message:
message13_t msg;
msg.foo = 0xDEADBEEF;
header_t hdr;
hdr.signature = 0x4F4C494D; // "MILO"
hdr.length = sizeof(message13_t);
hdr.message_num = 13;
// Send the header
send(s, &hdr, sizeof(hdr), 0);
// Send the message data
send(s, &msg, sizeof(msg), 0);
Receiving a message:
header_t hdr;
char* buf;
// Read the header - all messages always have this
recv(s, &hdr, sizeof(hdr), 0);
// allocate a buffer for the rest of the message
buf = malloc(hdr.length);
// Read the rest of the message
recv(s, buf, hdr.length, 0);
This code is obviously devoid of error-checking or making sure all data has been sent/received.
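The usual fix, sketched here, is a small helper that loops until exactly the requested number of bytes has arrived (a matching write-side helper does the same for send):
// Read exactly len bytes, looping over short reads.
// Returns len on success, 0 if the peer closed, <0 on error.
int readn(int s, char* buf, int len) {
    int total = 0;
    while (total < len) {
        int n = recv(s, buf + total, len - total, 0);
        if (n <= 0) return n;
        total += n;
    }
    return total;
}
The recv(s, &hdr, sizeof(hdr), 0) call above would then become readn(s, (char*)&hdr, sizeof(hdr)).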