I'm trying to make an MFC application (client) that connects to a server on ("localhost", port 1234); the server replies to the client, and the client reads the server's response.
The server is able to receive the data from the client and it sends the reply back to the socket from where it received it, but I am unable to read the reply from within the client.
I am creating a CAsyncSocket to connect to the server and send data, and a second CAsyncSocket with overridden OnAccept and OnReceive methods to read the reply from the server.
Please tell me what I'm doing wrong.
class ServerSocket : public CAsyncSocket {
public:
    void OnAccept(int nErrorCode) {
        outputBox->SetWindowTextA("RECEIVED DATA");
        CAsyncSocket::OnAccept(nErrorCode);
    }
};
//in ApplicationDlg I have:
socket.Create();
socket.Connect("127.0.0.1", 1234);
socket.Send(temp, strlen(temp));     // this should be sending the initial message

if (!serverSocket.Create())          // this should get the response, I guess...
    AfxMessageBox("Could not create server socket");
if (!serverSocket.Listen())
    AfxMessageBox("Could not listen to socket");
You should be aware that all network operations are potentially time-consuming. Since you're using MFC's CAsyncSocket class, it performs every operation asynchronously (it doesn't block you), but returning from a call does not mean the operation has already completed.
Let's look at the following lines of code:
socket.Connect("127.0.0.1",1234);
socket.Send(temp,strlen(temp)); //this should be sending the initial message
The first is the call to Connect, which most probably doesn't complete immediately.
Next, you call Send, but your socket isn't connected yet! It certainly returns an error code, but since you don't check the return value, you just happily wait to receive something.
So the next rule for you, my friend, is to check the return value of every function you call, especially in networking, where errors are legitimate and happen frequently.
You should only start sending after OnConnect has been called.
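For illustration, here is a minimal sketch of that pattern, assuming an ANSI MFC build like the one in the question; the class name and the m_request/m_outputBox members are invented for the example:

class ClientSocket : public CAsyncSocket
{
public:
    CStringA m_request;            // data to send once the connection is established
    CWnd*    m_outputBox = NULL;   // e.g. the CEdit that should display the reply

    virtual void OnConnect(int nErrorCode)
    {
        if (nErrorCode == 0)
            Send((const char*)m_request, m_request.GetLength());   // only send once connected
        CAsyncSocket::OnConnect(nErrorCode);
    }

    virtual void OnReceive(int nErrorCode)
    {
        char buffer[1024] = {0};
        int received = Receive(buffer, sizeof(buffer) - 1);
        if (received > 0 && m_outputBox != NULL)
            m_outputBox->SetWindowTextA(buffer);                    // show the server's reply
        CAsyncSocket::OnReceive(nErrorCode);
    }
};

// In the dialog, roughly:
//   clientSocket.m_request = temp;
//   clientSocket.Create();
//   clientSocket.Connect("127.0.0.1", 1234);   // Send() happens later, in OnConnect

Since your server sends the reply back on the same connection it received the request on, the reply arrives in this socket's OnReceive; no separate listening socket is needed on the client.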
First, I don't see where you send the data to the client (on the server).
Second, Accept() does not mean data was received. Accept means you have a new incoming connection, for which you need to create another socket; that accepted socket is the one the data will be sent to.
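To make the second point concrete, a rough sketch of a listener that accepts into a separate data socket might look like this (class and member names are invented, and only a single connection is handled):

class DataSocket : public CAsyncSocket
{
public:
    virtual void OnReceive(int nErrorCode)
    {
        char buffer[1024] = {0};
        int received = Receive(buffer, sizeof(buffer) - 1);
        // ... the incoming bytes arrive here, on the accepted socket ...
        CAsyncSocket::OnReceive(nErrorCode);
    }
};

class ListenerSocket : public CAsyncSocket
{
public:
    DataSocket m_dataSocket;                 // carries the accepted connection

    virtual void OnAccept(int nErrorCode)
    {
        if (nErrorCode == 0)
            Accept(m_dataSocket);            // data flows on m_dataSocket, not on the listener
        CAsyncSocket::OnAccept(nErrorCode);
    }
};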
I am trying to work out how to use ZMQ effectively across multiple threads (so that sending doesn't block receiving and receiving doesn't block sending).
I wanted to use the ZMQ_DONTWAIT flag, but when sending data it will sometimes not be sent (EAGAIN error), so I would have to re-queue the message, which is a waste of resources when dealing with megabytes of data.
I did come up with the following code:
#include <thread>
#include <functional>           // std::ref
#include <utility>              // std::move
#include <windows.h>            // Sleep
#include <concurrent_queue.h>   // Concurrency::concurrent_queue (PPL)
#include <zmq.hpp>              // cppzmq

Concurrency::concurrent_queue<zmq::message_t> QUEUE_IN;
Concurrency::concurrent_queue<zmq::message_t> QUEUE_OUT;

void SendThread(zmq::context_t &context) {
    zmq::socket_t zmq_socket(context, ZMQ_DEALER);
    zmq_socket.connect(string_format("tcp://%s:%s", address, port).c_str());

    zmq::message_t reply;
    while (true) {
        while (QUEUE_OUT.try_pop(reply))
            zmq_socket.send(reply);
        Sleep(1);                                // don't spin while the queue is empty
    }
}

void RecvThread(zmq::context_t &context) {
    zmq::socket_t zmq_socket(context, ZMQ_DEALER);
    zmq_socket.connect(string_format("tcp://%s:%s", address, port).c_str());

    zmq::message_t reply;
    while (true) {
        while (zmq_socket.recv(&reply))
            QUEUE_IN.push(std::move(reply));     // message_t is not copyable, so move it
    }
}

void ConnectionThread()
{
    zmq::context_t context(1);
    // context_t is not copyable, so it must be passed to the threads by reference.
    std::thread* threads[2] = {
        new std::thread(SendThread, std::ref(context)),
        new std::thread(RecvThread, std::ref(context))
    };
    threads[0]->join();
    threads[1]->join();
}
However, that would require two sockets on the server end as well, and I would need to know which one to send data to and which one to listen on at the server, right?
Is there no way to use one socket and still send and receive from multiple threads?
I would maybe like to do it asynchronously on one socket, but after studying the async sample I still don't grasp the idea, as there aren't many comments around it.
Avoiding the Sleep
To avoid the sleep, you can use zmq_poll() with a ZMQ_POLLOUT event to protect the send(). You don't need to use ZMQ_DONTWAIT. [I used the C function there; your binding will have an equivalent.]
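A minimal sketch of that idea, assuming the same older cppzmq API used in the question (where socket_t converts to void* and zmq::poll wraps the C zmq_poll function); the helper name is invented:

// Only send when the socket reports ZMQ_POLLOUT, i.e. it can take another
// message without blocking and without returning EAGAIN.
bool send_when_writable(zmq::socket_t &zmq_socket, zmq::message_t &msg)
{
    zmq::pollitem_t items[] = { { (void*)zmq_socket, 0, ZMQ_POLLOUT, 0 } };
    zmq::poll(items, 1, -1);                   // block until the socket is writable
    if (items[0].revents & ZMQ_POLLOUT) {
        zmq_socket.send(msg);
        return true;
    }
    return false;
}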
Routing to RecvThread
One cannot share sockets between threads, so 2 sockets are needed for this to work. The server would only need one socket (presumably a ROUTER) bound to 2 ports. When it receives a message, it will then need to know where to send the reply...
When a ROUTER socket receives a message, the ZMQ internals add a frame to the message containing the identity of the sender. This frame is seen by the server code, which would normally use that same identity frame when constructing a reply to the sender. In your case, that would be the client's SendThread. OTOH, you want to reply to the client's receive socket, so the identity frame must be that socket's.
The only thing left is how the server obtains the identity frame of the client's receive socket. For that, you'll need to invent a small protocol. Arranging for the client's RecvThread to send one message to the server would almost be enough. The server should understand that message and simply retain the identity frame of the client's receive socket, and use a copy of it when constructing reply messages.
All of this is explained in the guide under "Exploring ROUTER Sockets".
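As a rough illustration only, the server loop described above might look like this with the question's cppzmq API; the router socket, the registration message, and the is_registration()/make_reply() helpers are all invented for the sketch:

zmq::message_t identity, body, client_recv_identity;

while (true) {
    router.recv(&identity);                      // frame 1: peer identity, added by the ROUTER socket
    router.recv(&body);                          // frame 2: the payload

    if (is_registration(body)) {                 // the one "hello" message from the client's RecvThread
        client_recv_identity.copy(&identity);    // remember which identity replies must carry
        continue;
    }

    // Normal data from the client's SendThread: reply to the *receive* socket.
    zmq::message_t reply = make_reply(body);
    zmq::message_t dest;
    dest.copy(&client_recv_identity);
    router.send(dest, ZMQ_SNDMORE);              // frame 1: destination identity
    router.send(reply);                          // frame 2: the payload
}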
When sending large data (you say you're sending MB of data in a single message), it's going to take some time, ZMQ doesn't "duplex" sending and receiving so that they can both actually happen. The DONTWAIT flag isn't going to help you so much there, its purpose is to ensure that you're not waiting on ZMQ when you could be performing non-ZMQ actions. All messages should still be queued up in any event (barring interference from the High Water Mark)
The only way to safely use multiple threads to parallelize sending and receiving is to use multiple sockets.
But, it's not all bad. If you use one designated send socket and one designated receive socket, then you can use pub/sub, which opens up some interesting options.
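A hypothetical socket layout for that last suggestion, with made-up endpoints, might be:

zmq::context_t context(1);

zmq::socket_t send_socket(context, ZMQ_PUB);    // used only by the sending thread
send_socket.connect("tcp://server:5556");

zmq::socket_t recv_socket(context, ZMQ_SUB);    // used only by the receiving thread
recv_socket.connect("tcp://server:5557");
recv_socket.setsockopt(ZMQ_SUBSCRIBE, "", 0);   // subscribe to everything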
I am using the Boost library to create a server application. Only one client is allowed at a time, so once async_accept(...) completes, the acceptor is closed.
The server's only job is to send data periodically to the client (if sending is enabled on the server; otherwise it just sits there until sending is enabled). For this I have a Boost message queue: when a message arrives, send() is called on the socket.
My problem is that I cannot tell whether the client is still listening. Normally you would not care; the next send would simply fail.
But in my case the acceptor is not open while a socket is open. If the socket gets into the CLOSE_WAIT state, I have to close it and reopen the acceptor so that the client can connect again.
Waiting for the next send is not an option either, since sending may be disabled, in which case my server would be stuck.
Question:
How can I determine if a boost::asio::ip::tcp::socket is in a CLOSE_WAIT state?
Here is the code to do what Dmitry Poroh suggests:
typedef asio::detail::socket_option::integer<ASIO_OS_DEF(SOL_SOCKET), SO_ERROR> so_error;
// (With Boost.Asio the spelling is boost::asio::detail::socket_option::integer and
//  BOOST_ASIO_OS_DEF(SOL_SOCKET); note that this relies on the internal detail namespace.)

so_error tmp;
your_socket.get_option(tmp);
int value = tmp.value();
// do something with value
You can try to use ip::tcp::socket::get_option and read the error state with level SOL_SOCKET and option name SO_ERROR. I'm surprised that I have not found a ready Boost implementation for it. So you can try to meet the GettableSocketOption requirements and use ip::tcp::socket::get_option to fetch the socket's error state.
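If you prefer not to rely on the asio::detail helper shown above, a hand-written option type meeting those GettableSocketOption requirements could look roughly like this (a sketch; the class name and the length check are my own):

#include <boost/asio.hpp>
#include <stdexcept>

class so_error_option
{
public:
    so_error_option() : value_(0) {}

    // Level and name passed straight through to getsockopt().
    template <typename Protocol> int level(const Protocol&) const { return SOL_SOCKET; }
    template <typename Protocol> int name(const Protocol&)  const { return SO_ERROR; }

    // Buffer that getsockopt() fills in.
    template <typename Protocol> int*        data(const Protocol&)       { return &value_; }
    template <typename Protocol> const int*  data(const Protocol&) const { return &value_; }
    template <typename Protocol> std::size_t size(const Protocol&) const { return sizeof(value_); }

    // Called by get_option() with the size getsockopt() actually wrote.
    template <typename Protocol> void resize(const Protocol&, std::size_t s)
    {
        if (s != sizeof(value_))
            throw std::length_error("SO_ERROR: unexpected option size");
    }

    int value() const { return value_; }

private:
    int value_;
};

// Usage:
//   so_error_option opt;
//   your_socket.get_option(opt);
//   int err = opt.value();   // 0 means no pending error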
I'm using boost::asio, sending a list to a client and closing the socket when finished. Somehow the client sometimes gets an End Of File error before it has received everything.
I'm guessing this has to do with the server closing the socket right after sending the last list entry. Is there an easy way to make async_send call its handler only after the data has actually been sent?
Or is my End Of File error coming from something else?
Boost.Asio is an operating-system-independent abstraction layer over TCP and UDP sockets, and as such it provides no guarantee that the other application has received and processed the data. You will need to include that logic in your application; you may want to study the OSI model.
If you're closing the socket immediately after async_send() returns, this is incorrect. You should close it only after the completion handler is invoked.
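As a sketch of that ordering (assuming a shared_from_this-managed connection object with members socket_ and data_), using the composed boost::asio::async_write operation so the handler runs only once the whole buffer has been written:

auto self = shared_from_this();
boost::asio::async_write(socket_, boost::asio::buffer(data_),    // data_ must outlive the operation
    [this, self](const boost::system::error_code& ec, std::size_t /*bytes_sent*/)
    {
        if (!ec)
        {
            // Only now has the whole buffer been handed to the OS.
            boost::system::error_code ignored;
            socket_.shutdown(boost::asio::ip::tcp::socket::shutdown_send, ignored);
            socket_.close(ignored);
        }
    });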
I have a server and client program where it's convenient for the server to send two messages to each client. The server calls write() on each client socket twice in a row. On the client side, the readyRead signal fires, but when the client reads, it only gets the first message. I fixed this by adding waitForBytesWritten() before each write() on the server, which seemed to solve the problem. However, I don't understand why I can't just write to the buffer twice, and I suspect there is a better way to solve this.
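For what it's worth, the usual client-side pattern, assuming a Qt QTcpSocket (the readyRead/waitForBytesWritten names suggest Qt), is to drain everything available on each readyRead and split the messages yourself, since TCP does not preserve write() boundaries; socket and buffer_ are assumed members and handleMessage() is an invented handler, with newline-terminated messages as a further assumption:

connect(socket, &QTcpSocket::readyRead, this, [this]() {
    buffer_.append(socket->readAll());             // take every byte that has arrived so far
    int pos;
    while ((pos = buffer_.indexOf('\n')) != -1) {  // extract complete messages
        QByteArray message = buffer_.left(pos);
        buffer_.remove(0, pos + 1);
        handleMessage(message);
    }
});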
Hey, I'm using WSAEventSelect for socket event notifications. So far everything is cool and working like a charm, but there is one problem.
The client is a .NET application and the server is written in C++ with Winsock. In the .NET application I'm using the System.Net.Sockets.Socket class for TCP/IP. When I call the Socket.Shutdown() and Socket.Close() methods, I receive the FD_CLOSE event in the server, which I'm pretty sure is fine. The problem occurs when I check the iErrorCode of the WSANETWORKEVENTS structure I passed to WSAEnumNetworkEvents. I check it like this:
if (listenerNetworkEvents.lNetworkEvents & FD_CLOSE)
{
    if (listenerNetworkEvents.iErrorCode[FD_CLOSE_BIT] != 0)
    {
        // it comes here
        // which means there is an error
        // and the ERROR I got is
        // WSAECONNABORTED
        printf("FD_CLOSE failed with error %d\n",
               listenerNetworkEvents.iErrorCode[FD_CLOSE_BIT]);
        break;
    }
    closesocket(socketArray[Index]);
}
But it fails with the WSAECONNABORTED error. Why is that so?
EDIT: By the way, I'm running both the client and the server on the same computer; could that be the cause? I receive the FD_CLOSE event when I do this:
server.Shutdown(SocketShutdown.Both); // in .NET C#, client code
I'm guessing you're calling Shutdown() and then Close() immediately afterward. That will give the symptom you're seeing, because this is "slamming the connection shut". Shutdown() does initiate a graceful disconnect (TCP FIN), but immediately following it with Close() aborts that, sending a TCP RST packet to the remote peer. Your Shutdown(SocketShutdown.Both) call slams the connection shut, too, by the way.
The correct pattern is:
Call Shutdown() with the direction parameter set to "write", meaning we won't be sending any more data to the remote peer. This causes the stack to send the TCP FIN packet.
Go back to waiting for Winsock events. When the remote peer is also done writing, it will call Shutdown("write") too, causing its stack to send your machine a TCP FIN packet and your application to get an FD_CLOSE event. While waiting, your code should be prepared to keep reading from the socket, because the remote peer might still be sending data.
(Please excuse the pseudo-C# above. I don't speak .NET, only C++.)
Both peers are expected to use this same shutdown pattern: each tells the other when it's done writing, and then waits to receive notification that the remote peer is done writing before it closes its socket.
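In plain Winsock C++ terms, that pattern looks roughly like this blocking sketch (with WSAEventSelect you would keep handling FD_READ/FD_CLOSE events instead of looping on recv()):

// 's' is a connected SOCKET; winsock2.h / ws2_32.lib assumed.
void graceful_close(SOCKET s)
{
    shutdown(s, SD_SEND);                 // our FIN: "I'm done talking"

    char buffer[4096];
    for (;;)
    {
        int n = recv(s, buffer, sizeof(buffer), 0);
        if (n > 0)
            continue;                     // the peer is still talking; keep listening
        break;                            // 0 = the peer's FIN arrived; SOCKET_ERROR = give up
    }

    closesocket(s);                       // both directions are finished; no RST is needed
}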
The important thing to realize is that TCP is a bidirectional protocol: each side can send and receive independently of the other. Closing the socket to reading is not a nice thing to do. It's like having a conversation with another person but only talking and being unwilling to listen. The graceful shutdown protocol says, "I'm done talking now. I'm going to wait until you stop talking before I walk away."