Problem with retransmitting stdin to a server in OCaml

I'm trying to write an IRC-inspired server, along with a client capable of communicating with it, as part of my uni class. I have already written a server that I can successfully interact with using tools such as telnet, but I'm having trouble writing a client, specifically with sending messages from stdin to the server. The module that I use for sending/receiving messages looks as follows:
let (>>=) = Lwt.bind

type t =
  { user_input : Lwt_io.input_channel;
    server_input : Lwt_io.input_channel;
    output : Lwt_io.output_channel }

let make fd =
  { user_input = Lwt_io.stdin;
    server_input = Lwt_io.of_fd ~mode:Lwt_io.input fd;
    output = Lwt_io.of_fd ~mode:Lwt_io.output fd }

let send conn msg =
  Lwt_io.write conn.output msg >>= fun _ ->
  Lwt_io.flush conn.output

let receive input_ch =
  Lwt_io.read_line input_ch
  >>= fun line ->
  Lwt.return line
Funnily enough, I know that it almost works properly, thanks to some classic print_endline debugging techniques (I logged the messages sent and received by the server). The problem is, when I type something in my terminal, it doesn't go through to the server until I press Ctrl+C. An example session looks as follows:
I launch the server and successfully connect to it using the client that I wrote
The server asks me to choose my nickname
In my client, I type a nick and press Enter
The server doesn't signal receiving any messages
I press Ctrl+C in my client session
The moment the client closes, the server shows that it received all the characters typed into the client before it was shut down with Ctrl+C
Based on that, I assume the message doesn't actually get sent through the socket until I press Ctrl+C, for some unknown reason.
(Screenshots of the server responses and the corresponding client session were included for more reference.)
I was wondering whether this could somehow be caused by the way I read lines using Lwt_io.read_line, perhaps something about it not recognizing newline characters, and how to fix that (so it works as intended, that is, sending the typed message to the server after pressing Enter). Additionally, here's the code responsible for the main loop of the client:
let rec handle_user conn () =
  let open Connection in
  receive conn.user_input
  >>= fun msg ->
  send conn msg
  >>= handle_user conn

let rec handle_server conn () =
  let open Connection in
  receive conn.server_input
  >>= fun msg ->
  Lwt_io.write Lwt_io.stdout msg
  >>= handle_server conn

let handle_connection conn () =
  Lwt.join [
    handle_user conn ();
    handle_server conn ()
  ]
handle_connection conn () is launched from the main function using Lwt_main.run, after doing some standard socket setup. I feel like I have scoured half of the Internet trying to come up with a solution, yet I couldn't find anything, and it completely blocks my further progress on the project.
TL;DR
Tried: sending a message to the server after typing it in my client session and pressing Enter
What I expected: a response from the server signaling that it did in fact receive the message, along with its content
What actually happened: the message didn't go through until I closed the client with Ctrl+C, after which the server signaled receiving all the messages typed so far, but glued together, ignoring newline characters

As far as I can see (without any way to reproduce the error), the issue is that Lwt_io.read_line returns the line without the newline characters as documented at https://ocsigen.org/lwt/5.5.0/api/Lwt_io#VALread_line. Thus,
receive conn.user_input
>>= send conn
does not send any newlines, and the server has no way to know that the current message ended before the end of the connection.
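A minimal sketch of the fix, assuming the server treats "\n" as the end of a message (which is what a telnet session would send): re-append the newline before writing.

let rec handle_user conn () =
  let open Connection in
  receive conn.user_input
  >>= fun msg ->
  (* Lwt_io.read_line strips the trailing newline, so add it back;
     otherwise the server never sees a complete, terminated line *)
  send conn (msg ^ "\n")
  >>= handle_user conn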

Related

Asynchronous, Non-Blocking Socket Behaviour - WSAEWOULDBLOCK

I have inherited two applications, one Test Harness (a client) running on a Windows 7 PC and one server application running on a Windows 10 PC. I am attempting to communicate between the two using TCP/IP sockets. The Client sends requests (for data in the form of XML) to the Server and the Server then sends the requested data (also XML) back to the client.
The set up is as shown below:
       Client                                    Server
--------------------                      --------------------
|                  |    Sends Requests    |                  |
|  Client Socket   |  ----------------->  |  Server Socket   |
|                  |  <-----------------  |                  |
|                  |      Sends Data      |                  |
--------------------                      --------------------
This process always works on an initial connection (i.e. freshly launched client and server applications). The client has the ability to disconnect from the server, which triggers cleanup of sockets. Upon reconnection, I almost always (it does not always happen, but does most of the time) receive the following error:
"Receive() - The socket is marked as nonblocking and the receive operation would block"
This error is displayed at the client and the socket in question is an asynchronous, non-blocking socket.
The line which causes this SOCKET_ERROR is:
numBytesReceived = theSocket->Receive(theReceiveBuffer, 10000);
where:
- numBytesReceived is an integer (int)
- theSocket is a pointer to a class called CClientSocket, which is a specialisation of CAsyncSocket, part of the MFC C++ library. This defines the socket object which is embedded within the client. It is an asynchronous, non-blocking socket.
- Receive() is a virtual function within the CAsyncSocket object
- theReceiveBuffer is a char array (10000 elements)
In executing the line described above, SOCKET_ERROR is returned from the function, and calling theSocket->GetLastError() returns WSAEWOULDBLOCK.
SocketTools highlights that
When a non-blocking (asynchronous) socket attempts to perform an operation that cannot be performed immediately, error 10035 will be returned. This error is not fatal, and should be considered advisory by the application. This error code corresponds to the Windows Sockets error WSAEWOULDBLOCK.
When reading data from a non-blocking socket, this error will be returned if there is no more data available to be read at that time. In this case, the application should wait for the OnRead event to fire which indicates that more data has become available to read. The IsReadable property can be used to determine if there is data that can be read from the socket.
When writing data to a non-blocking socket, this error will be returned if the local socket buffers are filled while waiting for the remote host to read some of the data. When buffer space becomes available, the OnWrite event will fire which indicates that more data can be written. The IsWritable property can be used to determine if data can be written to the socket.
It is important to note that the application will not know how much data can be sent in a single write operation, so it is possible that if the client attempts to send too much data too quickly, this error may be returned multiple times. If this error occurs frequently when sending data it may indicate high network latency or the inability for the remote host to read the data fast enough.
I am consistently getting this error and failing to receive anything on the socket.
Using Wireshark, the following communications occur, with the source, destination and TCP bit flags presented here:
Event: Connect Test Harness to Server via TCP/IP
Client --> Server: SYN
Server --> Client: SYN, ACK
Client --> Server: ACK
This appears to be correct and represents the Three-Way Handshake of connecting.
SocketSniff confirms that a Socket is closed on the client side. It was not possible to get SocketSniff to work with the Windows 10 Server application.
Event: Send a Request for Data from the Test Harness
Client --> Server: PSH, ACK
Server --> Client: PSH, ACK
Client --> Server: ACK
Both the request data and the received data are confirmed to have been exchanged successfully
Event: Disconnect Test Harness from Server
Client --> Server: FIN, ACK
Server --> Client: ACK
Server --> Client: FIN, ACK
Client --> Server: ACK
This appears to be correct and represents the Four-Way handshake of connection closure.
SocketSniff confirms that a Socket is closed on the client side. It was not possible to get SocketSniff to work with the Windows 10 Server application.
Event: Reconnect Test Harness to Server via TCP/IP
Client --> Server: SYN
Server --> Client: SYN, ACK
Client --> Server: ACK
This appears to be correct and represents the Three-Way Handshake of connecting.
SocketSniff confirms that a new Socket is opened on the client side. It was not possible to get SocketSniff to work with the Windows 10 Server application.
Event: Send a Request for Data from the Test Harness
Client --> Server: PSH, ACK
Server --> Client: ACK
We see no data being pushed (PSH) back to the client, yet we do see an acknowledgement.
Has anyone got any ideas what may be going on here? I understand it would be difficult for you to diagnose without seeing the source code, however I was hoping others may have had experience with this error and could point me down the specific route to investigate.
More Info:
The Server initialises a listening thread and binds to 0.0.0.0:49720. The 'WSAStartup()', 'bind()' and 'listen()' functions all return '0', indicating success. This thread persists throughout the lifetime of the server application.
The Server initialises two threads, a read and a write thread. The read thread is responsible for reading request data off its socket and is initialised as follows with a class called Connection:
HANDLE theConnectionReadThread
    = CreateThread(NULL,                                     // Security Attributes
                   0,                                        // Default Stacksize
                   Connection::connectionReadThreadHandler,  // Callback
                   (LPVOID)this,                             // Parameter to pass to thread
                   CREATE_SUSPENDED,                         // Don't start yet
                   NULL);                                    // Don't Save Thread ID
The write thread is initialised in a similar way.
In each case, the CreateThread() function returns a suitable HANDLE, e.g.
theConnectionReadThread = 00000570
theConnectionWriteThread = 00000574
The threads actually get started within the following function:
void Connection::startThreads()
{
    ResumeThread(theConnectionReadThread);
    ResumeThread(theConnectionWriteThread);
}
And this function is called from within another class called ConnectionManager which manages all the possible connections to the server. In this case, I am only concerned with a single connection, for simplicity.
Adding text output to the server application reveals that I can successfully connect/disconnect the client and server several times before the faulty behaviour is observed. For example, within the connectionReadThreadHandler() and connectionWriteThreadHandler() functions, I am outputting text to a log file as soon as they execute.
When correct behaviour is observed, the following lines are output to the log file:
Connection::ResumeThread(theConnectionReadThread) returned 1
Connection::ResumeThread(theConnectionWriteThread) returned 1
ConnectionReadThreadHandler() Beginning
ConnectionWriteThreadHandler() Beginning
When faulty behaviour is observed, the following lines are output to the log file:
Connection::ResumeThread(theConnectionReadThread) returned 1
Connection::ResumeThread(theConnectionWriteThread) returned 1
The callback functions do not appear to be invoked.
It is at this point that the error is displayed on the client indicating that:
"Receive() - The socket is marked as nonblocking and the receive operation would block"
On the client side, I've got a class called CClientDoc, which contains the client-side socket code. It first initialises theSocket, the socket object embedded within a client:
private:
    CClientSocket* theSocket = new CClientSocket;
When a connection is initialised between client and server, this class calls a function called CreateSocket(), part of which is included below, along with the ancillary functions it calls:
void CClientDoc::CreateSocket()
{
    AfxSocketInit();
    int lastError;
    theSocket->Init(this);
    if (theSocket->Create()) // Calls CAsyncSocket::Create() (part of afxsock.h)
    {
        theErrorMessage = "Socket Creation Successful"; // this is a CString
        theSocket->SetSocketStatus(WAITING);
    }
    else
    {
        // We don't fall in here
    }
}

void CClientSocket::Init(CClientDoc* pDoc)
{
    pClient = pDoc; // pClient is a pointer to a CClientDoc
}

void CClientSocket::SetSocketStatus(SOCKET_STATUS sock_stat)
{
    theSocketStatus = sock_stat; // theSocketStatus is a private member of CClientSocket of type SOCKET_STATUS
}
Immediately after CreateSocket(), SetupSocket() is called, which is also provided here:
void CClientDoc::SetupSocket()
{
    theSocket->AsyncSelect(); // Function within afxsock.h
}
Upon disconnection of the client from the server,
void CClientDoc::OnClienDisconnect()
{
    theSocket->ShutDown(2); // Inline function within afxsock.inl
    delete theSocket;
    theSocket = new CClientSocket;
    CreateSocket();
    SetupSocket();
}
So we delete the current socket and then create a new one, ready for use, which appears to work as expected.
The error is being written on the Client within the DoReceive() function. This function calls the socket to attempt to read in a message.
void CClientDoc::DoReceive()
{
    int lastError;
    switch (numBytesReceived = theSocket->Receive(theReceiveBuffer, 10000))
    {
    case 0:
        // We don't fall in here
        break;
    case SOCKET_ERROR: // We come in here when the faulty behaviour occurs
        if ((lastError = theSocket->GetLastError()) == WSAEWOULDBLOCK)
        {
            theErrorMessage = "Receive() - The socket is marked as nonblocking and the receive operation would block";
        }
        else
        {
            // We don't fall in here
        }
        break;
    default:
        // When connection works, we come in here
        break;
    }
}
Hopefully the addition of some of the code proves insightful. I should be able to add a bit more if needed.
Thanks
The WSAEWOULDBLOCK error DOES NOT mean the socket is marked as blocking. It means the socket is marked as non-blocking and there is NO DATA TO READ at that time.
WSAEWOULDBLOCK means the socket WOULD HAVE blocked the calling thread waiting for data if the socket HAD BEEN marked as blocking.
To know when a non-blocking socket has data waiting to be read, use Winsock's select() function, or the CClientSocket::AsyncSelect() method to request FD_READ notifications, or other equivalent. Don't try to read until there is something to read.
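To make that rule concrete, here is a minimal sketch using plain Unix sockets in OCaml (chosen because the main question on this page uses it); sock is assumed to be an already-connected, non-blocking socket, and the 1-second timeout is illustrative:

(* Only call recv once select reports the socket readable, so the call
   cannot fail with EWOULDBLOCK/EAGAIN. *)
let recv_when_ready sock buf =
  match Unix.select [sock] [] [] 1.0 with
  | [], _, _ -> None  (* nothing to read yet; come back later *)
  | _ -> Some (Unix.recv sock buf 0 (Bytes.length buf) [])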
In your analysis, you see the client sending data to the server, but the server is not sending data to the client. So you clearly have a logic bug in your code somewhere, you need to find and fix it. Either the client is not terminating its request correctly, or the server is not receiving/processing/replying to it correctly. But since you did not show your actual code, we can't tell you what is actually wrong with it.

python socket client does not send anything

I'm trying to do an integration via an HTTP socket. I'm using Python to create the socket client and send data to a socket server written in C.
The integration documentation gives an example in C (transcribed below from its images) that shows how I must send the data to the server:
Integration documentation example:
1- define record / structure types for the message header and for each message format
2- Declare / Create a client socket object
3- Open the socket component in non blocking mode
4- declare a variable of the data structure type relevant to the API function you wish to call – then fill it with the correct data (including header). Copy the structure data to a byte array and send it through the socket
I've tried to do that using the ctypes module from python:
import ctypes
import socket

class SPMSifHdr(ctypes.Structure):
    _fields_ = [
        ('ui32Synch1', ctypes.c_uint32),
        ('ui32Synch2', ctypes.c_uint32),
        ('ui16Version', ctypes.c_uint16),
        ('ui32Cmd', ctypes.c_uint32),
        ('ui32BodySize', ctypes.c_uint32)
    ]

class SPMSifRegisterMsg(ctypes.Structure):
    _fields_ = [
        ('hdr1', SPMSifHdr),
        ('szLisence', ctypes.c_char*20),
        ('szApplName', ctypes.c_char*20),
        ('nRet', ctypes.c_int)
    ]

body_len = ctypes.sizeof(SPMSifRegisterMsg)
header = SPMSifHdr(ui32Synch1=0x55555555, ui32Synch2=0xaaaaaaaa, ui16Version=1, ui32Cmd=1, ui32BodySize=body_len)
body = SPMSifRegisterMsg(hdr1=header, szLisence='12345', szApplName='MyPmsTest', nRet=1)

socket_connection = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# config is a dict with the socket server connection params
socket_connection.connect((config.get('ip'), int(config.get('port'))))
socket_connection.sendall(bytearray(body))
socket_connection.recv(1024)
When I call the socket recv function, it never receives anything, so I used a Windows tool to check the data that I sent; as you can see in the next image, it seems that no data is sent:
(SocketSniff screenshot)
I've tried to send even a simple "Hello! world" string and the result is always the same.
The socket connection is open. I know it because I can see how many connections are open from the server panel.
What am I doing wrong?
The error was that the SocketSniff program only shows the sent data if the server returns a response. In this case the server never returned anything, because some bytes were missing.
I found this out by creating my own socket echo server and checking that the data I was sending was incomplete.
Mystery solved. :D
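For reference, an echo server like the one used for debugging here can be sketched in a few lines of Lwt-based OCaml (the port and all names are illustrative, not from the original post):

open Lwt.Infix

(* Echo every byte a client sends straight back, so you can see exactly
   what, and whether, the client actually transmitted. *)
let () =
  let addr = Unix.(ADDR_INET (inet_addr_loopback, 2000)) in
  Lwt_main.run begin
    Lwt_io.establish_server_with_client_address addr
      (fun _client (ic, oc) -> Lwt_io.(write_chars oc (read_chars ic)))
    >>= fun _server ->
    fst (Lwt.wait ())  (* keep serving forever *)
  end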

ZeroMQ: why a Client Server program without a use of multiprocessing fails?

I recently encountered ZeroMQ (pyzmq) and found a very useful piece of code on a website, Client Server with REQ and REP, which I modified to make only a single-process call. My code is:
import zmq
import sys
import time  # needed for time.sleep below; missing in the original
from multiprocessing import Process

port = 5556

def server():
    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind("tcp://*:%s" % port)
    print "Running server on port: %s" % port
    # serves only 5 requests and dies
    #for reqnum in range(4):
    # Wait for next request from client
    message = socket.recv()
    print "Received request : %s from client" % message
    socket.send("ACK from %s" % port)

def client():
    context = zmq.Context()
    socket = context.socket(zmq.REQ)
    #for port in ports:
    socket.connect("tcp://localhost:%s" % port)
    #for request in range(20):
    print "client Sending request to server"
    socket.send("Hello")
    message = socket.recv()
    print "Received ACK from server""[", message, "]"
    time.sleep(1)

if __name__ == "__main__":
    Process(target=server, args=()).start()
    Process(target=client, args=()).start()
    time.sleep(1)
I realise that ZeroMQ is powerful, especially with multiprocessing/multi-threading calls, but I was wondering whether it is possible to call the server and client methods without launching them as a Process in __main__. For example, I tried calling them like this:
if __name__ == "__main__":
    server()
    client()
For some reason the server started but not the client, and I had to hard-exit the program.
Is there any way to achieve this without the Process calls? If not, is there a socket program (with or without a client-server type architecture) that functions exactly like the one above? (I want a single program, not two programs running in different terminals as in a classic CL-SE setup.)
Using Ubuntu 14.04, 32-bit VM with Python-2.7
Simply, the server() processing got to start, but the client() never did.
Why?
Because the pure [SERIAL]-process scheduling stepped into the server() code, where a Context instance was instantiated and a Socket-instance created; next, the call to the socket.recv() method hung up the whole process in an unlimited & uncontrollable waiting state, expecting to receive some message, having the REP-LY Formal Behaviour Pattern ready on the local side, but having no live counterparty that would have sent any such expected message yet.
Yes, distributed computing has several new dimensions (degrees-of-freedom) to care about -- the elementary (non-)presence and ordering of events being just recognised in this trivial scenario.
Wherever I can advocate, I do: NEVER use a blocking form of .recv() + read about the risk of a principally un-salvageable REQ/REP mutual dead-lock (you never know when it will happen, but you have a certainty that it will & a certainty that you cannot salvage the mutually dead-locked counterparties once it happens).
So, welcome to the realms of distributed-processing reality
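The same ordering trap can be demonstrated in a few lines of OCaml with Lwt (a hedged sketch standing in for the pyzmq pair, using an Lwt_mvar in place of a real socket):

open Lwt.Infix

let () =
  let mailbox = Lwt_mvar.create_empty () in
  let server () =
    Lwt_mvar.take mailbox >>= fun msg ->  (* blocks until a message arrives *)
    Lwt_io.printlf "server got: %s" msg
  in
  let client () = Lwt_mvar.put mailbox "Hello" in
  (* Works because both run concurrently; running [server () >>= client]
     sequentially would wait forever on [take], exactly like the
     blocking socket.recv() above. *)
  Lwt_main.run (Lwt.join [ server (); client () ])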

Lwt leaking file descriptors, not sure if bug or my code

(Cross posted to lwt github issues)
I have boiled down my usage to this code sample which will leak file descriptors.
say you have:
#require "lwt.unix"
open Lwt.Infix
let echo ic oc = Lwt_io.(write_chars oc (read_chars ic))
let program =
let server_address = Unix.(ADDR_INET (inet_addr_loopback, 2000)) in
let other_addr = Unix.(ADDR_INET (inet_addr_loopback, 2001)) in
let server = Lwt_io.establish_server server_address begin fun (tcp_ic, tcp_oc) ->
Lwt_io.with_connection other_addr begin fun (nc_ic, nc_oc) ->
Lwt_io.printl "Created connection" >>= fun () ->
echo tcp_ic nc_oc <&> echo nc_ic tcp_oc >>= fun () ->
Lwt_io.printl "finished"
end
|> Lwt.ignore_result
end
in
fst (Lwt.wait ())
let () =
Lwt_main.run program
and then you create a simple server with:
nc -l 2001
and then let's start up the OCaml code with
utop example.ml
and then open up a client
nc localhost 2000
blah blah
^c
Then looking at the connections for port 2000 using lsof, we see
ocamlrun 71109 Edgar 6u IPv4 0x7ff3e309cb80aead 0t0 TCP 127.0.0.1:callbook (LISTEN)
ocamlrun 71109 Edgar 7u IPv4 0x7ff3e309c9dc8ead 0t0 TCP 127.0.0.1:callbook->127.0.0.1:54872 (CLOSE_WAIT)
In fact for each usage of nc localhost 2000, we'll get a leftover CLOSE_WAIT record from the lsof usage.
Eventually this will lead to the system running out of file descriptors, which will, MOST annoyingly, not crash the program but lead Lwt to just hang.
I can't tell if I am doing something wrong or if this is a genuine bug, in any case this is a serious bug for me and I run out of file descriptors within 10 hours...
EDIT: It seems to me that the problem is that one side of the connection is closed but the other isn't, I would have thought that with_connection should cleanup/close up whenever either side closes, aka whenever nc_ic or nc_oc close.
EDIT II: I have tried every which way where I manually close the descriptors with Lwt_io.close, but I still have the CLOSE_WAIT message.
EDIT III: Even used Lwt_unix.close on a raw fd given to with_connection's optional fd argument with similar bad results.
EDIT IV: Most insidiously, if I use Lwt_daemon.daemonize, this problem seemingly goes away.
First, it is not clear why you use join <&> instead of choose <?>. I guess the connection should be closed if one of both sides wants to close it.
Concerning CLOSE_WAIT: it is the half-closed connection from the utop server to the nc client.
A TCP connection consists of two half-connections, and they are closed independently. The connection from the nc client to the utop server was closed by nc due to Ctrl-C, but you have to explicitly close the opposite half on the server side by closing the output stream. I'm not sure why Lwt_io.establish_server doesn't close it automatically; possibly, this is a design issue.
This works for me on CentOS 7:
Lwt_io.printl "Created connection" >>= fun () ->
echo tcp_ic nc_oc <?> echo nc_ic tcp_oc >>= fun () ->
Lwt_io.close tcp_oc >>= fun () ->
Lwt_io.printl "finished"
Also, there is a simplified code snippet to reproduce the issue:
#require "lwt.unix"
let program =
let server_address = Unix.(ADDR_INET (inet_addr_loopback, 2000)) in
let _server = Lwt_io.establish_server server_address begin fun (ic, oc) ->
(* Lwt_io.close oc |> Lwt.ignore_result; *) ()
end
in
fst (Lwt.wait ())
let () =
Lwt_main.run program
Run nc localhost 2000 several times to get connections in CLOSE_WAIT state. Uncomment the code to fix the issue.
The underlying problem, at the time this question was asked, was that Lwt_io.establish_server did not make any effort at all to close the file descriptors associated with tcp_ic and tcp_oc. While this could (and should) have been addressed by users closing them manually, it was a weird and unexpected behavior.
The new Lwt_io.establish_server, available since Lwt 3.0.0, does try to close tcp_ic and tcp_oc automatically. To permit this, it has a slightly different type signature for the callback: the callback must return a promise, which you should resolve when tcp_ic/tcp_oc are not needed anymore. (EDIT) In practice, this means you just write your callback in natural Lwt style, and completion of the last Lwt operation will close the channels.
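A hedged sketch of what the question's proxy might look like against the new API (the <?> combinator is taken from the earlier answer; treat the exact signatures as assumptions to check against your Lwt version's documentation):

open Lwt.Infix

let echo ic oc = Lwt_io.(write_chars oc (read_chars ic))

let program =
  let server_address = Unix.(ADDR_INET (inet_addr_loopback, 2000)) in
  let other_addr = Unix.(ADDR_INET (inet_addr_loopback, 2001)) in
  (* The callback now returns a promise; when it resolves, the new
     establish_server closes tcp_ic and tcp_oc itself. *)
  Lwt_io.establish_server server_address begin fun (tcp_ic, tcp_oc) ->
    Lwt_io.with_connection other_addr begin fun (nc_ic, nc_oc) ->
      echo tcp_ic nc_oc <?> echo nc_ic tcp_oc
    end
  end
  >>= fun _server ->
  fst (Lwt.wait ())

let () =
  Lwt_main.run program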
The new API also internally calls Lwt.async for running your callback, so you don't have to call that or Lwt.ignore_result.
You can still close the tcp_ic and tcp_oc manually in the callback, to write your own error handlers, which can be as elaborate as you please. The second automatic, internal close inside the new Lwt_io.establish_server won't have any harmful effect.
The new API was the eventual result of the parallel discussion of this question in the Lwt issue #208.
If someone would like the old, painful behavior, perhaps to reproduce the issue in the question, the old API is available for a while longer under the name Lwt_io.Versioned.establish_server_1.

C++ Server - Client Message Sync

I'm writing a small program that can send a file from Client -> Server (Send) and from Server -> Client (Request).
That part is done, but the problems come when:
1. I find the file on the Server; how can I execute a cin on the client side?
2. How can I force my messages between Server and Client to be synced? I mean, I don't want the Server to move to the next step or freeze on the receive.
For example (no threading applied in this problem):
Server: Waiting for a Message from Client.
Client: Send the Message.
Client: Waiting for a Message from Server.
Server: Send the Message.
.....etc.
On rare occasions the messages arrive in order, but 99.999% of the time they don't, and the programs on both sides freeze.
The problem with the out-of-order messages was a thread on the client side that kept reading the incoming replies without allowing the actual functions to see them.
However, about point 1.
I am trying, in this code:
1. No shared resources, so I am trying to define everything inside this function (the part of it where the problem is happening).
2. I was trying to pass this function to a thread so the server can accept more clients.
3. send & receive: nothing special about them, just normal send/recv calls.
4. Question: if SendMyMessage & ReceiveMyMessage are going to be used by different threads, should I pass the socket to them with the message?
void ExecuteRequest(void * x)
{
    RequestInfo * req = (RequestInfo *) x;
    // 1st Message Direction get or put
    fstream myFile;
    myFile.open(req->_fName);
    char tmp;
    string _MSG = "";
    string cFile = "*";
    if (req->_fDir.compare("put") == 0)
    {
        if (myFile.is_open())
        {
            SendMyMessage("*F*");
            cFile = ReceiveMyMessage();
            // Here I want to ask the client what to do after it has found that the file exists on the server:
            // the client should get the message "*F*", then a cin prompt appears to him,
            // then the client enters a char,
            // then a message is sent back to the server,
            // then the server continues executing the code
            // More code
        }
Client side:
{
    cout << "Waiting Message" << endl;
    temps = ReceiveMessage();
    if (temps.compare("*F*") == 0)
    {
        cout << "File found on Server want to:\n(1)Replace it.\n(2)Append to it." << endl;
        cin >> temps;
        SendMyMessage(temps);
    }
}
I am using Visual Studio 2013 on Windows 7; the threading call I was using is _beginthread (I removed all threads).
Regards,
On Linux, there is a system call, select(), with which the server can wait on its open sockets. As soon as there is activity, such as a client writing something, the server wakes up on that socket and processes the data.
You are on the Windows platform, so see:
http://msdn.microsoft.com/en-us/library/windows/desktop/ms740141%28v=vs.85%29.aspx
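For illustration, here is the shape of such a select loop, sketched in OCaml with plain Unix sockets (a generic sketch under assumed names, not the poster's program):

(* Wait until a socket has activity before touching it, so the server
   never freezes on an idle recv. *)
let serve listen_sock =
  let clients = ref [] in
  let buf = Bytes.create 4096 in
  while true do
    let readable, _, _ = Unix.select (listen_sock :: !clients) [] [] (-1.0) in
    List.iter
      (fun fd ->
        if fd = listen_sock then begin
          (* new connection: accept it and start watching it too *)
          let client, _addr = Unix.accept listen_sock in
          clients := client :: !clients
        end
        else begin
          let n = Unix.read fd buf 0 (Bytes.length buf) in
          if n = 0 then begin
            (* peer closed the connection: stop watching it *)
            Unix.close fd;
            clients := List.filter (fun c -> c <> fd) !clients
          end
          else
            ignore (Unix.write fd buf 0 n)  (* echo back, for example *)
        end)
      readable
  done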