Whenever I send a message from my client to the server (or vice versa), I have to wait until I've sent a message from the other side for it to register.
client.py
from socket import *
host = '127.0.0.1'
port = 80
s = socket(AF_INET, SOCK_STREAM)
s.connect((host, port))
while True:
    message = raw_input("Send message: ")
    s.send(message)
    print s.recv(1024)
server.py
from socket import *
host = '127.0.0.1'
port = 80
s = socket(AF_INET, SOCK_STREAM)
s.bind((host, port))
s.listen(5)
c,addr = s.accept()
while True:
    message = raw_input("Send message: ")
    c.send(message)
    print c.recv(1024)
Example:
server sends message
nothing happens on client side
client sends message
both server and client receive their messages
this will happen over and over
Well, it's obvious that you have to wait for "the other side" because you have raw_input calls sitting in your code that are waiting for the user to type something...
That said, you're also recv'ing up to 1024 bytes while probably sending much less. recv() blocks until at least some data has arrived and may return fewer bytes than you asked for. Remember that with low-level socket communication you have to take care of message sizes/boundaries yourself!
You can agree on a termination byte such as "\n" that signifies the boundary between messages, or another strategy is to prepend every message with a few bytes that signify the size of the message following it. You then recv() exactly that amount. Or just send a fixed amount of data for every message.
What remains is that you still have to deal with the fact that send() and recv() sometimes don't send or receive everything: they have a return value that is important. Using sendall, and recv with the MSG_WAITALL flag can take care of most issues, but there's no guarantee.
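For illustration, a minimal sketch of the length-prefix strategy in Python (the helper names are mine, not from the question): every message is preceded by a fixed 4-byte size field, and the receiver loops until it has exactly that many bytes.

import struct

def send_msg(sock, payload):
    # 4-byte big-endian length header, then the payload itself.
    sock.sendall(struct.pack('!I', len(payload)) + payload)

def recv_exactly(sock, n):
    # recv() may return fewer bytes than asked for, so loop until done.
    chunks = []
    while n > 0:
        chunk = sock.recv(n)
        if not chunk:
            raise EOFError("connection closed mid-message")
        chunks.append(chunk)
        n -= len(chunk)
    return b''.join(chunks)

def recv_msg(sock):
    (length,) = struct.unpack('!I', recv_exactly(sock, 4))
    return recv_exactly(sock, length)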
Read this https://docs.python.org/3/howto/sockets.html#using-a-socket for starters.
Sorry for the improper description of my question.
What my program does is connect to a server, send some data, and close the connection. I simplified my code as below:
WSAStartup(MAKEWORD(2, 2), &wsaData);
SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
connect(s, (const sockaddr*)&dstAddr, sizeof(dstAddr));
send(s, (const char*)pBuffer, fileLen, 0);
shutdown(s, SD_SEND);
closesocket(s);
WSACleanup();
Only partial data was received by the server before it saw an RST that shut the connection down.
I wrote a simulated server program to accept the connection and receive the data, and the simulator got all of it. Because I couldn't access the real server's source code, I didn't know whether something was wrong in it. Is there a way I can avoid this error by adding some code to the client, or can I prove that there is something wrong in the server program?
Setting the socket's linger option can fix the bug, but I need to give a magic number for the linger time.
linger l;
l.l_onoff = 1;
l.l_linger = 30;
setsockopt(socket, SOL_SOCKET, SO_LINGER, (const char*)&l, sizeof(l));
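A common alternative that avoids picking a magic linger number, sketched here in Python for brevity (the helper name is mine): shut down only the sending direction, then drain the socket until the peer closes its side, and only then close. This gives the peer time to read everything before the socket is torn down.

import socket

def send_then_close(sock, payload):
    sock.sendall(payload)              # hand all data to the stack
    sock.shutdown(socket.SHUT_WR)      # send FIN: "no more data from me"
    while sock.recv(4096):             # drain until the peer closes its end
        pass
    sock.close()                       # now closing cannot discard unsent data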
WSASend returns before actually sending the data to the device
Correct.
I created a non-blocking socket and tried to send data to server.
WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0, WSA_FLAG_OVERLAPPED)
No you didn't. You created an overlapped I/O socket.
After it executed, the return value was SOCKET_ERROR and WSAGetLastError() returned WSA_IO_PENDING. Then I called WSAWaitForMultipleEvents to wait for the event to be set. After it returned WSA_WAIT_EVENT_0, I called WSAGetOverlappedResult to get the actual sent data length, and it was the same value as what I sent.
So all the data got transferred into the socket send buffer.
I called WSASocket first, then WSASend/WSAWaitForMultipleEvents/WSAGetOverlappedResult several times to send a bunch of data, and closesocket at the end.
So at the end of that process all the data and the close had been transferred to the socket send buffer.
But the server couldn't receive all the data. I used Wireshark to view the TCP packets and found that the client sent an RST before all packets were sent out.
That could be for a number of reasons none of which is determinable without seeing some code.
If I slept for one minute before calling closesocket, the server received all the data.
Again this would depend on what else had happened in your code.
It seemed like WSASend/WSAWaitForMultipleEvents/WSAGetOverlappedResult returned before the data was actually sent to the server.
Correct.
The data was saved in the buffer, waiting to be sent out.
Correct.
When I called closesocket, communication was shut down.
Correct.
They didn't work as I expected.
So your expectation was wrong.
Where did I go wrong? This problem only occurred on specific PCs; the application ran well on others.
Impossible to answer without seeing some code. The usual reason for issuing an RST is that the peer had written data to a connection that you had already closed: in other words, an application protocol error; but there are other possibilities.
I'm a beginner to networking. We have a sender and a receiver application. I have captured the packets sent from the sender to the receiver using WinDump.
I'm writing a python sender application which will fuzz the packets sent by sender to receiver.
I just want to confirm: can I directly pass the packet data obtained using WinDump to the socket send() method?
Say, "arp who-has host1 tell host2" is the packet obtained by WinDump
Can I write,
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect((TCP_IP, TCP_PORT))
s.send(arp who-has host1 tell host2)
In general Python network programming you cannot pass a packet description like that as the argument to send(); you can only pass a string (or bytes), for example held in a variable.
Basically, we use the send() and sendall() methods to transmit data over TCP.
I have described both methods below.
socket send() method:
The socket's send() method is not guaranteed to send all of the data you pass it. Instead, it returns the number of bytes that were actually sent and expects your application to handle retransmission of the unsent portion.
from socket import socket
sock = socket()
sock.connect(('1.2.3.4', 1234))
sock.send('My Name is Dasadiya Chaitanya !!!\n')
sock.close()
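For illustration, a sketch (the helper name is mine) of what handling that unsent portion yourself looks like; this is essentially what sendall() below does for you:

def send_all(sock, data):
    # Keep calling send() until every byte has been handed to the kernel.
    total = 0
    while total < len(data):
        sent = sock.send(data[total:])
        if sent == 0:
            raise RuntimeError("connection closed by peer")
        total += sent
    return total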
socket sendall() method:
sendall() does the same work as send(), but it is a convenience method that keeps sending until all of your data has been handed to the network stack (or an error occurs) before returning, so you don't have to handle the unsent portion yourself.
from socket import socket
sock = socket()
sock.connect(('1.2.3.4', 1234))
sock.sendall('My Name is Dasadiya Chaitanya !!!\n')
sock.close()
I hope this is helpful for you :)
When you use the simple ZeroMQ REQ/REP pattern you depend on a fixed send()->recv() / recv()->send() sequence.
As this article describes, you get into trouble when a participant disconnects in the middle of a request: you can't just start over with receiving the next request from another connection, because the state machine would force you to send a request to the disconnected one.
Has there emerged a more elegant way to solve this since the mentioned article has been written?
Is reconnecting the only way to solve this (apart from not using REQ/REP and using another pattern instead)?
As the accepted answer seemed so terribly sad to me, I did some research and found that everything we need was actually in the documentation.
.setsockopt() with the correct parameters can help you reset your socket's state machine without brutally destroying it and rebuilding another on top of the previous one's dead body (yeah, I like the image).
ZMQ_REQ_CORRELATE: match replies with requests
The default behaviour of REQ sockets is to rely on the ordering of messages to match requests and responses and that is usually sufficient. When this option is set to 1, the REQ socket will prefix outgoing messages with an extra frame containing a request id. That means the full message is (request id, 0, user frames…). The REQ socket will discard all incoming messages that don't begin with these two frames.
Option value type: int
Option value unit: 0, 1
Default value: 0
Applicable socket types: ZMQ_REQ
ZMQ_REQ_RELAXED: relax strict alternation between request and reply
By default, a REQ socket does not allow initiating a new request with zmq_send(3) until the reply to the previous one has been received. When set to 1, sending another message is allowed and has the effect of disconnecting the underlying connection to the peer from which the reply was expected, triggering a reconnection attempt on transports that support it. The request-reply state machine is reset and a new request is sent to the next available peer.
If set to 1, also enable ZMQ_REQ_CORRELATE to ensure correct matching of requests and replies. Otherwise a late reply to an aborted request can be reported as the reply to the superseding request.
Option value type: int
Option value unit: 0, 1
Default value: 0
Applicable socket types: ZMQ_REQ
The complete documentation is here.
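For example, with pyzmq the two options read roughly like this; a minimal sketch, assuming a pyzmq/libzmq build recent enough to expose them (the endpoint is a placeholder):

import zmq

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
# Set the options before connect().
sock.setsockopt(zmq.REQ_RELAXED, 1)    # allow a new send() even if no reply arrived
sock.setsockopt(zmq.REQ_CORRELATE, 1)  # tag requests so stale replies are discarded
sock.setsockopt(zmq.RCVTIMEO, 2000)    # fail the recv() instead of blocking forever
sock.connect("tcp://127.0.0.1:5555")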
The good news is that, as of ZMQ 3.0 and later (the modern era), you can set a timeout on a socket. As others have noted elsewhere, you must do this after you have created the socket, but before you connect it:
zmq_req_socket.setsockopt( zmq.RCVTIMEO, 500 ) # milliseconds
Then, when you actually try to receive the reply (after you have sent a message to the REP socket), you can catch the error that will be asserted if the timeout is exceeded:
try:
    zmq_req_socket.send( message, 0 )
    send_failed = False
except zmq.Again:
    logging.warning( "Image send failed." )
    send_failed = True
However! When this happens, as observed elsewhere, your socket will be in a funny state, because it will still be expecting the response. At this point, I cannot find anything that works reliably other than just restarting the socket. Note that if you disconnect() the socket and then re-connect() it, it will still be in this bad state. Thus you need to
def reset_my_socket():
    global zmq_req_socket  # rebind the module-level socket
    zmq_req_socket.close()
    zmq_req_socket = zmq_context.socket( zmq.REQ )
    zmq_req_socket.setsockopt( zmq.RCVTIMEO, 500 ) # milliseconds
    zmq_req_socket.connect( zmq_endpoint )
You will also notice that because I close()d the socket, the receive timeout option was "lost", so it is important to set that on the new socket.
I hope this helps. And I hope that this does not turn out to be the best answer to this question. :)
There is one solution to this, and that is adding timeouts to all calls. Since ZeroMQ by itself does not really provide simple timeout functionality, I recommend using a subclass of the ZeroMQ socket that adds a timeout parameter to all important calls.
So, instead of calling s.recv() you would call s.recv(timeout=5.0), and if a response does not come back within that 5-second window it will return None and stop blocking. I made a futile attempt at this when I ran into this problem.
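A minimal sketch of such a timeout-aware receive in Python, using zmq.Poller rather than a full socket subclass (the helper name is mine):

import zmq

def recv_with_timeout(sock, timeout=5.0):
    # Returns a message, or None if nothing arrives within `timeout` seconds.
    poller = zmq.Poller()
    poller.register(sock, zmq.POLLIN)
    events = dict(poller.poll(int(timeout * 1000)))  # poll() takes milliseconds
    if events.get(sock) == zmq.POLLIN:
        return sock.recv()
    return None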
I'm actually looking into this at the moment, because I am retrofitting a legacy system.
I am constantly coming across code that "needs" to know about the state of the connection. However, I want to move to the message-passing paradigm that the library promotes.
I found the following function: zmq_socket_monitor
What it does is monitor the socket passed to it and generate events that are then passed to an "inproc" endpoint - at that point you can add handling code to actually do something.
There is also an example (actually test code) here: github
I have not got any specific code to give at the moment (maybe at the end of the week) but my intention is to respond to the connect and disconnects such that I can actually perform any resetting of logic required.
Hope this helps, and despite quoting the 4.2 docs, I am using 4.0.4, which seems to have the functionality as well.
Note I notice you talk about python above, but the question is tagged C++ so that's where my answer is coming from...
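For anyone on the Python side of this question anyway, pyzmq exposes the same mechanism; a minimal sketch (the endpoint is a placeholder), not the C++ test code linked above:

import zmq
from zmq.utils.monitor import recv_monitor_message

ctx = zmq.Context()
sock = ctx.socket(zmq.REQ)
monitor = sock.get_monitor_socket()   # PAIR socket bound to an inproc endpoint
sock.connect("tcp://127.0.0.1:5555")

while monitor.poll(1000):             # wait up to 1 s for the next event
    evt = recv_monitor_message(monitor)
    if evt['event'] == zmq.EVENT_CONNECTED:
        print("connected")
    elif evt['event'] == zmq.EVENT_DISCONNECTED:
        print("disconnected - reset request/reply logic here")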
Update: I'm updating this answer with this excellent resource here: https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/ Socket programming is complicated, so do check out the references in this post.
None of the answers here seem accurate or useful. The OP is not looking for information on BSD socket programming. He is trying to figure out how to robustly handle accept()ed client-socket failures in ZMQ on the REP socket to prevent the server from hanging or crashing.
As already noted -- this problem is complicated by the fact that ZMQ tries to pretend that the server's listen()ing socket is the same as an accept()ed socket (and nothing in the documentation describes how to set basic timeouts on such sockets).
My answer:
After doing a lot of digging through the code, the only relevant socket options passed along to accept()ed sockets seem to be the keep-alive options from the parent listen()er. So the solution is to set the following options on the listen socket before calling send or recv:
void zmq_setup(zmq::context_t** context, zmq::socket_t** socket, const char* endpoint)
{
    // Free old references.
    if (*socket != NULL)
    {
        (**socket).close();
        delete *socket;
    }
    if (*context != NULL)
    {
        // Shutdown all previous server client-sockets.
        delete *context;
    }
    *context = new zmq::context_t(1);
    *socket = new zmq::socket_t(**context, ZMQ_REP);

    // Enable TCP keep alive.
    int is_tcp_keep_alive = 1;
    (**socket).setsockopt(ZMQ_TCP_KEEPALIVE, &is_tcp_keep_alive, sizeof(is_tcp_keep_alive));

    // Only send 2 probes to check if client is still alive.
    int tcp_probe_no = 2;
    (**socket).setsockopt(ZMQ_TCP_KEEPALIVE_CNT, &tcp_probe_no, sizeof(tcp_probe_no));

    // How long does a con need to be "idle" for in seconds.
    int tcp_idle_timeout = 1;
    (**socket).setsockopt(ZMQ_TCP_KEEPALIVE_IDLE, &tcp_idle_timeout, sizeof(tcp_idle_timeout));

    // Time in seconds between individual keep alive probes.
    int tcp_probe_interval = 1;
    (**socket).setsockopt(ZMQ_TCP_KEEPALIVE_INTVL, &tcp_probe_interval, sizeof(tcp_probe_interval));

    // Discard pending messages in buf on close.
    int is_linger = 0;
    (**socket).setsockopt(ZMQ_LINGER, &is_linger, sizeof(is_linger));

    // TCP user timeout on unacknowledged send buffer.
    int is_user_timeout = 2;
    (**socket).setsockopt(ZMQ_TCP_MAXRT, &is_user_timeout, sizeof(is_user_timeout));

    // Start internal enclave event server.
    printf("Host: Starting enclave event server\n");
    (**socket).bind(endpoint);
}
What this does is tell the operating system to aggressively check the client socket for timeouts and reap it for cleanup when a client doesn't return a heartbeat in time. The result is that the OS will deliver a SIGPIPE to your program and socket errors will bubble up to send / recv - fixing a hung server. You then need to do two more things:
1. Handle SIGPIPE errors so the program doesn't crash
#include <signal.h>
#include <zmq.hpp>

// zmq_setup def here [...]

int main(int argc, char** argv)
{
    // Ignore SIGPIPE signals.
    signal(SIGPIPE, SIG_IGN);

    // ... rest of your code after
    // (Could potentially also restart the server
    // sock on N SIGPIPEs if you're paranoid.)

    // Start server socket.
    const char* endpoint = "tcp://127.0.0.1:47357";
    zmq::context_t* context = NULL;  // must start NULL so zmq_setup skips the cleanup branch
    zmq::socket_t* socket = NULL;
    zmq_setup(&context, &socket, endpoint);

    // Message buffers.
    zmq::message_t request;
    zmq::message_t reply;

    // ... rest of your socket code here
}
2. Check for -1 returned by send or recv and catch ZMQ errors.
// E.g. skip broken accepted sockets (pseudo-code.)
while (true)
{
    try
    {
        if ((*socket).recv(&request) == -1)
            throw -1;
    }
    catch (...)
    {
        // Prevent any endless error loops killing CPU.
        sleep(1);

        // Reset ZMQ state machine by sending a blank reply.
        try
        {
            zmq::message_t blank_reply = zmq::message_t();
            (*socket).send(blank_reply);
        }
        catch (...)
        {
            // Ignore send failures here.
        }
        continue;
    }

    // ... handle the request and send a real reply here ...
}
Notice the weird code that tries to send a reply on a socket failure? In ZMQ, a REP server "socket" is an endpoint to another program making a REQ socket to that server. The result is that if you do a recv on a REP socket with a hung client, the server socket becomes stuck in a broken receive loop where it will wait forever to receive a valid request.
To force an update on the state machine, you try to send a reply. ZMQ detects that the socket is broken, and removes it from its queue. The server socket becomes "unstuck", and the next recv call returns a new client from the queue.
To enable timeouts on an async client (in Python 3), the code would look something like this:
import asyncio
import zmq
import zmq.asyncio

ctx = zmq.asyncio.Context()

@asyncio.coroutine
def req(endpoint):
    ms = 2000  # In milliseconds.
    sock = ctx.socket(zmq.REQ)
    sock.setsockopt(zmq.SNDTIMEO, ms)
    sock.setsockopt(zmq.RCVTIMEO, ms)
    sock.setsockopt(zmq.LINGER, ms)  # Discard pending buffered socket messages on close().
    sock.setsockopt(zmq.CONNECT_TIMEOUT, ms)

    # Connect the socket.
    # Connections don't strictly happen here.
    # ZMQ waits until the socket is used (which is confusing, I know.)
    sock.connect(endpoint)

    # Send some bytes.
    yield from sock.send(b"some bytes")

    # Recv bytes and convert to unicode.
    msg = yield from sock.recv()
    msg = msg.decode(u"utf-8")
    return msg
Now you have some failure scenarios when something goes wrong.
By the way -- if anyone's curious -- the default TCP keep-alive idle time on Linux seems to be 7200 seconds, or 2 hours. So you would be waiting a long time for a hung server to do anything!
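For reference, those keep-alive timers can also be shortened on a plain TCP socket; a minimal Python sketch, assuming the Linux-only TCP_KEEP* constants are available:

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
# Linux-specific options; idle/interval in seconds, count in probes.
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, 60)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, 10)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, 3)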
Sources:
https://github.com/zeromq/libzmq/blob/84dc40dd90fdc59b91cb011a14c1abb79b01b726/src/tcp_listener.cpp#L82 TCP keep alive options preserved for client sock
http://www.tldp.org/HOWTO/html_single/TCP-Keepalive-HOWTO/ How does keep alive work
https://github.com/zeromq/libzmq/blob/master/builds/zos/README.md Handling sig pipe errors
https://github.com/zeromq/libzmq/issues/2586 for information on closing sockets
https://blog.cloudflare.com/when-tcp-sockets-refuse-to-die/
https://github.com/zeromq/libzmq/issues/976
Disclaimer:
I've tested this code and it seems to be working, but ZMQ complicates testing this a fair bit because the client re-connects on failure. If anyone wants to use this solution in production, I recommend writing some basic unit tests first.
The server code could also be improved a lot with threading or polling to be able to handle multiple clients at once. As it stands, a malicious client can temporarily take up resources from the server (3-second timeout), which isn't ideal.
I am very new to networking and have an issue with sending messages during a while loop.
To my knowledge I should do something along the lines of this:
Create Socket()
Connect()
While
Do logic
Send()
End while
Close Socket()
However, it sends once and returns -1 thereafter.
The code will only work when I create the socket in the loop.
While
Create Socket()
Connect()
Do logic
Send()
Close Socket()
End while
Here is a section of the code I am using that doesn't work:
//init winsock
WSAStartup(MAKEWORD(2, 0), &wsaData);
//open socket
sock = socket(PF_INET, SOCK_STREAM, IPPROTO_TCP);
//connect
memset(&serveraddr, 0, sizeof(serveraddr));
serveraddr.sin_family = AF_INET;
serveraddr.sin_addr.s_addr = inet_addr(ipaddress);
serveraddr.sin_port = htons((unsigned short) port);
connect(sock, (struct sockaddr *) &serveraddr, sizeof(serveraddr));
while(true) {
    if (send(sock, request.c_str(), request.length(), 0) < 0 /*!= request.length()*/) {
        OutputDebugString(TEXT("Failed to send."));
    } else {
        OutputDebugString(TEXT("Activity sent."));
    }
    Sleep(30000);
}
//disconnect
closesocket(sock);
//cleanup
WSACleanup();
The function CheckForLastError() returns 10053 (WSAECONNABORTED): "Software caused connection abort. An established connection was aborted by the software in your host computer, possibly due to a data transmission time-out or protocol error."
Thanks
I have been looking for a solution to this problem too. I am having the same problem with my server. When trying to send a response from inside the loop, the client seems never to receive it.
As I understand the problem, following user207421's suggestions: when you establish a connection between a client and a server, the protocol must give the client enough information to know when the server has finished sending its response. If you look at this example, you have a minimal HTTP server that responds to requests; you can exercise it with a browser or an application like Postman. If you look at the response message, you will see a header called Connection. Setting its value to close tells the client that this is the last message from the server for that request.
In my case the message was being sent, but the client kept waiting, maybe because there was no closing element the client could recognize. I was also missing the Content-Length header. My HTTP response message was wrong, and the client was lost.
This diagram shows what needs to be outside the loop and what needs to be inside.
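A minimal sketch of that structure in Python (host, port, and payload are placeholders; the reconnect branch is just one possible way to react to an aborted connection):

import socket
import time

HOST, PORT = "127.0.0.1", 8080      # placeholders
payload = b"activity\n"

# Outside the loop: create and connect the socket once.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect((HOST, PORT))

# Inside the loop: only the logic, the send, and the error check.
while True:
    try:
        sock.sendall(payload)        # raises socket.error if the connection died
    except socket.error:
        sock.close()                 # reconnect and carry on
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        sock.connect((HOST, PORT))
    time.sleep(30)

# After the loop: close the socket (unreachable here, shown for completeness).
sock.close()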
To understand how and why your program fails, you have to understand the functions you use.
Some of them are blocking functions and some of them are not. Some of them need previous calls to other functions and some of them don't.
Now, from what I understand, we are talking about a client here, not a server.
The client uses only non-blocking functions in this case. That means that whenever you call a function, it is executed without waiting.
So send() will send data the second it is called, and the program will go on to the next line of code.
If the information to be sent was not yet ready... you will have a problem, since nothing will be sent.
To solve it you could use some sort of delay. The problem with delays is that they are blocking, meaning your program will stop once it hits the delay. Alternatively, you can create a thread and block it until the information is ready to be sent.
But that would do the job for one send(). You would send the info and that's that.
If you want to keep the communication going and send information repeatedly, you will need a while loop. Once you have a while loop you don't have to worry about anything, because you can verify that the information is ready with flow control and call send() over and over again before terminating the connection.
Now the question is: what is happening on the server side of things?
"ipaddress" should hold the IP of the server. The server might reject your request to connect. Or worse, it might accept your request but be listening with different settings than your client expects, meaning that maybe the server is not receiving (does not call recv()) while you are trying to send info... that might result in errors/crashes and whatnot.
I am writing an XMLRPC client in c++ that is intended to talk to a python XMLRPC server.
Unfortunately, at this time, the Python XMLRPC server is only capable of fielding one request on a connection before it shuts down; I discovered this thanks to mhawke's response to my previous query about a related subject.
Because of this, I have to create a new socket connection to my Python server every time I want to make an XMLRPC request. This means the creation and deletion of a lot of sockets. Everything works fine until I approach ~4000 requests, at which point I get socket error 10048, "socket in use".
I've tried sleeping the thread to let winsock fix its file descriptors, a trick that worked when a python client of mine had an identical issue, to no avail.
I've tried the following
int err = setsockopt(s_,SOL_SOCKET,SO_REUSEADDR,(char*)TRUE,sizeof(BOOL));
with no success.
I'm using Winsock 2.0, so WSADATA::iMaxSockets shouldn't come into play, and either way, I checked and it's set to 0 (I assume that means infinity).
4000 requests doesn't seem like an outlandish number of requests to make during the run of an application. Is there some way to use SO_KEEPALIVE on the client side while the server continually closes and reopens?
Am I totally missing something?
The problem is being caused by sockets hanging around in the TIME_WAIT state which is entered once you close the client's socket. By default the socket will remain in this state for 4 minutes before it is available for reuse. Your client (possibly helped by other processes) is consuming them all within a 4 minute period. See this answer for a good explanation and a possible non-code solution.
Windows dynamically allocates port numbers in the range 1024-5000 (3977 ports) when you do not explicitly bind the socket address. This Python code demonstrates the problem:
import socket

sockets = []
try:
    while True:
        s = socket.socket()
        s.connect(('some_host', 80))
        sockets.append(s.getsockname())
        s.close()
except socket.error:
    # connect() eventually fails once the dynamic port range is exhausted
    pass

print len(sockets)
sockets.sort()
print "Lowest port: ", sockets[0][1], " Highest port: ", sockets[-1][1]
# on Windows you should see something like this...
# 3960
# Lowest port:  1025  Highest port:  5000
If you try to run this immediately again, it should fail very quickly since all dynamic ports are in the TIME_WAIT state.
There are a few ways around this:

1. Manage your own port assignments and use bind() to explicitly bind your client socket to a specific port that you increment each time you create a socket. You'll still have to handle the case where a port is already in use, but you will not be limited to dynamic ports. e.g.

port = 5000
while True:
    s = socket.socket()
    s.bind(('your_host', port))
    s.connect(('some_host', 80))
    s.close()
    port += 1

2. Fiddle with the SO_LINGER socket option. I have found that this sometimes works in Windows (although not exactly sure why):

s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, 1)

3. I don't know if this will help in your particular application, however, it is possible to send multiple XMLRPC requests over the same connection using the multicall method. Basically this allows you to accumulate several requests and then send them all at once. You will not get any responses until you actually send the accumulated requests, so you can essentially think of this as batch processing - does this fit in with your application design? (A short sketch follows this list.)
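As a concrete illustration of the multicall idea, a minimal sketch using Python 2's xmlrpclib.MultiCall; the server URL and the add method are placeholders, not anything from the question:

import xmlrpclib  # xmlrpc.client in Python 3

proxy = xmlrpclib.ServerProxy("http://localhost:8000")  # placeholder URL
multi = xmlrpclib.MultiCall(proxy)
multi.add(2, 3)   # queued locally, nothing is sent yet
multi.add(5, 7)
# One system.multicall request carries both calls over a single connection.
for result in multi():
    print result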
Update:
I tossed this into the code and it seems to be working now.
BOOL x = TRUE;  // flag for SO_REUSEADDR (assumed; not shown in the original snippet)
if(::connect(s_, (sockaddr *) &addr, sizeof(sockaddr)))
{
    int err = WSAGetLastError();
    if(err == 10048) // if "socket in use" error, force kill and reopen socket
    {
        closesocket(s_);
        WSACleanup();
        WSADATA info;
        WSAStartup(MAKEWORD(2,0), &info);
        s_ = socket(AF_INET, SOCK_STREAM, 0);
        setsockopt(s_, SOL_SOCKET, SO_REUSEADDR, (char*)&x, sizeof(BOOL));
    }
}
Basically, if you encounter the 10048 error (socket in use), you can simply close the socket, call WSACleanup, restart WSA, then reset the socket and its sockopt
(the last sockopt may not be necessary)
I must have been missing the WSACleanup/WSAStartup calls before, because closesocket() and socket() were definitely being called.
This error only occurs once every 4000-ish calls.
I am curious as to why this may be, even though this seems to fix it.
If anyone has any input on the subject I would be very curious to hear it.
Do you close the sockets after using them?