I'm a beginner in Python networking. I'm writing a program in which a server prints whatever the client writes on his side. I don't know the best way to break out of the while loop when the client terminates the connection.
from socket import *

host = gethostname()
port = 27000
svr = socket(AF_INET, SOCK_STREAM)
svr.bind((host, port))
svr.listen(1)
print "Waiting for client connection..."
print "..."
c, addr = svr.accept()
print 'Got connection from', addr
while True:
    print c.recv(1024)
I have tried this and it works; hope you get the point.
while True:
    n = raw_input("Some bla bla: ")
    if n.strip() == 'hello':
        break
Consider using Twisted? Twisted has some very nice features that will let you do this. For example, here is my server, which just parrots back what it hears:
from twisted.internet import protocol, reactor
from twisted.protocols.basic import LineReceiver

class Echo(LineReceiver):
    def dataReceived(self, data):
        self.transport.write(data)

    def connectionLost(self, reason):
        print 'Client connection lost. Reason:\n{r}\n'.format(r=reason)
        LineReceiver.connectionLost(self, reason)
        reactor.stop()

class EchoFactory(protocol.Factory):
    def buildProtocol(self, addr):
        return Echo()

reactor.listenTCP(8000, EchoFactory())
reactor.run()
Sockets are nice at a very low level, but sometimes a little too basic... To get you started in the Twisted world, consider reading: http://krondo.com/blog/?page_id=1327
I would just try something like this:

while True:
    userInput = c.recv(1024)  # here you insert whatever your user entered
    # You could use a phrase like "I want to get out of this function" here,
    # so that the user wouldn't just type it accidentally.
    if userInput.strip() == "Insert break statement here":
        break
    else:
        print(userInput)
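A sentinel only helps when the client cooperates, though. If the client simply terminates the connection, recv() returns an empty string, so you can break on that instead. A minimal sketch using the c socket from the question (note that an abruptly killed connection may raise socket.error rather than returning ''):

while True:
    data = c.recv(1024)
    if not data:  # recv() returns '' once the client closes the connection
        break
    print data
c.close()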
I wrote two types of greenlets. MyGreenletPUB publishes messages via ZMQ with message type 1 and message type 2.
MyGreenletSUB instances subscribe to the ZMQ PUB based on a parameter ("1" or "2").
The problem here is that when I start my greenlets, the run method in MyGreenletSUB blocks on message = sock.recv() and never returns execution to the other greenlets.
My question is how I can avoid this and start my greenlets asynchronously with a while True, without using gevent.sleep() inside the while loops to switch execution between greenlets.
from gevent.monkey import patch_all
patch_all()

import zmq
import time
import gevent
from gevent import Greenlet

class MyGreenletPUB(Greenlet):
    def _run(self):
        # ZeroMQ Context
        context = zmq.Context()
        # Define the socket using the "Context"
        sock = context.socket(zmq.PUB)
        sock.bind("tcp://127.0.0.1:5680")
        id = 0
        while True:
            gevent.sleep(1)
            id, now = id + 1, time.ctime()
            # Message [prefix][message]
            message = "1#".format(id=id, time=now)
            sock.send(message)
            # Message [prefix][message]
            message = "2#".format(id=id, time=now)
            sock.send(message)
            id += 1

class MyGreenletSUB(Greenlet):
    def __init__(self, b):
        Greenlet.__init__(self)
        self.b = b

    def _run(self):
        context = zmq.Context()
        # Define the socket using the "Context"
        sock = context.socket(zmq.SUB)
        # Define subscription and messages with prefix to accept.
        sock.setsockopt(zmq.SUBSCRIBE, self.b)
        sock.connect("tcp://127.0.0.1:5680")
        while True:
            message = sock.recv()
            print message

g = MyGreenletPUB.spawn()
g2 = MyGreenletSUB.spawn("1")
g3 = MyGreenletSUB.spawn("2")

try:
    gevent.joinall([g, g2, g3])
except KeyboardInterrupt:
    print "Exiting"
By default, ZeroMQ's .recv() method blocks until something has arrived that it can hand back to the caller.
For truly smart, non-blocking agents, rather use the .poll() instance method together with .recv(zmq.NOBLOCK).
Beware that ZeroMQ subscriptions use topic-filter matching from the left, and you may run into issues if unicode and non-unicode strings are being distributed / collected at the same time.
Also, mixing several event loops can become a bit tricky, depending on your control needs. I personally always prefer non-blocking systems, even at the cost of more complex design effort.
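For illustration, here is a minimal sketch of that poll-based receive loop, adapted from the subscriber in the question (the 1-second poll timeout is an arbitrary choice):

import zmq

context = zmq.Context()
sock = context.socket(zmq.SUB)
sock.setsockopt(zmq.SUBSCRIBE, "1")  # same topic-filter as in the question
sock.connect("tcp://127.0.0.1:5680")

poller = zmq.Poller()
poller.register(sock, zmq.POLLIN)

while True:
    events = dict(poller.poll(1000))  # wait at most 1000 ms
    if sock in events:
        message = sock.recv(zmq.NOBLOCK)  # guaranteed not to block now
        print message
    # else: nothing arrived yet -- free to do other useful work here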
I have a problem. I have an echo server which accepts clients, processes their requests, and returns the results to the clients.
Suppose I have two clients, and client 1's request takes 10 seconds to process while client 2's takes 1 second.
When both clients connect to the server at the same time, how do I run both clients' tasks in parallel and return the response first to whichever client finishes first?
I have read that this can be achieved using Python Twisted. I have tried my luck, but I'm unable to do it.
Please help me out of this issue.
Your code (https://trinket.io/python/87fd18ca9e) has many mistakes in terms of async design patterns, but I will only address the most blatant one. There are a few calls to time.sleep(); this is blocking code and causes your program to stop until the sleep is done. The number 1 rule in async programming is: do not use blocking functions! Don't worry, this is a very common mistake, and the Twisted and Python async communities are there to help you :) I'll give you a naive solution for your server:
from twisted.internet.protocol import Factory
from twisted.internet import reactor, protocol, defer, task

def sleep(n):
    return task.deferLater(reactor, n, lambda: None)

class QuoteProtocol(protocol.Protocol):
    def __init__(self, factory):
        self.factory = factory

    def connectionMade(self):
        self.factory.numConnections += 1

    @defer.inlineCallbacks
    def recur_factorial(self, n):
        fact = 1
        print(n)
        for i in range(1, int(n) + 1):
            fact = fact * i
            yield sleep(5)  # async sleep
        defer.returnValue(str(fact))

    def dataReceived(self, data):
        try:
            number = int(data)  # validate data is an int
        except ValueError:
            self.transport.write('Invalid input!')
            return  # "exit" otherwise
        # use Deferreds to write to client after calculation is finished
        deferred_factorial = self.recur_factorial(number)
        deferred_factorial.addCallback(self.transport.write)

    def connectionLost(self, reason):
        self.factory.numConnections -= 1

class QuoteFactory(Factory):
    numConnections = 0

    def buildProtocol(self, addr):
        return QuoteProtocol(self)

reactor.listenTCP(8000, QuoteFactory())
reactor.run()
The main differences are in recur_factorial() and dataReceived(). recur_factorial() now uses a Deferred (look up how inlineCallbacks or coroutines work), which allows functions to execute after the result becomes available. So when data is received, the factorial is calculated, then written to the end user. Finally, there's the new sleep() function, which provides an asynchronous sleep. I hope this helps. Keep reading the Krondo blog.
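To try the server out, assuming it is running locally on port 8000, here is a quick manual client sketch (a plain blocking socket is fine for a test; the response takes a while because of the 5-second async sleeps on the server):

import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.connect(('localhost', 8000))
s.sendall('4')      # ask for 4!
print s.recv(1024)  # prints '24' once the server's Deferred fires
s.close()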
The following successfully hangs on exit:
import threading
import Queue as queue
import time
import sys

class WorkItem(threading.Thread):
    def __init__(self):
        self.P1 = 20
        self.P2 = 40
        threading.Thread.__init__(self)

    def run(self):
        print "P1 = %d" % (self.P1)
        print "P2 = %d" % (self.P2)

class WorkQueue(object):
    def __init__(self, queueLimit=5):
        self.WorkQueue = queue.Queue(queueLimit)
        self.dispatcherThread = threading.Thread(target=self.DequeueWorker)
        self.dispatcherThread.start()
        self.QueueStopEvent = threading.Event()
        self.QueueStopEvent.clear()

    def DequeueWorker(self):
        print "DequeueWorker Enter .."
        while not self.QueueStopEvent.isSet():
            workItem = self.WorkQueue.get(True)
            workItem.start()

    def DispatchToQueue(self, workItem):
        self.WorkQueue.put(workItem, True)

    def Stop(self):
        self.QueueStopEvent.set()
        self.queue = None

def main():
    q = WorkQueue()
    for i in range(1, 20):
        t = WorkItem()
        q.DispatchToQueue(t)
    time.sleep(10)
    q.Stop()

if __name__ == "__main__":
    main()
I can see that DequeueWorker is the one still running and pending, and I'm trying to understand why, since I do signal the stop event. I was expecting the loop to exit.
>>> $w
=> Frame id=0, function=DequeueWorker
Frame id=1, function=run
Frame id=2, function=__bootstrap_inner
Frame id=3, function=__bootstrap
Help appreciated !!
You're calling get with block set to True, which means it will block until an item is actually available in the queue. In your code, once the work queue is exhausted, the next get will indefinitely block since it is waiting for an additional work item that will never come (and not letting the next iteration of the while loop execute, so the status of QueueStopEvent doesn't get checked anymore). Try modifying your DequeueWorker method to this:
def DequeueWorker(self):
    print "DequeueWorker Enter .."
    while not self.QueueStopEvent.isSet():
        try:
            workItem = self.WorkQueue.get(True, timeout=3)
            workItem.start()
        except queue.Empty:
            continue
Now when get is called after the work queue is exhausted, it will time out (after 3 seconds in this case; I chose that arbitrarily) and raise the queue.Empty exception. We simply let the loop continue to its next iteration, where the while condition gets re-evaluated and the loop exits once QueueStopEvent is set.
Other options would be to invoke get with block set to False or to use the get_nowait method inside that try/except block:
workItem = self.WorkQueue.get(False)
workItem = self.WorkQueue.get_nowait()
Note, though, that either of those creates a very tight while loop for as long as the queue is empty.
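Another common pattern, sketched here as an alternative rather than taken from the code above, is to keep the blocking get but have Stop() push a sentinel value into the queue to wake the dispatcher:

def DequeueWorker(self):
    print "DequeueWorker Enter .."
    while not self.QueueStopEvent.isSet():
        workItem = self.WorkQueue.get(True)  # blocking get is fine now
        if workItem is None:  # sentinel pushed by Stop(): exit the loop
            break
        workItem.start()

def Stop(self):
    self.QueueStopEvent.set()
    self.WorkQueue.put(None, True)  # unblock the waiting get

That avoids both the indefinite block and the tight polling loop.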
I am using Tornado's CurlAsyncHTTPClient. My process memory keeps growing for both blocking and non-blocking requests when I instantiate the corresponding HTTP clients for each request. This memory growth does not happen if I have just one instance of the clients (tornado.httpclient.HTTPClient / tornado.httpclient.AsyncHTTPClient) and reuse them.
Also, if I use SimpleAsyncHTTPClient instead of CurlAsyncHTTPClient, this memory growth does not happen, irrespective of how I instantiate.
Here is a sample code that reproduces this:
import tornado.httpclient
import tornado.ioloop
import json
import functools

instantiate_once = False
count = 0

tornado.httpclient.AsyncHTTPClient.configure('tornado.curl_httpclient.CurlAsyncHTTPClient')

hc, io_loop, async_hc = None, None, None
if instantiate_once:
    hc = tornado.httpclient.HTTPClient()
    io_loop = tornado.ioloop.IOLoop()
    async_hc = tornado.httpclient.AsyncHTTPClient(io_loop=io_loop)

def fire_sync_request():
    global count
    if instantiate_once:
        global hc
    if not instantiate_once:
        hc = tornado.httpclient.HTTPClient()
    url = '<Please try with a url>'
    try:
        resp = hc.fetch(url)
    except (Exception, tornado.httpclient.HTTPError) as e:
        print str(e)
    if not instantiate_once:
        hc.close()

def fire_async_requests():
    # generic response callback fn
    def response_callback(response):
        response_callback_info['response_count'] += 1
        if response_callback_info['response_count'] >= request_count:
            io_loop.stop()
    if instantiate_once:
        global io_loop, async_hc
    if not instantiate_once:
        io_loop = tornado.ioloop.IOLoop()
    requests = ['<Please add ur url to try>'] * 5
    response_callback_info = {'response_count': 0}
    request_count = len(requests)
    global count
    count += request_count
    hcs = []
    for url in requests:
        kwargs = {}
        kwargs['method'] = 'GET'
        if not instantiate_once:
            async_hc = tornado.httpclient.AsyncHTTPClient(io_loop=io_loop)
        async_hc.fetch(url, callback=functools.partial(response_callback), **kwargs)
        if not instantiate_once:
            hcs.append(async_hc)
    io_loop.start()
    for hc in hcs:
        hc.close()
    if not instantiate_once:
        io_loop.close()

if __name__ == '__main__':
    import sys
    if sys.argv[1] == 'sync':
        while True:
            output = fire_sync_request()
    elif sys.argv[1] == 'async':
        while True:
            output = fire_async_requests()
Here, set the instantiate_once variable to False and execute python check.py sync or python check.py async: the process memory increases continuously. With instantiate_once=True, this does not happen, and as noted above, it also does not happen with SimpleAsyncHTTPClient regardless of how I instantiate.
I have Python 2.7 / tornado 2.3.2 / pycurl (libcurl/7.26.0 GnuTLS/2.12.20 zlib/1.2.7 libidn/1.25 libssh2/1.4.2 librtmp/2.3).
I could reproduce the same issue with the latest tornado 3.2.
Please help me understand this behaviour and figure out the right way to use tornado as an HTTP library.
HTTPClient and AsyncHTTPClient are designed to be reused, so it will always be more efficient not to recreate them all the time. In fact, AsyncHTTPClient will try to magically detect whether there is an existing AsyncHTTPClient on the same IOLoop and use that instead of creating a new one.
But even though it's better to reuse one HTTP client object, creating many of them as you do here shouldn't leak, as long as you're closing them. This looks like a bug in pycurl: https://github.com/pycurl/pycurl/issues/182
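For reference, the reuse pattern looks something like this (a minimal sketch; example.com is a placeholder URL):

import tornado.httpclient
import tornado.ioloop

def handle_response(response):
    print response.code
    tornado.ioloop.IOLoop.instance().stop()

# With no arguments, AsyncHTTPClient() returns the shared instance bound to
# the current IOLoop instead of constructing a fresh client every time.
client = tornado.httpclient.AsyncHTTPClient()
client.fetch('http://example.com/', callback=handle_response)
tornado.ioloop.IOLoop.instance().start()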
Use pycurl 7.19.5 and this hack to avoid memory leaks:
Your Tornado main file:
tornado.httpclient.AsyncHTTPClient.configure("curl_httpclient_leaks_patched.CurlAsyncHTTPClientEx")
curl_httpclient_leaks_patched.py
from tornado import curl_httpclient

class CurlAsyncHTTPClientEx(curl_httpclient.CurlAsyncHTTPClient):
    def close(self):
        super(CurlAsyncHTTPClientEx, self).close()
        del self._multi
This is a code snippet written in Python to receive SMS via a USB modem. When I run the program, all I get is a status message "OK", but nothing else. How do I fix this so that it prints the messages I am receiving?
import serial

class HuaweiModem(object):
    def __init__(self):
        self.open()

    def open(self):
        self.ser = serial.Serial('/dev/ttyUSB_utps_modem', 115200, timeout=1)
        self.SendCommand('ATZ\r')
        self.SendCommand('AT+CMGF=1\r')

    def SendCommand(self, command, getline=True):
        self.ser.write(command)
        data = ''
        if getline:
            data = self.ReadLine()
        return data

    def ReadLine(self):
        data = self.ser.readline()
        print data
        return data

    def GetAllSMS(self):
        self.ser.flushInput()
        self.ser.flushOutput()
        command = 'AT+CMGL="all"\r'
        print self.SendCommand(command, getline=False)
        self.ser.timeout = 2
        data = self.ser.readline()
        print data
        while data != '':
            data = self.ser.readline()
            if data.find('+cmgl') > 0:
                print data

h = HuaweiModem()
h.GetAllSMS()
In GetAllSMS there are two things I notice:
1) You are using self.ser.readline and not self.ReadLine, so GetAllSMS will not try to print anything (except the first response line) before the OK final response is received, and at that point data.find('+cmgl')>0 will never match.
Might that be the problem?
2) Does print self.SendCommand(command,getline=False) call the function just as if it were written as self.SendCommand(command,getline=False)? (Just checking, since I do not write Python myself.)
In any case, you should rework your AT parsing a bit.
def SendCommand(self,command, getline=True):
The getline parameter here is not a very good abstraction. Leave reading responses out of the SendCommand function; you should rather implement proper parsing of the responses given back by the modem and handle that outside. In the general case, do something like:
self.SendCommand('AT+CSOMECMD\r')
data = self.ser.readline()
while not IsFinalResult(data):
    data = self.ser.readline()
    print data  # or do whatever you want with each line
For commands without any explicit processing of the responses, you can implement a SendCommandAndWaitForFinalResponse function that does the above.
See this answer for more information about an IsFinalResult function.
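For illustration only, here is a rough sketch of what such a helper might look like (hypothetical and incomplete; see the linked answer for the details):

def IsFinalResult(line):
    # Final result codes (V.250 / 3GPP TS 27.007) terminate a modem response.
    final = ('OK', 'ERROR', 'BUSY', 'NO CARRIER', 'NO DIALTONE', 'NO ANSWER')
    line = line.strip()
    return line in final or line.startswith('+CME ERROR') or line.startswith('+CMS ERROR')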
Where you are having problems is in your GetAllSMS function. Replace your GetAllSMS function with mine and see what happens:
def GetAllSMS(self):
    self.ser.flushInput()
    self.ser.flushOutput()
    command = 'AT+CMGL="all"\r'  # to get all messages, both read and unread
    print self.SendCommand(command, getline=False)
    while 1:
        self.ser.timeout = 2
        data = self.ser.readline()
        print data
Or this:

def GetAllSMS(self):
    self.ser.flushInput()
    self.ser.flushOutput()
    command = 'AT+CMGL="all"\r'  # to get all messages, both read and unread
    print self.SendCommand(command, getline=False)
    self.ser.timeout = 2
    data = self.ser.readall()  # you can also use read(10000000)
    print data
That's all...