Killing a blocking thread - python-2.7

I'm having a tough time trying to develop a threaded app wherein the threads are each doing REST calls over the network.
I need to kill active threads after a certain timeout. I've tried every Python 2.7.x approach I've seen on here and can't get this working.
I'm using Python 2.7.6 on OEL Linux (3.8.13-44.1.1.el6uek.x86_64).
Here is a simplified snippet of code:
class cthread(threading.Thread):
    def __init__(self, cfg, host):
        self.cfg = cfg
        self.host = host
        self.runf = False
        self.stop = threading.Event()
        threading.Thread.__init__(self, target=self.collect)

    def terminate(self):
        self.stop.set()

    def collect(self):
        try:
            self.runf = True
            while not self.stop.wait(1):
                # Here I do a urllib2 GET request to a REST service, which could hang
                <rest call>
        finally:
            self.runf = False
timer_t1 = 0
newthr = cthread(cfg, host)
newthr.start()
while True:
    if timer_t1 > 600:
        newthr.terminate()
        break
    time.sleep(30)
    timer_t1 += 30
Basically after my timeout period I need to kill all remaining threads, either gracefully or not.
I'm having a heck of a time getting this to work.
Am I going about this the correct way?

There's no official API to kill a thread in Python.
Since your code relies on urllib2, you could have the main loop periodically pass each thread the time it has left to run and make the urllib2 calls with the timeout option. You could even track the timers inside the threads themselves, using the same urllib2 timeout approach.
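A minimal sketch of that idea (the deadline value and helper name are my assumptions, not the asker's code): urllib2.urlopen() accepts a timeout in seconds, so the GET can never block longer than the time the thread has left.

import socket
import time
import urllib2

DEADLINE = time.time() + 600  # hypothetical overall deadline for all threads

def fetch_within_deadline(url):
    # Bound the blocking call by whatever time remains before the deadline.
    remaining = DEADLINE - time.time()
    if remaining <= 0:
        return None  # out of time; let the thread wind down via its Event
    try:
        return urllib2.urlopen(url, timeout=remaining).read()
    except (urllib2.URLError, socket.timeout):
        # Covers connect timeouts (wrapped in URLError) and read timeouts.
        return None

Combined with the stop Event already in the question, the thread checks self.stop between requests and can never be stuck inside a single request for longer than the remaining budget.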

Related

Why is Celery not executing tasks in parallel in Django?

I am having an issue with Celery; I will explain with the code:
def samplefunction(request):
    print("This is a samplefunction")
    a, b = 5, 6
    myceleryfunction.delay(a, b)
    return Response({"msg": "process execution started"})

@celery_app.task(name="sample celery", base=something)
def myceleryfunction(a, b):
    c = a + b
    my_obj = MyModel()
    my_obj.value = c
    my_obj.save()
In my case, when one person triggers the Celery task it works perfectly. But when many people send requests, the tasks are processed one by one.
So imagine that my Celery function "myceleryfunction" takes 3 minutes to complete the background work. If 10 requests come in at the same time, the last one finishes with a 30-minute delay.
How do I solve this, or is there an alternative?
Thank you
I'm assuming you are running a single worker with default settings. That gives you a worker running with worker_pool=prefork and worker_concurrency=<nr of CPUs>.
If the machine it runs on only has a single CPU, you won't get any parallel running tasks.
To get parallelisation you can (see the command sketches below):
- set worker_concurrency to something > 1, which will use multiple processes in the same worker
- start additional workers
- use celery multi to start multiple workers
- when running the worker in a Docker container, add replicas of the container
See Concurrency for more info.
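For illustration, hedged command-line sketches of the first three options (the app name proj is a placeholder, not from the question):

# one worker with four prefork child processes
celery -A proj worker --concurrency=4

# or several named workers at once via celery multi
celery multi start w1 w2 -A proj -c 2

Both --concurrency (short form -c) and celery multi are part of the standard Celery CLI; pick the number based on how CPU-bound the task is.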

Stopping a While loop when it ends a cycle in Python

This may be a strange request. I have an infinite while loop; each iteration lasts ~7 minutes, then the program sleeps for a couple of minutes to let the computer cool down, and then starts over.
This is how it looks:
import time as t

t_cooling = 120
while True:
    try:
        # 7 minutes of uninterrupted calculations here
        t.sleep(t_cooling)
    except KeyboardInterrupt:
        break
Right now if I want to interrupt the process, I have to wait until the program sleeps for 2 minutes, otherwise all the calculations done in the running cycle are wasted. Moreover, the calculations involve writing to files and working with multiprocessing, so interrupting during the calculation phase is not only wasteful but can potentially corrupt the output files.
I'd like to know if there is a way to signal to the program that the current cycle is the last one it has to execute, so that there is no risk of interrupting at the wrong moment. One more limitation: it has to be a solution that works from the command line. It's not possible to add a window with a stop button on the computer the program is running on; the machine has a basic Linux installation with no graphical environment. The computer is not particularly powerful or new, and I need to use as much CPU and RAM as possible.
Hope everything is clear enough.
Not so elegant, but it works
#!/usr/bin/env python
import signal
import time as t

stop = False

def signal_handler(sig, frame):
    print('You pressed Ctrl+C!')
    global stop
    stop = True

signal.signal(signal.SIGINT, signal_handler)
print('Press Ctrl+C')

t_cooling = 1
while not stop:
    t.sleep(t_cooling)
    print('Looping')
You can use a separate Thread and an Event to signal the exit request to the main thread:
import time
import threading

evt = threading.Event()

def input_thread():
    while True:
        if input("") == "quit":
            evt.set()
            print("Exit requested")
            break

threading.Thread(target=input_thread).start()

t_cooling = 5
while True:
    # 7 minutes of uninterrupted calculations here
    print("starting calculation")
    time.sleep(5)
    if evt.is_set():
        print("exiting")
        break
    print("cooldown...")
    time.sleep(t_cooling)
Just for completeness, I post here my solution. It's very raw, but it works.
import time as t

t_cooling = 120
while True:
    # 7 minutes of uninterrupted calculations here
    f = open('stop', 'r')
    stop = f.readline().strip()
    f.close()
    if stop == '0':
        t.sleep(t_cooling)
    else:
        break
I just have to create a file named stop and write a 0 in it. When that 0 is changed to something else, the program stops at the end of the cycle.
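A slightly more defensive variant of the same idea (my own sketch, not part of the original answer) treats the mere existence of a sentinel file as the stop signal, so nothing breaks if the file is absent:

import os
import time as t

t_cooling = 120
while True:
    # 7 minutes of uninterrupted calculations here
    if os.path.exists('stop'):  # `touch stop` from another shell ends the run
        break
    t.sleep(t_cooling)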

ZeroMQ: why a Client Server program without a use of multiprocessing fails?

I recently encountered ZeroMQ (pyzmq) and found a very useful piece of code on a website, Client Server with REQ and REP, which I modified so that it makes only a single request. My code is:
import zmq
import sys
import time  # needed for the sleep calls below
from multiprocessing import Process

port = 5556

def server():
    context = zmq.Context()
    socket = context.socket(zmq.REP)
    socket.bind("tcp://*:%s" % port)
    print "Running server on port: %s" % port
    # serves only 5 requests and dies
    #for reqnum in range(4):
    # Wait for next request from client
    message = socket.recv()
    print "Received request : %s from client" % message
    socket.send("ACK from %s" % port)

def client():
    context = zmq.Context()
    socket = context.socket(zmq.REQ)
    #for port in ports:
    socket.connect("tcp://localhost:%s" % port)
    #for request in range(20):
    print "client Sending request to server"
    socket.send("Hello")
    message = socket.recv()
    print "Received ACK from server [", message, "]"
    time.sleep(1)

if __name__ == "__main__":
    Process(target=server, args=()).start()
    Process(target=client, args=()).start()
    time.sleep(1)
I realise that ZeroMQ is powerful, especially with multiprocessing/multithreading, but I was wondering if it is possible to call the server and client methods without launching them as a Process in __main__. For example, I tried calling them like:
if __name__ == "__main__":
server()
client()
For some reason the server started but not the client and I had to hard exit the program.
Is there any way to achieve this without Process calling? If not, then is there a socket program ( with or without a client server type architecture ) that functions exactly like the one above? ( I want a single program, not 2 programs running in different terminals as a classic CL-SE program ).
Using Ubuntu 14.04, 32-bit VM with Python-2.7
Simply put: the server() call got to run, and the client() never did.
Why?
Because purely serial scheduling stepped into the server() code, where a Context instance was instantiated, a Socket instance was created, and then the call to the blocking socket.recv() method hung the whole process in an unlimited and uncontrollable waiting state: the REP side stood ready to receive a message, but no live counterparty existed to send one, because client() had not been reached yet.
Yes, distributed computing has several new dimensions (degrees of freedom) to care about; the elementary (non-)presence and ordering of events is the first one this trivial scenario exposes.
Wherever I can advocate it, I do: never use the blocking form of .recv(), and read about the risk of the principally un-salvageable REQ/REP mutual deadlock (you cannot know when it will happen, but you can be certain that it will, and certain that the deadlocked counterparties cannot be salvaged once it does).
So, welcome to the realms of distributed-processing reality.
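To answer the original question directly: a single process, and even a single thread, can work, provided a blocking recv() is never issued before the matching send() has happened. A minimal sketch of the idea (my own illustration, not code from the question); ZeroMQ's I/O thread queues the messages, so the two sockets can be stepped through by hand:

import zmq

port = 5556
context = zmq.Context()

rep = context.socket(zmq.REP)
rep.bind("tcp://*:%s" % port)

req = context.socket(zmq.REQ)
req.connect("tcp://localhost:%s" % port)

req.send(b"Hello")    # client half: the request is queued by the I/O thread
print(rep.recv())     # server half: the request has arrived, so this returns
rep.send(b"ACK")      # server half: reply
print(req.recv())     # client half: collect the reply

req.close()
rep.close()
context.term()

For anything less contrived, a zmq.Poller with a timeout is the safer tool than a bare blocking recv(), per the warning above.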

How to run 10 processes at a time from a list of 1000 processes in Python 2.7

import multiprocessing

def get_url(url):
    # conditions
    pass

threads = []
for url in urls:  # urls is the list of 1000 URLs
    thread = multiprocessing.Process(target=get_url, args=(url,))
    threads.append(thread)
for st in threads:
    st.start()
Now I want to execute 10 requests at a time; once those 10 are completed, pick another 10, and so on. I was going through the documentation but haven't found a matching use case. I am using this module for the first time. Any help would be appreciated.
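The standard tool for exactly this pattern is multiprocessing.Pool, which caps the number of worker processes. A minimal sketch, assuming get_url takes a single URL (the urls list is a placeholder to fill in):

import multiprocessing

def get_url(url):
    # conditions / the actual request logic goes here
    pass

if __name__ == "__main__":
    urls = []  # fill with the 1000 URLs
    pool = multiprocessing.Pool(processes=10)  # never more than 10 workers alive
    pool.map(get_url, urls)                    # blocks until every URL is handled
    pool.close()
    pool.join()

Unlike starting 10 processes, waiting for all of them, and starting 10 more, the pool hands a new URL to each worker as soon as it finishes its previous one, so all 10 slots stay busy.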

Django: Gracefully restart nginx + fastcgi sites to reflect code changes?

Common situation: I have a client on my server who may update some of the code in his Python project. He can ssh into his shell and pull from his repository and all is fine -- but the running code is held in memory (as far as I know), so I need to actually kill the fastcgi process and restart it for the code change to take effect.
I know I can gracefully restart fcgi but I don't want to have to manually do this. I want my client to update the code, and within 5 minutes or whatever, to have the new code running under the fcgi process.
Thanks
First off, if uptime is important to you, I'd suggest making the client do it. It can be as simple as giving him a command called deploy-code. With your approach, if there is an error in his code, fixing it takes up to a 10 minute turnaround (read: downtime), assuming he gets it right on the first try.
That said, if you actually want to do this, you should create a daemon which will look for files modified within the last 5 minutes. If it detects one, it will execute the reboot command.
Code might look something like:
import os, time

CODE_DIR = '/tmp/foo'

while True:
    restarted = False
    time.sleep(5 * 60)
    for root, dirs, files in os.walk(CODE_DIR):
        if restarted:
            break
        for filename in files:
            if restarted:
                break
            updated_on = os.path.getmtime(os.path.join(root, filename))
            current_time = time.time()
            if current_time - updated_on <= 6 * 60:  # 6 min
                # 6 min could offer false negatives, but that's better
                # than false positives
                restarted = True
                print "We should execute the restart command here."