Pooling a member function in Python 2.7 - python-2.7

I am getting a weird error with this code. I am trying to pool instances of a worker function that is a member of the class invoking the pool. While I had my doubts about whether this would work, I am not sure of the exact reason why it doesn't. The error thrown when I run this is a "PicklingError". Can someone explain why?
import multiprocessing
import time

class Pooler(multiprocessing.Process):
    def __init__(self):
        multiprocessing.Process.__init__(self)
    def run(self):
        pool = multiprocessing.Pool(10)
        print "starting pool"
        pool.map(self.worker, xrange(10), chunksize=10)
    def worker(self, arg):
        print "worker - arg - {}".format(arg)

if __name__ == '__main__':
    jobs = []
    for i in range(5):
        proc = Pooler()
        jobs.append(proc)
        proc.start()
    for j in jobs:
        j.join()
    print "...ending"
UPDATE
I changed the code to look as follows:
import multiprocessing
import time

class Pooler(multiprocessing.Process):
    def __init__(self):
        multiprocessing.Process.__init__(self)
    def run(self):
        pool = multiprocessing.Pool(1)
        print "starting pool"
        obj = Worker()
        pool.map(obj.run, range(10), chunksize=1)

class Worker(object):
    def __init__(self):
        pass
    def run(self, nums):
        print "worker - arg - {}".format(nums)

if __name__ == '__main__':
    jobs = []
    for i in range(1):
        proc = Pooler()
        jobs.append(proc)
        proc.start()
    for j in jobs:
        j.join()
    print "...ending"
but I am still getting the following error:
starting pool
Process Pooler-1:
Traceback (most recent call last):
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "pool_test.py", line 13, in run
    pool.map(obj.run, range(10), chunksize=1)
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 251, in map
    return self.map_async(func, iterable, chunksize).get()
  File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/multiprocessing/pool.py", line 567, in get
    raise self._value
PicklingError: Can't pickle <type 'instancemethod'>: attribute lookup __builtin__.instancemethod failed
...ending

The answer is simple: multiprocessing uses pickle to serialize objects and pass them between processes -- and, as the error states, pickle can't serialize an instancemethod. You need a better serializer, like dill, if you want to serialize an instancemethod (see https://stackoverflow.com/a/21345273/2379433).
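You can see the failure and the fix directly in the interpreter. Here's a minimal sketch (assuming dill is installed, e.g. via pip install dill; the Foo class is just an illustration):

import pickle
import dill

class Foo(object):
    def bar(self, x):
        return x * 2

obj = Foo()

# Standard pickle chokes on the bound method...
try:
    pickle.dumps(obj.bar)
except pickle.PicklingError as e:
    print "pickle failed: {}".format(e)

# ...while dill serializes it and round-trips it intact.
restored = dill.loads(dill.dumps(obj.bar))
print restored(21)  # prints 42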
So what can you do about multiprocessing? Fortunately, there is a fork of multiprocessing called multiprocess that uses dill, and if you use it, your object will serialize and your code will work. It's a one-line change that comes with the bonus of being able to run from the interpreter as well as serialize almost all of the objects in Python. (The link I posted above is for pathos and dill, but pathos is built on top of multiprocess, so it's still very relevant.)
>>> import multiprocess as multiprocessing
>>> import time
>>> class Pooler(multiprocessing.Process):
...     def __init__(self):
...         multiprocessing.Process.__init__(self)
...     def run(self):
...         pool = multiprocessing.Pool(1)
...         print "starting pool"
...         obj = Worker()
...         pool.map(obj.run, range(10), chunksize=1)
...
>>> class Worker(object):
...     def __init__(self):
...         pass
...     def run(self, nums):
...         print "worker - arg - {}".format(nums)
...
>>> if __name__ == '__main__':
...     jobs = []
...     for i in range(1):
...         proc = Pooler()
...         jobs.append(proc)
...         proc.start()
...     for j in jobs:
...         j.join()
...     print "...ending"
...
starting pool
worker - arg - 0
worker - arg - 1
worker - arg - 2
worker - arg - 3
worker - arg - 4
worker - arg - 5
worker - arg - 6
worker - arg - 7
worker - arg - 8
worker - arg - 9
...ending
>>>

Related

PyQt4/QProcess issues with Nuke v9

I am trying to utilize a QProcess to run renders in Nuke at my workplace. The reason I want to use a QProcess is that I've set up a task manager with the help of the community at Stack Overflow, which takes a list of commands, runs them sequentially one by one, and allows me to display the output. You can view the question I posted here:
How to update UI with output from QProcess loop without the UI freezing?
Now I am trying to run Nuke renders through this "Task Manager". But every time I do, it just gives me an error that the QProcess is destroyed while still running. I tested this with subprocess and that worked totally fine, so I am not sure why the renders are not working through QProcess.
To do more testing, I wrote a simplified version at home. The first issue I ran into is that PyQt4 couldn't be found from Nuke's python.exe, even though I have PyQt4 installed for my main Python. Apparently there is a compatibility issue with my installed PyQt4, since my main Python version is 2.7.12 while Nuke's Python version is 2.7.3. So I thought, "fine, then I'll just install PyQt4 directly inside my Nuke directory". I grabbed this link and installed this PyQt version into my Nuke directory:
http://sourceforge.net/projects/pyqt/files/PyQt4/PyQt-4.10.3/PyQt4-4.10.3-gpl-Py2.7-Qt4.8.5-x64.exe
So I ran my little test, and it seems to do the same thing as at my workplace: the QProcess just gets destroyed. I thought maybe adding waitForFinished() would do something different, but then it gives me this error that reads:
The procedure entry point ??4QString##QEAAAEAV0#$$QEAV0##Z could not be located in the dynamic link library QtCore4.dll
It also gives me this error:
ImportError: Failed to load C:\Program Files\Nuke9.0v8\nuke-9.0.8.dll
Now at this point I can't really do any more testing at home, and my studio is closed for the holidays. So I have two questions I'd like to ask:
1) What is this error I am seeing about "procedure entry point"? It only happens when I try to call something in a QProcess instance.
2) Why is my QProcess being destroyed before the render is finished? How come this doesn't happen with subprocess? How can I submit a Nuke render job while achieving the same results as subprocess?
Here is my test code:
import os
import sys
import subprocess
import PyQt4
from PyQt4 import QtCore

class Task:
    def __init__(self, program, args=None):
        self._program = program
        self._args = args or []

    @property
    def program(self):
        return self._program

    @property
    def args(self):
        return self._args

class SequentialManager(QtCore.QObject):
    started = QtCore.pyqtSignal()
    finished = QtCore.pyqtSignal()
    progressChanged = QtCore.pyqtSignal(int)
    dataChanged = QtCore.pyqtSignal(str)
    # ^ this is how we can send a signal and can declare what type
    # of information we want to pass with this signal

    def __init__(self, parent=None):
        # super(SequentialManager, self).__init__(parent)
        # QtCore.QObject.__init__(self, parent)
        QtCore.QObject.__init__(self)
        self._progress = 0
        self._tasks = []
        self._process = QtCore.QProcess(self)
        self._process.setProcessChannelMode(QtCore.QProcess.MergedChannels)
        self._process.finished.connect(self._on_finished)
        self._process.readyReadStandardOutput.connect(self._on_readyReadStandardOutput)

    def execute(self, tasks):
        self._tasks = iter(tasks)
        # this 'iter()' method creates an iterator object
        self.started.emit()
        self._progress = 0
        self.progressChanged.emit(self._progress)
        self._execute_next()

    def _execute_next(self):
        try:
            task = next(self._tasks)
        except StopIteration:
            return False
        else:
            print 'starting %s' % task.args
            self._process.start(task.program, task.args)
            return True

    def _on_finished(self):
        self._process_task()
        if not self._execute_next():
            self.finished.emit()

    def _on_readyReadStandardOutput(self):
        output = self._process.readAllStandardOutput()
        result = output.data().decode()
        self.dataChanged.emit(result)

    def _process_task(self):
        self._progress += 1
        self.progressChanged.emit(self._progress)

class outputLog(QtCore.QObject):
    def __init__(self, parent=None, parentWindow=None):
        QtCore.QObject.__init__(self)
        self._manager = SequentialManager(self)

    def startProcess(self, tasks):
        # self._manager.progressChanged.connect(self._progressbar.setValue)
        self._manager.dataChanged.connect(self.on_dataChanged)
        self._manager.started.connect(self.on_started)
        self._manager.finished.connect(self.on_finished)
        self._manager.execute(tasks)

    @QtCore.pyqtSlot()
    def on_started(self):
        print 'process started'

    @QtCore.pyqtSlot()
    def on_finished(self):
        print 'finished'

    @QtCore.pyqtSlot(str)
    def on_dataChanged(self, message):
        if message:
            print message

def nukeTestRender():
    import nuke
    nuke.scriptOpen('D:/PC6/Documents/nukeTestRender/nukeTestRender.nk')
    writeNode = None
    for node in nuke.allNodes():
        if node.Class() == 'Write':
            writeNode = node
    framesList = [1, 20, 30, 40]
    fr = nuke.FrameRanges(framesList)
    # nuke.execute(writeNode, fr)
    for x in range(20):
        print 'random'

def run():
    nukePythonEXE = 'C:/Program Files/Nuke9.0v8/python.exe'
    thisFile = os.path.dirname(os.path.abspath("__file__"))
    print thisFile
    cmd = '"%s" %s renderCheck' % (nukePythonEXE, __file__)
    cmd2 = [__file__, 'renderCheck']
    cmdList = [Task(nukePythonEXE, cmd2)]
    # subprocess.call(cmd, stdin=None, stdout=None, stderr=None, shell=False)
    taskManager = outputLog()
    taskManager.startProcess(cmdList)
    taskManager._manager._process.waitForFinished()

if __name__ == "__main__":
    print sys.argv
    if len(sys.argv) == 1:
        run()
    elif len(sys.argv) == 2:
        nukeTestRender()
I managed to come up with an answer, so I will write the details below.
Basically, I was getting the error with the installed PyQt4 because it was not compatible with my version of Nuke, and it is apparently recommended to use the PySide that ships with Nuke instead. However, Nuke's Python executable cannot natively find PySide; a few paths needed to be added to sys.path:
paths = ['C:\\Program Files\\Nuke9.0v8\\lib\\site-packages',
         'C:\\Users\\Desktop02\\.nuke',
         'C:\\Program Files\\Nuke9.0v8\\plugins',
         'C:\\Program Files\\Nuke9.0v8\\pythonextensions\\site-packages\\setuptools-0.6c11-py2.6.egg',
         'C:\\Program Files\\Nuke9.0v8\\pythonextensions\\site-packages\\protobuf-2.5.0-py2.6.egg',
         'C:\\Program Files\\Nuke9.0v8\\pythonextensions\\site-packages',
         'C:\\Program Files\\Nuke9.0v8\\plugins\\modules',
         'C:\\Program Files\\Nuke9.0v8\\configs\\Python\\site-packages',
         'C:\\Users\\Desktop02\\.nuke\\Python\\site-packages']

for path in paths:
    sys.path.append(path)
I found the missing paths by opening both the Nuke GUI and the standalone Python executable and comparing the two sys.path lists to see what the Python executable was lacking.
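A quick way to make that comparison (a sketch; the dump file path is hypothetical) is to write sys.path out from the Nuke GUI's Script Editor, then take the set difference from the standalone interpreter:

# Step 1, inside the Nuke GUI's Script Editor:
# with open('C:/temp/gui_paths.txt', 'w') as f:
#     f.write('\n'.join(sys.path))

# Step 2, inside Nuke's python.exe:
import sys

with open('C:/temp/gui_paths.txt') as f:
    gui_paths = set(f.read().splitlines())

# Anything the GUI has that this interpreter lacks is a candidate for sys.path.append()
for path in sorted(gui_paths - set(sys.path)):
    print path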
And to answer my own main question: calling waitForFinished(-1) on the QProcess instance overrides the function's default 30-second timeout. The answer came from this thread:
QProcess and shell : Destroyed while process is still running
So here is my resulting working code:
import os
import sys
import subprocess

sysArgs = sys.argv
try:
    import nuke
    from PySide import QtCore
except ImportError:
    raise ImportError('nuke not currently importable')

class Task:
    def __init__(self, program, args=None):
        self._program = program
        self._args = args or []

    @property
    def program(self):
        return self._program

    @property
    def args(self):
        return self._args

class SequentialManager(QtCore.QObject):
    started = QtCore.Signal()
    finished = QtCore.Signal()
    progressChanged = QtCore.Signal(int)
    dataChanged = QtCore.Signal(str)
    # ^ this is how we can send a signal and can declare what type
    # of information we want to pass with this signal

    def __init__(self, parent=None):
        # super(SequentialManager, self).__init__(parent)
        # QtCore.QObject.__init__(self, parent)
        QtCore.QObject.__init__(self)
        self._progress = 0
        self._tasks = []
        self._process = QtCore.QProcess(self)
        self._process.setProcessChannelMode(QtCore.QProcess.MergedChannels)
        self._process.finished.connect(self._on_finished)
        self._process.readyReadStandardOutput.connect(self._on_readyReadStandardOutput)

    def execute(self, tasks):
        self._tasks = iter(tasks)
        # this 'iter()' method creates an iterator object
        self.started.emit()
        self._progress = 0
        self.progressChanged.emit(self._progress)
        self._execute_next()

    def _execute_next(self):
        try:
            task = next(self._tasks)
        except StopIteration:
            return False
        else:
            print 'starting %s' % task.args
            self._process.start(task.program, task.args)
            return True

    def _on_finished(self):
        self._process_task()
        if not self._execute_next():
            self.finished.emit()

    def _on_readyReadStandardOutput(self):
        output = self._process.readAllStandardOutput()
        result = output.data().decode()
        self.dataChanged.emit(result)

    def _process_task(self):
        self._progress += 1
        self.progressChanged.emit(self._progress)

class outputLog(QtCore.QObject):
    def __init__(self, parent=None, parentWindow=None):
        QtCore.QObject.__init__(self)
        self._manager = SequentialManager(self)

    def startProcess(self, tasks):
        # self._manager.progressChanged.connect(self._progressbar.setValue)
        self._manager.dataChanged.connect(self.on_dataChanged)
        self._manager.started.connect(self.on_started)
        self._manager.finished.connect(self.on_finished)
        self._manager.execute(tasks)

    @QtCore.Slot()
    def on_started(self):
        print 'process started'

    @QtCore.Slot()
    def on_finished(self):
        print 'finished'

    @QtCore.Slot(str)
    def on_dataChanged(self, message):
        if message:
            print message

def nukeTestRender():
    import nuke
    nuke.scriptOpen('D:/PC6/Documents/nukeTestRender/nukeTestRender.nk')
    writeNode = None
    for node in nuke.allNodes():
        if node.Class() == 'Write':
            writeNode = node
    framesList = [1, 20, 30, 40]
    fr = nuke.FrameRanges(framesList)
    nuke.execute(writeNode, fr)
    # nuke.execute(writeNode, start=1, end=285)
    for x in range(20):
        print 'random'

def run():
    nukePythonEXE = 'C:/Program Files/Nuke9.0v8/python.exe'
    thisFile = os.path.dirname(os.path.abspath("__file__"))
    print thisFile
    cmd = '"%s" %s renderCheck' % (nukePythonEXE, sysArgs[0])
    cmd2 = [sysArgs[0], 'renderCheck']
    cmdList = [Task(nukePythonEXE, cmd2)]
    # subprocess.call(cmd, stdin=None, stdout=None, stderr=None, shell=False)
    taskManager = outputLog()
    taskManager.startProcess(cmdList)
    taskManager._manager._process.waitForFinished(-1)

if __name__ == "__main__":
    print sys.argv
    if len(sysArgs) == 1:
        run()
    elif len(sysArgs) == 2:
        nukeTestRender()
For whatever reason, PySide refuses to load for me unless the nuke module is imported first. Also, there's a known issue where importing nuke wipes all sys.argv arguments, so those have to be stored somewhere before the nuke import...

multiprocessing pool hangs in jupyter notebook

I have a very simple script which is the following:
import multiprocessing as multi

def call_other_thing_with_multi():
    P = multi.Pool(3)
    P.map(other_thing, range(0, 5))
    P.join()

def other_thing(arg):
    print(arg)
    return arg**2.

call_other_thing_with_multi()
When I call this, my code hangs in perpetuity. This is on Windows with Python 2.7.
Thanks for any guidance!
As per the documentation, you need to call close() before join():
import multiprocessing as multi

def call_other_thing_with_multi():
    P = multi.Pool(3)
    P.map(other_thing, range(0, 5))
    P.close()  # <-- calling close before P.join()
    P.join()
    print('END')

def other_thing(arg):
    print(arg)
    return arg**2.

call_other_thing_with_multi()
Prints:
0
1
2
3
4
END
EDIT: Better yet, use a context manager so you can't forget the cleanup. Note that Pool only supports the with statement on Python 3.3+:

def call_other_thing_with_multi():
    with multi.Pool(3) as P:
        P.map(other_thing, range(0, 5))
    print('END')
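On Python 2.7, where Pool is not yet a context manager, contextlib.closing gives the same safety. A minimal sketch (other_thing as defined above):

import multiprocessing as multi
from contextlib import closing

def call_other_thing_with_multi():
    # closing() guarantees P.close() runs even if map() raises
    with closing(multi.Pool(3)) as P:
        P.map(other_thing, range(0, 5))
    P.join()  # wait for the workers to exit after close()
    print('END')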

python, SocketServer: How to use the main thread timeout method in a multi-threaded scenario

My requirement is to close the main thread after more than 2 minutes of inactivity. I called serve_forever(), but I don't know how to close the main thread. My code:
import SocketServer, socket
import threading, time, re

last_request_time = 0

class ThreadSocketServer(SocketServer.ThreadingMixIn, SocketServer.TCPServer):
    pass

class RequestHandler(SocketServer.BaseRequestHandler):
    def handle(self):
        global last_request_time
        last_request_time = time.time()
        # print last_request_time
        intact_data = []
        while True:
            data = str(self.request.recv(1024))
            intact_data.append(data)
            # if response[-16:-1].find('clientcomplete'):
            if re.search('clientcomplete', data):
                print 'server recv complete'
                self.request.send('Finished')
                break
        self.request.close()

if __name__ == '__main__':
    server = ThreadSocketServer(('192.168.3.33', 12345), RequestHandler)
    # server.handle_request()
    server.serve_forever()
I tried server.handle_request(), but it doesn't seem to support multithreading.
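One possible approach (a sketch, not from the original thread): keep serve_forever() in the main thread and run a daemon watchdog thread that polls last_request_time and calls server.shutdown() once the server has been idle for two minutes. shutdown() makes serve_forever() return, letting the main thread exit:

def watchdog(server, idle_limit=120):
    # Poll the timestamp set by RequestHandler.handle(); once no request
    # has arrived for idle_limit seconds, stop the serve_forever() loop.
    while True:
        time.sleep(5)
        if last_request_time and time.time() - last_request_time > idle_limit:
            server.shutdown()  # must be called from a thread other than the server's
            break

if __name__ == '__main__':
    server = ThreadSocketServer(('192.168.3.33', 12345), RequestHandler)
    monitor = threading.Thread(target=watchdog, args=(server,))
    monitor.daemon = True
    monitor.start()
    server.serve_forever()  # returns once the watchdog calls shutdown()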

How to get the "full" async result in Celery link_error callback

I have Celery 3.1.18 running with Django 1.6.11 and RabbitMQ 3.5.4, and I am trying to test my async task in a failure state (CELERY_ALWAYS_EAGER=True). However, I cannot get the proper "result" in the error callback. The example in the Celery docs shows:
@app.task(bind=True)
def error_handler(self, uuid):
    result = self.app.AsyncResult(uuid)
    print('Task {0} raised exception: {1!r}\n{2!r}'.format(
          uuid, result.result, result.traceback))
When I do this, my result is still "PENDING", result.result = '', and result.traceback = ''. But the actual result returned by my .apply_async call has the right "FAILURE" state and traceback.
My code (basically a Django Rest Framework endpoint that parses a .tar.gz file and then sends a notification back to the user when the file is done parsing):
views.py:
from producer_main.celery import app as celery_app

@celery_app.task()
def _upload_error_simple(uuid):
    print uuid
    result = celery_app.AsyncResult(uuid)
    print result.backend
    print result.state
    print result.result
    print result.traceback
    msg = 'Task {0} raised exception: {1!r}\n{2!r}'.format(uuid,
                                                           result.result,
                                                           result.traceback)

class UploadNewFile(APIView):
    def post(self, request, repository_id, format=None):
        try:
            uploaded_file = self.data['files'][self.data['files'].keys()[0]]
            self.path = default_storage.save('{0}/{1}'.format(settings.MEDIA_ROOT,
                                                              uploaded_file.name),
                                             uploaded_file)
            print type(import_file)
            self.async_result = import_file.apply_async((self.path, request.user),
                                                        link_error=_upload_error_simple.s())
            print 'results from self.async_result:'
            print self.async_result.id
            print self.async_result.backend
            print self.async_result.state
            print self.async_result.result
            print self.async_result.traceback
            return Response()
        except (PermissionDenied, InvalidArgument, NotFound, KeyError) as ex:
            gutils.handle_exceptions(ex)
tasks.py:
from producer_main.celery import app
from utilities.general import upload_class

@app.task
def import_file(path, user):
    """Asynchronously import a course."""
    upload_class(path, user)
celery.py:
"""
As described in
http://celery.readthedocs.org/en/latest/django/first-steps-with-django.html
"""
from __future__ import absolute_import
import os
import logging
from celery import Celery
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'producer_main.settings')
from django.conf import settings
log = logging.getLogger(__name__)
app = Celery('producer') # pylint: disable=invalid-name
# Using a string here means the worker will not have to
# pickle the object when using Windows.
app.config_from_object('django.conf:settings')
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS) # pragma: no cover
#app.task(bind=True)
def debug_task(self):
print('Request: {0!r}'.format(self.request))
My backend is configured as follows:
CELERY_ALWAYS_EAGER = True
CELERY_EAGER_PROPAGATES_EXCEPTIONS = False
BROKER_URL = 'amqp://'
CELERY_RESULT_BACKEND = 'redis://localhost'
CELERY_RESULT_PERSISTENT = True
CELERY_IGNORE_RESULT = False
When I run my unittest for the link_error state, I get:
Creating test database for alias 'default'...
<class 'celery.local.PromiseProxy'>
130ccf13-c2a0-4bde-8d49-e17eeb1b0115
<celery.backends.redis.RedisBackend object at 0x10aa2e110>
PENDING
None
None
results from self.async_result:
130ccf13-c2a0-4bde-8d49-e17eeb1b0115
None
FAILURE
Non .zip / .tar.gz file passed in.
Traceback (most recent call last):
So the task results are not available in my _upload_error_simple() method, but they are available from the returned self.async_result variable...
I could not get the link and link_error callbacks to work, so I finally had to use the on_failure and on_success task methods described in the docs and this SO question. My tasks.py then looks like:
class ErrorHandlingTask(Task):
    abstract = True

    def on_failure(self, exc, task_id, targs, tkwargs, einfo):
        msg = 'Import of {0} raised exception: {1!r}'.format(targs[0].split('/')[-1],
                                                             str(exc))

    def on_success(self, retval, task_id, targs, tkwargs):
        msg = "Upload successful. You may now view your course."

@app.task(base=ErrorHandlingTask)
def import_file(path, user):
    """Asynchronously import a course."""
    upload_class(path, user)
You appear to have _upload_error() as a bound method of your class - this is probably not what you want. Try making it a stand-alone task:
@celery_app.task(bind=True)
def _upload_error(self, uuid):
    result = celery_app.AsyncResult(uuid)
    msg = 'Task {0} raised exception: {1!r}\n{2!r}'.format(uuid,
                                                           result.result,
                                                           result.traceback)

class Whatever(object):
    ....
        self.async_result = import_file.apply_async((self.path, request.user),
                                                    link=self._upload_success.s(
                                                        "Upload finished."),
                                                    link_error=_upload_error.s())
In fact, there's no need for the self parameter since it's not used, so you could just do this:
@celery_app.task()
def _upload_error(uuid):
    result = celery_app.AsyncResult(uuid)
    msg = 'Task {0} raised exception: {1!r}\n{2!r}'.format(uuid,
                                                           result.result,
                                                           result.traceback)
Note the absence of bind=True and self.
Be careful with UUID instances!
If you try to get the status of a task with an id that is a UUID instance rather than a string, you will only ever get a PENDING status.
from uuid import UUID
from celery.result import AsyncResult
task_id = UUID('d4337c01-4402-48e9-9e9c-6e9919d5e282')
print(AsyncResult(task_id).state)
# PENDING
print(AsyncResult(str(task_id)).state)
# SUCCESS

Django multiprocessing and empty queue after put

I'm trying to make something like a "task manager" using a thread in Django which will wait for jobs:
import multiprocessing
from Queue import Queue

def task_maker(queue_obj):
    while True:
        try:
            print queue_obj.qsize()  # << always prints 0
            _data = queue_obj.get(timeout=10)
            if _data:
                _data['function'](*_data['args'], **_data['kwargs'])
        except Empty:
            pass
        except Exception as e:
            print e

tasks = Queue()
stream = multiprocessing.Process(target=task_maker, args=(tasks,))
stream.start()

def add_task(func=lambda: None, args=(), kwargs={}):
    try:
        tasks.put({
            'function': func,
            'args': args,
            'kwargs': kwargs
        })
        print tasks.qsize()  # prints a normal size: 1, 2, 3, 4...
    except Exception as e:
        print e
I'm calling add_task in views.py when a user makes a request.
Why is the queue in "stream" always empty? What am I doing wrong?
There are two issues with the current code: 1) with multiprocessing (but not threading), the qsize() function is unreliable -- I suggest you don't use it, as it is confusing. 2) You can't directly modify an object that's been taken from a queue.
Consider two processes sending data back and forth. One won't know whether the other has modified some data, because each process's data is private. To communicate, send data explicitly, with Queue.put() or through a Pipe.
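For completeness, a minimal sketch of the Pipe variant (one value each way):

import multiprocessing

def child(conn):
    conn.send(conn.recv() * 2)  # read one value, send back its double
    conn.close()

if __name__ == '__main__':
    parent_conn, child_conn = multiprocessing.Pipe()
    p = multiprocessing.Process(target=child, args=(child_conn,))
    p.start()
    parent_conn.send(21)
    print parent_conn.recv()  # prints 42
    p.join()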
The general way a producer/consumer system works is this: 1) jobs are stuffed into a queue, 2) a worker blocks, waiting for work; when a job appears, it puts the result on a different queue, 3) a manager or 'beancounter' process consumes the output from the second queue and prints or otherwise processes it.
Have fun!
#!/usr/bin/env python

import logging, multiprocessing, sys

def myproc(arg):
    return arg * 2

def worker(inqueue, outqueue):
    logger = multiprocessing.get_logger()
    logger.info('start')
    while True:
        job = inqueue.get()
        logger.info('got %s', job)
        outqueue.put(myproc(job))

def beancounter(inqueue):
    while True:
        print 'done:', inqueue.get()

def main():
    logger = multiprocessing.log_to_stderr(
        level=logging.INFO,
    )
    logger.info('setup')

    data_queue = multiprocessing.Queue()
    out_queue = multiprocessing.Queue()
    for num in range(5):
        data_queue.put(num)

    worker_p = multiprocessing.Process(
        target=worker, args=(data_queue, out_queue),
        name='worker',
    )
    worker_p.start()

    bean_p = multiprocessing.Process(
        target=beancounter, args=(out_queue,),
        name='beancounter',
    )
    bean_p.start()

    worker_p.join()
    bean_p.join()
    logger.info('done')

if __name__ == '__main__':
    main()
I've got it. I do not know exactly why, but when I tried threading, it worked! (Presumably because threads share the same Queue object in memory, whereas a separate process only ever sees its own copy.)
import logging
import threading
from Queue import Queue, Empty

MailLogger = logging.getLogger('mail')

class TaskMaker(threading.Thread):
    def __init__(self, que):
        threading.Thread.__init__(self)
        self.queue = que

    def run(self):
        while True:
            try:
                print "start", self.queue.qsize()
                _data = self.queue.get()
                if _data:
                    print "make"
                    _data['function'](*_data['args'], **_data['kwargs'])
            except Empty:
                pass
            except Exception as e:
                print e
                MailLogger.error(e)

tasks = Queue()
stream = TaskMaker(tasks)
stream.start()

def add_task(func=lambda: None, args=(), kwargs={}):
    global tasks
    try:
        tasks.put_nowait({
            'function': func,
            'args': args,
            'kwargs': kwargs
        })
    except Exception as e:
        print e
        MailLogger.error(e)