Using warn_only with "run" and "sudo" works fine, but with "local" it produces the error below:
TypeError: local() got an unexpected keyword argument 'warn_only'
Should I use env.warn_only=True instead?
No, that's the kind of thing that you forget about and that then comes back to bite you. If you can avoid touching env altogether, that would be fantastic, at least while you're still familiarizing yourself with Fabric. Instead, I humbly suggest you use with settings(warn_only=True); that way you only override it momentarily.
@task
@with_settings(warn_only=True)  # decorator form of settings()
def task_that_always_throws_warning():
    local('sudo ....')
    local('sudo ....')  # if this throws an error you will NOT know about it
@task
def task_that_always_throws_warning_but_i_need_to_catch_the_second_local():
    with settings(warn_only=True):
        local('sudo ....')
    local('sudo ....')  # if this throws an error you will know about it
You can also do this (though I don't recommend it):
env.warn_only = True

@task
def task_that_always_throws_warning_but_i_need_to_catch_the_second_local():
    local('sudo ....')
    with settings(warn_only=False):
        local('sudo ....')  # if this throws an error you will know about it
The bottom line is that you can do it however you want; I'm just a big fan of NOT overriding a system's defaults.
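As a side note, here is a small sketch (the command and task name are just placeholders) of how you can inspect a failure yourself while warn_only is active, using the result object that local() returns when capture=True:

from fabric.api import local, settings, task


@task
def reload_web():
    with settings(warn_only=True):
        # With capture=True, local() returns a string-like object carrying
        # .failed, .succeeded and .return_code describing the exit status.
        result = local('sudo service nginx reload', capture=True)
    if result.failed:
        print("reload failed with exit code %s" % result.return_code)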
I was reading a similar question, Returning error string from a function in Python. While experimenting to create something similar in an object-oriented way so I could learn a few more things, I got lost.
I am using Python 2.7 and I am a beginner at object-oriented programming.
I cannot figure out how to make it work.
Sample code checkArgumentInput.py:
#!/usr/bin/python

__author__ = 'author'


class Error(Exception):
    """Base class for exceptions in this module."""
    pass


class ArgumentValidationError(Error):
    pass

    def __init__(self, arguments):
        self.arguments = arguments

    def print_method(self, input_arguments):
        if len(input_arguments) != 3:
            raise ArgumentValidationError("Error on argument input!")
        else:
            self.arguments = input_arguments
            return self.arguments
And in the main.py script:
#!/usr/bin/python

import sys
import checkArgumentInput

__author__ = 'author'

argsValidation = checkArgumentInput.ArgumentValidationError(sys.argv)

if __name__ == '__main__':
    try:
        result = argsValidation.validate_argument_input(sys.argv)
        print result
    except checkArgumentInput.ArgumentValidationError as exception:
        # handle exception here and get error message
        print exception.message
When I execute the main.py script it produces two blank lines, whether or not I provide any arguments as input.
So my question is: how do I make it work?
I know that there is a module that can do this work for me by checking argument input, argparse, but I want to implement something that I could use in other cases as well (try, except).
Thank you in advance for the time and effort spent reading and replying to my question.
OK. So, sys.argv is not a function but a list; you index it with brackets and a number between them, like sys.argv[1]. It holds your command-line input, and sys.argv[0] is the name of the file. E.g.:
main.py 42
In this case main.py is sys.argv[0] and 42 is sys.argv[1].
You need to identify which string you're going to take from the command line.
I think that this is the problem.
For more info: https://docs.python.org/2/library/sys.html
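A minimal illustration of the indexing described above (the file name is just an example):

# save this as main.py and run:  python main.py 42
import sys

print(sys.argv)     # e.g. ['main.py', '42']
print(sys.argv[0])  # 'main.py' -- the script name
print(sys.argv[1])  # '42'      -- the first real argument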
I did some research and found this useful question/answer that helped me understand my error: Manually raising (throwing) an exception in Python
I am posting the corrected, working code below, just in case someone benefits from it in the future.
Sample code checkArgumentInput.py:
#!/usr/bin/python

__author__ = 'author'


class ArgumentLookupError(LookupError):
    pass

    def __init__(self, *args):  # *args because I do not know the number of args (input from terminal)
        self.output = None
        self.argument_list = args

    def validate_argument_input(self, argument_input_list):
        if len(argument_input_list) != 3:
            raise ValueError('Error on argument input!')
        else:
            self.output = "Success"
            return self.output
The second part main.py:
#!/usr/bin/python

import sys
import checkArgumentInput

__author__ = 'author'

argsValidation = checkArgumentInput.ArgumentLookupError(sys.argv)

if __name__ == '__main__':
    try:
        result = argsValidation.validate_argument_input(sys.argv)
        print result
    except ValueError as exception:
        # handle exception here and get error message
        print exception.message
The code above prints Error on argument input! as expected, because I am violating the length condition.
Anyway, thank you all for your time and effort; I hope this answer will help someone else in the future.
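As a side note, the same try/except pattern also works when you raise a custom exception class directly; here is a minimal, self-contained sketch (the class name and message are purely illustrative):

class ArgumentCountError(Exception):
    """Raised when the wrong number of command-line arguments is given."""


def validate(argument_list):
    if len(argument_list) != 3:
        raise ArgumentCountError('Error on argument input!')
    return "Success"


if __name__ == '__main__':
    try:
        print(validate(['main.py', 'only-one-argument']))
    except ArgumentCountError as exception:
        print(exception)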
I'm running unit tests in the callbacks of Motor database calls, and I'm successfully catching AssertionErrors and having them surface when running nosetests, but the AssertionErrors are being caught in the wrong test. The tracebacks point to different files.
My unit tests generally look like this:
def test_create(self):
    @self.callback
    def create_callback(result, error):
        self.assertIs(error, None)
        self.assertIsNot(result, None)

    question_db.create(QUESTION, create_callback)
    self.wait()
And the unittest.TestCase class I'm using looks like this:
class MotorTest(unittest.TestCase):
    bucket = Queue.Queue()

    # Ensure IOLoop stops to prevent blocking tests
    def callback(self, func):
        def wrapper(*args, **kwargs):
            try:
                func(*args, **kwargs)
            except Exception as e:
                self.bucket.put(traceback.format_exc())
            IOLoop.current().stop()
        return wrapper

    def wait(self):
        IOLoop.current().start()
        try:
            raise AssertionError(self.bucket.get(block=False))
        except Queue.Empty:
            pass
The errors I'm seeing:
======================================================================
FAIL: test_sync_user (app.tests.db.test_user_db.UserDBTest)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/----/Documents/app/app-Server/app/tests/db/test_user_db.py", line 39, in test_sync_user
self.wait()
File "/Users/----/Documents/app/app-Server/app/tests/testutils/mongo.py", line 25, in wait
raise AssertionError(self.bucket.get(block = False))
AssertionError: Traceback (most recent call last):
File "/Users/----/Documents/app/app-Server/app/tests/testutils/mongo.py", line 16, in wrapper
func(*args, **kwargs)
File "/Users/----/Documents/app/app-Server/app/tests/db/test_question_db.py", line 32, in update_callback
self.assertEqual(result["question"], "updated question?")
TypeError: 'NoneType' object has no attribute '__getitem__'
Here the error is reported to be in UserDBTest, but it is clearly in test_question_db.py (the questions test case).
I'm having issues with nosetests and asynchronous tests in general, so if anyone has any advice on that, it'd be greatly appreciated as well.
I can't fully understand your code without an SSCCE, but I'd say you're taking an unwise approach to async testing in general.
The particular problem you face is that you don't wait for your test to complete (asynchronously) before leaving the test function, so there's work still pending in the IOLoop when you resume the loop in your next test. Use Tornado's own "testing" module -- it provides convenient methods for starting and stopping the loop, and it recreates the loop between tests so you don't experience interference like what you're reporting. Finally, it has extremely convenient means of testing coroutines.
For example:
import unittest

from tornado.testing import AsyncTestCase, gen_test
import motor


# AsyncTestCase creates a new loop for each test, avoiding interference
# between tests.
class Test(AsyncTestCase):
    def callback(self, result, error):
        # Translate from Motor callbacks' (result, error) convention to the
        # single arg expected by "stop".
        self.stop((result, error))

    def test_with_a_callback(self):
        client = motor.MotorClient()
        collection = client.test.collection
        collection.remove(callback=self.callback)

        # AsyncTestCase starts the loop, runs until "remove" calls "stop".
        self.wait()

        collection.insert({'_id': 123}, callback=self.callback)

        # Arguments passed to self.stop appear as return value of "self.wait".
        _id, error = self.wait()
        self.assertIsNone(error)
        self.assertEqual(123, _id)

        collection.count(callback=self.callback)
        cnt, error = self.wait()
        self.assertIsNone(error)
        self.assertEqual(1, cnt)

    @gen_test
    def test_with_a_coroutine(self):
        client = motor.MotorClient()
        collection = client.test.collection
        yield collection.remove()
        _id = yield collection.insert({'_id': 123})
        self.assertEqual(123, _id)
        cnt = yield collection.count()
        self.assertEqual(1, cnt)


if __name__ == '__main__':
    unittest.main()
(In this example I create a new MotorClient for each test, which is a good idea when testing applications that use Motor. Your actual application must not create a new MotorClient for each operation. For decent performance you must create one MotorClient when your application begins, and use that same client throughout the process's lifetime, as sketched below.)
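For illustration, a minimal sketch of that "one client per process" idea (the module and collection names are hypothetical):

# db.py -- hypothetical module that creates the client once at startup
import motor

client = motor.MotorClient()
db = client.test

# Elsewhere in the application, import and reuse the same objects instead
# of constructing a new MotorClient per operation:
#     from db import db
#     db.collection.insert({'_id': 123}, callback=on_insert)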
Take a look at the testing module, and particularly the gen_test decorator:
http://tornado.readthedocs.org/en/latest/testing.html
These test conveniences take care of many details related to unit-testing Tornado applications.
I gave a talk and wrote an article about testing in Tornado, there's more info here:
http://emptysqua.re/blog/eventually-correct-links/
Since upgrading to Django 1.8, there's a strange bug in my Django management command.
I run it as follows:
python manage.py my_command $DB_NAME $DB_USER $DB_PASS
And then I collect the arguments as follows:
class Command(BaseCommand):
    def handle(self, *args, **options):
        print args
        db_name = args[0]
        db_user = args[1]
        db_pass = args[2]
        self.conn = psycopg2.connect(database=db_name, user=db_user,
                                     password=db_pass)
Previously this worked fine, but now I see this error:
usage: manage.py my_command [-h] [--version] [-v {0,1,2,3}]
[--settings SETTINGS]
[--pythonpath PYTHONPATH]
[--traceback] [--no-color]
manage.py my_command: error: unrecognized arguments: test test test
It's not even getting as far as the print args statement.
If I run it without any arguments, then it errors on the args[0] line, unsurprisingly.
Am I using args wrong here? Or is something else going on?
It is a change in Django 1.8. As detailed here:
Management commands that only accept positional arguments
If you have written a custom management command that only accepts positional arguments and you didn’t specify the args command variable, you might get an error like Error: unrecognized arguments: ..., as variable parsing is now based on argparse which doesn’t implicitly accept positional arguments. You can make your command backwards compatible by simply setting the args class variable. However, if you don’t have to keep compatibility with older Django versions, it’s better to implement the new add_arguments() method as described in Writing custom django-admin commands.
def add_arguments(self, parser):
    parser.add_argument('args', nargs='*')
Add the above for backwards compatibility; breaking this was a really unwise decision by the folks updating Django.
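If you don't need to support older Django versions, a sketch of the add_arguments() approach for this particular command could look like the following (the argument names simply mirror the ones used in the question; this is not official Django example code):

import psycopg2

from django.core.management.base import BaseCommand


class Command(BaseCommand):
    def add_arguments(self, parser):
        # Declare the three positional arguments explicitly so argparse
        # accepts them and documents them in --help.
        parser.add_argument('db_name')
        parser.add_argument('db_user')
        parser.add_argument('db_pass')

    def handle(self, *args, **options):
        # Declared arguments arrive in the options dict rather than in args.
        self.conn = psycopg2.connect(database=options['db_name'],
                                     user=options['db_user'],
                                     password=options['db_pass'])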
I'd like to add delaying of arbitrary tasks using django-celery. Currently, I have created a class similar to the one below (just an example, actual classes have more than this):
from celery.task import task


class Delayer(object):
    def delay(self, func, minutes):
        return task(func, name="%s.delayed" % self.__class__.__name__)\
            .apply_async(countdown=minutes * 60)
I'm running celeryd as follows:
python manage.py celeryd -E -B -lDEBUG
When I try running my delay method from within a Django shell [e.g. Delayer().delay(lambda: 1, 1)], I'm getting an error like this in my celeryd output:
[2013-01-02 15:26:39,324: ERROR/MainProcess] Received unregistered task of type "Delayer.delayed".
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Please see http://bit.ly/gLye1c for more information.
The full contents of the message body was:
{'retries': 0, 'task': "Delayer.delayed", 'eta': '2013-01-02T21:27:39.320913', 'args': [], 'expires': None, 'callbacks': None, 'errbacks': None, 'kwargs': {}, 'id': '99d49fa7-bd4b-40b0-80dc-57309a6f19b1', 'utc': True} (229b)
Traceback (most recent call last):
File "/home/simon/websites/envs/delayer/local/lib/python2.7/site-packages/celery/worker/consumer.py", line 432, in on_task_received
strategies[name](message, body, message.ack_log_error)
KeyError: "Delayer.delayed"
My question is, is it possible to register such dynamically created tasks? If not, what other method can I use to achieve the same effect using celery?
The simple answer is that you can't: because celery is running in a different process, it needs to be able to import any code that is run as a celery task; your generated callable is not importable, so celery's way of passing around references to callables doesn't work.
However, this does suggest a possible way of attacking the problem: if you can come up with a different way of serializing your callable, then you can provide it as an argument to a simple celery task. This previous question may help. Note the cautionary mentions of security :-)
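For instance, one sketch of that idea is to pass an importable dotted path to a single registered task instead of the callable itself (the task and class shown here are made-up names, not part of celery):

from importlib import import_module

from celery.task import task


@task(name="tasks.run_deferred")
def run_deferred(dotted_path, *args, **kwargs):
    # Resolve "package.module.function" into a callable the worker can import.
    module_path, func_name = dotted_path.rsplit('.', 1)
    func = getattr(import_module(module_path), func_name)
    return func(*args, **kwargs)


class Delayer(object):
    def delay(self, dotted_path, minutes, *args, **kwargs):
        # The worker only has to know about run_deferred; the real work is
        # named by its import path, so lambdas and closures won't work here.
        return run_deferred.apply_async(args=(dotted_path,) + args,
                                        kwargs=kwargs,
                                        countdown=minutes * 60)

With this, something like Delayer().delay('myapp.jobs.cleanup', 1) would run myapp.jobs.cleanup on the worker a minute later, provided the worker can import that function.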
I am currently trying to integrate Mandrill into this Django-based website for emails. Djrill is the recommended package for Django; it sits in place of the default SMTP/email backend and passes emails through to a Mandrill account.
When I try to test that this new backend is working by running this command:
send_mail('Test email', body, 'noreply@*********.com', [user.email], fail_silently=False)
It throws the following error: http://pastebin.ca/2239978
Can anybody point me to my mistake?
Update:
As @DavidRobinson mentions in a comment, you are not getting a successful response from the Mandrill API authentication call. You should double-check your API key.
If that is correct, try using curl to post {"key": <your api key>, "email": <your from email>} to MANDRILL_API_URL + "/users/verify-sender.json" and see if you get a 200.
Something like this:
curl -d key=1234567890 -d email=noreply@mydomain.com http://mandrill.whatever.com/users/verify-sender.json
Original answer:
There is also an issue in Djrill that prevents a useful error message from propagating up. That last line of the stack trace is the problem.
This is the entire open method taken from the source:
def open(self, sender):
    """
    """
    self.connection = None
    valid_sender = requests.post(
        self.api_verify, data={"key": self.api_key, "email": sender})
    if valid_sender.status_code == 200:
        data = json.loads(valid_sender.content)
        if data["is_enabled"]:
            self.connection = True
            return True
    else:
        if not self.fail_silently:
            raise
See how it just says raise without an exception argument? That syntax is only allowed inside an except block, and raises the exception currently being handled. It doesn't work outside an except block.
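As a quick, self-contained demonstration of that point (this is not Djrill code):

# A bare `raise` re-raises the exception currently being handled, so it only
# makes sense inside an `except` block.

def bare_raise_with_nothing_to_reraise():
    # No exception is being handled here, so `raise` itself blows up
    # (TypeError on Python 2, RuntimeError on Python 3) instead of
    # surfacing anything useful to the caller.
    raise


def reraise_inside_except():
    try:
        1 / 0
    except ZeroDivisionError:
        raise  # re-raises the ZeroDivisionError being handled


if __name__ == '__main__':
    try:
        bare_raise_with_nothing_to_reraise()
    except Exception as exc:
        print("bare raise with no active exception: %r" % exc)

    try:
        reraise_inside_except()
    except ZeroDivisionError:
        print("bare raise inside except re-raised ZeroDivisionError")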
An open issue in Djrill mentions a send failure and links a fork that supposedly fixes it. I suspect Djrill isn't well supported and you might try that fork or another solution entirely.