Creating a custom command line in Django that does not check database connectivity

I have a requirement whereby our database engine sometimes goes down, and I need to automate a certain procedure through Django's command line. I am trying to write a command that performs some reporting and starts up some services, and I need to run it from a Django context so I can use Django's libraries and project settings.
Anyway, is there a way to develop a command that can be executed without checking whether the database exists, or perhaps a way to trap the exception? Any ideas?

Never mind: I looked at the source code for the "shell" command and saw the "requires_model_validation" attribute. I used it in my command and it went through. Here is an example:
from django.core.management.base import NoArgsCommand, CommandError

class Command(NoArgsCommand):
    args = '<No arguments>'
    help = 'This command will init the database.'
    requires_model_validation = False

    def handle(self, *args, **options):
        try:
            print('ss')
        except Exception:
            raise CommandError('Cannot find default database settings')
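For the other half of the question (trapping the failure when the engine is down), the connection attempt itself can be wrapped in a try/except inside handle(). A minimal sketch, assuming the same old-style NoArgsCommand base class, the default database alias, and a Django version recent enough to expose django.db.utils.OperationalError (on older versions, the backend-specific DatabaseError can be caught instead); the reporting logic is a placeholder:

from django.core.management.base import NoArgsCommand
from django.db import connections
from django.db.utils import OperationalError

class Command(NoArgsCommand):
    help = 'Run reporting and start services even when the database is down.'
    requires_model_validation = False

    def handle(self, *args, **options):
        try:
            # Force a real connection attempt against the default database.
            connections['default'].cursor()
        except OperationalError:
            self.stdout.write('Database is down, continuing without it...\n')
        # ... perform reporting / start services here ...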

Related

Google App Engine deferred.defer task not getting executed

I have a Google App Engine Standard Environment application that has been working fine for a year or more and that, quite suddenly, refuses to enqueue new deferred tasks using deferred.defer.
Here's the Python 2.7 code that is making the deferred call:
# Find any inventory items that reference the product, and change them too.
# Because this could take some time, we'll do it as a deferred task, and only
# if needed.
if upd:
    updater = deferredtasks.InvUpdate()
    deferred.defer(updater.run, product_key)
My app.yaml file has the necessary bits to support deferred.defer:
- url: /_ah/queue/deferred
  script: google.appengine.ext.deferred.deferred.application
  login: admin

builtins:
- deferred: on
And my deferred task has logging in it so I should see it running when it does:
#-------------------------------------------------------------------------------
# DEFERRED routine that updates the inventory items for a particular product. Should be called
# when ANY changes are made to the product, because it should trigger a re-download of the
# inventory record for that product to the iPad.
#-------------------------------------------------------------------------------
class InvUpdate(object):
    def __init__(self):
        self.to_put = []
        self.product_key = None
        self.updcount = 0

    def run(self, product_key, batch_size=100):
        updproduct = product_key.get()
        if not updproduct:
            logging.error("DEFERRED ERROR: Product key passed in does not exist")
            return
        logging.info(u"DEFERRED BEGIN: beginning inventory update for: {}".format(updproduct.name))
        self.product_key = product_key
        self._continue(None, batch_size)
...
When I run this in the development environment on my development box, everything works fine. Once I deploy it to the App Engine server, the inventory updates never get done (i.e. the deferred task is not executed), and there are no errors (and no other logging from the deferred task, in fact) in the log files on the server. I know that with the sudden push to get everybody onto Python 3 as quickly as possible, the deferred.defer library has been marked as not recommended because it only works with the Python 2.7 environment, and I planned on moving to task queues for this, but I wasn't expecting deferred.defer to suddenly stop working in the existing Python environment.
Any insight would be greatly appreciated!
I'm pretty sure you can't pass the method of an instance to the App Engine task queue, because that instance will not exist when your task runs, since it will be running in a different process. I actually don't understand how your task ever worked when running remotely in the first place (and running locally is not an accurate representation of how things will run remotely).
Try changing your code to this:
if upd:
    deferred.defer(deferredtasks.InvUpdate.run_cls, product_key)
and then InvUpdate is the same but has a new function run_cls:
class InvUpdate(object):
    @classmethod
    def run_cls(cls, product_key):
        cls().run(product_key)
And I'm still in the process of migrating to Cloud Tasks, and my deferred tasks still work.
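Another common pattern (a sketch, not from the original code; run_inv_update is a name made up here) is to defer a plain module-level function, so that deferred.defer only has to pickle the function's import path and its arguments rather than any instance state:

# In deferredtasks.py
def run_inv_update(product_key, batch_size=100):
    InvUpdate().run(product_key, batch_size)

and the call site becomes:

if upd:
    deferred.defer(deferredtasks.run_inv_update, product_key)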

Pylint: Module/Instance of has no member for google.cloud.vision API

When I run this code (which will later be used to detect and extract text using the Google Vision API in Python), I get the following errors:
Module 'google.cloud.vision_v1.types' has no 'Image' member pylint(no-member)
Instance of 'ImageAnnotatorClient' has no 'text_detection' member pylint(no-member)
from google.cloud import vision
from google.cloud.vision import types
import os, io

os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = r'C:\Users\paul\VisionAPI\key.json'

client = vision.ImageAnnotatorClient()

FILE_NAME = 'im3.jpg'
FOLDER_PATH = r'C:\Users\paul\VisionAPI\images'

with io.open(os.path.join(FOLDER_PATH, FILE_NAME), 'rb') as image_file:
    content = image_file.read()

image = vision.types.Image(content=content)
response = client.text_detection(image=image)
What does "Module/Instance of ___ has no members" mean?
I was able to reproduce the pylint error, though the script executes successfully when run (with minor modifications for my environment to change the filename being processed).
Therefore, I am assuming that by "run this code" you mean "run this code through pylint". If not, please update the question with how you are executing the code in a way that generates pylint errors.
This page describes the specific error you are seeing, and the case that causes a false positive for it. This is likely exactly the false positive you are hitting.
The Google Cloud Vision module appears to dynamically create these members, and pylint doesn't have a way to detect that they actually exist at runtime, so it raises the error.
Two options:
Tag the affected lines with a # pylint: disable=no-member annotation, as suggested in the page linked above.
Run pylint with the --ignored-modules=google.cloud.vision_v1 flag (or put the equivalent in your .pylintrc). You'll notice that even the actual module name is different from the one you imported :)
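For example, option 1 applied to the two affected lines from the question would look like this (only the tagged lines are suppressed, so other no-member errors in the file still get reported):

image = vision.types.Image(content=content)  # pylint: disable=no-member
response = client.text_detection(image=image)  # pylint: disable=no-member

For option 2, the equivalent .pylintrc entry is ignored-modules=google.cloud.vision_v1 under the [TYPECHECK] section.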
This is a similar question with more detail about workarounds for the E1101 error.

Using python logging in reusable Django application: "No handlers could be found for logger"

In an open-source Django library I am working on, the logging module is used mainly to notify users of potential errors:
import logging

logger = logging.getLogger(__name__)

def my_wonderful_function():
    # ... some code ...
    if problem_detected:
        logger.error('Please pay attention, something is wrong...')
While this approach works fine most of the time, if Python 2 users don't configure the logging system for my package, they get the error:
No handlers could be found for logger "library.module"
as soon as the logger is used.
This error is not printed for Python 3 users, since there is a fallback mechanism that outputs messages to a default StreamHandler when no handler can be found for a specific logger (see the code).
My question is:
Is there a good way to report errors and warnings to the user, but to print nothing (and in particular no error) when the user doesn't want to configure logging?
I finally found a solution that works well. I added these two functions to my code:
import logging

import six

def logger_has_handlers(logger):
    """Since Python 2 doesn't provide Logger.hasHandlers(), we have to
    perform the lookup ourselves."""
    if six.PY3:
        return logger.hasHandlers()
    else:
        c = logger
        rv = False
        while c:
            if c.handlers:
                rv = True
                break
            if not c.propagate:
                break
            else:
                c = c.parent
        return rv

def get_default_logger(name):
    """Get a logger from the default logging manager. If no handler
    is associated, add a default NullHandler."""
    logger = logging.getLogger(name)
    if not logger_has_handlers(logger):
        # If logging is not configured in the current project, configure
        # this logger to discard all log messages. This will prevent
        # the 'No handlers could be found for logger XXX' error on Python 2,
        # and avoid redirecting errors to the default 'lastResort'
        # StreamHandler on Python 3.
        logger.addHandler(logging.NullHandler())
    return logger
Then, when I need a logger, instead of calling logging.getLogger(__name__), I use get_default_logger(__name__). With this solution, the returned logger always contains at least one handler.
If users configured logging themselves, the returned logger contains references to the defined handlers. If they didn't configure logging for the package (or configured it without any handler), the NullHandler instance associated with the logger ensures no error will be printed when it is used.
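Usage is then a drop-in replacement for the standard getLogger call; a short sketch reusing the function from the question:

logger = get_default_logger(__name__)

def my_wonderful_function():
    # ... some code ...
    if problem_detected:
        # Printed if the consuming project configured a handler,
        # silently discarded otherwise.
        logger.error('Please pay attention, something is wrong...')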

How to log from Python Luigi

I am trying to build a strategy to log from Luigi in such a way that there is a configurable list of outputs, including stdout and a custom list of files. I would like to be able to set the logging level at runtime. Our system uses Luigi to call Spark from Jenkins. Thank you in advance.
Inside any of the Task class methods, you can do:
import logging

import luigi

class Agg(luigi.Task):
    _date = luigi.DateParameter()

    def output(self):
        return luigi.LocalTarget("file_%s.txt" % self._date)

    def run(self):
        # Use the luigi-interface logger to log to the console
        logger = logging.getLogger('luigi-interface')
        logger.info("Running --> Agg.Task")
Have you checked the logging_conf_file parameter of the configuration? You can set up all of your logging configuration there using Python's standard logging mechanism.
For some examples see:
https://groups.google.com/forum/#!topic/luigi-user/nKJYq_ng5y8
https://groups.google.com/forum/#!topic/luigi-user/N83ZeePuqJk
https://github.com/spotify/luigi/issues/1401
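To make that concrete, here is a minimal sketch of the two files involved; all paths, handler names, and levels here are hypothetical and should be adapted. In luigi.cfg, point Luigi at a standard logging configuration file:

[core]
logging_conf_file=/etc/luigi/logging.conf

Then /etc/luigi/logging.conf uses Python's logging.config.fileConfig format, which covers the stdout-plus-files requirement and lets you change levels without touching code:

[loggers]
keys=root

[handlers]
keys=console,file

[formatters]
keys=simple

[logger_root]
level=DEBUG
handlers=console,file

[handler_console]
class=StreamHandler
level=INFO
formatter=simple
args=(sys.stdout,)

[handler_file]
class=FileHandler
level=DEBUG
formatter=simple
args=('/var/log/luigi/pipeline.log',)

[formatter_simple]
format=%(asctime)s %(levelname)s %(name)s: %(message)s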

How do I embed an IPython Interpreter into an application running in an IPython Qt Console

There are a few topics on this, but none with a satisfactory answer.
I have a Python application running in an IPython Qt console:
http://ipython.org/ipython-doc/dev/interactive/qtconsole.html
When I encounter an error, I'd like to be able to interact with the code at that point.
import sys

try:
    raise Exception()
except Exception as e:
    try:  # use exception trick to pick up the current frame
        raise None
    except:
        frame = sys.exc_info()[2].tb_frame.f_back
    namespace = frame.f_globals.copy()
    namespace.update(frame.f_locals)
    import IPython
    IPython.embed_kernel(local_ns=namespace)
I would think this would work, but I get an error:
RuntimeError: threads can only be started once
I just use this:
from IPython import embed; embed()
works better than anything else for me :)
Update:
In celebration of this answer receiving 50 upvotes, here are the updates I've made to this snippet in the intervening six years since it was posted.
First, I now like to import and execute in a single statement, as I use black for all my Python code these days and it reformats the original snippet in a way that doesn't make sense in this specific and unusual context. So:
__import__("IPython").embed()
Given that I often use this inside a loop or a thread, it can be helpful to include a snippet that allows terminating the parent process (partly for convenience, and partly to remind myself of the best way to do it). os._exit is the best choice here, so my snippet includes this (same logic w/r/t using a single statement):
q = __import__("functools").partial(__import__("os")._exit, 0)
Then I can simply use q() if/when I want to exit the master process.
My full snippet (with # FIXME comments so I don't forget to remove it!) looks like this:
q = __import__("functools").partial(__import__("os")._exit, 0) # FIXME
__import__("IPython").embed() # FIXME
You can use the following recipe to embed an IPython session into your program:
try:
    get_ipython
except NameError:
    banner = exit_msg = ''
else:
    banner = '*** Nested interpreter ***'
    exit_msg = '*** Back in main IPython ***'

# First import the embed function
from IPython.frontend.terminal.embed import InteractiveShellEmbed

# Now create the IPython shell instance. Put ipshell() anywhere in your code
# where you want it to open.
ipshell = InteractiveShellEmbed(banner1=banner, exit_msg=exit_msg)
Then use ipshell() whenever you want to be dropped into an IPython shell. This will allow you to embed (and even nest) IPython interpreters in your code and inspect objects or the state of the program.
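A short usage sketch (process and compute are placeholder names made up for this illustration):

def process(data):
    result = compute(data)
    # Drops into a nested IPython shell here; 'data' and 'result'
    # are available for inspection. Exiting the shell resumes execution.
    ipshell()
    return result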