I have Fabric code that typically runs on a remote machine, but it would sometimes be helpful to run it locally.
I've found two ways to allow this:
method #1: only use the 'run' command (avoid using 'local') and set up an SSH key on the machine itself, so that it can SSH to itself.
method #2:
from fabric.api import env, local, cd, lcd
from fabric.api import run as run_

def is_local():
    return len(env.hosts) == 0 or env.hosts == ['localhost']

def run(cmd, curr_dir=None):
    if is_local():
        if curr_dir:
            # lcd() affects local(); cd() only affects run()
            with lcd(curr_dir):
                return local(cmd, capture=True)
        else:
            return local(cmd, capture=True)
    else:
        if curr_dir:
            with cd(curr_dir):
                return run_(cmd)
        else:
            return run_(cmd)
which limits me when I want to use some of Fabric's operations, such as quiet()...
What do you think is the best way to be able to run the same code either remotely or locally?
Perhaps there is an additional trick I do not know about.
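One trick along those lines, as a minimal sketch assuming Fabric 1.x: keep a single run() that dispatches to local() or run(), pick the matching cd()/lcd() context manager in one place, and pass any extra keyword arguments straight through, so callers can still wrap calls in quiet(), settings(), and friends themselves.

from contextlib import contextmanager

from fabric.api import cd, env, lcd, local
from fabric.api import run as remote_run

def is_local():
    return not env.hosts or env.hosts == ['localhost']

@contextmanager
def workdir(path):
    # lcd() affects local(), cd() affects run()
    manager = lcd if is_local() else cd
    with manager(path):
        yield

def run(cmd, **kwargs):
    if is_local():
        kwargs.setdefault('capture', True)
        return local(cmd, **kwargs)
    return remote_run(cmd, **kwargs)

Usage would then look like: with workdir('/tmp'): run('ls'), optionally wrapped in with quiet():, which hides output for both local() and run().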
I have a Google App Engine Standard Environment application that has been working fine for a year or more, and that, quite suddenly, refuses to enqueue new deferred tasks using deferred.defer.
Here's the Python 2.7 code that is making the deferred call:
# Find any inventory items that reference the product, and change them too.
# Because this could take some time, we'll do it as a deferred task, and only
# if needed.
if upd:
    updater = deferredtasks.InvUpdate()
    deferred.defer(updater.run, product_key)
My app.yaml file has the necessary bits to support deferred.defer:
handlers:
- url: /_ah/queue/deferred
  script: google.appengine.ext.deferred.deferred.application
  login: admin

builtins:
- deferred: on
And my deferred task has logging in it so I should see it running when it does:
#-------------------------------------------------------------------------------
# DEFERRED routine that updates the inventory items for a particular product. Should be called
# when ANY changes are made to the product, because it should trigger a re-download of the
# inventory record for that product to the iPad.
#-------------------------------------------------------------------------------
class InvUpdate(object):
    def __init__(self):
        self.to_put = []
        self.product_key = None
        self.updcount = 0

    def run(self, product_key, batch_size=100):
        updproduct = product_key.get()
        if not updproduct:
            logging.error("DEFERRED ERROR: Product key passed in does not exist")
            return
        logging.info(u"DEFERRED BEGIN: beginning inventory update for: {}".format(updproduct.name))
        self.product_key = product_key
        self._continue(None, batch_size)
...
When I run this in the development environment on my development box, everything works fine. Once I deploy it to the App Engine server, the inventory updates never get done (i.e. the deferred task is not executed), and there are no errors (and in fact no other logging from the deferred task) in the log files on the server. I know that with the sudden push to get everybody onto Python 3 as quickly as possible, the deferred.defer library has been marked as not recommended because it only works in the Python 2.7 environment, and I planned on moving to task queues for this, but I wasn't expecting deferred.defer to suddenly stop working in the existing Python environment.
Any insight would be greatly appreciated!
I'm pretty sure you can't pass the method of an instance to the App Engine task queue, because that instance will no longer exist when your task runs, since the task will be running in a different process. I actually don't understand how your task ever worked when running remotely in the first place (and running locally is not an accurate representation of how things will run remotely).
Try changing your code to this:
if upd:
    deferred.defer(deferredtasks.InvUpdate.run_cls, product_key)
and then InvUpdate stays the same but gains a new classmethod, run_cls:
class InvUpdate(object):
    @classmethod
    def run_cls(cls, product_key):
        cls().run(product_key)
I'm still in the process of migrating to Cloud Tasks myself, and my deferred tasks still work.
How can I connect to MS Access with SQLAlchemy? Their website says the connection string is access+pyodbc. Does that mean I need pyodbc for the connection? Since I am a newbie, please be gentle.
In theory this would be via create_engine("access:///some_odbc_dsn"), but the Access backend hasn't been in service at all since SQLAlchemy 0.5, and it's not clear how well it was working even back then (this is why it's noted as "development" at http://docs.sqlalchemy.org/en/latest/core/engines.html#supported-databases - "development" means "a development version of the dialect exists, but is not yet usable"). There's just not enough interest/volunteers to keep this dialect running right now. (When/if that changes, you'll see it at http://docs.sqlalchemy.org/en/latest/dialects/access.html.)
Your best bet for Access right now would be to export the data into a SQLite database file (or of course some other database, though SQLite is file-based in a similar way at least), then use that.
Update, September 2019:
The sqlalchemy-access dialect has been resurrected. Details here.
Usage example:
engine = create_engine("access+pyodbc://@some_odbc_dsn")
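In full, a minimal sketch, assuming the sqlalchemy-access dialect is installed (pip install sqlalchemy-access) and that "my_access_db" is an ODBC DSN pointing at the Access file (both the DSN and the table name below are placeholders):

import sqlalchemy as sa

# "my_access_db" is a hypothetical ODBC DSN; "some_table" is a placeholder.
engine = sa.create_engine("access+pyodbc://@my_access_db")

with engine.connect() as conn:
    for row in conn.execute(sa.text("SELECT TOP 5 * FROM some_table")):
        print(row)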
I primarily needed read access and some simple queries. The latest version of SQLAlchemy has the (broken) Access backend modules, but the dialect isn't registered as an entry point.
It needed a few fixups, but this worked for me:
import sqlalchemy

def fixup_access():
    import sqlalchemy.dialects.access.base

    class FixedAccessDialect(sqlalchemy.dialects.access.base.AccessDialect):
        def _check_unicode_returns(self, connection):
            return True

        def do_execute(self, cursor, statement, params, context=None, **kwargs):
            if params == {}:
                params = ()
            # Deliberately skip AccessDialect's broken do_execute and call the
            # base implementation instead
            super(sqlalchemy.dialects.access.base.AccessDialect, self).do_execute(cursor, statement, params, **kwargs)

    class SomeObject(object):
        pass

    # Register the fixed dialect so "access+fix://" URLs resolve to it
    fixed_dialect_mod = SomeObject
    fixed_dialect_mod.dialect = FixedAccessDialect
    sqlalchemy.dialects.access.fix = fixed_dialect_mod

fixup_access()
# db_location is the path to the Access database file
ENGINE = sqlalchemy.create_engine('access+fix://admin@/%s' % db_location)
I am trying to use socket_io with my Flask application. The problem is that when I run database queries, like in the url_route function below, the first page load works properly, but on consecutive calls the process goes into a blocking state. Even KeyboardInterrupt (Ctrl+C) only terminates one of the Python processes; I have to kill the other one manually.
One obvious solution would be to use a cache and run the database queries from another script. Is there any other possible solution that would avoid running separate scripts?
@app.route('/status/<urlMap>')
def status(urlMap):
    dictResponse = {}
    data = models.Status.query.filter_by(urlmap=urlMap).first()
    if data.conversion == "DONE":
        dictResponse['conversion'] = 'success'
    if data.published == "DONE":
        dictResponse['publish'] = 'success'
    return render_template('status.html', status=dictResponse)
Also, on removing the flask.ext.socketio import and using app.run(host='0.0.0.0') instead of socketio.run(app, host='0.0.0.0'), the app runs perfectly. So I think it's the async gevent calls that are somehow blocking the process.
As @Miguel correctly pointed out, monkey patching the standard library solved the issue: calling monkey.patch_all() fixed the problem.
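For reference, a minimal sketch of where the patch has to go; the essential point is that monkey.patch_all() runs before anything else is imported (the app structure below is a placeholder):

# Patch the standard library first, so gevent's cooperative sockets are
# used by everything imported afterwards (including the database driver).
from gevent import monkey
monkey.patch_all()

from flask import Flask
from flask_socketio import SocketIO  # flask.ext.socketio in older releases

app = Flask(__name__)
socketio = SocketIO(app)

if __name__ == '__main__':
    socketio.run(app, host='0.0.0.0')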
How can one disable the built-in print function depending on the server environment? The code below seems to work, but I'm looking for a cleaner way to do it. I want to use this in a Django app.
It would be nice if print kept working on localhost.
import sys

class MyFileWrapper(object):
    def write(self, *args):
        pass

    def flush(self):
        pass

if __name__ == '__main__':
    print('will be printed')
    sys.stdout = MyFileWrapper()
    print("won't be printed")
If you just want to stop all writing to stdout, you can do sys.stdout = None (or, if you want to be slightly more pedantic, sys.stdout = open(os.devnull, 'w'), opened for writing). As far as changing behavior depending on your environment, you may be able to distinguish between machines based on the result of (say) socket.gethostname(). Alternatively, you can set an environment variable on either the server or your local box (but not both) and then test os.environ for the variable's presence or value.
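A quick sketch of the environment-variable approach (ON_SERVER is an assumed variable name, exported only on the production box):

import os
import sys

# ON_SERVER is a hypothetical variable set only in the server environment.
if os.environ.get('ON_SERVER'):
    sys.stdout = open(os.devnull, 'w')  # writes are silently discarded

print('visible locally, silenced on the server')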
You may be better off using the built-in logging module instead of print() calls. That will allow much more fine-grained control over which things get logged and where the logs go.
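For instance, a minimal sketch along those lines (DJANGO_ENV is an assumed variable name; adapt it to however you flag production):

import logging
import os

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

# DJANGO_ENV is a hypothetical flag; any production marker works here.
if os.environ.get('DJANGO_ENV') == 'production':
    logging.disable(logging.CRITICAL)  # drop every log record

log.info('visible on localhost, silenced in production')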
I have a requirement whereby our database engine sometimes goes down, and I need to automate a certain procedure through Django's command line. I am trying to write a command that would perform some reporting and start up some services, and I need to run it from a Django context so I can use Django's libraries and project settings.
Anyway, is there a way to develop a management command that can be executed without checking whether the database exists, or perhaps a way to trap the exception? Any ideas?
Never mind, I looked at the source code for the "shell" command and saw the requires_model_validation attribute. I used it in my command and it went through; here is an example:
class Command(NoArgsCommand):
    args = '<No arguments>'
    help = 'This command will init the database.'
    requires_model_validation = False

    def handle(self, *args, **options):
        try:
            print 'ss'
        except Exception:
            raise CommandError('Cannot find default database settings')
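For anyone on newer Django: requires_model_validation was replaced by requires_system_checks around Django 1.7, so a rough equivalent sketch would be:

from django.core.management.base import BaseCommand, CommandError

class Command(BaseCommand):
    help = 'This command will init the database.'
    requires_system_checks = []  # use requires_system_checks = False on Django < 3.2

    def handle(self, *args, **options):
        try:
            self.stdout.write('ss')
        except Exception:
            raise CommandError('Cannot find default database settings')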