Objective
Wrap every function of every class of the gspread module.
I know there are countless posts on the subject, and most of them unanimously instruct to use decorators.
I'm not too familiar with decorators, and that approach didn't feel as seamless as I had hoped for; perhaps I didn't understand it correctly.
But I found this answer, which "felt" like what I'm looking for.
(poor) Attempt
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import os
import inspect
class GoogleSheetAPI:
    def __init__(self):
        f = os.path.join(os.path.dirname(__file__), 'credentials.json')
        os.environ.setdefault('GOOGLE_APPLICATION_CREDENTIALS', f)
        scope = ['https://spreadsheets.google.com/feeds',
                 'https://www.googleapis.com/auth/drive']
        credentials = ServiceAccountCredentials.from_json_keyfile_name(f, scope)
        self.client = gspread.authorize(credentials)
        self.client.login()

def SafeCall(f):
    try:
        print 'before call'
        f()
        print 'after call'
    except:
        print 'exception caught'
        return None

for class_name, c in inspect.getmembers(gspread, inspect.isclass):
    for method_name, f in inspect.getmembers(c, inspect.ismethod):
        setattr(c, f, SafeCall(f))  # TypeError: attribute name must be string, not 'instancemethod'
g = GoogleSheetAPI()
spreadsheet = g.client.open_by_key('<ID>') # calls a function in gspread.Client
worksheet = spreadsheet.get_worksheet(0) # calls a function in gspread.Spreadsheet
worksheet.add_rows(['key','value']) # calls a function in gspread.Worksheet
Notes
When I use the word "seamless" I mean that considering my code has many calls to many gspread functions, I want to change as little as possible. Using inspect/setattr seems like the perfect/seamless trick.
Actually, there are three obvious issues with your code.
The first one is the TypeError - which is easy to solve FWIW: as the error message raised by setattr() states, "attribute name must be string, not 'instancemethod'", and you are indeed trying to use f (the method itself) instead of method_name. What you want here is of course:
setattr(c, method_name, SafeCall(f))
The second issue is that your SafeCall "decorator" is NOT a decorator. A decorator (well, the kind of decorator you want here at least) returns a function that wraps the original one; your current implementation just calls the original function. In fact, what it does now is almost what the wrapper that SafeCall should return ought to do. An example of a proper decorator would be:
def decorator(func):
    def wrapper(*args, **kw):
        print("before calling {}".format(func))
        result = func(*args, **kw)
        print("after calling {}".format(func))
        return result
    return wrapper
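Putting the first two fixes together, a minimal sketch of the patching loop could look like this (illustrative only, and Python 2 specific, since inspect.ismethod only finds unbound methods there - which is also why your traceback mentions 'instancemethod'):

import inspect
import gspread

def SafeCall(func):
    # A real decorator: build and return a wrapper instead of calling func here.
    def wrapper(*args, **kw):
        print("before calling {}".format(func))
        result = func(*args, **kw)
        print("after calling {}".format(func))
        return result
    return wrapper

for class_name, c in inspect.getmembers(gspread, inspect.isclass):
    for method_name, f in inspect.getmembers(c, inspect.ismethod):
        # Pass the method's name (a string) to setattr, not the method object.
        setattr(c, method_name, SafeCall(f))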
And finally, the third obvious issue is here:
except:
    print 'exception caught'
    return None
You certainly don't want this. This
1/ will catch absolutely everything (including SystemExit, which is what Python raises on sys.exit() calls, and StopIteration, which is how iterators signal they are exhausted),
2/ discard all the very useful debugging info, making it impossible to diagnose what actually went wrong,
3/ return something that can be plain unusable, so you'll have to test the return value of each and every method call; and since you won't know what went wrong, you won't be able to handle the issue other than by printing "oops, something went wrong, but don't ask me what, where or why" and exiting the program, which is definitely not better than letting the exception propagate - the program will crash in both cases, but at least if you leave the exception alone you'll have some hints about what caused the issue,
4/ or, much worse, return a valid value for the method (yes, quite a few methods are designed to change state and return None), so you won't even know something went wrong and will happily continue execution - which is a sure way to end up with incorrect results and corrupted data,
5/ not to mention that the methods you're decorating this way very probably call each other and use (expected) exceptions internally (with proper exception handling), so you are actually introducing bugs into an otherwise working (or mostly working) library.
IOW, this is probably the worst antipattern you could ever think of...
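If all you really want is the tracing, a far less destructive variant is a wrapper that logs the exception (with its traceback) and re-raises it, so nothing is swallowed. A minimal sketch, not tied to gspread in any way:

import functools
import logging
import traceback

logger = logging.getLogger(__name__)

def traced(func):
    # Log the call, log any exception with its full traceback, then re-raise
    # so callers (and the library's own internal handling) still see the error.
    @functools.wraps(func)
    def wrapper(*args, **kw):
        logger.debug("calling %s", func.__name__)
        try:
            return func(*args, **kw)
        except Exception:
            logger.error("error in %s:\n%s", func.__name__, traceback.format_exc())
            raise
    return wrapper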
Related
I am trying to use 'drop_all' after a service test fails or finishes, at the Flask app layer:
@pytest.fixture(scope='class')
def db_connection():
    db_url = TestConfig.db_url
    db = SQLAlchemyORM(db_url)
    db.create_all(True)
    yield db_connection
    db.drop_all()
When a test passes, 'drop_all' works, but when it fails, the test freezes.
So, this solution solves my problem:
https://stackoverflow.com/a/44437760/3050042
Unfortunately, I got into a mess with that.
When I use 'Session.close_all()', SQLAlchemy warns:
The Session.close_all() method is deprecated and will be removed in a future release. Please refer to session.close_all_sessions().
When I change to the suggestion:
AttributeError: 'scoped_session' object has no attribute 'close_all_sessions'
Yes, I use scoped_session and pure SQLAlchemy.
How to solve this?
The close_all_sessions function is defined at the top level of sqlalchemy.orm.session (at the time of writing this answer). Thus, you can use it as follows.
from sqlalchemy.orm.session import close_all_sessions
close_all_sessions()
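Applied to the fixture from the question, the teardown could look roughly like this (SQLAlchemyORM and TestConfig are the question's own helpers, and this is only a sketch):

import pytest
from sqlalchemy.orm.session import close_all_sessions

@pytest.fixture(scope='class')
def db_connection():
    db = SQLAlchemyORM(TestConfig.db_url)
    db.create_all(True)
    yield db
    # Close any sessions left open by a failed test before dropping the schema,
    # otherwise drop_all() can block on a connection that still holds locks.
    close_all_sessions()
    db.drop_all()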
Before everything, I have to say that I'm not as well-versed in Python programming as I am in other languages. I'm quite (too) worn out from searching for other ways to solve this, so thank you in advance for your help.
I love making roguelike games in my free time, so I've tried numerous ways of implementing my own "engines" for my own games using C++, C, C#, HTML5, etc. I've never worked with LibTCOD before because I could never make it work in C++, alas my favourite programming language; that's an issue I'm not going to get into right now, since it belongs in a C++ thread.
Sadly, LibTCOD looks great but has too few mentions and too little precise documentation, so I've had to work it out almost alone. Over the last few days I've written a little Python package to easily manage LibTCOD functionality for Python on Windows, and to keep the main game code as tiny as possible.
The last feature I tried to add is to move the main game loop into a thread that handles all the basic game functionality (like keyboard/mouse changes and screen updates), and to start it with a single function call.
Everything works fine... but only for the first loop iteration; after that it freezes everything and stops working.
Basically this is the problematic code:
def ioHandler(l):
    lastx = 0
    lasty = 0
    lastk = None
    c = 0
    noEvent = 0
    casted = False
    while not tcod.console_is_window_closed():
        l.acquire()
        try:
            tcod.sys_check_for_event(tcod.EVENT_KEY_PRESS|tcod.EVENT_MOUSE, key, mouse)
        finally:
            l.release()
        if mouse.lbutton_pressed:
            casted = True
            l.acquire()
            try:
                onClick(mouse, 'left')
            finally:
                l.release()
        if mouse.rbutton_pressed:
            casted = True
            l.acquire()
            try:
                onClick(mouse, 'right')
            finally:
                l.release()
        if mouse.cx != lastx or mouse.cy != lasty:
            casted = True
            l.acquire()
            try:
                lastx = mouse.cx
                lasty = mouse.cy
                onMouseMove(mouse)
            finally:
                l.release()
        if key != lastk:
            casted = True
            l.acquire()
            try:
                lastk = key
                onKeyPress(key)
            finally:
                l.release()
        if not casted: noEvent += 1
        l.acquire()
        try:
            onTickFrame(c+1)
        finally:
            l.release()
Most of the variables used there are there for clearer debugging (even with the 'clean' function it froze), so I had to put them in.
The above 'def' is called from here:
def main_loop():
    l = threading.Lock()
    tr = threading.Thread(target=ioHandler, args=(l,))
    #tr.daemon=True
    tr.start()
For the 'Event' system, I found this on the internet:
class Event:
    handlers = set()

    def __init__(self):
        self.handlers = set()

    def handle(self, handler):
        self.handlers.add(handler)
        return self

    def unhandle(self, handler):
        try:
            self.handlers.remove(handler)
        except:
            raise ValueError("Handler is not handling this event, so cannot unhandle it.")
        return self

    def fire(self, *args, **kargs):
        for handler in self.handlers:
            handler(*args, **kargs)

    def getHandlerCount(self):
        return len(self.handlers)

    __iadd__ = handle
    __isub__ = unhandle
    __call__ = fire
    __len__ = getHandlerCount
As a note: I'm working with Python 2.7; that's the only version that worked with the library, what a shame.
I think the event system may be the main problem. Reading the code again, I think I should apply a lock to the while condition too, and therefore to the whole loop, or is that not necessary? Are the locks applied in the proper way, or should I use other methods to make the thread work?
Just to mention, everything works fine if the main game loop runs in the main script without threads, but everything fails when it is called as a thread, or even when it's not a thread itself but is called from 'outside' like any other function in the package, so it can't be a library problem (I think).
I have to say that I've only worked with LibTCOD in Python, as that's the only way I can make it work (at least on Windows). If it helps, I've seen that the Python library is just a 'binding' to the original C library, so it's not a big deal to understand the Python code. Given that, I think this could be a problem with Python threads too, or am I wrong? If there's something I can do to fix the threaded implementation, please help me!
Thank you all! I hope I have not bored you with my talk.
There was enough missing in your example that I couldn't get it running in a timely manner, so I do not have a known solution for you, but I do have some suggestions:
If the same thread will be releasing the lock that acquired it, I suggest using an RLock() instead of Lock()
l = threading.RLock()
To make the code cleaner and less error prone, I suggest using the context manager that the lock provides:
Instead of:
l.acquire()
try:
    tcod.sys_check_for_event(
        tcod.EVENT_KEY_PRESS | tcod.EVENT_MOUSE, key, mouse)
finally:
    l.release()
Try:
with l:
    tcod.sys_check_for_event(
        tcod.EVENT_KEY_PRESS | tcod.EVENT_MOUSE, key, mouse)
As to the question of what else should be locked: that is hard to answer without understanding all of the data structures, but in general anything that will be used in more than one thread should be locked.
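As an illustration, the polling part of ioHandler from the question could be rewritten along these lines, combining the RLock suggestion with the with statement (a rough sketch that reuses the question's key, mouse and handler globals, untested):

def ioHandler(l):
    lastx, lasty, lastk = 0, 0, None
    while not tcod.console_is_window_closed():
        with l:
            tcod.sys_check_for_event(
                tcod.EVENT_KEY_PRESS | tcod.EVENT_MOUSE, key, mouse)
        if mouse.lbutton_pressed:
            with l:
                onClick(mouse, 'left')
        if mouse.rbutton_pressed:
            with l:
                onClick(mouse, 'right')
        if mouse.cx != lastx or mouse.cy != lasty:
            with l:
                lastx, lasty = mouse.cx, mouse.cy
                onMouseMove(mouse)
        if key != lastk:
            with l:
                lastk = key
                onKeyPress(key)

# and in main_loop:
# l = threading.RLock()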
When I raise my own exceptions in my Python libraries, the exception stack shows the raise line itself as the last item of the stack. This is obviously not an error and is conceptually right, but it puts the focus on something that is not useful for debugging when the code is being used externally, for example as a module.
Is there a way to avoid this and force Python to show the previous-to-last stack item as the last one, like the standard Python libraries do?
Due warning: modifying the behaviour of the interpreter is generally frowned upon. And in any case, seeing exactly where an error was raised may be helpful in debugging, especially if a function can raise an error for several different reasons.
If you use the traceback module, and replace sys.excepthook with a custom function, it's probably possible to do this. But making the change will affect error display for the entire program, not just your module, so is probably not recommended.
You could also look at putting code in try/except blocks, then modifying the error and re-raising it. But your time is probably better spent making unexpected errors unlikely, and writing informative error messages for those that could arise.
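As a rough illustration of that last idea, a library's public entry point can catch an internal error and re-raise it with a message aimed at the caller (all names here are made up):

def load_config(path):
    # Hypothetical public entry point of the library.
    try:
        return _parse_config_file(path)
    except ValueError as exc:
        # Re-raise with a caller-oriented message; on Python 2 the traceback
        # now ends at this raise near the module boundary rather than deep
        # inside the helper (there is no exception chaining).
        raise ValueError("invalid config file {!r}: {}".format(path, exc))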
You can create your own exception hook in Python. Below is an example of the code that I am using.
import sys
import traceback

def exceptionHandler(got_exception_type, got_exception, got_traceback):
    listing = traceback.format_exception(got_exception_type, got_exception, got_traceback)
    # Remove the listing of the raise statement itself (the raise line).
    del listing[-2]
    filelist = ["org.python.pydev"]  # avoiding the debugger modules.
    listing = [item for item in listing if len([f for f in filelist if f in item]) == 0]
    files = [line for line in listing if line.startswith("  File")]
    if len(files) == 1:
        # only one file, remove the header.
        del listing[0]
    print>>sys.stderr, "".join(listing)
And below are some lines that I have used in my custom exception code.
sys.excepthook = exceptionHandler
raise Exception("My Custom error message.")
In the exceptionHandler function you can add file names or module names to the "filelist" list if you want to ignore any unwanted files, as I have ignored the Python pydev module since I am using the pydev debugger in Eclipse.
The above is used in my own module for a specific purpose; you can modify it and use it for your modules.
I'd suggest not using the exception mechanism to validate arguments, as tempting as that is. Coding with exceptions as conditionals is like saying, "crash my app if, as a developer, I don't think of all the bad conditions my provided arguments can cause." Perhaps using exceptions for things that are not only out of your control but also under the control of something else, like the OS, the hardware, or the Python language, would be more logical, I don't know. In practice, however, I do use exceptions in the way you're requesting a solution for.
To answer your question, in part, it is just as simple to code thusly:
class MyObject(object):
    def saveas(self, filename):
        if not validate_filename(filename):
            return False
        ...

Caller:

if not myobject.saveas(filename): report_and_retry()
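For illustration only, validate_filename is not defined anywhere above; a made-up sketch of such a check might be:

import os

def validate_filename(filename):
    # Hypothetical validation: a non-empty name whose target directory
    # exists and is writable. Adjust to whatever rules your app needs.
    directory = os.path.dirname(filename) or '.'
    return (bool(os.path.basename(filename))
            and os.path.isdir(directory)
            and os.access(directory, os.W_OK))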
Perhaps not a great answer, just something to think about.
I'm trying to test the code in main() but I'm a bit unsure how to go about it, since I'm not passing any arguments or even returning anything. For the purposes of the example I've shortened the three statements in the function.
Can anyone point me in the right direction on how to test the logic below? Also, I did google to see if this question had already been asked but I couldn't find it, so if it was, apologies, I did not mean to ask again.
script.py
from . import settings
def main():
    if settings.PATHS:  # list containing full paths to a directory of files
        paths = settings.PATHS
        for path in paths:
            data = read_file(path)
            modified_data = do_something_with_the_data_collected(data)
            write_to_new_file(modified_data)
    else:
        logger.warning("There are no files in {}".format(settings.FILES_DIRECTORY))

if __name__ == '__main__':
    main()
tests/file_tests.py
import unittest
from module.script import main
class FileManagerTests(unittest.TestCase):

    def test_main_func(self):
        main()  # ?? this is where I am stuck, should I just test
                # that it logs correctly if certain data exists
                # in settings file?

if __name__ == '__main__':
    unittest.main()
Your function has inputs - but they are not arguments. For example, settings.PATHS obviously is an input. You are receiving more inputs via read_file etc. Moreover, you also have outputs, namely the data you pass to write_to_new_file. You will have to find ways to influence the input data and to observe the output data.
I recommend reading a bit about test doubles (stubs, mocks and the like) and how to use them in unit testing. This will help you to deal with the dependencies in your code while testing.
And, you can make your life a lot easier if you also learn a bit about designing code for testability. This can help you to improve the design to remove dependencies or make them easier to handle.
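As a sketch of that idea using unittest.mock (the external mock package on Python 2), and assuming read_file, do_something_with_the_data_collected and write_to_new_file are looked up in module.script's namespace so they can be patched there:

import unittest
from unittest import mock  # Python 2: `import mock` after `pip install mock`

from module.script import main

class MainTests(unittest.TestCase):

    @mock.patch('module.script.write_to_new_file')
    @mock.patch('module.script.do_something_with_the_data_collected')
    @mock.patch('module.script.read_file')
    @mock.patch('module.script.settings')
    def test_main_processes_each_path(self, settings, read_file, transform, write):
        # Control the inputs...
        settings.PATHS = ['/tmp/one.txt']
        read_file.return_value = 'raw'
        transform.return_value = 'modified'

        main()

        # ...and observe the outputs.
        read_file.assert_called_once_with('/tmp/one.txt')
        transform.assert_called_once_with('raw')
        write.assert_called_once_with('modified')

if __name__ == '__main__':
    unittest.main()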
I'm trying to mark a function as deprecated so that the script calling it runs to its normal completion, but gets caught by PyCharm's static code inspections. (There are some other questions about deprecation warnings, but I think they predate Python 2.6, when I believe class-based exceptions were introduced.)
Here's what I have:
class Deprecated(DeprecationWarning):
    pass

def save_plot_and_insert(filename, worksheet, row, col):
    """
    Deprecated. Docstring ...<snip>
    """
    raise Deprecated()
    # Active lines of
    # the function here
    # ...
My understanding is that deprecation warnings should allow the code to run, but this code sample actually halts when the function is called. When I remove "raise" from the body of the function, the code runs, but PyCharm doesn't mark the function call as deprecated.
What is the Pythonic (2.7.x) way of marking functions as deprecated?
You shouldn't raise DeprecationWarning (or a subclass) because then you are still raising an actual exception.
Instead use warnings.warn:
import warnings
warnings.warn("deprecated", DeprecationWarning)
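If you have many such functions, a small reusable decorator keeps the call working while emitting the warning each time; a sketch (the decorator name and message are up to you):

import functools
import warnings

def deprecated(func):
    # Emit a DeprecationWarning on each call, then run the function normally.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        warnings.warn("{} is deprecated".format(func.__name__),
                      DeprecationWarning, stacklevel=2)
        return func(*args, **kwargs)
    return wrapper

@deprecated
def save_plot_and_insert(filename, worksheet, row, col):
    """
    Deprecated. Docstring ...<snip>
    """
    # Active lines of the function here
    # ...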