I'm trying to mark a function as deprecated so that a script calling it runs to its normal completion but gets flagged by PyCharm's static code inspections. (There are some other questions on deprecation warnings, but I think they predate Python 2.6, when I believe class-based exceptions were introduced.)
Here's what I have:
class Deprecated(DeprecationWarning):
    pass

def save_plot_and_insert(filename, worksheet, row, col):
    """
    Deprecated. Docstring ...<snip>
    """
    raise Deprecated()
    # Active lines of
    # the function here
    # ...
My understanding is that DeprecationWarnings should allow the code to run, but this code sample actually halts when the function is called. When I remove the "raise" from the body of the function, the code runs, but PyCharm doesn't mark the function call as deprecated.
What is the Pythonic (2.7.x) way of marking functions as deprecated?
You shouldn't raise DeprecationWarning (or a subclass) because then you are still raising an actual exception.
Instead use warnings.warn:
import warnings
warnings.warn("deprecated", DeprecationWarning)
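A reusable way to apply this is a small decorator; the name deprecated below is illustrative, not part of any library. stacklevel=2 makes the warning point at the caller rather than at the wrapper, which is what lets tools flag the call site:

```python
import functools
import warnings

def deprecated(func):
    """Emit a DeprecationWarning each time func is called.

    The decorated function still runs to completion; stacklevel=2
    attributes the warning to the caller, not this wrapper.
    """
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        warnings.warn(
            "{} is deprecated".format(func.__name__),
            DeprecationWarning,
            stacklevel=2,
        )
        return func(*args, **kwargs)
    return wrapper

@deprecated
def save_plot_and_insert(filename, worksheet, row, col):
    # The original body runs to completion as before.
    return filename
```

Because the function itself is untouched apart from the decorator line, existing callers keep working and only the warning is added.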
Related
For some time my old project has used gmock_gen.py to automatically generate mock classes. It comes from the old project at http://code.google.com/p/cppclean/, which seems inactive and depends on Python 2, which we don't want to use.
My question:
Is there anything in the gtest environment that does the same as gmock_gen.py and supports Python 3? Or what is the alternative to gmock_gen.py if we don't have, or don't want to use, Python 2?
Best regards,
Nuno
It seems that the conversion to Python 3 is very simple.
You only need to do two things, and only the second one is required:
1. you can use the Python tool 2to3 to convert the Python 2 code into Python 3 code (optional)
2. change a single line to prevent an exception when the script runs:
gmock_gtest/generator/cpp/ast.py:908
change from:
def _GetNextToken(self):
    if self.token_queue:
        return self.token_queue.pop()
    return next(self.tokens)
to
def _GetNextToken(self):
    if self.token_queue:
        return self.token_queue.pop()
    return next(self.tokens, None)
and that will work.
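The reason this one-line change matters: next(iterator) raises StopIteration on an exhausted iterator, while next(iterator, default) returns the default instead, so the tokenizer loop can detect the end of input without an exception escaping. A quick stdlib-only illustration:

```python
# next() with and without a default on an exhausted iterator.
tokens = iter(["a", "b"])

assert next(tokens, None) == "a"
assert next(tokens, None) == "b"
assert next(tokens, None) is None  # exhausted: the default is returned

# Without a default, the same call raises StopIteration instead:
exhausted = False
try:
    next(iter([]))
except StopIteration:
    exhausted = True
assert exhausted
```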
I am trying to call 'drop_all' after a service test fails or finishes, in the Flask app layer:
@pytest.fixture(scope='class')
def db_connection():
    db_url = TestConfig.db_url
    db = SQLAlchemyORM(db_url)
    db.create_all(True)
    yield db
    db.drop_all()
When a test passes, 'drop_all' works, but when one fails the test freezes.
This solution should solve my problem:
https://stackoverflow.com/a/44437760/3050042
Unfortunately, it got me into a mess.
When I use the 'Session.close_all()' SQLAlchemy warns:
The Session.close_all() method is deprecated and will be removed in a future release. Please refer to session.close_all_sessions().
When I change to the suggestion:
AttributeError: 'scoped_session' object has no attribute 'close_all_sessions'
Yes, I use scoped_session and pure SQLAlchemy.
How to solve this?
The close_all_sessions function is defined at the top level of sqlalchemy.orm.session, not on the scoped_session object, so you can use it as follows.
from sqlalchemy.orm.session import close_all_sessions
close_all_sessions()
Objective
Wrap every function of every class of gspread module.
I know there are countless posts on the subject, and most unanimously instruct to use decorators.
I'm not too familiar with decorators, and that approach felt less seamless than I hoped for; perhaps I didn't understand it correctly.
But, I found this answer which "felt" like what I'm looking for.
(poor) Attempt
import gspread
from oauth2client.service_account import ServiceAccountCredentials
import os
import inspect

class GoogleSheetAPI:
    def __init__(self):
        f = os.path.join(os.path.dirname(__file__), 'credentials.json')
        os.environ.setdefault('GOOGLE_APPLICATION_CREDENTIALS', f)
        scope = ['https://spreadsheets.google.com/feeds',
                 'https://www.googleapis.com/auth/drive']
        credentials = ServiceAccountCredentials.from_json_keyfile_name(f, scope)
        self.client = gspread.authorize(credentials)
        self.client.login()

def SafeCall(f):
    try:
        print 'before call'
        f()
        print 'after call'
    except:
        print 'exception caught'
        return None

for class_name, c in inspect.getmembers(gspread, inspect.isclass):
    for method_name, f in inspect.getmembers(c, inspect.ismethod):
        setattr(c, f, SafeCall(f))  # TypeError: attribute name must be string, not 'instancemethod'

g = GoogleSheetAPI()
spreadsheet = g.client.open_by_key('<ID>')  # calls a function in gspread.Client
worksheet = spreadsheet.get_worksheet(0)    # calls a function in gspread.Spreadsheet
worksheet.add_rows(['key', 'value'])        # calls a function in gspread.Worksheet
Notes
When I use the word "seamless" I mean that considering my code has many calls to many gspread functions, I want to change as little as possible. Using inspect/setattr seems like the perfect/seamless trick.
There are three obvious issues with your code.
The first one is the TypeError, which is easy to solve FWIW: as the error message raised by setattr() states, "attribute name must be string, not 'instancemethod'". You are indeed trying to use f (the method itself) instead of method_name. What you want here is of course:
setattr(c, method_name, SafeCall(f))
The second issue is that your SafeCall "decorator" is NOT a decorator. A decorator (well, the kind of decorator you want here, at least) returns a function that wraps the original one; your current implementation just calls the original function. Actually, what it does is almost what the function SafeCall should return. An example of a proper decorator would be:
def decorator(func):
    def wrapper(*args, **kw):
        print("before calling {}".format(func))
        result = func(*args, **kw)
        print("after calling {}".format(func))
        return result
    return wrapper
And finally, the third obvious issue is here:
except:
    print 'exception caught'
    return None
You certainly don't want this. This:
1/ will catch absolutely everything (including SystemExit, which is what Python raises on sys.exit() calls, and StopIteration, which is how iterators signal they are exhausted),
2/ discards all the very useful debugging info, making it impossible to diagnose what actually went wrong,
3/ returns something that can be plain unusable, so you'll have to test the return value of each method call; and since you won't know what went wrong, you won't be able to handle the issue other than by printing "oops, something went wrong but don't ask me what, where, or why" and exiting the program, which is definitely not better than letting the exception propagate. The program will crash in both cases, but at least if you leave the exception alone you'll have some hints about what caused the issue.
4/ or, much worse, returns a valid return value for the method (yes, quite a few methods are designed to change state and return None), so you won't even know something went wrong and will happily continue execution, which is a sure way to get incorrect results and corrupted data.
5/ Not to mention that the methods you're decorating this way are very probably calling each other and using (expected) exceptions internally (with proper exception handling), so you are actually introducing bugs into an otherwise working (or mostly working) library.
IOW, this is probably the worst antipattern you can ever think of...
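If the goal is still to observe every call without breaking the library, a less harmful sketch (the names safe_call and divide are illustrative) logs the exception with its full traceback and then re-raises it, so callers and the library's own internal exception handling keep working:

```python
import functools
import logging

def safe_call(func):
    """Wrap func to log exceptions without swallowing them."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except Exception:
            # Log the full traceback, then let the exception propagate
            # so callers still see what actually went wrong.
            logging.exception("error calling %s", func.__name__)
            raise
    return wrapper

@safe_call
def divide(a, b):
    return a / b
```

Because the exception is re-raised, nothing silently returns None, and expected exceptions used internally by the wrapped methods still propagate normally.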
When I raise my own exceptions in my Python libraries, the exception stack shows the raise line itself as the last item of the stack. This is obviously not an error and is conceptually right, but it puts the focus on something that is not useful for debugging when the code is used externally, for example as a module.
Is there a way to avoid this and force Python to show the previous-to-last stack item as the last one, like the standard Python libraries do?
Due warning: modifying the behaviour of the interpreter is generally frowned upon. And in any case, seeing exactly where an error was raised may be helpful in debugging, especially if a function can raise an error for several different reasons.
If you use the traceback module, and replace sys.excepthook with a custom function, it's probably possible to do this. But making the change will affect error display for the entire program, not just your module, so is probably not recommended.
You could also look at putting code in try/except blocks, then modifying the error and re-raising it. But your time is probably better spent making unexpected errors unlikely, and writing informative error messages for those that could arise.
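As a sketch of that catch-modify-re-raise approach (Python 3; the function names are made up for illustration), "raise ... from None" suppresses the chained internal traceback, so the caller sees only the annotated error rather than the library's internal raise site:

```python
def _internal(value):
    # Imagine this is deep inside your library.
    if value < 0:
        raise ValueError("negative input")
    return value * 2

def public_api(value):
    try:
        return _internal(value)
    except ValueError as exc:
        # Re-raise with a caller-oriented message; 'from None' hides
        # the "During handling of the above exception..." chain.
        raise ValueError(
            "public_api: bad value {!r} ({})".format(value, exc)
        ) from None
```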
You can create your own exception hook in Python. Below is an example of the code that I am using.
import sys
import traceback

def exceptionHandler(got_exception_type, got_exception, got_traceback):
    listing = traceback.format_exception(got_exception_type, got_exception, got_traceback)
    # Remove the listing of the raise statement (the raise line).
    del listing[-2]
    filelist = ["org.python.pydev"]  # avoiding the debugger modules.
    listing = [item for item in listing if len([f for f in filelist if f in item]) == 0]
    files = [line for line in listing if line.startswith("  File")]
    if len(files) == 1:
        # only one file, remove the header.
        del listing[0]
    print>>sys.stderr, "".join(listing)
And below are some lines that I have used in my custom exception code.
sys.excepthook = exceptionHandler
raise Exception("My Custom error message.")
In the exceptionHandler method, you can add file names or module names to the list "filelist" if you want to ignore any unwanted files. I have ignored the pydev module, since I am using the pydev debugger in Eclipse.
The above is used in my own module for a specific purpose. You can modify it and use it for your modules.
I'd suggest not using the exception mechanism to validate arguments, as tempting as that is. Coding with exceptions as conditionals is like saying, "crash my app if, as a developer, I don't think of all the bad conditions my provided arguments can cause." Perhaps using exceptions only for things not just out of your control but also under the control of something else, like the OS, the hardware, or the Python language, would be more logical; I don't know. In practice, however, I use exceptions the way you request a solution for.
To answer your question, in part, it is just as simple to code thusly:
class MyObject(object):
    def saveas(self, filename):
        if not validate_filename(filename):
            return False
        ...

The caller:

if not myobject.saveas(filename): report_and_retry()
Perhaps not a great answer, just something to think about.
I keep getting this:
DeprecationWarning: integer argument expected, got float
How do I make this message go away? Is there a way to avoid warnings in Python?
You should just fix your code, but just in case:
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
I had these:
/home/eddyp/virtualenv/lib/python2.6/site-packages/Twisted-8.2.0-py2.6-linux-x86_64.egg/twisted/persisted/sob.py:12:
DeprecationWarning: the md5 module is deprecated; use hashlib instead import os, md5, sys
/home/eddyp/virtualenv/lib/python2.6/site-packages/Twisted-8.2.0-py2.6-linux-x86_64.egg/twisted/python/filepath.py:12:
DeprecationWarning: the sha module is deprecated; use the hashlib module instead import sha
Fixed it with:
import warnings
with warnings.catch_warnings():
    warnings.filterwarnings("ignore", category=DeprecationWarning)
    import md5, sha

yourcode()
Now you still get all the other DeprecationWarnings, but not the ones caused by:
import md5, sha
From the documentation of the warnings module:
#!/usr/bin/env python -W ignore::DeprecationWarning
If you're on Windows: pass -W ignore::DeprecationWarning as an argument to Python. It's better, though, to resolve the issue by casting to int.
(Note that in Python 3.2, deprecation warnings are ignored by default.)
None of these answers worked for me, so I will post my own way of solving this. I use the following at the beginning of my main.py script and it works fine.
Use the following as it is (copy-paste it):
def warn(*args, **kwargs):
pass
import warnings
warnings.warn = warn
Example:
import blabla   # placeholder for the imports that emit warnings
import blabla2  # placeholder for another noisy import
def warn(*args, **kwargs):
pass
import warnings
warnings.warn = warn
# more code here...
# more code here...
I found the cleanest way to do this (especially on Windows) is by adding the following to C:\Python26\Lib\site-packages\sitecustomize.py:
import warnings
warnings.filterwarnings("ignore", category=DeprecationWarning)
Note that I had to create this file. Of course, change the path to python if yours is different.
Docker Solution
Disable DeprecationWarnings before running the Python application.
This works for your dockerized tests as well:
ENV PYTHONWARNINGS="ignore::DeprecationWarning"
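The same environment variable works outside Docker too. A quick shell check (assuming python3 is on PATH): with the filter set, only "done" is printed; without it, the warning also appears on stderr.

```shell
PYTHONWARNINGS="ignore::DeprecationWarning" python3 -c \
  'import warnings; warnings.warn("old API", DeprecationWarning); print("done")'
```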
When you want to ignore warnings only in specific functions, you can do the following:

import warnings
from functools import wraps

def ignore_warnings(f):
    @wraps(f)
    def inner(*args, **kwargs):
        with warnings.catch_warnings(record=True) as w:
            warnings.simplefilter("ignore")
            response = f(*args, **kwargs)
        return response
    return inner

@ignore_warnings
def foo(arg1, arg2):
    # write your code here without warnings
    ...

@ignore_warnings
def foo2(arg1, arg2, arg3):
    # write your code here without warnings
    ...

Just add the @ignore_warnings decorator to any function whose warnings you want to ignore.
Python 3
Just write the lines below, which are easy to remember, before writing your code:
import warnings
warnings.filterwarnings("ignore")
If you are using logging (https://docs.python.org/3/library/logging.html) to format or redirect your ERROR, NOTICE, and DEBUG messages, you can redirect the WARNINGS from the warning system to the logging system:
logging.captureWarnings(True)
It will capture the warnings with the tag "py.warnings". Also, if you want to throw away those warnings without logging them, you can set the logging level to ERROR:
logging.getLogger("py.warnings").setLevel(logging.ERROR)
It will cause all those warnings to be ignored without showing up in your terminal or anywhere else.
See https://docs.python.org/3/library/warnings.html and https://docs.python.org/3/library/logging.html#logging.captureWarnings and captureWarnings set to True doesn't capture warnings
In my case, I was formatting all the exceptions with the logging system, but warnings (e.g. scikit-learn) were not affected.
Pass the correct arguments? :P
On a more serious note, you can pass the argument -Wi::DeprecationWarning on the command line to the interpreter to ignore the deprecation warnings.
Convert the argument to int. It's as simple as
int(argument)
For Python 3, just write the code below to ignore all warnings.
from warnings import filterwarnings
filterwarnings("ignore")
Try the code below if you're using Python 3:

import sys
if not sys.warnoptions:
    import warnings
    warnings.simplefilter("ignore")
or try this...
import warnings

def fxn():
    warnings.warn("deprecated", DeprecationWarning)

with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    fxn()
or try this...
import warnings
warnings.filterwarnings("ignore")
If you know what you are doing, another way is simply to find the file that warns you (the path is shown in the warning info) and comment out the lines that generate the warning.
A bit rough, but it worked for me after the above methods did not.
./myscrypt.py 2>/dev/null
Not to beat you up about it, but you are being warned that what you are doing will likely stop working when you next upgrade Python. Convert to int and be done with it.
BTW, you can also write your own warnings handler. Just assign a function that does nothing.
How to redirect python warnings to a custom stream?
Comment out the warning lines in the file below:
lib64/python2.7/site-packages/cryptography/__init__.py