I'm starting my first large Python project, and I'm running into a common issue. I'll have some file response.py that is purely functional and has no classes. I often end up doing this:
from my_cookbook.util import response
...
def foo():
    response = bar.get_response()
    response.baz(response)
The response module operates on the response variable, which of course conflicts. PEP 8 says package and module names should be lowercase, as should local variables.
Question: Is there a way I can mitigate the amount of naming conflicts I get without sacrificing readability of both module and variable names?
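One common mitigation is to import the module under an alias, which frees the natural name for the local variable. Below is a minimal sketch; since my_cookbook.util.response isn't available here, a stub module is built inline so the snippet runs on its own:

```python
import types

# Stub standing in for `from my_cookbook.util import response as response_mod`;
# built inline so the example is self-contained.
response_mod = types.ModuleType("response")
response_mod.baz = lambda resp: "processed " + resp

def foo():
    response = "raw response"          # local variable no longer shadows the module
    return response_mod.baz(response)

print(foo())  # processed raw response
```

The same idea works at the import site itself: `from my_cookbook.util import response as response_mod` keeps both names readable while removing the clash.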
I'm writing function libraries in Python 2.7.8, to use in some UAT testing using froglogic Squish. It's for my employer, so I'm not sure how much I can share and still conform to company privacy regulations.
Early in the development, I put some functions in some very small files. There was one file that contained only a single function. I could import the file and use the function with no problem.
I am at a point where I want to consolidate some of those tiny files into a larger file. For some reason that completely eludes me, some of the functions that I copied and pasted into this larger file are not being found, and a "NameError: global name 'My_variableStringVerify' is not defined" error is displayed, for example. (I just added the "My_", in case there was a name collision with some other function...)
This worked with the EXACT same simple function in a separate 'module'. Other functions in this python file -- appearing both before and after this function in the new, expanded module -- are being found and used without problems. The only module this function needs is re. I am importing that. I deleted all the pyc files in the directory, in case that was not getting updated (I'm pretty sure it was, from the datetime on the pyc file).
I have created and used dozens of functions in a dozen of my 'library modules', all with no issues. What's so special about this trivial, piece of crap function, as a part of a different module? It worked before, and it STILL works -- as long as I do not try to use it from the new library module.
I'm no Python guru, but I have been doing this kind of thing for years...
Ugh. What a fool. The answer was in the error after all: "global name xxx is not found". I was trying to use the function directly inside a Squish API call, which runs in the global scope. After moving the call to my function outside of the Squish API call (using it in the local scope), it worked fine.
The detail that surprised me: I was using "from foo import *", in both cases (before and after adding it to another 'library' module of mine).
When this one function was THE ONLY function in foo, I was able to use it in the global scope successfully.
When it was just one of many functions in foo-extended (names have been changed, to protect the innocent), I could NOT use it in the global scope. I had to reference it in the local scope.
After spending more time reading https://docs.python.org/2.0/ref/import.html (yes, it's old), I'm surprised it appeared in the global scope in either case. About scope restrictions on the "from foo import *" statement, that page does state: "(The current implementation does not enforce the latter two restrictions, but programs should not abuse this freedom, as future implementations may enforce them or silently change the meaning of the program.)"
I guess I found an edge case that somehow skirted the restriction in this implementation.
Still... what a maroon! It verifies my statement that I am no Python guru.
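The story above never pins down why the name vanished, but one documented behaviour of wildcard imports that produces exactly this symptom is worth knowing: if the module defines `__all__`, `from module import *` binds only the names listed there (and names with a leading underscore are skipped when `__all__` is absent). Whether this is what happened in the Squish setup is an assumption; the sketch below just demonstrates the filtering:

```python
import sys
import types

# Build a throwaway module whose __all__ deliberately omits one function.
mod = types.ModuleType("foo_extended")
exec(
    "__all__ = ['visible']\n"
    "def visible():\n"
    "    return 'found'\n"
    "def hidden():\n"
    "    return 'not exported'\n",
    mod.__dict__,
)
sys.modules["foo_extended"] = mod

ns = {}
exec("from foo_extended import *", ns)  # wildcard import into a fresh namespace
print("visible" in ns)  # True
print("hidden" in ns)   # False -- filtered out by __all__
```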
In numpy.testing, there's assert_array_less and assert_array_equal, but there isn't an assert_array_less_equal function or even an assert_array_greater. I have two questions:
Is there a reason these functions are missing, but assert_array_less is not?
I've written my own versions of these missing functions by using numpy.testing.utils.assert_array_compare, e.g.:
def assert_array_greater(aa, bb):
    assert_array_compare(np.greater, aa, bb)
Is this safe? I.e., is there a reason why assert_array_compare is hidden away in numpy.testing.utils rather than living in numpy.testing?
Forgive my paranoia; it just seems weird that these functions don't exist, to the extent that I fear it's for some good reason that I shouldn't be working around.
np.testing is a module that collects tests and tools used by numpy's own unit-test files. It's designed more for internal use than for end-user use, so the simple answer would be that those extra tests aren't needed.
But looking at the source code for one of those functions:
def assert_array_less(x, y, err_msg='', verbose=True):
    assert_array_compare(operator.__lt__, x, y, err_msg=err_msg,
                         verbose=verbose,
                         header='Arrays are not less-ordered',
                         equal_inf=False)
It looks like it would be easy to write a variation that uses one of the other operator methods.
The 'root' of np.testing is numpy/testing/__init__.py, which is a short file. Its main task looks to be from .utils import *. This is typical subpackage organization: the __init__ collects the necessary imports but often doesn't have significant code of its own.
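If depending on the private helper feels risky, one option is to write the missing assertion against the public numpy API only. This is a hypothetical sketch (the function name and message format are my own, modelled on assert_array_less), not an existing numpy function:

```python
import numpy as np

def assert_array_less_equal(x, y, err_msg=''):
    # Hypothetical complement to numpy.testing.assert_array_less, written
    # against public numpy calls only, so it does not depend on the private
    # assert_array_compare helper.
    x, y = np.asanyarray(x), np.asanyarray(y)
    if not np.all(np.less_equal(x, y)):
        raise AssertionError(
            "Arrays are not less-or-equal-ordered %s\n x: %r\n y: %r"
            % (err_msg, x, y))

assert_array_less_equal([1, 2, 3], [1, 3, 4])          # passes silently
try:
    assert_array_less_equal([2], [1])
except AssertionError as exc:
    print("raised:", "not less-or-equal" in str(exc))  # raised: True
```

The trade-off is losing assert_array_compare's nicer mismatch reporting, in exchange for relying only on documented behaviour.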
Say I have a function
def pyfunc():
    print("ayy lmao")
    return 4
and I want to call it in c++
int j = (int)python.pyfunc();
how exactly would I do that?
You might want to have a look at this: https://docs.python.org/2/extending/extending.html
In order to call a Python function from C++, you have to embed Python
in your C++ application. To do this, you have to:
Load the Python DLL. How you do this is system dependent:
LoadLibrary under Windows, dlopen under Unix. If the Python DLL is
in the usual path you use for DLLs (%path% under Windows,
LD_LIBRARY_PATH under Unix), this will happen automatically if you try
calling any function in the Python C interface. Manual loading will
give you more control with regards to version, etc.
Once the library has been loaded, you have to call the function
Py_Initialize() to initialize it. You may want to call
Py_SetProgramName() or Py_SetPythonHome() first to establish the
environment.
Your function is in a module, so you'll have to load that:
PyImport_ImportModule. If the module isn't in the standard path,
you'll have to add its location to sys.path: use
PyImport_ImportModule to get the module "sys", then
PyObject_GetAttrString to get the attribute "path". The path
attribute is a list, so you can use any of the list functions to add
whatever is needed to it.
Your function is an attribute of the module, so you use
PyObject_GetAttrString on the module to get an instance of the
function. Once you've got that, you pack the arguments into a tuple or
a dictionary (for keyword arguments), and use PyObject_Call to call
it.
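The C-API sequence above can be read against its plain-Python equivalent; each C call mirrors a Python-level operation. In this sketch, json stands in for the questioner's own module, which isn't available here:

```python
import importlib
import sys

sys.path.append(".")                   # amend sys.path (get "path" attr of the sys module, then append)
mod = importlib.import_module("json")  # PyImport_ImportModule("json")
func = getattr(mod, "dumps")           # PyObject_GetAttrString(mod, "dumps")
result = func({"a": 1})                # PyObject_Call with the packed arguments
print(result)                          # {"a": 1}
```

The C version is the same four steps, plus reference-count bookkeeping on every PyObject* you receive.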
All of the functions, and everything that is necessary, are documented
(extremely well, in fact) in https://docs.python.org/2/c-api/. You'll
be particularly interested in the sections on "Embedding Python" and
"Importing Modules", along with the more general utilities ("Object
Protocol", etc.). You'll also need to understand the general principles
with regards to how the Python/C API works—things like reference
counting and borrowed vs. owned references; you'll probably want to read
all of the sections in the Introduction first.
And of course, despite the overall quality of the documentation, it's
not perfect. A couple of times, I've had to plunge into the Python
sources to figure out what was going on. (Typically, when I'm getting
an error back from Python, to find out what it's actually complaining
about.)
Sometimes I want to use a module in some subroutine, but I only need a few of the subroutines from that module. What is the difference between
use a_module, only: a_subroutine
or simply
use a_module
?
Here is a complete answer, some of which has already been discussed in the comments.
From Metcalf et al. (2011) p.146 (a leading Fortran reference textbook), use a_module provides (emphasis added):
access to all the public named data objects, derived types,
interface blocks, procedures, generic identifiers, and namelist groups
in the module named.
Conversely, use a_module, only: an_entity provides:
access to an entity in a module only if the entity ... is specified.
i.e. use a_module is equivalent to the not-recommended (e.g. in [2]) Python practice:
from a_module import *
while use a_module, only: an_entity is equivalent to the preferred Python practice:
from a_module import an_entity
Unfortunately, the recommended Python practice
import module [as name]
or
import module.submodule [as name]
is not available in Fortran, since Fortran imports all entities into a global namespace rather than accessing them via the module's namespace as Python does, e.g.:
import numpy as np
array = np.array([1, 2, 3])
As noted in the comments and elsewhere (e.g. [3]), explicit imports (use a_module, only: an_entity) are preferred over implicit imports (use a_module) for code clarity and to avoid namespace pollution / name clashes ("explicit is better than implicit").
Metcalf et al. (2011) also note that should you require two entities with the same name from different modules, name clashes can be avoided by renaming one (or both) of the clashing entities locally (i.e. within your program / module only), e.g.
use stats_lib, only: sprod => prod
use maths_lib, only: prod
where prod from stats_lib is accessed locally using the name sprod, while prod from maths_lib is accessed locally using the name prod.
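Python offers the same rename-on-import escape hatch. In this sketch, stats_lib and maths_lib are the hypothetical modules from the Fortran example, so stubs are built inline to make the snippet self-contained:

```python
import sys
import types

# Stub modules standing in for the Fortran example's stats_lib and maths_lib.
stats_lib = types.ModuleType("stats_lib")
stats_lib.prod = lambda xs: "stats prod"
maths_lib = types.ModuleType("maths_lib")
maths_lib.prod = lambda xs: "maths prod"
sys.modules["stats_lib"] = stats_lib
sys.modules["maths_lib"] = maths_lib

from stats_lib import prod as sprod  # Fortran: use stats_lib, only: sprod => prod
from maths_lib import prod           # Fortran: use maths_lib, only: prod

print(sprod([2, 3]))  # stats prod
print(prod([2, 3]))   # maths prod
```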
Incidentally, Metcalf et al. (2011) also note:
A name clash is permitted if there is no reference to the name in the
scoping unit.
i.e. you can successfully compile:
use stats_lib
use maths_lib
without problems provided neither module's prod (or any other clashing name) is used in your program / module. However, for the reasons above, such practice is not recommended.
[1] Metcalf, M, Reid, J & Cohen, M. (2011) "Modern Fortran Explained" (Oxford University Press)
[2] https://www.tutorialspoint.com/python/python_modules.htm
[3] http://www.fortran90.org/src/best-practices.html
I need to import one of the core modules (datetime) inside my C extension module since I want to return a datetime.date from some functions of my module.
It appears that Python C extension modules have no counterpart to PyMODINIT_FUNC that is called upon destruction.
Question: What can I do short of importing the required module time and time again in every call inside my C extension module and then dereferencing it at the end of the call(s) again?
Rationale: I fear that importing it over and over creates unnecessary overhead, because from my understanding of the documentation, dereferencing it means the garbage collector can collect it, so PyImport_ImportModule would have to do its work again next time.
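One fact bears on that worry: PyImport_ImportModule, like a Python-level import, consults the sys.modules cache first, and sys.modules itself holds a reference to every loaded module, so dropping your own reference does not make the module collectable while the interpreter is running. A repeated import is therefore a cheap dictionary lookup, not a reload. A Python-level sketch of the caching:

```python
import importlib
import sys

m1 = importlib.import_module("datetime")
m2 = importlib.import_module("datetime")  # cheap: served from the sys.modules cache
print(m1 is m2)                   # True -- the same module object both times
print("datetime" in sys.modules)  # True -- the cache itself keeps a reference
```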
Somewhat related questions:
Import and use standard Python module from inside Python C extension
Making a C extension to Python that requires another extension