I want to use django-appengine-toolkit to provide the symlinks App Engine needs to include dependencies in the production runtime environment, as discussed here. Unfortunately, I ran into an "AttributeError: 'module' object has no attribute 'symlink'" problem. A bit of research took me to this solution for apptrace, which indicated the error is due to running the code on Windows. Applying that change, with the arguments in
kdll.CreateSymbolicLinkA(srcname, dstname, 0)
changed to
kdll.CreateSymbolicLinkA(path, dest, 0)
in _utils.py at line 62 (as shown here), fixed the AttributeError and allowed the code to complete and autogenerate appengine_config.py with the necessary sys.path information.
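(For reference, a standalone ctypes fallback along those lines might look like the sketch below; the helper name is hypothetical. As far as I can tell from the Win32 documentation, CreateSymbolicLinkA takes the link name first and the target second, the flag should be 1 when the target is a directory, and the call needs symlink privileges on Windows, so it is worth checking the return value.)

import ctypes
import os

def windows_symlink(source, link_name):
    # Hypothetical helper: argument order is (new link, existing target),
    # and the flag is 1 (SYMBOLIC_LINK_FLAG_DIRECTORY) for directory targets.
    kdll = ctypes.windll.kernel32
    flags = 1 if os.path.isdir(source) else 0
    if not kdll.CreateSymbolicLinkA(link_name, source, flags):
        raise OSError("CreateSymbolicLinkA failed for {}".format(link_name))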
Unfortunately, the dependencies were not populated under the 'libs' directory and I fear that my Python skills failed me at that point.
Can anyone identify what further code changes are needed to populate the dependencies?
Since I really needed this ASAP, I ended up modifying the _utils.py file in appengine_toolkit:
import os
import shutil
import sys


def make_simlinks(dest_dir, paths_list):
    """Symlink each path in paths_list into dest_dir, copying on failure."""
    for path in paths_list:
        dest = os.path.join(dest_dir, os.path.split(path)[-1])
        if os.path.exists(dest):
            if os.path.islink(dest):
                os.remove(dest)
            else:
                sys.stderr.write('A file or dir named {} already exists, skipping...\n'.format(dest))
                continue
        try:
            os.symlink(path, dest)
        except (AttributeError, NotImplementedError, OSError):
            # os.symlink is unavailable (or fails) on Windows under Python 2,
            # so fall back to copying the package directory instead
            sys.stdout.write('Couldn\'t create symlink, copying files instead...\n')
            shutil.copytree(path, dest)
Basically, if symlinking fails, I just copy everything. Not the cleanest hack, but it works.
For some time my old project has used gmock_gen.py to automatically generate mock classes (it comes from the old http://code.google.com/p/cppclean/ project, which seems inactive, and it depends on Python 2, which we don't want).
My question:
Is there anything in the gtest environment that does the same as gmock_gen.py and supports Python 3, or what is the alternative to gmock_gen.py if we don't have, or don't want to use, Python 2?
Best regards,
Nuno
It seems that the conversion to Python 3 is very simple.
You only need to do two things, and only one of them is required (step 2):
1. you can use the Python tool 2to3 to convert the code from Python 2 into Python 3 code (optional)
2. change only one line to prevent an exception when executing the script:
gmock_gtest/generator/cpp/ast.py:908
change from:
def _GetNextToken(self):
    if self.token_queue:
        return self.token_queue.pop()
    return next(self.tokens)
to
def _GetNextToken(self):
    if self.token_queue:
        return self.token_queue.pop()
    return next(self.tokens, None)
and that will work.
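For context, the only difference is the default argument: next(iterator) raises StopIteration when the stream is exhausted, while next(iterator, None) returns None instead, which the surrounding parser code can then treat as "no more tokens". A tiny illustration:

it = iter([1, 2])
print(next(it))        # 1
print(next(it))        # 2
print(next(it, None))  # None, instead of raising StopIteration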
I'm using Python 2.6 and PyGTK 2.22.6 from the all-in-one installer on Windows XP, trying to build a single-file executable (via py2exe) for my app.
My problem is that when I run my app as a script (i.e. not built into an .exe file, just as a loose collection of .py files), it uses the native-looking Windows theme, but when I run the built exe I see the default GTK theme.
I know that this problem can be fixed by copying a bunch of files into the dist directory created by py2exe, but everything I've read involves manually copying the data, whereas I want this to be an automatic part of the build process. Furthermore, everything on the topic (including the FAQ) is out of date - PyGTK now keeps its files in C:\Python2x\Lib\site-packages\gtk-2.0\runtime\..., and just copying the lib and etc directories doesn't fix the problem.
My questions are:
I'd like to be able to programmatically find the GTK runtime data in setup.py rather than hard coding paths. How do I do this?
What are the minimal resources I need to include?
Update: I may have almost answered #2 by trial and error. For the "wimp" (i.e. MS Windows) theme to work, I need the files from:
runtime\lib\gtk-2.0\2.10.0\engines\libwimp.dll
runtime\etc\gtk-2.0\gtkrc
runtime\share\icons\*
runtime\share\themes\MS-Windows
...without the runtime prefix, but otherwise with the same directory structure, sitting directly in the dist directory produced by py2exe. But where does the 2.10.0 come from, given that gtk.gtk_version is (2,22,0)?
Answering my own question here, but if anyone knows better feel free to answer too. Some of it seems quite fragile (e.g. version numbers in paths), so comment or edit if you know a better way.
1. Finding the files
Firstly, I use this code to actually find the root of the GTK runtime. This is very specific to how you install the runtime, though, and could probably be improved with a number of checks for common locations:
# GTK file inclusion
import os
import gtk

# The runtime dir is in the same directory as the module:
GTK_RUNTIME_DIR = os.path.join(
    os.path.split(os.path.dirname(gtk.__file__))[0], "runtime")
assert os.path.exists(GTK_RUNTIME_DIR), "Cannot find GTK runtime data"
2. What files to include
This depends on (a) how much of a concern size is, and (b) the context of your application's deployment. By that I mean, are you deploying it to the whole wide world where anyone can have an arbitrary locale setting, or is it just for internal corporate use where you don't need translated stock strings?
If you want Windows theming, you'll need to include:
GTK_THEME_DEFAULT = os.path.join("share", "themes", "Default")
GTK_THEME_WINDOWS = os.path.join("share", "themes", "MS-Windows")
GTK_GTKRC_DIR = os.path.join("etc", "gtk-2.0")
GTK_GTKRC = "gtkrc"
GTK_WIMP_DIR = os.path.join("lib", "gtk-2.0", "2.10.0", "engines")
GTK_WIMP_DLL = "libwimp.dll"
If you want the Tango icons:
GTK_ICONS = os.path.join("share", "icons")
There is also localisation data (which I omit, but you might not want to):
GTK_LOCALE_DATA = os.path.join("share", "locale")
3. Piecing it together
Firstly, here's a function that walks the filesystem tree at a given point and produces output suitable for the data_files option.
def generate_data_files(prefix, tree, file_filter=None):
    """
    Walk the filesystem starting at "prefix" + "tree", producing a list of files
    suitable for the data_files option to setup(). The prefix will be omitted
    from the path given to setup(). For example, if you have

        C:\Python26\Lib\site-packages\gtk-2.0\runtime\etc\...

    ...and you want your "dist\" dir to contain "etc\..." as a subdirectory,
    invoke the function as

        generate_data_files(
            r"C:\Python26\Lib\site-packages\gtk-2.0\runtime",
            r"etc")

    If, instead, you want it to contain "runtime\etc\..." use:

        generate_data_files(
            r"C:\Python26\Lib\site-packages\gtk-2.0",
            r"runtime\etc")

    Empty directories are omitted.

    file_filter(root, fl) is an optional function called with a containing
    directory and filename of each file. If it returns False, the file is
    omitted from the results.
    """
    data_files = []
    for root, dirs, files in os.walk(os.path.join(prefix, tree)):
        to_dir = os.path.relpath(root, prefix)
        if file_filter is not None:
            file_iter = (fl for fl in files if file_filter(root, fl))
        else:
            file_iter = files
        data_files.append((to_dir, [os.path.join(root, fl) for fl in file_iter]))
    non_empties = [(to, fro) for (to, fro) in data_files if fro]
    return non_empties
So now you can call setup() like so:
setup(
    # Other setup args here...
    data_files=(
        # Use the function above...
        generate_data_files(GTK_RUNTIME_DIR, GTK_THEME_DEFAULT) +
        generate_data_files(GTK_RUNTIME_DIR, GTK_THEME_WINDOWS) +
        generate_data_files(GTK_RUNTIME_DIR, GTK_ICONS) +
        # ...or include single files manually
        [
            (GTK_GTKRC_DIR, [
                os.path.join(GTK_RUNTIME_DIR,
                             GTK_GTKRC_DIR,
                             GTK_GTKRC)
            ]),
            (GTK_WIMP_DIR, [
                os.path.join(
                    GTK_RUNTIME_DIR,
                    GTK_WIMP_DIR,
                    GTK_WIMP_DLL)
            ])
        ]
    )
)
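If you do decide to ship the locale data, the file_filter hook is a convenient way to trim it down. As an illustration only (the predicate is naive and the choice of locale is just an example):

def locale_filter(root, fl):
    # keep only the German message catalogues; returning False drops a file
    return (os.sep + "de" + os.sep) in (root + os.sep)

locale_files = generate_data_files(GTK_RUNTIME_DIR, GTK_LOCALE_DATA, locale_filter)
# ...then add locale_files into the data_files expression above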
I am trying to write Python 2.7 code that will
Dynamically load a list / array of modules from a config file at startup
Call functions from those modules. Those functions are also designated in a config file (maybe the same config file, maybe a different one).
The idea is my code will have no idea until startup which modules to load. And the portion that calls the functions will have no idea which functions to call and which modules those functions belong to until runtime.
I'm not succeeding. A simple example of my situation is this:
The following is abc.py, a module that should be dynamically loaded (in my actual application I would have several such modules designated in a list / array in a config file):
def abc_fcn():
    print("Hello World!")

def another_fcn():
    print("BlahBlah")
The following is the .py code which should load abc.py (my actual code would need to import the entire list / array of modules from the config file). Both this .py file and abc.py are in the same folder / directory. Please note comments next to each statement.
module_to_import = "abc" #<- Will normally come from config file
fcn_to_call = "abc.abc_fcn" #<- Will normally come from config file
__import__(module_to_import) #<- No error
print(help(module_to_import)) #<- Works as expected
eval(fcn_to_call)() #<- NameError: name 'abc' is not defined
When I change the second line to the following...
fcn_to_call = "abc_fcn"
...the NameError changes to "name 'abc_fcn' is not defined".
What am I doing wrong? Thanks in advance for the help!
__import__ only returns the module specified; it does not add it to the global namespace. So to accomplish what you want, save the result in a variable and then dynamically retrieve the function you want. That could look like:
fcn_to_call = 'abc_fcn'
mod = __import__(module_to_import)
func = getattr(mod, fcn_to_call)
func()
On a side note, abc is the name of the Abstract Base Classes built-in Python module, although I know you were probably just using it as an example.
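If you prefer, importlib.import_module (available in Python 2.7) does the same job a little more readably and scales naturally to the config-driven case. A sketch with made-up module and function names:

import importlib

# hypothetical values, as they might come out of a config file
modules_to_import = ["mod_one", "mod_two"]
fcns_to_call = [("mod_one", "abc_fcn"), ("mod_two", "another_fcn")]

loaded = {}
for name in modules_to_import:
    loaded[name] = importlib.import_module(name)

for mod_name, fcn_name in fcns_to_call:
    getattr(loaded[mod_name], fcn_name)()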
You should assign the return value of __import__ to a variable abc so that you can actually use it as a module.
abc = __import__(module_to_import)
When I raise my own exceptions in my Python libraries, the exception stack shows the raise line itself as the last item of the stack. This is obviously not an error and is conceptually right, but it points the focus at something that is not useful for debugging when the code is being used externally, for example as a module.
Is there a way to avoid this and force Python to show the previous-to-last stack item as the last one, like the standard Python libraries do?
Due warning: modifying the behaviour of the interpreter is generally frowned upon. And in any case, seeing exactly where an error was raised may be helpful in debugging, especially if a function can raise an error for several different reasons.
If you use the traceback module, and replace sys.excepthook with a custom function, it's probably possible to do this. But making the change will affect error display for the entire program, not just your module, so is probably not recommended.
You could also look at putting code in try/except blocks, then modifying the error and re-raising it. But your time is probably better spent making unexpected errors unlikely, and writing informative error messages for those that could arise.
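A sketch of that second suggestion, with hypothetical names and Python 2 semantics in mind: raising a fresh exception inside the except block starts a new traceback at that point, so the report stops at your module boundary instead of descending into the library internals, at the cost of discarding the original frames:

def saveas(filename):
    try:
        _do_save(filename)  # hypothetical internal helper
    except IOError as exc:
        # the new exception gets a new traceback that starts here
        raise IOError("could not save {!r}: {}".format(filename, exc))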
You can create your own exception hook in Python. Below is an example of the code that I am using.
import sys
import traceback

def exceptionHandler(got_exception_type, got_exception, got_traceback):
    listing = traceback.format_exception(got_exception_type, got_exception, got_traceback)
    # Remove the listing of the raise statement (the raise line).
    del listing[-2]
    filelist = ["org.python.pydev"]  # avoiding the debugger modules.
    listing = [item for item in listing if len([f for f in filelist if f in item]) == 0]
    files = [line for line in listing if line.startswith("  File")]
    if len(files) == 1:
        # only one file, remove the header.
        del listing[0]
    print >> sys.stderr, "".join(listing)
And below are some lines that I have used in my custom exception code.
sys.excepthook = exceptionHandler
raise Exception("My Custom error message.")
In the exceptionHandler method you can add file names or module names to the "filelist" list if you want to ignore any unwanted files; I have ignored the pydev module since I am using the pydev debugger in Eclipse.
The above is used in my own module for a specific purpose. You can modify and use it for your modules.
I'd suggest not using the exception mechanism to validate arguments, as tempting as that is. Coding with exceptions as conditionals is like saying, "crash my app if, as a developer, I don't think of all the bad conditions my provided arguments can cause." Perhaps using exceptions for things that are not only out of your control but also under the control of something else, like the OS, the hardware, or the Python language, would be more logical, I don't know. In practice, however, I do use exceptions the way you're requesting a solution for.
To answer your question, in part, it is just as simple to code thusly:
class MyObject(object):
    def saveas(self, filename):
        if not validate_filename(filename):
            return False
        ...
The caller:
if not myobject.saveas(filename): report_and_retry()
Perhaps not a great answer, just something to think about.
I'm using google's cpplint.py to verify source code in my project meets the standards set forth in the Google C++ Style Guide. We use SCons to build so I'd like to automate the process by having SCons first read in all of our .h and .cc files and then run cpplint.py on them, only building a file if it passes. The issues are as follows:
In SCons how do I pre-hook the build process? No file should be compiled until it passes linting.
cpplint doesn't return an exit code. How do I run a command in SCons and check whether the result matches a regular expression? I.e., how do I get the text being output?
The project is large; whatever the solution to #1 and #2, it should run concurrently when the -j option is passed to SCons.
I need a whitelist that allows some files to skip the lint check.
One way to do this is to monkey-patch the object emitter function, which turns C++ code into linkable object files. There are two such emitter functions: one for static objects and one for shared objects. Here is an example that you can copy-paste into an SConstruct:
import sys

import SCons.Defaults
import SCons.Builder

OriginalShared = SCons.Defaults.SharedObjectEmitter
OriginalStatic = SCons.Defaults.StaticObjectEmitter

def DoLint(env, source):
    for s in source:
        env.Lint(s.srcnode().path + ".lint", s)

def SharedObjectEmitter(target, source, env):
    DoLint(env, source)
    return OriginalShared(target, source, env)

def StaticObjectEmitter(target, source, env):
    DoLint(env, source)
    return OriginalStatic(target, source, env)

SCons.Defaults.SharedObjectEmitter = SharedObjectEmitter
SCons.Defaults.StaticObjectEmitter = StaticObjectEmitter

linter = SCons.Builder.Builder(
    action=['$PYTHON $LINT $LINT_OPTIONS $SOURCE', 'date > $TARGET'],
    suffix='.lint',
    src_suffix='.cpp')

# actual build
env = Environment()
env.Append(BUILDERS={'Lint': linter})
env["PYTHON"] = sys.executable
env["LINT"] = "cpplint.py"
env["LINT_OPTIONS"] = ["--filter=-whitespace,+whitespace/tab", "--verbose=3"]
env.Program("test", Glob("*.cpp"))
There's nothing too tricky about it really. You'd set LINT to the path to your cpplint.py copy, and set appropriate LINT_OPTIONS for your project. The only warty bit is creating a TARGET file if the check passes using the command line date program. If you want to be cross platform then that'd have to change.
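If portability matters, the date command can be replaced with a small Python function action that simply touches the target; SCons accepts plain Python callables in an action list, so something along these lines should work (untested sketch):

def touch_target(target, source, env):
    # create/refresh the .lint stamp file; returning None signals success
    open(str(target[0]), 'w').close()

linter = SCons.Builder.Builder(
    action=['$PYTHON $LINT $LINT_OPTIONS $SOURCE', touch_target],
    suffix='.lint',
    src_suffix='.cpp')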
Adding a whitelist is now just regular Python code, something like:
whitelist = """
src/legacy_code.cpp
src/by_the_PHB.cpp
""".split()

def DoLint(env, source):
    for s in source:
        src = s.srcnode().path
        if src not in whitelist:
            env.Lint(src + ".lint", s)
It seems cpplint.py does output the correct error status. When there are errors it returns 1, otherwise it returns 0. So there's no extra work to do there. If the lint check fails, it will fail the build.
This solution works with -j, but the C++ files may still compile because there are no implicit dependencies between the lint fake output and the object file target. You can add an explicit env.Depends in there to force the object target to depend on the ".lint" output. As is, it's probably enough, since the build itself will fail (scons gives a non-zero return code) if there are any remaining lint issues even after all the C++ compiles. For completeness, the depends code would be something like this in the DoLint function:
def DoLint(env, source, target):
    for i in range(len(source)):
        s = source[i]
        out = env.Lint(s.srcnode().path + ".lint", s)
        env.Depends(target[i], out)
AddPreAction seems to be what you are looking for, from the manpage:
AddPreAction(target, action)
env.AddPreAction(target, action)
Arranges for the specified action to be performed before the specified target is built.
Also see http://benno.id.au/blog/2006/08/27/filtergensplint for an example.
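A minimal sketch of that approach, assuming cpplint.py sits next to the SConstruct (names and options are illustrative):

import sys

env = Environment()
env["PYTHON"] = sys.executable
objects = env.Object(Glob("*.cpp"))
for obj in objects:
    # lint each object's sources before its compile action runs
    env.AddPreAction(obj, "$PYTHON cpplint.py $SOURCES")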
See my GitHub for a pair of SCons scripts complete with an example source tree. It uses Google's cpplint.py.
https://github.com/xyzisinus/scons-tidbits