I'm trying to use the bazel restricted_to attribute for a test.
I want the test to only run on a specific cpu = build.
To make this somewhat more complicated, the cpu type is defined in our
/tools/cpp/CROSSTOOL file (cpu=armhf-debian).
I've had no luck with guessing the syntax of the restricted_to parameter
(my first guess was //cpu:armhf-debian, which just looked for a cpu package)
Any suggestions?
There's not much documentation on restricted_to or the other rules it works with, environment and environment_group. Mostly this is because the use case they address is very specific to Google's repository setup, and we're in the process of replacing them with a more flexible system.
To use restricted_to, you would need to define several environment rules, and an environment_group to contain them, and then specify which environment the test is restricted to, and finally always use the "--target_environment" flag to specify the current environment group. That would look something like this:
environment(name = "x86")
environment(name = "ppc")

environment_group(
    name = "cpus",
    defaults = [":x86"],
    environments = [
        ":x86",
        ":ppc",
    ],
)
cc_test(
    name = "test",
    # other config
    restricted_to = [":ppc"],
)
You could then run the test as so:
bazel test --target_environment=//:ppc //:test
to get the environment checking.
This isn't terribly useful, as whoever is running the test has to also remember to set "--target_environment" properly.
A better way to disable the test, using currently supported code, is to use config_setting and select, like this:
config_setting(
    name = "k8",
    values = {"cpu": "k8"},
)

config_setting(
    name = "ppc",
    values = {"cpu": "ppc"},
)
cc_test(
    name = "test",
    # other config
    srcs = [other sources] + select({
        "//:k8": ["k8_test_src.cpp"],
        "//:ppc": ["ppc_test_src.cpp"],
        "//conditions:default": ["default_test_src.cpp"],
    }),
)
Each config_setting is matched based on the current "--cpu" flag. By changing the files included in the select, you can control which files are included in the test for each cpu setting.
Obviously, these don't have to be in the same package, and the usual Bazel visibility rules apply. See Bazel's src/BUILD for an example of config_setting, and src/test/cpp/BUILD for an example of using it in select.
We're working hard on platforms, which is a better way to describe and query Bazel's execution environment, and we'll make sure to post documentation and a blog post when that's ready for people to test.
I'm using Python 2.6 and PyGTK 2.22.6 from the all-in-one installer on Windows XP, trying to build a single-file executable (via py2exe) for my app.
My problem is that when I run my app as a script (ie. not built into an .exe file, just as a loose collection of .py files), it uses the native-looking Windows theme, but when I run the built exe I see the default GTK theme.
I know that this problem can be fixed by copying a bunch of files into the dist directory created by py2exe, but everything I've read involves manually copying the data, whereas I want this to be an automatic part of the build process. Furthermore, everything on the topic (including the FAQ) is out of date - PyGTK now keeps its files in C:\Python2x\Lib\site-packages\gtk-2.0\runtime\..., and just copying the lib and etc directories doesn't fix the problem.
My questions are:
I'd like to be able to programmatically find the GTK runtime data in setup.py rather than hard coding paths. How do I do this?
What are the minimal resources I need to include?
Update: I may have almost answered #2 by trial-and-error. For the "wimp" (ie. MS Windows) theme to work, I need the files from:
runtime\lib\gtk-2.0\2.10.0\engines\libwimp.dll
runtime\etc\gtk-2.0\gtkrc
runtime\share\icons\*
runtime\share\themes\MS-Windows
...without the runtime prefix, but otherwise with the same directory structure, sitting directly in the dist directory produced by py2exe. But where does the 2.10.0 come from, given that gtk.gtk_version is (2,22,0)?
Answering my own question here, but if anyone knows better feel free to answer too. Some of it seems quite fragile (eg. version numbers in paths), so comment or edit if you know a better way.
1. Finding the files
Firstly, I use this code to actually find the root of the GTK runtime. This is very specific to how you install the runtime, though, and could probably be improved with a number of checks for common locations:
# GTK file inclusion
import os
import gtk

# The runtime dir is in the same directory as the module:
GTK_RUNTIME_DIR = os.path.join(
    os.path.split(os.path.dirname(gtk.__file__))[0], "runtime")

assert os.path.exists(GTK_RUNTIME_DIR), "Cannot find GTK runtime data"
2. What files to include
This depends on (a) how much of a concern size is, and (b) the context of your application's deployment. By that I mean, are you deploying it to the whole wide world where anyone can have an arbitrary locale setting, or is it just for internal corporate use where you don't need translated stock strings?
If you want Windows theming, you'll need to include:
GTK_THEME_DEFAULT = os.path.join("share", "themes", "Default")
GTK_THEME_WINDOWS = os.path.join("share", "themes", "MS-Windows")
GTK_GTKRC_DIR = os.path.join("etc", "gtk-2.0")
GTK_GTKRC = "gtkrc"
GTK_WIMP_DIR = os.path.join("lib", "gtk-2.0", "2.10.0", "engines")
GTK_WIMP_DLL = "libwimp.dll"
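The "2.10.0" in GTK_WIMP_DIR is the GTK binary-compatibility version, which does not track gtk.gtk_version (hence the question above). As a hedge against hard-coding it, one option is to glob for whatever engines directory the runtime actually ships; find_engine_dir below is a hypothetical helper, not part of PyGTK:

```python
import glob
import os

def find_engine_dir(runtime_dir):
    """Locate lib/gtk-2.0/<abi-version>/engines under the GTK runtime,
    without hard-coding the "2.10.0" binary-compat version."""
    pattern = os.path.join(runtime_dir, "lib", "gtk-2.0", "*", "engines")
    matches = glob.glob(pattern)
    if not matches:
        raise RuntimeError("No GTK theme-engine directory under %s" % runtime_dir)
    # Return the path relative to the runtime root,
    # e.g. lib/gtk-2.0/2.10.0/engines
    return os.path.relpath(matches[0], runtime_dir)
```

GTK_WIMP_DIR could then be set to find_engine_dir(GTK_RUNTIME_DIR) instead of a literal version string.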
If you want the Tango icons:
GTK_ICONS = os.path.join("share", "icons")
There is also localisation data (which I omit, but you might not want to):
GTK_LOCALE_DATA = os.path.join("share", "locale")
3. Piecing it together
Firstly, here's a function that walks the filesystem tree at a given point and produces output suitable for the data_files option.
def generate_data_files(prefix, tree, file_filter=None):
    """
    Walk the filesystem starting at "prefix" + "tree", producing a list of files
    suitable for the data_files option to setup(). The prefix will be omitted
    from the path given to setup(). For example, if you have

        C:\Python26\Lib\site-packages\gtk-2.0\runtime\etc\...

    ...and you want your "dist\" dir to contain "etc\..." as a subdirectory,
    invoke the function as

        generate_data_files(
            r"C:\Python26\Lib\site-packages\gtk-2.0\runtime",
            r"etc")

    If, instead, you want it to contain "runtime\etc\..." use:

        generate_data_files(
            r"C:\Python26\Lib\site-packages\gtk-2.0",
            r"runtime\etc")

    Empty directories are omitted.

    file_filter(root, fl) is an optional function called with a containing
    directory and filename of each file. If it returns False, the file is
    omitted from the results.
    """
    data_files = []
    for root, dirs, files in os.walk(os.path.join(prefix, tree)):
        to_dir = os.path.relpath(root, prefix)
        if file_filter is not None:
            file_iter = (fl for fl in files if file_filter(root, fl))
        else:
            file_iter = files
        data_files.append((to_dir, [os.path.join(root, fl) for fl in file_iter]))
    non_empties = [(to, fro) for (to, fro) in data_files if fro]
    return non_empties
So now you can call setup() like so:
setup(
    # Other setup args here...
    data_files=(
        # Use the function above...
        generate_data_files(GTK_RUNTIME_DIR, GTK_THEME_DEFAULT) +
        generate_data_files(GTK_RUNTIME_DIR, GTK_THEME_WINDOWS) +
        generate_data_files(GTK_RUNTIME_DIR, GTK_ICONS) +
        # ...or include single files manually
        [
            (GTK_GTKRC_DIR, [
                os.path.join(GTK_RUNTIME_DIR, GTK_GTKRC_DIR, GTK_GTKRC)
            ]),
            (GTK_WIMP_DIR, [
                os.path.join(GTK_RUNTIME_DIR, GTK_WIMP_DIR, GTK_WIMP_DLL)
            ]),
        ]
    )
)
I am trying to include an external library into my Bazel project.
It is a commercial, closed source software library that comes as a bunch of .h and .a files in a tar file (Linux). There is no public download link, you have to download the archive manually somewhere.
Thus, I checked the library archives into Git (let's not discuss this here) in vendor/ and use
http_archive(
    name = "library_name",
    build_file = "//third_party:library_name.BUILD",
    sha256 = "12165fbcbac............",
    urls = ["file:///home/myuser/git/repo/vendor/libraryname.tar.gz"],
)
I would like to not use an absolute path in urls=, so my coworkers can checkout and build the library without hassle. How can I use a relative path with http_archive()?
I have looked at this answer, but it seems to be not exactly the same problem and the example is incomplete.
A very simple custom repository rule can do that for you. repository_ctx.extract does all the heavy lifting. I wrote this up just now as a barebones example:
def _test_archive_impl(repository_ctx):
    repository_ctx.extract(repository_ctx.attr.src)
    repository_ctx.file(
        "BUILD.bazel",
        repository_ctx.read(repository_ctx.attr.build_file),
    )

test_archive = repository_rule(
    attrs = {
        "src": attr.label(mandatory = True, allow_single_file = True),
        "build_file": attr.label(mandatory = True, allow_single_file = True),
    },
    implementation = _test_archive_impl,
)
For your basic use case, you might not need any changes to that (besides a better name). Adding the ability to pass stripPrefix through would be straightforward. Depending on your use case, a build_file_content attribute (like other repository rules have) instead of build_file might be useful too.
For reference, here's the WORKSPACE I used for testing (the rule definition above was in test.bzl):
load("//:test.bzl", "test_archive")

test_archive(
    name = "test_repository",
    src = "//:test.tar.gz",
    build_file = "//:test.BUILD",
)
As an alternative to all of that, you could just check in the files from the archive and then use new_local_repository instead. Easier to work with in some ways, harder in others.
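For the new_local_repository route, the WORKSPACE entry would look something like this (a sketch; the path assumes the archive has been unpacked into vendor/libraryname and checked in):

```
new_local_repository(
    name = "library_name",
    path = "vendor/libraryname",
    build_file = "//third_party:library_name.BUILD",
)
```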
Is there a way to change the identifier of a project, without directly editing the database?
There is no obvious option to change it in the WebUI.
Apparently this has been an "issue" for a while:
https://www.redmine.org/boards/2/topics/2918?r=48986
Project identifiers are clearly not meant to be modified. It appears that the expectation is that one should delete a project and re-create it with the new identifier. Since this is unacceptable to me, I found a way around it.
The web interface does not allow changing the identifier, and there are a few roadblocks in the Project class itself that prevent one from just opening a console and running something like this (which, as a Rails developer, I would expect to be able to do):
p = Project.find_by(identifier: 'old-identifier')
p.identifier = 'new-identifier'
p.save
However, I have found that one can do this from a production console:
p = Project.where(identifier: 'old-identifier').first
p.instance_eval { self['identifier'] = 'new-identifier' }
p.save
Note: To access a "production console", cd into the Redmine install directory, then run RAILS_ENV=production rails console
(Thanks, Dave)
I'm looking for a way to register something like an end-build callback in SCons. For example, I'm doing something like this right now:
import atexit
import os

import SCons.Script

def print_build_summary():
    failures = SCons.Script.GetBuildFailures()
    if len(failures) > 0:
        notifyExe = 'notify-send --urgency=critical "Build Failed"'
    else:
        notifyExe = 'notify-send --urgency=normal "Build Succeeded"'
    os.system(notifyExe)

atexit.register(print_build_summary)
This only works in non-interactive mode. I'd like to be able to pop up something like this at the end of every build, specifically, when running multiple 'build' commands in an interactive scons session.
The only suggestions I've found, looking around, seem to be to use the dependency system or the AddPostAction call to glom this on. It doesn't seem quite right to me to do it that way, since it's not really a dependency (it's not even really a part of the build, strictly speaking) - it's just a static bit of code that needs to be run at the end of every build.
Thanks!
I don't think there's anything wrong with using the dependency system to resolve this. This is how I normally do it:
def finish(target, source, env):
    raise Exception('DO IT')

finish_command = Command('finish', [], finish)
Depends(finish_command, DEFAULT_TARGETS)
Default(finish_command)
This creates a command that depends on the default targets for its execution (so you know it'll always run last - see DEFAULT_TARGETS in the SCons manual). Hope this helps.
I've been looking into this and haven't found that SCons offers anything that would help. This seems like quite a useful feature; maybe the SCons developers are watching these threads and will take the suggestion...
I looked at the source code and figured out how to do it. I'll try to suggest this change to the SCons developers on scons.org.
If you're interested, the file is engine/SCons/Script/Main.py, and the function is _build_targets(). At the end of this function, you would simply need to add a call to a user-supplied callback. Of course this solution would not be very useful if you build on several different machines on your network, since you would have to port the change everywhere it's needed, but if you're only building on one machine, then maybe you could make the change until/if SCons officially provides a solution.
Let me know if you need help implementing the change, and I'll see what I can do.
Another option would be to wrap the call to SCons and have the wrapper script perform the desired actions, but that wouldn't help in SCons interactive mode.
Hope this helps,
Brady
EDIT:
I created a feature request for this: http://scons.tigris.org/issues/show_bug.cgi?id=2834
I'm using google's cpplint.py to verify source code in my project meets the standards set forth in the Google C++ Style Guide. We use SCons to build so I'd like to automate the process by having SCons first read in all of our .h and .cc files and then run cpplint.py on them, only building a file if it passes. The issues are as follows:
In SCons how do I pre-hook the build process? No file should be compiled until it passes linting.
cpplint doesn't return an exit code. How do I run a command in SCons and check whether the result matches a regular expression? I.e., how do I get the text being output?
The project is large; whatever the solution to #1 and #2, it should run concurrently when the -j option is passed to SCons.
I need a whitelist that allows some files to skip the lint check.
One way to do this is to monkey patch the object emitter function, which turns C++ code into linkable object files. There are 2 such emitter functions; one for static objects and one for shared objects. Here is an example that you can copy paste into a SConstruct:
import sys

import SCons.Defaults
import SCons.Builder

OriginalShared = SCons.Defaults.SharedObjectEmitter
OriginalStatic = SCons.Defaults.StaticObjectEmitter

def DoLint(env, source):
    for s in source:
        env.Lint(s.srcnode().path + ".lint", s)

def SharedObjectEmitter(target, source, env):
    DoLint(env, source)
    return OriginalShared(target, source, env)

def StaticObjectEmitter(target, source, env):
    DoLint(env, source)
    return OriginalStatic(target, source, env)

SCons.Defaults.SharedObjectEmitter = SharedObjectEmitter
SCons.Defaults.StaticObjectEmitter = StaticObjectEmitter

linter = SCons.Builder.Builder(
    action=['$PYTHON $LINT $LINT_OPTIONS $SOURCE', 'date > $TARGET'],
    suffix='.lint',
    src_suffix='.cpp')

# actual build
env = Environment()
env.Append(BUILDERS={'Lint': linter})
env["PYTHON"] = sys.executable
env["LINT"] = "cpplint.py"
env["LINT_OPTIONS"] = ["--filter=-whitespace,+whitespace/tab", "--verbose=3"]
env.Program("test", Glob("*.cpp"))
There's nothing too tricky about it really. You'd set LINT to the path to your cpplint.py copy, and set appropriate LINT_OPTIONS for your project. The only warty bit is creating a TARGET file if the check passes using the command line date program. If you want to be cross platform then that'd have to change.
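A minimal cross-platform replacement for the `date > $TARGET` step (a sketch, not from the original answer) is a Python function action that writes a timestamp stamp file; touch_stamp is a hypothetical name:

```python
import time

def touch_stamp(target, source, env):
    # Write the current time into the stamp file so the ".lint" target
    # exists and is newer than its source after a successful lint run.
    with open(str(target[0]), "w") as f:
        f.write(time.ctime() + "\n")
    return 0  # zero tells SCons the action succeeded
```

The linter's action list would then become action=['$PYTHON $LINT $LINT_OPTIONS $SOURCE', touch_stamp], with no dependency on an external date program.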
Adding a whitelist is now just regular Python code, something like:
whitelist = """"
src/legacy_code.cpp
src/by_the_PHB.cpp
"""".split()
def DoLint(env, source):
for s in source:
src = s.srcnode().path
if src not in whitelist:
env.Lint( + ".lint", s)
It seems cpplint.py does output the correct error status. When there are errors it returns 1, otherwise it returns 0. So there's no extra work to do there. If the lint check fails, it will fail the build.
This solution works with -j, but the C++ files may compile anyway, as there are no implicit dependencies between the lint fake output and the object file target. You can add an explicit env.Depends in there to force the ".lint" output to depend on the object target. As is, it's probably enough, since the build itself will fail (scons gives a non-zero return code) if there are any remaining lint issues even after all the C++ compiles. For completeness, the depends code would be something like this in the DoLint function:
def DoLint(env, source, target):
    for i in range(len(source)):
        s = source[i]
        out = env.Lint(s.srcnode().path + ".lint", s)
        env.Depends(target[i], out)
AddPreAction seems to be what you are looking for, from the manpage:
AddPreAction(target, action)
env.AddPreAction(target, action)
Arranges for the specified action to be performed before the specified target is built.
Also see http://benno.id.au/blog/2006/08/27/filtergensplint for an example.
See my GitHub for a pair of SCons scripts complete with an example source tree. It uses Google's cpplint.py.
https://github.com/xyzisinus/scons-tidbits