Bazel: relative local path as url in http_archive() - c++

I am trying to include an external library into my Bazel project.
It is a commercial, closed source software library that comes as a bunch of .h and .a files in a tar file (Linux). There is no public download link, you have to download the archive manually somewhere.
Thus, I checked the library archives into Git (let's not discuss this here) in vendor/ and use
http_archive(
    name = "library_name",
    build_file = "//third_party:library_name.BUILD",
    sha256 = "12165fbcbac............",
    urls = ["file:///home/myuser/git/repo/vendor/libraryname.tar.gz"],
)
I would like to avoid the absolute path in urls=, so my coworkers can check out and build the project without hassle. How can I use a relative path with http_archive()?
I have looked at this answer, but it doesn't seem to be quite the same problem, and the example is incomplete.

A very simple custom repository rule can do that for you. repository_ctx.extract does all the heavy lifting. I wrote this up just now as a barebones example:
def _test_archive_impl(repository_ctx):
    repository_ctx.extract(repository_ctx.attr.src)
    repository_ctx.file("BUILD.bazel", repository_ctx.read(repository_ctx.attr.build_file))

test_archive = repository_rule(
    attrs = {
        "src": attr.label(mandatory = True, allow_single_file = True),
        "build_file": attr.label(mandatory = True, allow_single_file = True),
    },
    implementation = _test_archive_impl,
)
For your basic use case, you might not need any changes to that (besides a better name). Adding the ability to pass stripPrefix through would be straightforward (a sketch follows below). Depending on your use case, a build_file_content attribute like other rules have, instead of build_file, might be useful too.
For reference, here's the WORKSPACE I used for testing (the rule definition above was in test.bzl):
load("//:test.bzl", "test_archive")
test_archive(
name = "test_repository",
src = "//:test.tar.gz",
build_file = "//:test.BUILD",
)
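If you do want prefix stripping, here's a sketch of the same rule with a strip_prefix attribute forwarded to repository_ctx.extract (untested; the attribute name is mine):
def _test_archive_impl(repository_ctx):
    # stripPrefix is forwarded to extract(); an empty string means "strip nothing".
    repository_ctx.extract(
        repository_ctx.attr.src,
        stripPrefix = repository_ctx.attr.strip_prefix,
    )
    repository_ctx.file("BUILD.bazel", repository_ctx.read(repository_ctx.attr.build_file))

test_archive = repository_rule(
    attrs = {
        "src": attr.label(mandatory = True, allow_single_file = True),
        "build_file": attr.label(mandatory = True, allow_single_file = True),
        "strip_prefix": attr.string(default = ""),
    },
    implementation = _test_archive_impl,
)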
As an alternative to all of that, you could just check in the extracted files from the archive and then use new_local_repository instead. It's easier to work with in some ways, harder in others.
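A minimal sketch of that alternative, assuming the extracted contents are checked in under vendor/libraryname/ (the path here is illustrative):
new_local_repository(
    name = "library_name",
    path = "vendor/libraryname",
    build_file = "//third_party:library_name.BUILD",
)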

Related

Gtk2 gui looks different after compiling with py2exe to make an exe file [duplicate]

I'm using Python 2.6 and PyGTK 2.22.6 from the all-in-one installer on Windows XP, trying to build a single-file executable (via py2exe) for my app.
My problem is that when I run my app as a script (ie. not built into an .exe file, just as a loose collection of .py files), it uses the native-looking Windows theme, but when I run the built exe I see the default GTK theme.
I know that this problem can be fixed by copying a bunch of files into the dist directory created by py2exe, but everything I've read involves manually copying the data, whereas I want this to be an automatic part of the build process. Furthermore, everything on the topic (including the FAQ) is out of date - PyGTK now keeps its files in C:\Python2x\Lib\site-packages\gtk-2.0\runtime\..., and just copying the lib and etc directories doesn't fix the problem.
My questions are:
1. I'd like to be able to programmatically find the GTK runtime data in setup.py rather than hard-coding paths. How do I do this?
2. What are the minimal resources I need to include?
Update: I may have almost answered #2 by trial-and-error. For the "wimp" (ie. MS Windows) theme to work, I need the files from:
runtime\lib\gtk-2.0\2.10.0\engines\libwimp.dll
runtime\etc\gtk-2.0\gtkrc
runtime\share\icons\*
runtime\share\themes\MS-Windows
...without the runtime prefix, but otherwise with the same directory structure, sitting directly in the dist directory produced by py2exe. But where does the 2.10.0 come from, given that gtk.gtk_version is (2,22,0)?
Answering my own question here, but if anyone knows better feel free to answer too. Some of it seems quite fragile (eg. version numbers in paths), so comment or edit if you know a better way.
1. Finding the files
Firstly, I use this code to actually find the root of the GTK runtime. This is very specific to how you install the runtime, though, and could probably be improved with a number of checks for common locations:
# GTK file inclusion
import os
import gtk

# The runtime dir is in the same directory as the module:
GTK_RUNTIME_DIR = os.path.join(
    os.path.split(os.path.dirname(gtk.__file__))[0], "runtime")
assert os.path.exists(GTK_RUNTIME_DIR), "Cannot find GTK runtime data"
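As a sketch of the "checks for common locations" improvement mentioned above (the candidate paths are my guesses, not canonical install locations), you could fall back through a list:
# Hypothetical fallback list; adjust for your own install locations.
_CANDIDATES = [
    os.path.join(os.path.split(os.path.dirname(gtk.__file__))[0], "runtime"),
    r"C:\Python26\Lib\site-packages\gtk-2.0\runtime",
    r"C:\GTK",
]
GTK_RUNTIME_DIR = next((d for d in _CANDIDATES if os.path.exists(d)), None)
assert GTK_RUNTIME_DIR is not None, "Cannot find GTK runtime data"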
2. What files to include
This depends on (a) how much of a concern size is, and (b) the context of your application's deployment. By that I mean, are you deploying it to the whole wide world where anyone can have an arbitrary locale setting, or is it just for internal corporate use where you don't need translated stock strings?
If you want Windows theming, you'll need to include:
GTK_THEME_DEFAULT = os.path.join("share", "themes", "Default")
GTK_THEME_WINDOWS = os.path.join("share", "themes", "MS-Windows")
GTK_GTKRC_DIR = os.path.join("etc", "gtk-2.0")
GTK_GTKRC = "gtkrc"
GTK_WIMP_DIR = os.path.join("lib", "gtk-2.0", "2.10.0", "engines")
GTK_WIMP_DLL = "libwimp.dll"
If you want the Tango icons:
GTK_ICONS = os.path.join("share", "icons")
There is also localisation data (which I omit, but you might not want to):
GTK_LOCALE_DATA = os.path.join("share", "locale")
3. Piecing it together
Firstly, here's a function that walks the filesystem tree at a given point and produces output suitable for the data_files option.
def generate_data_files(prefix, tree, file_filter=None):
    """
    Walk the filesystem starting at "prefix" + "tree", producing a list of files
    suitable for the data_files option to setup(). The prefix will be omitted
    from the path given to setup(). For example, if you have
        C:\Python26\Lib\site-packages\gtk-2.0\runtime\etc\...
    ...and you want your "dist\" dir to contain "etc\..." as a subdirectory,
    invoke the function as
        generate_data_files(
            r"C:\Python26\Lib\site-packages\gtk-2.0\runtime",
            r"etc")
    If, instead, you want it to contain "runtime\etc\..." use:
        generate_data_files(
            r"C:\Python26\Lib\site-packages\gtk-2.0",
            r"runtime\etc")
    Empty directories are omitted.
    file_filter(root, fl) is an optional function called with a containing
    directory and filename of each file. If it returns False, the file is
    omitted from the results.
    """
    data_files = []
    for root, dirs, files in os.walk(os.path.join(prefix, tree)):
        to_dir = os.path.relpath(root, prefix)
        if file_filter is not None:
            file_iter = (fl for fl in files if file_filter(root, fl))
        else:
            file_iter = files
        data_files.append((to_dir, [os.path.join(root, fl) for fl in file_iter]))
    non_empties = [(to, fro) for (to, fro) in data_files if fro]
    return non_empties
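For instance, a hypothetical filter that skips editor backup files when collecting the icon data:
def no_backups(root, fl):
    # Drop editor backup files like "foo.png~".
    return not fl.endswith("~")

icon_files = generate_data_files(GTK_RUNTIME_DIR, GTK_ICONS, no_backups)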
So now you can call setup() like so:
setup(
    # Other setup args here...
    data_files = (
        # Use the function above...
        generate_data_files(GTK_RUNTIME_DIR, GTK_THEME_DEFAULT) +
        generate_data_files(GTK_RUNTIME_DIR, GTK_THEME_WINDOWS) +
        generate_data_files(GTK_RUNTIME_DIR, GTK_ICONS) +
        # ...or include single files manually
        [
            (GTK_GTKRC_DIR, [
                os.path.join(GTK_RUNTIME_DIR, GTK_GTKRC_DIR, GTK_GTKRC)
            ]),
            (GTK_WIMP_DIR, [
                os.path.join(GTK_RUNTIME_DIR, GTK_WIMP_DIR, GTK_WIMP_DLL)
            ]),
        ]
    )
)

Modifying include paths in custom C++ Bazel rule

I'm building some custom C++ Bazel rules, and I need to add support for modifying the include paths of the C++ headers, the same way cc_library headers can be modified with strip_include_prefix.
My custom rule is implemented using ctx.actions.run like this:
custom_cc_library = rule(
    _impl,
    attrs = {
        ...
        "hdrs": attr.label_list(allow_files = [".h"]),
        "strip_include_prefix": attr.string(),
        ...
    },
)
Then within _impl I call the following function to rewrite hdrs:
def _strip_prefix(ctx, hdrs, prefix):
    stripped = []
    for hdr in hdrs:
        if hdr.path.startswith(prefix):
            stripped_file = ctx.actions.declare_file(hdr.path[len(prefix):])
            ctx.actions.run_shell(
                command = "mkdir -p $(dirname {dest}) && cp {src} {dest}".format(
                    src = hdr.path,
                    dest = stripped_file.path,
                ),
                inputs = [hdr],
                outputs = [stripped_file],
            )
            stripped.append(stripped_file)
    return stripped
This doesn't work because Bazel won't copy files outside of their package directory, and besides it feels like the totally wrong approach to implementing this.
What is the best way to modify C++ header directories for dependencies to achieve the same functionality as cc_library's parameter strip_include_prefix?
You can create the header layout you want in a directory within your package, and then add that directory to the include path via the includes parameter of cc_common.create_compilation_context. That's basically what the cc_library implementation boils down to.
The cc_library implementation names it _virtual_includes, and creates it using getUniqueDirectoryArtifact which is a helper in the Java code to create it "in a directory that is unique to the rule". I use something like "_%s_virtual_includes" % ctx.label.name to get similar functionality from Starlark, and give a hint that it's a private implementation detail which other rules should avoid relying on.
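A rough Starlark sketch of that approach (untested; the helper logic is mine, and it symlinks headers into the virtual directory rather than copying them):
def _custom_cc_library_impl(ctx):
    virtual = "_%s_virtual_includes" % ctx.label.name
    prefix = ctx.attr.strip_include_prefix.strip("/")
    pkg = ctx.label.package
    hdrs = []
    for hdr in ctx.files.hdrs:
        # Header path relative to this package, mirroring cc_library's
        # package-relative interpretation of strip_include_prefix.
        rel = hdr.short_path[len(pkg) + 1:] if pkg else hdr.short_path
        if prefix and rel.startswith(prefix + "/"):
            rel = rel[len(prefix) + 1:]
        out = ctx.actions.declare_file(virtual + "/" + rel)
        ctx.actions.symlink(output = out, target_file = hdr)
        hdrs.append(out)
    # The virtual includes directory lives under bin_dir; expose it on the include path.
    include_path = "/".join([p for p in [ctx.bin_dir.path, pkg, virtual] if p])
    compilation_context = cc_common.create_compilation_context(
        headers = depset(hdrs),
        includes = depset([include_path]),
    )
    return [CcInfo(compilation_context = compilation_context)]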

Use of [bazel] restricted_to attribute

I'm trying to use the bazel restricted_to attribute for a test.
I want the test to only run on a specific cpu = build.
To make this somewhat more complicated, the cpu type is defined in our
/tools/cpp/CROSSTOOL file (cpu=armhf-debian).
I've had no luck guessing the syntax of the restricted_to parameter (my first guess was //cpu:armhf-debian, which just looked for a cpu package).
Any suggestions?
There's not a lot of documentation on restricted_to, and the other rules it works with, environment and environment_group. Mostly this is because the use case they are for is very specific to Google's repository setup, and we're in the process of replacing them with a more flexible system.
To use restricted_to, you would need to define several environment rules, and an environment_group to contain them, and then specify which environment the test is restricted to, and finally always use the "--target_environment" flag to specify the current environment group. That would look something like this:
environment(name = "x86")
environment(name = "ppc")

environment_group(
    name = "cpus",
    defaults = [":x86"],
    environments = [
        ":x86",
        ":ppc",
    ],
)

cc_test(
    name = "test",
    # other config
    restricted_to = [":ppc"],
)
You could then run the test like so:
bazel test --target_environment=//:ppc //:test
to get the environment checking.
This isn't terribly useful, as whoever is running the test has to also remember to set "--target_environment" properly.
A better way to disable the test, using currently supported code, is to use config_setting and select, like this:
config_setting(
    name = "k8",
    values = {"cpu": "k8"},
)

config_setting(
    name = "ppc",
    values = {"cpu": "ppc"},
)

cc_test(
    name = "test",
    # other config
    srcs = [
        # other sources
    ] + select({
        "//:k8": ["x86_test_src.cpp"],
        "//:ppc": ["ppc_test_src.cpp"],
        "//conditions:default": ["default_test_src.cpp"],
    }),
)
config_setting will take a value based on the current "--cpu" flag. By changing the files included in the select, you can control what files are included in the test for each cpu setting.
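For example, building with an explicit --cpu picks the matching branch:
bazel test --cpu=ppc //:test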
Obviously, these don't have to be in the same package, and the usual Bazel visibility rules apply. See Bazel's src/BUILD for an example of config_setting, and src/test/cpp/BUILD for an example of using it in select.
We're working hard on platforms, which is a better way to describe and query Bazel's execution environment, and we'll make sure to post documentation and a blog post when that's ready for people to test.

Coapp / autopkg : multiple include folders in /build/native/include/

I am trying to build a nuget package via CoApp tool for c++.
The package needs to embed 3 folders for use when compiling a .cpp that depends on it.
So I want an internal include structure as follows:
/build/native/include/lib1,
/build/native/include/lib2,
/build/native/include/lib3
My question: how do I add several include folders under /build/native/include/?
I tried:
Multiple blocks (varying lib1, lib2, lib3):
nestedInclude += {
    #destination = ${d_include}lib1;
    ".\lib1\**\*.hpp", ".\lib1\**\*.h"
};
Multiple blocks (varying lib1, lib2, lib3):
nestedInclude {
    #destination = ${d_include}lib1;
    ".\lib1\**\*.hpp", ".\lib1\**\*.h"
};
but it seems CoApp accumulates the .h/.hpp files across the blocks (depending on whether the += operator is used) and, at the end, adds all of them under the last #destination tag value. So I get a single entry: /build/native/include/lib3.
The destination is overwritten in your example, which is why everything ends up flat under the last given address. To handle this you can instead create multiple nested includes:
nested1Include: {
    #destination = ${d_include}lib1;
    ".\lib1\**\*.hpp", ".\lib1\**\*.h"
}

nested2Include: {
    #destination = ${d_include}lib2;
    ".\lib2\**\*.hpp", ".\lib2\**\*.h"
}
I've just hit the same issue, and Gorgar's answer set me on the right track, thank you. But I do have one additional piece of information. I only had one underlying directory, and in that case CoApp still flattened everything. The trick is to make it think it has two, even if it doesn't, like this:
include1: {
    #destination = ${d_include}NativeLogger;
    "include\NativeLogger\*.h"
};

// The use of a second include spec here which doesn't actually address any files
// is to force CoApp to create the substructure of the first include. There is some
// discussion on the net about bugginess related to include structures, but this
// seems to fix it.
include2: { include\* };

Setting up SCons to Autolint

I'm using google's cpplint.py to verify source code in my project meets the standards set forth in the Google C++ Style Guide. We use SCons to build so I'd like to automate the process by having SCons first read in all of our .h and .cc files and then run cpplint.py on them, only building a file if it passes. The issues are as follows:
1. In SCons how do I pre-hook the build process? No file should be compiled until it passes linting.
2. cpplint doesn't return an exit code. How do I run a command in SCons and check whether the result matches a regular expression, i.e., how do I get the text being output?
3. The project is large; whatever the solution to #1 and #2, it should run concurrently when the -j option is passed to SCons.
4. I need a whitelist that allows some files to skip the lint check.
One way to do this is to monkey patch the object emitter function, which turns C++ code into linkable object files. There are 2 such emitter functions; one for static objects and one for shared objects. Here is an example that you can copy paste into a SConstruct:
import sys
import SCons.Defaults
import SCons.Builder

OriginalShared = SCons.Defaults.SharedObjectEmitter
OriginalStatic = SCons.Defaults.StaticObjectEmitter

def DoLint(env, source):
    for s in source:
        env.Lint(s.srcnode().path + ".lint", s)

def SharedObjectEmitter(target, source, env):
    DoLint(env, source)
    return OriginalShared(target, source, env)

def StaticObjectEmitter(target, source, env):
    DoLint(env, source)
    return OriginalStatic(target, source, env)

SCons.Defaults.SharedObjectEmitter = SharedObjectEmitter
SCons.Defaults.StaticObjectEmitter = StaticObjectEmitter

linter = SCons.Builder.Builder(
    action = ['$PYTHON $LINT $LINT_OPTIONS $SOURCE', 'date > $TARGET'],
    suffix = '.lint',
    src_suffix = '.cpp')

# actual build
env = Environment()
env.Append(BUILDERS = {'Lint': linter})
env["PYTHON"] = sys.executable
env["LINT"] = "cpplint.py"
env["LINT_OPTIONS"] = ["--filter=-whitespace,+whitespace/tab", "--verbose=3"]
env.Program("test", Glob("*.cpp"))
There's nothing too tricky about it really. You'd set LINT to the path of your cpplint.py copy and set appropriate LINT_OPTIONS for your project. The only warty bit is creating a TARGET file when the check passes by running the command-line date program. If you want to be cross-platform, that'd have to change.
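For instance, a cross-platform sketch (the stamp function is mine, not part of the original): SCons accepts a Python function in an action list, so the date command could be replaced like this:
def touch_lint_stamp(target, source, env):
    # Write the stamp file that records a passing lint check.
    with open(str(target[0]), "w") as f:
        f.write("lint passed\n")
    return 0  # zero signals success to SCons

linter = SCons.Builder.Builder(
    action = ['$PYTHON $LINT $LINT_OPTIONS $SOURCE', touch_lint_stamp],
    suffix = '.lint',
    src_suffix = '.cpp')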
Adding a whitelist is now just regular Python code, something like:
whitelist = """"
src/legacy_code.cpp
src/by_the_PHB.cpp
"""".split()
def DoLint(env, source):
for s in source:
src = s.srcnode().path
if src not in whitelist:
env.Lint( + ".lint", s)
It seems cpplint.py does output the correct error status. When there are errors it returns 1, otherwise it returns 0. So there's no extra work to do there. If the lint check fails, it will fail the build.
This solution works with -j, but the C++ files may compile even when linting fails, as there are no implicit dependencies between the lint fake output and the object file target. You can add an explicit env.Depends to force the object target to depend on the ".lint" output. As is, it's probably enough, since the build itself will fail (scons gives a non-zero return code) if any lint issues remain even after all the C++ compiles. For completeness, the depends code would be something like this in the DoLint function:
def DoLint(env, source, target):
    for i in range(len(source)):
        s = source[i]
        out = env.Lint(s.srcnode().path + ".lint", s)
        env.Depends(target[i], out)
AddPreAction seems to be what you are looking for. From the manpage:
AddPreAction(target, action)
env.AddPreAction(target, action)
Arranges for the specified action to be performed before the specified target is built.
Also see http://benno.id.au/blog/2006/08/27/filtergensplint for an example.
See my github for a pair of scons scripts complete with an example source tree. It uses Google's cpplint.py.
https://github.com/xyzisinus/scons-tidbits