How to build the same library more than once in Yocto?

I have two applications that both use the same library, but the library should be built with a flag enabled for one and disabled for the other. It is a static library, so there won't be any conflict at runtime. The library is built separately from the applications, and in each configuration it is built under a different name, which the makefile takes care of. This can all be done manually, but now I need to add it to Yocto.
In Yocto, how can I build the same library twice, in two separate configurations?

If you're limited to a .bbappend and you don't want to duplicate the recipe, you can add some additional tasks. In these additional tasks (which run after the regular installation) you repeat configuration, compilation, and installation, but with whatever extra actions or variable overrides you need. Something like this:
do_special_configure() {
    # Start from a clean tree, then configure again with the override set
    oe_runmake clean
    export MAGIC_VARIABLE="magic value"
    do_configure
}
do_special_compile() {
    export MAGIC_VARIABLE="magic value"
    do_compile
}
fakeroot do_special_install() {
    export MAGIC_VARIABLE="magic value"
    do_install
}
do_special_configure[dirs] = "${B}"
do_special_compile[dirs] = "${B}"
do_special_install[dirs] = "${B}"
addtask special_configure after do_install before do_special_compile
addtask special_compile after do_special_configure before do_special_install
addtask special_install after do_special_compile before do_package do_populate_sysroot

If the different configurations really produce different installed files, then you'll have no problem adding two separate recipes that just happen to have the same SRC_URI.
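For example, a minimal sketch (the recipe names, the flag, and the .inc layout are hypothetical) could factor the shared parts into an .inc file:
mylib.inc:
SRC_URI = "git://example.com/mylib.git;protocol=https;branch=master"
S = "${WORKDIR}/git"
mylib-flag-on_1.0.bb:
require mylib.inc
# build the static library with the feature flag enabled
EXTRA_OEMAKE += "MYFLAG=1"
mylib-flag-off_1.0.bb:
require mylib.inc
# build it again, under its other name, with the flag disabled
EXTRA_OEMAKE += "MYFLAG=0"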

Well, you can't, not without two recipes.
Your two applications can't influence in any way how the library is built. Thus, as long as both applications should be available for the same machine/distro combination, your options basically are:
Create a second recipe (in this case likely in your layer, though preferably in the upstream layer). If the recipe you're copying uses an .inc file and a small .bb that mostly includes it, you can easily do the same. Otherwise, your options are either to copy the recipe and modify it, or to have your new recipe (as sketched after these options)
require <PATH_FROM COREBASE-TO-THE-UPSTREAM-RECIPE>/upstream-recipe.bb
If possible, modify the upstream recipe (preferably using a .bbappend) to simultaneously build both versions that you require.
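As a sketch, a hypothetical second recipe that reuses the upstream one could be as small as (the flag name is a placeholder):
mylib-alt_1.0.bb:
require <PATH_FROM COREBASE-TO-THE-UPSTREAM-RECIPE>/upstream-recipe.bb
# override only what differs in this configuration
EXTRA_OEMAKE += "MYFLAG=1"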

Related

Is there a way to make Bazel work with transitive repositories?

I work with a massive codebase distributed across many repositories and using even more third-party dependencies. The goal is to make the build hermetic, and I am contemplating using Bazel to achieve it. On the one hand, Bazel has the git_repository rule to refer to external repos in the WORKSPACE file. On the other hand, WORKSPACE files are not loaded recursively, so to get to indirect dependencies I need to build an all-inclusive WORKSPACE file somehow. I wonder if somebody has already tackled this problem using Bazel or some other existing tools. Is there a way to expand the WORKSPACE as part of the build? Maybe WORKSPACE can #include other (generated) files?
WORKSPACE files can load and then call macros, which gives similar functionality to #include.
A common pattern is for each project to provide a macro which calls the macros of the projects it depends on and creates *_archive rules for the files it depends on directly, so that everything needed to build is instantiated. For example, protobuf has protobuf_deps to implement this pattern. If you create a repository with protobuf (using git_repository, or http_archive, or any of the other repository rules), then you can load that macro and call it, and you'll automatically get all the transitive dependencies.
For example (from Chromium):
load("#bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")
# This com_google_protobuf repository is required for proto_library rule.
# It provides the protocol compiler binary (i.e., protoc).
http_archive(
    name = "com_google_protobuf",
    strip_prefix = "protobuf-master",
    urls = ["https://github.com/protocolbuffers/protobuf/archive/master.zip"],
)
load("#com_google_protobuf//:protobuf_deps.bzl", "protobuf_deps")
protobuf_deps()
I'm showing http_archive because it's easier to work with, but you can easily change it to git_repository if you want.
Another common pattern which makes this all work is the way protobuf_deps checks native.existing_rule before creating each http_archive. That allows you to instantiate a specific version (or from a specific source, etc) of the dependency directly in your WORKSPACE file to override the one protobuf would otherwise bring in.
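A simplified sketch of that guard pattern (the macro name here is illustrative; protobuf_deps does essentially this for each dependency it creates):
load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive")

def my_project_deps():
    # Only create the repository if the WORKSPACE hasn't already defined
    # one with this name, so a user-level definition takes precedence.
    if not native.existing_rule("com_google_protobuf"):
        http_archive(
            name = "com_google_protobuf",
            strip_prefix = "protobuf-master",
            urls = ["https://github.com/protocolbuffers/protobuf/archive/master.zip"],
        )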

How do I make a configuration in DUB that turns multiple input D files with main functions into multiple output executables?

I can think of at least a few use-cases or scenarios for this:
An "examples" configuration that builds multiple example programs.
A project with a client program and a host program.
A project with one main program and some associated utility programs (e.g., gcc).
More generally, if the project has multiple output executables with the same or similar set of dependencies, it may make sense to build them all at once. It is easier for a user to decide which executable to run than to figure out how to make DUB create the executable they want (they might not be a D developer familiar with DUB). It is convenient for me as a D developer too, because I can run fewer build commands to get everything I want.
As an example, I might have a project layout like this:
dub.json
.gitignore
source/common/foo.d (Code called by all examples)
source/examples/simple_example.d (Build input with main function)
source/examples/complex_example.d (Build input with main function)
source/examples/clever_example.d (Build input with main function)
bin/simple_example.exe (Output executable)
bin/complex_example.exe (Output executable)
bin/clever_example.exe (Output executable)
Another project might look like this:
dub.json
.gitignore
source/common/netcode.d (Code called by all programs)
source/common/logic.d (Code called by all programs)
source/executables/host-daemon.d (Does privileged things for the server)
source/executables/server.d (Unprivileged network endpoint)
source/executables/client.d (Queries the server)
bin/host-daemon (Output executable)
bin/server (Output executable)
bin/client (Output executable)
In either project, I would want to build all executables with a single invocation of DUB. Ideally, this would all be managed from one dub.json file, due to the interrelated nature of the inputs and outputs.
It seems like subPackages might be able to do this, but managing it from one dub.json file is "generally discouraged":
The sub directories /component1 and /component2 then contain normal packages and can be referred to as "mylib:component1" and "mylib:component2" from outside projects. To refer to sub packages within the same repository use the "*" version specifier.
It is also possible to define the sub packages within the root package file, but note that it is generally discouraged to put the source code of multiple sub packages into the same source folder. Doing so can lead to hidden dependencies to sub packages that haven't been explicitly stated in the "dependencies" section. These hidden dependencies can then result in build errors in conjunction with certain build modes or dependency trees that may be hard to understand.
Can DUB build multiple executables in one go, as above, and if so, what is the most recommended way and why?
I managed to do it using configurations, without subpackages.
First, you prefix each of your main functions with a version() condition.
Suppose you have three different files, each with its own main.
one.d:
....
version(one)
void main() {
    ....
}
two.d:
....
version(two)
void main() {
    ....
}
three.d:
....
version(three)
void main() {
    ....
}
In your DUB project file (I use the SDL format here), you can define three configurations, such that each configuration defines a different version and outputs a different target file.
dub.sdl:
....
configuration "one" {
versions "one"
targetType "executable"
targetName "one"
}
configuration "two" {
versions "two"
targetType "executable"
targetName "two"
}
configuration "three" {
versions "three"
targetType "executable"
targetName "three"
}
If you just call dub build, it will use the first configuration, but you can pass a different configuration:
dub build --config=two
"Subpackages are intended to modularize your package from the outside".
Instead, you should be able to create multiple configurations to do what you want, like dub itself does (it defines multiple libraries, but you could just as easily define multiple executables).
I'm not sure if there is a command to build all configurations at once. Maybe --combined (the documentation isn't clear, but I think it's actually for building all source files with a single compiler invocation rather than generating object files one by one).
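Failing that, a trivial wrapper can loop over the configurations (a sketch; it assumes the configuration names from the answer above):
for cfg in one two three; do
    dub build --config=$cfg
done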

waf: nested projects and _cache.py: not supported?

I am converting a project from autotools to waf, with the hope that it can easily be compiled on Windows as well.
I am using a superproject with two child folders that are two projects.
One of them is a library, the other, a program, like this:
superproject/wscript
superproject/libraryproject/wscript
superproject/programproject/wscript
It seems that waf has terrible support for subprojects. I have a wscript in each of these directories.
I recurse from the superproject into the two subprojects, but the _cache.py file is shared by both. This has the following side effects (issues):
When using the boost tool, I had to use it like this to avoid name collisions:
# In library project
cfg.check_boost('boost_program_options', uselib_store='BOOST_LIBRARYPROJECT')
# In program project
cfg.check_boost('boost_program_options', uselib_store='BOOST_PROGRAMPROJECT')
boost-libs and boost-includes command line options are also lost by default, so I have to set them manually, like this:
cfg.env.LIBPATH_BOOST_PROGRAMPROJECT = cfg.options.boost_libs
...
The _cache.py file is overwritten by programproject/wscript, losing all the configuration for the flags.
Questions:
Is there any good way to nest projects and avoid at least issue 2?
Is there any reasonable way to avoid both issues that doesn't require a script and building the projects separately?
It turns out the configuration file is not written twice.
My mistake was to do this:
cfg.env = ConfigSet()
I wanted a new and clean ConfigSet, but doing that in both projects caused the first set of flags to be lost.
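If the goal is a fresh-looking environment that doesn't clobber the shared one, a sketch based on waf's standard ConfigSet API (the environment name is illustrative) would be:
# create a named environment derived from the current one, so the
# flags configured so far survive in the parent ConfigSet
cfg.setenv('libraryproject', env=cfg.env.derive())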
Since the environment seems to be shared among all project configurations, is it good style to name the variables with custom names? For example, instead of using:
cfg.check_boost('program_options')
Should I use:
cfg.check_boost('program_options', uselib_store='BOOST_MYPROGRAMPROJECT')
Is this good style or it's usually done in another way?
Can this be done in a cleaner way by deriving ConfigSets?

Can I use SCons aliasing for choosing SConscripts to run?

I'm using SCons to build a very large project, with many buildable sub-projects. I can easily use keyword commands like scons group=ai to build the AI sub-projects with if statements (choosing the right SConscripts based on the keyword command), but I want to make it as easy as possible for others to use scons. Ideally, I'd like to use it like so: scons ai to build the AI components. However, the only single-word command functionality I've found in SCons so far is aliasing, and all the examples are about changing the target. This is not what I want. Since I have a very large project with multiple sub-SConscript files to build the subprojects, I want to call the SConscripts selectively. I've tried code like so:
env.Alias("ai", SConscript("ai/SConscript", 'env'))
but this calls the AI SConscript every time, regardless of whether I use the "ai" alias or a different one. Does anyone know if it is possible to use aliasing this way to selectively call SConscripts based on the alias?
As you mentioned, the Alias() function is only used for targets. I can think of two ways to solve this.
Alias() can be called multiple times for the same alias with different targets, so you could call it for every target in each SConscript; then scons ai would build everything in that SConscript. Here's an example of what I mean:
ai/SConscript:
# targets, etc
env.Alias("ai", target1)
env.Alias("ai", target2)
...
env.Alias("ai", targetn)
Another option would be to put some logic in your root SConstruct so that it only calls sub-project SConscripts based on a command line argument. This option would require you to use a command line argument of the form group=ai, for example:
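A sketch of that root-level logic (the sub-project names are illustrative):
# SConstruct
env = Environment()
group = ARGUMENTS.get('group', 'all')
if group in ('ai', 'all'):
    SConscript('ai/SConscript', exports='env')
if group in ('physics', 'all'):
    SConscript('physics/SConscript', exports='env')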

How to install a library with a different name in the waf build system?

I want to build a library with waf, but install it under a different name than the target name. It seems you can do
bld.shlib(..., install_path='${PREFIX}/lib')
but I need to be able to do something like:
bld.shlib(..., install_as='${PREFIX}/lib/xyz')
Also, bld.install_as() won't work, as it doesn't seem to accept a task as a target, and I can't figure out how to turn a task into a node representing the target, so the following doesn't work either:
tgt = bld.shlib(...)
bld.install_as('foo', tgt)
Alternatively, I need to be able to disable the "lib" prefix that is automatically added to library names, but only for this one library, not for all of them during the build, e.g. something like:
bld.shlib(..., libprefix='', install_path="${PREFIX}/lib/")
I know you can set shlib_PATTERN as well, but that seems to affect all libraries under the current environment. We have a pretty complicated build that uses a lot of different environments for building debug and release concurrently, so just cloning the current environment and changing the flag doesn't work either, because it clones the default environment, not the one the target will eventually be built under (we clone the targets for each environment at build time).
Any thoughts? Thanks!
You can do this:
hello_lib = bld.shlib(
    includes='/usr/include/python',
    source='a.cpp',
    target='hello',
    uselib='BOOST_PYTHON',
    vnum='0.0.1')
hello_lib.env.cxxshlib_PATTERN = '%s.so'
This code changes the naming pattern for only one task.
There are two keyword arguments you can use: "name" and "target". "target" is the name of the file created, while "name" is the name of the target when using the "--target" argument. Confusing, but here is an example:
bld(features=['cxx', 'cxxshlib'],
    source=src,
    includes=inc,
    target='OutputName',
    name='NameOfTarget',
    use=libs,
    install_path='${PREFIX}/lib/MyLibs'
)
waf configure build install --target=NameOfTarget --prefix=/home/Brian
This creates a shared library "libOutputName.so" and installs it to /home/Brian/lib/MyLibs.