Bazel cc_toolchain for a non-GNU TI DSP compiler

I'm porting a TI C6000 DSP build from Makefile to Bazel. This DSP runs on a very complex embedded system, and we already build the same code for multiple other processors using Bazel, but those other processors use GNU-flavored (e.g. gcc-based) compilers.
The cc_rules package seems to make assumptions about the flags, filename extensions, etc. I'm hoping I can avoid creating a completely custom toolchain, which would involve porting all the existing BUILD files to use different rules depending on the toolchain.
I can't seem to find any documentation on customizing those attributes, or others. The things that I already know I need to customize:
The flag that is used to specify the output file: -fr=filename and -fs=filename vs -o filename
Implicit dependency generation: someone told me that cc_rules generates .d files under the hood to detect missing dependencies. I'm not sure whether this is true, but if so, I need to change the flag and extension used
The extension of the object and library files: as noted above, we build the same files for multiple CPUs, and need to customize the extension of the output for the rules.
There are probably other requirements that I'm not aware of yet as well. It may very well be that I'm doing it all wrong and should take a different approach. We have been using Bazel since the early days (v0.12), and still may have holdovers from then.
We are currently on v1.1.0, which I ported us to from v0.12 six months ago. I'm surprised that the master branch is already on v3.???!!!
Any help is greatly appreciated. Please remind me if I've left out anything important.
EDIT: One thing to note is that the compiler appears to be based on Clang and LLVM. If there are examples of Clang/LLVM-based toolchains (I'm pretty sure there are), then I could get started there.
I know that the Emscripten example in the docs is technically an LLVM-based compiler, but that uses a script to do magic to the params, etc. I can do that if that's the right thing to do, but I want to make sure I'm headed down the right path.

This is not a complete answer to all of your questions, but it goes beyond what could be formatted and posted as a comment. To your most recent inquiry: this snippet would redefine the option used for the output file (from -o OUTPUT to -fr=OUTPUT):
compiler_output_flags_feature = feature(
    name = "compiler_output_flags",
    flag_sets = [
        flag_set(
            actions = [
                ACTION_NAMES.assemble,
                ACTION_NAMES.c_compile,
                ACTION_NAMES.cpp_compile,
                ACTION_NAMES.cpp_header_parsing,
                ACTION_NAMES.cpp_module_compile,
                ACTION_NAMES.cpp_module_codegen,
            ],
            flag_groups = [
                flag_group(
                    flags = ["-fr=%{output_file}"],
                ),
            ],
        ),
    ],
)
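The same mechanism covers your .d-file question: the C++ rules do request a dependency file for compile actions and expose its path to flag expansion through the dependency_file build variable, so you only need to swap the flag spelling. A minimal sketch; -ppd= is my assumption for the TI option, so substitute whatever your compiler manual specifies:
dependency_file_feature = feature(
    name = "dependency_file",
    enabled = True,
    flag_sets = [
        flag_set(
            actions = [
                ACTION_NAMES.assemble,
                ACTION_NAMES.preprocess_assemble,
                ACTION_NAMES.c_compile,
                ACTION_NAMES.cpp_compile,
                ACTION_NAMES.cpp_header_parsing,
            ],
            flag_groups = [
                flag_group(
                    # Only expands when the rules actually request a .d file.
                    expand_if_available = "dependency_file",
                    # Assumed TI spelling; replace with your compiler's
                    # dependency-generation option.
                    flags = ["-ppd=%{dependency_file}"],
                ),
            ],
        ),
    ],
)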
As for available and used actions, you can check this out. For features, as you've already discovered, disabling legacy features and seeing what you need is one option. There is also this list in the docs that you've stumbled upon. Beyond that (including which variables are available at which point), it's a bit of "use the source, Luke"; at least that's where I usually end up heading for details, for better or worse. For actions, a good starting point would be here.
But I also find checking out other pre-packaged toolchain configs (esp. MSVC for being... different) insightful.
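Regarding the object/library extensions from your third point: cc_common.create_cc_toolchain_config_info() also accepts artifact_name_patterns, which overrides the default file name per artifact category. A sketch, with .obj/.lib as stand-in extensions rather than anything from your build:
load(
    "@bazel_tools//tools/cpp:cc_toolchain_config_lib.bzl",
    "artifact_name_pattern",
)

ti_artifact_name_patterns = [
    # Objects come out as foo.obj instead of foo.o.
    artifact_name_pattern(
        category_name = "object_file",
        prefix = "",
        extension = ".obj",
    ),
    # Static libraries come out as libfoo.lib instead of libfoo.a.
    artifact_name_pattern(
        category_name = "static_library",
        prefix = "lib",
        extension = ".lib",
    ),
]
Pass this list as artifact_name_patterns = ti_artifact_name_patterns to cc_common.create_cc_toolchain_config_info().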

I think that adding your own custom rule that provides CcToolchainConfigInfo would solve the problem you are having.
load("@bazel_tools//tools/build_defs/cc:action_names.bzl", "ACTION_NAMES")
load(
    "@bazel_tools//tools/cpp:cc_toolchain_config_lib.bzl",
    "feature",
    "flag_group",
    "flag_set",
    "tool_path",
)

def _impl(ctx):
    tool_paths = [
        tool_path(name = "gcc", path = "/<abs>/<path>/clang"),
        tool_path(name = "ld", path = "/<abs>/<path>/ld"),
        tool_path(name = "ar", path = "/<abs>/<path>/ar"),
        tool_path(name = "cpp", path = "/bin/false"),
        tool_path(name = "gcov", path = "/bin/false"),
        tool_path(name = "nm", path = "/bin/false"),
        tool_path(name = "objdump", path = "/bin/false"),
        tool_path(name = "strip", path = "/bin/false"),
    ]
    toolchain_compiler_flags = feature(
        name = "compiler_flags",
        enabled = True,
        flag_sets = [
            flag_set(
                actions = [
                    ACTION_NAMES.assemble,
                    ACTION_NAMES.preprocess_assemble,
                    ACTION_NAMES.linkstamp_compile,
                    ACTION_NAMES.c_compile,
                    ACTION_NAMES.cpp_compile,
                    ACTION_NAMES.cpp_header_parsing,
                    ACTION_NAMES.cpp_module_compile,
                    ACTION_NAMES.cpp_module_codegen,
                    ACTION_NAMES.lto_backend,
                    ACTION_NAMES.clif_match,
                ],
                flag_groups = [
                    flag_group(flags = ["<compiler-flags>"]),
                ],
            ),
        ],
    )
    toolchain_linker_flags = feature(
        name = "linker_flags",
        enabled = True,
        flag_sets = [
            flag_set(
                # Linker flags belong on the link actions, not on
                # linkstamp_compile (which is a compile action).
                actions = [
                    ACTION_NAMES.cpp_link_executable,
                    ACTION_NAMES.cpp_link_dynamic_library,
                    ACTION_NAMES.cpp_link_nodeps_dynamic_library,
                ],
                flag_groups = [
                    flag_group(flags = ["<linker-flags>"]),
                ],
            ),
        ],
    )
    return cc_common.create_cc_toolchain_config_info(
        ctx = ctx,
        toolchain_identifier = ctx.attr.toolchain_identifier,
        host_system_name = ctx.attr.host_system_name,
        target_system_name = "<your-system-name>",
        target_cpu = "<your-cpu>",
        target_libc = "<your-libc>",
        compiler = "<your-compiler>",
        abi_version = "<your-abi>",
        abi_libc_version = "<your-version>",
        tool_paths = tool_paths,
        features = [
            toolchain_compiler_flags,
            toolchain_linker_flags,
            <more-custom-features>,
        ],
    )
cc_arm_none_eabi_config = rule(
    implementation = _impl,
    attrs = {
        "toolchain_identifier": attr.string(default = ""),
        "host_system_name": attr.string(default = ""),
    },
    provides = [CcToolchainConfigInfo],
)
I have posted an example about using GCC embedded toolchains with Bazel on GitHub that you could use as a template. The example works with the arm-none-eabi-gcc compiler, but in principle, it would work just as well with clang.
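For context, here is roughly how such a config rule gets wired into a BUILD file. The target names, the ti-c6000 identifier, and the empty filegroups are illustrative only; a real toolchain would list the actual compiler files:
load(":toolchain_config.bzl", "cc_arm_none_eabi_config")  # hypothetical .bzl location

cc_arm_none_eabi_config(
    name = "dsp_toolchain_config",
    toolchain_identifier = "ti-c6000",
)

filegroup(name = "empty")

cc_toolchain(
    name = "dsp_cc_toolchain",
    all_files = ":empty",
    compiler_files = ":empty",
    dwp_files = ":empty",
    linker_files = ":empty",
    objcopy_files = ":empty",
    strip_files = ":empty",
    toolchain_config = ":dsp_toolchain_config",
)

toolchain(
    name = "dsp_toolchain",
    target_compatible_with = ["<your-platform-constraints>"],
    toolchain = ":dsp_cc_toolchain",
    toolchain_type = "@bazel_tools//tools/cpp:toolchain_type",
)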

Related

Understanding the GN build system in Fuchsia OS, what is `build_api_module`?

GN stands for Generate Ninja. It generates Ninja files, which build things. The main file is BUILD.gn at the root of the Fuchsia source tree.
It contains a lot of build_api_module calls:
build_api_module("images") {
testonly = true
data_keys = [ "images" ]
deps = [
# XXX(46415): as the build is specialized by board (bootfs_only)
# for bringup, it is not possible for this to be complete. As this
# is used in the formation of the build API with infrastructure,
# and infrastructure assumes that the board configuration modulates
# the definition of `zircon-a` between bringup/non-bringup, we can
# not in fact have a complete description. See the associated
# conditional at this group also.
"build/images",
# This has the images referred to by $qemu_kernel_label entries.
"//build/zircon/zbi_tests",
]
}
However, it's unclear to me what this does exactly. Looking at its definition in build/config/build_api_module.gn, for example:
template("build_api_module") {
if (current_toolchain == default_toolchain) {
generated_file(target_name) {
outputs = [ "$root_build_dir/$target_name.json" ]
forward_variables_from(invoker,
[
"contents",
"data_keys",
"deps",
"metadata",
"testonly",
"visibility",
"walk_keys",
"rebase",
])
output_conversion = "json"
metadata = {
build_api_modules = [ target_name ]
if (defined(invoker.metadata)) {
forward_variables_from(invoker.metadata, "*", [ "build_api_modules" ])
}
}
}
} else {
not_needed([ "target_name" ])
not_needed(invoker, "*")
}
}
It looks like it simply generates a file.
Can someone explain to me how build_api_module("images") ends up building all the zircon kernel images?
The build_api_module() targets generate JSON files that describe something about the current build system configuration. These files are typically consumed by other tools (in some cases, dependencies of other build rules) that need to know about the current build.
One example is the tests target which generates the tests.json file. This file is used by fx test to determine which tests are available and match the test name you provide to the component URL to invoke.
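Purely to illustrate the "consumed by other tools" point (the real schema of these files is richer and not reproduced here), a consumer boils down to deserializing JSON out of the build directory; this is a hypothetical Python sketch, not actual Fuchsia tooling:
import json
import os

def load_build_api_module(root_build_dir, module_name):
    # build_api_module("tests") writes $root_build_dir/tests.json,
    # so a consumer just reads that file back.
    path = os.path.join(root_build_dir, module_name + ".json")
    with open(path) as f:
        return json.load(f)

# Hypothetical usage: a test runner matching a name fragment against
# whatever entries the module contains.
tests = load_build_api_module("out/default", "tests")
matches = [entry for entry in tests if "my_test" in str(entry)]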
Can someone explain to me how build_api_module("images") ends up building all the zircon kernel images?
It doesn't. These targets are descriptive of the current build configuration, they are not prescriptive of what artifacts the build generates. In this specific case, the images.json file is typically used by tools like FEMU and ffx to determine what system images to use on a target device.

How do I import my main crate into my test files? Rust doc example doesn't work

I'm setting up unit tests for my rust project and using this guide. The documentation says to do something like this, where "adder" is the project name (if I am not mistaken).
tests/integration_test.rs
use adder;
mod common;

#[test]
fn it_adds_two() {
    common::setup();
    assert_eq!(4, adder::add_two(2));
}
I've done something similar. My folder structure is tests/users.rs where tests/ is right next to src/ as in the example. Here is what my test file actually looks like:
tests/users.rs
use test_project;

#[test]
pub fn create_test() {
    // do things with the modules from main
}
But I'm getting this error:
no external crate 'test_project'
As far as I can tell I'm following the documentation to the letter. Can someone point out what I could be missing here?
Here is my folder structure also (screenshot omitted; tests/ sits next to src/ as described above).
I have no problems running a dummy test without the imports (cargo test), so cargo is able to find the tests/ folder without any issues.
Here is my Cargo.toml
[package]
name = "test_project"
version = "0.1.0"
authors = ["mcrandall <randall123@protonmail.com>"]
edition = "2018"

[lib]
name = "errormsg"
path = "errormsg/src/lib.rs"

[dependencies]
diesel = { version = "1.4.5", features = ["sqlite"] }
dotenv = "0.15.0"
download_rs = "0.2.0"
futures = "0.3.12"
futures-util = "0.3.12"
oauth2 = { version = "3.0"}
reqwest = { version = "0.11", features = ["json", "stream", "blocking"] }
serde = { version= "1.0.123", features = ["derive"] }
serde_derive = "1.0.123"
serde_json = "1.0.61"
simple-server = "0.4.0"
tokio = { version = "1", features = ["full"] }
url = "2.2.0"
uuid = { version = "0.8.2", features = ["v4"] }
Make sure that in your Cargo.toml you have name = "test_project".
Also, you can only import it if it is a library (see the library documentation).
Looking at your Cargo.toml, the [lib] section tells cargo that this package exports one lib called errormsg, contained in errormsg/src/lib.rs. So test_project will not be available to you, because only one lib is allowed per package (why?).
There are two solutions to your problem.
You can either make errormsg a module, which you can then import in tests/users.rs, for example as test_project::errormsg.
Or you can create a separate package and then import it in the Cargo.toml file:
[dependencies]
errormsg = { version = "0.1", path = "./../errormsg" }
Another way is to use workspaces to group packages, but I'm not really familiar with them.

How to change the experiment file path generated when running Ray's run_experiments()?

I'm using the following spec in my code to generate experiments:
experiment_spec = {
    "test_experiment": {
        "run": "PPO",
        "env": "MultiTradingEnv-v1",
        "stop": {
            "timesteps_total": 1e6
        },
        "checkpoint_freq": 100,
        "checkpoint_at_end": True,
        "local_dir": '~/Documents/experiment/',
        "config": {
            "lr_schedule": grid_search(LEARNING_RATE_SCHEDULE),
            "num_workers": 3,
            'observation_filter': 'MeanStdFilter',
            'vf_share_layers': True,
            "env_config": {},
        }
    }
}

ray.init()
run_experiments(experiments=experiment_spec)
Note that I use grid_search to try various learning rates. The problem is that "lr_schedule" is defined as:
LEARNING_RATE_SCHEDULE = [
    [
        [0, 7e-5],  # [timestep, lr]
        [1e6, 7e-6],
    ],
    [
        [0, 6e-5],
        [1e6, 6e-6],
    ],
]
So when the experiment checkpoint is generated, its path contains a lot of [ characters, making the path unreadable to the interpreter. Like this:
~/Documents/experiment/PPO_MultiTradingEnv-v1_0_lr_schedule=[[0, 7e-05], [3500000.0, 7e-06]]_2019-08-14_20-10-100qrtxrjm/checkpoint_40
The logical solution is to rename it manually, but I discovered that its name is referenced in other files like experiment_state.json, so the best solution is to set a custom experiment path and name.
I didn't find anything about this in the documentation.
This is my project if it helps
Can someone help?
Thanks in advance
You can set custom trial names - https://ray.readthedocs.io/en/latest/tune-usage.html#custom-trial-names. Let me know if that works for you.
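For reference, a minimal sketch of that approach against the Tune API of the era the linked docs describe; trial_str_creator and the trimmed-down config here are illustrative, and newer Tune versions accept the callable directly without tune.function:
import ray
from ray import tune

def trial_str_creator(trial):
    # Short, filesystem-friendly directory name instead of the
    # flattened lr_schedule config.
    return "{}_{}".format(trial.trainable_name, trial.trial_id)

ray.init()
tune.run(
    "PPO",
    name="test_experiment",
    stop={"timesteps_total": 1e6},
    checkpoint_freq=100,
    checkpoint_at_end=True,
    local_dir="~/Documents/experiment/",
    config={"env": "MultiTradingEnv-v1"},  # plus the lr_schedule grid_search, etc.
    trial_name_creator=tune.function(trial_str_creator),
)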

Suppress warnings from CPD for C/C++ code

We are using PMD Copy Paste Detector (CPD) to analyze our C and C++ code.
However, there are a few parts of the code that are very similar, with good reason, and we would like to suppress the warnings for these parts.
The documentation of PMD CPD only mentions something about annotations, but this will not work for these languages.
How can I still ignore warnings for specific parts?
Is there a comment to do so perhaps?
[UPDATE] I'm using the following Groovy script to run CPD:
@GrabResolver(name = 'jcenter', root = 'https://jcenter.bintray.com/')
@Grab('net.sourceforge.pmd:pmd-core:5.4.+')
@Grab('net.sourceforge.pmd:pmd-cpp:5.4.+')
import net.sourceforge.pmd.cpd.CPD
import net.sourceforge.pmd.cpd.CPDConfiguration
import java.util.regex.Pattern
def tokens = 60
def scanDirs = ['./path/to/scan', './scan/this/too']
def ignores = [
'./ignore/this/path',
'./this/must/be/ignored/too'
].collect({ it.replace('/', File.separator) })
def rootDir = new File('.')
def outputDir = new File('./reports/analysis/')
def filename_date_format = 'yyyyMMdd'
def encoding = System.getProperty('file.encoding')
def language_converter = new CPDConfiguration.LanguageConverter()
def config = new CPDConfiguration()
config.language = new CPDConfiguration.LanguageConverter().convert('c')
config.minimumTileSize = tokens
config.renderer = config.getRendererFromString 'xml', 'UTF-8'
config.skipBlocksPattern = '//DUPSTOP|//DUPSTART'
config.skipLexicalErrors = true
def cpd = new CPD(config)
scanDirs.each { path ->
    def dir = new File(path)
    dir.eachFileRecurse(groovy.io.FileType.FILES) {
        // Ignore file?
        def doIgnore = false
        ignores.each { ignore ->
            if (it.path.startsWith(ignore)) {
                doIgnore = true
            }
        }
        if (doIgnore) {
            return
        }
        // Other checks
        def lowerCaseName = it.name.toLowerCase()
        if (lowerCaseName.endsWith('.c') || lowerCaseName.endsWith('.cpp') || lowerCaseName.endsWith('.h')) {
            cpd.add it
        }
    }
}
cpd.go()

def duplicationFound = cpd.matches.hasNext()

def now = new Date().format(filename_date_format)
def outputFile = new File(outputDir.canonicalFile, "cpd_report_${now}.xml")
println "Saving report to ${outputFile.absolutePath}"

def absoluteRootDir = rootDir.canonicalPath
if (absoluteRootDir[-1] != File.separator) {
    absoluteRootDir += File.separator
}

outputFile.parentFile.mkdirs()
def xmlOutput = config.renderer.render(cpd.matches)
if (duplicationFound) {
    def filePattern = "(<file\\s+line=\"\\d+\"\\s+path=\")${Pattern.quote(absoluteRootDir)}([^\"]+\"\\s*/>)"
    xmlOutput = xmlOutput.replaceAll(filePattern, '$1$2')
} else {
    println 'No duplication found.'
}
outputFile.write xmlOutput
You can define your custom markers for excluding certain blocks from analysis through the --skip-blocks-pattern option.
--skip-blocks-pattern Pattern to find the blocks to skip. Start and End pattern separated by |. Default is #if 0|#endif.
For example, the following will ignore blocks between /* SUPPRESS CPD START */ and /* SUPPRESS CPD END */ comments (each comment must occupy a separate line):
$ ./run.sh cpd --minimum-tokens 100 --files /path/to/c/source --language cpp --skip-blocks-pattern '/* SUPPRESS CPD START */|/* SUPPRESS CPD END */'
Note, however, that because this replaces the default pattern, the tool will now perform copy-paste detection inside code delimited by #if 0/#endif.
After searching through the code of PMD on GitHub, I think I can safely say that this is NOT supported at this point in time (current version being PMD 5.5.0).
A search for CPD-START in their repository does not show any results within the pmd-cpp directory (see the search results on GitHub).
I know this is a ~3-year-old question, but for completeness, CPD started supporting this in PMD 5.6.0 (April 2017) for Java, and since 6.3.0 (April 2018) it has been extended to many other languages, such as C/C++. Nowadays, almost all CPD-supported languages allow comment-based suppression.
The complete (current) docs for comment-based suppression are available at https://pmd.github.io/pmd-6.13.0/pmd_userdocs_cpd.html#suppression
It's worth noting that if a file has a // CPD-OFF comment but no matching // CPD-ON, everything is ignored until the end of the file.
I don't have any help for CPD; I know about such tools in general, but I don't understand the bit about "warnings".
Our CloneDR tool finds exact and near-miss duplicate code. IMHO, it finds better clones than CPD, because it uses the language syntax/structure as a guide. (This is backed up by a third-party research report that you can find at the site.) And it does not issue "warnings".
If there is code that it thinks is involved in a clone, the tool will generate an output report page for the clones involved. But that isn't a warning. There is no way to suppress the reporting behavior. Obviously, if you have seen such a clone and decided it is not interesting, you can mark one of the clone entries with a comment stating that it is uninteresting; that comment will show up in the clone report. Such comments have no impact whatsoever on which clones CloneDR detects, so adding them does not change the computed answer.

scons - How to add search directories to an existing scanner

My main goal is to add support for -isystem include paths in SCons, like what is proposed here: https://stackoverflow.com/a/2547261/4042960
The solution of creating new variables works fine; I do this:
#### Add support for system headers
env['SYSTEMINCPREFIX'] = '-isystem '
env['SYSTEMINCSUFFIX'] = ''
env['_CPPSYSTEMINCFLAGS'] = '$( ${_concat(SYSTEMINCPREFIX, CPPSYSTEMPATH, SYSTEMINCSUFFIX, __env__, RDirs, TARGET, SOURCE)} $)'
env['_CCCOMCOM'] += ' $_CPPSYSTEMINCFLAGS'
I use it by adding for instance:
env.Append(CPPSYSTEMPATH = ['/my/include/path'])
My problem is that now the path /my/include/path is not scanned by the C (or C++) dependency scanner. After much searching, I failed to find out how to get my variable "CPPSYSTEMPATH" treated like "CPPPATH" by the dependency scanner.
Does anyone know how I could add the search path contained in "CPPSYSTEMPATH" to the existing C scanner ?
I hope that my problem is clear enough; if not, do not hesitate to tell me.
Here's a basic recipe for replacing the FindPath method of the default C scanner, but be warned it's an ugly hack:
# Create environment
env = Environment()

# Define your new env variable as a combination of both paths
env['MYCPPPATHS'] = ['$CPPPATH', '$CPPSYSTEMPATH']

# Replace the path_function of the standard C scanner by:
import SCons.Tool
import SCons.Scanner
setattr(SCons.Tool.CScanner, 'path_function', SCons.Scanner.FindPathDirs('MYCPPPATHS'))

# Do your build stuff...
env['CPPSYSTEMPATH'] = 'myinclude'
env.Program('main', 'main.cpp')
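To sanity-check that the scanner now follows CPPSYSTEMPATH, here is one quick experiment combining the question's -isystem setup with the patch above; the layout and file names are made up (it assumes main.cpp does #include "dep.h" and dep.h lives only in myinclude/):
# SConstruct - hypothetical sanity check
import SCons.Tool
import SCons.Scanner

env = Environment()

# -isystem support from the question
env['SYSTEMINCPREFIX'] = '-isystem '
env['SYSTEMINCSUFFIX'] = ''
env['_CPPSYSTEMINCFLAGS'] = '$( ${_concat(SYSTEMINCPREFIX, CPPSYSTEMPATH, SYSTEMINCSUFFIX, __env__, RDirs, TARGET, SOURCE)} $)'
env['_CCCOMCOM'] += ' $_CPPSYSTEMINCFLAGS'

# Scanner patch from the answer above
env['MYCPPPATHS'] = ['$CPPPATH', '$CPPSYSTEMPATH']
setattr(SCons.Tool.CScanner, 'path_function', SCons.Scanner.FindPathDirs('MYCPPPATHS'))

env.Append(CPPSYSTEMPATH = ['myinclude'])
env.Program('main', 'main.cpp')

# Build once, then `touch myinclude/dep.h` and build again: main.cpp
# should recompile, showing the scanner tracked the header.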
By the way, why not ask these kinds of questions on our user mailing list, scons-users@scons.org? ;)