Is it possible, with ocp-build, to do the following actions:
Compile a generator.
Call the generator to generate a file.
Compile the project with the generated file.
So far, I tried this:
(generator.ocp)
begin library "error_gen"
sort = true
files = [ "error_gen.ml" ]
requires = [str]
end
(generated.ocp)
begin library "error_code"
sort = true
files = [
"error_code.ml" (
pp = [ "./_obuild/error_gen/error_gen.byte" ]
pp_requires = [ "error_gen:byte" ]
)
]
requires = ["error_gen"]
end
(and the main.ocp)
begin program "main"
sort = true
files = []
requires = ["error_code" "parser"]
end
It complains with this message:
Error: in project "error_code", the source filename
"src/generated/error_code.ml" does not exist
I saw that some support exists for version-file generation, for example in the ocp-indent project
at line 46:
"indentVersion.ml" (ocp2ml) (* auto-generated by ocp-build *)
Any help is greatly appreciated, thanks.
In branch "next" of github.com/OCamlPro/ocp-build, you will find a version of ocp-build that might solve your issue:
begin library "error_code"
sort = true
files = [ "error_code.ml" ]
build_rules = [
"error_code.ml" (
(* the files that are needed to call the command *)
sources = [ "%{error_gen_FULL_DST_DIR}%/error_gen.byte" ]
(* the commands to be executed, in the sub-directory of the library
each command has the syntax: { "COMMAND" "ARG1" "ARG2" ... }
*)
commands = [
{ "%{error_gen_FULL_DST_DIR}%/error_gen.byte" }
]
)
]
requires = ["error_gen"]
end
This is, for example, used in wxOCaml:
https://github.com/OCamlPro/ocplib-wxOCaml/blob/next-ocpbuild/configure.ocp
Commands can be post-fixed with options:
{ "configure" } (chdir = "subdirectory") (* to execute in a sub-directory *)
{ "cat" "toto" } (stdout = "new_toto") (* to copy the stdout in "new_toto" *)
Related
I have this list of maps:
[
[a:1998-01-14, b:2028-11-05, c:OQSeMPIcHNP, d:ASD, e:DEF, f:UuzSJiVvxjhJipzxUPsSsbmaeMvOT, g:mef, e:P00003036, h:P],
[a:1998-08-22, b:2028-11-05, c:fDScShsqpKqreHPqpANUrnZklN, d:LAS, e:FGH, f:lKuMgRxxrVuwCEXMNiIERHOcUCNmbG, g:ela, e:P00006583, h:P],
[a:1992-02-29, b:2031-01-01, c:SOThmQNAbKexnvDaxOi, d:MAR, e:ZAD, f:tkYiSUxSoTZrceRoIOYYsZztvvnzkno, g:ela, e:P00002839, h:P]
]
I need to create a new list of maps (or update the existing one) by uppercasing the values of all the c and f keys.
Expected outcome:
[
[a:1998-01-14, b:2028-11-05, c:OQSEMPICHNP, d:ASD, e:DEF, f:UUZSJIVVXJHJIPZXUPSSSBMAEMVOT, g:mef, e:P00003036, h:P],
[a:1998-08-22, b:2028-11-05, c:FDSCSHSQPKQREHPQPANURNZKLN, d:LAS, e:FGH, f:LKUMGRXXRVUWCEXMNIIERHOCUCNMBG, g:ela, e:P00006583, h:P],
[a:1992-02-29, b:2031-01-01, c:SOTHMQNABKEXNVDAXOI, d:MAR, e:ZAD, f:TKYISUXSOTZRCEROIOYYSZZTVVNZKNO, g:ela, e:P00002839, h:P]
]
How can I achieve this with groovy?
Please try this:
def source = [
[a:'1998-01-14', b:'2028-11-05', c:'OQSeMPIcHNP', d:'ASD', e:'DEF', f:'UuzSJiVvxjhJipzxUPsSsbmaeMvOT', g:'mef', e1:'P00003036', h:'P'],
[a:'1998-08-22', b:'2028-11-05', c:'fDScShsqpKqreHPqpANUrnZklN', d:'LAS', e:'FGH', f:'lKuMgRxxrVuwCEXMNiIERHOcUCNmbG', g:'ela', e1:'P00006583', h:'P'],
[a:'1992-02-29', b:'2031-01-01', c:'SOThmQNAbKexnvDaxOi', d:'MAR', e:'ZAD', f:'tkYiSUxSoTZrceRoIOYYsZztvvnzkno', g:'ela', e1:'P00002839', h:'P']
]
def target = source.collect {
// note: this mutates each original map in place;
// clone the map first if `source` must remain unchanged
it.c = it.c?.toUpperCase()
it.f = it.f?.toUpperCase()
it
}
println "target = $target"
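For comparison, the same transformation in Python (a sketch with abbreviated sample data; unlike the Groovy version above, this builds new dicts instead of mutating the originals):

```python
# Uppercase the values of keys "c" and "f" without mutating the input.
source = [
    {"a": "1998-01-14", "c": "OQSeMPIcHNP", "f": "UuzSJiVvxjhJipzxUPsSsbmaeMvOT"},
    {"a": "1998-08-22", "c": "fDScShsqpKqreHPqpANUrnZklN", "f": "lKuMgRxxrVuwCEXMNiIERHOcUCNmbG"},
]

target = [
    {k: (v.upper() if k in ("c", "f") else v) for k, v in m.items()}
    for m in source
]

print(target[0]["c"])  # OQSEMPICHNP
```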
I have written an MSVC Precompiled Header Files (PCH) implementation for Bazel (2.0) and would like to get some feedback on it as I'm not happy with it.
To quickly recap what needs to be done to get PCH working in MSVC:
Compile the PCH with /Yc and /Fp to obtain the (1) .pch file and the (2) .obj file.
Compile the binary using the /Yu on (1) and again the same /Fp option.
Link the binary using the .obj file (2).
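The three steps above correspond roughly to the following MSVC invocations (a sketch; the file names pch.h, pch.cpp, and main.cpp are illustrative):

```shell
# 1. Compile the PCH source: /Yc creates the .pch, /Fp names it;
#    this also produces pch.obj
cl /c /Ycpch.h /Fppch.pch pch.cpp

# 2. Compile other sources against the PCH with /Yu and the same /Fp
cl /c /Yupch.h /Fppch.pch main.cpp

# 3. Link, including the PCH's object file
link /OUT:main.exe main.obj pch.obj
```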
Implementation
We define a rule which takes the pchsrc (for /Yc) and pchhdr (for /Fp) as arguments, as well as some of the cc_* rule arguments (to get the defines and includes). We then invoke the compiler to obtain the PCH (mainly following the approach demonstrated here). Once we have the PCH, we propagate the location and linker inputs via CcInfo, and the user needs to call cc_pch_copts to get the /Yu and /Fp options.
pch.bzl
load("@rules_cc//cc:action_names.bzl", "ACTION_NAMES")
load("@rules_cc//cc:find_cc_toolchain.bzl", "find_cc_toolchain")
def cc_pch_copts(pchheader, pchtarget):
return [
"/Yu\"" + pchheader + "\"",
"/Fp\"$(location :" + pchtarget + ")\""
]
def _cc_pch(ctx):
""" Create a precompiled header """
cc_toolchain = find_cc_toolchain(ctx)
source_file = ctx.file.pchsrc
pch_file = ctx.outputs.pch
pch_obj_file = ctx.outputs.obj
# Obtain the includes of the dependencies
cc_infos = []
for dep in ctx.attr.deps:
if CcInfo in dep:
cc_infos.append(dep[CcInfo])
deps_cc_info = cc_common.merge_cc_infos(cc_infos=cc_infos)
# Flags to create the pch
pch_flags = [
"/Fp" + pch_file.path,
"/Yc" + ctx.attr.pchhdr,
]
# Prepare the compiler
feature_configuration = cc_common.configure_features(
ctx = ctx,
cc_toolchain = cc_toolchain,
requested_features = ctx.features,
unsupported_features = ctx.disabled_features,
)
cc_compiler_path = cc_common.get_tool_for_action(
feature_configuration = feature_configuration,
action_name = ACTION_NAMES.cpp_compile,
)
deps_ctx = deps_cc_info.compilation_context
cc_compile_variables = cc_common.create_compile_variables(
feature_configuration = feature_configuration,
cc_toolchain = cc_toolchain,
user_compile_flags = ctx.fragments.cpp.copts + ctx.fragments.cpp.cxxopts + pch_flags + ctx.attr.copts,
source_file = source_file.path,
output_file = pch_obj_file.path,
preprocessor_defines = depset(deps_ctx.defines.to_list() + deps_ctx.local_defines.to_list() + ctx.attr.defines + ctx.attr.local_defines),
include_directories = deps_ctx.includes,
quote_include_directories = deps_ctx.quote_includes,
system_include_directories = depset(["."] + deps_ctx.system_includes.to_list()),
framework_include_directories = deps_ctx.framework_includes,
)
env = cc_common.get_environment_variables(
feature_configuration = feature_configuration,
action_name = ACTION_NAMES.cpp_compile,
variables = cc_compile_variables,
)
command_line = cc_common.get_memory_inefficient_command_line(
feature_configuration = feature_configuration,
action_name = ACTION_NAMES.cpp_compile,
variables = cc_compile_variables,
)
args = ctx.actions.args()
for cmd in command_line:
if cmd == "/showIncludes":
continue
args.add(cmd)
# Invoke the compiler
ctx.actions.run(
executable = cc_compiler_path,
arguments = [args],
env = env,
inputs = depset(
items = [source_file],
transitive = [cc_toolchain.all_files],
),
outputs = [pch_file, pch_obj_file],
progress_message = "Generating precompiled header {}".format(ctx.attr.pchhdr),
)
return [
DefaultInfo(files = depset(items = [pch_file])),
CcInfo(
compilation_context=cc_common.create_compilation_context(
includes=depset([pch_file.dirname]),
headers=depset([pch_file]),
),
linking_context=cc_common.create_linking_context(
user_link_flags = [pch_obj_file.path]
)
)
]
cc_pch = rule(
implementation = _cc_pch,
attrs = {
"pchsrc": attr.label(allow_single_file=True, mandatory=True),
"pchhdr": attr.string(mandatory=True),
"copts": attr.string_list(),
"local_defines": attr.string_list(),
"defines": attr.string_list(),
"deps": attr.label_list(allow_files = True),
"_cc_toolchain": attr.label(default = Label("@bazel_tools//tools/cpp:current_cc_toolchain")),
},
toolchains = ["@bazel_tools//tools/cpp:toolchain_type"],
fragments = ["cpp"],
outputs = {
"pch": "%{pchsrc}.pch",
"obj": "%{pchsrc}.pch.obj"
},
provides = [CcInfo],
)
We would use it:
BUILD.bzl
load(":pch.bzl", "cc_pch", "cc_pch_copts")
load("@rules_cc//cc:defs.bzl", "cc_binary")
def my_cc_binary(name, pchhdr, pchsrc, **kwargs):
pchtarget = name + "_pch"
cc_pch(
name = pchtarget,
pchsrc = pchsrc,
pchhdr = pchhdr,
defines = kwargs.get("defines", []),
deps = kwargs.get("deps", []),
local_defines = kwargs.get("local_defines", []),
copts = kwargs.get("copts", []),
)
kwargs["deps"] = kwargs.get("deps", []) + [":" + pchtarget]
kwargs["copts"] = kwargs.get("copts", []) + cc_pch_copts(pchhdr, pchtarget)
native.cc_binary(name=name, **kwargs)
my_cc_binary(
name = "main",
srcs = ["main.cpp", "common.h", "common.cpp"],
pchsrc = "common.cpp",
pchhdr = "common.h",
)
with the project consisting of:
main.cpp
#include "common.h"
int main() { std::cout << "Hello world!" << std::endl; return 0; }
common.h
#include <iostream>
common.cpp
#include "common.h"
Questions
The implementation works. However, my discussion points are:
What is the best way to propagate the additional compile flags to dependent targets? The way I solved it via cc_pch_copts seems rather hacky. I assume it involves defining a provider, but I couldn't find one that allows me to forward flags (CcToolChainConfigInfo has something in this direction, but it seems like overkill).
Is there another way to get all the compile flags (defines, includes, etc.) than what I implemented above? It's really verbose and most likely doesn't cover a lot of corner cases. Would it be possible to do something like compiling an empty.cpp file in the cc_pch rule to obtain a provider that gives direct access to all the flags?
Note: I'm aware of the downsides of precompiled headers but this is a large codebase and not using it is unfortunately not an option.
Maybe it can be simplified by generating a dummy cpp just to trigger the generation of the pch file; there is no need to link the resulting obj. This is what qmake does: you just define the name of the precompiled header, and it generates a dummy precomp.h.cpp and uses it to trigger generation of the pch file.
In VS/msbuild it is also possible to generate the pch directly from the precomp.h file (but this requires a change to the source):
- change the item type of the header to "C/C++ compile"
- set the /Yc option on this
- add a hdrstop directive at the end of precomp.h like
#pragma once
#include <windows.h>
#pragma hdrstop("precomp.h")
Thanks for sharing your bzl files, I'm also looking into this (large code base with precomp headers).
From what I know, precompiled headers are especially useful for framework developers who do a lot of template metaprogramming and have a sizable code base. They are not intended to speed up compilation while the framework itself is still under development, and they don't speed up compile times if the code is poorly designed and every dependency is pulled in sequentially.
The files here are only the VC++ configuration; the actual work hasn't even started at that point, since precompiled headers are compiled state. Use parallel builds whenever possible.
Also, the resulting headers are HUGE!
I am trying to make the following setup run with Bazel. When calling "bazel build", a Python script should generate an unknown number of *.cc files with random names and then compile them into a single static library (.a file), all within one Bazel invocation. I have tried the following: one generated file has a fixed name, and that one is referenced in the outs of genrule() and the srcs of the cc_library rule. The problem is that I need all generated files to be built into the library, not only the file with the fixed name. Any ideas how to do this?
My BUILD file:
py_binary(
name = "sample_script",
srcs = ["sample_script.py"],
)
genrule(
name = "sample_genrule",
tools = [":sample_script"],
cmd = "$(location :sample_script)",
outs = ["cpp_output_fixed.cc"], # or should the files with random names also be listed here?
)
cc_library(
name = "autolib",
srcs = ["cpp_output_fixed.cc"],
#srcs = glob([ #here should all generated .cc files be named
# "./*.cc",
# "./**/*.cc",
# ])+["cpp_output_fixed.cc"],
)
Python file sample_script.py:
#!/usr/bin/env python
import hashlib
import time

time_stamp = time.time()
time_1 = str(time_stamp)
time_2 = str(time_stamp + 1)
random_part_1 = hashlib.sha1(time_1.encode()).hexdigest()[-4:]
random_part_2 = hashlib.sha1(time_2.encode()).hexdigest()[-4:]

fixed_file = "cpp_output_fixed.cc"
file_1 = "cpp_output_" + random_part_1 + ".cc"
file_2 = "cpp_output3_" + random_part_2 + ".cc"

content = (
    "#include <iostream>\n"
    "int main() {\n"
    "  std::cout << \"Hello_world\" << std::endl;\n"
    "  return 0;\n"
    "}\n"
)

for path in (fixed_file, file_1, file_2):
    with open(path, "w") as outfile:
        outfile.write(content)

print(".cc generation DONE")
[big edit, since I found a way to make it work :)]
If you really need to emit files that are unknown at the analysis phase, your only way is what we internally call tree artifacts. You can think of it as a directory that contains files that will only be inspected at the execution phase. You can declare a tree artifact from Skylark using ctx.actions.declare_directory.
Here is a working example. Note 3 things:
we need to add ".cc" to the directory name to fool C++ rules that this is valid input
the generator needs to create the directory that bazel tells it to
you need to use bazel@HEAD (or bazel 0.11.0 and later)
genccs.bzl:
def _impl(ctx):
tree = ctx.actions.declare_directory(ctx.attr.name + ".cc")
ctx.actions.run(
inputs = [],
outputs = [ tree ],
arguments = [ tree.path ],
progress_message = "Generating cc files into '%s'" % tree.path,
executable = ctx.executable._tool,
)
return [ DefaultInfo(files = depset([ tree ])) ]
genccs = rule(
implementation = _impl,
attrs = {
"_tool": attr.label(
executable = True,
cfg = "host",
allow_files = True,
default = Label("//:genccs"),
)
}
)
BUILD:
load(":genccs.bzl", "genccs")
genccs(
name = "gen_tree",
)
cc_library(
name = "main",
srcs = [ "gen_tree" ]
)
cc_binary(
name = "genccs",
srcs = [ "genccs.cpp" ],
)
genccs.cpp
#include <fstream>
#include <string>
#include <sys/stat.h>
using namespace std;
int main (int argc, char *argv[]) {
  // Bazel passes the tree-artifact directory path; create it, then write into it
  mkdir(argv[1], S_IRWXU);
  ofstream myfile;
  myfile.open(string(argv[1]) + string("/foo.cpp"));
  myfile << "int main() { return 42; }";
  return 0;
}
1) List all output files.
2) Use the genrule as a dependency of the library.
genrule(
name = "sample_genrule",
tools = [":sample_script"],
cmd = "$(location :sample_script)",
outs = ["cpp_output_fixed.cc", "cpp_output_0.cc", ...]
)
cc_library(
name = "autolib",
srcs = [":sample_genrule"],
)
How can I make Sublime Text 3's SublimeOnSaveBuild package skip compiling files whose names begin with an underscore (_) prefix?
For example, I want a.scss or a_b.scss to be compiled when I save them, but not files named like _a.scss.
The guide on GitHub says to configure the filename_filter setting.
So I created a SublimeOnSaveBuild.sublime-settings file with these contents:
{
"filename_filter": "/^([^_])\\w*.(sass|less|scss)$/"
}
I used two backslashes because the setting is saved in a .sublime-settings file, whose format is like JSON.
But it doesn't work. When I test the regex with JavaScript, it works well:
let reg = /^[^\_]\w*.(sass|less|scss)$/,
arr = [
'a.scss',
'_a.scss',
'a_b.scss'
];
arr.forEach(function( filename ) {
console.log( filename + '\t' + reg.test(filename) );
});
// a.scss true
// _a.scss false
// a_b.scss true
Thanks!
I found a solution at joshuawinn, but I can't understand why my code doesn't work...
{
"filename_filter": "(/|\\\\|^)(?!_)(\\w+)\\.(css|js|sass|less|scss)$",
"build_on_save": 1
}
Sorry for my poor English !
let reg = /^(_|)\w*.(sass|less|scss)$/,
arr = [
'a.scss',
'_a.scss',
'a_b.scss'
];
arr.forEach(function( filename ) {
console.log( filename + '\t' + reg.test(filename) );
});
I have a podspec for a project that contains an embedded C++ library. The podspec looks like this (with the source being local until I get it working and push to GitHub):
Pod::Spec.new do |s|
s.name = "LibName"
s.version = "1.0.0"
s.summary = "Summary"
s.license = "BSD"
s.homepage = "https://homepage.com"
s.author = { "Dov Frankel" => "dov@email.com" }
s.source = { :git => "/Users/Dov/PathTo/LocalLibrary" }
s.ios.deployment_target = "5.0"
s.osx.deployment_target = "10.7"
s.requires_arc = false
s.source_files = "Classes/*.{mm,m,h}",
"Libraries/unrar/*.hpp",
"Libraries/lib/fileA.cpp",
"Libraries/lib/fileB.cpp",
s.preserve_paths = "Libraries/lib/fileC.cpp",
"Libraries/lib/fileD.cpp"
end
In the LibName project that gets created, the list of compiled sources includes fileA, fileB, fileC, and fileD. Why is that? The preserve_paths files should only be preserved, not compiled.
D'oh! Remove the trailing comma from fileB.cpp, which apparently causes the preserve_paths to get concatenated onto the end of source_files.
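To make the Ruby gotcha explicit: with the trailing comma, the parser treats the `s.preserve_paths = ...` statement as a continuation of the `s.source_files` assignment, so the preserve_paths files end up appended to source_files as well. A sketch of the corrected lines:

```ruby
s.source_files = "Classes/*.{mm,m,h}",
                 "Libraries/unrar/*.hpp",
                 "Libraries/lib/fileA.cpp",
                 "Libraries/lib/fileB.cpp"   # no trailing comma here
s.preserve_paths = "Libraries/lib/fileC.cpp",
                   "Libraries/lib/fileD.cpp"
```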