Building SpiderMonkey for Windows - c++

I'm trying to build SpiderMonkey (32-bit) for Windows. Following the answer here, I performed the instructions here.
The command line I used for building is:
PATH=$PATH:"/c/Program Files/LLVM/bin/" JS_STANDALONE=1 ../configure.in --enable-nspr-build --disable-jemalloc --disable-js-shell --disable-tests --target=i686-pc-mingw32 --host=i686-pc-mingw32 --with-libclang-path="C:/Program Files/LLVM/bin"
However, I'm getting various linker errors where SpiderMonkey doesn't find Rust encoding functions, such as:
lld-link: error: undefined symbol: _encoding_mem_convert_latin1_to_utf8_partial
referenced by c:\firefox_90_0\js\src\vm\CharacterEncoding.cpp:109
..\Unified_cpp_js_src17.obj:(unsigned int __cdecl JS::DeflateStringToUTF8Buffer(class
JSLinearString *, class mozilla::Span<char, 4294967295>))
After looking at the SpiderMonkey config files (Cargo.toml files), it seems to me that during compilation SpiderMonkey should build jsrust.lib out of the Rust bindings, but in fact this doesn't happen and I get the linker errors. Any ideas?

Yes, you are right: when compiling SpiderMonkey, mach/mozbuild should build jsrust.lib and link it into the resulting DLL/js-shell executable.
In my case, building jsrust.lib was also missing a bcrypt import.
Both issues can be fixed by applying the following patch to the sources, which lets mozbuild traverse into the js/src/rust directory and adds the aforementioned missing import (tested on ESR 91 and up):
--- a/js/src/moz.build
+++ b/js/src/moz.build
@@ -7,6 +7,10 @@
 include("js-config.mozbuild")
 include("js-cxxflags.mozbuild")
 
+if CONFIG["JS_STANDALONE"]:
+    DIRS += ["rust"]
+    include("js-standalone.mozbuild")
+
 # Directory metadata
 component_engine = ("Core", "JavaScript Engine")
 component_gc = ("Core", "JavaScript: GC")
@@ -51,10 +55,7 @@ if CONFIG["ENABLE_WASM_CRANELIFT"]:
     CONFIGURE_SUBST_FILES += ["rust/extra-bindgen-flags"]
 
 if not CONFIG["JS_DISABLE_SHELL"]:
-    DIRS += [
-        "rust",
-        "shell",
-    ]
+    DIRS += ["shell"]
 
 TEST_DIRS += [
     "gdb",
--- a/js/src/rust/moz.build
+++ b/js/src/rust/moz.build
@@ -37,4 +37,5 @@ elif CONFIG["OS_ARCH"] == "WINNT":
         "shell32",
         "userenv",
         "ws2_32",
+        "bcrypt",
     ]
(The patch is available as a gist, alongside a tested mozbuild config that builds a 32-bit .dll, here: https://gist.github.com/razielanarki/a890f21a037312a46450e244beeba983 )
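The fix above is a standard unified diff, so once saved to a file it can be applied from the top of the source tree with patch -p1 (or git apply). A self-contained sketch on a scratch tree (the paths and file contents below are stand-ins for illustration, not the real mozilla sources):

```shell
# Build a throwaway tree that mimics the js/src layout (stand-in content).
demo=$(mktemp -d)
mkdir -p "$demo/js/src"
printf 'include("js-config.mozbuild")\n' > "$demo/js/src/moz.build"

# A minimal unified diff in the same shape as the patch above.
cat > "$demo/fix.patch" <<'EOF'
--- a/js/src/moz.build
+++ b/js/src/moz.build
@@ -1 +1,2 @@
 include("js-config.mozbuild")
+DIRS += ["rust"]
EOF

# Apply it from the tree root, stripping the a/ and b/ path prefixes.
cd "$demo" && patch -p1 < fix.patch
cat js/src/moz.build
```

The -p1 flag matters: the patch paths carry the a/ and b/ prefixes, so one leading component must be stripped.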

Related

platformio No such file or directory problem

So I wanted to create a project, but when I tried to add a simple header file (adp.h), used the quick fix to add the line "${workspaceFolder}/lib/boot/headers" to c_cpp_properties.json, and built the project, the builder did not even find the file.
This seems strange to me, because three days earlier, when I was using this method to add header files, the builder was building my older projects flawlessly.
adp.h does not contain anything.
(editor setup screenshot)
Building message:
Processing teensy41 (platform: teensy; board: teensy41; framework: arduino)
------------------------------------------------------------
Verbose mode can be enabled via `-v, --verbose` option
CONFIGURATION: https://docs.platformio.org/page/boards/teensy/teensy41.html
PLATFORM: Teensy (4.12.0) > Teensy 4.1
HARDWARE: IMXRT1062 600MHz, 512KB RAM, 7.75MB Flash
DEBUG: Current (jlink) External (jlink)
PACKAGES:
- framework-arduinoteensy 1.153.0 (1.53)
- toolchain-gccarmnoneeabi 1.50401.190816 (5.4.1)
LDF: Library Dependency Finder -> https://bit.ly/configure-pio-ldf
LDF Modes: Finder ~ chain, Compatibility ~ soft
Found 90 compatible libraries
Scanning dependencies...
No dependencies
Building in release mode
Compiling .pio\build\teensy41\src\main.cpp.o
Compiling .pio\build\teensy41\FrameworkArduino\AudioStream.cpp.o
Compiling .pio\build\teensy41\FrameworkArduino\Blink.cc.o
Compiling .pio\build\teensy41\FrameworkArduino\DMAChannel.cpp.o
src\main.cpp:2:17: fatal error: adp.h: No such file or directory
*************************************************************
* Looking for adp.h dependency? Check our library registry!
*
* CLI > platformio lib search "header:adp.h"
* Web > https://platformio.org/lib/search?query=header:adp.h
*
*************************************************************
compilation terminated.
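Worth noting: c_cpp_properties.json only configures VS Code IntelliSense; the PlatformIO build itself takes its include paths from platformio.ini. A hedged sketch of the usual way to add one, assuming the headers really live in lib/boot/headers:

```ini
; platformio.ini -- hypothetical fragment; adjust env name and path to your project
[env:teensy41]
platform = teensy
board = teensy41
framework = arduino
build_flags = -I lib/boot/headers
```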

rpmbuild check-rpath reports error that path is not absolute, incorrectly

I've been building RPMs using CMake & CPack 3.13.4 on OEL7 for several months without issue. My CMake configuration contained these lines:
SET(CMAKE_SKIP_BUILD_RPATH FALSE)
SET(CMAKE_BUILD_WITH_INSTALL_RPATH FALSE)
SET(CMAKE_INSTALL_RPATH "${CMAKE_INSTALL_PREFIX}/lib")
SET(CMAKE_INSTALL_RPATH_USE_LINK_PATH FALSE)
This has allowed me to ensure that the locally built versions of the library are used before any installed versions. Without making any changes to these lines I am suddenly unable to build RPMs any more. I now get this error message:
+ /usr/lib/rpm/check-rpaths
*******************************************************************************
*
* WARNING: 'check-rpaths' detected a broken RPATH and will cause 'rpmbuild'
* to fail. To ignore these errors, you can set the '$QA_RPATHS'
* environment variable which is a bitmask allowing the values
* below. The current value of QA_RPATHS is 0x0000.
*
* 0x0001 ... standard RPATHs (e.g. /usr/lib); such RPATHs are a minor
* issue but are introducing redundant searchpaths without
* providing a benefit. They can also cause errors in multilib
* environments.
* 0x0002 ... invalid RPATHs; these are RPATHs which are neither absolute
* nor relative filenames and can therefore be a SECURITY risk
* 0x0004 ... insecure RPATHs; these are relative RPATHs which are a
* SECURITY risk
* 0x0008 ... the special '$ORIGIN' RPATHs are appearing after other
* RPATHs; this is just a minor issue but usually unwanted
* 0x0010 ... the RPATH is empty; there is no reason for such RPATHs
* and they cause unneeded work while loading libraries
* 0x0020 ... an RPATH references '..' of an absolute path; this will break
* the functionality when the path before '..' is a symlink
*
*
* Examples:
* - to ignore standard and empty RPATHs, execute 'rpmbuild' like
* $ QA_RPATHS=$[ 0x0001|0x0010 ] rpmbuild my-package.src.rpm
* - to check existing files, set $RPM_BUILD_ROOT and execute check-rpaths like
* $ RPM_BUILD_ROOT=<top-dir> /usr/lib/rpm/check-rpaths
*
*******************************************************************************
ERROR 0002: file '/opt/project/lib/libConfigLoader.so.4.0.0' contains an invalid rpath '/opt/project/lib' in [/opt/project/lib]
ERROR 0002: file '/opt/project/lib/libConfigLoaderDb.so.4.0.0' contains an invalid rpath '/opt/project/lib' in [/opt/project/lib]
This seems wrong because it's stating that /opt/project/lib is not an absolute path, which it is.
The permissions of /opt/project/lib are:
[user@c7 ]$ ll -d /opt/
drwxrwxr-x. 10 root root 139 Oct 11 14:31 /opt/
[user@c7 ]$ ll -d /opt/project/
drwxrwx--- 11 root project 114 Oct 11 14:32 /opt/project/
[user@c7 ]$ ll -d /opt/project/lib
drwxrwx--- 2 root project 4096 Oct 11 14:53 /opt/project/lib
I am able to suppress the error by prepending QA_RPATHS=0x0002 to my make command, but I'm concerned that doing this might obscure other errors in future.
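QA_RPATHS is a plain bitmask, so several of the checks listed above can be ignored at once by OR-ing their hex values; a quick sketch of the shell arithmetic:

```shell
# Ignore "standard" (0x0001) and "empty" (0x0010) rpath findings together.
mask=$(( 0x0001 | 0x0010 ))
printf 'QA_RPATHS=0x%04x (decimal %d)\n' "$mask" "$mask"
# Typical invocation (rpmbuild itself is not run here):
#   QA_RPATHS=$(( 0x0001 | 0x0010 )) rpmbuild -ba my-package.spec
```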
I looked into the check-rpaths script (and the check-rpaths-worker script that it uses), and the issue seems to come from this part, where j has been set to the rpath, in this case /opt/project/lib:
case "$j" in
    (/lib/*|/usr/lib/*|/usr/X11R6/lib/*|/usr/local/lib/*)
        badness=0;;
    (/lib64/*|/usr/lib64/*|/usr/X11R6/lib64/*|/usr/local/lib64/*)
        badness=0;;
    (\$ORIGIN|\${ORIGINX}|\$ORIGIN/*|\${ORIGINX}/*)
        test $allow_ORIGIN -eq 0 && badness=8 || {
            badness=0
            new_allow_ORIGIN=1
        }
        ;;
    (/*\$PLATFORM*|/*\${PLATFORM}*|/*\$LIB*|/*\${LIB}*)
        badness=0;;
    (/lib|/usr/lib|/usr/X11R6/lib)
        badness=1;;
    (/lib64|/usr/lib64|/usr/X11R6/lib64)
        badness=1;;
    (.*)
        badness=4;;
    (*) badness=2;;
esac
(Source)
I don't understand how this ever let /opt/project/lib pass; from that case statement it would always fall through to the (*) branch and set badness=2.
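That reading can be checked in isolation; a self-contained sketch of the same case logic (the $ORIGIN branches are trimmed here since they need extra state) confirms that /opt/project/lib lands in the catch-all branch:

```shell
j=/opt/project/lib
case "$j" in
    (/lib/*|/usr/lib/*|/usr/X11R6/lib/*|/usr/local/lib/*)         badness=0;;
    (/lib64/*|/usr/lib64/*|/usr/X11R6/lib64/*|/usr/local/lib64/*) badness=0;;
    (/*\$PLATFORM*|/*\${PLATFORM}*|/*\$LIB*|/*\${LIB}*)           badness=0;;
    (/lib|/usr/lib|/usr/X11R6/lib)                                badness=1;;
    (/lib64|/usr/lib64|/usr/X11R6/lib64)                          badness=1;;
    (.*)  badness=4;;   # literal leading dot, i.e. a relative path
    (*)   badness=2;;   # anything else: "invalid rpath"
esac
echo "badness=$badness"
```

For /opt/project/lib this prints badness=2, matching the ERROR 0002 seen above, which supports the suspicion that the path would never have passed this check.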
What else can I try?
I had the same problem.
In my case, removing the ~/.rpmmacros file solved it.
(I was running make package with a cmake/cpack-generated Makefile on a shared machine. Probably somebody or something had changed the contents of that file in a way that the following line appeared or got uncommented:
%__arch_install_post /usr/lib/rpm/check-rpaths /usr/lib/rpm/check-buildroot
That seemed to be the cause of the problem in my case.)
It seems that on some versions/platforms cmake/cpack assumes that the paths used for the "make" step will affect the binaries for RPM distribution. For example, in my project multiple CMakeLists.txt files each link together binaries and related shared objects. There are include and library directories that are relevant for building each binary but have nothing to do with the resulting RPMs for distribution.
So there are two types of directories: one set for building the binaries and shared objects, and another for distributing the binaries and related shared objects in RPMs (often defined in the system path and LD_LIBRARY_PATH on the target platform).
In my case I got no rpath error/warning in the "make" step, which would have been understandable, but instead I got them in the "cpack" step, which makes no sense at all. I bypassed it by setting the environment variable
export QA_SKIP_RPATHS=1
AFTER the "make" and BEFORE the "cpack" step.
You may opt out with %global __brp_check_rpaths %{nil} in your spec file.
For more details, read https://fedoraproject.org/wiki/Changes/Broken_RPATH_will_fail_rpmbuild

C++ Tensorflow API with TensorRT

My goal is to run a tensorrt optimized tensorflow graph in a C++ application. I am using tensorflow 1.8 with tensorrt 4. Using the python api I am able to optimize the graph and see a nice performance increase.
Trying to run the graph in c++ fails with the following error:
Not found: Op type not registered 'TRTEngineOp' in binary running on e15ff5301262. Make sure the Op and Kernel are registered in the binary running in this process.
Other, non-tensorrt graphs work. I had a similar error with the Python API, but solved it by importing tensorflow.contrib.tensorrt. From the error I am fairly certain the kernel and op are not registered, but am unaware of how to do so in the application after tensorflow has been built. On a side note, I cannot use bazel but am required to use cmake. So far I link against libtensorflow_cc.so and libtensorflow_framework.so.
Can anyone help me here? thanks!
Update:
Using the C or C++ API to load _trt_engine_op.so does not throw an error while loading, but it fails to run with:
Invalid argument: No OpKernel was registered to support Op 'TRTEngineOp' with these attrs. Registered devices: [CPU,GPU], Registered kernels:
<no registered kernels>
[[Node: my_trt_op3 = TRTEngineOp[InT=[DT_FLOAT, DT_FLOAT], OutT=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], input_nodes=["tower_0/down_0/conv_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer", "tower_0/down_0/conv_skip/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer"], output_nodes=["tower_0/down_0/conv_skip/Relu", "tower_0/down_1/conv_skip/Relu", "tower_0/down_2/conv_skip/Relu", "tower_0/down_3/conv_skip/Relu"], serialized_engine="\220{I\000...00\000\000"](tower_0/down_0/conv_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, tower_0/down_0/conv_skip/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer)]]
Another way to solve the problem with the error "Not found: Op type not registered 'TRTEngineOp'" on Tensorflow 1.8:
1) In the file tensorflow/contrib/tensorrt/BUILD, add a new section with the following content:
cc_library(
    name = "trt_engine_op_kernel_cc",
    srcs = [
        "kernels/trt_calib_op.cc",
        "kernels/trt_engine_op.cc",
        "ops/trt_calib_op.cc",
        "ops/trt_engine_op.cc",
        "shape_fn/trt_shfn.cc",
    ],
    hdrs = [
        "kernels/trt_calib_op.h",
        "kernels/trt_engine_op.h",
        "shape_fn/trt_shfn.h",
    ],
    copts = tf_copts(),
    visibility = ["//visibility:public"],
    deps = [
        ":trt_logging",
        ":trt_plugins",
        ":trt_resources",
        "//tensorflow/core:gpu_headers_lib",
        "//tensorflow/core:lib_proto_parsing",
        "//tensorflow/core:stream_executor_headers_lib",
    ] + if_tensorrt([
        "@local_config_tensorrt//:nv_infer",
    ]) + tf_custom_op_library_additional_deps(),
    alwayslink = 1,  # buildozer: disable=alwayslink-with-hdrs
)
2) Add //tensorflow/contrib/tensorrt:trt_engine_op_kernel_cc as a dependency to the corresponding Bazel target you want to build.
PS: There is no need to load the library _trt_engine_op.so with TF_LoadLibrary.
Here are my findings (and some kind of solution) for this problem (Tensorflow 1.8.0, TensorRT 3.0.4):
I wanted to include the tensorrt support into a library, which loads a graph from a given *.pb file.
Just adding //tensorflow/contrib/tensorrt:trt_engine_op_kernel to my Bazel BUILD file didn't do the trick for me. I still got a message indicating that the ops were not registered:
2018-05-21 12:22:07.286665: E tensorflow/core/framework/op_kernel.cc:1242] OpKernel ('op: "TRTCalibOp" device_type: "GPU"') for unknown op: TRTCalibOp
2018-05-21 12:22:07.286856: E tensorflow/core/framework/op_kernel.cc:1242] OpKernel ('op: "TRTEngineOp" device_type: "GPU"') for unknown op: TRTEngineOp
2018-05-21 12:22:07.296024: E tensorflow/examples/tf_inference_lib/cTfInference.cpp:56] Not found: Op type not registered 'TRTEngineOp' in binary running on ***.
Make sure the Op and Kernel are registered in the binary running in this process.
The solution was that I had to load the ops library (tf_custom_op_library) within my C++ code using the C API:
#include "tensorflow/c/c_api.h"
...
TF_Status* status = TF_NewStatus();
TF_LoadLibrary("_trt_engine_op.so", status);
The shared object _trt_engine_op.so is created for the bazel target //tensorflow/contrib/tensorrt:python/ops/_trt_engine_op.so:
bazel build --config=opt --config=cuda --config=monolithic \
//tensorflow/contrib/tensorrt:python/ops/_trt_engine_op.so
Now I only have to make sure that _trt_engine_op.so is available whenever it is needed, e.g. via LD_LIBRARY_PATH.
If anybody has an idea how to do this in a more elegant way (why do we have two artefacts which have to be built? Can't we just have one?), I'm happy for every suggestion.
tldr
add //tensorflow/contrib/tensorrt:trt_engine_op_kernel as dependency to the corresponding BAZEL project you want to build
Load the ops-library _trt_engine_op.so in your code using the C-API.
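For the last step, making _trt_engine_op.so findable at runtime usually just means extending the loader path; a minimal sketch (the bazel-bin location is an assumption about where your build put the file; adjust it):

```shell
# Hypothetical output directory under a TensorFlow checkout; adjust to yours.
TRT_OP_DIR="$HOME/tensorflow/bazel-bin/tensorflow/contrib/tensorrt"
export LD_LIBRARY_PATH="${TRT_OP_DIR}${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
echo "$LD_LIBRARY_PATH"
```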
For Tensorflow r1.8, the additions shown below in two BUILD files and building libtensorflow_cc.so with the monolithic option worked for me.
diff --git a/tensorflow/BUILD b/tensorflow/BUILD
index cfafffd..fb8eb31 100644
--- a/tensorflow/BUILD
+++ b/tensorflow/BUILD
@@ -525,6 +525,8 @@ tf_cc_shared_object(
         "//tensorflow/cc:scope",
         "//tensorflow/cc/profiler",
         "//tensorflow/core:tensorflow",
+        "//tensorflow/contrib/tensorrt:trt_conversion",
+        "//tensorflow/contrib/tensorrt:trt_engine_op_kernel",
     ],
 )
diff --git a/tensorflow/contrib/tensorrt/BUILD b/tensorflow/contrib/tensorrt/BUILD
index fd3582e..a6566b9 100644
--- a/tensorflow/contrib/tensorrt/BUILD
+++ b/tensorflow/contrib/tensorrt/BUILD
@@ -76,6 +76,8 @@ cc_library(
     srcs = [
         "kernels/trt_calib_op.cc",
         "kernels/trt_engine_op.cc",
+        "ops/trt_calib_op.cc",
+        "ops/trt_engine_op.cc",
     ],
     hdrs = [
         "kernels/trt_calib_op.h",
@@ -86,6 +88,7 @@ cc_library(
     deps = [
         ":trt_logging",
         ":trt_resources",
+        ":trt_shape_function",
         "//tensorflow/core:gpu_headers_lib",
         "//tensorflow/core:lib_proto_parsing",
         "//tensorflow/core:stream_executor_headers_lib",
As you mentioned, it should work when you add //tensorflow/contrib/tensorrt:trt_engine_op_kernel to the dependency list. Currently the Tensorflow-TensorRT integration is still in progress and may work well only for the python API; for C++ you'll need to call ConvertGraphDefToTensorRT() from tensorflow/contrib/tensorrt/convert/convert_graph.h for the conversion.
Let me know if you have any questions.
Solution: add import
from tensorflow.python.compiler.tensorrt import trt_convert as trt
link discuss: https://github.com/tensorflow/tensorflow/issues/26525
Here is my solution; tensorflow is 1.14.
In your BUILD file, e.g. tensorflow/examples/your_workspace/BUILD, in tf_cc_binary add:
srcs = [..., "//tensorflow/compiler/tf2tensorrt:ops/trt_engine_op.cc"]
deps = [..., "//tensorflow/compiler/tf2tensorrt:trt_op_kernels"]

Cannot make CGAL examples in Cygwin

I am unable to build some of the CGAL examples under Cygwin. All of the failing examples share similar error messages.
Any guidance would be most appreciated.
Following are the steps that I followed and a sample error from a "make".
Cygwin (x64) installed under Windows 7 to d:\cygwin64.
CGAL source downloaded from https://github.com/CGAL/cgal/releases/download/releases%2FCGAL-4.9/CGAL-4.9.zip
and unzipped to D:\cygwin64\usr\CGAL-4.9
All libraries supposedly needed for CGAL were installed via the Cygwin x64 setup.
Initial cmake:
cd /usr/CGAL-4.9
cmake -DCMAKE_LEGACY_CYGWIN_WIN32=1 -DWITH_CGAL_Qt5=OFF -DWITH_examples=ON .
Some examples could not be configured; these included the mesh and Scale_space_reconstruction_3 examples.
cd /usr/CGAL-4.9
make
make examples
The first few examples were created successfully. For example,
PATH=/usr/local/bin:/usr/bin:/bin:/lib:/usr/CGAL-4.9/bin:/usr/CGAL-4.9/lib
cd /usr/CGAL-4.9/examples/AABB_tree
./AABB_triangle_3_example.exe
3 intersections(s) with ray query
closest point is: 0.333333 0.333333 0.333333
squared distance: 8.33333
A later example demonstrates a nagging problem that shows up in a number of the examples:
cd /usr/CGAL-4.9/examples/Snap_rounding_2/
cmake -DCGAL_DIR=/usr/CGAL-4.9 .
make
Scanning dependencies of target snap_rounding
[ 16%] Building CXX object CMakeFiles/snap_rounding.dir/snap_rounding.cpp.o
In file included from /usr/CGAL-4.9/include/CGAL/CORE/CoreDefs.h:41:0,
from /usr/CGAL-4.9/include/CGAL/CORE/BigFloatRep.h:40,
from /usr/CGAL-4.9/include/CGAL/CORE/BigFloat.h:38,
from /usr/CGAL-4.9/include/CGAL/CORE_BigFloat.h:27,
from /usr/CGAL-4.9/include/CGAL/CORE_arithmetic_kernel.h:39,
from /usr/CGAL-4.9/include/CGAL/Arithmetic_kernel.h:51,
from /usr/CGAL-4.9/include/CGAL/Arr_rational_function_traits_2.h:28,
from /usr/CGAL-4.9/include/CGAL/Sweep_line_2_algorithms.h:37,
from /usr/CGAL-4.9/include/CGAL/Snap_rounding_2.h:28,
from /usr/CGAL-4.9/examples/Snap_rounding_2/snap_rounding.cpp:
/usr/CGAL-4.9/include/CGAL/CORE/extLong.h:171:8: warning: ‘CORE::extLong::extLong(int)’ redeclared without dllimport attribute after being referenced with dll linkage
inline extLong::extLong(int i) : val(i), flag(0) {
^
/usr/CGAL-4.9/include/CGAL/CORE/extLong.h:292:13: warning: ‘bool CORE::extLong::isNaN() const’ redeclared without dllimport attribute after being referenced with dll linkage
inline bool extLong::isNaN() const {
There are a number of similar errors that have been omitted here.
Thanks!!!
As the errors that you are not reporting are likely due to a wrong import directive, you can try the following:
On include/CGAL/export/helpers.h
replace
# if defined(_WIN32) || defined(__CYGWIN__)
with
# if defined(_WIN32)
and then build with
cmake -DWITH_CGAL_Qt5=OFF -DWITH_examples=ON
From what I see, the build works much better (20% done in one hour and still going).
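The replacement can also be scripted; a self-contained sed sketch on a scratch copy of the guard line (in a real tree the file is include/CGAL/export/helpers.h):

```shell
demo=$(mktemp -d)
# Stand-in for the guard line in include/CGAL/export/helpers.h
printf '#  if defined(_WIN32) || defined(__CYGWIN__)\n' > "$demo/helpers.h"
# Drop the __CYGWIN__ clause so dllimport/dllexport applies only to native Win32
sed -i 's/defined(_WIN32) || defined(__CYGWIN__)/defined(_WIN32)/' "$demo/helpers.h"
cat "$demo/helpers.h"
```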

Choosing compiler options based on the operating system in boost-build

Currently I can build my program using boost build in different platforms by setting the toolset and parameters in the command line. For example :
Linux
b2
MacOS
b2 toolset=clang cxxflags="-stdlib=libc++" linkflags="-stdlib=libc++"
Is there a way to create a rule in the Jamroot file to decide which compiler to use based on the operating system? I am looking for something along these lines:
import os ;
if [ os.on-macos ] {
using clang : <cxxflags>"-stdlib=libc++" <linkflags>"-stdlib=libc++" ;
}
On Linux it automatically decides to use gcc, but on the Mac, if I don't specify the clang toolset, it will try (without success) to compile with gcc.
Just for reference, here is my current jamroot (any suggestions also appreciated):
# Project requirements (note, if running on a Mac you have to build foghorn with clang with libc++)
project myproject
: requirements <cxxflags>-std=c++11 <linkflags>-std=c++11 ;
# Build binaries in src
lib boost_program_options ;
exe app
: src/main.cpp src/utils src/tools boost_program_options
;
How about using a Jamroot? I have the following in mine. It selects between two GCC versions on Linux, depending on what's in an environment variable, and chooses vacpp on AIX.
import os ;
import modules ;

if [ os.name ] = LINUX
{
    switch [ modules.peek : ODSHOME ]
    {
        case *gcc-4*   : using gcc : 4.4 : g++-4.4 ;
        case *gcc-3.3* : using gcc : 3.3 : g++-3.3 ;
        case *         : error Only gcc v4 and gcc v3.3 supported. ;
    }
}
else if [ os.name ] = AIX
{
    using vacpp ;
}
else
{
    error Only Linux and AIX supported at present. ;
}
After a long time I have found out that there is really no way (apart from very hacky ones) to do this. The goal of Boost.Build is to leave the toolset option for the user to define.
The user has several ways to specify the toolset:
in the command line with --toolset=gcc for example
in the user configuration by setting it in the user-config.jam for all projects compiled by the user
in the site configuration by setting it in the site-config.jam for all users
the user-config.jam can be in the user's $HOME or in the boost build path.
the site-config.jam should be in the /etc directory, but could also be in the two locations above.
In summary, set up your site-config or user-config for a pleasant experience, and write a nice README file for users trying to compile your program.
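For example, a user-config.jam along these lines should make a plain b2 pick clang with libc++ on the Mac (the fourth argument to using carries toolset options; treat this as a sketch and adjust the compiler command to your system):

```jam
# ~/user-config.jam -- hypothetical example
using clang : : clang++ : <cxxflags>"-stdlib=libc++" <linkflags>"-stdlib=libc++" ;
```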
Hope this helps someone else.