I'm wondering if it is possible to add a new tool to the Yocto environment. My recipe calls the mkimage tool provided by U-Boot, but this tool needs to call bison, which is not available in the Yocto environment (the error message says: /bin/sh: bison: not found). How can I add this tool?
Thanks.
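A common way to handle this (a sketch, not from this thread; the recipe name is hypothetical) is to declare a build-time dependency on the native bison package in the recipe whose task invokes mkimage, so BitBake builds bison for the host before that task runs:

```bitbake
# In the recipe that calls mkimage (e.g. my-firmware.bb)
# bison-native provides a host-side bison binary on the recipe's build PATH
DEPENDS += "bison-native"
```

After adding the dependency, re-running `bitbake my-firmware` should make bison available in the recipe's sysroot.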
I am trying to build TensorFlow 2.4.1 C++ API on Windows 10 and I am having issues.
What I've done so far:
Downloaded the TensorFlow source from the official repo
https://github.com/tensorflow/tensorflow
and switched to the official v2.4.1 tag
Downloaded and installed Python 3.6.8 x64
Created a virtual env with Python 3.6.8
Created and installed requirements.txt based on the data here
Downloaded and installed CUDA 11.0 and cuDNN v8.0.4.30 (for CUDA 11)
Downloaded and installed msys2 and added its location to PATH
Downloaded and installed Bazel (3.1.0), as it is the most recent version listed here
Then I ran the configuration process with python configure.py
and configured for a C++ build (tensorflow_cc) with GPU support.
Here I hit a problem: during the config process, Windows-style backslashes are accepted as valid input, but they cause problems when you actually run the Bazel build, so I reran my configuration and provided forward slashes instead. CUDA and cuDNN were then successfully detected and compilation started.
The full contents of my .tf_configure.bazelrc are below
build --action_env PYTHON_BIN_PATH="D:/code/sdk/tensorflow/venv/Scripts/python.exe"
build --action_env PYTHON_LIB_PATH="D:/code/sdk/tensorflow/venv/lib/site-packages"
build --python_path="D:/code/sdk/tensorflow/venv/Scripts/python.exe"
build --config=xla
build --action_env TF_CUDA_VERSION="11.0"
build --action_env TF_CUDNN_VERSION="8.0.4"
build --action_env TF_CUDA_PATHS="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.0,D:/code/sdk/cudnn-11.0-windows-x64-v8.0.4.30/cuda"
build --action_env CUDA_TOOLKIT_PATH="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.0"
build --action_env TF_CUDA_COMPUTE_CAPABILITIES="5.0"
build --config=cuda
build:opt --copt=/arch:AVX
build:opt --host_copt=/arch:AVX
build:opt --define with_default_optimizations=true
build --define=override_eigen_strong_inline=true
test --flaky_test_attempts=3
test --test_size_filters=small,medium
test:v1 --test_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu,-oss_serial
test:v1 --build_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu
test:v2 --test_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu,-oss_serial,-v1only
test:v2 --build_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu,-v1only
build --action_env TF_CONFIGURE_IOS="0"
About 20 minutes into the compilation, however, it failed with the following error:
ERROR: D:/code/sdk/tensorflow/tensorflow/stream_executor/cuda/BUILD:366:1: C++ compilation of rule '//tensorflow/stream_executor/cuda:cudnn_stub' failed (Exit 2): python.exe failed: error executing command
And the reason the command fails to execute is
bazel-out/x64_windows-opt/bin/external/local_config_cuda/cuda/_virtual_includes/cudnn_header\third_party/gpus/cudnn/cudnn.h(61): fatal error C1083: Cannot open include file: 'cudnn_ops_infer.h': No such file or directory
And here I am wondering what is going on. I already provided a path to cuDNN, but apparently Bazel doesn't really know about it, even though it previously acknowledged that the path I provided was correct. Am I missing some environment variable that tells it where cuDNN is?
Has anyone built TF C++ v2.4.1 on Windows? There is so little information online, even the official page says nothing about Windows builds. It's only Linux and Mac...
As I was running out of ideas, I decided to take a look at the Bazel build scripts for CUDA.
In <REPO>\tensorflow\third_party\gpus\cuda_configure.bzl I saw that the cuDNN path is read from the environment variable CUDNN_INSTALL_PATH,
and if it is not present, it defaults to /usr/local/include.
Anyway, I tried set CUDNN_INSTALL_PATH=D:/code/sdk/cudnn-11.0-windows-x64-v8.0.4.30/cuda and WOOHOO, it compiled!
(Pro tip: set the env var without any quotes and with forward slashes...)
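Putting the fix together, the working sequence looks roughly like this (Windows cmd; the bazel target name below is my assumption for a C++ DLL build, not confirmed by the log above):

```shell
:: Set the cuDNN path with no quotes and forward slashes (see pro tip above)
set CUDNN_INSTALL_PATH=D:/code/sdk/cudnn-11.0-windows-x64-v8.0.4.30/cuda
:: Re-run the build; target name is an assumption for the C++ library
bazel build --config=opt //tensorflow:tensorflow_cc.dll
```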
Hello, I need help building an artifact with Java support. I get an error when running this command to configure the build:
./configure --with-vpnc-script=~/Downloads/vpnc-script --with-java=/Library/Java/JavaVirtualMachines/jdk-14.0.1.jdk/Contents/Home --disable-nls
This is the error that I get:
checking jni.h usability... no
configure: error: unable to compile JNI test program
I need your help, please. I'm using release version 8.08 and building it on macOS. This is the official GitLab repository
Here is the content of config.log generated
Thanks
It seems that the configure script expects you to pass the path to the JDK's include directory, not the JDK itself.
This should work:
--with-java=/Library/Java/JavaVirtualMachines/jdk-14.0.1.jdk/Contents/Home/include
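For completeness, the full invocation with the corrected path would look like this (same flags as in the question, only --with-java changed):

```shell
./configure --with-vpnc-script=~/Downloads/vpnc-script \
  --with-java=/Library/Java/JavaVirtualMachines/jdk-14.0.1.jdk/Contents/Home/include \
  --disable-nls
```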
I'm doing a Yocto build for an Altera ARM processor. I'm trying to build the userland for core-image-minimal and I run into a dependency on pandoc. What's the best way to add pandoc to the Yocto build?
If you want to add a package to the image, the Yocto framework provides the IMAGE_INSTALL variable:
IMAGE_INSTALL_append = " pandoc"
I have some C/C++ code that I need to compile for target platforms (macOS, Linux flavors, etc.). However, it isn't for Node.js bindings, just some scripts written in C, so I don't absolutely need to use node-gyp to do this.
My question is - what is the best way to compile these C scripts if they are packaged in an NPM package. Should I just use the postinstall script to compile the C code? What is best practice here?
What is the best way to compile these C scripts if they are packaged in an NPM package
This task is generally solved by cross-platform build systems like Autotools, CMake, qmake, and so on.
Create an independent C/C++ package with a configured build environment. In the npm package, check whether the program compiled from C/C++ is available. If it is not, show an error message pointing out where to find it, and document how to compile and install it.
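As a minimal sketch of the npm side (package and directory names are hypothetical), you can hook the compile step into the install lifecycle so it runs when the package is installed:

```json
{
  "name": "my-c-scripts",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "make -C src"
  }
}
```

Here `postinstall` runs after `npm install` completes, invoking a Makefile in an assumed `src/` directory; the Makefile (or a wrapper script) is the natural place to detect a missing compiler and print the "where to find it" message described above.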
I installed the CDT package via Install Software option in Eclipse, and after that, I installed the Command Line Tools using Xcode on my Mac. I am running Eclipse Juno on Mountain Lion.
After installing command line tools, I exported the paths with:
export CC=/usr/bin/gcc
export CXX=/usr/bin/g++
In eclipse, I'm getting this error with auto-generated HelloWorld executable projects and autotools:
Error 127 occurred while running autoreconf
make: *** No rule to make target 'all'.
From what I have found, the second error has to do with g++, but I'm not really sure what the issue is.
I'd appreciate any help. Thanks.
Hopefully you've installed the Xcode command line tools.
Also you might need to configure the project.
Ideally you invoke aclocal, automake --add-missing, and then autoconf.
Then run configure and make. You might need the -i option for autoreconf, which installs missing auxiliary files.
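The sequence described above, as a sketch to run from the project root (a single `autoreconf -i` bundles the first three steps):

```shell
aclocal                  # generate aclocal.m4 from configure.ac
automake --add-missing   # create Makefile.in, copying in missing helper scripts
autoconf                 # generate the configure script
./configure              # generate the Makefiles
make                     # build the project
```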