Error while building the OpenConnect project (from the official GitLab repository) for Java - openconnect

Hello, I need help building the Java artifact. I get an error when running this command for the build:
./configure --with-vpnc-script=~/Downloads/vpnc-script --with-java=/Library/Java/JavaVirtualMachines/jdk-14.0.1.jdk/Contents/Home --disable-nls
This is the error that I get:
checking jni.h usability... no
configure: error: unable to compile JNI test program
I need your help, please. I'm using release version 8.08 and building it on macOS from the official GitLab repository.
Here is the content of the generated config.log:
Thanks

It seems that the configure script expects you to pass the path to the JDK's include directory, not the JDK home itself.
This should work:
--with-java=/Library/Java/JavaVirtualMachines/jdk-14.0.1.jdk/Contents/Home/include
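For reference, a sketch of the corrected invocation, assuming the JDK path from the question and that vpnc-script really lives in ~/Downloads ($HOME is used instead of ~ so the shell expands it even though it sits inside the option):

./configure \
    --with-vpnc-script=$HOME/Downloads/vpnc-script \
    --with-java=/Library/Java/JavaVirtualMachines/jdk-14.0.1.jdk/Contents/Home/include \
    --disable-nls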

Related

Can't make TensorFlow 2.4.1 (CPP) compile on Windows

I am trying to build TensorFlow 2.4.1 C++ API on Windows 10 and I am having issues.
What I've done so far:
Downloaded the TensorFlow source from the official repo
https://github.com/tensorflow/tensorflow
and switched to the official v2.4.1 tag
Downloaded and installed Python 3.6.8 x64
Created a virtual env with Python 3.6.8
Created and installed requirements.txt based on the data here
Downloaded and installed CUDA 11.0 and cuDNN v8.0.4.30 (for CUDA 11)
Downloaded and installed msys2 and set its location in PATH
Downloaded and installed Bazel (3.1.0) as it is the most recent entry here
Then I ran the configuration process with python configure.py
I configured for a C++ build (tensorflow_cc) with GPU support.
Here I hit a problem: during the config process Windows-style backslashes are accepted as valid input, but they cause problems when you actually run the Bazel compilation, so I reran my configuration and provided Linux-style forward slashes instead. After that, CUDA and cuDNN were successfully detected and compilation started.
The full contents of my .tf_configure.bazelrc are below:
build --action_env PYTHON_BIN_PATH="D:/code/sdk/tensorflow/venv/Scripts/python.exe"
build --action_env PYTHON_LIB_PATH="D:/code/sdk/tensorflow/venv/lib/site-packages"
build --python_path="D:/code/sdk/tensorflow/venv/Scripts/python.exe"
build --config=xla
build --action_env TF_CUDA_VERSION="11.0"
build --action_env TF_CUDNN_VERSION="8.0.4"
build --action_env TF_CUDA_PATHS="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.0,D:/code/sdk/cudnn-11.0-windows-x64-v8.0.4.30/cuda"
build --action_env CUDA_TOOLKIT_PATH="C:/Program Files/NVIDIA GPU Computing Toolkit/CUDA/v11.0"
build --action_env TF_CUDA_COMPUTE_CAPABILITIES="5.0"
build --config=cuda
build:opt --copt=/arch:AVX
build:opt --host_copt=/arch:AVX
build:opt --define with_default_optimizations=true
build --define=override_eigen_strong_inline=true
test --flaky_test_attempts=3
test --test_size_filters=small,medium
test:v1 --test_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu,-oss_serial
test:v1 --build_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu
test:v2 --test_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu,-oss_serial,-v1only
test:v2 --build_tag_filters=-benchmark-test,-no_oss,-no_windows,-no_windows_gpu,-no_gpu,-v1only
build --action_env TF_CONFIGURE_IOS="0"
About 20 minutes into the compilation, however, it failed with the following error:
ERROR: D:/code/sdk/tensorflow/tensorflow/stream_executor/cuda/BUILD:366:1: C++ compilation of rule '//tensorflow/stream_executor/cuda:cudnn_stub' failed (Exit 2): python.exe failed: error executing command
And the reason the command fails to execute is
bazel-out/x64_windows-opt/bin/external/local_config_cuda/cuda/_virtual_includes/cudnn_header\third_party/gpus/cudnn/cudnn.h(61): fatal error C1083: Cannot open include file: 'cudnn_ops_infer.h': No such file or directory
And here I am wondering what is going on. I already provided a path to cuDNN, but apparently Bazel doesn't really know about it, even though it previously acknowledged that the path I provided is correct. Am I missing some environment variable that I need to set to tell it where cuDNN is?
Has anyone built TF C++ v2.4.1 on Windows? There is so little information online; even the official page says nothing about Windows builds, only Linux and Mac...
As I was running out of ideas, I decided to take a look at the Bazel build scripts for CUDA.
In <REPO>\tensorflow\third_party\gpus\cuda_configure.bzl I saw that the cuDNN path is read from the environment variable CUDNN_INSTALL_PATH, and if it is not present it defaults to /usr/local/include.
Anyway, I tried set CUDNN_INSTALL_PATH=D:/code/sdk/cudnn-11.0-windows-x64-v8.0.4.30/cuda and WOOHOO, it compiled!
(Pro tip: set the env var without any quotes and with Linux-style forward slashes...)
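A minimal sketch of the fix in a Windows cmd shell, assuming the cuDNN archive was extracted to the path below (so that cudnn_ops_infer.h sits under its include folder); set the variable without quotes and with forward slashes, then rerun the same bazel build command as before from that shell:

rem Point TensorFlow's cuda_configure.bzl at the extracted cuDNN directory
set CUDNN_INSTALL_PATH=D:/code/sdk/cudnn-11.0-windows-x64-v8.0.4.30/cuda
rem ...then rerun the same bazel build invocation used before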

How to build PythonQt on Ubuntu

I want to embed Python scripts in my C++ Qt application. By searching the net I found that PythonQt is exactly what I am looking for, but when I went to its GitHub repo there is a build description given for Windows but not for Ubuntu. So after cloning the repo, if I include its src in my Qt .pro file, it tells me that
Python.h was not found. I think the reason is that I didn't build it on my system. Can anyone tell me how to build PythonQt on Ubuntu? The link to their repo is: https://github.com/MeVisLab/pythonqt
If that doesn't work, you can also suggest something else that will help me embed Python scripts into my Qt C++ application.
First clone the repo using the following command:
git clone https://github.com/MeVisLab/pythonqt.git
After that, cd into the cloned folder and execute the command below to build it on your system:
qmake
This command will generate the Makefile in your current directory. Run the following commands to completely build and install PythonQt on your system:
sudo make all
sudo make install
While executing those commands, if you get the following error:
fatal error: 'private/qmetaobjectbuilder_p.h'
run the command below to solve it:
sudo apt install qtbase5-private-dev
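For reference, a consolidated sketch of the whole sequence on Ubuntu, assuming Qt 5 with its qmake on PATH and that the clone directory is named pythonqt; since the original error was a missing Python.h, you will most likely also need the Python development headers (python3-dev) installed:

# Build and install PythonQt on Ubuntu (sketch)
sudo apt install qtbase5-private-dev python3-dev
git clone https://github.com/MeVisLab/pythonqt.git
cd pythonqt
qmake
sudo make all
sudo make install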

How do I install the socket.io C++ client library

I am trying to use the socket.io C++ client implementation. I have never used an external library with C++ before, so I'm confused.
This is the library I am trying to use:
https://github.com/socketio/socket.io-client-cpp
So I followed these instructions:
https://github.com/socketio/socket.io-client-cpp/blob/master/INSTALL.md
I installed Boost and CMake as stated, and I guess that part is fine.
My problem is with the 4th step and everything after it.
If I run
make install
the console throws:
make: *** No rule to make target 'install'. Stop.
The current directory looks like this:
To be honest, I didn't understand the 5th step, or the installation process in general. How should I include this library in my main.cpp so that I can use it? What are those generated Visual Studio project files?
Edit:
If I open the INSTALL Visual Studio project file and build the INSTALL project from Solution Explorer, I get this error:
Edit2:
After updating the websocketpp library, I now get this error after the build:
If you are having lots of issues, this is how I solved mine:
Boost 1.7.0 was not working for me, so I installed Boost 1.65.0.
After that, update the websocketpp library:
Go to the C:\socket.io-client-cpp\.git\modules\lib\websocketpp directory on the command line and type
git pull origin master
So after changing the Boost version to 1.65.0 and updating websocketpp, it finally built successfully.
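In case it helps, a hedged sketch of the full sequence on Windows after those two changes; the Boost install path below is a placeholder for wherever your Boost 1.65.0 actually lives, and the BOOST_ROOT/BOOST_VER flags follow the INSTALL.md linked in the question:

rem Update the websocketpp checkout that the build uses
cd C:\socket.io-client-cpp\.git\modules\lib\websocketpp
git pull origin master
rem Regenerate the project files against Boost 1.65.0
cd C:\socket.io-client-cpp
cmake -DBOOST_ROOT:STRING=C:\local\boost_1_65_0 -DBOOST_VER:STRING=1.65.0 .
rem On Windows, cmake emits Visual Studio projects rather than Makefiles, which is why
rem "make install" reports "No rule to make target 'install'"; build the generated
rem INSTALL project from Visual Studio instead.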

Can't compile Fabric on Windows - missing ltdl.h

I am trying to build a chaincode using go build.
Environment:
Installed Go 1.8.3 windows/amd64
Windows 10
When I run go build I get the following error:
# github.com/hyperledger/fabric/vendor/github.com/miekg/pkcs11
..\..\github.com\hyperledger\fabric\vendor\github.com\miekg\pkcs11\pkcs11.go:29:18: fatal error: ltdl.h: No such file or directory
compilation terminated.
I checked and my GCC installation does not contain the ltdl.h file in the include folder.
I found a SO post with a solution for Linux, but not one for Windows.
Can someone help?
On Windows you can build without PKCS11:
go build --tags nopkcs11
Try running the following command
sudo apt install libtool libltdl-dev
Make sure go get -u github.com/hyperledger/fabric/core/chaincode/shim throws no error, then run go build.
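For the Windows route, a short sketch putting those pieces together; the chaincode directory below is a placeholder for wherever your chaincode package actually lives:

rem Fetch the shim package, then build the chaincode without the PKCS11 dependency
go get -u github.com/hyperledger/fabric/core/chaincode/shim
cd %GOPATH%\src\path\to\your\chaincode
go build --tags nopkcs11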

Linux configure error in the Fix8 C++ library?

I am trying to run Fix8 at Fix8.org. I am following the README instructions as explained at:
https://github.com/dakka/fix8
I am getting an error when running the ./configure command. It results in:
configure: error: cannot find install-sh, install.sh, or shtool in "." "./.." "./../.."
Does anyone have experience fixing this? I'm running the latest versions of both Debian and Ubuntu Linux.
Thanks
The files mentioned in the error are placed in the source directory by autoconf and should be distributed in the release tarball. If they are not, it's a bug, so please report it to the author.
If you have autoconf installed, you can get the files by running ./bootstrap (or whatever script the project has; the usual name is ./autogen.sh), but you are not supposed to need autoconf to run the configure script.
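If you do end up regenerating the build scripts yourself, a minimal sketch for Debian/Ubuntu, assuming the clone directory is named fix8 and that the repository ships a bootstrap script (autoreconf --install does the same job if it does not):

# Install the autotools, regenerate configure in the source tree, then build
sudo apt install autoconf automake libtool
cd fix8
./bootstrap        # or: autoreconf --install
./configure
make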
Yeah, sorry about that - you need libtool. I have supplied configure, as this was missing. Check the FAQ. If you have any more problems, please email the support group.