I'm new to Bazel and Armeria. In the Armeria dev guide, the section on setting up with a build system has examples for Gradle and Maven, but not Bazel. Downloading the jar file (armeria-1.18.0.jar) and importing it directly with java_import() lets the project build, but gives an error at runtime: it cannot find Armeria's runtime dependencies, such as Micrometer.
This seems more like a question about Bazel than about Armeria.
Have you tried using maven_install? I believe it takes care of pulling in transitive dependencies.
Sample: https://github.com/bazelbuild/examples/tree/main/java-maven
Documentation: https://github.com/bazelbuild/rules_jvm_external
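For reference, here is a rough sketch of what that could look like with rules_jvm_external; the WORKSPACE boilerplate for loading rules_jvm_external itself is omitted, and the @maven//:... label just follows the default naming convention, so adjust to your setup:

load("@rules_jvm_external//:defs.bzl", "maven_install")

maven_install(
    # Transitive dependencies (Micrometer, Netty, etc.) are resolved automatically.
    artifacts = [
        "com.linecorp.armeria:armeria:1.18.0",
    ],
    repositories = [
        "https://repo1.maven.org/maven2",
    ],
)

Then in a BUILD file, depend on the resolved artifact instead of a hand-imported jar:

java_binary(
    name = "server",
    srcs = ["Server.java"],
    main_class = "example.Server",  # illustrative
    deps = ["@maven//:com_linecorp_armeria_armeria"],
)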
Related
https://github.com/dremio/dremio-oss/blob/master/pom.xml
After I clone the project and try to build it, some dependencies are not found. How can I find these dependencies? For example, the project requires calcite:4.0.0-20210722102535-bda216e83f-dremio, which cannot be found in Maven Central.
Any help?
I tried building dremio-oss on Windows the other day and there were no problems with resolving dependencies. You should include more information from the build log. You can also try using the "-U" option to force checking remote repositories which is useful if your local repository cache is out of date. Your error message also doesn't make sense since the correct version for Calcite is this one: https://github.com/dremio/dremio-oss/blob/master/pom.xml#L46
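For example (the goals here are just what I would typically run, adjust as needed):

mvn clean install -DskipTests -U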
I have recently started learning meson and I am testing switching to it (from CMake) in one of my projects. The problem is that I usually use cpack to build the project's packages/installers, and after scouring the meson docs for something similar to cpack, I am unable to find anything.
Requirements/what I currently use cpack for
Single script to automatically build and package binary releases (such as deb, rpm, windows installer, etc)
Integrates with the build system - Picks up targets automatically, doesn't require redefining installation logic or structure
Supports building at least deb packages and a windows installer (don't care which)
There is information on building release archives and then using scripts to process them with packaging tools (such as Inno). However, this is not really what I am looking for, as it is far more awkward and inflexible than cpack (i.e. I have to change three different scripts if the directory structure changes).
Ultimately I can learn the meson system and write packaging scripts by hand; no doubt it will make me a better scripter. However, I am eager to know if there is a better way of doing this that is not advertised in the docs, or if there is some unofficial project that automates the process.
Edit
By package I mean something like a deb package - a package for a system package manager, not something like conan.
I suggest that you use conan. Please take a look at the conan configuration for Meson.
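If you go that route, the conan docs describe a Meson build helper for packaging a Meson project. A minimal sketch of a conanfile.py using the Conan 2 API (the name, version and source paths are placeholders) looks roughly like this:

from conan import ConanFile
from conan.tools.meson import Meson, MesonToolchain

class MyProjectConan(ConanFile):
    name = "myproject"
    version = "0.1"
    settings = "os", "compiler", "build_type", "arch"
    exports_sources = "meson.build", "src/*"

    def generate(self):
        # Produce the toolchain/native files that Meson needs
        MesonToolchain(self).generate()

    def build(self):
        meson = Meson(self)
        meson.configure()
        meson.build()

    def package(self):
        # Reuse Meson's install rules for packaging
        Meson(self).install()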
It might be worth considering the meson RPM packaging module:
It autodetects installed files, dependencies and so on.
As of now, this module only supports generating an RPM spec file:
rpm = import('rpm')
rpm.generate_spec_template()
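The generated template still has to be filled in and then fed to the normal RPM tooling, e.g. (the spec file name is whatever the template was saved as):

rpmbuild -ba myproject.spec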
I am using Buck for my own C++ project, but I depend on a third-party library that is built using CMake. The CMake file is complex, so I do not think it is practical to recreate their CMake file in Buck. Instead, I would like to call CMake from Buck.
What is the best way to call CMake from Buck?
How should I structure my project to minimise headaches?
My suggestion is to use genrule and prebuilt_cxx_library.
This is my Buck study project using Google Test:
https://github.com/ar90n/lab/tree/master/sandbox/buck-study/gtest
This project contains two Buck files. One (./gtest/BUCK) is for fetching and building Google Test; the other (./BUCK) is for building and running the test programs.
If you want to build and run this project, please execute the following commands:
$ buck fetch //gtest:googletest-src
$ buck build :sample1
$ buck run :sample1
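The general shape of the genrule/prebuilt_cxx_library combination is roughly the following; this is only a sketch (not copied from the project above), and the exact prebuilt_cxx_library attributes differ between Buck versions:

genrule(
    name = "thirdparty-build",
    srcs = glob(["thirdparty/**"]),
    out = "install",
    # Configure, build and install the CMake project into the genrule output
    cmd = "cmake $SRCDIR/thirdparty -DCMAKE_INSTALL_PREFIX=$OUT && make && make install",
)

prebuilt_cxx_library(
    name = "thirdparty",
    header_dirs = ["thirdparty/include"],            # illustrative path
    static_lib = "thirdparty/lib/libthirdparty.a",   # illustrative path
    visibility = ["PUBLIC"],
)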
Calling CMake will break reproducibility, so it isn't the best approach. Instead, try the following:
Fork the project that builds with CMake.
Call CMake to generate any header files.
Save the header files somewhere in the project (e.g. /cmake-generated).
Create a header-only library of the header files generated by CMake.
Build the rest of the project with Buck, depending on the CMake library.
Commit everything to Git.
Repeat step 2 for every target that you require.
This is not as good as a true port to Buck, but you get most of the benefits for a one-time manual step.
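As a rough shell outline of steps 2, 3 and 6 (paths and names are purely illustrative):

cmake -S third_party/somelib -B build          # step 2: let CMake generate its header files
mkdir -p cmake-generated
cp build/generated/*.h cmake-generated/        # step 3: collect the generated headers
git add cmake-generated                        # step 6
git commit -m "Check in CMake-generated headers"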
I've been developing a C++ project using the TensorFlow C++ API; it just executes a graph created in Python. I currently build it with Bazel inside the TensorFlow source tree, but I think this is an inefficient way to do it.
I want just the TensorFlow library and header files, so I can compile my project using only CMake.
I know how to build the shared library:
bazel build -c opt --config=cuda //tensorflow:libtensorflow.so
but this command only produces a libtensorflow.so file. I can't find the header files needed to build my project.
Is there a way to package the TensorFlow library for C++, similar to the mvn package command?
As far as I know, there is no official distributable C++ API package. There is, however, tensorflow_cc project that builds and installs TF C++ API for you, along with convenient CMake targets you can link against. According to your description, that may be just what you need.
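For illustration, consuming it from CMake then looks roughly like this; the exact imported target name depends on the tensorflow_cc version, so check its README:

find_package(TensorflowCC REQUIRED)
add_executable(example example.cpp)
target_link_libraries(example TensorflowCC::TensorflowCC)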
If your operating system is Debian or Ubuntu, you can download unofficial prebuilt packages with the Tensorflow C/C++ libraries. This distribution can be used for C/C++ inference with CPU; GPU support is not included:
https://github.com/kecsap/tensorflow_cpp_packaging/releases
There are instructions on how to freeze a checkpoint in Tensorflow (TFLearn) and load the model for inference with the C/C++ API:
https://github.com/kecsap/tensorflow_cpp_packaging/blob/master/README.md
Beware: I am the developer of this Github project.
As Floop already mentioned, his tensorflow_cc project is also a good alternative without packaging, especially if you want GPU support for the inference.
You can build Tensorflow with CMake. This also creates a TensorflowConfig.cmake, which you can integrate into your project:
https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/cmake
Little hint: You have to build the shared lib, even if you do not need it.
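Assuming the generated TensorflowConfig.cmake ends up on your CMAKE_PREFIX_PATH, consuming it is roughly the following; the variable names here are assumptions on my part, so check what the generated config file actually exports:

find_package(Tensorflow REQUIRED)
target_include_directories(myapp PRIVATE ${TENSORFLOW_INCLUDE_DIRS})  # variable name assumed
target_link_libraries(myapp ${TENSORFLOW_LIBRARIES})                  # variable name assumed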
You have two options: static linking and dynamic linking.
If you want to dynamically link your C++ project against TensorFlow, all you need is the --whole-archive linker flag. The necessary header files are provided by a pip install.
Generating the libraries is basically:
bazel build -c opt --copt=-mfpmath=both --config=cuda //tensorflow:libtensorflow.so
bazel build -c opt --copt=-mfpmath=both --config=cuda //tensorflow:libtensorflow_cc.so
With everything in place, it is easy to run a TensorFlow graph in C, C++, or Go (GitHub project). See the linked project for working examples in C, C++, and Go.
When building against the shared library, the headers I use are in $PROJECT_HOME/bazel-genfiles.
Adding $PROJECT_HOME/bazel-genfiles to the compiler's include path should be enough.
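Putting it together, a compile/link line could look roughly like this, assuming you built libtensorflow_cc.so as above (paths are illustrative):

g++ -std=c++11 main.cc \
    -I$PROJECT_HOME -I$PROJECT_HOME/bazel-genfiles \
    -L$PROJECT_HOME/bazel-bin/tensorflow -ltensorflow_cc \
    -o main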
Using Maven 1.x with just the bundled/standard plugins, what configuration is necessary to build an executable Jar?
Answers should cover:
including dependencies in target Jar
proper classpath configuration to make dependency Jars accessible
Well, the easiest way is to simply set the maven.jar.mainclass property to the main class you'd like to use.
As for setting up the manifest classpath, you can set maven.jar.manifest.classpath.add=true to have Maven automatically update the classpath based on the dependencies described in project.xml.
Disclaimer: It's been a long time since I've used Maven 1 and I did not test any of this out, but I'm pretty sure this will get you started in the right direction. For more information check out the jar plugin docs.
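In other words, something like this in project.properties (the main class is obviously just a placeholder):

maven.jar.mainclass=com.example.Main
maven.jar.manifest.classpath.add=true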