Build ROS nodes with Bazel

Is it possible to build ROS nodes with Bazel? For example, could we build the example below with Bazel?
http://wiki.ros.org/ROS/Tutorials/WritingServiceClient%28c%2B%2B%29
Please share your views, and a solution if possible.

Nicolò Valigi has done so in a GitHub repository:
https://github.com/nicolov/ros-bazel
His steps are:
Use Catkin from within Bazel to build the base ROS code.
Run message generation as Skylark (now Starlark) rules.
This works for both C++ and Python code.

Related

How to handle native dependencies in Flutter plugin

I'm trying to create a Flutter plugin which wraps some C++ code. That code has multiple dependencies, so steps need to be taken to provide those dependencies to the C++ build system (currently based on CMake, building for Windows desktop), in particular where to find headers and libraries. There are a number of ways to do this with CMake, and we happen to be using Conan to fetch and provide dependencies, but in any case I need a way to run a pre-build step and/or pass arguments to Flutter's invocation of CMake to help it locate these dependencies.
I have ascertained that Flutter doesn't support much (or any) customization of the build. There are the build and build_runner packages, but I don't believe they have knowledge of the build output directory for my platform, nor do they provide a way to inject arguments into the CMake invocation.
How can I feed dependencies to the native build of the Flutter plugin? These dependencies may be static or dynamic libraries.
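For the CMake side of this, note that a Flutter Windows plugin ships its own windows/CMakeLists.txt, which the plugin author controls, so a pre-build step can live there. A minimal sketch, assuming Conan 1.x with its classic cmake generator and a conanfile next to that CMakeLists.txt (the plugin target and package names are hypothetical):
execute_process(                    # configure-time step: fetch deps with Conan
  COMMAND conan install ${CMAKE_CURRENT_SOURCE_DIR} --build=missing
  WORKING_DIRECTORY ${CMAKE_BINARY_DIR}
  RESULT_VARIABLE conan_result
)
if(NOT conan_result EQUAL 0)
  message(FATAL_ERROR "conan install failed: ${conan_result}")
endif()
include(${CMAKE_BINARY_DIR}/conanbuildinfo.cmake)  # written by Conan's cmake generator
conan_basic_setup(TARGETS)                         # defines CONAN_PKG::<name> targets
target_link_libraries(my_plugin PRIVATE CONAN_PKG::somelib)  # hypothetical names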
The only official way to interoperate with C/C++ code is the ffi library. There is not much info about it online, but there is official documentation.
Keep in mind that C interop is host-platform dependent.
To generate bindings for the needed native code, there is the ffigen library. I think it will also help you with your dependency issue.
Hope it helps.

C++ V8 Embedding project structure

I'm trying to get Chrome's V8 embedded in my C++ project, and all I can achieve is what I would call my project being embedded in V8. My concern is that my program is cross-platform and I would like the build commands to be the same everywhere. I started development on Windows, but I'm using a Mac now to get V8 running.
I can get V8 built and its samples running using this setup:
Set up depot_tools: https://commondatastorage.googleapis.com/chrome-infra-docs/flat/depot_tools/docs/html/depot_tools_tutorial.html#_setting_up
Get the source: https://v8.dev/docs/source-code
Build: https://v8.dev/docs/build
My current solution has a few commands: install, build, run. The build command is the most involved, as it attempts to automatically edit V8's BUILD.gn file to insert your project in place of V8's own; it adds all files in your source directory to the sources list.
This approach feels very wrong for a few reasons. The first is that there is almost certainly a better way to configure my project than editing a build script with a Python script. Secondly, I would like V8 to be embedded in my project, not the other way around. I only have SDL2 as a dependency, but I have cross-platform CMake builds set up, which would have to be abandoned in favor of however V8 builds its source files. I feel this could get hard to manage if I add more dependencies.
I'm currently working with a small test project with one source file.
EDIT: I can't find anything about embedding V8 that sits between running a sample and the API reference.
The usual approach is to have a step in your build system that builds the V8 library as a dependency (as well as any other dependencies you might have). For that, it should use the official V8 build instructions. If you have a split between a step that gets sources/dependencies and one that compiles them, then getting depot_tools and calling fetch v8 / gclient sync belong in the former. Note that you probably want to pin a version (the latest stable branch) rather than using tip-of-tree. So, in pseudocode, you'd have something like:
step get_dependencies:
    download/update depot_tools
    download/update V8    # pinned revision (using depot_tools)
step compile (depends on "get_dependencies"):
    cd v8; gn gen out/...; ninja -C out/...
    cd sdl; build sdl
    build your own code, linking against V8/SDL/other deps
Many build systems already have convenient ways to do these things. I don't know CMake very well though, so I can't suggest anything specific there.
I agree that using scripts to automatically modify BUILD.gn feels wrong. It'll probably also turn out to be brittle and high-maintenance over time.
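For what it's worth, here is a rough sketch of how the compile step could be modeled in CMake with ExternalProject, on the assumption that depot_tools has already fetched V8 into the tree and that gn/ninja are on the PATH (paths, gn args, and target names are illustrative, not canonical):
include(ExternalProject)
ExternalProject_Add(v8
  SOURCE_DIR        ${CMAKE_SOURCE_DIR}/third_party/v8   # fetched beforehand via depot_tools
  DOWNLOAD_COMMAND  ""                                   # depot_tools owns fetching/updating
  CONFIGURE_COMMAND gn gen out/x64.release --args=is_debug=false
  BUILD_COMMAND     ninja -C out/x64.release
  BUILD_IN_SOURCE   1                                    # gn/ninja run inside the V8 checkout
  INSTALL_COMMAND   ""                                   # nothing to install; link from out/
)
add_dependencies(your_game v8)                           # your_game is hypothetical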
I got V8 building with CMake very easily using brew:
brew install v8
then add the following lines to your CMakeLists.txt:
file(GLOB_RECURSE V8_LIB              # plain GLOB is probably fine here
  "/usr/local/opt/v8/lib/*.dylib"     # the dylibs Homebrew installed
)
include_directories(
  YOUR_INCLUDES                       # your own include dirs
  /usr/local/opt/v8
  /usr/local/opt/v8/include
)
target_link_libraries(YOUR_PROJECT LINK_PUBLIC YOUR_LIBS ${V8_LIB})
Worked on Mojave 10.14.1

Cleanest way to depend on a make-based C library in my CMake C++ project

I'm working on a C++ project (using CMake) and need to depend on a C library like this one, which uses make: https://github.com/RoaringBitmap/CRoaring
What's the cleanest way for me to integrate that library into my project?
One option is importing that code into my source tree, creating a CMakeLists.txt for that external dependency, and using it as a CMake submodule. But I don't want to do that: the project might evolve, and I just want that code as a git submodule dependency, not actually committed into my repository.
What you could do is drive the make-based build from within CMake: cd into the CRoaring sources and call external commands from your CMakeLists.txt using the execute_process command:
execute_process
By setting an install prefix you can control where that library is installed, and CMake can then use it from there.
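A minimal sketch of that idea, assuming CRoaring's Makefile honors a PREFIX-style install variable (the variable name and paths here are assumptions, not verified against CRoaring):
set(CROARING_SRC    ${CMAKE_CURRENT_SOURCE_DIR}/third_party/CRoaring)  # e.g. a git submodule
set(CROARING_PREFIX ${CMAKE_BINARY_DIR}/croaring-install)
execute_process(                    # runs at configure time, on every configure
  COMMAND make install PREFIX=${CROARING_PREFIX}
  WORKING_DIRECTORY ${CROARING_SRC}
  RESULT_VARIABLE croaring_result
)
if(NOT croaring_result EQUAL 0)
  message(FATAL_ERROR "CRoaring build failed")
endif()
include_directories(${CROARING_PREFIX}/include)
link_directories(${CROARING_PREFIX}/lib)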
This also means that the make build runs whenever you trigger a configure, though I imagine you could control that a bit. If you want to avoid that, add_custom_command could help you here:
add_custom_command
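Sketched with the same hypothetical paths as above, this builds the library at build time, and only when its output is missing or out of date, rather than at every configure:
add_custom_command(
  OUTPUT  ${CROARING_PREFIX}/lib/libroaring.a       # assumed artifact name
  COMMAND make install PREFIX=${CROARING_PREFIX}
  WORKING_DIRECTORY ${CROARING_SRC}
  COMMENT "Building CRoaring via make"
)
add_custom_target(croaring DEPENDS ${CROARING_PREFIX}/lib/libroaring.a)
add_dependencies(your_target croaring)              # your_target is hypothetical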
For Commander Genius we have been using these commands when building the Windows version, together with icotool, so the executable gets an application icon embedded into the exe.
Another alternative would be using ExternalProject, as indicated in the similar post by Tsyvarev:
ExternalProject
I'm not sure whether that call is flexible enough for your needs, but it has a lot of options.
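A hedged sketch of that route, treating CRoaring as a plain make project as the question describes (the tag and install variable are illustrative):
include(ExternalProject)
ExternalProject_Add(croaring_ext
  GIT_REPOSITORY    https://github.com/RoaringBitmap/CRoaring.git
  GIT_TAG           master                  # pin a real release tag in practice
  BUILD_IN_SOURCE   1
  CONFIGURE_COMMAND ""                      # plain make project: nothing to configure
  BUILD_COMMAND     make
  INSTALL_COMMAND   make install PREFIX=<INSTALL_DIR>  # <INSTALL_DIR> expanded by ExternalProject
)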
So the cleanest way to use CMake really depends on what you need to do for your project.

Regarding the Necessity of ROS Packages

Up until this point, while working on my project, I've been building ROS scripts using rospy: establishing topics and nodes, subscribing to things, and generally doing all sorts of functions. I've been led to believe, though, that eventually my scripts will need to be made into 'packages', the notion being that they increase the modularity of programs (and that it's just the way things are done).
So far, my scripts are pretty compact, and I don't see why shipping a Python script that invokes rospy would require this extra level of wrapping (particularly given the obfuscatory nature of most of the ROS wiki's tutorials). I've not had to create catkin packages for any of my programs so far. Is there some overwhelming reason why I need to concern myself with ROS packages, catkin, and the like? Right now I just don't see the point, when everything works well and likely would on any machine the script is run from.
Thanks!
There are a lot of cases in which you definitely want to use catkin:
Your package contains C++ code. This has to be compiled, which is taken care of by catkin.
You have custom message types. Custom messages have to be generated and compiled. Again, this is done by catkin.
You have dependencies on other ROS packages (or vice versa). catkin resolves these dependencies and builds them if necessary.
You have Python modules which need to be installed so other packages can use them. Of course you can write a custom setup.py, but using catkin is the ROS way to do this.
When your scripts are in a catkin package, you can use the ROS command-line tools (rosrun, roscd, rosed, ...), which are very convenient.
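For scale, the catkin boilerplate for a pure-Python package is small: a package.xml plus a CMakeLists.txt roughly like the following sketch (package and script names are hypothetical):
cmake_minimum_required(VERSION 2.8.3)
project(my_scripts)                          # hypothetical package name
find_package(catkin REQUIRED COMPONENTS rospy)
catkin_package()
catkin_install_python(
  PROGRAMS scripts/my_node.py                # hypothetical node script
  DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)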
As long as you really only have simple Python scripts without dependencies on other non-core ROS packages, you are probably fine without bundling them in a package.
However, as soon as you are sharing your code with other ROS developers, I would package it nonetheless. While it may work as-is, it will be confusing for others if they don't get the package structure they are used to.

C++ Buildsystem with ability to compile dependencies beforehand

I'm in the middle of setting up a build environment for a C++ game project. Our main requirement is the ability to build not just our game code, but also its dependencies (Ogre3D, CEGUI, Boost, etc.). Furthermore, we would like to be able to build on Linux as well as on Windows, as our development team consists of members using different operating systems.
Ogre3D uses CMake as its build tool, which is why we have based our project on CMake so far. We can compile perfectly fine once all dependencies are set up manually on each team member's system, as CMake is able to find the libraries.
The question is whether there is a feasible way to get the dependencies set up automatically. As a Java developer I know of Maven, but what tools exist in the world of C++?
Update: Thanks for the nice answers and links. Over the next few days I will be trying out some of the tools to see what meets our requirements, starting with CMake. I've had my share of experience with autotools, and as much as I like the documentation (the autobook is a very good read), I fear autotools are not meant to be used natively on Windows.
Some of you suggested letting an IDE handle the dependency management. Our team consists of individuals using everything from pure Vim to full-blown Eclipse CDT or Visual Studio, which is where CMake gives us some flexibility with its ability to generate native project files.
The latest CMake 2.8 release includes the new ExternalProject module.
It lets you download or check out code, then configure and build it as part of your main build tree.
It should also allow you to set dependencies.
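In outline, usage looks like this (the URL and target names are placeholders, not a tested recipe):
include(ExternalProject)
ExternalProject_Add(ogre3d
  URL        http://example.com/ogre3d-src.tar.gz   # placeholder source archive
  CMAKE_ARGS -DCMAKE_INSTALL_PREFIX=<INSTALL_DIR>   # <INSTALL_DIR> expanded by the module
)
add_dependencies(mygame ogre3d)                     # mygame is hypothetical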
At my work (a medical image processing group) we use CMake to build all our own libraries and applications. We have an in-house tool to track all the dependencies between projects (defined in an XML database). Most of the third-party libraries (like Boost, Qt, VTK, ITK, etc.) are built once for each system we support (MSWin32, MSWin64, Linux32, etc.) and are committed as zip files to the version control system. CMake will then extract and configure the correct zip file depending on which system the developer is working on.
I have been using GNU Autotools (Autoconf, Automake, Libtool) for the past couple of months on several projects I have been involved in, and I think it works beautifully. Truth be told, it does take a little while to get used to the syntax, but I have used it successfully on a project that requires the distribution of Python scripts, C libraries, and a C++ application. I'll give you some links that helped me out when I first asked a similar question here.
The GNU Autotools Page provides the best documentation on the system as a whole but it is quite verbose.
Wikipedia has a page which explains how everything works: Autoconf configures the project based upon the platform you are about to compile on, Automake generates the Makefiles for your project, and Libtool handles libraries.
A Makefile.am example and a configure.ac example should help you get started.
Some more links:
http://www.lrde.epita.fr/~adl/autotools.html
http://www.developingprogrammers.com/index.php/2006/01/05/autotools-tutorial/
http://sources.redhat.com/autobook/
One thing that I am not certain about is any kind of Windows wrapper for GNU Autotools. I know you can use it inside Cygwin, but for actually distributing files and dependencies on Windows you are probably better off using a Windows MSI installer (or something that can package your project inside Visual Studio).
If you want to distribute dependencies you can set them up under a separate subdirectory, for example libzip, with a specific Makefile.am entry which builds that library. When you run make install, the library will be installed to the lib folder that the configure script determined it should use.
Good luck!
There are several interesting make replacements that automatically track implicit dependencies (from header files), are cross-platform and can cope with generated files (e.g. shader definitions). Two examples I used to work with are SCons and Jam/BJam.
I don't know of a cross-platform way of getting *make to track dependencies automatically.
The best you can do is use a script that scans source files (or has the C++ compiler do it) and finds the #includes (conditional compilation makes this tricky), then generates part of the makefile.
But you'd need to call this script whenever something might have changed.
The question is whether there is a feasible way to get the dependencies set up automatically.
What do you mean by "set up"?
As you said, CMake will compile everything once the dependencies are on the machine. Are you just looking for a way to package up the dependency sources? Once all the sources are there, CMake and a build tool (gcc, nmake, MSVS, etc.) are all you need.
Edit: As a side note, CMake has the file command, which can be used to download files if they are needed: file(DOWNLOAD url file [TIMEOUT timeout] [STATUS status] [LOG log])
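For instance (URL and paths illustrative):
file(DOWNLOAD
  http://example.com/deps/somelib-1.0.tar.gz   # placeholder URL
  ${CMAKE_BINARY_DIR}/somelib-1.0.tar.gz
  TIMEOUT 60
  STATUS dl_status                             # a list: numeric code; message
)
list(GET dl_status 0 dl_code)
if(NOT dl_code EQUAL 0)
  message(FATAL_ERROR "download failed: ${dl_status}")
endif()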
Edit 2: CPack is another tool by the CMake guys that can be used to package files for distribution on various platforms. It can create NSIS installers for Windows and .deb or .tgz files for *nix.
At my place of work (we build embedded systems for power protection) we used CMake to solve the problem. Our setup allows CMake to be run from various locations.
/
    CMakeLists.txt          "install precompiled dependencies and build project"
    project/
        CMakeLists.txt      "build the project, managing dependencies of subsystems"
        subsystem1/
            CMakeLists.txt  "build subsystem 1, assuming dependencies are already met"
        subsystem2/
            CMakeLists.txt  "build subsystem 2, assuming dependencies are already met"
The trick is to make sure that each CMakeLists.txt file can be called in isolation but that the top-level file can still build everything correctly. Technically we don't need the sub CMakeLists.txt files, but they make the developers happy. It would be an absolute pain if we all had to edit one monolithic build file at the root of the project.
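One common way to get that "callable in isolation" property (a sketch; the post doesn't show the actual files) is to have each subdirectory declare its own project only when it is the top level:
# subsystem1/CMakeLists.txt (hypothetical)
cmake_minimum_required(VERSION 3.5)
if(CMAKE_CURRENT_SOURCE_DIR STREQUAL CMAKE_SOURCE_DIR)
  project(subsystem1)    # invoked directly: behave as a standalone project
endif()
add_library(subsystem1 src/subsystem1.cpp)   # hypothetical source file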
I did not set up the system (I helped, but it is not my baby). The author said that the Boost CMake build system had some really good stuff in it that helped him get the whole thing building smoothly.
On many *nix systems, some kind of package manager or build system is used for this. The most common one for building from source is GNU Autotools, which I've heard is a source of extreme grief. However, with a few scripts and an online repository for your deps, you can set up something similar, like so:
In your project Makefile, create a target (optionally with subtargets) that covers your dependencies.
Within the target for each dependency, first check whether the dep's source is in the project (on *nix you can use touch for this, but you could be more thorough).
If the dep is not there, use curl etc. to download it.
In all cases, have the dep targets make a recursive make call (make; make install; make clean; etc.) into the Makefile (or other configure script/build file) of the dependency. If the dep is already built and installed, make will return fairly promptly.
There are going to be lots of corner cases that will cause this to break, though, depending on the installers for each dep (perhaps the installer is interactive?), but this approach should cover the general idea.
Right now I'm working on a tool that can automatically install all dependencies of a C/C++ app, with exact version requirements:
compiler
libs
tools (cmake, autotools)
Right now it works for my app (installing UnitTest++, Boost, Wt, sqlite, and cmake, all in the correct order).
The tool, named "C++ Version Manager" (inspired by the excellent Ruby Version Manager), is written in bash and hosted on GitHub: https://github.com/Offirmo/cvm
Any advice and suggestions are welcome.