ROS custom global planner - undefined symbol: _ZN18base_local_planner12CostmapModelC1ERKN10costmap_2d9Costmap2DE - c++

My setup: ROS Melodic on Ubuntu 18.04.
I want to simulate a TurtleBot3 moving with my own global planner and have been following this tutorial to get started: http://wiki.ros.org/navigation/Tutorials/Writing%20A%20Global%20Path%20Planner%20As%20Plugin%20in%20ROS#Running_the_Plugin_on_the_Turtlebot. The tutorial seems to be written for ROS Hydro, but as it was the best source of guidance I could find, I hoped it would work. I have been using this TurtleBot3 tutorial and its commands to get started: https://emanual.robotis.com/docs/en/platform/turtlebot3/nav_simulation/
There is no problem having the robot navigate with 2D Nav Goal in RViz using the built-in planning packages, but when I try to use the global path planner from my own package, launching the 'turtlebot3_navigation.launch' file fails with the following error:
[ INFO] [1661178206.728674676, 7.359000000]: global_costmap: Using plugin "static_layer"
[ INFO] [1661178206.742733426, 7.372000000]: Requesting the map...
[ INFO] [1661178206.945370142, 7.575000000]: Resizing costmap to 384 X 384 at 0.050000 m/pix
[ INFO] [1661178207.047423541, 7.676000000]: Received a 384 X 384 map at 0.050000 m/pix
[ INFO] [1661178207.053220010, 7.678000000]: global_costmap: Using plugin "obstacle_layer"
[ INFO] [1661178207.056864268, 7.685000000]: Subscribed to Topics: scan
[ INFO] [1661178207.079615282, 7.706000000]: global_costmap: Using plugin "inflation_layer"
/opt/ros/melodic/lib/move_base/move_base: symbol lookup error: /home/aut/catkin_ws/devel/lib//libmy_global_planner_lib.so: undefined symbol: _ZN18base_local_planner12CostmapModelC1ERKN10costmap_2d9Costmap2DE
[move_base-4] process has died [pid 625, exit code 127, cmd /opt/ros/melodic/lib/move_base/move_base cmd_vel:=/cmd_vel odom:=odom __name:=move_base __log:=/home/aut/.ros/log/f4c41f78-2225-11ed-befb-b8ca3a965376/move_base-4.log].
log file: /home/aut/.ros/log/f4c41f78-2225-11ed-befb-b8ca3a965376/move_base-4*.log
I ran c++filt on the symbol lookup error and got:
c++filt _ZN18base_local_planner12CostmapModelC1ERKN10costmap_2d9Costmap2DE
base_local_planner::CostmapModel::CostmapModel(costmap_2d::Costmap2D const&)
I've been using this code (https://github.com/ros-planning/navigation/blob/noetic-devel/carrot_planner/src/carrot_planner.cpp and https://github.com/ros-planning/navigation/blob/noetic-devel/carrot_planner/include/carrot_planner/carrot_planner.h), changing carrot_planner and CarrotPlanner to my_global_planner and MyGlobalPlanner, figuring that starting from code that already works would be a good way to rule out my own code as the source of any errors.
My CMakeLists.txt currently looks like this:
cmake_minimum_required(VERSION 3.0.2)
project(my_global_planner)
find_package(catkin REQUIRED
  actionlib
  roscpp
  rospy
  std_msgs
)
catkin_package(
#  INCLUDE_DIRS include
#  LIBRARIES my_global_planner
#  CATKIN_DEPENDS other_catkin_pkg
#  DEPENDS system_lib
)
include_directories(
  include
  ${catkin_INCLUDE_DIRS}
)
add_library(my_global_planner_lib src/my_global_planner/my_global_planner.cpp)
I've been experimenting with it, adding stuff like:
find_package(catkin REQUIRED
  COMPONENTS
    angles
    base_local_planner
    costmap_2d
    nav_core
    pluginlib
    roscpp
    tf2
    tf2_geometry_msgs
    tf2_ros
)
and similar entries in catkin_package() as well, but it doesn't seem to have worked, so I've reverted the file to how it was. I've also tried adding more than just:
<buildtool_depend>catkin</buildtool_depend>
<build_depend>nav_core</build_depend>
<exec_depend>nav_core</exec_depend>
to my package.xml, but no luck there either.
I hope I've made the problem clear and provided the needed information without dumping a massive sheet of code here. I feel that I've exhausted all my options at this point and any help or guidance would be greatly appreciated.

From what the error shows, your plugin was compiled against the correct header, but when the library is loaded at runtime the corresponding symbol cannot be found in any linked library.
There could be several reasons for that, e.g.:
You may have multiple incompatible versions of costmap_2d. To rule that out, delete all copies of costmap_2d from both your workspace and /opt/ros, then reinstall it from the correct branch: https://github.com/ros-planning/navigation/tree/melodic-devel
Secondly, in your CMakeLists.txt, include costmap_2d and the other navigation packages you use (such as base_local_planner and nav_core) inside find_package.
Also add them as dependencies to your package.xml (manifest).
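For reference, a minimal sketch of what the CMakeLists.txt could look like after those changes (untested; package and file names are taken from the question). Note in particular the target_link_libraries line: in many reports of this exact symbol-lookup error, the plugin library builds fine but is simply never linked against the catkin libraries, which leaves symbols like base_local_planner::CostmapModel's constructor unresolved until move_base tries to load the plugin.
cmake_minimum_required(VERSION 3.0.2)
project(my_global_planner)

find_package(catkin REQUIRED COMPONENTS
  base_local_planner
  costmap_2d
  nav_core
  pluginlib
  roscpp
)

catkin_package(
  INCLUDE_DIRS include
  LIBRARIES my_global_planner_lib
  CATKIN_DEPENDS base_local_planner costmap_2d nav_core pluginlib roscpp
)

include_directories(
  include
  ${catkin_INCLUDE_DIRS}
)

add_library(my_global_planner_lib src/my_global_planner/my_global_planner.cpp)
# Link against the catkin libraries so base_local_planner's symbols resolve at load time.
target_link_libraries(my_global_planner_lib ${catkin_LIBRARIES})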

Related

CMake error: Could not find the VTK package with the following required components: GUISupportQt, ViewsQt

I compiled VTK on my Red Hat 8.3 machine, and now when I want to compile an example in GUI/Qt/SimpleView with CMake, I get the following error message when configuring:
CMake Warning at CMakeLists.txt:4 (find_package):
Found package configuration file:
/home/user/Downloads/VTK-9.1.0/build/lib64/cmake/vtk-9.1/vtk-config.cmake
but it set VTK_FOUND to FALSE so package "VTK" is considered to be NOT FOUND.
Reason given by package:
Could not find the VTK package with the following required components:
GUISupportQt, ViewsQt.
Has anyone encountered this problem before?
Thank you for your help.
This looks like you did not set the VTK_MODULE_ENABLE_VTK_GuiSupportQt and VTK_MODULE_ENABLE_VTK_ViewsQt options to "YES" when running configure in CMake.
Note: the above-mentioned option names only apply to VTK >= 9; for VTK < 9, they are called Module_vtkGUISupportQt and Module_vtkViewsQt (and you might also need to enable Module_vtkGUISupportQtOpenGL and Module_vtkRenderingQt).
These options are not enabled by default, but they seem to be required by the example that you're trying to compile.
Don't worry, you shouldn't have to re-do everything now. To fix:
1) Open the CMake GUI.
2) Enter the folder where you built VTK in "Where to build the binaries".
3) If it's not checked, set the "Advanced" checkbox (the required options are not visible otherwise).
4) Set the VTK_MODULE_ENABLE_VTK_GuiSupportQt and VTK_MODULE_ENABLE_VTK_ViewsQt options to "YES".
5) Press "Configure" and wait for it to finish. During configuring you might get an error if CMake doesn't know how to find Qt; if so, enter the Qt5_DIR / Qt6_DIR and press "Configure" again.
6) Press "Generate" and wait for it to finish.
7) Start the VTK build again (how depends on the build tool you chose).
8) Try configuring the example again; you should no longer see the error message.
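If you prefer the command line to the CMake GUI, the same options can be set with -D flags; a sketch using the build directory from the warning above (you may still need to pass Qt5_DIR, as in step 5):
cd /home/user/Downloads/VTK-9.1.0/build
cmake -DVTK_MODULE_ENABLE_VTK_GuiSupportQt=YES -DVTK_MODULE_ENABLE_VTK_ViewsQt=YES ..
cmake --build .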

VTK charts break in Qt, "no override found for 'vtkContextDevice2D'"

I can't use any type of VTK 2D chart in Qt without getting the error:
"Generic Warning: In vtkContextDevice2D.cxx, line 31 Error: no override found for 'vtkContextDevice2D'."
There is limited discussion of this, with almost all suggestions being to upgrade Qt/VTK, but those are years old and I am on the newest versions.
This doesn't help either:
include "vtkAutoInit.h"
VTK_MODULE_INIT(vtkRenderingOpenGL2); // VTK was built with vtkRenderingOpenGL2
VTK_MODULE_INIT(vtkInteractionStyle);
Info: Win64 on a 64-bit machine, VTK 8.2.0, Qt 5.13.0, compiled/built in MSVC 2017 (Release x64) with CMake 3.15.0
(Everything else works fine, even 3D renderings with VTK)
Code:
view->SetInteractor(this->qvtkWidgetRight->GetInteractor());
this->qvtkWidgetRight->SetRenderWindow(view->GetRenderWindow());
(Screenshot: what the error produces.)
I had a similar problem when I ran this example: QtBarChart. I fixed the issue by linking with these VTK libraries:
find_package(VTK COMPONENTS
  vtkChartsCore
  vtkCommonCore
  vtkCommonDataModel
  vtkInteractionStyle
  vtkRenderingContext2D
  vtkRenderingContextOpenGL2
  vtkRenderingCore
  vtkRenderingFreeType
  vtkRenderingGL2PSOpenGL2
  vtkRenderingOpenGL2
  vtkViewsContext2D
  QUIET
)
It seems I missed some libraries.
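Note that listing the components only helps if the target is actually linked against them; for VTK 8.x (the version in the question) the usual pattern after find_package is the following, where my_chart_app and main.cpp are placeholder names:
include(${VTK_USE_FILE})  # VTK 8.x: sets include directories and compile definitions
add_executable(my_chart_app main.cpp)
target_link_libraries(my_chart_app ${VTK_LIBRARIES})  # links the components found above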

C++ Tensorflow API with TensorRT

My goal is to run a TensorRT-optimized TensorFlow graph in a C++ application. I am using TensorFlow 1.8 with TensorRT 4. Using the Python API I am able to optimize the graph and see a nice performance increase.
Trying to run the graph in C++ fails with the following error:
Not found: Op type not registered 'TRTEngineOp' in binary running on e15ff5301262. Make sure the Op and Kernel are registered in the binary running in this process.
Other, non-TensorRT graphs work. I had a similar error with the Python API, but solved it by importing tensorflow.contrib.tensorrt. From the error I am fairly certain the kernel and op are not registered, but I am unaware of how to do so in the application after TensorFlow has been built. On a side note, I cannot use Bazel but am required to use CMake. So far I link against libtensorflow_cc.so and libtensorflow_framework.so.
Can anyone help me here? Thanks!
Update:
Using the C or C++ API to load _trt_engine_op.so does not throw an error while loading, but running fails with:
Invalid argument: No OpKernel was registered to support Op 'TRTEngineOp' with these attrs. Registered devices: [CPU,GPU], Registered kernels:
<no registered kernels>
[[Node: my_trt_op3 = TRTEngineOp[InT=[DT_FLOAT, DT_FLOAT], OutT=[DT_FLOAT, DT_FLOAT, DT_FLOAT, DT_FLOAT], input_nodes=["tower_0/down_0/conv_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer", "tower_0/down_0/conv_skip/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer"], output_nodes=["tower_0/down_0/conv_skip/Relu", "tower_0/down_1/conv_skip/Relu", "tower_0/down_2/conv_skip/Relu", "tower_0/down_3/conv_skip/Relu"], serialized_engine="\220{I\000...00\000\000"](tower_0/down_0/conv_0/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer, tower_0/down_0/conv_skip/Conv2D-0-TransposeNHWCToNCHW-LayoutOptimizer)]]
Another way to solve the problem with the error "Not found: Op type not registered 'TRTEngineOp'" on TensorFlow 1.8:
1) In the file tensorflow/contrib/tensorrt/BUILD, add a new section with the following content:
cc_library(
    name = "trt_engine_op_kernel_cc",
    srcs = [
        "kernels/trt_calib_op.cc",
        "kernels/trt_engine_op.cc",
        "ops/trt_calib_op.cc",
        "ops/trt_engine_op.cc",
        "shape_fn/trt_shfn.cc",
    ],
    hdrs = [
        "kernels/trt_calib_op.h",
        "kernels/trt_engine_op.h",
        "shape_fn/trt_shfn.h",
    ],
    copts = tf_copts(),
    visibility = ["//visibility:public"],
    deps = [
        ":trt_logging",
        ":trt_plugins",
        ":trt_resources",
        "//tensorflow/core:gpu_headers_lib",
        "//tensorflow/core:lib_proto_parsing",
        "//tensorflow/core:stream_executor_headers_lib",
    ] + if_tensorrt([
        "@local_config_tensorrt//:nv_infer",
    ]) + tf_custom_op_library_additional_deps(),
    alwayslink = 1,  # buildozer: disable=alwayslink-with-hdrs
)
2) Add //tensorflow/contrib/tensorrt:trt_engine_op_kernel_cc as a dependency to the corresponding Bazel target you want to build.
PS: With this approach there is no need to load the library _trt_engine_op.so with TF_LoadLibrary.
Here are my findings (and some kind of solution) for this problem (TensorFlow 1.8.0, TensorRT 3.0.4):
I wanted to include the TensorRT support in a library which loads a graph from a given *.pb file.
Just adding //tensorflow/contrib/tensorrt:trt_engine_op_kernel to my Bazel BUILD file didn't do the trick for me. I still got a message indicating that the ops were not registered:
2018-05-21 12:22:07.286665: E tensorflow/core/framework/op_kernel.cc:1242] OpKernel ('op: "TRTCalibOp" device_type: "GPU"') for unknown op: TRTCalibOp
2018-05-21 12:22:07.286856: E tensorflow/core/framework/op_kernel.cc:1242] OpKernel ('op: "TRTEngineOp" device_type: "GPU"') for unknown op: TRTEngineOp
2018-05-21 12:22:07.296024: E tensorflow/examples/tf_inference_lib/cTfInference.cpp:56] Not found: Op type not registered 'TRTEngineOp' in binary running on ***.
Make sure the Op and Kernel are registered in the binary running in this process.
The solution was that I had to load the ops library (a tf_custom_op_library target) from my C++ code using the C API:
#include "tensorflow/c/c_api.h"
...
TF_Status status = TF_NewStatus();
TF_LoadLibrary("_trt_engine_op.so", status);
The shared object _trt_engine_op.so is created by the Bazel target //tensorflow/contrib/tensorrt:python/ops/_trt_engine_op.so:
bazel build --config=opt --config=cuda --config=monolithic \
//tensorflow/contrib/tensorrt:python/ops/_trt_engine_op.so
Now I only have to make sure that _trt_engine_op.so is available whenever it is needed, e.g. via LD_LIBRARY_PATH.
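For example (a one-line sketch; the directory is a placeholder for wherever the built .so was copied):
export LD_LIBRARY_PATH=/path/to/dir/with/_trt_engine_op:$LD_LIBRARY_PATH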
If anybody has an idea how to do this in a more elegant way (why do we have two artefacts which have to be built? Can't we just have one?), I'm happy for every suggestion.
TL;DR:
Add //tensorflow/contrib/tensorrt:trt_engine_op_kernel as a dependency to the corresponding Bazel target you want to build.
Load the ops library _trt_engine_op.so in your code using the C API.
For TensorFlow r1.8, the additions shown below in two BUILD files, together with building libtensorflow_cc.so with the monolithic option, worked for me.
diff --git a/tensorflow/BUILD b/tensorflow/BUILD
index cfafffd..fb8eb31 100644
--- a/tensorflow/BUILD
+++ b/tensorflow/BUILD
@@ -525,6 +525,8 @@ tf_cc_shared_object(
"//tensorflow/cc:scope",
"//tensorflow/cc/profiler",
"//tensorflow/core:tensorflow",
+ "//tensorflow/contrib/tensorrt:trt_conversion",
+ "//tensorflow/contrib/tensorrt:trt_engine_op_kernel",
],
)
diff --git a/tensorflow/contrib/tensorrt/BUILD b/tensorflow/contrib/tensorrt/BUILD
index fd3582e..a6566b9 100644
--- a/tensorflow/contrib/tensorrt/BUILD
+++ b/tensorflow/contrib/tensorrt/BUILD
@@ -76,6 +76,8 @@ cc_library(
srcs = [
"kernels/trt_calib_op.cc",
"kernels/trt_engine_op.cc",
+ "ops/trt_calib_op.cc",
+ "ops/trt_engine_op.cc",
],
hdrs = [
"kernels/trt_calib_op.h",
@@ -86,6 +88,7 @@ cc_library(
deps = [
":trt_logging",
":trt_resources",
+ ":trt_shape_function",
"//tensorflow/core:gpu_headers_lib",
"//tensorflow/core:lib_proto_parsing",
"//tensorflow/core:stream_executor_headers_lib",
As you mentioned, it should work when you add //tensorflow/contrib/tensorrt:trt_engine_op_kernel to the dependency list. Currently the TensorFlow-TensorRT integration is still in progress and may work well only for the Python API; for C++ you'll need to call ConvertGraphDefToTensorRT() from tensorflow/contrib/tensorrt/convert/convert_graph.h for the conversion.
Let me know if you have any questions.
Solution: add the import
from tensorflow.python.compiler.tensorrt import trt_convert as trt
Discussion link: https://github.com/tensorflow/tensorflow/issues/26525
Here is my solution for TensorFlow 1.14.
In your BUILD file, e.g. tensorflow/examples/your_workspace/BUILD, in the tf_cc_binary rule:
srcs = [..., "//tensorflow/compiler/tf2tensorrt:ops/trt_engine_op.cc"]
deps = [..., "//tensorflow/compiler/tf2tensorrt:trt_op_kernels"]

Conflict Protobuf version when using Opencv and Tensorflow c++

I am currently trying to use TensorFlow's shared library in a non-Bazel project, so I created a .so file from TensorFlow using Bazel.
But when I launch a C++ program that uses both OpenCV and TensorFlow, I get the following error:
[libprotobuf FATAL external/protobuf/src/google/protobuf/stubs/common.cc:78] This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
what(): This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
Abandon (core dumped)
Can you help me?
Thank you
You should rebuild TensorFlow with a linker script to avoid making third-party symbols global in the shared library that Bazel creates. This is how the Android Java/JNI library for TensorFlow is able to coexist with the pre-installed protobuf library on the device (look at the build rules in tensorflow/contrib/android for a working example).
Here's a BUILD file that I adapted from the Android library to do this:
package(default_visibility = ["//visibility:public"])

licenses(["notice"])  # Apache 2.0

exports_files(["LICENSE"])

load(
    "//tensorflow:tensorflow.bzl",
    "tf_copts",
    "if_android",
)

exports_files([
    "version_script.lds",
])

# Build the native .so.
# bazel build //tensorflow/contrib/android_ndk:libtensorflow_cc_inference.so \
#   --crosstool_top=//external:android/crosstool \
#   --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
#   --cpu=armeabi-v7a

LINKER_SCRIPT = "//tensorflow/contrib/android:version_script.lds"

cc_binary(
    name = "libtensorflow_cc_inference.so",
    srcs = [],
    copts = tf_copts() + [
        "-ffunction-sections",
        "-fdata-sections",
    ],
    linkopts = if_android([
        "-landroid",
        "-latomic",
        "-ldl",
        "-llog",
        "-lm",
        "-z defs",
        "-s",
        "-Wl,--gc-sections",
        "-Wl,--version-script",  # This line must be directly followed by LINKER_SCRIPT.
        LINKER_SCRIPT,
    ]),
    linkshared = 1,
    linkstatic = 1,
    tags = [
        "manual",
        "notap",
    ],
    deps = [
        "//tensorflow/core:android_tensorflow_lib",
        LINKER_SCRIPT,
    ],
)
And the contents of version_script.lds:
{
  global:
    extern "C++" {
      tensorflow::*;
    };
  local:
    *;
};
This will make everything in the tensorflow namespace global and available through the library, while hiding the rest and preventing it from conflicting with protobuf.
(I wasted a ton of time on this, so I hope it helps!)
The error indicates that the program was compiled using headers (.h files) from protobuf 2.6.1. These headers are typically found in /usr/include/google/protobuf or /usr/local/include/google/protobuf, though they could be elsewhere depending on your OS and how the program is built. You need to update these headers to version 3.1.0 and recompile the program.
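A quick sanity check of which versions are actually installed (a sketch; the header path may differ on your system, and GOOGLE_PROTOBUF_VERSION is the version macro defined in protobuf's stubs/common.h):
protoc --version
grep GOOGLE_PROTOBUF_VERSION /usr/include/google/protobuf/stubs/common.h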
This is indeed a pretty serious problem! I get the error below, similar to yours:
$./ceres_single_test
[libprotobuf FATAL google/protobuf/stubs/common.cc:78] This program was compiled against version 2.6.1 of the Protocol Buffer runtime library, which is not compatible with the installed version (3.1.0). Contact the program author for an update. If you compiled the program yourself, make sure that your headers are from the same version of Protocol Buffers as your link-time library. (Version verification failed in "/build/mir-pkdHET/mir-0.21.0+16.04.20160330/obj-x86_64-linux-gnu/src/protobuf/mir_protobuf.pb.cc".)
terminate called after throwing an instance of 'google::protobuf::FatalException'
Aborted
My workaround:
cd /usr/lib/x86_64-linux-gnu
sudo mkdir BACKUP
sudo mv libmirprotobuf.so* ./BACKUP/
Now, the executable under test works, cool. What is not cool, however, is that things like gedit no longer work without running from a shell that has the BACKUP path added to LD_LIBRARY_PATH :-(
Hopefully there's a better fix out there?
The error complains that the Protocol Buffers runtime library is not compatible with the installed version. The error comes from the GTK3 library: GTK3 uses Protocol Buffers 2.6.1, so if your OpenCV was built with GTK3 support, you get this error. The easiest way to fix it is to use Qt instead of GTK3.
If you use the CMake GUI to build OpenCV, just select Qt support instead of GTK3. You can install Qt using the following command:
sudo apt install qtbase5-dev
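If you configure OpenCV from the command line instead of the CMake GUI, the equivalent switches are OpenCV's WITH_QT and WITH_GTK options (a sketch, run from an OpenCV build directory):
cmake -DWITH_QT=ON -DWITH_GTK=OFF ..
make -j"$(nproc)"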
Rebuild libprotobuf with -Dprotobuf_BUILD_SHARED_LIBS=ON,
then make install to overwrite the older version.
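A sketch of that rebuild, assuming a protobuf source checkout (in protobuf releases of that era the CMake files live in the cmake/ subdirectory; protobuf_BUILD_TESTS=OFF just skips the test-only dependencies):
cd protobuf/cmake
cmake -Dprotobuf_BUILD_SHARED_LIBS=ON -Dprotobuf_BUILD_TESTS=OFF .
make -j"$(nproc)"
sudo make install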

Cannot make CGAL examples in Cygwin

I am unable to build some of the CGAL examples under Cygwin. All of the failing examples share similar error messages.
Any guidance would be most appreciated.
Following are the steps that I followed and a sample error from a "make".
Cygwin (x64) installed under Windows 7 to d:\cygwin64.
CGAL source downloaded from https://github.com/CGAL/cgal/releases/download/releases%2FCGAL-4.9/CGAL-4.9.zip
and unzipped to D:\cygwin64\usr\CGAL-4.9
All libraries supposedly needed for CGAL were installed via the Cygwin x64 setup.
Initial cmake:
cd /usr/CGAL-4.9
cmake -DCMAKE_LEGACY_CYGWIN_WIN32=1 -DWITH_CGAL_Qt5=OFF -DWITH_examples=ON .
Some examples could not be configured; these included the mesh and Scale_space_reconstruction_3 examples.
cd /usr/CGAL-4.9
make
make examples
The first few examples were created successfully. For example,
PATH=/usr/local/bin:/usr/bin:/bin:/lib:/usr/CGAL-4.9/bin:/usr/CGAL-4.9/lib
cd /usr/CGAL-4.9/examples/AABB_tree
./AABB_triangle_3_example.exe
3 intersections(s) with ray query
closest point is: 0.333333 0.333333 0.333333
squared distance: 8.33333
A later example demonstrates a nagging problem that shows up in a number of the examples:
cd /usr/CGAL-4.9/examples/Snap_rounding_2/
cmake -DCGAL_DIR=/usr/CGAL-4.9 .
make
Scanning dependencies of target snap_rounding
[ 16%] Building CXX object CMakeFiles/snap_rounding.dir/snap_rounding.cpp.o
In file included from /usr/CGAL-4.9/include/CGAL/CORE/CoreDefs.h:41:0,
from /usr/CGAL-4.9/include/CGAL/CORE/BigFloatRep.h:40,
from /usr/CGAL-4.9/include/CGAL/CORE/BigFloat.h:38,
from /usr/CGAL-4.9/include/CGAL/CORE_BigFloat.h:27,
from /usr/CGAL-4.9/include/CGAL/CORE_arithmetic_kernel.h:39,
from /usr/CGAL-4.9/include/CGAL/Arithmetic_kernel.h:51,
from /usr/CGAL-4.9/include/CGAL/Arr_rational_function_traits_2.h:28,
from /usr/CGAL-4.9/include/CGAL/Sweep_line_2_algorithms.h:37,
from /usr/CGAL-4.9/include/CGAL/Snap_rounding_2.h:28,
from /usr/CGAL-4.9/examples/Snap_rounding_2/snap_rounding.cpp:
/usr/CGAL-4.9/include/CGAL/CORE/extLong.h:171:8: warning: ‘CORE::extLong::extLong(int)’ redeclared without dllimport attribute after being referenced with dll linkage
inline extLong::extLong(int i) : val(i), flag(0) {
^
/usr/CGAL-4.9/include/CGAL/CORE/extLong.h:292:13: warning: ‘bool CORE::extLong::isNaN() const’ redeclared without dllimport attribute after being referenced with dll linkage
inline bool extLong::isNaN() const {
There are a number of similar errors that have been omitted here.
Thanks!!!
As the errors that you are not reporting are likely due to a wrong import directive, you can try the following:
In include/CGAL/export/helpers.h, replace
# if defined(_WIN32) || defined(__CYGWIN__)
with
# if defined(_WIN32)
and then build with
cmake -DWITH_CGAL_Qt5=OFF -DWITH_examples=ON
From what I see, the build now works much better (20% done in one hour and still going).