[summary]
I cannot compile a Vulkan program that uses "vkGetAccelerationStructureBuildSizesKHR" or "vkCreateAccelerationStructureKHR"; it fails with errors like "undefined reference to `vkGetAccelerationStructureBuildSizesKHR'".
[environment]
OS: Ubuntu 20.04
command: clang++ with option -lvulkan
Vulkan version: 1.2.170
[what tried]
I guess that it needs an additional library.
I added "-lvulkan_radeon", but it did not make any difference.
For reference, if my program does not use any of the new ray tracing features, it builds normally.
I read that ray tracing features are officially supported since version 1.2.167, so I expected to be able to build without any additional work, but I can't.
Is there any additional library to be linked?
As with all functions that are provided by extensions and are not part of the core, you have to manually define and fetch the function pointers in your application before you can call them:
PFN_vkGetAccelerationStructureBuildSizesKHR pfnGetAccelerationStructureBuildSizesKHR;
pfnGetAccelerationStructureBuildSizesKHR =
    reinterpret_cast<PFN_vkGetAccelerationStructureBuildSizesKHR>(
        vkGetDeviceProcAddr(device, "vkGetAccelerationStructureBuildSizesKHR"));
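A minimal sketch of how that looks in practice, assuming device was created with the VK_KHR_acceleration_structure extension (and its dependencies) enabled; the helper name querySizes and the build-info parameters are placeholders for illustration only:

#include <vulkan/vulkan.h>

VkAccelerationStructureBuildSizesInfoKHR querySizes(
    VkDevice device,
    const VkAccelerationStructureBuildGeometryInfoKHR& buildInfo,
    uint32_t primitiveCount)
{
    // Extension entry points are not exported by the loader, so resolve them per device.
    auto pfnGetSizes = reinterpret_cast<PFN_vkGetAccelerationStructureBuildSizesKHR>(
        vkGetDeviceProcAddr(device, "vkGetAccelerationStructureBuildSizesKHR"));
    auto pfnCreateAS = reinterpret_cast<PFN_vkCreateAccelerationStructureKHR>(
        vkGetDeviceProcAddr(device, "vkCreateAccelerationStructureKHR"));
    (void)pfnCreateAS; // resolved the same way; used later when actually creating the acceleration structure

    VkAccelerationStructureBuildSizesInfoKHR sizeInfo{};
    sizeInfo.sType = VK_STRUCTURE_TYPE_ACCELERATION_STRUCTURE_BUILD_SIZES_INFO_KHR;
    pfnGetSizes(device,
                VK_ACCELERATION_STRUCTURE_BUILD_TYPE_DEVICE_KHR,
                &buildInfo,
                &primitiveCount,
                &sizeInfo);
    return sizeInfo;
}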
Related
I am using TensorFlow's C++ API to load and run a saved model. When I build my C++ code with GCC using the optimization flag -O2, I get the following error:
undefined reference to `tensorflow::TensorShapeBase<tensorflow::TensorShape>::TensorShapeBase(absl::Span<long const>)'
which I believe is due to the following tensor creation:
Tensor my_tensor(DT_DOUBLE, TensorShape({2, 4}));
However, if I build my C++ code without the compiler flag -O2, the code builds and executes perfectly. I am using the TensorFlow 2.5 library, which was built from source.
Any suggestions on how to fix the error?
The issue is related to a conflict between C++14 and C++17 when compiling Tensorflow with ABSL.
See this link:
The tensorflow_cc library uses its own copy of Absl, and uses absl::string_view in function signatures. absl::string_view is mapped to std::string_view if C++17 is detected, and to Absl's own implementation if C++17 is not. That leads to linker errors like this when using the Arch tensorflow_cc library from C++17 code.
(From the issue "Using the library from C++17 after building libraries in C++11 mode (Arch)"; the suggested workaround is to comment out the line shown below.)
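To picture the mismatch, here is a hypothetical declaration (invented names, not actual TensorFlow or Abseil code) showing why the mangled symbols differ:

#include <string_view>

// Hypothetical illustration only: the same function declared against the
// two possible meanings of absl::string_view.
namespace absl_when_cxx14 { class string_view { }; }                 // Abseil's own type
namespace absl_when_cxx17 { using string_view = std::string_view; }  // alias to the std type

void Consume(absl_when_cxx14::string_view sv);  // symbol exported by the prebuilt library
void Consume(absl_when_cxx17::string_view sv);  // symbol your C++17 code asks the linker for
// The two mangled names differ, so the linker reports "undefined reference".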
One workaround is to comment out this line in tensorflow/include/absl/base/config.h:
#define ABSL_HAVE_STD_STRING_VIEW 1
This will make the library use the custom absl::string_view implementation even when TensorFlow is called from C++17 code.
I am trying to use the latest NVIDIA Video SDK, specifically its NVDEC (hardware video decoder) library. I had been using the previous version for a while; it loaded function pointers at runtime from libnvcuvid.so (roughly as sketched below), which on my machine is located in:
/usr/lib/nvidia-396/
It contains the following related items:
/usr/lib/nvidia-396/libnvcuvid.so
/usr/lib/nvidia-396/libnvcuvid.so.1
/usr/lib/nvidia-396/libnvcuvid.so.396.18
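(Roughly, that runtime loading looked like this; a simplified sketch with a made-up function-pointer signature, not the SDK's actual wrapper code:)

#include <dlfcn.h>
#include <cstdio>

int main() {
    // Open the driver-provided decoder library at runtime instead of linking against it.
    void* lib = dlopen("libnvcuvid.so.1", RTLD_LAZY);
    if (!lib) {
        std::fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }
    // Resolve an NVDEC entry point by name (signature simplified here).
    using cuvidCtxLockCreate_t = int (*)(void**, void*);
    auto pfnCtxLockCreate =
        reinterpret_cast<cuvidCtxLockCreate_t>(dlsym(lib, "cuvidCtxLockCreate"));
    if (!pfnCtxLockCreate) {
        std::fprintf(stderr, "dlsym failed: %s\n", dlerror());
    }
    dlclose(lib);
    return 0;
}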
Now, in the latest SDK 8.1, there is no runtime loading of library function pointers; all the API methods are declared extern and static linking is used. On Windows they provide nvcuvid.lib, but on Linux there are only the above-mentioned .so files. My IDE targets that directory correctly (triple checked; if I remove the path, the linker complains that it can't find the lib), and I put libnvcuvid.so on the linker line exactly the same way as I put cuda.so and cudart.so in the same place for static linking against the CUDA API. But I am still getting
"undefined reference"
for all cuvid functions declared in the latest header. As you can see, my driver version is also up to date (8.1 requires at least 390).
Why doesn't it link?
UPDATE (linker):
/usr/bin/g++ -o bin/xxxxx_xxx_d #"xxxxx_xxx.txt" -L. -LDebug
-L/usr/lib/nvidia-396 -L/usr/local/cuda-9.1/lib64 -lcuda -lcudart -lnvcuvid .....
I am trying to work through this tutorial on OpenCL, on a Windows 10 dev system that has integrated Intel HD graphics. I have installed Intel's OpenCL SDK and added the include directory from the SDK install to Properties > C/C++ General > Paths and Symbols > Includes. I am using MinGW as my compiler for Eclipse.
In response to a number of linker errors that popped up when I first tried to compile the project, I set up the linker in Eclipse to point to opencl.lib as outlined in this answer.
That took care of the linker errors, but there's an offending line from the tutorial which makes it impossible for the tutorial boilerplate to compile:
cl_int result = program.build({ device }, "");
Set up as I am, this gives me the following warning and error:
..\src\main.cpp:93:32: warning: extended initializer lists only available with -std=c++11 or -std=gnu++11
..\src\main.cpp:93:45: error: no matching function for call to 'cl::Program::build(<brace-enclosed initializer list>, const char [1])'
If I'm reading this correctly (I haven't used C++ since before C++11 was a thing), the compiler is first warning me that it doesn't properly recognize what {device} is supposed to be (a vector of devices which has only one entry in it, initialized earlier in the code). Then, since it doesn't recognize {device}, the compiler errors out because it can't find a signature for cl::Program::build with arguments that match whatever-the-heck it's interpreting {device} to be.
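(If I understand it right, without brace initialization that call would be the equivalent of something like this:)

std::vector<cl::Device> devices;
devices.push_back(device);                    // the single device initialized earlier
cl_int result = program.build(devices, "");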
Taking the warning's advice, I followed the instructions given in this answer and added the -std=c++11 option for the compiler. However, when I do that, the linker errors come back: trying to compile with these options results in about thirty errors which all basically say they can't find any reference for the CL calls in the library files. For example:
C:/Program Files (x86)/Intel/OpenCL SDK/6.3/include/CL/cl.hpp:1753: undefined reference to `clGetPlatformInfo#20'
How do I make the compiler behave? I think I remember reading somewhere that the order of compiler options on the command line matters with regard to linking; could that be messing up my compile since I added the -std=c++11 option?
I (sort of) figured out why the compiler was unhappy: the library I was linking was the x64 library for OpenCL installed in [base Intel dir]\OpenCL SDK\6.3\lib\x64, but (I think?) my compiler is not set up to create x64 apps. When I link against the .lib file in OpenCL SDK\6.3\lib\x86 instead, the linker errors disappear.
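Roughly, the working invocation looks like this (a sketch, not my exact Eclipse-generated command; note that the library comes after the source files and points at the x86 directory):

g++ -std=c++11 -I"C:/Program Files (x86)/Intel/OpenCL SDK/6.3/include" main.cpp -o main.exe "C:/Program Files (x86)/Intel/OpenCL SDK/6.3/lib/x86/OpenCL.lib"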
I have some code that is heavily dependent on Eigen. I would like to optimize it with CUDA, but when I am compiling I get:
[tcai4#golubh4 Try1]$ nvcc conv_parallel.cu -I /home/tcai4/project-cse/Try1 -lfftw3 -o conv.o
In file included from Eigen/Dense:1,
from Eigen/Eigen:1,
from functions.h:8,
from conv_parallel.cu:10:
Eigen/Core:44:34: error: math_functions.hpp: No such file or directory
I think math_functions.hpp is a file from CUDA. Can someone help me figure out why nvcc cannot find it?
edit: I am using CUDA 5.5 and Eigen 3.3; apart from linking the Eigen and fftw3 libraries, I did not use any other flags (as you can see from the command above).
I encountered this issue while building TensorFlow 1.4.1 with Cuda 9.1, and strangely math_functions.hpp existed only in include/crt.
Creating a symlink from cuda/include/math_functions.hpp to cuda/include/crt/math_functions.hpp fixed the issue:
ln -s /usr/local/cuda/include/crt/math_functions.hpp /usr/local/cuda/include/math_functions.hpp
The reason nvcc cannot find the file in question is that it is part of the CUDA Math library, which was introduced in CUDA 6. Your almost four-year-old version of CUDA predates the release of the Math library, so it simply doesn't contain that file.
You should therefore assume that what you are trying to do cannot work without first updating to a newer version of the CUDA toolkit.
Creating a symlink sometimes causes other complications.
You can try replacing
// We need math_functions.hpp to ensure that that EIGEN_USING_STD_MATH macro
// works properly on the device side
#include <math_functions.hpp>
with
// We need cuda_runtime.h to ensure that that EIGEN_USING_STD_MATH macro
// works properly on the device side
#include <cuda_runtime.h>
in
/usr/include/eigen3/Eigen/Core,
which works for me.
The reason why "math_functions.hpp" cannot be found is because "math_functions.hpp" has been renamed to "math_functions.h". So you just need to go to
/usr/include/eigen3/Eigen/Core
and change "math_functions.hpp" to "math_functions.h"
I grepped inside GLEW while trying to solve my other question, concerning missing __glewX* symbols for Mac, and found that they are guarded by GLEW_APPLE_GLX.
When I attempt to build GLEW from source with that flag defined, I get undefined symbols (stuff like _glXGetClientString). Linking against X11 (-lX11) doesn't help.
Question: assuming defining GLEW_APPLE_GLX does indeed make sense, how can I fix the build?
When building an application that uses the X Server (XQuartz) instead of using CGL, you also need to add -lGL.
Ordinarily when building GL software on OS X you use OpenGL.framework (-framework OpenGL) and that gets you OpenGL and CGL/AGL functions but leaves out GLX.
You should also ditch any includes to things like <OpenGL/gl.h> and use <GL/gl.h> instead, as that will point to /usr/X11R6/include/GL/... instead of the OpenGL framework headers.
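For example, building one of the GLEW utilities against XQuartz might look roughly like this (a sketch; the X11 paths are assumptions that depend on where XQuartz is installed, often /opt/X11 on newer systems):

cc -DGLEW_APPLE_GLX -Iinclude -I/usr/X11R6/include -o glewinfo src/glew.c src/glewinfo.c -L/usr/X11R6/lib -lGL -lX11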