Is Boost's Small Vector Compatible with NVCC 8? [duplicate]

I'm trying to integrate CUDA into an existing application which uses boost::spirit.
Isolating the problem, I've found out that the following code does not compile with nvcc:
main.cu:
#include <boost/spirit/include/qi.hpp>
int main(){
exit(0);
}
Compiling with nvcc -o cudaTest main.cu I get a lot of errors that can be seen here.
But if I change the filename to main.cpp, and compile again using nvcc, it works. What is happening here and how can I fix it?

nvcc sometimes has trouble compiling complex template code such as is found in Boost, even if the code is only used in __host__ functions.
When a file's extension is .cpp, nvcc performs no parsing itself and instead forwards the code to the host compiler, which is why you observe different behavior depending on the file extension.
If possible, try to quarantine code which depends on Boost into .cpp files which needn't be parsed by nvcc.
I'd also make sure to try the nvcc which ships with the recent CUDA 4.1. nvcc's template support improves with each release.
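For illustration, here is a minimal sketch of that quarantine (all file and function names below are hypothetical): the Boost.Spirit code is confined to a .cpp translation unit behind a plain header, so nvcc only ever parses the Boost-free header.
parser.hpp:
#pragma once
#include <string>
// Plain interface: no Boost headers leak into files parsed by nvcc.
bool parseInput(const std::string& text, int& valueOut);
parser.cpp:
#include "parser.hpp"
#include <boost/spirit/include/qi.hpp>
// Compiled only by the host compiler, e.g. g++ -c parser.cpp
bool parseInput(const std::string& text, int& valueOut){
    namespace qi = boost::spirit::qi;
    auto first = text.begin();
    bool ok = qi::parse(first, text.end(), qi::int_, valueOut);
    return ok && first == text.end();
}
main.cu:
#include "parser.hpp"
#include <cstdio>
// Compiled by nvcc; it sees only the Boost-free header.
int main(){
    int v = 0;
    if (parseInput("42", v)) std::printf("parsed %d\n", v);
    return 0;
}
Built roughly as: g++ -c parser.cpp, then nvcc main.cu parser.o -o cudaTest.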

Related

math_functions.hpp not found when using CUDA with Eigen

I have some code that is heavily dependent on Eigen. I would like to optimize it with CUDA, but when I am compiling I get:
[tcai4#golubh4 Try1]$ nvcc conv_parallel.cu -I /home/tcai4/project-cse/Try1 -lfftw3 -o conv.o
In file included from Eigen/Dense:1,
from Eigen/Eigen:1,
from functions.h:8,
from conv_parallel.cu:10:
Eigen/Core:44:34: error: math_functions.hpp: No such file or directory
I think math_functions.hpp is a file from CUDA. Can someone help me figure out why nvcc cannot find it?
Edit: I am using CUDA 5.5 and Eigen 3.3. Apart from linking the Eigen and fftw3 libraries, I did not use any other flags (as you can see from my code).
I encountered this issue while building TensorFlow 1.4.1 with Cuda 9.1, and strangely math_functions.hpp existed only in include/crt.
Creating a symlink from cuda/include/math_functions.hpp to cuda/include/crt/math_functions.hpp fixed the issue:
ln -s /usr/local/cuda/include/crt/math_functions.hpp /usr/local/cuda/include/math_functions.hpp
The reason nvcc cannot find the file in question is that it is part of the CUDA Math library, which was introduced in CUDA 6. Your almost four-year-old version of CUDA predates the release of the Math library and simply doesn't contain that file.
You should, therefore, assume that what you are trying to do cannot work without first updating to a newer version of the CUDA toolkit.
Creating a symlink can sometimes cause other complications.
You can try replacing
// We need math_functions.hpp to ensure that that EIGEN_USING_STD_MATH macro
// works properly on the device side
#include <math_functions.hpp>
with
// We need cuda_runtime.h to ensure that that EIGEN_USING_STD_MATH macro
// works properly on the device side
#include <cuda_runtime.h>
in
/usr/include/eigen3/Eigen/Core,
which works for me.
The reason "math_functions.hpp" cannot be found is that it has been renamed to "math_functions.h". So you just need to go to
/usr/include/eigen3/Eigen/Core
and change "math_functions.hpp" to "math_functions.h".

Clang fails to find iostream. What should I do?

Earlier, I posed a related question.
I have the following program, extracted from a large project, on my Mac OS machine:
#include <iostream>
int main(){
std::cout<<"hello"<<std::endl;
return 0;
}
Compiling it with Clang fails with the following error:
$ clang test.cpp
test.cpp:1:10: fatal error: 'iostream' file not found
#include <iostream>
^
1 error generated.
For information,
A) I have already installed the Xcode command line tools, using xcode-select --install. But it seems iostream is not in clang's default search path.
B) Using g++ instead of clang compiles the program. But in my situation, I am not allowed to use a compiler other than clang, or to change the source program.
C) I can see workaround techniques, e.g., tweaking the search path in .bashrc or adding a symbolic link. But I am reluctant to use them, because it seems that I have an installation problem with my Clang, and tweaking the path would only paper over one of these path issues.
clang and clang++ do different things. If you want to compile C++ code, you need to use clang++.
Alternatively, you can invoke the C++ compiler directly by providing the language name explicitly:
clang -x c++ test.cpp
Note that when linking you may also need to add the C++ standard library yourself (e.g. -lc++), since the clang driver, unlike clang++, does not link it automatically.

Cuda with Boost

I am currently writing a CUDA application and want to use the boost::program_options library to get the required parameters and user input.
The trouble I am having is that NVCC cannot compile the Boost file any.hpp, giving errors such as:
1>C:\boost_1_47_0\boost/any.hpp(68): error C3857: 'boost::any': multiple template parameter lists are not allowed
I searched online and found it is because NVCC cannot handle certain constructs used in the Boost code, but that NVCC should delegate compilation of host code to the C++ compiler. In my case I am using Visual Studio 2010, so host code should be passed to cl.
Since NVCC seemed to be getting confused, I even wrote a simple wrapper around the Boost stuff and stuck it in a separate .cpp (instead of .cu) file, but I am still getting build errors. Weirdly, the error is thrown when compiling my main.cu instead of wrapper.cpp, but it is still caused by Boost, even though main.cu doesn't include any Boost code.
Does anybody know of a solution or even workaround for this problem?
Dan, I have written CUDA code using boost::program_options in the past, and looked back at it to see how I dealt with your problem. There are certainly some quirks in the nvcc compile chain. I believe you can generally deal with this if you've decomposed your classes appropriately, and realize that while NVCC often can't handle C++ code/headers, your C++ compiler can handle the CUDA-related headers just fine.
I essentially have main.cpp which includes my program_options header, and the parsing stuff dictating what to do with the options. The program_options header then includes the CUDA-related headers/class prototypes. The important part (as I think you've seen) is to just not have the CUDA code and accompanying headers include that options header. Pass your objects to an options function and have that fill in relevant info. Something like an ugly version of a Strategy Pattern. Concatenated:
main.cpp:
#include "myprogramoptionsparser.hpp"
(...)
CudaObject* MyCudaObj = new CudaObject;
GetCommandLineOptions(argc,argv,MyCudaObj);
myprogramoptionsparser.hpp:
#include <boost/program_options.hpp>
#include "CudaObject.hpp"
void GetCommandLineOptions(int argc, char **argv, CudaObject* obj){
    // (do stuff to cuda object)
}
CudaObject.hpp:
(do not include myprogramoptionsparser.hpp)
CudaObject.cu:
#include "CudaObject.hpp"
It can be a bit annoying, but the nvcc compiler seems to be getting better at handling more C++ code. This has worked fine for me in VC2008/2010, and linux/g++.
You have to split the code into two parts:
the kernel has to be compiled by nvcc;
the program that invokes the kernel has to be compiled by g++.
Then link the two objects together and everything should work.
nvcc is required only to compile the CUDA kernel code.
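A minimal sketch of that split (all names here are illustrative, not taken from the question): the kernel plus a thin C++ launcher go in a .cu file compiled by nvcc, the Boost-using program goes in a .cpp compiled by g++, and the two only share a plain header.
kernel.hpp:
#pragma once
// Boost-free, CUDA-free interface shared by both translation units.
void scaleOnGpu(float* data, int n, float factor);
kernel.cu:
#include "kernel.hpp"
__global__ void scaleKernel(float* data, int n, float factor){
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}
void scaleOnGpu(float* data, int n, float factor){
    float* d = 0;
    cudaMalloc((void**)&d, n * sizeof(float));
    cudaMemcpy(d, data, n * sizeof(float), cudaMemcpyHostToDevice);
    scaleKernel<<<(n + 255) / 256, 256>>>(d, n, factor);
    cudaMemcpy(data, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(d);
}
main.cpp:
#include "kernel.hpp"
#include <boost/program_options.hpp>   // only ever seen by g++
namespace po = boost::program_options;
int main(int argc, char** argv){
    po::options_description desc("Options");
    desc.add_options()("factor", po::value<float>()->default_value(2.0f), "scale factor");
    po::variables_map vm;
    po::store(po::parse_command_line(argc, argv, desc), vm);
    po::notify(vm);
    float v[4] = {1, 2, 3, 4};
    scaleOnGpu(v, 4, vm["factor"].as<float>());
    return 0;
}
Built roughly as: nvcc -c kernel.cu, g++ -c main.cpp, then g++ main.o kernel.o -lcudart -lboost_program_options (plus -L for your CUDA lib directory).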
Thanks to #ronag's comment I realised I was still indirectly including boost/program_options.hpp in my header, since some member variables in my wrapper class definition needed it.
To get around this I moved these variables out of the class definition and into the .cpp file. They are no longer member variables and are now globals inside wrapper.cpp.
This seems to work but it is ugly and I have the feeling nvcc should handle this gracefully; if anybody else has a proper solution please still post it :)
Another option is to wrap cpp only code in
#ifndef __CUDACC__
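For example, a header that is shared between .cu and .cpp translation units can hide its Boost-dependent declarations behind that guard (a hypothetical sketch, not code from the question):
wrapper.hpp:
#pragma once
// Plain interface, safe for nvcc to parse.
void runKernelWrapper(int n);
#ifndef __CUDACC__
// Only the host compiler ever sees the Boost-dependent part.
#include <boost/program_options.hpp>
boost::program_options::variables_map parseOptions(int argc, char** argv);
#endif
Any .cu file that includes wrapper.hpp then never pulls in Boost, while the .cpp files still get the full interface.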

Why won't OpenCV compile in NVCC?

I am trying to integrate CUDA and OpenCV in a project. The problem is that OpenCV won't compile when NVCC is used, while a normal C++ project compiles just fine. This seems odd to me, as I thought NVCC passed all host code to the C/C++ compiler, in this case the Visual Studio compiler.
The errors I get are:
c:\opencv2.0\include\opencv\cxoperations.hpp(1137): error: no operator "=" matches these operands
operand types are: const cv::Range = cv::Range
c:\opencv2.0\include\opencv\cxoperations.hpp(2469): error: more than one instance of overloaded function "std::abs" matches the argument list:
function "abs(long double)"
function "abs(float)"
function "abs(double)"
function "abs(long)"
function "abs(int)"
argument types are: (ptrdiff_t)
So my question is, firstly, why the difference, considering that (supposedly) the same compiler is being used, and secondly, how I could remedy this?
In general I would recommend keeping separation between host code and CUDA code, only using nvcc for the kernels and host "wrappers". This is particularly easy with Visual Studio, create your project as normal (e.g. a Console application) and then implement your application in .cpp files. When you want to run a CUDA function, create the kernel and a wrapper in one or more .cu files. The Cuda.rules file provided with the SDK will automatically enable VS to compile the .cu files and link the result with the rest of the .cpp files.
NVCC passes C++ code through to the host compiler, but it first has to parse and understand the code. Unfortunately, NVCC has trouble with the STL. If at all possible, separate code that makes use of the STL into .cpp files and have those compiled with Visual Studio (without passing them first through NVCC).
Compile the .cu code as a library and then link it to the main program. I suggest using CMake, as it makes the process a breeze.
There's a project hosted at cuda-grayscale that shows how to integrate OpenCV + CUDA together. If you download the sources check the Makefile:
g++ $(CFLAGS) -c main.cpp -o Debug/main.o
nvcc $(CUDAFLAGS) -c kernel_gpu.cu -o Debug/kernel_gpu.o
g++ $(LDFLAGS) Debug/main.o Debug/kernel_gpu.o -o Debug/grayscale
It's a very simple project that demonstrates how to separate regular C++ code (OpenCV and stuff) from the CUDA code and compile them.
So there's no easy way to use nvcc to compile your existing C++ code? You have to write wrappers and compile them using nvcc, while compiling the rest of your code using g++ or what-have-you?