I am trying to use the C++ API for OpenCL. I have installed my NVIDIA drivers and I have tested that I can run the simple vector addition program provided here. I can compile this program with the following gcc call, and the program runs without problem.
gcc main.c -o vectorAddition -l OpenCL -I/usr/local/cuda-6.5/include
However, I would very much prefer to use the C++ API rather than the very verbose host code needed in C.
I downloaded the C++ bindings from Khronos from here and placed the cl.hpp file in the same location as my other cl.h file. The code uses some C++11, so I compile it with:
g++ main.cpp -o vectorAddition_cpp -std=c++11 -l OpenCL -I/usr/local/cuda-6.5/include
but when I try to run the program I get the error:
clGetPlatformIDs(-1001)
I also tried the example provided here, which gave a more helpful error message.
No platforms found. Check OpenCL installation!
The particular code that produces this error is:
std::vector<cl::Platform> all_platforms;
cl::Platform::get(&all_platforms);
if (all_platforms.size() == 0) {
    std::cout << " No platforms found. Check OpenCL installation!\n";
    exit(1);
}
This seems so strange given that the C implementation runs without problem. Any insights would be sincerely appreciated.
EDIT
The C implementation actually isn't running correctly. Each addition prints as zero, and checking ret_num_platforms shows it is 0. For some reason my setup is failing to find my GPU. What could I have missed? My install consists of the nvidia-340 driver and cuda-6.5, installed via apt-get and the .run file respectively.
My sincerest thanks to @pasternak for helping me troubleshoot this problem. To solve it, however, I ended up needing to avoid essentially all Ubuntu apt-get calls for the install and to just use the CUDA run file for the full installation. Here is what fixed the problem.
Purge existing nvidia and cuda implementations (sudo apt-get purge cuda* nvidia-*)
Download cuda-6.5 toolkit from the CUDA toolkit archive
Reboot computer
Switch to tty1 (Ctrl-Alt-F1)
Stop the X server (sudo stop lightdm)
Run the cuda run file (sh cuda_6.5.14_linux_64.run)
Select 'yes' and accept all defaults
Reboot (required)
Switch to tty1, stop the X server, run the cuda run file again and select 'yes' and the default for everything (including the driver again)
Update PATH to include /usr/local/cuda-6.5/bin and LD_LIBRARY_PATH to include /usr/local/cuda-6.5/lib64 (e.g. export PATH=/usr/local/cuda-6.5/bin:$PATH and export LD_LIBRARY_PATH=/usr/local/cuda-6.5/lib64:$LD_LIBRARY_PATH)
Reboot again
Compile main.c program (gcc main.c -o vectorAddition -l OpenCL -I/usr/local/cuda-6.5/include)
Verify works with ./vectorAddition
C++ API
Download the cl.hpp file from Khronos here, noting that it is version 1.1
Place the cl.hpp file in /usr/local/cuda-6.5/include/CL with the other CL headers.
Compile main.cpp (g++ main.cpp -o vectorAddition_cpp -std=c++11 -l OpenCL -I/usr/local/cuda-6.5/include)
Verify it works (./vectorAddition_cpp)
All output from both programs shows the correct results for addition between vectors.
I personally find it interesting that Ubuntu's NVIDIA drivers don't seem to play well with the CUDA toolkits. Possibly just for the older versions, but still very unexpected.
It is hard to say without running the specific code on your machine, but looking at the difference between the example C code you said was working and cl.hpp might give us a clue. In particular, notice that the C example uses the following lines to simply read a single platform ID:
cl_uint ret_num_platforms;
cl_platform_id platform_id = NULL;
cl_int ret = clGetPlatformIDs(1, &platform_id, &ret_num_platforms);
Notice that it passes 1 as its first argument. This assumes that at least one OpenCL platform exists and requests that the first one found be placed in platform_id. Additionally, note that even though the return code is assigned to ret, it is never used to check whether an error was returned.
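For illustration, a more defensive version of that query might look like the following sketch (not code from the example itself; error handling kept deliberately minimal):

#include <cstdio>
#include <CL/cl.h>

int main()
{
    // First ask only for the number of available platforms.
    cl_uint num_platforms = 0;
    cl_int ret = clGetPlatformIDs(0, NULL, &num_platforms);
    if (ret != CL_SUCCESS || num_platforms == 0) {
        std::fprintf(stderr, "No OpenCL platforms found (err = %d)\n", (int) ret);
        return 1;
    }

    // Then fetch the first platform ID, again checking the return code.
    cl_platform_id platform_id = NULL;
    ret = clGetPlatformIDs(1, &platform_id, NULL);
    if (ret != CL_SUCCESS) {
        std::fprintf(stderr, "clGetPlatformIDs failed (err = %d)\n", (int) ret);
        return 1;
    }

    std::printf("Found %u platform(s)\n", (unsigned) num_platforms);
    return 0;
}

With the broken setup described above, this version would report the missing platforms instead of silently computing with garbage.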
Now if we look at the implementation of the static method used to query the set of platforms in cl.hpp, i.e. cl::Platform::get:
static cl_int get(VECTOR_CLASS<Platform>* platforms)
{
    cl_uint n = 0;
    cl_int err = ::clGetPlatformIDs(0, NULL, &n);
    if (err != CL_SUCCESS) {
        return detail::errHandler(err, __GET_PLATFORM_IDS_ERR);
    }

    cl_platform_id* ids = (cl_platform_id*) alloca(n * sizeof(cl_platform_id));
    err = ::clGetPlatformIDs(n, ids, NULL);
    if (err != CL_SUCCESS) {
        return detail::errHandler(err, __GET_PLATFORM_IDS_ERR);
    }

    platforms->assign(&ids[0], &ids[n]);
    return CL_SUCCESS;
}
we see that it first calls
::clGetPlatformIDs(0, NULL, &n);
Notice that the first parameter is 0, which tells the OpenCL runtime to return the number of available platforms in n. Only if this call succeeds does it go on to request the actual n platform IDs.
So the difference here is that the C version does not check that at least one platform exists and simply assumes one does, while the cl.hpp variant does check, so it may well be this very call that is failing.
The most likely reason for all this is that the ICD is not correctly installed. You can see this thread for an example of how to fix this issue:
ERROR: clGetPlatformIDs -1001 when running OpenCL code (Linux)
I hope this helps.
I am trying to use ocamldebug with my project, to understand why a 3rd party lib I'm using is not behaving the way I expected.
https://ocaml.org/manual/debugger.html
The OCaml debugger is invoked by running the program ocamldebug with the name of the bytecode executable file as first argument
I have added (modes byte exe) to my dune file.
When I run dune build I can see the bytecode file output, alongside the exe, as _build/default/bin/cli.bc
When I pass this to ocamldebug I get the following error:
ocamldebug _build/default/bin/cli.bc
OCaml Debugger version 4.12.0
(ocd) r
Loading program... done.
Fatal error: debugger does not support channel locks
Lost connection with process 33035 (active process)
between time 170000 and time 180000
Restart from time 170000 and try to get closer of the problem ? (y or n)
If I choose y the console seems to hang indefinitely.
I found the source of the error here:
https://github.com/ocaml/ocaml/blob/f68acd1a618ac54790a8347fad466084f15a9a9e/runtime/debugger.c#L144
/* The code in this file does not bracket channel I/O operations with
   Lock and Unlock, so fail if those are not no-ops. */
if (caml_channel_mutex_lock != NULL ||
    caml_channel_mutex_unlock != NULL ||
    caml_channel_mutex_unlock_exn != NULL)
  caml_fatal_error("debugger does not support channel locks");
...but I don't know what might be triggering it.
My project is using cmdliner and lwt... I think at this early point of execution it hasn't hit any lwt code, though.
Is ocamldebug incompatible with cmdliner?
If that's the case, then I guess I will need to make a new entrypoint just for debugging. (Currently bin/cli is the only executable artefact in my project; the code I need to debug is all under lib/s.)
It looks like the OCaml debugger is broken for your version of macOS. Please report the issue to the OCaml issue tracker, including detailed information on your system. I can't reproduce it on my machine, but I am using a pretty old version of macOS (10.11.6) and I have the 4.12 debugger working flawlessly.
As a workaround, try using an older version of OCaml; as this channel-lock test was introduced very recently, you can install any version prior to 4.12:
opam switch create 4.11.0
eval $(opam env)
Then do not forget to rebuild your project (after first installing the required dependencies):
opam install lwt cmdliner
dune build
and then you can use the debugger to your taste.
I am currently installing the HDF5 library, more precisely the hdf5-1.10.0-patch1, on Cygwin, as I want to use it with Fortran. Following the instructions from the hdfgroup website
(here is the link), I did the following:
./configure --enable-fortran
make > "out1_check.txt" 2> "warn1_check.txt" &
make check > "out2_check.txt" 2> "warn2_check.txt" &
The execution of the last command (make check) proceeds as it should until it gets stuck. The process does not stop, and something is happening (8-12% CPU in use by sh.exe, already 39 hours of CPU time), but "out2_check.txt" looks like:
Making check in src
...
[many successful checks]
...
============================
No need to test testlinks_env.sh again.
============================
============================
Testing testswmr.sh
Unfortunately, I do not have the output file from the first run of make check, but it did not contain more information on Testing testswmr.sh. There was never any error message.
So, what is this testswmr.sh, why does it get stuck, and how can I finalize the installation process? Maybe I can skip the remaining checks and just proceed to make install?
Important note: an older version of HDF5 is already installed from the Cygwin repo. It does not seem to support Fortran however, so I decided to install the current version myself.
Available (and used) compilers are gcc and gfortran.
As far as I can tell, only Intel Fortran is supported on Windows. There is no Cygwin download here https://support.hdfgroup.org/HDF5/release/obtain518.html and I have never come across a report of anyone using Cygwin/Fortran/HDF5.
Your options:
Use Intel Fortran
Use Linux or Mac
Sorry
I want to load Digital Terrain Elevation Data (DTED) using GDAL with g++ on Solaris 10. The application built with the cc compiler loads the data successfully, but when I use NetBeans and g++, the application reads the DTED data successfully yet crashes at GetGeoTransform(double *) when I print GDALDataset->GetDriver()->GetDescription(). This function works fine under cc. If I comment out that line, the application crashes at GDALDataset->GetRasterBand(1) instead, and the error printed is: ld.so.1: fatal relocation error: symbol _ZN11GDALDataset13GetRasterBandIOEi: referenced symbol not found
Would you mind posting a part of the code which uses GDAL? There could be a few issues. Off the top of my head...
GDAL's GetRasterBand starts at 1 when indexing. It seems like the snippet you provided does that.
GDAL requires that you initialize the drivers with GDALAllRegister().
Most GDAL functions return NULL when they return without data. You may want to test for that before passing the result into another function, to prevent potential segfaults (see the sketch after this list).
What does cc point to? I would check the symbolic link with something like:
which cc
ls -la /usr/bin/cc
(Solaris is Unix, not Linux, so forgive me if I am wrong.)
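For reference, here is a minimal sketch that ties those points together (the file name n40.dt1 is hypothetical, and this is written against the classic GDAL C++ API, so adjust to your version):

#include "gdal_priv.h"
#include <cstdio>

int main()
{
    // Register all format drivers before any other GDAL call.
    GDALAllRegister();

    // Hypothetical input file; substitute your DTED file.
    GDALDataset *ds = (GDALDataset *) GDALOpen("n40.dt1", GA_ReadOnly);
    if (ds == NULL) {  // GDALOpen returns NULL on failure
        std::fprintf(stderr, "Failed to open dataset\n");
        return 1;
    }

    std::printf("Driver: %s\n", ds->GetDriver()->GetDescription());

    double gt[6];
    if (ds->GetGeoTransform(gt) == CE_None)
        std::printf("Origin: (%f, %f)\n", gt[0], gt[3]);

    GDALRasterBand *band = ds->GetRasterBand(1);  // band indexing starts at 1
    if (band != NULL)
        std::printf("Band size: %d x %d\n", band->GetXSize(), band->GetYSize());

    GDALClose(ds);
    return 0;
}

If this compiles and runs under g++ but your real application still crashes, that points back at the linking problem rather than at the GDAL calls themselves.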
I'm trying to compile and run a very basic program, given below (test.cpp), which calls the OpenNI class. You can see the files and dirs they're in here. Sorry that some characters get screwed up a little bit by the browser's encoding. I'm using the Linux command tree; if you know a better command, tell me and I will update it.
File Structure
I'm following the guide here, see "GCC / GNU Make".
#include <stdio.h>
#include <OpenNI.h>

using namespace openni;

int main(void)
{
    Status rc = OpenNI::initialize();
    if (rc != STATUS_OK)
    {
        printf("\nInitialize failed\n%s\n", OpenNI::getExtendedError());
        return 1;
    }
    printf("Hello, world!\n");
    return 0;
}
Here is what I'm running in the command line to compile it (gcc 4.7.2):
gcc test.cpp -I../OpenNI-2.0.0/Include -L/home/evan/Code/OpenNi/Init -l OpenNI2 -o test
This works fine but when I run ./test I get the following error:
Initialize failed
DeviceDriver: library handle is invalid for file libOniFile.so
Couldn't understand file 'libOniFile.so' as a device driver
DeviceDriver: library handle is invalid for file libPS1080.so
Couldn't understand file 'libPS1080.so' as a device driver
Found no valid drivers in './OpenNI2/Drivers'
Thanks, any help would be much appreciated.
The instructions from your guide say that:
It is highly suggested to also add the "-Wl,-rpath ./" to your linkage command. Otherwise, the runtime linker will not find the libOpenNI.so file when you run your application. (default Linux behavior is to look for shared objects only in /lib and /usr/lib).
It seems you have exactly this problem: it cannot find some libraries. Try adding a proper rpath (which seems to be /home/evan/Code/OpenNi/Init/OpenNI2/Drivers in your case) to your compilation string.
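For example, with the path suggested above, the compile command from the question might become (untested; adjust the rpath to wherever the .so files actually live):
gcc test.cpp -I../OpenNI-2.0.0/Include -L/home/evan/Code/OpenNi/Init -l OpenNI2 -Wl,-rpath,/home/evan/Code/OpenNi/Init/OpenNI2/Drivers -o test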
I had the same issue after compiling this little "Hello World" with Eclipse and trying to run it from the command line.
The "-Wl,-rpath=./" thing did not work for me.
As also discussed here, it worked for me after setting some environment variables before execution:
export LD_LIBRARY_PATH="/path/to/OpenNI2:$LD_LIBRARY_PATH"
export OPENNI2_DRIVERS_PATH="/path/to/OpenNI2/Drivers"
export LD_LIBRARY_PATH="/path/to/OpenNI2/Drivers:$LD_LIBRARY_PATH"
Somewhere I got the info that the first two lines should be enough, but it was the third line that turned out to be important. It also works with just the third line.
I have a C++ application that I am porting to MacOSX (specifically, 10.6). The app makes heavy use of the C++ standard library and boost. I recently observed some breakage in the app that I'm having difficulty understanding.
Basically, the boost filesystem library throws a runtime exception when the program runs. With a bit of debugging and googling, I've reduced the offending call to the following minimal program:
#include <locale>

int main(int argc, char *argv[])
{
    std::locale::global(std::locale(""));
    return 0;
}
This program fails when I compile it with g++ and execute the resulting binary in an environment where LANG=en_US.UTF-8 is set (which on my computer is part of the default bash session when I create a new console window). Clearing the environment variable (setenv LANG=) allows the program to run without issues. But I'm surprised I'm seeing this breakage in the default configuration.
My questions are:
Is this expected behavior for this code on MacOS 10.6?
What would a proper workaround be? I can't really re-write the function because the version of the boost libraries we are using executes this statement internally as part of the filesystem library.
For completeness, I should point out that the program from which this code was synthesized crashes when launched via the 'open' command (or from the Finder) but not when Xcode runs the program in Debug mode.
EDIT: The error given by the above code on 10.6.1 is:
$ ./locale
terminate called after throwing an instance of 'std::runtime_error'
what(): locale::facet::_S_create_c_locale name not valid
Abort trap
OK, I don't have an answer for you, but I have some clues:
This isn't limited to OS X 10.6. I get the same result on a 10.4 machine.
I looked at the GCC source for libstdc++ and hunted around for _S_create_c_locale. What I found is on line 143 of config/locale/generic/c_locale.cc. The comment there says "Currently, the generic model only supports the "C" locale." That's not promising. In fact, if I do LANG=C the runtime error goes away, but any other value for LANG I try causes the same error, regardless of what arguments I give to the locale constructor. (I tried locale::classic(), "C", "", and the default.) This is true as far back as GCC 4.0.
That same page has a reference to a libstdc++ mailing list discussion on this topic. I don't know how fruitful it is: I only followed it a little way down, and it gets very technical very fast.
None of this tells you why the default locale on 10.6 wouldn't work with std::locale, but it does suggest a workaround, which is to set LANG=C before running the program.
I have encountered this problem very recently on Ubuntu 14.04 LTS and on a Raspberry Pi running the latest Raspbian Wheezy.
It has nothing to do with OS X, rather with a combination of G++ and Boost (at least up to V1.55) and the default locale settings on certain platforms. There are Boost bug tickets sort of related to this issue, see
ticket #4688 and ticket #5928.
My "solution" was first to do some extra locale setup, as suggested by this AskUbuntu posting:
sudo locale-gen en_US en_US.UTF-8
sudo dpkg-reconfigure locales
But then, I also had to make sure that the environment variable LC_ALL is set to the value of LANG (it is advisable to put this in your .profile):
export LC_ALL=$LANG
In my case I use the locale en_US.UTF-8.
Final remark: the OP said "This program fails when I run this through g++". I understand that this thread was started in 2009, but today there is absolutely no need to use GCC or G++ on the Mac; the much better LLVM/Clang compiler suite is available from Apple free of charge. See the Xcode home page.
The situation is still the same. But some functionality may be gained by
setlocale( LC_ALL, "" );
This gets you UTF-8 encoding on wide iostreams but not money formatting, for my two data points.
locale::global( locale( "" ) );
should be equivalent, but it crashes if subsequently run in the very same program.
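A minimal sketch of that workaround (my own illustration, assuming a UTF-8 terminal; std::setlocale reports failure by returning NULL instead of throwing like std::locale does):

#include <clocale>
#include <iostream>

int main()
{
    // C-level call: fails by returning NULL rather than throwing.
    if (std::setlocale(LC_ALL, "") == NULL)
        std::cerr << "setlocale failed\n";

    // Per the observation above, wide iostreams can then follow the
    // environment's encoding.
    std::wcout << L"wide output follows the environment's locale\n";
    return 0;
}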
I had the same problem. I checked LANG and LC_MESSAGES and they are not set when you launch the application through Finder, so the following lines saved the day:
unsetenv("LANG");
unsetenv("LC_MESSAGES");
The _S_create_c_locale exception seems to indicate some sort of misconfiguration: check that whatever your LC_ALL or LANG environment variable is set to exists in the output of locale -a.
$ env LC_ALL=xx_YY ./test
terminate called after throwing an instance of 'std::runtime_error'
what(): locale::facet::_S_create_c_locale name not valid
Aborted
$ env LC_ALL=C ./test
$ echo $?
0
But since you're on OS X, I'm not really sure how locale information is supposed to be handled.
Quoting the accepted answer:
It has nothing to do with OS X
I encountered this issue on macOS Big Sur using an outdated macOS utility, specifically VMware's ovftool, and none of the above LANG/LC_ALL workarounds fixed it. Updating the tool was the only way to get the error to go away.
In my specific case, the error occurred using ovftool 4.1.0, and the error went away using ovftool 4.4.3.