Compiling C++ program using H5Cpp.h header file

I need to create an HDF5 file. I'm using the H5Cpp.h header file.
I'm trying to compile the C++ code below on OS X 10.11 El Capitan.
#include "include/hdf5-1.10.0-patch1/c++/src/H5Cpp.h"
using namespace H5;
const int NX = 5;
const int NY = 5;
const H5std_string FILE_NAME( "SDS.h5" );
const H5std_string DATASET_NAME( "IntArray" );
int main(){
int i, j;
int data[NX][NY]; // buffer for data to write
for (j = 0; j < NX; j++){
for (i = 0; i < NY; i++)
data[j][i] = i + j;
}
H5File file(FILE_NAME, H5F_ACC_TRUNC);
hsize_t dimsf[2] = {NX, NY};
DataSpace dataspace(2, dimsf);
DataSet dataset = file.createDataSet(DATASET_NAME, PredType::NATIVE_INT,
dataspace);
// Attempt to write data to HDF5 file
dataset.write(data, PredType::NATIVE_DOUBLE);
return 0;
}
I keep getting this error,
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [all] Error 1
Here's the verbose output - https://gist.github.com/gkarthik/e21d2f83baffc2d2eb1b883696c44df8
Thanks!

A few tips:
I think you have to compile the HDF5 library yourself if you don't have pre-built binaries; as far as I can see, the HDF5 website offers no pre-built binaries for Mac.
On Windows, to compile a C++ application that uses the HDF5 library you have to pass the linker the following dependencies (in this order); the same may well apply on Mac:
szip.lib zlib.lib hdf5.lib hdf5_cpp.lib
If you download pre-built binaries, there is a file that describes the steps to compile your C++ program. On Windows it is called "USING_HDF5_CMake.txt" or "USING_HDF5_VS.txt". You can find these files, for example, here.
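As a rough sketch, an equivalent link line on a Mac might look like the following, assuming HDF5 was built and installed under /usr/local and the source file is named main.cpp (both assumptions). Note that GNU-style linkers want the order reversed relative to the Windows list above, dependents before their dependencies:
g++ main.cpp -o h5write \
    -I/usr/local/include -L/usr/local/lib \
    -lhdf5_cpp -lhdf5 -lz -lsz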

Related

CUDA <<<X,X>>> gives "expected an expression" error

I am trying to compile and run the following program called test.cu:
#include <iostream>
#include <math.h>
#include "cuda_runtime.h"
#include "device_launch_parameters.h"

// Kernel function to add the elements of two arrays
__global__
void add(int n, float* x, float* y)
{
    int index = threadIdx.x;
    int stride = blockDim.x;
    for (int i = index; i < n; i += stride)
        y[i] = x[i] + y[i];
}

int main(void)
{
    int N = 1 << 20;
    float* x, * y;
    // Allocate Unified Memory – accessible from CPU or GPU
    cudaMallocManaged(&x, N * sizeof(float));
    cudaMallocManaged(&y, N * sizeof(float));
    // initialize x and y arrays on the host
    for (int i = 0; i < N; i++) {
        x[i] = 2.0f;
        y[i] = 1.0f;
    }
    // Run kernel on 1M elements on the GPU
    add<<<1, 256>>>(N, x, y);
    // Wait for GPU to finish before accessing on host
    cudaDeviceSynchronize();
    // Check for errors (all values should be 3.0f)
    for (int i = 0; i < 10; i++)
        std::cout << y[i] << std::endl;
    // Free memory
    cudaFree(x);
    cudaFree(y);
    return 0;
}
I am using Visual Studio Community 2019, and it marks the "add<<<1, 256>>>(N, x, y);" line with an "expected an expression" error. I tried compiling it, and somehow it compiles without mistakes, but when running the .exe file it outputs a bunch of "1" instead of the expected "3".
I also tried compiling using "nvcc test.cu", but initially it said "nvcc fatal : Cannot find compiler 'cl.exe' in PATH", so I added "C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.27.29110\bin\Hostx64\x64" to PATH, and now compiling with nvcc gives the same mistake as compiling with Visual Studio.
In both cases the program never enters the "add" kernel.
I am pretty sure the code is right and the problem has something to do with the installation. I have already tried reinstalling the CUDA toolkit and repairing Visual Studio, but that didn't work.
The kernel.cu example that appears when starting a new CUDA project in Visual Studio also didn't work; when run, it output "No kernel image available for execution on the device".
How can I solve this?
nvcc version if that helps:
nvcc -V
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2020 NVIDIA Corporation
Built on Wed_Jul_22_19:09:35_Pacific_Daylight_Time_2020
Cuda compilation tools, release 11.0, V11.0.221
Build cuda_11.0_bu.relgpu_drvr445TC445_37.28845127_0
Visual Studio provides IntelliSense for C++. In the C++ language, the proper parsing of angle brackets is troublesome: you have < as less-than and for templates, and << as shift. So the fact is that the folks at NVIDIA chose the worst possible delimiter, <<<>>>, and this makes it hard for IntelliSense to work properly. The way to get full IntelliSense in CUDA is to switch from the Runtime API to the Driver API. The C++ is then just C++, and the CUDA is still (sort of) C++, with no <<<>>> badness for the language parser to work around.
You could take a look at the difference between the matrixMul and matrixMulDrv samples. The <<<>>> syntax is handled by the compiler essentially by emitting code that makes the corresponding Driver API calls. You'll link against cuda.lib instead of cudart.lib, and you may have to deal with a "mixed-mode" program if you use CUDA-RT-only libraries. You could refer to this link for more information.
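For illustration, a minimal Driver API version of the launch above might look like the sketch below. The module file "add.ptx" is hypothetical (it could be produced with "nvcc -ptx test.cu -o add.ptx"), and looking the kernel up by its plain name assumes it was declared extern "C" so the name is not mangled:
#include <cuda.h>

int main()
{
    CUdevice dev;
    CUcontext ctx;
    CUmodule mod;
    CUfunction fn;
    cuInit(0);
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);
    cuModuleLoad(&mod, "add.ptx");        // hypothetical PTX file, see above
    cuModuleGetFunction(&fn, mod, "add"); // assumes extern "C" __global__ add
    int n = 1 << 20;
    CUdeviceptr x, y;
    cuMemAlloc(&x, n * sizeof(float));
    cuMemAlloc(&y, n * sizeof(float));
    // (host-side initialization of x and y omitted for brevity)
    void* args[] = { &n, &x, &y };
    // Equivalent of add<<<1, 256>>>(N, x, y): grid (1,1,1), block (256,1,1)
    cuLaunchKernel(fn, 1, 1, 1, 256, 1, 1, 0, nullptr, args, nullptr);
    cuCtxSynchronize();
    cuMemFree(x);
    cuMemFree(y);
    cuCtxDestroy(ctx);
    return 0;
}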
Also, this link explains how to add IntelliSense support for CUDA in Visual Studio.

Exception deallocating memory for std::string (trying to use YOLO/Darknet inside UE4)

I am trying to get YOLO/Darknet working inside a UE4 project. I have built YOLO as a C++ DLL and added all the dependencies to the project. It builds fine, and some parts of the code are working, but I'm getting an exception at a certain point, and after literally days trying to figure it out, I'm now at a loss and need some help.
Here is the code I'm calling to create a YOLO Detector class from inside a UE4 class:
YOLO_DataPath = FPaths::ProjectDir() + "Plugins/Stereolabs/Source/YOLO/Data/";
std::string YOLO_DataPathC(TCHAR_TO_UTF8(*YOLO_DataPath));
std::string NamesFile = YOLO_DataPathC + "obj.names";
std::string CFGFile = YOLO_DataPathC + "yolo-obj.cfg";
std::string WeightsFile = YOLO_DataPathC + "yolo-obj.weights";
Detector YOLODetector(CFGFile, WeightsFile);
Here is the constructor being called (from 'yolo_v2_class.cpp', line 130):
LIB_API Detector::Detector(std::string cfg_filename, std::string weight_filename, int gpu_id) : cur_gpu_id(gpu_id)
{
    wait_stream = 0;
#ifdef GPU
    int old_gpu_index;
    check_cuda( cudaGetDevice(&old_gpu_index) );
#endif
    detector_gpu_ptr = std::make_shared<detector_gpu_t>();
    detector_gpu_t &detector_gpu = *static_cast<detector_gpu_t *>(detector_gpu_ptr.get());
#ifdef GPU
    //check_cuda( cudaSetDevice(cur_gpu_id) );
    cuda_set_device(cur_gpu_id);
    printf(" Used GPU %d \n", cur_gpu_id);
#endif
    network &net = detector_gpu.net;
    net.gpu_index = cur_gpu_id;
    //gpu_index = i;

    _cfg_filename = cfg_filename;
    _weight_filename = weight_filename;

    char *cfgfile = const_cast<char *>(_cfg_filename.c_str());
    char *weightfile = const_cast<char *>(_weight_filename.c_str());

    net = parse_network_cfg_custom(cfgfile, 1, 1);
    if (weightfile) {
        load_weights(&net, weightfile);
    }
    set_batch_network(&net, 1);
    net.gpu_index = cur_gpu_id;
    fuse_conv_batchnorm(net);

    layer l = net.layers[net.n - 1];
    int j;
    detector_gpu.avg = (float *)calloc(l.outputs, sizeof(float));
    for (j = 0; j < NFRAMES; ++j) detector_gpu.predictions[j] = (float *)calloc(l.outputs, sizeof(float));
    for (j = 0; j < NFRAMES; ++j) detector_gpu.images[j] = make_image(1, 1, 3);

    detector_gpu.track_id = (unsigned int *)calloc(l.classes, sizeof(unsigned int));
    for (j = 0; j < l.classes; ++j) detector_gpu.track_id[j] = 1;
#ifdef GPU
    check_cuda( cudaSetDevice(old_gpu_index) );
#endif
}
All this code seems to run fine, but when it reaches the end of the constructor, it hits an exception where it seems to be trying to delete a string, on this line (line 132 of xmemory - c:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\include\xmemory0):
::operator delete(_Ptr);
The full call stack is like this:
[Inline Frame] yolo_cpp_dll.dll!std::_Deallocate(void * _Ptr, unsigned __int64) Line 132
[Inline Frame] yolo_cpp_dll.dll!std::allocator<char>::deallocate(char *) Line 720
[Inline Frame] yolo_cpp_dll.dll!std::_Wrap_alloc<std::allocator<char> >::deallocate(char * _Count, unsigned __int64) Line 987
[Inline Frame] yolo_cpp_dll.dll!std::basic_string<char,std::char_traits<char>,std::allocator<char> >::_Tidy(bool) Line 2258
[Inline Frame] yolo_cpp_dll.dll!std::basic_string<char,std::char_traits<char>,std::allocator<char> >::{dtor}() Line 1017
yolo_cpp_dll.dll!Detector::Detector(std::basic_string<char,std::char_traits<char>,std::allocator<char> > cfg_filename, std::basic_string<char,std::char_traits<char>,std::allocator<char> > weight_filename, int gpu_id) Line 177
From the limited information I can find about this error, it seems like it might be an issue with DLLs being compiled by different compilers. I have spent days now trying to compile everything from scratch in different combinations: UE4 from source, the UE4 project, the YOLO C++ DLL. I have tried totally clean installs of Visual Studio 2015 and 2017 for everything, and I get this same problem each time.
Does anyone know exactly what is going on here? And how I might either fix it or work around it?
Simple way: never pass std::xxx objects between different modules. Use raw C types instead, and make sure memory is released in the module that allocated it:
Detector(const char* cfg_filename, const char* weight_filename, int gpu_id)
Hard way: compile all modules with the same compiler and options (extra hard in the case of UE4).
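A minimal sketch of such a C-style boundary, assuming the Detector class above (the wrapper function names here are hypothetical, not part of the Darknet API):
extern "C" void* detector_create(const char* cfg_filename, const char* weight_filename, int gpu_id)
{
    // The std::string arguments are constructed and destroyed entirely
    // inside this DLL, so only one CRT ever touches their memory
    return new Detector(cfg_filename, weight_filename, gpu_id);
}

extern "C" void detector_destroy(void* handle)
{
    // Deleted by the same module that called new
    delete static_cast<Detector*>(handle);
}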
As others have pointed out, the trouble-free way is hiding your code behind a C interface, but that limits your interface a lot if you're using C++. So if you want to go the hard way, I know of at least two projects that successfully managed it; you can dig into their code for the full details:
CARLA: https://github.com/carla-simulator/carla
AirSim: https://github.com/Microsoft/AirSim
As most comments here point out, use the same compiler and configuration; if you open a UE4 project in Visual Studio you should be able to inspect the configuration.
Then the trick is statically linking MSVCRT.lib. Unless you compiled UE4 yourself, it uses release mode and therefore the /MD compiler flag (see the MSVC reference). Here's the example in CARLA's build script, BuildLibCarla.bat#L103-L111: it uses CMake to create a static library, and later this library is linked into a UE4 plugin (DLL).
Linux
For completeness I'll add the details for Linux too, because I wish I had found this online when I had to do it!
If you're building on Linux, things get a bit more complicated: UE4 links against LLVM's libc++ runtime instead of the default GNU libstdc++. Depending on the UE4 version, a different version of the LLVM toolchain is used (they tend to update it relatively often). Both come bundled with UE4; you can find them at:
Headers and libs: Engine/Source/ThirdParty/Linux/LibCxx
SDK, binaries: Engine/Extras/ThirdPartyNotUE/SDKs/HostLinux/Linux_x64/<clang version>/x86_64-unknown-linux-gnu
Then either compile and link against the bundled versions of libc++.a and libc++abi.a, or compile your own (this is what CARLA does, Setup.sh#L37-L58). Note that using the libc++ and libc++abi you get from apt install won't work, because those are not compiled as position-independent code (-fPIC), which you need in order to link them into a .so for UE4.
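As a rough, untested sketch of what building against the bundled libc++ could look like, assuming UE4_ROOT points at the engine tree, mylib.cpp is your source file, and the lib subdirectory matches your engine version (all assumptions to verify locally):
LIBCXX="$UE4_ROOT/Engine/Source/ThirdParty/Linux/LibCxx"
clang++ -fPIC -nostdinc++ -isystem "$LIBCXX/include/c++/v1" -c mylib.cpp -o mylib.o
clang++ -shared -nostdlib++ mylib.o \
    "$LIBCXX/lib/Linux/x86_64-unknown-linux-gnu/libc++.a" \
    "$LIBCXX/lib/Linux/x86_64-unknown-linux-gnu/libc++abi.a" \
    -lpthread -o libmylib.so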

Undefined symbols for architecture x86_64: missing package?

I am working through various C++ exercises to prepare for my exam at university.
I am pretty sure they contain no serious mistakes and should compile.
Yet none of them compile; they all fail with the same error log:
Undefined symbols for architecture x86_64:
[hundreds of lines of error log]
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
The error log in between refers to just about every line of the code.
I am wondering if I have failed to install some package needed to compile C++ on my machine.
Code Example:
#include <iostream>
#include <new>

int main () {
    int n;
    std::cout << "How many value do you want to enter to your list?" << std::endl;
    std::cin >> n;
    int* numbArray = new int[n];
    for (int i = 0; i < n; i++) {
        std::cout << "Enter the" << i+1 << ". value!" << std::endl;
        std::cin >> numbArray[i];
    }
    std::cout << "List of value: " << std::endl;
    for (int i = 0; i < n; i++) {
        std::cout << numbArray[i] << " " << std::endl;
    }
    std::cout << "end of arrays" << std::endl;
    delete[] numbArray;
    return 0;
}
My operating system is macOS Catalina 10.15.2.
Thanks for your help.
Those are symbols from the C++ standard library that the linker cannot find. How do you call the compiler and linker? Do you happen to link with "gcc" instead of "g++", as it should be?
Just as Erlkoeing said, using "g++" instead of "gcc" to invoke the compiler and linker works fine.
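For example, assuming the file is named main.cpp:
gcc main.cpp -o main   # fails: gcc does not pull in the C++ standard library
g++ main.cpp -o main   # works: g++ links the C++ standard library automatically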

C++: Linker command with exit code -1 in Xcode

I was implementing a suffix array in Xcode using C++ when I got the following error:
ld: 32-bit RIP relative reference out of range (100000121018926 max is +/-4GB): from _main (0x100001310) to _L (0x5AF417B0F130) in '_main' from /Users/priya/Library/Developer/Xcode/DerivedData/cfquestions-boqlvazrozappdeetfhesfsohczs/Build/Intermediates/cfquestions.build/Debug/cfquestions.build/Objects-normal/x86_64/main.o for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
My code for the suffix array is as follows:
#include <cstdio>
#include <cstring>
#include <iostream>
#include <set>
#include <algorithm>
using namespace std;

// size1 and size2 are large constants given elsewhere in the original post
char a[size1];
int ans[size2][size2];
struct node {
    int gg[2], pos;
} L[size1];
int step = 1, ct = 1;

bool comp(node a, node b) {
    return a.gg[0] == b.gg[0] ? (a.gg[1] < b.gg[1] ? 1 : 0) : (a.gg[0] < b.gg[0] ? 1 : 0);
}

int main() {
    int TT;
    cin >> TT;
    while (TT--) {
        set<int> s;
        //a.clear();
        scanf("%s", a);
        int n = strlen(a); // must come after scanf, not before
        for (int i = 0; i < strlen(a); i++) ans[0][i] = a[i] - 'a';
        for (; ct < strlen(a); step++, ct <<= 1) {
            for (int i = 0; i < n; i++) {
                L[i].gg[0] = ans[step-1][i];
                L[i].gg[1] = i + ct < n ? ans[step-1][i+ct] : -1;
                L[i].pos = i;
            }
            sort(L, L+n, comp);
            for (int i = 0; i < n; i++)
                ans[step][L[i].pos] = i > 0 && L[i].gg[0] == L[i-1].gg[0] && L[i].gg[1] == L[i-1].gg[1] ? ans[step][L[i-1].pos] : i;
        }
        for (int i = 0; i < n; i++) {
            if (s.find(ans[step-1][i]) != s.end()) {
            }
            else s.insert(ans[step-1][i]);
        }
        cout << s.size() << endl;
    }
    return 0;
}
P.S.: The code runs fine everywhere else; I have even tried much more complex code and it works fine. Hence there must be something wrong with this piece of code, but I am not able to figure out what!
Any help would be appreciated, thanks!!
Edit: The reason is the size of ans, as pointed out in the comment by @WhozCraig after you added the sizes. My guess regarding why the linker error message names _L is that the compiler put main at a lower address and then laid out the global data in order, giving the too-large RIP-relative offset to L.
In one of the for loops you have i = 0 and access L[i-1], which yields a very large number for the array index.
Edit: but that should give a runtime failure or error, not a linker error.
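One way around the offset problem, sketched below under the assumption that size1 and size2 stand for the large constants in the original post (the values here are placeholders), is to move the big tables onto the heap so the binary's data segment stays small:
#include <vector>

const int size1 = 1 << 20; // placeholder; use the sizes from the question
const int size2 = 1 << 11; // placeholder

struct node { int gg[2], pos; };

// Heap-allocated replacements for the huge globals; RIP-relative
// references into the data segment then stay within +/-4GB
std::vector<std::vector<int>> ans(size2, std::vector<int>(size2));
std::vector<node> L(size1);
Calls like sort(L, L + n, comp) then become sort(L.begin(), L.begin() + n, comp).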

Octave: mkoctfile not found when compiling a C++ file

I am trying to use Octave functions in C++. I installed Octave 3.8.0 on Mac OS X 10.9.3 and followed the standalone program example on the Octave website:
#include <iostream>
#include <octave/oct.h>

int
main (void)
{
    std::cout << "Hello Octave world!\n";

    int n = 2;
    Matrix a_matrix = Matrix (n, n);

    for (octave_idx_type i = 0; i < n; i++)
        for (octave_idx_type j = 0; j < n; j++)
            a_matrix(i,j) = (i + 1) * 10 + (j + 1);

    std::cout << a_matrix;

    return 0;
}
Then I type
$ mkoctfile --link-stand-alone main.cpp -o standalone
But it shows mkoctfile: command not found. What is the problem?
I also tried to compile the C++ file with g++,
$ g++ -I /usr/local/octave/3.8.0/include/octave-3.8.0 main.cpp
but it reports two errors:
1) 'config.h' file not found with <angled> include; use "quotes" instead.
2) fatal error: 'hdft.h' file not found.
Please help me!
It may be that Octave is not registered in your system properly, judging by the shell response.
Try invoking the command from inside the Octave interpreter.
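For example, at the Octave prompt (mkoctfile also exists as an Octave function):
octave:1> mkoctfile --link-stand-alone main.cpp -o standalone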
Not sure about macOS, but on Linux mkoctfile is not bundled with the default Octave distribution. Instead, it comes in a supplementary package, liboctave-dev, that has to be installed in addition to Octave itself.
This is not documented in the Octave web tutorial.
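On Debian/Ubuntu, for example, the whole sequence would be (assuming the example above is saved as main.cpp):
$ sudo apt-get install liboctave-dev
$ mkoctfile --link-stand-alone main.cpp -o standalone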