How to implement conditional compilation without messing up the library API? - c++

I have a library which can do GPU computation using the OpenCL framework. Sadly, OpenCL is not available on all platforms. However I would still like to be able to compile my code on those platforms, just excluding OpenCL functionality.
I think this question applies to all situations where you want to conditionally compile some external resource which may not always be available, and it messes with your library API.
Currently I have it set up like this:
CMake:
if(ENABLE_OPENCL)
    add_definitions(-DENABLE_OPEN_CL)
    find_package(OpenCL REQUIRED)
    include_directories(${OpenCL_INCLUDE_DIR})
    target_link_libraries(mylibrary ${OpenCL_LIBRARY})
endif()
C++
// settings.hpp, exposed to public API
class settings
{
    int general_setting_1;
    bool general_setting_2;
    // ... Other general settings
#ifdef ENABLE_OPEN_CL
    int open_cl_platform_id;
    // ... Other settings available only when OpenCL is available
#endif
    // More settings, possibly also conditionally compiled on other external libraries
};
// computation.cpp, internal to the library
#ifdef ENABLE_OPEN_CL
#include <CL/cl.hpp>
#endif
void do_things()
{
    // ...
#ifdef ENABLE_OPEN_CL
    if(settings.open_cl_platform_id != -1)
    {
        // Call OpenCL code
    }
#endif
    // ...
}
So when I compile the library, if I want to enable OpenCL I do cmake .. -DENABLE_OPENCL=ON.
This works, but if the client is consuming a library compiled with ENABLE_OPEN_CL, it forces the client to define the same ENABLE_OPEN_CL, otherwise the library headers the client includes don't match the ones the library was built with, and very bad things happen.
This opens a whole can of worms: for example, what if the client forgets to define it? What if they use the same macro name for something else?
Can I avoid this? If not, is there some way I could verify that the header files match on the client and the library, and cause a compilation error? Or at least throw a run-time exception? What is the correct approach to this scenario?

The obvious way to do it is to leave open_cl_platform_id as a member of settings even if OpenCL is not supported, as sketched below. The user then gets a run-time error if they try to use OpenCL functionality when the library hasn't been compiled with it.
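A minimal sketch of that first option (the member stays in the public header unconditionally, and the #ifdef moves entirely into the implementation file; the std::runtime_error message is just an example):
// settings.hpp, exposed to public API -- no #ifdef here
class settings
{
public:
    int general_setting_1;
    bool general_setting_2;
    int open_cl_platform_id; // always present; set to -1 to mean "don't use OpenCL"
};
// computation.cpp, internal to the library
#include <stdexcept>
void do_things(const settings& s)
{
    if(s.open_cl_platform_id != -1)
    {
#ifdef ENABLE_OPEN_CL
        // Call OpenCL code
#else
        throw std::runtime_error("this build of the library has no OpenCL support");
#endif
    }
    // ...
}
The client sees exactly the same settings layout regardless of how the library was built, so the header/binary mismatch from the question cannot occur.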
Alternatively, have two header files settings_no_open_cl.hpp and settings_open_cl.hpp, and require the user to include the right one.

Related

How to turn off submodule of a C++ library based on preprocessor defined macro

What I'm doing: I'm writing a C++ library that depends on the NetCDF library. For example,
#include <netcdf>
class myLib {
public:
myLib();
myLib(const myLib&);
virtual ~myLib();
std::string probe_data(std::string & file_path);
...
And the function probe_data uses the functions from NetCDF library.
What is the problem: I have defined a preprocessor macro CANALOGSIO_WITHOUT_NETCDF, because on some systems there is no NetCDF library installed. So I would like to turn off this functionality in my library; for example, the library will still have the probe_data function, but it will simply report that NetCDF is not installed.
What would be a good practice for doing that? Thank you!
I'll list 2 different methods which came to my mind for this.
1. Just use #ifdefs in the definition of the function. This will keep the interface uniform and everyone can call it.
class myLib {
...
std::string probe_data(std::string & file_path) {
#ifdef NETCDF
    return do_real_probe(file_path);
#else
    std::cerr << "Not implemented function" << std::endl;
    return "";
#endif
}
...
This requires a separate compilation for each system. You need to supply the definition of the macro on the compilation command line, e.g. with gcc:
g++ -DNETCDF ..
2. The second method would be based on the library approach. You can compile separate implementation libraries for different systems. Then at link time you can choose which static library to use (or at run time for dynamic libs). Most likely you would only deliver the library which works on the target system and nothing else. You might get away without #ifdefs if you choose to; just have different implementations in different files:
sys1.cpp
std::string probe(std::string &) { return do_probe(); }
g++ -fPIC -shared sys1.cpp -o sys1.so
sys2.cpp
std::string probe(std::string &) { std::cerr << "NetCDF not installed" << std::endl; return ""; }
g++ -fPIC -shared sys2.cpp -o sys2.so
Now you just need to deliver the correct library (sys1 or sys2) to the correct system. Or a correct statically linked image of your program.
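As a minimal sketch of the common interface both implementations would be built against (the header name probe.hpp is an assumption, and do_probe stands in for the real NetCDF-based implementation):
// probe.hpp -- identical for every platform; only the implementation behind it differs
#ifndef PROBE_HPP
#define PROBE_HPP
#include <string>
std::string probe(std::string & file_path);
#endif
The client includes probe.hpp and links against whichever sys1/sys2 library was shipped for its platform.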
There are multiple ways to use conditional compilation to do it. So you decide.

glGenFramebuffers() in Qt get 'was not declared in this scope'

I'm trying to compile some code with this call in Qt5 under Linux, and I'm getting this error at compile time.
Is it a compatibility problem? Or some other error?
I have this include:
#include <GL/gl.h>
Try this:
#ifdef Q_OS_WIN
#include "gl/GLU.h"
#else
#include <glu.h>
#endif
That code worked for me for finding the correct headers when building this example on OSX and on Windows.
https://github.com/peteristhegreat/circles-in-a-cube/blob/master/glwidget.cpp
Hope that helps.
glGenFramebuffers() is never declared in the standard GL headers, because that is a function which is not even guaranteed to be exported by the GL library on most platforms. For anything beyond GL 1.1, the GL extension mechanism has to be used to retrieve the function pointers at run time. There are a couple of different OpenGL loading libraries which do all of this for you under the hood, and also provide appropriate header files so that any GL function can be used as if you were directly linking them.
You already use Qt, which provides its own GL loading mechanism, namely the QOpenGLFunctions and the more modern QAbstractOpenGLFunctions classes. This article provides a brief overview of the different possibilities.
Also note that Qt provides the QGLFramebufferObject class as a wrapper around GL's FBOs.
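A minimal sketch of the Qt route (assuming Qt 5.4+ for QOpenGLWidget; with older Qt 5 the same pattern works with QGLWidget):
#include <QOpenGLWidget>
#include <QOpenGLFunctions>

class MyGLWidget : public QOpenGLWidget, protected QOpenGLFunctions
{
protected:
    void initializeGL() override
    {
        initializeOpenGLFunctions();   // resolves the GL entry points at run time
        GLuint fbo = 0;
        glGenFramebuffers(1, &fbo);    // comes from QOpenGLFunctions, not GL/gl.h
    }
};
Here glGenFramebuffers resolves to the QOpenGLFunctions member, which is declared by the Qt header and bound to the driver's entry point at run time, so the "was not declared in this scope" error goes away.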

boost::shared_ptr vs std::tr1::shared_ptr on multi os compilation

I have maintained different code for a browser plugin (C++) for Windows and Mac systems. The only difference between the code bases is the shared pointer.
In the Windows version I am using std::tr1::shared_ptr and in the Mac version I am using boost::shared_ptr.
Now I want to merge these into one. I want to use std::tr1::shared_ptr in both and maintain a single source code base, but two different solution/project folders.
This browser plugin supports OSX 10.5 onwards. Presently I am compiling in Xcode 4.6.2 (Apple LLVM compiler). Basically I am a Windows programmer and mostly work in Visual Studio.
My question is: will older Mac versions support the plugin with this change? Is this a good idea?
Please let me know whether boost is useful in this case.
First of all, boost::shared_ptr and std::tr1::shared_ptr are almost the same, but if you can you should use std::shared_ptr instead by enabling C++11 support (default on VS12 I think, to be enabled in clang / llvm).
The shared_ptr is a template class wrapping a pointer, thus the whole code is instantiated when you compile your program: the class implementation resides in a header file which is incorporated into your translation unit (each separate file being built).
As such, you don't need any specific library to use shared_ptr (neither a .dll nor a .so or something else on Mac). So your program will run on any machine for which it has been built; you don't require an additional library to run it.
You can also - for compatibility reason - use your own wrapper around the shared_ptr:
namespace my_code {
#if defined(_STD_TR1_SHARED_PTR)
using std::tr1::shared_ptr;
#elif defined(_STD_SHARED_PTR)
using std::shared_ptr;
#else
using boost::shared_ptr;
#endif
}
Thus you can access my_code::shared_ptr which will resolve to the appropriate type depending on the macros you define. Note that this only works if you use a compatible interface for all those types, but this should be the case.
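A minimal usage sketch, assuming the my_code wrapper above is in scope and one of the macros has been chosen at build time:
#include <string>

void example()
{
    my_code::shared_ptr<std::string> name(new std::string("plugin"));
    my_code::shared_ptr<std::string> alias = name; // shared ownership of the same string
} // the string is destroyed when the last copy goes out of scope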
Why don't you just test it? An easy first step would be to use a type alias to change the actual shared pointer definition under the hood (an alias template, since shared_ptr is itself a class template):
namespace myNs{
#ifdef _USE_STD_SRDPTR
template<typename T> using sharedPtr = std::shared_ptr<T>;
#else
template<typename T> using sharedPtr = boost::shared_ptr<T>; //default to boost if no symbol defined
#endif
}
//in code replace old shared pointer usage
myNs::sharedPtr<Fruit> ourFruit( new Banana(WONKY) );
This way you can replace it in both code libraries, and change the underlying implementation whenever you want. You can add support for more options (e.g. the tr1 version) as you need to without changing your code. Of course all options need to have the same interface or the code won't compile.

How to export global variables/arrays from a DLL built using the VS compiler to a client built using the MinGW compiler?

General info [optional]:
I have recently become acquainted with static & dynamic libraries.
Now I am trying to learn how to use DLLs; I try to imitate all possible variants of usage in order to encounter possible bottlenecks and methods to prevent them.
My goal:
Is to find out how to export a global variable from a DLL which was compiled with one compiler, e.g. the VS compiler (IDE Visual Studio 2010), to a client which was compiled using another compiler, MinGW (IDE Qt Creator 5.0). Actually I am interested in the specific case, not the general one, but if info for the general case is provided, that will be great.
It is also important that the linking of the DLL to the client is implicit (not explicit, where we load the library manually).
I have also posed this question because I am interested in how to support a client's application by providing an updated DLL: the compiler versions used for the client and the DLL can be the same at the beginning of a project, but as time passes they may diverge, so how do you solve this binary compatibility issue?
I got stuck trying to export an array defined in the DLL to the client.
DLL & Client
/* header file. Is used by both: dll and client */
#ifdef EXPORT
#define MYLIB __declspec(dllexport)
#else
#define MYLIB __declspec(dllimport)
#endif
extern "C" { // My be this directive not supported by MingW???
#ifdef VS2010
extern MYLIB char ImplicitDLLName[];
#else
Q_DECL_IMPORT extern char ImplicitDLLName[];
#endif
}
DLL
/* .cpp file in dll: */
#define EXPORT ""
#define VS2010 ""
char ImplicitDLLName[] = "MySUMoperator";
Client
/* Client .cpp */
void MainWindow::on_pushButtonAdd_clicked()
{
// ...
printf("%s",ImplicitDLLName);
}
Attempting to use the array in the client results in the following error raised by the linker:
error: undefined reference to `_imp__ImplicitDLLName'
I am aware of name mangling and the compatibility issues that may arise from it, but I am trying to resolve that by disabling it using
extern "C"{}
From the error returned by the client's linker I can tell that I have failed to disable it, because it reports that the reference to _imp__ImplicitDLLName wasn't found, so I guess that it is ImplicitDLLName decorated with additional symbols (name mangling).
I wonder whether this issue arose due to different implementations of arrays in different compilers, or the alignment of arrays in memory??
Question: how do I solve this binary compatibility issue??

Cuda with Boost

I am currently writing a CUDA application and want to use the boost::program_options library to get the required parameters and user input.
The trouble I am having is that NVCC cannot handle compiling the boost file any.hpp giving errors such as
1>C:\boost_1_47_0\boost/any.hpp(68): error C3857: 'boost::any': multiple template parameter lists are not allowed
I searched online and found it is because NVCC cannot handle the certain constructs used in the boost code but that NVCC should delegate compilation of host code to the C++ compiler. In my case I am using Visual Studio 2010 so host code should be passed to cl.
Since NVCC seemed to be getting confused I even wrote a simple wrapper around the boost stuff and stuck it in a separate .cpp (instead of .cu) file, but I am still getting build errors. Weirdly, the error is thrown when compiling my main.cu instead of wrapper.cpp, but it is still caused by boost even though main.cu doesn't include any boost code.
Does anybody know of a solution or even workaround for this problem?
Dan, I have written CUDA code using boost::program_options in the past, and looked back at it to see how I dealt with your problem. There are certainly some quirks in the nvcc compile chain. I believe you can generally deal with this if you've decomposed your classes appropriately, and realize that often NVCC can't handle C++ code/headers, but your C++ compiler can handle the CUDA-related headers just fine.
I essentially have main.cpp which includes my program_options header, and the parsing stuff dictating what to do with the options. The program_options header then includes the CUDA-related headers/class prototypes. The important part (as I think you've seen) is to just not have the CUDA code and accompanying headers include that options header. Pass your objects to an options function and have that fill in relevant info. Something like an ugly version of a Strategy Pattern. Concatenated:
main.cpp:
#include "myprogramoptionsparser.hpp"
(...)
CudaObject* MyCudaObj = new CudaObject;
GetCommandLineOptions(argc,argv,MyCudaObj);
myprogramoptionsparser.hpp:
#include <boost/program_options.hpp>
#include "CudaObject.hpp"
void GetCommandLineOptions(int argc,char **argv,CudaObject* obj){
(do stuff to cuda object) }
CudaObject.hpp:
(do not include myprogramoptionsparser.hpp)
CudaObject.cu:
#include "CudaObject.hpp"
It can be a bit annoying, but the nvcc compiler seems to be getting better at handling more C++ code. This has worked fine for me in VC2008/2010, and linux/g++.
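A slightly fleshed-out sketch of what CudaObject.hpp could look like under that layout (the member names are assumptions); the point is that it pulls in neither Boost nor CUDA-specific headers, so both nvcc and the host compiler can include it safely:
// CudaObject.hpp
#ifndef CUDAOBJECT_HPP
#define CUDAOBJECT_HPP

class CudaObject
{
public:
    CudaObject();
    void set_device(int id);    // filled in by GetCommandLineOptions
    void set_block_size(int n);
    void run();                 // implemented in CudaObject.cu
private:
    int device;
    int block_size;
};

#endif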
You have to split the code into two parts:
the kernel has to be compiled by nvcc
the program that invokes the kernel has to be compiled by g++.
Then link the two objects together and everything should be working.
nvcc is required only to compile the CUDA kernel code.
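A minimal sketch of that split (the file names, scale_kernel and the extern "C" launcher are illustrative assumptions, not part of the question):
// kernel.cu -- compiled by nvcc only; no Boost headers anywhere in here
__global__ void scale_kernel(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

// C-linkage wrapper so the host translation unit never sees kernel launch syntax
extern "C" void launch_scale(float* dev_data, float factor, int n)
{
    scale_kernel<<<(n + 255) / 256, 256>>>(dev_data, factor, n);
}

// main.cpp -- compiled by cl/g++; boost::program_options lives only here
extern "C" void launch_scale(float* dev_data, float factor, int n);
// ...parse the options with boost::program_options, allocate device memory,
// then call launch_scale(dev_ptr, factor, n) and link the two objects together.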
Thanks to @ronag's comment I realised I was still indirectly including boost/program_options.hpp in my header, since I had some member variables in my wrapper class definition which needed it.
To get around this I moved these variables outside the class definition and into the .cpp file. They are no longer member variables and are now globals inside wrapper.cpp.
This seems to work but it is ugly and I have the feeling nvcc should handle this gracefully; if anybody else has a proper solution please still post it :)
Another option is to wrap the cpp-only code in
#ifndef __CUDACC__
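For example, a sketch of that guard (__CUDACC__ is predefined by nvcc while it is compiling a file, so the Boost section below simply disappears from the .cu build):
// wrapper.hpp
#ifndef __CUDACC__
// host-only section: safe to pull in Boost here, nvcc never parses it
#include <boost/program_options.hpp>
namespace po = boost::program_options;
#endif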