Compile code based on your library support - c++

A portion of my C++ code depends on GPUs, so one of my colleagues who works on the same project can't compile it.
For example, in one file there is this line:
#include "opencv2/xfeatures2d/cuda.hpp"
Or in another file there are these lines of code:
cv::cuda::GpuMat imgGpu, descriptorsGpu, keypoints;
imgGpu.upload(img);
These lines can only be compiled with CUDA (and GPU) support.
How can we solve this? My only solution so far has been to introduce a macro in every source file containing such code, wrap the relevant sections in it, and change its value depending on whether the library support is available, but this is a kind of nightmare.
Any better solution?
PS: our project is makefile-based

A preferred approach is to isolate all GPU-dependent code into a separate library. It may be worth building a mock or dummy substitute library that exposes the same API but does not require CUDA. This separation of responsibilities may prove invaluable if one day you need to swap CUDA for Vulkan or some other framework.
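As a rough sketch (the wrapper function, file names and choice of detector below are invented for illustration), the split could look like this, with the Makefile linking exactly one of the two implementation files into the build:

// features.hpp -- the only header the rest of the project ever includes
#pragma once
#include <opencv2/core.hpp>

// Computes descriptors for img; implemented once for CUDA, once for plain CPU.
cv::Mat computeDescriptors(const cv::Mat& img);

// features_cuda.cpp -- compiled into libfeatures_cuda.a only on machines with CUDA
#include "features.hpp"
#include "opencv2/xfeatures2d/cuda.hpp"

cv::Mat computeDescriptors(const cv::Mat& img)   // img: 8-bit grayscale
{
    cv::cuda::GpuMat imgGpu, descriptorsGpu, keypointsGpu;
    imgGpu.upload(img);
    cv::cuda::SURF_CUDA surf;
    surf(imgGpu, cv::cuda::GpuMat(), keypointsGpu, descriptorsGpu);
    cv::Mat descriptors;
    descriptorsGpu.download(descriptors);
    return descriptors;
}

// features_cpu.cpp -- compiled into libfeatures_cpu.a everywhere else
#include "features.hpp"
#include <opencv2/features2d.hpp>
#include <vector>

cv::Mat computeDescriptors(const cv::Mat& img)
{
    std::vector<cv::KeyPoint> keypoints;
    cv::Mat descriptors;
    cv::ORB::create()->detectAndCompute(img, cv::noArray(), keypoints, descriptors);
    return descriptors;
}

The rest of the code base only ever sees features.hpp, so your colleague simply links against the CPU variant and never pulls in a CUDA header.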

Related

C++ Source-to-Source Transformation with Clang

I am working on a project for which I need to "combine" code distributed over multiple C++ files into one file. Due to the nature of the project, I only need one entry function (the function that will be defined as the top function in the Xilinx High-Level-Synthesis software -> see context below). The signature of this function needs to be preserved in the transformation. Whether other functions from other files simply get copied into the file and get called as a subroutine or are inlined does not matter. I think due to variable and function scopes simply concatenating the files will not work.
Since I did not write the C++ code myself and it is quite comprehensive, I am looking for a way to do the transformation automatically. The possibilities I am aware of to do this are the following:
Compile the code to LLVM IR with inlining flags and use a C++/C backend to turn the LLVM IR back into the source language. This will result in bad source code and require either an old release of Clang and LLVM or another backend like JuliaComputing's. [1]
The other option would be developing a tool that relies on the AST and a library like LibTooling to restructure the code. This option would probably result in better code and put everything into one file without the unnecessary inlining. [2] However, this option seems too complicated just to put all the code into one file.
Hence my question: Are you aware of a better or simply alternative approach to solve this problem?
Context: The project aims to run some of the code on a Xilinx FPGA and the Vitis High-Level-Synthesis tool requires all code that is to be made into a single IP block to be contained in a single file. That is why I need to realise this transformation.

Share Fortran library without revealing source code

I have a software developed in-house. It is written in Fortran and consists of 3 kinds of files: 1) the solver files, 2) the models' files and 3) a file where the models used are defined. The solver also uses some libraries namely lapack and HSL ma41. Usually, I select the needed models for the user, compile all together and provide an executable.
I want to allow users to add their own models or modify the existing ones without being able to change/modify/see the solver source code.
One thought was to compile the solver into an object file. Then the user would compile the definition file and his models and link them together with the libraries. Is that possible? I guess the user must then have the same platform as the one the solver was compiled on (i.e. Intel compiler on Windows 64-bit)? So I'll need to build a library for every possible combination of OS/hardware/compiler?
Another idea is to send the solver source as well but use obfuscation. I can't find any tested/reliable solutions for that online. Is it a good option?
Thanks in advance.
You can distribute the object code in a library, as you propose. If the entry points for your code are in Fortran modules, then you also need to distribute the mod files (or equivalent for your compiler) that also result from compilation of the modules.
(If any of the entry points for your library code are external procedures then it is a convenience for your users if you provide interface blocks for those external procedures. These interface blocks can be in source form (the interface block contains no information beyond what your library's documentation would have to provide), or again could be pre-compiled into a module.)
Object code may be platform (architecture) specific, compiler specific, compiler version specific and in some cases compile-options specific. Careful design and specification of the interface between your solver and the clients' models can mitigate some of the potential variation. For example, many platforms have a well defined (perhaps through explicit specification or near-ubiquitous convention) C application binary interface, so interfaces described using the C equivalent are typically robust, at the cost of a significant loss of capability compared to a common-processor Fortran-to-Fortran interface.
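For illustration only (names invented), the C-equivalent boundary mentioned above can be as small as one header shipped next to the compiled library; on the Fortran side it would be implemented by a procedure declared with bind(c):

/* solver_api.h -- hypothetical C-callable entry point for the solver library */
#ifdef __cplusplus
extern "C" {
#endif

/* Runs the solver on n unknowns starting from x0 and writes the solution to result.
   Returns 0 on success, a nonzero error code otherwise. */
int solver_run(const double* x0, int n, double* result);

#ifdef __cplusplus
}
#endif

Because only this C-level interface crosses the boundary, the user's compiler does not have to match yours exactly.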

Is it possible to inject code into translation unit immediately before compilation

I build my C++ code base with MSVC++ 2008 and 2010. Is it even possible to get translation unit, analyze it, insert some code if necessary and then pass on to the compilation process? Original source code should not be affected.
Sure, it should be transparent for a developer who builds a project. Finally, it will only affect object files. Visual studio is very powerful. I guess, there should be some kind of plugin API or hooks to do that. Please, give me a hint.
I don't believe this is possible as you describe it, though I don't know for sure. It would certainly be non-trivial. The only similar project that springs to mind is OpenMP, but I got the impression that Microsoft was the one who implemented their version of it.
I could see a template engine such as Cheetah sufficing, though you would likely give up bells and whistles like code completion and IntelliSense.
Basically, you would set up the files to use a custom compiler to generate the new code in another file. The C++ compiler would then compile the generated files. I don't think it would be elegant or pleasant to use, to be frank.
I've used CMake to do similar things, though I did not target it as a general tool. I wrote a one off for some content generation.
Maybe if you actually describe some of the specifics of what you want to do we can offer a better solution.

Alternatives to preprocessor directives

I am engaged in developing a C++ mobile phone application on the Symbian platform. One of the requirements is that it has to work on all Symbian phones, right from 2nd edition phones to 5th edition phones. Now, across editions there are differences in the Symbian SDKs. I have to use preprocessor directives to conditionally compile code relevant to the SDK for which the application is being built, like below:
#ifdef S60_2nd_ED
Code
#elif defined(S60_3rd_ED)
Code
#else
Code
#endif
Now, since the application I am developing is not trivial, it will soon grow to tens of thousands of lines of code, and preprocessor directives like the above would be spread all over. I want to know whether there is any alternative to this, or maybe a better way to use these preprocessor directives in this case.
Please help.
Well ... That depends on the exact nature of the differences. If it's possible to abstract them out and isolate them into particular classes, then you can go that route. This would mean having version-specific implementations of some classes, and switch entire implementations rather than just a few lines here and there.
You'd have
MyClass.h
MyClass_S60_2nd.cpp
MyClass_S60_3rd.cpp
and so on. You can select which CPP file to compile either by wrapping the entire file contents in #ifdefs as above, or by controlling at the build level (through Makefiles or whatever) which files are included when you're building for various targets.
Depending on the nature of the changes, this might be far cleaner.
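A hypothetical sketch of that layout (class and method names invented):

// MyClass.h -- one declaration shared by every edition
class Backlight {
public:
    static void Flash();   // behaviour differs per SDK edition
};

// MyClass_S60_2nd.cpp -- listed only in the 2nd edition build
#include "MyClass.h"
void Backlight::Flash() { /* 2nd edition API calls */ }

// MyClass_S60_3rd.cpp -- listed only in the 3rd edition build
#include "MyClass.h"
void Backlight::Flash() { /* 3rd edition API calls */ }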
I've been exactly where you are.
One trick is, even if you're going to have conditions in code, don't switch on Symbian versions. It makes it difficult to add support for new versions in future, or to customise for handsets which are unusual in some way. Instead, identify what the actual properties are that you're relying on, write the code around those, and then include a header file which does:
#if S60_3rd_ED
#define CAF_AGENT 1
#define HTTP_FILE_UPLOAD 1
#elif S60_2nd_ED
#define CAF_AGENT 0
#if S60_2nd_ED_FP2
#define HTTP_FILE_UPLOAD 1
#else
#define HTTP_FILE_UPLOAD 0
#endif
#endif
and so on. Obviously you can group the defines by feature rather than by version if you prefer, have completely different headers per configuration, or whatever scheme suits you.
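Calling code then tests the capability rather than the SDK version; for instance (the function names here are made up):

#if HTTP_FILE_UPLOAD
    UploadFile(filename);      // handset can upload over HTTP
#else
    QueueForLater(filename);   // fall back on handsets without the capability
#endif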
We had defines for the UI classes you inherit from, too, so that there was some UI code in common between S60 and UIQ. In fact because of what the product was, we didn't have much UI-related code, so a decent proportion of it was common.
As others say, though, it's even better to herd the variable behaviour into classes and functions where possible, and link different versions.
[Edit in response to comment:
We tried quite hard to avoid doing anything dependent on resolution - fortunately the particular app didn't make this too difficult, so our limited UI was pretty generic. The main thing where we switched on screen resolution was for splash/background images and the like. We had a script to preprocess the build files, which substituted the width and height into a file name, splash_240x320.bmp or whatever. We actually hand-generated the images, since there weren't all that many different sizes and the images didn't change often. The same script generated a .h file containing #defines of most of the values used in the build file generation.
This is for per-device builds: we also had more generic SIS files which just resized images on the fly, but we often had requirements on installed size (ROM was sometimes quite limited, which matters if your app is part of the base device image), and resizing images was one way to keep it down a bit. To support screen rotation on N92, Z8, etc, we still needed portrait and landscape versions of some images, since flipping aspect ratio doesn't give as good results as resizing to the same or similar ratio...]
In our company we write a lot of cross-platform code (game development for win32/ps3/xbox/etc).
To avoid platform-related macros as much as possible, we generally use the following few tricks:
extract platform-related code into platform-abstraction libraries that have the same interface across different platforms, but not the same implementation;
split code into different .cpp files for different platforms (ex: "pipe.h", "pipe_common.cpp", "pipe_linux.cpp", "pipe_win32.cpp", ...);
use macros and helper functions to unify platform-specific function calls (ex: "#define usleep(X) Sleep((X)/1000u)"; see the sketch below);
use cross-platform third-party libraries.
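The macro trick above might look like this in a small compatibility header (a sketch built around the usleep example; nothing more is assumed):

// sleep_compat.h -- unify the sleep call across platforms
#pragma once
#if defined(_WIN32)
    #include <windows.h>
    // Win32 Sleep() takes milliseconds, usleep() takes microseconds.
    #define usleep(X) Sleep((X) / 1000u)
#else
    #include <unistd.h>   // POSIX already provides usleep()
#endif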
You can try to define a common interface for all the platforms, if possible. Then, implement the interface for each platform.
Select the right implementation using preprocessor directives.
This way, you will have the platform selection directive in fewer places in your code (ideally, in one place, explicitly in the header file declaring the interface).
This means something like:
commoninterface.h /* declaring the common interface API. Platform identification preprocessor directives might be needed for things like common type definitions */
platform1.c /*specific implementation*/
platform2.c /*specific implementation*/
Look at SQLite. They have the same problem. They move the platform-dependent stuff to separate files and effectively compile only what's needed by using preprocessor directives that exclude an entire file's contents. It's a widely used approach.
No idea about an alternative, but what you can do is use different include files for different versions of the OS. Example:
#ifdef S60_2nd_ED
#include "graphics2"
#elif defined(S60_3rd_ED)
#include "graphics3"
#else
#include "graphics"
#endif
You could do something like they do for the assembly definitions in the Linux kernel. Each architecture has its own directory (asm-x86, for instance). All these folders contain the same set of high-level header files presenting the same interface. When the kernel is configured, a link named asm is created targeting the appropriate asm-arch directory. This way, all the C files include the headers through the generic asm/ path, regardless of the target architecture.
There are several differences between S60 2nd ed and 3rd ed applications that are not limited to code: application resource files differ, graphic formats and tools to pack them are different, mmp-files differ in many ways.
Based on my experience, don't try to automate it too much; have separate build scripts for 2nd ed and 3rd ed. At the code level, separate the differences into their own classes that share a common abstract API, and use flags only in rare cases.
You should try and avoid spreading #ifs through the code.
Rather, use the #if in the header files to define alternative macros, and then use the single macro in the code.
This method allows you to keep the code slightly more readable.
Example:
Plop.h
======
#if V1
#define MAKE_CALL(X,Y) makeCallV1(X,Y)
#elif V2
#define MAKE_CALL(X,Y) makeCallV2("Plop",X,222,Y)
....
#endif
Plop.cpp
========
if (pushPlop)
{
    MAKE_CALL(911, "Help");
}
To help facilitate this, split version-specific code into its own functions, then use macros to activate those functions as shown above. You can also wrap the changing parts of the SDK in your own class to provide a consistent interface; then all the differences are managed within the wrapper class, leaving the code that does the real work tidier.

How do YOU reduce compile time, and linking time for Visual C++ projects (native C++)?

How do YOU reduce compile time, and linking time for VC++ projects (native C++)?
Please specify if each suggestion applies to debug, release, or both.
It may sound obvious to you, but we try to use forward declarations as much as possible, even if it requires writing out the long namespace names the types live in:
// Forward declaration stuff
namespace plotter { namespace logic { class Plotter; } }

// Real stuff
namespace plotter {
    namespace samples {
        class Window {
            logic::Plotter* mPlotter;
            // ...
        };
    }
}
It greatly reduces compile time on other compilers as well. Indeed, it applies to all configurations :)
Use the Handle/Body pattern (also sometimes known as "pimpl", "adapter", "decorator", "bridge" or "wrapper"). By isolating the implementation of your classes into your .cpp files, they need only be compiled once. Most changes do not require changes to the header file so it means you can make fairly extensive changes while only requiring one file to be recompiled. This also encourages refactoring and writing of comments and unit tests since compile time is decreased. Additionally, you automatically separate the concerns of interface and implementation so the interface of your code is simplified.
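A minimal sketch of the pattern (names invented; on pre-C++11 compilers the unique_ptr would be a raw pointer deleted in the destructor):

// Widget.h -- clients see only this, so changes to Impl never force them to recompile
#include <memory>

class Widget {
public:
    Widget();
    ~Widget();            // defined in the .cpp, where Impl is complete
    void draw();
private:
    struct Impl;          // defined only in Widget.cpp
    std::unique_ptr<Impl> mImpl;
};

// Widget.cpp
#include "Widget.h"
struct Widget::Impl { /* heavy headers and data members live here */ };
Widget::Widget() : mImpl(new Impl) {}
Widget::~Widget() = default;
void Widget::draw() { /* work with *mImpl */ }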
If you have large complex headers that must be included by most of the .cpp files in your build process, and which are not changed very often, you can precompile them. In a Visual C++ project with a typical configuration, this is simply a matter of including them in stdafx.h. This feature has its detractors, but libraries that make full use of templates tend to have a lot of stuff in headers, and precompiled headers are the simplest way to speed up builds in that case.
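The contents of stdafx.h are then just the big, rarely-changing headers; something like this (purely an example):

// stdafx.h -- compiled once, then reused by every .cpp that includes it first
#pragma once
#include <vector>
#include <string>
#include <map>
#include <algorithm>
#include <windows.h>    // large platform header that almost never changes
// ... plus the template-heavy third-party headers used everywhere ...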
These solutions apply to both debug and release, and are focused on a codebase that is already large and cumbersome.
Forward declarations are a common solution.
Distributed building, such as with Incredibuild is a win.
Pushing code from headers down into source files can work. Small classes, constants, enums and so on might start off in a header file simply because they could have been used in multiple compilation units, but in reality they are only used in one, and could be moved to the cpp file.
A solution I haven't read about but have used is to split large headers. If you have a handful of very large headers, take a look at them. They may contain related information, and may also depend on a lot of other headers. Take the elements that have no dependencies on other files...simple structs, constants, enums and forward declarations and move them from the_world.h to the_world_defs.h. You may now find that a lot of your source files can now include only the_world_defs.h and avoid including all that overhead.
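The split might end up looking like this (file and type names invented):

// the_world_defs.h -- no dependencies, safe for almost every .cpp to include
#pragma once
enum TerrainType { Grass, Water, Rock };
struct WorldId { unsigned value; };
class World;                     // forward declaration is enough for many users

// the_world.h -- the heavyweight header, now needed by far fewer files
#pragma once
#include "the_world_defs.h"
#include <vector>
#include <map>
class World { /* full definition with its heavy dependencies */ };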
Visual Studio also has a "Show Includes" option that can give you a sense of which source files include many headers and which header files are most frequently included.
For very common includes, consider putting them in a pre-compiled header.
I use Unity Builds (Screencast located here).
The compile speed question is interesting enough that Stroustrup has it in his FAQ.
We use Xoreax's Incredibuild to run compilation in parallel across multiple machines.
Also an interesting article from Ned Batchelder: http://nedbatchelder.com/blog/200401/speeding_c_links.html (about C++ on Windows).
Our development machines are all quad-core and we use Visual Studio 2008, which supports parallel compiling. I am uncertain as to whether all editions of VS can do this.
We have a solution file with approximately 168 individual projects, and compiling this way takes about 25 minutes on our quad-core machines, compared to about 90 minutes on the single-core laptops we give to summer students. Not exactly comparable machines, but you get the idea :)
With Visual C++, there is a method, some refer to as Unity, that improves link time significantly by reducing the number of object modules.
This involves concatenating the C++ code, usually in groups by library. This of course makes editing the code much more difficult, and you will run into namespace collisions unless you use namespaces carefully; it keeps you from writing "using namespace foo;".
Several teams at our company have elaborate systems to take the normal C++ files and concatenate them at compile time as a build step. The reduction in link times can be enormous.
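Concretely, such a "unity" translation unit is usually nothing more than a generated list of includes (file names invented):

// graphics_unity.cpp -- the only file from this library handed to the compiler
#include "sprite.cpp"
#include "texture.cpp"
#include "renderer.cpp"
// Caveat: file-scope statics, unnamed namespaces and any "using namespace"
// directives in one .cpp now spill into all the others.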
Another useful technique is blobbing. I think it is something similar to what was described by Matt Shaw.
Simply put, you just create one cpp file in which you include other cpp files. You may have two different project configurations, one ordinary and one blob. Of course, blobbing puts some constraints on your code, e.g. class names in unnamed namespaces may clash.
One technique to avoid recompiling the whole blob when you change one cpp file (as David Rodríguez mentioned) is to have a "working" blob, created from the recently modified files, alongside the other, ordinary blobs.
We use blobbing at work most of the time, and it reduces project build time, especially link time.
Compile Time:
If you have IncrediBuild, compile time won't be a problem. If you don't have IncrediBuild, try the "unity build" method. It combines multiple cpp files into a single cpp file, so the total compile time is reduced.
Link Time:
The "unity build" method also contribute to reduce the link time but not much. How ever, you can check if the "Whole global optimization" and "LTCG" are enabled, while these flags make the program fast, they DO make the link SLOW.
Try turning off the "Whole Global Optimization" and set LTCG to "Default" the link time might be reduced by 5/6. (LTCG stands for Link Time Code Generation)