Use multiple different implementations of a library in one program (unit tests)

While writing unit tests I found that I need to implement a "dummy" version of some library in one of the unit tests, while the same library is used in its standard version in the other unit tests. The obvious way to solve this is to compile the unit tests as two (or more) standalone programs (executables).
However, this solution does not satisfy my ambition ;-). I think there should be a way to compile one library against another library "statically" (in the sense of a "static function"). In other words: some of the libraries used are encapsulated in my library, while the rest of the program uses another implementation of those libraries.
Any ideas how to achieve that?
Our tests are built on Unity and compiled with gcc and CMake.
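One direction worth exploring (a sketch only, not a tested recipe; the file and symbol names my_lib.c, dummy_foo.c and foo_read are made up for illustration) is partial linking: merge the library under test with the dummy implementation into a single relocatable object using ld -r, then localize the dummy's symbols with objcopy so that the rest of the program still resolves those names against the real library.

gcc -c my_lib.c -o my_lib.o
gcc -c dummy_foo.c -o dummy_foo.o
ld -r my_lib.o dummy_foo.o -o my_lib_combined.o        # merge into one relocatable object
objcopy --localize-symbol=foo_read my_lib_combined.o   # only my_lib sees the dummy foo_read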

Related

What is the correct way to link 3rd party C++ libraries in Gradle?

I am currently trying to evaluate the use of Gradle's C++ support for a project that will have both Java and C++ components (with JNI to interface). I could just use CMake for the C++ portion, but then I would have two build systems, which is less cleanly organized. As such, I would prefer to use Gradle's C++ system in a multi-project build if it has the support that I need. The main thing I can't find any detailed information on (with code examples, etc.) is linking libraries. For CMake, it is simple: use find_package or the pkg-config module. Every library (that I have tried to use) offers at least one of those systems. Gradle, however, only seems to document linking to C++ libraries that are built in the same project. What if, for example, I want to link to Vulkan, SFML, OpenGL, yaml-cpp, Boost, or any number of other established and FOSS C++ libraries? The documentation also doesn't specify how to control dynamic or static linking.
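For reference, the CMake approach the question refers to looks roughly like this (the library and target names are only placeholders):

find_package(Boost REQUIRED)
add_executable(my_app main.cpp)
target_link_libraries(my_app PRIVATE Boost::boost)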

How to manage compilation of C++ header-only libraries across shared objects

I'm developing a large software package consisting of many packages which are compiled to shared objects. For performance reasons, I want to compile Eigen 3 (a header-only library) with vector instructions, but the templated methods are being compiled all over the place. How can I ensure that the Eigen functions are compiled into a specific object file?
This software consists of ~2000 individual packages. To keep development going at a reasonable pace, the recommended way of compiling the program is to sparsely check out some of the packages and compile them, after which the program can be executed using precompiled (by some CI system) shared libraries.
The problem is that part of my responsibility is to optimise the CPU time of the program. In order to do so, I wanted to compile the package I am working on (let's call it A.so) with the -march flag so Eigen can exploit modern SIMD processor extensions.
Unfortunately, because Eigen is a header-only library, the Eigen functions are compiled into many different shared objects. For example, one of the most CPU-intensive methods called in A.so is the matrix multiplication kernel, which is compiled into B.so. Many other Eigen functions are compiled into C.so, D.so, etc. Since these objects are compiled for older, more widely implemented instruction set extensions, they are not compiled with AVX, AVX2, etc.
Of course, one possible solution is to include packages B, C, D, etc. into my own sparse compilation but this negates the advantage of compiling only a part of the project. In addition, it leaves me including ever more and more packages if I really want to vectorise all linear algebra operations in the code of package A.
What I am looking for is a way to compile all the Eigen functions that package A uses into A.so, as if the Eigen functions were defined with the static keyword. Is this possible? Is there some mechanism in the compiler/linker that I can leverage to make this happen?
One obvious solution is to hide these symbols. The problem happens (if I understand it properly) because these functions are exported and can therefore be used by other, subsequently loaded libraries.
When you build your library and link it against the other libraries, the linker reuses whatever it can find, including the symbols exported by the old packages. I hope you don't require these libraries for your own build?
So two options:
Force the loading of A before the other libraries (but if you need the other libraries, I don't think this is doable),
Tell the linker that these functions should not be visible to other libraries (visibility=hidden by default).
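For the second option, a minimal sketch (the file names are just examples) is to build package A with hidden visibility in addition to the architecture flags:

g++ -O2 -march=native -fvisibility=hidden -fvisibility-inlines-hidden -c A.cpp -o A.o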
I saw something similar happening with a badly compiled 3rd-party library. It was built in debug mode, shipped in the product, and all of a sudden one of our libraries experienced a slow down. The map files identified where the culprit debug function came from, as it exported all its symbols by default.
An alternative way to change visibility without modifying the code is to filter symbols during the linking stage using version scripts (see https://sourceware.org/binutils/docs/ld/VERSION.html). You'll need something like:
{
  global: *;
  local:
    extern "C++"
    {
      Eigen::*;
      *Eigen::internal::*;
    };
};
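Assuming the script above is saved as, say, eigen_hidden.map (the file name is arbitrary), it is then passed to the linker when building the shared object, for example:

g++ -shared -o libA.so A.o -Wl,--version-script=eigen_hidden.map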

Let Autotools compile each header to ensure stand-alone property

I have a C++ codebase which uses templates a lot and therefore is header-only for the most part. The only time that something gets compiled is for the test cases. The build system used is GNU Autotools.
During work on the codebase I noticed that headers are rarely stand-alone. Headers only work if they are included in the right order and after the ones they (implicitly) depend on. When I add a header myself, I try to make it stand-alone and include the needed bits. Then I see that those bits were not standalone either.
Compiling the unit tests gives me errors, but the test is not as strong as I would like. I think I want to call g++ -c on each header file. If they are formed correctly, they should all compile, right?
How could I tell GNU Autotools in a maintainable way that it should compile each header into an object file in order to see whether dependencies are correctly specified via #include?
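One possible direction (a sketch only; the header list, variables, and flags are assumptions that would have to be adapted to the real Makefile.am) is a custom check target that runs the compiler in syntax-only mode over every header, hooked into make check via check-local:

HEADER_FILES = include/foo.hpp include/bar.hpp

check-headers:
	@for h in $(HEADER_FILES); do \
	  echo "checking $$h"; \
	  $(CXX) $(AM_CPPFLAGS) $(CPPFLAGS) $(CXXFLAGS) -fsyntax-only -x c++ $(srcdir)/$$h || exit 1; \
	done

check-local: check-headers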

Runtime dependency and build dependency concepts

I have been hearing about build dependencies and runtime dependencies. They are quite self-explanatory terms. As far as I understand, a build dependency is a component required at compile time: if A has a build dependency on B, A cannot be built without B. A runtime dependency, on the other hand, is dynamic: if A has a runtime dependency on B, A can be built without B but cannot run without B.
This understanding, however, is rather shallow. I'd like to read about and understand these concepts better. I have been googling but could not find a good source; can you please provide a link or the right keywords to search for?
I'll try to keep it simple and theoretical only.
When you write code that calls a function "func", the compiler needs the function's declaration (e.g. "int func(char c);", usually available in .h files) to verify argument correctness, and the linker needs the function's implementation (where your actual code resides).
Operating systems provide a mechanism to separate function implementations into different compiled modules. This is usually required for:
Better code reuse (multiple applications can use the same code, with different data context)
More efficient compilation (you don't need to recompile all dependency libraries)
Partial upgrades
Distribution of compiled libraries, without disclosing the source code
To support this functionality, the compiler is provided with function declarations (.h files) as usual, while the linker is provided with lib files containing function stubs. The operating system is responsible for loading the actual implementation file during the application loading procedure (if it is not yet loaded for a different application) and for mapping the actual functions into the memory of the new application.
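To make the distinction concrete, a small sketch (paths and names are purely illustrative): building app.c against a library libfoo needs foo.h at compile time and libfoo.so (or an import/stub library) at link time, which are build dependencies, while actually running the program needs libfoo.so to be found by the loader, which is a runtime dependency.

gcc -c app.c -I/opt/foo/include -o app.o    # build dependency: foo.h for the compiler
gcc app.o -L/opt/foo/lib -lfoo -o app       # build dependency: libfoo.so for the linker
./app                                       # runtime dependency: libfoo.so for the loader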
Dynamic loading is extended to object-oriented languages as well (C++, C#, Java, etc.).
Practical implementations are OS dependent: dynamic linking is implemented with DLL files on Windows and .so files on Linux.
Special OS-dependent techniques can be used to share context (variables, objects) between different applications that use the same dynamic library.
Meir Tseitlin

Build C++ unit tests for C89 code

I'm hoping this won't turn to a dead end, but here's what I would like to do. Please let me know if this is even remotely possible, or if you have another good (or better) approach.
I'm working on a legacy code base that's C89 standard (what year is it again?) and I would like to build unit tests for some parts of the software. The C unit test frameworks I've found do not seem nearly as useful and easy as the Catch framework in C++, or really most other C++ unit test frameworks, for obvious reasons. Thus, I would like to build these unit tests in C++.
There exists a homebrew build process that will build whatever application you're working on, all using the C89 standard. The system was built because there are a lot of shared dependencies among projects, and it was made before better alternatives existed. What I would like to do is take the artifacts built by that process and link them into a C++ unit test application. I could, alternatively, try to find all the dependencies and build them into the unit test, but that is 1. redundant to rebuild, 2. cumbersome, and 3. loses the C89-compiled nature of the artifacts, which I'd like to maintain to ensure the code I'm testing is exactly what runs for the end user (compiled as C89, not C++).
Using the same compiler (gcc), is this possible to accomplish, or am I stuck with C unit test frameworks? Some technical explanation either way would be very helpful, since I'm not too familiar with the differences among the different language and standard library artifacts (other than the library itself is different). Also, note that at this point changing the build process is (unfortunately) not feasible (read: in the budget).
Converting comments into an answer
In my view, it should be doable provided that you have suitable headers for all the parts of the system that will be exploited by the C++ unit test code. Those headers should be usable by the C++ compiler, and will need to be wrapped inside extern "C" { and a matching }. This might be done inside the unit test code or in the main (C) headers (based on #if defined(__cplusplus)). If you don't have headers that can be compiled by C++, then it is not going to be possible until you do have appropriate headers.
Can I link my C++ unit test executable directly with the C89 objects? No worries about potential conflicts using different versions of the standard library in the same application?
You will be linking with the C++ compiler. You might need to add -lc to the link line, but probably won't. The C library will be the same as you used anyway — there isn't a separate C89 library and C99 library and C11 library. So, although your code under test is written to C89 and your code testing it is in C++11, the C code will be using the current C library and test code will be using the C++ library. The C++ code will be using facilities in the C++ std namespace, so you shouldn't run into any troubles there. That said, I've not validated this, but it is the sort of thing that compilers get right.
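In practice the build boils down to something like the following sketch (file names are placeholders): the C89 objects come out of the existing build process, and g++ drives the final link so that the C++ runtime is pulled in automatically.

gcc -std=c89 -c legacy_module.c -o legacy_module.o    # produced by the existing C89 build
g++ -std=c++11 -c test_legacy.cpp -o test_legacy.o    # the C++ (e.g. Catch) test code
g++ test_legacy.o legacy_module.o -o test_legacy      # link with g++; add -lc only if needed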
There's a book Test Driven Development for Embedded C which recommends C++ unit test frameworks for testing (embedded) C code.
Code is easily included in a C++ project simply by linking in your C object files (.o, .a). C header files are included in the C++ project wrapped with extern "C", e.g.
extern "C" {
#include "my_c_header.h"
}
You might get weird compile- and run-time issues. Look out for some C legacy code gotchas like #defining bool and replacing allocators with custom memory management schemes. -_-'