I am currently playing with C++20 modules in order to modernize our company code base.
Suppose you want to write a C++ module that exports the contents of a C library that uses #define to declare numerical or string constants. For example, imagine you want to create a Windows module with all the contents of <Windows.h>.
So assume we have an existing third-party library header SuperLib.h that contains this:
#pragma once
struct super_lib_type {int field;};
extern void super_lib_function(int a);
#define SUPER_LIB_CONSTANT 42
Now we write a C++ module like this:
export module SuperLib;
export
{
#include <SuperLib.h>
}
The super_lib_function function and the super_lib_type type will be successfully exported by the module. But obviously not the SUPER_LIB_CONSTANT constant, since macros are by design never exported from C++20 modules.
To fix this, it would be great if we could have the equivalent of this code:
export module SuperLib;
export
{
#include <SuperLib.h>
}
#undef SUPER_LIB_CONSTANT
export constexpr auto SUPER_LIB_CONSTANT = 42;
But without having to hard-code the value 42 in the module file.
I tried to write a macro to do that. This one kind of works:
#define EXPORT(name) export constexpr auto SL_##name = name;
EXPORT(SUPER_LIB_CONSTANT)
It will export a constant SL_SUPER_LIB_CONSTANT. But this approach has the drawback of requiring a prefix, or a suffix, in the symbol name. For compatibility with existing code, it would be great if the exported name were the same as the original macro.
I thought I could solve the problem by splitting the original macro name into two pieces. Something like this:
#define CONCAT(prefix,suffix) prefix##suffix
#define EXPORT(prefix,suffix) export constexpr auto CONCAT(prefix,suffix) = prefix##suffix;
EXPORT(SUPER_LIB_,CONSTANT)
But this generates the invalid code export constexpr auto 42 = 42; after preprocessing: once ## has pasted the tokens together, the resulting SUPER_LIB_CONSTANT token is rescanned by the preprocessor and expands to 42 on both sides of the =.
Still, I have the impression that it is possible to write a two arguments macro to fulfill the need. And maybe even a single argument macro.
Clang and GCC (and maybe MSVC?) are using a two-step compile for their modules implementation at the moment:
Generate the BMI/CMI (IPR for MSVC, if it still does this?) to be consumed by someone else's import.
Generate the object file to be fed to the linker.
It seems that there are some possible uses for modules that produce a BMI/CMI but that don't produce an object file, for example modules that only export types or constexpr variables used for conditional compilation.
As far as I can understand from the standard, there's nothing saying I have to produce/link the object files. So I'm wondering if I've missed something obvious about using modules like this, and whether we expect tooling to support this "build as a module, don't build as an object" workflow?
I expect modules to be able to provide definitions for things that normally wouldn't get a single definition with headers.
Imagine this module:
export module hello;
export inline auto say_hello() -> char const* {
return "hello world";
}
As you can see, the function is inline, and it is in the interface. With headers, there is no single place to put the implementation: for inline functions to be possible, the language allows multiple definitions to be found, so each TU emits its own definition in its object file.
That is repeated work that can be avoided using modules. A module interface is a TU just like any other cpp file. When you export an inline function, the implementation is available to other TUs, yes, but it won't be necessary for every TU to provide the implementation, since it can be put in one place: the TU that contains the inline function.
I expect the same thing with constexpr variables. They also need a definition, since you may take a reference or the address of one. Take this for example:
export module foo;
import <tuple>;
export constexpr auto tup = std::tuple{1, 'a', 5.6f};
// in some consumer file:
import foo;
int a = std::get<0>(tup);
The std::get function takes a reference to the tuple. Even though it's a constexpr variable, some contexts (especially without optimizations) may require the variable to be odr-used.
So in my example, even though the module foo only exports a constexpr variable, I expect the cpp file to compile into an object file containing the definition.
It may also happen that there is nothing inside the object file. In that case I expect it to behave like an empty TU does today:
// I'm empty
You can add such a cpp file to a project without any problem and link it into your executable. I expect the tools to behave the same with modules.
I am using GCC C++11 in the Code::Blocks IDE. I have been trying to reuse classes that I wrote by putting them into a header file, but it doesn't work. I have looked into many books, but none has detailed information on making C++ code reusable.
There are a couple of concepts that C++ uses that I think you're missing:
The difference between a declaration and a definition
#include statements
Linking against other libraries
In general, one reuses code in C++ by declaring a function in a header file (.h) and defining it in a source file (.cpp). The header file ("foo.h") is then included, using the preprocessor directive #include "foo.h", both in the source file (foo.cpp) and in any other file that wants to use something declared in it. Finally, if the source file in which those functions are defined is not part of the source for the library or executable that you're compiling, you will need to link against the library in which it was built.
Here's a simple example that assumes that you don't need to link against an external library and that all files are in the same directory:
foo.h:
The class Foo is declared along with a member function fooBar and a private member variable barM. In this file we're telling the world that there is a class named Foo, its constructor and destructor are public, it has a member function named fooBar that returns an integer, and it also has a private member variable named barM.
class Foo
{
public:
Foo();
~Foo();
int fooBar();
private:
int barM;
};
foo.cpp
The individual member functions of our class Foo are defined; here we implement the things we declared in the header file. Notice the include statement at the top.
#include "foo.h"
Foo::Foo()
{
barM = 10;
}
Foo::~Foo()
{
}
int Foo::fooBar()
{
return barM;
}
main.cpp
We use our class in a different file, again the header file is included at the top.
#include <iostream>
#include "foo.h"
int main(int argc, char *argv[])
{
Foo flub;
std::cout << "flub's fooBar is: " << flub.fooBar() << std::endl;
return 0;
}
The expected output from this would be:
flub's fooBar is: 10
As a general note, I haven't compiled this code, but it should be enough to give you a basic example of the ideas of declarations, definitions, and include statements.
Seeing as you're coming from Java, I'm actually betting that you got all of that already; the hard part is using code from a different C++ library, which is akin to Java packages. Setting this up requires exporting the symbols you'd like to use from the other library. The way to do this is compiler-specific, but it is generally accomplished by defining a macro in a separate header file and then using that macro in the declaration of the item you'd like to export. For GCC, see the GNU Reference Manual.
To extend the above example, you create another header file, fooLibExport.h:
#if BUILDING_LIBFOO && HAVE_VISIBILITY
#define LIBFOO_DLL_EXPORTED __attribute__((__visibility__("default")))
#elif BUILDING_LIBFOO && defined _MSC_VER
#define LIBFOO_DLL_EXPORTED __declspec(dllexport)
#elif defined _MSC_VER
#define LIBFOO_DLL_EXPORTED __declspec(dllimport)
#else
#define LIBFOO_DLL_EXPORTED
#endif
foo.h would then be changed to:
#include "fooLibExport.h"
class LIBFOO_DLL_EXPORTED Foo
{
public:
Foo();
~Foo();
int fooBar();
private:
int barM;
};
Finally, you'll need to link against the library that Foo was built into. Again, this is compiler-specific. At this point we're through setting up header files for exporting symbols from a C++ library so that functions defined in one library can be used in another. I'm going to assume that you can follow the reference material for setting up the GCC compiler for the rest of the process. I've tried to bold the key words that should help refine your searches.
One final note about #include statements: the actual argument isn't just the filename, it's the path, relative or absolute, to the file in question. So if the header file isn't in the same directory as the file you're trying to include it from, you'll need to use the appropriate path to the file.
Code reusability casts its net wide in C++ terminology; please be specific about what you mean by it. The C and C++ language features usually considered relevant to code reuse are:
functions, defined types, macros, composition, generics, overloaded functions and operators, and polymorphism.
EDITED IN RESPONSE TO COMMENT:
Then you have to use header files to hold all the declarations, which you can then use in any file just by including the header.
I am writing a program and I would really prefer to write in C++, however, I'm required to include a C header that redefines bool:
# define false 0
# define true 1
typedef int bool;
The obvious solution would be to edit the header to say:
#ifndef __cplusplus
# define false 0
# define true 1
typedef int bool;
#endif
but, alas, since the library is read-only I cannot.
Is there a way I can tell gcc to ignore this typedef? Or, can I write most functions in C++ and then make a C wrapper for the two? Or, should I suck it up and write the thing in C?
You can hack it!
The library, call it fooLib, thinks it's using some type bool which it has the prerogative to define. To the library, bool is just an identifier.
So, you can just force it to use another identifier instead:
#define bool fooLib_bool
#include "fooLib.h"
#undef bool
#undef true
#undef false
Now the compiler sees the offending line transformed to this:
typedef int fooLib_bool;
You're stuck with the interface using the type fooLib_bool = int instead of a real bool, but that's impossible to work around: the code might in fact rely on the properties of int, and the library binary will have been compiled with that assumption baked in.
I suppose you can wrap the offending code in a header of your own and then undef what you don't need.
Library_wrapper.h:
#define bool something_else // This will get you past the C++ compilation
#include "library.h"
#undef false
#undef true
#undef bool
main.cpp:
#include "Library_wrapper.h"
#include "boost.h"
Regarding the typedef: the compiler should complain if you try to redefine a basic type in C++. You can redeclare a type, by the way (that is allowed in C++), or #define it (simple text replacement).
Unfortunately, no, you cannot use this file in Standard C++:
§7.1.3 [dcl.typedef]
6/ In a given scope, a typedef specifier shall not be used to redefine the name of any type declared in that scope to refer to a different type.
Thus typedef ... bool; is forbidden.
§17.6.4.3.1 [macro.names]
2/ A translation unit shall not #define or #undef names lexically identical to keywords, to the identifiers listed in Table 3, or to the attribute-tokens described in 7.6.
And in §2.12 [lex.key] we find that bool is a keyword.
Thus trying to trick the compiler by using #define bool ... prior to including the offending file is forbidden.
So, what is the alternative? A shim!
Isolate the offending library behind a C- and C++-compatible header of your own, and compile that part as C. Then you can include your own header in the C++ program without issues or tricks.
Note: yes, most compilers will probably accept #define bool ..., but it is still explicitly forbidden by the Standard.
You may copy the bad header and use an edited copy. Tell the compiler which include path it should prefer and...
You could compile the code that uses the header as C, then just link it together with your C++ object files. You probably use MSVC or GCC; both can compile code as either C++ or C and will let you create compatible object files.
Whether that's a clean solution or unnecessary overkill really depends on the exact situation.
I am currently looking through code written by a senior engineer. The code works fine, but I am trying to figure out one detail.
He uses quite a few global variables, and his code is broken down into a lot of separate files. So he uses a technique to make sure that the global vars are declared everywhere he needs to access them but are defined only once.
The technique is new to me, but I read a few articles on the internet and got some understanding of how it works. He uses
#undef EXTERN
followed by a conditional definition of EXTERN as either an empty string or the actual extern keyword. There is a very good article here explaining how it works. There is also a discussion here.
What gets me confused is that all the examples I saw on the web suggest including the header file in the regular way in all of the source files that need it except one. In that single special case, the line that includes the header is preceded by the definition of a symbol that ensures EXTERN will be defined to an empty string, and so on (see link above). Typically this single special case is in main or in a separate source file dedicated to the definition of global variables.
However, in the code that I am looking at, this special case is always in the source file that corresponds to the header. Here is a minimal example:
"peripheral1.h" :
#undef EXTERN
#ifndef PERIPHERAL_1_CPP
#define EXTERN extern
#else
#define EXTERN
#endif
EXTERN void function1(void);
"peripheral1.cpp" :
#define PERIPHERAL_1_CPP
#include "peripheral1.h"
void function1(void)
{
//function code code here
}
Everywhere else in the code he just does
#include "peripheral1.h"
My question is how and why does that work? In other words, how does the compiler know where to define and where to just declare a function (or variable, or class...)? And why is it OK in the above example to have the lines:
#define PERIPHERAL_1_CPP
#include "peripheral1.h"
in the actual peripheral1.cpp rather than in main.cpp or elsewhere?
Or am I missing something obvious here?
All the source files except peripheral1.cpp contain, after preprocessing, a sequence of external variable declarations like:
extern int a;
extern int b;
extern int c;
extern int d;
In peripheral1.cpp only, after preprocessing, there will be a sequence of declarations:
int a;
int b;
int c;
int d;
which are tentative definitions of the corresponding variables and which, under normal circumstances, are equivalent to the external definitions (strictly speaking, tentative definitions exist only in C; in C++, int a; is already a definition):
int a = 0;
int b = 0;
int c = 0;
int d = 0;
The end result is that the variables are declared everywhere but defined only once.
PS. To be perfectly clear ...
In other words, how does compiler know where to define and where to
just declare function (or variable, or class ...)?
The compiler declares whenever it encounters a grammatical construct that the standard defines to have the semantics of a declaration.
The compiler defines whenever it encounters a grammatical construct that the standard defines to have the semantics of a definition.
In other words, the compiler does not know: you tell it explicitly what you want it to do.
Nostalgia
Ahh, this takes me back a fair way (about 20 years or so).
This is a way for C code to define global variables across multiple files: you define each variable exactly once, using a macro to ensure it is defined only once, and then extern it in the other C files so you can use it. Nowadays it is superfluous in many instances; however, it still has its place in legacy code and will (most likely) still work in many modern compilers. But note that it is a C idiom, not C++.
Normally something like #define PERIPHERAL_1_CPP is used to ensure uniqueness of inclusion, like a #pragma once does.
In my own code I would use something like:
#ifndef PERIPHERAL_1_CPP
#define PERIPHERAL_1_CPP
// my includes here
// my code here
#endif
That way you can #include the file as many times as you want all over your code, in every code file even, and you will avoid multiple-definition errors. To be fair, I normally do it with the .h files and have something like:
// for absolutely insane safety/paranoia
#pragma once
// normally sufficient
#ifndef PERIPHERAL_1_H
#define PERIPHERAL_1_H
// my includes here
// my code here
#endif
I have never tried it on cpp files but will later tonight to see if there is any benefit one way or the other :)
Give me a shout if you need any more info :)
I am trying to write something in C++ with an architecture like:
App --> Core (.so) <-- Plugins (.so's)
for Linux, Mac and Windows. The Core is implicitly linked into the App, and the Plugins are explicitly loaded into the App with dlopen/LoadLibrary. The problems I have:
static variables in Core are duplicated at run time: Plugins and App have different copies of them.
at least on Mac, when a Plugin returns a pointer to the App, dynamic casting that pointer in the App always results in NULL.
Can anyone give me some explanations and instructions for the different platforms, please? I know this may seem lazy to ask all of it here, but I really cannot find a systematic answer to this question.
What I did in the entry_point.cpp for a plugin:
#include "raw_space.hpp"
#include <gamustard/gamustard.hpp>
using namespace Gamustard;
using namespace std;
namespace
{
struct GAMUSTARD_PUBLIC_API RawSpacePlugin : public Plugin
{
RawSpacePlugin(void):identifier_("com.gamustard.engine.space.RawSpacePlugin")
{
}
virtual string const& getIdentifier(void) const
{
return identifier_;
}
virtual SmartPtr<Object> createObject(std::string const& name) const
{
if(name == "RawSpace")
{
Object* obj = NEW_EX RawSpaceImp::RawSpace;
Space* space = dynamic_cast<Space*>(obj);
Log::instance().log(Log::LOG_DEBUG, "createObject: %x -> %x.", obj, space);
return SmartPtr<Object>(obj);
}
return SmartPtr<Object>();
}
private:
string identifier_;
};
SmartPtr<Plugin> __plugin__;
}
extern "C"
{
int GAMUSTARD_PUBLIC_API gamustardDLLStart(void) throw()
{
Log::instance().log(Log::LOG_DEBUG, "gamustardDLLStart");
__plugin__.reset(NEW_EX RawSpacePlugin);
PluginManager::instance().install(weaken(__plugin__));
return 0;
}
int GAMUSTARD_PUBLIC_API gamustardDLLStop(void) throw()
{
PluginManager::instance().uninstall(weaken(__plugin__));
__plugin__.reset();
Log::instance().log(Log::LOG_DEBUG, "gamustardDLLStop");
return 0;
}
}
Some Background
Shared libraries in C++ are quite difficult because the standard says nothing about them. This means that every platform does them differently. If we restrict ourselves to Windows and ELF-based *nix variants, the differences are subtle. The first difference is shared object visibility. It is highly recommended that you read that article so you get a good overview of what visibility attributes are and what they do for you; it will help save you from linker errors.
Anyway, you'll end up with something that looks like this (for compiling with many systems):
#if defined(_MSC_VER)
# define DLL_EXPORT __declspec(dllexport)
# define DLL_IMPORT __declspec(dllimport)
#elif defined(__GNUC__)
# define DLL_EXPORT __attribute__((visibility("default")))
# define DLL_IMPORT
# if __GNUC__ >= 4
# define DLL_LOCAL __attribute__((visibility("hidden")))
# else
# define DLL_LOCAL
# endif
#else
# error("Don't know how to export shared object libraries")
#endif
Next, you'll want to make some shared header (standard.h?) and put a nice little #ifdef thing in it:
#ifdef MY_LIBRARY_COMPILE
# define MY_LIBRARY_PUBLIC DLL_EXPORT
#else
# define MY_LIBRARY_PUBLIC DLL_IMPORT
#endif
This lets you mark classes, functions and whatever like this:
class MY_LIBRARY_PUBLIC MyClass
{
// ...
};
MY_LIBRARY_PUBLIC int32_t MyFunction();
This will tell the build system where to look for the functions when it calls them.
Now: To the actual point!
If you're sharing constants across libraries, then you actually should not care whether they are duplicated, since your constants should be small and duplication allows for a lot of optimization (which is good). However, since you appear to be working with non-constants, the situation is a little different. There are a billion patterns for making a cross-library singleton in C++, but I naturally like my way the best.
In some header file, let's assume you want to share an integer, so you would have this in myfuncts.h:
#ifndef MY_FUNCTS_H__
#define MY_FUNCTS_H__
// include the standard header, which has the MY_LIBRARY_PUBLIC definition
#include "standard.h"
// Notice that it is a reference
MY_LIBRARY_PUBLIC int& GetSingleInt();
#endif//MY_FUNCTS_H__
Then, in the myfuncts.cpp file, you would have:
#include "myfuncts.h"
int& GetSingleInt()
{
// keep the actual value as static to this function
static int s_value(0);
// but return a reference so that everybody can use it
return s_value;
}
Dealing with templates
C++ has super-powerful templates, which is great. However, pushing templates across libraries can be really painful. When a compiler sees a template, it takes it as the message "fill in whatever you want to make this work," which is perfectly fine if you only have one final target. However, it can become an issue when you're working with multiple dynamic shared objects, since they could theoretically all be compiled with different versions of different compilers, each of which thinks that its own fill-in-the-blanks method is correct (and who are we to argue: it's not defined in the standard). This means that templates can be a huge pain, but you do have some options.
Don't allow different compilers.
Pick one compiler (per operating system) and stick to it. Only support that compiler and require that all libraries be compiled with that same compiler. This is actually a really neat solution (that totally works).
Don't use templates in exported functions/classes
Only use template functions and classes when you're working internally. This saves a lot of hassle, but overall it is quite restrictive. Personally, I like using templates.
Force exporting of templates and hope for the best
This works surprisingly well (especially when paired with not allowing different compilers).
Add this to standard.h:
#ifdef MY_LIBRARY_COMPILE
#define MY_LIBRARY_EXTERN
#else
#define MY_LIBRARY_EXTERN extern
#endif
And in some consuming class definition (before you declare the class itself):
// force exporting of templates
MY_LIBRARY_EXTERN template class MY_LIBRARY_PUBLIC std::allocator<int>;
MY_LIBRARY_EXTERN template class MY_LIBRARY_PUBLIC std::vector<int, std::allocator<int> >;
class MY_LIBRARY_PUBLIC MyObject
{
private:
std::vector<int> m_vector;
};
This is almost completely perfect: the compiler won't yell at you and life will be good, unless your compiler starts changing the way it fills in templates and you recompile one of the libraries but not the other (and even then, it might still work... sometimes).
Keep in mind that if you're using things like partial template specialization (or type traits or any of the more advanced template metaprogramming stuff), the producer and all its consumers must see the same template specializations. That is, if you have a specialized implementation of vector<T> for ints or whatever, and the producer sees the one for int but a consumer does not, the consumer will happily create the wrong kind of vector<T>, which will cause all sorts of really screwed-up bugs. So be very careful.