Forward declaring an enum in C++ that is defined in C

I searched about forward declaration and didn't see any way to make my situation work. So here it is:
1) There is a C-header file, an export interface so to speak for a large multi-component software, that contains an enum typedef
"export.h":
// This is in "C"!
typedef enum _VM_TYPE {...., ...., ...,} VM_TYPE;
2) A part of the code, in C++, uses that export.
"cpp_code.cpp":
// This is in C++
#include "export.h"
#include "cpp_header.hpp"
{ .... using VM_TYPE values to do stuff ....}
"cpp_header.hpp":
// Need to somehow forward declare VM_TYPE here but how?
struct VM_INFO {
....
VM_TYPE VType; //I need to add this enum to the struct
....
};
So quite obviously, the problem is in cpp_header.hpp, as it doesn't know about the enum.
I tried adding to cpp_header.hpp
typedef enum _VM_TYPE VM_TYPE;
and it'll actually work. So why does THIS work? Because it has C-style syntax?!
Anyway, I was told to not do that ("it's C++, not C here") by upper "management".
Is there another way to make this work at all, based on how things are linked currently? They don't want to change/add include files; "enum class" is C++-only, correct? Adding just "enum VM_TYPE" to cpp_header.hpp gets an error about redefinition.
Any idea? Thanks.

In the particular situation described in your question, you don't need to forward declare at all. All the files you #include are going to essentially get copy-pasted into a single translation unit before compilation proper begins, and since you #include "export.h" before you #include "cpp_header.hpp", then it'll just work, because by the time the compiler sees the definition of struct VM_INFO, it'll already have seen the definition of enum _VM_TYPE, so you've got no problem. There's basically no difference here between including "export.h" in "cpp_header.hpp", and including them both in "cpp_code.cpp" in that order, since you end up with essentially the same code after preprocessing. So all you have to do here is make sure you get your includes in the correct order.
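A minimal sketch of that ordering, using the file names from the question (assuming "export.h" compiles cleanly as C++, which a plain typedef'd enum does):

// cpp_code.cpp
#include "export.h"       // full definition of enum _VM_TYPE / VM_TYPE is seen first
#include "cpp_header.hpp" // struct VM_INFO can now hold a VM_TYPE by value

Nothing else needs to change; the preprocessor pastes both files into the translation unit in exactly that order.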
If you ever wanted to #include "cpp_header.hpp" without including "export.h" in a translation unit where you need to access the members of struct VM_INFO (so that leaving it as an incomplete type isn't an option) then "export.h" is just badly designed, and you should break out the definition of anything you might need separately into a new header. If, as the comments suggest, you absolutely cannot do this and are required to have a suboptimal design, then your next best alternative would be to have two versions of "cpp_header.hpp", one which just repeats the definition of enum _VM_TYPE, and one which does not. You'd #include the first version in any translation unit where you do not also #include "export.h", and #include the second version in any translation unit where you do. Obviously any code duplication of this type is inviting problems in the future.
Also, names beginning with an underscore followed by a capital letter are always reserved in C (and in C++), so you really shouldn't use them. If a future version of C ever decides to make use of _VM_TYPE, then you'll be stuck with either using an outdated version of C, or having all this code break.

An enum cannot be forward declared (at least in C, or in C++ before C++11) because the compiler needs to know the size of the enum. The underlying type of an enum is compiler specific, but usually int. Could you just cast the enum to an int?
"I could be and often am wrong"


Inline functions in header files in C++

Why is it a bad practice to define the functions of the class in the header files?
Let's say I have a header file and I define the functions of the class in the class definition itself, like:
headerfile.hpp
#ifndef _HEADER_FILE_
#define _HEADER_FILE_
class node {
    int i;
public:
    int nextn() {
        ......
        return i;
    }
};  // note the required semicolon after the class definition
#endif //_HEADER_FILE_
So defining the function in the class like this makes the function "inline". So if we include this header file in, say, two .cpp files, will it cause a "multiple definition" error? Is it bad practice to define functions like this in the class definition?
It is a bad practice for the following reason: if you need to change the code, let's say to add a trace in a simple setter (they are commonly in the .h), then you will need to recompile all CPP files that #include the change (and any dependency of it). In my current project that could reach up to 1 hour lost. If you later need to add another trace, then another and so on, you quickly lose 1-2 days of work waiting for the compiler.
If you place your code in the CPP, then you only need to re-link, and that takes only a few minutes. Your project may be small today, but who knows in a few years. It's just a good habit to adopt.
Another (not so good) reason is that if you search your code base for the string "::MyFonction", you will not find the declaration, since there is no "::" there (when we only want implementations). But a good IDE should find it anyway, using a context search instead of a string search.
It's not bad practice (in fact it's commonplace) and it will not cause multiple definition errors. Inline functions never cause multiple definition errors; that's one of the meanings of inline.
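A minimal sketch of what inline permits (hypothetical header, included from any number of .cpp files):

// util.hpp -- safe to include from several translation units
inline int twice(int x) { return 2 * x; }  // identical inline definitions across TUs are merged by the linker

Member functions defined inside the class body get this behavior implicitly, without spelling out the inline keyword.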
The convention to separate prototypes (that is, the declaration of the class, its functions, their types) from implementation comes from both a design and a performance point of view.
Type checking and compiling your dependants is cheaper. Something that uses your class can be safely compiled without knowing your implementation.
Your compiler won't need to parse and recompile the same information lots of times each time you do compile those dependants.
The thing is to remember what it really means when you write #include at the top of a file in C++: it means "take all the contents of some other file, and put them here." So if you're using a class in lots of places all over your code base, then it's getting parsed every single time, and re-compiled in the context of that compilation unit.
This is precisely the reason why you have to put implementations of template classes in-line in the header file; the compiler needs to re-parse and compile the class for every different template instantiation (because that's what templates are about).
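A minimal illustration of that point (hypothetical header):

// identity.hpp -- the definition must be visible wherever the template is used
template <typename T>
T identity(T value) { return value; }  // re-instantiated for each T the including code uses

If only a declaration of identity were visible, the compiler could not generate code for identity<int>, identity<double>, etc. at the call sites.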
To answer your question directly:
* No, you will not get a multiple definition error.
* Maybe; some people would consider it bad practice from a design point of view (others wouldn't).
* You might see a difference in compilation performance (though not necessarily a degradation; as I believe - though I could be wrong - despite the above, it can still be faster to compile header-only libraries).
Probably avoid doing this if your implementations are long, the class is used often in the codebase, and will be subject to frequent change.
For further reading, it might be worth checking up on "precompiled headers."
It is legal to define (inline) functions in your hpp file. Note that some people prefer to gather them under a dedicated extension like ".inl.hpp", but this is just a style preference.

Is it OK to put a standard, pure C header #include directive inside a namespace? [duplicate]

Possible Duplicate:
Is it a good idea to wrap an #include in a namespace block?
I've got a project with a class log in the global namespace (::log).
So, naturally, after #include <cmath>, the compiler gives an error message each time I try to instantiate an object of my log class, because <cmath> pollutes the global namespace with lots of three-letter functions, one of them being the logarithm function log().
So there are three possible solutions, each having their unique ugly side-effects.
Move the log class into its own namespace and always access it with its fully qualified name. I really want to avoid this because the logger should be as convenient as possible to use.
Write a mathwrapper.cpp file which is the only file in the project that includes <cmath>, and which makes all the required <cmath> functions available through wrappers in a namespace math. I don't want to use this approach because I would have to write a wrapper for every single required math function, and it would add an additional call penalty (partially cancelled out by the -flto compiler flag).
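A sketch of what that wrapper approach would look like (hypothetical file names):

// mathwrapper.hpp
namespace math { double log(double x); }

// mathwrapper.cpp -- the only file in the project that includes <cmath>
#include <cmath>
#include "mathwrapper.hpp"
double math::log(double x) { return std::log(x); }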
The solution I'm currently considering:
Replace
#include <cmath>
by
namespace math {
#include "math.h"
}
and then calculating the logarithm function via math::log().
I have tried it out and it does, indeed, compile, link and run as expected. It does, however, have multiple downsides:
It's (obviously) impossible to use <cmath> itself, because the <cmath> code accesses the functions by their fully qualified names, and <math.h> is deprecated for use in C++.
I've got a really, really bad feeling about it, like I'm gonna get attacked and eaten alive by raptors.
So my question is:
Is there any recommendation/convention/etc that forbid putting include directives in namespaces?
Could anything go wrong with
diferent C standard library implementations (I use glibc),
different compilers (I use g++ 4.7, -std=c++11),
linking?
Have you ever tried doing this?
Are there any alternate ways to banish the math functions from the global namespace?
I've found several similar questions on stackoverflow, but most were about including other C++ headers, which obviously is a bad idea, and those that weren't made contradictory statements about linking behaviour for C libraries. Also, would it be beneficial to additionally put the #include <math.h> inside extern "C" {}?
edit
So I decided to do what probably everyone else is doing: put all of my code in a project namespace, and access the logger with its fully qualified name when including <cmath>.
No, the solution that you are considering is not allowed. In practice what it means is that you are changing the meaning of the header file. You are changing all of its declarations to declare differently named functions.
These altered declarations won't match the actual names of the standard library functions so, at link time, none of the standard library functions will resolve calls to the functions declared by the altered declarations unless they happen to have been declared extern "C" which is allowed - but not recommended - for names which come from the C standard library.
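A sketch of that linkage distinction (declarations only, for illustration; not the questioner's code):

namespace math {
    double log(double);             // C++ linkage: mangled as math::log, never resolves to the C library's log
    extern "C" double sin(double);  // "C" linkage: still refers to the one global C symbol, despite the namespace
}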
ISO/IEC 14882:2011 17.6.2.2/3 [using.headers] applies to the C standard library headers as they are part of the C++ standard library:
A translation unit shall include a header only outside of any external declaration or definition[*], and shall include the header lexically before the first reference in that translation unit to any of the entities declared in that header.
[*] which would include a namespace definition.
Why not put the log class in its own namespace and use typedef your_namespace::log logger; to avoid name clashes in a more convenient way?
Change your class's name. Not that big of a deal. ;-)
Seriously though, it's not a great idea to put names in the global namespace that collide with names from any standard header. C++03 didn't explicitly permit <cmath> to define ::log. But implementations were chronically non-conforming about that due to the practicalities of defining <cmath> on top of an existing <math.h> (and perhaps also an existing static-link library for some headers, including math). So C++11 ratifies existing practice, and allows <cmath> to dump everything into the global namespace. C++11 also reserves all those names for use with extern "C" linkage, and all function signatures for use with C++ linkage, even if you don't include the header. But more on that later.
Because in C++ any standard header is allowed to define the names from any other standard header (i.e., they're allowed to include each other), this means that any standard header at all can define ::log. So don't use it.
The answer to your question about different implementations is that even if your scheme works to begin with (which isn't guaranteed), in some other implementation there might be a header that you use (or want to use in future in the same TU as your log class), that includes <cmath>, and that you didn't give the namespace math treatment to. Off the top of my head, <random> seems to me a candidate. It provides a whole bunch of continuous random number distributions that plausibly could be implemented inline with math functions.
I suggest Log, but then I like capitalized class names. Partly because they're always distinct from standard types and functions.
Another possibility is to define your class as before and use struct log in place of log. This doesn't clash with the function, for reasons that only become clear if you spend way too much time with the C and C++ standards (you only use log as a class name, not as a function and not as a name with "C" linkage, so you don't infringe on the reserved name. Despite all appearances to the contrary, class names in C++ still inhabit a parallel universe from other names, rather like struct tags do in C).
Unfortunately struct log isn't a simple-type-identifier, so for example you can't create a temporary with struct log(VERY_VERBOSE, TO_FILE). To define a simple-type-identifier:
typedef struct log Log;
Log(VERY_VERBOSE, TO_FILE); // unused temporary object
An example of what I say in a comment below, based on a stated example usage. I think this is valid, but I'm not certain:
#include <iostream>
#include <cmath>
using std::log; // to enforce roughly what the compiler does anyway
enum Foo {
foo, bar
};
std::ostream &log(Foo f) { return std::cout; }
int main() {
log(foo) << log(10) << "\n";
}
It is an ugly hack too, but I believe it will not cause any linker problems. Just rename log from <math.h> with the preprocessor:
#define log math_log
#include <math.h>
#undef log
It could cause problems with inline functions from math.h that use log, but maybe you'll be lucky...
Math log() is still accessible but it's not easy. Within functions where you want to use it, just repeat its real declaration:
int somefunc() {
double log(double); // not sure if correct
return log(1.1);
}

In C/C++, is there a directive similar to #ifndef for typedefs?

If I want to define a value only if it is not defined, I do something like this :
#ifndef THING
#define THING OTHER_THING
#endif
What if THING is a typedef'd identifier, and not defined? I would like to do something like this:
#ifntypedef thing_type
typedef uint32_t thing_type;
#endif
The issue arose because I wanted to check to see if an external library has already defined the boolean type, but I'd be open to hearing a more general solution.
There is no such thing in the language, nor is it needed. Within a single project you should not have the same typedef alias referring to different types, ever, as that is a violation of the ODR; and if you are going to create the same alias for the same type, then just do it. The language allows you to repeat the same typedef as many times as you wish, and the compiler will usually catch that particular ODR violation (within the same translation unit):
typedef int myint;
typedef int myint; // OK: myint is still an alias to int
//typedef double myint; // Error: myint already defined as alias to int
If what you are intending to do is implementing a piece of functionality for different types by using a typedef to determine which to use, then you should be looking at templates rather than typedefs.
C++ does not provide any mechanism for code to test for the presence of a typedef; the best you can have is something like this:
#ifndef THING_TYPE_DEFINED
#define THING_TYPE_DEFINED
typedef uint32_t thing_type;
#endif
EDIT:
As @David correctly points out in his comment, this answers the how? part but importantly misses the why?. It can be done in the way above if you want to do it at all, but importantly you probably don't need to do it anyway; @David's answer and comment explain the details, and I think that answers the question correctly.
No, there is no such facility in C++ at the preprocessing stage. The most you can do is
#ifndef thing_type
#define thing_type uint32_t
#endif
Though this is not a good coding practice and I don't suggest it.
Preprocessor directives (like #define) are crude text replacement tools, which know nothing about the programming language, so they can't act on any language-level definitions.
There are two approaches to making sure a type is only defined once:
Structure the code so that each definition has its place, and there's no need for multiple definitions
#define a preprocessor macro alongside the type, and use #ifndef to check for the macro definition before defining the type.
The first option will generally lead to more maintainable code. The second could cause subtle bugs, if you accidentally end up with different definitions of the type within one program.
As others have already said, there is no such thing, but if you try to create an alias to a different type, you'll get a compilation error:
typedef int myInt;
typedef int myInt; // ok, same alias
typedef float myInt; // error
However, there is a tool called ctags for finding where a typedef is defined.
The problem is actually a real PITA, because some APIs or SDKs redefine commonly used things. I had an issue where the header files for a map processing software (GIS) were redefining TRUE and FALSE (generally used by the Windows SDK) to integer literals instead of the true and false keywords (obviously, that can break SOMETHING). And yes, the famous joke "#define true false" is relevant.
#define would never see a typedef or constant declared in C/C++ code, because the preprocessor doesn't analyze code; it only scans for # directives, and it modifies the code before handing it to the syntax analyzer. So, in general, it's not possible.
https://msdn.microsoft.com/en-us/library/5xkf423c.aspx?f=255&MSPPError=-2147217396
That one isn't portable so far, though there have been requests to implement it in GCC. I think it also counts as an "extension" in MSVC. It's a compiler statement, not a preprocessor statement, so it will not "see" defined macros; it detects only typedefs outside of a function body. "Full type" there means that it reacts to a full definition, ignoring declarations like "class SomeClass;". Use it at your own risk.
Edit: apparently it is also supported on macOS now, and by the Intel compiler with the -fms-dialect flag (AIX/Linux?)
This might not directly answer the question, but serve as a possible solution to your problem.
Why not try something like this?
#define DEFAULT_TYPE int // just for argument's sake
#ifndef MY_COOL_TYPE
#define MY_COOL_TYPE DEFAULT_TYPE
#endif
typedef MY_COOL_TYPE My_Cool_Datatype_t;
Then if you want to customize the type, you can either define MY_COOL_TYPE somewhere above this (like in a "configure" header that is included at the top of this header) or pass it as a command-line argument when compiling (as far as I know you can do this with GCC and LLVM, maybe others, too).
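For example (hypothetical file name), overriding the type from the command line with GCC could look like:

g++ -DMY_COOL_TYPE="unsigned long" -c mycode.cpp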
No, there is nothing like what you want. I have had the same problem with libraries that include their own typedefs for things like bool. It gets to be a problem when they just don't care about what you use for bool, or whether any other libs might be doing the same thing!!
So here's what I do. I edit the header file for the libs that do such things, find the typedef bool, and add some code like this:
#ifdef USE_LIBNAME_BOOL
typedef unsigned char bool; // This is the lib's bool implementation
#else
#include <stdbool.h>
#endif
Notice that I include <stdbool.h> if I don't want to use the lib's own bool typedef. This means that you need C99 support or later.
As mentioned before this is not included in the C++ standard, but you might be able to use autotools to get the same functionality.
You could use the ac_cxx_bool macro to make sure bool is defined (or different routines for different datatypes).
The solution I ended up using was including stdbool.h. I know this doesn't solve the question of how to check if a typedef is already defined, but it does let me ensure that the boolean type is defined.
This is a good question. C and Unix have a history together, and there are a lot of Unix C typedefs not available on a non-POSIX platform such as Windows (shhh Cygwin people). You'll need to decide how to answer this question whenever you're trying to write C that's portable between these systems (shhhhh Cygwin people).
If cross-platform portability is what you need this for, then knowing the platform-specific preprocessor macro for the compilation target is sometimes helpful. E.g. Windows has the _WIN32 preprocessor macro defined; it's 1 whenever the compilation target is 32-bit ARM, 64-bit ARM, x86, or x64. But its presence also informs us that we're on a Windows machine. This means that e.g. ssize_t won't be available (ssize_t, not size_t). So you might want to do something like:
#ifdef _WIN32
typedef long ssize_t;
#endif
By the way, people in this thread have commented about a similar pattern that is formally called a guard. You see it in header files (i.e. interfaces, or ".h" files) a lot, to prevent multiple inclusion. You'll hear these called header guards.
/// #file poop.h
#ifndef POOP_H
#define POOP_H
void* poop(Poop* arg);
#endif
Now I can include the header file in the implementation file poop.c and some other file like main.c, and I know they will always compile successfully and without multiple inclusion, whether they are compiled together or individually, thanks to the header guards.
Salty seadogs write their header guards programmatically or with C++11 function-like macros. If you like books I recommend Jens Gustedt's "Modern C".
It is not transparent, but you can try to compile it once without the typedef (just using the alias) and see whether it compiles or not.
There is no such thing.
It is possible to deactivate the duplicate-typedef compiler error:
"typedef name has already been declared (with same type)"
On the other hand, for some standardized typedef definitions there is often a preprocessor macro defined, like __bool_true_false_are_defined for bool, which can be used.
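A sketch of that approach for bool in C (the macro is defined by C99's <stdbool.h>):

#ifndef __bool_true_false_are_defined
/* nobody has pulled in <stdbool.h> yet, so provide a fallback */
typedef unsigned char bool;
#endif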

Setting a value to an enum in same namespace but different class? C++

EDIT
Thanks to comments under the question, I realized that you have to declare an enum in the header file also. >.< Why does nothing on the internet about enums mention this?
Now the compiler is recognizing Geologist.
My enum is within namespace Star, in a class called GameModeState, but I need to check the current enum value within a class called ZoneMovementState, which also uses namespace Star. I have GameModeState included at the top of ZoneMovementState.
The enum declaration in GameModeState is this:
enum Job {Landman = 0, Geologist = 1};
I'm trying to use this code in ZoneMovementState:
int placeholderJob = Star::GameModeState::Geologist;
//or I've tried this
int placeholderJob = GameModeState::Geologist;
For some reason my compiler is not recognizing Geologist in either attempt; how do I set placeholderJob to Geologist?
Does it not recognize Geologist in the scope of your program? (When you mouse over it, does IntelliSense pop up and show you that Geologist is an enum type equal to 1, or does it have a squiggly underneath it, indicating it does not recognize the type?)
This could be a scoping issue (although based on your information it doesn't sound like it), or perhaps the compiler you are using does not allow setting the value of an enumeration to an integer without an explicit cast.
Why does nothing on the internet about enums mention this?
The internet doesn't need to mention this, because it groks compilation units.
Header files are there to tell a compiler (basically) what names (identifiers) exist and what they represent. This is why the compiler tells you when it doesn't know what a Geologist represents.
The same goes for functions, fields, classes, structs, typedefs, namespaces, so really the question would be
Why would a compiler magically know about enums in another compilation unit, when everything else has to be spelled out for him?
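A minimal sketch of what the header has to spell out (names from the question, details assumed):

// GameModeState.h
namespace Star {
class GameModeState {
public:
    enum Job {Landman = 0, Geologist = 1};  // visible to anyone who includes this header
};
}

// ZoneMovementState.cpp
#include "GameModeState.h"
int placeholderJob = Star::GameModeState::Geologist;  // now compiles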

Why do functions need to be declared before they are used?

When reading through some answers to this question, I started wondering why the compiler actually needs to know about a function when it first encounters it. Wouldn't it be simple to just add an extra pass when parsing a compilation unit that collects all symbols declared within it, so that the order in which they are declared and used no longer matters?
One could argue that declaring functions before they are used certainly is good style, but I am wondering: is there any other reason why this is mandatory in C++?
Edit - An example to illustrate: Suppose you have two functions that are defined inline in a header file. These two functions call each other (maybe a recursive tree traversal, where odd and even layers of the tree are handled differently). The only way to resolve this would be to make a forward declaration of one of the functions before the other, as sketched below.
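A hypothetical sketch of that mutual recursion:

void handleOdd(int depth);  // forward declaration breaks the cycle
void handleEven(int depth) { if (depth > 0) handleOdd(depth - 1); }
void handleOdd(int depth)  { if (depth > 0) handleEven(depth - 1); }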
A more common example (though with classes, not functions) is the case of classes with private constructors and factories. The factory needs to know the class in order to create instances of it, and the class needs to know the factory for the friend declaration.
If this requirement is from the olden days, why was it not removed at some point? It would not break existing code, would it?
How do you propose to resolve undeclared identifiers that are defined in a different translation unit?
C++ has no module concept, but has separate translation as an inheritance from C. A C++ compiler will compile each translation unit by itself, not knowing anything about other translation units at all. (Except that export broke this, which is probably why it, sadly, never took off.)
Header files, which is where you usually put declarations of identifiers which are defined in other translation units, actually are just a very clumsy way of slipping the same declarations into different translation units. They will not make the compiler aware of there being other translation units with identifiers being defined in them.
Edit re your additional examples:
With all the textual inclusion instead of a proper module concept, compilation already takes agonizingly long for C++, so requiring another compilation pass (where compilation already is split into several passes, not all of which can be optimized and merged, IIRC) would worsen an already bad problem. And changing this would probably alter overload resolution in some scenarios and thus break existing code.
Note that C++ does require an additional pass for parsing class definitions, since member functions defined inline in the class definition are parsed as if they were defined right behind the class definition. However, this was decided when C with Classes was thought up, so there was no existing code base to break.
Historically C89 let you do this. The first time the compiler saw a use of a function and it didn't have a predefined prototype, it "created" a prototype that matched the use of the function.
When C++ decided to add strict type checking to the compiler, it was decided that prototypes were now required. Also, C++ inherited single-pass compilation from C, so it couldn't add a second pass to resolve all symbols.
Because C and C++ are old languages. Early compilers didn't have a lot of memory, so these languages were designed so a compiler can just read the file from top to bottom, without having to consider the file as a whole.
I think of two reasons:
It makes the parsing easy. No extra pass needed.
It also defines scope; symbols/names are available only after their declaration. That means if I declare a global variable int g_count;, the code after that line can use it, but not the code before the line! The same argument applies to global functions.
As an example, consider this code:
void g(double)
{
cout << "void g(double)" << endl;
}
void f()
{
g(int());//this calls g(double) - because that is what is visible here
}
void g(int)
{
cout << "void g(int)" << endl;
}
int main()
{
f();
g(int());//calls g(int) - because that is what is the best match!
}
Output:
void g(double)
void g(int)
See the output at ideone : http://www.ideone.com/EsK4A
The main reason will be to make the compilation process as efficient as possible. If you add an extra pass you're adding both time and storage. Remember that C++ was developed back before the time of Quad Core Processors :)
The C programming language was designed so that the compiler could be implemented as a one-pass compiler. In such a compiler, each compilation phase is only executed once, and you cannot refer to an entity that is defined later in the source file.
Moreover, in C, the compiler only interprets a single compilation unit (generally a .c file and all the included .h files) at a time. So you needed a mechanism to refer to a function defined in another compilation unit.
The decision to allow a one-pass compiler, and to be able to split a project into small compilation units, was taken because at the time the memory and processing power available were really tight. And allowing forward declarations could easily solve the issue with a single feature.
The C++ language was derived from C and inherited the feature from it (as it wanted to be as compatible with C as possible to ease the transition).
I guess because C is quite old, and at the time C was designed, efficient compilation was a problem because CPUs were much slower.
Since C++ is a statically typed language, the compiler needs to check whether values' types are compatible with the types expected in the function's parameters. Of course, if you don't know the function signature, you can't do this kind of check, thus defeating the purpose of a static compiler. But, since you have a silver badge in C++, I think you already know this.
The C++ language specs were made that way because the designers didn't want to force a multi-pass compiler when hardware was not as fast as what is available today. In the end, I think that if C++ were designed today, this imposition would go away, but then we would have another language :-).
One of the biggest reasons why this was made mandatory even in C99 (compared to C89, where you could have implicitly-declared functions) is that implicit declarations are very error-prone. Consider the following code:
First file:
#include <stdio.h>
void doSomething(double x, double y)
{
printf("%g %g\n",x,y);
}
Second file:
int main()
{
doSomething(12345,67890);
return 0;
}
This program is a syntactically valid* C89 program. You can compile it with GCC using this command (assuming the source files are named test.c and test0.c):
gcc -std=c89 -pedantic-errors test.c test0.c -o test
Why does it print something strange (at least on linux-x86 and linux-amd64)? Can you spot the problem in the code at a glance? Now try replacing c89 with c99 in the command line — and you'll be immediately notified about your mistake by the compiler.
Same with C++. But in C++ there are actually other important reasons why function declarations are needed; they are discussed in other answers.
* But has undefined behavior
Still, you can sometimes have a use of a function before it is declared (to be strict in the wording: "before" is about the order in which the program source is read) -- inside a class!:
class A {
public:
static void foo(void) {
bar();
}
private:
static void bar(void) {
return;
}
};
int main() {
A::foo();
return 0;
}
(Changing the class to a namespace doesn't work, per my tests.)
That's probably because the compiler actually puts the member-function definitions from inside the class right after the class declaration, as someone has pointed out here in the answers.
The same approach could be applied to the whole source file: first, drop everything but declarations, then handle everything postponed. (Either a two-pass compiler, or large enough memory to hold the postponed source code.)
Haha! So, they thought a whole source file would be too large to hold in memory, but a single class with function definitions wouldn't: they can allow a whole class to sit in memory and wait until the declarations are filtered out (or do a 2nd pass for the source code of classes)!
I remember that with Unix and Linux you have global and local. Within your own environment, local works for functions, but it does not work for global (system-wide) use; you must declare the function globally.