Pointless 'MIDL_INTERFACE' Macro in winapi? - c++

After browsing some old code, I noticed that some classes are defined in this manner:
MIDL_INTERFACE("XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX")
Classname: public IUnknown {
/* classmembers ... */
};
However, the macro MIDL_INTERFACE is defined as:
#define MIDL_INTERFACE(x) struct
in C:/MinGW/include/rpcndr.h (somewhere around line 17). The macro itself is rather obviously entirely pointless, so what's the true purpose of this macro?

In the Windows SDK version that macro expands to
struct __declspec(uuid(x)) __declspec(novtable)
The first one allows use of the __uuidof keyword, which is a convenient way to get the GUID of an interface from its type name. The second one suppresses generation of the v-table for the class, which is never used for a pure interface: a space optimization.
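As a concrete illustration, here is a minimal sketch of how that expansion is typically used; the interface name, GUID, and method below are made up for the example, and it requires MSVC with the Windows SDK:
#include <unknwn.h>   // IUnknown; the SDK headers also provide MIDL_INTERFACE

MIDL_INTERFACE("12345678-1234-1234-1234-123456789abc")
ISampleThing : public IUnknown
{
    virtual HRESULT STDMETHODCALLTYPE DoWork() = 0;
};

// Because the macro attached __declspec(uuid(...)) to the type, __uuidof can
// recover the GUID from the type name alone, e.g. when querying an object:
//   ISampleThing* thing = nullptr;
//   HRESULT hr = unknown->QueryInterface(__uuidof(ISampleThing),
//                                        reinterpret_cast<void**>(&thing));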

This is because MinGW does not support COM (or rather, supports it extremely poorly). MIDL_INTERFACE is used when defining a COM interface, and such definitions are normally produced by the MIDL (IDL) compiler, which generates COM type libraries and class definitions for you.
On MSVC, this macro typically expands to more complicated initialization and annotations to expose the given C++ class to COM.

If I had to guess, it's for one of two use cases:
It's possible that there's an external tool that parses the files looking for declarations like these. The idea is that by having the macro evaluate to something harmless, the code itself compiles just fine, but the external tool can still look at the source code and extract information out of it.
Another option might be that the code uses something like the X Macro Trick to selectively redefine what this preprocessor directive means, so that some other piece of the code can interpret the data in some other way. Depending on where the #define is, this may or may not be possible, but it seems reasonable that this might be the use case. This is essentially a special case of the first option.
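To make the second idea concrete, here is a minimal sketch of the X-macro pattern; the file name interfaces.def and the macro DECLARE_INTERFACE_ENTRY are made up for illustration. The same data file is interpreted differently depending on how the macro is defined before inclusion:
// interfaces.def (hypothetical) contains lines such as:
//   DECLARE_INTERFACE_ENTRY(IFoo, "11111111-1111-1111-1111-111111111111")
//   DECLARE_INTERFACE_ENTRY(IBar, "22222222-2222-2222-2222-222222222222")

// First interpretation: forward-declare every interface.
#define DECLARE_INTERFACE_ENTRY(name, guid) struct name;
#include "interfaces.def"
#undef DECLARE_INTERFACE_ENTRY

// Second interpretation: build a name/GUID table for some other piece of code.
static const struct { const char* name; const char* guid; } g_interfaces[] = {
#define DECLARE_INTERFACE_ENTRY(name, guid) { #name, guid },
#include "interfaces.def"
#undef DECLARE_INTERFACE_ENTRY
};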

Related

Why doesn't the vscode-cpptools extension provide intellisense for namespaced declarations inside macro definitions?

I am using VS Code; everything described below happens in the VS Code environment.
I have a header called "Typelist.h" that defines the namespace "Loki".
I am trying to use a struct from that namespace, defined in this header.
I did:
# define LOKI_TYPELIST_1(T1) ::Loki::TypeList<T1, ::Loki::NullType>
# define LOKI_TYPELIST_2(T1, T2) ::Loki::TypeList<T1, LOKI_TYPELIST_1(T2)>
Normally, I would expect IntelliSense suggestions when I type ::Loki::[something from namespace Loki], but nothing is shown.
It doesn't work either when I use LOKI_TYPELIST_1 inside the definition of LOKI_TYPELIST_2.
What's going on here? Why doesn't the vscode-cpptools extension provide intellisense for namespaced declarations inside macro definitions?
P.S. I did include "Typelist.h" in my current header.
As far as I know, the vscode-cpptools extension doesn't provide IntelliSense inside macro definitions.
This is actually fairly reasonable. What a macro definition really means when the macro gets used is often highly dependent on the context where it is used. After all... isn't that one of the main features of macros?
Take an (unrealistic, toy) example: #define FOO std::. Let's say I expect to see identifiers for things declared inside the std namespace when I trigger VS Code to provide suggestions there. Who's to say some lunatic won't redefine what std is and then use FOO?
Granted, if it were instead #define FOO ::std::, our imaginary lunatic is out of luck, but I'd wager this is a case of "It's too much work to distinguish between what is guaranteeable as known inside a macro body, so let's just not provide intellisense there."
Here's more (and probably better) food for thought: How would intellisense know what is declared inside the std namespace at the point of the macro usage? That would depend on what standard headers have been included and what forward declarations have been made before the point of the macro usage. At one usage site, I might have included <string>, and at another, not have included it. Etc. A macro can be used in multiple places. How do you give intellisense for something that can have different valid suggestions depending on where it is used?
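As a toy sketch of that point (the file names and the MAKE macro are made up), the very same macro body can be fine at one usage site and an error at another, so there is no single correct set of suggestions for the body itself:
// a.cpp
#include <string>
#define MAKE(T) std::T
MAKE(string) s1;   // OK here: <string> was included, so std::string exists

// b.cpp
#define MAKE(T) std::T
MAKE(string) s2;   // error here: nothing has declared std::string in this file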
Even with such small examples (and there are many other variations), there are already two challenges that make it unduly difficult for the vscode-cpptools extension to provide good suggestions/autocomplete inside a macro body.
Granted, the vscode-cpptools extension can and will show problem highlighting at/inside the macro if any usage of that macro has problems, because it uses compiler diagnostic messages to find the "site" of the problem, and most compilers report the specific macro line and column where the problem occurs (in addition to the line and column where the macro is being used).

How to define a macro that expands to a conditional statement in C++?

I want to create some context-sensitive macros. The macro
#define LOG_WARNING(x) qWarning().noquote().nospace() << x
works fine; it is located in the file Macros.h. I want to define a macro that does not print log messages when called from a unit-testing routine, so I modified it like this:
#define LOG_INFO(x) if(!UNIT_TESTING) qInfo().noquote().nospace() << x
Since the macro now depends on UNIT_TESTING, I declared the following in the same Macros.h:
extern bool UNIT_TESTING; // Whether in course of unit testing
However, the compiler tells me
declaration does not declare anything [-fpermissive]
extern bool UNIT_TESTING; // Whether in course of unit testing
^
At the same time, if the extern declaration is placed in the file from which Macros.h is included, it works fine. Am I doing something wrong?
Here is how to share variables across source files. Nevertheless, I would highly recommend not doing so, but instead implementing a function (bool IS_UNIT_TESTING()) or a class that takes care of this. That way you can change the implementation without changing the interface.
Moreover, macros are evil: they are error-prone and cannot be debugged easily. Use inline functions or constexpr instead. The compiler will optimize them to almost the same code.
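As a rough sketch of that suggestion (the function name follows the answer above; the storage details and the do/while wrapper are my own assumptions), the macro can call a function whose single definition lives in a .cpp file:
// Macros.h (sketch)
#include <QDebug>

bool IS_UNIT_TESTING();   // defined exactly once in some .cpp file

#define LOG_INFO(x) \
    do { if (!IS_UNIT_TESTING()) qInfo().noquote().nospace() << x; } while (false)

// UnitTestState.cpp (sketch)
// static bool g_unitTesting = false;
// bool IS_UNIT_TESTING() { return g_unitTesting; }
// void setUnitTesting(bool on) { g_unitTesting = on; }
The do/while wrapper also avoids the dangling-else surprise that a bare if inside a macro can cause when the macro is used right before an else.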

Using an inline function for an API

I want to create an API where a call to a function such as getCPUusage() redirects to the Windows- or Linux-specific implementation.
So I'm using a glue file, api.h:
void getCPUusage() {
#ifdef WIN32
getCPUusage_windows();
#endif
#ifdef __gnu_linux__
getCPUusage_linux();
#endif
}
So I wonder whether making it inline would be a better solution, since with what I have, every call goes through an extra function call.
My question is the following: is it better to use an inline function for every call in this situation?
It depends on the use case of your program. If the consumer is also C++, then inline makes sense.
But suppose you would like to reuse it from C, Pascal, Java, and so on; in that case inline is not an option. The caller must import a stable exported name from the library, not from a header file.
A library on Linux is fairly transparent in this respect, while on Windows you need to apply __declspec(dllexport), which is not applicable to an inline function.
The answer is: yes, it is indeed more efficient, but probably not worthwhile:
If you're putting these functions in a class, you are not required to write the "inline" keyword in your situation, because you only have a header file (you don't have any cpp files, according to your description). Functions that are implemented inside the class definition (in the header file) are automatically treated as inline functions by the compiler. Note, however, that this is only a "hint" to the compiler: the compiler may still decide to make your function non-inline if it finds that to be more efficient. For small functions like yours, it will most likely produce actual inline code.
If you're not putting these functions in a class, I don't think you should bother adding inline either, as (as said above) it's only a "hint", and even without those "hints", modern compilers will figure out which functions to inline anyway. Humans are much more likely to be wrong about these things than the compiler.
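For what it's worth, here is a sketch of the header-only variant being discussed; the double return type is an assumption, since the question never says what getCPUusage() returns:
// api.h (sketch)
#pragma once

double getCPUusage_windows();   // implemented in a Windows-specific .cpp
double getCPUusage_linux();     // implemented in a Linux-specific .cpp

inline double getCPUusage() {
#if defined(_WIN32)
    return getCPUusage_windows();
#elif defined(__gnu_linux__)
    return getCPUusage_linux();
#else
#   error "unsupported platform"
#endif
}
Note that inline here is mainly about the one-definition rule (the definition may appear in every translation unit that includes the header); whether the call is actually inlined is still up to the compiler.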

Parsing irregular c++ prototypes

I am trying to build a program that parses and lists the content of header files. So far, so good: I found it easy to parse and list headers I wrote myself, but when I started parsing cross-platform API headers, things got messy.
My current approach is rather simplistic, here is a pseudocode example of parsing the following function:
void foo(int a);
void is a type, so we are dealing with instancing a type
foo is the name of that type
foo is followed by brackets, meaning it is a function of type void named foo
int is a type...
a is the name of that type instance
foo is a function of type void that takes one parameter of type int named a
However, when I got into bigger and more complex headers I stumbled upon somewhat irregular prototypes, involving macros and god knows what. An example:
GLAPI void APIENTRY glEvalCoord1d( GLdouble u );
GLAPI and APIENTRY are platform dependent macros. Which kind of spoils my simple parsing scheme, since it expects the name of an object to follow its type. Those two macros happen to translate to either __stdcall, __declspec(dllimport) or extern but in theory they could mean anything, with their meaning being unclear until compile time.
How do I write my parser so it can deal with such scenarios and not get confused? The macros themselves are defined at an earlier stage, so the parser can be aware that GLAPI and APIENTRY are macros and simply ignore them; is this the way to go? Naturally, this is just one of the many variations of irregularities the parser may stumble upon while going through different headers, so any general techniques for dealing with the parsing of any "legal" header content are welcome.
There isn't any real alternative to expanding the macros before you parse, at least if you want to process header files with the same complexity as Microsoft's, or any other header files associated with a compiler system that has been around for 10 years or more.
The unpreprocessed source code is NOT C; it is simply unpreprocessed source code. The macros (and preprocessor conditionals, which you surprisingly didn't mention) can edit the apparent source in not arbitrary but spectacularly complex ways. And you often can't know what the macros expand to, or which conditionals are taken, unless you process the #includes as well.
You can get GCC to do the preprocessor expansion for you, and then parse the result. That would be by far the easiest way to approach this.
That still leaves the problem of parsing real C code, with all the complexities of declarators and ambiguities in fragments such as T X;, where the meaning of the statement depends on the declaration of T. To parse the headers accurately, you need a full C parser.
Our C Front End can do full preprocessing, or you can invoke it in a mode in which some macros are expanded and some are not. By tuning this set, you can often parse such headers without expanding every macro. Preprocessor conditionals are much more difficult, because they can occur at inconvenient (unstructured) places.
If all you want is the name and signature of functions, then a simple search and replace for macros should be sufficient.
However, you need to check whether a macro contains keywords (like the return value). This may be possible by stripping macro definitions of everything but keywords as they are defined, but tracking them and using a simple preprocessor will be necessary.
The platform-dependent keywords, such as __declspec and __attribute__, have very limited syntax, and there are only a few of them, so specifically removing those is possible.
You may want to take a look at how doxygen handles this, because it does almost exactly what you want and does handle macros. It allows a list of macros to be expanded as defined, and ones that should be expanded to a custom value. You could adapt that to expand __declspec(x) to nothing, and expand all others to their defined value by default.
This certainly isn't foolproof, but a search and replace is about the simplest functional solution you'll get. You need to follow the standard C++ preprocessor rules, which aren't terribly complex, with additional macros (const, declspec, etc) to strip extra attributes, and parse the final results.
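As a minimal sketch of that search-and-replace idea (the macro list is illustrative, not exhaustive), annotation macros that carry no signature information can simply be deleted before the declaration is parsed:
#include <regex>
#include <string>

// Remove calling-convention / export annotations that don't affect the signature.
std::string stripAnnotationMacros(std::string decl) {
    static const std::regex annotations(
        R"(\b(GLAPI|APIENTRY|WINAPI|__stdcall|__cdecl)\b\s*|__declspec\s*\([^)]*\)\s*)");
    return std::regex_replace(decl, annotations, "");
}

// stripAnnotationMacros("GLAPI void APIENTRY glEvalCoord1d( GLdouble u );")
//   yields "void glEvalCoord1d( GLdouble u );"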

In C/C++, is there a directive similar to #ifndef for typedefs?

If I want to define a value only if it is not defined, I do something like this :
#ifndef THING
#define THING OTHER_THING
#endif
What if THING is a typedef'd identifier, and not defined? I would like to do something like this:
#ifntypedef thing_type
typedef uint32_t thing_type;
#endif
The issue arose because I wanted to check to see if an external library has already defined the boolean type, but I'd be open to hearing a more general solution.
There is no such thing in the language, nor is it needed. Within a single project you should never have the same typedef alias referring to different types, as that is a violation of the ODR, and if you are going to create the same alias for the same type, then just do it. The language allows you to repeat the same typedef as many times as you wish and will usually catch that particular ODR violation (within the same translation unit):
typedef int myint;
typedef int myint; // OK: myint is still an alias to int
//typedef double myint; // Error: myint already defined as alias to int
If what you are intending to do is implementing a piece of functionality for different types by using a typedef to determine which to use, then you should be looking at templates rather than typedefs.
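As a toy sketch of the template suggestion (the function and type names are made up), the type becomes a parameter instead of a project-wide typedef:
#include <cstdint>

// Works for any integer-like id type; no switchable typedef needed.
template <typename IdType>
IdType next_id(IdType current) {
    return current + 1;
}

// auto a = next_id<std::uint32_t>(41u);   // 42 as uint32_t
// auto b = next_id<std::uint64_t>(41u);   // 42 as uint64_t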
C++ does not provide any mechanism for code to test for the presence of a typedef; the best you can have is something like this:
#ifndef THING_TYPE_DEFINED
#define THING_TYPE_DEFINED
typedef uint32_t thing_type;
#endif
EDIT:
As @David correctly points out in his comment, this answers the how? part but misses the important why? It can be done in the way shown above, if you want to do it at all, but you probably don't need to do it anyway; @David's answer and comment explain the details, and I think they answer the question correctly.
No, there is no such facility in C++ at the preprocessing stage. The most you can do is:
#ifndef thing_type
#define thing_type uint32_t
#endif
Though this is not a good coding practice and I don't suggest it.
Preprocessor directives (like #define) are crude text replacement tools, which know nothing about the programming language, so they can't act on any language-level definitions.
There are two approaches to making sure a type is only defined once:
Structure the code so that each definition has its place, and there's no need for multiple definitions
#define a preprocessor macro alongside the type, and use #ifndef to check for the macro definition before defining the type.
The first option will generally lead to more maintainable code. The second could cause subtle bugs, if you accidentally end up with different definitions of the type within one program.
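A toy illustration of that subtle bug (the header and macro names are made up): two headers can share the same guard macro yet disagree about the type, and include order silently decides which definition wins:
// lib_a.h (hypothetical)
#ifndef THING_TYPE_DEFINED
#define THING_TYPE_DEFINED
typedef uint32_t thing_type;
#endif

// lib_b.h (hypothetical)
#ifndef THING_TYPE_DEFINED
#define THING_TYPE_DEFINED
typedef uint64_t thing_type;   // silently skipped if lib_a.h was included first
#endif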
As others have already said, there is no such thing, but if you try to create an alias to a different type, you'll get a compilation error:
typedef int myInt;
typedef int myInt; // ok, same alias
typedef float myInt; // error
However, there is a tool called ctags for finding where a typedef is defined.
The problem is actually a real PITA, because some APIs or SDKs redefine commonly used things. I had an issue where the header files of a map-processing (GIS) package redefined TRUE and FALSE (generally used by the Windows SDK) to integer literals instead of the true and false keywords (obviously, that can break SOMETHING). And yes, the famous joke "#define true false" is relevant.
#define can never "see" a typedef or constant declared in C/C++ code, because the preprocessor doesn't analyze the code; it only scans for # directives, and it modifies the code before handing it to the syntax analyzer. So, in general, it's not possible.
https://msdn.microsoft.com/en-us/library/5xkf423c.aspx?f=255&MSPPError=-2147217396
That one isn't portable so far, though there have been requests to implement it in GCC. I think it also counts as an "extension" in MSVC. It's a compiler statement, not a preprocessor statement, so it will not "see" defined macros; it only detects typedefs outside of a function body. "Full type" there means that it reacts to a full definition, ignoring declarations like "class SomeClass;". Use it at your own risk.
Edit: apparently it is also supported on macOS now, and by the Intel compiler with the -fms-dialect flag (AIX/Linux?).
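Assuming the MSDN link above refers to MSVC's non-standard __if_exists / __if_not_exists statements, a usage sketch might look like this (MSVC extension only; not portable):
#include <cstdint>

// MSVC-only sketch: emit the typedef only if no thing_type identifier is visible here.
__if_not_exists(thing_type) {
    typedef std::uint32_t thing_type;
}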
This might not directly answer the question, but serve as a possible solution to your problem.
Why not try something like this?
#define DEFAULT_TYPE int // just for argument's sake
#ifndef MY_COOL_TYPE
#define MY_COOL_TYPE DEFAULT_TYPE
#endif
typedef MY_COOL_TYPE My_Cool_Datatype_t;
Then if you want to customize the type, you can either define MY_COOL_TYPE somewhere above this (like in a "configure" header that is included at the top of this header) or pass it as a command line argument when compiling (as far as I know you can do this with GCC and LLVM, maybe others, too).
No, there is nothing like what you want. I have had the same problem with libraries that include their own typedefs for things like bool. It becomes a problem when they just don't care what you use for bool, or whether any other libs might be doing the same thing!
So here's what I do: I edit the header file of the lib that does this, find the bool typedef, and add some code like this:
#ifdef USE_LIBNAME_BOOL
typedef unsigned char bool; // This is the lib's bool implementation
#else
#include <stdbool.h>
#endif
Notice that I include <stdbool.h> if I don't want to use the lib's own bool typedef. This means that you need C99 support or later.
As mentioned before this is not included in the C++ standard, but you might be able to use autotools to get the same functionality.
You could use the ac_cxx_bool macro to make sure bool is defined (or different routines for different datatypes).
The solution I ended up using was including stdbool.h. I know this doesn't solve the question of how to check if a typedef is already defined, but it does let me ensure that the boolean type is defined.
This is a good question. C and Unix have a history together, and there are a lot of Unix C typedefs not available on a non-POSIX platform such as Windows (shhh Cygwin people). You'll need to decide how to answer this question whenever you're trying to write C that's portable between these systems (shhhhh Cygwin people).
If cross-platform portability is what you need this for, then knowing the platform-specific preprocessor macro for the compilation target is sometimes helpful. E.g. Windows has the _WIN32 preprocessor macro defined; it's 1 whenever the compilation target is 32-bit ARM, 64-bit ARM, x86, or x64. But its presence also informs us that we're on a Windows machine. This means that e.g. ssize_t won't be available (ssize_t, not size_t). So you might want to do something like:
#ifdef _WIN32
typedef long ssize_t;
#endif
By the way, people in this thread have commented about a similar pattern that is formally called a guard. You see it in header files (i.e. interfaces or ".h" files) a lot to prevent multiple inclusion. You'll hear about header guards.
/// #file poop.h
#ifndef POOP_H
#define POOP_H
void* poop(Poop* arg);
#endif
Now I can include the header file in the implementation file poop.c and some other file like main.c, and I know they will always compile successfully and without multiple inclusion, whether they are compiled together or individually, thanks to the header guards.
Salty seadogs write their header guards programmatically or with C++11 function-like macros. If you like books I recommend Jens Gustedt's "Modern C".
It is not pretty, but you can try compiling it once without the typedef (just using the alias) and see whether it compiles or not.
There is no such thing.
It is possible to deactivate the duplicate-typedef compiler error:
"typedef name has already been declared (with same type)"
On the other hand, for some standardized typedefs there is often a preprocessor macro defined, like __bool_true_false_are_defined for bool, that can be used.
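As a small C-flavoured sketch of that last point (the fallback typedef is only an assumption for toolchains without <stdbool.h>), code can key off the feature macro instead of trying to detect the typedef itself:
#if defined(__STDC_VERSION__) && __STDC_VERSION__ >= 199901L
#include <stdbool.h>   /* defines bool, true, false and __bool_true_false_are_defined */
#endif

#ifndef __bool_true_false_are_defined
/* Fallback (assumption): a minimal substitute for pre-C99 toolchains. */
typedef unsigned char bool;
#define true  1
#define false 0
#endif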