I'd like to take advantage of the vectorcall calling convention (or regcall, etc., depending on the compiler), but for the sake of third-party libraries it's not really possible to enable this convention as the default for the entire project. Adding keywords to all functions/methods of a gigantic project doesn't seem like a very good idea either.
Is there a way to select a default calling convention for a class? Or perhaps for a block, something similar to #pragma pack(push/pop)? Or just anything :).
According to this 2013 MSDN blog post about vectorcall, it can be activated in Visual Studio via the /Gv compiler switch.
So a possible answer might be to place the code/classes that should use vectorcall in separate libraries from the code that should use standard calling conventions.
Those libraries can then be built with the appropriate compiler switch.
I would use something similar to the Inline Guard Macro idiom.
So in this case:
#define CALL_CONVENTION __vectorcall
This can be defined depending on other defines.
Then all your functions can look like this:
void CALL_CONVENTION func()
{ ... }
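For example, a sketch of how CALL_CONVENTION might be selected per compiler; everything beyond the MSVC branch is an illustrative assumption:

#if defined(_MSC_VER)
#  define CALL_CONVENTION __vectorcall   // MSVC (and clang-cl)
#elif defined(__INTEL_COMPILER)
#  define CALL_CONVENTION __regcall      // Intel compiler
#else
#  define CALL_CONVENTION                // fall back to the platform default
#endif

void CALL_CONVENTION func();             // picks up whichever convention was chosen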
I want to create an API which, on the call of a function, such as getCPUusage(), redirects to a getCPUusage() function for Windows or Linux.
So I'm using a glue file, api.h:
int getCPUusage() {
#ifdef WIN32
    return getCPUusage_windows();
#endif
#ifdef __gnu_linux__
    return getCPUusage_linux();
#endif
}
So I wonder if using inline would be a better solution, since with what I have, each call goes through an extra layer of indirection.
My question is the following: is it better to use an inline function for every call in this situation?
It depends on the use case of your program. If the consumer is also C++, then inline makes sense.
But suppose you would like to reuse it from C, Pascal, Java, and so on; in that case inline won't work. The caller must import a stable name exported from the library itself, not from a header file.
On Linux a shared library is rather transparent in this respect, while on Windows you need to apply the __declspec(dllexport) keyword, which is not applicable to inline functions.
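A minimal sketch of exporting such a stable, unmangled name (MY_API is a made-up macro; the per-platform helpers are the ones from the question and are assumed to be declared elsewhere):

#ifdef _WIN32
#  define MY_API extern "C" __declspec(dllexport)
#else
#  define MY_API extern "C"
#endif

// Exported under the stable, unmangled name "getCPUusage".
MY_API int getCPUusage() {
#ifdef _WIN32
    return getCPUusage_windows();
#else
    return getCPUusage_linux();
#endif
}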
The answer is: yes, it is indeed more efficient, but probably not worthwhile:
If you're putting these functions in a class, you don't need to write the inline keyword in your situation, because you only have a header file (you don't have any .cpp files, according to your description). Functions that are implemented inside the class definition (in the header file) are automatically treated as inline by the compiler. Note, however, that this is only a hint to the compiler: it may still decide to make your function non-inline if it finds that more efficient. For small functions like yours, it will most likely produce actual inline code.
If you're not putting these functions in a class, I don't think you should bother adding inline either, as (as said above) it's only a hint, and even without those hints, modern compilers will figure out which functions to inline anyway. Humans are much more likely to be wrong about these things than the compiler.
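To illustrate the first point, a minimal sketch (the class name is made up):

// In a header file: defined inside the class body, therefore implicitly inline.
struct CpuMonitor {
    int usage() const { return getCPUusage(); }  // no "inline" keyword needed
};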
I have to build an API for a C++ framework which does some simulation stuff. I have already created a new class with __declspec(dllexport) functions and built the framework into a DLL.
This works fine and I can use the framework within a C# application.
But is there another or a better approach to create an API with C++?
If you want to create a C++-API, exporting a set of classes from a DLL/shared library is the way to go. Many libraries written in C++ decide to offer a C interface though, because pure C interfaces are much easier to bind to foreign languages. To bind foreign languages to C++, a wrapper generator such as SWIG is typically required.
C++-APIs also have the problem that, due to C++ name-mangling, the same compiler/linker needs to be used to build the framework and the application.
It is important to note that the __declspec(dllexport)-mechanism of telling the compiler that a class should be exported is specific to the Microsoft Compiler. It is common practice to put it into a preprocessor macro to be able to use the same code on other compilers:
#ifdef _MSC_VER
# define MY_APP_API __declspec(dllexport)
#else
# define MY_APP_API
#endif
class MY_APP_API MyClass {};
The solution of exporting classes has some serious drawbacks. You won't be able to use the DLL from other languages, because they don't understand C++ name mangling. For the same reason, you won't be able to use compilers other than VS. You may not even be able to use another version of VS, because MS doesn't guarantee that the mangling scheme stays the same across compiler versions.
I'd suggest using a flattened C-style interface, e.g. for a method
MyClass::Method(int i, float f);
export a plain function:
MyClass_Method(MyClass* instance, int i, float f);
You can wrap it inside C# to make it a class again.
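A sketch of what that flattening might look like (the Create/Destroy pair and all names are illustrative):

extern "C" {

__declspec(dllexport) MyClass* MyClass_Create() { return new MyClass(); }

__declspec(dllexport) void MyClass_Destroy(MyClass* instance) { delete instance; }

__declspec(dllexport) void MyClass_Method(MyClass* instance, int i, float f) {
    instance->Method(i, f);
}

}  // extern "C"

On the C# side these bind naturally via [DllImport], with the instance pointer passed around as an IntPtr.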
I have a big C++ project in which I'm trying to implement a debug function that needs classes from other libraries. Unfortunately, these classes share the same names and namespaces as classes used inside the project. I tried to use a static library to avoid multiple definitions, but of course the compiler complains about that. So my question:
Is it possible to create that library for the function without the compiler knowing about the classes called inside the function?
I don't know, something like a "protected function", or like putting all the code from the libraries inside the function body...
Edit: I'm using the g++ compiler.
Max, I know, but so far I see no other way.
Schematically, the problem is:
Project:
#include "a.h"   // (old one)
#include "a2.h"
...
return a->something();
return a2->something(); // debug function
Debug function in a2:
#include "a.h"   // (new one!)
...
return a->something(); // (new one!)
The compilation so far looks like:
g++ project -la -la2
That is a very simplified draft, but that's essentially it.
Maybe you can create a wrapper library which internally links to that outside library and exports its definitions under a different name or namespace.
Try enclosing the #includes for the declarations of the classes that you are using in your debug function in a namespace, but don't use a using clause for that namespace.
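A minimal sketch of that idea, assuming the conflicting header is the a.h from the schematic above, that it declares a class A, and that it is self-contained. Note that wrapping a header in a namespace changes the mangled names, so the library code providing the definitions must be compiled the same way:

namespace debug_impl {
#include "a.h"   // the new a.h; its declarations now live in debug_impl::
}

int debugSomething(debug_impl::A* a) {
    return a->something();   // unambiguously the new class
}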
There are a few techniques that may help you, but that depends on what the "debug version" of the library does.
First, it's not unheard of to have #ifdef blocks inside functions that do additional checking depending on whether the program was built in debug mode; the C assert macro behaves this way (see the sketch after this list).
Second, it's possible that the "debug version" does nothing more than log messages. It's easy enough to include the logging code in both debug and release versions, and make the decision to actually log based on some kind of "priority" parameter for each log message.
Third, you may consider using an event-based design where functions can, optionally, take objects as parameters that have certain methods, and then if interesting things happen and the function was passed an event object, the function can call those methods.
Finally, if you're actually interested in what happens at a lower level than the library you're working on, you can simply link to debug versions of those lower level libraries. This is a case of the first option mentioned above, applied to a different library than the one you're actually working on. Microsoft's runtime libraries do this, as do Google's perftools and many "debugging malloc" libraries.
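A minimal sketch of the first technique, keyed off the standard NDEBUG macro that assert itself honors (the function and its logging are made up):

#include <cassert>
#include <cstdio>

void process(const int* data, int n) {
    assert(data != nullptr);                     // compiled out when NDEBUG is defined
#ifndef NDEBUG
    std::fprintf(stderr, "process(n=%d)\n", n);  // extra debug-only logging
#endif
    // ... actual work ...
}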
After browsing some old code, I noticed that some classes are defined in this manner:
MIDL_INTERFACE("XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX")
Classname : public IUnknown {
    /* class members ... */
};
However, the macro MIDL_INTERFACE is defined as:
#define MIDL_INTERFACE(x) struct
in C:/MinGW/include/rpcndr.h (somewhere around line 17). The macro itself is rather obviously entirely pointless, so what's the true purpose of this macro?
In the Windows SDK version that macro expands to
struct __declspec(uuid(x)) __declspec(novtable)
The first one allows use of the __uuidof keyword, which is a nice way to get the GUID of an interface from the type name. The second one suppresses generation of the v-table, which is never used for an interface; a space optimization.
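A small MSVC-only sketch of what __declspec(uuid) buys you; the interface name is made up and the GUID shown is IUnknown's, used purely for illustration:

#include <guiddef.h>

// Attach a GUID to the type; __uuidof can then recover it from the name alone.
struct __declspec(uuid("00000000-0000-0000-C000-000000000046")) IMyInterface;

const GUID& iid = __uuidof(IMyInterface);  // no need to pass GUIDs around by hand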
This is because MinGW does not support COM (or rather, supports it extremely poorly). MIDL_INTERFACE is used when defining a COM component, and it is generated by the IDL compiler, which generates COM type libraries and class definitions for you.
On MSVC, this macro typically expands to more complicated initialization and annotations to expose the given C++ class to COM.
If I had to guess, it's for one of two use cases:
It's possible that there's an external tool that parses the files looking for declarations like these. The idea is that by having the macro evaluate to something harmless, the code itself compiles just fine, but the external tool can still look at the source code and extract information out of it.
Another option might be that the code uses something like the X Macro Trick to selectively redefine what this preprocessor directive means so that some other piece of the code can interpret the data in some other way. Depending on where the #define is this may or may not be possible, but it seems reasonable that this might be the use case. This is essentially a special-case of the first option.
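For reference, a tiny self-contained illustration of the X-macro idea (purely illustrative, not from the SDK): the same list is interpreted two different ways by redefining the macro between uses.

#define COLOR_LIST \
    X(Red)         \
    X(Green)       \
    X(Blue)

#define X(name) name,
enum Color { COLOR_LIST };                    // expands to: Red, Green, Blue,
#undef X

#define X(name) #name,
const char* colorNames[] = { COLOR_LIST };    // expands to: "Red", "Green", "Blue",
#undef X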
I wonder what the difference is between:
typedef BOOL (WINAPI *PROCESSWALK)(HANDLE hSnapshot, LPPROCESSENTRY32 lppe);

static PROCESSWALK pProcess32First = (PROCESSWALK)GetProcAddress(hKernel, "Process32First");
...
pProcess32First(...);
(What is hKernel? It's the handle of the module that exports the function; you can obtain it with GetModuleHandle().)
and
#include <Tlhelp32.h>
...
Process32First(...);
I wonder what the differences are and which one I should use. Is there any difference in terms of best practices?
NOTE: my answer assumes that the function is available either way; there are other things to consider if you are after non-exported functions.
If you use LoadLibrary and GetProcAddress, you have the option of running with reduced functionality if the required library isn't there. If you include the header and link against the import library directly, and the DLL isn't there (or doesn't have the export due to a wrong version), your app will simply fail to load.
It really only makes a difference if you want to use a function that isn't in all versions of a given dll.
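A minimal sketch of that fallback pattern; the DLL name, function name, and signature are all made up:

#include <windows.h>

typedef BOOL (WINAPI *OptionalFn)(DWORD);            // hypothetical signature

void useOptionalFeature() {
    HMODULE h = LoadLibraryW(L"feature.dll");        // hypothetical DLL
    if (!h) return;                                  // run with reduced functionality
    OptionalFn fn = (OptionalFn)GetProcAddress(h, "OptionalFunction");
    if (fn) fn(42);                                  // call only if this version exports it
    FreeLibrary(h);
}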
In addition to what Evan said (which is correct), another important difference (IMHO) is that if you're dynamically loading functions, you need a function-pointer typedef to cast the result of GetProcAddress to before you can call the function. Unless the header file that declares the function prototype for static linkage also has a mechanism for generating the function-pointer typedef from the same definition, you end up with a duplicate of the prototype, probably in your own code. If the external header definitions are ever updated (for example, with new definitions for 64-bit data types), you risk runtime errors in your application unless you also update your duplicated typedefs; these mismatches will not be caught at compile time, because of the C-style cast to the function typedef.
It's a subtle issue, but an important one to consider. I would use implicit ("static") linking if you can because of that issue, and if you're using dynamic loading, be aware of the issue, and structure your code to avoid problems in the future as best you can.
See here for Microsoft's explanation.
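One way to reduce that duplication risk, sketched here with decltype (C++11): derive the pointer type from the header's own prototype, so the two cannot drift apart. The W-suffixed name is used explicitly to sidestep the UNICODE macro remapping in tlhelp32.h.

#include <windows.h>
#include <tlhelp32.h>

// The pointer type stays in lock-step with the prototype in the header.
using Process32FirstFn = decltype(&Process32FirstW);

Process32FirstFn pProcess32First =
    (Process32FirstFn)GetProcAddress(GetModuleHandleW(L"kernel32.dll"),
                                     "Process32FirstW");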
Implicit linking (using the *.lib) is simpler.
As for kernel libraries, there is no other difference.
I would take the first approach if I had optional plug-in libraries that the user may or may not have. For necessary libraries I would use the second, since it is a lot less code to get at the functions.