I have exported a method from a C++ DLL and call it from a VB.net forms application. The C++ method currently has no return value (void), but I want to improve it so that it returns an int representing a series of error codes. (I plan to return zero if all went well.)
Where and how is the best place to define these error codes?
Should I do the following at the top of my CPP file:
#define ERR_NEGATIVE_CELL_SIZE 1
#define ERR_INVALID_FILE_PATH 2
etc
The VB.net application will also define these same codes and then show UI messages to the user based on the code.
Obviously I would prefer to throw an exception in the DLL and catch it (along with the relevant exception message) in VB.net, but this doesn't seem to be possible using the extern "C" __declspec(dllexport) method.
Happy to hear about alternative design patterns. I also plan to expose the DLL methods via a C++ console executable, so storing the error messages once in the DLL and having them available to both the console and UI applications is ideal.
If you want the error codes to be available to other compilation units then they are best placed in a header file. Typically when writing library code you would create one or more header files that declare all the constants, types, functions and classes that are needed to use the library. The implementations are then compiled into the library. The consumer of the library includes your headers and an import library.
Regarding your constants, you are proposing using #define to declare them. Don't do that. Consider using constants or enums. The pre-processor is generally something to use as little as possible.
Please avoid the preprocessor wherever possible.
For your scenario, defining an enum would be reasonable.
Define it next to the function prototype.
For exposing the translations, use a translation function and export that too.
Something like:
size_t TranslateError(int error, char* buffer, size_t size);
Returns the size of the translated message; the message is written into the buffer if the return value is <= size.
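As a rough sketch of how the pieces could fit together in a shared header (the error names, function names, and messages below are placeholders, not something from your code):

// my_library.h -- included by the DLL and the console executable, mirrored on the VB.net side
enum MyLibError
{
    ERR_OK                 = 0,
    ERR_NEGATIVE_CELL_SIZE = 1,
    ERR_INVALID_FILE_PATH  = 2
    // ...
};

extern "C" __declspec(dllexport) int    __cdecl DoCalculation(/* ... */);   // returns a MyLibError value
extern "C" __declspec(dllexport) size_t __cdecl TranslateError(int error, char* buffer, size_t size);

// my_library.cpp -- one possible implementation of the translation function
#include <cstring>
#include "my_library.h"

size_t __cdecl TranslateError(int error, char* buffer, size_t size)
{
    const char* msg = "Unknown error";
    switch (error)
    {
    case ERR_OK:                 msg = "Success"; break;
    case ERR_NEGATIVE_CELL_SIZE: msg = "Cell size must not be negative"; break;
    case ERR_INVALID_FILE_PATH:  msg = "The file path is invalid"; break;
    }
    const size_t needed = std::strlen(msg) + 1;   // include the terminating NUL
    if (needed <= size)
        std::memcpy(buffer, msg, needed);         // copy only if the caller's buffer is large enough
    return needed;
}

The VB.net application can P/Invoke TranslateError in the same way and show the returned message directly, so the message strings live only in the DLL.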
I am working on an API that wraps C++ behavior in functions with a C calling convention. The API is composed of a collection of shared libraries (DLLs) that may be used from a variety of languages. In instances where a C++ class object is passed across the DLL boundary, a C opaque pointer or "handle" is used to refer to the underlying C++ object, similar to the Win32 API. An example header file prototype of such a wrapper function is
extern "C" { __declspec(dllexport) int __cdecl MyObjConfig(MyObj_t* handle); }
Many of the API functions/classes interface with hardware peripherals. In many cases it's not practical to test on the representative hardware system. I would like to find a way to mock the lower-level components so that higher-level libraries, or executables using those libraries, can be tested in a simulated environment. However, I'm loath to include phrases in the underlying source code such as
if(is_test) { return 0; }
For example, I would like to mock the behavior of a function float GetSensorData() so that I can test an executable that links against GetSensorData's parent DLL and calls GetSensorData, by returning a reasonable imitation of the sensor's normal data without setting up the sensor explicitly. Also, I would like to avoid having to alter the source of the executable beyond making sure it links against an imitation of GetSensorData's DLL.
A key part of my interest in an automated framework for creating the DLL mocks is that I don't want to have to maintain two separate versions of each library: a test version and an actual version. Rather, I would like to work on the actual version and have the "mock" build generated programmatically.
Can anyone suggest a good way to do this? I've looked at Gtest and CMock / Unity. Both seem fine for testing the DLLs themselves but don't seem well equipped to accommodate the
extern "C" { __declspec(dllexport) int __cdecl
function prototype declarations.
Thanks!
If you have a function that you wish to mock that is located in a library, you can do either function pointer substitution or link-time substitution. I wrote about it more in depth in this answer: https://stackoverflow.com/a/65814339/4441211
I personally recommend function pointer substitution since it is much simpler and more straightforward.
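To make the function pointer idea concrete for the GetSensorData example, a minimal sketch could look like the following (the SetSensorDataImpl hook and all names are assumptions, not part of any existing API):

// sensor.h -- exported interface of the sensor DLL (sketch)
extern "C" {
    typedef float (__cdecl *GetSensorDataFn)(void);

    __declspec(dllexport) float __cdecl GetSensorData(void);
    // Test hook: swap out the implementation behind GetSensorData at runtime.
    __declspec(dllexport) void  __cdecl SetSensorDataImpl(GetSensorDataFn fn);
}

// sensor.cpp -- the DLL forwards through a pointer that defaults to the real reader
static float __cdecl ReadHardwareSensor(void) { /* talk to the device */ return 0.0f; }
static GetSensorDataFn g_impl = &ReadHardwareSensor;

float __cdecl GetSensorData(void)                   { return g_impl(); }
void  __cdecl SetSensorDataImpl(GetSensorDataFn fn) { g_impl = fn ? fn : &ReadHardwareSensor; }

// In a test, inject a canned reading without touching hardware:
//   static float __cdecl FakeSensorData(void) { return 21.5f; }
//   SetSensorDataImpl(&FakeSensorData);

With link-time substitution you would instead build a second DLL that exports GetSensorData with the same signature and point the test executable at that DLL, so the production source stays untouched.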
EDIT: I found a similar question, and the answers are basically that windows.h is bad and you must either rename your functions or #undef the macros: Other's library #define naming conflict
However, I believe mine is still different due to the conflicting behavior of LoadLibrary under debug and release builds.
I am programming on Windows using Visual Studio and I ran into a few peculiar issues with preprocessor directives used by windows.h and the headers it includes.
Our project had a function in its own namespace, MyProject::FileManager::CreateFile(). After including windows.h, our build failed with a linker error stating that it could not resolve MyProject::FileManager::CreateFileW (note the W at the end of the function name). This was not a static function; it was a member function of a FileManager object that was being called with file_manager.CreateFile(...).
When highlighting the function in Visual Studio a tooltip displayed the following:
#define CreateFile CreateFileW
We were puzzled but just renamed the function as a workaround. However, later we ran into a similar issue with the LoadLibrary function we were trying to use from the Windows API. When compiling in Debug mode, LoadLibrary was defined as LoadLibraryW(), which takes an LPCWSTR (wide string) as a parameter. When I tried building in Release mode, the function was instead defined as LoadLibraryA(), which takes a normal LPCSTR. This broke our build because the code was written under the assumption that LoadLibrary took an LPCWSTR.
So, my question is, how should a programmer deal with this? Should I just wrap my calls to LoadLibrary with #ifdef's checking for Debug or Release mode? Or is there a more simple solution?
Also, I found an interesting header file on github which appears to have been created for the sole purpose of #undef'ing all these function names:
https://github.com/waTeim/poco/blob/master/include/Poco/UnWindows.h
There are a few things I generally do to cope with this:
Isolate all Windows system calls in a Windows-specific layer. For example, if I'm working with the file system API, I'll typically have win/filesystem.h and win/filesystem.cpp to wrap all the calls. (This is also a good place to convert Win32 errors into std::system_error exceptions, remove unneeded/obsolete/reserved parameters, and generally make the Windows API more C++ friendly.)
Avoid using Windows-specific types. Allowing definitions like DWORD, LPTSTR and BOOL to infiltrate all levels of your code makes dealing with Windows.h that much more difficult. (And porting too.) The only files that should #include <Windows.h> should be your wrapper C++ files.
Avoid using the Windows redirection macros yourself. For example, your wrapper layer should call CreateFileW or CreateFileA directly instead of relying on the macro. That way, you don't need to depend on the Unicode/Multi-byte project setting.
Example
win/filesystem.h might contain definitions like this:
namespace win32
{
class FileHandle
{
void* raw_handle_;
public:
// The usual set of constructors, destructors, and accessors (usually move-only)
};
FileHandle CreateNewFile(std::wstring const& file_name);
FileHandle OpenExistingFile(std::wstring const& file_name);
// and so on...
}
Any part of your code can include this file to access the file system API. Since win/filesystem.h does not itself include <Windows.h>, the client code will be uncontaminated by the various Win32 macros.
The problem here is that windows.h tries to support two different string models: strings consisting of single-byte characters and strings encoded in Unicode (defined by Microsoft as two-byte characters). Almost all of the Windows API functions that take strings have two different versions, one that takes single-byte character strings and one that takes two-byte character strings. You're supposed to write your code with the generic names (such as CreateFile, LoadLibrary, etc.) and let windows.h take care of mapping those names to the actual API functions. For single-byte characters those are CreateFileA and LoadLibraryA; for two-byte characters they are CreateFileW and LoadLibraryW. And there are a bajillion more, of course. You choose the model at compile time by defining (or not defining) the macro UNICODE in every compilation unit.
Incidentally, the 'A' suffix stands for "ANSI", and the 'W' suffix stands for "wide character".
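Schematically, the mapping inside the Windows headers boils down to something like this (a simplified sketch, not the literal header contents):

#ifdef UNICODE
#define CreateFile  CreateFileW
#define LoadLibrary LoadLibraryW
#else
#define CreateFile  CreateFileA
#define LoadLibrary LoadLibraryA
#endif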
This is particularly insidious when you write code that tries to isolate the Windows dependencies into a handful of source files. If you write a class that has a member function named CreateFile, it will be seen in source files that don't use windows.h as CreateFile, and in source files that do use windows.h as CreateFileA or CreateFileW. Result: linker errors.
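For example (an illustrative class, not from any real codebase):

#include <string>

class FileManager {
public:
    // In any translation unit that has seen <windows.h> with UNICODE defined,
    // this declaration silently becomes CreateFileW.
    bool CreateFile(const std::wstring& path);
};

// a.cpp: includes <windows.h>, then this header -> call sites reference FileManager::CreateFileW
// b.cpp: includes only this header              -> defines FileManager::CreateFile
// Result: "unresolved external symbol" at link time.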
There are several ways around this problem:
always #include <windows.h> in every header file; that's a compiler performance killer, but it will work. (windows.h was a major motivator for precompiled headers in early C++ compilers targeting Windows.)
always use the doctored name, either CreateFileA or CreateFileW; that will work, but at the cost of losing the flexibility of being able to change the underlying string model when you're making API calls; whether that matters is up to you.
don't use any of the Windows API names; potentially painful if you use the same naming convention as Windows; not at all painful if you use snake case, i.e., all lower-case with underbars to separate words. For example, create_file. Alternatively, use a local prefix or suffix: MyCreateFile or CreateFileMine.
I have a number of C++ projects in my solution that were written by other teams before I started working on my UWP app, and all of these projects use std::strings. So, to ease the communication between the other projects and my WinRT modules, I wrote some string conversion functions to go from std::strings to Platform::Strings and vice versa.
I'm in the process of converting my UWP codebase into WinRT modules and I'm coming across a recurring problem: because WinRT modules don't allow you to have classes or functions with public native types, I am unable to have my string functions publicly accessible. A private, protected, or internal declaration is fine for passing around native types, just not public.
Many of my modules need to communicate down into the native C++ code and I don't want to have to redefine my string functions again and again for each individual file that needs a std::string.
Is there anything I can do so I can reuse my string functions across WinRT modules? Has anyone else had a similar problem? Any suggestions are greatly appreciated!
Thank you
You have two options.
Make those functions inline, and define all of them in a header file. Then include the header file everywhere you want to consume them. This is the more straightforward solution, as it doesn't require you to mess with your build system.
You can compile those functions into one of your DLLs and import them into the others. Let's call the DLL where you put your functions "StringModule.dll". You'll need to put those functions in a .cpp/.h file pair, then compile that .cpp file into StringModule.dll. Then, annotate your functions with a define that evaluates to __declspec(dllexport) when building StringModule.dll, and __declspec(dllimport) when building all the other DLLs. For instance:
#ifndef BUILDING_STRING_CONVERSIONS_DLL // This should be defined to 1 when building StringModule.dll
#define BUILDING_STRING_CONVERSIONS_DLL 0
#endif
#if BUILDING_STRING_CONVERSIONS_DLL
#define MY_STRING_API __declspec(dllexport)
#else
#define MY_STRING_API __declspec(dllimport)
#endif
namespace MyStringFunctions
{
MY_STRING_API Platform::String^ ConvertStdStringToPlatformString(const std::string& str);
MY_STRING_API std::string ConvertPlatformStringToStdString(Platform::String^ str);
}
When you build StringModule.dll, a StringModule.lib file will be created next to it. You'll have to pass its path to the linker as an argument when building all the DLLs that consume your string functions. In all the places where you want to use your DLL, just include that header file and use the functions as usual.
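Usage from any other module is then straightforward (assuming the header pair above is named, say, StringConversions.h; the name is just for illustration):

#include <string>
#include "StringConversions.h"   // hypothetical header containing the declarations above

void Example(Platform::String^ platformStr)
{
    // Down into native code...
    std::string narrow = MyStringFunctions::ConvertPlatformStringToStdString(platformStr);
    // ...and back up to WinRT.
    Platform::String^ roundTripped = MyStringFunctions::ConvertStdStringToPlatformString(narrow);
}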
I have to build an API for a C++ framework which does some simulation stuff. I have already created a new class with __declspec(dllexport) functions and built the framework into a DLL.
This works fine and I can use the framework within a C# application.
But is there another or a better approach to create an API with C++?
If you want to create a C++-API, exporting a set of classes from a DLL/shared library is the way to go. Many libraries written in C++ decide to offer a C interface though, because pure C interfaces are much easier to bind to foreign languages. To bind foreign languages to C++, a wrapper generator such as SWIG is typically required.
C++-APIs also have the problem that, due to C++ name-mangling, the same compiler/linker needs to be used to build the framework and the application.
It is important to note that the __declspec(dllexport) mechanism of telling the compiler that a class should be exported is specific to the Microsoft compiler. It is common practice to put it into a preprocessor macro so the same code can be used with other compilers:
#ifdef _MSC_VER
# define MY_APP_API __declspec(dllexport)
#else
# define MY_APP_API
#endif
class MY_APP_API MyClass {};
The solution of exporting classes has some serious drawbacks. You won't be able to write DLLs in other languages, because they don't support C++ name mangling. Furthermore, you won't be able to use compilers other than VS (for the same reason), and you may not even be able to use another version of VS, because MS doesn't guarantee that the mangling scheme stays the same across compiler versions.
I'd suggest using a flattened C-style interface, e.g.
MyClass::Method(int i, float f);
Export as:
MyClass_Method(MyClass* instance, int i, float f);
You can wrap it inside C# to make it a class again.
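A slightly fuller sketch of that flattening pattern (all names illustrative) usually also exports create/destroy functions so the caller controls the object's lifetime; for pure C consumers you would expose an opaque handle typedef instead of the class name:

extern "C" {
    __declspec(dllexport) MyClass* __cdecl MyClass_Create();
    __declspec(dllexport) void     __cdecl MyClass_Destroy(MyClass* instance);
    __declspec(dllexport) int      __cdecl MyClass_Method(MyClass* instance, int i, float f);
}

// The implementation simply forwards to the real C++ class.
MyClass* __cdecl MyClass_Create()                   { return new MyClass(); }
void     __cdecl MyClass_Destroy(MyClass* instance) { delete instance; }
int      __cdecl MyClass_Method(MyClass* instance, int i, float f)
{
    instance->Method(i, f);
    return 0;   // success, or an error code
}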
I'm working on a library in C++, and part of it is an abstraction layer over several OS functions that I need. I started implementing it with the Windows API but plan to add support for other platforms with #ifdef and such.
What is starting to become a problem, however, is that including Windows.h propagates to the whole rest of the code where I don't need it, and especially, as this is a library, it will also contaminate the code of other people who use it. I wouldn't really mind if the Windows API used a namespace or some clear way to distinguish its code, but instead it #defines a lot of pretty common words such as small, near, far (lowercase), and a lot of the function names are also pretty general.
So I would really like it if only the platform-specific part of my code had access to these and they weren't included anywhere else. I know the obvious solution would be to only include Windows.h in the CPP files, but that isn't always possible because some of the platform-specific data types or structures are class member variables, such as:
class Window {
public:
// ...
private:
HWND handle;
};
So is there a way to accomplish this?
Thanks.
Use the pimpl idiom ( http://en.wikipedia.org/wiki/Opaque_pointer ). Limitations of the C++ programming language make it necessary to use tricks like this to get information hiding.
One way of doing that is the same way you would in C (where you don't have this problem at all, for the following reason): forward-declare a struct in the header file and define its contents in the implementation file.
Most people do that by extracting the entire private part of your example into its own struct, whose contents are defined only in the implementation file, and putting just a pointer to it in the header, as the sole remaining member of the private part of the class.
Also, #define WIN32_LEAN_AND_MEAN before the #include in order to strip down what windows.h gives you.
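For the Window example above, a pimpl version could look roughly like this (a sketch, not drop-in code):

// Window.h -- clients no longer need <Windows.h>
class Window {
public:
    Window();
    ~Window();
    // ...
private:
    struct Impl;     // defined only in Window.cpp
    Impl* impl_;     // all Win32-specific members live behind this pointer
};

// Window.cpp -- the only file that includes <Windows.h>
#define WIN32_LEAN_AND_MEAN
#include <Windows.h>
#include "Window.h"

struct Window::Impl {
    HWND handle = nullptr;
};

Window::Window() : impl_(new Impl()) {}
Window::~Window() { delete impl_; }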