Implementing C library function in C++

What are the disadvantages of implementing a C library in C++? The library is going to be used to build a Windows application for a regular PC using Visual Studio 2008 or newer. It is not clear why the specs state that it should be a C library. I am guessing that what they want is a plain C API, not a pure C library. But my boss disagrees.
Anyway, what I want to do is to extern "C" all function declarations and use C++ in the implementation files. I did some testing and everything worked just fine even when the application was compiled as C (by changing the project option in Visual Studio).
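To make the plan concrete, here is a minimal sketch of what I mean (mylib.h and mylib_sum are placeholder names, not from the actual specs):

/* mylib.h -- what C (and C++) clients include */
#ifdef __cplusplus
extern "C" {
#endif

int mylib_sum(const int *values, int count);

#ifdef __cplusplus
}
#endif

/* mylib.cpp -- compiled as C++, free to use C++ internally */
#include <numeric>
#include "mylib.h"

extern "C" int mylib_sum(const int *values, int count)
{
    return std::accumulate(values, values + count, 0);
}

The C application only ever sees the unmangled C declaration; the C++ stays an implementation detail of the library's object files.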

I've seen people do that for, say, exposing STL collections to C programs. If you are sure that the library will only be used in environments with sane C/C++ compilers (say, VS and gcc only), I think this is a pretty safe thing to do from the technical perspective.
Now, it sounds like you have some sort of outside requirement at play here, but obviously we can't comment on that. Might be worth double-checking with the requirements source?
UPDATE: oh, I should mention that it will affect the DLLs that your library will require. For example, the C++ runtime DLL will need to be loaded in addition to the CRT.

extern "C" is used all the time to port functionality from C to C++. For instance, the new operator in turn calls malloc() from the standard C library. This is one good example of a C library being given a C++ look: new makes it much easier to allocate memory, and on top of that C++ adds a lot of functionality, such as operator overloading, which is not available in C. My guess would be that the point here is to add more functionality and to make neater interfaces.
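As a rough illustration of that "C library with a C++ face" idea, here is a sketch of a replacement global operator new/delete pair that simply forwards to the C allocator (real implementations are more involved, e.g. they loop on the new handler; this is only to show the relationship):

#include <cstdlib>
#include <new>

// Forward C++ allocation to the C runtime's malloc/free (illustrative only).
void *operator new(std::size_t size)
{
    if (void *p = std::malloc(size ? size : 1))
        return p;
    throw std::bad_alloc();
}

void operator delete(void *p) noexcept
{
    std::free(p);
}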
If you are thinking about disadvantages, they would mostly be compiler-specific problems, where the ABI generated for a C++ program differs from that of a C program; if the compiler cannot bridge the two, then you're stuck with it.
I am not sure if this is what you are looking for; if not, try this link, it might be of some assistance.
http://www.informit.com/guides/content.aspx?g=cplusplus&seqNum=180

If they are going to use it from a C program, i.e. the main() function is compiled by a C compiler, then you have to be very careful with your C++ library. The problem is that the C program will not execute any constructors for static variables. So you have to avoid any static variables with constructors. This is easy for your library itself, but you have to check every C++ function your library calls to see whether it relies on the existence of a statically initialized variable (e.g. std::cout, std::cin, etc.).
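One hedged way to sidestep that, sketched below, is to give the library an explicit init/shutdown pair (mylib_init and mylib_shutdown are made-up names) and create any C++ state there instead of relying on static construction:

#include <string>

static std::string *g_prefix = 0;   /* plain pointer: no constructor has to run */

extern "C" void mylib_init(void)
{
    g_prefix = new std::string("mylib: ");
}

extern "C" void mylib_shutdown(void)
{
    delete g_prefix;
    g_prefix = 0;
}

A C main() would then call mylib_init() before any other library function; whether that is acceptable depends on the API the specs demand.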

Related

adding some C++ to a library used by C programs?

I have a long-standing C library used by C and C++ programs.
It has a few compilation units (in other words, source files) that are totally C++, but this has never been a problem. My understanding is that linkers (at least on Linux, Windows, etc.) always work at the file-by-file level, so an object file in a library that isn't referred to has no effect on the linking, isn't put in the binary, and so on. The C users of the library never refer to the C++ symbols, and the library doesn't internally, so the resulting linked app is C-only. So while it has always worked perfectly, I don't know if that's because the C++ doesn't make it past the linking stage, or because, more deeply, this kind of mixing would always work even if I did mix languages.
For the first time I'm thinking of adding some C++ code to the existing C API's implementation.
For purposes of discussion let us say I have a C function that does something, and logs it via stdout, and since this is buffered separately from cout, the output can become confusing. So let us say this module has an option that can be set to log to cout instead of stdout. (This is a more general question, not merely about getting cout and stdout to cooperate.) The C++ code might or might not run, but the dependencies will definitely be there.
In what way would this impact users of this library? It is widely used so I cannot check with the entire user base, and as it's used for mission-critical apps it'd be unacceptable to make a change that makes links start failing, at least unless I supply a release note explaining the problem and solution.
Just as an example of a possible problem, I know compilers have "hidden" libraries of support functions that are necessary for C and C++ programs. There are obviously also the standard C and C++ libraries, which you normally don't have to link to explicitly. My concern is that the compiler might not know to do these things.

Can C++ code reliably interact with other C++ code?

In C, I'm used to being able to write a shared library that can be called from any client code that wishes to use it simply by linking the library and including the related header files. However, I've read that C++'s ABI is simply too volatile and nonstandard to reliably call functions from other sources.
This would lead me to believe that creating truly shared libraries that are as universal as C's is impossible in C++, but real-world implementations seem to indicate otherwise. For example, Node.js exposes a very simple module system that allows plain C++ functions (without extern "C") to be exported dynamically using the NODE_SET_METHOD function.
Which elements of a C++ API are safe to expose, if any, and what are the common methods of allowing C++ code to interact with other pieces of C++ code? Is it possible to create shared libraries that can expose C++ classes? Or must these classes be individually recompiled for each program due to the inconsistent ABI?
Yes, C++ interop is difficult and filled with traps. The cold hard rules are that you must use the exact same compiler version with the exact same compiler settings to build the modules, and ensure that they share the exact same CRT and standard C++ libraries. Breaking those rules tends to get you C++ classes that don't have the same layout on either end of the divide, and trouble with memory management when one module allocates an object using a different allocator from the module that deletes the object. These are problems that lead to very hard to diagnose runtime failures, when code uses the wrong offset to access a class member, leaks memory, or corrupts the heap.
Node.js avoids these problems by, first of all, not exporting anything. NODE_SET_METHOD() doesn't do what you think it does; it simply adds a symbol to the JavaScript engine's symbol table, along with a function pointer that's called when the function is invoked from script. Furthermore, it is an open source project, so building everything with the same compiler and runtime library isn't a problem.
This:
For example, Node.js exposes a very simple module system that allows plain C++ functions (without extern "C") to be exported dynamically using the NODE_SET_METHOD function.
is wrong. You can see that they are using an extern "C" there in the init() function, which is clearly what node.js is calling; init() then forwards on to whichever C++ function they want, which isn't itself exposed.
As explained in this question How does an extern "C" declaration work? - When the compiler compiles the code, it mangles the function names, class names and namespace names. The reason it does this is because there can very easily be name clashes, for instance with overloaded functions.
Read about it more here: http://en.wikipedia.org/wiki/Name_mangling
The only way to refer to and look up a function is if the extern "C" declaration is used, which forces the compiler not to mangle the name. I.e. in the example above, the function init will be called init, whereas the function foo will be called something like _ugAGE (I made this up, because it doesn't matter; it isn't for human consumption).
In summary, you can expose any C++ to any other language, but the entry point to the library must be one or more extern "C"'d global functions as they are the only way to refer to an unmangled name.
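A common shape for such an entry point is an opaque-handle C API over a C++ class; the names below (widget, widget_create, etc.) are made up for illustration:

/* widget.h -- the only thing other-language clients ever see */
#ifdef __cplusplus
extern "C" {
#endif

typedef struct widget widget;                /* opaque handle */
widget *widget_create(const char *name);
int     widget_item_count(const widget *w);
void    widget_destroy(widget *w);

#ifdef __cplusplus
}
#endif

/* widget.cpp -- compiled as C++; the class never crosses the boundary */
#include <string>
#include <vector>
#include "widget.h"

struct widget {
    std::string name;
    std::vector<int> items;
};

extern "C" widget *widget_create(const char *name)
{
    widget *w = new widget;
    w->name = name;
    return w;
}

extern "C" int widget_item_count(const widget *w)
{
    return (int)w->items.size();
}

extern "C" void widget_destroy(widget *w)
{
    delete w;
}

Only the unmangled widget_* functions are visible to the caller; everything C++ (std::string, std::vector, the class layout) stays on one side of the boundary.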
Neither the C nor the C++ standards define an ABI. That is entirely left up to the implementation. The reason it's harder to get shared/dynamic libraries working for C++, is that C++ added things like classes, polymorphism, templates, exceptions, function overloading, STL, ...
So, the real source of information for you, is your compilers' documentation, as well as a corresponding set of guidelines for your library API to avoid any issues with any of the implementations your library will be built for. It's harder in C++ (the set of guidelines will likely be quite a bit bigger than for C, and you might have to work with a subset of C++), but not impossible.

Is the D language completely dependent upon the D runtime?

Lately, I've been studying the D language, and I've always been kind of confused about the runtime.
From the information I can gather about it (which isn't a whole lot), I understand that it's sort of a, well, runtime that helps with some of D's features, like garbage collection; it runs alongside your own programs. But since D is compiled to machine code, does it really need features such as garbage collection if our program doesn't need them?
What really confuses me is statements such as:
"You can write an operating system in D."
I know that you can't really do that, because there's more to an operating system than any compiled language can provide without using some assembly. But if you had a kernel that called D code, would the D runtime prevent D from running in such a bare-bones environment? Or is the D runtime simpler than that? Can it be thought of as simply an "automatic" inclusion of source files/libraries that, when compiled with your application, make no more of a difference than writing that code yourself?
Maybe I'm just looking at it all wrong. But I'm sure some information on the subject could do a lot of people good.
Yes, indeed, you can implement the functions of DRuntime that the compiler expects right in your main module (or wherever), compile without a runtime, and it'll Just Work (tm).
If you just build your code without a runtime, the compiler will emit errors when it's missing a symbol that it expects to be implemented by the runtime. You can then go and look at how DRuntime implements it to see what it does, and then implement it in whatever way you prefer. This is what XOmB, the kernel written in D (language version 1, though, but same deal), does: http://xomb.net/index.php?title=Main_Page
A lot of DRuntime isn't actually used by many applications, but it's the most convenient way to include the runtime components of D into applications, so that's why it's done as a static library (hopefully a shared library in the future).
It's pretty much the same as C and C++, I expect. The language itself compiles to native code and just runs. But there is some code that is always needed to set everything up to run your program, for example processing command line parameters.
And some more complex language facilities are better implemented by calling some standard code rather than generating the code everywhere they are used. For example, throwing an exception needs to find the relevant handler function. No doubt the compiler could insert the code to do that everywhere it was used, but it's much more sensible to write the code in a library and call that. Plus there are many pre-written library functions in the standard library.
All of this taken together is the runtime.
If you write C you can use it to write an operating system, because you can write the startup code yourself, you can write all the code for handling memory allocation yourself, and you can write all the code for standard functions like strcat yourself instead of using the ones provided in the runtime. But you'd not want to do that for any application program.

What problems can appear when using G++ compiled DLL (plugin) in VC++ compiled application?

I use an application compiled with the Visual C++ compiler. It can load plugins in the form of a .dll. It is rather unimportant what exactly it does; fact is:
This includes calling functions from the .dll that return a pointer to an object of the application's API, etc.
My question is: what problems may appear when the application calls a function from the .dll, retrieves a pointer from it, and works with it? For example, something that comes to mind is the size of a pointer. Is it different in VC++ and G++? If yes, this would probably crash the application?
I don't want to use the Visual Studio IDE (which is unfortunately the "preferred" way to use the application's SDK). Can I configure G++ to compile like VC++?
PS: I use MINGW GNU G++
As long as both application and DLL are compiled on the same machine, and as long as they both only use the C ABI, you should be fine.
What you can certainly not do is share any sort of C++ construct. For example, you mustn't new[] an array in the main application and let the DLL delete[] it. That's because there is no fixed C++ ABI, and thus no way in which any given compiler knows how a different compiler implements C++ data structures. This is even true for different versions of MSVC++, which are not ABI-compatible.
All C++ language features are going to be entirely incompatible, I'm afraid. Everything from the name-mangling to memory allocation to the virtual-call mechanism are going to be completely different and not interoperable. The best you can hope for is a quick, merciful crash.
If your components only use extern "C" interfaces to talk to one another, you can probably make this work, although even there, you'll need to be careful. Both C++ runtimes will have startup and shutdown code, and there's no guarantee that whichever linker you use to assemble the application will know how to include this code for the other compiler. You pretty much must link g++-compiled code with g++, for example.
If you use C++ features with only one compiler, and use that compiler's linker, then it gets that much more likely to work.
This should be OK if you know what you are doing, but there are some things to watch out for:
I'm assuming the interface between EXE and DLL is a "C" interface, or something COM-like where the only C++ classes are exposed through pure-virtual interfaces. It gets messier if you are exporting a concrete class through a DLL.
1. 32-bit vs. 64-bit. A 32-bit app won't load a 64-bit DLL and vice versa. Make sure they match.
2. Calling convention: __cdecl vs. __stdcall. Oftentimes Visual Studio apps are compiled with flags that assume __stdcall as the default calling convention (or the function prototype explicitly says so). So make sure that g++ generates code that matches the calling convention expected by the EXE. Otherwise, the exported function might run, but the stack can get trashed on return. If you debug through a crash like this, there's a good chance the cdecl vs. stdcall convention was incorrectly specified. Easy to fix (see the sketch after this list).
3. C runtimes will not likely be shared between the EXE and DLL, so don't mix and match. A pointer allocated with new or malloc in the EXE should not be released with delete or free in the DLL (and vice versa). Likewise, FILE handles returned by fopen() can not be shared between EXE and DLL. You'll likely crash if any of this happens... which leads me to my next point....
4. C++ header files with inline code cause enough headaches and are the source of the issues I called out in #3. You'll be OK if the interface between DLL and EXE is a pure "C" interface.
5. Name mangling issues. If you run into issues where the exported function name doesn't match because of name mangling or because of a leading underscore, you can fix that up in a .DEF file. At least that's what I've done in the past with Visual Studio; I'm not sure if the equivalent exists in g++/MinGW. Example below. Learn to use "dumpbin.exe /exports" so you can validate that your DLL is exporting functions with the right names. Using extern "C" will also help fix this.
EXPORTS
FooBar=_Foobar@12
BlahBlah=??BlahBlah@@QAE@XZ @236 NONAME
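For point 2, a hedged sketch of making the calling convention and linkage explicit on the DLL side, so a g++ build matches what a __stdcall-expecting EXE looks for (FooBar is just the made-up name from the .DEF snippet above; three int arguments are what an @12 suffix would correspond to):

/* built into the plugin DLL */
extern "C" __declspec(dllexport) int __stdcall FooBar(int a, int b, int c)
{
    return a + b + c;
}

With extern "C" plus an explicit __stdcall or __cdecl, both sides at least agree on who cleans up the stack; any remaining name-decoration mismatch (e.g. _FooBar@12 vs FooBar) is what the .DEF alias above irons out.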
Those are the issues that I know of. I can't tell you much more since you didn't explain the interface between the DLL and EXE.
The size of a pointer won't vary; that is dependent on the platform and module bitness and not the compiler (32-bit vs 64-bit and so on).
What can vary is the size of basically everything else, and what will vary are templates.
Padding and alignment of structs tends to be compiler-dependent, and often settings-within-compiler dependent. There are some loose rules, like pointers typically being aligned on a platform-bitness boundary and bools having 3 bytes of padding after them, but it's up to the compiler how to handle that.
Templates, particularly from the STL (which is different for each compiler), may have different members, sizes, padding, and just about anything else. The only standard part is the API; the backend is left to the STL implementation (there are some rules, but compilers can still compile templates differently). Passing templates between modules from one build is bad enough, but between different compilers it can often be fatal.
Things which aren't standardized (name mangling) or are highly specific by necessity (memory allocation) will also be incompatible. You can get around both of those issues by only destroying from the library that creates (good practice anyway), by using STL objects that take a deleter for allocation, and by exporting methods using undecorated names and/or C style (extern "C").
I also seem to remember a catch with how the compilers handle virtual destructors in the vtable, with some small difference.
If you can manage to only pass references to your own objects, avoid externally visible templates entirely, and work primarily with pointers and exported or virtual methods, you can avoid the vast majority of the issues (COM does precisely this, for compatibility with most compilers and languages). It can be a pain to write, but if you need that compatibility, it is possible.
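A rough sketch of that COM-style approach, with made-up names: the only things crossing the DLL boundary are a pure-virtual interface and one extern "C" factory, and the object is destroyed on the side that created it:

/* shared header: no data members, no templates, no inline allocation */
struct IPlugin
{
    virtual int DoWork(int input) = 0;
    virtual void Destroy() = 0;      /* release on the side that allocated */
protected:
    ~IPlugin() {}                    /* callers go through Destroy(), not delete */
};

/* inside the DLL: the concrete class plus one extern "C" factory */
class PluginImpl : public IPlugin
{
public:
    virtual int DoWork(int input) { return input * 2; }
    virtual void Destroy() { delete this; }
};

extern "C" __declspec(dllexport) IPlugin *CreatePlugin(void)
{
    return new PluginImpl();
}

This relies on the two compilers laying out a simple single-inheritance vtable the same way, which on Windows is the de facto contract COM depends on; it is not guaranteed by the C++ standard.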
To alleviate some, but not all, of the issues, using an alternate to the STL (like Qt's core library) will remove that particular problem. While throwing Qt into any old project is a hideous waste and will cause more bloat than the "boost ALL THE THINGS!!!" philosophy, it can be useful for decoupling the library and the compiler to a greater extent than using a stock STL can.
You can't pass C runtime objects between them. For example you can not open a FILE buffer in one and pass it to be used in the other. You can't free memory allocated on the other side.
The main problems are the function signatures and the way parameters are passed to library code. I've had great difficulty getting VC++ DLLs to work with GNU-based compilers in the past. This was way back when VC++ always cost money and MinGW was the free solution.
My experience was with the DirectX APIs. Slowly a subset got its binaries modified by enthusiasts, but it was never as up to date or reliable, so after evaluating it I switched to a proper cross-platform API, which was SDL.
This Wikipedia article describes the different ways libraries can be compiled and linked. It is rather more in depth than I am able to summarise here.

Use a Qt library from a C application

Is there a way to write a Qt library such that I can then use it (statically linked is fine) in a C application?
My C code is huge, old and will not convert to C++ without an inordinate amount of work. I say this as other similar questions seem to answer "just make your C code a Qt app". That's not an option.
I hope I can write a Qt library and build it in a way that lets it be called from C (something alluded to in the QLibrary documentation):
The symbol must be exported as a C function from the library for resolve() to work. This means that the function must be wrapped in an extern "C" block if the library is compiled with a C++ compiler. On Windows, this also requires the use of a dllexport macro; see resolve() for the details of how this is done.
Can someone confirm/deny that I can do this, and let me know how much "qt" I can put in the library?
I don't need a GUI but would like to use some of the SQL handling.
Cheers
Mike
You can put as much Qt in a library as you wish, including full UI capability. The rub is that since you want to access it from C code, you must provide your own access functions and your C functionality will be constrained to whatever level of access you provide.
You can even pass Qt object pointers between C and C++ but you'll need to cast them into something that C can compile -- either void * or preferably your own new type definition (such as C_QString *). To C code these pointers will be opaque, but they'll still be valid.
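A minimal sketch of that pattern (only the names C_QString and c_qstring_* are invented; QString, QByteArray, fromUtf8, toUtf8 and constData are real Qt API):

/* qtwrap.h -- the C-visible interface */
#ifdef __cplusplus
extern "C" {
#endif

typedef struct C_QString C_QString;          /* opaque to C */
C_QString  *c_qstring_create(const char *utf8);
const char *c_qstring_utf8(C_QString *s);    /* valid until the next call or destroy */
void        c_qstring_destroy(C_QString *s);

#ifdef __cplusplus
}
#endif

/* qtwrap.cpp -- compiled by a C++ compiler, linked against QtCore */
#include <QString>
#include <QByteArray>
#include "qtwrap.h"

struct C_QString {
    QString    str;
    QByteArray utf8;      /* keeps the returned bytes alive on the C++ side */
};

extern "C" C_QString *c_qstring_create(const char *utf8)
{
    C_QString *s = new C_QString;
    s->str = QString::fromUtf8(utf8);
    return s;
}

extern "C" const char *c_qstring_utf8(C_QString *s)
{
    s->utf8 = s->str.toUtf8();
    return s->utf8.constData();
}

extern "C" void c_qstring_destroy(C_QString *s)
{
    delete s;
}

The same shape would work for wrappers around the SQL classes (QSqlDatabase, QSqlQuery): the C side only ever holds opaque pointers and calls the exported functions, so it needs no Qt headers at all.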