calling qsort with a pointer to a c++ function - c++

I have found a book that states that if you want to use a function from the C standard library that takes a function pointer as an argument (for example qsort), the function whose pointer you want to pass must be a C function and therefore declared as extern "C".
e.g.
extern "C" {
int foo(void const* a, void const* b) {...}
}
...
qsort(some_array, some_num, some_size, &foo);
I would not be surprised if this is just wrong information, however - I'm not sure, so: is this correct?

A lot depends on whether you're interested in a practical answer for the compiler you're using right now, or whether you care about a theoretical answer that covers all possible conforming implementations of C++. In theory it's necessary. In reality, you can usually get by without it.
The real question is whether your compiler uses a different calling convention for calling a global C++ function than when calling a C function. Most compilers use the same calling convention either way, so the call will work without the extern "C" declaration.
The standard doesn't guarantee that though, so in theory there could be a compiler that used different calling conventions for the two. At least offhand, I don't know of a compiler like that, but given the number of compilers around, it wouldn't surprise me terribly if there was one that I don't know about.
OTOH, it does raise another question: if you're using C++, why are you using qsort at all? In C++, std::sort is almost always preferable -- easier to use and usually faster as well.
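For comparison, here is a minimal sketch (not from the original answer) of sorting an int array with std::sort; the comparator is an ordinary C++ callable and no extern "C" is involved:
#include <algorithm>
#include <cstddef>

void sort_ints(int* some_array, std::size_t some_num)
{
    // std::sort accepts any C++ callable; a lambda like this can be inlined,
    // which is one reason it usually beats qsort on speed.
    std::sort(some_array, some_array + some_num,
              [](int a, int b) { return a < b; });
}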

This is incorrect information.
extern "C" is needed when you need to link a C++ library into a C binary; it allows the C linker to find the function names. This is not an issue with function pointers (as the function is not referenced by name in the C code).

Related

Can I change the inputs of a function that passes reference arguments into pointers and still have it work (C, C++)?

So I have this function prototype within a C++ header file I was given:
extern LIBRARY_API BOOL read(unsigned int in_one, unsigned int & value);
But I'm running it via a MATLAB mex file, so it may have to be in C, not C++. Since references are a C++-only thing, someone suggested changing the function prototype to take a pointer argument. Then I might have something like this:
extern LIBRARY_API BOOL read(unsigned int in_one, unsigned int * value);
And then in the mex file, I would make sure that I created unsigned int *value instead of unsigned int value and dereferenced it after running the read function.
However, I'm worried that (1) I may be referencing/dereferencing one thing too many (or not enough), and (2) I won't be able to do this because I can't change the actual source code and changing the prototype will just cause a mismatch between instantiation and definition.
So assuming that I can somehow change the definition to match the header, would my pointer function above be a valid substitution? And if I can't change the source code, is there a substitution that I could make that would be possible with C? Like a reference C-substitute that would still allow for the same definition?
So assuming that I can somehow change the definition to match the header, would my pointer function above be a valid substitution?
Yes.
And you would simply pass &myInt instead of myInt at the callsite.
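A minimal sketch of the changed callsite (hypothetical; BOOL is assumed here to be a plain int typedef):
typedef int BOOL;
BOOL read(unsigned int in_one, unsigned int* value);  // pointer, not reference

void caller()
{
    unsigned int value = 0;
    if (read(1u, &value))   // pass &value instead of value
    {
        // 'value' now holds whatever read() wrote through the pointer
    }
}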
And if I can't change the source code, is there a substitution that I could make that would be possible with C? Like a reference C-substitute that would still allow for the same definition?
No.
If MATLAB requires C calling convention and such, then you're stuck with C features (mostly).
For what it's worth, I believe MATLAB is fine with C++; the documentation refers to "C/C++" throughout (which, we must assume, they mean to be "C and also C++").
So, looking at one final quote from your question:
But I'm running it via a MATLAB mex file, so it may have to be in C, not C++. Since references are a C++ only thing, someone suggested changing
Stop. See whether it actually works as-is, first. Proceed from there.
What you want would probably work in practice, but is not really standard conforming (and depends upon the ABI & the calling conventions).
I would recommend instead to code a trivial stub function (which calls the original one), e.g.
/// assume that `read` is declared like in your question (first declaration)
extern "C" BOOL my_read(unsigned int in_one, unsigned int * value) {
return read(in_one, *value);
}
(and tell MATLAB about my_read, not read). An optimizing compiler is likely to compile that into a single jmp machine instruction (which runs really fast).

Why can't C functions be name-mangled?

I had an interview recently and one question asked was what is the use of extern "C" in C++ code. I replied that it is to use C functions in C++ code as C doesn't use name-mangling. I was asked why C doesn't use name-mangling and to be honest I couldn't answer.
I understand that when the C++ compiler compiles functions, it gives a special name to the function mainly because we can have overloaded functions of the same name in C++ which must be resolved at compile time. In C, the name of the function will stay the same, or maybe with an _ before it.
My query is: what's wrong with allowing the C++ compiler to mangle C functions also? I would have assumed that it doesn't matter what names the compiler gives to them. We call functions in the same way in C and C++.
It was sort of answered above, but I'll try to put things into context.
First, C came first. As such, what C does is, sort of, the "default". It does not mangle names because it just doesn't. A function name is a function name. A global is a global, and so on.
Then C++ came along. C++ wanted to be able to use the same linker as C, and to be able to link with code written in C. But C++ could not leave the C "mangling" (or lack thereof) as is. Check out the following example:
int function(int a);
int function();
In C++, these are distinct functions, with distinct bodies. If neither is mangled, both will be called "function" (or "_function"), and the linker will complain about the redefinition of a symbol. C++'s solution was to mangle the argument types into the function name. So, one is called _function_int and the other is called _function_void (not the actual mangling scheme) and the collision is avoided.
Now we're left with a problem. If int function(int a) was defined in a C module, and we're merely taking its header (i.e. declaration) in C++ code and using it, the compiler will generate an instruction to the linker to import _function_int. When the function was defined, in the C module, it was not called that. It was called _function. This will cause a linker error.
To avoid that error, during the declaration of the function, we tell the compiler it is a function designed to be linked with, or compiled by, a C compiler:
extern "C" int function(int a);
The C++ compiler now knows to import _function rather than _function_int, and all is well.
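In practice this is usually done with a small preprocessor guard in the shared header, so the same declaration works from both languages (a sketch, not part of the original answer):
/* c_library.h -- hypothetical header shared between C and C++ code */
#ifdef __cplusplus
extern "C" {
#endif

int function(int a);   /* defined in a C module, so its symbol is _function */

#ifdef __cplusplus
}
#endif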
It's not that they "can't" be mangled; in general, they just aren't.
If you want to call a function in a C library called foo(int x, const char *y), it's no good letting your C++ compiler mangle that into foo_I_cCP() (or whatever; I just made up a mangling scheme on the spot) just because it can.
That name won't resolve; the function is in C and its name does not depend on its list of argument types. So the C++ compiler has to know this, and mark that function as being C to avoid doing the mangling.
Remember that said C function might be in a library whose source code you don't have; all you have is the pre-compiled binary and the header. So your C++ compiler can't do "its own thing"; it can't change what's in the library, after all.
what's wrong with allowing the C++ compiler to mangle C functions also?
They wouldn't be C functions any more.
A function is not just a signature and a definition; how a function works is largely determined by factors like the calling convention. The "Application Binary Interface" specified for use on your platform describes how systems talk to each other. The C++ ABI in use by your system specifies a name mangling scheme, so that programs on that system know how to invoke functions in libraries and so forth. (Read the C++ Itanium ABI for a great example. You'll very quickly see why it's necessary.)
The same applies for the C ABI on your system. Some C ABIs do actually have a name mangling scheme (e.g. Visual Studio), so this is less about "turning off name mangling" and more about switching from the C++ ABI to the C ABI, for certain functions. We mark C functions as being C functions, to which the C ABI (rather than the C++ ABI) is pertinent. The declaration must match the definition (be it in the same project or in some third-party library), otherwise the declaration is pointless. Without that, your system simply won't know how to locate/invoke those functions.
As for why platforms don't define the C and C++ ABIs to be the same and get rid of this "problem", that's partially historical (the original C ABIs weren't sufficient for C++, which has namespaces, classes and operator overloading, all of which need to somehow be represented in a symbol's name in a computer-friendly manner), but one might also argue that making C programs now abide by the C++ ABI is unfair on the C community, which would have to put up with a massively more complicated ABI just for the sake of some other people who want interoperability.
MSVC in fact does mangle C names, although in a simple fashion. It sometimes appends @4 or another small number. This relates to calling conventions and the need for stack cleanup.
So the premise is just flawed.
It's very common to have programs which are partially written in C and partially written in some other language (often assembly language, but sometimes Pascal, FORTRAN, or something else). It's also common to have programs contain different components written by different people who may not have the source code for everything.
On most platforms, there is a specification, often called an ABI (Application Binary Interface), which describes what a compiler must do to produce a function with a particular name which accepts arguments of some particular types and returns a value of some particular type. In some cases, an ABI may define more than one "calling convention"; compilers for such systems often provide a means of indicating which calling convention should be used for a particular function. For example, on the Macintosh, most Toolbox routines use the Pascal calling convention, so the prototype for something like "LineTo" would be something like:
/* Note that there are no underscores before the "pascal" keyword because
the Toolbox was written in the early 1980s, before the Standard and its
underscore convention were published */
pascal void LineTo(short x, short y);
If all of the code in a project was compiled using the same compiler, it wouldn't matter what name the compiler exported for each function, but in many situations it will be necessary for C code to call functions that were compiled using other tools and cannot be recompiled with the present compiler [and may very well not even be in C]. Being able to define the linker name is thus critical to the use of such functions.
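As an illustration of defining the linker name directly, GCC and Clang accept an asm label on a declaration; this is a compiler-specific sketch, and the symbol name here is made up:
// GNU extension: bind this declaration to an exact linker symbol,
// regardless of the compiler's default decoration or mangling.
extern "C" int legacy_line_to(short x, short y) __asm__("LINETO");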
I'll add one other answer, to address some of the tangential discussions that took place.
The C ABI (application binary interface) originally called for passing arguments on the stack in reverse order (i.e. - pushed from right to left), where the caller also frees the stack storage. Modern ABIs actually use registers for passing arguments, but many of the mangling considerations go back to that original stack argument passing.
The original Pascal ABI, in contrast, pushed the arguments from left to right, and the callee had to pop the arguments. The original C ABI is superior to the original Pascal ABI in two important respects. The argument push order means that the stack offset of the first argument is always known, allowing functions that have an unknown number of arguments, where the early arguments control how many other arguments there are (a la printf).
The second way in which the C ABI is superior is the behavior in case the caller and callee do not agree on how many arguments there are. In the C case, so long as you don't actually access arguments past the last one, nothing bad happens. In Pascal, the wrong number of arguments is popped from the stack, and the entire stack is corrupted.
The original Windows 3.1 ABI was based on Pascal. As such, it used the Pascal ABI (arguments in left to right order, callee pops). Since any mismatch in argument number might lead to stack corruption, a mangling scheme was formed. Each function name was mangled with a number indicating the size, in bytes, of its arguments. So, on a 16-bit machine, the following function (C syntax):
int function(int a)
was mangled to function@2, because int is two bytes wide. This was done so that if the declaration and definition mismatch, the linker will fail to find the function rather than corrupt the stack at run time. Conversely, if the program links, then you can be sure the correct number of bytes is popped from the stack at the end of the call.
32-bit Windows and onward use the stdcall ABI instead. It is similar to the Pascal ABI, except that the push order is like in C, from right to left. Like the Pascal ABI, the name mangling encodes the arguments' byte size into the function name to avoid stack corruption.
Unlike claims made elsewhere here, the C ABI does not mangle the function names, even on Visual Studio. Conversely, mangling functions decorated with the stdcall ABI specification isn't unique to VS. GCC also supports this ABI, even when compiling for Linux. This is used extensively by Wine, which uses its own loader to allow run-time linking of Linux-compiled binaries to Windows-compiled DLLs.
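For illustration, a Windows-specific sketch of the declarations involved; the decorated names in the comments are approximate:
// On 32-bit Windows, a __stdcall function's decorated name carries the total
// argument size in bytes, while a __cdecl name gets only a leading underscore.
extern "C" int __stdcall add(int a, int b);   // decorated roughly as _add@8
extern "C" int __cdecl   sub(int a, int b);   // decorated roughly as _sub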
C++ compilers use name mangling in order to allow for unique symbol names for overloaded functions whose signature would otherwise be the same. It basically encodes the argument types into the symbol as well, which allows overloading to be resolved on a per-function level.
C does not require this since it does not allow for the overloading of functions.
Note that name mangling is one (but certainly not the only!) reason that one cannot rely on a 'C++ ABI'.
C++ wants to be able to interop with C code that links against it, or that it links against.
C expects non-name-mangled function names.
If C++ mangled it, it would not find the exported non-mangled functions from C, or C would not find the functions C++ exported. The C linker must get the name it itself expects, because it does not know it is coming from or going to C++.
Mangling the names of C functions and variables would allow their types to be checked at link time. Currently, all (?) C implementations allow you to define a variable in one file and call it as a function in another. Or you can declare a function with a wrong signature (e.g. void fopen(double)) and then call it.
I proposed a scheme for the type-safe linkage of C variables and functions through the use of mangling back in 1991. The scheme was never adopted, because, as others have noted here, this would destroy backward compatibility.

Is there any way to replace a function in a library?

I work with a library which defines its internal division operator for a scripting language. Unfortunately it does not zero-check the divisor, which leads to a lot of headaches. I know the signature of the operator.
double ScriptClass::Divide(double&, double&);
Sadly it isn't even a C function. Is there any way I could make my application use my own Divide function instead of ScriptClass::Divide function?
EDIT:
I was aware of dlopen(NULL,..) and replacing "C" functions with user defined ones. Can this be done for class member functions (Without resorting to using mangled names)?
Various linkers and dynamic linker implementations will provide something that looks like a solution to this, as others have mentioned.
However, if you redefine one C++ function using any of those features (GNU ld's --wrap, ld.so's LD_PRELOAD, etc.), you are violating the one-definition rule and are thus invoking undefined behaviour.
While compiling your library, the compiler is allowed to inline the function in question in any way that it sees fit, which means that your redefinition of the function might not be invoked in all cases.
Consider the following code:
#include <iostream>

class A
{
public:
void foo();
void bar();
};
void A::foo()
{
std::cout << "Old version.\n";
}
void A::bar()
{
foo();
}
GCC 4.5, when invoked with -O3, will actually decide to inline the definition of foo() into bar(). If you somehow made your linker replace this definition of A::foo() with a definition of your own, A::bar() would still output the string "Old version.\n".
So, in a word: don't.
Generally speaking it's up to the programmer, not the underlying divide operator to prevent division by zero. If you're dividing by zero a lot that seems to indicate a possible flaw in the algorithm being used. Consider reworking the algorithm, or if that's not an option, guard calls to divide with a zero check. You could even do that inside a protected_divide type function.
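A minimal sketch of such a guard (hypothetical names, not part of the library):
#include <stdexcept>

// Hypothetical wrapper: refuse to divide by zero instead of relying on the
// scripting library's unchecked operator.
double protected_divide(double numerator, double denominator)
{
    if (denominator == 0.0)
        throw std::invalid_argument("division by zero");
    return numerator / denominator;
}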
All that being said: since it looks like a C++ function, assume you have a C++ library compiled with all the same options you're using to build your application, so the name mangling matches. In that case you might be able to redefine the function in a .so and use LD_PRELOAD to force it to load. If you link statically, I think you can put your version of the function into your own .o file; linking it before the library itself will cause the linker to pick up your version.
LD_PRELOAD is your friend. As an example, see:
https://web.archive.org/web/20090130063728/http://ibm.com/developerworks/linux/library/l-glibc.html
There's no getting away from the mangled names, I don't think, but you can use ld's --wrap option to cause a particular function to be given a new name based on its old name. You can then write a new version of it, and forward to the old version too if you like.
Quick overview here:
http://linux.die.net/man/1/ld
I've used this in the past to hook into malloc (etc.) without having to recompile the runtime library, though this wasn't on Linux (it was an embedded thing with no runtime loading). I didn't use it to wrap C++ functions, but if you can handle the C++ calling convention somehow, and you can create a function with the original function's mangled name, and get the compiler to accept a call to a function that has some ugly name with funny chars in it... I don't see why it shouldn't be possible to make it work.
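For reference, here is a minimal sketch of the usual --wrap pattern for hooking malloc (the __wrap_/__real_ names follow ld's convention; the logging is just illustrative):
#include <cstddef>
#include <cstdio>

// Build with: g++ hooks.cpp main.cpp -Wl,--wrap=malloc
// The linker then routes every call to malloc() into __wrap_malloc(),
// and __real_malloc() resolves to the original implementation.
extern "C" void* __real_malloc(std::size_t size);

extern "C" void* __wrap_malloc(std::size_t size)
{
    void* p = __real_malloc(size);
    std::fprintf(stderr, "malloc(%zu) -> %p\n", size, p);
    return p;
}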
Just a short question: can't you just wrap the class with your own code?
It'll be some headache at the start, but after that you can simplify a lot of functions.
(Or even just wrap the function with a macro.)

How does this function definition work?

I generated a hash function with gperf a couple of days ago. What I saw for the hash function was alien to me. It was something like this (I don't remember the exact syntax):
unsigned int
hash(str, size)
register char* str;
register unsigned int size;
{
//Definition
}
Now, when I tried to compile with a C++ compiler (g++) it threw errors at me for not having str and size declared. But this compiled on the C compiler (gcc). So, questions:
I thought C++ was a superset of C. If that's so, this should compile with a C++ compiler as well, right?
How does the C compiler understand the definition? str and size are undeclared when they first appear.
What is the purpose of declaring str and size after function signature but before function body rather than following the normal approach of doing it in either of the two places?
How do I get this function to compile on g++ so I can use it in my C++ code? Or should I try generating C++ code from gperf? Is that possible?
1. C++ is not a superset, although this is not standard C either.
2/3. This is a K&R function declaration. See What are the major differences between ANSI C and K&R C?
4. gperf does in fact have an option, -L, to specify the language. You can just use -L C++ to use C++.
The Old C syntax for the declaration of a function's formal arguments is still supported by some compilers.
For example
int func (x)
int x;
{
}
is old style (K&R style) syntax for defining a function.
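For comparison, a sketch of the same function written in modern, prototype style (with a trivial body added so it compiles cleanly):
/* equivalent ANSI/ISO definition: the parameter type goes in the parameter list */
int func(int x)
{
    return x;
}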
I thought C++ was a superset of C. If its so, this should compile with a C++ compiler as well right?
Nope! C++ is not a superset of C. This style (syntax) of function declaration/definition was once a part of C but has never been a part of C++. So it shouldn't compile with a C++ compiler.
This appears to be "old-school" C code. Declaring the types of the parameters outside of the parentheses but before the opening curly brace of the code block is a relic of the early days of C programming (I'm not sure why, but I guess it has something to do with variable management on the stack and/or compiler design).
To answer your questions:
Calling C++ a "superset" of C is somewhat of a misnomer. While they share basic syntax features, and you can even make all sorts of C library calls from C++, they have striking differences with respect to type safety, warnings vs. errors (C is more permissive), and compiler/preprocessor options.
Most contemporary C compilers understand legacy code (such as this appears to be). The C compiler holds the function parameter names as a sort of placeholder until their types are declared immediately after the function header.
No real "purpose" other than again, this appears to be ancient code, and the style back in the day was like this. The "normal" approach is IMO the better, more intuitive way.
My suggestion:
unsigned int hash(register char *str, register unsigned int size)
{
// Definition
}
A word of advice: consider abandoning the register keyword. It was used in old C programs as a way of asking the compiler to keep a variable in a CPU register (for speed/efficiency), but nowadays compilers are better at making that decision themselves; I believe modern compilers largely ignore it. Also, in C you cannot apply the & (address-of) operator to a register variable (C++ does allow it).

Using Fortran to call C++ Functions

I'm trying to get some FORTRAN code to call a couple of C++ functions that I wrote (c_tabs_ being one of them). Linking and everything works just fine, as long as I'm calling functions that don't belong to a class.
My problem is that the functions I want the FORTRAN code to call belong to a class. I looked at the symbol table using nm and the function name is something ugly like this:
00000000 T _ZN9Interface7c_tabs_Ev
FORTRAN won't allow me to call a function by that name, because of the underscore at the beginning, so I'm at a loss.
The symbol for c_tabs when it's not in a class is quite simple, and FORTRAN has no problems with it:
00000030 T c_tabs_
Any suggestions? Thanks in advance.
The name has been mangled, which is what the C++ compiler does to functions to allow things like function overloading and type-safe linkage. Frankly, you are extremely unlikely to be able to call member functions from FORTRAN (because FORTRAN cannot create C++ class instances, among other reasons) - you should express your interface in terms of a C API, which will be callable from just about anywhere.
You will need to create a c-style interface and "extern" it. C++ mangles method names (and overloaded functions) for linking. It's notoriously difficult to link C++ with anything except C++. There are "ways" but I'd highly suggest that you simply export a C interface and use the standard facilities available in Fortran.
If you make the C++ routine have a C-style interface (as described already), then you can use the ISO C Binding feature of Fortran 2003 to call it. With the ISO C Binding, you can specify the name of the routine and (within limits) the C types and calling conventions (by reference or by value) of the arguments and function return. This method works well and has the advantage of being a standard, and therefore compiler- and platform-independent, unlike the old methods of calling C from Fortran. The ISO C Binding is supported by many Fortran 95 compilers, such as gfortran >= 4.3.
You have to create extern "C" wrappers to handle all the details of FORTRAN calling C++, name mangling being the most obvious.
#include <cstddef>   // for NULL

class foo {
public:
    int a_method (int x);
};
extern "C" int foo_a (foo * pfoo, int * px) {
if (NULL == pfoo)
return 0;
else
return pfoo->a_method (*px);
}
Notice that FORTRAN compilers pass all arguments by reference, never by value. (Although I'm told this is not strictly speaking part of the FORTRAN standard.)